# Control Systems/Linear System Solutions

## State Equation Solutions

The solutions in this chapter are heavily rooted in the theory of Ordinary Differential Equations; readers should be familiar with that subject before reading this chapter.

The state equation is a first-order linear differential equation, or (more precisely) a system of linear differential equations. Because this is a first-order equation, we can use results from Ordinary Differential Equations to find a general solution to the equation in terms of the state-variable x. Once the state equation has been solved for x, that solution can be plugged into the output equation. The resulting equation will show the direct relationship between the system input and the system output, without the need to account explicitly for the internal state of the system. The sections in this chapter will discuss the solutions to the state-space equations, starting with the easiest case (Time-invariant, no input), and ending with the most difficult case (Time-variant systems).

## Solving for x(t) With Zero Input

Looking again at the state equation:

${\displaystyle x'(t)=Ax(t)+Bu(t)}$

We can see that this equation is a first-order differential equation, except that the variables are vectors and the coefficients are matrices. However, the rules of matrix calculus let us carry the scalar results over with only minor changes. We can ignore the input term (for now), and rewrite this equation in the following form:

${\displaystyle {\frac {dx(t)}{dt}}=Ax(t)}$

And we can separate out the variables as such:

${\displaystyle {\frac {dx(t)}{x(t)}}=Adt}$

Integrating both sides, and raising both sides to a power of e, we obtain the result:

${\displaystyle x(t)=e^{At+C}}$

Where C is a constant of integration. We can define ${\displaystyle D=e^{C}}$ to make the equation easier to work with, but we also know that D will then be the initial condition of the system. This becomes obvious if we substitute ${\displaystyle t=0}$ into the equation above. Allowing for an arbitrary initial time ${\displaystyle t_{0}}$, the final solution to this equation is given as:

${\displaystyle x(t)=e^{A(t-t_{0})}x(t_{0})}$

We call the matrix exponential ${\displaystyle e^{At}}$ the state-transition matrix, and calculating it, while difficult at times, is crucial to analyzing and manipulating systems. We will talk more about calculating the matrix exponential below.
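As a quick sanity check (a pure-Python sketch; the scalar system and numbers are illustrative, not from the text), we can verify that this solution satisfies the zero-input state equation in the scalar case:

```python
import math

# For the scalar zero-input system x' = a*x, the solution is
# x(t) = e^{a(t - t0)} * x(t0).  (Values below are arbitrary examples.)
a = -0.5           # scalar "system matrix"
x0, t0 = 2.0, 0.0  # initial condition x(t0)

def x(t):
    return math.exp(a * (t - t0)) * x0

# Verify x'(t) = a*x(t) with a central finite difference at t = 1.0
t, h = 1.0, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
print(abs(deriv - a * x(t)) < 1e-6)  # the solution satisfies the state equation
```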

## Solving for x(t) With Non-Zero Input

If, however, our input is non-zero (as is generally the case with any interesting system), our solution is a little bit more complicated. Notice that now that we have our input term in the equation, we will no longer be able to separate the variables and integrate both sides easily.

${\displaystyle x'(t)=Ax(t)+Bu(t)}$

We subtract to get the ${\displaystyle Ax(t)}$ term on the left side, and then we do something curious: we premultiply both sides by ${\displaystyle e^{-At}}$, the inverse of the state-transition matrix:

${\displaystyle e^{-At}x'(t)-e^{-At}Ax(t)=e^{-At}Bu(t)}$

The rationale for this last step may seem fuzzy at best, so we will illustrate the point with an example:

### Example

Take the derivative of the following with respect to time:

${\displaystyle e^{-At}x(t)}$

The product rule from differentiation reminds us that if we have two functions multiplied together:

${\displaystyle f(t)g(t)}$

and we differentiate with respect to t, then the result is:

${\displaystyle f(t)g'(t)+f'(t)g(t)}$

If we set our functions accordingly:

${\displaystyle f(t)=e^{-At}\qquad f'(t)=-Ae^{-At}}$
${\displaystyle g(t)=x(t)\qquad g'(t)=x'(t)}$

Then the output result is:

${\displaystyle e^{-At}x'(t)-e^{-At}Ax(t)}$

If we look at this result, it is the same as from our equation above.

Using the result from our example, we can condense the left side of our equation into a derivative:

${\displaystyle {\frac {d(e^{-At}x(t))}{dt}}=e^{-At}Bu(t)}$

Now we can integrate both sides from the initial time (${\displaystyle t_{0}}$) to the current time (${\displaystyle t}$), using a dummy variable of integration τ:

${\displaystyle e^{-At}x(t)-e^{-At_{0}}x(t_{0})=\int _{t_{0}}^{t}e^{-A\tau }Bu(\tau )d\tau }$

Finally, if we premultiply by ${\displaystyle e^{At}}$ and rearrange, we get our final result:

[General State Equation Solution]

${\displaystyle x(t)=e^{A(t-t_{0})}x(t_{0})+\int _{t_{0}}^{t}e^{A(t-\tau )}Bu(\tau )d\tau }$

If we plug this solution into the output equation, we get:

[General Output Equation Solution]

${\displaystyle y(t)=Ce^{A(t-t_{0})}x(t_{0})+C\int _{t_{0}}^{t}e^{A(t-\tau )}Bu(\tau )d\tau +Du(t)}$

This is the general Time-Invariant solution to the state space equations, with non-zero input. These equations are important results, and students who are interested in a further study of control systems would do well to memorize these equations.
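As a numerical sanity check of this solution (a scalar sketch with illustrative values, not from the text), we can compare the closed-form result against a direct Euler integration of the state equation. For constant input u, the convolution integral evaluates in closed form:

```python
import math

# For scalar x' = a*x + b*u with constant input u and t0 = 0, the general
# solution reduces to x(t) = e^{a t} x0 + (b*u/a) * (e^{a t} - 1).
a, b, u, x0 = -1.0, 2.0, 1.0, 0.5  # illustrative values

def closed_form(t):
    return math.exp(a * t) * x0 + (b * u / a) * (math.exp(a * t) - 1.0)

# Integrate the state equation directly with small Euler steps from t = 0 to 1
x, dt = x0, 1e-4
for _ in range(int(1.0 / dt)):
    x += (a * x + b * u) * dt

print(abs(x - closed_form(1.0)) < 1e-3)  # the two solutions agree
```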

## State-Transition Matrix


The state-transition matrix, ${\displaystyle e^{At}}$, is an important part of the general state-space solutions for the time-invariant cases listed above. Calculating this matrix exponential function is one of the very first things that should be done when analyzing a new system, and the result of that calculation will reveal important information about the system in question.

The matrix exponential can be calculated directly by using a Taylor-Series expansion:

${\displaystyle e^{At}=\sum _{n=0}^{\infty }{\frac {(At)^{n}}{n!}}}$
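As a sanity check on this series, here is a minimal pure-Python sketch (the matrix, time value, and truncation order are illustrative choices, not from the text) that sums the first terms for a 2×2 matrix:

```python
import math

def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=30):
    # Truncated Taylor series: sum of (At)^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]   # n = 0 term: identity
    power = [[1.0, 0.0], [0.0, 1.0]]    # running (At)^n
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    for n in range(1, terms):
        power = mat_mul(power, At)
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / math.factorial(n)
    return result

# For A = [[0, 1], [-1, 0]], e^{At} = [[cos t, sin t], [-sin t, cos t]]
E = expm_series([[0.0, 1.0], [-1.0, 0.0]], 0.7)
print(abs(E[0][0] - math.cos(0.7)) < 1e-9 and abs(E[0][1] - math.sin(0.7)) < 1e-9)
```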

Also, we can attempt to transform the matrix A into a diagonal matrix or, when that is not possible, a Jordan canonical matrix. The exponential of a diagonal matrix is computed by exponentiating each diagonal element individually. The exponential of a Jordan canonical matrix is slightly more complicated, but there is a useful pattern that can be exploited to find the solution quickly. Interested readers should read the relevant passages in Engineering Analysis.

The state-transition matrix, and matrix exponentials in general, are very important tools in control engineering.

### Diagonal Matrices

If a matrix is diagonal, the state-transition matrix can be calculated by raising e to the power of each diagonal entry multiplied by t. That is, if ${\displaystyle A=\operatorname {diag} (\lambda _{1},\ldots ,\lambda _{n})}$, then ${\displaystyle e^{At}=\operatorname {diag} (e^{\lambda _{1}t},\ldots ,e^{\lambda _{n}t})}$.
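As a quick illustration (pure Python; the matrix and time value are arbitrary examples, not from the text), the diagonal case can be checked against the Taylor-series definition:

```python
import math

# For a diagonal A, e^{At} is e raised to each diagonal entry times t.
A = [[2.0, 0.0], [0.0, -3.0]]  # illustrative diagonal matrix
t = 0.5

eAt = [[math.exp(A[0][0] * t), 0.0],
       [0.0, math.exp(A[1][1] * t)]]

# Cross-check the (1,1) entry against a truncated Taylor series of e^{at}
series = sum((A[0][0] * t) ** n / math.factorial(n) for n in range(30))
print(abs(eAt[0][0] - series) < 1e-12)
```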

### Jordan Canonical Form

If the A matrix is in the Jordan Canonical form, then the matrix exponential can be generated quickly using the following formula:

${\displaystyle e^{Jt}=e^{\lambda t}{\begin{bmatrix}1&t&{\frac {1}{2!}}t^{2}&\cdots &{\frac {1}{n!}}t^{n}\\0&1&t&\cdots &{\frac {1}{(n-1)!}}t^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &1\end{bmatrix}}}$

Where λ is the eigenvalue (the value on the diagonal) of the Jordan canonical matrix.
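As a sanity check on this formula (a pure-Python sketch; the eigenvalue, time, and truncation order are illustrative choices, not from the text), we can compare the 2×2 case against a truncated Taylor series:

```python
import math

# For a 2x2 Jordan block J = [[lam, 1], [0, lam]], the formula gives
# e^{Jt} = e^{lam*t} * [[1, t], [0, 1]].
lam, t = 2.0, 0.3  # illustrative eigenvalue and time
formula = [[math.exp(lam * t), math.exp(lam * t) * t],
           [0.0, math.exp(lam * t)]]

def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Truncated Taylor series of e^{Jt}
Jt = [[lam * t, t], [0.0, lam * t]]
series = [[1.0, 0.0], [0.0, 1.0]]
power = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 30):
    power = mat_mul(power, Jt)
    for i in range(2):
        for j in range(2):
            series[i][j] += power[i][j] / math.factorial(n)

print(all(abs(formula[i][j] - series[i][j]) < 1e-9
          for i in range(2) for j in range(2)))
```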

### Inverse Laplace Method

We can calculate the state-transition matrix (or any matrix exponential function) by taking the following inverse Laplace transform:

${\displaystyle e^{At}={\mathcal {L}}^{-1}[(sI-A)^{-1}]}$

If A is a high-order matrix, this inverse can be difficult to solve.
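For example (a short derivation the reader can verify), take ${\displaystyle A={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}$. Then:

${\displaystyle sI-A={\begin{bmatrix}s&-1\\1&s\end{bmatrix}},\qquad (sI-A)^{-1}={\frac {1}{s^{2}+1}}{\begin{bmatrix}s&1\\-1&s\end{bmatrix}}}$

Taking the inverse Laplace transform of each entry gives:

${\displaystyle e^{At}={\begin{bmatrix}\cos t&\sin t\\-\sin t&\cos t\end{bmatrix}}}$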


### Spectral Decomposition

If we know all the eigenvalues of A, we can create our transition matrix T and our inverse transition matrix ${\displaystyle T^{-1}}$. These matrices will be the matrices of the right and left eigenvectors, respectively. If we have both the left and the right eigenvectors, normalized so that ${\displaystyle w_{i}'v_{i}=1}$, we can calculate the state-transition matrix as:

[Spectral Decomposition]

${\displaystyle e^{At}=\sum _{i=1}^{n}e^{\lambda _{i}t}v_{i}w_{i}'}$

Note that ${\displaystyle w_{i}'}$ is the transpose of the ith left-eigenvector, not the derivative of it. We will discuss the concepts of "eigenvalues", "eigenvectors", and the technique of spectral decomposition in more detail in a later chapter.

### Cayley-Hamilton Theorem


The Cayley-Hamilton Theorem can also be used to find a solution for a matrix exponential. For every eigenvalue λ of the ${\displaystyle n\times n}$ system matrix A, the following scalar equation must hold:

${\displaystyle e^{\lambda t}=a_{0}+a_{1}\lambda t+a_{2}\lambda ^{2}t^{2}+\cdots +a_{n-1}\lambda ^{n-1}t^{n-1}}$

Once we solve this system of equations (one per eigenvalue) for the coefficients ${\displaystyle a_{i}}$, which are in general functions of t, we can then plug those coefficients into the following equation:

${\displaystyle e^{At}=a_{0}I+a_{1}At+a_{2}A^{2}t^{2}+\cdots +a_{n-1}A^{n-1}t^{n-1}}$
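As a short worked example, take ${\displaystyle A={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}$, which has eigenvalues ${\displaystyle \lambda =\pm i}$. Writing the scalar equation for each eigenvalue gives a system in the coefficients:

${\displaystyle e^{it}=a_{0}+a_{1}it,\qquad e^{-it}=a_{0}-a_{1}it}$

Solving, ${\displaystyle a_{0}=\cos t}$ and ${\displaystyle a_{1}t=\sin t}$, so:

${\displaystyle e^{At}=a_{0}I+a_{1}At=\cos t\,I+\sin t\,A={\begin{bmatrix}\cos t&\sin t\\-\sin t&\cos t\end{bmatrix}}}$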

### Example: Off-Diagonal Matrix

Given the following matrix A, find the state-transition matrix:

${\displaystyle A={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}$

We can find the eigenvalues of this matrix as λ = i, -i. If we plug these values into our eigenvector equation, we get:

${\displaystyle {\begin{bmatrix}i&-1\\1&i\end{bmatrix}}v_{1}=0}$
${\displaystyle {\begin{bmatrix}-i&-1\\1&-i\end{bmatrix}}v_{2}=0}$

And we can solve for our eigenvectors:

${\displaystyle v_{1}={\begin{bmatrix}1\\i\end{bmatrix}}}$
${\displaystyle v_{2}={\begin{bmatrix}1\\-i\end{bmatrix}}}$

With our eigenvectors, we can solve for our left-eigenvectors, which must be normalized so that ${\displaystyle w_{i}'v_{i}=1}$:

${\displaystyle w_{1}={\frac {1}{2}}{\begin{bmatrix}1\\-i\end{bmatrix}}}$
${\displaystyle w_{2}={\frac {1}{2}}{\begin{bmatrix}1\\i\end{bmatrix}}}$

Now, using spectral decomposition, we can construct the state-transition matrix:

${\displaystyle e^{At}={\frac {1}{2}}e^{it}{\begin{bmatrix}1\\i\end{bmatrix}}{\begin{bmatrix}1&-i\end{bmatrix}}+{\frac {1}{2}}e^{-it}{\begin{bmatrix}1\\-i\end{bmatrix}}{\begin{bmatrix}1&i\end{bmatrix}}}$

If we remember Euler's Identity, we can decompose the complex exponentials into sinusoids. Performing the vector multiplications, all the imaginary terms cancel out, and we are left with our result:

${\displaystyle e^{At}={\begin{bmatrix}\cos t&\sin t\\-\sin t&\cos t\end{bmatrix}}}$

The reader is encouraged to perform the multiplications, and attempt to derive this result.

### Example: Sympy Calculation

With the freely available Python library SymPy, we can very easily calculate the state-transition matrix automatically:

>>> from sympy import *
>>> t = symbols('t', positive=True)
>>> A = Matrix([[0,1],[-1,0]])
>>> exp(A*t).expand(complex=True)

⎡cos(t)   sin(t)⎤
⎢               ⎥
⎣-sin(t)  cos(t)⎦



### Example: MATLAB Calculation

Using the symbolic toolbox in MATLAB, we can write MATLAB code to automatically generate the state-transition matrix for a given input matrix A. Here is an example of MATLAB code that can perform this task:

function [phi] = statetrans(A)
t = sym('t');
phi = expm(A * t);
end


Use this MATLAB function to find the state-transition matrix for the following matrices (warning, calculation may take some time):

1. ${\displaystyle A_{1}={\begin{bmatrix}2&0\\0&2\end{bmatrix}}}$
2. ${\displaystyle A_{2}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}$
3. ${\displaystyle A_{3}={\begin{bmatrix}2&1\\0&2\end{bmatrix}}}$

Matrix 1 is a diagonal matrix, Matrix 2 has complex eigenvalues, and Matrix 3 is Jordan canonical form. These three matrices should be representative of some of the common forms of system matrices. The following code snippets are the input commands into MATLAB to produce these matrices, and the output results:

Matrix A1
>> A1 = [2 0 ; 0 2];
>> statetrans(A1)

ans =

[ exp(2*t),        0]
[        0, exp(2*t)]

Matrix A2
>> A2 = [0 1 ; -1 0];
>> statetrans(A2)

ans =

[  cos(t),  sin(t)]
[ -sin(t),  cos(t)]

Matrix A3
>> A3 = [2 1 ; 0 2];
>> statetrans(A3)

ans =

[   exp(2*t), t*exp(2*t)]
[          0,   exp(2*t)]


### Example: Multiple Methods in MATLAB

There are multiple methods in MATLAB to compute the state-transition matrix from a constant (time-invariant) matrix A. The following methods all rely on the Symbolic Toolbox to perform the equation manipulations. At the end of each code snippet, the variable eAt contains the state-transition matrix of matrix A.

Direct Method
t = sym('t');
eAt = expm(A * t);

Laplace Transform Method
s = sym('s');
n = size(A, 1);
in = inv(s*eye(n) - A);
eAt = ilaplace(in);

Spectral Decomposition
t = sym('t');
n = size(A, 1);
[V, D] = eig(A);   % columns of V are the right eigenvectors
W = inv(V);        % rows of W are the matching left eigenvectors
eAt = sym(zeros(n));
for i = 1:n
    eAt = eAt + exp(D(i,i)*t) * V(:,i) * W(i,:);
end


All three of these methods should produce the same answers. The student is encouraged to verify this.