# Control Systems/Time Variant System Solutions

## General Time Variant Solution

The state-space equations can be solved for time-variant systems, but the solution is significantly more complicated than the time-invariant case. Our time-variant state equation is given as follows:

$x'(t) = A(t)x(t) + B(t)u(t)$

We can say that the general solution to the time-variant state equation is defined as:

[Time-Variant General Solution]

$x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^{t} \phi(t,\tau)B(\tau)u(\tau)d\tau$

Matrix Dimensions:
- A: p × p
- B: p × q
- C: r × p
- D: r × q

The function φ is called the state-transition matrix, because it (like the matrix exponential from the time-invariant case) governs the change of state in the state equation. However, unlike the time-invariant case, we cannot define it as a simple exponential. In fact, φ cannot be defined in general, because it will be a different function for every system. However, the state-transition matrix does satisfy some basic properties that we can use to determine it.

In a time-variant system, the general solution is obtained when the state-transition matrix is determined. For that reason, the first thing (and the most important thing) that we need to do here is find that matrix. We will discuss the solution to that matrix below.

### State Transition Matrix

Note:
The state transition matrix φ is a matrix function of two variables (we will say t and τ). Once the form of the matrix is determined, we will plug in the initial time t0 in place of the variable τ. Because of the nature of this matrix, and the properties that it must satisfy, this matrix is typically composed of exponential or sinusoidal functions. The exact form of the state-transition matrix is dependent on the system itself, and on the form of the system's differential equation. There is no single "template solution" for this matrix.

The state transition matrix φ is not completely unknown; it must always satisfy the following relationships:

$\frac{\partial \phi(t, t_0)}{\partial t} = A(t)\phi(t, t_0)$
$\phi(\tau, \tau) = I$

And φ also must have the following properties:

 1. $\phi(t_2, t_1)\phi(t_1, t_0) = \phi(t_2, t_0)$
 2. $\phi^{-1}(t, \tau) = \phi(\tau, t)$
 3. $\phi^{-1}(t, \tau)\phi(t, \tau) = I$
 4. $\left.\frac{d\phi(t, t_0)}{dt}\right|_{t = t_0} = A(t_0)$

If the system is time-invariant, we can define φ as:

$\phi(t, t_0) = e^{A(t - t_0)}$

The reader can verify that this solution for a time-invariant system satisfies all the properties listed above. However, in the time-variant case, there are many different functions that may satisfy these requirements, and the solution is dependent on the structure of the system. The state-transition matrix must be determined before analysis on the time-varying solution can continue. We will discuss some of the methods for determining this matrix below.
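These properties are easy to verify numerically in the time-invariant case. A minimal sketch, assuming NumPy and SciPy are available (the matrix A below is an arbitrary illustration, not taken from the text):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical constant system matrix (an arbitrary illustration).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def phi(t, t0):
    """State-transition matrix of the time-invariant case: e^{A(t - t0)}."""
    return expm(A * (t - t0))

t0, t1, t2 = 0.0, 0.5, 1.3

# phi(tau, tau) = I
assert np.allclose(phi(t1, t1), np.eye(2))
# Property 1: composition of transitions
assert np.allclose(phi(t2, t1) @ phi(t1, t0), phi(t2, t0))
# Property 2: the inverse swaps the arguments
assert np.allclose(np.linalg.inv(phi(t2, t0)), phi(t0, t2))
```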

## Time-Variant, Zero Input

As the most basic case, we will consider the case of a system with zero input. If the system has no input, then the state equation is given as:

$x'(t) = A(t)x(t)$

And we are interested in the response of this system in the time interval T = (a, b). The first thing we want to do in this case is find a fundamental matrix of the above equation. The fundamental matrix is closely related to the state-transition matrix, as we show below.

### Fundamental Matrix

Here, x is an n × 1 vector, and A is an n × n matrix.

Given the equation:

$x'(t) = A(t)x(t)$

The solutions to this equation form an n-dimensional vector space in the interval T = (a, b). Any set of n linearly-independent solutions {x1, x2, ..., xn} to the equation above is called a fundamental set of solutions.

Readers who have a background in Linear Algebra may recognize that the fundamental set is a basis set for the solution space. Any basis set that spans the entire solution space is a valid fundamental set.

A fundamental matrix is formed by arranging the n solutions of a fundamental set as the columns of a matrix. We will denote the fundamental matrix with a script capital X:

$\mathcal{X} = \begin{bmatrix}x_1 & x_2 & \cdots & x_n\end{bmatrix}$

The fundamental matrix will satisfy the state equation:

$\mathcal{X}'(t) = A(t)\mathcal{X}(t)$

Any matrix that solves this equation is a fundamental matrix if and only if its determinant is non-zero for all time t in the interval T. The determinant must be non-zero because we are going to use the inverse of the fundamental matrix to solve for the state-transition matrix.

### State Transition Matrix

Once we have the fundamental matrix of a system, we can use it to find the state transition matrix of the system:

$\phi(t, t_0) = \mathcal{X}(t)\mathcal{X}^{-1}(t_0)$

The inverse of the fundamental matrix exists, because we specify in the definition above that it must have a non-zero determinant, and therefore must be non-singular. The reader should note that this is only one possible method for determining the state transition matrix, and we will discuss other methods below.
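Note that the fundamental matrix is not unique (any fundamental matrix times a constant invertible matrix is another fundamental matrix), but the state-transition matrix computed from it is. A numerical sketch of this fact, assuming NumPy and SciPy; the matrices A and C below are arbitrary illustrations, and we use the time-invariant case, where X(t) = e^{At}C is known to be a fundamental matrix:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical time-invariant system matrix and an arbitrary invertible C.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 2.0],
              [0.5, 3.0]])

# X(t) = e^{At} C solves X' = A X and has non-zero determinant for all t.
def X(t):
    return expm(A * t) @ C

t, t0 = 1.2, 0.4

# phi from the fundamental matrix vs. the known time-invariant answer.
phi_from_X = X(t) @ np.linalg.inv(X(t0))
phi_direct = expm(A * (t - t0))

assert np.allclose(phi_from_X, phi_direct)
```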

### Example: 2-Dimensional System

Given the following fundamental matrix, find the state-transition matrix.

$\mathcal{X}(t) = \begin{bmatrix}e^{-t} & \frac{1}{2} e^{t} \\ 0 & e^{-t}\end{bmatrix}$

The first task is to find the inverse of the fundamental matrix. Because the fundamental matrix is a 2 × 2 matrix, its inverse can be computed with the standard formula:

$\mathcal{X}^{-1}(t) = \frac{1}{e^{-2t}}\begin{bmatrix}e^{-t} & -\frac{1}{2}e^t \\ 0 & e^{-t}\end{bmatrix} = \begin{bmatrix} e^{t} & -\frac{1}{2}\,e^{3t} \\ 0 & e^{t}\end{bmatrix}$

The state-transition matrix is given by:

$\phi(t, t_0) = \mathcal{X}(t)\mathcal{X}^{-1}(t_0) = \begin{bmatrix}e^{-t} & \frac{1}{2} e^{t} \\ 0 & e^{-t}\end{bmatrix} \begin{bmatrix} e^{t_0} & -\frac{1}{2}\,e^{3t_0}\\0 & e^{t_0}\end{bmatrix}$
$\phi(t, t_0) = \begin{bmatrix} e^{-t + t_0} & \frac{1}{2}(e^{t + t_0} - e^{-t + 3t_0}) \\ 0 & e^{-t+t_0}\end{bmatrix}$
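We can check this result symbolically: recover A(t) from the relation X'(t) = A(t)X(t), then confirm that φ satisfies its two defining relationships. A sketch assuming SymPy is available:

```python
import sympy as sp

t, t0 = sp.symbols('t t0')

# Fundamental matrix from the example.
X = sp.Matrix([[sp.exp(-t), sp.Rational(1, 2) * sp.exp(t)],
               [0, sp.exp(-t)]])

# Recover the (time-variant) system matrix from X' = A(t) X.
A = sp.simplify(X.diff(t) * X.inv())

# State-transition matrix phi(t, t0) = X(t) X^{-1}(t0).
phi = sp.simplify(X * X.inv().subs(t, t0))

# phi(t0, t0) = I
assert sp.simplify(phi.subs(t, t0) - sp.eye(2)) == sp.zeros(2, 2)
# d/dt phi(t, t0) = A(t) phi(t, t0)
assert sp.simplify(phi.diff(t) - A * phi) == sp.zeros(2, 2)
```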

### Other Methods

There are other methods for finding the state-transition matrix that do not require finding the fundamental matrix first.

**Method 1**

If A(t) is triangular (upper or lower triangular), the state-transition matrix can be determined by sequentially integrating the individual rows of the state equation.

**Method 2**

If for every τ and t, the state matrix commutes as follows:

$A(t)\left[\int_{\tau}^{t}A(\zeta)d\zeta\right]=\left[\int_{\tau}^{t}A(\zeta)d\zeta\right]A(t)$

then the state-transition matrix can be given as:

$\phi(t, \tau) = e^{\int_\tau^tA(\zeta)d\zeta}$

The state matrix will commute as described above if any of the following conditions are true:

 1. A is a constant matrix (time-invariant)
 2. A is a diagonal matrix
 3. $A = \bar{A}f(t)$, where $\bar{A}$ is a constant matrix, and f(t) is a single-valued function (not a matrix)

If none of the above conditions are true, then you must use Method 3.

**Method 3**

If A(t) can be decomposed as the following sum:

$A(t) = \sum_{i = 1}^n M_i f_i(t)$

where each Mi is a constant matrix such that MiMj = MjMi, and each fi is a single-valued function, then the state-transition matrix can be given as:

$\phi(t, \tau) = \prod_{i=1}^n e^{M_i \int_\tau^t f_i(\theta)d\theta}$

It will be left as an exercise for the reader to prove that if A(t) is time-invariant, that the equation in method 2 above will reduce to the state-transition matrix $e^{A(t-\tau)}$.

### Example: Using Method 3

Use method 3, above, to compute the state-transition matrix for the system if the system matrix A is given by:

$A = \begin{bmatrix}t & 1 \\ -1 & t\end{bmatrix}$

We can decompose this matrix as follows:

$A = \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}t + \begin{bmatrix} 0 & 1 \\ -1 & 0\end{bmatrix}$

Where f1(t) = t, and f2(t) = 1. Using the formula described above gives us:

$\phi(t, \tau) = e^{M_1\int_\tau^t \theta d\theta}e^{M_2 \int_\tau^t d\theta}$

Solving the two integrations gives us:

$\phi(t, \tau) = e^{\frac{1}{2}\begin{bmatrix}(t^2 - \tau^2) & 0 \\ 0 & (t^2-\tau^2)\end{bmatrix}}e^{\begin{bmatrix}0 & t-\tau \\ -t+\tau & 0\end{bmatrix}}$

The first term is a diagonal matrix, and the matrix exponential of a diagonal matrix is obtained by exponentiating each diagonal entry individually. The second term can be decomposed as:

$e^{\begin{bmatrix}0 & t-\tau \\ -t+\tau & 0\end{bmatrix}} = e^{\begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}(t-\tau)} = \begin{bmatrix}\cos(t-\tau) & \sin(t-\tau)\\ -\sin(t-\tau) & \cos(t-\tau)\end{bmatrix}$

The final solution is given as:

$\phi(t, \tau) =$$\begin{bmatrix}e^{\frac{1}{2}(t^2-\tau^2)} & 0 \\ 0 & e^{\frac{1}{2}(t^2-\tau^2)}\end{bmatrix}\begin{bmatrix}\cos(t-\tau) & \sin(t-\tau)\\ -\sin(t-\tau) & \cos(t-\tau)\end{bmatrix}$$= \begin{bmatrix}e^{\frac{1}{2}(t^2-\tau^2)}\cos(t-\tau) & e^{\frac{1}{2}(t^2-\tau^2)}\sin(t-\tau)\\ -e^{\frac{1}{2}(t^2-\tau^2)}\sin(t-\tau) & e^{\frac{1}{2}(t^2-\tau^2)}\cos(t-\tau)\end{bmatrix}$
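This result can be checked against the defining relationships of the state-transition matrix. A sketch assuming SymPy is available:

```python
import sympy as sp

t, tau = sp.symbols('t tau')

# System matrix from the example.
A = sp.Matrix([[t, 1], [-1, t]])

# Candidate state-transition matrix from the worked solution.
g = sp.exp(sp.Rational(1, 2) * (t**2 - tau**2))
phi = sp.Matrix([[ g * sp.cos(t - tau), g * sp.sin(t - tau)],
                 [-g * sp.sin(t - tau), g * sp.cos(t - tau)]])

# d/dt phi(t, tau) = A(t) phi(t, tau)
assert sp.simplify(phi.diff(t) - A * phi) == sp.zeros(2, 2)
# phi(tau, tau) = I
assert phi.subs(t, tau) == sp.eye(2)
```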

## Time-Variant, Non-zero Input

If the input to the system is not zero, it turns out that all the analysis that we performed above still holds. We can still construct the fundamental matrix, and we can still represent the system solution in terms of the state transition matrix φ.

We can show that the general solution to the state-space equations is:

$x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^{t} \phi(t,\tau)B(\tau)u(\tau)d\tau$
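As a sanity check on this formula, we can compare it against direct numerical integration of the state equation. A sketch assuming NumPy and SciPy; the scalar system x' = -tx + u with u(t) = 1 is a hypothetical example chosen because its state-transition matrix is known in closed form via Method 2:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# For the scalar system A(t) = -t, Method 2 applies and
# phi(t, tau) = exp(-(t^2 - tau^2)/2).
def phi(t, tau):
    return np.exp(-(t**2 - tau**2) / 2.0)

t0, x0, tf = 0.0, 1.0, 2.0

# General solution: x(t) = phi(t, t0) x0 + integral of phi(t, tau) u(tau) dtau
forced, _ = quad(lambda tau: phi(tf, tau) * 1.0, t0, tf)
x_formula = phi(tf, t0) * x0 + forced

# Direct numerical integration of x' = -t x + 1 for comparison.
sol = solve_ivp(lambda t, x: -t * x + 1.0, (t0, tf), [x0],
                rtol=1e-10, atol=1e-12)
x_numeric = sol.y[0, -1]

assert abs(x_formula - x_numeric) < 1e-6
```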