Control Systems/Eigenvalues and Eigenvectors


Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors cannot be calculated from time-variant matrices. If the system is time-variant, the methods described in this chapter will not produce valid results.

The eigenvalues and eigenvectors of the system matrix play a key role in determining the response of the system. It is important to note that only square matrices have eigenvalues and eigenvectors associated with them. Non-square matrices cannot be analyzed using the methods below.

The word "eigen" comes from German and means "own" as in "characteristic", so this chapter could also be called "Characteristic values and characteristic vectors". The terms "Eigenvalues" and "Eigenvectors" are most commonly used. Eigenvalues and Eigenvectors have a number of properties that make them valuable tools in analysis, and they also have a number of valuable relationships with the matrix from which they are derived. Computing the eigenvalues and the eigenvectors of the system matrix is one of the most important things that should be done when beginning to analyze a system matrix, second only to calculating the matrix exponential of the system matrix.

The eigenvalues and eigenvectors of the system determine the relationship between the individual system state variables (the members of the x vector), the response of the system to inputs, and the stability of the system. Also, the eigenvalues and eigenvectors can be used to calculate the matrix exponential of the system matrix through spectral decomposition. The remainder of this chapter will discuss eigenvalues, eigenvectors, and the ways that they affect their respective systems.

Characteristic Equation

The characteristic equation of the system matrix A is given as:

[Matrix Characteristic Equation]
Av = λv

Where λ is a scalar value called an eigenvalue, and v is the corresponding eigenvector. To solve for the eigenvalues of a matrix, we can set the following determinant equal to zero:

det(λI − A) = 0

To solve for the eigenvectors, we can then substitute each eigenvalue back into the characteristic equation and solve for v:

(λI − A)v = 0

Other values worth finding are the left eigenvectors of a system, defined as the row vectors w in the modified characteristic equation:

[Left-Eigenvector Equation]
wA = λw

For more information about eigenvalues, eigenvectors, and left eigenvectors, read the appropriate sections of a standard linear algebra reference.
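As a concrete illustration, MATLAB's eig function can return the eigenvalues along with both the right and left eigenvectors (the three-output form is available in recent MATLAB releases; the matrix below is an arbitrary example chosen for this chapter's sketches, not a value from the text):

% Eigenvalues, right eigenvectors, and left eigenvectors of an
% arbitrary example system matrix.
A = [0 1; -2 -3];
[V, D, W] = eig(A);   % A*V = V*D, and W'*A = D*W'
lambda = diag(D)      % eigenvalues of A: -1 and -2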

Diagonalization

Note:
The transition matrix T should not be confused with the sampling time of a discrete system. If needed, we will use subscripts to differentiate between the two.

If the matrix A has a complete set of distinct eigenvalues, the matrix can be diagonalized. A diagonal matrix is a matrix that has entries only on the main diagonal, with all other entries zero. We can define a transformation matrix, T, that satisfies the diagonalization transformation:

A = TDT⁻¹

Which in turn will satisfy the relationship:

e^(At) = Te^(Dt)T⁻¹

The right-hand side of the equation may look more complicated, but because D is a diagonal matrix here (not to be confused with the feed-forward matrix from the output equation), the calculations are much easier.

We can define the transition matrix, and the inverse transition matrix, in terms of the eigenvectors and the left eigenvectors:

T = [v₁ v₂ ⋯ vₙ]

T⁻¹ = [w₁; w₂; ⋯; wₙ]   (the left eigenvectors stacked as rows)

We will further discuss the concept of diagonalization later in this chapter.
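The following is a minimal numerical sketch of the diagonalization transformation, reusing the example matrix from above:

% Diagonalize the example matrix using its eigenvector matrix as T.
A = [0 1; -2 -3];
[T, D] = eig(A);     % columns of T are the right eigenvectors
D_check = T \ A * T  % equals D (diagonal) up to rounding error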

Exponential Matrix Decomposition

For more information about spectral decomposition, see:
Spectral Decomposition

A matrix exponential can be decomposed into a sum of the eigenvectors, eigenvalues, and left eigenvectors, as follows:

e^(At) = Σᵢ₌₁ⁿ e^(λᵢt) vᵢwᵢ'

Notice that this equation only holds in this form if the matrix A has a complete set of n distinct eigenvalues. Since wᵢ' is a row vector, and x(0) is a column vector of the initial system states, we can combine those two into a scalar coefficient αᵢ:

αᵢ = wᵢ' x(0)

Since the state-transition matrix determines how the system responds to an input, we can see that the system eigenvalues and eigenvectors are a key part of the system response. Let us plug this decomposition into the general solution to the state equation:

[State Equation Spectral Decomposition]
x(t) = Σᵢ₌₁ⁿ αᵢ e^(λᵢt) vᵢ + Σᵢ₌₁ⁿ ∫₀ᵗ e^(λᵢ(t−τ)) vᵢwᵢ' B u(τ) dτ

We will talk about this equation in the following sections.
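As a numerical check of the decomposition, consider the running example again. Note that the left eigenvectors returned by eig are not automatically scaled so that wᵢ'vᵢ = 1, so they must be rescaled before the sum reproduces the matrix exponential:

% Verify e^(At) = sum of e^(lambda_i*t) * v_i * w_i' numerically.
A = [0 1; -2 -3];
[V, D, W] = eig(A);
t = 0.5;
E = zeros(size(A));
for i = 1:size(A, 1)
    wi = W(:,i)' / (W(:,i)' * V(:,i));  % rescale so that wi * V(:,i) == 1
    E = E + exp(D(i,i)*t) * V(:,i) * wi;
end
norm(E - expm(A*t))  % should be near zero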

State Relationship

As we can see from the above equation, the individual elements of the state vector x(t) cannot take arbitrary values; instead, they are related as weighted sums of the system's right eigenvectors.

Decoupling

For readers familiar with linear algebra: to decouple a mode of the system, the corresponding left eigenvector of the matrix A must lie in the left null space of the matrix B.

If a system can be designed such that the following relationship holds true:

wᵢ'B = 0

then the system response from that particular eigenvalue will not be affected by the system input u, and we say that the system has been decoupled. Such a thing is difficult to do in practice.
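A quick numerical test of this condition is straightforward; the B matrix below is an arbitrary input matrix chosen for illustration:

% Test each mode for decoupling: a zero entry in W'*B would mean the
% corresponding eigenvalue is unaffected by the input.
A = [0 1; -2 -3];
B = [0; 1];
[~, ~, W] = eig(A);
coupling = W' * B  % no zero entries here, so no mode is decoupled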

Condition Number

Every matrix has an associated value called its condition number. The condition number tells a number of things about a matrix, and it is worth calculating. The condition number, k, is defined as:

[Condition Number]
k = σ_max / σ_min

Where σ_max and σ_min are the largest and smallest singular values of the matrix.

Systems with smaller condition numbers are better, for a number of reasons:

  1. Large condition numbers lead to a large transient response of the system
  2. Large condition numbers make the system eigenvalues more sensitive to changes in the system.

We will discuss the issue of eigenvalue sensitivity more in a later section.
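A sketch of this calculation for the running example, using the singular values of the system matrix:

% Condition number of the system matrix, computed as the ratio of
% its largest and smallest singular values.
A = [0 1; -2 -3];
s = svd(A);        % singular values, sorted largest first
k = s(1) / s(end)  % identical to cond(A)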

Stability

We will talk about stability at length in later chapters, but this is a good time to point out a simple fact concerning the eigenvalues of the system. Notice that if the eigenvalues of the system matrix A are positive, or (if they are complex) have positive real parts, the system state (and therefore the system output, scaled by the C matrix) will approach infinity as time t approaches infinity. In essence, if the eigenvalues are positive, the system will not satisfy the condition of BIBO stability, and will therefore be unstable.
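This eigenvalue test is easy to perform numerically; a minimal check on the running example:

% Stability check via eigenvalue real parts: the example system is
% stable because both eigenvalues have negative real parts.
A = [0 1; -2 -3];
is_stable = all(real(eig(A)) < 0)  % returns logical 1 (true)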

Another factor that is worth mentioning is that a manufactured system never exactly matches the system model, and there will always be inaccuracies in the specifications of the component parts used, within a certain tolerance. As such, the system matrix will be slightly different from the mathematical model of the system (although good systems will not be severely different), and therefore the eigenvalues and eigenvectors of the system will not be the same values as those derived from the model. These facts give rise to several results:

  1. Systems with high condition numbers may have eigenvalues that differ by a large amount from those derived from the mathematical model. This means that the system response of the physical system may be very different from the intended response of the model.
  2. Systems with high condition numbers may become unstable simply as a result of inaccuracies in the component parts used in the manufacturing process.

For those reasons, the system eigenvalues and the condition number of the system matrix are highly important variables to consider when analyzing and designing a system. We will discuss the topic of stability in more detail in later chapters.

Non-Unique Eigenvalues

The decomposition above only works if the matrix A has a full set of n distinct eigenvalues (and corresponding eigenvectors). If A does not have n distinct eigenvectors, then a set of generalized eigenvectors needs to be determined. The generalized eigenvectors will produce a similar matrix that is in Jordan canonical form, not the diagonal form we were using earlier.

Generalized Eigenvectors

Generalized eigenvectors can be generated using the following equation:

[Generalized Eigenvector Generating Equation]
(A − λI)vₙ₊₁ = vₙ

If d is the number of times that a given eigenvalue is repeated, and p is the number of unique eigenvectors derived from those eigenvalues, then there will be q = d − p generalized eigenvectors. Generalized eigenvectors are developed by plugging the regular eigenvectors into the equation above (as vₙ). Some regular eigenvectors might not produce any non-trivial generalized eigenvectors. Generalized eigenvectors may also be plugged into the equation above to produce additional generalized eigenvectors. It is important to note that the generalized eigenvectors form an ordered series, and they must be kept in order during analysis or the results will not be correct.
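As a small numerical sketch, the matrix below has the eigenvalue 2 repeated twice but only one regular eigenvector, so one generalized eigenvector must be produced from it:

% Generate a generalized eigenvector for a defective 2x2 matrix.
A = [2 1; 0 2];
lambda = 2;
v1 = [1; 0];                       % the single regular eigenvector
% Solve (A - lambda*I)*v2 = v1. The coefficient matrix is singular
% but the system is consistent, so take a particular solution.
v2 = pinv(A - lambda*eye(2)) * v1  % returns [0; 1]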

Example: One Repeated Set

We have a 5 × 5 matrix A with an eigenvalue λ₁ repeated three times and an eigenvalue λ₂ repeated twice. For λ₁, there is 1 distinct eigenvector a. For λ₂ there is 1 distinct eigenvector b. From a, we generate the generalized eigenvector c, and from c we can generate vector d. From the eigenvector b, we generate the generalized eigenvector e. In order, our eigenvectors are listed as:

[a c d b e]

Notice how c and d are listed in order after the eigenvector that they are generated from, a. Also, we could reorder this as:

[b e a c d]

because the generalized eigenvectors are listed in order after the regular eigenvector that they are generated from. Regular eigenvectors can be listed in any order.

Example: Two Repeated Sets

We have a 4 × 4 matrix A with an eigenvalue λ₁ repeated three times and a distinct eigenvalue λ₂. For λ₁ we have two eigenvectors, a and b. For λ₂ we have an eigenvector c.

We need to generate a fourth eigenvector, d. The only eigenvalue that needs another eigenvector is λ₁; however, there are already two eigenvectors associated with that eigenvalue, and only one of them will generate a non-trivial generalized eigenvector. To figure out which one works, we need to plug both vectors into the generating equation:

(A − λ₁I)d = a

(A − λ₁I)d = b

If a generates the correct vector d, we will order our eigenvectors as:

[a d b c]

but if b generates the correct vector, we can order it as:

[a b d c]

Jordan Canonical Form

For more information about Jordan Canonical Form, see:
Matrix Forms

If a matrix has a complete set of distinct eigenvectors, the transition matrix T can be defined as the matrix of those eigenvectors, and the resultant transformed matrix will be a diagonal matrix. However, if the eigenvectors are not unique, and there are a number of generalized eigenvectors associated with the matrix, the transition matrix T will consist of the ordered set of the regular eigenvectors and generalized eigenvectors. The regular eigenvectors that did not produce any generalized eigenvectors (if any) should be first in the order, followed by the eigenvectors that did produce generalized eigenvectors, and the generalized eigenvectors that they produced (in appropriate sequence).

Once the T matrix has been produced, the matrix can be transformed by it and its inverse:

J = T⁻¹AT

The J matrix will be a Jordan block matrix. The format of the Jordan block matrix will be as follows:

J = [ D   0   ⋯   0
      0   J₁  ⋯   0
      ⋮   ⋮   ⋱   ⋮
      0   0   ⋯   Jₙ ]

Where D is the diagonal block produced by the regular eigenvectors that are not associated with generalized eigenvectors (if any). The Jₙ blocks are standard Jordan blocks with a size corresponding to the number of eigenvectors/generalized eigenvectors in each sequence. In each Jₙ block, the eigenvalue associated with the regular eigenvector of the sequence is on the main diagonal, and there are 1's on the super-diagonal.
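For small exact matrices, the Symbolic Math Toolbox function jordan computes both the transition matrix and the Jordan form directly; a short sketch using the defective matrix from the example above:

% Jordan canonical form (requires the Symbolic Math Toolbox).
A = [2 1; 0 2];
[T, J] = jordan(A)  % T\A*T == J, a single 2-by-2 Jordan block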

System Response

Equivalence Transformations

If we have a non-singular n × n matrix P, we can define a transformed vector "x bar" as:

x̄ = Px

We can transform the entire state-space equation set as follows:

x̄'(t) = Āx̄(t) + B̄u(t)
y(t) = C̄x̄(t) + D̄u(t)

Where:

Ā = PAP⁻¹
B̄ = PB
C̄ = CP⁻¹
D̄ = D

We call the matrix P the equivalence transformation between the two sets of equations.

It is important to note that the eigenvalues of the matrix A (which are of primary importance to the system) do not change under the equivalence transformation. The eigenvectors of A and the eigenvectors of Ā are related by the matrix P.
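The invariance of the eigenvalues can be illustrated numerically with an arbitrarily chosen non-singular P:

% Eigenvalues are preserved under an equivalence transformation.
A = [0 1; -2 -3];
P = [1 1; 0 1];       % an arbitrary non-singular transformation
A_bar = P * A / P;    % P*A*inv(P)
[eig(A), eig(A_bar)]  % same eigenvalues (possibly reordered)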

Lyapunov Transformations

The transformation matrix P is called a Lyapunov Transformation if the following conditions hold:

  • P(t) is nonsingular.
  • P(t) and P'(t) are continuous.
  • P(t) and the inverse transformation matrix P⁻¹(t) are finite for all t.

If a system is time-variant, it can frequently be useful to use a Lyapunov transformation to convert the system to an equivalent system with a constant A matrix. This is not possible in general, but it is possible if the A(t) matrix is periodic.

System Diagonalization

If the A matrix is time-invariant, we can construct the matrix V from the eigenvectors of A. The V matrix can be used to transform the A matrix to a diagonal matrix. Our new system becomes:

x̄'(t) = V⁻¹AVx̄(t) + V⁻¹Bu(t)
y(t) = CVx̄(t) + Du(t)

Since our system matrix is now diagonal (or in Jordan canonical form), the calculation of the state-transition matrix is simplified:

e^(Λt) = diag(e^(λ₁t), e^(λ₂t), …, e^(λₙt))

Where Λ = V⁻¹AV is a diagonal matrix.
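A sketch of the simplified computation, checked against expm on the running example:

% With Lambda = inv(V)*A*V diagonal, the state-transition matrix is
% V * diag(exp(lambda_i * t)) * inv(V).
A = [0 1; -2 -3];
[V, L] = eig(A);
t = 0.7;
Phi = V * diag(exp(diag(L) * t)) / V;  % V * e^(Lambda*t) * inv(V)
norm(Phi - expm(A*t))                  % near zero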

MATLAB Transformations

The MATLAB function ss2ss can be used to apply an equivalence transformation to a system. If we have a set of matrices A, B, C and D, we can create equivalent matrices as follows:

[Ap, Bp, Cp, Dp] = ss2ss(A, B, C, D, p);

Where p is the equivalence transformation matrix.
