Control Systems/Matrix Operations

From Wikibooks, open books for an open world
For more about this subject, see:
Linear Algebra
Engineering Analysis

Laws of Matrix Algebra[edit]

Matrices must be compatible sizes in order for an operation to be valid:

Addition: Matrices must have the same dimensions (same number of rows, same number of columns). Matrix addition is commutative:
A + B = B + A
Multiplication: Matrices must have compatible inner dimensions (the number of columns of the first matrix must equal the number of rows of the second matrix). For instance, if matrix A is n × m, and matrix B is m × k, then we can multiply:
AB = C
Where C is an n × k matrix. Matrix multiplication is not commutative:
AB \ne BA
Because it is not commutative, the differentiation must be made between "multiplication on the left", and "multiplication on the right".
There is no such thing as division in matrix algebra, although multiplying by a matrix inverse performs a similar function. For a matrix to have an inverse, it must be nonsingular, that is, it must have a non-zero determinant.
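As a quick illustration of the rules above, here is a pure-Python sketch (not part of this book's MATLAB material) of the inner-dimension rule: an n × m matrix times an m × k matrix yields an n × k matrix, and swapping the order of the factors generally changes the product.

```python
# Minimal sketch of matrix multiplication using nested lists.
def matmul(A, B):
    n, m = len(A), len(A[0])
    m2, k = len(B), len(B[0])
    assert m == m2, "inner dimensions must match"
    return [[sum(A[i][p] * B[p][j] for p in range(m)) for j in range(k)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- AB != BA
```

The two products differ, which is why "multiplication on the left" and "multiplication on the right" must be distinguished.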

Transpose Matrix[edit]

The transpose of a matrix, denoted by:

X^T

is the matrix where the rows and columns of X are interchanged. In some instances, the transpose of a matrix is denoted by:

X'

This shorthand notation is used when the superscript T is applied to a large number of matrices in a single equation, and the notation would become too crowded otherwise. When this notation is used in this book, derivatives will be denoted explicitly with:

\frac{d}{dt}x(t)

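A small pure-Python sketch (not from the book) makes the definition concrete: transposing swaps rows and columns, so an n × m matrix becomes m × n.

```python
# Transpose by regrouping: column j of X becomes row j of the result.
def transpose(X):
    return [list(row) for row in zip(*X)]

X = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(X))  # [[1, 4], [2, 5], [3, 6]]
```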
Determinant[edit]

The determinant of a matrix is a scalar value. It is denoted similarly to the absolute value of a scalar:

|X|

A matrix has an inverse if the matrix is square, and if the determinant of the matrix is non-zero.
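As a hedged sketch (an assumed example, not the book's method), the determinant can be computed by cofactor (Laplace) expansion along the first row; this is fine for small matrices, though it costs O(n!) in general.

```python
# Determinant by recursive cofactor expansion along the first row.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -2
```

A zero result signals a singular matrix, which by the statement above has no inverse.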


Inverse[edit]

The inverse of a matrix A, which we will denote here by B, is any matrix that satisfies the following equation:

AB = BA = I

Matrices that have such a companion are known as "invertible" or "non-singular" matrices. Matrices that do not have an inverse satisfying this equation are called "singular" or "non-invertible".

An inverse can be computed in a number of different ways:

  1. Append the matrix A with the identity matrix of the same size. Use row reductions to make the left side of the augmented matrix an identity. The right side of the augmented matrix will then be the inverse:
    [A|I] \to [I|B]
  2. The inverse matrix is given by the adjoint matrix divided by the determinant. The adjoint matrix is the transpose of the cofactor matrix.
    A^{-1} = \frac{\operatorname{adj}(A)}{|A|}
  3. The inverse can be calculated from the Cayley-Hamilton Theorem.
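Method 1 above can be sketched in pure Python (an assumed illustration, not the book's code; it uses the standard-library Fraction type for exact arithmetic and assumes a square, nonsingular input): append the identity, row-reduce [A|I] to [I|B], and read off the inverse B.

```python
from fractions import Fraction

# Gauss-Jordan elimination on the augmented matrix [A | I].
def inverse(A):
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a usable pivot row and swap it into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row, then clear the column elsewhere.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half is now the inverse B.
    return [row[n:] for row in M]

print(inverse([[1, 2], [3, 4]]))  # [[-2, 1], [3/2, -1/2]]
```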


Eigenvalues[edit]

The eigenvalues of a matrix, denoted by the Greek letter lambda λ, are the solutions to the characteristic equation of the matrix:

|X - \lambda I| = 0

Eigenvalues only exist for square matrices. Non-square matrices do not have eigenvalues. If the matrix X is a real matrix, the eigenvalues will either be all real, or else there will be complex conjugate pairs.
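For the 2 × 2 case this can be worked out explicitly (a sketch, not from the book): the characteristic equation |X - λI| = 0 expands to λ² - trace(X)·λ + det(X) = 0, so the eigenvalues follow from the quadratic formula.

```python
import cmath

# Eigenvalues of a 2x2 matrix from the characteristic polynomial.
def eig2x2(X):
    (a, b), (c, d) = X
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)  # complex sqrt handles conjugate pairs
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2x2([[2, 0], [0, 3]]))   # 3 and 2, both real
print(eig2x2([[0, -1], [1, 0]]))  # the conjugate pair +j and -j
```

The second example shows the claim above: a real matrix either has real eigenvalues or complex conjugate pairs.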


Eigenvectors[edit]

The eigenvectors of a matrix are the nullspace solutions of the characteristic equation:

(X - \lambda_i I)v_i = 0

There is at least one distinct eigenvector for every distinct eigenvalue. Any nonzero scalar multiple of an eigenvector is itself an eigenvector. However, eigenvectors that are not linearly independent of the others are called "non-distinct" eigenvectors, and can be ignored.
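As a hedged sketch of solving the nullspace equation in the 2 × 2 case (an assumed shortcut, not the book's method): for an eigenvalue λ of X = [[a, b], [c, d]] with b ≠ 0, the vector (b, λ - a) lies in the nullspace of (X - λI), as direct substitution into both rows confirms.

```python
# One right eigenvector of a 2x2 matrix for a given eigenvalue lam.
def eigvec2x2(X, lam):
    (a, b), (c, d) = X
    assert b != 0, "this shortcut assumes the off-diagonal entry b is nonzero"
    # Row 1 of (X - lam*I) applied to (b, lam - a) gives (a - lam)*b + b*(lam - a) = 0.
    return (b, lam - a)

X = [[2, 1], [1, 2]]    # eigenvalues 3 and 1
print(eigvec2x2(X, 3))  # (1, 1)
print(eigvec2x2(X, 1))  # (1, -1)
```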


Left Eigenvectors[edit]

Left eigenvectors are the left-nullspace solutions (row vectors) of the characteristic equation:

w_i(A - \lambda_i I) = 0

These are also the rows of the inverse transformation matrix.

Generalized Eigenvectors[edit]

In the case of repeated eigenvalues, there may not be a complete set of n distinct eigenvectors (right or left eigenvectors) associated with those eigenvalues. Generalized eigenvectors can be generated as follows:

(A -\lambda I)v_{n+1} = v_n

Because generalized eigenvectors are formed in relation to another eigenvector or generalized eigenvector, they constitute an ordered set, and should not be used outside of this order.
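A small worked example (assumed for illustration, not from the book) of such a chain for a repeated eigenvalue: with A = [[2, 1], [0, 2]] and λ = 2, the ordinary eigenvector is v₁ = (1, 0), and v₂ = (0, 1) satisfies (A - λI)v₂ = v₁, making it a generalized eigenvector that must follow v₁ in the ordered set.

```python
# Apply a matrix to a vector.
def apply(M, v):
    return tuple(sum(m * x for m, x in zip(row, v)) for row in M)

A_minus = [[0, 1], [0, 0]]  # A - 2*I for A = [[2, 1], [0, 2]]
v1, v2 = (1, 0), (0, 1)
print(apply(A_minus, v2))   # (1, 0) == v1, so v2 is a generalized eigenvector
print(apply(A_minus, v1))   # (0, 0), so v1 is an ordinary eigenvector
```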

Transformation Matrix[edit]

The transformation matrix is the matrix of all the eigenvectors, or the ordered sets of generalized eigenvectors:

T = [v_1 v_2 \cdots v_n]

The inverse transformation matrix is the matrix of the left eigenvectors:

T^{-1} = \begin{bmatrix}w_1' \\ w_2' \\ \vdots \\ w_n'\end{bmatrix}

A matrix can be diagonalized by multiplying by the transformation matrix:

A = TDT^{-1}


T^{-1}AT = D
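The diagonalization relation can be checked numerically (an assumed example, not from the book): take T's columns to be eigenvectors of A and verify that T⁻¹AT lands on the diagonal eigenvalue matrix D.

```python
# Matrix product via nested lists.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Closed-form inverse of a 2x2 matrix.
def inv2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [1, 2]]   # eigenvalues 3 and 1
T = [[1, 1], [1, -1]]  # eigenvectors (1,1) and (1,-1) as columns
D = matmul(inv2x2(T), matmul(A, T))
print(D)  # [[3.0, 0.0], [0.0, 1.0]]
```

The eigenvalues appear on the diagonal of D, in the same order as the eigenvector columns of T.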

If the matrix has an incomplete set of eigenvectors, and therefore a set of generalized eigenvectors, the matrix cannot be diagonalized, but can be converted into Jordan canonical form:

T^{-1}AT = J


MATLAB[edit]

The MATLAB programming environment was specially designed for matrix algebra and manipulation. The following is a brief refresher on how to manipulate matrices in MATLAB:

To add two matrices together, use a plus sign ("+"):
C = A + B;
To multiply two matrices together use an asterisk ("*"):
C = A * B;
If your matrices are not the correct dimensions, MATLAB will issue an error.
To find the transpose of a matrix, use the apostrophe (" ' "):
C = A';
To find the determinant, use the det function:
d = det(A);
To find the inverse of a matrix, use the function inv:
C = inv(A);
Eigenvalues and Eigenvectors
To find the eigenvalues and eigenvectors of a matrix, use the eig command:
[V, D] = eig(A);
Where D is a square matrix with the eigenvalues of A in the diagonal entries, and V is the matrix whose columns are the corresponding eigenvectors. If the eigenvalues are not distinct, the eigenvectors will be repeated. MATLAB will not calculate the generalized eigenvectors.
Left Eigenvectors
To find the left eigenvectors, assuming there is a complete set of distinct right-eigenvectors, we can take the inverse of the eigenvector matrix:
[V, D] = eig(A);
C = inv(V);

The rows of C will be the left-eigenvectors of the matrix A.

For more information about MATLAB, see the wikibook MATLAB Programming.
