# A-level Mathematics/MEI/FP2/Matrices

## Introduction

This section expands upon the work on matrices in FP1, covering 3x3 matrices, eigenvalues and eigenvectors, and the Cayley-Hamilton theorem.

## Determinant of a 3x3 Matrix

### How to find the determinant

The determinant of a 3x3 matrix $M = \begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{pmatrix}$ is given by:
$|M| = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = a_1 \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - a_2 \begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix} + a_3 \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix}$

To find the determinant of a 3x3 matrix, first choose a row or column to expand along. In the above example, the first column ($a$) was selected. The first entry in this column ($a_1$) is multiplied by its minor. The minor of $a_1$ is obtained by crossing off the row and column which contain $a_1$, in this case leaving the matrix $\begin{pmatrix} b_2 & c_2 \\ b_3 & c_3 \end{pmatrix}$; the minor is the determinant of this matrix. The minors of $a_2$ and $a_3$ are $\begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix}$ and $\begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix}$ respectively. The sign which precedes each term of the expansion is given by the corresponding entry of the matrix $\begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}$ (equivalently, the sign for the entry in row $i$, column $j$ is $(-1)^{i+j}$). A minor together with its corresponding sign is known as a cofactor. The cofactor of $a_1\,$ may be written as $A_1\,$.

So if you were to select the second column ($b\,$), the determinant would be found by:
$|M| = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = -b_1 \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} + b_2 \begin{vmatrix} a_1 & c_1 \\ a_3 & c_3 \end{vmatrix} - b_3 \begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix}$. This may be re-written as $b_1B_1 + b_2B_2 + b_3B_3\,$.
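The cofactor expansion down the first column can be sketched in code. This is an illustrative implementation (the matrix used to check it is arbitrary), using `numpy` purely for array handling:

```python
import numpy as np

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion down the first column."""
    def minor(M, i, j):
        # Cross off row i and column j, then take the 2x2 determinant.
        sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
        return sub[0, 0] * sub[1, 1] - sub[0, 1] * sub[1, 0]

    # Signs follow the first column of the sign matrix: +, -, +.
    return (M[0, 0] * minor(M, 0, 0)
            - M[1, 0] * minor(M, 1, 0)
            + M[2, 0] * minor(M, 2, 0))

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
print(det3(A))                  # -3
print(round(np.linalg.det(A)))  # -3, agreeing with numpy's built-in
```

Expanding along any other row or column (with the appropriate signs) gives the same value.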

### Example

Find the determinant of the matrix $M\,$, where $M = \begin{pmatrix} 5 & 7 & 2 \\ 1 & 9 & 4 \\ 2 & 6 & 3 \end{pmatrix}$:

Solution:- $|M| = \begin{vmatrix} 5 & 7 & 2 \\ 1 & 9 & 4 \\ 2 & 6 & 3 \end{vmatrix} =$ $5 \begin{vmatrix} 9 & 4 \\ 6 & 3 \end{vmatrix} - 1 \begin{vmatrix} 7 & 2 \\ 6 & 3 \end{vmatrix} + 2 \begin{vmatrix} 7 & 2 \\ 9 & 4 \end{vmatrix} =$
$5(9\times3 - 6\times4) - 1(7\times3 - 6\times2) + 2(7\times4 - 9\times2) =\,$
$5(27-24) - 1(21 - 12) + 2(28 - 18) = \,$
$15 - 9 + 20 =\,$
$\underline{26}$

Alternatively, the Sarrus method can be used to find the determinant of a 3x3 matrix. Note that the Sarrus method does not extend to larger matrices (e.g. 4x4).
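The Sarrus method sums the three "down-right" diagonal products and subtracts the three "up-right" diagonal products. A minimal sketch, checked against the worked example above:

```python
import numpy as np

def sarrus(M):
    """Sarrus' rule for a 3x3 determinant: down-right diagonal
    products minus up-right diagonal products. 3x3 only."""
    return (M[0, 0] * M[1, 1] * M[2, 2]
            + M[0, 1] * M[1, 2] * M[2, 0]
            + M[0, 2] * M[1, 0] * M[2, 1]
            - M[0, 2] * M[1, 1] * M[2, 0]
            - M[0, 0] * M[1, 2] * M[2, 1]
            - M[0, 1] * M[1, 0] * M[2, 2])

M = np.array([[5, 7, 2], [1, 9, 4], [2, 6, 3]])
print(sarrus(M))  # 26, matching the cofactor expansion
```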

## Inverse of a 3x3 Matrix

The inverse of a 3x3 matrix is found in a similar way to that of a 2x2 matrix. To recap, the inverse of the 2x2 matrix $M = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$ is found via:
$\mathbf{M}^{-1} = {1 \over \det\mathbf{M}}\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$

It can be shown (using the properties of determinants) that $\begin{pmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{pmatrix}\begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{pmatrix}=\begin{pmatrix} |\mathbf{M}| & 0 & 0 \\ 0 & |\mathbf{M}| & 0 \\ 0 & 0 & |\mathbf{M}|\end{pmatrix}$
So, dividing through by $|\mathbf{M}|$, ${1 \over |\mathbf{M}|}\begin{pmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{pmatrix}\begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3\end{pmatrix}=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}$
This shows that ${1 \over |\mathbf{M}|}\begin{pmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{pmatrix}$ is the inverse of $M\,$. The matrix $\begin{pmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{pmatrix}$ is known as the adjoint or adjugate matrix of M. It is found by replacing each element of the matrix with its corresponding co-factor and then transposing (swapping the rows for columns of) the matrix.
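The adjugate construction (cofactors, then transpose, then divide by the determinant) can be sketched as follows; `numpy` is used here as an assumed convenience for the 2x2 sub-determinants:

```python
import numpy as np

def adjugate(M):
    """Matrix of cofactors of a 3x3 matrix, transposed (the adjoint/adjugate)."""
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            # Minor: determinant after crossing off row i and column j;
            # the sign (-1)^(i+j) turns it into a cofactor.
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # transposing swaps rows for columns

M = np.array([[5, 7, 2], [1, 9, 4], [2, 6, 3]], dtype=float)
M_inv = adjugate(M) / np.linalg.det(M)
print(np.allclose(M @ M_inv, np.eye(3)))  # True
```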

## Eigenvectors and Eigenvalues

Consider a transformation achieved through a matrix - for the sake of argument, an enlargement of scale factor two, centred on the origin. Each point on a line through the origin maps to another point on that line, and the origin maps to itself. We can say that, if $\mathbf{M}$ is the matrix responsible for the transformation, and $\mathbf{s}$ the position vector of a point on the line (in column-vector form - i.e., that of a 2x1 matrix), then $\mathbf{Ms}=\lambda \mathbf{s}$; in the case of our example of scale factor two, $\lambda$ would be 2. Every position vector would be mapped to a multiple of itself; that multiple would be two.

We can generalise this further to situations beyond mere enlargements. Formally, we can say that if $\mathbf{s}$ is a non-zero vector such that $\mathbf{Ms} = \lambda \mathbf{s}$, where $\mathbf{M}$ is a matrix and $\lambda$ is a scalar, then $\mathbf{s}$ is called an eigenvector of $\mathbf{M}$. The scalar $\lambda$ is known as an eigenvalue.

Let us illustrate this with an example:

Since:

$\begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 10 \\ 5 \end{pmatrix} = 5 \begin{pmatrix} 2 \\ 1 \end{pmatrix}$

and:

$\begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} -2 \\ 2 \end{pmatrix} = 2 \begin{pmatrix} -1 \\ 1 \end{pmatrix}$

it must be that $\begin{pmatrix} 2 \\ 1 \end{pmatrix}$ and $\begin{pmatrix} -1 \\ 1 \end{pmatrix}$ are eigenvectors of the matrix $\mathbf{M} = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$, respectively corresponding to the eigenvalues 5 and 2. It will become evident shortly that these are the only two eigenvalues.

Notice that all non-zero scalar multiples of these two eigenvectors are also eigenvectors of $\mathbf{M}$, with the same respective eigenvalues. Notice also that, under the transformation, each eigenvector is enlarged by a scale factor equal to its eigenvalue, and that the direction of an eigenvector is unchanged by the transformation.
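These properties can be checked numerically. A short sketch using `numpy` (the library choice is an assumption; note that it normalises its eigenvectors to unit length, so they appear as scalar multiples of $(2, 1)$ and $(-1, 1)$):

```python
import numpy as np

M = np.array([[4, 2], [1, 3]])

eigenvalues, eigenvectors = np.linalg.eig(M)
print(np.sort(eigenvalues))  # [2. 5.]

# Each column of `eigenvectors` is an eigenvector; the defining
# property Ms = lambda*s holds for each eigenvalue/eigenvector pair.
for lam, s in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(M @ s, lam * s))  # True, True
```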

When finding eigenvectors, you need to be able to solve the equation $\mathbf{Ms}=\lambda \mathbf{s}$:

• $\mathbf{Ms}=\lambda \mathbf{s}$
• $\mathbf{Ms} - \lambda \mathbf{s} = \mathbf{0}$
• $\mathbf{Ms} - \lambda \mathbf{Is} = \mathbf{0}$ (Inserting the identity matrix changes nothing here...)
• $(\mathbf{M} - \lambda \mathbf{I})\mathbf{s} = \mathbf{0}$ (...but it is vital for this factorisation: $\mathbf{M} - \lambda$ alone would make no sense!)

Clearly, $\mathbf{s} = \mathbf{0}$ is always a solution, but it is the trivial one. For non-zero solutions, we require $\det(\mathbf{M}-\lambda\mathbf{I}) = 0$. This equation is called the characteristic equation, and the polynomial in $\lambda$ obtained by expanding the determinant is called the characteristic polynomial.

This leads to three steps to finding eigenvectors:

1. Form the characteristic equation $\det(\mathbf{M}-\lambda\mathbf{I}) = 0$
2. Solve the characteristic equation to find the eigenvalues, $\lambda$
3. For each eigenvalue, find a corresponding eigenvector by solving $(\mathbf{M}-\lambda\mathbf{I})\mathbf{s} = \mathbf{0}$.

This is true irrespective of the size of M.
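The three steps above can be sketched for the 2x2 example matrix. This is an illustrative `numpy` version (step 3 reads an eigenvector's direction off the first row of the singular matrix $\mathbf{M}-\lambda\mathbf{I}$, which works for this matrix but assumes that row is non-zero):

```python
import numpy as np

M = np.array([[4, 2], [1, 3]])

# Step 1: for a 2x2 matrix, det(M - lambda*I) expands to
# lambda^2 - (trace M)*lambda + det M.
coeffs = [1, -np.trace(M), round(np.linalg.det(M))]  # lambda^2 - 7*lambda + 10

# Step 2: solve the characteristic equation for the eigenvalues.
eigenvalues = np.roots(coeffs)
print(np.sort(eigenvalues))  # [2. 5.]

# Step 3: for each eigenvalue, solve (M - lambda*I)s = 0 for a non-zero s.
# M - lambda*I is singular, so its first row fixes the direction of s.
for lam in eigenvalues:
    A = M - lam * np.eye(2)
    s = np.array([-A[0, 1], A[0, 0]])
    print(np.allclose(M @ s, lam * s))  # True, True
```

The eigenvectors found this way are scalar multiples of $(2, 1)$ and $(-1, 1)$, as expected.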

If the eigenvectors of $\mathbf{M}$ are taken as the columns of a matrix $\mathbf{S}$, and the corresponding eigenvalues as the entries of a diagonal matrix $\mathbf{\Lambda}$, then $\mathbf{M} = \mathbf{S \Lambda S}^{-1}$. Powers of $\mathbf{M}$ then follow easily, since $\mathbf{\Lambda}^n$ is simply the diagonal matrix of the $\lambda^n$:
$\mathbf{M}^n = \mathbf{S \Lambda}^n \mathbf{S}^{-1}$
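This formula for powers of a matrix can be checked numerically for the 2x2 example, where $\mathbf{S}$ has the eigenvectors as its columns and the middle matrix is diagonal with the matching eigenvalues. A sketch (library choice is an assumption):

```python
import numpy as np

M = np.array([[4, 2], [1, 3]], dtype=float)

# Eigenvectors (2,1) and (-1,1) as columns; eigenvalues 5 and 2
# on the diagonal, in the matching order.
S = np.array([[2.0, -1.0], [1.0, 1.0]])
Lam = np.diag([5.0, 2.0])

# M itself is recovered from the diagonalisation...
print(np.allclose(S @ Lam @ np.linalg.inv(S), M))     # True

# ...and M^4 only needs the fourth powers of the eigenvalues.
M4 = S @ np.linalg.matrix_power(Lam, 4) @ np.linalg.inv(S)
print(np.allclose(M4, np.linalg.matrix_power(M, 4)))  # True
```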

## The Cayley-Hamilton Theorem

"Every square matrix M satisfies its own characteristic equation."