Linear Algebra/Print version/Part 2




Chapter IV - Determinants

In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form  T\vec{x}=\vec{b} where T is a square matrix. We noted a distinction between two classes of T's. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular T is associated with a unique solution in any system, such as the homogeneous system \vec{b}=\vec{0}, then T is associated with a unique solution for every \vec{b}. We call such a matrix of coefficients "nonsingular". The other kind of T, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call "singular".

Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an  n \! \times \! n matrix  T is equivalent to each of these:

  1. a system  T\vec{x}=\vec{b} has a solution, and that solution is unique;
  2. Gauss-Jordan reduction of T yields an identity matrix;
  3. the rows of T form a linearly independent set;
  4. the columns of  T form a basis for  \mathbb{R}^n ;
  5. any map that  T represents is an isomorphism;
  6. an inverse matrix  T^{-1} exists.

So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say "matrix" in place of "square matrix".)

More precisely, we will develop infinitely many formulas, one for 1 \! \times \! 1 matrices, one for 2 \! \times \! 2 matrices, etc. Of course, these formulas are related — that is, we will develop a family of formulas, a scheme that describes the formula for each size.


Section I - Definition

For  1 \! \times \! 1 matrices, determining nonsingularity is trivial.

 \begin{pmatrix}
a
\end{pmatrix}  is nonsingular iff  a \neq 0

The 2 \! \times \! 2 formula came out in the course of developing the inverse.

 \begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}  is nonsingular iff  ad-bc \neq 0

The 3 \! \times \! 3 formula can be produced similarly (see Problem 9).

 \begin{pmatrix}
a  &b  &c  \\
d  &e  &f  \\
g  &h  &i
\end{pmatrix} is nonsingular iff  aei+bfg+cdh-hfa-idb-gec \neq 0

With these cases in mind, we posit a family of formulas, a, ad-bc, etc. For each n the formula gives rise to a determinant function \det\nolimits_{n \! \times \! n}:\mathcal{M}_{n \! \times \! n}\to \mathbb{R} such that an n \! \times \! n matrix T is nonsingular if and only if \det\nolimits_{n \! \times \! n}(T)\neq 0. (We usually omit the subscript because if  T is  n \! \times \! n then " \det(T) " could only mean " \det\nolimits_{n \! \times \! n}(T) ".)
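
As a quick check on the formulas above (not part of the text's development), the 1 \! \times \! 1, 2 \! \times \! 2, and 3 \! \times \! 3 expressions can be compared numerically against a library determinant routine. This sketch assumes Python with numpy available; the helper names det2 and det3 are ours.

# Minimal numeric check of the 2x2 and 3x3 formulas against numpy.linalg.det.
import numpy as np

def det2(m):
    (a, b), (c, d) = m
    return a*d - b*c

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

singular = [[4, 2], [2, 1]]                       # second row is twice the first
nonsingular = [[2, 1, -1], [4, 3, 0], [2, 1, 5]]
print(det2(singular), np.linalg.det(np.array(singular)))          # both zero
print(det3(nonsingular), np.linalg.det(np.array(nonsingular)))    # both 12, up to rounding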


1 - Exploration

This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.

The three cases above don't show an evident pattern to use for the general n \! \times \! n formula. We may spot that the 1 \! \times \! 1 term  a has one letter, that the 2 \! \times \! 2 terms ad and bc have two letters, and that the 3 \! \times \! 3 terms aei, etc., have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix, e.g., the letters in the cdh term


\begin{pmatrix}
  &   &c  \\
d &   &   \\
  &h  &
\end{pmatrix}

come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.

A good problem solving strategy is to see what properties a solution must have and then search for something with those properties. So we shall start by asking what properties we require of the formulas.

At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check whether the diagonal of the resulting echelon form matrix has any zeroes (that is, to check whether the product down the diagonal is zero). So, we may expect that the proof that a formula determines singularity will involve applying Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the determinant formula gives zero. This suggests our initial plan: we will look for a family of functions with the property of being unaffected by row operations and with the property that a determinant of an echelon form matrix is the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where T\rightarrow\cdots\rightarrow\hat{T} is the Gaussian reduction, the determinant of T equals the determinant of \hat{T} (because the determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if the matrix is singular". In the rest of this subsection we will test this plan on the 2 \! \times \! 2 and 3 \! \times \! 3 determinants that we know. We will end up modifying the "unaffected by row operations" part, but not by much.

The first step in checking the plan is to test whether the 2 \! \times \! 2 and 3 \! \times \! 3 formulas are unaffected by the row operation of pivoting: if


T \xrightarrow[]{k\rho_i+\rho_j} \hat{T}

then is  \det(\hat{T})=\det(T) ? This check of the 2 \! \times \! 2 determinant after the k\rho_1+\rho_2 operation


\det(
\begin{pmatrix}
a     &b       \\
ka+c  &kb+d    \\
\end{pmatrix}
)
= a(kb+d)-(ka+c)b = ad-bc

shows that it is indeed unchanged, and the other 2 \! \times \! 2 pivot k\rho_2+\rho_1 gives the same result. The 3 \! \times \! 3 pivot k\rho_3+\rho_2 leaves the determinant unchanged

\begin{array}{rl}
\det(
\begin{pmatrix}
a    &b    &c    \\
kg+d &kh+e &ki+f \\
g    &h    &i
\end{pmatrix}
)
&=\begin{array}{l}
a(kh+e)i+b(ki+f)g+c(kg+d)h \\
\ -h(ki+f)a-i(kg+d)b-g(kh+e)c  
\end{array}                                 \\
&=aei + bfg + cdh - hfa - idb - gec
\end{array}

as do the other 3 \! \times \! 3 pivot operations.

So there seems to be promise in the plan. Of course, perhaps the 4 \! \times \! 4 determinant formula is affected by pivoting. We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.
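
The 3 \! \times \! 3 pivot computation above can also be checked symbolically. This is only a sanity check, assuming the sympy library; it is not part of the text's argument.

# Symbolic check that adding k times row 3 to row 2 leaves the 3x3 formula unchanged.
from sympy import symbols, expand

a, b, c, d, e, f, g, h, i, k = symbols('a b c d e f g h i k')

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

original = [[a, b, c], [d, e, f], [g, h, i]]
pivoted  = [[a, b, c], [k*g + d, k*h + e, k*i + f], [g, h, i]]
print(expand(det3(pivoted) - det3(original)))    # prints 0: the determinant is unchanged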

The next step is to compare  \det(\hat{T}) with  \det(T) for the operation


T \xrightarrow[]{ {\rho}_i \leftrightarrow {\rho}_j } \hat{T}

of swapping two rows. The 2 \! \times \! 2 row swap \rho_1\leftrightarrow\rho_2


\det(
\begin{pmatrix}
c  &d \\
a  &b
\end{pmatrix}
)
= cb - ad

does not yield  ad-bc . This \rho_1\leftrightarrow\rho_3 swap inside of a 3 \! \times \! 3 matrix


\det(
\begin{pmatrix}
g  &h  &i \\
d  &e  &f \\
a  &b  &c
\end{pmatrix}
)
= gec + hfa + idb - bfg - cdh - aei

also does not give the same determinant as before the swap — again there is a sign change. Trying a different 3 \! \times \! 3 swap \rho_1\leftrightarrow\rho_2


\det(
\begin{pmatrix}
d  &e  &f \\
a  &b  &c \\
g  &h  &i
\end{pmatrix}
)
= dbi + ecg + fah - hcd - iae - gbf

also gives a change of sign.

Thus, row swaps appear to change the sign of a determinant. This modifies our plan, but does not wreck it. We intend to decide nonsingularity by considering only whether the determinant is zero, not by considering its sign. Therefore, instead of expecting determinants to be entirely unaffected by row operations, we will look for them to change sign on a swap.

To finish, we compare  \det(\hat{T}) to  \det(T) for the operation


T \xrightarrow[]{ k{\rho}_i } \hat{T}

of multiplying a row by a scalar k\neq 0. One of the 2 \! \times \! 2 cases is


\det(
\begin{pmatrix}
a   &b   \\
kc  &kd
\end{pmatrix}
)
= a(kd) - (kc)b
=k\cdot (ad-bc)

and the other case has the same result. Here is one 3 \! \times \! 3 case

\begin{array}{rl}
\det(
\begin{pmatrix}
a    &b    &c   \\
d    &e    &f   \\
kg   &kh   &ki
\end{pmatrix}
)
&= \begin{array}{l}
ae(ki) + bf(kg) + cd(kh)                \\
\quad -(kh)fa - (ki)db - (kg)ec  
\end{array}                                      \\
&= k\cdot(aei + bfg + cdh - hfa - idb - gec)
\end{array}

and the other two are similar. These lead us to suspect that multiplying a row by k multiplies the determinant by k. This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged and we are not focusing on the determinant's sign or magnitude.

In summary, to develop the scheme for the formulas to compute determinants, we look for determinant functions that remain unchanged under the pivoting operation, that change sign on a row swap, and that rescale on the rescaling of a row. In the next two subsections we will find that for each n such a function exists and is unique.
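
Here is a small symbolic check, again assuming sympy, that the familiar 2 \! \times \! 2 formula ad-bc already has the three properties just listed: it is unchanged by a pivot, it changes sign on a row swap, and it rescales when a row is rescaled.

# Check the three properties on the 2x2 formula ad - bc.
from sympy import symbols, expand

a, b, c, d, k = symbols('a b c d k')
det2 = lambda rows: rows[0][0]*rows[1][1] - rows[0][1]*rows[1][0]

T        = [[a, b], [c, d]]
pivoted  = [[a, b], [k*a + c, k*b + d]]   # k rho_1 + rho_2
swapped  = [[c, d], [a, b]]               # rho_1 <-> rho_2
rescaled = [[a, b], [k*c, k*d]]           # k rho_2

print(expand(det2(pivoted) - det2(T)))        # 0: unchanged by the pivot
print(expand(det2(swapped) + det2(T)))        # 0: the swap reverses the sign
print(expand(det2(rescaled) - k*det2(T)))     # 0: rescaling a row rescales the determinant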

For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance, in this equality


\det(
\begin{pmatrix}
3  &3  &9  \\
2  &1  &1  \\
5  &10 &-5
\end{pmatrix}
)
=3 \cdot \det(
\begin{pmatrix}
1  &1  &3  \\
2  &1  &1  \\
5  &10 &-5
\end{pmatrix}
)

the 3 isn't factored out of all three rows, only out of the top row. The determinant acts on each row independently of the other rows. When we want to use this property of determinants, we shall write the determinant as a function of the rows: " \det (\vec{\rho}_1,\vec{\rho}_2,\dots,\vec{\rho}_n) ", instead of as " \det(T) " or " \det(t_{1,1},\dots,t_{n,n}) ". The definition of the determinant that starts the next subsection is written in this way.
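
A numeric check of the displayed equality (assuming numpy) makes the point concrete: the factor of 3 comes out of the top row only.

import numpy as np

left  = np.array([[3, 3, 9], [2, 1, 1], [5, 10, -5]])
right = np.array([[1, 1, 3], [2, 1, 1], [5, 10, -5]])
print(np.linalg.det(left))       # 135, up to rounding
print(3 * np.linalg.det(right))  # also 135: only the top row contributes the factor of 3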

Exercises

This exercise is recommended for all readers.
Problem 1

Evaluate the determinant of each.

  1. 
\begin{pmatrix}
3    &1   \\
-1    &1
\end{pmatrix}
  2. 
\begin{pmatrix}
2    &0   &1  \\
3    &1   &1 \\
-1    &0   &1
\end{pmatrix}
  3. 
\begin{pmatrix}
4    &0   &1  \\
0    &0   &1 \\
1    &3   &-1
\end{pmatrix}
Problem 2

Evaluate the determinant of each.

  1.  \begin{pmatrix}
2  &0  \\
-1  &3
\end{pmatrix}
  2.  \begin{pmatrix}
2  &1  &1  \\
0  &5  &-2 \\
1  &-3 &4
\end{pmatrix}
  3.  \begin{pmatrix}
2  &3  &4  \\
5  &6  &7  \\
8  &9  &1
\end{pmatrix}
This exercise is recommended for all readers.
Problem 3

Verify that the determinant of an upper-triangular 3 \! \times \! 3 matrix is the product down the diagonal.


\det(
\begin{pmatrix}
a    &b   &c    \\
0    &e   &f    \\
0    &0   &i
\end{pmatrix}
)
=aei

Do lower-triangular matrices work the same way?

This exercise is recommended for all readers.
Problem 4

Use the determinant to decide if each is singular or nonsingular.

  1. 
\begin{pmatrix}
2    &1   \\
3    &1
\end{pmatrix}
  2. 
\begin{pmatrix}
0    &1   \\
1    &-1
\end{pmatrix}
  3. 
\begin{pmatrix}
4    &2   \\
2    &1
\end{pmatrix}
Problem 5

Singular or nonsingular? Use the determinant to decide.

  1. 
\begin{pmatrix}
2    &1   &1  \\
3    &2   &2 \\
0    &1   &4
\end{pmatrix}
  2. 
\begin{pmatrix}
1    &0   &1  \\
2    &1   &1 \\
4    &1   &3
\end{pmatrix}
  3. 
\begin{pmatrix}
2    &1   &0  \\
3    &-2  &0 \\
1    &0   &0
\end{pmatrix}
This exercise is recommended for all readers.
Problem 6

Each pair of matrices differ by one row operation. Use this operation to compare  \det(A) with  \det(B) .

  1.  A=\begin{pmatrix}
1  &2  \\
2  &3
\end{pmatrix}   B=\begin{pmatrix}
1  &2  \\
0  &-1
\end{pmatrix}
  2.  A=\begin{pmatrix}
3  &1  &0  \\
0  &0  &1  \\
0  &1  &2
\end{pmatrix}    B=\begin{pmatrix}
3  &1  &0  \\
0  &1  &2  \\
0  &0  &1
\end{pmatrix}
  3.  A=\begin{pmatrix}
1  &-1 &3  \\
2  &2  &-6 \\
1  &0  &4
\end{pmatrix}    B=\begin{pmatrix}
1  &-1 &3  \\
1  &1  &-3 \\
1  &0  &4
\end{pmatrix}
Problem 7

Show this.


\det(
\begin{pmatrix}
1    &1   &1    \\
a    &b   &c    \\
a^2  &b^2 &c^2
\end{pmatrix}
)
=(b-a)(c-a)(c-b)
This exercise is recommended for all readers.
Problem 8

Which real numbers  x make this matrix singular?


\begin{pmatrix}
12-x  &4  \\
8    &8-x
\end{pmatrix}
Problem 9

Do the Gaussian reduction to check the formula for 3 \! \times \! 3 matrices stated in the preamble to this section.

 \begin{pmatrix}
a  &b  &c  \\
d  &e  &f  \\
g  &h  &i
\end{pmatrix} is nonsingular iff  aei+bfg+cdh-hfa-idb-gec \neq 0

Problem 10

Show that the equation of a line in  \mathbb{R}^2 through  (x_1,y_1) and  (x_2,y_2) is expressed by this determinant.


\det(
\begin{pmatrix}
x   &y   &1  \\
x_1 &y_1 &1  \\
x_2 &y_2 &1
\end{pmatrix})=0 \qquad x_1\neq x_2
This exercise is recommended for all readers.
Problem 11

Many people know this mnemonic for the determinant of a  3 \! \times \! 3 matrix: first repeat the first two columns and then sum the products on the forward diagonals and subtract the products on the backward diagonals. That is, first write


\left(\begin{array}{ccc|cc}
h_{1,1} &h_{1,2} &h_{1,3} &h_{1,1} &h_{1,2} \\
h_{2,1} &h_{2,2} &h_{2,3} &h_{2,1} &h_{2,2} \\
h_{3,1} &h_{3,2} &h_{3,3} &h_{3,1} &h_{3,2}
\end{array}\right)

and then calculate this.


\begin{array}{l}
h_{1,1}h_{2,2}h_{3,3}+h_{1,2}h_{2,3}h_{3,1}+h_{1,3}h_{2,1}h_{3,2} \\
\quad-h_{3,1}h_{2,2}h_{1,3}-h_{3,2}h_{2,3}h_{1,1}
-h_{3,3}h_{2,1}h_{1,2}
\end{array}
  1. Check that this agrees with the formula given in the preamble to this section.
  2. Does it extend to other-sized determinants?
Problem 12

The cross product of the vectors


\vec{x}=\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
\qquad
\vec{y}=\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}

is the vector computed as this determinant.


\vec{x}\times\vec{y}=
\det(\begin{pmatrix}
\vec{e}_1  &\vec{e}_2  &\vec{e}_3  \\
x_1        &x_2        &x_3        \\
y_1        &y_2        &y_3
\end{pmatrix})

Note that the first row is composed of vectors, the vectors from the standard basis for \mathbb{R}^3. Show that the cross product of two vectors is perpendicular to each vector.

Problem 13

Prove that each statement holds for 2 \! \times \! 2 matrices.

  1. The determinant of a product is the product of the determinants \det(ST)=\det(S)\cdot\det(T).
  2. If  T is invertible then the determinant of the inverse is the inverse of the determinant  \det(T^{-1})=(\,\det(T)\,)^{-1} .

Matrices T and T^\prime are similar if there is a nonsingular matrix P such that T^\prime=PTP^{-1}. (This definition is in Chapter Five.) Show that similar  2 \! \times \! 2 matrices have the same determinant.

This exercise is recommended for all readers.
Problem 14

Prove that the area of this region in the plane

(Figure: Linalg parallelogram.png)

is equal to the value of this determinant.


\det(
\begin{pmatrix}
x_1  &x_2  \\
y_1  &y_2
\end{pmatrix})

Compare with this.


\det(
\begin{pmatrix}
x_2  &x_1  \\
y_2  &y_1
\end{pmatrix})
Problem 15

Prove that for  2 \! \times \! 2 matrices, the determinant of a matrix equals the determinant of its transpose. Does that also hold for  3 \! \times \! 3 matrices?

This exercise is recommended for all readers.
Problem 16

Is the determinant function linear — is  \det(x\cdot T+y\cdot S)=x\cdot \det(T)+y\cdot \det(S) ?

Problem 17

Show that if  A is  3 \! \times \! 3 then  \det(c\cdot A)=c^3\cdot \det(A) for any scalar  c .

Problem 18

Which real numbers  \theta make


\begin{pmatrix}
\cos\theta  &-\sin\theta  \\
\sin\theta  &\cos\theta
\end{pmatrix}

singular? Explain geometrically.

? Problem 19

If a third order determinant has elements  1 ,  2 , ...,  9 , what is the maximum value it may have? (Haggett & Saunders 1955)


2 - Properties of Determinants

As described above, we want a formula to determine whether an n \! \times \! n matrix is nonsingular. We will not begin by stating such a formula. Instead, we will begin by considering the function that such a formula calculates. We will define the function by its properties, then prove that the function with these properties exists and is unique and also describe formulas that compute this function. (Because we will show that the function exists and is unique, from the start we will say " \det(T) " instead of "if there is a determinant function then  \det(T) " and "the determinant" instead of "any determinant".)

Definition 2.1

An  n \! \times \! n determinant is a function  \det:\mathcal{M}_{n \! \times \! n}\to \mathbb{R} such that

  1. 
\det (\vec{\rho}_1,\dots,k\cdot\vec{\rho}_i 
+ \vec{\rho}_j,\dots,\vec{\rho}_n)
=\det (\vec{\rho}_1,\dots,\vec{\rho}_j,\dots,\vec{\rho}_n)
for  i\ne j
  2. 
\det (\vec{\rho}_1,\ldots,\vec{\rho}_j,
\dots,\vec{\rho}_i,\dots,\vec{\rho}_n)
= -\det (\vec{\rho}_1,\dots,\vec{\rho}_i,\dots,\vec{\rho}_j,
\dots,\vec{\rho}_n)
for  i\ne j
  3. 
\det (\vec{\rho}_1,\dots,k\vec{\rho}_i,\dots,\vec{\rho}_n)
= k\cdot \det (\vec{\rho}_1,\dots,\vec{\rho}_i,\dots,\vec{\rho}_n)
for  k\ne 0
  4. 
\det(I)=1
where  I is an identity matrix

(the \vec{\rho}\,'s are the rows of the matrix). We often write  \left|T\right| for  \det (T) .

Remark 2.2

Property (2) is redundant since


T\;\xrightarrow[]{\rho_i+\rho_j}
\;\xrightarrow[]{-\rho_j+\rho_i}
\;\xrightarrow[]{\rho_i+\rho_j}
\;\xrightarrow[]{-\rho_i}
\;\hat{T}

swaps rows  i and  j . It is listed only for convenience.

The first result shows that a function satisfying these conditions gives a criterion for nonsingularity. (Its last sentence is that, in the context of the first three conditions, (4) is equivalent to the condition that the determinant of an echelon form matrix is the product down the diagonal.)

Lemma 2.3

A matrix with two identical rows has a determinant of zero. A matrix with a zero row has a determinant of zero. A matrix is nonsingular if and only if its determinant is nonzero. The determinant of an echelon form matrix is the product down its diagonal.

Proof

To verify the first sentence, swap the two equal rows. The sign of the determinant changes, but the matrix is unchanged and so its determinant is unchanged. Thus the determinant is zero.

For the second sentence, we multiply the zero row by −1 and apply property (3). Multiplying a zero row by a nonzero constant leaves the matrix unchanged, so property (3) gives that \det(T) = -\det(T), which forces \det(T) = 0.

For the third sentence, where T \rightarrow\cdots\rightarrow\hat{T} is the Gauss-Jordan reduction, by the definition the determinant of T is zero if and only if the determinant of \hat{T} is zero (although they could differ in sign or magnitude). A nonsingular T Gauss-Jordan reduces to an identity matrix and so has a nonzero determinant. A singular T reduces to a \hat{T} with a zero row; by the second sentence of this lemma its determinant is zero.

Finally, for the fourth sentence, if an echelon form matrix is singular then it has a zero on its diagonal, that is, the product down its diagonal is zero. The third sentence says that if a matrix is singular then its determinant is zero. So if the echelon form matrix is singular then its determinant equals the product down its diagonal.

If an echelon form matrix is nonsingular then none of its diagonal entries is zero so we can use property (3) of the definition to factor them out (again, the vertical bars  \left|\cdots\right| indicate the determinant operation).


\begin{vmatrix}
t_{1,1}  &t_{1,2}  &     &t_{1,n}  \\
0        &t_{2,2}  &     &t_{2,n}  \\
&         &\ddots         \\
0        &         &     &t_{n,n}
\end{vmatrix}
=
t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot
\begin{vmatrix}
1        &t_{1,2}/t_{1,1}  &     &t_{1,n}/t_{1,1}  \\
0        &1                &     &t_{2,n}/t_{2,2}  \\
&                 &\ddots         \\
0        &                 &     &1
\end{vmatrix}

Next, the Jordan half of Gauss-Jordan elimination, using property (1) of the definition, leaves the identity matrix.


=
t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot
\begin{vmatrix}
1        &0                &     &0                \\
0        &1                &     &0                \\
&                 &\ddots         \\
0        &                 &     &1
\end{vmatrix}
=
t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot 1


Therefore, if an echelon form matrix is nonsingular then its determinant is the product down its diagonal.

That result gives us a way to compute the value of a determinant function on a matrix. Do Gaussian reduction, keeping track of any changes of sign caused by row swaps and any scalars that are factored out, and then finish by multiplying down the diagonal of the echelon form result. This procedure takes the same time as Gauss' method and so is sufficiently fast to be practical on the size matrices that we see in this book.
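
The procedure just described translates directly into a short routine. This is a minimal sketch, not the book's code; the name det_by_reduction and the use of Python floats are our own choices, and the pivot is simply the first nonzero entry at or below the diagonal.

# Reduce to echelon form, tracking sign changes from row swaps, then
# multiply down the diagonal.
def det_by_reduction(matrix):
    m = [row[:] for row in matrix]        # work on a copy
    n = len(m)
    sign = 1
    for col in range(n):
        # find a row at or below the diagonal with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return 0                      # the matrix is singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign                  # condition (2): a swap changes the sign
        for r in range(col + 1, n):       # condition (1): pivoting leaves the value unchanged
            factor = m[r][col] / m[col][col]
            m[r] = [m[r][k] - factor * m[col][k] for k in range(n)]
    product = sign
    for i in range(n):
        product *= m[i][i]
    return product

# The matrices of Example 2.4 and Example 2.5 below.
print(det_by_reduction([[2, 2, 6], [4, 4, 3], [0, -3, 5]]))                         # -54.0
print(det_by_reduction([[1, 0, 1, 3], [0, 1, 1, 4], [0, 0, 0, 5], [0, 1, 0, 1]]))   # 5.0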

Example 2.4

Doing  2 \! \times \! 2 determinants


\begin{vmatrix}
2  &4  \\
-1 &3
\end{vmatrix}
=
\begin{vmatrix}
2  &4  \\
0  &5
\end{vmatrix}
=10

with Gauss' method won't give a big savings because the 2 \! \times \! 2 determinant formula is so easy. However, a  3 \! \times \! 3 determinant is usually easier to calculate with Gauss' method than with the formula given earlier.


\begin{vmatrix}
2  &2  &6  \\
4  &4  &3  \\
0  &-3 &5
\end{vmatrix}
=
\begin{vmatrix}
2  &2  &6  \\
0  &0  &-9 \\
0  &-3 &5
\end{vmatrix}
=
-\begin{vmatrix}
2  &2  &6  \\
0  &-3 &5  \\
0  &0  &-9
\end{vmatrix}
=-54
Example 2.5

Determinants of matrices any bigger than 3 \! \times \! 3 are almost always most quickly done with this Gauss' method procedure.


\begin{vmatrix}
1  &0  &1  &3  \\
0  &1  &1  &4  \\
0  &0  &0  &5  \\
0  &1  &0  &1
\end{vmatrix}
=
\begin{vmatrix}
1  &0  &1  &3  \\
0  &1  &1  &4  \\
0  &0  &0  &5  \\
0  &0  &-1 &-3
\end{vmatrix}
=
-\begin{vmatrix}
1  &0  &1  &3  \\
0  &1  &1  &4  \\
0  &0  &-1 &-3 \\
0  &0  &0  &5
\end{vmatrix}
=-(-5)=5

The prior example illustrates an important point. Although we have not yet found a 4 \! \times \! 4 determinant formula, if one exists then we know what value it gives to the matrix — if there is a function with properties (1)-(4) then on the above matrix the function must return 5.

Lemma 2.6

For each n, if there is an n \! \times \! n determinant function then it is unique.

Proof

For any n \! \times \! n matrix we can perform Gauss' method on the matrix, keeping track of how the sign alternates on row swaps, and then multiply down the diagonal of the echelon form result. By the definition and the lemma, all n \! \times \! n determinant functions must return this value on this matrix. Thus all n \! \times \! n determinant functions are equal, that is, there is only one input argument/output value relationship satisfying the four conditions.

The "if there is an n \! \times \! n determinant function" emphasizes that, although we can use Gauss' method to compute the only value that a determinant function could possibly return, we haven't yet shown that such a determinant function exists for all n. In the rest of the section we will produce determinant functions.

Exercises

For these, assume that an n \! \times \! n determinant function exists for all n.

This exercise is recommended for all readers.
Problem 1

Use Gauss' method to find each determinant.

  1.  \begin{vmatrix}
3  &1  &2  \\
3  &1  &0  \\
0  &1  &4
\end{vmatrix}
  2.  \begin{vmatrix}
1  &0  &0  &1 \\
2  &1  &1  &0 \\
-1  &0  &1  &0 \\
1  &1  &1  &0
\end{vmatrix}
Problem 2
Use Gauss' method to find each.
  1.  \begin{vmatrix}
2  &-1  \\
-1 &-1
\end{vmatrix}
  2.  \begin{vmatrix}
1  &1  &0  \\
3  &0  &2  \\
5  &2  &2
\end{vmatrix}
Problem 3

For which values of  k does this system have a unique solution?


\begin{array}{*{4}{rc}r}
x  &  &  &+ &z  &-  &w  &=  &2  \\
&  &y &- &2z &   &   &=  &3  \\
x  &  &  &+ &kz &   &   &=  &4  \\
&  &  &  &z  &-  &w  &=  &2
\end{array}
This exercise is recommended for all readers.
Problem 4

Express each of these in terms of  \left|H\right| .

  1.  \begin{vmatrix}
h_{3,1}  &h_{3,2} &h_{3,3} \\
h_{2,1}  &h_{2,2} &h_{2,3} \\
h_{1,1}  &h_{1,2} &h_{1,3}
\end{vmatrix}
  2.  \begin{vmatrix}
-h_{1,1}   &-h_{1,2}  &-h_{1,3} \\
-2h_{2,1}  &-2h_{2,2} &-2h_{2,3} \\
-3h_{3,1}  &-3h_{3,2} &-3h_{3,3}
\end{vmatrix}
  3.  \begin{vmatrix}
h_{1,1}+h_{3,1}  &h_{1,2}+h_{3,2} &h_{1,3}+h_{3,3} \\
h_{2,1}          &h_{2,2}         &h_{2,3} \\
5h_{3,1}         &5h_{3,2}        &5h_{3,3}
\end{vmatrix}
This exercise is recommended for all readers.
Problem 5

Find the determinant of a diagonal matrix.

Problem 6

Describe the solution set of a homogeneous linear system if the determinant of the matrix of coefficients is nonzero.

This exercise is recommended for all readers.
Problem 7

Show that this determinant is zero.


\begin{vmatrix}
y+z  &x+z  &x+y  \\
x    &y    &z    \\
1    &1    &1
\end{vmatrix}
Problem 8
  1. Find the 1 \! \times \! 1, 2 \! \times \! 2, and 3 \! \times \! 3 matrices with i,j entry given by (-1)^{i+j}.
  2. Find the determinant of the square matrix with  i,j entry  (-1)^{i+j} .
Problem 9
  1. Find the 1 \! \times \! 1, 2 \! \times \! 2, and 3 \! \times \! 3 matrices with i,j entry given by i+j.
  2. Find the determinant of the square matrix with i,j entry i+j.
This exercise is recommended for all readers.
Problem 10

Show that determinant functions are not linear by giving a case where  \left|A+B\right|\neq\left|A\right|+\left|B\right| .

Problem 11

The second condition in the definition, that row swaps change the sign of a determinant, is somewhat annoying. It means we have to keep track of the number of swaps, to compute how the sign alternates. Can we get rid of it? Can we replace it with the condition that row swaps leave the determinant unchanged? (If so then we would need new 1 \! \times \! 1, 2 \! \times \! 2, and 3 \! \times \! 3 formulas, but that would be a minor matter.)

Problem 12

Prove that the determinant of any triangular matrix, upper or lower, is the product down its diagonal.

Problem 13

Refer to the definition of elementary matrices in the Mechanics of Matrix Multiplication subsection.

  1. What is the determinant of each kind of elementary matrix?
  2. Prove that if  E is any elementary matrix then  \left|ES\right|=\left|E\right|\left|S\right| for any appropriately sized  S .
  3. (This question doesn't involve determinants.) Prove that if  T is singular then a product  TS is also singular.
  4. Show that  \left|TS\right|=\left|T\right|\left|S\right| .
  5. Show that if  T is nonsingular then  \left|T^{-1}\right|=\left|T\right|^{-1} .
Problem 14

Prove that the determinant of a product is the product of the determinants  \left|TS\right|=\left|T\right|\,\left|S\right| in this way. Fix the  n \! \times \! n matrix  S and consider the function  d:\mathcal{M}_{n \! \times \! n}\to \mathbb{R} given by  T\mapsto \left|TS\right|/\left|S\right| .

  1. Check that  d satisfies property (1) in the definition of a determinant function.
  2. Check property (2).
  3. Check property (3).
  4. Check property (4).
  5. Conclude the determinant of a product is the product of the determinants.
Problem 15

A submatrix of a given matrix A is one that can be obtained by deleting some of the rows and columns of A. Thus, the first matrix here is a submatrix of the second.


\begin{pmatrix}
3  &1  \\
2  &5
\end{pmatrix}
\qquad
\begin{pmatrix}
3  &4  &1  \\
0  &9  &-2 \\
2  &-1 &5
\end{pmatrix}

Prove that for any square matrix, the rank of the matrix is r if and only if  r is the largest integer such that there is an  r \! \times \! r submatrix with a nonzero determinant.

This exercise is recommended for all readers.
Problem 16

Prove that a matrix with rational entries has a rational determinant.

? Problem 17

Find the element of likeness in (a) simplifying a fraction, (b) powdering the nose, (c) building new steps on the church, (d) keeping emeritus professors on campus, (e) putting  B ,  C ,  D in the determinant


\begin{vmatrix}
1   &a   &a^2  &a^3  \\
a^3 &1   &a    &a^2  \\
B   &a^3 &1    &a    \\
C   &D   &a^3  &1
\end{vmatrix}.

(Anning & Trigg 1953)


3 - The Permutation Expansion

The prior subsection defines a function to be a determinant if it satisfies four conditions and shows that there is at most one n \! \times \! n determinant function for each n. What is left is to show that for each n such a function exists.

How could such a function not exist? After all, we have done computations that start with a square matrix, follow the conditions, and end with a number.

The difficulty is that, as far as we know, the computation might not give a well-defined result. To illustrate this possibility, suppose that we were to change the second condition in the definition of determinant to be that the value of a determinant does not change on a row swap. By Remark 2.2 we know that this conflicts with the first and third conditions. Here is an instance of the conflict: here are two Gauss' method reductions of the same matrix, the first without any row swap


\begin{pmatrix}
1  &2  \\
3  &4
\end{pmatrix}
\xrightarrow[]{-3\rho_1+\rho_2}
\begin{pmatrix}
1  &2  \\
0  &-2
\end{pmatrix}

and the second with a swap.


\begin{pmatrix}
1  &2  \\
3  &4
\end{pmatrix}
\xrightarrow[]{\rho_1\leftrightarrow\rho_2}
\begin{pmatrix}
3  &4  \\
1  &2
\end{pmatrix}
\xrightarrow[]{-(1/3)\rho_1+\rho_2}
\begin{pmatrix}
3  &4  \\
0  &2/3
\end{pmatrix}

Following Definition 2.1 gives that both calculations yield the determinant -2 since in the second one we keep track of the fact that the row swap changes the sign of the result of multiplying down the diagonal. But if we follow the supposition and change the second condition then the two calculations yield different values, -2 and 2. That is, under the supposition the outcome would not be well-defined — no function exists that satisfies the changed second condition along with the other three.

Of course, observing that Definition 2.1 does the right thing in this one instance is not enough; what we will do in the rest of this section is to show that there is never a conflict. The natural way to try this would be to define the determinant function with: "The value of the function is the result of doing Gauss' method, keeping track of row swaps, and finishing by multiplying down the diagonal". (Since Gauss' method allows for some variation, such as a choice of which row to use when swapping, we would have to fix an explicit algorithm.) Then we would be done if we verified that this way of computing the determinant satisfies the four properties. For instance, if T and \hat{T} are related by a row swap then we would need to show that this algorithm returns determinants that are negatives of each other. However, how to verify this is not evident. So the development below will not proceed in this way. Instead, in this subsection we will define a different way to compute the value of a determinant, a formula, and we will use this way to prove that the conditions are satisfied.

The formula that we shall use is based on an insight gotten from property (3) of the definition of determinants. This property shows that determinants are not linear.

Example 3.1

For this matrix  \det(2A)\neq 2\cdot\det(A) .


A=\begin{pmatrix}
2  &1  \\
-1  &3
\end{pmatrix}

Instead, the scalar comes out of each of the two rows.


\begin{vmatrix}
4  &2  \\
-2  &6
\end{vmatrix}
=2\cdot\begin{vmatrix}
2  &1  \\
-2  &6
\end{vmatrix}
=4\cdot\begin{vmatrix}
2  &1  \\
-1  &3
\end{vmatrix}

Since scalars come out a row at a time, we might guess that determinants are linear a row at a time.

Definition 3.2

Let  V be a vector space. A map  f:V^n\to \mathbb{R} is multilinear if

  1. 
f(\vec{\rho}_1,\dots,\vec{v}+\vec{w},
\ldots,\vec{\rho}_n)
=f(\vec{\rho}_1,\dots,\vec{v},\dots,\vec{\rho}_n)
+f(\vec{\rho}_1,\dots,\vec{w},\dots,\vec{\rho}_n)
  2. 
f(\vec{\rho}_1,\dots,k\vec{v},\dots,\vec{\rho}_n)
=k\cdot f(\vec{\rho}_1,\dots,\vec{v},\dots,\vec{\rho}_n)

for  \vec{v}, \vec{w}\in V and  k\in\mathbb{R} .

Lemma 3.3

Determinants are multilinear.

Proof

The definition of determinants gives property (2) (Lemma 2.3 following that definition covers the k=0 case) so we need only check property (1).


\det(\vec{\rho}_1,\dots,\vec{v}+\vec{w},
\dots,\vec{\rho}_n)
=\det(\vec{\rho}_1,\dots,\vec{v},\dots,\vec{\rho}_n)
+\det(\vec{\rho}_1,\dots,\vec{w},\dots,\vec{\rho}_n)

If the set  \{\vec{\rho}_1,\dots,\vec{\rho}_{i-1},\vec{\rho}_{i+1},
\dots,\vec{\rho}_n\} is linearly dependent then all three matrices are singular and so all three determinants are zero and the equality is trivial. Therefore assume that the set is linearly independent. This set of n-wide row vectors has n-1 members, so we can make a basis by adding one more vector \langle \vec{\rho}_1,\dots,\vec{\rho}_{i-1},\vec{\beta},
\vec{\rho}_{i+1},\dots,\vec{\rho}_n \rangle . Express \vec{v} and \vec{w} with respect to this basis

\begin{array}{rl}
\vec{v} &=v_1\vec{\rho}_1+\dots+v_{i-1}\vec{\rho}_{i-1}+v_i\vec{\beta}
+v_{i+1}\vec{\rho}_{i+1}+\dots+v_n\vec{\rho}_n                \\
\vec{w} &= w_1\vec{\rho}_1+\dots+w_{i-1}\vec{\rho}_{i-1}+w_i\vec{\beta}
+w_{i+1}\vec{\rho}_{i+1}+\dots+w_n\vec{\rho}_n
\end{array}

giving this.


\vec{v}+\vec{w}
=
(v_1+w_1)\vec{\rho}_1+\dots+(v_i+w_i)\vec{\beta}
+\dots+(v_n+w_n)\vec{\rho}_n

By the definition of determinant, the value of \det(\vec{\rho}_1,\dots,\vec{v}+\vec{w},\dots,\vec{\rho}_n) is unchanged by the pivot operation of adding -(v_1+w_1)\vec{\rho}_1 to \vec{v}+\vec{w}.


\vec{v}+\vec{w}-(v_1+w_1)\vec{\rho}_1
=
(v_2+w_2)\vec{\rho}_2+\cdots+(v_i+w_i)\vec{\beta}
+\dots+(v_n+w_n)\vec{\rho}_n

Then, to the result, we can add -(v_2+w_2)\vec{\rho}_2, etc. Thus


\det (\vec{\rho}_1,\dots,\vec{v}+\vec{w},\dots,\vec{\rho}_n)

\begin{align}
&=\det (\vec{\rho}_1,\dots,(v_i+w_i)\cdot\vec{\beta},\dots,\vec{\rho}_n) \\
&=(v_i+w_i)\cdot\det (\vec{\rho}_1,\dots,\vec{\beta},\dots,\vec{\rho}_n) \\
&=v_i\cdot \det (\vec{\rho}_1,\dots,\vec{\beta},\dots,\vec{\rho}_n)  
+w_i\cdot \det (\vec{\rho}_1,\dots,\vec{\beta},\dots,\vec{\rho}_n)
\end{align}

(using (2) for the second equality). To finish, bring v_i and w_i back inside in front of \vec{\beta} and use pivoting again, this time to reconstruct the expressions of \vec{v} and \vec{w} in terms of the basis, e.g., start with the pivot operations of adding v_1\vec{\rho}_1 to v_i\vec{\beta} and w_1\vec{\rho}_1 to w_i\vec{\beta}, etc.

Multilinearity allows us to expand a determinant into a sum of determinants, each of which involves a simple matrix.

Example 3.4

We can use multilinearity to split this determinant into two, first breaking up the first row


\begin{vmatrix}
2  &1  \\
4  &3
\end{vmatrix}
=
\begin{vmatrix}
2  &0  \\
4  &3
\end{vmatrix}
+
\begin{vmatrix}
0  &1  \\
4  &3
\end{vmatrix}

and then separating each of those two, breaking along the second rows.


=\begin{vmatrix}
2  &0  \\
4  &0
\end{vmatrix}
+
\begin{vmatrix}
2  &0  \\
0  &3
\end{vmatrix}
+
\begin{vmatrix}
0  &1  \\
4  &0
\end{vmatrix}
+
\begin{vmatrix}
0  &1  \\
0  &3
\end{vmatrix}

We are left with four determinants, such that in each row of each matrix there is a single entry from the original matrix.

Example 3.5

In the same way, a  3 \! \times \! 3 determinant separates into a sum of many simpler determinants. We start by splitting along the first row, producing three determinants (the zero in the 1,3 position is underlined to set it off visually from the zeroes that appear in the splitting).


\begin{vmatrix}
2              &1  &-1  \\
4              &3  &\underline{0}  \\
2              &1  &5
\end{vmatrix}
=
\begin{vmatrix}
2              &0  &0   \\
4              &3  &\underline{0}  \\
2              &1  &5
\end{vmatrix}
+
\begin{vmatrix}
0              &1  &0   \\
4              &3  &\underline{0}   \\
2              &1  &5
\end{vmatrix}
+
\begin{vmatrix}
0              &0  &-1  \\
4              &3  &\underline{0}  \\
2  &1  &5
\end{vmatrix}

Each of these three will itself split in three along the second row. Each of the resulting nine splits in three along the third row, resulting in twenty seven determinants


=
\begin{vmatrix}
2              &0  &0   \\
4              &0  &0   \\
2              &0  &0
\end{vmatrix}
+
\begin{vmatrix}
2  &0  &0   \\
4  &0  &0   \\
0  &1  &0
\end{vmatrix}
+
\begin{vmatrix}
2  &0  &0   \\
4  &0  &0   \\
0  &0  &5
\end{vmatrix}
+
\begin{vmatrix}
2  &0  &0   \\
0  &3  &0   \\
2  &0  &0
\end{vmatrix}
+\dots+
\begin{vmatrix}
0  &0  &-1  \\
0  &0  &\underline{0}  \\
0  &0  &5
\end{vmatrix}

such that each row contains a single entry from the starting matrix.

So an  n \! \times \! n determinant expands into a sum of  n^n determinants where each row of each summand contains a single entry from the starting matrix. However, many of these summand determinants are zero.

Example 3.6

In each of these three matrices from the above expansion, two of the rows have their entry from the starting matrix in the same column, e.g., in the first matrix, the 2 and the 4 both come from the first column.


\begin{vmatrix}
2               &0  &0   \\
4               &0  &0  \\
0               &1  &0
\end{vmatrix}
\qquad
\begin{vmatrix}
0               &0  &-1  \\
0               &3  &0  \\
0               &0  &5
\end{vmatrix}
\qquad
\begin{vmatrix}
0               &1  &0   \\
0               &0  &\underline{0}  \\
0               &0  &5
\end{vmatrix}

Any such matrix is singular, because in each, one row is a multiple of the other (or is a zero row). Thus, any such determinant is zero, by Lemma 2.3.

Therefore, the above expansion of the  3 \! \times \! 3 determinant into the sum of the twenty seven determinants simplifies to the sum of these six.

\begin{array}{rl}
\begin{vmatrix}
2  &1  &-1  \\
4  &3  &\underline{0}  \\
2  &1  &5
\end{vmatrix}
&=\begin{vmatrix}
2  &0  &0   \\
0  &3  &0   \\
0  &0  &5
\end{vmatrix}
+
\begin{vmatrix}
2  &0  &0   \\
0  &0  &\underline{0}   \\
0  &1  &0
\end{vmatrix}                      \\
&\quad+\begin{vmatrix}
0  &1  &0   \\
4  &0  &0   \\
0  &0  &5
\end{vmatrix}
+
\begin{vmatrix}
0  &1  &0   \\
0  &0  &\underline{0}   \\
2  &0  &0
\end{vmatrix}                      \\
&\quad+\begin{vmatrix}
0  &0  &-1  \\
4  &0  &0   \\
0  &1  &0
\end{vmatrix}
+
\begin{vmatrix}
0  &0  &-1  \\
0  &3  &0    \\
2  &0  &0
\end{vmatrix}                      
\end{array}

We can bring out the scalars.

\begin{array}{rl}
&=(2)(3)(5)\begin{vmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &1
\end{vmatrix}
+(2)(\underline{0})(1)\begin{vmatrix}
1  &0  &0  \\
0  &0  &1  \\
0  &1  &0
\end{vmatrix}                     \\
&\quad+(1)(4)(5)\begin{vmatrix}
0  &1  &0  \\
1  &0  &0  \\
0  &0  &1
\end{vmatrix}
+(1)(\underline{0})(2)\begin{vmatrix}
0  &1  &0  \\
0  &0  &1  \\
1  &0  &0
\end{vmatrix}                       \\
&\quad+(-1)(4)(1)\begin{vmatrix}
0  &0  &1  \\
1  &0  &0  \\
0  &1  &0
\end{vmatrix}
+(-1)(3)(2)\begin{vmatrix}
0  &0  &1  \\
0  &1  &0  \\
1  &0  &0
\end{vmatrix}                       
\end{array}

To finish, we evaluate those six determinants by row-swapping them to the identity matrix, keeping track of the resulting sign changes.

\begin{array}{rl}
&=30\cdot (+1)+0\cdot (-1)  \\
&\quad+20\cdot (-1)+0\cdot (+1) \\
&\quad -4\cdot (+1)-6\cdot (-1)=12
\end{array}

That example illustrates the key idea. We've applied multilinearity to a 3 \! \times \! 3 determinant to get 3^3 separate determinants, each with one distinguished entry per row. We can drop most of these new determinants because the matrices are singular, with one row a multiple of another. We are left with the one-entry-per-row determinants also having only one entry per column (one entry from the original determinant, that is). And, since we can factor scalars out, we can further reduce to only considering determinants of one-entry-per-row-and-column matrices where the entries are ones.

These are permutation matrices. Thus, the determinant can be computed in this three-step way (Step 1) for each permutation matrix, multiply together the entries from the original matrix where that permutation matrix has ones, (Step 2) multiply that by the determinant of the permutation matrix and (Step 3) do that for all permutation matrices and sum the results together.

To state this as a formula, we introduce a notation for permutation matrices. Let \iota_j be the row vector that is all zeroes except for a one in its j-th entry, so that the four-wide \iota_2 is \begin{pmatrix} 0 &1 &0 &0 \end{pmatrix}. We can construct permutation matrices by permuting — that is, scrambling — the numbers 1, 2, ..., n, and using them as indices on the \iota's. For instance, to get a  4 \! \times \! 4 permutation matrix, we can scramble the numbers from 1 to 4 into this sequence  \langle 3,2,1,4 \rangle  and take the corresponding row vector \iota's.


\begin{pmatrix}
\iota_{3} \\
\iota_{2} \\
\iota_{1} \\
\iota_{4} 
\end{pmatrix}=
\begin{pmatrix}
0  &0  &1  &0  \\
0  &1  &0  &0  \\
1  &0  &0  &0  \\
0  &0  &0  &1
\end{pmatrix}
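
As an illustration (with helper names of our own choosing), this sketch builds the permutation matrix for a given sequence by stacking the corresponding \iota row vectors, reproducing the 4 \! \times \! 4 matrix just displayed.

def permutation_matrix(phi):
    n = len(phi)
    def iota(j):                          # all zeroes except for a one in entry j
        return [1 if col == j else 0 for col in range(1, n + 1)]
    return [iota(j) for j in phi]

for row in permutation_matrix((3, 2, 1, 4)):
    print(row)
# [0, 0, 1, 0]
# [0, 1, 0, 0]
# [1, 0, 0, 0]
# [0, 0, 0, 1]
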
Definition 3.7

An  n -permutation is a sequence consisting of an arrangement of the numbers 1, 2, ..., n.

Example 3.8

The 2-permutations are  \phi_1=\langle 1,2 \rangle  and  \phi_2=\langle 2,1 \rangle  . These are the associated permutation matrices.


P_{\phi_1}
=\begin{pmatrix}
\iota_1 \\
\iota_2 
\end{pmatrix}
=\begin{pmatrix}
1  &0         \\
0  &1   
\end{pmatrix}
\qquad
P_{\phi_2}
=\begin{pmatrix}
\iota_2 \\
\iota_1 
\end{pmatrix}
=\begin{pmatrix}
0  &1         \\
1  &0   
\end{pmatrix}

We sometimes write permutations as functions, e.g.,  \phi_2(1)=2 , and  \phi_2(2)=1 . Then the rows of P_{\phi_2} are \iota_{\phi_2(1)}=\iota_2 and \iota_{\phi_2(2)}=\iota_1.

The 3-permutations are  \phi_1=\langle 1,2,3 \rangle  ,  \phi_2=\langle 1,3,2 \rangle  ,  \phi_3=\langle 2,1,3 \rangle  ,  \phi_4=\langle 2,3,1 \rangle  ,  \phi_5=\langle 3,1,2 \rangle  , and  \phi_6=\langle 3,2,1 \rangle  . Here are two of the associated permutation matrices.


P_{\phi_2}
=\begin{pmatrix}
\iota_1 \\
\iota_3 \\
\iota_2 
\end{pmatrix}
=\begin{pmatrix}
1      &0        &0        \\
0      &0        &1        \\
0      &1        &0
\end{pmatrix}
\qquad
P_{\phi_5}
=\begin{pmatrix}
\iota_3 \\
\iota_1 \\
\iota_2 
\end{pmatrix}
=\begin{pmatrix}
0      &0        &1        \\
1      &0        &0        \\
0      &1        &0
\end{pmatrix}

For instance, the rows of P_{\phi_5} are \iota_{\phi_5(1)}=\iota_3, \iota_{\phi_5(2)}=\iota_1, and \iota_{\phi_5(3)}=\iota_2.

Definition 3.9

The permutation expansion for determinants is


\begin{vmatrix}
t_{1,1}  &t_{1,2}  &\ldots  &t_{1,n}  \\
t_{2,1}  &t_{2,2}  &\ldots  &t_{2,n}  \\
&\vdots                      \\
t_{n,1}  &t_{n,2}  &\ldots  &t_{n,n}
\end{vmatrix}
=
\begin{array}{l}
t_{1,\phi_1(1)}t_{2,\phi_1(2)}\cdots
t_{n,\phi_1(n)}\left|P_{\phi_1}\right|       \\[.5ex]
\quad+t_{1,\phi_2(1)}t_{2,\phi_2(2)}\cdots
t_{n,\phi_2(n)}\left|P_{\phi_2}\right|       \\[.5ex]
\quad\vdots                              \\
\quad+t_{1,\phi_k(1)}t_{2,\phi_k(2)}\cdots
t_{n,\phi_k(n)}\left|P_{\phi_k}\right| 
\end{array}

where  \phi_1,\ldots,\phi_k are all of the  n -permutations.

This formula is often written in summation notation


\left|T\right|=
\sum_{\text{permutations }\phi}\!\!\!\!
t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)}
\left|P_{\phi}\right|

read aloud as "the sum, over all permutations  \phi , of terms having the form  t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)} \left|P_{\phi}\right| ". This phrase is just a restating of the three-step process (Step 1) for each permutation matrix, compute  t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)} (Step 2) multiply that by  \left|P_{\phi}\right| and (Step 3) sum all such terms together.
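
The three-step process can be carried out mechanically. In this sketch (assuming Python's itertools; the function names are ours) the factor \left|P_{\phi}\right| is found as in the examples, by counting the row swaps needed to reach the identity.

from itertools import permutations

def det_P(phi):
    # row-swap the permutation to <1,2,...,n>, counting swaps; each swap contributes a factor of -1
    phi = list(phi)
    swaps = 0
    for i in range(len(phi)):
        j = phi.index(i + 1)
        if j != i:
            phi[i], phi[j] = phi[j], phi[i]
            swaps += 1
    return (-1) ** swaps

def permutation_expansion(t):
    n = len(t)
    total = 0
    for phi in permutations(range(1, n + 1)):
        term = det_P(phi)                              # Step 2
        for row in range(1, n + 1):
            term *= t[row - 1][phi[row - 1] - 1]       # Step 1: t_{row, phi(row)}
        total += term                                  # Step 3
    return total

print(permutation_expansion([[2, 1, -1], [4, 3, 0], [2, 1, 5]]))   # 12, as in Example 3.6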

Example 3.10

The familiar formula for the determinant of a 2 \! \times \! 2 matrix can be derived in this way.

\begin{array}{rl}
\begin{vmatrix}
t_{1,1}  &t_{1,2} \\
t_{2,1}  &t_{2,2}
\end{vmatrix}
&=
t_{1,1}t_{2,2}\cdot\left|P_{\phi_1}\right|
+
t_{1,2}t_{2,1}\cdot\left|P_{\phi_2}\right|      \\     
&=
t_{1,1}t_{2,2}\cdot\begin{vmatrix}
1  &0 \\
0  &1
\end{vmatrix}
+
t_{1,2}t_{2,1}\cdot\begin{vmatrix}
0  &1 \\
1  &0
\end{vmatrix}               \\
&=t_{1,1}t_{2,2}-t_{1,2}t_{2,1}
\end{array}

(the second permutation matrix takes one row swap to pass to the identity). Similarly, the formula for the determinant of a 3 \! \times \! 3 matrix is this.

\begin{array}{rl}
\begin{vmatrix}
t_{1,1}  &t_{1,2}  &t_{1,3} \\
t_{2,1}  &t_{2,2}  &t_{2,3} \\
t_{3,1}  &t_{3,2}  &t_{3,3} 
\end{vmatrix}
&=
\begin{align}
&t_{1,1}t_{2,2}t_{3,3}\left|P_{\phi_1}\right|
+t_{1,1}t_{2,3}t_{3,2}\left|P_{\phi_2}\right|
+t_{1,2}t_{2,1}t_{3,3}\left|P_{\phi_3}\right| \\
&\quad
+t_{1,2}t_{2,3}t_{3,1}\left|P_{\phi_4}\right|
+t_{1,3}t_{2,1}t_{3,2}\left|P_{\phi_5}\right|
+t_{1,3}t_{2,2}t_{3,1}\left|P_{\phi_6}\right|
\end{align}                                      \\
&=
\begin{align}
&t_{1,1}t_{2,2}t_{3,3}
-t_{1,1}t_{2,3}t_{3,2}
-t_{1,2}t_{2,1}t_{3,3}  \\
&\quad
+t_{1,2}t_{2,3}t_{3,1}
+t_{1,3}t_{2,1}t_{3,2}
-t_{1,3}t_{2,2}t_{3,1}
\end{align}
\end{array}

Computing a determinant by permutation expansion usually takes longer than Gauss' method. However, here we are not trying to do the computation efficiently, we are instead trying to give a determinant formula that we can prove to be well-defined. While the permutation expansion is impractical for computations, it is useful in proofs. In particular, we can use it for the result that we are after.

Theorem 3.11

For each n there is an n \! \times \! n determinant function.

The proof is deferred to the following subsection. Also there is the proof of the next result (they share some features).

Theorem 3.12

The determinant of a matrix equals the determinant of its transpose.

The consequence of this theorem is that, while we have so far stated results in terms of rows (e.g., determinants are multilinear in their rows, row swaps change the sign, etc.), all of the results also hold in terms of columns. The final result gives examples.

Corollary 3.13

A matrix with two equal columns is singular. Column swaps change the sign of a determinant. Determinants are multilinear in their columns.

Proof

For the first statement, transposing the matrix results in a matrix with the same determinant, and with two equal rows, and hence a determinant of zero. The other two are proved in the same way.

We finish with a summary (although the final subsection contains the unfinished business of proving the two theorems). Determinant functions exist, are unique, and we know how to compute them. As for what determinants are about, perhaps these lines (Kemp 1982) help make it memorable.

Determinant none,
Solution: lots or none.
Determinant some,
Solution: just one.

Exercises

These summarize the notation used in this book for the 2- and 3- permutations.

\begin{array}{c|cc}
i          &1      &2    \\
\hline
\phi_1(i)  &1      &2     \\
\phi_2(i)  &2      &1     
\end{array}
\qquad
\begin{array}{c|ccc}
i          &1     &2   &3    \\
\hline
\phi_1(i)  &1     &2   &3    \\
\phi_2(i)  &1     &3   &2    \\
\phi_3(i)  &2     &1   &3    \\
\phi_4(i)  &2     &3   &1    \\
\phi_5(i)  &3     &1   &2    \\
\phi_6(i)  &3     &2   &1    
\end{array}

This exercise is recommended for all readers.
Problem 1

Compute the determinant by using the permutation expansion.

  1. \begin{vmatrix}
1  &2  &3  \\
4  &5  &6  \\
7  &8  &9
\end{vmatrix}
  2. \begin{vmatrix}
2  &2  &1  \\
3  &-1 &0  \\
-2 &0  &5
\end{vmatrix}
This exercise is recommended for all readers.
Problem 2

Compute these both with Gauss' method and with the permutation expansion formula.

  1.  \begin{vmatrix}
2  &1  \\
3  &1
\end{vmatrix}
  2.  \begin{vmatrix}
0  &1  &4  \\
0  &2  &3  \\
1  &5  &1
\end{vmatrix}
This exercise is recommended for all readers.
Problem 3

Use the permutation expansion formula to derive the formula for  3 \! \times \! 3 determinants.

Problem 4

List all of the 4-permutations.

Problem 5

A permutation, regarded as a function from the set \{1,\dots,n\} to itself, is one-to-one and onto. Therefore, each permutation has an inverse.

  1. Find the inverse of each 2-permutation.
  2. Find the inverse of each 3-permutation.
Problem 6

Prove that  f is multilinear if and only if for all  \vec{v}_1,\vec{v}_2\in V and  k_1,k_2\in\mathbb{R} , this holds.


f(\vec{\rho}_1,\dots,k_1\vec{v}_1+k_2\vec{v}_2,
\dots,\vec{\rho}_n)
=
k_1f(\vec{\rho}_1,\dots,\vec{v}_1,\dots,\vec{\rho}_n)+
k_2f(\vec{\rho}_1,\dots,\vec{v}_2,\dots,\vec{\rho}_n)
Problem 7

Find the only nonzero term in the permutation expansion of this matrix.


\begin{vmatrix}
0  &1  &0  &0  \\
1  &0  &1  &0  \\
0  &1  &0  &1  \\
0  &0  &1  &0
\end{vmatrix}

Compute that determinant by finding the signum of the associated permutation.

Problem 8

How would determinants change if we changed property (4) of the definition to read that  \left|I\right|=2 ?

Problem 9

Verify the second and third statements in Corollary 3.13.

This exercise is recommended for all readers.
Problem 10

Show that if an  n \! \times \! n matrix has a nonzero determinant then any column vector  \vec{v}\in\mathbb{R}^n can be expressed as a linear combination of the columns of the matrix.

Problem 11

True or false: a matrix whose entries are only zeros or ones has a determinant equal to zero, one, or negative one. (Strang 1980)

Problem 12
  1. Show that there are 120 terms in the permutation expansion formula of a  5 \! \times \! 5 matrix.
  2. How many are sure to be zero if the  1,2 entry is zero?
Problem 13

How many  n -permutations are there?

Problem 14

A matrix  A is skew-symmetric if  A^{\rm trans}=-A , as in this matrix.


A=\begin{pmatrix}
0  &3  \\
-3  &0
\end{pmatrix}

Show that  n \! \times \! n skew-symmetric matrices with nonzero determinants exist only for even  n .

This exercise is recommended for all readers.
Problem 15

What is the smallest number of zeros, and the placement of those zeros, needed to ensure that a  4 \! \times \! 4 matrix has a determinant of zero?

This exercise is recommended for all readers.
Problem 16

If we have  n data points  (x_1,y_1),(x_2,y_2),\dots\,,(x_n,y_n) and want to find a polynomial  p(x)=a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+\dots+a_1x+a_0 passing through those points then we can plug in the points to get an  n equation/ n unknown linear system. The matrix of coefficients for that system is called the Vandermonde matrix. Prove that the determinant of the transpose of that matrix of coefficients


\begin{vmatrix}
1       &1       &\ldots   &1       \\
x_1     &x_2     &\ldots   &x_n     \\
{x_1}^2 &{x_2}^2 &\ldots   &{x_n}^2 \\
&\vdots                     \\
{x_1}^{n-1} &{x_2}^{n-1}   &\ldots   &{x_n}^{n-1}
\end{vmatrix}

equals the product, over all indices  i,j\in\{1,\dots,n\} with  i<j , of terms of the form  x_j-x_i . (This shows that the determinant is zero, and the linear system has no solution, if and only if the  x_i 's in the data are not distinct.)

Problem 17

A matrix can be divided into blocks, as here,

 
\left(\begin{array}{cc|c}
1  &2   &0  \\
3  &4   &0  \\  \hline
0  &0   &-2 
\end{array}\right)

which shows four blocks, the square 2 \! \times \! 2 and 1 \! \times \! 1 ones in the upper left and lower right, and the zero blocks in the upper right and lower left. Show that if a matrix can be partitioned as


T=
\left(\begin{array}{c|c}
J   &Z_2  \\  \hline
Z_1 &K
\end{array}\right)

where J and K are square, and Z_1 and Z_2 are all zeroes, then  \left|T\right|=\left|J\right|\cdot\left|K\right| .

This exercise is recommended for all readers.
Problem 18

Prove that for any  n \! \times \! n matrix  T there are at most  n distinct reals  r such that the matrix  T-rI has determinant zero (we shall use this result in Chapter Five).

? Problem 19

The nine positive digits can be arranged into  3 \! \times \! 3 arrays in  9! ways. Find the sum of the determinants of these arrays. (Trigg 1963)

Problem 20

Show that


\begin{vmatrix}
x-2  &x-3  &x-4  \\
x+1  &x-1  &x-3  \\
x-4  &x-7  &x-10
\end{vmatrix}=0.

(Silverman & Trigg 1963)

? Problem 21

Let  S be the sum of the integer elements of a magic square of order three and let  D be the value of the square considered as a determinant. Show that  D/S is an integer. (Trigg & Walker 1949)

? Problem 22

Show that the determinant of the  n^2 elements in the upper left corner of the Pascal triangle


\begin{array}{cccccc}
1  &1  &1  &1  &.  &.  \\
1  &2  &3  &.  &.      \\
1  &3  &.  &.  &   &   \\
1  &.  &.  &   &   &   \\
.                      \\
.
\end{array}

has the value unity. (Rupp & Aude 1931)


4 - Determinants Exist

This subsection is optional. It consists of proofs of two results from the prior subsection. These proofs involve the properties of permutations, which will not be used later, except in the optional Jordan Canonical Form subsection.

The prior subsection attacks the problem of showing that for any size there is a determinant function on the set of square matrices of that size by using multilinearity to develop the permutation expansion.

\begin{array}{rl}

\begin{vmatrix}
t_{1,1}  &t_{1,2}  &\ldots  &t_{1,n}  \\
t_{2,1}  &t_{2,2}  &\ldots  &t_{2,n}  \\
&\vdots                      \\
t_{n,1}  &t_{n,2}  &\ldots  &t_{n,n}
\end{vmatrix}
&=
\begin{array}{l}
t_{1,\phi_1(1)}t_{2,\phi_1(2)}\cdots
t_{n,\phi_1(n)}\left|P_{\phi_1}\right|       \\
\quad+t_{1,\phi_2(1)}t_{2,\phi_2(2)}\cdots
t_{n,\phi_2(n)}\left|P_{\phi_2}\right|       \\
\quad\vdots                              \\
\quad+t_{1,\phi_k(1)}t_{2,\phi_k(2)}\cdots
t_{n,\phi_k(n)}\left|P_{\phi_k}\right| 
\end{array}                                                 \\
&=\displaystyle\sum_{\text{permutations }\phi}
t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)}
\left|P_{\phi}\right|
\end{array}

This reduces the problem to showing that there is a determinant function on the set of permutation matrices of that size.

Of course, a permutation matrix can be row-swapped to the identity matrix and to calculate its determinant we can keep track of the number of row swaps. However, the problem is still not solved. We still have not shown that the result is well-defined. For instance, the determinant of


P_{\phi}=
\begin{pmatrix}
0  &1  &0  &0 \\
1  &0  &0  &0 \\
0  &0  &1  &0 \\
0  &0  &0  &1
\end{pmatrix}

could be computed with one swap


P_{\phi}
\xrightarrow[]{\rho_1\leftrightarrow\rho_2}
\begin{pmatrix}
1  &0  &0  &0 \\
0  &1  &0  &0 \\
0  &0  &1  &0 \\
0  &0  &0  &1
\end{pmatrix}

or with three.


P_{\phi}
\xrightarrow[]{\rho_3\leftrightarrow\rho_1}
\begin{pmatrix}
0  &0  &1  &0 \\
1  &0  &0  &0 \\
0  &1  &0  &0 \\
0  &0  &0  &1
\end{pmatrix}
\xrightarrow[]{\rho_2\leftrightarrow\rho_3}
\begin{pmatrix}
0  &0  &1  &0 \\
0  &1  &0  &0 \\
1  &0  &0  &0 \\
0  &0  &0  &1
\end{pmatrix}
\xrightarrow[]{\rho_1\leftrightarrow\rho_3}
\begin{pmatrix}
1  &0  &0  &0 \\
0  &1  &0  &0 \\
0  &0  &1  &0 \\
0  &0  &0  &1
\end{pmatrix}

Both reductions have an odd number of swaps so we figure that  \left|P_{\phi}\right|=-1 but how do we know that there isn't some way to do it with an even number of swaps? Corollary 4.6 below proves that there is no permutation matrix that can be row-swapped to an identity matrix in two ways, one with an even number of swaps and the other with an odd number of swaps.

Definition 4.1

Two rows of a permutation matrix


\begin{pmatrix}
\vdots          \\
\iota_{k} \\
\vdots          \\
\iota_{j} \\
\vdots
\end{pmatrix}

such that  k>j are in an inversion of their natural order.

Example 4.2

This permutation matrix


\begin{pmatrix}
\iota_3  \\
\iota_2  \\
\iota_1
\end{pmatrix}
=
\begin{pmatrix}
0  &0  &1  \\
0  &1  &0  \\
1  &0  &0
\end{pmatrix}

has three inversions:  \iota_3 precedes  \iota_1 ,  \iota_3 precedes  \iota_2 , and  \iota_2 precedes  \iota_1 .
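
Counting inversions is a direct computation. A small sketch (the helper name is ours): for the matrix above, whose rows are indexed by \langle 3,2,1 \rangle , it reports three inversions.

def inversions(phi):
    # count pairs that appear out of their natural order
    return sum(1 for i in range(len(phi))
                 for j in range(i + 1, len(phi))
                 if phi[i] > phi[j])

print(inversions((3, 2, 1)))   # 3, as in Example 4.2
print(inversions((1, 2, 3)))   # 0: the identity has no inversions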

Lemma 4.3

A row-swap in a permutation matrix changes the number of inversions from even to odd, or from odd to even.

Proof

Consider a swap of rows j and k, where k>j. If the two rows are adjacent


P_{\phi}=
\begin{pmatrix}
\vdots           \\
\iota_{\phi(j)}  \\
\iota_{\phi(k)}  \\
\vdots
\end{pmatrix}
\xrightarrow[]{\rho_k\leftrightarrow\rho_j}
\begin{pmatrix}
\vdots           \\
\iota_{\phi(k)}  \\
\iota_{\phi(j)}  \\
\vdots
\end{pmatrix}

then the swap changes the total number of inversions by one — either removing or producing one inversion, depending on whether  \phi(j)>\phi(k) or not, since inversions involving rows not in this pair are not affected. Consequently, the total number of inversions changes from odd to even or from even to odd.

If the rows are not adjacent then they can be swapped via a sequence of adjacent swaps, first bringing row k up


\begin{pmatrix}
\vdots             \\
\iota_{\phi(j)}    \\
\iota_{\phi(j+1)}  \\
\iota_{\phi(j+2)}  \\
\vdots             \\
\iota_{\phi(k)}    \\
\vdots
\end{pmatrix}
\xrightarrow[]{\rho_k\leftrightarrow\rho_{k-1}}\;\;
\xrightarrow[]{\rho_{k-1}\leftrightarrow\rho_{k-2}}
\dots
\xrightarrow[]{\rho_{j+1}\leftrightarrow\rho_j}
\begin{pmatrix}
\vdots             \\
\iota_{\phi(k)}    \\
\iota_{\phi(j)}    \\
\iota_{\phi(j+1)}  \\
\vdots             \\
\iota_{\phi(k-1)}  \\
\vdots
\end{pmatrix}

and then bringing row j down.


\xrightarrow[]{\rho_{j+1}\leftrightarrow\rho_{j+2}}\;\;
\xrightarrow[]{\rho_{j+2}\leftrightarrow\rho_{j+3}}
\dots
\xrightarrow[]{\rho_{k-1}\leftrightarrow\rho_k}
\begin{pmatrix}
\vdots             \\
\iota_{\phi(k)}    \\
\iota_{\phi(j+1)}  \\
\iota_{\phi(j+2)}  \\
\vdots             \\
\iota_{\phi(j)}    \\
\vdots
\end{pmatrix}

Each of these adjacent swaps changes the number of inversions from odd to even or from even to odd. There are an odd number  (k-j)+(k-j-1) of them. The total change in the number of inversions is from even to odd or from odd to even.

Definition 4.4

The signum of a permutation  \sgn(\phi) is  +1 if the number of inversions in  P_\phi is even, and is  -1 if the number of inversions is odd.

Example 4.5

With the subscripts from Example 3.8 for the 3-permutations,  \sgn(\phi_1)=1 while  \sgn(\phi_2)=-1 .

Corollary 4.6

If a permutation matrix has an odd number of inversions then swapping it to the identity takes an odd number of swaps. If it has an even number of inversions then swapping to the identity takes an even number of swaps.

Proof

The identity matrix has zero inversions. To change an odd number to zero requires an odd number of swaps, and to change an even number to zero requires an even number of swaps.
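
For instance, the  4 \! \times \! 4 matrix  P_{\phi} displayed at the start of this discussion has its rows in the order  \iota_2,\iota_1,\iota_3,\iota_4 , so it has exactly one inversion:  \iota_2 precedes  \iota_1 . One is odd, so by this corollary every sequence of row swaps taking  P_{\phi} to the identity has odd length. That is consistent with the one-swap and three-swap reductions shown above, and it settles the question raised there:  \left|P_{\phi}\right|=-1 .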

We still have not shown that the permutation expansion is well-defined because we have not considered row operations on permutation matrices other than row swaps. We will finesse this problem: we will define a function  d:\mathcal{M}_{n \! \times \! n}\to \mathbb{R} by altering the permutation expansion formula, replacing \left|P_\phi\right| with \sgn(\phi)


d(T)=
\sum_{\text{permutations }\phi}t_{1,\phi(1)}t_{2,\phi(2)}\dots t_{n,\phi(n)}
\sgn(\phi)

(this gives the same value as the permutation expansion because the prior result shows that \det(P_\phi)=\sgn(\phi)). This formula's advantage is that the number of inversions is clearly well-defined — just count them. Therefore, we will show that a determinant function exists for all sizes by showing that  d is it, that is, that d satisfies the four conditions.
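
Readers who like to experiment can check this formula numerically. The following Python sketch (the routine names d and sgn are ours, chosen to match the notation above) computes the sum over all permutations, with the signum found by counting inversions as in Definition 4.4; it mirrors the definition directly and makes no attempt at efficiency.

from itertools import permutations

def sgn(phi):
    # Definition 4.4: +1 if the number of inversions is even, -1 if it is odd
    inversions = sum(1 for i in range(len(phi))
                       for j in range(i + 1, len(phi))
                       if phi[i] > phi[j])
    return 1 if inversions % 2 == 0 else -1

def d(T):
    # the permutation expansion, with |P_phi| replaced by sgn(phi)
    n = len(T)
    total = 0
    for phi in permutations(range(n)):
        product = 1
        for row in range(n):
            product *= T[row][phi[row]]
        total += product * sgn(phi)
    return total

print(d([[1, 2], [3, 4]]))                     # -2, matching the ad - bc formula
print(d([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))    # 0; this matrix is singular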

Lemma 4.7

The function  d is a determinant. Hence determinants exist for every n.

Proof

We must check that it has the four properties from the definition.

Property (4) is easy; in


d(I)=
\sum_{\text{perms }\phi}
\iota_{1,\phi(1)}\iota_{2,\phi(2)}\cdots \iota_{n,\phi(n)}
\sgn(\phi)

all of the summands are zero except for the product down the diagonal, which is one.

For property (3) consider  d(\hat{T})  where  T\xrightarrow[]{k\rho_i}\hat{T} .


\sum_{\text{perms }\phi}\!\!
\hat{t}_{1,\phi(1)}
\cdots\hat{t}_{i,\phi(i)}\cdots\hat{t}_{n,\phi(n)}
\sgn(\phi)                      
=\sum_{\phi}
t_{1,\phi(1)}\cdots kt_{i,\phi(i)}\cdots t_{n,\phi(n)}
\sgn(\phi)

Factor the  k out of each term to get the desired equality.


=k\cdot\sum_{\phi}
t_{1,\phi(1)}\cdots t_{i,\phi(i)}\cdots t_{n,\phi(n)}
\sgn(\phi)                 
=k\cdot d(T)


For (2), let  T\xrightarrow[]{\rho_i\leftrightarrow\rho_j}\hat{T} .


d(\hat{T})=
\sum_{\text{perms }\phi}\!\!
\hat{t}_{1,\phi(1)}
\cdots\hat{t}_{i,\phi(i)}
\cdots\hat{t}_{j,\phi(j)}
\cdots \hat{t}_{n,\phi(n)}
\sgn(\phi)

To convert to unhatted  t 's, for each \phi consider the permutation  \sigma that equals  \phi except that the  i -th and  j -th numbers are interchanged,  \sigma(i)=\phi(j) and  \sigma(j)=\phi(i) . Replacing the  \phi in  \hat{t}_{1,\phi(1)}
\cdots\hat{t}_{i,\phi(i)}
\cdots\hat{t}_{j,\phi(j)}
\cdots \hat{t}_{n,\phi(n)} 
with this  \sigma gives     t_{1,\sigma(1)}
\cdots t_{j,\sigma(j)}
\cdots t_{i,\sigma(i)}
\cdots t_{n,\sigma(n)} 
. Now  \sgn(\phi)=-\sgn(\sigma) (by Lemma 4.3) and so we get

\begin{array}{rl}
&=\sum_\sigma
t_{1,\sigma(1)}
\cdots t_{j,\sigma(j)}
\cdots t_{i,\sigma(i)}
\cdots t_{n,\sigma(n)}
\cdot\bigl(-\sgn(\sigma)\bigr)        \\
&=-\sum_{\sigma}
t_{1,\sigma(1)}\cdots t_{j,\sigma(j)}
\cdots t_{i,\sigma(i)}\cdots t_{n,\sigma(n)}\cdot\sgn(\sigma)
\end{array}

where the sum is over all permutations  \sigma derived from another permutation  \phi by a swap of the  i -th and  j -th numbers. But any permutation can be derived from some other permutation by such a swap, in one and only one way, so this summation is in fact a sum over all permutations, taken once and only once. Thus  d(\hat{T})=-d(T) .

To do property (1) let  T\xrightarrow[]{k\rho_i+\rho_j}\hat{T}  and consider

\begin{array}{rl}
d(\hat{T})
&=\sum_{\text{perms }\phi}
\hat{t}_{1,\phi(1)}\cdots\hat{t}_{i,\phi(i)}
\cdots\hat{t}_{j,\phi(j)}\cdots\hat{t}_{n,\phi(n)}
\sgn(\phi)                \\
&=\sum_{\phi}
t_{1,\phi(1)}\cdots t_{i,\phi(i)}
\cdots (kt_{i,\phi(j)}+t_{j,\phi(j)})\cdots t_{n,\phi(n)}
\sgn(\phi)
\end{array}

(notice: that's  kt_{i,\phi(j)} , not  kt_{j,\phi(j)} ). Distribute, commute, and factor.

\begin{array}{rl}
=&\displaystyle\sum_{\phi}
\big[t_{1,\phi(1)}\cdots t_{i,\phi(i)}
\cdots kt_{i,\phi(j)}\cdots t_{n,\phi(n)}
\sgn(\phi)\\ 
&\displaystyle\qquad+t_{1,\phi(1)}\cdots t_{i,\phi(i)}
\cdots t_{j,\phi(j)}\cdots t_{n,\phi(n)}
\sgn(\phi)\big]
\\
\\
=&\displaystyle
\sum_{{\phi}} t_{1,\phi(1)}\cdots t_{i,\phi(i)}
\cdots kt_{i,\phi(j)}\cdots t_{n,\phi(n)}
\sgn(\phi)           \\
&\displaystyle\qquad
+\sum_{\phi}
t_{1,\phi(1)}\cdots t_{i,\phi(i)}
\cdots t_{j,\phi(j)}\cdots t_{n,\phi(n)}
\sgn(\phi)        
\\
\\
=&\displaystyle
k\cdot \sum_{{\phi}}
t_{1,\phi(1)}\cdots t_{i,\phi(i)}
\cdots t_{i,\phi(j)}\cdots t_{n,\phi(n)}
\sgn(\phi)+d(T)          
\end{array}

We finish by showing that the terms  t_{1,\phi(1)}\cdots t_{i,\phi(i)} \cdots t_{i,\phi(j)}\dots t_{n,\phi(n)} \sgn(\phi)  add to zero. This sum represents  d(S) where  S is a matrix equal to  T except that row j of S is a copy of row i of T (because the factor is  t_{i,\phi(j)} , not  t_{j,\phi(j)} ). Thus, S has two equal rows, rows i and j. Since we have already shown that d changes sign on row swaps, as in Lemma 2.3 we conclude that d(S)=0.

We have now shown that determinant functions exist for each size. We already know that for each size there is at most one determinant. Therefore, the permutation expansion computes the one and only determinant value of a square matrix.

We end this subsection by proving the other result remaining from the prior subsection, that the determinant of a matrix equals the determinant of its transpose.

Example 4.8

Writing out the permutation expansion of the general 3 \! \times \! 3 matrix and of its transpose, and comparing corresponding terms


\begin{vmatrix}
a  &b  &c  \\
d  &e  &f  \\
g  &h  &i
\end{vmatrix}
= \cdots\,+
cdh\cdot\begin{vmatrix}
0  &0  &1  \\
1  &0  &0  \\
0  &1  &0
\end{vmatrix}
+\,\cdots

(terms with the same letters)


\begin{vmatrix}
a  &d  &g  \\
b  &e  &h  \\
c  &f  &i
\end{vmatrix}
= \cdots\,+
dhc\cdot\begin{vmatrix}
0  &1  &0  \\
0  &0  &1  \\
1  &0  &0
\end{vmatrix}
+\,\cdots

shows that the corresponding permutation matrices are transposes. That is, there is a relationship between these corresponding permutations. Problem 6 shows that they are inverses.

Theorem 4.9

The determinant of a matrix equals the determinant of its transpose.

Proof

Call the matrix  T and denote the entries of  {{T}^{\rm trans}} with  s 's so that  t_{i,j}=s_{j,i} . Substitution gives this


\left|T\right|
=\sum_{\text{perms }\phi} t_{1,\phi(1)}\dots t_{n,\phi(n)}
\sgn(\phi)    
=\sum_{\phi} s_{\phi(1),1}\dots s_{\phi(n),n}
\sgn(\phi)

and we can finish the argument by manipulating the expression on the right to be recognizable as the determinant of the transpose. We have written all permutation expansions (as in the middle expression above) with the row indices ascending. To rewrite the expression on the right in this way, note that because \phi is a permutation, the row indices in the term on the right \phi(1), ..., \phi(n) are just the numbers 1, ..., n, rearranged. We can thus commute to have these ascend, giving s_{1,\phi^{-1}(1)}\cdots s_{n,\phi^{-1}(n)} (if the column index is j and the row index is \phi(j) then, where the row index is i, the column index is \phi^{-1}(i)). Substituting on the right gives


=\sum_{\phi^{-1}}
s_{1,\phi^{-1}(1)}\cdots s_{n,\phi^{-1}(n)} \sgn(\phi^{-1})

(Problem 5 shows that  \sgn(\phi^{-1})=\sgn(\phi) ). Since every permutation is the inverse of another, a sum over all \phi^{-1} is a sum over all permutations \phi


=\sum_{\text{perms }\sigma}
s_{1,\sigma(1)}\dots s_{n,\sigma(n)} \sgn(\sigma)  
=\left|{{T}^{\rm trans}}\right|

as required.
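
In the smallest nontrivial case the theorem can also be checked by writing both expansions out in full:


\begin{vmatrix}
a  &b  \\
c  &d
\end{vmatrix}
=ad-bc
\qquad
\begin{vmatrix}
a  &c  \\
b  &d
\end{vmatrix}
=ad-cb

and the two sums agree, term by term.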

Exercises

These tables summarize the notation used in this book for the 2- and 3-permutations.


\begin{array}{c|cc}
i          &1      &2    \\
\hline
\phi_1(i)  &1      &2     \\
\phi_2(i)  &2      &1     
\end{array}
\qquad
\begin{array}{c|ccc}
i          &1     &2   &3    \\
\hline
\phi_1(i)  &1     &2   &3    \\
\phi_2(i)  &1     &3   &2    \\
\phi_3(i)  &2     &1   &3    \\
\phi_4(i)  &2     &3   &1    \\
\phi_5(i)  &3     &1   &2    \\
\phi_6(i)  &3     &2   &1    
\end{array}

Problem 1

Give the permutation expansion of a general 2 \! \times \! 2 matrix and its transpose.

This exercise is recommended for all readers.
Problem 2

This problem appears also in the prior subsection.

  1. Find the inverse of each 2-permutation.
  2. Find the inverse of each 3-permutation.
This exercise is recommended for all readers.
Problem 3
  1. Find the signum of each 2-permutation.
  2. Find the signum of each 3-permutation.
Problem 4

What is the signum of the n-permutation  \phi=\langle n,n-1,\dots,2,1 \rangle  ? (Strang 1980)

Problem 5

Prove these.

  1. Every permutation has an inverse.
  2.  \sgn(\phi^{-1})=\sgn(\phi)
  3. Every permutation is the inverse of another.
Problem 6

Prove that the matrix of the permutation inverse is the transpose of the matrix of the permutation P_{\phi^{-1}}={{P_{\phi}}^{\rm trans}}, for any permutation \phi.

This exercise is recommended for all readers.
Problem 7

Show that a permutation matrix with  m inversions can be row swapped to the identity in  m steps. Contrast this with Corollary 4.6.

This exercise is recommended for all readers.
Problem 8

For any permutation \phi let g(\phi) be the integer defined in this way.


g(\phi)=\prod_{i<j} [\phi(j)-\phi(i)]

(This is the product, over all indices  i and  j with  i<j , of terms of the given form.)

  1. Compute the value of g on all 2-permutations.
  2. Compute the value of g on all 3-permutations.
  3. Prove this.
    
\sgn(\phi)=\frac{g(\phi)}{|g(\phi)|}

Many authors give this formula as the definition of the signum function.
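
A numerical check of the last part, before attempting the proof, is easy to run. This Python sketch (ours) compares the sign of g with the inversion-count signum on all permutations of a few small sizes.

from itertools import permutations

def sgn(phi):
    # signum by counting inversions, as in Definition 4.4
    inversions = sum(1 for i in range(len(phi))
                       for j in range(i + 1, len(phi))
                       if phi[i] > phi[j])
    return -1 if inversions % 2 else 1

def g(phi):
    # the product over all i < j of phi(j) - phi(i)
    value = 1
    for i in range(len(phi)):
        for j in range(i + 1, len(phi)):
            value *= phi[j] - phi[i]
    return value

for n in (2, 3, 4):
    assert all(sgn(phi) == g(phi) // abs(g(phi))
               for phi in permutations(range(1, n + 1)))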


Section II - Geometry of Determinants

The prior section develops the determinant algebraically, by considering what formulas satisfy certain properties. This section complements that with a geometric approach. One advantage of this approach is that, while we have so far only considered whether or not a determinant is zero, here we shall give a meaning to the value of that determinant. (The prior section handles determinants as functions of the rows, but in this section columns are more convenient. The final result of the prior section says that we can make the switch.)


1 - Determinants as Size Functions

This parallelogram picture

Linalg parallelogram.png

is familiar from the construction of the sum of the two vectors. One way to compute the area that it encloses is to draw this rectangle and subtract the area of each subregion.

Linalg parallelogram area.png         \begin{array}{l}
\text{area of parallelogram}                    \\
\quad 
=\text{area of rectangle}
-\text{area of }A-\text{area of }B \\
\qquad 
-\cdots-\text{area of }F                   \\
\quad 
=(x_1+x_2)(y_1+y_2)-x_2y_1-x_1y_1/2        \\
\qquad 
-x_2y_2/2-x_2y_2/2-x_1y_1/2-x_2y_1         \\
\quad 
=x_1y_2-x_2y_1        
\end{array}

The fact that the area equals the value of the determinant


\begin{vmatrix}
x_1  &x_2  \\
y_1  &y_2
\end{vmatrix}
=x_1y_2-x_2y_1

is no coincidence. The properties in the definition of determinants make reasonable postulates for a function that measures the size of the region enclosed by the vectors in the matrix.
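
As a quick numeric instance, the parallelogram with sides \begin{pmatrix} 3 \\ 1 \end{pmatrix} and \begin{pmatrix} 1 \\ 2 \end{pmatrix} has area


\begin{vmatrix}
3  &1  \\
1  &2
\end{vmatrix}
=3\cdot 2-1\cdot 1
=5

which is also what the subtract-the-subregions computation above gives.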

For instance, this shows the effect of multiplying one of the box-defining vectors by a scalar (the scalar used is k=1.4).

Linalg parallelogram 2.png          Linalg parallelogram 3.png

The region formed by k\vec{v} and \vec{w} is bigger, by a factor of  k , than the shaded region enclosed by \vec{v} and \vec{w}. That is,  \text{size}\, (k\vec{v},\vec{w})=k\cdot\text{size}\, (\vec{v},\vec{w}) and in general we expect of the size measure that \text{size}\, (\dots,k\vec{v},\dots)=k\cdot\text{size}\, (\dots,\vec{v},\dots). Of course, this postulate is already familiar as one of the properties in the definition of determinants.

Another property of determinants is that they are unaffected by pivoting. Here are before-pivoting and after-pivoting boxes (the scalar used is k=0.35).

Linalg parallelogram 4.png      Linalg parallelogram 5.png

Although the region on the right, the box formed by \vec{v} and k\vec{v}+\vec{w}, is more slanted than the shaded region, the two have the same base and the same height and hence the same area. This illustrates that  \text{size}\, (\vec{v},k\vec{v}+\vec{w})=\text{size}\, (\vec{v},\vec{w}) . Generalized, \text{size}\, (\dots,\vec{v},\dots,\vec{w},\dots)
=\text{size}\, (\dots,\vec{v},\dots,k\vec{v}+\vec{w},\dots), which is a restatement of the determinant postulate.

Of course, this picture

Linalg parallelogram basis.png

shows that  \text{size}\, (\vec{e}_1,\vec{e}_2)=1 , and we naturally extend that to any number of dimensions \text{size}\,(\vec{e}_1,\dots,\vec{e}_n)=1, which is a restatement of the property that the determinant of the identity matrix is one.

With that, because property (2) of determinants is redundant (as remarked right after the definition), we have that all of the properties of determinants are reasonable to expect of a function that gives the size of boxes. We can now cite the work done in the prior section, which shows that the determinant exists and is unique, to be assured that these postulates are consistent and sufficient (we do not need any more postulates). That is, we have an intuitive justification to interpret  \det(\vec{v}_1,\dots,\vec{v}_n) as the size of the box formed by the vectors. (Comment. An even more basic approach, which also leads to the definition below, is in (Weston 1959).)

Example 1.1

The volume of this parallelepiped, which can be found by the usual formula from high school geometry, is 12.

Linalg parallelepiped.png          \begin{vmatrix}
2 &0 &-1\\
0 &3 &0 \\
2 &1 &1
\end{vmatrix}=12

Remark 1.2

Although property (2) of the definition of determinants is redundant, it raises an important point. Consider these two.

Linalg parallelogram orientation 1.png Linalg parallelogram orientation 2.png
\begin{vmatrix}
4  &1   \\
2  &3
\end{vmatrix}=10 \begin{vmatrix}
1  &4   \\
3  &2
\end{vmatrix}=-10

The only difference between them is in the order in which the vectors are taken. If we take \vec{u} first and then go to \vec{v}, following the counterclockwise arc shown, then the sign is positive. Following a clockwise arc gives a negative sign. The sign returned by the size function reflects the "orientation" or "sense" of the box. (We see the same thing if we picture the effect of scalar multiplication by a negative scalar.)

Although it is both interesting and important, the idea of orientation turns out to be tricky. It is not needed for the development below, and so we will pass it by. (See Problem 20.)

Definition 1.3

The box (or parallelepiped) formed by  \langle
\vec{v}_1,\dots,\vec{v}_n \rangle  (where each vector is from \mathbb{R}^n) includes all of the set 
\{t_1\vec{v}_1+\dots+t_n\vec{v}_n \,\big|\, t_1,\ldots,t_n\in [0..1]\}
. The volume of a box is the absolute value of the determinant of the matrix with those vectors as columns.

Example 1.4

Volume, because it is an absolute value, does not depend on the order in which the vectors are given. The volume of the parallelepiped in Example 1.1 can also be computed as the absolute value of this determinant.


\begin{vmatrix}
0  &2 &-1 \\
3  &0 &0 \\
1  &2 &1
\end{vmatrix}=-12

The definition of volume gives a geometric interpretation to something in the space, boxes made from vectors. The next result relates the geometry to the functions that operate on spaces.

Theorem 1.5

A transformation  t:\mathbb{R}^n\to \mathbb{R}^n changes the size of all boxes by the same factor, namely the size of the image of a box \left|t(S)\right| is \left|T\right| times the size of the box \left|S\right|, where T is the matrix representing t with respect to the standard basis. That is, for all n \! \times \! n matrices, the determinant of a product is the product of the determinants \left|TS\right|=\left|T\right|\cdot\left|S\right|.

The two sentences state the same idea, first in map terms and then in matrix terms. Although we tend to prefer a map point of view, the second sentence, the matrix version, is more convenient for the proof and is also the way that we shall use this result later. (Alternate proofs are given as Problem 16 and Problem 21.)

Proof

The two statements are equivalent because \left|t(S)\right|=\left|TS\right|, as both give the size of the box that is the image of the unit box \mathcal{E}_n under the composition t\circ s (where s is the map represented by S with respect to the standard basis).

First consider the case that \left|T\right|=0. A matrix has a zero determinant if and only if it is not invertible. Observe that if  TS is invertible, so that there is an M such that  (TS)M=I , then the associative property of matrix multiplication  T(SM)=I shows that  T is also invertible (with inverse SM). Therefore, if  T is not invertible then neither is  TS — if \left|T\right|=0 then \left|TS\right|=0, and the result holds in this case.

Now consider the case that \left|T\right|\neq 0, that T is nonsingular. Recall that any nonsingular matrix can be factored into a product of elementary matrices, so that TS=E_1E_2\cdots E_rS. In the rest of this argument, we will verify that if E is an elementary matrix then  \left|ES\right|=\left|E\right|\cdot\left|S\right| . The result will follow because then \left|TS\right|=\left|E_1\cdots E_rS\right|=\left|E_1\right|\cdots\left|E_r\right|\cdot\left|S\right|
=\left|E_1\cdots E_r\right|\cdot\left|S\right|=\left|T\right|\cdot\left|S\right|.

If the elementary matrix E is M_i(k) then M_i(k)S equals S except that row i has been multiplied by k. The third property of determinant functions then gives that \left|M_i(k)S\right|=k\cdot\left|S\right|. But \left|M_i(k)\right|=k, again by the third property because M_i(k) is derived from the identity by multiplication of row i by k, and so  \left|ES\right|=\left|E\right|\cdot\left|S\right| holds for E=M_i(k). The E=P_{i,j} check, where \left|P_{i,j}\right|=-1, and the E=C_{i,j}(k) check are similar.

Example 1.6

Application of the map t represented with respect to the standard bases by


\begin{pmatrix}
1  &1  \\
-2  &0
\end{pmatrix}

will double sizes of boxes, e.g., from this

Linalg parallelogram doubled 1.png          \begin{vmatrix}
2  &1  \\
1  &2
\end{vmatrix}=3

to this

Linalg parallelogram doubled 2.png          \begin{vmatrix}
3  &3  \\
-4  &-2
\end{vmatrix}=6

Corollary 1.7

If a matrix is invertible then the determinant of its inverse is the inverse of its determinant \left|T^{-1}\right|=1/\left|T\right|.

Proof

1=\left|I\right|=\left|TT^{-1}\right|=\left|T\right|\cdot\left|T^{-1}\right|

Recall that determinants are not additive homomorphisms, \det(A+B) need not equal \det(A)+\det(B). The above theorem says, in contrast, that determinants are multiplicative homomorphisms: \det(AB) does equal \det(A)\cdot \det(B).
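
A quick numerical illustration (a sketch of ours, with the  2 \! \times \! 2 determinant taken straight from the  ad-bc formula) uses the matrices of Example 1.6.

def det2(M):
    # the 2x2 formula ad - bc
    (a, b), (c, d) = M
    return a * d - b * c

def mul2(T, S):
    # 2x2 matrix product
    return [[sum(T[i][k] * S[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[1, 1], [-2, 0]]   # the map of Example 1.6; det2(T) is 2
S = [[2, 1], [1, 2]]    # the box of Example 1.6; det2(S) is 3
assert det2(mul2(T, S)) == det2(T) * det2(S)    # 6 equals 2 times 3

T_plus_S = [[T[i][j] + S[i][j] for j in range(2)] for i in range(2)]
assert det2(T_plus_S) != det2(T) + det2(S)      # 8 is not 5: sums do not behave this way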

Exercises

Problem 1

Find the volume of the region formed.

  1. \langle \begin{pmatrix} 1 \\ 3 \end{pmatrix},\begin{pmatrix} -1 \\ 4 \end{pmatrix} \rangle
  2. \langle \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix},\begin{pmatrix} 3 \\ -2 \\ 4 \end{pmatrix},
\begin{pmatrix} 8 \\ -3 \\ 8 \end{pmatrix} \rangle
  3. \langle \begin{pmatrix} 1 \\ 2 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 2 \\ 2 \\ 2 \\ 2 \end{pmatrix},
\begin{pmatrix} -1 \\ 3 \\ 0 \\ 5 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \\ 7 \end{pmatrix} \rangle
This exercise is recommended for all readers.
Problem 2

Is


\begin{pmatrix} 4 \\ 1 \\ 2 \end{pmatrix}

inside of the box formed by these three?


\begin{pmatrix} 3 \\ 3 \\ 1 \end{pmatrix}
\quad
\begin{pmatrix} 2 \\ 6 \\ 1 \end{pmatrix}
\quad
\begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix}
This exercise is recommended for all readers.
Problem 3

Find the volume of this region.

Linalg parallelogram problem 3.png

This exercise is recommended for all readers.
Problem 4

Suppose that  \left|A\right|=3 . By what factor do these change volumes?

  1.  A
  2.  A^2
  3.  A^{-2}
This exercise is recommended for all readers.
Problem 5

By what factor does each transformation change the size of boxes?

  1. \begin{pmatrix} x \\ y \end{pmatrix}\mapsto\begin{pmatrix} 2x \\ 3y \end{pmatrix}
  2. \begin{pmatrix} x \\ y \end{pmatrix}\mapsto\begin{pmatrix} 3x-y \\ -2x+y \end{pmatrix}
  3. \begin{pmatrix} x \\ y \\ z \end{pmatrix}\mapsto\begin{pmatrix} x-y \\ x+y+z \\ y-2z \end{pmatrix}
Problem 6

What is the area of the image of the rectangle  [2..4]\times [2..5] under the action of this matrix?


\begin{pmatrix}
2  &3  \\
4  &-1
\end{pmatrix}
Problem 7

If  t:\mathbb{R}^3\to \mathbb{R}^3 changes volumes by a factor of  7 and  s:\mathbb{R}^3\to \mathbb{R}^3 changes volumes by a factor of  3/2 then by what factor will their composition change volumes?

Problem 8

In what way does the definition of a box differ from the definition of a span?

This exercise is recommended for all readers.
Problem 9

Why doesn't this picture contradict Theorem 1.5?

Linalg parallelogram problem 9 1.png

 \xrightarrow[]{\scriptstyle \begin{pmatrix}
2  &1 \\
0  &1
\end{pmatrix}} Linalg parallelogram problem 9 2.png
The area of the region on the left is 2, the determinant of the matrix is 2, and the area of the region on the right is 5.
This exercise is recommended for all readers.
Problem 10

Does  \left|TS\right|=\left|ST\right| ?  \left|T(SP)\right|=\left|(TS)P\right| ?

Problem 11
  1. Suppose that  \left|A\right|=3 and that  \left|B\right|=2 . Find  \left|A^2\cdot {{B}^{\rm trans}}\cdot B^{-2}\cdot {{A}^{\rm trans}} \right| .
  2. Assume that  \left|A\right|=0 . Prove that  \left|6A^3+5A^2+2A\right|=0 .
This exercise is recommended for all readers.
Problem 12

Let  T be the matrix representing (with respect to the standard bases) the map that rotates plane vectors counterclockwise thru  \theta radians. By what factor does  T change sizes?

This exercise is recommended for all readers.
Problem 13

Must a transformation  t:\mathbb{R}^2\to \mathbb{R}^2 that preserves areas also preserve lengths?

This exercise is recommended for all readers.
Problem 14

What is the volume of a parallelepiped in  \mathbb{R}^3 bounded by a linearly dependent set?

This exercise is recommended for all readers.
Problem 15

Find the area of the triangle in  \mathbb{R}^3 with endpoints  (1,2,1) ,  (3,-1,4) , and  (2,2,2) . (Area, not volume. The triangle defines a plane— what is the area of the triangle in that plane?)

This exercise is recommended for all readers.
Problem 16

An alternate proof of Theorem 1.5 uses the definition of determinant functions.

  1. Note that the vectors forming S make a linearly dependent set if and only if \left|S\right|=0, and check that the result holds in this case.
  2. For the \left|S\right|\neq 0 case, to show that \left|TS\right|/\left|S\right|=\left|T\right| for all transformations, consider the function  d:\mathcal{M}_{n \! \times \! n}\to \mathbb{R} given by  T\mapsto \left|TS\right|/\left|S\right| . Show that d has the first property of a determinant.
  3. Show that d has the remaining three properties of a determinant function.
  4. Conclude that \left|TS\right|=\left|T\right|\cdot\left|S\right|.
Problem 17

Give a non-identity matrix with the property that  {{A}^{\rm trans}}=A^{-1} . Show that if  {{A}^{\rm trans}}=A^{-1} then  \left|A\right|=\pm 1 . Does the converse hold?

Problem 18

The algebraic property of determinants that factoring a scalar out of a single row will multiply the determinant by that scalar shows that where  H is  3 \! \times \! 3 , the determinant of  cH is  c^3 times the determinant of  H . Explain this geometrically, that is, using Theorem 1.5.

This exercise is recommended for all readers.
Problem 19

Matrices H and G are said to be similar if there is a nonsingular matrix P such that H=P^{-1}GP (we will study this relation in Chapter Five). Show that similar matrices have the same determinant.

Problem 20

We usually represent vectors in  \mathbb{R}^2 with respect to the standard basis so vectors in the first quadrant have both coordinates positive.

Linalg basis orientation 1.png           {\rm Rep}_{\mathcal{E}_2}(\vec{v})=\begin{pmatrix} +3 \\ +2 \end{pmatrix}

Moving counterclockwise around the origin, we cycle thru four regions:


\cdots
\;\longrightarrow\begin{pmatrix} + \\ + \end{pmatrix}
\;\longrightarrow\begin{pmatrix} - \\ + \end{pmatrix}
\;\longrightarrow\begin{pmatrix} - \\ - \end{pmatrix}
\;\longrightarrow\begin{pmatrix} + \\ - \end{pmatrix}
\;\longrightarrow\cdots\,.

Using this basis

 B=\langle \begin{pmatrix} 0 \\ 1 \end{pmatrix},\begin{pmatrix} -1 \\ 0 \end{pmatrix} \rangle           Linalg basis orientation 2.png

gives the same counterclockwise cycle. We say these two bases have the same orientation.

  1. Why do they give the same cycle?
  2. What other configurations of unit vectors on the axes give the same cycle?
  3. Find the determinants of the matrices formed from those (ordered) bases.
  4. What other counterclockwise cycles are possible, and what are the associated determinants?
  5. What happens in  \mathbb{R}^1 ?
  6. What happens in  \mathbb{R}^3 ?

A fascinating general-audience discussion of orientations is in (Gardner 1990).

Problem 21

This question uses material from the optional Determinant Functions Exist subsection. Prove Theorem 1.5 by using the permutation expansion formula for the determinant.

This exercise is recommended for all readers.
Problem 22
  1. Show that this gives the equation of a line in  \mathbb{R}^2 thru  (x_2,y_2) and  (x_3,y_3) .
    
\begin{vmatrix}
x    &x_2 &x_3  \\
y    &y_2 &y_3  \\
1    &1   &1
\end{vmatrix}=0
  2. (Peterson 1955) Prove that the area of a triangle with vertices  (x_1,y_1) ,  (x_2,y_2) , and  (x_3,y_3) is
    
\frac{1}{2}
\begin{vmatrix}
x_1  &x_2 &x_3  \\
y_1  &y_2 &y_3  \\
1    &1   &1
\end{vmatrix}.
  3. (Bittinger 1973) Prove that a triangle with vertices at  (x_1,y_1) ,  (x_2,y_2) , and  (x_3,y_3) whose coordinates are integers has an area of  N or  N/2 for some positive integer  N .


Section III - Other Formulas for Determinants

(This section is optional. Later sections do not depend on this material.)

Determinants are a fount of interesting and amusing formulas. Here is one that is often seen in calculus classes and used to compute determinants by hand.



1 - Laplace's Expansion

Example 1.1

In this permutation expansion

\begin{array}{rl}
\begin{vmatrix}
t_{1,1}  &t_{1,2}  &t_{1,3}  \\
t_{2,1}  &t_{2,2}  &t_{2,3}  \\
t_{3,1}  &t_{3,2}  &t_{3,3}
\end{vmatrix}             
&=\begin{align}
&t_{1,1}t_{2,2}t_{3,3}\begin{vmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &1
\end{vmatrix}
+t_{1,1}t_{2,3}t_{3,2}\begin{vmatrix}
1  &0  &0  \\
0  &0  &1  \\
0  &1  &0
\end{vmatrix}           \\
&\quad
+t_{1,2}t_{2,1}t_{3,3}\begin{vmatrix}
0  &1  &0  \\
1  &0  &0  \\
0  &0  &1
\end{vmatrix}
+t_{1,2}t_{2,3}t_{3,1}\begin{vmatrix}
0  &1  &0  \\
0  &0  &1  \\
1  &0  &0
\end{vmatrix}        \\
&\quad       
+t_{1,3}t_{2,1}t_{3,2}\begin{vmatrix}
0  &0  &1  \\
1  &0  &0  \\
0  &1  &0
\end{vmatrix}
+t_{1,3}t_{2,2}t_{3,1}\begin{vmatrix}
0  &0  &1  \\
0  &1  &0  \\
1  &0  &0
\end{vmatrix}  
\end{align}
\end{array}

we can, for instance, factor out the entries from the first row

\begin{array}{rl}
&=t_{1,1}\cdot \left[t_{2,2}t_{3,3}\begin{vmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &1
\end{vmatrix}
+t_{2,3}t_{3,2}\begin{vmatrix}
1  &0  &0  \\
0  &0  &1  \\
0  &1  &0
\end{vmatrix}\,\right]    \\
&\quad
+t_{1,2}\cdot \left[t_{2,1}t_{3,3}\begin{vmatrix}
0  &1  &0  \\
1  &0  &0  \\
0  &0  &1
\end{vmatrix}
+t_{2,3}t_{3,1}\begin{vmatrix}
0  &1  &0  \\
0  &0  &1  \\
1  &0  &0
\end{vmatrix}\,\right]  \\
&\quad
+t_{1,3}\cdot \left[t_{2,1}t_{3,2}\begin{vmatrix}
0  &0  &1  \\
1  &0  &0  \\
0  &1  &0
\end{vmatrix}
+t_{2,2}t_{3,1}\begin{vmatrix}
0  &0  &1  \\
0  &1  &0  \\
1  &0  &0
\end{vmatrix}\,\right]
\end{array}

and swap rows in the permutation matrices to get this.

\begin{array}{rl}
&=t_{1,1}\cdot \left[t_{2,2}t_{3,3}\begin{vmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &1
\end{vmatrix}
+t_{2,3}t_{3,2}\begin{vmatrix}
1  &0  &0  \\
0  &0  &1  \\
0  &1  &0
\end{vmatrix}\,\right]    \\
&\quad
-t_{1,2}\cdot \left[t_{2,1}t_{3,3}\begin{vmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &1
\end{vmatrix}
+t_{2,3}t_{3,1}\begin{vmatrix}
1  &0  &0  \\
0  &0  &1  \\
0  &1  &0
\end{vmatrix}\,\right]  \\
&\quad
+t_{1,3}\cdot \left[t_{2,1}t_{3,2}\begin{vmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &1
\end{vmatrix}
+t_{2,2}t_{3,1}\begin{vmatrix}
1  &0  &0  \\
0  &0  &1  \\
0  &1  &0
\end{vmatrix}\,\right] 
\end{array}

The point of the swapping (one swap to each of the permutation matrices on the second line and two swaps to each on the third line) is that the three lines simplify to three terms.


=t_{1,1}\cdot \begin{vmatrix}
t_{2,2}  &t_{2,3}  \\
t_{3,2}  &t_{3,3}
\end{vmatrix}
-t_{1,2}\cdot \begin{vmatrix}
t_{2,1}  &t_{2,3}  \\
t_{3,1}  &t_{3,3}
\end{vmatrix}
+t_{1,3}\cdot \begin{vmatrix}
t_{2,1}  &t_{2,2}  \\
t_{3,1}  &t_{3,2}
\end{vmatrix}

The formula given in Theorem 1.5, which generalizes this example, is a recurrence — the determinant is expressed as a combination of determinants. This formula isn't circular because, as here, the determinant is expressed in terms of determinants of matrices of smaller size.

Definition 1.2

For any n \! \times \! n matrix T, the (n-1) \! \times \! (n-1) matrix formed by deleting row i and column j of T is the i,j minor of  T . The i,j cofactor T_{i,j} of T is (-1)^{i+j} times the determinant of the i,j minor of T.

Example 1.3

The 1,2 cofactor of the matrix from Example 1.1 is the negative of the second 2 \! \times \! 2 determinant.


T_{1,2}=
-1\cdot\begin{vmatrix}
t_{2,1}  &t_{2,3}  \\
t_{3,1}  &t_{3,3}
\end{vmatrix}
Example 1.4

Where


T=
\begin{pmatrix}
1  &2  &3  \\
4  &5  &6  \\
7  &8  &9
\end{pmatrix}

these are the  1,2 and  2,2 cofactors.


T_{1,2}=
(-1)^{1+2}\cdot\begin{vmatrix}
4  &6  \\
7  &9
\end{vmatrix}=6
\qquad
T_{2,2}=
(-1)^{2+2}\cdot\begin{vmatrix}
1  &3  \\
7  &9
\end{vmatrix}=-12
Theorem 1.5 (Laplace Expansion of Determinants)

Where  T is an  n \! \times \! n matrix, the determinant can be found by expanding by cofactors on row i or column j.

\begin{array}{rl}
\left|T\right|
&=t_{i,1}\cdot T_{i,1}+t_{i,2}\cdot T_{i,2}+\cdots+t_{i,n}\cdot T_{i,n}  \\
&=t_{1,j}\cdot T_{1,j}+t_{2,j}\cdot T_{2,j}+\cdots+t_{n,j}\cdot T_{n,j}
\end{array}
Proof

Problem 15.

Example 1.6

We can compute the determinant


\left|T\right|=
\begin{vmatrix}
1  &2  &3  \\
4  &5  &6  \\
7  &8  &9
\end{vmatrix}

by expanding along the first row, as in Example 1.1.


\left|T\right|
=1\cdot(+1)\begin{vmatrix}
5  &6  \\
8  &9
\end{vmatrix}
+2\cdot(-1)\begin{vmatrix}
4  &6  \\
7  &9
\end{vmatrix}
+3\cdot(+1)\begin{vmatrix}
4  &5  \\
7  &8
\end{vmatrix}     
=-3+12-9     
=0

Alternatively, we can expand down the second column.


\left|T\right|
=2\cdot(-1)\begin{vmatrix}
4  &6  \\
7  &9
\end{vmatrix}
+5\cdot(+1)\begin{vmatrix}
1  &3  \\
7  &9
\end{vmatrix}
+8\cdot(-1)\begin{vmatrix}
1  &3  \\
4  &6
\end{vmatrix}     
=12-60+48   
=0
Example 1.7

A row or column with many zeroes suggests a Laplace expansion.


\begin{vmatrix}
1 &5  &0  \\
2 &1  &1  \\
3 &-1 &0
\end{vmatrix}
=
0\cdot(+1)\begin{vmatrix}
2  &1  \\
3  &-1
\end{vmatrix}+
1\cdot(-1)\begin{vmatrix}
1  &5  \\
3  &-1
\end{vmatrix}+
0\cdot(+1)\begin{vmatrix}
1  &5  \\
2  &1
\end{vmatrix}
=16
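
The recurrence in Theorem 1.5 also translates directly into a short recursive routine. Here is a Python sketch (ours; it always expands along the first row) that reproduces the two examples above.

def laplace_det(T):
    # cofactor expansion along the first row, as in Theorem 1.5
    n = len(T)
    if n == 1:
        return T[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in T[1:]]   # delete row 1 and column j+1
        total += (-1) ** j * T[0][j] * laplace_det(minor)  # (-1)**j is (-1)**(1+(j+1))
    return total

print(laplace_det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))    # 0, as in Example 1.6
print(laplace_det([[1, 5, 0], [2, 1, 1], [3, -1, 0]]))   # 16, as in Example 1.7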

We finish by applying this result to derive a new formula for the inverse of a matrix. With Theorem 1.5, the determinant of an  n \! \times \! n matrix  T can be calculated by taking linear combinations of entries from a row and their associated cofactors.


t_{i,1}\cdot T_{i,1}+t_{i,2}\cdot T_{i,2}+\dots+t_{i,n}\cdot T_{i,n}
=\left|T\right| \qquad (*)

Recall that a matrix with two identical rows has a zero determinant. Thus, for any matrix  T , weighing the cofactors by entries from the "wrong" row — row k with k\neq i — gives zero


t_{i,1}\cdot T_{k,1}+t_{i,2}\cdot T_{k,2}+\dots+t_{i,n}\cdot T_{k,n}=0
\qquad (**)

because it represents the expansion along row  k of the matrix obtained from  T by replacing row  k with a copy of row  i , so that matrix has two equal rows. This equation summarizes (*) and (**).


\left(
\begin{array}{cccc}
t_{1,1}  &t_{1,2}  &\ldots  &t_{1,n}\\ 
t_{2,1}  &t_{2,2}  &\ldots  &t_{2,n}  \\ 
&\vdots\\ 
t_{n,1} &t_{n,2} &\ldots  &t_{n,n}
\end{array}
\right)
\begin{pmatrix}
T_{1,1}  &T_{2,1}  &\ldots  &T_{n,1}  \\
T_{1,2}  &T_{2,2}  &\ldots  &T_{n,2}  \\
&\vdots   &        &         \\
T_{1,n}  &T_{2,n}  &\ldots  &T_{n,n}
\end{pmatrix}                                  
=\begin{pmatrix}
|T|      &0        &\ldots  &0        \\
0        &|T|      &\ldots  &0        \\
&\vdots   &        &         \\
0        &0        &\ldots  &|T|
\end{pmatrix}

Note that the order of the subscripts in the matrix of cofactors is opposite to the order of subscripts in the other matrix; e.g., along the first row of the matrix of cofactors the subscripts are 1,1 then 2,1, etc.

Definition 1.8

The matrix adjoint to the square matrix  T is


\text{adj}\,(T)=
\begin{pmatrix}
T_{1,1}  &T_{2,1}  &\ldots  &T_{n,1}  \\
T_{1,2}  &T_{2,2}  &\ldots  &T_{n,2}  \\
&\vdots   &        &         \\
T_{1,n}  &T_{2,n}  &\ldots  &T_{n,n}
\end{pmatrix}

where  T_{j,i} is the  j,i cofactor.

Theorem 1.9

Where  T is a square matrix, T\cdot \text{adj}\,(T)=\text{adj}\,(T)\cdot T=\left|T\right|\cdot I.

Proof

Equations (*) and (**).

Example 1.10

If


T=\begin{pmatrix}
1  &0  &4  \\
2  &1  &-1 \\
1  &0  &1
\end{pmatrix}

then the adjoint \text{adj}\,(T) is


\begin{pmatrix}
T_{1,1}  &T_{2,1}  &T_{3,1} \\
T_{1,2}  &T_{2,2}  &T_{3,2} \\
T_{1,3}  &T_{2,3}  &T_{3,3}   
\end{pmatrix}
\!\!=\!\!\begin{pmatrix}
\begin{vmatrix}
1  &-1 \\
0  &1
\end{vmatrix}
&-\begin{vmatrix}
0  &4  \\
0  &1
\end{vmatrix}
&\begin{vmatrix}
0  &4  \\
1  &-1
\end{vmatrix}             \\
-\begin{vmatrix}
2  &-1 \\
1  &1
\end{vmatrix}
&\begin{vmatrix}
1  &4  \\
1  &1
\end{vmatrix}
&-\begin{vmatrix}
1  &4  \\
2  &-1
\end{vmatrix}            \\
\begin{vmatrix}
2  &1  \\
1  &0
\end{vmatrix}
&-\begin{vmatrix}
1  &0  \\
1  &0
\end{vmatrix}
&\begin{vmatrix}
1  &0  \\
2  &1
\end{vmatrix}
\end{pmatrix}
\!\!=\!                 
\begin{pmatrix}
1  &0  &-4  \\
-3 &-3 &9  \\
-1 &0  &1
\end{pmatrix}

and taking the product with T gives the diagonal matrix \left|T\right|\cdot I.


\begin{pmatrix}
1  &0  &4  \\
2  &1  &-1 \\
1  &0  &1
\end{pmatrix}
\begin{pmatrix}
1  &0  &-4  \\
-3 &-3 &9  \\
-1 &0  &1
\end{pmatrix}         
=\begin{pmatrix}
-3  &0  &0  \\
0  &-3 &0  \\
0  &0  &-3
\end{pmatrix}
Corollary 1.11

If  \left|T\right|\neq 0 then T^{-1}=(1/\left|T\right|)\cdot\text{adj}\,(T).

Example 1.12

The inverse of the matrix from Example 1.10 is (1/-3)\cdot\text{adj}\,(T).


T^{-1}
=\begin{pmatrix}  
1/-3  &0/-3  &-4/-3  \\
-3/-3  &-3/-3 &9/-3   \\
-1/-3  &0/-3  &1/-3
\end{pmatrix}
=\begin{pmatrix}
-1/3  &0  &4/3  \\
1    &1  &-3   \\
1/3  &0  &-1/3
\end{pmatrix}

The formulas from this section are often used for by-hand calculation and are sometimes useful with special types of matrices. However, they are not the best choice for computation with arbitrary matrices because they require more arithmetic than, for instance, the Gauss-Jordan method.
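
Still, for the small matrices where these formulas are used, they are easy to put into code. This Python sketch (ours) builds the adjoint from cofactors as in Definition 1.8 and then applies Corollary 1.11, using exact fractions so that the output matches a by-hand answer.

from fractions import Fraction

def det(T):
    # determinant by cofactor expansion along the first row
    if len(T) == 1:
        return T[0][0]
    return sum((-1) ** j * T[0][j]
               * det([row[:j] + row[j + 1:] for row in T[1:]])
               for j in range(len(T)))

def adjoint(T):
    # the i,j entry of adj(T) is the j,i cofactor of T (Definition 1.8)
    n = len(T)
    def cofactor(r, c):
        minor = [row[:c] + row[c + 1:] for k, row in enumerate(T) if k != r]
        return (-1) ** (r + c) * det(minor)
    return [[cofactor(j, i) for j in range(n)] for i in range(n)]

def inverse(T):
    # Corollary 1.11: T^{-1} = (1/|T|) times adj(T), provided |T| is nonzero
    d = det(T)
    return [[Fraction(entry, d) for entry in row] for row in adjoint(T)]

T = [[1, 0, 4], [2, 1, -1], [1, 0, 1]]   # the matrix of Example 1.10
print(adjoint(T))    # [[1, 0, -4], [-3, -3, 9], [-1, 0, 1]]
print(inverse(T))    # the matrix of Example 1.12, with entries -1/3, 0, 4/3, ...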

Exercises

This exercise is recommended for all readers.
Problem 1

Find the cofactor.


T=\begin{pmatrix}
1  &0  &2  \\
-1  &1  &3  \\
0  &2  &-1
\end{pmatrix}
  1.  T_{2,3}
  2.  T_{3,2}
  3.  T_{1,3}
This exercise is recommended for all readers.
Problem 2

Find the determinant by expanding


\begin{vmatrix}
3  &0  &1  \\
1  &2  &2  \\
-1  &3  &0
\end{vmatrix}
  1. on the first row
  2. on the second row
  3. on the third column.
Problem 3

Find the adjoint of the matrix in Example 1.6.

This exercise is recommended for all readers.
Problem 4

Find the matrix adjoint to each.

  1.  \begin{pmatrix}
2   &1  &4  \\
-1   &0  &2  \\
1   &0  &1
\end{pmatrix}
  2.  \begin{pmatrix}
3  &-1  \\
2  &4
\end{pmatrix}
  3.  \begin{pmatrix}
1   &1  \\
5   &0
\end{pmatrix}
  4.  \begin{pmatrix}
1   &4  &3  \\
-1   &0  &3  \\
1   &8  &9
\end{pmatrix}
This exercise is recommended for all readers.
Problem 5

Find the inverse of each matrix in the prior question with Theorem 1.9.

Problem 6

Find the matrix adjoint to this one.


\begin{pmatrix}
2  &1  &0  &0  \\
1  &2  &1  &0  \\
0  &1  &2  &1  \\
0  &0  &1  &2
\end{pmatrix}
This exercise is recommended for all readers.
Problem 7

Expand across the first row to derive the formula for the determinant of a  2 \! \times \! 2  matrix.

This exercise is recommended for all readers.
Problem 8

Expand across the first row to derive the formula for the determinant of a  3 \! \times \! 3 matrix.

This exercise is recommended for all readers.
Problem 9
  1. Give a formula for the adjoint of a  2 \! \times \! 2 matrix.
  2. Use it to derive the formula for the inverse.
This exercise is recommended for all readers.
Problem 10

Can we compute a determinant by expanding down the diagonal?

Problem 11

Give a formula for the adjoint of a diagonal matrix.

This exercise is recommended for all readers.
Problem 12

Prove that the transpose of the adjoint is the adjoint of the transpose.

Problem 13

Prove or disprove:  \text{adj}\,(\text{adj}\,(T))=T .

Problem 14

A square matrix is upper triangular if each  i,j entry is zero in the part below the diagonal, that is, when  i>j .

  1. Must the adjoint of an upper triangular matrix be upper triangular? Lower triangular?
  2. Prove that the inverse of an upper triangular matrix is upper triangular, if an inverse exists.
Problem 15

This question requires material from the optional Determinants Exist subsection. Prove Theorem 1.5 by using the permutation expansion.

Problem 16

Prove that the determinant of a matrix equals the determinant of its transpose using Laplace's expansion and induction on the size of the matrix.

? Problem 17

Show that


F_n=
\begin{vmatrix}
1  &-1  &1  &-1  &1  &-1  &\ldots  \\
1  &1   &0  &1   &0  &1   &\ldots  \\
0  &1   &1  &0   &1  &0   &\ldots  \\
0  &0   &1  &1   &0  &1   &\ldots  \\
.  &.   &.  &.   &.  &.   &\ldots
\end{vmatrix}

where  F_n is the  n -th term of  1,1,2,3,5,\dots,x,y,x+y,\ldots\, , the Fibonacci sequence, and the determinant is of order  n-1 . (Walter & Tytun 1949)


Topic: Cramer's Rule

We have introduced determinant functions algebraically by looking for a formula to decide whether a matrix is nonsingular. After that introduction we saw a geometric interpretation, that the determinant function gives the size of the box with sides formed by the columns of the matrix. This Topic makes a connection between the two views.

First, a linear system


\begin{array}{*{2}{rc}r}
x_1  &+  &2x_2  &=  &6  \\
3x_1  &+  &x_2   &=  &8 
\end{array}

is equivalent to a linear relationship among vectors.


x_1\cdot\begin{pmatrix} 1 \\ 3 \end{pmatrix}+x_2\cdot\begin{pmatrix} 2 \\ 1 \end{pmatrix}=\begin{pmatrix} 6 \\ 8 \end{pmatrix}

The picture below shows a parallelogram with sides formed from \binom{1}{3} and \binom{2}{1} nested inside a parallelogram with sides formed from x_1\binom{1}{3} and x_2\binom{2}{1}.

Linalg nested parallelogram 1.png

So even without determinants we can state the algebraic issue that opened this book, finding the solution of a linear system, in geometric terms: by what factors x_1 and x_2 must we dilate the vectors to expand the small parallelogram to fill the larger one?

However, by employing the geometric significance of determinants we can get something that is not just a restatement, but also gives us a new insight and sometimes allows us to compute answers quickly. Compare the sizes of these shaded boxes.

Linalg nested parallelogram 2.png                  Linalg nested parallelogram 3.png                  Linalg nested parallelogram 4.png

The second is formed from x_1\binom{1}{3} and \binom{2}{1}, and one of the properties of the size function— the determinant— is that its size is therefore  x_1 times the size of the first box. Since the third box is formed from x_1\binom{1}{3}+x_2\binom{2}{1}=\binom{6}{8} and \binom{2}{1}, and the determinant is unchanged by adding x_2 times the second column to the first column, the size of the third box equals that of the second. We have this.


\begin{vmatrix}
6  &2  \\
8  &1
\end{vmatrix}
=
\begin{vmatrix}
x_1\cdot 1  &2  \\
x_1\cdot 3  &1
\end{vmatrix}
=
x_1\cdot \begin{vmatrix}
1  &2  \\
3  &1
\end{vmatrix}

Solving gives the value of one of the variables.


x_1=
\frac{\begin{vmatrix}
6  &2  \\
8  &1
\end{vmatrix} }{
\begin{vmatrix}
1  &2  \\
3  &1
\end{vmatrix}  }
=\frac{-10}{-5}=2

The theorem that generalizes this example, Cramer's Rule, is: if  \left|A\right|\neq 0 then the system  A\vec{x}=\vec{b} has the unique solution 
x_i=\left|B_i\right|/\left|A\right|
where the matrix B_i is formed from A by replacing column i with the vector  \vec{b} . Problem 3 asks for a proof.

For instance, to solve this system for  x_2


\begin{pmatrix}
1  &0  &4  \\
2  &1  &-1 \\
1  &0  &1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=\begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}

we do this computation.


x_2=
\frac{ \begin{vmatrix}
1  &2  &4  \\
2  &1  &-1 \\
1  &-1 &1
\end{vmatrix}  }{
\begin{vmatrix}
1  &0  &4  \\
2  &1  &-1 \\
1  &0  &1
\end{vmatrix}  }
=\frac{-18}{-3}

Cramer's Rule allows us to solve many two equations/two unknowns systems by eye. It is also sometimes used for three equations/three unknowns systems. But computing large determinants takes a long time, so solving large systems by Cramer's Rule is not practical.
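
For those systems that are small enough for the rule to be practical, it is also simple to program. This Python sketch (ours) forms each B_i by replacing a column of A with \vec{b} and divides the two determinants; the det routine is a first-row cofactor expansion.

def det(A):
    # determinant by cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    # x_i = |B_i| / |A|, where B_i is A with column i replaced by b
    d = det(A)
    return [det([row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]) / d
            for i in range(len(A))]

A = [[1, 0, 4], [2, 1, -1], [1, 0, 1]]
b = [2, 1, -1]
print(cramer(A, b))   # [-2.0, 6.0, 1.0]; the middle value is the x_2 found above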

Exercises

Problem 1

Use Cramer's Rule to solve each for each of the variables.

  1. \begin{array}{*{2}{rc}r}
x  &- &y  &=  &4  \\
-x  &+ &2y &=  &-7
\end{array}
  2. \begin{array}{*{2}{rc}r}
-2x  &+  &y  &=  &-2 \\
x  &-  &2y &=  &-2  
\end{array}
Problem 2

Use Cramer's Rule to solve this system for  z .


\begin{array}{*{4}{rc}r}
2x  &+  &y  &+  &z  &=  &1 \\
3x  &   &   &+  &z  &=  &4 \\
x  &-  &y  &-  &z  &=  &2
\end{array}
Problem 3

Prove Cramer's Rule.

Problem 4

Suppose that a linear system has as many equations as unknowns, that all of its coefficients and constants are integers, and that its matrix of coefficients has determinant  1 . Prove that the entries in the solution are all integers. (Remark. This is often used to invent linear systems for exercises. If an instructor makes the linear system with this property then the solution is not some disagreeable fraction.)

Problem 5

Use Cramer's Rule to give a formula for the solution of a two equations/two unknowns linear system.

Problem 6

Can Cramer's Rule tell the difference between a system with no solutions and one with infinitely many?

Problem 7

The first picture in this Topic (the one that doesn't use determinants) shows a unique solution case. Produce a similar picture for the case of infinitely many solutions, and the case of no solutions.


Topic: Speed of Calculating Determinants

The permutation expansion formula for computing determinants is useful for proving theorems, but the method of using row operations is much better for finding the determinant of a large matrix. We can make this statement precise by considering, as computer algorithm designers do, the number of arithmetic operations that each method uses.

The speed of an algorithm is measured by finding how the time taken by the computer grows as the size of its input data set grows. For instance, how much longer will the algorithm take if we increase the size of the input data by a factor of ten, from a 1000 row matrix to a 10,000 row matrix or from 10,000 to 100,000? Does the time taken grow by a factor of ten, or by a factor of a hundred, or by a factor of a thousand? That is, is the time taken by the algorithm proportional to the size of the data set, or to the square of that size, or to the cube of that size, etc.?

Recall the permutation expansion formula for determinants.

\begin{array}{rl}
\begin{vmatrix}
t_{1,1}  &t_{1,2}  &\ldots  &t_{1,n}  \\
t_{2,1}  &t_{2,2}  &\ldots  &t_{2,n}  \\
&\vdots                      \\
t_{n,1}  &t_{n,2}  &\ldots  &t_{n,n}
\end{vmatrix}
&=\displaystyle\sum_{\text{permutations }\phi}\!\!\!\!
t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)}
\left|P_{\phi}\right|   \\
&=
\begin{align}
&t_{1,\phi_1(1)}\cdot t_{2,\phi_1(2)}\cdots
t_{n,\phi_1(n)}\left|P_{\phi_1}\right|       \\  
&\quad
+t_{1,\phi_2(1)}\cdot t_{2,\phi_2(2)}\cdots
t_{n,\phi_2(n)}\left|P_{\phi_2}\right|       \\
&\quad\vdots             \\
&\quad
+t_{1,\phi_k(1)}\cdot t_{2,\phi_k(2)}\cdots
t_{n,\phi_k(n)}\left|P_{\phi_k}\right| 
\end{align}
\end{array}

There are n!=n\cdot(n-1)\cdot(n-2)\cdots 2\cdot 1 different  n -permutations. For numbers n of any size at all, this is a large value; for instance, even if n is only 10 then the expansion has 10!=3,628,800 terms, all of which are obtained by multiplying n entries together. This is a very large number of multiplications (for instance, (Knuth 1988) suggests 10! steps as a rough boundary for the limit of practical calculation). The factorial function grows faster than the square function. It grows faster than the cube function, the fourth power function, or any polynomial function. (One way to see that the factorial function grows faster than the square is to note that multiplying the first two factors in n! gives n\cdot(n-1), which for large n is approximately n^2, and then multiplying in more factors will make it even larger. The same argument works for the cube function, etc.) So a computer that is programmed to use the permutation expansion formula, and thus to perform a number of operations that is greater than or equal to the factorial of the number of rows, would take very long times as its input data set grows.

In contrast, the time taken by the row reduction method does not grow so fast. This fragment of row-reduction code is in the computer language FORTRAN. The matrix is stored in the N \! \times \! N array A. For each ROW between 1 and N parts of the program not shown here have already found the pivot entry A(ROW,COL). Now the program does a row pivot.


-PIVINV\cdot \rho_{ROW}
+\rho_i

(This code fragment is for illustration only and is incomplete. Still, analysis of a finished version that includes all of the tests and subcases is messier but gives essentially the same conclusion.)

PIVINV=1.0/A(ROW,COL)
DO 10 I=ROW+1, N
DO 20 J=I, N
A(I,J)=A(I,J)-PIVINV*A(ROW,J)
20 CONTINUE
10 CONTINUE

The outermost loop (not shown) runs through N-1 rows. For each row, the nested I and J loops shown perform arithmetic on the entries in A that are below and to the right of the pivot entry. Assume that the pivot is found in the expected place, that is, that COL=ROW. Then there are (N-ROW)^2 entries below and to the right of the pivot. On average, ROW will be N/2. Thus, we estimate that the arithmetic will be performed about (N/2)^2 times, that is, will run in a time proportional to the square of the number of equations. Taking into account the outer loop that is not shown, we get the estimate that the running time of the algorithm is proportional to the cube of the number of equations.
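
To see the growth rate concretely, this Python sketch (ours) mirrors the loops of the fragment and simply counts the multiplications and divisions performed for an  N \! \times \! N input; doubling  N multiplies the count by roughly eight, the behavior of a cubic.

def count_operations(N):
    # count multiplications and divisions in the fragment's loops,
    # assuming the pivot is always found on the diagonal (COL = ROW)
    ops = 0
    for row in range(N - 1):             # the outer loop over the pivot rows
        ops += 1                         # PIVINV = 1.0 / A(ROW, COL)
        for i in range(row + 1, N):      # DO 10 I = ROW+1, N
            for j in range(i, N):        # DO 20 J = I, N
                ops += 1                 # one multiplication per updated entry
    return ops

for N in (10, 20, 40, 80):
    print(N, count_operations(N))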

Finding the fastest algorithm to compute the determinant is a topic of current research. Algorithms are known that run in time between the second and third power.

Speed estimates like these help us to understand how quickly or slowly an algorithm will run. Algorithms that run in time proportional to the size of the data set are fast, algorithms that run in time proportional to the square of the size of the data set are less fast, but typically quite usable, and algorithms that run in time proportional to the cube of the size of the data set are still reasonable in speed for not-too-big input data. However, algorithms that run in time (greater than or equal to) the factorial of the size of the data set are not practical for input of any appreciable size.

There are other methods besides the two discussed here that are also used for computation of determinants. Those lie outside of our scope. Nonetheless, this contrast of the two methods for computing determinants makes the point that although in principle they give the same answer, in practice the idea is to select the one that is fast.

Exercises

Most of these problems presume access to a computer.

Problem 1

Computer systems generate random numbers (of course, these are only pseudo-random, in that they are generated by an algorithm, but they pass a number of reasonable statistical tests for randomness).

  1. Fill a 5 \! \times \! 5 array with random numbers (say, in the range [0..1)). See if it is singular. Repeat that experiment a few times. Are singular matrices frequent or rare (in this sense)?
  2. Time your computer algebra system at finding the determinant of ten 5 \! \times \! 5 arrays of random numbers. Find the average time per array. Repeat the prior item for 15 \! \times \! 15 arrays, 25 \! \times \! 25 arrays, and 35 \! \times \! 35 arrays. (Notice that, when an array is singular, it can sometimes be found to be so quite quickly, for instance if the first row equals the second. In the light of your answer to the first part, do you expect that singular systems play a large role in your average?)
  3. Graph the input size versus the average time.
Problem 2

Compute the determinant of each of these by hand using the two methods discussed above.

  1. \begin{vmatrix}
2  &1  \\
5  &-3
\end{vmatrix}
  2. \begin{vmatrix}
3  &1  &1  \\
-1  &0  &5  \\
-1  &2  &-2 
\end{vmatrix}
  3. \begin{vmatrix}
2  &1  &0  &0  \\
1  &3  &2  &0  \\
0  &-1 &-2 &1  \\
0  &0  &-2 &1
\end{vmatrix}

Count the number of multiplications and divisions used in each case, for each of the methods. (On a computer, multiplications and divisions take much longer than additions and subtractions, so algorithm designers worry about them more.)

Problem 3

What 10 \! \times \! 10 array can you invent that takes your computer system the longest to reduce? The shortest?

Problem 4

Write the rest of the FORTRAN program to do a straightforward implementation of calculating determinants via Gauss' method. (Don't test for a zero pivot.) Compare the speed of your code to that used in your computer algebra system.

Problem 5

The FORTRAN language specification requires that arrays be stored "by column", that is, the entire first column is stored contiguously, then the second column, etc. Does the code fragment given take advantage of this, or can it be rewritten to make it faster, by taking advantage of the fact that computer fetches are faster from contiguous locations?


Topic: Projective Geometry

There are geometries other than the familiar Euclidean one. One such geometry arose in art, where it was observed that what a viewer sees is not necessarily what is there. This is Leonardo da Vinci's The Last Supper.

Última Cena - Da Vinci 5.jpg

What is there in the room, for instance where the ceiling meets the left and right walls, are lines that are parallel. However, what a viewer sees is lines that, if extended, would intersect. The intersection point is called the vanishing point. This aspect of perspective is also familiar as the image of a long stretch of railroad tracks that appear to converge at the horizon.

To depict the room, da Vinci has adopted a model of how we see, of how we project the three dimensional scene to a two dimensional image. This model is only a first approximation — it does not take into account that our retina is curved and our lens bends the light, that we have binocular vision, or that our brain's processing greatly affects what we see — but nonetheless it is interesting, both artistically and mathematically.

The projection is not orthogonal, it is a central projection from a single point, to the plane of the canvas.

Linalg central projection 1.png

(It is not an orthogonal projection since the line from the viewer to C is not orthogonal to the image plane.) As the picture suggests, the operation of central projection preserves some geometric properties — lines project to lines. However, it fails to preserve some others — equal length segments can project to segments of unequal length; the length of AB is greater than the length of BC because the segment projected to AB is closer to the viewer and closer things look bigger. The study of the effects of central projections is projective geometry. We will see how linear algebra can be used in this study.

There are three cases of central projection. The first is the projection done by a movie projector.

Linalg central projection 2.png

We can think that each source point is "pushed" from the domain plane outward to the image point in the codomain plane. This case of projection has a somewhat different character than the second case, that of the artist "pulling" the source back to the canvas.

Linalg central projection 3.png

In the first case S is in the middle while in the second case I is in the middle. One more configuration is possible, with P in the middle. An example of this is when we use a pinhole to shine the image of a solar eclipse onto a piece of paper.

Linalg central projection 4.png

We shall take each of the three to be a central projection by P of S to I.

Consider again the effect of railroad tracks that appear to converge to a point. We model this with parallel lines in a domain plane S and a projection via a P to a codomain plane I. (The gray lines are parallel to S and I.)

Linalg railroad perspective 1.png

All three projection cases appear here. The first picture below shows P acting like a movie projector by pushing points from part of S out to image points on the lower half of I. The middle picture shows P acting like the artist by pulling points from another part of S back to image points in the middle of I. In the third picture, P acts like the pinhole, projecting points from S to the upper part of I. This picture is the trickiest— the points that are projected near to the vanishing point are the ones that are far out on the bottom left of S. Points in S that are near to the vertical gray line are sent high up on I.

Linalg railroad perspective 2.png                  Linalg railroad perspective 3.png                  Linalg railroad perspective 4.png

There are two awkward things about this situation. The first is that neither of the two points in the domain nearest to the vertical gray line (see below) has an image because a projection from those two is along the gray line that is parallel to the codomain plane (we sometimes say that these two are projected "to infinity"). The second awkward thing is that the vanishing point in I isn't the image of any point from S because a projection to this point would be along the gray line that is parallel to the domain plane (we sometimes say that the vanishing point is the image of a projection "from infinity").

Linalg railroad perspective 5.png

For a better model, put the projector P at the origin. Imagine that P is covered by a glass hemispheric dome. As P looks outward, anything in the line of vision is projected to the same spot on the dome. This includes things on the line between P and the dome, as in the case of projection by the movie projector. It includes things on the line further from P than the dome, as in the case of projection by the painter. It also includes things on the line that lie behind P, as in the case of projection by a pinhole.

Linalg projective plane perspective 1.png

From this perspective P, all of the spots on the line are seen as the same point. Accordingly, for any nonzero vector \vec{v}\in\mathbb{R}^3, we define the associated point v in the projective plane to be the set \{k\vec{v}\,\big|\, k\in\mathbb{R}\text{ and }k\neq 0\} of nonzero vectors lying on the same line through the origin as \vec{v}. To describe a projective point we can give any representative member of the line, so that the projective point shown above can be represented in any of these three ways.


\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
\qquad
\begin{pmatrix} 1/3 \\ 2/3 \\ 1 \end{pmatrix}
\qquad
\begin{pmatrix} -2 \\ -4 \\ -6 \end{pmatrix}

Each of these is a homogeneous coordinate vector for v.

This picture, and the above definition that arises from it, clarifies the description of central projection but there is something awkward about the dome model: what if the viewer looks down? If we draw P's line of sight so that the part coming toward us, out of the page, goes down below the dome then we can trace the line of sight backward, up past P and toward the part of the hemisphere that is behind the page. So in the dome model, looking down gives a projective point that is behind the viewer. Therefore, if the viewer in the picture above drops the line of sight toward the bottom of the dome then the projective point drops also and as the line of sight continues down past the equator, the projective point suddenly shifts from the front of the dome to the back of the dome. This discontinuity in the drawing means that we often have to treat equatorial points as a separate case. That is, while the railroad track discussion of central projection has three cases, the dome model has two.

We can do better than this. Consider a sphere centered at the origin. Any line through the origin intersects the sphere in two spots, which are said to be antipodal. Because we associate each line through the origin with a point in the projective plane, we can draw such a point as a pair of antipodal spots on the sphere. Below, the two antipodal spots are shown connected by a dashed line to emphasize that they are not two different points, the pair of spots together make one projective point.

Linalg antipodal.png

While drawing a point as a pair of antipodal spots is not as natural as the one-spot-per-point dome model, the awkwardness of the dome model is gone: as a line of view slides from north to south, no sudden changes happen in the picture. This model of central projection is uniform; the three cases are reduced to one.

So far we have described points in projective geometry. What about lines? What a viewer at the origin sees as a line is shown below as a great circle, the intersection of the model sphere with a plane through the origin.

Linalg great circle.png

(One of the projective points on this line is shown to bring out a subtlety. Because two antipodal spots together make up a single projective point, the great circle's behind-the-paper part is the same set of projective points as its in-front-of-the-paper part.) Just as we did with each projective point, we will also describe a projective line with a triple of reals. For instance, the members of this plane through the origin in \mathbb{R}^3


\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, x+y-z=0\}

project to a line that we can describe with the triple \begin{pmatrix} 1 &1 &-1 \end{pmatrix} (we use row vectors to typographically distinguish lines from points). In general, for any nonzero three-wide row vector {\vec{L}} we define the associated line in the projective plane to be the set L=\{k\vec{L}\,\big|\, k\in\mathbb{R}\text{ and }k\neq 0\} of nonzero multiples of {\vec{L}}.

The reason that this description of a line as a triple is convenient is that in the projective plane, a point v and a line L are incident — the point lies on the line, the line passes through the point — if and only if a dot product of their representatives v_1L_1+v_2L_2+v_3L_3 is zero (Problem 4 shows that this is independent of the choice of representatives {\vec{v}} and {\vec{L}}). For instance, the projective point described above by the column vector with components 1, 2, and 3 lies in the projective line described by \begin{pmatrix} 1 &1 &-1 \end{pmatrix}, simply because any vector in \mathbb{R}^3 whose components are in ratio 1\mathbin :2\mathbin :3 lies in the plane through the origin whose equation is of the form 1k\cdot x+1k\cdot y-1k\cdot z=0 for any nonzero k. That is, the incidence formula is inherited from the three-space lines and planes of which v and L are projections.

Thus, we can do analytic projective geometry. For instance, the projective line L=\begin{pmatrix} 1 &1 &-1 \end{pmatrix} has the equation 1v_1+1v_2-1v_3=0, because points incident on the line are characterized by having the property that their representatives satisfy this equation. One difference from familiar Euclidean analytic geometry is that in projective geometry we talk about the equation of a point. For a fixed point like


v=\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}

the property that characterizes lines through this point (that is, lines incident on this point) is that the components of any representatives satisfy 1L_1+2L_2+3L_3=0 and so this is the equation of v.
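
For readers who like to check such computations with software, here is a small sketch in Python using the NumPy library (the particular point and line triples are the ones discussed above). It verifies that the dot product incidence test gives zero, and that rescaling the representatives does not change the answer.

import numpy as np

# Homogeneous coordinate vectors for the projective point v and line L above.
v = np.array([1, 2, 3])
L = np.array([1, 1, -1])

# Incidence: the dot product of any pair of representatives is zero.
print(np.dot(v, L))            # 0, so v lies on L

# Scaling either representative by a nonzero constant does not change the test.
print(np.dot(5 * v, -2 * L))   # still 0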

This symmetry of the statements about lines and points brings up the Duality Principle of projective geometry: in any true statement, interchanging "point" with "line" results in another true statement. For example, just as two distinct points determine one and only one line, in the projective plane, two distinct lines determine one and only one point. Here is a picture showing two lines that cross in antipodal spots and thus cross at one projective point.

Linalg intersecting projective lines.png                                (*)

Contrast this with Euclidean geometry, where two distinct lines may have a unique intersection or may be parallel. In this way, projective geometry is simpler, more uniform, than Euclidean geometry.

That simplicity is relevant because there is a relationship between the two spaces: the projective plane can be viewed as an extension of the Euclidean plane. Take the sphere model of the projective plane to be the unit sphere in \mathbb{R}^3 and take Euclidean space to be the plane z=1. This gives us a way of viewing some points in projective space as corresponding to points in Euclidean space, because all of the points on the plane are projections of antipodal spots from the sphere.

Linalg euclidean and projective planes.png                                (**)

Note though that projective points on the equator don't project up to the plane. Instead, these project "out to infinity". We can thus think of projective space as consisting of the Euclidean plane with some extra points adjoined — the Euclidean plane is embedded in the projective plane. These extra points, the equatorial points, are the ideal points or points at infinity and the equator is the ideal line or line at infinity (note that it is not a Euclidean line, it is a projective line).
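
The correspondence can also be sketched in code. In the Python fragment below (using NumPy; the helper name to_euclidean is ours, chosen only for this illustration), dividing a representative by its third component gives the corresponding spot on the plane z=1; antipodal representatives give the same spot, and equatorial representatives have no image.

import numpy as np

def to_euclidean(v):
    """Map a homogeneous coordinate vector to its point on the plane z = 1.
    Equatorial representatives (third component zero) are ideal points with no image."""
    v = np.asarray(v, dtype=float)
    if np.isclose(v[2], 0):
        return None            # an ideal point, "at infinity"
    return v / v[2]

print(to_euclidean([1, 2, 3]))      # [0.333... 0.666... 1.]
print(to_euclidean([-2, -4, -6]))   # the same spot: antipodal representatives agree
print(to_euclidean([1, 1, 0]))      # None: an ideal point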

The advantage of the extension to the projective plane is that some of the awkwardness of Euclidean geometry disappears. For instance, the projective lines shown above in (*) cross at antipodal spots, a single projective point, on the sphere's equator. If we put those lines into (**) then they correspond to Euclidean lines that are parallel. That is, in moving from the Euclidean plane to the projective plane, we move from having two cases, that lines either intersect or are parallel, to having only one case, that lines intersect (possibly at a point at infinity).

The projective case is nicer in many ways than the Euclidean case but has the problem that we don't have the same experience or intuitions with it. That's one advantage of doing analytic geometry, where the equations can lead us to the right conclusions. Analytic projective geometry uses linear algebra. For instance, for three points of the projective plane t, u, and v, setting up the equations for those points by fixing vectors representing each, shows that the three are collinear — incident in a single line — if and only if the resulting three-equation system has infinitely many row vector solutions representing that line. That, in turn, holds if and only if this determinant is zero.


\begin{vmatrix}
t_1  &u_1  &v_1  \\
t_2  &u_2  &v_2  \\
t_3  &u_3  &v_3
\end{vmatrix}

Thus, three points in the projective plane are collinear if and only if any three representative column vectors are linearly dependent. Similarly (and illustrating the Duality Principle), three lines in the projective plane are incident on a single point if and only if any three row vectors representing them are linearly dependent.
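
A quick numerical illustration of the determinant test, as a Python/NumPy sketch (the three points are chosen only for this example; the third is a combination of the other two):

import numpy as np

# Three projective points, given by representative column vectors.
t = np.array([1, 2, 3])
u = np.array([4, 5, 6])
v = np.array([7, 8, 9])          # here v = 2u - t, so the three are collinear

# Put the representatives in as columns and test the determinant.
M = np.column_stack([t, u, v])
print(np.isclose(np.linalg.det(M), 0))   # True: the three points are collinear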

The following result is more evidence of the "niceness" of the geometry of the projective plane, compared to the Euclidean case. These two triangles are said to be in perspective from P because each pair of corresponding vertices is collinear with P.

Linalg triangles in perspective.png

Consider the pairs of corresponding sides: the sides T_1U_1 and T_2U_2, the sides T_1V_1 and T_2V_2, and the sides U_1V_1 and U_2V_2. Desargue's Theorem is that when the three pairs of corresponding sides are extended to lines, they intersect (shown here as the point TU, the point TV, and the point UV), and further, those three intersection points are collinear.

Linalg desargue.png

We will prove this theorem, using projective geometry. (These are drawn as Euclidean figures because it is the more familiar image. To consider them as projective figures, we can imagine that, although the line segments shown are parts of great circles and so are curved, the model has such a large radius compared to the size of the figures that the sides appear in this sketch to be straight.)

For this proof, we need a preliminary lemma (Coxeter 1974): if W, X, Y, Z are four points in the projective plane (no three of which are collinear) then there are homogeneous coordinate vectors \vec{w}, \vec{x}, \vec{y}, and \vec{z} for the projective points, and a basis B for \mathbb{R}^3, satisfying this.


{\rm Rep}_{B}(\vec{w})=\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
\quad
{\rm Rep}_{B}(\vec{x})=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
\quad
{\rm Rep}_{B}(\vec{y})=\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
\quad
{\rm Rep}_{B}(\vec{z})=\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}

The proof is straightforward. Because W,\,X,\,Y are not on the same projective line, any homogeneous coordinate vectors \vec{w}_0,\,\vec{x}_0,\,\vec{y}_0 do not lie on the same plane through the origin in \mathbb{R}^3 and so form a spanning set for \mathbb{R}^3. Thus any homogeneous coordinate vector for Z can be written as a combination \vec{z}_0=a\cdot\vec{w}_0+b\cdot\vec{x}_0+c\cdot\vec{y}_0. The scalars a, b, and c are all nonzero because no three of the four points are collinear. Then, we can take \vec{w}=a\cdot\vec{w}_0, \vec{x}=b\cdot\vec{x}_0, \vec{y}=c\cdot\vec{y}_0, and \vec{z}=\vec{z}_0, where the basis is B=\langle \vec{w},\vec{x},\vec{y} \rangle .

Now, to prove Desargue's Theorem, use the lemma to fix homogeneous coordinate vectors and a basis.


{\rm Rep}_{B}(\vec{t}_1)=\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
\quad
{\rm Rep}_{B}(\vec{u}_1)=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
\quad
{\rm Rep}_{B}(\vec{v}_1)=\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
\quad
{\rm Rep}_{B}(\vec{o})=\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}

Because the projective point T_2 is incident on the projective line OT_1, any homogeneous coordinate vector for T_2 lies in the plane through the origin in \mathbb{R}^3 that is spanned by homogeneous coordinate vectors of O and T_1:


{\rm Rep}_{B}(\vec{t}_2)=a\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
+b\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}

for some scalars a and b. That is, the homogeneous coordinate vectors of members T_2 of the line OT_1 are of the form on the left below, and the forms for U_2 and V_2 are similar.


{\rm Rep}_{B}(\vec{t}_2)=\begin{pmatrix} t_2 \\ 1 \\ 1 \end{pmatrix}
\qquad
{\rm Rep}_{B}(\vec{u}_2)=\begin{pmatrix} 1 \\ u_2 \\ 1 \end{pmatrix}
\qquad
{\rm Rep}_{B}(\vec{v}_2)=\begin{pmatrix} 1 \\ 1 \\ v_2 \end{pmatrix}

The projective line T_1U_1 is the image of a plane through the origin in \mathbb{R}^3. A quick way to get its equation is to note that any vector in it is linearly dependent on the vectors for T_1 and U_1 and so this determinant is zero.


\begin{vmatrix}
1  &0  &x  \\
0  &1  &y  \\
0  &0  &z
\end{vmatrix}=0
\qquad
\Longrightarrow
\qquad
z=0

The equation of the plane in \mathbb{R}^3 whose image is the projective line T_2U_2 is this.


\begin{vmatrix}
t_2  &1    &x  \\
1    &u_2  &y  \\
1    &1    &z
\end{vmatrix}=0
\qquad
\Longrightarrow
\qquad
(1-u_2)\cdot x+(1-t_2)\cdot y+(t_2u_2-1)\cdot z=0

Finding the intersection of the two is routine.


T_1U_1\,\cap\, T_2U_2
=\begin{pmatrix} t_2-1 \\ 1-u_2 \\ 0 \end{pmatrix}

(This is, of course, the homogeneous coordinate vector of a projective point.) The other two intersections are similar.


T_1V_1\,\cap\, T_2V_2
=\begin{pmatrix} 1-t_2 \\ 0 \\ v_2-1 \end{pmatrix}
\qquad
U_1V_1\,\cap\, U_2V_2
=\begin{pmatrix} 0 \\ u_2-1 \\ 1-v_2 \end{pmatrix}

The proof is finished by noting that these projective points are on one projective line because the sum of the three homogeneous coordinate vectors is zero.
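
As a spot check of the algebra, the following Python/NumPy sketch plugs arbitrary illustrative values of the parameters t_2, u_2, v_2 (not values from the text) into the three intersection formulas above and confirms that the three homogeneous coordinate vectors sum to zero, so the points are collinear.

import numpy as np

# Arbitrary parameter values for the representatives of T2, U2, V2.
t2, u2, v2 = 2.0, 3.0, 5.0

TU = np.array([t2 - 1, 1 - u2, 0])      # intersection of T1U1 and T2U2
TV = np.array([1 - t2, 0, v2 - 1])      # intersection of T1V1 and T2V2
UV = np.array([0, u2 - 1, 1 - v2])      # intersection of U1V1 and U2V2

# The three homogeneous coordinate vectors sum to zero ...
print(TU + TV + UV)                     # [0. 0. 0.]

# ... so they are linearly dependent and the three points are collinear.
print(np.linalg.matrix_rank(np.column_stack([TU, TV, UV])))   # 2, not 3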

Every projective theorem has a translation to a Euclidean version, although the Euclidean result is often messier to state and prove. Desargue's theorem illustrates this. In the translation to Euclidean space, the case where O lies on the ideal line must be treated separately for then the lines T_1T_2, U_1U_2, and V_1V_2 are parallel.

The parenthetical remark following the statement of Desargue's Theorem suggests thinking of the Euclidean pictures as figures from projective geometry for a model of very large radius. That is, just as a small area of the earth appears flat to people living there, the projective plane is also "locally Euclidean".

Although its local properties are the familiar Euclidean ones, there is a global property of the projective plane that is quite different. The picture below shows a projective point. At that point is drawn an xy-axis. There is something interesting about the way this axis appears at the antipodal ends of the sphere. In the northern hemisphere, where the axes are drawn in black, a right hand put down with fingers on the x-axis will have the thumb point along the y-axis. But the antipodal axis has just the opposite: a right hand placed with its fingers on the x-axis will have the thumb point the wrong way; instead, it is a left hand that works. Briefly, the projective plane is not orientable: in this geometry, left and right handedness are not fixed properties of figures.

Linalg projective plane nonorientablity 1.png

The sequence of pictures below dramatizes this non-orientability. They sketch a trip around this space in the direction of the y part of the xy-axis. (Warning: the trip shown is not halfway around, it is a full circuit. True, if we made this into a movie then we could watch the northern hemisphere spots in the drawing above gradually rotate about halfway around the sphere to the last picture below. And we could watch the southern hemisphere spots in the picture above slide through the south pole and up through the equator to the last picture. But: the spots at either end of the dashed line are the same projective point. We don't need to continue on much further; we are pretty much back to the projective point where we started by the last picture.)

Linalg projective plane nonorientablity 2.png         \Longrightarrow         Linalg projective plane nonorientablity 3.png         \Longrightarrow         Linalg projective plane nonorientablity 4.png

At the end of the circuit, the x part of the xy-axes sticks out in the other direction. Thus, in the projective plane we cannot describe a figure as right- or left-handed (another way to make this point is that we cannot describe a spiral as clockwise or counterclockwise).

This exhibition of the existence of a non-orientable space raises the question of whether our universe is orientable: is it possible for an astronaut to leave right-handed and return left-handed? An excellent nontechnical reference is (Gardner 1990). A classic science fiction story about orientation reversal is (Clarke 1982).

So projective geometry is mathematically interesting, in addition to the natural way in which it arises in art. It is more than just a technical device to shorten some proofs. For an overview, see (Courant & Robbins 1978). The approach we've taken here, the analytic approach, leads to quick theorems and — most importantly for us — illustrates the power of linear algebra (see Hanes (1990), Ryan (1986), and Eggar (1998)). But another approach, the synthetic approach of deriving the results from an axiom system, is both extraordinarily beautiful and is also the historical route of development. Two fine sources for this approach are (Coxeter 1974) or (Seidenberg 1962). An interesting and easy application is (Davies 1990).

Exercises

Problem 1

What is the equation of this point?


\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
Problem 2
  1. Find the line incident on these points in the projective plane.
    
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},\,\begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix}
  2. Find the point incident on both of these projective lines.
    
\begin{pmatrix} 1 &2 &3 \end{pmatrix},\,\begin{pmatrix} 4 &5 &6 \end{pmatrix}
Problem 3

Find the formula for the line incident on two projective points. Find the formula for the point incident on two projective lines.

Problem 4

Prove that the definition of incidence is independent of the choice of the representatives of p and L. That is, if p_1, p_2, p_3, and q_1, q_2, q_3 are two triples of homogeneous coordinates for p, and L_1, L_2, L_3, and M_1, M_2, M_3 are two triples of homogeneous coordinates for L, prove that p_1L_1+p_2L_2+p_3L_3=0 if and only if q_1M_1+q_2M_2+q_3M_3=0.

Problem 5

Give a drawing to show that central projection does not preserve circles, that a circle may project to an ellipse. Can a (non-circular) ellipse project to a circle?

Problem 6

Give the formula for the correspondence between the non-equatorial part of the antipodal model of the projective plane, and the plane z=1.

Problem 7

(Pappus's Theorem) Assume that T_0, U_0, and V_0 are collinear and that T_1, U_1, and V_1 are collinear. Consider these three points: (i) the intersection V_2 of the lines T_0U_1 and T_1U_0, (ii) the intersection U_2 of the lines T_0V_1 and T_1V_0, and (iii) the intersection T_2 of U_0V_1 and U_1V_0.

  1. Draw a (Euclidean) picture.
  2. Apply the lemma used in Desargue's Theorem to get simple homogeneous coordinate vectors for the T's and V_0.
  3. Find the resulting homogeneous coordinate vectors for U's (these must each involve a parameter as, e.g., U_0 could be anywhere on the T_0V_0 line).
  4. Find the resulting homogeneous coordinate vectors for V_1. (Hint: it involves two parameters.)
  5. Find the resulting homogeneous coordinate vectors for V_2. (It also involves two parameters.)
  6. Show that the product of the three parameters is 1.
  7. Verify that V_2 is on the T_2U_2 line.



Chapter V - Similarity

While studying matrix equivalence, we have shown that for any homomorphism there are bases B and D such that the representation matrix has a block partial-identity form.


{\rm Rep}_{B,D}(h)
=
\left(\begin{array}{c|c}
\textit{Identity}  &\textit{Zero}   \\
\hline
\textit{Zero}      &\textit{Zero}
\end{array}\right)

This representation describes the map as sending  c_1\vec{\beta}_1+\dots+c_n\vec{\beta}_n to c_1\vec{\delta}_1+\dots+c_k\vec{\delta}_k+\vec{0}+\dots+\vec{0} , where n is the dimension of the domain and  k is the dimension of the range. So, under this representation the action of the map is easy to understand because most of the matrix entries are zero.

This chapter considers the special case where the domain and the codomain are equal, that is, where the homomorphism is a transformation. In this case we naturally ask to find a single basis  B so that  {\rm Rep}_{B,B}(t) is as simple as possible (we will take "simple" to mean that it has many zeroes). A matrix having the above block partial-identity form is not always possible here. But we will develop a form that comes close, a representation that is nearly diagonal.


Section I - Complex Vector Spaces

This chapter requires that we factor polynomials. Of course, many polynomials do not factor over the real numbers; for instance,  x^2+1 does not factor into the product of two linear polynomials with real coefficients. For that reason, we shall from now on take our scalars from the complex numbers.

That is, we are shifting from studying vector spaces over the real numbers to vector spaces over the complex numbers— in this chapter vector and matrix entries are complex.

Any real number is a complex number and a glance through this chapter shows that most of the examples use only real numbers. Nonetheless, the critical theorems require that the scalars be complex numbers, so the first section below is a quick review of complex numbers.

In this book we are moving to the more general context of taking scalars to be complex only for the pragmatic reason that we must do so in order to develop the representation. We will not go into using other sets of scalars in more detail because it could distract from our goal. However, the idea of taking scalars from a structure other than the real numbers is an interesting one. Delightful presentations taking this approach are in (Halmos 1958) and (Hoffman & Kunze 1971).


1 - Factoring and Complex Numbers: A Review

This subsection is a review only and we take the main results as known. For proofs, see (Birkhoff & MacLane 1965) or (Ebbinghaus 1990).

Just as integers have a division operation— e.g., " 4 goes  5 times into  21 with remainder  1 "— so do polynomials.

Theorem 1.1 (Division Theorem for Polynomials)

Let  c(x) be a polynomial. If  m(x) is a non-zero polynomial then there are quotient and remainder polynomials  q(x) and  r(x) such that


c(x)=m(x)\cdot q(x)+r(x)

where the degree of  r(x) is strictly less than the degree of  m(x) .

In this book constant polynomials, including the zero polynomial, are said to have degree  0 . (This is not the standard definition, but it is convenient here.)

The point of the integer division statement " 4 goes  5 times into  21 with remainder  1 " is that the remainder is less than  4 — while  4 goes  5 times, it does not go  6 times. In the same way, the point of the polynomial division statement is its final clause.

Example 1.2

If  c(x)=2x^3-3x^2+4x and  m(x)=x^2+1 then  q(x)=2x-3 and  r(x)=2x+3 . Note that  r(x) has a lower degree than  m(x) .
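
For readers following along with software, this example can be checked with NumPy's polynomial division routine (coefficients are listed from the highest power down); this is only a numerical sketch of the division, not part of the development.

import numpy as np

c = [2, -3, 4, 0]     # c(x) = 2x^3 - 3x^2 + 4x
m = [1, 0, 1]         # m(x) = x^2 + 1

q, r = np.polydiv(c, m)
print(q)              # [ 2. -3.]   i.e. q(x) = 2x - 3
print(r)              # [ 2.  3.]   i.e. r(x) = 2x + 3, of degree less than deg m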

Corollary 1.3

The remainder when  c(x) is divided by  x-\lambda is the constant polynomial  r(x)=c(\lambda) .

Proof

The remainder must be a constant polynomial because it is of degree less than the divisor  x-\lambda . To determine the constant, take m(x) from the theorem to be x-\lambda and substitute  \lambda for x to get  c(\lambda)=(\lambda-\lambda)\cdot q(\lambda)+r(\lambda) .

If a divisor  m(x) goes into a dividend  c(x) evenly, meaning that  r(x) is the zero polynomial, then  m(x) is a factor of  c(x) . Any root of the factor (any  \lambda\in\mathbb{R} such that  m(\lambda)=0 ) is a root of  c(x) since  c(\lambda)=m(\lambda)\cdot q(\lambda)=0 . The prior corollary immediately yields the following converse.

Corollary 1.4

If  \lambda is a root of the polynomial  c(x) then  x-\lambda divides  c(x) evenly, that is, x-\lambda is a factor of c(x).

Finding the roots and factors of a high-degree polynomial can be hard. But for second-degree polynomials we have the quadratic formula: the roots of  ax^2+bx+c are


\lambda_1=\frac{-b+\sqrt{b^2-4ac}}{2a}
\qquad
\lambda_2=\frac{-b-\sqrt{b^2-4ac}}{2a}

(if the discriminant  b^2-4ac is negative then the polynomial has no real number roots). A polynomial that cannot be factored into two lower-degree polynomials with real number coefficients is irreducible over the reals.

Theorem 1.5

Any constant or linear polynomial is irreducible over the reals. A quadratic polynomial is irreducible over the reals if and only if its discriminant is negative. No cubic or higher-degree polynomial is irreducible over the reals.

Corollary 1.6

Any polynomial with real coefficients can be factored into linear and irreducible quadratic polynomials. That factorization is unique; any two factorizations have the same powers of the same factors.

Note the analogy with the prime factorization of integers. In both cases, the uniqueness clause is very useful.

Example 1.7

Because of uniqueness we know, without multiplying them out, that  (x+3)^2(x^2+1)^3 does not equal  (x+3)^4(x^2+x+1)^2 .

Example 1.8

By uniqueness, if  c(x)=m(x)\cdot q(x) then where  c(x)=(x-3)^2(x+2)^3 and  m(x)=(x-3)(x+2)^2 , we know that  q(x)=(x-3)(x+2) .

While  x^2+1 has no real roots and so doesn't factor over the real numbers, if we imagine a root— traditionally denoted  i so that  i^2+1=0 — then  x^2+1 factors into a product of linears  (x-i)(x+i) .

So we adjoin this root  i to the reals and close the new system with respect to addition, multiplication, etc. (i.e., we also add  3+i , and  2i , and  3+2i , etc., putting in all linear combinations of 1 and i). We then get a new structure, the complex numbers, denoted  \mathbb{C} .

In \mathbb{C} we can factor (obviously, at least some) quadratics that would be irreducible if we were to stick to the real numbers. Surprisingly, in  \mathbb{C} we can not only factor  x^2+1 and its close relatives, we can factor any quadratic.


ax^2+bx+c=
a\cdot \big(x-\frac{-b+\sqrt{b^2-4ac}}{2a}\big)
\cdot \big(x-\frac{-b-\sqrt{b^2-4ac}}{2a}\big)
Example 1.9

The second degree polynomial  x^2+x+1 factors over the complex numbers into the product of two first degree polynomials.


\big(x-\frac{-1+\sqrt{-3}}{2}\big)
\big(x-\frac{-1-\sqrt{-3}}{2}\big)
=
\big(x-(-\frac{1}{2}+\frac{\sqrt{3}}{2}i)\big)
\big(x-(-\frac{1}{2}-\frac{\sqrt{3}}{2}i)\big)
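
A numerical sketch of the same factorization, using NumPy's root finder: the two roots it reports are the complex numbers appearing in the linear factors above.

import numpy as np

# Roots of x^2 + x + 1, coefficients listed from the highest power down.
print(np.roots([1, 1, 1]))
# [-0.5+0.8660254j -0.5-0.8660254j]   that is, -1/2 plus or minus (sqrt(3)/2)i
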
Corollary 1.10 (Fundamental Theorem of Algebra)

Polynomials with complex coefficients factor into linear polynomials with complex coefficients. The factorization is unique.


2 - Complex Representations

Recall the definitions of the complex number addition


(a+bi)\,+\,(c+di)=(a+c)+(b+d)i

and multiplication.

\begin{array}{rl}
(a+bi)(c+di) &=ac+adi+bci+bd(-1)  \\
&=(ac-bd)+(ad+bc)i
\end{array}
Example 2.1

For instance,  (1-2i)\,+\,(5+4i)=6+2i and  (2-3i)(4-0.5i)=6.5-13i .

Handling scalar operations with those rules, all of the operations that we've covered for real vector spaces carry over unchanged.

Example 2.2

Matrix multiplication is the same, although the scalar arithmetic involves more bookkeeping.


\begin{pmatrix}
1+1i  &2-0i  \\
i    &-2+3i
\end{pmatrix}
\begin{pmatrix}
1+0i  &1-0i  \\
3i    &-i
\end{pmatrix}

\begin{align}
&=\begin{pmatrix}
(1+1i)\cdot(1+0i)+(2-0i)\cdot(3i)   &(1+1i)\cdot(1-0i)+(2-0i)\cdot(-i) \\
(i)\cdot(1+0i)+(-2+3i)\cdot(3i)     &(i)\cdot(1-0i)+(-2+3i)\cdot(-i)
\end{pmatrix}                                                  \\
&=\begin{pmatrix}
1+7i  &1-1i  \\
-9-5i  &3+3i
\end{pmatrix}
\end{align}
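
The bookkeeping can be delegated to software. The Python/NumPy sketch below, which uses Python's j for the imaginary unit i, reproduces the product just computed.

import numpy as np

A = np.array([[1 + 1j,  2 + 0j],
              [0 + 1j, -2 + 3j]])
B = np.array([[1 + 0j,  1 - 0j],
              [0 + 3j,  0 - 1j]])

print(A @ B)
# [[ 1.+7.j  1.-1.j]
#  [-9.-5.j  3.+3.j]]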

Everything else from prior chapters that we can, we shall also carry over unchanged. For instance, we shall call this


\langle \begin{pmatrix} 1+0i \\ 0+0i \\ \vdots \\ 0+0i \end{pmatrix},
\dots,
\begin{pmatrix} 0+0i \\ 0+0i \\ \vdots \\ 1+0i \end{pmatrix},
\begin{pmatrix} 0+1i \\ 0+0i \\ \vdots \\ 0+0i \end{pmatrix},
\dots,
\begin{pmatrix} 0+0i \\ 0+0i \\ \vdots \\ 0+1i \end{pmatrix} \rangle

the standard basis for  \mathbb{C}^n as a vector space over \mathbb{C} and again denote it \mathcal{E}_n .


Section II - Similarity

1 - Definition and Examples

Definition and Examples

We've defined  H and  \hat{H} to be matrix-equivalent if there are nonsingular matrices  P and  Q such that  \hat{H}=PHQ . That definition is motivated by this diagram

Linalg matrix equivalent cd 1.png

showing that H and \hat{H} both represent h but with respect to different pairs of bases. We now specialize that setup to the case where the codomain equals the domain, and where the codomain's basis equals the domain's basis.

Linalg matrix equivalent cd 2.png

To move from the lower left to the lower right we can either go straight over, or up, over, and then down. In matrix terms,


{\rm Rep}_{D,D}(t)
={\rm Rep}_{B,D}(\mbox{id})\;{\rm Rep}_{B,B}(t)\;\bigl({\rm Rep}_{B,D}(\mbox{id})\bigr)^{-1}

(recall that a representation of composition like this one reads right to left).

Definition 1.1

The matrices  T and S are similar if there is a nonsingular  P such that  T=PSP^{-1} .

Since nonsingular matrices are square, the similar matrices T and S must be square and of the same size.

Example 1.2

With these two,


P=
\begin{pmatrix}
2  &1  \\
1  &1
\end{pmatrix}
\qquad
S=
\begin{pmatrix}
2  &-3  \\
1  &-1
\end{pmatrix}

calculation gives that S is similar to this matrix.


T=
\begin{pmatrix}
0  &-1  \\
1  &1
\end{pmatrix}
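
We can verify such a computation numerically. The Python/NumPy sketch below checks that P^{-1}SP equals the matrix T above; since P^{-1} is also nonsingular and similarity is symmetric, this confirms that S and T lie in the same similarity class.

import numpy as np

P = np.array([[2,  1],
              [1,  1]])
S = np.array([[2, -3],
              [1, -1]])

T = np.linalg.inv(P) @ S @ P
print(T)
# [[ 0. -1.]
#  [ 1.  1.]]
# Equivalently S = P T P^(-1), so S and T are similar.
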
Example 1.3

The only matrix similar to the zero matrix is itself: PZP^{-1}=PZ=Z. The only matrix similar to the identity matrix is itself: PIP^{-1}=PP^{-1}=I.

Since matrix similarity is a special case of matrix equivalence, if two matrices are similar then they are equivalent. What about the converse: must matrix equivalent square matrices be similar? The answer is no. The prior example shows that the similarity classes are different from the matrix equivalence classes, because the matrix equivalence class of the identity consists of all nonsingular matrices of that size. Thus, for instance, these two are matrix equivalent but not similar.


T=
\begin{pmatrix}
1  &0  \\
0  &1
\end{pmatrix}
\qquad
S=
\begin{pmatrix}
1  &2  \\
0  &3
\end{pmatrix}

So some matrix equivalence classes split into two or more similarity classes— similarity gives a finer partition than does equivalence. This picture shows some matrix equivalence classes subdivided into similarity classes.

Linalg matrix similarity equiv classes.png

To understand the similarity relation we shall study the similarity classes. We approach this question in the same way that we've studied both the row equivalence and matrix equivalence relations, by finding a canonical form for representatives[1] of the similarity classes, called Jordan form. With this canonical form, we can decide if two matrices are similar by checking whether they reduce to the same representative. We've also seen with both row equivalence and matrix equivalence that a canonical form gives us insight into the ways in which members of the same class are alike (e.g., two identically-sized matrices are matrix equivalent if and only if they have the same rank).

Exercises

Problem 1

For


S=
\begin{pmatrix}
1  &3  \\
-2  &-6
\end{pmatrix}
\quad
T=
\begin{pmatrix}
0    &0  \\
-11/2 &-5
\end{pmatrix}
\quad
P=
\begin{pmatrix}
4  &2  \\
-3  &2
\end{pmatrix}

check that T=PSP^{-1}.

This exercise is recommended for all readers.
Problem 2

Example 1.3 shows that the only matrix similar to a zero matrix is itself and that the only matrix similar to the identity is itself.

  1. Show that the 1 \! \times \! 1 matrix (2), also, is similar only to itself.
  2. Is a matrix of the form cI for some scalar c similar only to itself?
  3. Is a diagonal matrix similar only to itself?
Problem 3

Show that these matrices are not similar.


\begin{pmatrix}
1  &0  &4  \\
1  &1  &3  \\
2  &1  &7
\end{pmatrix}
\qquad
\begin{pmatrix}
1  &0  &1  \\
0  &1  &1  \\
3  &1  &2
\end{pmatrix}
Problem 4

Consider the transformation t:\mathcal{P}_2\to \mathcal{P}_2 described by x^2\mapsto x+1, x\mapsto x^2-1, and 1\mapsto 3.

  1. Find T={\rm Rep}_{B,B}(t) where B=\langle x^2,x,1 \rangle .
  2. Find S={\rm Rep}_{D,D}(t) where D=\langle 1,1+x,1+x+x^2 \rangle .
  3. Find the matrix P such that T=PSP^{-1}.
This exercise is recommended for all readers.
Problem 5

Exhibit a nontrivial similarity relationship in this way: let  t:\mathbb{C}^2\to \mathbb{C}^2 act by


\begin{pmatrix} 1 \\ 2 \end{pmatrix}\mapsto\begin{pmatrix} 3 \\ 0 \end{pmatrix}
\qquad
\begin{pmatrix} -1 \\ 1 \end{pmatrix}\mapsto\begin{pmatrix} -1 \\ 2 \end{pmatrix}

and pick two bases, and represent  t with respect to them, getting  T={\rm Rep}_{B,B}(t) and  S={\rm Rep}_{D,D}(t) . Then compute the  P and  P^{-1} to change bases from  B to  D and back again.

Problem 6

Explain Example 1.3 in terms of maps.

This exercise is recommended for all readers.
Problem 7

Are there two matrices  A and  B that are similar while  A^2 and  B^2 are not similar? (Halmos 1958)

This exercise is recommended for all readers.
Problem 8

Prove that if two matrices are similar and one is invertible then so is the other.

This exercise is recommended for all readers.
Problem 9

Show that similarity is an equivalence relation.

Problem 10

Consider a matrix representing, with respect to some B,B, reflection across the  x -axis in  \mathbb{R}^2 . Consider also a matrix representing, with respect to some D,D, reflection across the  y -axis. Must they be similar?

Problem 11

Prove that similarity preserves determinants and rank. Does the converse hold?

Problem 12

Is there a matrix equivalence class with only one matrix similarity class inside? One with infinitely many similarity classes?

Problem 13

Can two different diagonal matrices be in the same similarity class?

This exercise is recommended for all readers.
Problem 14

Prove that if two matrices are similar then their  k -th powers are similar when  k>0 . What if  k\leq 0 ?

This exercise is recommended for all readers.
Problem 15

Let  p(x) be the polynomial  c_nx^n+\cdots+c_1x+c_0 . Show that if  T is similar to  S then  p(T)=c_nT^n+\cdots+c_1T+c_0I is similar to  p(S)=c_nS^n+\cdots+c_1S+c_0I .

Problem 16

List all of the matrix equivalence classes of  1 \! \times \! 1 matrices. Also list the similarity classes, and describe which similarity classes are contained inside of each matrix equivalence class.

Problem 17

Does similarity preserve sums?

Problem 18

Show that if  T-\lambda I and  N are similar matrices then  T and  N+\lambda I are also similar.


2 - Diagonalizability

The prior subsection defines the relation of similarity and shows that, although similar matrices are necessarily matrix equivalent, the converse does not hold. Some matrix-equivalence classes break into two or more similarity classes (the nonsingular n \! \times \! n matrices, for instance). This means that the canonical form for matrix equivalence, a block partial-identity, cannot be used as a canonical form for matrix similarity: each partial-identity lies in only one similarity class, so some similarity classes contain no partial-identity. This picture illustrates. As earlier in this book, class representatives are shown with stars.

Linalg matrix similarity equiv classes 2.png

We are developing a canonical form for representatives of the similarity classes. We naturally try to build on our previous work, meaning first that the partial identity matrices should represent the similarity classes into which they fall, and beyond that, that the representatives should be as simple as possible. The simplest extension of the partial-identity form is a diagonal form.

Definition 2.1

A transformation is diagonalizable if it has a diagonal representation with respect to the same basis for the codomain as for the domain. A diagonalizable matrix is one that is similar to a diagonal matrix:  T is diagonalizable if there is a nonsingular  P such that  PTP^{-1} is diagonal.

Example 2.2

The matrix


\begin{pmatrix}
4 &-2 \\
1 &1
\end{pmatrix}

is diagonalizable.


\begin{pmatrix}
2  &0   \\
0  &3
\end{pmatrix}
=
\begin{pmatrix}
-1  &2  \\
1  &-1
\end{pmatrix}
\begin{pmatrix}
4  &-2 \\
1  &1
\end{pmatrix}
\begin{pmatrix}
-1  &2  \\
1  &-1
\end{pmatrix}^{-1}
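
A quick numerical check of this diagonalization, as a Python/NumPy sketch (the variable names are ours):

import numpy as np

A = np.array([[4, -2],
              [1,  1]])
P = np.array([[-1,  2],
              [ 1, -1]])

print(P @ A @ np.linalg.inv(P))
# [[2. 0.]
#  [0. 3.]]
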
Example 2.3

Not every matrix is diagonalizable. The square of


N=\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix}

is the zero matrix. Thus, for any map n that  N represents (with respect to the same basis for the domain as for the codomain), the composition  n\circ n is the zero map. This implies that no such map  n can be diagonally represented (with respect to any B,B) because no power of a nonzero diagonal matrix is zero. That is, there is no diagonal matrix in N's similarity class.

That example shows that a diagonal form will not do for a canonical form— we cannot find a diagonal matrix in each matrix similarity class. However, the canonical form that we are developing has the property that if a matrix can be diagonalized then the diagonal matrix is the canonical representative of the similarity class. The next result characterizes which maps can be diagonalized.

Corollary 2.4

A transformation  t is diagonalizable if and only if there is a basis  B=\langle \vec{\beta}_1,\ldots,\vec{\beta}_n  \rangle  and scalars  \lambda_1,\ldots,\lambda_n such that  t(\vec{\beta}_i)=\lambda_i\vec{\beta}_i for each  i .

Proof

This follows from the definition by considering a diagonal representation matrix.


{\rm Rep}_{B,B}(t)=
\left(\begin{array}{c|c|c}
\vdots                    &       &\vdots                     \\
{\rm Rep}_{B}(t(\vec{\beta}_1)) &\cdots &{\rm Rep}_{B}(t(\vec{\beta}_n))  \\
\vdots                    &       &\vdots
\end{array}\right)
=
\left(\begin{array}{c|c|c}
\lambda_1   &       &0         \\
\vdots      &\ddots &\vdots    \\
0           &       &\lambda_n
\end{array}\right)

This representation is equivalent to the existence of a basis satisfying the stated conditions simply by the definition of matrix representation.

Example 2.5

To diagonalize


T=\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}

we take it as the representation of a transformation with respect to the standard basis T={\rm Rep}_{\mathcal{E}_2,\mathcal{E}_2}(t) and we look for a basis  B=\langle \vec{\beta}_1,\vec{\beta}_2 \rangle  such that


{\rm Rep}_{B,B}(t)
=
\begin{pmatrix}
\lambda_1  &0          \\
0          &\lambda_2
\end{pmatrix}

that is, such that t(\vec{\beta}_1)=\lambda_1\vec{\beta}_1 and t(\vec{\beta}_2)=\lambda_2\vec{\beta}_2.


\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\vec{\beta}_1=\lambda_1\cdot\vec{\beta}_1
\qquad
\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\vec{\beta}_2=\lambda_2\cdot\vec{\beta}_2

We are looking for scalars  x such that this equation


\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}=x\cdot\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}

has solutions b_1 and b_2, which are not both zero. Rewrite that as a linear system.


\begin{array}{*{2}{rc}r}
(3-x)\cdot b_1  &+  &2\cdot b_2       &=  &0  \\
&   &(1-x)\cdot b_2   &=  &0
\end{array}
\qquad (*)

In the bottom equation the two numbers multiply to give zero only if at least one of them is zero so there are two possibilities, b_2=0 and x=1. In the  b_2=0 possibility, the first equation gives that either b_1=0 or  x=3 . Since the case of both b_1=0 and b_2=0 is disallowed, we are left looking at the possibility of x=3. With it, the first equation in (*) is 0\cdot b_1+2\cdot b_2=0 and so associated with 3 are vectors with a second component of zero and a first component that is free.


\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\begin{pmatrix} b_1 \\ 0 \end{pmatrix}=3\cdot\begin{pmatrix} b_1 \\ 0 \end{pmatrix}

That is, one solution to (*) is \lambda_1=3, and we have a first basis vector.


\vec{\beta}_1=\begin{pmatrix} 1 \\ 0 \end{pmatrix}

In the x=1 possibility, the first equation in (*) is 2\cdot b_1+2\cdot b_2=0, and so associated with 1 are vectors whose second component is the negative of their first component.


\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\begin{pmatrix} b_1 \\ -b_1 \end{pmatrix}=1\cdot\begin{pmatrix} b_1 \\ -b_1 \end{pmatrix}

Thus, another solution is \lambda_2=1 and a second basis vector is this.


\vec{\beta}_2=\begin{pmatrix} 1 \\ -1 \end{pmatrix}

To finish, drawing the similarity diagram

Linalg matrix equivalent cd 3.png

and noting that the matrix {\rm Rep}_{B,\mathcal{E}_2}(\mbox{id}) is easy leads to this diagonalization.


\begin{pmatrix}
3  &0  \\
0  &1
\end{pmatrix}
=
\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}^{-1}
\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}
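
The same eigenvalues and basis vectors come out of a numerical routine. In the Python/NumPy sketch below, the columns of the matrix returned by eig are eigenvectors, scalar multiples of the \vec{\beta}'s found above, and conjugating by that matrix of columns gives the diagonal form, as in the equation just displayed.

import numpy as np

T = np.array([[3, 2],
              [0, 1]])

evals, B = np.linalg.eig(T)   # the columns of B are eigenvectors
print(evals)                  # [3. 1.]
print(B)                      # multiples of (1, 0) and (1, -1)

# As in the equation above: (matrix of basis columns)^(-1) T (matrix of basis columns)
# is diagonal.
print(np.linalg.inv(B) @ T @ B)
# [[3. 0.]
#  [0. 1.]]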

In the next subsection, we will expand on that example by considering more closely the property of Corollary 2.4. This includes seeing another way, the way that we will routinely use, to find the \lambda's.

Exercises

This exercise is recommended for all readers.
Problem 1

Repeat Example 2.5 for the matrix from Example 2.2.

Problem 2

Diagonalize these upper triangular matrices.

  1. \begin{pmatrix}
-2  &1  \\
0  &2
\end{pmatrix}
  2. \begin{pmatrix}
5  &4  \\
0  &1
\end{pmatrix}
This exercise is recommended for all readers.
Problem 3

What form do the powers of a diagonal matrix have?

Problem 4

Give two same-sized diagonal matrices that are not similar. Must any two different diagonal matrices come from different similarity classes?

Problem 5

Give a nonsingular diagonal matrix. Can a diagonal matrix ever be singular?

This exercise is recommended for all readers.
Problem 6

Show that the inverse of a diagonal matrix is the diagonal matrix of the inverses of its diagonal entries, provided that no element on that diagonal is zero. What happens when a diagonal entry is zero?

Problem 7

The equation ending Example 2.5


\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}^{-1}
\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}
=
\begin{pmatrix}
3  &0  \\
0  &1
\end{pmatrix}

is a bit jarring because for P we must take the first matrix, which is shown as an inverse, and for P^{-1} we take the inverse of the first matrix, so that the two -1 powers cancel and this matrix is shown without a superscript -1.

  1. Check that this nicer-appearing equation holds.
    
\begin{pmatrix}
3  &0  \\
0  &1
\end{pmatrix}
=
\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}
\begin{pmatrix}
3  &2  \\
0  &1
\end{pmatrix}
\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}^{-1}
  2. Is the previous item a coincidence? Or can we always switch the P and the P^{-1}?
Problem 8

Show that the P used to diagonalize in Example 2.5 is not unique.

Problem 9

Find a formula for the powers of this matrix. Hint: see Problem 3.


\begin{pmatrix}
-3  &1  \\
-4  &2
\end{pmatrix}
This exercise is recommended for all readers.
Problem 10

Diagonalize these.

  1.  \begin{pmatrix}
1  &1  \\
0  &0
\end{pmatrix}
  2.  \begin{pmatrix}
0  &1  \\
1  &0
\end{pmatrix}
Problem 11

We can ask how diagonalization interacts with the matrix operations. Assume that  t,s:V\to V are each diagonalizable. Is  ct diagonalizable for all scalars  c ? What about  t+s ?  t\circ s ?

This exercise is recommended for all readers.
Problem 12

Show that matrices of this form are not diagonalizable.


\begin{pmatrix}
1  &c  \\
0  &1
\end{pmatrix}
\qquad c\neq 0
Problem 13

Show that each of these is diagonalizable.

  1.  \begin{pmatrix}
1  &2  \\
2  &1
\end{pmatrix}
  2.  \begin{pmatrix}
x  &y  \\
y  &z
\end{pmatrix}
\qquad x,y,z\text{ scalars}


3 - Eigenvalues and Eigenvectors

In this subsection we will focus on the property of Corollary 2.4.

Definition 3.1

A transformation  t:V\to V has a scalar eigenvalue  \lambda if there is a nonzero eigenvector  \vec{\zeta}\in V such that t(\vec{\zeta})=\lambda\cdot\vec{\zeta}.

("Eigen" is German for "characteristic of" or "peculiar to"; some authors call these characteristic values and vectors. No authors call them "peculiar".)

Example 3.2

The projection map


\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\stackrel{\pi}{\longmapsto}
\begin{pmatrix} x \\ y \\ 0 \end{pmatrix}
\qquad x,y,z\in\mathbb{C}

has an eigenvalue of  1 associated with any eigenvector of the form


\begin{pmatrix} x \\ y \\ 0 \end{pmatrix}

where  x and  y are scalars at least one of which is non- 0 . On the other hand,  2 is not an eigenvalue of  \pi since no non- \vec{0} vector is doubled.

That example shows why the "non-\vec{0}" appears in the definition. Disallowing  \vec{0} as an eigenvector eliminates trivial eigenvalues.

Example 3.3

The only transformation on the trivial space  \{\vec{0}\,\} is

\vec{0}\mapsto\vec{0}.

This map has no eigenvalues because there are no non- \vec{0} vectors \vec{v} mapped to a scalar multiple \lambda\cdot\vec{v} of themselves.

Example 3.4

Consider the homomorphism  t:\mathcal{P}_1\to \mathcal{P}_1 given by  c_0+c_1x\mapsto(c_0+c_1)+(c_0+c_1)x . The range of  t is one-dimensional. Thus an application of  t to a vector in the range will simply rescale that vector:  c+cx\mapsto (2c)+(2c)x . That is,  t has an eigenvalue of  2 associated with eigenvectors of the form  c+cx where  c\neq 0 .

This map also has an eigenvalue of  0 associated with eigenvectors of the form  c-cx where  c\neq 0 .

Definition 3.5

A square matrix  T has a scalar eigenvalue  \lambda associated with the non- \vec{0} eigenvector  \vec{\zeta} if  T\vec{\zeta}=\lambda\cdot\vec{\zeta} .

Remark 3.6

Although this extension from maps to matrices is obvious, there is a point that must be made. Eigenvalues of a map are also the eigenvalues of matrices representing that map, and so similar matrices have the same eigenvalues. But the eigenvectors are different— similar matrices need not have the same eigenvectors.

For instance, consider again the transformation  t:\mathcal{P}_1\to \mathcal{P}_1 given by  c_0+c_1x\mapsto (c_0+c_1)+(c_0+c_1)x . It has an eigenvalue of  2 associated with eigenvectors of the form  c+cx where  c\neq 0 . If we represent  t with respect to  B=\langle 1+1x,1-1x \rangle


T={\rm Rep}_{B,B}(t)=
\begin{pmatrix}
2  &0  \\
0  &0
\end{pmatrix}

then  2 is an eigenvalue of  T , associated with these eigenvectors.


\{\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}\,\big|\, \begin{pmatrix}
2  &0  \\
0  &0
\end{pmatrix}\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}
=\begin{pmatrix} 2c_0 \\ 2c_1 \end{pmatrix}  \}
=\{\begin{pmatrix} c_0 \\ 0 \end{pmatrix}\,\big|\, c_0\in\mathbb{C},\, c_0\neq 0 \}

On the other hand, representing t with respect to  D=\langle 2+1x,1+0x \rangle  gives


S={\rm Rep}_{D,D}(t)=
\begin{pmatrix}
3  &1  \\
-3  &-1
\end{pmatrix}

and the eigenvectors of  S associated with the eigenvalue  2 are these.


\{\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}\,\big|\, \begin{pmatrix}
3  &1  \\
-3  &-1
\end{pmatrix}\begin{pmatrix} c_0 \\ c_1 \end{pmatrix}
=\begin{pmatrix} 2c_0 \\ 2c_1 \end{pmatrix}  \}
=\{\begin{pmatrix} c_0 \\ -c_0 \end{pmatrix}\,\big|\, c_0\in\mathbb{C},\, c_0\neq 0 \}

Thus similar matrices can have different eigenvectors.

Here is an informal description of what's happening. The underlying transformation doubles the eigenvectors \vec{v}\mapsto 2\cdot\vec{v}. But when the matrix representing the transformation is  T={\rm Rep}_{B,B}(t) then it "assumes" that column vectors are representations with respect to  B . In contrast,  S={\rm Rep}_{D,D}(t) "assumes" that column vectors are representations with respect to  D . So the vectors that get doubled by each matrix look different.
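
A small numerical illustration of this remark, as a Python/NumPy sketch: the two representing matrices report the same eigenvalues, but the eigenvectors associated with the eigenvalue 2 are different.

import numpy as np

T = np.array([[2, 0],
              [0, 0]])
S = np.array([[ 3,  1],
              [-3, -1]])

print(np.linalg.eigvals(T))   # [2. 0.]
print(np.linalg.eigvals(S))   # [2. 0.]

# Eigenvectors for the eigenvalue 2 differ: multiples of (1, 0) for T
# but multiples of (1, -1) for S.
print(np.linalg.eig(T)[1])
print(np.linalg.eig(S)[1])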

The next example illustrates the basic tool for finding eigenvectors and eigenvalues.

Example 3.7

What are the eigenvalues and eigenvectors of this matrix?


T=
\begin{pmatrix}
1    &2    &1    \\
2    &0    &-2   \\
-1    &2    &3
\end{pmatrix}

To find the scalars  x such that  T\vec{\zeta}=x\vec{\zeta} for non- \vec{0} eigenvectors  \vec{\zeta} , bring everything to the left-hand side


\begin{pmatrix}
1    &2    &1    \\
2    &0    &-2   \\
-1    &2    &3
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
-x\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
=\vec{0}

and factor  (T-x I)\vec{\zeta}=\vec{0} . (Note that it says T-xI; the expression  T-x doesn't make sense because  T is a matrix while  x is a scalar.) This homogeneous linear system


\begin{pmatrix}
1-x           &2            &1            \\
2           &0-x          &-2           \\
-1           &2            &3-x
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}

has a non- \vec{0} solution if and only if the matrix is singular. We can determine when that happens.

\begin{array}{rl}
0
&=\left|T-x I\right|                                               \\
&=\begin{vmatrix}
1-x          &2            &1            \\
2           &0-x          &-2           \\
-1           &2            &3-x
\end{vmatrix}                                       \\
&=-x^3+4x^2-4x  \\
&=-x\cdot(x-2)^2
\end{array}

The eigenvalues are  \lambda_1=0 and  \lambda_2=2 . To find the associated eigenvectors, plug in each eigenvalue. Plugging in \lambda_1=0 gives


\begin{pmatrix}
1-0         &2            &1            \\
2           &0-0          &-2           \\
-1           &2            &3-0
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\qquad\Longrightarrow\qquad
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
=
\begin{pmatrix} a \\ -a \\ a \end{pmatrix}

for a scalar parameter  a\neq 0 ( a is non- 0 because eigenvectors must be non- \vec{0} ). In the same way, plugging in \lambda_2=2 gives


\begin{pmatrix}
1-2         &2            &1            \\
2           &0-2          &-2           \\
-1           &2            &3-2
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\qquad\Longrightarrow\qquad
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
=
\begin{pmatrix} b \\ 0 \\ b \end{pmatrix}

with  b\neq 0 .
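
This calculation can be confirmed numerically with a Python/NumPy sketch (the checks below take a=b=1 in the eigenvector forms found above).

import numpy as np

T = np.array([[ 1, 2,  1],
              [ 2, 0, -2],
              [-1, 2,  3]])

# Approximately 0, 2, 2; roundoff may perturb the repeated root slightly.
print(np.round(np.linalg.eigvals(T), 6))

# Check the eigenvector forms found above, with a = b = 1.
print(T @ np.array([1, -1, 1]))    # [0 0 0]   eigenvalue 0
print(T @ np.array([1,  0, 1]))    # [2 0 2]   eigenvalue 2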

Example 3.8

If


S=
\begin{pmatrix}
\pi      &1      \\
0        &3
\end{pmatrix}

(here  \pi is not a projection map, it is the number  3.14\ldots ) then


\left|
\begin{pmatrix}
\pi-x &1         \\
0     &3-x
\end{pmatrix} \right|
=
(x-\pi)(x-3)

so  S has eigenvalues of  \lambda_1=\pi and  \lambda_2=3 . To find associated eigenvectors, first plug in \lambda_1 for x:


\begin{pmatrix}
\pi-\pi     &1         \\
0           &3-\pi
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}
\qquad\Longrightarrow\qquad
\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} a \\ 0 \end{pmatrix}

for a scalar  a\neq 0 , and then plug in \lambda_2:


\begin{pmatrix}
\pi-3       &1         \\
0           &3-3
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}
\qquad\Longrightarrow\qquad
\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} -b/(\pi-3) \\ b \end{pmatrix}

where  b\neq 0 .

Definition 3.9

The characteristic polynomial of a square matrix  T is the determinant of the matrix  T-x I , where  x is a variable. The characteristic equation is \left|T-xI\right|=0. The characteristic polynomial of a transformation  t is the polynomial of any  {\rm Rep}_{B,B}(t) .

Problem 11 checks that the characteristic polynomial of a transformation is well-defined, that is, any choice of basis yields the same polynomial.
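
For readers using software, NumPy can produce the coefficients of a characteristic polynomial directly. Note that its convention is the monic polynomial \left|xI-T\right|, which differs from \left|T-xI\right| only by the factor (-1)^n, so the roots, and hence the eigenvalues, are the same. This is only a numerical sketch, using the matrix of Example 3.7.

import numpy as np

T = np.array([[ 1, 2,  1],
              [ 2, 0, -2],
              [-1, 2,  3]])

# Coefficients of det(xI - T), highest power first: approximately [1, -4, 4, 0],
# that is, x^3 - 4x^2 + 4x.
coeffs = np.poly(T)
print(np.round(coeffs, 6))

# Its roots are the eigenvalues, 2, 2, and 0 (up to roundoff).
print(np.roots(coeffs))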

Lemma 3.10

A linear transformation on a nontrivial vector space has at least one eigenvalue.

Proof

Any root of the characteristic polynomial is an eigenvalue. Over the complex numbers, any polynomial of degree one or greater has a root. (This is the reason that in this chapter we've gone to scalars that are complex.)

Notice the familiar form of the sets of eigenvectors in the above examples.

Definition 3.11

The eigenspace of a transformation  t associated with the eigenvalue  \lambda is  V_\lambda=\{\vec{\zeta}\,\big|\, t(\vec{\zeta}\,)=\lambda\vec{\zeta}\,\} \cup\{\vec{0}\,\} . The eigenspace of a matrix is defined analogously.

Lemma 3.12

An eigenspace is a subspace.

Proof

An eigenspace must be nonempty— for one thing it contains the zero vector— and so we need only check closure. Take vectors  \vec{\zeta}_1,\ldots,\vec{\zeta}_n from  V_\lambda ; to show that any linear combination is in  V_\lambda , compute

\begin{array}{rl}
t(c_1\vec{\zeta}_1+c_2\vec{\zeta}_2+\cdots +c_n\vec{\zeta}_n)
&=c_1t(\vec{\zeta}_1)+\dots+c_nt(\vec{\zeta}_n)               \\
&=c_1\lambda\vec{\zeta}_1+\dots+c_n\lambda\vec{\zeta}_n          \\
&=\lambda(c_1\vec{\zeta}_1+\dots+c_n\vec{\zeta}_n)
\end{array}

(the second equality holds even if any  \vec{\zeta}_i is  \vec{0} since  t(\vec{0})=\lambda\cdot\vec{0}=\vec{0} ).

Example 3.13

In Example 3.8 the eigenspace associated with the eigenvalue  \pi and the eigenspace associated with the eigenvalue  3 are these.


V_{\pi}=\{\begin{pmatrix} a \\ 0 \end{pmatrix}\,\big|\, a\in\mathbb{R}\}
\qquad
V_3=\{\begin{pmatrix} -b/\pi-3 \\ b \end{pmatrix}\,\big|\, b\in\mathbb{R}\}
Example 3.14

In Example 3.7, these are the eigenspaces associated with the eigenvalues  0 and  2 .


V_0=\{\begin{pmatrix} a \\ -a \\ a \end{pmatrix}\,\big|\, a\in\mathbb{R}\},
\qquad
V_2=\{\begin{pmatrix} b \\ 0 \\ b \end{pmatrix}\,\big|\, b\in\mathbb{R}\}.
Remark 3.15

The characteristic equation is  0=x(x-2)^2 so in some sense  2 is an eigenvalue "twice". However there are not "twice" as many eigenvectors, in that the dimension of the eigenspace is one, not two. The next example shows a case where a number,  1 , is a double root of the characteristic equation and the dimension of the associated eigenspace is two.

Example 3.16

With respect to the standard bases, this matrix


\begin{pmatrix}
1  &0  &0  \\
0  &1  &0  \\
0  &0  &0
\end{pmatrix}

represents projection.


\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\stackrel{\pi}{\longmapsto}
\begin{pmatrix} x \\ y \\ 0 \end{pmatrix}
\qquad x,y,z\in\mathbb{C}

Its eigenspace associated with the eigenvalue  0 and its eigenspace associated with the eigenvalue  1 are easy to find.


V_0=\{\begin{pmatrix} 0 \\ 0 \\ c_3 \end{pmatrix}\,\big|\, c_3\in\mathbb{C}\}
\qquad
V_1=\{\begin{pmatrix} c_1 \\ c_2 \\ 0 \end{pmatrix}\,\big|\, c_1,c_2\in\mathbb{C}\}

By the lemma, if two eigenvectors \vec{v}_1 and \vec{v}_2 are associated with the same eigenvalue then any linear combination of those two is also an eigenvector associated with that same eigenvalue. But, if two eigenvectors  \vec{v}_1 and  \vec{v}_2 are associated with different eigenvalues then the sum  \vec{v}_1+\vec{v}_2 need not be related to the eigenvalue of either one. In fact, just the opposite. If the eigenvalues are different then the eigenvectors are not linearly related.

Theorem 3.17

For any set of distinct eigenvalues of a map or matrix, a set of associated eigenvectors, one per eigenvalue, is linearly independent.

Proof

We will use induction on the number of eigenvalues. If there is no eigenvalue or only one eigenvalue then the set of associated eigenvectors is empty or is a singleton set with a non-\vec{0} member, and in either case is linearly independent.

For induction, assume that the theorem is true for any set of  k distinct eigenvalues, suppose that  \lambda_1,\dots,\lambda_{k+1} are distinct eigenvalues, and let  \vec{v}_1,\dots,\vec{v}_{k+1} be associated eigenvectors. If  c_1\vec{v}_1+\dots+c_k\vec{v}_k+c_{k+1}\vec{v}_{k+1}=\vec{0} then after multiplying both sides of the displayed equation by  \lambda_{k+1} , applying the map or matrix to both sides of the displayed equation, and subtracting the first result from the second, we have this.


c_1(\lambda_{k+1}-\lambda_1)\vec{v}_1+\dots
+c_k(\lambda_{k+1}-\lambda_k)\vec{v}_k
+c_{k+1}(\lambda_{k+1}-\lambda_{k+1})\vec{v}_{k+1}=\vec{0}

The induction hypothesis now applies:  c_1(\lambda_{k+1}-\lambda_1)=0,\dots,c_k(\lambda_{k+1}-\lambda_k)=0 . Thus, as all the eigenvalues are distinct,  c_1,\,\dots,\,c_k are all  0 . Finally,  c_{k+1} must also be  0 because we are then left with the equation  c_{k+1}\vec{v}_{k+1}=\vec{0} and  \vec{v}_{k+1}\neq\vec{0} .

Example 3.18

The eigenvalues of


\begin{pmatrix}
2   &-2   &2   \\
0   &1    &1   \\
-4   &8    &3
\end{pmatrix}

are distinct:  \lambda_1=1 ,  \lambda_2=2 , and  \lambda_3=3 . A set of associated eigenvectors like


\{
\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 9 \\ 4 \\ 4 \end{pmatrix},
\begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}  \}

is linearly independent.
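
A numerical check of this independence, as a Python/NumPy sketch: the matrix having these eigenvectors as its columns has full rank, equivalently a nonzero determinant.

import numpy as np

# The three eigenvectors, placed as columns.
V = np.column_stack([[2, 1, 0],
                     [9, 4, 4],
                     [2, 1, 2]])

print(np.linalg.matrix_rank(V))    # 3, so the set is linearly independent
print(np.linalg.det(V))            # nonzero (here -2, up to roundoff)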

Corollary 3.19

An  n \! \times \! n matrix with  n distinct eigenvalues is diagonalizable.

Proof

Form a basis of eigenvectors. Apply Corollary 2.4.

Exercises

Problem 1

For each, find the characteristic polynomial and the eigenvalues.

  1.  \begin{pmatrix}
10  &-9 \\
4  &-2
\end{pmatrix}
  2. \begin{pmatrix}
1  &2  \\
4  &3
\end{pmatrix}
  3.  \begin{pmatrix}
0  &3  \\
7 &0
\end{pmatrix}
  4.  \begin{pmatrix}
0  &0  \\
0  &0
\end{pmatrix}
  5.  \begin{pmatrix}
1  &0  \\
0  &1
\end{pmatrix}
This exercise is recommended for all readers.
Problem 2

For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors.

  1.  \begin{pmatrix}
3  &0  \\
8  &-1
\end{pmatrix}
  2.  \begin{pmatrix}
3  &2  \\
-1  &0
\end{pmatrix}
Problem 3

Find the characteristic equation, and the eigenvalues and associated eigenvectors for this matrix. Hint. The eigenvalues are complex.


\begin{pmatrix}
-2  &-1 \\
5  &2
\end{pmatrix}
Problem 4

Find the characteristic polynomial, the eigenvalues, and the associated eigenvectors of this matrix.


\begin{pmatrix}
1  &1  &1  \\
0  &0  &1  \\
0  &0  &1
\end{pmatrix}
This exercise is recommended for all readers.
Problem 5

For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors.

  1.  \begin{pmatrix}
3  &-2 &0  \\
-2  &3  &0  \\
0  &0  &5
\end{pmatrix}
  2.  \begin{pmatrix}
0  &1   &0  \\
0  &0   &1  \\
4  &-17 &8
\end{pmatrix}
This exercise is recommended for all readers.
Problem 6

Let  t:\mathcal{P}_2\to \mathcal{P}_2 be


a_0+a_1x+a_2x^2\mapsto
(5a_0+6a_1+2a_2)-(a_1+8a_2)x+(a_0-2a_2)x^2.

Find its eigenvalues and the associated eigenvectors.

Problem 7

Find the eigenvalues and eigenvectors of this map  t:\mathcal{M}_2\to \mathcal{M}_2 .


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}
\mapsto
\begin{pmatrix}
2c    &a+c  \\
b-2c  &d
\end{pmatrix}
This exercise is recommended for all readers.
Problem 8

Find the eigenvalues and associated eigenvectors of the differentiation operator  d/dx:\mathcal{P}_3\to \mathcal{P}_3 .

Problem 9

Prove that the eigenvalues of a triangular matrix (upper or lower triangular) are the entries on the diagonal.

This exercise is recommended for all readers.
Problem 10

Find the formula for the characteristic polynomial of a 2 \! \times \! 2 matrix.

Problem 11

Prove that the characteristic polynomial of a transformation is well-defined.

This exercise is recommended for all readers.
Problem 12
  1. Can any non- \vec{0} vector in any nontrivial vector space be an eigenvector? That is, given a  \vec{v}\neq\vec{0} from a nontrivial  V , is there a transformation  t:V\to V and a scalar  \lambda\in\mathbb{R} such that  t(\vec{v})=\lambda\vec{v} ?
  2. Given a scalar  \lambda , can any non- \vec{0} vector in any nontrivial vector space be an eigenvector associated with the eigenvalue  \lambda ?
This exercise is recommended for all readers.
Problem 13

Suppose that  t:V\to V and  T={\rm Rep}_{B,B}(t) . Prove that the eigenvectors of  T associated with  \lambda are the non- \vec{0} vectors in the kernel of the map represented (with respect to the same bases) by  T-\lambda I .

Problem 14

Prove that if a,\ldots,\,d are all integers and  a+b=c+d then


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}

has integral eigenvalues, namely  a+b and  a-c .

This exercise is recommended for all readers.
Problem 15

Prove that if  T is nonsingular and has eigenvalues  \lambda_1,\dots,\lambda_n then  T^{-1} has eigenvalues  1/\lambda_1,\dots,1/\lambda_n . Is the converse true?

This exercise is recommended for all readers.
Problem 16

Suppose that  T is  n \! \times \! n and  c,d are scalars.

  1. Prove that if  T has the eigenvalue  \lambda with an associated eigenvector  \vec{v} then  \vec{v} is an eigenvector of  cT+dI associated with eigenvalue  c\lambda+d .
  2. Prove that if  T is diagonalizable then so is  cT+dI .
This exercise is recommended for all readers.
Problem 17

Show that  \lambda is an eigenvalue of  T if and only if the map represented by  T-\lambda I is not an isomorphism.

Problem 18
  1. Show that if  \lambda is an eigenvalue of  A then  \lambda^k is an eigenvalue of  A^k .
  2. What is wrong with this proof generalizing that? "If  \lambda is an eigenvalue of  A and  \mu is an eigenvalue for  B , then  \lambda\mu is an eigenvalue for  AB , for, if  A\vec{x}=\lambda\vec{x} and  B\vec{x}=\mu\vec{x} then  AB\vec{x}=A\mu\vec{x}=\mu A\vec{x}=\mu\lambda\vec{x} "?
(Strang 1980)
Problem 19

Do matrix-equivalent matrices have the same eigenvalues?

Problem 20

Show that a square matrix with real entries and an odd number of rows has at least one real eigenvalue.

Problem 21

Diagonalize.


\begin{pmatrix}
-1  &2  &2  \\
2  &2  &2  \\
-3  &-6 &-6
\end{pmatrix}
Problem 22

Suppose that  P is a nonsingular  n \! \times \! n matrix. Show that the similarity transformation map  t_P:\mathcal{M}_{n \! \times \! n}\to \mathcal{M}_{n \! \times \! n} sending  T\mapsto PTP^{-1} is an isomorphism.

? Problem 23

Show that if  A is an  n square matrix and each row (column) sums to  c then  c is a characteristic root of  A . (Morrison 1967)


Section III - Nilpotence

The goal of this chapter is to show that every square matrix is similar to one that is a sum of two kinds of simple matrices. The prior section focused on the first kind, diagonal matrices. We now consider the other kind.


1 - Self-Composition

This subsection is optional, although it is necessary for later material in this section and in the next one.

A linear transformation t:V\to V, because it has the same domain and codomain, can be iterated.[2] That is, compositions of t with itself such as  t^2=t\circ t and  t^3=t\circ t\circ t are defined.

Linalg iterates.png

Note that this power notation for the linear transformation functions dovetails with the notation that we've used earlier for their square matrix representations because if {\rm Rep}_{B,B}(t)=T then  {\rm Rep}_{B,B}(t^j)=T^j .

Example 1.1

For the derivative map  d/dx:\mathcal{P}_3\to \mathcal{P}_3 given by


a+bx+cx^2+dx^3\stackrel{d/dx}{\longmapsto} b+2cx+3dx^2

the second power is the second derivative


a+bx+cx^2+dx^3\stackrel{d^2/dx^2}{\longmapsto} 2c+6dx

the third power is the third derivative


a+bx+cx^2+dx^3\stackrel{d^3/dx^3}{\longmapsto} 6d

and any higher power is the zero map.
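
To see these powers concretely we can represent d/dx with respect to the basis  \langle 1,x,x^2,x^3 \rangle  and take matrix powers. Here is an illustrative sketch, assuming NumPy is available; the matrix D below is that representation.

import numpy as np

# d/dx on P_3, represented with respect to the basis <1, x, x^2, x^3>
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=int)

print(np.linalg.matrix_power(D, 2))              # the second derivative map
print(np.linalg.matrix_power(D, 3))              # the third derivative map
assert not np.linalg.matrix_power(D, 4).any()    # the fourth power is the zero map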

Example 1.2

This transformation of the space of 2 \! \times \! 2 matrices


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}
\stackrel{t}{\longmapsto}
\begin{pmatrix}
b  &a  \\
d  &0
\end{pmatrix}

has this second power


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}
\stackrel{t^2}{\longmapsto}
\begin{pmatrix}
a  &b  \\
0  &0
\end{pmatrix}

and this third power.


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}
\stackrel{t^3}{\longmapsto}
\begin{pmatrix}
b  &a  \\
0  &0
\end{pmatrix}

After that, t^4=t^2 and t^5=t^3, etc.

These examples suggest that on iteration more and more zeros appear until there is a settling down. The next result makes this precise.

Lemma 1.3

For any transformation  t:V\to V , the rangespaces of the powers form a descending chain


V\supseteq \mathcal{R}(t)\supseteq\mathcal{R}(t^2)\supseteq\cdots

and the nullspaces form an ascending chain.


\{\vec{0}\,\}\subseteq\mathcal{N}(t)\subseteq\mathcal{N}(t^2)\subseteq\cdots

Further, there is a  k such that for powers less than k the subsets are proper (if j<k then \mathcal{R}(t^j)\supset\mathcal{R}(t^{j+1}) and \mathcal{N}(t^j)\subset\mathcal{N}(t^{j+1})), while for powers greater than k the sets are equal (if j\geq k then \mathcal{R}(t^j)=\mathcal{R}(t^{j+1}) and \mathcal{N}(t^j)=\mathcal{N}(t^{j+1})).

Proof

We will do the rangespace half and leave the rest for Problem 6. Recall, however, that for any map the dimension of its rangespace plus the dimension of its nullspace equals the dimension of its domain. So if the rangespaces shrink then the nullspaces must grow.

That the rangespaces form chains is clear because if \vec{w}\in\mathcal{R}(t^{j+1}), so that \vec{w}=t^{j+1}(\vec{v}), then \vec{w}=t^{j}(\,t(\vec{v})\,) and so \vec{w}\in\mathcal{R}(t^{j}). To verify the "further" property, first observe that if any pair of rangespaces in the chain are equal  \mathcal{R}(t^{k})=\mathcal{R}(t^{k+1}) then all subsequent ones are also equal  \mathcal{R}(t^{k+1})=\mathcal{R}(t^{k+2}) , etc. This is because the map  t:\mathcal{R}(t^{k+1})\to \mathcal{R}(t^{k+2}) is the same map, with the same domain, as  t:\mathcal{R}(t^{k})\to \mathcal{R}(t^{k+1}) and so it has the same range:  \mathcal{R}(t^{k+1})=\mathcal{R}(t^{k+2}) (and induction shows that the equality holds for all higher powers). So if the chain of rangespaces ever stops being strictly decreasing then it is stable from that point onward.

But the chain must stop decreasing. Each rangespace is a subspace of the one before it. For it to be a proper subspace it must be of strictly lower dimension (see Problem 4). These spaces are finite-dimensional and so the chain can fall for only finitely-many steps, that is, the power k is at most the dimension of V.

Example 1.4

The derivative map a+bx+cx^2+dx^3\stackrel{d/dx}{\longmapsto} b+2cx+3dx^2 of Example 1.1 has this chain of rangespaces


\mathcal{P}_3\supset\mathcal{P}_2\supset\mathcal{P}_1
\supset\mathcal{P}_0\supset\{\vec{0}\,\}=\{\vec{0}\,\}=\cdots

and this chain of nullspaces.


\{\vec{0}\,\}\subset\mathcal{P}_0\subset\mathcal{P}_1\subset\mathcal{P}_2
\subset\mathcal{P}_3=\mathcal{P}_3=\cdots
Example 1.5

The transformation  \pi:\mathbb{C}^3\to \mathbb{C}^3 projecting onto the first two coordinates


\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}
\stackrel{\pi}{\longmapsto}
\begin{pmatrix} c_1 \\ c_2 \\ 0 \end{pmatrix}

has \mathbb{C}^3\supset\mathcal{R}(\pi)=\mathcal{R}(\pi^2)=\cdots and  \{\vec{0}\,\}\subset\mathcal{N}(\pi)=\mathcal{N}(\pi^2)=\cdots\, .

Example 1.6

Let  t:\mathcal{P}_2\to \mathcal{P}_2 be the map  c_0+c_1x+c_2x^2 \mapsto 2c_0+c_2x. As the lemma describes, on iteration the rangespace shrinks


\mathcal{R}(t^0)=\mathcal{P}_2
\quad
\mathcal{R}(t)=\{a+bx\,\big|\, a,b\in\mathbb{C}\}
\quad
\mathcal{R}(t^2)=\{a\,\big|\, a\in\mathbb{C}\}

and then stabilizes \mathcal{R}(t^2)=\mathcal{R}(t^3)=\cdots, while the nullspace grows


\mathcal{N}(t^0)=\{0\}
\quad
\mathcal{N}(t)=\{cx\,\big|\, c\in\mathbb{C}\}
\quad
\mathcal{N}(t^2)=\{cx+d\,\big|\, c,d\in\mathbb{C}\}

and then stabilizes \mathcal{N}(t^2)=\mathcal{N}(t^3)=\cdots.
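
The shrinking and stabilizing of these chains can be watched numerically. This sketch (assuming NumPy; the matrix T is the representation of t with respect to the basis  \langle 1,x,x^2 \rangle ) prints the rank and nullity of each power.

import numpy as np

# t: c0 + c1 x + c2 x^2  |->  2 c0 + c2 x, with respect to the basis <1, x, x^2>
T = np.array([[2, 0, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

n = T.shape[0]
for j in range(n + 2):
    Tj = np.linalg.matrix_power(T, j)
    r = np.linalg.matrix_rank(Tj)
    print(f"power {j}: rank {r}, nullity {n - r}")
# ranks come out 3, 2, 1, 1, 1: the chain falls and then stabilizes by the n-th power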

This graph illustrates Lemma 1.3. The horizontal axis gives the power j of a transformation. The vertical axis gives the dimension of the rangespace of t^j as the distance above zero— and thus also shows the dimension of the nullspace as the distance below the gray horizontal line, because the two add to the dimension n of the domain.

Linalg rank of iterates.png

As sketched, on iteration the rank falls and with it the nullity grows until the two reach a steady state. This state must be reached by the n-th iterate. The steady state's distance above zero is the dimension of the generalized rangespace and its distance below n is the dimension of the generalized nullspace.

Definition 1.7

Let  t be a transformation on an  n -dimensional space. The generalized rangespace (or the closure of the rangespace) is \mathcal{R}_\infty(t)=\mathcal{R}(t^n). The generalized nullspace (or the closure of the nullspace) is \mathcal{N}_\infty(t)=\mathcal{N}(t^n).

Exercises

Problem 1

Give the chains of rangespaces and nullspaces for the zero and identity transformations.

Problem 2

For each map, give the chain of rangespaces and the chain of nullspaces, and the generalized rangespace and the generalized nullspace.

  1. t_0:\mathcal{P}_2\to \mathcal{P}_2, a+bx+cx^2\mapsto b+cx^2
  2. t_1:\mathbb{R}^2\to \mathbb{R}^2,
    
\begin{pmatrix} a \\ b \end{pmatrix}\mapsto\begin{pmatrix} 0 \\ a \end{pmatrix}
  3. t_2:\mathcal{P}_2\to \mathcal{P}_2, a+bx+cx^2\mapsto b+cx+ax^2
  4. t_3:\mathbb{R}^3\to \mathbb{R}^3,
    
\begin{pmatrix} a \\ b \\ c \end{pmatrix}\mapsto\begin{pmatrix} a \\ a \\ b \end{pmatrix}
Problem 3

Prove that function composition is associative  (t\circ t)\circ t=t\circ (t\circ t) and so we can write t^3 without specifying a grouping.

Problem 4

Check that a subspace must be of dimension less than or equal to the dimension of its superspace. Check that if the subspace is proper (the subspace does not equal the superspace) then the dimension is strictly less. (This is used in the proof of Lemma 1.3.)

Problem 5

Prove that the generalized rangespace \mathcal{R}_\infty(t) is the entire space, and the generalized nullspace \mathcal{N}_\infty(t) is trivial, if the transformation t is nonsingular. Is this "only if" also?

Problem 6

Verify the nullspace half of Lemma 1.3.

Problem 7

Give an example of a transformation on a three dimensional space whose range has dimension two. What is its nullspace? Iterate your example until the rangespace and nullspace stabilize.

Problem 8

Show that the rangespace and nullspace of a linear transformation need not be disjoint. Are they ever disjoint?


2 - Strings

This subsection is optional, and requires material from the optional Direct Sum subsection.

The prior subsection shows that as  j increases, the dimensions of the \mathcal{R}(t^j)'s fall while the dimensions of the \mathcal{N}(t^j)'s rise, in such a way that this rank and nullity split the dimension of V. Can we say more; do the two split a basis— is  V=\mathcal{R}(t^j)\oplus\mathcal{N}(t^j) ?

The answer is yes for the smallest power j=0 since  V=\mathcal{R}(t^0)\oplus\mathcal{N}(t^0)=V\oplus\{\vec{0}\} . The answer is also yes at the other extreme.

Lemma 2.1

Where  t:V\to V is a linear transformation, the space is the direct sum  V=\mathcal{R}_\infty(t)\oplus\mathcal{N}_\infty(t) . That is, both  \dim(V)=\dim(\mathcal{R}_\infty(t))+\dim(\mathcal{N}_\infty(t)) and  \mathcal{R}_\infty(t)\cap\mathcal{N}_\infty(t)=\{\vec{0}\,\} .

Proof

We will verify the second sentence, which is equivalent to the first. The first clause, that the dimension n of the domain of t^n equals the rank of t^n plus the nullity of t^n, holds for any transformation and so we need only verify the second clause.

Assume that  \vec{v}\in\mathcal{R}_\infty(t)\cap\mathcal{N}_\infty(t) =\mathcal{R}(t^n)\cap\mathcal{N}(t^n) , to prove that \vec{v} is  \vec{0} . Because  \vec{v} is in the nullspace,  t^n(\vec{v})=\vec{0} . On the other hand, because  \mathcal{R}(t^n)=\mathcal{R}(t^{n+1}) , the map  t:\mathcal{R}_\infty(t)\to \mathcal{R}_\infty(t) is a dimension-preserving homomorphism and therefore is one-to-one. A composition of one-to-one maps is one-to-one, and so  t^n:\mathcal{R}_\infty(t)\to \mathcal{R}_\infty(t) is one-to-one. But now— because only  \vec{0} is sent by a one-to-one linear map to  \vec{0} — the fact that  t^n(\vec{v})=\vec{0} implies that  \vec{v}=\vec{0} .

Note 2.2

Technically we should distinguish the map t:V\to V from the map  t:\mathcal{R}_\infty(t)\to \mathcal{R}_\infty(t) because the domains or codomains might differ. The second one is said to be the restriction[3] of t to \mathcal{R}(t^k). We shall use later a point from that proof about the restriction map, namely that it is nonsingular.

In contrast to the j=0 and j=n cases, for intermediate powers the space V might not be the direct sum of \mathcal{R}(t^j) and \mathcal{N}(t^j). The next example shows that the two can have a nontrivial intersection.

Example 2.3

Consider the transformation of  \mathbb{C}^2 defined by this action on the elements of the standard basis.


\begin{pmatrix} 1 \\ 0 \end{pmatrix}
\stackrel{n}{\longmapsto}
\begin{pmatrix} 0 \\ 1 \end{pmatrix}
\quad
\begin{pmatrix} 0 \\ 1 \end{pmatrix}
\stackrel{n}{\longmapsto}
\begin{pmatrix} 0 \\ 0 \end{pmatrix}
\qquad
N={\rm Rep}_{\mathcal{E}_2,\mathcal{E}_2}(n)=\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix}

The vector


\vec{e}_2=\begin{pmatrix} 0 \\ 1 \end{pmatrix}

is in both the rangespace and nullspace. Another way to depict this map's action is with a string.


\begin{array}{ccccc}
\vec{e}_1 &\mapsto &\vec{e}_2 &\mapsto &\vec{0}
\end{array}
Example 2.4

A map  \hat{n}:\mathbb{C}^4\to \mathbb{C}^4 whose action on  \mathcal{E}_4 is given by the string


\begin{array}{ccccccccc}
\vec{e}_1 &\mapsto &\vec{e}_2
&\mapsto &\vec{e}_3
&\mapsto &\vec{e}_4
&\mapsto &\vec{0}
\end{array}

has  \mathcal{R}(\hat{n})\cap\mathcal{N}(\hat{n}) equal to the span  [\{\vec{e}_4\}] , has  \mathcal{R}(\hat{n}^2)\cap\mathcal{N}(\hat{n}^2)=
[\{\vec{e}_3,\vec{e}_4\}] , and has  \mathcal{R}(\hat{n}^3)\cap\mathcal{N}(\hat{n}^3)=
[\{\vec{e}_4\}] . The matrix representation is all zeros except for some subdiagonal ones.


\hat{N}={\rm Rep}_{\mathcal{E}_4,\mathcal{E}_4}(\hat{n})
=\begin{pmatrix}
0  &0  &0  &0 \\
1  &0  &0  &0 \\
0  &1  &0  &0 \\
0  &0  &1  &0 
\end{pmatrix}
Example 2.5

Transformations can act via more than one string. A transformation  t acting on a basis  B=\langle \vec{\beta}_1,\dots,\vec{\beta}_5 \rangle  by


\begin{array}{ccccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{\beta}_3
&\mapsto &\vec{0} \\
\vec{\beta}_4 &\mapsto &\vec{\beta}_5 &\mapsto &\vec{0}
\end{array}

is represented by a matrix that is all zeros except for blocks of subdiagonal ones


{\rm Rep}_{B,B}(t)=
\left(\begin{array}{ccc|cc}
0  &0  &0  &0  &0  \\
1  &0  &0  &0  &0  \\
0  &1  &0  &0  &0  \\ \hline
0  &0  &0  &0  &0  \\
0  &0  &0  &1  &0
\end{array}\right)

(the lines just visually organize the blocks).

In those three examples all vectors are eventually transformed to zero.

Definition 2.6

A nilpotent transformation is one with a power that is the zero map. A nilpotent matrix is one with a power that is the zero matrix. In either case, the least such power is the index of nilpotency.

Example 2.7

In Example 2.3 the index of nilpotency is two. In Example 2.4 it is four. In Example 2.5 it is three.

Example 2.8

The differentiation map  d/dx:\mathcal{P}_2\to \mathcal{P}_2 is nilpotent of index three since the third derivative of any quadratic polynomial is zero. This map's action is described by the string x^2\mapsto 2x\mapsto 2\mapsto 0 and taking the basis  B=\langle x^2,2x,2 \rangle  gives this representation.


{\rm Rep}_{B,B}(d/dx)=
\begin{pmatrix}
0  &0  &0  \\
1  &0  &0  \\
0  &1  &0
\end{pmatrix}

Not all nilpotent matrices are all zeros except for blocks of subdiagonal ones.

Example 2.9

With the matrix \hat{N} from Example 2.4, and this four-vector basis


D=\langle \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \rangle

a change of basis operation produces this representation with respect to  D,D .


\begin{pmatrix}
1  &0  &1 &0 \\
0  &2  &1 &0 \\
1  &1  &1 &0 \\
0  &0  &0 &1
\end{pmatrix}
\begin{pmatrix}
0  &0  &0 &0 \\
1  &0  &0 &0 \\
0  &1  &0 &0 \\
0  &0  &1 &0
\end{pmatrix}
\begin{pmatrix}
1  &0  &1 &0 \\
0  &2  &1 &0 \\
1  &1  &1 &0 \\
0  &0  &0 &1
\end{pmatrix}^{-1}\!\!
=
\begin{pmatrix}
-1  &0  &1   &0 \\
-3  &-2 &5   &0 \\
-2  &-1  &3  &0 \\
2  &1   &-2 &0
\end{pmatrix}

The new matrix is nilpotent; its fourth power is the zero matrix since


(P\hat{N}P^{-1})^4
=P\hat{N}P^{-1}\cdot P\hat{N}P^{-1}\cdot P\hat{N}P^{-1}\cdot P\hat{N}P^{-1}
=P\hat{N}^4P^{-1}

and  \hat{N}^4 is the zero matrix.
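
Readers who want to confirm this computation can do so in a few lines. This sketch (assuming NumPy; P denotes the matrix whose columns are the vectors of D) reproduces the change of basis and checks that the fourth power, but not the third, is zero.

import numpy as np

Nhat = np.array([[0, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 1, 0]], dtype=float)
P = np.array([[1, 0, 1, 0],        # columns are the vectors of the basis D
              [0, 2, 1, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)

M = P @ Nhat @ np.linalg.inv(P)
print(np.round(M))                                     # the matrix displayed above
assert np.allclose(np.linalg.matrix_power(M, 4), 0)    # nilpotent of index four
assert not np.allclose(np.linalg.matrix_power(M, 3), 0)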

The goal of this subsection is Theorem 2.13, which shows that the prior example is prototypical in that every nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones.

Definition 2.10

Let  t be a nilpotent transformation on  V . A  t -string generated by  \vec{v}\in V is a sequence  \langle \vec{v},t(\vec{v}),\ldots,t^{k-1}(\vec{v}) \rangle  . This sequence has length k. A  t -string basis is a basis that is a concatenation of  t -strings.

Example 2.11

In Example 2.5, the t-strings \langle \vec{\beta}_1,\vec{\beta}_2,\vec{\beta}_3 \rangle and \langle \vec{\beta}_4,\vec{\beta}_5 \rangle , of length three and two, can be concatenated to make a basis for the domain of t.

Lemma 2.12

If a space has a  t -string basis then the longest string in it has length equal to the index of nilpotency of t.

Proof

Suppose not. Those strings cannot be longer; if the index is  k then  t^k sends any vector— including those starting the string— to  \vec{0} . So suppose instead that there is a transformation t of index k on some space, such that the space has a t-string basis where all of the strings are shorter than length  k . Because t has index k, there is a vector  \vec{v} such that  t^{k-1}(\vec{v})\neq\vec{0} . Represent \vec{v} as a linear combination of basis elements and apply  t^{k-1} . We are supposing that  t^{k-1} sends each basis element to  \vec{0} but that it does not send  \vec{v} to  \vec{0} . That is impossible.

We shall show that every nilpotent map has an associated string basis. Then our goal theorem, that every nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones, is immediate, as in Example 2.5.

Looking for a counterexample, a nilpotent map without an associated basis of disjoint strings, will suggest the idea for the proof. Consider the map  t:\mathbb{C}^5\to \mathbb{C}^5 with this action.

Linalg nilpotent transformation.png                 {\rm Rep}_{\mathcal{E}_5,\mathcal{E}_5}(t)=
\begin{pmatrix}
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
1  &1  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &0  &0  &1  &0
\end{pmatrix}

Even after omitting the zero vector, these three strings aren't disjoint, but that doesn't end hope of finding a t-string basis. It only means that  \mathcal{E}_5 will not do for the string basis.

To find a basis that will do, we first find the number and lengths of its strings. Since t's index of nilpotency is two, Lemma 2.12 says that at least one string in the basis has length two. Thus the map must act on a string basis in one of these two ways.


\begin{array}{ccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{0}  \\
\vec{\beta}_3 &\mapsto &\vec{\beta}_4 &\mapsto &\vec{0}  \\
\vec{\beta}_5 &\mapsto &\vec{0}
\end{array}                
\begin{array}{ccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{0}  \\
\vec{\beta}_3 &\mapsto &\vec{0}   \\
\vec{\beta}_4 &\mapsto &\vec{0}   \\
\vec{\beta}_5 &\mapsto &\vec{0}
\end{array}

Now, the key point. A transformation with the left-hand action has a nullspace of dimension three since that's how many basis vectors are sent to zero. A transformation with the right-hand action has a nullspace of dimension four. Using the matrix representation above, calculation of t's nullspace


\mathcal{N}(t)=
\{\begin{pmatrix} x \\ -x \\ z \\ 0 \\ r \end{pmatrix}\,\big|\, x,z,r\in\mathbb{C} \}

shows that it is three-dimensional, meaning that we want the left-hand action.

To produce a string basis, first pick  \vec{\beta}_2 and  \vec{\beta}_4 from  \mathcal{R}(t)\cap\mathcal{N}(t)


\vec{\beta}_2=\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}\qquad
\vec{\beta}_4=\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}

(other choices are possible, just be sure that  \{\vec{\beta}_2,\vec{\beta}_4\} is linearly independent). For  \vec{\beta}_5 pick a vector from  \mathcal{N}(t) that is not in the span of  \{ \vec{\beta}_2,\vec{\beta}_4 \} .


\vec{\beta}_5=\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \\ 0 \end{pmatrix}

Finally, take  \vec{\beta}_1 and  \vec{\beta}_3 such that  t(\vec{\beta}_1)=\vec{\beta}_2 and  t(\vec{\beta}_3)=\vec{\beta}_4 .


\vec{\beta}_1=\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}\qquad
\vec{\beta}_3=\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}

Now, with respect to  B=\langle \vec{\beta}_1,\ldots,\vec{\beta}_5 \rangle  , the matrix of t is as desired.


{\rm Rep}_{B,B}(t)=
\left(\begin{array}{cc|cc|c}
0  &0  &0  &0  &0  \\
1  &0  &0  &0  &0  \\  \hline
0  &0  &0  &0  &0  \\
0  &0  &1  &0  &0  \\  \hline
0  &0  &0  &0  &0
\end{array}\right)
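
A quick computation confirms that this basis does the job. In the sketch below (assuming NumPy; B is the matrix whose columns are  \vec{\beta}_1,\ldots,\vec{\beta}_5 ) the change of basis B^{-1}TB reproduces the displayed representation.

import numpy as np

T = np.array([[0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0]], dtype=float)
B = np.array([[0, 0, 0, 0,  1],     # columns are beta_1, ..., beta_5
              [1, 0, 0, 0, -1],
              [0, 1, 0, 0,  0],
              [0, 0, 1, 0,  0],
              [0, 0, 0, 1,  0]], dtype=float)

# Rep_{B,B}(t) = B^{-1} T B: two blocks of subdiagonal ones, as desired
print(np.round(np.linalg.inv(B) @ T @ B))
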
Theorem 2.13

Any nilpotent transformation t is associated with a  t -string basis. While the basis is not unique, the number and the lengths of the strings are determined by  t .

This illustrates the proof. Basis vectors are categorized into kind 1, kind 2, and kind 3. They are also shown as squares or circles, according to whether they are in the nullspace or not.

Linalg nilpotent has string basis.png
Proof

Fix a vector space V; we will argue by induction on the index of nilpotency of t:V\to V. If that index is  1 then  t is the zero map and any basis is a string basis \vec{\beta}_1\mapsto\vec{0}, ..., \vec{\beta}_n\mapsto\vec{0}. For the inductive step, assume that the theorem holds for any transformation with an index of nilpotency between 1 and  k-1 and consider the index k case.

First observe that the restriction to the rangespace  t:\mathcal{R}(t)\to \mathcal{R}(t) is also nilpotent, of index  k-1 . Apply the inductive hypothesis to get a string basis for  \mathcal{R}(t) , where the number and length of the strings is determined by  t .


B=\langle \vec{\beta}_1,t(\vec{\beta}_1),\dots,
t^{h_1}(\vec{\beta}_1) \rangle \!\mathbin{{}^\frown}\!
\langle \vec{\beta}_2,\ldots,t^{h_2}(\vec{\beta}_2) \rangle \!\mathbin{{}^\frown}\!\cdots\!\mathbin{{}^\frown}\!
\langle \vec{\beta}_i,\ldots,t^{h_i}(\vec{\beta}_i) \rangle

(In the illustration these are the basis vectors of kind  1 , so there are i strings shown with this kind of basis vector.)

Second, note that taking the final nonzero vector in each string gives a basis  C=\langle t^{h_1}(\vec{\beta}_1),\dots,t^{h_i}(\vec{\beta}_i) \rangle  for  \mathcal{R}(t)\cap\mathcal{N}(t) . (These are illustrated with  1 's in squares.) For, a member of  \mathcal{R}(t) is mapped to zero if and only if it is a linear combination of those basis vectors that are mapped to zero. Extend  C to a basis for all of  \mathcal{N}(t) .


\hat{C}=C\!\mathbin{{}^\frown}\!\langle \vec{\xi}_1,\dots,\vec{\xi}_p \rangle

(The \vec{\xi}'s are the vectors of kind  2 so that  \hat{C} is the set of squares.) While many choices are possible for the  \vec{\xi} 's, their number  p is determined by the map  t as it is the dimension of  \mathcal{N}(t) minus the dimension of  \mathcal{R}(t)\cap\mathcal{N}(t) .

Finally,  B\!\mathbin{{}^\frown}\!\hat{C} is a basis for  \mathcal{R}(t)+\mathcal{N}(t) because any sum of something in the rangespace with something in the nullspace can be represented using elements of  B for the rangespace part and elements of  \hat{C} for the part from the nullspace. Note that

\begin{array}{rl}
\dim\big(\mathcal{R}(t)+\mathcal{N}(t)\big)
&=
\dim (\mathcal{R}(t))+\dim (\mathcal{N}(t))
-\dim(\mathcal{R}(t)\cap\mathcal{N}(t))  \\
&=
\mathop{\mbox{rank}} (t)+\text{nullity}\, (t)-i          \\
&=
\dim (V)-i
\end{array}

and so  B\!\mathbin{{}^\frown}\!\hat{C} can be extended to a basis for all of  V by the addition of  i more vectors. Specifically, remember that each of  \vec{\beta}_1,\dots,\vec{\beta}_i is in  \mathcal{R}(t) , and extend  B\!\mathbin{{}^\frown}\!\hat{C} with vectors  \vec{v}_1,\dots,\vec{v}_i such that  t(\vec{v}_1)=\vec{\beta}_1,\dots,t(\vec{v}_i)=\vec{\beta}_i . (In the illustration, these are the  3 's.) The check that linear independence is preserved by this extension is Problem 13.

Corollary 2.14

Every nilpotent matrix is similar to a matrix that is all zeros except for blocks of subdiagonal ones. That is, every nilpotent map is represented with respect to some basis by such a matrix.

This form is unique in the sense that if a nilpotent matrix is similar to two such matrices then those two simply have their blocks ordered differently. Thus this is a canonical form for the similarity classes of nilpotent matrices provided that we order the blocks, say, from longest to shortest.

Example 2.15

The matrix


M=\begin{pmatrix}
1  &-1  \\
1  &-1
\end{pmatrix}

has an index of nilpotency of two, as this calculation shows.


\begin{array}{c|cc}
 p   & M^p   & \mathcal{N}(M^p)     \\  \hline
 1 
&  M=\begin{pmatrix}
1  &-1  \\
1  &-1
\end{pmatrix}  
& \{\begin{pmatrix} x \\ x \end{pmatrix}\,\big|\,
x\in\mathbb{C}\}     \\
 2 
&  M^2=\begin{pmatrix}
0  &0   \\
0  &0
\end{pmatrix}  
& \mathbb{C}^2
\end{array}

The calculation also describes how a map m represented by M must act on any string basis. With one map application the nullspace has dimension one and so one vector of the basis is sent to zero. On a second application, the nullspace has dimension two and so the other basis vector is sent to zero. Thus, the action of the map is \vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0} and the canonical form of the matrix is this.


\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix}

We can exhibit such an m-string basis and the change of basis matrices witnessing the matrix similarity. For the basis, take  M to represent m with respect to the standard bases, pick a  \vec{\beta}_2\in\mathcal{N}(m) and also pick a  \vec{\beta}_1 so that  m(\vec{\beta}_1)=\vec{\beta}_2 .


\vec{\beta}_2=\begin{pmatrix} 1 \\ 1 \end{pmatrix}
\qquad
\vec{\beta}_1=\begin{pmatrix} 1 \\ 0 \end{pmatrix}

(If we take M to be a representative with respect to some nonstandard bases then this picking step is just more messy.) Recall the similarity diagram.

Linalg similarity cd 1.png

The canonical form equals  {\rm Rep}_{B,B}(m)=PMP^{-1} , where


P^{-1}
={\rm Rep}_{B,\mathcal{E}_2}(\mbox{id})
=\begin{pmatrix}
1  &1  \\
0  &1
\end{pmatrix}
\qquad
P=(P^{-1})^{-1}
=\begin{pmatrix}
1  &-1  \\
0  &1
\end{pmatrix}

and the verification of the matrix calculation is routine.


\begin{pmatrix}
1  &-1  \\
0  &1
\end{pmatrix}
\begin{pmatrix}
1  &-1  \\
1  &-1
\end{pmatrix}
\begin{pmatrix}
1  &1  \\
0  &1
\end{pmatrix}=
\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix}
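
That check, and the nilpotency of M, can also be done by machine. A sketch assuming NumPy:

import numpy as np

M = np.array([[1, -1],
              [1, -1]], dtype=float)
P_inv = np.array([[1, 1],            # Rep_{B,E_2}(id): columns are beta_1, beta_2
                  [0, 1]], dtype=float)
P = np.linalg.inv(P_inv)

print(P @ M @ P_inv)                                  # the canonical form above
assert np.allclose(np.linalg.matrix_power(M, 2), 0)   # index of nilpotency two
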
Example 2.16

The matrix


\begin{pmatrix}
0  &0  &0  &0  &0  \\
1  &0  &0  &0  &0  \\
-1 &1  &1  &-1 &1  \\
0  &1  &0  &0  &0  \\
1  &0  &-1 &1  &-1
\end{pmatrix}

is nilpotent. These calculations show the nullspaces growing.


\begin{array}{c|cc}
 p   & N^p   & \mathcal{N}(N^p)     \\  \hline
 1 
&\begin{pmatrix}
0  &0  &0  &0  &0  \\
1  &0  &0  &0  &0  \\
-1 &1  &1  &-1 &1  \\
0  &1  &0  &0  &0  \\
1  &0  &-1 &1  &-1
\end{pmatrix}  
& \{\begin{pmatrix} 0 \\ 0 \\ u-v \\ u \\ v \end{pmatrix} \,\big|\, u,v\in\mathbb{C}\}   \\
 2 
&\begin{pmatrix}
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
1  &0  &0  &0  &0  \\
1  &0  &0  &0  &0  \\
0  &0  &0  &0  &0
\end{pmatrix}  
& \{\begin{pmatrix} 0 \\ y \\ z \\ u \\ v \end{pmatrix}
\,\big|\, y,z,u,v\in\mathbb{C}\}    \\
 3 
&\textit{--zero matrix--}
& \mathbb{C}^5 
\end{array}

That table shows that any string basis must satisfy: the nullspace after one map application has dimension two so two basis vectors are sent directly to zero, the nullspace after the second application has dimension four so two additional basis vectors are sent to zero by the second iteration, and the nullspace after three applications is of dimension five so the final basis vector is sent to zero in three hops.


\begin{array}{ccccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{\beta}_3
&\mapsto &\vec{0}  \\
\vec{\beta}_4 &\mapsto &\vec{\beta}_5 &\mapsto &\vec{0}
\end{array}

To produce such a basis, first pick two independent vectors from  \mathcal{N}(n)


\vec{\beta}_3=\begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{pmatrix} \quad
\vec{\beta}_5=\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}

then add  \vec{\beta}_2,\vec{\beta}_4\in\mathcal{N}(n^2) such that  n(\vec{\beta}_2)=\vec{\beta}_3 and  n(\vec{\beta}_4)=\vec{\beta}_5


\vec{\beta}_2=\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad
\vec{\beta}_4=\begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}

and finish by adding  \vec{\beta}_1\in\mathcal{N}(n^3)=\mathbb{C}^5 such that  n(\vec{\beta}_1)=\vec{\beta}_2 .


\vec{\beta}_1=\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}
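
The whole construction is easy to verify by machine. This sketch (assuming NumPy; the variable names are ours) checks the nullity table and the action on the two strings.

import numpy as np

N = np.array([[ 0, 0,  0,  0,  0],
              [ 1, 0,  0,  0,  0],
              [-1, 1,  1, -1,  1],
              [ 0, 1,  0,  0,  0],
              [ 1, 0, -1,  1, -1]], dtype=float)

# nullities of the powers: 2, 4, 5, so two strings, of lengths three and two
for p in (1, 2, 3):
    print(p, 5 - np.linalg.matrix_rank(np.linalg.matrix_power(N, p)))

b1, b2, b3 = np.array([1., 0, 1, 0, 0]), np.array([0., 1, 0, 0, 0]), np.array([0., 0, 1, 1, 0])
b4, b5 = np.array([0., 1, 0, 1, 0]), np.array([0., 0, 0, 1, 1])
assert np.allclose(N @ b1, b2) and np.allclose(N @ b2, b3) and np.allclose(N @ b3, 0)
assert np.allclose(N @ b4, b5) and np.allclose(N @ b5, 0)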

Exercises

This exercise is recommended for all readers.
Problem 1

What is the index of nilpotency of the left-shift operator, here acting on the space of triples of reals?


(x,y,z)\mapsto(0,x,y)
This exercise is recommended for all readers.
Problem 2

For each string basis state the index of nilpotency and give the dimension of the rangespace and nullspace of each iteration of the nilpotent map.

  1. 
\begin{array}{ccccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{0}  \\
\vec{\beta}_3 &\mapsto &\vec{\beta}_4 &\mapsto &\vec{0}
\end{array}
  2. 
\begin{array}{ccccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{\beta}_3
&\mapsto &\vec{0}  \\
\vec{\beta}_4 &\mapsto &\vec{0} \\
\vec{\beta}_5 &\mapsto &\vec{0} \\
\vec{\beta}_6 &\mapsto &\vec{0}
\end{array}
  3. 
\begin{array}{ccccccccc}
\vec{\beta}_1 &\mapsto &\vec{\beta}_2 &\mapsto &\vec{\beta}_3
&\mapsto &\vec{0}
\end{array}

Also give the canonical form of the matrix.

Problem 3

Decide which of these matrices are nilpotent.

  1. \begin{pmatrix}
-2  &4  \\
-1  &2
\end{pmatrix}
  2. \begin{pmatrix}
3  &1  \\
1  &3
\end{pmatrix}
  3. \begin{pmatrix}
-3  &2  &1  \\
-3  &2  &1  \\
-3  &2  &1
\end{pmatrix}
  4. \begin{pmatrix}
1  &1  &4  \\
3  &0  &-1 \\
5  &2  &7
\end{pmatrix}
  5. \begin{pmatrix}
45  &-22  &-19  \\
33  &-16  &-14  \\
69  &-34  &-29
\end{pmatrix}
This exercise is recommended for all readers.
Problem 4

Find the canonical form of this matrix.


\begin{pmatrix}
0  &1  &1  &0  &1  \\
0  &0  &1  &1  &1  \\
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0
\end{pmatrix}
This exercise is recommended for all readers.
Problem 5

Consider the matrix from Example 2.16.

  1. Use the action of the map on the string basis to give the canonical form.
  2. Find the change of basis matrices that bring the matrix to canonical form.
  3. Use the answer in the prior item to check the answer in the first item.
This exercise is recommended for all readers.
Problem 6

Each of these matrices is nilpotent.

  1. 
\begin{pmatrix}
1/2  &-1/2  \\
1/2  &-1/2
\end{pmatrix}
  2. 
\begin{pmatrix}
0  &0  &0  \\
0  &-1 &1  \\
0  &-1 &1
\end{pmatrix}
  3. 
\begin{pmatrix}
-1  &1  &-1 \\
1  &0  &1  \\
1  &-1 &1
\end{pmatrix}

Put each in canonical form.

Problem 7

Describe the effect of left or right multiplication by a matrix that is in the canonical form for nilpotent matrices.

Problem 8

Is nilpotence invariant under similarity? That is, must a matrix similar to a nilpotent matrix also be nilpotent? If so, with the same index?

This exercise is recommended for all readers.
Problem 9

Show that the only eigenvalue of a nilpotent matrix is zero.

Problem 10

Is there a nilpotent transformation of index three on a two-dimensional space?

Problem 11

In the proof of Theorem 2.13, why isn't the proof's base case that the index of nilpotency is zero?

This exercise is recommended for all readers.
Problem 12

Let  t:V\to V be a linear transformation and suppose  \vec{v}\in V is such that  t^k(\vec{v})=\vec{0} but  t^{k-1}(\vec{v})\neq\vec{0} . Consider the t-string \langle \vec{v},t(\vec{v}),\dots,t^{k-1}(\vec{v}) \rangle .

  1. Prove that  t is a transformation on the span of the set of vectors in the string, that is, prove that  t restricted to the span has a range that is a subset of the span. We say that the span is a  t -invariant subspace.
  2. Prove that the restriction is nilpotent.
  3. Prove that the t-string is linearly independent and so is a basis for its span.
  4. Represent the restriction map with respect to the t-string basis.
Problem 13

Finish the proof of Theorem 2.13.

Problem 14

Show that the terms "nilpotent transformation" and "nilpotent matrix", as given in Definition 2.6, fit with each other: a map is nilpotent if and only if it is represented by a nilpotent matrix. (Is it that a transformation is nilpotent if and only if there is a basis such that the map's representation with respect to that basis is a nilpotent matrix, or that any representation is a nilpotent matrix?)

Problem 15

Let  T be nilpotent of index four. How big can the rangespace of  T^3 be?

Problem 16

Recall that similar matrices have the same eigenvalues. Show that the converse does not hold.

Problem 17

Prove a nilpotent matrix is similar to one that is all zeros except for blocks of super-diagonal ones.

This exercise is recommended for all readers.
Problem 18

Prove that if a transformation has the same rangespace as nullspace, then the dimension of its domain is even.

Problem 19

Prove that if two nilpotent matrices commute then their product and sum are also nilpotent.

Problem 20

Consider the transformation of  \mathcal{M}_{n \! \times \! n} given by  t_S(T)=ST-TS where  S is an  n \! \times \! n matrix. Prove that if  S is nilpotent then so is  t_S .

Problem 21

Show that if  N is nilpotent then  I-N is invertible. Is that "only if" also?

References

  1. More information on representatives is in the appendix.
  2. More information on function iteration is in the appendix.
  3. More information on map restrictions is in the appendix.

Section IV - Jordan Form

This section uses material from three optional subsections: Direct Sum, Determinants Exist, and Other Formulas for the Determinant.

The chapter on linear maps shows that every h:V\to W can be represented by a partial-identity matrix with respect to some bases B\subset V and D\subset W. This chapter revisits this issue in the special case that the map is a linear transformation t:V\to V. Of course, the general result still applies but with the codomain and domain equal we naturally ask about having the two bases also be equal. That is, we want a canonical form to represent transformations as {\rm Rep}_{B,B}(t).

After a brief review section, we began by noting that a block partial identity form matrix is not always obtainable in this B,B case. We therefore considered the natural generalization, diagonal matrices, and showed that if its eigenvalues are distinct then a map or matrix can be diagonalized. But we also gave an example of a matrix that cannot be diagonalized and in the section prior to this one we developed that example. We showed that a linear map is nilpotent— if we take higher and higher powers of the map or matrix then we eventually get the zero map or matrix— if and only if there is a basis on which it acts via disjoint strings. That led to a canonical form for nilpotent matrices.

Now, this section concludes the chapter. We will show that the two cases we've studied are exhaustive in that for any linear transformation there is a basis such that the matrix representation {\rm Rep}_{B,B}(t) is the sum of a diagonal matrix and a nilpotent matrix in its canonical form.


1 - Polynomials of Maps and Matrices

Recall that the set of square matrices is a vector space under entry-by-entry addition and scalar multiplication and that this space  \mathcal{M}_{n \! \times \! n} has dimension  n^2 . Thus, for any  n \! \times \! n matrix T the  n^2+1 -member set  \{I,T,T^2,\dots,T^{n^2} \} is linearly dependent and so there are scalars  c_0,\dots,c_{n^2} such that c_{n^2}T^{n^2}+\dots+c_1T+c_0I is the zero matrix.

Remark 1.1

This observation is small but important. It says that every transformation exhibits a generalized nilpotency: the powers of a square matrix cannot climb forever without a "repeat".

Example 1.2

Rotation of plane vectors  \pi/6 radians counterclockwise is represented with respect to the standard basis by


T=
\begin{pmatrix}
\sqrt{3}/2  &-1/2  \\
1/2         &\sqrt{3}/2
\end{pmatrix}

and verifying that  0T^4+0T^3+1T^2-\sqrt{3}T+1I equals the zero matrix is easy.
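
That verification takes only a couple of lines. A sketch assuming NumPy (floating point, so the result is zero only up to roundoff):

import numpy as np

theta = np.pi / 6
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# T^2 - sqrt(3) T + I is the zero matrix, up to floating point error
Z = np.linalg.matrix_power(T, 2) - np.sqrt(3) * T + np.eye(2)
assert np.allclose(Z, 0)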

Definition 1.3

For any polynomial  f(x)=c_nx^n+\dots+c_1x+c_0 , if  t is a linear transformation then  f(t) is the transformation  c_nt^n+\dots+c_1t+c_0(\mbox{id}) on the same space, and if  T is a square matrix then  f(T) is the matrix  c_nT^n+\dots+c_1T+c_0I .

Remark 1.4

If, for instance,  f(x)=x-3 , then most authors write in the identity matrix:  f(T)=T-3I . But most authors don't write in the identity map:  f(t)=t-3 . In this book we shall also observe this convention.

Of course, if  T={\rm Rep}_{B,B}(t) then  f(T)={\rm Rep}_{B,B}(f(t)) , which follows from the relationships  T^j={\rm Rep}_{B,B}(t^j) , and  cT={\rm Rep}_{B,B}(ct) , and  T_1+T_2 ={\rm Rep}_{B,B}(t_1+t_2) .

As Example 1.2 shows, there may be polynomials of degree smaller than n^2 that zero the map or matrix.

Definition 1.5

The minimal polynomial  m(x) of a transformation  t or a square matrix  T is the polynomial of least degree and with leading coefficient  1 such that  m(t) is the zero map or  m(T) is the zero matrix.

A minimal polynomial always exists by the observation opening this subsection. A minimal polynomial is unique by the "with leading coefficient  1 " clause. This is because if there are two polynomials  m(x) and  \hat{m}(x) that are both of the minimal degree to make the map or matrix zero (and thus are of equal degree), and both have leading  1 's, then their difference  m(x)-\hat{m}(x) has a smaller degree than either and still sends the map or matrix to zero. Thus  m(x)-\hat{m}(x) is the zero polynomial and the two are equal. (The leading coefficient requirement also prevents a minimal polynomial from being the zero polynomial.)

Example 1.6

We can see that  m(x)=x^2-\sqrt{3}x+1 is minimal for the matrix of Example 1.2 by computing the powers of T up to the power n^2=4.


T^2=
\begin{pmatrix}
1/2         &-\sqrt{3}/2  \\
\sqrt{3}/2  &1/2
\end{pmatrix}
\quad
T^3=
\begin{pmatrix}
0           &-1           \\
1           &0
\end{pmatrix}
\quad
T^4=
\begin{pmatrix}
-1/2        &-\sqrt{3}/2  \\
\sqrt{3}/2  &-1/2
\end{pmatrix}

Next, put  c_4T^4+c_3T^3+c_2T^2+c_1T+c_0I equal to the zero matrix


\begin{array}{*{5}{rc}r}
-(1/2)c_4  &  &             &+ &(1/2)c_2
&+ &(\sqrt{3}/2)c_1  &+  &c_0  &=  &0      \\
-(\sqrt{3}/2)c_4  &- &c_3 &- &(\sqrt{3}/2)c_2
&- &(1/2)c_1  &   &          &=  &0        \\
(\sqrt{3}/2)c_4  &+ &c_3 &+ &(\sqrt{3}/2)c_2
&+ &(1/2)c_1  &   &            &=  &0      \\
-(1/2)c_4  &  &             &+ &(1/2)c_2
&+ &(\sqrt{3}/2)c_1  &+  &c_0  &=  &0
\end{array}

and use Gauss' method.


\begin{array}{*{5}{rc}r}
c_4  &  &             &- &c_2
&- &\sqrt{3}c_1  &-  &2c_0  &=  &0      \\
&  &c_3 &+ &\sqrt{3}c_2
&+ &2c_1  &+  &\sqrt{3}c_0 &=  &0
\end{array}

Setting  c_4 ,  c_3 , and  c_2 to zero forces  c_1 and  c_0 to also come out as zero. To get a leading one, the most we can do is to set  c_4 and  c_3 to zero. Thus the minimal polynomial is quadratic.
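
This search can be automated with exact arithmetic. The sketch below (assuming SymPy is available; it mirrors the example's own reduction rather than calling any built-in minimal-polynomial routine) stacks the flattened powers I, T, T^2, ... as columns and finds the least degree admitting a monic dependence.

import sympy as sp

x = sp.symbols('x')
T = sp.Matrix([[sp.sqrt(3)/2, sp.Rational(-1, 2)],
               [sp.Rational(1, 2), sp.sqrt(3)/2]])
n = T.shape[0]

for d in range(1, n*n + 1):
    # columns are the flattened powers I, T, ..., T^d
    A = sp.Matrix.hstack(*[(T**j).reshape(n*n, 1) for j in range(d + 1)])
    null = A.nullspace()
    if null:
        c = null[0] / null[0][d]                                  # leading coefficient 1
        print(sp.expand(sum(c[j]*x**j for j in range(d + 1))))    # x**2 - sqrt(3)*x + 1
        break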

Using the method of that example to find the minimal polynomial of a  3 \! \times \! 3 matrix would mean doing Gaussian reduction on a system with nine equations in ten unknowns. We shall develop an alternative. To begin, note that we can break a polynomial of a map or a matrix into its components.

Lemma 1.7

Suppose that the polynomial  f(x)=c_nx^n+\dots+c_1x+c_0 factors as  k(x-\lambda_1)^{q_1}\cdots(x-\lambda_\ell)^{q_\ell} . If  t is a linear transformation then these two are equal maps.


c_nt^n+\dots+c_1t+c_0
=
k\cdot(t-\lambda_1)^{q_1}\circ \cdots\circ
(t-\lambda_\ell)^{q_\ell}

Consequently, if  T is a square matrix then  f(T) and  k\cdot(T-\lambda_1I)^{q_1}\cdots(T-\lambda_\ell I)^{q_\ell} are equal matrices.

Proof

This argument is by induction on the degree of the polynomial. The cases where the polynomial is of degree  0 and  1 are clear. The full induction argument is Problem 21 but the degree two case gives its sense.

A quadratic polynomial factors into two linear terms  f(x)=k(x-\lambda_1)\cdot(x-\lambda_2)
=k(x^2-(\lambda_1+\lambda_2)x+\lambda_1\lambda_2) (the roots \lambda_1 and \lambda_2 might be equal). We can check that substituting  t for  x in the factored and unfactored versions gives the same map.

\begin{array}{rl}
\bigl(k\cdot(t-\lambda_1)\circ (t-\lambda_2)\bigr)\,(\vec{v})
&=\bigl(k\cdot(t-\lambda_1)\bigr)\,(t(\vec{v})-\lambda_2\vec{v})    \\
&=k\cdot\bigl(t(t(\vec{v}))-t(\lambda_2\vec{v})
-\lambda_1 t(\vec{v})+\lambda_1\lambda_2\vec{v}\bigr)    \\
&=k\cdot \bigl(t\circ t\,(\vec{v})-(\lambda_1+\lambda_2)t(\vec{v})
+\lambda_1\lambda_2\vec{v}\bigr)                    \\
&=k\cdot(t^2-(\lambda_1+\lambda_2)t+\lambda_1\lambda_2)\,(\vec{v})
\end{array}

The third equality holds because the scalar \lambda_2 comes out of the second term, as  t is linear.

In particular, if a minimal polynomial m(x) for a transformation t factors as m(x)=(x-\lambda_1)^{q_1}\cdots (x-\lambda_\ell)^{q_\ell} then  m(t)=(t-\lambda_1)^{q_1}\circ \cdots\circ
(t-\lambda_\ell)^{q_\ell} is the zero map. Since  m(t) sends every vector to zero, at least one of the maps  t-\lambda_i sends some nonzero vectors to zero. So, too, in the matrix case— if m is minimal for T then  m(T)=(T-\lambda_1I)^{q_1}\cdots (T-\lambda_\ell I)^{q_\ell} is the zero matrix and at least one of the matrices T-\lambda_iI sends some nonzero vectors to zero. Rewording both cases: at least some of the  \lambda_i are eigenvalues. (See Problem 17.)

Recall how we have earlier found eigenvalues. We have looked for \lambda such that T\vec{v}=\lambda\vec{v} by considering the equation \vec{0}=T\vec{v}-x\vec{v}=(T-xI)\vec{v} and computing the determinant of the matrix T-xI. That determinant is a polynomial in x, the characteristic polynomial, whose roots are the eigenvalues. The major result of this subsection, the next result, is that there is a connection between this characteristic polynomial and the minimal polynomial. This result expands on the prior paragraph's insight that some roots of the minimal polynomial are eigenvalues by asserting that every root of the minimal polynomial is an eigenvalue and further that every eigenvalue is a root of the minimal polynomial (this is because it says "1\leq q_i" and not just "0\leq q_i").

Theorem 1.8 (Cayley-Hamilton)

If the characteristic polynomial of a transformation or square matrix factors into


k\cdot (x-\lambda_1)^{p_1}(x-\lambda_2)^{p_2}\cdots(x-\lambda_\ell)^{p_\ell}

then its minimal polynomial factors into


(x-\lambda_1)^{q_1}(x-\lambda_2)^{q_2}\cdots(x-\lambda_\ell)^{q_\ell}

where  1\leq q_i \leq p_i for each  i between  1 and  \ell .

The proof takes up the next three lemmas. Although they are stated only in matrix terms, they apply equally well to maps. We give the matrix version only because it is convenient for the first proof.

The first result is the key— some authors call it the Cayley-Hamilton Theorem and call Theorem 1.8 above a corollary. For the proof, observe that a matrix of polynomials can be thought of as a polynomial with matrix coefficients.


\begin{pmatrix}
2x^2+3x-1  &x^2+2    \\
3x^2+4x+1  &4x^2+x+1
\end{pmatrix}
= \begin{pmatrix}
2  &1  \\
3  &4
\end{pmatrix}x^2
+ \begin{pmatrix}
3  &0  \\
4  &1
\end{pmatrix}x
+ \begin{pmatrix}
-1  &2  \\
1  &1
\end{pmatrix}
Lemma 1.9

If  T is a square matrix with characteristic polynomial  c(x) then  c(T) is the zero matrix.

Proof

Let  C be  T-xI , the matrix whose determinant is the characteristic polynomial  c(x)=c_nx^n+\dots+c_1x+c_0 .


C=\begin{pmatrix}
t_{1,1}-x        &t_{1,2}   &\ldots        \\
t_{2,1}          &t_{2,2}-x               \\
\vdots           &          &\ddots       \\
&          &       &t_{n,n}-x
\end{pmatrix}

Recall that the product of the adjoint of a matrix with the matrix itself is the determinant of that matrix times the identity.


c(x)\cdot I
=\text{adj}\, (C)C
=\text{adj}\, (C)(T-xI)
=\text{adj}\, (C)T- \text{adj}\,(C)\cdot x
\qquad(*)

The entries of  \text{adj}\, (C) are polynomials, each of degree at most  n-1 since the minors of a matrix drop a row and column. Rewrite it, as suggested above, as  \text{adj}\, (C)=C_{n-1}x^{n-1}+\dots+C_1x+C_0 where each  C_i is a matrix of scalars. The left and right ends of equation (*) above give this.

\begin{array}{rl}
c_nIx^n+c_{n-1}Ix^{n-1}+\dots+c_1Ix+c_0I
&=(C_{n-1}T)x^{n-1}+\dots+(C_1T)x+C_0T  \\
&\quad-C_{n-1}x^n-C_{n-2}x^{n-1}-\dots-C_0x
\end{array}

Equate the coefficients of  x^n , the coefficients of x^{n-1}, etc.

\begin{array}{rl}
c_nI
&=-C_{n-1}    \\
c_{n-1}I
&=-C_{n-2}+C_{n-1}T    \\
&\vdots             \\
c_{1}I
&=-C_{0}+C_{1}T    \\
c_{0}I
&=C_{0}T
\end{array}

Multiply (from the right) both sides of the first equation by  T^n , both sides of the second equation by  T^{n-1} , etc. Add. The result on the left is  c_nT^n+c_{n-1}T^{n-1}+\dots+c_0I , and the result on the right is the zero matrix.

We sometimes refer to that lemma by saying that a matrix or map satisfies its characteristic polynomial.
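
A reader can spot-check the lemma on any particular matrix. This sketch (assuming SymPy; the matrix T is an arbitrary choice of ours) evaluates the characteristic polynomial at the matrix by Horner's scheme and confirms that the result is zero.

import sympy as sp

x = sp.symbols('x')
T = sp.Matrix([[2, 0, 1],             # any square matrix will do
               [1, 3, 2],
               [0, 1, 1]])

coeffs = T.charpoly(x).all_coeffs()   # leading coefficient first
n = T.shape[0]
cT = sp.zeros(n, n)
for a in coeffs:                      # Horner evaluation of c(x) at x = T
    cT = cT * T + a * sp.eye(n)
assert cT == sp.zeros(n, n)           # T satisfies its characteristic polynomial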

Lemma 1.10

Where  f(x) is a polynomial, if  f(T) is the zero matrix then  f(x) is divisible by the minimal polynomial of  T . That is, any polynomial satisfied by  T is divisible by  T 's minimal polynomial.

Proof

Let  m(x) be minimal for  T . The Division Theorem for Polynomials gives  f(x)=q(x)m(x)+r(x) where the degree of  r is strictly less than the degree of  m . Plugging  T in shows that  r(T) is the zero matrix, because T satisfies both f and m. That contradicts the minimality of  m unless  r is the zero polynomial.

Combining the prior two lemmas gives that the minimal polynomial divides the characteristic polynomial. Thus, any root of the minimal polynomial is also a root of the characteristic polynomial. That is, so far we have that if  m(x)=(x-\lambda_1)^{q_1}\dots(x-\lambda_i)^{q_i} then  c(x) must have the form  (x-\lambda_1)^{p_1}\dots(x-\lambda_i)^{p_i}
(x-\lambda_{i+1})^{p_{i+1}}\dots(x-\lambda_\ell)^{p_\ell} where each  q_j is less than or equal to  p_j . The proof of the Cayley-Hamilton Theorem is finished by showing that in fact the characteristic polynomial has no extra roots \lambda_{i+1}, etc.

Lemma 1.11

Each linear factor of the characteristic polynomial of a square matrix is also a linear factor of the minimal polynomial.

Proof

Let  T be a square matrix with minimal polynomial  m(x) and assume that  x-\lambda is a factor of the characteristic polynomial of  T , that is, assume that  \lambda is an eigenvalue of  T . We must show that x-\lambda is a factor of m, that is, that m(\lambda)=0.

In general, where  \lambda is associated with the eigenvector  \vec{v} , for any polynomial function  f(x) , application of the matrix  f(T) to  \vec{v} equals the result of multiplying  \vec{v} by the scalar  f(\lambda) . (For instance, if T has eigenvalue \lambda associated with the eigenvector \vec{v} and f(x)=x^2+2x+3 then  (T^2+2T+3I)\,(\vec{v})=T^2(\vec{v})+2T(\vec{v})+3\vec{v}=
\lambda^2\cdot\vec{v}+2\lambda\cdot\vec{v}+3\cdot\vec{v}=
(\lambda^2+2\lambda+3)\cdot\vec{v} .) Now, as  m(T) is the zero matrix,  \vec{0}=m(T)(\vec{v})=m(\lambda)\cdot\vec{v} and therefore  m(\lambda)=0 .

Example 1.12

We can use the Cayley-Hamilton Theorem to help find the minimal polynomial of this matrix.


T=
\begin{pmatrix}
2  &0  &0  &1  \\
1  &2  &0  &2  \\
0  &0  &2  &-1 \\
0  &0  &0  &1
\end{pmatrix}

First, its characteristic polynomial  c(x)=(x-1)(x-2)^3 can be found with the usual determinant. Now, the Cayley-Hamilton Theorem says that  T 's minimal polynomial is either  (x-1)(x-2) or  (x-1)(x-2)^2 or  (x-1)(x-2)^3 . We can decide among the choices just by computing:


(T-1I)(T-2I)=\!
\begin{pmatrix}
1  &0  &0  &1  \\
1  &1  &0  &2  \\
0  &0  &1  &-1 \\
0  &0  &0  &0
\end{pmatrix}
\begin{pmatrix}
0  &0  &0  &1  \\
1  &0  &0  &2  \\
0  &0  &0  &-1 \\
0  &0  &0  &-1
\end{pmatrix}
=
\begin{pmatrix}
0  &0  &0  &0  \\
1  &0  &0  &1  \\
0  &0  &0  &0  \\
0  &0  &0  &0
\end{pmatrix}

and


(T-1I)(T-2I)^2=
\begin{pmatrix}
0  &0  &0  &0  \\
1  &0  &0  &1  \\
0  &0  &0  &0  \\
0  &0  &0  &0
\end{pmatrix}
\begin{pmatrix}
0  &0  &0  &1  \\
1  &0  &0  &2  \\
0  &0  &0  &-1 \\
0  &0  &0  &-1
\end{pmatrix}
=
\begin{pmatrix}
0  &0  &0  &0  \\
0  &0  &0  &0  \\
0  &0  &0  &0  \\
0  &0  &0  &0
\end{pmatrix}

and so  m(x)=(x-1)(x-2)^2 .
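
The two products above are quick to confirm with exact arithmetic. A sketch assuming SymPy:

import sympy as sp

T = sp.Matrix([[2, 0, 0,  1],
               [1, 2, 0,  2],
               [0, 0, 2, -1],
               [0, 0, 0,  1]])
I = sp.eye(4)

print((T - I) * (T - 2*I))                          # not the zero matrix
assert (T - I) * (T - 2*I)**2 == sp.zeros(4, 4)     # so m(x) = (x-1)(x-2)^2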

Exercises

This exercise is recommended for all readers.
Problem 1

What are the possible minimal polynomials if a matrix has the given characteristic polynomial?

  1. 8\cdot (x-3)^4
  2. (1/3)\cdot (x+1)^3(x-4)
  3. -1\cdot (x-2)^2(x-5)^2
  4.  5\cdot(x+3)^2(x-1)(x-2)^2

What is the degree of each possibility?

This exercise is recommended for all readers.
Problem 2

Find the minimal polynomial of each matrix.

  1.  \begin{pmatrix}
3  &0  &0  \\
1  &3  &0  \\
0  &0  &4
\end{pmatrix}
  2.  \begin{pmatrix}
3  &0  &0  \\
1  &3  &0  \\
0  &0  &3
\end{pmatrix}
  3.  \begin{pmatrix}
3  &0  &0  \\
1  &3  &0  \\
0  &1  &3
\end{pmatrix}
  4.  \begin{pmatrix}
2  &0  &1  \\
0  &6  &2  \\
0  &0  &2
\end{pmatrix}
  5.  \begin{pmatrix}
2  &2  &1  \\
0  &6  &2  \\
0  &0  &2
\end{pmatrix}
  6.  \begin{pmatrix}
-1 &4  &0  &0  &0  \\
0 &3  &0  &0  &0  \\
0 &-4 &-1 &0  &0  \\
3 &-9 &-4 &2  &-1 \\
1 &5  &4  &1  &4
\end{pmatrix}
Problem 3

Find the minimal polynomial of this matrix.


\begin{pmatrix}
0  &1  &0  \\
0  &0  &1  \\
1  &0  &0
\end{pmatrix}
This exercise is recommended for all readers.
Problem 4

What is the minimal polynomial of the differentiation operator d/dx on  \mathcal{P}_n ?

This exercise is recommended for all readers.
Problem 5

Find the minimal polynomial of matrices of this form


\begin{pmatrix}
\lambda  &0        &0          &\ldots  &        &0  \\
1        &\lambda  &0          &        &        &0  \\
0        &1        &\lambda                          \\
&         &           &\ddots                \\
&         &           &        &\lambda &0   \\
0        &0        &\ldots     &        &1       &\lambda
\end{pmatrix}

where the scalar \lambda is fixed (i.e., is not a variable).

Problem 6

What is the minimal polynomial of the transformation of  \mathcal{P}_n that sends  p(x) to  p(x+1) ?

Problem 7

What is the minimal polynomial of the map  \pi:\mathbb{C}^3\to \mathbb{C}^3 projecting onto the first two coordinates?

Problem 8

Find a  3 \! \times \! 3 matrix whose minimal polynomial is  x^2 .

Problem 9

What is wrong with this claimed proof of Lemma 1.9: "if  c(x)=\left|T-xI\right| then  c(T)=\left|T-TI\right|=0 "? (Cullen 1990)

Problem 10

Verify Lemma 1.9 for  2 \! \times \! 2 matrices by direct calculation.

This exercise is recommended for all readers.
Problem 11

Prove that the minimal polynomial of an  n \! \times \! n matrix has degree at most  n (not  n^2 as might be guessed from this subsection's opening). Verify that this maximum,  n , can happen.

This exercise is recommended for all readers.
Problem 12

The only eigenvalue of a nilpotent map is zero. Show that the converse statement holds.

Problem 13

What is the minimal polynomial of a zero map or matrix? Of an identity map or matrix?

This exercise is recommended for all readers.
Problem 14

Interpret the minimal polynomial of Example 1.2 geometrically.

Problem 15

What is the minimal polynomial of a diagonal matrix?

This exercise is recommended for all readers.
Problem 16

A projection is any transformation  t such that  t^2=t . (For instance, the transformation of the plane \mathbb{R}^2 projecting each vector onto its first coordinate will, if done twice, result in the same value as if it is done just once.) What is the minimal polynomial of a projection?

Problem 17

The first two items of this question are review.

  1. Prove that the composition of one-to-one maps is one-to-one.
  2. Prove that if a linear map is not one-to-one then at least one nonzero vector from the domain is sent to the zero vector in the codomain.
  3. Verify the statement, excerpted here, that precedes Theorem 1.8.

    ... if a minimal polynomial m(x) for a transformation t factors as m(x)=(x-\lambda_1)^{q_1}\cdots (x-\lambda_\ell)^{q_\ell} then  m(t)=(t-\lambda_1)^{q_1}\circ \cdots\circ
(t-\lambda_\ell)^{q_\ell} is the zero map. Since  m(t) sends every vector to zero, at least one of the maps  t-\lambda_i sends some nonzero vectors to zero. ... Rewording ...: at least some of the  \lambda_i are eigenvalues.

Problem 18

True or false: for a transformation on an  n dimensional space, if the minimal polynomial has degree  n then the map is diagonalizable.

Problem 19

Let f(x) be a polynomial. Prove that if A and B are similar matrices then f(A) is similar to f(B).

  1. Now show that similar matrices have the same characteristic polynomial.
  2. Show that similar matrices have the same minimal polynomial.
  3. Decide if these are similar.
    
\begin{pmatrix}
1  &3  \\
2  &3
\end{pmatrix}
\qquad
\begin{pmatrix}
4  &-1 \\
1  &1
\end{pmatrix}
Problem 20
  1. Show that a matrix is invertible if and only if the constant term in its minimal polynomial is not 0.
  2. Show that if a square matrix  T is not invertible then there is a nonzero matrix  S such that  ST and  TS both equal the zero matrix.
This exercise is recommended for all readers.
Problem 21
  1. Finish the proof of Lemma 1.7.
  2. Give an example to show that the result does not hold if t is not linear.
Problem 22

Any transformation or square matrix has a minimal polynomial. Does the converse hold?


2 - Jordan Canonical Form

This subsection moves from the canonical form for nilpotent matrices to the one for all matrices.

We have shown that if a map is nilpotent then all of its eigenvalues are zero. We can now prove the converse.

Lemma 2.1

A linear transformation whose only eigenvalue is zero is nilpotent.

Proof

If a transformation  t on an  n -dimensional space has only the single eigenvalue of zero then its characteristic polynomial is  x^n . The Cayley-Hamilton Theorem says that a map satisfies its characteristic polynomial so  t^n is the zero map. Thus t is nilpotent.

We have a canonical form for nilpotent matrices, that is, for each matrix whose single eigenvalue is zero: each such matrix is similar to one that is all zeroes except for blocks of subdiagonal ones. (To make this representation unique we can fix some arrangement of the blocks, say, from longest to shortest.) We next extend this to all single-eigenvalue matrices.

Observe that if  t 's only eigenvalue is  \lambda then  t-\lambda 's only eigenvalue is  0 because  t(\vec{v})=\lambda\vec{v} if and only if  (t-\lambda)\,(\vec{v})=0\cdot\vec{v} . The natural way to extend the results for nilpotent matrices is to represent t-\lambda in the canonical form N, and try to use that to get a simple representation T for t. The next result says that this try works.

Lemma 2.2

If the matrices  T-\lambda I and  N are similar then  T and  N+\lambda I are also similar, via the same change of basis matrices.

Proof

With  N=P(T-\lambda I)P^{-1}=PTP^{-1}-P(\lambda I)P^{-1} we have N=PTP^{-1}-PP^{-1}(\lambda I) since the diagonal matrix  \lambda I commutes with anything, and so  N=PTP^{-1}-\lambda I . Therefore  N+\lambda I=PTP^{-1} , as required.

Example 2.3

The characteristic polynomial of


T=\begin{pmatrix}
2  &-1  \\
1  &4
\end{pmatrix}

is  (x-3)^2 and so  T has only the single eigenvalue  3 . Thus for


T-3I=\begin{pmatrix}
-1  &-1  \\
1  &1
\end{pmatrix}

the only eigenvalue is  0 , and  T-3I is nilpotent. The null spaces are routine to find; to ease this computation we take T to represent the transformation t:\mathbb{C}^2\to \mathbb{C}^2 with respect to the standard basis (we shall maintain this convention for the rest of the chapter).


\mathcal{N}(t-3)=\{\begin{pmatrix} -y \\ y \end{pmatrix}\,\big|\, y\in\mathbb{C}\}
\qquad
\mathcal{N}((t-3)^2)=\mathbb{C}^2

The dimensions of these null spaces show that the action of an associated map t-3 on a string basis is \vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0}. Thus, the canonical form for t-3 with one choice for a string basis is


{\rm Rep}_{B,B}(t-3)
=N
=\begin{pmatrix}
0  &0   \\
1  &0
\end{pmatrix}
\qquad
B=\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\begin{pmatrix} -2 \\ 2 \end{pmatrix} \rangle

and by Lemma 2.2,  T is similar to this matrix.


{\rm Rep}_{B,B}(t)=
N+3I=
\begin{pmatrix}
3  &0  \\
1  &3
\end{pmatrix}

We can produce the similarity computation. Recall from the Nilpotence section how to find the change of basis matrices P and P^{-1} to express  N as  P(T-3I)P^{-1} . The similarity diagram

Linalg similarity cd 3.png

describes that to move from the lower left to the upper left we multiply by


P^{-1}=\bigl({\rm Rep}_{\mathcal{E}_2,B}(\mbox{id})\bigr)^{-1}
={\rm Rep}_{B,\mathcal{E}_2}(\mbox{id})
=\begin{pmatrix}
1  &-2  \\
1  &2
\end{pmatrix}

and to move from the upper right to the lower right we multiply by this matrix.


P=\begin{pmatrix}
1  &-2  \\
1  &2
\end{pmatrix}^{-1}
=\begin{pmatrix}
1/2  &1/2  \\
-1/4 &1/4
\end{pmatrix}

So the similarity is expressed by


\begin{pmatrix}
3  &0  \\
1  &3
\end{pmatrix}
=
\begin{pmatrix}
1/2  &1/2  \\
-1/4 &1/4
\end{pmatrix}
\begin{pmatrix}
2  &-1  \\
1  &4
\end{pmatrix}
\begin{pmatrix}
1  &-2  \\
1  &2
\end{pmatrix}

which is easily checked.
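The check is quick to carry out by machine as well. Here is a minimal Octave sketch (Octave is the system used for the computer code in the Topic on the Method of Powers); it simply multiplies out the right-hand side.

% Octave sketch: verify the similarity computation of Example 2.3
T = [2 -1; 1 4];
P = [1/2 1/2; -1/4 1/4];        % change of basis matrix P
Pinv = [1 -2; 1 2];             % its inverse
P * T * Pinv                    % prints [3 0; 1 3], that is, N+3I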

Example 2.4

This matrix has characteristic polynomial  (x-4)^4


T=
\begin{pmatrix}
4  &1  &0  &-1  \\
0  &3  &0  &1   \\
0  &0  &4  &0   \\
1  &0  &0  &5
\end{pmatrix}

and so has the single eigenvalue 4. The nullities of t-4 are: the null space of t-4 has dimension two, the null space of (t-4)^2 has dimension three, and the null space of (t-4)^3 has dimension four. Thus, t-4 has the action on a string basis of \vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{\beta}_3\mapsto\vec{0} and \vec{\beta}_4\mapsto\vec{0}. This gives the canonical form N for t-4, which in turn gives the form for  t .


N+4I=
\begin{pmatrix}
4  &0  &0  &0   \\
1  &4  &0  &0   \\
0  &1  &4  &0   \\
0  &0  &0  &4
\end{pmatrix}
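Those nullities are easy to confirm numerically. A minimal Octave sketch, using the fact that the nullity of a power is the size of the matrix minus its rank:

% Octave sketch: nullities of the powers of t-4 in Example 2.4
T = [4 1 0 -1; 0 3 0 1; 0 0 4 0; 1 0 0 5];
A = T - 4*eye(4);
[4 - rank(A), 4 - rank(A^2), 4 - rank(A^3)]   % prints 2 3 4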

An array that is all zeroes, except for some number \lambda down the diagonal and blocks of subdiagonal ones, is a Jordan block. We have shown that Jordan block matrices are canonical representatives of the similarity classes of single-eigenvalue matrices.

Example 2.5

The  3 \! \times \! 3 matrices whose only eigenvalue is  1/2 separate into three similarity classes. The three classes have these canonical representatives.


\begin{pmatrix}
1/2  &0    &0  \\
0    &1/2  &0  \\
0    &0    &1/2
\end{pmatrix}
\qquad
\begin{pmatrix}
1/2  &0    &0  \\
1    &1/2  &0  \\
0    &0    &1/2
\end{pmatrix}
\qquad
\begin{pmatrix}
1/2  &0    &0  \\
1    &1/2  &0  \\
0    &1    &1/2
\end{pmatrix}

In particular, this matrix


\begin{pmatrix}
1/2  &0    &0    \\
0    &1/2  &0    \\
0    &1    &1/2
\end{pmatrix}

belongs to the similarity class represented by the middle one, because we have adopted the convention of ordering the blocks of subdiagonal ones from the longest block to the shortest.

We will now finish the program of this chapter by extending this work to cover maps and matrices with multiple eigenvalues. The best possibility for general maps and matrices would be if we could break them into a part involving their first eigenvalue  \lambda_1 (which we represent using its Jordan block), a part with  \lambda_2 , etc.

This ideal is in fact what happens. For any transformation  t:V\to V , we shall break the space  V into the direct sum of a part on which  t-\lambda_1 is nilpotent, plus a part on which  t-\lambda_2 is nilpotent, etc. More precisely, we shall take three steps to get to this section's major theorem and the third step shows that  V=\mathcal{N}_\infty(t-\lambda_1)\oplus\cdots\oplus
\mathcal{N}_\infty(t-\lambda_\ell) where  \lambda_1,\ldots,\lambda_\ell are  t 's eigenvalues.

Suppose that  t:V\to V is a linear transformation. Note that the restriction[1] of  t to a subspace  M need not be a linear transformation on  M because there may be an  \vec{m}\in M with  t(\vec{m})\not\in M . To ensure that the restriction of a transformation to a "part" of a space is a transformation on the part, we need the next condition.

Definition 2.6

Let  t:V\to V be a transformation. A subspace  M is t invariant if whenever  \vec{m}\in M then  t(\vec{m})\in M (shorter:  t(M)\subseteq M ).

Two examples are that the generalized null space \mathcal{N}_\infty(t) and the generalized range space \mathcal{R}_\infty(t) of any transformation t are invariant. For the generalized null space, if \vec{v}\in\mathcal{N}_\infty(t) then t^n(\vec{v})=\vec{0} where n is the dimension of the underlying space and so t(\vec{v})\in\mathcal{N}_\infty(t) because t^n(\,t(\vec{v})\,) is zero also. For the generalized range space, if \vec{v}\in\mathcal{R}_\infty(t) then \vec{v}=t^n(\vec{w}) for some \vec{w} and then t(\vec{v})=t^{n+1}(\vec{w})=t^n(\,t(\vec{w})\,) shows that t(\vec{v}) is also a member of \mathcal{R}_\infty(t).

Thus the spaces \mathcal{N}_\infty(t-\lambda_i) and \mathcal{R}_\infty(t-\lambda_i) are t-\lambda_i invariant. Observe also that t-\lambda_i is nilpotent on \mathcal{N}_\infty(t-\lambda_i) because, simply, if \vec{v} has the property that some power of t-\lambda_i maps it to zero— that is, if it is in the generalized null space— then some power of t-\lambda_i maps it to zero. The generalized null space \mathcal{N}_\infty(t-\lambda_i) is a "part" of the space on which the action of t-\lambda_i is easy to understand.

The next result is the first of our three steps. It establishes that  t-\lambda_j leaves  t-\lambda_i 's part unchanged.

Lemma 2.7

A subspace is  t invariant if and only if it is  t-\lambda invariant for any scalar  \lambda . In particular, where  \lambda_i is an eigenvalue of a linear transformation  t , then for any other eigenvalue \lambda_j, the spaces  \mathcal{N}_\infty(t-\lambda_i) and  \mathcal{R}_\infty(t-\lambda_i) are  t-\lambda_j invariant.

Proof

For the first sentence we check the two implications of the "if and only if" separately. One of them is easy: if the subspace is t-\lambda invariant for any \lambda then taking \lambda=0 shows that it is t invariant. For the other implication suppose that the subspace is t invariant, so that if \vec{m}\in M then t(\vec{m})\in M, and let \lambda be any scalar. The subspace M is closed under linear combinations and so if t(\vec{m})\in M then t(\vec{m})-\lambda\vec{m}\in M. Thus if \vec{m}\in M then (t-\lambda)\,(\vec{m})\in M, as required.

The second sentence follows straight from the first. Because the two spaces are t-\lambda_i invariant, they are therefore  t invariant. From this, applying the first sentence again, we conclude that they are also  t-\lambda_j invariant.

The second step of the three that we will take to prove this section's major result makes use of an additional property of  \mathcal{N}_\infty(t-\lambda_i) and  \mathcal{R}_\infty(t-\lambda_i) , that they are complementary. Recall that if a space is the direct sum of two others  V=\mathcal{N}\oplus \mathcal{R} then any vector  \vec{v} in the space breaks into two parts  \vec{v}=\vec{n}+\vec{r} where  \vec{n}\in \mathcal{N} and  \vec{r}\in \mathcal{R} , and recall also that if  B_{\mathcal{N}} and  B_{\mathcal{R}} are bases for  \mathcal{N} and  \mathcal{R} then the concatenation  B_{\mathcal{N}}\!\mathbin{{}^\frown}\!B_{\mathcal{R}} is linearly independent (and so the two parts of  \vec{v} do not "overlap"). The next result says that for any subspaces  \mathcal{N} and  \mathcal{R} that are complementary as well as  t invariant, the action of  t on  \vec{v} breaks into the "non-overlapping" actions of  t on  \vec{n} and on  \vec{r} .

Lemma 2.8

Let  t:V\to V be a transformation and let  \mathcal{N} and  \mathcal{R} be  t invariant complementary subspaces of  V . Then  t can be represented by a matrix with blocks of square submatrices T_1 and T_2


\left(\begin{array}{c|c}
T_1   &Z_2  \\  \hline
Z_1 &T_2
\end{array}\right)
\begin{array}{ll}
\} \dim(\mathcal{N})\text{-many rows}  \\
\} \dim(\mathcal{R})\text{-many rows}
\end{array}

where  Z_1 and  Z_2 are blocks of zeroes.

Proof

Since the two subspaces are complementary, the concatenation of a basis for  \mathcal{N} and a basis for  \mathcal{R} makes a basis  B=\langle \vec{\nu}_1,\dots,\vec{\nu}_p,
\vec{\mu}_1,\ldots,\vec{\mu}_q \rangle   for  V . We shall show that the matrix


{\rm Rep}_{B,B}(t)=
\left(\begin{array}{c|c|c}
\vdots                   &        &\vdots     \\
{\rm Rep}_{B}(t(\vec{\nu}_1))  &\cdots  &{\rm Rep}_{B}(t(\vec{\mu}_q))  \\
\vdots                   &        &\vdots     \\
\end{array}\right)

has the desired form.

Any vector  \vec{v}\in V is in  \mathcal{N} if and only if its final  q components are zeroes when it is represented with respect to  B . As  \mathcal{N} is  t invariant, each of the vectors  {\rm Rep}_{B}(t(\vec{\nu}_1)) , ...,  {\rm Rep}_{B}(t(\vec{\nu}_p)) has that form. Hence the lower left of  {\rm Rep}_{B,B}(t) is all zeroes.

The argument for the upper right is similar.

To see that  t has been decomposed into its action on the parts, observe that the restrictions of  t to the subspaces  \mathcal{N} and  \mathcal{R} are represented, with respect to the obvious bases, by the matrices  T_1 and  T_2 . So, with subspaces that are invariant and complementary, we can split the problem of examining a linear transformation into two lower-dimensional subproblems. The next result illustrates this decomposition into blocks.

Lemma 2.9

If T is a matrix with square submatrices T_1 and T_2


T=
\left(\begin{array}{c|c}
T_1   &Z_2  \\  \hline
Z_1   &T_2
\end{array}\right)

where the  Z 's are blocks of zeroes, then  \left|T\right|=\left|T_1\right|\cdot\left|T_2\right| .

Proof

Suppose that  T is  n \! \times \! n , that  T_1 is  p \! \times \! p , and that  T_2 is  q \! \times \! q . In the permutation formula for the determinant


\left|T\right|=
\sum_{\text{permutations }\phi}
t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)}\sgn(\phi)

each term comes from a rearrangement of the column numbers  1,\dots,n into a new order  \phi(1),\dots,\phi(n) . The upper right block Z_2 is all zeroes, so if a  \phi has at least one of  p+1,\dots,n among its first  p column numbers  \phi(1),\dots,\phi(p) then the term arising from  \phi is zero, e.g., if  \phi(1)=n then  t_{1,\phi(1)}t_{2,\phi(2)}\dots t_{n,\phi(n)}
=0\cdot t_{2,\phi(2)}\dots t_{n,\phi(n)}=0 .

So the above formula reduces to a sum over all permutations with two halves: any significant \phi is the composition of a \phi_1 that rearranges only  1,\dots,p and a \phi_2 that rearranges only  p+1,\dots,p+q . Now, the distributive law (and the fact that the signum of a composition is the product of the signums) gives that this


\left|T_1\right|\cdot\left|T_2\right|=
\bigg(\sum_{\begin{array}{c}\\[-19pt]
\scriptstyle\text{perms }\phi_1 \\[-5pt]
\scriptstyle\text{of } 1,\dots,p
\end{array}}
\!\!\! t_{1,\phi_1(1)}\cdots t_{p,\phi_1(p)}\sgn(\phi_1) \bigg)

\cdot
\bigg(\sum_{\begin{array}{c}\\[-19pt]
\scriptstyle\text{perms }\phi_2 \\[-5pt]
\scriptstyle\text{of } p+1,\dots,p+q
\end{array}}
\!\!\! t_{p+1,\phi_2(p+1)}\cdots t_{p+q,\phi_2(p+q)}\sgn(\phi_2)
\bigg)

equals \left|T\right|=
\sum_{\text{significant }\phi}
t_{1,\phi(1)}t_{2,\phi(2)}\cdots t_{n,\phi(n)}\sgn(\phi).

Example 2.10

\begin{vmatrix}
2  &0  &0  &0  \\
1  &2  &0  &0  \\
0  &0  &3  &0  \\
0  &0  &0  &3
\end{vmatrix}
=\begin{vmatrix}
2  &0  \\
1  &2
\end{vmatrix}
\cdot
\begin{vmatrix}
3  &0  \\
0  &3
\end{vmatrix}
=36
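A quick numerical check of that computation, for readers following along in Octave:

% Octave sketch: the block determinant of Example 2.10
T  = [2 0 0 0; 1 2 0 0; 0 0 3 0; 0 0 0 3];
T1 = [2 0; 1 2];
T2 = [3 0; 0 3];
det(T)                   % prints 36
det(T1) * det(T2)        % also prints 36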

From Lemma 2.9 we conclude that if two subspaces are complementary and  t invariant then  t is nonsingular if and only if its restrictions to both subspaces are nonsingular.

Now for the promised third, final, step to the main result.

Lemma 2.11

If a linear transformation  t:V\to V has the characteristic polynomial  (x-\lambda_1)^{p_1}\dots(x-\lambda_\ell)^{p_\ell} then (1)  V=\mathcal{N}_\infty(t-\lambda_1)\oplus\cdots
\oplus\mathcal{N}_\infty(t-\lambda_\ell) and (2)  \dim(\mathcal{N}_\infty(t-\lambda_i))=p_i  .

Proof

Because  \dim (V) is the degree  p_1+\cdots+p_\ell of the characteristic polynomial, to establish statement (1) we need only show that statement (2) holds and that  \mathcal{N}_\infty(t-\lambda_i)\cap\mathcal{N}_\infty(t-\lambda_j) is trivial whenever  i\neq j .

For the latter, by Lemma 2.7, both  \mathcal{N}_\infty(t-\lambda_i) and  \mathcal{N}_\infty(t-\lambda_j) are  t invariant. Notice that an intersection of  t invariant subspaces is  t invariant and so the restriction of  t to  \mathcal{N}_\infty(t-\lambda_i)\cap\mathcal{N}_\infty(t-\lambda_j) is a linear transformation. But both  t-\lambda_i and  t-\lambda_j are nilpotent on this subspace and so if  t has any eigenvalues on the intersection then its "only" eigenvalue is both  \lambda_i and  \lambda_j . That cannot be, so this restriction has no eigenvalues:  \mathcal{N}_\infty(t-\lambda_i)\cap\mathcal{N}_\infty(t-\lambda_j) is trivial (Lemma V.II.3.10 shows that the only transformation without any eigenvalues is on the trivial space).

To prove statement (2), fix the index  i . Decompose  V as  \mathcal{N}_\infty(t-\lambda_i)\oplus\mathcal{R}_\infty(t-\lambda_i) and apply Lemma 2.8.


T=

\left(\begin{array}{c|c}
T_1   &Z_2  \\  \hline
Z_1   &T_2
\end{array}\right)
\begin{array}{ll}
\} \dim(\,\mathcal{N}_\infty(t-\lambda_i)\,)\text{-many rows}  \\
\} \dim(\,\mathcal{R}_\infty(t-\lambda_i)\,)\text{-many rows}
\end{array}

By Lemma 2.9,  \left|T-xI\right|=\left|T_1-xI\right|\cdot\left|T_2-xI\right| . By the uniqueness clause of the Fundamental Theorem of Arithmetic, the determinants of the blocks have the same factors as the characteristic polynomial  \left|T_1-xI\right|=(x-\lambda_1)^{q_1}\dots(x-\lambda_\ell)^{q_\ell} and  \left|T_2-xI\right|=(x-\lambda_1)^{r_1}\dots(x-\lambda_\ell)^{r_\ell} , and the sum of the powers of these factors is the power of the factor in the characteristic polynomial:  q_1+r_1=p_1 , ...,  q_\ell+r_\ell=p_\ell . Statement (2) will be proved if we show that q_i=p_i and that q_j=0 for all j\neq i, because then the degree of the polynomial \left|T_1-xI\right|— which equals the dimension of the generalized null space— is as required.

For that, first, as the restriction of  t-\lambda_i to  \mathcal{N}_\infty(t-\lambda_i) is nilpotent on that space, the only eigenvalue of  t on it is  \lambda_i . Thus the characteristic equation of  t on  \mathcal{N}_\infty(t-\lambda_i) is  \left|T_1-xI\right|=(x-\lambda_i)^{q_i} . And thus q_j=0 for all j\neq i.

Now consider the restriction of  t to  \mathcal{R}_\infty(t-\lambda_i) . By Note V.III.2.2, the map  t-\lambda_i is nonsingular on  \mathcal{R}_\infty(t-\lambda_i) and so  \lambda_i is not an eigenvalue of  t on that subspace. Therefore,  x-\lambda_i is not a factor of  \left|T_2-xI\right| , and so  q_i=p_i .

Our major result just translates those steps into matrix terms.

Theorem 2.12

Any square matrix is similar to one in Jordan form


\begin{pmatrix}
J_{\lambda_1}  &     &\textit{--zeroes--}                 \\
&J_{\lambda_2}                                              \\
&     &\ddots                                     \\
&     &                           &J_{\lambda_{\ell-1}}     \\
&     &\textit{--zeroes--} &  &J_{\lambda_{\ell}}
\end{pmatrix}

where each  J_{\lambda} is the Jordan block associated with the eigenvalue \lambda of the original matrix (that is, is all zeroes except for  \lambda 's down the diagonal and some subdiagonal ones).

Proof

Given an  n \! \times \! n matrix  T , consider the linear map  t:\mathbb{C}^n\to \mathbb{C}^n that it represents with respect to the standard bases. Use the prior lemma to write  \mathbb{C}^n=\mathcal{N}_\infty(t-\lambda_1)\oplus\cdots
\oplus\mathcal{N}_\infty(t-\lambda_\ell) where  \lambda_1,\ldots,\lambda_\ell are the eigenvalues of  t . Because each  \mathcal{N}_\infty(t-\lambda_i) is  t invariant, Lemma 2.8 and the prior lemma show that  t is represented by a matrix that is all zeroes except for square blocks along the diagonal. To make those blocks into Jordan blocks, pick each  B_{\lambda_i} to be a string basis for the action of  t-\lambda_i on  \mathcal{N}_\infty(t-\lambda_i) .

Jordan form is a canonical form for similarity classes of square matrices, provided that we make it unique by arranging the Jordan blocks from least eigenvalue to greatest and then arranging the subdiagonal 1 blocks inside each Jordan block from longest to shortest.

Example 2.13

This matrix has the characteristic polynomial  (x-2)^2(x-6) .


T=
\begin{pmatrix}
2  &0  &1  \\
0  &6  &2  \\
0  &0  &2
\end{pmatrix}

We will handle the eigenvalues 2 and 6 separately.

Computation of the powers, and the null spaces and nullities, of T-2I is routine. (Recall from Example 2.3 the convention of taking T to represent a transformation, here t:\mathbb{C}^3\to \mathbb{C}^3, with respect to the standard basis.)

\begin{array}{r|ccc}
\textit{power}  p   & (T-2I)^p  & \mathcal{N}((t-2)^p)  
&\textit{nullity}                                            \\  \hline
 1 
& \begin{pmatrix}
0  &0  &1  \\
0  &4  &2  \\
0  &0  &0
\end{pmatrix} 
& \{\begin{pmatrix} x \\ 0 \\ 0 \end{pmatrix}\,\big|\, x\in\mathbb{C}\}  
&1                                                   \\
 2 
& \begin{pmatrix}
0  &0  &0  \\
0  &16 &8  \\
0  &0  &0
\end{pmatrix} 
& \{\begin{pmatrix} x \\ -z/2 \\  z \end{pmatrix}\,\big|\, x,z\in\mathbb{C}\}  
&2                                                   \\
 3 
& \begin{pmatrix}
0  &0  &0  \\
0  &64 &32 \\
0  &0  &0
\end{pmatrix} 
&\textit{--same--}
&\textit{---}
\end{array}

So the generalized null space \mathcal{N}_\infty(t-2) has dimension two. We've noted that the restriction of t-2 is nilpotent on this subspace. From the way that the nullities grow we know that the action of t-2 on a string basis is \vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0}. Thus the restriction can be represented in the canonical form


N_2=
\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix}
={\rm Rep}_{B_2,B_2}(t-2)
\qquad
B_2=\langle \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix},
\begin{pmatrix} -2 \\ 0 \\ 0 \end{pmatrix} \rangle

where many choices of basis are possible. Consequently, the action of the restriction of t to \mathcal{N}_\infty(t-2) is represented by this matrix.


J_2=N_2+2I={\rm Rep}_{B_2,B_2}(t)=
\begin{pmatrix}
2  &0  \\
1  &2
\end{pmatrix}

The second eigenvalue's computations are easier. Because the power of x-6 in the characteristic polynomial is one, the restriction of t-6 to \mathcal{N}_\infty(t-6) must be nilpotent of index one. Its action on a string basis must be \vec{\beta}_3\mapsto\vec{0} and since it is the zero map, its canonical form N_6 is the 1 \! \times \! 1 zero matrix. Consequently, the canonical form J_6 for the action of t on \mathcal{N}_\infty(t-6) is the 1 \! \times \! 1 matrix with the single entry 6. For the basis we can use any nonzero vector from the generalized null space.


B_6=\langle \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \rangle

Taken together, these two give that the Jordan form of  T is


{\rm Rep}_{B,B}(t)=
\begin{pmatrix}
2  &0  &0  \\
1  &2  &0  \\
0  &0  &6
\end{pmatrix}

where  B is the concatenation of B_2 and B_6.
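This form is easy to verify by machine: where Q={\rm Rep}_{B,\mathcal{E}_3}(\mbox{id}) is the matrix whose columns are the vectors of B, we have {\rm Rep}_{B,B}(t)=Q^{-1}TQ. A minimal Octave sketch:

% Octave sketch: check the Jordan form found in Example 2.13
T = [2 0 1; 0 6 2; 0 0 2];
Q = [1 -2 0; 1 0 1; -2 0 0];    % columns are the vectors of B = B_2 followed by B_6
inv(Q) * T * Q                  % prints [2 0 0; 1 2 0; 0 0 6]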

Example 2.14

Contrast the prior example with


T=
\begin{pmatrix}
2  &2  &1  \\
0  &6  &2  \\
0  &0  &2
\end{pmatrix}

which has the same characteristic polynomial  (x-2)^2(x-6) .

While the characteristic polynomial is the same,


\begin{array}{r|ccc}
\textit{power}  p   & (T-2I)^p   & \mathcal{N}((t-2)^p)  
&\textit{nullity}                      \\  \hline
 1 
& \begin{pmatrix}
0  &2  &1  \\
0  &4  &2  \\
0  &0  &0
\end{pmatrix} 
& \{\begin{pmatrix} x \\ -z/2 \\ z \end{pmatrix}\,\big|\, x,z\in\mathbb{C}\}  
&2 \\
 2 
& \begin{pmatrix}
0  &8  &4  \\
0  &16 &8  \\
0  &0  &0
\end{pmatrix} 
&\textit{--same--}
&\textit{---}
\end{array}

here the action of t-2 is stable after only one application— the restriction of t-2 to \mathcal{N}_\infty(t-2) is nilpotent of index only one. (So the contrast with the prior example is that while the characteristic polynomial tells us to look at the action of t-2 on its generalized null space, it does not completely describe that action; we must do some computations to find, in this example, that the minimal polynomial is  (x-2)(x-6) .) The restriction of t-2 to the generalized null space acts on a string basis as \vec{\beta}_1\mapsto\vec{0} and \vec{\beta}_2\mapsto\vec{0}, and we get this Jordan block associated with the eigenvalue 2.


J_2=
\begin{pmatrix}
2  &0  \\
0  &2
\end{pmatrix}

For the other eigenvalue, the arguments for the second eigenvalue of the prior example apply again. The restriction of t-6 to \mathcal{N}_\infty(t-6) is nilpotent of index one (it can't be of index less than one, and since x-6 is a factor of the characteristic polynomial to the power one it can't be of index more than one either). Thus t-6's canonical form N_6 is the 1 \! \times \! 1 zero matrix, and the associated Jordan block J_6 is the 1 \! \times \! 1 matrix with entry 6.

Therefore,  T is diagonalizable.


{\rm Rep}_{B,B}(t)=
\begin{pmatrix}
2  &0  &0  \\
0  &2  &0  \\
0  &0  &6
\end{pmatrix}
\qquad
B=B_2\!\mathbin{{}^\frown}\!B_6
=\langle \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ -2 \end{pmatrix},
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} \rangle

(Checking that the third vector in B is in the nullspace of t-6 is routine.)
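As in the prior example, a short Octave sketch confirms the diagonalization; the columns of Q are the vectors of B.

% Octave sketch: check the diagonalization in Example 2.14
T = [2 2 1; 0 6 2; 0 0 2];
Q = [1 0 1; 0 1 2; 0 -2 0];     % columns are the vectors of B
inv(Q) * T * Q                  % prints the diagonal matrix with entries 2, 2, 6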

Example 2.15

A bit of computing with


T=
\begin{pmatrix}
-1  &4  &0  &0  &0  \\
0  &3  &0  &0  &0  \\
0  &-4 &-1 &0  &0  \\
3  &-9 &-4 &2  &-1 \\
1  &5  &4  &1  &4
\end{pmatrix}

shows that its characteristic polynomial is  (x-3)^3(x+1)^2 . This table


\begin{array}{r|ccc}
\textit{power}  p   & (T-3I)^p   & \mathcal{N}((t-3)^p)  
&\textit{nullity}   \\  \hline
 1 
&\begin{pmatrix}
-4  &4  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &-4 &-4 &0  &0  \\
3  &-9 &-4 &-1 &-1 \\
1  &5  &4  &1  &1
\end{pmatrix}  
& \{\begin{pmatrix} -(u+v)/2 \\
-(u+v)/2 \\
(u+v)/2 \\
u      \\
v \end{pmatrix}\,\big|\, u,v\in\mathbb{C}\}  
&2                                            \\
 2 
&\begin{pmatrix}
16  &-16&0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &16 &16 &0  &0  \\
-16  &32 &16 &0  &0  \\
0  &-16&-16&0  &0
\end{pmatrix}  
& \{\begin{pmatrix}  -z      \\
-z      \\
z      \\
u      \\
v \end{pmatrix}\,\big|\, z,u,v\in\mathbb{C}\}  
&3                                              \\
 3 
&\begin{pmatrix}
-64  &64   &0   &0   &0  \\
0  &0    &0   &0   &0  \\
0  &-64  &-64 &0   &0  \\
64  &-128 &-64 &0   &0  \\
0  &64   &64  &0   &0
\end{pmatrix}  
&\textit{--same--}
&\textit{---}
\end{array}

shows that the restriction of t-3 to \mathcal{N}_\infty(t-3) acts on a string basis via the two strings \vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0} and \vec{\beta}_3\mapsto\vec{0}.

A similar calculation for the other eigenvalue


\begin{array}{r|ccc}
\textit{power}  p   & (T+1I)^p   & \mathcal{N}((t+1)^p)  
&\textit{nullity}  \\  \hline
 1 
&\begin{pmatrix}
0  &4  &0  &0  &0  \\
0  &4  &0  &0  &0  \\
0  &-4 &0  &0  &0  \\
3  &-9 &-4 &3  &-1 \\
1  &5  &4  &1  &5
\end{pmatrix}  
& \{\begin{pmatrix} -(u+v)   \\
0      \\
-v      \\
u      \\
v \end{pmatrix}\,\big|\, u,v\in\mathbb{C}\}  
&2                                              \\
 2 
&\begin{pmatrix}
0  &16 &0  &0  &0  \\
0  &16 &0  &0  &0  \\
0  &-16&0  &0  &0  \\
8  &-40&-16&8  &-8 \\
8  &24 &16 &8  &24
\end{pmatrix}  
&\textit{--same--}
&\textit{---}
\end{array}

shows that the restriction of t+1 to its generalized null space acts on a string basis via the two separate strings \vec{\beta}_4\mapsto\vec{0} and \vec{\beta}_5\mapsto\vec{0}.

Therefore  T is similar to this Jordan form matrix.


\begin{pmatrix}
-1  &0  &0  &0  &0  \\
0  &-1 &0  &0  &0  \\
0  &0  &3  &0  &0  \\
0  &0  &1  &3  &0  \\
0  &0  &0  &0  &3
\end{pmatrix}
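The two nullity tables of this example can be reproduced numerically with a few lines of Octave.

% Octave sketch: nullities for Example 2.15
T = [-1 4 0 0 0; 0 3 0 0 0; 0 -4 -1 0 0; 3 -9 -4 2 -1; 1 5 4 1 4];
A = T - 3*eye(5);
C = T + eye(5);
[5 - rank(A), 5 - rank(A^2), 5 - rank(A^3)]   % prints 2 3 3
[5 - rank(C), 5 - rank(C^2)]                  % prints 2 2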

We close with the statement that the subjects considered earlier in this chapter are indeed, in this sense, exhaustive.

Corollary 2.16

Every square matrix is similar to the sum of a diagonal matrix and a nilpotent matrix.

Exercises

Problem 1

Do the check for Example 2.3.

Problem 2

Each matrix is in Jordan form. State its characteristic polynomial and its minimal polynomial.

  1. \begin{pmatrix}
3  &0  \\
1  &3
\end{pmatrix}
  2. \begin{pmatrix}
-1  &0  \\
0  &-1
\end{pmatrix}
  3. \begin{pmatrix}
2  &0  &0  \\
1  &2  &0  \\
0  &0  &-1/2
\end{pmatrix}
  4. \begin{pmatrix}
3  &0  &0  \\
1  &3  &0  \\
0  &1  &3  \\
\end{pmatrix}
  5. \begin{pmatrix}
3  &0  &0  &0  \\
1  &3  &0  &0  \\
0  &0  &3  &0  \\
0  &0  &1  &3
\end{pmatrix}
  6. \begin{pmatrix}
4  &0  &0  &0  \\
1  &4  &0  &0  \\
0  &0  &-4 &0  \\
0  &0  &1  &-4
\end{pmatrix}
  7. \begin{pmatrix}
5  &0  &0  \\
0  &2  &0  \\
0  &0  &3
\end{pmatrix}
  8. \begin{pmatrix}
5  &0  &0  &0  \\
0  &2  &0  &0  \\
0  &0  &2  &0  \\
0  &0  &0  &3
\end{pmatrix}
  9. \begin{pmatrix}
5  &0  &0  &0  \\
0  &2  &0  &0  \\
0  &1  &2  &0  \\
0  &0  &0  &3
\end{pmatrix}
This exercise is recommended for all readers.
Problem 3

Find the Jordan form from the given data.

  1. The matrix  T is  5 \! \times \! 5 with the single eigenvalue 3. The nullities of the powers are:  T-3I has nullity two,  (T-3I)^2 has nullity three,  (T-3I)^3 has nullity four, and  (T-3I)^4 has nullity five.
  2. The matrix  S is  5 \! \times \! 5 with two eigenvalues. For the eigenvalue 2 the nullities are:  S-2I has nullity two, and  (S-2I)^2 has nullity four. For the eigenvalue -1 the nullities are:  S+1I has nullity one.
Problem 4

Find the change of basis matrices for each example.

  1. Example 2.13
  2. Example 2.14
  3. Example 2.15
This exercise is recommended for all readers.
Problem 5

Find the Jordan form and a Jordan basis for each matrix.

  1. 
\begin{pmatrix}
-10  &4  \\
-25  &10
\end{pmatrix}
  2. 
\begin{pmatrix}
5   &-4 \\
9   &-7
\end{pmatrix}
  3. 
\begin{pmatrix}
4   &0    &0  \\
2   &1    &3  \\
5   &0    &4
\end{pmatrix}
  4. 
\begin{pmatrix}
5   &4    &3  \\
-1   &0    &-3 \\
1   &-2   &1
\end{pmatrix}
  5. 
\begin{pmatrix}
9   &7    &3  \\
-9   &-7   &-4 \\
4   &4    &4
\end{pmatrix}
  6. 
\begin{pmatrix}
2   &2    &-1 \\
-1   &-1   &1  \\
-1   &-2   &2
\end{pmatrix}
  7. 
\begin{pmatrix}
7   &1    &2   &2 \\
1   &4    &-1  &-1\\
-2   &1    &5   &-1\\
1   &1    &2   &8
\end{pmatrix}
This exercise is recommended for all readers.
Problem 6

Find all possible Jordan forms of a transformation with characteristic polynomial  (x-1)^2(x+2)^2  .

Problem 7

Find all possible Jordan forms of a transformation with characteristic polynomial  (x-1)^3(x+2) .

This exercise is recommended for all readers.
Problem 8

Find all possible Jordan forms of a transformation with characteristic polynomial  (x-2)^3(x+1) and minimal polynomial  (x-2)^2(x+1) .

Problem 9

Find all possible Jordan forms of a transformation with characteristic polynomial  (x-2)^4(x+1) and minimal polynomial  (x-2)^2(x+1) .

This exercise is recommended for all readers.
Problem 10
Diagonalize these.
  1.  \begin{pmatrix}
1  &1  \\
0  &0
\end{pmatrix}
  2.  \begin{pmatrix}
0  &1  \\
1  &0
\end{pmatrix}
This exercise is recommended for all readers.
Problem 11

Find the Jordan matrix representing the differentiation operator on  \mathcal{P}_3 .

This exercise is recommended for all readers.
Problem 12

Decide if these two are similar.


\begin{pmatrix}
1  &-1 \\
4  &-3 \\
\end{pmatrix}
\qquad
\begin{pmatrix}
-1  &0  \\
1  &-1 \\
\end{pmatrix}
Problem 13

Find the Jordan form of this matrix.


\begin{pmatrix}
0  &-1  \\
1  &0
\end{pmatrix}

Also give a Jordan basis.

Problem 14

How many similarity classes are there for  3 \! \times \! 3 matrices whose only eigenvalues are  -3 and  4 ?

This exercise is recommended for all readers.
Problem 15

Prove that a matrix is diagonalizable if and only if its minimal polynomial has only linear factors.

Problem 16

Give an example of a linear transformation on a vector space that has no non-trivial invariant subspaces.

Problem 17

Show that a subspace is  t-\lambda_1 invariant if and only if it is  t-\lambda_2 invariant.

Problem 18

Prove or disprove: two  n \! \times \! n matrices are similar if and only if they have the same characteristic and minimal polynomials.

Problem 19

The trace of a square matrix is the sum of its diagonal entries.

  1. Find the formula for the characteristic polynomial of a 2 \! \times \! 2 matrix.
  2. Show that trace is invariant under similarity, and so we can sensibly speak of the "trace of a map". (Hint: see the prior item.)
  3. Is trace invariant under matrix equivalence?
  4. Show that the trace of a map is the sum of its eigenvalues (counting multiplicities).
  5. Show that the trace of a nilpotent map is zero. Does the converse hold?
Problem 20

To use Definition 2.6 to check whether a subspace is t invariant, we seemingly have to check all of the infinitely many vectors in a (nontrivial) subspace to see if they satisfy the condition. Prove that a subspace is  t invariant if and only if its subbasis has the property that for all of its elements, t(\vec{\beta}) is in the subspace.

This exercise is recommended for all readers.
Problem 21

Is  t invariance preserved under intersection? Under union? Complementation? Sums of subspaces?

Problem 22

Give a way to order the Jordan blocks if some of the eigenvalues are complex numbers. That is, suggest a reasonable ordering for the complex numbers.

Problem 23

Let  \mathcal{P}_j(\mathbb{R}) be the vector space over the reals of degree  j polynomials. Show that if  j\le k then  \mathcal{P}_j(\mathbb{R}) is an invariant subspace of  \mathcal{P}_k(\mathbb{R}) under the differentiation operator. In  \mathcal{P}_7(\mathbb{R}) , does any of  \mathcal{P}_0(\mathbb{R}) , ...,  \mathcal{P}_6(\mathbb{R}) have an invariant complement?

Problem 24

In  \mathcal{P}_n(\mathbb{R}) , the vector space (over the reals) of degree  n polynomials,


\mathcal{E}=
\{p(x)\in\mathcal{P}_n(\mathbb{R})\,\big|\, p(-x)=p(x) \text{ for all }x\}

and


\mathcal{O}=
\{p(x)\in\mathcal{P}_n(\mathbb{R})\,\big|\, p(-x)=-p(x) \text{ for all }x\}

are the even and the odd polynomials;  p(x)=x^2 is even while  p(x)=x^3 is odd. Show that they are subspaces. Are they complementary? Are they invariant under the differentiation transformation?

Problem 25

Lemma 2.8 says that if  M and  N are invariant complements then  t has a representation in the given block form (with respect to the same ending as starting basis, of course). Does the implication reverse?

Problem 26

A matrix  S is the square root of another  T if  S^2=T . Show that any nonsingular matrix has a square root.

Footnotes

  1. More information on restrictions of functions is in the appendix.


Topic: Geometry of Eigenvalues

--Refer to Topic on Geometry of Linear Transformations---

The characterization of linear transformations in terms of the elementary operations is nice in some ways (for instance, we can easily see that lines are mapped to lines because each of the operations of projection, dilation, reflection, and skew maps lines to lines), but when a map is expressed as a composition of many small operations---no matter how simple---the description is less than ideal. We finish with another way, a somewhat more holistic way, of picturing the geometric effect of transformations of \mathbb{R}^2.

The pictures in that area give the action of the map on just one or two members of the domain. Although we know that a transformation is described completely by its action on a basis, so that describing a transformation of \mathbb{R}^2 strictly speaking requires only a description of where it sends the two vectors from any basis, those pictures seem not to convey much geometric intuition. Can we make clear a linear map's geometry by putting in more information, but not so much information that the picture gets confused?

A transformation of \mathbb{R}^2 sends lines through the origin to lines through the origin. Thus, two points on a line y=k_1x will both be sent to the line, say, y=k_2x. Consider two such points. One is a multiple of the other, so we can write them with the second one as r times the first, for some scalar r.


\begin{pmatrix} x \\ k_1x \end{pmatrix}\quad\mbox{and}\quad\begin{pmatrix} (rx) \\ k_1(rx) \end{pmatrix}

Compare their images.


\begin{pmatrix}
a  &c  \\
b  &d
\end{pmatrix}
\begin{pmatrix} x \\ k_1x \end{pmatrix}
=
\begin{pmatrix} ax+ck_1x \\ bx+dk_1x \end{pmatrix}
\qquad
\begin{pmatrix}
a  &c  \\
b  &d
\end{pmatrix}
\begin{pmatrix} (rx) \\ k_1(rx) \end{pmatrix}
=
\begin{pmatrix} a(rx)+ck_1(rx) \\ b(rx)+dk_1(rx) \end{pmatrix}

The second vector is r times the first, and the image of the second is r times the image of the first. Not only does the transformation preserve the fact that the vectors are collinear, it also preserves the relative scale of the vectors. That is, a transformation treats the points on a line through the origin uniformly. To describe the effect of the map on the entire line, we need only describe its effect on a single non-zero point in that line.

Since every point in the space is on some line through the origin, to understand the action of a linear transformation of \mathbb{R}^2, it is sufficient to pick one point from each line through the origin (say, the point that is on the upper half of the unit circle) and show the map's effect on that set of points.

Here is such a picture for a straightforward dilation.

Linalg dilation circle.png

Below, the same map is shown with the circle and its image superimposed.

Linalg dilation circle 2.png

Certainly the geometry here is more evident. For example, we can see that some lines through the origin are actually sent to themselves: the x-axis is sent to the x-axis, and the y-axis is sent to the y-axis.

This is the flip shown earlier, here with the circle and its image superimposed.

Linalg flip.png

And this is the skew shown earlier.

Linalg skew.png

Contrast the picture of this map's effect on the unit square with this one.

Here is a somewhat more complicated map (the second coordinate function is the same as the map in the prior picture, but the first coordinate function is different).

Linalg dilation and rotation.png

Observe that some vectors are being both dilated and rotated through some angle


\begin{pmatrix} x \\ 2x \end{pmatrix}\mapsto\begin{pmatrix} x  \\ k_1x \end{pmatrix}

while others are just being dilated, not rotated at all.


\begin{pmatrix} x  \\ 3x \end{pmatrix}\mapsto\begin{pmatrix} x  \\ 3x \end{pmatrix}

Exercises

Problem 1
Show the effect each matrix has on the top half of the unit circle.
  1. \begin{pmatrix}
1  &1    \\
2  &2
\end{pmatrix}
  2. \begin{pmatrix}
2  &3    \\
1  &1
\end{pmatrix}
  3. \begin{pmatrix}
2  &3    \\
1  &-1
\end{pmatrix}

Which vectors stay on the same line through the origin?


Topic: The Method of Powers

In practice, calculating eigenvalues and eigenvectors is a difficult problem. Finding, and solving, the characteristic polynomial of the large matrices often encountered in applications is too slow and too hard. Other techniques, indirect ones that avoid the characteristic polynomial, are used. Here we shall see such a method that is suitable for large matrices that are "sparse" (the great majority of the entries are zero).

Suppose that the n \! \times \! n matrix T has the n distinct eigenvalues \lambda_1, \lambda_2, ..., \lambda_n. Then \mathbb{R}^n has a basis that is composed of the associated eigenvectors \langle \vec{\zeta}_1,\dots,\vec{\zeta}_n \rangle . For any \vec{v}\in\mathbb{R}^n, where \vec{v}=c_1\vec{\zeta}_1+\dots+c_n\vec{\zeta}_n, iterating T on \vec{v} gives these.

\begin{array}{rl}
T\vec{v}
&=c_1\lambda_1\vec{\zeta}_1+c_2\lambda_2\vec{\zeta}_2+
\dots+c_n\lambda_n\vec{\zeta}_n  \\
T^2\vec{v}
&=c_1\lambda_1^2\vec{\zeta}_1+c_2\lambda_2^2\vec{\zeta}_2+
\dots+c_n\lambda_n^2\vec{\zeta}_n  \\
T^3\vec{v}
&=c_1\lambda_1^3\vec{\zeta}_1+c_2\lambda_2^3\vec{\zeta}_2+
\dots+c_n\lambda_n^3\vec{\zeta}_n  \\
&\vdots                                            \\
T^k\vec{v}
&=c_1\lambda_1^k\vec{\zeta}_1+c_2\lambda_2^k\vec{\zeta}_2+
\dots+c_n\lambda_n^k\vec{\zeta}_n
\end{array}

If one of the eigenvalues, say, \lambda_1, has a larger absolute value than any of the other eigenvalues then its term will dominate the above expression. Put another way, dividing through by \lambda_1^k gives this,


\frac{T^k\vec{v}}{\lambda_1^k}
=c_1\vec{\zeta}_1+c_2\frac{\lambda_2^k}{\lambda_1^k}\vec{\zeta}_2+
\dots+c_n\frac{\lambda_n^k}{\lambda_1^k}\vec{\zeta}_n

and, because \lambda_1 is assumed to have the largest absolute value, as k gets larger the fractions go to zero. Thus, the entire expression goes to c_1\vec{\zeta}_1.

That is (as long as c_1 is not zero), as k increases, the vectors T^k\vec{v} will tend toward the direction of an eigenvector associated with the dominant eigenvalue, and, consequently, the ratios of the lengths |\,T^{k}\vec{v}\,|/|\,T^{k-1}\vec{v}\,| will tend toward that dominant eigenvalue.

For example (sample computer code for this follows the exercises), because the matrix


T=\begin{pmatrix}
3  &0  \\
8  &-1
\end{pmatrix}

is triangular, its eigenvalues are just the entries on the diagonal, 3 and -1. Arbitrarily taking \vec{v} to have the components 1 and 1 gives


\begin{array}{c|ccccc}
\vec{v}  &T\vec{v}  &T^2\vec{v}
&\cdots &T^9\vec{v} &T^{10}\vec{v}        \\ \hline
\begin{pmatrix} 1 \\ 1 \end{pmatrix}  &\begin{pmatrix} 3 \\ 7 \end{pmatrix} &\begin{pmatrix} 9 \\ 17 \end{pmatrix}
&\cdots  &\begin{pmatrix} 19\,683 \\ 39\,367 \end{pmatrix}
&\begin{pmatrix} 59\,049 \\ 118\,097 \end{pmatrix}
\end{array}

and the ratio between the lengths of the last two is 2.999\,9.

Two implementation issues must be addressed. The first issue is that, instead of finding the powers of T and applying them to \vec{v}, we will compute \vec{v}_1 as T\vec{v} and then compute \vec{v}_2 as T\vec{v}_1, etc. (i.e., we never separately calculate T^2, T^3, etc.). These matrix-vector products can be done quickly even if T is large, provided that it is sparse. The second issue is that, to avoid generating numbers that are so large that they overflow our computer's capability, we can normalize the \vec{v}_i's at each step. For instance, we can divide each \vec{v}_i by its length (other possibilities are to divide it by its largest component, or simply by its first component). We thus implement this method by generating

\begin{array}{rl}
\vec{w}_0  &=\vec{v}_0/|\vec{v}_0| \\
\vec{v}_1  &=T\vec{w}_0                 \\
\vec{w}_1  &=\vec{v}_1/|\vec{v}_1| \\
\vec{v}_2  &=T\vec{w}_1                 \\
&\vdots    \\
\vec{w}_{k-1}  &=\vec{v}_{k-1}/|\vec{v}_{k-1}| \\
\vec{v}_k  &=T\vec{w}_{k-1}
\end{array}

until we are satisfied. Then the vector \vec{v}_k is an approximation of an eigenvector, and the approximation of the dominant eigenvalue is the ratio |\vec{v}_{k}|/|\vec{w}_{k-1}|=|\vec{v}_k|.

One way we could be "satisfied" is to iterate until our approximation of the eigenvalue settles down. We could decide, for instance, to stop the iteration process not after some fixed number of steps, but instead when |\vec{v}_k| differs from |\vec{v}_{k-1}| by less than one percent, or when they agree up to the second significant digit.
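The scheme just described is short to code. Here is a minimal Octave sketch of the normalized iteration with the one-percent stopping rule, run on the matrix and starting vector used above (the variable names are only illustrative).

% Octave sketch: the method of powers with normalization
T = [3 0; 8 -1];
v = [1; 1];
w = v / norm(v);                % w_0
lambda = 0;
do
  lambda_old = lambda;
  v = T * w;                    % v_k = T w_(k-1)
  lambda = norm(v);             % current estimate of the dominant eigenvalue
  w = v / norm(v);              % w_k
until (abs(lambda - lambda_old) < 0.01 * abs(lambda))
lambda                          % prints an estimate near 3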

The rate of convergence is determined by the rate at which the powers of |\lambda_2/\lambda_1| go to zero, where \lambda_2 is the eigenvalue of second largest norm. If that ratio is much less than one then convergence is fast, but if it is only slightly less than one then convergence can be quite slow. Consequently, the method of powers is not the most commonly used way of finding eigenvalues (although it is the simplest one, which is why it is here as the illustration of the possibility of computing eigenvalues without solving the characteristic polynomial). Instead, there are a variety of methods that generally work by first replacing the given matrix T with another that is similar to it and so has the same eigenvalues, but is in some reduced form such as tridiagonal form: the only nonzero entries are on the diagonal, or just above or below it. Then special techniques can be used to find the eigenvalues. Once the eigenvalues are known, the eigenvectors of T can be easily computed. These other methods are outside of our scope. A good reference is (Goult et al. 1975).

Exercises

Problem 1

Use ten iterations to estimate the largest eigenvalue of these matrices, starting from the vector with components 1 and 2. Compare the answer with the one obtained by solving the characteristic equation.

  1. \begin{pmatrix}
1  &5  \\
0  &4
\end{pmatrix}
  2. \begin{pmatrix}
3   &2  \\
-1  &0
\end{pmatrix}
Problem 2

Redo the prior exercise by iterating until |\vec{v}_k|-|\vec{v}_{k-1}| has absolute value less than 0.01. At each step, normalize by dividing each vector by its length. How many iterations are required? Are the answers significantly different?

Problem 3

Use ten iterations to estimate the largest eigenvalue of these matrices, starting from the vector with components 1, 2, and 3. Compare the answer with the one obtained by solving the characteristic equation.

  1. \begin{pmatrix}
4   &0  &1 \\
-2  &1  &0  \\
-2  &0  &1
\end{pmatrix}
  2. \begin{pmatrix}
-1  &2  &2  \\
2  &2  &2  \\
-3  &-6 &-6
\end{pmatrix}
Problem 4

Redo the prior exercise by iterating until |\vec{v}_k|-|\vec{v}_{k-1}| has absolute value less than 0.01. At each step, normalize by dividing each vector by its length. How many iterations does it take? Are the answers significantly different?

Problem 5

What happens if c_1=0? That is, what happens if the initial vector does not have any component in the direction of the relevant eigenvector?

Problem 6

How can the method of powers be adapted to find the smallest eigenvalue?

Solutions

This is the code for the computer algebra system Octave that was used to do the calculation above. (It has been lightly edited to remove blank lines, etc.)

Computer Code

>T=[3, 0;
8, -1]
T=
3 0
8 -1
>v0=[1; 1]
v0=
1
1
>v1=T*v0
v1=
3
7
>v2=T*v1
v2=
9
17
>T9=T**9
T9=
19683 0
39368 -1
>T10=T**10
T10=
59049 0
118096 1
>v9=T9*v0
v9=
19683
39367
>v10=T10*v0
v10=
59049
118097
>norm(v10)/norm(v9)
ans=2.9999

Remark: we are ignoring the power of Octave here; there are built-in functions to automatically apply quite sophisticated methods to find eigenvalues and eigenvectors. Instead, we are using just the system as a calculator.


Topic: Stable Populations

Imagine a reserve park with animals from a species that we are trying to protect. The park doesn't have a fence and so animals cross the boundary, both from the inside out and in the other direction. Every year, 10% of the animals from inside of the park leave, and 1% of the animals from the outside find their way in. We can ask if we can find a stable level of population for this park: is there a population that, once established, will stay constant over time, with the number of animals leaving equal to the number of animals entering?

To answer that question, we must first establish the equations. Let the year n population in the park be p_n and in the rest of the world be r_n.

\begin{array}{rl}
p_{n+1}
&=.90p_n+.01r_n    \\
r_{n+1}
&=.10p_n+.99r_n
\end{array}

We can set this system up as a matrix equation (see the Markov Chain topic).


\begin{pmatrix} p_{n+1} \\ r_{n+1} \end{pmatrix}
=\begin{pmatrix}
.90  &.01  \\
.10  &.99
\end{pmatrix}
\begin{pmatrix} p_{n} \\ r_{n} \end{pmatrix}

Now, "stable level" means that p_{n+1}=p_n and r_{n+1}=r_n, so that the matrix equation \vec{v}_{n+1}=T\vec{v}_{n} becomes \vec{v}=T\vec{v}. We are therefore looking for eigenvectors for T that are associated with the eigenvalue 1. The equation (I-T)\vec{v}=\vec{0} is


\begin{pmatrix}
.10  &-.01  \\
-.10  &.01
\end{pmatrix}
\begin{pmatrix} p \\ r \end{pmatrix}=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}

which gives the eigenspace: vectors with the restriction that p=.1r. Coupled with additional information, that the total world population of this species is p+r=110\,000, we find that the stable state is p=10\,000 and r=100\,000.

If we start with a park population of ten thousand animals, so that the rest of the world has one hundred thousand, then every year ten percent (a thousand animals) of those inside will leave the park, and every year one percent (a thousand) of those from the rest of the world will enter the park. It is stable, self-sustaining.

Now imagine that we are trying to gradually build up the total world population of this species. We can try, for instance, to have the world population grow at a rate of 1% per year. In this case, we can take a "stable" state for the park's population to be that it also grows at 1% per year. The equation \vec{v}_{n+1}=1.01\cdot\vec{v}_n=T\vec{v}_{n} leads to ((1.01\cdot I)-T)\vec{v}=\vec{0}, which gives this system.


\begin{pmatrix}
.11  &-.01  \\
-.10  &.02
\end{pmatrix}
\begin{pmatrix} p \\ r \end{pmatrix}=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}

The matrix is nonsingular, and so the only solution is p=0 and r=0. Thus, there is no (usable) initial population that we can establish at the park and expect that it will grow at the same rate as the rest of the world.

Knowing that an annual world population growth rate of 1% forces an unstable park population, we can ask which growth rates there are that would allow an initial population for the park that will be self-sustaining. We consider \lambda\vec{v}=T\vec{v} and solve for \lambda.


0=\begin{vmatrix}
\lambda-.9  &-.01  \\
-.10         &\lambda-.99
\end{vmatrix}
=(\lambda-.9)(\lambda-.99)-(.10)(.01)
=\lambda^2-1.89\lambda+.89

A shortcut to factoring that quadratic is our knowledge that \lambda=1 is an eigenvalue of T, so the other eigenvalue is .89. Thus there are two ways to have a stable park population (a population that grows at the same rate as the population of the rest of the world, despite the leaky park boundaries): have a world population that does not grow or shrink, and have a world population that shrinks by 11% every year.
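These eigenvalues, and the stable state found earlier, can be confirmed with a few lines of Octave (the system used in the Topic on the Method of Powers).

% Octave sketch: eigenvalues of the migration matrix
T = [.90 .01; .10 .99];
eig(T)                          % prints (approximately) 0.89 and 1
T * [10000; 100000]             % returns [10000; 100000], up to rounding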

So this is one meaning of eigenvalues and eigenvectors— they give a stable state for a system. If the eigenvalue is 1 then the system is static. If the eigenvalue isn't 1 then the system is either growing or shrinking, but in a dynamically-stable way.

Exercises

Problem 1

What initial population for the park discussed above should be set up in the case where world populations are allowed to decline by 11% every year?

Problem 2

What will happen to the population of the park in the event of a growth in world population of 1% per year? Will it lag the world growth, or lead it? Assume that the initial park population is ten thousand, and the world population is one hundred thousand, and calculate over a ten year span.

Problem 3

The park discussed above is partially fenced so that now, every year, only 5% of the animals from inside of the park leave (still, about 1% of the animals from the outside find their way in). Under what conditions can the park maintain a stable population now?

Problem 4

Suppose that a species of bird only lives in Canada, the United States, or in Mexico. Every year, 4% of the Canadian birds travel to the US, and 1% of them travel to Mexico. Every year, 6% of the US birds travel to Canada, and 4% go to Mexico. From Mexico, every year 10% travel to the US, and 0% go to Canada.

  1. Give the transition matrix.
  2. Is there a way for the three countries to have constant populations?
  3. Find all stable situations.


Topic: Linear Recurrences

In 1202 Leonardo of Pisa, also known as Fibonacci, posed this problem.

A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?

This moves past an elementary exponential growth model for population increase to include the fact that there is an initial period where newborns are not fertile. However, it retains other simplifying assumptions, such as that there is no gestation period and no mortality.

The number of newborn pairs that will appear in the upcoming month is simply the number of pairs that were alive last month, since those will all be fertile, having been alive for two months. The number of pairs alive next month is the sum of the number alive this month and the number of newborns.


f(n+1)=f(n)+f(n-1)  \qquad \text{where }f(0)=1\text{, }f(1)=1

This is an example of a recurrence relation (it is called that because the values of f are calculated by looking at other, prior, values of f). From it, we can easily answer Fibonacci's twelve-month question.

\begin{array}{r|ccccccccccccc}
\textit{month}
&0  &1  &2  &3  &4  &5  &6  &7  &8  &9  &10  &11  &12  \\ \hline
\textit{pairs}
&1  &1  &2  &3  &5  &8  &13  &21  &34  &55  &89  &144  &233
\end{array}

The sequence of numbers defined by the above equation (of which the first few are listed) is the Fibonacci sequence. The material of this chapter can be used to give a formula with which we can calculate f(n+1) without having to first find f(n), f(n-1), etc.

For that, observe that the recurrence is a linear relationship and so we can give a suitable matrix formulation of it.


\begin{pmatrix}
1  &1   \\
1  &0
\end{pmatrix}
\begin{pmatrix} f(n) \\ f(n-1) \end{pmatrix}
=
\begin{pmatrix} f(n+1) \\ f(n) \end{pmatrix}
\qquad\text{where }\begin{pmatrix} f(1) \\ f(0) \end{pmatrix}=\begin{pmatrix} 1 \\  1 \end{pmatrix}

Then, where we write T for the matrix and \vec{v}_{n} for the vector with components f(n+1) and f(n), we have that \vec{v}_n=T^n\vec{v}_0. The advantage of this matrix formulation is that by diagonalizing T we get a fast way to compute its powers: where T=PDP^{-1} we have T^n=PD^nP^{-1}, and the n-th power of the diagonal matrix D is the diagonal matrix whose entries are the n-th powers of the entries of D.

The characteristic equation of T is \lambda^2-\lambda-1. The quadratic formula gives its roots as (1+\sqrt{5})/2 and (1-\sqrt{5})/2. Diagonalizing gives this.


\begin{pmatrix}
1  &1  \\
1  &0
\end{pmatrix}
=\begin{pmatrix}
\frac{1+\sqrt{5}}{2}  &\frac{1-\sqrt{5}}{2} \\
1                     &1
\end{pmatrix}
\begin{pmatrix}
\frac{1+\sqrt{5}}{2}  &0   \\
0                     &\frac{1-\sqrt{5}}{2}
\end{pmatrix}
\begin{pmatrix}
\frac{1}{\sqrt{5}}     &-\frac{1-\sqrt{5}}{2\sqrt{5}}  \\
\frac{-1}{\sqrt{5}}    &\frac{1+\sqrt{5}}{2\sqrt{5}}
\end{pmatrix}

Introducing the vectors and taking the n-th power, we have

\begin{array}{rl}
\begin{pmatrix} f(n+1) \\ f(n) \end{pmatrix}
&=\begin{pmatrix}
1  &1  \\
1  &0
\end{pmatrix}^n
\begin{pmatrix} f(1) \\ f(0) \end{pmatrix}                                          \\
&=\begin{pmatrix}
\frac{1+\sqrt{5}}{2}  &\frac{1-\sqrt{5}}{2} \\
1                     &1
\end{pmatrix}
\begin{pmatrix}
\left(\frac{1+\sqrt{5}}{2}\right)^n  &0   \\
0                       &\left(\frac{1-\sqrt{5}}{2}\right)^n
\end{pmatrix}
\begin{pmatrix}
\frac{1}{\sqrt{5}}     &-\frac{1-\sqrt{5}}{2\sqrt{5}}  \\
\frac{-1}{\sqrt{5}}    &\frac{1+\sqrt{5}}{2\sqrt{5}}
\end{pmatrix}
\begin{pmatrix} f(1) \\ f(0) \end{pmatrix}
\end{array}

We can compute f(n) from the second component of that equation.


f(n)=\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n+1}
-\left(\frac{1-\sqrt{5}}{2}\right)^{n+1}\right]

Notice that f is dominated by its first term because (1-\sqrt{5})/2 has absolute value less than one, so its powers go to zero. Although we have extended the elementary model of population growth by adding a delay period before the onset of fertility, we nonetheless still get an (asymptotically) exponential function.
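The closed form is easy to test against the recurrence. Here is a minimal Octave sketch; note that Octave indices start at 1, so fib(k) below holds the value f(k-1) in the notation of this Topic.

% Octave sketch: the closed form against the recurrence
fib = [1 1];                    % f(0) and f(1)
for k = 3:13
  fib(k) = fib(k-1) + fib(k-2);
end
fib                             % prints 1 1 2 3 5 8 13 21 34 55 89 144 233
n = 0:12;
round((((1+sqrt(5))/2).^(n+1) - ((1-sqrt(5))/2).^(n+1)) / sqrt(5))
                                % prints the same thirteen numbers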

In general, a linear recurrence relation has the form


f(n+1)=a_nf(n)+a_{n-1}f(n-1)+\dots+a_{n-k}f(n-k)

(it is also called a difference equation). This recurrence relation is homogeneous because there is no constant term; i.e., it can be put into the form 0=-f(n+1)+a_nf(n)+a_{n-1}f(n-1)+\dots+a_{n-k}f(n-k). This is said to be a relation of order k. The relation, along with the initial conditions f(0), ..., f(k), completely determines a sequence. For instance, the Fibonacci relation is of order 2 and it, along with the two initial conditions f(0)=1 and f(1)=1, determines the Fibonacci sequence simply because we can compute any f(n) by first computing f(2), f(3), etc. In this Topic, we shall see how linear algebra can be used to solve linear recurrence relations.

First, we define the vector space in which we are working. Let V be the set of functions f from the natural numbers \mathbb{N}=\{0,1,2,\ldots\} to the real numbers. (Below we shall have functions with domain \{1,2,\ldots\}, that is, without 0, but it is not an important distinction.)

Putting the initial conditions aside for a moment, for any recurrence, we can consider the subset S of V of solutions. For example, without initial conditions, in addition to the function f given above, the Fibonacci relation is also solved by the function g whose first few values are g(0)=1, g(1)=3, g(2)=4, and g(3)=7.

The subset S is a subspace of V. It is nonempty because the zero function is a solution. It is closed under addition since if f_1 and f_2 are solutions, then


a_{n+1}(f_1+f_2)(n+1)+\dots+a_{n-k}(f_1+f_2)(n-k)

\begin{align}
&=(a_{n+1}f_1(n+1)+\dots+a_{n-k}f_1(n-k))          \\
&\quad+(a_{n+1}f_2(n+1)+\dots+a_{n-k}f_2(n-k))     \\
&=0.
\end{align}

And, it is closed under scalar multiplication since


a_{n+1}(rf_1)(n+1)+\dots+a_{n-k}(rf_1)(n-k)

\begin{align}
&=r(a_{n+1}f_1(n+1)+\dots+a_{n-k}f_1(n-k))   \\
&=r\cdot 0                                    \\
&=0.
\end{align}

We can give the dimension of S. Consider this map from the set of functions S to the set of vectors \mathbb{R}^k.


f
\mapsto
\begin{pmatrix} f(0) \\ f(1) \\ \vdots \\ f(k) \end{pmatrix}

Problem 3 shows that this map is linear. Because, as noted above, any solution of the recurrence is uniquely determined by the k initial conditions, this map is one-to-one and onto. Thus it is an isomorphism, and thus S has dimension k, the order of the recurrence.

So (again, without any initial conditions), we can describe the set of solutions of any linear homogeneous recurrence relation of order k by taking linear combinations of only k linearly independent functions. It remains to produce those functions.

For that, we express the recurrence f(n+1)=a_nf(n)+\dots+a_{n-k}f(n-k) with a matrix equation.


\begin{pmatrix}
a_n  &a_{n-1}  &a_{n-2}  &\ldots  &a_{n-k+1} &a_{n-k}  \\
1    &0        &0        &\ldots  &0         &0        \\
0    &1        &0                                      \\
0    &0        &1                                      \\
\vdots &\vdots &         &\ddots  &           &\vdots  \\
0    &0        &0        &\ldots   &1          &0
\end{pmatrix}
\begin{pmatrix} f(n) \\ f(n-1) \\ \vdots  \\ f(n-k) \end{pmatrix}
=
\begin{pmatrix} f(n+1) \\ f(n) \\ \vdots  \\ f(n-k+1) \end{pmatrix}

In trying to find the characteristic polynomial of the matrix, we can see the pattern in the 2 \! \times \! 2 case


\begin{vmatrix}
a_n-\lambda  &a_{n-1} \\
1            &-\lambda
\end{vmatrix}
=\lambda^2-a_n\lambda-a_{n-1}

and 3 \! \times \! 3 case.


\begin{vmatrix}
a_n-\lambda  &a_{n-1}   &a_{n-2}  \\
1            &-\lambda  &0        \\
0            &1         &-\lambda
\end{vmatrix}
=-\lambda^3+a_n\lambda^2+a_{n-1}\lambda+a_{n-2}

Problem 4 shows that the characteristic equation is this.


\begin{vmatrix}
a_n-\lambda &a_{n-1}  &a_{n-2}  &\ldots  &a_{n-k+1} &a_{n-k}  \\
1    &-\lambda &0        &\ldots  &0         &0        \\
0    &1        &-\lambda                                      \\
0    &0        &1                                      \\
\vdots &\vdots &         &\ddots   &           &\vdots  \\
0    &0        &0        &\ldots   &1          &-\lambda
\end{vmatrix}

=\pm(-\lambda^k+a_n\lambda^{k-1}+a_{n-1}\lambda^{k-2}
+\dots+a_{n-k+1}\lambda+a_{n-k})

We call that the polynomial "associated" with the recurrence relation. (We will be finding the roots of this polynomial and so we can drop the \pm as irrelevant.)

If -\lambda^k+a_n\lambda^{k-1}+a_{n-1}\lambda^{k-2}
+\dots+a_{n-k+1}\lambda+a_{n-k} has no repeated roots then the matrix is diagonalizable and we can, in theory, get a formula for f(n) as in the Fibonacci case. But, because we know that the subspace of solutions has dimension k, we do not need to do the diagonalization calculation, provided that we can exhibit k linearly independent functions satisfying the relation.

Where r_1, r_2, ..., r_k are the distinct roots, consider the functions f_{r_1}(n)=r_1^n through f_{r_k}(n)=r_k^n of powers of those roots. Problem 2 shows that each is a solution of the recurrence and that the k of them form a linearly independent set. So, given the homogeneous linear recurrence f(n+1)=a_nf(n)+\dots+a_{n-k}f(n-k) (that is, 0=-f(n+1)+a_nf(n)+\dots+a_{n-k}f(n-k)) we consider the associated equation 0=-\lambda^k+a_n\lambda^{k-1}+\dots+a_{n-k+1}\lambda+a_{n-k}. We find its roots r_1, ..., r_k, and if those roots are distinct then any solution of the relation has the form f(n)=c_1r_1^n+c_2r_2^n+\dots+c_kr_k^n for c_1, \dots, c_k\in\mathbb{R}. (The case of repeated roots is also easily done, but we won't cover it here— see any text on Discrete Mathematics.)

Now, given some initial conditions, so that we are interested in a particular solution, we can solve for c_1, ..., c_k. For instance, the polynomial associated with the Fibonacci relation is -\lambda^2+\lambda+1, whose roots are (1\pm\sqrt{5})/2 and so any solution of the Fibonacci equation has the form f(n)=c_1((1+\sqrt{5})/2)^n+c_2((1-\sqrt{5})/2)^n. Including the initial conditions for the cases n=0 and n=1 gives


\begin{array}{*{2}{rc}r}
c_1                  &+  &c_2                  &=  &1  \\
((1+\sqrt{5})/2)c_1  &+  &((1-\sqrt{5})/2)c_2  &=  &1
\end{array}

which yields c_1=1/\sqrt{5} and c_2=-1/\sqrt{5}, as was calculated above.
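As a quick check, the closed form can be evaluated in the Scheme dialect used in the computer code that follows the exercises (the names phi, psi, and fib-closed below are ours, introduced just for this sketch).

(define phi (/ (+ 1 (sqrt 5)) 2))        ; (1+sqrt(5))/2
(define psi (/ (- 1 (sqrt 5)) 2))        ; (1-sqrt(5))/2
(define (fib-closed n)                   ; c_1*phi^n + c_2*psi^n with c_1=1/sqrt(5), c_2=-1/sqrt(5)
  (round (* (/ 1 (sqrt 5))
            (- (expt phi n) (expt psi n)))))

Evaluating (fib-closed 1) through (fib-closed 5) gives 1, 1, 2, 3, and 5 (as inexact numbers, since sqrt returns an inexact result).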

We close by considering the nonhomogeneous case, where the relation has the form f(n+1)=a_nf(n)+a_{n-1}f(n-1)+\dots+a_{n-k}f(n-k)+b for some nonzero b. As in the first chapter of this book, only a small adjustment is needed to make the transition from the homogeneous case. This classic example illustrates.

In 1883, Edouard Lucas posed the following problem.

In the great temple at Benares, beneath the dome which marks the center of the world, rests a brass plate in which are fixed three diamond needles, each a cubit high and as thick as the body of a bee. On one of these needles, at the creation, God placed sixty-four disks of pure gold, the largest disk resting on the brass plate, and the others getting smaller and smaller up to the top one. This is the Tower of Bramah. Day and night unceasingly the priests transfer the disks from one diamond needle to another according to the fixed and immutable laws of Bramah, which require that the priest on duty must not move more than one disk at a time and that he must place this disk on a needle so that there is no smaller disk below it. When the sixty-four disks shall have been thus transferred from the needle on which at the creation God placed them to one of the other needles, tower, temple, and Brahmins alike will crumble into dust, and with a thunderclap the world will vanish.

(Translation of De Parville (1884) from Ball (1962).)

How many disk moves will it take? Instead of tackling the sixty-four-disk problem right away, we will consider the problem for smaller numbers of disks, starting with three.

To begin, all three disks are on the same needle.

Linalg towers of hanoi 1.png

After moving the small disk to the far needle, the mid-sized disk to the middle needle, and then moving the small disk to the middle needle we have this.

Linalg towers of hanoi 2.png

Now we can move the big disk over. Then, to finish, we repeat the process of moving the smaller disks, this time so that they end up on the third needle, on top of the big disk.

So the thing to see is that to move the very largest disk, the bottom disk, at a minimum we must first move the smaller disks to the middle needle, then move the big one, and then move all the smaller ones from the middle needle to the ending needle. Those three steps give us this recurrence.


T(n+1)=T(n)+1+T(n)=2T(n)+1 \quad \text{where }T(1)=1

We can easily get the first few values of T.

\begin{array}{r|cccccccccc}
n    &1  &2  &3  &4  &5  &6     &7    &8    &9   &10  \\
\hline
T(n) &1  &3  &7  &15  &31  &63  &127  &255  &511 &1023
\end{array}

We recognize those as being simply one less than a power of two.

To derive the formula T(n)=2^n-1, rather than just guessing at it from the table, we write the original relation as -1=-T(n+1)+2T(n), consider the homogeneous relation 0=-T(n)+2T(n-1), get its associated polynomial -\lambda+2, which has the single root r_1=2, and conclude that functions satisfying the homogeneous relation take the form T(n)=c_12^n.

That's the homogeneous solution. Now we need a particular solution.

Because the nonhomogeneous relation -1=-T(n+1)+2T(n) is so simple, in a few minutes (or by remembering the table) we can spot the particular solution T(n)=-1 (there are other particular solutions, but this one is easily spotted). So, without yet considering the initial condition, any solution of T(n+1)=2T(n)+1 is the sum of the homogeneous solution and this particular solution: T(n)=c_12^n-1.
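A quick check confirms that any function of this form does satisfy the relation.


2T(n)+1
=2(c_12^n-1)+1
=c_12^{n+1}-1
=T(n+1)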

The initial condition T(1)=1 now gives that c_1=1, and we've gotten the formula that generates the table: the n-disk Tower of Hanoi problem requires a minimum of 2^n-1 moves.
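In the Scheme dialect used in the code following the exercises, this closed form is a one-line procedure (the name tower-of-hanoi-moves-closed is ours); it can be compared with the recursive version given there.

(define (tower-of-hanoi-moves-closed n)
  (- (expt 2 n) 1))                      ; 2^n - 1

For instance, (tower-of-hanoi-moves-closed 10) evaluates to 1023, agreeing with the table above.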

Finding a particular solution in more complicated cases is, naturally, more complicated. A delightful and rewarding, but challenging, source on recurrence relations is (Graham, Knuth & Patashnik 1988). For more on the Tower of Hanoi, (Ball 1962) or (Gardner 1957) are good starting points. So is (Hofstadter 1985). Some computer code for trying some recurrence relations follows the exercises.

Exercises

Problem 1

Solve each of these homogeneous linear recurrence relations.

  1. f(n+1)=5f(n)-6f(n-1)
  2. f(n+1)=4f(n)
  3. f(n+1)=6f(n)+7f(n-1)+6f(n-2)
Problem 2

Give a formula for the relations of the prior exercise, with these initial conditions.

  1. f(0)=1, f(1)=1
  2. f(0)=0, f(1)=1
  3. f(0)=1, f(1)=1, f(2)=3.
Problem 3

Check that the isomorphism given between S and \mathbb{R}^k is a linear map. It is argued above that this map is one-to-one. What is its inverse?

Problem 4

Show that the characteristic equation of the matrix is as stated, that is, is the polynomial associated with the relation. (Hint: expanding down the final column, and using induction will work.)

Problem 5

Given a homogeneous linear recurrence relation f(n+1)=a_nf(n)+\dots+a_{n-k}f(n-k), let r_1, ..., r_k be the roots of the associated polynomial.

  1. Prove that each function f_{r_i}(n)=r_i^n satisfies the recurrence (without initial conditions).
  2. Prove that no r_i is 0.
  3. Prove that the set \{f_{r_1},\dots,f_{r_k}\} is linearly independent.
Problem 6

(This refers to the value T(64)=18,446,744,073,709,551,615 given in the computer code below.) Transferring one disk per second, how many years would it take the priests at the Tower of Hanoi to finish the job?

Computer Code
This code allows the generation of the first few values of a function defined by a recurrence and initial conditions. It is in the Scheme dialect of LISP (specifically, it was written for A. Jaffer's free Scheme interpreter SCM, although it should run in any Scheme implementation).

First, the Tower of Hanoi code is a straightforward implementation of the recurrence.

; T(n), the minimum number of moves for the n-disk puzzle, computed
; directly from the recurrence T(n) = 2T(n-1) + 1 with T(1) = 1.
(define (tower-of-hanoi-moves n)
  (if (= n 1)
      1
      (+ (* (tower-of-hanoi-moves (- n 1))
            2)
         1) ) )


(Note for readers unused to recursive code: to compute T(64), the computer is told to compute 2*T(63)+1, which requires, of course, computing T(63). The computer puts the "times 2" and the "plus 1" aside for a moment to do that. It computes T(63) by using this same piece of code (that's what "recursive" means), and to do that is told to compute 2*T(62)+1. This keeps up (the next step is to try to do T(62) while the other arithmetic is held in waiting), until, after 63 steps, the computer tries to compute T(1). It then returns T(1)=1, which now means that the computation of T(2) can proceed, etc., up until the original computation of T(64) finishes.)

The next routine calculates a table of the first few values. (Some language notes: '() is the empty list, that is, the empty sequence, and cons pushes something onto the start of a list. Note that, in the last line, the procedure proc is called on argument n.)

(define (first-few-outputs proc n)
  (first-few-outputs-aux proc n '()) )

(define (first-few-outputs-aux proc n lst)
  (if (< n 1)
      lst
      (first-few-outputs-aux proc (- n 1) (cons (proc n) lst)) ) )



The session at the SCM prompt went like this.


>(first-few-outputs tower-of-hanoi-moves 64)
Evaluation took 120 mSec
(1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767
65535 131071 262143 524287 1048575 2097151 4194303 8388607
16777215 33554431 67108863 134217727 268435455 536870911
1073741823 2147483647 4294967295 8589934591 17179869183
34359738367 68719476735 137438953471 274877906943 549755813887
1099511627775 2199023255551 4398046511103 8796093022207
17592186044415 35184372088831 70368744177663 140737488355327
281474976710655 562949953421311 1125899906842623
2251799813685247 4503599627370495 9007199254740991
18014398509481983 36028797018963967 72057594037927935
144115188075855871 288230376151711743 576460752303423487
1152921504606846975 2305843009213693951 4611686018427387903
9223372036854775807 18446744073709551615)


This is a list of T(1) through T(64). (The 120 mSec came on a 50 MHz '486 running in an XTerm under the X Window System on Linux. The session was edited to put line breaks between numbers.)



Appendix

Mathematics is made of arguments (reasoned discourse, that is, not crockery-throwing). This section is a reference to the most used techniques. A reader having trouble with, say, proof by contradiction, can turn here for an outline of that method.

But this section gives only a sketch. For more, these are classics: Methods of Logic by Quine, Induction and Analogy in Mathematics by Pólya, and Naive Set Theory by Halmos.


Propositions

The point at issue in an argument is the proposition. Mathematicians usually write the point in full before the proof and label it either Theorem for major points, Corollary for points that follow immediately from a prior one, or Lemma for results chiefly used to prove other results.

The statements expressing propositions can be complex, with many subparts. The truth or falsity of the entire proposition depends both on the truth value of the parts, and on the words used to assemble the statement from its parts.

Not

For example, where  P is a proposition, "it is not the case that  P " is true provided that  P is false. Thus, " n is not prime" is true only when  n is the product of smaller integers.

We can picture the "not" operation with a Venn diagram.

Linalg venn not.png

Where the box encloses all natural numbers, and inside the circle are the primes, the shaded area holds numbers satisfying "not  P ".

To prove that a "not  P " statement holds, show that  P is false.

And

Consider the statement form " P and  Q ". For the statement to be true both halves must hold: " 7 is prime and so is  3 " is true, while " 7 is prime and  3 is not" is false.

Here is the Venn diagram for " P and  Q ".

Linalg venn and.png

To prove " P and  Q ", prove that each half holds.

Or

A " P or  Q " is true when either half holds: " 7 is prime or  4 is prime" is true, while " 7 is not prime or  4 is prime" is false. We take "or" inclusively so that if both halves are true " 7 is prime or  4 is not" then the statement as a whole is true. (In everyday speech, sometimes "or" is meant in an exclusive way— "Eat your vegetables or no dessert" does not intend both halves to hold— but we will not use "or" in that way.)

The Venn diagram for "or" includes all of both circles.

Linalg venn or.png

To prove " P or  Q ", show that in all cases at least one half holds (perhaps sometimes one half and sometimes the other, but always at least one).

If-then

An "if  P then  Q " statement (sometimes written "P materially implies Q" or just " P implies  Q " or " P\implies Q") is true unless  P is true while  Q is false. Thus "if  7 is prime then  4 is not" is true while "if  7 is prime then  4 is also prime" is false. (Contrary to its use in casual speech, in mathematics "if  P then  Q " does not connote that  P precedes  Q or causes  Q .)

More subtly, in mathematics "if  P then  Q " is true when  P is false: "if  4 is prime then  7 is prime" and "if  4 is prime then  7 is not" are both true statements, sometimes said to be vacuously true. We adopt this convention because we want statements like "if a number is a perfect square then it is not prime" to be true, for instance when the number is  5 or when the number is  6 .

The diagram

Linalg venn ifthen.png

shows that  Q holds whenever  P does (another phrasing is " P is sufficient to give  Q "). Notice again that if  P does not hold,  Q may or may not be in force.

There are two main ways to establish an implication. The first way is direct: assume that  P is true and, using that assumption, prove  Q . For instance, to show "if a number is divisible by 5 then twice that number is divisible by 10", assume that the number is  5n and deduce that  2(5n)=10n . The second way is indirect: prove the contrapositive statement: "if  Q is false then  P is false" (rephrased, " Q can only be false when  P is also false"). As an example, to show "if a number is prime then it is not a perfect square", argue that if it were a square  p=n^2 then it could be factored  p=n\cdot n where  n<p and so wouldn't be prime (of course  p=0 or  p=1 don't give  n<p but they are nonprime by definition).

Note two things about this statement form.

First, an "if  P then  Q " result can sometimes be improved by weakening  P or strengthening  Q . Thus, "if a number is divisible by  p^2 then its square is also divisible by  p^2 " could be upgraded either by relaxing its hypothesis: "if a number is divisible by  p then its square is divisible by  p^2 ", or by tightening its conclusion: "if a number is divisible by  p^2 then its square is divisible by  p^4 ".

Second, after showing "if  P then  Q ", a good next step is to look into whether there are cases where  Q holds but  P does not. The idea is to better understand the relationship between  P and  Q , with an eye toward strengthening the proposition.

Equivalence

An if-then statement cannot be improved when not only does  P imply  Q , but also  Q implies  P . Some ways to say this are: " P if and only if  Q ", " P iff  Q ", " P and  Q are logically equivalent", " P is necessary and sufficient to give  Q ", " P\iff Q ". For example, "a number is divisible by a prime if and only if that number squared is divisible by the prime squared".

The picture here shows that  P and  Q hold in exactly the same cases.

Linalg venn equiv.png

Although in simple arguments a chain like " P if and only if R, which holds if and only if S ..." may be practical, typically we show equivalence by showing the "if  P then  Q " and "if  Q then  P " halves separately.


Quantifiers

Compare these two statements about natural numbers: "there is an  x such that  x is divisible by  x^2 " is true, while "for all numbers  x , that  x is divisible by  x^2 " is false. We call the "there is" and "for all" prefixes quantifiers.

For all

The "for all" prefix is the universal quantifier, symbolized  \forall .

Venn diagrams aren't very helpful with quantifiers, but in a sense the box we draw to border the diagram shows the universal quantifier since it delineates the universe of possible members.

Linalg venn forall.png

To prove that a statement holds in all cases, we must show that it holds in each case. Thus, to prove "every number divisible by  p has its square divisible by  p^2 ", take a single number of the form  pn and square it  (pn)^2=p^2n^2 . This is a "typical element" or "generic element" proof.

This kind of argument requires that we are careful to not assume properties for that element other than those in the hypothesis— for instance, this type of wrong argument is a common mistake: "if  n is divisible by a prime, say  2 , so that  n=2k then  n^2=(2k)^2=4k^2 and the square of the number is divisible by the square of the prime". That is an argument about the case  p=2 , but it isn't a proof for general  p .

There exists

We will also use the existential quantifier, symbolized  \exists and read "there exists".

As noted above, Venn diagrams are not much help with quantifiers, but a picture of "there is a number such that  P " would show both that there can be more than one and that not all numbers need satisfy  P .

Linalg venn thereexists.png

An existence proposition can be proved by producing something satisfying the property: once, to settle the question of primality of  2^{2^5}+1 , Euler produced its divisor  641 . But there are proofs showing that something exists without saying how to find it; Euclid's argument given in the next subsection shows there are infinitely many primes without naming them. In general, while demonstrating existence is better than nothing, giving an example is better, and an exhaustive list of all instances is great. Still, mathematicians take what they can get.

Finally, along with "Are there any?" we often ask "How many?" That is why the issue of uniqueness often arises in conjunction with questions of existence. Many times the two arguments are simpler if separated, so note that just as proving something exists does not show it is unique, neither does proving something is unique show that it exists. (Obviously "the natural number with more factors than any other" would be unique, but in fact no such number exists.)


Techniques of Proof

Induction

Many proofs are iterative, "Here's why the statement is true for the case of the number  1 ; it then follows for  2 , and from there to  3 , and so on ...". These are called proofs by induction. Such a proof has two steps. In the base step the proposition is established for some first number, often  0 or  1 . Then in the inductive step we assume that the proposition holds for numbers up to some  k and deduce that it then holds for the next number k+1.

Here is an example.

We will prove that  1+2+3+\dots+n=n(n+1)/2 .

For the base step we must show that the formula holds when  n=1 . That's easy, the sum of the first  1 number does indeed equal  1(1+1)/2 .

For the inductive step, assume that the formula holds for the numbers  1,2,\ldots,k . That is, assume all of these instances of the formula.

\begin{array}{rl}
1
&=1(1+1)/2  \\
\text{and}\quad 1+2
&=2(2+1)/2  \\
\text{and}\quad  1+2+3
&=3(3+1)/2  \\
&\vdots    \\
\text{and}\quad 1+\dots+k
&=k(k+1)/2
\end{array}

From this assumption we will deduce that the formula also holds in the next case,  k+1 . The deduction is straightforward algebra.


1+2+\cdots+k+(k+1)
=
\frac{k(k+1)}{2}+(k+1)
=
\frac{(k+1)(k+2)}{2}

We've shown in the base case that the above proposition holds for  1 . We've shown in the inductive step that if it holds for the case of  1 then it also holds for  2 ; therefore it does hold for 2. We've also shown in the inductive step that if the statement holds for the cases of  1 and  2 then it also holds for the next case  3 , etc. Thus it holds for any natural number greater than or equal to  1 .

Here is another example.

We will prove that every integer greater than  1 is a product of primes.

The base step is easy:  2 is the product of a single prime.

For the inductive step assume that each of  2, 3,\ldots ,k is a product of primes, aiming to show  k+1 is also a product of primes. There are two possibilities: (i) if  k+1 is not divisible by a number smaller than itself (other than  1 ) then it is a prime and so is the product of primes, and (ii) if  k+1 does factor into smaller numbers then each factor can be written as a product of primes (by the inductive hypothesis) and so  k+1 can be rewritten as a product of primes. That ends the proof.

(Remark. The Prime Factorization Theorem of Number Theory says that not only does a factorization exist, but that it is unique. We've shown the easy half.)

There are two things to note about the "next number" in an induction argument.

For one thing, while induction works on the integers, it's no good on the reals. There is no "next" real.

The other thing is that we sometimes use induction to go down, say, from  10 to  9 to  8 , etc., down to  0 . So "next number" could mean "next lowest number". Of course, at the end we have not shown the fact for all natural numbers, only for those less than or equal to  10 .

Contradiction

Another technique of proof is to show something is true by showing it can't be false.

The classic example is Euclid's, that there are infinitely many primes.

Suppose there are only finitely many primes  p_1,\dots,p_k . Consider  p_1\cdot p_2\dots p_k +1 . None of the primes on this supposedly exhaustive list divides that number evenly; each leaves a remainder of  1 . But every number is a product of primes so this can't be. Thus there cannot be only finitely many primes.

Every proof by contradiction has the same form: assume that the false proposition is true and derive some contradiction to known facts. This kind of logic is known as Aristotelian Logic, or Term Logic.

Another example is this proof that  \sqrt{2} is not a rational number.

Suppose that  \sqrt{2}=m/n . Squaring both sides and multiplying through by  n^2 gives this.


2n^2=m^2

Factor out the  2 's:  n=2^{k_n}\cdot \hat{n} and  m=2^{k_m}\cdot \hat{m} , where  \hat{n} and  \hat{m} are odd, and rewrite.


2\cdot (2^{k_n}\cdot \hat{n})^2
=
(2^{k_m}\cdot \hat{m})^2

The Prime Factorization Theorem says that there must be the same number of factors of  2 on both sides, but there are an odd number  1+2k_n on the left and an even number  2k_m on the right. That's a contradiction, so a rational number whose square is  2 cannot exist.

Both of these examples aimed to prove something doesn't exist. A negative proposition often suggests a proof by contradiction.


Sets, Functions, Relations

Sets

Mathematicians work with collections called sets. A set can be given as a listing between curly braces as in  \{ 1,4,9,16 \} , or, if that's unwieldy, by using set-builder notation as in  \{x\,\big|\, x^5-3x^3+2=0 \} (read "the set of all  x such that \ldots"). We name sets with capital roman letters as with the primes  P=\{2,3,5,7,11,\ldots\,\} , except for a few special sets such as the real numbers  \mathbb{R} , and the complex numbers  \mathbb{C} . To denote that something is an element (or member) of a set we use " {}\in {} ", so that  7\in\{3,5,7\} while  8\not\in\{3,5,7\} .

What distinguishes a set from any other type of collection is the Principle of Extensionality, that two sets with the same elements are equal. Because of this principle, in a set repeats collapse  \{7,7\}=\{7\} and order doesn't matter  \{2,\pi\}=\{\pi,2\} .

We use " \subset " for the subset relationship:  \{2,\pi\}\subset\{2,\pi,7\} and " \subseteq " for subset or equality (if  A is a subset of  B but  A\neq
B then  A is a proper subset of  B ). These symbols may be flipped, for instance  \{ 2,\pi,5\}\supset\{2,5\} .

Because of Extensionality, to prove that two sets are equal  A=B , just show that they have the same members. Usually we show mutual inclusion, that both  A\subseteq B and  A\supseteq B .

Set operations

Venn diagrams are handy here. For instance,  x\in P can be pictured

Linalg venn xinP.png

and " P\subseteq Q " looks like this.

Linalg venn ifthen.png

Note that this is a repeat of the diagram for "if \ldots then ..." propositions. That's because " P\subseteq Q " means "if  x\in P then  x\in Q ".

In general, for every propositional logic operator there is an associated set operator. For instance, the complement of  P is  P^{\text{comp}}=\{x\,\big|\, \text{not }(x\in P)\}

Linalg venn not.png

the union is  P\cup Q=\{x\,\big|\,(x\in P) \text{ or }(x\in Q)\}

Linalg venn or.png

and the intersection is  P\cap Q=\{x\,\big|\, (x\in P)\text{ and }(x\in Q)\}.

Linalg venn and.png

When two sets share no members their intersection is the empty set  \{\} , symbolized  \varnothing . Any set has the empty set for a subset, by the "vacuously true" property of the definition of implication.

Sequences

We shall also use collections where order does matter and where repeats do not collapse. These are sequences, denoted with angle brackets:  \langle  2,3,7 \rangle \neq\langle 2,7,3 \rangle  . A sequence of length  2 is sometimes called an ordered pair and written with parentheses:  (\pi,3) . We also sometimes say "ordered triple", "ordered  4 -tuple", etc. The set of ordered  n -tuples of elements of a set  A is denoted  A^n . Thus the set of pairs of reals is  \mathbb{R}^2 .

Functions

We first see functions in elementary Algebra, where they are presented as formulas (e.g.,  f(x)=16x^2-100 ). Progressing to more advanced Mathematics reveals more general functions: trigonometric ones, exponential and logarithmic ones, and even constructs like absolute value that involve piecing together parts. From these we see that functions aren't formulas; instead, the key idea is that a function associates with its input  x a single output  f(x) .

Consequently, a function or map is defined to be a set of ordered pairs  (x,f(x)\,) such that  x suffices to determine  f(x) , that is: if  x_1=x_2 then  f(x_1)=f(x_2) (this requirement is referred to by saying a function is well-defined). (More on this is in the section on isomorphisms.)

Each input  x is one of the function's arguments and each output  f(x) is a value. The set of all arguments is  f 's domain and the set of output values is its range. Usually we don't need to know what is and is not in the range and we instead work with a superset of the range, the codomain. The notation for a function  f with domain  X and codomain  Y is  f:X\to Y .

Linalg domain range codomain.png

We sometimes instead use the notation  x\stackrel{f}{\longmapsto} 16x^2-100 , read " x maps under  f to  16x^2-100 " or " 16x^2-100 is the image of  x ".

Some maps, like  x\mapsto \sin(1/x) , can be thought of as combinations of simple maps, here,  g(y)=\sin(y) applied to the image of  f(x)=1/x . The composition of  g:Y\to Z with  f:X\to Y is the map sending  x\in X to  g(\, f(x)\,)\in Z . It is denoted  g\circ f:X\to Z . This definition only makes sense if the range of  f is a subset of the domain of  g .

Observe that the identity map  \mbox{id}:Y\to Y defined by  \mbox{id}(y)=y has the property that for any  f:X\to Y , the composition  \mbox{id}\circ f is equal to  f . So an identity map plays the same role with respect to function composition that the number  0 plays in real number addition, or that the number  1 plays in multiplication.

In line with that analogy, define a left inverse of a map  f:X\to Y to be a function  g:\text{range}(f)\to X such that  g\circ f is the identity map on  X . Of course, a right inverse of  f is an  h:Y\to X such that  f\circ h is the identity.

A map that is both a left and right inverse of  f is called simply an inverse. An inverse, if one exists, is unique because if both  g_1 and  g_2 are inverses of  f then  g_1(x)=g_1\circ  (f\circ g_2) (x)
= (g_1\circ f) \circ g_2(x)
=g_2(x) (the middle equality comes from the associativity of function composition), so we often call it "the" inverse, written  f^{-1} . For instance, the inverse of the function  f:\mathbb{R}\to \mathbb{R} given by  f(x)=2x-3 is the function  f^{-1}:\mathbb{R}\to \mathbb{R} given by  f^{-1}(x)=(x+3)/2 .
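Composition and this inverse are easy to experiment with in the Scheme used earlier in this book; the procedure names compose, f, and f-inverse below are ours, introduced only for this sketch.

(define (compose g f)                 ; the map sending x to g(f(x))
  (lambda (x) (g (f x))))

(define (f x) (- (* 2 x) 3))          ; f(x) = 2x-3
(define (f-inverse x) (/ (+ x 3) 2))  ; its inverse, (x+3)/2

Here ((compose f-inverse f) 7) evaluates to 7, and ((compose f f-inverse) 7) does too, as compositions with an inverse should act as the identity.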

The superscript " f^{-1} " notation for function inverse can be confusing— it doesn't mean  1/f(x) . It is used because it fits into a larger scheme. Functions that have the same codomain as domain can be iterated, so that where f:X\to X, we can consider the composition of f with itself:  f\circ f , and  f\circ f\circ f , etc.

Naturally enough, we write f\circ f as  f^2 and f\circ f\circ f as  f^3 , etc. Note that the familiar exponent rules for real numbers obviously hold:  f^i\circ f^j=f^{i+j} and  (f^i)^j=f^{i\cdot j} . The relationship with the prior paragraph is that, where  f is invertible, writing  f^{-1} for the inverse and  f^{-2} for the inverse of  f^2 , etc., gives that these familiar exponent rules continue to hold, once  f^0 is defined to be the identity map.

If the codomain  Y equals the range of  f then we say that the function is onto (or surjective). A function has a right inverse if and only if it is onto (this is not hard to check). If no two arguments share an image, if  x_1\neq x_2 implies that  f(x_1)\neq f(x_2) , then the function is one-to-one (or injective). A function has a left inverse if and only if it is one-to-one (this is also not hard to check).

By the prior paragraph, a map has an inverse if and only if it is both onto and one-to-one; such a function is a correspondence. It associates one and only one element of the domain with each element of the range (for example, finite sets must have the same number of elements to be matched up in this way). Because a composition of one-to-one maps is one-to-one, and a composition of onto maps is onto, a composition of correspondences is a correspondence.

We sometimes want to shrink the domain of a function. For instance, we may take the function  f:\mathbb{R}\to \mathbb{R} given by  f(x)=x^2 and, in order to have an inverse, limit input arguments to nonnegative reals  \hat{f}:\mathbb{R}^+\to \mathbb{R} . Technically,  \hat{f} is a different function than  f ; we call it the restriction of  f to the smaller domain.

A final point on functions: neither  x nor  f(x) need be a number. As an example, we can think of  f(x,y)=x+y as a function that takes the ordered pair  (x,y) as its argument.

Relations

Some familiar operations are obviously functions: addition maps  (5,3) to  8 . But what of " < " or " = "? We here take the approach of rephrasing " 3<5 " to " (3,5) is in the relation  < ". That is, define a binary relation on a set  A to be a set of ordered pairs of elements of  A . For example, the  < relation is the set   \{(a,b)\,\big|\, a<b\} ; some elements of that set are  (3,5) ,  (3,7) , and  (1,100) .

Another binary relation on the integers is equality; this relation is formally written as the set  \{\ldots,(-1,-1),(0,0),(1,1),\ldots\} .

Still another example is "closer than  10 ", the set  \{(x,y)\,\big|\, |x-y|<10 \} . Some members of that relation are  (1,10) ,  (10,1) , and  (42,44) . Neither  (11,1) nor  (1,11) is a member.

Those examples illustrate the generality of the definition. All kinds of relationships (e.g., "both numbers even" or "first number is the second with the digits reversed") are covered under the definition.

Equivalence Relations

We shall need to say, formally, that two objects are alike in some way. While these alike things aren't identical, they are related (e.g., two integers that "give the same remainder when divided by  2 ").

A binary relation  \{(a,b),\ldots \} is an equivalence relation when it satisfies

  1. reflexivity: any object is related to itself;
  2. symmetry: if  a is related to  b then  b is related to  a ;
  3. transitivity: if  a is related to  b and  b is related to  c then  a is related to  c .

(To see that these conditions formalize being the same, read them again, replacing "is related to" with "is like".)

Some examples (on the integers): " = " is an equivalence relation, " < " does not satisfy symmetry, "same sign" is an equivalence relation, while "nearer than  10 " fails transitivity.

Partitions

In "same sign"  \{ (1,3),(-5,-7),(-1,-1),\ldots\} there are two kinds of pairs, the first with both numbers positive and the second with both negative. So integers fall into exactly one of two classes, positive or negative.

A partition of a set  S is a collection of subsets  \{S_1,S_2,\ldots\} such that every element of  S is in one and only one  S_i :  S_1\cup S_2\cup \ldots{} = S , and if  i is not equal to  j then  S_i\cap S_j=\varnothing . Picture  S being decomposed into distinct parts.

Linalg partition.png

Thus, the first paragraph says "same sign" partitions the integers into the positives and the negatives.

Similarly, the equivalence relation "=" partitions the integers into one-element sets.

Another example is the fractions. Of course, 2/3 and 4/6 are equivalent fractions. That is, for the set S=\{n/d\,\big|\, n,d\in\mathbb{Z}\text{ and }d\neq 0\}, we define two elements n_1/d_1 and n_2/d_2 to be equivalent if n_1d_2=n_2d_1. We can check that this is an equivalence relation, that is, that it satisfies the above three conditions. With that, S is divided up into parts.

Linalg partition 2.png

Before we show that equivalence relations always give rise to partitions, we first illustrate the argument. Consider the relationship between two integers of "same parity", the set  \{ (-1,3),(2,4),(0,0),\ldots\} (i.e., "give the same remainder when divided by  2 "). We want to say that the integers split into two pieces, the evens and the odds, and inside a piece each member has the same parity as each other. So for each  x we define the set of numbers associated with it:  S_x=\{y\,\big|\, (x,y)\in\text{same parity}\} . Some examples are  S_1=\{\ldots,-3,-1,1,3,\ldots\} , and  S_4=\{\ldots,-2,0,2,4,\ldots\} , and  S_{-1}=\{\ldots,-3,-1,1,3,\ldots\} . These are the parts, e.g.,  S_1 is the odds.


Theorem: An equivalence relation induces a partition on the underlying set.

Proof

Call the set  S and the relation  R . In line with the illustration in the paragraph above, for each  x\in S define  S_x=\{y\,\big|\, (x,y)\in R\} .

Observe that, as  x is a member of  S_x , the union of all these sets is  S . So we will be done if we show that distinct parts are disjoint: if  S_x\neq S_y then  S_x\cap S_y=\varnothing . We will verify this through the contrapositive, that is, we will assume that  S_x\cap S_y\neq\varnothing in order to deduce that  S_x=S_y .

Let  p be an element of the intersection. Then by definition of S_x and S_y, the two  (x,p) and  (y,p) are members of R, and by symmetry of this relation  (p,x) and  (p,y) are also members of  R . To show that  S_x=S_y we will show each is a subset of the other.

Assume that  q\in S_x so that  (q,x)\in R . Use transitivity along with  (x,p)\in R to conclude that  (q,p) is also an element of  R . But  (p,y)\in R so another use of transitivity gives that  (q,y)\in  R . Thus  q\in S_y . Therefore  q\in S_x implies  q\in S_y , and so  S_x\subseteq S_y .

The same argument in the other direction gives the other inclusion, and so the two sets are equal, completing the contrapositive argument.
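The construction in the proof can be imitated computationally. Here is a small sketch in the Scheme used earlier in this book (the procedure name class-of is ours); given an element x, a finite list, and an equivalence predicate, it collects the part S_x restricted to that list.

(define (class-of x lst same?)         ; the elements of lst equivalent to x
  (cond ((null? lst) '())
        ((same? x (car lst))
         (cons (car lst) (class-of x (cdr lst) same?)))
        (else (class-of x (cdr lst) same?))))

For instance, with the "same parity" predicate (lambda (a b) (= (modulo a 2) (modulo b 2))), the call (class-of 1 '(-3 -2 -1 0 1 2 3 4) (lambda (a b) (= (modulo a 2) (modulo b 2)))) evaluates to (-3 -1 1 3), the odd part of that list.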

We call each part of a partition an equivalence class (or informally, "part").

We sometimes pick a single element of each equivalence class to be the class representative.

Linalg partition 3.png

Usually when we pick representatives we have some natural scheme in mind. In that case we call them the canonical representatives.

An example is the simplest form of a fraction. We've defined  3/5 and  9/15 to be equivalent fractions. In everyday work we often use the "simplest form" or "reduced form" fraction as the class representatives.

Linalg partition 4.png



Resources And Licensing

For information regarding the Licensing of this book please see Wikibooks' Copyright Policy. The original text of this wikibook has been copied from the book "Linear Algebra" by:

Jim Hefferon, Mathematics
Saint Michael's College
Colchester, Vermont USA 05439.

The original text is available here, and is released under either the GNU Free Documentation License or the Creative Commons Attribution-ShareAlike 2.5 License.



Other Books and Lectures

  • Linear Algebra - A free textbook by Prof. Jim Hefferon of St. Michael's College. This wikibook began as a wikified copy of Prof. Hefferon's text. Prof. Hefferon's book may differ from the book here, as both are still under development.
  • A Course in Linear Algebra - A free set of video lectures given at the Massachusetts Institute of Technology by Prof. Gilbert Strang. Prof. Strang's book on linear algebra has been a widely influential book and it is referenced many times in this text.
  • A First Course in Linear Algebra - A free textbook by Prof. Rob Beezer at the University of Puget Sound, released under GFDL.
  • Lecture Notes on Linear Algebra - An online viewable set of lecture notes by Prof. José Figueroa-O’Farrill at the University of Edinburgh.

Software

  • Octave, a free and open source application for Numerical Linear Algebra. There is also an Octave Programming Tutorial wikibook under development.
  • A toolkit for linear algebra students - An online software resource aimed at helping linear algebra students learn and practice basic linear algebra procedures, such as Gauss-Jordan reduction, calculating the determinant, or checking for linear independence. This software was produced by Przemyslaw Bogacki in the Department of Mathematics and Statistics at Old Dominion University.
  • Online Javascript Matrix Calculator, basic matrix algebra, elementary row operations, RREF, inverses, determinants, characteristic polynomials, eigenvalues and eigenvectors, null space, range space, and least squares solutions to linear systems. The software was developed by the department of mathematics at the University of Houston.

Wikipedia

Wikipedia is frequently a great resource, often giving a general non-technical overview of a subject. Wikipedia has many articles on the subject of Linear Algebra. Below are some articles about some of the material in this book.




Bibliography

  • Microsoft (1993), Microsoft Programmers Reference, Microsoft Press .
  • William Lowell Putnam Mathematical Competition, Problem A-5, 1990.
  • The USSR Mathematics Olympiad, number 174.
  • Ackerson, R. H. (Dec. 1955), "A Note on Vector Spaces", American Mathematical Monthly (American Mathematical Society) 62 (10): 721 .
  • Anning, Norman (proposer); Trigg, C. W. (solver) (Feb. 1953), "Elementary problem 1016", American Mathematical Monthly (American Mathematical Society) 60 (2): 115 .
  • Anton, Howard (1987), Elementary Linear Algebra, John Wiley & Sons .
  • Arrow, J. (1963), Social Choice and Individual Values, Wiley .
  • Ball, W.W. (1962), Mathematical Recreations and Essays, MacMillan  (revised by H.S.M. Coxeter).
  • Bennett, William (March 15, 1993), "Quantifying America's Decline", Wall Street Journal 
  • Birkhoff, Garrett; MacLane, Saunders (1965), Survey of Modern Algebra, Macmillan .
  • Bittinger, Marvin (proposer) (Jan. 1973), "Quickie 578", Mathematics Magazine (American Mathematical Society) 46 (5): 286,296 .
  • Blass, A. (1984), "Existence of Bases Implies the Axiom of Choice", in Baumgartner, J. E., Axiomatic Set Theory, Providence RI: American Mathematical Society, pp. 31–33 .
  • Bridgman, P. W. (1931), Dimensional Analysis, Yale University Press .
  • Casey, John (1890), The Elements of Euclid, Books I to VI and XI (9th ed.), Hodges, Figgis, and Co. .
  • Clark, David H.; Coupe, John D. (Mar. 1967), "The Bangor Area Economy Its Present and Future", Report to the City of Bangor, ME .
  • Clarke, Arthur C. (1982), Great SF Stories 8: Technical Error, DAW Books .
  • Courant, Richard; Robbins, Herbert (1978), What is Mathematics?, Oxford University Press .
  • Coxeter, H.S.M. (1974), Projective Geometry (Second ed.), Springer-Verlag .
  • Cullen, Charles G. (1990), Matrices and Linear Transformations (Second ed.), Dover .
  • Dalal, Siddhartha; Folkes, Edward; Hoadley, Bruce (Fall 1989), "Lessons Learned from Challenger: A Statistical Perspective", Stats: the Magazine for Students of Statistics: 14-18 
  • Davies, Thomas D. (Jan. 1990), "New Evidence Places Peary at the Pole", National Geographic Magazine 177 (1): 44 .
  • de Mestre, Neville (1990), The Mathematics of Projectiles in sport, Cambridge University Press .
  • De Parville (1884), La Nature, I, Paris, pp. 285-286 .
  • Duncan, Dewey (proposer); Quelch, W. H. (solver) (Sept.-Oct. 1952), Mathematics Magazine 26 (1): 48 
  • Dudley, Underwood (proposer); Lebow, Arnold (proposer); Rothman, David (solver) (Jan. 1963), "Elementary problem 1151", American Mathematical Monthly 70 (1): 93 .
  • Ebbing, Darrell D. (1993), General Chemistry (Fourth ed.), Houghton Mifflin .
  • Ebbinghaus, H. D. (1990), Numbers, Springer-Verlag .
  • Eggar, M.H. (Aug./Sept. 1998), "Pinhole Cameras, Perspective, and Projective Geometry", American Mathematical Monthly (American Mathematical Society): 618-630 .
  • Einstein, A. (1911), Annals of Physics 35: 686 .
  • Feller, William (1968), An Introduction to Probability Theory and Its Applications, 1 (3rd ed.), Wiley .
  • Gardner, Martin (May. 1957), "Mathematical Games: About the remarkable similarity between the Icosian Game and the Tower of Hanoi", Scientific American: 150-154 .
  • Gardner, Martin (April 1970), "Mathematical Games, Some mathematical curiosities embedded in the solar system", Scientific American: 108-112 .
  • Gardner, Martin (October 1974), "Mathematical Games, On the paradoxical situations that arise from nontransitive relations", Scientific American .
  • Gardner, Martin (October 1980), "Mathematical Games, From counting votes to making votes count: the mathematics of elections", Scientific American .
  • Gardner, Martin (1990), The New Ambidextrous Universe (Third revised ed.), W. H. Freeman and Company .
  • Gilbert, George T.; Krusemeyer, Mark; Larson, Loren C. (1993), The Wohascum County Problem Book, The Mathematical Association of America .
  • Giordano, R.; Jaye, M.; Weir, M. (1986), "The Use of Dimensional Analysis in Mathematical Modeling", UMAP Modules (COMAP) (632) .
  • Giordano, R.; Wells, M.; Wilde, C. (1987), "Dimensional Analysis", UMAP Modules (COMAP) (526) .
  • Goult, R.J.; Hoskins, R.F.; Milner, J.A.; Pratt, M.J. (1975), Computational Methods in Linear Algebra, Wiley .
  • Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1988), Concrete Mathematics, Addison-Wesley .
  • Haggett, Vern (proposer); Saunders, F. W. (solver) (Apr. 1955), "Elementary problem 1135", American Mathematical Monthly (American Mathematical Society) 62 (5): 257 .
  • Halmos, Paul P. (1958), Finite Dimensional Vector Spaces (Second ed.), Van Nostrand .
  • Halsey, William D. (1979), Macmillan Dictionary, Macmillan .
  • Hamming, Richard W. (1971), Introduction to Applied Numerical Analysis, Hemisphere Publishing .
  • Hanes, Kit (1990), "Analytic Projective Geometry and its Applications", UMAP Modules (UMAP UNIT 710): 111 .
  • Heath, T. (1956), Euclid's Elements, 1, Dover .
  • Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice Hall 
  • Hofstadter, Douglas R. (1985), Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books .
  • Iosifescu, Marius (1980), Finite Markov Processes and Their Applications, UMI Research Press .
  • Ivanoff, V. F. (proposer); Esty, T. C. (solver) (Feb. 1933), "Problem 3529", American Mathematical Monthly 39 (2): 118 
  • Kelton, Christina M.L. (1983), Trends on the Relocation of U.S. Manufacturing, Wiley .
  • Kemeny, John G.; Snell, J. Laurie (1960), Finite Markov Chains, D. Van Nostrand .
  • Kemp, Franklin (Oct. 1982), "Linear Equations", American Mathematical Monthly (American Mathematical Society): 608 .
  • Klamkin, M. S. (proposer) (Jan.-Feb. 1957), "Trickie T-27", Mathematics Magazine 30 (3): 173 .
  • Knuth, Donald E. (1988), The Art of Computer Programming, Addison Wesley .
  • Leontief, Wassily W. (Oct. 1951), "Input-Output Economics", Scientific American 185 (4): 15 .
  • Leontief, Wassily W. (Apr. 1965), "The Structure of the U.S. Economy", Scientific American 212 (4): 25 .
  • Liebeck, Hans. (Dec. 1966), "A Proof of the Equality of Column Rank and Row Rank of a Matrix", American Mathematical Monthly (American Mathematical Society) 73 (10): 1114 .
  • Macdonald, Kenneth; Ridge, John (1988), "Social Mobility", British Social Trends Since 1900 (Macmillan) .
  • Morrison, Clarence C. (proposer) (1967), "Quickie", Mathematics Magazine 40 (4): 232 .
  • Munkres, James R. (1964), Elementary Linear Algebra, Addison-Wesley .
  • Neimi, G.; Riker, W. (June 1976), "The Choice of Voting Systems", Scientific American: 21-27 .
  • O'Hanian, Hans (1985), Physics, 1, W. W. Norton 
  • O'Nan, Micheal (1990), Linear Algebra (3rd ed.), Harcourt College Pub .
  • Oakley, Cletus; Baker, Justine (April 1977), "Least Squares and the 3:40 Mile", Mathematics Teacher 
  • Pólya, G. (1954), Mathematics and Plausible Reasoning: Volume II Patterns of Plausible Inference, Princeton University Press 
  • Peterson, G. M. (Apr. 1955), "Area of a triangle", American Mathematical Monthly (American Mathematical Society) 62 (4): 249 .
  • Poundstone, W. (2008), Gaming the vote, Hill and Wang, ISBN 978-0-8090-4893-9 .
  • Ransom, W. R. (proposer); Gupta, Hansraj (solver) (Jan. 1935), "Elementary problem 105", American Mathematical Monthly 42 (1): 47 .
  • Rice, John R. (1993), Numerical Methods, Software, and Analysis, Academic Press .
  • Rucker, Rudy (1982), Infinity and the Mind, Birkhauser .
  • Rupp, C. A. (proposer); Aude, H. T. R. (solver) (Jun.-July 1931), "Problem 3468", American Mathematical Monthly (American Mathematical Society) 37 (6): 355 .
  • Ryan, Patrick J. (1986), Euclidean and Non-Euclidean Geometry: An Analytic Approach, Cambridge University Press .
  • Salkind, Charles T. (1975), Contest Problem Book No 1: Annual High School Mathematics Examinations 1950-1960 .
  • Seidenberg, A. (1962), Lectures in Projective Geometry, Van Nostrand .
  • Silverman, D. L. (proposer); Trigg, C. W. (solver) (Jan. 1963), "Quickie 237", Mathematics Magazine (American Mathematical Society) 36 (1) .
  • Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich 
  • Strang, Gilbert (Nov. 1993), "The Fundamental Theorem of Linear Algebra", American Mathematical Monthly (American Mathematical Society): 848-855 .
  • Taylor, Alan D. (1995), Mathematics and Politics: Strategy, Voting, Power, and Proof, Springer-Verlag .
  • Tilley, Burt, Private Communication .
  • Trigg, C. W. (proposer); Walker, R. J. (solver) (Jan. 1949), "Elementary Problem 813", American Mathematical Monthly (American Mathematical Society) 56 (1) .
  • Trigg, C. W. (proposer) (Jan. 1963), "Quickie 307", Mathematics Magazine (American Mathematical Society) 36 (1): 77 .
  • Trono, Tony (compiler) (1991), University of Vermont Mathematics Department High School Prize Examinations 1958-1991, mimeographed printing 
  • Walter, Dan (proposer); Tytun, Alex (solver) (1949), "Elementary problem 834", American Mathematical Monthly (American Mathematical Society) 56 (6): 409 .
  • Weston, J. D. (Aug./Sept. 1959), "Volume in Vector Spaces", American Mathematical Monthly (American Mathematical Society) 66 (7): 575-577 .
  • Weyl, Hermann (1952), Symmetry, Princeton University Press .
  • Wickens, Thomas D. (1982), Models for Behavior, W.H. Freeman .
  • Wilansky, Albert (Nov. 1951), "The Row-Sum of the Inverse Matrix", American Mathematical Monthly (American Mathematical Society) 58 (9): 614 .
  • Wilkinson, J. H. (1965), The Algebraic Eigenvalue Problem, Oxford University Press .
  • Yaglom, I. M. (1988), Felix Klein and Sophus Lie: Evolution of the Idea of Symmetry in the Nineteenth Century, Birkhäuser .
  • Zwicker, S. (1991), "The Voters' Paradox, Spin, and the Borda Count", Mathematical Social Sciences 22: 187-227 



Index

A

accuracy

of Gauss' method

addition

vector

additive inverse

adjoint matrix

angle

antipodal

antisymmetric matrix

argument

Arithmetic-Geometric Mean Inequality

arrow diagram 1, 2, 3, 4, 5

augmented matrix

automorphism

dilation
reflection
rotation

B

back-substitution

base step

of induction

basis 1, 2, 3

change of
definition
natural
orthogonal
orthogonalization
orthonormal
standard 1, 2
standard over the complex numbers
string

best fit line

binary relation

block matrix

box

orientation
sense
volume

C

C language

classes

equivalence

canonical form

for row equivalence
for matrix equivalence
for nilpotent matrices
for similarity

canonical representative

Cauchy-Schwarz Inequality

Cayley-Hamilton theorem

change of basis

characteristic

equation
polynomial
value
vector

characterized

Chemistry problem 1, 2, 3

central projection

circuits

parallel
series
series-parallel

closure

of rangespace
of nullspace

codomain

cofactor

column

vector

column rank

full

column space

complement

complementary subspaces

orthogonal

complex numbers

vector space over

component

composition

self

computer algebra systems

concatenation

condition number

congruent figures

congruent plane figures

contradiction

contrapositive

convex set

coordinates

homogeneous
with respect to a basis

corollary

correspondence 1, 2

cosets

Cramer's Rule

cross product

crystals

diamond
graphite
salt
unit cell

D

da Vinci, Leonardo

determinant 1, 2

cofactor
Cramer's Rule
definition
exists 1, 2, 3
Laplace Expansion
minor
Vandermonde
permutation expansion 1, 2

diagonal matrix 1, 2

diagonalizable

difference equation

homogeneous

dilation

matrix representation

dimension

physical

dilation 1, 2

direct map

direct sum

definition
of two subspaces
external
internal

direction vector

distance-preserving

division theorem

domain

dot product

double precision

dual space

E

echelon form

leading variable
free variable
reduced

eigenvalue

of a matrix
of a transformation

eigenvector

of a matrix
of a transformation

eigenspace

element

elementary

matrix

elementary reduction matrices

elementary reduction operations

pivoting
rescaling
swapping

elementary row operations

empty

Erlanger Program

entry

equivalence

class
canonical representative
representative

equivalence relation 1, 2

row equivalence
isomorphism
matrix equivalence
matrix similarity

equivalent statements

Euclid

even functions 1, 2

even polynomials

external direct sum

F

Fibonacci sequence

field

definition

finite-dimensional vector space

flat

form

free variable

full column rank

full row rank

function 1, 2

argument
codomain
composition
correspondence
domain
even
identity
inverse 1, 2
inverse image
left inverse
multilinear
range
restriction
odd
one-to-one function
onto
right inverse
structure preserving 1, 2
see homomorphism
two sided inverse
value
well-defined
zero

Fundamental Theorem

of Linear Algebra

G

Gauss' Method

accuracy
back-substitution
elementary operations
Gauss-Jordan

Gauss-Jordan

Gaussian operations

generalized nullspace

generalized rangespace

Gram-Schmidt Orthogonalization

Geometry of Eigenvalues

Geometry of Linear Maps

H

historyless

Markov Chain

homogeneous

homogeneous coordinate vector

homogeneous coordinates

homomorphism

composition
matrix representation 1, 2, 3
nonsingular 1, 2
nullity
nullspace
rank 1, 2
rangespace
rank
zero

I

ideal line

ideal point

identity function

identity matrix 1, 2

identity function

if-then statement

ill-conditioned

image

under a function

improper subspace

incidence matrix

index

of nilpotency

induction 1, 2

inductive step

of induction

inherited operations

inner product

Input-Output Analysis

internal direct sum 1, 2

intersection

invariant subspace

definition

inverse

additive
left inverse
matrix
right inverse
two-sided