Linear Algebra/Jordan Canonical Form


This subsection moves from the canonical form for nilpotent matrices to the one for all matrices.

We have shown that if a map is nilpotent then all of its eigenvalues are zero. We can now prove the converse.

Lemma 2.1

A linear transformation whose only eigenvalue is zero is nilpotent.

Proof

If a transformation $t$ on an $n$-dimensional space has only the single eigenvalue of zero then its characteristic polynomial is $x^n$. The Cayley-Hamilton Theorem says that a map satisfies its characteristic polynomial, so $t^n$ is the zero map. Thus $t$ is nilpotent.
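As a concrete check (the matrix below is chosen only for illustration), a short SymPy computation confirms both halves of the argument: the matrix's only eigenvalue is zero, and raising it to the dimension of the space gives the zero map.

```python
import sympy as sp

# A sample matrix whose only eigenvalue is 0, chosen for illustration.
T = sp.Matrix([[0, 1, 2],
               [0, 0, 3],
               [0, 0, 0]])

print(T.eigenvals())  # {0: 3}: zero is the only eigenvalue, multiplicity 3
print(T**3)           # the zero matrix, as Cayley-Hamilton predicts
```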

We have a canonical form for nilpotent matrices, that is, for each matrix whose single eigenvalue is zero: each such matrix is similar to one that is all zeroes except for blocks of subdiagonal ones. (To make this representation unique we can fix some arrangement of the blocks, say, from longest to shortest.) We next extend this to all single-eigenvalue matrices.

Observe that if $t$'s only eigenvalue is $\lambda$ then $(t-\lambda)$'s only eigenvalue is $0$, because $t(\vec{v})=\lambda\vec{v}$ if and only if $(t-\lambda)\,(\vec{v})=0\cdot\vec{v}$. The natural way to extend the results for nilpotent matrices is to represent $t-\lambda$ in the canonical form $N$, and try to use that to get a simple representation $T$ for $t$. The next result says that this try works.

Lemma 2.2

If the matrices $T-\lambda I$ and $N$ are similar, then the matrices $T$ and $N+\lambda I$ are also similar, via the same change of basis matrices.

Proof

With $N=P(T-\lambda I)P^{-1}=PTP^{-1}-P(\lambda I)P^{-1}$ we have $N=PTP^{-1}-PP^{-1}(\lambda I)$ since the diagonal matrix $\lambda I$ commutes with anything, and so $N=PTP^{-1}-\lambda I$. Therefore $N+\lambda I=PTP^{-1}$, as required.

Example 2.3

The characteristic polynomial of

$$T=\begin{pmatrix}2&-1\\1&4\end{pmatrix}$$

is $(x-3)^2$ and so $T$ has only the single eigenvalue $3$. Thus for

$$T-3I=\begin{pmatrix}-1&-1\\1&1\end{pmatrix}$$

the only eigenvalue is $0$, and $T-3I$ is nilpotent. The null spaces are routine to find; to ease this computation we take $T$ to represent the transformation $t:\mathbb{C}^2\to\mathbb{C}^2$ with respect to the standard basis (we shall maintain this convention for the rest of the chapter).

$$\begin{array}{c|c|c|c}
p & (T-3I)^p & \mathscr{N}((t-3)^p) & \text{nullity}\\ \hline
1 & \begin{pmatrix}-1&-1\\1&1\end{pmatrix} & \{\begin{pmatrix}-y\\y\end{pmatrix}\mid y\in\mathbb{C}\} & 1\\
2 & \begin{pmatrix}0&0\\0&0\end{pmatrix} & \mathbb{C}^2 & 2
\end{array}$$

The dimensions of these null spaces show that the action of an associated map $t-3$ on a string basis is $\vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0}$. Thus, the canonical form for $t-3$ with one choice for a string basis is

$$\operatorname{Rep}_{B,B}(t-3)=N=\begin{pmatrix}0&0\\1&0\end{pmatrix}\qquad B=\langle\begin{pmatrix}1\\1\end{pmatrix},\begin{pmatrix}-2\\2\end{pmatrix}\rangle$$

and by Lemma 2.2, $T$ is similar to this matrix.

$$\operatorname{Rep}_{B,B}(t)=N+3I=\begin{pmatrix}3&0\\1&3\end{pmatrix}$$

We can produce the similarity computation. Recall from the Nilpotence section how to find the change of basis matrices $P$ and $P^{-1}$ to express $N$ as $P(T-3I)P^{-1}$. The similarity diagram

$$\begin{array}{ccc}
\mathbb{C}^2_{\text{w.r.t. }\mathcal{E}_2} & \xrightarrow{\;T-3I\;} & \mathbb{C}^2_{\text{w.r.t. }\mathcal{E}_2}\\
\downarrow{\scriptstyle\text{id}} & & \downarrow{\scriptstyle\text{id}}\\
\mathbb{C}^2_{\text{w.r.t. }B} & \xrightarrow{\;N\;} & \mathbb{C}^2_{\text{w.r.t. }B}
\end{array}$$

describes that to move from the lower left to the upper left we multiply by

$$P^{-1}=\operatorname{Rep}_{B,\mathcal{E}_2}(\text{id})=\begin{pmatrix}1&-2\\1&2\end{pmatrix}$$

and to move from the upper right to the lower right we multiply by this matrix.

$$P=\begin{pmatrix}1&-2\\1&2\end{pmatrix}^{-1}=\begin{pmatrix}1/2&1/2\\-1/4&1/4\end{pmatrix}$$

So the similarity is expressed by

$$\begin{pmatrix}3&0\\1&3\end{pmatrix}=\begin{pmatrix}1/2&1/2\\-1/4&1/4\end{pmatrix}\begin{pmatrix}2&-1\\1&4\end{pmatrix}\begin{pmatrix}1&-2\\1&2\end{pmatrix}$$

which is easily checked.
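The check can be done mechanically. This SymPy sketch redoes the example's arithmetic: conjugating by $P$ carries $T-3I$ to the canonical nilpotent form $N$, and carries $T$ to $N+3I$.

```python
import sympy as sp

T = sp.Matrix([[2, -1], [1, 4]])
P = sp.Matrix([[sp.Rational(1, 2), sp.Rational(1, 2)],
               [sp.Rational(-1, 4), sp.Rational(1, 4)]])

# P (T - 3I) P^{-1} is the canonical nilpotent form N, and
# P T P^{-1} is N + 3I, the matrix that T is similar to.
print(P * (T - 3 * sp.eye(2)) * P.inv())  # Matrix([[0, 0], [1, 0]])
print(P * T * P.inv())                    # Matrix([[3, 0], [1, 3]])
```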

Example 2.4

Suppose that a $4\times 4$ matrix $T$ has characteristic polynomial $(x-\lambda)^4$

and so has the single eigenvalue $\lambda$. Suppose also that the nullities of the powers of $t-\lambda$ are: the null space of $t-\lambda$ has dimension two, the null space of $(t-\lambda)^2$ has dimension three, and the null space of $(t-\lambda)^3$ has dimension four. Thus, $t-\lambda$ has the action on a string basis of $\vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{\beta}_3\mapsto\vec{0}$ and $\vec{\beta}_4\mapsto\vec{0}$. This gives the canonical form $N$ for $t-\lambda$

$$N=\begin{pmatrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&0&0\end{pmatrix}$$

which in turn gives the form for $t$.

$$N+\lambda I=\begin{pmatrix}\lambda&0&0&0\\1&\lambda&0&0\\0&1&\lambda&0\\0&0&0&\lambda\end{pmatrix}$$
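The nullities can be read off as ranks of powers. This sketch uses a hypothetical matrix (the eigenvalue $\lambda=4$ and the entries are made up for illustration) built to have the nullity sequence 2, 3, 4 of this example.

```python
import sympy as sp

lam = 4  # an arbitrary eigenvalue, for illustration only
T = sp.Matrix([[lam, 0, 0, 0],
               [1, lam, 0, 0],
               [0, 1, lam, 0],
               [0, 0, 0, lam]])

A = T - lam * sp.eye(4)
for p in (1, 2, 3):
    print(p, 4 - (A**p).rank())  # prints: 1 2, then 2 3, then 3 4
```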

An array that is all zeroes, except for some number $\lambda$ down the diagonal and blocks of subdiagonal ones, is a Jordan block. We have shown that Jordan block matrices are canonical representatives of the similarity classes of single-eigenvalue matrices.

Example 2.5

The $3\times 3$ matrices whose only eigenvalue is $1/2$ separate into three similarity classes. The three classes have these canonical representatives.

$$\begin{pmatrix}1/2&0&0\\0&1/2&0\\0&0&1/2\end{pmatrix}\qquad\begin{pmatrix}1/2&0&0\\1&1/2&0\\0&0&1/2\end{pmatrix}\qquad\begin{pmatrix}1/2&0&0\\1&1/2&0\\0&1&1/2\end{pmatrix}$$

In particular, this matrix

$$\begin{pmatrix}1/2&0&0\\0&1/2&0\\0&1&1/2\end{pmatrix}$$

belongs to the similarity class represented by the middle one, because we have adopted the convention of ordering the blocks of subdiagonal ones from the longest block to the shortest.

We will now finish the program of this chapter by extending this work to cover maps and matrices with multiple eigenvalues. The best possibility for general maps and matrices would be if we could break them into a part involving their first eigenvalue $\lambda_1$ (which we represent using its Jordan block), a part with $\lambda_2$, etc.

This ideal is in fact what happens. For any transformation $t:V\to V$, we shall break the space $V$ into the direct sum of a part on which $t-\lambda_1$ is nilpotent, plus a part on which $t-\lambda_2$ is nilpotent, etc. More precisely, we shall take three steps to get to this section's major theorem and the third step shows that $V=\mathscr{N}_\infty(t-\lambda_1)\oplus\cdots\oplus\mathscr{N}_\infty(t-\lambda_\ell)$ where $\lambda_1,\ldots,\lambda_\ell$ are $t$'s eigenvalues.

Suppose that $t:V\to V$ is a linear transformation. Note that the restriction[1] of $t$ to a subspace $M$ need not be a linear transformation on $M$ because there may be an $\vec{m}\in M$ with $t(\vec{m})\not\in M$. To ensure that the restriction of a transformation to a "part" of a space is a transformation on the part we need the next condition.

Definition 2.6

Let $t:V\to V$ be a transformation. A subspace $M$ is $t$ invariant if whenever $\vec{m}\in M$ then $t(\vec{m})\in M$ (shorter: $t(M)\subseteq M$).

Two examples are that the generalized null space $\mathscr{N}_\infty(t)$ and the generalized range space $\mathscr{R}_\infty(t)$ of any transformation $t$ are invariant. For the generalized null space, if $\vec{v}\in\mathscr{N}_\infty(t)$ then $t^n(\vec{v})=\vec{0}$ where $n$ is the dimension of the underlying space, and so $t(\vec{v})\in\mathscr{N}_\infty(t)$ because $t^n(t(\vec{v}))$ is zero also. For the generalized range space, if $\vec{v}\in\mathscr{R}_\infty(t)$ then $\vec{v}=t^n(\vec{w})$ for some $\vec{w}$, and then $t(\vec{v})=t^{n+1}(\vec{w})=t^n(t(\vec{w}))$ shows that $t(\vec{v})$ is also a member of $\mathscr{R}_\infty(t)$.

Thus the spaces $\mathscr{N}_\infty(t-\lambda_i)$ and $\mathscr{R}_\infty(t-\lambda_i)$ are $t-\lambda_i$ invariant. Observe also that $t-\lambda_i$ is nilpotent on $\mathscr{N}_\infty(t-\lambda_i)$ because, simply, if $\vec{v}$ has the property that some power of $t-\lambda_i$ maps it to zero— that is, if it is in the generalized null space— then some power of $t-\lambda_i$ maps it to zero. The generalized null space $\mathscr{N}_\infty(t-\lambda_i)$ is a "part" of the space on which the action of $t-\lambda_i$ is easy to understand.
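Computationally, the generalized null space is the null space of the $n$-th power, and invariance can be confirmed by checking that the image of each basis vector stays in the span. A sketch, on a hypothetical matrix chosen for illustration:

```python
import sympy as sp

# A hypothetical matrix with eigenvalues 0 and 2, for illustration.
T = sp.Matrix([[0, 1, 0],
               [0, 0, 0],
               [0, 0, 2]])
n = T.rows

# N_infinity(t) is the null space of T^n.
basis = (T**n).nullspace()
M = sp.Matrix.hstack(*basis)

# Invariance: T*b stays in the span of the basis, for each basis vector b.
for b in basis:
    print(sp.Matrix.hstack(M, T * b).rank() == M.rank())  # True, True
```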

The next result is the first of our three steps. It establishes that $t-\lambda_j$ leaves $t-\lambda_i$'s part unchanged.

Lemma 2.7

A subspace is $t$ invariant if and only if it is $t-\lambda$ invariant for any scalar $\lambda$. In particular, where $\lambda_i$ is an eigenvalue of a linear transformation $t$, then for any other eigenvalue $\lambda_j$, the spaces $\mathscr{N}_\infty(t-\lambda_i)$ and $\mathscr{R}_\infty(t-\lambda_i)$ are $t-\lambda_j$ invariant.

Proof

For the first sentence we check the two implications of the "if and only if" separately. One of them is easy: if the subspace is $t-\lambda$ invariant for any $\lambda$ then taking $\lambda=0$ shows that it is $t$ invariant. For the other implication suppose that the subspace is $t$ invariant, so that if $\vec{m}\in M$ then $t(\vec{m})\in M$, and let $\lambda$ be any scalar. The subspace $M$ is closed under linear combinations and so if $t(\vec{m})\in M$ then $t(\vec{m})-\lambda\vec{m}\in M$. Thus if $\vec{m}\in M$ then $(t-\lambda)\,(\vec{m})\in M$, as required.

The second sentence follows straight from the first. Because the two spaces are $t-\lambda_i$ invariant, they are therefore $t$ invariant. From this, applying the first sentence again, we conclude that they are also $t-\lambda_j$ invariant.

The second step of the three that we will take to prove this section's major result makes use of an additional property of $\mathscr{N}_\infty(t-\lambda_i)$ and $\mathscr{R}_\infty(t-\lambda_i)$, that they are complementary. Recall that if a space is the direct sum of two others $V=\mathscr{N}\oplus\mathscr{R}$ then any vector $\vec{v}$ in the space breaks into two parts $\vec{v}=\vec{n}+\vec{r}$ where $\vec{n}\in\mathscr{N}$ and $\vec{r}\in\mathscr{R}$, and recall also that if $B_{\mathscr{N}}$ and $B_{\mathscr{R}}$ are bases for $\mathscr{N}$ and $\mathscr{R}$ then the concatenation $B_{\mathscr{N}}\smallfrown B_{\mathscr{R}}$ is linearly independent (and so the two parts of $\vec{v}$ do not "overlap"). The next result says that for any subspaces $\mathscr{N}$ and $\mathscr{R}$ that are complementary as well as $t$ invariant, the action of $t$ on $\vec{v}$ breaks into the "non-overlapping" actions of $t$ on $\vec{n}$ and on $\vec{r}$.

Lemma 2.8

Let $t:V\to V$ be a transformation and let $\mathscr{N}$ and $\mathscr{R}$ be $t$ invariant complementary subspaces of $V$. Then $t$ can be represented by a matrix with blocks of square submatrices $T_1$ and $T_2$

$$\left(\begin{array}{c|c}T_1&Z_2\\\hline Z_1&T_2\end{array}\right)$$

where $Z_1$ and $Z_2$ are blocks of zeroes, $T_1$ is $\dim(\mathscr{N})\times\dim(\mathscr{N})$, and $T_2$ is $\dim(\mathscr{R})\times\dim(\mathscr{R})$.

Proof

Since the two subspaces are complementary, the concatenation of a basis $\langle\vec{\nu}_1,\ldots,\vec{\nu}_p\rangle$ for $\mathscr{N}$ and a basis $\langle\vec{\mu}_1,\ldots,\vec{\mu}_q\rangle$ for $\mathscr{R}$ makes a basis $B$ for $V$. We shall show that the matrix

$$\operatorname{Rep}_{B,B}(t)=\begin{pmatrix}\operatorname{Rep}_B(t(\vec{\nu}_1))&\cdots&\operatorname{Rep}_B(t(\vec{\nu}_p))&\operatorname{Rep}_B(t(\vec{\mu}_1))&\cdots&\operatorname{Rep}_B(t(\vec{\mu}_q))\end{pmatrix}$$

has the desired form.

Any vector $\vec{v}\in V$ is a member of $\mathscr{N}$ if and only if its final $q$ components are zeroes when it is represented with respect to $B$. As $\mathscr{N}$ is $t$ invariant, each of the vectors $\operatorname{Rep}_B(t(\vec{\nu}_1))$, ..., $\operatorname{Rep}_B(t(\vec{\nu}_p))$ has that form. Hence the lower left of $\operatorname{Rep}_{B,B}(t)$ is all zeroes.

The argument for the upper right is similar.

To see that $t$ has been decomposed into its action on the parts, observe that the restrictions of $t$ to the subspaces $\mathscr{N}$ and $\mathscr{R}$ are represented, with respect to the obvious bases, by the matrices $T_1$ and $T_2$. So, with subspaces that are invariant and complementary, we can split the problem of examining a linear transformation into two lower-dimensional subproblems. The next result illustrates this decomposition into blocks.
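To illustrate the lemma, this sketch takes a hypothetical matrix (eigenvalues 1 and 4, made up for illustration) and concatenates bases of the two generalized null spaces, which turn out to be invariant complements by Lemma 2.11 below. Representing the map with respect to that basis gives square blocks on the diagonal and zero blocks off it.

```python
import sympy as sp

# A hypothetical matrix with eigenvalues 1 (twice) and 4, for illustration.
T = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [2, 0, 4]])
n = T.rows

# Bases for the invariant complementary subspaces N_inf(t-1) and N_inf(t-4).
B1 = ((T - 1 * sp.eye(n))**n).nullspace()
B2 = ((T - 4 * sp.eye(n))**n).nullspace()
P = sp.Matrix.hstack(*(B1 + B2))

# The representation with respect to the concatenated basis is block
# diagonal: a 2x2 block for eigenvalue 1 and a 1x1 block for eigenvalue 4.
print(P.inv() * T * P)
```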

Lemma 2.9

If $T$ is a matrix with square submatrices $T_1$ and $T_2$

$$T=\left(\begin{array}{c|c}T_1&Z_2\\\hline Z_1&T_2\end{array}\right)$$

where the $Z$'s are blocks of zeroes, then $|T|=|T_1|\cdot|T_2|$.

Proof

Suppose that $T$ is $n\times n$, that $T_1$ is $p\times p$, and that $T_2$ is $q\times q$. In the permutation formula for the determinant

$$|T|=\sum_{\text{permutations }\phi}t_{1,\phi(1)}\,t_{2,\phi(2)}\cdots t_{n,\phi(n)}\operatorname{sgn}(\phi)$$

each term comes from a rearrangement of the column numbers $1,\ldots,n$ into a new order $\phi(1),\ldots,\phi(n)$. The upper right block $Z_2$ is all zeroes, so if a $\phi$ has at least one of $p+1,\ldots,n$ among its first $p$ column numbers $\phi(1),\ldots,\phi(p)$ then the term arising from $\phi$ is zero, e.g., if $\phi(1)=n$ then $t_{1,\phi(1)}\,t_{2,\phi(2)}\cdots t_{n,\phi(n)}=0\cdot t_{2,\phi(2)}\cdots t_{n,\phi(n)}=0$.

So the above formula reduces to a sum over all permutations with two halves: any significant $\phi$ is the composition of a $\phi_1$ that rearranges only $1,\ldots,p$ and a $\phi_2$ that rearranges only $p+1,\ldots,p+q$. Now, the distributive law (and the fact that the signum of a composition is the product of the signums) gives that this

$$|T_1|\cdot|T_2|=\Bigl(\sum_{\text{perms }\phi_1\text{ of }1,\ldots,p}t_{1,\phi_1(1)}\cdots t_{p,\phi_1(p)}\operatorname{sgn}(\phi_1)\Bigr)\cdot\Bigl(\sum_{\text{perms }\phi_2\text{ of }p+1,\ldots,p+q}t_{p+1,\phi_2(p+1)}\cdots t_{p+q,\phi_2(p+q)}\operatorname{sgn}(\phi_2)\Bigr)$$

equals $|T|=\sum_{\text{significant }\phi}t_{1,\phi(1)}\cdots t_{n,\phi(n)}\operatorname{sgn}(\phi)$.

Example 2.10

$$\begin{vmatrix}2&0&0&0\\1&2&0&0\\0&0&3&0\\0&0&0&3\end{vmatrix}=\begin{vmatrix}2&0\\1&2\end{vmatrix}\cdot\begin{vmatrix}3&0\\0&3\end{vmatrix}=36$$

From Lemma 2.9 we conclude that if two subspaces are complementary and $t$ invariant then $t$ is nonsingular if and only if its restrictions to both subspaces are nonsingular.
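Block determinants are easy to confirm numerically; this redoes the example's computation in SymPy.

```python
import sympy as sp

T = sp.Matrix([[2, 0, 0, 0],
               [1, 2, 0, 0],
               [0, 0, 3, 0],
               [0, 0, 0, 3]])

T1 = T[:2, :2]   # upper left block
T2 = T[2:, 2:]   # lower right block
print(T.det(), T1.det() * T2.det())  # 36 36
```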

Now for the promised third, final, step to the main result.

Lemma 2.11

If a linear transformation $t:V\to V$ has the characteristic polynomial $c(x)=(x-\lambda_1)^{p_1}(x-\lambda_2)^{p_2}\cdots(x-\lambda_\ell)^{p_\ell}$ then (1) $V=\mathscr{N}_\infty(t-\lambda_1)\oplus\cdots\oplus\mathscr{N}_\infty(t-\lambda_\ell)$ and (2) $\dim(\mathscr{N}_\infty(t-\lambda_i))=p_i$.

Proof

Because $\dim(V)$ is the degree $p_1+p_2+\cdots+p_\ell$ of the characteristic polynomial, to establish statement (1) we need only show that statement (2) holds and that $\mathscr{N}_\infty(t-\lambda_i)\cap\mathscr{N}_\infty(t-\lambda_j)$ is trivial whenever $i\neq j$.

For the latter, by Lemma 2.7, both $\mathscr{N}_\infty(t-\lambda_i)$ and $\mathscr{N}_\infty(t-\lambda_j)$ are $t$ invariant. Notice that an intersection of $t$ invariant subspaces is $t$ invariant and so the restriction of $t$ to $\mathscr{N}_\infty(t-\lambda_i)\cap\mathscr{N}_\infty(t-\lambda_j)$ is a linear transformation. But both $t-\lambda_i$ and $t-\lambda_j$ are nilpotent on this subspace and so if $t$ has any eigenvalues on the intersection then its "only" eigenvalue is both $\lambda_i$ and $\lambda_j$. That cannot be, so this restriction has no eigenvalues: the intersection is trivial (Lemma V.II.3.10 shows that the only transformation without any eigenvalues is on the trivial space).

To prove statement (2), fix the index $i$. Decompose $V$ as

$$V=\mathscr{N}_\infty(t-\lambda_i)\oplus\mathscr{R}_\infty(t-\lambda_i)$$

and apply Lemma 2.8.

$$T=\left(\begin{array}{c|c}T_1&Z_2\\\hline Z_1&T_2\end{array}\right)$$

By Lemma 2.9, $|T-xI|=|T_1-xI|\cdot|T_2-xI|$. By the uniqueness clause of the Fundamental Theorem of Arithmetic (in its version for polynomial factorizations), the determinants of the blocks have the same linear factors as the characteristic polynomial, $|T_1-xI|=(x-\lambda_1)^{q_1}\cdots(x-\lambda_\ell)^{q_\ell}$ and $|T_2-xI|=(x-\lambda_1)^{r_1}\cdots(x-\lambda_\ell)^{r_\ell}$, and the sum of the powers of these factors is the power of the factor in the characteristic polynomial: $q_1+r_1=p_1$, ..., $q_\ell+r_\ell=p_\ell$. Statement (2) will be proved if we will show that $q_i=p_i$ and that $q_j=0$ for all $j\neq i$, because then the degree of the polynomial $|T_1-xI|$— which equals the dimension of the generalized null space $\mathscr{N}_\infty(t-\lambda_i)$— is as required.

For that, first, as the restriction of $t-\lambda_i$ to $\mathscr{N}_\infty(t-\lambda_i)$ is nilpotent on that space, the only eigenvalue of $t$ on it is $\lambda_i$. Thus the characteristic equation of $t$ on $\mathscr{N}_\infty(t-\lambda_i)$ is $|T_1-xI|=(x-\lambda_i)^{q_i}$. And thus $q_j=0$ for all $j\neq i$.

Now consider the restriction of $t$ to $\mathscr{R}_\infty(t-\lambda_i)$. By Note V.III.2.2, the map $t-\lambda_i$ is nonsingular on $\mathscr{R}_\infty(t-\lambda_i)$ and so $\lambda_i$ is not an eigenvalue of $t$ on that subspace. Therefore, $x-\lambda_i$ is not a factor of $|T_2-xI|$, and so $q_i=p_i$.

Our major result just translates those steps into matrix terms.

Theorem 2.12

Any square matrix is similar to one in Jordan form

$$\begin{pmatrix}J_{\lambda_1}&&&\\&J_{\lambda_2}&&\\&&\ddots&\\&&&J_{\lambda_\ell}\end{pmatrix}$$

where the off-diagonal blocks are zeroes and each $J_{\lambda_i}$ is the Jordan block associated with the eigenvalue $\lambda_i$ of the original matrix (that is, $J_{\lambda_i}$ is all zeroes except for $\lambda_i$'s down the diagonal and some subdiagonal ones).

Proof

Given an $n\times n$ matrix $T$, consider the linear map $t:\mathbb{C}^n\to\mathbb{C}^n$ that it represents with respect to the standard bases. Use the prior lemma to write $\mathbb{C}^n=\mathscr{N}_\infty(t-\lambda_1)\oplus\cdots\oplus\mathscr{N}_\infty(t-\lambda_\ell)$ where $\lambda_1,\ldots,\lambda_\ell$ are the eigenvalues of $t$. Because each $\mathscr{N}_\infty(t-\lambda_i)$ is $t$ invariant, Lemma 2.8 and the prior lemma show that $t$ is represented by a matrix that is all zeroes except for square blocks along the diagonal. To make those blocks into Jordan blocks, pick each $B_{\lambda_i}$ to be a string basis for the action of $t-\lambda_i$ on $\mathscr{N}_\infty(t-\lambda_i)$.

Jordan form is a canonical form for similarity classes of square matrices, provided that we make it unique by arranging the Jordan blocks from least eigenvalue to greatest and then arranging the subdiagonal blocks inside each Jordan block from longest to shortest.
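SymPy can produce a Jordan form directly. One caveat: SymPy, like many texts, puts the ones on the superdiagonal, which is the transpose of this book's subdiagonal convention (the two forms are similar, via reversing the order of each string basis). Here it is on Example 2.3's matrix.

```python
import sympy as sp

T = sp.Matrix([[2, -1],
               [1, 4]])

P, J = T.jordan_form()  # T == P * J * P**-1
print(J)                                 # Matrix([[3, 1], [0, 3]])
print(sp.simplify(P * J * P.inv() - T))  # the zero matrix
```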

Example 2.13

This matrix has the characteristic polynomial $(x-2)^2(x-6)$.

$$T=\begin{pmatrix}2&0&1\\0&6&2\\0&0&2\end{pmatrix}$$

We will handle the eigenvalues $2$ and $6$ separately.

Computation of the powers, and the null spaces and nullities, of $T-2I$ is routine. (Recall from Example 2.3 the convention of taking $T$ to represent a transformation, here $t:\mathbb{C}^3\to\mathbb{C}^3$, with respect to the standard basis.)

$$\begin{array}{c|c|c|c}
p & (T-2I)^p & \mathscr{N}((t-2)^p) & \text{nullity}\\ \hline
1 & \begin{pmatrix}0&0&1\\0&4&2\\0&0&0\end{pmatrix} & \{\begin{pmatrix}x\\0\\0\end{pmatrix}\mid x\in\mathbb{C}\} & 1\\
2 & \begin{pmatrix}0&0&0\\0&16&8\\0&0&0\end{pmatrix} & \{\begin{pmatrix}x\\-z/2\\z\end{pmatrix}\mid x,z\in\mathbb{C}\} & 2\\
3 & \begin{pmatrix}0&0&0\\0&64&32\\0&0&0\end{pmatrix} & \text{--same--} & 2
\end{array}$$

So the generalized null space $\mathscr{N}_\infty(t-2)$ has dimension two. We've noted that the restriction of $t-2$ is nilpotent on this subspace. From the way that the nullities grow we know that the action of $t-2$ on a string basis is $\vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0}$. Thus the restriction can be represented in the canonical form

$$\operatorname{Rep}_{B_2,B_2}(t-2)=N=\begin{pmatrix}0&0\\1&0\end{pmatrix}\qquad B_2=\langle\begin{pmatrix}0\\-1\\2\end{pmatrix},\begin{pmatrix}2\\0\\0\end{pmatrix}\rangle$$

where many choices of basis are possible. Consequently, the action of the restriction of $t$ to $\mathscr{N}_\infty(t-2)$ is represented by this matrix.

$$N+2I=\operatorname{Rep}_{B_2,B_2}(t)=\begin{pmatrix}2&0\\1&2\end{pmatrix}$$

The second eigenvalue's computations are easier. Because the power of $x-6$ in the characteristic polynomial is one, the restriction of $t-6$ to $\mathscr{N}_\infty(t-6)$ must be nilpotent of index one. Its action on a string basis must be $\vec{\beta}_3\mapsto\vec{0}$ and since it is the zero map, its canonical form $N$ is the $1\times 1$ zero matrix. Consequently, the canonical form for the action of $t$ on $\mathscr{N}_\infty(t-6)$ is the $1\times 1$ matrix with the single entry $6$. For the basis we can use any nonzero vector from the generalized null space.

$$B_6=\langle\begin{pmatrix}0\\1\\0\end{pmatrix}\rangle$$

Taken together, these two give that the Jordan form of $T$ is

$$\operatorname{Rep}_{B,B}(t)=\begin{pmatrix}2&0&0\\1&2&0\\0&0&6\end{pmatrix}$$

where $B=B_2\smallfrown B_6$ is the concatenation of $B_2$ and $B_6$.
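This SymPy sketch checks the example: the nullities of the powers of $T-2I$ grow 1, 2 and then stabilize, and representing $t$ with respect to the concatenated string basis $B$ produces the Jordan form.

```python
import sympy as sp

T = sp.Matrix([[2, 0, 1],
               [0, 6, 2],
               [0, 0, 2]])

A = T - 2 * sp.eye(3)
print([3 - (A**p).rank() for p in (1, 2, 3)])  # [1, 2, 2]

# Columns: the string basis B_2 = <(0,-1,2), (2,0,0)> followed by B_6.
P = sp.Matrix([[0, 2, 0],
               [-1, 0, 1],
               [2, 0, 0]])
print(P.inv() * T * P)  # Matrix([[2, 0, 0], [1, 2, 0], [0, 0, 6]])
```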

Example 2.14

Contrast the prior example with

$$T=\begin{pmatrix}2&2&1\\0&6&2\\0&0&2\end{pmatrix}$$

which has the same characteristic polynomial $(x-2)^2(x-6)$.

While the characteristic polynomial is the same,

$$\begin{array}{c|c|c|c}
p & (T-2I)^p & \mathscr{N}((t-2)^p) & \text{nullity}\\ \hline
1 & \begin{pmatrix}0&2&1\\0&4&2\\0&0&0\end{pmatrix} & \{\begin{pmatrix}x\\y\\-2y\end{pmatrix}\mid x,y\in\mathbb{C}\} & 2\\
2 & \begin{pmatrix}0&8&4\\0&16&8\\0&0&0\end{pmatrix} & \text{--same--} & 2
\end{array}$$

here the action of $t-2$ is stable after only one application— the restriction of $t-2$ to $\mathscr{N}_\infty(t-2)$ is nilpotent of index only one. (So the contrast with the prior example is that while the characteristic polynomial tells us to look at the action of $t-2$ on its generalized null space, the characteristic polynomial does not completely describe that action; we must do some computations to find that, in this example, the minimal polynomial is $(x-2)(x-6)$.) The restriction of $t-2$ to the generalized null space acts on a string basis as $\vec{\beta}_1\mapsto\vec{0}$ and $\vec{\beta}_2\mapsto\vec{0}$, and we get this Jordan block associated with the eigenvalue $2$.

$$J_2=\begin{pmatrix}2&0\\0&2\end{pmatrix}$$

For the other eigenvalue, the arguments for the second eigenvalue of the prior example apply again. The restriction of $t-6$ to $\mathscr{N}_\infty(t-6)$ is nilpotent of index one (it can't be of index less than one, and since $x-6$ is a factor of the characteristic polynomial to the power one it can't be of index more than one either). Thus $t-6$'s canonical form $N$ is the $1\times 1$ zero matrix, and the associated Jordan block $J_6$ is the $1\times 1$ matrix with entry $6$.

Therefore, $T$ is diagonalizable.

$$\operatorname{Rep}_{B,B}(t)=\begin{pmatrix}2&0&0\\0&2&0\\0&0&6\end{pmatrix}\qquad B=B_2\smallfrown B_6=\langle\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\-2\end{pmatrix},\begin{pmatrix}1\\2\\0\end{pmatrix}\rangle$$

(Checking that the third vector in $B$ is in the null space of $t-6$ is routine.)
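Again the computations can be checked mechanically: the characteristic polynomial matches the prior example's, but $(T-2I)(T-6I)$ is already the zero matrix, so the minimal polynomial is $(x-2)(x-6)$ and $T$ is diagonalizable.

```python
import sympy as sp

T = sp.Matrix([[2, 2, 1],
               [0, 6, 2],
               [0, 0, 2]])
x = sp.symbols('x')

print(sp.factor(T.charpoly(x).as_expr()))     # factors as (x - 2)**2 * (x - 6)
print((T - 2*sp.eye(3)) * (T - 6*sp.eye(3)))  # the zero matrix
```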

Example 2.15

A bit of computing with

$$T=\begin{pmatrix}-1&4&0&0&0\\0&3&0&0&0\\0&-4&-1&0&0\\3&-9&-4&2&-1\\1&5&4&1&4\end{pmatrix}$$

shows that its characteristic polynomial is $(x-3)^3(x+1)^2$. This table

$$\begin{array}{c|ccc}
\text{power }p & 1 & 2 & 3\\ \hline
\text{nullity of }(t-3)^p & 2 & 3 & 3
\end{array}$$

shows that the restriction of $t-3$ to $\mathscr{N}_\infty(t-3)$ acts on a string basis via the two strings $\vec{\beta}_1\mapsto\vec{\beta}_2\mapsto\vec{0}$ and $\vec{\beta}_3\mapsto\vec{0}$.

A similar calculation for the other eigenvalue

$$\begin{array}{c|cc}
\text{power }p & 1 & 2\\ \hline
\text{nullity of }(t+1)^p & 2 & 2
\end{array}$$

shows that the restriction of $t+1$ to its generalized null space $\mathscr{N}_\infty(t+1)$ acts on a string basis via the two separate strings $\vec{\beta}_4\mapsto\vec{0}$ and $\vec{\beta}_5\mapsto\vec{0}$.

Therefore $T$ is similar to this Jordan form matrix.

$$\begin{pmatrix}-1&0&0&0&0\\0&-1&0&0&0\\0&0&3&0&0\\0&0&1&3&0\\0&0&0&0&3\end{pmatrix}$$
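The string data can be confirmed by computing the nullities of powers for each eigenvalue.

```python
import sympy as sp

T = sp.Matrix([[-1, 4, 0, 0, 0],
               [0, 3, 0, 0, 0],
               [0, -4, -1, 0, 0],
               [3, -9, -4, 2, -1],
               [1, 5, 4, 1, 4]])
x = sp.symbols('x')
print(sp.factor(T.charpoly(x).as_expr()))  # (x - 3)**3 * (x + 1)**2

for lam in (3, -1):
    A = T - lam * sp.eye(5)
    print(lam, [5 - (A**p).rank() for p in (1, 2, 3)])
    # 3: nullities [2, 3, 3] -> strings of lengths 2 and 1
    # -1: nullities [2, 2, 2] -> two strings of length 1
```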

We close with the statement that the subjects considered earlier in this chapter are indeed, in this sense, exhaustive.

Corollary 2.16

Every square matrix is similar to the sum of a diagonal matrix and a nilpotent matrix.
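In terms of the Jordan form $T=PJP^{-1}$, the splitting is explicit: take $D$ to be the diagonal of $J$ and $N=J-D$ the off-diagonal part, which is nilpotent. A sketch, on Example 2.13's matrix:

```python
import sympy as sp

T = sp.Matrix([[2, 0, 1],
               [0, 6, 2],
               [0, 0, 2]])

P, J = T.jordan_form()
D = sp.diag(*[J[i, i] for i in range(J.rows)])  # diagonal part
N = J - D                      # nilpotent part (SymPy's ones sit above the diagonal)
print(N**J.rows)                               # the zero matrix
print(sp.simplify(P * (D + N) * P.inv() - T))  # zero: T is similar to D + N
```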

Exercises

Problem 1

Do the check for Example 2.3.

Problem 2

Each matrix is in Jordan form. State its characteristic polynomial and its minimal polynomial.

This exercise is recommended for all readers.
Problem 3

Find the Jordan form from the given data.

  1. The matrix is $5\times 5$ with the single eigenvalue $\lambda$. The nullities of the powers are: $T-\lambda I$ has nullity two, $(T-\lambda I)^2$ has nullity three, $(T-\lambda I)^3$ has nullity four, and $(T-\lambda I)^4$ has nullity five.
  2. The matrix is $5\times 5$ with two eigenvalues $\lambda_1$ and $\lambda_2$. For the eigenvalue $\lambda_1$ the nullities are: $T-\lambda_1 I$ has nullity two, and $(T-\lambda_1 I)^2$ has nullity four. For the eigenvalue $\lambda_2$ the nullities are: $T-\lambda_2 I$ has nullity one.
Problem 4

Find the change of basis matrices for each example.

  1. Example 2.13
  2. Example 2.14
  3. Example 2.15
This exercise is recommended for all readers.
Problem 5

Find the Jordan form and a Jordan basis for each matrix.

This exercise is recommended for all readers.
Problem 6

Find all possible Jordan forms of a transformation with characteristic polynomial $(x-1)^2(x+2)^2$.

Problem 7

Find all possible Jordan forms of a transformation with characteristic polynomial $(x-1)^3(x+2)$.

This exercise is recommended for all readers.
Problem 8

Find all possible Jordan forms of a transformation with characteristic polynomial $(x-2)^3(x+1)$ and minimal polynomial $(x-2)^2(x+1)$.

Problem 9

Find all possible Jordan forms of a transformation with characteristic polynomial $(x-2)^4(x+1)$ and minimal polynomial $(x-2)^2(x+1)$.

This exercise is recommended for all readers.
Problem 10
Diagonalize these.
This exercise is recommended for all readers.
Problem 11

Find the Jordan matrix representing the differentiation operator on $\mathcal{P}_3$.

This exercise is recommended for all readers.
Problem 12

Decide if these two are similar.

Problem 13

Find the Jordan form of this matrix.

Also give a Jordan basis.

Problem 14

How many similarity classes are there for $3\times 3$ matrices whose only eigenvalues are $-3$ and $4$?

This exercise is recommended for all readers.
Problem 15

Prove that a matrix is diagonalizable if and only if its minimal polynomial is a product of distinct linear factors.

Problem 16

Give an example of a linear transformation on a vector space that has no non-trivial invariant subspaces.

Problem 17

Show that a subspace is $t$ invariant if and only if it is $t-\lambda$ invariant for all scalars $\lambda$.

Problem 18

Prove or disprove: two $n\times n$ matrices are similar if and only if they have the same characteristic and minimal polynomials.

Problem 19

The trace of a square matrix is the sum of its diagonal entries.

  1. Find the formula for the characteristic polynomial of a $2\times 2$ matrix.
  2. Show that trace is invariant under similarity, and so we can sensibly speak of the "trace of a map". (Hint: see the prior item.)
  3. Is trace invariant under matrix equivalence?
  4. Show that the trace of a map is the sum of its eigenvalues (counting multiplicities).
  5. Show that the trace of a nilpotent map is zero. Does the converse hold?
Problem 20

To use Definition 2.6 to check whether a subspace is $t$ invariant, we seemingly have to check all of the infinitely many vectors in a (nontrivial) subspace to see if they satisfy the condition. Prove that a subspace is $t$ invariant if and only if its subbasis has the property that for all of its elements $\vec{\beta}$, the image $t(\vec{\beta})$ is in the subspace.

This exercise is recommended for all readers.
Problem 21

Is invariance preserved under intersection? Under union? Complementation? Sums of subspaces?

Problem 22

Give a way to order the Jordan blocks if some of the eigenvalues are complex numbers. That is, suggest a reasonable ordering for the complex numbers.

Problem 23

Let $\mathcal{P}_n$ be the vector space over the reals of polynomials of degree at most $n$. Show that if $k\leq n$ then $\mathcal{P}_k$ is an invariant subspace of $\mathcal{P}_n$ under the differentiation operator. In $\mathcal{P}_n$, does any of $\mathcal{P}_0$, ..., $\mathcal{P}_{n-1}$ have an invariant complement?

Problem 24

In $\mathcal{P}_n$, the vector space (over the reals) of polynomials of degree at most $n$,

$$\mathcal{E}=\{p(x)\in\mathcal{P}_n\mid p(-x)=p(x)\}$$

and

$$\mathcal{O}=\{p(x)\in\mathcal{P}_n\mid p(-x)=-p(x)\}$$

are the even and the odd polynomials; $p(x)=x^2$ is even while $p(x)=x^3$ is odd. Show that they are subspaces. Are they complementary? Are they invariant under the differentiation transformation?

Problem 25

Lemma 2.8 says that if $\mathscr{N}$ and $\mathscr{R}$ are $t$ invariant complements then $t$ has a representation in the given block form (with respect to the same ending as starting basis, of course). Does the implication reverse?

Problem 26

A matrix $S$ is the square root of another $T$ if $S^2=T$. Show that any nonsingular matrix has a square root.

Solutions

Footnotes

  1. More information on restrictions of functions is in the appendix.