Linear Algebra/Print version/Part 2
Chapter IV - Determinants
In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form $T\vec{x}=\vec{b}$ where $T$ is a square matrix. We noted a distinction between two classes of $T$'s. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular $T$ is associated with a unique solution in any system, such as the homogeneous system $T\vec{x}=\vec{0}$, then $T$ is associated with a unique solution for every $\vec{b}$. We call such a matrix of coefficients "nonsingular". The other kind of $T$, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call "singular".
Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an $n \times n$ matrix $T$ is equivalent to each of these:
- a system $T\vec{x}=\vec{b}$ has a solution, and that solution is unique;
- Gauss-Jordan reduction of $T$ yields an identity matrix;
- the rows of $T$ form a linearly independent set;
- the columns of $T$ form a basis for $\mathbb{R}^n$;
- any map that $T$ represents is an isomorphism;
- an inverse matrix $T^{-1}$ exists.
So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say "matrix" in place of "square matrix".)
More precisely, we will develop infinitely many formulas, one for $1 \times 1$ matrices, one for $2 \times 2$ matrices, etc. Of course, these formulas are related — that is, we will develop a family of formulas, a scheme that describes the formula for each size.
Section I - Definition
For $1 \times 1$ matrices, determining nonsingularity is trivial.
$(a)$ is nonsingular iff $a \neq 0$
The $2 \times 2$ formula came out in the course of developing the inverse.
$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is nonsingular iff $ad - bc \neq 0$
The $3 \times 3$ formula can be produced similarly (see Problem 9).
$\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ is nonsingular iff $aei + bfg + cdh - hfa - idb - gec \neq 0$
With these cases in mind, we posit a family of formulas: $a$, $ad-bc$, etc. For each $n$ the formula gives rise to a determinant function $\det_{n \times n} : \mathcal{M}_{n \times n} \to \mathbb{R}$ such that an $n \times n$ matrix $T$ is nonsingular if and only if $\det(T) \neq 0$. (We usually omit the subscript $n \times n$ because if $T$ is $n \times n$ then "$\det(T)$" could only mean "$\det_{n \times n}(T)$".)
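For a quick numerical check (an illustration with entries of our own choosing, not one of the book's examples), the $2 \times 2$ test applied to the matrix with rows $(2\ \ 1)$ and $(4\ \ 3)$ gives $2 \cdot 3 - 1 \cdot 4 = 2 \neq 0$, so that matrix is nonsingular, while the rows $(2\ \ 1)$ and $(4\ \ 2)$ give $2 \cdot 2 - 1 \cdot 4 = 0$, so that matrix is singular (its second row is twice its first).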
1 - Exploration
This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.
The three cases above don't show an evident pattern to use for the general formula. We may spot that the $1 \times 1$ term $a$ has one letter, that the $2 \times 2$ terms $ad$ and $bc$ have two letters, and that the $3 \times 3$ terms $aei$, $bfg$, etc., have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix, e.g., the letters in the $cdh$ term
come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.
A good problem solving strategy is to see what properties a solution must have and then search for something with those properties. So we shall start by asking what properties we require of the formulas.
At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check whether the diagonal of the resulting echelon form matrix has any zeroes (that is, to check whether the product down the diagonal is zero). So, we may expect that the proof that a formula determines singularity will involve applying Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the determinant formula gives zero. This suggests our initial plan: we will look for a family of functions with the property of being unaffected by row operations and with the property that a determinant of an echelon form matrix is the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where $T \rightarrow \hat{T}$ is the Gaussian reduction, the determinant of $T$ equals the determinant of $\hat{T}$ (because the determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if the matrix is singular". In the rest of this subsection we will test this plan on the $2 \times 2$ and $3 \times 3$ determinants that we know. We will end up modifying the "unaffected by row operations" part, but not by much.
The first step in checking the plan is to test whether the $2 \times 2$ and $3 \times 3$ formulas are unaffected by the row operation of pivoting: if a matrix changes to another by a pivot operation,
is the determinant unchanged? This check of the $2 \times 2$ determinant after the operation
shows that it is indeed unchanged, and the other $2 \times 2$ pivot gives the same result. The $3 \times 3$ pivot leaves the determinant unchanged
as do the other pivot operations.
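We can also verify the $2 \times 2$ pivot invariance symbolically (a verification supplied here; the entries $a$, $b$, $c$, $d$ and the scalar $k$ are generic):
\[ \begin{vmatrix} a & b \\ ka + c & kb + d \end{vmatrix} = a(kb + d) - b(ka + c) = ad - bc = \begin{vmatrix} a & b \\ c & d \end{vmatrix} \]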
So there seems to be promise in the plan. Of course, perhaps the determinant formula is affected by pivoting. We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.
The next step is to compare the determinant before and after the operation
of swapping two rows. The $2 \times 2$ row swap
does not yield $ad - bc$. This swap inside of a $3 \times 3$ matrix
also does not give the same determinant as before the swap — again there is a sign change. Trying a different swap
also gives a change of sign.
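Symbolically (again a supplied verification with generic entries), the $2 \times 2$ swap computation is
\[ \begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - da = -(ad - bc) = -\begin{vmatrix} a & b \\ c & d \end{vmatrix} \]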
Thus, row swaps appear to change the sign of a determinant. This modifies our plan, but does not wreck it. We intend to decide nonsingularity by considering only whether the determinant is zero, not by considering its sign. Therefore, instead of expecting determinants to be entirely unaffected by row operations, we will look for them to change sign on a swap.
To finish, we compare the determinant before and after the operation
of multiplying a row by a scalar $k$. One of the $2 \times 2$ cases is
and the other case has the same result. Here is one $3 \times 3$ case
and the other two are similar. These lead us to suspect that multiplying a row by $k$ multiplies the determinant by $k$. This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged and we are not focusing on the determinant's sign or magnitude.
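Symbolically, the $2 \times 2$ rescaling case (a supplied verification with generic entries) is
\[ \begin{vmatrix} ka & kb \\ c & d \end{vmatrix} = kad - kbc = k(ad - bc) = k \begin{vmatrix} a & b \\ c & d \end{vmatrix} \]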
In summary, to develop the scheme for the formulas to compute determinants, we look for determinant functions that remain unchanged under the pivoting operation, that change sign on a row swap, and that rescale on the rescaling of a row. In the next two subsections we will find that for each $n$ such a function exists and is unique.
For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance, in this equality
the scalar isn't factored out of all three rows, only out of the top row. The determinant acts on each row of $T$ independently of the other rows. When we want to use this property of determinants, we shall write the determinant as a function of the rows: "$\det(\vec{\rho}_1, \vec{\rho}_2, \dots, \vec{\rho}_n)$", instead of as "$\det(T)$" or "$\det(t_{1,1}, t_{1,2}, \dots, t_{n,n})$". The definition of the determinant that starts the next subsection is written in this way.
Exercises
- This exercise is recommended for all readers.
- Problem 1
Evaluate the determinant of each.
- Problem 2
Evaluate the determinant of each.
- This exercise is recommended for all readers.
- Problem 3
Verify that the determinant of an upper-triangular matrix is the product down the diagonal.
Do lower-triangular matrices work the same way?
- This exercise is recommended for all readers.
- Problem 4
Use the determinant to decide if each is singular or nonsingular.
- Problem 5
Singular or nonsingular? Use the determinant to decide.
- This exercise is recommended for all readers.
- Problem 6
Each pair of matrices differ by one row operation. Use this operation to compare with .
- Problem 7
Show this.
- This exercise is recommended for all readers.
- Problem 8
Which real numbers make this matrix singular?
- Problem 9
Do the Gaussian reduction to check the formula for $3 \times 3$ matrices stated in the preamble to this section.
is nonsingular iff
- Problem 10
Show that the equation of a line in thru and is expressed by this determinant.
- This exercise is recommended for all readers.
- Problem 11
Many people know this mnemonic for the determinant of a $3 \times 3$ matrix: first repeat the first two columns and then sum the products on the forward diagonals and subtract the products on the backward diagonals. That is, first write
and then calculate this.
- Check that this agrees with the formula given in the preamble to this section.
- Does it extend to other-sized determinants?
- Problem 12
The cross product of the vectors
is the vector computed as this determinant.
Note that the first row is composed of vectors, the vectors from the standard basis for . Show that the cross product of two vectors is perpendicular to each vector.
- Problem 13
Prove that each statement holds for $2 \times 2$ matrices.
- The determinant of a product is the product of the determinants .
- If is invertible then the determinant of the inverse is the inverse of the determinant .
- Problem 14
Matrices $H$ and $G$ are similar if there is a nonsingular matrix $P$ such that $G = P^{-1}HP$. (This definition is in Chapter Five.) Show that similar matrices have the same determinant.
- This exercise is recommended for all readers.
- Problem 15
Prove that for $2 \times 2$ matrices, the determinant of a matrix equals the determinant of its transpose. Does that also hold for $3 \times 3$ matrices?
- This exercise is recommended for all readers.
- Problem 16
Is the determinant function linear — is ?
- Problem 17
Show that if $A$ is $3 \times 3$ then $\det(c \cdot A) = c^3 \cdot \det(A)$ for any scalar $c$.
- Problem 18
Which real numbers make
singular? Explain geometrically.
- ? Problem 19
If a third order determinant has elements , , ..., , what is the maximum value it may have? (Haggett & Saunders 1955)
2 - Properties of Determinants
As described above, we want a formula to determine whether an $n \times n$ matrix is nonsingular. We will not begin by stating such a formula. Instead, we will begin by considering the function that such a formula calculates. We will define the function by its properties, then prove that the function with these properties exists and is unique and also describe formulas that compute this function. (Because we will show that the function exists and is unique, from the start we will say "$\det(T)$" instead of "if there is a determinant function then $\det(T)$" and "the determinant" instead of "any determinant".)
- Definition 2.1
An $n \times n$ determinant is a function $\det: \mathcal{M}_{n \times n} \to \mathbb{R}$ such that
- $\det(\vec{\rho}_1, \dots, k \cdot \vec{\rho}_i + \vec{\rho}_j, \dots, \vec{\rho}_n) = \det(\vec{\rho}_1, \dots, \vec{\rho}_j, \dots, \vec{\rho}_n)$ for $i \neq j$
- $\det(\vec{\rho}_1, \dots, \vec{\rho}_j, \dots, \vec{\rho}_i, \dots, \vec{\rho}_n) = -\det(\vec{\rho}_1, \dots, \vec{\rho}_i, \dots, \vec{\rho}_j, \dots, \vec{\rho}_n)$ for $i \neq j$
- $\det(\vec{\rho}_1, \dots, k\vec{\rho}_i, \dots, \vec{\rho}_n) = k \cdot \det(\vec{\rho}_1, \dots, \vec{\rho}_i, \dots, \vec{\rho}_n)$ for $k \neq 0$
- $\det(I) = 1$ where $I$ is an identity matrix
(the $\vec{\rho}$'s are the rows of the matrix). We often write $|T|$ for $\det(T)$.
- Remark 2.2
Property (2) is redundant since a sequence of three pivots followed by a rescaling by $-1$ (traced below)
swaps rows $i$ and $j$. It is listed only for convenience.
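For instance (a trace supplied here; the two rows $\vec{x}$ and $\vec{y}$ are generic), three pivots and one rescaling accomplish a swap:
\[ \begin{pmatrix} \vec{x} \\ \vec{y} \end{pmatrix}
 \xrightarrow{\rho_1 + \rho_2} \begin{pmatrix} \vec{x} \\ \vec{x} + \vec{y} \end{pmatrix}
 \xrightarrow{-\rho_2 + \rho_1} \begin{pmatrix} -\vec{y} \\ \vec{x} + \vec{y} \end{pmatrix}
 \xrightarrow{\rho_1 + \rho_2} \begin{pmatrix} -\vec{y} \\ \vec{x} \end{pmatrix}
 \xrightarrow{-\rho_1} \begin{pmatrix} \vec{y} \\ \vec{x} \end{pmatrix} \]
By property (1) the three pivots leave the determinant unchanged, and by property (3) the final rescaling by $-1$ multiplies it by $-1$, which is exactly the sign change that property (2) asserts.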
The first result shows that a function satisfying these conditions gives a criterion for nonsingularity. (Its last sentence is that, in the context of the first three conditions, (4) is equivalent to the condition that the determinant of an echelon form matrix is the product down the diagonal.)
- Lemma 2.3
A matrix with two identical rows has a determinant of zero. A matrix with a zero row has a determinant of zero. A matrix is nonsingular if and only if its determinant is nonzero. The determinant of an echelon form matrix is the product down its diagonal.
- Proof
To verify the first sentence, swap the two equal rows. The sign of the determinant changes, but the matrix is unchanged and so its determinant is unchanged. Thus the determinant is zero.
For the second sentence, we multiply a zero row by $-1$ and apply property (3). Multiplying a zero row by a constant leaves the matrix unchanged, so property (3) implies that $\det(T) = -\det(T)$. The only way this can be is if $\det(T) = 0$.
For the third sentence, where $T \rightarrow \hat{T}$ is the Gauss-Jordan reduction, by the definition the determinant of $T$ is zero if and only if the determinant of $\hat{T}$ is zero (although they could differ in sign or magnitude). A nonsingular $T$ Gauss-Jordan reduces to an identity matrix and so has a nonzero determinant. A singular $T$ reduces to a $\hat{T}$ with a zero row; by the second sentence of this lemma its determinant is zero.
Finally, for the fourth sentence, if an echelon form matrix is singular then it has a zero on its diagonal, that is, the product down its diagonal is zero. The third sentence says that if a matrix is singular then its determinant is zero. So if the echelon form matrix is singular then its determinant equals the product down its diagonal.
If an echelon form matrix is nonsingular then none of its diagonal entries is zero so we can use property (3) of the definition to factor them out (again, the vertical bars indicate the determinant operation).
Next, the Jordan half of Gauss-Jordan elimination, using property (1) of the definition, leaves the identity matrix.
Therefore, if an echelon form matrix is nonsingular then its determinant is the product down its diagonal.
That result gives us a way to compute the value of a determinant function on a matrix. Do Gaussian reduction, keeping track of any changes of sign caused by row swaps and any scalars that are factored out, and then finish by multiplying down the diagonal of the echelon form result. This procedure takes the same time as Gauss' method and so is sufficiently fast to be practical on the size matrices that we see in this book.
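To make that procedure concrete, here is a short sketch in Python (ours, not part of the original text, whose only code is the FORTRAN fragment in the Speed topic): reduce to echelon form, flipping the sign on each row swap, then multiply down the diagonal.

def det_by_gauss(matrix):
    """Determinant by Gauss' method: reduce to echelon form, flip the sign
    on each row swap, then multiply down the diagonal."""
    a = [row[:] for row in matrix]          # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # find a row at or below the diagonal with a nonzero entry in this column
        pivot_row = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot_row is None:
            return 0                        # the matrix is singular
        if pivot_row != col:
            a[col], a[pivot_row] = a[pivot_row], a[col]
            sign = -sign                    # a row swap changes the sign
        for r in range(col + 1, n):         # pivot: subtract a multiple of the pivot row
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    product = 1
    for i in range(n):
        product *= a[i][i]
    return sign * product

# A check on a matrix of our own choosing; the 2x2 formula gives 2*3 - 1*4 = 2.
print(det_by_gauss([[2.0, 1.0], [4.0, 3.0]]))   # 2.0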
- Example 2.4
Doing $2 \times 2$ determinants
with Gauss' method won't give a big savings because the $2 \times 2$ determinant formula is so easy. However, a $3 \times 3$ determinant is usually easier to calculate with Gauss' method than with the formula given earlier.
- Example 2.5
Determinants of matrices any bigger than $3 \times 3$ are almost always most quickly done with this Gauss' method procedure.
The prior example illustrates an important point. Although we have not yet found a determinant formula, if one exists then we know what value it gives to the matrix — if there is a function with properties (1)-(4) then on the above matrix the function must return that value.
- Lemma 2.6
For each $n$, if there is an $n \times n$ determinant function then it is unique.
- Proof
For any matrix we can perform Gauss' method on the matrix, keeping track of how the sign alternates on row swaps, and then multiply down the diagonal of the echelon form result. By the definition and the lemma, all determinant functions must return this value on this matrix. Thus all determinant functions are equal, that is, there is only one input argument/output value relationship satisfying the four conditions.
The "if there is an determinant function" emphasizes that, although we can use Gauss' method to compute the only value that a determinant function could possibly return, we haven't yet shown that such a determinant function exists for all . In the rest of the section we will produce determinant functions.
Exercises
For these, assume that an $n \times n$ determinant function exists for all $n$.
- This exercise is recommended for all readers.
- Problem 1
Use Gauss' method to find each determinant.
- Problem 2
- Use Gauss' method to find each.
- Problem 3
For which values of does this system have a unique solution?
- This exercise is recommended for all readers.
- Problem 4
Express each of these in terms of .
- This exercise is recommended for all readers.
- Problem 5
Find the determinant of a diagonal matrix.
- Problem 6
Describe the solution set of a homogeneous linear system if the determinant of the matrix of coefficients is nonzero.
- This exercise is recommended for all readers.
- Problem 7
Show that this determinant is zero.
- Problem 8
- Find the , , and matrices with entry given by .
- Find the determinant of the square matrix with entry .
- Problem 9
- Find the , , and matrices with entry given by .
- Find the determinant of the square matrix with entry .
- This exercise is recommended for all readers.
- Problem 10
Show that determinant functions are not linear by giving a case where .
- Problem 11
The second condition in the definition, that row swaps change the sign of a determinant, is somewhat annoying. It means we have to keep track of the number of swaps, to compute how the sign alternates. Can we get rid of it? Can we replace it with the condition that row swaps leave the determinant unchanged? (If so then we would need new $1 \times 1$, $2 \times 2$, and $3 \times 3$ formulas, but that would be a minor matter.)
- Problem 12
Prove that the determinant of any triangular matrix, upper or lower, is the product down its diagonal.
- Problem 13
Refer to the definition of elementary matrices in the Mechanics of Matrix Multiplication subsection.
- What is the determinant of each kind of elementary matrix?
- Prove that if is any elementary matrix then for any appropriately sized .
- (This question doesn't involve determinants.) Prove that if is singular then a product is also singular.
- Show that .
- Show that if is nonsingular then .
- Problem 14
Prove that the determinant of a product is the product of the determinants in this way. Fix the matrix and consider the function given by .
- Check that satisfies property (1) in the definition of a determinant function.
- Check property (2).
- Check property (3).
- Check property (4).
- Conclude the determinant of a product is the product of the determinants.
- Problem 15
A submatrix of a given matrix is one that can be obtained by deleting some of the rows and columns of . Thus, the first matrix here is a submatrix of the second.
Prove that for any square matrix, the rank of the matrix is $r$ if and only if $r$ is the largest integer such that there is an $r \times r$ submatrix with a nonzero determinant.
- This exercise is recommended for all readers.
- Problem 16
Prove that a matrix with rational entries has a rational determinant.
- ? Problem 17
Find the element of likeness in (a) simplifying a fraction, (b) powdering the nose, (c) building new steps on the church, (d) keeping emeritus professors on campus, (e) putting , , in the determinant
3 - The Permutation Expansion
The prior subsection defines a function to be a determinant if it satisfies four conditions and shows that there is at most one determinant function for each $n$. What is left is to show that for each $n$ such a function exists.
How could such a function not exist? After all, we have done computations that start with a square matrix, follow the conditions, and end with a number.
The difficulty is that, as far as we know, the computation might not give a well-defined result. To illustrate this possibility, suppose that we were to change the second condition in the definition of determinant to be that the value of a determinant does not change on a row swap. By Remark 2.2 we know that this conflicts with the first and third conditions. Here is an instance of the conflict: here are two Gauss' method reductions of the same matrix, the first without any row swap
and the second with a swap.
Following Definition 2.1 gives that both calculations yield the same determinant since in the second one we keep track of the fact that the row swap changes the sign of the result of multiplying down the diagonal. But if we follow the supposition and change the second condition then the two calculations yield values that differ by a sign. That is, under the supposition the outcome would not be well-defined — no function exists that satisfies the changed second condition along with the other three.
Of course, observing that Definition 2.1 does the right thing in this one instance is not enough; what we will do in the rest of this section is to show that there is never a conflict. The natural way to try this would be to define the determinant function with: "The value of the function is the result of doing Gauss' method, keeping track of row swaps, and finishing by multiplying down the diagonal". (Since Gauss' method allows for some variation, such as a choice of which row to use when swapping, we would have to fix an explicit algorithm.) Then we would be done if we verified that this way of computing the determinant satisfies the four properties. For instance, if and are related by a row swap then we would need to show that this algorithm returns determinants that are negatives of each other. However, how to verify this is not evident. So the development below will not proceed in this way. Instead, in this subsection we will define a different way to compute the value of a determinant, a formula, and we will use this way to prove that the conditions are satisfied.
The formula that we shall use is based on an insight gotten from property (3) of the definition of determinants. This property shows that determinants are not linear.
- Example 3.1
For this matrix $A$, $\det(2A) \neq 2 \cdot \det(A)$.
Instead, the scalar $2$ comes out of each of the two rows.
Since scalars come out a row at a time, we might guess that determinants are linear a row at a time.
- Definition 3.2
Let $V$ be a vector space. A map $f: V^n \to \mathbb{R}$ is multilinear if
- $f(\vec{\rho}_1, \dots, \vec{v} + \vec{w}, \dots, \vec{\rho}_n) = f(\vec{\rho}_1, \dots, \vec{v}, \dots, \vec{\rho}_n) + f(\vec{\rho}_1, \dots, \vec{w}, \dots, \vec{\rho}_n)$
- $f(\vec{\rho}_1, \dots, k\vec{v}, \dots, \vec{\rho}_n) = k \cdot f(\vec{\rho}_1, \dots, \vec{v}, \dots, \vec{\rho}_n)$
for $\vec{v}, \vec{w} \in V$ and $k$ a scalar.
- Lemma 3.3
Determinants are multilinear.
- Proof
The definition of determinants gives property (2) (Lemma 2.3 following that definition covers the $k = 0$ case) so we need only check property (1).
If the set is linearly dependent then all three matrices are singular and so all three determinants are zero and the equality is trivial. Therefore assume that the set is linearly independent. This set of -wide row vectors has members, so we can make a basis by adding one more vector . Express and with respect to this basis
giving this.
By the definition of determinant, the value of is unchanged by the pivot operation of adding to .
Then, to the result, we can add , etc. Thus
(using (2) for the second equality). To finish, bring and back inside in front of and use pivoting again, this time to reconstruct the expressions of and in terms of the basis, e.g., start with the pivot operations of adding to and to , etc.
Multilinearity allows us to expand a determinant into a sum of determinants, each of which involves a simple matrix.
- Example 3.4
We can use multilinearity to split this determinant into two, first breaking up the first row
and then separating each of those two, breaking along the second rows.
We are left with four determinants, such that in each row of each matrix there is a single entry from the original matrix.
- Example 3.5
In the same way, a $3 \times 3$ determinant separates into a sum of many simpler determinants. We start by splitting along the first row, producing three determinants (the zero entry of the original matrix is underlined to set it off visually from the zeroes that appear in the splitting).
Each of these three will itself split in three along the second row. Each of the resulting nine splits in three along the third row, resulting in twenty seven determinants
such that each row contains a single entry from the starting matrix.
So an $n \times n$ determinant expands into a sum of $n^n$ determinants where each row of each summand contains a single entry from the starting matrix. However, many of these summand determinants are zero.
- Example 3.6
In each of these three matrices from the above expansion, two of the rows have their entry from the starting matrix in the same column, e.g., in the first matrix, the and the both come from the first column.
Any such matrix is singular, because in each, one row is a multiple of the other (or is a zero row). Thus, any such determinant is zero, by Lemma 2.3.
Therefore, the above expansion of the determinant into the sum of the twenty seven determinants simplifies to the sum of these six.
We can bring out the scalars.
To finish, we evaluate those six determinants by row-swapping them to the identity matrix, keeping track of the resulting sign changes.
That example illustrates the key idea. We've applied multilinearity to a determinant to get separate determinants, each with one distinguished entry per row. We can drop most of these new determinants because the matrices are singular, with one row a multiple of another. We are left with the one-entry-per-row determinants also having only one entry per column (one entry from the original determinant, that is). And, since we can factor scalars out, we can further reduce to only considering determinants of one-entry-per-row-and-column matrices where the entries are ones.
These are permutation matrices. Thus, the determinant can be computed in this three-step way (Step 1) for each permutation matrix, multiply together the entries from the original matrix where that permutation matrix has ones, (Step 2) multiply that by the determinant of the permutation matrix and (Step 3) do that for all permutation matrices and sum the results together.
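Here is the three-step process as a brief Python sketch (again ours, not the book's; it anticipates the Determinants Exist subsection by computing the determinant of each permutation matrix as $\pm 1$ from a count of inversions).

from itertools import permutations

def perm_matrix_det(phi):
    """Determinant of the permutation matrix P_phi: +1 if phi has an even
    number of inversions, -1 if odd (each row swap changes the sign)."""
    n = len(phi)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if phi[i] > phi[j])
    return 1 if inversions % 2 == 0 else -1

def det_by_permutation_expansion(t):
    """Step 1: for each permutation, multiply the entries t[r][phi(r)].
       Step 2: multiply by the determinant of that permutation matrix.
       Step 3: sum over all permutations."""
    n = len(t)
    total = 0
    for phi in permutations(range(n)):
        term = perm_matrix_det(phi)
        for row in range(n):
            term *= t[row][phi[row]]
        total += term
    return total

# Agrees with the 2x2 formula on a matrix of our own choosing: 2*3 - 1*4 = 2.
print(det_by_permutation_expansion([[2, 1], [4, 3]]))   # 2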
To state this as a formula, we introduce a notation for permutation matrices. Let $\iota_j$ be the row vector that is all zeroes except for a one in its $j$-th entry, so that the four-wide $\iota_2$ is $(0\ \ 1\ \ 0\ \ 0)$. We can construct permutation matrices by permuting — that is, scrambling — the numbers $1$, $2$, ..., $n$, and using them as indices on the $\iota$'s. For instance, to get a $4 \times 4$ permutation matrix, we can scramble the numbers from $1$ to $4$ into some sequence and take the corresponding row vector $\iota$'s.
- Definition 3.7
An $n$-permutation is a sequence consisting of an arrangement of the numbers $1$, $2$, ..., $n$.
- Example 3.8
The $2$-permutations are $\phi_1 = \langle 1,2 \rangle$ and $\phi_2 = \langle 2,1 \rangle$. These are the associated permutation matrices.
We sometimes write permutations as functions, e.g., $\phi_2(1) = 2$ and $\phi_2(2) = 1$. Then the rows of $P_{\phi_2}$ are $\iota_{\phi_2(1)} = \iota_2$ and $\iota_{\phi_2(2)} = \iota_1$.
The $3$-permutations are $\phi_1 = \langle 1,2,3 \rangle$, $\phi_2 = \langle 1,3,2 \rangle$, $\phi_3 = \langle 2,1,3 \rangle$, $\phi_4 = \langle 2,3,1 \rangle$, $\phi_5 = \langle 3,1,2 \rangle$, and $\phi_6 = \langle 3,2,1 \rangle$. Here are two of the associated permutation matrices.
For instance, the rows of $P_{\phi_5}$ are $\iota_{\phi_5(1)} = \iota_3$, $\iota_{\phi_5(2)} = \iota_1$, and $\iota_{\phi_5(3)} = \iota_2$.
- Definition 3.9
The permutation expansion for determinants is
\[ \begin{vmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1,n} \\ t_{2,1} & t_{2,2} & \cdots & t_{2,n} \\ \vdots & & & \vdots \\ t_{n,1} & t_{n,2} & \cdots & t_{n,n} \end{vmatrix}
 = t_{1,\phi_1(1)} t_{2,\phi_1(2)} \cdots t_{n,\phi_1(n)} |P_{\phi_1}|
 + t_{1,\phi_2(1)} t_{2,\phi_2(2)} \cdots t_{n,\phi_2(n)} |P_{\phi_2}|
 + \cdots
 + t_{1,\phi_k(1)} t_{2,\phi_k(2)} \cdots t_{n,\phi_k(n)} |P_{\phi_k}| \]
where $\phi_1, \dots, \phi_k$ are all of the $n$-permutations.
This formula is often written in summation notation
\[ |T| = \sum_{\text{permutations } \phi} t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)} \, |P_\phi| \]
read aloud as "the sum, over all permutations $\phi$, of terms having the form $t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)} |P_\phi|$". This phrase is just a restating of the three-step process (Step 1) for each permutation matrix, compute $t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)}$ (Step 2) multiply that by $|P_\phi|$ and (Step 3) sum all such terms together.
- Example 3.10
The familiar formula for the determinant of a $2 \times 2$ matrix can be derived in this way.
\[ \begin{vmatrix} t_{1,1} & t_{1,2} \\ t_{2,1} & t_{2,2} \end{vmatrix}
 = t_{1,1}t_{2,2} \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix}
 + t_{1,2}t_{2,1} \begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix}
 = t_{1,1}t_{2,2} - t_{1,2}t_{2,1} \]
(the second permutation matrix takes one row swap to pass to the identity). Similarly, the formula for the determinant of a $3 \times 3$ matrix is this.
\[ |T| = t_{1,1}t_{2,2}t_{3,3} - t_{1,1}t_{2,3}t_{3,2} - t_{1,2}t_{2,1}t_{3,3} + t_{1,2}t_{2,3}t_{3,1} + t_{1,3}t_{2,1}t_{3,2} - t_{1,3}t_{2,2}t_{3,1} \]
Computing a determinant by permutation expansion usually takes longer than Gauss' method. However, here we are not trying to do the computation efficiently, we are instead trying to give a determinant formula that we can prove to be well-defined. While the permutation expansion is impractical for computations, it is useful in proofs. In particular, we can use it for the result that we are after.
- Theorem 3.11
For each there is a determinant function.
The proof is deferred to the following subsection. Also there is the proof of the next result (they share some features).
- Theorem 3.12
The determinant of a matrix equals the determinant of its transpose.
The consequence of this theorem is that, while we have so far stated results in terms of rows (e.g., determinants are multilinear in their rows, row swaps change the signum, etc.), all of the results also hold in terms of columns. The final result gives examples.
- Corollary 3.13
A matrix with two equal columns is singular. Column swaps change the sign of a determinant. Determinants are multilinear in their columns.
- Proof
For the first statement, transposing the matrix results in a matrix with the same determinant, and with two equal rows, and hence a determinant of zero. The other two are proved in the same way.
We finish with a summary (although the final subsection contains the unfinished business of proving the two theorems). Determinant functions exist, are unique, and we know how to compute them. As for what determinants are about, perhaps these lines (Kemp 1982) help make it memorable.
Determinant none,
Solution: lots or none.
Determinant some,
Solution: just one.
Exercises
These summarize the notation used in this book for the $2$- and $3$-permutations.
- This exercise is recommended for all readers.
- Problem 1
Compute the determinant by using the permutation expansion.
- This exercise is recommended for all readers.
- Problem 2
Compute these both with Gauss' method and with the permutation expansion formula.
- This exercise is recommended for all readers.
- Problem 3
Use the permutation expansion formula to derive the formula for determinants.
- Problem 4
List all of the -permutations.
- Problem 5
A permutation, regarded as a function from the set to itself, is one-to-one and onto. Therefore, each permutation has an inverse.
- Find the inverse of each -permutation.
- Find the inverse of each -permutation.
- Problem 6
Prove that is multilinear if and only if for all and , this holds.
- Problem 7
Find the only nonzero term in the permutation expansion of this matrix.
Compute that determinant by finding the signum of the associated permutation.
- Problem 8
How would determinants change if we changed property (4) of the definition to read that ?
- Problem 9
Verify the second and third statements in Corollary 3.13.
- This exercise is recommended for all readers.
- Problem 10
Show that if an matrix has a nonzero determinant then any column vector can be expressed as a linear combination of the columns of the matrix.
- Problem 11
True or false: a matrix whose entries are only zeros or ones has a determinant equal to zero, one, or negative one. (Strang 1980)
- Problem 12
- Show that there are $n!$ terms in the permutation expansion formula of an $n \times n$ matrix.
- How many are sure to be zero if the entry is zero?
- Problem 13
How many -permutations are there?
- Problem 14
A matrix is skew-symmetric if $A^{\mathrm{T}} = -A$, as in this matrix.
Show that $n \times n$ skew-symmetric matrices with nonzero determinants exist only for even $n$.
- This exercise is recommended for all readers.
- Problem 15
What is the smallest number of zeros, and the placement of those zeros, needed to ensure that a matrix has a determinant of zero?
- This exercise is recommended for all readers.
- Problem 16
If we have data points and want to find a polynomial passing through those points then we can plug in the points to get an equation/ unknown linear system. The matrix of coefficients for that system is called the Vandermonde matrix. Prove that the determinant of the transpose of that matrix of coefficients
equals the product, over all indices with , of terms of the form . (This shows that the determinant is zero, and the linear system has no solution, if and only if the 's in the data are not distinct.)
- Problem 17
A matrix can be divided into blocks, as here,
which shows four blocks, the square and ones in the upper left and lower right, and the zero blocks in the upper right and lower left. Show that if a matrix can be partitioned as
where and are square, and and are all zeroes, then .
- This exercise is recommended for all readers.
- Problem 18
Prove that for any matrix there are at most distinct reals such that the matrix has determinant zero (we shall use this result in Chapter Five).
- ? Problem 19
The nine positive digits can be arranged into arrays in ways. Find the sum of the determinants of these arrays. (Trigg 1963)
- ? Problem 21
Let be the sum of the integer elements of a magic square of order three and let be the value of the square considered as a determinant. Show that is an integer. (Trigg & Walker 1949)
- ? Problem 22
Show that the determinant of the elements in the upper left corner of the Pascal triangle
has the value unity. (Rupp & Aude 1931)
4 - Determinants Exist
This subsection is optional. It consists of proofs of two results from the prior subsection. These proofs involve the properties of permutations, which will not be used later, except in the optional Jordan Canonical Form subsection.
The prior subsection attacks the problem of showing that for any size there is a determinant function on the set of square matrices of that size by using multilinearity to develop the permutation expansion.
This reduces the problem to showing that there is a determinant function on the set of permutation matrices of that size.
Of course, a permutation matrix can be row-swapped to the identity matrix and to calculate its determinant we can keep track of the number of row swaps. However, the problem is still not solved. We still have not shown that the result is well-defined. For instance, the determinant of
could be computed with one swap
or with three.
Both reductions have an odd number of swaps so we figure that the determinant is $-1$ but how do we know that there isn't some way to do it with an even number of swaps? Corollary 4.6 below proves that there is no permutation matrix that can be row-swapped to an identity matrix in two ways, one with an even number of swaps and the other with an odd number of swaps.
- Definition 4.1
Two rows of a permutation matrix
such that $\iota_k$ appears above $\iota_j$ while $k > j$ are in an inversion of their natural order.
- Example 4.2
This permutation matrix
has three inversions: precedes , precedes , and precedes .
- Lemma 4.3
A row-swap in a permutation matrix changes the number of inversions from even to odd, or from odd to even.
- Proof
Consider a swap of rows $j$ and $k$, where $j < k$. If the two rows are adjacent
then the swap changes the total number of inversions by one — either removing or producing one inversion, depending on whether the two rows were already in order or not, since inversions involving rows not in this pair are not affected. Consequently, the total number of inversions changes from odd to even or from even to odd.
If the rows are not adjacent then they can be swapped via a sequence of adjacent swaps, first bringing row $k$ up
and then bringing row $j$ down.
Each of these adjacent swaps changes the number of inversions from odd to even or from even to odd. There are an odd number of them. The total change in the number of inversions is from even to odd or from odd to even.
- Definition 4.4
The signum of a permutation $\phi$, written $\operatorname{sgn}(\phi)$, is $+1$ if the number of inversions in $P_\phi$ is even, and is $-1$ if the number of inversions is odd.
- Example 4.5
With the subscripts from Example 3.8 for the $3$-permutations, $\operatorname{sgn}(\phi_1) = +1$ while $\operatorname{sgn}(\phi_2) = -1$.
- Corollary 4.6
If a permutation matrix has an odd number of inversions then swapping it to the identity takes an odd number of swaps. If it has an even number of inversions then swapping to the identity takes an even number of swaps.
- Proof
The identity matrix has zero inversions. To change an odd number to zero requires an odd number of swaps, and to change an even number to zero requires an even number of swaps.
We still have not shown that the permutation expansion is well-defined because we have not considered row operations on permutation matrices other than row swaps. We will finesse this problem: we will define a function $d: \mathcal{M}_{n \times n} \to \mathbb{R}$ by altering the permutation expansion formula, replacing $|P_\phi|$ with $\operatorname{sgn}(\phi)$
\[ d(T) = \sum_{\text{permutations } \phi} t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)} \, \operatorname{sgn}(\phi) \]
(this gives the same value as the permutation expansion because the prior result shows that $|P_\phi| = \operatorname{sgn}(\phi)$). This formula's advantage is that the number of inversions is clearly well-defined — just count them. Therefore, we will show that a determinant function exists for all sizes by showing that $d$ is it, that is, that $d$ satisfies the four conditions.
- Lemma 4.7
The function $d$ is a determinant. Hence determinants exist for every $n$.
- Proof
We must check that it has the four properties from the definition.
Property (4) is easy; in
all of the summands are zero except for the product down the diagonal, which is one.
For property (3) consider where .
Factor the out of each term to get the desired equality.
For (2), let .
To convert to unhatted 's, for each consider the permutation that equals except that the -th and -th numbers are interchanged, and . Replacing the in with this gives . Now (by Lemma 4.3) and so we get
where the sum is over all permutations derived from another permutation by a swap of the -th and -th numbers. But any permutation can be derived from some other permutation by such a swap, in one and only one way, so this summation is in fact a sum over all permutations, taken once and only once. Thus .
To do property (1) let and consider
(notice: that's , not ). Distribute, commute, and factor.
We finish by showing that the terms add to zero. This sum represents where is a matrix equal to except that row of is a copy of row of (because the factor is , not ). Thus, has two equal rows, rows and . Since we have already shown that changes sign on row swaps, as in Lemma 2.3 we conclude that .
We have now shown that determinant functions exist for each size. We already know that for each size there is at most one determinant. Therefore, the permutation expansion computes the one and only determinant value of a square matrix.
We end this subsection by proving the other result remaining from the prior subsection, that the determinant of a matrix equals the determinant of its transpose.
- Example 4.8
Writing out the permutation expansion of the general matrix and of its transpose, and comparing corresponding terms
(terms with the same letters)
shows that the corresponding permutation matrices are transposes. That is, there is a relationship between these corresponding permutations. Problem 6 shows that they are inverses.
- Theorem 4.9
The determinant of a matrix equals the determinant of its transpose.
- Proof
Call the matrix and denote the entries of with 's so that . Substitution gives this
and we can finish the argument by manipulating the expression on the right to be recognizable as the determinant of the transpose. We have written all permutation expansions (as in the middle expression above) with the row indices ascending. To rewrite the expression on the right in this way, note that because is a permutation, the row indices in the term on the right , ..., are just the numbers , ..., , rearranged. We can thus commute to have these ascend, giving (if the column index is and the row index is then, where the row index is , the column index is ). Substituting on the right gives
(Problem 5 shows that ). Since every permutation is the inverse of another, a sum over all is a sum over all permutations
as required.
Exercises
These summarize the notation used in this book for the $2$- and $3$-permutations.
- Problem 1
Give the permutation expansion of a general matrix and its transpose.
- This exercise is recommended for all readers.
- Problem 2
This problem appears also in the prior subsection.
- Find the inverse of each -permutation.
- Find the inverse of each -permutation.
- This exercise is recommended for all readers.
- Problem 3
- Find the signum of each -permutation.
- Find the signum of each -permutation.
- Problem 4
What is the signum of the -permutation ? (Strang 1980)
- Problem 5
Prove these.
- Every permutation has an inverse.
- Every permutation is the inverse of another.
- Problem 6
Prove that the matrix of the permutation inverse is the transpose of the matrix of the permutation , for any permutation .
- This exercise is recommended for all readers.
- Problem 7
Show that a permutation matrix with inversions can be row swapped to the identity in steps. Contrast this with Corollary 4.6.
- This exercise is recommended for all readers.
- Problem 8
For any permutation let be the integer defined in this way.
(This is the product, over all indices and with , of terms of the given form.)
- Compute the value of on all -permutations.
- Compute the value of on all -permutations.
- Prove this.
Many authors give this formula as the definition of the signum function.
Section II - Geometry of Determinants
The prior section develops the determinant algebraically, by considering what formulas satisfy certain properties. This section complements that with a geometric approach. One advantage of this approach is that, while we have so far only considered whether or not a determinant is zero, here we shall give a meaning to the value of that determinant. (The prior section handles determinants as functions of the rows, but in this section columns are more convenient. The final result of the prior section says that we can make the switch.)
1 - Determinants as Size Functions
This parallelogram picture
is familiar from the construction of the sum of the two vectors. One way to compute the area that it encloses is to draw this rectangle and subtract the area of each subregion.
The fact that the area equals the value of the determinant
is no coincidence. The properties in the definition of determinants make reasonable postulates for a function that measures the size of the region enclosed by the vectors in the matrix.
For instance, this shows the effect of multiplying one of the box-defining vectors by a scalar (the scalar used is ).
The region formed by $k\vec{v}_1$ and $\vec{v}_2$ is bigger, by a factor of $k$, than the shaded region enclosed by $\vec{v}_1$ and $\vec{v}_2$. That is, $\operatorname{size}(k\vec{v}_1, \vec{v}_2) = k \cdot \operatorname{size}(\vec{v}_1, \vec{v}_2)$, and in general we expect of the size measure that $\operatorname{size}(\dots, k\vec{v}, \dots) = k \cdot \operatorname{size}(\dots, \vec{v}, \dots)$. Of course, this postulate is already familiar as one of the properties in the definition of determinants.
Another property of determinants is that they are unaffected by pivoting. Here are before-pivoting and after-pivoting boxes (the scalar used is ).
Although the region on the right, the box formed by $\vec{v}_1$ and $k\vec{v}_1 + \vec{v}_2$, is more slanted than the shaded region, the two have the same base and the same height and hence the same area. This illustrates that $\operatorname{size}(\vec{v}_1, k\vec{v}_1 + \vec{v}_2) = \operatorname{size}(\vec{v}_1, \vec{v}_2)$. Generalized, $\operatorname{size}(\dots, \vec{v}_i, \dots, k\vec{v}_i + \vec{v}_j, \dots) = \operatorname{size}(\dots, \vec{v}_i, \dots, \vec{v}_j, \dots)$, which is a restatement of the determinant postulate.
Of course, this picture
shows that $\operatorname{size}(\vec{e}_1, \vec{e}_2) = 1$, and we naturally extend that to any number of dimensions, $\operatorname{size}(\vec{e}_1, \dots, \vec{e}_n) = 1$, which is a restatement of the property that the determinant of the identity matrix is one.
With that, because property (2) of determinants is redundant (as remarked right after the definition), we have that all of the properties of determinants are reasonable to expect of a function that gives the size of boxes. We can now cite the work done in the prior section, that the determinant exists and is unique, to be assured that these postulates are consistent and sufficient (we do not need any more postulates). That is, we've got an intuitive justification to interpret $\det(\vec{v}_1, \dots, \vec{v}_n)$ as the size of the box formed by the vectors. (Comment. An even more basic approach, which also leads to the definition below, is in (Weston 1959).)
- Remark 1.2
Although property (2) of the definition of determinants is redundant, it raises an important point. Consider these two.
The only difference between them is in the order in which the vectors are taken. If we take $\vec{v}_1$ first and then go to $\vec{v}_2$, following the counterclockwise arc shown, then the sign is positive. Following a clockwise arc gives a negative sign. The sign returned by the size function reflects the "orientation" or "sense" of the box. (We see the same thing if we picture the effect of scalar multiplication by a negative scalar.)
Although it is both interesting and important, the idea of orientation turns out to be tricky. It is not needed for the development below, and so we will pass it by. (See Problem 20.)
- Definition 1.3
The box (or parallelepiped) formed by $\langle \vec{v}_1, \dots, \vec{v}_n \rangle$ (where each vector is from $\mathbb{R}^n$) includes all of the set $\{ t_1\vec{v}_1 + \dots + t_n\vec{v}_n \mid t_1, \dots, t_n \in [0..1] \}$. The volume of a box is the absolute value of the determinant of the matrix with those vectors as columns.
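For instance (a computation with vectors of our own choosing), the box in $\mathbb{R}^2$ formed by $\binom{2}{1}$ and $\binom{1}{3}$ has volume, that is, area,
\[ \left|\, \begin{vmatrix} 2 & 1 \\ 1 & 3 \end{vmatrix} \,\right| = |2 \cdot 3 - 1 \cdot 1| = 5 . \]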
- Example 1.4
Volume, because it is an absolute value, does not depend on the order in which the vectors are given. The volume of the parallelepiped in Example 1.1 can also be computed as the absolute value of this determinant.
The definition of volume gives a geometric interpretation to something in the space, boxes made from vectors. The next result relates the geometry to the functions that operate on spaces.
- Theorem 1.5
A transformation $t: \mathbb{R}^n \to \mathbb{R}^n$ changes the size of all boxes by the same factor, namely the size of the image of a box $|t(S)|$ is $|T|$ times the size of the box $|S|$, where $T$ is the matrix representing $t$ with respect to the standard basis. That is, for all $n \times n$ matrices, the determinant of a product is the product of the determinants $|TS| = |T| \cdot |S|$.
The two sentences state the same idea, first in map terms and then in matrix terms. Although we tend to prefer a map point of view, the second sentence, the matrix version, is more convenient for the proof and is also the way that we shall use this result later. (Alternate proofs are given as Problem 16 and Problem 21.)
- Proof
The two statements are equivalent because , as both give the size of the box that is the image of the unit box under the composition (where is the map represented by with respect to the standard basis).
First consider the case that . A matrix has a zero determinant if and only if it is not invertible. Observe that if is invertible, so that there is an such that , then the associative property of matrix multiplication shows that is also invertible (with inverse ). Therefore, if is not invertible then neither is — if then , and the result holds in this case.
Now consider the case that , that is nonsingular. Recall that any nonsingular matrix can be factored into a product of elementary matrices, so that . In the rest of this argument, we will verify that if is an elementary matrix then . The result will follow because then .
If the elementary matrix is then equals except that row has been multiplied by . The third property of determinant functions then gives that . But , again by the third property because is derived from the identity by multiplication of row by , and so holds for . The and checks are similar.
- Corollary 1.7
If a matrix $T$ is invertible then the determinant of its inverse is the inverse of its determinant: $\det(T^{-1}) = 1/\det(T)$.
- Proof
$1 = \det(I) = \det(T T^{-1}) = \det(T) \cdot \det(T^{-1})$
- Remark 1.8
Recall that determinants are not additive homomorphisms: $\det(A + B)$ need not equal $\det(A) + \det(B)$. The above theorem says, in contrast, that determinants are multiplicative homomorphisms: $\det(AB)$ does equal $\det(A) \cdot \det(B)$.
Exercises
- Problem 1
Find the volume of the region formed.
- This exercise is recommended for all readers.
- Problem 2
Is
inside of the box formed by these three?
- This exercise is recommended for all readers.
- This exercise is recommended for all readers.
- Problem 4
Suppose that . By what factor do these change volumes?
- This exercise is recommended for all readers.
- Problem 5
By what factor does each transformation change the size of boxes?
- Problem 6
What is the area of the image of the rectangle under the action of this matrix?
- Problem 7
If changes volumes by a factor of and changes volumes by a factor of then by what factor will their composition change volumes?
- Problem 8
In what way does the definition of a box differ from the definition of a span?
- This exercise is recommended for all readers.
- This exercise is recommended for all readers.
- Problem 10
Does ? ?
- Problem 11
- Suppose that and that . Find .
- Assume that . Prove that .
- This exercise is recommended for all readers.
- Problem 12
Let be the matrix representing (with respect to the standard bases) the map that rotates plane vectors counterclockwise thru $\theta$ radians. By what factor does it change sizes?
- This exercise is recommended for all readers.
- Problem 13
Must a transformation that preserves areas also preserve lengths?
- This exercise is recommended for all readers.
- Problem 14
What is the volume of a parallelepiped in bounded by a linearly dependent set?
- This exercise is recommended for all readers.
- Problem 15
Find the area of the triangle in with endpoints , , and . (Area, not volume. The triangle defines a plane— what is the area of the triangle in that plane?)
- This exercise is recommended for all readers.
- Problem 16
An alternate proof of Theorem 1.5 uses the definition of determinant functions.
- Note that the vectors forming make a linearly dependent set if and only if , and check that the result holds in this case.
- For the case, to show that for all transformations, consider the function given by . Show that has the first property of a determinant.
- Show that has the remaining three properties of a determinant function.
- Conclude that .
- Problem 17
Give a non-identity matrix with the property that . Show that if then . Does the converse hold?
- Problem 18
The algebraic property of determinants that factoring a scalar out of a single row will multiply the determinant by that scalar shows that where $T$ is $n \times n$, the determinant of $cT$ is $c^n$ times the determinant of $T$. Explain this geometrically, that is, using Theorem 1.5.
- This exercise is recommended for all readers.
- Problem 19
Matrices $H$ and $G$ are said to be similar if there is a nonsingular matrix $P$ such that $G = P^{-1}HP$ (we will study this relation in Chapter Five). Show that similar matrices have the same determinant.
- Problem 20
We usually represent vectors in with respect to the standard basis so vectors in the first quadrant have both coordinates positive.
Moving counterclockwise around the origin, we cycle thru four regions:
Using this basis
gives the same counterclockwise cycle. We say these two bases have the same orientation.
- Why do they give the same cycle?
- What other configurations of unit vectors on the axes give the same cycle?
- Find the determinants of the matrices formed from those (ordered) bases.
- What other counterclockwise cycles are possible, and what are the associated determinants?
- What happens in ?
- What happens in ?
A fascinating general-audience discussion of orientations is in (Gardner 1990).
- Problem 21
This question uses material from the optional Determinant Functions Exist subsection. Prove Theorem 1.5 by using the permutation expansion formula for the determinant.
- This exercise is recommended for all readers.
- Problem 22
- Show that this gives the equation of a line in thru and .
- (Peterson 1955) Prove that the area of a triangle with vertices , , and is
- (Bittinger 1973) Prove that the area of a triangle with vertices at , , and whose coordinates are integers has an area of or for some positive integer .
Section III - Other Formulas for Determinants
(This section is optional. Later sections do not depend on this material.)
Determinants are a fount of interesting and amusing formulas. Here is one that is often seen in calculus classes and used to compute determinants by hand.
1 - Laplace's Expansion
- Example 1.1
In this permutation expansion
we can, for instance, factor out the entries from the first row
and swap rows in the permutation matrices to get this.
The point of the swapping (one swap to each of the permutation matrices on the second line and two swaps to each on the third line) is that the three lines simplify to three terms.
The formula given in Theorem 1.5, which generalizes this example, is a recurrence — the determinant of an $n \times n$ matrix is expressed as a combination of determinants of $(n-1) \times (n-1)$ matrices. This formula isn't circular because, as here, the determinant is expressed in terms of determinants of matrices of smaller size.
- Definition 1.2
For any $n \times n$ matrix $T$, the $(n-1) \times (n-1)$ matrix formed by deleting row $i$ and column $j$ of $T$ is the $i,j$ minor of $T$. The $i,j$ cofactor $T_{i,j}$ of $T$ is $(-1)^{i+j}$ times the determinant of the $i,j$ minor of $T$.
- Example 1.4
Where
these are the and cofactors.
- Theorem 1.5 (Laplace Expansion of Determinants)
Where $T$ is an $n \times n$ matrix, the determinant can be found by expanding by cofactors on any row $i$ or column $j$.
\[ |T| = t_{i,1} \cdot T_{i,1} + t_{i,2} \cdot T_{i,2} + \dots + t_{i,n} \cdot T_{i,n} = t_{1,j} \cdot T_{1,j} + t_{2,j} \cdot T_{2,j} + \dots + t_{n,j} \cdot T_{n,j} \]
- Proof
Problem 15.
- Example 1.6
We can compute the determinant
by expanding along the first row, as in Example 1.1.
Alternatively, we can expand down the second column.
- Example 1.7
A row or column with many zeroes suggests a Laplace expansion.
We finish by applying this result to derive a new formula for the inverse of a matrix. With Theorem 1.5, the determinant of an $n \times n$ matrix can be calculated by taking linear combinations of entries from a row and their associated cofactors.
Recall that a matrix with two identical rows has a zero determinant. Thus, for any matrix $T$, weighing the cofactors by entries from the "wrong" row — row $k$ with $k \neq i$ — gives zero
because it represents the expansion along row $k$ of a matrix with row $k$ equal to row $i$. This equation summarizes ($*$) and ($**$).
Note that the order of the subscripts in the matrix of cofactors is opposite to the order of subscripts in the other matrix; e.g., along the first row of the matrix of cofactors the subscripts are $1,1$ then $2,1$, etc.
- Definition 1.8
The matrix adjoint to the square matrix $T$ is
\[ \operatorname{adj}(T) = \begin{pmatrix} T_{1,1} & T_{2,1} & \cdots & T_{n,1} \\ T_{1,2} & T_{2,2} & \cdots & T_{n,2} \\ \vdots & & & \vdots \\ T_{1,n} & T_{2,n} & \cdots & T_{n,n} \end{pmatrix} \]
where $T_{j,i}$ is the $j,i$ cofactor.
- Theorem 1.9
Where $T$ is a square matrix, $T \cdot \operatorname{adj}(T) = \operatorname{adj}(T) \cdot T = |T| \cdot I$.
- Proof
Equations ($*$) and ($**$).
- Example 1.10
If
then the adjoint is
and taking the product with $T$ gives the diagonal matrix $|T| \cdot I$.
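As a small check with numbers of our own choosing (not the matrix of this example): for
\[ T = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \]
the cofactors are $T_{1,1} = +4$, $T_{1,2} = -3$, $T_{2,1} = -2$, and $T_{2,2} = +1$, so
\[ \operatorname{adj}(T) = \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}
 \qquad\text{and}\qquad
 T \cdot \operatorname{adj}(T) = \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix} = |T| \cdot I \]
since $|T| = 1 \cdot 4 - 2 \cdot 3 = -2$.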
- Corollary 1.11
If $|T| \neq 0$ then $T^{-1} = (1/|T|) \cdot \operatorname{adj}(T)$.
The formulas from this section are often used for by-hand calculation and are sometimes useful with special types of matrices. However, they are not the best choice for computation with arbitrary matrices because they require more arithmetic than, for instance, the Gauss-Jordan method.
Exercises
- This exercise is recommended for all readers.
- Problem 1
Find the cofactor.
- This exercise is recommended for all readers.
- Problem 2
Find the determinant by expanding
- on the first row
- on the second row
- on the third column.
- Problem 3
Find the adjoint of the matrix in Example 1.6.
- This exercise is recommended for all readers.
- Problem 4
Find the matrix adjoint to each.
- This exercise is recommended for all readers.
- Problem 5
Find the inverse of each matrix in the prior question with Theorem 1.9.
- Problem 6
Find the matrix adjoint to this one.
- This exercise is recommended for all readers.
- Problem 7
Expand across the first row to derive the formula for the determinant of a $2 \times 2$ matrix.
- This exercise is recommended for all readers.
- Problem 8
Expand across the first row to derive the formula for the determinant of a $3 \times 3$ matrix.
- This exercise is recommended for all readers.
- Problem 9
- Give a formula for the adjoint of a matrix.
- Use it to derive the formula for the inverse.
- This exercise is recommended for all readers.
- Problem 10
Can we compute a determinant by expanding down the diagonal?
- Problem 11
Give a formula for the adjoint of a diagonal matrix.
- This exercise is recommended for all readers.
- Problem 12
Prove that the transpose of the adjoint is the adjoint of the transpose.
- Problem 13
Prove or disprove: .
- Problem 14
A square matrix is upper triangular if each $i,j$ entry is zero in the part below the diagonal, that is, when $i > j$.
- Must the adjoint of an upper triangular matrix be upper triangular? Lower triangular?
- Prove that the inverse of an upper triangular matrix is upper triangular, if an inverse exists.
- Problem 15
This question requires material from the optional Determinants Exist subsection. Prove Theorem 1.5 by using the permutation expansion.
- Problem 16
Prove that the determinant of a matrix equals the determinant of its transpose using Laplace's expansion and induction on the size of the matrix.
- ? Problem 17
Show that
where is the -th term of , the Fibonacci sequence, and the determinant is of order . (Walter & Tytun 1949)
Topic: Cramer's Rule
We have introduced determinant functions algebraically by looking for a formula to decide whether a matrix is nonsingular. After that introduction we saw a geometric interpretation, that the determinant function gives the size of the box with sides formed by the columns of the matrix. This Topic makes a connection between the two views.
First, a linear system
is equivalent to a linear relationship among vectors.
The picture below shows a parallelogram with sides formed from and nested inside a parallelogram with sides formed from and .
So even without determinants we can state the algebraic issue that opened this book, finding the solution of a linear system, in geometric terms: by what factors $x_1$ and $x_2$ must we dilate the vectors to expand the small parallelogram to fill the larger one?
However, by employing the geometric significance of determinants we can get something that is not just a restatement, but also gives us a new insight and sometimes allows us to compute answers quickly. Compare the sizes of these shaded boxes.
The second is formed from and , and one of the properties of the size function— the determinant— is that its size is therefore times the size of the first box. Since the third box is formed from and , and the determinant is unchanged by adding times the second column to the first column, the size of the third box equals that of the second. We have this.
Solving gives the value of one of the variables.
The theorem that generalizes this example, Cramer's Rule, is: if $|A| \neq 0$ then the system $A\vec{x} = \vec{b}$ has the unique solution $x_i = |B_i| / |A|$ where the matrix $B_i$ is formed from $A$ by replacing column $i$ with the vector $\vec{b}$. Problem 3 asks for a proof.
For instance, to solve this system for
we do this computation.
Cramer's Rule allows us to solve many two equations/two unknowns systems by eye. It is also sometimes used for three equations/three unknowns systems. But computing large determinants takes a long time, so solving large systems by Cramer's Rule is not practical.
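A small Python sketch (ours; the sample system below is also ours) shows how mechanical the rule is, given any routine for computing determinants.

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(a, b):
    """Solve a*x = b by Cramer's Rule: x_i = |B_i| / |A|, where B_i is A
    with column i replaced by b.  Requires |A| != 0."""
    d = det(a)
    if d == 0:
        raise ValueError("Cramer's Rule needs a nonsingular matrix of coefficients")
    n = len(a)
    solution = []
    for i in range(n):
        b_i = [row[:i] + [b[r]] + row[i+1:] for r, row in enumerate(a)]
        solution.append(det(b_i) / d)
    return solution

# A sample two equations/two unknowns system of our own: x + 2y = 5, 3x + 4y = 6.
print(cramer([[1, 2], [3, 4]], [5, 6]))   # [-4.0, 4.5]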
Exercises
- Problem 1
Use Cramer's Rule to solve each for each of the variables.
- Problem 2
Use Cramer's Rule to solve this system for .
- Problem 3
Prove Cramer's Rule.
- Problem 4
Suppose that a linear system has as many equations as unknowns, that all of its coefficients and constants are integers, and that its matrix of coefficients has determinant . Prove that the entries in the solution are all integers. (Remark. This is often used to invent linear systems for exercises. If an instructor makes the linear system with this property then the solution is not some disagreeable fraction.)
- Problem 5
Use Cramer's Rule to give a formula for the solution of a two equations/two unknowns linear system.
- Problem 6
Can Cramer's Rule tell the difference between a system with no solutions and one with infinitely many?
- Problem 7
The first picture in this Topic (the one that doesn't use determinants) shows a unique solution case. Produce a similar picture for the case of infinitely many solutions, and the case of no solutions.
Topic: Speed of Calculating Determinants
The permutation expansion formula for computing determinants is useful for proving theorems, but the method of using row operations is much better for finding the determinant of a large matrix. We can make this statement precise by considering, as computer algorithm designers do, the number of arithmetic operations that each method uses.
The speed of an algorithm is measured by finding how the time taken by the computer grows as the size of its input data set grows. For instance, how much longer will the algorithm take if we increase the size of the input data by a factor of ten, from a row matrix to a row matrix or from to ? Does the time taken grow by a factor of ten, or by a factor of a hundred, or by a factor of a thousand? That is, is the time taken by the algorithm proportional to the size of the data set, or to the square of that size, or to the cube of that size, etc.?
Recall the permutation expansion formula for determinants.
There are different -permutations. For numbers of any size at all, this is a large value; for instance, even if is only then the expansion has terms, all of which are obtained by multiplying entries together. This is a very large number of multiplications (for instance, (Knuth 1988) suggests steps as a rough boundary for the limit of practical calculation). The factorial function grows faster than the square function. It grows faster than the cube function, the fourth power function, or any polynomial function. (One way to see that the factorial function grows faster than the square is to note that multiplying the first two factors in gives , which for large is approximately , and then multiplying in more factors will make it even larger. The same argument works for the cube function, etc.) So a computer that is programmed to use the permutation expansion formula, and thus to perform a number of operations that is greater than or equal to the factorial of the number of rows, would take very long times as its input data set grows.
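A few computed values make the comparison vivid. This is a minimal sketch in Python (an illustration only, not part of the text's development); it prints n!, the number of terms in the permutation expansion, next to n cubed.
# The number of terms in the permutation expansion (n!) against n**3.
from math import factorial
for n in (5, 10, 15, 20):
    print(n, factorial(n), n**3)
Already at n equal to 15 the factorial is above a trillion while the cube is only 3375.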
In contrast, the time taken by the row reduction method does not grow so fast. This fragment of row-reduction code is in the computer language FORTRAN. The matrix is stored in the array A. For each ROW between and parts of the program not shown here have already found the pivot entry . Now the program does a row pivot.
(This code fragment is for illustration only and is incomplete. Still, analysis of a finished version that includes all of the tests and subcases is messier but gives essentially the same conclusion.)
C     Eliminate the entries below the pivot entry A(ROW,COL).
PIVINV=1.0/A(ROW,COL)
DO 10 I=ROW+1, N
FACTOR=A(I,COL)*PIVINV
DO 20 J=COL, N
A(I,J)=A(I,J)-FACTOR*A(ROW,J)
20 CONTINUE
10 CONTINUE
The outermost loop (not shown) runs through rows. For each row, the nested and loops shown perform arithmetic on the entries in A that are below and to the right of the pivot entry. Assume that the pivot is found in the expected place, that is, that . Then there are entries below and to the right of the pivot. On average, ROW will be . Thus, we estimate that the arithmetic will be performed about times, that is, will run in a time proportional to the square of the number of equations. Taking into account the outer loop that is not shown, we get the estimate that the running time of the algorithm is proportional to the cube of the number of equations.
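To see the cubic growth concretely, here is a rough Python sketch (an illustration only, not the book's FORTRAN; it assumes, as above, that every pivot is found on the diagonal and is nonzero, and the test matrix is chosen here just to avoid zero pivots) that runs the elimination and counts the multiplications and divisions.
# Gauss' method on an n-by-n matrix, counting multiplications and divisions.
def reduce_and_count(a):
    n = len(a)
    ops = 0
    for row in range(n - 1):
        pivinv = 1.0 / a[row][row]             # one division per pivot
        ops += 1
        for i in range(row + 1, n):
            factor = a[i][row] * pivinv        # one multiplication per lower row
            ops += 1
            for j in range(row, n):
                a[i][j] -= factor * a[row][j]  # one multiplication per entry
                ops += 1
    return ops

for n in (10, 20, 40):                         # counts grow roughly like n**3 / 3
    a = [[float(n) if i == j else 1.0 for j in range(n)] for i in range(n)]
    print(n, reduce_and_count(a))
The printed counts grow roughly like the cube of the number of rows, in line with the estimate above.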
Finding the fastest algorithm to compute the determinant is a topic of current research. Algorithms are known that run in time between the second and third power.
Speed estimates like these help us to understand how quickly or slowly an algorithm will run. Algorithms that run in time proportional to the size of the data set are fast, algorithms that run in time proportional to the square of the size of the data set are less fast, but typically quite usable, and algorithms that run in time proportional to the cube of the size of the data set are still reasonable in speed for not-too-big input data. However, algorithms that run in time (greater than or equal to) the factorial of the size of the data set are not practical for input of any appreciable size.
There are other methods besides the two discussed here that are also used for computation of determinants. Those lie outside of our scope. Nonetheless, this contrast of the two methods for computing determinants makes the point that although in principle they give the same answer, in practice the idea is to select the one that is fast.
Exercises
Most of these problems presume access to a computer.
- Problem 1
Computer systems generate random numbers (of course, these are only pseudo-random, in that they are generated by an algorithm, but they pass a number of reasonable statistical tests for randomness).
- Fill a array with random numbers (say, in the range ). See if it is singular. Repeat that experiment a few times. Are singular matrices frequent or rare (in this sense)?
- Time your computer algebra system at finding the determinant of ten arrays of random numbers. Find the average time per array. Repeat the prior item for arrays, arrays, and arrays. (Notice that, when an array is singular, it can sometimes be found to be so quite quickly, for instance if the first row equals the second. In the light of your answer to the first part, do you expect that singular systems play a large role in your average?)
- Graph the input size versus the average time.
- Problem 2
Compute the determinant of each of these by hand using the two methods discussed above.
Count the number of multiplications and divisions used in each case, for each of the methods. (On a computer, multiplications and divisions take much longer than additions and subtractions, so algorithm designers worry about them more.)
- Problem 3
What array can you invent that takes your computer system the longest to reduce? The shortest?
- Problem 4
Write the rest of the FORTRAN program to do a straightforward implementation of calculating determinants via Gauss' method. (Don't test for a zero pivot.) Compare the speed of your code to that used in your computer algebra system.
- Problem 5
The FORTRAN language specification requires that arrays be stored "by column", that is, the entire first column is stored contiguously, then the second column, etc. Does the code fragment given take advantage of this, or can it be rewritten to make it faster, by taking advantage of the fact that computer fetches are faster from contiguous locations?
Topic: Projective Geometry
There are geometries other than the familiar Euclidean one. One such geometry arose in art, where it was observed that what a viewer sees is not necessarily what is there. This is Leonardo da Vinci's The Last Supper.
What is there in the room, for instance where the ceiling meets the left and right walls, are lines that are parallel. However, what a viewer sees is lines that, if extended, would intersect. The intersection point is called the vanishing point. This aspect of perspective is also familiar as the image of a long stretch of railroad tracks that appear to converge at the horizon.
To depict the room, da Vinci has adopted a model of how we see, of how we project the three dimensional scene to a two dimensional image. This model is only a first approximation — it does not take into account that our retina is curved and our lens bends the light, that we have binocular vision, or that our brain's processing greatly affects what we see — but nonetheless it is interesting, both artistically and mathematically.
The projection is not orthogonal; it is a central projection from a single point, to the plane of the canvas.
(It is not an orthogonal projection since the line from the viewer to is not orthogonal to the image plane.) As the picture suggests, the operation of central projection preserves some geometric properties — lines project to lines. However, it fails to preserve some others — equal length segments can project to segments of unequal length; the length of is greater than the length of because the segment projected to is closer to the viewer and closer things look bigger. The study of the effects of central projections is projective geometry. We will see how linear algebra can be used in this study.
There are three cases of central projection. The first is the projection done by a movie projector.
We can think that each source point is "pushed" from the domain plane outward to the image point in the codomain plane. This case of projection has a somewhat different character than the second case, that of the artist "pulling" the source back to the canvas.
In the first case is in the middle while in the second case is in the middle. One more configuration is possible, with in the middle. An example of this is when we use a pinhole to shine the image of a solar eclipse onto a piece of paper.
We shall take each of the three to be a central projection by of to .
Consider again the effect of railroad tracks that appear to converge to a point. We model this with parallel lines in a domain plane and a projection via a to a codomain plane . (The gray lines are parallel to and .)
All three projection cases appear here. The first picture below shows acting like a movie projector by pushing points from part of out to image points on the lower half of . The middle picture shows acting like the artist by pulling points from another part of back to image points in the middle of . In the third picture, acts like the pinhole, projecting points from to the upper part of . This picture is the trickiest— the points that are projected near to the vanishing point are the ones that are far out on the bottom left of . Points in that are near to the vertical gray line are sent high up on .
There are two awkward things about this situation. The first is that neither of the two points in the domain nearest to the vertical gray line (see below) has an image because a projection from those two is along the gray line that is parallel to the codomain plane (we sometimes say that these two are projected "to infinity"). The second awkward thing is that the vanishing point in isn't the image of any point from because a projection to this point would be along the gray line that is parallel to the domain plane (we sometimes say that the vanishing point is the image of a projection "from infinity").
For a better model, put the projector at the origin. Imagine that is covered by a glass hemispheric dome. As looks outward, anything in the line of vision is projected to the same spot on the dome. This includes things on the line between and the dome, as in the case of projection by the movie projector. It includes things on the line further from than the dome, as in the case of projection by the painter. It also includes things on the line that lie behind , as in the case of projection by a pinhole.
From this perspective , all of the spots on the line are seen as the same point. Accordingly, for any nonzero vector , we define the associated point in the projective plane to be the set of nonzero vectors lying on the same line through the origin as . To describe a projective point we can give any representative member of the line, so that the projective point shown above can be represented in any of these three ways.
Each of these is a homogeneous coordinate vector for .
This picture, and the above definition that arises from it, clarifies the description of central projection but there is something awkward about the dome model: what if the viewer looks down? If we draw 's line of sight so that the part coming toward us, out of the page, goes down below the dome then we can trace the line of sight backward, up past and toward the part of the hemisphere that is behind the page. So in the dome model, looking down gives a projective point that is behind the viewer. Therefore, if the viewer in the picture above drops the line of sight toward the bottom of the dome then the projective point drops also and as the line of sight continues down past the equator, the projective point suddenly shifts from the front of the dome to the back of the dome. This discontinuity in the drawing means that we often have to treat equatorial points as a separate case. That is, while the railroad track discussion of central projection has three cases, the dome model has two.
We can do better than this. Consider a sphere centered at the origin. Any line through the origin intersects the sphere in two spots, which are said to be antipodal. Because we associate each line through the origin with a point in the projective plane, we can draw such a point as a pair of antipodal spots on the sphere. Below, the two antipodal spots are shown connected by a dashed line to emphasize that they are not two different points, the pair of spots together make one projective point.
While drawing a point as a pair of antipodal spots is not as natural as the one-spot-per-point dome model, on the other hand the awkwardness of the dome model is gone: as a line of view slides from north to south, no sudden changes happen in the picture. This model of central projection is uniform — the three cases are reduced to one.
So far we have described points in projective geometry. What about lines? What a viewer at the origin sees as a line is shown below as a great circle, the intersection of the model sphere with a plane through the origin.
(One of the projective points on this line is shown to bring out a subtlety. Because two antipodal spots together make up a single projective point, the great circle's behind-the-paper part is the same set of projective points as its in-front-of-the-paper part.) Just as we did with each projective point, we will also describe a projective line with a triple of reals. For instance, the members of this plane through the origin in
project to a line that we can describe with the triple (we use row vectors to typographically distinguish lines from points). In general, for any nonzero three-wide row vector we define the associated line in the projective plane to be the set of nonzero multiples of .
The reason that this description of a line as a triple is convenient is that in the projective plane, a point and a line are incident — the point lies on the line, the line passes through the point — if and only if a dot product of their representatives is zero (Problem 4 shows that this is independent of the choice of representatives and ). For instance, the projective point described above by the column vector with components , , and lies in the projective line described by , simply because any vector in whose components are in ratio lies in the plane through the origin whose equation is of the form for any nonzero . That is, the incidence formula is inherited from the three-space lines and planes of which and are projections.
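For a concrete check (the numbers are chosen here for illustration), the projective point with homogeneous coordinate vector \((1, 2, 3)^{\mathsf{T}}\) is incident on the projective line \((4\;\; 1\;\; {-2})\), since \(4\cdot 1 + 1\cdot 2 + (-2)\cdot 3 = 0\), while it is not incident on the line \((1\;\; 1\;\; 1)\), since \(1 + 2 + 3 \neq 0\).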
Thus, we can do analytic projective geometry. For instance, the projective line has the equation , because points incident on the line are characterized by having the property that their representatives satisfy this equation. One difference from familiar Euclidean analytic geometry is that in projective geometry we talk about the equation of a point. For a fixed point like
the property that characterizes lines through this point (that is, lines incident on this point) is that the components of any representatives satisfy and so this is the equation of .
This symmetry of the statements about lines and points brings up the Duality Principle of projective geometry: in any true statement, interchanging "point" with "line" results in another true statement. For example, just as two distinct points determine one and only one line, in the projective plane, two distinct lines determine one and only one point. Here is a picture showing two lines that cross in antipodal spots and thus cross at one projective point.
Contrast this with Euclidean geometry, where two distinct lines may have a unique intersection or may be parallel. In this way, projective geometry is simpler, more uniform, than Euclidean geometry.
That simplicity is relevant because there is a relationship between the two spaces: the projective plane can be viewed as an extension of the Euclidean plane. Take the sphere model of the projective plane to be the unit sphere in and take Euclidean space to be the plane . This gives us a way of viewing some points in projective space as corresponding to points in Euclidean space, because all of the points on the plane are projections of antipodal spots from the sphere.
Note though that projective points on the equator don't project up to the plane. Instead, these project "out to infinity". We can thus think of projective space as consisting of the Euclidean plane with some extra points adjoined — the Euclidean plane is embedded in the projective plane. These extra points, the equatorial points, are the ideal points or points at infinity and the equator is the ideal line or line at infinity (note that it is not a Euclidean line, it is a projective line).
The advantage of the extension to the projective plane is that some of the awkwardness of Euclidean geometry disappears. For instance, the projective lines shown above in () cross at antipodal spots, a single projective point, on the sphere's equator. If we put those lines into () then they correspond to Euclidean lines that are parallel. That is, in moving from the Euclidean plane to the projective plane, we move from having two cases, that lines either intersect or are parallel, to having only one case, that lines intersect (possibly at a point at infinity).
The projective case is nicer in many ways than the Euclidean case but has the problem that we don't have the same experience or intuitions with it. That's one advantage of doing analytic geometry, where the equations can lead us to the right conclusions. Analytic projective geometry uses linear algebra. For instance, for three points of the projective plane , , and , setting up the equations for those points by fixing vectors representing each, shows that the three are collinear — incident in a single line — if and only if the resulting three-equation system has infinitely many row vector solutions representing that line. That, in turn, holds if and only if this determinant is zero.
Thus, three points in the projective plane are collinear if and only if any three representative column vectors are linearly dependent. Similarly (and illustrating the Duality Principle), three lines in the projective plane are incident on a single point if and only if any three row vectors representing them are linearly dependent.
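For instance (with representatives chosen here for illustration), the three projective points represented by \((1,0,1)^{\mathsf{T}}\), \((0,1,1)^{\mathsf{T}}\), and \((1,1,2)^{\mathsf{T}}\) are collinear because
\[
\begin{vmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2 \end{vmatrix}
= 1\cdot(2-1) - 0\cdot(0-1) + 1\cdot(0-1) = 0
\]
(indeed the third representative is the sum of the first two); all three are incident on the projective line \(({-1}\;\;{-1}\;\;1)\).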
The following result is more evidence of the "niceness" of the geometry of the projective plane, compared to the Euclidean case. These two triangles are said to be in perspective from because their corresponding vertices are collinear.
Consider the pairs of corresponding sides: the sides and , the sides and , and the sides and . Desargue's Theorem is that when the three pairs of corresponding sides are extended to lines, they intersect (shown here as the point , the point , and the point ), and further, those three intersection points are collinear.
We will prove this theorem, using projective geometry. (These are drawn as Euclidean figures because it is the more familiar image. To consider them as projective figures, we can imagine that, although the line segments shown are parts of great circles and so are curved, the model has such a large radius compared to the size of the figures that the sides appear in this sketch to be straight.)
For this proof, we need a preliminary lemma (Coxeter 1974): if , , , are four points in the projective plane (no three of which are collinear) then there are homogeneous coordinate vectors , , , and for the projective points, and a basis for , satisfying this.
The proof is straightforward. Because are not on the same projective line, any homogeneous coordinate vectors do not lie on the same plane through the origin in and so form a spanning set for . Thus any homogeneous coordinate vector for can be written as a combination . Then, we can take , , , and , where the basis is .
Now, to prove Desargue's Theorem, use the lemma to fix homogeneous coordinate vectors and a basis.
Because the projective point is incident on the projective line , any homogeneous coordinate vector for lies in the plane through the origin in that is spanned by homogeneous coordinate vectors of and :
for some scalars and . That is, the homogeneous coordinate vectors of members of the line are of the form on the left below, and the forms for and are similar.
The projective line is the image of a plane through the origin in . A quick way to get its equation is to note that any vector in it is linearly dependent on the vectors for and and so this determinant is zero.
The equation of the plane in whose image is the projective line is this.
Finding the intersection of the two is routine.
(This is, of course, the homogeneous coordinate vector of a projective point.) The other two intersections are similar.
The proof is finished by noting that these projective points are on one projective line because the sum of the three homogeneous coordinate vectors is zero.
Every projective theorem has a translation to a Euclidean version, although the Euclidean result is often messier to state and prove. Desargue's theorem illustrates this. In the translation to Euclidean space, the case where lies on the ideal line must be treated separately for then the lines , , and are parallel.
The parenthetical remark following the statement of Desargue's Theorem suggests thinking of the Euclidean pictures as figures from projective geometry for a model of very large radius. That is, just as a small area of the earth appears flat to people living there, the projective plane is also "locally Euclidean".
Although its local properties are the familiar Euclidean ones, there is a global property of the projective plane that is quite different. The picture below shows a projective point. At that point is drawn an -axis. There is something interesting about the way this axis appears at the antipodal ends of the sphere. In the northern hemisphere, where the axes are drawn in black, a right hand put down with fingers on the -axis will have the thumb point along the -axis. But the antipodal axis has just the opposite: a right hand placed with its fingers on the -axis will have the thumb point the wrong way; instead, it is a left hand that works. Briefly, the projective plane is not orientable: in this geometry, left- and right-handedness are not fixed properties of figures.
The sequence of pictures below dramatizes this non-orientability. They sketch a trip around this space in the direction of the part of the -axis. (Warning: the trip shown is not halfway around, it is a full circuit. True, if we made this into a movie then we could watch the northern hemisphere spots in the drawing above gradually rotate about halfway around the sphere to the last picture below. And we could watch the southern hemisphere spots in the picture above slide through the south pole and up through the equator to the last picture. But: the spots at either end of the dashed line are the same projective point. We don't need to continue on much further; we are pretty much back to the projective point where we started by the last picture.)
At the end of the circuit, the part of the -axis sticks out in the other direction. Thus, in the projective plane we cannot describe a figure as right- or left-handed (another way to make this point is that we cannot describe a spiral as clockwise or counterclockwise).
This exhibition of the existence of a non-orientable space raises the question of whether our universe is orientable: is it possible for an astronaut to leave right-handed and return left-handed? An excellent nontechnical reference is (Gardner 1990). A classic science fiction story about orientation reversal is (Clarke 1982).
So projective geometry is mathematically interesting, in addition to the natural way in which it arises in art. It is more than just a technical device to shorten some proofs. For an overview, see (Courant & Robbins 1978). The approach we've taken here, the analytic approach, leads to quick theorems and — most importantly for us — illustrates the power of linear algebra (see Hanes (1990), Ryan (1986), and Eggar (1998)). But another approach, the synthetic approach of deriving the results from an axiom system, is both extraordinarily beautiful and the historical route of development. Two fine sources for this approach are (Coxeter 1974) and (Seidenberg 1962). An interesting and easy application is (Davies 1990).
Exercises
- Problem 1
What is the equation of this point?
- Problem 2
- Find the line incident on these points in the projective plane.
- Find the point incident on both of these projective lines.
- Problem 3
Find the formula for the line incident on two projective points. Find the formula for the point incident on two projective lines.
- Problem 4
Prove that the definition of incidence is independent of the choice of the representatives of and . That is, if , , , and , , are two triples of homogeneous coordinates for , and , , , and , , are two triples of homogeneous coordinates for , prove that if and only if .
- Problem 5
Give a drawing to show that central projection does not preserve circles, that a circle may project to an ellipse. Can a (non-circular) ellipse project to a circle?
- Problem 6
Give the formula for the correspondence between the non-equatorial part of the antipodal model of the projective plane, and the plane .
- Problem 7
(Pappus's Theorem) Assume that , , and are collinear and that , , and are collinear. Consider these three points: (i) the intersection of the lines and , (ii) the intersection of the lines and , and (iii) the intersection of and .
- Draw a (Euclidean) picture.
- Apply the lemma used in Desargue's Theorem to get simple homogeneous coordinate vectors for the 's and .
- Find the resulting homogeneous coordinate vectors for 's (these must each involve a parameter as, e.g., could be anywhere on the line).
- Find the resulting homogeneous coordinate vectors for . (Hint: it involves two parameters.)
- Find the resulting homogeneous coordinate vectors for . (It also involves two parameters.)
- Show that the product of the three parameters is .
- Verify that is on the line.
Chapter V - Similarity
While studying matrix equivalence, we have shown that for any homomorphism there are bases and such that the representation matrix has a block partial-identity form.
This representation describes the map as sending to , where is the dimension of the domain and is the dimension of the range. So, under this representation the action of the map is easy to understand because most of the matrix entries are zero.
This chapter considers the special case where the domain and the codomain are equal, that is, where the homomorphism is a transformation. In this case we naturally ask to find a single basis so that is as simple as possible (we will take "simple" to mean that it has many zeroes). A matrix having the above block partial-identity form is not always possible here. But we will develop a form that comes close, a representation that is nearly diagonal.
Section I - Complex Vector Spaces
This chapter requires that we factor polynomials. Of course, many polynomials do not factor over the real numbers; for instance, does not factor into the product of two linear polynomials with real coefficients. For that reason, we shall from now on take our scalars from the complex numbers.
That is, we are shifting from studying vector spaces over the real numbers to vector spaces over the complex numbers— in this chapter vector and matrix entries are complex.
Any real number is a complex number and a glance through this chapter shows that most of the examples use only real numbers. Nonetheless, the critical theorems require that the scalars be complex numbers, so the first section below is a quick review of complex numbers.
In this book we are moving to the more general context of taking scalars to be complex only for the pragmatic reason that we must do so in order to develop the representation. We will not go into using other sets of scalars in more detail because it could distract from our goal. However, the idea of taking scalars from a structure other than the real numbers is an interesting one. Delightful presentations taking this approach are in (Halmos 1958) and (Hoffman & Kunze 1971).
1 - Factoring and Complex Numbers: A Review
This subsection is a review only and we take the main results as known. For proofs, see (Birkhoff & MacLane 1965) or (Ebbinghaus 1990).
Just as integers have a division operation— e.g., " goes times into with remainder "— so do polynomials.
- Theorem 1.1 (Division Theorem for Polynomials)
Let be a polynomial. If is a non-zero polynomial then there are quotient and remainder polynomials and such that
where the degree of is strictly less than the degree of .
In this book constant polynomials, including the zero polynomial, are said to have degree . (This is not the standard definition, but it is convenient here.)
The point of the integer division statement " goes times into with remainder " is that the remainder is less than — while goes times, it does not go times. In the same way, the point of the polynomial division statement is its final clause.
- Example 1.2
If and then and . Note that has a lower degree than .
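One concrete instance of the theorem (with polynomials chosen here for illustration): dividing \(x^3 + 1\) by \(x^2 - 1\) gives quotient \(x\) and remainder \(x + 1\), since
\[
x^3 + 1 = (x^2 - 1)\cdot x + (x + 1)
\]
and the remainder's degree, \(1\), is strictly less than the divisor's degree, \(2\).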
- Corollary 1.3
The remainder when is divided by is the constant polynomial .
- Proof
The remainder must be a constant polynomial because it is of degree less than the divisor . To determine the constant, take from the theorem to be and substitute for to get .
If a divisor goes into a dividend evenly, meaning that is the zero polynomial, then is a factor of . Any root of the factor (any such that ) is a root of since . The prior corollary immediately yields the following converse.
- Corollary 1.4
If is a root of the polynomial then divides evenly, that is, is a factor of .
Finding the roots and factors of a high-degree polynomial can be hard. But for second-degree polynomials we have the quadratic formula: the roots of are
(if the discriminant is negative then the polynomial has no real number roots). A polynomial that cannot be factored into two lower-degree polynomials with real number coefficients is irreducible over the reals.
- Theorem 1.5
Any constant or linear polynomial is irreducible over the reals. A quadratic polynomial is irreducible over the reals if and only if its discriminant is negative. No cubic or higher-degree polynomial is irreducible over the reals.
- Corollary 1.6
Any polynomial with real coefficients can be factored into linear and irreducible quadratic polynomials. That factorization is unique; any two factorizations have the same powers of the same factors.
Note the analogy with the prime factorization of integers. In both cases, the uniqueness clause is very useful.
- Example 1.7
Because of uniqueness we know, without multiplying them out, that does not equal .
- Example 1.8
By uniqueness, if then where and , we know that .
While has no real roots and so doesn't factor over the real numbers, if we imagine a root— traditionally denoted so that — then factors into a product of linears .
So we adjoin this root to the reals and close the new system with respect to addition, multiplication, etc. (i.e., we also add , and , and , etc., putting in all linear combinations of and ). We then get a new structure, the complex numbers, denoted .
In we can factor (obviously, at least some) quadratics that would be irreducible if we were to stick to the real numbers. Surprisingly, in we can not only factor and its close relatives, we can factor any quadratic.
- Example 1.9
The second degree polynomial factors over the complex numbers into the product of two first degree polynomials.
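One such factorization (the quadratic is chosen here for illustration): the discriminant of \(x^2 - 2x + 5\) is \(4 - 20 = -16 < 0\), so this polynomial is irreducible over the reals, but the quadratic formula gives the complex roots \(1 \pm 2i\) and
\[
x^2 - 2x + 5 = \bigl(x - (1 + 2i)\bigr)\bigl(x - (1 - 2i)\bigr).
\]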
- Corollary 1.10 (Fundamental Theorem of Algebra)
Polynomials with complex coefficients factor into linear polynomials with complex coefficients. The factorization is unique.
2 - Complex Representations
Recall the definitions of the complex number addition
and multiplication.
- Example 2.1
For instance, and .
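As an illustration with numbers chosen here: \((1 + 2i) + (3 - i) = 4 + i\), and \((1 + 2i)(3 - i) = 3 - i + 6i - 2i^2 = 5 + 5i\), since \(i^2 = -1\).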
Handling scalar operations with those rules, all of the operations that we've covered for real vector spaces carry over unchanged.
- Example 2.2
Matrix multiplication is the same, although the scalar arithmetic involves more bookkeeping.
Everything else from prior chapters that we can, we shall also carry over unchanged. For instance, we shall call this
the standard basis for as a vector space over and again denote it .
Section II - Similarity
1 - Definition and Examples
We've defined and to be matrix-equivalent if there are nonsingular matrices and such that . That definition is motivated by this diagram
showing that and both represent but with respect to different pairs of bases. We now specialize that setup to the case where the codomain equals the domain, and where the codomain's basis equals the domain's basis.
To move from the lower left to the lower right we can either go straight over, or up, over, and then down. In matrix terms,
(recall that a representation of composition like this one reads right to left).
- Definition 1.1
The matrices and are similar if there is a nonsingular such that .
Since nonsingular matrices are square, the similar matrices and must be square and of the same size.
- Example 1.2
With these two,
calculation gives that is similar to this matrix.
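Here is one more small similarity computation, with matrices chosen for illustration (it is not the computation of the example above). Taking
\[
P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\qquad
S = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}
\qquad
P^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}
\]
gives
\[
P S P^{-1} = \begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}
\]
so the diagonal matrix \(S\) is similar to this upper triangular matrix.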
- Example 1.3
The only matrix similar to the zero matrix is itself: . The only matrix similar to the identity matrix is itself: .
Since matrix similarity is a special case of matrix equivalence, if two matrices are similar then they are equivalent. What about the converse: must matrix equivalent square matrices be similar? The answer is no. The prior example shows that the similarity classes are different from the matrix equivalence classes, because the matrix equivalence class of the identity consists of all nonsingular matrices of that size. Thus, for instance, these two are matrix equivalent but not similar.
So some matrix equivalence classes split into two or more similarity classes— similarity gives a finer partition than does equivalence. This picture shows some matrix equivalence classes subdivided into similarity classes.
To understand the similarity relation we shall study the similarity classes. We approach this question in the same way that we've studied both the row equivalence and matrix equivalence relations, by finding a canonical form for representatives^{[1]} of the similarity classes, called Jordan form. With this canonical form, we can decide if two matrices are similar by checking whether they reduce to the same representative. We've also seen with both row equivalence and matrix equivalence that a canonical form gives us insight into the ways in which members of the same class are alike (e.g., two identically-sized matrices are matrix equivalent if and only if they have the same rank).
Exercises
- Problem 1
For
check that .
- This exercise is recommended for all readers.
- Problem 2
Example 1.3 shows that the only matrix similar to a zero matrix is itself and that the only matrix similar to the identity is itself.
- Show that the matrix , also, is similar only to itself.
- Is a matrix of the form for some scalar similar only to itself?
- Is a diagonal matrix similar only to itself?
- Problem 3
Show that these matrices are not similar.
- Problem 4
Consider the transformation described by , , and .
- Find where .
- Find where .
- Find the matrix such that .
- This exercise is recommended for all readers.
- Problem 5
Exhibit a nontrivial similarity relationship in this way: let act by
and pick two bases, and represent with respect to then and . Then compute the and to change bases from to and back again.
- Problem 6
Explain Example 1.3 in terms of maps.
- This exercise is recommended for all readers.
- Problem 7
Are there two matrices and that are similar while and are not similar? (Halmos 1958)
- This exercise is recommended for all readers.
- Problem 8
Prove that if two matrices are similar and one is invertible then so is the other.
- This exercise is recommended for all readers.
- Problem 9
Show that similarity is an equivalence relation.
- Problem 10
Consider a matrix representing, with respect to some , reflection across the -axis in . Consider also a matrix representing, with respect to some , reflection across the -axis. Must they be similar?
- Problem 11
Prove that similarity preserves determinants and rank. Does the converse hold?
- Problem 12
Is there a matrix equivalence class with only one matrix similarity class inside? One with infinitely many similarity classes?
- Problem 13
Can two different diagonal matrices be in the same similarity class?
- This exercise is recommended for all readers.
- Problem 14
Prove that if two matrices are similar then their -th powers are similar when . What if ?
- This exercise is recommended for all readers.
- Problem 15
Let be the polynomial . Show that if is similar to then is similar to .
- Problem 16
List all of the matrix equivalence classes of matrices. Also list the similarity classes, and describe which similarity classes are contained inside of each matrix equivalence class.
- Problem 17
Does similarity preserve sums?
- Problem 18
Show that if and are similar matrices then and are also similar.
2 - Diagonalizability
The prior subsection defines the relation of similarity and shows that, although similar matrices are necessarily matrix equivalent, the converse does not hold. Some matrix-equivalence classes break into two or more similarity classes (the nonsingular matrices, for instance). This means that the canonical form for matrix equivalence, a block partial-identity, cannot be used as a canonical form for matrix similarity because the partial-identities cannot be in more than one similarity class, so there are similarity classes without one. This picture illustrates. As earlier in this book, class representatives are shown with stars.
We are developing a canonical form for representatives of the similarity classes. We naturally try to build on our previous work, meaning first that the partial identity matrices should represent the similarity classes into which they fall, and beyond that, that the representatives should be as simple as possible. The simplest extension of the partial-identity form is a diagonal form.
- Definition 2.1
A transformation is diagonalizable if it has a diagonal representation with respect to the same basis for the codomain as for the domain. A diagonalizable matrix is one that is similar to a diagonal matrix: is diagonalizable if there is a nonsingular such that is diagonal.
- Example 2.2
The matrix
is diagonalizable.
- Example 2.3
Not every matrix is diagonalizable. The square of
is the zero matrix. Thus, for any map that represents (with respect to the same basis for the domain as for the codomain), the composition is the zero map. This implies that no such map can be diagonally represented (with respect to any ) because no power of a nonzero diagonal matrix is zero. That is, there is no diagonal matrix in 's similarity class.
That example shows that a diagonal form will not do for a canonical form— we cannot find a diagonal matrix in each matrix similarity class. However, the canonical form that we are developing has the property that if a matrix can be diagonalized then the diagonal matrix is the canonical representative of the similarity class. The next result characterizes which maps can be diagonalized.
- Corollary 2.4
A transformation is diagonalizable if and only if there is a basis and scalars such that for each .
- Proof
This follows from the definition by considering a diagonal representation matrix.
This representation is equivalent to the existence of a basis satisfying the stated conditions simply by the definition of matrix representation.
- Example 2.5
To diagonalize
we take it as the representation of a transformation with respect to the standard basis and we look for a basis such that
that is, such that and .
We are looking for scalars such that this equation
has solutions and , which are not both zero. Rewrite that as a linear system.
In the bottom equation the two numbers multiply to give zero only if at least one of them is zero so there are two possibilities, and . In the possibility, the first equation gives that either or . Since the case of both and is disallowed, we are left looking at the possibility of . With it, the first equation in () is and so associated with are vectors with a second component of zero and a first component that is free.
That is, one solution to () is , and we have a first basis vector.
In the possibility, the first equation in () is , and so associated with are vectors whose second component is the negative of their first component.
Thus, another solution is and a second basis vector is this.
To finish, drawing the similarity diagram
and noting that the matrix is easy leads to this diagonalization.
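The same method handles other matrices. For instance (with a matrix chosen here for illustration), the transformation represented with respect to the standard basis by
\[
T = \begin{pmatrix} 4 & 1 \\ 0 & 2 \end{pmatrix}
\]
has eigenvalue \(4\) with associated vector \(\begin{pmatrix} 1 \\ 0 \end{pmatrix}\) and eigenvalue \(2\) with associated vector \(\begin{pmatrix} 1 \\ -2 \end{pmatrix}\), and taking \(P\) to have those vectors as columns gives
\[
P^{-1} T P = \begin{pmatrix} 1 & 1 \\ 0 & -2 \end{pmatrix}^{-1}\begin{pmatrix} 4 & 1 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & -2 \end{pmatrix} = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}.
\]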
In the next subsection, we will expand on that example by considering more closely the property of Corollary 2.4. This includes seeing another way, the way that we will routinely use, to find the 's.
Exercises
- This exercise is recommended for all readers.
- Problem 1
Repeat Example 2.5 for the matrix from Example 2.2.
- Problem 2
Diagonalize these upper triangular matrices.
- This exercise is recommended for all readers.
- Problem 3
What form do the powers of a diagonal matrix have?
- Problem 4
Give two same-sized diagonal matrices that are not similar. Must any two different diagonal matrices come from different similarity classes?
- Problem 5
Give a nonsingular diagonal matrix. Can a diagonal matrix ever be singular?
- This exercise is recommended for all readers.
- Problem 6
Show that the inverse of a diagonal matrix is the diagonal of the inverses, if no element on that diagonal is zero. What happens when a diagonal entry is zero?
- Problem 7
The equation ending Example 2.5
is a bit jarring because for we must take the first matrix, which is shown as an inverse, and for we take the inverse of the first matrix, so that the two powers cancel and this matrix is shown without a superscript .
- Check that this nicer-appearing equation holds.
- Is the previous item a coincidence? Or can we always switch the and the ?
- Problem 8
Show that the used to diagonalize in Example 2.5 is not unique.
- This exercise is recommended for all readers.
- Problem 10
Diagonalize these.
- Problem 11
We can ask how diagonalization interacts with the matrix operations. Assume that are each diagonalizable. Is diagonalizable for all scalars ? What about ? ?
- This exercise is recommended for all readers.
- Problem 12
Show that matrices of this form are not diagonalizable.
- Problem 13
Show that each of these is diagonalizable.
3 - Eigenvalues and Eigenvectors
In this subsection we will focus on the property of Corollary 2.4.
- Definition 3.1
A transformation has a scalar eigenvalue if there is a nonzero eigenvector such that .
("Eigen" is German for "characteristic of" or "peculiar to"; some authors call these characteristic values and vectors. No authors call them "peculiar".)
- Example 3.2
The projection map
has an eigenvalue of associated with any eigenvector of the form
where and are scalars at least one of which is non-. On the other hand, is not an eigenvalue of since no non- vector is doubled.
That example shows why the "non-" appears in the definition. Disallowing as an eigenvector eliminates trivial eigenvalues.
- Example 3.3
The only transformation on the trivial space is
.
This map has no eigenvalues because there are no non- vectors mapped to a scalar multiple of themselves.
- Example 3.4
Consider the homomorphism given by . The range of is one-dimensional. Thus an application of to a vector in the range will simply rescale that vector: . That is, has an eigenvalue of associated with eigenvectors of the form where .
This map also has an eigenvalue of associated with eigenvectors of the form where .
- Definition 3.5
A square matrix has a scalar eigenvalue associated with the non- eigenvector if .
- Remark 3.6
Although this extension from maps to matrices is obvious, there is a point that must be made. Eigenvalues of a map are also the eigenvalues of matrices representing that map, and so similar matrices have the same eigenvalues. But the eigenvectors are different— similar matrices need not have the same eigenvectors.
For instance, consider again the transformation given by . It has an eigenvalue of associated with eigenvectors of the form where . If we represent with respect to
then is an eigenvalue of , associated with these eigenvectors.
On the other hand, representing with respect to gives
and the eigenvectors of associated with the eigenvalue are these.
Thus similar matrices can have different eigenvectors.
Here is an informal description of what's happening. The underlying transformation doubles the eigenvectors . But when the matrix representing the transformation is then it "assumes" that column vectors are representations with respect to . In contrast, "assumes" that column vectors are representations with respect to . So the vectors that get doubled by each matrix look different.
The next example illustrates the basic tool for finding eigenvectors and eigenvalues.
- Example 3.7
What are the eigenvalues and eigenvectors of this matrix?
To find the scalars such that for non- eigenvectors , bring everything to the left-hand side
and factor . (Note that it says ; the expression doesn't make sense because is a matrix while is a scalar.) This homogeneous linear system
has a non- solution if and only if the matrix is singular. We can determine when that happens.
The eigenvalues are and . To find the associated eigenvectors, plug in each eigenvalue. Plugging in gives
for a scalar parameter ( is non- because eigenvectors must be non-). In the same way, plugging in gives
with .
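Here is one more computation of the same kind, with a matrix chosen for illustration. For
\[
T = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}
\]
the singularity condition is \(\det(T - xI) = (3-x)^2 - 1 = x^2 - 6x + 8 = (x-2)(x-4) = 0\), so the eigenvalues are \(2\) and \(4\). Plugging in \(x = 2\) gives eigenvectors \(a\begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and plugging in \(x = 4\) gives eigenvectors \(a\begin{pmatrix} 1 \\ 1 \end{pmatrix}\), for nonzero scalars \(a\).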
- Example 3.8
If
(here is not a projection map, it is the number ) then
so has eigenvalues of and . To find associated eigenvectors, first plug in for :
for a scalar , and then plug in :
where .
- Definition 3.9
The characteristic polynomial of a square matrix is the determinant of the matrix , where is a variable. The characteristic equation is . The characteristic polynomial of a transformation is the polynomial of any .
Problem 11 checks that the characteristic polynomial of a transformation is well-defined, that is, any choice of basis yields the same polynomial.
- Lemma 3.10
A linear transformation on a nontrivial vector space has at least one eigenvalue.
- Proof
Any root of the characteristic polynomial is an eigenvalue. Over the complex numbers, any polynomial of degree one or greater has a root. (This is the reason that in this chapter we've gone to scalars that are complex.)
Notice the familiar form of the sets of eigenvectors in the above examples.
- Definition 3.11
The eigenspace of a transformation associated with the eigenvalue is . The eigenspace of a matrix is defined analogously.
- Lemma 3.12
An eigenspace is a subspace.
- Proof
An eigenspace must be nonempty— for one thing it contains the zero vector— and so we need only check closure. Take vectors from , to show that any linear combination is in
(the second equality holds even if any is since ).
- Example 3.13
In Example 3.8 the eigenspace associated with the eigenvalue and the eigenspace associated with the eigenvalue are these.
- Remark 3.15
The characteristic equation is so in some sense is an eigenvalue "twice". However there are not "twice" as many eigenvectors, in that the dimension of the eigenspace is one, not two. The next example shows a case where a number, , is a double root of the characteristic equation and the dimension of the associated eigenspace is two.
- Example 3.16
With respect to the standard bases, this matrix
represents projection.
Its eigenspace associated with the eigenvalue and its eigenspace associated with the eigenvalue are easy to find.
By the lemma, if two eigenvectors and are associated with the same eigenvalue then any linear combination of those two is also an eigenvector associated with that same eigenvalue. But, if two eigenvectors and are associated with different eigenvalues then the sum need not be related to the eigenvalue of either one. In fact, just the opposite. If the eigenvalues are different then the eigenvectors are not linearly related.
- Theorem 3.17
For any set of distinct eigenvalues of a map or matrix, a set of associated eigenvectors, one per eigenvalue, is linearly independent.
- Proof
We will use induction on the number of eigenvalues. If there is no eigenvalue or only one eigenvalue then the set of associated eigenvectors is empty or is a singleton set with a non- member, and in either case is linearly independent.
For induction, assume that the theorem is true for any set of distinct eigenvalues, suppose that are distinct eigenvalues, and let be associated eigenvectors. If then after multiplying both sides of the displayed equation by , applying the map or matrix to both sides of the displayed equation, and subtracting the first result from the second, we have this.
The induction hypothesis now applies: . Thus, as all the eigenvalues are distinct, are all . Finally, now must be because we are left with the equation .
- Example 3.18
The eigenvalues of
are distinct: , , and . A set of associated eigenvectors like
is linearly independent.
- Corollary 3.19
An matrix with distinct eigenvalues is diagonalizable.
- Proof
Form a basis of eigenvectors. Apply Corollary 2.4.
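A quick numerical check of this corollary can be run in Python with numpy (this sketch is an illustration only; numpy and the matrix here are not part of the text).
# A matrix with distinct eigenvalues is diagonalized by a matrix of its eigenvectors.
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, -3.0]])        # the eigenvalues 2 and -3 are distinct
values, P = np.linalg.eig(T)       # the columns of P are eigenvectors
D = np.linalg.inv(P) @ T @ P
print(np.round(D, 10))             # diagonal, with the eigenvalues on the diagonal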
Exercises
- Problem 1
For each, find the characteristic polynomial and the eigenvalues.
- This exercise is recommended for all readers.
- Problem 2
For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors.
- Problem 3
Find the characteristic equation, and the eigenvalues and associated eigenvectors for this matrix. Hint. The eigenvalues are complex.
- Problem 4
Find the characteristic polynomial, the eigenvalues, and the associated eigenvectors of this matrix.
- This exercise is recommended for all readers.
- Problem 5
For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors.
- This exercise is recommended for all readers.
- Problem 6
Let be
Find its eigenvalues and the associated eigenvectors.
- Problem 7
Find the eigenvalues and eigenvectors of this map .
- This exercise is recommended for all readers.
- Problem 8
Find the eigenvalues and associated eigenvectors of the differentiation operator .
- Problem 9
- Prove that
the eigenvalues of a triangular matrix (upper or lower triangular) are the entries on the diagonal.
- This exercise is recommended for all readers.
- Problem 10
Find the formula for the characteristic polynomial of a matrix.
- Problem 11
Prove that the characteristic polynomial of a transformation is well-defined.
- This exercise is recommended for all readers.
- Problem 12
- Can any non- vector in any nontrivial vector space be an eigenvector? That is, given a from a nontrivial , is there a transformation and a scalar such that ?
- Given a scalar , can any non- vector in any nontrivial vector space be an eigenvector associated with the eigenvalue ?
- This exercise is recommended for all readers.
- Problem 13
Suppose that and . Prove that the eigenvectors of associated with are the non- vectors in the kernel of the map represented (with respect to the same bases) by .
- Problem 14
Prove that if are all integers and then
has integral eigenvalues, namely and .
- This exercise is recommended for all readers.
- Problem 15
Prove that if is nonsingular and has eigenvalues then has eigenvalues . Is the converse true?
- This exercise is recommended for all readers.
- Problem 16
Suppose that is and are scalars.
- Prove that if has the eigenvalue with an associated eigenvector then is an eigenvector of associated with eigenvalue .
- Prove that if is diagonalizable then so is .
- This exercise is recommended for all readers.
- Problem 17
Show that is an eigenvalue of if and only if the map represented by is not an isomorphism.
- Problem 18
- Show that if is an eigenvalue of then is an eigenvalue of .
- What is wrong with this proof generalizing that? "If is an eigenvalue of and is an eigenvalue for , then is an eigenvalue for , for, if and then "?
- Problem 19
Do matrix-equivalent matrices have the same eigenvalues?
- Problem 20
Show that a square matrix with real entries and an odd number of rows has at least one real eigenvalue.
- Problem 21
Diagonalize.
- Problem 22
Suppose that is a nonsingular matrix. Show that the similarity transformation map sending is an isomorphism.
- ? Problem 23
Show that if is an square matrix and each row (column) sums to then is a characteristic root of . (Morrison 1967)
Section III - Nilpotence
The goal of this chapter is to show that every square matrix is similar to one that is a sum of two kinds of simple matrices. The prior section focused on the first kind, diagonal matrices. We now consider the other kind.
1 - Self-Composition
This subsection is optional, although it is necessary for later material in this section and in the next one.
A linear transformation , because it has the same domain and codomain, can be iterated.^{[2]} That is, compositions of with itself such as and are defined.
Note that this power notation for the linear transformation functions dovetails with the notation that we've used earlier for their squared matrix representations because if then .
- Example 1.1
For the derivative map given by
the second power is the second derivative
the third power is the third derivative
and any higher power is the zero map.
- Example 1.2
This transformation of the space of matrices
has this second power
and this third power.
After that, and , etc.
These examples suggest that on iteration more and more zeros appear until there is a settling down. The next result makes this precise.
- Lemma 1.3
For any transformation , the rangespaces of the powers form a descending chain
and the nullspaces form an ascending chain.
Further, there is a such that for powers less than the subsets are proper (if then and ), while for powers greater than the sets are equal (if then and ).
- Proof
We will do the rangespace half and leave the rest for Problem 6. Recall, however, that for any map the dimension of its rangespace plus the dimension of its nullspace equals the dimension of its domain. So if the rangespaces shrink then the nullspaces must grow.
That the rangespaces form chains is clear because if , so that , then and so . To verify the "further" property, first observe that if any pair of rangespaces in the chain are equal then all subsequent ones are also equal , etc. This is because if is the same map, with the same domain, as and it therefore has the same range: (and induction shows that it holds for all higher powers). So if the chain of rangespaces ever stops being strictly decreasing then it is stable from that point onward.
But the chain must stop decreasing. Each rangespace is a subspace of the one before it. For it to be a proper subspace it must be of strictly lower dimension (see Problem 4). These spaces are finite-dimensional and so the chain can fall for only finitely-many steps, that is, the power is at most the dimension of .
- Example 1.4
The derivative map of Example 1.1 has this chain of rangespaces
and this chain of nullspaces.
- Example 1.5
The transformation projecting onto the first two coordinates
has and .
- Example 1.6
Let be the map As the lemma describes, on iteration the rangespace shrinks
and then stabilizes , while the nullspace grows
and then stabilizes .
This graph illustrates Lemma 1.3. The horizontal axis gives the power of a transformation. The vertical axis gives the dimension of the rangespace of as the distance above zero— and thus also shows the dimension of the nullspace as the distance below the gray horizontal line, because the two add to the dimension of the domain.
As sketched, on iteration the rank falls and with it the nullity grows until the two reach a steady state. This state must be reached by the -th iterate. The steady state's distance above zero is the dimension of the generalized rangespace and its distance below is the dimension of the generalized nullspace.
- Definition 1.7
Let be a transformation on an -dimensional space. The generalized rangespace (or the closure of the rangespace) is The generalized nullspace (or the closure of the nullspace) is .
Exercises
- Problem 1
Give the chains of rangespaces and nullspaces for the zero and identity transformations.
- Problem 2
For each map, give the chain of rangespaces and the chain of nullspaces, and the generalized rangespace and the generalized nullspace.
- ,
- ,
- ,
- ,
- Problem 3
Prove that function composition is associative and so we can write without specifying a grouping.
- Problem 4
Check that a subspace must be of dimension less than or equal to the dimension of its superspace. Check that if the subspace is proper (the subspace does not equal the superspace) then the dimension is strictly less. (This is used in the proof of Lemma 1.3.)
- Problem 5
Prove that the generalized rangespace is the entire space, and the generalized nullspace is trivial, if the transformation is nonsingular. Is this "only if" also?
- Problem 6
Verify the nullspace half of Lemma 1.3.
- Problem 7
Give an example of a transformation on a three dimensional space whose range has dimension two. What is its nullspace? Iterate your example until the rangespace and nullspace stabilize.
- Problem 8
Show that the rangespace and nullspace of a linear transformation need not be disjoint. Are they ever disjoint?
2 - Strings
This subsection is optional, and requires material from the optional Direct Sum subsection.
The prior subsection shows that as increases, the dimensions of the 's fall while the dimensions of the 's rise, in such a way that this rank and nullity split the dimension of . Can we say more; do the two split a basis— is ?
The answer is yes for the smallest power since . The answer is also yes at the other extreme.
- Lemma 2.1
Where is a linear transformation, the space is the direct sum . That is, both and .
- Proof
We will verify the second sentence, which is equivalent to the first. The first clause, that the dimension of the domain of equals the rank of plus the nullity of , holds for any transformation and so we need only verify the second clause.
Assume that , to prove that is . Because is in the nullspace, . On the other hand, because , the map is a dimension-preserving homomorphism and therefore is one-to-one. A composition of one-to-one maps is one-to-one, and so is one-to-one. But now— because only is sent by a one-to-one linear map to — the fact that implies that .
- Note 2.2
Technically we should distinguish the map from the map because the domains or codomains might differ. The second one is said to be the restriction^{[3]} of to . We shall use later a point from that proof about the restriction map, namely that it is nonsingular.
In contrast to the and cases, for intermediate powers the space might not be the direct sum of and . The next example shows that the two can have a nontrivial intersection.
- Example 2.3
Consider the transformation of defined by this action on the elements of the standard basis.
The vector
is in both the rangespace and nullspace. Another way to depict this map's action is with a string.
- Example 2.4
A map whose action on is given by the string
has equal to the span , has , and has . The matrix representation is all zeros except for some subdiagonal ones.
- Example 2.5
Transformations can act via more than one string. A transformation acting on a basis by
is represented by a matrix that is all zeros except for blocks of subdiagonal ones
(the lines just visually organize the blocks).
In those three examples all vectors are eventually transformed to zero.
- Definition 2.6
A nilpotent transformation is one with a power that is the zero map. A nilpotent matrix is one with a power that is the zero matrix. In either case, the least such power is the index of nilpotency.
- Example 2.7
In Example 2.3 the index of nilpotency is two. In Example 2.4 it is four. In Example 2.5 it is three.
- Example 2.8
The differentiation map is nilpotent of index three since the third derivative of any quadratic polynomial is zero. This map's action is described by the string and taking the basis gives this representation.
Not all nilpotent matrices are all zeros except for blocks of subdiagonal ones.
- Example 2.9
With the matrix from Example 2.4, and this four-vector basis
a change of basis operation produces this representation with respect to .
The new matrix is nilpotent; its fourth power is the zero matrix since
and is the zero matrix.
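The matrices of this example are not reproduced here, so the following Octave sketch makes the same point with a hypothetical matrix: conjugating a nilpotent matrix by an invertible change of basis matrix gives a matrix that is again nilpotent, with the same index, even though it need not be all zeros except for subdiagonal ones.
S = [0 0 0;       % nilpotent of index three: one string of subdiagonal ones
     1 0 0;
     0 1 0];
P = [1 1 0;       % an invertible change of basis matrix, chosen arbitrarily
     0 1 1;
     0 0 1];
T = P*S*inv(P)    % similar to S but not in subdiagonal-ones form
T^2               % not yet the zero matrix
T^3               % the zero matrix, so T also has index of nilpotency three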
The goal of this subsection is Theorem 2.13, which shows that the prior example is prototypical in that every nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones.
- Definition 2.10
Let be a nilpotent transformation on . A -string generated by is a sequence . This sequence has length . A -string basis is a basis that is a concatenation of -strings.
- Example 2.11
In Example 2.5, the -strings and , of length three and two, can be concatenated to make a basis for the domain of .
- Lemma 2.12
If a space has a -string basis then the longest string in it has length equal to the index of nilpotency of .
- Proof
Suppose not. Those strings cannot be longer; if the index is then sends any vector— including those starting the string— to . So suppose instead that there is a transformation of index on some space, such that the space has a -string basis where all of the strings are shorter than length . Because has index , there is a vector such that . Represent as a linear combination of basis elements and apply . We are supposing that sends each basis element to but that it does not send to . That is impossible.
We shall show that every nilpotent map has an associated string basis. Then our goal theorem, that every nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones, is immediate, as in Example 2.5.
Looking for a counterexample, a nilpotent map without an associated basis of disjoint strings, will suggest the idea for the proof. Consider the map with this action.
Even after omitting the zero vector, these three strings aren't disjoint, but that doesn't end hope of finding a -string basis. It only means that will not do for the string basis.
To find a basis that will do, we first find the number and lengths of its strings. Since 's index of nilpotency is two, Lemma 2.12 says that at least one string in the basis has length two. Thus the map must act on a string basis in one of these two ways.
Now, the key point. A transformation with the left-hand action has a nullspace of dimension three since that's how many basis vectors are sent to zero. A transformation with the right-hand action has a nullspace of dimension four. Using the matrix representation above, calculation of 's nullspace
shows that it is three-dimensional, meaning that we want the left-hand action.
To produce a string basis, first pick and from
(other choices are possible, just be sure that is linearly independent). For pick a vector from that is not in the span of .
Finally, take and such that and .
Now, with respect to , the matrix of is as desired.
- Theorem 2.13
Any nilpotent transformation is associated with a -string basis. While the basis is not unique, the number and the length of the strings are determined by .
This illustrates the proof. Basis vectors are categorized into kind , kind , and kind . They are also shown as squares or circles, according to whether they are in the nullspace or not.
- Proof
Fix a vector space ; we will argue by induction on the index of nilpotency of . If that index is then is the zero map and any basis is a string basis , ..., . For the inductive step, assume that the theorem holds for any transformation with an index of nilpotency between and and consider the index case.
First observe that the restriction to the rangespace is also nilpotent, of index . Apply the inductive hypothesis to get a string basis for , where the number and length of the strings is determined by .
(In the illustration these are the basis vectors of kind , so there are strings shown with this kind of basis vector.)
Second, note that taking the final nonzero vector in each string gives a basis for . (These are illustrated with 's in squares.) For, a member of is mapped to zero if and only if it is a linear combination of those basis vectors that are mapped to zero. Extend to a basis for all of .
(The 's are the vectors of kind so that is the set of squares.) While many choices are possible for the 's, their number is determined by the map as it is the dimension of minus the dimension of .
Finally, is a basis for because any sum of something in the rangespace with something in the nullspace can be represented using elements of for the rangespace part and elements of for the part from the nullspace. Note that
and so can be extended to a basis for all of by the addition of more vectors. Specifically, remember that each of is in , and extend with vectors such that . (In the illustration, these are the 's.) The check that linear independence is preserved by this extension is Problem 13.
- Corollary 2.14
Every nilpotent matrix is similar to a matrix that is all zeros except for blocks of subdiagonal ones. That is, every nilpotent map is represented with respect to some basis by such a matrix.
This form is unique in the sense that if a nilpotent matrix is similar to two such matrices then those two simply have their blocks ordered differently. Thus this is a canonical form for the similarity classes of nilpotent matrices provided that we order the blocks, say, from longest to shortest.
- Example 2.15
The matrix
has an index of nilpotency of two, as this calculation shows.
The calculation also describes how a map represented by must act on any string basis. With one map application the nullspace has dimension one and so one vector of the basis is sent to zero. On a second application, the nullspace has dimension two and so the other basis vector is sent to zero. Thus, the action of the map is and the canonical form of the matrix is this.
We can exhibit such a -string basis and the change of basis matrices witnessing the matrix similarity. For the basis, take to represent with respect to the standard bases, pick a and also pick a so that .
(If we take to be a representative with respect to some nonstandard bases then this picking step is just more messy.) Recall the similarity diagram.
The canonical form equals , where
and the verification of the matrix calculation is routine.
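The matrices of the example are not shown here, so this Octave sketch carries out the same routine verification on a hypothetical index-two nilpotent matrix: pick a vector outside the nullspace, take its image as the second string vector, and check that the change of basis produces the canonical form with a single subdiagonal one.
N = [1 -1;                 % a hypothetical nilpotent matrix; N^2 is the zero matrix
     1 -1];
b1 = [1; 0];               % any vector not in the nullspace of N
b2 = N*b1;                 % the rest of the string: b1 |-> b2 |-> zero
Q = [b1 b2];               % the columns are the string basis
inv(Q)*N*Q                 % comes out as [0 0; 1 0], the canonical form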
- Example 2.16
The matrix
is nilpotent. These calculations show the nullspaces growing.
That table shows that any string basis must satisfy: the nullspace after one map application has dimension two so two basis vectors are sent directly to zero, the nullspace after the second application has dimension four so two additional basis vectors are sent to zero by the second iteration, and the nullspace after three applications is of dimension five so the final basis vector is sent to zero in three hops.
To produce such a basis, first pick two independent vectors from
then add such that and
and finish by adding such that .
Exercises
- This exercise is recommended for all readers.
- Problem 1
What is the index of nilpotency of the left-shift operator, here acting on the space of triples of reals?
- This exercise is recommended for all readers.
- Problem 2
For each string basis state the index of nilpotency and give the dimension of the rangespace and nullspace of each iteration of the nilpotent map.
Also give the canonical form of the matrix.
- Problem 3
Decide which of these matrices are nilpotent.
- This exercise is recommended for all readers.
- Problem 4
Find the canonical form of this matrix.
- This exercise is recommended for all readers.
- Problem 5
Consider the matrix from Example 2.16.
- Use the action of the map on the string basis to give the canonical form.
- Find the change of basis matrices that bring the matrix to canonical form.
- Use the answer in the prior item to check the answer in the first item.
- This exercise is recommended for all readers.
- Problem 6
Each of these matrices is nilpotent.
Put each in canonical form.
- Problem 7
Describe the effect of left or right multiplication by a matrix that is in the canonical form for nilpotent matrices.
- Problem 8
Is nilpotence invariant under similarity? That is, must a matrix similar to a nilpotent matrix also be nilpotent? If so, with the same index?
- This exercise is recommended for all readers.
- Problem 9
Show that the only eigenvalue of a nilpotent matrix is zero.
- Problem 10
Is there a nilpotent transformation of index three on a two-dimensional space?
- Problem 11
In the proof of Theorem 2.13, why isn't the proof's base case that the index of nilpotency is zero?
- This exercise is recommended for all readers.
- Problem 12
Let be a linear transformation and suppose is such that but . Consider the -string .
- Prove that is a transformation on the span of the set of vectors in the string, that is, prove that restricted to the span has a range that is a subset of the span. We say that the span is a -invariant subspace.
- Prove that the restriction is nilpotent.
- Prove that the -string is linearly independent and so is a basis for its span.
- Represent the restriction map with respect to the -string basis.
- Problem 13
Finish the proof of Theorem 2.13.
- Problem 14
Show that the terms "nilpotent transformation" and "nilpotent matrix", as given in Definition 2.6, fit with each other: a map is nilpotent if and only if it is represented by a nilpotent matrix. (Is it that a transformation is nilpotent if and only if there is a basis such that the map's representation with respect to that basis is a nilpotent matrix, or that any representation is a nilpotent matrix?)
- Problem 15
Let be nilpotent of index four. How big can the rangespace of be?
- Problem 16
Recall that similar matrices have the same eigenvalues. Show that the converse does not hold.
- Problem 17
Prove a nilpotent matrix is similar to one that is all zeros except for blocks of super-diagonal ones.
- This exercise is recommended for all readers.
- Problem 18
Prove that if a transformation has the same rangespace as nullspace, then the dimension of its domain is even.
- Problem 19
Prove that if two nilpotent matrices commute then their product and sum are also nilpotent.
- Problem 20
Consider the transformation of given by where is an matrix. Prove that if is nilpotent then so is .
- Problem 21
Show that if is nilpotent then is invertible. Is that "only if" also?
References
Section IV - Jordan Form
This section uses material from three optional subsections: Direct Sum, Determinants Exist, and Other Formulas for the Determinant.
The chapter on linear maps shows that every can be represented by a partial-identity matrix with respect to some bases and . This chapter revisits this issue in the special case that the map is a linear transformation . Of course, the general result still applies but with the codomain and domain equal we naturally ask about having the two bases also be equal. That is, we want a canonical form to represent transformations as .
After a brief review section, we began by noting that a block partial identity form matrix is not always obtainable in this case. We therefore considered the natural generalization, diagonal matrices, and showed that if its eigenvalues are distinct then a map or matrix can be diagonalized. But we also gave an example of a matrix that cannot be diagonalized and in the section prior to this one we developed that example. We showed that a linear map is nilpotent— if we take higher and higher powers of the map or matrix then we eventually get the zero map or matrix— if and only if there is a basis on which it acts via disjoint strings. That led to a canonical form for nilpotent matrices.
Now, this section concludes the chapter. We will show that the two cases we've studied are exhaustive in that for any linear transformation there is a basis such that the matrix representation is the sum of a diagonal matrix and a nilpotent matrix in its canonical form.
1 - Polynomials of Maps and Matrices
Recall that the set of square matrices is a vector space under entry-by-entry addition and scalar multiplication and that this space has dimension . Thus, for any matrix the -member set is linearly dependent and so there are scalars such that is the zero matrix.
- Remark 1.1
This observation is small but important. It says that every transformation exhibits a generalized nilpotency: the powers of a square matrix cannot climb forever without a "repeat".
- Example 1.2
Rotation of plane vectors radians counterclockwise is represented with respect to the standard basis by
and verifying that equals the zero matrix is easy.
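The angle and the polynomial of the example are not reproduced here, but the verification is the kind of computation that a system like Octave does instantly. Any rotation of the plane by an angle theta satisfies its characteristic polynomial x^2 - 2cos(theta)x + 1, as this sketch checks for one sample angle.
theta = pi/6;                          % a sample angle
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];          % counterclockwise rotation by theta
R^2 - 2*cos(theta)*R + eye(2)          % the zero matrix, up to rounding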
- Definition 1.3
For any polynomial , where is a linear transformation then is the transformation on the same space and where is a square matrix then is the matrix .
- Remark 1.4
If, for instance, , then most authors write in the identity matrix: . But most authors don't write in the identity map: . In this book we shall also observe this convention.
Of course, if then , which follows from the relationships , and , and .
As Example 1.2 shows, there may be polynomials of degree smaller than that zero the map or matrix.
- Definition 1.5
The minimal polynomial of a transformation or a square matrix is the polynomial of least degree and with leading coefficient such that is the zero map or is the zero matrix.
A minimal polynomial always exists by the observation opening this subsection. A minimal polynomial is unique by the "with leading coefficient " clause. This is because if there are two polynomials and that are both of the minimal degree to make the map or matrix zero (and thus are of equal degree), and both have leading 's, then their difference has a smaller degree than either and still sends the map or matrix to zero. Thus is the zero polynomial and the two are equal. (The leading coefficient requirement also prevents a minimal polynomial from being the zero polynomial.)
- Example 1.6
We can see that is minimal for the matrix of Example 1.2 by computing the powers of up to the power .
Next, put equal to the zero matrix
and use Gauss' method.
Setting , , and to zero forces and to also come out as zero. To get a leading one, the most we can do is to set and to zero. Thus the minimal polynomial is quadratic.
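The system of that example can be set up mechanically: write each power of the matrix as a column of numbers and look for a dependence. In the Octave sketch below the matrix is a stand-in rotation (the example's own matrix is not reproduced here), and since we already expect a quadratic minimal polynomial only the powers up to the square are stacked; had the null space come out trivial we would include higher powers.
theta = pi/6;
T = [cos(theta) -sin(theta); sin(theta) cos(theta)];   % stand-in matrix
I = eye(2);  T2 = T^2;
M = [I(:) T(:) T2(:)];               % columns are I, T, T^2 written out as vectors
c = null(M);                         % a dependence c(1)*I + c(2)*T + c(3)*T^2 = 0
c = c/c(end)                         % scale so the leading coefficient is one
% the minimal polynomial is x^2 + c(2)*x + c(1); for this angle, x^2 - sqrt(3)*x + 1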
Using the method of that example to find the minimal polynomial of a matrix would mean doing Gaussian reduction on a system with nine equations in ten unknowns. We shall develop an alternative. To begin, note that we can break a polynomial of a map or a matrix into its components.
- Lemma 1.7
Suppose that the polynomial factors as . If is a linear transformation then these two are equal maps.
Consequently, if is a square matrix then and are equal matrices.
- Proof
This argument is by induction on the degree of the polynomial. The cases where the polynomial is of degree and are clear. The full induction argument is Problem 21 but the degree two case gives its sense.
A quadratic polynomial factors into two linear terms (the roots and might be equal). We can check that substituting for in the factored and unfactored versions gives the same map.
The third equality holds because the scalar comes out of the second term, as is linear.
In particular, if a minimal polynomial for a transformation factors as then is the zero map. Since sends every vector to zero, at least one of the maps sends some nonzero vectors to zero. So, too, in the matrix case— if is minimal for then is the zero matrix and at least one of the matrices sends some nonzero vectors to zero. Rewording both cases: at least some of the are eigenvalues. (See Problem 17.)
Recall how we have earlier found eigenvalues. We have looked for such that by considering the equation and computing the determinant of the matrix . That determinant is a polynomial in , the characteristic polynomial, whose roots are the eigenvalues. The major result of this subsection, the next result, is that there is a connection between this characteristic polynomial and the minimal polynomial. This result expands on the prior paragraph's insight that some roots of the minimal polynomial are eigenvalues by asserting that every root of the minimal polynomial is an eigenvalue and further that every eigenvalue is a root of the minimal polynomial (this is because it says "" and not just "").
- Theorem 1.8 (Cayley-Hamilton)
If the characteristic polynomial of a transformation or square matrix factors into
then its minimal polynomial factors into
where for each between and .
The proof takes up the next three lemmas. Although they are stated only in matrix terms, they apply equally well to maps. We give the matrix version only because it is convenient for the first proof.
The first result is the key— some authors call it the Cayley-Hamilton Theorem and call Theorem 1.8 above a corollary. For the proof, observe that a matrix of polynomials can be thought of as a polynomial with matrix coefficients.
- Lemma 1.9
If is a square matrix with characteristic polynomial then is the zero matrix.
- Proof
Let be , the matrix whose determinant is the characteristic polynomial .
Recall that the product of the adjoint of a matrix with the matrix itself is the determinant of that matrix times the identity.
The entries of are polynomials, each of degree at most since the minors of a matrix drop a row and column. Rewrite it, as suggested above, as where each is a matrix of scalars. The left and right ends of equation () above give this.
Equate the coefficients of , the coefficients of , etc.
Multiply (from the right) both sides of the first equation by , both sides of the second equation by , etc. Add. The result on the left is , and the result on the right is the zero matrix.
We sometimes refer to that lemma by saying that a matrix or map satisfies its characteristic polynomial.
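Octave can spot-check the lemma on any particular matrix: the function poly returns the coefficients of the characteristic polynomial and polyvalm evaluates a polynomial at a matrix argument. The matrix below is an arbitrary sample, not one from the text.
A = [2 1 0;
     0 2 0;
     1 1 3];              % an arbitrary sample matrix
p = poly(A)               % coefficients of the characteristic polynomial of A
polyvalm(p, A)            % the zero matrix, up to rounding error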
- Lemma 1.10
Where is a polynomial, if is the zero matrix then is divisible by the minimal polynomial of . That is, any polynomial satisfied by is divisible by 's minimal polynomial.
- Proof
Let be minimal for . The Division Theorem for Polynomials gives where the degree of is strictly less than the degree of . Plugging in shows that is the zero matrix, because satisfies both and . That contradicts the minimality of unless is the zero polynomial.
Combining the prior two lemmas gives that the minimal polynomial divides the characteristic polynomial. Thus, any root of the minimal polynomial is also a root of the characteristic polynomial. That is, so far we have that if then must have the form where each is less than or equal to . The proof of the Cayley-Hamilton Theorem is finished by showing that in fact the characteristic polynomial has no extra roots , etc.
- Lemma 1.11
Each linear factor of the characteristic polynomial of a square matrix is also a linear factor of the minimal polynomial.
- Proof
Let be a square matrix with minimal polynomial and assume that is a factor of the characteristic polynomial of , that is, assume that is an eigenvalue of . We must show that is a factor of , that is, that .
In general, where is associated with the eigenvector , for any polynomial function , application of the matrix to equals the result of multiplying by the scalar . (For instance, if has eigenvalue associated with the eigenvector and then .) Now, as is the zero matrix, and therefore .
- Example 1.12
We can use the Cayley-Hamilton Theorem to help find the minimal polynomial of this matrix.
First, its characteristic polynomial can be found with the usual determinant. Now, the Cayley-Hamilton Theorem says that 's minimal polynomial is either or or . We can decide among the choices just by computing:
and
and so .
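The same decide-by-computing approach is easy in Octave. The example's matrix is not reproduced here, so the sketch below uses a hypothetical matrix whose characteristic polynomial is (x-1)^2 (x-2); by the Cayley-Hamilton Theorem the minimal polynomial is either (x-1)(x-2) or (x-1)^2 (x-2), and one multiplication settles which.
A = [1 0 0;
     0 1 0;
     0 0 2];                    % hypothetical; characteristic polynomial (x-1)^2 (x-2)
(A - eye(3))*(A - 2*eye(3))     % already the zero matrix, so the minimal
                                % polynomial is (x-1)(x-2); had this been nonzero
                                % we would test (x-1)^2 (x-2) next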
Exercises
- This exercise is recommended for all readers.
- Problem 1
What are the possible minimal polynomials if a matrix has the given characteristic polynomial?
What is the degree of each possibility?
- This exercise is recommended for all readers.
- Problem 2
Find the minimal polynomial of each matrix.
- Problem 3
Find the minimal polynomial of this matrix.
- This exercise is recommended for all readers.
- Problem 4
What is the minimal polynomial of the differentiation operator on ?
- This exercise is recommended for all readers.
- Problem 5
Find the minimal polynomial of matrices of this form
where the scalar is fixed (i.e., is not a variable).
- Problem 6
What is the minimal polynomial of the transformation of that sends to ?
- Problem 7
What is the minimal polynomial of the map projecting onto the first two coordinates?
- Problem 8
Find a matrix whose minimal polynomial is .
- Problem 9
What is wrong with this claimed proof of Lemma 1.9: "if then "? (Cullen 1990)
- Problem 10
Verify Lemma 1.9 for matrices by direct calculation.
- This exercise is recommended for all readers.
- Problem 11
Prove that the minimal polynomial of an matrix has degree at most (not as might be guessed from this subsection's opening). Verify that this maximum, , can happen.
- This exercise is recommended for all readers.
- Problem 12
The only eigenvalue of a nilpotent map is zero. Show that the converse statement holds.
- Problem 13
What is the minimal polynomial of a zero map or matrix? Of an identity map or matrix?
- This exercise is recommended for all readers.
- Problem 14
Interpret the minimal polynomial of Example 1.2 geometrically.
- Problem 15
What is the minimal polynomial of a diagonal matrix?
- This exercise is recommended for all readers.
- Problem 16
A projection is any transformation such that . (For instance, the transformation of the plane projecting each vector onto its first coordinate will, if done twice, result in the same value as if it is done just once.) What is the minimal polynomial of a projection?
- Problem 17
The first two items of this question are review.
- Prove that the composition of one-to-one maps is one-to-one.
- Prove that if a linear map is not one-to-one then at least one nonzero vector from the domain is sent to the zero vector in the codomain.
- Verify the statement, excerpted here, that precedes Theorem 1.8.
... if a minimal polynomial for a transformation factors as then is the zero map. Since sends every vector to zero, at least one of the maps sends some nonzero vectors to zero. ... Rewording ...: at least some of the are eigenvalues.
- Problem 18
True or false: for a transformation on an dimensional space, if the minimal polynomial has degree then the map is diagonalizable.
- Problem 19
Let be a polynomial. Prove that if and are similar matrices then is similar to .
- Now show that similar matrices have the same characteristic polynomial.
- Show that similar matrices have the same minimal polynomial.
- Decide if these are similar.
- Problem 20
- Show that a matrix is invertible if and only if the constant term in its minimal polynomial is not .
- Show that if a square matrix is not invertible then there is a nonzero matrix such that and both equal the zero matrix.
- This exercise is recommended for all readers.
- Problem 21
- Finish the proof of Lemma 1.7.
- Give an example to show that the result does not hold if is not linear.
- Problem 22
Any transformation or square matrix has a minimal polynomial. Does the converse hold?
2 - Jordan Canonical Form
This subsection moves from the canonical form for nilpotent matrices to the one for all matrices.
We have shown that if a map is nilpotent then all of its eigenvalues are zero. We can now prove the converse.
- Lemma 2.1
A linear transformation whose only eigenvalue is zero is nilpotent.
- Proof
If a transformation on an -dimensional space has only the single eigenvalue of zero then its characteristic polynomial is . The Cayley-Hamilton Theorem says that a map satisfies its characteristic polynomial so is the zero map. Thus is nilpotent.
We have a canonical form for nilpotent matrices, that is, for each matrix whose single eigenvalue is zero: each such matrix is similar to one that is all zeroes except for blocks of subdiagonal ones. (To make this representation unique we can fix some arrangement of the blocks, say, from longest to shortest.) We next extend this to all single-eigenvalue matrices.
Observe that if 's only eigenvalue is then 's only eigenvalue is because if and only if . The natural way to extend the results for nilpotent matrices is to represent in the canonical form , and try to use that to get a simple representation for . The next result says that this try works.
- Lemma 2.2
If the matrices and are similar then and are also similar, via the same change of basis matrices.
- Proof
With we have since the diagonal matrix commutes with anything, and so . Therefore , as required.
- Example 2.3
The characteristic polynomial of
is and so has only the single eigenvalue . Thus for
the only eigenvalue is , and is nilpotent. The null spaces are routine to find; to ease this computation we take to represent the transformation with respect to the standard basis (we shall maintain this convention for the rest of the chapter).
The dimensions of these null spaces show that the action of an associated map on a string basis is . Thus, the canonical form for with one choice for a string basis is
and by Lemma 2.2, is similar to this matrix.
We can produce the similarity computation. Recall from the Nilpotence section how to find the change of basis matrices and to express as . The similarity diagram
describes that to move from the lower left to the upper left we multiply by
and to move from the upper right to the lower right we multiply by this matrix.
So the similarity is expressed by
which is easily checked.
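Because the matrices of this example are not reproduced here, the following Octave sketch performs the same similarity computation on a hypothetical matrix with the single eigenvalue 2: subtract 2I, find a string basis for the nilpotent part, and the change of basis brings the original matrix to its Jordan form (Lemma 2.2).
T = [3 -1;
     1  1];                  % hypothetical; characteristic polynomial (x-2)^2
N = T - 2*eye(2);            % the nilpotent part; N^2 is the zero matrix
b1 = [1; 0];                 % a vector outside the nullspace of N
Q = [b1 N*b1];               % string basis for N
inv(Q)*T*Q                   % the Jordan form [2 0; 1 2]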
- Example 2.4
This matrix has characteristic polynomial
and so has the single eigenvalue . The nullities of are: the null space of has dimension two, the null space of has dimension three, and the null space of has dimension four. Thus, has the action on a string basis of and . This gives the canonical form for , which in turn gives the form for .
An array that is all zeroes, except for some number down the diagonal and blocks of subdiagonal ones, is a Jordan block. We have shown that Jordan block matrices are canonical representatives of the similarity classes of single-eigenvalue matrices.
- Example 2.5
The matrices whose only eigenvalue is separate into three similarity classes. The three classes have these canonical representatives.
In particular, this matrix
belongs to the similarity class represented by the middle one, because we have adopted the convention of ordering the blocks of subdiagonal ones from the longest block to the shortest.
We will now finish the program of this chapter by extending this work to cover maps and matrices with multiple eigenvalues. The best possibility for general maps and matrices would be if we could break them into a part involving their first eigenvalue (which we represent using its Jordan block), a part with , etc.
This ideal is in fact what happens. For any transformation , we shall break the space into the direct sum of a part on which is nilpotent, plus a part on which is nilpotent, etc. More precisely, we shall take three steps to get to this section's major theorem and the third step shows that where are 's eigenvalues.
Suppose that is a linear transformation. Note that the restriction^{[1]} of to a subspace need not be a linear transformation on because there may be an with . To ensure that the restriction of a transformation to a "part" of a space is a transformation on the part, we need the next condition.
- Definition 2.6
Let be a transformation. A subspace is invariant if whenever then (shorter: ).
Two examples are that the generalized null space and the generalized range space of any transformation are invariant. For the generalized null space, if then where is the dimension of the underlying space and so because is zero also. For the generalized range space, if then for some and then shows that is also a member of .
Thus the spaces and are invariant. Observe also that is nilpotent on because, simply, if has the property that some power of maps it to zero— that is, if it is in the generalized null space— then some power of maps it to zero. The generalized null space is a "part" of the space on which the action of is easy to understand.
The next result is the first of our three steps. It establishes that leaves 's part unchanged.
- Lemma 2.7
A subspace is invariant if and only if it is invariant for any scalar . In particular, where is an eigenvalue of a linear transformation , then for any other eigenvalue , the spaces and are invariant.
- Proof
For the first sentence we check the two implications of the "if and only if" separately. One of them is easy: if the subspace is invariant for any then taking shows that it is invariant. For the other implication suppose that the subspace is invariant, so that if then , and let be any scalar. The subspace is closed under linear combinations and so if then . Thus if then , as required.
The second sentence follows straight from the first. Because the two spaces are invariant, they are therefore invariant. From this, applying the first sentence again, we conclude that they are also invariant.
The second step of the three that we will take to prove this section's major result makes use of an additional property of and , that they are complementary. Recall that if a space is the direct sum of two others then any vector in the space breaks into two parts where and , and recall also that if and are bases for and then the concatenation is linearly independent (and so the two parts of do not "overlap"). The next result says that for any subspaces and that are complementary as well as invariant, the action of on breaks into the "non-overlapping" actions of on and on .
- Lemma 2.8
Let be a transformation and let and be invariant complementary subspaces of . Then can be represented by a matrix with blocks of square submatrices and
where and are blocks of zeroes.
- Proof
Since the two subspaces are complementary, the concatenation of a basis for and a basis for makes a basis for . We shall show that the matrix
has the desired form.
Any vector is in if and only if its final components are zeroes when it is represented with respect to . As is invariant, each of the vectors , ..., has that form. Hence the lower left of is all zeroes.
The argument for the upper right is similar.
To see that has been decomposed into its action on the parts, observe that the restrictions of to the subspaces and are represented, with respect to the obvious bases, by the matrices and . So, with subspaces that are invariant and complementary, we can split the problem of examining a linear transformation into two lower-dimensional subproblems. The next result illustrates this decomposition into blocks.
- Lemma 2.9
If is a matrix with square submatrices and
where the 's are blocks of zeroes, then .
- Proof
Suppose that is , that is , and that is . In the permutation formula for the determinant
each term comes from a rearrangement of the column numbers into a new order . The upper right block is all zeroes, so if a has at least one of among its first column numbers then the term arising from is zero, e.g., if then .
So the above formula reduces to a sum over all permutations with two halves: any significant is the composition of a that rearranges only and a that rearranges only . Now, the distributive law (and the fact that the signum of a composition is the product of the signums) gives that this
equals .
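The lemma is easy to spot-check numerically. This Octave sketch builds a block matrix from two arbitrary square blocks, with blocks of zeroes off the diagonal, and compares the determinants.
A = [2 1;                           % arbitrary square blocks
     0 3];
C = [1 4 0;
     2 1 1;
     0 0 5];
M = [A, zeros(2,3);
     zeros(3,2), C];                % the off-diagonal blocks are zero
det(M)                              % equals det(A)*det(C)
det(A)*det(C)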
- Example 2.10
From Lemma 2.9 we conclude that if two subspaces are complementary and invariant then is nonsingular if and only if its restrictions to both subspaces are nonsingular.
Now for the promised third, final, step to the main result.
- Lemma 2.11
If a linear transformation has the characteristic polynomial then (1) and (2) .
- Proof
Because is the degree of the characteristic polynomial, to establish statement (1) we need only show that statement (2) holds and that is trivial whenever .
For the latter, by Lemma 2.7, both and are invariant. Notice that an intersection of invariant subspaces is invariant and so the restriction of to is a linear transformation. But both and are nilpotent on this subspace and so if has any eigenvalues on the intersection then its "only" eigenvalue is both and . That cannot be, so this restriction has no eigenvalues: is trivial (Lemma V.II.3.10 shows that the only transformation without any eigenvalues is on the trivial space).
To prove statement (2), fix the index . Decompose as
and apply Lemma 2.8.
By Lemma 2.9, . By the uniqueness clause of the Fundamental Theorem of Arithmetic, the determinants of the blocks have the same factors as the characteristic polynomial and , and the sum of the powers of these factors is the power of the factor in the characteristic polynomial: , ..., . Statement (2) will be proved if we show that and that for all , because then the degree of the polynomial — which equals the dimension of the generalized null space— is as required.
For that, first, as the restriction of to is nilpotent on that space, the only eigenvalue of on it is . Thus the characteristic equation of on is . And thus for all .
Now consider the restriction of to . By Note V.III.2.2, the map is nonsingular on and so is not an eigenvalue of on that subspace. Therefore, is not a factor of , and so .
Our major result just translates those steps into matrix terms.
- Theorem 2.12
Any square matrix is similar to one in Jordan form
where each is the Jordan block associated with the eigenvalue of the original matrix (that is, is all zeroes except for 's down the diagonal and some subdiagonal ones).
- Proof
Given an matrix , consider the linear map that it represents with respect to the standard bases. Use the prior lemma to write where are the eigenvalues of . Because each is invariant, Lemma 2.8 and the prior lemma show that is represented by a matrix that is all zeroes except for square blocks along the diagonal. To make those blocks into Jordan blocks, pick each to be a string basis for the action of on .
Jordan form is a canonical form for similarity classes of square matrices, provided that we make it unique by arranging the Jordan blocks from least eigenvalue to greatest and then arranging the subdiagonal blocks inside each Jordan block from longest to shortest.
- Example 2.13
This matrix has the characteristic polynomial .
We will handle the eigenvalues and separately.
Computation of the powers, and the null spaces and nullities, of is routine. (Recall from Example 2.3 the convention of taking to represent a transformation, here , with respect to the standard basis.)
So the generalized null space has dimension two. We've noted that the restriction of is nilpotent on this subspace. From the way that the nullities grow we know that the action of on a string basis is . Thus the restriction can be represented in the canonical form
where many choices of basis are possible. Consequently, the action of the restriction of to is represented by this matrix.
The second eigenvalue's computations are easier. Because the power of in the characteristic polynomial is one, the restriction of to must be nilpotent of index one. Its action on a string basis must be and since it is the zero map, its canonical form is the zero matrix. Consequently, the canonical form for the action of on is the matrix with the single entry . For the basis we can use any nonzero vector from the generalized null space.
Taken together, these two give that the Jordan form of is
where is the concatenation of and .
- Example 2.14
Contrast the prior example with
which has the same characteristic polynomial .
While the characteristic polynomial is the same,
here the action of is stable after only one application— the restriction of to is nilpotent of index only one. (So the contrast with the prior example is that while the characteristic polynomial tells us to look at the action of the on its generalized null space, the characteristic polynomial does not completely describe its action and we must do some computations to find, in this example, that the minimal polynomial is .) The restriction of to the generalized null space acts on a string basis as and , and we get this Jordan block associated with the eigenvalue .
For the other eigenvalue, the arguments for the second eigenvalue of the prior example apply again. The restriction of to is nilpotent of index one (it can't be of index less than one, and since is a factor of the characteristic polynomial to the power one it can't be of index more than one either). Thus 's canonical form is the zero matrix, and the associated Jordan block is the matrix with entry .
Therefore, is diagonalizable.
(Checking that the third vector in is in the nullspace of is routine.)
- Example 2.15
A bit of computing with
shows that its characteristic polynomial is . This table
shows that the restriction of to acts on a string basis via the two strings and .
A similar calculation for the other eigenvalue
shows that the restriction of to its generalized null space acts on a string basis via the two separate strings and .
Therefore is similar to this Jordan form matrix.
We close with the statement that the subjects considered earlier in this chapter are indeed, in this sense, exhaustive.
- Corollary 2.16
Every square matrix is similar to the sum of a diagonal matrix and a nilpotent matrix.
Exercises
- Problem 1
Do the check for Example 2.3.
- Problem 2
Each matrix is in Jordan form. State its characteristic polynomial and its minimal polynomial.
- This exercise is recommended for all readers.
- Problem 3
Find the Jordan form from the given data.
- The matrix is with the single eigenvalue . The nullities of the powers are: has nullity two, has nullity three, has nullity four, and has nullity five.
- The matrix is with two eigenvalues. For the eigenvalue the nullities are: has nullity two, and has nullity four. For the eigenvalue the nullities are: has nullity one.
- Problem 4
Find the change of basis matrices for each example.
- This exercise is recommended for all readers.
- Problem 5
Find the Jordan form and a Jordan basis for each matrix.
- This exercise is recommended for all readers.
- Problem 6
Find all possible Jordan forms of a transformation with characteristic polynomial .
- Problem 7
Find all possible Jordan forms of a transformation with characteristic polynomial .
- This exercise is recommended for all readers.
- Problem 8
Find all possible Jordan forms of a transformation with characteristic polynomial and minimal polynomial .
- Problem 9
Find all possible Jordan forms of a transformation with characteristic polynomial and minimal polynomial .
- This exercise is recommended for all readers.
- Problem 10
- Diagonalize these.
- This exercise is recommended for all readers.
- Problem 11
Find the Jordan matrix representing the differentiation operator on .
- This exercise is recommended for all readers.
- Problem 12
Decide if these two are similar.
- Problem 13
Find the Jordan form of this matrix.
Also give a Jordan basis.
- Problem 14
How many similarity classes are there for matrices whose only eigenvalues are and ?
- This exercise is recommended for all readers.
- Problem 15
Prove that a matrix is diagonalizable if and only if its minimal polynomial has only linear factors.
- Problem 16
Give an example of a linear transformation on a vector space that has no non-trivial invariant subspaces.
- Problem 17
Show that a subspace is invariant if and only if it is invariant.
- Problem 18
Prove or disprove: two matrices are similar if and only if they have the same characteristic and minimal polynomials.
- Problem 19
The trace of a square matrix is the sum of its diagonal entries.
- Find the formula for the characteristic polynomial of a matrix.
- Show that trace is invariant under similarity, and so we can sensibly speak of the "trace of a map". (Hint: see the prior item.)
- Is trace invariant under matrix equivalence?
- Show that the trace of a map is the sum of its eigenvalues (counting multiplicities).
- Show that the trace of a nilpotent map is zero. Does the converse hold?
- Problem 20
To use Definition 2.6 to check whether a subspace is invariant, we seemingly have to check all of the infinitely many vectors in a (nontrivial) subspace to see if they satisfy the condition. Prove that a subspace is invariant if and only if its subbasis has the property that for all of its elements, is in the subspace.
- This exercise is recommended for all readers.
- Problem 21
Is invariance preserved under intersection? Under union? Complementation? Sums of subspaces?
- Problem 22
Give a way to order the Jordan blocks if some of the eigenvalues are complex numbers. That is, suggest a reasonable ordering for the complex numbers.
- Problem 23
Let be the vector space over the reals of degree polynomials. Show that if then is an invariant subspace of under the differentiation operator. In , does any of , ..., have an invariant complement?
- Problem 24
In , the vector space (over the reals) of degree polynomials,
and
are the even and the odd polynomials; is even while is odd. Show that they are subspaces. Are they complementary? Are they invariant under the differentiation transformation?
- Problem 25
Lemma 2.8 says that if and are invariant complements then has a representation in the given block form (with respect to the same ending as starting basis, of course). Does the implication reverse?
- Problem 26
A matrix is the square root of another if . Show that any nonsingular matrix has a square root.
Footnotes
- ↑ More information on restrictions of functions is in the appendix.
Topic: Geometry of Eigenvalues
(Refer to the Topic on Geometry of Linear Transformations.)
The characterization of linear transformations in terms of the elementary operations is nice in some ways (for instance, we can easily see that lines are mapped to lines because each of the operations of projection, dilation, reflection, and skew maps lines to lines), but when a map is expressed as a composition of many small operations, no matter how simple, the description is less than ideal. We finish with another way, a somewhat more holistic way, of picturing the geometric effect of transformations of .
The pictures in that area give the action of the map on just one or two members of the domain. Although we know that a transformation is described completely by its action on a basis, and so, strictly speaking, describing a transformation of requires only a description of where it sends the two vectors from any basis, those pictures seem not to convey much geometric intuition. Can we make clear a linear map's geometry by putting in more information, but not so much information that the picture gets confused?
A transformation of sends lines through the origin to lines through the origin. Thus, two points on a line will both be sent to the line, say, . Consider two such points. One is a multiple of the other, so we can write them with the second one as times the first, for some scalar .
Compare their images.
The second vector is times the first, and the image of the second is times the image of the first. Not only does the transformation preserve the fact that the vectors are collinear, it also preserves the relative scale of the vectors. That is, a transformation treats the points on a line through the origin uniformly. To describe the effect of the map on the entire line, we need only describe its effect on a single non-zero point in that line.
Since every point in the space is on some line through the origin, to understand the action of a linear transformation of , it is sufficient to pick one point from each line through the origin (say the point that is on the upper half of the unit circle) and show the map's effect on that set of points.
Here is such a picture for a straightforward dilation.
Below, the same map is shown with the circle and its image superimposed.
Certainly the geometry here is more evident. For example, we can see that some lines through the origin are actually sent to themselves: the -axis is sent to the -axis, and the -axis is sent to the -axis.
This is the flip shown earlier, here with the circle and its image superimposed.
And this is the skew shown earlier.
Contrast the picture of this map's effect on the unit square with this one.
Here is a somewhat more complicated map (the second coordinate function is the same as the map in the prior picture, but the first coordinate function is different).
Observe that some vectors are being both dilated and rotated through some angle
while others are just being dilated, not rotated at all.
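Pictures such as these can be produced directly. The Octave sketch below (the matrix is an arbitrary sample, not one of the maps pictured above) sends points on the upper half of the unit circle through a transformation and plots the points together with their images.
A = [2 1;
     0 1];                         % an arbitrary sample transformation of the plane
theta = linspace(0, pi, 60);       % points on the upper half of the unit circle
P = [cos(theta); sin(theta)];
Q = A*P;                           % their images
plot(P(1,:), P(2,:), 'b.', Q(1,:), Q(2,:), 'r.');
axis equal;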
Exercises
- Problem 1
- Show the effect each matrix has on the top half of the unit circle.
Which vectors stay on the same line through the origin?
Topic: The Method of Powers
In practice, calculating eigenvalues and eigenvectors is a difficult problem. Finding, and solving, the characteristic polynomial of the large matrices often encountered in applications is too slow and too hard. Other techniques, indirect ones that avoid the characteristic polynomial, are used. Here we shall see such a method that is suitable for large matrices that are "sparse" (the great majority of the entries are zero).
Suppose that the matrix has the distinct eigenvalues , , ..., . Then has a basis that is composed of the associated eigenvectors . For any , where , iterating on gives these.
If one of the eigenvalues, say, , has a larger absolute value than any of the other eigenvalues then its term will dominate the above expression. Put another way, dividing through by gives this,
and, because is assumed to have the largest absolute value, as gets larger the fractions go to zero. Thus, the entire expression goes to .
That is (as long as is not zero), as increases, the vectors will tend toward the direction of the eigenvectors associated with the dominant eigenvalue, and, consequently, the ratios of the lengths will tend toward that dominant eigenvalue.
For example (sample computer code for this follows the exercises), because the matrix
is triangular, its eigenvalues are just the entries on the diagonal, and . Arbitrarily taking to have the components and gives
and the ratio between the lengths of the last two is .
Two implementation issues must be addressed. The first issue is that, instead of finding the powers of and applying them to , we will compute as and then compute as , etc. (i.e., we never separately calculate , , etc.). These matrix-vector products can be done quickly even if is large, provided that it is sparse. The second issue is that, to avoid generating numbers that are so large that they overflow our computer's capability, we can normalize the 's at each step. For instance, we can divide each by its length (other possibilities are to divide it by its largest component, or simply by its first component). We thus implement this method by generating
until we are satisfied. Then the vector is an approximation of an eigenvector, and the approximation of the dominant eigenvalue is the ratio .
One way we could be "satisfied" is to iterate until our approximation of the eigenvalue settles down. We could decide, for instance, to stop the iteration process not after some fixed number of steps, but instead when differs from by less than one percent, or when they agree up to the second significant digit.
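Here is a sketch in Octave of that normalized iteration, using the same matrix as in the example above; the tolerance and the cap on the number of iterations are illustrative choices only.
T = [3 0;
     8 -1];
v = [1; 1];                       % arbitrary starting vector
est_old = 0;
for k = 1:100                     % cap the number of iterations
  w = T*v;
  est = norm(w)/norm(v);          % current estimate of the dominant eigenvalue's size
  v = w/norm(w);                  % normalize to avoid overflow
  if abs(est - est_old) < 0.0001
    break;
  end
  est_old = est;
end
est                               % approximately 3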
The rate of convergence is determined by the rate at which the powers of go to zero, where is the eigenvalue of second largest norm. If that ratio is much less than one then convergence is fast, but if it is only slightly less than one then convergence can be quite slow. Consequently, the method of powers is not the most commonly used way of finding eigenvalues (although it is the simplest one, which is why it is here as the illustration of the possibility of computing eigenvalues without solving the characteristic polynomial). Instead, there are a variety of methods that generally work by first replacing the given matrix with another that is similar to it and so has the same eigenvalues, but is in some reduced form such as tridiagonal form: the only nonzero entries are on the diagonal, or just above or below it. Then special techniques can be used to find the eigenvalues. Once the eigenvalues are known, the eigenvectors of can be easily computed. These other methods are outside of our scope. A good reference is (Goult et al. 1975).
Exercises
- Problem 1
Use ten iterations to estimate the largest eigenvalue of these matrices, starting from the vector with components and . Compare the answer with the one obtained by solving the characteristic equation.
- Problem 2
Redo the prior exercise by iterating until has absolute value less than . At each step, normalize by dividing each vector by its length. How many iterations are required? Are the answers significantly different?
- Problem 3
Use ten iterations to estimate the largest eigenvalue of these matrices, starting from the vector with components , , and . Compare the answer with the one obtained by solving the characteristic equation.
- Problem 4
Redo the prior exercise by iterating until has absolute value less than . At each step, normalize by dividing each vector by its length. How many iterations does it take? Are the answers significantly different?
- Problem 5
What happens if ? That is, what happens if the initial vector does not have any component in the direction of the relevant eigenvector?
- Problem 6
How can the method of powers be adapted to find the smallest eigenvalue?
This is the code for the computer algebra system Octave that was used to do the calculation above. (It has been lightly edited to remove blank lines, etc.)
Computer Code
>T=[3, 0;
8, -1]
T=
3 0
8 -1
>v0=[1; 1]
v0=
1
1
>v1=T*v0
v1=
3
7
>v2=T*v1
v2=
9
17
>T9=T**9
T9=
19683 0
39368 -1
>T10=T**10
T10=
59049 0
118096 1
>v9=T9*v0
v9=
19683
39367
>v10=T10*v0
v10=
59049
118096
>norm(v10)/norm(v9)
ans=2.9999
Remark: we are ignoring the power of Octave here; there are built-in functions to automatically apply quite sophisticated methods to find eigenvalues and eigenvectors. Instead, we are using just the system as a calculator.
Topic: Stable Populations
Imagine a reserve park with animals from a species that we are trying to protect. The park doesn't have a fence and so animals cross the boundary, both from the inside out and in the other direction. Every year, 10% of the animals from inside of the park leave, and 1% of the animals from the outside find their way in. We can ask if we can find a stable level of population for this park: is there a population that, once established, will stay constant over time, with the number of animals leaving equal to the number of animals entering?
To answer that question, we must first establish the equations. Let the year population in the park be and in the rest of the world be .
We can set this system up as a matrix equation (see the Markov Chain topic).
Now, "stable level" means that and , so that the matrix equation becomes . We are therefore looking for eigenvectors for that are associated with the eigenvalue . The equation is
which gives the eigenspace: vectors with the restriction that . Coupled with additional information, that the total world population of this species is , we find that the stable state is and .
If we start with a park population of ten thousand animals, so that the rest of the world has one hundred thousand, then every year ten percent (a thousand animals) of those inside will leave the park, and every year one percent (a thousand) of those from the rest of the world will enter the park. It is stable, self-sustaining.
Now imagine that we are trying to gradually build up the total world population of this species. We can try, for instance, to have the world population grow at a rate of 1% per year. In this case, we can take a "stable" state for the park's population to be that it also grows at 1% per year. The equation leads to , which gives this system.
The matrix is nonsingular, and so the only solution is and . Thus, there is no (usable) initial population that we can establish at the park and expect that it will grow at the same rate as the rest of the world.
Knowing that an annual world population growth rate of 1% forces an unstable park population, we can ask which growth rates there are that would allow an initial population for the park that will be self-sustaining. We consider and solve for .
A shortcut to factoring that quadratic is our knowledge that is an eigenvalue of , so the other eigenvalue is . Thus there are two ways to have a stable park population (a population that grows at the same rate as the population of the rest of the world, despite the leaky park boundaries): have a world population that does not grow or shrink, and have a world population that shrinks by 11% every year.
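These computations are easy to reproduce in Octave. The transition matrix below comes from the migration rates given above (10% of the park's animals leave each year and 1% of the outside animals enter); the sketch finds both eigenvalues and scales the eigenvector for the eigenvalue 1 to the total of one hundred ten thousand animals used in the discussion.
M = [0.90 0.01;                   % park:  keeps 90% and gains 1% of the outside
     0.10 0.99];                  % world: gains 10% of the park and keeps 99%
eig(M)                            % the eigenvalues 1 and 0.89
v = null(M - eye(2));             % an eigenvector associated with 1
v = v*110000/sum(v)               % the stable state: 10000 inside, 100000 outside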
So this is one meaning of eigenvalues and eigenvectors— they give a stable state for a system. If the eigenvalue is then the system is static. If the eigenvalue isn't then the system is either growing or shrinking, but in a dynamically-stable way.
Exercises
- Problem 1
What initial population for the park discussed above should be set up in the case where world populations are allowed to decline by 11% every year?
- Problem 2
What will happen to the population of the park in the event of a growth in world population of 1% per year? Will it lag the world growth, or lead it? Assume that the initial park population is ten thousand, and the world population is one hundred thousand, and calculate over a ten-year span.
- Problem 3
The park discussed above is partially fenced so that now, every year, only 5% of the animals from inside of the park leave (still, about 1% of the animals from the outside find their way in). Under what conditions can the park maintain a stable population now?
- Problem 4
Suppose that a species of bird only lives in Canada, the United States, or in Mexico. Every year, 4% of the Canadian birds travel to the US, and 1% of them travel to Mexico. Every year, 6% of the US birds travel to Canada, and 4% go to Mexico. From Mexico, every year 10% travel to the US, and 0% go to Canada.
- Give the transition matrix.
- Is there a way for the three countries to have constant populations?
- Find all stable situations.
Topic: Linear Recurrences
In 1202 Leonardo of Pisa, also known as Fibonacci, posed this problem.
A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?
This moves past an elementary exponential growth model for population increase to include the fact that there is an initial period where newborns are not fertile. However, it retains other simplifying assumptions, such as that there is no gestation period and no mortality.
The number of newborn pairs that will appear in the upcoming month is simply the number of pairs that were alive last month, since those will all be fertile, having been alive for two months. The number of pairs alive next month is the sum of the number alive this month and the number of newborns.
This is an example of a recurrence relation (it is called that because the values of are calculated by looking at other, prior values of ). From it, we can easily answer Fibonacci's twelve-month question.
The sequence of numbers defined by the above equation (of which the first few are listed) is the Fibonacci sequence. The material of this chapter can be used to give a formula with which we can calculate without having to first find , , etc.
For that, observe that the recurrence is a linear relationship and so we can give a suitable matrix formulation of it.
Then, where we write for the matrix and for the vector with components and , we have that . The advantage of this matrix formulation is that by diagonalizing we get a fast way to compute its powers: where we have , and the -th power of the diagonal matrix is the diagonal matrix whose entries are the -th powers of the entries of .
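To make the fast-powers idea concrete, here is a small Octave sketch. The matrix A below is the standard matrix form of the Fibonacci recurrence (the book's own displayed matrix is elided in this print rendering), so treat this as an illustration rather than a quotation.
A = [1, 1;
     1, 0];
[V, D] = eig(A);     # diagonalize, so that A = V*D*inv(V)
n = 9;
An = V * D^n / V;    # A^n computed through the diagonal form; powering D is cheap
round(An * [1; 1])   # consecutive Fibonacci numbers, 89 and 55 for n = 9 (rounding removes
                     # the small floating-point error introduced by the diagonalization)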
The characteristic equation of is . The quadratic formula gives its roots as and . Diagonalizing gives this.
Introducing the vectors and taking the -th power, we have
We can compute from the second component of that equation.
Notice that is dominated by its first term because is less than one, so its powers go to zero. Although we have extended the elementary model of population growth by adding a delay period before the onset of fertility, we nonetheless still get an (asymptotically) exponential function.
In general, a linear recurrence relation has the form
(it is also called a difference equation). This recurrence relation is homogeneous because there is no constant term; i.e., it can be put into the form . This is said to be a relation of order . The relation, along with the initial conditions , ..., completely determines a sequence. For instance, the Fibonacci relation is of order and it, along with the two initial conditions and , determines the Fibonacci sequence simply because we can compute any by first computing , , etc. In this Topic, we shall see how linear algebra can be used to solve linear recurrence relations.
First, we define the vector space in which we are working. Let be the set of functions from the natural numbers to the real numbers. (Below we shall have functions with domain , that is, without , but it is not an important distinction.)
Putting the initial conditions aside for a moment, for any recurrence, we can consider the subset of of solutions. For example, without initial conditions, in addition to the function given above, the Fibonacci relation is also solved by the function whose first few values are , , , and .
The subset is a subspace of . It is nonempty because the zero function is a solution. It is closed under addition since if and are solutions, then
And, it is closed under scalar multiplication since
We can give the dimension of . Consider this map from the set of functions to the set of vectors .
Problem 3 shows that this map is linear. Because, as noted above, any solution of the recurrence is uniquely determined by the initial conditions, this map is one-to-one and onto. Thus it is an isomorphism, and thus has dimension , the order of the recurrence.
So (again, without any initial conditions), we can describe the set of solutions of any linear homogeneous recurrence relation of order by taking linear combinations of only linearly independent functions. It remains to produce those functions.
For that, we express the recurrence with a matrix equation.
In trying to find the characteristic function of the matrix, we can see the pattern in the case
and case.
Problem 4 shows that the characteristic equation is this.
We call that the polynomial "associated" with the recurrence relation. (We will be finding the roots of this polynomial and so we can drop the as irrelevant.)
If has no repeated roots then the matrix is diagonalizable and we can, in theory, get a formula for as in the Fibonacci case. But, because we know that the subspace of solutions has dimension , we do not need to do the diagonalization calculation, provided that we can exhibit linearly independent functions satisfying the relation.
Where , , ..., are the distinct roots, consider the functions through of powers of those roots. Problem 2 shows that each is a solution of the recurrence and that the of them form a linearly independent set. So, given the homogeneous linear recurrence (that is, ) we consider the associated equation . We find its roots , ..., , and if those roots are distinct then any solution of the relation has the form for . (The case of repeated roots is also easily done, but we won't cover it here— see any text on Discrete Mathematics.)
Now, given some initial conditions, so that we are interested in a particular solution, we can solve for , ..., . For instance, the polynomial associated with the Fibonacci relation is , whose roots are and so any solution of the Fibonacci equation has the form . Including the initial conditions for the cases and gives
which yields and , as was calculated above.
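For reference, here is that calculation written out. The displayed equations above are elided in this print rendering, so this is a reconstruction using the standard convention f(0) = 0 and f(1) = 1 rather than a quotation of the book's own display.
\begin{align*}
\lambda^{2}-\lambda-1=0
  &\;\Longrightarrow\;
  \lambda_{1}=\frac{1+\sqrt{5}}{2},\qquad \lambda_{2}=\frac{1-\sqrt{5}}{2} \\
c_{1}+c_{2}=0,\quad c_{1}\lambda_{1}+c_{2}\lambda_{2}=1
  &\;\Longrightarrow\;
  c_{1}=\frac{1}{\sqrt{5}},\qquad c_{2}=-\frac{1}{\sqrt{5}} \\
f(n)
  &=\frac{1}{\sqrt{5}}\Bigl(\frac{1+\sqrt{5}}{2}\Bigr)^{n}
   -\frac{1}{\sqrt{5}}\Bigl(\frac{1-\sqrt{5}}{2}\Bigr)^{n}
\end{align*}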
We close by considering the nonhomogeneous case, where the relation has the form for some nonzero . As in the first chapter of this book, only a small adjustment is needed to make the transition from the homogeneous case. This classic example illustrates.
In 1883, Edouard Lucas posed the following problem.
In the great temple at Benares, beneath the dome which marks the center of the world, rests a brass plate in which are fixed three diamond needles, each a cubit high and as thick as the body of a bee. On one of these needles, at the creation, God placed sixty four disks of pure gold, the largest disk resting on the brass plate, and the others getting smaller and smaller up to the top one. This is the Tower of Bramah. Day and night unceasingly the priests transfer the disks from one diamond needle to another according to the fixed and immutable laws of Bramah, which require that the priest on duty must not move more than one disk at a time and that he must place this disk on a needle so that there is no smaller disk below it. When the sixty-four disks shall have been thus transferred from the needle on which at the creation God placed them to one of the other needles, tower, temple, and Brahmins alike will crumble into dust, and with a thunderclap the world will vanish.
(Translation of De Parville (1884) from Ball (1962).)
How many disk moves will it take? Instead of tackling the sixty four disk problem right away, we will consider the problem for smaller numbers of disks, starting with three.
To begin, all three disks are on the same needle.
After moving the small disk to the far needle, the mid-sized disk to the middle needle, and then moving the small disk to the middle needle we have this.
Now we can move the big disk over. Then, to finish, we repeat the process of moving the smaller disks, this time so that they end up on the third needle, on top of the big disk.
So the thing to see is that to move the very largest disk, the bottom disk, at a minimum we must: first move the smaller disks to the middle needle, then move the big one, and then move all the smaller ones from the middle needle to the ending needle. Those three steps give us this recurrence.
We can easily get the first few values of .
We recognize those as being simply one less than a power of two.
To derive this equation instead of just guessing at it, we write the original relation as , consider the homogeneous relation , get its associated polynomial , which obviously has the single root of , and conclude that functions satisfying the homogeneous relation take the form .
That's the homogeneous solution. Now we need a particular solution.
Because the nonhomogeneous relation is so simple, in a few minutes (or by remembering the table) we can spot the particular solution (there are other particular solutions, but this one is easily spotted). So we have that— without yet considering the initial condition— any solution of is the sum of the homogeneous solution and this particular solution: .
The initial condition now gives that , and we've gotten the formula that generates the table: the -disk Tower of Hanoi problem requires a minimum of moves.
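Written out, the derivation sketched above runs as follows. The displayed formulas are elided in this print rendering, so this is a reconstruction of the standard argument rather than a quotation.
\begin{align*}
&T(n)=2\,T(n-1)+1  && \text{the recurrence}\\
&T(n)-2\,T(n-1)=0  && \text{associated homogeneous relation}\\
&\lambda-2=0\;\Longrightarrow\;\lambda=2  && \text{associated polynomial and its root}\\
&T(n)=c\cdot 2^{n}+(-1)  && \text{homogeneous solution plus the particular solution}\\
&T(1)=2c-1=1\;\Longrightarrow\;c=1,\quad T(n)=2^{n}-1  && \text{the initial condition fixes }c
\end{align*}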
Finding a particular solution in more complicated cases is, naturally, more complicated. A delightful and rewarding, but challenging, source on recurrence relations is (Graham, Knuth & Patashnik 1988). For more on the Tower of Hanoi, (Ball 1962) or (Gardner 1957) are good starting points. So is (Hofstadter 1985). Some computer code for trying some recurrence relations follows the exercises.
Exercises
- Problem 1
Solve each of these homogeneous linear recurrence relations.
- Problem 2
Give a formula for the relations of the prior exercise, with these initial conditions.
- ,
- ,
- , , .
- Problem 3
Check that the isomorphism given between and is a linear map. It is argued above that this map is one-to-one. What is its inverse?
- Problem 4
Show that the characteristic equation of the matrix is as stated, that is, is the polynomial associated with the relation. (Hint: expanding down the final column, and using induction will work.)
- Problem 5
Given a homogeneous linear recurrence relation , let , ..., be the roots of the associated polynomial.
- Prove that each function satisfies the recurrence (without initial conditions).
- Prove that no is .
- Prove that the set is linearly independent.
- Problem 6
(This refers to the value given in the computer code below.) Transferring one disk per second, how many years would it take the priests at the Tower of Hanoi to finish the job?
Computer Code
This code allows the generation of the first few values of a function defined by a recurrence and initial conditions. It is in the Scheme dialect of LISP (specifically, it was written for A. Jaffer's free scheme interpreter SCM, although it should run in any Scheme implementation).
First, the Tower of Hanoi code is a straightforward implementation of the recurrence.
(define (tower-of-hanoi-moves n)
(if (= n 1)
1
(+ (* (tower-of-hanoi-moves (- n 1))
2)
1) ) )
(Note for readers unused to recursive code: to compute , the computer is told to compute , which requires, of course, computing . The computer puts the "times " and the "plus " aside for a moment to do that. It computes by using this same piece of code (that's what "recursive" means), and to do that is told to compute . This keeps up (the next step is to try to do while the other arithmetic is held in waiting), until, after steps, the computer tries to compute . It then returns , which now means that the computation of can proceed, etc., up until the original computation of finishes.)
The next routine calculates a table of the first few values. (Some language notes: '() is the empty list, that is, the empty sequence, and cons pushes something onto the start of a list. Note that, in the last line, the procedure proc is called on argument n.)
(define (first-few-outputs proc n)
(first-few-outputs-aux proc n '()) )
(define (first-few-outputs-aux proc n lst)
(if (< n 1)
lst
(first-few-outputs-aux proc (- n 1) (cons (proc n) lst)) ) )
The session at the SCM prompt went like this.
>(first-few-outputs tower-of-hanoi-moves 64)
Evaluation took 120 mSec
(1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767
65535 131071 262143 524287 1048575 2097151 4194303 8388607
16777215 33554431 67108863 134217727 268435455 536870911
1073741823 2147483647 4294967295 8589934591 17179869183
34359738367 68719476735 137438953471 274877906943 549755813887
1099511627775 2199023255551 4398046511103 8796093022207
17592186044415 35184372088831 70368744177663 140737488355327
281474976710655 562949953421311 1125899906842623
2251799813685247 4503599627370495 9007199254740991
18014398509481983 36028797018963967 72057594037927935
144115188075855871 288230376151711743 576460752303423487
1152921504606846975 2305843009213693951 4611686018427387903
9223372036854775807 18446744073709551615)
This is a list of through . (The mSec came on a 50 MHz '486 running in an XTerm of XWindow under Linux. The session was edited to put line breaks between numbers.)
Appendix
Mathematics is made of arguments (reasoned discourse, that is, not crockery-throwing). This section is a reference to the most-used techniques. A reader having trouble with, say, proof by contradiction, can turn here for an outline of that method.
But this section gives only a sketch. For more, these are classics: Methods of Logic by Quine, Induction and Analogy in Mathematics by Pólya, and Naive Set Theory by Halmos.
Propositions
The point at issue in an argument is the proposition. Mathematicians usually write the point in full before the proof and label it either Theorem for major points, Corollary for points that follow immediately from a prior one, or Lemma for results chiefly used to prove other results.
The statements expressing propositions can be complex, with many subparts. The truth or falsity of the entire proposition depends both on the truth value of the parts, and on the words used to assemble the statement from its parts.
Not
For example, where is a proposition, "it is not the case that " is true provided that is false. Thus, " is not prime" is true only when is the product of smaller integers.
We can picture the "not" operation with a Venn diagram.
Where the box encloses all natural numbers, and inside the circle are the primes, the shaded area holds numbers satisfying "not ".
To prove that a "not " statement holds, show that is false.
And
Consider the statement form " and ". For the statement to be true both halves must hold: " is prime and so is " is true, while " is prime and is not" is false.
Here is the Venn diagram for " and ".
To prove " and ", prove that each half holds.
Or
A " or " is true when either half holds: " is prime or is prime" is true, while " is not prime or is prime" is false. We take "or" inclusively so that if both halves are true " is prime or is not" then the statement as a whole is true. (In everyday speech, sometimes "or" is meant in an exclusive way— "Eat your vegetables or no dessert" does not intend both halves to hold— but we will not use "or" in that way.)
The Venn diagram for "or" includes all of both circles.
To prove " or ", show that in all cases at least one half holds (perhaps sometimes one half and sometimes the other, but always at least one).
If-then
An "if then " statement (sometimes written " materially implies " or just " implies " or "") is true unless is true while is false. Thus "if is prime then is not" is true while "if is prime then is also prime" is false. (Contrary to its use in casual speech, in mathematics "if then " does not connote that precedes or causes .)
More subtly, in mathematics "if then " is true when is false: "if is prime then is prime" and "if is prime then is not" are both true statements, sometimes said to be vacuously true. We adopt this convention because we want statements like "if a number is a perfect square then it is not prime" to be true, for instance when the number is or when the number is .
The diagram
shows that holds whenever does (another phrasing is " is sufficient to give "). Notice again that if does not hold, may or may not be in force.
There are two main ways to establish an implication. The first way is direct: assume that is true and, using that assumption, prove . For instance, to show "if a number is divisible by 5 then twice that number is divisible by 10", assume that the number is and deduce that . The second way is indirect: prove the contrapositive statement: "if is false then is false" (rephrased, " can only be false when is also false"). As an example, to show "if a number is prime then it is not a perfect square", argue that if it were a square then it could be factored where and so wouldn't be prime (of course or don't give but they are nonprime by definition).
Note two things about this statement form.
First, an "if then " result can sometimes be improved by weakening or strengthening . Thus, "if a number is divisible by then its square is also divisible by " could be upgraded either by relaxing its hypothesis: "if a number is divisible by then its square is divisible by ", or by tightening its conclusion: "if a number is divisible by then its square is divisible by ".
Second, after showing "if then ", a good next step is to look into whether there are cases where holds but does not. The idea is to better understand the relationship between and , with an eye toward strengthening the proposition.
Equivalence
An if-then statement cannot be improved when not only does imply , but also implies . Some ways to say this are: " if and only if ", " iff ", " and are logically equivalent", " is necessary and sufficient to give ", "". For example, "a number is divisible by a prime if and only if that number squared is divisible by the prime squared".
The picture here shows that and hold in exactly the same cases.
Although in simple arguments a chain like " if and only if , which holds if and only if ..." may be practical, typically we show equivalence by showing the "if then " and "if then " halves separately.
Quantifiers
Compare these two statements about natural numbers: "there is an such that is divisible by " is true, while "for all numbers , that is divisible by " is false. We call the "there is" and "for all" prefixes quantifiers.
For all
The "for all" prefix is the universal quantifier, symbolized .
Venn diagrams aren't very helpful with quantifiers, but in a sense the box we draw to border the diagram shows the universal quantifier since it delineates the universe of possible members.
To prove that a statement holds in all cases, we must show that it holds in each case. Thus, to prove "every number divisible by has its square divisible by ", take a single number of the form and square it . This is a "typical element" or "generic element" proof.
This kind of argument requires that we are careful to not assume properties for that element other than those in the hypothesis— for instance, this type of wrong argument is a common mistake: "if is divisible by a prime, say , so that then and the square of the number is divisible by the square of the prime". That is an argument about the case , but it isn't a proof for general .
There exists
We will also use the existential quantifier, symbolized and read "there exists".
As noted above, Venn diagrams are not much help with quantifiers, but a picture of "there is a number such that " would show both that there can be more than one and that not all numbers need satisfy .
An existence proposition can be proved by producing something satisfying the property: once, to settle the question of primality of , Euler produced its divisor . But there are proofs showing that something exists without saying how to find it; Euclid's argument given in the next subsection shows there are infinitely many primes without naming them. In general, while demonstrating existence is better than nothing, giving an example is better, and an exhaustive list of all instances is great. Still, mathematicians take what they can get.
Finally, along with "Are there any?" we often ask "How many?" That is why the issue of uniqueness often arises in conjunction with questions of existence. Many times the two arguments are simpler if separated, so note that just as proving something exists does not show it is unique, neither does proving something is unique show that it exists. (Obviously "the natural number with more factors than any other" would be unique, but in fact no such number exists.)
Techniques of Proof
Induction
Many proofs are iterative, "Here's why the statement is true for the case of the number , it then follows for , and from there to , and so on ...". These are called proofs by induction. Such a proof has two steps. In the base step the proposition is established for some first number, often or . Then in the inductive step we assume that the proposition holds for numbers up to some and deduce that it then holds for the next number .
Here is an example.
We will prove that .
For the base step we must show that the formula holds when . That's easy: the sum of the first number does indeed equal .
For the inductive step, assume that the formula holds for the numbers . That is, assume all of these instances of the formula.
From this assumption we will deduce that the formula therefore also holds in the next case. The deduction is straightforward algebra.
We've shown in the base case that the above proposition holds for . We've shown in the inductive step that if it holds for the case of then it also holds for ; therefore it does hold for . We've also shown in the inductive step that if the statement holds for the cases of and then it also holds for the next case , etc. Thus it holds for any natural number greater than or equal to .
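Assuming that the proposition being proved is the familiar formula for the sum of the first n natural numbers (the displayed formulas above are elided in this print rendering), the algebra of the inductive step is this.
\begin{align*}
1+2+\cdots+n &= \frac{n(n+1)}{2}  && \text{the proposition}\\
1+2+\cdots+k+(k+1) &= \frac{k(k+1)}{2}+(k+1)  && \text{using the instance for }k\\
 &= \frac{k(k+1)+2(k+1)}{2} = \frac{(k+1)(k+2)}{2}  && \text{which is the instance for }k+1
\end{align*}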
Here is another example.
We will prove that every integer greater than is a product of primes.
The base step is easy: is the product of a single prime.
For the inductive step assume that each of is a product of primes, aiming to show is also a product of primes. There are two possibilities: (i) if is not divisible by a number smaller than itself then it is a prime and so is the product of primes, and (ii) if is divisible then its factors can be written as a product of primes (by the inductive hypothesis) and so can be rewritten as a product of primes. That ends the proof.
(Remark. The Prime Factorization Theorem of Number Theory says that not only does a factorization exist, but that it is unique. We've shown the easy half.)
There are two things to note about the "next number" in an induction argument.
For one thing, while induction works on the integers, it's no good on the reals. There is no "next" real.
The other thing is that we sometimes use induction to go down, say, from to to , etc., down to . So "next number" could mean "next lowest number". Of course, at the end we have not shown the fact for all natural numbers, only for those less than or equal to .
Contradiction
Another technique of proof is to show something is true by showing it can't be false.
The classic example is Euclid's, that there are infinitely many primes.
Suppose there are only finitely many primes . Consider . None of the primes on this supposedly exhaustive list divides that number evenly, each leaves a remainder of . But every number is a product of primes so this can't be. Thus there cannot be only finitely many primes.
Every proof by contradiction has the same form: assume that the false proposition is true and derive some contradiction to known facts. This form of argument is also known as reductio ad absurdum.
Another example is this proof that is not a rational number.
Suppose that .
Factor out the 's: and and rewrite.
The Prime Factorization Theorem says that there must be the same number of factors of on both sides, but there are an odd number on the left and an even number on the right. That's a contradiction, so a rational with a square of cannot be.
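Here is a reconstruction of that calculation with the elided equations written out; the names m, n, a, and b are illustrative rather than the book's own.
\begin{align*}
\sqrt{2}=\frac{m}{n}
  &\;\Longrightarrow\; 2\,n^{2}=m^{2}\\
m=2^{a}\hat m,\;\; n=2^{b}\hat n \;\;(\hat m,\hat n\text{ odd})
  &\;\Longrightarrow\; 2^{\,2b+1}\,\hat n^{2}=2^{\,2a}\,\hat m^{2}
\end{align*}
The exponent on the left, 2b+1, is odd while the exponent on the right, 2a, is even, which is the contradiction referred to above.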
Both of these examples aimed to prove something doesn't exist. A negative proposition often suggests a proof by contradiction.
Sets, Functions, Relations
Sets
Mathematicians work with collections called sets. A set can be given as a listing between curly braces as in , or, if that's unwieldy, by using set-builder notation as in (read "the set of all such that ..."). We name sets with capital roman letters as with the primes , except for a few special sets such as the real numbers , and the complex numbers . To denote that something is an element (or member) of a set we use "", so that while .
What distinguishes a set from any other type of collection is the Principle of Extensionality, that two sets with the same elements are equal. Because of this principle, in a set repeats collapse and order doesn't matter .
We use "" for the subset relationship: and "" for subset or equality (if is a subset of but then is a proper subset of ). These symbols may be flipped, for instance .
Because of Extensionality, to prove that two sets are equal , just show that they have the same members. Usually we show mutual inclusion, that both and .
Set operations
Venn diagrams are handy here. For instance, can be pictured
and "" looks like this.
Note that this is a repeat of the diagram for "if ... then ..." propositions. That's because "" means "if then ".
In general, for every propositional logic operator there is an associated set operator. For instance, the complement of is
the union is
and the intersection is
When two sets share no members their intersection is the empty set , symbolized . Any set has the empty set for a subset, by the "vacuously true" property of the definition of implication.
Sequences
We shall also use collections where order does matter and where repeats do not collapse. These are sequences, denoted with angle brackets: . A sequence of length is sometimes called an ordered pair and written with parentheses: . We also sometimes say "ordered triple", "ordered -tuple", etc. The set of ordered -tuples of elements of a set is denoted . Thus the set of pairs of reals is .
Functions
We first see functions in elementary Algebra, where they are presented as formulas (e.g., ), but progressing to more advanced Mathematics reveals more general functions— trigonometric ones, exponential and logarithmic ones, and even constructs like absolute value that involve piecing together parts— and we see that functions aren't formulas; instead, the key idea is that a function associates with its input a single output .
Consequently, a function or map is defined to be a set of ordered pairs such that suffices to determine , that is: if then (this requirement is referred to by saying a function is well-defined; more on this is in the section on isomorphisms).
Each input is one of the function's arguments and each output is a value. The set of all arguments is 's domain and the set of output values is its range. Usually we don't need to know what is and is not in the range and we instead work with a superset of the range, the codomain. The notation for a function with domain and codomain is .
We sometimes instead use the notation , read " maps under to ", or " is the image of ".
Some maps, like , can be thought of as combinations of simple maps, here, applied to the image of . The composition of with , is the map sending to . It is denoted . This definition only makes sense if the range of is a subset of the domain of .
Observe that the identity map defined by has the property that for any , the composition is equal to . So an identity map plays the same role with respect to function composition that the number plays in real number addition, or that the number plays in multiplication.
In line with that analogy, define a left inverse of a map to be a function such that is the identity map on . Of course, a right inverse of is a such that is the identity.
A map that is both a left and right inverse of is called simply an inverse. An inverse, if one exists, is unique because if both and are inverses of then (the middle equality comes from the associativity of function composition), so we often call it "the" inverse, written . For instance, the inverse of the function given by is the function given by .
The superscript "" notation for function inverse can be confusing— it doesn't mean . It is used because it fits into a larger scheme. Functions that have the same codomain as domain can be iterated, so that where , we can consider the composition of with itself: , and , etc.
Naturally enough, we write as and as , etc. Note that the familiar exponent rules for real numbers obviously hold: and . The relationship with the prior paragraph is that, where is invertible, writing for the inverse and for the inverse of , etc., gives that these familiar exponent rules continue to hold, once is defined to be the identity map.
If the codomain equals the range of then we say that the function is onto (or surjective). A function has a right inverse if and only if it is onto (this is not hard to check). If no two arguments share an image, if implies that , then the function is one-to-one (or injective). A function has a left inverse if and only if it is one-to-one (this is also not hard to check).
By the prior paragraph, a map has an inverse if and only if it is both onto and one-to-one; such a function is a correspondence. It associates one and only one element of the domain with each element of the range (for example, finite sets must have the same number of elements to be matched up in this way). Because a composition of one-to-one maps is one-to-one, and a composition of onto maps is onto, a composition of correspondences is a correspondence.
We sometimes want to shrink the domain of a function. For instance, we may take the function given by and, in order to have an inverse, limit input arguments to nonnegative reals . Technically, is a different function than ; we call it the restriction of to the smaller domain.
A final point on functions: neither nor need be a number. As an example, we can think of as a function that takes the ordered pair as its argument.
Relations
Some familiar operations are obviously functions: addition maps to . But what of "" or ""? We here take the approach of rephrasing "" to " is in the relation ". That is, define a binary relation on a set to be a set of ordered pairs of elements of . For example, the relation is the set ; some elements of that set are , , and .
Another binary relation on the natural numbers is equality; this relation is formally written as the set .
Still another example is "closer than ", the set . Some members of that relation are , , and . Neither nor is a member.
Those examples illustrate the generality of the definition. All kinds of relationships (e.g., "both numbers even" or "first number is the second with the digits reversed") are covered under the definition.
Equivalence Relations
We shall need to say, formally, that two objects are alike in some way. While these alike things aren't identical, they are related (e.g., two integers that "give the same remainder when divided by ").
A binary relation is an equivalence relation when it satisfies
- reflexivity: any object is related to itself;
- symmetry: if is related to then is related to ;
- transitivity: if is related to and is related to then is related to .
(To see that these conditions formalize being the same, read them again, replacing "is related to" with "is like".)
Some examples (on the integers): "" is an equivalence relation, "" does not satisfy symmetry, "same sign" is an equivalence, while "nearer than " fails transitivity.
Partitions
In "same sign" there are two kinds of pairs, the first with both numbers positive and the second with both negative. So integers fall into exactly one of two classes, positive or negative.
A partition of a set is a collection of subsets such that every element of is in one and only one : , and if is not equal to then . Picture being decomposed into distinct parts.
Thus, the first paragraph says "same sign" partitions the integers into the positives and the negatives.
Similarly, the equivalence relation "=" partitions the integers into one-element sets.
Another example is the fractions. Of course, and are equivalent fractions. That is, for the set , we define two elements and to be equivalent if . We can check that this is an equivalence relation, that is, that it satisfies the above three conditions. With that, is divided up into parts.
Before we show that equivalence relations always give rise to partitions, we first illustrate the argument. Consider the relationship between two integers of "same parity", the set (i.e., "give the same remainder when divided by "). We want to say that the natural numbers split into two pieces, the evens and the odds, and inside a piece each member has the same parity as each other. So for each we define the set of numbers associated with it: . Some examples are , and , and . These are the parts, e.g., is the odds.
Theorem: An equivalence relation induces a partition on the underlying set.
- Proof
Call the set and the relation . In line with the illustration in the paragraph above, for each define .
Observe that, as is a member of , the union of all these sets is . So we will be done if we show that distinct parts are disjoint: if then . We will verify this through the contrapositive, that is, we will assume that in order to deduce that .
Let be an element of the intersection. Then by definition of and , the two and are members of , and by symmetry of this relation and are also members of . To show that we will show each is a subset of the other.
Assume that so that . Use transitivity along with to conclude that is also an element of . But so another use of transitivity gives that . Thus . Therefore implies , and so .
The same argument in the other direction gives the other inclusion, and so the two sets are equal, completing the contrapositive argument.
We call each part of a partition an equivalence class (or informally, "part").
We sometimes pick a single element of each equivalence class to be the class representative.
Usually when we pick representatives we have some natural scheme in mind. In that case we call them the canonical representatives.
An example is the simplest form of a fraction. We've defined and to be equivalent fractions. In everyday work we often use the "simplest form" or "reduced form" fraction as the class representatives.
Resources And Licensing
For information regarding the licensing of this book, please see Wikibooks' Copyright Policy. The original text of this wikibook has been copied from the book "Linear Algebra" by:
- Jim Hefferon, Mathematics
- Saint Michael's College
- Colchester, Vermont USA 05439.
The original text is available here, and is released under either the GNU Free Documentation License or the Creative Commons Attribution-ShareAlike 2.5 License.
Other Books and Lectures
- Linear Algebra - A free textbook by Prof. Jim Hefferon of St. Michael's College. This wikibook began as a wikified copy of Prof. Hefferon's text. Prof. Hefferon's book may differ from the book here, as both are still under development.
- A Course in Linear Algebra - A free set of video lectures given at the Massachusetts Institute of Technology by Prof. Gilbert Strang. Prof. Strang's book on linear algebra has been widely influential and is referenced many times in this text.
- A First Course in Linear Algebra - A free textbook by Prof. Rob Beezer at the University of Puget Sound, released under GFDL.
- Lecture Notes on Linear Algebra - An online viewable set of lecture notes by Prof. José Figueroa-O’Farrill at the University of Edinburgh.
Software
- Octave, a free and open source application for numerical linear algebra. There is also an Octave Programming Tutorial wikibook under development.
- A toolkit for linear algebra students - An online software resource aimed at helping linear algebra students learn and practice basic linear algebra procedures, such as Gauss-Jordan reduction, calculating the determinant, or checking for linear independence. This software was produced by Przemyslaw Bogacki in the Department of Mathematics and Statistics at Old Dominion University.
- Online Javascript Matrix Calculator, basic matrix algebra, elementary row operations, RREF, inverses, determinants, characteristic polynomials, eigenvalues and eigenvectors, null space, range space, and least squares solutions to linear systems. The software was developed by the department of mathematics at the University of Houston.
Wikipedia
Wikipedia is frequently a great resource that often gives a general non-technical overview of a subject. Wikipedia has many articles on the subject of Linear Algebra. Below are some articles about some of the material in this book.
- Reduced Echelon Form is described in Row echelon form
- Gauss-Jordan Reduction is described in Gauss–Jordan elimination
- Gauss' Method is described in Gaussian elimination
- Many topics in the section Linear Algebra/Linear Geometry of n-Space and its subsections are discussed in the article Euclidean vector
- Row Equivalence is described in the article Row equivalence
Bibliography
- Microsoft (1993), Microsoft Programmers Reference, Microsoft Press.
- William Lowell Putnam Mathematical Competition, Problem A-5, 1990.
- The USSR Mathematics Olympiad, number 174.
- Ackerson, R. H. (Dec. 1955), "A Note on Vector Spaces", American Mathematical Monthly (American Mathematical Society) 62 (10): 721.
- Anning, Norman (proposer); Trigg, C. W. (solver) (Feb. 1953), "Elementary problem 1016", American Mathematical Monthly (American Mathematical Society) 60 (2): 115.
- Anton, Howard (1987), Elementary Linear Algebra, John Wiley & Sons.
- Arrow, J. (1963), Social Choice and Individual Values, Wiley.
- Ball, W.W. (1962), Mathematical Recreations and Essays, MacMillan (revised by H.S.M. Coxeter).
- Bennett, William (March 15, 1993), "Quantifying America's Decline", Wall Street Journal
- Birkhoff, Garrett; MacLane, Saunders (1965), Survey of Modern Algebra, Macmillan.
- Bittinger, Marvin (proposer) (Jan. 1973), "Quickie 578", Mathematics Magazine (American Mathematical Society) 46 (5): 286,296.
- Blass, A. (1984), "Existence of Bases Implies the Axiom of Choice", in Baumgartner, J. E., Axiomatic Set Theory, Providence RI: American Mathematical Society, pp. 31–33.
- Bridgman, P. W. (1931), Dimensional Analysis, Yale University Press.
- Casey, John (1890), The Elements of Euclid, Books I to VI and XI (9th ed.), Hodges, Figgis, and Co..
- Clark, David H.; Coupe, John D. (Mar. 1967), "The Bangor Area Economy Its Present and Future", Report to the City of Bangor, ME.
- Clarke, Arthur C. (1982), Great SF Stories 8: Technical Error, DAW Books.
- Courant, Richard; Robbins, Herbert (1978), What is Mathematics?, Oxford University Press.
- Coxeter, H.S.M. (1974), Projective Geometry (Second ed.), Springer-Verlag.
- Cullen, Charles G. (1990), Matrices and Linear Transformations (Second ed.), Dover.
- Dalal, Siddhartha; Folkes, Edward; Hoadley, Bruce (Fall 1989), "Lessons Learned from Challenger: A Statistical Perspective", Stats: the Magazine for Students of Statistics: 14-18
- Davies, Thomas D. (Jan. 1990), "New Evidence Places Peary at the Pole", National Geographic Magazine 177 (1): 44.
- de Mestre, Neville (1990), The Mathematics of Projectiles in sport, Cambridge University Press.
- De Parville (1884), La Nature, I, Paris, pp. 285-286.
- Duncan, Dewey (proposer); Quelch, W. H. (solver) (Sept.-Oct. 1952), Mathematics Magazine 26 (1): 48
- Dudley, Underwood (proposer); Lebow, Arnold (proposer); Rothman, David (solver) (Jan. 1963), "Elementary problem 1151", American Mathematical Monthly 70 (1): 93.
- Ebbing, Darrell D. (1993), General Chemistry (Fourth ed.), Houghton Mifflin.
- Ebbinghaus, H. D. (1990), Numbers, Springer-Verlag.
- Eggar, M.H. (Aug./Sept. 1998), "Pinhole Cameras, Perspective, and Projective Geometry", American Mathematical Monthly (American Mathematical Society): 618-630.
- Einstein, A. (1911), Annals of Physics 35: 686.
- Feller, William (1968), An Introduction to Probability Theory and Its Applications, 1 (3rd ed.), Wiley.
- Gardner, Martin (May. 1957), "Mathematical Games: About the remarkable similarity between the Icosian Game and the Tower of Hanoi", Scientific American: 150-154.
- Gardner, Martin (April 1970), "Mathematical Games, Some mathematical curiosities embedded in the solar system", Scientific American: 108-112.
- Gardner, Martin (October 1974), "Mathematical Games, On the paradoxical situations that arise from nontransitive relations", Scientific American.
- Gardner, Martin (October 1980), "Mathematical Games, From counting votes to making votes count: the mathematics of elections", Scientific American.
- Gardner, Martin (1990), The New Ambidextrous Universe (Third revised ed.), W. H. Freeman and Company.
- Gilbert, George T.; Krusemeyer, Mark; Larson, Loren C. (1993), The Wohascum County Problem Book, The Mathematical Association of America.
- Giordano, R.; Jaye, M.; Weir, M. (1986), "The Use of Dimensional Analysis in Mathematical Modeling", UMAP Modules (COMAP) (632).
- Giordano, R.; Wells, M.; Wilde, C. (1987), "Dimensional Analysis", UMAP Modules (COMAP) (526).
- Goult, R.J.; Hoskins, R.F.; Milner, J.A.; Pratt, M.J. (1975), Computational Methods in Linear Algebra, Wiley.
- Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1988), Concrete Mathematics, Addison-Wesley.
- Haggett, Vern (proposer); Saunders, F. W. (solver) (Apr. 1955), "Elementary problem 1135", American Mathematical Monthly (American Mathematical Society) 62 (5): 257.
- Halmos, Paul P. (1958), Finite Dimensional Vector Spaces (Second ed.), Van Nostrand.
- Halsey, William D. (1979), Macmillan Dictionary, Macmillan.
- Hamming, Richard W. (1971), Introduction to Applied Numerical Analysis, Hemisphere Publishing.
- Hanes, Kit (1990), "Analytic Projective Geometry and its Applications", UMAP Modules (UMAP UNIT 710): 111.
- Heath, T. (1956), Euclid's Elements, 1, Dover.
- Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice Hall
- Hofstadter, Douglas R. (1985), Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books.
- Iosifescu, Marius (1980), Finite Markov Processes and Their Applications, UMI Research Press.
- Ivanoff, V. F. (proposer); Esty, T. C. (solver) (Feb. 1933), "Problem 3529", American Mathematical Monthly 39 (2): 118
- Kelton, Christina M.L. (1983), Trends on the Relocation of U.S. Manufacturing, Wiley.
- Kemeny, John G.; Snell, J. Laurie (1960), Finite Markov Chains, D. Van Nostrand.
- Kemp, Franklin (Oct. 1982), "Linear Equations", American Mathematical Monthly (American Mathematical Society): 608.
- Klamkin, M. S. (proposer) (Jan.-Feb. 1957), "Trickie T-27", Mathematics Magazine 30 (3): 173.
- Knuth, Donald E. (1988), The Art of Computer Programming, Addison Wesley.
- Leontief, Wassily W. (Oct. 1951), "Input-Output Economics", Scientific American 185 (4): 15.
- Leontief, Wassily W. (Apr. 1965), "The Structure of the U.S. Economy", Scientific American 212 (4): 25.
- Liebeck, Hans. (Dec. 1966), "A Proof of the Equality of Column Rank and Row Rank of a Matrix", American Mathematical Monthly (American Mathematical Society) 73 (10): 1114.
- Macdonald, Kenneth; Ridge, John (1988), "Social Mobility", British Social Trends Since 1900 (Macmillan).
- Morrison, Clarence C. (proposer) (1967), "Quickie", Mathematics Magazine 40 (4): 232.
- Munkres, James R. (1964), Elementary Linear Algebra, Addison-Wesley.
- Neimi, G.; Riker, W. (June 1976), "The Choice of Voting Systems", Scientific American: 21-27.
- O'Hanian, Hans (1985), Physics, 1, W. W. Norton
- O'Nan, Michael (1990), Linear Algebra (3rd ed.), Harcourt College Pub.
- Oakley, Cletus; Baker, Justine (April 1977), "Least Squares and the 3:40 Mile", Mathematics Teacher
- Pólya, G. (1954), Mathematics and Plausible Reasoning: Volume II Patterns of Plausible Inference, Princeton University Press
- Peterson, G. M. (Apr. 1955), "Area of a triangle", American Mathematical Monthly (American Mathematical Society) 62 (4): 249.
- Poundstone, W. (2008), Gaming the vote, Hill and Wang, ISBN 978-0-8090-4893-9.
- Ransom, W. R. (proposer); Gupta, Hansraj (solver) (Jan. 1935), "Elementary problem 105", American Mathematical Monthly 42 (1): 47.
- Rice, John R. (1993), Numerical Methods, Software, and Analysis, Academic Press.
- Rucker, Rudy (1982), Infinity and the Mind, Birkhauser.
- Rupp, C. A. (proposer); Aude, H. T. R. (solver) (Jun.-July 1931), "Problem 3468", American Mathematical Monthly (American Mathematical Society) 37 (6): 355.
- Ryan, Patrick J. (1986), Euclidean and Non-Euclidean Geometry: an Analytic Approach, Cambridge University Press.
- Salkind, Charles T. (1975), Contest Problem Book No 1: Annual High School Mathematics Examinations 1950-1960.
- Seidenberg, A. (1962), Lectures in Projective Geometry, Van Nostrand.
- Silverman, D. L. (proposer); Trigg, C. W. (solver) (Jan. 1963), "Quickie 237", Mathematics Magazine (American Mathematical Society) 36 (1).
- Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich
- Strang, Gilbert (Nov. 1993), "The Fundamental Theorem of Linear Algebra", American Mathematical Monthly (American Mathematical Society): 848-855.
- Taylor, Alan D. (1995), Mathematics and Politics: Strategy, Voting, Power, and Proof, Springer-Verlag.
- Tilley, Burt, Private Communication.
- Trigg, C. W. (proposer); Walker, R. J. (solver) (Jan. 1949), "Elementary Problem 813", American Mathematical Monthly (American Mathematical Society) 56 (1).
- Trigg, C. W. (proposer) (Jan. 1963), "Quickie 307", Mathematics Magazine (American Mathematical Society) 36 (1): 77.
- Trono, Tony (compiler) (1991), University of Vermont Mathematics Department High School Prize Examinations 1958-1991, mimeographed printing
- Walter, Dan (proposer); Tytun, Alex (solver) (1949), "Elementary problem 834", American Mathematical Monthly (American Mathematical Society) 56 (6): 409.
- Weston, J. D. (Aug./Sept. 1959), "Volume in Vector Spaces", American Mathematical Monthly (American Mathematical Society) 66 (7): 575-577.
- Weyl, Hermann (1952), Symmetry, Princeton University Press.
- Wickens, Thomas D. (1982), Models for Behavior, W.H. Freeman.
- Wilansky, Albert (Nov. 1951), "The Row-Sum of the Inverse Matrix", American Mathematical Monthly (American Mathematical Society) 58 (9): 614.
- Wilkinson, J. H. (1965), The Algebraic Eigenvalue Problem, Oxford University Press.
- Yaglom, I. M. (1988), Felix Klein and Sophus Lie: Evolution of the Idea of Symmetry in the Nineteenth Century, Birkhäuser.
- Zwicker, S. (1991), "The Voters' Paradox, Spin, and the Borda Count", Mathematical Social Sciences 22: 187-227
Index
A
accuracy
addition
Arithmetic-Geometric Mean Inequality
B
base step
basis
- change of
- definition
- natural
- orthogonal
- orthogonalization
- orthonormal
- standard 1, 2
- standard over the complex numbers
- string
C
classes
canonical form
characteristic
circuits
complex numbers
coordinates
D
determinant
- cofactor
- Cramer's Rule
- definition
- exists 1, 2, 3
- Laplace Expansion
- minor
- Vandermonde
- permutation expansion 1, 2
dilation
E
eigenvalue
eigenvector
elementary
elementary reduction operations
equivalence
F
finite-dimensional vector space
function
- argument
- codomain
- composition
- correspondence
- domain
- even
- identity
- inverse 1, 2
- inverse image
- left inverse
- multilinear
- range
- restriction
- odd
- one-to-one function
- onto
- right inverse
- structure preserving 1, 2
- two sided inverse
- value
- well-defined
- zero
Fundamental Theorem
G
Gram-Schmidt Orthogonalization
H
historyless
homomorphism
- composition
- matrix representation 1, 2, 3
- nonsingular 1, 2
- nullity
- nullspace
- rank 1, 2
- rangespace
- rank
- zero
I
image
index
inductive step
J
K
kernel