In this subsection we will focus on the property of Corollary 2.4.
A transformation $t\colon V \to V$ has a scalar eigenvalue $\lambda$ if there is a nonzero eigenvector $\vec{\zeta} \in V$ such that $t(\vec{\zeta}) = \lambda \cdot \vec{\zeta}$.
("Eigen" is German for "characteristic of" or "peculiar to"; some authors call these characteristic values and vectors. No authors call them "peculiar".)
The projection map
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \;\stackrel{\pi}{\longmapsto}\; \begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$
has an eigenvalue of $1$ associated with any eigenvector of the form
$$\begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$
where $x$ and $y$ are scalars at least one of which is non-$0$. On the other hand, $2$ is not an eigenvalue of $\pi$ since no non-$\vec{0}$ vector is doubled.
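To verify the eigenvalue $1$ claim directly, apply the map to such a vector; it is fixed.
$$\begin{pmatrix} x \\ y \\ 0 \end{pmatrix} \;\stackrel{\pi}{\longmapsto}\; \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} = 1 \cdot \begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$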
That example shows why the "non-$\vec{0}$" appears in the definition. Disallowing $\vec{0}$ as an eigenvector eliminates trivial eigenvalues: otherwise every scalar $\lambda$ would qualify, since $t(\vec{0}) = \lambda \cdot \vec{0}$ for every $\lambda$.
The only transformation on the trivial space $\{\vec{0}\}$ is $\vec{0} \mapsto \vec{0}$.
This map has no eigenvalues because there are no non-$\vec{0}$ vectors $\vec{v}$ mapped to a scalar multiple $\lambda \cdot \vec{v}$ of themselves.
Consider the homomorphism $t\colon \mathcal{P}_1 \to \mathcal{P}_1$ given by $c_0 + c_1x \mapsto (c_0 + c_1) + (c_0 + c_1)x$. The range of $t$ is one-dimensional. Thus an application of $t$ to a vector in the range will simply rescale that vector: $c + cx \mapsto 2c + 2cx$. That is, $t$ has an eigenvalue of $2$ associated with eigenvectors of the form $c + cx$ where $c \neq 0$.
This map also has an eigenvalue of $0$ associated with eigenvectors of the form $c - cx$ where $c \neq 0$.
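To verify that second claim, apply the map: here $c_0 = c$ and $c_1 = -c$, so the coefficient sum is zero.
$$c - cx \;\longmapsto\; (c - c) + (c - c)x = 0 + 0x = 0 \cdot (c - cx)$$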
A square matrix $T$ has a scalar eigenvalue $\lambda$ associated with the non-$\vec{0}$ eigenvector $\vec{\zeta}$ if $T\vec{\zeta} = \lambda \cdot \vec{\zeta}$.
Although this extension from maps to matrices is obvious, there is a point that must be made. Eigenvalues of a map are also the eigenvalues of matrices representing that map, and so similar matrices have the same eigenvalues. But the eigenvectors are different: similar matrices need not have the same eigenvectors.
For instance, consider again the transformation $t\colon \mathcal{P}_1 \to \mathcal{P}_1$ given by $c_0 + c_1x \mapsto (c_0 + c_1) + (c_0 + c_1)x$. It has an eigenvalue of $2$ associated with eigenvectors of the form $c + cx$ where $c \neq 0$. If we represent $t$ with respect to $B = \langle 1 + 1x,\; 1 - 1x \rangle$
$$T = \operatorname{Rep}_{B,B}(t) = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$$
then $2$ is an eigenvalue of $T$, associated with these eigenvectors.
$$\left\{ \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} \,\middle|\, \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} c_0 \\ c_1 \end{pmatrix} = \begin{pmatrix} 2c_0 \\ 2c_1 \end{pmatrix} \right\} = \left\{ \begin{pmatrix} c_0 \\ 0 \end{pmatrix} \,\middle|\, c_0 \neq 0 \right\}$$
On the other hand, representing $t$ with respect to $D = \langle 2 + 1x,\; 1 + 0x \rangle$ gives
$$S = \operatorname{Rep}_{D,D}(t) = \begin{pmatrix} 3 & 1 \\ -3 & -1 \end{pmatrix}$$
and the eigenvectors of $S$ associated with the eigenvalue $2$ are these.
$$\left\{ \begin{pmatrix} c_0 \\ -c_0 \end{pmatrix} \,\middle|\, c_0 \neq 0 \right\}$$
Thus similar matrices can have different eigenvectors.
Here is an informal description of what's happening. The underlying transformation doubles the eigenvectors $c + cx$. But when the matrix representing the transformation is $T = \operatorname{Rep}_{B,B}(t)$ then it "assumes" that column vectors are representations with respect to $B$. In contrast, $S = \operatorname{Rep}_{D,D}(t)$ "assumes" that column vectors are representations with respect to $D$. So the vectors that get doubled by each matrix look different.
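To make the remark concrete, here is a quick side check of ours (not part of the text; it assumes the SymPy library is available, and the variable names are our own) that the similar matrices $T$ and $S$ above share eigenvalues but not eigenvectors.

    from sympy import Matrix

    # T and S represent the same map t with respect to the bases B and D,
    # so they are similar and must share eigenvalues.
    T = Matrix([[2, 0], [0, 0]])
    S = Matrix([[3, 1], [-3, -1]])

    print(T.eigenvals())  # eigenvalues 2 and 0, each of multiplicity 1
    print(S.eigenvals())  # the same two eigenvalues
    # The eigenvectors differ: T doubles multiples of (1, 0) while S
    # doubles multiples of (1, -1), matching the sets displayed above.
    print(T.eigenvects())
    print(S.eigenvects())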
The next example illustrates the basic tool for finding eigenvectors and eigenvalues.
What are the eigenvalues and eigenvectors of this matrix?
$$T = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{pmatrix}$$
To find the scalars $x$ such that $T\vec{\zeta} = x\vec{\zeta}$ for non-$\vec{0}$ eigenvectors $\vec{\zeta}$, bring everything to the left-hand side
$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} - x\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \vec{0}$$
and factor $(T - xI)\vec{\zeta} = \vec{0}$. (Note that it says $T - xI$; the expression $T - x$ doesn't make sense because $T$ is a matrix while $x$ is a scalar.) This homogeneous linear system
$$\begin{pmatrix} 1-x & 2 & 1 \\ 2 & -x & -2 \\ -1 & 2 & 3-x \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
has a non-$\vec{0}$ solution $\vec{\zeta}$ if and only if the matrix is singular. We can determine when that happens.
$$0 = |T - xI| = \begin{vmatrix} 1-x & 2 & 1 \\ 2 & -x & -2 \\ -1 & 2 & 3-x \end{vmatrix} = -x(x-2)^2$$
The eigenvalues are $\lambda_1 = 0$ and $\lambda_2 = 2$. To find the associated eigenvectors, plug in each eigenvalue. Plugging in $\lambda_1 = 0$ gives
$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & -2 \\ -1 & 2 & 3 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad\Longrightarrow\qquad \vec{\zeta} = \begin{pmatrix} a \\ -a \\ a \end{pmatrix}$$
for a scalar parameter $a$ ($a$ is non-$0$ because eigenvectors must be non-$\vec{0}$). In the same way, plugging in $\lambda_2 = 2$ gives
$$\begin{pmatrix} -1 & 2 & 1 \\ 2 & -2 & -2 \\ -1 & 2 & 1 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \qquad\Longrightarrow\qquad \vec{\zeta} = \begin{pmatrix} b \\ 0 \\ b \end{pmatrix}$$
where $b \neq 0$.
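As a side check of this example (again ours, not the text's; it assumes SymPy), the characteristic polynomial and the eigenvectors can be recomputed symbolically.

    from sympy import Matrix, symbols, factor

    x = symbols('x')
    T = Matrix([[1, 2, 1], [2, 0, -2], [-1, 2, 3]])

    # SymPy's charpoly computes det(x*I - T), which has the same roots
    # as det(T - x*I) above.
    print(factor(T.charpoly(x).as_expr()))  # x*(x - 2)**2

    # Each entry is (eigenvalue, algebraic multiplicity, eigenspace basis).
    for val, mult, basis in T.eigenvects():
        print(val, mult, [list(v) for v in basis])
    # 0 goes with multiples of (1, -1, 1); 2 with multiples of (1, 0, 1).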
If
$$S = \begin{pmatrix} \pi & 1 \\ 0 & 3 \end{pmatrix}$$
(here $\pi$ is not a projection map, it is the number $3.14\ldots$) then
$$\begin{vmatrix} \pi - x & 1 \\ 0 & 3 - x \end{vmatrix} = (x - \pi)(x - 3)$$
so $S$ has eigenvalues of $\lambda_1 = \pi$ and $\lambda_2 = 3$. To find associated eigenvectors, first plug in $\lambda_1$ for $x$:
$$\begin{pmatrix} \pi - \pi & 1 \\ 0 & 3 - \pi \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \qquad\Longrightarrow\qquad \vec{\zeta} = \begin{pmatrix} a \\ 0 \end{pmatrix}$$
for a scalar $a \neq 0$, and then plug in $\lambda_2$:
$$\begin{pmatrix} \pi - 3 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \qquad\Longrightarrow\qquad \vec{\zeta} = \begin{pmatrix} b \\ (3 - \pi)b \end{pmatrix}$$
where $b \neq 0$.
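As a check on that second family, multiplying it out shows the vector is indeed tripled.
$$\begin{pmatrix} \pi & 1 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} b \\ (3-\pi)b \end{pmatrix} = \begin{pmatrix} \pi b + (3-\pi)b \\ 3(3-\pi)b \end{pmatrix} = 3 \cdot \begin{pmatrix} b \\ (3-\pi)b \end{pmatrix}$$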
The characteristic polynomial of a square matrix $T$ is the determinant of the matrix $T - xI$, where $x$ is a variable. The characteristic equation is $|T - xI| = 0$. The characteristic polynomial of a transformation $t$ is the polynomial of any $\operatorname{Rep}_{B,B}(t)$.
Problem 11 checks that the characteristic polynomial of a transformation is well-defined, that is, any choice of basis yields the same polynomial.
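The heart of that exercise is a short computation: if $\hat{T} = PTP^{-1}$ represents the same map with respect to another basis then, because $xI = P(xI)P^{-1}$ and determinants multiply,
$$|\hat{T} - xI| = |PTP^{-1} - xI| = |P(T - xI)P^{-1}| = |P| \cdot |T - xI| \cdot |P^{-1}| = |T - xI|$$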
A linear transformation on a nontrivial vector space has at least one eigenvalue.
Any root of the characteristic polynomial is an eigenvalue. Over the complex numbers, any polynomial of degree one or greater has a root. (This is the reason that in this chapter we've gone to scalars that are complex.)
Notice the familiar form of the sets of eigenvectors in the above examples.
The eigenspace of a transformation $t$ associated with the eigenvalue $\lambda$ is $V_\lambda = \{\vec{\zeta} \mid t(\vec{\zeta}) = \lambda\vec{\zeta}\}$. The eigenspace of a matrix is defined analogously.
An eigenspace is a subspace.
An eigenspace must be nonempty (for one thing it contains the zero vector) and so we need only check closure. Take vectors $\vec{\zeta}_1, \ldots, \vec{\zeta}_n$ from $V_\lambda$, to show that any linear combination is in $V_\lambda$
$$t(c_1\vec{\zeta}_1 + c_2\vec{\zeta}_2 + \cdots + c_n\vec{\zeta}_n) = c_1 t(\vec{\zeta}_1) + \cdots + c_n t(\vec{\zeta}_n) = c_1\lambda\vec{\zeta}_1 + \cdots + c_n\lambda\vec{\zeta}_n = \lambda(c_1\vec{\zeta}_1 + \cdots + c_n\vec{\zeta}_n)$$
(the second equality holds even if any $\vec{\zeta}_i$ is $\vec{0}$ since $t(\vec{0}) = \lambda \cdot \vec{0} = \vec{0}$).
In Example 3.8 the eigenspace associated with the eigenvalue $\pi$ and the eigenspace associated with the eigenvalue $3$ are these.
$$V_\pi = \left\{ \begin{pmatrix} a \\ 0 \end{pmatrix} \,\middle|\, a \in \mathbb{C} \right\} \qquad V_3 = \left\{ \begin{pmatrix} b \\ (3-\pi)b \end{pmatrix} \,\middle|\, b \in \mathbb{C} \right\}$$
In Example 3.7, these are the eigenspaces associated with the eigenvalues $0$ and $2$.
$$V_0 = \left\{ \begin{pmatrix} a \\ -a \\ a \end{pmatrix} \,\middle|\, a \in \mathbb{C} \right\} \qquad V_2 = \left\{ \begin{pmatrix} b \\ 0 \\ b \end{pmatrix} \,\middle|\, b \in \mathbb{C} \right\}$$
The characteristic equation is $0 = x(x-2)^2$ so in some sense $2$ is an eigenvalue "twice". However there are not "twice" as many eigenvectors, in that the dimension of the eigenspace $V_2$ is one, not two. The next example shows a case where a number, $1$, is a double root of the characteristic equation and the dimension of the associated eigenspace is two.
With respect to the standard bases, this matrix
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
represents the projection map.
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \;\longmapsto\; \begin{pmatrix} x \\ y \\ 0 \end{pmatrix}$$
Its eigenspace associated with the eigenvalue $0$ and its eigenspace associated with the eigenvalue $1$ are easy to find.
$$V_0 = \left\{ \begin{pmatrix} 0 \\ 0 \\ c_3 \end{pmatrix} \,\middle|\, c_3 \in \mathbb{C} \right\} \qquad V_1 = \left\{ \begin{pmatrix} c_1 \\ c_2 \\ 0 \end{pmatrix} \,\middle|\, c_1, c_2 \in \mathbb{C} \right\}$$
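The difference between the two double roots can be seen in one computation (again a side check of ours, assuming SymPy): compare the dimensions of the relevant null spaces.

    from sympy import Matrix, eye

    T = Matrix([[1, 2, 1], [2, 0, -2], [-1, 2, 3]])  # the earlier example
    P = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])    # the projection matrix

    # For the double root 2 of T there is only one independent eigenvector...
    print(len((T - 2 * eye(3)).nullspace()))  # 1
    # ...while for the double root 1 of P there is a plane of them.
    print(len((P - eye(3)).nullspace()))      # 2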
By the lemma, if two eigenvectors $\vec{v}_1$ and $\vec{v}_2$ are associated with the same eigenvalue then any linear combination of those two is also an eigenvector associated with that same eigenvalue. But, if two eigenvectors $\vec{v}_1$ and $\vec{v}_2$ are associated with different eigenvalues then the sum $\vec{v}_1 + \vec{v}_2$ need not be related to the eigenvalue of either one. In fact, just the opposite holds: if the eigenvalues are different then the eigenvectors are not linearly related.
For any set of distinct eigenvalues of a map or matrix, a set of associated eigenvectors, one per eigenvalue, is linearly independent.
We will use induction on the number of eigenvalues. If there is no eigenvalue or only one eigenvalue then the set of associated eigenvectors is empty or is a singleton set with a non-$\vec{0}$ member, and in either case is linearly independent.
For induction, assume that the theorem is true for any set of $k$ distinct eigenvalues, suppose that $\lambda_1, \ldots, \lambda_{k+1}$ are distinct eigenvalues, and let $\vec{v}_1, \ldots, \vec{v}_{k+1}$ be associated eigenvectors. If
$$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_{k+1}\vec{v}_{k+1} = \vec{0}$$
then after multiplying both sides of the displayed equation by $\lambda_{k+1}$, applying the map or matrix to both sides of the displayed equation, and subtracting the first result from the second, we have this.
$$c_1(\lambda_{k+1} - \lambda_1)\vec{v}_1 + \cdots + c_k(\lambda_{k+1} - \lambda_k)\vec{v}_k = \vec{0}$$
The induction hypothesis now applies: $c_1(\lambda_{k+1} - \lambda_1) = 0, \ldots, c_k(\lambda_{k+1} - \lambda_k) = 0$. Thus, as all the eigenvalues are distinct, $c_1, \ldots, c_k$ are all $0$. Finally, now $c_{k+1}$ must be $0$ because we are left with the equation $c_{k+1}\vec{v}_{k+1} = \vec{0}$ and $\vec{v}_{k+1} \neq \vec{0}$.
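For a concrete instance of that subtraction step, take $k = 1$: if $c_1\vec{v}_1 + c_2\vec{v}_2 = \vec{0}$ with $\lambda_1 \neq \lambda_2$ then multiplying by $\lambda_2$ gives $c_1\lambda_2\vec{v}_1 + c_2\lambda_2\vec{v}_2 = \vec{0}$ while applying the map gives $c_1\lambda_1\vec{v}_1 + c_2\lambda_2\vec{v}_2 = \vec{0}$, and subtracting leaves
$$c_1(\lambda_2 - \lambda_1)\vec{v}_1 = \vec{0}$$
so that $c_1 = 0$, and then $c_2 = 0$ follows.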
The eigenvalues of this matrix
$$T = \begin{pmatrix} 2 & -2 & 2 \\ 0 & 1 & 1 \\ -4 & 8 & 3 \end{pmatrix}$$
are distinct: $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 3$. A set of associated eigenvectors like
$$\left\{ \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix},\; \begin{pmatrix} 9 \\ 4 \\ 4 \end{pmatrix},\; \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} \right\}$$
is linearly independent.
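To confirm the independence claim numerically (our own check, assuming SymPy), put the three eigenvectors in the columns of a matrix; a nonzero determinant means the columns are independent.

    from sympy import Matrix

    # Columns are the eigenvectors for the eigenvalues 1, 2, and 3.
    V = Matrix([[2, 9, 2], [1, 4, 1], [0, 4, 2]])
    print(V.det())  # -2, nonzero, so the columns are linearly independent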
An $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable.
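The reason is that, by the theorem, the $n$ associated eigenvectors are linearly independent and so form a basis, and with respect to that basis the matrix is diagonal. To illustrate on the previous example (a sketch of ours, assuming SymPy): since the three eigenvalues are distinct, changing to the basis of eigenvectors diagonalizes the matrix.

    from sympy import Matrix

    T = Matrix([[2, -2, 2], [0, 1, 1], [-4, 8, 3]])
    V = Matrix([[2, 9, 2], [1, 4, 1], [0, 4, 2]])  # eigenvector columns

    # Conjugating by the eigenvector matrix produces a diagonal matrix
    # with the eigenvalues 1, 2, 3 on the diagonal.
    print(V.inv() * T * V)  # Matrix([[1, 0, 0], [0, 2, 0], [0, 0, 3]])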