# Linear Algebra/Diagonalizability/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Repeat Example 2.5 for the matrix from Example 2.2.

Because the basis vectors are chosen arbitrarily, many different answers are possible. However, here is one way to go. To diagonalize

${\displaystyle T={\begin{pmatrix}4&-2\\1&1\end{pmatrix}}}$

take it as the representation of a transformation with respect to the standard basis ${\displaystyle T={\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{2}}(t)}$ and look for ${\displaystyle B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2}\rangle }$ such that

${\displaystyle {\rm {Rep}}_{B,B}(t)={\begin{pmatrix}\lambda _{1}&0\\0&\lambda _{2}\end{pmatrix}}}$

that is, such that ${\displaystyle t({\vec {\beta }}_{1})=\lambda _{1}{\vec {\beta }}_{1}}$ and ${\displaystyle t({\vec {\beta }}_{2})=\lambda _{2}{\vec {\beta }}_{2}}$.

${\displaystyle {\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\vec {\beta }}_{1}=\lambda _{1}\cdot {\vec {\beta }}_{1}\qquad {\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\vec {\beta }}_{2}=\lambda _{2}\cdot {\vec {\beta }}_{2}}$

We are looking for scalars ${\displaystyle x}$ such that this equation

${\displaystyle {\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}=x\cdot {\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}}$

has solutions ${\displaystyle b_{1}}$ and ${\displaystyle b_{2}}$, which are not both zero. Rewrite that as a linear system

${\displaystyle {\begin{array}{*{2}{rc}r}(4-x)\cdot b_{1}&+&-2\cdot b_{2}&=&0\\1\cdot b_{1}&+&(1-x)\cdot b_{2}&=&0\end{array}}}$

If ${\displaystyle x=4}$ then the first equation gives that ${\displaystyle b_{2}=0}$, and then the second equation gives that ${\displaystyle b_{1}=0}$. The case where both ${\displaystyle b}$'s are zero is disallowed so we can assume that ${\displaystyle x\neq 4}$.

${\displaystyle {\xrightarrow[{}]{(-1/(4-x))\rho _{1}+\rho _{2}}}\;{\begin{array}{*{2}{rc}r}(4-x)\cdot b_{1}&+&-2\cdot b_{2}&=&0\\&&((x^{2}-5x+6)/(4-x))\cdot b_{2}&=&0\end{array}}}$

Consider the bottom equation. If ${\displaystyle b_{2}=0}$ then the first equation gives ${\displaystyle b_{1}=0}$ or ${\displaystyle x=4}$. The ${\displaystyle b_{1}=b_{2}=0}$ case is disallowed. The other possibility for the bottom equation is that the numerator of the fraction ${\displaystyle x^{2}-5x+6=(x-2)(x-3)}$ is zero. The ${\displaystyle x=2}$ case gives a first equation of ${\displaystyle 2b_{1}-2b_{2}=0}$, and so associated with ${\displaystyle x=2}$ we have vectors whose first and second components are equal:

${\displaystyle {\vec {\beta }}_{1}={\begin{pmatrix}1\\1\end{pmatrix}}\qquad {\text{(so }}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}1\\1\end{pmatrix}}=2\cdot {\begin{pmatrix}1\\1\end{pmatrix}}{\text{, and }}\lambda _{1}=2{\text{).}}}$

If ${\displaystyle x=3}$ then the first equation is ${\displaystyle b_{1}-2b_{2}=0}$ and so the associated vectors are those whose first component is twice their second:

${\displaystyle {\vec {\beta }}_{2}={\begin{pmatrix}2\\1\end{pmatrix}}\qquad {\text{(so }}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}2\\1\end{pmatrix}}=3\cdot {\begin{pmatrix}2\\1\end{pmatrix}}{\text{, and so }}\lambda _{2}=3{\text{).}}}$

The arrow diagram relating the two representations (picture omitted) shows how to get the diagonalization.

${\displaystyle {\begin{pmatrix}2&0\\0&3\end{pmatrix}}={\begin{pmatrix}1&2\\1&1\end{pmatrix}}^{-1}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}1&2\\1&1\end{pmatrix}}}$

Comment. This equation matches the ${\displaystyle T=PSP^{-1}}$ definition under this renaming.

${\displaystyle T={\begin{pmatrix}2&0\\0&3\end{pmatrix}}\quad P={\begin{pmatrix}1&2\\1&1\end{pmatrix}}^{-1}\quad P^{-1}={\begin{pmatrix}1&2\\1&1\end{pmatrix}}\quad S={\begin{pmatrix}4&-2\\1&1\end{pmatrix}}}$
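As a numeric sanity check (not part of the book's solution), the similarity computation can be carried out in a few lines of pure Python; `mat_mul` and `mat_inv` are ad-hoc helpers for 2x2 matrices, not library routines.

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of a 2x2 matrix by the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

S = [[4, -2], [1, 1]]   # the matrix from Example 2.2
P = [[1, 2], [1, 1]]    # columns are the eigenvectors (1,1) and (2,1)
D = mat_mul(mat_inv(P), mat_mul(S, P))
print(D)                # [[2.0, 0.0], [0.0, 3.0]]
```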
Problem 2

Diagonalize these upper triangular matrices.

1. ${\displaystyle {\begin{pmatrix}-2&1\\0&2\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}5&4\\0&1\end{pmatrix}}}$
1. Setting up
${\displaystyle {\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}=x\cdot {\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}\qquad \Longrightarrow \qquad {\begin{array}{*{2}{rc}r}(-2-x)\cdot b_{1}&+&b_{2}&=&0\\&&(2-x)\cdot b_{2}&=&0\end{array}}}$
gives the two possibilities that ${\displaystyle b_{2}=0}$ and ${\displaystyle x=2}$. Following the ${\displaystyle b_{2}=0}$ possibility leads to the first equation ${\displaystyle (-2-x)b_{1}=0}$ with the two cases that ${\displaystyle b_{1}=0}$ and that ${\displaystyle x=-2}$. Thus, under this first possibility, we find ${\displaystyle x=-2}$ and the associated vectors whose second component is zero, and whose first component is free.
${\displaystyle {\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}b_{1}\\0\end{pmatrix}}=-2\cdot {\begin{pmatrix}b_{1}\\0\end{pmatrix}}\qquad {\vec {\beta }}_{1}={\begin{pmatrix}1\\0\end{pmatrix}}}$
Following the other possibility leads to a first equation of ${\displaystyle -4b_{1}+b_{2}=0}$ and so the vectors associated with this solution have a second component that is four times their first component.
${\displaystyle {\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}b_{1}\\4b_{1}\end{pmatrix}}=2\cdot {\begin{pmatrix}b_{1}\\4b_{1}\end{pmatrix}}\qquad {\vec {\beta }}_{2}={\begin{pmatrix}1\\4\end{pmatrix}}}$
The diagonalization is this.
${\displaystyle {\begin{pmatrix}1&1\\0&4\end{pmatrix}}^{-1}{\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}1&1\\0&4\end{pmatrix}}={\begin{pmatrix}-2&0\\0&2\end{pmatrix}}}$
2. The calculations are like those in the prior part.
${\displaystyle {\begin{pmatrix}5&4\\0&1\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}=x\cdot {\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}\qquad \Longrightarrow \qquad {\begin{array}{*{2}{rc}r}(5-x)\cdot b_{1}&+&4\cdot b_{2}&=&0\\&&(1-x)\cdot b_{2}&=&0\end{array}}}$
The bottom equation gives the two possibilities that ${\displaystyle b_{2}=0}$ and ${\displaystyle x=1}$. Following the ${\displaystyle b_{2}=0}$ possibility, and discarding the case where both ${\displaystyle b_{2}}$ and ${\displaystyle b_{1}}$ are zero, gives that ${\displaystyle x=5}$, associated with vectors whose second component is zero and whose first component is free.
${\displaystyle {\vec {\beta }}_{1}={\begin{pmatrix}1\\0\end{pmatrix}}}$
The ${\displaystyle x=1}$ possibility gives a first equation of ${\displaystyle 4b_{1}+4b_{2}=0}$ and so the associated vectors have a second component that is the negative of their first component.
${\displaystyle {\vec {\beta }}_{2}={\begin{pmatrix}1\\-1\end{pmatrix}}}$
We thus have this diagonalization.
${\displaystyle {\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}{\begin{pmatrix}5&4\\0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}={\begin{pmatrix}5&0\\0&1\end{pmatrix}}}$
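Both diagonalizations can be verified numerically with a short pure-Python sketch (the helpers are ad hoc, not from a library).

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of a 2x2 matrix by the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Part 1: the columns of P1 are the eigenvectors (1,0) and (1,4).
T1, P1 = [[-2, 1], [0, 2]], [[1, 1], [0, 4]]
D1 = mat_mul(mat_inv(P1), mat_mul(T1, P1))

# Part 2: the columns of P2 are the eigenvectors (1,0) and (1,-1).
T2, P2 = [[5, 4], [0, 1]], [[1, 1], [0, -1]]
D2 = mat_mul(mat_inv(P2), mat_mul(T2, P2))

print(D1)   # [[-2.0, 0.0], [0.0, 2.0]]
print(D2)   # [[5.0, 0.0], [0.0, 1.0]]
```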
This exercise is recommended for all readers.
Problem 3

What form do the powers of a diagonal matrix have?

For any nonnegative integer ${\displaystyle p}$ (and for negative ${\displaystyle p}$ as well, provided that no diagonal entry is zero),

${\displaystyle {\begin{pmatrix}d_{1}&0&\\0&\ddots &\\&&d_{n}\end{pmatrix}}^{p}={\begin{pmatrix}d_{1}^{p}&0&\\0&\ddots &\\&&d_{n}^{p}\end{pmatrix}}.}$
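For instance, cubing a diagonal matrix just cubes each diagonal entry, as a quick pure-Python check (with an ad-hoc `mat_mul` helper) confirms.

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[2, 0], [0, 3]]
D_cubed = mat_mul(D, mat_mul(D, D))
print(D_cubed)   # [[8, 0], [0, 27]], i.e. the diagonal of 2^3 and 3^3
```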
Problem 4

Give two same-sized diagonal matrices that are not similar. Must any two different diagonal matrices come from different similarity classes?

These two are not similar

${\displaystyle {\begin{pmatrix}0&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}1&0\\0&1\end{pmatrix}}}$

because each is alone in its similarity class.

For the second half, these

${\displaystyle {\begin{pmatrix}2&0\\0&3\end{pmatrix}}\qquad {\begin{pmatrix}3&0\\0&2\end{pmatrix}}}$

are similar via the matrix that changes bases from ${\displaystyle \langle {\vec {\beta }}_{1},{\vec {\beta }}_{2}\rangle }$ to ${\displaystyle \langle {\vec {\beta }}_{2},{\vec {\beta }}_{1}\rangle }$. (Question. Are two diagonal matrices similar if and only if their diagonal entries are permutations of each other's?)
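The swap can be checked concretely: conjugating by the matrix that exchanges the two basis vectors (a matrix that is its own inverse) permutes the diagonal entries. A pure-Python sketch with an ad-hoc helper:

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[2, 0], [0, 3]]
W = [[0, 1], [1, 0]]   # exchanges the two basis vectors; W is its own inverse
print(mat_mul(W, mat_mul(D, W)))   # [[3, 0], [0, 2]]
```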

Problem 5

Give a nonsingular diagonal matrix. Can a diagonal matrix ever be singular?

Contrast these two.

${\displaystyle {\begin{pmatrix}2&0\\0&1\end{pmatrix}}\qquad {\begin{pmatrix}2&0\\0&0\end{pmatrix}}}$

The first is nonsingular, the second is singular.

This exercise is recommended for all readers.
Problem 6

Show that the inverse of a diagonal matrix is the diagonal of the inverses, if no element on that diagonal is zero. What happens when a diagonal entry is zero?

To check that the inverse of a diagonal matrix is the diagonal matrix of the inverses, just multiply.

${\displaystyle {\begin{pmatrix}a_{1,1}&0\\0&a_{2,2}\\&&\ddots \\&&&a_{n,n}\end{pmatrix}}{\begin{pmatrix}1/a_{1,1}&0\\0&1/a_{2,2}\\&&\ddots \\&&&1/a_{n,n}\end{pmatrix}}}$

(Showing that it is a left inverse is just as easy.)

If a diagonal entry is zero then the diagonal matrix is singular; it has a zero determinant.
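A small pure-Python check of the multiplication (ad-hoc helper, with sample diagonal entries 2 and 4):

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[2, 0], [0, 4]]
D_inv = [[1 / D[0][0], 0], [0, 1 / D[1][1]]]   # reciprocals on the diagonal
print(mat_mul(D, D_inv))   # [[1.0, 0.0], [0.0, 1.0]]
print(mat_mul(D_inv, D))   # the identity again, so it is a two-sided inverse
```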

Problem 7

The equation ending Example 2.5

${\displaystyle {\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}{\begin{pmatrix}3&2\\0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}={\begin{pmatrix}3&0\\0&1\end{pmatrix}}}$

is a bit jarring because for ${\displaystyle P}$ we must take the first matrix, which is shown as an inverse, and for ${\displaystyle P^{-1}}$ we take the inverse of the first matrix, so that the two ${\displaystyle -1}$ powers cancel and this matrix is shown without a superscript ${\displaystyle -1}$.

1. Check that this nicer-appearing equation holds.
${\displaystyle {\begin{pmatrix}3&0\\0&1\end{pmatrix}}={\begin{pmatrix}1&1\\0&-1\end{pmatrix}}{\begin{pmatrix}3&2\\0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}}$
2. Is the previous item a coincidence? Or can we always switch the ${\displaystyle P}$ and the ${\displaystyle P^{-1}}$?
1. The check is easy.
${\displaystyle {\begin{pmatrix}1&1\\0&-1\end{pmatrix}}{\begin{pmatrix}3&2\\0&1\end{pmatrix}}={\begin{pmatrix}3&3\\0&-1\end{pmatrix}}\qquad {\begin{pmatrix}3&3\\0&-1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}={\begin{pmatrix}3&0\\0&1\end{pmatrix}}}$
2. It is a coincidence, in the sense that if ${\displaystyle T=PSP^{-1}}$ then ${\displaystyle T}$ need not equal ${\displaystyle P^{-1}SP}$. Even in the case of a diagonal matrix ${\displaystyle D}$, the condition that ${\displaystyle D=PTP^{-1}}$ does not imply that ${\displaystyle D}$ equals ${\displaystyle P^{-1}TP}$. The matrices from Example 2.2 show this.
${\displaystyle {\begin{pmatrix}1&2\\1&1\end{pmatrix}}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}={\begin{pmatrix}6&0\\5&-1\end{pmatrix}}\qquad {\begin{pmatrix}6&0\\5&-1\end{pmatrix}}{\begin{pmatrix}1&2\\1&1\end{pmatrix}}^{-1}={\begin{pmatrix}-6&12\\-6&11\end{pmatrix}}}$
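The failure to commute can also be seen numerically; here is a pure-Python comparison of ${\displaystyle PSP^{-1}}$ with ${\displaystyle P^{-1}SP}$ for the matrices of Example 2.2 (the helpers are ad hoc).

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of a 2x2 matrix by the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

S = [[4, -2], [1, 1]]
P = [[1, 2], [1, 1]]
left = mat_mul(P, mat_mul(S, mat_inv(P)))    # P S P^{-1}
right = mat_mul(mat_inv(P), mat_mul(S, P))   # P^{-1} S P
print(left)    # [[-6.0, 12.0], [-6.0, 11.0]]
print(right)   # [[2.0, 0.0], [0.0, 3.0]] -- the two differ
```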
Problem 8

Show that the ${\displaystyle P}$ used to diagonalize in Example 2.5 is not unique.

The columns of the matrix are chosen as the vectors associated with the ${\displaystyle x}$'s. The exact choice, and the order of the choice, were arbitrary. We could, for instance, get a different matrix by swapping the two columns.

Problem 9

Find a formula for the powers of this matrix. Hint: see Problem 3.

${\displaystyle {\begin{pmatrix}-3&1\\-4&2\end{pmatrix}}}$

Diagonalizing and then taking powers of the diagonal matrix shows that

${\displaystyle {\begin{pmatrix}-3&1\\-4&2\end{pmatrix}}^{k}={\frac {1}{3}}{\begin{pmatrix}-1&1\\-4&4\end{pmatrix}}+{\frac {(-2)^{k}}{3}}{\begin{pmatrix}4&-1\\4&-1\end{pmatrix}}.}$
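The formula can be spot-checked in pure Python by comparing repeated multiplication against the spectral expression, in which the two matrices carry the weights ${\displaystyle 1^{k}}$ and ${\displaystyle (-2)^{k}}$ (the eigenvalues here are ${\displaystyle 1}$ and ${\displaystyle -2}$). The helpers are ad hoc.

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-3, 1], [-4, 2]]

def mat_pow(M, k):
    # k-fold product, for k >= 1.
    R = M
    for _ in range(k - 1):
        R = mat_mul(R, M)
    return R

def formula(k):
    # (1/3)*[[-1,1],[-4,4]] + ((-2)**k / 3)*[[4,-1],[4,-1]], entrywise.
    s = (-2) ** k
    return [[(-1 + 4 * s) / 3, (1 - s) / 3],
            [(-4 + 4 * s) / 3, (4 - s) / 3]]

for k in range(1, 6):
    assert mat_pow(A, k) == formula(k)
print(mat_pow(A, 3))   # [[-11, 3], [-12, 4]]
```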
This exercise is recommended for all readers.
Problem 10

Diagonalize these.

1. ${\displaystyle {\begin{pmatrix}1&1\\0&0\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}0&1\\1&0\end{pmatrix}}}$
1. ${\displaystyle {\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}{\begin{pmatrix}1&1\\0&0\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}={\begin{pmatrix}1&0\\0&0\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&1\\1&-1\end{pmatrix}}^{-1}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}}$
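Both answers can be verified with a short pure-Python computation (ad-hoc helpers; the columns of each ${\displaystyle P}$ are the eigenvectors).

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of a 2x2 matrix by the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

T1, P1 = [[1, 1], [0, 0]], [[1, 1], [0, -1]]   # eigenvectors (1,0) and (1,-1)
T2, P2 = [[0, 1], [1, 0]], [[1, 1], [1, -1]]   # eigenvectors (1,1) and (1,-1)
D1 = mat_mul(mat_inv(P1), mat_mul(T1, P1))
D2 = mat_mul(mat_inv(P2), mat_mul(T2, P2))
print(D1)   # [[1.0, 0.0], [0.0, 0.0]]
print(D2)   # [[1.0, 0.0], [0.0, -1.0]]
```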
Problem 11

We can ask how diagonalization interacts with the matrix operations. Assume that ${\displaystyle t,s:V\to V}$ are each diagonalizable. Is ${\displaystyle ct}$ diagonalizable for all scalars ${\displaystyle c}$? What about ${\displaystyle t+s}$? ${\displaystyle t\circ s}$?

Yes, ${\displaystyle ct}$ is diagonalizable by the final theorem of this subsection.

No, ${\displaystyle t+s}$ need not be diagonalizable. Intuitively, the problem arises when the two maps diagonalize with respect to different bases (that is, when they are not simultaneously diagonalizable). Specifically, these two are diagonalizable but their sum is not:

${\displaystyle {\begin{pmatrix}1&1\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}-1&0\\0&0\end{pmatrix}}}$

(the second is already diagonal; for the first, see Problem 10). The sum is not diagonalizable because it is nonzero yet its square is the zero matrix: a diagonal matrix whose square is zero must itself be the zero matrix, so no matrix similar to the sum is diagonal.

The same intuition suggests that ${\displaystyle t\circ s}$ need not be diagonalizable. These two are diagonalizable but their product is not:

${\displaystyle {\begin{pmatrix}1&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&1\\1&0\end{pmatrix}}}$

(for the second, see Problem 10).
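Both counterexamples can be confirmed concretely; in fact the sum and the product above come out to the same nilpotent matrix. A pure-Python sketch (ad-hoc helper):

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 1], [0, 0]], [[-1, 0], [0, 0]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]   # the sum
C, D = [[1, 0], [0, 0]], [[0, 1], [1, 0]]
P = mat_mul(C, D)                                               # the product
print(S, P)            # both are [[0, 1], [0, 0]]
print(mat_mul(S, S))   # [[0, 0], [0, 0]]: nonzero, yet it squares to zero
```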

This exercise is recommended for all readers.
Problem 12

Show that matrices of this form are not diagonalizable.

${\displaystyle {\begin{pmatrix}1&c\\0&1\end{pmatrix}}\qquad c\neq 0}$

If

${\displaystyle P{\begin{pmatrix}1&c\\0&1\end{pmatrix}}P^{-1}={\begin{pmatrix}a&0\\0&b\end{pmatrix}}}$

then

${\displaystyle P{\begin{pmatrix}1&c\\0&1\end{pmatrix}}={\begin{pmatrix}a&0\\0&b\end{pmatrix}}P}$

so

${\displaystyle {\begin{array}{rl}{\begin{pmatrix}p&q\\r&s\end{pmatrix}}{\begin{pmatrix}1&c\\0&1\end{pmatrix}}&={\begin{pmatrix}a&0\\0&b\end{pmatrix}}{\begin{pmatrix}p&q\\r&s\end{pmatrix}}\\{\begin{pmatrix}p&cp+q\\r&cr+s\end{pmatrix}}&={\begin{pmatrix}ap&aq\\br&bs\end{pmatrix}}\end{array}}}$

The ${\displaystyle 1,1}$ entries show that ${\displaystyle a=1}$ and the ${\displaystyle 1,2}$ entries then show that ${\displaystyle pc=0}$. Since ${\displaystyle c\neq 0}$ this means that ${\displaystyle p=0}$. The ${\displaystyle 2,1}$ entries show that ${\displaystyle b=1}$ and the ${\displaystyle 2,2}$ entries then show that ${\displaystyle rc=0}$. Since ${\displaystyle c\neq 0}$ this means that ${\displaystyle r=0}$. But if both ${\displaystyle p}$ and ${\displaystyle r}$ are ${\displaystyle 0}$ then ${\displaystyle P}$ is not invertible.

Problem 13

Show that each of these is diagonalizable.

1. ${\displaystyle {\begin{pmatrix}1&2\\2&1\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}x&y\\y&z\end{pmatrix}}\qquad x,y,z{\text{ scalars}}}$
1. Using the formula for the inverse of a ${\displaystyle 2\!\times \!2}$ matrix gives this.
${\displaystyle {\begin{array}{rl}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}1&2\\2&1\end{pmatrix}}\cdot {\frac {1}{ad-bc}}\cdot {\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}&={\frac {1}{ad-bc}}{\begin{pmatrix}ad+2bd-2ac-bc&-ab-2b^{2}+2a^{2}+ab\\cd+2d^{2}-2c^{2}-cd&-bc-2bd+2ac+ad\end{pmatrix}}\end{array}}}$
Now pick scalars ${\displaystyle a,\ldots ,d}$ so that ${\displaystyle ad-bc\neq 0}$ and ${\displaystyle 2d^{2}-2c^{2}=0}$ and ${\displaystyle 2a^{2}-2b^{2}=0}$. For example, these will do.
${\displaystyle {\begin{pmatrix}1&1\\1&-1\end{pmatrix}}{\begin{pmatrix}1&2\\2&1\end{pmatrix}}\cdot {\frac {1}{-2}}\cdot {\begin{pmatrix}-1&-1\\-1&1\end{pmatrix}}={\frac {1}{-2}}{\begin{pmatrix}-6&0\\0&2\end{pmatrix}}}$
2. As above,
${\displaystyle {\begin{array}{rl}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}x&y\\y&z\end{pmatrix}}\cdot {\frac {1}{ad-bc}}\cdot {\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}&={\frac {1}{ad-bc}}{\begin{pmatrix}adx+bdy-acy-bcz&-abx-b^{2}y+a^{2}y+abz\\cdx+d^{2}y-c^{2}y-cdz&-bcx-bdy+acy+adz\end{pmatrix}}\end{array}}}$
we are looking for scalars ${\displaystyle a,\ldots ,d}$ so that ${\displaystyle ad-bc\neq 0}$ and ${\displaystyle -abx-b^{2}y+a^{2}y+abz=0}$ and ${\displaystyle cdx+d^{2}y-c^{2}y-cdz=0}$, no matter what values ${\displaystyle x}$, ${\displaystyle y}$, and ${\displaystyle z}$ have. For starters, we assume that ${\displaystyle y\neq 0}$, else the given matrix is already diagonal. We shall use that assumption because if we (arbitrarily) let ${\displaystyle a=1}$ then we get
${\displaystyle {\begin{array}{rl}-bx-b^{2}y+y+bz&=0\\(-y)b^{2}+(z-x)b+y&=0\end{array}}}$
and the quadratic formula gives
${\displaystyle b={\frac {-(z-x)\pm {\sqrt {(z-x)^{2}-4(-y)(y)}}}{-2y}}\qquad y\neq 0}$
(note that if ${\displaystyle x}$, ${\displaystyle y}$, and ${\displaystyle z}$ are real then these two ${\displaystyle b}$'s are real as the discriminant is positive). By the same token, if we (arbitrarily) let ${\displaystyle c=1}$ then
${\displaystyle {\begin{array}{rl}dx+d^{2}y-y-dz&=0\\(y)d^{2}+(x-z)d-y&=0\end{array}}}$
and we get here
${\displaystyle d={\frac {-(x-z)\pm {\sqrt {(x-z)^{2}-4(y)(-y)}}}{2y}}\qquad y\neq 0}$
(as above, if ${\displaystyle x,y,z\in \mathbb {R} }$ then this discriminant is positive so a symmetric, real, ${\displaystyle 2\!\times \!2}$ matrix is similar to a real diagonal matrix). For a check we try ${\displaystyle x=1}$, ${\displaystyle y=2}$, ${\displaystyle z=1}$.
${\displaystyle b={\frac {0\pm {\sqrt {0+16}}}{-4}}=\mp 1\qquad d={\frac {0\pm {\sqrt {0+16}}}{4}}=\pm 1}$
Note that not all four choices ${\displaystyle (b,d)=(+1,+1),\dots ,(-1,-1)}$ satisfy ${\displaystyle ad-bc\neq 0}$.
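As a final numeric check of this part (pure Python, ad-hoc helpers): with ${\displaystyle x=1}$, ${\displaystyle y=2}$, ${\displaystyle z=1}$ and the choice ${\displaystyle (b,d)=(-1,+1)}$, the conjugation does produce a diagonal matrix.

```python
def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of a 2x2 matrix by the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

T = [[1, 2], [2, 1]]    # x = 1, y = 2, z = 1
M = [[1, -1], [1, 1]]   # a = 1, b = -1, c = 1, d = 1, so ad - bc = 2 != 0
result = mat_mul(M, mat_mul(T, mat_inv(M)))
print(result)           # [[-1.0, 0.0], [0.0, 3.0]]
```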