# Linear Algebra/Changing Map Representations/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Decide if these matrices are matrix equivalent.

1. ${\displaystyle {\begin{pmatrix}1&3&0\\2&3&0\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}2&2&1\\0&5&-1\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}0&3\\1&1\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}4&0\\0&5\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}1&3\\2&6\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}1&3\\2&-6\end{pmatrix}}}$

1. Yes; each has rank two.
2. Yes; both have rank two.
3. No; their ranks are one and two.
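Since matrix equivalence of same-sized matrices comes down to comparing ranks, these answers can be double-checked mechanically. Here is a sketch of a rank computation using Gaussian elimination in exact rational arithmetic (the `rank` helper is ad hoc, written just for this check):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination in exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        # find a pivot at or below row r in column c
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# item 1: both rank two, so matrix equivalent
print(rank([[1, 3, 0], [2, 3, 0]]), rank([[2, 2, 1], [0, 5, -1]]))   # 2 2
# item 3: ranks differ, so not matrix equivalent
print(rank([[1, 3], [2, 6]]), rank([[1, 3], [2, -6]]))               # 1 2
```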
This exercise is recommended for all readers.
Problem 2

Find the canonical representative of the matrix-equivalence class of each matrix.

1. ${\displaystyle {\begin{pmatrix}2&1&0\\4&2&0\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}0&1&0&2\\1&1&0&4\\3&3&3&-1\end{pmatrix}}}$

We need only decide what the rank of each is.

1. ${\displaystyle {\begin{pmatrix}1&0&0\\0&0&0\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{pmatrix}}}$
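The canonical representative is determined entirely by the shape and the rank: an identity block of size equal to the rank, padded with zeros. A minimal sketch (the `canonical` helper is a made-up name for illustration):

```python
def canonical(m, n, k):
    """Canonical representative for rank k: identity block padded with zeros."""
    return [[1 if i == j and i < k else 0 for j in range(n)] for i in range(m)]

print(canonical(2, 3, 1))   # [[1, 0, 0], [0, 0, 0]]
print(canonical(3, 4, 3))   # [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
```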
Problem 3

Suppose that, with respect to

${\displaystyle B={\mathcal {E}}_{2}\qquad D=\langle {\begin{pmatrix}1\\1\end{pmatrix}},{\begin{pmatrix}1\\-1\end{pmatrix}}\rangle }$

the transformation ${\displaystyle t:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ is represented by this matrix.

${\displaystyle {\begin{pmatrix}1&2\\3&4\end{pmatrix}}}$

Use change of basis matrices to represent ${\displaystyle t}$ with respect to each pair.

1. ${\displaystyle {\hat {B}}=\langle {\begin{pmatrix}0\\1\end{pmatrix}},{\begin{pmatrix}1\\1\end{pmatrix}}\rangle }$, ${\displaystyle {\hat {D}}=\langle {\begin{pmatrix}-1\\0\end{pmatrix}},{\begin{pmatrix}2\\1\end{pmatrix}}\rangle }$
2. ${\displaystyle {\hat {B}}=\langle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}1\\0\end{pmatrix}}\rangle }$, ${\displaystyle {\hat {D}}=\langle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}2\\1\end{pmatrix}}\rangle }$

Recall the diagram and the formula.

 ${\displaystyle {\hat {T}}={\rm {Rep}}_{D,{\hat {D}}}({\mbox{id}})\cdot T\cdot {\rm {Rep}}_{{\hat {B}},B}({\mbox{id}})}$
1. These two
${\displaystyle {\begin{pmatrix}1\\1\end{pmatrix}}=1\cdot {\begin{pmatrix}-1\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}2\\1\end{pmatrix}}\qquad {\begin{pmatrix}1\\-1\end{pmatrix}}=(-3)\cdot {\begin{pmatrix}-1\\0\end{pmatrix}}+(-1)\cdot {\begin{pmatrix}2\\1\end{pmatrix}}}$
show that
${\displaystyle {\rm {Rep}}_{D,{\hat {D}}}({\mbox{id}})={\begin{pmatrix}1&-3\\1&-1\end{pmatrix}}}$
and similarly these two
${\displaystyle {\begin{pmatrix}0\\1\end{pmatrix}}=0\cdot {\begin{pmatrix}1\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}0\\1\end{pmatrix}}\qquad {\begin{pmatrix}1\\1\end{pmatrix}}=1\cdot {\begin{pmatrix}1\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}0\\1\end{pmatrix}}}$
give the other nonsingular matrix.
${\displaystyle {\rm {Rep}}_{{\hat {B}},B}({\mbox{id}})={\begin{pmatrix}0&1\\1&1\end{pmatrix}}}$
${\displaystyle {\hat {T}}={\begin{pmatrix}1&-3\\1&-1\end{pmatrix}}{\begin{pmatrix}1&2\\3&4\end{pmatrix}}{\begin{pmatrix}0&1\\1&1\end{pmatrix}}={\begin{pmatrix}-10&-18\\-2&-4\end{pmatrix}}}$
Although not strictly necessary, a check is reassuring. Arbitrarily fixing
${\displaystyle {\vec {v}}={\begin{pmatrix}3\\2\end{pmatrix}}}$
we have that
${\displaystyle {\rm {Rep}}_{B}({\vec {v}})={\begin{pmatrix}3\\2\end{pmatrix}}_{B}\qquad {\begin{pmatrix}1&2\\3&4\end{pmatrix}}_{B,D}{\begin{pmatrix}3\\2\end{pmatrix}}_{B}={\begin{pmatrix}7\\17\end{pmatrix}}_{D}}$
and so ${\displaystyle t({\vec {v}})}$ is this.
${\displaystyle 7\cdot {\begin{pmatrix}1\\1\end{pmatrix}}+17\cdot {\begin{pmatrix}1\\-1\end{pmatrix}}={\begin{pmatrix}24\\-10\end{pmatrix}}}$
Doing the calculation with respect to ${\displaystyle {\hat {B}},{\hat {D}}}$ starts with
${\displaystyle {\rm {Rep}}_{\hat {B}}({\vec {v}})={\begin{pmatrix}-1\\3\end{pmatrix}}_{\hat {B}}\qquad {\begin{pmatrix}-10&-18\\-2&-4\end{pmatrix}}_{{\hat {B}},{\hat {D}}}{\begin{pmatrix}-1\\3\end{pmatrix}}_{\hat {B}}={\begin{pmatrix}-44\\-10\end{pmatrix}}_{\hat {D}}}$
and then checks that this is the same result.
${\displaystyle -44\cdot {\begin{pmatrix}-1\\0\end{pmatrix}}-10\cdot {\begin{pmatrix}2\\1\end{pmatrix}}={\begin{pmatrix}24\\-10\end{pmatrix}}}$
2. These two
${\displaystyle {\begin{pmatrix}1\\1\end{pmatrix}}={\frac {1}{3}}\cdot {\begin{pmatrix}1\\2\end{pmatrix}}+{\frac {1}{3}}\cdot {\begin{pmatrix}2\\1\end{pmatrix}}\qquad {\begin{pmatrix}1\\-1\end{pmatrix}}=-1\cdot {\begin{pmatrix}1\\2\end{pmatrix}}+1\cdot {\begin{pmatrix}2\\1\end{pmatrix}}}$
show that
${\displaystyle {\rm {Rep}}_{D,{\hat {D}}}({\mbox{id}})={\begin{pmatrix}1/3&-1\\1/3&1\end{pmatrix}}}$
and these two
${\displaystyle {\begin{pmatrix}1\\2\end{pmatrix}}=1\cdot {\begin{pmatrix}1\\0\end{pmatrix}}+2\cdot {\begin{pmatrix}0\\1\end{pmatrix}}\qquad {\begin{pmatrix}1\\0\end{pmatrix}}=1\cdot {\begin{pmatrix}1\\0\end{pmatrix}}+0\cdot {\begin{pmatrix}0\\1\end{pmatrix}}}$
show this.
${\displaystyle {\rm {Rep}}_{{\hat {B}},B}({\mbox{id}})={\begin{pmatrix}1&1\\2&0\end{pmatrix}}}$
With those, the conversion goes in this way.
${\displaystyle {\hat {T}}={\begin{pmatrix}1/3&-1\\1/3&1\end{pmatrix}}{\begin{pmatrix}1&2\\3&4\end{pmatrix}}{\begin{pmatrix}1&1\\2&0\end{pmatrix}}={\begin{pmatrix}-28/3&-8/3\\38/3&10/3\end{pmatrix}}}$
As in the prior item, a check provides some confidence that this calculation was performed without mistakes. We can, for instance, fix the vector
${\displaystyle {\vec {v}}={\begin{pmatrix}-1\\2\end{pmatrix}}}$
(chosen arbitrarily). Now we have
${\displaystyle {\rm {Rep}}_{B}({\vec {v}})={\begin{pmatrix}-1\\2\end{pmatrix}}\qquad {\begin{pmatrix}1&2\\3&4\end{pmatrix}}_{B,D}{\begin{pmatrix}-1\\2\end{pmatrix}}_{B}={\begin{pmatrix}3\\5\end{pmatrix}}_{D}}$
and so ${\displaystyle t({\vec {v}})}$ is this vector.
${\displaystyle 3\cdot {\begin{pmatrix}1\\1\end{pmatrix}}+5\cdot {\begin{pmatrix}1\\-1\end{pmatrix}}={\begin{pmatrix}8\\-2\end{pmatrix}}}$
With respect to ${\displaystyle {\hat {B}},{\hat {D}}}$ we first calculate
${\displaystyle {\rm {Rep}}_{\hat {B}}({\vec {v}})={\begin{pmatrix}1\\-2\end{pmatrix}}\qquad {\begin{pmatrix}-28/3&-8/3\\38/3&10/3\end{pmatrix}}_{{\hat {B}},{\hat {D}}}{\begin{pmatrix}1\\-2\end{pmatrix}}_{\hat {B}}={\begin{pmatrix}-4\\6\end{pmatrix}}_{\hat {D}}}$
and, sure enough, that is the same result for ${\displaystyle t({\vec {v}})}$.
${\displaystyle -4\cdot {\begin{pmatrix}1\\2\end{pmatrix}}+6\cdot {\begin{pmatrix}2\\1\end{pmatrix}}={\begin{pmatrix}8\\-2\end{pmatrix}}}$
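Both conversions above, and their spot checks, can be reproduced in a few lines of exact rational arithmetic. This is only a verification sketch; `matmul` is an ad-hoc helper, not a library routine.

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

T = [[1, 2], [3, 4]]                # Rep_{B,D}(t)

# part 1: T-hat = Rep_{D,D-hat}(id) . T . Rep_{B-hat,B}(id)
T1 = matmul(matmul([[1, -3], [1, -1]], T), [[0, 1], [1, 1]])
assert T1 == [[-10, -18], [-2, -4]]

# part 2, using Fractions for the 1/3 entries
P2 = [[F(1, 3), -1], [F(1, 3), 1]]
T2 = matmul(matmul(P2, T), [[1, 1], [2, 0]])
assert T2 == [[F(-28, 3), F(-8, 3)], [F(38, 3), F(10, 3)]]

# spot checks: apply each T-hat to the Rep_{B-hat}(v) from the text
assert matmul(T1, [[-1], [3]]) == [[-44], [-10]]
assert matmul(T2, [[1], [-2]]) == [[-4], [6]]
print("all checks pass")
```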
This exercise is recommended for all readers.
Problem 4

What sizes are ${\displaystyle P}$ and ${\displaystyle Q}$ in the equation ${\displaystyle {\hat {H}}=PHQ}$?

Where ${\displaystyle H}$ and ${\displaystyle {\hat {H}}}$ are ${\displaystyle m\!\times \!n}$, the matrix ${\displaystyle P}$ is ${\displaystyle m\!\times \!m}$ while ${\displaystyle Q}$ is ${\displaystyle n\!\times \!n}$.

This exercise is recommended for all readers.
Problem 5

Use Theorem 2.6 to show that a square matrix is nonsingular if and only if it is equivalent to an identity matrix.

Any ${\displaystyle n\!\times \!n}$ matrix is nonsingular if and only if it has rank ${\displaystyle n}$, that is, by Theorem 2.6, if and only if it is matrix equivalent to the ${\displaystyle n\!\times \!n}$ matrix whose diagonal is all ones.

This exercise is recommended for all readers.
Problem 6

Show that, where ${\displaystyle A}$ is a nonsingular square matrix, if ${\displaystyle P}$ and ${\displaystyle Q}$ are nonsingular square matrices such that ${\displaystyle PAQ=I}$ then ${\displaystyle QP=A^{-1}}$.

If ${\displaystyle PAQ=I}$ then ${\displaystyle QPAQ=Q}$, so ${\displaystyle QPA=I}$, and so ${\displaystyle QP=A^{-1}}$.
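The algebra can be sanity-checked with a hypothetical concrete triple, chosen arbitrarily for illustration: pick nonsingular A and P, and force Q to be (PA)⁻¹ so that PAQ = I.

```python
def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1], [1, 1]]        # nonsingular; its inverse is [[1, -1], [-1, 2]]
P = [[1, 1], [0, 1]]
Q = [[1, -2], [-1, 3]]      # Q = (PA)^{-1}, so that PAQ = I
print(matmul(matmul(P, A), Q))  # [[1, 0], [0, 1]]
print(matmul(Q, P))             # [[1, -1], [-1, 2]], which is A^{-1}
```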

This exercise is recommended for all readers.
Problem 7

Why does Theorem 2.6 not show that every matrix is diagonalizable (see Example 2.2)?

By the definition following Example 2.2, a matrix ${\displaystyle M}$ is diagonalizable if it represents a transformation, ${\displaystyle M={\rm {Rep}}_{B,D}(t)}$, with the property that there is some basis ${\displaystyle {\hat {B}}}$ such that ${\displaystyle {\rm {Rep}}_{{\hat {B}},{\hat {B}}}(t)}$ is a diagonal matrix; the starting and ending bases must be equal. But Theorem 2.6 says only that there are ${\displaystyle {\hat {B}}}$ and ${\displaystyle {\hat {D}}}$ such that we can change to a representation ${\displaystyle {\rm {Rep}}_{{\hat {B}},{\hat {D}}}(t)}$ and get a diagonal matrix. We have no reason to suspect that we could pick ${\displaystyle {\hat {B}}}$ and ${\displaystyle {\hat {D}}}$ so that they are equal.

Problem 8

Must matrix equivalent matrices have matrix equivalent transposes?

Yes. Row rank equals column rank, so the rank of the transpose equals the rank of the matrix. Same-sized matrices with equal ranks are matrix equivalent.

Problem 9

What happens in Theorem 2.6 if ${\displaystyle k=0}$?

If ${\displaystyle k=0}$ then the canonical form is the zero matrix. Only a zero matrix has rank zero, so that matrix-equivalence class contains nothing but the ${\displaystyle m\!\times \!n}$ zero matrix.

This exercise is recommended for all readers.
Problem 10

Show that matrix-equivalence is an equivalence relation.

For reflexivity, to show that any matrix is matrix equivalent to itself, take ${\displaystyle P}$ and ${\displaystyle Q}$ to be identity matrices. For symmetry, if ${\displaystyle H_{1}=PH_{2}Q}$ then ${\displaystyle H_{2}=P^{-1}H_{1}Q^{-1}}$ (inverses exist because ${\displaystyle P}$ and ${\displaystyle Q}$ are nonsingular). Finally, for transitivity, assume that ${\displaystyle H_{1}=P_{2}H_{2}Q_{2}}$ and that ${\displaystyle H_{2}=P_{3}H_{3}Q_{3}}$. Then substitution gives ${\displaystyle H_{1}=P_{2}(P_{3}H_{3}Q_{3})Q_{2}=(P_{2}P_{3})H_{3}(Q_{3}Q_{2})}$. A product of nonsingular matrices is nonsingular (we've shown that the product of invertible matrices is invertible; in fact, we've shown how to calculate the inverse), and so ${\displaystyle H_{1}}$ is matrix equivalent to ${\displaystyle H_{3}}$.

This exercise is recommended for all readers.
Problem 11

Show that a zero matrix is alone in its matrix equivalence class. Are there other matrices like that?

By Theorem 2.6, a zero matrix is alone in its class because it is the only ${\displaystyle m\!\times \!n}$ matrix of rank zero. No other matrix is alone in its class; any nonzero scalar multiple of a matrix has the same rank as that matrix.

Problem 12

What are the matrix equivalence classes of matrices of transformations on ${\displaystyle \mathbb {R} ^{1}}$? ${\displaystyle \mathbb {R} ^{3}}$?

There are two matrix-equivalence classes of ${\displaystyle 1\!\times \!1}$ matrices: those of rank zero and those of rank one. The ${\displaystyle 3\!\times \!3}$ matrices fall into four matrix equivalence classes, one for each of the ranks zero, one, two, and three.

Problem 13

How many matrix equivalence classes are there?

For ${\displaystyle m\!\times \!n}$ matrices there are classes for each possible rank: where ${\displaystyle k}$ is the minimum of ${\displaystyle m}$ and ${\displaystyle n}$ there are classes for the matrices of rank ${\displaystyle 0}$, ${\displaystyle 1}$, ..., ${\displaystyle k}$. That's ${\displaystyle k+1}$ classes. (Of course, totaling over all sizes of matrices we get infinitely many classes.)
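So, as a function of the shape alone, the count of classes is ${\displaystyle \min(m,n)+1}$. A one-line sketch (the `num_classes` name is made up for illustration):

```python
def num_classes(m, n):
    """Matrix-equivalence classes of m x n matrices: one per rank 0..min(m,n)."""
    return min(m, n) + 1

print(num_classes(1, 1))   # 2, matching Problem 12
print(num_classes(3, 3))   # 4, matching Problem 12
print(num_classes(2, 3))   # 3
```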

Problem 14

Are matrix equivalence classes closed under scalar multiplication? Addition?

They are closed under nonzero scalar multiplication, since a nonzero scalar multiple of a matrix has the same rank as the matrix. They are not closed under addition: for instance, if ${\displaystyle H}$ is nonzero then ${\displaystyle H}$ and ${\displaystyle -H}$ are in the same class, but ${\displaystyle H+(-H)}$ has rank zero and so is not.

Problem 15

Let ${\displaystyle t:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}$ be represented by ${\displaystyle T}$ with respect to ${\displaystyle {\mathcal {E}}_{n},{\mathcal {E}}_{n}}$.

1. Find ${\displaystyle {\rm {Rep}}_{B,B}(t)}$ in this specific case.
${\displaystyle T={\begin{pmatrix}1&1\\3&-1\end{pmatrix}}\qquad B=\langle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}-1\\-1\end{pmatrix}}\rangle }$
2. Describe ${\displaystyle {\rm {Rep}}_{B,B}(t)}$ in the general case where ${\displaystyle B=\langle {\vec {\beta }}_{1},\ldots ,{\vec {\beta }}_{n}\rangle }$.
1. We have
${\displaystyle {\rm {Rep}}_{B,{\mathcal {E}}_{2}}({\mbox{id}})={\begin{pmatrix}1&-1\\2&-1\end{pmatrix}}\qquad {\rm {Rep}}_{{\mathcal {E}}_{2},B}({\mbox{id}})={\rm {Rep}}_{B,{\mathcal {E}}_{2}}({\mbox{id}})^{-1}={\begin{pmatrix}1&-1\\2&-1\end{pmatrix}}^{-1}={\begin{pmatrix}-1&1\\-2&1\end{pmatrix}}}$
and thus the answer is this.
${\displaystyle {\rm {Rep}}_{B,B}(t)={\begin{pmatrix}-1&1\\-2&1\end{pmatrix}}{\begin{pmatrix}1&1\\3&-1\end{pmatrix}}{\begin{pmatrix}1&-1\\2&-1\end{pmatrix}}={\begin{pmatrix}-2&0\\-5&2\end{pmatrix}}}$
As a quick check, we can take a vector at random
${\displaystyle {\vec {v}}={\begin{pmatrix}4\\5\end{pmatrix}}}$
giving
${\displaystyle {\rm {Rep}}_{{\mathcal {E}}_{2}}({\vec {v}})={\begin{pmatrix}4\\5\end{pmatrix}}\qquad {\begin{pmatrix}1&1\\3&-1\end{pmatrix}}{\begin{pmatrix}4\\5\end{pmatrix}}={\begin{pmatrix}9\\7\end{pmatrix}}=t({\vec {v}})}$
while the calculation with respect to ${\displaystyle B,B}$
${\displaystyle {\rm {Rep}}_{B}({\vec {v}})={\begin{pmatrix}1\\-3\end{pmatrix}}\qquad {\begin{pmatrix}-2&0\\-5&2\end{pmatrix}}_{B,B}{\begin{pmatrix}1\\-3\end{pmatrix}}_{B}={\begin{pmatrix}-2\\-11\end{pmatrix}}_{B}}$
yields the same result.
${\displaystyle -2\cdot {\begin{pmatrix}1\\2\end{pmatrix}}-11\cdot {\begin{pmatrix}-1\\-1\end{pmatrix}}={\begin{pmatrix}9\\7\end{pmatrix}}}$
2. We have
 ${\displaystyle {\rm {Rep}}_{B,B}(t)={\rm {Rep}}_{{\mathcal {E}}_{n},B}({\mbox{id}})\cdot T\cdot {\rm {Rep}}_{B,{\mathcal {E}}_{n}}({\mbox{id}})}$

and, as in the first item of this question

${\displaystyle {\rm {Rep}}_{B,{\mathcal {E}}_{n}}({\mbox{id}})=\left({\begin{array}{c|c|c}{\vec {\beta }}_{1}&\;\cdots \;&{\vec {\beta }}_{n}\end{array}}\right)\qquad {\rm {Rep}}_{{\mathcal {E}}_{n},B}({\mbox{id}})={\rm {Rep}}_{B,{\mathcal {E}}_{n}}({\mbox{id}})^{-1}}$

so, writing ${\displaystyle Q}$ for the matrix whose columns are the basis vectors, we have that ${\displaystyle {\rm {Rep}}_{B,B}(t)=Q^{-1}TQ}$.
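The formula ${\displaystyle {\rm {Rep}}_{B,B}(t)=Q^{-1}TQ}$ can be checked against the numbers from the first item. A verification sketch, with ad-hoc helpers `matmul` and `inv2`:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Exact inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

T = [[1, 1], [3, -1]]
Q = [[1, -1], [2, -1]]      # columns are the vectors of B
rep = [[int(x) for x in row] for row in matmul(matmul(inv2(Q), T), Q)]
print(rep)                  # [[-2, 0], [-5, 2]]
```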

Problem 16
1. Let ${\displaystyle V}$ have bases ${\displaystyle B_{1}}$ and ${\displaystyle B_{2}}$ and suppose that ${\displaystyle W}$ has the basis ${\displaystyle D}$. Where ${\displaystyle h:V\to W}$, find the formula that computes ${\displaystyle {\rm {Rep}}_{B_{2},D}(h)}$ from ${\displaystyle {\rm {Rep}}_{B_{1},D}(h)}$.
2. Repeat the prior question with one basis for ${\displaystyle V}$ and two bases for ${\displaystyle W}$.
1. In the adapted arrow diagram only the basis of the domain changes, from ${\displaystyle B_{2}}$ to ${\displaystyle B_{1}}$. Since there is no need to change bases in ${\displaystyle W}$ (equivalently, the change of basis matrix ${\displaystyle P}$ is the identity), we have ${\displaystyle {\rm {Rep}}_{B_{2},D}(h)={\rm {Rep}}_{B_{1},D}(h)\cdot Q}$ where ${\displaystyle Q={\rm {Rep}}_{B_{2},B_{1}}({\mbox{id}})}$.
2. Here only the bases of the codomain change. We have that ${\displaystyle {\rm {Rep}}_{B,D_{2}}(h)=P\cdot {\rm {Rep}}_{B,D_{1}}(h)}$ where ${\displaystyle P={\rm {Rep}}_{D_{1},D_{2}}({\mbox{id}})}$.
Problem 17
1. If two matrices are matrix-equivalent and invertible, must their inverses be matrix-equivalent?
2. If two matrices have matrix-equivalent inverses, must the two be matrix-equivalent?
3. If two matrices are square and matrix-equivalent, must their squares be matrix-equivalent?
4. If two matrices are square and have matrix-equivalent squares, must they be matrix-equivalent?
1. Yes. In the arrow diagram the inverses of the matrices represent the inverses of the maps: we can move from the lower right to the lower left by moving up, then left, then down. In other words, where ${\displaystyle {\hat {H}}=PHQ}$ (with ${\displaystyle P,Q}$ invertible) and ${\displaystyle H,{\hat {H}}}$ are invertible, we have ${\displaystyle {\hat {H}}^{-1}=Q^{-1}H^{-1}P^{-1}}$.

2. Yes; this is the prior part repeated in different terms.
3. No; another assumption is needed. If ${\displaystyle H}$ represents ${\displaystyle h}$ with respect to the same starting and ending basis, ${\displaystyle H={\rm {Rep}}_{B,B}(h)}$ for some ${\displaystyle B}$, then ${\displaystyle H^{2}}$ represents ${\displaystyle h\circ h}$, but matrix equivalence allows different starting and ending bases. As a specific example, these two matrices each have rank one and so they are matrix equivalent
${\displaystyle {\begin{pmatrix}1&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&0\\1&0\end{pmatrix}}}$
but the squares are not matrix equivalent— the square of the first has rank one while the square of the second has rank zero.
4. No. These two are not matrix equivalent but have matrix equivalent squares.
${\displaystyle {\begin{pmatrix}0&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&0\\1&0\end{pmatrix}}}$
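The rank computations behind the last two items are small enough to verify directly: squaring the two rank-one matrices shows that one square keeps rank one while the other collapses to zero (a sketch with an ad-hoc `matmul` helper):

```python
def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M1 = [[1, 0], [0, 0]]   # idempotent: its square is itself, rank one
M2 = [[0, 0], [1, 0]]   # nilpotent: its square is the zero matrix
print(matmul(M1, M1))   # [[1, 0], [0, 0]]
print(matmul(M2, M2))   # [[0, 0], [0, 0]]
```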
This exercise is recommended for all readers.
Problem 18

Square matrices are similar if they represent the same transformation, but each with respect to the same ending as starting basis. That is, ${\displaystyle {\rm {Rep}}_{B_{1},B_{1}}(t)}$ is similar to ${\displaystyle {\rm {Rep}}_{B_{2},B_{2}}(t)}$.

1. Give a definition of matrix similarity like that of Definition 2.3.
2. Prove that similar matrices are matrix equivalent.
3. Show that similarity is an equivalence relation.
4. Show that if ${\displaystyle T}$ is similar to ${\displaystyle {\hat {T}}}$ then ${\displaystyle T^{2}}$ is similar to ${\displaystyle {\hat {T}}^{2}}$, the cubes are similar, etc. Contrast with the prior exercise.
5. Prove that there are matrix equivalent matrices that are not similar.
1. Call matrices ${\displaystyle T,{\hat {T}}}$ similar if there is a nonsingular matrix ${\displaystyle P}$ such that ${\displaystyle {\hat {T}}=P^{-1}TP}$.
2. A similar pair ${\displaystyle {\hat {T}}=P^{-1}TP}$ is also matrix equivalent: in the definition of matrix equivalence, take the left-hand nonsingular matrix to be ${\displaystyle P^{-1}}$ and the right-hand one to be ${\displaystyle P}$.
3. This is as in Problem 10. Reflexivity is obvious: ${\displaystyle T=I^{-1}TI}$. Symmetry is also easy: ${\displaystyle {\hat {T}}=P^{-1}TP}$ implies that ${\displaystyle T=P{\hat {T}}P^{-1}}$ (multiply the first equation from the right by ${\displaystyle P^{-1}}$ and from the left by ${\displaystyle P}$). For transitivity, assume that ${\displaystyle T_{1}={P_{2}}^{-1}T_{2}P_{2}}$ and that ${\displaystyle T_{2}={P_{3}}^{-1}T_{3}P_{3}}$. Then ${\displaystyle T_{1}={P_{2}}^{-1}({P_{3}}^{-1}T_{3}P_{3})P_{2}=({P_{2}}^{-1}{P_{3}}^{-1})T_{3}(P_{3}P_{2})}$ and we are finished on noting that ${\displaystyle P_{3}P_{2}}$ is an invertible matrix with inverse ${\displaystyle {P_{2}}^{-1}{P_{3}}^{-1}}$.
4. Assume that ${\displaystyle {\hat {T}}=P^{-1}TP}$. For the squares: ${\displaystyle {\hat {T}}^{2}=(P^{-1}TP)(P^{-1}TP)=P^{-1}T(PP^{-1})TP=P^{-1}T^{2}P}$. Higher powers follow by induction.
5. These two matrices are matrix equivalent, since each has rank one.
${\displaystyle {\begin{pmatrix}1&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&0\\1&0\end{pmatrix}}}$
But they are not similar: by the prior item, similar matrices have similar squares, while the square of the first matrix has rank one and the square of the second is the zero matrix.
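The identity ${\displaystyle {\hat {T}}^{2}=P^{-1}T^{2}P}$ from the part about powers can be sanity-checked with an arbitrary, hypothetical choice of ${\displaystyle T}$ and invertible ${\displaystyle P}$:

```python
def matmul(A, B):
    """Multiply two small matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

T = [[1, 2], [3, 4]]                # arbitrary transformation matrix
P = [[1, 1], [1, 2]]                # determinant 1, so the inverse is integral
Pinv = [[2, -1], [-1, 1]]
assert matmul(Pinv, P) == [[1, 0], [0, 1]]

T_hat = matmul(matmul(Pinv, T), P)  # a matrix similar to T
lhs = matmul(T_hat, T_hat)                      # (P^{-1} T P)^2
rhs = matmul(matmul(Pinv, matmul(T, T)), P)     # P^{-1} T^2 P
assert lhs == rhs                   # the squares are similar via the same P
print(lhs == rhs)                   # True
```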