# Linear Algebra/Rangespace and Nullspace/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Let ${\displaystyle h:{\mathcal {P}}_{3}\to {\mathcal {P}}_{4}}$ be given by ${\displaystyle p(x)\mapsto x\cdot p(x)}$. Which of these are in the nullspace? Which are in the rangespace?

1. ${\displaystyle x^{3}}$
2. ${\displaystyle 0}$
3. ${\displaystyle 7}$
4. ${\displaystyle 12x-0.5x^{3}}$
5. ${\displaystyle 1+3x^{2}-x^{3}}$

First, to answer whether a polynomial is in the nullspace, we have to consider it as a member of the domain ${\displaystyle {\mathcal {P}}_{3}}$. To answer whether it is in the rangespace, we consider it as a member of the codomain ${\displaystyle {\mathcal {P}}_{4}}$. That is, for ${\displaystyle p(x)=x^{4}}$, the question of whether it is in the rangespace is sensible, but the question of whether it is in the nullspace is not, because it is not even in the domain.

1. The polynomial ${\displaystyle x^{3}\in {\mathcal {P}}_{3}}$ is not in the nullspace because ${\displaystyle h(x^{3})=x^{4}}$ is not the zero polynomial in ${\displaystyle {\mathcal {P}}_{4}}$. The polynomial ${\displaystyle x^{3}\in {\mathcal {P}}_{4}}$ is in the rangespace because ${\displaystyle x^{2}\in {\mathcal {P}}_{3}}$ is mapped by ${\displaystyle h}$ to ${\displaystyle x^{3}}$.
2. The answer to both questions is, "Yes, because ${\displaystyle h(0)=0}$." The polynomial ${\displaystyle 0\in {\mathcal {P}}_{3}}$ is in the nullspace because it is mapped by ${\displaystyle h}$ to the zero polynomial in ${\displaystyle {\mathcal {P}}_{4}}$. The polynomial ${\displaystyle 0\in {\mathcal {P}}_{4}}$ is in the rangespace because it is the image, under ${\displaystyle h}$, of ${\displaystyle 0\in {\mathcal {P}}_{3}}$.
3. The polynomial ${\displaystyle 7\in {\mathcal {P}}_{3}}$ is not in the nullspace because ${\displaystyle h(7)=7x}$ is not the zero polynomial in ${\displaystyle {\mathcal {P}}_{4}}$. The polynomial ${\displaystyle 7\in {\mathcal {P}}_{4}}$ is not in the rangespace because there is no member of the domain that when multiplied by ${\displaystyle x}$ gives the constant polynomial ${\displaystyle p(x)=7}$.
4. The polynomial ${\displaystyle 12x-0.5x^{3}\in {\mathcal {P}}_{3}}$ is not in the nullspace because ${\displaystyle h(12x-0.5x^{3})=12x^{2}-0.5x^{4}}$. The polynomial ${\displaystyle 12x-0.5x^{3}\in {\mathcal {P}}_{4}}$ is in the rangespace because it is the image of ${\displaystyle 12-0.5x^{2}}$.
5. The polynomial ${\displaystyle 1+3x^{2}-x^{3}\in {\mathcal {P}}_{3}}$ is not in the nullspace because ${\displaystyle h(1+3x^{2}-x^{3})=x+3x^{3}-x^{4}}$. The polynomial ${\displaystyle 1+3x^{2}-x^{3}\in {\mathcal {P}}_{4}}$ is not in the rangespace because of the constant term.
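As an aside, these answers can be spot-checked with a computer algebra system. The following sympy sketch (not part of the text itself, just an illustration) encodes ${\displaystyle h}$ as multiplication by ${\displaystyle x}$ and uses the observation that a polynomial is in the rangespace exactly when its constant term is zero.

```python
from sympy import symbols, expand, sympify

x = symbols('x')
h = lambda p: expand(x * sympify(p))   # the map p(x) |-> x*p(x)

# Nullspace membership: is h(p) the zero polynomial?
assert h(x**3) != 0        # part 1: not in the nullspace
assert h(0) == 0           # part 2: in the nullspace
assert h(7) != 0           # part 3: not in the nullspace

# Rangespace membership: q = x*p for some p exactly when q has no
# constant term, i.e. q vanishes at x = 0.
in_range = lambda q: sympify(q).subs(x, 0) == 0
assert in_range(x**3)                     # part 1: image of x^2
assert in_range(0)                        # part 2: image of 0
assert not in_range(7)                    # part 3: constant term 7
assert in_range(12*x - x**3/2)            # part 4: image of 12 - x^2/2
assert not in_range(1 + 3*x**2 - x**3)    # part 5: constant term 1
```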
This exercise is recommended for all readers.
Problem 2

Find the nullspace, nullity, rangespace, and rank of each map.

1. ${\displaystyle h:\mathbb {R} ^{2}\to {\mathcal {P}}_{3}}$ given by
${\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}\mapsto a+ax+ax^{2}}$
2. ${\displaystyle h:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} }$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto a+d}$
3. ${\displaystyle h:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{2}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto a+b+c+dx^{2}}$
4. the zero map ${\displaystyle Z:\mathbb {R} ^{3}\to \mathbb {R} ^{4}}$
1. The nullspace is
${\displaystyle {\mathcal {N}}(h)=\{{\begin{pmatrix}a\\b\end{pmatrix}}\in \mathbb {R} ^{2}\,{\big |}\,a+ax+ax^{2}+0x^{3}=0+0x+0x^{2}+0x^{3}\}=\{{\begin{pmatrix}0\\b\end{pmatrix}}\,{\big |}\,b\in \mathbb {R} \}}$
while the rangespace is
${\displaystyle {\mathcal {R}}(h)=\{a+ax+ax^{2}\in {\mathcal {P}}_{3}\,{\big |}\,a\in \mathbb {R} \}=\{a\cdot (1+x+x^{2})\,{\big |}\,a\in \mathbb {R} \}}$
and so the nullity is one and the rank is one.
2. The nullspace is this.
${\displaystyle {\mathcal {N}}(h)=\{{\begin{pmatrix}a&b\\c&d\end{pmatrix}}\,{\big |}\,a+d=0\}=\{{\begin{pmatrix}-d&b\\c&d\end{pmatrix}}\,{\big |}\,b,c,d\in \mathbb {R} \}}$
The rangespace
${\displaystyle {\mathcal {R}}(h)=\{a+d\,{\big |}\,a,b,c,d\in \mathbb {R} \}}$
is all of ${\displaystyle \mathbb {R} }$ (we can get any real number by taking ${\displaystyle d}$ to be ${\displaystyle 0}$ and taking ${\displaystyle a}$ to be the desired number). Thus, the nullity is three and the rank is one.
3. The nullspace is
${\displaystyle {\mathcal {N}}(h)=\{{\begin{pmatrix}a&b\\c&d\end{pmatrix}}\,{\big |}\,a+b+c=0{\text{ and }}d=0\}=\{{\begin{pmatrix}-b-c&b\\c&0\end{pmatrix}}\,{\big |}\,b,c\in \mathbb {R} \}}$
while the rangespace is ${\displaystyle {\mathcal {R}}(h)=\{r+sx^{2}\,{\big |}\,r,s\in \mathbb {R} \}}$. Thus, the nullity is two and the rank is two.
4. The nullspace is all of ${\displaystyle \mathbb {R} ^{3}}$ so the nullity is three. The rangespace is the trivial subspace of ${\displaystyle \mathbb {R} ^{4}}$ so the rank is zero.
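Each of these ranks and nullities can also be read off from a matrix representation with respect to the standard bases, using the fact that the nullity equals the dimension of the domain minus the rank. A numpy sketch (an illustration only, not part of the text):

```python
import numpy as np

def rank_nullity(A):
    rank = np.linalg.matrix_rank(A)
    return rank, A.shape[1] - rank   # nullity = dim(domain) - rank

# (a) (a,b) |-> a + ax + ax^2 + 0x^3, as a 4x2 matrix acting on (a,b)
A = np.array([[1, 0], [1, 0], [1, 0], [0, 0]])
assert rank_nullity(A) == (1, 1)

# (b) the trace map on 2x2 matrices, as a 1x4 matrix acting on (a,b,c,d)
B = np.array([[1, 0, 0, 1]])
assert rank_nullity(B) == (1, 3)

# (c) (a,b,c,d) |-> (a+b+c) + 0x + dx^2, as a 3x4 matrix
C = np.array([[1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
assert rank_nullity(C) == (2, 2)

# (d) the zero map from R^3 to R^4
Z = np.zeros((4, 3))
assert rank_nullity(Z) == (0, 3)
```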
This exercise is recommended for all readers.
Problem 3

Find the nullity of each map.

1. ${\displaystyle h:\mathbb {R} ^{5}\to \mathbb {R} ^{8}}$ of rank five
2. ${\displaystyle h:{\mathcal {P}}_{3}\to {\mathcal {P}}_{3}}$ of rank one
3. ${\displaystyle h:\mathbb {R} ^{6}\to \mathbb {R} ^{3}}$, an onto map
4. ${\displaystyle h:{\mathcal {M}}_{3\!\times \!3}\to {\mathcal {M}}_{3\!\times \!3}}$, onto

For each, use the result that the rank plus the nullity equals the dimension of the domain.

1. ${\displaystyle 0}$
2. ${\displaystyle 3}$
3. ${\displaystyle 3}$
4. ${\displaystyle 0}$
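The arithmetic behind these four answers is the same in each case, as this tiny sketch makes explicit (an illustration only):

```python
# nullity = dim(domain) - rank, by the rank-nullity result
def nullity(dim_domain, rank):
    return dim_domain - rank

assert nullity(5, 5) == 0  # (1) R^5 -> R^8 of rank five
assert nullity(4, 1) == 3  # (2) P_3 -> P_3 of rank one (dim P_3 = 4)
assert nullity(6, 3) == 3  # (3) R^6 onto R^3, so the rank is three
assert nullity(9, 9) == 0  # (4) M_3x3 onto M_3x3, so the rank is nine
```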
This exercise is recommended for all readers.
Problem 4

What is the nullspace of the differentiation transformation ${\displaystyle d/dx:{\mathcal {P}}_{n}\to {\mathcal {P}}_{n}}$? What is the nullspace of the second derivative, as a transformation of ${\displaystyle {\mathcal {P}}_{n}}$? The ${\displaystyle k}$-th derivative?

Because

${\displaystyle {\frac {d}{dx}}\,(a_{0}+a_{1}x+\dots +a_{n}x^{n})=a_{1}+2a_{2}x+3a_{3}x^{2}+\dots +na_{n}x^{n-1}}$

we have this.

${\displaystyle {\begin{array}{rl}{\mathcal {N}}({\frac {d}{dx}})&=\{a_{0}+\dots +a_{n}x^{n}\,{\big |}\,a_{1}+2a_{2}x+\dots +na_{n}x^{n-1}=0+0x+\dots +0x^{n-1}\}\\&=\{a_{0}+\dots +a_{n}x^{n}\,{\big |}\,a_{1}=0,{\text{ and }}a_{2}=0,\ldots ,a_{n}=0\}\\&=\{a_{0}+0x+0x^{2}+\dots +0x^{n}\,{\big |}\,a_{0}\in \mathbb {R} \}\end{array}}}$

In the same way,

${\displaystyle {\mathcal {N}}({\frac {d^{k}}{dx^{k}}})=\{a_{0}+a_{1}x+\dots +a_{n}x^{n}\,{\big |}\,a_{0},\dots ,a_{k-1}\in \mathbb {R} \}}$

for ${\displaystyle k\leq n}$.
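A sympy spot-check of this description (an illustration only): a polynomial lies in the kernel of the ${\displaystyle k}$-th derivative exactly when its degree is less than ${\displaystyle k}$.

```python
from sympy import symbols, diff, simplify

x = symbols('x')

def in_kernel(p, k):
    # p is in the nullspace of d^k/dx^k when its k-th derivative vanishes
    return simplify(diff(p, x, k)) == 0

assert in_kernel(5, 1)             # constants die under d/dx
assert not in_kernel(x**2, 1)
assert in_kernel(3 + 2*x, 2)       # degree < 2 dies under the second derivative
assert in_kernel(1 + x + x**2, 3)  # degree < 3 dies under the third derivative
assert not in_kernel(x**3, 3)
```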

Problem 5

Example 2.7 restates the first condition in the definition of homomorphism as "the shadow of a sum is the sum of the shadows". Restate the second condition in the same style.

The shadow of a scalar multiple is the scalar multiple of the shadow.

Problem 6

For the homomorphism ${\displaystyle h:{\mathcal {P}}_{3}\to {\mathcal {P}}_{3}}$ given by ${\displaystyle h(a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3})=a_{0}+(a_{0}+a_{1})x+(a_{2}+a_{3})x^{3}}$ find these.

1. ${\displaystyle {\mathcal {N}}(h)}$
2. ${\displaystyle h^{-1}(2-x^{3})}$
3. ${\displaystyle h^{-1}(1+x^{2})}$
1. Setting ${\displaystyle a_{0}+(a_{0}+a_{1})x+(a_{2}+a_{3})x^{3}=0+0x+0x^{2}+0x^{3}}$ gives ${\displaystyle a_{0}=0}$ and ${\displaystyle a_{0}+a_{1}=0}$ and ${\displaystyle a_{2}+a_{3}=0}$, so the nullspace is ${\displaystyle \{-a_{3}x^{2}+a_{3}x^{3}\,{\big |}\,a_{3}\in \mathbb {R} \}}$.
2. Setting ${\displaystyle a_{0}+(a_{0}+a_{1})x+(a_{2}+a_{3})x^{3}=2+0x+0x^{2}-x^{3}}$ gives that ${\displaystyle a_{0}=2}$, and ${\displaystyle a_{1}=-2}$, and ${\displaystyle a_{2}+a_{3}=-1}$. Taking ${\displaystyle a_{3}}$ as a parameter, and renaming it ${\displaystyle a_{3}=a}$ gives this set description ${\displaystyle \{2-2x+(-1-a)x^{2}+ax^{3}\,{\big |}\,a\in \mathbb {R} \}=\{(2-2x-x^{2})+a\cdot (-x^{2}+x^{3})\,{\big |}\,a\in \mathbb {R} \}}$.
3. This set is empty because the range of ${\displaystyle h}$ includes only those polynomials with a ${\displaystyle 0x^{2}}$ term.
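Parts (b) and (c) amount to matching coefficients of the image against the target, which sympy can check directly (an illustration only; the variable names are invented here):

```python
from sympy import symbols, linsolve, Poly

x, a0, a1, a2, a3 = symbols('x a0 a1 a2 a3')
# h applied to a0 + a1 x + a2 x^2 + a3 x^3:
image = a0 + (a0 + a1)*x + (a2 + a3)*x**3

# (b) image = 2 - x^3 gives a0 = 2, a0 + a1 = 0, a2 + a3 = -1
(sol,) = linsolve([a0 - 2, a0 + a1, a2 + a3 + 1], [a0, a1, a2, a3])
assert sol[0] == 2 and sol[1] == -2   # a0 = 2 and a1 = -2
assert sol[2] + sol[3] == -1          # a3 remains a free parameter

# (c) image = 1 + x^2 is impossible: every image has x^2 coefficient 0
assert Poly(image, x).coeff_monomial(x**2) == 0
```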
This exercise is recommended for all readers.
Problem 7

For the map ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} }$ given by

${\displaystyle f({\begin{pmatrix}x\\y\end{pmatrix}})=2x+y}$

sketch these inverse image sets: ${\displaystyle f^{-1}(-3)}$, ${\displaystyle f^{-1}(0)}$, and ${\displaystyle f^{-1}(1)}$.

All inverse images are lines with slope ${\displaystyle -2}$.

This exercise is recommended for all readers.
Problem 8

Each of these transformations of ${\displaystyle {\mathcal {P}}_{3}}$ is nonsingular. Find the inverse function of each.

1. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{0}+a_{1}x+2a_{2}x^{2}+3a_{3}x^{3}}$
2. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{0}+a_{2}x+a_{1}x^{2}+a_{3}x^{3}}$
3. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{1}+a_{2}x+a_{3}x^{2}+a_{0}x^{3}}$
4. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{0}+(a_{0}+a_{1})x+(a_{0}+a_{1}+a_{2})x^{2}+(a_{0}+a_{1}+a_{2}+a_{3})x^{3}}$

These are the inverses.

1. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{0}+a_{1}x+(a_{2}/2)x^{2}+(a_{3}/3)x^{3}}$
2. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{0}+a_{2}x+a_{1}x^{2}+a_{3}x^{3}}$
3. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{3}+a_{0}x+a_{1}x^{2}+a_{2}x^{3}}$
4. ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\mapsto a_{0}+(a_{1}-a_{0})x+(a_{2}-a_{1})x^{2}+(a_{3}-a_{2})x^{3}}$

For instance, for the second one, the map given in the question sends ${\displaystyle 0+1x+2x^{2}+3x^{3}\mapsto 0+2x+1x^{2}+3x^{3}}$ and then the inverse above sends ${\displaystyle 0+2x+1x^{2}+3x^{3}\mapsto 0+1x+2x^{2}+3x^{3}}$. So this map is actually self-inverse.
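All four claimed inverses can be verified at once by composing each map with its inverse on the coefficient tuple ${\displaystyle (a_{0},a_{1},a_{2},a_{3})}$ and checking that both compositions give the identity (a sympy illustration only):

```python
from sympy import symbols

a0, a1, a2, a3 = symbols('a0 a1 a2 a3')

# Each pair is (map, claimed inverse), acting on coefficient tuples
maps = [
    (lambda c: (c[0], c[1], 2*c[2], 3*c[3]),
     lambda c: (c[0], c[1], c[2]/2, c[3]/3)),
    (lambda c: (c[0], c[2], c[1], c[3]),
     lambda c: (c[0], c[2], c[1], c[3])),          # self-inverse
    (lambda c: (c[1], c[2], c[3], c[0]),
     lambda c: (c[3], c[0], c[1], c[2])),
    (lambda c: (c[0], c[0]+c[1], c[0]+c[1]+c[2], c[0]+c[1]+c[2]+c[3]),
     lambda c: (c[0], c[1]-c[0], c[2]-c[1], c[3]-c[2])),
]

coeffs = (a0, a1, a2, a3)
for f, g in maps:
    assert g(f(coeffs)) == coeffs and f(g(coeffs)) == coeffs
```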

Problem 9

Describe the nullspace and rangespace of a transformation given by ${\displaystyle {\vec {v}}\mapsto 2{\vec {v}}}$.

For any vector space ${\displaystyle V}$, the nullspace

${\displaystyle \{{\vec {v}}\in V\,{\big |}\,2{\vec {v}}={\vec {0}}\}}$

is trivial, while the rangespace

${\displaystyle \{{\vec {w}}\in V\,{\big |}\,{\vec {w}}=2{\vec {v}}{\text{ for some }}{\vec {v}}\in V\}}$

is all of ${\displaystyle V}$, because every vector ${\displaystyle {\vec {w}}}$ is twice some other vector, specifically, it is twice ${\displaystyle (1/2){\vec {w}}}$. (Thus, this transformation is actually an automorphism.)

Problem 10

List all pairs ${\displaystyle ({\text{rank}}(h),{\text{nullity}}(h))}$ that are possible for linear maps from ${\displaystyle \mathbb {R} ^{5}}$ to ${\displaystyle \mathbb {R} ^{3}}$.

Because the rank plus the nullity equals the dimension of the domain (here, five), and the rank is at most three, the possible pairs are: ${\displaystyle (3,2)}$, ${\displaystyle (2,3)}$, ${\displaystyle (1,4)}$, and ${\displaystyle (0,5)}$. Coming up with linear maps that show that each pair is indeed possible is easy.
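The enumeration itself is short enough to write out (an illustration only):

```python
# rank + nullity = 5 (the dimension of the domain) and
# rank <= min(5, 3) = 3 (the rank is bounded by both dimensions)
pairs = [(r, 5 - r) for r in range(min(5, 3), -1, -1)]
assert pairs == [(3, 2), (2, 3), (1, 4), (0, 5)]
```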

Problem 11

Does the differentiation map ${\displaystyle d/dx:{\mathcal {P}}_{n}\to {\mathcal {P}}_{n}}$ have an inverse?

No (unless ${\displaystyle {\mathcal {P}}_{n}}$ is trivial), because the two polynomials ${\displaystyle f_{0}(x)=0}$ and ${\displaystyle f_{1}(x)=1}$ have the same derivative; a map must be one-to-one to have an inverse.

This exercise is recommended for all readers.
Problem 12

Find the nullity of the map ${\displaystyle h:{\mathcal {P}}_{n}\to \mathbb {R} }$ given by

${\displaystyle a_{0}+a_{1}x+\dots +a_{n}x^{n}\mapsto \int _{x=0}^{x=1}a_{0}+a_{1}x+\dots +a_{n}x^{n}\,dx.}$

The nullspace is this.

${\displaystyle \{a_{0}+a_{1}x+\dots +a_{n}x^{n}\,{\big |}\,a_{0}(1)+{\frac {\displaystyle a_{1}}{\displaystyle 2}}(1^{2})+\dots +{\frac {\displaystyle a_{n}}{\displaystyle n+1}}(1^{n+1})=0\}}$
${\displaystyle =\{a_{0}+a_{1}x+\dots +a_{n}x^{n}\,{\big |}\,a_{0}+(a_{1}/2)+\dots +(a_{n}/(n+1))=0\}}$

Thus the nullity is ${\displaystyle n}$.
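The map is represented by a single row of integrals, so its rank is one and the nullity is ${\displaystyle n}$; a sympy check for the case ${\displaystyle n=3}$ (an illustration only):

```python
from sympy import symbols, integrate, Matrix, Rational

x = symbols('x')
n = 3
# The 1x(n+1) matrix of the map: the integral of x^k over [0,1] is 1/(k+1)
row = Matrix([[Rational(1, k + 1) for k in range(n + 1)]])
assert row == Matrix([[integrate(x**k, (x, 0, 1)) for k in range(n + 1)]])
assert row.rank() == 1
assert (n + 1) - row.rank() == n   # nullity = dim(domain) - rank
```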

Problem 13
1. Prove that a homomorphism is onto if and only if its rank equals the dimension of its codomain.
2. Conclude that a homomorphism between vector spaces with the same dimension is one-to-one if and only if it is onto.
1. One direction is obvious: if the homomorphism is onto then its range is the codomain and so its rank equals the dimension of its codomain. For the other direction assume that the map's rank equals the dimension of the codomain. Then the map's range is a subspace of the codomain, and has dimension equal to the dimension of the codomain. Therefore, the map's range must equal the codomain, and the map is onto. (The "therefore" is because there is a linearly independent subset of the range that is of size equal to the dimension of the codomain, but any such linearly independent subset of the codomain must be a basis for the codomain, and so the range equals the codomain.)
2. By Theorem 2.21, a homomorphism is one-to-one if and only if its nullity is zero. Because rank plus nullity equals the dimension of the domain, it follows that a homomorphism is one-to-one if and only if its rank equals the dimension of its domain. But this domain and codomain have the same dimension, so the map is one-to-one if and only if it is onto.
Problem 14

Show that a linear map is nonsingular if and only if it preserves linear independence.

We are proving that ${\displaystyle h:V\to W}$ is nonsingular if and only if for every linearly independent subset ${\displaystyle S}$ of ${\displaystyle V}$ the subset ${\displaystyle h(S)=\{h({\vec {s}})\,{\big |}\,{\vec {s}}\in S\}}$ of ${\displaystyle W}$ is linearly independent.

One half is easy— by Theorem 2.21, if ${\displaystyle h}$ is singular then its nullspace is nontrivial (contains more than just the zero vector). So, where ${\displaystyle {\vec {v}}\neq {\vec {0}}_{V}}$ is in that nullspace, the singleton set ${\displaystyle \{{\vec {v\,}}\}}$ is independent while its image ${\displaystyle \{h({\vec {v}})\}=\{{\vec {0}}_{W}\}}$ is not.

For the other half, assume that ${\displaystyle h}$ is nonsingular and so by Theorem 2.21 has a trivial nullspace. Then for any ${\displaystyle {\vec {v}}_{1},\dots ,{\vec {v}}_{n}\in V}$, the relation

${\displaystyle {\vec {0}}_{W}=c_{1}\cdot h({\vec {v}}_{1})+\dots +c_{n}\cdot h({\vec {v}}_{n})=h(c_{1}\cdot {\vec {v}}_{1}+\dots +c_{n}\cdot {\vec {v}}_{n})}$

implies the relation ${\displaystyle c_{1}\cdot {\vec {v}}_{1}+\dots +c_{n}\cdot {\vec {v}}_{n}={\vec {0}}_{V}}$. Hence, if a subset of ${\displaystyle V}$ is independent then so is its image in ${\displaystyle W}$.

Remark. The statement is that a linear map is nonsingular if and only if it preserves independence for all sets (that is, if a set is independent then its image is also independent). A singular map may well preserve some independent sets. An example is this singular map from ${\displaystyle \mathbb {R} ^{3}}$ to ${\displaystyle \mathbb {R} ^{2}}$.

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}x+y+z\\0\end{pmatrix}}}$

Linear independence is preserved for this set

${\displaystyle \{{\begin{pmatrix}1\\0\\0\end{pmatrix}}\}\mapsto \{{\begin{pmatrix}1\\0\end{pmatrix}}\}}$

and (in a somewhat more tricky example) also for this set

${\displaystyle \{{\begin{pmatrix}1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\1\\0\end{pmatrix}}\}\mapsto \{{\begin{pmatrix}1\\0\end{pmatrix}}\}}$

(recall that in a set, repeated elements do not appear twice). However, there are sets whose independence is not preserved under this map;

${\displaystyle \{{\begin{pmatrix}1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\2\\0\end{pmatrix}}\}\mapsto \{{\begin{pmatrix}1\\0\end{pmatrix}},{\begin{pmatrix}2\\0\end{pmatrix}}\}}$

and so not all sets have independence preserved.
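The three examples in the remark can be checked numerically by comparing the size of each image set (with repeats removed, since these are sets) against its rank (a numpy illustration only):

```python
import numpy as np

# The singular map (x,y,z) |-> (x+y+z, 0) from the remark, as a matrix
A = np.array([[1, 1, 1],
              [0, 0, 0]])
assert np.linalg.matrix_rank(A) == 1   # singular: rank < 3

def image_independent(vectors):
    # drop repeated images, since in a set repeats do not appear twice
    imgs = np.unique(np.array([A @ v for v in vectors]), axis=0)
    return np.linalg.matrix_rank(imgs) == len(imgs)

e1, e2 = np.array([1, 0, 0]), np.array([0, 1, 0])
assert image_independent([e1])          # first example: independence kept
assert image_independent([e1, e2])      # images collapse to one vector
assert not image_independent([e1, np.array([0, 2, 0])])  # (1,0), (2,0) dependent
```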

Problem 15

Corollary 2.17 says that for there to be an onto homomorphism from a vector space ${\displaystyle V}$ to a vector space ${\displaystyle W}$, it is necessary that the dimension of ${\displaystyle W}$ be less than or equal to the dimension of ${\displaystyle V}$. Prove that this condition is also sufficient; use Theorem 1.9 to show that if the dimension of ${\displaystyle W}$ is less than or equal to the dimension of ${\displaystyle V}$, then there is a homomorphism from ${\displaystyle V}$ to ${\displaystyle W}$ that is onto.

(We use the notation from Theorem 1.9.) Fix a basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ for ${\displaystyle V}$ and a basis ${\displaystyle \langle {\vec {w}}_{1},\ldots ,{\vec {w}}_{k}\rangle }$ for ${\displaystyle W}$. If the dimension ${\displaystyle k}$ of ${\displaystyle W}$ is less than or equal to the dimension ${\displaystyle n}$ of ${\displaystyle V}$ then the theorem gives a linear map from ${\displaystyle V}$ to ${\displaystyle W}$ determined in this way.

${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {w}}_{1},\,\dots ,\,{\vec {\beta }}_{k}\mapsto {\vec {w}}_{k}\quad {\text{and}}\quad {\vec {\beta }}_{k+1}\mapsto {\vec {w}}_{k},\,\dots ,\,{\vec {\beta }}_{n}\mapsto {\vec {w}}_{k}}$

We need only verify that this map is onto.

Any member of ${\displaystyle W}$ can be written as a linear combination of basis elements ${\displaystyle c_{1}\cdot {\vec {w}}_{1}+\dots +c_{k}\cdot {\vec {w}}_{k}}$. This vector is the image, under the map described above, of ${\displaystyle c_{1}\cdot {\vec {\beta }}_{1}+\dots +c_{k}\cdot {\vec {\beta }}_{k}+0\cdot {\vec {\beta }}_{k+1}\dots +0\cdot {\vec {\beta }}_{n}}$. Thus the map is onto.
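A small numeric instance of the construction, with ${\displaystyle V=\mathbb {R} ^{4}}$ and ${\displaystyle W=\mathbb {R} ^{2}}$ (so ${\displaystyle n=4}$, ${\displaystyle k=2}$); the bases here are invented for the example, and ontoness shows up as full row rank (a numpy illustration only):

```python
import numpy as np

w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Columns are the images of beta_1, ..., beta_4: the first k basis
# vectors go to w_1, ..., w_k and the remaining ones go to w_k
A = np.column_stack([w1, w2, w2, w2])
# The map is onto exactly when its columns span W, i.e. full row rank
assert np.linalg.matrix_rank(A) == 2
```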

Problem 16

Let ${\displaystyle h:V\to \mathbb {R} }$ be a homomorphism, but not the zero homomorphism. Prove that if ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ is a basis for the nullspace and if ${\displaystyle {\vec {v}}\in V}$ is not in the nullspace then ${\displaystyle \langle {\vec {v}},{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ is a basis for the entire domain ${\displaystyle V}$.

By assumption, ${\displaystyle h}$ is not the zero map and so a vector ${\displaystyle {\vec {v}}\in V}$ exists that is not in the nullspace. Note that ${\displaystyle \langle h({\vec {v}})\rangle }$ is a basis for ${\displaystyle \mathbb {R} }$, because it is a size one linearly independent subset of ${\displaystyle \mathbb {R} }$. Consequently ${\displaystyle h}$ is onto, as for any ${\displaystyle r\in \mathbb {R} }$ we have ${\displaystyle r=c\cdot h({\vec {v}})}$ for some scalar ${\displaystyle c}$, and so ${\displaystyle r=h(c{\vec {v}})}$.

Thus the rank of ${\displaystyle h}$ is one. Because the nullity is given as ${\displaystyle n}$, the dimension of the domain of ${\displaystyle h}$ (the vector space ${\displaystyle V}$) is ${\displaystyle n+1}$. We can finish by showing ${\displaystyle \{{\vec {v}},{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\}}$ is linearly independent, as it is a size ${\displaystyle n+1}$ subset of a dimension ${\displaystyle n+1}$ space. Because ${\displaystyle \{{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\}}$ is linearly independent we need only show that ${\displaystyle {\vec {v}}}$ is not a linear combination of the other vectors. But ${\displaystyle c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}={\vec {v}}}$ would give ${\displaystyle -{\vec {v}}+c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}={\vec {0}}}$ and applying ${\displaystyle h}$ to both sides would give a contradiction.

This exercise is recommended for all readers.
Problem 17

Recall that the nullspace is a subset of the domain and the rangespace is a subset of the codomain. Are they necessarily distinct? Is there a homomorphism that has a nontrivial intersection of its nullspace and its rangespace?

Yes. For the transformation of ${\displaystyle \mathbb {R} ^{2}}$ given by

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {h}{\longmapsto }}{\begin{pmatrix}0\\x\end{pmatrix}}}$

we have this.

${\displaystyle {\mathcal {N}}(h)=\{{\begin{pmatrix}0\\y\end{pmatrix}}\,{\big |}\,y\in \mathbb {R} \}={\mathcal {R}}(h)}$

Remark. We will see more of this in the fifth chapter.
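This example is concrete enough to verify by direct computation (a numpy illustration only):

```python
import numpy as np

# The transformation (x,y) |-> (0,x) as a matrix
A = np.array([[0, 0],
              [1, 0]])
# Rangespace: spanned by the image of e1, which is (0,1)
assert np.array_equal(A @ np.array([1, 0]), np.array([0, 1]))
# Nullspace: A kills exactly the multiples of (0,1)
assert np.array_equal(A @ np.array([0, 5]), np.array([0, 0]))
assert not np.array_equal(A @ np.array([1, 0]), np.array([0, 0]))
```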

Problem 18

Prove that the image of a span equals the span of the images. That is, where ${\displaystyle h:V\to W}$ is linear, prove that if ${\displaystyle S}$ is a subset of ${\displaystyle V}$ then ${\displaystyle h([S])}$ equals ${\displaystyle [h(S)]}$. This generalizes Lemma 2.1 since it shows that if ${\displaystyle U}$ is any subspace of ${\displaystyle V}$ then its image ${\displaystyle \{h({\vec {u}})\,{\big |}\,{\vec {u}}\in U\}}$ is a subspace of ${\displaystyle W}$, because the span of the set ${\displaystyle U}$ is ${\displaystyle U}$.

This is a simple calculation.

${\displaystyle {\begin{array}{rl}h([S])&=\{h(c_{1}{\vec {s}}_{1}+\dots +c_{n}{\vec {s}}_{n})\,{\big |}\,c_{1},\dots ,c_{n}\in \mathbb {R} {\text{ and }}{\vec {s}}_{1},\dots ,{\vec {s}}_{n}\in S\}\\&=\{c_{1}h({\vec {s}}_{1})+\dots +c_{n}h({\vec {s}}_{n})\,{\big |}\,c_{1},\dots ,c_{n}\in \mathbb {R} {\text{ and }}{\vec {s}}_{1},\dots ,{\vec {s}}_{n}\in S\}\\&=[h(S)]\end{array}}}$
This exercise is recommended for all readers.
Problem 19
1. Prove that for any linear map ${\displaystyle h:V\to W}$ and any ${\displaystyle {\vec {w}}\in W}$, the set ${\displaystyle h^{-1}({\vec {w}})}$ has the form
${\displaystyle \{{\vec {v}}+{\vec {n}}\,{\big |}\,{\vec {n}}\in {\mathcal {N}}(h)\}}$
for ${\displaystyle {\vec {v}}\in V}$ with ${\displaystyle h({\vec {v}})={\vec {w}}}$ (if ${\displaystyle h}$ is not onto then this set may be empty). Such a set is a coset of ${\displaystyle {\mathcal {N}}(h)}$ and is denoted ${\displaystyle {\vec {v}}+{\mathcal {N}}(h)}$.
2. Consider the map ${\displaystyle t:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ given by
${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {t}{\longmapsto }}{\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}}$
for some scalars ${\displaystyle a}$, ${\displaystyle b}$, ${\displaystyle c}$, and ${\displaystyle d}$. Prove that ${\displaystyle t}$ is linear.
3. Conclude from the prior two items that for any linear system of the form
${\displaystyle {\begin{array}{*{2}{rc}r}ax&+&by&=&e\\cx&+&dy&=&f\end{array}}}$
the solution set can be written (the vectors are members of ${\displaystyle \mathbb {R} ^{2}}$)
${\displaystyle \{{\vec {p}}+{\vec {h}}\,{\big |}\,{\vec {h}}{\text{ satisfies the associated homogeneous system}}\}}$
where ${\displaystyle {\vec {p}}}$ is a particular solution of that linear system (if there is no particular solution then the above set is empty).
4. Show that this map ${\displaystyle h:\mathbb {R} ^{n}\to \mathbb {R} ^{m}}$ is linear
${\displaystyle {\begin{pmatrix}x_{1}\\\vdots \\x_{n}\end{pmatrix}}\mapsto {\begin{pmatrix}a_{1,1}x_{1}+\dots +a_{1,n}x_{n}\\\vdots \\a_{m,1}x_{1}+\dots +a_{m,n}x_{n}\end{pmatrix}}}$
for any scalars ${\displaystyle a_{1,1}}$, ..., ${\displaystyle a_{m,n}}$. Extend the conclusion made in the prior item.
5. Show that the ${\displaystyle k}$-th derivative map is a linear transformation of ${\displaystyle {\mathcal {P}}_{n}}$ for each ${\displaystyle k}$. Prove that this map is a linear transformation of that space
${\displaystyle f\mapsto {\frac {d^{k}}{dx^{k}}}f+c_{k-1}{\frac {d^{k-1}}{dx^{k-1}}}f+\dots +c_{1}{\frac {d}{dx}}f+c_{0}f}$
for any scalars ${\displaystyle c_{k}}$, ..., ${\displaystyle c_{0}}$. Draw a conclusion as above.
1. We will show that the two sets are equal ${\displaystyle h^{-1}({\vec {w}})=\{{\vec {v}}+{\vec {n}}\,{\big |}\,{\vec {n}}\in {\mathcal {N}}(h)\}}$ by mutual inclusion. For the ${\displaystyle \{{\vec {v}}+{\vec {n}}\,{\big |}\,{\vec {n}}\in {\mathcal {N}}(h)\}\subseteq h^{-1}({\vec {w}})}$ direction, just note that ${\displaystyle h({\vec {v}}+{\vec {n}})=h({\vec {v}})+h({\vec {n}})}$ equals ${\displaystyle {\vec {w}}}$, and so any member of the first set is a member of the second. For the ${\displaystyle h^{-1}({\vec {w}})\subseteq \{{\vec {v}}+{\vec {n}}\,{\big |}\,{\vec {n}}\in {\mathcal {N}}(h)\}}$ direction, consider ${\displaystyle {\vec {u}}\in h^{-1}({\vec {w}})}$. Because ${\displaystyle h}$ is linear, ${\displaystyle h({\vec {u}})=h({\vec {v}})}$ implies that ${\displaystyle h({\vec {u}}-{\vec {v}})={\vec {0}}}$. We can write ${\displaystyle {\vec {u}}-{\vec {v}}}$ as ${\displaystyle {\vec {n}}}$, and then we have that ${\displaystyle {\vec {u}}\in \{{\vec {v}}+{\vec {n}}\,{\big |}\,{\vec {n}}\in {\mathcal {N}}(h)\}}$, as desired, because ${\displaystyle {\vec {u}}={\vec {v}}+({\vec {u}}-{\vec {v}})}$.
2. This check is routine.
3. This is immediate.
4. For the linearity check, briefly, where ${\displaystyle c,d}$ are scalars and ${\displaystyle {\vec {x}},{\vec {y}}\in \mathbb {R} ^{n}}$ have components ${\displaystyle x_{1},\dots ,x_{n}}$ and ${\displaystyle y_{1},\dots ,y_{n}}$, we have this.
${\displaystyle {\begin{array}{rl}h(c\cdot {\vec {x}}+d\cdot {\vec {y}})&={\begin{pmatrix}a_{1,1}(cx_{1}+dy_{1})+\dots +a_{1,n}(cx_{n}+dy_{n})\\\vdots \\a_{m,1}(cx_{1}+dy_{1})+\dots +a_{m,n}(cx_{n}+dy_{n})\end{pmatrix}}\\&={\begin{pmatrix}a_{1,1}cx_{1}+\dots +a_{1,n}cx_{n}\\\vdots \\a_{m,1}cx_{1}+\dots +a_{m,n}cx_{n}\end{pmatrix}}+{\begin{pmatrix}a_{1,1}dy_{1}+\dots +a_{1,n}dy_{n}\\\vdots \\a_{m,1}dy_{1}+\dots +a_{m,n}dy_{n}\end{pmatrix}}\\&=c\cdot h({\vec {x}})+d\cdot h({\vec {y}})\end{array}}}$
The appropriate conclusion is that ${\displaystyle {\text{General}}={\text{Particular}}+{\text{Homogeneous}}}$.
5. Each power of the derivative is linear because of the rules
${\displaystyle {\frac {d^{k}}{dx^{k}}}(f(x)+g(x))={\frac {d^{k}}{dx^{k}}}f(x)+{\frac {d^{k}}{dx^{k}}}g(x)\quad {\text{and}}\quad {\frac {d^{k}}{dx^{k}}}rf(x)=r{\frac {d^{k}}{dx^{k}}}f(x)}$
from calculus. Thus the given map is a linear transformation of the space because any linear combination of linear maps is also a linear map by Lemma 1.16. The appropriate conclusion is ${\displaystyle {\text{General}}={\text{Particular}}+{\text{Homogeneous}}}$, where the associated homogeneous differential equation has a constant of ${\displaystyle 0}$.
Problem 20

Prove that for any transformation ${\displaystyle t:V\to V}$ that is rank one, the map given by composing the operator with itself ${\displaystyle t\circ t:V\to V}$ satisfies ${\displaystyle t\circ t=r\cdot t}$ for some real number ${\displaystyle r}$.

Because the rank of ${\displaystyle t}$ is one, the rangespace of ${\displaystyle t}$ is a one-dimensional set. Taking ${\displaystyle \langle t({\vec {v}})\rangle }$ as a basis (for some appropriate ${\displaystyle {\vec {v}}}$), we have that for every ${\displaystyle {\vec {w}}\in V}$, the image ${\displaystyle t({\vec {w}})\in V}$ is a multiple of this basis vector; that is, associated with each ${\displaystyle {\vec {w}}}$ there is a scalar ${\displaystyle c_{\vec {w}}}$ such that ${\displaystyle t({\vec {w}})=c_{\vec {w}}\,t({\vec {v}})}$. Apply ${\displaystyle t}$ to both sides of that equation and take ${\displaystyle r}$ to be ${\displaystyle c_{t({\vec {v}})}}$

${\displaystyle t\circ t({\vec {w}})=t(c_{\vec {w}}\cdot t({\vec {v}}))=c_{\vec {w}}\cdot t\circ t({\vec {v}})=c_{\vec {w}}\cdot c_{t({\vec {v}})}\cdot t({\vec {v}})=c_{\vec {w}}\cdot r\cdot t({\vec {v}})=r\cdot c_{\vec {w}}\cdot t({\vec {v}})=r\cdot t({\vec {w}})}$

to get the desired conclusion.
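A numeric instance: a rank-one transformation of ${\displaystyle \mathbb {R} ^{3}}$ can be written as an outer product ${\displaystyle t={\vec {u}}\,{\vec {v}}^{\mathsf {T}}}$, and then ${\displaystyle t\circ t=({\vec {v}}\cdot {\vec {u}})\,t}$, so ${\displaystyle r={\vec {v}}\cdot {\vec {u}}}$ (a numpy illustration only; the particular vectors are invented for the example):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, 1.0])
t = np.outer(u, v)                 # a rank-one transformation of R^3
assert np.linalg.matrix_rank(t) == 1

r = v @ u                          # here r = 4*1 + 0*2 + 1*3 = 7
assert np.allclose(t @ t, r * t)   # t composed with t equals r times t
```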

Problem 21

Show that for any space ${\displaystyle V}$ of dimension ${\displaystyle n}$, the dual space

${\displaystyle \mathop {\mathcal {L}} (V,\mathbb {R} )=\{h:V\to \mathbb {R} \,{\big |}\,h{\text{ is linear}}\}}$

is isomorphic to ${\displaystyle \mathbb {R} ^{n}}$. It is often denoted ${\displaystyle V^{\ast }}$. Conclude that ${\displaystyle V^{\ast }\cong V}$.

Fix a basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ for ${\displaystyle V}$. We shall prove that this map

${\displaystyle h{\stackrel {\Phi }{\longmapsto }}{\begin{pmatrix}h({\vec {\beta }}_{1})\\\vdots \\h({\vec {\beta }}_{n})\end{pmatrix}}}$

is an isomorphism from ${\displaystyle V^{\ast }}$ to ${\displaystyle \mathbb {R} ^{n}}$.

To see that ${\displaystyle \Phi }$ is one-to-one, assume that ${\displaystyle h_{1}}$ and ${\displaystyle h_{2}}$ are members of ${\displaystyle V^{\ast }}$ such that ${\displaystyle \Phi (h_{1})=\Phi (h_{2})}$. Then

${\displaystyle {\begin{pmatrix}h_{1}({\vec {\beta }}_{1})\\\vdots \\h_{1}({\vec {\beta }}_{n})\end{pmatrix}}={\begin{pmatrix}h_{2}({\vec {\beta }}_{1})\\\vdots \\h_{2}({\vec {\beta }}_{n})\end{pmatrix}}}$

and consequently, ${\displaystyle h_{1}({\vec {\beta }}_{1})=h_{2}({\vec {\beta }}_{1})}$, etc. But a homomorphism is determined by its action on a basis, so ${\displaystyle h_{1}=h_{2}}$, and therefore ${\displaystyle \Phi }$ is one-to-one.

To see that ${\displaystyle \Phi }$ is onto, consider

${\displaystyle {\begin{pmatrix}x_{1}\\\vdots \\x_{n}\end{pmatrix}}}$

for ${\displaystyle x_{1},\ldots ,x_{n}\in \mathbb {R} }$. This function ${\displaystyle h}$ from ${\displaystyle V}$ to ${\displaystyle \mathbb {R} }$

${\displaystyle c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}{\stackrel {h}{\longmapsto }}c_{1}x_{1}+\dots +c_{n}x_{n}}$

is easily seen to be linear, and to be mapped by ${\displaystyle \Phi }$ to the given vector in ${\displaystyle \mathbb {R} ^{n}}$, so ${\displaystyle \Phi }$ is onto.

The map ${\displaystyle \Phi }$ also preserves structure: where

${\displaystyle {\begin{array}{rl}c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}&{\stackrel {h_{1}}{\longmapsto }}c_{1}h_{1}({\vec {\beta }}_{1})+\dots +c_{n}h_{1}({\vec {\beta }}_{n})\\c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}&{\stackrel {h_{2}}{\longmapsto }}c_{1}h_{2}({\vec {\beta }}_{1})+\dots +c_{n}h_{2}({\vec {\beta }}_{n})\end{array}}}$

we have

${\displaystyle {\begin{array}{rl}(r_{1}h_{1}+r_{2}h_{2})(c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n})&=c_{1}(r_{1}h_{1}({\vec {\beta }}_{1})+r_{2}h_{2}({\vec {\beta }}_{1}))+\dots +c_{n}(r_{1}h_{1}({\vec {\beta }}_{n})+r_{2}h_{2}({\vec {\beta }}_{n}))\\&=r_{1}(c_{1}h_{1}({\vec {\beta }}_{1})+\dots +c_{n}h_{1}({\vec {\beta }}_{n}))+r_{2}(c_{1}h_{2}({\vec {\beta }}_{1})+\dots +c_{n}h_{2}({\vec {\beta }}_{n}))\end{array}}}$

so ${\displaystyle \Phi (r_{1}h_{1}+r_{2}h_{2})=r_{1}\Phi (h_{1})+r_{2}\Phi (h_{2})}$.
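A concrete instance of ${\displaystyle \Phi }$ for ${\displaystyle V=\mathbb {R} ^{2}}$ with the standard basis: a functional goes to the vector of its values on the basis, and it is recovered from that vector as a dot product (a numpy illustration only; the sample functional is invented here):

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def phi(h):
    # Phi sends a linear functional h to (h(e1), h(e2))
    return np.array([h(e1), h(e2)])

h = lambda v: 3*v[0] - 2*v[1]      # a sample member of the dual space
vec = phi(h)
assert np.array_equal(vec, np.array([3.0, -2.0]))

# h is recovered from its Phi-image: h(v) = vec . v for every v
v = np.array([5.0, 4.0])
assert h(v) == vec @ v
```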

Problem 22

Show that any linear map is the sum of maps of rank one.

Let ${\displaystyle h:V\to W}$ be linear and fix a basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ for ${\displaystyle V}$. Consider these ${\displaystyle n}$ maps from ${\displaystyle V}$ to ${\displaystyle W}$

${\displaystyle h_{1}({\vec {v}})=c_{1}\cdot h({\vec {\beta }}_{1}),\quad h_{2}({\vec {v}})=c_{2}\cdot h({\vec {\beta }}_{2}),\quad \ldots \quad ,h_{n}({\vec {v}})=c_{n}\cdot h({\vec {\beta }}_{n})}$

for any ${\displaystyle {\vec {v}}=c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}}$. Clearly ${\displaystyle h}$ is the sum of the ${\displaystyle h_{i}}$'s. We need only check that each ${\displaystyle h_{i}}$ is linear: where ${\displaystyle {\vec {u}}=d_{1}{\vec {\beta }}_{1}+\dots +d_{n}{\vec {\beta }}_{n}}$ we have ${\displaystyle h_{i}(r{\vec {v}}+s{\vec {u}})=(rc_{i}+sd_{i})\cdot h({\vec {\beta }}_{i})=r\,h_{i}({\vec {v}})+s\,h_{i}({\vec {u}})}$.

Problem 23

Is "is homomorphic to" an equivalence relation? (Hint: the difficulty is to decide on an appropriate meaning for the quoted phrase.)

Either yes (trivially) or no (nearly trivially).

If ${\displaystyle V}$ "is homomorphic to" ${\displaystyle W}$ is taken to mean there is a homomorphism from ${\displaystyle V}$ into (but not necessarily onto) ${\displaystyle W}$, then every space is homomorphic to every other space as a zero map always exists.

If ${\displaystyle V}$ "is homomorphic to" ${\displaystyle W}$ is taken to mean there is an onto homomorphism from ${\displaystyle V}$ to ${\displaystyle W}$ then the relation is not an equivalence. For instance, there is an onto homomorphism from ${\displaystyle \mathbb {R} ^{3}}$ to ${\displaystyle \mathbb {R} ^{2}}$ (projection is one) but no homomorphism from ${\displaystyle \mathbb {R} ^{2}}$ onto ${\displaystyle \mathbb {R} ^{3}}$ by Corollary 2.17, so the relation is not symmetric.

Problem 24

Show that the rangespaces and nullspaces of powers of linear maps ${\displaystyle t:V\to V}$ form descending

${\displaystyle V\supseteq {\mathcal {R}}(t)\supseteq {\mathcal {R}}(t^{2})\supseteq \ldots }$

and ascending

${\displaystyle \{{\vec {0}}\}\subseteq {\mathcal {N}}(t)\subseteq {\mathcal {N}}(t^{2})\subseteq \ldots }$

chains. Also show that if ${\displaystyle k}$ is such that ${\displaystyle {\mathcal {R}}(t^{k})={\mathcal {R}}(t^{k+1})}$ then all following rangespaces are equal: ${\displaystyle {\mathcal {R}}(t^{k})={\mathcal {R}}(t^{k+1})={\mathcal {R}}(t^{k+2})\ldots \,}$. Similarly, if ${\displaystyle {\mathcal {N}}(t^{k})={\mathcal {N}}(t^{k+1})}$ then ${\displaystyle {\mathcal {N}}(t^{k})={\mathcal {N}}(t^{k+1})={\mathcal {N}}(t^{k+2})=\ldots \,}$.

That they form the chains is obvious. For the rest, we show here that ${\displaystyle {\mathcal {R}}(t^{j+1})={\mathcal {R}}(t^{j})}$ implies that ${\displaystyle {\mathcal {R}}(t^{j+2})={\mathcal {R}}(t^{j+1})}$. Induction then applies.

Assume that ${\displaystyle {\mathcal {R}}(t^{j+1})={\mathcal {R}}(t^{j})}$. Then ${\displaystyle t:{\mathcal {R}}(t^{j+1})\to {\mathcal {R}}(t^{j+2})}$ is the same map, with the same domain, as ${\displaystyle t:{\mathcal {R}}(t^{j})\to {\mathcal {R}}(t^{j+1})}$. Thus it has the same range: ${\displaystyle {\mathcal {R}}(t^{j+2})={\mathcal {R}}(t^{j+1})}$. The argument for the nullspaces is similar: if ${\displaystyle {\mathcal {N}}(t^{j+1})={\mathcal {N}}(t^{j})}$ and ${\displaystyle t^{j+2}({\vec {v}})={\vec {0}}}$ then ${\displaystyle t({\vec {v}})\in {\mathcal {N}}(t^{j+1})={\mathcal {N}}(t^{j})}$, so ${\displaystyle t^{j+1}({\vec {v}})={\vec {0}}}$, giving ${\displaystyle {\mathcal {N}}(t^{j+2})={\mathcal {N}}(t^{j+1})}$.
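Both chains can be watched numerically on a nilpotent shift map, where the ranks strictly drop until they stabilize at zero and the nullities rise until they fill the whole space (a numpy illustration only):

```python
import numpy as np

# The shift map on R^3: e3 -> e2 -> e1 -> 0, which is nilpotent
t = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(t, k))
         for k in range(1, 5)]
assert ranks == [2, 1, 0, 0]       # descending rangespaces, then constant
nullities = [3 - r for r in ranks]
assert nullities == [1, 2, 3, 3]   # ascending nullspaces, then constant
```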