# Linear Algebra/Definition and Examples of Isomorphisms


Example 1.1

Consider the example mentioned above, the space of two-wide row vectors and the space of two-tall column vectors. They are "the same" in that if we associate the vectors that have the same components, e.g.,

${\displaystyle {\begin{pmatrix}1&2\end{pmatrix}}\quad \longleftrightarrow \quad {\begin{pmatrix}1\\2\end{pmatrix}}}$

then this correspondence preserves the operations, for instance this addition

${\displaystyle {\begin{pmatrix}1&2\end{pmatrix}}+{\begin{pmatrix}3&4\end{pmatrix}}={\begin{pmatrix}4&6\end{pmatrix}}\quad \longleftrightarrow \quad {\begin{pmatrix}1\\2\end{pmatrix}}+{\begin{pmatrix}3\\4\end{pmatrix}}={\begin{pmatrix}4\\6\end{pmatrix}}}$

and this scalar multiplication.

${\displaystyle 5\cdot {\begin{pmatrix}1&2\end{pmatrix}}={\begin{pmatrix}5&10\end{pmatrix}}\quad \longleftrightarrow \quad 5\cdot {\begin{pmatrix}1\\2\end{pmatrix}}={\begin{pmatrix}5\\10\end{pmatrix}}}$

More generally stated, under the correspondence

${\displaystyle {\begin{pmatrix}a_{0}&a_{1}\end{pmatrix}}\quad \longleftrightarrow \quad {\begin{pmatrix}a_{0}\\a_{1}\end{pmatrix}}}$

both operations are preserved:

${\displaystyle {\begin{pmatrix}a_{0}&a_{1}\end{pmatrix}}+{\begin{pmatrix}b_{0}&b_{1}\end{pmatrix}}={\begin{pmatrix}a_{0}+b_{0}&a_{1}+b_{1}\end{pmatrix}}\longleftrightarrow {\begin{pmatrix}a_{0}\\a_{1}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\end{pmatrix}}={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\end{pmatrix}}}$

and

${\displaystyle r\cdot {\begin{pmatrix}a_{0}&a_{1}\end{pmatrix}}={\begin{pmatrix}ra_{0}&ra_{1}\end{pmatrix}}\quad \longleftrightarrow \quad r\cdot {\begin{pmatrix}a_{0}\\a_{1}\end{pmatrix}}={\begin{pmatrix}ra_{0}\\ra_{1}\end{pmatrix}}}$

(all of the variables are real numbers).
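
This general correspondence can be spot-checked numerically. The sketch below models both rows and columns as Python tuples; the helper names (`to_column`, `add`, `scale`) are ours, introduced only for this illustration.

```python
# Spot-check that the row/column correspondence preserves the operations.
# Rows and columns are both modeled as tuples; `to_column` is the
# correspondence (a name chosen for this sketch, not from the text).

def to_column(row):
    """The correspondence (a0  a1) <-> the column with entries a0, a1."""
    return tuple(row)  # same components, relabeled as a column

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(r, u):
    return tuple(r * a for a in u)

row_u, row_v = (1, 2), (3, 4)

# Addition corresponds: map-then-add equals add-then-map.
assert to_column(add(row_u, row_v)) == add(to_column(row_u), to_column(row_v))

# Scalar multiplication corresponds as well.
assert to_column(scale(5, row_u)) == scale(5, to_column(row_u))
```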

Example 1.2

Another two spaces we can think of as "the same" are ${\displaystyle {\mathcal {P}}_{2}}$, the space of polynomials of degree two or less, and ${\displaystyle \mathbb {R} ^{3}}$. A natural correspondence is this.

${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}\quad \longleftrightarrow \quad {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}\qquad \qquad ({\text{e.g., }}1+2x+3x^{2}\,\longleftrightarrow \,{\begin{pmatrix}1\\2\\3\end{pmatrix}})}$

The structure is preserved: corresponding elements add in a corresponding way

${\displaystyle {\begin{array}{r}a_{0}+a_{1}x+a_{2}x^{2}\\+\,\,b_{0}+b_{1}x+b_{2}x^{2}\\\hline (a_{0}+b_{0})+(a_{1}+b_{1})x+(a_{2}+b_{2})x^{2}\end{array}}\quad \longleftrightarrow \quad {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}}$

and scalar multiplication corresponds also.

${\displaystyle r\cdot (a_{0}+a_{1}x+a_{2}x^{2})=(ra_{0})+(ra_{1})x+(ra_{2})x^{2}\quad \longleftrightarrow \quad r\cdot {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}ra_{0}\\ra_{1}\\ra_{2}\end{pmatrix}}}$
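
Again we can spot-check the correspondence by computation. This sketch models a member of ${\displaystyle {\mathcal {P}}_{2}}$ by its coefficient triple and verifies, by sampling points, that adding polynomials pointwise agrees with adding their coefficient vectors; the helper names are ours.

```python
# Model a0 + a1*x + a2*x^2 by the tuple (a0, a1, a2) and check by sampling
# that the natural correspondence with R^3 preserves both operations.

def poly_eval(coeffs, x):
    a0, a1, a2 = coeffs
    return a0 + a1 * x + a2 * x ** 2

def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(r, u):
    return tuple(r * a for a in u)

p, q = (1, 2, 3), (4, 0, -1)   # 1 + 2x + 3x^2  and  4 - x^2
r = 7

for x in (-2, 0, 1, 3):
    # Sum of coefficient vectors evaluates to the sum of the polynomials.
    assert poly_eval(vec_add(p, q), x) == poly_eval(p, x) + poly_eval(q, x)
    # Scaled coefficient vector evaluates to the scaled polynomial.
    assert poly_eval(vec_scale(r, p), x) == r * poly_eval(p, x)
```
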

Definition 1.3

An isomorphism between two vector spaces ${\displaystyle V}$ and ${\displaystyle W}$ is a map ${\displaystyle f:V\to W}$ that

1. is a correspondence: ${\displaystyle f}$ is one-to-one and onto;[1]
2. preserves structure: if ${\displaystyle {\vec {v}}_{1},{\vec {v}}_{2}\in V}$ then
${\displaystyle f({\vec {v}}_{1}+{\vec {v}}_{2})=f({\vec {v}}_{1})+f({\vec {v}}_{2})}$
and if ${\displaystyle {\vec {v}}\in V}$ and ${\displaystyle r\in \mathbb {R} }$ then
${\displaystyle f(r{\vec {v}})=r\,f({\vec {v}})}$

(we write ${\displaystyle V\cong W}$, read "${\displaystyle V}$ is isomorphic to ${\displaystyle W}$", when such a map exists).

("Morphism" means map, so "isomorphism" means a map expressing sameness.)

Example 1.4

The vector space ${\displaystyle G=\{c_{1}\cos \theta +c_{2}\sin \theta \,{\big |}\,c_{1},c_{2}\in \mathbb {R} \}}$ of functions of ${\displaystyle \theta }$ is isomorphic to the vector space ${\displaystyle \mathbb {R} ^{2}}$ under this map.

${\displaystyle c_{1}\cos \theta +c_{2}\sin \theta {\stackrel {f}{\longmapsto }}{\begin{pmatrix}c_{1}\\c_{2}\end{pmatrix}}}$

We will check this by going through the conditions in the definition.

We will first verify condition 1, that the map is a correspondence between the sets underlying the spaces.

To establish that ${\displaystyle f}$ is one-to-one, we must prove that ${\displaystyle f({\vec {a}})=f({\vec {b}})}$ only when ${\displaystyle {\vec {a}}={\vec {b}}}$. If

${\displaystyle f(a_{1}\cos \theta +a_{2}\sin \theta )=f(b_{1}\cos \theta +b_{2}\sin \theta )}$

then, by the definition of ${\displaystyle f}$,

${\displaystyle {\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}}$

from which we can conclude that ${\displaystyle a_{1}=b_{1}}$ and ${\displaystyle a_{2}=b_{2}}$ because column vectors are equal only when they have equal components. We've proved that ${\displaystyle f({\vec {a}})=f({\vec {b}})}$ implies that ${\displaystyle {\vec {a}}={\vec {b}}}$, which shows that ${\displaystyle f}$ is one-to-one.

To check that ${\displaystyle f}$ is onto we must check that any member of the codomain ${\displaystyle \mathbb {R} ^{2}}$ is the image of some member of the domain ${\displaystyle G}$. But that's clear—any

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}}$

is the image under ${\displaystyle f}$ of ${\displaystyle x\cos \theta +y\sin \theta \in G}$.

Next we will verify condition (2), that ${\displaystyle f}$ preserves structure.

This computation shows that ${\displaystyle f}$ preserves addition.

${\displaystyle f{\bigl (}\,(a_{1}\cos \theta +a_{2}\sin \theta )+(b_{1}\cos \theta +b_{2}\sin \theta )\,{\bigr )}}$
${\displaystyle {\begin{array}{rl}&=f{\bigl (}\,(a_{1}+b_{1})\cos \theta +(a_{2}+b_{2})\sin \theta \,{\bigr )}\\&={\begin{pmatrix}a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}\\&={\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}\\&=f(a_{1}\cos \theta +a_{2}\sin \theta )+f(b_{1}\cos \theta +b_{2}\sin \theta )\end{array}}}$

A similar computation shows that ${\displaystyle f}$ preserves scalar multiplication.

${\displaystyle {\begin{array}{rl}f{\bigl (}\,r\cdot (a_{1}\cos \theta +a_{2}\sin \theta )\,{\bigr )}&=f(\,ra_{1}\cos \theta +ra_{2}\sin \theta \,)\\&={\begin{pmatrix}ra_{1}\\ra_{2}\end{pmatrix}}\\&=r\cdot {\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}\\&=r\cdot \,f(a_{1}\cos \theta +a_{2}\sin \theta )\end{array}}}$

With that, conditions (1) and (2) are verified, so we know that ${\displaystyle f}$ is an isomorphism and we can say that the spaces are isomorphic: ${\displaystyle G\cong \mathbb {R} ^{2}}$.
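
The structure-preservation computations above can also be checked numerically, by evaluating the functions at sample angles. In this sketch a member ${\displaystyle c_{1}\cos \theta +c_{2}\sin \theta }$ of ${\displaystyle G}$ is represented by the pair ${\displaystyle (c_{1},c_{2})}$, which is exactly the map ${\displaystyle f}$; the helper `g_eval` is our own name.

```python
import math

# Represent c1*cos(theta) + c2*sin(theta) in G by the pair (c1, c2), i.e.,
# by its image under f. Check by sampling that pointwise operations on the
# functions match componentwise operations on the pairs.

def g_eval(pair, theta):
    c1, c2 = pair
    return c1 * math.cos(theta) + c2 * math.sin(theta)

a, b = (2.0, -1.0), (0.5, 3.0)
r = 4.0

for theta in (0.0, 0.7, 1.5, 3.0):
    lhs = g_eval(a, theta) + g_eval(b, theta)          # sum of the functions
    rhs = g_eval((a[0] + b[0], a[1] + b[1]), theta)    # sum of the pairs
    assert math.isclose(lhs, rhs)
    # Scalar multiplication corresponds as well.
    assert math.isclose(r * g_eval(a, theta), g_eval((r * a[0], r * a[1]), theta))
```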

Example 1.5

Let ${\displaystyle V}$ be the space ${\displaystyle \{c_{1}x+c_{2}y+c_{3}z\,{\big |}\,c_{1},c_{2},c_{3}\in \mathbb {R} \}}$ of linear combinations of three variables ${\displaystyle x}$, ${\displaystyle y}$, and ${\displaystyle z}$, under the natural addition and scalar multiplication operations. Then ${\displaystyle V}$ is isomorphic to ${\displaystyle {\mathcal {P}}_{2}}$, the space of polynomials of degree two or less.

To show this we will produce an isomorphism. There is more than one possibility; for instance, here are four.

${\displaystyle {\begin{array}{c}c_{1}x+c_{2}y+c_{3}z\end{array}}\quad {\begin{array}{rl}{\stackrel {f_{1}}{\longmapsto }}&c_{1}+c_{2}x+c_{3}x^{2}\\{\stackrel {f_{2}}{\longmapsto }}&c_{2}+c_{3}x+c_{1}x^{2}\\{\stackrel {f_{3}}{\longmapsto }}&-c_{1}-c_{2}x-c_{3}x^{2}\\{\stackrel {f_{4}}{\longmapsto }}&c_{1}+(c_{1}+c_{2})x+(c_{1}+c_{3})x^{2}\end{array}}}$

The first map is the most natural correspondence in that it simply carries the coefficients over. However, below we shall verify that the second one is an isomorphism, to underline that there are isomorphisms other than the obvious one (showing that ${\displaystyle f_{1}}$ is an isomorphism is Problem 3).

To show that ${\displaystyle f_{2}}$ is one-to-one, we will prove that if ${\displaystyle f_{2}(c_{1}x+c_{2}y+c_{3}z)=f_{2}(d_{1}x+d_{2}y+d_{3}z)}$ then ${\displaystyle c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z}$. The assumption that ${\displaystyle f_{2}(c_{1}x+c_{2}y+c_{3}z)=f_{2}(d_{1}x+d_{2}y+d_{3}z)}$ gives, by the definition of ${\displaystyle f_{2}}$, that ${\displaystyle c_{2}+c_{3}x+c_{1}x^{2}=d_{2}+d_{3}x+d_{1}x^{2}}$. Equal polynomials have equal coefficients, so ${\displaystyle c_{2}=d_{2}}$, ${\displaystyle c_{3}=d_{3}}$, and ${\displaystyle c_{1}=d_{1}}$. Thus ${\displaystyle f_{2}(c_{1}x+c_{2}y+c_{3}z)=f_{2}(d_{1}x+d_{2}y+d_{3}z)}$ implies that ${\displaystyle c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z}$ and therefore ${\displaystyle f_{2}}$ is one-to-one.

The map ${\displaystyle f_{2}}$ is onto because any member ${\displaystyle a+bx+cx^{2}}$ of the codomain is the image of some member of the domain, namely it is the image of ${\displaystyle cx+ay+bz}$. For instance, ${\displaystyle 2+3x-4x^{2}}$ is ${\displaystyle f_{2}(-4x+2y+3z)}$.

The computations for structure preservation are like those in the prior example. This map preserves addition

${\displaystyle f_{2}{\bigl (}(c_{1}x+c_{2}y+c_{3}z)+(d_{1}x+d_{2}y+d_{3}z){\bigr )}}$
${\displaystyle {\begin{array}{rl}&=f_{2}{\bigl (}(c_{1}+d_{1})x+(c_{2}+d_{2})y+(c_{3}+d_{3})z{\bigr )}\\&=(c_{2}+d_{2})+(c_{3}+d_{3})x+(c_{1}+d_{1})x^{2}\\&=(c_{2}+c_{3}x+c_{1}x^{2})+(d_{2}+d_{3}x+d_{1}x^{2})\\&=f_{2}(c_{1}x+c_{2}y+c_{3}z)+f_{2}(d_{1}x+d_{2}y+d_{3}z)\end{array}}}$

and scalar multiplication.

${\displaystyle {\begin{array}{rl}f_{2}{\bigl (}r\cdot (c_{1}x+c_{2}y+c_{3}z){\bigr )}&=f_{2}(rc_{1}x+rc_{2}y+rc_{3}z)\\&=rc_{2}+rc_{3}x+rc_{1}x^{2}\\&=r\cdot (c_{2}+c_{3}x+c_{1}x^{2})\\&=r\cdot \,f_{2}(c_{1}x+c_{2}y+c_{3}z)\end{array}}}$

Thus ${\displaystyle f_{2}}$ is an isomorphism and we write ${\displaystyle V\cong {\mathcal {P}}_{2}}$.
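
Because ${\displaystyle f_{2}}$ merely permutes the three coefficients, it has an explicit inverse, which is another way to see that it is one-to-one and onto. The sketch below writes ${\displaystyle f_{2}}$ on coefficient triples; the function names are ours.

```python
# The map f2 of Example 1.5 on coefficient triples: the domain element
# c1*x + c2*y + c3*z becomes (c1, c2, c3), and its image
# c2 + c3*x + c1*x^2 becomes (c2, c3, c1). An explicit inverse shows
# that f2 is a correspondence.

def f2(v):
    c1, c2, c3 = v
    return (c2, c3, c1)

def f2_inv(p):
    a0, a1, a2 = p            # the polynomial a0 + a1*x + a2*x^2
    return (a2, a0, a1)

v, w = (1, 2, 3), (-4, 2, 3)

assert f2_inv(f2(v)) == v and f2(f2_inv(v)) == v   # two-sided inverse
assert f2((-4, 2, 3)) == (2, 3, -4)                # 2 + 3x - 4x^2, as in the text

# Structure preservation: f2 of a sum is the sum of the images.
s = tuple(a + b for a, b in zip(v, w))
assert f2(s) == tuple(a + b for a, b in zip(f2(v), f2(w)))
```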

We are sometimes interested in an isomorphism of a space with itself, called an automorphism. An identity map is an automorphism. The next two examples show that there are others.

Example 1.6

A dilation map ${\displaystyle d_{s}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that multiplies all vectors by a nonzero scalar ${\displaystyle s}$ is an automorphism of ${\displaystyle \mathbb {R} ^{2}}$.

A rotation or turning map ${\displaystyle t_{\theta }:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that rotates all vectors through an angle ${\displaystyle \theta }$ is an automorphism.

A third type of automorphism of ${\displaystyle \mathbb {R} ^{2}}$ is a map ${\displaystyle f_{\ell }:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that flips or reflects all vectors over a line ${\displaystyle \ell }$ through the origin.

See Problem 20.

Example 1.7

Consider the space ${\displaystyle {\mathcal {P}}_{5}}$ of polynomials of degree 5 or less and the map ${\displaystyle f}$ that sends a polynomial ${\displaystyle p(x)}$ to ${\displaystyle p(x-1)}$. For instance, under this map ${\displaystyle x^{2}\mapsto (x-1)^{2}=x^{2}-2x+1}$ and ${\displaystyle x^{3}+2x\mapsto (x-1)^{3}+2(x-1)=x^{3}-3x^{2}+5x-3}$. This map is an automorphism of this space; the check is Problem 12.
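
The images under this map can be computed mechanically by expanding ${\displaystyle p(x-1)}$ with the binomial theorem. The following sketch does that on coefficient lists (our own representation, with `coeffs[k]` the coefficient of ${\displaystyle x^{k}}$) and reproduces the two computations above.

```python
from math import comb

# Coefficients of p(x - 1) given the coefficients of p(x), by expanding
# each term a * (x - 1)^n with the binomial theorem. Degree <= 5 is not
# special; the helper works for any coefficient list.

def shift(coeffs):
    out = [0] * len(coeffs)
    for n, a in enumerate(coeffs):
        # a * (x - 1)^n contributes a * C(n, k) * (-1)^(n - k) to x^k
        for k in range(n + 1):
            out[k] += a * comb(n, k) * (-1) ** (n - k)
    return out

# x^2 |-> (x - 1)^2 = x^2 - 2x + 1
assert shift([0, 0, 1]) == [1, -2, 1]

# x^3 + 2x |-> x^3 - 3x^2 + 5x - 3, matching the text
assert shift([0, 2, 0, 1]) == [-3, 5, -3, 1]
```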

This isomorphism of ${\displaystyle {\mathcal {P}}_{5}}$ with itself does more than just tell us that the space is "the same" as itself. It gives us some insight into the space's structure. For instance, consider a family of parabolas, graphs of members of ${\displaystyle {\mathcal {P}}_{5}}$, each with vertex at ${\displaystyle y=-1}$: the left-most one has zeroes at ${\displaystyle -2.25}$ and ${\displaystyle -1.75}$, the next one has zeroes at ${\displaystyle -1.25}$ and ${\displaystyle -0.75}$, etc.

Geometrically, the substitution of ${\displaystyle x-1}$ for ${\displaystyle x}$ in any function's argument shifts its graph to the right by one. Thus, if ${\displaystyle p_{0}}$ is one of these parabolas then ${\displaystyle f(p_{0})}$ is the next parabola over, and ${\displaystyle f}$'s action is to shift all of the parabolas to the right by one. Notice that the picture before ${\displaystyle f}$ is applied is the same as the picture after ${\displaystyle f}$ is applied, because while each parabola moves to the right, another one comes in from the left to take its place. This also holds true for cubics, etc. So the automorphism ${\displaystyle f}$ gives us the insight that ${\displaystyle {\mathcal {P}}_{5}}$ has a certain horizontal homogeneity; this space looks the same near ${\displaystyle x=1}$ as near ${\displaystyle x=0}$.

As described in the preamble to this section, we will next produce some results supporting the contention that the definition of isomorphism above captures our intuition of vector spaces being the same.

Of course the definition itself is persuasive: a vector space consists of two components, a set and some structure, and the definition simply requires that the sets correspond and that the structures correspond also. Also persuasive are the examples above. In particular, Example 1.1, which gives an isomorphism between the space of two-wide row vectors and the space of two-tall column vectors, dramatizes our intuition that isomorphic spaces are the same in all relevant respects. Sometimes people say, where ${\displaystyle V\cong W}$, that "${\displaystyle W}$ is just ${\displaystyle V}$ painted green"—any differences are merely cosmetic.

Further support for the definition, in case it is needed, is provided by the following results that, taken together, suggest that all the things of interest in a vector space correspond under an isomorphism. Since we studied vector spaces to study linear combinations, "of interest" means "pertaining to linear combinations". Not of interest is the way that the vectors are presented typographically (or their color!).

As an example, although the definition of isomorphism doesn't explicitly say that the zero vectors must correspond, it is a consequence of that definition.

Lemma 1.8

An isomorphism maps a zero vector to a zero vector.

Proof

Where ${\displaystyle f:V\to W}$ is an isomorphism, fix any ${\displaystyle {\vec {v}}\in V}$. Then ${\displaystyle f({\vec {0}}_{V})=f(0\cdot {\vec {v}})=0\cdot f({\vec {v}})={\vec {0}}_{W}}$.

The definition of isomorphism requires that sums of two vectors correspond and that so do scalar multiples. We can extend that to say that all linear combinations correspond.

Lemma 1.9

For any map ${\displaystyle f:V\to W}$ between vector spaces these statements are equivalent.

1. ${\displaystyle f}$ preserves structure
${\displaystyle f({\vec {v}}_{1}+{\vec {v}}_{2})=f({\vec {v}}_{1})+f({\vec {v}}_{2})\quad {\text{and}}\quad f(c{\vec {v}})=c\,f({\vec {v}})}$
2. ${\displaystyle f}$ preserves linear combinations of two vectors
${\displaystyle f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=c_{1}f({\vec {v}}_{1})+c_{2}f({\vec {v}}_{2})}$
3. ${\displaystyle f}$ preserves linear combinations of any finite number of vectors
${\displaystyle f(c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n})=c_{1}f({\vec {v}}_{1})+\dots +c_{n}f({\vec {v}}_{n})}$
Proof

Since the implications ${\displaystyle 3\!\implies \!2}$ and ${\displaystyle 2\!\implies \!1}$ are clear, we need only show that ${\displaystyle 1\!\implies \!3}$. Assume statement 1. We will prove statement 3 by induction on the number of summands ${\displaystyle n}$.

The one-summand base case, that ${\displaystyle f(c{\vec {v}}_{1})=c\,f({\vec {v}}_{1})}$, is covered by the scalar-multiplication half of statement 1.

For the inductive step assume that statement 3 holds whenever there are ${\displaystyle k}$ or fewer summands, that is, whenever ${\displaystyle n=1}$, or ${\displaystyle n=2}$, ..., or ${\displaystyle n=k}$. Consider the ${\displaystyle k+1}$-summand case. The first half of statement 1 gives

${\displaystyle f(c_{1}{\vec {v}}_{1}+\dots +c_{k}{\vec {v}}_{k}+c_{k+1}{\vec {v}}_{k+1})=f(c_{1}{\vec {v}}_{1}+\dots +c_{k}{\vec {v}}_{k})+f(c_{k+1}{\vec {v}}_{k+1})}$

by breaking the sum along the final "${\displaystyle +}$". Then the inductive hypothesis lets us break up the ${\displaystyle k}$-term sum.

${\displaystyle =f(c_{1}{\vec {v}}_{1})+\dots +f(c_{k}{\vec {v}}_{k})+f(c_{k+1}{\vec {v}}_{k+1})}$

Finally, the second half of statement 1 gives

${\displaystyle =c_{1}\,f({\vec {v}}_{1})+\dots +c_{k}\,f({\vec {v}}_{k})+c_{k+1}\,f({\vec {v}}_{k+1})}$

when applied ${\displaystyle k+1}$ times.
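
The lemma's second condition is a convenient one-line check that a map preserves structure. As an illustration, the sketch below runs that check on a sample linear map from ${\displaystyle \mathbb {R} ^{3}}$ to ${\displaystyle \mathbb {R} ^{2}}$; the particular map and the helper names are ours, not from the text.

```python
# Check Lemma 1.9's second condition, f(c1*v1 + c2*v2) = c1*f(v1) + c2*f(v2),
# for a sample linear map (an illustration only).

def f(v):
    x, y, z = v
    return (x + 2 * y, 3 * z - x)

def lincomb(c1, v1, c2, v2):
    """The linear combination c1*v1 + c2*v2, componentwise."""
    return tuple(c1 * a + c2 * b for a, b in zip(v1, v2))

v1, v2 = (1, 0, 2), (-1, 4, 3)

for c1, c2 in [(0, 0), (1, 1), (2, -3), (5, 7)]:
    # Mapping the combination equals combining the images.
    assert f(lincomb(c1, v1, c2, v2)) == lincomb(c1, f(v1), c2, f(v2))
```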

In addition to adding to the intuition that the definition of isomorphism does indeed preserve the things of interest in a vector space, that lemma's second item is an especially handy way of checking that a map preserves structure.

We close with a summary. The material in this section augments the chapter on Vector Spaces. There, after giving the definition of a vector space, we informally looked at what different things can happen. Here, we defined the relation "${\displaystyle \cong }$" between vector spaces and we have argued that it is the right way to split the collection of vector spaces into cases because it preserves the features of interest in a vector space—in particular, it preserves linear combinations. That is, we have now said precisely what we mean by "the same", and by "different", and so we have precisely classified the vector spaces.

## Exercises

This exercise is recommended for all readers.
Problem 1

Verify, using Example 1.4 as a model, that the two correspondences given before the definition are isomorphisms.

This exercise is recommended for all readers.
Problem 2

For the map ${\displaystyle f:{\mathcal {P}}_{1}\to \mathbb {R} ^{2}}$ given by

${\displaystyle a+bx{\stackrel {f}{\longmapsto }}{\begin{pmatrix}a-b\\b\end{pmatrix}}}$

Find the image of each of these elements of the domain.

1. ${\displaystyle 3-2x}$
2. ${\displaystyle 2+2x}$
3. ${\displaystyle x}$

Show that this map is an isomorphism.

Problem 3

Show that the natural map ${\displaystyle f_{1}}$ from Example 1.5 is an isomorphism.

This exercise is recommended for all readers.
Problem 4

Decide whether each map is an isomorphism (if it is an isomorphism then prove it and if it isn't then state a condition that it fails to satisfy).

1. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} }$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto ad-bc}$
2. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} ^{4}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto {\begin{pmatrix}a+b+c+d\\a+b+c\\a+b\\a\end{pmatrix}}}$
3. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a)x^{2}+ax^{3}}$
4. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a+1)x^{2}+ax^{3}}$
Problem 5

Show that the map ${\displaystyle f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}}$ given by ${\displaystyle f(x)=x^{3}}$ is one-to-one and onto. Is it an isomorphism?

This exercise is recommended for all readers.
Problem 6

Refer to Example 1.1. Produce two more isomorphisms (of course, that they satisfy the conditions in the definition of isomorphism must be verified).

Problem 7

Refer to Example 1.2. Produce two more isomorphisms (and verify that they satisfy the conditions).

This exercise is recommended for all readers.
Problem 8

Show that, although ${\displaystyle \mathbb {R} ^{2}}$ is not itself a subspace of ${\displaystyle \mathbb {R} ^{3}}$, it is isomorphic to the ${\displaystyle xy}$-plane subspace of ${\displaystyle \mathbb {R} ^{3}}$.

Problem 9

Find two isomorphisms between ${\displaystyle \mathbb {R} ^{16}}$ and ${\displaystyle {\mathcal {M}}_{4\!\times \!4}}$.

This exercise is recommended for all readers.
Problem 10

For what ${\displaystyle k}$ is ${\displaystyle {\mathcal {M}}_{m\!\times \!n}}$ isomorphic to ${\displaystyle \mathbb {R} ^{k}}$?

Problem 11

For what ${\displaystyle k}$ is ${\displaystyle {\mathcal {P}}_{k}}$ isomorphic to ${\displaystyle \mathbb {R} ^{n}}$?

Problem 12

Prove that the map in Example 1.7, from ${\displaystyle {\mathcal {P}}_{5}}$ to ${\displaystyle {\mathcal {P}}_{5}}$ given by ${\displaystyle p(x)\mapsto p(x-1)}$, is a vector space isomorphism.

Problem 13

Why, in Lemma 1.8, must there be a ${\displaystyle {\vec {v}}\in V}$? That is, why must ${\displaystyle V}$ be nonempty?

Problem 14

Are any two trivial spaces isomorphic?

Problem 15

In the proof of Lemma 1.9, what about the zero-summands case (that is, if ${\displaystyle n}$ is zero)?

Problem 16

Show that any isomorphism ${\displaystyle f:{\mathcal {P}}_{0}\to \mathbb {R} ^{1}}$ has the form ${\displaystyle a\mapsto ka}$ for some nonzero real number ${\displaystyle k}$.

This exercise is recommended for all readers.
Problem 17

These prove that isomorphism is an equivalence relation.

1. Show that the identity map ${\displaystyle {\mbox{id}}:V\to V}$ is an isomorphism. Thus, any vector space is isomorphic to itself.
2. Show that if ${\displaystyle f:V\to W}$ is an isomorphism then so is its inverse ${\displaystyle f^{-1}:W\to V}$. Thus, if ${\displaystyle V}$ is isomorphic to ${\displaystyle W}$ then also ${\displaystyle W}$ is isomorphic to ${\displaystyle V}$.
3. Show that a composition of isomorphisms is an isomorphism: if ${\displaystyle f:V\to W}$ is an isomorphism and ${\displaystyle g:W\to U}$ is an isomorphism then so also is ${\displaystyle g\circ f:V\to U}$. Thus, if ${\displaystyle V}$ is isomorphic to ${\displaystyle W}$ and ${\displaystyle W}$ is isomorphic to ${\displaystyle U}$, then also ${\displaystyle V}$ is isomorphic to ${\displaystyle U}$.
Problem 18

Suppose that ${\displaystyle f:V\to W}$ preserves structure. Show that ${\displaystyle f}$ is one-to-one if and only if the unique member of ${\displaystyle V}$ mapped by ${\displaystyle f}$ to ${\displaystyle {\vec {0}}_{W}}$ is ${\displaystyle {\vec {0}}_{V}}$.

Problem 19

Suppose that ${\displaystyle f:V\to W}$ is an isomorphism. Prove that the set ${\displaystyle \{{\vec {v}}_{1},\dots ,{\vec {v}}_{k}\}\subseteq V}$ is linearly dependent if and only if the set of images ${\displaystyle \{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{k})\}\subseteq W}$ is linearly dependent.

This exercise is recommended for all readers.
Problem 20

Show that each type of map from Example 1.6 is an automorphism.

1. Dilation ${\displaystyle d_{s}}$ by a nonzero scalar ${\displaystyle s}$.
2. Rotation ${\displaystyle t_{\theta }}$ through an angle ${\displaystyle \theta }$.
3. Reflection ${\displaystyle f_{\ell }}$ over a line through the origin.

Hint. For the second and third items, polar coordinates are useful.

Problem 21

Produce an automorphism of ${\displaystyle {\mathcal {P}}_{2}}$ other than the identity map, and other than a shift map ${\displaystyle p(x)\mapsto p(x-k)}$.

Problem 22
1. Show that a function ${\displaystyle f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}}$ is an automorphism if and only if it has the form ${\displaystyle x\mapsto kx}$ for some ${\displaystyle k\neq 0}$.
2. Let ${\displaystyle f}$ be an automorphism of ${\displaystyle \mathbb {R} ^{1}}$ such that ${\displaystyle f(3)=7}$. Find ${\displaystyle f(-2)}$.
3. Show that a function ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ is an automorphism if and only if it has the form
${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}}$
for some ${\displaystyle a,b,c,d\in \mathbb {R} }$ with ${\displaystyle ad-bc\neq 0}$. Hint. Exercises in prior subsections have shown that
${\displaystyle {\begin{pmatrix}b\\d\end{pmatrix}}{\text{ is not a multiple of }}{\begin{pmatrix}a\\c\end{pmatrix}}}$
if and only if ${\displaystyle ad-bc\neq 0}$.
4. Let ${\displaystyle f}$ be an automorphism of ${\displaystyle \mathbb {R} ^{2}}$ with
${\displaystyle f({\begin{pmatrix}1\\3\end{pmatrix}})={\begin{pmatrix}2\\-1\end{pmatrix}}\quad {\text{and}}\quad f({\begin{pmatrix}1\\4\end{pmatrix}})={\begin{pmatrix}0\\1\end{pmatrix}}.}$
Find
${\displaystyle f({\begin{pmatrix}0\\-1\end{pmatrix}}).}$
Problem 23

Refer to Lemma 1.8 and Lemma 1.9. Find two more things preserved by isomorphism.

Problem 24

We show that isomorphisms can be tailored to fit in that, sometimes, given vectors in the domain and in the range we can produce an isomorphism associating those vectors.

1. Let ${\displaystyle B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2},{\vec {\beta }}_{3}\rangle }$ be a basis for ${\displaystyle {\mathcal {P}}_{2}}$ so that any ${\displaystyle {\vec {p}}\in {\mathcal {P}}_{2}}$ has a unique representation as ${\displaystyle {\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}}$, which we denote in this way.
${\displaystyle {\rm {Rep}}_{B}({\vec {p}})={\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}}$
Show that the ${\displaystyle {\rm {Rep}}_{B}(\cdot )}$ operation is a function from ${\displaystyle {\mathcal {P}}_{2}}$ to ${\displaystyle \mathbb {R} ^{3}}$ (this entails showing that with every domain vector ${\displaystyle {\vec {v}}\in {\mathcal {P}}_{2}}$ there is an associated image vector in ${\displaystyle \mathbb {R} ^{3}}$, and further, that with every domain vector ${\displaystyle {\vec {v}}\in {\mathcal {P}}_{2}}$ there is at most one associated image vector).
2. Show that this ${\displaystyle {\rm {Rep}}_{B}(\cdot )}$ function is one-to-one and onto.
3. Show that it preserves structure.
4. Produce an isomorphism from ${\displaystyle {\mathcal {P}}_{2}}$ to ${\displaystyle \mathbb {R} ^{3}}$ that fits these specifications.
${\displaystyle x+x^{2}\mapsto {\begin{pmatrix}1\\0\\0\end{pmatrix}}\quad {\text{and}}\quad 1-x\mapsto {\begin{pmatrix}0\\1\\0\end{pmatrix}}}$
Problem 25

Prove that a space is ${\displaystyle n}$-dimensional if and only if it is isomorphic to ${\displaystyle \mathbb {R} ^{n}}$. Hint. Fix a basis ${\displaystyle B}$ for the space and consider the map sending a vector over to its representation with respect to ${\displaystyle B}$.

Problem 26

(Requires the subsection on Combining Subspaces, which is optional.) Let ${\displaystyle U}$ and ${\displaystyle W}$ be vector spaces. Define a new vector space, consisting of the set ${\displaystyle U\times W=\{({\vec {u}},{\vec {w}})\,{\big |}\,{\vec {u}}\in U{\text{ and }}{\vec {w}}\in W\}}$ along with these operations.

${\displaystyle ({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\quad {\text{and}}\quad r\cdot ({\vec {u}},{\vec {w}})=(r{\vec {u}},r{\vec {w}})}$

This is a vector space, the external direct sum of ${\displaystyle U}$ and ${\displaystyle W}$.

1. Check that it is a vector space.
2. Find a basis for, and the dimension of, the external direct sum ${\displaystyle {\mathcal {P}}_{2}\times \mathbb {R} ^{2}}$.
3. What is the relationship among ${\displaystyle \dim(U)}$, ${\displaystyle \dim(W)}$, and ${\displaystyle \dim(U\times W)}$?
4. Suppose that ${\displaystyle U}$ and ${\displaystyle W}$ are subspaces of a vector space ${\displaystyle V}$ such that ${\displaystyle V=U\oplus W}$ (in this case we say that ${\displaystyle V}$ is the internal direct sum of ${\displaystyle U}$ and ${\displaystyle W}$). Show that the map ${\displaystyle f:U\times W\to V}$ given by
${\displaystyle ({\vec {u}},{\vec {w}}){\stackrel {f}{\longmapsto }}{\vec {u}}+{\vec {w}}}$
is an isomorphism. Thus if the internal direct sum is defined then the internal and external direct sums are isomorphic.


## Footnotes

1. More information on one-to-one and onto maps is in the appendix.