 This exercise is recommended for all readers.
 Problem 1
Verify, using Example 1.4 as a model, that the two correspondences given before the definition are isomorphisms.
 Example 1.1
 Example 1.2
 Answer
 Call the map $f$.
 ${\begin{pmatrix}a&b\end{pmatrix}}{\stackrel {f}{\longmapsto }}{\begin{pmatrix}a\\b\end{pmatrix}}$
It is one-to-one because if $f$ sends two members of the domain to the same image, that is, if $f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)=f\left({\begin{pmatrix}c&d\end{pmatrix}}\right)$, then the definition of $f$ gives that
 ${\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}c\\d\end{pmatrix}}$
and since column vectors are equal only if they have equal components, we have that $a=c$ and that $b=d$. Thus, if $f$ maps two row vectors from the domain to the same column vector then the two row vectors are equal: ${\begin{pmatrix}a&b\end{pmatrix}}={\begin{pmatrix}c&d\end{pmatrix}}$.
To show that $f$ is onto we must show that any member of the codomain $\mathbb {R} ^{2}$ is the image under $f$ of some row vector. That's easy;
 ${\begin{pmatrix}x\\y\end{pmatrix}}$
is $f\left({\begin{pmatrix}x&y\end{pmatrix}}\right)$.
The computation for preservation of addition is this.
 $f\left({\begin{pmatrix}a&b\end{pmatrix}}+{\begin{pmatrix}c&d\end{pmatrix}}\right)=f\left({\begin{pmatrix}a+c&b+d\end{pmatrix}}\right)={\begin{pmatrix}a+c\\b+d\end{pmatrix}}={\begin{pmatrix}a\\b\end{pmatrix}}+{\begin{pmatrix}c\\d\end{pmatrix}}=f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)+f\left({\begin{pmatrix}c&d\end{pmatrix}}\right)$
The computation for preservation of scalar multiplication is similar.
 $f\left(r\cdot {\begin{pmatrix}a&b\end{pmatrix}}\right)=f\left({\begin{pmatrix}ra&rb\end{pmatrix}}\right)={\begin{pmatrix}ra\\rb\end{pmatrix}}=r\cdot {\begin{pmatrix}a\\b\end{pmatrix}}=r\cdot f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)$
 Denote the map from Example 1.2 by $f$. To show that it is one-to-one, assume that $f(a_{0}+a_{1}x+a_{2}x^{2})=f(b_{0}+b_{1}x+b_{2}x^{2})$. Then by the definition of the function,
 ${\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}$
and so $a_{0}=b_{0}$ and $a_{1}=b_{1}$ and $a_{2}=b_{2}$. Thus $a_{0}+a_{1}x+a_{2}x^{2}=b_{0}+b_{1}x+b_{2}x^{2}$, and consequently $f$ is one-to-one.
The function $f$ is onto because there is a polynomial sent to
 ${\begin{pmatrix}a\\b\\c\end{pmatrix}}$
by $f$, namely, $a+bx+cx^{2}$.
As for structure, this shows that $f$ preserves addition
 ${\begin{array}{rl}f\left(\,(a_{0}+a_{1}x+a_{2}x^{2})+(b_{0}+b_{1}x+b_{2}x^{2})\,\right)&=f\left(\,(a_{0}+b_{0})+(a_{1}+b_{1})x+(a_{2}+b_{2})x^{2}\,\right)\\&={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}\\&={\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}\\&=f(a_{0}+a_{1}x+a_{2}x^{2})+f(b_{0}+b_{1}x+b_{2}x^{2})\end{array}}$
and this shows
 ${\begin{array}{rl}f(\,r(a_{0}+a_{1}x+a_{2}x^{2})\,)&=f(\,(ra_{0})+(ra_{1})x+(ra_{2})x^{2}\,)\\&={\begin{pmatrix}ra_{0}\\ra_{1}\\ra_{2}\end{pmatrix}}\\&=r\cdot {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}\\&=r\,f(a_{0}+a_{1}x+a_{2}x^{2})\end{array}}$
that it preserves scalar multiplication.
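The two coefficient-extraction checks above can be confirmed symbolically. This is a quick sketch using Python's sympy; the helper name `f` and the zero-padding logic are mine, not part of the text.

```python
import sympy as sym

x = sym.symbols('x')
a0, a1, a2, b0, b1, b2, r = sym.symbols('a0 a1 a2 b0 b1 b2 r')

def f(p):
    """Hypothetical helper: the map of Example 1.2,
    a0 + a1*x + a2*x^2 |-> (a0, a1, a2)."""
    coeffs = sym.Poly(sym.expand(p), x).all_coeffs()[::-1]  # ascending order
    return sym.Matrix(coeffs + [0]*(3 - len(coeffs)))       # pad to length 3

p = a0 + a1*x + a2*x**2
q = b0 + b1*x + b2*x**2

# Preservation of addition and of scalar multiplication.
assert (f(p + q) - f(p) - f(q)).applyfunc(sym.expand) == sym.zeros(3, 1)
assert (f(r*p) - r*f(p)).applyfunc(sym.expand) == sym.zeros(3, 1)
```

The same pattern, with a transpose in place of coefficient extraction, verifies the map of Example 1.1.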
 This exercise is recommended for all readers.
 Problem 3
Show that the natural map $f_{1}$ from
Example 1.5
is an isomorphism.
 Answer
To verify it is one-to-one, assume that $f_{1}(c_{1}x+c_{2}y+c_{3}z)=f_{1}(d_{1}x+d_{2}y+d_{3}z)$. Then $c_{1}+c_{2}x+c_{3}x^{2}=d_{1}+d_{2}x+d_{3}x^{2}$ by the definition of $f_{1}$. Members of ${\mathcal {P}}_{2}$ are equal only when they have the same coefficients, so this implies that $c_{1}=d_{1}$ and $c_{2}=d_{2}$ and $c_{3}=d_{3}$. Therefore $f_{1}(c_{1}x+c_{2}y+c_{3}z)=f_{1}(d_{1}x+d_{2}y+d_{3}z)$ implies that $c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z$, and so $f_{1}$ is one-to-one.
To verify that it is onto, consider an arbitrary member of the codomain $a_{1}+a_{2}x+a_{3}x^{2}$ and observe that it is indeed the image of a member of the domain, namely, it is $f_{1}(a_{1}x+a_{2}y+a_{3}z)$.
(For instance, $0+3x+6x^{2}=f_{1}(0x+3y+6z)$.)
The computation checking that $f_{1}$ preserves addition is this.
 ${\begin{array}{rl}f_{1}\left(\,(c_{1}x+c_{2}y+c_{3}z)+(d_{1}x+d_{2}y+d_{3}z)\,\right)&=f_{1}\left(\,(c_{1}+d_{1})x+(c_{2}+d_{2})y+(c_{3}+d_{3})z\,\right)\\&=(c_{1}+d_{1})+(c_{2}+d_{2})x+(c_{3}+d_{3})x^{2}\\&=(c_{1}+c_{2}x+c_{3}x^{2})+(d_{1}+d_{2}x+d_{3}x^{2})\\&=f_{1}(c_{1}x+c_{2}y+c_{3}z)+f_{1}(d_{1}x+d_{2}y+d_{3}z)\end{array}}$
The check that $f_{1}$ preserves scalar multiplication is this.
 ${\begin{array}{rl}f_{1}(\,r\cdot (c_{1}x+c_{2}y+c_{3}z)\,)&=f_{1}(\,(rc_{1})x+(rc_{2})y+(rc_{3})z\,)\\&=(rc_{1})+(rc_{2})x+(rc_{3})x^{2}\\&=r\cdot (c_{1}+c_{2}x+c_{3}x^{2})\\&=r\cdot f_{1}(c_{1}x+c_{2}y+c_{3}z)\end{array}}$
 This exercise is recommended for all readers.
 Problem 4
Decide whether each map is an isomorphism (if it is an isomorphism then prove it and if it isn't then state a condition that it fails to satisfy).
 $f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R}$ given by
 ${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto ad-bc$
 $f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} ^{4}$ given by
 ${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto {\begin{pmatrix}a+b+c+d\\a+b+c\\a+b\\a\end{pmatrix}}$
 $f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}$ given by
 ${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a)x^{2}+ax^{3}$
 $f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}$ given by
 ${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a+1)x^{2}+ax^{3}$
 Answer
 No; this map is not one-to-one. In particular, the matrix of all zeroes is mapped to the same image as the matrix of all ones.
 Yes, this is an isomorphism.
It is one-to-one:
 ${\text{if }}f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})=f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}}){\text{ then }}{\begin{pmatrix}a_{1}+b_{1}+c_{1}+d_{1}\\a_{1}+b_{1}+c_{1}\\a_{1}+b_{1}\\a_{1}\end{pmatrix}}={\begin{pmatrix}a_{2}+b_{2}+c_{2}+d_{2}\\a_{2}+b_{2}+c_{2}\\a_{2}+b_{2}\\a_{2}\end{pmatrix}}$
gives that $a_{1}=a_{2}$, and that $b_{1}=b_{2}$, and that $c_{1}=c_{2}$, and that $d_{1}=d_{2}$.
It is onto, since this shows
 ${\begin{pmatrix}x\\y\\z\\w\end{pmatrix}}=f({\begin{pmatrix}w&z-w\\y-z&x-y\end{pmatrix}})$
that any four-tall vector is the image of a $2\!\times \!2$ matrix.
Finally, it preserves combinations
 ${\begin{array}{rl}f(\,r_{1}\cdot {\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}}\,)&=f({\begin{pmatrix}r_{1}a_{1}+r_{2}a_{2}&r_{1}b_{1}+r_{2}b_{2}\\r_{1}c_{1}+r_{2}c_{2}&r_{1}d_{1}+r_{2}d_{2}\end{pmatrix}})\\&={\begin{pmatrix}r_{1}a_{1}+\dots +r_{2}d_{2}\\r_{1}a_{1}+\dots +r_{2}c_{2}\\r_{1}a_{1}+\dots +r_{2}b_{2}\\r_{1}a_{1}+r_{2}a_{2}\end{pmatrix}}\\&=r_{1}\cdot {\begin{pmatrix}a_{1}+\dots +d_{1}\\a_{1}+\dots +c_{1}\\a_{1}+b_{1}\\a_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}a_{2}+\dots +d_{2}\\a_{2}+\dots +c_{2}\\a_{2}+b_{2}\\a_{2}\end{pmatrix}}\\&=r_{1}\cdot f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})+r_{2}\cdot f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})\end{array}}$
and so item 2 of Lemma 1.9 shows that it preserves structure.
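Because this map is linear, it is captured by a $4\times 4$ matrix acting on $(a,b,c,d)$, so both the onto formula above and invertibility can be sanity-checked numerically. A numpy sketch (the names `M` and `Minv`, encoding the map and the claimed inverse, are mine):

```python
import numpy as np

# Matrix of the map (a,b,c,d) |-> (a+b+c+d, a+b+c, a+b, a) relative to
# the standard bases; a nonzero determinant confirms it is a correspondence.
M = np.array([[1, 1, 1, 1],
              [1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
assert abs(np.linalg.det(M)) > 1e-9

# The inverse read off from the onto argument:
# (x,y,z,w) |-> the matrix with a=w, b=z-w, c=y-z, d=x-y.
Minv = np.array([[0, 0, 0, 1],
                 [0, 0, 1, -1],
                 [0, 1, -1, 0],
                 [1, -1, 0, 0]], dtype=float)
assert np.allclose(Minv @ M, np.eye(4))
```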
 Yes, it is an isomorphism.
To show that it is one-to-one, we suppose that two members of the domain have the same image under $f$.
 $f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})=f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})$
This gives, by the definition of $f$, that $c_{1}+(d_{1}+c_{1})x+(b_{1}+a_{1})x^{2}+a_{1}x^{3}=c_{2}+(d_{2}+c_{2})x+(b_{2}+a_{2})x^{2}+a_{2}x^{3}$ and then the fact that polynomials are equal only when their coefficients are equal gives a set of linear equations
 ${\begin{array}{rl}c_{1}&=c_{2}\\d_{1}+c_{1}&=d_{2}+c_{2}\\b_{1}+a_{1}&=b_{2}+a_{2}\\a_{1}&=a_{2}\end{array}}$
that has only the solution $a_{1}=a_{2}$, $b_{1}=b_{2}$, $c_{1}=c_{2}$, and $d_{1}=d_{2}$.
To show that $f$ is onto, we note that $p+qx+rx^{2}+sx^{3}$ is the image under $f$ of this matrix.
 ${\begin{pmatrix}s&r-s\\p&q-p\end{pmatrix}}$
We can check that $f$ preserves structure by using item 2 of Lemma 1.9.
 ${\begin{array}{rl}f(r_{1}\cdot {\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})&=f({\begin{pmatrix}r_{1}a_{1}+r_{2}a_{2}&r_{1}b_{1}+r_{2}b_{2}\\r_{1}c_{1}+r_{2}c_{2}&r_{1}d_{1}+r_{2}d_{2}\end{pmatrix}})\\&={\begin{array}{rl}&(r_{1}c_{1}+r_{2}c_{2})+(r_{1}d_{1}+r_{2}d_{2}+r_{1}c_{1}+r_{2}c_{2})x\\&\,\quad +(r_{1}b_{1}+r_{2}b_{2}+r_{1}a_{1}+r_{2}a_{2})x^{2}+(r_{1}a_{1}+r_{2}a_{2})x^{3}\end{array}}\\&={\begin{array}{rl}&r_{1}\cdot \left(c_{1}+(d_{1}+c_{1})x+(b_{1}+a_{1})x^{2}+a_{1}x^{3}\right)\\&\,\quad +r_{2}\cdot \left(c_{2}+(d_{2}+c_{2})x+(b_{2}+a_{2})x^{2}+a_{2}x^{3}\right)\end{array}}\\&=r_{1}\cdot f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})+r_{2}\cdot f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})\end{array}}$
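A symbolic check of the onto argument, confirming that the displayed matrix really maps to $p+qx+rx^{2}+sx^{3}$. This is a sympy sketch; the helper name `f` is mine.

```python
import sympy as sym

x = sym.symbols('x')
p, q, r, s = sym.symbols('p q r s')

def f(a, b, c, d):
    # The map of this item: a 2x2 matrix's entries to a cubic polynomial.
    return c + (d + c)*x + (b + a)*x**2 + a*x**3

# The matrix proposed in the onto argument has entries a=s, b=r-s, c=p, d=q-p.
image = sym.expand(f(s, r - s, p, q - p))
assert image == sym.expand(p + q*x + r*x**2 + s*x**3)
```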
 No, this map does not preserve structure. For instance, it does not send the zero matrix to the zero polynomial.
 This exercise is recommended for all readers.
 Problem 6
Refer to Example 1.1. Produce two more isomorphisms (of course, that they satisfy the conditions in the definition of isomorphism must be verified).
 Answer
Many maps are possible. Here are two.
 ${\begin{pmatrix}a&b\end{pmatrix}}\mapsto {\begin{pmatrix}b\\a\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}a&b\end{pmatrix}}\mapsto {\begin{pmatrix}2a\\b\end{pmatrix}}$
The verifications are straightforward adaptations of the others above.
 Problem 7
Refer to Example 1.2. Produce two more isomorphisms (and verify that they satisfy the conditions).
 Answer
Here are two.
 $a_{0}+a_{1}x+a_{2}x^{2}\mapsto {\begin{pmatrix}a_{1}\\a_{0}\\a_{2}\end{pmatrix}}\quad {\text{and}}\quad a_{0}+a_{1}x+a_{2}x^{2}\mapsto {\begin{pmatrix}a_{0}+a_{1}\\a_{1}\\a_{2}\end{pmatrix}}$
Verification is straightforward (for the second, to show that it is onto, note that
 ${\begin{pmatrix}s\\t\\u\end{pmatrix}}$
is the image of $(s-t)+tx+ux^{2}$).
 This exercise is recommended for all readers.
 Problem 9
Find two isomorphisms between $\mathbb {R} ^{16}$ and ${\mathcal {M}}_{4\!\times \!4}$.
 Answer
Here are two:
 ${\begin{pmatrix}r_{1}\\r_{2}\\\vdots \\r_{16}\end{pmatrix}}\mapsto {\begin{pmatrix}r_{1}&r_{2}&\ldots \\&\\&&\ldots &r_{16}\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}r_{1}\\r_{2}\\\vdots \\r_{16}\end{pmatrix}}\mapsto {\begin{pmatrix}r_{1}&\\r_{2}&\\\vdots &&&\vdots \\&&&r_{16}\end{pmatrix}}$
Verification that each is an isomorphism is easy.
 This exercise is recommended for all readers.
 Problem 12
Prove that the map in Example 1.7, from ${\mathcal {P}}_{5}$ to ${\mathcal {P}}_{5}$ given by $p(x)\mapsto p(x-1)$, is a vector space isomorphism.
 Answer
This is the map, expanded.
 ${\begin{array}{rl}f(a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}+a_{5}x^{5})&={\begin{array}{rl}&a_{0}+a_{1}(x-1)+a_{2}(x-1)^{2}+a_{3}(x-1)^{3}\\&\,\quad +a_{4}(x-1)^{4}+a_{5}(x-1)^{5}\end{array}}\\&={\begin{array}{rl}&a_{0}+a_{1}(x-1)+a_{2}(x^{2}-2x+1)\\&\,\quad +a_{3}(x^{3}-3x^{2}+3x-1)\\&\,\quad +a_{4}(x^{4}-4x^{3}+6x^{2}-4x+1)\\&\,\quad +a_{5}(x^{5}-5x^{4}+10x^{3}-10x^{2}+5x-1)\end{array}}\\&={\begin{array}{rl}&(a_{0}-a_{1}+a_{2}-a_{3}+a_{4}-a_{5})\\&\,\quad +(a_{1}-2a_{2}+3a_{3}-4a_{4}+5a_{5})x\\&\,\quad +(a_{2}-3a_{3}+6a_{4}-10a_{5})x^{2}+(a_{3}-4a_{4}+10a_{5})x^{3}\\&\,\quad +(a_{4}-5a_{5})x^{4}+a_{5}x^{5}\end{array}}\end{array}}$
To finish checking that it is an isomorphism, we apply item 2 of Lemma 1.9 and show that it preserves linear combinations of two polynomials. Briefly, the check goes like this.
 $f(c\cdot (a_{0}+a_{1}x+\dots +a_{5}x^{5})+d\cdot (b_{0}+b_{1}x+\dots +b_{5}x^{5}))$
 $=\dots =(ca_{0}-ca_{1}+ca_{2}-ca_{3}+ca_{4}-ca_{5}+db_{0}-db_{1}+db_{2}-db_{3}+db_{4}-db_{5})+\dots +(ca_{5}+db_{5})x^{5}$
 $=\dots =c\cdot f(a_{0}+a_{1}x+\dots +a_{5}x^{5})+d\cdot f(b_{0}+b_{1}x+\dots +b_{5}x^{5})$
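The expansion of $p(x)\mapsto p(x-1)$ is easy to get wrong by hand, so here is a sympy sketch that recomputes the coefficients symbolically and compares them with the ones displayed above (the variable names are mine):

```python
import sympy as sym

x = sym.symbols('x')
a = sym.symbols('a0:6')  # the coefficients a0 .. a5

p = sum(a[i]*x**i for i in range(6))
fp = sym.expand(p.subs(x, x - 1))  # the map p(x) |-> p(x-1), expanded

# Coefficients of the image polynomial, in ascending order.
coeffs = [fp.coeff(x, i) for i in range(6)]

# Compare against the expansion in the text.
assert sym.expand(coeffs[0] - (a[0]-a[1]+a[2]-a[3]+a[4]-a[5])) == 0
assert sym.expand(coeffs[1] - (a[1]-2*a[2]+3*a[3]-4*a[4]+5*a[5])) == 0
assert sym.expand(coeffs[4] - (a[4]-5*a[5])) == 0
assert coeffs[5] == a[5]
```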
 Problem 13
Why, in Lemma 1.8, must there be a ${\vec {v}}\in V$? That is, why must $V$ be nonempty?
 Answer
No vector space has the empty set underlying it. We can take ${\vec {v}}$ to be the zero vector.
 Problem 15
In the proof of Lemma 1.9, what about the zero-summands case (that is, if $n$ is zero)?
 Answer
A linear combination of $n=0$ vectors adds to the zero vector and so Lemma 1.8 shows that the three statements are equivalent in this case.
 This exercise is recommended for all readers.
 Problem 17
These prove that isomorphism is an equivalence relation.
 Show that the identity map ${\mbox{id}}:V\to V$ is an isomorphism. Thus, any vector space is isomorphic to itself.
 Show that if $f:V\to W$ is an isomorphism then so is its inverse $f^{-1}:W\to V$. Thus, if $V$ is isomorphic to $W$ then also $W$ is isomorphic to $V$.
 Show that a composition of isomorphisms is an isomorphism: if $f:V\to W$ is an isomorphism and $g:W\to U$ is an isomorphism then so also is $g\circ f:V\to U$. Thus, if $V$ is isomorphic to $W$ and $W$ is isomorphic to $U$, then also $V$ is isomorphic to $U$.
 Answer
In each item, following item 2 of Lemma 1.9, we show that the map preserves structure by showing that it preserves linear combinations of two members of the domain.

The identity map is clearly one-to-one and onto. For linear combinations the check is easy.
 ${\mbox{id}}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}=c_{1}\cdot {\mbox{id}}({\vec {v}}_{1})+c_{2}\cdot {\mbox{id}}({\vec {v}}_{2})$
 The inverse of a correspondence is also a correspondence (as stated in the appendix), so we need only check that the inverse preserves linear combinations. Assume that ${\vec {w}}_{1}=f({\vec {v}}_{1})$ (so $f^{-1}({\vec {w}}_{1})={\vec {v}}_{1}$) and assume that ${\vec {w}}_{2}=f({\vec {v}}_{2})$.
 ${\begin{array}{rl}f^{-1}(c_{1}\cdot {\vec {w}}_{1}+c_{2}\cdot {\vec {w}}_{2})&=f^{-1}{\bigl (}\,c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\,{\bigr )}\\&=f^{-1}{\bigl (}\,f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\,{\bigr )}\\&=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}\\&=c_{1}\cdot f^{-1}({\vec {w}}_{1})+c_{2}\cdot f^{-1}({\vec {w}}_{2})\end{array}}$
 The composition of two correspondences is a correspondence (as stated in the appendix), so we need only check that the composition map preserves linear combinations.
 ${\begin{array}{rl}g\circ f\,{\bigl (}c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2}{\bigr )}&=g{\bigl (}\,f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\,{\bigr )}\\&=g{\bigl (}\,c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\,{\bigr )}\\&=c_{1}\cdot g{\bigl (}f({\vec {v}}_{1}){\bigr )}+c_{2}\cdot g{\bigl (}f({\vec {v}}_{2}){\bigr )}\\&=c_{1}\cdot g\circ f\,({\vec {v}}_{1})+c_{2}\cdot g\circ f\,({\vec {v}}_{2})\end{array}}$
 This exercise is recommended for all readers.
 Problem 20
Show that each type of map from Example 1.6 is an automorphism.
 Dilation $d_{s}$ by a nonzero scalar $s$.
 Rotation $t_{\theta }$ through an angle $\theta$.
 Reflection $f_{\ell }$ over a line through the origin.
Hint.
For the second and third items, polar coordinates are useful.
 Answer

This map is one-to-one because if $d_{s}({\vec {v}}_{1})=d_{s}({\vec {v}}_{2})$ then by definition of the map, $s\cdot {\vec {v}}_{1}=s\cdot {\vec {v}}_{2}$ and so ${\vec {v}}_{1}={\vec {v}}_{2}$, as $s$ is nonzero. This map is onto as any ${\vec {w}}\in \mathbb {R} ^{2}$ is the image of ${\vec {v}}=(1/s)\cdot {\vec {w}}$ (again, note that $s$ is nonzero). (Another way to see that this map is a correspondence is to observe that it has an inverse: the inverse of $d_{s}$ is $d_{1/s}$.)
To finish, note that this map preserves linear combinations
 $d_{s}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=s(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=c_{1}s{\vec {v}}_{1}+c_{2}s{\vec {v}}_{2}=c_{1}\cdot d_{s}({\vec {v}}_{1})+c_{2}\cdot d_{s}({\vec {v}}_{2})$
and therefore is an isomorphism.
 As in the prior item, we can show that the map $t_{\theta }$ is a correspondence by noting that it has an inverse, $t_{-\theta }$.
That the map preserves structure is geometrically easy to see. For instance, adding two vectors and then rotating them has the same effect as rotating first and then adding. For an algebraic argument, consider polar coordinates: the map $t_{\theta }$ sends the vector with endpoint $(r,\phi )$ to the vector with endpoint $(r,\phi +\theta )$. Then the familiar trigonometric formulas $\cos(\phi +\theta )=\cos \phi \,\cos \theta -\sin \phi \,\sin \theta$ and $\sin(\phi +\theta )=\sin \phi \,\cos \theta +\cos \phi \,\sin \theta$ show how to express the map's action in the usual rectangular coordinate system.
 ${\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}r\cos \phi \\r\sin \phi \end{pmatrix}}{\stackrel {t_{\theta }}{\longmapsto }}{\begin{pmatrix}r\cos(\phi +\theta )\\r\sin(\phi +\theta )\end{pmatrix}}={\begin{pmatrix}x\cos \theta -y\sin \theta \\x\sin \theta +y\cos \theta \end{pmatrix}}$
Now the calculation for preservation of addition
is routine.
 ${\begin{pmatrix}x_{1}+x_{2}\\y_{1}+y_{2}\end{pmatrix}}{\stackrel {t_{\theta }}{\longmapsto }}{\begin{pmatrix}(x_{1}+x_{2})\cos \theta -(y_{1}+y_{2})\sin \theta \\(x_{1}+x_{2})\sin \theta +(y_{1}+y_{2})\cos \theta \end{pmatrix}}={\begin{pmatrix}x_{1}\cos \theta -y_{1}\sin \theta \\x_{1}\sin \theta +y_{1}\cos \theta \end{pmatrix}}+{\begin{pmatrix}x_{2}\cos \theta -y_{2}\sin \theta \\x_{2}\sin \theta +y_{2}\cos \theta \end{pmatrix}}$
The calculation for preservation of scalar multiplication is similar.
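A numeric spot-check of rotation: the rectangular-coordinate formula above preserves linear combinations, and $t_{-\theta }$ undoes $t_{\theta }$. A numpy sketch with arbitrary sample vectors (the function name `t` is mine):

```python
import numpy as np

def t(theta, v):
    """Rotation of the plane through angle theta, in rectangular coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c*v[0] - s*v[1], s*v[0] + c*v[1]])

rng = np.random.default_rng(0)
theta = 0.7
v1, v2 = rng.standard_normal(2), rng.standard_normal(2)
c1, c2 = 2.0, -3.0

# Preservation of linear combinations, and t_{-theta} as the inverse.
assert np.allclose(t(theta, c1*v1 + c2*v2), c1*t(theta, v1) + c2*t(theta, v2))
assert np.allclose(t(-theta, t(theta, v1)), v1)
```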

This map is a correspondence because it has an inverse (namely, itself).
As in the last item, that the reflection map preserves structure is geometrically easy to see: adding vectors and then reflecting gives the same result as reflecting first and then adding, for instance. For an algebraic proof, suppose that the line $\ell$ has slope $k$ (the case of a line with undefined slope can be done as a separate, but easy, case). We can follow the hint and use polar coordinates: where the line $\ell$ forms an angle of $\phi$ with the $x$-axis, the action of $f_{\ell }$ is to send the vector with endpoint $(r\cos \theta ,r\sin \theta )$ to the one with endpoint $(r\cos(2\phi -\theta ),r\sin(2\phi -\theta ))$.
To convert to rectangular coordinates, we will use some trigonometric formulas, as we did in the prior item. First observe that $\cos \phi$ and $\sin \phi$ can be determined from the slope $k$ of the line: a right triangle with horizontal leg $1$ and vertical leg $k$ has hypotenuse ${\sqrt {1+k^{2}}}$, which gives that $\cos \phi =1/{\sqrt {1+k^{2}}}$ and $\sin \phi =k/{\sqrt {1+k^{2}}}$. Now,
 ${\begin{array}{rl}\cos(2\phi -\theta )&=\cos(2\phi )\,\cos \theta +\sin(2\phi )\,\sin \theta \\&=\left(\cos ^{2}\phi -\sin ^{2}\phi \right)\,\cos \theta +\left(2\sin \phi \cos \phi \right)\,\sin \theta \\&=\left(({\frac {1}{\sqrt {1+k^{2}}}})^{2}-({\frac {k}{\sqrt {1+k^{2}}}})^{2}\right)\,\cos \theta +\left(2{\frac {k}{\sqrt {1+k^{2}}}}{\frac {1}{\sqrt {1+k^{2}}}}\right)\,\sin \theta \\&=\left({\frac {1-k^{2}}{1+k^{2}}}\right)\,\cos \theta +\left({\frac {2k}{1+k^{2}}}\right)\,\sin \theta \end{array}}$
and thus the first component of the image vector is this.
 $r\cdot \cos(2\phi -\theta )={\frac {1-k^{2}}{1+k^{2}}}\cdot x+{\frac {2k}{1+k^{2}}}\cdot y$
A similar calculation shows that the second component of the image
vector is this.
 $r\cdot \sin(2\phi -\theta )={\frac {2k}{1+k^{2}}}\cdot x-{\frac {1-k^{2}}{1+k^{2}}}\cdot y$
With this algebraic description of the action of $f_{\ell }$
 ${\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {f_{\ell }}{\longmapsto }}{\begin{pmatrix}((1-k^{2})/(1+k^{2}))\cdot x+(2k/(1+k^{2}))\cdot y\\(2k/(1+k^{2}))\cdot x-((1-k^{2})/(1+k^{2}))\cdot y\end{pmatrix}}$
checking that it preserves structure is routine.
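The reflection formula can likewise be spot-checked numerically: it should be its own inverse, preserve linear combinations, and fix every point on the line $y=kx$. A numpy sketch (the helper name `refl` is mine):

```python
import numpy as np

def refl(k, v):
    """Reflection over the line y = k*x, using the formula derived above."""
    d = 1 + k*k
    M = np.array([[(1 - k*k)/d, 2*k/d],
                  [2*k/d, -(1 - k*k)/d]])
    return M @ v

rng = np.random.default_rng(1)
k = 0.75
v1, v2 = rng.standard_normal(2), rng.standard_normal(2)

# The map is its own inverse and preserves linear combinations.
assert np.allclose(refl(k, refl(k, v1)), v1)
assert np.allclose(refl(k, 2*v1 - 3*v2), 2*refl(k, v1) - 3*refl(k, v2))
# A point on the line itself is fixed.
assert np.allclose(refl(k, np.array([1.0, k])), np.array([1.0, k]))
```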
 Problem 22
 Show that a function $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$ is an automorphism if and only if it has the form $x\mapsto kx$ for some $k\neq 0$.
 Let $f$ be an automorphism of $\mathbb {R} ^{1}$ such that $f(3)=7$. Find $f(-2)$.
 Show that a function $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$ is an automorphism if and only if it has the form
 ${\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}$
for some $a,b,c,d\in \mathbb {R}$ with $adbc\neq 0$. Hint. Exercises in prior subsections have shown that
 ${\begin{pmatrix}b\\d\end{pmatrix}}{\text{ is not a multiple of }}{\begin{pmatrix}a\\c\end{pmatrix}}$
if and only if $ad-bc\neq 0$.
 Let $f$ be an automorphism of $\mathbb {R} ^{2}$ with
 $f({\begin{pmatrix}1\\3\end{pmatrix}})={\begin{pmatrix}2\\-1\end{pmatrix}}\quad {\text{and}}\quad f({\begin{pmatrix}1\\4\end{pmatrix}})={\begin{pmatrix}0\\1\end{pmatrix}}.$
Find
 $f({\begin{pmatrix}0\\-1\end{pmatrix}}).$
 Answer
 For the "only if" half, let $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$ be an isomorphism. Consider the basis $\langle 1\rangle \subseteq \mathbb {R} ^{1}$. Designate $f(1)$ by $k$. Then for any $x$ we have that $f(x)=f(x\cdot 1)=x\cdot f(1)=xk$, and so $f$'s action is multiplication by $k$. To finish this half, just note that $k\neq 0$ or else $f$ would not be one-to-one.
For the "if" half we only have to check that such a map is an isomorphism when $k\neq 0$. To check that it is one-to-one, assume that $f(x_{1})=f(x_{2})$ so that $kx_{1}=kx_{2}$ and divide by the nonzero factor $k$ to conclude that $x_{1}=x_{2}$. To check that it is onto, note that any $y\in \mathbb {R} ^{1}$ is the image of $x=y/k$ (again, $k\neq 0$). Finally, to check that such a map preserves combinations of two members of the domain, we have this.
 $f(c_{1}x_{1}+c_{2}x_{2})=k(c_{1}x_{1}+c_{2}x_{2})=c_{1}kx_{1}+c_{2}kx_{2}=c_{1}f(x_{1})+c_{2}f(x_{2})$
 By the prior item, $f$'s action is $x\mapsto (7/3)x$. Thus $f(-2)=-14/3$.
 For the "only if" half, assume that $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$ is an automorphism. Consider the standard basis ${\mathcal {E}}_{2}$ for $\mathbb {R} ^{2}$. Let
 $f({\vec {e}}_{1})={\begin{pmatrix}a\\c\end{pmatrix}}\quad {\text{and}}\quad f({\vec {e}}_{2})={\begin{pmatrix}b\\d\end{pmatrix}}.$
Then the action of $f$ on any vector is determined by its action on the two basis vectors.
 $f({\begin{pmatrix}x\\y\end{pmatrix}})=f(x\cdot {\vec {e}}_{1}+y\cdot {\vec {e}}_{2})=x\cdot f({\vec {e}}_{1})+y\cdot f({\vec {e}}_{2})=x\cdot {\begin{pmatrix}a\\c\end{pmatrix}}+y\cdot {\begin{pmatrix}b\\d\end{pmatrix}}={\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}$
To finish this half, note that if $ad-bc=0$, that is, if $f({\vec {e}}_{2})$ is a multiple of $f({\vec {e}}_{1})$, then $f$ is not one-to-one.
For "if" we must check that the map is an isomorphism, under the condition that $ad-bc\neq 0$. The structure-preservation check is easy; we will here show that $f$ is a correspondence. For the argument that the map is one-to-one, assume this.
 $f({\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}})=f({\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})\quad {\text{and so}}\quad {\begin{pmatrix}ax_{1}+by_{1}\\cx_{1}+dy_{1}\end{pmatrix}}={\begin{pmatrix}ax_{2}+by_{2}\\cx_{2}+dy_{2}\end{pmatrix}}$
Then, because $ad-bc\neq 0$, the resulting system
 ${\begin{array}{*{2}{rc}r}a(x_{1}-x_{2})&+&b(y_{1}-y_{2})&=&0\\c(x_{1}-x_{2})&+&d(y_{1}-y_{2})&=&0\end{array}}$
has a unique solution, namely the trivial one $x_{1}-x_{2}=0$ and $y_{1}-y_{2}=0$ (this follows from the hint).
The argument that this map is onto is closely related: this system
 ${\begin{array}{*{2}{rc}r}ax_{1}&+&by_{1}&=&x\\cx_{1}&+&dy_{1}&=&y\end{array}}$
has a solution for any $x$ and $y$ if and only if
this set
 $\{{\begin{pmatrix}a\\c\end{pmatrix}},{\begin{pmatrix}b\\d\end{pmatrix}}\}$
spans $\mathbb {R} ^{2}$, i.e., if and only if this set is
a basis (because it is a twoelement subset of $\mathbb {R} ^{2}$),
i.e., if and only if $ad-bc\neq 0$.

 $f({\begin{pmatrix}0\\-1\end{pmatrix}})=f({\begin{pmatrix}1\\3\end{pmatrix}}-{\begin{pmatrix}1\\4\end{pmatrix}})=f({\begin{pmatrix}1\\3\end{pmatrix}})-f({\begin{pmatrix}1\\4\end{pmatrix}})={\begin{pmatrix}2\\-1\end{pmatrix}}-{\begin{pmatrix}0\\1\end{pmatrix}}={\begin{pmatrix}2\\-2\end{pmatrix}}$
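The computation above amounts to two steps: express the target vector as a combination of the two given vectors, then apply $f$ by linearity. A numpy sketch of that process, using the given values $f((1,3))=(2,-1)$ and $f((1,4))=(0,1)$ written as columns (the variable names are mine):

```python
import numpy as np

# Columns are the given domain vectors (1,3) and (1,4).
A = np.array([[1.0, 1.0], [3.0, 4.0]])
# Coefficients expressing (0,-1) in terms of those vectors.
coeffs = np.linalg.solve(A, np.array([0.0, -1.0]))
assert np.allclose(coeffs, [1.0, -1.0])  # (0,-1) = (1,3) - (1,4)

# Apply f by linearity, using the given images as columns.
images = np.array([[2.0, 0.0], [-1.0, 1.0]])  # f((1,3)) and f((1,4))
assert np.allclose(images @ coeffs, [2.0, -2.0])
```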
 Problem 23
Refer to Lemma 1.8 and Lemma 1.9. Find two more things preserved by isomorphism.
 Answer
There are many answers; two are linear independence and subspaces.
To show that if a set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$ is linearly independent then its image $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$ is also linearly independent, consider a linear relationship among members of the image set.
 ${\vec {0}}=c_{1}f({\vec {v}}_{1})+\dots +c_{n}f({\vec {v}}_{n})=f(c_{1}{\vec {v}}_{1})+\dots +f(c_{n}{\vec {v}}_{n})=f(c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n})$
Because this map is an isomorphism, it is one-to-one. So $f$ maps only one vector from the domain to the zero vector in the range, that is, $c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n}$ equals the zero vector (in the domain, of course). But, if $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$ is linearly independent then all of the $c$'s are zero, and so $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$ is linearly independent also. (Remark. There is a small point about this argument that is worth mentioning. In a set, repeats collapse, that is, strictly speaking, this is a one-element set: $\{{\vec {v}},{\vec {v}}\}$, because the things listed in it are the same thing. Observe, however, the use of the subscript $n$ in the above argument. In moving from the domain set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$ to the image set $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$, there is no collapsing, because the image set does not have repeats, because the isomorphism $f$ is one-to-one.)
To show that if $f:V\to W$ is an isomorphism and if $U$ is a subspace of the domain $V$ then the set of image vectors $f(U)=\{{\vec {w}}\in W\,{\big |}\,{\vec {w}}=f({\vec {u}}){\text{ for some }}{\vec {u}}\in U\}$ is a subspace of $W$, we need only show that it is closed under linear combinations of two of its members (it is nonempty because it contains the image of the zero vector). We have
 $c_{1}\cdot f({\vec {u}}_{1})+c_{2}\cdot f({\vec {u}}_{2})=f(c_{1}{\vec {u}}_{1})+f(c_{2}{\vec {u}}_{2})=f(c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2})$
and $c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2}$ is a member of $U$ because of the closure of a subspace under combinations. Hence the combination of $f({\vec {u}}_{1})$ and $f({\vec {u}}_{2})$ is a member of $f(U)$.
 Problem 24
We show that isomorphisms can be tailored to fit in that, sometimes, given vectors in the domain and in the range we can produce an isomorphism associating those vectors.
 Let $B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2},{\vec {\beta }}_{3}\rangle$ be a basis for ${\mathcal {P}}_{2}$ so that any ${\vec {p}}\in {\mathcal {P}}_{2}$ has a unique representation as ${\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}$, which we denote in this way.
 ${\rm {Rep}}_{B}({\vec {p}})={\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}$
Show that the ${\rm {Rep}}_{B}(\cdot )$ operation is a function from ${\mathcal {P}}_{2}$ to $\mathbb {R} ^{3}$ (this entails showing that with every domain vector ${\vec {v}}\in {\mathcal {P}}_{2}$ there is an associated image vector in $\mathbb {R} ^{3}$, and further, that with every domain vector ${\vec {v}}\in {\mathcal {P}}_{2}$ there is at most one associated image vector).
 Show that this ${\rm {Rep}}_{B}(\cdot )$ function is one-to-one and onto.
 Show that it preserves structure.
 Produce an isomorphism from ${\mathcal {P}}_{2}$ to
$\mathbb {R} ^{3}$ that fits these specifications.
 $x+x^{2}\mapsto {\begin{pmatrix}1\\0\\0\end{pmatrix}}\quad {\text{and}}\quad 1x\mapsto {\begin{pmatrix}0\\1\\0\end{pmatrix}}$
 Answer
 The association
 ${\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}{\stackrel {{\rm {Rep}}_{B}(\cdot )}{\longmapsto }}{\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}$
is a function if every member ${\vec {p}}$ of the domain is associated with at least one member of the codomain, and if every member ${\vec {p}}$ of the domain is associated with at most one member of the codomain. The first condition holds because the basis $B$ spans the domain: every ${\vec {p}}$ can be written as at least one linear combination of ${\vec {\beta }}$'s. The second condition holds because the basis $B$ is linearly independent: every member ${\vec {p}}$ of the domain can be written as at most one linear combination of the ${\vec {\beta }}$'s.  For the one-to-one argument, if ${\rm {Rep}}_{B}({\vec {p}})={\rm {Rep}}_{B}({\vec {q}})$, that is, if ${\rm {Rep}}_{B}(p_{1}{\vec {\beta }}_{1}+p_{2}{\vec {\beta }}_{2}+p_{3}{\vec {\beta }}_{3})={\rm {Rep}}_{B}(q_{1}{\vec {\beta }}_{1}+q_{2}{\vec {\beta }}_{2}+q_{3}{\vec {\beta }}_{3})$ then
 ${\begin{pmatrix}p_{1}\\p_{2}\\p_{3}\end{pmatrix}}={\begin{pmatrix}q_{1}\\q_{2}\\q_{3}\end{pmatrix}}$
and so $p_{1}=q_{1}$ and $p_{2}=q_{2}$ and $p_{3}=q_{3}$, which gives the conclusion that ${\vec {p}}={\vec {q}}$. Therefore this map is one-to-one.
For onto, we can just note that
 ${\begin{pmatrix}a\\b\\c\end{pmatrix}}$
equals ${\rm {Rep}}_{B}(a{\vec {\beta }}_{1}+b{\vec {\beta }}_{2}+c{\vec {\beta }}_{3})$, and so any member of the codomain $\mathbb {R} ^{3}$ is the image of some member of the domain ${\mathcal {P}}_{2}$.
 This map respects addition and scalar multiplication because it respects combinations of two members of the domain (that is, we are using item 2 of Lemma 1.9): where ${\vec {p}}=p_{1}{\vec {\beta }}_{1}+p_{2}{\vec {\beta }}_{2}+p_{3}{\vec {\beta }}_{3}$ and ${\vec {q}}=q_{1}{\vec {\beta }}_{1}+q_{2}{\vec {\beta }}_{2}+q_{3}{\vec {\beta }}_{3}$, we have this.
 ${\begin{array}{rl}{\rm {Rep}}_{B}(c\cdot {\vec {p}}+d\cdot {\vec {q}})&={\rm {Rep}}_{B}(\,(cp_{1}+dq_{1}){\vec {\beta }}_{1}+(cp_{2}+dq_{2}){\vec {\beta }}_{2}+(cp_{3}+dq_{3}){\vec {\beta }}_{3}\,)\\&={\begin{pmatrix}cp_{1}+dq_{1}\\cp_{2}+dq_{2}\\cp_{3}+dq_{3}\end{pmatrix}}\\&=c\cdot {\begin{pmatrix}p_{1}\\p_{2}\\p_{3}\end{pmatrix}}+d\cdot {\begin{pmatrix}q_{1}\\q_{2}\\q_{3}\end{pmatrix}}\\&=c\cdot {\rm {Rep}}_{B}({\vec {p}})+d\cdot {\rm {Rep}}_{B}({\vec {q}})\end{array}}$
 Use any basis $B$ for ${\mathcal {P}}_{2}$ whose first two members are $x+x^{2}$ and $1x$, say $B=\langle x+x^{2},1x,1\rangle$.
 Problem 26
(Requires the subsection on Combining Subspaces, which is optional.) Let $U$ and $W$ be vector spaces. Define a new vector space, consisting of the set $U\times W=\{({\vec {u}},{\vec {w}})\,{\big |}\,{\vec {u}}\in U{\text{ and }}{\vec {w}}\in W\}$ along with these operations.
 $({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\quad {\text{and}}\quad r\cdot ({\vec {u}},{\vec {w}})=(r{\vec {u}},r{\vec {w}})$
This is a vector space, the external direct sum of $U$ and $W$.
 Check that it is a vector space.
 Find a basis for, and the dimension of, the external direct sum ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$.
 What is the relationship among $\dim(U)$, $\dim(W)$, and $\dim(U\times W)$?
 Suppose that $U$ and $W$ are subspaces of a vector space $V$ such that $V=U\oplus W$ (in this case we say that $V$ is the internal direct sum of $U$ and $W$). Show that the map $f:U\times W\to V$ given by
 $({\vec {u}},{\vec {w}}){\stackrel {f}{\longmapsto }}{\vec {u}}+{\vec {w}}$
is an isomorphism. Thus if the internal direct sum is defined then the internal and external direct sums are isomorphic.
 Answer
 Most of the conditions in the definition of a vector space are routine. We here sketch the verification of part 1 of that definition.
For closure of $U\times W$, note that because $U$ and $W$ are closed, we have that ${\vec {u}}_{1}+{\vec {u}}_{2}\in U$ and ${\vec {w}}_{1}+{\vec {w}}_{2}\in W$ and so $({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\in U\times W$. Commutativity of addition in $U\times W$ follows from commutativity of addition in $U$ and $W$.
 $({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})=({\vec {u}}_{2}+{\vec {u}}_{1},{\vec {w}}_{2}+{\vec {w}}_{1})=({\vec {u}}_{2},{\vec {w}}_{2})+({\vec {u}}_{1},{\vec {w}}_{1})$
The check for associativity of addition is similar. The zero element is $({\vec {0}}_{U},{\vec {0}}_{W})\in U\times W$ and the additive inverse of $({\vec {u}},{\vec {w}})$ is $(-{\vec {u}},-{\vec {w}})$.
The checks for the second part of the definition of a vector space are also straightforward.
 This is a basis
 $\langle \,(1,{\begin{pmatrix}0\\0\end{pmatrix}}),(x,{\begin{pmatrix}0\\0\end{pmatrix}}),(x^{2},{\begin{pmatrix}0\\0\end{pmatrix}}),(0,{\begin{pmatrix}1\\0\end{pmatrix}}),(0,{\begin{pmatrix}0\\1\end{pmatrix}})\,\rangle$
because there is one and only one way to represent any member of ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$ with respect to this set; here is an example.
 $(3+2x+x^{2},{\begin{pmatrix}5\\4\end{pmatrix}})=3\cdot (1,{\begin{pmatrix}0\\0\end{pmatrix}})+2\cdot (x,{\begin{pmatrix}0\\0\end{pmatrix}})+(x^{2},{\begin{pmatrix}0\\0\end{pmatrix}})+5\cdot (0,{\begin{pmatrix}1\\0\end{pmatrix}})+4\cdot (0,{\begin{pmatrix}0\\1\end{pmatrix}})$
The dimension of this space is five.
 We have $\dim(U\times W)=\dim(U)+\dim(W)$ as this is a basis.
 $\langle ({\vec {\mu }}_{1},{\vec {0}}_{W}),\dots ,({\vec {\mu }}_{\dim(U)},{\vec {0}}_{W}),({\vec {0}}_{U},{\vec {\omega }}_{1}),\ldots ,({\vec {0}}_{U},{\vec {\omega }}_{\dim(W)})\rangle$
 We know that if $V=U\oplus W$ then each ${\vec {v}}\in V$ can be written as ${\vec {v}}={\vec {u}}+{\vec {w}}$ in one and only one way. This is just what we need to prove that the given function is an isomorphism.
First, to show that $f$ is one-to-one we can show that if $f\left(({\vec {u}}_{1},{\vec {w}}_{1})\right)=f\left(({\vec {u}}_{2},{\vec {w}}_{2})\right)$, that is, if ${\vec {u}}_{1}+{\vec {w}}_{1}={\vec {u}}_{2}+{\vec {w}}_{2}$ then ${\vec {u}}_{1}={\vec {u}}_{2}$ and ${\vec {w}}_{1}={\vec {w}}_{2}$. But the statement "each ${\vec {v}}$ is such a sum in only one way" is exactly what is needed to make this conclusion. Similarly, the argument that $f$ is onto is completed by the statement that "each ${\vec {v}}$ is such a sum in at least one way".
This map also preserves linear combinations
 ${\begin{array}{rl}f(\,c_{1}\cdot ({\vec {u}}_{1},{\vec {w}}_{1})+c_{2}\cdot ({\vec {u}}_{2},{\vec {w}}_{2})\,)&=f(\,(c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2},c_{1}{\vec {w}}_{1}+c_{2}{\vec {w}}_{2})\,)\\&=c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2}+c_{1}{\vec {w}}_{1}+c_{2}{\vec {w}}_{2}\\&=c_{1}{\vec {u}}_{1}+c_{1}{\vec {w}}_{1}+c_{2}{\vec {u}}_{2}+c_{2}{\vec {w}}_{2}\\&=c_{1}\cdot f(\,({\vec {u}}_{1},{\vec {w}}_{1})\,)+c_{2}\cdot f(\,({\vec {u}}_{2},{\vec {w}}_{2})\,)\end{array}}$
and so it is an isomorphism.