# Linear Algebra/Topic: Geometry of Linear Maps/Solutions

## Solutions

Problem 1

Let $h:\mathbb{R}^2\to \mathbb{R}^2$ be the transformation that rotates vectors clockwise by $\pi/4$ radians.

1. Find the matrix $H$ representing $h$ with respect to the standard bases. Use Gauss' method to reduce $H$ to the identity.
2. Translate the row reduction to a matrix equation $T_jT_{j-1}\cdots T_1H=I$ (the prior item shows both that $H$ is matrix equivalent to $I$, and that no column operations are needed to derive $I$ from $H$).
3. Solve this matrix equation for $H$.
4. Sketch the geometric effect of each matrix in the factorization, that is, sketch how $H$ is expressed as a combination of dilations, flips, skews, and projections (the identity is a trivial projection).
1. To represent $H$, recall that rotation counterclockwise by $\theta$ radians is represented with respect to the standard basis in this way.
${\rm Rep}_{\mathcal{E}_2,\mathcal{E}_2}(h) =\begin{pmatrix} \cos\theta &-\sin\theta \\ \sin\theta &\cos\theta \end{pmatrix}$
A clockwise angle is the negative of a counterclockwise one.
${\rm Rep}_{\mathcal{E}_2,\mathcal{E}_2}(h) =\begin{pmatrix} \cos(-\pi/4) &-\sin(-\pi/4) \\ \sin(-\pi/4) &\cos(-\pi/4) \end{pmatrix} =\begin{pmatrix} \sqrt{2}/2 &\sqrt{2}/2 \\ -\sqrt{2}/2 &\sqrt{2}/2 \end{pmatrix}$
This Gauss-Jordan reduction
$\xrightarrow[]{\rho_1+\rho_2} \begin{pmatrix} \sqrt{2}/2 &\sqrt{2}/2 \\ 0 &\sqrt{2} \end{pmatrix} \xrightarrow[(1/\sqrt{2})\rho_2]{(2/\sqrt{2})\rho_1} \begin{pmatrix} 1 &1 \\ 0 &1 \end{pmatrix} \xrightarrow[]{-\rho_2+\rho_1} \begin{pmatrix} 1 &0 \\ 0 &1 \end{pmatrix}$
produces the identity matrix so there is no need for column-swapping operations to end with a partial-identity.
2. The reduction is expressed in matrix multiplication as
$\begin{pmatrix} 1 &-1 \\ 0 &1 \end{pmatrix} \begin{pmatrix} 2/\sqrt{2} &0 \\ 0 &1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1 &0 \\ 1 &1 \end{pmatrix} H =I$
(note that composition of the Gaussian operations is performed from right to left).
3. Taking inverses
$H = \underbrace{ \begin{pmatrix} 1 &0 \\ -1 &1 \end{pmatrix} \begin{pmatrix} \sqrt{2}/2 &0 \\ 0 &\sqrt{2} \end{pmatrix} \begin{pmatrix} 1 &1 \\ 0 &1 \end{pmatrix} }_P I$
gives the desired factorization of $H$ (here, the partial identity is $I$, and $Q$ is trivial, that is, it is also an identity matrix).
4. Reading the composition from right to left (and ignoring the identity matrices as trivial) gives that $H$ has the same effect as first performing the skew
$\begin{pmatrix} 1 &1 \\ 0 &1 \end{pmatrix}$
that slides each vector to the side by its second component,
followed by a dilation that multiplies all first components by $\sqrt{2}/2$ (this is a "shrink" in that $\sqrt{2}/2\approx0.707$) and all second components by $\sqrt{2}$, followed by the skew
$\begin{pmatrix} 1 &0 \\ -1 &1 \end{pmatrix}$
that slides each vector down by its first component.

For instance, the effect of $H$ on the unit vector whose angle with the $x$-axis is $\pi/3$ is to carry it to the unit vector whose angle is $\pi/3-\pi/4=\pi/12$.

Verifying that the resulting vector has unit length and forms an angle of $\pi/12$ with the $x$-axis is routine.
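
That verification can also be done numerically. The sketch below (plain Python, not part of the original solution; the factor names are ours) multiplies the three factors, compares the product with the clockwise rotation matrix, and applies it to the unit vector at angle $\pi/3$.

```python
from math import atan2, cos, hypot, isclose, pi, sin, sqrt

def matmul(a, b):
    # product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# the factors of H, named by their geometric effect
skew_first = [[1, 1], [0, 1]]                # rightmost factor, applied first
dilation = [[sqrt(2) / 2, 0], [0, sqrt(2)]]
skew_last = [[1, 0], [-1, 1]]                # leftmost factor, applied last
H = matmul(skew_last, matmul(dilation, skew_first))

# H should equal rotation clockwise by pi/4
R = [[cos(-pi / 4), -sin(-pi / 4)], [sin(-pi / 4), cos(-pi / 4)]]
assert all(isclose(H[i][j], R[i][j]) for i in range(2) for j in range(2))

# the unit vector at angle pi/3 maps to the unit vector at angle pi/3 - pi/4
v = (cos(pi / 3), sin(pi / 3))
w = (H[0][0] * v[0] + H[0][1] * v[1], H[1][0] * v[0] + H[1][1] * v[1])
assert isclose(hypot(*w), 1.0)
assert isclose(atan2(w[1], w[0]), pi / 3 - pi / 4)
```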

Problem 2

What combination of dilations, flips, skews, and projections produces a rotation counterclockwise by $2\pi/3$ radians?

We will first represent the map with a matrix $H$, then perform the row operations and, if needed, column operations to reduce it to a partial-identity matrix. We will then translate that into a factorization $H=PBQ$. Substituting into the general matrix

${\rm Rep}_{\mathcal{E}_2,\mathcal{E}_2}(r_\theta) =\begin{pmatrix} \cos\theta &-\sin\theta \\ \sin\theta &\cos\theta \end{pmatrix}$

gives this representation.

${\rm Rep}_{\mathcal{E}_2,\mathcal{E}_2}(r_{2\pi/3}) =\begin{pmatrix} -1/2 &-\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix}$

Gauss' method is routine.

$\xrightarrow[]{\sqrt{3}\rho_1+\rho_2} \begin{pmatrix} -1/2 &-\sqrt{3}/2 \\ 0 &-2 \end{pmatrix} \xrightarrow[(-1/2)\rho_2]{-2\rho_1} \begin{pmatrix} 1 &\sqrt{3} \\ 0 &1 \end{pmatrix} \xrightarrow[]{-\sqrt{3}\rho_2+\rho_1} \begin{pmatrix} 1 &0 \\ 0 &1 \end{pmatrix}$

That translates to a matrix equation in this way.

$\begin{pmatrix} 1 &-\sqrt{3} \\ 0 &1 \end{pmatrix} \begin{pmatrix} -2 &0 \\ 0 &-1/2 \end{pmatrix} \begin{pmatrix} 1 &0 \\ \sqrt{3} &1 \end{pmatrix} \begin{pmatrix} -1/2 &-\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix} =I$

Taking inverses to solve for $H$ yields this factorization.

$\begin{pmatrix} -1/2 &-\sqrt{3}/2 \\ \sqrt{3}/2 &-1/2 \end{pmatrix} = \begin{pmatrix} 1 &0 \\ -\sqrt{3} &1 \end{pmatrix} \begin{pmatrix} -1/2 &0 \\ 0 &-2 \end{pmatrix} \begin{pmatrix} 1 &\sqrt{3} \\ 0 &1 \end{pmatrix} I$
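
As a sanity check (ours, not part of the original solution), the sketch below multiplies the three factors and confirms that the product is the rotation matrix for $2\pi/3$; the negative diagonal entries combine dilations with flips.

```python
from math import cos, isclose, pi, sin, sqrt

def matmul(a, b):
    # product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

skew_first = [[1, sqrt(3)], [0, 1]]    # rightmost factor, applied first
scale = [[-1 / 2, 0], [0, -2]]         # negative factors: dilations combined with flips
skew_last = [[1, 0], [-sqrt(3), 1]]    # leftmost factor, applied last
H = matmul(skew_last, matmul(scale, skew_first))

# H should be rotation counterclockwise by 2*pi/3
R = [[cos(2 * pi / 3), -sin(2 * pi / 3)],
     [sin(2 * pi / 3), cos(2 * pi / 3)]]
assert all(isclose(H[i][j], R[i][j], abs_tol=1e-12)
           for i in range(2) for j in range(2))
```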

Problem 3

What combination of dilations, flips, skews, and projections produces the map $h:\mathbb{R}^3\to \mathbb{R}^3$ represented with respect to the standard bases by this matrix?

$\begin{pmatrix} 1 &2 &1 \\ 3 &6 &0 \\ 1 &2 &2 \end{pmatrix}$

This Gaussian reduction

$\xrightarrow[-\rho_1+\rho_3]{-3\rho_1+\rho_2} \begin{pmatrix} 1 &2 &1 \\ 0 &0 &-3 \\ 0 &0 &1 \end{pmatrix} \xrightarrow[]{(1/3)\rho_2+\rho_3} \begin{pmatrix} 1 &2 &1 \\ 0 &0 &-3 \\ 0 &0 &0 \end{pmatrix} \xrightarrow[]{(-1/3)\rho_2} \begin{pmatrix} 1 &2 &1 \\ 0 &0 &1 \\ 0 &0 &0 \end{pmatrix} \xrightarrow[]{-\rho_2+\rho_1} \begin{pmatrix} 1 &2 &0 \\ 0 &0 &1 \\ 0 &0 &0 \end{pmatrix}$

gives the reduced echelon form of the matrix. Now the two column operations of taking $-2$ times the first column and adding it to the second, and then of swapping columns two and three produce this partial identity.

$B=\begin{pmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &0 \end{pmatrix}$

All of that translates into matrix terms as: where

$P= \begin{pmatrix} 1 &-1 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} \begin{pmatrix} 1 &0 &0 \\ 0 &-1/3 &0 \\ 0 &0 &1 \end{pmatrix} \begin{pmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &1/3 &1 \end{pmatrix} \begin{pmatrix} 1 &0 &0 \\ 0 &1 &0 \\ -1 &0 &1 \end{pmatrix} \begin{pmatrix} 1 &0 &0 \\ -3 &1 &0 \\ 0 &0 &1 \end{pmatrix}$

and

$Q= \begin{pmatrix} 1 &-2 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix} \begin{pmatrix} 1 &0 &0 \\ 0 &0 &1 \\ 0 &1 &0 \end{pmatrix}$

the given matrix $H$ satisfies $PHQ=B$, and so it factors as $H=P^{-1}BQ^{-1}$ (the inverse of a dilation, flip, or skew matrix is again a matrix of that kind).
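
A numeric check of $PHQ=B$ (our sketch, not part of the original solution; note that the second factor of $Q$ swaps columns two and three):

```python
from math import isclose

def matmul(a, b):
    # product of two 3x3 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def product(ms):
    result = ms[0]
    for m in ms[1:]:
        result = matmul(result, m)
    return result

H = [[1, 2, 1], [3, 6, 0], [1, 2, 2]]

# row operations; the leftmost matrix is the operation applied last
P = product([
    [[1, -1, 0], [0, 1, 0], [0, 0, 1]],    # -rho_2 + rho_1
    [[1, 0, 0], [0, -1/3, 0], [0, 0, 1]],  # (-1/3) rho_2
    [[1, 0, 0], [0, 1, 0], [0, 1/3, 1]],   # (1/3) rho_2 + rho_3
    [[1, 0, 0], [0, 1, 0], [-1, 0, 1]],    # -rho_1 + rho_3
    [[1, 0, 0], [-3, 1, 0], [0, 0, 1]],    # -3 rho_1 + rho_2
])
# column operations act by right multiplication, leftmost applied first
Q = product([
    [[1, -2, 0], [0, 1, 0], [0, 0, 1]],    # -2 times column 1 added to column 2
    [[1, 0, 0], [0, 0, 1], [0, 1, 0]],     # swap columns 2 and 3
])

B = matmul(matmul(P, H), Q)
expected = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
assert all(isclose(B[i][j], expected[i][j], abs_tol=1e-12)
           for i in range(3) for j in range(3))
```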

Problem 4

Show that any linear transformation of $\mathbb{R}^1$ is the map that multiplies by a scalar $x\mapsto kx$.

Represent it with respect to the standard bases $\mathcal{E}_1,\mathcal{E}_1$; then the only entry in the resulting $1 \! \times \! 1$ matrix is a scalar $k$, so the map acts as $x\mapsto kx$.

Problem 5

Show that for any permutation (that is, reordering) $p$ of the numbers $1$, ..., $n$, the map

$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \mapsto \begin{pmatrix} x_{p(1)} \\ x_{p(2)} \\ \vdots \\ x_{p(n)} \end{pmatrix}$

can be accomplished with a composition of maps, each of which only swaps a single pair of coordinates. Hint: it can be done by induction on $n$. (Remark: in the fourth chapter we will show this and we will also show that the parity of the number of swaps used is determined by $p$. That is, although a particular permutation could be accomplished in two different ways with two different numbers of swaps, either both ways use an even number of swaps, or both use an odd number.)

We can show this by induction on the number of components in the vector. In the $n=1$ base case the only permutation is the trivial one, and the map

$\begin{pmatrix} x_1 \end{pmatrix} \mapsto \begin{pmatrix} x_1 \end{pmatrix}$

is indeed expressible as a composition of swaps— as zero swaps. For the inductive step we assume that the map induced by any permutation of fewer than $n$ numbers can be expressed with swaps only, and we consider the map induced by a permutation $p$ of $n$ numbers.

$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \mapsto \begin{pmatrix} x_{p(1)} \\ x_{p(2)} \\ \vdots \\ x_{p(n)} \end{pmatrix}$

Consider the number $i$ such that $p(i)=n$. The map

$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_n \end{pmatrix} \stackrel{\hat{p}}{\longmapsto} \begin{pmatrix} x_{p(1)} \\ x_{p(2)} \\ \vdots \\ x_{p(n)} \\ \vdots \\ x_{n} \end{pmatrix}$

will, when followed by the swap of the $i$-th and $n$-th components, give the map $p$. Because $\hat{p}$ leaves the $n$-th component fixed, it is induced by a permutation of the numbers $1$, ..., $n-1$, and so the inductive hypothesis gives that $\hat{p}$ is achievable as a composition of swaps.
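
The induction translates directly into a recursive procedure. This sketch (0-indexed Python; `perm_as_swaps` and `apply` are our illustrative names, not from the text) finds the slot $i$ with $p(i)=n$, recurses on the induced smaller permutation $\hat{p}$, and appends the final swap.

```python
def perm_as_swaps(p):
    # decompose the map sending x[j] to x[p[j]] into pairwise swaps,
    # following the induction in the text (0-indexed, so "n" is n - 1 here)
    n = len(p)
    if n <= 1:
        return []                    # base case: zero swaps
    i = p.index(n - 1)               # the slot i with p(i) = n
    p_hat = p[:n - 1]                # p-hat fixes the last slot...
    if i != n - 1:
        p_hat[i] = p[n - 1]          # ...and sends slot i to p(n)
    swaps = perm_as_swaps(p_hat)     # inductive hypothesis on n - 1 numbers
    if i != n - 1:
        swaps.append((i, n - 1))     # final swap of the i-th and n-th slots
    return swaps

def apply(swaps, xs):
    # perform the swaps left to right on a copy of the vector
    xs = list(xs)
    for i, j in swaps:
        xs[i], xs[j] = xs[j], xs[i]
    return xs

# the map (x1, x2, x3) -> (x3, x1, x2), realized by two swaps
assert apply(perm_as_swaps([2, 0, 1]), ['x1', 'x2', 'x3']) == ['x3', 'x1', 'x2']
```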

Problem 6

Show that linear maps preserve the linear structures of a space.

1. Show that for any linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$, the image of any line is a line. The image may be a degenerate line, that is, a single point.
2. Show that the image of any linear surface is a linear surface. This generalizes the result that under a linear map the image of a subspace is a subspace.
3. Linear maps preserve other linear ideas. Show that linear maps preserve "betweenness": if the point $B$ is between $A$ and $C$ then the image of $B$ is between the image of $A$ and the image of $C$.
1. A line is a subset of $\mathbb{R}^n$ of the form $\{\vec{v}=\vec{u}+t\cdot\vec{w}\,\big|\, t\in\mathbb{R}\}$. The image of a point on that line is $h(\vec{v})=h(\vec{u}+t\cdot\vec{w})=h(\vec{u})+t\cdot h(\vec{w})$, and the set of such vectors, as $t$ ranges over the reals, is a line (albeit, degenerate if $h(\vec{w})=\vec{0}$).
2. This is an obvious extension of the prior argument.
3. If the point $B$ is between the points $A$ and $C$ then the line from $A$ to $C$ has $B$ in it. That is, there is a $t\in (0\,..\,1)$ such that $\vec{b}=\vec{a}+t\cdot (\vec{c}-\vec{a})$ (where $B$ is the endpoint of $\vec{b}$, etc.). Now, as in the argument of the first item, linearity gives $h(\vec{b})=h(\vec{a})+t\cdot h(\vec{c}-\vec{a}) =h(\vec{a})+t\cdot\bigl(h(\vec{c})-h(\vec{a})\bigr)$, and so $h(\vec{b})$ is between $h(\vec{a})$ and $h(\vec{c})$.
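
A quick numeric illustration of item 3 (our sketch; the matrix, points, and parameter are arbitrarily chosen, not from the text): the image of $B$ sits at the same parameter $t$ between the images of $A$ and $C$.

```python
from math import isclose

M = [[2, -1], [1, 3]]                     # an arbitrary linear map h on R^2
def h(v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

a, c = [1, 2], [5, -2]
t = 0.25
b = [a[k] + t * (c[k] - a[k]) for k in range(2)]   # B lies between A and C

ha, hb, hc = h(a), h(b), h(c)
# linearity puts h(B) at the same parameter t between h(A) and h(C)
assert all(isclose(hb[k], ha[k] + t * (hc[k] - ha[k])) for k in range(2))
```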
Problem 7

Use a picture like the one that appears in the discussion of the Chain Rule to answer: if a function $f:\mathbb{R}\to \mathbb{R}$ has an inverse, what's the relationship between how the function — locally, approximately — dilates space, and how its inverse dilates space (assuming, of course, that it has an inverse)?

The two local dilation factors are reciprocals of each other. For instance, for a fixed $x\in\mathbb{R}$, if $f^\prime (x)=k$ (with $k\neq 0$) then, by the formula for the derivative of an inverse, $(f^{-1})^\prime (f(x))=1/k$.
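
A numeric sketch of the reciprocal relationship (illustrative only; the function $f(x)=x^3$ and the finite-difference helper are our choices): near $x=2$, $f$ dilates by $f^\prime(2)=12$, and $f^{-1}$ dilates by $1/12$ near the image point $f(2)=8$.

```python
def deriv(f, x, dx=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def f(x):
    return x ** 3            # invertible for positive x

def f_inv(y):
    return y ** (1 / 3)      # its inverse on the positives

x = 2.0
k = deriv(f, x)              # approximately f'(2) = 12
# the inverse's local dilation factor, taken at the image point f(x), is 1/k
assert abs(deriv(f_inv, f(x)) - 1 / k) < 1e-6
```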