# Linear Algebra/Changing Representations of Vectors/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

In $\mathbb{R}^2$, where

$D=\langle \begin{pmatrix} 2 \\ 1 \end{pmatrix},\begin{pmatrix} -2 \\ 4 \end{pmatrix} \rangle$

find the change of basis matrices from $D$ to $\mathcal{E}_2$ and from $\mathcal{E}_2$ to $D$. Multiply the two.

For the matrix to change bases from $D$ to $\mathcal{E}_2$ we need that ${\rm Rep}_{\mathcal{E}_2}(\mbox{id}(\vec{\delta}_1)) ={\rm Rep}_{\mathcal{E}_2}(\vec{\delta}_1)$ and that ${\rm Rep}_{\mathcal{E}_2}(\mbox{id}(\vec{\delta}_2)) ={\rm Rep}_{\mathcal{E}_2}(\vec{\delta}_2)$. Of course, the representation of a vector in $\mathbb{R}^2$ with respect to the standard basis is easy.

${\rm Rep}_{\mathcal{E}_2}(\vec{\delta}_1)=\begin{pmatrix} 2 \\ 1 \end{pmatrix} \qquad {\rm Rep}_{\mathcal{E}_2}(\vec{\delta}_2)=\begin{pmatrix} -2 \\ 4 \end{pmatrix}$

Concatenating those two to make the columns of the change of basis matrix gives this.

${\rm Rep}_{D,\mathcal{E}_2}(\mbox{id}) =\begin{pmatrix} 2 &-2 \\ 1 &4 \end{pmatrix}$

The change of basis matrix in the other direction can be gotten by calculating ${\rm Rep}_{D}(\mbox{id}(\vec{e}_1))={\rm Rep}_{D}(\vec{e}_1)$ and ${\rm Rep}_{D}(\mbox{id}(\vec{e}_2))={\rm Rep}_{D}(\vec{e}_2)$ (this job is routine) or it can be found by taking the inverse of the above matrix. Because of the formula for the inverse of a $2 \! \times \! 2$ matrix, this is easy.

${\rm Rep}_{\mathcal{E}_2,D}(\mbox{id})= \frac{1}{10}\cdot\begin{pmatrix} 4 &2 \\ -1 &2 \end{pmatrix} =\begin{pmatrix} 4/10 &2/10 \\ -1/10 &2/10 \end{pmatrix}$

Multiplying the two matrices gives the identity, as it must, since each is the inverse of the other.
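As a check on both computations, here is a short Python sketch using exact rational arithmetic (the `matmul` helper is ours, not the text's); it confirms that the two change of basis matrices are inverse to each other.

```python
from fractions import Fraction as F

# Change of basis from D to E2: the columns are the D basis vectors.
d_to_e = [[F(2), F(-2)],
          [F(1), F(4)]]

# Inverse via the 2x2 formula: (1/det) * [[d, -b], [-c, a]].
(a, b), (c, d) = d_to_e
det = a * d - b * c
e_to_d = [[d / det, -b / det],
          [-c / det, a / det]]
assert e_to_d == [[F(2, 5), F(1, 5)], [F(-1, 10), F(1, 5)]]

def matmul(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Multiplying the two gives the identity, as it must.
assert matmul(d_to_e, e_to_d) == [[1, 0], [0, 1]]
```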
This exercise is recommended for all readers.
Problem 2

Find the change of basis matrix for $B,D\subseteq\mathbb{R}^2$.

1. $B=\mathcal{E}_2$, $D=\langle \vec{e}_2,\vec{e}_1 \rangle$
2. $B=\mathcal{E}_2$, $D=\langle \begin{pmatrix} 1 \\ 2 \end{pmatrix},\begin{pmatrix} 1 \\ 4 \end{pmatrix} \rangle$
3. $B=\langle \begin{pmatrix} 1 \\ 2 \end{pmatrix},\begin{pmatrix} 1 \\ 4 \end{pmatrix} \rangle$, $D=\mathcal{E}_2$
4. $B=\langle \begin{pmatrix} -1 \\ 1 \end{pmatrix},\begin{pmatrix} 2 \\ 2 \end{pmatrix} \rangle$, $D=\langle \begin{pmatrix} 0 \\ 4 \end{pmatrix},\begin{pmatrix} 1 \\ 3 \end{pmatrix} \rangle$

In each case, the columns ${\rm Rep}_{D}(\mbox{id}(\vec{\beta}_1))={\rm Rep}_{D}(\vec{\beta}_1)$ and ${\rm Rep}_{D}(\mbox{id}(\vec{\beta}_2))={\rm Rep}_{D}(\vec{\beta}_2)$ are concatenated to make the change of basis matrix ${\rm Rep}_{B,D}(\mbox{id})$.

1. $\begin{pmatrix} 0 &1 \\ 1 &0 \end{pmatrix}$
2. $\begin{pmatrix} 2 &-1/2 \\ -1 &1/2 \end{pmatrix}$
3. $\begin{pmatrix} 1 &1 \\ 2 &4 \end{pmatrix}$
4. $\begin{pmatrix} 1 &-1 \\ -1 &2 \end{pmatrix}$
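For instance, the fourth answer can be recomputed with a small Python sketch that finds each ${\rm Rep}_{D}(\vec{\beta}_i)$ by Cramer's rule (the `rep` helper is ours, not the text's).

```python
from fractions import Fraction as F

def rep(d1, d2, v):
    """Coordinates of v with respect to the basis <d1, d2> (Cramer's rule)."""
    det = d1[0] * d2[1] - d2[0] * d1[1]
    c1 = F(v[0] * d2[1] - d2[0] * v[1], det)
    c2 = F(d1[0] * v[1] - v[0] * d1[1], det)
    return (c1, c2)

# Part 4: B = <(-1,1), (2,2)>, D = <(0,4), (1,3)>.
d1, d2 = (0, 4), (1, 3)
assert rep(d1, d2, (-1, 1)) == (1, -1)   # first column
assert rep(d1, d2, (2, 2)) == (-1, 2)    # second column
```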
Problem 3

For the bases in Problem 2, find the change of basis matrix in the other direction, from $D$ to $B$.

One way to go is to find ${\rm Rep}_{B}(\vec{\delta}_1)$ and ${\rm Rep}_{B}(\vec{\delta}_2)$, and then concatenate them into the columns of the desired change of basis matrix. Another way is to find the inverse of the matrices that answer Problem 2.

1. $\begin{pmatrix} 0 &1 \\ 1 &0 \end{pmatrix}$
2. $\begin{pmatrix} 1 &1 \\ 2 &4 \end{pmatrix}$
3. $\begin{pmatrix} 2 &-1/2 \\ -1 &1/2 \end{pmatrix}$
4. $\begin{pmatrix} 2 &1 \\ 1 &1 \end{pmatrix}$
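A quick Python check with exact fractions (the matrix lists below simply mirror the answers) confirms that each matrix here is the inverse of the corresponding answer to Problem 2.

```python
from fractions import Fraction as F

def matmul(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
prob2_answers = [
    [[0, 1], [1, 0]],
    [[2, F(-1, 2)], [-1, F(1, 2)]],
    [[1, 1], [2, 4]],
    [[1, -1], [-1, 2]],
]
prob3_answers = [
    [[0, 1], [1, 0]],
    [[1, 1], [2, 4]],
    [[2, F(-1, 2)], [-1, F(1, 2)]],
    [[2, 1], [1, 1]],
]
for m, m_inv in zip(prob2_answers, prob3_answers):
    assert matmul(m, m_inv) == I and matmul(m_inv, m) == I
```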
This exercise is recommended for all readers.
Problem 4

Find the change of basis matrix for each $B,D\subseteq\mathcal{P}_2$.

1. $B=\langle 1,x,x^2 \rangle , D=\langle x^2,1,x \rangle$
2. $B=\langle 1,x,x^2 \rangle , D=\langle 1,1+x,1+x+x^2 \rangle$
3. $B=\langle 2,2x,x^2 \rangle , D=\langle 1+x^2,1-x^2,x+x^2 \rangle$

The column vector representations ${\rm Rep}_{D}(\mbox{id}(\vec{\beta}_1))={\rm Rep}_{D}(\vec{\beta}_1)$, and ${\rm Rep}_{D}(\mbox{id}(\vec{\beta}_2))={\rm Rep}_{D}(\vec{\beta}_2)$, and ${\rm Rep}_{D}(\mbox{id}(\vec{\beta}_3))={\rm Rep}_{D}(\vec{\beta}_3)$ make the change of basis matrix ${\rm Rep}_{B,D}(\mbox{id})$.

1. $\begin{pmatrix} 0 &0 &1 \\ 1 &0 &0 \\ 0 &1 &0 \end{pmatrix}$
2. $\begin{pmatrix} 1 &-1 &0 \\ 0 &1 &-1 \\ 0 &0 &1 \end{pmatrix}$
3. $\begin{pmatrix} 1 &-1 &1/2 \\ 1 &1 &-1/2 \\ 0 &2 &0 \end{pmatrix}$

E.g., for the first column of the first matrix, $1=0\cdot x^2+1\cdot 1+0\cdot x$.
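These matrices can be verified column by column: applying column $j$ to the $D$ basis must reproduce $\vec{\beta}_j$. Here is a Python sketch for the third part, encoding each polynomial as a coefficient tuple $(1, x, x^2)$ (our own encoding, not the text's).

```python
from fractions import Fraction as F

def lin_comb(coeffs, polys):
    """Linear combination of polynomials given as coefficient tuples (1, x, x^2)."""
    return tuple(sum(c * p[k] for c, p in zip(coeffs, polys)) for k in range(3))

# Part 3: B = <2, 2x, x^2>, D = <1+x^2, 1-x^2, x+x^2>.
B = [(2, 0, 0), (0, 2, 0), (0, 0, 1)]
D = [(1, 0, 1), (1, 0, -1), (0, 1, 1)]
M = [[1, -1, F(1, 2)],
     [1, 1, F(-1, 2)],
     [0, 2, 0]]

# Column j of M holds Rep_D(beta_j), so combining D with that column gives beta_j.
for j, beta in enumerate(B):
    col = [M[i][j] for i in range(3)]
    assert lin_comb(col, D) == beta
```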

This exercise is recommended for all readers.
Problem 5

Decide if each changes bases on $\mathbb{R}^2$. To what basis is $\mathcal{E}_2$ changed?

1. $\begin{pmatrix} 5 &0 \\ 0 &4 \end{pmatrix}$
2. $\begin{pmatrix} 2 &1 \\ 3 &1 \end{pmatrix}$
3. $\begin{pmatrix} -1 &4 \\ 2 &-8 \end{pmatrix}$
4. $\begin{pmatrix} 1 &-1 \\ 1 &1 \end{pmatrix}$

A matrix changes bases if and only if it is nonsingular.

1. This matrix is nonsingular and so changes bases. Finding to what basis $\mathcal{E}_2$ is changed means finding $D$ such that
${\rm Rep}_{\mathcal{E}_2,D}(\mbox{id})= \begin{pmatrix} 5 &0 \\ 0 &4 \end{pmatrix}$
and by the definition of how a matrix represents a linear map, we have this.
${\rm Rep}_{D}(\mbox{id}(\vec{e}_1))={\rm Rep}_{D}(\vec{e}_1)=\begin{pmatrix} 5 \\ 0 \end{pmatrix} \qquad {\rm Rep}_{D}(\mbox{id}(\vec{e}_2))={\rm Rep}_{D}(\vec{e}_2)=\begin{pmatrix} 0 \\ 4 \end{pmatrix}$
Where
$D=\langle \begin{pmatrix} x_1 \\ y_1 \end{pmatrix},\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \rangle$
we can either solve the system
$\begin{pmatrix} 1 \\ 0 \end{pmatrix} =5\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}+0\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} =0\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}+4\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$
or else just spot the answer (thinking of the proof of Lemma 1.4).
$D=\langle \begin{pmatrix} 1/5 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1/4 \end{pmatrix} \rangle$
2. Yes, this matrix is nonsingular and so changes bases. To calculate $D$, we proceed as above with
$D=\langle \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}, \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \rangle$
to solve
$\begin{pmatrix} 1 \\ 0 \end{pmatrix} =2\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}+3\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 \\ 1 \end{pmatrix} =1\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}+1\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$
and get this.
$D=\langle \begin{pmatrix} -1 \\ 3 \end{pmatrix}, \begin{pmatrix} 1 \\ -2 \end{pmatrix} \rangle$
3. No, this matrix does not change bases because it is singular.
4. Yes, this matrix changes bases because it is nonsingular. The calculation of the changed-to basis is as above.
$D=\langle \begin{pmatrix} 1/2 \\ -1/2 \end{pmatrix}, \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} \rangle$
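In each nonsingular case the changed-to basis can also be read off mechanically: the columns of the inverse matrix are the vectors of $D$. Here is a Python sketch for the second part (the `inv2` helper is ours, not the text's).

```python
from fractions import Fraction as F

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# Part 2: H = Rep_{E2,D}(id).  The columns of H^{-1} = Rep_{D,E2}(id)
# are the vectors of the changed-to basis D.
H = [[2, 1], [3, 1]]
H_inv = inv2(H)
assert (H_inv[0][0], H_inv[1][0]) == (-1, 3)   # delta_1
assert (H_inv[0][1], H_inv[1][1]) == (1, -2)   # delta_2
```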
Problem 6

Find bases such that this matrix represents the identity map with respect to those bases.

$\begin{pmatrix} 3 &1 &4 \\ 2 &-1 &1 \\ 0 &0 &4 \end{pmatrix}$

This question has many different solutions. One way to proceed is to make up any basis $B$ for any space, and then compute the appropriate $D$ (necessarily for the same space, of course). Another, easier, way to proceed is to fix the codomain as $\mathbb{R}^3$ and the codomain basis as $\mathcal{E}_3$. This way (recall that the representation of any vector with respect to the standard basis is just the vector itself), we have this.

$B=\langle \begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 4 \\ 1 \\ 4 \end{pmatrix} \rangle \qquad D=\mathcal{E}_3$
Problem 7

Consider the vector space of real-valued functions with basis $\langle \sin(x),\cos(x) \rangle$. Show that $\langle 2\sin(x)+\cos(x),3\cos(x) \rangle$ is also a basis for this space. Find the change of basis matrix in each direction.

Checking that $B=\langle 2\sin(x)+\cos(x),3\cos(x) \rangle$ is a basis is routine. Call the natural basis $D$. To compute the change of basis matrix ${\rm Rep}_{B,D}(\mbox{id})$ we must find ${\rm Rep}_{D}(2\sin(x)+\cos(x))$ and ${\rm Rep}_{D}(3\cos(x))$, that is, we need $x_1,y_1, x_2,y_2$ such that these equations hold.

$\begin{array}{rl} x_1\cdot \sin(x)+y_1\cdot\cos(x) &= 2\sin(x)+\cos(x) \\ x_2\cdot \sin(x)+y_2\cdot\cos(x) &= 3\cos(x) \end{array}$

${\rm Rep}_{B,D}(\mbox{id}) =\begin{pmatrix} 2 &0 \\ 1 &3 \end{pmatrix}$

For the change of basis matrix in the other direction we could look for ${\rm Rep}_{B}(\sin(x))$ and ${\rm Rep}_{B}(\cos(x))$ by solving these.

$\begin{array}{rl} w_1\cdot (2\sin(x)+\cos(x))+z_1\cdot(3\cos(x)) &= \sin(x) \\ w_2\cdot (2\sin(x)+\cos(x))+z_2\cdot(3\cos(x)) &= \cos(x) \end{array}$

An easier method is to find the inverse of the matrix found above.

${\rm Rep}_{D,B}(\mbox{id}) =\begin{pmatrix} 2 &0 \\ 1 &3 \end{pmatrix}^{-1} =\frac{1}{6}\cdot\begin{pmatrix} 3 &0 \\ -1 &2 \end{pmatrix} =\begin{pmatrix} 1/2 &0 \\ -1/6 &1/3 \end{pmatrix}$
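Either route gives the same matrix. A short Python check equates the $\sin(x)$ and $\cos(x)$ coefficients directly (the `rep_B` helper is ours, not the text's).

```python
from fractions import Fraction as F

# Matching the sin(x) and cos(x) coefficients in
#   w*(2 sin(x) + cos(x)) + z*(3 cos(x)) = s*sin(x) + c*cos(x)
# gives the system  2w = s,  w + 3z = c.
def rep_B(s, c):
    w = F(s, 2)
    z = (c - w) / 3
    return (w, z)

assert rep_B(1, 0) == (F(1, 2), F(-1, 6))   # Rep_B(sin(x)), first column
assert rep_B(0, 1) == (0, F(1, 3))          # Rep_B(cos(x)), second column
```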
Problem 8

Where does this matrix

$\begin{pmatrix} \cos(2\theta) &\sin(2\theta) \\ \sin(2\theta) &-\cos(2\theta) \end{pmatrix}$

send the standard basis for $\mathbb{R}^2$? Any other bases? Hint. Consider the inverse.

We start by taking the inverse of the matrix, that is, by finding the inverse of the map of interest.

${\rm Rep}_{D,\mathcal{E}_2}(\mbox{id}) ={\rm Rep}_{\mathcal{E}_2,D}(\mbox{id})^{-1} =\frac{1}{-\cos^2(2\theta)-\sin^2(2\theta)}\cdot\begin{pmatrix} -\cos(2\theta) &-\sin(2\theta) \\ -\sin(2\theta) &\cos(2\theta) \end{pmatrix} =\begin{pmatrix} \cos(2\theta) &\sin(2\theta) \\ \sin(2\theta) &-\cos(2\theta) \end{pmatrix}$

This is more tractable than the representation the other way because this matrix is the concatenation of these two column vectors

${\rm Rep}_{\mathcal{E}_2}(\vec{\delta}_1) =\begin{pmatrix} \cos(2\theta) \\ \sin(2\theta) \end{pmatrix} \qquad {\rm Rep}_{\mathcal{E}_2}(\vec{\delta}_2) =\begin{pmatrix} \sin(2\theta) \\ -\cos(2\theta) \end{pmatrix}$

and representations with respect to $\mathcal{E}_2$ are transparent.

$\vec{\delta}_1=\begin{pmatrix} \cos(2\theta) \\ \sin(2\theta) \end{pmatrix} \qquad \vec{\delta}_2=\begin{pmatrix} \sin(2\theta) \\ -\cos(2\theta) \end{pmatrix}$

The action of the map that transforms $D$ to $\mathcal{E}_2$ (it is, again, the inverse of the map that is the answer to this question) can be pictured: $\vec{\delta}_1$ lies at an angle $2\theta$ to the $x$ axis, and the line that bisects it and $\vec{e}_1$ lies at an angle $\theta$ to the $x$ axis.

This map reflects vectors over that line. Since reflections are self-inverse, the answer to the question is: the original map reflects about the line through the origin with angle of elevation $\theta$. (Of course, it does this to any basis.)
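A numerical Python check, for one sample angle (the names here are ours), confirms both claims: the matrix is self-inverse, and it fixes vectors on the line at angle $\theta$.

```python
import math

def reflect(theta):
    """Matrix reflecting R^2 vectors over the line at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = math.pi / 6
R = reflect(theta)
P = matmul(R, R)   # should be (numerically) the identity
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

# A vector on the line is fixed by the reflection.
v = (math.cos(theta), math.sin(theta))
w = (R[0][0] * v[0] + R[0][1] * v[1], R[1][0] * v[0] + R[1][1] * v[1])
assert abs(w[0] - v[0]) < 1e-12 and abs(w[1] - v[1]) < 1e-12
```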

This exercise is recommended for all readers.
Problem 9

What is the change of basis matrix with respect to $B,B$?

The appropriately-sized identity matrix.

Problem 10

Prove that a matrix changes bases if and only if it is invertible.

Each is true if and only if the matrix is nonsingular.

Problem 11

Finish the proof of Lemma 1.4.

What remains to be shown is that left multiplication by a reduction matrix represents a change from another basis to $B=\langle \vec{\beta}_1,\ldots,\vec{\beta}_{n}\rangle$.

Application of a row-multiplication matrix $M_i(k)$ translates a representation with respect to the basis $\langle \vec{\beta}_1,\dots,k\vec{\beta}_i,\dots,\vec{\beta}_n \rangle$ to one with respect to $B$, as here.

$\vec{v}=c_1\cdot\vec{\beta}_1+\dots+c_i\cdot(k\vec{\beta}_i) +\dots+c_n\cdot\vec{\beta}_n \;\mapsto\; c_1\cdot\vec{\beta}_1+\dots +(kc_i)\cdot\vec{\beta}_i+\dots+c_n\cdot\vec{\beta}_n=\vec{v}$

Applying a row-swap matrix $P_{i,j}$ translates a representation with respect to $\langle \vec{\beta}_1,\dots,\vec{\beta}_j,\dots, \vec{\beta}_i,\dots,\vec{\beta}_n \rangle$ to one with respect to $\langle \vec{\beta}_1,\dots,\vec{\beta}_i,\dots, \vec{\beta}_j,\dots,\vec{\beta}_n \rangle$. Finally, applying a row-combination matrix $C_{i,j}(k)$ changes a representation with respect to $\langle \vec{\beta}_1,\dots,\vec{\beta}_i+k\vec{\beta}_j,\dots, \vec{\beta}_j,\dots,\vec{\beta}_n \rangle$ to one with respect to $B$.

$\vec{v}= c_1\cdot\vec{\beta}_1+\dots +c_i\cdot(\vec{\beta}_i+k\vec{\beta}_j) +\dots+c_j\vec{\beta}_j+\dots+c_n\cdot\vec{\beta}_n$
$\mapsto\; c_1\cdot\vec{\beta}_1+\dots+c_i\cdot\vec{\beta}_i +\dots+(kc_i+c_j)\cdot\vec{\beta}_j +\dots+c_n\cdot\vec{\beta}_n=\vec{v}$

(As in the part of the proof in the body of this subsection, the various conditions on the row operations, e.g., that the scalar $k$ is nonzero, assure that these are all bases.)
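A concrete instance of the row-multiplication case (our own small example in $\mathbb{R}^2$, not from the text): a vector's representation with respect to $\langle k\vec{\beta}_1,\vec{\beta}_2\rangle$, left-multiplied by $M_1(k)$, becomes its representation with respect to $\langle\vec{\beta}_1,\vec{\beta}_2\rangle$.

```python
from fractions import Fraction as F

# Take B = <b1, b2> in R^2 and a vector v = 2*b1 + 5*b2.
b1, b2 = (1, 1), (1, -1)
v = (2 * b1[0] + 5 * b2[0], 2 * b1[1] + 5 * b2[1])

# With respect to <k*b1, b2> the same v has representation (2/k, 5).
k = F(3)
rep_scaled = (F(2) / k, F(5))

# Left-multiplying by the row-multiplication matrix M_1(k) = [[k,0],[0,1]]
# recovers the representation with respect to B itself.
rep_B = (k * rep_scaled[0], rep_scaled[1])
assert rep_B == (2, 5)
assert (rep_B[0] * b1[0] + rep_B[1] * b2[0],
        rep_B[0] * b1[1] + rep_B[1] * b2[1]) == v
```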

This exercise is recommended for all readers.
Problem 12

Let $H$ be an $n \! \times \! n$ nonsingular matrix. What basis of $\mathbb{R}^n$ does $H$ change to the standard basis?

Taking $H$ as a change of basis matrix $H={\rm Rep}_{B,\mathcal{E}_n}(\mbox{id})$, its columns are

$\begin{pmatrix} h_{1,i} \\ \vdots \\ h_{n,i} \end{pmatrix} ={\rm Rep}_{\mathcal{E}_n}(\mbox{id}(\vec{\beta}_i)) ={\rm Rep}_{\mathcal{E}_n}(\vec{\beta}_i)$

and, because representations with respect to the standard basis are transparent, we have this.

$\begin{pmatrix} h_{1,i} \\ \vdots \\ h_{n,i} \end{pmatrix} =\vec{\beta}_i$

That is, the basis is the one composed of the columns of $H$.

This exercise is recommended for all readers.
Problem 13
1. In $\mathcal{P}_3$ with basis $B=\langle 1+x,1-x,x^2+x^3,x^2-x^3 \rangle$ we have this representation.
${\rm Rep}_{B}(1-x+3x^2-x^3)= \begin{pmatrix} 0 \\ 1 \\ 1 \\ 2 \end{pmatrix}_B$
Find a basis $D$ giving this different representation for the same polynomial.
${\rm Rep}_{D}(1-x+3x^2-x^3)= \begin{pmatrix} 1 \\ 0 \\ 2 \\ 0 \end{pmatrix}_D$
2. State and prove that any nonzero vector representation can be changed to any other.

Hint. The proof of Lemma 1.4 is constructive— it not only says the bases change, it shows how they change.

1. We can change the starting vector representation to the ending one through a sequence of row operations. The proof tells us how the bases change. We start by swapping the first and second rows of the representation with respect to $B$ to get a representation with respect to a new basis $B_1$.
${\rm Rep}_{B_1}(1-x+3x^2-x^3)= \begin{pmatrix} 1 \\ 0 \\ 1 \\ 2 \end{pmatrix}_{B_1} \qquad B_1=\langle 1-x,1+x,x^2+x^3,x^2-x^3 \rangle$
We next add $-2$ times the third row of the vector representation to the fourth row.
${\rm Rep}_{B_2}(1-x+3x^2-x^3)= \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}_{B_2} \qquad B_2=\langle 1-x,1+x,3x^2-x^3,x^2-x^3 \rangle$
(The third element of $B_2$ is the third element of $B_1$ plus $2$ times the fourth element of $B_1$.) Now we can finish by doubling the third row.
${\rm Rep}_{D}(1-x+3x^2-x^3)= \begin{pmatrix} 1 \\ 0 \\ 2 \\ 0 \end{pmatrix}_{D} \qquad D=\langle 1-x,1+x,(3x^2-x^3)/2,x^2-x^3 \rangle$
2. Here are three different approaches to stating such a result. The first is the assertion: where $V$ is a vector space with basis $B$ and $\vec{v}\in V$ is nonzero, for any nonzero column vector $\vec{z}$ (whose number of components equals the dimension of $V$) there is a change of basis matrix $M$ such that $M\cdot {\rm Rep}_{B}(\vec{v})=\vec{z}$. The second possible statement: for any ($n$-dimensional) vector space $V$ and any nonzero vector $\vec{v}\in V$, where $\vec{z}_1, \vec{z}_2\in\mathbb{R}^n$ are nonzero, there are bases $B, D\subset V$ such that ${\rm Rep}_{B}(\vec{v})=\vec{z}_1$ and ${\rm Rep}_{D}(\vec{v})=\vec{z}_2$. The third is: for any nonzero vector $\vec{v}$ in any vector space (of dimension $n$) and any nonzero column vector (with $n$ components) there is a basis such that $\vec{v}$ is represented with respect to that basis by that column vector.

The first and second statements follow easily from the third. The first follows because the third statement gives a basis $D$ such that ${\rm Rep}_{D}(\vec{v})=\vec{z}$ and then ${\rm Rep}_{B,D}(\mbox{id})$ is the desired $M$. The second follows from the third because it is just a doubled application of it.

A way to prove the third is as in the answer to the first part of this question. Here is a sketch. Represent $\vec{v}$ with respect to any basis $B$ with a column vector $\vec{z}_1$. This column vector must have a nonzero component because $\vec{v}$ is a nonzero vector. Use that component in a sequence of row operations to convert $\vec{z}_1$ to $\vec{z}$. (This sketch could be filled out as an induction argument on the dimension of $V$.)
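The chain of representations in the first part can be verified in Python, encoding each polynomial as a coefficient tuple $(1, x, x^2, x^3)$ (our own encoding, not the text's).

```python
from fractions import Fraction as F

def lin_comb(coeffs, polys):
    """Combine polynomials given as coefficient tuples (1, x, x^2, x^3)."""
    return tuple(sum(c * p[k] for c, p in zip(coeffs, polys)) for k in range(4))

v = (1, -1, 3, -1)   # 1 - x + 3x^2 - x^3

B  = [(1, 1, 0, 0), (1, -1, 0, 0), (0, 0, 1, 1), (0, 0, 1, -1)]
B1 = [(1, -1, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (0, 0, 1, -1)]
B2 = [(1, -1, 0, 0), (1, 1, 0, 0), (0, 0, 3, -1), (0, 0, 1, -1)]
D  = [(1, -1, 0, 0), (1, 1, 0, 0), (0, 0, F(3, 2), F(-1, 2)), (0, 0, 1, -1)]

# Each representation reconstructs the same polynomial v.
assert lin_comb((0, 1, 1, 2), B) == v
assert lin_comb((1, 0, 1, 2), B1) == v
assert lin_comb((1, 0, 1, 0), B2) == v
assert lin_comb((1, 0, 2, 0), D) == v
```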
Problem 14

Let $V,W$ be vector spaces, and let $B,\hat{B}$ be bases for $V$ and $D,\hat{D}$ be bases for $W$. Where $h:V\to W$ is linear, find a formula relating ${\rm Rep}_{B,D}(h)$ to ${\rm Rep}_{\hat{B},\hat{D}}(h)$.

This is the topic of the next subsection.

This exercise is recommended for all readers.
Problem 15

Show that the columns of an $n \! \times \! n$ change of basis matrix form a basis for $\mathbb{R}^n$. Do all bases appear in that way: can the vectors from any $\mathbb{R}^n$ basis make the columns of a change of basis matrix?

A change of basis matrix is nonsingular and thus has rank equal to the number of its columns. Therefore its set of columns is a linearly independent subset of size $n$ in $\mathbb{R}^n$ and it is thus a basis. The answer to the second half is also "yes"; all implications in the prior sentence reverse (that is, all of the "if ... then ..." parts of the prior sentence convert to "if and only if" parts).

This exercise is recommended for all readers.
Problem 16

Find a matrix having this effect.

$\begin{pmatrix} 1 \\ 3 \end{pmatrix} \;\mapsto\; \begin{pmatrix} 4 \\ -1 \end{pmatrix}$

That is, find a matrix $M$ that left-multiplies the starting vector to yield the ending vector. Is there a matrix having these two effects?

1. $\begin{pmatrix} 1 \\ 3 \end{pmatrix}\mapsto\begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad \begin{pmatrix} 2 \\ -1 \end{pmatrix}\mapsto\begin{pmatrix} -1 \\ -1 \end{pmatrix}$
2. $\begin{pmatrix} 1 \\ 3 \end{pmatrix}\mapsto\begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad \begin{pmatrix} 2 \\ 6 \end{pmatrix}\mapsto\begin{pmatrix} -1 \\ -1 \end{pmatrix}$

Give a necessary and sufficient condition for there to be a matrix such that $\vec{v}_1\mapsto\vec{w}_1$ and $\vec{v}_2\mapsto\vec{w}_2$.

In response to the first half of the question, there are infinitely many such matrices. One of them represents with respect to $\mathcal{E}_2$ the transformation of $\mathbb{R}^2$ with this action.

$\begin{pmatrix} 1 \\ 0 \end{pmatrix}\mapsto\begin{pmatrix} 4 \\ 0 \end{pmatrix} \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix}\mapsto\begin{pmatrix} 0 \\ -1/3 \end{pmatrix}$

The problem of specifying two distinct input/output pairs is a bit trickier. The fact that matrices have a linear action precludes some possibilities.

1. Yes, there is such a matrix. These conditions
$\begin{pmatrix} a &b \\ c &d \end{pmatrix} \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad \begin{pmatrix} a &b \\ c &d \end{pmatrix} \begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} -1 \\ -1 \end{pmatrix}$
can be solved
$\begin{array}{*{4}{rc}r} a &+ &3b & & & & &= &1 \\ & & & &c &+ &3d &= &1 \\ 2a &- &b & & & & &= &-1 \\ & & & &2c &- &d &= &-1 \end{array}$
to give this matrix.
$\begin{pmatrix} -2/7 &3/7 \\ -2/7 &3/7 \end{pmatrix}$
2. No, because
$2\cdot\begin{pmatrix} 1 \\ 3 \end{pmatrix}=\begin{pmatrix} 2 \\ 6 \end{pmatrix} \quad\text{but}\quad 2\cdot\begin{pmatrix} 1 \\ 1 \end{pmatrix}\neq\begin{pmatrix} -1 \\ -1 \end{pmatrix}$
no linear action can produce this effect.
3. A sufficient condition is that $\{\vec{v}_1,\vec{v}_2\}$ be linearly independent, but that's not a necessary condition. A necessary and sufficient condition is that any linear dependences among the starting vectors appear also among the ending vectors. That is,
$c_1\vec{v}_1+c_2\vec{v}_2=\vec{0} \quad\text{implies}\quad c_1\vec{w}_1+c_2\vec{w}_2=\vec{0}.$
The proof of this condition is routine.
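The whole discussion can be condensed into a Python sketch (the `solve2` helper is ours): when $\vec{v}_1,\vec{v}_2$ are independent the matrix is $[\vec{w}_1|\vec{w}_2]\,[\vec{v}_1|\vec{v}_2]^{-1}$, and when they are dependent no matrix exists unless the same dependence holds among the $\vec{w}$'s.

```python
from fractions import Fraction as F

def solve2(v1, v2, w1, w2):
    """Matrix M with M v1 = w1 and M v2 = w2, for independent v1, v2.

    Returns None when v1, v2 are dependent; in that case a matrix exists
    only if the same dependence also holds among w1, w2."""
    det = v1[0] * v2[1] - v2[0] * v1[1]
    if det == 0:
        return None
    # M = [w1 | w2] * [v1 | v2]^{-1}
    v_inv = [[F(v2[1], det), F(-v2[0], det)],
             [F(-v1[1], det), F(v1[0], det)]]
    W = [[w1[0], w2[0]], [w1[1], w2[1]]]
    return [[sum(W[i][k] * v_inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Part 1: the matrix found above.
assert solve2((1, 3), (2, -1), (1, 1), (-1, -1)) == \
    [[F(-2, 7), F(3, 7)], [F(-2, 7), F(3, 7)]]

# Part 2: (2,6) = 2*(1,3) but (-1,-1) != 2*(1,1), so no matrix exists.
assert solve2((1, 3), (2, 6), (1, 1), (-1, -1)) is None
```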