# Linear Algebra/Definition and Examples of Vector Spaces/Solutions

## Solutions

Problem 1

Name the zero vector for each of these vector spaces.

1. The space of degree three polynomials under the natural operations
2. The space of $2 \! \times \! 4$ matrices
3. The space $\{f:[0,1]\to\mathbb{R}\,\big|\, f\text{ is continuous}\}$
4. The space of real-valued functions of one natural number variable
1. $0+0x+0x^2+0x^3$
2. $\begin{pmatrix} 0 &0 &0 &0 \\ 0 &0 &0 &0 \end{pmatrix}$
3. The constant function $f(x)=0$
4. The constant function $f(n)=0$
This exercise is recommended for all readers.
Problem 2

Find the additive inverse, in the vector space, of the vector.

1. In $\mathcal{P}_3$, the vector $-3-2x+x^2$.
2. In the space of $2 \! \times \! 2$ matrices,
$\begin{pmatrix} 1 &-1 \\ 0 &3 \end{pmatrix}.$
3. In $\{ae^x+be^{-x}\,\big|\, a,b\in\mathbb{R}\}$, the space of functions of the real variable $x$ under the natural operations, the vector $3e^x-2e^{-x}$.
1. $3+2x-x^2$
2. $\begin{pmatrix} -1 &1 \\ 0 &-3 \end{pmatrix}$
3. $-3e^x+2e^{-x}$
This exercise is recommended for all readers.
Problem 3

Show that each of these is a vector space.

1. The set of linear polynomials $\mathcal{P}_1=\{a_0+a_1x\,\big|\, a_0,a_1\in\mathbb{R}\}$ under the usual polynomial addition and scalar multiplication operations.
2. The set of $2 \! \times \! 2$ matrices with real entries under the usual matrix operations.
3. The set of three-component row vectors with their usual operations.
4. The set
$L=\{\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}\in\mathbb{R}^4\,\big|\, x+y-z+w=0\}$
under the operations inherited from $\mathbb{R}^4$.

Most of the conditions are easy to check; use Example 1.3 as a guide. Here are some comments.

1. This is just like Example 1.3; the zero element is $0+0x$.
2. The zero element of this space is the $2 \! \times \! 2$ matrix of zeroes.
3. The zero element is the vector of zeroes.
4. Closure of addition involves noting that the sum
$\begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ w_1 \end{pmatrix} +\begin{pmatrix} x_2 \\ y_2 \\ z_2 \\ w_2 \end{pmatrix} = \begin{pmatrix} x_1+x_2 \\ y_1+y_2 \\ z_1+z_2 \\ w_1+w_2 \end{pmatrix}$
is in $L$ because $(x_1+x_2)+(y_1+y_2)-(z_1+z_2)+(w_1+w_2) =(x_1+y_1-z_1+w_1)+(x_2+y_2-z_2+w_2)=0+0$. Closure of scalar multiplication is similar. Note that the zero element, the vector of zeroes, is in $L$.
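The closure computation above is easy to spot-check numerically. The following Python sketch (the helper names `in_L`, `add`, and `scale` are ours, not the book's) verifies closure for a couple of sample members of $L$:

```python
# Numerical sanity check (not part of the proof): members of
# L = {(x, y, z, w) : x + y - z + w = 0} stay in L under the
# operations inherited from R^4.
def in_L(v):
    x, y, z, w = v
    return abs(x + y - z + w) < 1e-9

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(r, v):
    return tuple(r * a for a in v)

u = (1.0, 2.0, 3.0, 0.0)   # 1 + 2 - 3 + 0 = 0
v = (0.5, -0.5, 1.0, 1.0)  # 0.5 - 0.5 - 1 + 1 = 0
assert in_L(u) and in_L(v)
assert in_L(add(u, v))             # closure under addition
assert in_L(scale(-3.0, u))        # closure under scalar multiplication
assert in_L((0.0, 0.0, 0.0, 0.0))  # the zero vector is in L
```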
This exercise is recommended for all readers.
Problem 4

Show that each of these is not a vector space. (Hint. Start by listing two members of each set.)

1. Under the operations inherited from $\mathbb{R}^3$, this set
$\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\in\mathbb{R}^3\,\big|\, x+y+z=1\}$
2. Under the operations inherited from $\mathbb{R}^3$, this set
$\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\in\mathbb{R}^3\,\big|\, x^2+y^2+z^2=1\}$
3. Under the usual matrix operations,
$\{\begin{pmatrix} a &1 \\ b &c \end{pmatrix} \,\big|\, a,b,c\in\mathbb{R}\}$
4. Under the usual polynomial operations,
$\{a_0+a_1x+a_2x^2\,\big|\, a_0,a_1,a_2\in\mathbb{R}^+\}$
where $\mathbb{R}^+$ is the set of reals greater than zero
5. Under the inherited operations,
$\{\begin{pmatrix} x \\ y \end{pmatrix}\in\mathbb{R}^2\,\big|\, x+3y=4 \text{ and } 2x-y=3 \text{ and } 6x+4y=10\}$

In each item the set is called $Q$. For some items, there are other correct ways to show that $Q$ is not a vector space.

1. It is not closed under addition; it fails to meet condition 1.
$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\in Q \qquad \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}\not\in Q$
2. It is not closed under addition.
$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\in Q \qquad \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}\not\in Q$
3. It is not closed under addition.
$\begin{pmatrix} 0 &1 \\ 0 &0 \end{pmatrix}, \, \begin{pmatrix} 1 &1 \\ 0 &0 \end{pmatrix}\in Q \qquad \begin{pmatrix} 1 &2 \\ 0 &0 \end{pmatrix}\not\in Q$
4. It is not closed under scalar multiplication.
$1+1x+1x^2\in Q \qquad -1\cdot(1+1x+1x^2)\not\in Q$
5. It is empty: the first two equations force $x=13/7$ and $y=5/7$, and those values do not satisfy the third. Having no elements, it has no zero vector, violating condition 4.
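The counterexamples above can be confirmed mechanically. In this Python sketch (the membership tests `in_Q1` and `in_Q2` are our own names) the closure failures of items 1, 2, and 4 are checked directly:

```python
# Mechanical confirmation of the counterexamples (illustrations, not proofs).
e1, e2 = (1, 0, 0), (0, 1, 0)
s = tuple(a + b for a, b in zip(e1, e2))  # the sum (1, 1, 0)

def in_Q1(v):
    # item 1: x + y + z = 1
    return sum(v) == 1

assert in_Q1(e1) and in_Q1(e2) and not in_Q1(s)

def in_Q2(v):
    # item 2: x^2 + y^2 + z^2 = 1
    return sum(a * a for a in v) == 1

assert in_Q2(e1) and in_Q2(e2) and not in_Q2(s)

# item 4: all-positive coefficients are not preserved by scaling by -1
coeffs = (1, 1, 1)
assert all(c > 0 for c in coeffs)
assert not all(-1 * c > 0 for c in coeffs)
```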
Problem 5

Define addition and scalar multiplication operations to make the complex numbers a vector space over $\mathbb{R}$.

The usual operations $(v_0+v_1i)+(w_0+w_1i)=(v_0+w_0)+(v_1+w_1)i$ and $r(v_0+v_1i)=(rv_0)+(rv_1)i$ suffice. The check is easy.

This exercise is recommended for all readers.
Problem 6

Is the set of rational numbers a vector space over $\mathbb{R}$ under the usual addition and scalar multiplication operations?

No, it is not closed under scalar multiplication since, e.g., $\pi\cdot 1$ is not a rational number.

Problem 7

Show that the set of linear combinations of the variables $x,y,z$ is a vector space under the natural addition and scalar multiplication operations.

The natural operations are $(v_1x+v_2y+v_3z)+(w_1x+w_2y+w_3z)=(v_1+w_1)x+(v_2+w_2)y+(v_3+w_3)z$ and $r\cdot(v_1x+v_2y+v_3z)=(rv_1)x+(rv_2)y+(rv_3)z$. The check that this is a vector space is easy; use Example 1.3 as a guide.

Problem 8

Prove that this is not a vector space: the set of two-tall column vectors with real entries subject to these operations.

$\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} +\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} =\begin{pmatrix} x_1-x_2 \\ y_1-y_2 \end{pmatrix} \qquad r\cdot\begin{pmatrix} x \\ y \end{pmatrix} =\begin{pmatrix} rx \\ ry \end{pmatrix}$

The "$+$" operation is not commutative (that is, condition 2 is not met); for instance, $\begin{pmatrix} 1 \\ 0 \end{pmatrix}+\begin{pmatrix} 0 \\ 0 \end{pmatrix}=\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ while $\begin{pmatrix} 0 \\ 0 \end{pmatrix}+\begin{pmatrix} 1 \\ 0 \end{pmatrix}=\begin{pmatrix} -1 \\ 0 \end{pmatrix}$.

Problem 9

Prove or disprove that $\mathbb{R}^3$ is a vector space under these operations.

1. $\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} +\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \quad\text{and}\quad r\begin{pmatrix} x \\ y \\ z \end{pmatrix} =\begin{pmatrix} rx \\ ry \\ rz \end{pmatrix}$
2. $\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} +\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \quad\text{and}\quad r\begin{pmatrix} x \\ y \\ z \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$
1. It is not a vector space.
$(1+1)\cdot\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\neq \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} +\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$
2. It is not a vector space.
$1\cdot\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\neq\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$
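Both counterexamples are easy to confirm mechanically. In this Python sketch the function names `add_a`, `scale_a`, and `scale_b` are our own encodings of the two pairs of operations:

```python
# Confirming the two counterexamples of Problem 9 (an illustration).
def add_a(u, v):
    # items 1 and 2: every sum is defined to be the zero column
    return (0, 0, 0)

def scale_a(r, v):
    # item 1: the usual scalar multiplication
    return tuple(r * c for c in v)

def scale_b(r, v):
    # item 2: every scalar multiple is the zero column
    return (0, 0, 0)

v = (1, 0, 0)
assert scale_a(1 + 1, v) != add_a(v, v)  # condition 7 fails in item 1
assert scale_b(1, v) != v                # condition 10 fails in item 2
```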
This exercise is recommended for all readers.
Problem 10

For each, decide if it is a vector space; the intended operations are the natural ones.

1. The diagonal $2 \! \times \! 2$ matrices
$\{\begin{pmatrix} a &0 \\ 0 &b \end{pmatrix}\,\big|\, a,b\in\mathbb{R}\}$
2. This set of $2 \! \times \! 2$ matrices
$\{\begin{pmatrix} x &x+y \\ x+y &y \end{pmatrix}\,\big|\, x,y\in\mathbb{R}\}$
3. This set
$\{\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}\in\mathbb{R}^4 \,\big|\, x+y+w=1\}$
4. The set of functions $\{f:\mathbb{R}\to\mathbb{R}\,\big|\, df/dx+2f=0\}$
5. The set of functions $\{f:\mathbb{R}\to\mathbb{R}\,\big|\, df/dx+2f=1\}$

For each "yes" answer, you must give a check of all the conditions given in the definition of a vector space. For each "no" answer, give a specific example of the failure of one of the conditions.

1. Yes.
2. Yes.
3. No, it is not closed under addition. The vector with every component $1/3$ satisfies $x+y+w=1$ and so is a member, but its sum with itself has $x+y+w=2$ and so is not.
4. Yes.
5. No, $f(x)=e^{-2x}+(1/2)$ is in the set but $2\cdot f$ is not (that is, condition 6 fails).
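Item 5's counterexample can be verified directly, since the derivative $f'(x)=-2e^{-2x}$ is known in closed form. This Python sketch (the helper names are ours) evaluates the residual $df/dx+2f$ at a few points:

```python
# Direct verification of item 5's counterexample: f is in the set
# {f : f' + 2f = 1} but the scalar multiple 2·f is not.
import math

def f(x):
    return math.exp(-2 * x) + 0.5

def fprime(x):
    return -2 * math.exp(-2 * x)

def residual(func, dfunc, x):
    # the left-hand side of df/dx + 2f
    return dfunc(x) + 2 * func(x)

g = lambda x: 2 * f(x)            # the scalar multiple 2·f
gprime = lambda x: 2 * fprime(x)

for x in (-1.0, 0.0, 2.0):
    assert abs(residual(f, fprime, x) - 1) < 1e-9  # f solves f' + 2f = 1
    assert abs(residual(g, gprime, x) - 2) < 1e-9  # 2f solves g' + 2g = 2, not 1
```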
This exercise is recommended for all readers.
Problem 11

Prove or disprove that this is a vector space: the real-valued functions $f$ of one real variable such that $f(7)=0$.

It is a vector space. Most conditions of the definition of vector space are routine; we here check only closure. For addition, $(f_1+f_2)\,(7)=f_1(7)+f_2(7)=0+0=0$. For scalar multiplication, $(r\cdot f)\,(7)=rf(7)=r0=0$.

This exercise is recommended for all readers.
Problem 12

Show that the set $\mathbb{R}^+$ of positive reals is a vector space when "$x+y$" is interpreted to mean the product of $x$ and $y$ (so that $2+3$ is $6$), and "$r\cdot x$" is interpreted as the $r$-th power of $x$.

We check Definition 1.1.

First, closure under "$+$" holds because the product of two positive reals is a positive real. The second condition is satisfied because real multiplication commutes. Similarly, as real multiplication associates, the third condition holds. For the fourth condition, the zero vector is the real number $1$, since multiplying a positive real by $1$ does not change it. Fifth, the additive inverse of any positive real $v$ is its reciprocal $1/v$, which is a positive real.

The sixth, closure under "$\cdot$", holds because any power of a positive real is a positive real. The seventh condition is just the rule that $v^{r+s}$ equals the product of $v^r$ and $v^s$. The eighth condition says that $(vw)^r=v^rw^r$. The ninth condition asserts that $(v^r)^s=v^{rs}$. The final condition says that $v^1=v$.
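Because this check is a chain of familiar exponent rules, a numerical spot-check may be reassuring, though it is of course not a proof. In this Python sketch the function names `vadd` and `smul` are ours for the two exotic operations; all ten conditions are tested on random positive reals:

```python
# Spot-check of the exotic operations on R^+: "addition" is real
# multiplication and "scalar multiplication" is exponentiation.
import math
import random

def vadd(x, y):   # "x + y" means the product xy
    return x * y

def smul(r, x):   # "r · x" means the r-th power x^r
    return x ** r

random.seed(0)
for _ in range(100):
    v, w, u = (random.uniform(0.1, 10) for _ in range(3))
    r, s = random.uniform(-3, 3), random.uniform(-3, 3)
    assert vadd(v, w) > 0                                   # closure of "+"
    assert math.isclose(vadd(v, w), vadd(w, v))             # commutativity
    assert math.isclose(vadd(vadd(v, w), u), vadd(v, vadd(w, u)))
    assert math.isclose(vadd(v, 1.0), v)                    # zero vector is 1
    assert math.isclose(vadd(1.0 / v, v), 1.0)              # inverse is 1/v
    assert smul(r, v) > 0                                   # closure of "·"
    assert math.isclose(smul(r + s, v), vadd(smul(r, v), smul(s, v)))
    assert math.isclose(smul(r, vadd(v, w)), vadd(smul(r, v), smul(r, w)))
    assert math.isclose(smul(r * s, v), smul(r, smul(s, v)))
    assert math.isclose(smul(1.0, v), v)
```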

Problem 13

Is $\{(x,y)\,\big|\, x,y\in\mathbb{R}\}$ a vector space under these operations?

1. $(x_1,y_1)+(x_2,y_2)=(x_1+x_2,y_1+y_2)$ and $r\cdot (x,y)=(rx,y)$
2. $(x_1,y_1)+(x_2,y_2)=(x_1+x_2,y_1+y_2)$ and $r\cdot (x,y)=(rx,0)$
1. No: $1\cdot(0,1)+1\cdot(0,1)\neq (1+1)\cdot(0,1)$.
2. No. Under these operations the calculation of the prior answer does not give a violation, since $1\cdot(0,1)+1\cdot(0,1)$ and $(1+1)\cdot(0,1)$ both equal $(0,0)$. But the final condition in the definition of a vector space is violated: $1\cdot (0,1)=(0,0)\neq (0,1)$.
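Both violations are easy to confirm mechanically. In this Python sketch (the helper names are ours), item 1 fails condition 7 and item 2 fails the final condition:

```python
# Spot-check of the two violations in Problem 13 (an illustration).
def add13(p, q):                 # the shared addition
    return (p[0] + q[0], p[1] + q[1])

def smul_a(r, p):                # item 1: r·(x, y) = (rx, y)
    return (r * p[0], p[1])

def smul_b(r, p):                # item 2: r·(x, y) = (rx, 0)
    return (r * p[0], 0)

p = (0, 1)
# item 1: condition 7 fails, (1+1)·p != 1·p + 1·p
assert smul_a(1 + 1, p) != add13(smul_a(1, p), smul_a(1, p))
# item 2: the final condition fails, 1·p != p
assert smul_b(1, p) != p
```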
Problem 14

Prove or disprove that this is a vector space: the set of polynomials of degree greater than or equal to two, along with the zero polynomial.

It is not a vector space since it is not closed under addition, as $(x^2)+(1+x-x^2)$ is not in the set.

Problem 15

At this point "the same" is only an intuition, but nonetheless for each vector space identify the $k$ for which the space is "the same" as $\mathbb{R}^k$.

1. The $2 \! \times \! 3$ matrices under the usual operations
2. The $n \! \times \! m$ matrices (under their usual operations)
3. This set of $2 \! \times \! 2$ matrices
$\{\begin{pmatrix} a &0 \\ b &c \end{pmatrix} \,\big|\, a,b,c\in\mathbb{R}\}$
4. This set of $2 \! \times \! 2$ matrices
$\{\begin{pmatrix} a &0 \\ b &c \end{pmatrix} \,\big|\, a+b+c=0\}$
1. $6$
2. $nm$
3. $3$
4. To see that the answer is $2$, rewrite it as
$\{\begin{pmatrix} a &0 \\ b &-a-b \end{pmatrix} \,\big|\, a,b\in\mathbb{R}\}$
so that there are two parameters.
This exercise is recommended for all readers.
Problem 16

Using $\vec{+}$ to represent vector addition and $\,\vec{\cdot}\,$ for scalar multiplication, restate the definition of vector space.

A vector space (over $\mathbb{R}$) consists of a set $V$ along with two operations "$\mathbin{\vec{+}}$" and "$\mathbin{\vec{\cdot}}$" subject to these conditions. Where $\vec{v},\vec{w}\in V$,

1. their vector sum $\vec{v}\mathbin{\vec{+}}\vec{w}$ is an element of $V$. If $\vec{u},\vec{v},\vec{w}\in V$ then
2. $\vec{v}\mathbin{\vec{+}}\vec{w}=\vec{w}\mathbin{\vec{+}}\vec{v}$ and
3. $(\vec{v}\mathbin{\vec{+}}\vec{w})\mathbin{\vec{+}}\vec{u}=\vec{v}\mathbin{\vec{+}}(\vec{w}\mathbin{\vec{+}}\vec{u})$.
4. There is a zero vector $\vec{0}\in V$ such that $\vec{v}\mathbin{\vec{+}}\vec{0}=\vec{v}\,$ for all $\vec{v}\in V$.
5. Each $\vec{v}\in V$ has an additive inverse $\vec{w}\in V$ such that $\vec{w}\mathbin{\vec{+}}\vec{v}=\vec{0}$. If $r,s$ are scalars (that is, members of $\mathbb{R}$) and $\vec{v},\vec{w}\in V$ then
6. each scalar multiple $r\mathbin{\vec{\cdot}}\vec{v}$ is in $V$, and
7. $(r+s)\mathbin{\vec{\cdot}}\vec{v}=r\mathbin{\vec{\cdot}}\vec{v}\mathbin{\vec{+}} s\mathbin{\vec{\cdot}}\vec{v}$, and
8. $r\mathbin{\vec{\cdot}} (\vec{v}\mathbin{\vec{+}}\vec{w})=r\mathbin{\vec{\cdot}}\vec{v}\mathbin{\vec{+}} r\mathbin{\vec{\cdot}}\vec{w}$, and
9. $(rs)\mathbin{\vec{\cdot}} \vec{v} =r\mathbin{\vec{\cdot}} (s\mathbin{\vec{\cdot}}\vec{v})$, and
10. $1\mathbin{\vec{\cdot}} \vec{v}=\vec{v}$.
This exercise is recommended for all readers.
Problem 17

Prove these.

1. Any vector is the additive inverse of the additive inverse of itself.
2. Vector addition left-cancels: if $\vec{v},\vec{s},\vec{t}\in V$ then $\vec{v}+\vec{s}=\vec{v}+\vec{t}\,$ implies that $\vec{s}=\vec{t}$.
1. Let $V$ be a vector space, assume that $\vec{v}\in V$, and assume that $\vec{w}\in V$ is the additive inverse of $\vec{v}$ so that $\vec{w}+\vec{v}=\vec{0}$. Because addition is commutative, $\vec{0}=\vec{w}+\vec{v}=\vec{v}+\vec{w}$, so $\vec{v}$ is also the additive inverse of $\vec{w}$.
2. Let $V$ be a vector space and suppose $\vec{v},\vec{s},\vec{t}\in V$. The additive inverse of $\vec{v}$ is $-\vec{v}$ so $\vec{v}+\vec{s}=\vec{v}+\vec{t}$ gives that $-\vec{v}+\vec{v}+\vec{s}=-\vec{v}+\vec{v}+\vec{t}$, which says that $\vec{0}+\vec{s}=\vec{0}+\vec{t}$ and so $\vec{s}=\vec{t}$.
Problem 18

The definition of vector spaces does not explicitly say that $\vec{0}+\vec{v}=\vec{v}$ (it instead says that $\vec{v}+\vec{0}=\vec{v}$). Show that it must nonetheless hold in any vector space.

Addition is commutative, so in any vector space, for any vector $\vec{v}$ we have that $\vec{v}=\vec{v}+\vec{0}=\vec{0}+\vec{v}$.

This exercise is recommended for all readers.
Problem 19

Prove or disprove that this is a vector space: the set of all matrices, under the usual operations.

It is not a vector space since addition of two matrices of unequal sizes is not defined, and thus the set fails to satisfy the closure condition.

Problem 20

In a vector space every element has an additive inverse. Can some elements have two or more?

Each element of a vector space has one and only one additive inverse.

For, let $V$ be a vector space and suppose that $\vec{v}\in V$. If $\vec{w}_1,\vec{w}_2\in V$ are both additive inverses of $\vec{v}$ then consider $\vec{w}_1+\vec{v}+\vec{w}_2$. On the one hand, we have that it equals $\vec{w}_1+(\vec{v}+\vec{w}_2)= \vec{w}_1+\vec{0}=\vec{w}_1$. On the other hand we have that it equals $(\vec{w}_1+\vec{v})+\vec{w}_2= \vec{0}+\vec{w}_2=\vec{w}_2$. Therefore, $\vec{w}_1=\vec{w}_2$.

Problem 21
1. Prove that every point, line, or plane through the origin in $\mathbb{R}^3$ is a vector space under the inherited operations.
2. What if it doesn't contain the origin?
1. Every such set has the form $\{r\cdot\vec{v}+s\cdot\vec{w}\,\big|\, r,s\in\mathbb{R}\}$ where either or both of $\vec{v},\vec{w}$ may be $\vec{0}$. With the inherited operations, closure of addition $(r_1\vec{v}+s_1\vec{w})+(r_2\vec{v}+s_2\vec{w})=(r_1+r_2)\vec{v}+(s_1+s_2)\vec{w}$ and scalar multiplication $c(r\vec{v}+s\vec{w})=(cr)\vec{v}+(cs)\vec{w}$ are easy. The other conditions are also routine.
2. No such set can be a vector space under the inherited operations because it does not have a zero element.
This exercise is recommended for all readers.
Problem 22

Using the idea of a vector space we can easily reprove that the solution set of a homogeneous linear system has either one element or infinitely many elements. Assume that $\vec{v}\in V$ is not $\vec{0}$.

1. Prove that $r\cdot\vec{v}=\vec{0}$ if and only if $r=0$.
2. Prove that $r_1\cdot\vec{v}=r_2\cdot\vec{v}$ if and only if $r_1=r_2$.
3. Prove that any nontrivial vector space is infinite.
4. Use the fact that a nonempty solution set of a homogeneous linear system is a vector space to draw the conclusion.

Assume that $\vec{v}\in V$ is not $\vec{0}$.

1. One direction of the if and only if is clear: if $r=0$ then $r\cdot\vec{v}=\vec{0}$. For the other way, let $r$ be a nonzero scalar. If $r\vec{v}=\vec{0}$ then $(1/r)\cdot r\vec{v}=(1/r)\cdot \vec{0}$ shows that $\vec{v}=\vec{0}$, contrary to the assumption.
2. Where $r_1,r_2$ are scalars, $r_1\vec{v}=r_2\vec{v}\,$ holds if and only if $(r_1-r_2)\vec{v}=\vec{0}$, and by the prior item that holds if and only if $r_1-r_2=0$, that is, $r_1=r_2$.
3. A nontrivial space has a vector $\vec{v}\neq\vec{0}$. Consider the set $\{k\cdot\vec{v}\,\big|\, k\in\mathbb{R}\}$. By the prior item, distinct scalars give distinct members, so this set is infinite.
4. The solution set is either trivial, or nontrivial. In the second case, it is infinite.
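Part 4 can be illustrated concretely (this is an illustration, not the proof): take a homogeneous system with a nonzero solution and scale that solution. The system used in this Python sketch is our own example:

```python
# Scaling a nonzero solution of a homogeneous linear system gives
# infinitely many distinct solutions; here the system is
#   x + y - 2z = 0  and  x - y = 0.
def is_solution(v):
    x, y, z = v
    return x + y - 2 * z == 0 and x - y == 0

v = (1, 1, 1)  # a nonzero solution
assert is_solution(v)

multiples = {tuple(k * c for c in v) for k in range(100)}
assert all(is_solution(m) for m in multiples)
assert len(multiples) == 100  # distinct scalars give distinct solutions
```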
Problem 23

Is this a vector space under the natural operations: the real-valued functions of one real variable that are differentiable?

Yes. A theorem of first semester calculus says that a sum of differentiable functions is differentiable and that $(f+g)^\prime=f^\prime+g^\prime$, and that a multiple of a differentiable function is differentiable and that $(r\cdot f)^\prime=r\,f^\prime$.

Problem 24

A vector space over the complex numbers $\mathbb{C}$ has the same definition as a vector space over the reals except that scalars are drawn from $\mathbb{C}$ instead of from $\mathbb{R}$. Show that each of these is a vector space over the complex numbers. (Recall how complex numbers add and multiply: $(a_0+a_1i)+(b_0+b_1i)=(a_0+b_0)+(a_1+b_1)i$ and $(a_0+a_1i)(b_0+b_1i)=(a_0b_0-a_1b_1)+(a_0b_1+a_1b_0)i$.)

1. The set of degree two polynomials with complex coefficients
2. This set
$\{\begin{pmatrix} 0 &a \\ b &0 \end{pmatrix}\,\big|\, a,b\in\mathbb{C}\text{ and } a+b=0+0i \}$

The check is routine. Note that "1" is $1+0i$ and the zero elements are these.

1. $(0+0i)+(0+0i)x+(0+0i)x^2$
2. $\begin{pmatrix} 0+0i &0+0i \\ 0+0i &0+0i \end{pmatrix}$
Problem 25

Name a property shared by all of the $\mathbb{R}^n$'s but not listed as a requirement for a vector space.

Notably absent from the definition of a vector space is a distance measure.

This exercise is recommended for all readers.
Problem 26
1. Prove that a sum of four vectors $\vec{v}_1,\ldots,\vec{v}_4\in V$ can be associated in any way without changing the result.
$\begin{array}{rl} ((\vec{v}_1+\vec{v}_2)+\vec{v}_3)+\vec{v}_4 &=(\vec{v}_1+(\vec{v}_2+\vec{v}_3))+\vec{v}_4 \\ &=(\vec{v}_1+\vec{v}_2)+(\vec{v}_3+\vec{v}_4) \\ &=\vec{v}_1+((\vec{v}_2+\vec{v}_3)+\vec{v}_4) \\ &=\vec{v}_1+(\vec{v}_2+(\vec{v}_3+\vec{v}_4)) \end{array}$
This allows us to simply write "$\vec{v}_1+\vec{v}_2+\vec{v}_3+\vec{v}_4$" without ambiguity.
2. Prove that any two ways of associating a sum of any number of vectors give the same sum. (Hint. Use induction on the number of vectors.)
1. A small rearrangement does the trick.
$\begin{array}{rl} (\vec{v}_1+(\vec{v}_2+\vec{v}_3))+\vec{v}_4 &=((\vec{v}_1+\vec{v}_2)+\vec{v}_3)+\vec{v}_4 \\ &=(\vec{v}_1+\vec{v}_2)+(\vec{v}_3+\vec{v}_4) \\ &=\vec{v}_1+(\vec{v}_2+(\vec{v}_3+\vec{v}_4)) \\ &=\vec{v}_1+((\vec{v}_2+\vec{v}_3)+\vec{v}_4) \end{array}$
Each equality above follows from the associativity of three vectors that is given as a condition in the definition of a vector space. For instance, the second "$=$" applies the rule $(\vec{w}_1+\vec{w}_2)+\vec{w}_3=\vec{w}_1+(\vec{w}_2+\vec{w}_3)$ by taking $\vec{w}_1$ to be $\vec{v}_1+\vec{v}_2$, taking $\vec{w}_2$ to be $\vec{v}_3$, and taking $\vec{w}_3$ to be $\vec{v}_4$.
2. The base case for induction is the three vector case. This case $\vec{v}_1+(\vec{v}_2+\vec{v}_3)=(\vec{v}_1+\vec{v}_2)+\vec{v}_3$ is required of any triple of vectors by the definition of a vector space. For the inductive step, assume that any two sums of three vectors, any two sums of four vectors, ..., any two sums of $k$ vectors are equal no matter how the sums are parenthesized. We will show that any sum of $k+1$ vectors equals this one $((\cdots((\vec{v}_1+\vec{v}_2)+\vec{v}_3)+\cdots)+\vec{v}_k)+\vec{v}_{k+1}$. Any parenthesized sum has an outermost "$+$". Assume that it lies between $\vec{v}_m$ and $\vec{v}_{m+1}$ so the sum looks like this.
$(\cdots\,(\vec{v}_1+(\cdots+\vec{v}_m))\,\cdots)+(\cdots\,(\vec{v}_{m+1}+(\cdots+\vec{v}_{k+1}))\,\cdots)$
The second half involves fewer than $k+1$ additions, so by the inductive hypothesis we can re-parenthesize it so that it reads left to right from the inside out, and in particular, so that its outermost "$+$" occurs right before $\vec{v}_{k+1}$.
$=(\cdots\,(\vec{v}_1+(\,\cdots\,+\vec{v}_m))\,\cdots)+(\cdots(\vec{v}_{m+1}+(\cdots+\vec{v}_{k})\cdots)+\vec{v}_{k+1})$
Apply the associativity of the sum of three things
$=\bigl((\cdots\,(\vec{v}_1+(\cdots+\vec{v}_m))\,\cdots)+(\cdots\,(\vec{v}_{m+1}+(\cdots+\vec{v}_{k}))\,\cdots)\bigr)+\vec{v}_{k+1}$
and finish by applying the inductive hypothesis inside these outermost parentheses.
Problem 27

For any vector space, a subset that is itself a vector space under the inherited operations (e.g., a plane through the origin inside of $\mathbb{R}^3$) is a subspace.

1. Show that $\{a_0+a_1x+a_2x^2\,\big|\, a_0+a_1+a_2=0\}$ is a subspace of the vector space of degree two polynomials.
2. Show that this is a subspace of the $2 \! \times \! 2$ matrices.
$\{\begin{pmatrix} a &b \\ c &0 \end{pmatrix} \,\big|\, a+b=0\}$
3. Show that a nonempty subset $S$ of a real vector space is a subspace if and only if it is closed under linear combinations of pairs of vectors: whenever $c_1,c_2\in\mathbb{R}$ and $\vec{s}_1,\vec{s}_2\in S$ then the combination $c_1\vec{s}_1+c_2\vec{s}_2$ is in $S$.
1. We outline the check of the conditions from Definition 1.1. Additive closure holds because if $a_0+a_1+a_2=0$ and $b_0+b_1+b_2=0$ then
$(a_0+a_1x+a_2x^2)+(b_0+b_1x+b_2x^2)= (a_0+b_0)+(a_1+b_1)x+(a_2+b_2)x^2$
is in the set since $(a_0+b_0)+(a_1+b_1)+(a_2+b_2)=(a_0+a_1+a_2)+(b_0+b_1+b_2)$ is zero. The second through fifth conditions are easy. Closure under scalar multiplication holds because if $a_0+a_1+a_2=0$ then
$r\cdot(a_0+a_1x+a_2x^2)= (ra_0)+(ra_1)x+(ra_2)x^2$
is in the set as $ra_0+ra_1+ra_2=r(a_0+a_1+a_2)$ is zero. The remaining conditions here are also easy.
2. This is similar to the first item. Additive closure holds because if $a_1+b_1=0$ and $a_2+b_2=0$ then the sum of the two matrices is in the set, as $(a_1+a_2)+(b_1+b_2)=(a_1+b_1)+(a_2+b_2)=0$. Closure under scalar multiplication is checked the same way, and the remaining conditions are routine.
3. Call the vector space $V$. We have two implications: left to right, if $S$ is a subspace then it is closed under linear combinations of pairs of vectors and, right to left, if a nonempty subset is closed under linear combinations of pairs of vectors then it is a subspace. The left to right implication is easy; we here sketch the other one by assuming $S$ is nonempty and closed, and checking the conditions of Definition 1.1. First, to show closure under addition, if $\vec{s}_1,\vec{s}_2\in S$ then $\vec{s}_1+\vec{s}_2\in S$ as $\vec{s}_1+\vec{s}_2=1\cdot\vec{s}_1+1\cdot\vec{s}_2$. Second, for any $\vec{s}_1,\vec{s}_2\in S$, because addition is inherited from $V$, the sum $\vec{s}_1+\vec{s}_2$ in $S$ equals the sum $\vec{s}_1+\vec{s}_2$ in $V$ and that equals the sum $\vec{s}_2+\vec{s}_1$ in $V$ and that in turn equals the sum $\vec{s}_2+\vec{s}_1$ in $S$. The argument for the third condition is similar to that for the second. For the fourth, suppose that $\vec{s}$ is in the nonempty set $S$ and note that $0\cdot\vec{s}=\vec{0}\in S$; showing that the $\vec{0}$ of $V$ acts under the inherited operations as the additive identity of $S$ is easy. The fifth condition is satisfied because for any $\vec{s}\in S$ closure under linear combinations shows that the vector $0\cdot\vec{0}+(-1)\cdot\vec{s}$ is in $S$; showing that it is the additive inverse of $\vec{s}$ under the inherited operations is routine. The proofs for the remaining conditions are similar.
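Part 3's criterion gives a quick way to test candidate subspaces numerically. This Python sketch applies it to the set of two-tall vectors with $x+y=0$, an example of our own, checking a few linear combinations of a pair of members:

```python
# Applying the closed-under-linear-combinations criterion (an
# illustration): S = {(x, y) in R^2 : x + y = 0}.
def in_S(v):
    return abs(v[0] + v[1]) < 1e-9

def combo(c1, v1, c2, v2):
    # the linear combination c1·v1 + c2·v2
    return (c1 * v1[0] + c2 * v2[0], c1 * v1[1] + c2 * v2[1])

s1, s2 = (1.0, -1.0), (2.5, -2.5)
assert in_S(s1) and in_S(s2)
for c1, c2 in [(0.0, 0.0), (1.0, 1.0), (-3.0, 0.5), (2.0, -7.0)]:
    assert in_S(combo(c1, s1, c2, s2))  # every combination stays in S
```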