Linear Algebra/Dimension/Solutions


Solutions

Assume that all spaces are finite-dimensional unless otherwise stated.

This exercise is recommended for all readers.
Problem 1

Find a basis for, and the dimension of,   \mathcal{P}_2 .

Answer

One basis is  \langle 1,x,x^2 \rangle  , and so the dimension is three.

Problem 2

Find a basis for, and the dimension of, the solution set of this system.


\begin{array}{*{4}{rc}r}
x_1  &-  &4x_2  &+  &3x_3  &-  &x_4  &=  &0  \\
2x_1  &-  &8x_2  &+  &6x_3  &-  &2x_4 &=  &0  
\end{array}
Answer

The solution set is


\{\begin{pmatrix} 4x_2-3x_3+x_4 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
\,\big|\, x_2,x_3,x_4\in\mathbb{R}\}

so a natural basis is this


\langle \begin{pmatrix} 4 \\ 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}   \rangle

(checking linear independence is easy). Thus the dimension is three.
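As a quick numerical check outside the text's methods (an added sketch assuming NumPy is available; the array names are illustrative), one can verify that each proposed basis vector satisfies the system, that the three are independent, and that rank-nullity gives the same count:

```python
import numpy as np

# Coefficient matrix of the homogeneous system
A = np.array([[1, -4, 3, -1],
              [2, -8, 6, -2]])

# The three basis vectors from the answer, as columns
basis = np.array([[4, 1, 0, 0],
                  [-3, 0, 1, 0],
                  [1, 0, 0, 1]]).T

# Each basis vector solves the system
assert np.all(A @ basis == 0)

# The three columns are linearly independent
assert np.linalg.matrix_rank(basis) == 3

# Rank-nullity: 4 unknowns minus rank 1 leaves a 3-dimensional solution set
assert 4 - np.linalg.matrix_rank(A) == 3
```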

This exercise is recommended for all readers.
Problem 3

Find a basis for, and the dimension of,  \mathcal{M}_{2 \! \times \! 2} , the vector space of  2 \! \times \! 2 matrices.

Answer

For this space


\{\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix} \,\big|\, a,b,c,d\in\mathbb{R}\}
=\{a\cdot\begin{pmatrix}
1  &0  \\
0  &0
\end{pmatrix}
+\dots+
d\cdot\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix} \,\big|\, a,b,c,d\in\mathbb{R}\}

this is a natural basis.


\langle 
\begin{pmatrix}
1  &0  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
0  &1  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix},
\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix}   \rangle

The dimension is four.

Problem 4

Find the dimension of the vector space of matrices


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}

subject to each condition.

  1. a, b, c, d\in\mathbb{R}
  2. a-b+2c=0 and d\in\mathbb{R}
  3. a+b+c=0, a+b-c=0, and d\in\mathbb{R}
Answer
  1. As in the prior exercise, the space \mathcal{M}_{2 \! \times \! 2} of matrices without restriction has this basis
    
\langle 
\begin{pmatrix}
1  &0  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
0  &1  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
0  &0  \\
1  &0
\end{pmatrix},
\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix}   \rangle
    and so the dimension is four.
  2. For this space
    
\{\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix} \,\big|\, a=b-2c\text{ and }d\in\mathbb{R}\}
=\{b\cdot\begin{pmatrix}
1  &1  \\
0  &0
\end{pmatrix}
+c\cdot\begin{pmatrix}
-2  &0  \\
1  &0
\end{pmatrix}
+d\cdot\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix} \,\big|\, b,c,d\in\mathbb{R}\}
    this is a natural basis.
    
\langle 
\begin{pmatrix}
1  &1  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
-2  &0  \\
1  &0
\end{pmatrix},
\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix}   \rangle
    The dimension is three.
  3. Gauss' method applied to the two-equation linear system gives that c=0 and that a=-b. Thus, we have this description
    
\{\begin{pmatrix}
-b  &b  \\
0  &d
\end{pmatrix} \,\big|\, b,d\in\mathbb{R}\}
=\{b\cdot\begin{pmatrix}
-1  &1  \\
0  &0
\end{pmatrix}
+d\cdot\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix} \,\big|\, b,d\in\mathbb{R}\}
    and so this is a natural basis.
    
\langle 
\begin{pmatrix}
-1  &1  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
0  &0  \\
0  &1
\end{pmatrix}   \rangle
    The dimension is two.
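The three counts can also be sketched numerically (an added illustration assuming NumPy; the names are illustrative): write each condition as a row acting on (a,b,c,d), and the dimension is the number of unknowns minus the rank of the constraint matrix.

```python
import numpy as np

# part 1: no constraints, so the dimension is 4 - 0 = 4

# part 2: the single constraint a - b + 2c = 0
C2 = np.array([[1, -1, 2, 0]])
assert 4 - np.linalg.matrix_rank(C2) == 3

# part 3: the constraints a + b + c = 0 and a + b - c = 0
C3 = np.array([[1, 1, 1, 0],
               [1, 1, -1, 0]])
dim = 4 - np.linalg.matrix_rank(C3)
assert dim == 2
```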
This exercise is recommended for all readers.
Problem 5

Find the dimension of each.

  1. The space of cubic polynomials p(x) such that p(7)=0
  2. The space of cubic polynomials p(x) such that p(7)=0 and p(5)=0
  3. The space of cubic polynomials p(x) such that p(7)=0, p(5)=0, and p(3)=0
  4. The space of cubic polynomials p(x) such that p(7)=0, p(5)=0, p(3)=0, and p(1)=0
Answer

The bases for these spaces are developed in the answer set of the prior subsection.

  1. One basis is  \langle -7+x,-49+x^2,-343+x^3 \rangle  . The dimension is three.
  2. One basis is \langle 35-12x+x^2,420-109x+x^3 \rangle so the dimension is two.
  3. A basis is \{-105+71x-15x^2+x^3\}. The dimension is one.
  4. This is the trivial subspace of \mathcal{P}_3 and so the basis is empty. The dimension is zero.
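A numerical spot-check of these answers (an added sketch assuming NumPy; note that np.polyval takes coefficients with the highest degree first) confirms that each listed basis polynomial vanishes at the required points:

```python
import numpy as np

# part 1: the basis polynomials -7+x, -49+x^2, -343+x^3 each vanish at x=7
part1 = [[1, -7], [1, 0, -49], [1, 0, 0, -343]]
assert all(np.polyval(p, 7) == 0 for p in part1)

# part 2: 35-12x+x^2 and 420-109x+x^3 vanish at both 7 and 5
part2 = [[1, -12, 35], [1, 0, -109, 420]]
assert all(np.polyval(p, x) == 0 for p in part2 for x in (7, 5))

# part 3: -105+71x-15x^2+x^3 vanishes at 7, 5, and 3
part3 = [1, -15, 71, -105]
assert all(np.polyval(part3, x) == 0 for x in (7, 5, 3))

# the dimension drops by one with each added root: 3, 2, 1 (and 0 in part 4)
dims = (len(part1), len(part2), 1)
```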
Problem 6

What is the dimension of the span of the set \{\cos^2\theta,\sin^2\theta,\cos2\theta,\sin2\theta\}? This span is a subspace of the space of all real-valued functions of one real variable.

Answer

First recall that \cos2\theta=\cos^2\theta-\sin^2\theta, and so deleting \cos2\theta from this set leaves the span unchanged. What's left, the set \{\cos^2\theta,\sin^2\theta,\sin2\theta\}, is linearly independent: consider a relationship c_1\cos^2\theta+c_2\sin^2\theta+c_3\sin2\theta=Z(\theta), where Z is the zero function, and take \theta=0 to get c_1=0, then \theta=\pi/2 to get c_2=0, and then \theta=\pi/4 to get c_3=0. The set is therefore a basis for its span, and so the span is a three-dimensional vector space.
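For an added numeric sanity check (a sketch assuming NumPy; the sample angles are arbitrary), evaluate the four functions at several angles and compute the rank of the resulting matrix. Generically this recovers the span's dimension:

```python
import numpy as np

# Sample the four functions at five angles; each column is one function
thetas = np.array([0.1, 0.4, 0.9, 1.7, 2.3])
M = np.column_stack([np.cos(thetas)**2, np.sin(thetas)**2,
                     np.cos(2*thetas), np.sin(2*thetas)])

# The cos(2*theta) column is a difference of the first two, so rank is 3
assert np.linalg.matrix_rank(M) == 3
```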

Problem 7

Find the dimension of  \mathbb{C}^{47} , the vector space of 47-tuples of complex numbers.

Answer

Here is a basis


\langle (1+0i,0+0i,\dots,0+0i),\,(0+1i,0+0i,\dots,0+0i),(0+0i,1+0i,\dots,0+0i),\ldots  \rangle

and so the dimension is  2\cdot 47=94 .

Problem 8

What is the dimension of the vector space \mathcal{M}_{3 \! \times \! 5} of  3 \! \times \! 5 matrices?

Answer

A basis is


\langle 
\begin{pmatrix}
1  &0  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0
\end{pmatrix},
\begin{pmatrix}
0  &1  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0
\end{pmatrix},
\dots,
\begin{pmatrix}
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &0  \\
0  &0  &0  &0  &1
\end{pmatrix}   \rangle

and thus the dimension is  3\cdot 5=15 .

This exercise is recommended for all readers.
Problem 9

Show that this is a basis for \mathbb{R}^4.


\langle \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}  \rangle

(The results of this subsection can be used to simplify this job.)

Answer

In a four-dimensional space a set of four vectors is linearly independent if and only if it spans the space. The form of these vectors makes linear independence easy to show (look at the equation of fourth components, then at the equation of third components, etc.).
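A numerical version of this check (an added sketch assuming NumPy): the four vectors as columns form an upper-triangular matrix with ones on the diagonal, so the determinant is 1 and the rank is 4.

```python
import numpy as np

# Columns are the four given vectors, in order
B = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])

# Four independent vectors in R^4 form a basis
assert np.linalg.matrix_rank(B) == 4
assert round(np.linalg.det(B)) == 1
```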

Problem 10

Refer to Example 2.9.

  1. Sketch a similar subspace diagram for \mathcal{P}_2.
  2. Sketch one for \mathcal{M}_{2 \! \times \! 2}.
Answer
  1. The diagram for \mathcal{P}_2 has four levels. The top level has the only three-dimensional subspace, \mathcal{P}_2 itself. The next level contains the two-dimensional subspaces (not just the linear polynomials; any two-dimensional subspace, like those polynomials of the form ax^2+b). Below that are the one-dimensional subspaces. Finally, of course, is the only zero-dimensional subspace, the trivial subspace.
  2. For \mathcal{M}_{2 \! \times \! 2}, the diagram has five levels, including subspaces of dimension four through zero.
This exercise is recommended for all readers.
Problem 11
Where  S is a set, the functions  f:S\to\mathbb{R} form a vector space under the natural operations: the sum f+g is the function given by  f+g\,(s)=f(s)+g(s) and the scalar product is given by  r\cdot f \, (s)=r\cdot f(s) . What is the dimension of the space resulting for each domain?
  1.  S=\{1\}
  2.  S=\{1,2\}
  3.  S=\{1,\ldots,n\}
Answer
  1. One
  2. Two
  3.  n
Problem 12

(See Problem 11.) Prove that this is an infinite-dimensional space: the set of all functions  f:\mathbb{R}\to\mathbb{R} under the natural operations.

Answer

We need only produce an infinite linearly independent set. One is  \langle f_1,f_2,\ldots \rangle  where  f_i:\mathbb{R}\to\mathbb{R} is


f_i(x)=\begin{cases}
1  &\text{if }x=i  \\
0  &\text{otherwise}
\end{cases}

the function that has value 1 only at x=i.

Problem 13

(See Problem 11.) What is the dimension of the vector space of functions f:S\to\mathbb{R}, under the natural operations, where the domain S is the empty set?

Answer

Considering a function to be a set, specifically, a set of ordered pairs (x,f(x)), then the only function with an empty domain is the empty set. Thus this is a trivial vector space, and has dimension zero.

Problem 14

Show that any set of four vectors in  \mathbb{R}^2 is linearly dependent.

Answer

Apply Corollary 2.8.

Problem 15

Show that the set  \langle \vec{\alpha}_1,\vec{\alpha}_2,\vec{\alpha}_3 \rangle \subset\mathbb{R}^3 is a basis if and only if there is no plane through the origin containing all three vectors.

Answer

A plane has the form \{\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2\,\big|\, t_1,t_2\in\mathbb{R}\}. (The first chapter also calls this a "2-flat", and contains a discussion of why this is equivalent to the description often taken in Calculus as the set of points (x,y,z) subject to a condition of the form ax+by+cz=d). When the plane passes through the origin we can take the particular vector \vec{p} to be \vec{0}. Thus, in the language we have developed in this chapter, a plane through the origin is the span of a set of two vectors.

Now for the statement. Asserting that the three are not coplanar is the same as asserting that no vector lies in the span of the other two— no vector is a linear combination of the other two. That's simply an assertion that the three-element set is linearly independent. By Corollary 2.12, that's equivalent to an assertion that the set is a basis for \mathbb{R}^3.

Problem 16
  1. Prove that any subspace of a finite dimensional space has a basis.
  2. Prove that any subspace of a finite dimensional space is finite dimensional.
Answer

Let the space V be finite dimensional. Let S be a subspace of V.

  1. The empty set is a linearly independent subset of S. By Corollary 2.10, it can be expanded to a basis for the vector space S.
  2. Any basis for the subspace S is a linearly independent set in the superspace V. Hence it can be expanded to a basis for the superspace, which is finite dimensional. Therefore it has only finitely many members.
Problem 17

Where is the finiteness of  B used in Theorem 2.3?

Answer

It ensures that we exhaust the  \vec{\beta} 's. That is, it justifies the first sentence of the last paragraph.

This exercise is recommended for all readers.
Problem 18

Prove that if  U and  W are both three-dimensional subspaces of  \mathbb{R}^5 then  U\cap W is non-trivial. Generalize.

Answer

Let  B_U=\langle \vec{\beta}_1,\vec{\beta}_2,\vec{\beta}_3 \rangle  be a basis for  U and let  B_W=\langle \vec{\delta}_1,\vec{\delta}_2,\vec{\delta}_3 \rangle  be a basis for  W . If the two bases share a vector then that vector is a nonzero member of  U\cap W and we are done, so assume that they do not. Then  B_U\cup B_W is a six member subset of the five-dimensional space  \mathbb{R}^5 and so is linearly dependent: there are scalars, not all zero, with  c_1\vec{\beta}_1+c_2\vec{\beta}_2+c_3\vec{\beta}_3+d_1\vec{\delta}_1+d_2\vec{\delta}_2+d_3\vec{\delta}_3=\vec{0} . Not all of the  d 's are zero, as otherwise  B_U would be linearly dependent. Hence  d_1\vec{\delta}_1+d_2\vec{\delta}_2+d_3\vec{\delta}_3=-(c_1\vec{\beta}_1+c_2\vec{\beta}_2+c_3\vec{\beta}_3) is a nonzero vector lying in both  W and  U , and thus  U\cap W is more than just the trivial space  \{\vec{0}\,\} .

Generalization: if  U,W are subspaces of a vector space of dimension  n and if  \dim(U)+\dim(W)>n then they have a nontrivial intersection.
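This can be illustrated numerically (an added sketch assuming NumPy; it also uses the standard formula \dim(U+W)=\dim U+\dim W-\dim(U\cap W), which goes a little beyond the argument above): two 3-dimensional subspaces of \mathbb{R}^5 have a sum of dimension at most 5, forcing an intersection of dimension at least 1.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((5, 3))   # columns span a (generically) 3-dim subspace of R^5
W = rng.standard_normal((5, 3))
assert np.linalg.matrix_rank(U) == 3 and np.linalg.matrix_rank(W) == 3

# dim(U + W) is the rank of all six columns together, at most 5
combined_rank = np.linalg.matrix_rank(np.hstack([U, W]))

# dim(U intersect W) = dim U + dim W - dim(U + W) >= 3 + 3 - 5 = 1
assert 3 + 3 - combined_rank >= 1
```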

Problem 19

Because a basis for a space is a subset of that space, we are naturally led to how the property "is a basis" interacts with set operations.

  1. Consider first how bases might be related by "subset". Assume that  U,W are subspaces of some vector space and that  U\subseteq W . Can there exist bases  B_U for  U and  B_W for  W such that  B_U\subseteq B_W ? Must such bases exist? For any basis  B_U for  U , must there be a basis  B_W for  W such that  B_U\subseteq B_W ? For any basis  B_W for  W , must there be a basis  B_U for  U such that  B_U\subseteq B_W ? For any bases  B_U, B_W for  U and  W , must  B_U be a subset of  B_W ?
  2. Is the intersection of bases a basis? For what space?
  3. Is the union of bases a basis? For what space?
  4. What about complement?

(Hint. Test any conjectures against some subspaces of  \mathbb{R}^3 .)

Answer

First, note that a set is a basis for some space if and only if it is linearly independent, because in that case it is a basis for its own span.

  1. The answer to the third question ("for any basis  B_U for  U , must there be a basis  B_W for  W such that  B_U\subseteq B_W ?") is "yes", and that implies "yes" answers to the first two questions. If  B_U is a basis for  U then  B_U is a linearly independent subset of  W ; apply Corollary 2.10 to expand it to a basis for  W . That is the desired  B_W . The answer to the fourth question is "no", which implies a "no" answer to the fifth. Here is an example of a basis for a superspace no subset of which is a basis for a given subspace: in  W=\mathbb{R}^2 , consider the standard basis  \mathcal{E}_2 . No subset of  \mathcal{E}_2 is a basis for the subspace  U of  \mathbb{R}^2 that is the line  y=x .
  2. It is a basis (for its span) because the intersection of linearly independent sets is linearly independent (the intersection is a subset of each of the linearly independent sets). It is not, however, a basis for the intersection of the spaces. For instance, these are bases for  \mathbb{R}^2 :
    
B_1=\langle \begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \end{pmatrix} \rangle 
\quad\text{and}\quad
B_2=\langle \begin{pmatrix} 2 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 2 \end{pmatrix} \rangle
    and  \mathbb{R}^2\cap\mathbb{R}^2=\mathbb{R}^2 , but  B_1\cap B_2 is empty. All we can say is that the intersection of the bases is a basis for a subset of the intersection of the spaces.
  3. The union of bases need not be a basis: in  \mathbb{R}^2
    
B_1=\langle \begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} 1 \\ 1 \end{pmatrix} \rangle 
\quad\text{and}\quad
B_2=\langle \begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 2 \end{pmatrix} \rangle
have a union  B_1\cup B_2 that is not linearly independent. There is a necessary and sufficient condition for the union of two bases to be a basis
    
B_1\cup B_2 \text{ is linearly independent }
\quad\iff\quad
[B_1\cap B_2]=[B_1]\cap
[B_2]
    and it is easy enough to prove (but perhaps hard to apply).
  4. The complement of a basis cannot be a basis because it contains the zero vector.
This exercise is recommended for all readers.
Problem 20

Consider how "dimension" interacts with "subset". Assume  U and  W are both subspaces of some vector space, and that  U\subseteq W .

  1. Prove that  \dim (U)\leq\dim (W) .
  2. Prove that equality of dimension holds if and only if  U=W .
  3. Show that the prior item does not hold if they are infinite-dimensional.
Answer
  1. A basis for  U is a linearly independent set in  W and so can be expanded via Corollary 2.10 to a basis for  W . The second basis has at least as many members as the first.
  2. One direction is clear: if  U=W then they have the same dimension. For the converse, let  B_U be a basis for  U . It is a linearly independent subset of  W and so can be expanded to a basis for  W . If  \dim(U)=\dim(W) then this basis for  W has no more members than does  B_U and so equals  B_U . Since  U and  W have the same bases, they are equal.
  3. Let  W be the space of finite-degree polynomials and let  U be the subspace of polynomials that have only even-powered terms  \{a_0+a_1x^2+a_2x^4+\dots+a_nx^{2n}\,\big|\, a_0,\ldots,a_n\in\mathbb{R}\} . Both spaces have infinite dimension, but  U is a proper subspace.
? Problem 21

For any vector \vec{v} in \mathbb{R}^n and any permutation \sigma of the numbers 1, 2, ..., n (that is, \sigma is a rearrangement of those numbers into a new order), define \sigma(\vec{v}) to be the vector whose components are v_{\sigma(1)}, v_{\sigma(2)}, ..., and v_{\sigma(n)} (where \sigma(1) is the first number in the rearrangement, etc.). Now fix \vec{v} and let V be the span of \{\sigma(\vec{v})\,\big|\, \sigma\text{ permutes }1, \ldots, n\}. What are the possibilities for the dimension of V? (Gilbert, Krusemeyer & Larson 1993, Problem 47)

Answer

The possibilities for the dimension of V are 0, 1, n-1, and n.

To see this, first consider the case when all the coordinates of \vec{v} are equal.


\vec{v}=\begin{pmatrix} z \\ z \\ \vdots \\ z \end{pmatrix}

Then \sigma(\vec{v})=\vec{v} for every permutation \sigma, so V is just the span of \vec{v}, which has dimension 0 or 1 according to whether \vec{v} is \vec{0} or not.

Now suppose not all the coordinates of \vec{v} are equal; let x and y with x\neq y be among the coordinates of \vec{v}. Then we can find permutations \sigma_1 and \sigma_2 such that


\sigma_1(\vec{v})=\begin{pmatrix} x \\ y \\ a_3 \\ \vdots \\ a_n \end{pmatrix}
\quad\text{and}\quad
\sigma_2(\vec{v})=\begin{pmatrix} y \\ x \\ a_3 \\ \vdots \\ a_n \end{pmatrix}

for some a_3,\ldots,a_n\in\mathbb{R}. Therefore,


\frac{1}{y-x}\bigl(\sigma_1(\vec{v})-\sigma_2(\vec{v})\bigr)=
\begin{pmatrix} -1 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}

is in V. That is, \vec{e}_2-\vec{e}_1\in V, where \vec{e}_1, \vec{e}_2, ..., \vec{e}_n is the standard basis for \mathbb{R}^n. The same argument, with other choices of positions for x and y, shows that \vec{e}_3-\vec{e}_1, ..., \vec{e}_n-\vec{e}_1 are all in V. It is easy to see that the vectors \vec{e}_2-\vec{e}_1, \vec{e}_3-\vec{e}_1, ..., \vec{e}_n-\vec{e}_1 are linearly independent (that is, form a linearly independent set), so \dim V\geq n-1.

Finally, we can write


\begin{array}{rl}
\vec{v} &=x_1\vec{e}_1+x_2\vec{e}_2+\dots+x_n\vec{e}_n  \\
&=(x_1+x_2+\dots+x_n)\vec{e}_1+x_2(\vec{e}_2-\vec{e}_1)+\dots
+x_n(\vec{e}_n-\vec{e}_1)
\end{array}

This shows that if x_1+x_2+\dots+x_n=0 then \vec{v} is in the span of \vec{e}_2-\vec{e}_1, ..., \vec{e}_n-\vec{e}_1 (that is, is in the span of the set of those vectors); similarly, each \sigma(\vec{v}) will be in this span, so V will equal this span and \dim V=n-1. On the other hand, if x_1+x_2+\dots+x_n\neq 0 then the above equation shows that \vec{e}_1\in V and thus \vec{e}_1,\dots,\vec{e}_n\in V, so V=\mathbb{R}^n and \dim V=n.
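The four possibilities can be observed computationally (an added sketch assuming NumPy; the sample vectors are illustrative), by taking the rank of the matrix whose rows are all coordinate permutations of \vec{v}:

```python
import numpy as np
from itertools import permutations

def perm_span_dim(v):
    """Dimension of the span of all coordinate permutations of v."""
    rows = np.array([list(p) for p in permutations(v)], dtype=float)
    return np.linalg.matrix_rank(rows)

assert perm_span_dim([0, 0, 0, 0]) == 0   # zero vector
assert perm_span_dim([2, 2, 2, 2]) == 1   # equal nonzero coordinates
assert perm_span_dim([1, 2, 3, -6]) == 3  # coordinates sum to zero: n - 1
assert perm_span_dim([1, 2, 3, 4]) == 4   # nonzero coordinate sum: n
```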

References

  • Gilbert, George T.; Krusemeyer, Mark; Larson, Loren C. (1993), The Wohascum County Problem Book, The Mathematical Association of America .