# Linear Algebra/Length and Angle Measures/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Find the length of each vector.

1. ${\displaystyle {\begin{pmatrix}3\\1\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}-1\\2\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}4\\1\\1\end{pmatrix}}}$
4. ${\displaystyle {\begin{pmatrix}0\\0\\0\end{pmatrix}}}$
5. ${\displaystyle {\begin{pmatrix}1\\-1\\1\\0\end{pmatrix}}}$
1. ${\displaystyle {\sqrt {3^{2}+1^{2}}}={\sqrt {10}}}$
2. ${\displaystyle {\sqrt {5}}}$
3. ${\displaystyle {\sqrt {18}}}$
4. ${\displaystyle 0}$
5. ${\displaystyle {\sqrt {3}}}$
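These lengths are easy to check numerically. A minimal sketch (the `length` helper is our name, not the book's notation):

```python
from math import sqrt

def length(v):
    """Euclidean length: the square root of the sum of squared components."""
    return sqrt(sum(x * x for x in v))

vectors = [(3, 1), (-1, 2), (4, 1, 1), (0, 0, 0), (1, -1, 1, 0)]
lengths = [length(v) for v in vectors]
# the values are sqrt(10), sqrt(5), sqrt(18), 0, and sqrt(3)
```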
This exercise is recommended for all readers.
Problem 2

Find the angle between each two, if it is defined.

1. ${\displaystyle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}1\\4\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1\\2\\0\end{pmatrix}},{\begin{pmatrix}0\\4\\1\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}1\\2\end{pmatrix}},{\begin{pmatrix}1\\4\\-1\end{pmatrix}}}$
1. ${\displaystyle \arccos(9/{\sqrt {85}})\approx 0.22{\text{ radians}}}$
2. ${\displaystyle \arccos(8/{\sqrt {85}})\approx 0.52{\text{ radians}}}$
3. Not defined.
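The angle formula can be replayed numerically; a sketch (the `angle` helper is ours), including the check that the angle is undefined when the dimensions differ:

```python
from math import acos, sqrt

def angle(u, v):
    """Angle in radians between two vectors; defined only for equal dimensions."""
    if len(u) != len(v):
        raise ValueError("angle is not defined between different dimensions")
    dot = sum(a * b for a, b in zip(u, v))
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return acos(dot / norms)

theta1 = angle((1, 2), (1, 4))        # arccos(9/sqrt(85)), about 0.22
theta2 = angle((1, 2, 0), (0, 4, 1))  # arccos(8/sqrt(85)), about 0.52
```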
This exercise is recommended for all readers.
Problem 3

During maneuvers preceding the Battle of Jutland, the British battle cruiser Lion moved as follows (in nautical miles): ${\displaystyle 1.2}$ miles north, ${\displaystyle 6.1}$ miles ${\displaystyle 38}$ degrees east of south, ${\displaystyle 4.0}$ miles at ${\displaystyle 89}$ degrees east of north, and ${\displaystyle 6.5}$ miles at ${\displaystyle 31}$ degrees east of north. Find the distance between starting and ending positions (O'Hanian 1985).

We express each displacement as a vector (rounded to one decimal place because that's the accuracy of the problem's statement) and add to find the total displacement (ignoring the curvature of the earth).

${\displaystyle {\begin{pmatrix}0.0\\1.2\end{pmatrix}}+{\begin{pmatrix}3.8\\-4.8\end{pmatrix}}+{\begin{pmatrix}4.0\\0.1\end{pmatrix}}+{\begin{pmatrix}3.3\\5.6\end{pmatrix}}={\begin{pmatrix}11.1\\2.1\end{pmatrix}}}$

The distance is ${\displaystyle {\sqrt {11.1^{2}+2.1^{2}}}\approx 11.3}$.
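A numeric sketch of the same computation, keeping full precision for each leg rather than rounding first. Bearings are measured in degrees east of north, so "38 degrees east of south" becomes bearing 142:

```python
from math import sin, cos, radians, hypot

# Each leg as (distance in nautical miles, bearing in degrees east of north).
legs = [(1.2, 0.0), (6.1, 142.0), (4.0, 89.0), (6.5, 31.0)]
east = sum(d * sin(radians(b)) for d, b in legs)
north = sum(d * cos(radians(b)) for d, b in legs)
distance = hypot(east, north)  # about 11.3 nautical miles
```

The unrounded north component comes out near 2.03 rather than the 2.1 in the display above; the difference is just the per-leg rounding, and the final distance agrees to the stated accuracy.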

Problem 4

Find ${\displaystyle k}$ so that these two vectors are perpendicular.

${\displaystyle {\begin{pmatrix}k\\1\end{pmatrix}}\qquad {\begin{pmatrix}4\\3\end{pmatrix}}}$

Solve ${\displaystyle (k)(4)+(1)(3)=0}$ to get ${\displaystyle k=-3/4}$.

Problem 5

Describe the set of vectors in ${\displaystyle \mathbb {R} ^{3}}$ orthogonal to this one.

${\displaystyle {\begin{pmatrix}1\\3\\-1\end{pmatrix}}}$

The set

${\displaystyle \{{\begin{pmatrix}x\\y\\z\end{pmatrix}}\,{\big |}\,1x+3y-1z=0\}}$

can also be described with parameters in this way.

${\displaystyle \{{\begin{pmatrix}-3\\1\\0\end{pmatrix}}y+{\begin{pmatrix}1\\0\\1\end{pmatrix}}z\,{\big |}\,y,z\in \mathbb {R} \}}$
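A spot check of the parametrization (the helper names are ours): every choice of the parameters ${\displaystyle y}$ and ${\displaystyle z}$ yields a vector orthogonal to the given one.

```python
def dot(u, v):
    """Dot product of same-dimension vectors."""
    return sum(a * b for a, b in zip(u, v))

def member(y, z):
    """The vector y*(-3,1,0) + z*(1,0,1) from the parametrized set."""
    return (-3 * y + z, y, z)

# Each member satisfies 1x + 3y - 1z = 0, i.e. is orthogonal to (1, 3, -1).
```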
This exercise is recommended for all readers.
Problem 6
1. Find the angle between the diagonal of the unit square in ${\displaystyle \mathbb {R} ^{2}}$ and one of the axes.
2. Find the angle between the diagonal of the unit cube in ${\displaystyle \mathbb {R} ^{3}}$ and one of the axes.
3. Find the angle between the diagonal of the unit cube in ${\displaystyle \mathbb {R} ^{n}}$ and one of the axes.
4. What is the limit, as ${\displaystyle n}$ goes to ${\displaystyle \infty }$, of the angle between the diagonal of the unit cube in ${\displaystyle \mathbb {R} ^{n}}$ and one of the axes?
1. We can use the ${\displaystyle x}$-axis.
${\displaystyle \arccos({\frac {(1)(1)+(0)(1)}{{\sqrt {1}}{\sqrt {2}}}})\approx 0.79{\text{ radians}}}$
2. Again, use the ${\displaystyle x}$-axis.
${\displaystyle \arccos({\frac {(1)(1)+(0)(1)+(0)(1)}{{\sqrt {1}}{\sqrt {3}}}})\approx 0.96{\text{ radians}}}$
3. The ${\displaystyle x}$-axis worked before and it will work again.
${\displaystyle \arccos({\frac {(1)(1)+\cdots +(0)(1)}{{\sqrt {1}}{\sqrt {n}}}})=\arccos({\frac {1}{\sqrt {n}}})}$
4. Using the formula from the prior item, ${\displaystyle \lim _{n\to \infty }\arccos(1/{\sqrt {n}})=\pi /2{\text{ radians}}}$.
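The closed form from item 3 is easy to evaluate numerically; the angle grows with the dimension and approaches ${\displaystyle \pi /2}$ (the helper name is ours):

```python
from math import acos, sqrt

def diagonal_angle(n):
    """Angle between the diagonal of the unit cube in R^n and an axis."""
    return acos(1 / sqrt(n))

angles = [diagonal_angle(n) for n in (2, 3, 1000)]
# about 0.79, about 0.96, and nearly pi/2 ~ 1.57 radians
```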
Problem 7

Is any vector perpendicular to itself?

Clearly ${\displaystyle u_{1}u_{1}+\cdots +u_{n}u_{n}}$ is zero if and only if each ${\displaystyle u_{i}}$ is zero. So only ${\displaystyle {\vec {0}}\in \mathbb {R} ^{n}}$ is perpendicular to itself.

This exercise is recommended for all readers.
Problem 8

Describe the algebraic properties of dot product.

1. Is it right-distributive over addition: ${\displaystyle ({\vec {u}}+{\vec {v}})\cdot {\vec {w}}={\vec {u}}\cdot {\vec {w}}+{\vec {v}}\cdot {\vec {w}}}$?
2. Is it left-distributive (over addition)?
3. Does it commute?
4. Associate?
5. How does it interact with scalar multiplication?

As always, any assertion must be backed by either a proof or an example.

Assume that ${\displaystyle {\vec {u}},{\vec {v}},{\vec {w}}\in \mathbb {R} ^{n}}$ have components ${\displaystyle u_{1},\ldots ,u_{n}}$, ${\displaystyle v_{1},\ldots ,v_{n}}$, and ${\displaystyle w_{1},\ldots ,w_{n}}$.

1. Dot product is right-distributive.
${\displaystyle {\begin{array}{rl}({\vec {u}}+{\vec {v}})\cdot {\vec {w}}&=[{\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}+{\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}]\cdot {\begin{pmatrix}w_{1}\\\vdots \\w_{n}\end{pmatrix}}\\&={\begin{pmatrix}u_{1}+v_{1}\\\vdots \\u_{n}+v_{n}\end{pmatrix}}\cdot {\begin{pmatrix}w_{1}\\\vdots \\w_{n}\end{pmatrix}}\\&=(u_{1}+v_{1})w_{1}+\cdots +(u_{n}+v_{n})w_{n}\\&=(u_{1}w_{1}+\cdots +u_{n}w_{n})+(v_{1}w_{1}+\cdots +v_{n}w_{n})\\&={\vec {u}}\cdot {\vec {w}}+{\vec {v}}\cdot {\vec {w}}\end{array}}}$
2. Dot product is also left distributive: ${\displaystyle {\vec {w}}\cdot ({\vec {u}}+{\vec {v}})={\vec {w}}\cdot {\vec {u}}+{\vec {w}}\cdot {\vec {v}}}$. The proof is just like the prior one.
3. Dot product commutes.
${\displaystyle {\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}\cdot {\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}=u_{1}v_{1}+\cdots +u_{n}v_{n}=v_{1}u_{1}+\cdots +v_{n}u_{n}={\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}\cdot {\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}}$
4. Because ${\displaystyle {\vec {u}}\cdot {\vec {v}}}$ is a scalar, not a vector, the expression ${\displaystyle ({\vec {u}}\cdot {\vec {v}})\cdot {\vec {w}}}$ makes no sense; the dot product of a scalar and a vector is not defined.
5. This is a vague question so it has many answers. Some are (1) ${\displaystyle k({\vec {u}}\cdot {\vec {v}})=(k{\vec {u}})\cdot {\vec {v}}}$ and ${\displaystyle k({\vec {u}}\cdot {\vec {v}})={\vec {u}}\cdot (k{\vec {v}})}$, (2) ${\displaystyle k({\vec {u}}\cdot {\vec {v}})\neq (k{\vec {u}})\cdot (k{\vec {v}})}$ (in general; an example is easy to produce), and (3) ${\displaystyle |k{\vec {v}}\,|=|k||{\vec {v}}\,|}$ (the connection between norm and dot product is that the square of the norm is the dot product of a vector with itself).
Problem 9

Verify the equality condition in Corollary 2.6, the Cauchy-Schwartz Inequality.

1. Show that if ${\displaystyle {\vec {u}}}$ is a negative scalar multiple of ${\displaystyle {\vec {v}}}$ then ${\displaystyle {\vec {u}}\cdot {\vec {v}}}$ and ${\displaystyle {\vec {v}}\cdot {\vec {u}}}$ are less than or equal to zero.
2. Show that ${\displaystyle |{\vec {u}}\cdot {\vec {v}}|=|{\vec {u}}\,|\,|{\vec {v}}\,|}$ if and only if one vector is a scalar multiple of the other.
1. Verifying that ${\displaystyle (k{\vec {x}})\cdot {\vec {y}}=k({\vec {x}}\cdot {\vec {y}})={\vec {x}}\cdot (k{\vec {y}})}$ for ${\displaystyle k\in \mathbb {R} }$ and ${\displaystyle {\vec {x}},{\vec {y}}\in \mathbb {R} ^{n}}$ is easy. Now, for ${\displaystyle k\in \mathbb {R} }$ and ${\displaystyle {\vec {u}},{\vec {v}}\in \mathbb {R} ^{n}}$, if ${\displaystyle {\vec {u}}=k{\vec {v}}}$ then ${\displaystyle {\vec {u}}\cdot {\vec {v}}=(k{\vec {v}})\cdot {\vec {v}}=k({\vec {v}}\cdot {\vec {v}})}$, which is ${\displaystyle k}$ times a nonnegative real; when ${\displaystyle k\leq 0}$ this is less than or equal to zero. The ${\displaystyle {\vec {v}}=k{\vec {u}}}$ half is similar (actually, taking the ${\displaystyle k}$ in this paragraph to be the reciprocal of the ${\displaystyle k}$ above shows that we need only worry about the ${\displaystyle k=0}$ case).
2. We first consider the ${\displaystyle {\vec {u}}\cdot {\vec {v}}\geq 0}$ case. From the Triangle Inequality we know that ${\displaystyle {\vec {u}}\cdot {\vec {v}}=|{\vec {u}}\,|\,|{\vec {v}}\,|}$ if and only if one vector is a nonnegative scalar multiple of the other. That is all we need, because the first part of this exercise shows that, in a context where the dot product of the two vectors is positive, the two statements "one vector is a scalar multiple of the other" and "one vector is a nonnegative scalar multiple of the other" are equivalent. We finish by considering the ${\displaystyle {\vec {u}}\cdot {\vec {v}}<0}$ case. Because ${\displaystyle 0<|{\vec {u}}\cdot {\vec {v}}|=-({\vec {u}}\cdot {\vec {v}})=(-{\vec {u}})\cdot {\vec {v}}}$ and ${\displaystyle |{\vec {u}}\,|\,|{\vec {v}}\,|=|-{\vec {u}}\,|\,|{\vec {v}}\,|}$, we have that ${\displaystyle 0<(-{\vec {u}})\cdot {\vec {v}}=|-{\vec {u}}\,|\,|{\vec {v}}\,|}$. Now the prior paragraph applies to give that one of the two vectors ${\displaystyle -{\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ is a scalar multiple of the other. But that is equivalent to the assertion that one of the two vectors ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ is a scalar multiple of the other, as desired.
Problem 10

Suppose that ${\displaystyle {\vec {u}}\cdot {\vec {v}}={\vec {u}}\cdot {\vec {w}}}$ and ${\displaystyle {\vec {u}}\neq {\vec {0}}}$. Must ${\displaystyle {\vec {v}}={\vec {w}}}$?

No. These vectors give a counterexample.

${\displaystyle {\vec {u}}={\begin{pmatrix}1\\0\end{pmatrix}}\quad {\vec {v}}={\begin{pmatrix}1\\0\end{pmatrix}}\quad {\vec {w}}={\begin{pmatrix}1\\1\end{pmatrix}}}$
This exercise is recommended for all readers.
Problem 11

Does any vector have length zero except a zero vector? (If "yes", produce an example. If "no", prove it.)

We prove that a vector has length zero if and only if all its components are zero.

Let ${\displaystyle {\vec {u}}\in \mathbb {R} ^{n}}$ have components ${\displaystyle u_{1},\ldots ,u_{n}}$. Recall that the square of any real number is greater than or equal to zero, with equality only when that real is zero. Thus ${\displaystyle |{\vec {u}}\,|^{2}={u_{1}}^{2}+\cdots +{u_{n}}^{2}}$ is a sum of numbers greater than or equal to zero, and so is itself greater than or equal to zero, with equality if and only if each ${\displaystyle u_{i}}$ is zero. Hence ${\displaystyle |{\vec {u}}\,|=0}$ if and only if all the components of ${\displaystyle {\vec {u}}}$ are zero.

This exercise is recommended for all readers.
Problem 12

Find the midpoint of the line segment connecting ${\displaystyle (x_{1},y_{1})}$ with ${\displaystyle (x_{2},y_{2})}$ in ${\displaystyle \mathbb {R} ^{2}}$. Generalize to ${\displaystyle \mathbb {R} ^{n}}$.

We can easily check that

${\displaystyle {\bigl (}{\frac {x_{1}+x_{2}}{2}},{\frac {y_{1}+y_{2}}{2}}{\bigr )}}$

is on the line connecting the two, and is equidistant from both. The generalization to ${\displaystyle \mathbb {R} ^{n}}$ is the point whose ${\displaystyle i}$-th component is the average of the ${\displaystyle i}$-th components of the two endpoints.
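A sketch of the componentwise-average formula (the `midpoint` helper is our name); it works in any dimension:

```python
def midpoint(p, q):
    """Midpoint of the segment from p to q: the componentwise average."""
    if len(p) != len(q):
        raise ValueError("endpoints must have the same dimension")
    return tuple((a + b) / 2 for a, b in zip(p, q))

m = midpoint((1, 2), (5, 10))  # the point (3.0, 6.0) in R^2
```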

Problem 13

Show that if ${\displaystyle {\vec {v}}\neq {\vec {0}}}$ then ${\displaystyle {\vec {v}}/|{\vec {v}}\,|}$ has length one. What if ${\displaystyle {\vec {v}}={\vec {0}}}$?

Assume that ${\displaystyle {\vec {v}}\in \mathbb {R} ^{n}}$ has components ${\displaystyle v_{1},\ldots ,v_{n}}$. If ${\displaystyle {\vec {v}}\neq {\vec {0}}}$ then we have this.

${\displaystyle {\begin{aligned}{\sqrt {\left({\frac {v_{1}}{\sqrt {{v_{1}}^{2}+\cdots +{v_{n}}^{2}}}}\right)^{2}+\cdots +\left({\frac {v_{n}}{\sqrt {{v_{1}}^{2}+\cdots +{v_{n}}^{2}}}}\right)^{2}}}&={\sqrt {{\frac {{v_{1}}^{2}}{{v_{1}}^{2}+\cdots +{v_{n}}^{2}}}+\cdots +{\frac {{v_{n}}^{2}}{{v_{1}}^{2}+\cdots +{v_{n}}^{2}}}}}\\&=1\end{aligned}}}$

If ${\displaystyle {\vec {v}}={\vec {0}}}$ then ${\displaystyle {\vec {v}}/|{\vec {v}}\,|}$ is not defined.
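A numeric sketch of the normalization (the `normalize` helper is ours); the result always has length one, and the zero vector is rejected:

```python
from math import sqrt

def normalize(v):
    """Return v / |v|, which has length one; undefined for the zero vector."""
    norm = sqrt(sum(x * x for x in v))
    if norm == 0:
        raise ValueError("cannot normalize the zero vector")
    return tuple(x / norm for x in v)

unit = normalize((3, 4))  # (0.6, 0.8)
```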

Problem 14

Show that if ${\displaystyle r\geq 0}$ then ${\displaystyle r{\vec {v}}}$ is ${\displaystyle r}$ times as long as ${\displaystyle {\vec {v}}}$. What if ${\displaystyle r<0}$?

For the first question, assume that ${\displaystyle {\vec {v}}\in \mathbb {R} ^{n}}$ and ${\displaystyle r\geq 0}$, take the root, and factor.

${\displaystyle |r{\vec {v}}\,|={\sqrt {(rv_{1})^{2}+\cdots +(rv_{n})^{2}}}={\sqrt {r^{2}({v_{1}}^{2}+\cdots +{v_{n}}^{2})}}=r|{\vec {v}}\,|}$

For the second question, where ${\displaystyle r<0}$, the result is ${\displaystyle |r|}$ times as long, but it points in the opposite direction because ${\displaystyle r{\vec {v}}=-(|r|{\vec {v}}\,)}$.

This exercise is recommended for all readers.
Problem 15

A vector ${\displaystyle {\vec {v}}\in \mathbb {R} ^{n}}$ of length one is a unit vector. Show that the dot product of two unit vectors has absolute value less than or equal to one. Can "less than" happen? Can "equal to"?

Assume that ${\displaystyle {\vec {u}},{\vec {v}}\in \mathbb {R} ^{n}}$ both have length ${\displaystyle 1}$. Apply Cauchy-Schwartz: ${\displaystyle |{\vec {u}}\cdot {\vec {v}}|\leq |{\vec {u}}\,|\,|{\vec {v}}\,|=1}$.

To see that "less than" can happen, in ${\displaystyle \mathbb {R} ^{2}}$ take

${\displaystyle {\vec {u}}={\begin{pmatrix}1\\0\end{pmatrix}}\qquad {\vec {v}}={\begin{pmatrix}0\\1\end{pmatrix}}}$

and note that ${\displaystyle {\vec {u}}\cdot {\vec {v}}=0}$. For "equal to", note that ${\displaystyle {\vec {u}}\cdot {\vec {u}}=1}$.

Problem 16

Prove that ${\displaystyle |{\vec {u}}+{\vec {v}}\,|^{2}+|{\vec {u}}-{\vec {v}}\,|^{2}=2|{\vec {u}}\,|^{2}+2|{\vec {v}}\,|^{2}.}$

Write

${\displaystyle {\vec {u}}={\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}\qquad {\vec {v}}={\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}}$

and then this computation works.

${\displaystyle {\begin{array}{rl}|{\vec {u}}+{\vec {v}}\,|^{2}+|{\vec {u}}-{\vec {v}}\,|^{2}&=(u_{1}+v_{1})^{2}+\cdots +(u_{n}+v_{n})^{2}\\&\quad +(u_{1}-v_{1})^{2}+\cdots +(u_{n}-v_{n})^{2}\\&={u_{1}}^{2}+2u_{1}v_{1}+{v_{1}}^{2}+\cdots +{u_{n}}^{2}+2u_{n}v_{n}+{v_{n}}^{2}\\&\quad +{u_{1}}^{2}-2u_{1}v_{1}+{v_{1}}^{2}+\cdots +{u_{n}}^{2}-2u_{n}v_{n}+{v_{n}}^{2}\\&=2({u_{1}}^{2}+\cdots +{u_{n}}^{2})+2({v_{1}}^{2}+\cdots +{v_{n}}^{2})\\&=2|{\vec {u}}\,|^{2}+2|{\vec {v}}\,|^{2}\end{array}}}$
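This identity (the parallelogram law) is easy to spot-check numerically; a sketch with our helper names:

```python
def norm_squared(v):
    """|v|^2, the dot product of v with itself."""
    return sum(x * x for x in v)

def parallelogram_law_holds(u, v):
    """Check |u+v|^2 + |u-v|^2 == 2|u|^2 + 2|v|^2 up to rounding error."""
    s = [a + b for a, b in zip(u, v)]
    d = [a - b for a, b in zip(u, v)]
    lhs = norm_squared(s) + norm_squared(d)
    rhs = 2 * norm_squared(u) + 2 * norm_squared(v)
    return abs(lhs - rhs) < 1e-9
```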
Problem 17

Show that if ${\displaystyle {\vec {x}}\cdot {\vec {y}}=0}$ for every ${\displaystyle {\vec {y}}}$ then ${\displaystyle {\vec {x}}={\vec {0}}}$.

We will prove this by demonstrating the contrapositive: if ${\displaystyle {\vec {x}}\neq {\vec {0}}}$ then there is a ${\displaystyle {\vec {y}}}$ with ${\displaystyle {\vec {x}}\cdot {\vec {y}}\neq 0}$.

Assume that ${\displaystyle {\vec {x}}\in \mathbb {R} ^{n}}$. If ${\displaystyle {\vec {x}}\neq {\vec {0}}}$ then it has a nonzero component, say the ${\displaystyle i}$-th one ${\displaystyle x_{i}}$. But the vector ${\displaystyle {\vec {y}}\in \mathbb {R} ^{n}}$ that is all zeroes except for a one in component ${\displaystyle i}$ gives ${\displaystyle {\vec {x}}\cdot {\vec {y}}=x_{i}}$. (A slicker proof just considers ${\displaystyle {\vec {x}}\cdot {\vec {x}}}$.)

Problem 18

Is ${\displaystyle |{\vec {u}}_{1}+\cdots +{\vec {u}}_{n}|\leq |{\vec {u}}_{1}|+\cdots +|{\vec {u}}_{n}|}$? If so, this generalizes the Triangle Inequality.

Yes; we can prove this by induction.

Assume that the vectors are in some ${\displaystyle \mathbb {R} ^{k}}$. Clearly the statement applies to one vector. The Triangle Inequality is this statement applied to two vectors. For an inductive step assume the statement is true for ${\displaystyle n}$ or fewer vectors. Then this

${\displaystyle |{\vec {u}}_{1}+\cdots +{\vec {u}}_{n}+{\vec {u}}_{n+1}|\leq |{\vec {u}}_{1}+\cdots +{\vec {u}}_{n}|+|{\vec {u}}_{n+1}|}$

follows by the Triangle Inequality for two vectors. Applying the inductive hypothesis to the first summand on the right bounds the whole by ${\displaystyle |{\vec {u}}_{1}|+\cdots +|{\vec {u}}_{n}|+|{\vec {u}}_{n+1}|}$.

Problem 19

What is the ratio between the sides in the Cauchy-Schwartz inequality?

By definition

${\displaystyle {\frac {{\vec {u}}\cdot {\vec {v}}}{|{\vec {u}}\,|\,|{\vec {v}}\,|}}=\cos \theta }$

where ${\displaystyle \theta }$ is the angle between the vectors. Thus the ratio is ${\displaystyle |\cos \theta |}$.

Problem 20

Why is the zero vector defined to be perpendicular to every vector?

So that the statement "vectors are orthogonal iff their dot product is zero" has no exceptions.

Problem 21

Describe the angle between two vectors in ${\displaystyle \mathbb {R} ^{1}}$.

The angle between ${\displaystyle (a)}$ and ${\displaystyle (b)}$ is found (for ${\displaystyle a,b\neq 0}$) with

${\displaystyle \arccos({\frac {ab}{{\sqrt {a^{2}}}{\sqrt {b^{2}}}}}).}$

If ${\displaystyle a}$ or ${\displaystyle b}$ is zero then the angle is ${\displaystyle \pi /2}$ radians. Otherwise, if ${\displaystyle a}$ and ${\displaystyle b}$ are of opposite signs then the angle is ${\displaystyle \pi }$ radians, else the angle is zero radians.

Problem 22

Give a simple necessary and sufficient condition to determine whether the angle between two vectors is acute, right, or obtuse.

The angle between ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ is acute if ${\displaystyle {\vec {u}}\cdot {\vec {v}}>0}$, is right if ${\displaystyle {\vec {u}}\cdot {\vec {v}}=0}$, and is obtuse if ${\displaystyle {\vec {u}}\cdot {\vec {v}}<0}$. That's because, in the formula for the angle, the denominator is never negative.
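The condition applies mechanically; a sketch for nonzero vectors (the helper name is ours):

```python
def classify_angle(u, v):
    """Classify the angle between nonzero vectors by the sign of the dot product."""
    dot = sum(a * b for a, b in zip(u, v))
    if dot > 0:
        return "acute"
    if dot < 0:
        return "obtuse"
    return "right"
```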

This exercise is recommended for all readers.
Problem 23

Generalize to ${\displaystyle \mathbb {R} ^{n}}$ the converse of the Pythagorean Theorem, that if ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ are perpendicular then ${\displaystyle |{\vec {u}}+{\vec {v}}\,|^{2}=|{\vec {u}}\,|^{2}+|{\vec {v}}\,|^{2}}$.

Suppose that ${\displaystyle {\vec {u}},{\vec {v}}\in \mathbb {R} ^{n}}$. If ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ are perpendicular then

${\displaystyle |{\vec {u}}+{\vec {v}}\,|^{2}=({\vec {u}}+{\vec {v}})\cdot ({\vec {u}}+{\vec {v}})={\vec {u}}\cdot {\vec {u}}+2\,{\vec {u}}\cdot {\vec {v}}+{\vec {v}}\cdot {\vec {v}}={\vec {u}}\cdot {\vec {u}}+{\vec {v}}\cdot {\vec {v}}=|{\vec {u}}\,|^{2}+|{\vec {v}}\,|^{2}}$

(the third equality holds because ${\displaystyle {\vec {u}}\cdot {\vec {v}}=0}$).

Problem 24

Show that ${\displaystyle |{\vec {u}}\,|=|{\vec {v}}\,|}$ if and only if ${\displaystyle {\vec {u}}+{\vec {v}}}$ and ${\displaystyle {\vec {u}}-{\vec {v}}}$ are perpendicular. Give an example in ${\displaystyle \mathbb {R} ^{2}}$.

Where ${\displaystyle {\vec {u}},{\vec {v}}\in \mathbb {R} ^{n}}$, the vectors ${\displaystyle {\vec {u}}+{\vec {v}}}$ and ${\displaystyle {\vec {u}}-{\vec {v}}}$ are perpendicular if and only if ${\displaystyle 0=({\vec {u}}+{\vec {v}})\cdot ({\vec {u}}-{\vec {v}})={\vec {u}}\cdot {\vec {u}}-{\vec {v}}\cdot {\vec {v}}}$, which shows that those two are perpendicular if and only if ${\displaystyle {\vec {u}}\cdot {\vec {u}}={\vec {v}}\cdot {\vec {v}}}$. That holds if and only if ${\displaystyle |{\vec {u}}\,|=|{\vec {v}}\,|}$.

Problem 25

Show that if a vector is perpendicular to each of two others then it is perpendicular to each vector in the plane they generate. (Remark. They could generate a degenerate plane— a line or a point— but the statement remains true.)

Suppose ${\displaystyle {\vec {u}}\in \mathbb {R} ^{n}}$ is perpendicular to both ${\displaystyle {\vec {v}}\in \mathbb {R} ^{n}}$ and ${\displaystyle {\vec {w}}\in \mathbb {R} ^{n}}$. Then, for any ${\displaystyle k,m\in \mathbb {R} }$ we have this.

${\displaystyle {\vec {u}}\cdot (k{\vec {v}}+m{\vec {w}})=k({\vec {u}}\cdot {\vec {v}})+m({\vec {u}}\cdot {\vec {w}})=k(0)+m(0)=0}$
Problem 26

Prove that, where ${\displaystyle {\vec {u}},{\vec {v}}\in \mathbb {R} ^{n}}$ are nonzero vectors, the vector

${\displaystyle {\frac {\vec {u}}{|{\vec {u}}\,|}}+{\frac {\vec {v}}{|{\vec {v}}\,|}}}$

bisects the angle between them. Illustrate in ${\displaystyle \mathbb {R} ^{2}}$.

We will show something more general: if ${\displaystyle |{\vec {z}}_{1}|=|{\vec {z}}_{2}|}$ for ${\displaystyle {\vec {z}}_{1},{\vec {z}}_{2}\in \mathbb {R} ^{n}}$, then ${\displaystyle {\vec {z}}_{1}+{\vec {z}}_{2}}$ bisects the angle between ${\displaystyle {\vec {z}}_{1}}$ and ${\displaystyle {\vec {z}}_{2}}$ (we ignore the degenerate case where both are the zero vector). The stated result then follows because ${\displaystyle {\vec {u}}/|{\vec {u}}\,|}$ and ${\displaystyle {\vec {v}}/|{\vec {v}}\,|}$ are unit vectors.

The ${\displaystyle {\vec {z}}_{1}+{\vec {z}}_{2}={\vec {0}}}$ case is easy. For the rest, by the definition of angle, we will be done if we show this.

${\displaystyle {\frac {{\vec {z}}_{1}\cdot ({\vec {z}}_{1}+{\vec {z}}_{2})}{|{\vec {z}}_{1}|\,|{\vec {z}}_{1}+{\vec {z}}_{2}|}}={\frac {{\vec {z}}_{2}\cdot ({\vec {z}}_{1}+{\vec {z}}_{2})}{|{\vec {z}}_{2}|\,|{\vec {z}}_{1}+{\vec {z}}_{2}|}}}$

But distributing inside each expression gives

${\displaystyle {\frac {{\vec {z}}_{1}\cdot {\vec {z}}_{1}+{\vec {z}}_{1}\cdot {\vec {z}}_{2}}{|{\vec {z}}_{1}|\,|{\vec {z}}_{1}+{\vec {z}}_{2}|}}\qquad {\frac {{\vec {z}}_{2}\cdot {\vec {z}}_{1}+{\vec {z}}_{2}\cdot {\vec {z}}_{2}}{|{\vec {z}}_{2}|\,|{\vec {z}}_{1}+{\vec {z}}_{2}|}}}$

and ${\displaystyle {\vec {z}}_{1}\cdot {\vec {z}}_{1}=|{\vec {z}}_{1}|^{2}=|{\vec {z}}_{2}|^{2}={\vec {z}}_{2}\cdot {\vec {z}}_{2}}$, so the two are equal.
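A numeric illustration in ${\displaystyle \mathbb {R} ^{2}}$ (the helper names and the example vectors are ours): the candidate vector makes equal angles with ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$.

```python
from math import acos, sqrt

def angle(u, v):
    """Angle in radians between nonzero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return acos(dot / norms)

def bisector(u, v):
    """The vector u/|u| + v/|v| from the problem statement."""
    nu = sqrt(sum(x * x for x in u))
    nv = sqrt(sum(x * x for x in v))
    return tuple(a / nu + b / nv for a, b in zip(u, v))

u, v = (1.0, 0.0), (1.0, 1.0)
w = bisector(u, v)  # makes equal angles with u and v
```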

Problem 27

Verify that the definition of angle is dimensionally correct: (1) if ${\displaystyle k>0}$ then the cosine of the angle between ${\displaystyle k{\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ equals the cosine of the angle between ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$, and (2) if ${\displaystyle k<0}$ then the cosine of the angle between ${\displaystyle k{\vec {u}}}$ and ${\displaystyle {\vec {v}}}$ is the negative of the cosine of the angle between ${\displaystyle {\vec {u}}}$ and ${\displaystyle {\vec {v}}}$.

We can show the two statements together. Let nonzero ${\displaystyle k\in \mathbb {R} }$ and ${\displaystyle {\vec {u}},{\vec {v}}\in \mathbb {R} ^{n}}$ be given, write

${\displaystyle {\vec {u}}={\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}\qquad {\vec {v}}={\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}}$

and calculate.

${\displaystyle \cos \theta ={\frac {ku_{1}v_{1}+\cdots +ku_{n}v_{n}}{{\sqrt {{(ku_{1})}^{2}+\cdots +{(ku_{n})}^{2}}}{\sqrt {{v_{1}}^{2}+\cdots +{v_{n}}^{2}}}}}={\frac {k}{|k|}}{\frac {{\vec {u}}\cdot {\vec {v}}}{|{\vec {u}}\,|\,|{\vec {v}}\,|}}=\pm {\frac {{\vec {u}}\cdot {\vec {v}}}{|{\vec {u}}\,|\,|{\vec {v}}\,|}}}$
This exercise is recommended for all readers.
Problem 28

Show that the inner product operation is linear: for ${\displaystyle {\vec {u}},{\vec {v}},{\vec {w}}\in \mathbb {R} ^{n}}$ and ${\displaystyle k,m\in \mathbb {R} }$, ${\displaystyle {\vec {u}}\cdot (k{\vec {v}}+m{\vec {w}})=k({\vec {u}}\cdot {\vec {v}})+m({\vec {u}}\cdot {\vec {w}})}$.

Let

${\displaystyle {\vec {u}}={\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}},\quad {\vec {v}}={\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}},\quad {\vec {w}}={\begin{pmatrix}w_{1}\\\vdots \\w_{n}\end{pmatrix}}}$

and then

${\displaystyle {\begin{array}{rl}{\vec {u}}\cdot {\bigl (}k{\vec {v}}+m{\vec {w}}{\bigr )}&={\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}\cdot {\bigl (}{\begin{pmatrix}kv_{1}\\\vdots \\kv_{n}\end{pmatrix}}+{\begin{pmatrix}mw_{1}\\\vdots \\mw_{n}\end{pmatrix}}{\bigr )}\\&={\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}\cdot {\begin{pmatrix}kv_{1}+mw_{1}\\\vdots \\kv_{n}+mw_{n}\end{pmatrix}}\\&=u_{1}(kv_{1}+mw_{1})+\cdots +u_{n}(kv_{n}+mw_{n})\\&=ku_{1}v_{1}+mu_{1}w_{1}+\cdots +ku_{n}v_{n}+mu_{n}w_{n}\\&=(ku_{1}v_{1}+\cdots +ku_{n}v_{n})+(mu_{1}w_{1}+\cdots +mu_{n}w_{n})\\&=k({\vec {u}}\cdot {\vec {v}})+m({\vec {u}}\cdot {\vec {w}})\end{array}}}$

as required.

This exercise is recommended for all readers.
Problem 29

The geometric mean of two positive reals ${\displaystyle x,y}$ is ${\displaystyle {\sqrt {xy}}}$. It is analogous to the arithmetic mean ${\displaystyle (x+y)/2}$. Use the Cauchy-Schwartz inequality to show that the geometric mean of any ${\displaystyle x,y\in \mathbb {R} ^{+}}$ is less than or equal to the arithmetic mean.

For ${\displaystyle x,y\in \mathbb {R} ^{+}}$, set

${\displaystyle {\vec {u}}={\begin{pmatrix}{\sqrt {x}}\\{\sqrt {y}}\end{pmatrix}}\qquad {\vec {v}}={\begin{pmatrix}{\sqrt {y}}\\{\sqrt {x}}\end{pmatrix}}}$

so that the Cauchy-Schwartz inequality asserts that (after squaring)

${\displaystyle {\begin{array}{rl}({\sqrt {x}}{\sqrt {y}}+{\sqrt {y}}{\sqrt {x}})^{2}&\leq ({\sqrt {x}}{\sqrt {x}}+{\sqrt {y}}{\sqrt {y}})({\sqrt {y}}{\sqrt {y}}+{\sqrt {x}}{\sqrt {x}})\\(2{\sqrt {x}}{\sqrt {y}})^{2}&\leq (x+y)^{2}\\{\sqrt {xy}}&\leq {\frac {x+y}{2}}\end{array}}}$

as desired.
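The choice of vectors can be replayed numerically (the `means` helper is ours): the dot product is twice the geometric mean and the product of the norms is the sum ${\displaystyle x+y}$.

```python
from math import sqrt

def means(x, y):
    """Geometric and arithmetic means of positive x, y via the vectors above."""
    u = (sqrt(x), sqrt(y))
    v = (sqrt(y), sqrt(x))
    dot = sum(a * b for a, b in zip(u, v))  # equals 2*sqrt(x*y)
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))  # x + y
    return dot / 2, norms / 2

g, a = means(2, 8)  # geometric mean 4, arithmetic mean 5
```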

? Problem 30

A ship is sailing with speed and direction ${\displaystyle {\vec {v}}_{1}}$; the wind blows apparently (judging by the vane on the mast) in the direction of a vector ${\displaystyle {\vec {a}}}$; on changing the direction and speed of the ship from ${\displaystyle {\vec {v}}_{1}}$ to ${\displaystyle {\vec {v}}_{2}}$ the apparent wind is in the direction of a vector ${\displaystyle {\vec {b}}}$.

Find the vector velocity of the wind (Ivanoff & Esty 1933).

This is how the answer was given in the cited source.

The actual velocity ${\displaystyle {\vec {v}}}$ of the wind is the sum of the ship's velocity and the apparent velocity of the wind. Without loss of generality we may assume ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$ to be unit vectors, and may write

${\displaystyle {\vec {v}}={\vec {v}}_{1}+s{\vec {a}}={\vec {v}}_{2}+t{\vec {b}}}$

where ${\displaystyle s}$ and ${\displaystyle t}$ are undetermined scalars. Take the dot product first by ${\displaystyle {\vec {a}}}$ and then by ${\displaystyle {\vec {b}}}$ to obtain

${\displaystyle {\begin{array}{rl}s-t{\vec {a}}\cdot {\vec {b}}&={\vec {a}}\cdot ({\vec {v}}_{2}-{\vec {v}}_{1})\\s{\vec {a}}\cdot {\vec {b}}-t&={\vec {b}}\cdot ({\vec {v}}_{2}-{\vec {v}}_{1})\end{array}}}$

Multiply the second by ${\displaystyle {\vec {a}}\cdot {\vec {b}}}$, subtract the result from the first, and find

${\displaystyle s={\frac {[{\vec {a}}-({\vec {a}}\cdot {\vec {b}}){\vec {b}}]\cdot ({\vec {v}}_{2}-{\vec {v}}_{1})}{1-({\vec {a}}\cdot {\vec {b}})^{2}}}.}$

Substituting in the original displayed equation, we get

${\displaystyle {\vec {v}}={\vec {v}}_{1}+{\frac {[{\vec {a}}-({\vec {a}}\cdot {\vec {b}}){\vec {b}}]\cdot ({\vec {v}}_{2}-{\vec {v}}_{1})}{1-({\vec {a}}\cdot {\vec {b}})^{2}}}\,{\vec {a}}.}$
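The derivation can be sketched as code: given the two ship velocities and unit vectors along the two apparent winds, solve for ${\displaystyle s}$ and recover ${\displaystyle {\vec {v}}}$. The function name and the example numbers below are ours, chosen for illustration.

```python
def wind_velocity(v1, v2, a, b):
    """True wind v from v = v1 + s*a = v2 + t*b, with a, b unit vectors."""
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    ab = dot(a, b)
    diff = tuple(x - y for x, y in zip(v2, v1))  # v2 - v1
    # s = [a - (a.b)b] . (v2 - v1) / (1 - (a.b)^2), as derived above
    s = (dot(a, diff) - ab * dot(b, diff)) / (1 - ab ** 2)
    return tuple(x + s * y for x, y in zip(v1, a))

# Example: true wind (2, 1); with ship velocity (0, 3) the apparent wind is
# along (1, -1)/sqrt(2), and with ship velocity (1, 0) it is along (1, 1)/sqrt(2).
r = 2 ** -0.5
wind = wind_velocity((0, 3), (1, 0), (r, -r), (r, r))  # recovers (2, 1)
```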
Problem 31

Verify the Cauchy-Schwartz inequality by first proving Lagrange's identity:

${\displaystyle \left(\sum _{1\leq j\leq n}a_{j}b_{j}\right)^{2}=\left(\sum _{1\leq j\leq n}{a_{j}}^{2}\right)\left(\sum _{1\leq j\leq n}{b_{j}}^{2}\right)-\sum _{1\leq k<j\leq n}(a_{k}b_{j}-a_{j}b_{k})^{2}}$

and then noting that the final term is nonnegative. (Recall the meaning

${\displaystyle \sum _{1\leq j\leq n}a_{j}b_{j}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}}$

and

${\displaystyle \sum _{1\leq j\leq n}{a_{j}}^{2}={a_{1}}^{2}+{a_{2}}^{2}+\cdots +{a_{n}}^{2}}$

of the ${\displaystyle \Sigma }$ notation.) This result is an improvement over Cauchy-Schwartz because it gives a formula for the difference between the two sides. Interpret that difference in ${\displaystyle \mathbb {R} ^{2}}$.

We use induction on ${\displaystyle n}$.

In the ${\displaystyle n=1}$ base case the identity reduces to

${\displaystyle (a_{1}b_{1})^{2}=({a_{1}}^{2})({b_{1}}^{2})-0}$

and clearly holds.

For the inductive step assume that the formula holds for the ${\displaystyle 1}$, ..., ${\displaystyle n}$ cases. We will show that it then holds in the ${\displaystyle n+1}$ case. Start with the right-hand side

${\displaystyle {\bigl (}\sum _{1\leq j\leq n+1}{a_{j}}^{2}{\bigr )}{\bigl (}\sum _{1\leq j\leq n+1}{b_{j}}^{2}{\bigr )}-\sum _{1\leq k<j\leq n+1}(a_{k}b_{j}-a_{j}b_{k})^{2}}$
${\displaystyle {\begin{aligned}&={\bigl [}{\bigl (}\sum _{1\leq j\leq n}{a_{j}}^{2}{\bigr )}+{a_{n+1}}^{2}{\bigr ]}{\bigl [}{\bigl (}\sum _{1\leq j\leq n}{b_{j}}^{2}{\bigr )}+{b_{n+1}}^{2}{\bigr ]}\\&\quad -{\bigl [}\sum _{1\leq k<j\leq n}(a_{k}b_{j}-a_{j}b_{k})^{2}+\sum _{1\leq k\leq n}(a_{k}b_{n+1}-a_{n+1}b_{k})^{2}{\bigr ]}\end{aligned}}}$

and apply the inductive hypothesis

${\displaystyle {\begin{array}{rl}&={\bigl (}\sum _{1\leq j\leq n}a_{j}b_{j}{\bigr )}^{2}+\sum _{1\leq j\leq n}{b_{j}}^{2}{a_{n+1}}^{2}+\sum _{1\leq j\leq n}{a_{j}}^{2}{b_{n+1}}^{2}+{a_{n+1}}^{2}{b_{n+1}}^{2}\\&\qquad -{\bigl [}\sum _{1\leq k\leq n}{a_{k}}^{2}{b_{n+1}}^{2}-2\sum _{1\leq k\leq n}a_{k}b_{n+1}a_{n+1}b_{k}+\sum _{1\leq k\leq n}{a_{n+1}}^{2}{b_{k}}^{2}{\bigr ]}\\&={\bigl (}\sum _{1\leq j\leq n}a_{j}b_{j}{\bigr )}^{2}+2{\bigl (}\sum _{1\leq k\leq n}a_{k}b_{n+1}a_{n+1}b_{k}{\bigr )}+{a_{n+1}}^{2}{b_{n+1}}^{2}\\&={\bigl [}{\bigl (}\sum _{1\leq j\leq n}a_{j}b_{j}{\bigr )}+a_{n+1}b_{n+1}{\bigr ]}^{2}\end{array}}}$

to derive the left-hand side.
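Lagrange's identity is easy to verify numerically for particular sequences (the helper name is ours); the subtracted sum of squares is exactly the gap in the Cauchy-Schwartz inequality.

```python
def lagrange_sides(a, b):
    """Left- and right-hand sides of Lagrange's identity for sequences a, b."""
    n = len(a)
    lhs = sum(a[j] * b[j] for j in range(n)) ** 2
    cross = sum((a[k] * b[j] - a[j] * b[k]) ** 2
                for k in range(n) for j in range(k + 1, n))
    rhs = sum(x * x for x in a) * sum(y * y for y in b) - cross
    return lhs, rhs
```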

## References

• O'Hanian, Hans (1985), Physics, 1, W. W. Norton
• Ivanoff, V. F. (proposer); Esty, T. C. (solver) (Feb. 1933), "Problem 3529", American Mathematical Monthly 39 (2): 118