Linear Algebra/Print version/Part 1



[Cover image: see "About the Cover" below.]



Linear Algebra
An Introduction to Mathematical Discourse


The book was designed specifically for students who had not previously been exposed to mathematics as mathematicians view it, that is, as a subject whose goal is to rigorously prove theorems starting from clear, consistent definitions. This book attempts to build students up from a background where mathematics is simply a tool that provides useful calculations to the point where the students have a grasp of the clear and precise nature of mathematics. A more detailed discussion of the prerequisites and goal of this book is given in the introduction.

Table of Contents

Linear Systems

  1. Solving Linear Systems
    1. Gauss' Method
    2. Describing the Solution Set
    3. General = Particular + Homogeneous
    4. Comparing Set Descriptions
    5. Automation
  2. Linear Geometry of n-Space
    1. Vectors in Space
    2. Length and Angle Measures
  3. Reduced Echelon Form
    1. Gauss-Jordan Reduction
    2. Row Equivalence
  4. Topic: Computer Algebra Systems
  5. Topic: Input-Output Analysis
  6. Input-Output Analysis M File
  7. Topic: Accuracy of Computations
  8. Topic: Analyzing Networks
  9. Topic: Speed of Gauss' Method

Vector Spaces

  1. Definition of Vector Space
    1. Definition and Examples
    2. Subspaces and Spanning sets
  2. Linear Independence
    1. Definition and Examples
  3. Basis and Dimension
    1. Basis
    2. Dimension
    3. Vector Spaces and Linear Systems
    4. Combining Subspaces
  4. Topic: Fields
  5. Topic: Crystals
  6. Topic: Voting Paradoxes
  7. Topic: Dimensional Analysis

Maps Between Spaces

  1. Isomorphisms
    1. Definition and Examples
    2. Dimension Characterizes Isomorphism
  2. Homomorphisms
    1. Definition of Homomorphism
    2. Rangespace and Nullspace
  3. Computing Linear Maps
    1. Representing Linear Maps with Matrices
    2. Any Matrix Represents a Linear Map
  4. Matrix Operations
    1. Sums and Scalar Products
    2. Matrix Multiplication
    3. Mechanics of Matrix Multiplication
    4. Inverses
  5. Change of Basis
    1. Changing Representations of Vectors
    2. Changing Map Representations
  6. Projection
    1. Orthogonal Projection Onto a Line
    2. Gram-Schmidt Orthogonalization
    3. Projection Onto a Subspace
  7. Topic: Line of Best Fit
  8. Topic: Geometry of Linear Maps
  9. Topic: Markov Chains
  10. Topic: Orthonormal Matrices

Determinants

  1. Definition
    1. Exploration
    2. Properties of Determinants
    3. The Permutation Expansion
    4. Determinants Exist
  2. Geometry of Determinants
    1. Determinants as Size Functions
  3. Other Formulas for Determinants
    1. Laplace's Expansion
  4. Topic: Cramer's Rule
  5. Topic: Speed of Calculating Determinants
  6. Topic: Projective Geometry

Similarity

  1. Complex Vector Spaces
    1. Factoring and Complex Numbers: A Review
    2. Complex Representations
  2. Similarity
    1. Definition and Examples
    2. Diagonalizability
    3. Eigenvalues and Eigenvectors
  3. Nilpotence
    1. Self-Composition
    2. Strings
  4. Jordan Form
    1. Polynomials of Maps and Matrices
    2. Jordan Canonical Form
  5. Topic: Geometry of Eigenvalues
  6. Topic: The Method of Powers
  7. Topic: Stable Populations
  8. Topic: Linear Recurrences

Appendix

Resources And Licensing



Notation

\mathbb{R}, \mathbb{R}^+, \mathbb{R}^n : set of real numbers, reals greater than 0, ordered n-tuples of reals

\mathbb{N} : natural numbers \{0,1,2,\ldots\}

\mathbb{C} : complex numbers

\{\ldots\,\big|\,\ldots\} : set of ... such that ...

(a\,..\,b), [a\,..\,b] : interval (open or closed) of reals between a and b

\langle \ldots \rangle : sequence; like a set but order matters

V,W,U : vector spaces

\vec{v},\vec{w} : vectors

\vec{0}, \vec{0}_V : zero vector, zero vector of V

B,D : bases

\mathcal{E}_n=\langle \vec{e}_1,\,\ldots,\,\vec{e}_n \rangle : standard basis for \mathbb{R}^n

\vec{\beta},\vec{\delta} : basis vectors

{\rm Rep}_{B}(\vec{v}) : matrix representing the vector

\mathcal{P}_n : set of n-th degree polynomials

\mathcal{M}_{n \! \times \! m} : set of n \! \times \! m matrices

[S] : span of the set S

M\oplus N : direct sum of subspaces

V\cong W : isomorphic spaces

h,g : homomorphisms, linear maps

H,G : matrices

t,s : transformations; maps from a space to itself

T,S : square matrices

{\rm Rep}_{B,D}(h) : matrix representing the map h

h_{i,j} : matrix entry from row i, column j

\left|T\right| : determinant of the matrix T

\mathcal{R}(h),\mathcal{N}(h) : rangespace and nullspace of the map h

\mathcal{R}_\infty(h),\mathcal{N}_\infty(h) : generalized rangespace and nullspace

Lower case Greek alphabet


\begin{array}{ll|ll|ll}
\text{name}    &\text{character}      &\text{name}   &\text{character}     &\text{name}   &\text{character} \\ 
\hline
\text{alpha}   & \alpha    &\text{iota}   & \iota    &\text{rho}    & \rho     \\
\text{beta}    & \beta     &\text{kappa}  & \kappa   &\text{sigma}  & \sigma   \\
\text{gamma}   & \gamma    &\text{lambda} & \lambda  &\text{tau}    & \tau     \\
\text{delta}   & \delta    &\text{mu}     & \mu      &\text{upsilon}& \upsilon \\
\text{epsilon} & \epsilon  &\text{nu}     & \nu      &\text{phi}    & \phi     \\
\text{zeta}    & \zeta     &\text{xi}     & \xi      &\text{chi}    & \chi     \\
\text{eta}     & \eta      &\text{omicron}& o        &\text{psi}    & \psi     \\
\text{theta}   & \theta    &\text{pi}     & \pi      &\text{omega}  & \omega  
\end{array}


About the Cover. This is Cramer's Rule for the system x_1+2x_2=6, 3x_1+x_2=8. The size of the first box is the determinant shown (the absolute value of the size is the area). The size of the second box is x_1 times that, and equals the size of the final box. Hence, x_1 is the final determinant divided by the first determinant.
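
In numbers, Cramer's Rule here runs as follows. The coefficient determinant, and the determinants with the first and then the second column replaced by the constants, are

\left|\begin{array}{cc} 1 &2 \\ 3 &1 \end{array}\right|=1\cdot 1-2\cdot 3=-5
\qquad
\left|\begin{array}{cc} 6 &2 \\ 8 &1 \end{array}\right|=6-16=-10
\qquad
\left|\begin{array}{cc} 1 &6 \\ 3 &8 \end{array}\right|=8-18=-10

so x_1=-10/-5=2 and x_2=-10/-5=2, which checks: 2+2\cdot 2=6 and 3\cdot 2+2=8.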


Introduction

This book helps students to master the material of a standard undergraduate linear algebra course.

The material is standard in that the topics covered are Gaussian reduction, vector spaces, linear maps, determinants, and eigenvalues and eigenvectors. The audience is also standard: sophomores or juniors, usually with a background of at least one semester of calculus and perhaps with as much as three semesters.

The help that it gives to students comes from taking a developmental approach—this book's presentation emphasizes motivation and naturalness, driven home by a wide variety of examples and extensive, careful exercises. The developmental approach is what sets this book apart, so some expansion of the term is appropriate here.

Courses in the beginning of most mathematics programs reward students less for understanding the theory and more for correctly applying formulas and algorithms. Later courses ask for mathematical maturity: the ability to follow different types of arguments, a familiarity with the themes that underlie many mathematical investigations like elementary set and function facts, and a capacity for some independent reading and thinking. Linear algebra is an ideal spot to work on the transition between the two kinds of courses. It comes early in a program so that progress made here pays off later, but also comes late enough that students are often majors and minors. The material is coherent, accessible, and elegant. There are a variety of argument styles—proofs by contradiction, if and only if statements, and proofs by induction, for instance—and examples are plentiful.

So, the aim of this book's exposition is to help students develop from being successful at their present level, in classes where a majority of the members are interested mainly in applications in science or engineering, to being successful at the next level, that of serious students of the subject of mathematics itself.

Helping students make this transition means taking the mathematics seriously, so all of the results in this book are proved. On the other hand, we cannot assume that students have already arrived, and so in contrast with more abstract texts, we give many examples and they are often quite detailed.

In the past, linear algebra texts commonly made this transition abruptly. They began with extensive computations of linear systems, matrix multiplications, and determinants. When the concepts—vector spaces and linear maps—finally appeared, and definitions and proofs started, often the change brought students to a stop. In this book, while we start with a computational topic, linear reduction, from the first we do more than compute. We do linear systems quickly but completely, including the proofs needed to justify what we are computing. Then, with the linear systems work as motivation and at a point where the study of linear combinations seems natural, the second chapter starts with the definition of a real vector space. This occurs by the end of the third week.

Another example of our emphasis on motivation and naturalness is that the third chapter on linear maps does not begin with the definition of homomorphism, but with that of isomorphism. That's because this definition is easily motivated by the observation that some spaces are "just like" others. After that, the next section takes the reasonable step of defining homomorphism by isolating the operation-preservation idea. This approach loses mathematical slickness, but it is a good trade because it comes in return for a large gain in sensibility to students.

One aim of a developmental approach is that students should feel throughout the presentation that they can see how the ideas arise, and perhaps picture themselves doing the same type of work.

The clearest example of the developmental approach taken here—and the feature that most recommends this book—is the exercises. A student progresses most while doing the exercises, so they have been selected with great care. Each problem set ranges from simple checks to reasonably involved proofs. Since an instructor usually assigns about a dozen exercises after each lecture, each section ends with about twice that many, thereby providing a selection. There are even a few problems that are challenging puzzles taken from various journals, competitions, or problems collections. (These are marked with a "?" and as part of the fun, the original wording has been retained as much as possible.) In total, the exercises are aimed to both build an ability at, and help students experience the pleasure of, doing mathematics.


Applications and Computers.

The point of view taken here, that linear algebra is about vector spaces and linear maps, is not taken to the complete exclusion of others. Applications and the role of the computer are important and vital aspects of the subject. Consequently, each of this book's chapters closes with a few application or computer-related topics. Some are: network flows, the speed and accuracy of computer linear reductions, Leontief Input/Output analysis, dimensional analysis, Markov chains, voting paradoxes, analytic projective geometry, and difference equations.

These topics are brief enough to be done in a day's class or to be given as independent projects for individuals or small groups. Most simply give the reader a taste of the subject, discuss how linear algebra comes in, point to some further reading, and give a few exercises. In short, these topics invite readers to see for themselves that linear algebra is a tool that a professional must have.

For people reading this book on their own.

This book's emphasis on motivation and development makes it a good choice for self-study. But, while a professional instructor can judge what pace and topics suit a class, if you are an independent student then perhaps you would find some advice helpful.

Here are two timetables for a semester. The first focuses on core material.

week   Monday                  Wednesday        Friday
1      One.I.1                 One.I.1, 2       One.I.2, 3
2      One.I.3                 One.II.1         One.II.2
3      One.III.1, 2            One.III.2        Two.I.1
4      Two.I.2                 Two.II           Two.III.1
5      Two.III.1, 2            Two.III.2        Exam
6      Two.III.2, 3            Two.III.3        Three.I.1
7      Three.I.2               Three.II.1       Three.II.2
8      Three.II.2              Three.II.2       Three.III.1
9      Three.III.1             Three.III.2      Three.IV.1, 2
10     Three.IV.2, 3, 4        Three.IV.4       Exam
11     Three.IV.4, Three.V.1   Three.V.1, 2     Four.I.1, 2
12     Four.I.3                Four.II          Four.II
13     Four.III.1              Five.I           Five.II.1
14     Five.II.2               Five.II.3        Review

The second timetable is more ambitious (it supposes that you know One.II, the elements of vectors, usually covered in third semester calculus).

week   Monday          Wednesday              Friday
1      One.I.1         One.I.2                One.I.3
2      One.I.3         One.III.1, 2           One.III.2
3      Two.I.1         Two.I.2                Two.II
4      Two.III.1       Two.III.2              Two.III.3
5      Two.III.4       Three.I.1              Exam
6      Three.I.2       Three.II.1             Three.II.2
7      Three.III.1     Three.III.2            Three.IV.1, 2
8      Three.IV.2      Three.IV.3             Three.IV.4
9      Three.V.1       Three.V.2              Three.VI.1
10     Three.VI.2      Four.I.1               Exam
11     Four.I.2        Four.I.3               Four.I.4
12     Four.II         Four.II, Four.III.1    Four.III.2, 3
13     Five.II.1, 2    Five.II.3              Five.III.1
14     Five.III.2      Five.IV.1, 2           Five.IV.2

See the table of contents for the titles of these subsections.

To help you make time trade-offs, in the table of contents I have marked subsections as optional if some instructors will pass over them in favor of spending more time elsewhere. You might also try picking one or two topics that appeal to you from the end of each chapter. You'll get more from these if you have access to computer software that can do the big calculations.

The most important advice is: do many exercises. The recommended exercises are labeled throughout. (The answers are available.) You should be aware, however, that few inexperienced people can write correct proofs. Try to find a knowledgeable person to work with you on this.

Finally, if I may, a caution for all students, independent or not: I cannot overemphasize how much the statement that I sometimes hear, "I understand the material, but it's only that I have trouble with the problems" reveals a lack of understanding of what we are up to. Being able to do things with the ideas is their point. The quotes below express this sentiment admirably. They state what I believe is the key to both the beauty and the power of mathematics and the sciences in general, and of linear algebra in particular (I took the liberty of formatting them as poems).



I know of no better tactic
 than the illustration of exciting principles
by well-chosen particulars.
        --Stephen Jay Gould



If you really wish to learn
 then you must mount the machine
 and become acquainted with its tricks
by actual trial.
        --Wilbur Wright




Jim Hefferon
Mathematics, Saint Michael's College
Colchester, Vermont USA 05439
http://joshua.smcvt.edu
2006-May-20






Author's Note. Inventing a good exercise, one that enlightens as well as tests, is a creative act, and hard work.

The inventor deserves recognition. But for some reason texts have traditionally not given attributions for questions. I have changed that here where I was sure of the source. I would greatly appreciate hearing from anyone who can help me to correctly attribute others of the questions.



Chapter I - Linear Systems

Section I - Solving Linear Systems

Systems of linear equations are common in science and mathematics. These two examples from high school science (O'Nan 1990) give a sense of how they arise.

The first example is from Physics. Suppose that we are given three objects, one with a mass known to be 2 kg, and are asked to find the unknown masses. Suppose further that experimentation with a meter stick produces these two balances.

[Figure: the two balance-scale arrangements produced by the experiment.]

Since the sum of moments on the left of each balance equals the sum of moments on the right (the moment of an object is its mass times its distance from the balance point), the two balances give this system of two equations.

\begin{array}{rl}
40h+15c  &= 100  \\
25c      &= 50+50h
\end{array}

The second example of a linear system is from Chemistry. We can mix, under controlled conditions, toluene \hbox{C}_7\hbox{H}_8 and nitric acid \hbox{H}\hbox{N}\hbox{O}_3 to produce trinitrotoluene \hbox{C}_7\hbox{H}_5\hbox{O}_6\hbox{N}_3 along with the byproduct water (conditions have to be controlled very well, indeed— trinitrotoluene is better known as TNT). In what proportion should those components be mixed? The number of atoms of each element present before the reaction


x\,{\rm C}_7{\rm H}_8\ +\ y\,{\rm H}{\rm N}{\rm O}_3
\quad\longrightarrow\quad
z\,{\rm C}_7{\rm H}_5{\rm O}_6{\rm N}_3\ +\ w\,{\rm H}_2{\rm O}

must equal the number present afterward. Applying that principle to the elements C, H, N, and O in turn gives this system.

\begin{array}{rl}
7x      &= 7z  \\
8x +1y  &= 5z+2w  \\
1y      &= 3z  \\
3y      &= 6z+1w
\end{array}

To finish each of these examples requires solving a system of equations. In each, the equations involve only the first power of the variables. This chapter shows how to solve any such system.


1 - Gauss' Method

Definition 1.1

A linear equation in variables  x_1,x_2,\ldots,x_n has the form


a_1x_1+a_2x_2+a_3x_3+\cdots+a_nx_n=d

where the numbers  a_1, \ldots ,a_n\in\mathbb{R} are the equation's coefficients and  d\in\mathbb{R} is the constant. An  n -tuple  (s_1,s_2,\ldots ,s_n)\in\mathbb{R}^n is a solution of, or satisfies, that equation if substituting the numbers s_1, ..., s_n for the variables gives a true statement: a_1s_1+a_2s_2+\ldots+a_ns_n=d.

A system of linear equations


\begin{array}{*{4}{rc}r}
a_{1,1}x_1 &+ &a_{1,2}x_2  &+  &\cdots &+ &a_{1,n}x_n &=  &d_1  \\
a_{2,1}x_1 &+ &a_{2,2}x_2  &+  &\cdots &+ &a_{2,n}x_n &=  &d_2  \\
&  &            &   &       &  &           &\vdots   \\
a_{m,1}x_1 &+ &a_{m,2}x_2  &+  &\cdots &+ &a_{m,n}x_n &=  &d_m
\end{array}

has the solution  (s_1,s_2,\ldots ,s_n) if that n-tuple is a solution of all of the equations in the system.

Example 1.2

The ordered pair  (-1,5) is a solution of this system.


\begin{array}{*{2}{rc}r}
3x_1 &+ &2x_2 &= &7  \\
-x_1 &+ &x_2  &= &6
\end{array}

In contrast,  (5,-1) is not a solution.
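
Checking is just substitution: with  x_1=-1 and  x_2=5 ,

3\cdot(-1)+2\cdot 5=7
\qquad\text{and}\qquad
-(-1)+5=6

while substituting the pair the other way around fails already in the first equation, since  3\cdot 5+2\cdot(-1)=13\neq 7 .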

Finding the set of all solutions is solving the system. No guesswork or good fortune is needed to solve a linear system. There is an algorithm that always works. The next example introduces that algorithm, called Gauss' method. It transforms the system, step by step, into one with a form that is easily solved.

Example 1.3

To solve this system


\begin{array}{*{3}{rc}r}
&   &      &   &3x_3  &=  &9  \\
x_1 &+  &5x_2  &-  &2x_3  &=  &2  \\
\frac{1}{3}x_1 &+  &2x_2  &   &      &=  &3  
\end{array}

we repeatedly transform it until it is in a form that is easy to solve.

\begin{array}{rcl}
\quad
&\xrightarrow[]{ \text{swap row 1 with row 3} }
&\begin{array}{*{3}{rc}r}
\frac{1}{3}x_1 &+  &2x_2  &   &      &=  &3  \\
x_1 &+  &5x_2  &-  &2x_3  &=  &2  \\
&   &      &   &3x_3  &=  &9  
\end{array}                                         \\
&\xrightarrow[]{ \text{multiply row 1 by 3} }
&\begin{array}{*{3}{rc}r}
x_1 &+  &6x_2  &   &      &=  &9  \\
x_1 &+  &5x_2  &-  &2x_3  &=  &2  \\
&   &      &   &3x_3  &=  &9  
\end{array}                                          \\
&\xrightarrow[]{ \text{add }-1\text{ times row 1 to row 2} }
&\begin{array}{*{3}{rc}r}
x_1 &+  &6x_2  &   &      &=  &9  \\
&   &-x_2  &-  &2x_3  &=  &-7 \\
&   &      &   &3x_3  &=  &9  
\end{array}
\end{array}

The third step is the only nontrivial one. We've mentally multiplied both sides of the first row by  -1 , mentally added that to the old second row, and written the result in as the new second row.

Now we can find the value of each variable. The bottom equation shows that  x_3=3 . Substituting 3 for  x_3 in the middle equation shows that  x_2=1 . Substituting those two into the top equation gives that  x_1=3 and so the system has a unique solution: the solution set is \{\,(3,1,3)\,\}.

Most of this subsection and the next one consists of examples of solving linear systems by Gauss' method. We will use it throughout this book. It is fast and easy. But, before we get to those examples, we will first show that this method is also safe in that it never loses solutions or picks up extraneous solutions.

Theorem 1.4 (Gauss' method)

If a linear system is changed to another by one of these operations

  1. an equation is swapped with another
  2. an equation has both sides multiplied by a nonzero constant
  3. an equation is replaced by the sum of itself and a multiple of another

then the two systems have the same set of solutions.

Each of those three operations has a restriction. Multiplying a row by  0 is not allowed because obviously that can change the solution set of the system. Similarly, adding a multiple of a row to itself is not allowed because adding  -1 times the row to itself has the effect of multiplying the row by  0 . Finally, swapping a row with itself is disallowed to make some results in the fourth chapter easier to state and remember (and besides, self-swapping doesn't accomplish anything).
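
For a concrete illustration of the first restriction, this system has exactly one solution, (1,1),

\begin{array}{*{2}{rc}r}
x  &+  &y  &=  &2  \\
x  &-  &y  &=  &0
\end{array}

but multiplying the second row by 0 turns it into the system x+y=2 and 0=0, which is satisfied by every pair whose components sum to 2, so the solution set has grown.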

Proof

We will cover the equation swap operation here and save the other two cases for Problem 14.

Consider this swap of row i with row j.


\begin{array}{*{4}{rc}r}
a_{1,1}x_1  &+  &a_{1,2}x_2 &+  &\cdots  &&a_{1,n}x_n  &=  &d_1  \\
&   &           &   &        &   &            &\vdots   \\
a_{i,1}x_1  &+  &a_{i,2}x_2 &+  &\cdots  &&a_{i,n}x_n  &=  &d_i  \\
&   &           &   &        &   &            &\vdots   \\
a_{j,1}x_1  &+  &a_{j,2}x_2 &+  &\cdots  &&a_{j,n}x_n  &=  &d_j  \\
&   &           &   &        &   &            &\vdots   \\
a_{m,1}x_1  &+  &a_{m,2}x_2 &+  &\cdots  &&a_{m,n}x_n  &=  &d_m  
\end{array}
\xrightarrow[]{}
\begin{array}{*{4}{rc}r}
a_{1,1}x_1  &+  &a_{1,2}x_2 &+  &\cdots  &&a_{1,n}x_n  &=  &d_1  \\
&   &           &   &        &   &            &\vdots   \\
a_{j,1}x_1  &+  &a_{j,2}x_2 &+  &\cdots  &&a_{j,n}x_n  &=  &d_j  \\
&   &           &   &        &   &            &\vdots   \\
a_{i,1}x_1  &+  &a_{i,2}x_2 &+  &\cdots  &&a_{i,n}x_n  &=  &d_i  \\
&   &           &   &        &   &            &\vdots   \\
a_{m,1}x_1  &+  &a_{m,2}x_2 &+  &\cdots  &&a_{m,n}x_n  &=  &d_m  
\end{array}


The  n -tuple  (s_1,\ldots\,,s_n) satisfies the system before the swap if and only if substituting the values, the s's, for the variables, the x's, gives true statements: a_{1,1}s_1+a_{1,2}s_2+\cdots+a_{1,n}s_n=d_1 and ... a_{i,1}s_1+a_{i,2}s_2+\cdots+a_{i,n}s_n=d_i and ... a_{j,1}s_1+a_{j,2}s_2+\cdots+a_{j,n}s_n=d_j and ... a_{m,1}s_1+a_{m,2}s_2+\cdots+a_{m,n}s_n=d_m.

In a requirement consisting of statements and-ed together we can rearrange the order of the statements, so that this requirement is met if and only if a_{1,1}s_1+a_{1,2}s_2+\cdots+a_{1,n}s_n=d_1 and ... a_{j,1}s_1+a_{j,2}s_2+\cdots+a_{j,n}s_n=d_j and ... a_{i,1}s_1+a_{i,2}s_2+\cdots+a_{i,n}s_n=d_i and ... a_{m,1}s_1+a_{m,2}s_2+\cdots+a_{m,n}s_n=d_m. This is exactly the requirement that  (s_1,\ldots\,,s_n) solves the system after the row swap.

Definition 1.5

The three operations from Theorem 1.4 are the elementary reduction operations, or row operations, or Gaussian operations. They are swapping, multiplying by a scalar or rescaling, and pivoting.

When writing out the calculations, we will abbreviate "row i" by " \rho_i ". For instance, we will denote a pivot operation by  k\rho_i+\rho_j , with the row that is changed written second. We will also, to save writing, often list pivot steps together when they use the same \rho_i.

Example 1.6

A typical use of Gauss' method is to solve this system.


\begin{array}{*{3}{rc}r}
x  &+  &y  &   &   &=  &0  \\
2x  &-  &y  &+  &3z &=  &3  \\
x  &-  &2y &-  &z  &=  &3  
\end{array}

The first transformation of the system involves using the first row to eliminate the x in the second row and the x in the third. To get rid of the second row's 2x, we multiply the entire first row by -2, add that to the second row, and write the result in as the new second row. To get rid of the third row's x, we multiply the first row by -1, add that to the third row, and write the result in as the new third row.

\begin{array}{rcl}
&\xrightarrow[-\rho_1 +\rho_3]{-2\rho_1 +\rho_2}
&\begin{array}{*{3}{rc}r}
x  &+  &y  &   &   &=  &0  \\
&   &-3y&+  &3z &=  &3  \\
&   &-3y&-  &z  &=  &3  
\end{array}   
\end{array}

(Note that the two \rho_1 steps -2\rho_1 +\rho_2 and -\rho_1 +\rho_3 are written as one operation.) In this second system, the last two equations involve only two unknowns. To finish we transform the second system into a third system, where the last equation involves only one unknown. This transformation uses the second row to eliminate y from the third row.

\begin{array}{rcl}
&\xrightarrow[]{-\rho_2 +\rho_3}
&\begin{array}{*{3}{rc}r}
x  &+  &y  &   &   &=  &0  \\
&   &-3y&+  &3z &=  &3  \\
&   &   &   &-4z&=  &0
\end{array}
\end{array}

Now we are set up for the solution. The third row shows that  z=0 . Substitute that back into the second row to get  y=-1 , and then substitute back into the first row to get  x=1 .

Example 1.7

For the Physics problem from the start of this chapter, Gauss' method gives this.

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
40h  &+  &15c  &=  &100      \\
-50h &+  &25c  &=  &50         
\end{array}
&\xrightarrow[]{5/4\rho_1 +\rho_2}
&\begin{array}{*{2}{rc}r}
40h  &+  &15c       &=  &100      \\
&   &(175/4)c  &=  &175 
\end{array}
\end{array}

So  c=4 , and back-substitution gives that  h=1 . (The Chemistry problem is solved later.)

Example 1.8

The reduction

\begin{array}{rcl}
\begin{array}{*{3}{rc}r}
x  &+  &y  &+  &z  &=  &9  \\
2x  &+  &4y &-  &3z &=  &1  \\
3x  &+  &6y &-  &5z &=  &0  
\end{array}
&\xrightarrow[-3\rho_1 +\rho_3]{-2\rho_1 +\rho_2}
&\begin{array}{*{3}{rc}r}
x  &+  &y  &+  &z  &=  &9  \\
&   &2y &-  &5z &=  &-17\\
&   &3y &-  &8z&=  &-27
\end{array}                                    \\
&\xrightarrow[]{-(3/2)\rho_2+\rho_3}
&\begin{array}{*{3}{rc}r}
x  &+  &y  &+  &z            &=  &9  \\
&   &2y &-  &5z           &=  &-17\\
&   &   &   &-(1/2)z      &=  &-(3/2) 
\end{array}
\end{array}

shows that  z=3 ,  y=-1 , and  x=7 .
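
Back-substitution is mechanical enough that a few lines of code capture it. Here is a minimal Python sketch, illustrative only and with names of our own choosing; it assumes that the reduction has already produced a triangular system like the one just above, with as many equations as unknowns and a nonzero leading coefficient in each row.

# Back-substitution on a triangular system with a unique solution.
# a[i] holds the coefficients of row i; d[i] is that row's constant.
def back_substitute(a, d):
    n = len(a)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # work from the bottom row up
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / a[i][i]                    # the leading coefficient is nonzero
    return x

# The reduced system of Example 1.8:
#   x + y + z = 9,   2y - 5z = -17,   -(1/2)z = -(3/2)
a = [[1, 1, 1], [0, 2, -5], [0, 0, -0.5]]
d = [9, -17, -1.5]
print(back_substitute(a, d))                           # [7.0, -1.0, 3.0]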

As these examples illustrate, Gauss' method uses the elementary reduction operations to set up back-substitution.

Definition 1.9

In each row, the first variable with a nonzero coefficient is the row's leading variable. A system is in echelon form if each leading variable is to the right of the leading variable in the row above it (except for the leading variable in the first row).

Example 1.10

The only operation needed in the examples above is pivoting. Here is a linear system that requires the operation of swapping equations. After the first pivot

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
x  &-  &y  &   &   &   &   &=  &0  \\
2x  &-  &2y &+  &z  &+  &2w &=  &4  \\
&   &y  &   &   &+  &w  &=  &0  \\
&   &   &   &2z &+  &w  &=  &5  
\end{array}
&\xrightarrow[]{-2\rho_1 +\rho_2}
&\begin{array}{*{4}{rc}r}
x  &-  &y  &   &   &   &   &=  &0  \\
&   &   &   &z  &+  &2w &=  &4  \\
&   &y  &   &   &+  &w  &=  &0  \\
&   &   &   &2z &+  &w  &=  &5  
\end{array}    
\end{array}

the second equation has no leading y. To get one, we look lower down in the system for a row that has a leading y and swap it in.

\begin{array}{rcl}
&\xrightarrow[]{\rho_2 \leftrightarrow\rho_3}
&\begin{array}{*{4}{rc}r}
x  &-  &y  &   &   &   &   &=  &0  \\
&   &y  &   &   &+  &w  &=  &0  \\
&   &   &   &z  &+  &2w &=  &4  \\
&   &   &   &2z &+  &w  &=  &5  
\end{array}    
\end{array}

(Had there been more than one row below the second with a leading y then we could have swapped in any one.) The rest of Gauss' method goes as before.

\begin{array}{rcl}
&\xrightarrow[]{-2\rho_3 +\rho_4}
&\begin{array}{*{4}{rc}r}
x  &-  &y  &   &   &   &   &=  &0  \\
&   &y  &   &   &+  &w  &=  &0  \\
&   &   &   &z  &+  &2w &=  &4  \\
&   &   &   &   &   &-3w&=  &-3 
\end{array}
\end{array}

Back-substitution gives  w=1 ,  z=2 ,  y=-1 , and  x=-1 .

Strictly speaking, the operation of rescaling rows is not needed to solve linear systems. We have included it because we will use it later in this chapter as part of a variation on Gauss' method, the Gauss-Jordan method.

All of the systems seen so far have the same number of equations as unknowns. All of them have a solution, and for all of them there is only one solution. We finish this subsection by seeing for contrast some other things that can happen.

Example 1.11

Linear systems need not have the same number of equations as unknowns. This system


\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
2x  &+  &y   &=  &-3 \\
2x  &+  &2y  &=  &-2 
\end{array}

has more equations than variables. Gauss' method helps us understand this system also, since this

\begin{array}{rcl}
&\xrightarrow[-2\rho_1 +\rho_3]{-2\rho_1 +\rho_2}
&\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
&   &-5y &=  &-5 \\
&   &-4y &=  &-4 
\end{array}
\end{array}

shows that one of the equations is redundant. Echelon form

\begin{array}{rcl}
&\xrightarrow[]{-(4/5)\rho_2 +\rho_3}
&\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
&   &-5y &=  &-5 \\
&   &0   &=  &0  
\end{array}
\end{array}

gives  y=1 and  x=-2 . The " 0=0 " is derived from the redundancy.

That example's system has more equations than variables. Gauss' method is also useful on systems with more variables than equations. Many examples are in the next subsection.

Another way that linear systems can differ from the examples shown earlier is that some linear systems do not have a unique solution. This can happen in two ways.

The first is that it can fail to have any solution at all.

Example 1.12

Contrast the system in the last example with this one.

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
2x  &+  &y   &=  &-3 \\
2x  &+  &2y  &=  &0  
\end{array}
&\xrightarrow[-2\rho_1 +\rho_3]{-2\rho_1 +\rho_2}
&\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
&   &-5y &=  &-5 \\
&   &-4y &=  &-2
\end{array}
\end{array}

Here the system is inconsistent: no pair of numbers satisfies all of the equations simultaneously. Echelon form makes this inconsistency obvious.

\begin{array}{rcl}
&\xrightarrow[]{-(4/5)\rho_2 +\rho_3}
&\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
&   &-5y &=  &-5 \\
&   &0   &=  &2 
\end{array}
\end{array}

The solution set is empty.

Example 1.13

The prior system has more equations than unknowns, but that is not what causes the inconsistency: Example 1.11 has more equations than unknowns and yet is consistent. Nor is having more equations than unknowns necessary for inconsistency, as is illustrated by this inconsistent system with the same number of equations as unknowns.

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
x  &+  &2y  &=  &8  \\
2x  &+  &4y  &=  &8  
\end{array}
&\xrightarrow[]{-2\rho_1 + \rho_2}
&\begin{array}{*{2}{rc}r}
x  &+  &2y  &=  &8  \\
&   &0   &=  &-8
\end{array}
\end{array}

The other way that a linear system can fail to have a unique solution is to have many solutions.

Example 1.14

In this system


\begin{array}{*{2}{rc}r}
x  &+  &y   &=  &4  \\
2x  &+  &2y  &=  &8  
\end{array}

any pair of numbers satisfying the first equation automatically satisfies the second. The solution set  \{ (x,y)\,\big|\, x+y=4 \} is infinite; some of its members are (0,4), (-1,5), and (2.5,1.5). The result of applying Gauss' method here contrasts with the prior example because we do not get a contradictory equation.

\begin{array}{rcl}
&\xrightarrow[]{-2\rho_1 + \rho_2}
&\begin{array}{*{2}{rc}r}
x  &+  &y   &=  &4  \\
&   &0   &=  &0  
\end{array}
\end{array}

Don't be fooled by the " 0=0 " equation in that example. It is not the signal that a system has many solutions.

Example 1.15

The absence of a " 0=0 " does not keep a system from having many different solutions. This echelon form system


\begin{array}{*{3}{rc}r}
x  &+  &y  &+  &z  &=  &0  \\
&   &y  &+  &z  &=  &0  
\end{array}

has no "0=0", and yet has infinitely many solutions. (For instance, each of these is a solution: (0,1,-1), (0,1/2,-1/2), (0,0,0), and (0,-\pi,\pi). There are infinitely many solutions because any triple whose first component is 0 and whose second component is the negative of the third is a solution.)

Nor does the presence of a " 0=0 " mean that the system must have many solutions. Example 1.11 shows that. So does this system, which does not have many solutions (in fact it has none) even though, when it is brought to echelon form, it has a " 0=0 " row.

\begin{array}{rcl}
\begin{array}{*{3}{rc}r}
2x  &   &   &-   &2z  &=  &6  \\
&   &y  &+   &z   &=  &1  \\
2x  &+  &y  &-   &z   &=  &7  \\
&   &3y &+   &3z  &=  &0  
\end{array}
&\xrightarrow[]{-\rho_1 +\rho_3}
&\begin{array}{*{3}{rc}r}
2x  &   &   &-   &2z  &=  &6  \\
&   &y  &+   &z   &=  &1  \\
&   &y  &+   &z   &=  &1  \\
&   &3y &+   &3z  &=  &0  
\end{array}     \\
&\xrightarrow[-3\rho_2 +\rho_4]{-\rho_2 +\rho_3}
&\begin{array}{*{3}{rc}r}
2x  &   &   &-   &2z  &=  &6  \\
&   &y  &+   &z   &=  &1  \\
&   &   &    &0   &=  &0  \\
&   &   &    &0   &=  &-3 
\end{array}
\end{array}

We will finish this subsection with a summary of what we've seen so far about Gauss' method.

Gauss' method uses the three row operations to set a system up for back substitution. If any step shows a contradictory equation then we can stop with the conclusion that the system has no solutions. If we reach echelon form without a contradictory equation, and each variable is a leading variable in its row, then the system has a unique solution and we find it by back substitution. Finally, if we reach echelon form without a contradictory equation, and there is not a unique solution (at least one variable is not a leading variable) then the system has many solutions.

The next subsection deals with the third case— we will see how to describe the solution set of a system with many solutions.
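
For readers who like to see such a summary in executable form, here is a short Python sketch of the whole process on an augmented matrix, with each row's constant stored as its last entry. It is only an illustration, not the book's algorithm verbatim: the names are ours, and it uses exact fractions to sidestep the rounding issues taken up in the Topic on Accuracy of Computations.

from fractions import Fraction

def gauss_method(rows):
    """Reduce an augmented matrix (list of rows, each ending in its constant)
    to echelon form using the elementary row operations."""
    rows = [[Fraction(e) for e in r] for r in rows]
    m, n = len(rows), len(rows[0]) - 1                 # m equations, n unknowns
    row = 0
    for col in range(n):
        # find a row at or below `row` with a nonzero entry in this column
        pivot = next((r for r in range(row, m) if rows[r][col] != 0), None)
        if pivot is None:
            continue                                   # this variable will not lead any row
        rows[row], rows[pivot] = rows[pivot], rows[row]          # swap rows
        for r in range(row + 1, m):
            factor = rows[r][col] / rows[row][col]
            # add a multiple of the pivot row to row r (the third operation)
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[row])]
        row += 1
    return rows

def classify(rows):
    """Report which of the three cases of the summary occurs."""
    if any(all(e == 0 for e in r[:-1]) and r[-1] != 0 for r in rows):
        return "no solutions"                          # a contradictory equation
    leading = sum(1 for r in rows if any(e != 0 for e in r[:-1]))
    return "unique solution" if leading == len(rows[0]) - 1 else "many solutions"

# Example 1.6, Example 1.12, and Example 1.14 in turn:
print(classify(gauss_method([[1, 1, 0, 0], [2, -1, 3, 3], [1, -2, -1, 3]])))   # unique solution
print(classify(gauss_method([[1, 3, 1], [2, 1, -3], [2, 2, 0]])))              # no solutions
print(classify(gauss_method([[1, 1, 4], [2, 2, 8]])))                          # many solutions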

Exercises

This exercise is recommended for all readers.
Problem 1

Use Gauss' method to find the unique solution for each system.

  1. \begin{array}{*{2}{rc}r}
2x  &+  &3y  &=  &13  \\
x   &-  &y   &=  &-1
\end{array}


  2. \begin{array}{*{3}{rc}r}
x   &  &  &-  &z  &=  &0  \\
3x  &+ &y &   &   &=  &1  \\
-x  &+ &y &+  &z  &=  &4
\end{array}
This exercise is recommended for all readers.
Problem 2

Use Gauss' method to solve each system or conclude "many solutions" or "no solutions".

  1. 
\begin{array}{*{2}{rc}r}
2x  &+  &2y  &=  &5  \\
x  &-  &4y  &=  &0  
\end{array}


  2. 
\begin{array}{*{2}{rc}r}
-x  &+  &y   &=  &1  \\
x  &+  &y   &=  &2  
\end{array}


  3. 
\begin{array}{*{3}{rc}r}
x  &-  &3y  &+  &z  &=  &1  \\
x  &+  &y   &+  &2z &=  &14 
\end{array}


  4. 
\begin{array}{*{2}{rc}r}
-x  &-  &y   &=  &1  \\
-3x  &-  &3y  &=  &2  
\end{array}


  5. 
\begin{array}{*{3}{rc}r}
&   &4y  &+  &z  &=  &20 \\
2x  &-  &2y  &+  &z  &=  &0  \\
x  &   &    &+  &z  &=  &5  \\
x  &+  &y   &-  &z  &=  &10 
\end{array}


  6.  \begin{array}{*{4}{rc}r}
2x  &   &   &+  &z  &+  &w  &=  &5  \\
&   &y  &   &   &-  &w  &=  &-1 \\
3x  &   &   &-  &z  &-  &w  &=  &0  \\
4x  &+  &y  &+  &2z &+  &w  &=  &9  
\end{array}
This exercise is recommended for all readers.
Problem 3

There are methods for solving linear systems other than Gauss' method. One often taught in high school is to solve one of the equations for a variable, then substitute the resulting expression into other equations. That step is repeated until there is an equation with only one variable. From that, the first number in the solution is derived, and then back-substitution can be done. This method takes longer than Gauss' method, since it involves more arithmetic operations, and is also more likely to lead to errors. To illustrate how it can lead to wrong conclusions, we will use the system


\begin{array}{*{2}{rc}r}
x  &+  &3y  &=  &1  \\
2x  &+  &y   &=  &-3 \\
2x  &+  &2y  &=  &0  
\end{array}

from Example 1.12.

  1. Solve the first equation for x and substitute that expression into the second equation. Find the resulting y.
  2. Again solve the first equation for x, but this time substitute that expression into the third equation. Find this y.

What extra step must a user of this method take to avoid erroneously concluding a system has a solution?

This exercise is recommended for all readers.
Problem 4

For which values of  k are there no solutions, many solutions, or a unique solution to this system?


\begin{array}{*{2}{rc}r}
x  &-  &y  &=  &1  \\
3x  &-  &3y &=  &k  
\end{array}
This exercise is recommended for all readers.
Problem 5

This system is not linear, in some sense,


\begin{array}{*{3}{rc}r}
2\sin\alpha  &-  &\cos\beta  &+  &3\tan\gamma  &=  &3  \\
4\sin\alpha  &+  &2\cos\beta &-  &2\tan\gamma  &=  &10  \\
6\sin\alpha  &-  &3\cos\beta &+  &\tan\gamma   &=  &9  
\end{array}

and yet we can nonetheless apply Gauss' method. Do so. Does the system have a solution?

This exercise is recommended for all readers.
Problem 6

What conditions must the constants, the b's, satisfy so that each of these systems has a solution? Hint. Apply Gauss' method and see what happens to the right side (Anton 1987).

  1. 
\begin{array}{*{2}{rc}r}
x  &-  &3y  &=  &b_1 \\
3x  &+  &y   &=  &b_2 \\
x  &+  &7y  &=  &b_3 \\
2x  &+  &4y  &=  &b_4 
\end{array}


  2. 
\begin{array}{*{3}{rc}r}
x_1  &+  &2x_2  &+  &3x_3  &=  &b_1  \\
2x_1  &+  &5x_2  &+  &3x_3  &=  &b_2  \\
x_1  &   &      &+  &8x_3  &=  &b_3  
\end{array}
Problem 7

True or false: a system with more unknowns than equations has at least one solution. (As always, to say "true" you must prove it, while to say "false" you must produce a counterexample.)

Problem 8

Must any Chemistry problem like the one that starts this subsection (a balance-the-reaction problem) have infinitely many solutions?

This exercise is recommended for all readers.
Problem 9

Find the coefficients  a ,  b , and  c so that the graph of  f(x)=ax^2+bx+c passes through the points  (1,2) ,  (-1,6) , and  (2,3) .

Problem 10

Gauss' method works by combining the equations in a system to make new equations.

  1. Can the equation  3x-2y=5 be derived, by a sequence of Gaussian reduction steps, from the equations in this system?
    
\begin{array}{*{2}{rc}r}
x  &+  &y  &=  &1  \\
4x  &-  &y  &=  &6
\end{array}
  2. Can the equation  5x-3y=2 be derived, by a sequence of Gaussian reduction steps, from the equations in this system?
    
\begin{array}{*{2}{rc}r}
2x  &+  &2y &=  &5  \\
3x  &+  &y  &=  &4
\end{array}
  3. Can the equation  6x-9y+5z=-2 be derived, by a sequence of Gaussian reduction steps, from the equations in the system?
    
\begin{array}{*{3}{rc}r}
2x  &+  &y  &-  &z  &=  &4  \\
6x  &-  &3y &+  &z  &=  &5
\end{array}
Problem 11

Prove that, where  a,b,\ldots,e are real numbers and  a\neq 0 , if


ax+by=c

has the same solution set as


ax+dy=e

then they are the same equation. What if  a=0 ?

This exercise is recommended for all readers.
Problem 12

Show that if  ad-bc\neq 0 then


\begin{array}{*{2}{rc}r}
ax  &+  &by  &=  &j  \\
cx  &+  &dy  &=  &k  
\end{array}

has a unique solution.

This exercise is recommended for all readers.
Problem 13

In the system


\begin{array}{*{2}{rc}r}
ax  &+  &by  &=  &c  \\
dx  &+  &ey  &=  &f  
\end{array}

each of the equations describes a line in the  xy -plane. By geometrical reasoning, show that there are three possibilities: there is a unique solution, there is no solution, and there are infinitely many solutions.

Problem 14

Finish the proof of Theorem 1.4.

Problem 15

Is there a two-unknowns linear system whose solution set is all of  \mathbb{R}^2 ?

This exercise is recommended for all readers.
Problem 16

Are any of the operations used in Gauss' method redundant? That is, can any of the operations be synthesized from the others?

Problem 17

Prove that each operation of Gauss' method is reversible. That is, show that if two systems are related by a row operation S_1\rightarrow S_2 then there is a row operation to go back S_2\rightarrow S_1.

? Problem 18

A box holding pennies, nickels and dimes contains thirteen coins with a total value of  83 cents. How many coins of each type are in the box? (Anton 1987)

? Problem 19

Four positive integers are given. Select any three of the integers, find their arithmetic average, and add this result to the fourth integer. Thus the numbers 29, 23, 21, and 17 are obtained. One of the original integers is:

  1. 19
  2. 21
  3. 23
  4. 29
  5. 17

(Salkind 1975, 1955 problem 38)

This exercise is recommended for all readers.
? Problem 20

Laugh at this:  \mbox{AHAHA}+\mbox{TEHE}=\mbox{TEHAW} . It resulted from substituting a code letter for each digit of a simple example in addition, and it is required to identify the letters and prove the solution unique (Ransom & Gupta 1935).

? Problem 21

The Wohascum County Board of Commissioners, which has 20 members, recently had to elect a President. There were three candidates (A, B, and C); on each ballot the three candidates were to be listed in order of preference, with no abstentions. It was found that 11 members, a majority, preferred A over B (thus the other 9 preferred B over A). Similarly, it was found that 12 members preferred C over A. Given these results, it was suggested that B should withdraw, to enable a runoff election between A and C. However, B protested, and it was then found that 14 members preferred B over C! The Board has not yet recovered from the resulting confusion. Given that every possible order of A, B, C appeared on at least one ballot, how many members voted for B as their first choice (Gilbert, Krusemeyer & Larson 1993, Problem number 2)?

? Problem 22

"This system of n linear equations with n unknowns," said the Great Mathematician, "has a curious property."

"Good heavens!" said the Poor Nut, "What is it?"

"Note," said the Great Mathematician, "that the constants are in arithmetic progression."

"It's all so clear when you explain it!" said the Poor Nut. "Do you mean like  6x+9y=12 and  15x+18y=21 ?"

"Quite so," said the Great Mathematician, pulling out his bassoon. "Indeed, the system has a unique solution. Can you find it?"

"Good heavens!" cried the Poor Nut, "I am baffled."

Are you? (Dudley, Lebow & Rothman 1963)


2 - Describing the Solution Set

A linear system with a unique solution has a solution set with one element. A linear system with no solution has a solution set that is empty. In these cases the solution set is easy to describe. Solution sets are a challenge to describe only when they contain many elements.

Example 2.1

This system has many solutions because in echelon form

\begin{array}{rcl}
\begin{array}{*{3}{rc}r}
2x  &   &   &+  &z  &=  &3 \\
x  &-  &y  &-  &z  &=  &1 \\
3x  &-  &y  &   &   &=  &4
\end{array}
&\xrightarrow[-(3/2)\rho_1 +\rho_3]{-(1/2)\rho_1+\rho_2}
&\begin{array}{*{3}{rc}r}
2x  &   &   &+  &z      &=  &3    \\
&   &-y &-  &(3/2)z &=  &-1/2 \\
&   &-y &-  &(3/2)z &=  &-1/2
\end{array}                                   \\[3em]
&\xrightarrow[]{-\rho_2+\rho_3}
&\begin{array}{*{3}{rc}r}
2x  &   &   &+  &z      &=  &3    \\
&   &-y &-  &(3/2)z &=  &-1/2 \\
&   &   &   &0      &=  &0
\end{array}
\end{array}

not all of the variables are leading variables. The Gauss' method theorem showed that a triple satisfies the first system if and only if it satisfies the third. Thus, the solution set \{(x,y,z)\,\big|\, 2x+z=3\text{ and } x-y-z=1 \text{ and }3x-y=4\} can also be described as \{(x,y,z)\,\big|\,2x+z=3\text{ and }-y-3z/2=-1/2\}. However, this second description is not much of an improvement. It has two equations instead of three, but it still involves some hard-to-understand interaction among the variables.

To get a description that is free of any such interaction, we take the variable that does not lead any equation, z, and use it to describe the variables that do lead, x and y. The second equation gives y=(1/2)-(3/2)z and the first equation gives x=(3/2)-(1/2)z. Thus, the solution set can be described as \{(x,y,z)=((3/2)-(1/2)z,(1/2)-(3/2)z,z)\,\big|\, z\in\mathbb{R}\}. For instance, (1/2,-5/2,2) is a solution because taking z=2 gives a first component of 1/2 and a second component of -5/2.

The advantage of this description over the ones above is that the only variable appearing, z, is unrestricted— it can be any real number.
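
One quick way to gain confidence in such a description is to substitute it back into the original equations for a few values of the parameter. This small Python check does that for the description above; the function names are ours, purely for illustration.

# Check the parametrization of Example 2.1 against the original system.
def solution(z):
    return (3/2 - z/2, 1/2 - 3*z/2, z)        # (x, y, z) in terms of the parameter

def satisfies(x, y, z):
    return (2*x + z == 3) and (x - y - z == 1) and (3*x - y == 4)

for z in (-2, 0, 0.5, 2, 10):
    print(z, satisfies(*solution(z)))          # True for every z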

Definition 2.2

The non-leading variables in an echelon-form linear system are free variables.

In the echelon form system derived in the above example, x and y are leading variables and  z is free.

Example 2.3

A linear system can end with more than one variable free. This row reduction

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
x  &+  &y   &+  &z   &-  &w   &=  &1  \\
&   &y   &-  &z   &+  &w   &=  &-1 \\
3x  &   &    &+  &6z  &-  &6w  &=  &6  \\
&   &-y  &+  &z   &-  &w   &=  &1
\end{array}
&\xrightarrow[]{-3\rho_1 +\rho_3}
&\begin{array}{*{4}{rc}r}
x  &+  &y   &+  &z   &-  &w   &=  &1  \\
&   &y   &-  &z   &+  &w   &=  &-1 \\
&   &-3y &+  &3z  &-  &3w  &=  &3  \\
&   &-y  &+  &z   &-  &w   &=  &1
\end{array}                                      \\[3em]
&\xrightarrow[\rho_2 +\rho_4]{3\rho_2 +\rho_3}
&\begin{array}{*{4}{rc}r}
x  &+  &y   &+  &z   &-  &w   &=  &1  \\
&   &y   &-  &z   &+  &w   &=  &-1 \\
&   &    &   &    &   &0   &=  &0  \\
&   &    &   &    &   &0   &=  &0
\end{array}
\end{array}

ends with  x and  y leading, and with both  z and  w free. To get the description that we prefer we will start at the bottom. We first express y in terms of the free variables z and w with y=-1+z-w. Next, moving up to the top equation, substituting for y in the first equation x+(-1+z-w)+z-w=1 and solving for x yields x=2-2z+2w. Thus, the solution set is  \{(2-2z+2w,-1+z-w,z,w)\,\big|\, z,w\in\mathbb{R}\} .

We prefer this description because the only variables that appear, z and w, are unrestricted. This makes the job of deciding which four-tuples are system solutions into an easy one. For instance, taking z=1 and w=2 gives the solution (4,-2,1,2). In contrast, (3,-2,1,2) is not a solution, since the first component of any solution must be 2 minus twice the third component plus twice the fourth.

Example 2.4

After this reduction

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
2x  &-  &2y  &   &    &   &    &=  &0  \\
&   &    &   &z   &+  &3w  &=  &2  \\
3x  &-  &3y  &   &    &   &    &=  &0  \\
x  &-  &y   &+  &2z  &+  &6w  &=  &4
\end{array}
&\xrightarrow[-(1/2)\rho_1+ \rho_4]{-(3/2)\rho_1 +\rho_3}
&\begin{array}{*{4}{rc}r}
2x  &-  &2y  &   &    &   &    &=  &0  \\
&   &    &   &z   &+  &3w  &=  &2  \\
&   &    &   &    &   &0   &=  &0  \\
&   &    &   &2z  &+  &6w  &=  &4
\end{array}                                    \\[3em]
&\xrightarrow[]{-2\rho_2 +\rho_4}
&\begin{array}{*{4}{rc}r}
2x  &-  &2y  &   &    &   &    &=  &0  \\
&   &    &   &z   &+  &3w  &=  &2  \\
&   &    &   &    &   &0   &=  &0  \\
&   &    &   &    &   &0   &=  &0
\end{array}
\end{array}

x and z lead,  y and  w are free. The solution set is \{ (y,y,2-3w,w)\,\big|\, y,w\in\mathbb{R} \}. For instance,  (1,1,2,0) satisfies the system— take y=1 and w=0. The four-tuple  (1,0,5,4) is not a solution since its first coordinate does not equal its second.

We refer to a variable used to describe a family of solutions as a parameter and we say that the set above is parametrized with y and w. (The terms "parameter" and "free variable" do not mean the same thing. Above, y and w are free because in the echelon form system they do not lead any row. They are parameters because they are used in the solution set description. We could have instead parametrized with y and z by rewriting the second equation as w=2/3-(1/3)z. In that case, the free variables are still y and w, but the parameters are y and z. Notice that we could not have parametrized with x and y, so there is sometimes a restriction on the choice of parameters. The terms "parameter" and "free" are related because, as we shall show later in this chapter, the solution set of a system can always be parametrized with the free variables. Consequently, we shall parametrize all of our descriptions in this way.)

Example 2.5

This is another system with infinitely many solutions.

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
x  &+  &2y  &   &   &   &   &=  &1  \\
2x  &   &    &+  &z  &   &   &=  &2  \\
3x  &+  &2y  &+  &z  &-  &w  &=  &4
\end{array}
&\xrightarrow[-3\rho_1 +\rho_3]{-2\rho_1+\rho_2}
&\begin{array}{*{4}{rc}r}
x  &+  &2y  &   &   &   &   &=  &1  \\
&   &-4y &+  &z  &   &   &=  &0  \\
&   &-4y &+  &z  &-  &w  &=  &1
\end{array}                                    \\[3em]
&\xrightarrow[]{-\rho_2+\rho_3}
&\begin{array}{*{4}{rc}r}
x  &+  &2y  &   &   &   &   &=  &1  \\
&   &-4y &+  &z  &   &   &=  &0  \\
&   &    &   &   &   &-w &=  &1
\end{array}
\end{array}

The leading variables are  x ,  y , and  w . The variable  z is free. (Notice here that, although there are infinitely many solutions, the value of one of the variables is fixed— w=-1.) Write  w in terms of  z with  w=-1+0z . Then  y=(1/4)z . To express x in terms of z, substitute for  y into the first equation to get  x=1-(1/2)z . The solution set is \{(1-(1/2)z,(1/4)z,z,-1)\,\big|\, z\in\mathbb{R}\}.

We finish this subsection by developing the notation for linear systems and their solution sets that we shall use in the rest of this book.

Definition 2.6

An  m \! \times \! n matrix is a rectangular array of numbers with  m rows and  n columns. Each number in the matrix is an entry.

Matrices are usually named by upper case roman letters, e.g.  A . Each entry is denoted by the corresponding lower-case letter, e.g. a_{i,j} is the number in row i and column j of the array. For instance,


A=
\begin{pmatrix}
1  &2.2  &5  \\
3  &4    &-7
\end{pmatrix}

has two rows and three columns, and so is a  2 \! \times \! 3 matrix. (Read that "two-by-three"; the number of rows is always stated first.) The entry in the second row and first column is  a_{2,1}=3 . Note that the order of the subscripts matters: a_{1,2}\neq a_{2,1} since  a_{1,2}=2.2 . (The parentheses around the array are a typographic device so that when two matrices are side by side we can tell where one ends and the other starts.)

Matrices occur throughout this book. We shall use  \mathcal{M}_{n \! \times \! m} to denote the collection of  n \! \times \! m matrices.
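
In code, a matrix is commonly stored as a list of rows; the one wrinkle is that the book's subscripts start at 1 while most programming languages count from 0. A small sketch with the matrix A above:

# The 2x3 matrix A from the text, stored as a list of rows.
A = [[1, 2.2, 5],
     [3, 4, -7]]

rows, cols = len(A), len(A[0])               # 2 and 3
a_2_1 = A[2 - 1][1 - 1]                      # entry in row 2, column 1, the book's a_{2,1}
a_1_2 = A[1 - 1][2 - 1]                      # a_{1,2}
print(rows, cols, a_2_1, a_1_2)              # 2 3 3 2.2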

Example 2.7

We can abbreviate this linear system


\begin{array}{*{3}{rc}r}
x  &+  &2y  &   &    &=  &4   \\
&   &y   &-  &z &=  &0   \\
x  &   &      &+  &2z&=  &4
\end{array}

with this matrix.


\left(\begin{array}{*{3}{c}|c}
1  &2  &0  &4  \\
0  &1  &-1 &0  \\
1  &0  &2  &4
\end{array}\right)

The vertical bar just reminds a reader of the difference between the coefficients on the system's left hand side and the constants on the right. When a bar is used to divide a matrix into parts, we call it an augmented matrix. In this notation, Gauss' method goes this way.


\left(\begin{array}{*{3}{c}|c}
1  &2  &0  &4  \\
0  &1  &-1 &0  \\
1  &0  &2  &4
\end{array}\right)
\xrightarrow[]{-\rho_1 +\rho_3}
\left(\begin{array}{*{3}{c}|c}
1  &2  &0  &4  \\
0  &1  &-1 &0  \\
0  &-2 &2  &0
\end{array}\right)
\xrightarrow[]{2\rho_2 +\rho_3}
\left(\begin{array}{*{3}{c}|c}
1  &2  &0  &4  \\
0  &1  &-1 &0  \\
0  &0  &0  &0
\end{array}\right)

The second row stands for y-z=0 and the first row stands for x+2y=4 so the solution set is  \{(4-2z,z,z)\,\big|\, z\in\mathbb{R}\} . One advantage of the new notation is that the clerical load of Gauss' method— the copying of variables, the writing of +'s and ='s, etc.— is lighter.
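
Those two reductions can be written almost verbatim once the augmented matrix is stored as a list of rows, each ending with its constant. This fragment is only a sketch of the notation (the helper name is ours), not a general routine.

# The augmented matrix of Example 2.7; each row ends with its constant.
M = [[1, 2, 0, 4],
     [0, 1, -1, 0],
     [1, 0, 2, 4]]

def combine(k, source, target):
    """Replace the target row by k*(source row) + (target row)."""
    return [k * s + t for s, t in zip(source, target)]

M[2] = combine(-1, M[0], M[2])    # -rho_1 + rho_3
M[2] = combine(2, M[1], M[2])     #  2 rho_2 + rho_3
print(M)                          # [[1, 2, 0, 4], [0, 1, -1, 0], [0, 0, 0, 0]]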

We will also use the array notation to clarify the descriptions of solution sets. A description like \{(2-2z+2w,-1+z-w,z,w)\,\big|\, z,w\in\mathbb{R}\} from Example 2.3 is hard to read. We will rewrite it to group all the constants together, all the coefficients of  z together, and all the coefficients of  w together. We will write them vertically, in one-column wide matrices.


\{\begin{pmatrix} 2 \\ -1 \\ 0 \\ 0 \end{pmatrix}
+\begin{pmatrix} -2 \\ 1 \\ 1 \\ 0 \end{pmatrix}\cdot z
+\begin{pmatrix} 2 \\ -1 \\ 0 \\ 1 \end{pmatrix}\cdot w
\,\big|\, z,w\in\mathbb{R}\}

For instance, the top line says that  x=2-2z+2w . The next section gives a geometric interpretation that will help us picture the solution sets when they are written in this way.

Definition 2.8

A vector (or column vector) is a matrix with a single column. A matrix with a single row is a row vector. The entries of a vector are its components.

Vectors are an exception to the convention of representing matrices with capital roman letters. We use lower-case roman or greek letters overlined with an arrow:  \vec{a} ,  \vec{b} , ... or  \vec{\alpha} ,  \vec{\beta} , ... (boldface is also common: \mathbf{a} or  \boldsymbol{\alpha} ). For instance, this is a column vector with a third component of  7 .


\vec{v}=
\begin{pmatrix}  1  \\  3  \\ 7 \end{pmatrix}
Definition 2.9

The linear equation  a_1x_1+a_2x_2+\,\cdots\,+a_nx_n=d with unknowns  x_1,\ldots\,,x_n is satisfied by


\vec{s}=\begin{pmatrix} s_1 \\ \vdots \\ s_n \end{pmatrix}

if  a_1s_1+a_2s_2+\,\cdots\,+a_ns_n=d . A vector satisfies a linear system if it satisfies each equation in the system.
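
For instance, as a quick check against Example 2.7, taking  z=0 in the solution set found there gives the vector


\vec{s}=\begin{pmatrix} 4 \\ 0 \\ 0 \end{pmatrix}


which satisfies all three equations of that system:  4+2\cdot 0=4 ,  0-0=0 , and  4+2\cdot 0=4 .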

The style of description of solution sets that we use involves adding the vectors, and also multiplying them by real numbers, such as the  z and w. We need to define these operations.

Definition 2.10

The vector sum of  \vec{u} and  \vec{v} is this.


\vec{u}+\vec{v}=
\begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}
+
\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}
=
\begin{pmatrix} u_1+v_1 \\ \vdots \\ u_n+v_n \end{pmatrix}

In general, two matrices with the same number of rows and the same number of columns add in this way, entry-by-entry.

Definition 2.11

The scalar multiplication of the real number  r and the vector  \vec{v} is this.


r\cdot\vec{v}=
r\cdot\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}
=
\begin{pmatrix} rv_1 \\ \vdots \\ rv_n \end{pmatrix}

In general, any matrix is multiplied by a real number in this entry-by-entry way.

Scalar multiplication can be written in either order:  r\cdot\vec{v} or  \vec{v}\cdot r , or without the "\cdot" symbol: r\vec{v}. (Do not refer to scalar multiplication as "scalar product" because that name is used for a different operation.)

Example 2.12

\begin{pmatrix} 2 \\ 3 \\ 1 \end{pmatrix}
+
\begin{pmatrix} 3 \\ -1 \\ 4 \end{pmatrix}
=
\begin{pmatrix} 2+3 \\ 3-1 \\ 1+4 \end{pmatrix}
=
\begin{pmatrix} 5 \\ 2 \\ 5 \end{pmatrix}
\qquad
7\cdot\begin{pmatrix} 1 \\ 4 \\ -1 \\ -3 \end{pmatrix}
=
\begin{pmatrix} 7 \\ 28 \\ -7 \\ -21 \end{pmatrix}

Notice that the definitions of vector addition and scalar multiplication agree where they overlap, for instance,  \vec{v} +\vec{v} = 2\vec{v} .
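
Example 2.12 uses only vectors but, as noted above, the same entry-by-entry computations apply to wider matrices. As a supplementary illustration,


\begin{pmatrix}
1  &2  \\
3  &4
\end{pmatrix}
+
\begin{pmatrix}
5  &0  \\
-1 &2
\end{pmatrix}
=
\begin{pmatrix}
6  &2  \\
2  &6
\end{pmatrix}
\qquad
3\cdot
\begin{pmatrix}
1  &2  \\
3  &4
\end{pmatrix}
=
\begin{pmatrix}
3  &6  \\
9  &12
\end{pmatrix}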

With the notation defined, we can now solve systems in the way that we will use throughout this book.

Example 2.13

This system


\begin{array}{*{5}{rc}r}
2x  &+  &y  &  &  &-  &w  &   &   &=  &4  \\
&   &y  &  &  &+  &w  &+  &u  &=  &4  \\
x  &   &   &- &z &+  &2w &   &   &=  &0
\end{array}

reduces in this way.

\begin{array}{rcl}
\left(\begin{array}{*{5}{c}|c}
2  &1  &0  &-1  &0  &4  \\
0  &1  &0  &1   &1  &4  \\
1  &0  &-1 &2   &0  &0
\end{array}\right)
&\xrightarrow[]{-(1/2)\rho_1+\rho_3}
&\left(\begin{array}{*{5}{c}|c}
2  &1     &0  &-1    &0  &4  \\
0  &1     &0  &1     &1  &4  \\
0  &-1/2  &-1 &5/2   &0  &-2
\end{array}\right)                                 \\[3em]
&\xrightarrow[]{(1/2)\rho_2+\rho_3}
&\left(\begin{array}{*{5}{c}|c}
2  &1     &0  &-1    &0    &4  \\
0  &1     &0  &1     &1    &4  \\
0  &0     &-1 &3     &1/2  &0
\end{array}\right)
\end{array}

The solution set is  \{(w+(1/2)u,4-w-u,3w+(1/2)u,w,u)\,\big|\, w,u\in\mathbb{R}\} . We write that in vector form.


\{\begin{pmatrix} x \\ y \\ z \\ w \\ u \end{pmatrix}=
\begin{pmatrix} 0 \\ 4 \\ 0 \\ 0 \\ 0 \end{pmatrix}+
\begin{pmatrix} 1 \\ -1 \\ 3 \\ 1 \\ 0 \end{pmatrix}w+
\begin{pmatrix} 1/2 \\ -1 \\ 1/2 \\ 0 \\ 1 \end{pmatrix}u
\,\big|\, w,u\in\mathbb{R}\}

Note again how well vector notation sets off the coefficients of each parameter. For instance, the third row of the vector form shows plainly that if  u is held fixed then  z increases three times as fast as  w .

That format also shows plainly that there are infinitely many solutions. For example, we can fix u as 0, let w range over the real numbers, and consider the first component x. We get infinitely many first components and hence infinitely many solutions.
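
As a quick check, taking  w=1,\,u=0 and then  w=2,\,u=0 gives these two solutions


\begin{pmatrix} 1 \\ 3 \\ 3 \\ 1 \\ 0 \end{pmatrix}
\qquad
\begin{pmatrix} 2 \\ 2 \\ 6 \\ 2 \\ 0 \end{pmatrix}


and substituting the first into the original system verifies it:  2\cdot 1+3-1=4 ,  3+1+0=4 , and  1-3+2\cdot 1=0 . Note that as  w goes from  1 to  2 with  u fixed,  z goes from  3 to  6 , three times as fast.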

Another thing shown plainly is that setting both  w and  u to zero gives that this


\begin{pmatrix} x \\ y \\ z \\ w \\ u \end{pmatrix}
=\begin{pmatrix} 0 \\ 4 \\ 0 \\ 0 \\ 0 \end{pmatrix}

is a particular solution of the linear system.

Example 2.14

In the same way, this system


\begin{array}{*{3}{rc}r}
x  &-  &y  &+  &z  &=  &1  \\
3x  &   &   &+  &z  &=  &3  \\
5x  &-  &2y &+  &3z &=  &5
\end{array}

reduces


\left(\begin{array}{*{3}{c}|c}
1  &-1  &1  &1  \\
3  &0   &1  &3  \\
5  &-2  &3  &5
\end{array}\right)
\xrightarrow[-5\rho_1+\rho_3]{-3\rho_1+\rho_2}
\left(\begin{array}{*{3}{c}|c}
1  &-1  &1  &1  \\
0  &3   &-2 &0  \\
0  &3   &-2 &0
\end{array}\right)
\xrightarrow[]{-\rho_2+\rho_3}
\left(\begin{array}{*{3}{c}|c}
1  &-1  &1  &1  \\
0  &3   &-2 &0  \\
0  &0   &0  &0
\end{array}\right)

to a one-parameter solution set.


\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
+\begin{pmatrix} -1/3 \\ 2/3 \\ 1 \end{pmatrix}z
\,\big|\, z\in\mathbb{R}\}
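
For the record, here is the back substitution behind that description: the second row of the echelon form gives  3y-2z=0 , so  y=(2/3)z , and then the first row gives  x=1+y-z=1-(1/3)z , which is the vector description above with  z as the parameter.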

Before the exercises, we pause to point out some things that we have yet to do.

The first two subsections have been on the mechanics of Gauss' method. Except for one result, Theorem 1.4— without which developing the method doesn't make sense since it says that the method gives the right answers— we have not stopped to consider any of the interesting questions that arise.

For example, can we always describe solution sets as above, with a particular solution vector added to an unrestricted linear combination of some other vectors? The solution sets we described with unrestricted parameters were easily seen to have infinitely many solutions so an answer to this question could tell us something about the size of solution sets. An answer to that question could also help us picture the solution sets, in \mathbb{R}^2, or in \mathbb{R}^3, etc.

Many questions arise from the observation that Gauss' method can be done in more than one way (for instance, when swapping rows, we may have a choice of which row to swap with). Theorem 1.4 says that we must get the same solution set no matter how we proceed, but if we do Gauss' method in two different ways must we get the same number of free variables both times, so that any two solution set descriptions have the same number of parameters? Must those be the same variables (e.g., is it impossible to solve a problem one way and get y and w free or solve it another way and get y and z free)?

In the rest of this chapter we answer these questions. The answer to each is "yes". The first question is answered in the last subsection of this section. In the second section we give a geometric description of solution sets. In the final section of this chapter we tackle the last set of questions. Consequently, by the end of the first chapter we will not only have a solid grounding in the practice of Gauss' method, we will also have a solid grounding in the theory. We will be sure of what can and cannot happen in a reduction.

Exercises

This exercise is recommended for all readers.
Problem 1

Find the indicated entry of the matrix, if it is defined.


A=\begin{pmatrix}
1  &3  &1  \\
2  &-1 &4
\end{pmatrix}
  1.  a_{2,1}
  2.  a_{1,2}
  3.  a_{2,2}
  4.  a_{3,1}
This exercise is recommended for all readers.
Problem 2

Give the size of each matrix.

  1. 
\begin{pmatrix}
1  &0  &4  \\
2  &1  &5
\end{pmatrix}
  2. 
\begin{pmatrix}
1  &1  \\
-1  &1  \\
3  &-1
\end{pmatrix}
  3. 
\begin{pmatrix}
5  &10 \\
10  &5
\end{pmatrix}
This exercise is recommended for all readers.
Problem 3

Do the indicated vector operation, if it is defined.

  1.  \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}
+\begin{pmatrix} 3 \\ 0 \\ 4 \end{pmatrix}
  2.  5\begin{pmatrix} 4 \\ -1 \end{pmatrix}
  3.  \begin{pmatrix} 1 \\ 5 \\ 1 \end{pmatrix}
-\begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix}
  4.  7\begin{pmatrix} 2 \\ 1 \end{pmatrix}
+9\begin{pmatrix} 3 \\ 5 \end{pmatrix}
  5.  \begin{pmatrix} 1 \\ 2 \end{pmatrix}
+\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
  6.  6\begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix}
-4\begin{pmatrix} 2 \\ 0 \\ 3 \end{pmatrix}
+2\begin{pmatrix} 1 \\ 1 \\ 5 \end{pmatrix}
This exercise is recommended for all readers.
Problem 4

Solve each system using matrix notation. Express the solution using vectors.

  1.  \begin{array}{*{2}{rc}r}
3x  &+  &6y  &=  &18  \\
x  &+  &2y  &=  &6
\end{array}
  2.  \begin{array}{*{2}{rc}r}
x  &+  &y   &=  &1  \\
x  &-  &y   &=  &-1
\end{array}
  3.  \begin{array}{*{3}{rc}r}
x_1  &   &     &+  &x_3   &=  &4  \\
x_1  &-  &x_2  &+  &2x_3  &=  &5  \\
4x_1  &-  &x_2  &+  &5x_3  &=  &17
\end{array}
  4.  \begin{array}{*{3}{rc}r}
2a   &+  &b    &-  &c     &=  &2  \\
2a   &   &     &+  &c     &=  &3  \\
a   &-  &b    &   &      &=  &0
\end{array}
  5.  \begin{array}{*{4}{rc}r}
x  &+  &2y   &-   &z   &    &    &=  &3  \\
2x  &+  &y    &    &    &+   &w   &=  &4  \\
x  &-  &y    &+   &z   &+   &w   &=  &1
\end{array}
  6.  \begin{array}{*{4}{rc}r}
x  &   &     &+   &z   &+   &w   &=  &4  \\
2x  &+  &y    &    &    &-   &w   &=  &2  \\
3x  &+  &y    &+   &z   &    &    &=  &7
\end{array}
This exercise is recommended for all readers.
Problem 5

Solve each system using matrix notation. Give each solution set in vector notation.

  1.  \begin{array}{*{3}{rc}r}
2x  &+  &y  &-  &z  &=  &1  \\
4x  &-  &y  &   &   &=  &3
\end{array}
  2.  \begin{array}{*{4}{rc}r}
x  &   &   &-  &z  &   &   &=  &1  \\
&   &y  &+  &2z &-  &w  &=  &3  \\
x  &+  &2y &+  &3z &-  &w  &=  &7
\end{array}
  3.  \begin{array}{*{4}{rc}r}
x  &-  &y  &+  &z  &   &   &=  &0  \\
&   &y  &   &   &+  &w  &=  &0  \\
3x  &-  &2y &+  &3z &+  &w  &=  &0  \\
&   &-y &   &   &-  &w  &=  &0
\end{array}
  4.  \begin{array}{*{5}{rc}r}
a  &+  &2b &+  &3c &+  &d  &-  &e  &=  &1  \\
3a  &-  &b  &+  &c  &+  &d  &+  &e  &=  &3
\end{array}
This exercise is recommended for all readers.
Problem 6

The vector is in the set. What value of the parameters produces that vector?

  1. \begin{pmatrix} 5 \\ -5 \end{pmatrix}, \{\begin{pmatrix} 1 \\ -1 \end{pmatrix}k\,\big|\, k\in\mathbb{R}\}
  2. \begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}, \{\begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}i
+\begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}j\,\big|\, i,j\in\mathbb{R}\}
  3. \begin{pmatrix} 0 \\ -4 \\ 2 \end{pmatrix}, \{\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}m
+\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}n\,\big|\, m,n\in\mathbb{R}\}
Problem 7

Decide if the vector is in the set.

  1. \begin{pmatrix} 3 \\ -1 \end{pmatrix}, \{\begin{pmatrix} -6 \\ 2 \end{pmatrix}k\,\big|\, k\in\mathbb{R}\}
  2. \begin{pmatrix} 5 \\ 4 \end{pmatrix}, \{\begin{pmatrix} 5 \\ -4 \end{pmatrix}j\,\big|\, j\in\mathbb{R}\}
  3. \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}, \{\begin{pmatrix} 0 \\ 3 \\ -7 \end{pmatrix}+\begin{pmatrix} 1 \\ -1 \\ 3 \end{pmatrix}r\,\big|\, r\in\mathbb{R}\}
  4. \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \{\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}j
+\begin{pmatrix} -3 \\ -1 \\ 1 \end{pmatrix}k\,\big|\, j,k\in\mathbb{R}\}
Problem 8

Parametrize the solution set of this one-equation system.


x_1+x_2+\cdots+x_n=0
This exercise is recommended for all readers.
Problem 9
  1. Apply Gauss' method to the left-hand side to solve
    
\begin{array}{*{4}{rc}r}
x  &+  &2y  &    &    &-   &w   &=   &a   \\
2x  &   &    &+   &z   &    &    &=   &b   \\
x  &+  &y   &    &    &+   &2w  &=   &c
\end{array}
    for  x ,  y ,  z , and   w , in terms of the constants a, b, and c.
  2. Use your answer from the prior part to solve this.
    
\begin{array}{*{4}{rc}r}
x  &+  &2y  &    &    &-   &w   &=   &3   \\
2x  &   &    &+   &z   &    &    &=   &1   \\
x  &+  &y   &    &    &+   &2w  &=   &-2
\end{array}
This exercise is recommended for all readers.
Problem 10

Why is the comma needed in the notation " a_{i,j} " for matrix entries?

This exercise is recommended for all readers.
Problem 11

Give the  4 \! \times \! 4 matrix whose  i,j -th entry is

  1.  i+j ;
  2.  -1 to the  i+j power.
Problem 12

For any matrix  A , the transpose of  A , written  {{A}^{\rm trans}} , is the matrix whose columns are the rows of  A . Find the transpose of each of these.

  1.  \begin{pmatrix}
1  &2  &3  \\
4  &5  &6
\end{pmatrix}
  2.  \begin{pmatrix}
2  &-3 \\
1  &1
\end{pmatrix}
  3.  \begin{pmatrix}
5  &10 \\
10  &5
\end{pmatrix}
  4.  \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}
This exercise is recommended for all readers.
Problem 13
  1. Describe all functions  f(x)=ax^2+bx+c such that  f(1)=2 and  f(-1)=6 .
  2. Describe all functions  f(x)=ax^2+bx+c such that  f(1)=2 .
Problem 14

Show that any set of five points from the plane  \mathbb{R}^2 lie on a common conic section, that is, they all satisfy some equation of the form  ax^2+by^2+cxy+dx+ey+f=0 where some of  a,\,\ldots\,,f are nonzero.

Problem 15

Make up a four equations/four unknowns system having

  1. a one-parameter solution set;
  2. a two-parameter solution set;
  3. a three-parameter solution set.
? Problem 16
  1. Solve the system of equations.
    
\begin{array}{*{2}{rc}r}
ax  &+  &y  &=  &a^2  \\
x  &+  &ay &=  &1
\end{array}
    For what values of a does the system fail to have solutions, and for what values of a are there infinitely many solutions?
  2. Answer the above question for the system.
    
\begin{array}{*{2}{rc}r}
ax  &+  &y  &=  &a^3  \\
x  &+  &ay &=  &1
\end{array}

(USSR Olympiad #174)

? Problem 17

In air a gold-surfaced sphere weighs  7588 grams. It is known that it may contain one or more of the metals aluminum, copper, silver, or lead. When weighed successively under standard conditions in water, benzene, alcohol, and glycerine its respective weights are  6588 ,  6688 ,  6778 , and  6328 grams. How much, if any, of the forenamed metals does it contain if the specific gravities of the designated substances are taken to be as follows?

Aluminum    2.7        Alcohol      0.81
Copper      8.9        Benzene      0.90
Gold       19.3        Glycerine    1.26
Lead       11.3        Water        1.00
Silver     10.8

(Duncan & Quelch 1952)


3 - General = Particular + Homogeneous

Description of Solution Sets

The prior subsection has many descriptions of solution sets. They all fit a pattern. They have a vector that is a particular solution of the system added to an unrestricted combination of some other vectors. The solution set from Example 2.13 illustrates.


\left\{
\underbrace{
\begin{pmatrix} 0 \\ 4 \\ 0 \\ 0 \\ 0 \end{pmatrix}}_{\begin{array}{c}\\[-19pt]\scriptstyle\text{particular} \\[-5pt]\scriptstyle\text{solution}\end{array}}+
\underbrace{w\begin{pmatrix} 1 \\ -1 \\ 3 \\ 1 \\ 0 \end{pmatrix}+
u\begin{pmatrix} 1/2 \\ -1 \\ 1/2 \\ 0 \\ 1 \end{pmatrix}}_{\begin{array}{c}\\[-19pt]\scriptstyle\text{unrestricted}\\[-5pt]\scriptstyle\text{combination}\end{array}}
\,\big|\, w,u\in\mathbb{R}\right\}

The combination is unrestricted in that w and u can be any real numbers— there is no condition like "such that 2w-u=0" that would restrict which pairs w,u can be used to form combinations.

That example shows an infinite solution set conforming to the pattern. We can think of the other two kinds of solution sets as also fitting the same pattern. A one-element solution set fits in that it has a particular solution, and the unrestricted combination part is a trivial sum (that is, instead of being a combination of two vectors, as above, or a combination of one vector, it is a combination of no vectors). A zero-element solution set fits the pattern since there is no particular solution, and so the set of sums of that form is empty.

We will show that the examples from the prior subsection are representative, in that the description pattern discussed above holds for every solution set.

Theorem 3.1

For any linear system there are vectors \vec{\beta}_1, ..., \vec{\beta}_k such that the solution set can be described as


\left\{\vec{p}+c_1\vec{\beta}_1+\,\cdots\,+c_k\vec{\beta}_k
\,\big|\, c_1,\,\ldots\,,c_k\in\mathbb{R}\right\}

where  \vec{p} is any particular solution, and where the system has  k free variables.

This description has two parts, the particular solution \vec{p} and also the unrestricted linear combination of the \vec{\beta}'s. We shall prove the theorem in two corresponding parts, with two lemmas.

Homogeneous Systems

We will focus first on the unrestricted combination part. To do that, we consider systems that have the vector of zeroes as one of the particular solutions, so that \vec{p}+c_1\vec{\beta}_1+\dots+c_k\vec{\beta}_k can be shortened to c_1\vec{\beta}_1+\dots+c_k\vec{\beta}_k.

Definition 3.2

A linear equation is homogeneous if it has a constant of zero, that is, if it can be put in the form a_1x_1+a_2x_2+\,\cdots\,+a_nx_n=0.

(These are "homogeneous" because all of the terms involve the same power of their variable— the first power— including a " 0x_{0} " that we can imagine is on the right side.)

Example 3.3

With any linear system like


\begin{array}{*{2}{rc}r}
3x  &+  &4y  &=  3  \\
2x  &-  &y   &=  1
\end{array}

we associate a system of homogeneous equations by setting the right side to zeros.


\begin{array}{*{2}{rc}r}
3x  &+  &4y  &=  0  \\
2x  &-  &y   &=  0
\end{array}

Our interest in the homogeneous system associated with a linear system can be understood by comparing the reduction of the system

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
3x  &+  &4y  &=  3  \\
2x  &-  &y   &=  1
\end{array}
&\xrightarrow[]{-(2/3)\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
3x  &+  &4y        &=  3  \\
&   &-(11/3)y   &=  -1
\end{array}
\end{array}

with the reduction of the associated homogeneous system.

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
3x  &+  &4y  &=  0  \\
2x  &-  &y   &=  0
\end{array}
&\xrightarrow[]{-(2/3)\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
3x  &+  &4y        &=  0  \\
&   &-(11/3)y   &=  0
\end{array}
\end{array}

Obviously the two reductions go in the same way. We can study how linear systems are reduced by instead studying how the associated homogeneous systems are reduced.

Studying the associated homogeneous system has a great advantage over studying the original system. Nonhomogeneous systems can be inconsistent. But a homogeneous system must be consistent since there is always at least one solution, the vector of zeros.

Definition 3.4

A column or row vector of all zeros is a zero vector, denoted  \vec{0} .

There are many different zero vectors, e.g., the one-tall zero vector, the two-tall zero vector, etc. Nonetheless, people often refer to "the" zero vector, expecting that the size of the one being discussed will be clear from the context.

Example 3.5

Some homogeneous systems have the zero vector as their only solution.


\begin{array}{*{3}{rc}r}
3x  &+  &2y  &+  &z  &=  &0  \\
6x  &+  &4y  &   &   &=  &0  \\
&   &y   &+  &z  &=  &0
\end{array}
\;\xrightarrow[]{-2\rho_1 +\rho_2}\;
\begin{array}{*{3}{rc}r}
3x  &+  &2y  &+  &z  &=  &0  \\
&   &    &   &-2z&=  &0  \\
&   &y   &+  &z  &=  &0
\end{array}
\;\xrightarrow[]{\rho_2 \leftrightarrow\rho_3}\;
\begin{array}{*{3}{rc}r}
3x  &+  &2y  &+  &z  &=  &0  \\
&   &y   &+  &z  &=  &0  \\
&   &    &   &-2z&=  &0
\end{array}
Example 3.6

Some homogeneous systems have many solutions. One example is the Chemistry problem from the first page of this book.

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
7x  &   &   &-  &7z  &   &   &=  &0  \\
8x  &+  &y  &-  &5z  &-  &2w &=  &0  \\
&   &y  &-  &3z  &   &   &=  &0  \\
&   &3y &-  &6z  &-  &w  &=  &0
\end{array}
&\xrightarrow[]{-(8/7)\rho_1+\rho_2}
&\begin{array}{*{4}{rc}r}
7x &   &   &-  &7z  &   &   &=  &0  \\
&   &y  &+  &3z  &-  &2w &=  &0  \\
&   &y  &-  &3z  &   &   &=  &0  \\
&   &3y &-  &6z  &-  &w  &=  &0
\end{array}                                        \\
&\xrightarrow[-3\rho_2+\rho_4]{-\rho_2+\rho_3}
&\begin{array}{*{4}{rc}r}
7x &   &   &-  &7z  &   &   &=  &0  \\
&   &y  &+  &3z  &-  &2w &=  &0  \\
&   &   &   &-6z &+  &2w &=  &0  \\
&   &   &   &-15z&+  &5w &=  &0
\end{array}                                        \\
&\xrightarrow[]{-(5/2)\rho_3+\rho_4}
&\begin{array}{*{4}{rc}r}
7x &   &   &-  &7z  &   &   &=  &0  \\
&   &y  &+  &3z  &-  &2w &=  &0  \\
&   &   &   &-6z &+  &2w &=  &0  \\
&   &   &   &    &   &0  &=  &0
\end{array}
\end{array}

The solution set:


\{\begin{pmatrix} 1/3 \\ 1 \\ 1/3 \\ 1 \end{pmatrix}w \,\big|\, w\in\mathbb{R}\}

has many vectors besides the zero vector (if we interpret  w as a number of molecules then solutions make sense only when  w is a nonnegative multiple of 3).
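
As a spot check, taking  w=3 gives the solution


\begin{pmatrix} 1 \\ 3 \\ 1 \\ 3 \end{pmatrix}


and substituting  x=1 ,  y=3 ,  z=1 ,  w=3 into the original system verifies it:  7-7=0 ,  8+3-5-6=0 ,  3-3=0 , and  9-6-3=0 .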

We now have the terminology to prove the two parts of Theorem 3.1. The first lemma deals with unrestricted combinations.

Lemma 3.7

For any homogeneous linear system there exist vectors \vec{\beta}_1, ..., \vec{\beta}_k such that the solution set of the system is


\{c_1\vec{\beta}_1+\cdots+c_k\vec{\beta}_k \,\big|\, c_1,\ldots,c_k\in\mathbb{R}\}

where k is the number of free variables in an echelon form version of the system.

Before the proof, we will recall the back substitution calculations that were done in the prior subsection.

Imagine that we have brought a system to this echelon form.


\begin{array}{*{4}{rc}r}
x  &+  &2y  &-  &z  &+  &2w &=  &0  \\
&   &-3y &+  &z  &   &   &=  &0  \\
&   &    &   &   &   &-w &=  &0
\end{array}

We next perform back-substitution to express each variable in terms of the free variable z. Working from the bottom up, we get first that  w is  0\cdot z , next that  y is  (1/3)\cdot z , and then substituting those two into the top equation  x+2((1/3)z)-z+2(0)=0 gives  x=(1/3)\cdot z . So, back substitution gives a parametrization of the solution set by starting at the bottom equation and using the free variables as the parameters to work row-by-row to the top. The proof below follows this pattern.
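
In the vector notation of the prior subsection, the parametrization just found reads


\{\begin{pmatrix} 1/3 \\ 1/3 \\ 1 \\ 0 \end{pmatrix}z \,\big|\, z\in\mathbb{R}\}


which is the form promised by Lemma 3.7, here with  k=1 and  \vec{\beta}_1 the displayed vector.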

Comment: That is, this proof just does a verification of the bookkeeping in back substitution to show that we haven't overlooked any obscure cases where this procedure fails, say, by leading to a division by zero. So this argument, while quite detailed, doesn't give us any new insights. Nevertheless, we have written it out for two reasons. The first reason is that we need the result— the computational procedure that we employ must be verified to work as promised. The second reason is that the row-by-row nature of back substitution leads to a proof that uses the technique of mathematical induction.[1] This is an important, and non-obvious, proof technique that we shall use a number of times in this book. Doing an induction argument here gives us a chance to see one in a setting where the proof material is easy to follow, and so the technique can be studied. Readers who are unfamiliar with induction arguments should be sure to master this one and the ones later in this chapter before going on to the second chapter.

Proof

First use Gauss' method to reduce the homogeneous system to echelon form. We will show that each leading variable can be expressed in terms of free variables. That will finish the argument because then we can use those free variables as the parameters. That is, the \vec{\beta}'s are the vectors of coefficients of the free variables (as in Example 3.6, where the solution is x=(1/3)w, y=w, z=(1/3)w, and w=w).

We will proceed by mathematical induction, which has two steps. The base step of the argument will be to focus on the bottom-most non-" 0=0 " equation and write its leading variable in terms of the free variables. The inductive step of the argument will be to argue that if we can express the leading variables from the bottom  t rows in terms of free variables, then we can express the leading variable of the next row up— the  t+1 -th row up from the bottom— in terms of free variables. With those two steps, the theorem will be proved because by the base step it is true for the bottom equation, and by the inductive step the fact that it is true for the bottom equation shows that it is true for the next one up, and then another application of the inductive step implies it is true for the third equation up, etc.

For the base step, consider the bottom-most non-" 0=0 " equation (the case where all the equations are "0=0" is trivial). We call that the m-th row:


a_{m,\ell_m}x_{\ell_m}+a_{m,\ell_m+1}x_{\ell_m+1}+\cdots+a_{m,n}x_n=0

where  a_{m,\ell_m}\neq 0 . (The notation here has "\ell" stand for "leading", so a_{m,\ell_m} means "the coefficient in row m of the variable that leads row m".) Either there are variables in this equation other than the leading one x_{\ell_m} or else there are not. If there are other variables x_{\ell_{m}+1}, etc., then they must be free variables because this is the bottom non-"0=0" row. Move them to the right and divide by a_{m,\ell_m}


x_{\ell_m}
=(-a_{m,\ell_m+1}/a_{m,\ell_m})x_{\ell_m+1}+\cdots+(-a_{m,n}/a_{m,\ell_m})x_n

to express this leading variable in terms of free variables. If there are no free variables in this equation then  x_{\ell_m}=0 (see the "tricky point" noted following this proof).

For the inductive step, we assume that for the  m -th equation, and for the  (m-1) -th equation, ..., and for the  (m-t) -th equation, we can express the leading variable in terms of free variables (where  0\leq t<m ). To prove that the same is true for the next equation up, the  (m-(t+1)) -th equation, we take each variable that leads in a lower-down equation  x_{\ell_m},\ldots,x_{\ell_{m-t}} and substitute its expression in terms of free variables. The result has the form


a_{m-(t+1),\ell_{m-(t+1)}}x_{\ell_{m-(t+1)}}+
\text{sums of multiples of free variables}=0

where  a_{m-(t+1),\ell_{m-(t+1)}}\neq 0 . We move the free variables to the right-hand side and divide by  a_{m-(t+1),\ell_{m-(t+1)}} , to end with  x_{\ell_{m-(t+1)}} expressed in terms of free variables.

Because we have shown both the base step and the inductive step, by the principle of mathematical induction the proposition is true.

We say that the set \{c_1\vec{\beta}_1+\cdots+c_k\vec{\beta}_k \,\big|\, c_1,\ldots,c_k\in\mathbb{R}\} is generated by or spanned by the set of vectors  \{{\vec{\beta}_1},\ldots,{\vec{\beta}_k}\} . There is a tricky point to this definition. If a homogeneous system has a unique solution, the zero vector, then we say the solution set is generated by the empty set of vectors. This fits with the pattern of the other solution sets: in the proof above the solution set is derived by taking the  c 's to be the free variables and if there is a unique solution then there are no free variables.

This proof incidentally shows, as discussed after Example 2.4, that solution sets can always be parametrized using the free variables.

Nonhomogeneous Systems

The next lemma finishes the proof of Theorem 3.1 by considering the particular solution part of the solution set's description.

Lemma 3.8

For a linear system, where \vec{p} is any particular solution, the solution set equals this set.


\{\vec{p}+\vec{h} \,\big|\, \vec{h}\text{ satisfies the associated homogeneous system}\}
Proof

We will show mutual set inclusion, that any solution to the system is in the above set and that anything in the set is a solution to the system.[2]

For set inclusion the first way, that if a vector solves the system then it is in the set described above, assume that  \vec{s} solves the system. Then  \vec{s}-\vec{p} solves the associated homogeneous system since for each equation index  i ,


\begin{align}
a_{i,1}(s_1-p_1)+\cdots+a_{i,n}(s_n-p_n)
&=(a_{i,1}s_1+\cdots+a_{i,n}s_n)       \\
&\quad -(a_{i,1}p_1+\cdots+a_{i,n}p_n)  \\
&=d_i-d_i                 \\
&=0
\end{align}

where  p_j and  s_j are the  j -th components of  \vec{p} and  \vec{s} . We can write  \vec{s}-\vec{p} as  \vec{h} , where  \vec{h} solves the associated homogeneous system, to express  \vec{s} in the required  \vec{p}+\vec{h} form.

For set inclusion the other way, take a vector of the form \vec{p}+\vec{h}, where  \vec{p} solves the system and  \vec{h} solves the associated homogeneous system, and note that it solves the given system: for any equation index i,


\begin{align}
a_{i,1}(p_1+h_1)+\cdots+a_{i,n}(p_n+h_n)
&=(a_{i,1}p_1+\cdots+a_{i,n}p_n)      \\
&\quad+(a_{i,1}h_1+\cdots+a_{i,n}h_n)  \\
&=d_i+0                                \\
&=d_i
\end{align}

where  h_j is the  j -th component of  \vec{h} .

The two lemmas above together establish Theorem 3.1. We remember that theorem with the slogan " \text{General} = \text{Particular} + \text{Homogeneous} ".

Example 3.9

This system illustrates Theorem 3.1.


\begin{array}{*{3}{rc}r}
x  &+  &2y  &-  &z  &=  &1  \\
2x &+  &4y  &   &   &=  &2  \\
&   &y   &-  &3z &=  &0
\end{array}

Gauss' method


\xrightarrow[]{-2\rho_1+\rho_2}\;
\begin{array}{*{3}{rc}r}
x  &+  &2y  &-  &z  &=  &1  \\
&   &    &   &2z &=  &0  \\
&   &y   &-  &3z &=  &0
\end{array}
\;\xrightarrow[]{\rho_2\leftrightarrow\rho_3}\;
\begin{array}{*{3}{rc}r}
x  &+  &2y  &-  &z  &=  &1  \\
&   &y   &-  &3z &=  &0  \\
&   &    &   &2z &=  &0
\end{array}

shows that the general solution is a singleton set.


\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \}

That single vector is, of course, a particular solution. The associated homogeneous system reduces via the same row operations

\begin{array}{rcl}
\begin{array}{*{3}{rc}r}
x  &+  &2y  &-  &z  &=  &0  \\
2x &+  &4y  &   &   &=  &0  \\
&   &y   &-  &3z &=  &0
\end{array}
&\xrightarrow[]{-2\rho_1+\rho_2}
\;\xrightarrow[]{\rho_2\leftrightarrow\rho_3}
&\begin{array}{*{3}{rc}r}
x  &+  &2y  &-  &z  &=  &0  \\
&   &y   &-  &3z &=  &0  \\
&   &    &   &2z &=  &0
\end{array}
\end{array}

to also give a singleton set.


\{\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \}

As the theorem states, and as discussed at the start of this subsection, in this single-solution case the general solution results from taking the particular solution and adding to it the unique solution of the associated homogeneous system.

Example 3.10

Also discussed there is that the case where the general solution set is empty fits the "\text{General}=\text{Particular}+\text{Homogeneous}" pattern. This system illustrates. Gauss' method

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
x  &   &  &+  &z  &+ &w  &=  &-1  \\
2x  &-  &y &   &   &+ &w  &=  &3   \\
x  &+  &y &+  &3z &+ &2w &=  &1
\end{array}
&\xrightarrow[-\rho_1+\rho_3]{-2\rho_1+\rho_2}
&\begin{array}{*{4}{rc}r}
x  &   &  &+  &z  &+ &w  &=  &-1  \\
&   &-y&-  &2z &- &w  &=  &5   \\
&   &y &+  &2z &+ &w  &=  &2
\end{array}
\end{array}

shows that it has no solutions. The associated homogeneous system, of course, has a solution.

\begin{array}{rcl}
\begin{array}{*{4}{rc}r}
x  &   &  &+  &z  &+ &w  &=  &0   \\
2x  &-  &y &   &   &+ &w  &=  &0   \\
x  &+  &y &+  &3z &+ &2w &=  &0
\end{array}
&\xrightarrow[-\rho_1+\rho_3]{-2\rho_1+\rho_2}
\;\xrightarrow[]{\rho_2+\rho_3}
&\begin{array}{*{4}{rc}r}
x  &   &  &+  &z  &+ &w  &=  &0   \\
&   &-y&-  &2z &- &w  &=  &0   \\
&   &  &   &   &  &0  &=  &0
\end{array}
\end{array}

In fact, the solution set of the homogeneous system is infinite.


\{\begin{pmatrix} -1 \\ -2 \\ 1 \\ 0 \end{pmatrix}z+\begin{pmatrix} -1 \\ -1 \\ 0 \\ 1 \end{pmatrix}w
\,\big|\, z,w\in\mathbb{R}\}

However, because no particular solution of the original system exists, the general solution set is empty— there are no vectors of the form \vec{p}+\vec{h} because there are no \vec{p}\,'s.

Corollary 3.11

Solution sets of linear systems are either empty, have one element, or have infinitely many elements.

Proof

We've seen examples of all three happening so we need only prove that those are the only possibilities.

First, notice a homogeneous system with at least one non- \vec{0} solution \vec{v} has infinitely many solutions because the set of multiples s\vec{v} is infinite— if s\neq 1 then s\vec{v}-\vec{v}=(s-1)\vec{v} is easily seen to be non-\vec{0}, and so s\vec{v}\neq \vec{v}.

Now, apply Lemma 3.8 to conclude that a solution set


\{\vec{p}+\vec{h}\,\big|\,\vec{h} \text{ solves the associated homogeneous system}\}

is either empty (if there is no particular solution  \vec{p} ), or has one element (if there is a  \vec{p} and the homogeneous system has the unique solution  \vec{0} ), or is infinite (if there is a  \vec{p} and the homogeneous system has a non-\vec{0} solution, and thus by the prior paragraph has infinitely many solutions).

This table summarizes the factors affecting the size of a general solution.

number of solutions of the
associated homogeneous system
one infinitely many
particular
solution
exists?
yes unique
solution
infinitely many
solutions
no no
solutions
no
solutions

The factor on the top of the table is the simpler one. When we perform Gauss' method on a linear system, ignoring the constants on the right side and so paying attention only to the coefficients on the left-hand side, we either end with every variable leading some row or else we find that some variable does not lead a row, that is, that some variable is free. (Of course, "ignoring the constants on the right" is formalized by considering the associated homogeneous system. We are simply putting aside for the moment the possibility of a contradictory equation.)

A nice insight into the factor on the top of this table at work comes from considering the case of a system having the same number of equations as variables. This system will have a solution, and the solution will be unique, if and only if it reduces to an echelon form system where every variable leads its row, which will happen if and only if the associated homogeneous system has a unique solution. Thus, the question of uniqueness of solution is especially interesting when the system has the same number of equations as variables.

Definition 3.12

A square matrix is nonsingular if it is the matrix of coefficients of a homogeneous system with a unique solution. It is singular otherwise, that is, if it is the matrix of coefficients of a homogeneous system with infinitely many solutions.

Example 3.13

The systems from Example 3.3, Example 3.5, and Example 3.9 each have an associated homogeneous system with a unique solution. Thus these matrices are nonsingular.


\begin{pmatrix}
3  &4  \\
2  &-1
\end{pmatrix}
\qquad
\begin{pmatrix}
3  &2   &1  \\
6  &4   &0  \\
0  &1   &1
\end{pmatrix}
\qquad
\begin{pmatrix}
1  &2  &-1 \\
2  &4  &0  \\
0  &1  &-3
\end{pmatrix}

The Chemistry problem from Example 3.6 is a homogeneous system with more than one solution so its matrix is singular.


\begin{pmatrix}
7  &0  &-7 &0  \\
8  &1  &-5 &-2 \\
0  &1  &-3 &0  \\
0  &3  &-6 &-1
\end{pmatrix}
Example 3.14

The first of these matrices is nonsingular while the second is singular


\begin{pmatrix}
1  &2  \\
3  &4
\end{pmatrix}
\qquad
\begin{pmatrix}
1  &2  \\
3  &6
\end{pmatrix}

because the first of these homogeneous systems has a unique solution while the second has infinitely many solutions.


\begin{array}{*{2}{rc}r}
x &+  &2y  &=  &0  \\
3x &+  &4y  &=  &0
\end{array}
\qquad
\begin{array}{*{2}{rc}r}
x &+  &2y  &=  &0  \\
3x &+  &6y  &=  &0
\end{array}

We have made the distinction in the definition because a system (with the same number of equations as variables) behaves in one of two ways, depending on whether its matrix of coefficients is nonsingular or singular. A system where the matrix of coefficients is nonsingular has a unique solution for any constants on the right side: for instance, Gauss' method shows that this system


\begin{array}{*{2}{rc}r}
x  &+  &2y  &=  &a \\
3x &+  &4y  &=  &b
\end{array}

has the unique solution x=b-2a and y=(3a-b)/2. On the other hand, a system where the matrix of coefficients is singular never has a unique solution— it has either no solutions or else has infinitely many, as with these.


\begin{array}{*{2}{rc}r}
x  &+  &2y  &=   &1   \\
3x  &+  &6y  &=   &2
\end{array}
\qquad
\begin{array}{*{2}{rc}r}
x  &+  &2y  &=   &1   \\
3x  &+  &6y  &=   &3
\end{array}

Thus, "singular" can be thought of as connoting "troublesome", or at least "not ideal".

The above table has two factors. We have already considered the factor along the top: we can tell which column a given linear system goes in solely by considering the system's left-hand side— the constants on the right-hand side play no role in this factor. The table's other factor, determining whether a particular solution exists, is tougher. Consider these two


\begin{array}{*{2}{rc}r}
3x &+ &2y &= &5  \\
3x &+ &2y &= &5
\end{array}
\qquad
\begin{array}{*{2}{rc}r}
3x &+ &2y &= &5  \\
3x &+ &2y &= &4
\end{array}

with the same left sides but different right sides. Obviously, the first has a solution while the second does not, so here the constants on the right side decide if the system has a solution. We could conjecture that the left side of a linear system determines the number of solutions while the right side determines if solutions exist, but that guess is not correct. Compare these two systems


\begin{array}{*{2}{rc}r}
3x &+ &2y &= &5  \\
4x &+ &2y &= &4
\end{array}
\qquad
\begin{array}{*{2}{rc}r}
3x &+ &2y &= &5  \\
3x &+ &2y &= &4
\end{array}

with the same right sides but different left sides. The first has a solution but the second does not. Thus the constants on the right side of a system don't decide alone whether a solution exists; rather, it depends on some interaction between the left and right sides.

For some intuition about that interaction, consider this system with one of the coefficients left as the parameter c.


\begin{array}{*{3}{rc}r}
x  &+  &2y  &+  &3z  &=  &1  \\
x  &+  &y   &+  &z   &=  &1  \\
cx  &+  &3y  &+  &4z  &=  &0
\end{array}

If  c=2 this system has no solution because the left-hand side has the third row as a sum of the first two, while the right-hand does not. If  c\neq 2 this system has a unique solution (try it with  c=1 ). For a system to have a solution, if one row of the matrix of coefficients on the left is a linear combination of other rows, then on the right the constant from that row must be the same combination of constants from the same rows.
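
As a check on the  c\neq 2 claim, here is the  c=1 case worked out.


\begin{array}{rcl}
\begin{array}{*{3}{rc}r}
x  &+  &2y  &+  &3z  &=  &1  \\
x  &+  &y   &+  &z   &=  &1  \\
x  &+  &3y  &+  &4z  &=  &0
\end{array}
&\xrightarrow[-\rho_1+\rho_3]{-\rho_1+\rho_2}
&\begin{array}{*{3}{rc}r}
x  &+  &2y  &+  &3z  &=  &1  \\
   &   &-y  &-  &2z  &=  &0  \\
   &   &y   &+  &z   &=  &-1
\end{array}
\end{array}


Then  \rho_2+\rho_3 gives  -z=-1 , so  z=1 ,  y=-2z=-2 , and  x=1-2y-3z=2 ; the system has the unique solution  (2,-2,1) .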

More intuition about the interaction comes from studying linear combinations. That will be our focus in the second chapter, after we finish the study of Gauss' method itself in the rest of this chapter.

Exercises

This exercise is recommended for all readers.
Problem 1

Solve each system. Express the solution set using vectors. Identify the particular solution and the solution set of the homogeneous system.

  1.  \begin{array}{*{2}{rc}r}
3x  &+  &6y  &=  &18  \\
x  &+  &2y  &=  &6
\end{array}
  2.  \begin{array}{*{2}{rc}r}
x  &+  &y   &=  &1  \\
x  &-  &y   &=  &-1
\end{array}
  3.  \begin{array}{*{3}{rc}r}
x_1  &   &     &+  &x_3   &=  &4  \\
x_1  &-  &x_2  &+  &2x_3  &=  &5  \\
4x_1  &-  &x_2  &+  &5x_3  &=  &17
\end{array}
  4.  \begin{array}{*{3}{rc}r}
2a   &+  &b    &-  &c     &=  &2  \\
2a   &   &     &+  &c     &=  &3  \\
a   &-  &b    &   &      &=  &0
\end{array}
  5.  \begin{array}{*{4}{rc}r}
x  &+  &2y   &-   &z   &    &    &=  &3  \\
2x  &+  &y    &    &    &+   &w   &=  &4  \\
x  &-  &y    &+   &z   &+   &w   &=  &1
\end{array}
  6.  \begin{array}{*{4}{rc}r}
x  &   &     &+   &z   &+   &w   &=  &4  \\
2x  &+  &y    &    &    &-   &w   &=  &2  \\
3x  &+  &y    &+   &z   &    &    &=  &7
\end{array}
Problem 2

Solve each system, giving the solution set in vector notation. Identify the particular solution and the solution of the homogeneous system.

  1.  \begin{array}{*{3}{rc}r}
2x  &+  &y  &-  &z  &=  &1  \\
4x  &-  &y  &   &   &=  &3
\end{array}
  2.  \begin{array}{*{4}{rc}r}
x  &   &   &-  &z  &   &   &=  &1  \\
&   &y  &+  &2z &-  &w  &=  &3  \\
x  &+  &2y &+  &3z &-  &w  &=  &7
\end{array}
  3.  \begin{array}{*{4}{rc}r}
x  &-  &y  &+  &z  &   &   &=  &0  \\
&   &y  &   &   &+  &w  &=  &0  \\
3x  &-  &2y &+  &3z &+  &w  &=  &0  \\
&   &-y &   &   &-  &w  &=  &0
\end{array}
  4.  \begin{array}{*{5}{rc}r}
a  &+  &2b &+  &3c &+  &d  &-  &e  &=  &1  \\
3a  &-  &b  &+  &c  &+  &d  &+  &e  &=  &3
\end{array}
This exercise is recommended for all readers.
Problem 3

For the system


\begin{array}{*{4}{rc}r}
2x  &-  &y  &   &    &-  &w  &=  &3  \\
&   &y  &+  &z   &+  &2w &=  &2  \\
x  &-  &2y &-  &z   &   &   &=  &-1
\end{array}

which of these can be used as the particular solution part of some general solution?

  1.  \begin{pmatrix} 0 \\ -3 \\ 5 \\ 0 \end{pmatrix}
  2.  \begin{pmatrix} 2 \\ 1 \\ 1 \\ 0 \end{pmatrix}
  3.  \begin{pmatrix} -1 \\ -4 \\ 8 \\ -1 \end{pmatrix}
This exercise is recommended for all readers.
Problem 4

Lemma 3.8 says that any particular solution may be used for \vec{p}. Find, if possible, a general solution to this system


\begin{array}{*{4}{rc}r}
x  &-  &y  &   &    &+  &w  &=  &4  \\
2x  &+  &3y &-  &z   &   &   &=  &0  \\
&   &y  &+  &z   &+  &w  &=  &4
\end{array}

that uses the given vector as its particular solution.

  1.  \begin{pmatrix} 0 \\ 0 \\ 0 \\ 4 \end{pmatrix}
  2.  \begin{pmatrix} -5 \\ 1 \\ -7 \\ 10 \end{pmatrix}
  3.  \begin{pmatrix} 2 \\ -1 \\ 1 \\ 1 \end{pmatrix}
Problem 5

One of these is nonsingular while the other is singular. Which is which?

  1. \begin{pmatrix}
1  &3   \\
4  &-12
\end{pmatrix}
  2. \begin{pmatrix}
1  &3  \\
4  &12
\end{pmatrix}
This exercise is recommended for all readers.
Problem 6

Singular or nonsingular?

  1. 
\begin{pmatrix}
1  &2  \\
1  &3
\end{pmatrix}
  2. 
\begin{pmatrix}
1  &2  \\
-3  &-6
\end{pmatrix}
  3. 
\begin{pmatrix}
1  &2  &1  \\
1  &3  &1
\end{pmatrix}   (Careful!)
  4. 
\begin{pmatrix}
1  &2  &1  \\
1  &1  &3  \\
3  &4  &7
\end{pmatrix}
  5. 
\begin{pmatrix}
2  &2  &1  \\
1  &0  &5  \\
-1  &1  &4
\end{pmatrix}
This exercise is recommended for all readers.
Problem 7

Is the given vector in the set generated by the given set?

  1.  \begin{pmatrix} 2 \\ 3 \end{pmatrix},  \{\begin{pmatrix} 1 \\ 4 \end{pmatrix},
\begin{pmatrix} 1 \\ 5 \end{pmatrix}\}
  2.  \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix},  \{\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}\}
  3.  \begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix},  \{\begin{pmatrix} 1 \\ 0 \\ 4 \end{pmatrix},
\begin{pmatrix} 2 \\ 1 \\ 5 \end{pmatrix},
\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix},
\begin{pmatrix} 4 \\ 2 \\ 1 \end{pmatrix}\}
  4.  \begin{pmatrix} 1 \\ 0 \\ 1 \\ 1 \end{pmatrix},  \{\begin{pmatrix} 2 \\ 1 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 3 \\ 0 \\ 0 \\ 2 \end{pmatrix}\}
Problem 8

Prove that any linear system with a nonsingular matrix of coefficients has a solution, and that the solution is unique.

Problem 9

To tell the whole truth, there is another tricky point to the proof of Lemma 3.7. What happens if there are no non-" 0=0 " equations? (There aren't any more tricky points after this one.)

This exercise is recommended for all readers.
Problem 10

Prove that if  \vec{s} and  \vec{t} satisfy a homogeneous system then so do these vectors.

  1.  \vec{s}+\vec{t}
  2.  3\vec{s}
  3.  k\vec{s}+m\vec{t} for  k,m\in\mathbb{R}

What's wrong with: "These three show that if a homogeneous system has one solution then it has many solutions— any multiple of a solution is another solution, and any sum of solutions is a solution also— so there are no homogeneous systems with exactly one solution."?

Problem 11

Prove that if a system with only rational coefficients and constants has a solution then it has at least one all-rational solution. Must it have infinitely many?


Footnotes

  1. More information on mathematical induction is in the appendix.
  2. More information on equality of sets is in the appendix.


4 - Comparing Set Descriptions

This subsection is optional. Later material will not require the work here.

Comparing Set Descriptions

A set can be described in many different ways. Here are two different descriptions of a single set:


\{\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}z\,\big|\, z\in\mathbb{R}\}
\quad\text{and}\quad
\{\begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix}w\,\big|\, w\in\mathbb{R}\}.

For instance, this set contains


\begin{pmatrix} 5 \\ 10 \\ 15 \end{pmatrix}

(take z=5 and w=5/2) but does not contain


\begin{pmatrix} 4 \\ 8 \\ 11 \end{pmatrix}

(the first component gives z=4 but that clashes with the third component; similarly, the first component gives w=2 but the third component gives something different). Here is a third description of the same set:


\{\begin{pmatrix} 3 \\ 6 \\ 9 \end{pmatrix}+\begin{pmatrix} -1 \\ -2 \\ -3 \end{pmatrix}y\,\big|\, y\in\mathbb{R}\}.

We need to decide when two descriptions are describing the same set. More pragmatically stated, how can a person tell when an answer to a homework question describes the same set as the one described in the back of the book?

Set Equality

Sets are equal if and only if they have the same members. A common way to show that two sets, S_1 and S_2, are equal is to show mutual inclusion: any member of S_1 is also in S_2, and any member of S_2 is also in S_1.[1]

Example 4.1

To show that


S_1=
\{\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}c+\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}d\,\big|\, c,d\in\mathbb{R}\}

equals


S_2=
\{\begin{pmatrix} 4 \\ 1 \\ 0 \end{pmatrix}m+\begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}n\,\big|\, m,n\in\mathbb{R}\}

we show first that S_1\subseteq S_2 and then that S_2\subseteq S_1.

For the first half we must check that any vector from  S_1 is also in  S_2 . We first consider two examples to use them as models for the general argument. If we make up a member of S_1 by trying  c=1 and  d=1 , then to show that it is in S_2 we need  m and n such that


\begin{pmatrix} 4 \\ 1 \\ 0 \end{pmatrix}m
+\begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}n
=\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}

that is, this relation holds between m and n.


\begin{array}{*{2}{rc}r}
4m  &-  &n  &=  &2  \\
1m  &-  &3n &=  &0  \\
&   &0  &=  &0
\end{array}

Similarly, if we try  c=2 and  d=-1 , then to show that the resulting member of S_1 is in S_2 we need  m and n such that


\begin{pmatrix} 4 \\ 1 \\ 0 \end{pmatrix}m
+\begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}n
=\begin{pmatrix} 3 \\ -3 \\ 0 \end{pmatrix}

that is, this holds.


\begin{array}{*{2}{rc}r}
4m  &-  &n  &=  &3  \\
1m  &-  &3n &=  &-3 \\
&   &0  &=  &0
\end{array}

In the general case, to show that any vector from  S_1 is a member of  S_2 we must show that for any  c and  d there are appropriate  m and  n . We follow the pattern of the examples; fix


\begin{pmatrix} c+d \\ -c+d \\ 0 \end{pmatrix}\in S_1

and look for  m and  n such that


\begin{pmatrix} 4 \\ 1 \\ 0 \end{pmatrix}m
+\begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}n
=\begin{pmatrix} c+d \\ -c+d \\ 0 \end{pmatrix}

that is, this is true.


\begin{array}{*{2}{rc}r}
4m  &-  &n  &=  &c+d  \\
m  &-  &3n &=  &-c+d  \\
&   &0  &=  &0
\end{array}

Applying Gauss' method

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
4m  &-  &n  &=  &c+d  \\
m  &-  &3n &=  &-c+d
\end{array}
&\xrightarrow[]{-(1/4)\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
4m  &-  &n        &=  &c+d            \\
&   &-(11/4)n &=  &-(5/4)c+(3/4)d
\end{array}
\end{array}

gives  n=(5/11)c-(3/11)d and  m=(4/11)c+(2/11)d . This shows that for any choice of c and d there are appropriate m and n. We conclude any member of S_1 is a member of S_2 because it can be rewritten in this way:


\begin{pmatrix} c+d \\ -c+d \\ 0 \end{pmatrix}
=\begin{pmatrix} 4 \\ 1 \\ 0 \end{pmatrix}((4/11)c+(2/11)d)+
\begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}((5/11)c-(3/11)d).

For the other inclusion,  S_2\subseteq S_1 , we want to do the opposite. We want to show that for any choice of m and n there are appropriate c and d. So fix m and n and solve for  c and  d :

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
c  &+ &d  &= &4m-n \\
-c  &+ &d  &= &m-3n
\end{array}
&\xrightarrow[]{\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
c  &+ &d  &= &4m-n \\
&  &2d &= &5m-4n
\end{array}
\end{array}

shows that  d=(5/2)m-2n and  c=(3/2)m+n . Thus any vector from  S_2


\begin{pmatrix} 4 \\ 1 \\ 0 \end{pmatrix}m+\begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}n

is also of the right form for  S_1


\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}((3/2)m+n)
+\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}((5/2)m-2n).
Example 4.2

Of course, sometimes sets are not equal. The method of the prior example will help us see the relationship between the two sets. These


P=
\{\begin{pmatrix} x+y \\ 2x \\ y \end{pmatrix}\,\big|\, x,y\in\mathbb{R}\}
\quad\text{and}\quad
R=
\{\begin{pmatrix} m+p \\ n \\ p \end{pmatrix}\,\big|\, m,n,p\in\mathbb{R}\}

are not equal sets. While P is a subset of R, it is a proper subset of R because R is not a subset of P.

To see that, observe first that given a vector from  P we can express it in the form for  R — if we fix x and y, we can solve for appropriate m, n, and p:


\begin{array}{*{3}{rc}r}
m  &   &   &+  &p  &=  &x+y  \\
&   &n  &   &   &=  &2x   \\
&   &   &   &p  &=  &y
\end{array}

shows that any


\vec{v}=
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}x+
\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}y

can be expressed as a member of  R with  m=x ,  n=2x , and  p=y :


\vec{v}=
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}x+
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}2x+
\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}y.

Thus  P\subseteq R .

But, for the other direction, the reduction resulting from fixing m, n, and p and looking for x and y

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
x  &+  &y  &=  &m+p  \\
2x  &   &   &=  &n    \\
&   &y  &=  &p
\end{array}
&\xrightarrow[]{-2\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
x  &+  &y  &=  &m+p  \\
&   &-2y&=  &-2m+n-2p \\
&   &y  &=  &p
\end{array}                                  \\
&\xrightarrow[]{(1/2)\rho_2+\rho_3}
&\begin{array}{*{2}{rc}r}
x  &+  &y  &=  &m+p  \\
&   &-2y&=  &-2m+n-2p \\
&   &0  &=  &-m+(1/2)n
\end{array}
\end{array}

shows that the only vectors


\begin{pmatrix} m+p \\ n \\ p \end{pmatrix}\in R

representable in the form


\begin{pmatrix} x+y \\ 2x \\ y \end{pmatrix}

are those where  0=-m+(1/2)n . For instance,


\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}

is in  R but not in  P .

Exercises

Problem 1

Decide if the vector is a member of the set.

  1. \begin{pmatrix} 2 \\ 3 \end{pmatrix}, \{\begin{pmatrix} 1 \\ 2 \end{pmatrix}k\,\big|\, k\in\mathbb{R}\}
  2. \begin{pmatrix} -3 \\ 3 \end{pmatrix}, \{\begin{pmatrix} 1 \\ -1 \end{pmatrix}k\,\big|\, k\in\mathbb{R}\}
  3. \begin{pmatrix} -3 \\ 3 \\ 4 \end{pmatrix}, \{\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}k\,\big|\, k\in\mathbb{R}\}
  4. \begin{pmatrix} -3 \\ 3 \\ 4 \end{pmatrix}, \{\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}k+\begin{pmatrix} 0 \\ 0 \\ 2 \end{pmatrix}m
\,\big|\, k,m\in\mathbb{R}\}
  5. \begin{pmatrix} 1 \\ 4 \\ 14 \end{pmatrix}, \{\begin{pmatrix} 2 \\ 2 \\ 5 \end{pmatrix}k+\begin{pmatrix} -1 \\ 0 \\ 2 \end{pmatrix}m
\,\big|\, k,m\in\mathbb{R}\}
  6. \begin{pmatrix} 1 \\ 4 \\ 6 \end{pmatrix}, \{\begin{pmatrix} 2 \\ 2 \\ 5 \end{pmatrix}k+\begin{pmatrix} -1 \\ 0 \\ 2 \end{pmatrix}m
\,\big|\, k,m\in\mathbb{R}\}
Problem 2

Produce two descriptions of this set that are different than this one.


\{\begin{pmatrix} 2 \\ -5 \end{pmatrix}k\,\big|\, k\in\mathbb{R}\}
This exercise is recommended for all readers.
Problem 3

Show that the three descriptions given at the start of this subsection all describe the same set.

This exercise is recommended for all readers.
Problem 4

Show that these sets are equal


\{\begin{pmatrix} 1 \\ 4 \\ 1 \\ 1 \end{pmatrix}
+\begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}z\,\big|\, z\in\mathbb{R}  \}
\quad\text{and}\quad
\{\begin{pmatrix} 0 \\ 4 \\ 2 \\ 1 \end{pmatrix}
+\begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}k\,\big|\, k\in\mathbb{R}  \},

and that both describe the solution set of this system.


\begin{array}{*{4}{rc}r}
x  &-  &y  &+  &z  &+  &w  &=  &-1  \\
&   &y  &   &   &-  &w  &=  &3   \\
x  &   &   &+  &z  &+  &2w &=  &4
\end{array}
This exercise is recommended for all readers.
Problem 5

Decide if the sets are equal.

  1.  \{\begin{pmatrix} 1 \\ 2 \end{pmatrix}
+\begin{pmatrix} 0 \\ 3 \end{pmatrix}t
\,\big|\, t\in\mathbb{R}\} and  \{\begin{pmatrix} 1 \\ 8 \end{pmatrix}
+\begin{pmatrix} 0 \\ -1 \end{pmatrix}s
\,\big|\, s\in\mathbb{R}\}
  2.  \{\begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}t
+\begin{pmatrix} 2 \\ 1 \\ 5 \end{pmatrix}s
\,\big|\, t,s\in\mathbb{R}\} and  \{\begin{pmatrix} 4 \\ 7 \\ 7 \end{pmatrix}m
+\begin{pmatrix} -4 \\ -2 \\ -10 \end{pmatrix}n
\,\big|\, m,n\in\mathbb{R}\}
  3.  \{\begin{pmatrix} 1 \\ 2 \end{pmatrix}t
\,\big|\, t\in\mathbb{R}\} and  \{\begin{pmatrix} 2 \\ 4 \end{pmatrix}m
+\begin{pmatrix} 4 \\ 8 \end{pmatrix}n
\,\big|\, m,n\in\mathbb{R}\}
  4.  \{\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}s
+\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}t
\,\big|\, s,t\in\mathbb{R}\} and  \{\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}m
+\begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix}n
\,\big|\, m,n\in\mathbb{R}\}
  5.  \{\begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}t
+\begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix}s
\,\big|\, t,s\in\mathbb{R}\} and  \{\begin{pmatrix} 3 \\ 7 \\ 7 \end{pmatrix}t
+\begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}s
\,\big|\, t,s\in\mathbb{R}\}

Footnotes

  1. More information on set equality is in the appendix.


5 - Automation

This is a PASCAL routine to do  k\rho_i+\rho_j to an augmented matrix.

{ Perform the row operation  k*(row i) + (row j):
  replace row j of the augmented matrix by that combination. }
PROCEDURE Pivot(VAR LinSys : AugMat;
                k : REAL;
                i, j : INTEGER);
VAR
   Col : INTEGER;
BEGIN
   FOR Col := 1 TO NumVars+1 DO
      LinSys[j,Col] := k*LinSys[i,Col] + LinSys[j,Col];
END;

Of course this is only one part of a whole program, but it makes the point that Gaussian reduction is ideal for computer coding.
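
To make that point a bit more concrete, here is a minimal sketch of a forward-elimination driver that calls Pivot. It is not from the text; it assumes declarations such as CONST NumRows and NumVars and TYPE AugMat = ARRAY[1..NumRows, 1..NumVars+1] OF REAL, and it does no row swapping, so it handles only the case where each entry it pivots on is nonzero.

{ A hypothetical driver; assumes NumRows, NumVars, AugMat, and the
  Pivot routine above are declared, and performs no row swaps. }
PROCEDURE Reduce(VAR LinSys : AugMat);
VAR
   PivotRow, Row : INTEGER;
BEGIN
   FOR PivotRow := 1 TO NumRows-1 DO
      IF LinSys[PivotRow,PivotRow] <> 0 THEN
         FOR Row := PivotRow+1 TO NumRows DO
            Pivot(LinSys,
                  -LinSys[Row,PivotRow]/LinSys[PivotRow,PivotRow],
                  PivotRow, Row);
END;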

There are pitfalls, however. For example, some arise from the computer's use of finite-precision approximations of real numbers.

These two systems provide a simple example.


\begin{array}{*{2}{rc}r}
x  &+  &2y &= &3                  \\
3x &-  &2y &= &1
\end{array}
\qquad
\begin{array}{*{2}{rc}r}
x               &+ &2y  &=  &3                  \\
1.000\,000\,03x &+ &2y  &=  &3.000\,000\,03
\end{array}


Linalg singular and nonsingular systems.png

(In the graph of the second system, the two lines are hard to tell apart.) Both systems have  (1,1) as their unique solution.

In the first system, some small change in the numbers will produce only a small change in the solution:



\begin{array}{*{2}{rc}r}
x  &+  &2y &= &3                  \\
3x &-  &2y &= &1.008
\end{array}


gives a solution of  (1.002,0.999) . Geometrically, changing one of the lines by a small amount does not change the intersection point by very much.

That's not true for the second system. A small change in the coefficients



\begin{array}{*{2}{rc}r}
x              &+ &2y  &=  &3                  \\
1.000\,000\,01x &+ &2y  &=  &3.000\,000\,03
\end{array}


leads to a completely different answer:  (3,0) .
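
To see where  (3,0) comes from, subtract the first equation from the second: that leaves  0.000\,000\,01x=0.000\,000\,03 , so  x=3 , and then the first equation forces  y=0 . The two left-hand sides differ so little that a tiny change in the numbers swings the intersection point a long way.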

The solution of the second example varies wildly, depending on a  9 -th digit. That's bad news for a machine using  8 digits to represent reals. In short, systems that are nearly singular may be hard to compute with.

Another thing that can go wrong is error propagation. In a system with a large number of equations (say, 100 or more), small rounding errors early in the procedure can snowball to overwhelm the solution at the end.

These issues, and many others like them, are outside the scope of this book, but remember that just because Gauss' method always works in theory and just because a program correctly implements that method and just because the answer appears on green-bar paper, doesn't mean that answer is right. In practice, always use a package where experts have worked hard to counter what can go wrong.


Section II - Linear Geometry of n-Space

For readers who have seen the elements of vectors before, in calculus or physics, this section is an optional review. However, later work will refer to this material so it is not optional if it is not a review.

In the first section, we had to do a bit of work to show that there are only three types of solution sets— singleton, empty, and infinite. But in the special case of systems with two equations and two unknowns this is easy to see. Draw each two-unknowns equation as a line in the plane and then the two lines could have a unique intersection, be parallel, or be the same line.

Unique solution

Linalg unique solution.png


\begin{array}{*{2}{rc}r}
\scriptstyle 3x  &\scriptstyle +  &\scriptstyle 2y  &\scriptstyle =  &\scriptstyle 7   \\[-5pt]
\scriptstyle x   &\scriptstyle -  &\scriptstyle y   &\scriptstyle =  &\scriptstyle -1
\end{array}

No solutions

Linalg no solutions.png


\begin{array}{*{2}{rc}r}
\scriptstyle 3x  &\scriptstyle +  &\scriptstyle 2y  &\scriptstyle =  &\scriptstyle 7   \\[-5pt]
\scriptstyle 3x  &\scriptstyle +  &\scriptstyle 2y  &\scriptstyle =  &\scriptstyle 4
\end{array}

Infinitely many solutions

Linalg infinitely many solutions.png


\begin{array}{*{2}{rc}r}
\scriptstyle 3x  &\scriptstyle +  &\scriptstyle 2y  &\scriptstyle =  &\scriptstyle 7   \\[-5pt]
\scriptstyle 6x  &\scriptstyle +  &\scriptstyle 4y  &\scriptstyle =  &\scriptstyle 14
\end{array}

These pictures don't prove the results from the prior section, which apply to any number of linear equations and any number of unknowns, but nonetheless they do help us to understand those results. This section develops the ideas that we need to express our results from the prior section, and from some future sections, geometrically. In particular, while the two-dimensional case is familiar, to extend to systems with more than two unknowns we shall need some higher-dimensional geometry.


1 - Vectors in Space

"Higher-dimensional geometry" sounds exotic. It is exotic— interesting and eye-opening. But it isn't distant or unreachable.

We begin by defining one-dimensional space to be the set  \mathbb{R}^1 . To see that definition is reasonable, draw a one-dimensional space

Linalg line.png

and make the usual correspondence with  \mathbb{R} : pick a point to label 0 and another to label 1.

Linalg line with unit.png

Now, with a scale and a direction, finding the point corresponding to, say  +2.17 , is easy— start at  0 and head in the direction of  1 (i.e., the positive direction), but don't stop there, go  2.17 times as far.

The basic idea here, combining magnitude with direction, is the key to extending to higher dimensions.

An object comprised of a magnitude and a direction is a vector (we will use the same word as in the previous section because we shall show below how to describe such an object with a column vector). We can draw a vector as having some length, and pointing somewhere.

Linalg vectors 1.png

There is a subtlety here— these vectors

Linalg vectors 2.png

are equal, even though they start in different places, because they have equal lengths and equal directions. Again: those vectors are not just alike, they are equal.

How can things that are in different places be equal? Think of a vector as representing a displacement ("vector" is Latin for "carrier" or "traveler"). These squares undergo the same displacement, even though those displacements start in different places.

Linalg vectors 3.png

Sometimes, to emphasize that vectors are not anchored in this way, they are referred to as free vectors. Thus, these free vectors are equal, as each is a displacement of one over and two up.

Linalg vectors 4.png

More generally, vectors in the plane are the same if and only if they have the same change in first components and the same change in second components: the vector extending from  (a_1,a_2) to  (b_1,b_2) equals the vector from  (c_1,c_2) to  (d_1,d_2) if and only if  b_1-a_1=d_1-c_1 and  b_2-a_2=d_2-c_2 .

An expression like "the vector that, were it to start at  (a_1,a_2) , would extend to  (b_1,b_2) " is awkward. We instead describe such a vector as


\begin{pmatrix} b_1-a_1 \\ b_2-a_2 \end{pmatrix}

so that, for instance, the "one over and two up" arrows shown above picture this vector.


\begin{pmatrix} 1 \\ 2 \end{pmatrix}

We often draw the arrow as starting at the origin, and we then say it is in the canonical position (or natural position). When the vector


\begin{pmatrix} b_1-a_1 \\ b_2-a_2 \end{pmatrix}

is in its canonical position then it extends to the endpoint (b_1-a_1,b_2-a_2).

We typically just refer to "the point


\begin{pmatrix} 1 \\ 2 \end{pmatrix}
"

rather than "the endpoint of the canonical position of" that vector.

Thus, we will call both of these sets  \mathbb{R}^2 .


\{(x_1,x_2)\,\big|\, x_1,x_2\in\mathbb{R}\}
\qquad
\{\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\,\big|\, x_1,x_2\in\mathbb{R}\}

In the prior section we defined vectors and vector operations with an algebraic motivation;


r\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix} rv_1 \\ rv_2 \end{pmatrix}
\qquad
\begin{pmatrix} v_1 \\  v_2 \end{pmatrix}
+
\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
=
\begin{pmatrix} v_1+w_1 \\ v_2+w_2 \end{pmatrix}

we can now interpret those operations geometrically. For instance, if  \vec{v} represents a displacement then  3\vec{v}\, represents a displacement in the same direction but three times as far, and  -1\vec{v}\, represents a displacement of the same distance as  \vec{v}\, but in the opposite direction.

Linalg scaled vectors.png

And, where  \vec{v} and  \vec{w} represent displacements,  \vec{v}+\vec{w} represents those displacements combined.

Linalg vector addition.png

The long arrow is the combined displacement in this sense: if, in one minute, a ship's motion gives it the displacement relative to the earth of \vec{v} and a passenger's motion gives a displacement relative to the ship's deck of \vec{w}, then \vec{v}+\vec{w} is the displacement of the passenger relative to the earth.

Another way to understand the vector sum is with the parallelogram rule. Draw the parallelogram formed by the vectors \vec{v}_1,\vec{v}_2 and then the sum \vec{v}_1+\vec{v}_2 extends along the diagonal to the far corner.

Linalg vector addition 2.png

The above drawings show how vectors and vector operations behave in  \mathbb{R}^2 . We can extend to \mathbb{R}^3, or to even higher-dimensional spaces where we have no pictures, with the obvious generalization: the free vector that, if it starts at  (a_1,\ldots,a_n) , ends at  (b_1,\ldots,b_n) , is represented by this column


\begin{pmatrix} b_1-a_1 \\ \vdots \\ b_n-a_n \end{pmatrix}

(vectors are equal if they have the same representation), we aren't too careful to distinguish between a point and the vector whose canonical representation ends at that point,


\mathbb{R}^n=
\{\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}\,\big|\, v_1,\ldots,v_n\in\mathbb{R}\}

and addition and scalar multiplication are component-wise.
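A minimal sketch of those component-wise operations, in plain Python (the function names are ours, chosen only for this illustration):

```python
# Component-wise scalar multiplication and addition for vectors in R^n,
# represented here simply as lists of numbers.
def scale(r, v):
    return [r * vi for vi in v]

def add(v, w):
    return [vi + wi for vi, wi in zip(v, w)]

v = [1, 2]               # the "one over and two up" vector
print(scale(3, v))       # [3, 6]: same direction, three times as far
print(scale(-1, v))      # [-1, -2]: same distance, opposite direction
print(add(v, [2, -1]))   # [3, 1]: the two displacements combined
```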

Having considered points, we now turn to the lines.

In \mathbb{R}^2, the line through  (1,2) and  (3,1) is comprised of (the endpoints of) the vectors in this set


\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}+t\cdot\begin{pmatrix} 2 \\ -1 \end{pmatrix}\,\big|\, t\in\mathbb{R}\}

That description expresses this picture.

Linalg vector subtraction.png

The vector associated with the parameter t has its whole body in the line— it is a direction vector for the line. Note that points on the line to the left of  x=1 are described using negative values of  t .

In  \mathbb{R}^3 , the line through  (1,2,1) and  (2,3,2) is the set of (endpoints of) vectors of this form


\{ \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}+t\cdot\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\,\big|\, t\in\mathbb{R}\}

Linalg line in R3.png

and lines in even higher-dimensional spaces work in the same way.

If a line uses one parameter, so that there is freedom to move back and forth in one dimension, then a plane must involve two. For example, the plane through the points  (1,0,5) ,  (2,1,-3) , and  (-2,4,0.5) consists of (endpoints of) the vectors in


\{ \begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix}
+t\cdot\begin{pmatrix} 1 \\ 1 \\ -8 \end{pmatrix}
+s\cdot\begin{pmatrix} -3 \\ 4 \\ -4.5 \end{pmatrix}
\,\big|\, t,s\in\mathbb{R}      \}

(the column vectors associated with the parameters


\begin{pmatrix} 1 \\ 1 \\ -8 \end{pmatrix}
=
\begin{pmatrix} 2 \\ 1 \\ -3 \end{pmatrix}
-
\begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix}
\qquad
\begin{pmatrix} -3 \\ 4 \\ -4.5 \end{pmatrix}
=
\begin{pmatrix} -2 \\ 4 \\ 0.5 \end{pmatrix}
-
\begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix}

are two vectors whose whole bodies lie in the plane). As with the line, note that some points in this plane are described with negative t's or negative s's or both.
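Here is a minimal sketch of that two-parameter description in plain Python (the helper name is ours): fix one of the three points and use the differences to the other two as the direction vectors associated with t and s.

```python
# A point of the plane through p1, p2, p3, for given parameter values t and s.
def point_on_plane(p1, p2, p3, t, s):
    d1 = [b - a for a, b in zip(p1, p2)]   # direction vector p2 - p1
    d2 = [b - a for a, b in zip(p1, p3)]   # direction vector p3 - p1
    return [a + t * u + s * v for a, u, v in zip(p1, d1, d2)]

# The plane from the text, through (1,0,5), (2,1,-3), and (-2,4,0.5).
print(point_on_plane((1, 0, 5), (2, 1, -3), (-2, 4, 0.5), 1, 0))   # the point (2, 1, -3)
print(point_on_plane((1, 0, 5), (2, 1, -3), (-2, 4, 0.5), 0, 1))   # the point (-2, 4, 0.5)
```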

A description of planes that is often encountered in algebra and calculus uses a single equation as the condition that describes the relationship among the first, second, and third coordinates of points in a plane.


P=\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, 2x+y+z=4\}

Linalg plane.png

The translation from such a description to the vector description that we favor in this book is to think of the condition as a one-equation linear system and parametrize  x=(1/2)(4-y-z) .


P=\{\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}
+\begin{pmatrix} -0.5 \\ 1 \\ 0 \end{pmatrix}y
+\begin{pmatrix} -0.5 \\ 0 \\ 1 \end{pmatrix}z
\,\big|\, y,z\in\mathbb{R}\}

Linalg plane with basis.png

Generalizing from lines and planes, we define a  k -dimensional linear surface (or  k -flat) in \mathbb{R}^n to be \{\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2+\cdots+t_k\vec{v}_k
\,\big|\, t_1,\ldots ,t_k\in\mathbb{R}\} where  \vec{v}_1,\ldots,\vec{v}_k\in\mathbb{R}^n . For example, in \mathbb{R}^4,


\{\begin{pmatrix} 2 \\ \pi \\ 3 \\ -0.5 \end{pmatrix}
+t\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\,\big|\, t\in\mathbb{R}\}

is a line,


\{
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
+t\begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix}
+s\begin{pmatrix} 2 \\ 0 \\ 1 \\ 0 \end{pmatrix}
\,\big|\, t,s\in\mathbb{R}\}

is a plane, and


\{
\begin{pmatrix} 3 \\ 1 \\ -2 \\ 0.5 \end{pmatrix}
+r\begin{pmatrix} 0 \\ 0 \\ 0 \\ -1 \end{pmatrix}
+s\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}
+t\begin{pmatrix} 2 \\ 0 \\ 1 \\ 0 \end{pmatrix}
\,\big|\, r,s,t\in\mathbb{R}\}

is a three-dimensional linear surface. Again, the intuition is that a line permits motion in one direction, a plane permits motion in combinations of two directions, etc.

A linear surface description can be misleading about the dimension— this


L=\{
\begin{pmatrix} 1 \\ 0 \\ -1 \\ -2 \end{pmatrix}
+t\begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix}
+s\begin{pmatrix} 2 \\ 2 \\ 0 \\ -2 \end{pmatrix}
\,\big|\, t,s\in\mathbb{R}\}

is a degenerate plane because it is actually a line.


L=\{
\begin{pmatrix} 1 \\ 0 \\ -1 \\ -2 \end{pmatrix}
+r\begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix}
\,\big|\, r\in\mathbb{R}\}

We shall see in the Linear Independence section of Chapter Two what relationships among vectors cause the linear surface they generate to be degenerate.

We finish this subsection by restating our conclusions from the first section in geometric terms. First, the solution set of a linear system with  n unknowns is a linear surface in  \mathbb{R}^n . Specifically, it is a  k -dimensional linear surface, where  k is the number of free variables in an echelon form version of the system. Second, the solution set of a homogeneous linear system is a linear surface passing through the origin. Finally, we can view the general solution set of any linear system as being the solution set of its associated homogeneous system offset from the origin by a vector, namely by any particular solution.

Exercises

This exercise is recommended for all readers.
Problem 1

Find the canonical name for each vector.

  1. the vector from  (2,1) to  (4,2) in  \mathbb{R}^2
  2. the vector from  (3,3) to  (2,5) in  \mathbb{R}^2
  3. the vector from  (1,0,6) to  (5,0,3) in  \mathbb{R}^3
  4. the vector from  (6,8,8) to  (6,8,8) in  \mathbb{R}^3
This exercise is recommended for all readers.
Problem 2

Decide if the two vectors are equal.

  1. the vector from  (5,3) to  (6,2) and the vector from  (1,-2) to  (1,1)
  2. the vector from  (2,1,1) to  (3,0,4) and the vector from  (5,1,4) to  (6,0,7)
This exercise is recommended for all readers.
Problem 3

Does  (1,0,2,1) lie on the line through  (-2,1,1,0) and  (5,10,-1,4) ?

This exercise is recommended for all readers.
Problem 4
  1. Describe the plane through  (1,1,5,-1) ,  (2,2,2,0) , and  (3,1,0,4) .
  2. Is the origin in that plane?
Problem 5

Describe the plane that contains this point and line.


\begin{pmatrix} 2 \\ 0 \\ 3 \end{pmatrix}
\qquad
\{\begin{pmatrix} -1 \\ 0 \\ -4 \end{pmatrix}
+\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}t
\,\big|\, t\in\mathbb{R}\}
This exercise is recommended for all readers.
Problem 6

Intersect these planes.


\{\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}t+
\begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix}s
\,\big|\, t,s\in\mathbb{R}\}
\qquad
\{\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}
+\begin{pmatrix} 0 \\ 3 \\ 0 \end{pmatrix}k+
\begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix}m
\,\big|\, k,m\in\mathbb{R}\}
This exercise is recommended for all readers.
Problem 7

Intersect each pair, if possible.

  1.  \{\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}+t\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}
\,\big|\, t\in\mathbb{R}\} ,  \{\begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix}+s\begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}
\,\big|\, s\in\mathbb{R}\}
  2.  \{\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}+t\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}
\,\big|\, t\in\mathbb{R}\} ,  \{s\begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}
+w\begin{pmatrix} 0 \\ 4 \\ 1 \end{pmatrix}
\,\big|\, s,w\in\mathbb{R}\}
Problem 8

When a plane does not pass through the origin, performing operations on vectors whose bodies lie in it is more complicated than when the plane passes through the origin. Consider the picture in this subsection of the plane


\{\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}
+\begin{pmatrix} -0.5 \\ 1 \\ 0 \end{pmatrix} y
+\begin{pmatrix} -0.5 \\ 0 \\ 1 \end{pmatrix} z
\,\big|\, y,z\in\mathbb{R}\}

and the three vectors it shows, with endpoints (2,0,0), (1.5,1,0), and (1.5,0,1).

  1. Redraw the picture, including the vector in the plane that is twice as long as the one with endpoint (1.5,1,0). The endpoint of your vector is not (3,2,0); what is it?
  2. Redraw the picture, including the parallelogram in the plane that shows the sum of the vectors ending at (1.5,0,1) and (1.5,1,0). The endpoint of the sum, on the diagonal, is not (3,1,1); what is it?
Problem 9

Show that the line segments  \overline{(a_1,a_2)(b_1,b_2)} and  \overline{(c_1,c_2)(d_1,d_2)} have the same lengths and slopes if   b_1-a_1=d_1-c_1 and  b_2-a_2=d_2-c_2 . Is that only if?

Problem 10

How should \mathbb{R}^0 be defined?

This exercise is recommended for all readers.
? Problem 11

A person traveling eastward at a rate of  3 miles per hour finds that the wind appears to blow directly from the north. On doubling his speed it appears to come from the north east. What was the wind's velocity? (Klamkin 1957)

This exercise is recommended for all readers.
Problem 12

Euclid describes a plane as "a surface which lies evenly with the straight lines on itself". Commentators (e.g., Heron) have interpreted this to mean "(A plane surface is) such that, if a straight line pass through two points on it, the line coincides wholly with it at every spot, all ways". (Translations from Heath 1956, pp. 171-172.) Do planes, as described in this section, have that property? Does this description adequately define planes?


2 - Length and Angle Measures

We've translated the first section's results about solution sets into geometric terms for insight into how those sets look. But we must watch out not to be misled by our own terms; labeling subsets of  \mathbb{R}^k of the forms  \{\vec{p}+t\vec{v}\,\big|\, t\in\mathbb{R}\} and  \{\vec{p}+t\vec{v}+s\vec{w}\,\big|\, t,s\in\mathbb{R}\} as "lines" and "planes" doesn't make them act like the lines and planes of our prior experience. Rather, we must ensure that the names suit the sets. While we can't prove that the sets satisfy our intuition— we can't prove anything about intuition— in this subsection we'll observe that a result familiar from  \mathbb{R}^2 and  \mathbb{R}^3 , when generalized to arbitrary  \mathbb{R}^k , supports the idea that a line is straight and a plane is flat. Specifically, we'll see how to do Euclidean geometry in a "plane" by giving a definition of the angle between two \mathbb{R}^n vectors in the plane that they generate.

Definition 2.1

The length of a vector  \vec{v}\in\mathbb{R}^n is this.


|\vec{v}\,|=\sqrt{v_1^2+\cdots+v_n^2}
Remark 2.2

This is a natural generalization of the Pythagorean Theorem. A classic discussion is in (Pólya 1954).

We can use that definition to derive a formula for the angle between two vectors. For a model of what to do, consider two vectors in  \mathbb{R}^3 .

Linalg two vectors in R3.png

Put them in canonical position and, in the plane that they determine, consider the triangle formed by  \vec{u} ,  \vec{v} , and  \vec{u}-\vec{v} .

Linalg triangle formed by two vectors.png

Apply the Law of Cosines, |\vec{u}-\vec{v}\,|^2
=
|\vec{u}\,|^2+|\vec{v}\,|^2-
2\,|\vec{u}\,|\,|\vec{v}\,|\cos\theta, where  \theta is the angle between the vectors. Expand both sides


(u_1-v_1)^2+(u_2-v_2)^2+(u_3-v_3)^2
=(u_1^2+u_2^2+u_3^2)+(v_1^2+v_2^2+v_3^2)-
2\,|\vec{u}\,|\,|\vec{v}\,|\cos\theta

and simplify.


\theta
=
\arccos(\,\frac{u_1v_1+u_2v_2+u_3v_3}{
|\vec{u}\,|\,|\vec{v}\,| }\,)

In higher dimensions no picture suffices but we can make the same argument analytically. First, the form of the numerator is clear— it comes from the middle terms of the squares  (u_1-v_1)^2 ,  (u_2-v_2)^2 , etc.

Definition 2.3

The dot product (or inner product, or scalar product) of two  n -component real vectors is the linear combination of their components.


\vec{u}\cdot\vec{v}=u_1v_1+u_2v_2+\cdots +u_nv_n

Note that the dot product of two vectors is a real number, not a vector, and that the dot product of a vector from  \mathbb{R}^n with a vector from  \mathbb{R}^m is defined only when  n equals  m . Note also this relationship between dot product and length: dotting a vector with itself gives its length squared  \vec{u}\cdot\vec{u}=u_1u_1+\cdots+u_nu_n=|\vec{u}\,|^2 .
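A minimal sketch of the definition and of that length relationship, in plain Python (the helper names are ours):

```python
# Dot product and length for vectors in R^n, represented as lists of numbers.
from math import sqrt

def dot(u, v):
    assert len(u) == len(v)       # defined only when the sizes agree
    return sum(ui * vi for ui, vi in zip(u, v))

def length(v):
    return sqrt(dot(v, v))        # since v . v = |v|^2

u = [1, 2, 3]
print(dot(u, u))        # 14
print(length(u) ** 2)   # essentially 14 again, up to rounding
```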

Remark 2.4

The wording in that definition allows one or both of the two to be a row vector instead of a column vector. Some books require that the first vector be a row vector and that the second vector be a column vector. We shall not be that strict.

Still reasoning with letters, but guided by the pictures, we use the next theorem to argue that the triangle formed by  \vec{u} ,  \vec{v} , and  \vec{u}-\vec{v} in  \mathbb{R}^n lies in the planar subset of  \mathbb{R}^n generated by  \vec{u} and  \vec{v} .

Theorem 2.5 (Triangle Inequality)

For any  \vec{u},\vec{v}\in\mathbb{R}^n ,


|\vec{u}+\vec{v}\,|\leq|\vec{u}\,|+|\vec{v}\,|

with equality if and only if one of the vectors is a nonnegative scalar multiple of the other one.

This inequality is the source of the familiar saying, "The shortest distance between two points is in a straight line."

Linalg triangle inequality.png

Proof

(We'll use some algebraic properties of dot product that we have not yet checked, for instance that  \vec{u}\cdot(\vec{a}+\vec{b})
=\vec{u}\cdot\vec{a}+\vec{u}\cdot\vec{b} and that \vec{u}\cdot\vec{v}=\vec{v}\cdot\vec{u}. See Problem 8.) The desired inequality holds if and only if its square holds.

\begin{array}{rl}
|\vec{u}+\vec{v}\,|^2
&\leq(\,|\vec{u}\,|+|\vec{v}\,|\,)^2                            \\
(\,\vec{u}+\vec{v}\,)\cdot(\,\vec{u}+\vec{v}\,)
&\leq|\vec{u}\,|^2+2\,|\vec{u}\,|\,|\vec{v}\,|
+|\vec{v}\,|^2                                         \\
\vec{u}\cdot\vec{u}+\vec{u}\cdot\vec{v}
+\vec{v}\cdot\vec{u}+\vec{v}\cdot\vec{v}
&\leq\vec{u}\cdot\vec{u}+2\,|\vec{u}\,|\,|\vec{v}\,|
+\vec{v}\cdot\vec{v}                                          \\
2\,\vec{u}\cdot\vec{v}
&\leq 2\,|\vec{u}\,|\,|\vec{v}\,|
\end{array}

That, in turn, holds if and only if the relationship obtained by multiplying both sides by the nonnegative numbers  |\vec{u}\,| and  |\vec{v}\,|


2\,(\,|\vec{v}\,|\,\vec{u}\,)\cdot(\,|\vec{u}\,|\,\vec{v}\,)
\leq
2\,|\vec{u}\,|^2\,|\vec{v}\,|^2

and rewriting


0
\leq
|\vec{u}\,|^2\,|\vec{v}\,|^2
-2\,(\,|\vec{v}\,|\,\vec{u}\,)\cdot(\,|\vec{u}\,|\,\vec{v}\,)
+|\vec{u}\,|^2\,|\vec{v}\,|^2

is true. But factoring


0\leq
(\,|\vec{u}\,|\,\vec{v}-|\vec{v}\,|\,\vec{u}\,)\cdot
(\,|\vec{u}\,|\,\vec{v}-|\vec{v}\,|\,\vec{u}\,)

shows that this certainly is true since it only says that the square of the length of the vector  |\vec{u}\,|\,\vec{v}-|\vec{v}\,|\,\vec{u}\, is not negative.

As for equality, it holds when, and only when,  |\vec{u}\,|\,\vec{v}-|\vec{v}\,|\,\vec{u} is  \vec{0} . The check that  |\vec{u}\,|\,\vec{v}=|\vec{v}\,|\,\vec{u}\, if and only if one vector is a nonnegative real scalar multiple of the other is easy.

This result supports the intuition that even in higher-dimensional spaces, lines are straight and planes are flat. For any two points in a linear surface, the line segment connecting them is contained in that surface (this is easily checked from the definition). But if the surface has a bend then that would allow for a shortcut (shown here grayed, while the segment from P to Q that is contained in the surface is solid).

Linalg shortest path on surface.png

Because the Triangle Inequality says that in any \mathbb{R}^n, the shortest cut between two endpoints is simply the line segment connecting them, linear surfaces have no such bends.

Back to the definition of angle measure. The heart of the Triangle Inequality's proof is the " \vec{u}\cdot\vec{v}\leq |\vec{u}\,|\,|\vec{v}\,| " line. At first glance, a reader might wonder if some pairs of vectors satisfy the inequality in this way: while  \vec{u}\cdot\vec{v} is a large number, with absolute value bigger than the right-hand side, it is a negative large number. The next result says that no such pair of vectors exists.

Corollary 2.6 (Cauchy-Schwarz Inequality)

For any  \vec{u},\vec{v}\in\mathbb{R}^n ,


|\,\vec{u}\cdot\vec{v}\,|
\leq
|\,\vec{u}\,|\,|\vec{v}\,|

with equality if and only if one vector is a scalar multiple of the other.

Proof

The Triangle Inequality's proof shows that  \vec{u}\cdot\vec{v}\leq |\vec{u}\,|\,|\vec{v}\,| so if \vec{u}\cdot\vec{v} is positive or zero then we are done. If  \vec{u}\cdot\vec{v} is negative then this holds.


|\,\vec{u}\cdot\vec{v}\,|
=-(\,\vec{u}\cdot\vec{v}\,)
=(-\vec{u}\,)\cdot\vec{v}
\leq
|-\vec{u}\,|\,|\vec{v}\,|
=|\vec{u}\,|\,|\vec{v}\,|

The equality condition is Problem 9.

The Cauchy-Schwarz inequality assures us that the next definition makes sense because the fraction has absolute value less than or equal to one.

Definition 2.7

The angle between two nonzero vectors  \vec{u},\vec{v}\in\mathbb{R}^n is


\theta
=
\arccos(\,\frac{\vec{u}\cdot\vec{v}}{
|\vec{u}\,|\,|\vec{v}\,| }\,)

(the angle between the zero vector and any other vector is defined to be a right angle).

Thus vectors from  \mathbb{R}^n are orthogonal (or perpendicular) if and only if their dot product is zero.

Example 2.8

These vectors are orthogonal.

Linalg orthog vectors in R2.png

\begin{pmatrix} 1 \\ -1 \end{pmatrix}\cdot\begin{pmatrix} 1 \\ 1 \end{pmatrix}=0

The arrows are shown away from canonical position but nevertheless the vectors are orthogonal.

Example 2.9

The  \mathbb{R}^3 angle formula given at the start of this subsection is a special case of the definition. Between these two

Linalg nonorthog vectors in R3.png

the angle is


\arccos(\frac{(1)(0)+(1)(3)+(0)(2)}{\sqrt{1^2+1^2+0^2}\sqrt{0^2+3^2+2^2}})
=\arccos(\frac{3}{\sqrt{2}\sqrt{13}})

approximately 0.94 radians. Notice that these vectors are not orthogonal. Although the  yz -plane may appear to be perpendicular to the  xy -plane, in fact the two planes are that way only in the weak sense that there are vectors in each orthogonal to all vectors in the other. Not every vector in each is orthogonal to all vectors in the other.
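As a numerical check of Definition 2.7 and of these two examples, here is a minimal sketch that reuses the dot and length helpers sketched earlier in this section (an assumption that they are in scope):

```python
# Angle between two nonzero vectors, by the arccos formula of Definition 2.7.
from math import acos

def angle(u, v):
    return acos(dot(u, v) / (length(u) * length(v)))

print(angle([1, 1, 0], [0, 3, 2]))   # about 0.94 radians, as in Example 2.9
print(dot([1, -1], [1, 1]))          # 0, so the Example 2.8 vectors are orthogonal
```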

Exercises

This exercise is recommended for all readers.
Problem 1

Find the length of each vector.

  1.  \begin{pmatrix} 3 \\ 1 \end{pmatrix}
  2.  \begin{pmatrix} -1 \\ 2 \end{pmatrix}
  3.  \begin{pmatrix} 4 \\ 1 \\ 1 \end{pmatrix}
  4.  \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
  5.  \begin{pmatrix} 1 \\ -1 \\ 1 \\ 0 \end{pmatrix}
This exercise is recommended for all readers.
Problem 2

Find the angle between each two, if it is defined.

  1.  \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 1 \\ 4 \end{pmatrix}
  2.  \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 4 \\ 1 \end{pmatrix}
  3.  \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 1 \\ 4 \\ -1 \end{pmatrix}
This exercise is recommended for all readers.
Problem 3

During maneuvers preceding the Battle of Jutland, the British battle cruiser Lion moved as follows (in nautical miles):  1.2 miles north,  6.1 miles  38 degrees east of south,  4.0 miles at  89 degrees east of north, and  6.5 miles at  31 degrees east of north. Find the distance between starting and ending positions (O'Hanian 1985).

Problem 4

Find  k so that these two vectors are perpendicular.


\begin{pmatrix} k \\ 1 \end{pmatrix}
\qquad
\begin{pmatrix} 4 \\ 3 \end{pmatrix}
Problem 5

Describe the set of vectors in  \mathbb{R}^3 orthogonal to this one.


\begin{pmatrix} 1 \\ 3 \\ -1 \end{pmatrix}
This exercise is recommended for all readers.
Problem 6
  1. Find the angle between the diagonal of the unit square in  \mathbb{R}^2 and one of the axes.
  2. Find the angle between the diagonal of the unit cube in  \mathbb{R}^3 and one of the axes.
  3. Find the angle between the diagonal of the unit cube in  \mathbb{R}^n and one of the axes.
  4. What is the limit, as  n goes to  \infty , of the angle between the diagonal of the unit cube in  \mathbb{R}^n and one of the axes?
Problem 7

Is any vector perpendicular to itself?

This exercise is recommended for all readers.
Problem 8

Describe the algebraic properties of dot product.

  1. Is it right-distributive over addition: 
(\vec{u}+\vec{v})\cdot\vec{w}
=
\vec{u}\cdot\vec{w}+\vec{v}\cdot\vec{w} ?
  2. Is it left-distributive (over addition)?
  3. Does it commute?
  4. Associate?
  5. How does it interact with scalar multiplication?

As always, any assertion must be backed by either a proof or an example.

Problem 9

Verify the equality condition in Corollary 2.6, the Cauchy-Schwarz Inequality.

  1. Show that if  \vec{u} is a negative scalar multiple of  \vec{v} then  \vec{u}\cdot\vec{v} and  \vec{v}\cdot\vec{u} are less than or equal to zero.
  2. Show that  |\vec{u}\cdot\vec{v}|=
|\vec{u}\,|\,|\vec{v}\,| if and only if one vector is a scalar multiple of the other.
Problem 10

Suppose that  \vec{u}\cdot\vec{v}=\vec{u}\cdot\vec{w} and  \vec{u}\neq\vec{0} . Must  \vec{v}=\vec{w} ?

This exercise is recommended for all readers.
Problem 11

Does any vector have length zero except a zero vector? (If "yes", produce an example. If "no", prove it.)

This exercise is recommended for all readers.
Problem 12

Find the midpoint of the line segment connecting  (x_1,y_1) with  (x_2,y_2) in  \mathbb{R}^2 . Generalize to  \mathbb{R}^n .

Problem 13

Show that if  \vec{v}\neq\vec{0} then  \vec{v}/|\vec{v}\,| has length one. What if  \vec{v}=\vec{0} ?

Problem 14

Show that if  r\geq 0 then  r\vec{v} is  r times as long as  \vec{v} . What if  r< 0 ?

This exercise is recommended for all readers.
Problem 15

A vector  \vec{v}\in\mathbb{R}^n of length one is a unit vector. Show that the dot product of two unit vectors has absolute value less than or equal to one. Can "less than" happen? Can "equal to"?

Problem 16

Prove that 
|\vec{u}+\vec{v}\,|^2+|\vec{u}-\vec{v}\,|^2
=2|\vec{u}\,|^2+2|\vec{v}\,|^2.

Problem 17

Show that if  \vec{x}\cdot\vec{y}=0 for every  \vec{y} then  \vec{x}=\vec{0} .

Problem 18

Is  |\vec{u}_1+\cdots+\vec{u}_n| \leq
|\vec{u}_1|+\cdots+|\vec{u}_n| ? If it is true then it would generalize the Triangle Inequality.

Problem 19

What is the ratio between the sides in the Cauchy-Schwarz inequality?

Problem 20

Why is the zero vector defined to be perpendicular to every vector?

Problem 21

Describe the angle between two vectors in  \mathbb{R}^1 .

Problem 22

Give a simple necessary and sufficient condition to determine whether the angle between two vectors is acute, right, or obtuse.

This exercise is recommended for all readers.
Problem 23

Generalize to  \mathbb{R}^n the converse of the Pythagorean Theorem, that if  \vec{u} and  \vec{v} are perpendicular then  |\vec{u}+\vec{v}\,|^2=|\vec{u}\,|^2+|\vec{v}\,|^2 .

Problem 24

Show that  |\vec{u}\,|=|\vec{v}\,| if and only if  \vec{u}+\vec{v} and  \vec{u}-\vec{v} are perpendicular. Give an example in  \mathbb{R}^2 .

Problem 25

Show that if a vector is perpendicular to each of two others then it is perpendicular to each vector in the plane they generate. (Remark. They could generate a degenerate plane— a line or a point— but the statement remains true.)

Problem 26

Prove that, where  \vec{u},\vec{v}\in\mathbb{R}^n are nonzero vectors, the vector


\frac{\vec{u}}{|\vec{u}\,| }+\frac{\vec{v}}{|\vec{v}\,| }

bisects the angle between them. Illustrate in  \mathbb{R}^2 .

Problem 27

Verify that the definition of angle is dimensionally correct: (1) if  k>0 then the cosine of the angle between  k\vec{u} and  \vec{v} equals the cosine of the angle between  \vec{u} and  \vec{v} , and (2) if  k<0 then the cosine of the angle between  k\vec{u} and  \vec{v} is the negative of the cosine of the angle between  \vec{u} and  \vec{v} .

This exercise is recommended for all readers.
Problem 28

Show that the inner product operation is linear: for  \vec{u},\vec{v},\vec{w}\in\mathbb{R}^n and  k,m\in\mathbb{R} , \vec{u}\cdot(k\vec{v}+m\vec{w})=
k(\vec{u}\cdot\vec{v})+m(\vec{u}\cdot\vec{w}).

This exercise is recommended for all readers.
Problem 29

The geometric mean of two positive reals  x, y is  \sqrt{xy} . It is analogous to the arithmetic mean  (x+y)/2 . Use the Cauchy-Schwarz inequality to show that the geometric mean of any  x,y\in\mathbb{R} is less than or equal to the arithmetic mean.

? Problem 30

A ship is sailing with speed and direction  \vec{v}_1 ; the wind blows apparently (judging by the vane on the mast) in the direction of a vector  \vec{a} ; on changing the direction and speed of the ship from  \vec{v}_1 to  \vec{v}_2 the apparent wind is in the direction of a vector  \vec{b} .

Find the vector velocity of the wind (Ivanoff & Esty 1933).

Problem 31

Verify the Cauchy-Schwarz inequality by first proving Lagrange's identity:


\left(\sum_{1\leq j\leq n} a_jb_j \right)^2
=
\left(\sum_{1\leq j\leq n}a_j^2\right)
\left(\sum_{1\leq j\leq n}b_j^2\right)
-
\sum_{1\leq k < j\leq n}(a_kb_j-a_jb_k)^2

and then noting that the final term is nonnegative. (Recall the meaning


\sum_{1\leq j\leq n}a_jb_j=
a_1b_1+a_2b_2+\cdots+a_nb_n

and


\sum_{1\leq j\leq n}{a_j}^2=
{a_1}^2+{a_2}^2+\cdots+{a_n}^2

of the  \Sigma notation.) This result is an improvement over Cauchy-Schwarz because it gives a formula for the difference between the two sides. Interpret that difference in  \mathbb{R}^2 .


Section III - Reduced Echelon Form

After developing the mechanics of Gauss' method, we observed that it can be done in more than one way. One example is that we sometimes have to swap rows and there can be more than one row to choose from. Another example is that from this matrix


\begin{pmatrix}
2  &2  \\
4  &3
\end{pmatrix}

Gauss' method could derive any of these echelon form matrices.


\begin{pmatrix}
2  &2  \\
0  &-1
\end{pmatrix}
\qquad
\begin{pmatrix}
1  &1  \\
0  &-1
\end{pmatrix}
\qquad
\begin{pmatrix}
2  &0  \\
0  &-1
\end{pmatrix}

The first results from -2\rho_1+\rho_2. The second comes from following (1/2)\rho_1 with -4\rho_1+\rho_2. The third comes from -2\rho_1+\rho_2 followed by 2\rho_2+\rho_1 (after the first pivot the matrix is already in echelon form so the second one is extra work but it is nonetheless a legal row operation).

The fact that the echelon form outcome of Gauss' method is not unique leaves us with some questions. Will any two echelon form versions of a system have the same number of free variables? Will they in fact have exactly the same variables free? In this section we will answer both questions "yes". We will do more than answer the questions. We will give a way to decide if one linear system can be derived from another by row operations. The answers to the two questions will follow from this larger result.


1 - Gauss-Jordan Reduction

Gaussian elimination coupled with back-substitution solves linear systems, but it's not the only method possible. Here is an extension of Gauss' method that has some advantages.

Example 1.1

To solve


\begin{array}{*{3}{rc}r}
x  &+  &y  &-  &2z  &=  &-2  \\
&   &y  &+  &3z  &=  &7   \\
x  &   &   &-  &z   &=  &-1
\end{array}

we can start by going to echelon form as usual.


\xrightarrow[]{-\rho_1+\rho_3}
\left(\begin{array}{*{3}{c}|c}
1  &1  &-2 &-2  \\
0  &1  &3  &7   \\
0  &-1 &1  &1
\end{array}\right)
\xrightarrow[]{\rho_2+\rho_3}
\left(\begin{array}{*{3}{c}|c}
1  &1  &-2 &-2  \\
0  &1  &3  &7   \\
0  &0  &4  &8
\end{array}\right)

We can keep going to a second stage by making the leading entries into ones


\xrightarrow[]{(1/4)\rho_3}
\left(\begin{array}{*{3}{c}|c}
1  &1  &-2 &-2  \\
0  &1  &3  &7   \\
0  &0  &1  &2
\end{array}\right)

and then to a third stage that uses the leading entries to eliminate all of the other entries in each column by pivoting upwards.


\xrightarrow[2\rho_3+\rho_1]{-3\rho_3+\rho_2}
\left(\begin{array}{*{3}{c}|c}
1  &1  &0  &2   \\
0  &1  &0  &1   \\
0  &0  &1  &2
\end{array}\right)
\xrightarrow[]{-\rho_2+\rho_1}
\left(\begin{array}{*{3}{c}|c}
1  &0  &0  &1   \\
0  &1  &0  &1   \\
0  &0  &1  &2
\end{array}\right)

The answer is  x=1 ,  y=1 , and  z=2 .

Note that the pivot operations in the first stage proceed from column one to column three while the pivot operations in the third stage proceed from column three to column one.

Example 1.2

We often combine the operations of the middle stage into a single step, even though they are operations on different rows.

\begin{array}{rcl}
\left(\begin{array}{*{2}{c}|c}
2   &1   &7   \\
4   &-2  &6
\end{array}\right)
&\xrightarrow[]{-2\rho_1+\rho_2}
&\left(\begin{array}{*{2}{c}|c}
2   &1   &7   \\
0   &-4  &-8
\end{array}\right)                                   \\
&\xrightarrow[(-1/4)\rho_2]{(1/2)\rho_1}
&\left(\begin{array}{*{2}{c}|c}
1   &1/2   &7/2   \\
0   &1     &2
\end{array}\right)                                    \\
&\xrightarrow[]{-(1/2)\rho_2+\rho_1}
&\left(\begin{array}{*{2}{c}|c}
1   &0   &5/2   \\
0   &1   &2
\end{array}\right)
\end{array}

The answer is x=5/2 and y=2.

This extension of Gauss' method is Gauss-Jordan reduction. It goes past echelon form to a more refined, more specialized, matrix form.

Definition 1.3

A matrix is in reduced echelon form if, in addition to being in echelon form, each leading entry is a one and is the only nonzero entry in its column.

The disadvantage of using Gauss-Jordan reduction to solve a system is that the additional row operations mean additional arithmetic. The advantage is that the solution set can just be read off.
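The procedure is mechanical enough to sketch in a few lines of code. This is a minimal illustration in Python with exact rational arithmetic (not a numerically careful routine, and the function name is ours), not a statement of the book's algorithm:

```python
# A sketch of Gauss-Jordan reduction: bring a matrix, given as a list of rows,
# to reduced echelon form.
from fractions import Fraction

def reduced_echelon_form(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    lead = 0                                    # column of the next leading entry
    for r in range(m):
        pivot = None
        while lead < n:                         # find a usable pivot column
            pivot = next((i for i in range(r, m) if A[i][lead] != 0), None)
            if pivot is not None:
                break
            lead += 1
        if pivot is None:
            break                               # only zero rows remain
        A[r], A[pivot] = A[pivot], A[r]         # swap the pivot row up
        A[r] = [x / A[r][lead] for x in A[r]]   # make the leading entry a one
        for i in range(m):                      # clear the rest of the column
            if i != r:
                A[i] = [x - A[i][lead] * y for x, y in zip(A[i], A[r])]
        lead += 1
    return A

# The augmented matrix of Example 1.2: 2x + y = 7 and 4x - 2y = 6.
for row in reduced_echelon_form([[2, 1, 7], [4, -2, 6]]):
    print([str(x) for x in row])     # ['1', '0', '5/2'] then ['0', '1', '2']
```

On the system of Example 1.1 the same routine returns the rows (1 0 0 | 1), (0 1 0 | 1), and (0 0 1 | 2), matching x=1, y=1, and z=2.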

In any echelon form, plain or reduced, we can read off when a system has an empty solution set because there is a contradictory equation, we can read off when a system has a one-element solution set because there is no contradiction and every variable is the leading variable in some row, and we can read off when a system has an infinite solution set because there is no contradiction and at least one variable is free.

In reduced echelon form we can read off not just what kind of solution set the system has, but also its description. Whether or not the echelon form is reduced, we have no trouble describing the solution set when it is empty, of course. The two examples above show that when the system has a single solution then the solution can be read off from the right-hand column. In the case when the solution set is infinite, its parametrization can also be read off of the reduced echelon form. Consider, for example, this system that is shown brought to echelon form and then to reduced echelon form.


\left(\begin{array}{*{4}{c}|c}
2  &6  &1  &2  &5  \\
0  &3  &1  &4  &1  \\
0  &3  &1  &2  &5
\end{array}\right)
\xrightarrow[]{-\rho_2+\rho_3}
\left(\begin{array}{*{4}{c}|c}
2  &6  &1  &2  &5  \\
0  &3  &1  &4  &1  \\
0  &0  &0  &-2 &4
\end{array}\right)
\xrightarrow[\begin{array}{c}\\[-19pt]\scriptstyle (1/3)\rho_2 \\[-5pt]\scriptstyle -(1/2)\rho_3\end{array}]{(1/2)\rho_1}
\;\xrightarrow[-\rho_3+\rho_1]{-(4/3)\rho_3+\rho_2}
\;\xrightarrow[]{-3\rho_2+\rho_1}
\left(\begin{array}{*{4}{c}|c}
1  &0  &-1/2  &0  &-9/2  \\
0  &1  &1/3   &0  &3  \\
0  &0  &0     &1  &-2
\end{array}\right)

Starting with the middle matrix, the echelon form version, back substitution produces -2x_4=4 so that x_4=-2, then another back substitution gives 3x_2+x_3+4(-2)=1 implying that x_2=3-(1/3)x_3, and then the final back substitution gives 2x_1+6(3-(1/3)x_3)+x_3+2(-2)=5 implying that x_1=-(9/2)+(1/2)x_3. Thus the solution set is this.


S=\{\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
=\begin{pmatrix} -9/2 \\ 3 \\ 0 \\ -2 \end{pmatrix}
+\begin{pmatrix} 1/2 \\ -1/3 \\ 1 \\ 0 \end{pmatrix}x_3
\,\big|\, x_3\in\mathbb{R}\}

Now, considering the final matrix, the reduced echelon form version, note that adjusting the parametrization by moving the x_3 terms to the other side does indeed give the description of this infinite solution set.
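As a side check, the reduced_echelon_form sketch given earlier in this subsection (assuming it is in scope) reproduces that final matrix.

```python
# Reusing the reduced_echelon_form sketch from above (an assumption) on the
# augmented matrix of this example.
rows = [[2, 6, 1, 2, 5],
        [0, 3, 1, 4, 1],
        [0, 3, 1, 2, 5]]
for row in reduced_echelon_form(rows):
    print([str(x) for x in row])
# ['1', '0', '-1/2', '0', '-9/2']
# ['0', '1', '1/3', '0', '3']
# ['0', '0', '0', '1', '-2']
```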

Part of the reason that this works is straightforward. While a set can have many parametrizations that describe it, e.g., both of these also describe the above set S (take t to be x_3/6 and s to be x_3-1)


\{\begin{pmatrix} -9/2 \\ 3 \\ 0 \\ -2 \end{pmatrix}
+\begin{pmatrix} 3 \\ -2 \\ 6 \\ 0 \end{pmatrix}t
\,\big|\, t\in\mathbb{R}\}
\qquad
\{\begin{pmatrix} -4 \\ 8/3 \\ 1 \\ -2 \end{pmatrix}
+\begin{pmatrix} 1/2 \\ -1/3 \\ 1 \\ 0 \end{pmatrix}s
\,\big|\, s\in\mathbb{R}\}

nonetheless we have in this book stuck to a convention of parametrizing using the unmodified free variables (that is, x_3=x_3 instead of x_3=6t). We can easily see that a reduced echelon form version of a system is equivalent to a parametrization in terms of unmodified free variables. For instance,

\begin{array}{rl}
x_1 &=4-2x_3 \\
x_2 &=3-x_3
\end{array}
\quad\Longleftrightarrow\quad
\left(\begin{array}{*{3}{c}|c}
1  &0  &2  &4  \\
0  &1  &1  &3  \\
0  &0  &0  &0
\end{array}\right)

(to move from left to right we also need to know how many equations are in the system). So, the convention of parametrizing with the free variables by solving each equation for its leading variable and then eliminating that leading variable from every other equation is exactly equivalent to the reduced echelon form conditions that each leading entry must be a one and must be the only nonzero entry in its column.

Not as straightforward is the other part of the reason that the reduced echelon form version allows us to read off the parametrization that we would have gotten had we stopped at echelon form and then done back substitution. The prior paragraph shows that reduced echelon form corresponds to some parametrization, but why the same parametrization? A solution set can be parametrized in many ways, and Gauss' method or the Gauss-Jordan method can be done in many ways, so a first guess might be that we could derive many different reduced echelon form versions of the same starting system and many different parametrizations. But we never do. Experience shows that starting with the same system and proceeding with row operations in many different ways always yields the same reduced echelon form and the same parametrization (using the unmodified free variables).

In the rest of this section we will show that the reduced echelon form version of a matrix is unique. It follows that the parametrization of a linear system in terms of its unmodified free variables is unique because two different ones would give two different reduced echelon forms.

We shall use this result, and the ones that lead up to it, in the rest of the book but perhaps a restatement in a way that makes it seem more immediately useful may be encouraging. Imagine that we solve a linear system, parametrize, and check in the back of the book for the answer. But the parametrization there appears different. Have we made a mistake, or could these be different-looking descriptions of the same set, as with the three descriptions above of S? The prior paragraph notes that we will show here that different-looking parametrizations (using the unmodified free variables) describe genuinely different sets.

Here is an informal argument that the reduced echelon form version of a matrix is unique. Consider again the example that started this section of a matrix that reduces to three different echelon form matrices. The first matrix of the three is the natural echelon form version. The second matrix is the same as the first except that a row has been halved. The third matrix, too, is just a cosmetic variant of the first. The definition of reduced echelon form outlaws this kind of fooling around. In reduced echelon form, halving a row is not possible because that would change the row's leading entry away from one, and neither is combining rows possible, because then a leading entry would no longer be alone in its column.

This informal justification is not a proof; we have argued that no two different reduced echelon form matrices are related by a single row operation step, but we have not ruled out the possibility that multiple steps might do. Before we go to that proof, we finish this subsection by rephrasing our work in a terminology that will be enlightening.

Many different matrices yield the same reduced echelon form matrix. The three echelon form matrices from the start of this section, and the matrix they were derived from, all give this reduced echelon form matrix.


\begin{pmatrix}
1  &0  \\
0  &1
\end{pmatrix}

We think of these matrices as related to each other. The next result speaks to this relationship.

Lemma 1.4

Elementary row operations are reversible.

Proof

For any matrix  A , the effect of swapping rows is reversed by swapping them back, multiplying a row by a nonzero  k is undone by multiplying by 1/k, and adding a multiple of row  i to row  j (with i\neq j) is undone by subtracting the same multiple of row  i from row  j .


A
\xrightarrow[]{\rho_i\leftrightarrow\rho_j}
\;\xrightarrow[]{\rho_j\leftrightarrow\rho_i}
A
\qquad
A
\xrightarrow[]{k\rho_i}
\;\xrightarrow[]{(1/k)\rho_i}
A
\qquad
A
\xrightarrow[]{k\rho_i+\rho_j}
\;\xrightarrow[]{-k\rho_i+\rho_j}
A

(The i\neq j condition is needed. See Problem 7.)

This lemma suggests that "reduces to" is misleading— where  A\longrightarrow B , we shouldn't think of  B as "after"  A or "simpler than" A. Instead we should think of them as interreducible or interrelated. Below is a picture of the idea. The matrices from the start of this section and their reduced echelon form version are shown in a cluster. They are all interreducible; these relationships are shown also.

Linalg interreducible matrices.png

We say that matrices that reduce to each other are "equivalent with respect to the relationship of row reducibility". The next result verifies this statement using the definition of an equivalence.[1]

Lemma 1.5

Between matrices, "reduces to" is an equivalence relation.

Proof

We must check the conditions (i) reflexivity, that any matrix reduces to itself, (ii) symmetry, that if  A reduces to  B then  B reduces to  A , and (iii) transitivity, that if  A reduces to  B and  B reduces to  C then  A reduces to  C .

Reflexivity is easy; any matrix reduces to itself in zero row operations.

That the relationship is symmetric is Lemma 1.4— if  A reduces to  B by some row operations then also  B reduces to  A by reversing those operations.

For transitivity, suppose that  A reduces to  B and that  B reduces to  C . Linking the reduction steps from A \rightarrow\cdots\rightarrow B with those from B \rightarrow\cdots\rightarrow C gives a reduction from  A to  C .

Definition 1.6

Two matrices that are interreducible by the elementary row operations are row equivalent.

The diagram below shows the collection of all matrices as a box. Inside that box, each matrix lies in some class. Matrices are in the same class if and only if they are interreducible. The classes are disjoint— no matrix is in two distinct classes. The collection of matrices has been partitioned into row equivalence classes.[2]

Linalg row equiv classes.png

One of the classes in this partition is the cluster of matrices shown above, expanded to include all of the nonsingular 2 \! \times \! 2 matrices.

The next subsection proves that the reduced echelon form of a matrix is unique; that every matrix reduces to one and only one reduced echelon form matrix. Rephrased in terms of the row-equivalence relationship, we shall prove that every matrix is row equivalent to one and only one reduced echelon form matrix. In terms of the partition what we shall prove is: every equivalence class contains one and only one reduced echelon form matrix. So each reduced echelon form matrix serves as a representative of its class.

After that proof we shall, as mentioned in the introduction to this section, have a way to decide if one matrix can be derived from another by row reduction. We just apply the Gauss-Jordan procedure to both and see whether or not they come to the same reduced echelon form.
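In code terms, and again reusing the reduced_echelon_form sketch from the prior subsection (an assumption that it is in scope), that test is a one-liner:

```python
# Two same-sized matrices are row equivalent exactly when Gauss-Jordan
# reduction brings them to the same reduced echelon form matrix.
def row_equivalent(rows1, rows2):
    return reduced_echelon_form(rows1) == reduced_echelon_form(rows2)

# The matrices from the start of this section all reduce to the identity.
print(row_equivalent([[2, 2], [4, 3]], [[1, 1], [0, -1]]))   # True
print(row_equivalent([[2, 2], [4, 3]], [[1, 1], [1, 1]]))    # False
```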

Exercises

This exercise is recommended for all readers.
Problem 1

Use Gauss-Jordan reduction to solve each system.

  1. 
\begin{array}{*{2}{rc}r}
x  &+  &y  &=  &2  \\
x  &-  &y  &=  &0
\end{array}
  2. 
\begin{array}{*{3}{rc}r}
x  &   &   &-  &z  &=  &4  \\
2x  &+  &2y &   &   &=  &1
\end{array}
  3. 
\begin{array}{*{2}{rc}r}
3x  &-  &2y  &=  &1  \\
6x  &+  &y   &=  &1/2
\end{array}
  4. 
\begin{array}{*{3}{rc}r}
2x  &-  &y  &  &  &= &-1  \\
x  &+  &3y &- &z &= &5   \\
&   &y  &+ &2z&= &5
\end{array}
This exercise is recommended for all readers.
Problem 2

Find the reduced echelon form of each matrix.

  1.  \begin{pmatrix}
2  &1  \\
1  &3
\end{pmatrix}
  2.  \begin{pmatrix}
1  &3  &1  \\
2  &0  &4  \\
-1  &-3 &-3
\end{pmatrix}
  3.  \begin{pmatrix}
1  &0  &3  &1  &2  \\
1  &4  &2  &1  &5  \\
3  &4  &8  &1  &2
\end{pmatrix}
  4.  \begin{pmatrix}
0  &1  &3  &2  \\
0  &0  &5  &6  \\
1  &5  &1  &5
\end{pmatrix}
This exercise is recommended for all readers.
Problem 3

Find each solution set by using Gauss-Jordan reduction, then reading off the parametrization.

  1.  \begin{array}{*{3}{rc}r}
2x  &+  &y  &-  &z  &=  &1  \\
4x  &-  &y  &   &   &=  &3
\end{array}
  2.  \begin{array}{*{4}{rc}r}
x  &   &   &-  &z  &   &   &=  &1  \\
&   &y  &+  &2z &-  &w  &=  &3  \\
x  &+  &2y &+  &3z &-  &w  &=  &7
\end{array}
  3.  \begin{array}{*{4}{rc}r}
x  &-  &y  &+  &z  &   &   &=  &0  \\
&   &y  &   &   &+  &w  &=  &0  \\
3x  &-  &2y &+  &3z &+  &w  &=  &0  \\
&   &-y &   &   &-  &w  &=  &0
\end{array}
  4.  \begin{array}{*{5}{rc}r}
a  &+  &2b &+  &3c &+  &d  &-  &e  &=  &1  \\
3a  &-  &b  &+  &c  &+  &d  &+  &e  &=  &3
\end{array}
Problem 4

Give two distinct echelon form versions of this matrix.


\begin{pmatrix}
2  &1  &1  &3  \\
6  &4  &1  &2  \\
1  &5  &1  &5
\end{pmatrix}
This exercise is recommended for all readers.
Problem 5

List the reduced echelon forms possible for each size.

  1.  2 \! \times \! 2
  2.  2 \! \times \! 3
  3.  3 \! \times \! 2
  4.  3 \! \times \! 3
This exercise is recommended for all readers.
Problem 6

What results from applying Gauss-Jordan reduction to a nonsingular matrix?

Problem 7

The proof of Lemma 1.4 contains a reference to the i\neq j condition on the row pivoting operation.

  1. The definition of row operations has an i\neq j condition on the swap operation \rho_i\leftrightarrow\rho_j. Show that in A\xrightarrow[]{\rho_i\leftrightarrow\rho_j}\;
\xrightarrow[]{\rho_i\leftrightarrow\rho_j}A this condition is not needed.
  2. Write down a 2 \! \times \! 2 matrix with nonzero entries, and show that the -1\cdot\rho_1+\rho_1 operation is not reversed by 1\cdot\rho_1+\rho_1.
  3. Expand the proof of that lemma to make explicit exactly where the i\neq j condition on pivoting is used.

Footnotes

  1. More information on equivalence relations is in the appendix.
  2. More information on partitions and class representatives is in the appendix.


2 - Row Equivalence

We will close this section and this chapter by proving that every matrix is row equivalent to one and only one reduced echelon form matrix. The ideas that appear here will reappear, and be further developed, in the next chapter.

The underlying theme here is that one way to understand a mathematical situation is by being able to classify the cases that can happen. We have met this theme several times already. We have classified solution sets of linear systems into the no-elements, one-element, and infinitely-many elements cases. We have also classified linear systems with the same number of equations as unknowns into the nonsingular and singular cases. We adopted these classifications because they give us a way to understand the situations that we were investigating. Here, where we are investigating row equivalence, we know that the set of all matrices breaks into the row equivalence classes. When we finish the proof here, we will have a way to understand each of those classes— its matrices can be thought of as derived by row operations from the unique reduced echelon form matrix in that class.

To understand how row operations act to transform one matrix into another, we consider the effect that they have on the parts of a matrix. The crucial observation is that row operations combine the rows linearly.

Definition 2.1

A linear combination of  x_1,\ldots,x_m is an expression of the form c_1x_1+c_2x_2+\,\cdots\,+c_mx_m where the  c 's are scalars.

(We have already used the phrase "linear combination" in this book. The meaning is unchanged, but the next result's statement makes a more formal definition in order.)

Lemma 2.2 (Linear Combination Lemma)

A linear combination of linear combinations is a linear combination.

Proof

Given the linear combinations c_{1,1}x_1+\dots+c_{1,n}x_n through c_{m,1}x_1+\dots+c_{m,n}x_n, consider a combination of those


d_1(c_{1,1}x_1+\dots+c_{1,n}x_n)\,+\dots+\,d_m(c_{m,1}x_1+\dots+c_{m,n}x_n)

where the d's are scalars along with the c's. Distributing those d's and regrouping gives


=(d_1c_{1,1}+\dots+d_mc_{m,1})x_1\,+\dots+\,(d_1c_{1,n}+\dots+d_mc_{m,n})x_n

which is a linear combination of the x's.

In this subsection we will use the convention that, where a matrix is named with an upper case roman letter, the matching lower-case greek letter names the rows.


A=
\begin{pmatrix}
\cdots \alpha_1 \cdots \\
\cdots \alpha_2 \cdots \\
\vdots                 \\
\cdots \alpha_m \cdots  
\end{pmatrix}
\qquad
B=
\begin{pmatrix}
\cdots \beta_1 \cdots \\
\cdots \beta_2 \cdots \\
\vdots                \\
\cdots \beta_m\cdots 
\end{pmatrix}
Corollary 2.3

Where one matrix reduces to another, each row of the second is a linear combination of the rows of the first.

The proof below uses induction on the number of row operations used to reduce one matrix to the other. Before we proceed, here is an outline of the argument (readers unfamiliar with induction may want to compare this argument with the one used in the "\text{General}=\text{Particular}+\text{Homogeneous}" proof).[1] First, for the base step of the argument, we will verify that the proposition is true when reduction can be done in zero row operations. Second, for the inductive step, we will argue that if being able to reduce the first matrix to the second in some number t\geq 0 of operations implies that each row of the second is a linear combination of the rows of the first, then being able to reduce the first to the second in t+1 operations implies the same thing. Together, this base step and induction step prove this result because by the base step the proposition is true in the zero operations case, and by the inductive step the fact that it is true in the zero operations case implies that it is true in the one operation case, and the inductive step applied again gives that it is therefore true in the two operations case, etc.

Proof

We proceed by induction on the minimum number of row operations that take a first matrix A to a second one B.

In the base step, that zero reduction operations suffice, the two matrices are equal and each row of B is obviously a combination of A's rows: \vec{\beta}_i
=0\cdot\vec{\alpha}_1+\dots+1\cdot\vec{\alpha}_i+\dots+0\cdot\vec{\alpha}_m.

For the inductive step, assume the inductive hypothesis: with t\geq 0, if a matrix can be derived from  A in  t or fewer operations then its rows are linear combinations of A's rows. Consider a B that takes t+1 operations. Because there are more than zero operations, there must be a next-to-last matrix G so that A\longrightarrow\cdots\longrightarrow G\longrightarrow B. This  G is only t operations away from  A and so the inductive hypothesis applies to it, that is, each row of  G is a linear combination of the rows of  A .

If the last operation, the one from  G to  B , is a row swap then the rows of B are just the rows of G reordered and thus each row of B is also a linear combination of the rows of A. The other two possibilities for this last operation, that it multiplies a row by a scalar and that it adds a multiple of one row to another, both result in the rows of B being linear combinations of the rows of G. But therefore, by the Linear Combination Lemma, each row of B is a linear combination of the rows of A.

With that, we have both the base step and the inductive step, and so the proposition follows.

Example 2.4

In the reduction


\begin{pmatrix}
0  &2  \\
1  &1
\end{pmatrix}
\xrightarrow[]{\rho_1\leftrightarrow\rho_2}
\begin{pmatrix}
1  &1  \\
0  &2
\end{pmatrix}
\xrightarrow[]{(1/2)\rho_2}
\begin{pmatrix}
1  &1  \\
0  &1
\end{pmatrix}
\xrightarrow[]{-\rho_2+\rho_1}
\begin{pmatrix}
1  &0  \\
0  &1
\end{pmatrix}

call the matrices  A ,  D ,  G , and  B . The methods of the proof show that there are three sets of linear relationships.


\begin{align}
\delta_1 &=0\cdot\alpha_1+1\cdot\alpha_2         \\
\delta_2 &=1\cdot\alpha_1+0\cdot\alpha_2
\end{align}
\qquad
\begin{align}
\gamma_1 &=0\cdot\alpha_1+1\cdot\alpha_2         \\
\gamma_2 &=(1/2)\alpha_1+0\cdot\alpha_2
\end{align}
\qquad
\begin{align}
\beta_1 &=(-1/2)\alpha_1+1\cdot\alpha_2        \\
\beta_2 &=(1/2)\alpha_1+0\cdot\alpha_2
\end{align}

The prior result gives us the insight that Gauss' method works by taking linear combinations of the rows. But to what end? Why do we go to echelon form as a particularly simple, or basic, version of a linear system? The answer, of course, is that echelon form is suitable for back substitution, because we have isolated the variables. For instance, in this matrix


R=\begin{pmatrix}
2  &3  &7  &8  &0  &0  \\
0  &0  &1  &5  &1  &1  \\
0  &0  &0  &3  &3  &0  \\
0  &0  &0  &0  &2  &1
\end{pmatrix}

x_1 has been removed from x_5's equation. That is, Gauss' method has made x_5's row independent of x_1's row.

Independence of a collection of row vectors, or of any kind of vectors, will be precisely defined and explored in the next chapter. But a first take on it is that we can show that, say, the third row above is not comprised of the other rows, that \rho_3\neq c_1\rho_1+c_2\rho_2+c_4\rho_4. For, suppose that there are scalars c_1, c_2, and c_4 such that this relationship holds.

\begin{array}{rl}
\begin{pmatrix} 0  &0  &0  &3  &3  &0 \end{pmatrix}
&=c_1\begin{pmatrix} 2 &3 &7 &8 &0 &0 \end{pmatrix}             \\
&\quad+c_2\begin{pmatrix} 0 &0 &1 &5 &1 &1 \end{pmatrix} \\
&\quad+c_4\begin{pmatrix} 0 &0 &0 &0 &2 &1 \end{pmatrix}
\end{array}

The first row's leading entry is in the first column, and narrowing our consideration of the above relationship to only the entries from the first column 0=2c_1+0c_2+0c_4 gives that c_1=0. The second row's leading entry is in the third column and the equation of entries in that column 0=7c_1+1c_2+0c_4, along with the knowledge that c_1=0, gives that c_2=0. Now, to finish, the third row's leading entry is in the fourth column and the equation of entries in that column 3=8c_1+5c_2+0c_4, along with c_1=0 and c_2=0, gives an impossibility.

The following result shows that this effect always holds. It shows that what Gauss' linear elimination method eliminates is linear relationships among the rows.

Lemma 2.5

In an echelon form matrix, no nonzero row is a linear combination of the other rows.

Proof

Let R be in echelon form. Suppose, to obtain a contradiction, that some nonzero row is a linear combination of the others.


\rho_i=c_1\rho_1+\ldots+c_{i-1}\rho_{i-1}+
c_{i+1}\rho_{i+1}+\ldots+c_m\rho_m

We will first use induction to show that the coefficients c_1, ..., c_{i-1} associated with rows above \rho_i are all zero. The contradiction will come from consideration of \rho_i and the rows below it.

The base step of the induction argument is to show that the first coefficient c_1 is zero. Let the first row's leading entry be in column number  \ell_1 and consider the equation of entries in that column.


\rho_{i,\ell_1}=c_1\rho_{1,\ell_1}+\ldots+c_{i-1}\rho_{i-1,\ell_1}
+c_{i+1}\rho_{i+1,\ell_1}+\ldots+c_m\rho_{m,\ell_1}

The matrix is in echelon form so the entries \rho_{2,\ell_1}, ..., \rho_{m,\ell_1}, including \rho_{i,\ell_1}, are all zero.


0=c_1\rho_{1,\ell_1}+\dots+c_{i-1}\cdot 0
+c_{i+1}\cdot 0+\dots+c_m\cdot 0

Because the entry \rho_{1,\ell_1} is nonzero as it leads its row, the coefficient c_1 must be zero.

The inductive step is to show that for each row index k between 1 and i-2, if the coefficient c_1 and the coefficients c_2, ..., c_{k} are all zero then c_{k+1} is also zero. That argument, and the contradiction that finishes this proof, is saved for Problem 11.

We can now prove that each matrix is row equivalent to one and only one reduced echelon form matrix. We will find it convenient to break the first half of the argument off as a preliminary lemma. For one thing, it holds for any echelon form whatever, not just reduced echelon form.

Lemma 2.6

If two echelon form matrices are row equivalent then the leading entries in their first rows lie in the same column. The same is true of all the nonzero rows— the leading entries in their second rows lie in the same column, etc.

For the proof we rephrase the result in more technical terms. Define the form of an m \! \times \! n matrix to be the sequence \langle \ell_1,\ell_2,\ldots\,,\ell_m \rangle where \ell_i is the column number of the leading entry in row i and \ell_i=\infty if there is no leading entry in that row. The lemma says that if two echelon form matrices are row equivalent then their forms are equal sequences.

Proof

Let  B and  D be echelon form matrices that are row equivalent. Because they are row equivalent they must be the same size, say m \! \times \! n. Let the column number of the leading entry in row i of B be \ell_i and let the column number of the leading entry in row j of D be k_j. We will show that \ell_1=k_1, that \ell_2=k_2, etc., by induction.

This induction argument relies on the fact that the matrices are row equivalent: by the Linear Combination Lemma and its corollary, each row of  B is a linear combination of the rows of  D and vice versa:


\beta_i=s_{i,1}\delta_1+s_{i,2}\delta_2+\dots+s_{i,m}\delta_m
\quad\text{and}\quad
\delta_j=t_{j,1}\beta_1+t_{j,2}\beta_2+\dots+t_{j,m}\beta_m

where the s's and t's are scalars.

The base step of the induction is to verify the lemma for the first rows of the matrices, that is, to verify that \ell_1=k_1. If either first row is a zero row then its entire matrix is a zero matrix, since that matrix is in echelon form, and therefore (by Corollary 2.3) both matrices are zero matrices, and so both \ell_1 and k_1 are \infty. For the case where neither \beta_1 nor \delta_1 is a zero row, consider the i=1 instance of the linear relationship above.

\begin{array}{rl}
\beta_1 &=s_{1,1}\delta_1+s_{1,2}\delta_2+\dots+s_{1,m}\delta_m  \\
\begin{pmatrix} 0 &\cdots &b_{1,\ell_1} &\cdots & \end{pmatrix}
&=s_{1,1}\begin{pmatrix} 0 &\cdots &d_{1,k_1} &\cdots & \end{pmatrix}   \\
&\quad+s_{1,2}\begin{pmatrix} 0 &\cdots &0         &\cdots & \end{pmatrix}   \\
&\quad \vdots                                    \\
&\quad+s_{1,m}\begin{pmatrix} 0 &\cdots &0         &\cdots & \end{pmatrix}
\end{array}

First, note that \ell_1<k_1 is impossible: in the columns of D to the left of column k_1 the entries are all zeroes (as d_{1,k_1} leads the first row) and so if \ell_1<k_1 then the equation of entries from column \ell_1 would be b_{1,\ell_1}=s_{1,1}\cdot 0+\dots+s_{1,m}\cdot 0, but b_{1,\ell_1} isn't zero since it leads its row and so this is an impossibility. Next, a symmetric argument shows that k_1<\ell_1 also is impossible. Thus the \ell_1=k_1 base case holds.

The inductive step is to show that if \ell_1=k_1, and \ell_2=k_2, ..., and \ell_r=k_r, then also \ell_{r+1}=k_{r+1} (for r in the interval 1\,..\,m-1). This argument is saved for Problem 12.

That lemma answers two of the questions that we have posed: (i) any two echelon form versions of a matrix have the same free variables, and consequently (ii) any two echelon form versions have the same number of free variables. There is no linear system and no combination of row operations such that, say, we could solve the system one way and get y and z free but solve it another way and get y and w free, or solve it one way and get two free variables while solving it another way yields three.

We finish now by specializing to the case of reduced echelon form matrices.

Theorem 2.7

Each matrix is row equivalent to a unique reduced echelon form matrix.

Proof

Clearly any matrix is row equivalent to at least one reduced echelon form matrix, via Gauss-Jordan reduction. For the other half, that any matrix is equivalent to at most one reduced echelon form matrix, we will show that if a matrix Gauss-Jordan reduces to each of two others then those two are equal.

Suppose that a matrix is row equivalent to two reduced echelon form matrices  B and  D , which are therefore row equivalent to each other. The Linear Combination Lemma and its corollary allow us to write the rows of one, say  B , as a linear combination of the rows of the other \beta_i=c_{i,1}\delta_1+\cdots+c_{i,m}\delta_m. The preliminary result, Lemma 2.6, says that in the two matrices, the same collection of rows are nonzero. Thus, if \beta_1 through \beta_r are the nonzero rows of B then the nonzero rows of D are \delta_1 through \delta_r. Zero rows don't contribute to the sum so we can rewrite the relationship to include just the nonzero rows.


\beta_i =c_{i,1}\delta_1+\dots+c_{i,r}\delta_r
\qquad(*)

The preliminary result also says that for each row  j between 1 and r, the leading entries of the j-th row of B and D appear in the same column, denoted  \ell_j . Rewriting the above relationship to focus on the entries in the \ell_j-th column

\begin{array}{rl}
\begin{pmatrix}  &\cdots &b_{i,\ell_j} &\cdots & \end{pmatrix}
&=c_{i,1}\begin{pmatrix}  &\cdots &d_{1,\ell_j} &\cdots & \end{pmatrix} \\
&\quad+c_{i,2}\begin{pmatrix}  &\cdots
&d_{2,\ell_j} &\cdots & \end{pmatrix}                             \\
&\quad\vdots                                              \\
&\quad+c_{i,r}\begin{pmatrix}  &\cdots
&d_{r,\ell_j} &\cdots & \end{pmatrix}
\end{array}

gives this set of equations for i=1 up to i=r.

\begin{array}{rl}
b_{1,\ell_j} &=c_{1,1}d_{1,\ell_j}
+\cdots+c_{1,j}d_{j,\ell_j}+\cdots
+c_{1,r}d_{r,\ell_j}                 \\
&\vdots                            \\
b_{j,\ell_j} &=c_{j,1}d_{1,\ell_j}
+\cdots+c_{j,j}d_{j,\ell_j}+\cdots
+c_{j,r}d_{r,\ell_j}                 \\
&\vdots                            \\
b_{r,\ell_j} &=c_{r,1}d_{1,\ell_j}
+\cdots+c_{r,j}d_{j,\ell_j}+\cdots
+c_{r,r}d_{r,\ell_j}
\end{array}

Since D is in reduced echelon form, all of the  d 's in column \ell_j are zero except for  d_{j,\ell_j} , which is 1. Thus each equation above simplifies to b_{i,\ell_j}=c_{i,j}d_{j,\ell_j}=c_{i,j}\cdot 1. But B is also in reduced echelon form and so all of the b's in column \ell_j are zero except for b_{j,\ell_j}, which is 1. Therefore, each c_{i,j} is zero, except that  c_{1,1}=1 , and c_{2,2}=1, ..., and c_{r,r}=1.

We have shown that the only nonzero coefficient in the linear combination labelled (*) is  c_{j,j}  , which is  1 . Therefore \beta_j=\delta_j. Because this holds for all nonzero rows, B=D.

We end with a recap. In Gauss' method we start with a matrix and then derive a sequence of other matrices. We defined two matrices to be related if one can be derived from the other. That relation is an equivalence relation, called row equivalence, and so partitions the set of all matrices into row equivalence classes.

Linalg reduced echelon form equiv classes.png

(There are infinitely many matrices in the pictured class, but we've only got room to show two.) We have proved there is one and only one reduced echelon form matrix in each row equivalence class. So the reduced echelon form is a canonical form[2] for row equivalence: the reduced echelon form matrices are representatives of the classes.

Linalg reduced echelon form equiv classes 2.png

We can answer questions about the classes by translating them into questions about the representatives.

Example 2.8

We can decide if matrices are interreducible by seeing if Gauss-Jordan reduction produces the same reduced echelon form result. Thus, these are not row equivalent


\begin{pmatrix}
1  &-3  \\
-2  &6
\end{pmatrix}
\qquad
\begin{pmatrix}
1  &-3  \\
-2  &5
\end{pmatrix}

because their reduced echelon forms are not equal.


\begin{pmatrix}
1  &-3  \\
0  &0
\end{pmatrix}
\qquad
\begin{pmatrix}
1  &0   \\
0  &1
\end{pmatrix}
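
This check is easy to carry out with software. Here is a minimal Octave sketch (ours, not part of the text) that compares reduced echelon forms using Octave's built-in rref routine; for these two matrices it reports that they are not row equivalent.

 # Octave sketch (not from the text): check row equivalence by comparing
 # reduced echelon forms; rref performs Gauss-Jordan reduction.
 A = [ 1 -3;
      -2  6];
 B = [ 1 -3;
      -2  5];
 if isequal(rref(A), rref(B))
   printf("row equivalent\n");
 else
   printf("not row equivalent\n");   # this branch runs for these two matrices
 endif

For matrices with non-integer entries an exact equality test can be thrown off by floating-point round-off, so in practice one would compare up to a small tolerance.
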
Example 2.9

Any nonsingular  3 \! \times \! 3 matrix Gauss-Jordan reduces to this.


\begin{pmatrix}
1  &0  &0 \\
0  &1  &0 \\
0  &0  &1
\end{pmatrix}
Example 2.10

We can describe the classes by listing all possible reduced echelon form matrices. Any 2 \! \times \! 2 matrix lies in one of these: the class of matrices row equivalent to this,


\begin{pmatrix}
0  &0  \\
0  &0
\end{pmatrix}

the infinitely many classes of matrices row equivalent to one of this type


\begin{pmatrix}
1  &a  \\
0  &0
\end{pmatrix}

where  a\in\mathbb{R} (including a=0), the class of matrices row equivalent to this,


\begin{pmatrix}
0  &1  \\
0  &0
\end{pmatrix}

and the class of matrices row equivalent to this


\begin{pmatrix}
1  &0  \\
0  &1
\end{pmatrix}

(this is the class of nonsingular 2 \! \times \! 2 matrices).

Exercises

This exercise is recommended for all readers.
Problem 1

Decide if the matrices are row equivalent.

  1. 
\begin{pmatrix}
1  &2  \\
4  &8
\end{pmatrix},
\begin{pmatrix}
0  &1  \\
1  &2
\end{pmatrix}
  2. 
\begin{pmatrix}
1  &0  &2  \\
3  &-1 &1  \\
5  &-1 &5
\end{pmatrix},
\begin{pmatrix}
1  &0  &2  \\
0  &2  &10 \\
2  &0  &4
\end{pmatrix}
  3. 
\begin{pmatrix}
2  &1  &-1 \\
1  &1  &0  \\
4  &3  &-1
\end{pmatrix},
\begin{pmatrix}
1  &0  &2  \\
0  &2  &10
\end{pmatrix}
  4. 
\begin{pmatrix}
1  &1  &1  \\
-1  &2  &2
\end{pmatrix},
\begin{pmatrix}
0  &3  &-1 \\
2  &2  &5
\end{pmatrix}
  5. 
\begin{pmatrix}
1  &1  &1  \\
0  &0  &3
\end{pmatrix},
\begin{pmatrix}
0  &1  &2  \\
1  &-1 &1
\end{pmatrix}
Problem 2

Describe the matrices in each of the classes represented in Example 2.10.

Problem 3

Describe all matrices in the row equivalence class of these.

  1. 
\begin{pmatrix}
1  &0  \\
0  &0
\end{pmatrix}
  2. 
\begin{pmatrix}
1  &2      \\
2  &4
\end{pmatrix}
  3. 
\begin{pmatrix}
1  &1      \\
1  &3
\end{pmatrix}
Problem 4

How many row equivalence classes are there?

Problem 5

Can row equivalence classes contain different-sized matrices?

Problem 6

How big are the row equivalence classes?

  1. Show that the class of any zero matrix is finite.
  2. Do any other classes contain only finitely many members?
This exercise is recommended for all readers.
Problem 7

Give two reduced echelon form matrices that have their leading entries in the same columns, but that are not row equivalent.

This exercise is recommended for all readers.
Problem 8

Show that any two  n \! \times \! n nonsingular matrices are row equivalent. Are any two singular matrices row equivalent?

This exercise is recommended for all readers.
Problem 9

Describe all of the row equivalence classes containing these.

  1.  2 \! \times \! 2 matrices
  2.  2 \! \times \! 3 matrices
  3.  3 \! \times \! 2 matrices
  4.  3 \! \times \! 3 matrices
Problem 10
  1. Show that a vector \vec{\beta}_0 is a linear combination of members of the set \{\vec{\beta}_1,\ldots,\vec{\beta}_n\} if and only if there is a linear relationship \vec{0}=c_0\vec{\beta}_0+\cdots+c_n\vec{\beta}_n where c_0 is not zero. (Hint. Watch out for the \vec{\beta}_0=\vec{0} case.)
  2. Use that to simplify the proof of Lemma 2.5.
This exercise is recommended for all readers.
Problem 11

Finish the proof of Lemma 2.5.

  1. First illustrate the inductive step by showing that c_2=0.
  2. Do the full inductive step: where  1\leq n<i-1 , assume that  c_k=0 for 1\leq k\leq n and deduce that  c_{n+1}=0 also.
  3. Find the contradiction.
Problem 12

Finish the induction argument in Lemma 2.6.

  1. State the inductive hypothesis. Also state what must be shown to follow from that hypothesis.
  2. Check that the inductive hypothesis implies that in the relationship \beta_{r+1}=s_{r+1,1}\delta_1+s_{r+1,2}\delta_2
+\dots+s_{r+1,m}\delta_m the coefficients s_{r+1,1},\,\ldots\,,s_{r+1,r} are each zero.
  3. Finish the inductive step by arguing, as in the base case, that \ell_{r+1}<k_{r+1} and k_{r+1}<\ell_{r+1} are impossible.
Problem 13

Why, in the proof of Theorem 2.7, do we bother to restrict to the nonzero rows? Why not just stick to the relationship that we began with, \beta_i=c_{i,1}\delta_1+\dots+c_{i,m}\delta_m, with m instead of r, and argue using it that the only nonzero coefficient is  c_{i,i}  , which is  1 ?

This exercise is recommended for all readers.
Problem 14

Three truck drivers went into a roadside cafe. One truck driver purchased four sandwiches, a cup of coffee, and ten doughnuts for $8.45. Another driver purchased three sandwiches, a cup of coffee, and seven doughnuts for $6.30. What did the third truck driver pay for a sandwich, a cup of coffee, and a doughnut? (Trono 1991)

Problem 15

The fact that Gaussian reduction disallows multiplication of a row by zero is needed for the proof of uniqueness of reduced echelon form, or else every matrix would be row equivalent to a matrix of all zeros. Where is it used?

This exercise is recommended for all readers.
Problem 16

The Linear Combination Lemma says which equations can be gotten by Gaussian reduction from a given linear system.

  1. Produce an equation not implied by this system.
    
\begin{array}{*{2}{rc}r}
3x  &+  &4y  &=  &8 \\
2x  &+  & y  &=  &3
\end{array}
  2. Can any equation be derived from an inconsistent system?
Problem 17

Extend the definition of row equivalence to linear systems. Under your definition, do equivalent systems have the same solution set? (Hoffman & Kunze 1971)

This exercise is recommended for all readers.
Problem 18

In this matrix


\begin{pmatrix}
1  &2  &3  \\
3  &0  &3  \\
1  &4  &5
\end{pmatrix}

the first and second columns add to the third.

  1. Show that this remains true under any row operation.
  2. Make a conjecture.
  3. Prove that it holds.

Footnotes

  1. More information on mathematical induction is in the appendix.
  2. More information on canonical representatives is in the appendix.


Topic: Computer Algebra Systems

The linear systems in this chapter are small enough that their solution by hand is easy. But large systems are easier, and safer, to do on a computer. There are special purpose programs such as LINPACK for this job. Another popular tool is a general purpose computer algebra system; these include commercial packages such as Maple, Mathematica, or MATLAB, and free packages such as SciLab, Sage, or Octave.

For example, in the Topic on Networks, we need to solve this.


\begin{array}{*{7}{rc}r}
i_0  &-  &i_1  &-  &i_2  &   &    &  &    &   &    &  &    &=  &0  \\
&   &i_1  &   &     &-  &i_3 &  &    &-  &i_5 &  &    &=  &0  \\
&   &     &   &i_2  &   &    &- &i_4 &+  &i_5 &  &    &=  &0  \\
&   &     &   &     &   &i_3 &+ &i_4 &   &    &- &i_6 &=  &0  \\
&   &5i_1 &   &     &+  &10i_3  &  & &   &    &  &    &=  &10  \\
&   &     &   &2i_2 &   &    &+ &4i_4 &  &    &  &    &=  &10  \\
&   &5i_1 &-  &2i_2 &   &    &  &    &+  &50i_5 &&    &=  &0   
\end{array}

It can be done by hand, but it would take a while and be error-prone. Using a computer is better.

We illustrate by solving that system under Maple (for a different computer algebra system, the user's manual details the exact syntax needed). The array of coefficients can be entered in this way


> A:=array( [[1,-1,-1,0,0,0,0],
[0,1,0,-1,0,-1,0],
[0,0,1,0,-1,1,0],
[0,0,0,1,1,0,-1],
[0,5,0,10,0,0,0],
[0,0,2,0,4,0,0],
[0,5,-2,0,0,50,0]] );

(putting the rows on separate lines is not necessary, but is done for clarity). The vector of constants is entered similarly.


> u:=array( [0,0,0,0,10,10,0] );

Then the system is solved, like magic.


> linsolve(A,u);
[7/3, 2/3, 5/3, 2/3, 5/3, 0, 7/3]

Systems with infinitely many solutions are solved in the same way— the computer simply returns a parametrization.
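
The free packages handle this just as easily. For instance, here is a minimal Octave sketch (ours, not from the text) that solves the same network system; the backslash operator solves the square system and returns the same currents that Maple reported.

 # Octave sketch of the same computation: the backslash operator solves A*i = u.
 A = [1 -1 -1  0  0  0  0;
      0  1  0 -1  0 -1  0;
      0  0  1  0 -1  1  0;
      0  0  0  1  1  0 -1;
      0  5  0 10  0  0  0;
      0  0  2  0  4  0  0;
      0  5 -2  0  0 50  0];
 u = [0; 0; 0; 0; 10; 10; 0];
 A \ u     # returns 7/3, 2/3, 5/3, 2/3, 5/3, 0, 7/3 (as decimals)
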

Exercises

Answers for this Topic use Maple as the computer algebra system. In particular, all of these were tested on Maple V running under MS Windows NT version 4.0. (On all of them, the preliminary command to load the linear algebra package, along with Maple's responses to the Enter key, has been omitted.) Other systems have similar commands.

Problem 1

Use the computer to solve the two problems that opened this chapter.

  1. This is the Statics problem.
    \begin{array}{rl}
40h+15c  &= 100  \\
25c      &= 50+50h
\end{array}
  2. This is the Chemistry problem.
    \begin{array}{rl} 
7h      &= 7j  \\
8h +1i  &= 5j+2k  \\
1i      &= 3j  \\
3i      &= 6j+1k
\end{array}
Problem 2

Use the computer to solve these systems from the first subsection, or conclude "many solutions" or "no solutions".

  1. 
\begin{array}{*{2}{rc}r}
2x  &+  &2y  &=  &5  \\
x  &-  &4y  &=  &0  
\end{array}
  2. 
\begin{array}{*{2}{rc}r}
-x  &+  &y   &=  &1  \\
x  &+  &y   &=  &2  
\end{array}
  3. 
\begin{array}{*{3}{rc}r}
x  &-  &3y  &+  &z  &=  &1  \\
x  &+  &y   &+  &2z &=  &14 
\end{array}
  4. 
\begin{array}{*{2}{rc}r}
-x  &-  &y   &=  &1  \\
-3x  &-  &3y  &=  &2  
\end{array}
  5. 
\begin{array}{*{3}{rc}r}
&   &4y  &+  &z  &=  &20 \\
2x  &-  &2y  &+  &z  &=  &0  \\
x  &   &    &+  &z  &=  &5  \\
x  &+  &y   &-  &z  &=  &10 
\end{array}
  6.  \begin{array}{*{4}{rc}r}
2x  &   &   &+  &z  &+  &w  &=  &5  \\
&   &y  &   &   &-  &w  &=  &-1 \\
3x  &   &   &-  &z  &-  &w  &=  &0  \\
4x  &+  &y  &+  &2z &+  &w  &=  &9  
\end{array}
Problem 3

Use the computer to solve these systems from the second subsection.

  1.  \begin{array}{*{2}{rc}r}
3x  &+  &6y  &=  &18  \\
x  &+  &2y  &=  &6   
\end{array}
  2.  \begin{array}{*{2}{rc}r}
x  &+  &y   &=  &1  \\
x  &-  &y   &=  &-1   
\end{array}
  3.  \begin{array}{*{3}{rc}r}
x_1  &   &     &+  &x_3   &=  &4  \\
x_1  &-  &x_2  &+  &2x_3  &=  &5  \\
4x_1  &-  &x_2  &+  &5x_3  &=  &17  
\end{array}
  4.  \begin{array}{*{3}{rc}r}
2a   &+  &b    &-  &c     &=  &2  \\
2a   &   &     &+  &c     &=  &3  \\
a   &-  &b    &   &      &=  &0   
\end{array}
  5.  \begin{array}{*{4}{rc}r}
x  &+  &2y   &-   &z   &    &    &=  &3  \\
2x  &+  &y    &    &    &+   &w   &=  &4  \\
x  &-  &y    &+   &z   &+   &w   &=  &1  
\end{array}
  6.  \begin{array}{*{4}{rc}r}
x  &   &     &+   &z   &+   &w   &=  &4  \\
2x  &+  &y    &    &    &-   &w   &=  &2  \\
3x  &+  &y    &+   &z   &    &    &=  &7  
\end{array}
Problem 4

What does the computer give for the solution of the general 2 \! \times \! 2 system?


\begin{array}{*{2}{rc}r}
ax  &+  &cy  &=  &p  \\
bx  &+  &dy  &=  &q
\end{array}


Topic: Input-Output Analysis

An economy is an immensely complicated network of interdependences. Changes in one part can ripple out to affect other parts. Economists have struggled to be able to describe, and to make predictions about, such a complicated object. Mathematical models using systems of linear equations have emerged as a key tool. One is Input-Output Analysis, pioneered by W. Leontief, who won the 1973 Nobel Prize in Economics.

Consider an economy with many parts, two of which are the steel industry and the auto industry. As they work to meet the demand for their product from other parts of the economy, that is, from users external to the steel and auto sectors, these two interact tightly. For instance, should the external demand for autos go up, that would lead to an increase in the auto industry's usage of steel. Or, should the external demand for steel fall, then it would lead to a fall in steel's purchase of autos. The type of Input-Output model we will consider takes in the external demands and then predicts how the two interact to meet those demands.

We start with a listing of production and consumption statistics. (These numbers, giving dollar values in millions, are excerpted from (Leontief 1965), describing the 1958 U.S. economy. Today's statistics would be quite different, both because of inflation and because of technical changes in the industries.)

\begin{array}{r|c|c|c|c}
  &\textit{used by steel}  &\textit{used by auto}  &\textit{used by others}  &\textit{total} \\
\hline
\textit{value of steel}  &5\,395  &2\,664  &  &25\,448  \\
\textit{value of auto}   &48      &9\,030  &  &30\,346
\end{array}

For instance, the dollar value of steel used by the auto industry in this year is 2,664 million. Note that industries may consume some of their own output.

We can fill in the blanks for the external demand. This year's value of the steel used by others is 25,448 - 5,395 - 2,664 = 17,389 and this year's value of the autos used by others is 30,346 - 48 - 9,030 = 21,268. With that, we have a complete description of the external demands and of how auto and steel interact, this year, to meet them.

Now, imagine that the external demand for steel has recently been going up by 200 per year and so we estimate that next year it will be 17,589. Imagine also that for similar reasons we estimate that next year's external demand for autos will be down 25 to 21,243. We wish to predict next year's total outputs.

That prediction isn't as simple as adding 200 to this year's steel total and subtracting 25 from this year's auto total. For one thing, a rise in steel will cause that industry to have an increased demand for autos, which will mitigate, to some extent, the loss in external demand for autos. On the other hand, the drop in external demand for autos will cause the auto industry to use less steel, and so lessen somewhat the upswing in steel's business. In short, these two industries form a system, and we need to predict the totals at which the system as a whole will settle.

For that prediction, let s be next year's total production of steel and let a be next year's total output of autos. We form these equations.


\begin{array}{rl}
\text{next year}\textrm{'}\text{s production of steel}
&=\text{next year}\textrm{'}\text{s use of steel by steel}   \\
&\quad+\text{next year}\textrm{'}\text{s use of steel by auto}  \\             
&\quad+\text{next year}\textrm{'}\text{s use of steel by others} \\
\text{next year}\textrm{'}\text{s production of autos}
&=\text{next year}\textrm{'}\text{s use of autos by steel}   \\
&\quad+\text{next year}\textrm{'}\text{s use of autos by auto}  \\             
&\quad+\text{next year}\textrm{'}\text{s use of autos by others} 
\end{array}


On the left side of those equations go the unknowns s and a. At the ends of the right sides go our external demand estimates for next year 17,589 and 21,243. For the remaining four terms, we look to the table of this year's information about how the industries interact.

For instance, for next year's use of steel by steel, we note that this year the steel industry used 5395 units of steel input to produce 25,448 units of steel output. So next year, when the steel industry will produce s units out, we expect that doing so will take s\cdot (5395)/(25\,448) units of steel input— this is simply the assumption that input is proportional to output. (We are assuming that the ratio of input to output remains constant over time; in practice, models may try to take account of trends of change in the ratios.)

Next year's use of steel by the auto industry is similar. This year, the auto industry uses 2664 units of steel input to produce 30346 units of auto output. So next year, when the auto industry's total output is a, we expect it to consume a\cdot (2664)/(30346) units of steel.

Filling in the other equation in the same way, we get this system of linear equations.



\begin{array}{*{3}{rc}r}
{\displaystyle\frac{5\,395}{25\,448}}\cdot s 
&+ &{\displaystyle\frac{2\,664}{30\,346}}\cdot a &+ &17\,589 
&= &s \\[1em]  
{\displaystyle\frac{48}{25\,448}}\cdot s     
&+ &{\displaystyle\frac{9\,030}{30\,346}}\cdot a &+ &21\,243 
&= &a
\end{array}


Moving the s and a terms to the left side and applying Gauss' method to the resulting system



\begin{array}{*{2}{rc}r}
(20\,053/25\,448)s &- &(2\,664/30\,346)a &= &17\,589 \\ 
-(48/25\,448)s      &+ &(21\,316/30\,346)a &= &21\,243 
\end{array}


gives s=25\,698 and a=30\,311.

Looking back, recall that above we described why the prediction of next year's totals isn't as simple as adding 200 to this year's steel total and subtracting 25 from this year's auto total. Comparing the predicted totals to this year's shows the system effects at work. Steel's total production is predicted to rise by 250, noticeably more than the 200 rise in its external demand, because the extra steel output itself requires more steel. The auto industry's total is predicted to fall by about 35, somewhat more than the 25 drop in its external demand: steel's increased business does add a little to the internal demand for autos, but the fall in auto production also cuts the auto industry's use of its own product, and that feedback outweighs the help from steel.

One of the advantages of having a mathematical model is that we can ask "What if ...?" questions. For instance, we can ask "What if the estimates for next year's external demands are somewhat off?" To try to understand how much the model's predictions change in reaction to changes in our estimates, we can try revising our estimate of next year's external steel demand from 17,589 down to 17,489, while keeping the assumption of next year's external demand for autos fixed at 21,243. The resulting system



\begin{array}{*{2}{rc}r}
(20\,053/25\,448)s &- &(2\,664/30\,346)a &= &17\,489 \\ 
-(48/25\,448)s      &+ &(21\,316/30\,346)a &= &21\,243 
\end{array}


when solved gives s=25\,571 and a=30\,311. This kind of exploration of the model is sensitivity analysis. We are seeing how sensitive the predictions of our model are to the accuracy of the assumptions.

Obviously, we can consider larger models that detail the interactions among more sectors of an economy. These models are typically solved on a computer, using the techniques of matrix algebra that we will develop in Chapter Three. Some examples are given in the exercises. Obviously also, a single model does not suit every case; expert judgment is needed to see if the assumptions underlying the model are reasonable for a particular case. With those caveats, however, this model has proven in practice to be a useful and accurate tool for economic analysis. For further reading, try (Leontief 1951) and (Leontief 1965).


Exercises

Hint: these systems are easiest to solve on a computer.

Problem 1

With the steel-auto system given above, estimate next year's total productions in these cases.

  1. Next year's external demands are: up 200 from this year for steel, and unchanged for autos.
  2. Next year's external demands are: up 100 for steel, and up 200 for autos.
  3. Next year's external demands are: up 200 for steel, and up 200 for autos.
Problem 2

In the steel-auto system, the ratio for the use of steel by the auto industry is 2\,664/30\,346, about 0.0878. Imagine that a new process for making autos reduces this ratio to .0500.

  1. How will the predictions for next year's total productions change compared to the first example discussed above (i.e., taking next year's external demands to be 17,589 for steel and 21,243 for autos)?
  2. Predict next year's totals if, in addition, the external demand for autos rises to be 21,500 because the new cars are cheaper.
Problem 3

This table gives the numbers for the auto-steel system from a different year, 1947 (see Leontief 1951). The units here are billions of 1947 dollars.

\begin{array}{r|c|c|c|c}
  &\textit{used by steel}  &\textit{used by auto}  &\textit{used by others}  &\textit{total} \\
\hline
\textit{value of steel}  &6.90  &1.28  &  &18.69  \\
\textit{value of auto}   &0     &4.40  &  &14.27
\end{array}
  1. Solve for total output if next year's external demands are: steel's demand up 10% and auto's demand up 15%.
  2. How do the ratios compare to those given above in the discussion for the 1958 economy?
  3. Solve the 1947 equations with the 1958 external demands (note the difference in units; a 1947 dollar buys about what $1.30 in 1958 dollars buys). How far off are the predictions for total output?
Problem 4

Predict next year's total productions of each of the three sectors of the hypothetical economy shown below

\begin{array}{r|c|c|c|c|c}
  &\textit{used by farm}  &\textit{used by rail}  &\textit{used by shipping}  &\textit{used by others}  &\textit{total} \\
\hline
\textit{value of farm}      &25  &50  &100  &  &500  \\
\textit{value of rail}      &25  &50  &50   &  &300  \\
\textit{value of shipping}  &15  &10  &0    &  &500
\end{array}

if next year's external demands are as stated.

  1. 625 for farm, 200 for rail, 475 for shipping
  2. 650 for farm, 150 for rail, 450 for shipping
Problem 5

This table gives the interrelationships among three segments of an economy (see Clark & Coupe 1967).

\begin{array}{r|c|c|c|c|c}
  &\textit{used by food}  &\textit{used by wholesale}  &\textit{used by retail}  &\textit{used by others}  &\textit{total} \\
\hline
\textit{value of food}       &0    &2\,318  &4\,679   &  &11\,869   \\
\textit{value of wholesale}  &393  &1\,089  &22\,459  &  &122\,242  \\
\textit{value of retail}     &3    &53      &75       &  &116\,041
\end{array}

We will do an Input-Output analysis on this system.

  1. Fill in the numbers for this year's external demands.
  2. Set up the linear system, leaving next year's external demands blank.
  3. Solve the system where next year's external demands are calculated by taking this year's external demands and inflating them 10%. Do all three sectors increase their total business by 10%? Do they all even increase at the same rate?
  4. Solve the system where next year's external demands are calculated by taking this year's external demands and reducing them 7%. (The study from which these numbers are taken concluded that because of the closing of a local military facility, overall personal income in the area would fall 7%, so this might be a first guess at what would actually happen.)


Input-Output Analysis M File

 # Octave commands for _Linear Algebra_ by Jim Hefferon,
 # Topic: leontif.tex
 a=[(25448-5395)/25448  -2664/30346;
     -48/25448          (30346-9030)/30346];
 b=[17589;
    21243];
 ans=a \ b;
 printf("The answer to the first system is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 b=[17489;
    21243];
 ans=a \ b;
 printf("The answer to the second system is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 # question 1
 b=[17789;
    21243];
 ans=a \ b;
 printf("The answer to question (1a) is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 b=[17689;
    21443];
 ans=a \ b;
 printf("The answer to question (1b) is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 b=[17789;
    21443];
 ans=a \ b;
 printf("The answer to question (1c) is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 # question 2
 printf("Current ratio for use of steel by auto is %0.4f\n",2664/30346);
 a=[(25448-5395)/25448  -0.0500;
     -48/25448          (30346-9030)/30346];
 b=[17589;
    21243];
 ans=a \ b;
 printf("The answer to 2(a) is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 b=[17589;
    21500];
 ans=a \ b;
 printf("The answer to 2(b) is s=%0.0f and a=%0.0f\n",ans(1),ans(2));
 # question 3
 printf("The value of steel used by others is %0.2f\n",18.69-(6.90+1.28));
 printf("The value of autos used by others is %0.2f\n",14.27-(0+4.40));
 a=[(18.69-6.90)/18.69  -1.28/14.27;
     -0/18.69          (14.27-4.40)/14.27];
 b=[1.10*(18.69-(6.90+1.28));
    1.15*(14.27-(0+4.40))];
 ans=a \ b;
 printf("The answer to 3(a) is s=%0.2f and a=%0.2f\n",ans(1),ans(2));
 printf("The 1947 ratio of steel used by steel is %0.2f\n",(18.69-6.90)/18.69);
 printf("The 1947 ratio of steel used by autos is %0.2f\n",1.28/14.27);
 printf("The 1947 ratio of autos used by steel is %0.2f\n",0/18.69);
 printf("The 1947 ratio of autos used by autos is %0.2f\n",(14.27-4.40)/14.27);
 printf("The 1958 ratio of steel used by steel is %0.2f\n",(25448-5395)/25448);
 printf("The 1958 ratio of steel used by autos is %0.2f\n",2664/30346);
 printf("The 1958 ratio of autos used by steel is %0.2f\n",48/25448);
 printf("The 1958 ratio of autos used by autos is %0.2f\n",(30346-9030)/30346);
 b=[17.589/1.30;
    21.243/1.30];
 ans=a \ b;
 newans=1.30 * ans;
 printf("The answer to 3(c) is (in billions of 1947 dollars) s=%0.2f and a=%0.2f\n  and in billions of 1958 dollars it is s=%0.2f and     a=%0.2f\n",ans(1),ans(2),newans(1),newans(2));


Topic: Accuracy of Computations

Gauss' method lends itself nicely to computerization. The code below illustrates. It operates on an n \! \times \! n matrix a, pivoting with the first row, then with the second row, etc.

for (pivot_row = 1; pivot_row <= n - 1; pivot_row++) {
    for (row_below = pivot_row + 1; row_below <= n; row_below++) {
        multiplier = a[row_below, pivot_row] / a[pivot_row, pivot_row];
        for (col = pivot_row; col <= n; col++) {
            a[row_below, col] -= multiplier * a[pivot_row, col];
        }
    }
}

(This code is in the C language. Here is a brief translation. The loop construct for (pivot_row = 1; pivot_row <= n - 1; pivot_row++) { ... } sets pivot_row to 1 and then iterates while pivot_row is less than or equal to n-1, each time through incrementing pivot_row by one with the "++" operation. The other non-obvious construct is that the "-=" in the innermost loop is shorthand for the operation a[row_below, col] = a[row_below, col] - multiplier * a[pivot_row, col].)

While this code provides a quick take on how Gauss' method can be mechanized, it is not ready to use. It is naive in many ways. The most glaring way is that it assumes that a nonzero number is always found in the pivot_row, pivot_row position for use as the pivot entry. To make it practical, one way in which this code needs to be reworked is to cover the case where finding a zero in that location leads to a row swap, or to the conclusion that the matrix is singular.

Adding some if statements to cover those cases is not hard, but we will instead consider some more subtle ways in which the code is naive. There are pitfalls arising from the computer's reliance on finite-precision floating point arithmetic.

For example, we have seen above that we must handle as a separate case a system that is singular. But systems that are nearly singular also require care. Consider this one.


\begin{array}{*{2}{rc}r}
x &+ &2y &= &3  \\
1.000\,000\,01x &+ &2y &= &3.000\,000\,01
\end{array}

By eye we get the solution x=1 and y=1. But a computer has more trouble. A computer that represents real numbers to eight significant places (as is common, usually called single precision) will represent the second equation internally as 1.000\,000\,0x+2y=3.000\,000\,0, losing the digits in the ninth place. Instead of reporting the correct solution, this computer will report something that is not even close— this computer thinks that the system is singular because the two equations are represented internally as equal.

For some intuition about how the computer could come up with something that far off, we can graph the system.

Linalg singular system.png

At the scale of this graph, the two lines cannot be resolved apart. This system is nearly singular in the sense that the two lines are nearly the same line. Near-singularity gives this system the property that a small change in the system can cause a large change in its solution; for instance, changing the 3.000\,000\,01 to 3.000\,000\,03 changes the intersection point from (1,1) to (3,0). This system changes radically depending on a ninth digit, which explains why the eight-place computer has trouble. A problem that is very sensitive to inaccuracy or uncertainties in the input values is ill-conditioned.
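
A quick experiment shows this sensitivity numerically. The Octave lines below (a sketch of ours, not from the text) solve the system with the original right side and then with the ninth-digit change; even in ordinary double precision the two answers come out near (1,1) and (3,0).

 # Octave sketch: a tiny change in a nearly singular system's data
 # produces a large change in its solution.
 A = [1          2;
      1.00000001 2];
 A \ [3; 3.00000001]    # approximately x=1, y=1
 A \ [3; 3.00000003]    # approximately x=3, y=0
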

The above example gives one way in which a system can be difficult to solve on a computer. It has the advantage that the picture of nearly-equal lines gives a memorable insight into one way that numerical difficulties can arise. Unfortunately this insight isn't very useful when we wish to solve some large system. We cannot, typically, hope to understand the geometry of an arbitrary large system. In addition, there are ways that a computer's results may be unreliable other than that the angle between some of the linear surfaces is quite small.

For an example, consider the system below, from (Hamming 1971).


\begin{array}{*{2}{rc}r}
0.001x  &+  &y  &=  &1  \\
x  &-  &y  &=  &0
\end{array}
\qquad(*)

The second equation gives x=y, so x=y=1/1.001 and thus both variables have values that are just less than 1. A computer using two digits represents the system internally in this way (we will do this example in two-digit floating point arithmetic, but a similar one with eight digits is easy to invent).


\begin{array}{*{2}{rc}r}
(1.0\times 10^{-3})x  &+  &(1.0\times 10^{0})y  &=  &1.0\times 10^{0}  \\
(1.0\times 10^{0})x   &-  &(1.0\times 10^{0})y  &=  &0.0\times 10^{0}
\end{array}

The computer's row reduction step -1000\rho_1+\rho_2 produces a second equation -1001y=-1000, which the computer rounds to two places as (-1.0\times 10^{3})y=-1.0\times 10^{3}. Then the computer decides from the second equation that y=1 and from the first equation that x=0. This y value is fairly good, but the x is quite bad. Thus, another cause of unreliable output is a mixture of floating point arithmetic and a reliance on pivots that are small.
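
To watch that happen without doing the rounding by hand, here is a small Octave sketch (ours, not from the text). The helper chop2 is a made-up stand-in for the hypothetical two-digit machine: it rounds a nonzero number to two significant decimal digits after each arithmetic step.

 # Octave sketch: simulate two-digit arithmetic while eliminating with
 # the tiny 0.001 pivot of system (*).
 chop2 = @(x) round(x ./ 10.^(floor(log10(abs(x))) - 1)) ...
              .* 10.^(floor(log10(abs(x))) - 1);
 A = [0.001 1; 1 -1];   b = [1; 0];
 m   = chop2(A(2,1) / A(1,1));         # multiplier 1000
 a22 = chop2(A(2,2) - m * A(1,2));     # -1001 rounds to -1000
 b2  = chop2(b(2)   - m * b(1));       # -1000
 y   = chop2(b2 / a22);                # y = 1, close to the true value
 x   = (b(1) - A(1,2) * y) / A(1,1);   # x = 0, far from the true 0.999...
 printf("tiny pivot gives x = %g, y = %g\n", x, y);
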

An experienced programmer may respond that we should go to double precision where sixteen significant digits are retained. This will indeed solve many problems. However, there are some difficulties with it as a general approach. For one thing, double precision takes longer than single precision (on a '486 chip, multiplication takes eleven ticks in single precision but fourteen in double precision (Microsoft 1993)) and has twice the memory requirements. So attempting to do all calculations in double precision is just not practical. And besides, the above systems can obviously be tweaked to give the same trouble in the seventeenth digit, so double precision won't fix all problems. What we need is a strategy to minimize the numerical trouble arising from solving systems on a computer, and some guidance as to how far the reported solutions can be trusted.

Mathematicians have made a careful study of how to get the most reliable results. A basic improvement on the naive code above is to not simply take the entry in the pivot_row, pivot_row position for the pivot, but rather to look at all of the entries in the pivot_row column below the pivot_row row, and take the one that is most likely to give reliable results (e.g., take one that is not too small). This strategy is partial pivoting. For example, to solve the troublesome system (*) above, we start by looking at both equations for a best first pivot, and taking the 1 in the second equation as more likely to give good results. Then, the pivot step of -.001\rho_2+\rho_1 gives a first equation of 1.001y=1, which the computer will represent as (1.0\times 10^{0})y=1.0\times 10^{0}, leading to the conclusion that y=1 and, after back-substitution, x=1, both of which are close to right. The code from above can be adapted to this purpose.

for (pivot_row = 1; pivot_row <= n - 1; pivot_row++) {
    /* Find the largest pivot in this column (in row max). */
    max = pivot_row;
    for (row_below = pivot_row + 1; row_below <= n; row_below++) {
        if (abs(a[row_below, pivot_row]) > abs(a[max, pivot_row]))
            max = row_below;
     }
 
    /* Swap rows to move that pivot entry up. */
    for (col = pivot_row; col <= n; col++) {
        temp = a[pivot_row, col];
        a[pivot_row, col] = a[max, col];
        a[max, col] = temp;
     }
 
     /* Proceed as before. */
     for (row_below = pivot_row + 1; row_below <= n; row_below++) {
         multiplier = a[row_below, pivot_row] / a[pivot_row, pivot_row];
         for (col = pivot_row; col <= n; col++) {
             a[row_below, col] -= multiplier * a[pivot_row, col];
         }
     }
}
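
To see the improvement numerically, the two-digit simulation sketched earlier can be repeated with the rows swapped so that the 1 entry is the pivot (again a sketch of ours, not from the text, using the same made-up chop2 helper).

 # Octave sketch: the same two-digit arithmetic, but pivoting on the 1 entry.
 chop2 = @(x) round(x ./ 10.^(floor(log10(abs(x))) - 1)) ...
              .* 10.^(floor(log10(abs(x))) - 1);
 A = [0.001 1; 1 -1];   b = [1; 0];
 m   = chop2(A(1,1) / A(2,1));               # multiplier 0.001
 a22 = chop2(A(1,2) - m * A(2,2));           # 1.001 rounds to 1.0
 b2  = chop2(b(1)   - m * b(2));             # 1.0
 y   = chop2(b2 / a22);                      # y = 1
 x   = chop2((b(2) - A(2,2) * y) / A(2,1));  # x = 1, close to right
 printf("partial pivoting gives x = %g, y = %g\n", x, y);
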

A full analysis of the best way to implement Gauss' method is outside the scope of the book (see (Wilkinson 1965)), but the method recommended by most experts is a variation on the code above that first finds the best pivot among the candidates, and then scales it to a number that is less likely to give trouble. This is scaled partial pivoting.

In addition to returning a result that is likely to be reliable, most well-done code will return a number, called the condition number, that describes the factor by which uncertainties in the input numbers could be magnified to become inaccuracies in the results returned (see (Rice 1993)).

The lesson of this discussion is that just because Gauss' method always works in theory, and just because computer code correctly implements that method, and just because the answer appears on green-bar paper, doesn't mean that the answer is reliable. In practice, always use a package where experts have worked hard to counter what can go wrong.

Exercises

Problem 1

Using two decimal places, add 253 and 2/3.

Problem 2

This intersect-the-lines problem contrasts with the example discussed above.

Linalg nonsingular system.png                  \displaystyle \begin{array}{*{2}{rc}r}
x &+ &2y &= &3  \\
3x &- &2y &= &1
\end{array}

Illustrate that in this system some small change in the numbers will produce only a small change in the solution by changing the constant in the bottom equation to 1.008 and solving. Compare it to the solution of the unchanged system.

Problem 3

Solve this system by hand (Rice 1993).


\begin{array}{*{2}{rc}r}
0.000\,3x  &+  &1.556y  &=  &1.569 \\
0.345\,4x  &-  &2.346y  &=  &1.018
\end{array}
  1. Solve it accurately, by hand.
  2. Solve it by rounding at each step to four significant digits.
Problem 4

Rounding inside the computer often has an effect on the result. Assume that your machine has eight significant digits.

  1. Show that the machine will compute (2/3)+((2/3)-(1/3)) as unequal to ((2/3)+(2/3))-(1/3). Thus, computer arithmetic is not associative.
  2. Compare the computer's version of (1/3)x+y=0 and (2/3)x+2y=0. Is twice the first equation the same as the second?
Problem 5

Ill-conditioning is not only dependent on the matrix of coefficients. This example (Hamming 1971) shows that it can arise from an interaction between the left and right sides of the system. Let \varepsilon be a small real.


\begin{array}{*{3}{rc}r}
3x  &+  &2y           &+  &z            &=  &6   \\
2x  &+  &2\varepsilon y  &+  &2\varepsilon z
&=  &2+4\varepsilon \\
x  &+  &2\varepsilon y  &-  &\varepsilon z
&=  &1+\varepsilon
\end{array}
  1. Solve the system by hand. Notice that the \varepsilon's divide out only because there is an exact cancelation of the integer parts on the right side as well as on the left.
  2. Solve the system by hand, rounding to two decimal places, and with \varepsilon=0.001.


Topic: Analyzing Networks

The diagram below shows some of a car's electrical network. The battery is on the left, drawn as stacked line segments. The wires are drawn as lines, shown straight and with sharp right angles for neatness. Each light is a circle enclosing a loop.

Linalg car circuit.png

The designer of such a network needs to answer questions like: How much electricity flows when both the hi-beam headlights and the brake lights are on? Below, we will use linear systems to analyze simpler versions of electrical networks.

For the analysis we need two facts about electricity and two facts about electrical networks.

The first fact about electricity is that a battery is like a pump: it provides a force impelling the electricity to flow through the circuits connecting the battery's ends, if there are any such circuits. We say that the battery provides a potential to flow. Of course, this network accomplishes its function when, as the electricity flows through a circuit, it goes through a light. For instance, when the driver steps on the brake then the switch makes contact and a circuit is formed on the left side of the diagram, and the electrical current flowing through that circuit will make the brake lights go on, warning drivers behind.

The second electrical fact is that in some kinds of network components the amount of flow is proportional to the force provided by the battery. That is, for each such component there is a number, its resistance, such that the potential is equal to the flow times the resistance. The units of measurement are: potential is described in volts, the rate of flow is in amperes, and resistance to the flow is in ohms. These units are defined so that \mbox{volts}=\mbox{amperes}\cdot\mbox{ohms}.

Components with this property, that the voltage-amperage response curve is a line through the origin, are called resistors. (Light bulbs such as the ones shown above are not this kind of component, because their ohmage changes as they heat up.) For example, if a resistor measures 2 ohms then wiring it to a 12 volt battery results in a flow of 6 amperes. Conversely, if there is a flow of 2 amperes through it then there must be a 4 volt potential difference between its ends. This is the voltage drop across the resistor. One way to think of an electrical circuit like the one above is that the battery provides a voltage rise while the other components are voltage drops.

The two facts that we need about networks are Kirchhoff's Laws.

  • Current Law. For any point in a network, the flow in equals the flow out.
  • Voltage Law. Around any circuit the total drop equals the total rise.

In the above network there is only one voltage rise, at the battery, but some networks have more than one.

For a start we can consider the network below. It has a battery that provides the potential to flow and three resistors (resistors are drawn as zig-zags). When components are wired one after another, as here, they are said to be in series.

Linalg resisters in series.png

By Kirchhoff's Voltage Law, because the voltage rise is 20 volts, the total voltage drop must also be 20 volts. Since the resistance from start to finish is 10 ohms (the resistance of the wires is negligible), we get that the current is (20/10)=2 amperes. Now, by Kirchhoff's Current Law, there are 2 amperes through each resistor. (And therefore the voltage drops are: 4 volts across the 2 ohm resistor, 10 volts across the 5 ohm resistor, and 6 volts across the 3 ohm resistor.)

The prior network is so simple that we didn't use a linear system, but the next network is more complicated. In this one, the resistors are in parallel. This network is more like the car lighting diagram shown earlier.

Linalg resisters in parallel.png

We begin by labeling the branches, shown below. Let the current through the left branch of the parallel portion be i_1 and that through the right branch be i_2, and also let the current through the battery be i_0. (We are following Kirchhoff's Current Law; for instance, all points in the right branch have the same current, which we call i_2. Note that we don't need to know the actual direction of flow— if current flows in the direction opposite to our arrow then we will simply get a negative number in the solution.)

Linalg resisters in parallel 2.png

The Current Law, applied to the point in the upper right where the flow i_0 meets i_1 and i_2, gives that i_0=i_1+i_2. Applied to the lower right it gives i_1+i_2=i_0. In the circuit that loops out of the top of the battery, down the left branch of the parallel portion, and back into the bottom of the battery, the voltage rise is 20 while the voltage drop is i_1\cdot 12, so the Voltage Law gives that 12i_1=20. Similarly, the circuit from the battery to the right branch and back to the battery gives that 8i_2=20. And, in the circuit that simply loops around in the left and right branches of the parallel portion (arbitrarily taken clockwise), there is a voltage rise of 0 and a voltage drop of 8i_2-12i_1 so the Voltage Law gives that 8i_2-12i_1=0.


\begin{array}{*{3}{rc}r}
i_0&- &i_1    &-  &i_2   &=  &0 \\
-i_0&+ &i_1    &+  &i_2   &=  &0  \\
&  &12i_1  &   &      &=  &20  \\
&  &       &   &8i_2  &=  &20  \\
&  &-12i_1 &+  &8i_2  &=  &0
\end{array}

The solution is i_0=25/6, i_1=5/3, and i_2=5/2, all in amperes. (Incidentally, this illustrates that redundant equations do indeed arise in practice.)
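
As a check, the whole list of equations can be handed to a computer. In this Octave sketch (ours, not from the text) the backslash operator solves the system in the least-squares sense; because the redundant equations are consistent, the result is the exact solution just found.

 # Octave sketch: the parallel-resistor network, redundant equations and all.
 A = [ 1  -1  -1;
      -1   1   1;
       0  12   0;
       0   0   8;
       0 -12   8];
 b = [0; 0; 20; 20; 0];
 A \ b     # i0 = 25/6, i1 = 5/3, i2 = 5/2 (about 4.17, 1.67, 2.50)
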

Kirchhoff's laws can be used to establish the electrical properties of networks of great complexity. The next diagram shows five resistors, wired in a series-parallel way.

Linalg wheatstone bridge.png

This network is a Wheatstone bridge (see Problem 4). To analyze it, we can place the arrows in this way.

Linalg wheatstone bridge 2.png

Kirchhoff's Current Law, applied to the top node, the left node, the right node, and the bottom node gives these.

\begin{array}{rl}
i_0     &=  i_1+i_2  \\
i_1     &=  i_3+i_5  \\
i_2+i_5 &=  i_4      \\
i_3+i_4 &=  i_0
\end{array}

Kirchhoff's Voltage Law, applied to the inside loop (the i_0 to i_1 to i_3 to i_0 loop), the outside loop, and the upper loop not involving the battery, gives these.

\begin{array}{rl}
5i_1+10i_3  &= 10   \\
2i_2+4i_4   &= 10   \\
5i_1+50i_5-2i_2  &= 0
\end{array}

Those suffice to determine the solution i_0=7/3, i_1=2/3, i_2=5/3, i_3=2/3, i_4=5/3, and i_5=0.

Networks of other kinds, not just electrical ones, can also be analyzed in this way. For instance, networks of streets are given in the exercises.

Exercises

Many of the systems for these problems are most easily solved on a computer.

Problem 1

Calculate the amperages in each part of each network.

  1. This is a simple network.

    Linalg resisters in series 2.png

  2. Compare this one with the parallel case discussed above.

    Linalg circuit 1.png

  3. This is a reasonably complicated network.

    Linalg circuit 2.png

Problem 2

In the first network that we analyzed, with the three resistors in series, we just added to get that they acted together like a single resistor of 10 ohms. We can do a similar thing for parallel circuits. In the second circuit analyzed,

Linalg resisters in parallel.png

the electric current through the battery is 25/6 amperes. Thus, the parallel portion is equivalent to a single resistor of 20/(25/6)=4.8 ohms.

  1. What is the equivalent resistance if we change the 12 ohm resistor to 5 ohms?
  2. What is the equivalent resistance if the two are each 8 ohms?
  3. Find the formula for the equivalent resistance if the two resistors in parallel are r_1 ohms and r_2 ohms.
Problem 3

For the car dashboard example that opens this Topic, solve for these amperages (assume that all resistances are 2 ohms).

  1. If the driver is stepping on the brakes, so the brake lights are on, and no other circuit is closed.
  2. If the hi-beam headlights and the brake lights are on.
Problem 4

Show that, in this Wheatstone Bridge,

Linalg wheatstone bridge 3.png

r_2/r_1 equals r_4/r_3 if and only if the current flowing through r_g is zero. (The way that this device is used in practice is that an unknown resistance at r_4 is compared to the other three r_1, r_2, and r_3. At r_g is placed a meter that shows the current. The three resistances r_1, r_2, and r_3 are varied— typically they each have a calibrated knob— until the current in the middle reads 0, and then the above equation gives the value of r_4.)

There are networks other than electrical ones, and we can ask how well Kirchhoff's laws apply to them. The remaining questions consider an extension to networks of streets.

Problem 5

Consider this traffic circle.

Linalg rotary.png

This is the traffic volume, in units of cars per five minutes.


\begin{array}{r|c|c|c}
&\textit{North}  &\textit{Pier}  &\textit{Main}  \\
\hline
\textit{into}      &100             &150            &25      \\
\textit{out of}    &75              &150            &50
\end{array}

We can set up equations to model how the traffic flows.

  1. Adapt Kirchhoff's Current Law to this circumstance. Is it a reasonable modelling assumption?
  2. Label the three between-road arcs in the circle with a variable. Using the (adapted) Current Law, for each of the three in-out intersections state an equation describing the traffic flow at that node.
  3. Solve that system.
  4. Interpret your solution.
  5. Restate the Voltage Law for this circumstance. How reasonable is it?
Problem 6

This is a network of streets.

Linalg intersection.png

The hourly flow of cars into this network's entrances, and out of its exits can be observed.


\begin{array}{r|c|c|c|c|c}
&\textit{east\ Winooski}
&\textit{west\ Winooski}
&\textit{Willow}
&\textit{Jay}
&\textit{Shelburne} \\
\hline
\text{into}      &80    &50    &65     &--    &40      \\
\text{out of}    &30    &5     &70     &55    &75
\end{array}

(Note that to reach Jay a car must enter the network via some other road first, which is why there is no "into Jay" entry in the table. Note also that over a long period of time, the total in must approximately equal the total out, which is why both rows add to 235 cars.) Once inside the network, the traffic may flow in different ways, perhaps filling Willow and leaving Jay mostly empty, or perhaps flowing in some other way. Kirchhoff's Laws give the limits on that freedom.

  1. Determine the restrictions on the flow inside this network of streets by setting up a variable for each block, establishing the equations, and solving them. Notice that some streets are one-way only. (Hint: this will not yield a unique solution, since traffic can flow through this network in various ways; you should get at least one free variable.)
  2. Suppose that some construction is proposed for Winooski Avenue East between Willow and Jay, so traffic on that block will be reduced. What is the least amount of traffic flow that can be allowed on that block without disrupting the hourly flow into and out of the network?


Topic: Speed of Gauss' Method

We are using Gauss' Method to solve the linear systems in this book because it is easy to understand, easily shown to give the right answers, and fast. It is fast in that, in all the by-hand calculations we have needed, we have gotten the answers in only a few steps, taking only a few minutes. However, scientists and engineers who solve linear systems in practice must have a method that is fast enough for large systems, with 1000 equations or 10,000 equations or even 100,000 equations. These systems are solved on a computer, so the speed of the machine helps, but nonetheless the speed of the method used is a major consideration, and is sometimes the factor limiting which problems can be solved.

The speed of an algorithm is usually measured by finding how the time taken to solve problems grows as the size of the input data set grows. That is, how much longer will the algorithm take if we increase the size of the input data by a factor of ten, say from a 1000-equation system to a 10,000-equation system, or from 10,000 to 100,000? Does the time taken grow ten times, or a hundred times, or a thousand times? Is the time taken by the algorithm proportional to the size of the data set, or to the square of that size, or to the cube of that size, etc.?

Here is a fragment of Gauss' Method code, implemented in the computer language FORTRAN. The coefficients of the linear system are stored in the N \! \times \! N array A, and the constants are stored in the N \! \times \! 1 array B. For each ROW between 1 and N this program has already found the pivot entry A(ROW,COL). Now it will pivot.


-(A(I,COL)/A(ROW,COL))\cdot \rho_{ROW}+\rho_{I}

(This code fragment is for illustration only, and is incomplete. For example, see the Topic on the Accuracy of Computations above. Nonetheless, this fragment will do for our purposes because analysis of finished versions, including all the tests and sub-cases, is messier but gives essentially the same result.)


PIVINV=1.0/A(ROW,COL)
DO 10 I=ROW+1, N
MULT=A(I,COL)*PIVINV
DO 20 J=COL, N
A(I,J)=A(I,J)-MULT*A(ROW,J)
20 CONTINUE
B(I)=B(I)-MULT*B(ROW)
10 CONTINUE


The outermost loop (not shown) runs through N-1 rows. For each of these rows, the shown loops perform arithmetic on the entries in A that are below and to the right of the pivot entry (and also on the entries in B, but to simplify the analysis we will not count those operations). We will assume the pivot is found in the usual place, that is, that COL=ROW (as above, analysis of the general case is messier but gives essentially the same result). That means there are (N-ROW)^2 entries to perform arithmetic on. On average, ROW will be N/2. Thus we estimate the nested loops above will run something like (N/2)^2 times, that is, will run in a time proportional to the square of the number of equations. Taking into account the outer loop that is not shown, we get the estimate that the running time of the algorithm is proportional to the cube of the number of equations.
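
For a rough check of that cube estimate, we can add the work up row by row. Summing the (N-ROW)^2 count over the values of ROW gives


\sum_{ROW=1}^{N-1}(N-ROW)^2
=\sum_{k=1}^{N-1}k^2
=\frac{(N-1)N(2N-1)}{6}


which grows like N^3/3, so the total operation count is indeed proportional to the cube of the number of equations.
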

Algorithms that run in time directly proportional to the size of the data set are fast, algorithms that run in time proportional to the square of the size of the data set are less fast, but typically quite usable, and algorithms that run in time proportional to the cube of the size of the data set are still reasonable in speed.

Speed estimates like these are a good way of understanding how quickly or slowly an algorithm can be expected to run on average. There are special cases, however, of systems on which the above Gauss' method code is especially fast, so there may be factors about a problem that make it especially suitable for this kind of solution.

In practice, the code found in computer algebra systems, or in the standard packages, implements a variant on Gauss' method, called triangular factorization. To state this method requires the language of matrix algebra, which we will not see until Chapter Three. Nonetheless, the above code is conceptually quite close to that usually used in applications.

There have been some theoretical speed-ups in the running time required to solve linear systems. Algorithms other than Gauss' method have been invented that take a time proportional not to the cube of the size of the data set, but instead to the (approximately) 2.7 power (this is still under active research, so this exponent may come down somewhat over time). However, these theoretical improvements have not come into widespread use, in part because the new methods take a quite large data set before they overtake Gauss' method (although they will outperform Gauss' method on very large sets, there is some startup overhead that keeps them from being faster on the systems that have, so far, been solved in practice).


Exercises

Problem 1

Computer systems allow the generation of random numbers (of course, these are only pseudo-random, in that they are generated by some algorithm, but the sequence of numbers that is gotten passes a number of reasonable statistical tests for apparent randomness).

  1. Fill a 5 \! \times \! 5 array with random numbers (say, in the range [0..1)). Apply Gauss' method to see if it is singular. Repeat that experiment ten times. Are singular matrices frequent or rare (in this sense)?
  2. Time the computer at solving ten 5 \! \times \! 5 arrays of random numbers. Find the average time. (Notice that some systems can be found to be singular quite quickly, for instance if the first row equals the second. In the light of the first part, do you expect that singular systems play a large role in your average?)
  3. Repeat the prior item for 15 \! \times \! 15 arrays.
  4. Repeat the prior item for 25 \! \times \! 25 arrays.
  5. Repeat the prior item for 35 \! \times \! 35 arrays.
  6. Graph the input size versus the average time.
Problem 2

What 10 \! \times \! 10 array can you invent that takes your computer system the longest to reduce? The shortest?

Problem 3

Write the rest of the FORTRAN program to do a straightforward implementation of Gauss' method. Compare the speed of your code to that used in a computer algebra system. Which is faster? (Most computer algebra systems will apply some of the techniques of matrix algebra that we will have later, in Chapter Three.)

Problem 4

Extend the code fragment to handle the case where the B array has more than one column. That solves more than one system at a time (all with the same matrix of coefficients A).

Problem 5

The FORTRAN language specification requires that arrays be stored "by column", that is, the entire first column is stored contiguously, then the second column, etc. Does the code fragment given take advantage of this, or can it be rewritten to make it faster (by taking advantage of the fact that computer fetches are faster from contiguous locations)?

Problem 6

Estimate the running time of Gauss-Jordan reduction. Test your estimate by implementing Gauss-Jordan reduction in a computer language, and running it on 5 \! \times \! 5, 15 \! \times \! 15, and 25 \! \times \! 25 matrices of random entries.



Chapter II - Vector Spaces

The first chapter began by introducing Gauss' method and finished with a fair understanding, keyed on the Linear Combination Lemma, of how it finds the solution set of a linear system. Gauss' method systematically takes linear combinations of the rows. With that insight, we now move to a general study of linear combinations.

We need a setting for this study. At times in the first chapter, we've combined vectors from \mathbb{R}^2, at other times vectors from \mathbb{R}^3, and at other times vectors from even higher-dimensional spaces. Thus, our first impulse might be to work in \mathbb{R}^n, leaving n unspecified. This would have the advantage that any of the results would hold for \mathbb{R}^2 and for \mathbb{R}^3 and for many other spaces, simultaneously.

But, if having the results apply to many spaces at once is advantageous then sticking only to \mathbb{R}^n's is overly restrictive. We'd like the results to also apply to combinations of row vectors, as in the final section of the first chapter. We've even seen some spaces that are not just a collection of all of the same-sized column vectors or row vectors. For instance, we've seen a solution set of a homogeneous system that is a plane, inside of \mathbb{R}^3. This solution set is a closed system in the sense that a linear combination of these solutions is also a solution. But it is not just a collection of all of the three-tall column vectors; only some of them are in this solution set.

We want the results about linear combinations to apply anywhere that linear combinations are sensible. We shall call any such set a vector space. Our results, instead of being phrased as "Whenever we have a collection in which we can sensibly take linear combinations ...", will be stated as "In any vector space ...".

Such a statement describes at once what happens in many spaces. The step up in abstraction from studying a single space at a time to studying a class of spaces can be hard to make. To understand its advantages, consider this analogy. Imagine that the government made laws one person at a time: "Leslie Jones can't jaywalk." That would be a bad idea; statements have the virtue of economy when they apply to many cases at once. Or, suppose that it ruled, "Kim Ke must stop when passing the scene of an accident." Contrast that with, "Any doctor must stop when passing the scene of an accident." More general statements, in some ways, are clearer.

Section I - Definition

We shall study structures with two operations, an addition and a scalar multiplication, that are subject to some simple conditions. We will reflect more on the conditions later, but on first reading notice how reasonable they are. For instance, surely any operation that can be called an addition (e.g., column vector addition, row vector addition, or real number addition) will satisfy all the conditions in Definition 1.1 below.


1 - Definition and Examples

Definition 1.1

A vector space (over  \mathbb{R} ) consists of a set  V along with two operations "+" and " \cdot " subject to these conditions.

  1. For any  \vec{v},\vec{w}\in V , their vector sum  \vec{v}+\vec{w} is an element of  V .
  2. If \vec{v},\vec{w}\in V , then  \vec{v}+\vec{w}=\vec{w}+\vec{v} .
  3. For any \vec{u},\vec{v},\vec{w}\in V ,  (\vec{v}+\vec{w})+\vec{u}=\vec{v}+(\vec{w}+\vec{u}) .
  4. There is a zero vector  \vec{0}\in V such that  \vec{v}+\vec{0}=\vec{v}\, for all  \vec{v}\in V.
  5. Each  \vec{v}\in V has an additive inverse  \vec{w}\in V such that  \vec{w}+\vec{v}=\vec{0} .
  6. If  r is a scalar, that is, a member of  \mathbb{R} and  \vec{v}\in V then the scalar multiple  r\cdot\vec{v} is in  V .
  7. If  r,s\in\mathbb{R} and  \vec{v}\in V then  (r+s)\cdot\vec{v}=r\cdot\vec{v}+s\cdot\vec{v} .
  8. If  r\in\mathbb{R} and  \vec{v},\vec{w}\in V , then  r\cdot(\vec{v}+\vec{w})=r\cdot\vec{v}+r\cdot\vec{w} .
  9. If  r,s\in\mathbb{R} and  \vec{v}\in V, then  (rs)\cdot\vec{v} =r\cdot(s\cdot\vec{v})
  10. For any  \vec{v}\in V ,  1\cdot\vec{v}=\vec{v} .
Remark 1.2

Because it involves two kinds of addition and two kinds of multiplication, that definition may seem confused. For instance, in condition 7 " (r+s)\cdot\vec{v}=r\cdot\vec{v}+s\cdot\vec{v}\, ", the first "+" is the real number addition operator while the "+" to the right of the equals sign represents vector addition in the structure  V . These expressions aren't ambiguous because, e.g.,  r and  s are real numbers so " r+s " can only mean real number addition.

The best way to go through the examples below is to check all ten conditions in the definition. That check is written out at length in the first example. Use it as a model for the others. Especially important are the first condition " \vec{v}+\vec{w} is in  V " and the sixth condition " r\cdot\vec{v} is in  V ". These are the closure conditions. They specify that the addition and scalar multiplication operations are always sensible— they are defined for every pair of vectors, and every scalar and vector, and the result of the operation is a member of the set (see Example 1.4).

Example 1.3

The set  \mathbb{R}^2 is a vector space if the operations " + " and " \cdot " have their usual meaning.


\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
+
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
=
\begin{pmatrix} x_1+y_1 \\ x_2+y_2 \end{pmatrix}
\qquad
r\cdot
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} rx_1 \\ rx_2 \end{pmatrix}

We shall check all of the conditions.

The first five conditions have to do with addition. For condition 1, closure of addition, note that for any  v_1,v_2,w_1,w_2\in\mathbb{R} the result of the sum



\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
+\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
=\begin{pmatrix} v_1+w_1 \\ v_2+w_2 \end{pmatrix}


is a column array with two real entries, and so is in  \mathbb{R}^2 . For 2, that addition of vectors commutes, take all entries to be real numbers and compute


\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
+\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
=\begin{pmatrix} v_1+w_1 \\ v_2+w_2 \end{pmatrix}
=\begin{pmatrix} w_1+v_1 \\ w_2+v_2 \end{pmatrix}
=\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
+\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}

(the second equality follows from the fact that the components of the vectors are real numbers, and the addition of real numbers is commutative). Condition 3, associativity of vector addition, is similar.

\begin{array}{rl}
(\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
+\begin{pmatrix} w_1 \\ w_2 \end{pmatrix})
+\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
&=\begin{pmatrix} (v_1+w_1)+u_1 \\ (v_2+w_2)+u_2 \end{pmatrix} \\
&=\begin{pmatrix} v_1+(w_1+u_1) \\ v_2+(w_2+u_2) \end{pmatrix} \\
&=\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
+(\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
+\begin{pmatrix} u_1 \\ u_2 \end{pmatrix})
\end{array}

For the fourth condition we must produce a zero element— the vector of zeroes is it.


\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
+\begin{pmatrix} 0 \\ 0 \end{pmatrix}
=\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}

For 5, to produce an additive inverse, note that for any v_1,v_2\in\mathbb{R} we have


\begin{pmatrix} -v_1 \\ -v_2 \end{pmatrix}
+\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \end{pmatrix}

so the first vector is the desired additive inverse of the second.

The checks for the five conditions having to do with scalar multiplication are just as routine. For 6, closure under scalar multiplication, where r, v_1, v_2 \in \mathbb{R},



r\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=\begin{pmatrix} rv_1 \\ rv_2 \end{pmatrix}

is a column array with two real entries, and so is in  \mathbb{R}^2 . Next, this checks 7.


(r+s)\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=\begin{pmatrix} (r+s)v_1 \\ (r+s)v_2 \end{pmatrix}
=\begin{pmatrix} rv_1+sv_1 \\ rv_2+sv_2 \end{pmatrix}
=r\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}+s\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}

For 8, that scalar multiplication distributes from the left over vector addition, we have this.


r\cdot(\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}+\begin{pmatrix} w_1 \\ w_2 \end{pmatrix})
=\begin{pmatrix} r(v_1+w_1) \\ r(v_2+w_2) \end{pmatrix}
=\begin{pmatrix} rv_1+rw_1 \\ rv_2+rw_2 \end{pmatrix}
=r\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}+r\cdot\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}

The ninth


(rs)\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=\begin{pmatrix} (rs)v_1 \\ (rs)v_2 \end{pmatrix}
=\begin{pmatrix} r(sv_1) \\ r(sv_2) \end{pmatrix}
=r\cdot(s\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix})

and tenth conditions are also straightforward.


1\cdot\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=\begin{pmatrix} 1v_1 \\ 1v_2 \end{pmatrix}
=\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}

In a similar way, each  \mathbb{R}^n is a vector space with the usual operations of vector addition and scalar multiplication. (In  \mathbb{R}^1 , we usually do not write the members as column vectors, i.e., we usually do not write " (\pi) ". Instead we just write " \pi ".)

Example 1.4
This subset of  \mathbb{R}^3 that is a plane through the origin

P=\{ \begin{pmatrix} x \\ y \\ z \end{pmatrix} \,\big|\, x+y+z=0\}

is a vector space if "+" and "\cdot" are interpreted in this way.


\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}
+
\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=
\begin{pmatrix} x_1+x_2 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}
\qquad
r\cdot
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} rx \\ ry \\ rz \end{pmatrix}

The addition and scalar multiplication operations here are just the ones of  \mathbb{R}^3 , reused on its subset P. We say that  P inherits these operations from  \mathbb{R}^3 . This example of an addition in P


\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}+\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}=\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}

illustrates that P is closed under addition. We've added two vectors from P— that is, with the property that the sum of their three entries is zero— and the result is a vector also in P. Of course, this example of closure is not a proof of closure. To prove that P is closed under addition, take two elements of P


\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} \quad \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}

(membership in P means that x_1+y_1+z_1=0 and x_2+y_2+z_2=0), and observe that their sum


\begin{pmatrix} x_1+x_2 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}

is also in P since its entries add to zero: (x_1+x_2)+(y_1+y_2)+(z_1+z_2)=(x_1+y_1+z_1)+(x_2+y_2+z_2)=0+0=0. To show that  P is closed under scalar multiplication, start with a vector from P


\begin{pmatrix} x \\ y \\ z \end{pmatrix}

(so that  x+y+z=0 ) and then for  r\in\mathbb{R} observe that the scalar multiple


r\cdot\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} rx \\ ry \\ rz \end{pmatrix}

satisfies that  rx+ry+rz=r(x+y+z)=0 . Thus the two closure conditions are satisfied. Verification of the other conditions in the definition of a vector space is just as straightforward.
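
For instance, for the fourth condition note that the zero vector of  \mathbb{R}^3 is itself a member of  P , since its entries add to  0+0+0=0 , and it acts on  P as an additive identity. For the fifth, the additive inverse of a member of  P is again in  P

\begin{pmatrix} -x \\ -y \\ -z \end{pmatrix}\in P
\quad\text{since}\quad
(-x)+(-y)+(-z)=-(x+y+z)=0

while conditions such as commutativity and associativity hold in  P simply because they hold in all of  \mathbb{R}^3 .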

Example 1.5

Example 1.3 shows that the set of all two-tall vectors with real entries is a vector space. Example 1.4 gives a subset of an \mathbb{R}^n that is also a vector space. In contrast with those two, consider the set of two-tall columns with entries that are integers (under the obvious operations). This is a subset of a vector space, but it is not itself a vector space. The reason is that this set is not closed under scalar multiplication, that is, it does not satisfy condition 6. Here is a column with integer entries, and a scalar, such that the outcome of the operation


0.5
\cdot
\begin{pmatrix} 4 \\ 3 \end{pmatrix}
=
\begin{pmatrix} 2 \\ 1.5 \end{pmatrix}

is not a member of the set, since its entries are not all integers.

Example 1.6

The singleton set


\{ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \}

is a vector space under the operations


\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
+
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
\qquad
r\cdot
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}

that it inherits from  \mathbb{R}^4 .

A vector space must have at least one element, its zero vector. Thus a one-element vector space is the smallest one possible.

Definition 1.7

A one-element vector space is a trivial space.

Warning!

The examples so far involve sets of column vectors with the usual operations. But vector spaces need not be collections of column vectors, or even of row vectors. Below are some other types of vector spaces. The term "vector space" does not mean "collection of columns of reals". It means something more like "collection in which any linear combination is sensible".

Examples

Example 1.8

Consider  \mathcal{P}_3=\{a_0+a_1x+a_2x^2+a_3x^3\,\big|\, a_0,\ldots,a_3\in\mathbb{R}\} , the set of polynomials of degree three or less (in this book, we'll take constant polynomials, including the zero polynomial, to be of degree zero). It is a vector space under the operations


(a_0+a_1x+a_2x^2+a_3x^3)+(b_0+b_1x+b_2x^2+b_3x^3)

=(a_0+b_0)+(a_1+b_1)x+(a_2+b_2)x^2+(a_3+b_3)x^3

and


r\cdot(a_0+a_1x+a_2x^2+a_3x^3)=(ra_0)+(ra_1)x+(ra_2)x^2+(ra_3)x^3

(the verification is easy). This vector space is worthy of attention because these are the polynomial operations familiar from high school algebra. For instance, 3\cdot(1-2x+3x^2-4x^3)-2\cdot(2-3x+x^2-(1/2)x^3)=-1+7x^2-11x^3.

Although this space is not a subset of any  \mathbb{R}^n , there is a sense in which we can think of \mathcal{P}_3 as "the same" as  \mathbb{R}^4 . If we identify these two spaces' elements in this way



a_0+a_1x+a_2x^2+a_3x^3
\quad\text{corresponds to}\quad
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix}

then the operations also correspond. Here is an example of corresponding additions.


\begin{array}{lr}
&1-2x+0x^2+1x^3 \\
+ &2+3x+7x^2-4x^3 \\ \hline
&3+1x+7x^2-3x^3
\end{array}
\quad\text{corresponds to}\quad
\begin{pmatrix} 1 \\ -2 \\ 0 \\ 1 \end{pmatrix}
+
\begin{pmatrix} 2 \\ 3 \\ 7 \\ -4 \end{pmatrix}
=
\begin{pmatrix} 3 \\ 1 \\ 7 \\ -3 \end{pmatrix}

Things we are thinking of as "the same" add to "the same" sum. Chapter Three makes precise this idea of vector space correspondence. For now we shall just leave it as an intuition.

Example 1.9

The set  \mathcal{M}_{2 \! \times \! 2} of  2 \! \times \! 2 matrices with real number entries is a vector space under the natural entry-by-entry operations.



\begin{pmatrix}
a  &b \\
c  &d
\end{pmatrix}
+
\begin{pmatrix}
w  &x \\
y  &z
\end{pmatrix}
=
\begin{pmatrix}
a+w  &b+x \\
c+y  &d+z
\end{pmatrix}
\qquad
r\cdot
\begin{pmatrix}
a  &b \\
c  &d
\end{pmatrix}
=
\begin{pmatrix}
ra  &rb \\
rc  &rd
\end{pmatrix}


As in the prior example, we can think of this space as "the same" as  \mathbb{R}^4 .

Example 1.10

The set  \{f\,\big|\, f:\mathbb{N}\to\mathbb{R}\} of all real-valued functions of one natural number variable is a vector space under the operations


(f_1+f_2)\,(n)=f_1(n)+f_2(n)
\qquad
(r\cdot f)\,(n)=r\,f(n)

so that if, for example,  f_1(n)=n^2+2\sin(n) and  f_2(n)=-\sin(n)+0.5 then  (f_1+2f_2)\,(n)=n^2+1 .

We can view this space as a generalization of Example 1.3— instead of 2-tall vectors, these functions are like infinitely-tall vectors.


\begin{array}{c|c}
   n   & f(n)=n^2+1 \\ \hline
   0   &  1     \\
   1   &  2     \\
   2   &  5     \\
   3   &  10    \\
\vdots & \vdots \\
\end{array} 
\quad\text{corresponds to}\quad
\begin{pmatrix}
1           \\
2           \\
5           \\
10          \\
\vdots       \end{pmatrix}

Addition and scalar multiplication are component-wise, as in Example 1.3. (We can formalize "infinitely-tall" by saying that it means an infinite sequence, or that it means a function from \mathbb{N} to \mathbb{R}.)

Example 1.11

The set of polynomials with real coefficients


\{ a_0+a_1x+\cdots+a_nx^n\,\big|\, n\in\mathbb{N} \text{ and } a_0,\ldots,a_n\in\mathbb{R}\}

makes a vector space when given the natural "+"


(a_0+a_1x+\cdots+a_nx^n)+(b_0+b_1x+\cdots+b_nx^n)

=(a_0+b_0)+(a_1+b_1)x+\cdots +(a_n+b_n)x^n

and "\cdot".


r\cdot (a_0+a_1x+\cdots+a_nx^n)=(ra_0)+(ra_1)x+\cdots+(ra_n)x^n

This space differs from the space \mathcal{P}_3 of Example 1.8. This space contains not just degree three polynomials, but degree thirty polynomials and degree three hundred polynomials, too. Each individual polynomial of course is of a finite degree, but the set has no single bound on the degree of all of its members.

This example, like the prior one, can be thought of in terms of infinite-tuples. For instance, we can think of  1+3x+5x^2 as corresponding to  (1,3,5,0,0,\ldots) . However, don't confuse this space with the one from Example 1.10. Each member of this set has a bounded degree, so under our correspondence there are no elements from this space matching  (1,2,5,10,\,\ldots\,) . The vectors in this space correspond to infinite-tuples that end in zeroes.

Example 1.12

The set  \{f\,\big|\, f:\mathbb{R}\to\mathbb{R}\} of all real-valued functions of one real variable is a vector space under these.


(f_1+f_2)\,(x)=f_1(x)+f_2(x)
\qquad
(r\cdot f)\,(x)=r\,f(x)

The difference between this and Example 1.10 is the domain of the functions.

Example 1.13

The set  F=\{ a\cos\theta+b\sin\theta \,\big|\, a,b\in\mathbb{R}\} of real-valued functions of the real variable  \theta is a vector space under the operations


(a_1\cos\theta+b_1\sin\theta)+(a_2\cos\theta+b_2\sin\theta) =(a_1+a_2)\cos\theta+(b_1+b_2)\sin\theta

and


r\cdot (a\cos\theta+b\sin\theta)=(ra)\cos\theta+(rb)\sin\theta

inherited from the space in the prior example. (We can think of  F as "the same" as  \mathbb{R}^2 in that a\cos\theta+b\sin\theta corresponds to the vector with components a and b.)

Example 1.14

The set


\{f:\mathbb{R}\to\mathbb{R} \,\big|\, \dfrac{d^2f}{dx^2}+f=0\}

is a vector space under the, by now natural, interpretation.


(f+g)\,(x)=f(x)+g(x)
\qquad
(r\cdot f)\,(x)=r\,f(x)

In particular, notice that closure is a consequence:


\frac{d^2(f+g)}{dx^2}+(f+g)
=(\frac{d^2f}{dx^2}+f)+(\frac{d^2g}{dx^2}+g)

and


\frac{d^2(rf)}{dx^2}+(rf)
=r(\frac{d^2 f}{dx^2}+f)


of basic Calculus. This turns out to equal the space from the prior example— functions satisfying this differential equation have the form a\cos\theta+b\sin\theta— but this description suggests an extension to solution sets of other differential equations.

Example 1.15

The set of solutions of a homogeneous linear system in  n variables is a vector space under the operations inherited from  \mathbb{R}^n . For closure under addition, if


\vec{v}=\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}
\qquad
\vec{w}=\begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix}

both satisfy each equation of the system, then so does  \vec{v}+\vec{w} ; for a typical equation  c_1x_1+\cdots+c_nx_n=0 the check is that  c_1(v_1+w_1)+\cdots+c_n(v_n+w_n) =(c_1v_1+\cdots+c_nv_n)+(c_1w_1+\cdots+c_nw_n) =0+0=0 . The checks of the other conditions are just as routine.

As we've done in those equations, we often omit the multiplication symbol " \cdot ". We can distinguish the multiplication in " c_1v_1 " from that in " r\vec{v}\, " since if both multiplicands are real numbers then real-real multiplication must be meant, while if one is a vector then scalar-vector multiplication must be meant.

The prior example has brought us full circle since it is one of our motivating examples.

Remark 1.16

Now, with some feel for the kinds of structures that satisfy the definition of a vector space, we can reflect on that definition. For example, why specify in the definition the condition that  1\cdot\vec{v}=\vec{v} but not a condition that  0\cdot\vec{v}=\vec{0} ?

One answer is that this is just a definition— it gives the rules of the game from here on, and if you don't like it, put the book down and walk away.

Another answer is perhaps more satisfying. People in this area have worked hard to develop the right balance of power and generality. This definition has been shaped so that it contains the conditions needed to prove all of the interesting and important properties of spaces of linear combinations. As we proceed, we shall derive all of the properties natural to collections of linear combinations from the conditions given in the definition.

The next result is an example. We do not need to include these properties in the definition of vector space because they follow from the properties already listed there.

Lemma 1.17

In any vector space  V , for any  \vec{v}\in V and  r\in\mathbb{R} , we have

  1.  0\cdot\vec{v}=\vec{0} , and
  2.  (-1\cdot\vec{v})+\vec{v}=\vec{0} , and
  3.  r\cdot\vec{0}=\vec{0} .
Proof

For 1, note that  \vec{v}=(1+0)\cdot\vec{v}=\vec{v}+(0\cdot\vec{v}) . Add to both sides the additive inverse of  \vec{v} , the vector  \vec{w} such that  \vec{w}+\vec{v}=\vec{0} .

\begin{array}{rl}
\vec{w}+\vec{v}
&=\vec{w}+\vec{v}+0\cdot\vec{v} \\
\vec{0}
&=\vec{0}+0\cdot\vec{v} \\
\vec{0}
&=0\cdot\vec{v}
\end{array}


The second item is easy:  (-1\cdot\vec{v})+\vec{v}=(-1+1)\cdot\vec{v}=0\cdot\vec{v}=\vec{0} shows that we can write " -\vec{v}\, " for the additive inverse of  \vec{v} without worrying about possible confusion with  (-1)\cdot\vec{v} .

For 3, this  r\cdot\vec{0}=r\cdot(0\cdot\vec{0})=(r\cdot 0)\cdot\vec{0}=\vec{0} will do.

Summary

We finish with a recap.

Our study in Chapter One of Gaussian reduction led us to consider collections of linear combinations. So in this chapter we have defined a vector space to be a structure in which we can form such combinations, expressions of the form  c_1\cdot\vec{v}_1+\dots+c_n\cdot\vec{v}_n (subject to simple conditions on the addition and scalar multiplication operations). In a phrase: vector spaces are the right context in which to study linearity.

Finally, a comment. From the fact that it forms a whole chapter, and especially because that chapter is the first one, a reader could come to think that the study of linear systems is our purpose. The truth is, we will not so much use vector spaces in the study of linear systems as we will instead have linear systems start us on the study of vector spaces. The wide variety of examples from this subsection shows that the study of vector spaces is interesting and important in its own right, aside from how it helps us understand linear systems. Linear systems won't go away. But from now on our primary objects of study will be vector spaces.

Exercises

Problem 1

Name the zero vector for each of these vector spaces.

  1. The space of degree three polynomials under the natural operations
  2. The space of  2 \! \times \! 4 matrices
  3. The space  \{f:[0,1]\to\mathbb{R}\,\big|\, f\text{ is continuous}\}
  4. The space of real-valued functions of one natural number variable
This exercise is recommended for all readers.
Problem 2

Find the additive inverse, in the vector space, of the vector.

  1. In  \mathcal{P}_3 , the vector  -3-2x+x^2 .
  2. In the space  \mathcal{M}_{2 \! \times \! 2} of  2 \! \times \! 2 matrices,
    
\begin{pmatrix}
1  &-1  \\
0  &3
\end{pmatrix}.
  3. In  \{ae^x+be^{-x}\,\big|\, a,b\in\mathbb{R}\} , the space of functions of the real variable  x under the natural operations, the vector  3e^x-2e^{-x} .
This exercise is recommended for all readers.
Problem 3

Show that each of these is a vector space.

  1. The set of linear polynomials  \mathcal{P}_1=\{a_0+a_1x\,\big|\, a_0,a_1\in\mathbb{R}\} under the usual polynomial addition and scalar multiplication operations.
  2. The set of  2 \! \times \! 2 matrices with real entries under the usual matrix operations.
  3. The set of three-component row vectors with their usual operations.
  4. The set
    
L=\{\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}\in\mathbb{R}^4\,\big|\, x+y-z+w=0\}
    under the operations inherited from \mathbb{R}^4.
This exercise is recommended for all readers.
Problem 4

Show that each of these is not a vector space. (Hint. Start by listing two members of each set.)

  1. Under the operations inherited from  \mathbb{R}^3 , this set
    
\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\in\mathbb{R}^3\,\big|\, x+y+z=1\}
  2. Under the operations inherited from  \mathbb{R}^3 , this set
    
\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\in\mathbb{R}^3\,\big|\, x^2+y^2+z^2=1\}
  3. Under the usual matrix operations,
    
\{\begin{pmatrix}
a  &1  \\
b  &c
\end{pmatrix} \,\big|\, a,b,c\in\mathbb{R}\}
  4. Under the usual polynomial operations,
    
\{a_0+a_1x+a_2x^2\,\big|\, a_0,a_1,a_2\in\mathbb{R}^+\}
    where \mathbb{R}^+ is the set of reals greater than zero
  5. Under the inherited operations,
    
\{\begin{pmatrix} x \\ y \end{pmatrix}\in\mathbb{R}^2\,\big|\,
x+3y=4 \text{ and } 2x-y=3 \text{ and } 6x+4y=10\}
Problem 5

Define addition and scalar multiplication operations to make the complex numbers a vector space over  \mathbb{R} .

This exercise is recommended for all readers.
Problem 6

Is the set of rational numbers a vector space over  \mathbb{R} under the usual addition and scalar multiplication operations?

Problem 7

Show that the set of linear combinations of the variables  x,y,z is a vector space under the natural addition and scalar multiplication operations.

Problem 8

Prove that this is not a vector space: the set of two-tall column vectors with real entries subject to these operations.


\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}
+\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}
=\begin{pmatrix} x_1-x_2 \\ y_1-y_2 \end{pmatrix}
\qquad
r\cdot\begin{pmatrix} x \\ y \end{pmatrix}
=\begin{pmatrix} rx \\ ry \end{pmatrix}
Problem 9

Prove or disprove that  \mathbb{R}^3 is a vector space under these operations.

  1. 
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}
+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\quad\text{and}\quad
r\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=\begin{pmatrix} rx \\ ry \\ rz \end{pmatrix}
  2. 
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}
+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\quad\text{and}\quad
r\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
This exercise is recommended for all readers.
Problem 10

For each, decide if it is a vector space; the intended operations are the natural ones.

  1. The diagonal  2 \! \times \! 2 matrices
    
\{\begin{pmatrix}
a  &0  \\
0  &b
\end{pmatrix}\,\big|\, a,b\in\mathbb{R}\}
  2. This set of  2 \! \times \! 2 matrices
    
\{\begin{pmatrix}
x    &x+y  \\
x+y  &y
\end{pmatrix}\,\big|\, x,y\in\mathbb{R}\}
  3. This set
    
\{\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}\in\mathbb{R}^4
\,\big|\, x+y+w=1\}
  4. The set of functions  \{f:\mathbb{R}\to\mathbb{R}\,\big|\, df/dx+2f=0\}
  5. The set of functions  \{f:\mathbb{R}\to\mathbb{R}\,\big|\, df/dx+2f=1\}
This exercise is recommended for all readers.
Problem 11

Prove or disprove that this is a vector space: the real-valued functions  f of one real variable such that  f(7)=0 .

This exercise is recommended for all readers.
Problem 12

Show that the set  \mathbb{R}^+ of positive reals is a vector space when " x+y " is interpreted to mean the product of  x and  y (so that  2+3 is  6 ), and " r\cdot x " is interpreted as the  r -th power of  x .

Problem 13

Is  \{(x,y)\,\big|\, x,y\in\mathbb{R}\} a vector space under these operations?

  1.  (x_1,y_1)+(x_2,y_2)=(x_1+x_2,y_1+y_2) and  r\cdot (x,y)=(rx,y)
  2.  (x_1,y_1)+(x_2,y_2)=(x_1+x_2,y_1+y_2) and  r\cdot (x,y)=(rx,0)
Problem 14

Prove or disprove that this is a vector space: the set of polynomials of degree greater than or equal to two, along with the zero polynomial.

Problem 15

At this point "the same" is only an intuition, but nonetheless for each vector space identify the  k for which the space is "the same" as  \mathbb{R}^k .

  1. The  2 \! \times \! 3 matrices under the usual operations
  2. The  n \! \times \! m matrices (under their usual operations)
  3. This set of  2 \! \times \! 2 matrices
    
\{\begin{pmatrix}
a &0 \\
b &c
\end{pmatrix} \,\big|\, a,b,c\in\mathbb{R}\}
  4. This set of  2 \! \times \! 2 matrices
    
\{\begin{pmatrix}
a  &0  \\
b  &c
\end{pmatrix} \,\big|\, a+b+c=0\}
This exercise is recommended for all readers.
Problem 16

Using  \vec{+} to represent vector addition and  \,\vec{\cdot}\, for scalar multiplication, restate the definition of vector space.

This exercise is recommended for all readers.
Problem 17

Prove these.

  1. Any vector is the additive inverse of the additive inverse of itself.
  2. Vector addition left-cancels: if  \vec{v},\vec{s},\vec{t}\in V then  \vec{v}+\vec{s}=\vec{v}+\vec{t}\, implies that  \vec{s}=\vec{t} .
Problem 18

The definition of vector spaces does not explicitly say that  \vec{0}+\vec{v}=\vec{v} (it instead says that  \vec{v}+\vec{0}=\vec{v} ). Show that it must nonetheless hold in any vector space.

This exercise is recommended for all readers.
Problem 19

Prove or disprove that this is a vector space: the set of all matrices, under the usual operations.

Problem 20

In a vector space every element has an additive inverse. Can some elements have two or more?

Problem 21
  1. Prove that every point, line, or plane through the origin in  \mathbb{R}^3 is a vector space under the inherited operations.
  2. What if it doesn't contain the origin?
This exercise is recommended for all readers.
Problem 22

Using the idea of a vector space we can easily reprove that the solution set of a homogeneous linear system has either one element or infinitely many elements. Assume that  \vec{v}\in V is not  \vec{0} .

  1. Prove that  r\cdot\vec{v}=\vec{0} if and only if  r=0 .
  2. Prove that  r_1\cdot\vec{v}=r_2\cdot\vec{v} if and only if  r_1=r_2 .
  3. Prove that any nontrivial vector space is infinite.
  4. Use the fact that a nonempty solution set of a homogeneous linear system is a vector space to draw the conclusion.
Problem 23

Is this a vector space under the natural operations: the real-valued functions of one real variable that are differentiable?

Problem 24

A vector space over the complex numbers \mathbb{C} has the same definition as a vector space over the reals except that scalars are drawn from  \mathbb{C} instead of from  \mathbb{R} . Show that each of these is a vector space over the complex numbers. (Recall how complex numbers add and multiply:  (a_0+a_1i)+(b_0+b_1i)=(a_0+b_0)+(a_1+b_1)i and  (a_0+a_1i)(b_0+b_1i)=(a_0b_0-a_1b_1)+(a_0b_1+a_1b_0)i .)

  1. The set of degree two polynomials with complex coefficients
  2. This set
    
\{\begin{pmatrix}
0  &a  \\
b  &0
\end{pmatrix}\,\big|\, a,b\in\mathbb{C}\text{ and }
a+b=0+0i \}
Problem 25

Name a property shared by all of the  \mathbb{R}^n 's but not listed as a requirement for a vector space.

This exercise is recommended for all readers.
Problem 26
  1. Prove that a sum of four vectors  \vec{v}_1,\ldots,\vec{v}_4\in V can be associated in any way without changing the result.
    \begin{array}{rl}
((\vec{v}_1+\vec{v}_2)+\vec{v}_3)+\vec{v}_4
&=(\vec{v}_1+(\vec{v}_2+\vec{v}_3))+\vec{v}_4 \\
&=(\vec{v}_1+\vec{v}_2)+(\vec{v}_3+\vec{v}_4) \\
&=\vec{v}_1+((\vec{v}_2+\vec{v}_3)+\vec{v}_4) \\
&=\vec{v}_1+(\vec{v}_2+(\vec{v}_3+\vec{v}_4))
\end{array}
    This allows us to simply write " \vec{v}_1+\vec{v}_2+\vec{v}_3+\vec{v}_4 " without ambiguity.
  2. Prove that any two ways of associating a sum of any number of vectors give the same sum. (Hint. Use induction on the number of vectors.)
Problem 27

For any vector space, a subset that is itself a vector space under the inherited operations (e.g., a plane through the origin inside of  \mathbb{R}^3 ) is a subspace.

  1. Show that  \{a_0+a_1x+a_2x^2\,\big|\, a_0+a_1+a_2=0\} is a subspace of the vector space of degree two polynomials.
  2. Show that this is a subspace of the  2 \! \times \! 2 matrices.
    
\{\begin{pmatrix}
a  &b  \\
c  &0
\end{pmatrix} \,\big|\, a+b=0\}
  3. Show that a nonempty subset  S of a real vector space is a subspace if and only if it is closed under linear combinations of pairs of vectors: whenever  c_1,c_2\in\mathbb{R} and  \vec{s}_1,\vec{s}_2\in S then the combination  c_1\vec{s}_1+c_2\vec{s}_2 is in  S .


2 - Subspaces and Spanning sets

One of the examples that led us to introduce the idea of a vector space was the solution set of a homogeneous system. For instance, we've seen in Example 1.4 such a space that is a planar subset of \mathbb{R}^3. There, the vector space \mathbb{R}^3 contains inside it another vector space, the plane.

Definition 2.1

For any vector space, a subspace is a subset that is itself a vector space, under the inherited operations.

Example 2.2

The plane from the prior subsection,


P=\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, x+y+z=0\}

is a subspace of  \mathbb{R}^3 . As specified in the definition, the operations are the ones that are inherited from the larger space, that is, vectors add in P as they add in \mathbb{R}^3


\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=\begin{pmatrix} x_1+x_2 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}

and scalar multiplication is also the same as it is in \mathbb{R}^3. To show that P is a subspace, we need only note that it is a subset and then verify that it is a space. Checking that P satisfies the conditions in the definition of a vector space is routine. For instance, for closure under addition, just note that if the summands satisfy that x_1+y_1+z_1=0 and x_2+y_2+z_2=0 then the sum satisfies that (x_1+x_2)+(y_1+y_2)+(z_1+z_2)=(x_1+y_1+z_1)+(x_2+y_2+z_2)=0.

Example 2.3

The  x -axis in  \mathbb{R}^2 is a subspace where the addition and scalar multiplication operations are the inherited ones.


\begin{pmatrix} x_1 \\ 0 \end{pmatrix}
+
\begin{pmatrix} x_2 \\ 0 \end{pmatrix}
=
\begin{pmatrix} x_1+x_2 \\ 0 \end{pmatrix}
\qquad
r\cdot\begin{pmatrix} x \\ 0 \end{pmatrix}
=\begin{pmatrix} rx \\ 0 \end{pmatrix}

As above, to verify that this is a subspace, we simply note that it is a subset and then check that it satisfies the conditions in definition of a vector space. For instance, the two closure conditions are satisfied: (1) adding two vectors with a second component of zero results in a vector with a second component of zero, and (2) multiplying a scalar times a vector with a second component of zero results in a vector with a second component of zero.

Example 2.4

Another subspace of \mathbb{R}^2 is


\{\begin{pmatrix} 0 \\ 0 \end{pmatrix}\}

its trivial subspace.

Any vector space has a trivial subspace  \{\vec{0}\,\} . At the opposite extreme, any vector space has itself for a subspace. These two are the improper subspaces. Other subspaces are proper.

Example 2.5

The condition in the definition requiring that the addition and scalar multiplication operations must be the ones inherited from the larger space is important. Consider the subset  \{1\} of the vector space  \mathbb{R}^1 . Under the operations 1+1=1 and r\cdot 1=1 that set is a vector space, specifically, a trivial space. But it is not a subspace of  \mathbb{R}^1 because those aren't the inherited operations, since of course  \mathbb{R}^1 has  1+1=2 .

Example 2.6

All kinds of vector spaces, not just \mathbb{R}^n's, have subspaces. The vector space of cubic polynomials  \{a+bx+cx^2+dx^3\,\big|\, a,b,c,d\in\mathbb{R}\} has a subspace comprised of all linear polynomials  \{m+nx\,\big|\, m,n\in\mathbb{R}\} .
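
For instance, the closure conditions hold because a linear combination of two linear polynomials is again a linear polynomial.

c_1\cdot(m_1+n_1x)+c_2\cdot(m_2+n_2x)=(c_1m_1+c_2m_2)+(c_1n_1+c_2n_2)x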

Example 2.7

Another example of a subspace not taken from an \mathbb{R}^n is one from the examples following the definition of a vector space. The space of all real-valued functions of one real variable f:\mathbb{R}\to \mathbb{R} has a subspace of functions satisfying the restriction (d^2\,f/dx^2)+f=0.

Example 2.8

Being vector spaces themselves, subspaces must satisfy the closure conditions. The set  \mathbb{R}^+ is not a subspace of the vector space  \mathbb{R}^1 because with the inherited operations it is not closed under scalar multiplication: if  \vec{v}=1 then  -1\cdot\vec{v}\not\in\mathbb{R}^+ .

The next result says that Example 2.8 is prototypical. The only way that a subset can fail to be a subspace (if it is nonempty and the inherited operations are used) is if it isn't closed.

Lemma 2.9

For a nonempty subset  S of a vector space, under the inherited operations, the following are equivalent statements.[1]

  1.  S is a subspace of that vector space
  2.  S is closed under linear combinations of pairs of vectors: for any vectors  \vec{s}_1,\vec{s}_2\in S and scalars  r_1,r_2 the vector  r_1\vec{s}_1+r_2\vec{s}_2 is in  S
  3.  S is closed under linear combinations of any number of vectors: for any vectors  \vec{s}_1,\ldots,\vec{s}_n\in S and scalars  r_1, \ldots,r_n the vector  r_1\vec{s}_1+\cdots+r_n\vec{s}_n is in  S .

Briefly, the way that a subset gets to be a subspace is by being closed under linear combinations.

Proof

"The following are equivalent" means that each pair of statements are equivalent.


(1)\!\iff\!(2)
\qquad
(2)\!\iff\!(3)
\qquad
(3)\!\iff\!(1)

We will show this equivalence by establishing that  (1)\implies (3)\implies (2)\implies (1). This strategy is suggested by noticing that  (1)\implies (3) and  (3)\implies (2) are easy and so we need only argue the single implication  (2)\implies (1) .

For that argument, assume that  S is a nonempty subset of a vector space V and that S is closed under combinations of pairs of vectors. We will show that S is a vector space by checking the conditions.

The first five conditions in the definition of a vector space have to do with addition. First, for closure under addition, if  \vec{s}_1,\vec{s}_2\in S then  \vec{s}_1+\vec{s}_2\in S , as  \vec{s}_1+\vec{s}_2=1\cdot\vec{s}_1+1\cdot\vec{s}_2 . Second, for any  \vec{s}_1,\vec{s}_2\in S , because addition is inherited from  V , the sum  \vec{s}_1+\vec{s}_2 in  S equals the sum  \vec{s}_1+\vec{s}_2 in  V , and that equals the sum  \vec{s}_2+\vec{s}_1 in  V (because V is a vector space, its addition is commutative), and that in turn equals the sum  \vec{s}_2+\vec{s}_1 in  S . The argument for the third condition is similar to that for the second. For the fourth, consider the zero vector of  V and note that closure of S under linear combinations of pairs of vectors gives that (where  \vec{s} is any member of the nonempty set  S )  0\cdot\vec{s}+0\cdot\vec{s}=\vec{0} is in S; showing that  \vec{0} acts under the inherited operations as the additive identity of  S is easy. The fifth condition is satisfied because for any  \vec{s}\in S , closure under linear combinations shows that the vector  0\cdot\vec{0}+(-1)\cdot\vec{s} is in  S ; showing that it is the additive inverse of  \vec{s} under the inherited operations is routine.

The checks for the five conditions having to do with scalar multiplication are similar and are saved for Problem 14.

We usually show that a subset is a subspace with  (2)\implies (1) .

Remark 2.10

At the start of this chapter we introduced vector spaces as collections in which linear combinations are "sensible". The above result speaks to this.

The vector space definition has ten conditions but eight of them— the conditions not about closure— simply ensure that referring to the operations as an "addition" and a "scalar multiplication" is sensible. The proof above checks that these eight are inherited from the surrounding vector space provided that the nonempty set S satisfies Lemma 2.9's statement (2) (e.g., commutativity of addition in S follows right from commutativity of addition in V). So, in this context, this meaning of "sensible" is automatically satisfied.

In assuring us that this first meaning of the word is met, the result draws our attention to the second meaning of "sensible". It has to do with the two remaining conditions, the closure conditions. Above, the two separate closure conditions inherent in statement (1) are combined in statement (2) into the single condition of closure under all linear combinations of two vectors, which is then extended in statement (3) to closure under combinations of any number of vectors. The latter two statements say that we can always make sense of an expression like r_1\vec{s}_1+r_2\vec{s}_2, without restrictions on the r's— such expressions are "sensible" in that the vector described is defined and is in the set S.

This second meaning suggests that a good way to think of a vector space is as a collection of unrestricted linear combinations. The next two examples take some spaces and describe them in this way. That is, in these examples we parametrize, just as we did in Chapter One to describe the solution set of a homogeneous linear system.

Example 2.11

This subset of \mathbb{R}^3


S=\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, x-2y+z=0\}

is a subspace under the usual addition and scalar multiplication operations of column vectors (the check that it is nonempty and closed under linear combinations of two vectors is just like the one in Example 2.2). To parametrize, we can take x-2y+z=0 to be a one-equation linear system and express the leading variable in terms of the free variables: x=2y-z.


S
=\{\begin{pmatrix} 2y-z \\ y \\ z \end{pmatrix}\,\big|\, y,z\in\mathbb{R}\}
=\{y\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}+z\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}\,\big|\, y,z\in\mathbb{R}\}

Now the subspace is described as the collection of unrestricted linear combinations of those two vectors. Of course, in either description, this is a plane through the origin.

Example 2.12

This is a subspace of the  2 \! \times \! 2 matrices


L=\{\begin{pmatrix}
a  &0  \\
b  &c
\end{pmatrix}
\,\big|\, a+b+c=0\}

(checking that it is nonempty and closed under linear combinations is easy). To parametrize, express the condition as a=-b-c.


L
=\{\begin{pmatrix}
-b-c  &0  \\
b     &c
\end{pmatrix}
\,\big|\, b,c\in\mathbb{R}\}
=\{b\begin{pmatrix}
-1    &0  \\
1     &0
\end{pmatrix}
+c\begin{pmatrix}
-1    &0  \\
0     &1
\end{pmatrix}
\,\big|\, b,c\in\mathbb{R}\}

As above, we've described the subspace as a collection of unrestricted linear combinations (by coincidence, also of two elements).

Parametrization is an easy technique, but it is important. We shall use it often.

Definition 2.13

The span (or linear closure) of a nonempty subset  S of a vector space is the set of all linear combinations of vectors from  S .


[S] =\{ c_1\vec{s}_1+\cdots+c_n\vec{s}_n
\,\big|\, c_1,\ldots, c_n\in\mathbb{R}
\text{ and } \vec{s}_1,\ldots,\vec{s}_n\in S \}

The span of the empty subset of a vector space is the trivial subspace.

No notation for the span is completely standard. The square brackets used here are common, but so are "\mbox{span}(S)" and "\mbox{sp}(S)".

Remark 2.14

In Chapter One, after we showed that the solution set of a homogeneous linear system can be written as \{c_1\vec{\beta}_1+\cdots+c_k\vec{\beta}_k\,\big|\, c_1,\ldots,c_k\in\mathbb{R}\}, we described that as the set "generated" by the {\vec{\beta}}'s. We now have the technical term; we call that the "span" of the set \{\vec{\beta}_1,\ldots,\vec{\beta}_k\}.

Recall also the discussion of the "tricky point" in that proof. The span of the empty set is defined to be the set  \{\vec{0}\} because we follow the convention that a linear combination of no vectors sums to  \vec{0} . Besides, defining the empty set's span to be the trivial subspace is a convenience in that it keeps results like the next one from having annoying exceptional cases.

Lemma 2.15

In a vector space, the span of any subset is a subspace.

Proof

Call the subset  S . If  S is empty then by definition its span is the trivial subspace. If  S is not empty then by Lemma 2.9 we need only check that the span  [S] is closed under linear combinations. For a pair of vectors from that span,  \vec{v}=c_1\vec{s}_1+\cdots+c_n\vec{s}_n and  \vec{w}=c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m , a linear combination


p\cdot(c_1\vec{s}_1+\cdots+c_n\vec{s}_n)+
r\cdot(c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m)

=
pc_1\vec{s}_1+\cdots+pc_n\vec{s}_n
+rc_{n+1}\vec{s}_{n+1}+\cdots+rc_m\vec{s}_m

( p ,  r scalars) is a linear combination of elements of  S and so is in  [S] (possibly some of the \vec{s}_i's forming \vec{v} equal some of the \vec{s}_j's from \vec{w}, but it does not matter).

The converse of the lemma holds: any subspace is the span of some set, because a subspace is obviously the span of the set of its members. Thus a subset of a vector space is a subspace if and only if it is a span. This fits the intuition that a good way to think of a vector space is as a collection in which linear combinations are sensible.

Taken together, Lemma 2.9 and Lemma 2.15 show that the span of a subset S of a vector space is the smallest subspace containing all the members of S.

Example 2.16

In any vector space  V , for any vector  \vec{v} , the set  \{r\cdot\vec{v} \,\big|\, r\in\mathbb{R}\} is a subspace of  V . For instance, for any vector  \vec{v}\in\mathbb{R}^3 , the line through the origin containing that vector,  \{k\vec{v}\,\big|\, k\in\mathbb{R} \} is a subspace of  \mathbb{R}^3 . This is true even when \vec{v} is the zero vector, in which case the subspace is the degenerate line, the trivial subspace.
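
The check is short. By Lemma 2.9 we need only observe that the set is nonempty (it contains  0\cdot\vec{v} ) and that a linear combination of two multiples of  \vec{v} is another multiple of  \vec{v} .

r_1\cdot(k_1\vec{v})+r_2\cdot(k_2\vec{v})=(r_1k_1+r_2k_2)\cdot\vec{v}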

Example 2.17

The span of this set is all of \mathbb{R}^2.


\{\begin{pmatrix} 1 \\ 1 \end{pmatrix},\begin{pmatrix} 1 \\ -1 \end{pmatrix}\}

To check this we must show that any member of \mathbb{R}^2 is a linear combination of these two vectors. So we ask: for which vectors (with real components x and y) are there scalars c_1 and c_2 such that this holds?


c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix}+c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}=\begin{pmatrix} x \\ y \end{pmatrix}

Gauss' method

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
c_1  &+  &c_2  &=  &x  \\
c_1  &-  &c_2  &=  &y
\end{array}
&\xrightarrow[]{-\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
c_1  &+  &c_2    &=  &x  \\
&   &-2c_2  &=  &-x+y
\end{array}
\end{array}

with back substitution gives c_2=(x-y)/2 and c_1=(x+y)/2. These two equations show that for any x and y that we start with, there are appropriate coefficients c_1 and c_2 making the above vector equation true. For instance, for x=1 and y=2 the coefficients c_2=-1/2 and c_1=3/2 will do. That is, any vector in \mathbb{R}^2 can be written as a linear combination of the two given vectors.
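
Checking that instance is a quick confirmation.

(3/2)\begin{pmatrix} 1 \\ 1 \end{pmatrix}+(-1/2)\begin{pmatrix} 1 \\ -1 \end{pmatrix}
=\begin{pmatrix} 3/2-1/2 \\ 3/2+1/2 \end{pmatrix}
=\begin{pmatrix} 1 \\ 2 \end{pmatrix}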

Since spans are subspaces, and we know that a good way to understand a subspace is to parametrize its description, we can try to understand a set's span in that way.

Example 2.18

Consider, in  \mathcal{P}_2 , the span of the set  \{3x-x^2, 2x\} . By the definition of span, it is the set of unrestricted linear combinations of the two \{c_1(3x-x^2)+c_2(2x)\,\big|\, c_1,c_2\in\mathbb{R}\}. Clearly polynomials in this span must have a constant term of zero. Is that necessary condition also sufficient?

We are asking: for which members a_2x^2+a_1x+a_0 of \mathcal{P}_2 are there c_1 and c_2 such that a_2x^2+a_1x+a_0=c_1(3x-x^2)+c_2(2x)? Since polynomials are equal if and only if their coefficients are equal, we are looking for conditions on a_2, a_1, and a_0 satisfying these.


\begin{array}{*{2}{rc}r}
-c_1  &   &     &=  &a_2   \\
3c_1  &+  &2c_2 &=  &a_1   \\
&   &0    &=  &a_0                                   
\end{array}

Gauss' method gives that c_1=-a_2, c_2=(3/2)a_2+(1/2)a_1, and 0=a_0. Thus the only condition on polynomials in the span is the condition that we knew of: as long as a_0=0, we can give appropriate coefficients c_1 and c_2 exhibiting the polynomial a_0+a_1x+a_2x^2 as a member of the span. For instance, for the polynomial 0-4x+3x^2, the coefficients c_1=-3 and c_2=5/2 will do. So the span of the given set is \{a_1x+a_2x^2\,\big|\, a_1,a_2\in\mathbb{R}\}.

This shows, incidentally, that the set  \{x,x^2\} also spans this subspace. A space can have more than one spanning set. Two other sets spanning this subspace are  \{x,x^2,-x+2x^2\} and  \{x,x+x^2,x+2x^2,\ldots\,\} . (Naturally, we usually prefer to work with spanning sets that have only a few members.)

Example 2.19

These are the subspaces of  \mathbb{R}^3 that we now know of, the trivial subspace, the lines through the origin, the planes through the origin, and the whole space (of course, the picture shows only a few of the infinitely many subspaces). In the next section we will prove that \mathbb{R}^3 has no other type of subspaces, so in fact this picture shows them all.

[Figure: Linalg R3 subspaces.png, a diagram of the subspaces of  \mathbb{R}^3 , each described as a span and connected to its supersets.]

The subsets are described as spans of sets, using a minimal number of members, and are shown connected to their supersets. Note that these subspaces fall naturally into levels— planes on one level, lines on another, etc.— according to how many vectors are in a minimal-sized spanning set.

So far in this chapter we have seen that to study the properties of linear combinations, the right setting is a collection that is closed under these combinations. In the first subsection we introduced such collections, vector spaces, and we saw a great variety of examples. In this subsection we saw still more spaces, ones that happen to be subspaces of others. In all of the variety we've seen a commonality. Example 2.19 above brings it out: vector spaces and subspaces are best understood as a span, and especially as a span of a small number of vectors. The next section studies spanning sets that are minimal.

Exercises

This exercise is recommended for all readers.
Problem 1

Which of these subsets of the vector space of  2 \! \times \! 2 matrices are subspaces under the inherited operations? For each one that is a subspace, parametrize its description. For each that is not, give a condition that fails.

  1.  \{\begin{pmatrix}
a  &0  \\
0  &b
\end{pmatrix}  \,\big|\, a,b\in\mathbb{R}\}
  2.  \{\begin{pmatrix}
a  &0  \\
0  &b
\end{pmatrix}  \,\big|\, a+b=0\}
  3.  \{\begin{pmatrix}
a  &0  \\
0  &b
\end{pmatrix}  \,\big|\, a+b=5\}
  4.  \{\begin{pmatrix}
a  &c  \\
0  &b
\end{pmatrix}  \,\big|\, a+b=0, c\in\mathbb{R}\}
This exercise is recommended for all readers.
Problem 2

Is this a subspace of  \mathcal{P}_2 :  \{a_0+a_1x+a_2x^2\,\big|\, a_0+2a_1+a_2=4\} ? If it is then parametrize its description.

This exercise is recommended for all readers.
Problem 3

Decide if the vector lies in the span of the set, inside of the space.

  1.  \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} ,  \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}  \} , in  \mathbb{R}^3
  2.  x-x^3 ,  \{x^2,2x+x^2,x+x^3\} , in  \mathcal{P}_3
  3.  \begin{pmatrix}
0  &1  \\
4  &2
\end{pmatrix}  ,  \{\begin{pmatrix}
1  &0  \\
1  &1
\end{pmatrix},
\begin{pmatrix}
2  &0  \\
2  &3
\end{pmatrix}  \} , in  \mathcal{M}_{2 \! \times \! 2}
Problem 4

Which of these are members of the span  [\{\cos^2x,\sin^2x\} ] in the vector space of real-valued functions of one real variable?

  1.  f(x)=1
  2.  f(x)=3+x^2
  3.  f(x)=\sin x
  4.  f(x)=\cos (2x)
This exercise is recommended for all readers.
Problem 5

Which of these sets spans  \mathbb{R}^3 ? That is, which of these sets has the property that any three-tall vector can be expressed as a suitable linear combination of the set's elements?

  1.  \{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 3 \end{pmatrix}  \}
  2.  \{ \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}  \}
  3.  \{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 3 \\ 0 \\ 0 \end{pmatrix}  \}
  4.  \{ \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 2 \\ 1 \\ 5 \end{pmatrix}  \}
  5.  \{ \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix},
\begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 5 \\ 1 \\ 2 \end{pmatrix},
\begin{pmatrix} 6 \\ 0 \\ 2 \end{pmatrix}  \}
This exercise is recommended for all readers.
Problem 6

Parametrize each subspace's description. Then express each subspace as a span.

  1. The subset  \{\begin{pmatrix} a &b &c \end{pmatrix}\,\big|\, a-c=0\}   of the three-wide row vectors
  2. This subset of  \mathcal{M}_{2 \! \times \! 2}
    
\{\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}  \,\big|\, a+d=0\}
  3. This subset of  \mathcal{M}_{2 \! \times \! 2}
    
\{\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}  \,\big|\, 2a-c-d=0 \text{ and } a+3b=0 \}
  4. The subset  \{a+bx+cx^3\,\big|\, a-2b+c=0\} of  \mathcal{P}_3
  5. The subset of  \mathcal{P}_2 of quadratic polynomials  p such that  p(7)=0
This exercise is recommended for all readers.
Problem 7

Find a set to span the given subspace of the given space. (Hint. Parametrize each.)

  1. the  xz -plane in  \mathbb{R}^3
  2.  \{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, 3x+2y+z=0\} in  \mathbb{R}^3
  3.  \{\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}\,\big|\,
2x+y+w=0 \text{ and } y+2z=0\} in  \mathbb{R}^4
  4.  \{a_0+a_1x+a_2x^2+a_3x^3\,\big|\,
a_0+a_1=0 \text{ and } a_2-a_3=0\} in  \mathcal{P}_3
  5. The set  \mathcal{P}_4 in the space  \mathcal{P}_4
  6.  \mathcal{M}_{2 \! \times \! 2} in  \mathcal{M}_{2 \! \times \! 2}
Problem 9

Is  \mathbb{R}^2 a subspace of  \mathbb{R}^3 ?

This exercise is recommended for all readers.
Problem 10

Decide if each is a subspace of the vector space of real-valued functions of one real variable.

  1. The even functions  \{f:\mathbb{R}\to \mathbb{R} \,\big|\, f(-x)=f(x) \text{ for all } x\} . For example, two members of this set are f_1(x)=x^2 and f_2(x)=\cos (x).
  2. The odd functions  \{f:\mathbb{R}\to \mathbb{R} \,\big|\, f(-x)=-f(x) \text{ for all } x\} . Two members are f_3(x)=x^3 and f_4(x)=\sin(x).
Problem 11

Example 2.16 says that for any vector \vec{v} that is an element of a vector space V, the set \{r\cdot\vec{v}\,\big|\, r\in\mathbb{R}\} is a subspace of V. (This is, of course, simply the span of the singleton set \{\vec{v}\}.) Must any such subspace be a proper subspace, or can it be improper?

Problem 12

An example following the definition of a vector space shows that the solution set of a homogeneous linear system is a vector space. In the terminology of this subsection, it is a subspace of \mathbb{R}^n where the system has n variables. What about a non-homogeneous linear system; do its solutions form a subspace (under the inherited operations)?

Problem 13

Example 2.19 shows that \mathbb{R}^3 has infinitely many subspaces. Does every nontrivial space have infinitely many subspaces?

Problem 14

Finish the proof of Lemma 2.9.

Problem 15

Show that each vector space has only one trivial subspace.

This exercise is recommended for all readers.
Problem 16

Show that for any subset  S of a vector space, the span of the span equals the span  [ [S] ]=[S] . (Hint. Members of [S] are linear combinations of members of S. Members of [[S]] are linear combinations of linear combinations of members of S.)

Problem 17

All of the subspaces that we've seen use zero in their description in some way. For example, the subspace in Example 2.3 consists of all the vectors from \mathbb{R}^2 with a second component of zero. In contrast, the collection of vectors from \mathbb{R}^2 with a second component of one does not form a subspace (it is not closed under scalar multiplication). Another example is Example 2.2, where the condition on the vectors is that the three components add to zero. If the condition were that the three components add to one then it would not be a subspace (again, it would fail to be closed). This exercise shows that a reliance on zero is not strictly necessary. Consider the set


\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, x+y+z=1\}

under these operations.


\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=\begin{pmatrix} x_1+x_2-1 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}
\qquad
r\begin{pmatrix} x \\ y \\ z \end{pmatrix}=\begin{pmatrix} rx-r+1 \\ ry \\ rz \end{pmatrix}
  1. Show that it is not a subspace of \mathbb{R}^3. (Hint. See Example 2.5).
  2. Show that it is a vector space. Note that by the prior item, Lemma 2.9 cannot apply.
  3. Show that any subspace of \mathbb{R}^3 must pass through the origin, and so any subspace of \mathbb{R}^3 must involve zero in its description. Does the converse hold? Does any subset of \mathbb{R}^3 that contains the origin become a subspace when given the inherited operations?
Problem 18

We can give a justification for the convention that the sum of zero-many vectors equals the zero vector. Consider this sum of three vectors \vec{v}_1+\vec{v}_2+\vec{v}_3.

  1. What is the difference between this sum of three vectors and the sum of the first two of these three?
  2. What is the difference between the prior sum and the sum of just the first one vector?
  3. What should be the difference between the prior sum of one vector and the sum of no vectors?
  4. So what should be the definition of the sum of no vectors?
Problem 19

Is a space determined by its subspaces? That is, if two vector spaces have the same subspaces, must the two be equal?

Problem 20
  1. Give a set that is closed under scalar multiplication but not addition.
  2. Give a set closed under addition but not scalar multiplication.
  3. Give a set closed under neither.
Problem 21

Show that the span of a set of vectors does not depend on the order in which the vectors are listed in that set.

Problem 22

Which trivial subspace is the span of the empty set? Is it


\{\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}\}\subseteq \mathbb{R}^3,
\quad\text{or}\quad
\{0+0x\}\subseteq \mathcal{P}_1,

or some other subspace?

Problem 23

Show that if a vector is in the span of a set then adding that vector to the set won't make the span any bigger. Is that also "only if"?

This exercise is recommended for all readers.
Problem 24

Subspaces are subsets and so we naturally consider how "is a subspace of" interacts with the usual set operations.

  1. If  A,B are subspaces of a vector space, must  A\cap B be a subspace? Always? Sometimes? Never?
  2. Must  A\cup B be a subspace?
  3. If  A is a subspace, must its complement be a subspace?

(Hint. Try some test subspaces from Example 2.19.)

This exercise is recommended for all readers.
Problem 25

Does the span of a set depend on the enclosing space? That is, if  W is a subspace of  V and  S is a subset of  W (and so also a subset of  V ), might the span of  S in  W differ from the span of  S in  V ?

Problem 26

Is the relation "is a subspace of" transitive? That is, if V is a subspace of W and W is a subspace of X, must V be a subspace of X?

This exercise is recommended for all readers.
Problem 27

Because "span of" is an operation on sets we naturally consider how it interacts with the usual set operations.

  1. If  S\subseteq T are subsets of a vector space, is  [S]\subseteq[T] ? Always? Sometimes? Never?
  2. If  S,T are subsets of a vector space, is  [S\cup T]=[S]\cup[T] ?
  3. If  S,T are subsets of a vector space, is  [S\cap T]=[S]\cap[T] ?
  4. Is the span of the complement equal to the complement of the span?
Problem 28

Reprove Lemma 2.15 without doing the empty set separately.

Problem 29

Find a structure that is closed under linear combinations, and yet is not a vector space. (Remark. This is a bit of a trick question.)


Section II - Linear Independence

The prior section shows that a vector space can be understood as an unrestricted linear combination of some of its elements— that is, as a span. For example, the space of linear polynomials \{a+bx\,\big|\, a,b\in\mathbb{R}\} is spanned by the set \{1,x\}. The prior section also showed that a space can have many sets that span it. The space of linear polynomials is also spanned by \{1,2x\} and \{1,x,2x\}.

At the end of that section we described some spanning sets as "minimal", but we never precisely defined that word. We could take "minimal" to mean one of two things. We could mean that a spanning set is minimal if it contains the smallest number of members of any set with the same span. With this meaning \{1,x,2x\} is not minimal because it has one member more than the other two. Or we could mean that a spanning set is minimal when it has no elements that can be removed without changing the span. Under this meaning \{1,x,2x\} is not minimal because removing the  2x and getting  \{1,x\} leaves the span unchanged.

The first sense of minimality appears to be a global requirement, in that to check if a spanning set is minimal we seemingly must look at all the spanning sets of a subspace and find one with the least number of elements. The second sense of minimality is local in that we need to look only at the set under discussion and consider the span with and without various elements. For instance, using the second sense, we could compare the span of \{1,x,2x\} with the span of \{1,x\} and note that the 2x is a "repeat" in that its removal doesn't shrink the span.

In this section we will use the second sense of "minimal spanning set" because of this technical convenience. However, the most important result of this book is that the two senses coincide; we will prove that in the section after this one.


1 - Definition and Examples

Spanning Sets and Linear Independence

We first characterize when a vector can be removed from a set without changing the span of that set.

Lemma 1.1

Where  S is a subset of a vector space V,


[S]=[S\cup\{\vec{v}\}]
\quad\text{if and only if}\quad
\vec{v}\in[S]

for any \vec{v}\in V.

Proof

The left to right implication is easy. If [S]=[S\cup\{\vec{v}\}] then, since  \vec{v}\in[S\cup\{\vec{v}\}] , the equality of the two sets gives that  \vec{v}\in[S] .

For the right to left implication assume that  \vec{v}\in [S] to show that  [S]=[S\cup\{\vec{v}\}] by mutual inclusion. The inclusion  [S]\subseteq[S\cup\{\vec{v}\}] is obvious. For the other inclusion  [S]\supseteq[S\cup\{\vec{v}\}] , write an element of  [S\cup\{\vec{v}\}] as  d_0\vec{v}+d_1\vec{s}_1+\dots+d_m\vec{s}_m and substitute  \vec{v} 's expansion as a linear combination of members of the same set  d_0(c_0\vec{t}_0+\dots+c_k\vec{t}_k)+d_1\vec{s}_1+\dots+d_m\vec{s}_m . This is a linear combination of linear combinations and so distributing  d_0 results in a linear combination of vectors from  S . Hence each member of [S\cup\{\vec{v}\}] is also a member of [S].

Example 1.2

In  \mathbb{R}^3 , where


\vec{v}_1=\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\quad
\vec{v}_2=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\quad
\vec{v}_3=\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}

the spans  [\{\vec{v}_1,\vec{v}_2\}] and  [\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}] are equal since  \vec{v}_3 is in the span  [\{\vec{v}_1,\vec{v}_2\}] .
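
For readers who want a machine check of this example, here is a short Python sketch; NumPy is our own choice of tool and is not assumed anywhere in the text. It finds the coefficients expressing \vec{v}_3 in terms of \vec{v}_1 and \vec{v}_2.

# Check that v3 is a linear combination of v1 and v2 (Example 1.2).
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([2.0, 1.0, 0.0])

# Solve the overdetermined system  c1*v1 + c2*v2 = v3  in the least-squares sense.
A = np.column_stack([v1, v2])
coeffs, residual, rank, _ = np.linalg.lstsq(A, v3, rcond=None)
print(coeffs)                          # [2. 1.]
print(np.allclose(A @ coeffs, v3))     # True, so v3 is in the span of {v1, v2}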

The lemma says that if we have a spanning set then we can remove a \vec{v} to get a new set S with the same span if and only if \vec{v} is a linear combination of vectors from S. Thus, under the second sense described above, a spanning set is minimal if and only if it contains no vectors that are linear combinations of the others in that set. We have a term for this important property.

Definition 1.3

A subset of a vector space is linearly independent if none of its elements is a linear combination of the others. Otherwise it is linearly dependent.

Here is an important observation:


\vec{s}_0=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n

although this way of writing one vector as a combination of the others visually sets  \vec{s}_0 off from the other vectors, algebraically there is nothing special in that equation about  \vec{s}_0 . For any  \vec{s}_i with a coefficient c_i that is nonzero, we can rewrite the relationship to set off  \vec{s}_i .


\vec{s}_i=(1/c_i)\vec{s}_0+(-c_1/c_i)\vec{s}_1
+\dots+(-c_n/c_i)\vec{s}_n

When we don't want to single out any vector by writing it alone on one side of the equation we will instead say that \vec{s}_0,\vec{s}_1,\dots,\vec{s}_n are in a linear relationship and write the relationship with all of the vectors on the same side. The next result rephrases the linear independence definition in this style. It gives what is usually the easiest way to compute whether a finite set is dependent or independent.

Lemma 1.4

A subset  S of a vector space is linearly independent if and only if for any distinct  \vec{s}_1,\dots,\vec{s}_n\in S the only linear relationship among those vectors


c_1\vec{s}_1+\dots+c_n\vec{s}_n=\vec{0}
\qquad c_1,\dots,c_n\in\mathbb{R}

is the trivial one:  c_1=0,\dots,\,c_n=0 .

Proof

This is a direct consequence of the observation above.

If the set  S is linearly independent then no vector \vec{s}_i can be written as a linear combination of the other vectors from S so there is no linear relationship where some of the \vec{s}\,'s have nonzero coefficients. If  S is not linearly independent then some  \vec{s}_i is a linear combination \vec{s}_i=c_1\vec{s}_1+\dots+c_{i-1}\vec{s}_{i-1} +c_{i+1}\vec{s}_{i+1}+\dots+c_n\vec{s}_n of other vectors from  S , and subtracting \vec{s}_i from both sides of that equation gives a linear relationship involving a nonzero coefficient, namely the  -1 in front of  \vec{s}_i .

Example 1.5

In the vector space of two-wide row vectors, the two-element set  \{ \begin{pmatrix} 40 &15 \end{pmatrix},\begin{pmatrix} -50 &25 \end{pmatrix}\} is linearly independent. To check this, set


c_1\cdot\begin{pmatrix} 40 &15 \end{pmatrix}+c_2\cdot\begin{pmatrix} -50 &25 \end{pmatrix}=\begin{pmatrix} 0 &0 \end{pmatrix}

and solving the resulting system


\begin{array}{*{2}{rc}r}
40c_1 &- &50c_2 &= &0 \\
15c_1 &+ &25c_2 &= &0
\end{array}
\;\xrightarrow[]{-(15/40)\rho_1+\rho_2}\;
\begin{array}{*{2}{rc}r}
40c_1 &- &50c_2    &= &0 \\
& &(175/4)c_2 &= &0
\end{array}

shows that both  c_1 and  c_2 are zero. So the only linear relationship between the two given row vectors is the trivial relationship.

In the same vector space,  \{ \begin{pmatrix} 40 &15 \end{pmatrix},\begin{pmatrix} 20 &7.5 \end{pmatrix}\} is linearly dependent since we can satisfy


c_1\begin{pmatrix} 40 &15 \end{pmatrix}+c_2\cdot\begin{pmatrix} 20 &7.5 \end{pmatrix}=\begin{pmatrix} 0 &0 \end{pmatrix}

with  c_1=1 and  c_2=-2 .
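
The same check can be delegated to a computer algebra system. The following Python sketch (using SymPy, again our own choice) computes the solutions of the homogeneous system for both pairs of row vectors: an empty nullspace means only the trivial relationship exists.

# Independence checks for Example 1.5 via the homogeneous system's solutions.
from sympy import Matrix, Rational

# Columns hold the (transposed) row vectors; an empty nullspace means independence.
A = Matrix([[40, -50],
            [15,  25]])
print(A.nullspace())     # []  -- only the trivial relationship, so independent

B = Matrix([[40, 20],
            [15, Rational(15, 2)]])
print(B.nullspace())     # one nontrivial relationship, proportional to c1=1, c2=-2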

Remark 1.6

Recall the Statics example that began this book. We first set the unknown-mass objects at  40 cm and  15 cm and got a balance, and then we set the objects at  -50 cm and  25 cm and got a balance. With those two pieces of information we could compute values of the unknown masses. Had we instead first set the unknown-mass objects at  40 cm and  15 cm, and then at  20 cm and  7.5 cm, we would not have been able to compute the values of the unknown masses (try it). Intuitively, the problem is that the  \begin{pmatrix} 20 &7.5 \end{pmatrix} information is a "repeat" of the \begin{pmatrix} 40 &15 \end{pmatrix} information— that is, \begin{pmatrix} 20 &7.5 \end{pmatrix} is in the span of the set \{\begin{pmatrix} 40 &15 \end{pmatrix}\}— and so we would be trying to solve a two-unknowns problem with what is essentially one piece of information.

Example 1.7

The set  \{1+x,1-x\} is linearly independent in \mathcal{P}_2 , the space of quadratic polynomials with real coefficients, because


0+0x+0x^2
=
c_1(1+x)+c_2(1-x)
=
(c_1+c_2)+(c_1-c_2)x+0x^2

gives

\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
c_1 &+ &c_2 &= &0 \\
c_1 &- &c_2 &= &0
\end{array}
&\xrightarrow[]{-\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
c_1 &+ &c_2 &= &0 \\
&  &2c_2 &= &0
\end{array}
\end{array}

since polynomials are equal only if their coefficients are equal. Thus, the only linear relationship between these two members of \mathcal{P}_2 is the trivial one.

Example 1.8

In  \mathbb{R}^3 , where


\vec{v}_1=\begin{pmatrix} 3 \\ 4 \\ 5 \end{pmatrix}
\quad
\vec{v}_2=\begin{pmatrix} 2 \\ 9 \\ 2 \end{pmatrix}
\quad
\vec{v}_3=\begin{pmatrix} 4 \\ 18 \\ 4 \end{pmatrix}

the set  S=\{\vec{v}_1,\vec{v}_2,\vec{v}_3\} is linearly dependent because this is a relationship


0\cdot\vec{v}_1
+2\cdot\vec{v}_2
-1\cdot\vec{v}_3
=\vec{0}

where not all of the scalars are zero (the fact that some of the scalars are zero doesn't matter).
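
A nullspace computation exposes the same relationship. The sketch below (Python with SymPy, our illustration rather than part of the text) returns a nonzero coefficient vector for this set.

# Example 1.8: a nontrivial linear relationship among v1, v2, v3.
from sympy import Matrix

A = Matrix([[3, 2,  4],
            [4, 9, 18],
            [5, 2,  4]])        # columns are v1, v2, v3
for relation in A.nullspace():
    print(relation.T)           # Matrix([[0, -2, 1]]): up to sign, 0*v1 + 2*v2 - 1*v3 = 0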

Remark 1.9

That example illustrates why, although Definition 1.3 is a clearer statement of what independence is, Lemma 1.4 is more useful for computations. Working straight from the definition, someone trying to compute whether S is linearly independent would start by setting  \vec{v}_1=c_2\vec{v}_2+c_3\vec{v}_3 and concluding that there are no such c_2 and c_3. But knowing that the first vector is not dependent on the other two is not enough. This person would have to go on to try  \vec{v}_2=c_1\vec{v}_1+c_3\vec{v}_3 to find the dependence c_1=0,  c_3=1/2 . Lemma 1.4 gets the same conclusion with only one computation.

Example 1.10

The empty subset of a vector space is linearly independent. There is no nontrivial linear relationship among its members as it has no members.

Example 1.11

In any vector space, any subset containing the zero vector is linearly dependent. For example, in the space \mathcal{P}_2 of quadratic polynomials, consider the subset \{1+x,x+x^2,0\}.

One way to see that this subset is linearly dependent is to use Lemma 1.4: we have 0\cdot\vec{v}_1+0\cdot\vec{v}_2+1\cdot\vec{0}=\vec{0}, and this is a nontrivial relationship as not all of the coefficients are zero. Another way to see that this subset is linearly dependent is to go straight to Definition 1.3: we can express the third member of the subset as a linear combination of the first two, namely, c_1\vec{v}_1+c_2\vec{v}_2=\vec{0} is satisfied by taking c_1=0 and c_2=0 (in contrast to the lemma, the definition allows all of the coefficients to be zero).

(There is still another way to see that this subset is dependent that is subtler. The zero vector is equal to the trivial sum, that is, it is the sum of no vectors. So in a set containing the zero vector, there is an element that can be written as a combination of a collection of other vectors from the set, specifically, the zero vector can be written as a combination of the empty collection.)

The above examples, especially Example 1.5, underline the discussion that begins this section. The next result says that given a finite set, we can produce a linearly independent subset by discarding what Remark 1.6 calls "repeats".


Theorem 1.12

In a vector space, any finite subset has a linearly independent subset with the same span.

Proof

If the set  S=\{ \vec{s}_1,\dots,\vec{s}_n\} is linearly independent then S itself satisfies the statement, so assume that it is linearly dependent.

By the definition of dependence, there is a vector  \vec{s}_i that is a linear combination of the others. Call that vector  \vec{v}_1 . Discard it— define the set  S_1=S-\{\vec{v}_1\} . By Lemma 1.1, the span does not shrink  [S_1]=[S] .

Now, if  S_1 is linearly independent then we are finished. Otherwise iterate the prior paragraph: take a vector \vec{v}_2 that is a linear combination of other members of S_1 and discard it to derive  S_2=S_1-\{\vec{v}_2\} such that  [S_2]=[S_1] . Repeat this until a linearly independent set S_j appears; one must appear eventually because  S is finite and the empty set is linearly independent. (Formally, this argument uses induction on n, the number of elements in the starting set. Problem 20 asks for the details.)
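
The discarding procedure used in this proof is easy to mechanize. Below is a rough Python sketch (using SymPy; the helper name prune_repeats is ours, not the book's) that removes one "repeat" at a time until the remaining vectors are linearly independent; by Lemma 1.1 the span is unchanged at every step. The sample set is the spanning set of Example 1.13 below.

# Sketch of the proof of Theorem 1.12: repeatedly discard a vector that is
# a linear combination of the others, until the set is linearly independent.
from sympy import Matrix

def prune_repeats(vectors):
    vectors = list(vectors)
    while vectors:
        A = Matrix.hstack(*vectors)
        if A.rank() == len(vectors):      # columns independent: nothing to discard
            return vectors
        relation = A.nullspace()[0]       # a nontrivial linear relationship
        # Any vector with a nonzero coefficient is a combination of the others.
        i = next(k for k, c in enumerate(relation) if c != 0)
        del vectors[i]
    return vectors                        # the empty set is linearly independent

S = [Matrix([1, 0, 0]), Matrix([0, 2, 0]), Matrix([1, 2, 0]),
     Matrix([0, -1, 1]), Matrix([3, 3, 0])]
print(prune_repeats(S))                   # three independent vectors, same span as S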

Example 1.13

This set spans  \mathbb{R}^3 .


S=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix},
\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} \}

Looking for a linear relationship


c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
+c_2\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}
+c_3\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}
+c_4\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}
+c_5\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}

gives a three equations/five unknowns linear system whose solution set can be parametrized in this way.



\{\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix}=
c_3\begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}
+c_5\begin{pmatrix} -3 \\ -3/2 \\ 0 \\ 0 \\ 1 \end{pmatrix}
\,\big|\, c_3,c_5\in\mathbb{R} \}

So S is linearly dependent. Setting  c_3=0 and  c_5=1 shows that the fifth vector is a linear combination of the first two. Thus, Lemma 1.1 says that discarding the fifth vector


S_1=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \}

leaves the span unchanged [S_1]=[S]. Now, the third vector of  S_1 is a linear combination of the first two and we get


S_2=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \}

with the same span as S_1, and therefore the same span as S, but with one difference. The set S_2 is linearly independent (this is easily checked), and so discarding any of its elements will shrink the span.

Linear Independence and Subset Relations

Theorem 1.12 describes producing a linearly independent set by shrinking, that is, by taking subsets. We finish this subsection by considering how linear independence and dependence, which are properties of sets, interact with the subset relation between sets.

Lemma 1.14

Any subset of a linearly independent set is also linearly independent. Any superset of a linearly dependent set is also linearly dependent.

Proof

This is clear.

Restated, independence is preserved by subset and dependence is preserved by superset.

Those are two of the four possible cases of interaction that we can consider. The third case, whether linear dependence is preserved by the subset operation, is covered by Example 1.13, which gives a linearly dependent set S with a subset S_1 that is linearly dependent and another subset S_2 that is linearly independent.

That leaves one case, whether linear independence is preserved by superset. The next example shows what can happen.

Example 1.15

In each of these three paragraphs the subset S is linearly independent.

For the set


S =\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\}

the span  [S] is the  x axis. Here are two supersets of S, one linearly dependent and the other linearly independent.

dependent:  \{
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} -3 \\ 0 \\ 0 \end{pmatrix}\}      independent:  \{
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\}

Checking the dependence or independence of these sets is easy.

For


S
=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
\}

the span  [S] is the  xy plane. These are two supersets.

dependent:  \{
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 3 \\ -2 \\ 0 \end{pmatrix} \}      independent:  \{
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \}

If


S =\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \}

then  [S]=\mathbb{R}^3 . A linearly dependent superset is

dependent:  \{
\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix} \}

but there are no linearly independent supersets of S. The reason is that for any vector that we would add to make a superset, the linear dependence equation



\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
+c_2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
+c_3\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}

has a solution c_1=x, c_2=y, and c_3=z.

So, in general, a linearly independent set may have a superset that is dependent. And, in general, a linearly independent set may have a superset that is independent. We can characterize when the superset is one and when it is the other.

Lemma 1.16

Where  S is a linearly independent subset of a vector space  V ,


S\cup\{\vec{v}\}\text{ is linearly dependent}
\quad\text{if and only if}\quad
\vec{v}\in[S]

for any  \vec{v}\in V with  \vec{v}\not\in S .

Proof

One implication is clear: if  \vec{v}\in[S] then  \vec{v}=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n where each  \vec{s}_i\in S and  c_i\in\mathbb{R} , and so  \vec{0}=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n+(-1)\vec{v} is a nontrivial linear relationship among elements of  S\cup\{\vec{v}\} .

The other implication requires the assumption that  S is linearly independent. With  S\cup\{\vec{v}\} linearly dependent, there is a nontrivial linear relationship  c_0\vec{v}+c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n=\vec{0} and independence of S then implies that  c_0\neq 0 , or else that would be a nontrivial relationship among members of  S . Now rewriting this equation as  \vec{v}=-(c_1/c_0)\vec{s}_1-\dots-(c_n/c_0)\vec{s}_n shows that  \vec{v}\in[S] .

(Compare this result with Lemma 1.1. Both say, roughly, that \vec{v} is a "repeat" if it is in the span of S. However, note the additional hypothesis here of linear independence.)

Corollary 1.17

A subset  S=\{\vec{s}_1,\dots,\vec{s}_n\} of a vector space is linearly dependent if and only if some  \vec{s_i} is a linear combination of the vectors  \vec{s}_1 , ...,  \vec{s}_{i-1} listed before it.

Proof

Consider  S_0=\{\} ,  S_1=\{\vec{s_1}\} ,  S_2=\{\vec{s}_1,\vec{s}_2 \} , etc. Some index  i\geq 1 is the first one with  S_{i-1}\cup\{\vec{s}_i \} linearly dependent, and there  \vec{s}_i\in[ S_{i-1} ] .

Lemma 1.16 can be restated in terms of independence instead of dependence: if  S is linearly independent and  \vec{v}\not\in S then the set  S\cup\{\vec{v}\} is also linearly independent if and only if  \vec{v}\not\in[S]. Applying Lemma 1.1, we conclude that if  S is linearly independent and  \vec{v}\not\in S then  S\cup\{\vec{v}\} is also linearly independent if and only if  [S\cup\{\vec{v}\}]\neq[S] . Briefly, when passing from S to a superset S_1, to preserve linear independence we must expand the span [S_1]\supset[S].

Example 1.15 shows that some linearly independent sets are maximal— have as many elements as possible— in that they have no supersets that are linearly independent. By the prior paragraph, a linearly independent set is maximal if and only if it spans the entire space, because then no vector exists that is not already in the span.

This table summarizes the interaction between the properties of independence and dependence and the relations of subset and superset.


\begin{array}{r|cc}
                      &S_1\subset S                    &S_1\supset S  \\ \hline
S\text{ independent}  &S_1\text{ must be independent}  &S_1\text{ may be either}  \\
S\text{ dependent}    &S_1\text{ may be either}        &S_1\text{ must be dependent}
\end{array}


In developing this table we've uncovered an intimate relationship between linear independence and span. Complementing the fact that a spanning set is minimal if and only if it is linearly independent, a linearly independent set is maximal if and only if it spans the space.

In summary, we have introduced the definition of linear independence to formalize the idea of the minimality of a spanning set. We have developed some properties of this idea. The most important is Lemma 1.16, which tells us that a linearly independent set is maximal when it spans the space.

Exercises

This exercise is recommended for all readers.
Problem 1

Decide whether each subset of  \mathbb{R}^3 is linearly dependent or linearly independent.

  1.  \{\begin{pmatrix} 1 \\ -3 \\ 5 \end{pmatrix},
\begin{pmatrix} 2 \\ 2 \\ 4 \end{pmatrix},
\begin{pmatrix} 4 \\ -4 \\ 14 \end{pmatrix} \}
  2.  \{\begin{pmatrix} 1 \\ 7 \\ 7 \end{pmatrix},
\begin{pmatrix} 2 \\ 7 \\ 7 \end{pmatrix},
\begin{pmatrix} 3 \\ 7 \\ 7 \end{pmatrix} \}
  3.  \{\begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix},
\begin{pmatrix} 1 \\ 0 \\ 4 \end{pmatrix} \}
  4.  \{\begin{pmatrix} 9 \\ 9 \\ 0 \end{pmatrix},
\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix},
\begin{pmatrix} 3 \\ 5 \\ -4 \end{pmatrix},
\begin{pmatrix} 12 \\ 12 \\ -1 \end{pmatrix} \}
This exercise is recommended for all readers.
Problem 2

Which of these subsets of  \mathcal{P}_3 are linearly dependent and which are independent?

  1.  \{3-x+9x^2,5-6x+3x^2,1+1x-5x^2\}
  2.  \{-x^2,1+4x^2\}
  3.  \{2+x+7x^2,3-x+2x^2,4-3x^2\}
  4.  \{8+3x+3x^2,x+2x^2,2+2x+2x^2,8-2x+5x^2\}
This exercise is recommended for all readers.
Problem 3

Prove that each set  \{f,g\} is linearly independent in the vector space of all functions from  \mathbb{R}^+ to  \mathbb{R} .

  1.  f(x)=x and  g(x)=1/x
  2.  f(x)=\cos(x) and  g(x)=\sin(x)
  3.  f(x)=e^x and  g(x)=\ln(x)
This exercise is recommended for all readers.
Problem 4

Which of these subsets of the space of real-valued functions of one real variable is linearly dependent and which is linearly independent? (Note that we have abbreviated some constant functions; e.g., in the first item, the "2" stands for the constant function f(x)=2.)

  1.  \{2,4\sin^2(x),\cos^2(x)\}
  2.  \{1,\sin(x),\sin(2x)\}
  3.  \{x,\cos(x)\}
  4.  \{(1+x)^2,x^2+2x,3\}
  5.  \{\cos(2x),\sin^2(x),\cos^2(x)\}
  6.  \{0,x,x^2\}
Problem 5

Does the equation  \sin^2(x)/\cos^2(x)=\tan^2(x) show that this set of functions  \{\sin^2(x),\cos^2(x),\tan^2(x)\} is a linearly dependent subset of the set of all real-valued functions with domain the interval  (-\pi/2..\pi/2) of real numbers between  -\pi/2 and  \pi/2 ?

Problem 6

Why does Lemma 1.4 say "distinct"?

This exercise is recommended for all readers.
Problem 7

Show that the nonzero rows of an echelon form matrix form a linearly independent set.

This exercise is recommended for all readers.
Problem 8
  1. Show that if the set  \{\vec{u},\vec{v},\vec{w}\} is a linearly independent set then so is the set  \{\vec{u},\vec{u}+\vec{v},\vec{u}+\vec{v}+\vec{w}\} .
  2. What is the relationship between the linear independence or dependence of the set  \{\vec{u},\vec{v},\vec{w}\} and the independence or dependence of  \{\vec{u}-\vec{v},\vec{v}-\vec{w},\vec{w}-\vec{u}\} ?
Problem 9

Example 1.10 shows that the empty set is linearly independent.

  1. When is a one-element set linearly independent?
  2. How about a set with two elements?
Problem 10

In any vector space  V , the empty set is linearly independent. What about all of  V ?

Problem 11

Show that if  \{\vec{x},\vec{y},\vec{z}\} is linearly independent then so are all of its proper subsets:  \{\vec{x},\vec{y}\} ,  \{\vec{x},\vec{z}\} ,  \{\vec{y},\vec{z}\} ,  \{\vec{x}\} , \{\vec{y}\} ,  \{\vec{z}\} , and  \{\} . Is that "only if" also?

Problem 12
  1. Show that this
    
S=\{\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},\begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix}\}
    is a linearly independent subset of  \mathbb{R}^3 .
  2. Show that
    
\begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}
    is in the span of S by finding  c_1 and  c_2 giving a linear relationship.
    
c_1\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}
+c_2\begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix}
=\begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}
    Show that the pair  c_1,c_2 is unique.
  3. Assume that  S is a subset of a vector space and that  \vec{v} is in  [S] , so that  \vec{v} is a linear combination of vectors from  S . Prove that if  S is linearly independent then a linear combination of vectors from  S adding to  \vec{v} is unique (that is, unique up to reordering and adding or taking away terms of the form  0\cdot\vec{s} ). Thus  S as a spanning set is minimal in this strong sense: each vector in  [S] is "hit" a minimum number of times— only once.
  4. Prove that it can happen when  S is not linearly independent that distinct linear combinations sum to the same vector.
Problem 13

Prove that a polynomial gives rise to the zero function if and only if it is the zero polynomial. (Comment. This question is not a Linear Algebra matter, but we often use the result. A polynomial gives rise to a function in the obvious way: x\mapsto c_nx^n+\dots+c_1x+c_0.)

Problem 14

Return to Section 1.2 and redefine point, line, plane, and other linear surfaces to avoid degenerate cases.

Problem 15
  1. Show that any set of four vectors in  \mathbb{R}^2 is linearly dependent.
  2. Is this true for any set of five? Any set of three?
  3. What is the most number of elements that a linearly independent subset of \mathbb{R}^2 can have?
This exercise is recommended for all readers.
Problem 16

Is there a set of four vectors in  \mathbb{R}^3 , any three of which form a linearly independent set?

Problem 17

Must every linearly dependent set have a subset that is dependent and a subset that is independent?

Problem 18

In  \mathbb{R}^4 , what is the biggest linearly independent set you can find? The smallest? The biggest linearly dependent set? The smallest? ("Biggest" and "smallest" mean that there are no supersets or subsets with the same property.)

This exercise is recommended for all readers.
Problem 19

Linear independence and linear dependence are properties of sets. We can thus naturally ask how those properties act with respect to the familiar elementary set relations and operations. In the body of this subsection we have covered the subset and superset relations. We can also consider the operations of intersection, complementation, and union.

  1. How does linear independence relate to intersection: can an intersection of linearly independent sets be independent? Must it be?
  2. How does linear independence relate to complementation?
  3. Show that the union of two linearly independent sets need not be linearly independent.
  4. Characterize when the union of two linearly independent sets is linearly independent, in terms of the intersection of the span of each.
This exercise is recommended for all readers.
Problem 20

For Theorem 1.12,

  1. fill in the induction for the proof;
  2. give an alternate proof that starts with the empty set and builds a sequence of linearly independent subsets of the given finite set until one appears with the same span as the given set.
Problem 21

With a little calculation we can get formulas to determine whether or not a set of vectors is linearly independent.

  1. Show that this subset of  \mathbb{R}^2
    
\{\begin{pmatrix} a \\ c \end{pmatrix},\begin{pmatrix} b \\ d \end{pmatrix}\}
    is linearly independent if and only if  ad-bc\neq 0 .
  2. Show that this subset of  \mathbb{R}^3
    
\{\begin{pmatrix} a \\ d \\ g \end{pmatrix},
\begin{pmatrix} b \\ e \\ h \end{pmatrix},
\begin{pmatrix} c \\ f \\ i \end{pmatrix} \}
    is linearly independent iff  aei+bfg+cdh-hfa-idb-gec \neq 0 .
  3. When is this subset of  \mathbb{R}^3
    
\{\begin{pmatrix} a \\ d \\ g \end{pmatrix},
\begin{pmatrix} b \\ e \\ h \end{pmatrix} \}
    linearly independent?
  4. This is an opinion question: for a set of four vectors from  \mathbb{R}^4 , must there be a formula involving the sixteen entries that determines independence of the set? (You needn't produce such a formula, just decide if one exists.)
This exercise is recommended for all readers.
Problem 22
  1. Prove that a set of two perpendicular nonzero vectors from  \mathbb{R}^n is linearly independent when  n>1 .
  2. What if  n=1 ?  n=0 ?
  3. Generalize to more than two vectors.
Problem 23

Consider the set of functions from the open interval (-1..1) to \mathbb{R}.

  1. Show that this set is a vector space under the usual operations.
  2. Recall the formula for the sum of an infinite geometric series:  1+x+x^2+\cdots=1/(1-x) for all  x\in(-1..1) . Why does this not express a dependence inside of the set \{g(x)=1/(1-x),f_0(x)=1,f_1(x)=x,f_2(x)=x^2,\ldots\} (in the vector space that we are considering)? (Hint. Review the definition of linear combination.)
  3. Show that the set in the prior item is linearly independent.

This shows that some vector spaces exist with linearly independent subsets that are infinite.

Problem 24

Show that, where  S is a subspace of  V , if a subset T of  S is linearly independent in  S then T is also linearly independent in  V . Is that "only if"?


Section III - Basis and Dimension

The prior section ends with the statement that a spanning set is minimal when it is linearly independent and a linearly independent set is maximal when it spans the space. So the notions of minimal spanning set and maximal independent set coincide. In this section we will name this idea and study its properties.


1 - Basis

Definition 1.1

A basis for a vector space is a sequence of vectors that form a set that is linearly independent and that spans the space.

We denote a basis with angle brackets  \langle \vec{\beta}_1,\vec{\beta}_2,\ldots \rangle  to signify that this collection is a sequence[2] — the order of the elements is significant. (The requirement that a basis be ordered will be needed, for instance, in Definition 1.13.)

Example 1.2

This is a basis for  \mathbb{R}^2 .


\langle  \begin{pmatrix} 2 \\ 4 \end{pmatrix},\begin{pmatrix} 1 \\ 1 \end{pmatrix}  \rangle

It is linearly independent


c_1\begin{pmatrix} 2 \\ 4 \end{pmatrix}+c_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\implies\quad
\begin{array}{*{2}{rc}r}
2c_1  &+  &1c_2  &=  &0  \\
4c_1  &+  &1c_2  &=  &0  
\end{array}
\quad\implies\quad
c_1=c_2=0

and it spans  \mathbb{R}^2 .


\begin{array}{*{2}{rc}r}
2c_1  &+  &1c_2  &=  &x  \\
4c_1  &+  &1c_2  &=  &y  
\end{array}
\quad\implies\quad
c_2=2x-y\text{ and } c_1=(y-x)/2
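
As a numeric cross-check (Python with NumPy; nothing in the text depends on it), the coefficient matrix of the system above can be tested for full rank and solved for a sample right-hand side.

# Example 1.2: <(2,4), (1,1)> is a basis for R^2.
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 1.0]])         # columns are the candidate basis vectors
print(np.linalg.matrix_rank(A))    # 2, so the columns are independent and span R^2

x, y = 3.0, 7.0                    # any sample right-hand side
c = np.linalg.solve(A, np.array([x, y]))
print(c)                           # [ 2. -1.], matching c1 = (y-x)/2 and c2 = 2x-y
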
Example 1.3

This basis for  \mathbb{R}^2


\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\begin{pmatrix} 2 \\ 4 \end{pmatrix} \rangle

differs from the prior one because the vectors are in a different order. The verification that it is a basis is just as in the prior example.

Example 1.4

The space  \mathbb{R}^2 has many bases. Another one is this.


\langle  \begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \end{pmatrix}  \rangle

The verification is easy.

Definition 1.5

For any  \mathbb{R}^n ,


\mathcal{E}_n=\langle 
\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix},
\dots,\,
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} \rangle

is the standard (or natural) basis. We denote these vectors by  \vec{e}_1,\dots,\vec{e}_n .


(Calculus books refer to \mathbb{R}^2's standard basis vectors  \vec{\imath} and  \vec{\jmath} instead of \vec{e}_1 and \vec{e}_2, and they refer to  \mathbb{R}^3 's standard basis vectors  \vec{\imath} ,  \vec{\jmath} , and  \vec{k} instead of \vec{e}_1, \vec{e}_2, and \vec{e}_3.) Note that the symbol " \vec{e}_1 " means something different in a discussion of  \mathbb{R}^3 than it means in a discussion of  \mathbb{R}^2 .

Example 1.6

Consider the space  \{a\cdot\cos\theta+b\cdot\sin\theta\,\big|\, a,b\in\mathbb{R}\} of functions of the real variable \theta.


\langle 1\cdot\cos\theta+0\cdot\sin\theta,
0\cdot\cos\theta+1\cdot\sin\theta \rangle 
=\langle \cos\theta, \sin\theta \rangle

Another basis is  \langle \cos\theta-\sin\theta, 2\cos\theta+3\sin\theta \rangle . Verification that these two are bases is Problem 7.

Example 1.7

A natural basis for the vector space of cubic polynomials  \mathcal{P}_3 is  \langle 1,x,x^2,x^3 \rangle  . Two other bases for this space are  \langle x^3,3x^2,6x,6 \rangle  and  \langle 1,1+x,1+x+x^2,1+x+x^2+x^3 \rangle  . Checking that these are linearly independent and span the space is easy.

Example 1.8

The trivial space \{\vec{0}\} has only one basis, the empty one  \langle  \rangle  .

Example 1.9

The space of finite-degree polynomials has a basis with infinitely many elements  \langle 1,x,x^2,\ldots \rangle  .

Example 1.10

We have seen bases before. In the first chapter we described the solution set of homogeneous systems such as this one


\begin{array}{*{4}{rc}r}
x  &+  &y  &   &   &-   &w   &=  &0  \\
&   &   &   &z  &+   &w   &=  &0  
\end{array}

by parametrizing.


\{\begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}y
+\begin{pmatrix} 1 \\ 0 \\ -1 \\ 1 \end{pmatrix}w
\,\big|\, y,w\in\mathbb{R} \}

That is, we described the vector space of solutions as the span of a two-element set. We can easily check that this two-vector set is also linearly independent. Thus the solution set is a subspace of  \mathbb{R}^4 with a two-element basis.
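
This basis can also be recovered mechanically. The following Python sketch (SymPy, our choice of tool) computes a nullspace basis of the coefficient matrix of the homogeneous system and reproduces the parametrization above.

# Example 1.10: a basis for the solution set of  x + y - w = 0  and  z + w = 0.
from sympy import Matrix

A = Matrix([[1, 1, 0, -1],
            [0, 0, 1,  1]])
for vector in A.nullspace():
    print(vector.T)
# Matrix([[-1, 1, 0, 0]]) and Matrix([[1, 0, -1, 1]]), matching the parametrization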

Example 1.11

Parameterization helps find bases for other vector spaces, not just for solution sets of homogeneous systems. To find a basis for this subspace of \mathcal{M}_{2 \! \times \! 2}


\{\begin{pmatrix}
a  &b  \\
c  &0
\end{pmatrix} \,\big|\, a+b-2c=0\}

we rewrite the condition as a=-b+2c.


\{\begin{pmatrix}
-b+2c  &b  \\
c     &0
\end{pmatrix} \,\big|\, b,c \in \mathbb{R}\}
=\{b\begin{pmatrix}
-1  &1  \\
0  &0
\end{pmatrix}+
c\begin{pmatrix}
2  &0  \\
1  &0
\end{pmatrix} \,\big|\, b,c \in \mathbb{R}\}

Thus, this is a good candidate for a basis.


\langle \begin{pmatrix}
-1  &1  \\
0  &0
\end{pmatrix},
\begin{pmatrix}
2  &0  \\
1  &0
\end{pmatrix}  \rangle

The above work shows that it spans the space. To show that it is linearly independent is routine.
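
The parametrization step can likewise be done by machine. In the sketch below (Python/SymPy; an illustration of ours, not part of the exposition) the condition a+b-2c=0 is treated as a one-row homogeneous system in (a,b,c), and its nullspace vectors are reshaped into the two candidate basis matrices.

# Example 1.11: basis for { [[a, b], [c, 0]] | a + b - 2c = 0 }.
from sympy import Matrix

constraint = Matrix([[1, 1, -2]])      # the condition a + b - 2c = 0 on (a, b, c)
for p in constraint.nullspace():
    a, b, c = p
    print(Matrix([[a, b], [c, 0]]))
# Prints the candidate basis matrices [[-1, 1], [0, 0]] and [[2, 0], [1, 0]]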

Consider again Example 1.2. It involves two verifications.

In the first, to check that the set is linearly independent we looked at linear combinations of the set's members that total to the zero vector c_1\vec{\beta}_1+c_2\vec{\beta}_2=\binom{0}{0}. The resulting calculation shows that such a combination is unique, that c_1 must be 0 and c_2 must be 0.

The second verification, that the set spans the space, looks at linear combinations that total to any member of the space c_1\vec{\beta}_1+c_2\vec{\beta}_2=\binom{x}{y}. In Example 1.2 we noted only that the resulting calculation shows that such a combination exists, that for each x,y there is a c_1,c_2. However, in fact the calculation also shows that the combination is unique: c_1 must be (y-x)/2 and c_2 must be 2x-y.

That is, the first calculation is a special case of the second. The next result says that this holds in general for a spanning set: the combination totaling to the zero vector is unique if and only if the combination totaling to any vector is unique.

Theorem 1.12

In any vector space, a subset is a basis if and only if each vector in the space can be expressed as a linear combination of elements of the subset in a unique way.

We consider combinations to be the same if they differ only in the order of summands or in the addition or deletion of terms of the form " 0\cdot\vec{\beta} ".

Proof

By definition, a sequence is a basis if and only if its vectors form both a spanning set and a linearly independent set. A subset is a spanning set if and only if each vector in the space is a linear combination of elements of that subset in at least one way.

Thus, to finish we need only show that a subset is linearly independent if and only if every vector in the space is a linear combination of elements from the subset in at most one way. Consider two expressions of a vector as a linear combination of the members of the basis. We can rearrange the two sums, and if necessary add some  0\vec{\beta}_i terms, so that the two sums combine the same  \vec{\beta} 's in the same order:  \vec{v}=c_1\vec{\beta}_1+c_2\vec{\beta}_2+\cdots +c_n\vec{\beta}_n and  \vec{v}=d_1\vec{\beta}_1+d_2\vec{\beta}_2+\cdots +d_n\vec{\beta}_n . Now


c_1\vec{\beta}_1+c_2\vec{\beta}_2+\cdots +c_n\vec{\beta}_n=d_1\vec{\beta}_1+d_2\vec{\beta}_2+\cdots +d_n\vec{\beta}_n

holds if and only if


(c_1-d_1)\vec{\beta}_1+\dots+(c_n-d_n)\vec{\beta}_n=\vec{0}

holds, and so asserting that each coefficient in the lower equation is zero is the same thing as asserting that  c_i=d_i for each  i .

Definition 1.13

In a vector space with basis B the representation of \vec{v} with respect to B is the column vector of the coefficients used to express \vec{v} as a linear combination of the basis vectors:


{\rm Rep}_{B}(\vec{v})=
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}

where  B=\langle \vec{\beta}_1,\dots,\vec{\beta}_n \rangle
and  \vec{v}=c_1\vec{\beta}_1+c_2\vec{\beta}_2+\cdots
+c_n\vec{\beta}_n . The  c 's are the coordinates of  \vec{v} with respect to  B .

We will later do representations in contexts that involve more than one basis. To help with the bookkeeping, we shall often attach a subscript B to the column vector.

Example 1.14

In  \mathcal{P}_3 , with respect to the basis  B=\langle 1,2x,2x^2,2x^3 \rangle  , the representation of  x+x^2 is


{\rm Rep}_{B}(x+x^2)=\begin{pmatrix} 0 \\ 1/2 \\ 1/2 \\ 0 \end{pmatrix}_B

(note that the coordinates are scalars, not vectors). With respect to a different basis  D=\langle 1+x,1-x,x+x^2,x+x^3 \rangle  , the representation


{\rm Rep}_{D}(x+x^2)=\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}_D

is different.
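
For a computational double check (Python with SymPy; the symbols and helper calls are our own illustration), we can equate coefficients of like powers of x and solve for the representation with respect to B.

# Example 1.14: Rep_B(x + x^2) for B = <1, 2x, 2x^2, 2x^3>.
from sympy import symbols, Poly, linsolve

x, c1, c2, c3, c4 = symbols('x c1 c2 c3 c4')
combo = c1*1 + c2*(2*x) + c3*(2*x**2) + c4*(2*x**3)
target = x + x**2

# Equate coefficients of like powers of x and solve for the c's.
eqs = Poly(combo - target, x).all_coeffs()
print(linsolve(eqs, [c1, c2, c3, c4]))    # {(0, 1/2, 1/2, 0)}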

Remark 1.15

This use of column notation and the term "coordinates" has both a down side and an up side.

The down side is that representations look like vectors from  \mathbb{R}^n , which can be confusing when the vector space we are working with is \mathbb{R}^n, especially since we sometimes omit the subscript naming the basis. We must then infer the intent from the context. For example, the phrase "in  \mathbb{R}^2 , where \vec{v}=\binom{3}{2}" refers to the plane vector that, when in canonical position, ends at  (3,2) . To find the coordinates of that vector with respect to the basis


B=\langle 
\begin{pmatrix} 1 \\ 1 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \end{pmatrix}  \rangle

we solve


c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix}
+c_2\begin{pmatrix} 0 \\ 2 \end{pmatrix}
=
\begin{pmatrix} 3 \\ 2 \end{pmatrix}

to get that c_1=3 and c_2=-1/2. Then we have this.


{\rm Rep}_{B}(\vec{v})=\begin{pmatrix} 3 \\ -1/2 \end{pmatrix}

Here, although we've omitted the subscript  B from the column, the fact that the right side is a representation is clear from the context.

The up side of the notation and the term "coordinates" is that they generalize the use that we are familiar with: in  \mathbb{R}^n and with respect to the standard basis  \mathcal{E}_n , the vector starting at the origin and ending at  (v_1,\dots,v_n) has this representation.



{\rm Rep}_{\mathcal{E}_n}(\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix})
=
\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}_{\mathcal{E}_n}

Our main use of representations will come in the third chapter. The definition appears here because the fact that every vector is a linear combination of basis vectors in a unique way is a crucial property of bases, and also to help make two points. First, we fix an order for the elements of a basis so that coordinates can be stated in that order. Second, for calculation of coordinates, among other things, we shall restrict our attention to spaces with bases having only finitely many elements. We will see that in the next subsection.

Exercises

This exercise is recommended for all readers.
Problem 1

Decide if each is a basis for  \mathbb{R}^3 .

  1.  \langle 
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},
\begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix},
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \rangle
  2.  \langle 
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},
\begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} \rangle
  3.  \langle 
\begin{pmatrix} 0 \\ 2 \\ -1 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\begin{pmatrix} 2 \\ 5 \\ 0 \end{pmatrix} \rangle
  4.  \langle 
\begin{pmatrix} 0 \\ 2 \\ -1 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix} \rangle
This exercise is recommended for all readers.
Problem 2

Represent the vector with respect to the basis.

  1.  \begin{pmatrix} 1 \\ 2 \end{pmatrix} ,  B=\langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\begin{pmatrix} -1 \\ 1 \end{pmatrix} \rangle \subseteq\mathbb{R}^2
  2.  x^2+x^3 ,  D=\langle 1,1+x,1+x+x^2,1+x+x^2+x^3 \rangle \subseteq\mathcal{P}_3
  3.  \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix} ,  \mathcal{E}_4\subseteq\mathbb{R}^4
Problem 3

Find a basis for   \mathcal{P}_2 , the space of all quadratic polynomials. Must any such basis contain a polynomial of each degree: degree zero, degree one, and degree two?

Problem 4

Find a basis for the solution set of this system.


\begin{array}{*{4}{rc}r}
x_1  &-  &4x_2  &+  &3x_3  &-  &x_4  &=  &0  \\
2x_1  &-  &8x_2  &+  &6x_3  &-  &2x_4 &=  &0  
\end{array}
This exercise is recommended for all readers.
Problem 5

Find a basis for  \mathcal{M}_{2 \! \times \! 2} , the space of  2 \! \times \! 2 matrices.

This exercise is recommended for all readers.
Problem 6

Find a basis for each.

  1. The subspace \{a_2x^2+a_1x+a_0\,\big|\, a_2-2a_1=a_0\} of \mathcal{P}_2
  2. The space of three-wide row vectors whose first and second components add to zero
  3. This subspace of the 2 \! \times \! 2 matrices
    
\{\begin{pmatrix}
a  &b  \\
0  &c  
\end{pmatrix} \,\big|\, c-2b=0\}
Problem 7

Check Example 1.6.

This exercise is recommended for all readers.
Problem 8

Find the span of each set and then find a basis for that span.

  1. \{1+x,1+2x\} in \mathcal{P}_2
  2. \{2-2x,3+4x^2\} in \mathcal{P}_2
This exercise is recommended for all readers.
Problem 9

Find a basis for each of these subspaces of the space \mathcal{P}_3 of cubic polynomials.

  1. The subspace of cubic polynomials p(x) such that p(7)=0
  2. The subspace of polynomials p(x) such that p(7)=0 and p(5)=0
  3. The subspace of polynomials p(x) such that p(7)=0, p(5)=0, and p(3)=0
  4. The space of polynomials p(x) such that p(7)=0, p(5)=0, p(3)=0, and p(1)=0
Problem 10

We've seen that it is possible for a basis to remain a basis when it is reordered. Must it remain a basis?

Problem 11

Can a basis contain a zero vector?

This exercise is recommended for all readers.
Problem 12

Let  \langle \vec{\beta}_1,\vec{\beta}_2,\vec{\beta}_3 \rangle  be a basis for a vector space.

  1. Show that  \langle c_1\vec{\beta}_1,c_2\vec{\beta}_2,c_3\vec{\beta}_3 \rangle  is a basis when  c_1, c_2, c_3\neq 0 . What happens when at least one  c_i is 0?
  2. Prove that  \langle \vec{\alpha}_1,\vec{\alpha}_2,\vec{\alpha}_3 \rangle  is a basis where  \vec{\alpha}_i=\vec{\beta}_1+\vec{\beta}_i .
Problem 13

Find one vector \vec{v} that will make each into a basis for the space.

  1. \langle \begin{pmatrix} 1 \\ 1 \end{pmatrix},\vec{v} \rangle in \mathbb{R}^2
  2. \langle \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\vec{v} \rangle in \mathbb{R}^3
  3. \langle x,1+x^2,\vec{v} \rangle in \mathcal{P}_2
This exercise is recommended for all readers.
Problem 14

Where  \langle \vec{\beta}_1,\dots,\vec{\beta}_n  \rangle  is a basis, show that in this equation


c_1\vec{\beta}_1+\dots+c_k\vec{\beta}_k
=
c_{k+1}\vec{\beta}_{k+1}+\dots+c_n\vec{\beta}_n

each of the  c_i 's is zero. Generalize.

Problem 15

A basis contains some of the vectors from a vector space; can it contain them all?

Problem 16

Theorem 1.12 shows that, with respect to a basis, every linear combination is unique. If a subset is not a basis, can linear combinations be not unique? If so, must they be?

This exercise is recommended for all readers.
Problem 17

A square matrix is symmetric if for all indices  i
and  j , entry  i,j equals entry  j,i .

  1. Find a basis for the vector space of symmetric  2 \! \times \! 2 matrices.
  2. Find a basis for the space of symmetric  3 \! \times \! 3 matrices.
  3. Find a basis for the space of symmetric  n \! \times \! n matrices.
This exercise is recommended for all readers.
Problem 18

We can show that every basis for \mathbb{R}^3 contains the same number of vectors.

  1. Show that no linearly independent subset of \mathbb{R}^3 contains more than three vectors.
  2. Show that no spanning subset of \mathbb{R}^3 contains fewer than three vectors. (Hint. Recall how to calculate the span of a set and show that this method, when applied to two vectors, cannot yield all of \mathbb{R}^3.)
Problem 19

One of the exercises in the Subspaces subsection shows that the set


\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, x+y+z=1\}

is a vector space under these operations.


\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=\begin{pmatrix} x_1+x_2-1 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}
\qquad
r\begin{pmatrix} x \\ y \\ z \end{pmatrix}=\begin{pmatrix} rx-r+1 \\ ry \\ rz \end{pmatrix}

Find a basis.

Footnotes

  1. More information on equivalence of statements is in the appendix.
  2. More information on sequences is in the appendix.


2 - Dimension

In the prior subsection we defined the basis of a vector space, and we saw that a space can have many different bases. For example, following the definition of a basis, we saw three different bases for \mathbb{R}^2. So we cannot talk about "the" basis for a vector space. True, some vector spaces have bases that strike us as more natural than others, for instance, \mathbb{R}^2's basis \mathcal{E}_2 or \mathbb{R}^3's basis \mathcal{E}_3 or \mathcal{P}_2's basis \langle 1,x,x^2 \rangle . But, for example in the space \{a_2x^2+a_1x+a_0\,\big|\, 2a_2-a_0=a_1\}, no particular basis leaps out at us as the most natural one. We cannot, in general, associate with a space any single basis that best describes that space.

We can, however, find something about the bases that is uniquely associated with the space. This subsection shows that any two bases for a space have the same number of elements. So, with each space we can associate a number, the number of vectors in any of its bases.

This brings us back to when we considered the two things that could be meant by the term "minimal spanning set". At that point we defined "minimal" as linearly independent, but we noted that another reasonable interpretation of the term is that a spanning set is "minimal" when it has the fewest number of elements of any set with the same span. At the end of this subsection, after we have shown that all bases have the same number of elements, then we will have shown that the two senses of "minimal" are equivalent.

Before we start, we first limit our attention to spaces where at least one basis has only finitely many members.

Definition 2.1

A vector space is finite-dimensional if it has a basis with only finitely many vectors.

(One reason for sticking to finite-dimensional spaces is so that the representation of a vector with respect to a basis is a finitely-tall vector, and so can be easily written.) From now on we study only finite-dimensional vector spaces. We shall take the term "vector space" to mean "finite-dimensional vector space". Other spaces are interesting and important, but they lie outside of our scope.

To prove the main theorem we shall use a technical result.

Lemma 2.2 (Exchange Lemma)

Assume that  B=\langle \vec{\beta}_1,\dots,\vec{\beta}_n \rangle  is a basis for a vector space, and that for the vector  \vec{v} the relationship  \vec{v}=c_1\vec{\beta}_1+c_2\vec{\beta}_2+\cdots +c_n\vec{\beta}_n has  c_i\neq 0 . Then exchanging  \vec{\beta}_i for  \vec{v} yields another basis for the space.

Proof

Call the outcome of the exchange  \hat{B}=\langle \vec{\beta}_1,\dots,\vec{\beta}_{i-1},\vec{v},\vec{\beta}_{i+1},\dots,\vec{\beta}_n \rangle   .

We first show that \hat{B} is linearly independent. Any relationship  d_1\vec{\beta}_1+\dots+d_i\vec{v}+\dots+d_n\vec{\beta}_n=\vec{0} among the members of \hat{B}, after substitution for \vec{v},


d_1\vec{\beta}_1+\dots
+d_i\cdot(c_1\vec{\beta}_1+\dots+c_i\vec{\beta}_i+\dots+c_n\vec{\beta}_n)
+\dots+d_n\vec{\beta}_n
=\vec{0}
\qquad\qquad(*)

gives a linear relationship among the members of B. The basis B is linearly independent, so the coefficient d_ic_i of \vec{\beta}_i is zero. Because c_i is assumed to be nonzero, d_i=0. Using this in equation (*) above gives that all of the other d's are also zero. Therefore \hat{B} is linearly independent.

We finish by showing that \hat{B} has the same span as B. Half of this argument, that [{\hat{B}}]\subseteq[B], is easy; any member d_1\vec{\beta}_1+\dots+d_i\vec{v}+\dots+d_n\vec{\beta}_n of [{\hat{B}}] can be written d_1\vec{\beta}_1+\dots+d_i\cdot(c_1\vec{\beta}_1+\dots+c_n\vec{\beta}_n)+\dots+d_n\vec{\beta}_n, which is a linear combination of linear combinations of members of B, and hence is in [B]. For the [B]\subseteq[{\hat{B}}] half of the argument, recall that when \vec{v}=c_1\vec{\beta}_1+\dots+c_n\vec{\beta}_n with c_i\neq 0, then the equation can be rearranged to \vec{\beta}_i=(-c_1/c_i)\vec{\beta}_1+\dots+(1/c_i)\vec{v}+\dots+(-c_n/c_i)\vec{\beta}_n. Now, consider any member d_1\vec{\beta}_1+\dots+d_i\vec{\beta}_i+\dots+d_n\vec{\beta}_n of [B], substitute for \vec{\beta}_i its expression as a linear combination of the members of \hat{B}, and recognize (as in the first half of this argument) that the result is a linear combination of linear combinations, of members of \hat{B}, and hence is in [{\hat{B}}].
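
A small numeric illustration of the Exchange Lemma (Python with NumPy; the particular vectors are ours, not the book's): starting from the standard basis of \mathbb{R}^3 and a vector \vec{v} whose second coefficient is nonzero, exchanging \vec{v} for \vec{\beta}_2 leaves a set of full rank, hence again a basis.

# Exchange Lemma illustration: swap v in for a basis vector with nonzero coefficient.
import numpy as np

beta = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
v = 2*beta[0] + 3*beta[1]           # the coefficient of beta_2 is 3, which is nonzero

exchanged = [beta[0], v, beta[2]]   # beta_2 exchanged for v
A = np.column_stack(exchanged)
print(np.linalg.matrix_rank(A))     # 3, so the exchanged set is again a basis of R^3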

Theorem 2.3

In any finite-dimensional vector space, all of the bases have the same number of elements.

Proof

Fix a vector space with at least one finite basis. Choose, from among all of this space's bases, one  B=\langle \vec{\beta}_1,\dots,\vec{\beta}_n \rangle  of minimal size. We will show that any other basis  D={\langle \vec{\delta}_1,\vec{\delta}_2,\ldots \rangle } also has the same number of members, n. Because  B has minimal size,  D has no fewer than  n vectors. We will argue that it cannot have more than  n vectors.

The basis  B spans the space and  \vec{\delta}_1 is in the space, so  \vec{\delta}_1 is a nontrivial linear combination of elements of  B . By the Exchange Lemma,  \vec{\delta}_1 can be swapped for a vector from  B , resulting in a basis  B_1 , where one element is  \vec{\delta}_1 and all of the  n-1 other elements are  \vec{\beta} 's.

The prior paragraph forms the basis step for an induction argument. The inductive step starts with a basis  B_k (for  1\leq k<n ) containing  k members of  D and  n-k members of  B . We know that  D has at least  n members so there is a  \vec{\delta}_{k+1} . Represent it as a linear combination of elements of  B_k . The key point: in that representation, at least one of the nonzero scalars must be associated with a  \vec{\beta}_i or else that representation would be a nontrivial linear relationship among elements of the linearly independent set  D . Exchange  \vec{\delta}_{k+1} for  \vec{\beta}_i to get a new basis  B_{k+1} with one  \vec{\delta} more and one  \vec{\beta} fewer than the previous basis  B_k .

Repeat the inductive step until no  \vec{\beta} 's remain, so that  B_n contains \vec{\delta}_1,\dots,\vec{\delta}_n. Now,  D cannot have more than these  n vectors because any  \vec{\delta}_{n+1} that remains would be in the span of  B_n (since it is a basis) and hence would be a linear combination of the other \vec{\delta}'s, contradicting that D is linearly independent.

Definition 2.4

The dimension of a vector space is the number of vectors in any of its bases.

Example 2.5

Any basis for  \mathbb{R}^n has  n vectors since the standard basis  \mathcal{E}_n has  n vectors. Thus, this definition generalizes the most familiar use of the term, that \mathbb{R}^n is n-dimensional.

Example 2.6

The space  \mathcal{P}_n of polynomials of degree at most n has dimension  n+1 . We can show this by exhibiting any basis— \langle 1,x,\dots,x^n \rangle comes to mind— and counting its members.

Example 2.7

A trivial space is zero-dimensional since its basis is empty.

Again, although we sometimes say "finite-dimensional" as a reminder, in the rest of this book all vector spaces are assumed to be finite-dimensional. An instance of this is that in the next result the word "space" should be taken to mean "finite-dimensional vector space".

Corollary 2.8

No linearly independent set can have a size greater than the dimension of the enclosing space.

Proof

Inspection of the above proof shows that it never uses that  D spans the space, only that  D is linearly independent.

Example 2.9

Recall the subspace diagram from the prior section showing the subspaces of  \mathbb{R}^3 . Each subspace shown is described with a minimal spanning set, for which we now have the term "basis". The whole space has a basis with three members, the plane subspaces have bases with two members, the line subspaces have bases with one member, and the trivial subspace has a basis with zero members. When we saw that diagram we could not show that these are the only subspaces that this space has. We can show it now. The prior corollary proves that the only subspaces of  \mathbb{R}^3 are either three-, two-, one-, or zero-dimensional. Therefore, the diagram indicates all of the subspaces. There are no subspaces somehow, say, between lines and planes.

Corollary 2.10

Any linearly independent set can be expanded to make a basis.

Proof

If a linearly independent set is not already a basis then it does not span the space. Adding to it a vector that is outside the span preserves linear independence. Keep adding vectors in this way until the resulting set does span the space; the prior corollary shows that this happens after only a finite number of steps.
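
For column vectors in \mathbb{R}^n this expansion can be sketched by machine. The sketch below uses the SymPy library; the function name expand_to_basis and the choice to draw candidate vectors from the standard basis are ours, not part of the book.

from sympy import Matrix

def expand_to_basis(vectors, n):
    # vectors: a linearly independent list of column vectors in R^n (SymPy Matrices)
    basis = list(vectors)
    for i in range(n):
        e = Matrix.eye(n).col(i)          # candidate: the standard basis vector e_(i+1)
        if Matrix.hstack(*(basis + [e])).rank() > len(basis):
            basis.append(e)               # e lies outside the current span, so keep it
    return basis

# Example: expand the independent set {(1,1,0)^T} to a basis of R^3.
for v in expand_to_basis([Matrix([1, 1, 0])], 3):
    print(list(v))

A vector is added only when it enlarges the span, exactly as in the proof, and the loop stops because \mathbb{R}^n is spanned by the n standard basis vectors.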

Corollary 2.11

Any spanning set can be shrunk to a basis.

Proof

Call the spanning set  S . If  S is empty then it is already a basis (the space must be a trivial space). If  S=\{\vec{0}\} then it can be shrunk to the empty basis, thereby making it linearly independent, without changing its span.

Otherwise, S contains a vector \vec{s}_1 with \vec{s}_1\neq\vec{0} and we can form a basis  B_1=\langle \vec{s}_1 \rangle . If  [B_1]=[S] then we are done.

If not then there is a \vec{s}_2\in S such that \vec{s}_2\not\in[B_1] (if every member of S were in [B_1] then we would have [S]\subseteq[B_1]). Let B_2=\langle \vec{s}_1,\vec{s}_2 \rangle; if [B_2]=[S] then we are done.

We can repeat this process until the spans are equal. Because each B_k is linearly independent, Corollary 2.8 guarantees that the process stops after only finitely many steps.
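
The shrinking can be sketched in the same way (again with SymPy, and again with a function name of our own): walk through the spanning vectors and keep one only when it is not already in the span of those kept so far.

from sympy import Matrix

def shrink_to_basis(spanning):
    # spanning: column vectors (SymPy Matrices) that span some subspace of R^n
    basis = []
    for v in spanning:
        if Matrix.hstack(*(basis + [v])).rank() > len(basis):
            basis.append(v)               # v adds a new direction, so keep it
    return basis

# Example: three vectors spanning a plane in R^3; the routine keeps two of them.
S = [Matrix([1, 0, 1]), Matrix([2, 0, 2]), Matrix([0, 1, 0])]
print([list(v) for v in shrink_to_basis(S)])    # [[1, 0, 1], [0, 1, 0]]

Note that the zero vector is never kept, since it does not raise the rank, which matches the special case handled at the start of the proof.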

Corollary 2.12

In an  n -dimensional space, a set of  n vectors is linearly independent if and only if it spans the space.

Proof

First we will show that a subset with  n vectors is linearly independent if and only if it is a basis. "If" is trivially true— bases are linearly independent. "Only if" holds because a linearly independent set can be expanded to a basis, but a basis has  n elements, so this expansion is actually the set that we began with.

To finish, we will show that any subset with  n vectors spans the space if and only if it is a basis. Again, "if" is trivial. "Only if" holds because any spanning set can be shrunk to a basis, but a basis has  n elements and so this shrunken set is just the one we started with.

The main result of this subsection, that all of the bases in a finite-dimensional vector space have the same number of elements, is the single most important result in this book because, as Example 2.9 shows, it describes what vector spaces and subspaces there can be. We will see more in the next chapter.

Remark 2.13

The case of infinite-dimensional vector spaces is somewhat controversial. The statement "any infinite-dimensional vector space has a basis" is known to be equivalent to a statement called the Axiom of Choice (see Blass 1984). Mathematicians differ philosophically on whether to accept or reject this statement as an axiom on which to base mathematics (although the great majority seem to accept it). Consequently the question about infinite-dimensional vector spaces is still somewhat up in the air. (A discussion of the Axiom of Choice can be found in the Frequently Asked Questions list for the Usenet group sci.math; another accessible reference is Rucker 1982.)

Exercises

Assume that all spaces are finite-dimensional unless otherwise stated.

This exercise is recommended for all readers.
Problem 1

Find a basis for, and the dimension of,   \mathcal{P}_2 .

Problem 2

Find a basis for, and the dimension of, the solution set of this system.


\begin{array}{*{4}{rc}r}
x_1  &-  &4x_2  &+  &3x_3  &-  &x_4  &=  &0  \\
2x_1  &-  &8x_2  &+  &6x_3  &-  &2x_4 &=  &0  
\end{array}
This exercise is recommended for all readers.
Problem 3

Find a basis for, and the dimension of,  \mathcal{M}_{2 \! \times \! 2} , the vector space of  2 \! \times \! 2 matrices.

Problem 4

Find the dimension of the vector space of matrices


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}

subject to each condition.

  1. a, b, c, d\in\mathbb{R}
  2. a-b+2c=0 and d\in\mathbb{R}
  3. a+b+c=0, a+b-c=0, and d\in\mathbb{R}
This exercise is recommended for all readers.
Problem 5

Find the dimension of each.

  1. The space of cubic polynomials p(x) such that p(7)=0
  2. The space of cubic polynomials p(x) such that p(7)=0 and p(5)=0
  3. The space of cubic polynomials p(x) such that p(7)=0, p(5)=0, and p(3)=0
  4. The space of cubic polynomials p(x) such that p(7)=0, p(5)=0, p(3)=0, and p(1)=0
Problem 6

What is the dimension of the span of the set \{\cos^2\theta,\sin^2\theta,\cos2\theta,\sin2\theta\}? This span is a subspace of the space of all real-valued functions of one real variable.

Problem 7

Find the dimension of  \mathbb{C}^{47} , the vector space of 47-tuples of complex numbers.

Problem 8

What is the dimension of the vector space \mathcal{M}_{3 \! \times \! 5} of  3 \! \times \! 5 matrices?

This exercise is recommended for all readers.
Problem 9

Show that this is a basis for \mathbb{R}^4.


\langle \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}  \rangle

(The results of this subsection can be used to simplify this job.)

Problem 10

Refer to Example 2.9.

  1. Sketch a similar subspace diagram for \mathcal{P}_2.
  2. Sketch one for \mathcal{M}_{2 \! \times \! 2}.
This exercise is recommended for all readers.
Problem 11
Where  S is a set, the functions  f:S\to\mathbb{R} form a vector space under the natural operations: the sum f+g is the function given by  (f+g)\,(s)=f(s)+g(s) and the scalar product is given by  (r\cdot f) \, (s)=r\cdot f(s) . What is the dimension of the space resulting for each domain?
  1.  S=\{1\}
  2.  S=\{1,2\}
  3.  S=\{1,\ldots,n\}
Problem 12

(See Problem 11.) Prove that this is an infinite-dimensional space: the set of all functions  f:\mathbb{R}\to\mathbb{R} under the natural operations.

Problem 13

(See Problem 11.) What is the dimension of the vector space of functions f:S\to\mathbb{R}, under the natural operations, where the domain S is the empty set?

Problem 14

Show that any set of four vectors in  \mathbb{R}^2 is linearly dependent.

Problem 15

Show that the set  \langle \vec{\alpha}_1,\vec{\alpha}_2,\vec{\alpha}_3 \rangle \subset\mathbb{R}^3 is a basis if and only if there is no plane through the origin containing all three vectors.

Problem 16
  1. Prove that any subspace of a finite dimensional space has a basis.
  2. Prove that any subspace of a finite dimensional space is finite dimensional.
Problem 17

Where is the finiteness of  B used in Theorem 2.3?

This exercise is recommended for all readers.
Problem 18

Prove that if  U and  W are both three-dimensional subspaces of  \mathbb{R}^5 then  U\cap W is non-trivial. Generalize.

Problem 19

Because a basis for a space is a subset of that space, we are naturally led to how the property "is a basis" interacts with set operations.

  1. Consider first how bases might be related by "subset". Assume that  U,W are subspaces of some vector space and that  U\subseteq W . Can there exist bases  B_U for  U and  B_W for  W such that  B_U\subseteq B_W ? Must such bases exist? For any basis  B_U for  U , must there be a basis  B_W for  W such that  B_U\subseteq B_W ? For any basis  B_W for  W , must there be a basis  B_U for  U such that  B_U\subseteq B_W ? For any bases  B_U, B_W for  U and  W , must  B_U be a subset of  B_W ?
  2. Is the intersection of bases a basis? For what space?
  3. Is the union of bases a basis? For what space?
  4. What about complement?

(Hint. Test any conjectures against some subspaces of  \mathbb{R}^3 .)

This exercise is recommended for all readers.
Problem 20

Consider how "dimension" interacts with "subset". Assume  U and  W are both subspaces of some vector space, and that  U\subseteq W .

  1. Prove that  \dim (U)\leq\dim (W) .
  2. Prove that equality of dimension holds if and only if  U=W .
  3. Show that the prior item does not hold if they are infinite-dimensional.
? Problem 21

For any vector \vec{v} in \mathbb{R}^n and any permutation \sigma of the numbers 1, 2, ..., n (that is, \sigma is a rearrangement of those numbers into a new order), define \sigma(\vec{v}) to be the vector whose components are v_{\sigma(1)}, v_{\sigma(2)}, ..., and v_{\sigma(n)} (where \sigma(1) is the first number in the rearrangement, etc.). Now fix \vec{v} and let V be the span of \{\sigma(\vec{v})\,\big|\, \sigma\text{ permutes }1, \ldots, n\}. What are the possibilities for the dimension of V? (Gilbert, Krusemeyer & Larson 1993, Problem 47)


3 - Vector Spaces and Linear Systems

We will now reconsider linear systems and Gauss' method, aided by the tools and terms of this chapter. We will make three points.

For the first point, recall the first chapter's Linear Combination Lemma and its corollary: if two matrices are related by row operations A\longrightarrow\cdots\longrightarrow B then each row of B is a linear combination of the rows of A. That is, Gauss' method works by taking linear combinations of rows. Therefore, the right setting in which to study row operations in general, and Gauss' method in particular, is the following vector space.

Definition 3.1

The row space of a matrix is the span of the set of its rows. The row rank is the dimension of the row space, the number of linearly independent rows.

Example 3.2

If


A=\begin{pmatrix}
2  &3  \\
4  &6
\end{pmatrix}

then  \mathop{{\mbox{Rowspace}}}(A) is this subspace of the space of two-component row vectors.


\{c_1\cdot\begin{pmatrix} 2 &3 \end{pmatrix}+c_2\cdot\begin{pmatrix} 4  &6 \end{pmatrix}
\,\big|\, c_1,c_2\in\mathbb{R} \}

The linear dependence of the second on the first is obvious and so we can simplify this description to \{c\cdot\begin{pmatrix} 2 &3 \end{pmatrix}\,\big|\, c\in\mathbb{R} \}.

Lemma 3.3

If the matrices  A and  B are related by a row operation


A\xrightarrow[]{\rho_i\leftrightarrow\rho_j}B 
\quad\text{or}\quad
A\xrightarrow[]{k\rho_i}B 
\quad\text{or}\quad
A\xrightarrow[]{k\rho_i+\rho_j}B

(for i\neq j and k\neq 0) then their row spaces are equal. Hence, row-equivalent matrices have the same row space and therefore also the same row rank.

Proof

By the Linear Combination Lemma's corollary, each row of B is in the row space of A. Further, \mathop{{\mbox{Rowspace}}}(B)\subseteq\mathop{{\mbox{Rowspace}}}(A) because a member of the set \mathop{{\mbox{Rowspace}}}(B) is a linear combination of the rows of B, which means it is a linear combination of linear combinations of the rows of A, and hence, by the Linear Combination Lemma, is also a member of \mathop{{\mbox{Rowspace}}}(A).

For the other containment, recall that row operations are reversible: A\longrightarrow B if and only if B\longrightarrow A. With that, \mathop{{\mbox{Rowspace}}}(A)\subseteq\mathop{{\mbox{Rowspace}}}(B) also follows from the prior paragraph, and so the two sets are equal.

Thus, row operations leave the row space unchanged. But of course, Gauss' method performs the row operations systematically, with a specific goal in mind, echelon form.

Lemma 3.4

The nonzero rows of an echelon form matrix make up a linearly independent set.

Proof

A result in the first chapter, Lemma One.III.2.5, states that in an echelon form matrix, no nonzero row is a linear combination of the other rows. This is a restatement of that result into new terminology.

Thus, in the language of this chapter, Gaussian reduction works by eliminating linear dependences among rows, leaving the span unchanged, until no nontrivial linear relationships remain (among the nonzero rows). That is, Gauss' method produces a basis for the row space.

Example 3.5

From any matrix, we can produce a basis for the row space by performing Gauss' method and taking the nonzero rows of the resulting echelon form matrix. For instance,

\begin{array}{rcl}
\begin{pmatrix}
1  &3  &1  \\
1  &4  &1  \\
2  &0  &5
\end{pmatrix}
&\xrightarrow[-2\rho_1+\rho_3]{-\rho_1+\rho_2}
\;\xrightarrow[]{6\rho_2+\rho_3}
&\begin{pmatrix}
1  &3  &1  \\
0  &1  &0  \\
0  &0  &3
\end{pmatrix}
\end{array}

produces the basis \langle \begin{pmatrix} 1 &3 &1 \end{pmatrix},
\begin{pmatrix} 0 &1 &0 \end{pmatrix},
\begin{pmatrix} 0 &0 &3 \end{pmatrix}  \rangle for the row space. This is a basis for the row space of both the starting and ending matrices, since the two row spaces are equal.
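
For readers who want to check such computations by machine, here is a sketch of this example with the SymPy library (an aside; SymPy's echelon_form may scale rows differently than the hand reduction above, but its nonzero rows are still a basis for the same row space).

from sympy import Matrix

A = Matrix([[1, 3, 1],
            [1, 4, 1],
            [2, 0, 5]])
E = A.echelon_form()
row_basis = [E.row(i) for i in range(E.rows) if any(E.row(i))]   # the nonzero rows
print(row_basis)          # three independent rows, so the row rank is 3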

Using this technique, we can also find bases for spans not directly involving row vectors.

Definition 3.6

The column space of a matrix is the span of the set of its columns. The column rank is the dimension of the column space, the number of linearly independent columns.

Our interest in column spaces stems from our study of linear systems. An example is that this system


\begin{array}{*{3}{rc}r}
c_1  &+  &3c_2  &+  &7c_3  &=  &d_1  \\
2c_1  &+  &3c_2  &+  &8c_3  &=  &d_2  \\
&   &c_2   &+  &2c_3  &=  &d_3  \\
4c_1  &   &      &+  &4c_3  &=  &d_4   
\end{array}

has a solution if and only if the vector of  d 's is a linear combination of the other column vectors,


c_1\begin{pmatrix} 1 \\ 2 \\ 0 \\ 4 \end{pmatrix}
+c_2\begin{pmatrix} 3 \\ 3 \\ 1 \\ 0 \end{pmatrix}
+c_3\begin{pmatrix} 7 \\ 8 \\ 2 \\ 4 \end{pmatrix}
=\begin{pmatrix} d_1 \\ d_2 \\ d_3 \\ d_4 \end{pmatrix}

meaning that the vector of  d 's is in the column space of the matrix of coefficients.

Example 3.7

Given this matrix,


\begin{pmatrix}
1  &3  &7  \\
2  &3  &8  \\
0  &1  &2  \\
4  &0  &4
\end{pmatrix}

to get a basis for the column space, temporarily turn the columns into rows and reduce.

\begin{array}{rcl}
\begin{pmatrix}
1  &2  &0  &4  \\
3  &3  &1  &0  \\
7  &8  &2  &4
\end{pmatrix}
&\xrightarrow[-7\rho_1+\rho_3]{-3\rho_1+\rho_2}
\;\xrightarrow[]{-2\rho_2+\rho_3}
&\begin{pmatrix}
1  &2  &0  &4  \\
0  &-3 &1  &-12\\
0  &0  &0  &0
\end{pmatrix}
\end{array}

Now turn the rows back to columns.


\langle 
\begin{pmatrix} 1 \\ 2 \\ 0 \\ 4 \end{pmatrix},
\begin{pmatrix} 0 \\ -3 \\ 1 \\ -12 \end{pmatrix}  \rangle

The result is a basis for the column space of the given matrix.
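
This computation can be sketched with SymPy: reduce the transpose, then turn the rows back into columns (the rows may come out scaled differently than above, but they span the same column space).

from sympy import Matrix

A = Matrix([[1, 3, 7],
            [2, 3, 8],
            [0, 1, 2],
            [4, 0, 4]])
E = A.T.echelon_form()                          # reduce the transpose
col_basis = [E.row(i).T for i in range(E.rows)
             if any(E.row(i))]                  # turn the rows back into columns
print(len(col_basis), A.rank())                 # both print 2, the column rank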

Definition 3.8

The transpose of a matrix is the result of interchanging the rows and columns of that matrix. That is, column  j of the matrix  A is row  j of  {{A}^{\rm trans}} , and vice versa.

So the instructions for the prior example are "transpose, reduce, and transpose back".

We can even, at the price of tolerating the as-yet-vague idea of vector spaces being "the same", use Gauss' method to find bases for spans in other types of vector spaces.

Example 3.9

To get a basis for the span of  \{x^2+x^4,2x^2+3x^4,-x^2-3x^4\} in the space  \mathcal{P}_4 , think of these three polynomials as "the same" as the row vectors  \begin{pmatrix} 0 &0 &1 &0 &1 \end{pmatrix} ,  \begin{pmatrix} 0 &0 &2 &0 &3 \end{pmatrix} , and  \begin{pmatrix} 0 &0 &-1 &0 &-3 \end{pmatrix} , apply Gauss' method

\begin{array}{rcl}
\begin{pmatrix}
0  &0  &1  &0  &1  \\
0  &0  &2  &0  &3  \\
0  &0  &-1 &0  &-3
\end{pmatrix}
&\xrightarrow[\rho_1+\rho_3]{-2\rho_1+\rho_2}
\;\xrightarrow[]{2\rho_2+\rho_3}
&\begin{pmatrix}
0  &0  &1  &0  &1  \\
0  &0  &0  &0  &1  \\
0  &0  &0  &0  &0
\end{pmatrix}
\end{array}

and translate back to get the basis  \langle x^2+x^4,x^4 \rangle  . (As mentioned earlier, we will make the phrase "the same" precise at the start of the next chapter.)
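
A sketch of the same computation with SymPy, using the coefficient rows from above (the final line translates the rows back into polynomials; the printed basis should agree with \langle x^2+x^4,x^4 \rangle, possibly up to scaling).

from sympy import Matrix, symbols

x = symbols('x')
# coefficient rows: constant term first, then x, x^2, x^3, x^4
M = Matrix([[0, 0, 1, 0, 1],      # x^2 + x^4
            [0, 0, 2, 0, 3],      # 2x^2 + 3x^4
            [0, 0, -1, 0, -3]])   # -x^2 - 3x^4
E = M.echelon_form()
basis = [sum(c * x**j for j, c in enumerate(E.row(i)))
         for i in range(E.rows) if any(E.row(i))]
print(basis)                      # expected: [x**4 + x**2, x**4]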

Thus, our first point in this subsection is that the tools of this chapter give us a more conceptual understanding of Gaussian reduction.

For the second point of this subsection, consider the effect on the column space of this row reduction.

\begin{array}{rcl}
\begin{pmatrix}
1  &2  \\
2  &4
\end{pmatrix}
&\xrightarrow[]{-2\rho_1+\rho_2}
&\begin{pmatrix}
1  &2  \\
0  &0
\end{pmatrix}
\end{array}

The column space of the left-hand matrix contains vectors with a second component that is nonzero, while the column space of the right-hand matrix contains only vectors whose second component is zero. It is this knowledge, that row operations can change the column space, that makes the next result surprising.

Lemma 3.10

Row operations do not change the column rank.

Proof

Restated, if A reduces to B then the column rank of B equals the column rank of A.

We will be done if we can show that row operations do not affect linear relationships among columns (e.g., if the fifth column is twice the second plus the fourth before a row operation then that relationship still holds afterwards), because the column rank is just the size of the largest set of unrelated columns. But this is exactly the first theorem of this book: in a relationship among columns,


c_1\cdot\begin{pmatrix} a_{1,1} \\ a_{2,1} \\ \vdots \\ a_{m,1} \end{pmatrix}
+\dots+
c_n\cdot \begin{pmatrix} a_{1,n} \\ a_{2,n} \\ \vdots \\ a_{m,n} \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}

row operations leave unchanged the set of solutions  (c_1,\ldots,c_n) .

Another way, besides the prior result, to state that Gauss' method has something to say about the column space as well as about the row space is to consider again Gauss-Jordan reduction. Recall that it ends with the reduced echelon form of a matrix, as here.

\begin{array}{rcl}
\begin{pmatrix}
1  &3  &1  &6  \\
2  &6  &3  &16 \\
1  &3  &1  &6
\end{pmatrix}
&\xrightarrow[]{}\;\cdots\;\xrightarrow[]{}
&\begin{pmatrix}
1  &3  &0  &2  \\
0  &0  &1  &4  \\
0  &0  &0  &0
\end{pmatrix}
\end{array}

Consider the row space and the column space of this result. Our first point made above says that a basis for the row space is easy to get: simply collect together all of the rows with leading entries. However, because this is a reduced echelon form matrix, a basis for the column space is just as easy: take the columns containing the leading entries, that is,  \langle \vec{e}_1,\vec{e}_2 \rangle  . (Linear independence is obvious. The other columns are in the span of this set, since they all have a third component of zero.) Thus, for a reduced echelon form matrix, bases for the row and column spaces can be found in essentially the same way— by taking the parts of the matrix, the rows or columns, containing the leading entries.
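
SymPy's rref packages both observations at once: it returns the reduced echelon form together with the indices of the pivot columns (a sketch, run on the matrix above).

from sympy import Matrix

A = Matrix([[1, 3, 1, 6],
            [2, 6, 3, 16],
            [1, 3, 1, 6]])
R, pivots = A.rref()
print(pivots)                                        # (0, 2): leading entries in the first and third columns
row_basis = [R.row(i) for i in range(len(pivots))]   # the nonzero rows: a basis for the row space
col_basis = [R.col(j) for j in pivots]               # e_1 and e_2: a basis for R's column space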

Theorem 3.11

The row rank and column rank of a matrix are equal.

Proof

First bring the matrix to reduced echelon form. At that point, the row rank equals the number of leading entries since each equals the number of nonzero rows. Also at that point, the number of leading entries equals the column rank because the set of columns containing leading entries consists of some of the  \vec{e}_i 's from a standard basis, and that set is linearly independent and spans the column space. Hence, in the reduced echelon form matrix, the row rank equals the column rank, because each equals the number of leading entries.

But Lemma 3.3 and Lemma 3.10 show that the row rank and column rank are not changed by using row operations to get to reduced echelon form. Thus the row rank and the column rank of the original matrix are also equal.
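
A quick machine spot check of the theorem (not a proof): for random integer matrices, row-reducing A and row-reducing its transpose produce the same number of pivots.

from sympy import randMatrix

for _ in range(5):
    A = randMatrix(3, 5, min=-4, max=4)      # a random 3x5 integer matrix
    assert A.rank() == A.T.rank()            # row rank of A equals row rank of its transpose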

Definition 3.12

The rank of a matrix is its row rank or column rank.

So our second point in this subsection is that the column space and row space of a matrix have the same dimension. Our third and final point is that the concepts that we've seen arising naturally in the study of vector spaces are exactly the ones that we have studied with linear systems.

Theorem 3.13

For linear systems with  n unknowns and with matrix of coefficients  A , the statements

  1. the rank of  A is  r
  2. the space of solutions of the associated homogeneous system has dimension  n-r

are equivalent.


So if the system has at least one particular solution then for the set of solutions, the number of parameters equals n-r, the number of variables minus the rank of the matrix of coefficients.

Proof

The rank of  A is  r if and only if Gaussian reduction on  A ends with  r nonzero rows. That's true if and only if echelon form matrices row equivalent to  A have  r -many leading variables. That in turn holds if and only if there are  n-r free variables.
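
A sketch of the theorem on the reduced echelon form example from above, using SymPy's nullspace routine, which returns a basis for the solution set of the associated homogeneous system.

from sympy import Matrix

A = Matrix([[1, 3, 1, 6],
            [2, 6, 3, 16],
            [1, 3, 1, 6]])
r = A.rank()                      # r = 2
null_basis = A.nullspace()        # a basis for { x | Ax = 0 }
print(r, len(null_basis))         # prints "2 2"; the dimensions sum to n = 4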

Remark 3.14
(Munkres 1964)

Sometimes that result is mistakenly remembered to say that the general solution of an n-unknown system of m equations uses n-m parameters. The number of equations is not the relevant figure; rather, what matters is the number of independent equations (the number of equations in a maximal independent set). Where there are r independent equations, the general solution involves n-r parameters.

Corollary 3.15

Where the matrix A is  n \! \times \! n , the statements

  1. the rank of  A is  n
  2.  A is nonsingular
  3. the rows of  A form a linearly independent set
  4. the columns of  A form a linearly independent set
  5. any linear system whose matrix of coefficients is  A has one and only one solution

are equivalent.

Proof

Clearly  \text{(1)}\iff\text{(2)}\iff\text{(3)}\iff\text{(4)} . The last,  \text{(4)}\iff\text{(5)} , holds because a set of  n column vectors is linearly independent if and only if it is a basis for  \mathbb{R}^n , but the system


c_1\begin{pmatrix} a_{1,1} \\ a_{2,1} \\ \vdots \\ a_{n,1} \end{pmatrix}
+\dots+
c_n\begin{pmatrix} a_{1,n} \\ a_{2,n} \\ \vdots \\ a_{n,n} \end{pmatrix}
=\begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}

has a unique solution for all choices of  d_1,\dots,d_n\in\mathbb{R} if and only if the vectors of  a 's form a basis.
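
A sketch of the corollary on a small example with SymPy; linsolve returns the solution set of the system with the given right-hand side.

from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 2],
            [3, 4]])
print(A.rank())                                   # 2 = n, so A is nonsingular
x, y = symbols('x y')
print(linsolve((A, Matrix([5, 6])), x, y))        # exactly one solution: {(-4, 9/2)}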

Exercises

Problem 1

Transpose each.

  1.  \begin{pmatrix}
2  &1  \\
3  &1
\end{pmatrix}
  2.  \begin{pmatrix}
2  &1  \\
1  &3
\end{pmatrix}
  3.  \begin{pmatrix}
1  &4  &3 \\
6  &7  &8
\end{pmatrix}
  4.  \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
  5.  \begin{pmatrix} -1 &-2 \end{pmatrix}
This exercise is recommended for all readers.
Problem 2

Decide if the vector is in the row space of the matrix.

  1.  \begin{pmatrix}
2  &1  \\
3  &1
\end{pmatrix}  ,  \begin{pmatrix} 1 &0 \end{pmatrix}
  2.  \begin{pmatrix}
0  &1  &3  \\
-1  &0  &1  \\
-1  &2  &7
\end{pmatrix}  ,  \begin{pmatrix} 1 &1 &1 \end{pmatrix}
This exercise is recommended for all readers.
Problem 3

Decide if the vector is in the column space.

  1.  \begin{pmatrix}
1  &1  \\
1  &1
\end{pmatrix}  ,  \begin{pmatrix} 1 \\ 3 \end{pmatrix}
  2.  \begin{pmatrix}
1  &3  &1 \\
2  &0  &4 \\
1  &-3 &-3
\end{pmatrix}  ,  \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
This exercise is recommended for all readers.
Problem 4

Find a basis for the row space of this matrix.



\begin{pmatrix}
2  &0  &3  &4  \\
0  &1  &1  &-1 \\
3  &1  &0  &2  \\
1  &0  &-4 &-1
\end{pmatrix}
This exercise is recommended for all readers.
Problem 5

Find the rank of each matrix.

  1. 
\begin{pmatrix}
2  &1  &3  \\
1  &-1 &2  \\
1  &0  &3
\end{pmatrix}
  2. 
\begin{pmatrix}
1  &-1 &2  \\
3  &-3 &6  \\
-2  &2  &-4
\end{pmatrix}
  3. 
\begin{pmatrix}
1  &3  &2  \\
5  &1  &1  \\
6  &4  &3
\end{pmatrix}
  4. 
\begin{pmatrix}
0  &0  &0  \\
0  &0  &0  \\
0  &0  &0
\end{pmatrix}
This exercise is recommended for all readers.
Problem 6

Find a basis for the span of each set.

  1. 
\{\begin{pmatrix} 1 &3 \end{pmatrix},
\begin{pmatrix} -1 &3 \end{pmatrix},
\begin{pmatrix} 1 &4 \end{pmatrix},
\begin{pmatrix} 2 &1 \end{pmatrix}  \}\subseteq\mathcal{M}_{1 \! \times \! 2}
  2. 
\{\begin{pmatrix} 1 \\2 \\1 \end{pmatrix},
\begin{pmatrix} 3 \\ 1 \\ -1 \end{pmatrix},
\begin{pmatrix} 1 \\ -3 \\ -3 \end{pmatrix}  \}\subseteq\mathbb{R}^3
  3.   \{1+x,1-x^2,3+2x-x^2\}\subseteq\mathcal{P}_3
  4.  \{
\begin{pmatrix}
1  &0  &1  \\
3  &1  &-1
\end{pmatrix},
\begin{pmatrix}
1  &0  &3  \\
2  &1  &4
\end{pmatrix},
\begin{pmatrix}
-1  &0  &-5 \\
-1  &-1 &-9
\end{pmatrix}  \}  \subseteq\mathcal{M}_{2 \! \times \! 3}
Problem 7

Which matrices have rank zero? Rank one?

This exercise is recommended for all readers.
Problem 8

Given  a,b,c\in\mathbb{R} , what choice of  d will cause this matrix to have rank one?


\begin{pmatrix}
a  &b  \\
c  &d
\end{pmatrix}
Problem 9

Find the column rank of this matrix.


\begin{pmatrix}
1  &3  &-1  &5  &0  &4  \\
2  &0  &1   &0  &4  &1
\end{pmatrix}
Problem 10

Show that a linear system with at least one solution has at most one solution if and only if the matrix of coefficients has rank equal to the number of its columns.

This exercise is recommended for all readers.
Problem 11

If a matrix is  5 \! \times \! 9 , which set must be dependent, its set of rows or its set of columns?

Problem 12

Give an example to show that, although they have the same dimension, the row space and column space of a matrix need not be equal. Are they ever equal?

Problem 13

Show that the set  \{(1,-1,2,-3),(1,1,2,0),(3,-1,6,-6)\} does not have the same span as  \{(1,0,1,0),(0,2,0,3)\} . What, by the way, is the vector space?

This exercise is recommended for all readers.
Problem 14

Show that this set of column vectors


\left\{\begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix}
\,\big|\,
\text{there are }x, y, \text{ and } z \text{ such that }
\begin{array}{*{3}{rc}r}
3x  &+  &2y  &+  &4z  &=   &d_1   \\
x  &   &    &-  &z   &=   &d_2   \\
2x  &+  &2y  &+  &5z  &=   &d_3   
\end{array}
\right\}

is a subspace of  \mathbb{R}^3 . Find a basis.

Problem 15

Show that the transpose operation is linear:


{{(rA+sB)}^{\rm trans}}  = r{{A}^{\rm trans}}+s{{B}^{\rm trans}}

for  r,s\in\mathbb{R} and  A,B\in\mathcal{M}_{m \! \times \! n} .

This exercise is recommended for all readers.
Problem 16

In this subsection we have shown that Gaussian reduction finds a basis for the row space.

  1. Show that this basis is not unique— different reductions may yield different bases.
  2. Produce matrices with equal row spaces but unequal numbers of rows.
  3. Prove that two matrices have equal row spaces if and only if after Gauss-Jordan reduction they have the same nonzero rows.
Problem 17

Why is there not a problem with Remark 3.14 in the case that  r is bigger than  n ?

Problem 18

Show that the row rank of an  m \! \times \! n matrix is at most  m . Is there a better bound?

This exercise is recommended for all readers.
Problem 19

Show that the rank of a matrix equals the rank of its transpose.

Problem 20

True or false: the column space of a matrix equals the row space of its transpose.

This exercise is recommended for all readers.
Problem 21

We have seen that a row operation may change the column space. Must it?

Problem 22

Prove that a linear system has a solution if and only if that system's matrix of coefficients has the same rank as its augmented matrix.

Problem 23

An  m \! \times \! n matrix has full row rank if its row rank is  m , and it has full column rank if its column rank is  n .

  1. Show that a matrix can have both full row rank and full column rank only if it is square.
  2. Prove that the linear system with matrix of coefficients  A has a solution for any  d_1 , ...,  d_n 's on the right side if and only if