
Linear Algebra/Mechanics of Matrix Multiplication


In this subsection we consider matrix multiplication as a mechanical process, putting aside for the moment any implications about the underlying maps. As described earlier, the striking thing about matrix multiplication is the way rows and columns combine. The $i,j$ entry of the matrix product is the dot product of row $i$ of the left matrix with column $j$ of the right one. For instance, here a second row and a third column combine to make a $2,3$ entry.
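For instance, with entries chosen for illustration, the $2,3$ entry below is the dot product of the left matrix's second row with the right matrix's third column, $0\cdot 6 + 1\cdot 9 = 9$.

$$\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}4&5&6\\7&8&9\end{pmatrix} = \begin{pmatrix}11&13&15\\7&8&9\end{pmatrix}$$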

We can view this as the left matrix acting by multiplying its rows, one at a time, into the columns of the right matrix. Of course, another perspective is that the right matrix uses its columns to act on the left matrix's rows. Below, we will examine actions from the left and from the right for some simple matrices.

The first case, the action of a zero matrix, is very easy.

Example 3.1

Multiplying by an appropriately-sized zero matrix from the left or from the right results in a zero matrix.
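For instance (any sizes that make the product defined behave the same way):

$$\begin{pmatrix}0&0\\0&0\end{pmatrix}\begin{pmatrix}1&3&2\\-1&1&-1\end{pmatrix} = \begin{pmatrix}0&0&0\\0&0&0\end{pmatrix}$$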

After zero matrices, the matrices whose actions are easiest to understand are the ones with a single nonzero entry.

Definition 3.2

A matrix with all zeroes except for a one in the $i,j$ entry is an $i,j$ unit matrix.

Example 3.3

This is a unit matrix with three rows and two columns, multiplying from the left.
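A representative instance, with the one placed in the $1,2$ entry and a multiplicand whose entries are chosen for illustration:

$$\begin{pmatrix}0&1\\0&0\\0&0\end{pmatrix}\begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}7&8\\0&0\\0&0\end{pmatrix}$$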

Acting from the left, an $i,j$ unit matrix copies row $j$ of the multiplicand into row $i$ of the result. From the right an $i,j$ unit matrix copies column $i$ of the multiplicand into column $j$ of the result.

Example 3.4

Rescaling these matrices simply rescales the result. This is the action from the left of the matrix that is twice the one in the prior example.
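With the same illustrative multiplicand as above:

$$\begin{pmatrix}0&2\\0&0\\0&0\end{pmatrix}\begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}14&16\\0&0\\0&0\end{pmatrix}$$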

And this is the action of the matrix that is minus three times the one from the prior example.
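$$\begin{pmatrix}0&-3\\0&0\\0&0\end{pmatrix}\begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}-21&-24\\0&0\\0&0\end{pmatrix}$$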

Next in complication are matrices with two nonzero entries. There are two cases. If a left-multiplier has entries in different rows then their actions don't interact.

Example 3.5
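An illustrative instance (the entries here are chosen for demonstration): the $2$ acts on the multiplicand's first row and the $-3$ on its second row, and the results land in different rows.

$$\begin{pmatrix}2&0\\0&0\\0&-3\end{pmatrix}\begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}10&12\\0&0\\-21&-24\end{pmatrix}$$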

But if the left-multiplier's nonzero entries are in the same row then that row of the result is a combination.

Example 3.6
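With entries again chosen for illustration, row $1$ of the result is the combination $2\cdot(\text{row }1) - 3\cdot(\text{row }2)$ of the multiplicand's rows.

$$\begin{pmatrix}2&-3\\0&0\\0&0\end{pmatrix}\begin{pmatrix}5&6\\7&8\end{pmatrix} = \begin{pmatrix}2\cdot 5-3\cdot 7 & 2\cdot 6-3\cdot 8\\0&0\\0&0\end{pmatrix} = \begin{pmatrix}-11&-12\\0&0\\0&0\end{pmatrix}$$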

Right-multiplication acts in the same way, with columns.

These observations about matrices that are mostly zeroes extend to arbitrary matrices.

Lemma 3.7

In a product of two matrices $G$ and $H$, the columns of $GH$ are formed by taking $G$ times the columns of $H$

$$G\cdot\begin{pmatrix}\vdots & & \vdots\\ \vec{h}_1 & \cdots & \vec{h}_n\\ \vdots & & \vdots\end{pmatrix} = \begin{pmatrix}\vdots & & \vdots\\ G\cdot\vec{h}_1 & \cdots & G\cdot\vec{h}_n\\ \vdots & & \vdots\end{pmatrix}$$

and the rows of $GH$ are formed by taking the rows of $G$ times $H$

$$\begin{pmatrix}\cdots & \vec{g}_1 & \cdots\\ & \vdots &\\ \cdots & \vec{g}_r & \cdots\end{pmatrix}\cdot H = \begin{pmatrix}\cdots & \vec{g}_1\cdot H & \cdots\\ & \vdots &\\ \cdots & \vec{g}_r\cdot H & \cdots\end{pmatrix}$$

(ignoring the extra parentheses).

Proof

We will show the $2\times 2$ case and leave the general case as an exercise.

$$GH = \begin{pmatrix}g_{1,1}&g_{1,2}\\g_{2,1}&g_{2,2}\end{pmatrix}\begin{pmatrix}h_{1,1}&h_{1,2}\\h_{2,1}&h_{2,2}\end{pmatrix} = \begin{pmatrix}g_{1,1}h_{1,1}+g_{1,2}h_{2,1} & g_{1,1}h_{1,2}+g_{1,2}h_{2,2}\\ g_{2,1}h_{1,1}+g_{2,2}h_{2,1} & g_{2,1}h_{1,2}+g_{2,2}h_{2,2}\end{pmatrix}$$

The right side of the first equation in the result

$$\begin{pmatrix} G\begin{pmatrix}h_{1,1}\\h_{2,1}\end{pmatrix} & G\begin{pmatrix}h_{1,2}\\h_{2,2}\end{pmatrix} \end{pmatrix} = \begin{pmatrix} \begin{pmatrix}g_{1,1}h_{1,1}+g_{1,2}h_{2,1}\\g_{2,1}h_{1,1}+g_{2,2}h_{2,1}\end{pmatrix} & \begin{pmatrix}g_{1,1}h_{1,2}+g_{1,2}h_{2,2}\\g_{2,1}h_{1,2}+g_{2,2}h_{2,2}\end{pmatrix} \end{pmatrix}$$

is indeed the same as the right side of $GH$, except for the extra parentheses (the ones marking the columns as column vectors). The other equation is similarly easy to recognize.

An application of those observations is that there is a matrix that just copies out the rows and columns.

Definition 3.8

The main diagonal (or principal diagonal or diagonal) of a square matrix goes from the upper left to the lower right.

Definition 3.9

An identity matrix is square, with all entries zero except for ones in the main diagonal.

Example 3.10

The identity leaves its multiplicand unchanged both from the left

and from the right.
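One instance, taking the $3\times 3$ identity and multiplicands with illustrative entries:

$$\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}2&3\\1&3\\-2&-1\end{pmatrix} = \begin{pmatrix}2&3\\1&3\\-2&-1\end{pmatrix}\qquad \begin{pmatrix}2&3&6\\1&3&8\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} = \begin{pmatrix}2&3&6\\1&3&8\end{pmatrix}$$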

Example 3.11

So does an identity matrix of any other size.

In short, an identity matrix is the identity element of the set of $n\times n$ matrices with respect to the operation of matrix multiplication.

We next see two ways to generalize the identity matrix.

The first is that if the ones are relaxed to arbitrary reals, the resulting matrix will rescale whole rows or columns.

Definition 3.12

A diagonal matrix is square and has zeros off the main diagonal.

Example 3.13

From the left, the action of multiplication by a diagonal matrix is to rescale the rows.

From the right such a matrix rescales the columns.
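For instance, with illustrative entries, from the left and from the right respectively:

$$\begin{pmatrix}2&0\\0&-1\end{pmatrix}\begin{pmatrix}2&1&4\\-1&3&4\end{pmatrix} = \begin{pmatrix}4&2&8\\1&-3&-4\end{pmatrix}\qquad \begin{pmatrix}1&2\\3&4\end{pmatrix}\begin{pmatrix}3&0\\0&2\end{pmatrix} = \begin{pmatrix}3&4\\9&8\end{pmatrix}$$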

The second generalization of identity matrices is that we can put a single one in each row and column in ways other than putting them down the diagonal.

Definition 3.14

A permutation matrix is square and is all zeros except for a single one in each row and column.

Example 3.15

From the left these matrices permute rows.

From the right they permute columns.
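For instance, in the $3\times 3$ case with an illustrative multiplicand, from the left and from the right respectively:

$$\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix} = \begin{pmatrix}4&5&6\\7&8&9\\1&2&3\end{pmatrix}\qquad \begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix} = \begin{pmatrix}3&1&2\\6&4&5\\9&7&8\end{pmatrix}$$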

We finish this subsection by applying these observations to get matrices that perform Gauss' method and Gauss-Jordan reduction.

Example 3.16

We have seen how to produce a matrix that will rescale rows. Multiplying by this diagonal matrix rescales the second row of the other by a factor of three.
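For instance, taking the $3\times 3$ case and a multiplicand whose entries are chosen for illustration:

$$\begin{pmatrix}1&0&0\\0&3&0\\0&0&1\end{pmatrix}\begin{pmatrix}2&1&1\\0&1&-2\\1&0&2\end{pmatrix} = \begin{pmatrix}2&1&1\\0&3&-6\\1&0&2\end{pmatrix}$$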

We have seen how to produce a matrix that will swap rows. Multiplying by this permutation matrix swaps the first and third rows.
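With the same illustrative multiplicand:

$$\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}\begin{pmatrix}2&1&1\\0&1&-2\\1&0&2\end{pmatrix} = \begin{pmatrix}1&0&2\\0&1&-2\\2&1&1\end{pmatrix}$$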

To see how to perform a pivot, we observe something about those two examples. The matrix that rescales the second row by a factor of three arises in this way from the identity.
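In the $3\times 3$ case:

$$I = \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} \xrightarrow{3\rho_2} \begin{pmatrix}1&0&0\\0&3&0\\0&0&1\end{pmatrix}$$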

Similarly, the matrix that swaps first and third rows arises in this way.
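$$I \xrightarrow{\rho_1\leftrightarrow\rho_3} \begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}$$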


Example 3.17

For instance, taking $-2\rho_1+\rho_2$ as an illustrative row combination, the matrix that arises as

$$I \xrightarrow{-2\rho_1+\rho_2} \begin{pmatrix}1&0&0\\-2&1&0\\0&0&1\end{pmatrix}$$

will, when it acts from the left, perform the pivot operation $-2\rho_1+\rho_2$.
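A quick check, with a multiplicand whose entries are chosen for illustration:

$$\begin{pmatrix}1&0&0\\-2&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&2&0\\3&4&1\\0&1&1\end{pmatrix} = \begin{pmatrix}1&2&0\\1&0&1\\0&1&1\end{pmatrix}$$

The second row of the result is $-2$ times the first row of the multiplicand plus its second row.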

Definition 3.18

The elementary reduction matrices are obtained from identity matrices with one Gaussian operation. We denote them:

  1. $I \xrightarrow{k\rho_i} M_i(k)$ for $k \neq 0$;
  2. $I \xrightarrow{\rho_i \leftrightarrow \rho_j} P_{i,j}$ for $i \neq j$;
  3. $I \xrightarrow{k\rho_i + \rho_j} C_{i,j}(k)$ for $i \neq j$.
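For instance, in the $3\times 3$ case, three such matrices (these particular choices reappear in the examples below) are

$$M_1(3)=\begin{pmatrix}3&0&0\\0&1&0\\0&0&1\end{pmatrix}\qquad P_{1,3}=\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}\qquad C_{1,2}(-1)=\begin{pmatrix}1&0&0\\-1&1&0\\0&0&1\end{pmatrix}$$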
Lemma 3.19

Gaussian reduction can be done through matrix multiplication.

  1. If $H \xrightarrow{k\rho_i} G$ then $M_i(k)H = G$.
  2. If $H \xrightarrow{\rho_i \leftrightarrow \rho_j} G$ then $P_{i,j}H = G$.
  3. If $H \xrightarrow{k\rho_i + \rho_j} G$ then $C_{i,j}(k)H = G$.
Proof

Clear.

Example 3.20

This is the first system, from the first chapter, on which we performed Gauss' method.

$$\begin{array}{rl} 3x_3 &= 9\\ x_1+5x_2-2x_3 &= 2\\ \frac{1}{3}x_1+2x_2 &= 3 \end{array}$$

It can be reduced with matrix multiplication. Swap the first and third rows,

$$\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}\begin{pmatrix}0&0&3&9\\1&5&-2&2\\\frac{1}{3}&2&0&3\end{pmatrix} = \begin{pmatrix}\frac{1}{3}&2&0&3\\1&5&-2&2\\0&0&3&9\end{pmatrix}$$

triple the first row,

$$\begin{pmatrix}3&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}\frac{1}{3}&2&0&3\\1&5&-2&2\\0&0&3&9\end{pmatrix} = \begin{pmatrix}1&6&0&9\\1&5&-2&2\\0&0&3&9\end{pmatrix}$$

and then add $-1$ times the first row to the second.

$$\begin{pmatrix}1&0&0\\-1&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&6&0&9\\1&5&-2&2\\0&0&3&9\end{pmatrix} = \begin{pmatrix}1&6&0&9\\0&-1&-2&-7\\0&0&3&9\end{pmatrix}$$

Now back substitution will give the solution.

Example 3.21

Gauss-Jordan reduction works the same way. For the matrix ending the prior example, first adjust the leading entries

and to finish, clear the third column and then the second column.

We have observed the following result, which we shall use in the next subsection.

Corollary 3.22

For any matrix $H$ there are elementary reduction matrices $R_1$, ..., $R_r$ such that $R_r \cdot R_{r-1} \cdots R_1 \cdot H$ is in reduced echelon form.

Until now we have taken the point of view that our primary objects of study are vector spaces and the maps between them, and have adopted matrices only for computational convenience. This subsection shows that this point of view isn't the whole story. Matrix theory is a fascinating and fruitful area.

In the rest of this book we shall continue to focus on maps as the primary objects, but we will be pragmatic: if the matrix point of view gives some clearer idea then we shall use it.

Exercises

This exercise is recommended for all readers.
Problem 1

Predict the result of each multiplication by an elementary reduction matrix, and then check by multiplying it out.

This exercise is recommended for all readers.
Problem 2

The need to take linear combinations of rows and columns in tables of numbers arises often in practice. For instance, this is a map of part of Vermont and New York.

In part because of Lake Champlain, there are no roads directly connecting some pairs of towns. For instance, there is no way to go from Winooski to Grand Isle without going through Colchester. (Of course, many other roads and towns have been left off to simplify the graph. From top to bottom of this map is about forty miles.)
  1. The incidence matrix of a map is the square matrix whose $i,j$ entry is the number of roads from city $i$ to city $j$. Produce the incidence matrix of this map (take the cities in alphabetical order).
  2. A matrix is symmetric if it equals its transpose. Show that an incidence matrix is symmetric. (These are all two-way streets. Vermont doesn't have many one-way streets.)
  3. What is the significance of the square of the incidence matrix? The cube?
This exercise is recommended for all readers.
Problem 3

This table gives the number of hours of each type done by each worker, and the associated pay rates. Use matrices to compute the wages due.

             regular   overtime
  Alan          40        12
  Betty         35         6
  Catherine     40        18
  Donald        28         0

             wage
  regular    $25.00
  overtime   $45.00

(Remark. This illustrates, as did the prior problem, that in practice we often want to compute linear combinations of rows and columns in a context where we really aren't interested in any associated linear maps.)

Problem 4

Find the product of this matrix with its transpose.

This exercise is recommended for all readers.
Problem 5

Prove that the diagonal matrices form a subspace of $\mathcal{M}_{n\times n}$. What is its dimension?

Problem 6

Does the identity matrix represent the identity map if the bases are unequal?

Problem 7

Show that every multiple of the identity commutes with every square matrix. Are there other matrices that commute with all square matrices?

Problem 8

Prove or disprove: nonsingular matrices commute.

This exercise is recommended for all readers.
Problem 9

Show that the product of a permutation matrix and its transpose is an identity matrix.

Problem 10

Show that if the first and second rows of $G$ are equal then so are the first and second rows of $GH$. Generalize.

Problem 11

Describe the product of two diagonal matrices.

Problem 12

Write

as the product of two elementary reduction matrices.

This exercise is recommended for all readers.
Problem 13

Show that if $G$ has a row of zeros then $GH$ (if defined) has a row of zeros. Does that work for columns?

Problem 14

Show that the set of unit matrices forms a basis for $\mathcal{M}_{n\times m}$.

Problem 15

Find the formula for the $n$-th power of this matrix.

This exercise is recommended for all readers.
Problem 16

The trace of a square matrix is the sum of the entries on its diagonal (its significance appears in Chapter Five). Show that $\operatorname{trace}(GH) = \operatorname{trace}(HG)$.

This exercise is recommended for all readers.
Problem 17

A square matrix is upper triangular if its only nonzero entries lie above, or on, the diagonal. Show that the product of two upper triangular matrices is upper triangular. Does this hold for lower triangular also?

Problem 18

A square matrix is a Markov matrix if each entry is between zero and one and the sum along each row is one. Prove that a product of Markov matrices is Markov.

This exercise is recommended for all readers.
Problem 19

Give an example of two matrices of the same rank with squares of differing rank.

Problem 20

Combine the two generalizations of the identity matrix, the one allowing entries to be other than ones, and the one allowing the single one in each row and column to be off the diagonal. What is the action of this type of matrix?

Problem 21

On a computer multiplications are more costly than additions, so people are interested in reducing the number of multiplications used to compute a matrix product.

  1. How many real number multiplications are needed in the formula we gave for the product of an $m\times r$ matrix and an $r\times n$ matrix?
  2. Matrix multiplication is associative, so all associations yield the same result. The cost in number of multiplications, however, varies. Find the association requiring the fewest real number multiplications to compute the matrix product of a $5\times 10$ matrix, a $10\times 20$ matrix, a $20\times 5$ matrix, and a $5\times 1$ matrix.
  3. (Very hard.) Find a way to multiply two $2\times 2$ matrices using only seven multiplications instead of the eight suggested by the naive approach.
? Problem 22

If $A$ and $B$ are square matrices of the same size such that $ABAB = 0$, does it follow that $BABA = 0$? (Putnam Exam 1990)

Problem 23

Demonstrate these four assertions to get an alternate proof that column rank equals row rank. (Liebeck 1966)

  1. $\vec{x}\cdot\vec{x} = 0$ iff $\vec{x} = \vec{0}$.
  2. $A\vec{x} = \vec{0}$ iff $A^{\mathrm{T}}A\vec{x} = \vec{0}$.
  3. $\operatorname{rank}(A) = \operatorname{rank}(A^{\mathrm{T}}A)$.
  4. $\operatorname{col\,rank}(A) = \operatorname{col\,rank}(A^{\mathrm{T}}) = \operatorname{row\,rank}(A)$.
Problem 24

Prove (where $A$ is an $n\times n$ matrix and so defines a transformation of any $n$-dimensional space $V$ with respect to $B,B$ where $B$ is a basis) that $\dim(\mathscr{R}(A)\cap\mathscr{N}(A)) = \dim\mathscr{R}(A) - \dim\mathscr{R}(A^2)$. Conclude

  1. $\mathscr{R}(A)\cap\mathscr{N}(A) = \{\vec{0}\,\}$ iff $\dim\mathscr{R}(A^2) = \dim\mathscr{R}(A)$;
  2. $\mathscr{R}(A)\subseteq\mathscr{N}(A)$ iff $A^2 = 0$;
  3. $\mathscr{R}(A) = \mathscr{N}(A)$ iff $A^2 = 0$ and $\dim\mathscr{R}(A) = n/2$;
  4. iff  ;
  5. (Requires the Direct Sum subsection, which is optional.) $V = \mathscr{R}(A)\oplus\mathscr{N}(A)$ iff $\dim\mathscr{R}(A^2) = \dim\mathscr{R}(A)$.
(Ackerson 1955)

Solutions

References

  • Ackerson, R. H. (1955), "A Note on Vector Spaces", American Mathematical Monthly 62 (10): 721.
  • Liebeck, Hans (1966), "A Proof of the Equality of Column Rank and Row Rank of a Matrix", American Mathematical Monthly 73 (10): 1114.
  • William Lowell Putnam Mathematical Competition, Problem A-5, 1990.