Linear Algebra/Matrix Multiplication


After representing addition and scalar multiplication of linear maps in the prior subsection, the natural next map operation to consider is composition.

Lemma 2.1

A composition of linear maps is linear.

Proof

(This argument has appeared earlier, as part of the proof that isomorphism is an equivalence relation between spaces.) Let $h\colon V \to W$ and $g\colon W \to X$ be linear. The calculation

$$g \circ h\,(c_1 \cdot \vec{v}_1 + c_2 \cdot \vec{v}_2) = g\bigl(h(c_1 \cdot \vec{v}_1 + c_2 \cdot \vec{v}_2)\bigr) = g\bigl(c_1 \cdot h(\vec{v}_1) + c_2 \cdot h(\vec{v}_2)\bigr) = c_1 \cdot g\bigl(h(\vec{v}_1)\bigr) + c_2 \cdot g\bigl(h(\vec{v}_2)\bigr) = c_1 \cdot (g \circ h)(\vec{v}_1) + c_2 \cdot (g \circ h)(\vec{v}_2)$$

shows that $g \circ h$ preserves linear combinations.
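This check can also be run numerically. Here is a minimal sketch (an illustration of mine, not part of the original text) in Python with NumPy, using two randomly chosen matrices as stand-ins for the linear maps $h$ and $g$:

    import numpy as np

    # Stand-ins for linear maps: h is multiplication by A, g by B.
    rng = np.random.default_rng(0)
    A = rng.random((2, 3))     # h: R^3 -> R^2
    B = rng.random((4, 2))     # g: R^2 -> R^4
    h = lambda v: A @ v
    g = lambda w: B @ w

    v1, v2 = rng.random(3), rng.random(3)
    c1, c2 = 2.0, -3.0

    # The composition g∘h sends a linear combination of inputs
    # to the same linear combination of outputs.
    assert np.allclose(g(h(c1 * v1 + c2 * v2)),
                       c1 * g(h(v1)) + c2 * g(h(v2)))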

To see how the representation of the composite arises out of the representations of the two compositors, consider an example.

Example 2.2

Let $h\colon \mathbb{R}^4 \to \mathbb{R}^2$ and $g\colon \mathbb{R}^2 \to \mathbb{R}^3$, fix bases $B \subset \mathbb{R}^4$, $C \subset \mathbb{R}^2$, $D \subset \mathbb{R}^3$, and let these be the representations.

$$H = \operatorname{Rep}_{B,C}(h) = \begin{pmatrix} 4&6&8&2 \\ 5&7&9&3 \end{pmatrix}_{B,C} \qquad G = \operatorname{Rep}_{C,D}(g) = \begin{pmatrix} 1&1 \\ 0&1 \\ 1&0 \end{pmatrix}_{C,D}$$

To represent the composition $g \circ h\colon \mathbb{R}^4 \to \mathbb{R}^3$ we fix a $\vec{v}$, represent $h$ of $\vec{v}$, and then represent $g$ of that. The representation of $h(\vec{v}\,)$ is the product of $h$'s matrix and $\vec{v}$'s vector.

$$\operatorname{Rep}_C(h(\vec{v}\,)) = \begin{pmatrix} 4&6&8&2 \\ 5&7&9&3 \end{pmatrix}_{B,C} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix}_{B} = \begin{pmatrix} 4v_1+6v_2+8v_3+2v_4 \\ 5v_1+7v_2+9v_3+3v_4 \end{pmatrix}_{C}$$

The representation of $g(h(\vec{v}\,))$ is the product of $g$'s matrix and $h(\vec{v}\,)$'s vector.

$$\operatorname{Rep}_D(g(h(\vec{v}\,))) = \begin{pmatrix} 1&1 \\ 0&1 \\ 1&0 \end{pmatrix}_{C,D} \begin{pmatrix} 4v_1+6v_2+8v_3+2v_4 \\ 5v_1+7v_2+9v_3+3v_4 \end{pmatrix}_{C} = \begin{pmatrix} 1\cdot(4v_1+6v_2+8v_3+2v_4) + 1\cdot(5v_1+7v_2+9v_3+3v_4) \\ 0\cdot(4v_1+6v_2+8v_3+2v_4) + 1\cdot(5v_1+7v_2+9v_3+3v_4) \\ 1\cdot(4v_1+6v_2+8v_3+2v_4) + 0\cdot(5v_1+7v_2+9v_3+3v_4) \end{pmatrix}_{D}$$

Distributing and regrouping on the $v$'s gives

$$= \begin{pmatrix} (1\cdot 4+1\cdot 5)v_1 + (1\cdot 6+1\cdot 7)v_2 + (1\cdot 8+1\cdot 9)v_3 + (1\cdot 2+1\cdot 3)v_4 \\ (0\cdot 4+1\cdot 5)v_1 + (0\cdot 6+1\cdot 7)v_2 + (0\cdot 8+1\cdot 9)v_3 + (0\cdot 2+1\cdot 3)v_4 \\ (1\cdot 4+0\cdot 5)v_1 + (1\cdot 6+0\cdot 7)v_2 + (1\cdot 8+0\cdot 9)v_3 + (1\cdot 2+0\cdot 3)v_4 \end{pmatrix}_{D}$$

which we recognize as the result of this matrix-vector product.

$$= \begin{pmatrix} 9&13&17&5 \\ 5&7&9&3 \\ 4&6&8&2 \end{pmatrix}_{B,D} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix}_{B}$$

Thus, the matrix representing $g \circ h$ has the rows of $G$ combined with the columns of $H$.

Definition 2.3

The matrix-multiplicative product of the $m \times r$ matrix $G$ and the $r \times n$ matrix $H$ is the $m \times n$ matrix $P$, where

$$p_{i,j} = g_{i,1}h_{1,j} + g_{i,2}h_{2,j} + \cdots + g_{i,r}h_{r,j}$$

that is, the $i,j$-th entry of the product is the dot product of the $i$-th row of the first matrix with the $j$-th column of the second.
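The definition translates directly into code. The sketch below (mine, not the book's) computes each $p_{i,j}$ as the dot product of the $i$-th row of $G$ with the $j$-th column of $H$, using plain Python lists of rows:

    def mat_mul(G, H):
        # G is m x r and H is r x n, each given as a list of rows.
        m, r = len(G), len(G[0])
        if len(H) != r:
            raise ValueError("columns of G must equal rows of H")
        n = len(H[0])
        # p[i][j] is the dot product of row i of G and column j of H.
        return [[sum(G[i][k] * H[k][j] for k in range(r))
                 for j in range(n)]
                for i in range(m)]

With the matrices of Example 2.2, mat_mul([[1, 1], [0, 1], [1, 0]], [[4, 6, 8, 2], [5, 7, 9, 3]]) returns [[9, 13, 17, 5], [5, 7, 9, 3], [4, 6, 8, 2]], matching the product computed there.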

Example 2.4

The matrices from Example 2.2 combine in this way.

$$GH = \begin{pmatrix} 1&1 \\ 0&1 \\ 1&0 \end{pmatrix} \begin{pmatrix} 4&6&8&2 \\ 5&7&9&3 \end{pmatrix} = \begin{pmatrix} 1\cdot 4+1\cdot 5 & 1\cdot 6+1\cdot 7 & 1\cdot 8+1\cdot 9 & 1\cdot 2+1\cdot 3 \\ 0\cdot 4+1\cdot 5 & 0\cdot 6+1\cdot 7 & 0\cdot 8+1\cdot 9 & 0\cdot 2+1\cdot 3 \\ 1\cdot 4+0\cdot 5 & 1\cdot 6+0\cdot 7 & 1\cdot 8+0\cdot 9 & 1\cdot 2+0\cdot 3 \end{pmatrix} = \begin{pmatrix} 9&13&17&5 \\ 5&7&9&3 \\ 4&6&8&2 \end{pmatrix}$$
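As a numeric check of the example (assuming the matrices shown above), NumPy agrees that representing $h$ and then $g$ step by step matches multiplying by the product matrix once:

    import numpy as np

    H = np.array([[4, 6, 8, 2],     # Rep_{B,C}(h)
                  [5, 7, 9, 3]])
    G = np.array([[1, 1],           # Rep_{C,D}(g)
                  [0, 1],
                  [1, 0]])
    v = np.array([1, -2, 3, 4])     # an arbitrary Rep_B(v)

    # g of (h of v), computed in two steps ...
    two_steps = G @ (H @ v)
    # ... equals one multiplication by the product matrix GH.
    assert np.array_equal(two_steps, (G @ H) @ v)
    print(G @ H)    # [[ 9 13 17  5]
                    #  [ 5  7  9  3]
                    #  [ 4  6  8  2]]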

Example 2.5

$$\begin{pmatrix} 2&0 \\ 4&6 \\ 8&2 \end{pmatrix} \begin{pmatrix} 1&3 \\ 5&7 \end{pmatrix} = \begin{pmatrix} 2\cdot 1+0\cdot 5 & 2\cdot 3+0\cdot 7 \\ 4\cdot 1+6\cdot 5 & 4\cdot 3+6\cdot 7 \\ 8\cdot 1+2\cdot 5 & 8\cdot 3+2\cdot 7 \end{pmatrix} = \begin{pmatrix} 2&6 \\ 34&54 \\ 18&38 \end{pmatrix}$$

Theorem 2.6

A composition of linear maps is represented by the matrix product of the representatives.

Proof

(This argument parallels Example 2.2.) Let $h\colon V \to W$ and $g\colon W \to X$ be represented by $H$ and $G$ with respect to bases $B \subset V$, $C \subset W$, and $D \subset X$, of sizes $n$, $r$, and $m$. For any $\vec{v} \in V$, the $k$-th component of $\operatorname{Rep}_C(h(\vec{v}\,))$ is

$$h_{k,1}v_1 + h_{k,2}v_2 + \cdots + h_{k,n}v_n$$

and so the $i$-th component of $\operatorname{Rep}_D(g \circ h\,(\vec{v}\,))$ is this.

$$g_{i,1} \cdot (h_{1,1}v_1 + \cdots + h_{1,n}v_n) + g_{i,2} \cdot (h_{2,1}v_1 + \cdots + h_{2,n}v_n) + \cdots + g_{i,r} \cdot (h_{r,1}v_1 + \cdots + h_{r,n}v_n)$$

Distribute and regroup on the $v$'s.

$$= (g_{i,1}h_{1,1} + g_{i,2}h_{2,1} + \cdots + g_{i,r}h_{r,1}) \cdot v_1 + \cdots + (g_{i,1}h_{1,n} + g_{i,2}h_{2,n} + \cdots + g_{i,r}h_{r,n}) \cdot v_n$$

Finish by recognizing that the coefficient of each $v_j$

$$g_{i,1}h_{1,j} + g_{i,2}h_{2,j} + \cdots + g_{i,r}h_{r,j}$$

matches the definition of the $i,j$ entry of the product $GH$.

The theorem is an example of a result that supports a definition. We can picture what the definition and theorem together say with this arrow diagram ("wrt" abbreviates "with respect to").

$$\begin{array}{ccccc} & & W_{\text{wrt }C} & & \\ & \overset{h,\;H}{\nearrow} & & \overset{g,\;G}{\searrow} & \\ V_{\text{wrt }B} & & \xrightarrow[\phantom{xxxxxxxx}]{g \circ h,\ GH} & & X_{\text{wrt }D} \end{array}$$

Above the arrows, the maps show that the two ways of going from $V$ to $X$, straight over via the composition or else by way of $W$, have the same effect

$$\vec{v} \;\mapsto\; g \circ h\,(\vec{v}\,) \qquad \text{and} \qquad \vec{v} \;\mapsto\; h(\vec{v}\,) \;\mapsto\; g(h(\vec{v}\,))$$

(this is just the definition of composition). Below the arrows, the matrices indicate that the product does the same thing: multiplying $GH$ into the column vector $\operatorname{Rep}_B(\vec{v}\,)$ has the same effect as multiplying the column first by $H$ and then multiplying the result by $G$.

$$\operatorname{Rep}_{B,D}(g \circ h) = GH = \operatorname{Rep}_{C,D}(g)\,\operatorname{Rep}_{B,C}(h)$$

The definition of the matrix-matrix product operation does not restrict us to view it as a representation of a linear map composition. We can get insight into this operation by studying it as a mechanical procedure. The striking thing is the way that rows and columns combine.

One aspect of that combination is that the sizes of the matrices involved are significant. Briefly, $m \times r$ times $r \times n$ equals $m \times n$.

Example 2.7

This product is not defined

$$\begin{pmatrix} -1 & 2 & 0 \\ 0 & 10 & 1.1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix}$$

because the number of columns on the left ($3$) does not equal the number of rows on the right ($2$).

In terms of the underlying maps, the fact that the sizes must match up reflects the fact that matrix multiplication is defined only when a corresponding function composition

$$\text{dimension } n \text{ space} \;\xrightarrow{\ h\ }\; \text{dimension } r \text{ space} \;\xrightarrow{\ g\ }\; \text{dimension } m \text{ space}$$

is possible.
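A short sketch (mine, not the book's) of the size rule in NumPy; an $m \times r$ left factor against an $r \times n$ right factor works, and anything else is rejected:

    import numpy as np

    A = np.ones((3, 2))      # 3x2
    B = np.ones((2, 4))      # 2x4
    print((A @ B).shape)     # (3, 4): m x r times r x n gives m x n

    C = np.ones((2, 3))      # 2x3
    try:
        C @ B                # 3 columns on the left, 2 rows on the right
    except ValueError as err:
        print("not defined:", err)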

Remark 2.8

The order in which these things are written can be confusing. In the "$m \times r$ times $r \times n$ equals $m \times n$" equation, the number written first ($m$) is the dimension of $g$'s codomain and is thus the number that appears last in the map dimension description above. The explanation is that while $h$ is done first and then $g$ is applied, that composition is written $g \circ h$, from the notation "$g(h(\vec{v}\,))$". (Some people try to lessen confusion by reading "$g \circ h$" aloud as "$g$ following $h$".) That order then carries over to matrices: $g \circ h$ is represented by $GH$.
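To see the order convention concretely, here is a small sketch (not from the text) in NumPy, with sizes matching the maps above:

    import numpy as np

    H = np.ones((2, 4))      # represents h: R^4 -> R^2
    G = np.ones((3, 2))      # represents g: R^2 -> R^3

    # h is applied first, but its matrix is written last: g∘h <-> GH.
    print((G @ H).shape)     # (3, 4), representing a map R^4 -> R^3
    # H @ G is not defined here: 4 columns against 3 rows.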

Another aspect of the way that rows and columns combine in the matrix product operation is that in the definition of the $i,j$ entry

$$p_{i,j} = g_{i,\,\mathbf{1}}h_{\mathbf{1},\,j} + g_{i,\,\mathbf{2}}h_{\mathbf{2},\,j} + \cdots + g_{i,\,\mathbf{r}}h_{\mathbf{r},\,j}$$

the boldface subscripts on the $g$'s are column indicators while those on the $h$'s indicate rows. That is, summation takes place over the columns of $G$ but over the rows of $H$; left is treated differently than right, so $GH$ may be unequal to $HG$. Matrix multiplication is not commutative.

Example 2.9

Matrix multiplication hardly ever commutes. Test that by multiplying randomly chosen matrices both ways.

$$\begin{pmatrix} 1&2 \\ 3&4 \end{pmatrix} \begin{pmatrix} 5&6 \\ 7&8 \end{pmatrix} = \begin{pmatrix} 19&22 \\ 43&50 \end{pmatrix} \qquad \begin{pmatrix} 5&6 \\ 7&8 \end{pmatrix} \begin{pmatrix} 1&2 \\ 3&4 \end{pmatrix} = \begin{pmatrix} 23&34 \\ 31&46 \end{pmatrix}$$
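The same test in NumPy (using the two matrices above, which come from this reconstruction of the example):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    print(A @ B)    # [[19 22]
                    #  [43 50]]
    print(B @ A)    # [[23 34]
                    #  [31 46]]  a different matrix, so AB != BA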

Example 2.10

Commutativity can fail more dramatically:

$$\begin{pmatrix} 5&6 \\ 7&8 \end{pmatrix} \begin{pmatrix} 1&2&0 \\ 3&4&0 \end{pmatrix} = \begin{pmatrix} 23&34&0 \\ 31&46&0 \end{pmatrix}$$

while

$$\begin{pmatrix} 1&2&0 \\ 3&4&0 \end{pmatrix} \begin{pmatrix} 5&6 \\ 7&8 \end{pmatrix}$$

isn't even defined.

Remark 2.11

The fact that matrix multiplication is not commutative may be puzzling at first sight, perhaps just because most algebraic operations in elementary mathematics are commutative. But on further reflection, it isn't so surprising. After all, matrix multiplication represents function composition, which is not commutative: if $f(x) = 2x$ and $g(x) = x+1$ then $g \circ f\,(x) = 2x+1$ while $f \circ g\,(x) = 2x+2$. True, this $g$ is not linear and we might have hoped that linear functions commute, but this perspective shows that the failure of commutativity for matrix multiplication fits into a larger context.
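A two-line check of that function-composition example, as a sketch in Python:

    f = lambda x: 2 * x     # f(x) = 2x
    g = lambda x: x + 1     # g(x) = x + 1

    print(g(f(5)))          # g∘f: 2*5 + 1 = 11
    print(f(g(5)))          # f∘g: 2*(5 + 1) = 12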

Except for the lack of commutativity, matrix multiplication is algebraically well-behaved. Below are some nice properties; more are in Problem 10 and Problem 11.

Theorem 2.12

If $F$, $G$, and $H$ are matrices, and the matrix products are defined, then the product is associative, $(FG)H = F(GH)$, and distributes over matrix addition, $F(G+H) = FG + FH$ and $(G+H)F = GF + HF$.

Proof

Associativity holds because matrix multiplication represents function composition, which is associative: the maps $(f \circ g) \circ h$ and $f \circ (g \circ h)$ are equal as both send $\vec{v}$ to $f(g(h(\vec{v}\,)))$.

Distributivity is similar. For instance, the first one goes

$$f \circ (g+h)\,(\vec{v}\,) = f\bigl((g+h)(\vec{v}\,)\bigr) = f\bigl(g(\vec{v}\,) + h(\vec{v}\,)\bigr) = f(g(\vec{v}\,)) + f(h(\vec{v}\,)) = f \circ g\,(\vec{v}\,) + f \circ h\,(\vec{v}\,)$$

(the third equality uses the linearity of $f$).
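A numeric spot-check of associativity and both distributive laws (a sketch of mine; integer matrices keep the comparisons exact):

    import numpy as np

    rng = np.random.default_rng(1)
    F = rng.integers(-5, 5, size=(2, 3))
    G = rng.integers(-5, 5, size=(3, 4))
    G2 = rng.integers(-5, 5, size=(3, 4))
    H = rng.integers(-5, 5, size=(4, 5))

    assert np.array_equal((F @ G) @ H, F @ (G @ H))       # associativity
    assert np.array_equal(F @ (G + G2), F @ G + F @ G2)   # left distributivity
    assert np.array_equal((G + G2) @ H, G @ H + G2 @ H)   # right distributivity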

Remark 2.13

We could alternatively prove that result by slogging through the indices. For example, associativity goes: the $i,j$-th entry of $(FG)H$ is

$$(f_{i,1}g_{1,1} + f_{i,2}g_{2,1} + \cdots + f_{i,r}g_{r,1})\,h_{1,j} + (f_{i,1}g_{1,2} + \cdots + f_{i,r}g_{r,2})\,h_{2,j} + \cdots + (f_{i,1}g_{1,s} + \cdots + f_{i,r}g_{r,s})\,h_{s,j}$$

(where $F$, $G$, and $H$ are $m \times r$, $r \times s$, and $s \times n$ matrices), distribute

$$f_{i,1}g_{1,1}h_{1,j} + f_{i,2}g_{2,1}h_{1,j} + \cdots + f_{i,r}g_{r,1}h_{1,j} + f_{i,1}g_{1,2}h_{2,j} + \cdots + f_{i,r}g_{r,s}h_{s,j}$$

and regroup around the $f$'s

$$f_{i,1}(g_{1,1}h_{1,j} + g_{1,2}h_{2,j} + \cdots + g_{1,s}h_{s,j}) + f_{i,2}(g_{2,1}h_{1,j} + g_{2,2}h_{2,j} + \cdots + g_{2,s}h_{s,j}) + \cdots + f_{i,r}(g_{r,1}h_{1,j} + g_{r,2}h_{2,j} + \cdots + g_{r,s}h_{s,j})$$

to get the $i,j$ entry of $F(GH)$.
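The index-level identity can be spelled out the same way in code; this sketch (mine, not the book's) computes a single entry both ways, following the regrouping above:

    def entry_FG_then_H(F, G, H, i, j):
        # The i,j entry of (FG)H: first sum over k inside FG, then over l.
        r, s = len(G), len(H)
        return sum(sum(F[i][k] * G[k][l] for k in range(r)) * H[l][j]
                   for l in range(s))

    def entry_F_then_GH(F, G, H, i, j):
        # The i,j entry of F(GH): the same terms, regrouped around the f's.
        r, s = len(G), len(H)
        return sum(F[i][k] * sum(G[k][l] * H[l][j] for l in range(s))
                   for k in range(r))

    F = [[1, 2], [3, 4]]
    G = [[5, 6], [7, 8]]
    H = [[9, 1], [2, 0]]
    assert entry_FG_then_H(F, G, H, 0, 1) == entry_F_then_GH(F, G, H, 0, 1)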

Contrast these two ways of verifying associativity, the one in the proof and the index computation in this remark. The index argument is hard to understand in the sense that, while the calculations are easy to check, the arithmetic seems unconnected to any idea (it also essentially repeats the proof of Theorem 2.6 and so is inefficient). The argument in the proof is shorter, clearer, and says why this property "really" holds. This illustrates the comments made in the preamble to the chapter on vector spaces: at least some of the time an argument from higher-level constructs is clearer.

We have now seen how the representation of the composition of two linear maps is derived from the representations of the two maps. We have called the combination the product of the two matrices. This operation is extremely important. Before we go on to study how to represent the inverse of a linear map, we will explore it some more in the next subsection.

Exercises

This exercise is recommended for all readers.
Problem 1

Compute, or state "not defined".

This exercise is recommended for all readers.
Problem 2

Where

compute or state "not defined".

Problem 3

Which products are defined?

  1. $3 \times 2$ times $2 \times 3$
  2. $2 \times 3$ times $3 \times 2$
  3. $2 \times 2$ times $3 \times 3$
  4. $3 \times 3$ times $2 \times 2$
This exercise is recommended for all readers.
Problem 4

Give the size of the product or state "not defined".

  1. a $2 \times 3$ matrix times a $3 \times 1$ matrix
  2. a $1 \times 12$ matrix times a $12 \times 1$ matrix
  3. a $2 \times 3$ matrix times a $2 \times 1$ matrix
  4. a $2 \times 2$ matrix times a $2 \times 2$ matrix
This exercise is recommended for all readers.
Problem 5

Find the system of equations resulting from starting with

and making this change of variable (i.e., substitution).

Problem 6

As Definition 2.3 points out, the matrix product operation generalizes the dot product. Is the dot product of a row vector and a column vector the same as their matrix-multiplicative product?

This exercise is recommended for all readers.
Problem 7

Represent the derivative map on $\mathcal{P}_n$ with respect to $B,B$ where $B$ is the natural basis $\langle 1, x, \ldots, x^n \rangle$. Show that the product of this matrix with itself is defined; what map does it represent?

Problem 8

Show that composition of linear transformations on $\mathbb{R}^1$ is commutative. Is this true for any one-dimensional space?

Problem 9

Why is matrix multiplication not defined as entry-wise multiplication? That would be easier, and commutative too.

This exercise is recommended for all readers.
Problem 10
  1. Prove that $H^pH^q = H^{p+q}$ and $(H^p)^q = H^{pq}$ for positive integers $p, q$.
  2. Prove that $(rH)^p = r^p \cdot H^p$ for any positive integer $p$ and scalar $r \in \mathbb{R}$.
This exercise is recommended for all readers.
Problem 11
  1. How does matrix multiplication interact with scalar multiplication: is $r(GH) = (rG)H$? Is $G(rH) = r(GH)$?
  2. How does matrix multiplication interact with linear combinations: is $F(rG + sH) = r(FG) + s(FH)$? Is $(rF + sG)H = rFH + sGH$?
Problem 12

We can ask how the matrix product operation interacts with the transpose operation.

  1. Show that $(GH)^{\mathsf{T}} = H^{\mathsf{T}}G^{\mathsf{T}}$.
  2. A square matrix is symmetric if each $i,j$ entry equals the $j,i$ entry, that is, if the matrix equals its own transpose. Show that the matrices $HH^{\mathsf{T}}$ and $H^{\mathsf{T}}H$ are symmetric.
This exercise is recommended for all readers.
Problem 13

Rotation of vectors in $\mathbb{R}^3$ about an axis is a linear map. Show that linear maps do not commute by showing geometrically that rotations do not commute.

Problem 14

In the proof of Theorem 2.12 some maps $f$, $g$, and $h$ are used. What are the domains and codomains?

Problem 15

How does matrix rank interact with matrix multiplication?

  1. Can the product of rank $n$ matrices have rank less than $n$? Greater?
  2. Show that the rank of the product of two matrices is less than or equal to the minimum of the rank of each factor.
Problem 16

Is "commutes with" an equivalence relation among matrices?

This exercise is recommended for all readers.
Problem 17

(This will be used in the Matrix Inverses exercises.) Here is another property of matrix multiplication that might be puzzling at first sight.

  1. Prove that the composition of the projections $\pi_x, \pi_y\colon \mathbb{R}^3 \to \mathbb{R}^3$ onto the $x$ and $y$ axes is the zero map despite that neither one is itself the zero map.
  2. Prove that the composition of the derivatives $d^2/dx^2,\; d^3/dx^3\colon \mathcal{P}_4 \to \mathcal{P}_4$ is the zero map despite that neither is the zero map.
  3. Give a matrix equation representing the first fact.
  4. Give a matrix equation representing the second.

When two things multiply to give zero despite that neither is zero, each is said to be a zero divisor.
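For instance (an illustration of mine, not one of the exercise's cases), a nonzero matrix can even be a zero divisor against itself:

    import numpy as np

    A = np.array([[0, 1],
                  [0, 0]])
    print(A @ A)    # [[0 0]
                    #  [0 0]]  the zero matrix, though A is not zero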

Problem 18

Show that, for square matrices, $(S+T)(S-T)$ need not equal $S^2 - T^2$.

This exercise is recommended for all readers.
Problem 19

Represent the identity transformation $\operatorname{id}\colon V \to V$ with respect to $B,B$ for any basis $B$. This is the identity matrix $I$. Show that this matrix plays the role in matrix multiplication that the number $1$ plays in real number multiplication: $HI = IH = H$ (for all matrices $H$ for which the product is defined).

Problem 20

In real number algebra, quadratic equations have at most two solutions. That is not so with matrix algebra. Show that the matrix equation $T^2 = I$ has more than two solutions, where $I$ is the $2 \times 2$ identity matrix (this matrix has ones in its $1,1$ and $2,2$ entries and zeroes elsewhere; see Problem 19).

Problem 21
  1. Prove that for any $2 \times 2$ matrix $T$ there are scalars $c_0, \ldots, c_4$ that are not all $0$ such that the combination $c_4T^4 + c_3T^3 + c_2T^2 + c_1T + c_0I$ is the zero matrix (where $I$ is the $2 \times 2$ identity matrix, with $1$'s in its $1,1$ and $2,2$ entries and zeroes elsewhere; see Problem 19).
  2. Let $p(x)$ be a polynomial $p(x) = c_nx^n + \cdots + c_1x + c_0$. If $T$ is a square matrix we define $p(T)$ to be the matrix $c_nT^n + \cdots + c_1T + c_0I$ (where $I$ is the appropriately-sized identity matrix). Prove that for any square matrix there is a polynomial $p(x)$ such that $p(T)$ is the zero matrix.
  3. The minimal polynomial $m(x)$ of a square matrix is the polynomial of least degree, and with leading coefficient $1$, such that $m(T)$ is the zero matrix. Find the minimal polynomial of this matrix.
    $$\begin{pmatrix} \sqrt{3}/2 & -1/2 \\ 1/2 & \sqrt{3}/2 \end{pmatrix}$$
    (This is the representation with respect to $\mathcal{E}_2$, the standard basis, of a rotation through $\pi/6$ radians counterclockwise.)
Problem 22

The infinite-dimensional space $\mathcal{P}$ of all finite-degree polynomials gives a memorable example of the non-commutativity of linear maps. Let $d/dx\colon \mathcal{P} \to \mathcal{P}$ be the usual derivative and let $s\colon \mathcal{P} \to \mathcal{P}$ be the shift map.

$$a_0 + a_1x + \cdots + a_nx^n \;\stackrel{s}{\longmapsto}\; 0 + a_0x + a_1x^2 + \cdots + a_nx^{n+1}$$

Show that the two maps don't commute, $d/dx \circ s \neq s \circ d/dx$; in fact, not only is $(d/dx \circ s) - (s \circ d/dx)$ not the zero map, it is the identity map.

Problem 23

Recall the notation for the sum of the sequence of numbers $a_1, a_2, \ldots, a_n$.

$$\sum_{i=1}^{n} a_i = a_1 + a_2 + \cdots + a_n$$

In this notation, the $i,j$ entry of the product of $G$ and $H$ is this.

$$p_{i,j} = \sum_{k=1}^{r} g_{i,k}h_{k,j}$$

Using this notation,

  1. reprove that matrix multiplication is associative;
  2. reprove Theorem 2.6.

Solutions
