Linear Algebra/Matrix Multiplication/Solutions


Solutions

This exercise is recommended for all readers.
Problem 1

Compute, or state "not defined".

Answer
  1. Not defined.
This exercise is recommended for all readers.
Problem 2

Where

compute or state "not defined".

Answer
Problem 3

Which products are defined?

  1. times
  2. times
  3. times
  4. times
Answer
  1. Yes.
  2. Yes.
  3. No.
  4. No.
This exercise is recommended for all readers.
Problem 4

Give the size of the product or state "not defined".

  1. a matrix times a matrix
  2. a matrix times a matrix
  3. a matrix times a matrix
  4. a matrix times a matrix
Answer
  1. Not defined.
This exercise is recommended for all readers.
Problem 5

Find the system of equations resulting from starting with

and making this change of variable (i.e., substitution).

Answer

We have

which, after expanding and regrouping about the new variables, yields this.

The starting system, and the system used for the substitutions, can be expressed in matrix language.

With this, the substitution is .

Problem 6

As Definition 2.3 points out, the matrix product operation generalizes the dot product. Is the dot product of a row vector and a column vector the same as their matrix-multiplicative product?

Answer

Technically, no. The dot product operation yields a scalar while the matrix product yields a matrix. However, we usually will ignore the distinction.
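
As an illustration of the distinction (the vectors below are arbitrary examples, not the ones from the exercise), this NumPy sketch shows that the matrix product of a row and a column is a 1×1 matrix, while the dot product of the corresponding vectors is a scalar.

    import numpy as np

    row = np.array([[1.0, 2.0, 3.0]])      # a 1x3 row vector, as a matrix
    col = np.array([[4.0], [5.0], [6.0]])  # a 3x1 column vector, as a matrix

    matrix_product = row @ col                       # a 1x1 matrix
    dot_product = np.dot(row.ravel(), col.ravel())   # a plain scalar

    print(matrix_product.shape, matrix_product[0, 0])   # (1, 1) 32.0
    print(np.ndim(dot_product), dot_product)            # 0 32.0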

This exercise is recommended for all readers.
Problem 7

Represent the derivative map on with respect to where is the natural basis. Show that the product of this matrix with itself is defined; what map does it represent?

Answer

The derivative acts on the natural basis by sending 1 to 0, x to 1, x^2 to 2x, and so on, and so this is its matrix representation.

The product of this matrix with itself is defined because the matrix is square.


The map so represented is the composition

which is the second derivative operation.
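
A sketch of the same idea in NumPy, assuming polynomials of degree at most three with the natural basis 1, x, x^2, x^3 (the exercise's own choice of space is not reproduced above): squaring the matrix of the derivative map gives the matrix of the second derivative.

    import numpy as np

    n = 3
    D = np.zeros((n + 1, n + 1))
    for j in range(1, n + 1):
        D[j - 1, j] = j          # the derivative of x^j is j*x^(j-1)

    # coefficients of p(x) = 1 + 2x + 3x^2 + 4x^3, constant term first
    p = np.array([1.0, 2.0, 3.0, 4.0])

    print(D @ p)        # [ 2.  6. 12.  0.]  i.e. p'(x)  = 2 + 6x + 12x^2
    print(D @ D @ p)    # [ 6. 24.  0.  0.]  i.e. p''(x) = 6 + 24x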

Problem 8

Show that composition of linear transformations on is commutative. Is this true for any one-dimensional space?

Answer

It is true for all one-dimensional spaces. Let two transformations of a one-dimensional space be given. We must show that composing them in either order gives the same map. Fix a basis for the space; then each transformation is represented by a 1×1 matrix.

Therefore, the compositions can be represented as and .

These two matrices are equal and so the compositions have the same effect on each vector in the space.

Problem 9

Why is matrix multiplication not defined as entry-wise multiplication? That would be easier, and commutative too.

Answer

It would not represent linear map composition; Theorem 2.6 would fail.

This exercise is recommended for all readers.
Problem 10
  1. Prove that and for positive integers .
  2. Prove that for any positive integer and scalar .
Answer

Each follows easily from the associated map fact. For instance, j applications of the underlying transformation, following k applications, amount simply to j + k applications.
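
A quick numerical spot-check, assuming the two identities in question are H^j H^k = H^(j+k) and (H^j)^k = H^(jk) (the statement's own symbols are not reproduced above); the matrix and exponents are arbitrary.

    import numpy as np
    from numpy.linalg import matrix_power

    H = np.array([[1.0, 2.0], [3.0, 4.0]])
    j, k = 2, 3

    assert np.allclose(matrix_power(H, j) @ matrix_power(H, k), matrix_power(H, j + k))
    assert np.allclose(matrix_power(matrix_power(H, j), k), matrix_power(H, j * k))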

This exercise is recommended for all readers.
Problem 11
  1. How does matrix multiplication interact with scalar multiplication: is ? Is ?
  2. How does matrix multiplication interact with linear combinations: is ? Is ?
Answer

Although these can be done by going through the indices, they are best understood in terms of the represented maps. That is, fix spaces and bases so that the matrices represent linear maps .

  1. Yes; we have both and (the second equality holds because of the linearity of ).
  2. Both answers are yes. First, and both send to ; the calculation is as in the prior item (using the linearity of for the first one). For the other, and both send to . (A numerical check of both items follows this list.)
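
As the numerical check promised above (the matrices and scalars are arbitrary examples, and the identities are written here in the form suggested by the answer):

    import numpy as np

    rng = np.random.default_rng(0)
    F, G, H = (rng.standard_normal((3, 3)) for _ in range(3))
    r, s = 2.5, -1.0

    # item 1: a scalar factor moves freely across a matrix product
    assert np.allclose((r * F) @ G, r * (F @ G))
    assert np.allclose(F @ (r * G), r * (F @ G))

    # item 2: the product distributes over linear combinations, on either side
    assert np.allclose(F @ (r * G + s * H), r * (F @ G) + s * (F @ H))
    assert np.allclose((r * G + s * H) @ F, r * (G @ F) + s * (H @ F))
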
Problem 12

We can ask how the matrix product operation interacts with the transpose operation.

  1. Show that the transpose of a product is the product of the transposes in the reverse order: (GH)^T = H^T G^T.
  2. A square matrix is symmetric if each i,j entry equals the j,i entry, that is, if the matrix equals its own transpose. Show that the matrices H H^T and H^T H are symmetric.
Answer

We have not seen a map interpretation of the transpose operation, so we will verify these by considering the entries.

  1. The i,j entry of (GH)^T is the j,i entry of GH, which is the dot product of the j-th row of G and the i-th column of H. The i,j entry of H^T G^T is the dot product of the i-th row of H^T and the j-th column of G^T, which is the dot product of the i-th column of H and the j-th row of G. Dot product is commutative and so these two are equal.
  2. By the prior item each of these equals its own transpose; for instance, (H H^T)^T = (H^T)^T H^T = H H^T. (A numerical check of both parts follows this list.)
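
The promised numerical check (random rectangular matrices; the transpose is written .T):

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.standard_normal((2, 3))
    H = rng.standard_normal((3, 4))

    # item 1: the transpose of a product is the product of the transposes, reversed
    assert np.allclose((G @ H).T, H.T @ G.T)

    # item 2: H H^T and H^T H equal their own transposes, i.e. they are symmetric
    assert np.allclose(H @ H.T, (H @ H.T).T)
    assert np.allclose(H.T @ H, (H.T @ H).T)
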
This exercise is recommended for all readers.
Problem 13

Rotation of vectors in about an axis is a linear map. Show that linear maps do not commute by showing geometrically that rotations do not commute.

Answer

Consider rotating all vectors radians counterclockwise about the and axes (counterclockwise in the sense that a person whose head is at or and whose feet are at the origin sees, when looking toward the origin, the rotation as counterclockwise).

Rotating first and then is different than rotating first and then . In particular, so , while so , and hence the maps do not commute.
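
A numerical version of the geometric argument, using quarter-turn rotations about the x- and y-axes (the angle here is chosen for the example; the text's own choice is not reproduced above): the two rotation matrices send the third standard basis vector to different places depending on the order, so they do not commute.

    import numpy as np

    Rx = np.array([[1, 0, 0],
                   [0, 0, -1],
                   [0, 1, 0]], dtype=float)   # rotation by pi/2 about the x-axis
    Ry = np.array([[0, 0, 1],
                   [0, 1, 0],
                   [-1, 0, 0]], dtype=float)  # rotation by pi/2 about the y-axis

    e3 = np.array([0.0, 0.0, 1.0])
    print(Rx @ (Ry @ e3))                  # [1. 0. 0.]
    print(Ry @ (Rx @ e3))                  # [ 0. -1.  0.]
    print(np.allclose(Rx @ Ry, Ry @ Rx))   # False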

Problem 14

In the proof of Theorem 2.12 some maps are used. What are the domains and codomains?

Answer

It doesn't matter (as long as the spaces have the appropriate dimensions).

For associativity, suppose that is , that is , and that is . We can take any dimensional space, any dimensional space, any dimensional space, and any dimensional space— for instance, , , , and will do. We can take any bases , , , and , for those spaces. Then, with respect to the matrix represents a linear map , with respect to the matrix represents a , and with respect to the matrix represents an . We can use those maps in the proof.

The second half is done similarly, except that and are added and so we must take them to represent maps with the same domain and codomain.

Problem 15

How does matrix rank interact with matrix multiplication?

  1. Can the product of rank matrices have rank less than ? Greater?
  2. Show that the rank of the product of two matrices is less than or equal to the minimum of the rank of each factor.
Answer
  1. The product of rank matrices can have rank less than or equal to but not greater than . To see that the rank can fall, consider the maps projecting onto the axes. Each is rank one but their composition , which is the zero map, is rank zero. That can be translated over to matrices representing those maps in this way.
    To prove that the product of rank matrices cannot have rank greater than , we can apply the map result that the image of a linearly dependent set is linearly dependent. That is, if and both have rank then a set in the range of size larger than is the image under of a set in of size larger than and so is linearly dependent (since the rank of is ). Now, the image of a linearly dependent set is dependent, so any set of size larger than in the range is dependent. (By the way, observe that the rank of was not mentioned. See the next part.)
  2. Fix spaces and bases and consider the associated linear maps and . Recall that the dimension of the image of a map (the map's rank) is less than or equal to the dimension of the domain, and consider the arrow diagram.
    First, the image of must have dimension less than or equal to the dimension of , by the prior sentence. On the other hand, is a subset of the domain of , and thus its image has dimension less than or equal to the dimension of the domain of . Combining those two, the rank of a composition is less than or equal to the minimum of the two ranks. The matrix fact follows immediately. (A numerical illustration follows.)
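
The promised illustration: projections onto the axes compose to the zero map (so the rank of a product can drop), and on a random example the rank of a product does not exceed the rank of either factor.

    import numpy as np

    # projections of the plane onto the x- and y-axes: each has rank one
    Px = np.array([[1.0, 0.0], [0.0, 0.0]])
    Py = np.array([[0.0, 0.0], [0.0, 1.0]])
    print(np.linalg.matrix_rank(Px), np.linalg.matrix_rank(Py))   # 1 1
    print(np.linalg.matrix_rank(Px @ Py))                         # 0

    # rank(GH) <= min(rank G, rank H) on a random pair
    rng = np.random.default_rng(2)
    G = rng.standard_normal((4, 3))
    H = rng.standard_normal((3, 5))
    assert np.linalg.matrix_rank(G @ H) <= min(np.linalg.matrix_rank(G),
                                               np.linalg.matrix_rank(H))
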
Problem 16

Is "commutes with" an equivalence relation among matrices?

Answer

The "commutes with" relation is reflexive and symmetric. However, it is not transitive: for instance, with

commutes with and commutes with , but does not commute with .
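
One convenient set of matrices showing the failure of transitivity (these are illustrative choices, not necessarily the ones the text displays; the middle matrix is the identity, which commutes with everything):

    import numpy as np

    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.eye(2)
    C = np.array([[1.0, 0.0], [1.0, 1.0]])

    assert np.allclose(A @ B, B @ A)    # A commutes with B
    assert np.allclose(B @ C, C @ B)    # B commutes with C
    print(np.allclose(A @ C, C @ A))    # False: A does not commute with C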

This exercise is recommended for all readers.
Problem 17

(This will be used in the Matrix Inverses exercises.) Here is another property of matrix multiplication that might be puzzling at first sight.

  1. Prove that the composition of the projections onto the x- and y-axes is the zero map despite that neither one is itself the zero map.
  2. Prove that the composition of the derivatives d^2/dx^2 and d^3/dx^3 is the zero map despite that neither is the zero map.
  3. Give a matrix equation representing the first fact.
  4. Give a matrix equation representing the second.

When two things multiply to give zero despite that neither is zero, each is said to be a zero divisor.

Answer
  1. Either of these.
  2. The composition is the fifth derivative map on the space of fourth-degree polynomials.
  3. With respect to the natural bases,
    and their product (in either order) is the zero matrix.
  4. Where ,
    and their product (in either order) is the zero matrix. (A numerical check of both parts follows.)
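
The promised check, assuming the projections are onto the x- and y-axes of R^3 and the derivatives are d^2/dx^2 and d^3/dx^3 acting on polynomials of degree at most four (coefficient vectors are written constant term first):

    import numpy as np

    # part 3: projections onto the x- and y-axes multiply to the zero matrix
    Px = np.diag([1.0, 0.0, 0.0])
    Py = np.diag([0.0, 1.0, 0.0])
    assert np.allclose(Px @ Py, np.zeros((3, 3)))
    assert np.allclose(Py @ Px, np.zeros((3, 3)))

    # part 4: the derivative matrix on polynomials of degree at most four
    D = np.diag([1.0, 2.0, 3.0, 4.0], k=1)
    D2 = np.linalg.matrix_power(D, 2)       # second derivative
    D3 = np.linalg.matrix_power(D, 3)       # third derivative
    assert np.allclose(D2 @ D3, np.zeros((5, 5)))   # their product is the fifth derivative: zero
    assert not np.allclose(D2, np.zeros((5, 5))) and not np.allclose(D3, np.zeros((5, 5)))
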
Problem 18

Show that, for square matrices, (S + T)(S - T) need not equal S^2 - T^2.

Answer

Note that (S + T)(S - T) = S^2 - ST + TS - T^2, so a reasonable try is to look at matrices that do not commute so that -ST and TS don't cancel: with

we have the desired inequality.
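
A concrete check, assuming the claim is that (S + T)(S - T) need not equal S^2 - T^2, with one arbitrary non-commuting pair (not necessarily the matrices the text displays):

    import numpy as np

    S = np.array([[1.0, 1.0], [0.0, 1.0]])
    T = np.array([[1.0, 0.0], [1.0, 1.0]])

    lhs = (S + T) @ (S - T)
    rhs = S @ S - T @ T
    print(np.allclose(lhs, rhs))   # False
    print(lhs - rhs)               # equals TS - ST, the cross terms that fail to cancel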

This exercise is recommended for all readers.
Problem 19

Represent the identity transformation with respect to any basis. This is the identity matrix I. Show that this matrix plays the role in matrix multiplication that the number 1 plays in real number multiplication: HI = IH = H (for all matrices H for which the product is defined).

Answer

Because the identity map sends each basis vector to itself, the representation is this.

The second part of the question is obvious from Theorem 2.6.
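
A short numerical confirmation on an arbitrary rectangular matrix:

    import numpy as np

    H = np.arange(6.0).reshape(2, 3)        # an arbitrary 2x3 matrix
    assert np.allclose(np.eye(2) @ H, H)    # identity on the left changes nothing
    assert np.allclose(H @ np.eye(3), H)    # identity on the right changes nothing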

Problem 20

In real number algebra, quadratic equations have at most two solutions. That is not so with matrix algebra. Show that the 2×2 matrix equation X^2 = I has more than two solutions, where I is the identity matrix (this matrix has ones in its 1,1 and 2,2 entries and zeroes elsewhere; see Problem 19).

Answer

Here are four solutions.
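
The text's four matrices are not reproduced above; as an illustration, here are four 2×2 matrices whose square is the identity.

    import numpy as np

    solutions = [np.eye(2),
                 -np.eye(2),
                 np.diag([1.0, -1.0]),
                 np.array([[0.0, 1.0], [1.0, 0.0]])]
    for X in solutions:
        assert np.allclose(X @ X, np.eye(2))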

Problem 21
  1. Prove that for any 2×2 matrix T there are scalars c_0, ..., c_4 that are not all 0 such that the combination c_4 T^4 + c_3 T^3 + c_2 T^2 + c_1 T + c_0 I is the zero matrix (where I is the 2×2 identity matrix, with 1's in its 1,1 and 2,2 entries and zeroes elsewhere; see Problem 19).
  2. Let p(x) be a polynomial c_n x^n + ... + c_1 x + c_0. If T is a square matrix we define p(T) to be the matrix c_n T^n + ... + c_1 T + c_0 I (where I is the appropriately-sized identity matrix). Prove that for any square matrix T there is a polynomial p(x) such that p(T) is the zero matrix.
  3. The minimal polynomial m(x) of a square matrix is the polynomial of least degree, and with leading coefficient 1, such that m(T) is the zero matrix. Find the minimal polynomial of this matrix.
    (This is the representation with respect to , the standard basis, of a rotation through radians counterclockwise.)
Answer
  1. The vector space of 2×2 matrices has dimension four. The set {I, T, T^2, T^3, T^4} has five elements and thus is linearly dependent.
  2. Where T is n×n, generalizing the argument from the prior item shows that there is such a polynomial of degree n^2 or less, since {I, T, ..., T^(n^2)} is an (n^2 + 1)-member subset of the n^2-dimensional space of n×n matrices.
  3. First compute the powers
    (observe that rotating by three times results in a rotation by , which is indeed what represents). Then set equal to the zero matrix
    to get this linear system.
    Apply Gaussian reduction.
    Setting , , and to zero makes and also come out to be zero so no degree one or degree zero polynomial will do. Setting and to zero (and to one) gives a linear system
    that can be solved with and . Conclusion: the polynomial is minimal for the matrix. (A numerical check, under an assumed angle, follows this list.)
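
The promised check, under an assumed angle (pi/6; the exercise's own angle is not reproduced above): for a rotation T of the plane through an angle t, the quadratic T^2 - 2*cos(t)*T + I comes out to the zero matrix, consistent with the conclusion that no polynomial of smaller degree will do.

    import numpy as np

    t = np.pi / 6                      # assumed angle for this sketch
    T = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    value = np.linalg.matrix_power(T, 2) - 2 * np.cos(t) * T + np.eye(2)
    print(np.allclose(value, np.zeros((2, 2))))   # True
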
Problem 22

The infinite-dimensional space of all finite-degree polynomials gives a memorable example of the non-commutativity of linear maps. Let be the usual derivative and let be the shift map.

Show that the two maps don't commute; in fact, not only is the difference of the two compositions not the zero map, it is the identity map.

Answer

The check is routine:

while

so that under the map we have .
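
A coefficient-level check in NumPy, assuming the shift map sends p(x) to x*p(x) (coefficient arrays are written constant term first); the difference of the two compositions hands back the original polynomial, so it acts as the identity.

    import numpy as np
    from numpy.polynomial import polynomial as P

    p = np.array([5.0, 3.0, 2.0])                 # p(x) = 5 + 3x + 2x^2

    shift_then_deriv = P.polyder(P.polymulx(p))   # (x*p(x))' = 5 + 6x + 6x^2
    deriv_then_shift = P.polymulx(P.polyder(p))   # x*p'(x)   =     3x + 4x^2

    print(P.polysub(shift_then_deriv, deriv_then_shift))   # [5. 3. 2.], i.e. p itself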

Problem 23

Recall the Σ notation for the sum of a sequence of numbers.

In this notation, the i,j entry of the product of G and H is the sum over k of the i,k entry of G times the k,j entry of H.

Using this notation,

  1. reprove that matrix multiplication is associative;
  2. reprove Theorem 2.6.
Answer
  1. Tracing through the remark at the end of the subsection gives that the entry of is this
    (the first equality comes from using the distributive law to multiply through the 's, the second equality is the associative law for real numbers, the third is the commutative law for reals, and the fourth equality follows on using the distributive law to factor the 's out), which is the entry of .
  2. The -th component of is
    and so the -th component of is this
    (the first equality holds by using the distributive law to multiply the 's through, the second equality represents the use of associativity of reals, the third follows by commutativity of reals, and the fourth comes from using the distributive law to factor the 's out).