Linear Algebra/Laplace's Expansion/Solutions


Solutions

This exercise is recommended for all readers.
Problem 1

Find the cofactor.

Answer
This exercise is recommended for all readers.
Problem 2

Find the determinant by expanding

  1. on the first row
  2. on the second row
  3. on the third column.
Answer
Problem 3

Find the adjoint of the matrix in Example 1.6.

Answer

This exercise is recommended for all readers.
Problem 4

Find the matrix adjoint to each.

Answer
  1. The minors are:
This exercise is recommended for all readers.
Problem 5

Find the inverse of each matrix in the prior question with Theorem 1.9.

Answer
  1. The matrix has a zero determinant, and so has no inverse.
Problem 6

Find the matrix adjoint to this one.

Answer

This exercise is recommended for all readers.
Problem 7

Expand across the first row to derive the formula for the determinant of a 2×2 matrix.

Answer

The determinant

\begin{vmatrix} t_{1,1} & t_{1,2} \\ t_{2,1} & t_{2,2} \end{vmatrix}

expanded on the first row gives t_{1,1}\cdot(+1)\begin{vmatrix} t_{2,2} \end{vmatrix} + t_{1,2}\cdot(-1)\begin{vmatrix} t_{2,1} \end{vmatrix} = t_{1,1}t_{2,2} - t_{1,2}t_{2,1} (note the two 1×1 minors).

This exercise is recommended for all readers.
Problem 8

Expand across the first row to derive the formula for the determinant of a 3×3 matrix.

Answer

The determinant of

\begin{vmatrix} t_{1,1} & t_{1,2} & t_{1,3} \\ t_{2,1} & t_{2,2} & t_{2,3} \\ t_{3,1} & t_{3,2} & t_{3,3} \end{vmatrix}

expanded on the first row is this.

t_{1,1}\begin{vmatrix} t_{2,2} & t_{2,3} \\ t_{3,2} & t_{3,3} \end{vmatrix}
 - t_{1,2}\begin{vmatrix} t_{2,1} & t_{2,3} \\ t_{3,1} & t_{3,3} \end{vmatrix}
 + t_{1,3}\begin{vmatrix} t_{2,1} & t_{2,2} \\ t_{3,1} & t_{3,2} \end{vmatrix}
 = t_{1,1}t_{2,2}t_{3,3} - t_{1,1}t_{2,3}t_{3,2} - t_{1,2}t_{2,1}t_{3,3}
 + t_{1,2}t_{2,3}t_{3,1} + t_{1,3}t_{2,1}t_{3,2} - t_{1,3}t_{2,2}t_{3,1}

This exercise is recommended for all readers.
Problem 9
  1. Give a formula for the adjoint of a matrix.
  2. Use it to derive the formula for the inverse.
Answer
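
A minimal sketch, assuming the matrix in question is the generic 2×2 matrix with entries a, b, c, d (that size is an assumption here): its four 1×1 cofactors assemble into the adjoint, and Theorem 1.9 then gives the familiar inverse formula.

\operatorname{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix}
 = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}
\qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1}
 = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}
\quad (ad-bc \neq 0)
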
This exercise is recommended for all readers.
Problem 10

Can we compute a determinant by expanding down the diagonal?

Answer

No. Here is a determinant whose value

doesn't equal the result of expanding down the diagonal.
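
One concrete instance (possibly different from the determinant originally displayed): the 3×3 identity matrix has determinant 1, while summing each diagonal entry times its cofactor gives 3.

\begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = 1
\qquad\text{but}\qquad
1\cdot(+1)\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix}
 + 1\cdot(+1)\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix}
 + 1\cdot(+1)\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 3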

Problem 11

Give a formula for the adjoint of a diagonal matrix.

Answer

Consider this diagonal matrix.

D = \begin{pmatrix}
d_1    & 0      & \cdots & 0      \\
0      & d_2    &        & \vdots \\
\vdots &        & \ddots &        \\
0      & \cdots &        & d_n
\end{pmatrix}

If i ≠ j then the i,j minor is an (n-1)×(n-1) matrix with only n-2 nonzero entries, because both d_i and d_j are deleted. Thus, at least one row or column of the minor is all zeroes, and so the cofactor D_{i,j} is zero. If i = j then the minor is the diagonal matrix with entries d_1, ..., d_{i-1}, d_{i+1}, ..., d_n; its determinant is obviously the product of those, and since the sign (-1)^{i+i} = +1, the cofactor D_{i,i} is that product d_1 \cdots d_{i-1} d_{i+1} \cdots d_n.
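
For instance, in the 3×3 case this gives the following (an illustration only; in general the adjoint of \operatorname{diag}(d_1, \ldots, d_n) is the diagonal matrix whose i-th diagonal entry is the product of the d_k with k ≠ i).

\operatorname{adj}\begin{pmatrix} d_1 & 0 & 0 \\ 0 & d_2 & 0 \\ 0 & 0 & d_3 \end{pmatrix}
 = \begin{pmatrix} d_2 d_3 & 0 & 0 \\ 0 & d_1 d_3 & 0 \\ 0 & 0 & d_1 d_2 \end{pmatrix}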

By the way, Theorem 1.9 provides a slicker way to derive this conclusion.

This exercise is recommended for all readers.
Problem 12

Prove that the transpose of the adjoint is the adjoint of the transpose.

Answer

Just note that if S = T^{\rm trans} then the i,j cofactor S_{i,j} equals the j,i cofactor T_{j,i}: the signs agree because (-1)^{i+j} = (-1)^{j+i}, and the i,j minor of S is the transpose of the j,i minor of T (and the determinant of a transpose equals the determinant of the matrix).
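
As a small check of this, in the 2×2 case with generic entries a, b, c, d:

\Bigl(\operatorname{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix}\Bigr)^{\rm trans}
 = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}^{\rm trans}
 = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}
 = \operatorname{adj}\begin{pmatrix} a & c \\ b & d \end{pmatrix}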

Problem 13

Prove or disprove: .

Answer

It is false; here is an example.

Problem 14

A square matrix is upper triangular if each i,j entry is zero in the part below the diagonal, that is, when i > j.

  1. Must the adjoint of an upper triangular matrix be upper triangular? Lower triangular?
  2. Prove that the inverse of an upper triangular matrix is upper triangular, if an inverse exists.
Answer
  1. An example suggests the right answer; a concrete instance is worked out after this answer.
    The result is indeed upper triangular. A check of this is detailed but not hard. The i,j entry of the adjoint is the j,i cofactor T_{j,i}, so the entries below the diagonal of the adjoint are the cofactors T_{j,i} with j < i. We need to verify that the cofactor T_{j,i} is zero whenever j < i. With j < i, row j and column i of T, when deleted, leave an upper triangular minor, because entry r,s of the minor is either entry r,s of T (this happens if r < j and s < i; in this case r > s implies that the entry is zero), or it is entry r+1,s of T (this happens if r ≥ j and s < i; in this case, r > s implies that r+1 > s, which implies that the entry is zero), or it is entry r+1,s+1 of T (this last case happens when r ≥ j and s ≥ i; obviously here r > s implies that r+1 > s+1 and so the entry is zero). (The remaining case, r < j and s ≥ i, cannot occur together with r > s, since then r < j < i ≤ s.) Thus the determinant of the minor is the product down its diagonal. Observe that the i,i-1 entry of T is the i-1,i-1 entry of the minor (neither its row nor its column gets deleted, because the relation j < i is strict). But this entry is zero because T is upper triangular and i > i-1. Therefore the cofactor T_{j,i} is zero, and the adjoint is upper triangular. (The lower triangular case is similar.)
  2. This is immediate from the prior part, by Corollary 1.11.
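
Here is the kind of concrete instance referred to in part (1); the particular upper triangular matrix is chosen only for illustration.

T = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix}
\qquad
\operatorname{adj}(T)
 = \begin{pmatrix} T_{1,1} & T_{2,1} & T_{3,1} \\ T_{1,2} & T_{2,2} & T_{3,2} \\ T_{1,3} & T_{2,3} & T_{3,3} \end{pmatrix}
 = \begin{pmatrix} 24 & -12 & -2 \\ 0 & 6 & -5 \\ 0 & 0 & 4 \end{pmatrix}

The adjoint is again upper triangular, and since |T| = 24, the inverse T^{-1} = (1/24)\operatorname{adj}(T) is upper triangular as well, illustrating part (2).
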
Problem 15

This question requires material from the optional Determinants Exist subsection. Prove Theorem 1.5 by using the permutation expansion.

Answer

We will show that each determinant can be expanded along row i. The argument for expanding along column j is similar.

Each term in the permutation expansion contains one and only one entry from each row. As in Example 1.1, factor out the row i entries to get |T| = t_{i,1}\hat{T}_{i,1} + t_{i,2}\hat{T}_{i,2} + \cdots + t_{i,n}\hat{T}_{i,n}, where each \hat{T}_{i,j} is a sum of terms not containing any elements of row i. We will show that \hat{T}_{i,j} is the i,j cofactor T_{i,j}.

Consider the i = n, j = n case first. Here

\hat{T}_{n,n} = \sum_{\phi} t_{1,\phi(1)}\, t_{2,\phi(2)} \cdots t_{n-1,\phi(n-1)}\, \operatorname{sgn}(\phi)

where the sum is over all n-permutations \phi such that \phi(n) = n. Since (-1)^{n+n} = +1, to show that \hat{T}_{n,n} is the n,n cofactor we need only show that it is the determinant of the n,n minor, and for that we need only show that if \phi is an n-permutation such that \phi(n) = n and \sigma is the (n-1)-permutation with \sigma(1) = \phi(1), ..., \sigma(n-1) = \phi(n-1), then \operatorname{sgn}(\sigma) = \operatorname{sgn}(\phi). But that's true because \phi and \sigma have the same number of inversions.

Back to the general i, j case. Swap adjacent rows until the i-th is last and swap adjacent columns until the j-th is last. The determinant of the i,j minor is not affected by these adjacent swaps, because inversions are preserved (the minor has the i-th row and j-th column omitted). On the other hand, the sign of |T|, and hence of \hat{T}_{i,j}, is changed n-i plus n-j times. Thus \hat{T}_{i,j} equals (-1)^{(n-i)+(n-j)} = (-1)^{i+j} times the determinant of the i,j minor; that is, \hat{T}_{i,j} is the i,j cofactor T_{i,j}.
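
To make the factoring step concrete, here is the n = 3, i = 3 case (the indices are chosen only for definiteness). Grouping the six terms of the permutation expansion by the entry taken from row 3 gives

|T| = t_{3,1}\,(t_{1,2}t_{2,3} - t_{1,3}t_{2,2})
    + t_{3,2}\,(-t_{1,1}t_{2,3} + t_{1,3}t_{2,1})
    + t_{3,3}\,(t_{1,1}t_{2,2} - t_{1,2}t_{2,1})

and each parenthesized factor \hat{T}_{3,j} is indeed (-1)^{3+j} times the determinant of the 3,j minor, that is, the 3,j cofactor.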

Problem 16

Prove that the determinant of a matrix equals the determinant of its transpose using Laplace's expansion and induction on the size of the matrix.

Answer

This is obvious for the 1×1 base case, since a 1×1 matrix equals its own transpose.

For the inductive case, assume that the determinant of a matrix equals the determinant of its transpose for all 1×1, ..., (n-1)×(n-1) matrices, and let T be n×n. Expanding |T| on row i gives |T| = t_{i,1}T_{i,1} + \cdots + t_{i,n}T_{i,n}, and expanding |T^{\rm trans}| down column i gives |T^{\rm trans}| = t_{i,1}(T^{\rm trans})_{1,i} + \cdots + t_{i,n}(T^{\rm trans})_{n,i}, since the j,i entry of T^{\rm trans} is t_{i,j}. Since (-1)^{i+j} = (-1)^{j+i}, the signs are the same in the two summations. Since the j,i minor of T^{\rm trans} is the transpose of the i,j minor of T, the inductive hypothesis gives (T^{\rm trans})_{j,i} = T_{i,j}, and therefore |T^{\rm trans}| = |T|.
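
For instance, in the 2×2 case the two expansions use the same entries, the same signs, and (1×1, hence trivially transposed) minors:

\begin{vmatrix} t_{1,1} & t_{1,2} \\ t_{2,1} & t_{2,2} \end{vmatrix}
 = t_{1,1}t_{2,2} - t_{1,2}t_{2,1}
 = \begin{vmatrix} t_{1,1} & t_{2,1} \\ t_{1,2} & t_{2,2} \end{vmatrix}

where the left determinant is expanded on its first row and the right one (the transpose) down its first column.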

? Problem 17

Show that

where F_n is the n-th term of the Fibonacci sequence, and the determinant is of order n. (Walter & Tytun 1949)

Answer

This is how the answer was given in the cited source.

Denoting the above determinant by D_n, it is seen that the relation holds in the two smallest cases. It remains to show that D_n = D_{n-1} + D_{n-2}. In D_n subtract the (n-2)-th column from the n-th, the (n-3)-th from the (n-1)-th, ..., the first from the third, obtaining

By expanding this determinant with reference to the first row, there results the desired relation.
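
For illustration, one determinant of this kind, not necessarily the one displayed in the problem, is the order-n determinant with 1's on the main diagonal, -1's on the superdiagonal, and 1's on the subdiagonal. Expanding it on the first row, the 1,1 term contributes D_{n-1}, and the 1,2 term contributes (-1)\cdot(-1)\cdot D_{n-2} = D_{n-2} (after expanding the 1,2 minor down its first column), giving the Fibonacci recurrence.

D_n = \begin{vmatrix}
1 & -1 &        &        &    \\
1 & 1  & -1     &        &    \\
  & 1  & 1      & \ddots &    \\
  &    & \ddots & \ddots & -1 \\
  &    &        & 1      & 1
\end{vmatrix}
 = D_{n-1} + D_{n-2}
\qquad (D_1 = 1,\; D_2 = 2)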

References

  • Walter, Dan (proposer); Tytun, Alex (solver) (1949), "Elementary problem 834", American Mathematical Monthly, Mathematical Association of America, 56 (6): 409.