# Linear Algebra/Properties of Determinants/Solutions

## Solutions

For these, assume that an ${\displaystyle n\!\times \!n}$ determinant function exists for all ${\displaystyle n}$.

This exercise is recommended for all readers.
Problem 1

Use Gauss' method to find each determinant.

1. ${\displaystyle {\begin{vmatrix}3&1&2\\3&1&0\\0&1&4\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}1&0&0&1\\2&1&1&0\\-1&0&1&0\\1&1&1&0\end{vmatrix}}}$
1. ${\displaystyle {\begin{vmatrix}3&1&2\\3&1&0\\0&1&4\end{vmatrix}}={\begin{vmatrix}3&1&2\\0&0&-2\\0&1&4\end{vmatrix}}=-{\begin{vmatrix}3&1&2\\0&1&4\\0&0&-2\end{vmatrix}}=6}$
2. ${\displaystyle {\begin{vmatrix}1&0&0&1\\2&1&1&0\\-1&0&1&0\\1&1&1&0\end{vmatrix}}={\begin{vmatrix}1&0&0&1\\0&1&1&-2\\0&0&1&1\\0&1&1&-1\end{vmatrix}}={\begin{vmatrix}1&0&0&1\\0&1&1&-2\\0&0&1&1\\0&0&0&1\end{vmatrix}}=1}$
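Both answers can be spot-checked numerically. The text does not use NumPy; this is just an independent sketch for verifying the two reductions:

```python
import numpy as np

# Problem 1(a): the text's row reduction gives determinant 6
A = np.array([[3, 1, 2],
              [3, 1, 0],
              [0, 1, 4]], dtype=float)
print(round(np.linalg.det(A)))  # 6

# Problem 1(b): the reduction gives determinant 1
B = np.array([[1, 0, 0, 1],
              [2, 1, 1, 0],
              [-1, 0, 1, 0],
              [1, 1, 1, 0]], dtype=float)
print(round(np.linalg.det(B)))  # 1
```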
Problem 2
Use Gauss' method to find each.
1. ${\displaystyle {\begin{vmatrix}2&-1\\-1&-1\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}1&1&0\\3&0&2\\5&2&2\end{vmatrix}}}$
1. ${\displaystyle {\begin{vmatrix}2&-1\\-1&-1\end{vmatrix}}={\begin{vmatrix}2&-1\\0&-3/2\end{vmatrix}}=-3}$;
2. ${\displaystyle {\begin{vmatrix}1&1&0\\3&0&2\\5&2&2\end{vmatrix}}={\begin{vmatrix}1&1&0\\0&-3&2\\0&-3&2\end{vmatrix}}={\begin{vmatrix}1&1&0\\0&-3&2\\0&0&0\end{vmatrix}}=0}$
Problem 3

For which values of ${\displaystyle k}$ does this system have a unique solution?

${\displaystyle {\begin{array}{*{4}{rc}r}x&&&+&z&-&w&=&2\\&&y&-&2z&&&=&3\\x&&&+&kz&&&=&4\\&&&&z&-&w&=&2\end{array}}}$

When is the determinant not zero?

${\displaystyle {\begin{vmatrix}1&0&1&-1\\0&1&-2&0\\1&0&k&0\\0&0&1&-1\end{vmatrix}}={\begin{vmatrix}1&0&1&-1\\0&1&-2&0\\0&0&k-1&1\\0&0&1&-1\end{vmatrix}}}$

If ${\displaystyle k=1}$ then swapping the last two rows gives echelon form with no zeroes on the diagonal, so the determinant is nonzero. If ${\displaystyle k\neq 1}$ then we get echelon form with a ${\displaystyle (-1/(k-1))\rho _{3}+\rho _{4}}$ pivot.

${\displaystyle ={\begin{vmatrix}1&0&1&-1\\0&1&-2&0\\0&0&k-1&1\\0&0&0&-1-(1/(k-1))\end{vmatrix}}}$

Multiplying down the diagonal gives ${\displaystyle (k-1)\cdot (-1-(1/(k-1)))=-(k-1)-1=-k}$. Thus the matrix has a nonzero determinant, and so the system has a unique solution, if and only if ${\displaystyle k\neq 0}$.
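The value ${\displaystyle -k}$ can be confirmed symbolically. SymPy is not part of the text; this is just a verification sketch:

```python
import sympy as sp

k = sp.symbols('k')
M = sp.Matrix([[1, 0, 1, -1],
               [0, 1, -2, 0],
               [1, 0, k, 0],
               [0, 0, 1, -1]])
print(sp.expand(M.det()))  # -k
```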

This exercise is recommended for all readers.
Problem 4

Express each of these in terms of ${\displaystyle \left|H\right|}$.

1. ${\displaystyle {\begin{vmatrix}h_{3,1}&h_{3,2}&h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{1,1}&h_{1,2}&h_{1,3}\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}-h_{1,1}&-h_{1,2}&-h_{1,3}\\-2h_{2,1}&-2h_{2,2}&-2h_{2,3}\\-3h_{3,1}&-3h_{3,2}&-3h_{3,3}\end{vmatrix}}}$
3. ${\displaystyle {\begin{vmatrix}h_{1,1}+h_{3,1}&h_{1,2}+h_{3,2}&h_{1,3}+h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\5h_{3,1}&5h_{3,2}&5h_{3,3}\end{vmatrix}}}$
1. Property (2) of the definition of determinants applies via the swap ${\displaystyle \rho _{1}\leftrightarrow \rho _{3}}$.
${\displaystyle {\begin{vmatrix}h_{3,1}&h_{3,2}&h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{1,1}&h_{1,2}&h_{1,3}\end{vmatrix}}=-{\begin{vmatrix}h_{1,1}&h_{1,2}&h_{1,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{3,1}&h_{3,2}&h_{3,3}\end{vmatrix}}}$
2. Property (3) applies.
${\displaystyle {\begin{vmatrix}-h_{1,1}&-h_{1,2}&-h_{1,3}\\-2h_{2,1}&-2h_{2,2}&-2h_{2,3}\\-3h_{3,1}&-3h_{3,2}&-3h_{3,3}\end{vmatrix}}=(-1)\cdot (-2)\cdot (-3)\cdot {\begin{vmatrix}h_{1,1}&h_{1,2}&h_{1,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{3,1}&h_{3,2}&h_{3,3}\end{vmatrix}}=(-6)\cdot {\begin{vmatrix}h_{1,1}&h_{1,2}&h_{1,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{3,1}&h_{3,2}&h_{3,3}\end{vmatrix}}}$
3. ${\displaystyle {\begin{array}{rl}{\begin{vmatrix}h_{1,1}+h_{3,1}&h_{1,2}+h_{3,2}&h_{1,3}+h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\5h_{3,1}&5h_{3,2}&5h_{3,3}\end{vmatrix}}&=5\cdot {\begin{vmatrix}h_{1,1}+h_{3,1}&h_{1,2}+h_{3,2}&h_{1,3}+h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{3,1}&h_{3,2}&h_{3,3}\end{vmatrix}}\\&=5\cdot {\begin{vmatrix}h_{1,1}&h_{1,2}&h_{1,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{3,1}&h_{3,2}&h_{3,3}\end{vmatrix}}\end{array}}}$
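All three manipulations can be spot-checked numerically. A sketch using NumPy (not used elsewhere in the text), with an arbitrary random matrix standing in for ${\displaystyle H}$:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))  # an arbitrary test matrix
d = np.linalg.det(H)

# (a) swapping rows 1 and 3 negates the determinant
assert np.isclose(np.linalg.det(H[[2, 1, 0]]), -d)
# (b) scaling the three rows by -1, -2, -3 multiplies it by -6
assert np.isclose(np.linalg.det(np.diag([-1., -2., -3.]) @ H), -6 * d)
# (c) adding row 3 to row 1 and scaling row 3 by 5 multiplies it by 5
M = np.vstack([H[0] + H[2], H[1], 5 * H[2]])
assert np.isclose(np.linalg.det(M), 5 * d)
print("all three properties check out")
```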
This exercise is recommended for all readers.
Problem 5

Find the determinant of a diagonal matrix.

A diagonal matrix is in echelon form, so the determinant is the product down the diagonal.

Problem 6

Describe the solution set of a homogeneous linear system if the determinant of the matrix of coefficients is nonzero.

It is the trivial subspace.

This exercise is recommended for all readers.
Problem 7

Show that this determinant is zero.

${\displaystyle {\begin{vmatrix}y+z&x+z&x+y\\x&y&z\\1&1&1\end{vmatrix}}}$

Pivoting by adding the second row to the first gives a matrix whose first row is ${\displaystyle x+y+z}$ times its third row.
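A symbolic spot-check of this answer, using SymPy (not part of the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
M = sp.Matrix([[y + z, x + z, x + y],
               [x, y, z],
               [1, 1, 1]])
print(sp.expand(M.det()))  # 0
```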

Problem 8
1. Find the ${\displaystyle 1\!\times \!1}$, ${\displaystyle 2\!\times \!2}$, and ${\displaystyle 3\!\times \!3}$ matrices with ${\displaystyle i,j}$ entry given by ${\displaystyle (-1)^{i+j}}$.
2. Find the determinant of the square matrix with ${\displaystyle i,j}$ entry ${\displaystyle (-1)^{i+j}}$.
1. ${\displaystyle {\begin{pmatrix}1\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}1&-1\\-1&1\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}1&-1&1\\-1&1&-1\\1&-1&1\end{pmatrix}}}$
2. The determinant in the ${\displaystyle 1\!\times \!1}$ case is ${\displaystyle 1}$. In every other case the second row is the negative of the first, and so the matrix is singular and the determinant is zero.
Problem 9
1. Find the ${\displaystyle 1\!\times \!1}$, ${\displaystyle 2\!\times \!2}$, and ${\displaystyle 3\!\times \!3}$ matrices with ${\displaystyle i,j}$ entry given by ${\displaystyle i+j}$.
2. Find the determinant of the square matrix with ${\displaystyle i,j}$ entry ${\displaystyle i+j}$.
1. ${\displaystyle {\begin{pmatrix}2\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}2&3\\3&4\end{pmatrix}}}$, ${\displaystyle {\begin{pmatrix}2&3&4\\3&4&5\\4&5&6\end{pmatrix}}}$
2. The ${\displaystyle 1\!\times \!1}$ and ${\displaystyle 2\!\times \!2}$ cases yield these.
${\displaystyle {\begin{vmatrix}2\end{vmatrix}}=2\qquad {\begin{vmatrix}2&3\\3&4\end{vmatrix}}=-1}$
And ${\displaystyle n\!\times \!n}$ matrices with ${\displaystyle n\geq 3}$ are singular, e.g.,
${\displaystyle {\begin{vmatrix}2&3&4\\3&4&5\\4&5&6\end{vmatrix}}=0}$
because twice the second row minus the first row equals the third row. Checking this is routine.
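A quick numerical check of this pattern, using NumPy (not part of the text); `plus_matrix` is a hypothetical helper name:

```python
import numpy as np

def plus_matrix(n):
    """The n-by-n matrix whose i,j entry is i+j (1-based, as in the text)."""
    return np.fromfunction(lambda i, j: i + j + 2, (n, n))

assert round(np.linalg.det(plus_matrix(1))) == 2
assert round(np.linalg.det(plus_matrix(2))) == -1
for n in range(3, 7):
    assert abs(np.linalg.det(plus_matrix(n))) < 1e-9  # singular for n >= 3
print("matches the text's answers")
```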
This exercise is recommended for all readers.
Problem 10

Show that determinant functions are not linear by giving a case where ${\displaystyle \left|A+B\right|\neq \left|A\right|+\left|B\right|}$.

This one

${\displaystyle A=B={\begin{pmatrix}1&2\\3&4\end{pmatrix}}}$

is easy to check.

${\displaystyle \left|A+B\right|={\begin{vmatrix}2&4\\6&8\end{vmatrix}}=-8\qquad \left|A\right|+\left|B\right|=-2-2=-4}$

By the way, this also gives an example where scalar multiplication is not preserved ${\displaystyle \left|2\cdot A\right|\neq 2\cdot \left|A\right|}$.
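The arithmetic is easy to reproduce numerically; a sketch using NumPy (not part of the text):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
det = np.linalg.det

print(round(det(A + A)))       # |A+B| with B=A: -8
print(round(det(A) + det(A)))  # |A|+|B|: -4
assert not np.isclose(det(A + A), det(A) + det(A))
# scalar multiplication is not preserved either: |2A| = 4|A| for 2x2 matrices
assert np.isclose(det(2 * A), 4 * det(A))
```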

Problem 11

The second condition in the definition, that row swaps change the sign of a determinant, is somewhat annoying. It means we have to keep track of the number of swaps, to compute how the sign alternates. Can we get rid of it? Can we replace it with the condition that row swaps leave the determinant unchanged? (If so then we would need new ${\displaystyle 1\!\times \!1}$, ${\displaystyle 2\!\times \!2}$, and ${\displaystyle 3\!\times \!3}$ formulas, but that would be a minor matter.)

No, we cannot replace it. Remark 2.2 shows that the four conditions resulting from that replacement conflict; no function satisfies all four.

Problem 12

Prove that the determinant of any triangular matrix, upper or lower, is the product down its diagonal.

An upper-triangular matrix is in echelon form, so its determinant is the product down its diagonal.

A lower-triangular matrix is either singular or nonsingular. If it is singular then it has a zero on its diagonal and so its determinant (namely, zero) is indeed the product down its diagonal. If it is nonsingular then it has no zeroes on its diagonal, and can be reduced by Gauss' method to echelon form without changing the diagonal.
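A numerical illustration of both cases, using NumPy (not part of the text), with an arbitrary example matrix:

```python
import numpy as np

L = np.array([[2., 0., 0.],
              [7., 3., 0.],
              [1., -4., 5.]])  # an arbitrary lower-triangular example
U = L.T                        # its transpose is upper-triangular
for T in (L, U):
    # determinant equals the product down the diagonal (30 in both cases)
    assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))
print("checked lower- and upper-triangular cases")
```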

Problem 13

Refer to the definition of elementary matrices in the Mechanics of Matrix Multiplication subsection.

1. What is the determinant of each kind of elementary matrix?
2. Prove that if ${\displaystyle E}$ is any elementary matrix then ${\displaystyle \left|ES\right|=\left|E\right|\left|S\right|}$ for any appropriately sized ${\displaystyle S}$.
3. (This question doesn't involve determinants.) Prove that if ${\displaystyle T}$ is singular then a product ${\displaystyle TS}$ is also singular.
4. Show that ${\displaystyle \left|TS\right|=\left|T\right|\left|S\right|}$.
5. Show that if ${\displaystyle T}$ is nonsingular then ${\displaystyle \left|T^{-1}\right|=\left|T\right|^{-1}}$.
1. The properties in the definition of determinant show that ${\displaystyle \left|M_{i}(k)\right|=k}$, ${\displaystyle \left|P_{i,j}\right|=-1}$, and ${\displaystyle \left|C_{i,j}(k)\right|=1}$.
2. The three cases are easy to check by recalling the action of left multiplication by each type of matrix.
3. If ${\displaystyle TS}$ is invertible, with ${\displaystyle (TS)M=I}$, then the associative property of matrix multiplication gives ${\displaystyle T(SM)=I}$, which shows that ${\displaystyle T}$ is invertible. So if ${\displaystyle T}$ is not invertible then neither is ${\displaystyle TS}$.
4. If ${\displaystyle T}$ is singular then apply the prior answer: ${\displaystyle \left|TS\right|=0}$ and ${\displaystyle \left|T\right|\cdot \left|S\right|=0\cdot \left|S\right|=0}$. If ${\displaystyle T}$ is not singular then it can be written as a product of elementary matrices ${\displaystyle \left|TS\right|=\left|E_{r}\cdots E_{1}S\right|=\left|E_{r}\right|\cdots \left|E_{1}\right|\cdot \left|S\right|=\left|E_{r}\cdots E_{1}\right|\left|S\right|=\left|T\right|\left|S\right|}$.
5. ${\displaystyle 1=\left|I\right|=\left|T\cdot T^{-1}\right|=\left|T\right|\left|T^{-1}\right|}$
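Parts (d) and (e) are easy to check numerically; a sketch using NumPy (not part of the text), with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))  # almost surely nonsingular
S = rng.standard_normal((4, 4))
det = np.linalg.det

assert np.isclose(det(T @ S), det(T) * det(S))        # part (d): |TS| = |T||S|
assert np.isclose(det(np.linalg.inv(T)), 1 / det(T))  # part (e): |T^-1| = |T|^-1
print("both identities hold numerically")
```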
Problem 14

Prove that the determinant of a product is the product of the determinants ${\displaystyle \left|TS\right|=\left|T\right|\,\left|S\right|}$ in this way. Fix the ${\displaystyle n\!\times \!n}$ matrix ${\displaystyle S}$ and consider the function ${\displaystyle d:{\mathcal {M}}_{n\!\times \!n}\to \mathbb {R} }$ given by ${\displaystyle T\mapsto \left|TS\right|/\left|S\right|}$.

1. Check that ${\displaystyle d}$ satisfies property (1) in the definition of a determinant function.
2. Check property (2).
3. Check property (3).
4. Check property (4).
5. Conclude the determinant of a product is the product of the determinants.
1. We must show that if
${\displaystyle T{\xrightarrow[{}]{k\rho _{i}+\rho _{j}}}{\hat {T}}}$
then ${\displaystyle d(T)=\left|TS\right|/\left|S\right|=\left|{\hat {T}}S\right|/\left|S\right|=d({\hat {T}})}$. We will be done if we show that pivoting first and then multiplying to get ${\displaystyle {\hat {T}}S}$ gives the same result as multiplying first to get ${\displaystyle TS}$ and then pivoting (because the determinant ${\displaystyle \left|TS\right|}$ is unaffected by the pivot so we'll then have ${\displaystyle \left|{\hat {T}}S\right|=\left|TS\right|}$, and hence ${\displaystyle d({\hat {T}})=d(T)}$). That argument runs: after adding ${\displaystyle k}$ times row ${\displaystyle i}$ of ${\displaystyle TS}$ to row ${\displaystyle j}$ of ${\displaystyle TS}$, the ${\displaystyle j,p}$ entry is ${\displaystyle (kt_{i,1}+t_{j,1})s_{1,p}+\dots +(kt_{i,n}+t_{j,n})s_{n,p}}$, which is the ${\displaystyle j,p}$ entry of ${\displaystyle {\hat {T}}S}$.
2. We need only show that swapping ${\displaystyle T{\xrightarrow[{}]{\rho _{i}\leftrightarrow \rho _{j}}}{\hat {T}}}$ and then multiplying to get ${\displaystyle {\hat {T}}S}$ gives the same result as multiplying ${\displaystyle T}$ by ${\displaystyle S}$ and then swapping (because, as the determinant ${\displaystyle \left|TS\right|}$ changes sign on the row swap, we'll then have ${\displaystyle \left|{\hat {T}}S\right|=-\left|TS\right|}$, and so ${\displaystyle d({\hat {T}})=-d(T)}$). That argument runs just like the prior one.
3. Not surprisingly by now, we need only show that multiplying a row by a nonzero scalar ${\displaystyle T{\xrightarrow[{}]{k\rho _{i}}}{\hat {T}}}$ and then computing ${\displaystyle {\hat {T}}S}$ gives the same result as first computing ${\displaystyle TS}$ and then multiplying the row by ${\displaystyle k}$ (as the determinant ${\displaystyle \left|TS\right|}$ is rescaled by ${\displaystyle k}$ by the multiplication, we'll have ${\displaystyle \left|{\hat {T}}S\right|=k\left|TS\right|}$, so ${\displaystyle d({\hat {T}})=k\,d(T)}$). The argument runs just as above.
4. Clear.
5. Because we've shown that ${\displaystyle d(T)}$ is a determinant and that determinant functions (if they exist) are unique, we have ${\displaystyle \left|T\right|=d(T)=\left|TS\right|/\left|S\right|}$, and so ${\displaystyle \left|TS\right|=\left|T\right|\left|S\right|}$.
Problem 15

A submatrix of a given matrix ${\displaystyle A}$ is one that can be obtained by deleting some of the rows and columns of ${\displaystyle A}$. Thus, the first matrix here is a submatrix of the second.

${\displaystyle {\begin{pmatrix}3&1\\2&5\end{pmatrix}}\qquad {\begin{pmatrix}3&4&1\\0&9&-2\\2&-1&5\end{pmatrix}}}$

Prove that for any square matrix, the rank of the matrix is ${\displaystyle r}$ if and only if ${\displaystyle r}$ is the largest integer such that there is an ${\displaystyle r\!\times \!r}$ submatrix with a nonzero determinant.

We will first argue that a rank ${\displaystyle r}$ matrix has an ${\displaystyle r\!\times \!r}$ submatrix with nonzero determinant. A rank ${\displaystyle r}$ matrix has a linearly independent set of ${\displaystyle r}$ rows. A matrix made from those rows will have row rank ${\displaystyle r}$ and thus has column rank ${\displaystyle r}$. Conclusion: from those ${\displaystyle r}$ rows can be extracted a linearly independent set of ${\displaystyle r}$ columns, and so the original matrix has an ${\displaystyle r\!\times \!r}$ submatrix of rank ${\displaystyle r}$; that submatrix is nonsingular, so its determinant is nonzero.

We finish by showing that if ${\displaystyle r}$ is the largest such integer then the rank of the matrix is ${\displaystyle r}$. We need only show, by the maximality of ${\displaystyle r}$, that if a matrix has a ${\displaystyle k\!\times \!k}$ submatrix of nonzero determinant then the rank of the matrix is at least ${\displaystyle k}$. Consider such a ${\displaystyle k\!\times \!k}$ submatrix. Because its determinant is nonzero its rows are linearly independent, and any linear dependence among the corresponding whole rows of the original matrix would restrict to a dependence among the rows of the submatrix; hence the set of whole rows is linearly independent. Thus the row rank of the original matrix is at least ${\displaystyle k}$, and the row rank of a matrix equals its rank.
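This characterization can be checked by brute force on small matrices. A sketch using NumPy (not part of the text); `largest_nonzero_minor` is a hypothetical helper name:

```python
import itertools
import numpy as np

def largest_nonzero_minor(A, tol=1e-9):
    """Brute-force the largest r with an r-by-r submatrix of nonzero determinant."""
    m, n = A.shape
    for r in range(min(m, n), 0, -1):
        for rows in itertools.combinations(range(m), r):
            for cols in itertools.combinations(range(n), r):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return r
    return 0

A = np.array([[3., 4., 1.],
              [0., 9., -2.],
              [6., 8., 2.]])  # third row is twice the first, so rank 2
assert np.linalg.matrix_rank(A) == largest_nonzero_minor(A) == 2
print("rank matches the largest nonzero minor")
```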

This exercise is recommended for all readers.
Problem 16

Prove that a matrix with rational entries has a rational determinant.

A matrix with only rational entries can be reduced with Gauss' method to an echelon form matrix using only rational arithmetic. Thus the entries on the diagonal are rationals, and so the product down the diagonal is rational.
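The argument is constructive, and can be sketched in Python with exact rational arithmetic via the standard library's `fractions.Fraction`; `rational_det` is a hypothetical helper name:

```python
from fractions import Fraction

def rational_det(rows):
    """Determinant by Gauss's method in exact rational arithmetic:
    row swaps (tracking the sign) plus row-combination pivots."""
    A = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(A), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if A[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)  # zero column below the diagonal: singular
        if pivot != i:
            A[i], A[pivot] = A[pivot], A[i]
            sign = -sign
        for r in range(i + 1, n):
            factor = A[r][i] / A[i][i]
            A[r] = [a - factor * b for a, b in zip(A[r], A[i])]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]  # product down the diagonal
    return result

print(rational_det([["1/2", "1/3"], ["1/4", "1/5"]]))  # 1/60
```

Every operation stays inside the rationals, so the result is exactly rational, mirroring the argument above.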

? Problem 17

Find the element of likeness in (a) simplifying a fraction, (b) powdering the nose, (c) building new steps on the church, (d) keeping emeritus professors on campus, (e) putting ${\displaystyle B}$, ${\displaystyle C}$, ${\displaystyle D}$ in the determinant

${\displaystyle {\begin{vmatrix}1&a&a^{2}&a^{3}\\a^{3}&1&a&a^{2}\\B&a^{3}&1&a\\C&D&a^{3}&1\end{vmatrix}}.}$

This is how the answer was given in the cited source.

The value ${\displaystyle (1-a^{4})^{3}}$ of the determinant is independent of the values ${\displaystyle B}$, ${\displaystyle C}$, ${\displaystyle D}$. Hence operation (e) does not change the value of the determinant but merely changes its appearance. Thus the element of likeness in (a), (b), (c), (d), and (e) is only that the appearance of the principal entity is changed. The same element appears in (f) changing the name-label of a rose, (g) writing a decimal integer in the scale of ${\displaystyle 12}$, (h) gilding the lily, (i) whitewashing a politician, and (j) granting an honorary degree.
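The cited solution's claim is easy to confirm symbolically; a sketch using SymPy (not part of the text):

```python
import sympy as sp

a, B, C, D = sp.symbols('a B C D')
M = sp.Matrix([[1, a, a**2, a**3],
               [a**3, 1, a, a**2],
               [B, a**3, 1, a],
               [C, D, a**3, 1]])
d = sp.expand(M.det())
assert not {B, C, D} & d.free_symbols     # B, C, D drop out entirely
assert sp.expand(d - (1 - a**4)**3) == 0  # and the value is (1 - a^4)^3
print("determinant is (1 - a**4)**3, independent of B, C, D")
```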

## References

• Anning, Norman (proposer); Trigg, C. W. (solver) (Feb. 1953), "Elementary problem 1016", American Mathematical Monthly (American Mathematical Society) 60 (2): 115.