# Chapter IV - Determinants

In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form ${\displaystyle T{\vec {x}}={\vec {b}}}$ where ${\displaystyle T}$ is a square matrix. We noted a distinction between two classes of ${\displaystyle T}$'s. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular ${\displaystyle T}$ is associated with a unique solution in any system, such as the homogeneous system ${\displaystyle {\vec {b}}={\vec {0}}}$, then ${\displaystyle T}$ is associated with a unique solution for every ${\displaystyle {\vec {b}}}$. We call such a matrix of coefficients "nonsingular". The other kind of ${\displaystyle T}$, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call "singular".

Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an ${\displaystyle n\!\times \!n}$ matrix ${\displaystyle T}$ is equivalent to each of these:

1. a system ${\displaystyle T{\vec {x}}={\vec {b}}}$ has a solution, and that solution is unique;
2. Gauss-Jordan reduction of ${\displaystyle T}$ yields an identity matrix;
3. the rows of ${\displaystyle T}$ form a linearly independent set;
4. the columns of ${\displaystyle T}$ form a basis for ${\displaystyle \mathbb {R} ^{n}}$;
5. any map that ${\displaystyle T}$ represents is an isomorphism;
6. an inverse matrix ${\displaystyle T^{-1}}$ exists.

So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say "matrix" in place of "square matrix".)

More precisely, we will develop infinitely many formulas, one for ${\displaystyle 1\!\times \!1}$ matrices, one for ${\displaystyle 2\!\times \!2}$ matrices, etc. Of course, these formulas are related — that is, we will develop a family of formulas, a scheme that describes the formula for each size.

## Section I - Definition

For ${\displaystyle 1\!\times \!1}$ matrices, determining nonsingularity is trivial.

${\displaystyle {\begin{pmatrix}a\end{pmatrix}}}$ is nonsingular iff ${\displaystyle a\neq 0}$

The ${\displaystyle 2\!\times \!2}$ formula came out in the course of developing the inverse.

${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}}$ is nonsingular iff ${\displaystyle ad-bc\neq 0}$

The ${\displaystyle 3\!\times \!3}$ formula can be produced similarly (see Problem 9).

${\displaystyle {\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}}}$ is nonsingular iff ${\displaystyle aei+bfg+cdh-hfa-idb-gec\neq 0}$

With these cases in mind, we posit a family of formulas, ${\displaystyle a}$, ${\displaystyle ad-bc}$, etc. For each ${\displaystyle n}$ the formula gives rise to a determinant function ${\displaystyle \det \nolimits _{n\!\times \!n}:{\mathcal {M}}_{n\!\times \!n}\to \mathbb {R} }$ such that an ${\displaystyle n\!\times \!n}$ matrix ${\displaystyle T}$ is nonsingular if and only if ${\displaystyle \det \nolimits _{n\!\times \!n}(T)\neq 0}$. (We usually omit the subscript because if ${\displaystyle T}$ is ${\displaystyle n\!\times \!n}$ then "${\displaystyle \det(T)}$" could only mean "${\displaystyle \det \nolimits _{n\!\times \!n}(T)}$".)
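Readers who like to experiment can try the small cases numerically. The sketch below (the function names `det2` and `det3` are ours, not the text's) encodes the ${\displaystyle 2\!\times \!2}$ and ${\displaystyle 3\!\times \!3}$ formulas above and checks them on a few matrices whose singularity is easy to see by eye.

```python
# The 2x2 and 3x3 formulas from the text; det2 and det3 are our own names.
def det2(m):
    (a, b), (c, d) = m
    return a*d - b*c

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

# Proportional rows make a matrix singular; the formulas report zero.
assert det2([[1, 2], [2, 4]]) == 0           # second row is twice the first
assert det2([[1, 2], [3, 4]]) != 0           # nonsingular

assert det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == 0   # rows are dependent
assert det3([[2, 0, 1], [3, 1, 1], [-1, 0, 1]]) != 0  # nonsingular
```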

### 1 - Exploration

This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.

The three cases above don't show an evident pattern to use for the general ${\displaystyle n\!\times \!n}$ formula. We may spot that the ${\displaystyle 1\!\times \!1}$ term ${\displaystyle a}$ has one letter, that the ${\displaystyle 2\!\times \!2}$ terms ${\displaystyle ad}$ and ${\displaystyle bc}$ have two letters, and that the ${\displaystyle 3\!\times \!3}$ terms ${\displaystyle aei}$, etc., have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix, e.g., the letters in the ${\displaystyle cdh}$ term

${\displaystyle {\begin{pmatrix}&&c\\d\\&h\end{pmatrix}}}$

come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.

A good problem solving strategy is to see what properties a solution must have and then search for something with those properties. So we shall start by asking what properties we require of the formulas.

At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check whether the diagonal of the resulting echelon form matrix has any zeroes (that is, to check whether the product down the diagonal is zero). So, we may expect that the proof that a formula determines singularity will involve applying Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the determinant formula gives zero. This suggests our initial plan: we will look for a family of functions with the property of being unaffected by row operations and with the property that a determinant of an echelon form matrix is the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where ${\displaystyle T\rightarrow \cdots \rightarrow {\hat {T}}}$ is the Gaussian reduction, the determinant of ${\displaystyle T}$ equals the determinant of ${\displaystyle {\hat {T}}}$ (because the determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if the matrix is singular". In the rest of this subsection we will test this plan on the ${\displaystyle 2\!\times \!2}$ and ${\displaystyle 3\!\times \!3}$ determinants that we know. We will end up modifying the "unaffected by row operations" part, but not by much.

The first step in checking the plan is to test whether the ${\displaystyle 2\!\times \!2}$ and ${\displaystyle 3\!\times \!3}$ formulas are unaffected by the row operation of pivoting: if

${\displaystyle T{\xrightarrow[{}]{k\rho _{i}+\rho _{j}}}{\hat {T}}}$

then is ${\displaystyle \det({\hat {T}})=\det(T)}$? This check of the ${\displaystyle 2\!\times \!2}$ determinant after the ${\displaystyle k\rho _{1}+\rho _{2}}$ operation

${\displaystyle \det({\begin{pmatrix}a&b\\ka+c&kb+d\\\end{pmatrix}})=a(kb+d)-(ka+c)b=ad-bc}$

shows that it is indeed unchanged, and the other ${\displaystyle 2\!\times \!2}$ pivot ${\displaystyle k\rho _{2}+\rho _{1}}$ gives the same result. The ${\displaystyle 3\!\times \!3}$ pivot ${\displaystyle k\rho _{3}+\rho _{2}}$ leaves the determinant unchanged

${\displaystyle {\begin{array}{rl}\det({\begin{pmatrix}a&b&c\\kg+d&kh+e&ki+f\\g&h&i\end{pmatrix}})&={\begin{array}{l}a(kh+e)i+b(ki+f)g+c(kg+d)h\\\ -h(ki+f)a-i(kg+d)b-g(kh+e)c\end{array}}\\&=aei+bfg+cdh-hfa-idb-gec\end{array}}}$

as do the other ${\displaystyle 3\!\times \!3}$ pivot operations.

So there seems to be promise in the plan. Of course, perhaps the ${\displaystyle 4\!\times \!4}$ determinant formula is affected by pivoting. We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.

The next step is to compare ${\displaystyle \det({\hat {T}})}$ with ${\displaystyle \det(T)}$ for the operation

${\displaystyle T{\xrightarrow[{}]{{\rho }_{i}\leftrightarrow {\rho }_{j}}}{\hat {T}}}$

of swapping two rows. The ${\displaystyle 2\!\times \!2}$ row swap ${\displaystyle \rho _{1}\leftrightarrow \rho _{2}}$

${\displaystyle \det({\begin{pmatrix}c&d\\a&b\end{pmatrix}})=cb-ad}$

does not yield ${\displaystyle ad-bc}$. This ${\displaystyle \rho _{1}\leftrightarrow \rho _{3}}$ swap inside of a ${\displaystyle 3\!\times \!3}$ matrix

${\displaystyle \det({\begin{pmatrix}g&h&i\\d&e&f\\a&b&c\end{pmatrix}})=gec+hfa+idb-bfg-cdh-aei}$

also does not give the same determinant as before the swap — again there is a sign change. Trying a different ${\displaystyle 3\!\times \!3}$ swap ${\displaystyle \rho _{1}\leftrightarrow \rho _{2}}$

${\displaystyle \det({\begin{pmatrix}d&e&f\\a&b&c\\g&h&i\end{pmatrix}})=dbi+ecg+fah-hcd-iae-gbf}$

also gives a change of sign.

Thus, row swaps appear to change the sign of a determinant. This modifies our plan, but does not wreck it. We intend to decide nonsingularity by considering only whether the determinant is zero, not by considering its sign. Therefore, instead of expecting determinants to be entirely unaffected by row operations, we will look for them to change sign on a swap.

To finish, we compare ${\displaystyle \det({\hat {T}})}$ to ${\displaystyle \det(T)}$ for the operation

${\displaystyle T{\xrightarrow[{}]{k{\rho }_{i}}}{\hat {T}}}$

of multiplying a row by a scalar ${\displaystyle k\neq 0}$. One of the ${\displaystyle 2\!\times \!2}$ cases is

${\displaystyle \det({\begin{pmatrix}a&b\\kc&kd\end{pmatrix}})=a(kd)-(kc)b=k\cdot (ad-bc)}$

and the other case has the same result. Here is one ${\displaystyle 3\!\times \!3}$ case

${\displaystyle {\begin{array}{rl}\det({\begin{pmatrix}a&b&c\\d&e&f\\kg&kh&ki\end{pmatrix}})&={\begin{array}{l}ae(ki)+bf(kg)+cd(kh)\\\quad -(kh)fa-(ki)db-(kg)ec\end{array}}\\&=k\cdot (aei+bfg+cdh-hfa-idb-gec)\end{array}}}$

and the other two are similar. These lead us to suspect that multiplying a row by ${\displaystyle k}$ multiplies the determinant by ${\displaystyle k}$. This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged and we are not focusing on the determinant's sign or magnitude.

In summary, to develop the scheme for the formulas to compute determinants, we look for determinant functions that remain unchanged under the pivoting operation, that change sign on a row swap, and that rescale on the rescaling of a row. In the next two subsections we will find that for each ${\displaystyle n}$ such a function exists and is unique.
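The three behaviors just listed can be spot-checked numerically on the ${\displaystyle 3\!\times \!3}$ formula. This sketch uses our own sample matrix and scalar; since the checks in the text were algebraic, any integer values will do.

```python
# The 3x3 formula from the text; det3 is our own name for it.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

T = [[2, 1, 5], [3, 0, 1], [1, 4, 2]]
k = 7

# unchanged by the row combination k*rho_1 + rho_3
combined = [T[0], T[1], [k*T[0][c] + T[2][c] for c in range(3)]]
assert det3(combined) == det3(T)

# sign change on the swap rho_1 <-> rho_2
swapped = [T[1], T[0], T[2]]
assert det3(swapped) == -det3(T)

# rescales when one row is rescaled
rescaled = [T[0], [k*x for x in T[1]], T[2]]
assert det3(rescaled) == k * det3(T)
```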

For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance, in this equality

${\displaystyle \det({\begin{pmatrix}3&3&9\\2&1&1\\5&10&-5\end{pmatrix}})=3\cdot \det({\begin{pmatrix}1&1&3\\2&1&1\\5&10&-5\end{pmatrix}})}$

the ${\displaystyle 3}$ isn't factored out of all three rows, only out of the top row. The determinant acts on each row independently of the other rows. When we want to use this property of determinants, we shall write the determinant as a function of the rows: "${\displaystyle \det({\vec {\rho }}_{1},{\vec {\rho }}_{2},\dots ,{\vec {\rho }}_{n})}$", instead of as "${\displaystyle \det(T)}$" or "${\displaystyle \det(t_{1,1},\dots ,t_{n,n})}$". The definition of the determinant that starts the next subsection is written in this way.
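A quick numeric check of the displayed equality confirms that the scalar comes out of the top row alone; factoring a ${\displaystyle 3}$ out of every row would instead multiply the determinant by ${\displaystyle 3^{3}}$.

```python
# The 3x3 formula from the text; det3 is our own name for it.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

lhs = det3([[3, 3, 9], [2, 1, 1], [5, 10, -5]])
rhs = 3 * det3([[1, 1, 3], [2, 1, 1], [5, 10, -5]])
assert lhs == rhs                                          # 3 comes out of one row
assert lhs != 27 * det3([[1, 1, 3], [2, 1, 1], [5, 10, -5]])  # not out of all three
```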

## Exercises

This exercise is recommended for all readers.
Problem 1

Evaluate the determinant of each.

1. ${\displaystyle {\begin{pmatrix}3&1\\-1&1\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}2&0&1\\3&1&1\\-1&0&1\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}4&0&1\\0&0&1\\1&3&-1\end{pmatrix}}}$
Problem 2

Evaluate the determinant of each.

1. ${\displaystyle {\begin{pmatrix}2&0\\-1&3\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}2&1&1\\0&5&-2\\1&-3&4\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}2&3&4\\5&6&7\\8&9&1\end{pmatrix}}}$
This exercise is recommended for all readers.
Problem 3

Verify that the determinant of an upper-triangular ${\displaystyle 3\!\times \!3}$ matrix is the product down the diagonal.

${\displaystyle \det({\begin{pmatrix}a&b&c\\0&e&f\\0&0&i\end{pmatrix}})=aei}$

Do lower-triangular matrices work the same way?

This exercise is recommended for all readers.
Problem 4

Use the determinant to decide if each is singular or nonsingular.

1. ${\displaystyle {\begin{pmatrix}2&1\\3&1\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}0&1\\1&-1\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}4&2\\2&1\end{pmatrix}}}$
Problem 5

Singular or nonsingular? Use the determinant to decide.

1. ${\displaystyle {\begin{pmatrix}2&1&1\\3&2&2\\0&1&4\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&0&1\\2&1&1\\4&1&3\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}2&1&0\\3&-2&0\\1&0&0\end{pmatrix}}}$
This exercise is recommended for all readers.
Problem 6

Each pair of matrices differ by one row operation. Use this operation to compare ${\displaystyle \det(A)}$ with ${\displaystyle \det(B)}$.

1. ${\displaystyle A={\begin{pmatrix}1&2\\2&3\end{pmatrix}}}$ ${\displaystyle B={\begin{pmatrix}1&2\\0&-1\end{pmatrix}}}$
2. ${\displaystyle A={\begin{pmatrix}3&1&0\\0&0&1\\0&1&2\end{pmatrix}}}$ ${\displaystyle B={\begin{pmatrix}3&1&0\\0&1&2\\0&0&1\end{pmatrix}}}$
3. ${\displaystyle A={\begin{pmatrix}1&-1&3\\2&2&-6\\1&0&4\end{pmatrix}}}$ ${\displaystyle B={\begin{pmatrix}1&-1&3\\1&1&-3\\1&0&4\end{pmatrix}}}$
Problem 7

Show this.

${\displaystyle \det({\begin{pmatrix}1&1&1\\a&b&c\\a^{2}&b^{2}&c^{2}\end{pmatrix}})=(b-a)(c-a)(c-b)}$
This exercise is recommended for all readers.
Problem 8

Which real numbers ${\displaystyle x}$ make this matrix singular?

${\displaystyle {\begin{pmatrix}12-x&4\\8&8-x\end{pmatrix}}}$
Problem 9

Do the Gaussian reduction to check the formula for ${\displaystyle 3\!\times \!3}$ matrices stated in the preamble to this section.

${\displaystyle {\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}}}$ is nonsingular iff ${\displaystyle aei+bfg+cdh-hfa-idb-gec\neq 0}$

Problem 10

Show that the equation of a line in ${\displaystyle \mathbb {R} ^{2}}$ through ${\displaystyle (x_{1},y_{1})}$ and ${\displaystyle (x_{2},y_{2})}$ is expressed by this determinant.

${\displaystyle \det({\begin{pmatrix}x&y&1\\x_{1}&y_{1}&1\\x_{2}&y_{2}&1\end{pmatrix}})=0\qquad x_{1}\neq x_{2}}$
This exercise is recommended for all readers.
Problem 11

Many people know this mnemonic for the determinant of a ${\displaystyle 3\!\times \!3}$ matrix: first repeat the first two columns and then sum the products on the forward diagonals and subtract the products on the backward diagonals. That is, first write

${\displaystyle \left({\begin{array}{ccc|cc}h_{1,1}&h_{1,2}&h_{1,3}&h_{1,1}&h_{1,2}\\h_{2,1}&h_{2,2}&h_{2,3}&h_{2,1}&h_{2,2}\\h_{3,1}&h_{3,2}&h_{3,3}&h_{3,1}&h_{3,2}\end{array}}\right)}$

and then calculate this.

${\displaystyle {\begin{array}{l}h_{1,1}h_{2,2}h_{3,3}+h_{1,2}h_{2,3}h_{3,1}+h_{1,3}h_{2,1}h_{3,2}\\\quad -h_{3,1}h_{2,2}h_{1,3}-h_{3,2}h_{2,3}h_{1,1}-h_{3,3}h_{2,1}h_{1,2}\end{array}}}$
1. Check that this agrees with the formula given in the preamble to this section.
2. Does it extend to other-sized determinants?
Problem 12

The cross product of the vectors

${\displaystyle {\vec {x}}={\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\qquad {\vec {y}}={\begin{pmatrix}y_{1}\\y_{2}\\y_{3}\end{pmatrix}}}$

is the vector computed as this determinant.

${\displaystyle {\vec {x}}\times {\vec {y}}=\det({\begin{pmatrix}{\vec {e}}_{1}&{\vec {e}}_{2}&{\vec {e}}_{3}\\x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\end{pmatrix}})}$

Note that the first row is composed of vectors, the vectors from the standard basis for ${\displaystyle \mathbb {R} ^{3}}$. Show that the cross product of two vectors is perpendicular to each vector.

Problem 13

Prove that each statement holds for ${\displaystyle 2\!\times \!2}$ matrices.

1. The determinant of a product is the product of the determinants ${\displaystyle \det(ST)=\det(S)\cdot \det(T)}$.
2. If ${\displaystyle T}$ is invertible then the determinant of the inverse is the inverse of the determinant ${\displaystyle \det(T^{-1})=(\,\det(T)\,)^{-1}}$.

Matrices ${\displaystyle T}$ and ${\displaystyle T^{\prime }}$ are similar if there is a nonsingular matrix ${\displaystyle P}$ such that ${\displaystyle T^{\prime }=PTP^{-1}}$. (This definition is in Chapter Five.) Show that similar ${\displaystyle 2\!\times \!2}$ matrices have the same determinant.

This exercise is recommended for all readers.
Problem 14

Prove that the area of this region in the plane

is equal to the value of this determinant.

${\displaystyle \det({\begin{pmatrix}x_{1}&x_{2}\\y_{1}&y_{2}\end{pmatrix}})}$

Compare with this.

${\displaystyle \det({\begin{pmatrix}x_{2}&x_{1}\\y_{2}&y_{1}\end{pmatrix}})}$
Problem 15

Prove that for ${\displaystyle 2\!\times \!2}$ matrices, the determinant of a matrix equals the determinant of its transpose. Does that also hold for ${\displaystyle 3\!\times \!3}$ matrices?

This exercise is recommended for all readers.
Problem 16

Is the determinant function linear — is ${\displaystyle \det(x\cdot T+y\cdot S)=x\cdot \det(T)+y\cdot \det(S)}$?

Problem 17

Show that if ${\displaystyle A}$ is ${\displaystyle 3\!\times \!3}$ then ${\displaystyle \det(c\cdot A)=c^{3}\cdot \det(A)}$ for any scalar ${\displaystyle c}$.

Problem 18

Which real numbers ${\displaystyle \theta }$ make

${\displaystyle {\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}}$

singular? Explain geometrically.

? Problem 19

If a third order determinant has elements ${\displaystyle 1}$, ${\displaystyle 2}$, ..., ${\displaystyle 9}$, what is the maximum value it may have? (Haggett & Saunders 1955)

### 2 - Properties of Determinants

As described above, we want a formula to determine whether an ${\displaystyle n\!\times \!n}$ matrix is nonsingular. We will not begin by stating such a formula. Instead, we will begin by considering the function that such a formula calculates. We will define the function by its properties, then prove that the function with these properties exists and is unique and also describe formulas that compute this function. (Because we will show that the function exists and is unique, from the start we will say "${\displaystyle \det(T)}$" instead of "if there is a determinant function then ${\displaystyle \det(T)}$" and "the determinant" instead of "any determinant".)

Definition 2.1

An ${\displaystyle n\!\times \!n}$ determinant is a function ${\displaystyle \det :{\mathcal {M}}_{n\!\times \!n}\to \mathbb {R} }$ such that

1. ${\displaystyle \det({\vec {\rho }}_{1},\dots ,k\cdot {\vec {\rho }}_{i}+{\vec {\rho }}_{j},\dots ,{\vec {\rho }}_{n})=\det({\vec {\rho }}_{1},\dots ,{\vec {\rho }}_{j},\dots ,{\vec {\rho }}_{n})}$ for ${\displaystyle i\neq j}$
2. ${\displaystyle \det({\vec {\rho }}_{1},\ldots ,{\vec {\rho }}_{j},\dots ,{\vec {\rho }}_{i},\dots ,{\vec {\rho }}_{n})=-\det({\vec {\rho }}_{1},\dots ,{\vec {\rho }}_{i},\dots ,{\vec {\rho }}_{j},\dots ,{\vec {\rho }}_{n})}$ for ${\displaystyle i\neq j}$
3. ${\displaystyle \det({\vec {\rho }}_{1},\dots ,k{\vec {\rho }}_{i},\dots ,{\vec {\rho }}_{n})=k\cdot \det({\vec {\rho }}_{1},\dots ,{\vec {\rho }}_{i},\dots ,{\vec {\rho }}_{n})}$ for ${\displaystyle k\neq 0}$
4. ${\displaystyle \det(I)=1}$ where ${\displaystyle I}$ is an identity matrix

(the ${\displaystyle {\vec {\rho }}\,}$'s are the rows of the matrix). We often write ${\displaystyle \left|T\right|}$ for ${\displaystyle \det(T)}$.

Remark 2.2

Property (2) is redundant since

${\displaystyle T\;{\xrightarrow[{}]{\rho _{i}+\rho _{j}}}\;{\xrightarrow[{}]{-\rho _{j}+\rho _{i}}}\;{\xrightarrow[{}]{\rho _{i}+\rho _{j}}}\;{\xrightarrow[{}]{-\rho _{i}}}\;{\hat {T}}}$

swaps rows ${\displaystyle i}$ and ${\displaystyle j}$. It is listed only for convenience.
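A concrete run of the four operations in the remark shows the swap happening. The helper names below are ours; the matrix is an arbitrary ${\displaystyle 2\!\times \!2}$ example.

```python
# Row operations as pure functions; add_row and scale_row are our own names.
def add_row(m, src, dst, k=1):
    # dst-row becomes dst-row + k * src-row
    return [[m[r][c] + k*m[src][c] if r == dst else m[r][c]
             for c in range(len(m[0]))] for r in range(len(m))]

def scale_row(m, r, k):
    return [[k*v for v in row] if idx == r else row
            for idx, row in enumerate(m)]

T = [[1, 2], [3, 4]]
i, j = 0, 1
S = add_row(T, i, j)         # rho_i + rho_j
S = add_row(S, j, i, k=-1)   # -rho_j + rho_i
S = add_row(S, i, j)         # rho_i + rho_j
S = scale_row(S, i, -1)      # -rho_i
assert S == [[3, 4], [1, 2]]  # rows i and j have been swapped
```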

The first result shows that a function satisfying these conditions gives a criterion for nonsingularity. (Its last sentence is that, in the context of the first three conditions, (4) is equivalent to the condition that the determinant of an echelon form matrix is the product down the diagonal.)

Lemma 2.3

A matrix with two identical rows has a determinant of zero. A matrix with a zero row has a determinant of zero. A matrix is nonsingular if and only if its determinant is nonzero. The determinant of an echelon form matrix is the product down its diagonal.

Proof

To verify the first sentence, swap the two equal rows. The sign of the determinant changes, but the matrix is unchanged and so its determinant is unchanged. Thus the determinant is zero.

For the second sentence, we multiply a zero row by −1 and apply property (3). Multiplying a zero row by a scalar leaves the matrix unchanged, so property (3) implies that ${\displaystyle \det(T)=-\det(T)}$. The only way this can be is if ${\displaystyle \det(T)=0}$.

For the third sentence, where ${\displaystyle T\rightarrow \cdots \rightarrow {\hat {T}}}$ is the Gauss-Jordan reduction, by the definition the determinant of ${\displaystyle T}$ is zero if and only if the determinant of ${\displaystyle {\hat {T}}}$ is zero (although they could differ in sign or magnitude). A nonsingular ${\displaystyle T}$ Gauss-Jordan reduces to an identity matrix and so has a nonzero determinant. A singular ${\displaystyle T}$ reduces to a ${\displaystyle {\hat {T}}}$ with a zero row; by the second sentence of this lemma its determinant is zero.

Finally, for the fourth sentence, if an echelon form matrix is singular then it has a zero on its diagonal, that is, the product down its diagonal is zero. The third sentence says that if a matrix is singular then its determinant is zero. So if the echelon form matrix is singular then its determinant equals the product down its diagonal.

If an echelon form matrix is nonsingular then none of its diagonal entries is zero so we can use property (3) of the definition to factor them out (again, the vertical bars ${\displaystyle \left|\cdots \right|}$ indicate the determinant operation).

${\displaystyle {\begin{vmatrix}t_{1,1}&t_{1,2}&&t_{1,n}\\0&t_{2,2}&&t_{2,n}\\&&\ddots \\0&&&t_{n,n}\end{vmatrix}}=t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot {\begin{vmatrix}1&t_{1,2}/t_{1,1}&&t_{1,n}/t_{1,1}\\0&1&&t_{2,n}/t_{2,2}\\&&\ddots \\0&&&1\end{vmatrix}}}$

Next, the Jordan half of Gauss-Jordan elimination, using property (1) of the definition, leaves the identity matrix.

${\displaystyle =t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot {\begin{vmatrix}1&0&&0\\0&1&&0\\&&\ddots \\0&&&1\end{vmatrix}}=t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot 1}$

Therefore, if an echelon form matrix is nonsingular then its determinant is the product down its diagonal.
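As a numeric illustration of that last sentence, the ${\displaystyle 3\!\times \!3}$ formula from the preamble, applied to an upper-triangular matrix, returns the product down the diagonal (the entries below are our own choices).

```python
# The 3x3 formula from the text; det3 is our own name for it.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

a, b, c, e, f, i = 2, 7, -3, 5, 4, 6
# with d = g = h = 0 every term but a*e*i vanishes
assert det3([[a, b, c], [0, e, f], [0, 0, i]]) == a * e * i
```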

That result gives us a way to compute the value of a determinant function on a matrix. Do Gaussian reduction, keeping track of any changes of sign caused by row swaps and any scalars that are factored out, and then finish by multiplying down the diagonal of the echelon form result. This procedure takes the same time as Gauss' method and so is sufficiently fast to be practical on matrices of the sizes that we see in this book.

Example 2.4

Doing ${\displaystyle 2\!\times \!2}$ determinants

${\displaystyle {\begin{vmatrix}2&4\\-1&3\end{vmatrix}}={\begin{vmatrix}2&4\\0&5\end{vmatrix}}=10}$

with Gauss' method won't give a big savings because the ${\displaystyle 2\!\times \!2}$ determinant formula is so easy. However, a ${\displaystyle 3\!\times \!3}$ determinant is usually easier to calculate with Gauss' method than with the formula given earlier.

${\displaystyle {\begin{vmatrix}2&2&6\\4&4&3\\0&-3&5\end{vmatrix}}={\begin{vmatrix}2&2&6\\0&0&-9\\0&-3&5\end{vmatrix}}=-{\begin{vmatrix}2&2&6\\0&-3&5\\0&0&-9\end{vmatrix}}=-54}$
Example 2.5

Determinants of matrices any bigger than ${\displaystyle 3\!\times \!3}$ are almost always most quickly done with this Gauss' method procedure.

${\displaystyle {\begin{vmatrix}1&0&1&3\\0&1&1&4\\0&0&0&5\\0&1&0&1\end{vmatrix}}={\begin{vmatrix}1&0&1&3\\0&1&1&4\\0&0&0&5\\0&0&-1&-3\end{vmatrix}}=-{\begin{vmatrix}1&0&1&3\\0&1&1&4\\0&0&-1&-3\\0&0&0&5\end{vmatrix}}=-(-5)=5}$

The prior example illustrates an important point. Although we have not yet found a ${\displaystyle 4\!\times \!4}$ determinant formula, if one exists then we know what value it gives to the matrix — if there is a function with properties (1)-(4) then on the above matrix the function must return ${\displaystyle 5}$.
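The procedure described above can be sketched in code. This is our own illustration, not part of the text's development: it does Gaussian reduction with exact rational arithmetic, flipping the sign on each row swap and finishing with the product down the diagonal, and it recovers the values computed in Example 2.4 and Example 2.5.

```python
# Determinant via Gauss' method, tracking sign changes from row swaps.
# gauss_det is our own name; Fraction keeps the arithmetic exact.
from fractions import Fraction

def gauss_det(matrix):
    m = [[Fraction(v) for v in row] for row in matrix]
    n = len(m)
    sign = 1
    for col in range(n):
        # find a row at or below the diagonal with a nonzero entry
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # singular: zero on the diagonal
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign                # a swap changes the sign
        for r in range(col + 1, n):
            k = m[r][col] / m[col][col]
            m[r] = [x - k*y for x, y in zip(m[r], m[col])]
    product = Fraction(1)
    for d in range(n):
        product *= m[d][d]
    return sign * product

# The values from Example 2.4 and Example 2.5:
assert gauss_det([[2, 4], [-1, 3]]) == 10
assert gauss_det([[2, 2, 6], [4, 4, 3], [0, -3, 5]]) == -54
assert gauss_det([[1, 0, 1, 3], [0, 1, 1, 4],
                  [0, 0, 0, 5], [0, 1, 0, 1]]) == 5
```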

Lemma 2.6

For each ${\displaystyle n}$, if there is an ${\displaystyle n\!\times \!n}$ determinant function then it is unique.

Proof

For any ${\displaystyle n\!\times \!n}$ matrix we can perform Gauss' method on the matrix, keeping track of how the sign alternates on row swaps, and then multiply down the diagonal of the echelon form result. By the definition and the lemma, all ${\displaystyle n\!\times \!n}$ determinant functions must return this value on this matrix. Thus all ${\displaystyle n\!\times \!n}$ determinant functions are equal, that is, there is only one input argument/output value relationship satisfying the four conditions.

The "if there is an ${\displaystyle n\!\times \!n}$ determinant function" emphasizes that, although we can use Gauss' method to compute the only value that a determinant function could possibly return, we haven't yet shown that such a determinant function exists for all ${\displaystyle n}$. In the rest of the section we will produce determinant functions.

## Exercises

For these, assume that an ${\displaystyle n\!\times \!n}$ determinant function exists for all ${\displaystyle n}$.

This exercise is recommended for all readers.
Problem 1

Use Gauss' method to find each determinant.

1. ${\displaystyle {\begin{vmatrix}3&1&2\\3&1&0\\0&1&4\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}1&0&0&1\\2&1&1&0\\-1&0&1&0\\1&1&1&0\end{vmatrix}}}$
Problem 2
Use Gauss' method to find each.
1. ${\displaystyle {\begin{vmatrix}2&-1\\-1&-1\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}1&1&0\\3&0&2\\5&2&2\end{vmatrix}}}$
Problem 3

For which values of ${\displaystyle k}$ does this system have a unique solution?

${\displaystyle {\begin{array}{*{4}{rc}r}x&&&+&z&-&w&=&2\\&&y&-&2z&&&=&3\\x&&&+&kz&&&=&4\\&&&&z&-&w&=&2\end{array}}}$
This exercise is recommended for all readers.
Problem 4

Express each of these in terms of ${\displaystyle \left|H\right|}$.

1. ${\displaystyle {\begin{vmatrix}h_{3,1}&h_{3,2}&h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\h_{1,1}&h_{1,2}&h_{1,3}\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}-h_{1,1}&-h_{1,2}&-h_{1,3}\\-2h_{2,1}&-2h_{2,2}&-2h_{2,3}\\-3h_{3,1}&-3h_{3,2}&-3h_{3,3}\end{vmatrix}}}$
3. ${\displaystyle {\begin{vmatrix}h_{1,1}+h_{3,1}&h_{1,2}+h_{3,2}&h_{1,3}+h_{3,3}\\h_{2,1}&h_{2,2}&h_{2,3}\\5h_{3,1}&5h_{3,2}&5h_{3,3}\end{vmatrix}}}$
This exercise is recommended for all readers.
Problem 5

Find the determinant of a diagonal matrix.

Problem 6

Describe the solution set of a homogeneous linear system if the determinant of the matrix of coefficients is nonzero.

This exercise is recommended for all readers.
Problem 7

Show that this determinant is zero.

${\displaystyle {\begin{vmatrix}y+z&x+z&x+y\\x&y&z\\1&1&1\end{vmatrix}}}$
Problem 8
1. Find the ${\displaystyle 1\!\times \!1}$, ${\displaystyle 2\!\times \!2}$, and ${\displaystyle 3\!\times \!3}$ matrices with ${\displaystyle i,j}$ entry given by ${\displaystyle (-1)^{i+j}}$.
2. Find the determinant of the square matrix with ${\displaystyle i,j}$ entry ${\displaystyle (-1)^{i+j}}$.
Problem 9
1. Find the ${\displaystyle 1\!\times \!1}$, ${\displaystyle 2\!\times \!2}$, and ${\displaystyle 3\!\times \!3}$ matrices with ${\displaystyle i,j}$ entry given by ${\displaystyle i+j}$.
2. Find the determinant of the square matrix with ${\displaystyle i,j}$ entry ${\displaystyle i+j}$.
This exercise is recommended for all readers.
Problem 10

Show that determinant functions are not linear by giving a case where ${\displaystyle \left|A+B\right|\neq \left|A\right|+\left|B\right|}$.

Problem 11

The second condition in the definition, that row swaps change the sign of a determinant, is somewhat annoying. It means we have to keep track of the number of swaps, to compute how the sign alternates. Can we get rid of it? Can we replace it with the condition that row swaps leave the determinant unchanged? (If so then we would need new ${\displaystyle 1\!\times \!1}$, ${\displaystyle 2\!\times \!2}$, and ${\displaystyle 3\!\times \!3}$ formulas, but that would be a minor matter.)

Problem 12

Prove that the determinant of any triangular matrix, upper or lower, is the product down its diagonal.

Problem 13

Refer to the definition of elementary matrices in the Mechanics of Matrix Multiplication subsection.

1. What is the determinant of each kind of elementary matrix?
2. Prove that if ${\displaystyle E}$ is any elementary matrix then ${\displaystyle \left|ES\right|=\left|E\right|\left|S\right|}$ for any appropriately sized ${\displaystyle S}$.
3. (This question doesn't involve determinants.) Prove that if ${\displaystyle T}$ is singular then a product ${\displaystyle TS}$ is also singular.
4. Show that ${\displaystyle \left|TS\right|=\left|T\right|\left|S\right|}$.
5. Show that if ${\displaystyle T}$ is nonsingular then ${\displaystyle \left|T^{-1}\right|=\left|T\right|^{-1}}$.
Problem 14

Prove that the determinant of a product is the product of the determinants ${\displaystyle \left|TS\right|=\left|T\right|\,\left|S\right|}$ in this way. Fix the ${\displaystyle n\!\times \!n}$ matrix ${\displaystyle S}$ and consider the function ${\displaystyle d:{\mathcal {M}}_{n\!\times \!n}\to \mathbb {R} }$ given by ${\displaystyle T\mapsto \left|TS\right|/\left|S\right|}$.

1. Check that ${\displaystyle d}$ satisfies property (1) in the definition of a determinant function.
2. Check property (2).
3. Check property (3).
4. Check property (4).
5. Conclude the determinant of a product is the product of the determinants.
Problem 15

A submatrix of a given matrix ${\displaystyle A}$ is one that can be obtained by deleting some of the rows and columns of ${\displaystyle A}$. Thus, the first matrix here is a submatrix of the second.

${\displaystyle {\begin{pmatrix}3&1\\2&5\end{pmatrix}}\qquad {\begin{pmatrix}3&4&1\\0&9&-2\\2&-1&5\end{pmatrix}}}$

Prove that for any square matrix, the rank of the matrix is ${\displaystyle r}$ if and only if ${\displaystyle r}$ is the largest integer such that there is an ${\displaystyle r\!\times \!r}$ submatrix with a nonzero determinant.

This exercise is recommended for all readers.
Problem 16

Prove that a matrix with rational entries has a rational determinant.

? Problem 17

Find the element of likeness in (a) simplifying a fraction, (b) powdering the nose, (c) building new steps on the church, (d) keeping emeritus professors on campus, (e) putting ${\displaystyle B}$, ${\displaystyle C}$, ${\displaystyle D}$ in the determinant

${\displaystyle {\begin{vmatrix}1&a&a^{2}&a^{3}\\a^{3}&1&a&a^{2}\\B&a^{3}&1&a\\C&D&a^{3}&1\end{vmatrix}}.}$

(Anning & Trigg 1953)

### 3 - The Permutation Expansion

The prior subsection defines a function to be a determinant if it satisfies four conditions and shows that there is at most one ${\displaystyle n\!\times \!n}$ determinant function for each ${\displaystyle n}$. What is left is to show that for each ${\displaystyle n}$ such a function exists.

How could such a function not exist? After all, we have done computations that start with a square matrix, follow the conditions, and end with a number.

The difficulty is that, as far as we know, the computation might not give a well-defined result. To illustrate this possibility, suppose that we were to change the second condition in the definition of determinant to be that the value of a determinant does not change on a row swap. By Remark 2.2 we know that this conflicts with the first and third conditions. Here is an instance of the conflict: below are two Gauss' method reductions of the same matrix, the first without any row swap

${\displaystyle {\begin{pmatrix}1&2\\3&4\end{pmatrix}}{\xrightarrow[{}]{-3\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&2\\0&-2\end{pmatrix}}}$

and the second with a swap.

${\displaystyle {\begin{pmatrix}1&2\\3&4\end{pmatrix}}{\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}{\begin{pmatrix}3&4\\1&2\end{pmatrix}}{\xrightarrow[{}]{-(1/3)\rho _{1}+\rho _{2}}}{\begin{pmatrix}3&4\\0&2/3\end{pmatrix}}}$

Following Definition 2.1 gives that both calculations yield the determinant ${\displaystyle -2}$ since in the second one we keep track of the fact that the row swap changes the sign of the result of multiplying down the diagonal. But if we follow the supposition and change the second condition then the two calculations yield different values, ${\displaystyle -2}$ and ${\displaystyle 2}$. That is, under the supposition the outcome would not be well-defined — no function exists that satisfies the changed second condition along with the other three.

Of course, observing that Definition 2.1 does the right thing in this one instance is not enough; what we will do in the rest of this section is to show that there is never a conflict. The natural way to try this would be to define the determinant function with: "The value of the function is the result of doing Gauss' method, keeping track of row swaps, and finishing by multiplying down the diagonal". (Since Gauss' method allows for some variation, such as a choice of which row to use when swapping, we would have to fix an explicit algorithm.) Then we would be done if we verified that this way of computing the determinant satisfies the four properties. For instance, if ${\displaystyle T}$ and ${\displaystyle {\hat {T}}}$ are related by a row swap then we would need to show that this algorithm returns determinants that are negatives of each other. However, how to verify this is not evident. So the development below will not proceed in this way. Instead, in this subsection we will define a different way to compute the value of a determinant, a formula, and we will use this way to prove that the conditions are satisfied.

The formula that we shall use is based on an insight gotten from property (3) of the definition of determinants. This property shows that determinants are not linear.

Example 3.1

For this matrix ${\displaystyle \det(2A)\neq 2\cdot \det(A)}$.

${\displaystyle A={\begin{pmatrix}2&1\\-1&3\end{pmatrix}}}$

Instead, the scalar comes out of each of the two rows.

${\displaystyle {\begin{vmatrix}4&2\\-2&6\end{vmatrix}}=2\cdot {\begin{vmatrix}2&1\\-2&6\end{vmatrix}}=4\cdot {\begin{vmatrix}2&1\\-1&3\end{vmatrix}}}$
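This row-at-a-time behavior is easy to check numerically. Here is a sketch with a hand-coded ${\displaystyle 2\!\times \!2}$ determinant (the helper `det2` is ours, not the text's):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[2, 1], [-1, 3]]
A2 = [[2 * e for e in row] for row in A]   # the matrix 2A

# det(2A) is not 2*det(A): doubling both rows multiplies the result by 2*2.
assert det2(A2) != 2 * det2(A)
assert det2(A2) == 4 * det2(A)

# The scalar comes out one row at a time.
row1_doubled = [[4, 2], [-1, 3]]
assert det2(row1_doubled) == 2 * det2(A)
```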

Since scalars come out a row at a time, we might guess that determinants are linear a row at a time.

Definition 3.2

Let ${\displaystyle V}$ be a vector space. A map ${\displaystyle f:V^{n}\to \mathbb {R} }$ is multilinear if

1. ${\displaystyle f({\vec {\rho }}_{1},\dots ,{\vec {v}}+{\vec {w}},\ldots ,{\vec {\rho }}_{n})=f({\vec {\rho }}_{1},\dots ,{\vec {v}},\dots ,{\vec {\rho }}_{n})+f({\vec {\rho }}_{1},\dots ,{\vec {w}},\dots ,{\vec {\rho }}_{n})}$
2. ${\displaystyle f({\vec {\rho }}_{1},\dots ,k{\vec {v}},\dots ,{\vec {\rho }}_{n})=k\cdot f({\vec {\rho }}_{1},\dots ,{\vec {v}},\dots ,{\vec {\rho }}_{n})}$

for ${\displaystyle {\vec {v}},{\vec {w}}\in V}$ and ${\displaystyle k\in \mathbb {R} }$.

Lemma 3.3

Determinants are multilinear.

Proof

The definition of determinants gives property (2) (Lemma 2.3 following that definition covers the ${\displaystyle k=0}$ case) so we need only check property (1).

${\displaystyle \det({\vec {\rho }}_{1},\dots ,{\vec {v}}+{\vec {w}},\dots ,{\vec {\rho }}_{n})=\det({\vec {\rho }}_{1},\dots ,{\vec {v}},\dots ,{\vec {\rho }}_{n})+\det({\vec {\rho }}_{1},\dots ,{\vec {w}},\dots ,{\vec {\rho }}_{n})}$

If the set ${\displaystyle \{{\vec {\rho }}_{1},\dots ,{\vec {\rho }}_{i-1},{\vec {\rho }}_{i+1},\dots ,{\vec {\rho }}_{n}\}}$ is linearly dependent then all three matrices are singular and so all three determinants are zero and the equality is trivial. Therefore assume that the set is linearly independent. This set of ${\displaystyle n}$-wide row vectors has ${\displaystyle n-1}$ members, so we can make a basis by adding one more vector ${\displaystyle \langle {\vec {\rho }}_{1},\dots ,{\vec {\rho }}_{i-1},{\vec {\beta }},{\vec {\rho }}_{i+1},\dots ,{\vec {\rho }}_{n}\rangle }$. Express ${\displaystyle {\vec {v}}}$ and ${\displaystyle {\vec {w}}}$ with respect to this basis

${\displaystyle {\begin{array}{rl}{\vec {v}}&=v_{1}{\vec {\rho }}_{1}+\dots +v_{i-1}{\vec {\rho }}_{i-1}+v_{i}{\vec {\beta }}+v_{i+1}{\vec {\rho }}_{i+1}+\dots +v_{n}{\vec {\rho }}_{n}\\{\vec {w}}&=w_{1}{\vec {\rho }}_{1}+\dots +w_{i-1}{\vec {\rho }}_{i-1}+w_{i}{\vec {\beta }}+w_{i+1}{\vec {\rho }}_{i+1}+\dots +w_{n}{\vec {\rho }}_{n}\end{array}}}$

giving this.

${\displaystyle {\vec {v}}+{\vec {w}}=(v_{1}+w_{1}){\vec {\rho }}_{1}+\dots +(v_{i}+w_{i}){\vec {\beta }}+\dots +(v_{n}+w_{n}){\vec {\rho }}_{n}}$

By the definition of determinant, the value of ${\displaystyle \det({\vec {\rho }}_{1},\dots ,{\vec {v}}+{\vec {w}},\dots ,{\vec {\rho }}_{n})}$ is unchanged by the pivot operation of adding ${\displaystyle -(v_{1}+w_{1}){\vec {\rho }}_{1}}$ to ${\displaystyle {\vec {v}}+{\vec {w}}}$.

${\displaystyle {\vec {v}}+{\vec {w}}-(v_{1}+w_{1}){\vec {\rho }}_{1}=(v_{2}+w_{2}){\vec {\rho }}_{2}+\cdots +(v_{i}+w_{i}){\vec {\beta }}+\dots +(v_{n}+w_{n}){\vec {\rho }}_{n}}$

Then, to the result, we can add ${\displaystyle -(v_{2}+w_{2}){\vec {\rho }}_{2}}$, etc. Thus

${\displaystyle {\begin{aligned}\det({\vec {\rho }}_{1},\dots ,{\vec {v}}+{\vec {w}},\dots ,{\vec {\rho }}_{n})&=\det({\vec {\rho }}_{1},\dots ,(v_{i}+w_{i})\cdot {\vec {\beta }},\dots ,{\vec {\rho }}_{n})\\&=(v_{i}+w_{i})\cdot \det({\vec {\rho }}_{1},\dots ,{\vec {\beta }},\dots ,{\vec {\rho }}_{n})\\&=v_{i}\cdot \det({\vec {\rho }}_{1},\dots ,{\vec {\beta }},\dots ,{\vec {\rho }}_{n})+w_{i}\cdot \det({\vec {\rho }}_{1},\dots ,{\vec {\beta }},\dots ,{\vec {\rho }}_{n})\end{aligned}}}$

(using (2) for the second equality). To finish, bring ${\displaystyle v_{i}}$ and ${\displaystyle w_{i}}$ back inside in front of ${\displaystyle {\vec {\beta }}}$ and use pivoting again, this time to reconstruct the expressions of ${\displaystyle {\vec {v}}}$ and ${\displaystyle {\vec {w}}}$ in terms of the basis, e.g., start with the pivot operations of adding ${\displaystyle v_{1}{\vec {\rho }}_{1}}$ to ${\displaystyle v_{i}{\vec {\beta }}}$ and ${\displaystyle w_{1}{\vec {\rho }}_{1}}$ to ${\displaystyle w_{i}{\vec {\beta }}}$, etc.

Multilinearity allows us to expand a determinant into a sum of determinants, each of which involves a simple matrix.

Example 3.4

We can use multilinearity to split this determinant into two, first breaking up the first row

${\displaystyle {\begin{vmatrix}2&1\\4&3\end{vmatrix}}={\begin{vmatrix}2&0\\4&3\end{vmatrix}}+{\begin{vmatrix}0&1\\4&3\end{vmatrix}}}$

and then separating each of those two, breaking along the second rows.

${\displaystyle ={\begin{vmatrix}2&0\\4&0\end{vmatrix}}+{\begin{vmatrix}2&0\\0&3\end{vmatrix}}+{\begin{vmatrix}0&1\\4&0\end{vmatrix}}+{\begin{vmatrix}0&1\\0&3\end{vmatrix}}}$

We are left with four determinants, such that in each row of each matrix there is a single entry from the original matrix.
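We can confirm numerically that the four pieces sum back to the original determinant. A sketch, again with our own hand-coded ${\displaystyle 2\!\times \!2}$ helper:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

original = [[2, 1], [4, 3]]
# The four one-entry-per-row pieces from Example 3.4.
pieces = [[[2, 0], [4, 0]],
          [[2, 0], [0, 3]],
          [[0, 1], [4, 0]],
          [[0, 1], [0, 3]]]
assert sum(det2(p) for p in pieces) == det2(original)  # both sides equal 2
```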

Example 3.5

In the same way, a ${\displaystyle 3\!\times \!3}$ determinant separates into a sum of many simpler determinants. We start by splitting along the first row, producing three determinants (the zero in the ${\displaystyle 2,3}$ position is underlined to set it off visually from the zeroes that appear in the splitting).

${\displaystyle {\begin{vmatrix}2&1&-1\\4&3&{\underline {0}}\\2&1&5\end{vmatrix}}={\begin{vmatrix}2&0&0\\4&3&{\underline {0}}\\2&1&5\end{vmatrix}}+{\begin{vmatrix}0&1&0\\4&3&{\underline {0}}\\2&1&5\end{vmatrix}}+{\begin{vmatrix}0&0&-1\\4&3&{\underline {0}}\\2&1&5\end{vmatrix}}}$

Each of these three will itself split in three along the second row. Each of the resulting nine splits in three along the third row, resulting in twenty-seven determinants

${\displaystyle ={\begin{vmatrix}2&0&0\\4&0&0\\2&0&0\end{vmatrix}}+{\begin{vmatrix}2&0&0\\4&0&0\\0&1&0\end{vmatrix}}+{\begin{vmatrix}2&0&0\\4&0&0\\0&0&5\end{vmatrix}}+{\begin{vmatrix}2&0&0\\0&3&0\\2&0&0\end{vmatrix}}+\dots +{\begin{vmatrix}0&0&-1\\0&0&{\underline {0}}\\0&0&5\end{vmatrix}}}$

such that each row contains a single entry from the starting matrix.

So an ${\displaystyle n\!\times \!n}$ determinant expands into a sum of ${\displaystyle n^{n}}$ determinants where each row of each summand contains a single entry from the starting matrix. However, many of these summand determinants are zero.
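These counts can be confirmed mechanically. A small sketch for ${\displaystyle n=3}$, identifying each summand with its choice of one column per row (the code is ours, for illustration only):

```python
from itertools import product

n = 3
# Splitting an n x n determinant row by row picks one column for each row,
# so there are n**n summands in all.
choices = list(product(range(n), repeat=n))
assert len(choices) == n ** n  # 27 for n = 3

# A summand can be nonzero only if no two rows pick the same column,
# that is, only if the choice of columns is a permutation.
nonzero_candidates = [c for c in choices if len(set(c)) == n]
assert len(nonzero_candidates) == 6  # 3! = 6
```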

Example 3.6

In each of these three matrices from the above expansion, two of the rows have their entry from the starting matrix in the same column, e.g., in the first matrix, the ${\displaystyle 2}$ and the ${\displaystyle 4}$ both come from the first column.

${\displaystyle {\begin{vmatrix}2&0&0\\4&0&0\\0&1&0\end{vmatrix}}\qquad {\begin{vmatrix}0&0&-1\\0&3&0\\0&0&5\end{vmatrix}}\qquad {\begin{vmatrix}0&1&0\\0&0&{\underline {0}}\\0&0&5\end{vmatrix}}}$

Any such matrix is singular, because in each, one row is a multiple of the other (or is a zero row). Thus, any such determinant is zero, by Lemma 2.3.

Therefore, the above expansion of the ${\displaystyle 3\!\times \!3}$ determinant into the sum of the twenty-seven determinants simplifies to the sum of these six.

${\displaystyle {\begin{array}{rl}{\begin{vmatrix}2&1&-1\\4&3&{\underline {0}}\\2&1&5\end{vmatrix}}&={\begin{vmatrix}2&0&0\\0&3&0\\0&0&5\end{vmatrix}}+{\begin{vmatrix}2&0&0\\0&0&{\underline {0}}\\0&1&0\end{vmatrix}}\\&\quad +{\begin{vmatrix}0&1&0\\4&0&0\\0&0&5\end{vmatrix}}+{\begin{vmatrix}0&1&0\\0&0&{\underline {0}}\\2&0&0\end{vmatrix}}\\&\quad +{\begin{vmatrix}0&0&-1\\4&0&0\\0&1&0\end{vmatrix}}+{\begin{vmatrix}0&0&-1\\0&3&0\\2&0&0\end{vmatrix}}\end{array}}}$

We can bring out the scalars.

${\displaystyle {\begin{array}{rl}&=(2)(3)(5){\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}}+(2)({\underline {0}})(1){\begin{vmatrix}1&0&0\\0&0&1\\0&1&0\end{vmatrix}}\\&\quad +(1)(4)(5){\begin{vmatrix}0&1&0\\1&0&0\\0&0&1\end{vmatrix}}+(1)({\underline {0}})(2){\begin{vmatrix}0&1&0\\0&0&1\\1&0&0\end{vmatrix}}\\&\quad +(-1)(4)(1){\begin{vmatrix}0&0&1\\1&0&0\\0&1&0\end{vmatrix}}+(-1)(3)(2){\begin{vmatrix}0&0&1\\0&1&0\\1&0&0\end{vmatrix}}\end{array}}}$

To finish, we evaluate those six determinants by row-swapping them to the identity matrix, keeping track of the resulting sign changes.

${\displaystyle {\begin{array}{rl}&=30\cdot (+1)+0\cdot (-1)\\&\quad +20\cdot (-1)+0\cdot (+1)\\&\quad -4\cdot (+1)-6\cdot (-1)=12\end{array}}}$

That example illustrates the key idea. We've applied multilinearity to a ${\displaystyle 3\!\times \!3}$ determinant to get ${\displaystyle 3^{3}}$ separate determinants, each with one distinguished entry per row. We can drop most of these new determinants because the matrices are singular, with one row a multiple of another. We are left with the one-entry-per-row determinants also having only one entry per column (one entry from the original determinant, that is). And, since we can factor scalars out, we can further reduce to only considering determinants of one-entry-per-row-and-column matrices where the entries are ones.

These are permutation matrices. Thus, the determinant can be computed in this three-step way (Step 1) for each permutation matrix, multiply together the entries from the original matrix where that permutation matrix has ones, (Step 2) multiply that by the determinant of the permutation matrix and (Step 3) do that for all permutation matrices and sum the results together.
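The three steps translate directly into code. Here is a sketch (the helpers are our own, not the text's) that reproduces the value ${\displaystyle 12}$ from Example 3.6; it computes each ${\displaystyle \left|P_{\phi }\right|}$ from the parity of the permutation's inversions, which the final subsection shows agrees with counting row swaps:

```python
from itertools import permutations
from math import prod

def perm_matrix_det(phi):
    """|P_phi|: -1 raised to the number of out-of-order pairs in phi.
    (The last subsection shows this matches the row-swap computation.)"""
    inv = sum(1 for i in range(len(phi)) for j in range(i + 1, len(phi))
              if phi[i] > phi[j])
    return -1 if inv % 2 else 1

def det(T):
    n = len(T)
    total = 0
    for phi in permutations(range(n)):
        # Step 1: multiply the entries of T that P_phi picks out.
        term = prod(T[i][phi[i]] for i in range(n))
        # Step 2: multiply by |P_phi|.  Step 3: sum over all permutations.
        total += term * perm_matrix_det(phi)
    return total

# The matrix from Example 3.6.
assert det([[2, 1, -1], [4, 3, 0], [2, 1, 5]]) == 12
```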

To state this as a formula, we introduce a notation for permutation matrices. Let ${\displaystyle \iota _{j}}$ be the row vector that is all zeroes except for a one in its ${\displaystyle j}$-th entry, so that the four-wide ${\displaystyle \iota _{2}}$ is ${\displaystyle {\begin{pmatrix}0&1&0&0\end{pmatrix}}}$. We can construct permutation matrices by permuting — that is, scrambling — the numbers ${\displaystyle 1}$, ${\displaystyle 2}$, ..., ${\displaystyle n}$, and using them as indices on the ${\displaystyle \iota }$'s. For instance, to get a ${\displaystyle 4\!\times \!4}$ permutation matrix, we can scramble the numbers from ${\displaystyle 1}$ to ${\displaystyle 4}$ into this sequence ${\displaystyle \langle 3,2,1,4\rangle }$ and take the corresponding row vector ${\displaystyle \iota }$'s.

${\displaystyle {\begin{pmatrix}\iota _{3}\\\iota _{2}\\\iota _{1}\\\iota _{4}\end{pmatrix}}={\begin{pmatrix}0&0&1&0\\0&1&0&0\\1&0&0&0\\0&0&0&1\end{pmatrix}}}$
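This construction is mechanical, as the following sketch shows (the helper name `permutation_matrix` is ours):

```python
def permutation_matrix(seq):
    """Stack the iota row vectors named by seq (1-based indices)."""
    n = len(seq)
    iota = lambda j: [1 if col == j - 1 else 0 for col in range(n)]
    return [iota(j) for j in seq]

# The sequence <3,2,1,4> from the text.
assert permutation_matrix([3, 2, 1, 4]) == [[0, 0, 1, 0],
                                            [0, 1, 0, 0],
                                            [1, 0, 0, 0],
                                            [0, 0, 0, 1]]
```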
Definition 3.7

An ${\displaystyle n}$-permutation is a sequence consisting of an arrangement of the numbers ${\displaystyle 1}$, ${\displaystyle 2}$, ..., ${\displaystyle n}$.

Example 3.8

The ${\displaystyle 2}$-permutations are ${\displaystyle \phi _{1}=\langle 1,2\rangle }$ and ${\displaystyle \phi _{2}=\langle 2,1\rangle }$. These are the associated permutation matrices.

${\displaystyle P_{\phi _{1}}={\begin{pmatrix}\iota _{1}\\\iota _{2}\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}\qquad P_{\phi _{2}}={\begin{pmatrix}\iota _{2}\\\iota _{1}\end{pmatrix}}={\begin{pmatrix}0&1\\1&0\end{pmatrix}}}$

We sometimes write permutations as functions, e.g., ${\displaystyle \phi _{2}(1)=2}$, and ${\displaystyle \phi _{2}(2)=1}$. Then the rows of ${\displaystyle P_{\phi _{2}}}$ are ${\displaystyle \iota _{\phi _{2}(1)}=\iota _{2}}$ and ${\displaystyle \iota _{\phi _{2}(2)}=\iota _{1}}$.

The ${\displaystyle 3}$-permutations are ${\displaystyle \phi _{1}=\langle 1,2,3\rangle }$, ${\displaystyle \phi _{2}=\langle 1,3,2\rangle }$, ${\displaystyle \phi _{3}=\langle 2,1,3\rangle }$, ${\displaystyle \phi _{4}=\langle 2,3,1\rangle }$, ${\displaystyle \phi _{5}=\langle 3,1,2\rangle }$, and ${\displaystyle \phi _{6}=\langle 3,2,1\rangle }$. Here are two of the associated permutation matrices.

${\displaystyle P_{\phi _{2}}={\begin{pmatrix}\iota _{1}\\\iota _{3}\\\iota _{2}\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}}\qquad P_{\phi _{5}}={\begin{pmatrix}\iota _{3}\\\iota _{1}\\\iota _{2}\end{pmatrix}}={\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}}}$

For instance, the rows of ${\displaystyle P_{\phi _{5}}}$ are ${\displaystyle \iota _{\phi _{5}(1)}=\iota _{3}}$, ${\displaystyle \iota _{\phi _{5}(2)}=\iota _{1}}$, and ${\displaystyle \iota _{\phi _{5}(3)}=\iota _{2}}$.

Definition 3.9

The permutation expansion for determinants is

${\displaystyle {\begin{vmatrix}t_{1,1}&t_{1,2}&\ldots &t_{1,n}\\t_{2,1}&t_{2,2}&\ldots &t_{2,n}\\&\vdots \\t_{n,1}&t_{n,2}&\ldots &t_{n,n}\end{vmatrix}}={\begin{array}{l}t_{1,\phi _{1}(1)}t_{2,\phi _{1}(2)}\cdots t_{n,\phi _{1}(n)}\left|P_{\phi _{1}}\right|\\[.5ex]\quad +t_{1,\phi _{2}(1)}t_{2,\phi _{2}(2)}\cdots t_{n,\phi _{2}(n)}\left|P_{\phi _{2}}\right|\\[.5ex]\quad \vdots \\\quad +t_{1,\phi _{k}(1)}t_{2,\phi _{k}(2)}\cdots t_{n,\phi _{k}(n)}\left|P_{\phi _{k}}\right|\end{array}}}$

where ${\displaystyle \phi _{1},\ldots ,\phi _{k}}$ are all of the ${\displaystyle n}$-permutations.

This formula is often written in summation notation

${\displaystyle \left|T\right|=\sum _{{\text{permutations }}\phi }\!\!\!\!t_{1,\phi (1)}t_{2,\phi (2)}\cdots t_{n,\phi (n)}\left|P_{\phi }\right|}$

read aloud as "the sum, over all permutations ${\displaystyle \phi }$, of terms having the form ${\displaystyle t_{1,\phi (1)}t_{2,\phi (2)}\cdots t_{n,\phi (n)}\left|P_{\phi }\right|}$". This phrase is just a restating of the three-step process (Step 1) for each permutation matrix, compute ${\displaystyle t_{1,\phi (1)}t_{2,\phi (2)}\cdots t_{n,\phi (n)}}$ (Step 2) multiply that by ${\displaystyle \left|P_{\phi }\right|}$ and (Step 3) sum all such terms together.

Example 3.10

The familiar formula for the determinant of a ${\displaystyle 2\!\times \!2}$ matrix can be derived in this way.

${\displaystyle {\begin{array}{rl}{\begin{vmatrix}t_{1,1}&t_{1,2}\\t_{2,1}&t_{2,2}\end{vmatrix}}&=t_{1,1}t_{2,2}\cdot \left|P_{\phi _{1}}\right|+t_{1,2}t_{2,1}\cdot \left|P_{\phi _{2}}\right|\\&=t_{1,1}t_{2,2}\cdot {\begin{vmatrix}1&0\\0&1\end{vmatrix}}+t_{1,2}t_{2,1}\cdot {\begin{vmatrix}0&1\\1&0\end{vmatrix}}\\&=t_{1,1}t_{2,2}-t_{1,2}t_{2,1}\end{array}}}$

(the second permutation matrix takes one row swap to pass to the identity). Similarly, the formula for the determinant of a ${\displaystyle 3\!\times \!3}$ matrix is this.

${\displaystyle {\begin{array}{rl}{\begin{vmatrix}t_{1,1}&t_{1,2}&t_{1,3}\\t_{2,1}&t_{2,2}&t_{2,3}\\t_{3,1}&t_{3,2}&t_{3,3}\end{vmatrix}}&={\begin{aligned}&t_{1,1}t_{2,2}t_{3,3}\left|P_{\phi _{1}}\right|+t_{1,1}t_{2,3}t_{3,2}\left|P_{\phi _{2}}\right|+t_{1,2}t_{2,1}t_{3,3}\left|P_{\phi _{3}}\right|\\&\quad +t_{1,2}t_{2,3}t_{3,1}\left|P_{\phi _{4}}\right|+t_{1,3}t_{2,1}t_{3,2}\left|P_{\phi _{5}}\right|+t_{1,3}t_{2,2}t_{3,1}\left|P_{\phi _{6}}\right|\end{aligned}}\\&={\begin{aligned}&t_{1,1}t_{2,2}t_{3,3}-t_{1,1}t_{2,3}t_{3,2}-t_{1,2}t_{2,1}t_{3,3}\\&\quad +t_{1,2}t_{2,3}t_{3,1}+t_{1,3}t_{2,1}t_{3,2}-t_{1,3}t_{2,2}t_{3,1}\end{aligned}}\end{array}}}$

Computing a determinant by permutation expansion usually takes longer than Gauss' method. However, here we are not trying to do the computation efficiently, we are instead trying to give a determinant formula that we can prove to be well-defined. While the permutation expansion is impractical for computations, it is useful in proofs. In particular, we can use it for the result that we are after.
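As an aside, the agreement between the two computations is easy to check numerically. This sketch compares the permutation expansion against a swap-counting Gauss reduction; both functions are our own illustrations, not part of the text:

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def sign(phi):
    """Parity of phi: -1 raised to the number of out-of-order pairs."""
    inv = sum(1 for i in range(len(phi)) for j in range(i + 1, len(phi))
              if phi[i] > phi[j])
    return -1 if inv % 2 else 1

def det_expansion(T):
    """Permutation expansion: n! terms, regardless of the entries."""
    n = len(T)
    return sum(sign(phi) * prod(T[i][phi[i]] for i in range(n))
               for phi in permutations(range(n)))

def det_gauss(T):
    """Gauss' method: reduce, track swaps, multiply down the diagonal."""
    m = [[Fraction(x) for x in row] for row in T]
    n, swaps = len(m), 0
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: singular matrix
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            swaps += 1                  # each row swap flips the sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    diag = prod(m[i][i] for i in range(n))
    return -diag if swaps % 2 else diag

T = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert det_expansion(T) == det_gauss(T) == -3
```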

Theorem 3.11

For each ${\displaystyle n}$ there is an ${\displaystyle n\!\times \!n}$ determinant function.

The proof is deferred to the following subsection, which also contains the proof of the next result (the two share some features).

Theorem 3.12

The determinant of a matrix equals the determinant of its transpose.

The consequence of this theorem is that, while we have so far stated results in terms of rows (e.g., determinants are multilinear in their rows, row swaps change the sign, etc.), all of the results also hold in terms of columns. The final result gives examples.

Corollary 3.13

A matrix with two equal columns is singular. Column swaps change the sign of a determinant. Determinants are multilinear in their columns.

Proof

For the first statement, transposing the matrix results in a matrix with the same determinant, and with two equal rows, and hence a determinant of zero. The other two are proved in the same way.

We finish with a summary (although the final subsection contains the unfinished business of proving the two theorems). Determinant functions exist, are unique, and we know how to compute them. As for what determinants are about, perhaps these lines (Kemp 1982) help make it memorable.

Determinant none,
Solution: lots or none.
Determinant some,
Solution: just one.

## Exercises

These summarize the notation used in this book for the ${\displaystyle 2}$- and ${\displaystyle 3}$-permutations.

${\displaystyle {\begin{array}{c|cc}i&1&2\\\hline \phi _{1}(i)&1&2\\\phi _{2}(i)&2&1\end{array}}\qquad {\begin{array}{c|ccc}i&1&2&3\\\hline \phi _{1}(i)&1&2&3\\\phi _{2}(i)&1&3&2\\\phi _{3}(i)&2&1&3\\\phi _{4}(i)&2&3&1\\\phi _{5}(i)&3&1&2\\\phi _{6}(i)&3&2&1\end{array}}}$

This exercise is recommended for all readers.
Problem 1

Compute the determinant by using the permutation expansion.

1. ${\displaystyle {\begin{vmatrix}1&2&3\\4&5&6\\7&8&9\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}2&2&1\\3&-1&0\\-2&0&5\end{vmatrix}}}$
This exercise is recommended for all readers.
Problem 2

Compute these both with Gauss' method and with the permutation expansion formula.

1. ${\displaystyle {\begin{vmatrix}2&1\\3&1\end{vmatrix}}}$
2. ${\displaystyle {\begin{vmatrix}0&1&4\\0&2&3\\1&5&1\end{vmatrix}}}$
This exercise is recommended for all readers.
Problem 3

Use the permutation expansion formula to derive the formula for ${\displaystyle 3\!\times \!3}$ determinants.

Problem 4

List all of the ${\displaystyle 4}$-permutations.

Problem 5

A permutation, regarded as a function from the set ${\displaystyle \{1,\dots ,n\}}$ to itself, is one-to-one and onto. Therefore, each permutation has an inverse.

1. Find the inverse of each ${\displaystyle 2}$-permutation.
2. Find the inverse of each ${\displaystyle 3}$-permutation.
Problem 6

Prove that ${\displaystyle f}$ is multilinear if and only if for all ${\displaystyle {\vec {v}},{\vec {w}}\in V}$ and ${\displaystyle k_{1},k_{2}\in \mathbb {R} }$, this holds.

${\displaystyle f({\vec {\rho }}_{1},\dots ,k_{1}{\vec {v}}_{1}+k_{2}{\vec {v}}_{2},\dots ,{\vec {\rho }}_{n})=k_{1}f({\vec {\rho }}_{1},\dots ,{\vec {v}}_{1},\dots ,{\vec {\rho }}_{n})+k_{2}f({\vec {\rho }}_{1},\dots ,{\vec {v}}_{2},\dots ,{\vec {\rho }}_{n})}$
Problem 7

Find the only nonzero term in the permutation expansion of this matrix.

${\displaystyle {\begin{vmatrix}0&1&0&0\\1&0&1&0\\0&1&0&1\\0&0&1&0\end{vmatrix}}}$

Compute that determinant by finding the signum of the associated permutation.

Problem 8

How would determinants change if we changed property (4) of the definition to read that ${\displaystyle \left|I\right|=2}$?

Problem 9

Verify the second and third statements in Corollary 3.13.

This exercise is recommended for all readers.
Problem 10

Show that if an ${\displaystyle n\!\times \!n}$ matrix has a nonzero determinant then any column vector ${\displaystyle {\vec {v}}\in \mathbb {R} ^{n}}$ can be expressed as a linear combination of the columns of the matrix.

Problem 11

True or false: a matrix whose entries are only zeros or ones has a determinant equal to zero, one, or negative one. (Strang 1980)

Problem 12
1. Show that there are ${\displaystyle 120}$ terms in the permutation expansion formula of a ${\displaystyle 5\!\times \!5}$ matrix.
2. How many are sure to be zero if the ${\displaystyle 1,2}$ entry is zero?
Problem 13

How many ${\displaystyle n}$-permutations are there?

Problem 14

A matrix ${\displaystyle A}$ is skew-symmetric if ${\displaystyle {{A}^{\rm {trans}}}=-A}$, as in this matrix.

${\displaystyle A={\begin{pmatrix}0&3\\-3&0\end{pmatrix}}}$

Show that ${\displaystyle n\!\times \!n}$ skew-symmetric matrices with nonzero determinants exist only for even ${\displaystyle n}$.

This exercise is recommended for all readers.
Problem 15

What is the smallest number of zeros, and the placement of those zeros, needed to ensure that a ${\displaystyle 4\!\times \!4}$ matrix has a determinant of zero?

This exercise is recommended for all readers.
Problem 16

If we have ${\displaystyle n}$ data points ${\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),\dots \,,(x_{n},y_{n})}$ and want to find a polynomial ${\displaystyle p(x)=a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+\dots +a_{1}x+a_{0}}$ passing through those points then we can plug in the points to get an ${\displaystyle n}$ equation/${\displaystyle n}$ unknown linear system. The matrix of coefficients for that system is called the Vandermonde matrix. Prove that the determinant of the transpose of that matrix of coefficients

${\displaystyle {\begin{vmatrix}1&1&\ldots &1\\x_{1}&x_{2}&\ldots &x_{n}\\{x_{1}}^{2}&{x_{2}}^{2}&\ldots &{x_{n}}^{2}\\&\vdots \\{x_{1}}^{n-1}&{x_{2}}^{n-1}&\ldots &{x_{n}}^{n-1}\end{vmatrix}}}$

equals the product, over all indices ${\displaystyle i,j\in \{1,\dots ,n\}}$ with ${\displaystyle i<j}$, of terms of the form ${\displaystyle x_{j}-x_{i}}$. (This shows that the determinant is zero, and the linear system has no unique solution, if and only if the ${\displaystyle x_{i}}$'s in the data are not distinct.)

Problem 17

A matrix can be divided into blocks, as here,

${\displaystyle \left({\begin{array}{cc|c}1&2&0\\3&4&0\\\hline 0&0&-2\end{array}}\right)}$

which shows four blocks, the square ${\displaystyle 2\!\times \!2}$ and ${\displaystyle 1\!\times \!1}$ ones in the upper left and lower right, and the zero blocks in the upper right and lower left. Show that if a matrix can be partitioned as

${\displaystyle T=\left({\begin{array}{c|c}J&Z_{2}\\\hline Z_{1}&K\end{array}}\right)}$

where ${\displaystyle J}$ and ${\displaystyle K}$ are square, and ${\displaystyle Z_{1}}$ and ${\displaystyle Z_{2}}$ are all zeroes, then ${\displaystyle \left|T\right|=\left|J\right|\cdot \left|K\right|}$.

This exercise is recommended for all readers.
Problem 18

Prove that for any ${\displaystyle n\!\times \!n}$ matrix ${\displaystyle T}$ there are at most ${\displaystyle n}$ distinct reals ${\displaystyle r}$ such that the matrix ${\displaystyle T-rI}$ has determinant zero (we shall use this result in Chapter Five).

? Problem 19

The nine positive digits can be arranged into ${\displaystyle 3\!\times \!3}$ arrays in ${\displaystyle 9!}$ ways. Find the sum of the determinants of these arrays. (Trigg 1963)

Problem 20

Show that

${\displaystyle {\begin{vmatrix}x-2&x-3&x-4\\x+1&x-1&x-3\\x-4&x-7&x-10\end{vmatrix}}=0.}$

(Silverman & Trigg 1963)

? Problem 21

Let ${\displaystyle S}$ be the sum of the integer elements of a magic square of order three and let ${\displaystyle D}$ be the value of the square considered as a determinant. Show that ${\displaystyle D/S}$ is an integer. (Trigg & Walker 1949)

? Problem 22

Show that the determinant of the ${\displaystyle n^{2}}$ elements in the upper left corner of the Pascal triangle

${\displaystyle {\begin{array}{ccccc}1&1&1&1&\ldots \\1&2&3&4&\ldots \\1&3&6&10&\ldots \\1&4&10&20&\ldots \\\vdots &&&&\end{array}}}$

has the value unity. (Rupp & Aude 1931)

### 4 - Determinants Exist

This subsection is optional. It consists of proofs of two results from the prior subsection. These proofs involve the properties of permutations, which will not be used later, except in the optional Jordan Canonical Form subsection.

The prior subsection attacks the problem of showing that for any size there is a determinant function on the set of square matrices of that size by using multilinearity to develop the permutation expansion.

${\displaystyle {\begin{array}{rl}{\begin{vmatrix}t_{1,1}&t_{1,2}&\ldots &t_{1,n}\\t_{2,1}&t_{2,2}&\ldots &t_{2,n}\\&\vdots \\t_{n,1}&t_{n,2}&\ldots &t_{n,n}\end{vmatrix}}&={\begin{array}{l}t_{1,\phi _{1}(1)}t_{2,\phi _{1}(2)}\cdots t_{n,\phi _{1}(n)}\left|P_{\phi _{1}}\right|\\\quad +t_{1,\phi _{2}(1)}t_{2,\phi _{2}(2)}\cdots t_{n,\phi _{2}(n)}\left|P_{\phi _{2}}\right|\\\quad \vdots \\\quad +t_{1,\phi _{k}(1)}t_{2,\phi _{k}(2)}\cdots t_{n,\phi _{k}(n)}\left|P_{\phi _{k}}\right|\end{array}}\\&=\displaystyle \sum _{{\text{permutations }}\phi }t_{1,\phi (1)}t_{2,\phi (2)}\cdots t_{n,\phi (n)}\left|P_{\phi }\right|\end{array}}}$

This reduces the problem to showing that there is a determinant function on the set of permutation matrices of that size.

Of course, a permutation matrix can be row-swapped to the identity matrix and to calculate its determinant we can keep track of the number of row swaps. However, the problem is still not solved. We still have not shown that the result is well-defined. For instance, the determinant of

${\displaystyle P_{\phi }={\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}$

could be computed with one swap

${\displaystyle P_{\phi }{\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}{\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}$

or with three.

${\displaystyle P_{\phi }{\xrightarrow[{}]{\rho _{3}\leftrightarrow \rho _{1}}}{\begin{pmatrix}0&0&1&0\\1&0&0&0\\0&1&0&0\\0&0&0&1\end{pmatrix}}{\xrightarrow[{}]{\rho _{2}\leftrightarrow \rho _{3}}}{\begin{pmatrix}0&0&1&0\\0&1&0&0\\1&0&0&0\\0&0&0&1\end{pmatrix}}{\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{3}}}{\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}$

Both reductions have an odd number of swaps so we figure that ${\displaystyle \left|P_{\phi }\right|=-1}$ but how do we know that there isn't some way to do it with an even number of swaps? Corollary 4.6 below proves that there is no permutation matrix that can be row-swapped to an identity matrix in two ways, one with an even number of swaps and the other with an odd number of swaps.

Definition 4.1

Two rows of a permutation matrix

${\displaystyle {\begin{pmatrix}\vdots \\\iota _{k}\\\vdots \\\iota _{j}\\\vdots \end{pmatrix}}}$

such that ${\displaystyle k>j}$ are in an inversion of their natural order.

Example 4.2

This permutation matrix

${\displaystyle {\begin{pmatrix}\iota _{3}\\\iota _{2}\\\iota _{1}\end{pmatrix}}={\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}}}$

has three inversions: ${\displaystyle \iota _{3}}$ precedes ${\displaystyle \iota _{1}}$, ${\displaystyle \iota _{3}}$ precedes ${\displaystyle \iota _{2}}$, and ${\displaystyle \iota _{2}}$ precedes ${\displaystyle \iota _{1}}$.
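Counting inversions is a direct double loop, as this sketch shows (the helper name is ours):

```python
def inversions(perm):
    """Number of pairs in perm that appear out of their natural order."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

# Example 4.2: the rows iota_3, iota_2, iota_1 give three inversions.
assert inversions([3, 2, 1]) == 3
# The identity arrangement has none.
assert inversions([1, 2, 3]) == 0
```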

Lemma 4.3

A row-swap in a permutation matrix changes the number of inversions from even to odd, or from odd to even.

Proof

Consider a swap of rows ${\displaystyle j}$ and ${\displaystyle k}$, where ${\displaystyle k>j}$. If the two rows are adjacent

${\displaystyle P_{\phi }={\begin{pmatrix}\vdots \\\iota _{\phi (j)}\\\iota _{\phi (k)}\\\vdots \end{pmatrix}}{\xrightarrow[{}]{\rho _{k}\leftrightarrow \rho _{j}}}{\begin{pmatrix}\vdots \\\iota _{\phi (k)}\\\iota _{\phi (j)}\\\vdots \end{pmatrix}}}$

then the swap changes the total number of inversions by one — either removing or producing one inversion, depending on whether ${\displaystyle \phi (j)>\phi (k)}$ or not, since inversions involving rows not in this pair are not affected. Consequently, the total number of inversions changes from odd to even or from even to odd.

If the rows are not adjacent then they can be swapped via a sequence of adjacent swaps, first bringing row ${\displaystyle k}$ up

${\displaystyle {\begin{pmatrix}\vdots \\\iota _{\phi (j)}\\\iota _{\phi (j+1)}\\\iota _{\phi (j+2)}\\\vdots \\\iota _{\phi (k)}\\\vdots \end{pmatrix}}{\xrightarrow[{}]{\rho _{k}\leftrightarrow \rho _{k-1}}}\;\;{\xrightarrow[{}]{\rho _{k-1}\leftrightarrow \rho _{k-2}}}\dots {\xrightarrow[{}]{\rho _{j+1}\leftrightarrow \rho _{j}}}{\begin{pmatrix}\vdots \\\iota _{\phi (k)}\\\iota _{\phi (j)}\\\iota _{\phi (j+1)}\\\vdots \\\iota _{\phi (k-1)}\\\vdots \end{pmatrix}}}$

and then bringing row ${\displaystyle j}$ down.

${\displaystyle {\xrightarrow[{}]{\rho _{j+1}\leftrightarrow \rho _{j+2}}}\;\;{\xrightarrow[{}]{\rho _{j+2}\leftrightarrow \rho _{j+3}}}\dots {\xrightarrow[{}]{\rho _{k-1}\leftrightarrow \rho _{k}}}{\begin{pmatrix}\vdots \\\iota _{\phi (k)}\\\iota _{\phi (j+1)}\\\iota _{\phi (j+2)}\\\vdots \\\iota _{\phi (j)}\\\vdots \end{pmatrix}}}$

Each of these adjacent swaps changes the number of inversions from odd to even or from even to odd. There are an odd number ${\displaystyle (k-j)+(k-j-1)}$ of them. The total change in the number of inversions is from even to odd or from odd to even.

Definition 4.4

The signum of a permutation ${\displaystyle \operatorname {sgn}(\phi )}$ is ${\displaystyle +1}$ if the number of inversions in ${\displaystyle P_{\phi }}$ is even, and is ${\displaystyle -1}$ if the number of inversions is odd.

Example 4.5

With the subscripts from Example 3.8 for the ${\displaystyle 3}$-permutations, ${\displaystyle \operatorname {sgn}(\phi _{1})=1}$ while ${\displaystyle \operatorname {sgn}(\phi _{2})=-1}$.

Corollary 4.6

If a permutation matrix has an odd number of inversions then swapping it to the identity takes an odd number of swaps. If it has an even number of inversions then swapping to the identity takes an even number of swaps.

Proof

The identity matrix has zero inversions. To change an odd number to zero requires an odd number of swaps, and to change an even number to zero requires an even number of swaps.
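We can spot-check the corollary by reducing every ${\displaystyle 4}$-permutation to the natural order with two different swap strategies: the swap counts may differ, but their parities never do, and both match the parity of the inversion count. All helpers here are our own illustrations:

```python
from itertools import permutations

def inversions(p):
    """Number of out-of-order pairs."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

def swaps_selection(p):
    """Put each value directly into place, counting the swaps used."""
    p, count = list(p), 0
    for i in range(len(p)):
        j = p.index(i + 1)
        if j != i:
            p[i], p[j] = p[j], p[i]
            count += 1
    return count

def swaps_adjacent(p):
    """Bubble-sort with adjacent swaps, counting the swaps used."""
    p, count = list(p), 0
    for _ in range(len(p)):
        for j in range(len(p) - 1):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                count += 1
    return count

# The two counts can differ, but parity is an invariant of the permutation.
for p in permutations(range(1, 5)):
    a, b = swaps_adjacent(p), swaps_selection(p)
    assert a % 2 == b % 2 == inversions(p) % 2
```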

We still have not shown that the permutation expansion is well-defined because we have not considered row operations on permutation matrices other than row swaps. We will finesse this problem: we will define a function ${\displaystyle d:{\mathcal {M}}_{n\!\times \!n}\to \mathbb {R} }$ by altering the permutation expansion formula, replacing ${\displaystyle \left|P_{\phi }\right|}$ with ${\displaystyle \operatorname {sgn}(\phi )}$

${\displaystyle d(T)=\sum _{{\text{permutations }}\phi }t_{1,\phi (1)}t_{2,\phi (2)}\dots t_{n,\phi (n)}\operatorname {sgn}(\phi )}$

(this gives the same value as the permutation expansion because the prior result shows that ${\displaystyle \det(P_{\phi })=\operatorname {sgn}(\phi )}$). This formula's advantage is that the number of inversions is clearly well-defined — just count them. Therefore, we will show that a determinant function exists for all sizes by showing that ${\displaystyle d}$ is it, that is, that ${\displaystyle d}$ satisfies the four conditions.
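Here is a sketch of ${\displaystyle d}$ itself, with a numerical spot-check of the four conditions on one matrix; the proof of Lemma 4.7 is what actually establishes them in general, and the code is ours:

```python
from itertools import permutations
from math import prod

def sgn(phi):
    """Signum: -1 raised to the number of inversions of phi."""
    inv = sum(1 for i in range(len(phi)) for j in range(i + 1, len(phi))
              if phi[i] > phi[j])
    return -1 if inv % 2 else 1

def d(T):
    n = len(T)
    return sum(prod(T[i][phi[i]] for i in range(n)) * sgn(phi)
               for phi in permutations(range(n)))

T = [[2, 1, -1], [4, 3, 0], [2, 1, 5]]

# condition (4): d of the identity is 1
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
assert d(I) == 1
# condition (3): multiplying a row by k multiplies d by k
Tk = [T[0], [3 * x for x in T[1]], T[2]]
assert d(Tk) == 3 * d(T)
# condition (2): a row swap changes the sign
assert d([T[1], T[0], T[2]]) == -d(T)
# condition (1): adding a multiple of one row to another leaves d unchanged
T1 = [T[0], [a - 2 * b for a, b in zip(T[1], T[0])], T[2]]
assert d(T1) == d(T)
```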

Lemma 4.7

The function ${\displaystyle d}$ is a determinant. Hence determinants exist for every ${\displaystyle n}$.

Proof

We must check that it has the four properties from the definition.

Property (4) is easy; in

${\displaystyle d(I)=\sum _{{\text{perms }}\phi }\iota _{1,\phi (1)}\iota _{2,\phi (2)}\cdots \iota _{n,\phi (n)}\operatorname {sgn}(\phi )}$

all of the summands are zero except for the product down the diagonal, which is one.

For property (3) consider ${\displaystyle d({\hat {T}})}$ where ${\displaystyle T{\xrightarrow[{}]{k\rho _{i}}}{\hat {T}}}$.

${\displaystyle \sum _{{\text{perms }}\phi }\!\!{\hat {t}}_{1,\phi (1)}\cdots {\hat {t}}_{i,\phi (i)}\cdots {\hat {t}}_{n,\phi (n)}\operatorname {sgn}(\phi )=\sum _{\phi }t_{1,\phi (1)}\cdots kt_{i,\phi (i)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )}$

Factor the ${\displaystyle k}$ out of each term to get the desired equality.

${\displaystyle =k\cdot \sum _{\phi }t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )=k\cdot d(T)}$

For (2), let ${\displaystyle T{\xrightarrow[{}]{\rho _{i}\leftrightarrow \rho _{j}}}{\hat {T}}}$.

${\displaystyle d({\hat {T}})=\sum _{{\text{perms }}\phi }\!\!{\hat {t}}_{1,\phi (1)}\cdots {\hat {t}}_{i,\phi (i)}\cdots {\hat {t}}_{j,\phi (j)}\cdots {\hat {t}}_{n,\phi (n)}\operatorname {sgn}(\phi )}$

To convert to unhatted ${\displaystyle t}$'s, for each ${\displaystyle \phi }$ consider the permutation ${\displaystyle \sigma }$ that equals ${\displaystyle \phi }$ except that the ${\displaystyle i}$-th and ${\displaystyle j}$-th numbers are interchanged, ${\displaystyle \sigma (i)=\phi (j)}$ and ${\displaystyle \sigma (j)=\phi (i)}$. Replacing the ${\displaystyle \phi }$ in ${\displaystyle {\hat {t}}_{1,\phi (1)}\cdots {\hat {t}}_{i,\phi (i)}\cdots {\hat {t}}_{j,\phi (j)}\cdots {\hat {t}}_{n,\phi (n)}}$ with this ${\displaystyle \sigma }$ gives ${\displaystyle t_{1,\sigma (1)}\cdots t_{j,\sigma (j)}\cdots t_{i,\sigma (i)}\cdots t_{n,\sigma (n)}}$. Now ${\displaystyle \operatorname {sgn}(\phi )=-\operatorname {sgn}(\sigma )}$ (by Lemma 4.3) and so we get

${\displaystyle {\begin{array}{rl}&=\sum _{\sigma }t_{1,\sigma (1)}\cdots t_{j,\sigma (j)}\cdots t_{i,\sigma (i)}\cdots t_{n,\sigma (n)}\cdot {\bigl (}-\operatorname {sgn}(\sigma ){\bigr )}\\&=-\sum _{\sigma }t_{1,\sigma (1)}\cdots t_{j,\sigma (j)}\cdots t_{i,\sigma (i)}\cdots t_{n,\sigma (n)}\cdot \operatorname {sgn}(\sigma )\end{array}}}$

where the sum is over all permutations ${\displaystyle \sigma }$ derived from another permutation ${\displaystyle \phi }$ by a swap of the ${\displaystyle i}$-th and ${\displaystyle j}$-th numbers. But any permutation can be derived from some other permutation by such a swap, in one and only one way, so this summation is in fact a sum over all permutations, taken once and only once. Thus ${\displaystyle d({\hat {T}})=-d(T)}$.

For property (1), let ${\displaystyle T{\xrightarrow[{}]{k\rho _{i}+\rho _{j}}}{\hat {T}}}$ and consider

${\displaystyle {\begin{array}{rl}d({\hat {T}})&=\sum _{{\text{perms }}\phi }{\hat {t}}_{1,\phi (1)}\cdots {\hat {t}}_{i,\phi (i)}\cdots {\hat {t}}_{j,\phi (j)}\cdots {\hat {t}}_{n,\phi (n)}\operatorname {sgn}(\phi )\\&=\sum _{\phi }t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots (kt_{i,\phi (j)}+t_{j,\phi (j)})\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )\end{array}}}$

(notice: that's ${\displaystyle kt_{i,\phi (j)}}$, not ${\displaystyle kt_{j,\phi (j)}}$). Distribute, commute, and factor.

${\displaystyle {\begin{array}{rl}=&\displaystyle \sum _{\phi }{\big [}t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots kt_{i,\phi (j)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )\\&\displaystyle \qquad +t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots t_{j,\phi (j)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi ){\big ]}\\\\=&\displaystyle \sum _{\phi }t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots kt_{i,\phi (j)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )\\&\displaystyle \qquad +\sum _{\phi }t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots t_{j,\phi (j)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )\\\\=&\displaystyle k\cdot \sum _{\phi }t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots t_{i,\phi (j)}\cdots t_{n,\phi (n)}\operatorname {sgn}(\phi )+d(T)\end{array}}}$

We finish by showing that the terms ${\displaystyle t_{1,\phi (1)}\cdots t_{i,\phi (i)}\cdots t_{i,\phi (j)}\dots t_{n,\phi (n)}\operatorname {sgn}(\phi )}$ add to zero. This sum represents ${\displaystyle d(S)}$ where ${\displaystyle S}$ is a matrix equal to ${\displaystyle T}$ except that row ${\displaystyle j}$ of ${\displaystyle S}$ is a copy of row ${\displaystyle i}$ of ${\displaystyle T}$ (because the factor is ${\displaystyle t_{i,\phi (j)}}$, not ${\displaystyle t_{j,\phi (j)}}$). Thus, ${\displaystyle S}$ has two equal rows, rows ${\displaystyle i}$ and ${\displaystyle j}$. Since we have already shown that ${\displaystyle d}$ changes sign on row swaps, as in Lemma 2.3 we conclude that ${\displaystyle d(S)=0}$.
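The four properties just verified can also be spot-checked numerically; a Python sketch using the same permutation-expansion function as before (redefined here so the snippet stands alone, applied to a sample matrix of our choosing).

```python
from itertools import permutations
from math import prod

def sgn(phi):
    inv = sum(1 for i in range(len(phi))
                for j in range(i + 1, len(phi))
                if phi[i] > phi[j])
    return +1 if inv % 2 == 0 else -1

def d(T):
    n = len(T)
    return sum(sgn(phi) * prod(T[row][phi[row]] for row in range(n))
               for phi in permutations(range(n)))

T = [[1, 2, 0], [3, -1, 4], [2, 2, 1]]

# (1) adding 5 times the first row to the third leaves d unchanged
T1 = [T[0], T[1], [T[2][c] + 5 * T[0][c] for c in range(3)]]
# (2) swapping the first two rows changes the sign of d
T2 = [T[1], T[0], T[2]]
# (3) multiplying the first row by 7 multiplies d by 7
T3 = [[7 * entry for entry in T[0]], T[1], T[2]]
# (4) the identity matrix has d equal to one
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```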

We have now shown that determinant functions exist for each size. We already know that for each size there is at most one determinant. Therefore, the permutation expansion computes the one and only determinant value of a square matrix.

We end this subsection by proving the other result remaining from the prior subsection, that the determinant of a matrix equals the determinant of its transpose.

Example 4.8

Writing out the permutation expansion of the general ${\displaystyle 3\!\times \!3}$ matrix and of its transpose, and comparing corresponding terms

${\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=\cdots \,+cdh\cdot {\begin{vmatrix}0&0&1\\1&0&0\\0&1&0\end{vmatrix}}+\,\cdots }$

(terms with the same letters)

${\displaystyle {\begin{vmatrix}a&d&g\\b&e&h\\c&f&i\end{vmatrix}}=\cdots \,+dhc\cdot {\begin{vmatrix}0&1&0\\0&0&1\\1&0&0\end{vmatrix}}+\,\cdots }$

shows that the corresponding permutation matrices are transposes. That is, there is a relationship between these corresponding permutations. Problem 6 shows that they are inverses.
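The observation of this example can be checked numerically (this is of course not a proof); a Python sketch, using the convention that row ${\displaystyle r}$ of ${\displaystyle P_{\phi }}$ is the basis row vector ${\displaystyle e_{\phi (r)}}$, with 0-based indices.

```python
from itertools import permutations

def perm_matrix(phi):
    """Row r of P_phi is the standard basis row vector e_{phi(r)}."""
    n = len(phi)
    return [[1 if col == phi[row] else 0 for col in range(n)]
            for row in range(n)]

def inverse(phi):
    inv = [0] * len(phi)
    for i, value in enumerate(phi):
        inv[value] = i
    return tuple(inv)

def transpose(M):
    return [list(row) for row in zip(*M)]

# For every 4-permutation, P_{phi^{-1}} equals the transpose of P_phi
all_match = all(perm_matrix(inverse(phi)) == transpose(perm_matrix(phi))
                for phi in permutations(range(4)))
```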

Theorem 4.9

The determinant of a matrix equals the determinant of its transpose.

Proof

Call the matrix ${\displaystyle T}$ and denote the entries of ${\displaystyle {{T}^{\rm {trans}}}}$ with ${\displaystyle s}$'s so that ${\displaystyle t_{i,j}=s_{j,i}}$. Substitution gives this

${\displaystyle \left|T\right|=\sum _{{\text{perms }}\phi }t_{1,\phi (1)}\dots t_{n,\phi (n)}\operatorname {sgn}(\phi )=\sum _{\phi }s_{\phi (1),1}\dots s_{\phi (n),n}\operatorname {sgn}(\phi )}$

and we can finish the argument by manipulating the expression on the right to be recognizable as the determinant of the transpose. We have written all permutation expansions (as in the middle expression above) with the row indices ascending. To rewrite the expression on the right in this way, note that because ${\displaystyle \phi }$ is a permutation, the row indices in the term on the right ${\displaystyle \phi (1)}$, ..., ${\displaystyle \phi (n)}$ are just the numbers ${\displaystyle 1}$, ..., ${\displaystyle n}$, rearranged. We can thus commute to have these ascend, giving ${\displaystyle s_{1,\phi ^{-1}(1)}\cdots s_{n,\phi ^{-1}(n)}}$ (if the column index is ${\displaystyle j}$ and the row index is ${\displaystyle \phi (j)}$ then, where the row index is ${\displaystyle i}$, the column index is ${\displaystyle \phi ^{-1}(i)}$). Substituting on the right gives

${\displaystyle =\sum _{\phi ^{-1}}s_{1,\phi ^{-1}(1)}\cdots s_{n,\phi ^{-1}(n)}\operatorname {sgn}(\phi ^{-1})}$

(Problem 5 shows that ${\displaystyle \operatorname {sgn}(\phi ^{-1})=\operatorname {sgn}(\phi )}$). Since every permutation is the inverse of another, a sum over all ${\displaystyle \phi ^{-1}}$ is a sum over all permutations ${\displaystyle \phi }$

${\displaystyle =\sum _{{\text{perms }}\sigma }s_{1,\sigma (1)}\dots s_{n,\sigma (n)}\operatorname {sgn}(\sigma )=\left|{{T}^{\rm {trans}}}\right|}$

as required.
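The theorem can likewise be confirmed on examples with the permutation-expansion function; a self-contained Python sketch.

```python
from itertools import permutations
from math import prod

def sgn(phi):
    inv = sum(1 for i in range(len(phi))
                for j in range(i + 1, len(phi))
                if phi[i] > phi[j])
    return +1 if inv % 2 == 0 else -1

def d(T):
    n = len(T)
    return sum(sgn(phi) * prod(T[row][phi[row]] for row in range(n))
               for phi in permutations(range(n)))

def transpose(T):
    return [list(row) for row in zip(*T)]

T = [[2, 0, -1], [0, 3, 0], [2, 1, 1]]
same = (d(T) == d(transpose(T)))
```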

## Exercises

These summarize the notation used in this book for the ${\displaystyle 2}$- and ${\displaystyle 3}$-permutations.

${\displaystyle {\begin{array}{c|cc}i&1&2\\\hline \phi _{1}(i)&1&2\\\phi _{2}(i)&2&1\end{array}}\qquad {\begin{array}{c|ccc}i&1&2&3\\\hline \phi _{1}(i)&1&2&3\\\phi _{2}(i)&1&3&2\\\phi _{3}(i)&2&1&3\\\phi _{4}(i)&2&3&1\\\phi _{5}(i)&3&1&2\\\phi _{6}(i)&3&2&1\end{array}}}$

Problem 1

Give the permutation expansion of a general ${\displaystyle 2\!\times \!2}$ matrix and its transpose.

This exercise is recommended for all readers.
Problem 2

This problem appears also in the prior subsection.

1. Find the inverse of each ${\displaystyle 2}$-permutation.
2. Find the inverse of each ${\displaystyle 3}$-permutation.
This exercise is recommended for all readers.
Problem 3
1. Find the signum of each ${\displaystyle 2}$-permutation.
2. Find the signum of each ${\displaystyle 3}$-permutation.
Problem 4

What is the signum of the ${\displaystyle n}$-permutation ${\displaystyle \phi =\langle n,n-1,\dots ,2,1\rangle }$? (Strang 1980)

Problem 5

Prove these.

1. Every permutation has an inverse.
2. ${\displaystyle \operatorname {sgn}(\phi ^{-1})=\operatorname {sgn}(\phi )}$
3. Every permutation is the inverse of another.
Problem 6

Prove that the matrix of the permutation inverse is the transpose of the matrix of the permutation ${\displaystyle P_{\phi ^{-1}}={{P_{\phi }}^{\rm {trans}}}}$, for any permutation ${\displaystyle \phi }$.

This exercise is recommended for all readers.
Problem 7

Show that a permutation matrix with ${\displaystyle m}$ inversions can be row swapped to the identity in ${\displaystyle m}$ steps. Contrast this with Corollary 4.6.

This exercise is recommended for all readers.
Problem 8

For any permutation ${\displaystyle \phi }$ let ${\displaystyle g(\phi )}$ be the integer defined in this way.

${\displaystyle g(\phi )=\prod _{i<j}{\bigl (}\phi (j)-\phi (i){\bigr )}}$

(This is the product, over all indices ${\displaystyle i}$ and ${\displaystyle j}$ with ${\displaystyle i<j}$, of terms of the given form.)

1. Compute the value of ${\displaystyle g}$ on all ${\displaystyle 2}$-permutations.
2. Compute the value of ${\displaystyle g}$ on all ${\displaystyle 3}$-permutations.
3. Prove this.
${\displaystyle \operatorname {sgn}(\phi )={\frac {g(\phi )}{|g(\phi )|}}}$

Many authors give this formula as the definition of the signum function.

## Section II - Geometry of Determinants

The prior section develops the determinant algebraically, by considering what formulas satisfy certain properties. This section complements that with a geometric approach. One advantage of this approach is that, while we have so far only considered whether or not a determinant is zero, here we shall give a meaning to the value of that determinant. (The prior section handles determinants as functions of the rows, but in this section columns are more convenient. The final result of the prior section says that we can make the switch.)

### 1 - Determinants as Size Functions

This parallelogram picture

is familiar from the construction of the sum of the two vectors. One way to compute the area that it encloses is to draw this rectangle and subtract the area of each subregion.

${\displaystyle {\begin{array}{l}{\text{area of parallelogram}}\\\quad ={\text{area of rectangle}}-{\text{area of }}A-{\text{area of }}B\\\qquad -\cdots -{\text{area of }}F\\\quad =(x_{1}+x_{2})(y_{1}+y_{2})-x_{2}y_{1}-x_{1}y_{1}/2\\\qquad -x_{2}y_{2}/2-x_{2}y_{2}/2-x_{1}y_{1}/2-x_{2}y_{1}\\\quad =x_{1}y_{2}-x_{2}y_{1}\end{array}}}$

The fact that the area equals the value of the determinant

${\displaystyle {\begin{vmatrix}x_{1}&x_{2}\\y_{1}&y_{2}\end{vmatrix}}=x_{1}y_{2}-x_{2}y_{1}}$

is no coincidence. The properties in the definition of determinants make reasonable postulates for a function that measures the size of the region enclosed by the vectors in the matrix.
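The rectangle-minus-subregions computation can be compared with the determinant directly; a Python sketch (it assumes the pictured configuration, with all four coordinates positive, and the subregion areas follow the displayed computation).

```python
def area_by_subtraction(x1, y1, x2, y2):
    """Rectangle area minus the six subregions A, ..., F from the text."""
    rectangle = (x1 + x2) * (y1 + y2)
    subregions = (x2 * y1 + x1 * y1 / 2 + x2 * y2 / 2
                  + x2 * y2 / 2 + x1 * y1 / 2 + x2 * y1)
    return rectangle - subregions

def det2(x1, y1, x2, y2):
    """The 2x2 determinant with the two vectors as columns."""
    return x1 * y2 - x2 * y1
```

For any such pair of vectors the two computations agree, e.g. both give ${\displaystyle 5}$ for the vectors ${\displaystyle (3,1)}$ and ${\displaystyle (1,2)}$.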

For instance, this shows the effect of multiplying one of the box-defining vectors by a scalar (the scalar used is ${\displaystyle k=1.4}$).

The region formed by ${\displaystyle k{\vec {v}}}$ and ${\displaystyle {\vec {w}}}$ is bigger, by a factor of ${\displaystyle k}$, than the shaded region enclosed by ${\displaystyle {\vec {v}}}$ and ${\displaystyle {\vec {w}}}$. That is, ${\displaystyle {\text{size}}\,(k{\vec {v}},{\vec {w}})=k\cdot {\text{size}}\,({\vec {v}},{\vec {w}})}$ and in general we expect of the size measure that ${\displaystyle {\text{size}}\,(\dots ,k{\vec {v}},\dots )=k\cdot {\text{size}}\,(\dots ,{\vec {v}},\dots )}$. Of course, this postulate is already familiar as one of the properties in the definition of determinants.

Another property of determinants is that they are unaffected by pivoting. Here are before-pivoting and after-pivoting boxes (the scalar used is ${\displaystyle k=0.35}$).

Although the region on the right, the box formed by ${\displaystyle v}$ and ${\displaystyle k{\vec {v}}+{\vec {w}}}$, is more slanted than the shaded region, the two have the same base and the same height and hence the same area. This illustrates that ${\displaystyle {\text{size}}\,({\vec {v}},k{\vec {v}}+{\vec {w}})={\text{size}}\,({\vec {v}},{\vec {w}})}$. Generalized, ${\displaystyle {\text{size}}\,(\dots ,{\vec {v}},\dots ,{\vec {w}},\dots )={\text{size}}\,(\dots ,{\vec {v}},\dots ,k{\vec {v}}+{\vec {w}},\dots )}$, which is a restatement of the determinant postulate.

Of course, this picture

shows that ${\displaystyle {\text{size}}\,({\vec {e}}_{1},{\vec {e}}_{2})=1}$, and we naturally extend that to any number of dimensions ${\displaystyle {\text{size}}\,({\vec {e}}_{1},\dots ,{\vec {e}}_{n})=1}$, which is a restatement of the property that the determinant of the identity matrix is one.

With that, because property (2) of determinants is redundant (as remarked right after the definition), we have that all of the properties of determinants are reasonable to expect of a function that gives the size of boxes. We can now cite the work done in the prior section, which shows that the determinant exists and is unique, to be assured that these postulates are consistent and sufficient (we do not need any more postulates). That is, we've got an intuitive justification to interpret ${\displaystyle \det({\vec {v}}_{1},\dots ,{\vec {v}}_{n})}$ as the size of the box formed by the vectors. (Comment. An even more basic approach, which also leads to the definition below, is in (Weston 1959).)

Example 1.1

The volume of this parallelepiped, which can be found by the usual formula from high school geometry, is ${\displaystyle 12}$.

${\displaystyle {\begin{vmatrix}2&0&-1\\0&3&0\\2&1&1\end{vmatrix}}=12}$
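A quick numeric check of this example, using a cofactor expansion along the first row (since the determinant equals that of the transpose, taking the edge vectors as rows or as columns gives the same value).

```python
def det3(M):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a * (e * i - f * h)
            - b * (d * i - f * g)
            + c * (d * h - e * g))

# The parallelepiped of Example 1.1
volume = abs(det3([[2, 0, -1], [0, 3, 0], [2, 1, 1]]))
```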

Remark 1.2

Although property (2) of the definition of determinants is redundant, it raises an important point. Consider these two.

${\displaystyle {\begin{vmatrix}4&1\\2&3\end{vmatrix}}=10\qquad \qquad {\begin{vmatrix}1&4\\3&2\end{vmatrix}}=-10}$

The only difference between them is in the order in which the vectors are taken. If we take ${\displaystyle {\vec {u}}}$ first and then go to ${\displaystyle {\vec {v}}}$, following the counterclockwise arc shown, then the sign is positive. Following a clockwise arc gives a negative sign. The sign returned by the size function reflects the "orientation" or "sense" of the box. (We see the same thing if we picture the effect of scalar multiplication by a negative scalar.)

Although it is