High School Mathematics Extensions/Matrices

From Wikibooks, open books for an open world

Introduction

A matrix may be more popularly known as a giant computer simulation, but in mathematics it is a totally different thing. To be more precise, a matrix (plural matrices) is a rectangular array of numbers. For example, below is a typical way to write a matrix, with numbers arranged in rows and columns and with round brackets around the numbers:


\begin{pmatrix}
1 & 5&10&20 \\
1&-3&-5&9\\
3&-1&-1&-1\\
3&2&4&-5
\end{pmatrix}

The above matrix has 4 rows and 4 columns, so we call it a 4 × 4 (4 by 4) matrix. Matrices can come in many different shapes. The shape of a matrix is the name for its dimensions (m by n, where m is the number of rows and n the number of columns). Here are some more examples of matrices:

This is an example of a 3 × 3 matrix:


\begin{pmatrix}
1&2&3\\
4&5&6\\
7&8&9\\
\end{pmatrix}

This is an example of a 5 × 4 matrix:


\begin{pmatrix}
a&b&c&d\\
h&g&f&e\\
i&j&k&l\\
p&o&n&m\\
q&r&s&t\\
\end{pmatrix}

This is an example of a 1 × 6 matrix:


\begin{pmatrix}
1&2&3&4&5&6\\
\end{pmatrix}

The theory of matrices is intimately connected with that of (linear) simultaneous equations. The ancient Chinese had established a systematic way to solve simultaneous equations. The theory of simultaneous equations was furthered in the East by the Japanese mathematician Seki, and a little later by Leibniz, Newton's greatest rival. Later, Gauss (1777 - 1855), one of the three giants of modern mathematics, popularised the use of Gaussian elimination, a simple step-by-step algorithm for solving any number of linear simultaneous equations. By then the use of matrices to represent simultaneous equations neatly on paper (as discussed above) had become quite common.

Consider the simultaneous equations:

x + y = 10
x - y = 4

It has the solution x = 7 and y = 3, and the usual way to solve it is to add the two equations together to eliminate y. Matrix theory offers us another way to solve the above simultaneous equations, via matrix multiplication (covered below). We will study the widely accepted way to multiply two matrices together. In theory, with matrix multiplication we can solve any number of simultaneous equations, but we shall mainly restrict our attention to 2 × 2 matrices. But even with that restriction, we have opened up doors to topics simultaneous equations could never offer us. Two such examples are:

  1. using matrices to solve linear recurrence relations which can be used to model population growth, and
  2. encrypting messages with matrices.

We shall commence our study by learning some of the more fundamental concepts of matrices. Once we have a firm grasp of the basics, we shall move on to study the real meat of this chapter, matrix multiplication.

Elements

An element of a matrix is a particular number inside the matrix, and it is uniquely located with a pair of numbers. E.g. let the following matrix be denoted by A, or symbolically:

A = 
\begin{pmatrix}
1&2&3\\
4&5&6\\
7&8&9\\
\end{pmatrix}

The (2,2)th entry of A is 5; the (1,1)th entry of A is 1, the (3,3)th entry is 9 and the (3,2)th entry is 8. The (i, j)th entry of A is usually denoted by a_{i,j}, the (i, j)th entry of a matrix B by b_{i,j}, and so on.

Summary

  • A matrix is an array of numbers
  • A m×n matrix has m rows and n columns
  • The shape of a matrix is determined by its number of rows and columns
  • The (i,j)th element of a matrix is located in the ith row and jth column
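This indexing convention can be sketched in a few lines of Python. Note that Python lists are 0-indexed, so the mathematical (i, j)th entry lives at index [i-1][j-1]; the helper name `entry` is our own:

```python
# A represented as a list of rows
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

def entry(M, i, j):
    """Return the (i, j)th entry of M, using the 1-based maths convention."""
    return M[i - 1][j - 1]

print(entry(A, 2, 2))  # 5
print(entry(A, 3, 2))  # 8
```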

Matrix addition & Multiplication by a scalar

Matrices can be added together, but only matrices of the same shape can be added. This is very natural. E.g.


A = 
\begin{pmatrix}
1&2&3\\
4&5&6\\
7&8&9\\
\end{pmatrix}

B = 
\begin{pmatrix}
2&9&8\\
0&-1&8\\
4&6&7\\
\end{pmatrix}

then


A + B = 
\begin{pmatrix}
1&2&3\\
4&5&6\\
7&8&9\\
\end{pmatrix}
+
\begin{pmatrix}
2&9&8\\
0&-1&8\\
4&6&7\\
\end{pmatrix}
=
\begin{pmatrix}
1+2&2+9&3+8\\
4+0&5+(-1)&6+8\\
7+4&8+6&9+7\\
\end{pmatrix}
=
\begin{pmatrix}
3&11&11\\
4&4&14\\
11&14&16\\
\end{pmatrix}

Similarly matrices can be multiplied by a number. We call the number a scalar to distinguish it from a matrix. The reader need not worry about the definition here, just remember that a scalar is simply a number.


5A = A + A + A + A + A =
5\begin{pmatrix}
1&2&3\\
4&5&6\\
7&8&9\\
\end{pmatrix}
=
\begin{pmatrix}
5&10&15\\
20&25&30\\
35&40&45\\
\end{pmatrix}

in this case the scalar value is 5. In general, when we do s × A , where s is a scalar and A a matrix, we multiply each entry of A by s.
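Both operations translate directly into entry-by-entry code. A minimal sketch in plain Python, using the matrices A and B from above (the helper names are our own):

```python
def mat_add(A, B):
    """Add two matrices of the same shape, entry by entry."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def scalar_mul(s, A):
    """Multiply every entry of A by the scalar s."""
    return [[s * a for a in row] for row in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[2, 9, 8], [0, -1, 8], [4, 6, 7]]

print(mat_add(A, B))     # [[3, 11, 11], [4, 4, 14], [11, 14, 16]]
print(scalar_mul(5, A))  # [[5, 10, 15], [20, 25, 30], [35, 40, 45]]
```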

Matrix Multiplication

The widely accepted way to multiply two matrices together is definitely non-intuitive. As mentioned above, multiplication can help with solving simultaneous equations. We will now give a brief outline of how this can be done. Firstly, any system of linear simultaneous equations can be written as a matrix of coefficients multiplied by a matrix of unknowns equaling a matrix of results. This description may sound a little complicated, but in symbolic form it is quite clear. The previous statement simply says that if A, x and b are matrices, then Ax = b can be used to represent some system of simultaneous equations. The beautiful thing about matrix multiplication is that some matrices can have multiplicative inverses; that is, we can multiply both sides of the equation by A^{-1} to get x = A^{-1}b, which effectively solves the simultaneous equations.

The reader will surely come to understand matrix multiplication better as this chapter progresses. For now we should consider the simplest case of matrix multiplication, multiplying vectors. We will see a few examples and then we will explain the process of multiplication.


\begin{matrix}

A_{2\times 1} =
\begin{pmatrix}
2\\
9\\
\end{pmatrix}
& , &
B_{1\times 2} =
\begin{pmatrix}
3 & 5
\end{pmatrix}

\end{matrix}

then


B_{1\times 2} \times A_{2\times 1}
=
\begin{pmatrix}
3 & 5
\end{pmatrix}
\times
\begin{pmatrix}
2\\
9\\
\end{pmatrix}
=
\begin{pmatrix}
(3 \times 2) + (5 \times 9)
\end{pmatrix}
=
\begin{pmatrix}
51
\end{pmatrix}

Similarly if:

 \begin{matrix}

A_{3\times 1} = \begin{pmatrix} 1\\ 2\\ 3\end{pmatrix}
& , &
B_{1\times 3} = \begin{pmatrix} 4 & 5 & 6\end{pmatrix}

\end{matrix}

then


B_{1\times 3} \times A_{3\times 1}=
\begin{pmatrix} 4 & 5 &6 \end{pmatrix}
\times
\begin{pmatrix}1\\2\\3\\\end{pmatrix}
=
\begin{pmatrix}(4 \times 1) + (5 \times 2) + (6 \times 3)\end{pmatrix}
=
\begin{pmatrix}
32
\end{pmatrix}

A matrix with just one row is called a row vector; similarly, a matrix with just one column is called a column vector. When we multiply a row vector A with a column vector B, we multiply the element in the first column of A by the element in the first row of B, and add to that the product of the second column of A and the second row of B, and so on. More generally, we multiply a_{1,i} by b_{i,1} (where i ranges from 1 to n, the number of columns of A, which equals the number of rows of B) and sum up all of the products. Symbolically:

 A_{1\times n} \times B_{n\times 1} = (\sum_{i=1}^na_{1,i}\times b_{i,1} ) (for information on the \sum sign, see Summation_Sign)
where n is the number of rows/columns.
In words: the product of a row vector and a column vector is the sum of the products of item 1,i from the row vector and item i,1 from the column vector, where i runs from 1 to the length of the vectors.

Note: The product of matrices is also a matrix. The product of a row vector and column vector is a 1 by 1 matrix, not a scalar.
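As a sketch in plain Python, the rule above, returning a 1 by 1 matrix rather than a bare number (the helper name is our own):

```python
def row_times_column(row, col):
    """Multiply a 1-by-n row vector by an n-by-1 column vector.
    Returns a 1-by-1 matrix, matching the note above (not a scalar)."""
    s = sum(r * c for r, c in zip(row, col))
    return [[s]]

print(row_times_column([3, 5], [2, 9]))        # [[51]]
print(row_times_column([4, 5, 6], [1, 2, 3]))  # [[32]]
```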

Exercises

Multiply:

a) \begin{pmatrix}1&2\end{pmatrix}\begin{pmatrix}1\\2\end{pmatrix}
b) \begin{pmatrix}1\\2\end{pmatrix}\begin{pmatrix}1&2\end{pmatrix}
c) \begin{pmatrix}\frac{1}{8}&9\end{pmatrix}\begin{pmatrix}16\\2\end{pmatrix}
d) \begin{pmatrix}a&b\end{pmatrix}\begin{pmatrix}d\\e\end{pmatrix}
e) \begin{pmatrix}6 + 6b&3 - b\end{pmatrix}\begin{pmatrix}0\\0\end{pmatrix}
f) \begin{pmatrix}0&abc\end{pmatrix}\begin{pmatrix}a\\0\end{pmatrix}

Multiplication of non-vector matrices

Suppose A_{m \times n}B_{n \times p} = C_{m \times p} where A, B and C are matrices. To compute the (i,j)th element of C, we multiply the ith row of A with the jth column of B as if they were a row vector and a column vector. Symbolically:

c_{i,j} = \sum_{k=1}^{n}a_{i,k}\times b_{k,j}
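This formula translates directly into three nested loops. A minimal sketch in plain Python (the helper name `mat_mul` is our own):

```python
def mat_mul(A, B):
    """Multiply an m-by-n matrix A by an n-by-p matrix B using
    C[i][j] = sum over k of A[i][k] * B[k][j]."""
    m, n, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[3, 2], [5, 6]]
B = [[2, 6], [8, 7]]
print(mat_mul(A, B))  # [[22, 32], [58, 72]]
```

The printed result matches C in Example 1 below.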

Example 1

Evaluate AB = C and BA = D, where

 A =
\begin{pmatrix}
3&2\\
5&6\\
\end{pmatrix}

and

 B =
\begin{pmatrix}
2&6\\
8&7\\
\end{pmatrix}

Solution

c_{1,1} = \begin{pmatrix} 3&2 \end{pmatrix} \begin{pmatrix}2\\ 8\end{pmatrix} = (3\times 2 + 2\times 8) = 22
c_{1,2} = \begin{pmatrix} 3&2 \end{pmatrix} \begin{pmatrix}6\\ 7\end{pmatrix} = (3\times 6 + 2\times 7) = 32
c_{2,1} = \begin{pmatrix} 5&6 \end{pmatrix} \begin{pmatrix}2\\ 8\end{pmatrix} = (5\times 2 + 6\times 8) = 58
c_{2,2} = \begin{pmatrix} 5&6 \end{pmatrix} \begin{pmatrix}6\\ 7\end{pmatrix} = (5\times 6 + 6\times 7) = 72

i.e.

C =
\begin{pmatrix} 22&32\\58&72 \end{pmatrix}


d_{1,1} = \begin{pmatrix} 2&6 \end{pmatrix} \begin{pmatrix}3\\ 5\end{pmatrix} = (2\times 3 + 6\times 5) = 36
d_{1,2} = \begin{pmatrix} 2&6 \end{pmatrix} \begin{pmatrix}2\\ 6\end{pmatrix} = (2\times 2 + 6\times 6) = 40
d_{2,1} = \begin{pmatrix} 8&7 \end{pmatrix} \begin{pmatrix}3\\ 5\end{pmatrix} = (8\times 3 + 7\times 5) = 59
d_{2,2} = \begin{pmatrix} 8&7 \end{pmatrix} \begin{pmatrix}2\\ 6\end{pmatrix} = (8\times 2 + 7\times 6) = 58

i.e.

D =
\begin{pmatrix} 36&40\\59&58 \end{pmatrix}

Example 2 Evaluate AB and BA where

A =
\begin{pmatrix}
5&17\\
2&7
\end{pmatrix}
B =
\begin{pmatrix}
7&-17\\
-2&5
\end{pmatrix}

Solution


\begin{pmatrix}
5&17\\
2&7
\end{pmatrix} 
\begin{pmatrix}
7&-17\\
-2&5
\end{pmatrix} =
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix}

\begin{pmatrix}
7&-17\\
-2&5
\end{pmatrix}
\begin{pmatrix}
5&17\\
2&7
\end{pmatrix} =
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix}

Example 3 Evaluate AB and BA where

A = 
\begin{pmatrix}
2&6\\
0&5
\end{pmatrix}
B = 
\begin{pmatrix}
5&-6\\
0&2
\end{pmatrix}

Solution


\begin{pmatrix}
2&6\\
0&5
\end{pmatrix}
\begin{pmatrix}
5&-6\\
0&2
\end{pmatrix} = 
\begin{pmatrix}
10&0\\
0&10
\end{pmatrix}

\begin{pmatrix}
5&-6\\
0&2
\end{pmatrix}
\begin{pmatrix}
2&6\\
0&5
\end{pmatrix} = 
\begin{pmatrix}
10&0\\
0&10
\end{pmatrix}

Example 4 Evaluate the following multiplication:


\begin{pmatrix}
a\\
b
\end{pmatrix}
\begin{pmatrix}
c&d\\
\end{pmatrix}

Solution

Note that:


\begin{pmatrix}
a\\
b
\end{pmatrix}

is a 2 by 1 matrix and


\begin{pmatrix}
c&d\\
\end{pmatrix}

is a 1 by 2 matrix. So the multiplication makes sense and the product should be a 2 by 2 matrix.


\begin{pmatrix}
a\\
b
\end{pmatrix}
\begin{pmatrix}
c&d\\
\end{pmatrix}
=
\begin{pmatrix}
ac&ad\\
bc&bd\\
\end{pmatrix}

Example 5 Evaluate the following multiplication:


\begin{pmatrix}
1\\
2
\end{pmatrix}
\begin{pmatrix}
3&4\\
\end{pmatrix}

Solution


\begin{pmatrix}
1\\
2
\end{pmatrix}
\begin{pmatrix}
3&4\\
\end{pmatrix}
=
\begin{pmatrix}
1 \times 3& 1 \times 4\\
2 \times 3& 2 \times 4 \\
\end{pmatrix}
=
\begin{pmatrix}
3& 4\\
6& 8 \\
\end{pmatrix}

Example 6 Evaluate the following multiplication:


\begin{pmatrix}
a&0\\
0&b
\end{pmatrix}
\begin{pmatrix}
c&0\\
0&d
\end{pmatrix}

Solution 
\begin{pmatrix}
a&0\\
0&b
\end{pmatrix}
\begin{pmatrix}
c&0\\
0&d
\end{pmatrix} = 
\begin{pmatrix}
ac&0\\
0&bd
\end{pmatrix}

Example 7 Evaluate the following multiplication:


\begin{pmatrix}
a&b\\
c&d
\end{pmatrix}
\begin{pmatrix}
x\\
y
\end{pmatrix}

Solution 
\begin{pmatrix}
a&b\\
c&d
\end{pmatrix}
\begin{pmatrix}
x\\
y
\end{pmatrix} = 
\begin{pmatrix}
ax+by\\
cx+dy
\end{pmatrix}

Note Multiplication of matrices is generally not commutative, i.e. generally AB ≠ BA.

Diagonal matrices

A diagonal matrix is a matrix with zero entries everywhere except possibly down the diagonal. Multiplying diagonal matrices is really convenient, as you need only to multiply the diagonal entries together.

Examples

The following are all diagonal matrices 
\begin{pmatrix}
a&0\\
0&b
\end{pmatrix}
\begin{pmatrix}
c&0\\
0&d
\end{pmatrix}
\begin{pmatrix}
1&0\\
0&2
\end{pmatrix}
\begin{pmatrix}
0&0\\
0&0
\end{pmatrix}
\begin{pmatrix}
a&0&0\\
0&c&0\\
0&0&0
\end{pmatrix}

Example 1 
\begin{pmatrix}
a&0\\
0&b
\end{pmatrix}
\begin{pmatrix}
e&0\\
0&f
\end{pmatrix}
\begin{pmatrix}
h&0\\
0&i\\
\end{pmatrix}
=
\begin{pmatrix}
aeh&0\\
0&bfi\\
\end{pmatrix}

Example 2 
\begin{pmatrix}
a&0\\
0&b
\end{pmatrix}
\begin{pmatrix}
a&0\\
0&b
\end{pmatrix}
\begin{pmatrix}
a&0\\
0&b
\end{pmatrix} = 
\begin{pmatrix}
a^3&0\\
0&b^3
\end{pmatrix}

The above examples show that if D is a diagonal matrix then D^k is very easy to compute: all we need to do is take the diagonal entries to the kth power. This will be an extremely useful fact later on, when we learn how to compute the nth Fibonacci number using matrices.
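This shortcut is easy to check numerically. A minimal sketch in Python, representing a diagonal matrix by its list of diagonal entries (the helper name is our own):

```python
def diag_power(diag_entries, k):
    """Raise a diagonal matrix to the kth power by raising each
    diagonal entry to the kth power; returns the full matrix."""
    n = len(diag_entries)
    return [[diag_entries[i] ** k if i == j else 0 for j in range(n)]
            for i in range(n)]

print(diag_power([2, 3], 3))  # [[8, 0], [0, 27]]
```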

Exercises

1. State the dimensions of C

a) C = A_{n\times p}B_{p\times m}
b) C = 
\begin{pmatrix}
10^{10}&20\\
5000&0
\end{pmatrix}
\begin{pmatrix}
1&2&3&4\\
2&5&6&6
\end{pmatrix}

2. Evaluate. Please note that in matrix multiplication (AB)C = A(BC) i.e. the order in which you do the multiplications does not matter (proved later).

a)

\begin{pmatrix}
1&1\\
0&1\\
\end{pmatrix}
\begin{pmatrix}
1&1\\
0&1\\
\end{pmatrix}
\begin{pmatrix}
1&1\\
0&1\\
\end{pmatrix}
\begin{pmatrix}
1\\
1\\
\end{pmatrix}
b)

\begin{pmatrix}
3&1\\
2&8\\
\end{pmatrix}
\begin{pmatrix}
1&1\\
0&2\\
\end{pmatrix}
\begin{pmatrix}
1&1\\
0&1\\
\end{pmatrix}
\begin{pmatrix}
1\\
1\\
\end{pmatrix}

3. Perform the following multiplications:


C = \begin{pmatrix}
1&2\\
4&5
\end{pmatrix}
\begin{pmatrix}
1&0\\
0&1\\
\end{pmatrix}

D = \begin{pmatrix}
1&0\\
0&1
\end{pmatrix}
\begin{pmatrix}
1&2\\
4&5\\
\end{pmatrix}

What do you notice?

The Identity & multiplication laws

The exercise above showed us that the matrix:


\begin{pmatrix}
1&0\\
0&1
\end{pmatrix}

is very special. It is called the 2 by 2 identity matrix. An identity matrix is a square matrix whose diagonal entries are all 1 and whose other entries are all zero. The identity matrix, I, has the following very special properties:

  1. A \times I = A
  2. I \times A = A

for all matrices A. We don't usually specify the shape of the identity because it's obvious from the context, and in this chapter we will only deal with the 2 by 2 identity matrix. In the real number system, the number 1 satisfies: r × 1 = r = 1 × r, so it's clear that the identity matrix is analogous to "1".

Associativity, distributivity and (non)-commutativity

Matrix multiplication is a great deal different from the multiplication we know from real numbers. So it is comforting to know that many of the laws the real numbers satisfy also carry over to the matrix world, but with one big exception: in general AB ≠ BA.

Let A, B, and C be matrices. Associativity means

(AB)C = A(BC)

i.e. it does not matter which multiplication you perform first; the final result is the same either way (note this is about the grouping of the multiplications, not about swapping the left-to-right order of the matrices).

On the other hand, distributivity means

A(B + C) = AB + AC

and

(A + B)C = AC + BC

Note: The commutative property of the real numbers (i.e. ab = ba), does not carry over to the matrix world.
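A quick numerical illustration of this: with the usual row-times-column rule (sketched here as our own helper), AB and BA give different results even for these two simple 2 by 2 matrices:

```python
def mat_mul(A, B):
    # standard row-times-column matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]]
# AB and BA differ, even though both products are defined
```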

Convince yourself

For all 2 by 2 matrices A, B and C. And I the identity matrix.

1. Convince yourself that in the 2 by 2 case:

A(B + C) = AB + AC

and

(A + B)C = AC + BC

2. Convince yourself that in the 2 by 2 case:

A(BC) = (AB)C

3. Convince yourself that:

AB\ne BA

in general. When does AB = BA? Name at least one case.

Note that all of the above hold for matrices of any dimension/shape, provided the sums and products involved are defined.

Determinant and Inverses

We shall consider the simultaneous equations:

ax + by = α (1)
cx + dy = β (2)

where a, b, c, d, α and β are constants. We want to determine the necessary conditions for (1) and (2) to have a unique solution for x and y. We proceed:

Let (1') = (1) × c
Let (2') = (2) × a

i.e.

acx + bcy = cα (1')
acx + ady = aβ (2')

Now

let (3) = (2') - (1')
(ad - bc)y = aβ - cα (3)

Now y can be uniquely determined if and only if (ad - bc) ≠ 0. So the necessary condition for (1) and (2) to have a unique solution depends on all four of the coefficients of x and y. We call this number (ad - bc) the determinant, because it tells us whether there is a unique solution to two simultaneous equations of 2 variables. In summary

if (ad - bc) = 0 then there is no unique solution
if (ad - bc) ≠ 0 then there is a unique solution.

Note: unique. We cannot emphasise this word enough. If the determinant is zero, it doesn't necessarily mean that there is no solution to the simultaneous equations! Consider:

x + y = 2
7x + 7y = 14

The above set of equations has determinant zero, but there is obviously a solution, namely x = y = 1. In fact there are infinitely many solutions! On the other hand, consider also:

x + y = 1
x + y = 2

This set of equations has determinant zero, and there is no solution at all. So if the determinant is zero, there is either no solution or infinitely many solutions.

Determinant of a matrix

We define the determinant of a 2 × 2 matrix

A = \begin{pmatrix}
a&b\\
c&d
\end{pmatrix}

to be

\det (A) = ad - bc \!
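In code, the 2 × 2 determinant is a one-liner. A sketch (the helper name `det2` is our own):

```python
def det2(A):
    """Determinant of a 2-by-2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = A
    return a * d - b * c

print(det2([[1, 5], [2, 3]]))  # -7
print(det2([[1, 1], [7, 7]]))  # 0  (no unique solution)
```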

Inverses

It is perhaps, at this stage, not very clear what the use of det(A) is. But it is intimately connected with the idea of an inverse. Consider, in the real number system, a number b: it has (multiplicative) inverse 1/b, i.e. b(1/b) = (1/b)b = 1. We know that 1/b does not exist when b = 0.

In the world of matrices, a matrix A may or may not have an inverse depending on the value of the determinant det(A)! How is this so? Let's suppose A (known) does have an inverse B (i.e. AB = I = BA). So we aim to find B. Let's suppose further that

A
= \begin{pmatrix}
a&b\\
c&d
\end{pmatrix}

and

B
= \begin{pmatrix}
w&x\\
y&z
\end{pmatrix}

we need to solve four simultaneous equations to get the values of w, x, y and z in terms of a, b, c, d and det(A).

aw + by = 1
cw + dy = 0
ax + bz = 0
cx + dz = 1

The reader can try to solve the above on their own. The required answer is

B = \frac{1}{\det(A)}\begin{pmatrix}
d&-b\\
-c&a
\end{pmatrix}

Here we assumed that A has an inverse, but this doesn't make sense if det(A) = 0, as we cannot divide by zero. So A^{-1} (the inverse of A) exists if and only if det(A) ≠ 0.

Summary

If AB = BA = I, then we say B is the inverse of A. We denote the inverse of A by A^{-1}. The inverse of a 2 × 2 matrix

A =
\begin{pmatrix}
a&b\\
c&d
\end{pmatrix}

is

A^{-1} = \frac{1}{\det(A)}\begin{pmatrix}
d&-b\\
-c&a
\end{pmatrix}

provided the determinant of A is not zero.
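The formula above can be sketched directly in code, guarding against a zero determinant (the helper name `inverse2` is our own):

```python
def inverse2(A):
    """Inverse of a 2-by-2 matrix [[a, b], [c, d]], if it exists:
    (1/det) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible: det(A) = 0")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# The pair from Example 2 above: det = 5*7 - 17*2 = 1
print(inverse2([[5, 17], [2, 7]]))  # [[7.0, -17.0], [-2.0, 5.0]]
```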

Solving simultaneous equations

Suppose we are to solve:

ax + by = α
cx + dy = β

We let

 A = \begin{pmatrix}a&b\\c&d\end{pmatrix}
 w = \begin{pmatrix}x\\y\end{pmatrix}
 \gamma = \begin{pmatrix}\alpha\\ \beta\end{pmatrix}

we can translate it into matrix form

 \begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}\alpha\\ \beta\end{pmatrix}

i.e

 Aw = \gamma

If A's determinant is not zero, then we can pre-multiply both sides by A^{-1} (the inverse of A)


\begin{matrix}
A^{-1}Aw &=& A^{-1}\gamma\\
Iw &=& A^{-1}\gamma\\
w &=& A^{-1}\gamma\\
\end{matrix}

i.e.

\begin{pmatrix}x\\y\end{pmatrix} = \frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}

which implies that x and y are unique.
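Applying this to the system from the introduction, x + y = 10 and x - y = 4. A sketch that expands x = A^{-1}γ entry by entry (the helper name `solve2` is our own):

```python
def solve2(A, rhs):
    """Solve the 2-by-2 system A (x, y)^T = rhs via (x, y)^T = A^{-1} rhs,
    using the explicit 2-by-2 inverse formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("no unique solution: det(A) = 0")
    alpha, beta = rhs
    x = (d * alpha - b * beta) / det
    y = (-c * alpha + a * beta) / det
    return x, y

print(solve2([[1, 1], [1, -1]], [10, 4]))  # (7.0, 3.0)
```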

Examples

Find the inverse of A, if it exists

a)  A = \begin{pmatrix}1&5\\2&3\end{pmatrix}
b)  A = \begin{pmatrix}10&2\\2&7\end{pmatrix}
c)  A = \begin{pmatrix}a&b\\3a&3b\end{pmatrix}
d)  A = \begin{pmatrix}3&5\\5&3\end{pmatrix}

Solutions

a) A^{-1} = \frac{1}{-7}\begin{pmatrix}3&-5\\-2&1\end{pmatrix}
b) A^{-1} = \frac{1}{66}\begin{pmatrix}7&-2\\-2&10\end{pmatrix}
c) No inverse exists, as det(A) = 3ab - 3ab = 0
d) A^{-1} = \frac{1}{-16}\begin{pmatrix}3&-5\\-5&3\end{pmatrix}

Exercises

1. Find the determinant of

 A = \begin{pmatrix}\frac{2}{5}&\frac{2}{3}\\ \\ \frac{3}{2}& \frac{5}{2}\end{pmatrix}. Using the determinant of A, decide whether there's a unique solution to the following simultaneous equations

\begin{matrix}
\frac{2}{5}x + \frac{2}{3}y = 0\\
\frac{3}{2}x + \frac{5}{2}y = 0
\end{matrix}

2. Suppose

C = AB

show that

det(C) = det(A)det(B)

for the 2 × 2 case. Note: it's true for all cases.

3. Show that if you swap the rows of A to get A' , then det(A) = -det(A' )

4. Using the result of 2

a) Prove that if:

A = P^{-1}BP

then det(A) = det(B)

b) Prove that if:

A^k = 0

for some positive integer k, then det(A) = 0.

5. a) Compute A^5, i.e. multiply A by itself 5 times, where

A = 
\begin{pmatrix}
-1&6\\
-1&4\\
\end{pmatrix}

b) Find the inverse of P where


P = \begin{pmatrix}
1&-2\\
-1&3\\
\end{pmatrix}

c) Verify that

A = 
P^{-1}
\begin{pmatrix}
1&0\\
0&2\\
\end{pmatrix}
P

d) Compute A^5 by using parts (b) and (c).

e) Compute A^{100}


Other Sections

Next Section > High_School_Mathematics_Extensions/Matrices/Linear_Recurrence_Relations_Revisited

Problem Set > High_School_Mathematics_Extensions/Matrices/Problem Set

Project > High_School_Mathematics_Extensions/Matrices/Project/Elementary_Matrices