Calculus/Calculus on matrices

This section gives an overview of how calculus can be applied to matrices. Note that a general understanding of linear algebra is assumed - you're expected to be familiar with the common ways of manipulating matrices.

The problem

Consider an $n \times n$ matrix $A$ and an $n \times 1$ vector $x$. How can we, for instance, find $\frac{\partial (Ax)}{\partial x}$? Now, if you were to naively apply single-variable calculus rules, a plausible answer would be

$$\frac{\partial (Ax)}{\partial x} = A.$$

After all, the corresponding scalar form of the problem would indeed be $\frac{d(ax)}{dx} = a$. And indeed, the answer to the vector form is $A$. But now consider the following problem: $\frac{\partial (Ax)}{\partial A}$. If you were to take the scalar form, you'd probably think that the answer would be $x$. But that isn't right - the answer is actually $x^T$, where $T$ refers to the transpose of the vector $x$.
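
Before working through the derivation, you can sanity-check the first identity numerically. The sketch below is not part of the original text; it uses finite differences with arbitrary illustrative values of $A$ and $x$ (and assumes numpy is available): perturbing each entry of $x$ in turn and stacking the results recovers $A$ as the Jacobian.

```python
# A minimal finite-difference check of d(Ax)/dx = A.
# A and x are arbitrary illustrative values; numpy is assumed to be available.
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

eps = 1e-6
jacobian = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = eps
    # Column j of the Jacobian: how Ax changes when x_j is perturbed.
    jacobian[:, j] = (A @ (x + e) - A @ x) / eps

print(np.allclose(jacobian, A, atol=1e-4))  # True: the Jacobian is A itself
```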

The purpose of this section is to scratch the surface of this beautiful field - it's not something that the average Calculus 3 or linear algebra course at university will teach, yet it has its own quirks. And what is it used for? Matrix calculus is widely used in machine learning and in other fields, such as computational finance. It can also help us avoid having to work with (potentially nasty) Lagrangians and effectively reduce the problem to a single-variable scenario!

Derivative with respect to a vector

In this section, we consider problems that involve differentiating with respect to a vector $x$. As above, we assume that $x$ is a column vector.

One way to think about this problem is to reduce it to a problem of scalars. Notice that we can consider $x$ to be a collection of scalars $x_1, x_2, \ldots, x_n$. Now take the individual partial derivatives $\frac{\partial f}{\partial x_i}$ for each $i = 1, \ldots, n$. Finally, put them together. We're essentially finding $\nabla f$ after all - the steps are the same (only that before, the size of $x$ was 2 or 3, representing the $i$, $j$ and $k$ coordinate frame).
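
As a concrete (and entirely illustrative) instance of this recipe, the sketch below assembles $\nabla f$ one partial derivative at a time for the simple scalar function $f(x) = a \cdot x$, whose gradient should be the vector $a$; the values of $a$ and $x_0$ are made up for the example.

```python
# Building a gradient one partial derivative at a time.
# f(x) = a . x is a simple illustrative choice; its gradient should be a.
import numpy as np

a = np.array([2.0, -1.0, 3.0])
f = lambda x: a @ x

x0 = np.array([1.0, 0.5, -2.0])
eps = 1e-6

# Perturb one coordinate x_i at a time and collect the partial derivatives.
grad = np.array([(f(x0 + eps * np.eye(3)[i]) - f(x0)) / eps for i in range(3)])

print(grad)  # approximately [ 2. -1.  3.], i.e. the vector a
```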

So let's try this from the above example. We want to find, for all $i$,

$$\frac{\partial (Ax)}{\partial x_i} = \frac{\partial}{\partial x_i} \begin{pmatrix} A_{11}x_1 + \cdots + A_{1n}x_n \\ \vdots \\ A_{n1}x_1 + \cdots + A_{nn}x_n \end{pmatrix} = \begin{pmatrix} A_{1i} \\ \vdots \\ A_{ni} \end{pmatrix}$$

... which is the $i$-th column of $A$. And that's the same form for every $i$.

Now what would you get if you combined all the partial derivatives? Just like how you'd find $\nabla f$ by collecting the partials $\frac{\partial f}{\partial x_i}$, you'll get the matrix whose $i$-th column is $\frac{\partial (Ax)}{\partial x_i}$. This is just $A$! Indeed, that's why $\frac{\partial (Ax)}{\partial x} = A$.
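
This assembly can also be replayed symbolically. The sketch below assumes the sympy library is available and uses its Matrix.jacobian method to collect all the partial derivatives of $Ax$ with respect to $x$; the result is exactly $A$.

```python
# Symbolic check: the Jacobian of Ax with respect to x is A.
import sympy as sp

a11, a12, a21, a22, x1, x2 = sp.symbols('a11 a12 a21 a22 x1 x2')
A = sp.Matrix([[a11, a12], [a21, a22]])
x = sp.Matrix([x1, x2])

f = A * x              # column vector with components f1 and f2
J = f.jacobian(x)      # entry (i, j) is the partial of f_i with respect to x_j

print(J)               # Matrix([[a11, a12], [a21, a22]]), i.e. exactly A
```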

A first step towards matrices

Now we return to the other problem: $\frac{\partial (Ax)}{\partial A}$.

Let's assume that $A$ is a $2 \times 2$ matrix, and represent $A$ as $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$. Using the same notation for $x$, perform the matrix multiplication:

$$Ax = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 \\ a_{21}x_1 + a_{22}x_2 \end{pmatrix}.$$

Take the partial derivatives with respect to each of the elements of $A$ (this is equivalent to finding the Jacobian). For each element $a_{ij}$, find $\frac{\partial (Ax)}{\partial a_{ij}}$:

$$\frac{\partial (Ax)}{\partial a_{11}} = \begin{pmatrix} x_1 \\ 0 \end{pmatrix}, \quad \frac{\partial (Ax)}{\partial a_{12}} = \begin{pmatrix} x_2 \\ 0 \end{pmatrix}, \quad \frac{\partial (Ax)}{\partial a_{21}} = \begin{pmatrix} 0 \\ x_1 \end{pmatrix}, \quad \text{and} \quad \frac{\partial (Ax)}{\partial a_{22}} = \begin{pmatrix} 0 \\ x_2 \end{pmatrix}.$$

But then what do you do with that? How do you "combine" the result? Clearly, we are missing something.

Dimensions of ∇f

Let's take a step back and ask the question: given a vector function $f$, what should be the dimension of $\nabla f$?

Consider the example above. We have two variables: $A$ with 4 elements ($2 \times 2$) and $x$ with 2 elements, and we want to find $\frac{\partial f}{\partial A}$, where $f = Ax$. It is straightforward to show that $f$ is a column vector with 2 elements, $f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}$, where $f_1 = a_{11}x_1 + a_{12}x_2$ and $f_2 = a_{21}x_1 + a_{22}x_2$. So we also need to consider the derivatives of both $f_1$ and $f_2$ with respect to $A$. In other words, the dimension of $\frac{\partial f_1}{\partial A}$ and of $\frac{\partial f_2}{\partial A}$ is $2 \times 2$, corresponding to the partial derivatives with respect to each element of the matrix $A$.

How many elements would $\frac{\partial f}{\partial A}$ have in total? For each of the two scalars that comprise $f$, there are four partial derivatives. This results in $2 \times (2 \times 2) = 8$ elements in total - so $\frac{\partial f}{\partial A}$ is actually a tensor (which can be thought of as a higher-order matrix). This is where things start to get messy, but fortunately this is a simple enough example.
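
To see the shape of this object concretely, the sketch below (illustrative values only, not from the original text) fills in the full derivative of $f = Ax$ with respect to $A$ as a numpy array of shape $(2, 2, 2)$ using finite differences.

```python
# The full derivative of f = Ax with respect to A, stored as a (2, 2, 2) tensor:
# T[i, j, k] approximates d f_i / d a_jk.  A and x are illustrative values.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([5.0, 6.0])
eps = 1e-6

T = np.zeros((2, 2, 2))
for j in range(2):
    for k in range(2):
        dA = np.zeros((2, 2))
        dA[j, k] = eps
        T[:, j, k] = ((A + dA) @ x - A @ x) / eps

print(T.shape)   # (2, 2, 2): 2 * (2 * 2) = 8 partial derivatives in total
```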

Getting a solution

So let's use the above observation to solve the problem.

First consider $f_1$, where $f_1 = a_{11}x_1 + a_{12}x_2$. Compute the individual partial derivatives: $\frac{\partial f_1}{\partial a_{11}} = x_1$ and $\frac{\partial f_1}{\partial a_{12}} = x_2$. Similarly, $\frac{\partial f_1}{\partial a_{21}} = 0$ and $\frac{\partial f_1}{\partial a_{22}} = 0$.

Now, how do we combine this? The issue is that $\frac{\partial f}{\partial A}$ is a tensor, but we can display only a 2D representation using matrices. So let's take the "face" that corresponds to $f_1$. We can represent the collection of partial derivatives (that is, the Jacobian) we found above in a matrix: $\frac{\partial f_1}{\partial A} = \begin{pmatrix} x_1 & x_2 \\ 0 & 0 \end{pmatrix}$. Similarly, $\frac{\partial f_2}{\partial A} = \begin{pmatrix} 0 & 0 \\ x_1 & x_2 \end{pmatrix}$. What do we observe? The nonzero row in each face is simply $x^T$ (notice the change from column to row form)! And indeed, that's how we show that $\frac{\partial (Ax)}{\partial A} = x^T$.
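
The two "faces" can also be produced symbolically. The sketch below assumes sympy is available and differentiates $f_1$ and $f_2$ with respect to each entry of $A$; in each face, the nonzero row is $x$ written as a row, i.e. $x^T$.

```python
# The two 2x2 "faces" of the derivative tensor, computed symbolically with sympy.
import sympy as sp

a11, a12, a21, a22, x1, x2 = sp.symbols('a11 a12 a21 a22 x1 x2')
A = sp.Matrix([[a11, a12], [a21, a22]])
x = sp.Matrix([x1, x2])

f = A * x
f1, f2 = f[0], f[1]

# Face (j, k) entry is the partial derivative of f_i with respect to a_jk.
face_f1 = sp.Matrix(2, 2, lambda j, k: sp.diff(f1, A[j, k]))
face_f2 = sp.Matrix(2, 2, lambda j, k: sp.diff(f2, A[j, k]))

print(face_f1)   # Matrix([[x1, x2], [0, 0]]), nonzero row is x^T
print(face_f2)   # Matrix([[0, 0], [x1, x2]]), nonzero row is x^T
```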

In practice

In practice, you won't have to do all this work every time you want to find the derivative of a matrix expression. Instead, there are many matrix calculus cookbooks (sometimes also called cheat sheets), which give tables of common derivatives with respect to vectors and matrices, and that's what you're likely to use in practice. Here's one.

An example

Consider the Markowitz problem. Assume that we have $n$ stocks, and we want to assign weights $w_1, w_2, \ldots, w_n$ to them. The covariance between stocks $i$ and $j$ is $\sigma_{ij}$, for all $i, j \in \{1, \ldots, n\}$. Suppose we want to solve this problem in the traditional way. Then the optimisation problem is to minimise

$$\sum_{i=1}^{n} \sum_{j=1}^{n} w_i \sigma_{ij} w_j$$

subject to constraints which, while important, we won't mention here.

Solving this problem directly is likely to be messy given the double summation. Let's try matrix calculus. Let $w$ be an $n \times 1$ vector and $\Sigma$ be the $n \times n$ covariance matrix with entries $\sigma_{ij}$. The above problem reduces to minimising $w^T \Sigma w$, and all you need to do is take derivatives with respect to $w$! As a matrix calculus cookbook will tell you (or as you can verify with the element-wise approach shown above), $\frac{\partial (w^T \Sigma w)}{\partial w} = (\Sigma + \Sigma^T)w = 2\Sigma w$ since $\Sigma$ is symmetric, which is much more elegant than trying to compute the individual partial derivatives.
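
As a final hedged sketch (all values below are made up for illustration), you can check this cookbook rule numerically: the finite-difference gradient of $w^T \Sigma w$ matches $2\Sigma w$ when $\Sigma$ is symmetric.

```python
# Numerical check of d(w^T S w)/dw = (S + S^T) w = 2 S w for symmetric S.
# S is built to be symmetric positive semi-definite, mimicking a covariance matrix.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
S = B @ B.T                      # symmetric, like a covariance matrix
w = rng.standard_normal(4)

variance = lambda v: v @ S @ v   # the quadratic form v^T S v

eps = 1e-6
grad_fd = np.array([(variance(w + eps * np.eye(4)[i]) - variance(w)) / eps
                    for i in range(4)])

print(np.allclose(grad_fd, 2 * S @ w, atol=1e-3))   # True
```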