# Linear Algebra/Singular Value Decomposition

### Singular Value Decomposition

Given any ${\displaystyle m\times n}$ matrix ${\displaystyle A}$, the singular value decomposition (SVD) is ${\displaystyle A=U\Sigma V^{H}}$ where ${\displaystyle U}$ is an ${\displaystyle m\times m}$ unitary matrix, ${\displaystyle V}$ is an ${\displaystyle n\times n}$ unitary matrix, and ${\displaystyle \Sigma }$ is an ${\displaystyle m\times n}$ diagonal matrix: all off-diagonal entries are 0, and the diagonal entries are non-negative real numbers. The diagonal entries of ${\displaystyle \Sigma }$ are referred to as the "singular values" of ${\displaystyle A}$.
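As a quick numerical illustration (a sketch assuming NumPy is available; note that `numpy.linalg.svd` returns the factor ${\displaystyle V^{H}}$ directly as its third output, and returns the singular values as a vector rather than as the matrix ${\displaystyle \Sigma }$):

```python
import numpy as np

# A random complex 4x3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# full_matrices=True yields U (4x4) and Vh (3x3); s holds the singular values.
U, s, Vh = np.linalg.svd(A, full_matrices=True)

# Rebuild the 4x3 "diagonal" Sigma from the vector of singular values.
Sigma = np.zeros((4, 3), dtype=complex)
np.fill_diagonal(Sigma, s)

print(np.allclose(U @ U.conj().T, np.eye(4)))    # U is unitary
print(np.allclose(Vh.conj().T @ Vh, np.eye(3)))  # V is unitary
print(np.allclose(U @ Sigma @ Vh, A))            # A = U Sigma V^H
```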

As an example, consider the shear transformation ${\displaystyle A={\begin{pmatrix}1&2\\0&1\\\end{pmatrix}}}$. The singular value decomposition of ${\displaystyle A}$ is:

${\displaystyle {\begin{pmatrix}1&2\\0&1\\\end{pmatrix}}={\begin{pmatrix}0.9239&-0.3827\\0.3827&0.9239\\\end{pmatrix}}{\begin{pmatrix}2.4142&0\\0&0.4142\\\end{pmatrix}}{\begin{pmatrix}0.3827&0.9239\\-0.9239&0.3827\\\end{pmatrix}}}$
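The factorization can be checked by multiplying the three matrices back together (a sketch assuming NumPy; the entries above are rounded to four decimal places, so the product recovers ${\displaystyle A}$ only up to rounding error):

```python
import numpy as np

U = np.array([[0.9239, -0.3827],
              [0.3827,  0.9239]])
Sigma = np.diag([2.4142, 0.4142])
Vh = np.array([[ 0.3827, 0.9239],
               [-0.9239, 0.3827]])

print(np.round(U @ Sigma @ Vh, 3))  # recovers [[1, 2], [0, 1]] up to rounding
```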

The set of all unit length vectors ${\displaystyle {\vec {v}}}$, i.e. those with ${\displaystyle |{\vec {v}}|=1}$, forms a sphere of radius 1 around the origin. When ${\displaystyle A}$ is applied to this sphere, the image is an ellipsoid. The principal radii of this ellipsoid are the singular values, and the directions of its principal axes form the columns of ${\displaystyle U}$.
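This geometric picture can be checked numerically for the shear example above (again a sketch assuming NumPy): sample many unit vectors, apply ${\displaystyle A}$, and compare the longest and shortest image lengths with the singular values.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# Sample unit vectors around the circle and push them through A.
theta = np.linspace(0.0, 2.0 * np.pi, 10000)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # shape (2, 10000)
radii = np.linalg.norm(A @ circle, axis=0)

U, s, Vh = np.linalg.svd(A)
print(radii.max(), s[0])  # both ~2.4142: the longest principal radius
print(radii.min(), s[1])  # both ~0.4142: the shortest principal radius

# The direction of the longest radius lines up with the first column of U.
longest = (A @ circle)[:, np.argmax(radii)] / radii.max()
print(abs(longest @ U[:, 0]))  # ~1: parallel up to sign
```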

### Existence of the singular value decomposition

One fact that is not immediately obvious is that the singular value decomposition always exists:

Theorem (existence of the singular value decomposition)

Given any ${\displaystyle m\times n}$ matrix ${\displaystyle A}$, there exist an ${\displaystyle m\times m}$ unitary matrix ${\displaystyle U}$, an ${\displaystyle n\times n}$ unitary matrix ${\displaystyle V}$, and an ${\displaystyle m\times n}$ diagonal matrix ${\displaystyle \Sigma }$ (all off-diagonal entries 0, all diagonal entries non-negative real numbers) such that ${\displaystyle A=U\Sigma V^{H}}$.

In essence, any linear transformation is a rotation, followed by stretching or shrinking parallel to each axis (with some dimensions added or zeroed out of existence), followed by another rotation.
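This interpretation can be made concrete with the shear example (a sketch assuming NumPy): applying ${\displaystyle V^{H}}$, then ${\displaystyle \Sigma }$, then ${\displaystyle U}$ to a vector gives the same result as applying ${\displaystyle A}$ directly.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
U, s, Vh = np.linalg.svd(A)

x = np.array([0.6, -0.8])         # any test vector
step1 = Vh @ x                    # first rotation
step2 = s * step1                 # stretch/shrink parallel to each axis
step3 = U @ step2                 # second rotation
print(np.allclose(step3, A @ x))  # True: the three stages reproduce A
```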

The following proof will demonstrate that the singular value decomposition always exists. An outline of the proof will be given first:

Proof outline

We need to prove that an arbitrary linear transform ${\displaystyle A}$ is a rotation: ${\displaystyle V^{H}}$, followed by scaling parallel to each axis: ${\displaystyle \Sigma }$, and lastly followed by another rotation: ${\displaystyle U}$, giving ${\displaystyle A=U\Sigma V^{H}}$.

If the columns of ${\displaystyle A}$ are already mutually orthogonal, then the first rotation is not necessary: ${\displaystyle V=I}$. The diagonal entries of ${\displaystyle \Sigma }$ are the lengths of the columns of ${\displaystyle A}$, and ${\displaystyle U}$ is a rotation that rotates the standard basis vectors of ${\displaystyle \mathbb {C} ^{m}}$ to be parallel with the columns of ${\displaystyle A}$.
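A small numerical instance of this special case (a sketch assuming NumPy; the third column of ${\displaystyle U}$ below is one hand-picked vector completing the orthonormal set):

```python
import numpy as np

# A matrix whose columns are already mutually orthogonal.
A = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [0.0, 4.0]])

lengths = np.linalg.norm(A, axis=0)   # column lengths: the diagonal of Sigma
U_cols = A / lengths                  # columns of A normalized to unit length
# Complete to a full 3x3 unitary matrix; the last column is chosen by hand here.
U = np.column_stack([U_cols, [0.0, -0.8, 0.6]])
Sigma = np.zeros((3, 2))
np.fill_diagonal(Sigma, lengths)

print(np.allclose(U @ U.T, np.eye(3)))  # U is unitary
print(np.allclose(U @ Sigma, A))        # A = U Sigma, with V = I
```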

In most cases however, the columns of ${\displaystyle A}$ are not mutually orthogonal, and the rotation ${\displaystyle V^{H}}$ is non-trivial. Since ${\displaystyle A=(AV)V^{H}}$, ${\displaystyle V}$ must be chosen so that the columns of ${\displaystyle AV}$ are mutually orthogonal. Let ${\displaystyle V={\begin{pmatrix}{\vec {v}}_{1}&{\vec {v}}_{2}&\dots &{\vec {v}}_{n}\end{pmatrix}}}$. We need to choose orthonormal vectors ${\displaystyle {\vec {v}}_{1},{\vec {v}}_{2},\dots ,{\vec {v}}_{n}}$ so that ${\displaystyle A{\vec {v}}_{1},A{\vec {v}}_{2},\dots ,A{\vec {v}}_{n}}$ are all mutually orthogonal. This can be done iteratively. Suppose that we have chosen ${\displaystyle {\vec {v}}_{1}}$ so that, for any vector ${\displaystyle {\vec {v}}}$ orthogonal to ${\displaystyle {\vec {v}}_{1}}$, ${\displaystyle A{\vec {v}}_{1}}$ is orthogonal to ${\displaystyle A{\vec {v}}}$. The effort now switches to finding an orthonormal set of vectors ${\displaystyle {\vec {v}}_{2},{\vec {v}}_{3},\dots ,{\vec {v}}_{n}}$, confined to the space of vectors perpendicular to ${\displaystyle {\vec {v}}_{1}}$, such that ${\displaystyle A{\vec {v}}_{2},A{\vec {v}}_{3},\dots ,A{\vec {v}}_{n}}$ are mutually orthogonal.

Let ${\displaystyle V_{1}}$ be a unitary matrix with ${\displaystyle {\vec {v}}_{1}}$ as the first column. Factoring ${\displaystyle V_{1}}$ from the left side of ${\displaystyle V}$ to get ${\displaystyle V=V_{1}V_{\text{new}}}$ results in a new set of orthonormal vectors that are the columns of ${\displaystyle V_{\text{new}}}$. The goal of having the columns of ${\displaystyle AV}$ be mutually orthogonal is converted to having the columns of ${\displaystyle (AV_{1})V_{\text{new}}}$ be mutually orthogonal with ${\displaystyle AV_{1}}$ effectively replacing ${\displaystyle A}$. ${\displaystyle {\vec {v}}_{1}}$ transforms to ${\displaystyle {\vec {e}}_{1}}$, and the space of vectors orthogonal to ${\displaystyle {\vec {v}}_{1}}$ transforms to the space spanned by the standard basis vectors ${\displaystyle {\vec {e}}_{2},{\vec {e}}_{3},\dots ,{\vec {e}}_{n}}$. The first column of ${\displaystyle AV_{1}}$ is ${\displaystyle A{\vec {v}}_{1}}$ and so is orthogonal to all other columns.

If ${\displaystyle U_{1}}$ is a unitary matrix where the first column of ${\displaystyle U_{1}}$ is ${\displaystyle A{\vec {v}}_{1}}$ normalized to unit length, then factoring ${\displaystyle U_{1}}$ from the left side of ${\displaystyle AV_{1}}$ to get ${\displaystyle A_{1}=U_{1}^{H}AV_{1}}$ results in a matrix in which the first column is parallel to the standard basis vector ${\displaystyle {\vec {e}}_{1}}$. The first column of ${\displaystyle AV_{1}}$ is orthogonal to all other columns, so the first column of ${\displaystyle A_{1}}$ is orthogonal to all other columns, and hence the first row of ${\displaystyle A_{1}}$ contains all 0s except possibly in the first entry.

${\displaystyle {\vec {v}}_{2},{\vec {v}}_{3},\dots ,{\vec {v}}_{n}}$ can now be determined recursively, with the dimension reduced to ${\displaystyle n-1}$ and with ${\displaystyle A}$ replaced by ${\displaystyle A_{1}}$ with its first row and column removed. This forms the inductive component of the coming proof.

Lastly, how do we know that there exists ${\displaystyle {\vec {v}}_{1}}$ such that, for any vector ${\displaystyle {\vec {v}}}$ orthogonal to ${\displaystyle {\vec {v}}_{1}}$, ${\displaystyle A{\vec {v}}_{1}}$ is orthogonal to ${\displaystyle A{\vec {v}}}$? The answer will be that any unit length ${\displaystyle {\vec {v}}}$ that maximizes ${\displaystyle |A{\vec {v}}|}$ is a valid ${\displaystyle {\vec {v}}_{1}}$.
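This claim can be spot-checked numerically before it is proven (a sketch assuming NumPy, using a real matrix for simplicity; the maximizing ${\displaystyle {\vec {v}}_{1}}$ is computed here as a dominant eigenvector of ${\displaystyle A^{H}A}$, which is one standard way to characterize the maximizer of ${\displaystyle |A{\vec {v}}|}$ on the unit sphere):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# The unit vector maximizing |Av| is a dominant eigenvector of A^H A.
w, Q = np.linalg.eigh(A.T @ A)  # eigenvalues in ascending order
v1 = Q[:, -1]

# Build some v orthogonal to v1 by projecting a random vector away from v1.
v = rng.standard_normal(3)
v -= (v1 @ v) * v1
v /= np.linalg.norm(v)

print(abs((A @ v1) @ (A @ v)))  # ~0: A v1 is orthogonal to A v
```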

We are now ready to give the proof in full detail:

Proof of the existence of the singular value decomposition

This proof will proceed using induction on both ${\displaystyle m}$ and ${\displaystyle n}$.

Base Case ${\displaystyle m=1}$

${\displaystyle A}$ has a single row, and therefore has the form ${\displaystyle A={\begin{pmatrix}\sigma _{1}{\vec {v}}_{1}^{H}\end{pmatrix}}}$ for some unit length vector ${\displaystyle {\vec {v}}_{1}}$ and some non-negative real number ${\displaystyle \sigma _{1}}$. Such ${\displaystyle {\vec {v}}_{1}}$ and ${\displaystyle \sigma _{1}}$ exist for any single row matrix ${\displaystyle A}$: take ${\displaystyle \sigma _{1}}$ to be the length of the row, and if ${\displaystyle \sigma _{1}=0}$, let ${\displaystyle {\vec {v}}_{1}}$ be any unit length vector.

Let ${\displaystyle U={\begin{pmatrix}1\end{pmatrix}}}$, ${\displaystyle \Sigma ={\begin{pmatrix}\sigma _{1}&0&\dots &0\\\end{pmatrix}}}$, and ${\displaystyle V={\begin{pmatrix}{\vec {v}}_{1}&{\vec {v}}_{2}&\dots &{\vec {v}}_{n}\end{pmatrix}}}$ where ${\displaystyle {\vec {v}}_{1},{\vec {v}}_{2},\dots ,{\vec {v}}_{n}}$ together form an orthonormal set of vectors. ${\displaystyle {\vec {v}}_{2},{\vec {v}}_{3},\dots ,{\vec {v}}_{n}}$ can be determined via Gram-Schmidt orthogonalization. It is then clear that ${\displaystyle A=U\Sigma V^{H}}$.

Base Case ${\displaystyle n=1}$

${\displaystyle A}$ has a single column, and therefore has the form ${\displaystyle A={\begin{pmatrix}\sigma _{1}{\vec {u}}_{1}\end{pmatrix}}}$ for some unit length vector ${\displaystyle {\vec {u}}_{1}}$ and some non-negative real number ${\displaystyle \sigma _{1}}$. Such ${\displaystyle {\vec {u}}_{1}}$ and ${\displaystyle \sigma _{1}}$ exist for any single column matrix ${\displaystyle A}$: take ${\displaystyle \sigma _{1}}$ to be the length of the column, and if ${\displaystyle \sigma _{1}=0}$, let ${\displaystyle {\vec {u}}_{1}}$ be any unit length vector.

Let ${\displaystyle U={\begin{pmatrix}{\vec {u}}_{1}&{\vec {u}}_{2}&\dots &{\vec {u}}_{m}\end{pmatrix}}}$, ${\displaystyle \Sigma ={\begin{pmatrix}\sigma _{1}\\0\\\vdots \\0\\\end{pmatrix}}}$, and ${\displaystyle V={\begin{pmatrix}1\end{pmatrix}}}$ where ${\displaystyle {\vec {u}}_{1},{\vec {u}}_{2},\dots ,{\vec {u}}_{m}}$ together form an orthonormal set of vectors. ${\displaystyle {\vec {u}}_{2},{\vec {u}}_{3},\dots ,{\vec {u}}_{m}}$ can be determined via Gram-Schmidt orthogonalization. It is then clear that ${\displaystyle A=U\Sigma V^{H}}$.
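Both base cases rely on completing a single unit vector to an orthonormal basis via Gram-Schmidt. A minimal sketch of that completion (assuming NumPy; the helper name `complete_basis` is ours, not a library function):

```python
import numpy as np

def complete_basis(v1):
    """Extend the unit vector v1 to an orthonormal basis via Gram-Schmidt."""
    n = v1.shape[0]
    basis = [v1]
    for e in np.eye(n, dtype=v1.dtype):
        w = e.copy()
        for b in basis:                  # remove components along chosen vectors
            w = w - (b.conj() @ w) * b
        norm = np.linalg.norm(w)
        if norm > 1e-10:                 # skip vectors nearly dependent on basis
            basis.append(w / norm)
    return np.column_stack(basis[:n])    # n orthonormal columns, the first is v1

v1 = np.array([3.0, 0.0, 4.0]) / 5.0
V = complete_basis(v1)
print(np.allclose(V.conj().T @ V, np.eye(3)))  # the columns are orthonormal
print(np.allclose(V[:, 0], v1))                # and the first column is v1
```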

Inductive Case ${\displaystyle m,n\geq 2}$

Let ${\displaystyle {\vec {e}}_{k,i}}$ denote the ${\displaystyle i^{\text{th}}}$ standard basis vector of ${\displaystyle \mathbb {C} ^{k}}$. Let ${\displaystyle \mathbf {0} _{k\times l}}$ denote a ${\displaystyle k\times l}$ matrix of 0s.

Maximize ${\displaystyle |A{\vec {v}}|}$ subject to the constraint ${\displaystyle |{\vec {v}}|=1}$. Let ${\displaystyle {\vec {v}}_{1}}$ be a unit length vector that maximizes ${\displaystyle |A{\vec {v}}|}$; such a maximizer exists because ${\displaystyle |A{\vec {v}}|}$ is a continuous function on the unit sphere, which is compact. Let ${\displaystyle \sigma _{1}=|A{\vec {v}}_{1}|}$, and let ${\displaystyle {\vec {u}}_{1}={\frac {A{\vec {v}}_{1}}{\sigma _{1}}}}$ (if ${\displaystyle \sigma _{1}=0}$, then ${\displaystyle {\vec {u}}_{1}}$ is an arbitrary unit length vector).

Using Gram-Schmidt orthogonalization, unitary matrices ${\displaystyle U_{1}}$ and ${\displaystyle V_{1}}$ can be determined such that the first columns of ${\displaystyle U_{1}}$ and ${\displaystyle V_{1}}$ respectively are ${\displaystyle {\vec {u}}_{1}}$ and ${\displaystyle {\vec {v}}_{1}}$: ${\displaystyle U_{1}{\vec {e}}_{m,1}={\vec {u}}_{1}}$ and ${\displaystyle V_{1}{\vec {e}}_{n,1}={\vec {v}}_{1}}$. Then ${\displaystyle A=U_{1}A_{1}V_{1}^{H}}$ where ${\displaystyle A_{1}=U_{1}^{H}AV_{1}}$. It will now be proven that the first column and row of ${\displaystyle A_{1}}$ contain all 0s except for the (1,1) entry, which contains ${\displaystyle \sigma _{1}}$: ${\displaystyle A_{1}={\begin{pmatrix}\sigma _{1}&\mathbf {0} _{1\times (n-1)}\\\mathbf {0} _{(m-1)\times 1}&A_{\text{reduced}}\end{pmatrix}}}$.

${\displaystyle A{\vec {v}}_{1}=\sigma _{1}{\vec {u}}_{1}\implies U_{1}A_{1}V_{1}^{H}{\vec {v}}_{1}=\sigma _{1}{\vec {u}}_{1}\implies A_{1}{\vec {e}}_{n,1}=\sigma _{1}{\vec {e}}_{m,1}}$ (using ${\displaystyle V_{1}^{H}{\vec {v}}_{1}={\vec {e}}_{n,1}}$ and ${\displaystyle U_{1}^{H}{\vec {u}}_{1}={\vec {e}}_{m,1}}$). This means that the first column of ${\displaystyle A_{1}}$ is ${\displaystyle \sigma _{1}{\vec {e}}_{m,1}={\begin{pmatrix}\sigma _{1}\\\mathbf {0} _{(m-1)\times 1}\end{pmatrix}}}$.

To show that the first row of ${\displaystyle A_{1}}$ is ${\displaystyle \sigma _{1}{\vec {e}}_{n,1}^{H}={\begin{pmatrix}\sigma _{1}&\mathbf {0} _{1\times (n-1)}\end{pmatrix}}}$, we will show that the first column of ${\displaystyle A_{1}}$ is orthogonal to all of the other columns of ${\displaystyle A_{1}}$. This will require exploiting the fact that ${\displaystyle {\vec {v}}={\vec {e}}_{n,1}}$ maximizes ${\displaystyle |A_{1}{\vec {v}}|}$ subject to the constraint ${\displaystyle |{\vec {v}}|=1}$.

Let ${\displaystyle {\vec {v}}(t)}$ be a differentiable curve of unit length vectors with ${\displaystyle {\vec {v}}(0)={\vec {e}}_{n,1}}$.

Taking the derivative of the constraint ${\displaystyle {\vec {v}}^{H}{\vec {v}}=1}$ gives ${\displaystyle {\frac {d{\vec {v}}^{H}}{dt}}{\vec {v}}+{\vec {v}}^{H}{\frac {d{\vec {v}}}{dt}}=0}$, or equivalently ${\displaystyle \Re \left({\vec {v}}^{H}{\frac {d{\vec {v}}}{dt}}\right)=0}$.

${\displaystyle |A_{1}{\vec {v}}(t)|}$ being maximized at ${\displaystyle t=0}$, where ${\displaystyle {\vec {v}}(0)={\vec {e}}_{n,1}}$, gives:

${\displaystyle {\frac {d}{dt}}(|A_{1}{\vec {v}}|^{2}){\bigg |}_{t=0}=0\iff {\frac {d}{dt}}({\vec {v}}^{H}A_{1}^{H}A_{1}{\vec {v}}){\bigg |}_{t=0}=0\iff {\frac {d{\vec {v}}^{H}}{dt}}{\bigg |}_{t=0}A_{1}^{H}A_{1}{\vec {e}}_{n,1}+{\vec {e}}_{n,1}^{H}A_{1}^{H}A_{1}{\frac {d{\vec {v}}}{dt}}{\bigg |}_{t=0}=0}$

${\displaystyle \iff \Re \left({\vec {e}}_{n,1}^{H}A_{1}^{H}A_{1}{\frac {d{\vec {v}}}{dt}}{\bigg |}_{t=0}\right)=0}$

(${\displaystyle \Re }$ and ${\displaystyle \Im }$ denote the real and imaginary components of a complex number respectively.)

Let ${\displaystyle i=2,3,\dots ,n}$ be arbitrary, and let ${\displaystyle {\frac {d{\vec {v}}}{dt}}{\bigg |}_{t=0}={\vec {e}}_{n,i}}$. This choice is admissible since ${\displaystyle {\vec {e}}_{n,1}}$ and ${\displaystyle {\vec {e}}_{n,i}}$ are orthogonal; for example, the curve ${\displaystyle {\vec {v}}(t)=\cos(t){\vec {e}}_{n,1}+\sin(t){\vec {e}}_{n,i}}$ has unit length for all ${\displaystyle t}$ and this derivative at ${\displaystyle t=0}$.

This gives: ${\displaystyle \Re ({\vec {e}}_{n,1}^{H}A_{1}^{H}A_{1}{\vec {e}}_{n,i})=0}$.

Now let ${\displaystyle {\frac {d{\vec {v}}}{dt}}{\bigg |}_{t=0}=i{\vec {e}}_{n,i}}$ where the ${\displaystyle i}$ outside of the subscript is the imaginary unit. Again this is admissible, since ${\displaystyle {\vec {e}}_{n,1}}$ and ${\displaystyle i{\vec {e}}_{n,i}}$ are orthogonal; here the curve ${\displaystyle {\vec {v}}(t)=\cos(t){\vec {e}}_{n,1}+i\sin(t){\vec {e}}_{n,i}}$ works.

This gives: ${\displaystyle \Re ({\vec {e}}_{n,1}^{H}A_{1}^{H}A_{1}(i{\vec {e}}_{n,i}))=0\implies \Im ({\vec {e}}_{n,1}^{H}A_{1}^{H}A_{1}{\vec {e}}_{n,i})=0}$.

Hence: ${\displaystyle (A_{1}{\vec {e}}_{n,1})^{H}(A_{1}{\vec {e}}_{n,i})={\vec {e}}_{n,1}^{H}A_{1}^{H}A_{1}{\vec {e}}_{n,i}=0}$.

Therefore, the first column of ${\displaystyle A_{1}}$, which is ${\displaystyle \sigma _{1}{\vec {e}}_{m,1}}$, is orthogonal to all of the other columns of ${\displaystyle A_{1}}$. If ${\displaystyle \sigma _{1}>0}$, this forces the first entry of every other column to be 0; if ${\displaystyle \sigma _{1}=0}$, then the maximality of ${\displaystyle \sigma _{1}}$ gives ${\displaystyle A=\mathbf {0} }$ and hence ${\displaystyle A_{1}=\mathbf {0} }$. In either case, ${\displaystyle A_{1}}$ has the form: ${\displaystyle A_{1}={\begin{pmatrix}\sigma _{1}&\mathbf {0} _{1\times (n-1)}\\\mathbf {0} _{(m-1)\times 1}&A_{\text{reduced}}\end{pmatrix}}}$.
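This structure can be observed numerically (a sketch assuming NumPy, with a real matrix; the helper `reflector` is our own name, and it builds a Householder reflection whose first column is a prescribed unit vector, in place of the Gram-Schmidt orthogonalization used in the proof):

```python
import numpy as np

def reflector(u):
    """Orthogonal matrix whose first column is the unit vector u (real case)."""
    n = u.shape[0]
    w = -u.copy()
    w[0] += 1.0                               # w = e1 - u
    nw = np.linalg.norm(w)
    if nw < 1e-12:                            # u is already e1
        return np.eye(n)
    w /= nw
    return np.eye(n) - 2.0 * np.outer(w, w)   # Householder reflection

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

# v1 maximizes |Av|; sigma1 = |A v1|; u1 = A v1 / sigma1, as in the proof.
_, Q = np.linalg.eigh(A.T @ A)
v1 = Q[:, -1]
sigma1 = np.linalg.norm(A @ v1)
u1 = A @ v1 / sigma1

U1, V1 = reflector(u1), reflector(v1)
A1 = U1.T @ A @ V1
print(np.round(A1, 6))  # first row and column are 0 except A1[0, 0] == sigma1
```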

${\displaystyle A_{\text{reduced}}}$ is an ${\displaystyle (m-1)\times (n-1)}$ matrix, and therefore by the induction hypothesis, ${\displaystyle A_{\text{reduced}}=U_{r}\Sigma _{r}V_{r}^{H}}$. Finally,

${\displaystyle A=U\Sigma V^{H}}$ where ${\displaystyle U=U_{1}{\begin{pmatrix}1&\mathbf {0} _{1\times (m-1)}\\\mathbf {0} _{(m-1)\times 1}&U_{r}\end{pmatrix}}}$, ${\displaystyle \Sigma ={\begin{pmatrix}\sigma _{1}&\mathbf {0} _{1\times (n-1)}\\\mathbf {0} _{(m-1)\times 1}&\Sigma _{r}\end{pmatrix}}}$, and ${\displaystyle V=V_{1}{\begin{pmatrix}1&\mathbf {0} _{1\times (n-1)}\\\mathbf {0} _{(n-1)\times 1}&V_{r}\end{pmatrix}}}$.
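The proof is constructive, and for real matrices it can be followed line by line in code. The sketch below (assuming NumPy; `reflector` and `svd_by_induction` are our own names, and a production SVD would use a different algorithm) computes ${\displaystyle {\vec {v}}_{1}}$ as a dominant eigenvector of ${\displaystyle A^{H}A}$, reduces to ${\displaystyle A_{\text{reduced}}}$, recurses, and reassembles ${\displaystyle U}$, ${\displaystyle \Sigma }$, and ${\displaystyle V}$ exactly as above.

```python
import numpy as np

def reflector(u):
    """Orthogonal matrix whose first column is the unit vector u (real case)."""
    n = u.shape[0]
    w = -u.copy()
    w[0] += 1.0                               # w = e1 - u
    nw = np.linalg.norm(w)
    if nw < 1e-12:                            # u is already e1
        return np.eye(n)
    w /= nw
    return np.eye(n) - 2.0 * np.outer(w, w)   # Householder reflection

def svd_by_induction(A):
    """Return U, Sigma, V with A = U @ Sigma @ V.T, following the proof."""
    m, n = A.shape
    if n == 1:                                # base case: a single column
        sigma1 = np.linalg.norm(A)
        u1 = A[:, 0] / sigma1 if sigma1 > 0 else np.eye(m)[:, 0]
        Sigma = np.zeros((m, 1)); Sigma[0, 0] = sigma1
        return reflector(u1), Sigma, np.eye(1)
    if m == 1:                                # base case: a single row
        sigma1 = np.linalg.norm(A)
        v1 = A[0] / sigma1 if sigma1 > 0 else np.eye(n)[:, 0]
        Sigma = np.zeros((1, n)); Sigma[0, 0] = sigma1
        return np.eye(1), Sigma, reflector(v1)
    # Inductive case: v1 maximizing |Av| is a dominant eigenvector of A^T A.
    _, Q = np.linalg.eigh(A.T @ A)
    v1 = Q[:, -1]
    sigma1 = np.linalg.norm(A @ v1)
    u1 = A @ v1 / sigma1 if sigma1 > 0 else np.eye(m)[:, 0]
    U1, V1 = reflector(u1), reflector(v1)
    A1 = U1.T @ A @ V1                        # zero outside (1,1) in row/col 1
    Ur, Sr, Vr = svd_by_induction(A1[1:, 1:]) # recurse on A_reduced
    U = U1 @ np.block([[np.eye(1), np.zeros((1, m - 1))],
                       [np.zeros((m - 1, 1)), Ur]])
    V = V1 @ np.block([[np.eye(1), np.zeros((1, n - 1))],
                       [np.zeros((n - 1, 1)), Vr]])
    Sigma = np.zeros((m, n)); Sigma[0, 0] = sigma1; Sigma[1:, 1:] = Sr
    return U, Sigma, V

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
U, Sigma, V = svd_by_induction(A)
print(np.allclose(U @ Sigma @ V.T, A))   # True: A = U Sigma V^T
print(np.round(np.diag(Sigma), 4))       # matches np.linalg.svd(A)[1]
```

Since ${\displaystyle \sigma _{1}}$ is the largest singular value at each level of the recursion, the diagonal of ${\displaystyle \Sigma }$ comes out in descending order, matching the convention used by `numpy.linalg.svd`.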