Engineering Analysis/Random Vectors

Many of the concepts that we have learned so far have dealt with scalar random variables. However, these concepts all translate to vectors of random variables. A random vector X contains N elements, Xi, each of which is a distinct random variable. The individual elements of a random vector may or may not be correlated or dependent on one another.

$X={\begin{bmatrix}X_{1}\\X_{2}\\\vdots \\X_{N}\end{bmatrix}}$

Expectation

The expectation of a random vector is a vector of the expectation values of each element of the vector. For instance:

$E[X]={\begin{bmatrix}E[X_{1}]\\E[X_{2}]\\\vdots \\E[X_{N}]\end{bmatrix}}$

Using this definition, the mean vector of a random vector X, denoted $\mu _{X}$, is the vector composed of the means of all the individual elements of X:

$\mu _{X}={\begin{bmatrix}\mu _{X_{1}}\\\mu _{X_{2}}\\\vdots \\\mu _{X_{N}}\end{bmatrix}}$

Correlation Matrix

The correlation matrix of a random vector X is defined as:

$R_{X}=E[XX^{T}]$

Each element of the correlation matrix is the correlation between a pair of elements of X: the (i, j) entry is $E[X_{i}X_{j}]$. The correlation matrix is real symmetric. If the off-diagonal elements of the correlation matrix are all zero, the random vector is said to be uncorrelated. If $R_{X}$ is an identity matrix, the random vector is said to be "white": its elements are uncorrelated with one another, and each has the same unit correlation $E[X_{i}^{2}]=1$. "White noise" is the standard example.
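As a concrete illustration, the mean vector and correlation matrix can be estimated from samples. This is a minimal NumPy sketch; the distribution and sample size are illustrative assumptions, chosen so that X is white and R comes out close to the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200,000 samples of a 3-element random vector with independent,
# zero-mean, unit-variance elements (an illustrative "white" case).
# Each row of the array is one realization of X.
X = rng.normal(size=(200_000, 3))

# Mean vector: the element-wise expectation, estimated by column averages.
mu_X = X.mean(axis=0)

# Correlation matrix R_X = E[X X^T], estimated by averaging the
# outer products of the sample vectors.
R = (X.T @ X) / X.shape[0]

# R is real symmetric, and for this white vector it is close to I.
```

For a vector with correlated elements, the off-diagonal entries of R would instead approach the corresponding correlations $E[X_{i}X_{j}]$.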

Matrix Diagonalization

As discussed earlier, we can diagonalize a matrix by constructing the matrix V whose columns are the eigenvectors of that matrix. If X is our non-diagonal matrix, we can create a diagonal matrix D by:

$D=V^{-1}XV$

If the X matrix is real symmetric (as is always the case with the correlation matrix), its eigenvectors can be chosen orthonormal, so that $V^{-1}=V^{T}$ and this simplifies to:

$D=V^{T}XV$

Whitening

A matrix X can be whitened by constructing a diagonal matrix W that contains the inverse square roots of the eigenvalues of X on its diagonal:

$W={\begin{bmatrix}{\frac {1}{\sqrt {\lambda _{1}}}}&0&\cdots &0\\0&{\frac {1}{\sqrt {\lambda _{2}}}}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &{\frac {1}{\sqrt {\lambda _{N}}}}\end{bmatrix}}$

Using this W matrix, we can convert X into the identity matrix:

$I=W^{T}V^{T}XVW$

Simultaneous Diagonalization

If we have two matrices, X and Y, we can construct a matrix A that will satisfy the following relationships:

$A^{T}XA=I$

$A^{T}YA=D$

Where I is an identity matrix, and D is a diagonal matrix. This process is known as simultaneous diagonalization. If we have the V and W matrices described above such that

$I=W^{T}V^{T}XVW,$

we can then construct the B matrix by applying this same transformation to the Y matrix:

$W^{T}V^{T}YVW=B$

We can combine the eigenvectors of B into a transformation matrix Z such that:

$Z^{T}BZ=D$

We can then define our A matrix as:

$A=VWZ$

$A^{T}=Z^{T}W^{T}V^{T}$

This A matrix satisfies the simultaneous diagonalization procedure outlined above: since Z is orthonormal (B is real symmetric), $A^{T}XA=Z^{T}(W^{T}V^{T}XVW)Z=Z^{T}IZ=I$, and $A^{T}YA=Z^{T}BZ=D$.
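The whole procedure (diagonalize X, whiten it, then rotate with the eigenvectors of B) can be sketched numerically. This is a NumPy sketch with illustrative matrices; `numpy.linalg.eigh` returns orthonormal eigenvectors for symmetric matrices, which is exactly the property the derivation needs:

```python
import numpy as np

# Two real symmetric matrices standing in for X and Y; the values are
# illustrative (X must be positive definite so 1/sqrt(lambda) exists).
X = np.array([[4.0, 1.0],
              [1.0, 3.0]])
Y = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Diagonalize X: eigh returns an orthonormal eigenvector matrix V,
# so D = V^T X V is diagonal.
lam, V = np.linalg.eigh(X)

# Whiten: W holds the inverse square roots of the eigenvalues of X,
# giving W^T V^T X V W = I.
W = np.diag(1.0 / np.sqrt(lam))

# Apply the same transformation to Y and diagonalize the result.
B = W.T @ V.T @ Y @ V @ W
d, Z = np.linalg.eigh(B)          # Z: orthonormal eigenvectors of B

# Combine into the simultaneous-diagonalization matrix A = V W Z.
A = V @ W @ Z
```

With these matrices, `A.T @ X @ A` reproduces the identity and `A.T @ Y @ A` reproduces the diagonal matrix of eigenvalues of B.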

Covariance Matrix

The covariance matrix of a random vector X is defined as:

$Q_{X}=E[(X-\mu _{X})(X-\mu _{X})^{T}]$

Each element of the covariance matrix expresses the covariance between a pair of elements of X: the (i, j) entry is $E[(X_{i}-\mu _{X_{i}})(X_{j}-\mu _{X_{j}})]$. The covariance matrix is real symmetric. For two random vectors X and Y, the cross-covariance matrix is defined analogously as $Q_{XY}=E[(X-\mu _{X})(Y-\mu _{Y})^{T}]$, which satisfies $Q_{XY}=Q_{YX}^{T}$.
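A sample-based sketch of estimating the covariance matrix (NumPy; the distribution and sample size are illustrative assumptions). It also checks the standard identity $R_{X}=Q_{X}+\mu _{X}\mu _{X}^{T}$ relating the covariance matrix to the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# 100,000 samples of a 3-element vector with a nonzero mean
# (illustrative values); each row is one realization of X.
X = rng.normal(loc=[1.0, -2.0, 0.5], size=(100_000, 3))

mu = X.mean(axis=0)                        # mean vector mu_X
R = (X.T @ X) / X.shape[0]                 # correlation matrix E[X X^T]
Q = ((X - mu).T @ (X - mu)) / X.shape[0]   # covariance matrix

# Q is real symmetric, and these sample estimates satisfy the
# identity R = Q + mu mu^T exactly (it holds algebraically when the
# sample mean is used for mu).
```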

We can relate the correlation matrix and the covariance matrix through the following formula:

$R_{X}=Q_{X}+\mu _{X}\mu _{X}^{T}$

Cumulative Distribution Function

An N-element random vector X has a cumulative distribution function $F_{X}$ of N variables, defined as:

$F_{X}(x)=P[X\leq x]=P[X_{1}\leq x_{1},X_{2}\leq x_{2},\cdots ,X_{N}\leq x_{N}]$

Probability Density Function

The probability density function of a random vector can be defined in terms of the Nth partial derivative of the cumulative distribution function:

$f_{X}(x)={\frac {\partial ^{N}F_{X}(x)}{\partial x_{1}\partial x_{2}\cdots \partial x_{N}}}$

If we know the density function, we can find the mean of the ith element of X by integrating over all N variables:

$\mu _{X_{i}}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\cdots \int _{-\infty }^{\infty }x_{i}f_{X}(x)\,dx_{1}dx_{2}\cdots dx_{N}$
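As a numerical check for N = 2 (a NumPy sketch using an independent bivariate Gaussian as an illustrative density $f_{X}$, and a plain Riemann sum over a grid wide enough that the tails are negligible):

```python
import numpy as np

# Wide grid: the Gaussian tails are negligible at the edges.
x1 = np.linspace(-8.0, 10.0, 801)
x2 = np.linspace(-9.0, 8.0, 801)
dx1 = x1[1] - x1[0]
dx2 = x2[1] - x2[0]
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

# Illustrative density: independent Gaussians with means 1.0 and -0.5
# and unit variances, so mu_X1 should come out near 1.0.
f = np.exp(-0.5 * ((X1 - 1.0) ** 2 + (X2 + 0.5) ** 2)) / (2.0 * np.pi)

# mu_X1: the double integral of x1 * f(x1, x2) over both variables,
# approximated by a Riemann sum on the grid.
mu_1 = np.sum(X1 * f) * dx1 * dx2
```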