Statistics/Numerical Methods/Basic Linear Algebra and Gram-Schmidt Orthogonalization

Introduction[edit]

Basically, all of the material in this section can also be found in any linear algebra book. However, Gram-Schmidt orthogonalization is used in statistical algorithms and in the solution of statistical problems, so we briefly review the linear algebra that is necessary to understand it.

The following subsections also contain examples. It is very important for the further understanding that the concepts presented here apply not only to the typical vectors given as tuples of real numbers, but also to functions, which can likewise be considered vectors.

Fields[edit]

Definition[edit]

A set R with two operations + and * on its elements is called a field (in short, (R,+,*)), if the following conditions hold:

  1. For all \alpha, \beta \in R holds \alpha+\beta \in R
  2. For all \alpha, \beta \in R holds \alpha+\beta = \beta+\alpha (commutativity)
  3. For all \alpha, \beta, \gamma \in R holds \alpha+(\beta+\gamma) = (\alpha+\beta)+\gamma (associativity)
  4. There exists a unique element 0, called zero, such that for all \alpha \in R holds \alpha+0 = \alpha
  5. For all \alpha \in R there exists a unique element -\alpha such that \alpha + (-\alpha) = 0
  6. For all \alpha, \beta \in R holds \alpha*\beta \in R
  7. For all \alpha, \beta \in R holds \alpha*\beta = \beta*\alpha (commutativity)
  8. For all \alpha, \beta, \gamma \in R holds \alpha*(\beta*\gamma) = (\alpha*\beta)*\gamma (associativity)
  9. There exists a unique element 1, called one, such that for all \alpha \in R holds \alpha*1 = \alpha
  10. For all non-zero \alpha \in R there exists a unique element \alpha^{-1} such that \alpha * \alpha^{-1} = 1
  11. For all \alpha, \beta, \gamma \in R holds \alpha*(\beta+\gamma) = \alpha*\beta+\alpha*\gamma (distributivity)

The elements of R are also called scalars.

Examples[edit]

It can easily be proven that the real numbers with the well-known addition and multiplication, (IR, +, *), form a field. The same holds for the complex numbers with their addition and multiplication. Beyond these, only a few other familiar sets with two operations fulfill all of these conditions.

For statistics, only the real and complex numbers with the addition and multiplication are important.

Vector spaces[edit]

Definition[edit]

A set V with two operations, an addition + of its elements and a multiplication * of its elements by scalars from R, is called a vector space over R, if the following conditions hold:

  1. For all x, y \in V holds x+y \in V
  2. For all x, y \in V holds x+y = y+x (commutativity)
  3. For all x, y, z \in V holds x+(y+z) = (x+y)+z (associativity)
  4. There exists a unique element \mathbb{O}, called origin, such that for all x \in V holds x+\mathbb{O} = x
  5. For all x \in V there exists a unique element -x such that x + (-x) = \mathbb{O}
  6. For all \alpha \in R and x \in V holds \alpha*x \in V
  7. For all \alpha, \beta \in R and x \in V holds \alpha*(\beta*x) = (\alpha*\beta)*x (associativity)
  8. For all x \in V and 1 \in R holds 1*x = x
  9. For all \alpha \in R and for all x, y \in V holds \alpha*(x+y) = \alpha*x+\alpha*y (distributivity wrt. vector addition)
  10. For all \alpha, \beta \in R and for all x \in V holds (\alpha+\beta)*x = \alpha*x+\beta*x (distributivity wrt. scalar addition)

Note that we used the same symbols + and * for different operations in R and V. The elements of V are also called vectors.

Examples:

  1. The set IR^p of real-valued vectors (x_1,...,x_p) with elementwise addition x+y = (x_1+y_1,...,x_p+y_p) and elementwise scalar multiplication \alpha*x = (\alpha x_1,...,\alpha x_p) is a vector space over IR.
  2. The set of polynomials of degree at most p, P(x) = b_0 + b_1 x + b_2 x^2 + ... + b_p x^p, with the usual addition and scalar multiplication is a vector space over IR.
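
To make the two examples concrete, the following minimal Python/NumPy sketch (the variable names are purely illustrative) treats a vector in IR^p as an array of numbers and a polynomial of degree at most p as the array of its coefficients (b_0, ..., b_p); both then share the same elementwise addition and scalar multiplication.

  import numpy as np

  # a vector in IR^3: elementwise addition and scalar multiplication
  x = np.array([1.0, 2.0, 3.0])
  y = np.array([4.0, 5.0, 6.0])
  print(x + y)      # [5. 7. 9.]
  print(2.5 * x)    # [2.5 5.  7.5]

  # a polynomial b_0 + b_1 x + b_2 x^2 stored as its coefficient vector (b_0, b_1, b_2);
  # adding and scaling polynomials is the same elementwise arithmetic
  p1 = np.array([1.0, 1.0, 1.0])   # 1 + x + x^2
  p2 = np.array([0.0, 1.0, 1.0])   # x + x^2
  print(p1 + p2)    # coefficients of 1 + 2x + 2x^2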

Linear combinations[edit]

A vector x can be written as a linear combination of vectors x_1, ..., x_n, if

x = \sum_{i=1}^n \alpha_i x_i

with \alpha_i \in R.

Examples:

  • (1,2,3) is a linear combination of (1,0,0),\,(0,1,0), \,(0,0,1) since (1,2,3)=1*(1,0,0)+2*(0,1,0)+3*(0,0,1)
  • 1+2*x+3*x^2 is a linear combination of 1+x+x^2,\, x+x^2,\, x^2 since 1+2*x+3*x^2=1*(1+x+x^2)+1*(x+x^2)+1*(x^2)
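
Finding the coefficients \alpha_i of a linear combination amounts to solving a linear system whose columns are the vectors x_i. A small NumPy sketch that recovers the coefficients of the second example, with the polynomials represented by their coefficient vectors (the variable names are only illustrative):

  import numpy as np

  # the polynomials 1+x+x^2, x+x^2, x^2 as coefficient vectors (columns of X)
  X = np.column_stack([[1.0, 1.0, 1.0],
                       [0.0, 1.0, 1.0],
                       [0.0, 0.0, 1.0]])
  # the polynomial 1 + 2x + 3x^2
  target = np.array([1.0, 2.0, 3.0])

  # solve X @ alpha = target for the coefficients alpha_i
  alpha = np.linalg.solve(X, target)
  print(alpha)   # [1. 1. 1.], as in the second example above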

Basis of a vector space[edit]

A set of vectors x_1, ..., x_n is called a basis of the vector space V, if

  1. for each vector x \in V there exist scalars \alpha_1,...,\alpha_n \in R such that x = \sum_i \alpha_i x_i, and
  2. there is no proper subset of \{ x_1, ..., x_n \} such that 1. is fulfilled.

Note, that a vector space can have several bases.

Examples:

  • Each vector (\alpha_1, \alpha_2, \alpha_3) \in IR^3 can be written as \alpha_1 * (1,0,0) + \alpha_2 * (0,1,0) + \alpha_3 * (0,0,1). Therefore, \{(1,0,0), (0,1,0), (0,0,1)\} is a basis of IR^3.
  • Each polynomial of degree at most p can be written as a linear combination of \{ 1, x, x^2, ..., x^p\}, so this set forms a basis of the vector space of such polynomials.

Actually, for both examples we would have to prove condition 2., but it is clear that it holds.

Dimension of a vector space[edit]

The dimension of a vector space is the number of vectors needed for a basis. A vector space has infinitely many bases, but the dimension is uniquely determined. Note that a vector space may have infinite dimension, e.g. the space of continuous functions.

Examples:

  • The dimension of IR^3 is three, the dimension of IR^p is p .
  • The dimension of the vector space of polynomials of degree at most p is p+1.

Scalar products[edit]

A mapping <.,.>: V\times V \rightarrow R is called a scalar product if the following conditions hold for all x, x_1, x_2, y, y_1, y_2 \in V and \alpha_1, \alpha_2 \in R:

  1. <\alpha_1 x_1 + \alpha_2 x_2, y> = \alpha_1 <x_1,y> + \alpha_2 <x_2, y>
  2. <x, \alpha_1 y_1 + \alpha_2 y_2> = \alpha_1 <x,y_1> + \alpha_2 <x, y_2>
  3. <x,y> = \overline{<y,x>} with \overline{\alpha + \imath \beta} = \alpha - \imath \beta
  4. <x,x> \geq 0 with <x,x> = 0 \Leftrightarrow x = \mathbb{O}

For real vector spaces the conjugation in 3. has no effect; for complex vector spaces condition 2. holds with conjugated coefficients \overline{\alpha_1}, \overline{\alpha_2}.

Examples:

  • The typical scalar product in IR^p is <x,y> = \sum_i x_i y_i.
  • <f,g> = \int_a^b f(x)*g(x) dx is a scalar product on the vector space of polynomials of degree at most p.
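
The polynomial scalar product can be evaluated numerically, for example with scipy.integrate.quad. Below is a minimal sketch for the interval [a,b] = [-1,1]; the helper name scalar_product is an assumption made here, not a library function.

  import numpy as np
  from scipy.integrate import quad

  def scalar_product(f, g, a=-1.0, b=1.0):
      """<f,g> = integral of f(x)*g(x) over [a,b]."""
      value, _ = quad(lambda x: f(x) * g(x), a, b)
      return value

  print(scalar_product(lambda x: x, lambda x: x**2))     # <x, x^2> = 0
  print(scalar_product(lambda x: 1.0, lambda x: 1.0))    # <1, 1> = 2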

Norm[edit]

A norm on a vector space V is a mapping \|.\|: V \rightarrow R such that the following conditions hold:

  1. \| x \| \geq 0 for all x \in V and \| x \| = 0 \Leftrightarrow x = \mathbb{O} (positive definiteness)
  2. \| \alpha x \| = \mid \alpha \mid \| x \| for all x \in V and all \alpha \in R
  3. \| x+y \| \leq \| x \| + \| y \| for all x, y \in V (triangle inequality)

Examples:

  • The L_q norm of a vector in IR^p is defined as \|x\|_q = \sqrt[q]{\sum_{i=1}^p |x_i|^q}.
  • Each scalar product generates a norm by \|x\| = \sqrt{<x,x>}; therefore \|f\| = \sqrt{\int_a^b f^2(x) dx} is a norm for the polynomials of degree at most p.
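
Both example norms are straightforward to compute numerically: the L_q norm is available via numpy.linalg.norm, and the norm induced by a scalar product is the square root of <f,f>. A short sketch (again using the interval [-1,1]):

  import numpy as np
  from scipy.integrate import quad

  x = np.array([3.0, -4.0])
  print(np.linalg.norm(x, ord=2))   # L_2 norm: 5.0
  print(np.linalg.norm(x, ord=1))   # L_1 norm: 7.0

  # norm of f(x) = x induced by <f,g> = integral of f(x)g(x) over [-1,1]
  value, _ = quad(lambda t: t * t, -1.0, 1.0)
  print(np.sqrt(value))             # sqrt(2/3), about 0.8165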

Orthogonality[edit]

Two vectors x and y are orthogonal to each other if <x,y> = 0. In IR^p the cosine of the angle between two vectors can be expressed as

\cos(\angle(x,y)) = \frac{<x,y>}{\|x\|\|y\|}.

If the angle between x and y is ninety degrees (orthogonal), then the cosine is zero and it follows that <x,y> = 0.

A set of vectors x_1, ..., x_p is called orthonormal, if

<x_i,x_j> = \begin{cases} 0 & \mbox{ if } i\neq j \\ 1 & \mbox{ if } i=j \end{cases}.

If we consider a basis e_1, ..., e_p of a vector space, then we would like to have an orthonormal basis. Why?

Since we have a basis, any two vectors x and y can be expressed as x = \alpha_1 e_1 + ... + \alpha_p e_p and y = \beta_1 e_1 + ... + \beta_p e_p. Therefore the scalar product of x and y reduces to

<x,y> = <\alpha_1 e_1 + ... + \alpha_p e_p, \beta_1 e_1 + ... + \beta_p e_p>
      = \sum_{i=1}^p \sum_{j=1}^p \alpha_i \beta_j <e_i, e_j>
      = \sum_{i=1}^p \alpha_i \beta_i <e_i, e_i>
      = \alpha_1 \beta_1 + ... + \alpha_p \beta_p.

Consequently, the computation of a scalar product is reduced to simple multiplication and addition if the coefficients are known. Remember that for our polynomials we would have to solve an integral!
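
This reduction is easy to verify numerically. The sketch below takes an orthonormal basis of IR^p (here obtained from NumPy's QR decomposition of a random matrix, which is just one convenient way to get one), builds x and y from random coefficients, and compares <x,y> with \alpha_1 \beta_1 + ... + \alpha_p \beta_p:

  import numpy as np

  rng = np.random.default_rng(0)
  p = 4

  # the columns of Q form an orthonormal basis e_1, ..., e_p of IR^p
  Q, _ = np.linalg.qr(rng.normal(size=(p, p)))

  alpha = rng.normal(size=p)
  beta = rng.normal(size=p)
  x = Q @ alpha    # x = alpha_1 e_1 + ... + alpha_p e_p
  y = Q @ beta

  # <x,y> equals alpha_1 beta_1 + ... + alpha_p beta_p
  print(np.dot(x, y), np.dot(alpha, beta))   # the two numbers agree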

Gram-Schmidt orthogonalization[edit]

Algorithm[edit]

The aim of the Gram-Schmidt orthogonalization is to find, for a set of vectors x_1, ..., x_p, an equivalent set of orthonormal vectors o_1, ..., o_p such that any vector which can be expressed as a linear combination of x_1, ..., x_p can also be expressed as a linear combination of o_1, ..., o_p:

1. Set b_1 = x_1 and o_1 = b_1 / \|b_1\|

2. For each i>1 set b_i = x_i - \sum_{j=1}^{i-1} \frac{<x_i, b_j>}{<b_j,b_j>} b_j and o_i = b_i / \|b_i\|. In each step the vector x_i is projected onto each previous b_j and the projections are subtracted from x_i.

[Figure: Vector2.jpg]
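
The algorithm translates directly into code. The following is a minimal NumPy sketch of the classical Gram-Schmidt procedure for vectors in IR^p (the function name gram_schmidt is only illustrative); the same steps apply to any vector space once a scalar product is supplied.

  import numpy as np

  def gram_schmidt(X):
      """Classical Gram-Schmidt: X is a list of linearly independent vectors;
      returns orthonormal vectors o_1, ..., o_p spanning the same space."""
      B = []   # the vectors b_i
      O = []   # the normalized vectors o_i
      for x in X:
          b = np.array(x, dtype=float)
          for b_j in B:
              b = b - (np.dot(x, b_j) / np.dot(b_j, b_j)) * b_j
          B.append(b)
          O.append(b / np.linalg.norm(b))
      return O

  o1, o2, o3 = gram_schmidt([(1, 1, 0), (1, 0, 1), (0, 1, 1)])
  print(np.dot(o1, o2), np.dot(o1, o3), np.dot(o2, o3))   # all (close to) zero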

Example[edit]

Consider the polynomials of degree at most two on the interval [-1,1] with the scalar product <f,g> = \int_{-1}^1 f(x) g(x) dx and the norm \|f\| = \sqrt{<f,f>}. We know that f_1(x)=1, f_2(x)=x and f_3(x)=x^2 form a basis of this vector space. Let us now construct an orthonormal basis:

Step 1a: b_1(x) = f_1(x) = 1

Step 1b: o_1(x) = \frac{b_1(x)}{\|b_1(x)\|} = \frac{1}{\sqrt{<b_1(x), b_1(x)>}} = \frac{1}{\sqrt{\int_{-1}^1 1 dx}} = \frac{1}{\sqrt{2}}

Step 2a: b_2(x) = f_2(x) - \frac{<f_2(x),b_1(x)>}{<b_1(x),b_1(x)>} b_1(x) = x - \frac{\int_{-1}^1 x\ 1 dx}{2} 1 = x - \frac{0}{2} 1 = x

Step 2b: o_2(x) = \frac{b_2(x)}{\|b_2(x)\|} = \frac{x}{\sqrt{<b_2(x), b_2(x)>}} = \frac{x}{\sqrt{\int_{-1}^1 x^2 dx}} = \frac{x}{\sqrt{2/3}} = x\sqrt{3/2}

Step 3a: b_3(x) = f_3(x) - \frac{<f_3(x),b_1(x)>}{<b_1(x),b_1(x)>} b_1(x) - \frac{<f_3(x),b_2(x)>}{<b_2(x),b_2(x)>} b_2(x) = x^2 - \frac{\int_{-1}^1 x^2 1\  dx}{2} 1 - \frac{\int_{-1}^1 x^2 x\ dx}{2/3} x = x^2 - \frac{2/3}{2} 1 - \frac{0}{2/3} x = x^2 - 1/3

Step 3b: o_3(x) = \frac{b_3(x)}{\|b_3(x)\|} = \frac{x^2-1/3}{\sqrt{<b_3(x), b_3(x)>}} = \frac{x^2-1/3}{\sqrt{\int_{-1}^1 (x^2-1/3)^2 dx}} = \frac{x^2-1/3}{\sqrt{\int_{-1}^1 x^4 - 2/3 x^2 + 1/9\ dx}} = \frac{x^2-1/3}{\sqrt{8/45}} = \sqrt{\frac{5}{8}} (3x^2-1)

It can be proven that 1/\sqrt{2}, x\sqrt{3/2} and \sqrt{\frac{5}{8}} (3x^2-1) form an orthonormal basis with the above scalar product and norm.
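
This claim can at least be checked numerically (a quick check, not a proof), for example with scipy.integrate.quad:

  from itertools import combinations
  import numpy as np
  from scipy.integrate import quad

  o1 = lambda x: 1.0 / np.sqrt(2.0)
  o2 = lambda x: x * np.sqrt(3.0 / 2.0)
  o3 = lambda x: np.sqrt(5.0 / 8.0) * (3.0 * x**2 - 1.0)

  for f, g in combinations([o1, o2, o3], 2):
      print(quad(lambda x: f(x) * g(x), -1, 1)[0])   # all (close to) 0
  for f in (o1, o2, o3):
      print(quad(lambda x: f(x) * f(x), -1, 1)[0])   # all (close to) 1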

Numerical instability[edit]

Consider the vectors x_1 = (1,\epsilon, 0, 0), x_2 = (1,0,\epsilon,0) and x_3 = (1,0,0,\epsilon). Assume that \epsilon is so small that 1+\epsilon^2 = 1 holds on a computer (see http://en.wikipedia.org/wiki/Machine_epsilon). Let us compute an orthonormal basis for these vectors in IR^4 with the standard scalar product <x,y> = x_1y_1+x_2y_2+x_3y_3+x_4y_4 and the norm \|x\| = \sqrt{x_1^2+x_2^2+x_3^2+x_4^2}.

Step 1a. b_1 = x_1 = (1,\epsilon,0,0)

Step 1b. o_1 = \frac{b_1}{\|b_1\|} = \frac{b_1}{\sqrt{1+\epsilon^2}} = b_1 with 1+\epsilon^2=1

Step 2a. b_2 = x_2 - \frac{<x_2,b_1>}{<b_1,b_1>} b_1 = (1,0,\epsilon,0) - \frac{1}{1+\epsilon^2} (1,\epsilon,0,0) = (0,-\epsilon,\epsilon,0)

Step 2b. o_2 = \frac{b_2}{\|b_2\|} = \frac{b_2}{\sqrt{2\epsilon^2}} = (0,-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0)

Step 3a. b_3 = x_3 -  \frac{<x_3,b_1>}{<b_1,b_1>} b_1 - \frac{<x_3,b_2>}{<b_2,b_2>} b_2 = (1,0,0,\epsilon) - \frac{1}{1+\epsilon^2} (1,\epsilon,0,0) - \frac{0}{2\epsilon^2} (0,-\epsilon,\epsilon,0) = (0, -\epsilon, 0,\epsilon)

Step 3b. o_3 = \frac{b_3}{\|b_3\|} = \frac{b_3}{\sqrt{2\epsilon^2}} = (0,-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}})

It is obvious that for the vectors

  • o_1 = (1, \epsilon, 0, 0)

  • o_2 = (0,-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0)

  • o_3 = (0,-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}})

the scalar product <o_2,o_3> = 1/2 \neq 0. The scalar products of the other pairs are also non-zero, but they contain a factor \epsilon and are therefore close to zero.
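
The effect can be reproduced in double precision by choosing \epsilon small enough that 1+\epsilon^2 rounds to 1, e.g. \epsilon = 10^{-8}. The following sketch simply repeats the classical procedure numerically:

  import numpy as np

  eps = 1e-8                     # 1 + eps**2 == 1 in double precision
  X = [np.array([1.0, eps, 0.0, 0.0]),
       np.array([1.0, 0.0, eps, 0.0]),
       np.array([1.0, 0.0, 0.0, eps])]

  B, O = [], []                  # classical Gram-Schmidt, as above
  for x in X:
      b = x.copy()
      for b_j in B:
          b = b - (np.dot(x, b_j) / np.dot(b_j, b_j)) * b_j
      B.append(b)
      O.append(b / np.linalg.norm(b))

  print(np.dot(O[0], O[1]))      # about -7e-9: almost orthogonal
  print(np.dot(O[1], O[2]))      # about 0.5: far from orthogonal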

Modified Gram-Schmidt[edit]

To avoid this problem, a modified Gram-Schmidt algorithm is used:

  1. Set b_i = x_i for all i
  2. For each i from 1 to n compute
    1. o_i = \frac{b_i}{\|b_i\|}
    2. For each j from i+1 to n compute b_j = b_j - <b_j, o_i> o_i

The difference is that, as soon as a new o_i has been computed, its component is immediately subtracted from all remaining vectors b_j. In this way the (possibly inaccurately computed) vector is applied consistently to all remaining vectors, instead of computing each b_i separately from the original x_i.
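
A minimal NumPy sketch of the modified algorithm (the function name is again only illustrative), applied to the vectors from the instability example:

  import numpy as np

  def modified_gram_schmidt(X):
      """Modified Gram-Schmidt: returns orthonormal vectors o_1, ..., o_n."""
      B = [np.array(x, dtype=float) for x in X]
      O = []
      for i in range(len(B)):
          o_i = B[i] / np.linalg.norm(B[i])
          O.append(o_i)
          # immediately remove the o_i component from all remaining vectors
          for j in range(i + 1, len(B)):
              B[j] = B[j] - np.dot(B[j], o_i) * o_i
      return O

  eps = 1e-8
  O = modified_gram_schmidt([(1, eps, 0, 0), (1, 0, eps, 0), (1, 0, 0, eps)])
  print(np.dot(O[1], O[2]))   # now (close to) zero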

Example (recomputed)[edit]

Step 1. b_1 = (1,\epsilon,0,0), b_2 = (1,0,\epsilon,0), b_3 = (1,0,0,\epsilon)

Step 2a. o_1 = \frac{b_1}{\|b_1\|} = \frac{b_1}{\sqrt{1+\epsilon^2}} = b_1 = (1,\epsilon,0,0) with 1+\epsilon^2=1

Step 2b. b_2 = b_2 - <b_2, o_1> o_1 = (1,0,\epsilon,0) - (1,\epsilon,0,0) = (0, -\epsilon, \epsilon, 0)

Step 2c. b_3 = b_3 - <b_3, o_1> o_1 = (1,0,0,\epsilon) - (1,\epsilon,0,0) = (0, -\epsilon, 0, \epsilon)

Step 3a. o_2 = \frac{b_2}{\|b_2\|} = \frac{b_2}{\sqrt{2\epsilon^2}} =  (0,-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0)

Step 3b. b_3 = b_3 - <b_3, o_2> o_2 = (0, -\epsilon, 0, \epsilon) - \frac{\epsilon}{\sqrt{2}}  (0,-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0) = (0, -\epsilon/2, -\epsilon/2, \epsilon)

Step 4a. o_3 = \frac{b_3}{\|b_3\|} = \frac{b_3}{\sqrt{3/2\epsilon^2}} =  (0,-\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{6}}, \frac{2}{\sqrt{6}})

We can easily verify that <o_2,o_3> = 0.


Application[edit]

Exploratory Projection Pursuit[edit]

In the analysis of high-dimensional data we usually analyze projections of the data. The approach rests on the Cramér-Wold theorem, which states that a multivariate distribution is completely determined by all of its one-dimensional projections. Another theorem states that most (one-dimensional) projections of multivariate data look approximately normal, even if the multivariate distribution of the data is highly non-normal.

Therefore, in Exploratory Projection Pursuit we judge the interestingness of a projection by comparison with a (standard) normal distribution. If we assume that the one-dimensional projected data x are standard normally distributed, then after the transformation z = 2\Phi(x)-1, with \Phi the cumulative distribution function of the standard normal distribution, z is uniformly distributed on the interval [-1;1].

Thus the interestingness can be measured by \int_{-1}^1 (f(z)-1/2)^2 dz with f(z) a density estimated from the data. If the density f(z) is equal to 1/2 on the interval [-1;1], then the integral is zero and we conclude that our projected data are normally distributed. A value larger than zero indicates a deviation from the normal distribution of the projected data and, hopefully, an interesting distribution.
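
A short simulation illustrates the transformation: if x is standard normal, then z = 2\Phi(x) - 1 is uniform on [-1,1], so a density estimate of z should be close to the constant 1/2. A sketch using scipy.stats.norm (sample size and bin count are arbitrary choices):

  import numpy as np
  from scipy.stats import norm

  rng = np.random.default_rng(1)
  x = rng.normal(size=100_000)     # projected data, here exactly standard normal
  z = 2.0 * norm.cdf(x) - 1.0      # should be uniform on [-1, 1]

  # crude density estimate of z on [-1, 1]: all histogram heights should be near 1/2
  heights, _ = np.histogram(z, bins=20, range=(-1, 1), density=True)
  print(heights.round(2))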

Expansion with orthonormal polynomials[edit]

Let L_i(z) be a set of orthonormal polynomials with respect to the scalar product <f,g> = \int_{-1}^1 f(z)g(z) dz and the norm \|f\| = \sqrt{<f,f>}. What can we derive about a density f(z) on the interval [-1;1]?

If f(z)=\sum_{i=0}^I a_i L_i(z) for some maximal degree I then it holds

\int_{-1}^1 f(z) L_j(z) dz = \int_{-1}^1 \sum_{i=0}^I a_i L_i(z) L_j(z) dz = a_j \int_{-1}^1 L_j(z) L_j(z) dz = a_j

We can also write \int_{-1}^1 f(z) L_j(z) dz = E(L_j(z)) or empirically we get an estimator \hat{a}_j = \frac{1}{n} \sum_{k=1}^n L_j(z_k).

We also expand the constant term 1/2 = \sum_{i=0}^I b_i L_i(z) and get for our integral

\int_{-1}^1 (f(z)-1/2)^2 dz = \int_{-1}^1 \left(\sum_{i=0}^I (a_i-b_i) L_i(z)\right)^2 dz  = \sum_{i,j=0}^I \int_{-1}^1 (a_i-b_i)(a_j-b_j) L_i(z) L_j(z) dz = \sum_{i=0}^I (a_i-b_i)^2.

So using an orthonormal function set allows us to reduce the integral to a sum over coefficients which can be estimated from the data by plugging \hat{a}_j into the formula above. The coefficients b_i can be precomputed in advance.
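
Putting the pieces together, the index can be estimated from a sample z_1, ..., z_n by computing \hat{a}_j = \frac{1}{n}\sum_k L_j(z_k) and plugging the estimates into \sum_j (\hat{a}_j - b_j)^2. The sketch below uses scipy.special.eval_legendre for the classical Legendre polynomials together with the normalization factor \sqrt{(2j+1)/2} (a standard property of Legendre polynomials, assumed here rather than derived in the text); the function names are illustrative only.

  import numpy as np
  from scipy.special import eval_legendre
  from scipy.stats import norm

  def normalized_legendre(j, z):
      # L_j = sqrt((2j+1)/2) * P_j, so that the L_j are orthonormal on [-1, 1]
      return np.sqrt((2 * j + 1) / 2.0) * eval_legendre(j, z)

  def pursuit_index(z, max_degree=6):
      """Estimate sum_j (a_j - b_j)^2 from a sample z in [-1, 1]."""
      index = 0.0
      for j in range(max_degree + 1):
          a_hat = np.mean(normalized_legendre(j, z))       # estimator of a_j
          b_j = 1.0 / np.sqrt(2.0) if j == 0 else 0.0      # expansion of the constant 1/2
          index += (a_hat - b_j) ** 2
      return index

  rng = np.random.default_rng(2)
  z_normal = 2.0 * norm.cdf(rng.normal(size=10_000)) - 1.0              # normal projection
  z_skewed = 2.0 * norm.cdf(rng.exponential(size=10_000) - 1.0) - 1.0   # non-normal projection
  print(pursuit_index(z_normal))   # close to zero
  print(pursuit_index(z_skewed))   # noticeably larger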

Normalized Legendre polynomials[edit]

The only problem left is to find the set of orthonormal polynomials L_i(z) up to degree I. We know that 1, x, x^2, ..., x^I form a basis of this space, and we apply the Gram-Schmidt orthogonalization to find the orthonormal polynomials. This was started in the first example.

The resulting polynomials are called normalized Legendre polynomials. Up to a scaling factor the normalized Legendre polynomials are identical to the (classical) Legendre polynomials. The Legendre polynomials have a recursive expression of the form

L_i(z) = \frac{(2i-1) z L_{i-1}(z) - (i-1) L_{i-2}(z)}{i}

So computing our integral reduces to computing L_0(z_k) and L_1(z_k) and using the recursive relationship to compute the \hat{a}_j's. Please note that the recursion can be numerically unstable!
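
A small sketch of the recursion, with the normalization \sqrt{(2i+1)/2} applied afterwards (the helper names are illustrative):

  import numpy as np

  def legendre_recursive(i, z):
      """Legendre polynomial L_i(z) via the three-term recursion."""
      z = np.asarray(z, dtype=float)
      if i == 0:
          return np.ones_like(z)
      p_prev, p = np.ones_like(z), z
      for k in range(2, i + 1):
          p_prev, p = p, ((2 * k - 1) * z * p - (k - 1) * p_prev) / k
      return p

  def normalized_legendre(i, z):
      return np.sqrt((2 * i + 1) / 2.0) * legendre_recursive(i, z)

  z = np.linspace(-1, 1, 5)
  print(legendre_recursive(2, z))    # (3 z^2 - 1) / 2
  print(normalized_legendre(2, z))   # sqrt(5/8) * (3 z^2 - 1), as derived above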
