Introduction to Mathematical Physics/Some mathematical problems and their solution/Linear evolution problems, spectral method

From Wikibooks, open books for an open world

Spectral point of view

The spectral method is used to solve linear evolution problems of the type of problem probevollin. Quantum mechanics (see chapters chapmq and chapproncorps) supplies beautiful spectral problems via the Schrödinger equation. The eigenvalues of the linear operator considered (the Hamiltonian) are interpreted as the energies associated with states (the eigenfunctions of the Hamiltonian). Electromagnetism also leads to spectral problems (cavity modes).

The spectral method consists in first defining the space on which the operator L of problem probevollin acts, and in providing it with a Hilbert space structure. Functions u_k that verify:

Lu_k = \lambda_k u_k

are then sought. Once the eigenfunctions u_k(x) are found, the problem is reduced to the integration of an ordinary differential equation system that is diagonal.
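As a minimal numerical sketch (not from the text), the reduction to a diagonal ODE system can be illustrated for du/dt = Lu: expanding u on the eigenvectors of L, each coordinate evolves independently. The matrix L and the initial condition below are illustrative choices; L is taken symmetric so that an orthonormal eigenbasis exists.

```python
import numpy as np

# Illustrative symmetric operator and initial condition
L = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
u0 = np.array([1.0, 0.0])

lam, S = np.linalg.eigh(L)      # L = S diag(lam) S^T, S orthogonal
c0 = S.T @ u0                   # coordinates of u0 in the eigenbasis

def u(t):
    # each mode evolves independently: c_k(t) = c_k(0) * exp(lam_k * t)
    return S @ (np.exp(lam * t) * c0)

print(u(0.0))                   # recovers the initial condition u0
```

The point of the method is visible in `u(t)`: no coupled system is integrated, only n scalar exponentials are evaluated.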

The following problem is a particular case of a linear evolution problem (one speaks of a linear response problem):


Find \phi\in V such that:

\frac{d\phi}{dt} = H_0\phi + H(t)\phi

where H_0 is a linear diagonalisable operator and H(t) is a linear operator that is "small" with respect to H_0.

This problem can be tackled using a spectral method. Section secreplinmq presents an example of linear response in quantum mechanics.

Some spectral analysis theorems

In this section, some results on the spectral analysis of a linear operator L are presented. Proofs are given for the case where L is a linear operator acting from a finite-dimensional space E to itself. The infinite-dimensional case is treated in specialized books (see for instance ([ma:equad:Dautray5])). Let L be an operator acting on E. The spectral problem associated with L is:


Find nonzero vectors u\in E (called eigenvectors) and numbers \lambda (called eigenvalues) such that:

Lu=\lambda u

Here is a fundamental theorem:


The following conditions are equivalent:

  1. \exists u \neq 0 such that Lu=\lambda u
  2. the matrix L-\lambda I is singular
  3. \det(L-\lambda I)=0
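The equivalence can be checked numerically; a small sketch (the matrix is an illustrative choice), verifying that the determinant of L - \lambda I vanishes at each eigenvalue:

```python
import numpy as np

# Illustrative matrix; upper triangular, so its eigenvalues are 2 and 3
L = np.array([[2.0, 1.0],
              [0.0, 3.0]])

for lam in np.linalg.eigvals(L):
    M = L - lam * np.eye(2)
    # M is singular at an eigenvalue: det(M) = 0 up to round-off
    print(lam, np.linalg.det(M))
```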

A matrix is said to be diagonalisable if there exists a basis in which it has diagonal form ([ma:algeb:Strang76]).


If a square matrix L of dimension n has n linearly independent eigenvectors, then L is diagonalisable. Moreover, if those vectors are chosen as the columns of a matrix S, then:

\Lambda = S^{-1} L S \mbox{  with  } \Lambda \mbox{  diagonal  }


Let us write the vectors u_i as the columns of a matrix S and compute LS:

LS = L \left( \begin{array}{cccc} u_1 & u_2 & \ldots & u_n \end{array} \right)
   = \left( \begin{array}{cccc} \lambda_1 u_1 & \lambda_2 u_2 & \ldots & \lambda_n u_n \end{array} \right)
   = \left( \begin{array}{cccc} u_1 & u_2 & \ldots & u_n \end{array} \right)
     \left( \begin{array}{cccc} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{array} \right)
   = S\Lambda


The matrix S is invertible since the vectors u_i are assumed to be linearly independent; thus:

\Lambda = S^{-1} L S
Remark: If a matrix L has n distinct eigenvalues, then its eigenvectors are linearly independent.
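A minimal numerical sketch of this theorem (the matrix is an illustrative choice with distinct eigenvalues): taking the eigenvectors of L as the columns of S, the similar matrix S^{-1} L S comes out diagonal.

```python
import numpy as np

# Illustrative matrix with distinct eigenvalues (5 and 2)
L = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, S = np.linalg.eig(L)            # columns of S are eigenvectors
Lambda = np.linalg.inv(S) @ L @ S    # change of basis to the eigenbasis

print(np.round(Lambda, 10))          # off-diagonal entries vanish
```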

Let us assume that the space E is a Hilbert space equipped with the scalar product \langle \cdot | \cdot \rangle.


The adjoint L^* of an operator L is defined by:

\forall u, v \quad \langle L^*u|v\rangle = \langle u|Lv\rangle


A self-adjoint operator is an operator L such that L=L^*.


For each Hermitian operator L, there exists at least one basis of orthonormal eigenvectors. L is diagonal in this basis and the diagonal elements are its eigenvalues.


Consider a space E_n of dimension n. Let |u_1\rangle be an eigenvector associated with the eigenvalue \lambda_1 of L. Consider the basis of the space obtained by completing |u_1\rangle with any basis of its orthogonal complement E^\perp_{n-1}. In this basis:

L = \left( \begin{array}{cccc}
\lambda_1 & & v & \\
0 & & & \\
\vdots & & B & \\
0 & & & \\
\end{array} \right)

The first column of L is the image of u_1. Now, L is Hermitian, thus v = 0:

L = \left( \begin{array}{cccc}
\lambda_1 & 0 & \ldots & 0 \\
0 & & & \\
\vdots & & B & \\
0 & & & \\
\end{array} \right)

and the block B is itself Hermitian, of dimension n-1. By induction, the property is proved.
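This theorem is also easy to check numerically; a sketch with an illustrative 2x2 Hermitian matrix, using `numpy.linalg.eigh`, which returns precisely an orthonormal eigenbasis:

```python
import numpy as np

# Illustrative Hermitian matrix (equal to its conjugate transpose)
A = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

lam, U = np.linalg.eigh(A)           # eigenvalues (real), eigenvectors
print(lam)                           # real eigenvalues: here 1 and 4
print(np.round(U.conj().T @ U, 10))  # identity: columns are orthonormal
```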


The eigenvalues of a Hermitian operator L are real.


Consider the spectral equation:

L|u\rangle = \lambda|u\rangle

Multiplying it by \langle u|, one obtains:

\langle u|Lu\rangle = \lambda \langle u|u\rangle

The complex conjugate of this equation is:

\langle u|L^*u\rangle = \lambda^* \langle u|u\rangle

Since \langle u|u\rangle is real and L^*=L, one has \lambda=\lambda^*.


Two eigenvectors |u_1\rangle and |u_2\rangle associated with two distinct eigenvalues \lambda_1 and \lambda_2 of a Hermitian operator are orthogonal.


By definition:

L|u_1\rangle = \lambda_1|u_1\rangle

L|u_2\rangle = \lambda_2|u_2\rangle

Taking scalar products:

\langle u_2|Lu_1\rangle = \lambda_1 \langle u_2|u_1\rangle

\langle u_1|Lu_2\rangle = \lambda_2 \langle u_1|u_2\rangle

Conjugating the second equation (L being Hermitian and \lambda_2 real) and subtracting it from the first implies:

0 = (\lambda_1-\lambda_2) \langle u_2|u_1\rangle

Since \lambda_1 \neq \lambda_2, this implies \langle u_2|u_1\rangle = 0.

Let us now present some methods and tips for solving spectral problems.


Solving spectral problems

The fundamental step in solving linear evolution problems by the spectral method is the spectral analysis of the linear operator involved. It can be done numerically, but two cases are favourable for doing the spectral analysis by hand: the case where there are symmetries, and the case where a perturbative approach is possible.

Using symmetries

The use of symmetries relies on the following fundamental theorem:


If an operator L commutes with an operator T, then the eigenvectors of T associated with non-degenerate eigenvalues are also eigenvectors of L.

The proof is given in appendix chapgroupes. Applications of rotation invariance are presented in section secpotcent. Bloch's theorem deals with translation invariance (see theorem theobloch in section sectheobloch).
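The commutation theorem can be illustrated numerically (the matrices are illustrative choices; L is built as a polynomial in T, which guarantees [L, T] = 0):

```python
import numpy as np

# Illustrative symmetric T with distinct (non-degenerate) eigenvalues 3 and -1
T = np.array([[1.0, 2.0],
              [2.0, 1.0]])
L = T @ T + 2.0 * T              # any polynomial in T commutes with T
assert np.allclose(L @ T, T @ L)

_, S = np.linalg.eigh(T)         # eigenvectors of T
print(np.round(S.T @ L @ S, 10)) # diagonal: they are eigenvectors of L too
```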

Perturbative approximation

A perturbative approach can be considered each time the operator U to diagonalize can be written as the sum of an operator U_0 whose spectral analysis is known and of an operator U_1 that is small with respect to U_0. The problem to be solved is then the following:


U|\phi\rangle = \lambda|\phi\rangle

Introducing the parameter \epsilon, it is assumed that U can be expanded as:

U=U_{0}+\epsilon U_{1}+\epsilon ^{2}U_2+...

Let us admit[1] that the eigenvectors and eigenvalues can be expanded in powers of \epsilon. For the i^{th} eigenvector:

|\phi^{i}\rangle = |\phi^{i}_{0}\rangle + \epsilon|\phi^{i}_{1}\rangle + \epsilon^{2}|\phi^{i}_{2}\rangle + ...

and for the associated eigenvalue:

\lambda^{i} = \lambda^{i}_{0} + \epsilon\lambda^{i}_{1} + \epsilon^{2}\lambda^{i}_{2} + ...

Equation (bod) defines each eigenvector only up to a factor: if |\phi^{i}\rangle is a solution, then a\,e^{i\theta}|\phi^{i}\rangle is also a solution. Let us fix the norm of the eigenvectors to 1. The phase can also be chosen: we impose that the phase of the vector |\phi^{i}\rangle is that of the vector |\phi^{i}_0\rangle. The approximated vectors |\phi^{i}\rangle and |\phi^{j}\rangle, for i \neq j, are required to be exactly orthogonal:

\langle \phi^{i}|\phi^{j}\rangle = 0

Equating the coefficients of \epsilon^k, one gets:

\langle \phi^{i}_{0}|\phi^{j}_{k}\rangle + \langle \phi^{i}_{1}|\phi^{j}_{k-1}\rangle + \ldots + \langle \phi^{i}_{k}|\phi^{j}_{0}\rangle = 0

The approximated eigenvectors are required to be exactly normed, with \langle \phi^{i}_{0}|\phi^{i}_{j}\rangle real. At order zero:

\langle \phi^{i}_{0}|\phi^{i}_{0}\rangle = 1

Equating the coefficients of \epsilon^k with k \geq 1 in the product \langle \phi^{i}|\phi^{i}\rangle = 1, one gets:

\langle \phi^{i}_{0}|\phi^{i}_{k}\rangle + \langle \phi^{i}_{1}|\phi^{i}_{k-1}\rangle + \ldots + \langle \phi^{i}_{k}|\phi^{i}_{0}\rangle = 0.

Substituting these expansions into the spectral equation (bod) and equating the coefficients of successive powers of \epsilon yields:

U_{0}|\phi^{i}_{j}\rangle + U_{1}|\phi^{i}_{j-1}\rangle + ... + U_{j}|\phi^{i}_{0}\rangle = \lambda_{0}^{i}|\phi^{i}_{j}\rangle + \lambda_{1}^{i}|\phi^{i}_{j-1}\rangle + ... + \lambda_{j}^{i}|\phi^{i}_{0}\rangle

Projecting the previous equations onto the zeroth-order eigenvectors and using the conditions eqortper, the successive corrections to the eigenvectors and eigenvalues are obtained.


Variational approximation

In the same way that the problem:


Find u such that:

  1. Lu=f, u\in E, x\in\Omega

  2. u satisfies boundary conditions on the border \partial \Omega of \Omega.

can be solved by a variational method, the spectral problem:


Find u and \lambda such that:

  1. Lu-\lambda u=f, u\in E, x\in\Omega

  2. u satisfies boundary conditions on the border \partial \Omega of \Omega.

can also be solved by variational methods. In the case where L is self-adjoint and f is zero (the quantum mechanics case), the problem can be reduced to a minimization problem. In particular, one can show that:


The eigenvector \phi with lowest energy E_0 of a self-adjoint operator H is the solution of the problem: Find \phi normed such that:

J(\phi)=\min_{\psi\in V}J(\psi)

where J(\psi)=\langle\psi|H\psi\rangle.

The eigenvalue associated with \phi is J(\phi)=E_0.

A demonstration is given in ([ph:mecaq:Cohen73], [ph:mecaq:Pauling60]). In practice, a family of vectors v_i of V is chosen and one hopes that the eigenvector \phi is well approximated by some linear combination of those vectors:

\phi=\sum_i c_iv_i

Solving the minimization problem is then equivalent to finding the coefficients c_i. In chapter chapproncorps, we will see several examples of good choices of families v_i.
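This scheme (often called Rayleigh-Ritz) can be sketched numerically; everything below is an illustrative choice, not taken from the text. Minimising J(\psi) over the span of the v_i with \langle\psi|\psi\rangle = 1 reduces to the small generalised eigenproblem A c = E B c, with A_{ij} = \langle v_i|Hv_j\rangle and B_{ij} = \langle v_i|v_j\rangle. Here H is a discretised -d^2/dx^2 on [0,1] and the v_i are a few sine modes.

```python
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
# discretised H = -d^2/dx^2, a standard self-adjoint model operator
H = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# trial family v_i: three sine modes vanishing at the boundary
V = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(3)])

A = V.T @ H @ V                  # projected operator <v_i|H v_j>
B = V.T @ V                      # overlap matrix <v_i|v_j>
E = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

print(E[0])                      # close to pi^2 ~ 9.87, the continuum value
```

The minimum E[0] is the variational estimate of E_0; by construction it always lies above the true lowest eigenvalue of H.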

Remark: In variational calculations, as well as in perturbative calculations, symmetries should be used whenever they occur to simplify the solving of spectral problems (see chapter chapproncorps).

  1. This is not obvious from a mathematical point of view (see [ma:equad:Kato66])