Introduction to Mathematical Physics/Some mathematical problems and their solution/Boundary problems, variational methods


Variational formulation

secvafor

Let us consider a boundary problem:

Problem:

Find u\in U such that:

eqLufva

Lu(x)=f(x) \forall x \in\Omega

u(x)=g(x) \forall x\in\partial\Omega

Let us suppose that there is a unique solution u of this problem. For a sufficiently large functional space V, the previous problem may be equivalent to the following:

Problem:


Find u\in U such that:

eqvari

\forall v\in V, <v|Lu>=<v|f>

This is the variational form of the boundary problem. To obtain equality eqvari, we have simply taken the scalar product of equation eqLufva with a "test function" <v|. A "weak form" of this problem can be obtained using Green-type formulas: the solution space is taken larger and, as a counterpart, the test function space is taken smaller. Let us illustrate these ideas on a simple example:

Example:

Find u in U=C^2 such that:

-\Delta u=f, \forall x\in \Omega

u=g, \forall x\in\partial\Omega

The variational formulation is: Find u in U=C^2 such that:


\forall v\in V_1, -\int v\Delta u dx=\int vf dx

Using the Green formula (see appendix secappendgreeneq) and the boundary conditions, we can reformulate the problem as: Find u\in V such that:


\forall v\in V \int \partial_i v\partial_i u dx=\int vf dx
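
Indeed, since the test functions v vanish on \partial\Omega, the Green formula gives

-\int_\Omega v\Delta u dx=\int_\Omega \partial_i v\partial_i u dx-\int_{\partial\Omega} v\partial_n u d\sigma=\int_\Omega \partial_i v\partial_i u dx.

Note that this weak form involves only first derivatives of u, which is what allows the solution space to be enlarged beyond C^2.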

One can show that this problem has a unique solution in V=H^1_0(\Omega), where


H^1(\Omega)=\{v\in L^2(\Omega);\partial_iv\in L^2(\Omega), 1\leq i\leq n\}

is the Sobolev space of order 1 on \Omega, and H_0^1(\Omega) is the closure of {\mathcal D}(\Omega) (the space of infinitely differentiable functions with compact support in \Omega) in H^1(\Omega).

It may happen, as in the previous example, that the solution function space and the test function space are the same. This is the case for most of the linear operators L encountered in physics. In this case, one can associate to L a bilinear form a. The (weak) variational problem is thus:

provari2

Problem:

Find u\in V such that


\forall v\in V, a(u,v)=L(v)

Example:

In this previous example, the bilinear form is:


a(u,v)=\int_\Omega \partial_i u(x)\partial_i v(x) dx.


There exist theorems (the Lax-Milgram theorem, for instance) that prove the existence and uniqueness of the solution of the previous problem provari2, under certain conditions on the bilinear form a(u,v).
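
For reference, the Lax-Milgram theorem states: if V is a Hilbert space, L a continuous linear form on V, and a a bilinear form that is continuous and coercive, i.e. such that there exist constants C>0 and \alpha>0 with

|a(u,v)|\leq C\|u\|\|v\|, \forall u,v\in V

a(v,v)\geq\alpha\|v\|^2, \forall v\in V,

then problem provari2 admits a unique solution u\in V. In the example above, coercivity on H^1_0(\Omega) follows from the Poincaré inequality.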

Finite elements approximation

secvarinum


Let us consider the problem provari2 :

Problem:

Find u\in V such that:


\forall v\in V, a(u,v)=L(v)

The approximation method consists in choosing a finite-dimensional subspace V_h of V; the problem to solve becomes:

provari3

Problem: Find u_h\in V_h such that

\forall v_h\in V_h, a(u_h,v_h)=L(v_h)


A basis \{v_h^i\} of V_h is chosen so that its elements satisfy the boundary conditions. The problem reduces to finding the components of u_h:

u_h=\sum_i<v_h^i|u_h>v_h^i

If a is a bilinear form

a(u_h,v_h^i)=\sum_j<v_h^j|u_h>a(v_h^j,v_h^i),

and finding the <v_h^i|u_h>'s is equivalent to solving a linear system (often close to a diagonal system) that can be solved by classical algorithms ([ma:equad:Ciarlet88], [ma:compu:Press92]), either direct methods (Gauss, Cholesky) or iterative ones (Jacobi, Gauss-Seidel, relaxation). Note that if the vectors of the basis of V_h are eigenvectors of L, then solving the system is immediate (diagonal system); this is the basis of the spectral methods for solving evolution problems. If the problem is not linear, we have to solve a nonlinear system. Let us give an example of a basis \{v_h^i\}.

Example:

When V=L^2([0,1]), an example of V_h that can be chosen ([ma:equad:Ciarlet88]) is the set of piecewise-linear continuous functions that are zero at x=0 and x=1 (for Dirichlet boundary conditions). More precisely, L^2([0,1]) can be approximated by the space of continuous functions that are linear on each interval [i/n,(i+1)/n], i going from 0 to n-1, and zero at x=0 and x=1. A basis of this space is given by the functions v_h^i, i\in\{1,\dots,n-1\}, defined by:

v_h^i(x)=nx-(i-1)\mbox{ in  } [(i-1)/n,i/n]

v_h^i(x)=(i+1)-nx\mbox{ in  } [i/n,(i+1)/n]

and zero elsewhere (see figure figapproxesp).

figapproxesp

Space L^2([0,1]) can be approximated by piecewise-linear continuous functions.
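
As an illustration, here is a minimal numerical sketch (in Python with numpy; it is not part of the original text, and the right-hand side f is an arbitrary choice) that assembles and solves the linear system of problem provari3 for -u''=f on [0,1] with the hat-function basis above. With this basis, a(v_h^j,v_h^i)=\int (v_h^j)'(v_h^i)' dx yields the classical tridiagonal stiffness matrix.

import numpy as np

n = 50                             # number of intervals
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)   # nodes x_i = i/n

def f(x):
    return np.pi**2 * np.sin(np.pi * x)   # chosen so that u(x) = sin(pi x)

# Stiffness matrix A_ij = a(v_h^j, v_h^i): 2/h on the diagonal, -1/h off it,
# for the interior hat functions i, j = 1, ..., n-1.
A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

# Right-hand side b_i = L(v_h^i) = int f v_h^i dx, approximated by f(x_i) h
# (the integral of each hat function is h).
b = h * f(x[1:n])

u = np.linalg.solve(A, b)          # components <v_h^i|u_h> of u_h

print(np.max(np.abs(u - np.sin(np.pi * x[1:n]))))   # small discretization error

A more careful quadrature for b would not change the structure of the computation; only the matrix A reflects the choice of basis.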

Finite difference approximation

The finite difference method is one of the most basic methods to tackle PDE problems. It is not, strictly speaking, a variational approximation; it is rather a sort of variational method where the weight functions w_k are Dirac functions \delta_k. Indeed, when considering the boundary problem

eqfini

Lu=f \mbox{  for  }x\in \Omega

instead of looking for an approximate solution u_h which can be decomposed on a basis of weight functions w_k:

u_h=\sum_k<w_k|u_h>w_k,

the action of L on u is directly expressed in terms of Dirac functions, as is the right-hand side of equation eqfini:

eqfini2

\sum_k\left(\sum_i (Lu)_{ik}\right)\delta_k=\sum_k f_k\delta_k

Remark: If L contains derivatives, the following finite difference formulas are used. Right (forward) formula, order 1:


\Delta x\frac{du}{dx}=\sum (u_{i+1}-u_i)\delta_i


\Delta x^2\frac{d^2u}{dx^2}=\sum (u_{i+2}-2u_{i+1}+u_i)\delta_i

Right (forward) formulas, order 2:


2\Delta x\frac{du}{dx}=\sum (-u_{i+2}+4u_{i+1}-3u_i)\delta_i


\Delta x^2\frac{d^2u}{dx^2}=\sum (-u_{i+3}+4u_{i+2}-5u_{i+1}+2u_i)\delta_i

Left (backward) formulas can be written in a similar way. Centred formulas, second order, are:


2\Delta x\frac{du}{dx}=\sum (u_{i+1}-u_{i-1})\delta_i


\Delta x^2\frac{d^2u}{dx^2}=\sum (u_{i+1}-2u_{i}+u_{i-1})\delta_i

Centred formulas, fourth order, are:


12\Delta x\frac{du}{dx}=\sum (-u_{i+2}+8u_{i+1}-8u_{i-1}+u_{i-2})\delta_i


12\Delta x^2\frac{d^2u}{dx^2}=\sum
(-u_{i+2}+16u_{i+1}-30u_i+16u_{i-1}-u_{i-2})\delta_i
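
As a quick numerical check of these formulas (a minimal sketch assuming Python with numpy, not part of the original text), one can verify the announced order of the centred second-order formulas on a smooth function:

import numpy as np

def errors(dx, x0=0.3):
    u = np.cos                                           # test function: u' = -sin, u'' = -cos
    d1 = (u(x0 + dx) - u(x0 - dx)) / (2 * dx)            # centred, order 2
    d2 = (u(x0 + dx) - 2 * u(x0) + u(x0 - dx)) / dx**2   # centred, order 2
    return abs(d1 + np.sin(x0)), abs(d2 + np.cos(x0))

e1, e2 = errors(1e-2)
E1, E2 = errors(1e-3)
print(e1 / E1, e2 / E2)   # both ratios close to 100: dividing dx by 10
                          # divides the error by 10^2, i.e. second order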

One can show that equation eqfini2 is equivalent to the system of equations:

eqfini3

\sum_i (Lu)_{ik} =f_k

One can see immediately that equation eqfini2 implies equation eqfini3 by choosing "test" functions v_i with support [x_i-\Delta x/2,x_i+\Delta x/2] and such that v_i(x_i)=1.
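
To make this concrete, here is a minimal sketch (in Python with numpy; an illustration, not taken from the original text) of the finite difference solution of -u''=f on [0,1] with homogeneous Dirichlet conditions, using the centred second-order formula above:

import numpy as np

n = 100
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

def f(x):
    return np.pi**2 * np.sin(np.pi * x)   # chosen so that u(x) = sin(pi x)

# Centred second-order discretization of -d^2u/dx^2 at the interior nodes:
# -(u_{i+1} - 2 u_i + u_{i-1}) / dx^2 = f_i,  i = 1, ..., n-1.
A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / dx**2

u = np.zeros(n + 1)                      # u_0 = u_n = 0 (Dirichlet)
u[1:n] = np.linalg.solve(A, f(x[1:n]))

print(np.max(np.abs(u - np.sin(np.pi * x))))   # error of order dx^2

Note that, with the simple quadrature used there, this linear system coincides (up to a factor dx) with the finite element system of the previous section: in one dimension the two methods are very close, though they differ in general.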

Minimization problems

A minimization problem can be written as follows:

promini

Problem: Let V be a functional space and J a functional on V. Find u\in V such that:

J(u)=\min_{v\in V} J(v)

The solving of minimization problems depends first on the nature of the functional J and of the space V. As usual, the functional J(u) is often approximated by a function of several variables J(u_1,\dots,u_N), where the u_i's are the coordinates of u in some basis E_i that approximates V.

The methods to solve minimization problems can be classified into two categories: problems without constraints (see figure figcontraintesans) and problems with constraints (see figure figcontrainteavec).

Minimization problems without constraints can be tackled theoretically by studying the zeros of the differential dJ(u), when it exists. Numerically, it can be less expensive to use dedicated methods. There are methods that do not use the derivatives of J (downhill simplex method, direction-set methods) and methods that do (conjugate gradient method, quasi-Newton methods). Details are given in ([ma:compu:Press92]); a small numerical sketch is given after the constraint examples below.

Problems with constraints reduce the functional space U to a set V of functions that satisfy some additional conditions. Note that such sets V are not vector spaces: a linear combination of elements of V is not always in V. Let us give some examples of constraints. Let U be a functional space, and consider the set

V=\{v\in U, \phi_{i}(v)=0, i\in 1,\dots,n \}

where the \phi_{i}(v) are n functionals. This is a first example of constraints. It can be handled theoretically by using Lagrange multipliers ([ma:equad:Ciarlet88]). A second example of constraints is given by


V=\{v\in U, \phi_{i}(v)\leq 0, i\in 1,\dots,n  \}

where the \phi_{i}(v) are n functionals. The linear programming problem (see example exmplinepro) is an example of a minimization problem with such constraints (in fact, with a mix of equality and inequality constraints).
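
Here is the sketch announced above (assuming Python with scipy; the function J and the starting point are arbitrary illustrations): the same two-variable function is minimized with a derivative-free method and with a quasi-Newton method.

import numpy as np
from scipy.optimize import minimize

def J(u):
    # Rosenbrock-type function of the two variables u = (u_1, u_2)
    return (1.0 - u[0])**2 + 100.0 * (u[1] - u[0]**2)**2

u0 = np.array([-1.0, 2.0])

# Derivative-free method: downhill simplex (Nelder-Mead)
res1 = minimize(J, u0, method="Nelder-Mead")

# Quasi-Newton method: BFGS (the gradient is approximated internally)
res2 = minimize(J, u0, method="BFGS")

print(res1.x, res2.x)   # both close to the minimizer (1, 1)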

figcontraintesans

Minimization of a function of two variables.

figcontrainteavec

Minimization of a function of two variables with constraints. Here, space is reduced to a disk in the plane u_1,u_2.

figcontrainteaveclag

Illustration of the Lagrange multiplier method. At point A, the tangent vectors to the surface and to the constraint curve are not parallel: A does not correspond to an extremum. At point B, both tangent vectors are collinear: we have an extremum.

Example:

Let us consider a first class of functionals J that are important for PDE problems. Consider again the bilinear form a introduced in section secvafor. If this bilinear form a(u,v) is symmetric, i.e.


\forall u\in V, \forall v\in V, a(u,v)=a(v,u),

the problem can be written as a minimization problem by introducing the functional:


J(v)=\frac{1}{2}a(v,v)-L(v)

One can show that (under certain conditions) solving the variational problem: Find u\in V such that:


\forall v\in V, a(u,v)=L(v)

is equivalent to solving the minimization problem: Find u\in V such that:


J(u)=\min_{v\in V} J(v)
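
The key computation behind this equivalence is the expansion of J around u: for all v\in V and t\in{\mathbb R}, the symmetry of a gives

J(u+tv)=J(u)+t(a(u,v)-L(v))+\frac{t^2}{2}a(v,v).

If a(u,v)=L(v) for all v and a is coercive (a(v,v)>0 for v\neq 0), then J(u+tv)\geq J(u): u minimizes J. Conversely, if u minimizes J, the term linear in t must vanish for every v, which is exactly the variational problem.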

Physical principles sometimes have a natural variational formulation (as natural as a PDE formulation). We will come back to variational formulations in the section on the least action principle (see section secprinmoindreact) and in section secpuisvirtu on the principle of virtual powers.

exmplinepro

Example: Another example of a functional is given by the linear programming problem. Let us consider a function u represented by its coordinates u_i in a basis e_i, i\in \{1,\dots,N\}:


u=\sum u_ie_i

In linear programming, the functional to minimize can be written


F(u)=\sum c_iu_i

and is subject to N primary constraints:


\forall i, u_i\geq 0

and M=m_1+m_2+m_3 additional constraints:


k\in\{1,\dots,m_1\}, \sum_j a_{k,j}u_j\leq b_k, (b_k\geq 0)


k\in\{m_1+1,\dots,m_1+m_2\}, \sum_j a_{k,j}u_j\geq b_k\geq 0


k\in\{m_1+m_2+1,\dots,m_1+m_2+m_3\}, \sum_j a_{k,j}u_j=b_k\geq 0

The numerical algorithm to solve this type of problem is presented in ([ma:compu:Press92]). It is called the simplex method.
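
As an illustration (a minimal sketch assuming Python with scipy, with arbitrary coefficients; scipy's linprog accepts only \leq inequalities, so \geq constraints are multiplied by -1):

import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])     # minimize F(u) = u_1 + 2 u_2

# One "<=" constraint and one ">=" constraint rewritten as "<=":
#   u_1 +   u_2 <= 4
#   u_1 + 3 u_2 >= 3   <=>   -u_1 - 3 u_2 <= -3
A_ub = np.array([[1.0, 1.0], [-1.0, -3.0]])
b_ub = np.array([4.0, -3.0])

# The primary constraints u_i >= 0 are the default bounds of linprog.
res = linprog(c, A_ub=A_ub, b_ub=b_ub)

print(res.x, res.fun)        # optimum (0, 1) with F = 2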

exmpsimul

Example: When the variables u_i can take only discrete values, one speaks of discrete or combinatorial optimization. The function to minimize can be, for instance (travelling salesman problem):


J(u,v)=\sum_{i=1}^N \sqrt{(u_{\sigma(i)}-u_{\sigma(i-1)})^2+
  (v_{\sigma(i)}-v_{\sigma(i-1)})^2}

where u_i,v_i are the coordinates of city number i. The coordinates of the cities are fixed in advance, but the order in which the N cities are visited (i.e., the permutation \sigma of \{1,\dots, N\}) is to be found so as to minimize J. Simulated annealing is a method to solve this problem and is presented in ([ma:compu:Press92]).
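
Here is a minimal sketch of simulated annealing for this problem (in Python with numpy; the segment-reversal move and the exponential cooling schedule are the simplest possible choices, not necessarily those of [ma:compu:Press92]):

import numpy as np

rng = np.random.default_rng(0)
N = 30
cities = rng.random((N, 2))          # random coordinates (u_i, v_i)

def tour_length(order):
    # closed tour: the last city is connected back to the first
    return np.sum(np.linalg.norm(cities[order] - cities[np.roll(order, 1)], axis=1))

order = np.arange(N)
J = tour_length(order)
T = 1.0                              # initial "temperature"

for step in range(20000):
    i, j = sorted(rng.integers(0, N, size=2))
    new = order.copy()
    new[i:j + 1] = new[i:j + 1][::-1]   # reverse a segment of the tour
    Jnew = tour_length(new)
    # always accept downhill moves; accept uphill moves with
    # probability exp(-(Jnew - J)/T), which decreases as T is lowered
    if Jnew < J or rng.random() < np.exp(-(Jnew - J) / T):
        order, J = new, Jnew
    T *= 0.9995                         # slow exponential cooling

print(J)   # length of the (near-)optimal tour found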

Lagrange multipliers

The Lagrange multiplier method is an interesting approach to solve the minimization problem of a function of N variables with constraints, i.e., to solve the following problem:

Problem:

Find u in a space V of dimension N such that

 
F(u)=\min_{v\in V}F(v)

with n constraints R_i(u)=0, i=1,\dots,n.

The Lagrange multiplier method is used in statistical physics (see section chapphysstat). In a problem without constraints, the solution u satisfies:


dF=0

In the case with constraints, the N coordinates u_i of u are not independent. Indeed, they should satisfy the relations:


dR_i=0

The Lagrange multiplier method consists in looking for n numbers \lambda_i, called Lagrange multipliers, such that:

dL=dF+\sum \lambda_i dR_i=0

One obtains the following system of equations:

\frac{\partial F}{\partial u_j}+\sum_i \lambda_i \frac{\partial R_i}{\partial u_j}=0

Together with the n constraints R_i(u)=0, this gives N+n equations for the N+n unknowns u_j and \lambda_i.
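
As a simple worked example (not in the original text), let us minimize F(u_1,u_2)=u_1+u_2 under the constraint R(u_1,u_2)=u_1^2+u_2^2-1=0. The system reads

1+2\lambda u_1=0, 1+2\lambda u_2=0, u_1^2+u_2^2=1,

whence u_1=u_2=-1/(2\lambda) and \lambda=\pm 1/\sqrt{2}. The minimum is reached at u_1=u_2=-1/\sqrt{2}, where F=-\sqrt{2} (the other sign of \lambda gives the maximum).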