# Introduction to Mathematical Physics/Some mathematical problems and their solution/Boundary problems, variational methods

## Variational formulation

secvafor

Let us consider a boundary problem:

Problem:

Find ${\displaystyle u\in U}$ such that:

eqLufva

${\displaystyle Lu(x)=f(x)\forall x\in \Omega }$
${\displaystyle u(x)=g(x)\forall x\in \partial \Omega }$

Let us suppose that there is a unique solution ${\displaystyle u}$ of this problem. For a sufficiently large functional space ${\displaystyle V}$, the previous problem may be equivalent to the following:

Problem:

Find ${\displaystyle u\in U}$ such that:

eqvari

${\displaystyle \forall v\in V,\langle Lu,v\rangle =\langle f,v\rangle }$

This is the variational form of the boundary problem. To obtain the equality eqvari, we have just multiplied equation eqLufva scalarly by a "test function" ${\displaystyle v}$. A "weak form" of this problem can be found using Green-formula-type identities: the solution space is taken larger and, as a counterpart, the test function space is taken smaller. Let us illustrate those ideas on a simple example:

Example:

Find ${\displaystyle u}$ in ${\displaystyle U=C^{2}}$ such that:

${\displaystyle -\Delta u=f,\forall x\in \Omega }$
${\displaystyle u=g,\forall x\in \partial \Omega }$

The variational formulation is: Find ${\displaystyle u}$ in ${\displaystyle U=C^{2}}$ such that:

${\displaystyle \forall v\in V_{1},-\int v\Delta udx=\int vfdx}$

Using the Green equality (see appendix secappendgreeneq)\index{Green formula} and the boundary conditions, we can reformulate the problem as: Find ${\displaystyle u\in V}$ such that:

${\displaystyle \forall v\in V,\int \partial _{i}v\partial _{i}udx=\int vfdx}$
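Spelled out, the Green-equality step is an integration by parts (summation over the repeated index ${\displaystyle i}$, with ${\displaystyle n}$ the outward normal):

```latex
\int_\Omega (-\Delta u)\, v \, dx
  = \int_\Omega \partial_i u \,\partial_i v \, dx
  - \int_{\partial\Omega} v \,\frac{\partial u}{\partial n}\, d\sigma
```

The boundary integral vanishes because the test functions ${\displaystyle v}$ are taken to vanish on ${\displaystyle \partial \Omega }$, which leaves exactly the weak form above.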

One can show that this problem has a unique solution in ${\displaystyle V=H_{0}^{1}(\Omega )}$ where

${\displaystyle H^{1}(\Omega )=\{v\in L^{2}(\Omega );\partial _{i}v\in L^{2}(\Omega ),1\leq i\leq n\}}$

is the Sobolev space of order 1 on ${\displaystyle \Omega }$, and ${\displaystyle H_{0}^{1}(\Omega )}$ is the closure in ${\displaystyle H^{1}(\Omega )}$ of the space ${\displaystyle {\mathcal {D}}(\Omega )}$ (the space of infinitely differentiable functions with compact support in ${\displaystyle \Omega }$).

It may happen that, as in the previous example, the solution function space and the test function space are the same. This is the case for most linear operators ${\displaystyle L}$ encountered in physics. In this case, one can associate to ${\displaystyle L}$ a bilinear form ${\displaystyle a}$. The (weak) variational problem is thus:

provari2

Problem:

Find ${\displaystyle u\in V}$ such that

${\displaystyle \forall v\in V,a(u,v)=L(v)}$

Example:

In the previous example, the bilinear form is:

${\displaystyle a(u,v)=\int _{\Omega }\partial _{i}u(x)\partial _{i}v(x)dx.}$

There exist theorems (the Lax-Milgram theorem, for instance) that prove the existence and uniqueness of the solution of the previous problem provari2, under certain conditions on the bilinear form ${\displaystyle a(u,v)}$.

## Finite elements approximation

secvarinum

Let us consider the problem provari2 :

Problem:

Find ${\displaystyle u\in V}$ such that:

${\displaystyle \forall v\in V,a(u,v)=L(v)}$

The approximation method consists in choosing a finite-dimensional subspace ${\displaystyle V_{h}}$ of ${\displaystyle V}$; the problem to solve becomes:

provari3

Problem: Find ${\displaystyle u_{h}\in V_{h}}$ such that

${\displaystyle \forall v_{h}\in V_{h},a(u_{h},v_{h})=L(v_{h})}$

A basis ${\displaystyle \{v_{h}^{i}\}}$ of ${\displaystyle V_{h}}$ is chosen to satisfy the boundary conditions. The problem reduces to finding the components ${\displaystyle u_{i}}$ of ${\displaystyle u_{h}}$:

${\displaystyle u_{h}=\sum _{i}u_{i}v_{h}^{i}}$

If ${\displaystyle a}$ is a bilinear form

${\displaystyle a(u_{h},v_{h}^{i})=\sum _{j}u_{j}a(v_{h}^{j},v_{h}^{i}),}$

and finding the ${\displaystyle u_{j}}$'s is equivalent to solving a linear system (often close to a diagonal system) by classical algorithms ([ma:equad:Ciarlet88], [ma:compu:Press92]), which can be direct methods (Gauss, Cholesky) or iterative ones (Jacobi, Gauss-Seidel, relaxation). Note that if the vectors of the basis of ${\displaystyle V_{h}}$ are eigenvectors of ${\displaystyle L}$, then solving the system is immediate (diagonal system). This is the basis of spectral methods for solving evolution problems. If ${\displaystyle a}$ is not linear, we have to solve a nonlinear system. Let us give an example of a basis ${\displaystyle \{v_{h}^{i}\}}$.
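As a concrete sketch of this reduction, the fragment below assembles and solves the linear system for ${\displaystyle -u''=f}$ on ${\displaystyle [0,1]}$ with homogeneous Dirichlet conditions, using a piecewise-linear "hat" basis. The helper name `fem_poisson_1d` and the lumped-load approximation of ${\displaystyle L(v_{h}^{i})}$ are our own assumptions, not part of the text:

```python
import numpy as np

def fem_poisson_1d(f, n):
    """Solve -u'' = f on [0,1], u(0)=u(1)=0, on a uniform mesh with n
    sub-intervals and a piecewise-linear (hat) basis for V_h."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix a(v_h^j, v_h^i) = int v_h^j' v_h^i' dx:
    # 2/h on the diagonal, -1/h on the two off-diagonals.
    A = (2.0 * np.eye(n - 1)
         - np.eye(n - 1, k=1)
         - np.eye(n - 1, k=-1)) / h
    # Load vector L(v_h^i) = int f v_h^i dx, approximated by h*f(x_i).
    b = h * f(x[1:-1])
    u_inner = np.linalg.solve(A, b)  # the coefficients u_i
    return x, np.concatenate(([0.0], u_inner, [0.0]))

# f = pi^2 sin(pi x) has exact solution u = sin(pi x)
x, u = fem_poisson_1d(lambda t: np.pi**2 * np.sin(np.pi * t), 50)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Here `err` measures the discretization error against the exact solution ${\displaystyle \sin(\pi x)}$; it shrinks as the mesh is refined.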

Example:

When ${\displaystyle V=L^{2}([0,1])}$, an example of ${\displaystyle V_{h}}$ that can be chosen ([ma:equad:Ciarlet88]) is the set of piecewise-linear continuous functions that are zero in ${\displaystyle x=0}$ and ${\displaystyle x=1}$ (for Dirichlet boundary conditions). More precisely, ${\displaystyle L^{2}[0,1]}$ can be approximated by the space of piecewise linear continuous functions on intervals ${\displaystyle [i/n,(i+1)/n]}$, ${\displaystyle i}$ going from zero to ${\displaystyle n-1}$, that are zero in ${\displaystyle x=0}$ and ${\displaystyle x=1}$. The basis of such a space is made by functions ${\displaystyle (v_{h}^{i}),i\in (1,\dots ,n-1)}$ defined by:

${\displaystyle v_{h}^{i}(x)=nx-(i-1){\mbox{ in }}[(i-1)/n,i/n]}$
${\displaystyle v_{h}^{i}(x)=(i+1)-nx{\mbox{ in }}[i/n,(i+1)/n]}$

and zero anywhere else (see figure figapproxesp).

figapproxesp

The space ${\displaystyle L^{2}[0,1]}$ can be approximated by piecewise-linear continuous functions.
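A minimal evaluation of such hat functions (the helper `hat` is our own; it encodes a piecewise-linear function equal to 1 at the node ${\displaystyle i/n}$ and 0 at the neighbouring nodes):

```python
import numpy as np

def hat(i, n, x):
    """Hat function v_h^i on [0,1]: piecewise linear, equal to 1 at the
    node x = i/n and to 0 at the neighbouring nodes (i-1)/n, (i+1)/n."""
    x = np.asarray(x, dtype=float)
    rising  = (n * x - (i - 1)) * ((i - 1) / n <= x) * (x <= i / n)
    falling = ((i + 1) - n * x) * (i / n < x) * (x <= (i + 1) / n)
    return rising + falling

# v_h^i equals 1 at its own node and 0 at all the others:
n = 4
vals = [float(hat(2, n, j / n)) for j in range(n + 1)]
```

Evaluating `hat(2, 4, ·)` at the nodes ${\displaystyle 0,1/4,\dots ,1}$ gives 1 at ${\displaystyle x=1/2}$ and 0 elsewhere, as a nodal basis should.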

## Finite difference approximation

The finite difference method is one of the most basic methods to tackle PDE problems. It is not, strictly speaking, a variational approximation. It is rather a sort of variational method where the weight functions ${\displaystyle w_{k}}$ are Dirac functions ${\displaystyle \delta _{k}}$. Indeed, when considering the boundary problem,

eqfini

${\displaystyle Lu=f{\mbox{ for }}x\in \Omega }$

instead of looking for an approximate solution ${\displaystyle u_{h}}$ which can be decomposed on a basis of weight functions ${\displaystyle w_{k}}$:

${\displaystyle u_{h}=\sum _{k}u_{k}w_{k},}$

the action of ${\displaystyle L}$ on ${\displaystyle u}$ is directly expressed in terms of Dirac functions, as well as the right hand term of equation eqfini:

eqfini2

${\displaystyle \sum _{k}\sum _{i}(Lu)_{ik}\delta _{k}=\sum _{k}f_{k}\delta _{k}}$

Remark: If ${\displaystyle L}$ contains derivatives, the following formulas are used. Right formula, order 1:

${\displaystyle \Delta x{\frac {du}{dx}}=\sum (u_{i+1}-u_{i})\delta _{i}}$
${\displaystyle \Delta x^{2}{\frac {d^{2}u}{dx^{2}}}=\sum (u_{i+2}-2u_{i+1}+u_{i})\delta _{i}}$

Right formula, order 2:

${\displaystyle 2\Delta x{\frac {du}{dx}}=\sum (-u_{i+2}+4u_{i+1}-3u_{i})\delta _{i}}$
${\displaystyle \Delta x^{2}{\frac {d^{2}u}{dx^{2}}}=\sum (-u_{i+3}+4u_{i+2}-5u_{i+1}+2u_{i})\delta _{i}}$

Left formulas can be written in a similar way. Centred formulas of order 2 are:

${\displaystyle 2\Delta x{\frac {du}{dx}}=\sum (u_{i+1}-u_{i-1})\delta _{i}}$
${\displaystyle \Delta x^{2}{\frac {d^{2}u}{dx^{2}}}=\sum (u_{i+1}-2u_{i}+u_{i-1})\delta _{i}}$

Centred formulas of order 4 are:

${\displaystyle 12\Delta x{\frac {du}{dx}}=\sum (-u_{i+2}+8u_{i+1}-8u_{i-1}+u_{i-2})\delta _{i}}$
${\displaystyle 12\Delta x^{2}{\frac {d^{2}u}{dx^{2}}}=\sum (-u_{i+2}+16u_{i+1}-30u_{i}+16u_{i-1}-u_{i-2})\delta _{i}}$

One can show that equation eqfini2 is equivalent to the system of equations:

eqfini3

${\displaystyle \sum _{i}(Lu)_{ik}=f_{k}}$

One can see immediately that equation eqfini2 implies equation eqfini3 by choosing "test" functions ${\displaystyle v_{i}}$ with support ${\displaystyle [x_{i-1/2},x_{i+1/2}]}$ and such that ${\displaystyle v_{i}(x_{i})=1}$.
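The difference formulas of the remark can be checked numerically. The sketch below (function names are our own) applies the centred formulas of orders 2 and 4 to ${\displaystyle u=\sin x}$ and compares them with the exact derivatives:

```python
import math

def d1_centered2(u, i, dx):
    # second order: 2*dx*u'_i ~ u_{i+1} - u_{i-1}
    return (u[i + 1] - u[i - 1]) / (2 * dx)

def d1_centered4(u, i, dx):
    # fourth order: 12*dx*u'_i ~ -u_{i+2} + 8u_{i+1} - 8u_{i-1} + u_{i-2}
    return (-u[i + 2] + 8 * u[i + 1] - 8 * u[i - 1] + u[i - 2]) / (12 * dx)

def d2_centered4(u, i, dx):
    # fourth order: 12*dx^2*u''_i ~ -u_{i+2} + 16u_{i+1} - 30u_i + 16u_{i-1} - u_{i-2}
    return (-u[i + 2] + 16 * u[i + 1] - 30 * u[i] + 16 * u[i - 1]
            - u[i - 2]) / (12 * dx ** 2)

# Sample u = sin(x) on a fine grid and compare with the exact derivatives.
dx = 0.01
grid = [math.sin(k * dx) for k in range(200)]
i = 100  # the point x = 1.0
e2 = abs(d1_centered2(grid, i, dx) - math.cos(1.0))
e4 = abs(d1_centered4(grid, i, dx) - math.cos(1.0))
```

As expected, the fourth-order formula is far more accurate than the second-order one at the same step size.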

## Minimization problems

A minimization problem can be written as follows:

promini

Problem: Let ${\displaystyle V}$ be a functional space and ${\displaystyle J}$ a functional. Find ${\displaystyle u\in V}$ such that:

${\displaystyle J(u)=\min _{v\in V}J(v)}$

The solving of minimization problems depends first on the nature of the functional ${\displaystyle J}$ and on the space ${\displaystyle V}$. As usual, the functional ${\displaystyle J(u)}$ is often approximated by a function of several variables ${\displaystyle J(u_{1},\dots ,u_{N})}$, where the ${\displaystyle u_{i}}$'s are the coordinates of ${\displaystyle u}$ in some basis ${\displaystyle E_{i}}$ that approximates ${\displaystyle V}$. The methods to solve minimization problems can be classified into two categories: one can distinguish problems without constraints (see Fig. figcontraintesans) and problems with constraints (see Fig. figcontrainteavec).

Minimization problems without constraints can be tackled theoretically by studying the zeros of the differential ${\displaystyle dJ(u)}$, if it exists. Numerically, it can be less expensive to use dedicated methods. There are methods that do not use derivatives of ${\displaystyle J}$ (downhill simplex method, direction-set methods) and methods that use derivatives of ${\displaystyle J}$ (conjugate gradient method, quasi-Newton methods). Details are given in ([ma:compu:Press92]).

Problems with constraints reduce the functional space ${\displaystyle U}$ to a set ${\displaystyle V}$ of functions that satisfy some additional conditions. Note that those sets ${\displaystyle V}$ are not vector spaces: a linear combination of elements of ${\displaystyle V}$ is not always in ${\displaystyle V}$. Let us give some examples of constraints. Let ${\displaystyle U}$ be a functional space and consider the space

${\displaystyle V=\{v\in U,\phi _{i}(v)=0,i\in 1,\dots ,n\}}$

where the ${\displaystyle \phi _{i}(v)}$ are ${\displaystyle n}$ functionals. This is a first example of constraints. It can be handled theoretically by using Lagrange multipliers ([ma:equad:Ciarlet88]).\index{constraint} A second example of constraints is given by

${\displaystyle V=\{v\in U,\phi _{i}(v)\leq 0,i\in 1,\dots ,n\}}$

where the ${\displaystyle \phi _{i}(v)}$ are ${\displaystyle n}$ functionals. The linear programming problem (see example exmplinepro) is an example of a minimization problem with such constraints (in fact, a mix of equality and inequality constraints).
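As a minimal illustration of a derivative-based method for the unconstrained case, here is a fixed-step gradient descent, a deliberately simple stand-in for the conjugate gradient and quasi-Newton methods cited above:

```python
def grad_descent(grad, u0, step=0.1, iters=500):
    """Fixed-step gradient descent: move against the gradient until we
    sit (approximately) at a zero of the differential."""
    u = list(u0)
    for _ in range(iters):
        g = grad(u)
        u = [ui - step * gi for ui, gi in zip(u, g)]
    return u

# J(u1, u2) = (u1 - 1)^2 + 2*(u2 + 1/2)^2 has its minimum at (1, -1/2).
grad_J = lambda u: [2.0 * (u[0] - 1.0), 4.0 * (u[1] + 0.5)]
u_min = grad_descent(grad_J, [0.0, 0.0])
```

For this convex quadratic the iterates converge geometrically to the minimizer; production codes adapt the step size instead of fixing it.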

figcontraintesans

Minimization of a function of two variables.

figcontrainteavec

Minimization of a function of two variables with constraints. Here, the space is reduced to a disk in the ${\displaystyle (u_{1},u_{2})}$ plane.

figcontrainteaveclag

Illustration of the Lagrange multiplier. At point ${\displaystyle A}$, the tangent vectors of the surface and of the constraint curve are not parallel: ${\displaystyle A}$ does not correspond to an extremum. At point ${\displaystyle B}$, both tangent vectors are collinear: we have an extremum.

Example:

Let us consider a first class of functionals ${\displaystyle J}$ that are important for PDE problems. Consider again the bilinear form introduced in section secvafor. If this bilinear form ${\displaystyle a(u,v)}$ is symmetrical, *i.e.*

${\displaystyle \forall u\in V,\forall v\in V,a(u,v)=a(v,u).}$

The problem can then be written as a minimization problem by introducing the functional:

${\displaystyle J(v)={\frac {1}{2}}a(v,v)-L(v)}$

One can show that (under certain conditions) solving the variational problem: Find ${\displaystyle u\in V}$ such that:

${\displaystyle \forall v\in V,a(u,v)=L(v)}$

is equivalent to solving the minimization problem: Find ${\displaystyle u\in V}$ such that:

${\displaystyle J(u)=\min _{v\in V}J(v)}$
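In finite dimension, this equivalence can be observed directly: with a symmetric positive definite matrix ${\displaystyle A}$ standing for ${\displaystyle a}$ and a vector ${\displaystyle b}$ standing for ${\displaystyle L}$, the minimizer of ${\displaystyle J(v)={\tfrac {1}{2}}v^{T}Av-b^{T}v}$ is the solution of ${\displaystyle Av=b}$. The following is a numerical sketch, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4.0 * np.eye(4)   # symmetric positive definite ("coercive")
b = rng.normal(size=4)

J = lambda v: 0.5 * v @ A @ v - b @ v   # J(v) = 1/2 a(v,v) - L(v)
u = np.linalg.solve(A, b)               # solves a(u,v) = L(v) for all v

# J is strictly larger at randomly perturbed points than at u:
worse = all(J(u + 0.1 * rng.normal(size=4)) > J(u) for _ in range(100))
```

Every random perturbation increases ${\displaystyle J}$, consistent with ${\displaystyle u}$ being the unique minimizer.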

Physical principles sometimes have a natural variational formulation (as natural as a PDE formulation). We will come back to variational formulations in the section on the least action principle (see section secprinmoindreact) and in section secpuisvirtu on the principle of virtual powers.

exmplinepro

Example: Another example of a functional is given by the linear programming problem. Let us consider a vector ${\displaystyle u}$ given by its coordinates ${\displaystyle u_{i}}$ in a basis ${\displaystyle e_{i}}$, ${\displaystyle i\in \{1,\dots ,N\}}$:

${\displaystyle u=\sum u_{i}e_{i}}$

In linear programming, the functional to minimize can be written

${\displaystyle F(u)=\sum c_{i}u_{i}}$

and is subject to ${\displaystyle N}$ primary constraints:

${\displaystyle \forall i,u_{i}\geq 0}$

and ${\displaystyle M=m_{1}+m_{2}+m_{3}}$ additional constraints:

${\displaystyle k\in \{1,\dots ,m_{1}\},\sum _{j}a_{k,j}u_{j}\leq b_{k},(b_{k}\geq 0)}$
${\displaystyle k\in \{m_{1}+1,\dots ,m_{1}+m_{2}\},\sum _{j}a_{k,j}u_{j}\geq b_{k}\geq 0}$
${\displaystyle k\in \{m_{1}+m_{2}+1,\dots ,m_{1}+m_{2}+m_{3}\},\sum _{j}a_{k,j}u_{j}=b_{k}\geq 0}$

The numerical algorithm to solve this type of problem is presented in ([ma:compu:Press92]). It is called the simplex method.
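For a problem this small one can bypass the simplex method and enumerate the vertices of the feasible region directly. The brute-force sketch below (our own construction, only workable for two variables) solves a toy instance:

```python
import itertools
import numpy as np

# Minimize c.u subject to u >= 0 and A u <= b (a tiny LP; for real
# problems use the simplex method cited in the text).
c = np.array([-1.0, -2.0])                 # F(u) = -u1 - 2*u2
A = np.array([[1.0, 1.0], [0.0, 1.0]])     # u1 + u2 <= 4,  u2 <= 3
b = np.array([4.0, 3.0])

# Stack all boundary lines: the constraint rows and the axes u_i = 0.
G = np.vstack([A, np.eye(2)])
h = np.concatenate([b, np.zeros(2)])

best_u, best_val = None, np.inf
for i, j in itertools.combinations(range(len(G)), 2):
    M = G[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                            # parallel lines, no vertex
    u = np.linalg.solve(M, h[[i, j]])
    if np.all(u >= -1e-9) and np.all(A @ u <= b + 1e-9):
        val = c @ u                         # candidate vertex value
        if val < best_val:
            best_u, best_val = u, val
```

The optimum of a linear functional over a polytope is attained at a vertex, which is why checking only the constraint intersections suffices; the simplex method exploits the same fact without enumerating everything.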

exmpsimul

Example: When the variables ${\displaystyle u_{i}}$ can take only discrete values, one speaks of discrete or combinatorial optimization. The function to minimize can be, for instance (travelling salesman problem):

${\displaystyle J(u,v)=\sum _{i=1}^{N}{\sqrt {(u_{\sigma (i)}-u_{\sigma (i-1)})^{2}+(v_{\sigma (i)}-v_{\sigma (i-1)})^{2}}}}$

where ${\displaystyle u_{i},v_{i}}$ are the coordinates of city number ${\displaystyle i}$. The coordinates of the cities are fixed in advance, but the order in which the ${\displaystyle N}$ cities are visited (*i.e.* the permutation ${\displaystyle \sigma }$ of ${\displaystyle \{1,\dots ,N\}}$) is to be found to minimize ${\displaystyle J}$. Simulated annealing is a method to solve this problem and is presented in ([ma:compu:Press92]).
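A minimal simulated annealing loop for this travelling salesman functional can be sketched as follows; the swap move, cooling schedule, and parameter values are illustrative choices of ours, not those of [ma:compu:Press92]:

```python
import math
import random

def tour_length(cities, order):
    """J(u,v): total length of the closed tour visiting 'cities' in 'order'."""
    return sum(math.dist(cities[order[i]], cities[order[i - 1]])
               for i in range(len(order)))

def anneal(cities, T0=1.0, cooling=0.999, steps=20000, seed=0):
    """Simulated annealing over permutations: propose a random swap and
    accept it with probability exp(-dJ/T), lowering T gradually."""
    rng = random.Random(seed)
    order = list(range(len(cities)))
    length, T = tour_length(cities, order), T0
    for _ in range(steps):
        i, j = rng.sample(range(len(cities)), 2)
        order[i], order[j] = order[j], order[i]
        new = tour_length(cities, order)
        if new < length or rng.random() < math.exp((length - new) / T):
            length = new                                 # accept the move
        else:
            order[i], order[j] = order[j], order[i]      # undo the swap
        T *= cooling
    return order, length

# Usage: 8 cities on a circle; the optimal tour follows the circle.
cities = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
random.Random(1).shuffle(cities)
order, best = anneal(cities)
```

Accepting occasional uphill moves at high temperature is what lets the method escape local minima of ${\displaystyle J}$; as ${\displaystyle T\to 0}$ the walk freezes into a (hopefully near-optimal) tour.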

## Lagrange multipliers

The Lagrange multiplier method is an approach to solving the minimization problem of a function of ${\displaystyle N}$ variables with constraints, *i.e.* to solving the following problem: \index{constraint}\index{Lagrange multiplier}

Problem:

Find ${\displaystyle u}$ in a space ${\displaystyle V}$ of dimension ${\displaystyle N}$ such that

${\displaystyle F(u)=\min _{v\in V}F(v)}$

with ${\displaystyle n}$ constraints ${\displaystyle R_{i}(u)=0,i=1,\dots ,n.}$

The Lagrange multiplier method is used in statistical physics (see section chapphysstat). In a problem without any constraints, the solution ${\displaystyle u}$ satisfies:

${\displaystyle dF=0}$

In the case with constraints, the ${\displaystyle N}$ coordinates ${\displaystyle u_{i}}$ of ${\displaystyle u}$ are not independent. Indeed, they should satisfy the relations:

${\displaystyle dR_{i}=0}$

The Lagrange multiplier method consists in looking for ${\displaystyle n}$ numbers ${\displaystyle \lambda _{i}}$, called Lagrange multipliers, such that:

${\displaystyle dL=dF+\sum \lambda _{i}dR_{i}=0}$

One obtains the following equation system:

${\displaystyle {\frac {\partial F}{\partial u_{j}}}+\sum _{i}\lambda _{i}{\frac {\partial R_{i}}{\partial u_{j}}}=0}$
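When ${\displaystyle F}$ is quadratic and the constraint is affine, this system is linear. A small sketch on an example of our own choosing:

```python
import numpy as np

# Minimize F(u1, u2) = u1^2 + u2^2 subject to R(u) = u1 + u2 - 1 = 0.
# The stationarity conditions dF + lambda*dR = 0 together with the
# constraint form a linear system in (u1, u2, lambda):
#   2*u1        + lam = 0
#         2*u2  + lam = 0
#   u1  + u2          = 1
M = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
u1, u2, lam = np.linalg.solve(M, rhs)   # expect u1 = u2 = 1/2, lam = -1
```

The point ${\displaystyle (1/2,1/2)}$ is indeed the closest point of the line ${\displaystyle u_{1}+u_{2}=1}$ to the origin, and the multiplier measures how strongly the constraint "pushes" on the optimum.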