Partial Differential Equations/Fundamental solutions, Green's functions and Green's kernels

From Wikibooks, open books for an open world

In the last two chapters, we have studied test function spaces and distributions. In this chapter we will demonstrate a method to obtain solutions to linear partial differential equations which uses test function spaces and distributions.


We now introduce equicontinuity, because we will need this concept at two very important places of this wikibook. The pattern of use will always be the same: we first prove some statements about equicontinuous sets of functions, then we show that certain sets of functions are equicontinuous, and finally we conclude that those statements apply to them.

Let's see for which objects equicontinuity is defined:

Definition 5.1:

Let M be a metric space equipped with a metric which we shall denote by d here, let X \subseteq M be a set in M, and let \mathcal Q be a set of continuous functions mapping from X to the real numbers \mathbb R. We call this set \mathcal Q equicontinuous iff:

\forall \epsilon \in \mathbb R_{>0} : \forall x \in X : \exists \delta \in \mathbb R_{>0} : \forall y \in X : d(x, y) < \delta \Rightarrow \forall f \in \mathcal Q : |f(x) - f(y)| < \epsilon

So equicontinuity is defined for sets of continuous functions mapping from X (a set in a metric space) to the real numbers \mathbb R.

Theorem 5.2:

Let M be a metric space equipped with a metric, which we shall denote by d, let Q \subseteq M be a sequentially compact set in M, and let \mathcal Q be an equicontinuous set of continuous functions from Q to the real numbers \mathbb R. If (f_l)_{l \in \mathbb N} is a sequence in \mathcal Q such that f_l(x) has a limit for each x \in Q, then f_l \to f uniformly, where f : Q \to \mathbb R is defined by f(x) := \lim_{l \to \infty} f_l(x).


Proof: In order to prove uniform convergence, by definition we must prove that for all \epsilon > 0, there exists an N \in \mathbb N such that for all l \ge N : \forall x \in Q : |f_l(x) - f(x)| < \epsilon.

So let's assume the contrary, which, by negating the logical statement, reads:

\exists \epsilon > 0 : \forall N \in \mathbb N : \exists l \ge N : \exists x \in Q : |f_l(x) - f(x)| \ge \epsilon

We construct a sequence (x_m)_{m \in \mathbb N} in Q as follows: by the negated statement, we may choose l_1 \in \mathbb N and x_1 \in Q such that |f_{l_1}(x_1) - f(x_1)| \ge \epsilon. If x_k and l_k have already been chosen for all k \in \{1, \ldots, m\}, we apply the negated statement with N = l_m + 1 and obtain l_{m+1} > l_m and x_{m+1} \in Q such that |f_{l_{m+1}}(x_{m+1}) - f(x_{m+1})| \ge \epsilon.

As Q is sequentially compact, there is a convergent subsequence of (x_m)_{m \in \mathbb N}, which we shall denote by (x_k)_{k \in \mathbb N}. Let us call the limit of this sequence, very creatively, x. As \mathcal Q is equicontinuous, we may choose \delta \in \mathbb R_{>0} such that d(x, y) < \delta \Rightarrow \forall f \in \mathcal Q : |f(x) - f(y)| < \frac{\epsilon}{4}. Further, since x_k \to x as k \to \infty, we may choose J \in \mathbb N such that for all k \ge J we have d(x_k, x) < \delta. For k \ge J, the reverse triangle inequality then gives:

|f_{l_k}(x) - f(x)| \ge \left| |f_{l_k}(x) - f(x_k)| - |f(x_k) - f(x)| \right|

Since, by taking the limit l \to \infty in |f_l(x_k) - f_l(x)| < \frac{\epsilon}{4}, we have |f(x_k) - f(x)| \le \frac{\epsilon}{4}, and further, by the reverse triangle inequality and the construction of the sequence (x_k)_{k \in \mathbb N}:

|f_{l_k}(x) - f(x_k)| \ge \left| |f_{l_k}(x_k) - f(x_k)| - |f_{l_k}(x) - f_{l_k}(x_k)| \right| \ge \epsilon - \frac{\epsilon}{4}

, we obtain:

|f_{l_k}(x) - f(x)| \ge \left| |f_{l_k}(x) - f(x_k)| - |f(x_k) - f(x)| \right| = |f_{l_k}(x) - f(x_k)| - |f(x_k) - f(x)| \ge \left( \epsilon - \frac{\epsilon}{4} \right) - \frac{\epsilon}{4} = \frac{\epsilon}{2}

Thus |f_{l_k}(x) - f(x)| \ge \frac{\epsilon}{2} for all k \ge J, which contradicts f_l(x) \to f(x).
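Theorem 5.2 can be tried out numerically. In the following Python sketch (the concrete families and the grid are illustrative choices of the editor, not part of the text), the family f_l(x) = x/l on [0, 1] has gradients bounded by 1 and is equicontinuous, and its pointwise convergence to 0 is indeed uniform; the family f_l(x) = x^l is not equicontinuous near x = 1, and its pointwise limit is not attained uniformly:

```python
import numpy as np

# Numerical sketch of Theorem 5.2 on the sequentially compact set
# Q = [0, 1] (the concrete families are illustrative choices).
xs = np.linspace(0.0, 1.0, 1001)

def sup_error(f_l, f_limit, l):
    """Approximates sup_{x in Q} |f_l(x) - f(x)| on a grid."""
    return float(np.max(np.abs(f_l(xs, l) - f_limit(xs))))

# Equicontinuous family: f_l(x) = x / l, pointwise limit f = 0.
# The uniform error is exactly 1/l, so convergence is uniform.
err_equi = [sup_error(lambda x, l: x / l, lambda x: 0.0 * x, l)
            for l in (1, 10, 100)]

# Non-equicontinuous family: f_l(x) = x**l, pointwise limit is 0 on
# [0, 1) and 1 at x = 1; the sup error stays close to 1 for every l.
limit = lambda x: np.where(x < 1.0, 0.0, 1.0)
err_not = [sup_error(lambda x, l: x ** l, limit, l) for l in (1, 10, 100)]

print(err_equi)  # tends to 0
print(err_not)   # stays close to 1
```

Note how equicontinuity is exactly what rules out the second behaviour: for f_l(x) = x^l, no single \delta works near x = 1 for all l at once.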


Theorem 5.3:

Let \mathcal Q be a set of differentiable functions, mapping from the convex set X \subseteq \mathbb R^d to \mathbb R. If there exists a constant c \in \mathbb R_{>0} such that for all f \in \mathcal Q and all x \in X we have \| \nabla f(x) \| \le c (\nabla f exists for each function in \mathcal Q because all functions there were required to be differentiable), then \mathcal Q is equicontinuous.

Proof: We have to prove equicontinuity, so we have to prove

\forall \epsilon \in \mathbb R_{>0} : \forall x \in X : \exists \delta \in \mathbb R_{>0} : \forall y \in X: \|x - y\| < \delta \Rightarrow \forall f \in \mathcal Q : |f(x) - f(y)| < \epsilon.

Let \epsilon \in \mathbb R_{>0} be arbitrary.

Let x \in X be arbitrary.

We choose \delta := \frac{\epsilon}{c}. Note that this choice is independent of x; in this proof we get away with choosing the same old boring \delta for all the x :-)

Let y \in X such that \|x - y\| < \delta, and let f \in \mathcal Q be arbitrary. By the mean-value theorem in multiple dimensions, we obtain that there exists a \lambda \in [0, 1] such that:

f(x) - f(y) = \nabla f(\lambda x + (1 - \lambda) y) \cdot (x - y)

The element \lambda x + (1 - \lambda) y is inside X, because X is convex. From the Cauchy-Schwarz inequality then follows:

|f(x) - f(y)| = | \nabla f(\lambda x + (1 - \lambda) y) \cdot (x - y) | \le \|\nabla f(\lambda x + (1 - \lambda) y)\| \|x - y\| < c \delta = \frac{c}{c} \epsilon = \epsilon
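The estimate of Theorem 5.3 can be checked numerically. The following Python sketch (the family f_a(x) = \sin(a \cdot x) is an illustrative choice of the editor, not from the text) samples functions on \mathbb R^2 whose gradients a \cos(a \cdot x) satisfy \|\nabla f_a\| \le c; random sampling never finds a difference quotient above the bound c from the mean value theorem and the Cauchy-Schwarz inequality:

```python
import numpy as np

# Sketch of Theorem 5.3 in d = 2 for the illustrative family
# f_a(x) = sin(a . x) with ||a|| = c: the gradient is a * cos(a . x),
# so ||grad f_a(x)|| <= c for all x, and the proof's estimate gives
# |f_a(x) - f_a(y)| <= c * ||x - y|| uniformly over the whole family.
rng = np.random.default_rng(0)
c = 3.0

worst_ratio = 0.0
for _ in range(20):
    v = rng.normal(size=2)
    a = c * v / np.linalg.norm(v)          # direction scaled to ||a|| = c
    for _ in range(200):
        x, y = rng.normal(size=2), rng.normal(size=2)
        ratio = abs(np.sin(x @ a) - np.sin(y @ a)) / np.linalg.norm(x - y)
        worst_ratio = max(worst_ratio, ratio)

print(worst_ratio)  # stays below c, so delta = eps / c works for all members
```

The point of the theorem is precisely that the same \delta = \epsilon / c serves every function in the family simultaneously.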

Fundamental Solutions

In the last chapter, we had defined multiplication of a distribution with a smooth function and derivatives of distributions. Therefore, for a distribution \mathcal T, we are able to calculate such expressions as

a \cdot \partial_\alpha \mathcal T

for a smooth function a: \mathbb R^d \to \mathbb R and a d-dimensional multiindex \alpha \in \mathbb N_0^d. We therefore observe that in a linear partial differential equation of the form

\forall x \in \Omega : \sum_{\alpha \in \mathbb N_0^d} a_\alpha(x) \partial_\alpha u(x) = f(x)

we could insert any distribution \mathcal T instead of u in the left hand side. However, equality would not hold in this case, because on the right hand side we have a function, but the left hand side would give us a distribution (as finite sums of distributions are distributions again due to theorem 4.?; remember that only finitely many a_\alpha are allowed to be nonzero). If we however replace the right hand side by \mathcal T_f (the regular distribution corresponding to f), then there might be distributions \mathcal T which satisfy the equation. In this case, we speak of a distributional solution. Let's summarise this definition in a box.

Definition 5.?:

Let O \subseteq \mathbb R^d be open, let

\forall x \in O : \sum_{\alpha \in \mathbb N_0^d} a_\alpha(x) \partial_\alpha u(x) = f(x)

be a linear partial differential equation, and let \mathcal T \in \mathcal D(O)^*. \mathcal T is called a distributional solution to the above linear partial differential equation iff

\forall \varphi \in \mathcal D(O) : \sum_{\alpha \in \mathbb N_0^d} a_\alpha \partial_\alpha \mathcal T(\varphi) = \mathcal T_f (\varphi)
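As a concrete one-dimensional illustration (an example of the editor, not from the text, using the sign convention \partial T(\varphi) = -T(\varphi') for derivatives of distributions from the last chapter): the regular distribution T_H of the Heaviside function H = 1_{[0, \infty)} is a distributional solution of u' = \delta_0, since T_H'(\varphi) = -\int_0^\infty \varphi'(x) dx = \varphi(0). The Python sketch below checks this for one bump test function:

```python
import numpy as np

# 1D illustrative check: T_H'(phi) = -T_H(phi') should equal
# phi(0) = delta_0(phi) for the Heaviside function H, i.e. T_H is a
# distributional solution of u' = delta_0 (H itself is not even
# continuous at 0, so the equation only holds distributionally).

def phi(x):
    """Standard bump test function, smooth with support [-1, 1]."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m = np.abs(x) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return out

def phi_prime(x):
    """Derivative of the bump, from the chain rule."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m = np.abs(x) < 1.0
    xm = x[m]
    out[m] = np.exp(-1.0 / (1.0 - xm ** 2)) * (-2.0 * xm / (1.0 - xm ** 2) ** 2)
    return out

# -T_H(phi') = -int_0^infty phi'(x) dx, by the trapezoid rule on [0, 1]
xs = np.linspace(0.0, 1.0, 200001)
vals = phi_prime(xs)
lhs = -float(np.sum((vals[:-1] + vals[1:]) / 2) * (xs[1] - xs[0]))
rhs = float(phi(np.array([0.0]))[0])       # delta_0(phi) = phi(0) = e^{-1}

print(lhs, rhs)  # both approximately 0.3679
```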

Now we will show how we can obtain distributional solutions to a partial differential equation. The method of choice will be to guess a so-called fundamental solution and then construct solutions with the help of that fundamental solution.

Definition 5.?:

Let O \subseteq \mathbb R^d be open and let

\forall x \in O : \sum_{\alpha \in \mathbb N_0^d} a_\alpha(x) \partial_\alpha u(x) = 0

be a linear homogeneous partial differential equation. If F : O \to \mathcal D(O)^* has the two properties

  1. \forall \varphi \in \mathcal D(O) : x \mapsto F(x)(\varphi) \text{ is continuous}
  2. \forall x \in O : F(x) \text{ is a solution to } \forall \varphi \in \mathcal D(O) : \sum_{\alpha \in \mathbb N_0^d} a_\alpha \partial_\alpha F(x)(\varphi) = \delta_x(\varphi)

, we call F a fundamental solution.

The reason for this definition is the following: Once we have a fundamental solution for the homogeneous equation (i. e. f = 0), we can easily construct solutions to the inhomogeneous problem. We shall now explain how this works.

Lemma 5.?:

Let \{T_\lambda : \lambda \in \Lambda\} \subseteq \mathcal D(O)^* be a family of distributions, where \Lambda \subseteq \mathbb R^d. Let's further assume that for all \varphi \in \mathcal D(O), the function \lambda \mapsto T_\lambda(\varphi) is continuous on \Lambda and bounded, and let f \in L^1(\Lambda). Then

T(\varphi) := \int_\Lambda f(\lambda) T_\lambda(\varphi) d \lambda

is a distribution.

Proof: Since f \in L^1(\Lambda), the tails of the integral of |f| become arbitrarily small; hence, for every i \in \mathbb N, there is a radius R_i \in \mathbb R_{>0} such that

\int_{\Lambda \setminus B_{R_i}(0)} | f(\lambda) | d\lambda < \frac{1}{2 i \|T_\lambda(\varphi)\|_\infty}

, where \|T_\lambda(\varphi)\|_\infty denotes the supremum of the function \lambda \mapsto |T_\lambda(\varphi)|, which is finite by assumption.

\overline{B_{R_i}(0)}, the closure of B_{R_i}(0), is a compact set, since it is bounded as well as closed. Since the function \lambda \mapsto T_\lambda(\varphi) is continuous, it is uniformly continuous on this compact set, and hence we may choose \delta_i \in \mathbb R_{>0} such that

\forall \nu \in \overline{B_{R_i}(0)} : \lambda \in B_{\delta_i} (\nu) \Rightarrow |T_\lambda(\varphi) - T_\nu(\varphi)| < \frac{1}{2i \|f\|_{L^1}}

. We may now divide \overline{B_{R_i}(0)} into finitely many (let's say n_i) pieces d_{m_i} of diameter at most \delta_i; for instance, we may take cubes, rounded off at the border so that they fit in with the sphere. Furthermore, we choose in each piece d_{m_i} a point \lambda_{m_i}.

We now define

T_i(\varphi) := \sum_{m=1}^{n_i} \int_{d_{m_i}} f(\lambda) T_{\lambda_{m_i}}(\varphi) d \lambda

, which is a finite linear combination of distributions and therefore a distribution. Due to the triangle inequality for the absolute value, the triangle inequality for the Lebesgue integral and the two estimates above, we obtain:

|T_i(\varphi) - T(\varphi)| \le \sum_{m=1}^{n_i} \int_{d_{m_i}} | f(\lambda)| |T_{\lambda_{m_i}}(\varphi) - T_\lambda(\varphi)| d \lambda + \int_{\Lambda \setminus B_{R_i}(0)} |f(\lambda)| |T_\lambda(\varphi)| d\lambda < \|f\|_{L^1} \frac{1}{2i \|f\|_{L^1}} + \frac{\|T_\lambda(\varphi)\|_\infty}{2 i \|T_\lambda(\varphi)\|_\infty} = \frac{1}{i}

The right hand side tends to zero as i \to \infty, and the lemma follows with Lemma 2.1.
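The approximation idea of this proof can be tried out numerically. In the following Python sketch (all concrete choices are the editor's, not from the text) we take T_\lambda = \delta_\lambda on \Lambda = [0, 1], so that T_\lambda(\varphi) = \varphi(\lambda) is continuous and bounded in \lambda, pick an integrable weight f, and watch the finite linear combinations converge to T(\varphi) = \int_0^1 f(\lambda) \varphi(\lambda) d\lambda:

```python
import numpy as np

# Illustration of the lemma with T_lambda = delta_lambda on
# Lambda = [0, 1]: then T(phi) = int_0^1 f(l) * phi(l) dl, and the
# finite linear combinations from the proof reduce to Riemann-type
# sums over small cells (here: the midpoint rule).
f = lambda l: 3.0 * l ** 2        # integrable weight f in L^1([0, 1])
phi_of = lambda l: np.cos(l)      # l -> T_l(phi) = phi(l), continuous

# Exact value of T(phi) = int_0^1 3 l^2 cos(l) dl, by parts:
# int l^2 cos(l) dl = l^2 sin(l) + 2 l cos(l) - 2 sin(l).
exact = 6.0 * np.cos(1.0) - 3.0 * np.sin(1.0)

def T_i(n):
    """Finite linear combination: one evaluation point per cell."""
    mids = (np.arange(n) + 0.5) / n       # one lambda_m per cell
    return float(np.sum(f(mids) * phi_of(mids)) / n)

errors = [abs(T_i(n) - exact) for n in (4, 16, 64)]
print(errors)  # shrinks as the cells get smaller
```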

Theorem 5.?:

Let's assume that in equation (*), f is integrable. Let K_{(\cdot)} be a fundamental solution for (*) such that for all \varphi \in \mathcal D(\R^d), the function \xi \mapsto K_\xi(\varphi) is bounded. Then

T(\varphi) = \int_{\R^d} f(\xi) K_\xi(\varphi) d\xi

is well-defined and solves (*) in the sense of distributions.

Proof: Since by the definition of fundamental solutions, the function \xi \mapsto K_\xi(\phi) is continuous, we may apply lemma 2.2, which gives us that T is indeed well-defined.

To show that it really solves (*) in the sense of distributions, we need the following calculation, where L := \sum_\alpha a_\alpha \partial_\alpha denotes the differential operator of (*) and L^* its formal adjoint, so that LT(\varphi) = T(L^* \varphi) for a distribution T:

LT(\varphi) = T(L^*\varphi) = \int_{\R^d} f(\xi) K_\xi(L^* \varphi) d\xi = \int_{\R^d} f(\xi) LK_\xi(\varphi) d\xi = \int_{\R^d} f(\xi) \delta_\xi(\varphi) d\xi = \int_{\R^d} f(\xi) \varphi(\xi) d\xi = T_f(\varphi)

, which is what we wanted to show.

Green's functions

Assume that for each \xi, the fundamental solution K_\xi is a regular distribution, i. e. for each \xi \in \Omega, there is an integrable function G( \cdot| \xi) such that K_\xi = T_{G(\cdot | \xi)}. Then we call this function G: \R^d \times \Omega \to \R a Green's function for L.

Green's kernels

Let's assume that L has the Green's function G(\cdot|\xi). If there exists a function \tilde G: \R^d \to \R such that

G(\cdot|\xi) = \tilde G(\cdot - \xi)

, then we call \tilde G a Green's kernel for L.
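A classical example (standard, though not derived in the text above): in one dimension, L = \frac{d^2}{dx^2} has the Green's kernel \tilde G(x) = |x|/2, since (|x|/2)'' = \delta_0 in the sense of distributions. The Python sketch below checks the defining property \int \tilde G(x - \xi) \varphi''(x) dx = \varphi(\xi) for one bump test function, with \varphi'' approximated by central differences:

```python
import numpy as np

# Classical 1D example: for L = d^2/dx^2 the kernel G(x) = |x| / 2
# satisfies (|x|/2)'' = delta_0 distributionally, so it is a Green's
# kernel.  We check int G(x - xi) phi''(x) dx = phi(xi) numerically;
# phi'' is taken by central differences, so this is only a sketch of
# the identity, not a proof.

def phi(x):
    """Bump test function with support [-1, 1]."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    m = np.abs(x) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return out

def phi_dd(x, h=1e-4):
    """Second derivative of phi by central differences."""
    return (phi(x + h) - 2.0 * phi(x) + phi(x - h)) / h ** 2

xi = 0.25
xs = np.linspace(-1.5, 1.5, 300001)
dx = xs[1] - xs[0]
integrand = 0.5 * np.abs(xs - xi) * phi_dd(xs)
approx = float(np.sum((integrand[:-1] + integrand[1:]) / 2) * dx)
expected = float(phi(np.array([xi]))[0])   # = delta_xi(phi)

print(approx, expected)  # both approximately 0.344
```

Here the two integrations by parts that justify the computation produce no boundary terms, because \varphi has compact support.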

Lemma 5.?:

Let \tilde G be a locally integrable function, and \Omega \subseteq \R^d be a domain. Then the family of distributions K_\xi := T_{\tilde G(\cdot - \xi)} \in \mathcal D'(\Omega) is well-defined and depends continuously on \xi. Furthermore, for each \phi \in \mathcal D(\Omega), the function \xi \mapsto K_\xi(\phi) is bounded.

Proof: Well-definedness follows from Lemma 1.3.

Let \phi \in \mathcal D(\Omega), and let \xi_n \to \xi. Then we can calculate the following:

T_{\tilde G(\cdot - \xi_n)}(\phi) - T_{\tilde G(\cdot - \xi)}(\phi) = \int_{\R^d} \tilde G(x - \xi_n) \phi(x) dx - \int_{\R^d} \tilde G(x - \xi) \phi(x) dx = \int_{\R^d} \tilde G(x) (\phi(x + \xi_n) - \phi(x + \xi)) dx

Hence, for n large enough that \|\xi_n - \xi\| < 1,

|T_{\tilde G(\cdot - \xi_n)}(\phi) - T_{\tilde G(\cdot - \xi)}(\phi)| \le \max_{x \in \R^d} |\phi(x + \xi_n) - \phi(x + \xi)| \underbrace{\int_{(\text{supp } \phi - \xi) + B_1(0)} |\tilde G(x)| dx}_\text{constant}

, where the right hand side goes to 0 as n \to \infty, since \phi has compact support and is therefore (even uniformly) continuous.

Furthermore, we have

T_{\tilde G(\cdot - \xi)}(\phi) = \int_{\R^d} \tilde G(x - \xi) \phi(x) dx = \int_{\text{supp } \phi - \xi} \tilde G(x) \phi(x + \xi) dx

, which is zero for \|\xi\| sufficiently large, which is why the function \xi \mapsto K_\xi(\phi) has compact support. But since the function is also continuous, we know that it obtains a maximum and a minimum and is therefore bounded.

This lemma shows that if we have found a locally integrable function \tilde G such that LT_{\tilde G(\cdot - \xi)} = \delta_\xi, we already know that it is a Green's kernel, and don't need to check the continuity property.

Theorem 5.?: (Fubini's theorem)

Let A \subseteq \mathbb R^i and B \subseteq \mathbb R^j, where i, j are arbitrary natural numbers, and let f: A \times B \to \mathbb R be an integrable function. Then

\int_A \int_B f(x, y) dy dx = \int_{A \times B} f(x, y) d(x, y) = \int_B \int_A f(x, y) dx dy
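A quick numerical illustration of Fubini's theorem (the integrand is an illustrative choice of the editor): for f(x, y) = x e^y on [0, 1]^2, both iterated integrals agree and equal (e - 1)/2:

```python
import numpy as np

# Fubini sketch on A = B = [0, 1] with f(x, y) = x * e^y: both iterated
# integrals (midpoint rule in each variable) agree and approximate the
# exact value (int_0^1 x dx)(int_0^1 e^y dy) = (e - 1) / 2.
n = 1000
x = (np.arange(n) + 0.5) / n              # midpoint nodes in [0, 1]
y = (np.arange(n) + 0.5) / n
F = np.outer(x, np.exp(y))                # F[i, j] = x_i * exp(y_j)

dy_then_dx = float(np.sum(np.sum(F, axis=1) / n) / n)  # inner dy, outer dx
dx_then_dy = float(np.sum(np.sum(F, axis=0) / n) / n)  # inner dx, outer dy
exact = 0.5 * (np.e - 1.0)

print(dy_then_dx, dx_then_dy, exact)
```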

Now this theorem finally shows us why distributions are useful:

Theorem 5.?:

Let \tilde G \in L^1(\R^d) be a Green's kernel for L, and let f \in L^\infty(\R^d). If

u(x) = (f * \tilde G)(x)

is sufficiently often differentiable such that L u is continuous, then it is a solution for (*) in the classical sense.

Proof: From a case of Hölder's inequality (namely p = 1, q = \infty, i. e. \|f \cdot \tilde G\|_{L^1} \le \|\tilde G\|_{L^1} \cdot \|f\|_{L^\infty}), we obtain that u is locally integrable, which is why T_u is a distribution in \mathcal D'(\R^d).

Furthermore, due to the theorem of Fubini, we have for \varphi \in \mathcal D(\R^d), that

T_u(\varphi) = \int_{\R^d} (f * \tilde G)(x) \varphi(x) dx = \int_{\R^d} \int_{\R^d} f(y) \tilde G(x - y) \varphi(x) dy dx = \int_{\R^d} \int_{\R^d} \tilde G(x - y) \varphi(x) dx ~ f(y) dy = \int_{\R^d} T_{\tilde G(\cdot - y)}(\varphi) f(y) dy

, which is why T_u solves (*) in the sense of distributions (this is due to theorem 2.3).

Thus, for all \varphi \in \mathcal D(\R^d), we can calculate the following:

\int_{\R^d} (Lu)(x) \varphi(x) dx = T_{Lu} (\varphi) = LT_u(\varphi) = T_f(\varphi) = \int_{\R^d} f(x) \varphi(x) dx

and therefore

\int_{\R^d} ((Lu)(x) - f(x)) \varphi(x) dx = 0.

It follows that Lu = f almost everywhere. But since Lu and f are both continuous, they must be equal everywhere. This is what we wanted to prove.
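The one-dimensional situation can again serve as a sanity check (an example of the editor, using the standard kernel \tilde G(x) = |x|/2 for L = \frac{d^2}{dx^2}): for a continuous compactly supported f, the convolution u = f * \tilde G should satisfy u'' = f. The Python sketch below compares a central-difference approximation of u'' with f at a sample point:

```python
import numpy as np

# 1D check of the convolution solution (illustrative example): with the
# Green's kernel G(x) = |x| / 2 for L = d^2/dx^2 and a continuous
# compactly supported f, u(x) = int f(y) |x - y| / 2 dy should satisfy
# u'' = f classically wherever f is continuous.

def f(y):
    """Continuous bump with support [-1, 1]."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    m = np.abs(y) < 1.0
    out[m] = np.cos(np.pi * y[m] / 2.0) ** 2
    return out

ys = np.linspace(-1.0, 1.0, 200001)       # quadrature grid for y
dy = ys[1] - ys[0]
fy = f(ys)

def u(x):
    """u(x) = (f * G)(x) by the trapezoid rule."""
    integrand = fy * np.abs(x - ys) / 2.0
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2) * dy)

x0, h = 0.3, 1e-3
u_dd = (u(x0 + h) - 2.0 * u(x0) + u(x0 - h)) / h ** 2
expected = float(f(np.array([x0]))[0])

print(u_dd, expected)  # u''(x0) is close to f(x0)
```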


Exercises

  1. Prove that if \mathcal Q is a set of differentiable functions which go from [0, 1]^d to \mathbb R, such that there exists a c \in \mathbb R_{>0} such that for all g \in \mathcal Q it holds \forall x \in [0, 1]^d : \|\nabla g(x)\| < c, and if (f_l)_{l \in \mathbb N} is a sequence in \mathcal Q for which the pointwise limit \lim_{l \to \infty} f_l(x) exists for all x \in [0, 1]^d, then f_l converges to a function uniformly on [0, 1]^d (hint: [0, 1]^d is sequentially compact; this can be proved with help of the Bolzano–Weierstrass theorem).

