# Partial Differential Equations/Fundamental solutions, Green's functions and Green's kernels

Partial Differential Equations
 ← Distributions Fundamental solutions, Green's functions and Green's kernels The heat equation →

In the last two chapters, we have studied test function spaces and distributions. In this chapter we will demonstrate a method to obtain solutions to linear partial differential equations which uses test function spaces and distributions.

## Distributional and fundamental solutions

In the last chapter, we defined multiplication of a distribution with a smooth function, as well as derivatives of distributions. Therefore, for a distribution ${\displaystyle {\mathcal {T}}}$, we are able to calculate such expressions as

${\displaystyle a\cdot \partial _{\alpha }{\mathcal {T}}}$

for a smooth function ${\displaystyle a:\mathbb {R} ^{d}\to \mathbb {R} }$ and a ${\displaystyle d}$-dimensional multiindex ${\displaystyle \alpha \in \mathbb {N} _{0}^{d}}$. We therefore observe that in a linear partial differential equation of the form

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

we could insert any distribution ${\displaystyle {\mathcal {T}}}$ in place of ${\displaystyle u}$ on the left hand side. However, the equation would then no longer make sense as stated, because the right hand side is a function, while the left hand side is a distribution (finite sums of distributions are again distributions by exercise 4.1; recall that only finitely many ${\displaystyle a_{\alpha }}$ are allowed to be nonzero, see definition 1.2). If, however, we replace the right hand side by ${\displaystyle {\mathcal {T}}_{f}}$ (the regular distribution corresponding to ${\displaystyle f}$), then there may be distributions ${\displaystyle {\mathcal {T}}}$ which satisfy the equation. In this case, we speak of a distributional solution. Let's summarise this definition in a box.

Definition 5.1:

Let ${\displaystyle O\subseteq \mathbb {R} ^{d}}$ be open, let

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

be a linear partial differential equation, and let ${\displaystyle {\mathcal {T}}\in {\mathcal {D}}(O)^{*}}$. ${\displaystyle {\mathcal {T}}}$ is called a distributional solution to the above linear partial differential equation if and only if

${\displaystyle \forall \varphi \in {\mathcal {D}}(O):\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }{\mathcal {T}}(\varphi )={\mathcal {T}}_{f}(\varphi )}$.
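
As a quick check of the definition (a worked example of ours, not from the text), take ${\displaystyle d=1}$, ${\displaystyle O=\mathbb {R} }$ and the equation ${\displaystyle u'=f}$ with ${\displaystyle f}$ continuous. If ${\displaystyle u}$ is a classical solution, then ${\displaystyle {\mathcal {T}}_{u}}$ is a distributional solution, since for every ${\displaystyle \varphi \in {\mathcal {D}}(\mathbb {R} )}$ integration by parts (with no boundary terms, as ${\displaystyle \varphi }$ has compact support) gives

${\displaystyle \partial {\mathcal {T}}_{u}(\varphi )=-{\mathcal {T}}_{u}(\varphi ')=-\int _{\mathbb {R} }u(x)\varphi '(x)dx=\int _{\mathbb {R} }u'(x)\varphi (x)dx=\int _{\mathbb {R} }f(x)\varphi (x)dx={\mathcal {T}}_{f}(\varphi )}$.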

Definition 5.2:

Let ${\displaystyle O\subseteq \mathbb {R} ^{d}}$ be open and let

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

be a linear partial differential equation. If ${\displaystyle F:O\to {\mathcal {D}}(O)^{*}}$ has the two properties

1. ${\displaystyle \forall \varphi \in {\mathcal {D}}(O):x\mapsto F(x)(\varphi )}$ is continuous and
2. ${\displaystyle \forall x\in O:\forall \varphi \in {\mathcal {D}}(O):\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }F(x)(\varphi )=\delta _{x}(\varphi )}$,

we call ${\displaystyle F}$ a fundamental solution for that partial differential equation.

For the definition of ${\displaystyle \delta _{x}}$ see exercise 4.5.
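
As a standard example (not from the text), consider ${\displaystyle d=1}$ and the equation ${\displaystyle u'=f}$. Setting ${\displaystyle F(x):={\mathcal {T}}_{H(\cdot -x)}}$, where ${\displaystyle H}$ denotes the Heaviside step function, yields a fundamental solution: the function ${\displaystyle x\mapsto F(x)(\varphi )=\int _{x}^{\infty }\varphi (y)dy}$ is continuous, and for every ${\displaystyle \varphi \in {\mathcal {D}}(\mathbb {R} )}$

${\displaystyle \partial F(x)(\varphi )=-\int _{\mathbb {R} }H(y-x)\varphi '(y)dy=-\int _{x}^{\infty }\varphi '(y)dy=\varphi (x)=\delta _{x}(\varphi )}$.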

Lemma 5.3:

Let ${\displaystyle O\subseteq \mathbb {R} ^{d}}$ be open and let ${\displaystyle \{{\mathcal {T}}_{x}|x\in S\}\subseteq {\mathcal {D}}(O)^{*}}$ be a set of distributions, where ${\displaystyle S\subseteq \mathbb {R} ^{d}}$. Let's further assume that for all ${\displaystyle \varphi \in {\mathcal {D}}(O)}$, the function ${\displaystyle S\to \mathbb {R} ,x\mapsto {\mathcal {T}}_{x}(\varphi )}$ is continuous and bounded, and let ${\displaystyle f\in L^{1}(S)}$ be compactly supported. Then

${\displaystyle {\mathcal {T}}(\varphi ):=\int _{S}f(x){\mathcal {T}}_{x}(\varphi )dx}$

is a distribution.

Proof:

Let ${\displaystyle C\subset \mathbb {R} ^{d}}$ be the support of ${\displaystyle f}$. For ${\displaystyle \varphi \in {\mathcal {D}}(O)}$, let us denote the supremum norm of the function ${\displaystyle C\to \mathbb {R} ,x\mapsto {\mathcal {T}}_{x}(\varphi )}$ by

${\displaystyle \|{\mathcal {T}}_{\cdot }(\varphi )\|_{\infty }}$.

If ${\displaystyle \|f\|_{L^{1}}=0}$, then ${\displaystyle {\mathcal {T}}}$ is identically zero and hence a distribution; and if ${\displaystyle \|{\mathcal {T}}_{\cdot }(\varphi )\|_{\infty }=0}$ for the given ${\displaystyle \varphi }$, then ${\displaystyle {\mathcal {T}}(\varphi )=0}$. Hence, we only need to treat the case where both ${\displaystyle \|f\|_{L^{1}}\neq 0}$ and ${\displaystyle \|{\mathcal {T}}_{\cdot }(\varphi )\|_{\infty }\neq 0}$.

For each ${\displaystyle n\in \mathbb {N} }$, ${\displaystyle {\overline {B_{n}(0)}}}$ is a compact set since it is bounded and closed. Therefore, we may cover ${\displaystyle {\overline {B_{n}(0)}}\cap S}$ by finitely many pairwise disjoint sets ${\displaystyle Q_{n,1},\ldots ,Q_{n,k_{n}}}$ with diameter at most ${\displaystyle 1/n}$ (for convenience, we choose these sets to be subsets of ${\displaystyle {\overline {B_{n}(0)}}\cap S}$). Furthermore, we choose ${\displaystyle x_{n,1}\in Q_{n,1},\ldots ,x_{n,k_{n}}\in Q_{n,k_{n}}}$.

For each ${\displaystyle n\in \mathbb {N} }$, we define

${\displaystyle {\mathcal {T}}_{n}(\varphi ):=\sum _{j=1}^{k_{n}}\int _{Q_{n,j}}f(x){\mathcal {T}}_{x_{n,j}}(\varphi )dx}$

which is a finite linear combination of distributions and therefore a distribution (see exercise 4.1).

Let now ${\displaystyle \varphi \in {\mathcal {D}}(O)}$ and ${\displaystyle \epsilon >0}$ be arbitrary. We choose ${\displaystyle N_{1}\in \mathbb {N} }$ such that for all ${\displaystyle n\geq N_{1}}$

${\displaystyle \forall x\in {\overline {B_{n}(0)}}\cap S:y\in B_{1/n}(x)\Rightarrow |{\mathcal {T}}_{x}(\varphi )-{\mathcal {T}}_{y}(\varphi )|<{\frac {\epsilon }{2\|f\|_{L^{1}}}}}$.

This we may do because continuous functions are uniformly continuous on compact sets. Further, we choose ${\displaystyle N_{2}\in \mathbb {N} }$ such that for all ${\displaystyle n\geq N_{2}}$

${\displaystyle \int _{S\setminus B_{n}(0)}|f(x)|dx<{\frac {\epsilon }{2\|{\mathcal {T}}_{\cdot }(\varphi )\|_{\infty }}}}$.

This we may do due to dominated convergence. Since for ${\displaystyle n\geq N:=\max\{N_{1},N_{2}\}}$

${\displaystyle |{\mathcal {T}}_{n}(\varphi )-{\mathcal {T}}(\varphi )|\leq \sum _{j=1}^{k_{n}}\int _{Q_{n,j}}|f(x)||{\mathcal {T}}_{x_{n,j}}(\varphi )-{\mathcal {T}}_{x}(\varphi )|dx+\int _{S\setminus B_{n}(0)}|f(x)||{\mathcal {T}}_{x}(\varphi )|dx<{\frac {\epsilon }{2\|f\|_{L^{1}}}}\|f\|_{L^{1}}+{\frac {\epsilon }{2\|{\mathcal {T}}_{\cdot }(\varphi )\|_{\infty }}}\|{\mathcal {T}}_{\cdot }(\varphi )\|_{\infty }=\epsilon }$,

${\displaystyle \forall \varphi \in {\mathcal {D}}(O):{\mathcal {T}}_{n}(\varphi )\to {\mathcal {T}}(\varphi ),n\to \infty }$. Thus, the claim follows from theorem AI.33.${\displaystyle \Box }$
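
To see the approximation from the proof in action, here is a small numerical sketch (our own illustration, not part of the text; all function names are ours). We take ${\displaystyle {\mathcal {T}}_{x}=\delta _{x}}$, so that ${\displaystyle {\mathcal {T}}(\varphi )=\int f(x)\varphi (x)dx}$, choose ${\displaystyle f}$ as the indicator of ${\displaystyle [-1/2,1/2]}$, and check that the Riemann-sum distributions ${\displaystyle {\mathcal {T}}_{n}}$ converge to ${\displaystyle {\mathcal {T}}}$ on a bump function.

```python
import math

def phi(x):
    # a smooth bump supported in (-1, 1): a genuine test function
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def T(n_quad=100_000):
    # reference value T(phi) = integral of phi over [-1/2, 1/2] (fine midpoint rule),
    # since f is the indicator of [-1/2, 1/2] and T_x(phi) = phi(x)
    h = 1.0 / n_quad
    return sum(phi(-0.5 + (j + 0.5) * h) for j in range(n_quad)) * h

def T_n(n):
    # the approximating distribution from the proof: cells Q_{n,j} of diameter 1/n
    # covering supp f, with sample points x_{n,j} at the left cell endpoints
    h = 1.0 / n
    total = 0.0
    for j in range(2 * n):                                # cells covering [-1, 1]
        x = -1.0 + j * h
        mass = max(0.0, min(x + h, 0.5) - max(x, -0.5))   # integral of f over Q_{n,j}
        total += mass * phi(x)                            # times T_{x_{n,j}}(phi)
    return total
```

As ${\displaystyle n}$ grows, ${\displaystyle {\mathcal {T}}_{n}(\varphi )}$ approaches ${\displaystyle {\mathcal {T}}(\varphi )}$, mirroring the ${\displaystyle \epsilon }$-estimate in the proof.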

Theorem 5.4:

Let ${\displaystyle O\subseteq \mathbb {R} ^{d}}$ be open, let

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

be a linear partial differential equation such that ${\displaystyle f}$ is integrable and has compact support. Let ${\displaystyle F}$ be a fundamental solution of the PDE. Then

${\displaystyle {\mathcal {T}}:{\mathcal {D}}(O)\to \mathbb {R} ,{\mathcal {T}}(\varphi ):=\int _{\mathbb {R} ^{d}}f(x)F(x)(\varphi )dx}$

is a distribution which is a distributional solution for the partial differential equation.

Proof: Since by the definition of fundamental solutions the function ${\displaystyle x\mapsto F(x)(\varphi )}$ is continuous for all ${\displaystyle \varphi \in {\mathcal {D}}(O)}$, lemma 5.3 implies that ${\displaystyle {\mathcal {T}}}$ is a distribution.

Further, by definitions 4.16,

${\displaystyle {\begin{aligned}\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }{\mathcal {T}}(\varphi )&={\mathcal {T}}\left(\sum _{\alpha \in \mathbb {N} _{0}^{d}}(-1)^{|\alpha |}\partial _{\alpha }(a_{\alpha }\varphi )\right)\\&=\int _{\mathbb {R} ^{d}}f(x)F(x)\left(\sum _{\alpha \in \mathbb {N} _{0}^{d}}(-1)^{|\alpha |}\partial _{\alpha }(a_{\alpha }\varphi )\right)dx\\&=\int _{\mathbb {R} ^{d}}f(x)\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }F(x)(\varphi )dx\\&=\int _{\mathbb {R} ^{d}}f(x)\delta _{x}(\varphi )dx\\&=\int _{\mathbb {R} ^{d}}f(x)\varphi (x)dx\\&={\mathcal {T}}_{f}(\varphi )\end{aligned}}}$. ${\displaystyle \Box }$

Lemma 5.5:

Let ${\displaystyle \varphi \in {\mathcal {D}}(\mathbb {R} ^{d})}$, ${\displaystyle f\in {\mathcal {C}}^{\infty }(\mathbb {R} ^{d})}$, ${\displaystyle \alpha \in \mathbb {N} _{0}^{d}}$ and ${\displaystyle {\mathcal {T}}\in {\mathcal {D}}(\mathbb {R} ^{d})^{*}}$. Then

${\displaystyle f\partial _{\alpha }({\mathcal {T}}*\varphi )=(f\partial _{\alpha }{\mathcal {T}})*\varphi }$.

Proof:

By theorem 4.21 2., for all ${\displaystyle x\in \mathbb {R} ^{d}}$

${\displaystyle {\begin{aligned}f\partial _{\alpha }({\mathcal {T}}*\varphi )(x)&=f{\mathcal {T}}*(\partial _{\alpha }\varphi )(x)\\&=f{\mathcal {T}}((\partial _{\alpha }\varphi )(x-\cdot ))\\&=f{\mathcal {T}}\left((-1)^{|\alpha |}\partial _{\alpha }(\varphi (x-\cdot ))\right)\\&=f(\partial _{\alpha }{\mathcal {T}})(\varphi (x-\cdot ))\\&=(\partial _{\alpha }{\mathcal {T}})(f\varphi (x-\cdot ))\\&=(f\partial _{\alpha }{\mathcal {T}})(\varphi (x-\cdot ))=(f\partial _{\alpha }{\mathcal {T}})*\varphi (x)\end{aligned}}}$. ${\displaystyle \Box }$

Theorem 5.6:

Let ${\displaystyle {\mathcal {T}}}$ be a solution of the equation

${\displaystyle \forall \varphi \in {\mathcal {D}}(\mathbb {R} ^{d}):\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }{\mathcal {T}}(\varphi )=\delta _{0}(\varphi )}$,

where only finitely many ${\displaystyle a_{\alpha }}$ are nonzero, and let ${\displaystyle \vartheta \in {\mathcal {D}}(\mathbb {R} ^{d})}$. Then ${\displaystyle u:={\mathcal {T}}*\vartheta }$ solves

${\displaystyle \sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }u=\vartheta }$.

Proof:

By lemma 5.5, we have

${\displaystyle {\begin{aligned}\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }u(x)&=\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }({\mathcal {T}}*\vartheta )(x)\\&=\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(\partial _{\alpha }{\mathcal {T}})*\vartheta (x)\\&=\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }{\mathcal {T}}(\vartheta (x-\cdot ))\\&=\delta _{0}(\vartheta (x-\cdot ))=\vartheta (x)\end{aligned}}}$. ${\displaystyle \Box }$
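
A one-dimensional illustration (ours, not from the text): with ${\displaystyle H}$ the Heaviside step function, ${\displaystyle {\mathcal {T}}_{H}}$ satisfies ${\displaystyle \partial {\mathcal {T}}_{H}(\varphi )=-\int _{0}^{\infty }\varphi '(y)dy=\varphi (0)=\delta _{0}(\varphi )}$, so by the theorem ${\displaystyle u:={\mathcal {T}}_{H}*\vartheta }$ solves ${\displaystyle u'=\vartheta }$. Indeed,

${\displaystyle u(x)={\mathcal {T}}_{H}(\vartheta (x-\cdot ))=\int _{0}^{\infty }\vartheta (x-y)dy=\int _{-\infty }^{x}\vartheta (z)dz}$,

whose derivative is ${\displaystyle \vartheta (x)}$ by the fundamental theorem of calculus.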

## Partitions of unity

In this section you will get to know a very important tool in mathematics, namely partitions of unity. We will use it in this chapter and also later in the book. In order to prove the existence of partitions of unity (we will soon define what this is), we need a few definitions first.

Definitions 5.7:

Let ${\displaystyle S\subseteq \mathbb {R} ^{d}}$ be a set. We define:

• ${\displaystyle \partial S:=\left\{x\in \mathbb {R} ^{d}{\big |}\forall \epsilon >0:B_{\epsilon }(x)\cap S\neq \emptyset \wedge B_{\epsilon }(x)\cap (\mathbb {R} ^{d}\setminus S)\neq \emptyset \right\}}$
• ${\displaystyle {\overset {\circ }{S}}:=S\setminus \partial S}$

${\displaystyle \partial S}$ is called the boundary of ${\displaystyle S}$ and ${\displaystyle {\overset {\circ }{S}}}$ is called the interior of ${\displaystyle S}$. Further, if ${\displaystyle x\in \mathbb {R} ^{d}}$, we define

${\displaystyle {\text{dist}}(S,x):=\inf _{y\in S}\|x-y\|}$.

We also need definition 3.13 in the proof, which is why we restate it now:

Definition 3.13:

For ${\displaystyle R\in \mathbb {R} _{>0}}$, we define

${\displaystyle \eta _{R}:\mathbb {R} ^{d}\to \mathbb {R} ,\eta _{R}(x)=\eta \left({\frac {x}{R}}\right){\big /}R^{d}}$.

Theorem and definitions 5.8: Let ${\displaystyle O\subseteq \mathbb {R} ^{d}}$ be an open set, and let ${\displaystyle U_{\upsilon },\upsilon \in \Upsilon }$ be open subsets of ${\displaystyle \mathbb {R} ^{d}}$ such that ${\displaystyle \bigcup _{\upsilon \in \Upsilon }U_{\upsilon }=O}$ (i. e. the sets ${\displaystyle U_{\upsilon },\upsilon \in \Upsilon }$ form an open cover of ${\displaystyle O}$). Then there exists a sequence of functions ${\displaystyle (\eta _{l})_{l\in \mathbb {N} }}$ in ${\displaystyle {\mathcal {D}}(\mathbb {R} ^{d})}$ such that the following conditions are satisfied:

1. ${\displaystyle \forall n\in \mathbb {N} :\forall x\in O:0\leq \eta _{n}(x)\leq 1}$
2. ${\displaystyle \forall n\in \mathbb {N} :\exists \upsilon \in \Upsilon :{\text{supp }}\eta _{n}\subseteq U_{\upsilon }}$
3. ${\displaystyle \forall x\in O:|\{n\in \mathbb {N} |\eta _{n}(x)\neq 0\}|<\infty }$
4. ${\displaystyle \forall x\in O:\sum _{i=1}^{\infty }\eta _{i}(x)=1}$

The sequence ${\displaystyle (\eta _{l})_{l\in \mathbb {N} }}$ is called a partition of unity for ${\displaystyle O}$ with respect to ${\displaystyle U_{\upsilon },\upsilon \in \Upsilon }$.

Proof: We will prove this by explicitly constructing such a sequence of functions.

1. First, we construct a sequence of open balls ${\displaystyle (B_{l})_{l\in \mathbb {N} }}$ with the properties

• ${\displaystyle \forall n\in \mathbb {N} :\exists \upsilon \in \Upsilon :{\overline {B_{n}}}\subseteq U_{\upsilon }}$
• ${\displaystyle \forall x\in O:|\{n\in \mathbb {N} |x\in {\overline {B_{n}}}\}|<\infty }$
• ${\displaystyle \bigcup _{j\in \mathbb {N} }B_{j}=O}$.

In order to do this, we first start with the definition of a sequence of compact sets; for each ${\displaystyle n\in \mathbb {N} }$, we define

${\displaystyle K_{n}:=\left\{x\in O{\big |}{\text{dist}}(\partial O,x)\geq {\frac {1}{n}},\|x\|\leq n\right\}}$.

This sequence has the properties

• ${\displaystyle \bigcup _{j\in \mathbb {N} }K_{j}=O}$
• ${\displaystyle \forall n\in \mathbb {N} :K_{n}\subset {\overset {\circ }{K_{n+1}}}}$.

We now construct ${\displaystyle (B_{l})_{l\in \mathbb {N} }}$ such that

• ${\displaystyle K_{1}\subset \bigcup _{1\leq j\leq k_{1}}B_{j}\subseteq {\overset {\circ }{K_{2}}}}$ and
• ${\displaystyle \forall n\in \mathbb {N} :K_{n+1}\setminus {\overset {\circ }{K_{n}}}\subset \bigcup _{k_{n}<j\leq k_{n+1}}B_{j}\subseteq {\overset {\circ }{K_{n+2}}}\setminus K_{n-1}}$ (where we set ${\displaystyle K_{0}:=\emptyset }$)

for some ${\displaystyle k_{1},k_{2},\ldots \in \mathbb {N} }$. We do this in the following way: To meet the first condition, we first cover ${\displaystyle K_{1}}$ with balls by choosing for every ${\displaystyle x\in K_{1}}$ a ball ${\displaystyle B_{x}}$ such that ${\displaystyle B_{x}\subseteq U_{\upsilon }\cap {\overset {\circ }{K_{2}}}}$ for an ${\displaystyle \upsilon \in \Upsilon }$. Since these balls cover ${\displaystyle K_{1}}$, and ${\displaystyle K_{1}}$ is compact, we may choose a finite subcover ${\displaystyle B_{1},\ldots B_{k_{1}}}$.

To meet the second condition, we proceed analogously, noting that for all ${\displaystyle n\in \mathbb {N} _{\geq 2}}$, ${\displaystyle K_{n+1}\setminus {\overset {\circ }{K_{n}}}}$ is compact and ${\displaystyle {\overset {\circ }{K_{n+2}}}\setminus K_{n-1}}$ is open.

This sequence of open balls has the properties which we wished for.

2. We choose the respective functions. Since each ${\displaystyle B_{n}}$, ${\displaystyle n\in \mathbb {N} }$ is an open ball, it has the form

${\displaystyle B_{n}=B_{R_{n}}(x_{n})}$

where ${\displaystyle R_{n}\in \mathbb {R} }$ and ${\displaystyle x_{n}\in \mathbb {R} ^{d}}$.

It is easy to prove that the function defined by

${\displaystyle {\tilde {\eta }}_{n}(x):=\eta _{R_{n}}(x-x_{n})}$

satisfies ${\displaystyle {\tilde {\eta }}_{n}(x)>0}$ if and only if ${\displaystyle x\in B_{n}}$, and ${\displaystyle {\tilde {\eta }}_{n}(x)=0}$ otherwise. Hence, also ${\displaystyle {\text{supp }}{\tilde {\eta }}_{n}={\overline {B_{n}}}}$. We define

${\displaystyle \eta (x):=\sum _{j=1}^{\infty }{\tilde {\eta }}_{j}(x)}$

and, for each ${\displaystyle n\in \mathbb {N} }$,

${\displaystyle \eta _{n}:={\frac {{\tilde {\eta }}_{n}}{\eta }}}$.

Then, since ${\displaystyle \eta }$ is positive on ${\displaystyle O}$ (every point of ${\displaystyle O}$ lies in some ${\displaystyle B_{j}}$), the sequence ${\displaystyle (\eta _{l})_{l\in \mathbb {N} }}$ is a sequence of functions in ${\displaystyle {\mathcal {D}}(\mathbb {R} ^{d})}$ and, as can be easily checked, it has the properties 1. - 4.${\displaystyle \Box }$
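
The construction above can be imitated numerically in one dimension. The following sketch is our own illustration (the interval, the centers and the radius are choices made for the example): it builds a partition of unity on ${\displaystyle O=(0,3)}$ from scaled bump functions on overlapping balls and normalises them exactly as in the proof.

```python
import math

def bump(t):
    # prototype mollifier: smooth, positive exactly on (-1, 1), zero outside
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

# balls B_j = (c_j - r, c_j + r) whose union contains O = (0, 3);
# consecutive supports overlap, so the sum below is positive on O
centers = [0.3 + 0.6 * j for j in range(5)]
r = 0.6

def eta_tilde(j, x):
    # tilde-eta_j, supported in the closed ball around centers[j]
    return bump((x - centers[j]) / r)

def eta(j, x):
    # eta_j = tilde-eta_j / sum_k tilde-eta_k; the denominator is positive on O
    denom = sum(eta_tilde(k, x) for k in range(len(centers)))
    return eta_tilde(j, x) / denom
```

At every point of ${\displaystyle O}$ the ${\displaystyle \eta _{j}}$ are nonnegative, each is supported in a single ball, and they sum to ${\displaystyle 1}$ -- properties 1. - 4. in the one-dimensional setting.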

## Green's functions and Green's kernels

Definition 5.9:

Let

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

be a linear partial differential equation. A function ${\displaystyle G:\mathbb {R} ^{d}\times \mathbb {R} ^{d}\to \mathbb {R} }$ such that for all ${\displaystyle x\in \mathbb {R} ^{d}}$ ${\displaystyle {\mathcal {T}}_{G(\cdot ,x)}}$ is well-defined and

${\displaystyle F(x):={\mathcal {T}}_{G(\cdot ,x)}}$

is a fundamental solution of that partial differential equation is called a Green's function of that partial differential equation.

Definition 5.10:

Let

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

be a linear partial differential equation. A function ${\displaystyle K:\mathbb {R} ^{d}\to \mathbb {R} }$ such that the function

${\displaystyle G(y,x):=K(y-x)}$

is a Green's function for that partial differential equation is called a Green's kernel of that partial differential equation.
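
A standard one-dimensional example (not from the text): ${\displaystyle K(y):=|y|/2}$ is a Green's kernel for the equation ${\displaystyle u''=f}$, since for all ${\displaystyle x\in \mathbb {R} }$ and ${\displaystyle \varphi \in {\mathcal {D}}(\mathbb {R} )}$, integrating by parts on each half-line gives

${\displaystyle \partial ^{2}{\mathcal {T}}_{K(\cdot -x)}(\varphi )={\mathcal {T}}_{K(\cdot -x)}(\varphi '')=\int _{-\infty }^{x}{\frac {x-y}{2}}\varphi ''(y)dy+\int _{x}^{\infty }{\frac {y-x}{2}}\varphi ''(y)dy={\frac {\varphi (x)}{2}}+{\frac {\varphi (x)}{2}}=\delta _{x}(\varphi )}$,

so that ${\displaystyle F(x):={\mathcal {T}}_{K(\cdot -x)}}$ is a fundamental solution (continuity of ${\displaystyle x\mapsto F(x)(\varphi )}$ follows from theorem 5.12 below).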

Theorem 5.11:

Let

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u(x)=f(x)}$

be a linear partial differential equation (in the following, we will sometimes write PDE for partial differential equation) such that ${\displaystyle f\in {\mathcal {C}}(\mathbb {R} ^{d})}$, and let ${\displaystyle K}$ be a Green's kernel for that PDE. If

${\displaystyle u:=f*K}$

exists and ${\displaystyle \sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }u}$ exists and is continuous, then ${\displaystyle u}$ solves the partial differential equation.

Proof:

We choose ${\displaystyle (\eta _{l})_{l\in \mathbb {N} }}$ to be a partition of unity for ${\displaystyle O}$, where the open cover of ${\displaystyle O}$ consists only of the set ${\displaystyle O}$ itself. Then, by the definition of a partition of unity,

${\displaystyle f=\sum _{j\in \mathbb {N} }\eta _{j}f}$.

For each ${\displaystyle n\in \mathbb {N} }$, we define

${\displaystyle f_{n}:=\eta _{n}f}$

and

${\displaystyle u_{n}:=f_{n}*K}$.

By Fubini's theorem, for all ${\displaystyle \varphi \in {\mathcal {D}}(\mathbb {R} ^{d})}$ and ${\displaystyle n\in \mathbb {N} }$

${\displaystyle {\begin{aligned}\int _{\mathbb {R} ^{d}}{\mathcal {T}}_{K(\cdot -y)}(\varphi )f_{n}(y)dy&=\int _{\mathbb {R} ^{d}}\int _{\mathbb {R} ^{d}}K(x-y)\varphi (x)dxf_{n}(y)dy\\&=\int _{\mathbb {R} ^{d}}\int _{\mathbb {R} ^{d}}f_{n}(y)K(x-y)\varphi (x)dydx\\&=\int _{\mathbb {R} ^{d}}(f_{n}*K)(x)\varphi (x)dx\\&={\mathcal {T}}_{u_{n}}(\varphi )\end{aligned}}}$.

Hence, ${\displaystyle {\mathcal {T}}_{u_{n}}}$ as given in theorem 4.11 is a well-defined distribution.

Theorem 5.4 implies that ${\displaystyle {\mathcal {T}}_{u_{n}}}$ is a distributional solution to the PDE

${\displaystyle \forall x\in O:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(x)\partial _{\alpha }u_{n}(x)=f_{n}(x)}$.

Thus, for all ${\displaystyle \varphi \in {\mathcal {D}}(\mathbb {R} ^{d})}$ we have, using theorem 4.19,

${\displaystyle {\begin{aligned}\int _{\mathbb {R} ^{d}}\left(\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }u_{n}\right)(x)\varphi (x)dx&={\mathcal {T}}_{\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }u_{n}}(\varphi )\\&=\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }{\mathcal {T}}_{u_{n}}(\varphi )\\&={\mathcal {T}}_{f_{n}}(\varphi )=\int _{\mathbb {R} ^{d}}f_{n}(x)\varphi (x)dx\end{aligned}}}$.

Since ${\displaystyle \sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }u_{n}}$ and ${\displaystyle f_{n}}$ are both continuous, they must be equal due to theorem 3.17. Summing both sides of the equation over ${\displaystyle n}$ yields the theorem.${\displaystyle \Box }$

Theorem 5.12:

Let ${\displaystyle K\in L_{\text{loc}}^{1}}$ and let ${\displaystyle O\subseteq \mathbb {R} ^{d}}$ be open. Then for all ${\displaystyle \varphi \in {\mathcal {D}}(O)}$, the function ${\displaystyle x\mapsto {\mathcal {T}}_{K(\cdot -x)}(\varphi )}$ is continuous.

Proof:

If ${\displaystyle x_{l}\to x,l\to \infty }$, then

${\displaystyle {\begin{aligned}\left|{\mathcal {T}}_{K(\cdot -x_{l})}(\varphi )-{\mathcal {T}}_{K(\cdot -x)}(\varphi )\right|&=\left|\int _{\mathbb {R} ^{d}}K(y-x_{l})\varphi (y)dy-\int _{\mathbb {R} ^{d}}K(y-x)\varphi (y)dy\right|\\&=\left|\int _{\mathbb {R} ^{d}}K(y)(\varphi (y+x_{l})-\varphi (y+x))dy\right|\\&\leq \max _{y\in \mathbb {R} ^{d}}|\varphi (y+x_{l})-\varphi (y+x)|\underbrace {\int _{{\text{supp }}\varphi -x+B_{1}(0)}|K(y)|dy} _{\text{constant}}\end{aligned}}}$

for sufficiently large ${\displaystyle l}$, where the maximum in the last expression converges to ${\displaystyle 0}$ as ${\displaystyle l\to \infty }$, since the support of ${\displaystyle \varphi }$ is compact and therefore ${\displaystyle \varphi }$ is uniformly continuous by the Heine–Cantor theorem.${\displaystyle \Box }$

The last theorem shows that if we have found a locally integrable function ${\displaystyle K}$ such that

${\displaystyle \forall x\in \mathbb {R} ^{d}:\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }\partial _{\alpha }{\mathcal {T}}_{K(\cdot -x)}=\delta _{x}}$,

we have found a Green's kernel ${\displaystyle K}$ for the respective PDE. We will rely on this theorem in our procedure for obtaining solutions to the heat equation and Poisson's equation.
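
As a concrete sanity check of theorem 5.11 (our own numerical illustration, not part of the text), take ${\displaystyle d=1}$, the equation ${\displaystyle u''=f}$ and the Green's kernel ${\displaystyle K(y)=|y|/2}$, whose second distributional derivative is ${\displaystyle \delta _{0}}$. For ${\displaystyle f}$ the indicator of ${\displaystyle [-1,1]}$ (only piecewise continuous, but the convolution identity can still be checked) one computes ${\displaystyle u=f*K}$ in closed form: ${\displaystyle u(x)=(x^{2}+1)/2}$ for ${\displaystyle |x|\leq 1}$, hence ${\displaystyle u''=1=f}$ there. The snippet compares a numerical convolution against this.

```python
def K(y):
    # Green's kernel |y|/2 for u'' = f in one dimension (K'' = delta_0 distributionally)
    return abs(y) / 2.0

def f(y):
    # right-hand side: indicator of [-1, 1] (compactly supported)
    return 1.0 if -1.0 <= y <= 1.0 else 0.0

def u(x, n=20_000):
    # u = f * K by a midpoint rule over supp f = [-1, 1]
    h = 2.0 / n
    total = 0.0
    for j in range(n):
        y = -1.0 + (j + 0.5) * h
        total += f(y) * K(x - y)
    return total * h
```

Inside ${\displaystyle (-1,1)}$, a centered second difference of the numerically computed ${\displaystyle u}$ recovers ${\displaystyle f}$, mirroring how convolution with a Green's kernel inverts the differential operator.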
