Partial Differential Equations/Print version


Introduction and first examples


What is a partial differential equation?


Let d be a natural number, and let O ⊆ ℝ^d be an arbitrary set. A partial differential equation on O looks like this:

is an arbitrary function here, specific to the partial differential equation, which goes from to , where is a natural number. And a solution to this partial differential equation on is a function satisfying the above logical statement. The solutions of some partial differential equations describe processes in nature; this is one reason why they are so important.

Multiindices


In the whole theory of partial differential equations, multiindices are extremely important. Only with their help are we able to write down certain formulas much more briefly.

Definitions 1.1:

A d-dimensional multiindex is a vector α = (α_1, …, α_d), where the α_i are elements of ℕ_0, the natural numbers together with zero.

If α is a multiindex, then its absolute value |α| is defined by

|α| := α_1 + α_2 + ⋯ + α_d.

If α is a d-dimensional multiindex, O ⊆ ℝ^d is an arbitrary set and f : O → ℝ is sufficiently often differentiable, we define ∂_α f, the α-th derivative of f, as follows:

∂_α f := ∂^{α_1}_{x_1} ∂^{α_2}_{x_2} ⋯ ∂^{α_d}_{x_d} f
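To get a feeling for this notation, here is a small Python sketch (an illustrative addition, not part of the original text) that computes |α| and the multiindex derivative ∂_α f for d = 2; the concrete function f and multiindex α are arbitrary demo choices.

    # Illustrative sketch: |alpha| and the multiindex derivative
    # d_alpha f = d^{alpha_1}/dx_1^{alpha_1} ... d^{alpha_d}/dx_d^{alpha_d} f, using sympy.
    # The function f and the multiindex alpha below are arbitrary demo choices.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    f = sp.sin(x1) * sp.exp(x2)          # an example function f : R^2 -> R
    alpha = (2, 1)                       # an example 2-dimensional multiindex

    abs_alpha = sum(alpha)               # |alpha| = alpha_1 + ... + alpha_d
    d_alpha_f = sp.diff(f, x1, alpha[0], x2, alpha[1])

    print(abs_alpha)                     # 3
    print(d_alpha_f)                     # -exp(x2)*sin(x1)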

Types of partial differential equations


We classify partial differential equations into several types, because for partial differential equations of one type we will need different solution techniques than for differential equations of other types. We classify them into linear and nonlinear equations, and into equations of different orders.

Definitions 1.2:

A linear partial differential equation is an equation of the form

, where only finitely many of the coefficient functions are not the constant zero function. A solution takes the form of a function . For an arbitrary multiindex, the corresponding coefficient is an arbitrary function, and the sum in the formula is taken over all possible d-dimensional multiindices. If the right-hand side is the zero function, the equation is called homogeneous.

A partial differential equation is called nonlinear iff it is not a linear partial differential equation.
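To make the distinction concrete, the following Python sketch (an illustrative addition using sympy) checks the defining property of linearity for the two-dimensional Laplace operator and shows that it fails for the operator appearing in Burgers' equation; both operators are standard examples chosen for the demo and are not taken from the text above.

    # Illustrative sketch: a linear versus a nonlinear differential operator (sympy).
    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    u = sp.Function('u')(x, y)
    v = sp.Function('v')(x, y)

    def laplacian(w):                    # linear operator: w -> w_xx + w_yy
        return sp.diff(w, x, 2) + sp.diff(w, y, 2)

    def burgers_operator(w):             # nonlinear operator: w -> w_y + w * w_x
        return sp.diff(w, y) + w * sp.diff(w, x)

    # A linear operator L satisfies L(a*u + b*v) = a*L(u) + b*L(v):
    print(sp.simplify(laplacian(a*u + b*v) - a*laplacian(u) - b*laplacian(v)))        # 0
    # For the Burgers operator the same difference does not vanish:
    print(sp.simplify(burgers_operator(a*u + b*v)
                      - a*burgers_operator(u) - b*burgers_operator(v)) == 0)          # False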

Definition 1.3:

Let k ∈ ℕ. We say that a partial differential equation has k-th order iff k is the smallest number such that it is of the form

First example of a partial differential equation


Naturally, we are now curious to see what practical examples of partial differential equations look like.

Theorem and definition 1.4:

If is a differentiable function and , then the function

solves the one-dimensional homogeneous transport equation

Proof: Exercise 2.

We therefore see that the one-dimensional transport equation has many different solutions; one for each continuously differentiable function in existence. However, if we require the solution to have a specific initial state, the solution becomes unique.

Theorem and definition 1.5:

If is a differentiable function and , then the function

is the unique solution to the initial value problem for the one-dimensional homogeneous transport equation

Proof:

Surely . Further, theorem 1.4 shows that also:

Now suppose we have an arbitrary other solution to the initial value problem. Let's name it . Then for all , the function

is constant:

Therefore, in particular

, which means, inserting the definition of , that

, which shows that . Since was an arbitrary solution, this shows uniqueness.
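The following Python sketch (an illustrative addition) checks theorems 1.4 and 1.5 numerically for one common convention of the one-dimensional homogeneous transport equation, namely ∂_t u(t,x) − v ∂_x u(t,x) = 0 with solution u(t,x) = g(x + vt); if the book's sign convention is the opposite one, the shift x − vt has to be used instead. The initial datum g and the velocity v are arbitrary demo choices.

    # Illustrative check (finite differences): u(t, x) = g(x + v*t) solves
    # du/dt - v * du/dx = 0 and satisfies u(0, x) = g(x).
    # Sign convention assumed; g and v are arbitrary demo choices.
    import numpy as np

    v = 1.5
    g = lambda x: np.exp(-x**2)               # a smooth initial datum
    u = lambda t, x: g(x + v * t)             # candidate solution

    t, x, h = 0.7, 0.3, 1e-5
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)   # central difference in t
    u_x = (u(t, x + h) - u(t, x - h)) / (2 * h)   # central difference in x

    print(abs(u_t - v * u_x))      # close to 0: the PDE residual vanishes
    print(abs(u(0.0, x) - g(x)))   # 0.0: the initial condition holds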

In the next chapter, we will consider the inhomogeneous, arbitrary-dimensional transport equation.

Exercises

  1. Have a look at the definition of an ordinary differential equation (see for example the Wikipedia page on that) and show that every ordinary differential equation is a partial differential equation.
  2. Prove Theorem 1.4 using direct calculation.
  3. What is the order of the transport equation?
  4. Find a function such that and .

Sources

  • Martin Brokate (2011/2012), Partielle Differentialgleichungen, Vorlesungsskript (PDF) (in German)
  • Daniel Matthes (2013/2014), Partial Differential Equations, lecture notes

The transport equation


In the first chapter, we already saw the one-dimensional transport equation. In this chapter we will see that we can quite easily generalise the solution method and the uniqueness proof we used there to multiple dimensions. Let d ∈ ℕ. The inhomogeneous d-dimensional transport equation looks like this:

, where is a function and is a vector.

Solution


The following definition will become a useful shorthand notation on many occasions. Since we can use it right from the beginning of this chapter, we start with it.

Definition 2.1:

Let f be a function and let n ∈ ℕ. We say that f is n times continuously differentiable iff all the partial derivatives

∂_α f, |α| ≤ n,

exist and are continuous. We write f ∈ C^n.

Before we prove a solution formula for the transport equation, we need a theorem from analysis which will play a crucial role in the proof of the solution formula.

Theorem 2.2: (Leibniz' integral rule)

Let be open and , where is arbitrary, and let . If the conditions

  • for all ,
  • for all and , exists
  • there is a function such that

hold, then

We will omit the proof.
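As a plausibility check of differentiation under the integral sign (an illustrative addition; the integrand is an arbitrary demo choice), the following Python sketch compares the derivative of t ↦ ∫_0^1 f(t,x) dx with ∫_0^1 ∂_t f(t,x) dx.

    # Illustrative sketch of Leibniz' integral rule:
    #   d/dt  int_0^1 f(t, x) dx  =  int_0^1 (df/dt)(t, x) dx,
    # checked numerically for the arbitrary integrand f(t, x) = sin(t*x).
    import numpy as np
    from scipy.integrate import quad

    f     = lambda t, x: np.sin(t * x)
    df_dt = lambda t, x: x * np.cos(t * x)        # partial derivative with respect to t

    F = lambda t: quad(lambda x: f(t, x), 0.0, 1.0)[0]

    t, h = 0.8, 1e-5
    lhs = (F(t + h) - F(t - h)) / (2 * h)              # derivative of the integral
    rhs = quad(lambda x: df_dt(t, x), 0.0, 1.0)[0]     # integral of the derivative

    print(abs(lhs - rhs))   # close to 0: both sides agree up to discretisation error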

Theorem 2.3: If , and , then the function

solves the inhomogeneous d-dimensional transport equation

Note that, as in chapter 1, there are many solutions, one for each continuously differentiable function in existence.

Proof:

1.

We show that is sufficiently often differentiable. From the chain rule it follows that is continuously differentiable in all the directions .

follows from the Leibniz integral rule (see exercise 1). The expression

which we will show later in this proof to be equal to

,

which exists because

just consists of the derivatives

2.

We show that

in three substeps.

2.1

We show that

This is left to the reader as an exercise in the application of the multi-dimensional chain rule (see exercise 2).

2.2

We show that

We choose

so that we have

By the multi-dimensional chain rule, we obtain

But on the one hand, we have by the fundamental theorem of calculus that and therefore

and on the other hand

, seeing that the difference quotient in the definition of is equal for both sides. And since on the third hand

, the second substep of the second part of the proof is finished.

2.3

We add and together, use the linearity of derivatives and see that the equation is satisfied.
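To see the solution formula of theorem 2.3 at work, here is a Python sketch (an illustrative addition). It assumes the Duhamel-type formula u(t,x) = g(x + tv) + ∫_0^t f(s, x + (t − s)v) ds for the convention ∂_t u − v·∇_x u = f; this convention as well as the concrete g, f and v are assumptions made for the demo, so the signs may differ from the formula in the text.

    # Illustrative numerical check of a solution formula for the inhomogeneous
    # transport equation (assumed convention du/dt - v . grad_x u = f):
    #   u(t, x) = g(x + t*v) + int_0^t f(s, x + (t - s)*v) ds,   here with d = 2.
    # g, f and v are arbitrary demo choices.
    import numpy as np
    from scipy.integrate import quad

    v = np.array([1.0, -0.5])
    g = lambda x: np.exp(-np.dot(x, x))                  # initial datum
    f = lambda s, x: np.sin(s) * np.cos(x[0] + x[1])     # inhomogeneity

    def u(t, x):
        integrand = lambda s: f(s, x + (t - s) * v)
        return g(x + t * v) + quad(integrand, 0.0, t)[0]

    t, x, h = 0.9, np.array([0.2, -0.3]), 1e-5
    u_t  = (u(t + h, x) - u(t - h, x)) / (2 * h)
    grad = np.array([(u(t, x + h * e) - u(t, x - h * e)) / (2 * h) for e in np.eye(2)])

    print(abs(u_t - v @ grad - f(t, x)))   # close to 0: the equation is satisfied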

Initial value problem


Theorem and definition 2.4: If and , then the function

is the unique solution of the initial value problem of the transport equation

Proof:

Quite easily, . Therefore, and due to theorem 2.3, is a solution to the initial value problem of the transport equation. So we proceed to show uniqueness.

Assume that is an arbitrary other solution. We show that , thereby excluding the possibility of a different solution.

We define . Then

Analogously to the proof of uniqueness of solutions for the one-dimensional homogeneous initial value problem of the transport equation in the first chapter, we define for arbitrary ,

Using the multi-dimensional chain rule, we calculate :

Therefore, for all is constant, and thus

, which shows that and thus .

Exercises

  1. Let and . Using Leibniz' integral rule, show that for all the derivative

    is equal to

    and therefore exists.

  2. Let and . Calculate .
  3. Find the unique solution to the initial value problem

    .


Test functions


Motivation


Before we dive deeply into the chapter, let's first motivate the notion of a test function. Let's consider two functions which are piecewise constant on the intervals and zero elsewhere; like, for example, these two:

Let's call the left function , and the right function .

Of course we can easily see that the two functions are different; they differ on the interval ; however, let's pretend that we are blind and our only way of finding out something about either function is evaluating the integrals

and

for functions in a given set of functions .

We proceed by choosing sufficiently cleverly such that five evaluations of both integrals suffice to show that . To do so, we first introduce the characteristic function. Let A be any set. The characteristic function of A is defined as

χ_A(x) := 1 if x ∈ A, and χ_A(x) := 0 otherwise.

With this definition, we choose the set of functions as

It is easy to see (see exercise 1) that for , the expression

equals the value of on the interval , and the same is true for . But as both functions are uniquely determined by their values on the intervals (since they are zero everywhere else), we can implement the following equality test:

This obviously needs five evaluations of each integral, as .
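Such a comparison can be written out in a few lines of Python; the sketch below is an illustrative addition with arbitrarily chosen function values, and it uses the characteristic functions of the intervals [0,1), …, [4,5) as test functions.

    # Illustrative sketch of the 'testing' idea: compare two functions that are
    # piecewise constant on [0,1), ..., [4,5) and zero elsewhere, using only
    # integrals against the characteristic functions chi_[k, k+1).
    # The concrete values below are arbitrary demo choices.
    import numpy as np
    from scipy.integrate import quad

    values_f = [1.0, 2.0, 0.5, 2.0, 1.0]      # values of the first function on [0,1), ..., [4,5)
    values_g = [1.0, 2.0, 1.5, 2.0, 1.0]      # the second function differs on [2,3)

    def piecewise(values):
        return lambda x: values[int(x)] if 0 <= x < 5 else 0.0

    f, g = piecewise(values_f), piecewise(values_g)
    chi = lambda k: (lambda x: 1.0 if k <= x < k + 1 else 0.0)   # chi_[k, k+1)

    def integrals_agree(f, g):
        # five evaluations of each integral, one per test function
        return all(np.isclose(quad(lambda x: f(x) * chi(k)(x), k, k + 1)[0],
                              quad(lambda x: g(x) * chi(k)(x), k, k + 1)[0])
                   for k in range(5))

    print(integrals_agree(f, g))   # False: the integrals over [2,3) differ
    print(integrals_agree(f, f))   # True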

Since we used the functions in to test and , we call them test functions. What we now ask ourselves is whether this notion generalises from functions like and , which are piecewise constant on certain intervals and zero everywhere else, to continuous functions. The remainder of this chapter shows that this is true.

Bump functions


In order to write down the definition of a bump function more concisely, we need the following two definitions:

Definition 3.1:

Let O ⊆ ℝ^d, and let f : O → ℝ. We say that f is smooth if all the partial derivatives ∂_α f, for every d-dimensional multiindex α,

exist at all points of O and are continuous. We write f ∈ C^∞(O).

Definition 3.2:

Let f : ℝ^d → ℝ be a function. We define the support of f, supp f, as follows:

supp f := the closure of the set {x ∈ ℝ^d : f(x) ≠ 0}.

Now we are ready to define a bump function in a brief way:

Definition 3.3:

A function φ : ℝ^d → ℝ is called a bump function iff φ ∈ C^∞(ℝ^d) and supp φ is compact. The set of all bump functions is denoted by 𝒟(ℝ^d).

These two properties make the function really look like a bump, as the following example shows:

The standard mollifier in dimension

Example 3.4: The standard mollifier , given by

, where , is a bump function (see exercise 2).
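A short Python sketch of the standard mollifier in dimension d = 1 may help to picture the 'bump' (an illustrative addition; the normalisation constant, which makes the integral equal to one, is computed numerically here): the function is strictly positive on the open unit ball and vanishes identically outside of it, so its support is the compact closed unit ball.

    # Illustrative sketch of the standard mollifier in dimension d = 1:
    #   eta(x) = c * exp(1 / (|x|^2 - 1))  for |x| < 1,   eta(x) = 0 otherwise,
    # where c is chosen such that the integral of eta over R equals 1.
    import numpy as np
    from scipy.integrate import quad

    def eta_unnormalised(x):
        return np.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

    c = 1.0 / quad(eta_unnormalised, -1.0, 1.0)[0]     # normalisation constant
    eta = lambda x: c * eta_unnormalised(x)

    print(eta(0.0))                  # the maximum, attained at the origin
    print(eta(1.0), eta(2.0))        # 0.0 outside the open unit ball: compact support
    print(quad(eta, -1.0, 1.0)[0])   # approximately 1.0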

Schwartz functions


As for the bump functions, in order to write down the definition of Schwartz functions concisely, we first need two helpful definitions.

Definition 3.5:

Let A be an arbitrary set, and let f : A → ℝ be a function. Then we define the supremum norm of f as follows:

‖f‖_∞ := sup_{x ∈ A} |f(x)|.

Definition 3.6:

For a vector x = (x_1, …, x_d) ∈ ℝ^d and a d-dimensional multiindex α we define x^α, x to the power of α, as follows:

x^α := x_1^{α_1} · x_2^{α_2} ⋯ x_d^{α_d}.

Now we are ready to define a Schwartz function.

Definition 3.7:

We call a Schwartz function iff the following two conditions are satisfied:

By we mean the function .

Example 3.8: The function

is a Schwartz function.
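To illustrate the defining decay property (an addition; the Gaussian x ↦ e^{−x²} is used here as a classical representative of a Schwartz function and need not coincide with the function of example 3.8), the following Python sketch estimates the supremum norms of x^k · f^{(n)}(x) for small k and n and finds them all finite.

    # Illustrative sketch: for the Gaussian f(x) = exp(-x^2), every expression
    # x^k * f^(n)(x) stays bounded (in fact it tends to 0 as |x| -> infinity);
    # we estimate a few of the suprema on a large grid.
    import numpy as np
    import sympy as sp

    xs = sp.symbols('x')
    f = sp.exp(-xs**2)
    grid = np.linspace(-20.0, 20.0, 20001)

    for n in range(4):
        deriv = sp.lambdify(xs, sp.diff(f, xs, n), 'numpy')
        for k in range(4):
            sup = np.max(np.abs(grid**k * deriv(grid)))
            print(f"sup |x^{k} * f^({n})(x)|  is approximately  {sup:.4f}")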

Theorem 3.9:

Every bump function is also a Schwartz function.

This means for example that the standard mollifier is a Schwartz function.

Proof:

Let be a bump function. Then, by the definition of a bump function, . Since its support is compact, we may choose such that

, as in ℝ^d a set is compact iff it is closed and bounded. Further, for arbitrary,

Convergence of bump and Schwartz functions


Now we define what convergence of a sequence of bump (Schwartz) functions to a bump (Schwartz) function means.

Definition 3.10:

A sequence of bump functions is said to converge to another bump function iff the following two conditions are satisfied:

  1. There is a compact set such that

Definition 3.11:

We say that the sequence of Schwartz functions converges to iff the following condition is satisfied:

Theorem 3.12:

Let be an arbitrary sequence of bump functions. If with respect to the notion of convergence for bump functions, then also with respect to the notion of convergence for Schwartz functions.

Proof:

Let be open, and let be a sequence in such that with respect to the notion of convergence of . Let thus be the compact set in which all the are contained. From this it also follows that , since otherwise , where is any nonzero value takes outside ; this would contradict with respect to our notion of convergence.

In , ‘compact’ is equivalent to ‘bounded and closed’. Therefore, for an . Hence, we have for all multiindices :

Therefore the sequence converges with respect to the notion of convergence for Schwartz functions.

The ‘testing’ property of test functions


In this section, we want to show that we can test equality of continuous functions by evaluating the integrals

and

for all (thus, evaluating the integrals for all will also suffice as due to theorem 3.9).

But before we are able to show that, we need a modified mollifier, where the modification depends on a parameter, and two lemmas about that modified mollifier.

Definition 3.13:

For , we define

.

Lemma 3.14:

Let . Then

.

Proof:

From the definition of it follows that

.

Further, for

Therefore, and since

, we have:

In order to prove the next lemma, we need the following theorem from integration theory:

Theorem 3.15: (Multi-dimensional integration by substitution)

If are open, and is a diffeomorphism, then

We will omit the proof, as understanding it is not very important for understanding this wikibook.

Lemma 3.16:

Let . Then

.

Proof:

Now we are ready to prove the ‘testing’ property of test functions:

Theorem 3.17:

Let be continuous. If

,

then .

Proof:

Let be arbitrary, and let . Since is continuous, there exists a such that

Then we have

Therefore, . Analogous reasoning also shows that . But due to the assumption, we have

As limits in the reals are unique, it follows that , and since was arbitrary, we obtain .
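A small Python sketch may illustrate the mechanism behind this proof (an illustrative addition; the scaling convention η_ε(x) = ε^{−d} η(x/ε), the centre point and the continuous function are assumptions made for the demo): integrating a continuous function against the rescaled mollifier centred at a point recovers the value of the function at that point in the limit ε → 0.

    # Illustrative sketch (d = 1): for a continuous f, the integrals of f against
    # the rescaled mollifier eta_eps( . - x0) converge to f(x0) as eps -> 0.
    # Scaling convention eta_eps(x) = eta(x/eps)/eps and the concrete f, x0 are
    # demo assumptions.
    import numpy as np
    from scipy.integrate import quad

    def eta_unnormalised(x):
        return np.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

    c = 1.0 / quad(eta_unnormalised, -1.0, 1.0)[0]
    eta = lambda x: c * eta_unnormalised(x)
    eta_eps = lambda eps: (lambda x: eta(x / eps) / eps)

    f  = lambda x: np.cos(x) + x**2           # an arbitrary continuous function
    x0 = 0.6

    for eps in (0.5, 0.1, 0.02):
        value = quad(lambda x: f(x) * eta_eps(eps)(x - x0), x0 - eps, x0 + eps)[0]
        print(eps, abs(value - f(x0)))        # the error shrinks as eps decreases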

Remark 3.18: Let be continuous. If

,

then .

Proof:

This follows because every bump function is also a Schwartz function, so the requirements of theorem 3.17 are met.

Exercises

  1. Let and be constant on the interval . Show that

  2. Prove that the standard mollifier as defined in example 3.4 is a bump function by proceeding as follows:
    1. Prove that the function

      is contained in .

    2. Prove that the function

      is contained in .

    3. Conclude that .
    4. Prove that is compact by calculating explicitly.
  3. Let be open, let and let . Prove that if , then and .
  4. Let be open, let be bump functions and let . Prove that .
  5. Let be Schwartz functions and let . Prove that is a Schwartz function.
  6. Let , let be a polynomial, and let in the sense of Schwartz functions. Prove that in the sense of Schwartz functions.

Distributions


Distributions and tempered distributions


Definition 4.1:

Let be open, and let be a function. We call a distribution iff

  • is linear ()
  • is sequentially continuous (if in the notion of convergence of bump functions, then in the reals)

The set of all distributions for we denote by

Definition 4.2:

Let be a function. We call a tempered distribution iff

  • is linear ()
  • is sequentially continuous (if in the notion of convergence of Schwartz functions, then in the reals)

The set of all tempered distributions we denote by .

Theorem 4.3:

Let be a tempered distribution. Then the restriction of to bump functions is a distribution.

Proof:

Let be a tempered distribution, and let be open.

1.

We show that has a well-defined value for .

Due to theorem 3.9, every bump function is a Schwartz function, which is why the expression

makes sense for every .

2.

We show that the restriction is linear.

Let and . Since due to theorem 3.9 and are Schwartz functions as well, we have

due to the linearity of for all Schwartz functions. Thus is also linear for bump functions.

3.

We show that the restriction of to is sequentially continuous. Let in the notion of convergence of bump functions. Due to theorem 3.12, in the notion of convergence of Schwartz functions. Since as a tempered distribution is sequentially continuous, .
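As a concrete illustration of definition 4.1 (an addition; the delta functional δ(φ) := φ(0) is the classical example of a distribution and is not introduced in the text above), the following Python sketch represents the functional programmatically and checks its linearity on two example test functions; sequential continuity cannot be verified by finitely many evaluations and is therefore only hinted at in a comment.

    # Illustrative sketch: the delta functional delta(phi) = phi(0), acting on
    # test functions represented as Python callables.  Linearity is checked on
    # two arbitrary example functions; sequential continuity holds because
    # convergence of bump functions implies in particular convergence of the
    # values at 0, but a finite computation cannot verify this.
    import numpy as np

    def delta(phi):
        return phi(0.0)

    phi = lambda x: np.exp(-x**2)             # example test functions
    psi = lambda x: x * np.exp(-x**2)

    a, b = 2.0, -3.0
    combo = lambda x: a * phi(x) + b * psi(x)

    print(delta(combo), a * delta(phi) + b * delta(psi))   # equal values: linearity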

The convolution


Definition 4.4:

Let . The integral

is called the convolution of and and is denoted by , provided it exists.

The convolution of two functions may not always exist, but there are sufficient conditions for it to exist:

Theorem 4.5:

Let such that and let and . Then for all , the integral

has a well-defined real value.

Proof:

Due to Hölder's inequality,

.

We shall now prove that the convolution is commutative, i. e. .

Theorem 4.6:

Let such that (where ) and let and . Then for all :

Proof:

We apply multi-dimensional integration by substitution using the diffeomorphism to obtain

.
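The following Python sketch (an illustrative addition with arbitrarily chosen f and g) approximates the convolution integral at one point and confirms numerically that (f ∗ g)(x) = (g ∗ f)(x), in line with theorem 4.6.

    # Illustrative numerical check that the convolution is commutative:
    #   (f * g)(x) = int f(x - y) g(y) dy = (g * f)(x).
    # f and g are arbitrary integrable demo functions on R.
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.exp(-x**2)                 # a Gaussian
    g = lambda x: 1.0 / (1.0 + x**2)            # a Lorentzian

    def convolution(f, g, x):
        return quad(lambda y: f(x - y) * g(y), -np.inf, np.inf)[0]

    x = 0.7
    print(convolution(f, g, x))    # some finite value
    print(convolution(g, f, x))    # the same value, up to quadrature error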

Lemma 4.7:

Let be open and let . Then .

Proof:

Let be arbitrary. Then, since for all

and further

,

Leibniz' integral rule (theorem 2.2) is applicable, and by repeated application of Leibniz' integral rule we obtain

.

Regular distributions


In this section, we briefly study a class of distributions which we call regular distributions. In particular, we will see that for certain kinds of functions there exist corresponding distributions.

Definition 4.8:

Let be an open set and let . If for all can be written as

for a function which is independent of , then we call a regular distribution.

Definition 4.9:

Let . If for all can be written as

for a function which is independent of , then we call a regular tempered distribution.

Two questions related to this definition could be asked: Given a function , is for open given by

well-defined and a distribution? Or is given by

well-defined and a tempered distribution? In general, the answer to these two questions is no, but both questions can be answered with yes if the respective function has the right properties, as the following two theorems show. But before we state the first theorem, we have to define what local integrability means, because in the case of bump functions, local integrability will be exactly the property which is needed in order to define a corresponding regular distribution:

Definition 4.10:

Let O ⊆ ℝ^d be open and let f : O → ℝ be a function. We say that f is locally integrable iff for all compact subsets K of O

∫_K |f(x)| dx < ∞.

We write f ∈ L¹_loc(O).

Now we are ready to give some sufficient conditions on to define a corresponding regular distribution or regular tempered distribution by way of

or

:

Theorem 4.11:

Let be open, and let be a function. Then

is a regular distribution iff .

Proof:

1.

We show that if , then is a distribution.

Well-definedness follows from the triangle inequality of the integral and the monotonicity of the integral:

In order to have an absolute value strictly less than infinity, the first integral must have a well-defined value in the first place. Therefore, really maps to and well-definedness is proven.

Continuity follows similarly due to

, where is the compact set in which all the supports of and are contained (remember: the existence of a compact set such that all the supports of are contained in it is part of the definition of convergence in , see the last chapter; as in the proof of theorem 3.12, we conclude that the support of is also contained in ).

Linearity follows from the linearity of the integral.

2.

We show that if the mapping is a distribution, then the function is locally integrable; in fact, we even show that local integrability already follows if the mapping takes a well-defined real value at every bump function. Combined with part 1 of this proof, which showed that the mapping is a distribution whenever the function is locally integrable, this yields that the mapping is a distribution if and only if it assigns a well-defined real number to every bump function.

Let be an arbitrary compact set. We define

is continuous, even Lipschitz continuous with Lipschitz constant : Let . Due to the triangle inequality, both

and

, which can be seen by applying the triangle inequality twice.

We choose sequences and in such that and and consider two cases. First, we consider what happens if . Then we have

.

Second, we consider what happens if :

Since always either or , we have proven Lipschitz continuity and thus continuity. By the extreme value theorem, therefore has a minimum . Since would mean that for a sequence in which is a contradiction as is closed and , we have .

Hence, if we define , then . Further, the function

has support contained in , is equal to within and further is contained in due to lemma 4.7. Hence, it is also contained in . Therefore, by the monotonicity of the integral,

, is indeed locally integrable.
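To make theorem 4.11 concrete, here is a Python sketch of a regular distribution (an illustrative addition): the demo function f(x) = |x|^{−1/2} is unbounded but locally integrable on ℝ, and pairing it with test functions via T_f(φ) = ∫ f(x) φ(x) dx gives finite values and a linear functional. The names f, φ, T_f and the bump-type test function used are demo choices.

    # Illustrative sketch of a regular distribution T_f(phi) = int f(x) phi(x) dx,
    # with the locally integrable (though unbounded) demo function f(x) = |x|^(-1/2)
    # and a bump-type test function phi supported in [-1, 1].
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: abs(x) ** -0.5 if x != 0.0 else 0.0

    def phi(x):                                   # smooth, compactly supported
        return np.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

    def T_f(test_function):
        integrand = lambda x: f(x) * test_function(x)
        return quad(integrand, -1.0, 1.0, points=[0.0])[0]   # split at the singularity

    psi = lambda x: np.sin(x) * phi(x)                       # another test function
    a, b = 2.0, -1.0
    lhs = T_f(lambda x: a * phi(x) + b * psi(x))

    print(T_f(phi))                                   # a finite real number
    print(abs(lhs - (a * T_f(phi) + b * T_f(psi))))   # close to 0: T_f is linear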

Theorem 4.12:

Let , i. e.

Then

is a regular tempered distribution.

Proof:

From Hölder's inequality we obtain

.

Hence, is well-defined.

Due to the triangle inequality for integrals and Hölder's inequality, we have

Furthermore

.

If in the notion of convergence of the Schwartz function space, then this expression goes to zero. Therefore, continuity is verified.

Linearity follows from the linearity of the integral.

Equicontinuity


We now introduce the concept of equicontinuity.

Definition 4.13:

Let be a metric space equipped with a metric which we shall denote by here, let be a set in , and let be a set of continuous functions mapping from to the real numbers . We call this set equicontinuous if and only if

.

So equicontinuity is in fact defined for sets of continuous functions mapping from (a set in a metric space) to the real numbers .

Theorem 4.14:

Let be a metric space equipped with a metric which we shall denote by , let be a sequentially compact set in , and let be an equicontinuous set of continuous functions from to the real numbers . Then the following holds: if is a sequence in such that has a limit for each , then the sequence converges uniformly to the limit function , which maps from to .

Proof:

In order to prove uniform convergence, by definition we must prove that for all , there exists an such that for all .

So let's assume the contrary, which, obtained by negating the logical statement, reads

.

We choose a sequence in . We take in such that for an arbitrarily chosen and if we have already chosen and for all , we choose such that , where is greater than .

As is sequentially compact, there is a convergent subsequence of . Let us call the limit of that subsequence .

As is equicontinuous, we can choose such that

.

Further, since (if of course), we may choose such that

.

But then, for and by the reverse triangle inequality, it follows that:

Since we had , the reverse triangle inequality and the definition of t

, we obtain:

Thus we have a contradiction to .

Theorem 4.15:

Let be a set of differentiable functions mapping from the convex set to . If there exists a constant such that for all functions in (the exists for each function in because all functions there were required to be differentiable), then is equicontinuous.

Proof: We have to prove equicontinuity, so we have to prove

.

Let be arbitrary.

We choose .

Let such that , and let be arbitrary. By the mean-value theorem in multiple dimensions, we obtain that there exists a such that:

The element is inside , because is convex. From the Cauchy-Schwarz inequality it then follows that:
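The criterion of theorem 4.15 can be illustrated numerically (an addition; the two families of functions are arbitrary demo choices): the family f_n(x) = sin(nx)/n has all derivatives bounded by 1 and is therefore equicontinuous, while the family g_n(x) = sin(nx) has unbounded derivatives and fails to be equicontinuous.

    # Illustrative sketch: f_n(x) = sin(n*x)/n has |f_n'| <= 1 for every n, so the
    # family is equicontinuous (theorem 4.15); g_n(x) = sin(n*x) has derivative
    # n*cos(n*x), unbounded in n, and the family is not equicontinuous.
    # We measure sup_n |h_n(x) - h_n(y)| over many pairs of points with |x - y| = delta.
    import numpy as np

    ns = np.arange(1, 2001)
    f = lambda n, x: np.sin(n * x) / n
    g = lambda n, x: np.sin(n * x)

    rng = np.random.default_rng(0)
    xs = rng.uniform(0.0, 1.0, 500)

    for delta in (1e-1, 1e-2, 1e-3):
        ys = xs + delta
        osc_f = max(np.max(np.abs(f(n, xs) - f(n, ys))) for n in ns)
        osc_g = max(np.max(np.abs(g(n, xs) - g(n, ys))) for n in ns)
        print(f"delta = {delta:g}:  oscillation of the f_n <= {osc_f:.5f},"
              f"  oscillation of the g_n = {osc_g:.3f}")
    # The oscillation of the f_n is at most delta (one delta works for all n),
    # while the oscillation of the g_n remains of order 1, however small delta is.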

The generalised product rule


Definition 4.16:

If α and β are two d-dimensional multiindices, we define the binomial coefficient of α over β as

binom(α, β) := binom(α_1, β_1) · binom(α_2, β_2) ⋯ binom(α_d, β_d),

the product of the componentwise ordinary binomial coefficients.

We also define a less-than-or-equal relation on the set of multiindices.

Definition 4.17:

Let α and β be two d-dimensional multiindices. We define α to be less than or equal to β, written α ≤ β, if and only if

α_i ≤ β_i for all i ∈ {1, …, d}.

For d ≥ 2, there are multiindices α and β such that neither α ≤ β nor β ≤ α. For d = 2, the following two vectors are examples of this:

This example can be generalised to higher dimensions (see exercise 6).
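Looking ahead to the generalised product rule that these definitions prepare, the following Python sketch (an illustrative addition using sympy) implements the componentwise ordering and the multiindex binomial coefficient, exhibits two non-comparable multiindices in d = 2, and verifies the Leibniz-type formula ∂_α(fg) = Σ_{β ≤ α} binom(α, β) ∂_β f ∂_{α−β} g for one arbitrary choice of f, g and α.

    # Illustrative sketch (sympy): multiindex binomial coefficients, the
    # componentwise ordering, and a check of the generalised product rule
    #   d_alpha(f*g) = sum_{beta <= alpha} binom(alpha, beta) d_beta(f) d_{alpha-beta}(g)
    # for one arbitrary choice of f, g and alpha in d = 2.
    import itertools
    import sympy as sp
    from math import comb, prod

    x1, x2 = sp.symbols('x1 x2')
    variables = (x1, x2)

    def multi_binom(alpha, beta):       # binom(alpha, beta) = product of binom(alpha_i, beta_i)
        return prod(comb(a, b) for a, b in zip(alpha, beta))

    def leq(beta, alpha):               # beta <= alpha componentwise
        return all(b <= a for b, a in zip(beta, alpha))

    def d(expr, alpha):                 # the multiindex derivative d_alpha
        args = []
        for var, order in zip(variables, alpha):
            args += [var, order]
        return sp.diff(expr, *args)

    print(leq((1, 0), (0, 1)), leq((0, 1), (1, 0)))   # False False: not comparable

    f, g = sp.sin(x1 * x2), sp.exp(x1 + x2**2)
    alpha = (2, 1)
    lhs = d(f * g, alpha)
    rhs = sum(multi_binom(alpha, beta) * d(f, beta)
              * d(g, tuple(a - b for a, b in zip(alpha, beta)))
              for beta in itertools.product(*(range(a + 1) for a in alpha)))

    print(sp.simplify(lhs - rhs))       # 0: the generalised product rule holds here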

With these multiindex definitions, we are able to write down a more general version of the product rule. But in order to prove it, we need another lemma.

Lemma 4.18:

If and , where the is at the -th place, we have

for arbitrary multiindices .

Proof:

For the ordinary binomial coefficients for natural numbers, we had the formula

.

Therefore,