Ordinary Differential Equations/Non Homogenous 1

From Wikibooks, open books for an open world

Higher Order Differential Equations

Non-Homogeneous Equations

A non-homogeneous equation with constant coefficients is an equation of the form

c_n\frac{d^ny}{dx^n}+c_{n-1}\frac{d^{n-1}y}{dx^{n-1}}+\cdots+c_1\frac{dy}{dx}+c_0y=f(x),

where the c_i are all constants and f(x) is not 0.

Complementary Function

Every non-homogeneous equation has a complementary function (CF), which can be found by replacing the f(x) with 0 and solving for the homogeneous solution. For example, the CF of

\frac{d^2y}{dx^2}-3\frac{dy}{dx}+4y=\frac{2 \sin x}{x^2}

is the solution to the homogeneous equation

\frac{d^2y}{dx^2}-3\frac{dy}{dx}+4y=0.


Superposition Principle

The superposition principle makes solving a non-homogeneous equation fairly simple. The final solution is the sum of the complementary function and the solution due to f(x), called the particular integral (PI). In other words,

General Solution = CF + PI

Method of Undetermined Coefficients

The method of undetermined coefficients is an easy shortcut for finding the particular integral for some f(x). The method works only if repeated differentiation of f(x) eventually reaches 0, or if the derivatives eventually fall into a repeating pattern. If so, we know the form of the PI: the sum of all the distinct derivatives (those obtained before hitting 0, or all the derivatives in the pattern), each multiplied by an arbitrary constant. This is the trial PI. We can then plug the trial PI into the original equation and solve for the constants.

As we will see, we may need to alter this trial PI depending on the CF. If the trial PI contains a term that is also present in the CF, then the PI will be absorbed by the arbitrary constant in the CF, and therefore we will not have a full solution to the problem.

f(x) = Constant

The simplest case is when f(x) is constant, for example

\frac{d^2y}{dx^2}+5\frac{dy}{dx}+6y=3.

The CF (found by solving the auxiliary polynomial m^2+5m+6=0, which has roots -3 and -2) is

y=Ae^{-3x}+Be^{-2x}. \,

We now need a trial PI. When we differentiate a constant we get zero, so the trial PI is simply the constant f(x) multiplied by an arbitrary constant, which gives just another arbitrary constant, K.

We now set y equal to the trial PI and find its derivatives up to the order of the DE (here, the second):

y_p=K, \quad \frac{dy_p}{dx}=0, \quad \frac{d^2y_p}{dx^2}=0.

We can now substitute these into the original DE:

0+5(0)+6K=3, \,

which gives K=\frac{1}{2}.
By summing the CF and the PI, we get the general solution to the DE:

y=Ae^{-3x}+Be^{-2x}+\frac{1}{2} \,
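We can check this case with sympy. This is a sketch: the equation y'' + 5y' + 6y = 3 is an assumption, chosen to be consistent with the CF y = Ae^{-3x} + Be^{-2x} that appears later in this chapter.

```python
# Verify the constant-f(x) case with sympy. The ODE y'' + 5y' + 6y = 3 is
# an assumed running example (consistent with the CF used later in the
# chapter); the PI is the constant K = 3/6 = 1/2.
import sympy as sp

x, A, B = sp.symbols('x A B')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) + 6*y(x), 3)
general = sp.Eq(y(x), A*sp.exp(-3*x) + B*sp.exp(-2*x) + sp.Rational(1, 2))

ok, residual = sp.checkodesol(ode, general)
print(ok)  # True: CF + PI solves the equation for any A, B
```

`checkodesol` substitutes the proposed solution into the ODE and confirms the residual simplifies to zero.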
f(x) = Polynomial

This is the general method which includes the above example. A polynomial of degree n reduces to 0 after exactly n+1 differentiations (so one for a constant as above, three for a quadratic, and so on). So we know that our trial PI is

y=C_1x^n+C_2x^{n-1}+...+C_nx+C_{n+1}. \,

For an example, let's take

\frac{d^2y}{dx^2}+5\frac{dy}{dx}+6y=3x^4+7x^2.

First off, we know that our trial PI is

y_p=Ax^4+Bx^3+Cx^2+Dx+E. \,
In order to plug in, we need to calculate the first two derivatives:

\frac{dy_p}{dx}=4Ax^3+3Bx^2+2Cx+D
\frac{d^2y_p}{dx^2}=12Ax^2+6Bx+2C

Plugging in, we get

12Ax^2+6Bx+2C+5\left(4Ax^3+3Bx^2+2Cx+D\right)+6\left(Ax^4+Bx^3+Cx^2+Dx+E\right)=3x^4+7x^2,

and equating coefficients of like powers of x gives a linear system for A through E.
Solving gives us

A=\frac{1}{2},\quad B=-\frac{5}{3},\quad C=\frac{13}{3},\quad D=-\frac{50}{9},\quad E=\frac{86}{27}

So, our PI is

y_p=\frac{1}{2}x^4-\frac{5}{3}x^3+\frac{13}{3}x^2-\frac{50}{9}x+\frac{86}{27}. \,
However, we need the complementary function as well. To get it, set f(x) to 0 and solve just as in the last section. For this equation, the roots of the auxiliary polynomial are -3 and -2, so the CF is

y_c=Ae^{-3x}+Be^{-2x}. \,
Summing gives us our general solution:

y=Ae^{-3x}+Be^{-2x}+\frac{1}{2}x^4-\frac{5}{3}x^3+\frac{13}{3}x^2-\frac{50}{9}x+\frac{86}{27}
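We can solve for the undetermined coefficients mechanically with sympy. A sketch: the right-hand side 3x^4 + 7x^2 is an assumption, chosen to be consistent with the coefficient values quoted in the text.

```python
# Solve for the undetermined coefficients of a quartic trial PI. The RHS
# 3x^4 + 7x^2 is an assumed example consistent with the quoted answers.
import sympy as sp

x, A, B, C, D, E = sp.symbols('x A B C D E')
yp = A*x**4 + B*x**3 + C*x**2 + D*x + E

# Substitute the trial PI into y'' + 5y' + 6y and match coefficients of x
lhs = yp.diff(x, 2) + 5*yp.diff(x) + 6*yp
eqs = sp.Poly(lhs - (3*x**4 + 7*x**2), x).all_coeffs()
sol = sp.solve(eqs, [A, B, C, D, E])
print(sol)  # A=1/2, B=-5/3, C=13/3, D=-50/9, E=86/27
```

`Poly(...).all_coeffs()` extracts one linear equation per power of x, and `solve` handles the resulting system.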
f(x) = Power of e

Powers of e never reduce to 0, but they do fall into a pattern: differentiating e^{px} just multiplies it by p, so the pattern appears after a single differentiation. So we know

y_p=Ke^{px}, \,

where K is our constant and p is the power of e given in the original DE.

For example, consider

\frac{d^2y}{dx^2}+5\frac{dy}{dx}+6y=5e^{2x}+6e^{-7x}.
We make our trial PI

y_p=Ae^{2x}+Be^{-7x} \,.

Differentiating, we get

\frac{dy_p}{dx}=2Ae^{2x}-7Be^{-7x} \,
\frac{d^2y_p}{dx^2}=4Ae^{2x}+49Be^{-7x} \,

Plugging in, we get

4Ae^{2x}+49Be^{-7x}+5\left(2Ae^{2x}-7Be^{-7x}\right)+6\left(Ae^{2x}+Be^{-7x}\right)=5e^{2x}+6e^{-7x}
20Ae^{2x}+20Be^{-7x}=5e^{2x}+6e^{-7x},

so A=\frac{1}{4} and B=\frac{3}{10}.
That's the particular integral. We found the CF earlier. So the general solution is

y=Ae^{-3x}+Be^{-2x}+\frac{1}{4}e^{2x}+\frac{3}{10}e^{-7x} \,
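A sympy check of this case. The right-hand side 5e^{2x} + 6e^{-7x} is an assumption, reconstructed from the quoted trial PI and the particular integral (1/4)e^{2x} + (3/10)e^{-7x} above.

```python
# Verify the exponential example. The RHS 5e^{2x} + 6e^{-7x} is assumed,
# reconstructed from the quoted trial PI and particular integral.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) + 6*y(x),
            5*sp.exp(2*x) + 6*sp.exp(-7*x))
yp = sp.Rational(1, 4)*sp.exp(2*x) + sp.Rational(3, 10)*sp.exp(-7*x)

ok, residual = sp.checkodesol(ode, sp.Eq(y(x), yp))
print(ok)  # True
```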

f(x) = Polynomial × Power of e

Polynomials multiplied by powers of e also fall into a repeating pattern after n derivatives (where n is the highest power of x in the polynomial): every derivative is again a polynomial of degree n times e^{px}. So we know that our trial PI is

y_p=e^{px}\left(C_{n+1}x^n+C_nx^{n-1}+\cdots+C_2x+C_1\right), \,

where the C_i are constants and p is the power of e in the equation.

For example, let's try


We can now set our PI to


Plugging in, we get


That's the particular solution. We found the homogeneous solution earlier. So the total solution is


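To illustrate this case concretely, here is a hypothetical example (not the one from the text): solving y'' + 5y' + 6y = xe^{2x} with the trial PI (Ax + B)e^{2x}.

```python
# A hypothetical polynomial-times-exponential example (not from the text):
# y'' + 5y' + 6y = x e^{2x}, with trial PI (Ax + B) e^{2x}.
import sympy as sp

x, A, B = sp.symbols('x A B')
yp = (A*x + B)*sp.exp(2*x)

# Substitute the trial PI into y'' + 5y' + 6y
lhs = sp.expand(yp.diff(x, 2) + 5*yp.diff(x) + 6*yp)

# Divide out the common factor e^{2x}, then match polynomial coefficients
residual = sp.expand((lhs - x*sp.exp(2*x)) / sp.exp(2*x))
sol = sp.solve(sp.Poly(residual, x).all_coeffs(), [A, B])
print(sol)  # A=1/20, B=-9/400
```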
f(x) = Trigonometric Functions

Trig functions don't reduce to 0 either, but they do cycle with a period of two derivatives: the derivative of sin x is cos x, and the derivative of cos x is -sin x. So we put our PI as

y_p=A \sin(px)+B \cos(px), \,

where A and B are constants and p is the coefficient of x inside the trig function in the original DE.

For example, let's try

\frac{d^2y}{dx^2}+5\frac{dy}{dx}+6y=\cos 3x.

We set our PI to

y_p=A \sin 3x+B \cos 3x \,

and compute its derivatives:

\frac{dy_p}{dx}=3 A \cos 3x- 3 B \sin 3x
\frac{d^2y_p}{dx^2}=-9A \sin 3x-9B \cos 3x

Plugging in, we get

-9A \sin 3x-9B \cos 3x +5 \left( 3A \cos 3x -3B \sin 3x \right)+6 \left(A \sin 3x +B \cos 3x \right)=\cos 3x \,

Equating the sin 3x and cos 3x coefficients gives A=\frac{5}{78} and B=-\frac{1}{78}, so

y_p=\frac{5}{78}\sin 3x-\frac{1}{78} \cos 3x

That's the particular solution. We found the homogeneous solution earlier. So the total solution is

y=Ae^{-3x}+Be^{-2x}+\frac{5}{78}\sin 3x-\frac{1}{78} \cos 3x
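This example survives intact in the text, so we can verify it directly with sympy:

```python
# Verify the trigonometric example from the text:
# y'' + 5y' + 6y = cos(3x), PI = (5/78) sin(3x) - (1/78) cos(3x).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) + 6*y(x), sp.cos(3*x))
yp = sp.Rational(5, 78)*sp.sin(3*x) - sp.Rational(1, 78)*sp.cos(3*x)

ok, residual = sp.checkodesol(ode, sp.Eq(y(x), yp))
print(ok)  # True
```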

f(x) = A Mixture

Not only is each of the above solvable by the method of undetermined coefficients, so is a sum of two or more of the above. This is because the sum of two functions whose derivatives either reach 0 or cycle must also have derivatives that reach 0 or cycle. The trial y_p is then the sum of the individual trial y_p functions.

If the Trial PI shares terms with the CF

When f(x) is a power of e, or sometimes a polynomial (if the homogeneous equation has roots of 0), you may get the same term in both the trial PI and the CF. If this happens, the PI will be absorbed into the arbitrary constants of the CF, which will not give a full solution. To overcome this, multiply the affected terms by x as many times as needed until no term in the trial PI appears in the CF.

As an example, let's take


First, solve the homogeneous equation to get the CF.

\frac{d^3y}{dx^3}+\frac{d^2y}{dx^2}=0 \,
The auxiliary polynomial is

m^3+m^2=0 \,
Find the roots of the auxiliary polynomial. In this case, they are

m=0,\ 0,\ -1 \,

The CF is

y_c=C_1+C_2x+C_3e^{-x} \,
Now for the particular integral. Since f(x) is a polynomial of degree 1, we would normally use Ax+B. However, since both a term in x and a constant appear in the CF, we need to multiply by x² and use

y_p=Ax^3+Bx^2 \,
We solve this as we normally do for A and B.


So the total solution is


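The x² trick can be demonstrated with sympy. A sketch: the right-hand side x is a hypothetical degree-1 polynomial (the text does not specify one); the CF contains both a constant and an x term, so the trial Ax + B must be multiplied by x².

```python
# Demonstrate the x^2 trick on y''' + y'' = x (the RHS x is a hypothetical
# degree-1 polynomial). The CF C1 + C2*x + C3*e^{-x} contains both a
# constant and an x term, so the trial PI becomes Ax^3 + Bx^2.
import sympy as sp

x, A, B, C1, C2, C3 = sp.symbols('x A B C1 C2 C3')
y = sp.Function('y')

yp = A*x**3 + B*x**2
lhs = yp.diff(x, 3) + yp.diff(x, 2)
sol = sp.solve(sp.Poly(lhs - x, x).all_coeffs(), [A, B])
print(sol)  # A=1/6, B=-1/2

# The full solution CF + PI satisfies the ODE
ode = sp.Eq(y(x).diff(x, 3) + y(x).diff(x, 2), x)
full = sp.Eq(y(x), C1 + C2*x + C3*sp.exp(-x) + yp.subs(sol))
ok, residual = sp.checkodesol(ode, full)
print(ok)  # True
```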
Variation of Parameters

Variation of parameters is a method for finding a particular solution to the equation y'' + p(x)y' + q(x)y = f(x) if the general solution for the corresponding homogeneous equation y'' + p(x)y' + q(x)y = 0 is known. We will now derive this general method.

We already know the general solution of the homogeneous equation: it is of the form c_1y_1 + c_2y_2. We will look for a particular solution of the non-homogeneous equation of the form \psi = uy_1 + vy_2, with u and v functions of the independent variable x. Differentiating this we get

\psi' = u'y_1 + uy_1' + v'y_2 + vy_2'\,
Now notice that there is currently only one condition on \psi, namely that \psi'' + p(x)\psi' + q(x)\psi = f(x). We now impose another condition, that

u'y_1 + v'y_2 = 0\,
This means that \psi'' will have no second derivatives of u and v. Thus, these new parameters (hence the name "variation of parameters") will be the solutions to some first order differential equation, which can be solved. Let us finish the problem:

\psi'' = u'y_1' + uy_1'' + v'y_2' + vy_2''\,


\psi'' + p(x)\psi' + q(x)\psi = f(x)\,

u'y_1' + v'y_2' + uy_1'' + vy_2'' + p(x)(uy_1' + vy_2') + q(x)(uy_1 + vy_2) = f(x)\,

u'y_1' + v'y_2' + u(y_1'' + p(x)y_1' + q(x)y_1) + v(y_2'' + p(x)y_2' + q(x)y_2) = f(x)\,

u'y_1' + v'y_2' = f(x)\,,

where the last step follows from the fact that y_1 and y_2 are solutions of the homogeneous equation.

Therefore, we have u'y_1 + v'y_2 = 0 and u'y_1' + v'y_2' = f(x). Multiplying the first equation by y_2' and the second by -y_2 and adding gives

u'y_1y_2' - u'y_1'y_2 = -f(x)y_2\,

u' = {-f(x)y_2 \over y_1y_2'-y_1'y_2}

A similar procedure gives

v' = {f(x)y_1 \over y_1y_2'-y_1'y_2}.

Now it is only necessary to evaluate these expressions and integrate them with respect to x to get the functions u and v, and then we have our particular solution \psi = uy_1 + vy_2. The general solution to the differential equation y'' + p(x)y' + q(x)y = f(x)\, is therefore c_1y_1 + c_2y_2 + uy_1 + vy_2\,.

Note that the main difficulty with this method is that the integrals involved are often extremely complicated. If the integral does not work out well, it is best to use the method of undetermined coefficients instead.

The quantity that appears in the denominator of the expressions for u' and v' is called the Wronskian of y_1 and y_2.
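The recipe above can be carried out with sympy. This is a sketch on the hypothetical example y'' + y = cos x (not from the text), with y1 = cos x and y2 = sin x, whose Wronskian is 1.

```python
# A sketch of variation of parameters in sympy, applied to the hypothetical
# example y'' + y = cos(x). Here y1 = cos x, y2 = sin x, and the Wronskian
# y1*y2' - y1'*y2 equals 1.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

f = sp.cos(x)
y1, y2 = sp.cos(x), sp.sin(x)
W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)   # Wronskian

u = sp.integrate(-f*y2/W, x)
v = sp.integrate(f*y1/W, x)
psi = u*y1 + v*y2   # a particular solution (up to a homogeneous term)

ode = sp.Eq(y(x).diff(x, 2) + y(x), sp.cos(x))
ok, residual = sp.checkodesol(ode, sp.Eq(y(x), psi))
print(ok)  # True
```

Note that this right-hand side could also be handled by undetermined coefficients (with an extra factor of x, since cos x appears in the CF); variation of parameters gives the same particular solution up to a homogeneous term.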

Laplace Transforms

The Laplace transform is a very useful tool for solving nonhomogenous initial-value problems. It allows us to reduce the problem of solving the differential equation to that of solving an algebraic equation. We begin with some setup.

Definition of the Laplace Transform

The Laplace transform of f(t)\, is defined as

F(s) = \int _0^\infty f(t)e^{-st}dt.

This can also be written as \mathcal{L}\{f(t)\}. When writing this on paper, you may write a cursive capital "L" and it will be generally understood. There is also an inverse Laplace transform \mathcal{L}^{-1}\{F(s)\} = f(t), but calculating it requires an integration with respect to a complex variable. Luckily, it is frequently possible to find \mathcal{L}^{-1}\{F(s)\} without resorting to this integration, using a variety of tricks which will be described later. However, it is first necessary to prove some facts about the Laplace transform.

Two Properties of the Laplace Transform

Property 1. The Laplace transform is a linear operator; that is, \mathcal{L}\{c_1f(t) + c_2g(t)\} = c_1\mathcal{L}\{f(t)\} + c_2\mathcal{L}\{g(t)\}.

The proof of this property follows immediately from the definition of the Laplace transform and is left to the reader.

Property 2. If F(s) = \mathcal{L}\{f(t)\}, then \mathcal{L}\{f'(t)\} = sF(s) - f(0)

\mathcal{L}\{f'(t)\} = \int _0^\infty f'(t)e^{-st}dt
= \lim_{C\to\infty} \left( \left.e^{-st}f(t) \right| _0^C - \int _0^C -sf(t)e^{-st}dt \right) (integrating by parts)
= -f(0) +  s \lim_{C\to\infty}\int _0^C f(t)e^{-st}dt
= s\mathcal{L}\{f(t)\} - f(0)
= sF(s) - f(0)\,

It is property 2 that makes the Laplace transform a useful tool for solving differential equations. As a corollary of property 2, note that \mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0).
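Property 2 can be illustrated with sympy's `laplace_transform`, here on the concrete choice f(t) = cos 2t (an arbitrary example, not from the text):

```python
# Illustrate Property 2, L{f'(t)} = sF(s) - f(0), using f(t) = cos(2t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.cos(2*t)

F = sp.laplace_transform(f, t, s, noconds=True)           # s/(s^2 + 4)
Fprime = sp.laplace_transform(f.diff(t), t, s, noconds=True)

# L{f'} should equal s*F(s) - f(0) = s^2/(s^2+4) - 1 = -4/(s^2+4)
print(sp.simplify(Fprime - (s*F - f.subs(t, 0))))  # 0
```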

Laplace Transforms of Simple Functions

\mathcal{L}\{1\} = {1 \over s}

\mathcal{L}\{e^{at}\} = {1 \over s-a}

\mathcal{L}\{\cos \omega t\} = {s \over s^2 + \omega^2}

\mathcal{L}\{\sin \omega t\} = {\omega \over s^2 + \omega^2}

The last two can be easily calculated using Euler's formula e^{i\omega t} = \cos \omega t + i \sin \omega t\,.

In order to find more Laplace transforms, in particular the transform of t^n, we will derive two more properties of the transform.

Two More Properties of the Laplace Transform

Property 3. If \mathcal{L}\{f(t)\} = F(s), then \mathcal{L}\{tf(t)\} = -F'(s).

Property 4. If \mathcal{L}\{f(t)\} = F(s), then \mathcal{L}\{e^{at}f(t)\} = F(s-a).

The proofs are straightforward and are left to the reader.

Now we can easily see that \mathcal{L}\{t\} = \mathcal{L}\{(t)(1)\} = -{d \over ds}\mathcal{L}\{1\} = {1 \over s^2}. Applying Property 3 repeatedly, we find that \mathcal{L}\{t^n\} = {n! \over s^{n+1}}. At last we are ready to solve a differential equation using Laplace transforms.
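The transform table above, and the formula for t^n, can all be checked with sympy:

```python
# Check the transform table and L{t^n} = n!/s^(n+1) with sympy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)
L = lambda expr: sp.laplace_transform(expr, t, s, noconds=True)

assert L(sp.S(1)) == 1/s
assert sp.simplify(L(sp.exp(a*t)) - 1/(s - a)) == 0
assert sp.simplify(L(sp.cos(w*t)) - s/(s**2 + w**2)) == 0
assert sp.simplify(L(sp.sin(w*t)) - w/(s**2 + w**2)) == 0

# L{t^n} = n!/s^(n+1) for the first few n
for n in range(1, 5):
    assert sp.simplify(L(t**n) - sp.factorial(n)/s**(n + 1)) == 0
print('all transforms match')
```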

Using Laplace Transforms to Solve Non-Homogeneous Initial-Value Problems

In general, we solve a second-order linear non-homogeneous initial-value problem as follows: First, we take the Laplace transform of both sides. This immediately reduces the differential equation to an algebraic one. We then solve for F(s). Finally, we take the inverse transform of both sides to find y.

Let's begin by using this technique to solve the problem

y'' - 4y' + 3y = e^{-t}; y(0) = 0, y'(0) = 1\,.

We begin by taking the Laplace transform of both sides and using property 1 (linearity):

s^2F(s) - sy(0) - y'(0) - 4[sF(s) - y(0)] + 3F(s) = {1 \over s+1}
(s^2-4s+3)F(s) - 1 = {1 \over s+1} (using the initial conditions)

Now we isolate F(s):

F(s) = {1 \over (s-3)(s-1)} + {1 \over (s-3)(s-1)(s+1)}

Here we have factored s^2 - 4s + 3 in preparation for the next step. We now attempt to take the inverse transform of both sides; in order to do this, we will have to break down the right hand side into partial fractions.

F(s) = {A \over (s-3)} + {B \over (s-1)} + {C \over (s-3)} + {D \over (s+1)} + {E \over (s-1)}

The first two fractions imply that A(s-1) + B(s-3) = 1\,. Setting s=3 gives A = {1 \over 2}, while setting s=1 gives B = -{1 \over 2}. The other three fractions similarly give C = D = {1 \over 8} and E = -{1 \over 4}. Therefore:

F(s) = {1 \over 2(s-3)} - {1 \over 2(s-1)} + {1 \over 8(s-3)} + {1 \over 8(s+1)} - {1 \over 4(s-1)}
= {5 \over 8(s-3)} - {3 \over 4(s-1)} + {1 \over 8(s+1)}
= {5 \over 8}\mathcal{L}\{e^{3t}\} - {3 \over 4}\mathcal{L}\{e^t\} + {1 \over 8}\mathcal{L}\{e^{-t}\}

And finally we can take the inverse transform (by inspection, of course) to get

y = {5 \over 8}e^{3t} - {3 \over 4}e^t + {1 \over 8}e^{-t}.
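We can confirm this answer with sympy's `dsolve`, which accepts the initial conditions directly:

```python
# Re-derive the worked initial-value problem and confirm the answer
# y = (5/8)e^{3t} - (3/4)e^{t} + (1/8)e^{-t}.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) - 4*y(t).diff(t) + 3*y(t), sp.exp(-t))
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1})

expected = (sp.Rational(5, 8)*sp.exp(3*t)
            - sp.Rational(3, 4)*sp.exp(t)
            + sp.Rational(1, 8)*sp.exp(-t))
print(sp.simplify(sol.rhs - expected))  # 0
```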

The Convolution

The convolution is a method of combining two functions to yield a third function. The convolution has applications in probability, statistics, and many other fields because it represents the "overlap" between the functions. We are not concerned with this property here; for us the convolution is useful as a quick method for calculating inverse Laplace transforms.

Definition. The convolution (f*g)(t)\, is defined as \int_0^t f(u)g(t-u)du.

The convolution has several useful properties, which are stated below:

Property 1. ((f*g)*h)(t) = (f*(g*h))(t)\, (Associativity)

Property 2. (f*g)(t) = (g*f)(t)\, (Commutativity)

Property 3. (f*(g+h))(t) = (f*g)(t) + (f*h)(t)\, (Distribution over addition)

We now prove the result that makes the convolution useful for calculating inverse Laplace transforms.

Theorem. \mathcal{L}\{(f*g)(t)\} = \mathcal{L}\{f(t)\} \cdot \mathcal{L}\{g(t)\}

\mathcal{L}\{(f*g)(t)\}=\int_0^\infty (f*g)(t)e^{-st}dt
=\int_0^\infty \left[\int_0^t f(u)g(t-u)du \right]e^{-st}dt
=\int_0^\infty \int_0^t e^{-st}f(u)g(t-u)du\,dt
=\int_0^\infty \int_u^\infty e^{-st}f(u)g(t-u)dt\,du (changing order of integration)
=\int_0^\infty f(u)\left[ \int_u^\infty e^{-st}g(t-u)dt\right]du
Now let v=t-u.
=\int_0^\infty f(u) \left[ \int_0^\infty e^{-s(u+v)}g(v)dv\right]du
=\int_0^\infty e^{-su}f(u)\left[\int_0^\infty e^{-sv}g(v)dv\right]du
= \mathcal{L}\{f(t)\} \cdot \mathcal{L}\{g(t)\}

Let's solve another differential equation:

y'' + y = \sin t\, ; y(0) = 0, y'(0) = 0

Taking Laplace transforms of both sides gives

s^2F(s) + F(s) = {1 \over s^2 + 1}
F(s) = {1 \over (s^2 + 1)^2}

We now have to find \mathcal{L}^{-1}\lbrace F(s)\rbrace. To do this, we notice that {1 \over (s^2 + 1)^2} = [\mathcal{L}\{\sin t\}]^2, so F(s) = \mathcal{L}\{ \sin t * \sin t \} by the Theorem above. Thus, the solution to our differential equation is the convolution of sine with itself. We proceed to calculate this:

\sin t * \sin t\,
= \int_0^t \sin(u) \sin(t-u)du
= \int_0^t {\cos(2u-t) - \cos(t) \over 2}du
= {1 \over 2} \sin t - {1 \over 2} t \cos t

Therefore, the solution to the original equation is

y = {1 \over 2} \sin t - {1 \over 2} t \cos t.
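As a final check, sympy can compute the convolution integral directly and confirm that the result solves the initial-value problem:

```python
# Compute the convolution sin t * sin t directly and confirm it solves
# y'' + y = sin t with y(0) = y'(0) = 0.
import sympy as sp

t, u = sp.symbols('t u')
y = sp.Function('y')

conv = sp.simplify(sp.integrate(sp.sin(u)*sp.sin(t - u), (u, 0, t)))
expected = sp.sin(t)/2 - t*sp.cos(t)/2
print(sp.simplify(conv - expected))  # 0

ode = sp.Eq(y(t).diff(t, 2) + y(t), sp.sin(t))
ok, residual = sp.checkodesol(ode, sp.Eq(y(t), expected))
print(ok)  # True

# Check the zero initial conditions
print(expected.subs(t, 0), expected.diff(t).subs(t, 0))  # 0 0
```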