Ordinary Differential Equations/Exact 1


First Order Differential Equations

This page details a method for finding solutions to equations of the form

P(x,y)+Q(x,y)\frac{dy}{dx}=0

This is often written as a differential form

P(x,y)\,dx+Q(x,y)\,dy=0

Subsequently, we will refer to this expression as the ODE. Differential forms frequently come up in multivariable calculus while studying line integrals.

Exact Differential Equations

Before we begin identifying and solving exact differential equations, it helps to make a few observations. We will begin by reminding ourselves of the chain rule from multivariable calculus, which states how to compute the derivative of a composition of two or more functions. Suppose that \psi(u,v) is a function of two real variables, and we are given functions g(t) and h(t) of a single real variable. Then the function f(t)=\psi(g(t),h(t)) is simply a function of t, with g and h being plugged into \psi as u and v. The chain rule from multivariable calculus tells us how to calculate the derivative of f(t). It states that:

f'(t)=\frac{\partial \psi}{\partial u}\frac{dg}{dt}+\frac{\partial\psi}{\partial v}\frac{dh}{dt}

If we slightly abuse the notation and call the two functions u(t) and v(t) (instead of g(t) and h(t)) then we can write the chain rule as

f'(t)=\frac{\partial \psi}{\partial u}\frac{du}{dt}+\frac{\partial\psi}{\partial v}\frac{dv}{dt}

As an example we could let \psi(u,v)=u^2+v^2 and we could let u(t)=\cos(t) and v(t)=\sin(t). Then according to the chain rule

f'(t)=2u\frac{du}{dt}+2v\frac{dv}{dt}=2\cos(t)(-\sin(t))+2\sin(t)\cos(t)=0

Of course this could have been seen more directly by substituting for u and v to discover that f(t)=1, but this simply gives us a verification that we took the derivative correctly.
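
The chain-rule computation above can also be spot-checked numerically. The following sketch (our own illustration; the helper names are hypothetical, not from the text) compares the chain-rule expression with a finite-difference derivative of f:

```python
from math import cos, sin

# psi(u, v) = u^2 + v^2 with u(t) = cos(t), v(t) = sin(t), as in the example
def f(t):
    return cos(t)**2 + sin(t)**2

def chain_rule(t):
    # psi_u * du/dt + psi_v * dv/dt = 2u*(-sin t) + 2v*(cos t)
    u, v = cos(t), sin(t)
    return 2*u*(-sin(t)) + 2*v*cos(t)

def numerical_derivative(func, t, h=1e-6):
    # central-difference approximation of func'(t)
    return (func(t + h) - func(t - h)) / (2*h)

for t in (0.3, 1.0, 2.5):
    # both should be 0, since f(t) = 1 identically
    assert abs(chain_rule(t) - numerical_derivative(f, t)) < 1e-6
```

Both quantities vanish at every sample point, matching the observation that f(t)=1 is constant.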

We will use this theory to evaluate:

\begin{align}\frac{d}{dx}\psi(x,y(x))&=\frac{\partial\psi}{\partial x}\frac{d}{dx}(x)+\frac{\partial\psi}{\partial y}\frac{d}{dx}(y)\\
&=\frac{\partial\psi}{\partial x}+\frac{\partial\psi}{\partial y}y'.\end{align}

If we examine this expression carefully, it looks equal to the left hand side of our ODE above. Specifically, if \tfrac{\partial\psi}{\partial x}(x,y)=P(x,y) and \tfrac{\partial\psi}{\partial y}(x,y)=Q(x,y) then our ODE is:

\frac{d}{dx}\psi(x,y(x))=0

This type of equation is especially easy to solve. The only functions whose derivative is 0 are constant functions. Thus the solution to our ODE, i.e. integrating it, will be given by

\psi(x,y)=C

Now consider the following example, applying what we have just figured out.

(y-x^2)+xy'=0

In this example P(x,y)=y-x^2 and Q(x,y)=x. Notice that if \psi(x,y)=xy-\frac{x^3}{3} then \psi_x'=y-x^2=P and \psi_y'=x=Q. By the way, if you ever want to check for yourself what we are doing here and your calculus is a bit rusty, get Maxima (http://maxima.sourceforge.net or prepackaged for your favorite Linux distribution or for Android). The derivation of \psi(x,y) we have just made can easily be replayed in Maxima, like so:

(%i1) psi:x*y-(x^3/3);
(%o1) x*y-x^3/3
(%i2) diff(psi,x);
(%o2) y-x^2

Or going from \psi_x' to \psi(x,y),

(%i1) psi_prime:y-x^2;
(%o1) y-x^2
(%i2) integrate(psi_prime,x);
(%o2) x*y-x^3/3

Turning back to our problem, by our observations above the solution of this equation should be given by \psi(x,y)=C, or in other words:

xy-\frac{x^3}{3}=C\qquad\text{ or }y=\frac{x^2}{3}+\frac{C}{x}
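
Since the solution here is explicit, it can be spot-checked numerically. A minimal sketch (our own illustration, using a finite-difference approximation for y'):

```python
def y(x, C):
    # the solution obtained above: y = x^2/3 + C/x
    return x**2/3 + C/x

def dy(x, C, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h, C) - y(x - h, C)) / (2*h)

# the residual of (y - x^2) + x*y' should vanish for every C
for C in (-2.0, 0.5, 3.0):
    for x in (0.5, 1.0, 2.0):
        assert abs((y(x, C) - x**2) + x*dy(x, C)) < 1e-5
```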

This particular equation is linear, so we may easily verify that the solution obtained in this way is correct. When there is a function \psi so that \psi_x'=P and \psi_y'=Q, the equation is called exact. Unfortunately, not every differential equation of the form P(x,y)+Q(x,y)y'=0 is exact. In order for this to be an effective method for solving differential equations we need a way to determine whether a differential equation is exact, and what the function \psi(x,y) is if it is.

In order to see that P and Q cannot be arbitrary, remember from multivariable calculus that \psi_{xy}'=\psi_{yx}' (read: the order of the partial derivatives of \psi is exchangeable) whenever the derivatives exist and are continuous. Since \psi_{x}'=P, then \psi_{xy}'=P_y; similarly \psi_{yx}'=Q_x. Hence, if the equation is exact we would definitely need

P_y=Q_x

or the same stated differently

\frac{\partial P}{\partial y}=\frac{\partial Q}{\partial x}
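
This condition can be checked mechanically. Here is a small sketch (our own illustration; `is_exact` is a hypothetical helper, not a standard function) that tests \partial P/\partial y=\partial Q/\partial x at a few sample points using finite differences:

```python
def is_exact(P, Q, points, h=1e-6, tol=1e-4):
    """Numerically check the exactness condition dP/dy == dQ/dx
    at a few sample points (a heuristic check, not a proof)."""
    for x, y in points:
        P_y = (P(x, y + h) - P(x, y - h)) / (2*h)
        Q_x = (Q(x + h, y) - Q(x - h, y)) / (2*h)
        if abs(P_y - Q_x) > tol:
            return False
    return True

pts = [(0.5, 1.0), (1.0, -2.0), (2.0, 3.0)]

# The example from the text: P = y - x^2, Q = x  (exact)
assert is_exact(lambda x, y: y - x**2, lambda x, y: x, pts)

# A non-exact counterexample: P = y^2, Q = x  (P_y = 2y, Q_x = 1)
assert not is_exact(lambda x, y: y**2, lambda x, y: x, pts)
```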

This can be put into a theorem.


Theorem. Suppose that P(x,y) and Q(x,y) have continuous partial derivatives and satisfy the relationship \frac{\partial P}{\partial y}=\frac{\partial Q}{\partial x}. Then there is a function \psi(x,y) so that \frac{\partial \psi}{\partial x}=P and \frac{\partial \psi}{\partial y}=Q.

Proof. We prove this by giving an explicit construction of \psi(x,y). First notice that if \psi exists then by integrating the expression \psi_x=P in x we get:

\psi(x,y)=\int P(x,y)\,dx+h(y).

Here \int P(x,y)\,dx means taking an anti-derivative of P(x,y) with respect to x, treating y as a constant. It is necessary to add a function h(y) because \frac{\partial}{\partial x}h(y)=0 for any function h(y), so with the above definition we still have \partial_x \psi=P.

Now we need to determine h(y). To do this we use that \psi_y=Q. The computation is carried out below, writing u for \psi and \Phi(y) for h(y).

Note that when the equation is exact, P\,dx+Q\,dy and {\partial u \over \partial x}dx+{\partial u \over \partial y}dy must be the same, meaning that P={\partial u \over \partial x} and Q={\partial u \over \partial y}. This implies that {\partial P \over \partial y}={\partial Q \over \partial x}. We will now prove that this is also a sufficient condition when the mixed derivative {\partial^2 u \over \partial x \partial y} is continuous.


First, take the integral

u=\int_{x_0}^x Pdx + \Phi (y)

This obviously satisfies the condition that P={\partial u \over \partial x}.

In order for it to also satisfy the other condition, we need Q(x,y)={\partial u \over \partial y}, meaning that

Q(x,y)=\int_{x_0}^x {\partial P \over \partial y}dx + \Phi'(y)
=\int_{x_0}^x {\partial Q \over \partial x}dx + \Phi'(y)
=Q(x,y)-Q(x_0,y)+\Phi'(y)

Canceling Q(x,y) from both sides gives \Phi'(y)=Q(x_0,y), and therefore \Phi(y)=\int_{y_0}^y Q(x_0,y)\,dy.

This proves that the equation is exact and that

u=\int_{x_0}^x Pdx + \int_{y_0}^y Q(x_0,y) dy = C is an integral of the differential equation.

Note that only C is the arbitrary constant. Changing y_0 only changes the integral by a constant value, which is absorbed by the C. Changing x_0 will also only change it by a constant because of the fact that {\partial P \over \partial y}={\partial Q \over \partial x}.
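
The explicit formula for u can likewise be evaluated numerically. In this sketch (our own illustration, using a simple trapezoidal rule) we rebuild u for the example P=y-x^2, Q=x and compare it with the closed form xy-\frac{x^3}{3} found earlier:

```python
def integral(f, a, b, n=2000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

def u(x, y, P, Q, x0=0.0, y0=0.0):
    # u = integral_{x0}^{x} P(t, y) dt + integral_{y0}^{y} Q(x0, t) dt
    return integral(lambda t: P(t, y), x0, x) + integral(lambda t: Q(x0, t), y0, y)

P = lambda x, y: y - x**2
Q = lambda x, y: x

# compare against the closed form psi(x, y) = x*y - x^3/3
for x, y in [(0.5, 1.0), (1.0, -2.0), (2.0, 3.0)]:
    assert abs(u(x, y, P, Q) - (x*y - x**3/3)) < 1e-4
```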


Consider the following DE:

(y-x^2)\,dx+x\,dy=0

Note that:

{\partial (y-x^2) \over \partial y} = {\partial x \over \partial x} = 1, and 1 is a continuous function, so this equation is exact by what has been proven above.

Therefore, the integral is

u=\int_{x_0}^x Pdx + \int_{y_0}^y Q(x_0,y) dy = C

Take x_0=0 and y_0=0

which gives u = \bigg[{xy-\frac{x^3}{3}}\bigg]_{x=0}^{x} + \bigg[{x_0 y}\bigg]_{y=0}^{y}= xy-\frac{x^3}{3}=C, the second term vanishing because x_0=0.

Integrating Factors for an Ordinary Linear Differential Equation of the First Order

Consider an equation of the form

\frac{dy}{dx}+P(x)y=Q(x)

where P(x), Q(x) and y are all functions of x. This is a first-order linear differential equation as discussed previously. For this to work, the form must be closely adhered to: the derivative must stand by itself.

In general these equations are not exact. They can, however, be made exact by multiplying through by an integrating factor, I(x), another function of x as we have done previously.

Multiply our original equation by I(x):

(1):I(x)\frac{dy}{dx}+I(x)P(x)y=I(x)Q(x)

This will be our new, solvable, DE. Now consider the derivative of the product below:

(2):\frac{d}{dx}\left( I(x)\cdot y \right)=I(x)\frac{dy}{dx}+I^{\prime}(x)y

Now, if we make the RHS of (1) equal to the LHS of (2), then

(3):I(x)Q(x)=\frac{d}{dx}\left( I(x)\cdot y \right)

Which, by the other halves of the equations, makes:

(4):I(x)\frac{dy}{dx}+I(x)P(x)y=I(x)\frac{dy}{dx}+I^{\prime}(x)y

Which simplifies to:

(5):I^{\prime}(x)=I(x)P(x)

Equating the two sides to obtain (3) forces the multiplication by I(x) to turn the LHS of (1) into the derivative of a product, i.e.

(6):I(x)\frac{dy}{dx}+I(x)P(x)y=\frac{d}{dx}\left( I(x)\cdot y \right)

The new DE is therefore exact, and can be solved more easily. We now find the function I(x) from (5). We will change notation slightly here.

\frac{dI}{dx}=I \cdot P(x)
\int \frac{1}{I}dI=\int P(x)dx
\ln |I|=\int P(x) dx
|I|=e^{\int P(x) dx}
I=\pm e^{\int P(x) dx}

We take this to be our integrating factor. We can ignore the negative factor, because when both sides of the DE are multiplied by it, they will cancel. So, our integrating factor is:

I(x)=e^{\int P(x) dx}

To solve the DE, we then multiply by this factor, and solve the equation, given that one side will be able to be turned into a derivative of a product.
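
As a concrete illustration (our own example, not from the text), take y'+y=x, so P(x)=1, Q(x)=x and I(x)=e^{\int 1\,dx}=e^x. Then (e^x y)'=xe^x, so e^x y=(x-1)e^x+C and y=x-1+Ce^{-x}. A quick numerical check:

```python
from math import exp

def y(x, C):
    # solution of y' + y = x obtained via the integrating factor e^x
    return x - 1 + C*exp(-x)

def dy(x, C, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h, C) - y(x - h, C)) / (2*h)

for C in (-1.0, 0.0, 2.0):
    for x in (0.0, 0.5, 1.5):
        # the residual of y' + y - x should vanish
        assert abs(dy(x, C) + y(x, C) - x) < 1e-5
```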

General Integrating Factors

Now we generalize from first-order linear differential equations to equations of the form P\,dx+Q\,dy=0, which is our ODE from the beginning of this section. In general these equations are not exact either. However, when multiplied by a function h(x,y), the product hP\,dx+hQ\,dy=0 may be exact.

Theorem: An equation of the form Pdx+Qdy=0 which has exactly one integral solution with one arbitrary constant C has infinitely many integrating factors.

Proof: Suppose that the solution is f(x,y)=C. The differential is

{\partial f \over \partial x}dx+{\partial f \over \partial y}dy=0

Since f(x,y)=C is a solution of P\,dx+Q\,dy=0, it must hold true that

\frac{f_x}{P}= \frac{f_y}{Q}

Which means that a function h exists such that

{\partial f \over \partial x}=hP


{\partial f \over \partial y}=hQ.

Obviously, this is an integrating factor. Furthermore, let S(f) be any function of f.

Then hS(f)(Pdx+Qdy) would equal S(f)({\partial f \over \partial x}dx+{\partial f \over \partial y}dy) so hS(f) is also an integrating factor.
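
As a concrete illustration (our own example, not from the original text), consider y\,dx-x\,dy=0, whose solution is f(x,y)=\frac{x}{y}=C. Here f_x=\frac{1}{y} and f_y=-\frac{x}{y^2}, so h=\frac{f_x}{P}=\frac{1}{y^2} is an integrating factor:

\frac{1}{y^2}(y\,dx-x\,dy)=\frac{dx}{y}-\frac{x}{y^2}\,dy=d\left(\frac{x}{y}\right)

Choosing S(f)=f then gives the further integrating factor hS(f)=\frac{x}{y^3}, and every other choice of S gives yet another, exhibiting the promised infinite family.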