Ordinary Differential Equations/Separable 1

From Wikibooks, open books for an open world

First Order Differential Equations

This section deals with a technique for solving differential equations known as Separation of Variables. Before we begin discussing separation of variables, it is very helpful to recall the theorem behind integration by substitution from calculus.

It states that if f is a continuous function and g has a derivative that is itself a continuous function, then integrating such chained functions yields

  ∫ f(g(x)) g'(x) dx = F(g(x)) + C,

where F is an antiderivative of f.

In other words, we obtain an antiderivative of the integrand by taking an antiderivative F of f and plugging g into it. This provides a very useful tool for solving many ODEs.

To take an example, let's consider the differential equation:

  dy/dx = y

We comment that the constant function y = 0 is a solution. We will focus on finding the solutions that are not zero. As commented before, it is a consequence of the uniqueness of solutions that two solutions can never intersect. For this problem it means that a nonzero solution is always positive or always negative, because it cannot cross y = 0. For easier understanding, picture a normal Cartesian coordinate system: each solution curve lies entirely above or entirely below the x-axis.

For this reason we are justified, for the remainder, in assuming that y ≠ 0. Thus we can divide by y, as division by zero is undefined, to find that:

  (1/y) dy/dx = 1

Now, integrating both sides with respect to x, we have that

  ∫ (1/y) (dy/dx) dx = ∫ 1 dx

Using the theorem about integration by substitution:

  ∫ (1/y) dy = ∫ 1 dx

  ln|y| = x + C

Exponentiating both sides gives |y| = e^(x + C), that is, y = ±e^C · e^x. Finally we comment that ±e^C ≠ 0, so after collapsing ±e^C to a single constant C, technically speaking the above derivation only shows that we get a solution if C > 0 or if C < 0, taking into account C ≠ 0. Notice on the other hand that if we allow C = 0 then we recover the solution y = 0. Thus we can represent all solutions as

  y = C·e^x

where C is a real number.

This method is known as separation of variables because, after we have divided, we have one side that is entirely in terms of y and the other side entirely in terms of x.
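The algebra above can be sanity-checked numerically. The sketch below (taking dy/dx = y as the example equation, an assumption made here for concreteness) compares a central-difference estimate of the derivative of y = C·e^x against y itself:

```python
import math

def check_solution(C, x, h=1e-6):
    """Residual of y' - y for y = C*e^x, via a central difference."""
    y = lambda t: C * math.exp(t)
    dydx = (y(x + h) - y(x - h)) / (2 * h)  # numerical derivative
    return abs(dydx - y(x))

# Positive, negative, and zero C all give (numerical) solutions of y' = y.
for C in (2.0, -3.5, 0.0):
    for x in (-1.0, 0.0, 1.5):
        assert check_solution(C, x) < 1e-4
print("every real C gives a solution")
```

The residual stays tiny for positive, negative, and zero C, matching the claim that every real C gives a solution.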

Comments about notation and heuristics

While the change of variables theorem is the mathematical justification, using Leibniz' notation for derivatives allows for excellent mnemonics for remembering how separation of variables works.

We present two notationally different but equivalent derivations here:

Beginning from the point where we have divided by y, and using Leibniz notation, we have that:

  (1/y) dy/dx = 1

One might not view this as fully separated because there is still a "dx" on the left-hand side. But one can fully separate the equation by multiplying both sides by "dx", to get:

  (1/y) dy = dx

Then placing an integral sign in front of each side, one gets:

  ∫ (1/y) dy = ∫ dx

  ln|y| = x + C
This arrives at the correct place, but treating "dx" as a number is purely heuristic.

A second mnemonic, which is fairly similar, uses Leibniz notation to remind us how the change of variables theorem should work. Beginning from the step in the original example where we integrated both sides with respect to x, and using Leibniz notation, we have:

  ∫ (1/y) (dy/dx) dx = ∫ 1 dx

Now we can "cancel the dx" to get that (dy/dx) dx = dy. Thus

  ∫ (1/y) dy = ∫ 1 dx


Again, "dx" is not a number, so it cannot really be canceled. But the heuristic does correctly give the change of variables formula.

The reader should use whichever of the above methods seems clearest to them. The book will undoubtedly use any one of the three at different points in its exposition.

What is a Separable Equation?

An ordinary differential equation is called separable if it can be rewritten in the form:

  N(y) dy/dx = M(x)

Integrating both sides with respect to x,

  ∫ N(y) (dy/dx) dx = ∫ M(x) dx

Now, by one of the methods we discussed above, we can change and simplify the integral involving y to obtain

  ∫ N(y) dy = ∫ M(x) dx

An equation of the form

  P(x, y) + Q(x, y) dy/dx = 0

where you can factor both P and Q into separate functions of x and y,

  P(x, y) = P1(x) P2(y),  Q(x, y) = Q1(x) Q2(y),

is called separable, because dividing through by P2(y) Q1(x) turns it into an equation with separated variables:

  P1(x)/Q1(x) + (Q2(y)/P2(y)) dy/dx = 0

However, attention must be paid to this process of division. The cases Q1(x) = 0 and P2(y) = 0 give additional possible solutions that will not be found through the integral, because of this division.

Trivial cases

There are two special cases of a separable equation that make the solution almost trivial. These are the cases where either

  N(y) = k  or  M(x) = k,

where k is a constant. In either case the equation drops from involving three quantities (x, y, and dy/dx) down to two: when we can throw out one of the two variables, the solution can be achieved by a simple integral.

No y term: N(y) = k

If N(y) = k, we can treat k as part of M(x) and turn N(y) into 1: we divide both sides by k and let f(x) = M(x)/k. Then we're left with

  dy/dx = f(x)

This is now basic calculus - take the integral of both sides. The general solution is

  y = ∫ f(x) dx + C

where C is a constant.
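As a sketch (assuming a hypothetical right-hand side f(x) = 2x), the general solution is just an antiderivative of f plus a constant:

```python
import math

# dy/dx = f(x) with a hypothetical f(x) = 2x; an antiderivative is x**2.
def f(x):
    return 2 * x

def y(x, C):
    return x ** 2 + C  # general solution: antiderivative of f plus a constant

# Check y' = f numerically with a central difference.
h = 1e-6
for x in (0.5, 1.0, 2.0):
    dydx = (y(x + h, 3.0) - y(x - h, 3.0)) / (2 * h)
    assert abs(dydx - f(x)) < 1e-6
print("y = x^2 + C solves dy/dx = 2x")
```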

Let's take a look at a few examples.

Example 1

Here we have a derivative equaling a function of x. The original function is the antiderivative of this, so we have to integrate to find it. We integrate both sides of the equation with respect to x:

This is the general solution. A function that solves a differential equation is said to satisfy it, so the function we have found satisfies the differential equation we started with.


Example 2

This works the same as before - integrate both sides with respect to x.

Note that even in this simple case it may be impossible to write down a formula for the solution, because f(x) may not be integrable in elementary terms. In that case, expressing y, the function we are looking for, symbolically with an integral sign is deemed to be a solution.
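For instance, with the hypothetical choice dy/dx = e^(-x^2), no elementary antiderivative exists, but the symbolic solution y(x) = ∫₀^x e^(-t^2) dt + C can still be evaluated numerically; this particular integral happens to equal (√π/2)·erf(x):

```python
import math

def y(x, C=0.0, n=10_000):
    """Solution of dy/dx = exp(-x**2), written as a definite integral
    from 0 to x and evaluated with the trapezoid rule."""
    h = x / n
    total = 0.5 * (1.0 + math.exp(-x * x))  # endpoint terms
    for k in range(1, n):
        t = k * h
        total += math.exp(-t * t)
    return C + h * total

# The integral of exp(-t^2) from 0 to x equals (sqrt(pi)/2) * erf(x).
for x in (0.5, 1.0, 2.0):
    exact = math.sqrt(math.pi) / 2 * math.erf(x)
    assert abs(y(x) - exact) < 1e-6
print("integral form of the solution evaluates correctly")
```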

No x term: M(x) = k

Another special case is when

  M(x) = k

The only difference is how we make it into an integrable form. Our equation is

  N(y) dy/dx = k
This type of equation is called an autonomous differential equation: it has no explicit dependence on the independent variable x except through the function y. We will have more to say about this type of equation later, but for the moment we note that it is always separable.

Integrating both sides with respect to x we get

  ∫ N(y) (dy/dx) dx = ∫ k dx

We now have two integrals, each taken with respect to the variable contained in it. The solution is of the form

  ∫ N(y) dy = kx + C

You can now try to solve for y. It is possible that the left-hand side doesn't have an antiderivative that can be expressed as a nice formula; in that case we leave the answer in as clean a form as possible, that is, with the integral sign unresolved. Another important concept emerges here.

Example 4

Our equation here is dy/dx = k/y. We multiply through by y, and multiply through by dx; the rest of the math works as before:

  y dy = k dx

  y^2/2 = kx + C

In this case it's more convenient to solve for y^2 than for y. If we take the square root to solve for y, we change the equation (since the square root of a number is always positive, we would lose half of each solution curve). In these cases it's just fine to leave it in its current form.

You may have noticed that the right-hand side of the equation, before solving for y, is the same for both examples. When M(x) = k, the right-hand side will always be kx + C, where k is whatever constant is in the problem. It only changes when M(x) is not constant.

As before, it may be impossible to do the integral. Even if it is possible, it may then not be possible to invert the equation to give y explicitly as a function of x, for example if we get tan(y) + 1/y = x. Again, this does not matter; the result is regarded as a valid solution.
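A sketch of the check for this case, with hypothetical constants k and C: from an equation of the form y·dy/dx = k we obtained the implicit solution y^2 = 2kx + C, whose positive branch should satisfy dy/dx = k/y.

```python
import math

k, C = 1.5, 4.0  # hypothetical constants

def y(x):
    return math.sqrt(2 * k * x + C)  # positive branch of y**2 = 2*k*x + C

# Verify dy/dx = k / y at a few points with a central difference.
h = 1e-6
for x in (0.0, 1.0, 3.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - k / y(x)) < 1e-5
print("y^2 = 2kx + C implicitly solves y * dy/dx = k")
```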

General Separable Variable Equations

In the most general case, neither M(x) nor N(y) is a constant. We use the same method as before (although the variables might have to be separated first); the only difference is that both sides produce non-trivial integrals, so we have a little more work to do. This can produce some untidy solutions, for which ODEs are known, and that untidiness often means we will want to check our result at the end.

Example 5

Dividing by y^3 and multiplying by dx gives
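The specific equation here is a hypothetical stand-in: taking dy/dx = x·y^3, dividing by y^3 and multiplying by dx gives dy/y^3 = x dx, hence -1/(2y^2) = x^2/2 + C, i.e. y^2 = 1/(A - x^2) for a constant A. A quick check:

```python
import math

A = 9.0  # hypothetical integration constant; the solution needs x**2 < A

def y(x):
    return (A - x * x) ** -0.5  # from -1/(2*y**2) = x**2/2 + C

# Verify dy/dx = x * y**3 numerically with a central difference.
h = 1e-6
for x in (0.0, 1.0, 2.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - x * y(x) ** 3) < 1e-5
print("separated solution satisfies dy/dx = x*y^3")
```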

Example 6

We multiply through to separate the variables:

Again collapsing ±e^C into C.

Example 7

This doesn't look separable at first glance, but a little factoring can be applied:

This is separable.
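A hypothetical instance of this factoring trick is dy/dx = xy + x + y + 1, which factors as (x + 1)(y + 1); separating gives dy/(y + 1) = (x + 1) dx, so ln|y + 1| = x^2/2 + x + C and, after collapsing constants, y = C·e^(x^2/2 + x) - 1:

```python
import math

C = 2.0  # hypothetical constant

def y(x):
    # From ln|y + 1| = x**2/2 + x + C0, with the constants collapsed into C.
    return C * math.exp(x * x / 2 + x) - 1

# Verify dy/dx = x*y + x + y + 1, i.e. (x + 1)*(y + 1), numerically.
h = 1e-7
for x in (-1.0, 0.0, 1.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - (x + 1) * (y(x) + 1)) < 1e-4
print("factored form (x+1)(y+1) checks out")
```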

Problems with Initial Conditions

Solving an initial value problem is fairly easy. Solve for y as we did above; then, once you have the equation, plug in b for y and a for x, and solve for the constant.

Example 8

As we saw in example 3, the general solution to this is y = C·e^(kx). Let's plug in our initial conditions:

This is the particular solution.
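With hypothetical numbers, say dy/dx = 2y and y(0) = 3, the recipe looks like this:

```python
import math

k, a, b = 2.0, 0.0, 3.0  # hypothetical: dy/dx = 2*y with y(0) = 3

# General solution y = C*e^(k*x); the initial condition fixes C.
Cconst = b * math.exp(-k * a)  # solve b = C*e^(k*a) for C

def y(x):
    return Cconst * math.exp(k * x)  # particular solution

assert abs(y(a) - b) < 1e-12  # initial condition holds
h = 1e-6
dydx = (y(1.0 + h) - y(1.0 - h)) / (2 * h)
assert abs(dydx - k * y(1.0)) < 1e-4  # satisfies dy/dx = k*y
print("particular solution is y = 3*e^(2x)")
```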

Unfortunately in some cases care must be used when solving for the constant.

Example 9

This equation is already separated, so we integrate both sides:

By plugging in the initial condition we find that:

So now we have that:

But if we continue to solve for y we must be careful. One may naively write:

But there is a problem: we are looking for a function y(x), which means the function must be single-valued (see Single-valued function). And we should be careful about what we mean by the symbol ±. Do we mean that for each different x there is a different choice of sign?

Fortunately, since the function has to be differentiable (and hence continuous), y(x) cannot jump back and forth between positive and negative values. So we must mean that there is one fixed choice of sign:


But which one? It is a nice consequence of uniqueness that only one of the two functions can solve the problem. Both solve the differential equation, so we check the initial condition: in the first instance y(0) = 2, so that is not our solution, but for the second function y(0) = -2, as desired. Thus the solution is given by:
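A representative setup with the same sign issue (the equation and constants here are assumptions) is y·dy/dx = -x with y(0) = -2, giving y^2 = 4 - x^2; of the two branches y = ±√(4 - x^2), only the negative one meets the initial condition:

```python
import math

def y_pos(x):
    return math.sqrt(4 - x * x)   # y(0) = +2: wrong branch

def y_neg(x):
    return -math.sqrt(4 - x * x)  # y(0) = -2: matches the condition

assert y_pos(0.0) == 2.0
assert y_neg(0.0) == -2.0

# Both branches satisfy y * dy/dx = -x, but only y_neg fits y(0) = -2.
h = 1e-7
for x in (0.5, 1.0):
    dydx = (y_neg(x + h) - y_neg(x - h)) / (2 * h)
    assert abs(y_neg(x) * dydx - (-x)) < 1e-5
print("solution is the single-signed branch y = -sqrt(4 - x^2)")
```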

Example 10

Another interesting example that illustrates problems that may arise is:

In order to solve this initial value problem we begin by separating variables.

Now, plugging in the initial condition, we have that:

Thus

So we have a solution. But by inspection we also have another solution, so there are at least two solutions to this problem. Perhaps there are more; at the moment we have no way to be sure. Worse, nothing comes up while separating variables to indicate that we are in a situation where the solutions are not unique. For this reason we will return to this issue and discuss the general mathematical theorems that guarantee the existence of unique solutions.
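A classic equation exhibiting this failure of uniqueness (used here as an assumed stand-in) is dy/dx = √y with y(0) = 0: both y = 0 and y = x^2/4 (for x ≥ 0) satisfy the problem:

```python
import math

def y1(x):
    return 0.0           # the trivial solution

def y2(x):
    return x * x / 4.0   # found by separating: dy/sqrt(y) = dx

# Both satisfy dy/dx = sqrt(y) and y(0) = 0, so the IVP is not unique.
assert y1(0.0) == 0.0 and y2(0.0) == 0.0
h = 1e-6
for x in (1.0, 2.0):
    d1 = (y1(x + h) - y1(x - h)) / (2 * h)
    d2 = (y2(x + h) - y2(x - h)) / (2 * h)
    assert abs(d1 - math.sqrt(y1(x))) < 1e-6
    assert abs(d2 - math.sqrt(y2(x))) < 1e-6
print("two distinct solutions pass through (0, 0)")
```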

This may be important for many reasons. For example, one often believes a real-life process is described by a function which satisfies an ODE. Usually, using the ODE and actual measurements of the process, you would like to deduce what function governs the process so you can make predictions about future behaviour. But if the solutions to the ODE are not unique, it is difficult to tell whether you have found the one solution which is really giving the behaviour of the real-world process.

Example 11

In the example above we saw that sometimes a differential equation is not well behaved and admits multiple solutions. In this example we see that sometimes the problem is perfectly well behaved, but if we are not careful we may think we have found multiple solutions.

Now by plugging in the initial condition we find that

At this point it is very tempting to claim that:

And we appear to get two solutions:

But if we are careful, we can see that the second equation is not really a solution. To see this, let's plug it into the differential equation.



Thus when we plug this candidate into the differential equation we are trying to equate two expressions, but notice that the first is negative for the relevant values of x while the second is always positive.

This doesn't happen for the other function, because for it the two expressions we compare agree, since we can drop the absolute values in the second expression.

So what went wrong? We introduced the ambiguity when we were solving for y by squaring both sides of the equation. We could have avoided this issue if we had solved for y immediately after integrating, in which case we would have been working with the equation containing the absolute value, and:


Plugging this back in and solving for y we get:
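A hypothetical equation exhibiting the same trap is dy/dx = |y| with y(0) = -3. Careless manipulation of ln|y| = x + C suggests y = ±3e^x, but the branch y = -3e^x has a negative derivative while |y| is always positive; the genuine solution through (0, -3) is y = -3e^(-x):

```python
import math

def spurious(x):
    return -3 * math.exp(x)   # from carelessly writing y = ±3*e^x

def genuine(x):
    return -3 * math.exp(-x)  # actual solution of y' = |y|, y(0) = -3

h = 1e-7
x = 1.0
d_sp = (spurious(x + h) - spurious(x - h)) / (2 * h)
d_ge = (genuine(x + h) - genuine(x - h)) / (2 * h)

# The spurious branch fails: its derivative is negative, |y| is positive.
assert d_sp < 0 < abs(spurious(x))
# The genuine solution satisfies y' = |y|.
assert abs(d_ge - abs(genuine(x))) < 1e-5
print("careless sign handling introduced a spurious branch")
```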

Proceed to the next part of the lesson: Examples