# Ordinary Differential Equations/Second Order

In this chapter we will primarily be focused on linear second order ordinary differential equations. That is, we will be interested in equations of the form ${\displaystyle ({\text{LIVP}})\qquad {\begin{cases}y''+p(x)y'+q(x)y=g(x)\\y(x_{0})=y_{0}\\y'(x_{0})=y'_{0}\end{cases}}.}$

While it doesn't often enter into the practical business of finding solutions to differential equations, it is important to keep in mind when there is even hope that a solution exists. The following theorem gives at least one case where we can hope to find solutions.

Theorem: Suppose ${\displaystyle p(x),\,q(x)}$ and ${\displaystyle g(x)\,}$ are continuous functions defined on an open interval ${\displaystyle I}$ and that ${\displaystyle x_{0}\in I}$. Then there exists a unique function ${\displaystyle y(x)}$ defined on ${\displaystyle I}$ that satisfies the ordinary differential equation ${\displaystyle y''(x)+p(x)y'+q(x)y=g(x)}$ and satisfies the initial conditions ${\displaystyle y(x_{0})=y_{0}}$, ${\displaystyle y'(x_{0})=y_{0}'}$.

Putting the proof of this fact aside for now, even knowing this statement still provides us with a lot of information. In particular it gives some idea of how many solutions there are. One way of looking at what this theorem is saying is that a solution is completely determined by two numbers, namely ${\displaystyle y_{0}}$ and ${\displaystyle y'_{0}}$.

We first reduce this problem to the homogeneous case, that is ${\displaystyle g(x)=0}$. Later we will introduce methods that will allow us to leverage our understanding of the homogeneous problem to better understand the non-homogeneous case. Thus we are interested in the problem of finding solutions to

${\displaystyle ({\text{LH}})\qquad y''+p(x)y'+q(x)y=0}$

The first thing to notice is that if ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ are solutions to (LH), then for any two real numbers ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$, the function ${\displaystyle c_{1}y_{1}+c_{2}y_{2}}$ is also a solution. This may be directly verified by substituting into the left hand side of (LH).

{\displaystyle {\begin{aligned}&(c_{1}y_{1}+c_{2}y_{2})''+p(x)(c_{1}y_{1}+c_{2}y_{2})'+q(x)(c_{1}y_{1}+c_{2}y_{2})\\=&c_{1}y_{1}''+c_{2}y_{2}''+c_{1}p(x)y_{1}'+c_{2}p(x)y_{2}'+c_{1}q(x)y_{1}+c_{2}q(x)y_{2}\\=&c_{1}(y_{1}''+p(x)y_{1}'+q(x)y_{1})+c_{2}(y_{2}''+p(x)y_{2}'+q(x)y_{2})=c_{1}\cdot 0+c_{2}\cdot 0=0\end{aligned}}}

If you're familiar with linear algebra, then you'll recall that a transformation ${\displaystyle T}$ is called linear if ${\displaystyle T(c_{1}v+c_{2}w)=c_{1}T(v)+c_{2}T(w)}$ for all scalars ${\displaystyle c_{1},c_{2}}$ and vectors ${\displaystyle v,w}$. So what we are really seeing is that the left hand side of the ODE is a linear transformation on functions, and it is for this reason the equation is called linear.

Now this gives us a very interesting fact for the homogeneous case. Recall we mentioned above that our existence theorem tells us all solutions are parametrized by two initial conditions. Putting this together with the fact that linear combinations of solutions to the homogeneous problem are again solutions, it becomes interesting to investigate what initial value problems we can solve simply by taking linear combinations of solutions that we already know.

That is, given fixed numbers ${\displaystyle y_{0}}$ and ${\displaystyle y'_{0}}$ we consider the problem

${\displaystyle ({\text{LHIVP}})\qquad {\begin{cases}y''+p(x)y'+q(x)y=0\\y(x_{0})=y_{0},\quad y'(x_{0})=y'_{0}.\end{cases}}}$

Suppose we know two solutions to the homogeneous problem, ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ but suppose that ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ don't satisfy the initial conditions. Since we are interested in solving the initial value problem ${\displaystyle y(x_{0})=y_{0}}$ and ${\displaystyle y'(x_{0})=y'_{0}}$ and we know that linear combinations of solutions are again solutions we can ask the question: "Is it possible that ${\displaystyle y=c_{1}y_{1}+c_{2}y_{2}}$?"

If that were the case we could evaluate ${\displaystyle y}$ to check the initial conditions. So we would need to have that:

${\displaystyle y(x_{0})=c_{1}y_{1}(x_{0})+c_{2}y_{2}(x_{0})}$

and

${\displaystyle y'(x_{0})=c_{1}y_{1}'(x_{0})+c_{2}y_{2}'(x_{0})}$

But it is important not to lose sight of the fact that we are assuming that ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ are just fixed functions that we know. So ${\displaystyle y_{1}(x_{0})}$, ${\displaystyle y_{1}'(x_{0})}$, ${\displaystyle y_{2}(x_{0})}$, and ${\displaystyle y_{2}'(x_{0})}$ are simply four numbers that we know.

This means we are really trying to solve the following linear system with two equations and two unknowns:

{\displaystyle \left\{{\begin{aligned}c_{1}y_{1}(x_{0})+c_{2}y_{2}(x_{0})&=y_{0}\\c_{1}y_{1}'(x_{0})+c_{2}y_{2}'(x_{0})&=y_{0}'\end{aligned}}\right.}

From linear algebra we know that such a system can be solved for any set of initial conditions ${\displaystyle y_{0}}$ and ${\displaystyle y'_{0}}$ provided the determinant of the coefficient matrix is not zero. In this two by two case that is simply ${\displaystyle y_{1}(x_{0})y_{2}'(x_{0})-y_{2}(x_{0})y_{1}'(x_{0})}$. This determinant, in the subject of ODEs, is named after Józef Hoene-Wroński, the mathematician who first used it systematically. It is known as the Wronskian, and we now give it a formal definition.

Definition: Given functions ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ the Wronskian of ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ is the function ${\displaystyle W(y_{1},y_{2})(x):=y_{1}(x)y_{2}'(x)-y_{2}(x)y_{1}'(x)}$.
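The definition is easy to put into code. The following sketch is ours (not from the text); it evaluates the Wronskian numerically, using ${\displaystyle e^{x}}$ and ${\displaystyle e^{2x}}$ as sample functions, whose Wronskian works out to ${\displaystyle e^{3x}}$ and is therefore never zero.

```python
import math

def wronskian(y1, y1p, y2, y2p, x):
    """W(y1, y2)(x) = y1(x)*y2'(x) - y2(x)*y1'(x)."""
    return y1(x) * y2p(x) - y2(x) * y1p(x)

# Sample pair: y1 = e^x, y2 = e^{2x}.  At x = 0: W = 1*2 - 1*1 = 1.
w0 = wronskian(math.exp, math.exp,
               lambda t: math.exp(2 * t), lambda t: 2 * math.exp(2 * t),
               0.0)
```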

Our discussion above can be summarized by the following theorem.

Theorem:
Suppose ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ are two solutions to the linear homogeneous problem (LH). Then every solution to the initial value problem
${\displaystyle {\begin{cases}y''+p(x)y'+q(x)y=0,\\y(x_{0})=y_{0},\\y'(x_{0})=y'_{0}.\end{cases}}}$

may be written as a linear combination of ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ provided that the Wronskian ${\displaystyle W(y_{1},y_{2})(x_{0})\neq 0}$.

### Constant Coefficients

The first tractable problem is to consider the case when ${\displaystyle p(x)}$ and ${\displaystyle q(x)}$ are constants. For convenience we also allow ${\displaystyle y''}$ to have a non-zero constant coefficient. Thus we are interested in equations of the form

${\displaystyle ay''+by'+cy=g(x),}$

where a, b and c are real numbers with ${\displaystyle a\neq 0}$.

The homogeneous equation associated with this is

${\displaystyle ay''+by'+cy=0\,.}$

Our experience with first order differential equations tells us that any solution to ${\displaystyle ay'-by=0}$ has the form ${\displaystyle e^{rx}}$ (in this case ${\displaystyle r=b/a}$). It turns out to be worth the effort to see if such a function will ever be a solution to the equation we are considering. So we simply substitute ${\displaystyle y=e^{rx}}$ into our equation to get:

${\displaystyle ar^{2}e^{rx}+bre^{rx}+ce^{rx}=(ar^{2}+br+c)e^{rx}=0\,,}$

Since ${\displaystyle e^{rx}}$ is never zero, the only way for the product to be zero is if ${\displaystyle r}$ happens to satisfy:

${\displaystyle ar^{2}+br+c=0}$

This equation is known as the characteristic equation associated with the homogeneous differential equation and the polynomial ${\displaystyle ar^{2}+br+c}$ is called the characteristic polynomial. Since ${\displaystyle a,b,c}$ are real numbers there are three cases to consider.
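The three cases can be sorted out mechanically from the discriminant. A small sketch (the function name is ours) that finds the roots and classifies the case:

```python
import cmath

def characteristic_roots(a, b, c):
    """Roots of a*r^2 + b*r + c = 0 and which of the three cases applies."""
    disc = b * b - 4 * a * c
    # cmath.sqrt handles a negative discriminant, returning a complex value.
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        case = "real distinct"
    elif disc < 0:
        case = "complex conjugate"
    else:
        case = "repeated real"
    return r1, r2, case
```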

#### Real distinct roots

The first case is that ${\displaystyle b^{2}-4ac>0}$, in which case the quadratic formula furnishes us with two real numbers ${\displaystyle r_{1},r_{2}}$ so that ${\displaystyle ar_{1}^{2}+br_{1}+c=0=ar_{2}^{2}+br_{2}+c}$. In this case our calculation above shows us that ${\displaystyle e^{r_{1}x}}$ and ${\displaystyle e^{r_{2}x}}$ are two different solutions to our equation. As you will show in the exercises the Wronskian of ${\displaystyle e^{r_{1}x}}$ and ${\displaystyle e^{r_{2}x}}$ is not zero in this case. Thus we have found two solutions to the equation, and by our theorem we can represent every solution as a linear combination of these two solutions.

Example 1

Find the general solution to ${\displaystyle y''-3y'+2y=0}$. In this case the characteristic equation is ${\displaystyle r^{2}-3r+2=0}$. The polynomial ${\displaystyle r^{2}-3r+2=(r-2)(r-1)}$ has two real roots, ${\displaystyle r_{1}=1}$ and ${\displaystyle r_{2}=2}$. So we have two solutions ${\displaystyle y_{1}=e^{x}}$ and ${\displaystyle y_{2}=e^{2x}}$. Since these are two different solutions to a second order equation they form a fundamental solution set. So if ${\displaystyle y}$ is a general solution then

${\displaystyle y=c_{1}e^{x}+c_{2}e^{2x}}$.
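We can sanity-check this general solution by substituting it back into the equation; the residual below (helper name ours) should vanish for any choice of the constants and any sample point:

```python
import math

def residual(c1, c2, x):
    """Left-hand side y'' - 3y' + 2y for y = c1*e^x + c2*e^{2x}."""
    y   = c1 * math.exp(x) + c2 * math.exp(2 * x)
    yp  = c1 * math.exp(x) + 2 * c2 * math.exp(2 * x)
    ypp = c1 * math.exp(x) + 4 * c2 * math.exp(2 * x)
    # Coefficient of e^x is c1*(1-3+2)=0; of e^{2x} is c2*(4-6+2)=0.
    return ypp - 3 * yp + 2 * y
```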

Example 2

Find the solution to the initial value problem:

${\displaystyle {\begin{cases}y''-2y'-3y=0\\y(0)=1\\y'(0)=0\end{cases}}}$.

For this ODE the characteristic equation is ${\displaystyle r^{2}-2r-3=0}$. The characteristic polynomial factors as ${\displaystyle r^{2}-2r-3=(r+1)(r-3)}$, so there are two roots: ${\displaystyle r_{1}=-1,r_{2}=3}$. Thus our two solutions to the ODE are ${\displaystyle y_{1}=e^{-x}}$ and ${\displaystyle y_{2}=e^{3x}}$. This leads us to the general solution:

${\displaystyle y=c_{1}e^{-x}+c_{2}e^{3x}}$.

In order to solve the initial value problem we now substitute in the values given. That is

${\displaystyle y(0)=c_{1}e^{-0}+c_{2}e^{3\cdot 0}=c_{1}+c_{2}}$ and so ${\displaystyle c_{1}+c_{2}=1}$.

Further

${\displaystyle y'(0)=-c_{1}e^{-0}+3c_{2}e^{3\cdot 0}=-c_{1}+3c_{2}}$ and so ${\displaystyle -c_{1}+3c_{2}=0}$.

This means we need to solve the 2×2 system:

{\displaystyle {\begin{aligned}c_{1}+c_{2}&=1\\-c_{1}+3c_{2}&=0\end{aligned}}}

Adding the first equation to the second we see that ${\displaystyle 4c_{2}=1}$, and so ${\displaystyle c_{2}={\tfrac {1}{4}}}$. Hence, by substituting ${\displaystyle c_{2}}$ into the first equation we get ${\displaystyle c_{1}+{\tfrac {1}{4}}=1}$ and so ${\displaystyle c_{1}={\tfrac {3}{4}}}$.

This means the solution to our initial value problem is ${\displaystyle y={\tfrac {3}{4}}e^{-x}+{\tfrac {1}{4}}e^{3x}}$.
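As a numerical cross-check (the helper functions are ours), we can verify that ${\displaystyle y={\tfrac {3}{4}}e^{-x}+{\tfrac {1}{4}}e^{3x}}$ satisfies both initial conditions and the equation ${\displaystyle y''-2y'-3y=0}$, the equation whose characteristic polynomial is ${\displaystyle (r+1)(r-3)}$:

```python
import math

# y = (3/4)e^{-x} + (1/4)e^{3x}, with derivatives computed by hand.
def y(x):   return 0.75 * math.exp(-x) + 0.25 * math.exp(3 * x)
def yp(x):  return -0.75 * math.exp(-x) + 0.75 * math.exp(3 * x)
def ypp(x): return 0.75 * math.exp(-x) + 2.25 * math.exp(3 * x)
```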

#### Complex roots

The second case to consider is when ${\displaystyle b^{2}-4ac<0}$. In this case the theory is almost identical. Since the coefficients of the characteristic equation are real, its roots form a complex conjugate pair, so we may write ${\displaystyle r_{1}=z+iw}$ and ${\displaystyle r_{2}=z-iw}$; then ${\displaystyle e^{r_{1}x}}$ and ${\displaystyle e^{r_{2}x}}$ are two solutions, and in fact form a fundamental solution set.

This being said, it is perhaps a bit disturbing to some of us to describe a real valued solution to an ODE with real coefficients (and real initial data) using complex numbers. For this reason it is aesthetically pleasing to find two real valued solutions. In order to do this it helps to know a little bit about what it even means to raise a number to a complex power.

In our setting the answer is provided by Euler's formula, which states that for a real number ${\displaystyle \theta }$: ${\displaystyle e^{i\theta }=\cos \theta +i\sin \theta }$. Let's take a quick look at why this formula makes sense at all. The idea is to examine the power series for ${\displaystyle e^{x}=1+x+x^{2}/2!+x^{3}/3!+\ldots =\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}}$. Then plugging in ${\displaystyle i\theta }$ for ${\displaystyle x}$ and collecting real and imaginary parts we get:

{\displaystyle {\begin{aligned}&1+i\theta -{\frac {\theta ^{2}}{2}}-i{\frac {\theta ^{3}}{3!}}+{\frac {\theta ^{4}}{4!}}+i{\frac {\theta ^{5}}{5!}}-\cdots \\&=(1-{\frac {\theta ^{2}}{2}}+{\frac {\theta ^{4}}{4!}}-\cdots )+i(\theta -{\frac {\theta ^{3}}{3!}}+{\frac {\theta ^{5}}{5!}}-\cdots )\\&=\cos \theta +i\sin \theta .\end{aligned}}}

This calculation is justified because these power series are absolutely convergent, and so we may rearrange the terms as we see fit. For more general complex numbers we may define ${\displaystyle e^{z+iw}}$ as ${\displaystyle e^{z}e^{iw}}$. Thus using these definitions we may rewrite our two solutions as:
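Python's standard library already implements the complex exponential, so both Euler's formula and the definition ${\displaystyle e^{z+iw}=e^{z}e^{iw}}$ are easy to check numerically (sample values ours):

```python
import cmath
import math

theta = 0.7  # an arbitrary sample angle
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))

# e^{z+iw} = e^z * e^{iw}, checked at the sample values z = -1, w = 2.
z, w = -1.0, 2.0
split = math.exp(z) * cmath.exp(1j * w)
direct = cmath.exp(complex(z, w))
```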

{\displaystyle {\begin{aligned}y_{1}=e^{zx}(\cos(wx)+i\sin(wx))\\y_{2}=e^{zx}(\cos(wx)-i\sin(wx))\end{aligned}}}.

Since any linear combination of these two solutions is again a solution we note that two particularly nice linear combinations are:

{\displaystyle {\begin{aligned}&{\tilde {y}}_{1}={\frac {y_{1}+y_{2}}{2}}=e^{zx}\cos(wx)\\&{\tilde {y}}_{2}={\frac {y_{1}-y_{2}}{2i}}=e^{zx}\sin(wx)\end{aligned}}}

For those uncomfortable with complex variables the above discussion may seem a bit unclear, but it may simply be considered as motivation. That is, if we remember ${\displaystyle z={\frac {-b}{2a}}}$ and ${\displaystyle w={\frac {\sqrt {4ac-b^{2}}}{2a}}}$, one may directly verify that ${\displaystyle {\tilde {y}}_{1}}$ and ${\displaystyle {\tilde {y}}_{2}}$ solve (LH). It is also left to the reader to verify that ${\displaystyle W({\tilde {y}}_{1},{\tilde {y}}_{2})(x_{0})\neq 0}$. Thus in this case as well we find a fundamental solution set.

Example 3

Solve the initial value problem

${\displaystyle {\begin{cases}y''+2y'+5y=0\\y(0)=0\qquad y'(0)=1.\end{cases}}}$

In this case our characteristic polynomial is ${\displaystyle r^{2}+2r+5}$. Using the quadratic formula we see the roots are ${\displaystyle r_{1}={\frac {-2+{\sqrt {4-4\cdot 1\cdot 5}}}{2\cdot 1}}=-1+2i}$ and ${\displaystyle r_{2}=-1-2i}$. This means two solutions to this differential equation are given by:

${\displaystyle y_{1}=e^{-x}\cos(2x)}$ and ${\displaystyle y_{2}=e^{-x}\sin(2x).}$

So we know that the general solution has the form ${\displaystyle y=c_{1}e^{-x}\cos(2x)+c_{2}e^{-x}\sin(2x)}$. Now we need to use the initial conditions to solve for ${\displaystyle c_{1}}$ and ${\displaystyle c_{2}}$. We start by calculating the derivative of y:

${\displaystyle y'=-c_{1}{\big (}e^{-x}\cos(2x)+2e^{-x}\sin(2x){\big )}+c_{2}{\big (}-e^{-x}\sin(2x)+2e^{-x}\cos(2x){\big )}}$.

Thus, using the initial conditions we see that:

{\displaystyle {\begin{aligned}&0=y(0)=c_{1}\cdot 1+c_{2}\cdot 0=c_{1}\\&1=y'(0)=-c_{1}\cdot (1+0)+c_{2}\cdot (0+2)=-c_{1}+2c_{2}\end{aligned}}}

From this we see immediately that ${\displaystyle c_{1}=0}$ and ${\displaystyle c_{2}={\tfrac {1}{2}}}$. Thus ${\displaystyle y={\tfrac {1}{2}}e^{-x}\sin(2x)}$ is the solution to our initial value problem.
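Evaluating the general solution and its derivative at ${\displaystyle 0}$ gives a quick numerical cross-check of the constants (the helper functions and second derivative, worked out by hand, are ours):

```python
import math

# General solution y = c1*e^{-x}cos(2x) + c2*e^{-x}sin(2x).
# At x = 0: y(0) = c1 and y'(0) = -c1 + 2*c2, so the data
# y(0) = 0, y'(0) = 1 force c1 = 0 and c2 = 1/2.
c1 = 0.0
c2 = (1.0 + c1) / 2.0

def y(x):
    return (c1 * math.exp(-x) * math.cos(2 * x)
            + c2 * math.exp(-x) * math.sin(2 * x))

def yp(x):
    return (-c1 * (math.exp(-x) * math.cos(2 * x) + 2 * math.exp(-x) * math.sin(2 * x))
            + c2 * (-math.exp(-x) * math.sin(2 * x) + 2 * math.exp(-x) * math.cos(2 * x)))

def ypp(x):
    # One more differentiation by hand:
    # (e^{-x}cos2x)'' = e^{-x}(-3cos2x + 4sin2x),
    # (e^{-x}sin2x)'' = e^{-x}(-3sin2x - 4cos2x).
    return (c1 * math.exp(-x) * (-3 * math.cos(2 * x) + 4 * math.sin(2 * x))
            + c2 * math.exp(-x) * (-3 * math.sin(2 * x) - 4 * math.cos(2 * x)))
```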

Example 4

Find the general solution to:

${\displaystyle 4y''+y=0.}$

In this case our characteristic equation is ${\displaystyle 4r^{2}+1=0}$, whose roots are ${\displaystyle r_{1}={\tfrac {i}{2}}}$ and ${\displaystyle r_{2}=-{\tfrac {i}{2}}}$. Notice that these two roots have the form ${\displaystyle 0+{\tfrac {1}{2}}i}$ and ${\displaystyle 0-{\tfrac {1}{2}}i}$; in the notation above, ${\displaystyle z=0}$ and ${\displaystyle w={\tfrac {1}{2}}}$. When the complex number has no real part, the exponential factor reduces to ${\displaystyle e^{0\cdot x}=1}$. In this case we get ${\displaystyle y_{1}=e^{0\cdot x}\cos({\tfrac {x}{2}})=\cos({\tfrac {x}{2}})}$ and similarly ${\displaystyle y_{2}=\sin({\tfrac {x}{2}})}$.

Therefore the general solution to this ODE is:

${\displaystyle y=c_{1}\cos({\tfrac {x}{2}})+c_{2}\sin({\tfrac {x}{2}})}$.

#### Repeated Real Roots

In the case when ${\displaystyle b^{2}-4ac=0}$, finding two solutions is slightly more difficult. The characteristic polynomial factors as ${\displaystyle a(r+{\frac {b}{2a}})^{2}}$, so we have only one root, namely ${\displaystyle r_{1}={\frac {-b}{2a}}}$. We still obtain the solution ${\displaystyle y_{1}=e^{r_{1}x}}$; the question becomes how to find a second solution.

Luckily there is one very nice property of the characteristic polynomial. In general, if a polynomial has a repeated root, then the derivative of the polynomial also has that root. (Since the polynomial depends on ${\displaystyle r}$, we mean here the derivative with respect to ${\displaystyle r}$.) In our case this is easily seen: let ${\displaystyle P(r)=a(r+{\frac {b}{2a}})^{2}}$; then we have

${\displaystyle P'(r)=2a(r+{\frac {b}{2a}})}$    and so
${\displaystyle P'(r_{1})=2a(r_{1}+{\frac {b}{2a}})=2a({\frac {-b}{2a}}+{\frac {b}{2a}})=0}$.

Since our characteristic polynomial came from considering ${\displaystyle a(e^{rx})''+b(e^{rx})'+c(e^{rx})}$, we might hope that taking a derivative in ${\displaystyle r}$ will help us find another solution to try.

So we start by considering:

${\displaystyle {\frac {d}{dr}}{\big (}a(e^{rx})''+b(e^{rx})'+c(e^{rx}){\big )}={\frac {d}{dr}}{\big (}(ar^{2}+br+c)e^{rx}{\big )}=(2ar+b)e^{rx}+(ar^{2}+br+c)xe^{rx}}$.

Now if ${\displaystyle r=r_{1}={\frac {-b}{2a}}}$ then ${\displaystyle (2ar_{1}+b)=0}$ and ${\displaystyle (ar_{1}^{2}+br_{1}+c)=0}$. Hence ${\displaystyle (2ar_{1}+b)e^{r_{1}x}+(ar_{1}^{2}+br_{1}+c)xe^{r_{1}x}=0.}$

On the other hand, remembering that derivatives commute, we might have calculated this a bit differently to get:

${\displaystyle {\frac {d}{dr}}{\big (}a(e^{rx})''+b(e^{rx})'+c(e^{rx}){\big )}=a({\frac {d}{dr}}e^{rx})''+b({\frac {d}{dr}}e^{rx})'+c({\frac {d}{dr}}e^{rx})=a(xe^{rx})''+b(xe^{rx})'+c(xe^{rx})}$

That is, we are really just looking at ${\displaystyle xe^{rx}}$ plugged into our differential equation, but we know from our first calculation this should be zero when ${\displaystyle r=r_{1}}$. So it seems that ${\displaystyle xe^{r_{1}x}}$ should be a solution.

Changing the order of the derivatives in ${\displaystyle x}$ and the derivatives in ${\displaystyle r}$ is allowed because ${\displaystyle e^{rx}}$ has continuous derivatives of all orders in ${\displaystyle x}$ and ${\displaystyle r}$. So we can let ${\displaystyle y_{2}=xe^{r_{1}x}}$. It can be checked that ${\displaystyle W(y_{1},y_{2})(x_{0})\neq 0}$ and so we again have that ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ form a fundamental solution set.
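This claim is easy to test numerically. The coefficients below are our own sample choice with ${\displaystyle b^{2}-4ac=0}$, and the derivatives of ${\displaystyle xe^{r_{1}x}}$ are worked out by hand:

```python
import math

a, b, c = 2.0, 4.0, 2.0          # b^2 - 4ac = 0, so r1 = -b/(2a) = -1
r1 = -b / (2 * a)

# Candidate second solution y2 = x*e^{r1*x} and its derivatives:
def y2(x):   return x * math.exp(r1 * x)
def y2p(x):  return (1 + r1 * x) * math.exp(r1 * x)
def y2pp(x): return (2 * r1 + r1 * r1 * x) * math.exp(r1 * x)

# Wronskian with y1 = e^{r1*x} at x = 0 (should be nonzero):
w0 = math.exp(r1 * 0) * y2p(0) - y2(0) * r1 * math.exp(r1 * 0)
```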

Example 5

Suppose we want to find the general solution to

${\displaystyle y''+4y'+4y=0}$.

In this case our characteristic equation is ${\displaystyle r^{2}+4r+4=0}$ which has only one root ${\displaystyle r_{1}=-2}$. So according to our theory above a fundamental set of solutions is given by ${\displaystyle y_{1}=e^{-2x}}$ and ${\displaystyle y_{2}=xe^{-2x}}$, and so the general solution is given by:

${\displaystyle y=c_{1}e^{-2x}+c_{2}xe^{-2x}.}$

Example 6

If we want to solve the initial value problem

${\displaystyle {\begin{cases}9y''-6y'+y=0\\y(0)=3\quad y'(0)=1.\end{cases}}}$

Then our characteristic equation is ${\displaystyle 9r^{2}-6r+1=0}$. The quadratic formula gives ${\displaystyle r_{1,2}={\tfrac {6\pm {\sqrt {36-4(9)(1)}}}{2\cdot 9}}={\tfrac {1}{3}}.}$ The fundamental solution set is given by ${\displaystyle y_{1}=e^{x/3}}$ and ${\displaystyle y_{2}=xe^{x/3}}$, and therefore our general solution is ${\displaystyle y=c_{1}e^{x/3}+c_{2}xe^{x/3}}$. We can now calculate ${\displaystyle y'=c_{1}({\tfrac {1}{3}}e^{x/3})+c_{2}(e^{x/3}+{\tfrac {1}{3}}xe^{x/3}).}$ The initial conditions give ${\displaystyle y(0)=c_{1}=3}$ and ${\displaystyle y'(0)={\tfrac {c_{1}}{3}}+c_{2}=1}$, so ${\displaystyle c_{2}=0}$, and the solution to the initial value problem is ${\displaystyle y=3e^{x/3}}$.
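As a numerical check (helper functions ours), we can fit the constants to the initial data and substitute the result back into ${\displaystyle 9y''-6y'+y=0}$:

```python
import math

# From the general solution y = c1*e^{x/3} + c2*x*e^{x/3}:
# y(0) = c1 and y'(0) = c1/3 + c2, so y(0) = 3, y'(0) = 1 give
c1 = 3.0
c2 = 1.0 - c1 / 3.0   # = 0

def y(x):   return c1 * math.exp(x / 3) + c2 * x * math.exp(x / 3)
def yp(x):  return (c1 / 3) * math.exp(x / 3) + c2 * (1 + x / 3) * math.exp(x / 3)
def ypp(x): return (c1 / 9) * math.exp(x / 3) + c2 * (2.0 / 3 + x / 9) * math.exp(x / 3)
```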

In the discussion above it was always necessary to check the Wronskian at the initial point ${\displaystyle x_{0}}$ in order to see if the set of functions formed a fundamental solution set. This leaves us with the uncomfortable possibility that perhaps our fundamental solution set at one point ${\displaystyle x_{0}}$ would not be a fundamental solution set if we chose to have our initial conditions at ${\displaystyle x_{1}}$. Thankfully this turns out not to be the case.

Abel's Theorem

Suppose that ${\displaystyle y_{1}}$, and ${\displaystyle y_{2}}$ are solutions to (LH). Then we have that

${\displaystyle W(y_{1},y_{2})(x)=Ce^{-\int _{x_{0}}^{x}p(t)\,dt}}$,

where ${\displaystyle C}$ is a constant depending on ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$.

To begin proving this we start by taking the derivative of ${\displaystyle W(y_{1},y_{2})}$.

${\displaystyle W'(y_{1},y_{2})=(y_{1}y_{2}')'-(y_{2}y_{1}')'=y_{1}'y_{2}'+y_{1}y_{2}''-y_{2}'y_{1}'-y_{2}y_{1}''=y_{1}y_{2}''-y_{2}y_{1}''}$.

Next we use the equation (LH) to work out what ${\displaystyle y_{1}''}$ and ${\displaystyle y_{2}''}$ are.

${\displaystyle y_{1}''=-p(x)y_{1}'-q(x)y_{1}}$ and
${\displaystyle y_{2}''=-p(x)y_{2}'-q(x)y_{2}}$

Thus

${\displaystyle W'(y_{1},y_{2})=y_{1}(-p(x)y_{2}'-q(x)y_{2})-y_{2}(-p(x)y_{1}'-q(x)y_{1})=-p(x)y_{1}y_{2}'+p(x)y_{2}y_{1}'}$

By inspection we see that ${\displaystyle W'(y_{1},y_{2})=-p(x)W(y_{1},y_{2})}$. We know the solution to this ODE is given by

${\displaystyle W(y_{1},y_{2})=Ce^{-\int _{x_{0}}^{x}p(t)\,dt}}$.

Finally, if we plug in ${\displaystyle x_{0}}$ we get that ${\displaystyle C=W(y_{1},y_{2})(x_{0})}$. Thus we can write our final formula as

${\displaystyle W(y_{1},y_{2})(x)=W(y_{1},y_{2})(x_{0})e^{-\int _{x_{0}}^{x}p(t)\,dt}.}$

The important thing for us to notice is that ${\displaystyle e^{-\int _{x_{0}}^{x}p(t)\,dt}}$ is never zero. So for any real number ${\displaystyle x}$ we see that ${\displaystyle W(y_{1},y_{2})(x)=0}$ if and only if ${\displaystyle W(y_{1},y_{2})(x_{0})=0}$. This tells us exactly that either ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ form a fundamental solution set or they do not; where we take our initial data does not change that fact.
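Abel's formula can be checked directly on an equation with non-constant coefficients. The test equation below is our own: ${\displaystyle y''-{\tfrac {2}{x}}y'+{\tfrac {2}{x^{2}}}y=0}$ on ${\displaystyle x>0}$ has solutions ${\displaystyle y_{1}=x}$ and ${\displaystyle y_{2}=x^{2}}$ (easily verified by substitution), so ${\displaystyle W(y_{1},y_{2})(x)=x\cdot 2x-x^{2}\cdot 1=x^{2}}$.

```python
import math

def p(x):
    return -2.0 / x                       # coefficient of y' in the test equation

def W(x):
    return x * (2 * x) - (x * x) * 1.0    # y1*y2' - y2*y1' = x^2

x0, x = 1.0, 2.5
# The integral of p from x0 to x is -2*ln(x/x0), so Abel's formula predicts
abel = W(x0) * math.exp(2.0 * math.log(x / x0))   # = x0^2 * (x/x0)^2 = x^2
```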

### 2: Series Solutions

As mentioned when we began second order ODEs, equations of the form ${\displaystyle y''+p(x)y'+q(x)y=g(x)}$ are guaranteed to have a unique solution when ${\displaystyle p(x)}$, ${\displaystyle q(x)}$, and ${\displaystyle g(x)}$ are continuous on an open interval that includes the initial condition. However, problems of this form are not guaranteed to have a closed-form solution, a solution that can be expressed in terms of "well-known" functions like ${\displaystyle x^{2}}$ and ${\displaystyle \sin(x)}$. We can get around this by using Taylor's theorem from Calculus. Because we don't know the solution itself, we try a solution of the form ${\displaystyle y=\sum _{n=0}^{\infty }a_{n}(x-x_{0})^{n}}$, a power series, instead of using the definition of the Taylor series.

#### Series Solutions of Homogeneous ODE's

Much like the method used for constant coefficients, we take our assumed solution form, differentiate it, and plug it into the equation. We then collect each series into a single series after matching both the powers of ${\displaystyle x}$ and the indices. Because the collected series is equal to zero in the homogeneous case, each coefficient of each power of ${\displaystyle x}$ must also be equal to zero. We then use this fact to find a recurrence relation between successive values of ${\displaystyle a_{n}}$.

Example 1

Find a series solution to the following initial value problem about ${\displaystyle x_{0}=0}$.

${\displaystyle y''+x^{2}y'+y=0,\,y(0)=y_{0},\,y'(0)=y'_{0}}$

We begin by differentiation and plugging in our assumed form.

${\displaystyle y=\sum _{n=0}^{\infty }a_{n}x^{n}}$
${\displaystyle y'=\sum _{n=1}^{\infty }a_{n}nx^{n-1}}$
${\displaystyle y''=\sum _{n=2}^{\infty }a_{n}n(n-1)x^{n-2}}$
${\displaystyle \sum _{n=2}^{\infty }a_{n}n(n-1)x^{n-2}+x^{2}\sum _{n=1}^{\infty }a_{n}nx^{n-1}+\sum _{n=0}^{\infty }a_{n}x^{n}=0}$

Note that because the first term of each series is a constant, and the derivative of a constant is zero, each derivative has its starting index increased by one. We then move the ${\displaystyle x^{2}}$ into the series.

${\displaystyle \sum _{n=2}^{\infty }a_{n}n(n-1)x^{n-2}+\sum _{n=1}^{\infty }a_{n}nx^{n+1}+\sum _{n=0}^{\infty }a_{n}x^{n}=0}$

In order to combine the series in a useful manner, we match the powers of x and the indices. To match the powers we change the indices. To do so, we add (or subtract) an integer to the index and substitute ${\displaystyle n}$ minus (or plus) that number for n in the series itself.

${\displaystyle \sum _{n=0}^{\infty }a_{n+2}(n+2)(n+1)x^{n}+\sum _{n=2}^{\infty }a_{n-1}(n-1)x^{n}+\sum _{n=0}^{\infty }a_{n}x^{n}=0}$

The last step before combining the series is pulling terms out of the series to match the indices.

${\displaystyle a_{2}(2)(1)+a_{3}(3)(2)x+\sum _{n=2}^{\infty }a_{n+2}(n+2)(n+1)x^{n}+\sum _{n=2}^{\infty }a_{n-1}(n-1)x^{n}+a_{0}+a_{1}x+\sum _{n=2}^{\infty }a_{n}x^{n}=0}$

Combining the series and like powers of ${\displaystyle x}$ yields

${\displaystyle (2a_{2}+a_{0})+(6a_{3}+a_{1})x+\sum _{n=2}^{\infty }[a_{n+2}(n+2)(n+1)+a_{n-1}(n-1)+a_{n}]x^{n}=0}$

In order for that equation to hold for every value of ${\displaystyle x}$, the coefficients of each power of ${\displaystyle x}$ must be zero. This yields

${\displaystyle 2a_{2}+a_{0}=0}$
${\displaystyle 6a_{3}+a_{1}=0}$
${\displaystyle a_{n+2}(n+2)(n+1)+a_{n-1}(n-1)+a_{n}=0}$

Solving each equation for the coefficient with the highest index yields

${\displaystyle a_{2}=(-1/2)\,a_{0}}$
${\displaystyle a_{3}=(-1/6)\,a_{1}}$
${\displaystyle a_{n+2}={\frac {-a_{n-1}(n-1)-a_{n}}{(n+2)(n+1)}}}$

The last equation above is called a recurrence relation: given ${\displaystyle a_{n-1}}$ and ${\displaystyle a_{n}}$, it determines ${\displaystyle a_{n+2}}$. Note that given ${\displaystyle a_{0}}$ and ${\displaystyle a_{1}}$, we can determine the values of all ${\displaystyle a_{n}}$. This is consistent with our expectation that the solution to a second order linear ODE should have two arbitrary constants.

Taking derivatives and plugging in zero, we find that ${\displaystyle a_{0}=y_{0}}$ and ${\displaystyle a_{1}=y'_{0}}$. Thus, the solution to our initial value problem is

${\displaystyle y=\sum _{n=0}^{\infty }a_{n}x^{n}}$
${\displaystyle a_{0}=y_{0}}$
${\displaystyle a_{1}=y'_{0}}$
${\displaystyle a_{2}=(-1/2)\,a_{0}}$
${\displaystyle a_{3}=(-1/6)\,a_{1}}$
${\displaystyle a_{n+2}={\frac {-a_{n-1}(n-1)-a_{n}}{(n+2)(n+1)}}}$
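The recurrence is easy to run on a computer. The sketch below (function names ours) builds the coefficients for ${\displaystyle y''+x^{2}y'+y=0}$ and checks that the truncated series nearly satisfies the equation near ${\displaystyle 0}$:

```python
def series_coeffs(y0, yp0, N):
    """Coefficients a_0..a_N for y'' + x^2 y' + y = 0 from the recurrence."""
    a = [0.0] * (N + 1)
    a[0], a[1] = y0, yp0
    a[2] = -a[0] / 2.0
    a[3] = -a[1] / 6.0
    for n in range(2, N - 1):
        a[n + 2] = (-(n - 1) * a[n - 1] - a[n]) / ((n + 2) * (n + 1))
    return a

def deriv_eval(a, x, d=0):
    """d-th derivative of the truncated power series at x."""
    total = 0.0
    for n in range(d, len(a)):
        coef = a[n]
        for k in range(d):
            coef *= (n - k)      # bring down the powers n, n-1, ..., n-d+1
        total += coef * x ** (n - d)
    return total

a = series_coeffs(1.0, 0.0, 20)
x = 0.1
residual = deriv_eval(a, x, 2) + x * x * deriv_eval(a, x, 1) + deriv_eval(a, x)
```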

### 6: Bessel Equation

The Bessel differential equation of order ${\displaystyle n}$ has the form ${\displaystyle x^{2}y''+xy'+(x^{2}-n^{2})y=0}$.
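As a quick sanity check of the ${\displaystyle n=0}$ case, the Bessel function ${\displaystyle J_{0}(x)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{4^{k}(k!)^{2}}}x^{2k}}$ should satisfy ${\displaystyle x^{2}y''+xy'+x^{2}y=0}$. The series is standard; the code (function names ours) is a sketch that verifies this on a truncated series:

```python
import math

def j0_coeffs(K):
    """Power-series coefficients of J_0 up to x^{2K}."""
    a = [0.0] * (2 * K + 1)
    for k in range(K + 1):
        a[2 * k] = (-1) ** k / (4.0 ** k * math.factorial(k) ** 2)
    return a

def deriv_eval(a, x, d=0):
    """d-th derivative of the truncated series at x."""
    total = 0.0
    for n in range(d, len(a)):
        coef = a[n]
        for k in range(d):
            coef *= (n - k)
        total += coef * x ** (n - d)
    return total

a = j0_coeffs(12)
x = 0.5
residual = (x * x * deriv_eval(a, x, 2) + x * deriv_eval(a, x, 1)
            + x * x * deriv_eval(a, x))
```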