# Ordinary Differential Equations/One-dimensional first-order linear equations

## Definition

One-dimensional first-order inhomogeneous linear ODEs are ODEs of the form

${\displaystyle x'(t)+f(t)x(t)=g(t)}$

for suitable (usually continuous) functions ${\displaystyle f,g:\mathbb {R} \to \mathbb {R} }$; note that when ${\displaystyle g\equiv 0}$, we have a homogeneous equation instead.

## General solution

First we note the following superposition principle: if we have a solution ${\displaystyle x_{h}}$ ("${\displaystyle h}$" standing for "homogeneous") of the problem

${\displaystyle x_{h}'(t)+f(t)x_{h}(t)=0}$

(which is nothing but the homogeneous problem associated with the above ODE) and a solution ${\displaystyle x_{p}}$ to the actual problem; that is, a function ${\displaystyle x_{p}}$ such that

${\displaystyle x_{p}'(t)+f(t)x_{p}(t)=g(t)}$

("${\displaystyle p}$" standing for "particular solution", indicating that this is only one of the many possible solutions), then the function

${\displaystyle x(t):=ax_{h}(t)+x_{p}(t)}$ (${\displaystyle a\in \mathbb {R} }$ arbitrary)

still solves ${\displaystyle x'(t)+f(t)x(t)=g(t)}$, just like the particular solution ${\displaystyle x_{p}}$ does. This is proved by computing the derivative of ${\displaystyle x}$ directly.
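The superposition principle can be checked numerically on a concrete instance. The following sketch assumes the sample equation ${\displaystyle x'(t)+x(t)=t}$ (so ${\displaystyle f\equiv 1}$, ${\displaystyle g(t)=t}$; these choices are illustrative, not from the text), for which ${\displaystyle x_{h}(t)=e^{-t}}$ solves the homogeneous problem and ${\displaystyle x_{p}(t)=t-1}$ is one particular solution:

```python
import math

# Superposition check for the sample equation x' + x = t (f ≡ 1, g(t) = t).
# x_h(t) = e^{-t} solves the homogeneous problem and x_p(t) = t - 1 is one
# particular solution; we verify that x = a*x_h + x_p solves the full
# equation for several values of a.

def x(t, a):
    return a * math.exp(-t) + (t - 1)

h = 1e-6  # step for a central finite-difference derivative
for a in [-2.0, 0.0, 3.5]:
    for t in [0.0, 1.0, 2.5]:
        deriv = (x(t + h, a) - x(t - h, a)) / (2 * h)
        assert abs(deriv + x(t, a) - t) < 1e-7
print("a*x_h + x_p solves x' + x = t for every a tested")
```

The arbitrary constant ${\displaystyle a}$ drops out of the residual exactly as the superposition principle predicts.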

In order to obtain the solutions to the ODE under consideration, we first solve the related homogeneous problem; that is, we first look for ${\displaystyle x_{h}}$ such that

${\displaystyle x_{h}'(t)+f(t)x_{h}(t)=0\Leftrightarrow x_{h}'=-f(t)x_{h}}$.

It may seem surprising, but this actually gives a very quick path to the general solution, as follows. Separation of variables (using ${\displaystyle \ln ^{-1}=\exp }$) gives

${\displaystyle x_{h}(t)=\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$,

since the function

${\displaystyle G(t):=-\int _{t_{0}}^{t}f(s)ds}$

is an antiderivative of ${\displaystyle t\mapsto -f(t)}$. Thus we have found a solution to the related homogeneous problem.
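A minimal numerical sanity check of this formula, assuming the sample choice ${\displaystyle f(t)=\cos t}$ and ${\displaystyle t_{0}=0}$ (so that the integral is ${\displaystyle \sin t}$ in closed form; these choices are illustrative):

```python
import math

# Check that x_h(t) = exp(-∫_{t0}^t f(s) ds) solves x_h' + f(t)*x_h = 0,
# for the sample choice f(t) = cos(t), t0 = 0 (so the integral is sin(t)).

def f(t):
    return math.cos(t)

def x_h(t):
    return math.exp(-math.sin(t))  # exp(-∫_0^t cos(s) ds)

h = 1e-6  # step for a central finite-difference derivative
for t in [0.0, 0.5, 1.3, 2.7]:
    deriv = (x_h(t + h) - x_h(t - h)) / (2 * h)
    assert abs(deriv + f(t) * x_h(t)) < 1e-7
print("x_h' + f*x_h vanishes at all sample points")
```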

For the determination of a solution ${\displaystyle x_{p}}$ to the actual equation, we now use an Ansatz: Namely we assume

${\displaystyle x_{p}(t)=c(t)x_{h}(t)}$,

where ${\displaystyle c:\mathbb {R} \to \mathbb {R} }$ is a function. This Ansatz is called variation of constants and is due to Leonhard Euler. Let us see what condition on ${\displaystyle c}$ we obtain for ${\displaystyle x_{p}}$ to be a solution. We want

${\displaystyle x_{p}'(t)+f(t)x_{p}(t)=g(t)}$, that is (by the product rule, inserting ${\displaystyle x_{h}}$ and using ${\displaystyle x_{h}'=-fx_{h}}$):
${\displaystyle c'(t)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)+c(t)(-1)f(t)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)+f(t)c(t)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)=c'(t)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)=g(t)}$.

Moving the exponential to the other side, we obtain

${\displaystyle c'(t)=g(t)\exp \left(\int _{t_{0}}^{t}f(s)ds\right)}$

or

${\displaystyle c(t)=\int _{t_{0}}^{t}g(r)\exp \left(\int _{t_{0}}^{r}f(s)ds\right)dr+C_{1}}$.
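This recipe can be tested on a concrete instance. The following sketch assumes ${\displaystyle f\equiv 1}$, ${\displaystyle g(t)=t}$ and ${\displaystyle t_{0}=0}$ (illustrative choices, not from the text); integration by parts then gives ${\displaystyle c(t)=\int _{0}^{t}re^{r}dr=(t-1)e^{t}+1}$, and we check that ${\displaystyle x_{p}(t)=c(t)e^{-t}}$ solves ${\displaystyle x_{p}'+x_{p}=t}$:

```python
import math

# With f ≡ 1, g(t) = t, t0 = 0, the variation-of-constants recipe gives
# c(t) = ∫_0^t r e^r dr = (t-1)e^t + 1 (by integration by parts), and
# x_p(t) = c(t) e^{-t}.  We check x_p' + x_p = t numerically.

def x_p(t):
    c = (t - 1) * math.exp(t) + 1
    return c * math.exp(-t)

h = 1e-6  # step for a central finite-difference derivative
for t in [0.0, 0.7, 1.5, 3.0]:
    deriv = (x_p(t + h) - x_p(t - h)) / (2 * h)
    assert abs(deriv + x_p(t) - t) < 1e-7
print("x_p' + x_p = t checks out at all sample points")
```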

Since all the manipulations we did are reversible, all functions of the form

${\displaystyle C_{2}\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)+\left(\int _{t_{0}}^{t}g(r)\exp \left(\int _{t_{0}}^{r}f(s)ds\right)dr+C_{1}\right)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$ (${\displaystyle C_{1},C_{2}\in \mathbb {R} }$ arbitrary)

are solutions. If we set ${\displaystyle C:=C_{2}+C_{1}}$, we get the general solution form

${\displaystyle C\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)+\left(\int _{t_{0}}^{t}g(r)\exp \left(\int _{t_{0}}^{r}f(s)ds\right)dr\right)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$.

We now want to prove that these constitute all the solutions to the equation under consideration. Thus, set

${\displaystyle x_{C}(t):=C\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)+\left(\int _{t_{0}}^{t}g(r)\exp \left(\int _{t_{0}}^{r}f(s)ds\right)dr\right)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$

and let ${\displaystyle x_{2}(t)}$ be any other solution to the inhomogeneous problem under consideration. Then ${\displaystyle x_{C}-x_{2}}$ solves the homogeneous problem, for

${\displaystyle x_{C}'(t)-x_{2}'(t)+f(t)(x_{C}(t)-x_{2}(t))=x_{C}'(t)+f(t)x_{C}(t)-(x_{2}'(t)+f(t)x_{2}(t))=g(t)-g(t)=0}$.

Thus, if we prove that all the homogeneous solutions (and in particular the difference ${\displaystyle x_{C}-x_{2}}$) are of the form

${\displaystyle C\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$,

then we may subtract

${\displaystyle D\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$

from ${\displaystyle x_{C}-x_{2}}$ for an appropriate ${\displaystyle D\in \mathbb {R} }$ to obtain zero; hence ${\displaystyle x_{2}=x_{C}-D\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$, which is of the desired form (with constant ${\displaystyle C-D}$).

Thus, let ${\displaystyle x_{h}}$ be any solution to the homogeneous problem. Consider the function

${\displaystyle t\mapsto x_{h}(t)\cdot \exp \left(\int _{t_{0}}^{t}f(s)ds\right)}$.

We differentiate this function and obtain by the product rule

${\displaystyle x_{h}'(t)\exp \left(\int _{t_{0}}^{t}f(s)ds\right)+f(t)\exp \left(\int _{t_{0}}^{t}f(s)ds\right)x_{h}(t)=-f(t)x_{h}(t)\exp \left(\int _{t_{0}}^{t}f(s)ds\right)+f(t)\exp \left(\int _{t_{0}}^{t}f(s)ds\right)x_{h}(t)=0}$

since ${\displaystyle x_{h}}$ is a solution to the homogeneous problem. Hence, the function is constant (that is, equal to a constant ${\displaystyle C\in \mathbb {R} }$), and solving

${\displaystyle x_{h}(t)\cdot \exp \left(\int _{t_{0}}^{t}f(s)ds\right)=C}$

for ${\displaystyle x_{h}}$ gives the claim.

We have thus arrived at:

Theorem 3.1:

For continuous ${\displaystyle f,g:\mathbb {R} \to \mathbb {R} }$, the solutions to the ODE

${\displaystyle x'(t)+f(t)x(t)=g(t)}$

are precisely the functions

${\displaystyle x(t)=C\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)+\left(\int _{t_{0}}^{t}g(r)\exp \left(\int _{t_{0}}^{r}f(s)ds\right)dr\right)\exp \left(-\int _{t_{0}}^{t}f(s)ds\right)}$ (${\displaystyle C\in \mathbb {R} }$ arbitrary).

Note that imposing an initial condition ${\displaystyle x(t_{0})=x_{0}}$ for some ${\displaystyle x_{0}\in \mathbb {R} }$ enforces ${\displaystyle C=x_{0}}$ (since both integrals vanish at ${\displaystyle t=t_{0}}$), so we obtain a unique solution for each initial condition.
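The solution formula of theorem 3.1 can be verified numerically even when the integrals have no closed form. The following sketch assumes the sample data ${\displaystyle f(t)=2t}$, ${\displaystyle g\equiv 1}$, ${\displaystyle t_{0}=0}$ and ${\displaystyle x_{0}=1.5}$ (illustrative choices), so that ${\displaystyle \int _{0}^{t}f=t^{2}}$ and the inner integral ${\displaystyle \int _{0}^{t}e^{r^{2}}dr}$ is evaluated by Simpson's rule:

```python
import math

# Theorem 3.1 with f(t) = 2t, g ≡ 1, t0 = 0, x(0) = x0 = 1.5:
#   x(t) = (x0 + ∫_0^t exp(r^2) dr) * exp(-t^2)
# should satisfy x' + 2t*x = 1 and x(0) = x0.

def simpson(func, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = func(a) + func(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * func(a + k * h)
    return s * h / 3

def x(t, x0=1.5):
    inner = simpson(lambda r: math.exp(r * r), 0.0, t) if t != 0 else 0.0
    return (x0 + inner) * math.exp(-t * t)

# initial condition: x(0) = x0
assert abs(x(0.0) - 1.5) < 1e-12

# ODE residual x' + 2t*x - 1 at a few points, derivative by central differences
h = 1e-5
for t in [0.3, 0.9, 1.6]:
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(deriv + 2 * t * x(t) - 1.0) < 1e-5
print("solution formula verified for f(t)=2t, g=1")
```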

### Exercises

• Exercise 3.2.1: First prove that ${\displaystyle {\frac {d}{dt}}\ln(t^{2})={\frac {2}{t}}}$. Then solve the ODE ${\displaystyle x'(t)+{\frac {2}{t}}x(t)={\frac {1}{t^{2}}}}$ for a function defined on ${\displaystyle [1,\infty )}$ such that ${\displaystyle x(1)=c}$ for ${\displaystyle c\in \mathbb {R} }$ arbitrary. Use that an analogous version of theorem 3.1 holds when ${\displaystyle f,g}$ are only defined on a proper subinterval of ${\displaystyle \mathbb {R} }$; this is because the proof carries over.

## Clever Ansatz for polynomial RHS

First note that RHS means "Right Hand Side". Let us consider the special case of a one-dimensional first-order linear ODE

${\displaystyle x'(t)+cx(t)=a_{i}t^{i}}$ (${\displaystyle c\in \mathbb {R} }$ arbitrary),

where we used the Einstein summation convention; that is, ${\displaystyle a_{i}t^{i}}$ stands for ${\displaystyle \sum _{i=0}^{m}a_{i}t^{i}}$ for some ${\displaystyle m\in \mathbb {N} }$. In the notation from above, we have ${\displaystyle f(t)\equiv c}$ and ${\displaystyle g(t)=a_{i}t^{i}}$.

Using separation of variables, the solution to the corresponding homogeneous problem (${\displaystyle g\equiv 0}$) is easily seen to equal ${\displaystyle x_{h}(t)=C\exp(-ct)}$ for a constant ${\displaystyle C\in \mathbb {R} }$.

To find a particular solution ${\displaystyle x_{p}}$, we proceed as follows. We make the Ansatz that ${\displaystyle x_{p}}$ is itself a polynomial; that is,

${\displaystyle x_{p}(t)=b_{i}t^{i}}$

for certain coefficients ${\displaystyle b_{i}}$. Inserting this Ansatz into the ODE and comparing coefficients of equal powers of ${\displaystyle t}$ determines the ${\displaystyle b_{i}}$ (provided ${\displaystyle c\neq 0}$): the coefficient of ${\displaystyle t^{i}}$ gives ${\displaystyle (i+1)b_{i+1}+cb_{i}=a_{i}}$, which can be solved from the leading coefficient downward.
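The coefficient matching can be sketched in code. The helper `polynomial_particular` below is a hypothetical name introduced for illustration; it solves the relation ${\displaystyle (i+1)b_{i+1}+cb_{i}=a_{i}}$ from the leading coefficient downward, assuming ${\displaystyle c\neq 0}$, and is demonstrated on the sample equation ${\displaystyle x'+3x=6t^{2}}$ (deliberately different from exercise 3.3.1):

```python
from fractions import Fraction

# Coefficient matching for x' + c*x = Σ a_i t^i with c ≠ 0.
# Plugging x_p = Σ b_i t^i into the ODE and comparing the coefficient
# of t^i gives (i+1)*b_{i+1} + c*b_i = a_i, solved top-down.

def polynomial_particular(a, c):
    """a[i] is the coefficient of t^i on the right-hand side."""
    m = len(a) - 1
    b = [Fraction(0)] * (m + 1)
    b[m] = Fraction(a[m], c)                       # leading coefficient: c*b_m = a_m
    for i in range(m - 1, -1, -1):
        b[i] = (Fraction(a[i]) - (i + 1) * b[i + 1]) / c
    return b

# Sample equation (not exercise 3.3.1): x' + 3x = 6t^2
b = polynomial_particular([0, 0, 6], 3)
print(b)  # [Fraction(4, 9), Fraction(-4, 3), Fraction(2, 1)]
```

Indeed, ${\displaystyle x_{p}(t)=2t^{2}-{\tfrac {4}{3}}t+{\tfrac {4}{9}}}$ satisfies ${\displaystyle x_{p}'+3x_{p}=4t-{\tfrac {4}{3}}+6t^{2}-4t+{\tfrac {4}{3}}=6t^{2}}$.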

### Exercises

• Exercise 3.3.1: Find all solutions to the ODE ${\displaystyle x'(t)+2x(t)=2t^{2}+4t+3}$. (Hint: What does theorem 3.1 say about the number of solutions to that problem with a given fixed initial condition?)

Example 3.2: