Ordinary Differential Equations/Blow-ups and moving to boundary

Theorem (concatenation of solutions):

Assume we have two functions ${\displaystyle x_{1}:[t_{0}-\gamma ,t_{0}]\to \mathbb {R} ^{n}}$, ${\displaystyle x_{2}:[t_{0},t_{0}+\delta ]\to \mathbb {R} ^{n}}$ satisfying

${\displaystyle {\begin{cases}x_{1}'(t)=f(t,x_{1}(t))&t\in [t_{0}-\gamma ,t_{0}]\\x_{1}(t_{0})=x_{0}\end{cases}}}$

and

${\displaystyle {\begin{cases}x_{2}'(t)=f(t,x_{2}(t))&t\in [t_{0},t_{0}+\delta ]\\x_{2}(t_{0})=x_{0}\end{cases}}}$

respectively. Then the function

${\displaystyle x:[t_{0}-\gamma ,t_{0}+\delta ]\to \mathbb {R} ^{n},x(t):={\begin{cases}x_{1}(t)&t\in [t_{0}-\gamma ,t_{0}]\\x_{2}(t)&t\in [t_{0},t_{0}+\delta ]\end{cases}}}$

solves

${\displaystyle {\begin{cases}x'(t)=f(t,x(t))&t\in [t_{0}-\gamma ,t_{0}+\delta ]\\x(t_{0})=x_{0}.\end{cases}}}$

Proof 1:

Away from ${\displaystyle t_{0}}$, ${\displaystyle x}$ coincides with ${\displaystyle x_{1}}$ or ${\displaystyle x_{2}}$ and hence satisfies the equation there; it remains to prove differentiability at ${\displaystyle t_{0}}$. We claim that the derivative of ${\displaystyle x}$ at ${\displaystyle t_{0}}$ is given by ${\displaystyle f(t_{0},x_{0})}$. To prove our claim, we note that

${\displaystyle {\frac {x(t_{+})-x(t_{-})}{t_{+}-t_{-}}}={\frac {x(t_{+})-x(t_{0})}{t_{+}-t_{0}}}\cdot {\frac {t_{+}-t_{0}}{t_{+}-t_{-}}}+{\frac {x(t_{0})-x(t_{-})}{t_{0}-t_{-}}}\cdot {\frac {t_{0}-t_{-}}{t_{+}-t_{-}}}\to f(t_{0},x_{0}),\quad t_{+},t_{-}\to t_{0},}$

where ${\displaystyle t_{+}\in [t_{0},t_{0}+\delta ]}$ and ${\displaystyle t_{-}\in [t_{0}-\gamma ,t_{0}]}$; this is because

${\displaystyle {\frac {t_{+}-t_{0}}{t_{+}-t_{-}}}+{\frac {t_{0}-t_{-}}{t_{+}-t_{-}}}=1}$,

and both weights lie in ${\displaystyle [0,1]}$, so the difference quotient is a convex combination of the two one-sided quotients, each of which converges to ${\displaystyle f(t_{0},x_{0})}$.

In the case where ${\displaystyle t_{+}}$ and ${\displaystyle t_{-}}$ are both contained in the same one of the two intervals ${\displaystyle [t_{0},t_{0}+\delta ]}$, ${\displaystyle [t_{0}-\gamma ,t_{0}]}$, the convergence is clear anyhow.${\displaystyle \Box }$
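As a numerical sanity check of this argument (a hypothetical sketch, taking ${\displaystyle f(t,x)=x}$, ${\displaystyle t_{0}=0}$, ${\displaystyle x_{0}=1}$, where both solution pieces happen to equal ${\displaystyle e^{t}}$), the two-sided difference quotient of the concatenated function across ${\displaystyle t_{0}}$ indeed approaches ${\displaystyle f(t_{0},x_{0})=1}$:

```python
import math

# Hypothetical example: x'(t) = x(t), x(0) = 1.
# Both solution pieces coincide with exp(t) here.
f = lambda t, x: x

def x1(t):
    return math.exp(t)  # solves the IVP on [-1, 0]

def x2(t):
    return math.exp(t)  # solves the IVP on [0, 1]

def concatenated(t):
    # x1 for t <= t_0 = 0, x2 for t > 0 (they agree at t_0)
    return x1(t) if t <= 0.0 else x2(t)

# Two-sided difference quotient with t_- = -h < t_0 < t_+ = h,
# as in the proof; it converges to f(t_0, x_0) = 1 as h -> 0.
for h in (1e-2, 1e-4, 1e-6):
    q = (concatenated(h) - concatenated(-h)) / (2 * h)
    print(h, abs(q - f(0.0, 1.0)))  # error shrinks with h
```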

Proof 2:

We note that the equation ${\displaystyle x}$ is supposed to solve is equivalent to

${\displaystyle x(t)=x_{0}+\int _{t_{0}}^{t}f(\tau ,x(\tau ))d\tau }$

by the fundamental theorem of calculus; but this follows (by the fundamental theorem of calculus) from the equations satisfied by ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ for ${\displaystyle t\in [t_{0},t_{0}+\delta ]}$ and ${\displaystyle t\in [t_{0}-\gamma ,t_{0}]}$ separately.${\displaystyle \Box }$
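Proof 2's integral characterisation can likewise be checked numerically (same hypothetical setup ${\displaystyle f(t,x)=x}$, ${\displaystyle x(t)=e^{t}}$, ${\displaystyle t_{0}=0}$, ${\displaystyle x_{0}=1}$): a trapezoidal approximation of ${\displaystyle x_{0}+\int _{t_{0}}^{t}f(\tau ,x(\tau ))d\tau }$ reproduces ${\displaystyle x(t)}$ on both sides of ${\displaystyle t_{0}}$.

```python
import math

def integral_rhs(t, n=100_000):
    """Trapezoid rule for x_0 + integral from 0 to t of f(tau, x(tau)) dtau,
    with f(t, x) = x and the known solution x(tau) = exp(tau)."""
    h = t / n
    s = 0.5 * (math.exp(0.0) + math.exp(t))
    for i in range(1, n):
        s += math.exp(i * h)
    return 1.0 + h * s

# One t on each side of t_0 = 0: the integral equation holds on both pieces.
for t in (0.5, -0.5):
    print(t, abs(integral_rhs(t) - math.exp(t)))  # tiny discretization error
```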

Definition:

Let an ordinary differential equation

${\displaystyle {\begin{cases}x'(t)=f(t,x(t))&\\x(t_{0})=x_{0}\end{cases}}}$

be given, where ${\displaystyle f}$ is continuous. The maximal interval of existence around ${\displaystyle (t_{0},x_{0})}$ is the maximal (w.r.t. set inclusion) interval ${\displaystyle I}$ such that ${\displaystyle t_{0}\in I}$ and there exists a solution ${\displaystyle x}$ defined on ${\displaystyle I}$ to the equation above.

Note that only the preceding theorem on concatenation of solutions ensures that the definition of a maximal interval of existence makes sense: otherwise it might happen that there are two intervals ${\displaystyle I_{1}=(a_{1},b_{1})}$ and ${\displaystyle I_{2}=(a_{2},b_{2})}$ (with ${\displaystyle a_{1}<a_{2}<b_{1}<b_{2}}$) such that ${\displaystyle t_{0}}$ is contained in both intervals and a solution is defined on each, but the solutions are incompatible in the sense that neither can be extended to the "large" interval ${\displaystyle (a_{1},b_{2})}$. The theorem on concatenation makes sure that this can never occur.

We now aim to prove that if we walk along the solution graph ${\displaystyle (t,x(t))}$ as ${\displaystyle t}$ approaches the endpoints of the maximal interval of existence ${\displaystyle I}$, then in a sense we move towards the boundary of ${\displaystyle D}$, where ${\displaystyle D}$, the domain of definition of ${\displaystyle f}$, is required to be open. This shall mean that for any compact set ${\displaystyle K\subset D}$, if ${\displaystyle t}$ is close enough to either endpoint of ${\displaystyle I}$, then ${\displaystyle (t,x(t))}$ lies outside ${\displaystyle K}$. The proof is longer and needs preparation.

The first theorem we need comes from a broader context; it has applications throughout analysis.

Theorem:

Let ${\displaystyle O\subseteq \mathbb {R} ^{n}}$ be open, and let ${\displaystyle K\subset O}$ be compact. Then there exists ${\displaystyle \delta >0}$ such that ${\displaystyle \forall x\in K:B_{\delta }(x)\subseteq O}$.

This can be interpreted as stating that a compact subset of an open set has a positive minimum distance to the boundary of ${\displaystyle O}$.

This theorem admits different proofs, depending on which characterisation of compactness is used.

Proof 1:

Assume otherwise. Then there exist a sequence ${\displaystyle \delta _{n}\to 0}$ of positive numbers and a sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ in ${\displaystyle K}$ such that

${\displaystyle \exists y_{n}\in B_{\delta _{n}}(x_{n})\setminus O}$.

Since ${\displaystyle K}$ is compact, the sequence ${\displaystyle (y_{n})_{n\in \mathbb {N} }}$ is bounded and hence contains a subsequence ${\displaystyle (y_{n_{k}})_{k\in \mathbb {N} }}$ converging to some limit ${\displaystyle z\in \mathbb {R} ^{n}}$. The corresponding sequence ${\displaystyle (x_{n_{k}})_{k\in \mathbb {N} }}$ converges to the same limit ${\displaystyle z}$ by the triangle inequality:

${\displaystyle \|x_{n_{k}}-z\|\leq \underbrace {\|x_{n_{k}}-y_{n_{k}}\|} _{<\delta _{n_{k}}\to 0}+\underbrace {\|y_{n_{k}}-z\|} _{\to 0}}$.

Now on the one hand, since ${\displaystyle K}$ is compact, it is closed, and hence ${\displaystyle z\in K}$ since the sequence ${\displaystyle (x_{n_{k}})_{k\in \mathbb {N} }}$ is contained within ${\displaystyle K}$. But on the other hand, since ${\displaystyle \mathbb {R} ^{n}\setminus O}$ is closed and ${\displaystyle y_{n_{k}}\notin O}$, we get ${\displaystyle z\in \mathbb {R} ^{n}\setminus O}$ and thus ${\displaystyle z\notin K}$ (as ${\displaystyle K\subseteq O}$). This is a contradiction.${\displaystyle \Box }$

Proof 2:

For each ${\displaystyle x\in K}$, choose an open ball ${\displaystyle B_{\delta _{x}}(x)\subseteq O}$ (possible since ${\displaystyle O}$ is open). Now the sets

${\displaystyle B_{\delta _{x}/2}(x),x\in K}$

trivially form an open cover of ${\displaystyle K}$, and due to the compactness of ${\displaystyle K}$ we may extract a finite subcover ${\displaystyle B_{\delta _{x_{1}}/2}(x_{1}),\ldots ,B_{\delta _{x_{m}}/2}(x_{m})}$. Set

${\displaystyle \delta :=\min _{1\leq i\leq m}\delta _{x_{i}}/2}$.

We claim that ${\displaystyle \delta }$ is as desired. Indeed, let ${\displaystyle y\in K}$. Then ${\displaystyle y}$ is contained within some ${\displaystyle B_{\delta _{x_{j}}/2}(x_{j})}$, and any point ${\displaystyle z\notin O}$ satisfies ${\displaystyle \|x_{j}-z\|\geq \delta _{x_{j}}}$ (since ${\displaystyle B_{\delta _{x_{j}}}(x_{j})\subseteq O}$), so that

${\displaystyle \|y-z\|\geq \|x_{j}-z\|-\|x_{j}-y\|\geq \delta _{x_{j}}-\delta _{x_{j}}/2=\delta _{x_{j}}/2\geq \delta }$

by the triangle inequality. Hence ${\displaystyle B_{\delta }(y)\subseteq O}$.${\displaystyle \Box }$

Proof 3:

The function

${\displaystyle f:K\to \mathbb {R} ,f(x):=\inf _{z\notin O}\|x-z\|}$

is Lipschitz continuous with Lipschitz constant ${\displaystyle 1}$: for all ${\displaystyle x,y\in K}$ and every ${\displaystyle z\notin O}$ we have ${\displaystyle \|x-z\|\leq \|x-y\|+\|y-z\|}$, and taking the infimum over ${\displaystyle z\notin O}$ yields

${\displaystyle f(x)\leq \|x-y\|+f(y)}$.

Exchanging the roles of ${\displaystyle x}$ and ${\displaystyle y}$ gives ${\displaystyle f(y)\leq \|x-y\|+f(x)}$, and hence

${\displaystyle \left|f(x)-f(y)\right|\leq \|x-y\|}$.

Since ${\displaystyle K}$ is compact, ${\displaystyle f}$ assumes a minimum on ${\displaystyle K}$, and this minimum must be greater than zero: otherwise there would exist ${\displaystyle x\in K}$ such that we could pick a sequence ${\displaystyle z_{n}\notin O}$ such that

${\displaystyle \|z_{n}-x\|\to 0}$,

and hence ${\displaystyle z_{n}\to x}$ and ${\displaystyle x\notin O}$ due to the closedness of ${\displaystyle \mathbb {R} ^{n}\setminus O}$, contradiction. This minimum is the desired ${\displaystyle \delta }$.${\displaystyle \Box }$

Now we have proven enough generalities to proceed to the specific theorems tailored to our claim.

Lemma:

Let ${\displaystyle f:D\to \mathbb {R} ^{n}}$ be the right hand side of a differential equation, where ${\displaystyle D}$ is open (in fact, only the set ${\displaystyle D}$ matters for this lemma). Set

${\displaystyle D_{k}:=\left\{x\in D\,{\Big |}\,\operatorname {dist} (x,\partial D)>{\frac {1}{k}}\wedge \|x\|<k\right\}}$

for ${\displaystyle k\in \mathbb {N} }$. If ${\displaystyle K\subseteq D}$ is any compact set, then for sufficiently large ${\displaystyle k}$, ${\displaystyle K\subset D_{k}}$.

Proof:

Since ${\displaystyle K}$ is compact, it is bounded. Therefore, ${\displaystyle K\subseteq B_{k_{1}}(0)}$ for a sufficiently large ${\displaystyle k_{1}\in \mathbb {N} }$. Furthermore, by the preceding theorem, ${\displaystyle K}$ has a positive minimum distance ${\displaystyle \epsilon }$ to ${\displaystyle \partial D}$, and we may choose ${\displaystyle k_{2}\in \mathbb {N} }$ such that ${\displaystyle {\frac {1}{k_{2}}}<\epsilon }$. Choose ${\displaystyle k:=\max\{k_{1},k_{2}\}}$. Then

${\displaystyle K\subseteq B_{k_{1}}(0)\subseteq B_{k}(0)}$ and ${\displaystyle \forall x\in K:\operatorname {dist} (x,\partial D)\geq \epsilon >{\frac {1}{k_{2}}}\geq {\frac {1}{k}}}$.

Hence, ${\displaystyle K\subset D_{k}}$.${\displaystyle \Box }$
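A hypothetical one-dimensional check of the lemma, taking ${\displaystyle D=(0,1)}$ (so that ${\displaystyle \operatorname {dist} (x,\partial D)=\min\{x,1-x\}}$) and ${\displaystyle K=[0.1,0.9]}$; the smallest ${\displaystyle k}$ with ${\displaystyle K\subset D_{k}}$ can be found by direct search:

```python
def in_D_k(x, k):
    # D = (0, 1): membership in D_k requires dist(x, {0, 1}) > 1/k and |x| < k.
    return min(x, 1.0 - x) > 1.0 / k and abs(x) < k

def smallest_k(K_left, K_right):
    # For an interval K = [K_left, K_right] inside (0, 1), dist(., boundary)
    # attains its minimum over K at an endpoint, so checking both suffices.
    k = 1
    while not (in_D_k(K_left, k) and in_D_k(K_right, k)):
        k += 1
    return k

print(smallest_k(0.1, 0.9))  # 11, the first k with 1/k < 0.1
```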

Theorem:

Let ${\displaystyle f:D\to \mathbb {R} ^{n}}$ be the right hand side of a differential equation (where ${\displaystyle D}$ is open), and let ${\displaystyle x:I\to \mathbb {R} ^{n}}$ be a solution to that equation, where ${\displaystyle I=(a,b)}$ is the interior of the maximal interval of existence. Then for each compact ${\displaystyle K\subset D}$, the point ${\displaystyle (t,x(t))}$ lies outside ${\displaystyle K}$ whenever ${\displaystyle t}$ is sufficiently close to either ${\displaystyle a}$ or ${\displaystyle b}$.

Proof:

Suppose otherwise. Then, without loss of generality, there are a compact ${\displaystyle K\subset D}$ and a sequence ${\displaystyle t_{1}>t_{2}>t_{3}>\cdots }$ with ${\displaystyle t_{k}\to a}$ as ${\displaystyle k\to \infty }$ such that ${\displaystyle (t_{k},x(t_{k}))\in K}$ (the analogous supposition at the other endpoint ${\displaystyle b}$ leads to a contradiction in the same way). Since ${\displaystyle K}$ is compact, the sequence ${\displaystyle (t_{k},x(t_{k}))}$ has an accumulation point ${\displaystyle (a,x^{*})\in K}$. We claim that in fact

${\displaystyle \lim _{t\to a}(t,x(t))=(a,x^{*})}$.

Pick ${\displaystyle m\in \mathbb {N} }$ such that ${\displaystyle K\subset D_{m}}$ (possible by the preceding lemma). Let ${\displaystyle \epsilon >0}$ be arbitrary; we may restrict ourselves to ${\displaystyle \epsilon }$ sufficiently small that ${\displaystyle B_{\epsilon }((a,x^{*}))\subseteq D_{m}}$. Since ${\displaystyle f}$ is continuous, it is bounded on the compact set ${\displaystyle {\overline {B_{\epsilon }((a,x^{*}))}}}$, say by ${\displaystyle M>0}$. Now pick ${\displaystyle \delta <\epsilon /(2(M+1))}$ such that ${\displaystyle (a+\delta ,x(a+\delta ))\in B_{\epsilon /2}((a,x^{*}))}$; this is possible since ${\displaystyle (a,x^{*})}$ is an accumulation point of ${\displaystyle (t_{k},x(t_{k}))}$ and ${\displaystyle t_{k}\to a}$. If ${\displaystyle x}$ left ${\displaystyle B_{\epsilon }((a,x^{*}))}$ at some time in ${\displaystyle (a,a+\delta )}$, then the intermediate value theorem applied to the function

${\displaystyle (a,a+\delta )\to \mathbb {R} ,t\mapsto \|(a,x^{*})-(t,x(t))\|}$

would yield a largest ${\displaystyle s\in (a,a+\delta )}$ with ${\displaystyle \|(a,x^{*})-(s,x(s))\|=\epsilon }$; on ${\displaystyle [s,a+\delta ]}$ the graph ${\displaystyle (\tau ,x(\tau ))}$ then remains in ${\displaystyle {\overline {B_{\epsilon }((a,x^{*}))}}}$, so that ${\displaystyle \|f(\tau ,x(\tau ))\|\leq M}$ there. But then

{\displaystyle {\begin{aligned}\|(a,x^{*})-(s,x(s))\|&\leq \|(a,x^{*})-(a+\delta ,x(a+\delta ))\|+\|(a+\delta ,x(a+\delta ))-(s,x(s))\|\\&<\epsilon /2+|a+\delta -s|+\left\|\int _{s}^{a+\delta }f(\tau ,x(\tau ))d\tau \right\|\\&\leq \epsilon /2+\delta +\delta M<\epsilon /2+\epsilon /2=\epsilon ,\end{aligned}}}

a contradiction. Hence ${\displaystyle x}$ remains within ${\displaystyle B_{\epsilon }((a,x^{*}))}$ on ${\displaystyle (a,a+\delta )}$, which proves the claimed convergence.

Hence, ${\displaystyle \lim _{t\to a}(t,x(t))=(a,x^{*})\in K\subset D}$. But on the other hand, by Peano's existence theorem and concatenation of solutions we may extend the solution at ${\displaystyle a+\mu }$ for every ${\displaystyle \mu >0}$ to the left by a fixed amount (namely by ${\displaystyle \gamma =\min\{{\frac {1}{2m}},{\frac {1}{2m{\mathcal {M}}}}\}}$, where ${\displaystyle {\mathcal {M}}:=\max _{(t,x)\in {\overline {D_{2m}}}}\|f(t,x)\|}$, which exists due to the continuity of ${\displaystyle f}$ and the compactness of ${\displaystyle {\overline {D_{2m}}}}$), and doing so for sufficiently small ${\displaystyle \mu }$ yields the contradiction that ${\displaystyle (a,b)}$ was not the maximal interval of existence.${\displaystyle \Box }$
Corollary:

Let ${\displaystyle f:D\to \mathbb {R} ^{n}}$ be the right hand side of a differential equation for the special case ${\displaystyle D=(c,d)\times \mathbb {R} ^{n}}$ for an interval ${\displaystyle J=(c,d)}$. Let ${\displaystyle I=(a,b)}$ be the maximal interval of existence of a solution around ${\displaystyle (t_{0},x_{0})\in D}$. Then either ${\displaystyle a=c}$ or ${\displaystyle \|x(t)\|\to \infty }$ as ${\displaystyle t\to a}$. Similarly, either ${\displaystyle b=d}$ or ${\displaystyle \|x(t)\|\to \infty }$ as ${\displaystyle t\to b}$.
Proof:

By the preceding theorem, the solution eventually leaves every compact set ${\displaystyle K\subset D}$ as ${\displaystyle t\to a}$ or ${\displaystyle t\to b}$. In particular, this holds for the compact sets ${\displaystyle {\overline {D_{k}}}}$. But leaving ${\displaystyle {\overline {D_{k}}}}$ implies either ${\displaystyle \|(t,x(t))\|\geq k}$ or ${\displaystyle |c-t|\leq 1/k}$ or ${\displaystyle |d-t|\leq 1/k}$, since the distance of ${\displaystyle (t,x(t))}$ to ${\displaystyle \partial D}$ is exactly the distance of ${\displaystyle t}$ to the nearer of the interval endpoints ${\displaystyle c}$, ${\displaystyle d}$. Hence, if ${\displaystyle a\neq c}$, then ${\displaystyle \|x(t)\|\to \infty }$ as ${\displaystyle t\to a}$, and the analogous statement holds for ${\displaystyle b}$ and ${\displaystyle d}$.${\displaystyle \Box }$
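This dichotomy is illustrated by the classic blow-up example ${\displaystyle x'=x^{2}}$, ${\displaystyle x(0)=1}$ (here ${\displaystyle D=\mathbb {R} \times \mathbb {R} }$, i.e. ${\displaystyle c=-\infty }$, ${\displaystyle d=+\infty }$), whose exact solution ${\displaystyle x(t)=1/(1-t)}$ has maximal interval of existence ${\displaystyle (-\infty ,1)}$. A rough forward-Euler sketch shows ${\displaystyle \|x(t)\|}$ exploding as ${\displaystyle t\to b=1}$:

```python
# Blow-up example: x' = x**2, x(0) = 1; exact solution 1/(1 - t) on (-inf, 1).
def euler(f, t0, x0, h, steps):
    """Forward Euler; a rough sketch, not a production integrator."""
    t, x = t0, x0
    trajectory = [(t, x)]
    for _ in range(steps):
        x += h * f(t, x)
        t += h
        trajectory.append((t, x))
    return trajectory

traj = euler(lambda t, x: x * x, 0.0, 1.0, 1e-4, 9_990)  # up to t ~ 0.999
t_end, x_end = traj[-1]
print(t_end, x_end)  # x is already very large as t approaches 1
```

Since ${\displaystyle b=1<d=\infty }$, the statement above forces ${\displaystyle \|x(t)\|\to \infty }$ as ${\displaystyle t\to 1}$, which the numerical values reflect.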