Parallel Spectral Numerical Methods/Timestepping

Introduction

We now briefly discuss how to solve initial value problems. For more on this, see Bradie (Chap. 7)[1]. A slightly longer but still quick introduction to these ideas can also be found in Boyce and DiPrima[2].

Forward Euler

In order to compute solutions to differential equations on computers efficiently, it is convenient to do our calculations at a finite number of specified points and then interpolate between these points. For many calculations it is convenient to use a grid whose points are equally distant from each other.

For the rest of this section, ${\displaystyle h}$ will be our step size, which is assumed to be constant. When solving an ODE or PDE, the choice of ${\displaystyle h}$ is not made at random, but rather requires some intuition and/or theoretical analysis. We are going to start with the forward Euler method, which is the most basic numerical method. Let's first denote the time at the ${\displaystyle n^{\text{th}}}$ time-step by ${\displaystyle t^{n}}$ and the computed solution at the ${\displaystyle n^{\text{th}}}$ time-step by ${\displaystyle y^{n}}$, where ${\displaystyle y^{n}\equiv y(t=t^{n})}$. The step size ${\displaystyle h}$ in terms of ${\displaystyle t}$ is defined as ${\displaystyle h=t^{n+1}-t^{n}}$. Let's start with a basic ODE with initial conditions, in which ${\displaystyle f(t,y)}$ is some arbitrary function and ${\displaystyle y(t)}$ is our solution,

${\displaystyle {\frac {dy}{dt}}=f(t,y)\qquad y(t^{0})=y^{0}.}$

( 1)

The differential equation can be approximated by finite differences,

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}=f(t^{n},y^{n}).}$

Now all we have to do is solve for ${\displaystyle y^{n+1}}$ algebraically,

${\displaystyle y^{n+1}=y^{n}+hf(t^{n},y^{n})\qquad {\text{(Forward Euler/Explicit method)}}}$

( 2)

To approximate the solution of eq. 1, we start from the known initial value ${\displaystyle y(t^{0})=y^{0}}$ and use eq. 2 to generate an approximation ${\displaystyle y^{1}}$ at time ${\displaystyle t^{1}}$. We then repeat this process until the final time is reached.
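The repeated update in eq. 2 can be sketched in a few lines of Python; the function name `forward_euler` and the test problem ${\displaystyle dy/dt=y}$ are illustrative choices, not from the text.

```python
def forward_euler(f, y0, t0, tmax, h):
    """March from t0 to tmax with fixed step h, returning the lists of
    times and approximate solution values."""
    t, y = [t0], [y0]
    while t[-1] < tmax - 1e-12:       # guard against rounding overshoot
        y.append(y[-1] + h * f(t[-1], y[-1]))
        t.append(t[-1] + h)
    return t, y

# Example: dy/dt = y with y(0) = 1, whose exact solution is e^t.
t, y = forward_euler(lambda t, y: y, 1.0, 0.0, 1.0, 0.1)
print(y[-1])   # about 2.5937, versus the exact value e = 2.71828...
```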

An Example Computation

Let's take the ODE in eq. 1 with ${\displaystyle f(t,y)=y}$ and initial condition ${\displaystyle y(t^{0})=1}$ where ${\displaystyle t^{0}=0}$. Using forward Euler, let's plot two examples where ${\displaystyle h=1}$ and ${\displaystyle h=0.1}$.

A numerical solution to the ODE in eq. 1 with f(t,y) = y demonstrating the accuracy of the Forward Euler method for different choices of timestep.

It should be no surprise that a smaller step size such as ${\displaystyle h=0.1}$ is more accurate than ${\displaystyle h=1}$. Looking at the line for ${\displaystyle h=1}$, you can see that ${\displaystyle y(t)}$ is calculated at only a few points, with straight lines interpolated between them. This is obviously not very accurate, but it gives a rough idea of what the function looks like. The solution for ${\displaystyle h=0.1}$ requires ten times as many steps, but it is clearly more accurate. Forward Euler is an example of a first order method and approximates the exact solution using the first two terms in the Taylor expansion[footnote 1]

${\displaystyle y(t^{n}+h)=y(t^{n})+h\left.{\frac {dy}{dt}}\right|_{t^{n}}+{\text{O}}(h^{2}),}$

where terms of higher order than O${\displaystyle (h^{2})}$ are omitted in the approximate solution. Substituting this into eq. 2 we get that

${\displaystyle y^{n}+h\left.{\frac {dy}{dt}}\right|_{t^{n}}+{\text{O}}(h^{2})=y^{n}+hf(t^{n},y^{n})}$

After cancelling terms and dividing by ${\displaystyle h}$, we get that

${\displaystyle \left.{\frac {dy}{dt}}\right|_{t^{n}}+{\text{O}}(h)=f(t^{n},y^{n}),}$

from which it is clear that the accuracy of the method changes linearly with the step size, and hence it is first order accurate.
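This first order behaviour can be checked numerically: halving the step size should roughly halve the error at a fixed final time. Below is a small sketch, assuming the test problem ${\displaystyle dy/dt=-y}$ with exact solution ${\displaystyle e^{-t}}$ (this problem and the helper name are not from the text).

```python
import math

def fe_error(h, tmax=1.0):
    n = round(tmax / h)
    y = 1.0
    for _ in range(n):
        y += h * (-y)                 # one forward Euler step for f = -y
    return abs(y - math.exp(-tmax))   # global error at t = tmax

e1, e2 = fe_error(0.01), fe_error(0.005)
print(e1 / e2)   # close to 2: halving h roughly halves the error
```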

Backwards Euler

A variation of forward Euler can be obtained by approximating the derivative with a backward difference quotient. Applying this to eq. 1 gives

${\displaystyle {\frac {y^{n}-y^{n-1}}{h}}\approx f(t^{n},y^{n}),}$

which can be rearranged to

${\displaystyle y^{n}=y^{n-1}+hf(t^{n},y^{n}).}$

Stepping the index up from ${\displaystyle n}$ to ${\displaystyle n+1}$ we obtain,

${\displaystyle y^{n+1}=y^{n}+hf(t^{n+1},y^{n+1})\qquad {\text{(Backwards Euler/Implicit method)}}}$

Notice how ${\displaystyle y^{n+1}}$ is not written explicitly, as it was in the forward Euler method. This equation instead implicitly defines ${\displaystyle y^{n+1}}$ and must be solved to determine the value of ${\displaystyle y^{n+1}}$. How difficult this is depends entirely on the complexity of the function ${\displaystyle f}$. For example, if ${\displaystyle f}$ is just ${\displaystyle y^{2}}$, then the implicit equation is quadratic in ${\displaystyle y^{n+1}}$ and can be solved with the quadratic formula, but many nonlinear PDEs require other methods. Some of these methods will be introduced later.
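As a sketch of the quadratic case, take ${\displaystyle f(t,y)=y^{2}}$. Writing ${\displaystyle Y=y^{n+1}}$, one backward Euler step requires solving ${\displaystyle hY^{2}-Y+y^{n}=0}$; of the two roots, we keep the one that reduces to ${\displaystyle y^{n}}$ as ${\displaystyle h\rightarrow 0}$. The helper name and values below are illustrative, not from the text.

```python
import math

def backward_euler_step_y_squared(yn, h):
    # One backward Euler step for dy/dt = y^2: solve h*Y^2 - Y + yn = 0
    # and pick the root that tends to yn as h -> 0 (the minus branch).
    disc = 1.0 - 4.0 * h * yn
    return (1.0 - math.sqrt(disc)) / (2.0 * h)

ynext = backward_euler_step_y_squared(1.0, 0.01)
print(ynext)   # just above 1.01, since the slope grows along the step
```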

Crank-Nicolson

By taking an average of the forward and backward Euler methods, we can find the Crank-Nicolson method:

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}={\frac {1}{2}}f(t^{n+1},y^{n+1})+{\frac {1}{2}}f(t^{n},y^{n})}$

Rearranging we obtain,

${\displaystyle y^{n+1}=y^{n}+{\frac {h}{2}}\left[f(t^{n+1},y^{n+1})+f(t^{n},y^{n})\right]\qquad {\text{(Crank-Nicolson)}}}$

Notice again how ${\displaystyle y^{n+1}}$ is not written explicitly like it was in forward Euler. This equation instead implicitly defines ${\displaystyle y^{n+1}}$ and so the equation must be solved algebraically to obtain ${\displaystyle y^{n+1}}$.
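When the implicit equation has no convenient closed-form solution, one simple option is fixed-point iteration. The sketch below applies it to a single Crank-Nicolson step for the illustrative problem ${\displaystyle dy/dt=-y^{3}}$; the function name, problem, and tolerances are assumptions, not from the text.

```python
def cn_step_fixed_point(yn, h, f, tol=1e-12, maxit=100):
    # Solve y = yn + (h/2)*(f(y) + f(yn)) by repeated substitution,
    # starting from the previous value as the initial guess.
    y = yn
    for _ in range(maxit):
        y_new = yn + 0.5 * h * (f(y) + f(yn))
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

ynext = cn_step_fixed_point(1.0, 0.1, lambda y: -y**3)
print(ynext)   # a little below 1.0, since dy/dt < 0 here
```

Fixed-point iteration converges here because ${\displaystyle h}$ is small enough that each substitution contracts the error; for larger steps or stiffer problems, Newton's method is a common alternative.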

Stability of Forward Euler, Backward Euler and Crank-Nicolson

Let's look at the following ODE

${\displaystyle {\frac {dy}{dt}}=-\lambda y(t)}$

where ${\displaystyle \lambda }$ is a constant, with ${\displaystyle y(t^{0})=1}$ and ${\displaystyle t^{0}=0}$. Let's numerically solve this ODE using the forward Euler, backward Euler and Crank-Nicolson time stepping schemes. The results are as follows

${\displaystyle y^{n+1}=y^{n}-\lambda hy^{n}\qquad {\text{(Forward Euler)}}}$

${\displaystyle y^{n+1}={\frac {y^{n}}{(1+\lambda h)}}\qquad {\text{(Backward Euler)}}}$

${\displaystyle y^{n+1}=y^{n}\left({\frac {2-\lambda h}{2+\lambda h}}\right)\qquad {\text{(Crank-Nicolson)}}}$

and the exact solution is given by

${\displaystyle y(t)=e^{-\lambda t}\qquad {\text{(Exact solution)}}}$

A numerical solution to the ODE ${\displaystyle y'=-\lambda y}$ with λ = 20 and with a timestep of h = 0.1, demonstrating the instability of the forward Euler method and the stability of the backward Euler and Crank-Nicolson methods.

The figure above shows how the backward Euler and Crank-Nicolson solutions converge to the exact solution, while the forward Euler solution is unstable for the chosen timestep. Listed below is a Matlab program where you can play around with the value of ${\displaystyle \lambda }$ to see how, for a fixed timestep, this changes the stability of the method.

( A)

% A program to demonstrate instability of timestepping methods
% when the timestep is inappropriately chosen.

% Differential equation: y'(t)=-lambda*y(t)
% Initial condition: y(t_0)=1 where t_0=0
clear all; clc; clf;

% Grid
h=0.1;
tmax=4;
Npoints = tmax/h;
lambda=0.1;

% Initial data
y0=1;
t_0=0;
t(1)=t_0;
y_be(1)=y0;
y_fe(1)=y0;
y_imr(1)=y0;

for n=1:Npoints
    % Forward Euler
    y_fe(n+1)=y_fe(n)-lambda*h*y_fe(n);
    % Backward Euler
    y_be(n+1)=y_be(n)/(1+lambda*h);
    % Crank-Nicolson
    y_imr(n+1)=y_imr(n)*(2-lambda*h)/(2+lambda*h);
    t(n+1)=t(n)+h;
end

%Exact Solution
tt=[0:.001:tmax];
exact=exp(-lambda*tt);

%Plot
figure(1); clf; plot(tt,exact,'r-',t,y_fe,'b:',t,y_be,'g--',t,y_imr,'k-.');
xlabel time; ylabel y;
legend('Exact','Forward Euler','Backward Euler',...
'Crank Nicolson');


( Ap)

A Python program to demonstrate instability of different time-stepping methods.
#!/usr/bin/env python
"""
A program to demonstrate instability of timestepping methods
when the timestep is inappropriately chosen.
"""

from math import exp
import matplotlib.pyplot as plt
import numpy

# Differential equation: y'(t)=-l*y(t) y(t_0)=y_0
# Initial condition: y(t_0)=1 where t_0=0

# Definition of the grid
h = 0.1                      # Time step size
t0 = 0                       # Initial time
tmax = 4                     # Value at which y(tmax) is to be computed
Npoints = int((tmax-t0)/h)   # Number of points in the grid

t = [t0]

# Initial data
l = 0.1
y0 = 1          # Initial condition y(t0)=y0
y_be = [y0]     # Values given by the backward Euler iteration
y_fe = [y0]     # Values given by the forward Euler iteration
y_imr = [y0]    # Values given by the implicit midpoint rule iteration

for i in range(Npoints):
    y_fe.append(y_fe[-1]*(1-l*h))
    y_be.append(y_be[-1]/(1+l*h))
    y_imr.append(y_imr[-1]*(2-l*h)/(2+l*h))
    t.append(t[-1]+h)

print("Exact value: y(%d)=%f" % (tmax, exp(-l*tmax)))
print("Backward Euler value: y(%d)=%f" % (tmax, y_be[-1]))
print("Forward Euler value: y(%d)=%f" % (tmax, y_fe[-1]))
print("Implicit midpoint rule value: y(%d)=%f" % (tmax, y_imr[-1]))

# Exact solution
tt = numpy.arange(0, tmax, 0.001)
exact = numpy.exp(-l*tt)

# Plot
plt.figure()
plt.plot(tt, exact, 'r-', t, y_fe, 'b:', t, y_be, 'g--', t, y_imr, 'k-.')
plt.xlabel('time')
plt.ylabel('y')
plt.legend(('Exact', 'Forward Euler', 'Backward Euler',
            'Implicit Midpoint Rule'))
plt.show()


Stability and Accuracy of Forward Euler, Backward Euler and Crank-Nicolson Time Stepping Schemes for ${\displaystyle y'=-\lambda y}$

We now try to understand these observations so that we have some guidelines to design numerical methods that are stable. The numerical solution to an initial value problem with a bounded solution is stable if the numerical solution can be bounded by functions which are independent of the step size. There are two methods which are typically used to understand stability. The first is linearized stability, which involves calculating the eigenvalues of a linear system to see whether small perturbations grow or decay. The second is to calculate an energy-like quantity associated with the differential equation and check whether this remains bounded.

We shall assume that ${\displaystyle \lambda \geq 0}$ so that the exact solution to the ODE does not grow without bound. The forward Euler method gives us

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}=-\lambda y^{n}}$

${\displaystyle y^{n+1}=(1-\lambda h)y^{n}}$

${\displaystyle \Rightarrow |y^{n+1}|=|1-\lambda h|\,|y^{n}|,}$

so the numerical solution grows at each step if ${\displaystyle |1-\lambda h|>1}$ and decays if ${\displaystyle |1-\lambda h|<1}$.

We can do a similar calculation for backward Euler to get

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}=-\lambda y^{n+1}}$

${\displaystyle y^{n+1}={\frac {y^{n}}{1+\lambda h}}}$

${\displaystyle \Rightarrow |y^{n+1}|\leq \left|{\frac {y^{n}}{1+\lambda h}}\right|\leq |y^{n}|\quad {\text{ since }}\left|{\frac {1}{1+\lambda h}}\right|<1.}$

Thus, the backward Euler method is unconditionally stable, whereas the forward Euler method is not. We leave the analysis of the Crank-Nicolson method as an exercise.
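These conclusions can be seen concretely from the per-step amplification factors of the three schemes: each one multiplies ${\displaystyle y^{n}}$ by a fixed factor per step, and the scheme is stable when that factor has magnitude at most one. The values ${\displaystyle \lambda =30}$, ${\displaystyle h=0.1}$ below are illustrative choices (not from the text), picked so that ${\displaystyle \lambda h>2}$.

```python
# Per-step amplification factors for y' = -lambda*y.
lam, h = 30.0, 0.1             # illustrative values with lam*h > 2

fe = 1.0 - lam * h             # forward Euler multiplier
be = 1.0 / (1.0 + lam * h)     # backward Euler multiplier
cn = (2.0 - lam * h) / (2.0 + lam * h)   # Crank-Nicolson multiplier

print(abs(fe), abs(be), abs(cn))   # 2.0 0.25 0.2
```

Forward Euler's multiplier exceeds one in magnitude, so the iterates grow without bound, while the two implicit schemes have multipliers below one for any ${\displaystyle h>0}$.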

A second method, often used to show stability for partial differential equations, is to look for an energy-like quantity and show that it bounds the solution, preventing it from becoming too positive or too negative. Usually, the quantity is chosen to be non-negative, so that one only needs to deduce an upper bound. We sketch how this is done for an ordinary differential equation so that we can use the same ideas when looking at partial differential equations. Recall that the forward Euler algorithm is given by

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}=-\lambda y^{n}.}$

Multiplying this by ${\displaystyle y^{n+1}}$ we find that

${\displaystyle (y^{n+1})^{2}=(1-h\lambda )y^{n}y^{n+1}.}$

Now to obtain a bound on ${\displaystyle |y^{n+1}|}$ in terms of ${\displaystyle |y^{n}|}$, we use the following fact

${\displaystyle (a-b)^{2}\geq 0\Rightarrow a^{2}+b^{2}\geq 2ab\Rightarrow {\frac {(y^{n+1})^{2}+(y^{n})^{2}}{2}}\geq y^{n}y^{n+1}.}$

Hence, provided that

${\displaystyle (1-h\lambda )>0,}$

we obtain

${\displaystyle (y^{n+1})^{2}\leq (1-h\lambda ){\frac {(y^{n+1})^{2}+(y^{n})^{2}}{2}}}$

${\displaystyle (y^{n+1})^{2}{\frac {1+h\lambda }{2}}\leq {\frac {1-h\lambda }{2}}(y^{n})^{2}}$

${\displaystyle (y^{n+1})^{2}\leq {\frac {1-h\lambda }{1+h\lambda }}(y^{n})^{2},}$

thus if ${\displaystyle 1-h\lambda >0}$, then ${\displaystyle 0<{\frac {1-h\lambda }{1+h\lambda }}<1}$ and so we have stability. We again see that the algorithm is stable provided the timestep is small enough. There are many situations for which ${\displaystyle \lambda }$ is large, and so the timestep ${\displaystyle h}$ needs to be very small. In such situations, the forward Euler method can be very slow on a computer.

Stability is not the only requirement for a numerical method to approximate the solution to an initial value problem. We also want to show that as the timestep is made smaller, the numerical approximation becomes better. For the forward Euler method we have that

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}=-\lambda y^{n}}$

now if

${\displaystyle y^{n}=y(t)}$

${\displaystyle y^{n+1}=y(t+h)}$

then[footnote 2]

${\displaystyle y(t+h)=y(t)+h{\frac {\mathrm {d} y}{\mathrm {d} t}}+O(h^{2})}$

so

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}+\lambda y^{n}}$

${\displaystyle ={\frac {y(t+h)-y(t)}{h}}+\lambda y(t)}$

${\displaystyle ={\frac {\mathrm {d} y}{\mathrm {d} t}}+O(h)+\lambda y(t)}$

${\displaystyle =O(h).}$

We can do a similar calculation to show that the Crank-Nicolson method is second order. In this case, however, we use Taylor expansions around ${\displaystyle y(t+h/2)}$.

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}=-\lambda {\frac {y^{n+1}+y^{n}}{2}}}$

so

${\displaystyle y^{n+1}=y(t+h)=y(t+h/2)+(h/2){\frac {\mathrm {d} y}{\mathrm {d} t}}+(h/2)^{2}{\frac {1}{2}}{\frac {\mathrm {d} ^{2}y}{\mathrm {d} t^{2}}}+O(h^{3})}$

${\displaystyle y^{n}=y(t)=y(t+h/2)-(h/2){\frac {\mathrm {d} y}{\mathrm {d} t}}+(h/2)^{2}{\frac {1}{2}}{\frac {\mathrm {d} ^{2}y}{\mathrm {d} t^{2}}}+O(h^{3})}$

hence

${\displaystyle {\frac {y^{n+1}-y^{n}}{h}}+\lambda {\frac {y^{n+1}+y^{n}}{2}}}$

${\displaystyle ={\frac {\mathrm {d} y}{\mathrm {d} t}}+O(h^{2})+\lambda \left[y(t+h/2)+O(h^{2})\right]}$

${\displaystyle =O(h^{2}).}$

Thus this is a second order method.
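The second order behaviour can also be confirmed numerically: halving ${\displaystyle h}$ should reduce the error at a fixed final time by roughly a factor of four. Below is a sketch using the exact per-step multiplier derived earlier; the helper name and parameter values are illustrative.

```python
import math

def cn_error(h, lam=1.0, tmax=1.0):
    n = round(tmax / h)
    factor = (2.0 - lam * h) / (2.0 + lam * h)   # per-step multiplier
    return abs(factor ** n - math.exp(-lam * tmax))

r = cn_error(0.01) / cn_error(0.005)
print(r)   # close to 4, as expected of a second order method
```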

Exercises

1. Determine the real values of ${\displaystyle \lambda }$ and timestep ${\displaystyle h}$ for which the implicit midpoint rule is stable for the ODE ${\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} t}}=-\lambda y}$. Sketch the stable region in a graph of ${\displaystyle \lambda }$ against timestep ${\displaystyle h}$.
2. Show that the backward Euler method is a first order method.

Notes

1. The derivation of the Taylor expansion can be found in most books on calculus.
2. We will use big `Oh' to mean that there exists a constant ${\displaystyle C}$ so that if ${\displaystyle f=O(h)}$, then as ${\displaystyle h\rightarrow 0}$, we have that ${\displaystyle \left|{\frac {f}{h}}\right|\leq C}$.

References

Boyce, W.E.; DiPrima, R.C. (2010). Elementary Differential Equations and Boundary Value Problems. Wiley.

Bradie, B. (2006). A Friendly Introduction to Numerical Analysis. Pearson.