Parallel Spectral Numerical Methods/Nonlinear Ordinary Differential Equations and Iteration


Nonlinear Ordinary Differential Equations and Iteration

The implicit-explicit method avoids the direct solution of nonlinear problems. This can be advantageous for some problems, but can also lead to severe time step restrictions in others. Furthermore, the resulting numerical schemes can sometimes have undesirable qualitative properties. For this reason, we need to describe methods that allow us to solve the nonlinear equations generated in fully-implicit numerical schemes.

We consider an ordinary differential equation \frac{\mathrm{d}y}{\mathrm{d}t}=f(t,y)
for t\in[t_{0},t^{*}], where f(t,y) is not necessarily a linear function of y. We want to use an implicit numerical method, such as the backward Euler method, to obtain an approximate solution of this problem. To demonstrate the convergence of such a scheme, we must also demonstrate the convergence of the functional iteration used to solve the nonlinear equation that arises at each step of the backward Euler method.

The results that follow are primarily taken from Iserles[1], although this material is also often found in calculus texts such as Lax, Burstein and Lax[2], and Hughes-Hallett et al.[3]. We will let t_{i} denote the time at time step i, y_{i} the approximate solution at time step i, and h the time step. We will assume f is Lipschitz continuous, a condition that is weaker than differentiability but stronger than continuity, and which we define precisely below. There are two classical iteration methods:

  • fixed-point iteration
  • Newton’s (Newton-Raphson) method.

We will prove convergence of these two methods (a proof of the convergence of the modified Newton-Raphson method is in Iserles[4]). We will analyze the specific problem y'(t)=y^{2} with initial data y(0)=1 and t\in[0,0.99].

Exact Solution to an Example Nonlinear Ordinary Differential Equation

We consider \frac{\mathrm{d}y}{\mathrm{d}t}=y^{2}
with initial data y(t=0)=1 and t\in[0,0.99]. Wherever the solution y(t) exists, it remains positive, because the initial value is positive and \frac{\mathrm{d}y}{\mathrm{d}t}=y^{2}\geq0, so the solution is non-decreasing.

To solve this equation in closed form, we use separation of variables to find that \int_{y(0)}^{y(t)}\frac{1}{\tilde{y}^{2}}\mathrm{d}\tilde{y}=\int_0^t\mathrm{d}\tau
which implies -\frac{1}{y(t)}=t+c
where c is the constant of integration. Using our initial data we get c=-1, so y(t)=\frac{1}{1-t}
is our exact solution for this problem. We will use this exact solution to compare the numerical solutions obtained by the different iterative methods. Notice that this exact solution becomes infinite as t\rightarrow1.
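
Where a computer algebra system is available, this closed-form solution can also be checked symbolically. Below is a minimal sketch using SymPy; the use of SymPy here is our own choice and not part of the programs listed later:

# Check that y' = y^2, y(0) = 1 has solution y(t) = 1/(1-t).
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(t).diff(t), y(t)**2), y(t), ics={y(0): 1})
print(sol)  # Eq(y(t), -1/(t - 1)), i.e. y(t) = 1/(1 - t)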

Definitions Required to Prove Convergence

Definition A function f: D\rightarrow\mathbb{R}, D\subset\mathbb{R}, is Lipschitz if there exists a constant \lambda\geq0 such that \left\Vert f(x_1)-f(x_2)\right\Vert \leq\lambda\left\Vert x_1-x_2\right\Vert for all x_1 and x_2 in the domain D.

There are two specific versions of the Lipschitz condition.

Definition The function f(x) is called locally Lipschitz if, for each z\in\mathbb{R}, there exists an L>0 such that f is Lipschitz on the open ball of center z and radius L.

Definition If f(x) is Lipschitz on all of \mathbb{R} (i.e. the open ball in the above definition is all of \mathbb{R}), then f is globally Lipschitz.

Note the fundamental difference between the local and global versions of the Lipschitz condition. In the local version the Lipschitz constant \lambda and the open ball depend on the point z\in\mathbb{R}, whereas in the global version the constant \lambda is fixed and the ball is all of \mathbb{R}. In particular, a globally Lipschitz function is locally Lipschitz continuous, but the converse is not true.
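
To make the distinction concrete, consider f(y)=y^{2}, the right-hand side of our example. Its difference quotients are bounded on any bounded interval but grow without bound on all of \mathbb{R}. A short illustrative computation (the helper name quotient is ours):

# Difference quotients of f(y) = y^2: bounded on [1, 100], unbounded on R.
def quotient(f, a, b):
    return abs(f(a) - f(b)) / abs(a - b)

f = lambda y: y**2

print(quotient(f, 1.0, 100.0))  # 101.0 <= 200, a local Lipschitz bound on [1, 100]
print(quotient(f, 0.0, 1e6))    # 1000000.0, grows as the points spread out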

Existence and Uniqueness of Solutions to Ordinary Differential Equations

Peano’s theorem states that if f(x) is continuous, then a solution to the ordinary differential equation x'(t)=f(x) with initial condition x(t_0)=x_0 exists at least in some neighbourhood of time t_0 – this solution need not be unique. Picard’s theorem states that if f(x) is locally Lipschitz, then the solution for the ordinary differential equation x'(t)=f(x) with initial condition x(t_0)=x_0 is unique when it exists. A comprehensive statement of these theorems is in Iserles[5], and there are proofs of these theorems in many books on ordinary differential equations (for example Birkhoff and Rota[6]).

Backward Euler

We recall that the backward Euler method is given by y^{n+1}=y^{n}+hf(y^{n+1}).
Note that if f is nonlinear, we need to solve a nonlinear equation at each step when advancing the numerical solution. It is usually hard to solve a nonlinear equation exactly using analytical methods, so we solve it numerically as well. For our example equation, we get y^{n+1}=y^{n}+h\left(y^{n+1}\right)^2.
This example has the advantage that we can find its solutions algebraically, so we can then examine the behavior of numerical schemes.
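
Indeed, the update above is the quadratic equation h\,w^{2}-w+y^{n}=0 in the unknown w=y^{n+1}, so one step can be written in closed form by choosing the root that approaches y^{n} as h\rightarrow0. A sketch (the function name beuler_step is ours, not from the listings below):

import math

def beuler_step(yn, h):
    # One backward Euler step for y' = y^2: solve h*w**2 - w + yn = 0
    # and take the '-' branch, which tends to yn as h -> 0.
    disc = 1.0 - 4.0*h*yn
    if disc < 0:
        raise ValueError("time step too large: no real root")
    return (1.0 - math.sqrt(disc)) / (2.0*h)

print(beuler_step(1.0, 1e-3))  # approximately 1.001, close to yn + h*yn**2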

Convergence of Functional Iteration

We often use functional iteration to solve nonlinear equations. We recall that there are two popular methods: fixed-point iteration and Newton’s method.

Convergence of the Fixed-Point Method

We want to find a solution of x=f(x), that is, a fixed point of f. The fixed-point method constructs the sequence x_{n+1}=f(x_{n}), where n=0,1,2\ldots.

Theorem Let f(x) have a fixed point \tilde{x}=f(\tilde{x}) in [a,b]\subset\mathbb{R}, be Lipschitz continuous for x\in(a,b) with Lipschitz constant k<1, and be continuous on [a,b]. Then the fixed-point iteration x_{n+1}=f(x_{n}) converges to the unique fixed point x_{\infty}=f(x_{\infty}) in [a,b].

Proof Since f(x) is Lipschitz continuous, we find that \left|x_{n+1}-x_{\infty}\right|=\left|f(x_{n})-f(x_{\infty})\right|\leq k\left|x_{n}-x_{\infty}\right|
for n=1,2\ldots. Hence by induction we conclude that \left|x_{n+1}-x_{\infty}\right|\leq k^{n}\left|x_{1}-x_{\infty}\right|.
Since k<1, \lim_{n\rightarrow\infty}k^n\left|x_1-x_{\infty}\right|=0, so the iterates converge to the fixed point x_{\infty}=f(x_{\infty}). We can show that the limit is unique by supposing that there are two different fixed points and reaching a contradiction.\Box

For a proof of the existence of the fixed-point under the assumptions used in this theorem, see a book on numerical analysis, such as Bradie[7] or Iserles[8].
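
In code, fixed-point iteration is only a few lines. The following sketch (our own helper, not part of the programs listed later) stops when successive iterates agree to a given tolerance:

def fixed_point(f, x0, tol=1e-10, maxit=1000):
    # Iterate x_{n+1} = f(x_n); converges when f is a contraction (k < 1).
    x = x0
    for _ in range(maxit):
        xnew = f(x)
        if abs(xnew - x) < tol:
            return xnew
        x = xnew
    raise RuntimeError("fixed-point iteration did not converge")

# Example: solve w = h*w^2 + beta with h = 0.01, beta = 1.
print(fixed_point(lambda w: 0.01*w**2 + 1.0, 1.0))  # about 1.0102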

To apply fixed-point iteration to our problem, at each time step we need to find the root of an equation of the form

w=hw^{2}+\beta=f(w).   ( 1)

When the time step h is small enough, f'(w)=2hw\leq200h<1 near the solution, since 1\leq y\leq100 for this problem. So fixed-point iteration is convergent provided the time step is small enough. We note that eq. 1 has two roots, and so the choice of the initial iterate plays an important role in determining which root is chosen.
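
The following sketch (with our own illustrative values h=0.01 and \beta=1) shows both behaviors: starting near \beta, the iteration converges to the smaller root of eq. 1, while starting beyond the larger root the iterates blow up:

h, beta = 0.01, 1.0
g = lambda w: h*w**2 + beta
wsmall = (1 - (1 - 4*h*beta)**0.5) / (2*h)  # smaller root of w = h*w^2 + beta

w = beta                   # initial iterate near the solution
for _ in range(50):
    w = g(w)
print(w, wsmall)           # both approximately 1.0102: convergence

w = 150.0                  # initial iterate beyond the larger root (about 98.99)
for _ in range(5):
    w = g(w)
print(w)                   # already huge: the iteration diverges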

Convergence of Newton's Method

We now consider Newton's method. We want to find a root x^* of f, i.e. a point such that f(x^*)=0. Newton's method is a fixed-point method whose iterates are constructed by

x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}   ( 2)

where n=0,1,2\ldots. If the function f(x) is sufficiently well behaved, then Newton's method has a quadratic rate of convergence.

Theorem Suppose f(x) is twice continuously differentiable and that its second derivative is bounded. Suppose also that there exists x^* for which f(x^{*})=0, that f'(x)\neq0 on the interval \left[x^{*}-\left|x^{*}-x_0\right|,x^{*}+\left|x^{*}-x_{0}\right|\right], and that \left|x_{0}-x^{*}\right| is sufficiently small. Then Newton's method converges quadratically.

Proof By Taylor expansion with the Lagrange form of the remainder, f(x^{*})=f(x_n)+f'(x_n)(x^{*}-x_n)+\frac{1}{2!}f''(z_n)(x^{*}-x_n)^{2}
where z_{n} lies between x_n and x^{*}. Since f(x^*)=0, we have 0=f(x_n)+f'(x_n)(x^{*}-x_n)+\frac{1}{2!}f''(z_n)(x^{*}-x_n)^{2},
so, dividing by f'(x_n), \frac{f(x_n)}{f'(x_n)}+(x^{*}-x_{n})=-\frac{1}{2!}\frac{f''(z_n)}{f'(x_n)}(x^{*}-x_n)^{2}.
Substituting the formula for x_{n+1} from eq. 2, we have x^{*}-x_{n+1}=-\frac{1}{2!}\frac{f''(z_n)}{f'(x_n)}(x^{*}-x_{n})^{2}.
Let e_{n}=\left|x^{*}-x_{n}\right|. Then e_{n+1}=\left|\frac{1}{2!}\frac{f''(z_n)}{f'(x_n)}\right|e_{n}^{2},
and by our assumptions there is a finite constant c such that \left|\frac{1}{2!}\frac{f''(z_n)}{f'(x_n)}\right|<c.
Hence e_{n+1}<ce_{n}^{2}, so Newton's method converges provided e_0=\left|x_0-x^*\right| is sufficiently small.\Box

Regarding our problem, we consider f(y)=y-hy^{2}-\beta. Hence f'(y)=1-2hy, which is nonzero for small enough h (since 1\leq y\leq100 for this problem, f'(y)\geq1-200h>0 when h<1/200), and f''(y)=-2h is finite. So our problem satisfies all the assumptions if we choose our initial data and initial iterates suitably. Hence the Newton iterations will converge and give an approximation to the nonlinear term in backward Euler's method.
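
A short numerical check makes the quadratic convergence visible: the error roughly squares at every step until machine precision is reached. The values h=0.01 and \beta=1 below are our own illustrative choices:

h, beta = 0.01, 1.0
f = lambda y: y - h*y**2 - beta
df = lambda y: 1 - 2*h*y
ystar = (1 - (1 - 4*h*beta)**0.5) / (2*h)  # exact root, for comparison

y = beta  # initial iterate
for n in range(5):
    y = y - f(y)/df(y)
    print(n, abs(y - ystar))  # errors roughly square at each step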

Convergence of the Theta Method

The backward Euler, forward Euler and Crank-Nicolson methods are special cases of the theta method, so we will first prove the convergence of the theta method to encompass these three methods. The theta method is the following algorithm, y^{n+1}=y^{n}+h[\theta f(t^{n},y^{n})+(1-\theta)f(t^{n+1},y^{n+1})]
where n=0,1,\ldots and \theta\in[0,1]. Notice that in this convention \theta=1 gives the forward Euler method, \theta=0 gives the backward Euler method, and \theta=1/2 gives the Crank-Nicolson method (trapezoidal rule).
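
One step of the theta method requires solving an implicit equation for y^{n+1}; a minimal sketch doing this with fixed-point iteration (the function name theta_step is ours) is:

def theta_step(f, tn, yn, h, theta, tol=1e-12, maxit=100):
    # One step of y^{n+1} = y^n + h*[theta*f(t^n,y^n) + (1-theta)*f(t^{n+1},y^{n+1})].
    # theta = 1: forward Euler; theta = 0: backward Euler; theta = 1/2: Crank-Nicolson.
    tn1 = tn + h
    y = yn  # initial iterate: the previous value
    for _ in range(maxit):
        ynew = yn + h*(theta*f(tn, yn) + (1 - theta)*f(tn1, y))
        if abs(ynew - y) < tol:
            return ynew
        y = ynew
    raise RuntimeError("implicit solve did not converge")

# Example: one backward Euler step (theta = 0) for y' = y^2.
print(theta_step(lambda t, y: y**2, 0.0, 1.0, 1e-3, 0.0))  # about 1.001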

First, substituting the exact solution y(t) and using the Taylor expansion we have

y(t^{n+1})-y(t^{n})-h[\theta f(t^{n},y(t^{n}))+(1-\theta)f(t^{n+1},y(t^{n+1}))]

=y(t^{n+1})-y(t^{n})-h[\theta y'(t^{n})+(1-\theta)y'(t^{n+1})]

=\left[y(t^n)+hy'(t^n)+\frac{1}{2}h^{2}y''(t^n)+\frac{1}{6}h^{3}y'''(t^n)\right]-y(t^{n})-h\left\{\theta y'(t^{n})+(1-\theta)\left[y'(t^{n})+hy''(t^{n})+\frac{1}{2}h^{2}y'''(t^{n})\right]\right\}+\mathcal{O}(h^{4})

=\left(\theta-\frac{1}{2}\right)h^{2}y''(t^{n})+\left(\frac{1}{2}\theta-\frac{1}{3}\right)h^{3}y'''(t^{n})+\mathcal{O}(h^{4}).

Subtracting the last expression from

y^{n+1}-y^{n}-h[\theta f(t^{n},y^{n})+(1-\theta)f(t^{n+1},y^{n+1})]=0,

we have that when h is small enough

e^{n+1,h}=e^{n,h}+\theta h[f(t^{n},y(t^{n})+e^{n,h})-f(t^{n},y(t^{n}))]+(1-\theta)h[f(t^{n+1},y(t^{n+1})+e^{n+1,h})-f(t^{n+1},y(t^{n+1}))]-\frac{1}{12}h^{3}y'''(t^{n})+\mathcal{O}(h^{4})  for \theta=\frac{1}{2},

e^{n+1,h}=e^{n,h}+\theta h[f(t^{n},y(t^{n})+e^{n,h})-f(t^{n},y(t^{n}))]+(1-\theta)h[f(t^{n+1},y(t^{n+1})+e^{n+1,h})-f(t^{n+1},y(t^{n+1}))]+\left(\theta-\frac{1}{2}\right)h^{2}y''(t^{n})+\mathcal{O}(h^{3})  for \theta\neq\frac{1}{2},

where e^{i,h}=y^{i}-y(t^{i}) is the error at time step i. Using the triangle inequality and by the Lipschitz continuity of f, there exist constants c and \lambda such that

\left\Vert e^{n+1,h}\right\Vert \leq\left\Vert e^{n,h}\right\Vert +\theta h\lambda\left\Vert e^{n,h}\right\Vert +(1-\theta)h\lambda\left\Vert e^{n+1,h}\right\Vert +ch^{3}  if \theta=\frac{1}{2},

\left\Vert e^{n+1,h}\right\Vert \leq\left\Vert e^{n,h}\right\Vert +\theta h\lambda\left\Vert e^{n,h}\right\Vert +(1-\theta)h\lambda\left\Vert e^{n+1,h}\right\Vert +ch^{2}  if \theta\neq\frac{1}{2}.

When \theta=\frac{1}{2}, the theta method reduces to the Crank-Nicolson method (trapezoidal rule), and it is possible to show that this method has second order convergence; see, for example, Iserles[9]. Now let us consider \theta\neq\frac{1}{2}. Rearranging the last inequality gives \left\Vert e^{n+1,h}\right\Vert \leq\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\left\Vert e^{n,h}\right\Vert +\frac{c}{1-(1-\theta)h\lambda}h^{2}.

We claim that \left\Vert e^{n,h}\right\Vert \leq\frac{c}{\lambda}\left[\left(\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right)^{n}-1\right]h. We prove this statement by induction. When n=0, \left\Vert e^{0,h}\right\Vert =0, since the initial condition is known exactly. Now suppose the statement is true for n=k, where k\geq0 is an integer. We want to show that it holds for n=k+1. Consider


\left\Vert e^{k+1,h}\right\Vert \leq\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\left\Vert e^{k,h}\right\Vert +\frac{c}{1-(1-\theta)h\lambda}h^{2},

then substitute the induction hypothesis

\left\Vert e^{k,h}\right\Vert \leq\frac{c}{\lambda}\left[\left(\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right)^{k}-1\right]h.

We have

\left\Vert e^{k+1,h}\right\Vert \leq\frac{c}{\lambda}\left[\left(\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right)^{k+1}-\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right]h+\frac{c}{1-(1-\theta)h\lambda}h^{2}

=\frac{c}{\lambda}\left[\left(\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right)^{k+1}-1\right]h.

So our claim is true for all n. Note that


\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda} {}=1+\frac{h\lambda}{1-(1-\theta)h\lambda} \leq \exp\left(\frac{h\lambda}{1-(1-\theta)h\lambda}\right)

by a Taylor expansion of the exponential function, since 1+x\leq\exp(x) for x\geq0. Thus, we have


\left\Vert e^{n,h}\right\Vert{} \leq\frac{c}{\lambda}\left[\left(\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right)^{n}-1\right]h


\leq\frac{c}{\lambda}\left(\frac{1+\theta h\lambda}{1-(1-\theta)h\lambda}\right)^{n}h


\leq\frac{ch}{\lambda}\exp\left(\frac{nh\lambda}{1-(1-\theta)h\lambda}\right).

By our condition, nh\leq t^{*}. Therefore \left\Vert e^{n,h}\right\Vert \leq\frac{ch}{\lambda}\exp\left(\frac{t^{*}\lambda}{1-(1-\theta)h\lambda}\right). Hence \lim_{h\rightarrow0}\left\Vert e^{n,h}\right\Vert =0 uniformly for 0\leq nh\leq t^{*}, so the theta method is convergent of order 1 when \theta\neq\frac{1}{2}.

Note that the backward Euler method is a special case of the theta method when \theta=0, so backward Euler's method is convergent of order 1. We arrive at our theorem.

Theorem Backward Euler’s method is convergent of order 1.
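
This first-order convergence is easy to observe numerically: halving the time step should roughly halve the maximum error. A minimal self-contained check (our own condensed version of the fixed-point program listed later):

def backward_euler_error(n, tmax=0.99, tol=1e-12):
    # Max error of backward Euler with fixed-point iteration for
    # y' = y^2, y(0) = 1, against the exact solution 1/(1-t).
    h, y, t, err = tmax/n, 1.0, 0.0, 0.0
    for _ in range(n):
        yold = y
        while True:
            ynew = h*yold**2 + y
            if abs(ynew - yold) < tol:
                break
            yold = ynew
        y, t = ynew, t + h
        err = max(err, abs(y - 1.0/(1.0 - t)))
    return err

for n in (10000, 20000, 40000):
    print(n, backward_euler_error(n))  # errors roughly halve as h halves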

Remark If f is globally Lipschitz, then we can apply the above argument with respect to any time interval. If f is only locally Lipschitz, then we need to analyze the situation more carefully. First, by Picard’s theorem, there is a unique solution of this ordinary differential equation for a short amount of time. Indeed, we just need to know that the Lipschitz constant is finite without necessarily needing to know the exact value.

Remark If one did not know of Picard's theorem, one could deduce the existence and uniqueness of solutions to ODEs by using time discretization.

Now we consider y'=y^{2} and t\in[0,0.99]. The exact solution of this problem is y(t)=\frac{1}{1-t}, so 1\leq y\leq100. In our problem, f=y^{2} is clearly analytic, and hence locally Lipschitz. It is easy to show that f is not globally Lipschitz: if a function f(x) were globally Lipschitz, there would be a finite constant \lambda such that \frac{\left\Vert f(x)-f(y)\right\Vert }{\left\Vert x-y\right\Vert }\leq\lambda for all x,y\in \mathbb{R}; but taking x=0 and letting \left\Vert y\right\Vert \rightarrow\infty, it is easy to check that \frac{\left\Vert f(x)-f(y)\right\Vert }{\left\Vert x-y\right\Vert }=\left\Vert y\right\Vert\rightarrow\infty.

We now discuss how one can find local Lipschitz constants \lambda. When f is differentiable, we often just differentiate f and find the maximum value of its derivative on the domain of interest. In our example, f is simple and we only need to know that the Lipschitz constant is finite, so we use a cruder method to show this:

\left\Vert f(y^{1})-f(y^{2})\right\Vert =\left\Vert y^{1}+y^{2}\right\Vert \left\Vert y^{1}-y^{2}\right\Vert \leq\left(\left\Vert y^{1}\right\Vert +\left\Vert y^{2}\right\Vert\right)\left\Vert y^{1}-y^{2}\right\Vert.

So it suffices to find the maximal value of \left\Vert y\right\Vert for this problem. In our problem, y(t) is continuous, and it remains positive because the initial value is positive and y' is positive. A continuous function attains a finite maximum on a closed and bounded set. Since the exact solution of our problem is y(t)=\frac{1}{1-t}, we have 1\leq y\leq100, so the Lipschitz constant in our problem is finite; by the bound above it is at most 200.
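
As a sanity check, one can sample the exact solution and confirm that the difference quotients of f(y)=y^{2} along the solution never exceed 200 (a small script of our own):

# Estimate the Lipschitz constant of f(y) = y^2 along y(t) = 1/(1-t), t in [0, 0.99].
ys = [1.0/(1.0 - 0.99*i/1000) for i in range(1001)]
quots = [abs(a**2 - b**2)/abs(a - b) for a in ys for b in ys if a != b]
print(max(quots))  # just under 200 = 2 * max|y|, consistent with the bound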

Finally, we obtain the convergence of the functional iteration and of backward Euler's method for our problem. Thus our numerical scheme for y'=y^{2} with initial data y(0)=1 and t\in[0,0.99] is convergent.

Corollary By the theorems on existence and uniqueness of solutions of ordinary differential equations and the theorems above, we arrive at our final goal: the numerical solution generated by backward Euler's method with functional iteration exists, is unique, and converges to the exact solution as the time step h approaches zero.

Remark This requires careful choice of initial iterates when doing functional iteration.

Remark Typically, the exact solution of an ODE is not known, although it may still be possible to deduce local Lipschitz continuity. Should the solution become infinite, a numerical method will either fail to converge or display very large values as the approximate solution tracks the exact one. Some care is required when interpreting numerical simulations in such cases.

Example Programs which use Iteration to Solve a Nonlinear Ordinary Differential Equation

The following Matlab and Python programs demonstrate the backward Euler method for the example equation. The first pair uses fixed-point iteration to solve for the nonlinear term and the second pair uses Newton's method.

( A) A Matlab program to demonstrate fixed-point iteration
% A program to solve y'=y^2 using the backward Euler
% method and fixed point iteration
% This is not optimized and is very simple
 
clear all; format compact; format short;
set(0,'defaultaxesfontsize',25,'defaultaxeslinewidth',.7,...
'defaultlinelinewidth',6,'defaultpatchlinewidth',3.7,...
'defaultaxesfontweight','bold')
 
n=10000; % Number of timesteps
Tmax=0.99; % Maximum time
y0=1; % Initial value
tol=0.1^10; % Tolerance for fixed point iterations
dt=Tmax/n; % Time step
y=zeros(1,n+1); % vector for discrete solution
t=zeros(1,n+1); % vector for times of discrete solution
y(1)=y0;
t(1)=0;
tic, % start timing
for i=1:n
    yold=y(i); ynew=y(i); err=1;
    while err>tol
        ynew=dt*yold^2+y(i);
        err=abs(ynew-yold);
        yold=ynew;
    end
    y(i+1)=ynew;
    t(i+1)=t(i)+dt;
end
toc, % stop timing
yexact=1./(1-t); max(abs(y-yexact)), % print the maximum error
figure(1); clf; plot(t,y,'r+',t,yexact,'b-.');
xlabel Time; ylabel Solution; legend('Backward Euler','Exact solution');
title('Numerical solution of dy/dt=y^2');

( Ap) A Python program to demonstrate fixed-point iteration
#!/usr/bin/env python
"""
A program to solve y'=y^2 using the backward Euler
method and fixed point iteration
This is not optimized and is very simple
"""

import time
import matplotlib.pyplot as plt

N = 1000            # Number of timesteps
tmax = 0.99         # Maximum time
y0 = 1              # Initial value
t0 = 0              # Initial time
tol = pow(0.1, 10)  # Tolerance for fixed point iterations
h = tmax / N        # Time step

y = [y0]            # List to hold the discrete solution
t = [t0]            # List to hold the times of the discrete solution

T0 = time.time()    # Start timing
for i in range(N):
    yold = y[i]
    ynew = y[i]
    err = 1
    while err > tol:
        ynew = h * yold**2 + y[i]
        err = abs(ynew - yold)
        yold = ynew
    y.append(ynew)
    t.append(t[i] + h)
T = time.time() - T0  # Stop timing

yexact = [1.0 / (1.0 - x) for x in t]

print("Exact value: y(%f)=%f" % (tmax, 1 / (1 - tmax)))
print("Value given by approximation: y(%f)=%f" % (tmax, y[-1]))
maxerr = max(abs(y[i] - yexact[i]) for i in range(len(y)))
print("Maximum error: %f" % maxerr)
print("Elapsed time is %f" % T)

plt.figure()
plt.plot(t, y, 'r+', t, yexact, 'b-.')
plt.xlabel('Time')
plt.ylabel('Solution')
plt.legend(('Backward Euler', 'Exact solution'))
plt.title('Numerical solution of dy/dt=y^2')
plt.show()

( B) A Matlab program to demonstrate Newton iteration
% A program to solve y'=y^2 using the backward Euler
% method and Newton iteration
% This is not optimized and is very simple
 
set(0,'defaultaxesfontsize',25,'defaultaxeslinewidth',.7,...
'defaultlinelinewidth',6,'defaultpatchlinewidth',3.7,...
    'defaultaxesfontweight','bold')
 
n=100000; % Number of timesteps
Tmax=0.99; % Maximum time
y0=1; % Initial value
tol=0.1^10; % Tolerance for Newton iterations
dt=Tmax/n; % Time step
y=zeros(1,n+1); % vector for discrete solution
t=zeros(1,n+1); % vector for times of discrete solution
y(1)=y0;
t(1)=0;
tic, % start timing
for i=1:n
    yold=y(i); ynew=y(i); err=1;
    while err>tol
        ynew=yold-(yold-y(i)-dt*yold^2)/(1-2*dt*yold);
        err=abs(ynew-yold);
        yold=ynew;
    end
    y(i+1)=ynew;
    t(i+1)=t(i)+dt;
end
toc, % stop timing
yexact=1./(1-t); max(abs(y-yexact)), % print maximum error
figure(1); clf; plot(t,y,'r+',t,yexact,'b-.');
xlabel Time; ylabel Solution;
legend('Backward Euler','Exact solution');
title('Numerical solution of dy/dt=y^2');

( Bp) A Python program to demonstrate Newton iteration
#!/usr/bin/env python
"""
A program to solve y'=y^2 using the backward Euler
method and Newton iteration
This is not optimized and is very simple
"""

import time
import matplotlib.pyplot as plt

N = 1000            # Number of timesteps
tmax = 0.99         # Maximum time
t0 = 0              # Initial t value
y0 = 1              # Initial value y(t0) = y0
tol = 0.1**10       # Tolerance for Newton iterations
h = (tmax - t0) / N # Time step

y = [y0]            # List for discrete solutions
t = [t0]            # List with grid of discrete values of t

T0 = time.time()    # Start timing
for i in range(N):
    yold = y[i]
    ynew = y[i]
    err = 1
    while err > tol:
        ynew = yold - (yold - y[i] - h * yold**2) / (1 - 2 * h * yold)
        err = abs(ynew - yold)
        yold = ynew
    y.append(ynew)
    t.append(t[-1] + h)
T = time.time() - T0  # Stop timing

print("Exact value y(%f) = %f" % (t[N], 1 / (1 - t[N])))
print("Value given by approximation y_n(%f) = %f" % (t[N], y[N]))
print("The error = |y - y_n| = %f" % abs(1 / (1 - t[N]) - y[N]))
print("Time taken = %f" % T)

yexact = [1.0 / (1.0 - x) for x in t]

plt.figure()
plt.plot(t, y, 'r+', t, yexact, 'b-.')
plt.xlabel('Time')
plt.ylabel('Solution')
plt.legend(('Backward Euler', 'Exact solution'))
plt.title('Numerical solution of dy/dt=y^2')
plt.show()

Exercises

  1. Run the fixed-point iteration program in Matlab and check that the outcome is reasonable. Now investigate how changing the number of time steps taken to go from a time of 0 to a time of 0.99, and the tolerance for the fixed-point iterations, affects the maximum error. In particular, try a range of 1,000-1,000,000 (in powers of 10) for the number of time steps and a tolerance ranging from 10^{-1} to 10^{-7} (in powers of 10^{-1}). You should observe that there is an “ideal” combination of subdivisions and tolerance which minimizes the error. What are these combinations? Repeat the whole process using Newton iteration instead. How have the answers changed?
  2. Write a Matlab program to solve y'=y^2 with y(0)=1 using the Crank-Nicolson method and fixed point iteration. Explain why there are two fixed-points to which the fixed-point iteration can converge. Which of these fixed-points gives the correct approximation to the solution of the differential equation? Comment on how the choice of initial iterate for the fixed-point iteration determines the fixed-point to which the method converges.
  3. Consider the differential equation y'=\sqrt{| y |} with y(0)=0.
    a. Show that the right-hand side of this differential equation is not Lipschitz continuous at y=0.
    b. Find at least two analytical solutions to this differential equation.
    c. Compute a numerical solution to this differential equation using the forward Euler method.
    d. Compute a numerical solution to this differential equation using the backward Euler method. Be sure to try different initial guesses for the fixed-point iteration, not just the value at the previous time step; you should be able to see the influence of the choice of initial iterate on the solution selected by the numerical method. Comment on this.
    e. Compute a numerical solution to this differential equation using the implicit midpoint rule. Be sure to try different initial guesses for the fixed-point iteration, not just the value at the previous time step; you should be able to see the influence of the choice of initial iterate on the “solution” selected by the numerical method. Comment on this.
    f. Repeat (d) and (e) with Newton iteration.
    g. Comment on the applicability of numerical methods for solving differential equations without unique solutions.
  4. Modify the program for the 1-D Allen-Cahn equation so that it uses the Crank-Nicolson method and fixed-point iteration for the nonlinear term. You will need to calculate the nonlinear term in real space, so that your resulting scheme is \frac{\hat{u}^{n+1,k+1}-\hat{u}^n}{\delta t}=\frac{\hat{u}_{xx}^{n+1,k+1}+\hat{u}^n_{xx}}{2} + \frac{1}{2}\widehat{\left[u^{n+1,k}- \left(u^{n+1,k} \right)^3\right]}+\frac{1}{2}\widehat{\left[u^{n}- \left(u^{n}\right)^3\right]},
where n denotes the time step and k denotes the iterate. Stop the iterations once the maximum difference between successive iterates is sufficiently small.
  5. Modify the program for the 2-D Allen-Cahn equation so that it uses the Crank-Nicolson method and fixed-point iteration for the nonlinear term. You will need to calculate the nonlinear term in real space.

Notes

  1. Iserles (2009)
  2. Lax, Burstein and Lax (1976)
  3. Hughes et al. (2008)
  4. Iserles (2009)
  5. Iserles (2009)
  6. Birkhoff and Rota (1989)
  7. Bradie (2006)
  8. Iserles (2009)
  9. Iserles (2009)

References

Birkhoff, G; Rota, G.C. (1989). Ordinary Differential Equations (4 ed.). Wiley. 

Bradie, B. (2006). A Friendly Introduction to Numerical Analysis. Pearson. 

Hughes-Hallett, D.; Gleason, A.M.; McCallum, W.G.; et al. (2008). Calculus: Single and Multivariable (5th ed.). Wiley.

Iserles, A. (2009). A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press. 

Lax, P.D.; Burstein, S.Z.; Lax, A. (1976). Calculus with Applications and Computing. Vol. 1. Springer.