Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/Jan08 667


Problem 4

Consider the system

.


Problem 4a

Show that if the parameter is chosen sufficiently small, then this system has a unique solution within some rectangular region.

Solution 4a

The system of equations may be expressed in matrix notation.


The Jacobian of this map is computed using partial derivatives:



If the parameter is sufficiently small and the variables are restricted to a bounded rectangular region,



From the mean value theorem, for any two points in the region, there exists an intermediate point such that



Since the Jacobian is bounded in the region, for a sufficiently small parameter we have



Therefore the map is a contraction, and by the contraction mapping theorem there exists a unique fixed point (solution) in the rectangular region.
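As a concrete illustration of the contraction argument, here is a minimal Python sketch of the resulting fixed point (Picard) iteration. The exam's actual system is not reproduced above, so the map g and the parameter eps below are hypothetical placeholders, chosen only so that the Jacobian stays small and g is a contraction.

```python
import numpy as np

# Hypothetical stand-in for the exam's system: a fixed point map x = g(x)
# whose Jacobian is O(eps), so g is a contraction on the region of interest.
eps = 0.1

def g(x):
    return np.array([eps * np.sin(x[0] + x[1]),
                     eps * np.cos(x[0] - x[1])])

def fixed_point(g, x0, tol=1e-12, maxit=200):
    """Picard iteration x_{k+1} = g(x_k); converges whenever g is a contraction."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

x_star, iters = fixed_point(g, [0.5, 0.5])
print(x_star, iters)
```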

Problem 4b

Derive a fixed point iteration scheme for solving the system and show that it converges.

Solution 4b

Use Newton's Method

To solve this problem, we can use Newton's method. In fact, we want to find the zeros of


The Jacobian of this function is computed using partial derivatives:



Then Newton's method for solving this system of equations is given by


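The following is a minimal Python sketch of this Newton iteration. Since the exam's system is not reproduced above, the function g and its Jacobian Dg are hypothetical stand-ins; g is taken as the identity minus a small perturbation, so its zeros are fixed points of the kind discussed in part (a).

```python
import numpy as np

# Hypothetical stand-in for the exam's function g and its Jacobian Dg.
def g(x):
    return np.array([x[0] - 0.1 * np.sin(x[0] + x[1]),
                     x[1] - 0.1 * np.cos(x[0] - x[1])])

def Dg(x):
    return np.array([[1 - 0.1 * np.cos(x[0] + x[1]), -0.1 * np.cos(x[0] + x[1])],
                     [0.1 * np.sin(x[0] - x[1]),      1 - 0.1 * np.sin(x[0] - x[1])]])

def newton(g, Dg, x0, tol=1e-12, maxit=50):
    """Newton's method: solve Dg(x_k) s = -g(x_k), then set x_{k+1} = x_k + s."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        s = np.linalg.solve(Dg(x), -g(x))
        x = x + s
        if np.linalg.norm(s) < tol:
            return x, k + 1
    return x, maxit

root, iters = newton(g, Dg, [0.5, 0.5])
print(root, iters)
```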
Show convergence by showing the Newton hypotheses hold

Jacobian of g is Lipschitz

We want to show that the Jacobian of g is a Lipschitz function. In fact,



and now, using that each of these entries is Lipschitz, we have


Jacobian of g is invertible

Since the fixed point map from part (a) is a contraction, the spectral radius of its Jacobian is less than 1, i.e.,


.


On the other hand, the function g whose zeros we seek is the identity minus that fixed point map, so the eigenvalues of the Jacobian of g are 1 minus the eigenvalues of the contraction's Jacobian.

Since the latter have modulus less than 1, it follows that no eigenvalue of the Jacobian of g is zero, or equivalently that the Jacobian of g is invertible.

Inverse of Jacobian of g is bounded


Since the inverse of the Jacobian of g exists, it can be written out entrywise. Given a bounded region (bounded variables), each entry of this matrix is bounded. Therefore the norm of the inverse is bounded.

Norm of (Jacobian of g)^-1(x_0) * g(x_0) is bounded

This holds since the inverse of the Jacobian of g is bounded and g is bounded on the region.


Then, given a good enough initial approximation, Newton's method is at least quadratically convergent, i.e.,

Problem 5a

Outline the derivation of the Adams-Bashforth methods for the numerical solution of the initial value problem \( y'(t) = f(t, y(t)),\ y(t_0) = y_0 \).

Solution 5a

We want to solve the initial value problem \( y'(t) = f(t, y(t)) \), \( y(t_0) = y_0 \).


First, we integrate this expression over \( [t_n, t_{n+1}] \) to obtain

\[ y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(t, y(t))\, dt , \]


To approximate the integral on the right-hand side, we approximate its integrand by an interpolation polynomial of appropriate degree built at the previously computed nodes \( t_n, t_{n-1}, \dots \).


This idea generates the Adams-Bashforth methods.


\[ y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} p(t)\, dt , \]

where \( y_n \) denotes the approximate solution at \( t_n \) and \( p \) denotes the associated Lagrange interpolation polynomial of \( f \) at the previous nodes.
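For instance, in the simplest case the interpolation polynomial uses only the single node \( t_n \), and the construction above reduces to the forward Euler method; this is shown below as a sanity check of the recipe, and the coefficients of the higher-order Adams-Bashforth methods are obtained in the same way by integrating the Lagrange basis polynomials.

\[
p(t) \equiv f(t_n, y_n)
\quad\Longrightarrow\quad
y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} f(t_n, y_n)\, dt = y_n + h\, f(t_n, y_n) .
\]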

Problem 5b

Derive the Adams-Bashforth formula

\[ y_{n+1} = y_n + \frac{h}{2}\bigl( 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \bigr) . \qquad (1) \]

Solution 5b

From (a) we have

\[ y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} p(t)\, dt , \]

where \( p \) is the linear Lagrange polynomial interpolating \( f \) at the two previous nodes:

\[ p(t) = f(t_{n-1}, y_{n-1})\, \frac{t - t_n}{t_{n-1} - t_n} + f(t_n, y_n)\, \frac{t - t_{n-1}}{t_n - t_{n-1}} . \]

Then, if we let \( t_{n+1} - t_n = t_n - t_{n-1} = h \), where \( h \) is the fixed step size, we get

\[ \int_{t_n}^{t_{n+1}} p(t)\, dt = \frac{h}{2}\bigl( 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \bigr) . \]
So we have the Adams-Bashforth method (1) as desired:

\[ y_{n+1} = y_n + \frac{h}{2}\bigl( 3 f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \bigr) . \]
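A minimal Python sketch of the two-step method (1) follows. It uses a single forward Euler step to generate the second starting value, and it is exercised on the test problem y' = -y, y(0) = 1, which is only an illustrative choice and not part of the exam.

```python
import numpy as np

def ab2(f, t0, y0, h, nsteps):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h/2 * (3 f_n - f_{n-1})."""
    t = t0 + h * np.arange(nsteps + 1)
    y = np.empty(nsteps + 1)
    y[0] = y0
    y[1] = y[0] + h * f(t[0], y[0])          # one forward Euler step to start
    for n in range(1, nsteps):
        y[n + 1] = y[n] + 0.5 * h * (3.0 * f(t[n], y[n]) - f(t[n - 1], y[n - 1]))
    return t, y

# Illustrative test problem (not from the exam): y' = -y, exact solution e^{-t}.
f = lambda t, y: -y
t, y = ab2(f, 0.0, 1.0, h=0.01, nsteps=100)
print(abs(y[-1] - np.exp(-t[-1])))           # error behaves like O(h^2)
```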

Problem 5c

Analyze the method (1). To be specific, find the local truncation error, prove convergence and find the order of convergence.

Solution 5c

Find Local Truncation Error Using Taylor Expansion

Note that \( y'(t) = f(t, y(t)) \) for the exact solution. Also, denote the uniform step size by \( h \). Hence,




Therefore, the given equation may be written as


Expand Left Hand Side

Expanding about \( t_n \), we get


Expand Right Hand Side

Also expanding about \( t_n \) gives


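Since the expansions themselves are not reproduced above, here is the standard computation they describe, a sketch under the convention that the local truncation error \( \tau_n \) is the residual obtained when the exact solution is inserted into (1) and divided by \( h \):

\[
\begin{aligned}
\frac{y(t_{n+1}) - y(t_n)}{h} &= y'(t_n) + \frac{h}{2}\, y''(t_n) + \frac{h^2}{6}\, y'''(t_n) + O(h^3), \\
\frac{1}{2}\bigl( 3\, y'(t_n) - y'(t_{n-1}) \bigr) &= y'(t_n) + \frac{h}{2}\, y''(t_n) - \frac{h^2}{4}\, y'''(t_n) + O(h^3), \\
\tau_n = \frac{y(t_{n+1}) - y(t_n)}{h} - \frac{1}{2}\bigl( 3\, y'(t_n) - y'(t_{n-1}) \bigr) &= \frac{5}{12}\, h^2\, y'''(t_n) + O(h^3),
\end{aligned}
\]

where \( y'(t_j) = f(t_j, y(t_j)) \) has been used. Hence the local truncation error is \( O(h^2) \), i.e., the residual of a single step is \( O(h^3) \).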
Show Convergence by Showing Stability and Consistency

A method is convergent if and only if it is both stable and consistent.

Stability

The method satisfies the root condition: its first characteristic polynomial is \( \rho(z) = z^2 - z \), whose roots are \( 0 \) and \( 1 \), and the only root on the unit circle is simple. Hence the method is zero stable.

Consistency

The truncation error is of order 2, so it tends to zero as \( h \) tends to zero. Hence the method is consistent.

Order of Convergence

By the Dahlquist equivalence theorem, consistency plus zero stability implies convergence, and the method converges with the order of its local truncation error. Hence the order of convergence is 2.

Problem 6

Consider the problem


Problem 6a

Give a variational formulation of (2), i.e. express (2) as

Define the space H, the bilinear form B, and the linear functional F, and state the relation between (2) and (3).

Solution 6a

Multiplying (2) by a test function and integrating by parts, we obtain:




Thus, the weak or variational form associated with problem (2) reads as follows: find \( u \in H \) such that

\[ B(u, v) = F(v) \]

for all \( v \in H \),


where .
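Since the statement of (2) is not reproduced above, the following definitions are only an illustrative sketch for the model problem \( -u'' = f \) on \( (0,1) \) with \( u(0) = u(1) = 0 \); the actual exam problem may contain lower-order terms or other boundary conditions, which would change \( H \), \( B \) and \( F \) accordingly.

\[
H = H_0^1(0,1), \qquad
B(u, v) = \int_0^1 u'(x)\, v'(x)\, dx, \qquad
F(v) = \int_0^1 f(x)\, v(x)\, dx .
\]

With these definitions, (3) reads: find \( u \in H \) such that \( B(u, v) = F(v) \) for all \( v \in H \). Every classical solution of (2) solves (3), and a solution of (3) that is sufficiently smooth solves (2).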

Problem 6b

Let \( x_0 < x_1 < \dots < x_n \) be a mesh on the domain with mesh size \( h \), and let

.

Define the finite element approximation based on the approximation space \( V_h \). What can be said about the error in the \( H^1 \) Sobolev norm?

Solution 6b

Define piecewise linear basis

For our basis of \( V_h \), we use the set of hat functions \( \{\varphi_i\} \), i.e., for each interior node \( x_i \),

\[
\varphi_i(x) =
\begin{cases}
\dfrac{x - x_{i-1}}{x_i - x_{i-1}}, & x \in [x_{i-1}, x_i],\\[6pt]
\dfrac{x_{i+1} - x}{x_{i+1} - x_i}, & x \in [x_i, x_{i+1}],\\[6pt]
0, & \text{otherwise.}
\end{cases}
\]
Define u_h

Since \( \{\varphi_i\} \) is a basis for \( V_h \) and \( u_h \in V_h \), we can write

\[ u_h = \sum_i c_i\, \varphi_i . \]

Discrete Problem

Now we can write the discrete problem: find \( u_h \in V_h \) such that

\[ B(u_h, v_h) = F(v_h) \]

for all \( v_h \in V_h \).


Using that \( \{\varphi_i\} \) is a basis of \( V_h \) together with the linearity of the bilinear form \( B \) and of the functional \( F \), we obtain the equivalent problem:


Find the coefficients \( c_j \) such that

\[ \sum_j B(\varphi_j, \varphi_i)\, c_j = F(\varphi_i) \quad \text{for each basis index } i . \]
This last problem can be formulated as a matrix problem as follows:


Find \( c \) such that

\[ K c = F , \]

where \( K_{ij} = B(\varphi_j, \varphi_i) \) and \( F_i = F(\varphi_i) \).
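As an illustration of this matrix formulation, here is a minimal Python sketch that assembles and solves \( K c = F \) for the model problem \( -u'' = f \) on \( (0,1) \) with homogeneous Dirichlet boundary conditions on a uniform mesh (an assumption, since the exam's problem (2) is not reproduced above); the load integrals are approximated by a simple one-point quadrature.

```python
import numpy as np

def assemble(f, n):
    """Stiffness matrix K and load vector F for -u'' = f on (0,1), u(0)=u(1)=0,
    with piecewise linear hat functions on a uniform mesh of n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)           # nodes x_0, ..., x_n
    K = np.zeros((n - 1, n - 1))               # interior nodes only (Dirichlet BC)
    F = np.zeros(n - 1)
    for i in range(n - 1):
        K[i, i] = 2.0 / h                      # int phi_i' phi_i' dx
        if i + 1 < n - 1:
            K[i, i + 1] = K[i + 1, i] = -1.0 / h   # int phi_i' phi_{i+1}' dx
        F[i] = f(x[i + 1]) * h                 # one-point quadrature for int f phi_i dx
    return K, F, x

f = lambda x: np.pi**2 * np.sin(np.pi * x)     # exact solution u(x) = sin(pi x)
K, F, x = assemble(f, 50)
c = np.linalg.solve(K, F)                      # coefficients = nodal values of u_h
u_h = np.concatenate(([0.0], c, [0.0]))
print(np.max(np.abs(u_h - np.sin(np.pi * x)))) # small nodal error
```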

Bound error

In general terms, we can use Cea's lemma to obtain

\[ \| u - u_h \|_{H^1} \le C \inf_{v_h \in V_h} \| u - v_h \|_{H^1} . \]

In particular, we can take \( v_h \) to be the Lagrange interpolant of \( u \), which we denote by \( I_h u \). Then,

\[ \| u - u_h \|_{H^1} \le C\, \| u - I_h u \|_{H^1} \le C\, h\, | u |_{H^2} , \]

so the error in the \( H^1 \) norm is of order \( h \) whenever the solution has \( H^2 \) regularity.


It is easy to prove that the finite element solution is nodally exact. It therefore coincides with the Lagrange interpolant, and we have the following pointwise estimate:


Problem 6c

Derive an estimate for the error in \( L^2 \). Hint: Let \( w \) solve

(#) :

We characterize variationally as

.

Let to get

Use formula (4) to estimate .

Solution 6c

Continuity of Bilinear Form

Orthogonality of the Error

.

Bound for L2 norm of w


Hence,


Bound for L2 norm of w

From , we have



Then,


Bound L2 Error


Finally, from (#), we have that . Then,


,


or equivalently,


.
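Since the intermediate formulas above are not reproduced, here is a sketch of the standard Aubin-Nitsche duality argument that this section follows, written for a continuous, coercive, symmetric bilinear form \( B \) and assuming full elliptic regularity for the dual problem (#):

\[
\begin{aligned}
\| u - u_h \|_{L^2}^2 &= B(u - u_h,\, w) && \text{(definition of the dual solution } w\text{)} \\
&= B(u - u_h,\, w - I_h w) && \text{(Galerkin orthogonality, since } I_h w \in V_h\text{)} \\
&\le C\, \| u - u_h \|_{H^1}\, \| w - I_h w \|_{H^1} && \text{(continuity of } B\text{)} \\
&\le C\, h\, | u |_{H^2} \cdot C\, h\, | w |_{H^2} && \text{(interpolation and the } H^1 \text{ estimate)} \\
&\le C\, h^2\, | u |_{H^2}\, \| u - u_h \|_{L^2} && \text{(elliptic regularity for } w\text{)},
\end{aligned}
\]

so dividing by \( \| u - u_h \|_{L^2} \) gives \( \| u - u_h \|_{L^2} \le C\, h^2\, | u |_{H^2} \).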

Problem 6d

Suppose \( \{\varphi_i\} \) is a basis for \( V_h \). Show that

where \( K \) is the stiffness matrix.

Solution 6d

We know that


where the substitution in the last lines comes from the orthogonality of the error.


Now,


Then, we have obtained