# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/Jan05 667

## Problem 4

 Let ${\displaystyle f:R^{n}\rightarrow R\!\,}$ be a nonlinear smooth function. To determine a (local) minimum of ${\displaystyle f\!\,}$ one can use a descent method of the form ${\displaystyle x_{k+1}=x_{k}+\alpha _{k}d_{k}\quad (k\geq 0)\quad \quad (1)\!\,}$ where ${\displaystyle 0\leq \alpha _{k}\leq 1\!\,}$ is a suitable parameter obtained by backtracking and ${\displaystyle d_{k}\!\,}$ is a descent direction, i.e. it satisfies ${\displaystyle f(x_{k+1})<f(x_{k})\quad (k\geq 0)\quad \quad (2)\!\,}$

## Problem 4a

 Write the steepest descent (or gradient) method and show that there exists ${\displaystyle 0<\alpha _{k}\leq 1\!\,}$ such that the resulting method satisfies (2).

## Solution 4a

### Steepest Descent Method

${\displaystyle x_{k+1}=x_{k}-\alpha _{k}\nabla f(x_{k})\!\,}$

Choose ${\displaystyle \alpha _{k}\in (0,1]\!\,}$ to (approximately) minimize ${\displaystyle f\!\,}$ along the search direction, i.e.

${\displaystyle \alpha _{k}=\arg \min _{0<\alpha \leq 1}f(x_{k}-\alpha \nabla f(x_{k}))\!\,}$

In practice, backtracking does not compute the exact minimizer but an ${\displaystyle \alpha _{k}\!\,}$ giving sufficient decrease.
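As an illustrative sketch (not part of the exam solution), the steepest descent method with a backtracking (Armijo) line search can be written as follows; the function name and the constants `rho` and `c` are arbitrary choices:

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha0=1.0, rho=0.5, c=1e-4,
                     tol=1e-8, max_iter=500):
    """Gradient descent x_{k+1} = x_k - alpha_k * grad f(x_k), with
    alpha_k in (0, 1] chosen by backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                          # steepest descent direction
        alpha = alpha0
        # shrink alpha until f(x + alpha*d) <= f(x) + c*alpha*(g . d)
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x = x + alpha * d
    return x

# quadratic test problem f(x) = x1^2 + 4*x2^2, minimizer at the origin
f = lambda x: x[0]**2 + 4.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
x_star = steepest_descent(f, grad, [1.0, 1.0])
```

The Armijo test is exactly the sufficient-decrease version of condition (2): the loop terminates because the directional derivative is negative.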

### Directional Derivative Negative

For ${\displaystyle d_{k}=-\nabla f(x_{k})\!\,}$, the directional derivative of ${\displaystyle f\!\,}$ at ${\displaystyle x_{k}\!\,}$ is negative whenever ${\displaystyle \nabla f(x_{k})\neq 0\!\,}$:

${\displaystyle \nabla _{d_{k}}f(x_{k})=\lim _{\alpha \rightarrow 0^{+}}{\frac {f(x_{k}+\alpha d_{k})-f(x_{k})}{\alpha }}=\nabla f(x_{k})\cdot d_{k}=-\|\nabla f(x_{k})\|^{2}<0\!\,}$

By the definition of the limit, there exists ${\displaystyle {\bar {\alpha }}\in (0,1]\!\,}$ such that for all ${\displaystyle \alpha _{k}\in (0,{\bar {\alpha }}]\!\,}$

${\displaystyle \underbrace {f(x_{k}+\alpha _{k}d_{k})} _{f(x_{k+1})}<f(x_{k})\!\,}$

Hence an ${\displaystyle \alpha _{k}\in (0,1]\!\,}$ satisfying (2) exists.

## Problem 4b

 Write the Newton method and examine whether or not there exist ${\displaystyle \alpha _{k}\!\,}$ which yield (2). Establish conditions on the Hessian ${\displaystyle H(f(x))\!\,}$ of ${\displaystyle f(x)\!\,}$ which guarantee the existence of ${\displaystyle \alpha _{k}\!\,}$.

## Solution 4b

### Newton's Method

${\displaystyle x_{k+1}=x_{k}-H^{-1}(f(x_{k}))\nabla f(x_{k})\!\,}$

### Directional Derivative Negative

To have descent, we want the directional derivative to be negative i.e.

${\displaystyle \nabla _{d_{k}}f(x_{k})=\nabla f(x_{k})\cdot d_{k}=-\nabla f(x_{k})^{T}H^{-1}(f(x_{k}))\nabla f(x_{k})<0\!\,}$

(multiplying ${\displaystyle d_{k}\!\,}$ by ${\displaystyle \alpha _{k}>0\!\,}$ does not change the sign). This holds for every ${\displaystyle \nabla f(x_{k})\neq 0\!\,}$ if and only if

${\displaystyle \nabla f(x_{k})^{T}H^{-1}(f(x_{k}))\nabla f(x_{k})>0\!\,}$

i.e. if ${\displaystyle H^{-1}(f(x_{k}))\!\,}$ is positive definite, which is equivalent to ${\displaystyle H(f(x_{k}))\!\,}$ being positive definite. In that case, as in part (a), a sufficiently small ${\displaystyle \alpha _{k}\in (0,1]\!\,}$ yields (2).
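A minimal sketch of Newton's method with the positive-definiteness requirement made explicit (the Cholesky factorization fails precisely when the Hessian is not positive definite); the function name and the test problem are illustrative choices:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method x_{k+1} = x_k - H^{-1} grad f(x_k).
    Descent is guaranteed only when H is positive definite, which
    we verify via a Cholesky factorization at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        L = np.linalg.cholesky(H)    # raises LinAlgError if H is not SPD
        # solve H d = -g using the Cholesky factors L L^T = H
        d = -np.linalg.solve(L.T, np.linalg.solve(L, g))
        x = x + d
    return x

# f(x) = x1^4 + x2^2: gradient (4x1^3, 2x2), Hessian diag(12x1^2, 2)
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.diag([12 * x[0]**2, 2.0])
x_star = newton(grad, hess, [1.0, 1.0])
```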

## Problem 4c

 If we replace the Hessian by the matrix ${\displaystyle H(f(x))+\gamma _{k}I\!\,}$, where ${\displaystyle \gamma _{k}\geq 0\!\,}$ and ${\displaystyle I}$ is the identity matrix, we obtain a quasi-Newton method. Find a condition on ${\displaystyle \gamma _{k}\!\,}$ which leads to (2).

## Solution 4c

We now want ${\displaystyle {\tilde {H}}(f(x_{k}))=H(f(x_{k}))+\gamma _{k}I\!\,}$ to be positive definite.

Let ${\displaystyle \lambda _{i}\!\,}$, ${\displaystyle i=1,2,\ldots ,n\!\,}$ be the eigenvalues of ${\displaystyle H(f(x_{k}))\!\,}$.

Then ${\displaystyle \lambda _{i}+\gamma _{k}\!\,}$, ${\displaystyle i=1,2,\ldots ,n\!\,}$ are the eigenvalues of ${\displaystyle {\tilde {H}}(f(x_{k}))\!\,}$ (the eigenvectors are unchanged).

Since we want ${\displaystyle {\tilde {H}}(f(x_{k}))\!\,}$ to be positive definite, we equivalently have for ${\displaystyle i=1,2,\ldots ,n\!\,}$

${\displaystyle \lambda _{i}+\gamma _{k}>0\!\,}$

i.e.

${\displaystyle \gamma _{k}>-\lambda _{i}\quad i=1,2,\ldots ,n\!\,}$

equivalently ${\displaystyle \gamma _{k}>-\lambda _{\min }\!\,}$, where ${\displaystyle \lambda _{\min }\!\,}$ is the smallest eigenvalue of ${\displaystyle H(f(x_{k}))\!\,}$. If ${\displaystyle H(f(x_{k}))\!\,}$ is already positive definite, ${\displaystyle \gamma _{k}=0\!\,}$ suffices.
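A small numerical illustration of this condition (the helper name `regularize_hessian` and the `margin` parameter are illustrative, not from the exam): shifting an indefinite matrix by slightly more than ${\displaystyle -\lambda _{\min }\!\,}$ makes it positive definite.

```python
import numpy as np

def regularize_hessian(H, margin=1e-6):
    """Return H + gamma*I with gamma > -lambda_min(H), so that the
    shifted matrix has eigenvalues lambda_i + gamma > 0."""
    lam_min = np.linalg.eigvalsh(H).min()
    gamma = max(0.0, -lam_min + margin)   # gamma = 0 if H is already SPD
    return H + gamma * np.eye(H.shape[0]), gamma

# indefinite example: eigenvalues -1 and 3
H = np.array([[1.0, 2.0], [2.0, 1.0]])
H_tilde, gamma = regularize_hessian(H)
lam = np.linalg.eigvalsh(H_tilde)
```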

## Problem 5

 Consider the following nonlinear autonomous initial value problem in ${\displaystyle R\!\,}$ with ${\displaystyle f\in C^{2}\!\,}$: ${\displaystyle y'=f(y),\quad y(0)=y_{0},\quad t\in (0,T]\!\,}$

## Problem 5a

 Write the ODE in integral form and use the mid-point quadrature rule to derive the mid-point method with uniform time-step ${\displaystyle h={\frac {T}{N}}}$: ${\displaystyle y_{n+1}=y_{n}+hf({\frac {y_{n+1}+y_{n}}{2}})}$

## Solution 5a

Integrating the ODE over ${\displaystyle [t_{n},t_{n+1}]\!\,}$,

${\displaystyle y(t_{n+1})-y(t_{n})=\int _{t_{n}}^{t_{n+1}}f(y(t))dt\approx h\,f\left(y\left(t_{n}+{\tfrac {h}{2}}\right)\right)\approx h\,f\left({\frac {y_{n+1}+y_{n}}{2}}\right)\!\,}$

where the first approximation is the mid-point quadrature rule and the second uses ${\displaystyle y(t_{n}+{\tfrac {h}{2}})\approx {\tfrac {1}{2}}(y_{n+1}+y_{n})\!\,}$. This yields the mid-point method ${\displaystyle y_{n+1}=y_{n}+hf({\tfrac {y_{n+1}+y_{n}}{2}})\!\,}$.
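Note that the mid-point method is implicit: each step requires solving for ${\displaystyle y_{n+1}\!\,}$. A minimal sketch (the fixed-point solver and the test problem ${\displaystyle y'=-y\!\,}$ are illustrative choices):

```python
import numpy as np

def midpoint_step(f, y, h, tol=1e-13, max_iter=100):
    """One step of the implicit mid-point rule
    y_{n+1} = y_n + h*f((y_{n+1} + y_n)/2),
    solved by fixed-point iteration (contractive for h small enough)."""
    y_new = y + h * f(y)                  # explicit Euler predictor
    for _ in range(max_iter):
        y_next = y + h * f(0.5 * (y_new + y))
        if abs(y_next - y_new) < tol:
            break
        y_new = y_next
    return y_next

def midpoint_solve(f, y0, T, N):
    """March the mid-point method with uniform step h = T/N."""
    h, y = T / N, y0
    for _ in range(N):
        y = midpoint_step(f, y, h)
    return y

# test problem y' = -y, y(0) = 1; exact solution y(T) = e^{-T}
yN = midpoint_solve(lambda y: -y, 1.0, 1.0, 100)
err = abs(yN - np.exp(-1.0))
```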

## Problem 5b

 Define the truncation error ${\displaystyle T_{n}\!\,}$. Assuming that ${\displaystyle T_{n}=O(h^{3})\!\,}$, show an estimate for the error ${\displaystyle |y(t_{N})-y_{N}|\!\,}$. What is the order of the method? (Hint: use that ${\displaystyle f\!\,}$ is Lipschitz continuous.)

## Solution 5b

The truncation error ${\displaystyle T_{n}\!\,}$ is the residual obtained when the exact solution is inserted into the scheme:

${\displaystyle T_{n}=y(t_{n+1})-y(t_{n})-hf\left({\frac {y(t_{n+1})+y(t_{n})}{2}}\right)\!\,}$

Let ${\displaystyle e_{n}=y(t_{n})-y_{n}\!\,}$. Subtracting the scheme from this relation,

${\displaystyle {\begin{aligned}|e_{n+1}|&=|y(t_{n+1})-y_{n+1}|\\&\leq |y(t_{n})-y_{n}|+h\underbrace {\left|f\left({\frac {y(t_{n+1})+y(t_{n})}{2}}\right)-f\left({\frac {y_{n+1}+y_{n}}{2}}\right)\right|} _{\leq {\frac {\gamma }{2}}|y(t_{n+1})+y(t_{n})-y_{n+1}-y_{n}|}+{\mathcal {O}}(h^{3})\\&\leq |e_{n}|+{\frac {h\gamma }{2}}\left(|e_{n+1}|+|e_{n}|\right)+{\mathcal {O}}(h^{3})\end{aligned}}\!\,}$

where ${\displaystyle \gamma \!\,}$ is the Lipschitz constant of ${\displaystyle f\!\,}$. Rearranging terms (and assuming ${\displaystyle h<2/\gamma \!\,}$ so that ${\displaystyle 1-{\tfrac {h\gamma }{2}}>0\!\,}$) we get

${\displaystyle |e_{n+1}|\leq \underbrace {\frac {1+{\frac {h\gamma }{2}}}{1-{\frac {h\gamma }{2}}}} _{C}|e_{n}|+{\mathcal {O}}(h^{3})\!\,}$

In particular, ${\displaystyle |e_{1}|\leq C\underbrace {|e_{0}|} _{0}+{\mathcal {O}}(h^{3})\!\,}$

Iterating this bound ${\displaystyle N\!\,}$ times gives

${\displaystyle {\begin{aligned}|e_{N}|&\leq \left(C^{N-1}+C^{N-2}+\cdots +C+1\right){\mathcal {O}}(h^{3})\\&={\frac {C^{N}-1}{C-1}}{\mathcal {O}}(h^{3})\\&\leq K{\mathcal {O}}(h^{2})\end{aligned}}\!\,}$

since ${\displaystyle C^{N}\leq e^{\gamma T/(1-h\gamma /2)}\!\,}$ is bounded and ${\displaystyle C-1={\frac {h\gamma }{1-{\frac {h\gamma }{2}}}}={\mathcal {O}}(h)\!\,}$. Hence ${\displaystyle |y(t_{N})-y_{N}|={\mathcal {O}}(h^{2})\!\,}$ and the method is second order.
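The second-order convergence can be observed numerically. For the linear test equation ${\displaystyle y'=\lambda y\!\,}$ (an illustrative choice, not from the exam) the mid-point equation is linear in ${\displaystyle y_{n+1}\!\,}$ and solves in closed form, which keeps the check self-contained:

```python
import numpy as np

# For f(y) = lam*y the implicit mid-point equation is linear in y_{n+1}:
#   y_{n+1} = y_n * (1 + h*lam/2) / (1 - h*lam/2)
def midpoint_linear(lam, y0, T, N):
    h = T / N
    r = (1 + h * lam / 2) / (1 - h * lam / 2)
    return y0 * r**N

# errors at T = 1 for y' = y, y0 = 1 (exact value e); halving h should
# divide the error by ~4, i.e. observed order ~ 2
errs = [abs(midpoint_linear(1.0, 1.0, 1.0, N) - np.e) for N in (50, 100, 200)]
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
```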

## Problem 5c

 Prove that ${\displaystyle T_{n}=O(h^{3})\!\,}$ for this method. (Hint: expand first ${\displaystyle y(t_{n+1})\!\,}$ and ${\displaystyle y(t_{n})\!\,}$ around ${\displaystyle \tau _{n}=t_{n}+{\frac {h}{2}}\!\,}$ and next expand ${\displaystyle f({\frac {1}{2}}(y(t_{n+1})+y(t_{n}))).\!\,}$ Also expand ${\displaystyle y'(t)\!\,}$ around ${\displaystyle \tau _{n}\!\,}$.)

## Solution 5c

Since ${\displaystyle {\tfrac {1}{2}}(y(t_{n+1})+y(t_{n}))=y(\tau _{n})+{\mathcal {O}}(h^{2})\!\,}$ and ${\displaystyle f\in C^{2}\!\,}$, we have ${\displaystyle hf({\tfrac {1}{2}}(y(t_{n+1})+y(t_{n})))={\tfrac {h}{2}}y'(t_{n+1})+{\tfrac {h}{2}}y'(t_{n})+{\mathcal {O}}(h^{3})\!\,}$, so up to ${\displaystyle {\mathcal {O}}(h^{3})\!\,}$ terms the truncation error satisfies

${\displaystyle y(t_{n+1})=y(t_{n})+{\frac {h}{2}}y'(t_{n+1})+{\frac {h}{2}}y'(t_{n})+T_{n}\!\,}$

which implies

${\displaystyle y(t_{n+1})-y(t_{n})-{\frac {h}{2}}y'(t_{n+1})-{\frac {h}{2}}y'(t_{n})=T_{n}\!\,}$

Expanding each term around ${\displaystyle \tau _{n}=t_{n}+{\frac {h}{2}}\!\,}$ yields ${\displaystyle T_{n}\!\,}$.

${\displaystyle {\begin{array}{|c|c|c|c|c|c|}{\mbox{Order }}h&y(t_{n+1})&-y(t_{n})&-{\frac {h}{2}}y'(t_{n+1})&-{\frac {h}{2}}y'(t_{n})&\Sigma \\\hline &&&&&\\0&y(t_{n}+{\frac {h}{2}})&-y(t_{n}+{\frac {h}{2}})&---&---&0\\&&&&&\\1&y'(t_{n}+{\frac {h}{2}}){\frac {h}{2}}&-y'(t_{n}+{\frac {h}{2}})(-{\frac {h}{2}})&-{\frac {h}{2}}y'(t_{n}+{\frac {h}{2}})&-{\frac {h}{2}}y'(t_{n}+{\frac {h}{2}})&0\\&&&&&\\2&{\frac {y''(t_{n}+{\frac {h}{2}})}{2}}{\frac {h^{2}}{4}}&-{\frac {y''(t_{n}+{\frac {h}{2}})}{2}}{\frac {h^{2}}{4}}&-{\frac {h}{2}}y''(t_{n}+{\frac {h}{2}}){\frac {h}{2}}&-{\frac {h}{2}}y''(t_{n}+{\frac {h}{2}})(-{\frac {h}{2}})&0\\&&&&&\\3&O(h^{3})&O(h^{3})&O(h^{3})&O(h^{3})&O(h^{3})\\\hline \end{array}}}$

All contributions through order ${\displaystyle h^{2}\!\,}$ cancel, so ${\displaystyle T_{n}=O(h^{3})\!\,}$.
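The cancellation can be confirmed numerically: inserting the exact solution of a concrete problem into the scheme, the residual should shrink by a factor of ${\displaystyle 2^{3}=8\!\,}$ when ${\displaystyle h\!\,}$ is halved. The test problem ${\displaystyle y'=y\!\,}$ is an illustrative choice:

```python
import numpy as np

# Local truncation error of the mid-point method on y' = y, whose exact
# solution is y(t) = e^t (so f(y) = y):
#   T_n(h) = y(t+h) - y(t) - h*f((y(t+h) + y(t))/2)
def T_n(h, t=0.0):
    return np.exp(t + h) - np.exp(t) - h * 0.5 * (np.exp(t + h) + np.exp(t))

ratio = T_n(0.02) / T_n(0.01)   # ~ 2^3 = 8 for an O(h^3) local error
```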

## Problem 6

 Consider the following two-point boundary value problem in ${\displaystyle (0,1)\!\,}$ with ${\displaystyle \epsilon \ll 1\leq b:\!\,}$ ${\displaystyle -\epsilon u_{xx}+bu_{x}=1,\quad u(0)=u(1)=0\!\,}$

## Problem 6a

 Write the finite element method with piecewise linear elements over a uniform partition of ${\displaystyle (0,1)\!\,}$ with meshsize ${\displaystyle h\!\,}$. If ${\displaystyle U=\{U_{i}\}_{i=0}^{I+1}\!\,}$ is the vector of nodal values of the finite element solution, find the (stiffness) matrix ${\displaystyle A\!\,}$ and right-hand side ${\displaystyle F\!\,}$ such that ${\displaystyle AU=F\!\,}$. Is ${\displaystyle A\!\,}$ symmetric? Is ${\displaystyle A\!\,}$ positive definite?

## Solution 6a

### Weak Variational Formulation

Multiplying the given equation by a test function ${\displaystyle v\in H_{0}^{1}[0,1]\!\,}$ and integrating from 0 to 1 gives the equivalent weak variational formulation:

Find ${\displaystyle u\in H_{0}^{1}[0,1]\!\,}$ such that for all ${\displaystyle v\in H_{0}^{1}[0,1]\!\,}$ the following holds

${\displaystyle \underbrace {\epsilon \int _{0}^{1}u'v'dx+b\int _{0}^{1}u'vdx} _{a(u,v)}=\underbrace {\int _{0}^{1}vdx} _{f(v)}\!\,}$

### Discrete Variational Formulation

Let ${\displaystyle x_{i}=ih\!\,}$, ${\displaystyle i=0,1,\ldots ,n+1\!\,}$, with ${\displaystyle h={\frac {1}{n+1}}\!\,}$, and let ${\displaystyle V_{h}=\{v_{h}\in H_{0}^{1}[0,1]:\left.v_{h}\right|_{[x_{i-1},x_{i}]}{\mbox{ linear for }}i=1,2,\ldots ,n+1\}\!\,}$

Then we have the discrete variational formulation which is an approximation to the weak variational formulation.

Find ${\displaystyle u_{h}\in V_{h}\!\,}$ such that for all ${\displaystyle v_{h}\in V_{h}\!\,}$

${\displaystyle a(u_{h},v_{h})=f(v_{h})\!\,}$

### Define basis for V_h

Let ${\displaystyle \{\phi _{i}\}_{i=1}^{n}\!\,}$ be the piecewise linear "hat" functions, which form a basis for ${\displaystyle V_{h}\!\,}$.

${\displaystyle \phi _{i}(x)={\begin{cases}{\frac {x-x_{i-1}}{h}}&{\mbox{ if }}x\in [x_{i-1},x_{i}]\\{\frac {x_{i+1}-x}{h}}&{\mbox{ if }}x\in [x_{i},x_{i+1}]\\0&{\mbox{ otherwise }}\end{cases}}\!\,}$

Then calculation yields the following: (draw pictures)

${\displaystyle \int _{0}^{1}\phi _{i}(x)dx=h\!\,}$

${\displaystyle \phi _{i}'(x)={\begin{cases}{\frac {1}{h}}&{\mbox{ if }}x\in [x_{i-1},x_{i}]\\-{\frac {1}{h}}&{\mbox{ if }}x\in [x_{i},x_{i+1}]\\0&{\mbox{ otherwise }}\end{cases}}\!\,}$

${\displaystyle \int _{0}^{1}\phi _{i}'(x)\phi _{j}'(x)dx={\begin{cases}{\frac {2}{h}}&{\mbox{ if }}i=j\\-{\frac {1}{h}}&{\mbox{ if }}|i-j|=1\\0&{\mbox{ otherwise }}\end{cases}}\!\,}$

${\displaystyle \int _{0}^{1}\phi _{i}(x)\phi _{j}'(x)dx={\begin{cases}0&{\mbox{ if }}i=j\\-{\frac {1}{2}}&{\mbox{ if }}i-j=1\\{\frac {1}{2}}&{\mbox{ if }}i-j=-1\\0&{\mbox{ otherwise }}\end{cases}}\!\,}$

### Discrete Variational Formulation in Matrix Form

Since ${\displaystyle \{\phi _{i}\}_{i=1}^{n}\!\,}$ is a basis for ${\displaystyle V_{h}\!\,}$,

${\displaystyle u_{h}=\sum _{i=1}^{n}u_{hi}\phi _{i}\!\,}$

Also, the discrete variational formulation may be expressed as

${\displaystyle \epsilon \sum _{i=1}^{n}u_{hi}\int _{0}^{1}\phi _{i}'(x)\phi _{j}'(x)dx+b\sum _{i=1}^{n}u_{hi}\int _{0}^{1}\phi _{i}'(x)\phi _{j}(x)dx=\int _{0}^{1}\phi _{j}(x)dx{\mbox{ for }}j=1,2,\ldots ,n\!\,}$

which in matrix form is

${\displaystyle \underbrace {\begin{bmatrix}2{\frac {\epsilon }{h}}&-{\frac {\epsilon }{h}}+{\frac {b}{2}}&0&\cdots &\cdots &0\\-{\frac {\epsilon }{h}}-{\frac {b}{2}}&2{\frac {\epsilon }{h}}&-{\frac {\epsilon }{h}}+{\frac {b}{2}}&0&\cdots &0\\0&\ddots &\ddots &\ddots &\cdots &0\\0&\cdots &\cdots &0&-{\frac {\epsilon }{h}}-{\frac {b}{2}}&2{\frac {\epsilon }{h}}\end{bmatrix}} _{A}{\begin{bmatrix}u_{1}\\u_{2}\\\vdots \\u_{n}\end{bmatrix}}={\begin{bmatrix}h\\h\\\vdots \\h\end{bmatrix}}\!\,}$

${\displaystyle A\!\,}$ is not symmetric: the convective terms give ${\displaystyle a_{j,j+1}=-{\frac {\epsilon }{h}}+{\frac {b}{2}}\neq a_{j+1,j}=-{\frac {\epsilon }{h}}-{\frac {b}{2}}\!\,}$. Nevertheless ${\displaystyle A\!\,}$ is positive definite: the ${\displaystyle b\!\,}$-part of ${\displaystyle A\!\,}$ is skew-symmetric and contributes nothing to the quadratic form, so ${\displaystyle U^{T}AU=\epsilon U^{T}TU>0\!\,}$ for all ${\displaystyle U\neq 0\!\,}$, where ${\displaystyle T={\frac {1}{h}}{\mbox{tridiag}}(-1,2,-1)\!\,}$ is symmetric positive definite. No condition on ${\displaystyle h,\epsilon ,b\!\,}$ is needed.
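A sketch of the assembly and solve, checked against the closed-form solution of the BVP (the function names and the parameter values ${\displaystyle \epsilon =0.1\!\,}$, ${\displaystyle b=1\!\,}$ are illustrative; ${\displaystyle \epsilon \!\,}$ is taken large enough that the mesh resolves the boundary layer):

```python
import numpy as np

def assemble_and_solve(eps, b, n):
    """Assemble the n x n tridiagonal stiffness matrix from the text,
    tridiag(-eps/h - b/2, 2*eps/h, -eps/h + b/2), and solve A U = F."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * eps / h
        if i + 1 < n:
            A[i, i + 1] = -eps / h + b / 2
            A[i + 1, i] = -eps / h - b / 2
    F = h * np.ones(n)
    return np.linalg.solve(A, F)

def exact(x, eps, b):
    """Exact solution of -eps u'' + b u' = 1, u(0) = u(1) = 0,
    with a boundary layer at x = 1."""
    return x / b - np.expm1(b * x / eps) / (b * np.expm1(b / eps))

eps, b, n = 0.1, 1.0, 99            # h = 0.01, so eps/h = 10 > b/2
U = assemble_and_solve(eps, b, n)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior nodes
err = np.max(np.abs(U - exact(x, eps, b)))
```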

## Problem 6b

 Find a relation between the three parameters ${\displaystyle h,\epsilon ,b\!\,}$ for ${\displaystyle A\!\,}$ to be an ${\displaystyle M\!\,}$-matrix, i.e. to have ${\displaystyle a_{ii}>0,a_{ij}\leq 0\!\,}$ if ${\displaystyle i\neq j\!\,}$ and ${\displaystyle a_{ii}\geq -\sum _{j\neq i}a_{ij}\!\,}$

## Solution 6b

The diagonal entries ${\displaystyle 2{\frac {\epsilon }{h}}\!\,}$ are positive, and every row satisfies ${\displaystyle a_{ii}\geq -\sum _{j\neq i}a_{ij}\!\,}$ (the interior rows with equality). The binding condition is that the off-diagonal entries ${\displaystyle -{\frac {\epsilon }{h}}+{\frac {b}{2}}\!\,}$ and ${\displaystyle -{\frac {\epsilon }{h}}-{\frac {b}{2}}\!\,}$ be nonpositive, which holds precisely when

${\displaystyle {\frac {\epsilon }{h}}\geq {\frac {b}{2}}\!\,}$

## Problem 6c

 Consider the upwind modification of the ODE ${\displaystyle -(\epsilon +{\frac {b}{2}}h)u_{xx}+bu_{x}=1\!\,}$ Show that the resulting matrix ${\displaystyle A\!\,}$ is an M-matrix without restrictions on ${\displaystyle h,\epsilon \!\,}$ and ${\displaystyle b\!\,}$.

## Solution 6c

Substituting ${\displaystyle \epsilon +{\frac {b}{2}}h\!\,}$ for ${\displaystyle \epsilon \!\,}$ yields the following matrix ${\displaystyle A\!\,}$:

${\displaystyle {\begin{bmatrix}2{\frac {\epsilon }{h}}+b&-{\frac {\epsilon }{h}}&0&\cdots &\cdots &0\\-{\frac {\epsilon }{h}}-b&2{\frac {\epsilon }{h}}+b&-{\frac {\epsilon }{h}}&0&\cdots &0\\0&\ddots &\ddots &\ddots &\cdots &0\\0&\cdots &\cdots &0&-{\frac {\epsilon }{h}}-b&2{\frac {\epsilon }{h}}+b\end{bmatrix}}\!\,}$

All off-diagonal entries are ${\displaystyle \leq 0\!\,}$ and all diagonal entries are ${\displaystyle >0\!\,}$, with no restriction on ${\displaystyle h,\epsilon ,b\!\,}$.

The first row meets the last condition since

${\displaystyle 2{\frac {\epsilon }{h}}+b\geq {\frac {\epsilon }{h}}\!\,}$

Rows ${\displaystyle 2\!\,}$ through ${\displaystyle n-1\!\,}$ meet the last condition (with equality) since

${\displaystyle 2{\frac {\epsilon }{h}}+b\geq {\frac {\epsilon }{h}}+b+{\frac {\epsilon }{h}}\!\,}$

The last row meets the last condition since

${\displaystyle 2{\frac {\epsilon }{h}}+b\geq {\frac {\epsilon }{h}}+b\!\,}$
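A quick numerical check of this claim (the helper names and the parameter values are illustrative): the upwinded matrix passes the three M-matrix conditions even when ${\displaystyle {\frac {\epsilon }{h}}\!\,}$ is far below ${\displaystyle {\frac {b}{2}}\!\,}$.

```python
import numpy as np

def upwind_matrix(eps, b, n):
    """Matrix of part (c): substituting eps + b*h/2 for eps gives
    tridiag(-eps/h - b, 2*eps/h + b, -eps/h)."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * eps / h + b
        if i + 1 < n:
            A[i, i + 1] = -eps / h
            A[i + 1, i] = -eps / h - b
    return A

def is_M_matrix(A, tol=1e-12):
    """Check a_ii > 0, a_ij <= 0 for i != j, and a_ii >= -sum_{j!=i} a_ij,
    with a small tolerance for floating-point rounding."""
    d = np.diag(A)
    off = A - np.diag(d)
    return bool((d > 0).all() and (off <= tol).all()
                and (d >= -off.sum(axis=1) - tol).all())

# eps/h = 0.125 < b/2 = 0.5, yet the upwinded matrix is still an M-matrix
ok = is_M_matrix(upwind_matrix(1.0 / 1024, 1.0, 127))
```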