Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2007

Problem 1


A set of functions $\{f_1, f_2, \ldots, f_n\}$ defined on $[a,b]$ is a Chebyshev system if


(i) The set is linearly independent.


(ii) If $f$ is a linear combination of $f_1, \ldots, f_n$ which is not identically zero, then $f$ has at most $n-1$ distinct zeros in $[a,b]$.


Problem 1a


Show that $\{f_1, \ldots, f_n\}$ is a Chebyshev system if and only if for any distinct points $x_1, x_2, \ldots, x_n \in [a,b]$ the matrix $A$ with entries $a_{ij} = f_j(x_i)$ is non-singular.
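
As a quick numerical illustration of this criterion (not part of the exam solution), the sketch below builds the matrix $a_{ij} = f_j(x_i)$ for the monomial system $\{1, x, x^2\}$, a classical Chebyshev system, at an arbitrary choice of distinct points and confirms that it is non-singular. The functions, interval, and points are illustrative choices only.

```python
import numpy as np

# Monomial basis {1, x, x^2}: a classical Chebyshev system on any interval.
fs = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

# Arbitrary distinct points in [0, 1] (illustrative choice).
pts = np.array([0.1, 0.5, 0.9])

# Matrix A with entries a_ij = f_j(x_i).
A = np.column_stack([f(pts) for f in fs])

print(np.linalg.det(A))  # nonzero, so A is non-singular
```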

Solution 1a


Forward Direction


We want to prove the following statement:


If $\{f_1, \ldots, f_n\}$ is a Chebyshev system, then for any distinct points $x_1, \ldots, x_n \in [a,b]$ the matrix $A$ with $a_{ij} = f_j(x_i)$ is non-singular.


Writing out the matrix yields:

$$A = \begin{pmatrix} f_1(x_1) & f_2(x_1) & \cdots & f_n(x_1) \\ f_1(x_2) & f_2(x_2) & \cdots & f_n(x_2) \\ \vdots & \vdots & & \vdots \\ f_1(x_n) & f_2(x_n) & \cdots & f_n(x_n) \end{pmatrix}.$$

Suppose, for the sake of contradiction, that $A$ is singular. Then there exists a non-zero vector $c = (c_1, \ldots, c_n)^T$ with $Ac = 0$, that is, $\sum_{j=1}^{n} c_j f_j(x_i) = 0$ for $i = 1, \ldots, n$. Since the set is linearly independent and the $c_j$ are not all zero, the function $f = \sum_{j=1}^{n} c_j f_j$ is not identically zero, yet it has the $n$ distinct zeros $x_1, \ldots, x_n$, contradicting property (ii). Hence $A$ is non-singular.

Reverse Direction


Proof of (i)


Assume that for every choice of distinct points $x_1, \ldots, x_n \in [a,b]$ the matrix $A$ with $a_{ij} = f_j(x_i)$ is non-singular.


This implies the columns of

$$A = \begin{pmatrix} f_1(x_1) & f_2(x_1) & \cdots & f_n(x_1) \\ f_1(x_2) & f_2(x_2) & \cdots & f_n(x_2) \\ \vdots & \vdots & & \vdots \\ f_1(x_n) & f_2(x_n) & \cdots & f_n(x_n) \end{pmatrix}$$

are linearly independent. If some non-trivial combination satisfied $\sum_{j=1}^{n} c_j f_j \equiv 0$ on $[a,b]$, then in particular $\sum_{j=1}^{n} c_j f_j(x_i) = 0$ for $i = 1, \ldots, n$, making the columns of $A$ linearly dependent. Since $A$ is non-singular for any choice of $x_1, \ldots, x_n$, no such combination exists, so $\{f_1, \ldots, f_n\}$ is a linearly independent set and we have shown (i).

Proof of (ii)


By hypothesis, $f$ is a linear combination of $f_1, \ldots, f_n$ which is not identically zero, i.e.

$$f(x) = \sum_{j=1}^{n} c_j f_j(x)$$

for $c_1, \ldots, c_n$ not all zero.


Assume for the sake of contradiction that $f$ has $n$ distinct zeros at $x_1, x_2, \ldots, x_n \in [a,b]$.


This implies the following equations:

$$\sum_{j=1}^{n} c_j f_j(x_i) = 0, \qquad i = 1, 2, \ldots, n.$$

Rewriting the equations in matrix form yields

$$\begin{pmatrix} f_1(x_1) & f_2(x_1) & \cdots & f_n(x_1) \\ \vdots & \vdots & & \vdots \\ f_1(x_n) & f_2(x_n) & \cdots & f_n(x_n) \end{pmatrix} \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad \text{i.e. } Ac = 0.$$

Since $c_1, \ldots, c_n$ are not all zero, this implies that the columns of $A$, $a_{ij} = f_j(x_i)$, are linearly dependent, contradicting the non-singularity of $A$.


Hence, $f$ has at most $n-1$ distinct zeros, and we have shown (ii).
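
To see the equivalence from the other side (again an illustration, not part of the solution), take a set that fails property (ii): every linear combination of $\{x, x^2\}$ vanishes at $x = 0$, so $x - x^2$ has the two distinct zeros $0$ and $1$ in $[-1,1]$. Choosing $0$ as one of the distinct points then produces a singular matrix, exactly as the equivalence predicts. The set and points are illustrative choices.

```python
import numpy as np

# {x, x^2} is NOT a Chebyshev system on [-1, 1]: x - x^2 vanishes at the two
# distinct points 0 and 1, more than property (ii) allows for n = 2.
fs = [lambda x: x, lambda x: x**2]

# Including x = 0 among the distinct points makes an entire row of A vanish.
pts = np.array([0.0, 0.7])
A = np.column_stack([f(pts) for f in fs])

print(np.linalg.det(A))  # 0.0: singular, consistent with part (a)
```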

Problem 1b


Let $g \in C^{n+1}[a,b]$ be such that $g^{(n+1)}(x) \neq 0$ for all $x \in [a,b]$. Let $f_i(x) = x^{i-1}$, $i = 1, \ldots, n+1$, and $f_{n+2}(x) = g(x)$. Show that $\{f_1, \ldots, f_{n+2}\} = \{1, x, \ldots, x^n, g\}$ is a Chebyshev system. For this, you may use results from polynomial interpolation without proof.

Solution 1b


We have to prove:

(i) $\{1, x, \ldots, x^n, g\}$ is a linearly independent set.


(ii) Any linear combination of this set which is not identically zero has at most $n+1$ distinct zeros in $[a,b]$.

Proof of (i)


If we evaluate $1, x, \ldots, x^n$ at $n+1$ distinct points $t_1, \ldots, t_{n+1}$, we obtain the following Vandermonde matrix, whose determinant is non-zero:

$$V = \begin{pmatrix} 1 & t_1 & t_1^2 & \cdots & t_1^n \\ 1 & t_2 & t_2^2 & \cdots & t_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & t_{n+1} & t_{n+1}^2 & \cdots & t_{n+1}^n \end{pmatrix}, \qquad \det V = \prod_{1 \le i < j \le n+1} (t_j - t_i) \neq 0.$$

Hence, $1, x, \ldots, x^n$ are linearly independent.


Assume for the sake of contradiction that $g$ is a linear combination of $1, x, \ldots, x^n$, that is,

$$g(x) = a_0 + a_1 x + \cdots + a_n x^n.$$

Hence, $g$ is a polynomial of degree at most $n$. Taking $n+1$ derivatives of $g$ yields

$$g^{(n+1)}(x) \equiv 0,$$

which contradicts the hypothesis $g^{(n+1)}(x) \neq 0$. Therefore $\{1, x, \ldots, x^n, g\}$ is a linearly independent set.

Proof of (ii)


Let $f(x) = c_0 + c_1 x + \cdots + c_n x^n + c_{n+1} g(x)$ be a linear combination that is not identically zero.

Suppose $f$ has $n+2$ (or more) distinct zeros in $[a,b]$. By Rolle's Theorem applied repeatedly, the $(n+1)$st derivative of $f$, which is $f^{(n+1)} = c_{n+1} g^{(n+1)}$, vanishes at some point of the interval. Since $g^{(n+1)}$ never vanishes, this forces $c_{n+1} = 0$, so $f$ is a polynomial of degree at most $n$ with at least $n+2$ distinct zeros, hence identically zero, a contradiction.


(i) and (ii) show that $\{1, x, \ldots, x^n, g\}$ is a Chebyshev system.
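
Since every derivative of $e^x$ is $e^x \neq 0$, the choice $g(x) = e^x$ satisfies the hypothesis, so $\{1, x, \ldots, x^n, e^x\}$ should pass the matrix test of part (a). A minimal numerical check is sketched below; the degree, interval, and points are illustrative choices.

```python
import numpy as np

n = 3                                   # illustrative degree
pts = np.linspace(0.0, 1.0, n + 2)      # n + 2 distinct points in [0, 1]

# Columns f_j(x_i): the monomials 1, x, ..., x^n followed by exp(x).
cols = [pts**k for k in range(n + 1)] + [np.exp(pts)]
A = np.column_stack(cols)

# Non-singularity at distinct points is the criterion from part (a).
print(np.linalg.det(A))  # nonzero
```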

Problem 2


Let

$$Q_n(f) = \sum_{i=1}^{n} w_i^{(n)} f\!\left(x_i^{(n)}\right), \qquad x_i^{(n)} \in [a,b], \qquad (0)$$

be a sequence of integration rules.

Problem 2a


Suppose

$$Q_n(p) \to \int_a^b p(x)\,dx \quad \text{as } n \to \infty \text{ for every polynomial } p, \qquad (1)$$

and

$$\sum_{i=1}^{n} \left|w_i^{(n)}\right| \le C \quad \text{for all } n, \qquad (2)$$

for some constant $C$. Show that

$$Q_n(f) \to \int_a^b f(x)\,dx \quad \text{as } n \to \infty$$

for all $f \in C[a,b]$.

Solution 2a


By the Weierstrass Approximation Theorem, given $\varepsilon > 0$, there exists a polynomial $p$ such that

$$\max_{x \in [a,b]} |f(x) - p(x)| \le \varepsilon. \qquad (3)$$

Let $I(f) = \int_a^b f(x)\,dx$.

Adding and subtracting $I(p)$ and $Q_n(p)$ yields

$$I(f) - Q_n(f) = \big(I(f) - I(p)\big) + \big(I(p) - Q_n(p)\big) + \big(Q_n(p) - Q_n(f)\big).$$

By the triangle inequality and equations (2) and (3),

$$|I(f) - Q_n(f)| \le |I(f) - I(p)| + |I(p) - Q_n(p)| + |Q_n(p) - Q_n(f)| \le (b-a)\varepsilon + |I(p) - Q_n(p)| + C\varepsilon.$$

By equation (1), when $n \to \infty$,

$$|I(p) - Q_n(p)| \to 0.$$

Hence, for arbitrarily small $\varepsilon$, as $n \to \infty$,

$$\limsup_{n \to \infty} |I(f) - Q_n(f)| \le (b - a + C)\,\varepsilon,$$

i.e. $Q_n(f) \to I(f)$ for every $f \in C[a,b]$.
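
As an illustration of the theorem (not required by the exam), Gauss–Legendre rules on $[-1,1]$ satisfy both hypotheses: they integrate every polynomial exactly once $2n-1$ reaches its degree, and their weights are positive and sum to $2$. The sketch below shows $Q_n(f) \to \int_{-1}^{1} f$ for the continuous, non-polynomial function $f(x) = |x|$.

```python
import numpy as np

f = lambda x: np.abs(x)          # continuous on [-1, 1] but not a polynomial
exact = 1.0                      # integral of |x| over [-1, 1]

for n in (2, 4, 8, 16, 32):
    nodes, weights = np.polynomial.legendre.leggauss(n)  # rule on [-1, 1]
    Qn = np.dot(weights, f(nodes))
    print(n, abs(Qn - exact))    # error shrinks as n grows
```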

Problem 2b


Show that if all $w_i^{(n)} \ge 0$, then (1) implies (2).

Solution 2b


For the constant polynomial $p \equiv 1$, equation (1) yields

$$Q_n(1) \to \int_a^b 1\,dx = b - a.$$

Letting $f \equiv 1$ in equation (0) yields

$$Q_n(1) = \sum_{i=1}^{n} w_i^{(n)}.$$

Combining the two results above yields

$$\sum_{i=1}^{n} w_i^{(n)} \to b - a.$$

Since this sequence converges to the finite limit $b - a$, it is bounded: there exists some number $C$ such that $\sum_{i=1}^{n} w_i^{(n)} \le C$ for all $n$.


Since $w_i^{(n)} \ge 0$,

$$\sum_{i=1}^{n} \left|w_i^{(n)}\right| = \sum_{i=1}^{n} w_i^{(n)} \le C,$$

i.e. equation (2) holds.
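
A quick numerical check of this argument (illustrative only, using Gauss–Legendre rules on $[-1,1]$): the weights are positive, and applying the rule to $f \equiv 1$ shows that they sum to $b - a = 2$, so a bound $C$ certainly exists.

```python
import numpy as np

for n in (2, 4, 8, 16):
    _, weights = np.polynomial.legendre.leggauss(n)
    # Positive weights, and sum(w) = Q_n(1), which equals b - a = 2 here.
    print(n, bool(np.all(weights > 0)), weights.sum())
```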

Problem 3


Consider the real system of linear equations

$$Ax = b, \qquad (1)$$

where $A$ is non-singular and satisfies

$$(x, Ax) > 0$$

for all real $x \neq 0$, where the Euclidean inner product $(x, y) = x^T y$ is used here.

Problem 3a


Show that $(x, Ax) = (x, A_s x)$ for all real $x$, where $A_s = \tfrac{1}{2}(A + A^T)$ is the symmetric part of $A$.

Solution 3a


Substituting $A = \tfrac{1}{2}(A + A^T) + \tfrac{1}{2}(A - A^T)$ for $A$ in $(x, Ax)$ and expanding the inner product, we have

$$(x, Ax) = (x, A_s x) + \tfrac{1}{2}(x, Ax) - \tfrac{1}{2}\left(x, A^T x\right).$$

From properties of the inner product we have

$$\left(x, A^T x\right) = (Ax, x) = (x, Ax).$$

Hence, the last two terms cancel and

$$(x, Ax) = (x, A_s x).$$
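
A minimal numerical check of this identity (illustrative, with a randomly generated matrix and vector): $(x, Ax)$ equals $(x, A_s x)$ because the skew-symmetric part of $A$ contributes nothing to the quadratic form.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
x = rng.standard_normal(5)

A_s = 0.5 * (A + A.T)                            # symmetric part of A
print(bool(np.isclose(x @ A @ x, x @ A_s @ x)))  # True: skew part drops out
```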


Problem 3b


Prove that

$$\frac{(x, Ax)}{(x, x)} \ge \lambda_{\min} > 0 \quad \text{for all real } x \neq 0,$$

where $\lambda_{\min}$ is the minimum eigenvalue of $A_s$.

Solution 3b


First Inequality


Since $A_s$ is symmetric, it has an eigenvalue decomposition

$$A_s = Q \Lambda Q^T,$$

where $Q$ is orthogonal and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is a diagonal matrix containing all the eigenvalues of $A_s$.

Substitution, using part (a), yields

$$\frac{(x, Ax)}{(x, x)} = \frac{(x, A_s x)}{(x, x)} = \frac{x^T Q \Lambda Q^T x}{x^T x}.$$

Let

$$y = Q^T x.$$

This implies the following three relationships:

$$x = Qy, \qquad x^T = y^T Q^T, \qquad x^T x = y^T Q^T Q y = y^T y.$$

Substituting,

$$\frac{(x, Ax)}{(x, x)} = \frac{y^T \Lambda y}{y^T y}.$$

Expanding the numerator we have

$$y^T \Lambda y = \sum_{i=1}^{n} \lambda_i y_i^2 \ge \lambda_{\min} \sum_{i=1}^{n} y_i^2.$$

Expanding the denominator yields

$$y^T y = \sum_{i=1}^{n} y_i^2.$$

Substituting,

$$\frac{(x, Ax)}{(x, x)} \ge \frac{\lambda_{\min} \sum_{i=1}^{n} y_i^2}{\sum_{i=1}^{n} y_i^2} = \lambda_{\min}.$$

Second Inequality


From part (a),

$$(x, A_s x) = (x, Ax) > 0$$

for all real $x \neq 0$.


Therefore $A_s$ is positive definite, which implies all its eigenvalues are positive. In particular,

$$\lambda_{\min} > 0.$$
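
The bound can be checked numerically (an illustrative sketch): shift a random matrix so that its symmetric part is positive definite, then compare the Rayleigh quotient of $A$ against $\lambda_{\min}(A_s)$ for a random vector.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B + (np.linalg.norm(B, 2) + 1.0) * np.eye(6)   # ensures (x, Ax) > 0 for x != 0

A_s = 0.5 * (A + A.T)
lam_min = np.linalg.eigvalsh(A_s).min()  # smallest eigenvalue of the symmetric part

x = rng.standard_normal(6)
ratio = (x @ A @ x) / (x @ x)
print(bool(ratio >= lam_min), ratio, lam_min)  # True: Rayleigh quotient >= lam_min
```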


Problem 3c


Now consider the iteration for computing a series of approximate solutions to (1),

$$x_{k+1} = x_k + \alpha_k r_k,$$

where $r_k = b - A x_k$ and $\alpha_k$ is chosen to minimize $\|r_{k+1}\|_2$ as a function of $\alpha_k$. Prove that

$$\|r_{k+1}\|_2 \le \left(1 - \frac{\lambda_{\min}^2}{\|A\|_2^2}\right)^{1/2} \|r_k\|_2.$$

Solution 3c


First, we want to write $\|r_{k+1}\|_2^2$ as a function of $\alpha_k$, i.e.

$$\|r_{k+1}\|_2^2 = \varphi(\alpha_k).$$

Changing the index in the definition of the residual from $k$ to $k+1$, substituting $x_k + \alpha_k r_k$ for $x_{k+1}$, and applying the definition of the norm yields

$$r_{k+1} = b - A x_{k+1} = b - A x_k - \alpha_k A r_k = r_k - \alpha_k A r_k,$$

$$\|r_{k+1}\|_2^2 = (r_k - \alpha_k A r_k,\, r_k - \alpha_k A r_k).$$

From a property of inner products, since the Euclidean inner product is symmetric,

$$(r_k, A r_k) = (A r_k, r_k).$$

Hence,

$$\|r_{k+1}\|_2^2 = (r_k, r_k) - 2\alpha_k (r_k, A r_k) + \alpha_k^2 (A r_k, A r_k).$$

Next we want to minimize $\|r_{k+1}\|_2^2$ as a function of $\alpha_k$. Taking its derivative with respect to $\alpha_k$ yields

$$\frac{d}{d\alpha_k}\|r_{k+1}\|_2^2 = -2 (r_k, A r_k) + 2\alpha_k (A r_k, A r_k).$$

Setting the derivative to zero and solving for $\alpha_k$ gives

$$\alpha_k = \frac{(r_k, A r_k)}{(A r_k, A r_k)}.$$

Substituting this value of $\alpha_k$ into $\|r_{k+1}\|_2^2$ gives

$$\|r_{k+1}\|_2^2 = (r_k, r_k) - \frac{(r_k, A r_k)^2}{(A r_k, A r_k)}.$$

By definition of the norm,

$$(r_k, r_k) = \|r_k\|_2^2, \qquad (A r_k, A r_k) = \|A r_k\|_2^2.$$

Hence

$$\|r_{k+1}\|_2^2 = \|r_k\|_2^2 - \frac{(r_k, A r_k)^2}{\|A r_k\|_2^2}.$$

Multiplying and dividing the second term on the right-hand side of the above equation by $(r_k, r_k)^2$ and applying the bound $\|A r_k\|_2 \le \|A\|_2 \|r_k\|_2$ yields

$$\|r_{k+1}\|_2^2 = \|r_k\|_2^2 \left[1 - \left(\frac{(r_k, A r_k)}{(r_k, r_k)}\right)^2 \frac{\|r_k\|_2^2}{\|A r_k\|_2^2}\right] \le \|r_k\|_2^2 \left[1 - \left(\frac{(r_k, A r_k)}{(r_k, r_k)}\right)^2 \frac{1}{\|A\|_2^2}\right].$$

From the result of part (b), $\dfrac{(r_k, A r_k)}{(r_k, r_k)} \ge \lambda_{\min} > 0$, and we have our desired result:

$$\|r_{k+1}\|_2^2 \le \|r_k\|_2^2 \left(1 - \frac{\lambda_{\min}^2}{\|A\|_2^2}\right), \qquad \text{i.e.} \qquad \|r_{k+1}\|_2 \le \left(1 - \frac{\lambda_{\min}^2}{\|A\|_2^2}\right)^{1/2} \|r_k\|_2.$$
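
The proof suggests a direct implementation of the residual-minimizing iteration, sketched below with an illustrative random matrix whose symmetric part is made positive definite by a diagonal shift. At each step the sketch verifies the contraction factor just proved.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
B = rng.standard_normal((n, n))
A = B + (np.linalg.norm(B, 2) + 1.0) * np.eye(n)   # symmetric part is positive definite
b = rng.standard_normal(n)

lam_min = np.linalg.eigvalsh(0.5 * (A + A.T)).min()
factor = np.sqrt(1.0 - (lam_min / np.linalg.norm(A, 2)) ** 2)  # bound from part (c)

x = np.zeros(n)
r = b - A @ x
for k in range(50):
    Ar = A @ r
    alpha = (r @ Ar) / (Ar @ Ar)       # the minimizing step length derived above
    x = x + alpha * r
    r_new = r - alpha * Ar             # r_{k+1} = r_k - alpha_k A r_k
    # Each step satisfies the proven bound (up to roundoff).
    assert np.linalg.norm(r_new) <= factor * np.linalg.norm(r) * (1 + 1e-8)
    r = r_new

print(np.linalg.norm(r) / np.linalg.norm(b))   # overall residual reduction after 50 steps
```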