Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2008


Problem 1

Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, with $m \geq n$, and let $\alpha$ be a scalar. Show how the constrained least squares problem

$\min_x \| b - Ax \|_2$ subject to $c^T x = \alpha$, where $c \in \mathbb{R}^n$ is a given nonzero vector,

can be reduced to solving a related unconstrained least squares problem. The algorithm should start by finding a Householder transformation $P$ such that

$Pc = \|c\|_2 e_1$, where $e_1$ is the first standard basis vector,

and setting

$y = Px$.
Solution 1

Since Householder matrices are orthogonal and Hermitian (i.e. symmetric if the matrix is real), we have

$P^T P = P^2 = I$, i.e. $P^{-1} = P$.

Then the constraint can be written

$\alpha = c^T x = c^T P P x = (Pc)^T (Px) = \|c\|_2 \, e_1^T y = \|c\|_2 \, y_1$.

Hence, the first entry of the column vector $y$, $y_1 = \alpha / \|c\|_2$ (a scalar), represents our constraint.


Now we want to show that we can write $\min_x \|b - Ax\|_2$ as a related unconstrained problem. Substituting $Py$ for $x$ in $\|b - Ax\|_2$ we get

$\|b - APy\|_2$.

Let $AP = [\tilde{a}_1 \;\; \tilde{A}_2]$, where $\tilde{a}_1$ is the first column of $AP$ and $\tilde{A}_2$ are the remaining $n-1$ columns.

Since

$y = \begin{bmatrix} y_1 \\ \tilde{y} \end{bmatrix}$, with $y_1 = \alpha / \|c\|_2$ fixed and $\tilde{y} \in \mathbb{R}^{n-1}$ free,

block matrix multiplication yields

$APy = y_1 \tilde{a}_1 + \tilde{A}_2 \tilde{y}$.

So the problem becomes $\min_{\tilde{y}} \|(b - y_1 \tilde{a}_1) - \tilde{A}_2 \tilde{y}\|_2$, which is unconstrained. Once $\tilde{y}$ is found, the solution of the original problem is recovered as $x = Py$.
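
As a numerical illustration of this reduction, the NumPy sketch below (the function name and test data are arbitrary choices, not part of the exam) builds the Householder reflector, fixes the first entry of $y$, and solves the remaining unconstrained least squares problem:

import numpy as np

def constrained_lstsq(A, b, c, alpha):
    # Solve min ||b - A x||_2 subject to c^T x = alpha by the Householder
    # reduction described above (illustrative sketch).
    n = A.shape[1]
    e1 = np.zeros(n)
    e1[0] = 1.0
    v = c - np.linalg.norm(c) * e1           # Householder vector so that P c = ||c||_2 e_1
    if np.linalg.norm(v) < 1e-14 * np.linalg.norm(c):
        P = np.eye(n)                        # c is already a multiple of e_1
    else:
        P = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)
    AP = A @ P
    y1 = alpha / np.linalg.norm(c)           # first entry of y = P x is fixed by the constraint
    a1, A2 = AP[:, 0], AP[:, 1:]
    y_rest, *_ = np.linalg.lstsq(A2, b - y1 * a1, rcond=None)   # unconstrained least squares
    y = np.concatenate(([y1], y_rest))
    return P @ y                             # x = P y

# Quick check on random data: the constraint should hold up to rounding.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
c = rng.standard_normal(4)
alpha = 2.0
x = constrained_lstsq(A, b, c, alpha)
print(c @ x, alpha)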

Problem 2

Prove or disprove that the following interpolants exist for all values of $f_i$ and all distinct values of $x_i$, $i = 0, 1, \ldots, n$.

Problem 2a

$p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$
Solution 2a

Substituting into this equation the pairs $(x_i, f_i)$, $i = 0, 1, \ldots, n$, gives rise to the following matrix equation:

$\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{bmatrix}, \quad \text{i.e. } Aa = f,$

where $A$ is the Vandermonde matrix, which is known to be non-singular when the nodes $x_i$ are distinct. This proves existence and uniqueness of the coefficients $a_0, a_1, \ldots, a_n$.
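
As a small numerical illustration (the nodes and data values below are arbitrary, not from the exam), the Vandermonde system can be formed and solved directly:

import numpy as np

# Distinct nodes x_i and arbitrary data f_i.
x = np.array([0.0, 0.5, 1.0, 2.0])
f = np.array([1.0, -1.0, 3.0, 0.0])

V = np.vander(x, increasing=True)    # columns 1, x, x^2, ..., x^n
a = np.linalg.solve(V, f)            # coefficients a_0, ..., a_n exist and are unique

# Verify the interpolation conditions p(x_i) = f_i.
print(np.allclose(np.polyval(a[::-1], x), f))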

Problem 2b

Solution 2b

Now, substituting the pairs $(x_i, f_i)$ gives:



Consider the unique set of points

The system reduces to


Here, the matrix is clearly singular.

More generally, the determinant of the matrix is given by

if for any

Problem 3

Consider the linear system of equations $Ax = b$, where $A$ is a symmetric positive definite matrix of order $n$. The conjugate gradient method (CG) for solving this system is

Choose $x^{(0)}$, compute $r^{(0)} = b - Ax^{(0)}$
 Set $p^{(0)} = r^{(0)}$
 for $k = 0, 1, 2, \ldots$ until convergence
  $\alpha_k = (r^{(k)}, r^{(k)}) \,/\, (Ap^{(k)}, p^{(k)})$
  $x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}$
  $r^{(k+1)} = r^{(k)} - \alpha_k Ap^{(k)}$
  <Test for Convergence>
  $\beta_k = (r^{(k+1)}, r^{(k+1)}) \,/\, (r^{(k)}, r^{(k)})$
  $p^{(k+1)} = r^{(k+1)} + \beta_k p^{(k)}$
end

where $(v,w)=\sum_{i=1}^n v_i w_i$ is the Euclidean inner product.
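
For reference, a direct NumPy translation of this iteration might look as follows (a sketch; the function name, tolerance, and test problem are illustrative choices):

import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=1000):
    # Conjugate gradient iteration as written above, using the
    # Euclidean inner product (v, w) = sum_i v_i w_i.
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = rr / (Ap @ p)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:     # convergence test on the residual norm
            break
        beta = rr_new / rr
        p = r + beta * p
        rr = rr_new
    return x

# Illustrative symmetric positive definite test problem.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = cg(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))      # residual norm should be tiny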

Let $M$ be some other symmetric[1] positive-definite matrix of order $n$. We know that the forms

$(v,w)_M = (Mv, w)$ and $(v,w)_{M^{-1}} = (M^{-1}v, w)$

are inner products on $\mathbb{R}^n$. In addition, a matrix $B$ is symmetric with respect to the inner product $(\cdot,\cdot)_M$ if $(Bv, w)_M = (v, Bw)_M$ for all $v, w$ in $\mathbb{R}^n$, and $B$ is positive definite with respect to $(\cdot,\cdot)_M$ if $(Bv, v)_M > 0$ for all nonzero $v$.

Problem 3a

Show that $M^{-1}A$ is symmetric and positive-definite with respect to the $M$-inner product $(\cdot,\cdot)_M$.

Solution 3a

For any $v, w \in \mathbb{R}^n$, using the symmetry of $A$ and $M$ with respect to the Euclidean inner product,

$(M^{-1}Av, w)_M = (MM^{-1}Av, w) = (Av, w) = (v, Aw) = (Mv, M^{-1}Aw) = (v, M^{-1}Aw)_M,$

so $M^{-1}A$ is symmetric with respect to the $M$-inner product. Also, for any nonzero $v$,

$(M^{-1}Av, v)_M = (MM^{-1}Av, v) = (Av, v) > 0$

since $A$ is positive definite, so $M^{-1}A$ is positive definite with respect to the $M$-inner product.
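
These two identities are easy to spot-check numerically; the sketch below (illustrative random matrices, not part of the exam solution) verifies the $M$-symmetry and $M$-positive-definiteness of $M^{-1}A$:

import numpy as np

rng = np.random.default_rng(2)
n = 6

def random_spd(n):
    # Random symmetric positive definite matrix (for illustration only).
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

A, M = random_spd(n), random_spd(n)
C = np.linalg.solve(M, A)                    # C = M^{-1} A
v, w = rng.standard_normal(n), rng.standard_normal(n)

m_inner = lambda a, b: (M @ a) @ b           # (a, b)_M = (M a, b)
print(np.isclose(m_inner(C @ v, w), m_inner(v, C @ w)))   # M-symmetry
print(m_inner(C @ v, v) > 0)                               # M-positive definiteness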

Problem 3b

In light of this fact, CG can be used to solve the system $M^{-1}Ax = M^{-1}b$ in an appropriate manner. Specify this algorithm and identify any extra costs required that would not be present with $M = I$.

Solution 3b

Choose $x^{(0)}$, compute $r^{(0)} = b - Ax^{(0)}$
 $z^{(0)}$ :  solve $Mz^{(0)} = r^{(0)}$
 Set $p^{(0)} = z^{(0)}$
 for $k = 0, 1, 2, \ldots$ until convergence
  $\alpha_k = (z^{(k)}, r^{(k)}) \,/\, (Ap^{(k)}, p^{(k)})$
  $x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}$
  $r^{(k+1)} = r^{(k)} - \alpha_k Ap^{(k)}$
  <Test for Convergence>
  $z^{(k+1)}$ :  solve $Mz^{(k+1)} = r^{(k+1)}$
  $\beta_k = (z^{(k+1)}, r^{(k+1)}) \,/\, (z^{(k)}, r^{(k)})$
  $p^{(k+1)} = z^{(k+1)} + \beta_k p^{(k)}$
end


One additional cost is computing $z^{(k+1)}$ at every iteration, i.e. solving a linear system with the matrix $M$.

Another one-time cost is the initial solve $Mz^{(0)} = r^{(0)}$ before the loop.
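
A NumPy sketch of this preconditioned iteration follows (illustrative only; here $M$ is taken to be the Jacobi preconditioner $\mathrm{diag}(A)$, and the solves with $M$ are done with a dense solver for simplicity):

import numpy as np

def pcg(A, b, M, x0, tol=1e-10, maxit=1000):
    # Preconditioned CG as specified above.  In practice M is chosen so that
    # the systems M z = r are cheap to solve.
    x = x0.copy()
    r = b - A @ x
    z = np.linalg.solve(M, r)        # z^(0): solve M z^(0) = r^(0)
    p = z.copy()
    zr = z @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = zr / (Ap @ p)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = np.linalg.solve(M, r)    # extra per-iteration cost: a solve with M
        zr_new = z @ r
        beta = zr_new / zr
        p = z + beta * p
        zr = zr_new
    return x

# Illustrative use with a Jacobi preconditioner M = diag(A).
rng = np.random.default_rng(3)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
M = np.diag(np.diag(A))
x = pcg(A, b, M, np.zeros(50))
print(np.linalg.norm(A @ x - b))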

Problem 3c

Use any facts you know about the conjugate gradient method to identify properties of $M$ that would be desirable for computing the solution efficiently.

Solution 3c

We want $M^{-1}A$ to have eigenvalues whose values are close together (clustered), since this accelerates convergence. Furthermore, if $M^{-1}A$ has only $k$ distinct eigenvalues, the algorithm will terminate in at most $k$ steps.
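
The finite-termination property can be illustrated numerically. In the sketch below (not part of the exam answer; the sizes and eigenvalues are arbitrary) CG is applied to a matrix with exactly three distinct eigenvalues, and the residual drops to roundoff level within three iterations:

import numpy as np

rng = np.random.default_rng(4)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal matrix
eigs = np.repeat([1.0, 2.0, 3.0], n // 3 + 1)[:n]   # exactly 3 distinct eigenvalues
A = Q @ np.diag(eigs) @ Q.T                         # SPD with a clustered spectrum
b = rng.standard_normal(n)

x = np.zeros(n)
r = b.copy()
p = r.copy()
rr = r @ r
for k in range(6):
    Ap = A @ p
    alpha = rr / (Ap @ p)
    x += alpha * p
    r -= alpha * Ap
    rr_new = r @ r
    print(k + 1, np.sqrt(rr_new))                   # residual norm after each step
    if np.sqrt(rr_new) < 1e-12 * np.linalg.norm(b):
        break
    beta = rr_new / rr
    p = r + beta * p
    rr = rr_new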

Notes

  1. Omitted on the actual exam. This made the problem ambiguous and the word symmetric should have been included. (Howard Elman, 12/16/08)