A set of functions $\{\phi_0, \phi_1, \dots, \phi_n\}$ defined on an interval $[a, b]$ is a Chebyshev system if
- (i) The set is linearly independent.
- (ii) If $\phi$ is a linear combination of $\phi_0, \dots, \phi_n$ which is not identically zero, then $\phi$ has at most $n$ distinct zeros in $[a, b]$.
|
Show that $\{\phi_0, \dots, \phi_n\}$ is a Chebyshev system if and only if, for any distinct points $x_0, x_1, \dots, x_n \in [a, b]$, the matrix $A$ with entries $a_{ij} = \phi_j(x_i)$, $i, j = 0, \dots, n$, is non-singular.
|
We want to prove the following statement:
If $\{\phi_0, \dots, \phi_n\}$ is a Chebyshev system, then for any distinct points $x_0, \dots, x_n \in [a, b]$ the matrix $A$ with $a_{ij} = \phi_j(x_i)$ is non-singular.
Writing out the matrix yields:
$$A = \begin{pmatrix} \phi_0(x_0) & \phi_1(x_0) & \cdots & \phi_n(x_0) \\ \phi_0(x_1) & \phi_1(x_1) & \cdots & \phi_n(x_1) \\ \vdots & \vdots & & \vdots \\ \phi_0(x_n) & \phi_1(x_n) & \cdots & \phi_n(x_n) \end{pmatrix}.$$
Suppose, for contradiction, that $A$ is singular. Then there exist constants $c_0, \dots, c_n$, not all zero, such that $\sum_{j=0}^{n} c_j \phi_j(x_i) = 0$ for every $i = 0, \dots, n$. Since the set is linearly independent and the $c_j$ are not all zero, the function $\phi = \sum_{j=0}^{n} c_j \phi_j$ is not identically zero; yet it vanishes at the $n + 1$ distinct points $x_0, \dots, x_n$, contradicting (ii). Hence the columns of $A$ are linearly independent, which implies that $A$ is non-singular.
Conversely, assume $A$ is non-singular for every choice of distinct points $x_0, \dots, x_n \in [a, b]$.
This implies the columns of
$$A = \begin{pmatrix} \phi_0(x_0) & \cdots & \phi_n(x_0) \\ \vdots & & \vdots \\ \phi_0(x_n) & \cdots & \phi_n(x_n) \end{pmatrix}$$
are linearly independent. In particular, if $\sum_{j=0}^{n} c_j \phi_j \equiv 0$ on $[a, b]$, then evaluating at the points $x_0, \dots, x_n$ gives $Ac = 0$, and non-singularity forces $c_0 = \cdots = c_n = 0$. Hence $\{\phi_0, \dots, \phi_n\}$ is a linearly independent set, and we have shown (i).
For (ii): by hypothesis, $\phi$ is a linear combination of $\phi_0, \dots, \phi_n$, i.e.
$$\phi = \sum_{j=0}^{n} c_j \phi_j$$
for $c_0, \dots, c_n$ not all zero.
Assume for the sake of contradiction that $\phi$ has $n + 1$ distinct zeros at $x_0, \dots, x_n \in [a, b]$.
This implies the following equations:
$$\sum_{j=0}^{n} c_j \phi_j(x_i) = 0, \qquad i = 0, \dots, n.$$
Rewriting the equations in matrix form yields
$$Ac = 0, \qquad c = (c_0, \dots, c_n)^T.$$
Since $c_0, \dots, c_n$ are not all zero, this implies that the columns of $A$, $a_{ij} = \phi_j(x_i)$, are linearly dependent, a contradiction.
Hence, $\phi$ has at most $n$ distinct zeros in $[a, b]$, and we have shown (ii).
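As a quick numerical sanity check of this equivalence (not part of the proof), the sketch below builds the matrix $a_{ij} = \phi_j(x_i)$ for the Chebyshev system $\{1, x, x^2\}$ at randomly drawn distinct points; the system, interval, and sampling are illustrative choices.

```python
import numpy as np

# Illustrative check: {1, x, x^2} is a Chebyshev system on [0, 1], so the
# collocation matrix a_ij = phi_j(x_i) should be non-singular whenever
# the points x_0, x_1, x_2 are distinct.
phis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=3)
    if len(np.unique(x)) < 3:          # skip a (measure-zero) repeated draw
        continue
    A = np.column_stack([phi(x) for phi in phis])
    assert np.linalg.matrix_rank(A) == 3   # non-singular at every draw
print("all sampled collocation matrices were non-singular")
```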
Let $f \in C^{n+1}[a, b]$ be such that $f^{(n+1)}(x) \neq 0$ for all $x \in [a, b]$. Let $\phi_j(x) = x^j$ for $j = 0, \dots, n$ and $\phi_{n+1} = f$. Show that $\{1, x, \dots, x^n, f\}$ is a Chebyshev system. For this, you may use results from polynomial interpolation without proof.
|
We have to prove:
(i) $\{1, x, \dots, x^n, f\}$ is a linearly independent set;
(ii) any non-trivial linear combination of this set has at most $n + 1$ distinct zeros in $[a, b]$.
If we evaluate $1, x, \dots, x^n$ at $n + 1$ distinct points $x_0, \dots, x_n$, we have the following Vandermonde matrix,
$$V = \begin{pmatrix} 1 & x_0 & \cdots & x_0^n \\ 1 & x_1 & \cdots & x_1^n \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^n \end{pmatrix},$$
whose determinant $\prod_{0 \le i < j \le n} (x_j - x_i)$ is non-zero.
Hence, $1, x, \dots, x^n$ are linearly independent.
Assume for the sake of contradiction that $f$ is a linear combination of $1, x, \dots, x^n$, that is,
$$f(x) = \sum_{j=0}^{n} c_j x^j.$$
Hence, $f$ is a polynomial of degree at most $n$. Taking $n + 1$ derivatives of $f$ yields
$$f^{(n+1)} \equiv 0,$$
which contradicts the hypothesis. Therefore $\{1, x, \dots, x^n, f\}$ is a linearly independent set.
Let $\psi = \sum_{j=0}^{n} c_j x^j + c_{n+1} f$ with $c_0, \dots, c_{n+1}$ not all zero.
Suppose $\psi$ has $n + 2$ (or more) distinct zeros in $[a, b]$. If $c_{n+1} = 0$, then $\psi$ is a non-zero polynomial of degree at most $n$ with more than $n$ zeros, which is impossible. If $c_{n+1} \neq 0$, then by repeated application of Rolle's Theorem, $\psi^{(n+1)} = c_{n+1} f^{(n+1)}$ vanishes at some point of the given interval, a contradiction.
(i) and (ii) show that $\{1, x, \dots, x^n, f\}$ is a Chebyshev system.
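For a concrete instance of this result, $f(x) = e^x$ satisfies $f^{(n+1)}(x) = e^x \neq 0$, so $\{1, x, \dots, x^n, e^x\}$ should be a Chebyshev system, and by the first part its collocation matrix at any $n + 2$ distinct points must be non-singular. A minimal check, with $n = 3$ and random points as illustrative choices:

```python
import numpy as np

# f(x) = exp(x) has f^{(n+1)}(x) = exp(x) != 0, so {1, x, ..., x^n, exp}
# is a Chebyshev system; by the first part, its collocation matrix at any
# n+2 distinct points must be non-singular.
n = 3
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1.0, 1.0, size=n + 2))     # n+2 distinct points
A = np.column_stack([x**j for j in range(n + 1)] + [np.exp(x)])
print("condition number:", np.linalg.cond(A))       # finite => non-singular
```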
Let
$$Q_n(f) = \sum_{j=0}^{n} w_j^{(n)} f\big(x_j^{(n)}\big), \qquad x_j^{(n)} \in [a, b], \tag{0}$$
be a sequence of integration rules for approximating $I(f) = \int_a^b f(x)\,dx$.
|
Suppose
$$Q_n(p) \to I(p) \quad \text{as } n \to \infty \text{ for every polynomial } p, \tag{1}$$
and
$$\sum_{j=0}^{n} \big|w_j^{(n)}\big| \le C \quad \text{for all } n \tag{2}$$
for some constant $C$. Show that
$$Q_n(f) \to I(f) = \int_a^b f(x)\,dx \quad \text{as } n \to \infty$$
for all $f \in C[a, b]$.
|
By the Weierstrass Approximation Theorem, given $\epsilon > 0$, there exists a polynomial $p$ such that
$$|f(x) - p(x)| < \epsilon \quad \text{for all } x \in [a, b]. \tag{3}$$
Let $I(g) = \int_a^b g(x)\,dx$.
Adding and subtracting $Q_n(p)$ and $I(p)$ yields
$$|Q_n(f) - I(f)| = |Q_n(f) - Q_n(p) + Q_n(p) - I(p) + I(p) - I(f)|.$$
By the triangle inequality and equations (2) and (3),
$$|Q_n(f) - Q_n(p)| \le \sum_{j=0}^{n} \big|w_j^{(n)}\big|\,\big|f(x_j^{(n)}) - p(x_j^{(n)})\big| \le C\epsilon, \qquad |I(p) - I(f)| \le \int_a^b |p(x) - f(x)|\,dx \le (b - a)\epsilon.$$
By equation (1), when $n \to \infty$,
$$|Q_n(p) - I(p)| \to 0.$$
Hence
$$\limsup_{n \to \infty} |Q_n(f) - I(f)| \le (C + b - a)\,\epsilon$$
for arbitrarily small $\epsilon > 0$, i.e.
$$Q_n(f) \to I(f) \quad \text{as } n \to \infty.$$
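As an illustration of the theorem (the rule and integrand below are my own choices, not from the problem), the composite trapezoidal rule has positive weights that sum to $b - a$, so (1) and (2) hold and $Q_n(f)$ should converge to $I(f)$ for any continuous $f$:

```python
import numpy as np

# The composite trapezoidal rule has positive weights summing to b - a, so
# (1) and (2) hold; by the theorem Q_n(f) -> I(f) for every continuous f.
# Illustrative test: f(x) = exp(x) on [0, 1], I(f) = e - 1.
f, a, b, exact = np.exp, 0.0, 1.0, np.e - 1.0
for n in [4, 16, 64, 256]:
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)
    w[[0, -1]] *= 0.5                  # trapezoidal end-point weights
    print(n, abs(w @ f(x) - exact))    # error decreases with n
```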
Show that if all $w_j^{(n)} \ge 0$, then (1) implies (2).
|
For $p \equiv 1$, equation (1) yields
$$Q_n(1) \to \int_a^b 1\,dx = b - a \quad \text{as } n \to \infty.$$
Letting $f \equiv 1$ in equation (0) yields
$$Q_n(1) = \sum_{j=0}^{n} w_j^{(n)}.$$
Combining the two above results yields
$$\sum_{j=0}^{n} w_j^{(n)} \to b - a \quad \text{as } n \to \infty.$$
Since $b - a$ is finite, the convergent sequence on the left is bounded: there exists some number $C$ such that $\sum_{j=0}^{n} w_j^{(n)} \le C$ for all $n$.
Since $w_j^{(n)} \ge 0$,
$$\sum_{j=0}^{n} \big|w_j^{(n)}\big| = \sum_{j=0}^{n} w_j^{(n)} \le C,$$
i.e. equation (2).
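Gauss-Legendre rules are a standard example of this situation: their weights are positive, and since each rule integrates $p \equiv 1$ exactly, the weights sum to $b - a$. A quick check (node counts chosen arbitrarily):

```python
import numpy as np

# Gauss-Legendre weights on [-1, 1] are positive and sum to b - a = 2,
# exactly the situation where (1) gives the uniform bound (2).
for n in [2, 5, 10, 50]:
    nodes, weights = np.polynomial.legendre.leggauss(n)
    assert (weights > 0).all()
    print(n, weights.sum())            # -> 2.0 for every n
```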
Consider the real system of linear equations
$$Ax = b, \tag{1}$$
where $A$ is non-singular and satisfies
$$\langle v, Av \rangle > 0$$
for all real $v \neq 0$, where the Euclidean inner product $\langle u, v \rangle = u^T v$ is used here. Let $M = \tfrac{1}{2}(A + A^T)$. Show that $\langle v, Av \rangle = \langle v, Mv \rangle$ for all real $v$.
|
Substituting $\tfrac{1}{2}(A + A^T)$ for $M$ in $\langle v, Mv \rangle$ and expanding the inner product we have,
$$\langle v, Mv \rangle = \tfrac{1}{2}\langle v, Av \rangle + \tfrac{1}{2}\langle v, A^T v \rangle.$$
From properties of inner products we have,
$$\langle v, A^T v \rangle = \langle Av, v \rangle = \langle v, Av \rangle.$$
Hence,
$$\langle v, Mv \rangle = \langle v, Av \rangle.$$
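A two-line numerical confirmation that the skew-symmetric part of $A$ drops out of the quadratic form (the matrix size and seed below are arbitrary choices):

```python
import numpy as np

# The skew part of A drops out of the quadratic form: <v, Av> = <v, Mv>
# with M = (A + A^T)/2, for any real v.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
M = 0.5 * (A + A.T)
v = rng.standard_normal(5)
print(v @ A @ v, v @ M @ v)            # identical up to rounding
```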
Prove that
$$\frac{\langle v, Av \rangle}{\langle v, v \rangle} \ge \lambda_{\min} > 0 \quad \text{for all real } v \neq 0,$$
where $\lambda_{\min}$ is the minimum eigenvalue of $M = \tfrac{1}{2}(A + A^T)$.
|
Since $M$ is symmetric, it has an eigenvalue decomposition
$$M = Q \Lambda Q^T,$$
where $Q$ is orthogonal and $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ is a diagonal matrix containing all the eigenvalues of $M$.
Substitution yields,
$$\frac{\langle v, Av \rangle}{\langle v, v \rangle} = \frac{\langle v, Mv \rangle}{\langle v, v \rangle} = \frac{\langle v, Q\Lambda Q^T v \rangle}{\langle v, v \rangle}.$$
Let
$$y = Q^T v.$$
This implies the following three relationships:
$$v = Qy, \qquad \langle v, v \rangle = \langle Qy, Qy \rangle = \langle y, y \rangle, \qquad \langle v, Q\Lambda Q^T v \rangle = \langle Q^T v, \Lambda Q^T v \rangle = \langle y, \Lambda y \rangle.$$
Substituting,
$$\frac{\langle v, Mv \rangle}{\langle v, v \rangle} = \frac{\langle y, \Lambda y \rangle}{\langle y, y \rangle}.$$
Expanding the numerator we have,
$$\langle y, \Lambda y \rangle = \sum_{i=1}^{n} \lambda_i y_i^2 \ge \lambda_{\min} \sum_{i=1}^{n} y_i^2.$$
Expanding the denominator yields
$$\langle y, y \rangle = \sum_{i=1}^{n} y_i^2.$$
Substituting,
$$\frac{\langle v, Av \rangle}{\langle v, v \rangle} = \frac{\langle y, \Lambda y \rangle}{\langle y, y \rangle} \ge \lambda_{\min}.$$
From part (a),
$$\langle v, Mv \rangle = \langle v, Av \rangle > 0$$
for all real $v \neq 0$.
Therefore $M$ is positive definite, which implies all its eigenvalues are positive. In particular,
$$\lambda_{\min} > 0.$$
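The bound can be verified numerically; in the sketch below the diagonal shift $6I$ is an arbitrary device to make the symmetric part positive definite, mimicking the hypothesis $\langle v, Av \rangle > 0$:

```python
import numpy as np

# Check <v, Av> / <v, v> >= lambda_min(M), M = (A + A^T)/2. The shift by
# 6*I only serves to make M positive definite, as the hypothesis requires.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)
lam_min = np.linalg.eigvalsh(0.5 * (A + A.T)).min()
for _ in range(1000):
    v = rng.standard_normal(6)
    assert v @ A @ v / (v @ v) >= lam_min - 1e-12
print("Rayleigh quotient of A bounded below by", lam_min)
```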
Now consider the iteration for computing a series of approximate solutions to (1),
$$x^{(k+1)} = x^{(k)} + \alpha_k r^{(k)}, \qquad k = 0, 1, 2, \dots,$$
where $r^{(k)} = b - Ax^{(k)}$ and $\alpha_k$ is chosen to minimize $\big\|r^{(k+1)}\big\|_2^2$ as a function of $\alpha_k$. Prove that
$$\big\|r^{(k+1)}\big\|_2^2 \le \left(1 - \frac{\lambda_{\min}^2}{\|A\|_2^2}\right)\big\|r^{(k)}\big\|_2^2,$$
where $\lambda_{\min}$ is as in part (b).
|
First, we want to write $\big\|r^{(k+1)}\big\|_2^2$ as a function of $\alpha_k$.
Changing the index of the residual definition from $k$ to $k + 1$, substituting $x^{(k)} + \alpha_k r^{(k)}$ for $x^{(k+1)}$, and applying the definition of the norm yields,
$$r^{(k+1)} = b - A\big(x^{(k)} + \alpha_k r^{(k)}\big) = r^{(k)} - \alpha_k A r^{(k)}, \qquad \big\|r^{(k+1)}\big\|_2^2 = \big\langle r^{(k)} - \alpha_k A r^{(k)},\, r^{(k)} - \alpha_k A r^{(k)} \big\rangle.$$
From a property of inner products, namely that the real inner product is symmetric,
$$\big\langle r^{(k)}, A r^{(k)} \big\rangle = \big\langle A r^{(k)}, r^{(k)} \big\rangle.$$
Hence,
$$\big\|r^{(k+1)}\big\|_2^2 = \big\|r^{(k)}\big\|_2^2 - 2\alpha_k \big\langle r^{(k)}, A r^{(k)} \big\rangle + \alpha_k^2 \big\|A r^{(k)}\big\|_2^2.$$
Next we want to minimize $\big\|r^{(k+1)}\big\|_2^2$ as a function of $\alpha_k$. Taking its derivative with respect to $\alpha_k$ yields,
$$\frac{d}{d\alpha_k}\big\|r^{(k+1)}\big\|_2^2 = -2\big\langle r^{(k)}, A r^{(k)} \big\rangle + 2\alpha_k \big\|A r^{(k)}\big\|_2^2.$$
Setting the derivative to zero and solving for $\alpha_k$ gives
$$\alpha_k = \frac{\big\langle r^{(k)}, A r^{(k)} \big\rangle}{\big\|A r^{(k)}\big\|_2^2}$$
(a minimum, since the second derivative $2\|A r^{(k)}\|_2^2$ is positive).
Substituting this value of $\alpha_k$ into the expression for $\big\|r^{(k+1)}\big\|_2^2$ gives,
$$\big\|r^{(k+1)}\big\|_2^2 = \big\|r^{(k)}\big\|_2^2 - \frac{\big\langle r^{(k)}, A r^{(k)} \big\rangle^2}{\big\|A r^{(k)}\big\|_2^2}.$$
Multiplying and dividing the second term on the right-hand side of the above equation by $\big\|r^{(k)}\big\|_2^2$ and applying a property of inner products yields,
$$\big\|r^{(k+1)}\big\|_2^2 = \big\|r^{(k)}\big\|_2^2 \left(1 - \frac{\big\langle r^{(k)}, A r^{(k)} \big\rangle^2}{\big\|r^{(k)}\big\|_2^2\, \big\|A r^{(k)}\big\|_2^2}\right).$$
From the result of part (b), $\big\langle r^{(k)}, A r^{(k)} \big\rangle \ge \lambda_{\min} \big\|r^{(k)}\big\|_2^2$, and since $\big\|A r^{(k)}\big\|_2 \le \|A\|_2 \big\|r^{(k)}\big\|_2$,
$$\frac{\big\langle r^{(k)}, A r^{(k)} \big\rangle^2}{\big\|r^{(k)}\big\|_2^2\, \big\|A r^{(k)}\big\|_2^2} \ge \frac{\lambda_{\min}^2}{\|A\|_2^2},$$
and we have our desired result:
$$\big\|r^{(k+1)}\big\|_2^2 \le \left(1 - \frac{\lambda_{\min}^2}{\|A\|_2^2}\right)\big\|r^{(k)}\big\|_2^2.$$
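A short sketch of the iteration (matrix, shift, right-hand side, and seed are illustrative choices) that checks the per-step contraction just proved:

```python
import numpy as np

# Residual-minimizing iteration x_{k+1} = x_k + alpha_k * r_k with
# alpha_k = <r_k, A r_k> / ||A r_k||^2; each step should contract the
# squared residual by at least the factor 1 - lambda_min(M)^2 / ||A||_2^2.
rng = np.random.default_rng(4)
n = 8
A = rng.standard_normal((n, n)) + 6.0 * np.eye(n)   # shift ensures <v, Av> > 0
b = rng.standard_normal(n)

M = 0.5 * (A + A.T)
factor = 1.0 - np.linalg.eigvalsh(M).min() ** 2 / np.linalg.norm(A, 2) ** 2

x = np.zeros(n)
r = b - A @ x
for k in range(30):
    Ar = A @ r
    alpha = (r @ Ar) / (Ar @ Ar)
    x = x + alpha * r
    r_new = b - A @ x
    assert r_new @ r_new <= factor * (r @ r) + 1e-14   # the proved bound
    r = r_new
print("final residual norm:", np.linalg.norm(r))
```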