# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2005

## Problem 1

 Derive the one-, two-, and three-point Gaussian quadrature formulas such that ${\displaystyle \int _{-1}^{1}f(x)x^{4}dx\approx \sum _{j=1}^{n}f(x_{j})w_{j}\!\,}$. Give bounds on the error of these formulas.

## Solution 1

### Find Orthogonal Polynomials w.r.t. Weighting Function

For this problem, we need the first three orthogonal polynomials ${\displaystyle P_{1},P_{2},{\mbox{ and }}P_{3}\!\,}$ with respect to a weighted inner product

${\displaystyle \left\langle f,g\right\rangle =\int _{-1}^{1}f(x)g(x)w(x)dx\!\,}$

where ${\displaystyle w(x)=x^{4}\!\,}$ in our case. This can be done using Gram-Schmidt Orthogonalization on the basis ${\displaystyle \left\{1,x,x^{2},x^{3}\right\}\!\,}$

#### Zero Order Polynomial

Let ${\displaystyle P_{0}(x)=1\!\,}$

#### First Order Polynomial

{\displaystyle {\begin{aligned}P_{1}(x)&=x-{\frac {\left\langle 1,x\right\rangle }{\left\langle 1,1\right\rangle }}\cdot 1\\&=x-\underbrace {\frac {\int _{-1}^{1}x^{5}dx}{\int _{-1}^{1}x^{4}dx}} _{\frac {0}{2/5}}\\&=x\end{aligned}}}

#### Second Order Polynomial

{\displaystyle {\begin{aligned}P_{2}(x)&=x^{2}-{\frac {\left\langle 1,x^{2}\right\rangle }{\left\langle 1,1\right\rangle }}\cdot 1-{\frac {\left\langle x,x^{2}\right\rangle }{\left\langle x,x\right\rangle }}\cdot x\\&=x^{2}-{\frac {\int _{-1}^{1}x^{6}dx}{\int _{-1}^{1}x^{4}dx}}-x\cdot \underbrace {\frac {\int _{-1}^{1}x^{7}dx}{\int _{-1}^{1}x^{6}dx}} _{\frac {0}{2/7}}\\&=x^{2}-{\frac {5}{7}}\end{aligned}}}

#### Third Order Polynomial

{\displaystyle {\begin{aligned}P_{3}(x)&=x^{3}-{\frac {\left\langle 1,x^{3}\right\rangle }{\left\langle 1,1\right\rangle }}\cdot 1-{\frac {\left\langle x,x^{3}\right\rangle }{\left\langle x,x\right\rangle }}\cdot x-{\frac {\left\langle x^{2}-{\frac {5}{7}},x^{3}\right\rangle }{\left\langle x^{2}-{\frac {5}{7}},x^{2}-{\frac {5}{7}}\right\rangle }}\cdot \left(x^{2}-{\frac {5}{7}}\right)\\&=x^{3}-0-x\cdot \underbrace {\frac {\int _{-1}^{1}x^{8}dx}{\int _{-1}^{1}x^{6}dx}} _{\frac {2/9}{2/7}}-0\\&=x^{3}-{\frac {7}{9}}x\end{aligned}}}

#### Find Roots of Polynomials

The roots of these polynomials will be the interpolation nodes used in Gauss Quadrature.

${\displaystyle {\begin{array}{|c|c|}{\mbox{Polynomial}}&{\mbox{Roots}}\\\hline P_{1}(x)=x&0\\\\P_{2}(x)=x^{2}-{\frac {5}{7}}&\pm {\sqrt {\frac {5}{7}}}\\\\P_{3}(x)=x^{3}-{\frac {7}{9}}x&0,\pm {\sqrt {\frac {7}{9}}}\\\\\hline \end{array}}}$
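As a sanity check (not part of the original solution), the Gram-Schmidt computation can be reproduced with exact rational arithmetic, using the weighted moments ${\displaystyle \int _{-1}^{1}x^{k}\cdot x^{4}dx=2/(k+5)\!\,}$ for even ${\displaystyle k\!\,}$ and ${\displaystyle 0\!\,}$ for odd ${\displaystyle k\!\,}$:

```python
from fractions import Fraction

def moment(k):
    # weighted moment: integral of x^k * x^4 over [-1, 1]
    return Fraction(2, k + 5) if k % 2 == 0 else Fraction(0)

def inner(p, q):
    # <p, q> with weight x^4; polynomials stored as coefficient lists, p[i] = coeff of x^i
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p)
               for j, b in enumerate(q))

def gram_schmidt(n):
    # orthogonalize the monomials 1, x, ..., x^n against the weighted inner product
    basis = []
    for k in range(n + 1):
        p = [Fraction(0)] * k + [Fraction(1)]  # x^k
        for q in basis:
            c = inner(p, q) / inner(q, q)
            p = [a - c * (q[i] if i < len(q) else 0) for i, a in enumerate(p)]
        basis.append(p)
    return basis

P = gram_schmidt(3)
# P[1] = x, P[2] = x^2 - 5/7, P[3] = x^3 - (7/9) x
```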

### Formula to Compute Coefficients

${\displaystyle w_{i}=\int _{-1}^{1}l_{i}(x)x^{4}dx\!\,}$, where
${\displaystyle l_{i}(x)=\prod _{j=1,j\neq i}^{n}{\frac {x-x_{j}}{x_{i}-x_{j}}}\!\,}$

#### 1 point formula

${\displaystyle l_{1}(x)=1\!\,}$

${\displaystyle w_{1}=\int _{-1}^{1}x^{4}dx={\frac {2}{5}}\!\,}$

${\displaystyle \int _{-1}^{1}f(x)x^{4}dx\approx {\frac {2}{5}}f(x_{1})\!\,}$

where ${\displaystyle x_{1}\!\,}$ is the root of ${\displaystyle P_{1}(x)\!\,}$.

#### 2 point formula

${\displaystyle l_{1}(x)={\frac {x-x_{2}}{x_{1}-x_{2}}}\!\,}$

${\displaystyle l_{2}(x)={\frac {x-x_{1}}{x_{2}-x_{1}}}\!\,}$

${\displaystyle w_{1}=\int _{-1}^{1}{\frac {x-x_{2}}{x_{1}-x_{2}}}x^{4}dx={\frac {2x_{2}}{5(x_{2}-x_{1})}}\!\,}$

${\displaystyle w_{2}=\int _{-1}^{1}{\frac {x-x_{1}}{x_{2}-x_{1}}}x^{4}dx={\frac {2x_{1}}{5(x_{1}-x_{2})}}\!\,}$

${\displaystyle \int _{-1}^{1}f(x)x^{4}dx\approx w_{1}f(x_{1})+w_{2}f(x_{2})\!\,}$

where ${\displaystyle x_{1}{\mbox{ and }}x_{2}\!\,}$ are the roots of ${\displaystyle P_{2}(x)\!\,}$.
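Plugging in the actual roots ${\displaystyle x_{1,2}=\mp {\sqrt {5/7}}\!\,}$ gives concrete numbers; the quick Python check below (illustrative only) shows both weights come out to ${\displaystyle 1/5\!\,}$ and tests exactness on ${\displaystyle f(x)=x^{2}\!\,}$:

```python
import math

# roots of P2(x) = x^2 - 5/7
x1, x2 = -math.sqrt(5 / 7), math.sqrt(5 / 7)

# weights from the formulas above; by symmetry both equal 1/5
w1 = 2 * x2 / (5 * (x2 - x1))
w2 = 2 * x1 / (5 * (x1 - x2))

# exactness check on f(x) = x^2: the exact weighted integral of x^2 * x^4 is 2/7
approx = w1 * x1 ** 2 + w2 * x2 ** 2
```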

#### 3 point formula

${\displaystyle l_{1}(x)={\frac {x-x_{2}}{x_{1}-x_{2}}}{\frac {x-x_{3}}{x_{1}-x_{3}}}\!\,}$

${\displaystyle l_{2}(x)={\frac {x-x_{1}}{x_{2}-x_{1}}}{\frac {x-x_{3}}{x_{2}-x_{3}}}\!\,}$

${\displaystyle l_{3}(x)={\frac {x-x_{1}}{x_{3}-x_{1}}}{\frac {x-x_{2}}{x_{3}-x_{2}}}\!\,}$

${\displaystyle w_{1}=\int _{-1}^{1}{\frac {x-x_{2}}{x_{1}-x_{2}}}{\frac {x-x_{3}}{x_{1}-x_{3}}}x^{4}dx={\frac {1}{(x_{1}-x_{2})(x_{1}-x_{3})}}\left({\frac {2}{7}}+{\frac {2x_{2}x_{3}}{5}}\right)\!\,}$

${\displaystyle w_{2}=\int _{-1}^{1}{\frac {x-x_{1}}{x_{2}-x_{1}}}{\frac {x-x_{3}}{x_{2}-x_{3}}}x^{4}dx={\frac {1}{(x_{2}-x_{1})(x_{2}-x_{3})}}\left({\frac {2}{7}}+{\frac {2x_{1}x_{3}}{5}}\right)\!\,}$

${\displaystyle w_{3}=\int _{-1}^{1}{\frac {x-x_{1}}{x_{3}-x_{1}}}{\frac {x-x_{2}}{x_{3}-x_{2}}}x^{4}dx={\frac {1}{(x_{3}-x_{1})(x_{3}-x_{2})}}\left({\frac {2}{7}}+{\frac {2x_{1}x_{2}}{5}}\right)\!\,}$

${\displaystyle \int _{-1}^{1}f(x)x^{4}dx\approx w_{1}f(x_{1})+w_{2}f(x_{2})+w_{3}f(x_{3})\!\,}$

where ${\displaystyle x_{1}{\mbox{, }}x_{2}{\mbox{ and }}x_{3}\!\,}$ are the roots of ${\displaystyle P_{3}(x)\!\,}$.
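With the roots ${\displaystyle 0,\pm {\sqrt {7/9}}\!\,}$ substituted in, the weight formulas above can be evaluated numerically; the sketch below (illustrative only) also tests exactness on ${\displaystyle f(x)=x^{4}\!\,}$, which is allowed since ${\displaystyle 4\leq 2n-1=5\!\,}$:

```python
import math

# roots of P3(x) = x^3 - (7/9) x
c = math.sqrt(7 / 9)
x1, x2, x3 = -c, 0.0, c

def weight(xi, xa, xb):
    # closed form derived above: (2/7 + 2*xa*xb/5) / ((xi - xa)(xi - xb))
    return (2 / 7 + 2 * xa * xb / 5) / ((xi - xa) * (xi - xb))

w1, w2, w3 = weight(x1, x2, x3), weight(x2, x1, x3), weight(x3, x1, x2)
# w1 = w3 = 9/49 and w2 = 8/245; they sum to 2/5, the integral of the weight x^4

# exactness check on f(x) = x^4: the exact weighted integral of x^4 * x^4 is 2/9
approx = w1 * x1 ** 4 + w2 * x2 ** 4 + w3 * x3 ** 4
```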

### Derive Error Bound

We know that this quadrature is exact for all polynomials of degree at most ${\displaystyle 2n-1\!\,}$.

We choose a polynomial ${\displaystyle p\!\,}$ of degree at most ${\displaystyle 2n-1\!\,}$ that Hermite interpolates ${\displaystyle f\!\,}$, i.e.

${\displaystyle p(x_{i})=f(x_{i})\quad p'(x_{i})=f'(x_{i})\quad (0\leq i\leq n-1)\!\,}$

The error for this interpolation is

${\displaystyle f(x)-p(x)={\frac {f^{(2n)}(\zeta (x))}{(2n)!}}\prod _{i=0}^{n-1}(x-x_{i})^{2}\!\,}$

Compute the error of the quadrature as follows:

{\displaystyle {\begin{aligned}E&=\int _{-1}^{1}f(x)x^{4}dx-\sum _{i=0}^{n-1}w_{i}f(x_{i})\\&=\int _{-1}^{1}f(x)x^{4}dx-\sum _{i=0}^{n-1}w_{i}p(x_{i})\\&=\int _{-1}^{1}f(x)x^{4}dx-\int _{-1}^{1}p(x)x^{4}dx\\&=\int _{-1}^{1}(f(x)-p(x))x^{4}dx\\&=\int _{-1}^{1}{\frac {f^{(2n)}(\zeta (x))}{(2n)!}}\prod _{i=0}^{n-1}(x-x_{i})^{2}\cdot x^{4}dx\\&={\frac {f^{(2n)}(\xi )}{(2n)!}}\int _{-1}^{1}x^{4}\cdot \underbrace {\prod _{i=0}^{n-1}(x-x_{i})^{2}} _{(p_{n}(x))^{2}}dx\end{aligned}}\!\,}

where the last line follows from the mean value theorem for integrals (the factor ${\displaystyle x^{4}\prod _{i=0}^{n-1}(x-x_{i})^{2}\!\,}$ does not change sign on ${\displaystyle [-1,1]\!\,}$).

Notice that ${\displaystyle p_{n}(x)\!\,}$ is simply the ${\displaystyle n\!\,}$th monic orthogonal polynomial with respect to the weight function ${\displaystyle x^{4}\!\,}$, since the ${\displaystyle x_{i}\!\,}$ are its roots.

Hence, the error for the 1 point Gaussian quadrature is

{\displaystyle {\begin{aligned}E&={\frac {f^{(2)}(\xi )}{(2)!}}\int _{-1}^{1}x^{4}\cdot x^{2}dx\\&={\frac {f^{(2)}(\xi )}{7}}\end{aligned}}\!\,}

The error for 2 point quadrature:

{\displaystyle {\begin{aligned}E&={\frac {f^{(4)}(\xi )}{(4)!}}\int _{-1}^{1}x^{4}\cdot \left(x^{2}-{\frac {5}{7}}\right)^{2}dx={\frac {f^{(4)}(\xi )}{24}}\cdot {\frac {8}{441}}={\frac {f^{(4)}(\xi )}{1323}}\end{aligned}}\!\,}

The error for 3 point quadrature:

{\displaystyle {\begin{aligned}E&={\frac {f^{(6)}(\xi )}{(6)!}}\int _{-1}^{1}x^{4}\cdot \left(x^{3}-{\frac {7}{9}}x\right)^{2}dx={\frac {f^{(6)}(\xi )}{720}}\cdot {\frac {8}{891}}={\frac {f^{(6)}(\xi )}{80190}}\end{aligned}}\!\,}
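The integrals in the two- and three-point error terms can be evaluated exactly by expanding and using ${\displaystyle \int _{-1}^{1}x^{k}dx=2/(k+1)\!\,}$ for even ${\displaystyle k\!\,}$; a short rational-arithmetic check (illustrative only):

```python
from fractions import Fraction

def moment(k):
    # integral of x^k over [-1, 1]
    return Fraction(2, k + 1) if k % 2 == 0 else Fraction(0)

# expand x^4 * (x^2 - 5/7)^2 = x^8 - (10/7) x^6 + (25/49) x^4
E2 = moment(8) - Fraction(10, 7) * moment(6) + Fraction(25, 49) * moment(4)

# expand x^4 * (x^3 - (7/9) x)^2 = x^10 - (14/9) x^8 + (49/81) x^6
E3 = moment(10) - Fraction(14, 9) * moment(8) + Fraction(49, 81) * moment(6)

# E2 = 8/441 and E3 = 8/891, so the two- and three-point errors are
# f^(4)(xi)/24 * 8/441 = f^(4)(xi)/1323 and f^(6)(xi)/720 * 8/891 = f^(6)(xi)/80190
```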

## Problem 2

 We wish to solve ${\displaystyle Ax=b\!\,}$ iteratively, where ${\displaystyle A={\begin{bmatrix}1&1&-1\\1&2&1\\1&1&1\end{bmatrix}}\!\,}$. Show that for this ${\displaystyle A\!\,}$ the Jacobi method and the Gauss-Seidel method both converge. Explain why for this ${\displaystyle A\!\,}$ one of these methods is better than the other.

## Solution 2

### Jacobi Method

Decompose ${\displaystyle A\!\,}$ into its diagonal, lower, and upper parts i.e.

${\displaystyle A=D+(L+U)\!\,}$

Derive Jacobi iteration by solving for x as follows:

{\displaystyle {\begin{aligned}Ax&=b\\(D+(L+U))x&=b\\Dx+(L+U)x&=b\\Dx&=b-(L+U)x\\x&=D^{-1}(b-(L+U)x)\end{aligned}}\!\,}

which gives the iteration ${\displaystyle x^{(k+1)}=D^{-1}(b-(L+U)x^{(k)})\!\,}$.

### Gauss-Seidel Method

Decompose ${\displaystyle A\!\,}$ into its diagonal, lower, and upper parts i.e.

${\displaystyle A=(D+L)+U\!\,}$

Derive Gauss-Seidel iteration by solving for x as follows:

{\displaystyle {\begin{aligned}Ax&=b\\((D+L)+U)x&=b\\(D+L)x+Ux&=b\\(D+L)x&=b-Ux\\x&=(D+L)^{-1}(b-Ux)\end{aligned}}\!\,}

which gives the iteration ${\displaystyle x^{(k+1)}=(D+L)^{-1}(b-Ux^{(k)})\!\,}$.
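Both updates can be written componentwise. The sketch below is illustrative only; the right-hand side ${\displaystyle b=(1,4,3)^{T}\!\,}$ is a hypothetical choice made so that the exact solution is ${\displaystyle x=(1,1,1)^{T}\!\,}$:

```python
A = [[1, 1, -1],
     [1, 2, 1],
     [1, 1, 1]]
b = [1, 4, 3]  # hypothetical choice: the exact solution is then x = (1, 1, 1)

def jacobi_step(x):
    # every component is updated using only the previous iterate
    return [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
            for i in range(3)]

def gauss_seidel_step(x):
    # components are overwritten in place, so later rows see the new values
    x = x[:]
    for i in range(3):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
    return x

x = [0.0, 0.0, 0.0]
for _ in range(3):
    x = jacobi_step(x)
# for this particular A, three Jacobi sweeps already give x = (1, 1, 1) exactly
```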

### Jacobi Convergence

Convergence occurs for the Jacobi iteration if and only if the spectral radius of the iteration matrix ${\displaystyle -D^{-1}(L+U)\!\,}$ is less than 1. Since negating a matrix does not change its spectral radius, it suffices to show

${\displaystyle \rho (D^{-1}(L+U))<1\!\,}$

The eigenvalues of the matrix

${\displaystyle D^{-1}(L+U)={\begin{bmatrix}0&1&-1\\{\frac {1}{2}}&0&{\frac {1}{2}}\\1&1&0\end{bmatrix}}\!\,}$

are ${\displaystyle \lambda =0,0,0\!\,}$, i.e. the eigenvalue ${\displaystyle 0\!\,}$ has algebraic multiplicity 3.

Therefore, the spectral radius is ${\displaystyle \rho =0<1\!\,}$, so the Jacobi iteration converges. (In fact the iteration matrix is nilpotent, so in exact arithmetic the iteration reaches the solution in at most three steps.)

### Gauss-Seidel Convergence

Convergence occurs for the Gauss-Seidel iteration if and only if the spectral radius of the iteration matrix ${\displaystyle -(D+L)^{-1}U\!\,}$ is less than 1. Since negating a matrix does not change its spectral radius, it suffices to show

${\displaystyle \rho ((D+L)^{-1}U)<1\!\,}$

The eigenvalues of the matrix

${\displaystyle (D+L)^{-1}U={\begin{bmatrix}0&1&-1\\0&-{\frac {1}{2}}&1\\0&-{\frac {1}{2}}&0\end{bmatrix}}\!\,}$

are ${\displaystyle \lambda =0,-{\frac {1}{4}}+{\frac {{\sqrt {7}}i}{4}},-{\frac {1}{4}}-{\frac {{\sqrt {7}}i}{4}}\!\,}$

Therefore, the spectral radius is ${\displaystyle \rho ={\sqrt {{\frac {1}{16}}+{\frac {7}{16}}}}={\sqrt {\frac {1}{2}}}={\frac {\sqrt {2}}{2}}\approx 0.707<1\!\,}$.

### Comparison of Methods

In general, the Gauss-Seidel iteration converges faster than the Jacobi iteration since Gauss-Seidel uses the new components of ${\displaystyle x\!\,}$ as soon as they become available. For this particular ${\displaystyle A\!\,}$, however, ${\displaystyle \rho _{Jacobi}=0<\rho _{Gauss-Seidel}={\frac {\sqrt {2}}{2}}\!\,}$, so the Jacobi iteration converges faster.
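Both spectral radii can be checked without a linear-algebra package. The pure-Python sketch below copies the iteration-matrix entries derived in this solution; nilpotence of the Jacobi matrix is verified by cubing it, and the stated Gauss-Seidel eigenvalue is verified by plugging it into the characteristic determinant:

```python
def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    # 3x3 determinant by cofactor expansion (works for complex entries too)
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Jacobi iteration matrix from above: nilpotent, so every eigenvalue is 0
M_J = [[0, 1, -1], [0.5, 0, 0.5], [1, 1, 0]]
M_J3 = matmul(M_J, matmul(M_J, M_J))  # equals the zero matrix

# Gauss-Seidel iteration matrix from above: test the eigenvalue -1/4 + sqrt(7)/4 i
M_GS = [[0, 1, -1], [0, -0.5, 1], [0, -0.5, 0]]
lam = complex(-0.25, 7 ** 0.5 / 4)
char_val = det3([[lam * (i == j) - M_GS[i][j] for j in range(3)] for i in range(3)])
# char_val is (numerically) 0, and abs(lam) = sqrt(2)/2
```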

## Problem 3

 Consider the three-term polynomial recurrence ${\displaystyle p_{k+1}(x)=(x-\mu _{k})p_{k}(x)-\nu _{k}^{2}p_{k-1}(x)\quad {\mbox{ for }}k=1,2,\dots ,\!\,}$ initialized by ${\displaystyle p_{0}(x)=1{\mbox{ and }}p_{1}(x)=x-\mu _{0}\!\,}$, where each ${\displaystyle \mu _{k}{\mbox{ and }}\nu _{k}\!\,}$ is real and each ${\displaystyle \nu _{k}\!\,}$ is nonzero.

## Problem 3a

 Prove that each ${\displaystyle p_{k}(x)\!\,}$ is a monic polynomial of degree ${\displaystyle k\!\,}$, and for every ${\displaystyle n=0,1,\dots \!\,}$, one has ${\displaystyle \operatorname {span} \left\{p_{0}(x),p_{1}(x),\dots ,p_{n}(x)\right\}=\operatorname {span} \left\{1,x,x^{2},\dots ,x^{n}\right\}\!\,}$

## Solution 3a

We prove the claim by induction.

### Base Case

${\displaystyle p_{0}(x)=1\!\,}$ is monic with degree zero, and ${\displaystyle \operatorname {span} \left\{p_{0}(x)\right\}=\operatorname {span} \left\{1\right\}\!\,}$.

${\displaystyle p_{1}(x)=x-\mu _{0}\!\,}$ is monic with degree one, and ${\displaystyle \operatorname {span} \left\{p_{0}(x),p_{1}(x)\right\}=\operatorname {span} \left\{1,x\right\}\!\,}$.

${\displaystyle p_{2}(x)=(x-\mu _{0})(x-\mu _{1})-\nu _{1}^{2}\!\,}$ is monic with degree 2, and ${\displaystyle \operatorname {span} \left\{p_{0}(x),p_{1}(x),p_{2}(x)\right\}=\operatorname {span} \left\{1,x,x^{2}\right\}\!\,}$.

### Induction Step

Assume ${\displaystyle p_{k-1}(x)\!\,}$ is monic with degree ${\displaystyle k-1\!\,}$, and ${\displaystyle \operatorname {span} \left\{p_{0}(x),p_{1}(x),\dots ,p_{k-1}(x)\right\}=\operatorname {span} \left\{1,x,\dots ,x^{k-1}\right\}\!\,}$.

Also, assume ${\displaystyle p_{k}(x)\!\,}$ is monic with degree ${\displaystyle k\!\,}$, and ${\displaystyle \operatorname {span} \left\{p_{0}(x),p_{1}(x),\dots ,p_{k}(x)\right\}=\operatorname {span} \left\{1,x,\dots ,x^{k}\right\}\!\,}$.

Then from the hypothesis, ${\displaystyle p_{k+1}(x)=(x-\mu _{k})p_{k}(x)-\nu _{k}^{2}p_{k-1}(x)\!\,}$ is monic with degree ${\displaystyle k+1\!\,}$: the product ${\displaystyle (x-\mu _{k})p_{k}(x)\!\,}$ is monic of degree ${\displaystyle k+1\!\,}$, and subtracting ${\displaystyle \nu _{k}^{2}p_{k-1}(x)\!\,}$ of degree ${\displaystyle k-1\!\,}$ leaves the leading term unchanged.

Also,

${\displaystyle \operatorname {span} \left\{p_{0}(x),p_{1}(x),\dots ,p_{k+1}(x)\right\}=\operatorname {span} \left\{1,x,\dots ,x^{k+1}\right\}\!\,}$

since ${\displaystyle p_{k+1}(x)\!\,}$ is just ${\displaystyle x^{k+1}\!\,}$ plus a linear combination of lower-order terms already in ${\displaystyle \operatorname {span} \left\{1,x,\dots ,x^{k}\right\}\!\,}$.
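The induction can be exercised numerically; in the sketch below the coefficients ${\displaystyle \mu _{k}\!\,}$ and ${\displaystyle \nu _{k}\!\,}$ are arbitrary illustrative values, not taken from the problem:

```python
from fractions import Fraction

def x_minus_mu_times(mu, p):
    # (x - mu) * p(x), with p stored as a coefficient list (p[i] = coeff of x^i)
    out = [Fraction(0)] * (len(p) + 1)
    for i, c in enumerate(p):
        out[i + 1] += c        # the x * p part
        out[i] -= mu * c       # the -mu * p part
    return out

def three_term(mus, nus):
    # p0 = 1, p1 = x - mu_0, p_{k+1} = (x - mu_k) p_k - nu_k^2 p_{k-1}
    ps = [[Fraction(1)], [-mus[0], Fraction(1)]]
    for k in range(1, len(mus)):
        p = x_minus_mu_times(mus[k], ps[k])
        for i, c in enumerate(ps[k - 1]):
            p[i] -= nus[k] ** 2 * c
        ps.append(p)
    return ps

# arbitrary real mu_k and nonzero nu_k (nus[0] is never used by the recurrence)
mus = [Fraction(1), Fraction(-2), Fraction(0), Fraction(3)]
nus = [Fraction(0), Fraction(1), Fraction(2), Fraction(1, 2)]
ps = three_term(mus, nus)
# each ps[k] is monic of degree k
```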

## Problem 3b

 Show that for every ${\displaystyle k=0,1,\dots \!\,}$ the polynomial ${\displaystyle p_{k}(x)\!\,}$ has ${\displaystyle k\!\,}$ simple real roots that interlace with the roots of ${\displaystyle p_{k-1}(x)\!\,}$.

## Solution 3b

We prove the claim by induction.

### Base Case

Let's consider the case ${\displaystyle k=2\!\,}$. We know that

${\displaystyle p_{2}(x)=(x-\mu _{0})(x-\mu _{1})-\nu _{1}^{2}.\!\,}$

The quadratic formula shows that ${\displaystyle p_{2}(x)\!\,}$ has two simple real roots, since its discriminant ${\displaystyle (\mu _{0}-\mu _{1})^{2}+4\nu _{1}^{2}\!\,}$ is positive.

Let ${\displaystyle r_{21}\!\,}$ and ${\displaystyle r_{22}\!\,}$ be the zeros of ${\displaystyle p_{2}(x)\!\,}$. Their location relative to ${\displaystyle \mu _{0}\!\,}$ follows from sign changes:

${\displaystyle p_{2}(\mu _{0})=-\nu _{1}^{2}\!\,}$ and ${\displaystyle \lim _{x\rightarrow \infty }p_{2}(x)>0\!\,}$ ${\displaystyle \Rightarrow \!\,}$ there exists ${\displaystyle r_{22}\in (\mu _{0},\infty )\!\,}$ such that ${\displaystyle p_{2}(r_{22})=0\!\,}$.

${\displaystyle p_{2}(\mu _{0})=-\nu _{1}^{2}\!\,}$ and ${\displaystyle \lim _{x\rightarrow -\infty }p_{2}(x)>0\!\,}$ ${\displaystyle \Rightarrow \!\,}$ there exists ${\displaystyle r_{21}\in (-\infty ,\mu _{0})\!\,}$ such that ${\displaystyle p_{2}(r_{21})=0\!\,}$.

In conclusion, ${\displaystyle r_{21}<\underbrace {\mu _{0}} _{r_{11}}<r_{22}\!\,}$, i.e. the root of ${\displaystyle p_{1}(x)\!\,}$ interlaces with the roots of ${\displaystyle p_{2}(x)\!\,}$.

### Induction Step

Let ${\displaystyle r_{k-1,1},\dots ,r_{k-1,k-1}\!\,}$ and ${\displaystyle r_{k,1},\dots ,r_{k,k}\!\,}$ be the simple real roots of ${\displaystyle p_{k-1}(x)\!\,}$ and ${\displaystyle p_{k}(x)\!\,}$, respectively, such that the roots of ${\displaystyle p_{k}\!\,}$ are interlaced with the roots of ${\displaystyle p_{k-1}\!\,}$, i.e.,

${\displaystyle r_{k,1}<r_{k-1,1}<r_{k,2}<r_{k-1,2}<\cdots <r_{k-1,k-1}<r_{k,k}\!\,}$

Then, we have that

${\displaystyle p_{k+1}(r_{k,1})=(r_{k,1}-\mu _{k})\underbrace {p_{k}(r_{k,1})} _{0}-\nu _{k}^{2}p_{k-1}(r_{k,1})=-\nu _{k}^{2}p_{k-1}(r_{k,1}),\!\,}$

${\displaystyle p_{k+1}(r_{k,2})=(r_{k,2}-\mu _{k})\underbrace {p_{k}(r_{k,2})} _{0}-\nu _{k}^{2}p_{k-1}(r_{k,2})=-\nu _{k}^{2}p_{k-1}(r_{k,2}).\!\,}$

From the induction hypothesis, ${\displaystyle p_{k-1}(r_{k,1})\!\,}$ and ${\displaystyle p_{k-1}(r_{k,2})\!\,}$ have different signs, since exactly one simple root of ${\displaystyle p_{k-1}\!\,}$, namely ${\displaystyle r_{k-1,1}\!\,}$, lies in ${\displaystyle (r_{k,1},r_{k,2})\!\,}$. Then, by the intermediate value theorem, there exists ${\displaystyle r_{k+1,1}\in (r_{k,1},r_{k,2})\!\,}$ such that ${\displaystyle p_{k+1}(r_{k+1,1})=0\!\,}$.

Proceeding in the same manner for all the intervals ${\displaystyle (r_{k,j},r_{k,j+1})\!\,}$, we obtain that there exists ${\displaystyle r_{k+1,j}\in (r_{k,j},r_{k,j+1})\!\,}$ such that ${\displaystyle p_{k+1}(r_{k+1,j})=0\!\,}$ for ${\displaystyle j=1,\dots ,k-1.\!\,}$

We now consider the smallest and largest roots of ${\displaystyle p_{k+1}\!\,}$, i.e. ${\displaystyle r_{k+1,0}{\mbox{ and }}r_{k+1,k}\!\,}$.

Since for ${\displaystyle k=0,1,2,\ldots \!\,}$, ${\displaystyle p_{k}\!\,}$ is a monic polynomial,

${\displaystyle \lim _{x\rightarrow \infty }p_{k}(x)=\infty \!\,}$

Hence for any ${\displaystyle x>r_{k,k}\!\,}$ (any ${\displaystyle x\!\,}$ larger than the largest root of ${\displaystyle p_{k}\!\,}$)

${\displaystyle p_{k}(x)>0\!\,}$

Therefore

${\displaystyle \lim _{x\rightarrow \infty }p_{k+1}(x)>0\!\,}$ and ${\displaystyle p_{k+1}(r_{k,k})=-\nu _{k}^{2}p_{k-1}(r_{k,k})<0\!\,}$ (the latter because ${\displaystyle r_{k,k}\!\,}$ lies to the right of the largest root of ${\displaystyle p_{k-1}\!\,}$, where the monic ${\displaystyle p_{k-1}\!\,}$ is positive),

which implies there exists ${\displaystyle r_{k+1,k}\in (r_{k,k},\infty )\!\,}$ such that ${\displaystyle p_{k+1}(r_{k+1,k})=0\!\,}$.

If ${\displaystyle k+1\!\,}$ is even, then by similar reasoning

${\displaystyle \lim _{x\rightarrow -\infty }p_{k+1}(x)>0\!\,}$ and ${\displaystyle p_{k+1}(r_{k,1})=-\nu _{k}^{2}p_{k-1}(r_{k,1})<0,\!\,}$ ${\displaystyle \Rightarrow \!\,}$ there exists ${\displaystyle r_{k+1,0}\in (-\infty ,r_{k,1})\!\,}$ such that ${\displaystyle p_{k+1}(r_{k+1,0})=0\!\,}$.

If ${\displaystyle k+1\!\,}$ is odd,

${\displaystyle \lim _{x\rightarrow -\infty }p_{k+1}(x)<0\!\,}$ and ${\displaystyle p_{k+1}(r_{k,1})=-\nu _{k}^{2}p_{k-1}(r_{k,1})>0,\!\,}$ ${\displaystyle \Rightarrow \!\,}$ there exists ${\displaystyle r_{k+1,0}\in (-\infty ,r_{k,1})\!\,}$ such that ${\displaystyle p_{k+1}(r_{k+1,0})=0\!\,}$.

In all, ${\displaystyle p_{k+1}\!\,}$ has ${\displaystyle k+1\!\,}$ simple real roots satisfying ${\displaystyle r_{k+1,0}<r_{k,1}<r_{k+1,1}<r_{k,2}<\cdots <r_{k,k}<r_{k+1,k}\!\,}$, i.e. they interlace with the roots of ${\displaystyle p_{k}\!\,}$.
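Monic orthogonal polynomials satisfy a recurrence of exactly this form, so the polynomials from Problem 1 provide a concrete instance; assuming that connection, a quick numeric check of the interlacing:

```python
import math

# roots of P2 = x^2 - 5/7 and P3 = x^3 - (7/9) x from Problem 1
r2 = [-math.sqrt(5 / 7), math.sqrt(5 / 7)]
r3 = [-math.sqrt(7 / 9), 0.0, math.sqrt(7 / 9)]

# strict interlacing: r3[0] < r2[0] < r3[1] < r2[1] < r3[2]
interlaced = all(r3[i] < r2[i] < r3[i + 1] for i in range(2))
```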

## Problem 3c

 Show that for every ${\displaystyle n=0,1,\dots \!\,}$ the roots of ${\displaystyle p_{n+1}(x)\!\,}$ are the eigenvalues of the symmetric tridiagonal matrix ${\displaystyle T={\begin{pmatrix}\mu _{0}&\nu _{1}&0&\cdots &0\\\nu _{1}&\mu _{1}&\nu _{2}&\ddots &\vdots \\0&\nu _{2}&\mu _{2}&\ddots &0\\\vdots &\ddots &\ddots &\ddots &\nu _{n}\\0&\cdots &0&\nu _{n}&\mu _{n}\end{pmatrix}}}$

## Solution 3c

Let ${\displaystyle T_{n+1}={\begin{pmatrix}\mu _{0}&\nu _{1}&0&\cdots &0\\\nu _{1}&\mu _{1}&\nu _{2}&\ddots &\vdots \\0&\nu _{2}&\mu _{2}&\ddots &0\\\vdots &\ddots &\ddots &\ddots &\nu _{n}\\0&\cdots &0&\nu _{n}&\mu _{n}\end{pmatrix}}}$

Then ${\displaystyle \operatorname {det} (xI_{n+1}-T_{n+1})\!\,}$ is a monic polynomial in ${\displaystyle x\!\,}$ of degree ${\displaystyle n+1\!\,}$.

Expanding this determinant along the last row, we have

${\displaystyle \operatorname {det} (xI_{n+1}-T_{n+1})=(x-\mu _{n})\operatorname {det} (xI_{n}-T_{n})-\nu _{n}^{2}\operatorname {det} (xI_{n-1}-T_{n-1})\qquad (1)\!\,}$

where ${\displaystyle \operatorname {det} (xI_{n}-T_{n})\!\,}$ and ${\displaystyle \operatorname {det} (xI_{n-1}-T_{n-1})\!\,}$ are monic polynomials of degree ${\displaystyle n\!\,}$ and ${\displaystyle n-1\!\,}$, respectively.

Notice that ${\displaystyle \operatorname {det} (xI_{1}-T_{1})=x-\mu _{0}\equiv p_{1}(x)\!\,}$ and if we let ${\displaystyle p_{0}(x)\equiv 1\!\,}$, then (1) is equivalent to the three-term recurrence stated in the problem.

Thus ${\displaystyle p_{n+1}(x)\equiv \operatorname {det} (xI_{n+1}-T_{n+1})\!\,}$, so the roots of ${\displaystyle p_{n+1}(x)\!\,}$ are precisely the eigenvalues of ${\displaystyle T_{n+1}\!\,}$.
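As an illustration, take the orthogonal polynomials from Problem 1 (weight ${\displaystyle x^{4}\!\,}$). By symmetry ${\displaystyle \mu _{k}=0\!\,}$, and the values ${\displaystyle \nu _{1}^{2}=5/7,\nu _{2}^{2}=4/63\!\,}$ below come from the standard formula ${\displaystyle \nu _{k}^{2}=\left\langle p_{k},p_{k}\right\rangle /\left\langle p_{k-1},p_{k-1}\right\rangle \!\,}$, an assumption this solution does not derive. The check confirms ${\displaystyle \operatorname {det} (xI-T_{3})=P_{3}(x)=x^{3}-{\frac {7}{9}}x\!\,}$:

```python
import math

def det3(M):
    # 3x3 determinant by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# tridiagonal T_3 with mu_k = 0, nu_1^2 = 5/7, nu_2^2 = 4/63 (assumed coefficients)
nu1, nu2 = math.sqrt(5 / 7), math.sqrt(4 / 63)
T = [[0, nu1, 0],
     [nu1, 0, nu2],
     [0, nu2, 0]]

def char_poly(x):
    # det(x I - T_3)
    return det3([[x * (i == j) - T[i][j] for j in range(3)] for i in range(3)])

# compare against P3(x) = x^3 - (7/9) x at a few sample points
samples = [-1.5, -0.3, 0.0, 0.7, 2.0]
errors = [abs(char_poly(x) - (x ** 3 - 7 / 9 * x)) for x in samples]
```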