# Examples and counterexamples in mathematics/Real-valued functions of one real variable

## Polynomials

### Polynomial with infinitely many roots

The zero polynomial ${\displaystyle P(x)=0;}$ every number is a root of P. This is the only polynomial with infinitely many roots. A non-zero polynomial is of some degree n (n may be 0,1,2,...) and cannot have more than n roots since, by a well-known theorem of algebra, if ${\displaystyle P(a_{1})=\dots =P(a_{m})=0}$ (for pairwise different ${\displaystyle a_{1},\dots ,a_{m}}$), then necessarily ${\displaystyle P(x)=(x-a_{1})\dots (x-a_{m})Q(x)}$ for some non-zero polynomial Q of degree ${\displaystyle n-m\geq 0.}$
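The factorization above can be illustrated numerically; here is a small sketch using NumPy's polynomial helpers (the particular polynomial P is an arbitrary example, not one from the text):

```python
import numpy as np
import numpy.polynomial.polynomial as npoly

# a polynomial with (among others) the pairwise distinct roots 1, 2, 3:
# P(x) = (x - 1)(x - 2)(x - 3)(x^2 + 1)
factor = npoly.polyfromroots([1, 2, 3])       # coefficients, lowest degree first
P = npoly.polymul(factor, [1, 0, 1])

# dividing out the known roots leaves Q(x) = x^2 + 1 and a zero remainder,
# so P(x) = (x - 1)(x - 2)(x - 3) Q(x), with deg Q = deg P - 3
quotient, remainder = npoly.polydiv(P, factor)
print(quotient)    # coefficients [1, 0, 1], i.e. Q(x) = x^2 + 1
print(remainder)   # zero
```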

### Integer values versus integer coefficients

Every polynomial P with integer coefficients is integer-valued, that is, its value P(k) is an integer for every integer k; but the converse holds only for polynomials of degree at most one. For example, the polynomial ${\displaystyle \textstyle P_{2}(x)={\frac {1}{2}}x^{2}-{\frac {1}{2}}x={\frac {1}{2}}x(x-1)}$ takes on integer values whenever x is an integer, because one of x and x − 1 must be even. The values ${\displaystyle \textstyle P_{2}(k)={\binom {k}{2}}}$ are the binomial coefficients.

More generally, for every n=0,1,2,3,... the polynomial ${\displaystyle \textstyle P_{n}(x)={\frac {1}{n!}}x(x-1)\dots (x-n+1)}$ is integer-valued; ${\displaystyle \textstyle P_{n}(k)={\binom {k}{n}}}$ are the binomial coefficients. In fact, every integer-valued polynomial is an integer linear combination of these Pn.
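Both claims are easy to test mechanically; a quick sketch using exact rational arithmetic (the helper name P is ours):

```python
from fractions import Fraction
from math import comb, factorial

def P(n, x):
    """P_n(x) = x(x-1)...(x-n+1) / n!  (an empty product for n = 0)."""
    prod = Fraction(1)
    for j in range(n):
        prod *= x - j
    return prod / factorial(n)

# P_n(k) is an integer for every integer k, and equals C(k, n) for k >= 0
for n in range(6):
    for k in range(-8, 9):
        value = P(n, k)
        assert value.denominator == 1       # integer-valued
        if k >= 0:
            assert value == comb(k, n)      # the binomial coefficients
```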

### Polynomial mimics cosine: roots

The cosine function, ${\displaystyle f(x)=\cos x,}$ satisfies ${\displaystyle f(0)=1}$ and has infinitely many roots: ${\displaystyle f(\pm 0.5\pi )=f(\pm 1.5\pi )=f(\pm 2.5\pi )=\dots =0.}$ A polynomial cannot satisfy all these conditions; can it satisfy a finite portion of them?

It is easy to find a polynomial P such that ${\displaystyle P(0)=1}$ and ${\displaystyle P(\pm 0.5\pi )=0,}$ namely ${\displaystyle P_{1}(x)=-{\frac {4}{\pi ^{2}}}(x^{2}-0.25\pi ^{2})}$ (check it). What about ${\displaystyle P(0)=1}$ and ${\displaystyle P(\pm 0.5\pi )=P(\pm 1.5\pi )=0\,?}$

The conditions being insensitive to the sign of x, we seek a polynomial of ${\displaystyle x^{2},}$ that is, ${\displaystyle P(x)=Q(x^{2})}$ where Q satisfies ${\displaystyle Q(0)=1}$ and ${\displaystyle Q(0.25\pi ^{2})=Q(2.25\pi ^{2})=0.}$ It is easy to find such Q, namely, ${\displaystyle Q(x)={\frac {16}{9\pi ^{4}}}(x-0.25\pi ^{2})(x-2.25\pi ^{2})}$ (check it), which leads to

${\displaystyle P_{2}(x)={\frac {16}{9\pi ^{4}}}(x^{2}-0.25\pi ^{2})(x^{2}-2.25\pi ^{2}).}$ As we see on the picture, the two functions are rather close for ${\displaystyle -0.5\pi \leq x\leq 0.5\pi ;}$ in fact, the greatest ${\displaystyle |P_{2}(x)-\cos x|}$ for these x is about 0.028, while the greatest ${\displaystyle |P_{1}(x)-\cos x|}$ (for these x) is about 0.056.
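The maxima 0.056 and 0.028 are easy to reproduce on a dense grid; a sketch (the grid size is arbitrary):

```python
import numpy as np

x = np.linspace(-0.5 * np.pi, 0.5 * np.pi, 100001)
P1 = -(4 / np.pi**2) * (x**2 - 0.25 * np.pi**2)
P2 = (16 / (9 * np.pi**4)) * (x**2 - 0.25 * np.pi**2) * (x**2 - 2.25 * np.pi**2)

err1 = np.max(np.abs(P1 - np.cos(x)))   # about 0.056
err2 = np.max(np.abs(P2 - np.cos(x)))   # about 0.028
print(err1, err2)
```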

The next step in this direction: ${\displaystyle \textstyle P_{3}(x)=-{\frac {64}{225\pi ^{6}}}(x^{2}-0.25\pi ^{2})(x^{2}-2.25\pi ^{2})(x^{2}-6.25\pi ^{2})}$ ${\displaystyle \textstyle ={\Big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{25\pi ^{2}}}{\Big )};}$ here ${\displaystyle \textstyle \max _{-0.5\pi \leq x\leq 0.5\pi }|P_{3}(x)-\cos x|}$ is smaller still, about 0.019.

And so on. For every ${\displaystyle n=1,2,\dots }$ the polynomial

${\displaystyle P_{n}(x)={\Big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\Big )}\dots {\Big (}1-{\frac {4x^{2}}{(2n-1)^{2}\pi ^{2}}}{\Big )}=\prod _{k=1}^{n}{\Big (}1-{\frac {4x^{2}}{(2k-1)^{2}\pi ^{2}}}{\Big )}}$

satisfies ${\displaystyle P_{n}(0)=1}$ and ${\displaystyle P_{n}(\pm 0.5\pi )=P_{n}(\pm 1.5\pi )=\dots =P_{n}(\pm (n-0.5)\pi )=0,}$ which is easy to check. It is harder (but possible) to prove that ${\displaystyle P_{n}(x)\to \cos x}$ as ${\displaystyle n\to \infty ,}$ which represents the cosine as an infinite product

${\displaystyle \cos x={\Big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\Big )}\dots =\prod _{k=1}^{\infty }{\Big (}1-{\frac {4x^{2}}{(2k-1)^{2}\pi ^{2}}}{\Big )}.}$
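One can at least watch the partial products approach the cosine numerically; a sketch (the cut-off n is an arbitrary choice, and convergence is slow, of order 1/n):

```python
import numpy as np

def partial_product(n, x):
    """Product of the first n factors (1 - 4x^2 / ((2k-1)^2 pi^2))."""
    k = np.arange(1, n + 1)
    return np.prod(1 - 4 * x**2 / ((2 * k - 1) ** 2 * np.pi**2))

for x in (0.3, 1.0, 2.5):
    print(x, partial_product(20000, x), np.cos(x))   # nearly equal
```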

On the other hand, the well-known power series ${\displaystyle \cos x=1-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-\cdots =\sum _{k=0}^{\infty }{\frac {(-1)^{k}x^{2k}}{(2k)!}}}$ gives another sequence of polynomials ${\displaystyle Q_{n}(x)=1-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-\dots +{\frac {(-1)^{n}x^{2n}}{(2n)!}}=\sum _{k=0}^{n}{\frac {(-1)^{k}x^{2k}}{(2k)!}}}$ converging to the same cosine function. See the picture for Q3; ${\displaystyle \quad \max _{-0.5\pi \leq x\leq 0.5\pi }|Q_{3}(x)-\cos x|\approx 0.0009.}$
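The deviation of Q3 from the cosine can be measured by the same grid check as before; a sketch:

```python
import numpy as np
from math import factorial

def Q(n, x):
    """Taylor polynomial of cos of degree 2n."""
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(n + 1))

x = np.linspace(-0.5 * np.pi, 0.5 * np.pi, 100001)
err = np.max(np.abs(Q(3, x) - np.cos(x)))
print(err)   # roughly 0.0009, attained at the endpoints
```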

Can we check the equality ${\displaystyle \textstyle {\big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\big )}{\big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\big )}\dots =1-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-\cdots }$ by opening the brackets? Let us try. The constant coefficient: just 1=1. The coefficient of ${\displaystyle x^{2}}$: ${\displaystyle \textstyle -{\frac {4}{\pi ^{2}}}-{\frac {4}{9\pi ^{2}}}-\dots =-{\frac {1}{2}},}$ that is, ${\displaystyle \textstyle 1+{\frac {1}{3^{2}}}+{\frac {1}{5^{2}}}+\dots ={\frac {\pi ^{2}}{8}};}$ really? Yes, ${\displaystyle \textstyle 1+{\frac {1}{3^{2}}}+{\frac {1}{5^{2}}}+\dots =\textstyle (1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\dots )-({\frac {1}{2^{2}}}+{\frac {1}{4^{2}}}+{\frac {1}{6^{2}}}+\dots )=(1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\dots )-{\frac {1}{2^{2}}}(1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\dots )={\frac {3}{4}}\cdot {\frac {\pi ^{2}}{6}}={\frac {\pi ^{2}}{8}};}$ the well-known series of reciprocal squares is instrumental.
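The coefficient identity amounts to the sum of reciprocal odd squares being π²/8, which is easy to confirm to several digits (the number of terms is an arbitrary choice; the omitted tail is of order 1/(4N)):

```python
import numpy as np

N = 2_000_000
k = np.arange(N)
partial_sum = np.sum(1.0 / (2 * k + 1) ** 2)   # 1 + 1/3^2 + 1/5^2 + ...
print(partial_sum, np.pi**2 / 8)               # agree to about 7 digits
```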

Such non-rigorous opening of brackets can be made rigorous as follows. For every polynomial P, the constant coefficient is the value of P at zero, P(0); the coefficient of x is the value at zero of the derivative, P'(0); and the coefficient of ${\displaystyle x^{2}}$ is one half of the value at zero of the second derivative, ½P''(0). Clearly, ${\displaystyle Q_{n}(0)=1=f(0),}$ ${\displaystyle Q'_{n}(0)=0=f'(0)}$ and ${\displaystyle Q''_{n}(0)=-1=f''(0)}$ for all ${\displaystyle n\geq 1}$ (as before, ${\displaystyle f(x)=\cos x}$). The calculation above shows that ${\displaystyle P''_{n}(0)\to -1=f''(0)}$ as n tends to infinity. What about the higher derivatives: does ${\displaystyle P_{n}^{(k)}(0)}$ converge to ${\displaystyle f^{(k)}(0)\,}$? It is tedious (if at all possible) to generalize the above calculation to k=4,6,...; fortunately, there is a better approach. Namely, ${\displaystyle P_{n}(z)\to f(z)}$ for all complex numbers z, and moreover, ${\displaystyle \max _{|z|\leq R}|P_{n}(z)-f(z)|\to 0}$ for every R>0. Using Cauchy's differentiation formula one concludes that ${\displaystyle \max _{|z|\leq R}|P_{n}^{(k)}(z)-f^{(k)}(z)|\to 0}$ (as ${\displaystyle n\to \infty }$) for each k, and in particular, ${\displaystyle P_{n}^{(k)}(0)\to f^{(k)}(0).}$

### Limit of derivatives versus derivative of limit

Consider the polynomials

${\displaystyle P_{n}(x)=x(1-x^{2})^{n}\,.}$

For ${\displaystyle -1\leq x\leq 1}$ we have ${\displaystyle P_{n}(x)\to 0}$ (think, why) as ${\displaystyle n\to \infty .}$ Nevertheless, the derivative at zero does not converge to 0; rather, it is equal to 1 (for all n) since, denoting ${\displaystyle Q_{n}(x)=(1-x^{2})^{n},}$ we have ${\displaystyle P'_{n}(x)=(xQ_{n}(x))'=1\cdot Q_{n}(x)+x\cdot Q'_{n}(x);}$ ${\displaystyle P'_{n}(0)=Q_{n}(0)=1.}$

Thus, the limit of the sequence of functions ${\displaystyle (P_{1},P_{2},\dots )}$ on the interval ${\displaystyle [-1,1]}$ is the zero function, hence the derivative of the limit is the zero function as well. However, the limit of the sequence of derivatives ${\displaystyle (P'_{1},P'_{2},\dots )}$ fails to be zero at least for ${\displaystyle x=0.}$ What happens for ${\displaystyle 0<|x|\leq 1\,}$? Here the limit of derivatives is zero, since ${\displaystyle P'_{n}(x)={\big (}1-(2n+1)x^{2}{\big )}(1-x^{2})^{n-1}\to 0}$ (check it; the exponential decay of the second factor outweighs the linear growth of the first factor). Thus,
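Both claims, ${\displaystyle P_{n}(x)\to 0}$ on the interval while ${\displaystyle P'_{n}(0)=1}$ for every n, are visible numerically; a sketch (x = 0.5 is an arbitrary sample point):

```python
import numpy as np

def P(n, x):
    return x * (1 - x**2) ** n

def dP(n, x):
    # P'_n(x) = (1 - (2n+1) x^2) (1 - x^2)^(n-1)
    return (1 - (2 * n + 1) * x**2) * (1 - x**2) ** (n - 1)

for n in (10, 100, 1000):
    # P_n(0.5) -> 0 and P'_n(0.5) -> 0, while P'_n(0) = 1 for every n
    print(n, P(n, 0.5), dP(n, 0.0), dP(n, 0.5))
```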

${\displaystyle 0={\big (}\lim _{n\to \infty }P_{n}(x){\big )}'\neq \lim _{n\to \infty }P'_{n}(x)=f(x)={\begin{cases}0&{\text{for }}-1\leq x<0,\\1&{\text{for }}x=0,\\0&{\text{for }}0<x\leq 1.\end{cases}}}$

It is not always possible to interchange derivative and limit.

Note the two equivalent definitions of the function f; one is piecewise (something for some x, something else for other x, ...), but the other is a single expression ${\displaystyle f(x)=\lim _{n\to \infty }{\big (}1-(2n+1)x^{2}{\big )}(1-x^{2})^{n-1}}$ for all these x.

The function f is discontinuous (at 0), and nevertheless it is the limit of continuous functions ${\displaystyle P'_{n}.}$ This can happen for pointwise convergence (that is, convergence at every point of the considered domain), since the speed of convergence can depend on the point.

Otherwise (when the speed of convergence does not depend on the point) convergence is called uniform; by the uniform convergence theorem, the uniform limit of continuous functions is a continuous function.

It follows that convergence of ${\displaystyle P'_{n}}$ (to f) is non-uniform, but this is a proof by contradiction.

A better understanding may be gained from a direct proof. The derivatives ${\displaystyle P'_{n}}$ fail to converge uniformly, since ${\displaystyle P'_{n}(x)}$ fails to be small (when n is large) for some x close to 0; for instance, try ${\displaystyle \textstyle x={\sqrt {\frac {c}{n-1}}}\,:}$

${\displaystyle P'_{n}{\Big (}{\sqrt {\tfrac {c}{n-1}}}{\Big )}={\Big (}1-{\frac {(2n+1)c}{n-1}}{\Big )}{\Big (}1-{\frac {c}{n-1}}{\Big )}^{n-1}\to (1-2c)e^{-c}\quad {\text{as }}n\to \infty }$

for all ${\displaystyle c>0;}$ and ${\displaystyle (1-2c)e^{-c}}$ is not zero (unless ${\displaystyle c=0.5}$).
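Evaluating ${\displaystyle P'_{n}}$ at the points ${\displaystyle x={\sqrt {c/(n-1)}}}$ confirms this limit; a sketch (c = 2 is an arbitrary choice):

```python
import numpy as np

def dP(n, x):
    # P'_n(x) = (1 - (2n+1) x^2) (1 - x^2)^(n-1)
    return (1 - (2 * n + 1) * x**2) * (1 - x**2) ** (n - 1)

c = 2.0
for n in (10, 100, 100000):
    x = np.sqrt(c / (n - 1))
    # the values approach (1 - 2c) e^{-c}, which is nonzero for c != 0.5,
    # so the supremum of |P'_n| near 0 does not shrink: non-uniform convergence
    print(n, dP(n, x), (1 - 2 * c) * np.exp(-c))
```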

In contrast, ${\displaystyle P_{n}\to 0}$ uniformly on ${\displaystyle [-1,1],}$ that is, ${\displaystyle \max _{-1\leq x\leq 1}|P_{n}(x)|\to 0}$ as ${\displaystyle n\to \infty ,}$ since the maximum is reached at ${\displaystyle \textstyle x=\pm {\frac {1}{\sqrt {2n+1}}}}$ (check it by solving the equation ${\displaystyle P'_{n}(x)=0}$) and ${\displaystyle \textstyle 0\leq P_{n}{\big (}{\frac {1}{\sqrt {2n+1}}}{\big )}\leq {\frac {1}{\sqrt {2n+1}}}\to 0.}$ And still, it appears to be impossible to interchange derivative and limit. Compare this case with a well-known theorem:
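The uniform convergence of ${\displaystyle P_{n}}$ itself, with the maximum at ${\displaystyle \pm 1/{\sqrt {2n+1}}}$ tending to zero, can also be watched on a grid; a sketch:

```python
import numpy as np

def P(n, x):
    return x * (1 - x**2) ** n

x = np.linspace(-1, 1, 200001)
for n in (10, 100, 1000):
    xstar = 1 / np.sqrt(2 * n + 1)
    grid_max = np.max(np.abs(P(n, x)))
    # the grid maximum matches P_n(1/sqrt(2n+1)) and stays below 1/sqrt(2n+1)
    print(n, grid_max, P(n, xstar), xstar)
```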

If ${\displaystyle (f_{n})}$ is a sequence of differentiable functions on ${\displaystyle [a,b]}$ such that ${\displaystyle \lim _{n\to \infty }f_{n}(x_{0})}$ exists (and is finite) for some ${\displaystyle x_{0}\in [a,b]}$ and the sequence ${\displaystyle (f'_{n})}$ converges uniformly on ${\displaystyle [a,b]}$, then ${\displaystyle f_{n}}$ converges uniformly to a function ${\displaystyle f}$ on ${\displaystyle [a,b]}$, and ${\displaystyle f'(x)=\lim _{n\to \infty }f'_{n}(x)}$ for ${\displaystyle x\in [a,b]}$.

Uniform convergence of derivatives ${\displaystyle f'_{n}}$ is required; uniform convergence of functions ${\displaystyle f_{n}}$ is not enough.

Complex numbers, helpful in Sect. "Polynomial mimics cosine: roots", are helpless here, since for ${\displaystyle z=\pm ci}$ we have ${\displaystyle |P_{n}(z)|=c(1+c^{2})^{n}\to \infty }$ for all ${\displaystyle c>0.}$