Polynomial with infinitely many roots
Every number is a root of the zero polynomial P = 0. This is the only polynomial with infinitely many roots. A non-zero polynomial is of some degree n (n may be 0, 1, 2, ...) and cannot have more than n roots since, by a well-known theorem of algebra, if P(x1) = ··· = P(xk) = 0 (for pairwise different x1, ..., xk), then necessarily P(x) = (x − x1)···(x − xk)Q(x) for some non-zero polynomial Q of degree n − k; hence k ≤ n.
Integer values versus integer coefficients
Every polynomial P with integer coefficients is integer-valued, that is, its value P(k) is an integer for every integer k; but the converse is true only for polynomials of degree at most one (linear functions). For example, the polynomial P(x) = x(x − 1)/2 takes on integer values whenever x is an integer, because one of x and x − 1 must be an even number. The values P(k) = k(k − 1)/2 are the binomial coefficients C(k, 2).
More generally, for every n = 0, 1, 2, 3, ... the polynomial Pn(x) = x(x − 1)···(x − n + 1)/n! is integer-valued; its values Pn(k) = C(k, n) are the binomial coefficients. In fact, every integer-valued polynomial is an integer linear combination of these Pn.
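Integer-valuedness of these Pn, and the identity Pn(k) = C(k, n) for non-negative integers k, can be corroborated numerically; here is a sketch using exact rational arithmetic (the helper name `P` is mine):

```python
from fractions import Fraction
from math import comb, factorial

def P(n, x):
    # P_n(x) = x(x-1)...(x-n+1)/n!, evaluated exactly as a rational number
    num = Fraction(1)
    for i in range(n):
        num *= x - i
    return num / factorial(n)

for n in range(6):
    for k in range(-5, 11):
        v = P(n, k)
        assert v.denominator == 1      # integer-valued at every integer
        if k >= 0:
            assert v == comb(k, n)     # the binomial coefficient C(k, n)
print(P(2, 7), P(5, 10), P(3, -1))     # 21 252 -1
```

Note that the check also covers negative integers k, where Pn(k) is still an integer although C(k, n) is no longer defined.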
Polynomial mimics cosine: roots
The cosine function satisfies cos 0 = 1 and has infinitely many roots: ±π/2, ±3π/2, ±5π/2, ... A polynomial cannot satisfy all these conditions; can it satisfy a finite portion of them?
It is easy to find a polynomial P such that P(0) = 1 and P(±π/2) = 0; namely, P(x) = 1 − 4x²/π² (check it). What about P(0) = 1 and P(±π/2) = P(±3π/2) = 0?
The conditions being insensitive to the sign of x, we seek a polynomial of x², that is, P(x) = Q(x²), where Q satisfies Q(0) = 1 and Q(π²/4) = Q(9π²/4) = 0. It is easy to find such Q, namely, Q(u) = (1 − 4u/π²)(1 − 4u/(9π²)) (check it), which leads to P(x) = (1 − 4x²/π²)(1 − 4x²/(9π²)).
As we see on the picture, the two functions are rather close for −π/2 ≤ x ≤ π/2; in fact, the greatest |cos x − P(x)| for these x is about 0.028, while the greatest |cos x − (1 − 4x²/π²)| (for these x) is about 0.056.
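Both error figures are easy to reproduce numerically; a quick sketch (the variable names are mine):

```python
import numpy as np

x = np.linspace(-np.pi/2, np.pi/2, 200001)
P1 = 1 - 4*x**2/np.pi**2                                 # one pair of roots
P2 = (1 - 4*x**2/np.pi**2)*(1 - 4*x**2/(9*np.pi**2))     # two pairs of roots

err1 = np.max(np.abs(np.cos(x) - P1))
err2 = np.max(np.abs(np.cos(x) - P2))
print(round(err1, 3), round(err2, 3))  # 0.056 0.028
```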
The next step in this direction:
P(x) = (1 − 4x²/π²)(1 − 4x²/(9π²))(1 − 4x²/(25π²)).
And so on. For every n = 1, 2, 3, ... the polynomial
Pn(x) = (1 − 4x²/π²)(1 − 4x²/(9π²))···(1 − 4x²/((2n−1)²π²))
satisfies Pn(0) = 1 and Pn(±π/2) = Pn(±3π/2) = ··· = Pn(±(2n−1)π/2) = 0, which is easy to check. It is harder (but possible) to prove that Pn(x) → cos x as n → ∞, which represents the cosine as an infinite product:
cos x = (1 − 4x²/π²)(1 − 4x²/(9π²))(1 − 4x²/(25π²))···
On the other hand, the well-known power series cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ··· gives another sequence of polynomials, Qn(x) = 1 − x²/2! + ··· + (−1)^n x^(2n)/(2n)!, converging to the same cosine function. See the picture for Q3.
Can we check the equality
(1 − 4x²/π²)(1 − 4x²/(9π²))··· = 1 − x²/2! + x⁴/4! − ···
by opening the brackets? Let us try. The constant coefficient: just 1 = 1. The coefficient of x²: −4/π² − 4/(9π²) − 4/(25π²) − ··· = −1/2, that is, 1 + 1/9 + 1/25 + ··· = π²/8; really? Yes, the well-known series of reciprocal squares, 1 + 1/4 + 1/9 + ··· = π²/6, is instrumental: the even denominators contribute a quarter of the total, π²/24, and the odd denominators the rest, π²/6 − π²/24 = π²/8.
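The claim 1 + 1/9 + 1/25 + ··· = π²/8 is easily corroborated numerically; a sketch:

```python
from math import pi

# partial sum of reciprocal odd squares: 1 + 1/9 + 1/25 + ...
s = sum(1/(2*k - 1)**2 for k in range(1, 10**6 + 1))
print(s, pi**2/8)  # the partial sum approaches pi^2/8
```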
Such non-rigorous opening of brackets can be made rigorous as follows. For every polynomial P, the constant coefficient is the value of P at zero, P(0); the coefficient of x is the value at zero of the derivative, P '(0); and the coefficient of x² is one half of the value at zero of the second derivative, ½P''(0). Clearly, Pn(0) = 1 = cos 0 and Pn'(0) = 0 = cos' 0 for all n (as before, Pn denotes the product of the first n brackets; it is a polynomial of x², hence its derivative vanishes at zero). The calculation above shows that Pn''(0) → −1 = cos'' 0 as n tends to infinity. What about the higher derivatives: does Pn^(k)(0) converge to cos^(k) 0? It is tedious (if at all possible) to generalize the above calculation to k = 4, 6, ...; fortunately, there is a better approach. Namely, Pn(z) → cos z for all complex numbers z, and moreover, the convergence is uniform on every disk |z| ≤ R: max over |z| ≤ R of |Pn(z) − cos z| tends to 0 for every R > 0. Using Cauchy's differentiation formula one concludes that Pn^(k)(z) → cos^(k) z (as n → ∞) for each k, and in particular, Pn^(k)(0) → cos^(k) 0.
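For k = 4 the "tedious" calculation can at least be delegated to a machine: the coefficient of x⁴ in Pn is the second elementary symmetric function e2 of the numbers a_k = 4/((2k−1)²π²), and it should approach 1/24, so that Pn''''(0) → 4!·(1/24) = 1 = cos'''' 0. A sketch (names mine), using the identity e2 = (e1² − Σa_k²)/2:

```python
from math import pi

n = 10**6
a = [4/((2*k - 1)**2*pi**2) for k in range(1, n + 1)]

e1 = sum(a)                             # coefficient of x^2 is -e1 -> -1/2
e2 = (e1**2 - sum(t*t for t in a))/2    # coefficient of x^4 is +e2 -> 1/24
print(e1, e2)
```

Indeed, e1 → 1/2 and Σa_k² → 16/π⁴ · π⁴/96 = 1/6, whence e2 → (1/4 − 1/6)/2 = 1/24.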
Limit of derivatives versus derivative of limit
For fn(x) = x e^(−nx²) we have fn(x) → 0 for every x (think, why) as n → ∞. Nevertheless, the derivative at zero does not converge to 0; rather, it is equal to 1 (for all n) since, denoting g(x) = x e^(−x²), we have fn(x) = (1/√n) g(√n x), hence fn'(x) = g'(√n x) and fn'(0) = g'(0) = 1.
Thus, the limit of the sequence of functions fn on the whole line (−∞, ∞) is the zero function, hence the derivative of the limit is the zero function as well. However, the limit of the sequence of derivatives fn' fails to be zero at least for x = 0. What happens for x ≠ 0? Here the limit of derivatives is zero, since fn'(x) = (1 − 2nx²) e^(−nx²) → 0 (check it; the exponential decay of the second factor outweighs the linear growth of the first factor). Thus, fn'(x) → f(x) for every x, where f(0) = 1 and f(x) = 0 for all x ≠ 0.
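The pointwise picture is easy to see numerically; a sketch (helper names `f`, `fprime` are mine):

```python
import numpy as np

def f(n, x):
    return x*np.exp(-n*x**2)

def fprime(n, x):
    # f_n'(x) = (1 - 2 n x^2) e^{-n x^2}
    return (1 - 2*n*x**2)*np.exp(-n*x**2)

# f_n(x) -> 0 at every point, yet f_n'(0) = 1 for every n,
# while f_n'(x) -> 0 for x != 0
for n in (1, 10, 100, 1000):
    print(n, f(n, 0.5), fprime(n, 0.0), fprime(n, 0.5))
```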
It is not always possible to interchange derivative and limit.
Note the two equivalent definitions of the function f; one is piecewise (one value for some x, another value for the other x), but the other is a single expression for all these x, for instance f(x) = lim_{m→∞} e^(−mx²).
The function f is discontinuous (at 0), and nevertheless it is the limit of the continuous functions fn'. This can happen for pointwise convergence (that is, convergence at every point of the considered domain), since the speed of convergence can depend on the point.
Otherwise (when the speed of convergence does not depend on the point) convergence is called uniform; by the uniform convergence theorem, the uniform limit of continuous functions is a continuous function.
It follows that the convergence of fn' (to f) is non-uniform; but this is a proof by contradiction.
A better understanding may be gained from a direct proof. The derivatives fn' fail to converge uniformly, since fn'(x) fails to be small (when n is large) for some x close to 0; for instance, try x = 1/√n:
fn'(1/√n) = (1 − 2) e^(−1) = −1/e
for all n, and −1/e is not zero (while f(1/√n) = 0, since 1/√n ≠ 0).
In contrast, fn → 0 uniformly on (−∞, ∞); that is, max over x of |fn(x)| tends to 0 as n → ∞, since the maximum of |fn| is reached at x = ±1/√(2n) (check it by solving the equation fn'(x) = 0) and fn(1/√(2n)) = 1/√(2en) → 0. And still, it appears to be impossible to interchange derivative and limit. Compare this case with a well-known theorem:
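Both claims, the location of the maximum and the constant value −1/e of fn' at 1/√n, are easy to confirm; a sketch (the helper name `f` is mine):

```python
import numpy as np

def f(n, x):
    return x*np.exp(-n*x**2)

for n in (10, 1000, 100000):
    xmax = 1/np.sqrt(2*n)
    # max of f_n: f_n(1/sqrt(2n)) = 1/sqrt(2 e n) -> 0  (uniform convergence)
    assert np.isclose(f(n, xmax), 1/np.sqrt(2*np.e*n))
    # but f_n'(1/sqrt(n)) = -1/e for every n  (derivatives: no uniform convergence)
    h = 1/np.sqrt(n)
    print(n, f(n, xmax), (1 - 2*n*h**2)*np.exp(-n*h**2))
```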
- If fn is a sequence of differentiable functions on [a, b] such that lim fn(x0) exists (and is finite) for some x0 in [a, b] and the sequence of derivatives fn' converges uniformly on [a, b], then fn converges uniformly to a function f on [a, b], and fn'(x) → f '(x) for x in [a, b].
Uniform convergence of derivatives is required; uniform convergence of functions is not enough.
Complex numbers, helpful in Sect. "Polynomial mimics cosine: roots", are helpless here, since for z = iy (y ≠ 0) we have |fn(iy)| = |y| e^(ny²) → ∞ as n → ∞; the convergence fn → 0 fails everywhere off the real line.
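Indeed, on the imaginary axis the factor e^(−nz²) becomes e^(+ny²) and fn blows up; a sketch (the helper name `f` is mine):

```python
import cmath

def f(n, z):
    return z*cmath.exp(-n*z**2)

# for z = iy (y != 0): |f_n(iy)| = |y| e^{n y^2}, growing without bound in n
for n in (1, 5, 10):
    print(n, abs(f(n, 1j)))  # e^1, e^5, e^10
```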