Examples and counterexamples in mathematics/Real-valued functions of one real variable



Polynomial with infinitely many roots

Consider the zero polynomial, $P(x)=0$; every number is a root of $P$. This is the only polynomial with infinitely many roots. A non-zero polynomial is of some degree $n$ ($n$ may be $0,1,2,\dots$) and cannot have more than $n$ roots since, by a well-known theorem of algebra, if $P(x_1)=\dots=P(x_k)=0$ (for pairwise different $x_1,\dots,x_k$), then necessarily $P(x)=(x-x_1)\cdots(x-x_k)\,Q(x)$ for some non-zero polynomial $Q$ of degree $n-k$; hence $k\le n$.
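If you like, check the factorization numerically; below is a small Python sketch (mine, not part of the original text) that divides a sample polynomial by the factors coming from two of its roots.

```python
# A small check of the factorization theorem (my sketch, not part of the
# original text): dividing P by (x - x1)...(x - xk), where x1, ..., xk are
# roots of P, leaves zero remainder and a quotient of degree n - k.
import sympy as sp

x = sp.symbols('x')
P = sp.expand((x - 1) * (x - 2) * (x**2 + 1))  # degree n = 4; roots include 1, 2
divisor = (x - 1) * (x - 2)                    # k = 2 known roots

Q, r = sp.div(P, divisor, x)  # algebraic division of polynomials
print(Q)  # x**2 + 1, a non-zero polynomial of degree n - k = 2
print(r)  # 0, so P(x) = (x - 1)(x - 2) Q(x)
```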

Integer values versus integer coefficients

Every polynomial $P$ with integer coefficients is integer-valued, that is, its value $P(k)$ is an integer for every integer $k$; but the converse is true only for first degree polynomials (linear functions). For example, the polynomial $P(x)=\frac{x(x-1)}2$ takes on integer values whenever $x$ is an integer. That is because one of $x$ and $x-1$ must be an even number. The values $P(k)=\binom k2$ are the binomial coefficients.

More generally, for every $n=0,1,2,3,\dots$ the polynomial $$P_n(x)=\frac{x(x-1)\cdots(x-n+1)}{n!}=\binom xn$$ is integer-valued; its values $P_n(k)=\binom kn$ are the binomial coefficients. In fact, every integer-valued polynomial is an integer linear combination of these $P_n$.
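Here is a short Python sketch (my own illustration, using the classical forward-difference device): by Newton's forward-difference formula, the coefficient of $P_n$ in the combination is the $n$-th forward difference of the polynomial's values at $0,1,2,\dots$, an integer whenever the polynomial is integer-valued.

```python
# Recovering the integer coefficients of the binomial-basis expansion
# (my own illustration, not from the original text).
from math import comb

def forward_differences(values):
    """First entries of successive difference rows of the list `values`."""
    coeffs, row = [], list(values)
    while row:
        coeffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return coeffs

P = lambda k: k * (k - 1) // 2                 # the example above
a = forward_differences([P(k) for k in range(5)])
print(a)                                       # [0, 0, 1, 0, 0]: P = P_2
print(sum(a[n] * comb(10, n) for n in range(5)), P(10))  # 45 45
```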

Polynomial mimics cosine: interpolation

[picture: Cosine: notable values]

The cosine function, $x\mapsto\cos x$, satisfies $\cos 0=1$, $\cos\frac\pi3=\frac12$, $\cos\frac\pi2=0$, also $\cos(\pi-x)=-\cos x$ and $\cos(x+2\pi)=\cos x$ for all $x$, which gives infinitely many $x$ such that $\cos x$ is one of the numbers $0,\pm\frac12,\pm1$, that is, infinitely many points on the graph. A polynomial cannot satisfy all these conditions, because $|P(x)|\to\infty$ as $|x|\to\infty$ for every non-constant polynomial $P$; can it satisfy a finite portion of them?

Let us try to find $P$ that satisfies the five conditions $$P(0)=1,\quad P\Big(\frac\pi3\Big)=\frac12,\quad P\Big(\frac\pi2\Big)=0,\quad P\Big(\frac{2\pi}3\Big)=-\frac12,\quad P(\pi)=-1.$$ For convenience we rescale $x$, letting $Q(x)=P\big(\frac{\pi x}6\big)$, and rewrite the five conditions in terms of $Q$: $$Q(0)=1,\quad Q(2)=\frac12,\quad Q(3)=0,\quad Q(4)=-\frac12,\quad Q(6)=-1.$$ In order to find such $Q$ we use Lagrange polynomials.

[picture: Lagrange interpolation polynomial]

Using the polynomial $f(x)=x(x-2)(x-3)(x-4)(x-6)$ of degree 5 with roots at the given points 0, 2, 3, 4, 6, and taking into account that $f'(0)=(-2)(-3)(-4)(-6)=144$ (check it by differentiating the product), we consider the so-called Lagrange basis polynomial $$\ell_0(x)=\frac{f(x)}{144\,x}=\frac{(x-2)(x-3)(x-4)(x-6)}{144}$$ of degree 4 with roots at 2, 3, 4, 6 (but not 0); the division in the left-hand side is interpreted algebraically as division of polynomials (that is, finding a polynomial whose product by the denominator $144x$ is the numerator $f(x)$) rather than division of functions; thus, the quotient is defined for all $x$, including 0. Its value at 0 is 1. Think, why; see the picture; recall that $f'(0)=\lim_{x\to0}\frac{f(x)}x$, whence $\ell_0(0)=\frac{f'(0)}{144}=1$.

Similarly, the second Lagrange basis polynomial $$\ell_1(x)=\frac{x(x-3)(x-4)(x-6)}{2(2-3)(2-4)(2-6)}=-\frac{x(x-3)(x-4)(x-6)}{16}$$ takes the values 0, 1, 0, 0, 0 (at 0, 2, 3, 4, 6 respectively). The third one takes the values 0, 0, 1, 0, 0. And so on (calculate the fourth and fifth). It remains to combine these five Lagrange basis polynomials with the coefficients equal to the required values of $Q$: $$Q(x)=1\cdot\ell_0(x)+\tfrac12\,\ell_1(x)+0\cdot\ell_2(x)-\tfrac12\,\ell_3(x)-1\cdot\ell_4(x).$$

[picture: Polynomial approximation to cosine]

Finally, $P(x)=Q\big(\tfrac{6x}\pi\big)$. As we see on the picture, the two functions are quite close for $0\le x\le\pi$; in fact, the greatest $|P(x)-\cos x|$ for these $x$ is about 0.00029.
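These numbers are easy to test; here is one possible Python sketch (mine; the nodes and the rescaling follow the text above).

```python
# Numerical check (my sketch): build Q by Lagrange interpolation at
# 0, 2, 3, 4, 6, set P(x) = Q(6x/pi), and measure the deviation from cos.
import numpy as np

nodes  = np.array([0.0, 2.0, 3.0, 4.0, 6.0])
values = np.array([1.0, 0.5, 0.0, -0.5, -1.0])  # required values of Q

def Q(t):
    """Lagrange interpolation polynomial through (nodes, values)."""
    t, total = np.asarray(t, dtype=float), 0.0
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        basis = np.ones_like(t)
        for m, xm in enumerate(nodes):
            if m != j:
                basis *= (t - xm) / (xj - xm)
        total = total + yj * basis
    return total

x = np.linspace(0.0, np.pi, 100001)
print(np.abs(Q(6 * x / np.pi) - np.cos(x)).max())  # about 2.9e-4, as claimed
```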

A better approximation can be obtained via derivative. The derivative of $\cos x$ being $-\sin x$, we have $$\cos'0=0,\quad \cos'\tfrac\pi3=-\tfrac{\sqrt3}2,\quad \cos'\tfrac\pi2=-1,\quad \cos'\tfrac{2\pi}3=-\tfrac{\sqrt3}2,\quad \cos'\pi=0.$$ The corresponding derivatives of $P$ are close but different; for instance, $P'\big(\tfrac\pi2\big)$ is close to $-1$ but not equal to it.

In order to fix the derivatives without spoiling the values we replace $Q(x)$ with $Q(x)+f(x)R(x)$, where $f(x)=x(x-2)(x-3)(x-4)(x-6)$ as before and $R$ is a polynomial of degree 4 such that the derivative of $Q(x)+f(x)R(x)$ is equal to $-\frac\pi6\sin\frac{\pi x}6$ for $x=0,2,3,4,6$; it means $Q'(x)+f'(x)R(x)=-\frac\pi6\sin\frac{\pi x}6$, since $f(x)=0$ for these $x$; so, $$R(x)=\frac{-\frac\pi6\sin\frac{\pi x}6-Q'(x)}{f'(x)}\quad\text{for }x=0,2,3,4,6.$$


We find such R as before:

[picture: Polynomial approximation to cosine]

and get a better approximation $P(x)=Q\big(\tfrac{6x}\pi\big)+f\big(\tfrac{6x}\pi\big)R\big(\tfrac{6x}\pi\big)$; in fact, the greatest $|P(x)-\cos x|$ for $0\le x\le\pi$ is now considerably smaller than 0.00029. If you still want a smaller error, try the second derivative and a correction of the form $f^2(x)S(x)$.
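For the curious: the corrected polynomial can also be obtained from a library routine. The sketch below (my choice of tool, not the book's) uses SciPy's KroghInterpolator, where listing a node twice makes the interpolant match the first derivative there; by uniqueness of Hermite interpolation this is the same degree-9 polynomial as $Q+fR$.

```python
# Hermite-type interpolation via SciPy (my sketch, not from the original).
import numpy as np
from scipy.interpolate import KroghInterpolator

nodes = np.array([0.0, 2.0, 3.0, 4.0, 6.0])
g  = lambda t: np.cos(np.pi * t / 6)                 # rescaled cosine
dg = lambda t: -(np.pi / 6) * np.sin(np.pi * t / 6)  # its derivative

xi = np.repeat(nodes, 2)       # each node twice: value, then derivative
yi = np.empty_like(xi)
yi[0::2], yi[1::2] = g(nodes), dg(nodes)

Q = KroghInterpolator(xi, yi)  # Hermite interpolant of degree 9
x = np.linspace(0.0, np.pi, 100001)
print(np.abs(Q(6 * x / np.pi) - np.cos(x)).max())  # much smaller than 0.00029
```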

Polynomial mimics cosine: roots

The cosine function, $x\mapsto\cos x$, satisfies $\cos 0=1$ and has infinitely many roots: $\cos\big({\pm}\tfrac\pi2\big)=\cos\big({\pm}\tfrac{3\pi}2\big)=\cos\big({\pm}\tfrac{5\pi}2\big)=\dots=0.$ A polynomial cannot satisfy all these conditions; can it satisfy a finite portion of them?

It is easy to find a polynomial $P$ such that $P(0)=1$ and $P\big({\pm}\tfrac\pi2\big)=0$; namely, $P(x)=1-\tfrac{4x^2}{\pi^2}$ (check it). What about $P(0)=1$ and $P\big({\pm}\tfrac\pi2\big)=P\big({\pm}\tfrac{3\pi}2\big)=0$?

The conditions being insensitive to the sign of $x$, we seek a polynomial of $x^2$, that is, $P(x)=Q(x^2)$, where $Q$ satisfies $Q(0)=1$ and $Q\big(\tfrac{\pi^2}4\big)=Q\big(\tfrac{9\pi^2}4\big)=0$. It is easy to find such $Q$, namely, $Q(t)=\big(1-\tfrac{4t}{\pi^2}\big)\big(1-\tfrac{4t}{9\pi^2}\big)$ (check it), which leads to $$P(x)=\Big(1-\frac{4x^2}{\pi^2}\Big)\Big(1-\frac{4x^2}{9\pi^2}\Big).$$

[picture: Polynomial approximation to cosine]

As we see on the picture, the two functions are rather close for $-\tfrac\pi2\le x\le\tfrac\pi2$; in fact, the greatest $|P(x)-\cos x|$ for these $x$ is about 0.028, while the greatest $\big|1-\tfrac{4x^2}{\pi^2}-\cos x\big|$ (for these $x$) is about 0.056.
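A quick numerical check of these two numbers (my own sketch):

```python
# Errors of the one-factor and two-factor approximations on [-pi/2, pi/2].
import numpy as np

x = np.linspace(-np.pi / 2, np.pi / 2, 100001)
one_factor = 1 - 4 * x**2 / np.pi**2
two_factor = one_factor * (1 - 4 * x**2 / (9 * np.pi**2))
print(np.abs(two_factor - np.cos(x)).max())  # about 0.028
print(np.abs(one_factor - np.cos(x)).max())  # about 0.056
```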

[picture: Polynomial approximation to cosine]

The next step in this direction: $$P(x)=\Big(1-\frac{4x^2}{\pi^2}\Big)\Big(1-\frac{4x^2}{9\pi^2}\Big)\Big(1-\frac{4x^2}{25\pi^2}\Big).$$

And so on. For every $n=1,2,3,\dots$ the polynomial

$$P_n(x)=\prod_{k=1}^n\Big(1-\frac{4x^2}{(2k-1)^2\pi^2}\Big)$$

satisfies $P_n(0)=1$ and $P_n\big({\pm}\tfrac\pi2\big)=P_n\big({\pm}\tfrac{3\pi}2\big)=\dots=P_n\big({\pm}\tfrac{(2n-1)\pi}2\big)=0$, which is easy to check. It is harder (but possible) to prove that $P_n(x)\to\cos x$ as $n\to\infty$, which represents the cosine as an infinite product $$\cos x=\prod_{k=1}^\infty\Big(1-\frac{4x^2}{(2k-1)^2\pi^2}\Big).$$
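The convergence is easy to observe numerically, even though proving it is harder; a possible Python sketch (mine):

```python
# Partial products P_n at a fixed point; the gap shrinks roughly like 1/n.
import numpy as np

def P(n, x):
    k = np.arange(1, n + 1)
    return np.prod(1 - 4 * x**2 / ((2 * k - 1) ** 2 * np.pi**2))

for n in (1, 10, 100, 1000):
    print(n, P(n, 1.0) - np.cos(1.0))
```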

[picture: Polynomial approximation to cosine]

On the other hand, the well-known power series $$\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\dots$$ gives another sequence of polynomials $Q_n(x)=\sum_{k=0}^n\frac{(-1)^k}{(2k)!}x^{2k}$ converging to the same cosine function. See the picture for $Q_3$; $Q_3(x)=1-\frac{x^2}2+\frac{x^4}{24}-\frac{x^6}{720}$.

Can we check the equality $$\prod_{k=1}^\infty\Big(1-\frac{4x^2}{(2k-1)^2\pi^2}\Big)=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\dots$$ by opening the brackets? Let us try. The constant coefficient: just $1=1$. The coefficient of $x^2$: $-\sum_{k=1}^\infty\frac4{(2k-1)^2\pi^2}=-\frac1{2!}$, that is, $\sum_{k=1}^\infty\frac1{(2k-1)^2}=\frac{\pi^2}8$; really? Yes, the well-known series of reciprocal squares $\sum_{k=1}^\infty\frac1{k^2}=\frac{\pi^2}6$ is instrumental: $$\sum_{k=1}^\infty\frac1{(2k-1)^2}=\sum_{k=1}^\infty\frac1{k^2}-\sum_{k=1}^\infty\frac1{(2k)^2}=\frac{\pi^2}6-\frac14\cdot\frac{\pi^2}6=\frac{\pi^2}8.$$
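A numeric sanity check of this coefficient (my own sketch):

```python
# The sum of reciprocal odd squares approaches pi^2/8, so the product's
# x^2 coefficient is -1/2, matching the power series.
import numpy as np

k = np.arange(1, 1_000_001)
s = np.sum(1.0 / (2 * k - 1) ** 2)
print(s, np.pi**2 / 8)    # both about 1.2337
print(-4 / np.pi**2 * s)  # about -0.5 = -1/2!
```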

Such non-rigorous opening of brackets can be made rigorous as follows. For every polynomial $P$, the constant coefficient is the value of $P$ at zero, $P(0)$; the coefficient of $x$ is the value at zero of the derivative, $P'(0)$; and the coefficient of $x^2$ is one half of the value at zero of the second derivative, $\frac12P''(0)$. Clearly, $P_n(0)=1=Q_n(0)$ and $P_n'(0)=0=Q_n'(0)$ for all $n$ (as before, $P_n(x)=\prod_{k=1}^n\big(1-\frac{4x^2}{(2k-1)^2\pi^2}\big)$ and $Q_n(x)=\sum_{k=0}^n\frac{(-1)^k}{(2k)!}x^{2k}$). The calculation above shows that $P_n''(0)\to-1=\cos''(0)$ as $n$ tends to infinity. What about higher derivatives: does $P_n^{(k)}(0)$ converge to $\cos^{(k)}(0)$? It is tedious (if at all possible) to generalize the above calculation to $k=4,6,\dots$; fortunately, there is a better approach. Namely, $P_n(z)\to\cos z$ for all complex numbers $z$, and moreover, $\max_{|z|\le R}|P_n(z)-\cos z|\to0$ for every $R>0$. Using Cauchy's differentiation formula one concludes that $P_n^{(k)}(0)\to\cos^{(k)}(0)$ (as $n\to\infty$) for each $k$; in particular, comparing the coefficients of $x^4$ one gets $\sum_{k=1}^\infty\frac1{(2k-1)^4}=\frac{\pi^4}{96}$.
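The case $k=4$ can also be observed numerically; the sketch below (mine) uses the fact that the coefficient of $x^4$ in $P_n$ is the second elementary symmetric sum $e_2$ of the numbers $a_k=\frac4{(2k-1)^2\pi^2}$, so $P_n''''(0)=4!\,e_2$.

```python
# Fourth derivative of P_n at 0, via elementary symmetric sums.
import numpy as np

for n in (10, 100, 1000, 10000):
    k = np.arange(1, n + 1)
    a = 4.0 / ((2 * k - 1) ** 2 * np.pi**2)
    e2 = (a.sum() ** 2 - (a**2).sum()) / 2  # sum of a_i * a_j over i < j
    print(n, 24 * e2)                       # tends to 1 = cos''''(0)
```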

Limit of derivatives versus derivative of limit

For $f_n(x)=xe^{-nx}$ on $[0,\infty)$ we have $f_n(x)\to0$ (think, why) as $n\to\infty$. Nevertheless, the derivative at zero does not converge to 0; rather, it is equal to 1 (for all $n$) since, denoting $g(x)=xe^{-x}$, we have $$f_n(x)=\frac1n\,g(nx),\qquad f_n'(x)=g'(nx)=(1-nx)e^{-nx},\qquad f_n'(0)=g'(0)=1.$$

[picture: Derivative of limit]

Thus, the limit of the sequence of functions $f_n$ on the interval $[0,\infty)$ is the zero function, hence the derivative of the limit is the zero function as well. However, the limit of the sequence of derivatives fails to be zero at least for $x=0$. What happens for $x>0$? Here the limit of derivatives is zero, since $f_n'(x)=(1-nx)e^{-nx}\to0$ (check it; the exponential decay of the second factor outweighs the linear growth of the first factor). Thus,

It is not always possible to interchange derivative and limit.

Note the two equivalent definitions of the limit function $f(x)=\lim_n f_n'(x)$; one is piecewise ($f(x)=0$ for $x>0$, $f(0)=1$; something for some $x$, something else for other $x$), but the other is a single expression for all these $x$: $f(x)=\lim_{n\to\infty}(1-nx)e^{-nx}$.
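The pointwise behaviour is easy to see numerically (my own sketch, with $f_n(x)=xe^{-nx}$ as above):

```python
# Pointwise limits of f_n and f_n'.
import numpy as np

f  = lambda n, x: x * np.exp(-n * x)
df = lambda n, x: (1 - n * x) * np.exp(-n * x)

for n in (1, 10, 100, 1000):
    print(n, f(n, 0.5), df(n, 0.5), df(n, 0.0))
# f_n(0.5) -> 0 and f_n'(0.5) -> 0, but f_n'(0) = 1 for every n
```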

The function $f$ is discontinuous (at 0), and nevertheless it is the limit of the continuous functions $f_n'$. This can happen for pointwise convergence (that is, convergence at every point of the considered domain), since the speed of convergence can depend on the point.

Otherwise (when the speed of convergence does not depend on the point) convergence is called uniform; by the uniform convergence theorem, the uniform limit of continuous functions is a continuous function.

It follows that convergence of $f_n'$ (to $f$) is non-uniform, but this is a proof by contradiction.

A better understanding may be gained from a direct proof. The derivatives fail to converge uniformly, since $f_n'(x)-f(x)$ fails to be small (when $n$ is large) for some $x$ close to 0; for instance, try $x=\frac1{2n}$:

$$f_n'\Big(\frac1{2n}\Big)-f\Big(\frac1{2n}\Big)=\Big(1-\frac12\Big)e^{-1/2}=\frac1{2\sqrt e}\approx0.30$$

for all $n$; indeed, for $x>0$ the difference is $(1-nx)e^{-nx}$, and it is not zero (unless $x=\frac1n$).

In contrast, $f_n\to0$ uniformly on $[0,\infty)$, that is, $\max_{x\ge0}|f_n(x)|\to0$ as $n\to\infty$, since the maximum is reached at $x=\frac1n$ (check it by solving the equation $f_n'(x)=0$) and $f_n\big(\frac1n\big)=\frac1{en}\to0$. And still, it appears to be impossible to interchange derivative and limit. Compare this case with a well-known theorem:

If $(f_n)$ is a sequence of differentiable functions on $[a,b]$ such that $\lim_n f_n(x_0)$ exists (and is finite) for some $x_0\in[a,b]$ and the sequence $(f_n')$ converges uniformly on $[a,b]$, then $(f_n)$ converges uniformly to a function $f$ on $[a,b]$, and $f'(x)=\lim_{n\to\infty}f_n'(x)$ for $x\in[a,b]$.

Uniform convergence of derivatives is required; uniform convergence of functions is not enough.
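Here is a numerical comparison of the two suprema (my own sketch; the grid stands in for $[0,\infty)$, which is harmless since both functions are tiny for large $x$):

```python
# Sup-norms of f_n and of f_n' - f on a fine grid of (0, 20].
import numpy as np

x = np.linspace(1e-6, 20.0, 2_000_001)
for n in (1, 10, 100):
    fn  = x * np.exp(-n * x)
    dfn = (1 - n * x) * np.exp(-n * x)
    print(n, fn.max(), 1 / (np.e * n), np.abs(dfn).max())
# sup|f_n| equals 1/(e n) and tends to 0; sup|f_n' - f| stays near 1
```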

Complex numbers, helpful in Sect. "Polynomial mimics cosine: roots", are helpless here, since for purely imaginary $z=iy$ ($y\ne0$) we have $|f_n(z)|=|iy\,e^{-iny}|=|y|$ for all $n$.
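A tiny numerical illustration (mine) of this obstruction:

```python
# On the imaginary axis the functions do not decay at all.
import numpy as np

z = 0.3j
for n in (1, 10, 100):
    print(n, abs(z * np.exp(-n * z)))  # always 0.3 = |z|
```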