BASIC Programming/Normative BASIC/Minimal BASIC

From Wikibooks, open books for an open world

Sample Programs

Sample programs for Minimal BASIC will appear here.

Numerical Integration

Introduction

There exist two cases in which the computation of the value of a definite integral by numerical methods is needed. One is the calculation of the area below a curve defined by a set of experimental data, and the other is the calculation of the definite integral of a mathematical function for which no closed-form antiderivative is known. The former is often the case for response functions in the experimental work of science and engineering, while the latter is normally the case in practical investigations in physics, mathematics, and engineering.

Independently of that, the development of numerical methods for integration, a field that belongs to applied mathematics, rests on a simple idea: if $f$ is a real-valued continuous function of $x$ defined on an interval $[a, b]$ (the complex-valued case can be treated analogously, by separating it into its real and imaginary parts), its definite integral,

$I = \int_a^b f(x)\,dx$,

can be calculated approximately as a finite sum of the products $f(x_i)\,\Delta x_i$ evaluated at some given points $x_i$ in the interval $[a, b]$.

In the case of experimental data, the set of points $x_1 < x_2 < \dots < x_N$ at which the value of the function is measured is usually not regularly distributed (i.e., the points are not equispaced), so the value of the definite integral must be calculated in the form:

$I = \int_{x_1}^{x_N} f(x)\,dx$,

which can be approximated either as:

$I \approx \sum_{i=1}^{N-1} f(x_i)\,(x_{i+1} - x_i)$,

or as:

$I \approx \sum_{i=1}^{N-1} f(x_{i+1})\,(x_{i+1} - x_i)$.

In the first case, the value of the integral is underestimated (overestimated) for monotonically ascending (descending) functions, since the value of $f$ taken in each evaluation is the lowest (highest) in every subinterval, so the sum constitutes an absolute lower (upper) bound on the value of the integral. In the second case, the value of the integral is overestimated (underestimated) for monotonically ascending (descending) functions, since the value of $f$ taken in each evaluation is the highest (lowest) in every subinterval, so the sum constitutes an absolute upper (lower) bound on the value of the integral.

According to the Mean-Value Theorem of integral calculus, the value of a definite integral can also be calculated as:

$I = f(\xi)\,(b - a)$,

for some value $\xi$ in $[a, b]$ for which $f(\xi)$ represents the mean value of $f$ in $[a, b]$; it is then a better approximation to calculate the definite integral of a set of experimental data by estimating the mean value of $f$ in each subinterval as the average of its endpoint values:

$\bar{f}_i \approx \frac{f(x_i) + f(x_{i+1})}{2}$,

$I \approx \sum_{i=1}^{N-1} \bar{f}_i\,(x_{i+1} - x_i)$,

$I \approx \sum_{i=1}^{N-1} \frac{f(x_i) + f(x_{i+1})}{2}\,(x_{i+1} - x_i)$.

For reasons that we shall see later, this is equivalent to assuming a piecewise linear interpolating function between the different points. The value of the integral so calculated is exact for linear functions (i.e., functions whose slope is constant); it is overestimated for functions whose slope grows (i.e., whose second derivative is strictly positive in the considered interval), and underestimated for functions whose slope decreases (i.e., whose second derivative is strictly negative in the considered interval). The value so calculated constitutes a better approximation than the lower and upper bounds presented before, and in the case that the second derivative of the function (which can be estimated from the experimental data with the help of second or central differences) changes sign between subintervals, the value is expected to be close to the actual value due to the cancellation of the errors in the approximation of the mean values.
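This trapezoidal sum over unevenly spaced points can be written directly as a Minimal BASIC program. The DATA values below are hypothetical measurements chosen for illustration (samples of $x^2$ on $[0, 1]$, whose exact integral is $1/3$), not data from the text:

```basic
100 REM TRAPEZOIDAL RULE FOR TABULATED, UNEVENLY SPACED DATA
110 REM FIRST DATA ITEM IS THE NUMBER OF POINTS N, THEN N PAIRS X, F(X)
120 READ N
130 DIM X(20), Y(20)
140 FOR I = 1 TO N
150 READ X(I), Y(I)
160 NEXT I
170 LET S = 0
180 FOR I = 1 TO N-1
190 LET S = S + (Y(I) + Y(I+1)) / 2 * (X(I+1) - X(I))
200 NEXT I
210 PRINT "INTEGRAL ="; S
220 DATA 5
230 DATA 0, 0, 0.3, 0.09, 0.5, 0.25, 0.8, 0.64, 1, 1
240 END
```

Since $x^2$ is convex, the printed value slightly overestimates $1/3$, in line with the discussion above.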

In the case of mathematical functions, there is more information about the function, since it is possible not only to calculate the value of the function at any given point, but also to compute first-, second-, and higher-order derivatives to any degree of accuracy.

Let us now develop some mathematical results, going from simple to more elaborate methods:

The main theme in the development of numerical methods, together with the study of stability (i.e., whether a method converges), is the method's rate of convergence: for non-iterative methods, how the error in the approximation decreases with the number of evaluations; for iterative methods, how many iterations are needed and how the error is reduced in each iteration.

Regarding stability, and as we have seen before, the value of the definite integral of a function $f$ defined on an interval $[a, b]$,

$I = \int_a^b f(x)\,dx$,

can be calculated approximately as the finite sum of the products $f(\xi_i)\,\Delta x_i$ evaluated at some given points $\xi_i$ in the interval $[a, b]$.

In the limit $n \to \infty$, with the width of every subinterval tending to zero, the finite sum tends to the integral, and so convergence is assured.

In the case of the study of the rate of convergence, one is interested in increasing the accuracy of the approximation, while keeping the number of subintervals fixed, with only a minor increase in computational complexity.

The approach used consists normally in using a polynomial approximation for the evaluation of the function in each subinterval, using the information provided by the value of the function at several points in the subinterval.

Let us consider the case of equally-spaced points (although this restriction can be easily lifted). According to the definition,

$I \approx \sum_{i=1}^{n} f(\xi_i)\,(x_i - x_{i-1})$,

with $i$ indicating the subinterval, $\xi_i$ being some arbitrary number in every subinterval $[x_{i-1}, x_i]$, with $i = 1, \dots, n$, $x_0 = a$, and $x_n = b$, and $x_i = a + i\,h$, with $h = (b - a)/n$.

The Mean-Value Theorem of Calculus tells us that if

$I = \int_a^b f(x)\,dx$

is the definite integral of $f$ in $[a, b]$, there exists a value $\xi$ in $[a, b]$ such that

$I = f(\xi)\,(b - a)$.

Additionally, by definition, if

$I = \int_a^b f(x)\,dx$

is the definite integral of $f$ in $[a, b]$, it can also be understood as being composed of the individual contributions

$I_i = \int_{x_{i-1}}^{x_i} f(x)\,dx$

for arbitrary values $a = x_0 < x_1 < \dots < x_n = b$.

Now, applying the Mean-Value Theorem to each individual contribution yields the result:

$I = \sum_{i=1}^{n} f(\xi_i)\,(x_i - x_{i-1})$, for some $\xi_i$ in each $[x_{i-1}, x_i]$,

which is exact.

In the particular case that every subinterval is of equal size, i.e., $x_i - x_{i-1} = h = (b - a)/n$, the expression reduces to

$I = h \sum_{i=1}^{n} f(\xi_i)$.

In this way, the calculation of the initial definite integral reduces to the calculation of the mean values

$\bar{f}_i = f(\xi_i) = \frac{1}{h} \int_{x_{i-1}}^{x_i} f(x)\,dx$.

In a first approximation, with only one point,

$\bar{f}_i \approx f(x_{i-1/2})$,

with $f(x_{i-1/2})$ being the value of $f$ in the middle of each subinterval, which leads to

$I \approx h \sum_{i=1}^{n} f(x_{i-1/2})$.
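This midpoint rule can be sketched in Minimal BASIC. The integrand $e^x$ on $[0, 1]$ (exact value $e - 1$) and the choice $n = 10$ are assumptions for illustration:

```basic
100 REM MIDPOINT RULE ON N EQUAL SUBINTERVALS
110 REM ASSUMED EXAMPLE INTEGRAND: EXP(X) ON [0,1], EXACT VALUE EXP(1)-1
120 DEF FNF(X) = EXP(X)
130 LET A = 0
140 LET B = 1
150 LET N = 10
160 LET H = (B - A) / N
170 LET S = 0
180 FOR I = 1 TO N
190 LET S = S + FNF(A + (I - 0.5) * H)
200 NEXT I
210 PRINT "MIDPOINT APPROXIMATION ="; H * S
220 END
```

Changing line 120 adapts the program to any other integrand.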

In a second approximation, with only two points,

$\bar{f}_i \approx \frac{f(x_{i-1}) + f(x_i)}{2}$,

with $f(x_{i-1})$ and $f(x_i)$ being the values of $f$ at the beginning and at the end of each subinterval, which leads to

$I \approx h \sum_{i=1}^{n} \frac{f(x_{i-1}) + f(x_i)}{2}$.
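The same setup with the two-point (trapezoidal) mean; again the integrand and limits are assumed for illustration. Interior points are counted once with full weight, the two endpoints with half weight:

```basic
100 REM TRAPEZOIDAL RULE ON N EQUAL SUBINTERVALS
110 REM ASSUMED EXAMPLE INTEGRAND: EXP(X) ON [0,1]
120 DEF FNF(X) = EXP(X)
130 LET A = 0
140 LET B = 1
150 LET N = 10
160 LET H = (B - A) / N
170 LET S = (FNF(A) + FNF(B)) / 2
180 FOR I = 1 TO N-1
190 LET S = S + FNF(A + I * H)
200 NEXT I
210 PRINT "TRAPEZOID APPROXIMATION ="; H * S
220 END
```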

In a third approximation, with only three points,

$\bar{f}_i \approx \frac{f(x_{i-1}) + f(x_{i-1/2}) + f(x_i)}{3}$,

with $f(x_{i-1})$, $f(x_{i-1/2})$, and $f(x_i)$ being the values of $f$ at the beginning, in the middle, and at the end of each subinterval, which leads to

$I \approx h \sum_{i=1}^{n} \frac{f(x_{i-1}) + f(x_{i-1/2}) + f(x_i)}{3}$.

In a fourth approximation, while still making use of the evaluation of the function at the beginning, in the middle, and at the end of each subinterval, one can use the result that if $A$ and $B$ are estimates for $I$, then their arithmetic mean $(A + B)/2$ is also an estimate, with at least the same accuracy, if not better.

So, adding the results for the first and second approximations, and dividing by two,

$\bar{f}_i \approx \frac{1}{2}\left( f(x_{i-1/2}) + \frac{f(x_{i-1}) + f(x_i)}{2} \right) = \frac{f(x_{i-1}) + 2\,f(x_{i-1/2}) + f(x_i)}{4}$,

which leads to the result

$I \approx h \sum_{i=1}^{n} \frac{f(x_{i-1}) + 2\,f(x_{i-1/2}) + f(x_i)}{4}$.

Adding the results for the first and third approximations, and dividing by two,

$\bar{f}_i \approx \frac{1}{2}\left( f(x_{i-1/2}) + \frac{f(x_{i-1}) + f(x_{i-1/2}) + f(x_i)}{3} \right) = \frac{f(x_{i-1}) + 4\,f(x_{i-1/2}) + f(x_i)}{6}$,

which leads to the result

$I \approx h \sum_{i=1}^{n} \frac{f(x_{i-1}) + 4\,f(x_{i-1/2}) + f(x_i)}{6}$,

which is Simpson's rule.
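The three-point weighted mean (left, middle, and right values with weights 1, 4, 1) can be coded per subinterval; the integrand $e^x$ on $[0, 1]$ and $n = 10$ are again assumed for illustration:

```basic
100 REM SIMPSON'S RULE: WEIGHTS 1, 4, 1 OVER EACH SUBINTERVAL
110 REM ASSUMED EXAMPLE INTEGRAND: EXP(X) ON [0,1]
120 DEF FNF(X) = EXP(X)
130 LET A = 0
140 LET B = 1
150 LET N = 10
160 LET H = (B - A) / N
170 LET S = 0
180 FOR I = 1 TO N
190 LET X0 = A + (I - 1) * H
200 LET S = S + (FNF(X0) + 4 * FNF(X0 + H/2) + FNF(X0 + H)) / 6
210 NEXT I
220 PRINT "SIMPSON APPROXIMATION ="; H * S
230 END
```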

In practice, one can do no better with only three evaluations in an interval, but the results obtained are simple and accurate enough, even in the case of one single interval.

Let us illustrate the case by means of an example:

Let us suppose that we wish to calculate the definite integral of a function over an interval for which we know its exact value.

Let us also keep the problem simple and do the calculations with a single interval, i.e., $n = 1$, $h = b - a$, and midpoint $x_{1/2} = (a + b)/2$.

So, we have:

First approximation:

This is equal to a relative error of

.

Second approximation:

This is equal to a relative error of

.

Third approximation:

This is equal to a relative error of

.

Fourth approximation, first and second terms (linear expansion):

This is equal to a relative error of

.

Fourth approximation, first and third terms (quadratic expansion):

This is equal to a relative error of

.
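The numerical values in such a comparison depend on the chosen integrand, so the whole table can be reproduced by a short program. Here the integrand $e^x$ on $[0, 1]$ (exact value $e - 1$) is an assumed stand-in, with one single interval as above:

```basic
100 REM COMPARE THE FIVE APPROXIMATIONS ON A SINGLE INTERVAL
110 REM ASSUMED EXAMPLE: F(X) = EXP(X) ON [0,1], EXACT VALUE EXP(1)-1
120 DEF FNF(X) = EXP(X)
130 LET A = 0
140 LET B = 1
150 LET H = B - A
160 LET E = EXP(1) - 1
170 LET F1 = FNF(A)
180 LET F2 = FNF((A + B) / 2)
190 LET F3 = FNF(B)
200 REM MEAN-VALUE ESTIMATES OF THE FIVE APPROXIMATIONS
210 LET M1 = F2
220 LET M2 = (F1 + F3) / 2
230 LET M3 = (F1 + F2 + F3) / 3
240 LET M4 = (M1 + M2) / 2
250 LET M5 = (M1 + M3) / 2
260 REM PRINT EACH ESTIMATE AND ITS RELATIVE ERROR
270 PRINT "FIRST", H * M1, (H * M1 - E) / E
280 PRINT "SECOND", H * M2, (H * M2 - E) / E
290 PRINT "THIRD", H * M3, (H * M3 - E) / E
300 PRINT "FOURTH, LINEAR", H * M4, (H * M4 - E) / E
310 PRINT "FOURTH, QUADRATIC", H * M5, (H * M5 - E) / E
320 END
```

Even with a single interval, the last (Simpson-type) estimate is by far the most accurate of the five.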

In case more accuracy is needed, one can analogously introduce higher-order terms in the expansion of $f$ around $x_{i-1/2}$, or halve the integration step in one of the methods considered before.

Let us consider the case when higher-order terms are considered in the expansion, i.e.,

$I_i = \int_{x_{i-1}}^{x_i} f(x)\,dx = h\,f(x_{i-1/2}) + \frac{h^3}{24} f''(x_{i-1/2}) + \frac{h^5}{1920} f^{(4)}(x_{i-1/2}) + \cdots$,

which leads to the result

$I \approx h \sum_{i=1}^{n} \left[ f(x_{i-1/2}) + \frac{h^2}{24} f''(x_{i-1/2}) + \frac{h^4}{1920} f^{(4)}(x_{i-1/2}) \right]$.
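A sketch of this corrected midpoint rule in Minimal BASIC. The integrand $e^x$ is assumed because its second and fourth derivatives are again $e^x$, so a single DEF serves for both correction terms; for another integrand the derivatives would need their own definitions:

```basic
100 REM MIDPOINT RULE WITH H^2/24 AND H^4/1920 CORRECTION TERMS
110 REM ASSUMED EXAMPLE: F(X) = EXP(X), WHOSE DERIVATIVES ARE ALSO EXP(X)
120 DEF FNF(X) = EXP(X)
130 DEF FND(X) = EXP(X)
140 LET A = 0
150 LET B = 1
160 LET N = 4
170 LET H = (B - A) / N
180 LET S = 0
190 FOR I = 1 TO N
200 LET X1 = A + (I - 0.5) * H
210 LET S = S + FNF(X1) + H*H/24 * FND(X1) + H*H*H*H/1920 * FND(X1)
220 NEXT I
230 PRINT "CORRECTED MIDPOINT ="; H * S
240 END
```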

Let us illustrate the case by means of an example:

Let us suppose that we wish to calculate the definite integral of a function over an interval for which we know its exact value.

Let us also keep the problem simple and do the calculations with two intervals, i.e., $n = 2$, $h = (b - a)/2$, and midpoints $x_{1/2}$ and $x_{3/2}$.

Applying the second-order expansion

,

which yields

,

which is equal to a relative error of

.

Applying the fourth-order expansion

,

which yields

.

We know from the calculus of finite differences that any sufficiently smooth function can be expressed by the polynomial expansion

$f(x_0 + s\,h) = f_0 + s\,\Delta f_0 + \frac{s(s-1)}{2!}\,\Delta^2 f_0 + \frac{s(s-1)(s-2)}{3!}\,\Delta^3 f_0 + \cdots$

in the case of forward differences, or

$f(x_n + s\,h) = f_n + s\,\nabla f_n + \frac{s(s+1)}{2!}\,\nabla^2 f_n + \frac{s(s+1)(s+2)}{3!}\,\nabla^3 f_n + \cdots$

in the case of backward differences, or by a corresponding expansion (Stirling's formula) in the case of central differences $\delta$.

In a first approximation, integrating only the first term of the expansion over one step of size $h$,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\,f_0$

in the case of forward differences,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\,f_1$

in the case of backward differences, and

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\,f_{1/2}$

in the case of central differences.

So, in the case of evaluating the integral with one single point, its value is underestimated (overestimated) with forward (backward) differences if the slope of the function at the evaluation point is positive, and overestimated (underestimated) with forward (backward) differences if the slope at the evaluation point is negative, the error in the estimation being proportional to the slope of the function at the evaluation point and to one half of the square of the integration step. In the case of evaluating the integral with central differences, the value is exact for linear functions.

In a second approximation, keeping two terms of the expansion,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_0 + \frac{\Delta f_0}{2} \right) = h\,\frac{f_0 + f_1}{2}$

in the case of forward differences,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_1 - \frac{\nabla f_1}{2} \right) = h\,\frac{f_0 + f_1}{2}$

in the case of backward differences, and

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\,f_{1/2}$

in the case of central differences (the odd central terms integrate to zero).

So, in the case of evaluating the integral with a linear interpolant, there is no difference between forward and backward differences, and the value of the integral is equal to the mean of the values of the function at the beginning and at the end of the interval times the integration step. The value so calculated is overestimated (underestimated) for functions with a positive (negative) second derivative, the error in each subinterval being proportional to the second derivative of the function and to the cube of the integration step (so that, as one increases the number of subintervals, the total deviation decreases proportionally to the square of the number of subintervals). In the case of central differences, the value of the integral is equal to the value of the function in the middle of the interval times the integration step, and the error in the approximation is reduced by a factor of two, with opposite sign, relative to the case of forward and backward differences.
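This error behaviour can be observed numerically. The sketch below (integrand $e^x$ on $[0, 1]$ assumed, exact value $e - 1$) prints the total error of the midpoint and trapezoidal rules as the number of subintervals doubles; each halving of the step divides both errors by about four, with the midpoint error about half the trapezoidal error and of opposite sign:

```basic
100 REM TOTAL ERROR OF MIDPOINT AND TRAPEZOID RULES AS THE STEP IS HALVED
110 REM ASSUMED EXAMPLE INTEGRAND: EXP(X) ON [0,1], EXACT VALUE EXP(1)-1
120 DEF FNF(X) = EXP(X)
130 LET A = 0
140 LET B = 1
150 LET E = EXP(1) - 1
160 LET N = 2
170 FOR K = 1 TO 4
180 LET H = (B - A) / N
190 LET M = 0
200 LET T = (FNF(A) + FNF(B)) / 2
210 FOR I = 1 TO N
220 LET M = M + FNF(A + (I - 0.5) * H)
230 NEXT I
240 FOR I = 1 TO N - 1
250 LET T = T + FNF(A + I * H)
260 NEXT I
270 PRINT N, H * M - E, H * T - E
280 LET N = 2 * N
290 NEXT K
300 END
```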

In a third approximation, keeping three terms,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_0 + \frac{\Delta f_0}{2} - \frac{\Delta^2 f_0}{12} \right)$

in the case of forward differences,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_1 - \frac{\nabla f_1}{2} - \frac{\nabla^2 f_1}{12} \right)$

in the case of backward differences, and

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_{1/2} + \frac{\delta^2 f_{1/2}}{24} \right)$

in the case of central differences.

In a fourth approximation, keeping four terms,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_0 + \frac{\Delta f_0}{2} - \frac{\Delta^2 f_0}{12} + \frac{\Delta^3 f_0}{24} \right)$

in the case of forward differences,

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_1 - \frac{\nabla f_1}{2} - \frac{\nabla^2 f_1}{12} - \frac{\nabla^3 f_1}{24} \right)$

in the case of backward differences, and

$\int_{x_0}^{x_0+h} f(x)\,dx \approx h\left( f_{1/2} + \frac{\delta^2 f_{1/2}}{24} - \frac{17\,\delta^4 f_{1/2}}{5760} \right)$

in the case of central differences.

The divided differences are:

Lagrange interpolation

One point:

Two points:

Three points:

Four points:

Five points:

Six points:

Seven points:

One point:

Two points:


Three points:

Four points:








S-Sum

Let us consider the integrals



An analytical method for calculating the relative weights of the evaluations of the function at the chosen points consists in evaluating the integral of the function as the integral of a polynomial approximation.

If one chooses the value of $\xi_i$ as $x_{i-1}$, i.e., at the beginning of every subinterval, then the sum of products is equal to:

$h \sum_{i=1}^{n} f(x_{i-1})$,

If one chooses the value of $\xi_i$ as $x_i$, i.e., at the end of every subinterval, then the sum of products is equal to:

$h \sum_{i=1}^{n} f(x_i)$,


In the case of a semi-continuous function, i.e., a function which is continuous except for a finite set of points $a = c_0 < c_1 < \dots < c_m = b$ in an interval $[a, b]$, the calculation of the definite integral can be defined as the sum of the definite integrals between the points of discontinuity, i.e.,

$I = \sum_{k=1}^{m} \int_{c_{k-1}}^{c_k} f(x)\,dx$.
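This decomposition can be sketched in Minimal BASIC. The piecewise function below ($f(x) = x$ below $x = 1$, $f(x) = x + 1$ above, with a jump at $x = 1$) and the split point are assumptions for illustration; each continuous piece is integrated separately with the midpoint rule, which never samples the jump itself:

```basic
100 REM INTEGRAL OF A PIECEWISE-CONTINUOUS FUNCTION AS THE SUM OF
110 REM INTEGRALS BETWEEN ITS POINTS OF DISCONTINUITY
120 REM ASSUMED EXAMPLE: F(X) = X ON [0,1), F(X) = X + 1 ON [1,2]
130 LET S = 0
140 LET A = 0
150 LET B = 1
160 GOSUB 300
170 LET A = 1
180 LET B = 2
190 GOSUB 300
200 PRINT "INTEGRAL ="; S
210 STOP
300 REM MIDPOINT RULE ON [A,B] WITH N SUBINTERVALS, ACCUMULATED INTO S
310 LET N = 10
320 LET H = (B - A) / N
330 FOR I = 1 TO N
340 LET X = A + (I - 0.5) * H
350 LET F = X
360 IF X < 1 THEN 380
370 LET F = X + 1
380 LET S = S + F * H
390 NEXT I
400 RETURN
410 END
```

Because both pieces are linear and the midpoint rule is exact for linear functions, the sum of the two piecewise integrals equals the exact value here.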