# Calculus/Approximating Values of Functions


Although many values have an exact representation described by some function, it is often useful to approximate those values, especially in real-world contexts. For example, a construction worker might need a room to be ${\displaystyle {\sqrt {2}}}$ feet long. This exact value is not directly useful, because most rulers have no marking for ${\displaystyle {\sqrt {2}}}$. Instead, the worker needs a decimal approximation of the length to be able to construct a room that is ${\displaystyle {\sqrt {2}}}$ feet long.

Some numbers are very hard to approximate by hand, but calculus makes doing so easier. The subfield of numerical analysis studies algorithms used to approximate numbers, including but not limited to the residual (how far the computed value is from the true value), the level of decimal precision, and the number of times a procedure must be repeated to reach a given level of precision.

While this section is not a replacement for numerical analysis (not even close), it will introduce efficient algorithms that approximate values to surprising levels of accuracy.

Before diving into this section, recall that Section 2.4 already introduced a method, the Bisection Method, that uses calculus to justify an algorithm for approximating the solution of an equation. Using calculus to approximate values should therefore not be very surprising.

## Linear Approximation

Figure 1: The tangent line to the function at ${\displaystyle x=\alpha }$ approximates ${\displaystyle f(c)}$ quite well.

Recall one of the definitions of the derivative: it is the slope of the tangent line at a point, ${\displaystyle x=\alpha }$, of the function ${\displaystyle f}$. Thinking about the local behavior of the function ${\displaystyle f}$ around ${\displaystyle x=\alpha }$, the tangent line can be a good approximation of the value ${\displaystyle f(c)}$ (refer to Figure 1) if ${\displaystyle c-\alpha }$ is small and ${\displaystyle f(c)-f(\alpha )}$ is small.

Linear Approximation

If ${\displaystyle f(\alpha )}$ is a known value of the differentiable function ${\displaystyle y=f(x)}$ and the derivative at that point is ${\displaystyle f^{\prime }(\alpha )}$, then for any ${\displaystyle c}$ in the interval ${\displaystyle \left(\alpha -\delta ,\alpha +\delta \right)}$, for some small ${\displaystyle \delta >0}$, the value of ${\displaystyle f(c)}$ is approximated by the following equation:

(1) ${\displaystyle f(c)\approx f^{\prime }(\alpha )(c-\alpha )+f(\alpha )}$

Justification: Notice the tangent line at ${\displaystyle x=\alpha }$ for some differentiable function ${\displaystyle f(x)}$ is given by the following equation:

(2) ${\displaystyle h(x)=f^{\prime }(\alpha )(x-\alpha )+f(\alpha )}$

where ${\displaystyle h(x)}$ is the equation of the tangent line.

If we are trying to obtain ${\displaystyle f(c)}$ (the true value) through the tangent line, and both ${\displaystyle |c-\alpha |}$ and ${\displaystyle |f(c)-f(\alpha )|}$ are small, then ${\displaystyle h(c)\approx f(c)}$. Therefore,

${\displaystyle f(c)\approx h(c)=f(\alpha )+f^{\prime }(\alpha )(c-\alpha )\qquad \square }$

Notice that for this technique to be used, it needs to be the case that

1. ${\displaystyle f(x)}$ is differentiable at ${\displaystyle x=\alpha }$ and continuous in ${\displaystyle \left[\alpha ,c\right]}$.
2. ${\displaystyle |c-\alpha |}$ is small and ${\displaystyle |f(c)-f(\alpha )|}$ is small. Otherwise, the approximation may be wildly inaccurate.
3. ${\displaystyle f(x)}$ is monotonic in ${\displaystyle \left[\alpha ,c\right]}$. (You will learn this more comprehensively in Section 6.2.)

If any one of these conditions is false, then this technique will either not work or will not be very useful.

 Example 3.17.1: Approximating ${\displaystyle {\sqrt {1.01}}}$

Let ${\displaystyle f(x)={\sqrt {x}}}$, so that the exact value is ${\displaystyle {\sqrt {1.01}}=f(1.01)}$ and ${\displaystyle f^{\prime }(x)={\frac {1}{2{\sqrt {x}}}}}$. The tangent line equation at ${\displaystyle x=1}$ is given by

${\displaystyle y-f(1)=f^{\prime }(1)(x-1)\Leftrightarrow y=f^{\prime }(1)(x-1)+f(1)}$
${\displaystyle y={\frac {1}{2}}x+{\frac {1}{2}}}$

Let ${\displaystyle g(x)=y}$. Suppose ${\displaystyle g(1.01)\approx f(1.01)}$. Then,

${\displaystyle f(1.01)\approx g(1.01)={\frac {1}{2}}(1.01)+{\frac {1}{2}}=1.005}$

Therefore, ${\displaystyle {\sqrt {1.01}}\approx 1.005}$. ${\displaystyle \blacksquare }$
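The arithmetic in this example is simple enough to check by machine. Below is a minimal Python sketch (the helper name `linear_approx` is ours, not from the text) that evaluates equation (1) for ${\displaystyle {\sqrt {1.01}}}$:

```python
import math

def linear_approx(f_a, df_a, a, c):
    # Tangent-line approximation: f(c) ≈ f(a) + f'(a)(c - a).
    return f_a + df_a * (c - a)

# Approximate sqrt(1.01) with the tangent line to sqrt(x) at a = 1,
# where f(1) = 1 and f'(1) = 1/(2*sqrt(1)) = 0.5.
estimate = linear_approx(1.0, 0.5, 1.0, 1.01)
# estimate ≈ 1.005; the true value sqrt(1.01) ≈ 1.00498756,
# so the residual is on the order of 1e-5.
```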

The benefit of linear approximation is that instead of evaluating a harder function directly, we estimate its value using a linear function, arguably the easiest possible calculation we will ever have to do, assuming the value of the derivative is easy to find.

### Determining over- or underestimation

Notice that for any approximation we obtain using a tangent line approximation (the same as a linear approximation), there exists a remainder term that makes it equal to the true value of the function. That is,

(3) ${\displaystyle f(c)=f(\alpha )+f^{\prime }(\alpha )(c-\alpha )+R_{2}}$

where ${\displaystyle R_{2}}$ is the remainder term. This, unfortunately, does not give us a precise estimate of the residual, especially if we cannot find the exact value of the remainder term. While there is a technique for determining an upper bound on the residual of this type of estimate, it will not be covered until Section 6.8.

The best solution we have for now is determining if the estimate we have given is below or above the true value, which can be done with the following technique.

Above or Below the True Value of Tangent Line Approximation
• If ${\displaystyle f}$ is concave down in the interval between ${\displaystyle \alpha }$ and ${\displaystyle c}$, the approximation will be an overestimate.
• If ${\displaystyle f}$ is concave up in the interval between ${\displaystyle \alpha }$ and ${\displaystyle c}$, the approximation will be an underestimate.

Justification: Suppose ${\displaystyle f(x)}$ is a twice differentiable function on ${\displaystyle \left[\alpha ,c\right]}$ and ${\displaystyle \alpha <c}$.

Case 1(A): Let ${\displaystyle f^{\prime }(\alpha )>0}$ and ${\displaystyle f^{\prime \prime }(x_{0})<0}$ for ${\displaystyle x_{0}\in \left[\alpha ,c\right]}$. Then, the tangent line, ${\displaystyle h(x)}$ at ${\displaystyle (\alpha ,f(\alpha ))}$ has positive slope and ${\displaystyle h(c)>f(c)}$ (see the bottom function of Figure 1).
Case 1(B): Let ${\displaystyle f^{\prime }(\alpha )>0}$ and ${\displaystyle f^{\prime \prime }(x_{0})>0}$ for ${\displaystyle x_{0}\in \left[\alpha ,c\right]}$. Then, the tangent line, ${\displaystyle h(x)}$ at ${\displaystyle (\alpha ,f(\alpha ))}$ has positive slope and ${\displaystyle h(c)<f(c)}$.
Case 2(A): Let ${\displaystyle f^{\prime }(\alpha )<0}$ and ${\displaystyle f^{\prime \prime }(x_{0})<0}$ for ${\displaystyle x_{0}\in \left[\alpha ,c\right]}$. Then, the tangent line, ${\displaystyle h(x)}$ at ${\displaystyle (\alpha ,f(\alpha ))}$ has negative slope and ${\displaystyle h(c)>f(c)}$.
Case 2(B): Let ${\displaystyle f^{\prime }(\alpha )<0}$ and ${\displaystyle f^{\prime \prime }(x_{0})>0}$ for ${\displaystyle x_{0}\in \left[\alpha ,c\right]}$. Then, the tangent line, ${\displaystyle h(x)}$ at ${\displaystyle (\alpha ,f(\alpha ))}$ has negative slope and ${\displaystyle h(c)<f(c)}$ (see the bottom function of Figure 1).

Since a concave down function has ${\displaystyle h(c)>f(c)}$ no matter the slope of the tangent, and a concave up function has ${\displaystyle h(c)<f(c)}$ no matter the slope of the tangent, we have justified the claim. ${\displaystyle \square }$

 Example 3.17.2: Is ${\displaystyle {\sqrt {1.01}}\approx {\frac {1}{2}}(1.01)+{\frac {1}{2}}=1.005}$ an over- or underestimate?

Let ${\displaystyle f(x)={\sqrt {x}}}$. Then

${\displaystyle f^{\prime }(x)={\frac {1}{2}}x^{-{\frac {1}{2}}}={\frac {1}{2{\sqrt {x}}}}}$
${\displaystyle f^{\prime \prime }(x)=-{\frac {1}{4}}x^{-{\frac {3}{2}}}=-{\frac {1}{4x^{3/2}}}}$

The function is concave down for all ${\displaystyle x\geq 1}$ because

${\displaystyle -{\frac {1}{4}}\leq f^{\prime \prime }(x)=-{\frac {1}{4x^{3/2}}}<0}$

The last inequality is justified because ${\displaystyle f^{\prime \prime }(x)<0}$ for all ${\displaystyle x>0}$ and ${\displaystyle \lim _{x\to \infty }f^{\prime \prime }(x)=0}$ (show this yourself). Since ${\displaystyle f^{\prime \prime }(x)}$ is monotonically increasing and bounded above by ${\displaystyle 0}$ (which it is by the inequality shown here), we can be certain that the inequality is correct. Because ${\displaystyle f(x)}$ is certainly concave down from ${\displaystyle x=1}$ to ${\displaystyle x=1.01}$, ${\displaystyle {\sqrt {1.01}}\approx 1.005}$ is an overestimate of the true value. ${\displaystyle \blacksquare }$
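The concavity test can also be checked numerically. The sketch below (our own illustration, not part of the text) estimates ${\displaystyle f^{\prime \prime }}$ with a central difference and confirms that the tangent line at ${\displaystyle x=1}$ overestimates ${\displaystyle {\sqrt {1.01}}}$:

```python
import math

def second_derivative(f, x, h=1e-5):
    # Central-difference estimate of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

a, c = 1.0, 1.01
tangent_estimate = math.sqrt(a) + (1 / (2 * math.sqrt(a))) * (c - a)  # ≈ 1.005

# f''(x) = -1/(4 x^(3/2)) < 0 near x = 1, so f is concave down there
# and the tangent-line value should exceed the true value.
concave_down = second_derivative(math.sqrt, (a + c) / 2) < 0
overestimates = tangent_estimate > math.sqrt(c)
```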

### Issues with Linear Approximation

While linear (or tangent line) approximation is a powerful, easy tool that can be used to approximate functions, it does have its issues. These issues were alluded to when introducing the technique. Each issue will highlight why this tool may not be very useful all the time.

 Example 3.17.3: Approximate ${\displaystyle {\sqrt[{3}]{0.01}}}$ using Tangent Line Approximation

Let ${\displaystyle f(x)={\sqrt[{3}]{x}}=x^{1/3}}$. Since we are approximating ${\displaystyle f(0.01)}$, we might hope to use the tangent line approximation at ${\displaystyle x=0}$ to obtain an approximation of ${\displaystyle f(0.01)}$. Unfortunately,

${\displaystyle f^{\prime }(0)=\lim _{x\to 0}{\frac {{\sqrt[{3}]{x}}-0}{x-0}}=\lim _{x\to 0}x^{-2/3}}$

does not converge to a finite value, so the tangent line at ${\displaystyle x=0}$ is vertical. Since the derivative does not exist at ${\displaystyle x=0}$, it is impossible to obtain an accurate approximation using the linear method. ${\displaystyle \blacksquare }$
 Example 3.17.4: Approximate ${\displaystyle (-0.8)^{4}-1.5(-0.8)^{2}}$ using Tangent Line Approximation

Let ${\displaystyle f(x)=x^{4}-{\frac {3}{2}}x^{2}}$. Since we are approximating ${\displaystyle f(-0.8)}$, and ${\displaystyle -0.8-(-1)=0.2}$ is a small difference in ${\displaystyle x}$, we may try the tangent line approximation at ${\displaystyle x=-1}$ to obtain an approximation of ${\displaystyle f(-0.8)}$.

${\displaystyle f(-1)=(-1)^{4}-1.5(-1)^{2}=1-1.5=-0.5}$
${\displaystyle f^{\prime }(x)=4x^{3}-3x}$

The tangent line equation at ${\displaystyle x=-1}$ is given by

${\displaystyle y-f(-1)=f^{\prime }(-1)(x+1)\Leftrightarrow y=f^{\prime }(-1)(x+1)+f(-1)}$
${\displaystyle y=-x-1.5}$

Let ${\displaystyle g(x)=y}$. Suppose ${\displaystyle g(-0.8)\approx f(-0.8)}$. Then, ${\displaystyle f(-0.8)\approx g(-0.8)=-(-0.8)-1.5=-0.7}$. Therefore, ${\displaystyle (-0.8)^{4}-1.5(-0.8)^{2}\approx -0.7}$.

However, this approximation is very far from the actual value ${\displaystyle f(-0.8)=-0.5504}$: it has an error of ${\displaystyle R_{2}=0.1496}$. The reason for this large error is subtle. While the derivative does exist at ${\displaystyle \alpha =-1}$, the function is not monotonic on ${\displaystyle \left[-1,-0.8\right]}$: the function both decreases and increases there. Recall from your reading that a function ${\displaystyle h}$ is monotonically decreasing if and only if for any ${\displaystyle x_{1},x_{2}\in \left[a,b\right]}$ with ${\displaystyle x_{1}<x_{2}}$, ${\displaystyle h(x_{1})>h(x_{2})}$. There exists a counterexample for the ${\displaystyle f(x)}$ above, which becomes apparent if you graph the function. Hence, it would be irresponsible to use a tangent line to approximate the value of ${\displaystyle (-0.8)^{4}-1.5(-0.8)^{2}}$. ${\displaystyle \blacksquare }$
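The failure in this example is easy to reproduce numerically. This sketch (variable names are ours) computes the tangent-line estimate, the residual, and checks that ${\displaystyle f^{\prime }}$ changes sign on ${\displaystyle \left[-1,-0.8\right]}$:

```python
def f(x):
    return x**4 - 1.5 * x**2

def df(x):
    return 4 * x**3 - 3 * x

a, c = -1.0, -0.8
tangent_estimate = f(a) + df(a) * (c - a)  # y = -x - 1.5 at x = -0.8 gives -0.7
residual = f(c) - tangent_estimate         # R_2 = f(-0.8) - (-0.7) = 0.1496

# f'(x) = x(4x^2 - 3) has a root at -sqrt(3)/2 ≈ -0.866 inside [-1, -0.8],
# so f is not monotonic on the interval.
not_monotonic = df(a) * df(c) < 0
```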

Because this topic will be covered more comprehensively in Section 6.2, assume that all exercises involve a monotonic function. This example is simply meant to illustrate a common pitfall of this method.

## Newton-Raphson Method

While tangent line approximations are very helpful, they tend to be useful only when the value of the function at a nearby point is known exactly. If no such nearby value exists, it is very difficult to obtain a precise estimate of the desired value.

The Newton-Raphson method, introduced in Section 3.13, is a useful method to determine the zeros of a function, whether polynomial, transcendental, irrational, exponential, etc. However, the Newton-Raphson method can also be used to approximate values of specific functions.

Newton-Raphson Method

Suppose ${\displaystyle \alpha }$ is the value of interest and there exists some function ${\displaystyle f(x)}$ such that ${\displaystyle f(\alpha )=0}$. To approximate ${\displaystyle \alpha }$, iterate as follows:

(4) ${\displaystyle \alpha \approx x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}}$

If you read Section 3.13, then this equation should already be familiar and justified.

 Example 3.17.5: Approximate ${\displaystyle {\sqrt {72}}}$

Let ${\displaystyle x={\sqrt {72}}}$. We need to manipulate this equation so that we may obtain ${\displaystyle f(\alpha )=0}$.

{\displaystyle {\begin{aligned}x&={\sqrt {72}}\\x^{2}&=72\\x^{2}-72&=0\end{aligned}}}

From this, let ${\displaystyle f(x)=x^{2}-72}$. Since ${\displaystyle x={\sqrt {72}}}$ is a root of this equation, we can use the Newton-Raphson method to approximate ${\displaystyle {\sqrt {72}}}$. First, we choose an initial guess. Since ${\displaystyle f(8)=-8<0}$ and ${\displaystyle f(9)=9>0}$, ${\displaystyle x_{0}=8}$ is a good initial guess. Before we can begin, we need the derivative of the function, which can easily be shown to be ${\displaystyle f^{\prime }(x)=2x}$. Now we begin by finding the root.

{\displaystyle {\begin{aligned}x_{1}&=8-{\frac {-8}{2\cdot 8}}&=8.5\\x_{2}&=8.5-{\frac {8.5^{2}-72}{2\cdot 8.5}}&\approx 8.48529411765\\x_{3}&=8.48529411765-{\frac {8.48529411765^{2}-72}{2\cdot 8.48529411765}}&\approx 8.48528137425\end{aligned}}}

Out of convenience, we will stop here. However, the value we have obtained is already correct to nine decimal places. ${\displaystyle \blacksquare }$
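Equation (4) is only a few lines of code. The sketch below (the `newton` helper is our own) reproduces the iterates for ${\displaystyle {\sqrt {72}}}$:

```python
def newton(f, df, x0, steps):
    # Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f = lambda x: x**2 - 72
df = lambda x: 2 * x

x1 = newton(f, df, 8.0, 1)  # 8.5
x3 = newton(f, df, 8.0, 3)  # agrees with sqrt(72) to about nine decimal places
```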
 Example 3.17.6: Approximate ${\displaystyle {\sqrt {2}}+{\sqrt {3}}}$

Let ${\displaystyle x={\sqrt {2}}+{\sqrt {3}}}$. We need to manipulate this equation so that we may obtain ${\displaystyle f(\alpha )=0}$. Keep in mind to eliminate as many square roots as possible so that our job is easier.

{\displaystyle {\begin{aligned}x&={\sqrt {2}}+{\sqrt {3}}\\x^{2}&=2+2{\sqrt {6}}+3\\x^{2}-5&=2{\sqrt {6}}\\x^{4}-10x^{2}+25&=4\cdot 6\\x^{4}-10x^{2}+1&=0\end{aligned}}}

Notice this equation has four possible roots, but we only care about one of them. Looking back,

${\displaystyle 1+1=2<{\sqrt {2}}+{\sqrt {3}}<4=2+2}$

Let ${\displaystyle f(x)=x^{4}-10x^{2}+1}$. Notice that ${\displaystyle f(3)=-8<0<f(4)=97}$, so we choose ${\displaystyle x_{0}=3}$. Then, ${\displaystyle f^{\prime }(x)=4x^{3}-20x}$. Notice that ${\displaystyle 4x(x^{2}-5)=0\Rightarrow x=0\vee x=\pm {\sqrt {5}}}$ means that ${\displaystyle f^{\prime }(x)\neq 0}$ for ${\displaystyle x\in \left({\sqrt {5}},4\right)}$. Finally, notice that

${\displaystyle x_{n+1}=x_{n}-{\frac {x_{n}^{4}-10x_{n}^{2}+1}{4x_{n}^{3}-20x_{n}}}}$

cannot be simplified; we are going to have to work with these values. (Keep in mind that even without calculators, many people in the past used slide rules to work with these ugly values. We have the benefit of an electronic computer at our fingertips, as opposed to an analog one.) Now we begin by finding the root.

{\displaystyle {\begin{aligned}x_{1}&=3-{\frac {-8}{48}}={\frac {19}{6}}&\approx 3.16666666667\\x_{2}&={\frac {19}{6}}-{\dfrac {\frac {1657}{1296}}{\frac {3439}{54}}}={\frac {86569}{27512}}&\approx 3.14659057866\\x_{3}&=3.14659057866-{\frac {0.0201173085466}{61.686167862}}&\approx 3.14626445516\end{aligned}}}

Out of convenience, we will stop here. However, the value we have obtained is already correct to six decimal places. ${\displaystyle \blacksquare }$
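The same iteration can be run in code rather than by hand. This short sketch (ours, not part of the text) checks the iterates against ${\displaystyle {\sqrt {2}}+{\sqrt {3}}}$:

```python
# Newton-Raphson on f(x) = x^4 - 10x^2 + 1, whose largest root is sqrt(2) + sqrt(3).
x = 3.0
iterates = []
for _ in range(3):
    x = x - (x**4 - 10 * x**2 + 1) / (4 * x**3 - 20 * x)
    iterates.append(x)

# iterates[0] is 19/6 ≈ 3.1667; after three steps x agrees with
# sqrt(2) + sqrt(3) ≈ 3.14626437 to six decimal places.
```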

### Failure Analysis

Figure 2: The tangent lines of ${\displaystyle x^{3}-2x+2}$ at ${\displaystyle 0}$ and ${\displaystyle 1}$ intersect the ${\displaystyle x}$-axis at ${\displaystyle 1}$ and ${\displaystyle 0}$ respectively, illustrating why Newton's method oscillates between these values for some starting points.

Of course, as we know from Section 3.13, the Newton-Raphson method is not perfect and will fail in some instances. One obvious instance is when the derivative at a particular point is zero: because the derivative appears in the denominator, the next iterate cannot be computed when that occurs. However, there are other failure modes.

#### Starting Point Enters a Cycle

For some functions, some starting points may enter an infinite cycle, preventing convergence. Let

${\displaystyle f(x)=x^{3}-2x+2}$

and take ${\displaystyle 0}$ as the starting point. The first iteration produces ${\displaystyle 1}$ and the second iteration returns to ${\displaystyle 0}$ so the sequence will alternate between the two without converging to a root (see Figure 2). In fact, this two-cycle is stable: there are neighborhoods around ${\displaystyle 0}$ and around ${\displaystyle 1}$ from which all points iterate asymptotically to the two-cycle (and hence not to the root of the function). In general, the behavior of the sequence can be very complex (see Newton Fractal). The real solution of this equation is ${\displaystyle -1.76929235\ldots }$.
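This two-cycle is easy to observe directly. A short sketch (ours) iterates Newton's method on ${\displaystyle f(x)=x^{3}-2x+2}$ from ${\displaystyle x_{0}=0}$:

```python
def newton_step(x):
    # One Newton-Raphson step for f(x) = x^3 - 2x + 2, f'(x) = 3x^2 - 2.
    return x - (x**3 - 2 * x + 2) / (3 * x**2 - 2)

orbit = [0.0]
for _ in range(4):
    orbit.append(newton_step(orbit[-1]))
# orbit is [0.0, 1.0, 0.0, 1.0, 0.0]: the iterates never approach the
# real root near -1.769.
```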

#### Derivative does not exist at root

A simple example of a function for which Newton's method diverges is the cube root, whose root is zero. The cube root is continuous and infinitely differentiable, except at ${\displaystyle x=0}$, where its derivative is undefined:

${\displaystyle f(x)={\sqrt[{3}]{x}}.}$

For any iteration point ${\displaystyle x_{n}}$, the next iteration point will be:

${\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f^{\prime }(x_{n})}}=x_{n}-{\frac {{x_{n}}^{\frac {1}{3}}}{{\frac {1}{3}}{x_{n}}^{-{\frac {2}{3}}}}}=x_{n}-3x_{n}=-2x_{n}.}$

The algorithm overshoots the solution and lands on the other side of the ${\displaystyle y}$-axis, farther away than it initially was; applying Newton's method actually doubles the distance from the solution at each iteration.

In fact, the iterations diverge to infinity for every ${\displaystyle f(x)=|x|^{\alpha }}$, where ${\displaystyle 0<\alpha <{\frac {1}{2}}}$. In the limiting case of ${\displaystyle \alpha ={\frac {1}{2}}}$, the iterations will alternate indefinitely between points ${\displaystyle x_{0}}$ and ${\displaystyle -x_{0}}$, so they do not converge in this case either.
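The doubling behavior for the cube root is visible immediately in a sketch (ours): each Newton step maps ${\displaystyle x_{n}}$ to ${\displaystyle -2x_{n}}$, so the distance from the root grows geometrically.

```python
def newton_step(x):
    # For f(x) = x**(1/3), the step x - f(x)/f'(x) simplifies to -2x.
    return -2.0 * x

x = 0.1
distances = []
for _ in range(5):
    x = newton_step(x)
    distances.append(abs(x))
# distances: [0.2, 0.4, 0.8, 1.6, 3.2] -- doubling away from the root at 0.
```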

#### Discontinuous Derivative

If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function

${\displaystyle f(x)={\begin{cases}0&{\text{if }}x=0\\x+x^{2}\sin \left({\dfrac {2}{x}}\right)&{\text{if }}x\neq 0\end{cases}}}$
${\displaystyle f^{\prime }(x)={\begin{cases}1&{\text{if }}x=0\\1+2x\sin \left({\dfrac {2}{x}}\right)-2\cos \left({\dfrac {2}{x}}\right)&{\text{if }}x\neq 0\end{cases}}}$

Within any neighborhood of the root, this derivative keeps changing sign as ${\displaystyle x}$ approaches ${\displaystyle 0}$ from the right (or from the left), while ${\displaystyle f(x)\geq x-x^{2}>0}$ for ${\displaystyle 0<x<1}$.

Thus, ${\displaystyle {\dfrac {f(x)}{f^{\prime }(x)}}}$ is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though:

• the function is differentiable (and thus continuous) everywhere;
• the derivative at the root is nonzero;
• ${\displaystyle f}$ is infinitely differentiable except at the root; and
• the derivative is bounded in a neighborhood of the root (unlike ${\displaystyle f(x)/f^{\prime }(x)}$).
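To see the unbounded step sizes concretely, the sketch below (our construction) bisects ${\displaystyle f^{\prime }}$ on an interval where it changes sign near the root; at the located zero of ${\displaystyle f^{\prime }}$, the Newton step ${\displaystyle f/f^{\prime }}$ is enormous even though ${\displaystyle x}$ is close to ${\displaystyle 0}$:

```python
import math

def f(x):
    return x + x**2 * math.sin(2 / x) if x != 0 else 0.0

def df(x):
    return 1 + 2 * x * math.sin(2 / x) - 2 * math.cos(2 / x) if x != 0 else 1.0

# df is positive at x = 2/(2*pi*k + pi/2) and negative at x = 2/(2*pi*k),
# so it has a zero between them -- and k can be taken as large as we like,
# putting that zero arbitrarily close to the root at 0.
k = 50
lo, hi = 2 / (2 * math.pi * k + math.pi / 2), 2 / (2 * math.pi * k)
for _ in range(60):          # bisect to pin down the zero of df
    mid = (lo + hi) / 2
    if df(lo) * df(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_bad = (lo + hi) / 2
d = df(x_bad)
step = abs(f(x_bad) / d) if d != 0 else math.inf
# x_bad is about 0.0064, yet the Newton step f/f' there is astronomically large.
```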
