Calculus/Newton's Method

Newton's Method

Newton's Method (also called the Newton-Raphson method) is a recursive algorithm for approximating the root of a differentiable function. We know simple formulas for finding the roots of linear and quadratic equations, and there are also more complicated formulas for cubic and quartic equations. At one time it was hoped that similar formulas would be found for equations of quintic and higher degree, but Niels Henrik Abel showed that no general formula in radicals exists for such equations. The Newton-Raphson method is a method for approximating the roots of polynomial equations of any order. In fact the method works for any equation, polynomial or not, as long as the function is differentiable on a desired interval.

Newton's Method

Let $f(x)$ be a differentiable function. Select a point $x_1$ based on a first approximation to the root, arbitrarily close to the function's root. To approximate the root you then recursively calculate using:

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$

As you recursively calculate, the $x_n$'s often become increasingly better approximations of the function's root.
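
The recursion above is easy to carry out on a computer. Below is a minimal Python sketch; the names newton, f, and df, and the fixed number of steps, are choices made here for illustration rather than part of the method as stated above.

```python
def newton(f, df, x, steps=10):
    """Repeat the Newton update x <- x - f(x)/f'(x) a fixed number of times.

    f     -- the differentiable function whose root we want
    df    -- its derivative f'
    x     -- the first approximation x_1
    steps -- how many iterations to perform
    """
    for _ in range(steps):
        x = x - f(x) / df(x)  # one Newton step
    return x

# Example of our own choosing: approximate sqrt(2) as the positive root of
# x^2 - 2, starting from the guess x_1 = 1.
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # about 1.41421356
```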

In order to explain Newton's method, imagine that $x_1$ is already very close to a zero of $f(x)$. We know that if we only look at points very close to $x_1$, then $f(x)$ looks like its tangent line. If $x_1$ was already close to the place where $f(x)$ is zero, and near $x_1$ we know that $f(x)$ looks like its tangent line, then we hope the zero of the tangent line at $x_1$ is a better approximation than $x_1$ itself.

The equation for the tangent line to $f(x)$ at $x_1$ is given by

$y = f'(x_1)(x - x_1) + f(x_1)\,.$

Now we set $y = 0$ and solve for $x$:

$0 = f'(x_1)(x - x_1) + f(x_1)\,.$

This value of $x$ we feel should be a better guess for the value of $x$ where $f(x) = 0$. We choose to call this value of $x$ the second approximation $x_2$, and after a little algebra we have

$x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}\,.$

If our intuition was correct and $x_2$ is in fact a better approximation for the root of $f(x)$, then our logic should apply equally well at $x_2$. We could look to the place where the tangent line to $f(x)$ at $x_2$ is zero. We call this point $x_3$, and following the algebra above we arrive at the formula

$x_3 = x_2 - \frac{f(x_2)}{f'(x_2)}\,.$

And we can continue in this way as long as we wish. At each step, if the current approximation is $x_n$, the new approximation will be

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\,.$
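
As a quick illustration of the formula, take an example of our own choosing (it does not appear in the text above): $f(x) = x^2 - 2$, whose positive root is $\sqrt{2} \approx 1.41421$, with the first guess $x_1 = 1.5$. Two steps give

$x_2 = x_1 - \frac{f(x_1)}{f'(x_1)} = 1.5 - \frac{1.5^2 - 2}{2(1.5)} = 1.5 - \frac{0.25}{3} \approx 1.41667\,,$

$x_3 = x_2 - \frac{f(x_2)}{f'(x_2)} \approx 1.41667 - \frac{0.00694}{2.83333} \approx 1.41422\,,$

which is already correct to four decimal places.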

Examples


Find the root of the function $f(x) = x^2$.

Figure 1: A few iterations of Newton's method applied to $f(x) = x^2$, starting from an initial guess $x_1$. The blue curve is $f(x) = x^2$. The other solid lines are the tangents at the various iteration points.

As you can see, $x_n$ is gradually approaching 0 (which we know is the root of $f(x) = x^2$). One can approach the function's root with arbitrary accuracy.

Answer: $f(x) = x^2$ has a root at $x = 0$.
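
The iterates pictured in Figure 1 can be reproduced numerically with a short loop. In the Python sketch below the starting guess $x_1 = 2$ is an assumption made for illustration; for $f(x) = x^2$ each Newton step simply halves the current approximation, since $x - x^2/(2x) = x/2$.

```python
f = lambda x: x**2       # the example function
df = lambda x: 2 * x     # its derivative

x = 2.0                  # assumed starting guess x_1
for n in range(1, 8):
    print(n, x)          # prints 2.0, 1.0, 0.5, 0.25, ...
    x = x - f(x) / df(x) # each step halves x
```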

Notes

Figure 2: Newton's method applied to the piecewise square-root function from the non-convergence example below, starting with $x_1 = r - h$.

This method fails when $f'(x_n) = 0$. In that case, one should choose a new starting place. Occasionally it may happen that $f(x)$ and $f'(x)$ have a common root. To detect whether this is true, we should first find the solutions of $f'(x) = 0$, and then check the value of $f(x)$ at these places.
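
In code, this failure appears as a division by zero (or as a wild jump when $f'(x_n)$ is merely very small). A defensive variant of the earlier sketch can check for it before dividing; the tolerance eps and the exception used below are choices made here, not part of the method itself.

```python
def newton_guarded(f, df, x, steps=20, eps=1e-12):
    """Newton iteration that stops if the derivative (nearly) vanishes."""
    for _ in range(steps):
        d = df(x)
        if abs(d) < eps:
            # The tangent line is (almost) horizontal, so it has no useful
            # zero; signal the problem so a new starting place can be chosen.
            raise ValueError(f"derivative vanished near x = {x}")
        x = x - f(x) / d
    return x
```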

Newton's method also may not converge for every function; take as an example:

$f(x) = \begin{cases} \sqrt{x - r}, & x \ge r \\ -\sqrt{r - x}, & x < r \end{cases}$

For this function, choosing any $x_1 = r - h$ gives $x_2 = r + h$, and the successive approximations simply alternate back and forth between these two values, so no amount of iteration would get us any closer to the root than our first guess.
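
A short computation confirms the oscillation. The Python sketch below implements the piecewise square-root function above; the values $r = 1$ and $h = 0.5$ are assumptions made only for illustration.

```python
import math

r = 1.0                                                # assumed location of the root

f = lambda x: math.copysign(math.sqrt(abs(x - r)), x - r)
df = lambda x: 1.0 / (2.0 * math.sqrt(abs(x - r)))     # derivative for x != r

x = r - 0.5                                            # x_1 = r - h with assumed h = 0.5
for n in range(1, 7):
    print(n, x)                                        # prints 0.5, 1.5, 0.5, 1.5, ...
    x = x - f(x) / df(x)
```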

Figure 3: Newton's method, when applied to a certain function with a certain initial guess, eventually iterates between the three points shown above.

Newton's method may also fail to converge on a root if the function has a local maximum or minimum that does not cross the x-axis. This can happen when the initial guess is taken near such an extremum: Newton's method is fooled by the function, which dips toward the x-axis but never crosses it in the vicinity of the initial guess.
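
As a concrete illustration, chosen here and not taken from the text, consider $f(x) = x^3 - 2x + 2$. Its only real root is near $x \approx -1.77$, but it also has a local minimum near $x \approx 0.82$ where it dips toward the x-axis without crossing. Starting near that dip, at $x_1 = 0$, the iterates bounce between 0 and 1 forever instead of finding the root.

```python
f = lambda x: x**3 - 2 * x + 2
df = lambda x: 3 * x**2 - 2

x = 0.0                       # assumed starting guess near the local minimum
for n in range(1, 9):
    print(n, x)               # prints 0, 1, 0, 1, ... -- a cycle, not the root
    x = x - f(x) / df(x)
```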
