Introduction to Numerical Methods/Roots of Equations


Root Finding


Objectives:

  • find solutions of quadratic and cubic equations
  • derive the formulas and follow the algorithms for solving non-linear equations using the following methods:
    • Bisection
    • Newton-Raphson
    • Secant
    • False-Position


Roots (or zeros) of a function f(x) are values of x that produce an output of 0. Roots can be real or complex numbers. Finding a root of f(x) is the same as solving the equation f(x) = 0. Solving an equation means finding the values that satisfy the condition specified by the equation.

Lower-degree (quadratic, cubic, and quartic) polynomials have closed-form solutions, but numerical methods may be easier to use. To solve a quadratic equation ax² + bx + c = 0 we can use the quadratic formula:

x = (-b ± √(b² - 4ac)) / (2a)
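
As a quick illustration of the formula, here is a small Python sketch (the function name solve_quadratic and the sample coefficients are our own, not from this book). cmath.sqrt is used so that a negative discriminant yields the complex roots:

import cmath

def solve_quadratic(a, b, c):
  # quadratic formula: x = (-b +/- sqrt(b^2 - 4ac)) / (2a)
  # cmath.sqrt also handles a negative discriminant (complex roots)
  d = cmath.sqrt(b**2 - 4*a*c)
  return (-b + d) / (2*a), (-b - d) / (2*a)

# x^2 - 3x + 2 = 0 has roots 2 and 1
print(solve_quadratic(1, -3, 2))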

There are many root-finding algorithms for solving equations numerically.

Bisection Method


The bisection method starts with two guesses that bracket a root and uses a binary-search strategy to narrow them down. If the function is continuous between the two initial guesses and changes sign there, the intermediate value theorem guarantees that the bisection method converges to a root. Here is a picture that illustrates the idea:

A few steps of the bisection method applied over the starting range [a₁, b₁]. The bigger red dot is the root of the function.

The advantages of the bisection method are guaranteed convergence (given a continuous function on a bracketing interval) and a bounded error: each iteration halves the interval known to contain the root.

The disadvantages of the bisection method are its relatively slow (linear) convergence and its failure on roots that cannot be bracketed by a sign change, such as roots of even multiplicity. A minimal sketch of the algorithm follows.
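
To make the algorithm concrete, here is a minimal Python sketch of the bisection method (the name bisection, its parameters, and the sin example are our own choices, written in the spirit of the Newton example later in this section):

import math

def bisection(f, a, b, tolerance, max_iteration):
  # requires a sign change: f(a) and f(b) must have opposite signs
  assert f(a) * f(b) < 0, "root is not bracketed"
  for i in range(max_iteration):
    m = (a + b) / 2.0        # midpoint of the current bracket
    if f(a) * f(m) < 0:      # root lies in [a, m]
      b = m
    else:                    # root lies in [m, b]
      a = m
    if (b - a) / 2.0 < tolerance:
      break
  return (a + b) / 2.0

print(bisection(math.sin, 2, 4, 0.005, 50))  # approximates pi

Because each iteration halves the bracket, the error after n iterations is at most (b - a) / 2ⁿ, which is the bounded-error guarantee mentioned above.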

Newton-Raphson Method


Newton's Method (a.k.a. the Newton-Raphson Method) is an open method for solving non-linear equations. In contrast to a bracketing method (e.g., the bisection method), Newton's method needs only one initial guess, but it is not guaranteed to converge.

The basic idea of Newton's method is as follows:

Given a function f of x, its derivative f', and an initial guess x_n for the root of this function, a better guess x_{n+1} is found where the tangent line at (x_n, f(x_n)) crosses the x-axis:

x_{n+1} = x_n - f(x_n) / f'(x_n)

This is the Newton-Raphson formula.

The following animation illustrates the method:

The function f is shown in blue and the tangent line is in red. We see that x_{n+1} is a better approximation than x_n for the root x of the function f.

Let's solve the equation sin(x) = 0 using Newton's method:

from sympy import Symbol, diff, sin

x = Symbol('x')
y = sin(x)
derivative = diff(y, x)

def newton(f, derivative, g, tolerance, max_iteration):
  '''
  Solve f = 0 with initial guess g using Newton's method.
  Stop when the absolute relative approximate error is
  smaller than the tolerance or the max # iterations
  is reached.
  '''
  x_previous = g
  for i in range(max_iteration):
    # Newton-Raphson update: x = x - f(x)/f'(x)
    x_current = x_previous - \
      f.subs(x, x_previous).evalf()/derivative.subs(x, x_previous).evalf()

    error = abs((x_current - x_previous)/x_current)
    print("current x:", x_current, " error:", error)

    x_previous = x_current

    if error < tolerance:
      break

newton(y, derivative, 5, 0.005, 15)
Output:

$ python newton.py
current x: 8.38051500624659  error: 0.403377955140806
current x: 10.1008867567293  error: 0.170318883075941
current x: 9.29864101772707  error: 0.0862755898924158
current x: 9.42545121429349  error: 0.0134540186653470
current x: 9.42477796066766  error: 7.14344283379346e-5

Limitations of Newton's Method


Division by Zero


Because Newton's formula involves a division, the iteration breaks down when the denominator becomes zero. The following program (reusing the newton function defined above) demonstrates this issue.

from sympy import Symbol, diff
# assume the newton function defined above is available (e.g., imported)

x = Symbol('x')
y = 4 - 1.0/x
derivative = diff(y, x)

newton(y, derivative, 0.5, 0.005, 15)

Output:

$ python newton.py 
current x: 0  error: oo
current x: nan  error: nan
current x: nan  error: nan
current x: nan  error: nan
current x: nan  error: nan
current x: nan  error: nan
...
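
A common safeguard (our own addition, not part of this book's implementation) is to test the denominator before dividing. The sketch below uses a different example function, x² - 4 with initial guess 0, where the derivative itself vanishes; the name newton_guarded and the threshold eps are hypothetical:

from sympy import Symbol, diff

x = Symbol('x')
y = x**2 - 4
derivative = diff(y, x)  # 2*x, which is zero at x = 0

def newton_guarded(f, derivative, g, tolerance, max_iteration, eps=1e-12):
  # same iteration as the newton function above, but stop
  # instead of dividing by a (nearly) zero derivative
  x_previous = g
  for i in range(max_iteration):
    slope = derivative.subs(x, x_previous).evalf()
    if abs(slope) < eps:
      print("derivative is (nearly) zero at x =", x_previous)
      return None
    x_current = x_previous - f.subs(x, x_previous).evalf()/slope
    error = abs((x_current - x_previous)/x_current)
    x_previous = x_current
    if error < tolerance:
      break
  return x_previous

print(newton_guarded(y, derivative, 0, 0.005, 15))  # guard fires immediately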

Divergence at Inflection Points


When the initial guess is near an inflection point, the method may diverge from the desired root.

The following code, using the same implementation of Newton's method (assume the newton function is imported), demonstrates divergence near the inflection point (x = 1) of f(x) = (x - 1)³ + 0.5.

from sympy import Symbol, diff
# assume the newton function defined above is available (e.g., imported)

x = Symbol('x')
y = (x - 1)**3 + 0.5
derivative = diff(y, x)

newton(y, derivative, 5, 0.005, 15)
Output:

$ python newton.py
current x: 3.65625000000000  error: 0.367521367521368
current x: 2.74721164936563  error: 0.330894909696631
current x: 2.11021215705302  error: 0.301865141940137
current x: 1.60492272680972  error: 0.314837232847788
current x: 0.947823175470885  error: 0.693272298403532
current x: -60.2548036313237  error: 1.01573025084058
current x: -39.8365801731819  error: 0.512549605648313
current x: -26.2244867245776  error: 0.519060433539279
current x: -17.1498826852607  error: 0.529135050417335
current x: -11.1004277326071  error: 0.544974941360464
current x: -7.06809009701998  error: 0.570498901434099
current x: -4.38128712811798  error: 0.613245124168828
current x: -2.59328016409484  error: 0.689476975445585
current x: -1.40842833626215  error: 0.841258157995587
current x: -0.634351912031382  error: 1.22026340513730

Secant Method


The secant method is similar to Newton's method in that it is an open method and uses the intersection of a line with the x-axis to obtain an improved estimate of the root. However, the secant method avoids calculating first derivatives by approximating the derivative value with the slope of a secant line through the two most recent estimates.

The following figure illustrates the secant method with two initial guesses and two successive improved estimates:

The first two iterations of the secant method. The red curve shows the function f and the blue lines are the secants. For this particular case, the secant method will not converge.

The formula for the secant method is as follows:

x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
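
Here is a minimal Python sketch of the secant method, written with plain floats in the same style as the Newton example above (the name secant and the sin example are our own choices):

import math

def secant(f, x0, x1, tolerance, max_iteration):
  # iterate x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
  for i in range(max_iteration):
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    error = abs((x2 - x1) / x2)
    print("current x:", x2, " error:", error)
    x0, x1 = x1, x2
    if error < tolerance:
      break
  return x1

secant(math.sin, 2.5, 3.5, 0.005, 15)  # approximates pi

In practice only one new function evaluation is needed per iteration, since f(x1) from the previous step can be cached and reused.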

False-Position Method


The false-position method is similar to the bisection method in that it requires two initial guesses that bracket the root. Instead of using the midpoint as the improved guess, the false-position method uses the x-intercept of the secant line that passes through both endpoints. The following figure illustrates the idea.

The first two iterations of the false-position method. The red curve shows the function f and the blue lines are the secants.

The formula for the false-position method, given a bracketing interval [a, b], is as follows:

c = b - f(b) * (b - a) / (f(b) - f(a))

As in the bisection method, the sign of f(c) then determines which half of the bracket to keep for the next iteration.
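
A matching Python sketch of the false-position method (the name false_position and the example are our own; the update rule is the formula above combined with the bisection-style bracket test):

import math

def false_position(f, a, b, tolerance, max_iteration):
  # requires a sign change: f(a) and f(b) must have opposite signs
  assert f(a) * f(b) < 0, "root is not bracketed"
  c = a
  for i in range(max_iteration):
    c_old = c
    # x-intercept of the secant line through (a, f(a)) and (b, f(b))
    c = b - f(b) * (b - a) / (f(b) - f(a))
    if f(a) * f(c) < 0:    # root lies in [a, c]
      b = c
    else:                  # root lies in [c, b]
      a = c
    if abs((c - c_old) / c) < tolerance:
      break
  return c

print(false_position(math.sin, 2, 4, 0.005, 50))  # approximates pi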