
Numerical Analysis

Determination of roots of polynomials and transcendental equations by the Newton-Raphson, Secant and Bairstow methods.

Motivation:
Let us look at a set of problems of scientific and engineering interest to get a feel for what root finding is and why we need to find roots. Later we learn how to find them.

Problem 1:
Suppose we are asked to cut, from a thin iron sheet of 5 sq. m area, a rectangular piece with one of its sides 1.25 m longer than the other and with an area of 0.875 sq. m. What will be the length of the smaller side?

Say, length of the smaller side = x m.

Length of the other side = (x + 1.25) m.

Area of the rectangle = x (x + 1.25) = 0.875 sq. m,

i.e. x^2 + 1.25 x - 0.875 = 0,

i.e., say, f(x) = x^2 + 1.25 x - 0.875 = 0.     (1)

So you need to solve a quadratic equation to find the required quantity.


i.e. we have to find the roots of a quadratic equation.

We know that the roots of a quadratic equation

a x^2 + b x + c = 0     (2)

are given by

x = ( -b ± sqrt(b^2 - 4ac) ) / (2a).     (3)
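For instance, applying (3) to equation (1) as set up above (a = 1, b = 1.25, c = -0.875):

x = ( -1.25 ± sqrt(1.5625 + 3.5) ) / 2 = ( -1.25 ± 2.25 ) / 2,

so x = 0.5 or x = -1.75. Taking the physically meaningful positive root, the smaller side is 0.5 m and the other side is 1.75 m, which indeed gives an area of 0.875 sq. m.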
Problem 2:
Concepts of thermodynamics are used extensively in their work by, say, aerospace, mechanical and chemical engineers. Here, the zero-pressure specific heat of dry air, say c_p in kJ/(kg K), is related to the temperature T by a polynomial relation c_p = c_p(T).

Now, determine the temperature that corresponds to a specific heat of 1.2 kJ/(kg K). So, here we have to solve, or find the roots of,

c_p(T) = 1.2,

i.e. find the roots of

f(T) = c_p(T) - 1.2 = 0.     (4)

Problem 3:
The concentration of pollutant bacteria C in a lake decreases as per a decay model C = C(t), where t is the time variable. Determine the time required for the bacteria concentration to be reduced to 9.
Here, we have to find the roots of

f(t) = C(t) - 9 = 0.     (5)

Problem 4:
The volume V of liquid in a hollow horizontal cylinder of radius r and length L is related to the depth h of the liquid by

V = [ r^2 cos^(-1)( (r - h)/r ) - (r - h) sqrt(2rh - h^2) ] L.

Determine h for given values of V, r and L.

Here we have to find the roots of

f(h) = [ r^2 cos^(-1)( (r - h)/r ) - (r - h) sqrt(2rh - h^2) ] L - V = 0.     (6)

So we have seen that finding the roots of

f(x) = 0     (7)

is very important in finding solutions to several scientific and engineering problems.

The equation f(x) = 0 may be a polynomial equation or a transcendental equation.

Polynomial Equations:
Polynomial equations in one independent variable x are a simple class of algebraic equations that are represented as follows:

f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0,  with a_n ≠ 0.

The n-th degree polynomial has n roots. These roots may be real or complex.
Examples: x^2 + 1.25 x - 0.875 = 0 (from Problem 1 above), 2 x^3 - 2.5 x - 5 = 0.

Transcendental Equations:
These equations include trigonometric or exponential or logarithmic functions.
Examples: equations such as x tan x - 1 = 0, e^x - 3x = 0, ln x + x^2 - 2 = 0.
We may note that these examples are nonlinear functions.

Methods of solution:

Some of the ways of finding the roots or solutions of f(x) = 0 are:

• Direct analytical methods
• Graphical approach
• Iterative methods, etc.

Direct analytical methods: One may be able to find a closed-form mathematical expression for the solution (root) of f(x) = 0.
For example, for the quadratic equation (2), we have the solutions given by (3).
However, a large number of equations cannot be solved by direct analytical methods.

Graphical Method: This approach involves plotting the given function and determining the points where it crosses the x-axis. These points, read off approximately from the plot, represent approximate values of the roots of the function.
Example:

Find the positive root of f(x) = 0.

Rewrite f(x) = 0 in the form g_1(x) = g_2(x).

Now consider y = g_1(x) and y = g_2(x) and plot them.

The x-coordinate of the point of intersection of y = g_1(x) and y = g_2(x) gives the required positive root of the given function. Clearly this approach is cumbersome and time consuming.
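As a minimal sketch of this graphical approach, one may take for concreteness the cubic 2x^3 - 2.5x - 5 = 0 used in the later worked examples (an assumption here, since the original plot is not reproduced) and rewrite it as 2x^3 = 2.5x + 5:

```python
# Plot the two sides of 2x^3 = 2.5x + 5; their intersection gives the
# positive root of 2x^3 - 2.5x - 5 = 0 (roughly x = 1.66).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 2.5, 400)
y1 = 2.0 * x**3           # left-hand side
y2 = 2.5 * x + 5.0        # right-hand side

plt.plot(x, y1, label="y1 = 2x^3")
plt.plot(x, y2, label="y2 = 2.5x + 5")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.grid(True)
plt.show()
```

The intersection read from such a plot is only a rough estimate; the iterative methods below refine it systematically.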

Iterative Methods: Starting with an initial guess solution, these methods generate a sequence of estimates to the solution which is expected to converge to the true solution. They are grouped into two categories:

• (a) Bracketing methods
• (b) Open methods

(a) Bracketing Methods: These methods exploit the fact that a function typically changes sign in the vicinity of a root. They start with two initial guesses that bracket the root and then systematically reduce the width of the bracket until the solution is obtained to a desired accuracy. The popular bracketing methods are: (a) the Bisection method, (b) the False Position (or Regula Falsi) method, (c) the Improved or Modified Regula Falsi method.

(b) Open Methods: These methods are based on formulas that require only a single starting (or guess) value of the solution, or two starting values that do not necessarily bracket the root. They may sometimes diverge or move away from the true root as the computation progresses. However, when the open methods converge, they do so much more quickly than the bracketing methods. Some of the popular open methods are: (a) the Secant method, (b) the Newton-Raphson method, (c) Bairstow's method, (d) Muller's method, etc.

Bracketing Methods:
(a) Bisection Method:
This is one of the simplest and most reliable iterative methods for the solution of a nonlinear equation. This method is also known as binary chopping or the half-interval method. Given a function f(x) which is real and continuous in an interval [a, b], if f(a) and f(b) are of opposite sign, i.e. f(a) f(b) < 0, then there is at least one real root of f(x) = 0 in (a, b).

Algorithm:

Given a function f(x) continuous on an interval [a_0, b_0] satisfying the bisection method starting criterion f(a_0) f(b_0) < 0, carry out the following steps to find a root of f(x) = 0:

(1) Set n = 0.
(2) For n = 0, 1, 2, ... until satisfied, do:

(a) Compute the midpoint x_n = (a_n + b_n)/2 and evaluate f(x_n).

(b) If f(a_n) f(x_n) < 0, the root lies in [a_n, x_n]; set a_{n+1} = a_n, b_{n+1} = x_n;

otherwise the root lies in [x_n, b_n]; set a_{n+1} = x_n, b_{n+1} = b_n.

Note:

(1) The subscripts in a_n, b_n, x_n etc. denote the iteration number. [a_0, b_0] is the interval for the zeroth or starting iteration;

[a_n, b_n] is the interval for the n-th iteration.

(2) An iterative process must be terminated at some stage. 'Until satisfied' refers to the solution convergence criterion used for stopping the execution process. We must have an objective criterion for deciding when to stop the process. We may use one of the following criteria, depending on the behaviour of the function (monotonic / steep variation / increasing / decreasing):

(i) |x_{n+1} - x_n| ≤ ε_1  (tolerable absolute error in x),

(ii) |(x_{n+1} - x_n)/x_{n+1}| ≤ ε_2  (tolerable relative error in x),

(iii) |f(x_n)| ≤ ε_3  (value of the function as f(x) → 0),

(iv) |f(x_{n+1}) - f(x_n)| ≤ ε_4  (difference in two consecutive iteration function values).

Usually ε_1, ε_2, ε_3, ε_4 are referred to as tolerance values, and they are fixed by us depending on the level of accuracy we desire to have in the solution.

For example, ε = 10^(-3), 10^(-5), etc.
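A minimal Python sketch of the bisection algorithm above, using criterion (iii), |f(x_n)| ≤ ε, as the stopping rule; the function name and signature are illustrative, not from the text:

```python
# Minimal bisection sketch; stops when |f(x_n)| <= eps (criterion (iii))
# or after max_iter halvings.
def bisection(f, a, b, eps=1e-5, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for n in range(max_iter):
        x = 0.5 * (a + b)        # midpoint of the current bracket
        fx = f(x)
        if abs(fx) <= eps:
            return x, n
        if fa * fx < 0:          # root lies in [a, x]
            b, fb = x, fx
        else:                    # root lies in [x, b]
            a, fa = x, fx
    return 0.5 * (a + b), max_iter

# e.g. the cubic of the worked example below:
root, iters = bisection(lambda x: 2*x**3 - 2.5*x - 5, 1.0, 2.0)
print(root, iters)               # root near 1.66010
```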

Example:

Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in the interval [1, 2] by the bisection method.

Solution: Given f(x) = 2x^3 - 2.5x - 5 on [1, 2].

f(1) = -5.5 < 0 and f(2) = 6 > 0, so f(1) f(2) < 0.
There is a root for the given function in [1, 2].

Set a_0 = 1, b_0 = 2.
Compute x_0 = (a_0 + b_0)/2 = 1.5 and f(x_0) = -2 < 0.

Since f(a_0) f(x_0) > 0, the root lies in [x_0, b_0]; set a_1 = x_0 = 1.5 and b_1 = b_0 = 2.

Details of the remaining steps are provided in the table below:

Bisection Method
Iteration no.   a_n   b_n   x_n = (a_n + b_n)/2   f(x_n)
0 1.0000000000 2.0000000000 1.5000000000 -2.0000000000
1 1.5000000000 2.0000000000 1.7500000000 1.3437500000
2 1.5000000000 1.7500000000 1.6250000000 -0.4804687500
3 1.6250000000 1.7500000000 1.6875000000 0.3920898438
4 1.6250000000 1.6875000000 1.6562500000 -0.0538940430
5 1.6562500000 1.6875000000 1.6718750000 0.1666488647
6 1.6562500000 1.6718750000 1.6640625000 0.0557680130
7 1.6562500000 1.6640625000 1.6601562500 0.0007849932
8 1.6562500000 1.6601562500 1.6582031250 -0.0265924782
9 1.6582031250 1.6601562500 1.6591796875 -0.0129132364
10 1.6591796875 1.6601562500 1.6596679688 -0.0060664956
11 1.6596679688 1.6601562500 1.6599121094 -0.0026413449
12 1.6599121094 1.6601562500 1.6600341797 -0.0009283243
13 1.6600341797 1.6601562500 1.6600952148 -0.0000717027
14 1.6600952148 1.6601562500 1.6601257324 0.0003566360
15 1.6600952148 1.6601257324 1.6601104736 0.0001424643
16 1.6600952148 1.6601104736 1.6601028442 0.0000353802
17 1.6600952148 1.6601028442 1.6600990295 -0.0000181614
18 1.6600990295 1.6601028442 1.6601009369 0.0000086094

Example:

Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the bisection method (the same function is used in all of the later examples on this interval). The iteration details are:

Bisection Method
Iteration no.   a_n   b_n   x_n = (a_n + b_n)/2   f(x_n)
0 0.5000000000 1.5000000000 1.0000000000 3.1720056534
1 0.5000000000 1.0000000000 0.7500000000 0.6454265714
2 0.5000000000 0.7500000000 0.6250000000 -1.0943561792
3 0.6250000000 0.7500000000 0.6875000000 -0.1919542551
4 0.6875000000 0.7500000000 0.7187500000 0.2357951254
5 0.6875000000 0.7187500000 0.7031250000 0.0240836944
6 0.6875000000 0.7031250000 0.6953125000 -0.0834089667
7 0.6953125000 0.7031250000 0.6992187500 -0.0295295101
8 0.6992187500 0.7031250000 0.7011718750 -0.0026894973
9 0.7011718750 0.7031250000 0.7021484375 0.0107056862
10 0.7011718750 0.7021484375 0.7016601562 0.0040097744
11 0.7011718750 0.7016601562 0.7014160156 0.0006612621
12 0.7011718750 0.7014160156 0.7012939453 -0.0010144216
13 0.7012939453 0.7014160156 0.7013549805 -0.0001766436
14 0.7013549805 0.7014160156 0.7013854980 0.0002420362
15 0.7013549805 0.7013854980 0.7013702393 0.0000326998
16 0.7013549805 0.7013702393 0.7013626099 -0.0000715650
17 0.7013626099 0.7013702393 0.7013664246 -0.0000194324
18 0.7013664246 0.7013702393 0.7013683319 0.0000069206

Exercise: Find the solutions of the following problems, accurate to within a prescribed tolerance, using the bisection method.

(1)

(2)

False Position or Regula Falsi Method:

The bisection method converges slowly. There, while defining the new interval, the only use made of the function f(x) is in checking whether f(a_n) f(x_n) < 0, but not in actually calculating the end point of the new interval. The False Position or Regula Falsi method uses f(x) not only in deciding the new interval, as in the bisection method, but also in calculating one of the end points of the new interval. Here one of the end points of the new interval, say x_n, is calculated as a weighted average defined on the previous interval [a_n, b_n] as

x_n = ( a_n f(b_n) - b_n f(a_n) ) / ( f(b_n) - f(a_n) )

(f(a_n) and f(b_n) have opposite signs).
The algorithm for computing the root of f(x) = 0 by this method is given below.
Algorithm:

Given a function f(x) continuous on an interval [a_0, b_0] satisfying the criterion f(a_0) f(b_0) < 0, carry out the following steps to find the root of f(x) = 0 in [a_0, b_0]:

(1) Set n = 0.
(2) For n = 0, 1, 2, ... until the convergence criterion is satisfied, do:

(a) Compute x_n = ( a_n f(b_n) - b_n f(a_n) ) / ( f(b_n) - f(a_n) ) and f(x_n).

(b) If f(a_n) f(x_n) < 0, then set a_{n+1} = a_n, b_{n+1} = x_n;

otherwise set a_{n+1} = x_n, b_{n+1} = b_n.
Note:
Use any one of the convergence criteria discussed earlier under the bisection method. For the sake of carrying out a comparative study, we will stick both to the same convergence criterion as before, i.e.

|f(x_n)| ≤ ε = 10^(-5) (say), and to the same example problems.
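A minimal Python sketch of the Regula Falsi algorithm above (illustrative names; same |f(x_n)| ≤ ε stopping rule as before):

```python
# Minimal Regula Falsi sketch: the new point is the weighted average
# (false-position) formula instead of the midpoint.
def regula_falsi(f, a, b, eps=1e-5, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for n in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # intersection of secant with x-axis
        fx = f(x)
        if abs(fx) <= eps:
            return x, n
        if fa * fx < 0:                     # root lies in [a, x]
            b, fb = x, fx
        else:                               # root lies in [x, b]
            a, fa = x, fx
    return x, max_iter

root, iters = regula_falsi(lambda x: 2*x**3 - 2.5*x - 5, 1.0, 2.0)
print(root, iters)   # about 1.66010 in far fewer steps than bisection
```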


Example:

Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in the interval [1, 2] by the Regula Falsi method.

Solution: Since f(1) f(2) = (-5.5)(6) < 0, we go ahead in finding the root of the given function f(x) in [1, 2].

Set a_0 = 1, b_0 = 2.
Compute x_0 = ( a_0 f(b_0) - b_0 f(a_0) ) / ( f(b_0) - f(a_0) ) = 1.4782608747 and f(x_0) = -2.2348976135.

Since |f(x_0)| > ε, proceed with the iteration.

Iteration details are provided below in tabular form:

Regula Falsi Method
Iteration no.   a_n   b_n   x_n   f(x_n)
0 1.0000000000 2.0000000000 1.4782608747 -2.2348976135
1 1.4782608747 2.0000000000 1.6198574305 -0.5488323569
2 1.6198574305 2.0000000000 1.6517157555 -0.1169833690
3 1.6517157555 2.0000000000 1.6583764553 -0.0241659321
4 1.6583764553 2.0000000000 1.6597468853 -0.0049594725
5 1.6597468853 2.0000000000 1.6600278616 -0.0010169938
6 1.6600278616 2.0000000000 1.6600854397 -0.0002089010
7 1.6600854397 2.0000000000 1.6600972414 -0.0000432589
8 1.6600972414 2.0000000000 1.6600997448 -0.0000081223
Note: One may note that the Regula Falsi method has converged faster than the bisection method.

Geometric Interpretation of the Regula Falsi Method:

Let us plot the polynomial considered in the above example and trace x_n, its movement, and the new intervals with iteration. From the figure, one can verify that the weighted average x_n = ( a_n f(b_n) - b_n f(a_n) ) / ( f(b_n) - f(a_n) ) is the point of intersection with the x-axis of the secant to f(x) passing through the points (a_n, f(a_n)) and (b_n, f(b_n)). Since here f(x) is concave upward and increasing, the secant is always above f(x). Hence x_n always lies to the left of the zero. If f(x) were concave downward and increasing, x_n would always lie to the right of the zero.

Example:

Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the Regula Falsi method.

Regula Falsi Method
Iteration no.   a_n   b_n   x_n   f(x_n)
0 0.5000000000 1.5000000000 0.8773435354 2.1035263538
1 0.5000000000 0.8773435354 0.7222673893 0.2828366458
2 0.5000000000 0.7222673893 0.7032044530 0.0251714624
3 0.5000000000 0.7032044530 0.7015219927 0.0021148270
4 0.5000000000 0.7015219927 0.7013807297 0.0001767781
5 0.5000000000 0.7013807297 0.7013689280 0.0000148928
6 0.5000000000 0.7013689280 0.7013679147 0.0000009526

Exercise:
(1) Solve f(x) = 0 for the root in the interval [2, 3] by the Regula Falsi method.

(2) Find the solution of the given equation in the interval [1, 2], accurate to within a prescribed tolerance, using the Regula Falsi method.

Modified Regula Falsi Method:

In this method an improvement over the Regula Falsi method is obtained by replacing the secant by straight lines of ever-smaller slope until the iterate x_n falls to the other side of the zero of f(x). The various steps in the method are given in the algorithm below:
Algorithm:

Given a function f(x) continuous on an interval [a_0, b_0] satisfying the criterion f(a_0) f(b_0) < 0, carry out the following steps to find the root of f(x) = 0 in [a_0, b_0]:

(1) Set n = 0, F_a = f(a_0), F_b = f(b_0).
(2) For n = 0, 1, 2, ..., until the convergence criterion is satisfied, do:

(a) Compute x_n = ( a_n F_b - b_n F_a ) / ( F_b - F_a ) and f(x_n).

(b) If F_a f(x_n) < 0, then
Set a_{n+1} = a_n, b_{n+1} = x_n, F_b = f(x_n);

Also, if f(x_n) f(x_{n-1}) > 0, set F_a = F_a / 2.
Otherwise

Set a_{n+1} = x_n, b_{n+1} = b_n, F_a = f(x_n);

Also, if f(x_n) f(x_{n-1}) > 0, set F_b = F_b / 2.
(In the first iteration, f(x_{n-1}) is taken as the function value at the end point being replaced.)
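A minimal Python sketch of the modified Regula Falsi algorithm above: the stored function value at the stagnant end point is halved whenever two successively computed points fall on the same side of the root (with the first-step convention noted above); the names are illustrative:

```python
# Minimal modified Regula Falsi sketch: Fa and Fb are the stored function
# values at the bracket end points; the stagnant one is halved whenever two
# successive iterates lie on the same side of the root.
def modified_regula_falsi(f, a, b, eps=1e-5, max_iter=100):
    Fa, Fb = f(a), f(b)
    if Fa * Fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    fx_prev = None
    for n in range(max_iter):
        x = (a * Fb - b * Fa) / (Fb - Fa)
        fx = f(x)
        if abs(fx) <= eps:
            return x, n
        # At the first step the replaced end point plays the role of x_{n-1},
        # so the halving test is taken as satisfied there.
        same_side = fx_prev is None or fx * fx_prev > 0
        if Fa * fx < 0:              # root in [a, x]: replace b
            b, Fb = x, fx
            if same_side:
                Fa *= 0.5            # relax the stagnant left end point
        else:                        # root in [x, b]: replace a
            a, Fa = x, fx
            if same_side:
                Fb *= 0.5            # relax the stagnant right end point
        fx_prev = fx
    return x, max_iter

root, iters = modified_regula_falsi(lambda x: 2*x**3 - 2.5*x - 5, 1.0, 2.0)
print(root, iters)
```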
Example:

Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in the interval [1, 2] by the Modified Regula Falsi method.

Solution: Since f(1) f(2) = (-5.5)(6) < 0, we go ahead with finding the root of the given function f(x) in [1, 2], setting a_0 = 1, b_0 = 2, F_a = f(1) = -5.5, F_b = f(2) = 6 and following the above algorithm. Results are provided in the table below:

Modified Regula Falsi Method
Iteration no.   a_n   b_n   x_n   f(x_n)
0 1.0000000000 2.0000000000 1.4782608747 -2.2348976135
1 1.4782608747 2.0000000000 1.7010031939 0.5908976793
2 1.4782608747 1.7010031939 1.6544258595 -0.0793241411
3 1.6544258595 1.7010031939 1.6599385738 -0.0022699926
4 1.6599385738 1.7010031939 1.6602516174 0.0021237291
5 1.6599385738 1.6602516174 1.6601003408 0.0000002435

The geometric view of the example is provided in the figure below:


Example: Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the Modified Regula Falsi method.

Modified Regula Falsi Method
Iteration no.   a_n   b_n   x_n   f(x_n)
0 0.5000000000 1.5000000000 0.8773435354 2.1035263538
1 0.5000000000 0.8773435354 0.7222673893 0.2828366458
2 0.5000000000 0.7222673893 0.6871531010 -0.1967970580
3 0.6871531010 0.7222673893 0.7015607357 0.0026464546
4 0.6871531010 0.7015607357 0.7013695836 0.0000239155
5 0.6871531010 0.7013695836 0.7013661265 -0.0000235377
6 0.7013661265 0.7013695836 0.7013678551 -0.0000003363
Secant Method:

Like the Regula Falsi method and the bisection method, this method also requires two initial estimates x_0, x_1 of the root of f(x) = 0, but unlike those earlier methods it gives up the demand of bracketing the root. Like the Regula Falsi method, this method too retains the use of secants throughout while tracking the root of f(x) = 0. The secant joining the points (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n)) is given by

y - f(x_n) = [ ( f(x_n) - f(x_{n-1}) ) / ( x_n - x_{n-1} ) ] (x - x_n).

Say it intersects the x-axis at x_{n+1}; then, putting y = 0,

x_{n+1} = ( x_{n-1} f(x_n) - x_n f(x_{n-1}) ) / ( f(x_n) - f(x_{n-1}) ).

If |f(x_{n+1})| ≤ ε (say), then x_{n+1} is accepted as the root; otherwise replace (x_{n-1}, x_n) with (x_n, x_{n+1}) and repeat the process to get x_{n+2}, and so on. The method is algorithmically described below:

Algorithm:
Given f(x), two initial points a, b and the required level of accuracy ε, carry out the following steps to find the root of f(x) = 0.

(1) Set x_0 = a, x_1 = b.
(2) For n = 1, 2, ... until the convergence criterion is satisfied, do:
Compute x_{n+1} = ( x_{n-1} f(x_n) - x_n f(x_{n-1}) ) / ( f(x_n) - f(x_{n-1}) ).
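A minimal Python sketch of the secant iteration above (illustrative names; the same |f(x_{n+1})| ≤ ε stopping rule):

```python
# Minimal secant sketch: no bracketing is maintained, only the last
# two iterates are kept.
def secant(f, x0, x1, eps=1e-5, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for n in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # secant's intersection with x-axis
        f2 = f(x2)
        if abs(f2) <= eps:
            return x2, n
        x0, f0 = x1, f1                        # slide the pair of points forward
        x1, f1 = x2, f2
    return x2, max_iter

root, iters = secant(lambda x: 2*x**3 - 2.5*x - 5, 1.0, 2.0)
print(root, iters)   # about 1.66010 after only a few iterations
```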

Example:

Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root, with x_0 = 1 and x_1 = 2, by the secant method to an accuracy of ε = 10^(-5).

Solution:

Set x_0 = 1, x_1 = 2; f(x_0) = -5.5, f(x_1) = 6.
Compute x_2 = ( x_0 f(x_1) - x_1 f(x_0) ) / ( f(x_1) - f(x_0) ) = 1.4782608747 and f(x_2) = -2.2348976135.
Repeat the process with (x_1, x_2), (x_2, x_3) and so on till you get an x_{n+1} s.t. |f(x_{n+1})| ≤ ε. These results are tabulated below:

Secant Method
Iteration no.   x_{n-1}   x_n   x_{n+1}   f(x_{n+1})
0 1.0000000000 2.0000000000 1.4782608747 -2.2348976135
1 2.0000000000 1.4782608747 1.6198574305 -0.5488323569
2 1.4782608747 1.6198574305 1.6659486294 0.0824255496
3 1.6198574305 1.6659486294 1.6599303484 -0.0023854144
4 1.6659486294 1.6599303484 1.6600996256 -0.0000097955

The geometrical visualization of the root-tracking procedure by the secant method for the above example is shown in the figure below.

Exercise: Find the solutions, accurate to within a prescribed tolerance, of the following problems using the secant method.

(1)

(2)

Convergence of the secant method:

Definition: Say e_n = x_n - ξ, where ξ is the root of f(x) = 0; e_n and e_{n+1} are the errors at the n-th and (n+1)-th iterations, and x_n, x_{n+1} are the approximations of ξ at the n-th and (n+1)-th iterations. If |e_{n+1}| ≈ C |e_n|^p, where C is a constant, then the rate (order) of convergence of the method by which {x_n} is generated is p.

Claim: The secant method has superlinear convergence.
Proof: The iteration scheme for the secant method is given by

x_{n+1} = ( x_{n-1} f(x_n) - x_n f(x_{n-1}) ) / ( f(x_n) - f(x_{n-1}) ).

Say f(ξ) = 0 and e_n = x_n - ξ, i.e. the error in the n-th iteration in estimating ξ. Substituting x_n = ξ + e_n in the iteration scheme, we get

e_{n+1} = ( e_{n-1} f(x_n) - e_n f(x_{n-1}) ) / ( f(x_n) - f(x_{n-1}) ).

By Taylor expansion of f about the root ξ (using f(ξ) = 0),

f(x_n) = f(ξ + e_n) = e_n f'(ξ) + (e_n^2 / 2) f''(ξ) + ...,

and similarly for f(x_{n-1}). Using these expansions in the expression for e_{n+1} and simplifying (equivalently, applying the Mean Value Theorem), we get

e_{n+1} ≈ ( f''(ξ) / ( 2 f'(ξ) ) ) e_n e_{n-1} = C e_n e_{n-1}.

By the definition of the rate of convergence, the method is of order p if e_{n+1} ≈ A e_n^p; hence also e_n ≈ A e_{n-1}^p, i.e. e_{n-1} ≈ (e_n / A)^(1/p).

From these relations we get

A e_n^p ≈ C e_n (e_n / A)^(1/p),

i.e., equating the powers of e_n on both sides,

p = 1 + 1/p, i.e. p^2 - p - 1 = 0,

i.e.

p = (1 + sqrt(5)) / 2 ≈ 1.618 > 1.

Hence the convergence is superlinear.
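The superlinear order p ≈ 1.618 can also be observed numerically. The short script below is an added illustration, not part of the original text: it runs the secant iteration on the cubic of the earlier worked example and prints rough estimates of p as log|e_{n+1}| / log|e_n|:

```python
# Rough numerical check of the secant method's order of convergence on
# f(x) = 2x^3 - 2.5x - 5; the 'exact' root is taken from the longest run.
import math

def f(x):
    return 2*x**3 - 2.5*x - 5

def secant_iterates(x0, x1, n_steps):
    xs = [x0, x1]
    for _ in range(n_steps):
        x0, x1 = x1, (x0*f(x1) - x1*f(x0)) / (f(x1) - f(x0))
        xs.append(x1)
    return xs

xs = secant_iterates(1.0, 2.0, 7)
root = xs[-1]                                  # effectively converged value
errors = [abs(x - root) for x in xs[:-2]]
for e_n, e_next in zip(errors, errors[1:]):
    if e_n > 0 and e_next > 0:
        print(math.log(e_next) / math.log(e_n))   # roughly approaches 1.618
```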

Example:

Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the secant method, taking x_0 = 0.5 and x_1 = 1.5.

Secant Method
Iteration no.   x_{n-1}   x_n   x_{n+1}   f(x_{n+1})
0 0.5000000000 1.5000000000 0.8773435354 2.1035263538


1 1.5000000000 0.8773435354 0.4212051630 -4.2280626297
2 0.8773435354 0.4212051630 0.7258019447 0.3298732340
3 0.4212051630 0.7258019447 0.7037572265 0.0327354670
4 0.7258019447 0.7037572265 0.7013285756 -0.0005388701
5 0.7037572265 0.7013285756 0.7013679147 0.0000009526

Newton-Raphson Method:

Unlike the earlier methods, this method requires only one appropriate starting point x_0 as an initial approximation to the root of the equation f(x) = 0. At (x_0, f(x_0)) a tangent to the curve y = f(x) is drawn. The equation of this tangent is given by

y - f(x_0) = f'(x_0) (x - x_0).

The point of intersection, say x_1, of this tangent with the x-axis (y = 0) is taken to be the next approximation to the root of f(x) = 0. So, on substituting y = 0 in the tangent equation, we get

x_1 = x_0 - f(x_0) / f'(x_0).

If |f(x_1)| ≤ ε (say), we have got an acceptable approximate root of f(x) = 0; otherwise we replace x_0 by x_1, draw a tangent to y = f(x) at (x_1, f(x_1)) and consider its intersection, say x_2, with the x-axis as an improved approximation to the root of f(x) = 0. If |f(x_2)| > ε, we iterate the above process till the convergence criterion is satisfied. This geometrical description of the method may be clearly visualized in the figure below:

The various steps involved in calculating the root of f(x) = 0 by the Newton-Raphson method are described compactly in the algorithm below.

Algorithm:
Given a continuously differentiable function f(x) and an initial approximation x_0 to the root ξ of f(x) = 0, the steps involved in calculating an approximation x_{n+1} to the root s.t. |f(x_{n+1})| ≤ ε are:

(1) Calculate f(x_0), f'(x_0) and set n = 0.

(2) For n = 0, 1, 2, ... until the convergence criterion is satisfied, do:
Calculate x_{n+1} = x_n - f(x_n) / f'(x_n).
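A minimal Python sketch of the Newton-Raphson algorithm above; the derivative is supplied explicitly and the names are illustrative:

```python
# Minimal Newton-Raphson sketch; stops when |f(x_n)| <= eps.
def newton_raphson(f, df, x0, eps=1e-5, max_iter=50):
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) <= eps:
            return x, n
        x = x - fx / df(x)          # intersection of the tangent with the x-axis
    return x, max_iter

# Example: the cubic of the worked examples, f(x) = 2x^3 - 2.5x - 5.
root, iters = newton_raphson(lambda x: 2*x**3 - 2.5*x - 5,
                             lambda x: 6*x**2 - 2.5,
                             2.0)
print(root, iters)   # about 1.66010 within a handful of iterations
```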
Remark (1): This method converges faster than the earlier methods.
In fact the method converges at a quadratic rate. We will prove this
later.

Remark (2): This method can be derived directly from the Taylor expansion of f(x) in the neighbourhood of the root ξ of f(x) = 0. The starting approximation x_0 to ξ is to be properly chosen so that the first-order Taylor series approximation of f(x) in the neighbourhood of x_0 leads to x_1, an improved approximation to ξ, i.e.

0 = f(ξ) = f(x_0) + (ξ - x_0) f'(x_0) + ( (ξ - x_0)^2 / 2! ) f''(x_0) + ...;

neglecting (ξ - x_0)^2 and its higher powers, and writing x_1 for the resulting approximation to ξ, we get

0 ≈ f(x_0) + (x_1 - x_0) f'(x_0),

i.e. x_1 = x_0 - f(x_0) / f'(x_0).

Now the successive approximations x_2, x_3, ..., x_{n+1}, etc. may be calculated by the iterative formula:

x_{n+1} = x_n - f(x_n) / f'(x_n),  n = 1, 2, ...
Remark (3): One may also derive the above iteration formula starting with the iteration formula for the secant method. In a way this may help one to visualize the Newton-Raphson method as an improvement over the secant method. So, let us consider the iteration formula for the secant method, i.e.

x_{n+1} = ( x_{n-1} f(x_n) - x_n f(x_{n-1}) ) / ( f(x_n) - f(x_{n-1}) ).

Add and subtract x_n f(x_n) in the numerator on the R.H.S. to get

x_{n+1} = x_n - f(x_n) ( x_n - x_{n-1} ) / ( f(x_n) - f(x_{n-1}) ),

or,

x_{n+1} = x_n - f(x_n) / [ ( f(x_n) - f(x_{n-1}) ) / ( x_n - x_{n-1} ) ].

Clearly, ( f(x_n) - f(x_{n-1}) ) / ( x_n - x_{n-1} ) is the slope of the secant to the curve y = f(x) through the points (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n)). This also represents the slope of the tangent to the curve, parallel to this secant, at some point between x_{n-1} and x_n. If f(x) is differentiable, one may as well approximate this slope by f'(x_n) and thus arrive at the Newton-Raphson iteration formula.

Example:

Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in [1, 2] by the Newton-Raphson method.

Solution:
Given f(x) = 2x^3 - 2.5x - 5, so f'(x) = 6x^2 - 2.5.

Take x_0 = 2, so f(x_0) = 6 and f'(x_0) = 21.5.
x_1 = x_0 - f(x_0)/f'(x_0) = 2 - 6/21.5 = 1.7209302187 and f(x_1) = 0.8910911679.
Since |f(x_1)| > ε, repeat the process.

Results are tabulated below:

Newton-Raphson Method

Iteration no.   x_n   x_{n+1} = x_n - f(x_n)/f'(x_n)   f(x_{n+1})
0 2.0000000000 1.7209302187 0.8910911679
1 1.7209302187 1.6625729799 0.0347661413
2 1.6625729799 1.6601046324 0.0000604780
3 1.6601046324 1.6601003408 0.0000002435

Example:

Solve f(x) = 0 in [0.5, 1.5] for the root by the Newton-Raphson method.

Solution: Given the same function f(x) as in the earlier [0.5, 1.5] examples,

say x_0 = 0.5.

The results are tabulated below:

Newton-Raphson Method

Iteration no.   x_n   x_{n+1} = x_n - f(x_n)/f'(x_n)   f(x_{n+1})
0 0.5000000000 0.6934901476 0.1086351126
1 0.6934901476 0.7013291121 0.0005313741
2 0.7013291121 0.7013678551 0.0000003363

Exercise: Find the solutions, accurate to within a prescribed tolerance, for the following problems using the Newton-Raphson method (with the stated intervals and starting points).

(1)

(2)

Convergence of the Newton-Raphson method:

Suppose ξ is a root of f(x) = 0 and x_n is an estimate of ξ s.t. |ξ - x_n| is small. Then by Taylor series expansion we have

0 = f(ξ) = f(x_n) + (ξ - x_n) f'(x_n) + ( (ξ - x_n)^2 / 2! ) f''(θ_n)     (1*)

for some θ_n between ξ and x_n.

By the Newton-Raphson method, we know that

x_{n+1} = x_n - f(x_n) / f'(x_n), i.e. f(x_n) = (x_n - x_{n+1}) f'(x_n).     (2*)

Using (2*) in (1*) we get

0 = (x_n - x_{n+1}) f'(x_n) + (ξ - x_n) f'(x_n) + ( (ξ - x_n)^2 / 2! ) f''(θ_n),

i.e. (ξ - x_{n+1}) f'(x_n) = - ( (ξ - x_n)^2 / 2 ) f''(θ_n).

Say e_n = ξ - x_n and e_{n+1} = ξ - x_{n+1}, where e_n, e_{n+1} denote the errors in the solution at the n-th and (n+1)-th iterations. Then

e_{n+1} = - ( f''(θ_n) / ( 2 f'(x_n) ) ) e_n^2, i.e. |e_{n+1}| ≈ C |e_n|^2 near the root.

The Newton-Raphson method is therefore said to have quadratic convergence.

Note:
Alternatively, one can also prove the quadratic convergence of the Newton-Raphson method based on fixed-point theory. It is worth stating a few comments on this approach, as it is a more general approach covering most of the iteration schemes discussed earlier.

A Brief Discussion on Fixed-Point Iteration:

Suppose that we are given a function

f(x) = 0     (i)

on an interval I = [a, b] for which we need to find a root. Derive from it an equation of the form:

x = g(x).     (ii)

Any solution to (ii) is called a fixed point, and it is a solution of (i). The function g(x) is called the 'iteration function'.

Example:

Given, for instance, f(x) = x^2 - 2x - 3 = 0, one may re-write it as:

x = sqrt(2x + 3) = g_1(x),

or, x = 3 / (x - 2) = g_2(x),

or, x = (x^2 - 3) / 2 = g_3(x),

where g_1, g_2, g_3 denote possible choices of the iteration function.

Fixed-Point Iteration:

Let ξ be a root of f(x) = 0 and g(x) be an associated iteration function. Say x_0 is the given starting point. Then one can generate a sequence of successive approximations of ξ as:

x_1 = g(x_0),

x_2 = g(x_1),

...

x_{n+1} = g(x_n),

...

This sequence is said to converge to ξ iff x_n → ξ as n → ∞.
Now the natural question that would arise is: what are the conditions on g(x) s.t. the sequence {x_n} → ξ as n → ∞?

Here, we state a few important comments on such convergence:

(i) Suppose g(x) is defined on an interval I = [a, b] and g(x) ∈ I for every x ∈ I,

i.e. g(x) maps I into itself.

(ii) The iteration function g(x) is continuous on I = [a, b].

(iii) The iteration function g(x) is differentiable on I = [a, b] and there exists a constant K < 1 s.t. |g'(x)| ≤ K for all x in I.

Theorem:
Let g(x) be an iteration function satisfying (i), (ii) and (iii). Then g(x) has exactly one fixed point ξ in I and, starting with any x_0 in I, the sequence generated by the fixed-point iteration converges to ξ.

(iv) If |g'(x)| ≤ K on I, then |e_{n+1}| ≤ K |e_n|, so the smaller |g'(ξ)| is, the faster the convergence. For rapid convergence it is desirable that g'(ξ) ≈ 0. Under this condition, for the Newton-Raphson method (whose iteration function is g(x) = x - f(x)/f'(x), for which g'(ξ) = 0 when f'(ξ) ≠ 0) one can show that |e_{n+1}| ≈ C |e_n|^2 (i.e. quadratic convergence).

Remark 1: One can generalize all the iterative methods to a system of nonlinear equations. For instance, if we have two nonlinear equations f_1(x, y) = 0 and f_2(x, y) = 0, then, given a suitable starting point (x_0, y_0), the Newton-Raphson algorithm may be written as follows:

For i = 0, 1, 2, ... until satisfied, do:

solve the linear system

(∂f_1/∂x) Δx + (∂f_1/∂y) Δy = -f_1,
(∂f_2/∂x) Δx + (∂f_2/∂y) Δy = -f_2,

with all functions and derivatives evaluated at (x_i, y_i), and set

x_{i+1} = x_i + Δx,  y_{i+1} = y_i + Δy.

Exercises: Solve the following systems of equations by the Newton-Raphson method, using the given initial approximations (x_0, y_0).

(1)

(2)


Bairstow's Method

Bairstow's method is an iterative method used to find both the real and complex roots of a polynomial. It is based on the idea of synthetic division of the given polynomial by a quadratic function and can be used to find all the roots of a polynomial. Given a polynomial, say,

f_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n,     (B.1)

Bairstow's method divides the polynomial by a quadratic function

x^2 - r x - s.     (B.2)

Now the quotient will be a polynomial of degree (n - 2),

f_{n-2}(x) = b_2 + b_3 x + ... + b_n x^{n-2},     (B.3)

and the remainder is a linear function, i.e.

R(x) = b_1 (x - r) + b_0.     (B.4)

Since the quotient f_{n-2}(x) and the remainder R(x) are obtained by standard synthetic division, the coefficients b_0, b_1, ..., b_n can be obtained by the following recurrence relation:

b_n = a_n,     (B.5a)

b_{n-1} = a_{n-1} + r b_n,     (B.5b)

b_i = a_i + r b_{i+1} + s b_{i+2},  for i = n - 2, ..., 1, 0.     (B.5c)
If x^2 - r x - s is an exact factor of f_n(x), then the remainder R(x) is zero and the real/complex roots of x^2 - r x - s are roots of f_n(x). It may be noted that x^2 - r x - s is constructed based on some guess values for (r, s). So Bairstow's method reduces to determining the values of r and s such that the remainder (i.e. b_0 and b_1) is zero. For finding such values, Bairstow's method uses a strategy similar to the Newton-Raphson method.

Since both b_0 and b_1 are functions of r and s, we can write their Taylor series expansions as:

b_1(r + Δr, s + Δs) = b_1 + (∂b_1/∂r) Δr + (∂b_1/∂s) Δs + ...,     (B.6a)

b_0(r + Δr, s + Δs) = b_0 + (∂b_0/∂r) Δr + (∂b_0/∂s) Δs + ...     (B.6b)

For small Δr and Δs, the second and higher order terms may be neglected, so that the improvement (Δr, Δs) over the guess values (r, s) may be obtained by equating (B.6a) and (B.6b) to zero, i.e.

(∂b_1/∂r) Δr + (∂b_1/∂s) Δs = -b_1,     (B.7a)

(∂b_0/∂r) Δr + (∂b_0/∂s) Δs = -b_0.     (B.7b)

To solve the system of equations (B.7a)-(B.7b), we need the partial derivatives of b_0 and b_1 w.r.t. r and s. Bairstow has shown that these partial derivatives can be obtained by a second synthetic division of f_{n-2}(x), which amounts to using the recurrence relation (B.5a)-(B.5c) replacing the a's with the b's and the b's with new coefficients c's, i.e.

c_n = b_n,     (B.8a)

c_{n-1} = b_{n-1} + r c_n,     (B.8b)

c_i = b_i + r c_{i+1} + s c_{i+2},     (B.8c)

for i = n - 2, ..., 2, 1,

where

∂b_0/∂r = c_1,  ∂b_0/∂s = ∂b_1/∂r = c_2,  ∂b_1/∂s = c_3.     (B.9)

The system of equations (B.7a)-(B.7b) may then be written as

c_2 Δr + c_3 Δs = -b_1,     (B.10a)

c_1 Δr + c_2 Δs = -b_0.     (B.10b)

These equations can be solved for (Δr, Δs), which in turn are used to improve the guess values (r, s) to (r + Δr, s + Δs).

Now we can calculate the percentage approximate errors in (r, s) by

|ε_r| = |Δr / r| × 100%,  |ε_s| = |Δs / s| × 100%.     (B.11)

If |ε_r| > ε_t or |ε_s| > ε_t, where ε_t is the iteration stopping tolerance, then we repeat the process with the new guess values (r + Δr, s + Δs). Otherwise the roots of x^2 - r x - s can be determined from

x = ( r ± sqrt(r^2 + 4s) ) / 2.     (B.12)

If we want to find all the roots of f_n(x), then at this point we have the following three possibilities:

1. If the quotient polynomial f_{n-2}(x) is a third (or higher) order polynomial, then we can again apply Bairstow's method to the quotient polynomial. The previous values of (r, s) can serve as the starting guesses for this application.

2. If the quotient polynomial f_{n-2}(x) is a quadratic function, then use (B.12) to obtain the remaining two roots of f_n(x).

3. If the quotient polynomial f_{n-2}(x) is a linear function, say f_1(x) = b_3 x + b_2, then the remaining single root is given by x = -b_2 / b_3.
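A minimal Python sketch of Bairstow's method following the recurrences (B.5), (B.8) and the correction equations (B.10); the test polynomial and the starting values r = s = -1 are illustrative assumptions, not the example of the text:

```python
# Minimal Bairstow sketch; cmath is used so complex roots come out directly.
import cmath

def quad_roots(r, s):
    # roots of x^2 - r*x - s = 0, cf. (B.12)
    disc = cmath.sqrt(r * r + 4.0 * s)
    return (r + disc) / 2.0, (r - disc) / 2.0

def bairstow(a, r=-1.0, s=-1.0, eps=1e-10, max_iter=200):
    """a = [a0, a1, ..., an]: coefficients of a0 + a1*x + ... + an*x^n."""
    a = list(a)
    roots = []
    while len(a) - 1 > 2:
        n = len(a) - 1
        for _ in range(max_iter):
            b = [0.0] * (n + 1)
            c = [0.0] * (n + 1)
            b[n] = a[n]                               # (B.5a)
            b[n - 1] = a[n - 1] + r * b[n]            # (B.5b)
            for i in range(n - 2, -1, -1):            # (B.5c)
                b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
            c[n] = b[n]                               # (B.8a)
            c[n - 1] = b[n - 1] + r * c[n]            # (B.8b)
            for i in range(n - 2, 0, -1):             # (B.8c)
                c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
            det = c[2] * c[2] - c[3] * c[1]           # solve (B.10) for dr, ds
            dr = (-b[1] * c[2] + b[0] * c[3]) / det
            ds = (-b[0] * c[2] + b[1] * c[1]) / det
            r, s = r + dr, s + ds
            if abs(dr) < eps and abs(ds) < eps:
                break
        roots.extend(quad_roots(r, s))
        a = b[2:]                                     # deflated quotient f_{n-2}
    if len(a) - 1 == 2:                               # remaining quadratic
        roots.extend(quad_roots(-a[1] / a[2], -a[0] / a[2]))
    elif len(a) - 1 == 1:                             # remaining linear factor
        roots.append(-a[0] / a[1])
    return roots

# Illustrative quintic with known roots 0.5, -1, 2 and 1 +/- 0.5i:
# x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25
print(bairstow([1.25, -3.875, 2.125, 2.75, -3.5, 1.0]))
```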

Example:
Find all the roots of the given polynomial f_n(x) by Bairstow's method, with the given initial values of r and s and stopping tolerance ε_t.

Solution:
Set iteration = 1.

Using the recurrence relations (B.5a)-(B.5c) and (B.8a)-(B.8c) we get the coefficients b_i and c_i, and hence the simultaneous equations (B.10a)-(B.10b) for Δr and Δs;

on solving, we get Δr and Δs,

and the improved values r + Δr and s + Δs.

Set iteration = 2.

Now we have to solve the system (B.10a)-(B.10b) formed with the updated values of r and s.

On solving, we get the next improved values of (r, s).
Now, proceeding in the above manner, in about ten iterations we get converged values of (r, s),

with |ε_r| and |ε_s| below ε_t.

Now, on using (B.12), we get two roots of the polynomial.

So at this point the quotient is a quadratic equation.

The roots of this quadratic, obtained by the quadratic formula, are the remaining roots of the given polynomial.

Exercises:

(1) Use the given initial approximation (r_0, s_0) to find a quadratic factor of the form x^2 - r x - s of the given polynomial equation using Bairstow's method, and hence find all its roots.

(2) Use the given initial approximation (r_0, s_0) to find a quadratic factor of the form x^2 - r x - s of the given polynomial equation using Bairstow's method, and hence find all its roots.
