Motivation:
Let us look at a set of problems of scientific and engineering interest to get a feel for what root finding is and why roots are sought. Later we learn how to find them.
Problem 1:
Suppose we are asked to cut, from a thin iron sheet of 5 m^2 area, a rectangular piece with one of its sides 1.25 m longer than the other and with area 0.875 m^2. What will be the length of the 'smallest side'?
Area of rectangle = x(x + 1.25) = 0.875 m^2
i.e. x^2 + 1.25x - 0.875 = 0 (1)
i.e., say, f(x) = x^2 + 1.25x - 0.875 = 0 (2)
The roots of a quadratic ax^2 + bx + c = 0 are given by
x = (-b ± sqrt(b^2 - 4ac)) / (2a) (3)
which here gives x = 0.5 or x = -1.75; since a length must be positive, the smallest side is 0.5 m.
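The arithmetic of Problem 1 can be verified in a few lines; from the statement, the sides are x and x + 1.25 with x(x + 1.25) = 0.875:

```python
import math

# f(x) = x^2 + 1.25x - 0.875 = 0: sides x and x + 1.25, area 0.875
a, b, c = 1.0, 1.25, -0.875
disc = math.sqrt(b * b - 4 * a * c)              # sqrt(1.5625 + 3.5) = 2.25
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
smallest_side = [x for x in roots if x > 0][0]   # a length must be positive
print(roots, smallest_side)                      # [0.5, -1.75] 0.5
```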
Problem 2:
Concepts of thermodynamics are used extensively by, say, aerospace, mechanical and chemical engineers. Here, the zero-pressure specific heat of dry air, c_p, is given as a (nonlinear) function of the temperature T, and we are asked for the temperature at which c_p = 1.2, i.e. we have to find the root of
c_p(T) - 1.2 = 0. (4)
Problem 3:
The concentration of a pollutant bacteria, C, in a lake decreases with time according to a given decay model C(t), where 't' is the time variable. Determine the time required for the bacteria concentration to be reduced to 9.
Here, we have to find the root of
C(t) - 9 = 0. (5)
Problem 4:
The volume of liquid V in a hollow horizontal cylinder of radius r and length L is related to the depth of the liquid h by
V = [ r^2 cos^-1((r - h)/r) - (r - h) sqrt(2rh - h^2) ] L. (6)
Given V, r and L, finding the depth h again amounts to finding the root of a nonlinear equation.
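Equation (6) is the standard depth-volume relation for a horizontal cylindrical tank, and it can be sanity-checked numerically: at h = r the tank is half full, and at h = 2r it is completely full (the values of r and L below are arbitrary):

```python
import math

def liquid_volume(h, r, L):
    """V = [r^2*acos((r-h)/r) - (r-h)*sqrt(2rh - h^2)] * L, for 0 <= h <= 2r."""
    return (r * r * math.acos((r - h) / r)
            - (r - h) * math.sqrt(2 * r * h - h * h)) * L

r, L = 2.0, 5.0                       # arbitrary illustrative dimensions
half_full = liquid_volume(r, r, L)    # h = r: acos(0) = pi/2, chord term vanishes
full = liquid_volume(2 * r, r, L)     # h = 2r: acos(-1) = pi, sqrt term vanishes
```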
Polynomial Equations:
Polynomial equations in one independent variable 'x' are a simple class of algebraic equations that are represented as follows:
f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0, with a_n ≠ 0.
Transcendental Equations:
These are equations that involve trigonometric, exponential or logarithmic functions.
Examples: equations such as x - cos x = 0, x e^x - 1 = 0, or ln x + x = 0.
We may note that the examples are nonlinear functions of x.
Method of solution:
Direct analytical methods exist only for a few special classes of equations. For example, for the quadratic equation (2), we have solutions given by the formula (3). However, a large number of equations cannot be solved by direct analytical methods, and we must resort to graphical or iterative numerical methods.
Graphical Method: This approach involves plotting the given function and
determining the points where it crosses the x-axis. These points,
extracted approximately from the plot, represent approximate values of
the roots of the function.
Example: to locate a root of an equation of the form g(x) = h(x), rewrite it as f(x) = g(x) - h(x) = 0, plot f(x), and read off the point where the curve crosses the x-axis as the approximate root.
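In the same spirit, one can tabulate f on a grid and record sign changes instead of reading them off a plot; a minimal sketch (the cubic below is chosen here for illustration):

```python
def sign_change_brackets(f, lo, hi, steps=100):
    """Scan [lo, hi] on a uniform grid and return the subintervals
    where f changes sign (each contains at least one root)."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    out = []
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) * f(x1) < 0:
            out.append((x0, x1))
    return out

f = lambda x: 2 * x**3 - 2.5 * x - 5
brackets = sign_change_brackets(f, 0.0, 3.0)   # one sign change, near x = 1.66
```

Each returned bracket can then be refined by the iterative methods that follow.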
Bracketing Methods:
(a) Bisection Method:
This is one of the simplest and most reliable iterative methods for the solution of a nonlinear equation f(x) = 0. This method is also known as binary chopping or the half-interval method. Given a function f(x) which is real and continuous on [a, b] with f(a) f(b) < 0, a root is guaranteed to lie in (a, b).
Algorithm: to find a root of f(x) = 0 in [a, b]:
(1) Set a_1 = a, b_1 = b.
(2) For n = 1, 2, ... until satisfied do:
(a) Compute the midpoint c_n = (a_n + b_n)/2 and f(c_n);
(b) If f(a_n) f(c_n) < 0, set a_{n+1} = a_n, b_{n+1} = c_n;
otherwise set a_{n+1} = c_n, b_{n+1} = b_n.
Note: iterate until a convergence criterion is met, e.g. the interval half-length (b_n - a_n)/2 or the residual |f(c_n)| falls below a prescribed tolerance.
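The algorithm can be sketched as follows; the cubic f(x) = 2x^3 - 2.5x - 5, consistent with the iterates tabulated below, is used as the test function:

```python
def bisect(f, a, b, tol=1e-9, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half with the sign change."""
    fa = f(a)
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:
            b = c              # root lies in [a, c]
        else:
            a, fa = c, fc      # root lies in [c, b]
    return (a + b) / 2

root = bisect(lambda x: 2 * x**3 - 2.5 * x - 5, 1.0, 2.0)
print(root)                    # approx 1.6601003
```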
Example: find a root of f(x) = 2x^3 - 2.5x - 5 = 0 (the function tabulated below) in [1, 2].
Solution: Given f(x) = 2x^3 - 2.5x - 5 on [1, 2]: f(1) = -5.5 < 0 and f(2) = 6 > 0, so a root lies in (1, 2).
Set a_1 = 1, b_1 = 2 and iterate as tabulated below.
Bisection Method
Iteration no. | a_n | b_n | c_n | f(c_n)
0 1.0000000000 2.0000000000 1.5000000000 -2.0000000000
1 1.5000000000 2.0000000000 1.7500000000 1.3437500000
2 1.5000000000 1.7500000000 1.6250000000 -0.4804687500
3 1.6250000000 1.7500000000 1.6875000000 0.3920898438
4 1.6250000000 1.6875000000 1.6562500000 -0.0538940430
5 1.6562500000 1.6875000000 1.6718750000 0.1666488647
6 1.6562500000 1.6718750000 1.6640625000 0.0557680130
7 1.6562500000 1.6640625000 1.6601562500 0.0007849932
8 1.6562500000 1.6601562500 1.6582031250 -0.0265924782
9 1.6582031250 1.6601562500 1.6591796875 -0.0129132364
10 1.6591796875 1.6601562500 1.6596679688 -0.0060664956
11 1.6596679688 1.6601562500 1.6599121094 -0.0026413449
12 1.6599121094 1.6601562500 1.6600341797 -0.0009283243
13 1.6600341797 1.6601562500 1.6600952148 -0.0000717027
14 1.6600952148 1.6601562500 1.6601257324 0.0003566360
15 1.6600952148 1.6601257324 1.6601104736 0.0001424643
16 1.6600952148 1.6601104736 1.6601028442 0.0000353802
17 1.6600952148 1.6601028442 1.6600990295 -0.0000181614
18 1.6600990295 1.6601028442 1.6601009369 0.0000086094
Example: find a root of a second test function f(x) = 0 in [0.5, 1.5] by the same procedure (its values are tabulated below).
Bisection Method
Iteration no. | a_n | b_n | c_n | f(c_n)
0 0.5000000000 1.5000000000 1.0000000000 3.1720056534
1 0.5000000000 1.0000000000 0.7500000000 0.6454265714
2 0.5000000000 0.7500000000 0.6250000000 -1.0943561792
3 0.6250000000 0.7500000000 0.6875000000 -0.1919542551
4 0.6875000000 0.7500000000 0.7187500000 0.2357951254
5 0.6875000000 0.7187500000 0.7031250000 0.0240836944
6 0.6875000000 0.7031250000 0.6953125000 -0.0834089667
7 0.6953125000 0.7031250000 0.6992187500 -0.0295295101
8 0.6992187500 0.7031250000 0.7011718750 -0.0026894973
9 0.7011718750 0.7031250000 0.7021484375 0.0107056862
10 0.7011718750 0.7021484375 0.7016601562 0.0040097744
11 0.7011718750 0.7016601562 0.7014160156 0.0006612621
12 0.7011718750 0.7014160156 0.7012939453 -0.0010144216
13 0.7012939453 0.7014160156 0.7013549805 -0.0001766436
14 0.7013549805 0.7014160156 0.7013854980 0.0002420362
15 0.7013549805 0.7013854980 0.7013702393 0.0000326998
16 0.7013549805 0.7013702393 0.7013626099 -0.0000715650
17 0.7013626099 0.7013702393 0.7013664246 -0.0000194324
18 0.7013664246 0.7013702393 0.7013683319 0.0000069206
Thus the bisection method gives, after the tabulated iterations:
(1) for the first example, root ≈ 1.6601009;
(2) for the second example, root ≈ 0.7013683.
(b) Regula Falsi (False Position) Method:
The bisection method converges slowly: in defining the new interval it uses only the sign of f at the midpoint, not its magnitude. The Regula Falsi method instead takes, as the new estimate, the point where the chord joining (a_n, f(a_n)) and (b_n, f(b_n)) crosses the x-axis, and then keeps the part of the previous interval that still brackets the root (f(a_n) and f(b_n) have opposite signs).
The algorithm for computing the root of a function by this method is given below.
Algorithm: to find a root of f(x) = 0 in [a, b]:
(1) Set a_0 = a, b_0 = b.
(2) For n = 0, 1, 2, ... until the convergence criterion is satisfied, do:
(a) Compute x_n = (a_n f(b_n) - b_n f(a_n)) / (f(b_n) - f(a_n)) and f(x_n);
(b) If f(a_n) f(x_n) < 0, set a_{n+1} = a_n, b_{n+1} = x_n;
otherwise set a_{n+1} = x_n, b_{n+1} = b_n.
Note:
Use any one of the convergence criteria discussed earlier under the bisection method. For the sake of carrying out a comparative study we will stick to the same convergence criterion as before.
Example: f(x) = 2x^3 - 2.5x - 5 = 0 on [1, 2].
Set a_0 = 1, b_0 = 2 and iterate as tabulated below.
Regula Falsi Method
Iteration no. | a_n | b_n | x_n | f(x_n)
0 1.0000000000 2.0000000000 1.4782608747 -2.2348976135
1 1.4782608747 2.0000000000 1.6198574305 -0.5488323569
2 1.6198574305 2.0000000000 1.6517157555 -0.1169833690
3 1.6517157555 2.0000000000 1.6583764553 -0.0241659321
4 1.6583764553 2.0000000000 1.6597468853 -0.0049594725
5 1.6597468853 2.0000000000 1.6600278616 -0.0010169938
6 1.6600278616 2.0000000000 1.6600854397 -0.0002089010
7 1.6600854397 2.0000000000 1.6600972414 -0.0000432589
8 1.6600972414 2.0000000000 1.6600997448 -0.0000081223
Note: One may note that the Regula Falsi method has converged faster than the Bisection method.
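The false-position step, x_n = (a_n f(b_n) - b_n f(a_n)) / (f(b_n) - f(a_n)), can be sketched as follows (the cubic test function and the stopping criterion |f(x_n)| < 10^-6 are assumed here):

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=200):
    """False position: the new estimate is the x-intercept of the chord
    joining (a, f(a)) and (b, f(b)); keep the side with the sign change."""
    fa, fb = f(a), f(b)
    x = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx      # root lies in [a, x]
        else:
            a, fa = x, fx      # root lies in [x, b]
    return x

root = regula_falsi(lambda x: 2 * x**3 - 2.5 * x - 5, 1.0, 2.0)
```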
Example: the second test function f(x) = 0 on [0.5, 1.5].
Regula Falsi Method
Iteration no. | a_n | b_n | x_n | f(x_n)
0 0.5000000000 1.5000000000 0.8773435354 2.1035263538
1 0.5000000000 0.8773435354 0.7222673893 0.2828366458
2 0.5000000000 0.7222673893 0.7032044530 0.0251714624
3 0.5000000000 0.7032044530 0.7015219927 0.0021148270
4 0.5000000000 0.7015219927 0.7013807297 0.0001767781
5 0.5000000000 0.7013807297 0.7013689280 0.0000148928
6 0.5000000000 0.7013689280 0.7013679147 0.0000009526
(c) Modified Regula Falsi Method:
In the Regula Falsi method one endpoint often stays fixed for many iterations, which slows convergence. The modified version halves the stored function value at the retained endpoint whenever two successive iterates fall on the same side of the root.
Algorithm: to find a root of f(x) = 0 in [a, b]:
(1) Set a_0 = a, b_0 = b, F = f(a_0), G = f(b_0), and take f(x_{-1}) = f(a_0).
(2) For n = 0, 1, 2, ...., until the convergence criterion is satisfied, do:
(a) Compute x_n = (a_n G - b_n F) / (G - F) and f(x_n);
(b) If F f(x_n) < 0 then
Set a_{n+1} = a_n, b_{n+1} = x_n, G = f(x_n);
Also if f(x_n) f(x_{n-1}) > 0, set F = F/2.
Otherwise
Set a_{n+1} = x_n, b_{n+1} = b_n, F = f(x_n);
Also if f(x_n) f(x_{n-1}) > 0, set G = G/2.
Example: f(x) = 2x^3 - 2.5x - 5 = 0 on [1, 2].
Modified Regula Falsi Method
Iteration no. | a_n | b_n | x_n | f(x_n)
0 1.0000000000 2.0000000000 1.4782608747 -2.2348976135
1 1.4782608747 2.0000000000 1.7010031939 0.5908976793
2 1.4782608747 1.7010031939 1.6544258595 -0.0793241411
3 1.6544258595 1.7010031939 1.6599385738 -0.0022699926
4 1.6599385738 1.7010031939 1.6602516174 0.0021237291
5 1.6599385738 1.6602516174 1.6601003408 0.0000002435
Example: the second test function f(x) = 0 on [0.5, 1.5].
Modified Regula Falsi Method
Iteration no. | a_n | b_n | x_n | f(x_n)
0 0.5000000000 1.5000000000 0.8773435354 2.1035263538
1 0.5000000000 0.8773435354 0.7222673893 0.2828366458
2 0.5000000000 0.7222673893 0.6871531010 -0.1967970580
3 0.6871531010 0.7222673893 0.7015607357 0.0026464546
4 0.6871531010 0.7015607357 0.7013695836 0.0000239155
5 0.6871531010 0.7013695836 0.7013661265 -0.0000235377
6 0.7013661265 0.7013695836 0.7013678551 -0.0000003363
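The iterates in the first table above are reproduced by the following sketch of the modified false-position scheme (often called the Illinois variant), in which the stored function value at a repeatedly retained endpoint is halved; the test function f(x) = 2x^3 - 2.5x - 5 is assumed:

```python
def modified_regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Modified (Illinois-type) false position: halve the stored function
    value at the retained endpoint when two successive iterates fall on
    the same side of the root."""
    F, G = f(a), f(b)
    fx_prev = F                  # convention: f(x_{-1}) taken as f(a)
    x = a
    for _ in range(max_iter):
        x = (a * G - b * F) / (G - F)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if F * fx < 0:           # root lies in [a, x]
            b, G = x, fx
            if fx * fx_prev > 0:
                F /= 2
        else:                    # root lies in [x, b]
            a, F = x, fx
            if fx * fx_prev > 0:
                G /= 2
        fx_prev = fx
    return x

root = modified_regula_falsi(lambda x: 2 * x**3 - 2.5 * x - 5, 1.0, 2.0)
```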
Secant Method:
Like the Regula Falsi method and the Bisection method, this method uses two starting values; unlike them, however, the two values need not bracket the root, and no sign condition is checked at each step.
Algorithm:
(1) Set the starting values x_0 = a, x_1 = b.
(2) For n = 1, 2, ... until the convergence criterion is satisfied, do:
Compute x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
Example: f(x) = 2x^3 - 2.5x - 5 = 0.
Solution:
Set x_0 = 1, x_1 = 2 and compute x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)) = 1.4782608747.
Repeat the process with (x_1, x_2), then (x_2, x_3), and so on till the convergence criterion is met.
Secant Method
Iteration no. | x_{n-1} | x_n | x_{n+1} | f(x_{n+1})
1 2.0000000000 1.4782608747 1.6198574305 -0.5488323569
2 1.4782608747 1.6198574305 1.6659486294 0.0824255496
3 1.6198574305 1.6659486294 1.6599303484 -0.0023854144
4 1.6659486294 1.6599303484 1.6600996256 -0.0000097955
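The secant update can be sketched as follows (same assumed test function; note that no bracketing is maintained):

```python
def secant(f, x0, x1, tol=1e-9, max_iter=100):
    """Secant: x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:               # flat chord: cannot proceed further
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: 2 * x**3 - 2.5 * x - 5, 1.0, 2.0)
```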
Rate of convergence: let e_n = x_n - ξ denote the error at step n, where ξ is the exact root. Expanding f about ξ in the secant update, one obtains
e_{n+1} ≈ (f''(ξ) / 2f'(ξ)) e_n e_{n-1},
i.e. the new error is proportional to the product of the two previous errors. Seeking an order p such that |e_{n+1}| = K |e_n|^p, substitution gives p = 1 + 1/p,
i.e. p^2 - p - 1 = 0,
i.e. p = (1 + sqrt(5))/2 ≈ 1.618.
Since 1 < p < 2, the convergence is superlinear (faster than linear, slower than quadratic).
Newton-Raphson Method:
Unlike the earlier methods, this method requires only one appropriate starting value x_0.
Algorithm:
Given a continuously differentiable function f(x) and an initial approximation x_0, compute, for n = 0, 1, 2, ... until the convergence criterion is satisfied,
x_{n+1} = x_n - f(x_n) / f'(x_n),
i.e. each new iterate is the point where the tangent to f at x_n crosses the x-axis.
Example: f(x) = 2x^3 - 2.5x - 5 = 0.
Solution:
Given f(x) = 2x^3 - 2.5x - 5, we have f'(x) = 6x^2 - 2.5.
Take x_0 = 2; then f(x_0) = 6 and f'(x_0) = 21.5.
Since x_1 = x_0 - f(x_0)/f'(x_0) = 2 - 6/21.5 = 1.7209302187, we proceed as tabulated below.
Iteration no. | x_n | x_{n+1} | f(x_{n+1})
0 2.0000000000 1.7209302187 0.8910911679
1 1.7209302187 1.6625729799 0.0347661413
2 1.6625729799 1.6601046324 0.0000604780
3 1.6601046324 1.6601003408 0.0000002435
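The table above is reproduced (to within rounding) by a few lines of code, with f(x) = 2x^3 - 2.5x - 5 and its derivative f'(x) = 6x^2 - 2.5:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            return x
    return x

f = lambda x: 2 * x**3 - 2.5 * x - 5
fp = lambda x: 6 * x**2 - 2.5          # derivative of f
first = 2.0 - f(2.0) / fp(2.0)         # 2 - 6/21.5, the tabulated x_1
root = newton(f, fp, 2.0)
```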
Example: the second test function f(x) = 0.
Solution: Given f(x), say x_0 = 0.5.
Iteration no. | x_n | x_{n+1} | f(x_{n+1})
0 0.5000000000 0.6934901476 0.1086351126
1 0.6934901476 0.7013291121 0.0005313741
2 0.7013291121 0.7013678551 0.0000003363
Writing e_n = x_n - ξ and expanding f about the root ξ, one finds e_{n+1} ≈ (f''(ξ) / 2f'(ξ)) e_n^2, i.e. the Newton-Raphson method converges quadratically.
Note:
Alternatively, one can also prove the quadratic convergence of Newton-
Raphson method based on the fixed - point theory. It is worth stating
few comments on this approach as it is a more general approach
covering most of the iteration schemes discussed earlier.
Rewrite f(x) = 0 ... (i) in the equivalent form x = g(x) ... (ii). Any solution of (ii) is called a fixed point, and it is a solution of (i). The function g(x) is called the "iteration function".
Example: for instance, f(x) = x^2 - 5 = 0 may be rewritten as x = 5/x,
or, x = x^2 + x - 5,
or, x = (x + 5/x)/2,
so that several different iteration functions g can be associated with the same f.
Say x_0 is the given starting point. Then one can generate a sequence
x_1 = g(x_0),
x_2 = g(x_1),
x_3 = g(x_2), ...
This sequence is said to converge to ξ iff x_n → ξ as n → ∞.
Now the natural question that arises is: what conditions on g ensure that the sequence converges to the fixed point?
Theorem:
Let g(x) be an iteration function defined on an interval I such that (i) g is continuously differentiable on I, (ii) g(x) ∈ I for all x ∈ I, and (iii) |g'(x)| ≤ K < 1 on I. Then g(x) has exactly one fixed point ξ in I and, starting with any x_0 in I, the sequence x_{n+1} = g(x_n) converges to ξ.
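As an illustration of fixed-point iteration, the rearrangement below (2x^3 = 2.5x + 5, i.e. g(x) = ((2.5x + 5)/2)^(1/3), chosen here for illustration) satisfies |g'(x)| ≈ 0.15 < 1 near the root, so the theorem guarantees convergence:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# rearrangement of 2x^3 - 2.5x - 5 = 0 as x = ((2.5x + 5)/2)^(1/3)
g = lambda x: ((2.5 * x + 5) / 2) ** (1 / 3)
root = fixed_point(g, 1.5)
```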
Remark 1: One can generalize all the iterative methods to a system of nonlinear equations. For instance, if we have two non-linear equations in two unknowns,
f(x, y) = 0 (1)
g(x, y) = 0 (2)
Newton's method generalizes by solving, at each step, a 2x2 linear system involving the Jacobian matrix of partial derivatives for the corrections (Δx, Δy).
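As an illustration of the remark, consider the hypothetical pair f(x, y) = x^2 + y^2 - 4 = 0 and g(x, y) = x - y = 0 (a circle and a line, chosen purely for illustration); each Newton step solves a 2x2 linear system with the Jacobian:

```python
def newton_system(F, J, x, y, tol=1e-10, max_iter=50):
    """Newton for two equations: solve J * (dx, dy) = -F at each step (Cramer)."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)       # Jacobian rows: (df1/dx, df1/dy), (df2/dx, df2/dy)
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
        if abs(dx) < tol and abs(dy) < tol:
            return x, y
    return x, y

F = lambda x, y: (x * x + y * y - 4, x - y)
J = lambda x, y: ((2 * x, 2 * y), (1.0, -1.0))
x, y = newton_system(F, J, 1.0, 2.0)   # converges to (sqrt(2), sqrt(2))
```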
Bairstow Method is an iterative method used to find both the real and complex roots of a polynomial. It is based on the idea of synthetic division of the given polynomial by a quadratic factor and can be used to find all the roots of a polynomial. Given a polynomial, say,
P_n(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0, (B.1)
dividing it by the quadratic x^2 - r x - s leaves a quotient Q_{n-2}(x) of degree n - 2 and a linear remainder b_1 (x - r) + b_0, where the b's follow from the synthetic-division recurrence
b_n = a_n, (B.5a)
b_{n-1} = a_{n-1} + r b_n, (B.5b)
b_i = a_i + r b_{i+1} + s b_{i+2}, for i = n-2, ..., 1, 0. (B.5c)
If x^2 - r x - s is an exact factor of P_n(x), then the remainder is zero, i.e. b_1 = b_0 = 0; we therefore seek values of r and s such that the remainder is zero. For finding such values Bairstow's method uses a strategy similar to Newton-Raphson's method.
Since both b_1 and b_0 are functions of r and s, we can write Taylor series expansions of b_1, b_0 about the current (r, s) as:
b_1(r + Δr, s + Δs) ≈ b_1 + (∂b_1/∂r) Δr + (∂b_1/∂s) Δs (B.6a)
b_0(r + Δr, s + Δs) ≈ b_0 + (∂b_0/∂r) Δr + (∂b_0/∂s) Δs (B.6b)
Setting the left-hand sides to zero gives a linear system for the corrections:
(∂b_1/∂r) Δr + (∂b_1/∂s) Δs = -b_1 (B.7a)
(∂b_0/∂r) Δr + (∂b_0/∂s) Δs = -b_0 (B.7b)
The required partial derivatives are obtained from a second synthetic division, this time of the b's:
c_n = b_n, (B.8a)
c_{n-1} = b_{n-1} + r c_n, (B.8b)
c_i = b_i + r c_{i+1} + s c_{i+2}, for i = n-2, ..., 1, (B.8c)
where
∂b_0/∂r = c_1, ∂b_0/∂s = c_2, ∂b_1/∂r = c_2, ∂b_1/∂s = c_3. (B.9)
Solving (B.7) for Δr and Δs, the current estimates are updated as
r ← r + Δr, s ← s + Δs, (B.10a, B.10b)
and the relative errors may be estimated as |Δr/r| and |Δs/s|. (B.11)
If |Δr/r| ≤ ε or |Δs/s| ≤ ε, where ε is the iteration stopping error, then the quadratic factor is accepted and its two roots follow from
x = (r ± sqrt(r^2 + 4s)) / 2. (B.12)
If we want to find all the roots of P_n(x), then at this point we have the following three possibilities:
(1) if the quotient polynomial Q_{n-2}(x) is of degree three or higher, apply Bairstow's method afresh to the quotient to extract the next quadratic factor;
(2) if the quotient is a quadratic, its two roots follow directly from the quadratic formula;
(3) if the quotient is linear, it yields the one remaining root directly.
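One pass of the method can be sketched as follows; the polynomial P(x) = x^4 - 1 and the starting guesses are chosen purely for illustration:

```python
import math

def bairstow_factor(a, r, s, tol=1e-10, max_iter=100):
    """Find a quadratic factor x^2 - r*x - s of the polynomial with
    coefficients a[i] of x^i (degree n = len(a) - 1, n >= 3)."""
    n = len(a) - 1
    for _ in range(max_iter):
        # (B.5): synthetic division of the a's by x^2 - r*x - s
        b = [0.0] * (n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        # (B.8): second synthetic division, of the b's, gives the partials
        c = [0.0] * (n + 1)
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        # (B.7): solve  c2*dr + c3*ds = -b1,  c1*dr + c2*ds = -b0  (Cramer)
        det = c[2] * c[2] - c[3] * c[1]
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    return r, s

# illustration: P(x) = x^4 - 1, starting guesses r = 0, s = 0.5
r, s = bairstow_factor([-1.0, 0.0, 0.0, 0.0, 1.0], 0.0, 0.5)
# (B.12): roots of the extracted factor x^2 - r*x - s
x1 = (r + math.sqrt(r * r + 4 * s)) / 2
x2 = (r - math.sqrt(r * r + 4 * s)) / 2
```

Here the iteration converges to the factor x^2 - 1 (r = 0, s = 1), giving the real roots ±1; the quotient x^2 + 1 then yields the complex pair ±i by possibility (2).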
Example:
Find all the roots of the given polynomial P(x).
Solution:
Start with initial guesses for r and s.
Set iteration = 1: carry out the synthetic divisions (B.5) and (B.8); on solving (B.7) we get Δr and Δs, and hence updated values of r and s.
Set iteration = 2: repeat the computation with the updated r and s; on solving we get the next corrections.
Now proceeding in the above manner, in about ten iterations we get converged values of r and s, with b_1, b_0 ≈ 0.
Roots of the quadratic factor x^2 - r x - s are then obtained from (B.12), i.e. x = (r ± sqrt(r^2 + 4s))/2; the remaining roots, those of the quotient polynomial, follow from the three possibilities listed above.
Exercises: