
Finding Roots of Nonlinear Equations
Numerical Methods, MSc Applied Mechanics (2016-2017)

College of Engineering
Mechanical Engineering Department

H.W. 1
2016-2017

Finding Roots of Nonlinear Equations

Methods Used:
1. Simple Fixed-Point Iteration
2. NEWTON-RAPHSON Method (Univariate)
3. Multivariate Newton-Raphson
4. SECANT METHOD
5. Gauss-Seidel Method
6. Relaxation Method


Introduction
A nonlinear equation may be an algebraic equation, a transcendental equation, a solution of a differential equation, or any nonlinear function of x. Given a continuous nonlinear function f(x), the task is to find the value x = c such that f(c) = 0. Depending on the function, there may be:
a) a single real root
b) no real roots (but complex roots may exist)
c) two simple roots
d) three simple roots
e) two multiple roots
f) three multiple roots
g) one simple root and two multiple roots
h) multiple roots

1. Simple Fixed-Point Iteration (one-point iteration or successive substitution)

Rearrange the function f(x) = 0 so that x is on the left-hand side of the equation: x = g(x) (eq. 1). This transformation can be accomplished either by algebraic manipulation or by simply adding x to both sides of the original equation. The utility of eq. 1 is that it provides a formula to predict a new value of x as a function of an old value of x. Thus, given an initial guess at the root x_i, eq. 1 can be used to compute a new estimate x_{i+1} as expressed by the iterative formula x_{i+1} = g(x_i).
The approximate error for this equation can be determined using the error estimator:

    ε_a = |(x_{i+1} - x_i) / x_{i+1}| × 100%
Example 1: f(x) = e^{-x} - x

Setting f(x) = 0 gives e^{-x} - x = 0, which rearranges to

    x = e^{-x} = g(x)

so the iterative formula is x_{i+1} = e^{-x_i}.
Thus, each iteration brings the estimate closer to the true value of the root: 0.56714329
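The scheme above is compact enough to sketch directly in code; here is a minimal Python sketch for Example 1 (the function names are illustrative, not from the handout):

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Simple fixed-point iteration: x_{i+1} = g(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        # error estimator: eps_a = |(x_{i+1} - x_i) / x_{i+1}| * 100%
        if x_new != 0 and abs((x_new - x) / x_new) * 100 < tol:
            return x_new
        x = x_new
    return x

# Root of f(x) = e^{-x} - x via the rearrangement x = g(x) = e^{-x}
root = fixed_point(lambda x: math.exp(-x), 1.0)
print(root)  # ≈ 0.56714329
```

Because |g'(x)| ≈ 0.567 near the root, the error shrinks by roughly that factor each iteration, so convergence is linear.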

2. NEWTON-RAPHSON Method (Univariate)
Perhaps the most widely used of all root-locating formulas is the Newton-Raphson method (Fig.1). If
the initial guess at the root is xi, a tangent can be extended from the point [xi, f (xi)]. The point where
this tangent crosses the x axis usually represents an improved estimate of the root.
The Newton-Raphson method can be derived on the basis of this geometrical interpretation. As in Fig. 1, the first derivative at x_i is equivalent to the slope:

    f'(x_i) = (f(x_i) - 0) / (x_i - x_{i+1})

which can be rearranged to give the iterative formula

    x_{i+1} = x_i - f(x_i) / f'(x_i)
Example 2: f(x) = e^{-x} - x, so f'(x) = -e^{-x} - 1, and the iterative formula is

    x_{i+1} = x_i - (e^{-x_i} - x_i) / (-e^{-x_i} - 1)

Thus, the approach rapidly converges on the true root. Notice that the true percent relative error at each iteration decreases much faster than it does in simple fixed-point iteration.
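The iterative formula of Example 2 can be sketched in Python as follows (the helper names are illustrative, not from the handout):

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example 2: f(x) = e^{-x} - x with f'(x) = -e^{-x} - 1
root = newton_raphson(lambda x: math.exp(-x) - x,
                      lambda x: -math.exp(-x) - 1,
                      x0=0.0)
print(root)  # ≈ 0.56714329
```

Starting from x = 0, only a handful of iterations are needed, reflecting the quadratic convergence near a simple root.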

Although the Newton-Raphson method is often very efficient, there are situations where it performs poorly. Even when dealing with simple roots, difficulties can arise, as in the following example.

Example 3: Determine the positive root of f(x) = x^{10} - 1 using the Newton-Raphson method and an initial guess of x = 0.5.

Ans: x_{i+1} = x_i - (x_i^{10} - 1) / (10 x_i^9)

Thus, after the first poor prediction, the technique converges on the true root of 1, but at a very slow rate. Why does this happen? As shown in Fig. 2, a simple plot of the first few iterations is helpful in providing insight. Notice how the first guess is in a region where the slope is near zero. Thus, the first iteration flings the solution far away from the initial guess to a new value (x = 51.65) where f(x) has an extremely high value. The solution then plods along for over 40 iterations until converging on the root with adequate accuracy.
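The behavior described above is easy to reproduce; a short Python sketch of Example 3:

```python
def newton_step(x):
    # One Newton-Raphson update for f(x) = x**10 - 1
    return x - (x**10 - 1) / (10 * x**9)

x = 0.5
first_jump = newton_step(x)   # the wild first prediction
n = 0
while abs(x**10 - 1) > 1e-8 and n < 200:
    x = newton_step(x)
    n += 1

print(first_jump, x, n)  # first_jump = 51.65; 40-odd iterations to reach 1
```

For large x the update is approximately x_{i+1} ≈ 0.9 x_i, which is why it takes dozens of steps to creep back from 51.65 before quadratic convergence finally takes over near the root.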

3. Multivariate Newton-Raphson
Recall that when we wanted to find the solution to a single nonlinear equation of the form f(x) = 0, the basis of the Newton-Raphson method lay in the fact that we can approximate the derivative of f(x) numerically:

    f'(x_1) = df(x_1)/dx ≈ (f(x_1) - f(x_2)) / (x_1 - x_2)

leading to the iterative formula:

    x_{i+1} = x_i - f(x_i) / f'(x_i)

Now consider a system of n nonlinear equations in n unknowns:

    f_j(x_1, x_2, ..., x_n) = 0,    j = 1, ..., n

This is the most general form of the problems that face chemical engineers and encompasses linear equations (when all the functions f_j are linear) and single equations (when n = 1).
One technique used to solve this problem is called the multivariate Newton-Raphson Method (MNRM). The idea follows from the single-variable case. The basic idea again stems from the fact that the total derivative of a function f_j is:

    df_j = (∂f_j/∂x_1) dx_1 + (∂f_j/∂x_2) dx_2

for the case of two variables, or

    df_j = Σ_{i=1..n} (∂f_j/∂x_i) dx_i

for the n-variable case. We can discretize this expression and write:

    f_j^(2) - f_j^(1) = Σ_{i=1..n} (∂f_j/∂x_i)^(1) (x_i^(2) - x_i^(1))

where j is the index over functions, i is the index over variables, and the superscript in parentheses stands for the iteration. Note that if we have only one variable (n = 1), this reduces to the Newton-Raphson method that we have already learned.
Now consider that we have n equations, so that j = 1 to n, giving a system of n such relations. Just as in the single-variable case, we want our next iteration to take us to the root, so we assume that f({x}^(2)) = 0. We can then write this system in matrix notation as:

    [J]^(k) {δx}^(k) = -{R}^(k)        (*)

where {R}^(k) is called the residual vector at the kth iteration and is defined as

    {R}^(k) = {f({x}^(k))}

where [J]^(k) is called the Jacobian matrix at the kth iteration and is defined as

    J_ji^(k) = ∂f_j/∂x_i, evaluated at {x}^(k)

and where

    {δx}^(k) = {x}^(k+1) - {x}^(k)

so that the new guess for x is

    {x}^(k+1) = {x}^(k) + {δx}^(k)        (**)

The algorithm for solving the multivariate Newton-Raphson follows analogously from the single
variable NRM. The steps are as follows:
1. Make an initial guess for x.
2. Calculate the Jacobian and the residual.
3. Solve equation (*).
4. Calculate the new x from equation (**).
5. If the solution has not converged, loop back to step 2.
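The steps above can be sketched in Python for the two-variable case. This is a minimal illustration, not the handout's own code: Cramer's rule stands in for a general linear solver in step 3, and all function names are assumptions.

```python
def mnrm_2d(f1, f2, jac, x1, x2, tol=1e-10, max_iter=50):
    """Multivariate Newton-Raphson for two equations in two unknowns.
    jac(x1, x2) must return [[df1/dx1, df1/dx2], [df2/dx1, df2/dx2]]."""
    for _ in range(max_iter):
        # Step 2: residual and Jacobian at the current guess
        r1, r2 = f1(x1, x2), f2(x1, x2)
        (a, b), (c, d) = jac(x1, x2)
        # Step 3: solve [J]{dx} = -{R} (Cramer's rule for the 2x2 case)
        det = a * d - b * c
        dx1 = (-r1 * d + r2 * b) / det
        dx2 = (-r2 * a + r1 * c) / det
        # Step 4: update the guess; Step 5: stop once the step is tiny
        x1, x2 = x1 + dx1, x2 + dx2
        if abs(dx1) < tol and abs(dx2) < tol:
            break
    return x1, x2

# Illustrative system: circle x1^2 + x2^2 = 4 intersected with the line
# x2 = x1; the positive root is (sqrt(2), sqrt(2))
root = mnrm_2d(lambda u, v: u * u + v * v - 4,
               lambda u, v: v - u,
               lambda u, v: [[2 * u, 2 * v], [-1.0, 1.0]],
               x1=1.0, x2=1.0)
print(root)  # ≈ (1.41421356, 1.41421356)
```

For n > 2 unknowns, step 3 would use a general linear solver (e.g. Gaussian elimination) in place of Cramer's rule.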

The multivariate Newton-Raphson Method suffers from the same short-comings as the
single-variable Newton-Raphson Method.
(1) You need a good initial guess.
(2) You don't get quadratic convergence until you are close to the solution.
(3) If the partial derivatives are zero, the method blows up. If the partial derivatives are close to zero,
the method may not converge.


Example 4:

The x_2 vs. x_1 plot of the two functions is given below. The first function is a circle centered at the origin with radius 2. The second function is a parabola. We see there are two and only two solutions to this system of equations.

In order to use the MNRM, we must first determine the functional form of the partial derivatives. Then we follow the algorithm outlined above:

Step One. Make an initial guess. One of the solutions looks to be at x_1 = 1.0 and x_2 = 2.0.
Step Two. Using that initial guess, calculate the residual and the Jacobian.
Step Three. Solve equation (*) for {δx}.
Step Four. Calculate new values for x via equation (**).


Step Five. Loop back to Step Two and repeat until the solution converges.
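The equations of Example 4 did not survive extraction, so as a purely hypothetical reconstruction assume f_1 = x_1² + x_2² - 4 (a circle of radius 2) and f_2 = x_2 - x_1² - 1 (a parabola); the first iteration from the guess (1.0, 2.0) then looks like this:

```python
# Hypothetical reconstruction: the handout's equations were not preserved,
# so f1 = x1^2 + x2^2 - 4 and f2 = x2 - x1^2 - 1 are assumed for illustration.
x1, x2 = 1.0, 2.0                       # Step One: initial guess

r1 = x1**2 + x2**2 - 4                  # Step Two: residual ...
r2 = x2 - x1**2 - 1
j11, j12 = 2 * x1, 2 * x2               # ... and Jacobian
j21, j22 = -2 * x1, 1.0

det = j11 * j22 - j12 * j21             # Step Three: solve [J]{dx} = -{R}
dx1 = (-r1 * j22 + r2 * j12) / det
dx2 = (-r2 * j11 + r1 * j21) / det

x1, x2 = x1 + dx1, x2 + dx2             # Step Four: updated guess
print(x1, x2)  # → 0.9 1.8
```

Under these assumed functions the guess moves from (1.0, 2.0) toward the nearby intersection at roughly (0.8895, 1.7913); repeating the loop drives it the rest of the way.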


4. SECANT METHOD
A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative.
Although this is not inconvenient for polynomials and many other functions, there are certain functions
whose derivatives may be difficult or inconvenient to evaluate. For these cases, the derivative can be
approximated by a backward finite divided difference:

    f'(x_i) ≈ (f(x_{i-1}) - f(x_i)) / (x_{i-1} - x_i)

This approximation can be substituted to yield the following iterative equation:

    x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
( i-1) ( i)
Notice that the approach requires two initial estimates of x. Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate f'(x):

    f'(x_i) ≈ (f(x_i + δx_i) - f(x_i)) / (δx_i)

where δ = a small perturbation fraction. This approximation can be substituted to yield the following iterative equation:

    x_{i+1} = x_i - δx_i f(x_i) / (f(x_i + δx_i) - f(x_i))
We call this the modified secant method. As in the following example, it provides a nice means to attain
the efficiency of Newton-Raphson without having to compute derivatives.

Example 5: Use the modified secant method to determine the mass of the bungee jumper with a drag coefficient of 0.25 kg/m needed to have a velocity of 36 m/s after 4 s of free fall. Note: the acceleration of gravity is 9.81 m/s². Use an initial guess of 50 kg and a value of δ = 10^{-6} for the perturbation fraction.

First iteration:
    x_0 = 50,  f(x_0) = -4.57938708
    x_0 + δx_0 = 50.00005,  f(x_0 + δx_0) = -4.579381118

    x_1 = 50 - (10^{-6})(50)(-4.57938708) / (-4.579381118 - (-4.57938708)) = 88.39931
The calculation can be continued to yield the correct answer

The choice of a proper value for δ is not automatic. If δ is too small, the method can be swamped by round-off error caused by subtractive cancellation in the denominator. If it is too big, the technique can become inefficient and even divergent. However, if chosen correctly, it provides a nice alternative for cases where evaluating the derivative is difficult and developing two initial guesses is inconvenient.
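Example 5 can be carried through to convergence in code. The handout does not reproduce f(m) explicitly, but the standard free-fall velocity model f(m) = sqrt(gm/c_d) tanh(t·sqrt(g·c_d/m)) - v reproduces the value f(x_0) = -4.57938708 quoted above, so the sketch below assumes it:

```python
import math

def f(m, cd=0.25, v=36.0, t=4.0, g=9.81):
    # Assumed free-fall velocity model: f(m) = 0 at the sought mass
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v

def modified_secant(f, x0, delta=1e-6, tol=1e-8, max_iter=50):
    """x_{i+1} = x_i - delta*x_i*f(x_i) / (f(x_i + delta*x_i) - f(x_i))"""
    x = x0
    for _ in range(max_iter):
        x_new = x - delta * x * f(x) / (f(x + delta * x) - f(x))
        if abs(x_new - x) < tol * max(1.0, abs(x_new)):
            return x_new
        x = x_new
    return x

mass = modified_secant(f, 50.0)
print(mass)  # ≈ 142.74 kg
```

The first iterate is 88.39931, matching the hand calculation, and the sequence settles on a mass of roughly 142.74 kg.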


5. Gauss-Seidel Method
The Gauss-Seidel method is the most commonly used iterative
method for solving linear algebraic equations. Assume that we
are given a set of n equations:
[A ]{x} = {b}
Suppose that for conciseness we limit ourselves to a 3 × 3 set of equations. If the diagonal elements are all nonzero, the first equation can be solved for x_1, the second for x_2, and the third for x_3 to yield

    x_1^(j) = (b_1 - a_12 x_2^(j-1) - a_13 x_3^(j-1)) / a_11
    x_2^(j) = (b_2 - a_21 x_1^(j) - a_23 x_3^(j-1)) / a_22
    x_3^(j) = (b_3 - a_31 x_1^(j) - a_32 x_2^(j)) / a_33

where j and j - 1 are the present and previous iterations.
To start the solution process, initial guesses must be made for the x's. A simple approach is to assume that they are all zero. These zeros can be substituted into the first equation, which can be used to calculate a new value x_1 = b_1/a_11. Then we substitute this new value of x_1, along with the previous guess of zero for x_3, into the second equation to compute a new value for x_2. The process is repeated for the third equation to calculate a new estimate for x_3. We then return to the first equation and repeat the entire procedure until the solution converges closely enough to the true values. Convergence can be checked using the criterion that, for all i,

    ε_a,i = |(x_i^(j) - x_i^(j-1)) / x_i^(j)| × 100% < ε_s

Example 6:
    3x_1 - 0.1x_2 - 0.2x_3 = 7.85
    0.1x_1 + 7x_2 - 0.3x_3 = -19.3
    0.3x_1 - 0.2x_2 + 10x_3 = 71.4
Note that the solution is x_1 = 3, x_2 = -2.5, and x_3 = 7.

By assuming that x_2 and x_3 are zero:

    x_1 = [7.85 + 0.1(0) + 0.2(0)]/3 = 2.616667

This value, along with the assumed value of x_3 = 0, gives

    x_2 = [-19.3 - 0.1(2.616667) + 0.3(0)]/7 = -2.794524

The first iteration is completed by substituting the calculated values for x_1 and x_2:

    x_3 = [71.4 - 0.3(2.616667) + 0.2(-2.794524)]/10 = 7.005610

For the second iteration, the same process is repeated to compute

    x_1 = [7.85 + 0.1(-2.794524) + 0.2(7.005610)]/3 = 2.990557
    x_2 = [-19.3 - 0.1(2.990557) + 0.3(7.005610)]/7 = -2.499625
    x_3 = [71.4 - 0.3(2.990557) + 0.2(-2.499625)]/10 = 7.000291
The method is, therefore, converging on the true solution. Additional iterations could be applied to improve the answers. However, in an actual problem, we would not know the true answer a priori, so the error estimator provides a means to gauge convergence. For example, for x_1:

    ε_a,1 = |(2.990557 - 2.616667)/2.990557| × 100% = 12.5%

For x_2 and x_3, the error estimates are ε_a,2 = 11.8% and ε_a,3 = 0.076%. Note that, as was the case when determining roots of a single equation, such estimates give only an approximate appraisal of convergence.
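The sweep described above translates directly into code; a minimal Python sketch applied to the system of Example 6 (helper names are illustrative):

```python
def gauss_seidel(A, b, tol=1e-7, max_iter=100):
    """Gauss-Seidel iteration for [A]{x} = {b}, starting from all zeros.
    tol is the stopping threshold on the approximate percent error."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        worst = 0.0
        for i in range(n):
            # each new x_i immediately uses the freshest values of the others
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            if x_new != 0:
                worst = max(worst, abs((x_new - x[i]) / x_new) * 100)
            x[i] = x_new
        if worst < tol:
            break
    return x

# The 3x3 system of Example 6
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
sol = gauss_seidel(A, b)
print(sol)  # ≈ [3.0, -2.5, 7.0]
```

The first sweep reproduces the hand-computed values 2.616667, -2.794524, 7.005610; convergence is rapid here because the coefficient matrix is diagonally dominant.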

6. Relaxation Method

Relaxation represents a slight modification of the Gauss-Seidel method that is designed to enhance convergence. After each new value of x is computed, that value is modified by a weighted average of the results of the previous and the present iterations:

    x_i^new = λ x_i^new + (1 - λ) x_i^old

where λ is a weighting factor; values of λ between 1 and 2 place extra weight on the present value (overrelaxation).

Example 7: Solve the following system with Gauss-Seidel using overrelaxation (λ = 1.2) and a stopping criterion of ε_s = 10%:

    -3x_1 + 12x_2 = 9
    10x_1 - 2x_2 = 8
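A sketch of the relaxed Gauss-Seidel sweep in Python, applied to Example 7. Note one assumption beyond the handout: the rows are reordered so the large coefficients (10 and 12) sit on the diagonal, which Gauss-Seidel needs for reliable convergence.

```python
def gauss_seidel_sor(A, b, lam=1.2, es=10.0, max_iter=100):
    """Gauss-Seidel with relaxation x_i* = lam*x_i_new + (1-lam)*x_i_old,
    stopping once every unknown's approximate percent error is below es."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        worst = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]            # plain Gauss-Seidel value
            x_new = lam * x_gs + (1 - lam) * x[i]  # relaxed (weighted) value
            if x_new != 0:
                worst = max(worst, abs((x_new - x[i]) / x_new) * 100)
            x[i] = x_new
        if worst < es:
            break
    return x

# Example 7, rows swapped so the diagonal dominates
A = [[10.0, -2.0],
     [-3.0, 12.0]]
b = [8.0, 9.0]
sol = gauss_seidel_sor(A, b)
print(sol)  # heads toward the exact solution x1 = 1, x2 = 1
```

With the loose ε_s = 10% stopping criterion the iteration halts after a few sweeps, near (0.98, 1.00); tightening ε_s drives it to the exact solution (1, 1).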
