COURSE OUTLINE
A.
   1. Bisection Method
   2.
   3. Jacobi Iteration
   4. Gauss-Seidel Iteration
B.
   1. Linear Interpolation
   2. Quadratic Interpolation
   3. Lagrange Interpolation
   4.
   5. Equispaced Interpolations
      - Difference operators and difference tables
      - Forward, backward and central differences
C.
   1. Numerical Differentiation
   2.
   3. Numerical Integration
      - Trapezoidal Rule
      - Simpson's Rule
      - Mid-Point Rule
      - Romberg Integration
D.
   1. Euler Method
   2. Runge-Kutta Methods
   3. Predictor-Corrector Methods
INTRODUCTION
In the process of solving problems in Science, Engineering, Economics, etc., a physical
situation is first converted into a mathematical model. This is often called formulation of the
problem. This mathematical model often gives rise to mathematical problems which are too
difficult to solve in a neat closed form, e.g.
(i.) Integration: Find ∫₀¹ e^{x²} dx
(ii.)
(iii.)
(iv.)
When such problems arise, numerical analysis is then used for developing techniques to find
a solution or approximate solution of the mathematical equations describing the model.
A numerical method (or a combination of numerical methods) which can be used to solve a
problem is often called an algorithm.
An algorithm is a complete and unambiguous set of procedures leading to the solution of a
mathematical problem.
The results obtained for the solution of a problem will be affected by various sources of error.
Numerical analysts must consider how much accuracy is required, estimate the magnitude of
round-off and discretization errors, determine an appropriate step-size or the number of
iterations required, provide for checks on the accuracy and make allowance for corrective
action in cases of non-convergence.
The efficiency of any numerical method (or algorithm) must also be considered. An
algorithm would be of no practical use if it required the largest computer ever built to obtain
a useful answer.
The final phase in solving a problem is programming. Programming is the transformation of
the algorithm into a set of unambiguous step-by-step instructions for the computer.
In this segment of the course, we will look at the design (formulation) and analysis of various
numerical methods and assess them in terms of accuracy, efficiency and computer effort.
This will involve some mathematical analysis and some practical work using MATLAB.
Mathematical Modelling:
Example 1.1
A cannon at the origin fires a shell with muzzle speed V₀ at an elevation angle θ towards a
target at a distance d.

[Figure 1.1: Aiming a Cannon — the shell's trajectory from the cannon to the target at distance d.]

Taking x horizontal and y vertical, the equations of motion of the shell are

    ẍ = 0                                                 (1.1)
    ÿ = −g                                                (1.2)

where g is the acceleration due to gravity. Integrating with the initial conditions gives the
trajectory

    x = V₀ cos θ · t                                      (1.3)
    y = V₀ sin θ · t − ½gt²                               (1.4)

Analysis:
The shell lands when y = 0:

    V₀ sin θ · t − ½gt² = 0
    t(V₀ sin θ − ½gt) = 0

so either t = 0 (the moment of firing) or

    t = 2V₀ sin θ / g                                     (1.5)

The horizontal range is x = V₀ cos θ · t evaluated at this time, i.e.

    x = V₀ cos θ · (2V₀ sin θ / g) = V₀² sin(2θ) / g      (1.6)

In order to find the correct elevation to hit the target, θ must satisfy the equation

    d = V₀² sin(2θ) / g                                   (1.7)

Remarks:
(ii)  The modelling process gives an idealisation: some features have been ignored,
      e.g. air resistance and the length of the muzzle. They may be significant.
(iii) The nonlinear equation (1.7) may not have a solution. The maximum range is V₀²/g,
      attained when θ = π/4. So if d > V₀²/g, the target is out of range.
(iv)  In this particular case a closed-form solution exists. From

    d = V₀² sin(2θ) / g                                   (1.9)

we obtain

    sin(2θ) = dg / V₀²                                    (1.10)

so that

    θ = ½ sin⁻¹(dg / V₀²)                                 (1.11)
Normally, in such problems, a closed form solution is not possible and so an approximate
solution is sought from a numerical method.
Example 1.2
A sphere of radius r and density ρ is floating in water of unit density. It is required to
determine the depth h of the sphere that is below the waterline.

Solution
Mass of the sphere = Volume × Density:

    M = (4/3)πr³ρ                                         (1.12)

[Figure: a sphere of radius r floating with a depth h submerged below the waterline.]

Displaced water mass: applying Archimedes' Principle, a body partially submerged in a fluid
is buoyed up by a force equal to the weight of the displaced fluid. The submerged spherical
cap has volume (1/3)πh²(3r − h), so

    Mass of displaced water = (1/3)πh²(3r − h) × 1

Equating the two masses, we have

    (4/3)πr³ρ = (1/3)πh²(3r − h)
    4r³ρ = h²(3r − h)
    4r³ρ = 3rh² − h³
    4ρ = 3(h/r)² − (h/r)³

Define x = h/r. Then

    4ρ = 3x² − x³,  i.e.  x³ − 3x² + 4ρ = 0               (1.13)

Equation (1.13) is a cubic polynomial equation with three zeros. If ρ = 0.6, to what depth
does the sphere sink as a fraction of its radius? This example can be visualized as finding the
values of x at which the graph of f(x) = x³ − 3x² + 4ρ touches the x-axis, i.e. f(x) = 0.
[Graph: y = x³ − 3x² + 4ρ, showing where the curve crosses the x-axis.]

By contrast, the roots of the quadratic equation

    ax² + bx + c = 0                                      (1.14)

are given by the familiar formula

    x = (−b ± √(b² − 4ac)) / (2a),  a ≠ 0                 (1.15)

and simply require the substitution of values for a, b and c into (1.15).
LOCATING A ROOT
To develop a numerical method for finding the roots of f(x) = 0, it is useful to first determine
an interval a ≤ x ≤ b that contains at least one root. So we need to ask: what properties must
a function f of x satisfy on the interval a ≤ x ≤ b to guarantee at least one root in this
interval?
Example 1.3:
Consider the polynomial curve y = x³ − 3x² + 2x (shown below). The curve cuts the x-axis at
x = 0, x = 1 and x = 2, indicating that the cubic equation x³ − 3x² + 2x = 0 has three
solutions.

[Graph: y = x³ − 3x² + 2x, crossing the x-axis at x = 0, x = 1 and x = 2.]
1.1
BISECTION METHOD
This is a numerical method for solving f(x) = 0. The underlying mathematics for this method is
contained in the Intermediate Value Theorem (IVT): if the function f is continuous on [a, b] and k is
any number lying between f(a) and f(b), then there is a point c somewhere in (a, b) such
that f(c) = k.
[Figure: a continuous curve y = f(x) on [a, b], with a horizontal line at height k between f(a) and f(b).]

For the equation f(x) = 0, we use k = 0. Then the IVT tells us that if f is continuous on [a, b]
and f(a) and f(b) have different signs, then there is a solution of f(x) = 0 between a and b.
However, there might be more than one.

[Figure: two curves y = f(x) that change sign on [a, b]; one cuts the x-axis at a single solution, the other at several solutions.]
1. Find an interval [a, b] with f(a)·f(b) < 0; such an interval is called a nontrivial bracket
for a root of f(x) = 0.
2. Suppose [a, b] is a nontrivial bracket for a root of f(x) = 0. Then the value of f at the
midpoint c = ½(a + b) gives 3 possibilities:
(i)   f(c) = 0, then c is a root and the process stops.
(ii)  f(c) ≠ 0 and f(c)·f(b) < 0, then [c, b] is a nontrivial bracket for f(x) = 0.
(iii) f(c) ≠ 0 and f(a)·f(c) < 0, then [a, c] is a nontrivial bracket for f(x) = 0.
With case (ii) or (iii), the resulting nontrivial bracket is half the size of the original.
3. This process is repeated until the interval containing the root is small enough or until
whatever convergence criterion has been decided upon is met.
Example 1: (Use your calculator here) Find a root of x² − 0.5 = 0 using the Bisection method,
starting with the interval [0, 1]. Use 4 steps (i.e. calculate 4 subintervals).

Starting values: a = 0, b = 1, c = ½(0 + 1) = 0.5. Recording only the signs of f(a), f(c)
and f(b) at each step gives the table:

    n    a        c         b       f(a)   f(c)   f(b)
    0    0        0.5       1       -      -      +
    1    0.5      0.75      1       -      +      +
    2    0.5      0.625     0.75    -      -      +
    3    0.625    0.6875    0.75    -      -      +
    4    0.6875   0.71875   0.75    -      +      +

So |p − c₄| ≤ ½(b₄ − a₄) = ½(0.75 − 0.6875) = 0.03125.
The root cannot be further from c₄ than ½(b₄ − a₄), but it may be closer.

[Figure: the bisection process on y = f(x), showing the shrinking brackets [a₀, b₀] ⊃ [a₁, b₁] ⊃ [a₂, b₂] ⊃ [a₃, b₃] and the midpoints c₀, c₁, c₂, c₃ closing in on the root p.]
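The table can be regenerated in a few lines of code. The following is an illustrative sketch (the course's practical work uses MATLAB; the names here are our own):

```python
# Bisection for f(x) = x^2 - 0.5 on the bracket [0, 1]; each step keeps
# the half-interval over which f changes sign.
def f(x):
    return x**2 - 0.5

a, b = 0.0, 1.0
for n in range(5):
    c = 0.5 * (a + b)
    print(n, a, c, b)
    if f(a) * f(c) < 0:
        b = c   # sign change in [a, c]
    else:
        a = c   # sign change in [c, b]
```

The midpoints printed are 0.5, 0.75, 0.625, 0.6875 and 0.71875, matching the table.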
The method can be summarised as an algorithm:

INPUT:  endpoints a, b; tolerance E.
OUTPUT: approximate solution, c.
STEP 1: WHILE b − a > E, do STEPS 2 to 4.
STEP 2:   SET c = ½(a + b).
STEP 3:   IF f(c) = 0, THEN OUTPUT (c), STOP.
STEP 4:   IF f(a)·f(c) < 0, THEN SET b = c, ELSE SET a = c.
STEP 5: SET c = ½(a + b), OUTPUT (c), STOP.
The MATLAB implementation below illustrates some conventions:
1. LINE 1: begins with the keyword function, followed by the output argument c and the =
symbol. This is followed by the function name and the input arguments within round brackets.
2. LINE 2: called the H1 (help 1) line. It should be a comment line of special form:
beginning with %, followed without a space by the function name in capitals, followed
by one or more spaces and then a brief description. The description should begin with
a capital, end with a period "." and omit the words "the" and "a".
3. All comment lines from H1 up to the first non-comment line (usually a blank line for
readability) are displayed when help function_name is typed in the command
window after the >> symbol. These lines should describe the function and its
arguments (capitalized by convention).
Note that the function name must be the same as the name of the m-file in which it is stored.
So here bisect and bisect.m for the m-file.
function c = bisect(a,b,epsilon)
%BISECT Bisection method for f(x) = 0.
%   C = BISECT(A,B,EPSILON) returns an approximation C to a root of
%   f(x) = 0 lying in the bracket [A,B], to within tolerance EPSILON.
fa = f(a);
fb = f(b);
% check initial bracket
if fa*fb >= 0
    error('fa*fb is non-negative! The bracket is out of range!!!')
end
% bisection loop
while b - a > epsilon
    c = 0.5*(a + b);
    fc = f(c);
    if fc == 0
        return                 % c is an exact root
    elseif fa*fc < 0
        b = c;                 % root lies in [a, c]
        fb = fc;
    else
        a = c;                 % root lies in [c, b]
        fa = fc;
    end
end
c = 0.5*(a + b);
end

function y = f(x)
%F Function of X (here the function from Example 1).
y = x^2 - 0.5;
end
INTERVAL LENGTH
If the original bracket has length L₀, i.e. L₀ = b − a, then the bracket length after k steps is

    L_k = L₀ / 2^k

This formula allows us to calculate the number of steps required to make the bracket length
L_k ≤ E. Thus, L_k ≤ E implies

    L₀ / 2^k ≤ E
    2^k ≥ L₀ / E
    k ≥ ln(L₀/E) / ln(2)

Example 2: If L₀ = 1 and E = 10⁻⁶, how many steps are required to make L_k ≤ E?
This requires

    k ≥ ln(1/10⁻⁶) / ln(2) = ln(10⁶) / ln(2) ≈ 19.93

so k = 20 steps suffice.
Note: taking the midpoint as the approximation to the root introduces an extra factor of ½:

    |c_k − p| ≤ ½ L_k
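The step count can be computed directly; a small sketch (the function name is ours):

```python
import math

# Smallest k with L0 / 2**k <= E, i.e. k >= ln(L0/E) / ln(2).
def steps_needed(L0, E):
    return math.ceil(math.log(L0 / E) / math.log(2))

print(steps_needed(1.0, 1e-6))   # ln(10^6)/ln(2) ≈ 19.93, so k = 20
```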
ERROR ANALYSIS
The Bisection algorithm was stopped when b − a ≤ E. This gave us an error estimate for the
approximate root. If the approximate root is 1, then an error estimate of 10⁻⁶, say, is
reasonable. If however the approximate root is 10⁻⁷, then an error estimate of 10⁻⁶ is very
bad.

ABSOLUTE ERROR
If y approximates x, then |y − x| is the absolute error.

RELATIVE ERROR
If y approximates x, then |y − x| / |x| is the relative error (x ≠ 0).

Example: if x = 0.3000×10⁴ and y = 0.3100×10⁴, the absolute error is
|y − x| = 0.1×10³, while the relative error is 0.1×10³ / 0.3000×10⁴ ≈ 0.3333×10⁻¹.
In this section, we explore a class of numerical methods for solving the equation f(x) = 0
which, given a first term x₀, generate a sequence {x_n}. The requirements are (i) to
generate the sequence from the equation f(x) = 0 itself, and (ii) to ensure that x_n → α
(x_n approaches α) as n → ∞ (n tends to infinity), i.e. convergence is reached.

As an example, consider the sequence generated by

    x_{n+1} = 2x_n(1 − x_n),  x₀ = 0.100                  (1.2.1)

    n    x_n
    0    0.100
    1    0.180
    2    0.295
    3    0.416
    4    0.486
    5    0.5000
    6    0.5000

From the table, the sequence approaches the root x = 0.5, where it gets stuck. On dropping
subscripts from equation (1.2.1), the equation x = 2x(1 − x) is obtained.

[Figure (a): the roots of f(x) = 0. Figure (b): the graphs of y = x and y = 2x(1 − x), intersecting at the fixed point x = 0.5.]

In figure (b), it is seen that the graphs of y = x and y = 2x(1 − x) intersect at x = 0.5. Thus,
when the RHS of equation (1.2.1) is evaluated at x_n = 0.5, the value x_{n+1} = 0.5 is obtained. All
subsequent terms in the sequence, x_{n+2}, x_{n+3}, etc. will equal the fixed value 0.5.
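The sequence in the table is easy to regenerate; a minimal sketch:

```python
# Iterate x_{n+1} = 2 x_n (1 - x_n) from x_0 = 0.100; the sequence is
# attracted to the fixed point x = 0.5.
x = 0.100
for n in range(7):
    print(n, round(x, 3))
    x = 2 * x * (1 - x)
```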
Definition
A number α is a fixed point of a function g if g(α) = α. To solve

    f(x) = 0                                              (1.2.2)

by fixed-point iteration, the equation is first rearranged into the equivalent form

    x = g(x)                                              (1.2.5)

so that a root of (1.2.2) is a fixed point of g. There are infinitely many ways of arranging
(1.2.2) in the form of (1.2.5).

Example 1.2.2: For the equation f(x) = 0, where f(x) = x³ + 4x² − 10.
The following are three possible arrangements in the form x = g(x):
(i)   x = (10 − 4x²)^{1/3} = g₁(x)
(ii)  x = ½(10 − x³)^{1/2} = g₂(x)
(iii) x = (10/(4 + x))^{1/2} = g₃(x)
The systematic iterative approach required to determine a zero of f follows these steps:
1. Rearrange f(x) = 0 into the equivalent form x = g(x).
2. Choose a starting value x₀.
3. Given x₀, use the recurrence relation x_{n+1} = g(x_n) to generate the sequence {x_n}.

For the three arrangements of Example 1.2.2, the recurrences are

    x_{n+1} = (10 − 4x_n²)^{1/3} = g₁(x_n)                (1.2.6)
    x_{n+1} = ½(10 − x_n³)^{1/2} = g₂(x_n)                (1.2.7)
    x_{n+1} = (10/(4 + x_n))^{1/2} = g₃(x_n)              (1.2.8)

Starting each from x₀ = 1.5 gives

    n     g₁(x_n)          g₂(x_n)
    0     1.5              1.5
    1     1                1.2870
    2     1.8171           1.4025
    3     −1.4748          1.3455
    4     1.0914           1.3752
    5     1.7364           1.3601
    6     −1.2725          1.3678
    7     1.5215           1.3639
    ...   ...              ...
    14    not convergent   1.3652

The sequences show a range of behaviour. The first sequence does not approach a limiting
value. The second sequence attains a limiting value after 14 iterations. The third sequence,
generated by g₃, attains a limiting value after just 5 iterations.
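The two convergent schemes can be reproduced with a short script (an illustrative sketch with our own names; the divergent g₁ is omitted):

```python
import math

# Fixed-point iteration for x^3 + 4x^2 - 10 = 0 using the convergent
# rearrangements g2 and g3, both started from x0 = 1.5.
def g2(x):
    return 0.5 * math.sqrt(10 - x**3)

def g3(x):
    return math.sqrt(10 / (4 + x))

for g in (g2, g3):
    x, steps = 1.5, 0
    while abs(g(x) - x) > 5e-5:   # stop at roughly 4-decimal agreement
        x, steps = g(x), steps + 1
    print(g.__name__, round(g(x), 4), steps)
```

Both sequences settle on the root 1.3652, with g₃ getting there in far fewer steps.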
Then the question to ask is:
What property does g(x) require near a fixed point α for α to be the limiting value of a
sequence generated by x_{n+1} = g(x_n)?
The answer leads us to another topic.

Fixed-Point Convergence
Convergence of the sequence {x_n} generated by x_{n+1} = g(x_n) occurs if |g'(α)| < 1.

    Condition         Behaviour
    |g'(α)| < 1       attracting: convergent
    |g'(α)| = 1       indifferent: uncertain
    |g'(α)| > 1       repelling: divergent

So if the slope of g is positive, then monotonic convergence or divergence results. If the slope
of g is negative, then oscillatory convergence or divergence results.
Initial Guess
Fixed-point methods require a good initial value x₀. A simple solution is to find an interval
[a, b] on which the original function f satisfies the sign property and then use the midpoint
x₀ = ½(a + b) as the initial value. In other words, the bisection method can be used to supply
an approximate starting value.
Example 1.2.4:
Find a root of f(x) = x² − 5x + 4 = 0 (the roots are x = 1 and x = 4).
Three possible arrangements are:
(i)   x = g₁(x) = (x² + 4)/5
(ii)  x = g₂(x) = √(5x − 4)
(iii) x = g₃(x) = x² − 4x + 4

The derivatives are:
(a) g₁'(x) = 2x/5
(b) g₂'(x) = 5 / (2√(5x − 4))
(c) g₃'(x) = 2x − 4

For g₁, the condition |g₁'(x)| < 1 requires

    4x²/25 < 1
    x² − 25/4 < 0
    (x − 5/2)(x + 5/2) < 0

i.e. −5/2 < x < 5/2 (a guess of x₀ = 1.5 will be appropriate, and the iteration converges to
the root x = 1). Moreover, if g'(x) is near zero in the entire region, the iteration converges
quickly; if the derivative is close to 1 in magnitude, convergence is slow.

For g₂, the condition |g₂'(x)| < 1 requires

    25 / (4(5x − 4)) < 1
    25 < 4(5x − 4)
    20x > 41
    x > 2.05

so this iteration can converge only near the root x = 4.

For g₃, the condition |g₃'(x)| < 1 requires

    |2x − 4| < 1
    (2x − 4)² < 1
    −1 < 2x − 4 < 1

i.e. 3/2 < x < 5/2. Neither root lies in this interval, so the g₃ iteration cannot converge to
either root.
1.3
From the fixed-point processes seen so far, it is clear that the choice of g is instrumental in
determining the convergence properties of the sequence {x_n} generated by x_{n+1} = g(x_n). We now
investigate how to accelerate the convergence of {x_n}. The first approach uses the sequence
{x_n} itself and simply modifies the term x_{n+1} to improve convergence.

The Aitken acceleration scheme is given as

    x*_{n+1} = x_{n+1} − (Δx_n)² / Δ²x_{n−1}              (1.2.9)

where

    Δx_n = x_{n+1} − x_n
    Δ²x_{n−1} = x_{n+1} − 2x_n + x_{n−1}
Applied to the g₂ sequence of Example 1.2.2:

    n    x_n       x*_n
    0    1.5
    1    1.2870
    2    1.4025    1.3619
    3    1.3455    1.3643
    4    1.3752    1.3650
    5    1.3601    1.3652
    6    1.3678    1.3652

For n = 1,

    x*₂ = x₂ − (x₂ − x₁)² / (x₂ − 2x₁ + x₀)
        = 1.4025 − (1.4025 − 1.2870)² / (1.4025 − 2(1.2870) + 1.50)
        = 1.3619

For n = 2,

    x*₃ = 1.3455 − (1.3455 − 1.4025)² / (1.3455 − 2(1.4025) + 1.2870) = 1.3643

For n = 3,

    x*₄ = 1.3752 − (1.3752 − 1.3455)² / (1.3752 − 2(1.3455) + 1.4025) = 1.3650

For n = 4,

    x*₅ = 1.3601 − (1.3601 − 1.3752)² / (1.3601 − 2(1.3752) + 1.3455) = 1.3652

For n = 5,

    x*₆ = 1.3678 − (1.3678 − 1.3601)² / (1.3678 − 2(1.3601) + 1.3752) = 1.3652

1.4
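The whole acceleration table can be generated in a few lines; an illustrative sketch:

```python
# Aitken acceleration: x*_{n+1} = x_{n+1} - (x_{n+1} - x_n)^2
#                               / (x_{n+1} - 2 x_n + x_{n-1}).
xs = [1.5, 1.2870, 1.4025, 1.3455, 1.3752, 1.3601, 1.3678]
accel = []
for n in range(1, len(xs) - 1):
    dx = xs[n + 1] - xs[n]
    d2x = xs[n + 1] - 2 * xs[n] + xs[n - 1]
    accel.append(round(xs[n + 1] - dx**2 / d2x, 4))
print(accel)   # [1.3619, 1.3643, 1.3650, 1.3652, 1.3652]
```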
A second approach to accelerating convergence is based upon the choice of the function g.
Earlier, several ideas were established regarding the convergence of a sequence {x_n},
generated by x_{n+1} = g(x_n), to a fixed point α of g:
(a) x_n → α if |g'(α)| < 1, and
(b) the smaller |g'(α)| is, the faster the convergence.
Two questions then arise: (i) can a given arrangement g be modified so that the sequence
converges, and (ii) can the convergence be made as fast as possible?

To answer the first question, let us add the quantity λx to each side of the equation x = g(x),
where λ ≠ −1, to obtain

    x + λx = g(x) + λx

Rearranging, we have

    x(1 + λ) = g(x) + λx
    x = (g(x) + λx) / (1 + λ) = G(x)                      (1.30)

For example, consider g(x) = (x² + 2)/3, whose fixed points are x = 1 and x = 2 (since
x = (x² + 2)/3 is equivalent to x² − 3x + 2 = 0). Here g'(x) = 2x/3, so

    g'(2) = 4/3 > 1

and the iteration x_{n+1} = g(x_n) cannot converge to the fixed point x = 2.
If, for example, 7x is added to both sides and the result rearranged to the form of (1.30), we
obtain

    x + 7x = (x² + 2)/3 + 7x = (x² + 21x + 2)/3

    x = (x² + 21x + 2)/24 = G(x)

G(1) = 1 and G(2) = 2. Thus, x = 1 and x = 2 are fixed points of G, and hence roots of
x = g(x). To find a fixed point of G, the relation x_{n+1} = G(x_n) could be used to generate a
sequence {x_n} with limiting value α. To be efficient, G'(α) should be small, ideally close to
zero. Differentiating (1.30),

    G'(x) = (g'(x) + λ) / (1 + λ)                         (1.31)

which vanishes when λ = −g'(x). Typically, if a root is sought in the vicinity of x₀, then λ is
taken as −g'(x₀). The better the initial guess, the better the value of λ.
Example 1.2.7: Consider again the equation x³ + 4x² − 10 = 0. The acceleration scheme is
applied to all three iterative processes of Example 1.2.2, i.e.

    x_{n+1} = G_i(x_n),  i = 1, 2, 3,  with  λ_i = −g_i'(1.5)

which gives λ₁ = 4.000, λ₂ = 0.6556 and λ₃ = 0.1226.
Compare the convergence of the sequences in the table below (with 11, 5 and 3 iterations
respectively) with the original table under Example 1.2.2 (divergence, 14 and 5 iterations).

    n     G₁: x_n   G₂: x_n   G₃: x_n
    0     1.5000    1.5000    1.5000
    1     1.4000    1.3713    1.3653
    2     1.3785    1.3657    1.3652
    3     1.3705    1.3653    1.3652
    4     1.3674    1.3652
    5     1.3661    1.3652
    6     1.3656
    7     1.3654
    8     1.3653
    9     1.3653
    10    1.3652
    11    1.3652

To further accelerate the sequence, we might update the value of λ at each step, using
λ = −g'(x_n) in place of the fixed value λ = −g'(x₀) (x_n should be closer to α than x₀ for a
convergent sequence). The effect of this modification is now investigated: it results in a
new iterative scheme, the Newton-Raphson iteration.
With λ = −g'(x_n), equation (1.30) gives the iteration

    x_{n+1} = (g(x_n) − g'(x_n)x_n) / (1 − g'(x_n))       (1.28)

The function g is related to the function f appearing in the original equation f(x) = 0. By
simple manipulation, adding x to each side of −f(x) = 0 yields

    x − f(x) = x

so we may take g(x) = x − f(x). Then

    g'(x) = 1 − f'(x)

and substituting into (1.28):

    x_{n+1} = [x_n − f(x_n) − (1 − f'(x_n))x_n] / [1 − (1 − f'(x_n))]
            = [x_n f'(x_n) − f(x_n)] / f'(x_n)

i.e.

    x_{n+1} = x_n − f(x_n) / f'(x_n)                      (1.29)

This is the Newton-Raphson iteration.

Example: For f(x) = x³ + 4x² − 10,

    x_{n+1} = x_n − (x_n³ + 4x_n² − 10) / (3x_n² + 8x_n)
            = (2x_n³ + 4x_n² + 10) / (3x_n² + 8x_n)

Starting from x₀ = 1.5:

    n = 0:  x₁ = (2x₀³ + 4x₀² + 10)/(3x₀² + 8x₀) = 1.3733
    n = 1:  x₂ = (2x₁³ + 4x₁² + 10)/(3x₁² + 8x₁) = 1.3653
    n = 2:  x₃ = (2x₂³ + 4x₂² + 10)/(3x₂² + 8x₂) = 1.3652
    n = 3:  x₄ = (2x₃³ + 4x₃² + 10)/(3x₃² + 8x₃) = 1.3652
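A quick check of the scheme (an illustrative sketch):

```python
# Newton-Raphson for f(x) = x^3 + 4x^2 - 10 from x0 = 1.5:
# x_{n+1} = x_n - f(x_n)/f'(x_n).
def f(x):
    return x**3 + 4*x**2 - 10

def df(x):
    return 3*x**2 + 8*x

x = 1.5
for n in range(1, 4):
    x = x - f(x) / df(x)
    print(n, round(x, 4))   # 1.3733, 1.3653, 1.3652
```

Note how few steps are needed compared with the plain fixed-point iterations.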
The Newton-Raphson scheme is itself a fixed-point iteration x_{n+1} = g(x_n), with

    g(x) = x − f(x)/f'(x)

Recall that the derivative of g is the key descriptor of convergence. Here

    g'(x) = 1 − [f'(x)f'(x) − f(x)f''(x)] / [f'(x)]²
          = f(x)f''(x) / [f'(x)]²

At a root x = α, the value f(α) is zero by definition. Hence, for the Newton-Raphson scheme,
g'(α) = 0: optimal convergence. So the condition for convergence,

    | f(x)f''(x) / [f'(x)]² | < 1

holds near the root.
2.0
2.1 Lagrange Polynomials
Consider first linear interpolation through two points (x₀, y₀) and (x₁, y₁), where y_i = f(x_i).
Define

    L₀(x) = (x − x₁)/(x₀ − x₁),   L₁(x) = (x − x₀)/(x₁ − x₀)

and

    P(x) = L₀(x)f(x₀) + L₁(x)f(x₁)

Since

    L₀(x₀) = 1,  L₀(x₁) = 0,  L₁(x₀) = 0, and L₁(x₁) = 1,

we have

    P(x₀) = 1·f(x₀) + 0·f(x₁) = f(x₀) = y₀ and
    P(x₁) = 0·f(x₀) + 1·f(x₁) = f(x₁) = y₁

So P is the unique linear function passing through (x₀, y₀) and (x₁, y₁).

[Figure: the curve y = f(x) and the chord y = P(x) joining (x₀, y₀) and (x₁, y₁).]
To generalize the concept of linear interpolation to higher degree polynomials, consider the
construction of a polynomial of degree at most n that passes through
the n + 1 points (x₀, f(x₀)), (x₁, f(x₁)), ..., (x_n, f(x_n)).

[Figure: a curve f sampled at the nodes x₀, x₁, x₂, ..., x_{n−1}, x_n.]

In this case, we need to construct, for each k = 0, 1, ..., n, a function L_{n,k}(x) with the
property that L_{n,k}(x_i) = 0 when i ≠ k and L_{n,k}(x_k) = 1. To satisfy L_{n,k}(x_i) = 0 for
each i ≠ k requires that the numerator of L_{n,k}(x) contains the term

    (x − x₀)(x − x₁)···(x − x_{k−1})(x − x_{k+1})···(x − x_n)

To satisfy L_{n,k}(x_k) = 1, the denominator of L_{n,k}(x) must be equal to this term evaluated
at x = x_k. Thus

    L_{n,k}(x) = [(x − x₀)···(x − x_{k−1})(x − x_{k+1})···(x − x_n)] /
                 [(x_k − x₀)···(x_k − x_{k−1})(x_k − x_{k+1})···(x_k − x_n)]

[Figure: the graph of L_{n,k}(x), equal to 1 at x_k and 0 at every other node x₀, ..., x_{k−1}, x_{k+1}, ..., x_n.]

The interpolating polynomial is easily described now that the form of L_{n,k}(x) is
known. This polynomial is called the nth Lagrange interpolating polynomial:

    P_n(x) = f(x₀)L_{n,0}(x) + ··· + f(x_n)L_{n,n}(x) = Σ_{k=0}^{n} f(x_k)L_{n,k}(x)

with L_{n,k}(x) as above for each k = 0, 1, ..., n.
If x₀, x₁, ..., x_n are (n + 1) distinct numbers and f is a function whose values are given at
these numbers, then P_n(x) is the unique polynomial of degree at most n that agrees
with f(x) at x₀, x₁, ..., x_n.
The notation for describing the Lagrange interpolating polynomial P_n(x) is rather
complicated. To reduce this somewhat, we will write L_{n,k}(x) simply as L_k(x).
Example 2.1: Use the numbers (or nodes) x₀ = 2, x₁ = 2.5, and x₂ = 4 to find the
second-degree interpolating polynomial for f(x) = 1/x.
Solution: This requires that we first determine the coefficient polynomials L₀, L₁ and L₂:

    L₀(x) = (x − 2.5)(x − 4) / [(2 − 2.5)(2 − 4)] = x² − 6.5x + 10

    L₁(x) = (x − 2)(x − 4) / [(2.5 − 2)(2.5 − 4)] = (−4x² + 24x − 32)/3

    L₂(x) = (x − 2)(x − 2.5) / [(4 − 2)(4 − 2.5)] = (x² − 4.5x + 5)/3

Since f(x₀) = 0.5, f(x₁) = 0.4 and f(x₂) = 0.25,

    P(x) = Σ_{k=0}^{2} f(x_k)L_k(x)
         = 0.5(x² − 6.5x + 10) + 0.4(−4x² + 24x − 32)/3 + 0.25(x² − 4.5x + 5)/3

An approximation of f(3) = 1/3 is then

    f(3) ≈ P(3) = 0.325
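A direct evaluation of the Lagrange form confirms the value (an illustrative sketch; names are ours):

```python
# Evaluate the degree-2 Lagrange interpolant of f(x) = 1/x
# at x = 3, using nodes 2, 2.5 and 4.
nodes = [2.0, 2.5, 4.0]
fvals = [1.0 / x for x in nodes]

def lagrange_eval(x, nodes, fvals):
    total = 0.0
    for k, xk in enumerate(nodes):
        Lk = 1.0                      # build L_k(x) = prod (x - xi)/(xk - xi)
        for i, xi in enumerate(nodes):
            if i != k:
                Lk *= (x - xi) / (xk - xi)
        total += fvals[k] * Lk
    return total

print(lagrange_eval(3.0, nodes, fvals))   # ≈ 0.325, approximating 1/3
```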
Example 2.2: The table below lists values of a function at various points. Compare
the approximations to f(1.5) obtained by Lagrange polynomials of degree one, two and three.

    x      f(x)
    1.0    0.7651977
    1.3    0.6200860
    1.6    0.4554022
    1.9    0.2818186
    2.2    0.1103623

Since 1.5 is between 1.3 and 1.6, the most appropriate linear polynomial uses
x₀ = 1.3 and x₁ = 1.6. The value of the interpolating polynomial at 1.5 is

    P₁(1.5) = [(1.5 − 1.6)/(1.3 − 1.6)]·0.6200860 + [(1.5 − 1.3)/(1.6 − 1.3)]·0.4554022
            = 0.5102968
Two polynomials of degree two could reasonably be used: one by letting
x₀ = 1.3, x₁ = 1.6, x₂ = 1.9, the other by letting x₀ = 1.0, x₁ = 1.3, x₂ = 1.6. The
first choice gives

    P₂(1.5) = 0.5112857
In the third-degree case there are also two reasonable choices for the polynomial. One
uses

    x₀ = 1.3,  x₁ = 1.6,  x₂ = 1.9,  x₃ = 2.2

which gives

    P₃(1.5) = [(1.5−1.6)(1.5−1.9)(1.5−2.2) / ((1.3−1.6)(1.3−1.9)(1.3−2.2))]·0.6200860
            + [(1.5−1.3)(1.5−1.9)(1.5−2.2) / ((1.6−1.3)(1.6−1.9)(1.6−2.2))]·0.4554022
            + [(1.5−1.3)(1.5−1.6)(1.5−2.2) / ((1.9−1.3)(1.9−1.6)(1.9−2.2))]·0.2818186
            + [(1.5−1.3)(1.5−1.6)(1.5−1.9) / ((2.2−1.3)(2.2−1.6)(2.2−1.9))]·0.1103623
            = 0.5118302

P₃(1.5) is the most accurate of these approximations.
2.2
The interpolation error has the form

    f(x) − P_n(x) = [f^{(n+1)}(ξ) / (n + 1)!] (x − x₀)(x − x₁)···(x − x_n)

for some ξ between x₀, x₁, ..., x_n and x. A practical difficulty with Lagrange
interpolation is that, since the error term is difficult to apply, the degree of polynomial
needed for the desired accuracy is generally not known until the computations are
completed. The usual practice is to compare the results given by various
polynomials until appropriate agreement is obtained, as was done in the previous
example.
Recursively generated Lagrange polynomials. Let f be defined at x₀, x₁, ..., x_k, and let x_j,
x_i be two distinct numbers in this set. If

    P(x) = [(x − x_j) P_{0,1,...,j−1,j+1,...,k}(x) − (x − x_i) P_{0,1,...,i−1,i+1,...,k}(x)] / (x_i − x_j)

then P(x) is the kth Lagrange polynomial that interpolates f at the k + 1 points x₀, x₁,
..., x_k. To test the correctness of the recursive formula,
let Q = P_{0,1,...,i−1,i+1,...,k} and Q̂ = P_{0,1,...,j−1,j+1,...,k}, so that Q(x) and Q̂(x) are
polynomials of degree at most k − 1 and

    P(x) = [(x − x_j)Q̂(x) − (x − x_i)Q(x)] / (x_i − x_j)

For any node x_r with r ≠ i, j, both Q and Q̂ interpolate f at x_r, so

    P(x_r) = [(x_r − x_j)f(x_r) − (x_r − x_i)f(x_r)] / (x_i − x_j) = f(x_r)

while P(x_i) = Q̂(x_i) = f(x_i) and P(x_j) = Q(x_j) = f(x_j). Hence P interpolates f at all
k + 1 points.
This result implies that the approximations from the interpolating polynomials can be
generated recursively in the manner shown in the table below:
    x₀    P₀ = Q₀,₀
    x₁    P₁ = Q₁,₀    P₀,₁ = Q₁,₁
    x₂    P₂ = Q₂,₀    P₁,₂ = Q₂,₁    P₀,₁,₂ = Q₂,₂
    x₃    P₃ = Q₃,₀    P₂,₃ = Q₃,₁    P₁,₂,₃ = Q₃,₂    P₀,₁,₂,₃ = Q₃,₃
    x₄    P₄ = Q₄,₀    P₃,₄ = Q₄,₁    P₂,₃,₄ = Q₄,₂    P₁,₂,₃,₄ = Q₄,₃    P₀,₁,₂,₃,₄ = Q₄,₄
Example 2.3: In Example 2.2, values of various interpolating polynomials at x = 1.5 were
obtained using the data shown in the first two columns of the table below. Suppose that we
want to use Neville's method to calculate the approximation to f(1.5). If
x₀ = 1.0, x₁ = 1.3, x₂ = 1.6, x₃ = 1.9 and x₄ = 2.2, then f(1.0) = Q₀,₀, f(1.3) = Q₁,₀,
f(1.6) = Q₂,₀, f(1.9) = Q₃,₀, and f(2.2) = Q₄,₀; these are the 5 polynomials
of degree zero (constants) that approximate f(1.5). Calculating:

    x_i    f_i = Q_{i,0}
    1.0    0.7651977
    1.3    0.6200860    Q₁,₁
    1.6    0.4554022    Q₂,₁    Q₂,₂
    1.9    0.2818186    Q₃,₁    Q₃,₂    Q₃,₃
    2.2    0.1103623    Q₄,₁    Q₄,₂    Q₄,₃    Q₄,₄

    Q₁,₁(1.5) = [(1.5 − 1.0)Q₁,₀ − (1.5 − 1.3)Q₀,₀] / (1.3 − 1.0)
              = [0.5(0.6200860) − 0.2(0.7651977)] / 0.3 = 0.5233449

Similarly,

    Q₂,₁(1.5) = [(1.5 − 1.3)(0.4554022) − (1.5 − 1.6)(0.6200860)] / (1.6 − 1.3)
              = 0.5102968

    Q₃,₁(1.5) = 0.5132634  and  Q₄,₁(1.5) = 0.5104270
The higher degree approximations are generated in a similar manner and are shown in
the table below:

    x_i    Q_{i,0}      Q_{i,1}      Q_{i,2}      Q_{i,3}      Q_{i,4}
    1.0    0.7651977
    1.3    0.6200860    0.5233449
    1.6    0.4554022    0.5102968    0.5124715
    1.9    0.2818186    0.5132634    0.5112857    0.5118127
    2.2    0.1103623    0.5104270    0.5137361    0.5118302    0.5118200

The best linear approximation is expected to be Q₂,₁, since 1.5 is between x₁ = 1.3 and
x₂ = 1.6.
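The tableau can be generated programmatically; an illustrative sketch:

```python
# Neville's method at x = 1.5 for the tabulated data; Q[i][j] holds the
# value at x of the polynomial through x_{i-j}, ..., x_i.
xs = [1.0, 1.3, 1.6, 1.9, 2.2]
fs = [0.7651977, 0.6200860, 0.4554022, 0.2818186, 0.1103623]
x = 1.5
Q = [[fi] for fi in fs]
for i in range(1, len(xs)):
    for j in range(1, i + 1):
        num = (x - xs[i - j]) * Q[i][j - 1] - (x - xs[i]) * Q[i - 1][j - 1]
        Q[i].append(num / (xs[i] - xs[i - j]))
print(round(Q[1][1], 7), round(Q[4][4], 7))   # ≈ 0.5233449 and ≈ 0.5118200
```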
3.0
Divided Differences
Iterated interpolation was used in the previous section to generate successively higher degree
polynomial approximations at a specific point. Divided difference methods, introduced here,
are used to successively generate the polynomials themselves. We first need to introduce the
notation. The zeroth divided difference of the function f w.r.t. x_i, written f[x_i], is simply
the value of f at x_i:

    f[x_i] = f(x_i)

The remaining divided differences are defined inductively. The first divided difference of f
w.r.t. x_i and x_{i+1} is

    f[x_i, x_{i+1}] = (f[x_{i+1}] − f[x_i]) / (x_{i+1} − x_i)

3.2
Consider a function f(x) which is analytic (can be expanded in a Taylor series) in the
neighbourhood of a point x. We find f(x + h) by expanding f in a Taylor series about x:

    f(x + h) = f(x) + hf'(x) + (h²/2!)f''(x) + (h³/3!)f'''(x) + ...     (3.1)

Solving for f'(x),

    f'(x) = [f(x + h) − f(x)]/h − (h/2)f''(x) − (h²/6)f'''(x) − ...     (3.2)

    f'(x) = [f(x + h) − f(x)]/h + O(h)                                  (3.3)

In words, equation (3.3) states that we have found an expression for the 1st derivative of f
w.r.t. x which is accurate to within an error of order h. We shall employ the subscript
notation

    f(x + h) ≡ f_{j+1}                                                  (3.4)
    f(x) ≡ f_j                                                          (3.5)

so that

    f'(x) ≈ (f_{j+1} − f_j)/h                                           (3.6)

The 1st forward difference of f at j is defined as

    Δf_j = f_{j+1} − f_j                                                (3.7)

so that

    f'(x) ≈ Δf_j / h                                                    (3.8)
We now use the Taylor series expansion of f about x to determine f(x − h):

    f(x − h) = f(x) − hf'(x) + (h²/2!)f''(x) − (h³/3!)f'''(x) + ...     (3.9)

or

    f'(x) = [f(x) − f(x − h)]/h + (h/2)f''(x) − ...                     (3.10)

    f'(x) = [f(x) − f(x − h)]/h + O(h)                                  (3.11)

In subscript notation, with f(x − h) ≡ f_{j−1},

    f'(x) ≈ (f_j − f_{j−1})/h                                           (3.12)

The 1st backward difference of f at j is defined as

    ∇f_j = f_j − f_{j−1}                                                (3.13)

so that

    f'(x) ≈ ∇f_j / h                                                    (3.14)
To obtain the 2nd derivative, expand f(x + 2h):

    f(x + 2h) = f(x) + 2hf'(x) + ((2h)²/2!)f''(x) + ((2h)³/3!)f'''(x) + ...    (3.15)

Subtracting twice the expansion (3.1) gives

    f(x + 2h) − 2f(x + h) + f(x) = h²f''(x) + h³f'''(x) + ...                  (3.16)

Solving for f''(x),

    f''(x) = [f(x + 2h) − 2f(x + h) + f(x)]/h² − hf'''(x) − ...                (3.17)

or, in subscript notation,

    f''(x) ≈ (f_{j+2} − 2f_{j+1} + f_j)/h²                                     (3.18)

We have now found an expression for the second derivative of f w.r.t. x which is accurate to
within an error of order h. The 2nd forward difference of f at j is defined as

    Δ²f_j = f_{j+2} − 2f_{j+1} + f_j                                           (3.19)

so that

    f''(x) ≈ Δ²f_j / h²                                                        (3.20)
By using the backward expansion (3.9) to obtain f(x − h) and a similar expansion about x to
obtain f(x − 2h), we can find a backward difference expression for f''(x) which is accurate to
O(h):

    f''(x) ≈ (f_j − 2f_{j−1} + f_{j−2})/h²                                     (3.21)

The 2nd backward difference of f at j is defined as

    ∇²f_j = f_j − 2f_{j−1} + f_{j−2}                                           (3.22)

so that

    f''(x) ≈ ∇²f_j / h²                                                        (3.23)
We may now define the procedure for finding forward and backward differences of any
order n. Any forward or backward difference may be obtained, starting from the 1st
forward and backward differences (3.7) and (3.13), by using the recurrence formulas

    Δⁿf_j = Δ(Δⁿ⁻¹f_j)                                                         (3.24)
    ∇ⁿf_j = ∇(∇ⁿ⁻¹f_j)                                                         (3.25)

For example,

    Δ²f_j = Δ(Δf_j) = Δ(f_{j+1} − f_j) = f_{j+2} − 2f_{j+1} + f_j
    ∇²f_j = ∇(∇f_j) = ∇(f_j − f_{j−1}) = f_j − 2f_{j−1} + f_{j−2}
Forward and backward difference expressions for derivatives of any order are given by

    dⁿf/dxⁿ |_{x_j} ≈ Δⁿf_j / hⁿ                                               (3.26)
    dⁿf/dxⁿ |_{x_j} ≈ ∇ⁿf_j / hⁿ                                               (3.27)

Forward and backward difference expressions of O(h) are tabulated below for derivatives of
up to 4th order. It may be a convenient memory aid to note that the coefficients of the forward
difference expressions for the nth derivative, starting from j and proceeding forward, are
given by the coefficients of (−1)ⁿ(a − b)ⁿ in order, while those for the backward
difference expressions, starting from j and proceeding backward, are given by the coefficients
of (a − b)ⁿ in order.

Forward differences, O(h):

                     f_j    f_{j+1}   f_{j+2}   f_{j+3}   f_{j+4}
    hf'(x)           -1     1
    h²f''(x)          1     -2        1
    h³f'''(x)        -1      3       -3         1
    h⁴f⁽ⁱᵛ⁾(x)        1     -4        6        -4         1

Backward differences, O(h):

                     f_{j-4}   f_{j-3}   f_{j-2}   f_{j-1}   f_j
    hf'(x)                                         -1        1
    h²f''(x)                              1        -2        1
    h³f'''(x)                  -1         3        -3        1
    h⁴f⁽ⁱᵛ⁾(x)        1        -4         6        -4        1
The difference expressions for derivatives which we have thus far obtained are of O(h).
More accurate expressions may be found by simply retaining more terms. Take, for example,
the series in equation (3.1) for f(x + h):

    f(x + h) = f(x) + hf'(x) + (h²/2!)f''(x) + (h³/3!)f'''(x) + ...            (3.28)

so that

    f'(x) = [f(x + h) − f(x)]/h − (h/2)f''(x) − (h²/6)f'''(x) − ...            (3.29)

From (3.17) we have a forward difference expression for f''(x) complete with its error
term. Substituting this expression into (3.29), we obtain

    f'(x) = [f(x + h) − f(x)]/h
            − (h/2){[f(x + 2h) − 2f(x + h) + f(x)]/h² − hf'''(x) − ...}
            − (h²/6)f'''(x) − ...                                              (3.30)

Collecting terms,

    f'(x) = [−f(x + 2h) + 4f(x + h) − 3f(x)]/(2h) + (h²/3)f'''(x) + ...        (3.31)

or, in subscript notation,

    f'(x) = (−f_{j+2} + 4f_{j+1} − 3f_j)/(2h) + O(h²)                          (3.32)

We have thus found a forward difference representation for the first derivative which is
accurate to O(h²). Note that the expression is exact for a parabola, since the error term
involves only 3rd and higher derivatives. A similar backward difference expression of O(h²)
can be obtained by using the backward Taylor series expansions of f(x − h) and f(x − 2h).
Forward differences, O(h²):

                     f_j    f_{j+1}   f_{j+2}   f_{j+3}   f_{j+4}   f_{j+5}
    2hf'(x_j)        -3      4        -1
    h²f''(x_j)        2     -5         4        -1
    2h³f'''(x_j)     -5     18       -24        14        -3
    h⁴f⁽ⁱᵛ⁾(x_j)      3    -14        26       -24        11        -2

Backward differences, O(h²):

                     f_{j-5}   f_{j-4}   f_{j-3}   f_{j-2}   f_{j-1}   f_j
    2hf'(x_j)                                       1        -4        3
    h²f''(x_j)                           -1         4        -5        2
    2h³f'''(x_j)                3       -14        24       -18        5
    h⁴f⁽ⁱᵛ⁾(x_j)     -2        11       -24        26       -14        3

Higher order forward and backward difference representations, although rarely used in
practice, can be obtained by replacing successively more terms in the Taylor series
expansions by difference representations of O(h).
3.5
Central Differences
Consider again the analytic function f(x); the forward and backward Taylor series expansions
about x are respectively

    f(x + h) = f(x) + hf'(x) + (h²/2!)f''(x) + (h³/3!)f'''(x) + ...            (3.33)
    f(x − h) = f(x) − hf'(x) + (h²/2!)f''(x) − (h³/3!)f'''(x) + ...            (3.34)

Subtracting (3.34) from (3.33),

    f(x + h) − f(x − h) = 2hf'(x) + (2h³/3!)f'''(x) + ...                      (3.35)

or, solving for f'(x),

    f'(x) = [f(x + h) − f(x − h)]/(2h) − (h²/6)f'''(x) − ...                   (3.36)

    f'(x) = [f(x + h) − f(x − h)]/(2h) + O(h²)                                 (3.37)

or

    f'(x) ≈ (f_{j+1} − f_{j−1})/(2h)                                           (3.38)

which is accurate to O(h²). Note that the expression is exact for polynomials of degree 2
(parabolas) and lower. Similarly, adding (3.33) and (3.34) gives the central difference
expression for the second derivative,

    f''(x) ≈ (f_{j+1} − 2f_j + f_{j−1})/h²                                     (3.39)

which is also accurate to O(h²).
The central difference expressions of O(h²) for derivatives up to 4th order are tabulated
below. A convenient memory aid for these central difference expressions of O(h²), in terms of
ordinary forward and backward differences, is given by

    dⁿf/dxⁿ ≈ [Δⁿf_{j−n/2} + ∇ⁿf_{j+n/2}] / (2hⁿ),  n even                     (3.40)
    dⁿf/dxⁿ ≈ [Δⁿf_{j−(n+1)/2} + ∇ⁿf_{j+(n+1)/2}] / (2hⁿ),  n odd              (3.41)

Central differences, O(h²):

                     f_{j-2}   f_{j-1}   f_j   f_{j+1}   f_{j+2}
    2hf'(x_j)                  -1         0     1
    h²f''(x_j)                  1        -2     1
    2h³f'''(x_j)     -1         2         0    -2         1
    h⁴f⁽ⁱᵛ⁾(x_j)      1        -4         6    -4         1

3.6
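These formulas are easy to sanity-check on a known function; a sketch using f(x) = sin x at x = 1:

```python
import math

# Central differences at x = 1 with h = 0.1: O(h^2) approximations to
# f'(x) = cos x and f''(x) = -sin x.
h, x = 0.1, 1.0
f = math.sin
d1 = (f(x + h) - f(x - h)) / (2 * h)          # approximates cos(1)
d2 = (f(x + h) - 2*f(x) + f(x - h)) / h**2    # approximates -sin(1)
print(d1, d2)
```

With h = 0.1 the errors are of order h² ≈ 0.01, i.e. roughly three correct decimals.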
Difference expressions for derivatives and polynomials have some distinct relationships
which can be very useful. Thus, if we consider a polynomial of order n, the nth difference
representation taken anywhere along this polynomial will be constant and exactly equal to the
nth derivative regardless of the mesh spacing h (since all of the error terms will be zero).
This knowledge may be used to get some idea of how well a given polynomial will fit data
obtained at a series of equally-spaced points on the independent variable. For example, if the
3rd differences taken at various values of the independent variable are approximately equal
and the 4th differences are close to zero, then a cubic polynomial should fit the data relatively
well.
Example 3.1: Find the fifth backward difference representation, which is of O(h).
Solution
From the recurrence scheme for differences, the fifth backward difference can be expressed as

    ∇⁵f_j = ∇⁴f_j − ∇⁴f_{j−1}
          = (f_j − 4f_{j−1} + 6f_{j−2} − 4f_{j−3} + f_{j−4})
            − (f_{j−1} − 4f_{j−2} + 6f_{j−3} − 4f_{j−4} + f_{j−5})
          = f_j − 5f_{j−1} + 10f_{j−2} − 10f_{j−3} + 5f_{j−4} − f_{j−5}

and

    d⁵f/dx⁵ ≈ ∇⁵f_j / h⁵ + O(h)
Example 3.2: Given the function tabulated at the points j, j + 1 and j + 2 shown in the figure
below, where the spacing between j and j + 1 is h but that between j + 1 and j + 2 is 2h, find a
three-point difference representation for f'_j.

[Figure: f(x) sampled at j, j + 1 and j + 2, with spacing h followed by 2h.]

Solution
(a) Fit a parabola f(x) = Ax² + Bx + C with origin at the point j, so that f_j = C and

    f_{j+1} = Ah² + Bh + C
    f_{j+2} = 9Ah² + 3Bh + C

(since the point j + 2 lies a distance 3h from j).
(b) Differentiating, f'(x) = 2Ax + B, so at the origin f'_j = f'(0) = B.
(c) Subtracting f_j = C,

    f_{j+1} − f_j = Ah² + Bh
    f_{j+2} − f_j = 9Ah² + 3Bh

(d) Eliminating A (multiply the first equation by 9 and subtract the second) gives

    6Bh = 9f_{j+1} − f_{j+2} − 8f_j

(e) Hence

    f'_j = (9f_{j+1} − f_{j+2} − 8f_j)/(6h) = (−8f_j + 9f_{j+1} − f_{j+2})/(6h)
Example 3.3:
Find a central difference representation of O(h²) for d⁵f/dx⁵. From Example 3.1 and its
forward counterpart,

    ∇⁵f_j = f_j − 5f_{j−1} + 10f_{j−2} − 10f_{j−3} + 5f_{j−4} − f_{j−5}
    Δ⁵f_j = Δ⁴f_{j+1} − Δ⁴f_j
          = f_{j+5} − 5f_{j+4} + 10f_{j+3} − 10f_{j+2} + 5f_{j+1} − f_j

Using (3.41) with n = 5,

    d⁵f/dx⁵ ≈ [Δ⁵f_{j−3} + ∇⁵f_{j+3}] / (2h⁵)
            = {(f_{j+2} − 5f_{j+1} + 10f_j − 10f_{j−1} + 5f_{j−2} − f_{j−3})
               + (f_{j+3} − 5f_{j+2} + 10f_{j+1} − 10f_j + 5f_{j−1} − f_{j−2})} / (2h⁵) + O(h²)
            = (f_{j+3} − 4f_{j+2} + 5f_{j+1} − 5f_{j−1} + 4f_{j−2} − f_{j−3}) / (2h⁵) + O(h²)
Example 3.4: The function f(x) is tabulated below at unit intervals (h = 1).

    x       0     1     2     3     4
    f(x)    30    33    28    12    -22

Find f'(0), f'(2), f'(4) and f''(0) using difference representations which are of O(h²).

Solution:
At x = 0, a forward difference must be used since no points are available in the backward
direction:

    f'(0) ≈ [−f(2) + 4f(1) − 3f(0)] / (2·1)
          = [−28 + 4(33) − 3(30)] / 2
          = 7  to O(h²)

At x = 2, a central difference can be used:

    f'(2) ≈ [f(3) − f(1)] / (2·1)
          = (12 − 33)/2
          = −10.5  to O(h²)

At x = 4, a backward difference must be used:

    f'(4) ≈ [3f(4) − 4f(3) + f(2)] / (2·1)
          = [3(−22) − 4(12) + 28] / 2
          = −43  to O(h²)

For the second derivative at x = 0, a forward difference must again be used:

    f''(0) ≈ [2f(0) − 5f(1) + 4f(2) − f(3)] / h²
           = [2(30) − 5(33) + 4(28) − 12] / 1
           = −5  to O(h²)
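The four results can be verified directly (an illustrative sketch):

```python
# O(h^2) difference approximations on the tabulated data, h = 1.
f = [30, 33, 28, 12, -22]                      # f(0), f(1), ..., f(4)

fp0  = (-f[2] + 4*f[1] - 3*f[0]) / 2           # forward:  f'(0)
fp2  = (f[3] - f[1]) / 2                       # central:  f'(2)
fp4  = (3*f[4] - 4*f[3] + f[2]) / 2            # backward: f'(4)
fpp0 = 2*f[0] - 5*f[1] + 4*f[2] - f[3]         # forward:  f''(0), h^2 = 1
print(fp0, fp2, fp4, fpp0)   # 7.0 -10.5 -43.0 -5
```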
4.0
DIFFERENCE EQUATIONS
Definition
Suppose U₁, U₂, U₃, ..., U_n are the terms of a sequence. We now introduce a difference
operator Δ such that the result of operating with Δ on U_r is defined by

    ΔU_r = U_{r+1} − U_r                                  (4.1)

Thus

    ΔU₁ = U₂ − U₁                                         (4.2)
    ΔU₂ = U₃ − U₂                                         (4.3)

and so on. These expressions are usually called the first finite differences of U₁, U₂, U₃, ...,
U_n. In the same way, the second finite differences Δ²U_r are defined by

    Δ²U_r = Δ(ΔU_r) = Δ(U_{r+1} − U_r)
          = (U_{r+2} − U_{r+1}) − (U_{r+1} − U_r)
          = U_{r+2} − 2U_{r+1} + U_r                      (4.4)

Consequently,

    Δ²U₁ = U₃ − 2U₂ + U₁                                  (4.5)
    Δ²U₂ = U₄ − 2U₃ + U₂                                  (4.6)

A difference equation is an equation relating terms of a sequence; its order is
equal to the highest order finite difference term contained in it, e.g.

    ΔU_r + 5U_r = 3                  (is of order 1)
    Δ²U_r − 3ΔU_r + 2U_r = r²        (is of order 2)
    Δ³U_r + rΔU_r + 2rU_r = 0        (is of order 3)

The general mth order linear difference equation therefore has the form

    a_m Δ^m U_r + ... + a₁ ΔU_r + a₀ U_r = f(r)           (4.7)

where f(r) and a₀, a₁, ..., a_m are given functions of r. By analogy with the
terminology of o.d.e.s, (4.7) is said to be of constant coefficient type if a₀, a₁, ..., a_m are
numbers independent of r.
Expanding the differences, the first example

    ΔU_r + 5U_r = 3  becomes  U_{r+1} + 4U_r = 3          (4.8)

and the second

    Δ²U_r − 3ΔU_r + 2U_r = r²  becomes  U_{r+2} − 5U_{r+1} + 6U_r = r²    (4.9)

When written in this form, difference equations are often called recurrence relations.
4.2
Example 4.1:
If U_r = A·4^r, where A is an arbitrary constant, then U_{r+1} = A·4^{r+1} = 4U_r, so that

    U_{r+1} − 4U_r = 0

Hence we see that the elimination of the arbitrary constant A leads to a first order difference
equation.
In general, if the expression for U_r contains m arbitrary constants, it must satisfy an mth
order difference equation.

Example 4.2: If

    U_r = A + B(3)^r                                      (4.10)

then

    U_{r+1} = A + B(3)^{r+1}                              (4.11)
    U_{r+2} = A + B(3)^{r+2}                              (4.12)

From (4.10) and (4.11),

    U_{r+1} = A + 3(U_r − A) = 3U_r − 2A                  (4.13)

and from (4.10) and (4.12),

    U_{r+2} = A + 9(U_r − A) = 9U_r − 8A                  (4.14)

Eliminating A between (4.13) and (4.14),

    U_{r+2} − 4U_{r+1} + 3U_r = 0                         (4.15)

Consequently, (4.10), which contains 2 arbitrary constants, must satisfy a 2nd order difference
equation. It should be clear that the general solution of an mth order linear difference
equation must contain m arbitrary constants.
4.3
1. Consider first the homogeneous first order equation

    U_{r+1} = a_r U_r                                     (4.16)

Then

    U₂ = a₁U₁                                             (4.17)
    U₃ = a₂U₂ = a₂a₁U₁                                    (4.18)
    U₄ = a₃U₃ = a₃a₂a₁U₁                                  (4.19)

and in general

    U_r = a_{r−1}a_{r−2}···a₁U₁                           (4.20)

Hence, if U₁ has a known value, say c, the solution of (4.16) may be written as

    U_r = c ∏_{p=1}^{r−1} a_p                             (4.21)

2. For the inhomogeneous equation

    U_{r+1} = a_r U_r + b_r                               (4.22)

where a_r, b_r are given functions of r, we define the general solution of (4.22) as

    U_r = V_r + W_r                                       (4.23)

where V_r is the general solution of the homogeneous equation

    V_{r+1} = a_r V_r                                     (4.24)

and W_r is any particular solution of

    W_{r+1} = a_r W_r + b_r                               (4.25)

Example: Solve

    U_{r+1} = 4U_r                                        (4.26)

given that U₁ = 2.
We write

    U₂ = 4U₁,  U₃ = 4U₂ = 4²U₁,  U₄ = 4U₃ = 4³U₁

Consequently,

    U_r = U₁ ∏_{p=1}^{r−1} 4 = 2·4^{r−1}                  (4.27)
Now consider the inhomogeneous equation

    U_{r+1} = 2U_r + r,  U₁ = c                           (4.29)

The corresponding homogeneous equation is

    V_{r+1} = 2V_r                                        (4.30)

with

    V₂ = 2V₁,  V₃ = 2V₂,  V₄ = 2V₃ = 2·2·2·V₁,  ...,  V_r = 2^{r−1}V₁

where V₁ is arbitrary. Hence the solution of (4.29) now depends on our ability to find any one
particular solution W_r of (4.29). To do this we adopt a trial solution of the form

    W_r = α + βr                                          (4.31)

where α and β are to be determined such that (4.31) satisfies (4.29). We find

    α + β(r + 1) = 2(α + βr) + r

which gives, comparing coefficients,

    α + β = 2α        (coefficient of r⁰)
    β = 2β + 1        (coefficient of r)

Hence β = −1, α = −1, and

    W_r = −(1 + r)                                        (4.32)

The general solution is therefore U_r = 2^{r−1}V₁ − 1 − r. Applying U₁ = c gives V₁ = c + 2,
so that

    U_r = (c + 2)2^{r−1} − 1 − r                          (4.33)
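As a check, and assuming the recurrence U_{r+1} = 2U_r + r with U₁ = c as reconstructed above, the closed form can be verified for several values of r:

```python
# Verify that U_r = (c + 2) 2^(r-1) - 1 - r satisfies U_{r+1} = 2 U_r + r
# and the initial condition U_1 = c (taking c = 2 for the check).
c = 2
def U(r):
    return (c + 2) * 2**(r - 1) - 1 - r

assert U(1) == c
for r in range(1, 12):
    assert U(r + 1) == 2 * U(r) + r
print("closed form satisfies the recurrence")
```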
(4.33)
5.0
NUMERICAL INTEGRATION
Here we are concerned with obtaining estimates of the value of the definite integral
I = ∫_a^b f(x) dx    (5.1)
where f is a function of one real variable x, and a and b define the lower and upper limits of the domain of integration on the x axis. A formal evaluation of equation (5.1) requires finding a primitive F which, when differentiated, gives the original function f, i.e.
dF/dx = f
In this case,
I = ∫_a^b (dF/dx) dx = F(b) - F(a)    (5.2)
A major problem with integration is finding a primitive F, and for this reason numerical integration is required in order to obtain numerical estimates of the value of I. A useful first step is to develop a visual appreciation of integration. It is often stated that the integral of a function f between the limits x = a and x = b is equal to the area under the graph of y = f(x) on the interval a ≤ x ≤ b.
The value of the integral (5.1) equals the area under the graph y = f(x).
[Figure: graph of y = f(x), with the shaded area between x = a and x = b representing ∫_a^b f(x) dx.]
5.1
TRAPEZOIDAL RULE
For an integrable function f(x) on the interval a ≤ x ≤ b, we divide the interval a ≤ x ≤ b into n equal subintervals, each of width Δx, where
Δx = (b - a)/n
Writing f_j = f(a + jΔx), the composite trapezoidal rule is
I ≈ Δx [ (f_0 + f_n)/2 + Σ_{j=1}^{n-1} f_j ]    (5.3)
Example 5.1: Evaluate
I = ∫_0^π sin x dx
using the trapezoidal rule with n = 3 panels.
Solution: Here a = 0, b = π, so
Δx = (b - a)/n = π/3    (5.4)
and the required function values are

x_i       sin(x_i)
0         0.000000
π/3       0.866025
2π/3      0.866025
π         0.000000

Hence, from (5.3),
I ≈ (Δx/2)[f_0 + f_3 + 2(f_1 + f_2)] = (π/6)(4 × 0.866025) = 1.813799
The trapezoidal rule with end correction is
I ≈ Δx[ (f_0 + f_n)/2 + Σ_{j=1}^{n-1} f_j ] - (Δx^2/12)[f′(b) - f′(a)]
Here f′(x) = cos x, so the end correction is
-(Δx^2/12)[f′(π) - f′(0)] = -((π/3)^2/12)(cos π - cos 0) = ((π/3)^2/12)(2) = 0.182771
Adding the end correction to the previously obtained trapezoidal rule value yields
I ≈ 1.813799 + 0.182771 = 1.996570
This value is indeed closer to the analytical solution (I = 2) than the uncorrected result.
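The numbers in Example 5.1 can be reproduced in a few lines. A minimal sketch (function names are ours); the end correction uses the derivative f′(x) = cos x:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule (5.3) with n equal panels."""
    dx = (b - a) / n
    interior = sum(f(a + j * dx) for j in range(1, n))
    return dx * (0.5 * (f(a) + f(b)) + interior)

a, b, n = 0.0, math.pi, 3
T = trapezoid(math.sin, a, b, n)                           # about 1.813799
dx = (b - a) / n
# end correction -(dx^2/12) * [f'(b) - f'(a)] with f'(x) = cos x
correction = -(dx**2 / 12) * (math.cos(b) - math.cos(a))   # about 0.182770
print(T, T + correction)   # corrected estimate close to the exact I = 2
```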
5.2
SIMPSON'S RULE
Simpson's rule is a numerical integration technique based on the use of parabolic arcs to approximate f(x), instead of the straight lines employed as the interpolating polynomials in the trapezoidal rule. The method is given as
I ≈ (Δx/3)[ f_0 + f_n + 4 Σ_{j=1, j odd}^{n-1} f_j + 2 Σ_{j=2, j even}^{n-2} f_j ]    (5.5)
Equation (5.5) is Simpson's one-third rule for the entire interval. It is a fourth order method. Simpson's 3/8th rule is obtained by replacing the factor Δx/3 in (5.5) with 3Δx/8, from which we obtain
I ≈ (3Δx/8)[ f_0 + f_n + 4 Σ_{j=1, j odd}^{n-1} f_j + 2 Σ_{j=2, j even}^{n-2} f_j ]    (5.6)
With the end correction taken into consideration, equation (5.5) becomes
I ≈ (Δx/15)[ 14( (f_0 + f_n)/2 + Σ_{j=2, j even}^{n-2} f_j ) + 16 Σ_{j=1, j odd}^{n-1} f_j + Δx( f′(a) - f′(b) ) ]    (5.7)
Example 5.2: Using the same integrand as in Example 5.1, evaluate I using Simpson's 1/3rd and 3/8th rules with 3 panels, and compare the results.
Solution:
I = ∫_0^π f(x) dx = ∫_0^π sin x dx,    Δx = π/3
Simpson's 1/3rd rule (5.5) gives
I ≈ (Δx/3)[f_0 + 4f_1 + 2f_2 + f_3] = (π/9)(6 × 0.866025) = 1.814529
while Simpson's 3/8th rule (5.6) gives
I ≈ (3Δx/8)[f_0 + 4f_1 + 2f_2 + f_3] = (π/8)(6 × 0.866025) = 2.041344
The 3/8th rule value is considerably closer to the analytical result I = 2. This is to be expected: the 1/3rd rule strictly requires an even number of panels, so with n = 3 it is being applied outside its range of validity, while the 3/8th rule is designed for three panels.
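A sketch reproducing Example 5.2 (variable names are ours). Because f_1 = f_2 here, the 1,4,2,1 coefficient pattern used above and the standard 3/8-rule weights 1,3,3,1 give the same sum; minor differences from the digits quoted in the text are rounding artifacts:

```python
import math

dx = math.pi / 3
f = [math.sin(j * dx) for j in range(4)]   # f0..f3 at 0, pi/3, 2pi/3, pi

# Simpson's 1/3rd rule applied with the 1,4,2,1 weight pattern:
I_13 = (dx / 3) * (f[0] + 4 * f[1] + 2 * f[2] + f[3])

# Simpson's 3/8th rule (standard weights 1,3,3,1 over three panels):
I_38 = (3 * dx / 8) * (f[0] + 3 * f[1] + 3 * f[2] + f[3])

print(I_13, I_38)   # the 3/8th value lies much closer to the exact I = 2
```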
5.3
ROMBERG INTEGRATION
This powerful and efficient integration technique is based on the use of the trapezoidal rule combined with Richardson extrapolation. In order to describe the algorithm in detail, we adopt a new notation. The trapezoidal rule estimates of the integral will be denoted as
T_{1,k} = Δx[ (f(a) + f(b))/2 + Σ_{j=1}^{l} f(a + jΔx) ]    (5.8)
where Δx = (b - a)/2^{k-1} and l = 2^{k-1} - 1. The number of panels, n, involved in T_{1,k} is 2^{k-1}, so that
T_{1,1} = ((b - a)/2)[ f(a) + f(b) ]
T_{1,2} = ((b - a)/4)[ f(a) + f(b) + 2f(a + (b - a)/2) ]
T_{1,3} = ((b - a)/8)[ f(a) + f(b) + 2f(a + (b - a)/4) + 2f(a + (b - a)/2) + 2f(a + 3(b - a)/4) ]
etc.
NOTE THAT:
T_{1,2} = T_{1,1}/2 + ((b - a)/2) f(a + (b - a)/2)
T_{1,3} = T_{1,2}/2 + ((b - a)/4)[ f(a + (b - a)/4) + f(a + 3(b - a)/4) ]
etc., so each halving of the step re-uses all previously computed function values.
The extrapolation is carried out according to
T_{l,k} = [ 4^{l-1} T_{l-1,k+1} - T_{l-1,k} ] / (4^{l-1} - 1),    l ≥ 2    (5.9)
For l = 2,
T_{2,1} = (1/3)(4T_{1,2} - T_{1,1})
T_{2,2} = (1/3)(4T_{1,3} - T_{1,2})
and for l = 3,
T_{3,1} = (1/15)(16T_{2,2} - T_{2,1})
The computations are conveniently arranged in a table:

T_{1,1}
T_{1,2}    T_{2,1}
T_{1,3}    T_{2,2}    T_{3,1}
T_{1,4}    T_{2,3}    T_{3,2}    T_{4,1}
...
T_{1,l}    T_{2,l-1}    T_{3,l-2}    ...    T_{l-1,2}    T_{l,1}
Example 5.3: Evaluate
I = ∫_0^8 [ (5x^4)/8 - 4x^3 + 2x + 1 ] dx
The analytical value of the integral is I = 72. Romberg extrapolation should yield this exact answer in only a few extrapolations.
Solution: Here
f(x) = (5x^4)/8 - 4x^3 + 2x + 1
so
f(0) = 1
f(8) = 2560 - 2048 + 16 + 1 = 529
and
b - a = 8 - 0 = 8
The trapezoidal approximations with 1 and 2 panels are
T_{1,1} = (8/2)(1 + 529) = 2120
T_{1,2} = 2120/2 + (8/2) f(4) = 1060 + 4(160 - 256 + 8 + 1) = 712
The first extrapolation then gives
T_{2,1} = (1/3)(4 × 712 - 2120) = 242 2/3
Halving the step once more,
T_{1,3} = 712/2 + (8/4)[ f(2) + f(6) ] = 356 + 2(-17 - 41) = 240
T_{2,2} = (1/3)(4 × 240 - 712) = 82 2/3
T_{3,1} = (1/15)(16 × 82 2/3 - 242 2/3) = 72    (which is the exact answer)
At this stage the table reads

2120
712        242 2/3
240        82 2/3     72
The best available trapezoidal rule value of 240, using four panels, is still very far from correct, and the greatly accelerated convergence along the diagonal should be apparent. In general, of course, we would not know that the exact answer had been obtained, so another line of the table would have to be computed. After this computation, the table would be

2120
712        242 2/3
240        82 2/3     72
114 1/2    72 2/3     72     72
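The whole Romberg table can be generated mechanically from (5.8) and (5.9). A compact sketch (function and variable names are ours; T[l][k] below holds T_{l+1,k+1} in the notation of the text), applied to the integrand of Example 5.3:

```python
def romberg(f, a, b, kmax):
    """Build the Romberg table by halving the trapezoid step (5.8)
    and Richardson-extrapolating via (5.9)."""
    T = [[0.0] * kmax for _ in range(kmax)]
    h = b - a
    T[0][0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, kmax):
        h /= 2
        # T_{1,k+1} = T_{1,k}/2 + h * (sum of f at the new midpoints)
        new_pts = sum(f(a + (2 * j + 1) * h) for j in range(2**(k - 1)))
        T[0][k] = 0.5 * T[0][k - 1] + h * new_pts
    for l in range(1, kmax):
        for k in range(kmax - l):
            T[l][k] = (4**l * T[l - 1][k + 1] - T[l - 1][k]) / (4**l - 1)
    return T

f = lambda x: 5 * x**4 / 8 - 4 * x**3 + 2 * x + 1
T = romberg(f, 0.0, 8.0, 4)
print(T[0])      # trapezoid column: 2120.0, 712.0, 240.0, 114.5
print(T[3][0])   # fully extrapolated value, ~72 (the exact integral)
```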
6.0
ORDINARY DIFFERENTIAL EQUATIONS
All problems involving ordinary differential equations fall into two categories: initial value problems and boundary value problems. Initial value problems are those for which conditions are specified at only one value of the independent variable. These conditions are termed initial conditions. A typical IVP might be of the form
A d^2y/dt^2 + B dy/dt + Cy = g(t),
y(0) = y_0,    dy/dt(0) = V_0
On the other hand, boundary value problems are those for which conditions are specified at two values of the independent variable. A typical BVP might be of the form
d^2y/dx^2 + D dy/dx + Ey = h(x),
y(0) = y_0,    y(L) = Y_L
The problem is a boundary value problem if any conditions are specified at two different values of the independent variable. Thus,
d^4y/dx^4 + Ay = f(x),
y(0) = y_0,    dy/dx(0) = W_0,    d^2y/dx^2(0) = V_0,    y(L) = y_L
is a boundary value problem.
6.1
Any IVP can be represented as a set of one or more coupled first order ordinary differential equations, each with an initial condition. For example, the simple harmonic oscillator described by
A d^2y/dt^2 + B dy/dt + Cy = g(t)    (6.1)
y(0) = y_0    (6.2)
dy/dt(0) = V_0    (6.3)
can be rewritten by introducing the new variable
z = dy/dt    (6.4)
so that
A dz/dt + Bz + Cy = g(t)    (6.5)
With some rearrangement, the problem specified by equations (6.1)-(6.3) can now be written as
dy/dt = z    (6.6)
dz/dt = -(B/A)z - (C/A)y + g(t)/A    (6.7)
y(0) = y_0    (6.8)
z(0) = V_0    (6.9)
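The reduction (6.6)-(6.7) is exactly what one codes up before handing the problem to any first order ODE solver. A minimal sketch (function and parameter names are ours):

```python
# The oscillator A y'' + B y' + C y = g(t) as two coupled first order
# equations in the state (y, z), with z = dy/dt, following (6.6)-(6.7).

def oscillator_rhs(A, B, C, g):
    def rhs(t, state):
        y, z = state
        dydt = z                            # equation (6.6)
        dzdt = (g(t) - B * z - C * y) / A   # equation (6.7)
        return (dydt, dzdt)
    return rhs

# Undamped, unforced oscillator y'' + y = 0 at the state y = 1, z = 0:
rhs = oscillator_rhs(A=1.0, B=0.0, C=1.0, g=lambda t: 0.0)
print(rhs(0.0, (1.0, 0.0)))   # (0.0, -1.0): y' = z, z' = -y here
```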
Any nth order differential equation can similarly be reduced to a system of n first order differential equations. In general, such a system takes the form
dy_1/dt = f_1(y_1, y_2, ..., y_n, t)
dy_2/dt = f_2(y_1, y_2, ..., y_n, t)
...
dy_n/dt = f_n(y_1, y_2, ..., y_n, t)    (6.10)
with initial conditions
y_1(0) = y_10,  y_2(0) = y_20,  ...,  y_n(0) = y_n0    (6.11)
Since any IVP can be expressed as a set of first order ordinary differential equations, our concern now will be to consider numerical methods for the solution of first order ordinary differential equations. We will be concerned with two classes of methods. The first of these consists of formulas of the Runge-Kutta type. In these formulas, the desired solution y_{j+1} is obtained in terms of y_j, f(y_j, t_j), and f(y, t) evaluated for various estimated values of y between t_j and t_{j+1}. These methods are self-starting because the solution is carried directly from t_j to t_{j+1} without requiring values of y or f(y, t) for t < t_j.
[Figure: the solution is known up to the point (t_j, y_j); a Runge-Kutta step advances it directly to t_{j+1}.]
A second class of methods consists of formulas of the multistep type. These formulas, in general, require information for t < t_j (see the figure below).
[Figure: a multistep formula for y_{j+1} uses the previously computed values y_j, y_{j-1}, y_{j-2}, y_{j-3} at t_j, t_{j-1}, t_{j-2}, t_{j-3}.]
The solution for y_{j+1} might require the value of y_j and values of f(y, t) at each of the points t_j, t_{j-1}, t_{j-2} and t_{j-3}. These multistep formulas are obviously not self-starting.
6.2
EULER METHOD
Consider the first order equation
dy/dt = f(y, t)    (6.12)
y(0) = y_0
Approximating the derivative dy/dt by a simple forward difference, we obtain
(y_{j+1} - y_j)/Δt = f(y_j, t_j)    (6.13)
which rearranges into the recurrence formula
y_{j+1} = y_j + Δt f(y_j, t_j)    (6.14)
Example 6.1: Starting at t = 0 and using the step size Δt = 0.1, solve the ordinary differential equation
dy/dt = -y^2,    y(0) = 1    (6.15)
Solution: The problem can be solved numerically by applying the Euler recurrence formula (6.14):
y_{j+1} = y_j - Δt y_j^2
At t = 0.1,
y_1 = 1 - 0.1(1)^2 = 0.9
The exact solution of (6.15) is
y_exact = 1/(1 + t)    (6.16)
so that y_exact(0.1) = 1/1.1 = 0.9090909.
At t = 0.2,
y_2 = 0.9 - 0.1(0.9)^2 = 0.819
while y_exact(0.2) = 1/1.2 = 0.833333.
Although this numerical method is very simple, it is obviously not extremely accurate. However, because it is so simple, it is convenient to use as an introduction to numerical techniques for ordinary differential equations.
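Euler's recurrence (6.14) is a one-line loop. A minimal sketch reproducing Example 6.1 (function names are ours):

```python
def euler(f, y0, t0, dt, nsteps):
    """Euler recurrence (6.14): y_{j+1} = y_j + dt * f(y_j, t_j)."""
    y, t = y0, t0
    for _ in range(nsteps):
        y = y + dt * f(y, t)
        t = t + dt
    return y

f = lambda y, t: -y**2              # the ODE of Example 6.1
print(euler(f, 1.0, 0.0, 0.1, 1))   # ~0.9   (exact: 1/1.1 = 0.9090909)
print(euler(f, 1.0, 0.0, 0.1, 2))   # ~0.819 (exact: 1/1.2 = 0.8333333)
```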
6.3
RUNGE-KUTTA METHODS
These formulas are among the most widely used formulas for the numerical solution of ordinary differential equations. Their advantages include being self-starting, easy to program, and capable of good accuracy. The general n-stage Runge-Kutta formula has the form
k_i = f(x + c_i h, y + h Σ_{j=1}^{n} a_{ij} k_j),    i = 1, 2, ..., n    (6.17)
and the solution is advanced by
y → y + h Σ_{i=1}^{n} b_i k_i    (6.18)
The coefficients of the common procedures are listed in the tables below, laid out as

c_1 | a_11  a_12  ...  a_1n
c_2 | a_21  a_22  ...  a_2n
 .  |  .     .          .
c_n | a_n1  a_n2  ...  a_nn
    | b_1   b_2   ...  b_n

(a) Euler method, e = O(h):

0 | 0
  | 1

(b) Improved Euler method, e = O(h^2):

0 | 0    0
1 | 1    0
  | 1/2  1/2

(c) Modified Euler method, e = O(h^2):

0   | 0    0
1/2 | 1/2  0
    | 0    1

(d) Third order Runge-Kutta method, e = O(h^3):

0   | 0    0    0
1/2 | 1/2  0    0
1   | -1   2    0
    | 1/6  2/3  1/6

(e) Fourth order Runge-Kutta method, e = O(h^4):

0   | 0    0    0    0
1/2 | 1/2  0    0    0
1/2 | 0    1/2  0    0
1   | 0    0    1    0
    | 1/6  1/3  1/3  1/6
Employing equations (6.17) and (6.18), we can write out the formulas corresponding to tableaus (a) to (e), e.g.

(a) Euler method:
k_1 = f(x + 0·h, y + h·0·k_1) = f(x, y)
y → y + h(1·k_1) = y + h f(x, y),
i.e.
y_{i+1} = y_i + h f(x_i, y_i)

(b) Improved Euler method:
k_1 = f(x, y)
k_2 = f(x + h, y + k_1 h)
y → y + h( (1/2)k_1 + (1/2)k_2 ) = y + (h/2)(k_1 + k_2)

(c) Modified Euler method:
k_1 = f(x, y)
k_2 = f(x + h/2, y + k_1 h/2)
y → y + h( 0·k_1 + 1·k_2 ) = y + h k_2

(e) Fourth order Runge-Kutta method:
k_1 = f(x, y)
k_2 = f(x + h/2, y + h k_1/2)
k_3 = f(x + h/2, y + h k_2/2)
k_4 = f(x + h, y + h k_3)
y → y + (h/6)(k_1 + 2k_2 + 2k_3 + k_4)
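As a sketch (function names are ours), the fourth order Runge-Kutta step can be applied to the test problem of Example 6.1, where its accuracy advantage over the Euler method is immediately visible:

```python
def rk4_step(f, x, y, h):
    """One step of the classical fourth order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Same test problem as Example 6.1: y' = -y^2, y(0) = 1, exact y = 1/(1+x).
f = lambda x, y: -y**2
y, x, h = 1.0, 0.0, 0.1
for _ in range(2):
    y = rk4_step(f, x, y, h)
    x += h
print(y)   # very close to the exact value 1/1.2 = 0.8333333...
```

With the same step size Δt = 0.1, the Euler value at t = 0.2 was 0.819; the Runge-Kutta value agrees with the exact solution to several more digits.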