
Chapter 3

Finite Difference
Methods and
Interpolation
Finite Differences
• Numerical technique that lends itself to the
numerical solution of large sets of simultaneous
differential equations that lack analytical
solutions.
• Need to develop the systematic terminology used
in the calculus of finite differences.
• Finite differences also find application in the
derivation of interpolating polynomials.
Symbolic Operators
• Definition of the derivative:

  f′(x₀) = df(x)/dx |_{x₀} = lim_{x→x₀} [f(x) − f(x₀)]/(x − x₀)

• In the calculus of finite differences, x − x₀ does not approach zero but remains finite, and is represented by the quantity h:

  h = x − x₀
Symbolic Operators
• The derivative may therefore be approximated:

  f′(x₀) ≈ [f(x₀ + h) − f(x₀)]/h

• The Mean Value Theorem states: let f(x) be continuous in a ≤ x ≤ b and differentiable in the range a < x < b; then there exists at least one ξ, a < ξ < b, for which

  f′(ξ) = [f(b) − f(a)]/(b − a)

• The Mean Value Theorem forms the basis for both differential and finite difference calculus.
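The forward-difference approximation above can be checked numerically. A minimal sketch (the test function sin x and the step sizes are my choices, not from the text):

```python
import math

def forward_diff(f, x0, h):
    """First-order forward-difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

x0 = 1.0
exact = math.cos(x0)               # d/dx sin x = cos x
for h in (0.1, 0.01, 0.001):
    approx = forward_diff(math.sin, x0, h)
    print(h, abs(approx - exact))  # error shrinks roughly like (h/2)|f''|
```

As h shrinks, the error falls linearly with h, consistent with the finite step replacing the limit.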
Symbolic Operators
• A function f(x) continuous and differentiable in the interval [x₀, x] can be represented by a Taylor series:

  f(x) = f(x₀) + (x − x₀) f′(x₀) + [(x − x₀)²/2!] f″(x₀) + [(x − x₀)³/3!] f‴(x₀) + … + [(x − x₀)ⁿ/n!] f⁽ⁿ⁾(x₀) + Rₙ(x)

• Rₙ includes the terms from order n+1 to ∞ (i.e., the truncation error).
Symbolic Operators
• The MVT can be used to show that a point ξ exists in the interval [x₀, x] such that:

  Rₙ(x) = [(x − x₀)ⁿ⁺¹/(n + 1)!] f⁽ⁿ⁺¹⁾(ξ)

• The remainder is of order n+1 because it is a function of the term (x − x₀)ⁿ⁺¹ and the (n+1)th derivative → O(hⁿ⁺¹)
Linear Symbolic Operators
Operator   Definition
D          differential
I          integral
E          shift
∆          forward difference
∇          backward difference
δ          central difference
µ          average
Linear Symbolic Operators

• Differential operator, D:   Dy(x) = dy(x)/dx = y′(x)

• Integral operator, I:   Iy(x) = ∫ₓ^{x+h} y(x) dx

  I = D⁻¹
Linear Symbolic Operators

Ey ( x ) = y ( x + h )
• Shift Operator, E

E y( x ) = y( x − h )
-1

E y( x ) = y( x + nh )
n
Linear Symbolic Operators
• Express the shift operator in terms of the differential operator by expanding y(x + h) into a Taylor series:

  y(x + h) = y(x) + (h/1!) y′(x) + (h²/2!) y″(x) + (h³/3!) y‴(x) + …
  y(x + h) = y(x) + (h/1!) Dy(x) + (h²/2!) D²y(x) + (h³/3!) D³y(x) + …
  y(x + h) = [1 + hD/1! + h²D²/2! + h³D³/3! + …] y(x)

  The content of the brackets is the Taylor series expansion of e^{hD}:

  y(x + h) = e^{hD} y(x)

  Comparing with the definition of the shift operator:

  E = e^{hD}
Linear Symbolic Operators
• The inverse of the shift operator in terms of the differential operator:

  y(x − h) = y(x) − (h/1!) y′(x) + (h²/2!) y″(x) − (h³/3!) y‴(x) + …
  y(x − h) = [1 − hD/1! + h²D²/2! − h³D³/3! + …] y(x)
  y(x − h) = e^{−hD} y(x)

  The content of the brackets is the Taylor series expansion of e^{−hD}:

  E⁻¹ = e^{−hD}
Backward Finite Difference (BFD)
• Consider the equivalent sets of values:

  y_{i−3}   y_{i−2}   y_{i−1}   yᵢ   y_{i+1}   y_{i+2}   y_{i+3}
  y(x − 3h)   y(x − 2h)   y(x − h)   y(x)   y(x + h)   y(x + 2h)   y(x + 3h)

• The first backward finite difference of y at i is defined:

  ∇yᵢ = yᵢ − y_{i−1}
  ∇y(x) = y(x) − y(x − h)
Backward Finite Difference
The second backward finite difference:

  ∇²yᵢ = ∇(∇yᵢ) = ∇(yᵢ − y_{i−1})
       = ∇yᵢ − ∇y_{i−1}
       = (yᵢ − y_{i−1}) − (y_{i−1} − y_{i−2})
       = yᵢ − 2y_{i−1} + y_{i−2}

  ∇²y(x) = y(x) − 2y(x − h) + y(x − 2h)
Backward Finite Difference

The third backward finite difference:

  ∇³yᵢ = ∇(∇²yᵢ) = ∇(yᵢ − 2y_{i−1} + y_{i−2})
       = ∇yᵢ − 2∇y_{i−1} + ∇y_{i−2}
       = (yᵢ − y_{i−1}) − 2(y_{i−1} − y_{i−2}) + (y_{i−2} − y_{i−3})
       = yᵢ − 3y_{i−1} + 3y_{i−2} − y_{i−3}
Backward Finite Difference

The nth backward finite difference:

  ∇ⁿyᵢ = Σ_{m=0}^{n} (−1)^m [n!/((n − m)! m!)] y_{i−m}
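The summation formula can be verified against repeated application of the first backward difference. A sketch (the helper names and the sample table y_k = k³ are my choices):

```python
from math import comb

def nabla_n(y, i, n):
    """nth backward difference at index i via the binomial-sum formula."""
    return sum((-1) ** m * comb(n, m) * y[i - m] for m in range(n + 1))

def nabla_iterated(y, i, n):
    """Same quantity by applying the first backward difference n times."""
    if n == 0:
        return y[i]
    return nabla_iterated(y, i, n - 1) - nabla_iterated(y, i - 1, n - 1)

y = [k ** 3 for k in range(10)]   # sample table y_k = k^3
print(nabla_n(y, 9, 3))           # third difference of k^3 is constant: 3! = 6
```

The binomial coefficients n!/((n − m)! m!) are exactly comb(n, m), so the two routines agree at every valid index.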
Backward Finite Difference
Establish the relationship between the backward difference operator and the differential operator (i.e., ∇ = f(D)):

  ∇y(x) = y(x) − y(x − h)
        = y(x) − e^{−hD} y(x)
        = (1 − e^{−hD}) y(x)

  ∇ = 1 − e^{−hD}           ∇ = hD − h²D²/2! + h³D³/3! − …
  ∇ⁿ = (1 − e^{−hD})ⁿ        ∇² = h²D² − h³D³ + (7/12) h⁴D⁴ − …
                             ∇³ = h³D³ − (3/2) h⁴D⁴ + (5/4) h⁵D⁵ − …
Relate ∇ to D

∇ = 1 − e^{−hD}                          hD = ∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + …

∇ = hD − h²D²/2 + h³D³/6 − …             h²D² = ∇² + ∇³ + (11/12) ∇⁴ + (5/6) ∇⁵ + …

∇² = h²D² − h³D³ + (7/12) h⁴D⁴ − …       h³D³ = ∇³ + (3/2) ∇⁴ + (7/4) ∇⁵ + …

∇³ = h³D³ − (3/2) h⁴D⁴ + (5/4) h⁵D⁵ − …

∇ⁿ = (1 − e^{−hD})ⁿ                       hⁿDⁿ = (∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + …)ⁿ
Forward Finite Difference (FFD)
• The first forward finite difference:

  ∆yᵢ = y_{i+1} − yᵢ
  ∆y(x) = y(x + h) − y(x)

• The second forward finite difference:

  ∆²yᵢ = ∆(∆yᵢ) = ∆(y_{i+1} − yᵢ)
       = ∆y_{i+1} − ∆yᵢ
       = (y_{i+2} − y_{i+1}) − (y_{i+1} − yᵢ)
       = y_{i+2} − 2y_{i+1} + yᵢ

  ∆²y(x) = y(x + 2h) − 2y(x + h) + y(x)
Relate ∆ to D

∆ = e^{hD} − 1                           hD = ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + …

∆ = hD + h²D²/2 + h³D³/6 + …             h²D² = ∆² − ∆³ + (11/12) ∆⁴ − (5/6) ∆⁵ + …

∆² = h²D² + h³D³ + (7/12) h⁴D⁴ + …       h³D³ = ∆³ − (3/2) ∆⁴ + (7/4) ∆⁵ − …

∆³ = h³D³ + (3/2) h⁴D⁴ + (5/4) h⁵D⁵ + …

∆ⁿ = (e^{hD} − 1)ⁿ                        hⁿDⁿ = (∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + …)ⁿ
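These operator series can be tested numerically: truncating hD = ∆ − ∆²/2 + ∆³/3 − … after more terms gives successively higher-order forward-difference estimates of the derivative. A sketch (the function, point, and step are my choices):

```python
import math

def delta_powers(f, x, h, n):
    """Return [Δ¹f(x), Δ²f(x), ..., Δⁿf(x)] from the forward table."""
    table = [f(x + k * h) for k in range(n + 1)]
    diffs = []
    for _ in range(n):
        table = [table[k + 1] - table[k] for k in range(len(table) - 1)]
        diffs.append(table[0])
    return diffs

def deriv_estimate(f, x, h, n):
    """f'(x) ≈ (1/h) Σ_{k=1}^{n} (-1)^(k+1) Δᵏf(x)/k (truncated series)."""
    d = delta_powers(f, x, h, n)
    return sum((-1) ** (k + 1) * d[k - 1] / k for k in range(1, n + 1)) / h

for n in (1, 2, 3):
    err = abs(deriv_estimate(math.sin, 1.0, 0.1, n) - math.cos(1.0))
    print(n, err)
```

Each extra ∆ term raises the order of accuracy by one, so the error drops sharply with n even at fixed h.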
Central Finite Difference (CFD)

Consider the equivalent sets of values:

y i −1½ y i −1 y i −½ yi y i +½ y i +1 y i +1½
y( x − 1½ h ) y( x − h ) y( x − ½ h ) y( x ) y( x + ½ h ) y( x + h ) y( x + 1½ h )
Central Finite Difference
• The first central finite difference:

  δyᵢ = y_{i+½} − y_{i−½}
  δy(x) = y(x + ½h) − y(x − ½h)

• The second central finite difference:

  δ²yᵢ = δ(δyᵢ) = δ(y_{i+½} − y_{i−½})
       = δy_{i+½} − δy_{i−½}
       = (y_{i+1} − yᵢ) − (yᵢ − y_{i−1})
       = y_{i+1} − 2yᵢ + y_{i−1}

  δ²y(x) = y(x + h) − 2y(x) + y(x − h)
Central Finite Difference
The nth central finite difference:

  δⁿyᵢ = Σ_{m=0}^{n} (−1)^m [n!/((n − m)! m!)] y_{i+½n−m}

• Odd order → interval midpoints
• Even order → interval endpoints
∴ twice the number of points is necessary to use the CFD
Central Finite Difference

Define the average operator, µ:

  µ = ½ [E^{½} + E^{−½}]

It shifts the operand half an interval left (+) and right (−) of the pivot, averaging the evaluations at each:

  µδyᵢ = ½ [E^{½} δyᵢ + E^{−½} δyᵢ]
       = ½ [δy_{i+½} + δy_{i−½}]
       = ½ [(y_{i+1} − yᵢ) + (yᵢ − y_{i−1})]
       = ½ (y_{i+1} − y_{i−1})
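The averaged central difference ½(y_{i+1} − y_{i−1}) underlies the familiar centered derivative estimate, which is one order more accurate than the one-sided form. A sketch (test function and step are my choices):

```python
import math

def central_deriv(f, x, h):
    """(μδ/h) y: the centered estimate (y(x+h) - y(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def forward_deriv(f, x, h):
    return (f(x + h) - f(x)) / h

x, h = 1.0, 0.01
exact = math.cos(x)
print(abs(forward_deriv(math.sin, x, h) - exact))  # O(h) error
print(abs(central_deriv(math.sin, x, h) - exact))  # O(h^2) error
```

The even-order error terms cancel in the symmetric form, which is why the centered estimate is O(h²).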
Relate δ to D

First averaged central difference:

  µδy(x) = ½ [y(x + h) − y(x − h)]
         = ½ [e^{hD} y(x) − e^{−hD} y(x)]
         = ½ [e^{hD} − e^{−hD}] · y(x)

  µδ = sinh(hD)
Relate δ to D

Expand in a Taylor series:

  sinh(hD) = hD + (hD)³/3! + (hD)⁵/5! + (hD)⁷/7! + …
           = hD + h³D³/6 + h⁵D⁵/120 + h⁷D⁷/5040 + …
Relate δ to D

Second central difference:

  δ²y(x) = y(x + h) − 2y(x) + y(x − h)
         = e^{hD} y(x) − 2y(x) + e^{−hD} y(x)
  δ²y(x) = [e^{hD} − 2 + e^{−hD}] · y(x)

  δ² = 2[cosh(hD) − 1] = E + E⁻¹ − 2
Relate δ to D
Expand in a Taylor series (and, by analogy, expand the higher-order central finite difference terms):

  δ² = h²D² + h⁴D⁴/12 + h⁶D⁶/360 + h⁸D⁸/20160 + …
  µδ³ = h³D³ + h⁵D⁵/4 + h⁷D⁷/40 + …
  δ⁴ = h⁴D⁴ + h⁶D⁶/6 + h⁸D⁸/80 + …
Relate δ to µ
Start with the definitions of µ and δ:

  µ = ½ [E^{½} + E^{−½}]
  µ² = ¼ [E + E⁻¹ + 2]

  δ² = E + E⁻¹ − 2  →  δ² + 2 = E + E⁻¹

  µ² = ¼δ² + 1
Relate δ to µ
• Recall:

  µδ = sinh(hD)  →  hD = sinh⁻¹(µδ)

• Expand the inverse hyperbolic sine of the argument:

  sinh⁻¹(µδ) = µδ − (µδ)³/6 + 3(µδ)⁵/40 − …

• Substitute:

  hD = µδ − (µδ)³/6 + 3(µδ)⁵/40 − …
Relate δ to µ

Using the result µ² = ¼δ² + 1, eliminate even powers of µ to obtain hD and the higher orders:

  hD = µδ − (µδ)³/6 + 3(µδ)⁵/40 − …
     = µ [δ − δ³/6 + δ⁵/30 − …]

  h²D² = δ² − δ⁴/12 + δ⁶/90 − …
  h³D³ = µ [δ³ − δ⁵/4 + 7δ⁷/120 − …]
  h⁴D⁴ = δ⁴ − δ⁶/6 + 7δ⁸/240 − …
Relate µδ to D

µδ = hD + h³D³/6 + h⁵D⁵/120 + h⁷D⁷/5040 + …
δ² = h²D² + h⁴D⁴/12 + h⁶D⁶/360 + h⁸D⁸/20160 + …
µδ³ = h³D³ + h⁵D⁵/4 + h⁷D⁷/40 + …
δ⁴ = h⁴D⁴ + h⁶D⁶/6 + h⁸D⁸/80 + …
Relate µδ to D
hD = µ [δ − δ³/6 + δ⁵/30 − δ⁷/140 + …]
h²D² = δ² − δ⁴/12 + δ⁶/90 − …
h³D³ = µ [δ³ − δ⁵/4 + 7δ⁷/120 − …]
h⁴D⁴ = δ⁴ − δ⁶/6 + 7δ⁸/240 − …
Difference Equations and Solution
• Forward, backward, and central FD equations are
used in the solution of differential equations.
• FDs transform differential equations into
difference equations:
f(y_k, y_{k+1}, …, y_{k+n}) = 0
– Eqns may be linear/nonlinear, may be
homogeneous/non, and may have constant or variable
coefficients
– We will focus on linear, homogeneous systems with
constant coefficients
Order of a Difference Equation
• The order of a difference equation is the
difference between the highest and lowest
subscript of the dependent variable:
  order = (k + n) − k = n

• 2nd-order homogeneous linear ordinary differential equation:

  y″ + 3y′ − 4y = 0

• 2nd-order homogeneous linear difference equation:

  y_{k+2} + 3y_{k+1} − 4y_k = 0
Solution of a Differential Equation
  y″ + 3y′ − 4y = 0
  D²y + 3Dy − 4y = 0
  D² + 3D − 4 = 0        (characteristic equation)
  (D + 4)(D − 1) = 0
  λ₁ = −4, λ₂ = 1        (eigenvalues)

  y = C₁e^{λ₁x} + C₂e^{λ₂x}
  y = C₁e^{−4x} + C₂eˣ
Solution of a Difference Equation
  y_{k+2} + 3y_{k+1} − 4y_k = 0
  E²y_k + 3Ey_k − 4y_k = 0
  E² + 3E − 4 = 0
  (E + 4)(E − 1) = 0
  λ₁ = −4, λ₂ = 1        (real & distinct roots)

  real & distinct roots:  y_k = C₁λ₁ᵏ + C₂λ₂ᵏ  →  y_k = C₁(−4)ᵏ + C₂
  real & repeated roots:  y_k = (C₁ + C₂k)λᵏ
  two complex roots:      y_k = C₁(α + βi)ᵏ + C₂(α − βi)ᵏ
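The closed-form solution can be checked by marching the recurrence directly. A sketch (the initial conditions are my choices):

```python
def march(y0, y1, steps):
    """March y_{k+2} = -3 y_{k+1} + 4 y_k forward from two starting values."""
    ys = [y0, y1]
    for _ in range(steps):
        ys.append(-3 * ys[-1] + 4 * ys[-2])
    return ys

y0, y1 = 2.0, -3.0
# Fit the constants: y0 = C1 + C2 and y1 = -4 C1 + C2  →  C1 = (y0 - y1)/5.
C1 = (y0 - y1) / 5
C2 = y0 - C1
ys = march(y0, y1, 8)
closed = [C1 * (-4) ** k + C2 for k in range(10)]
print(ys == closed)   # the closed form reproduces the march exactly
```

Because |λ₁| = 4 > 1, the marched values grow without bound, illustrating the instability criterion discussed next.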
Difference Equation Solution Forms

Solution is of the form:  y_k = f(k, λ)

  k is the marching counter
  λ are the eigenvalues of the characteristic equation
Difference Equation Solution Forms
real & distinct roots:  y_k = C₁λ₁ᵏ + C₂λ₂ᵏ
real & repeated roots:  y_k = (C₁ + C₂k)λᵏ
two complex roots:      y_k = C₁(α + βi)ᵏ + C₂(α − βi)ᵏ
polar form:             y_k = rᵏ(C₁′ cos kθ + C₂′ sin kθ)
                        where C₁′ = C₁ + C₂ and C₂′ = (C₁ − C₂)i
Stability/Convergence Depend on λ
1. Stable, converging without oscillations
    λ → real, distinct, and |λ| ≤ 1
    λ → real, repeated, and |λ| < 1
2. Stable, converging with damped oscillations
    λ → complex (α ± βi), distinct, and |r| ≤ 1
    λ → complex, repeated, and |r| < 1
3. Unstable, non-oscillatory
    λ → real, distinct, and |λ| > 1
    λ → real, repeated, and |λ| ≥ 1
4. Unstable, oscillatory
    λ → complex (α ± βi), distinct, and |r| > 1
    λ → complex, repeated, and |r| ≥ 1
Stability/Convergence Depend on λ
1. Stable, converging without oscillations
   λ → real, distinct, and |λ| ≤ 1; λ → real, repeated, and |λ| < 1
Stability/Convergence Depend on λ
3. Unstable, non-oscillatory
   λ → real, distinct, and |λ| > 1; λ → real, repeated, and |λ| ≥ 1
Stability/Convergence Depend on λ
4. Unstable, oscillatory
   λ → complex (α ± βi), distinct, and |r| = 1
Stability/Convergence Depend on λ
4. Unstable, oscillatory
   λ → complex (α ± βi), distinct, and |r| > 1
Interpolation
Interpolation is a process of finding a formula
(often a polynomial) whose graph will pass
through a given set of points (x, y).
As an example, consider defining:
  x₀ = 0, x₁ = π/4, x₂ = π/2
and
  yᵢ = cos xᵢ,  i = 0, 1, 2
Interpolation
PURPOSES OF INTERPOLATION
1. Replace a set of data points {(xᵢ, yᵢ)} with a function given analytically.
2. Approximate functions with simpler ones, usually polynomials or 'piecewise polynomials'.
Interpolation
Purpose #1 has several aspects.

Purpose #2 for interpolation is to approximate functions f(x) by simpler functions p(x), perhaps to make it easier to integrate or differentiate f(x). That will be the primary reason for studying interpolation in this course.
Linear Interpolation

The true value is tan(1.15) = 2.2345. We will want to examine formulas for the error in interpolation, to know when we have sufficient accuracy in our interpolant.
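The worked numbers behind this example were lost with the slide graphics; here is a sketch reconstructing a plausible version, assuming base points x₀ = 1.1 and x₁ = 1.2 (my assumption, not from the text):

```python
import math

def lerp(x0, y0, x1, y1, x):
    """Linear interpolation between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

x0, x1 = 1.1, 1.2                 # assumed base points
p = lerp(x0, math.tan(x0), x1, math.tan(x1), 1.15)
print(p, math.tan(1.15))          # interpolant vs. true value 2.2345...
```

Since tan x is convex on this interval, the chord lies above the curve, so the linear interpolant overestimates the true value.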
Quadratic Interpolation

for i, j = 0, 1, 2. Also, they all have degree 2. As a consequence of each Lᵢ(x) being of degree 2, the interpolant

  P₂(x) = y₀L₀(x) + y₁L₁(x) + y₂L₂(x)

must have degree ≤ 2.
Higher Degree Interpolation

It improves with increasing degree n, but not at a very rapid rate. In fact, the error becomes worse when n is increased further. Later we will see that interpolation of a much higher degree, say n ≥ 10, is often poorly behaved when the node points {xᵢ} are evenly spaced.
Interpolating Polynomials
• Useful in representing experimental data when the fundamental relationship is not known.
• Choose the form of the polynomial:

  Pₙ(x) = a₀ + a₁x + a₂x² + a₃x³ + … + aₙxⁿ

• For the base points (x₀, f(x₀)), (x₁, f(x₁)), …, (xₙ, f(xₙ)) it must be true that

  Pₙ(xᵢ) = f(xᵢ),  i = 0, 1, 2, …, n

• Substituting the n+1 known xᵢ and f(xᵢ) values into Pₙ(x) yields n+1 simultaneous linear algebraic equations with unknown coefficients a₀ – aₙ.
Gregory-Newton Interpolation
Consider a set of known values of f(x) at equally spaced values of x (base points):

  x:    x − 3h, x − 2h, x − h, x, x + h, x + 2h, x + 3h
  f(x): f(x − 3h), f(x − 2h), f(x − h), f(x), f(x + h), f(x + 2h), f(x + 3h)

Apply the FFD:  f(x + h) = e^{hD} f(x),  f(x + nh) = e^{nhD} f(x)

Recall e^{hD} = 1 + ∆, thus

  f(x + nh) = (1 + ∆)ⁿ f(x)

Expand as a binomial series:

  (1 + ∆)ⁿ = 1 + n∆ + [n(n − 1)/2!] ∆² + [n(n − 1)(n − 2)/3!] ∆³ + …
Gregory-Newton Interpolation
Substitute the expansion:

  f(x + nh) = f(x) + n ∆f(x) + [n(n − 1)/2!] ∆²f(x) + [n(n − 1)(n − 2)/3!] ∆³f(x) + …

• If n is a positive integer, the series has n+1 terms; therefore, the expansion is a polynomial of degree n.
• If n+1 base points are known, the expansion fits all n+1 points exactly.
Gregory-Newton Interpolation
• Using xᵢ = x₀ + ih and x = x₀ + nh, solve for n and substitute:

  f(x) = f(x₀) + (x − x₀) ∆f(x₀)/(1!h) + (x − x₀)(x − x₁) ∆²f(x₀)/(2!h²) + (x − x₀)(x − x₁)(x − x₂) ∆³f(x₀)/(3!h³) + …

• Generalized as:

  f(x) = f(x₀) + Σ_{k=1}^{n} [ ∏_{m=0}^{k−1} (x − x_m) ] ∆ᵏf(x₀)/(k! hᵏ)

• The binomial series has a finite number of terms when n is a positive integer. However, in Gregory-Newton interpolation n is not usually an integer, therefore the series has an infinite number of terms.
• If |∆| ≤ 1, the binomial series for (1 + ∆)ⁿ converges as the number of terms grows (this implies the finite differences must be small).
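The generalized formula translates directly into code. A sketch (the function name and the sample data are my choices):

```python
from math import prod

def gregory_newton(x0, h, fvals, x):
    """Evaluate the Gregory-Newton forward interpolant through the points
    (x0 + i*h, fvals[i]), i = 0 .. len(fvals)-1."""
    diffs = list(fvals)
    result = diffs[0]
    denom = 1.0                   # accumulates k! * h^k
    for k in range(1, len(fvals)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        denom *= k * h
        result += prod(x - (x0 + m * h) for m in range(k)) * diffs[0] / denom
    return result

# Degree-2 data on 4 equally spaced nodes: the interpolant is exact.
fvals = [v ** 2 for v in (0.0, 1.0, 2.0, 3.0)]
print(gregory_newton(0.0, 1.0, fvals, 1.5))
```

With polynomial data the higher forward differences vanish, so the series terminates and the interpolant reproduces the polynomial exactly.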
Stirling's Interpolation
• Based on central differences (see text for development):

  f(x) = f(x₀) + (x − x₀) µδf(x₀)/(1!h) + (x − x₀)² δ²f(x₀)/(2!h²)
       + (x − x₋₁)(x − x₀)(x − x₁) µδ³f(x₀)/(3!h³) + (x − x₋₁)(x − x₀)²(x − x₁) δ⁴f(x₀)/(4!h⁴) + …

• The generalized formulae for the higher-order terms include expressions for both odd and even differences:

  odd:   [1/(k! hᵏ)] [ ∏_{m=−½(k−1)}^{½(k−1)} (x − x_m) ] µδᵏf(x₀),  k = 1, 3, …
  even:  [(x − x₀)/(k! hᵏ)] [ ∏_{m=−½(k−2)}^{½(k−2)} (x − x_m) ] δᵏf(x₀),  k = 2, 4, …
Lagrange Polynomials
• Unequally spaced point interpolation.
• Based on a polynomial defined as:

  Pₙ(x) = Σ_{k=0}^{n} p_k(x) f(x_k)

  – p_k(x) are nth-degree polynomial functions corresponding to each base point
  – each weighting polynomial p_k(x) must be chosen so that it has the value of unity when x = x_k, and zero at all other base points

  selection criteria:  p_k(xᵢ) = 0 for i ≠ k;  p_k(xᵢ) = 1 for i = k
  function form:       p_k(x) = C_k ∏_{i=0, i≠k}^{n} (x − xᵢ)
Lagrange Polynomials
The constants C_k are evaluated to make the second criterion true (p_k(x_k) = 1):

  C_k = [ ∏_{i=0, i≠k}^{n} (x_k − xᵢ) ]⁻¹
Lagrange Polynomials
• Combine to yield the Lagrange interpolating polynomials:

  p_k(x) = ∏_{i=0, i≠k}^{n} (x − xᵢ)/(x_k − xᵢ)

• With remainder term:

  Rₙ(x) = [ ∏_{i=0}^{n} (x − xᵢ) ] f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!,   x₀ < ξ < xₙ
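The weighting polynomials p_k(x) translate directly into a double loop. A sketch (the sample data are my choices):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolant P_n(x) = Σ_k p_k(x) y_k."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        pk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                pk *= (x - xi) / (xk - xi)   # unity at x_k, zero at the others
        total += pk * yk
    return total

xs = [0.0, 0.5, 2.0]          # unequally spaced base points
ys = [1.0, 2.25, 9.0]         # samples of y = (x + 1)^2
print(lagrange(xs, ys, 1.0))  # degree-2 data, so the quadratic is exact
```

Unlike Gregory-Newton, no equal spacing is required; each p_k is built only from the node abscissas.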
Spline Interpolation
When dealing with a large number of data points, a high-degree polynomial is likely to fluctuate between base points, rather than pass smoothly through the points.

[Figure: base points with a Lagrange-polynomial fit and a spline fit, Y vs. X — the spline passes smoothly through the points while the high-degree polynomial oscillates between them]
Spline Interpolation

• Spline interpolation is designed to correct this behavior through a series of lower-degree interpolating polynomials in the vicinity of each base point.

• Most common → the cubic spline
Cubic Spline
• On the interval [x_{i−1}, xᵢ]:

  Pᵢ(x) = aᵢx³ + bᵢx² + cᵢx + dᵢ

• There are 4 unknowns in each Pᵢ, and n Pᵢ's on the interval [x₀, xₙ]
• ∴ the 4n unknown coefficients require 4n equations to determine.
Required Cubic Spline Equations
– Each spline passes through the base points at the edges of its interval (2n eqns)
– First derivatives of the splines are continuous across the interior base points (n−1 eqns)
– Second derivatives of the splines are continuous across the interior base points (n−1 eqns)
– Second derivatives of the end splines are zero at the end base points (2 eqns, natural condition), or
– Third derivatives of the end splines equal the third derivative of the neighboring spline (not-a-knot condition)
Cubic Spline
Define the cubic spline equation and its derivatives:

  Pᵢ(x) = aᵢx³ + bᵢx² + cᵢx + dᵢ
  Pᵢ′(x) = 3aᵢx² + 2bᵢx + cᵢ
  Pᵢ″(x) = 6aᵢx + 2bᵢ
Cubic Spline
• The 2nd derivative of the interpolating polynomial at any point in [x_{i−1}, xᵢ] can be given by the 1st-order Lagrange interpolation polynomial:

  y″ = [(x − xᵢ)/(x_{i−1} − xᵢ)] y″_{i−1} + [(x − x_{i−1})/(xᵢ − x_{i−1})] y″ᵢ

• Which can be integrated twice:

  y = (x − xᵢ)³ y″_{i−1}/[6(x_{i−1} − xᵢ)] + (x − x_{i−1})³ y″ᵢ/[6(xᵢ − x_{i−1})] + C₁x + C₂
Cubic Spline
  y = (x − xᵢ)³ y″_{i−1}/[6(x_{i−1} − xᵢ)] + (x − x_{i−1})³ y″ᵢ/[6(xᵢ − x_{i−1})] + C₁x + C₂

• Integration constants from the BCs y(x_{i−1}) = y_{i−1}, y(xᵢ) = yᵢ:

  y = (1/6)[(x − xᵢ)³/(x_{i−1} − xᵢ) − (x_{i−1} − xᵢ)(x − xᵢ)] y″_{i−1} + [(x − xᵢ)/(x_{i−1} − xᵢ)] y_{i−1}
    + (1/6)[(x − x_{i−1})³/(xᵢ − x_{i−1}) − (xᵢ − x_{i−1})(x − x_{i−1})] y″ᵢ + [(x − x_{i−1})/(xᵢ − x_{i−1})] yᵢ        (Eq. 1)

• Which is equivalent to Pᵢ(x) = aᵢx³ + bᵢx² + cᵢx + dᵢ with:

  aᵢ = (1/6)(y″_{i−1} − y″ᵢ)/(x_{i−1} − xᵢ)
  bᵢ = (1/2)(x_{i−1}y″ᵢ − xᵢy″_{i−1})/(x_{i−1} − xᵢ)
  cᵢ = (1/2)(xᵢ²y″_{i−1} − x_{i−1}²y″ᵢ)/(x_{i−1} − xᵢ) + (1/6)(x_{i−1} − xᵢ)(y″ᵢ − y″_{i−1}) + (y_{i−1} − yᵢ)/(x_{i−1} − xᵢ)
  dᵢ = (1/6)(x_{i−1}³y″ᵢ − xᵢ³y″_{i−1})/(x_{i−1} − xᵢ) + (1/6)(x_{i−1} − xᵢ)(xᵢy″_{i−1} − x_{i−1}y″ᵢ) + (x_{i−1}yᵢ − xᵢy_{i−1})/(x_{i−1} − xᵢ)
Cubic Spline
• The unknowns y″_{i−1} and y″ᵢ are determined from continuity of the first derivative of the spline at the interior base points, i.e., P′ᵢ(xᵢ) = P′_{i+1}(xᵢ).
• Differentiating Eq. 1 and applying continuity:

  (xᵢ − x_{i−1}) y″_{i−1} + 2(x_{i+1} − x_{i−1}) y″ᵢ + (x_{i+1} − xᵢ) y″_{i+1} = 6[(y_{i+1} − yᵢ)/(x_{i+1} − xᵢ)] − 6[(yᵢ − y_{i−1})/(xᵢ − x_{i−1})]

  where i = 1, 2, …, n − 1 and y″₀ = y″ₙ = 0 (natural spline)

• Which is an (n−1)th-order tridiagonal system.
Cubic Spline in Matrix Form

The system is tridiagonal; row i (i = 1, …, n − 1) has

  sub-diagonal:    (xᵢ − x_{i−1})
  diagonal:        2(x_{i+1} − x_{i−1})
  super-diagonal:  (x_{i+1} − xᵢ)
  right-hand side: 6[(y_{i+1} − yᵢ)/(x_{i+1} − xᵢ) − (yᵢ − y_{i−1})/(xᵢ − x_{i−1})]

or

  A y″ = c

which we already know how to solve:

  y″ = A⁻¹ c
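The tridiagonal system for the interior second derivatives can be assembled and solved with the Thomas algorithm (forward elimination plus back substitution). A sketch in pure Python (the function name and sample data are mine):

```python
def natural_spline_second_derivs(xs, ys):
    """Solve the tridiagonal system for y''_1 .. y''_{n-1}; y''_0 = y''_n = 0."""
    n = len(xs) - 1
    a = [0.0] * (n - 1)   # sub-diagonal:   x_i - x_{i-1}
    b = [0.0] * (n - 1)   # diagonal:       2 (x_{i+1} - x_{i-1})
    d = [0.0] * (n - 1)   # super-diagonal: x_{i+1} - x_i
    c = [0.0] * (n - 1)   # right-hand side
    for i in range(1, n):
        a[i - 1] = xs[i] - xs[i - 1]
        b[i - 1] = 2 * (xs[i + 1] - xs[i - 1])
        d[i - 1] = xs[i + 1] - xs[i]
        c[i - 1] = 6 * ((ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
                        - (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1]))
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * d[i - 1]
        c[i] -= w * c[i - 1]
    ypp = [0.0] * (n + 1)         # natural end conditions
    ypp[n - 1] = c[-1] / b[-1]
    for i in range(n - 2, 0, -1):
        ypp[i] = (c[i - 1] - d[i - 1] * ypp[i + 1]) / b[i - 1]
    return ypp

print(natural_spline_second_derivs([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0]))
```

For data that lie on a straight line every right-hand side is zero, so all the second derivatives come back zero, which is a quick sanity check on the assembly.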

Orthogonal Polynomials
• Satisfy an orthogonality condition with respect to a weighting function w(x) ≥ 0:

  ∫ₐᵇ w(x)·gₙ(x)·g_m(x) dx = 0 if n ≠ m,  c(n) > 0 if n = m

• Families of polynomials can be defined that obey such a condition, named after their creators (inventors, discoverers, …).
Legendre Polynomials

The simplest of the orthogonal class, with w(x) = 1 on [−1, 1]:

  ∫₋₁¹ Pₙ(x) P_m(x) dx = 0 if n ≠ m,  2/(2n + 1) if n = m
Legendre Polynomials
Satisfies the recurrence: (n + 1)P_{n+1}(x) − (2n + 1)x Pₙ(x) + n P_{n−1}(x) = 0

  n = 0:  P₀(x) = 1
  n = 1:  P₁(x) = x
  n = 2:  P₂(x) = (3x² − 1)/2
  n = 3:  P₃(x) = (5x³ − 3x)/2
  n = 4:  P₄(x) = (35x⁴ − 30x² + 3)/8

  Pₙ(x) = Σ_{m=0}^{⌊n/2⌋} (−1)^m (2n − 2m)!/[2ⁿ m! (n − m)! (n − 2m)!] x^{n−2m}

[Figure: Legendre polynomials P₀(x) … P₅(x) plotted on −1 ≤ x ≤ 1]
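The three-term recurrence is the standard way to evaluate Pₙ(x) without forming the closed-form sum. A sketch:

```python
def legendre(n, x):
    """P_n(x) via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p = 1.0, x            # P0 and P1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(2, 0.5), (3 * 0.5 ** 2 - 1) / 2)   # both -0.125
```

The recurrence needs only the two previous values, so the evaluation is O(n) with no factorials.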
Chebyshev Polynomials
Satisfies the recurrence: T_{n+1}(x) − 2x Tₙ(x) + T_{n−1}(x) = 0

  n = 0:  T₀(x) = 1
  n = 1:  T₁(x) = x
  n = 2:  T₂(x) = 2x² − 1
  n = 3:  T₃(x) = 4x³ − 3x
  n = 4:  T₄(x) = 8x⁴ − 8x² + 1

  Tₙ(x) = Σ_{m=0}^{⌊n/2⌋} [n!/((2m)! (n − 2m)!)] (x² − 1)^m x^{n−2m}
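The Chebyshev recurrence can be checked against the identity Tₙ(cos θ) = cos nθ. A sketch:

```python
import math

def chebyshev(n, x):
    """T_n(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t = 1.0, x            # T0 and T1
    if n == 0:
        return t_prev
    for _ in range(1, n):
        t_prev, t = t, 2 * x * t - t_prev
    return t

theta = 0.7
print(chebyshev(5, math.cos(theta)), math.cos(5 * theta))   # should agree
```

The cosine identity is what makes Chebyshev nodes so effective at taming the high-degree oscillation mentioned earlier for evenly spaced points.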


