Consider the following second-order boundary-value problem:

d²y/dt² = f(t, y, dy/dt),  a ≤ t ≤ b,  y(a) = α,  y(b) = β.  (9)
This looks similar to the linear case, except that the solution of the nonlinear problem cannot be expressed in the simple form available in the linear case: it cannot be written as a linear combination of the solutions of two initial-value problems. We therefore need a different strategy, namely to use the solutions of a sequence of initial-value problems of the form
d²y/dt² = f(t, y, dy/dt),  a ≤ t ≤ b,  y(a) = α,  y'(a) = θ,  (10)
involving a parameter θ that is used to approximate the solution of the boundary-value problem under consideration. We choose the parameters θ = θ_k in such a way that

lim_{k→∞} y(b, θ_k) = y(b) = β,  (11)

where y(t, θ_k) denotes the solution of the initial-value problem (10) with θ = θ_k, and y(t) denotes the solution of the boundary-value problem (9).
This technique is called the "shooting" method, by analogy with the procedure of firing objects at a stationary target (see Fig. 10.2). We start with a parameter θ_0 that determines the initial elevation at which the object is fired from the point (a, α) along the curve described by the solution of the initial-value problem

d²y/dt² = f(t, y, dy/dt),  a ≤ t ≤ b,  y(a) = α,  y'(a) = θ_0.  (10.12)

The parameter θ we seek is thus a root of the nonlinear equation

y(b, θ) − β = 0.  (10.13)
Since this is a nonlinear equation of the type considered in Chapter 2, a number of methods are available. If we wish to employ the secant method (Algorithm 2.4 of Section 2.3) to solve the problem, we need to choose initial approximations θ_0 and θ_1 and then generate the remaining terms of the sequence by

θ_k = θ_{k−1} − (y(b, θ_{k−1}) − β)(θ_{k−1} − θ_{k−2}) / (y(b, θ_{k−1}) − y(b, θ_{k−2})),  k = 2, 3, ….
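As a concrete illustration, the secant-based shooting iteration can be written in a few lines of Python. This is only a sketch under stated assumptions: the helper names (`rk4_ivp`, `shoot_secant`) and the demo problem y'' = y are mine, not from the text; the RK4 sweep uses the same staggered stages as Step 6 of the algorithm later in this section.

```python
import math

def rk4_ivp(f, a, b, alpha, theta, n=100):
    """RK4 for y'' = f(t, y, y') with y(a)=alpha, y'(a)=theta; returns y(b).
    The staggered stages mirror Step 6 of the algorithm in the text."""
    h = (b - a) / n
    t, w1, w2 = a, alpha, theta
    for _ in range(n):
        k11 = h * w2
        k12 = h * f(t, w1, w2)
        k21 = h * (w2 + k12 / 2)
        k22 = h * f(t + h / 2, w1 + k11 / 2, w2 + k12 / 2)
        k31 = h * (w2 + k22 / 2)
        k32 = h * f(t + h / 2, w1 + k21 / 2, w2 + k22 / 2)
        k41 = h * (w2 + k32)
        k42 = h * f(t + h, w1 + k31, w2 + k32)
        w1 += (k11 + 2 * k21 + 2 * k31 + k41) / 6
        w2 += (k12 + 2 * k22 + 2 * k32 + k42) / 6
        t += h
    return w1

def shoot_secant(f, a, b, alpha, beta, th0, th1, tol=1e-10, maxit=50):
    """Secant iteration on theta = y'(a) for the root of g(theta) = y(b, theta) - beta.
    (No guard against g1 == g0; a production version would add one.)"""
    g0 = rk4_ivp(f, a, b, alpha, th0) - beta
    for _ in range(maxit):
        g1 = rk4_ivp(f, a, b, alpha, th1) - beta
        if abs(g1) < tol:
            break
        th0, th1, g0 = th1, th1 - g1 * (th1 - th0) / (g1 - g0), g1
    return th1

# demo (my test problem): y'' = y, y(0) = 0, y(1) = sinh(1); exact y'(0) = 1
theta = shoot_secant(lambda t, y, yp: y, 0.0, 1.0, 0.0, math.sinh(1.0), 0.0, 2.0)
```

Because g(θ) is affine in θ for a linear equation, the secant iteration converges in one step on this demo; for genuinely nonlinear f it typically takes several.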
To use the more powerful Newton's method to generate the sequence {θ_k}, only one initial value, θ_0, is needed. However, the iteration has the form

θ_k = θ_{k−1} − (y(b, θ_{k−1}) − β) / ((dy/dθ)(b, θ_{k−1})),  (14)

and requires knowledge of (dy/dθ)(b, θ_{k−1}). This presents a difficulty, since an explicit representation for y(b, θ) is not known; we know only the values y(b, θ_0), y(b, θ_1), …, y(b, θ_{k−1}).
Suppose we rewrite the initial-value problem (10.10), emphasizing that the solution depends on both t and θ:

∂²y/∂t² (t, θ) = f(t, y(t, θ), ∂y/∂t (t, θ)),  a ≤ t ≤ b,  y(a, θ) = α,  ∂y/∂t (a, θ) = θ,  (15)

retaining the prime notation to indicate differentiation with respect to t. Since we are interested in determining (dy/dθ)(b, θ) when θ = θ_{k−1}, we first take the partial derivative of (15) with respect to θ. This implies that
∂/∂θ (∂²y/∂t²)(t, θ) = ∂f/∂θ (t, y(t, θ), y'(t, θ))
  = ∂f/∂t (t, y(t, θ), y'(t, θ)) ∂t/∂θ + ∂f/∂y (t, y(t, θ), y'(t, θ)) ∂y/∂θ (t, θ) + ∂f/∂y' (t, y(t, θ), y'(t, θ)) ∂y'/∂θ (t, θ).

Since t and θ are independent, ∂t/∂θ = 0, and the identity reduces to

∂/∂θ (∂²y/∂t²)(t, θ) = ∂f/∂y (t, y(t, θ), y'(t, θ)) ∂y/∂θ (t, θ) + ∂f/∂y' (t, y(t, θ), y'(t, θ)) ∂/∂θ (∂y/∂t)(t, θ),  (10.16)
for a ≤ t ≤ b. The initial conditions give

∂y/∂θ (a, θ) = 0  and  ∂/∂θ (∂y/∂t)(a, θ) = 1.

If we simplify the notation by using z(t, θ) to denote (∂y/∂θ)(t, θ), and assume that the order of differentiation with respect to t and θ can be reversed, Eq. (10.16) becomes the initial-value problem

z'' = ∂f/∂y (t, y, y') z + ∂f/∂y' (t, y, y') z',  a ≤ t ≤ b,  z(a) = 0,  z'(a) = 1.  (10.17)
Newton's method therefore requires that two initial-value problems, (10.10) and (10.17), be solved on each iteration. Then, from Eq. (10.14),

θ_k = θ_{k−1} − (y(b, θ_{k−1}) − β) / z(b, θ_{k−1}).  (10.18)

In practice, none of these initial-value problems is likely to be solved exactly; instead, the solutions are approximated by one of the methods discussed in Chapter 5. Algorithm 10.2 uses the fourth-order Runge-Kutta method to approximate both solutions required by Newton's method. A similar procedure for the secant method is considered in Exercise 4.
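A compact sketch of this Newton-based shooting scheme, assuming the user supplies f together with its partials f_y and f_{y'}: the solver below integrates y and the sensitivity z of Eq. (10.17) in one coupled RK4 sweep (a minor variant of Algorithm 10.2, which freezes w_1, w_2 within each z stage). The function name and the test problem are my illustrative choices.

```python
import math

def newton_shoot(f, fy, fyp, a, b, alpha, beta, theta, n=100, tol=1e-10, maxit=25):
    """Shooting with Newton's method: integrate y'' = f(t,y,y') together with
    the sensitivity z'' = f_y*z + f_{y'}*z', z(a)=0, z'(a)=1 (Eq. 10.17) in one
    RK4 sweep, then correct theta = y'(a) by (y(b)-beta)/z(b) as in (10.18)."""
    h = (b - a) / n

    def rhs(t, s):
        y, yp, z, zp = s   # state: (y, y', z, z')
        return (yp, f(t, y, yp), zp, fy(t, y, yp) * z + fyp(t, y, yp) * zp)

    for _ in range(maxit):
        t, s = a, (alpha, theta, 0.0, 1.0)
        for _ in range(n):                    # classical RK4 on the 4-vector
            k1 = rhs(t, s)
            k2 = rhs(t + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
            k3 = rhs(t + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
            k4 = rhs(t + h, tuple(si + h * ki for si, ki in zip(s, k3)))
            s = tuple(si + h / 6 * (p1 + 2 * p2 + 2 * p3 + p4)
                      for si, p1, p2, p3, p4 in zip(s, k1, k2, k3, k4))
            t += h
        if abs(s[0] - beta) < tol:            # |y(b, theta) - beta| small enough
            break
        theta -= (s[0] - beta) / s[2]         # Newton correction (10.18)
    return theta

# demo (my test problem): y'' = y, y(0) = 0, y(1) = sinh(1); exact y'(0) = 1
theta = newton_shoot(lambda t, y, yp: y,
                     lambda t, y, yp: 1.0,
                     lambda t, y, yp: 0.0,
                     0.0, 1.0, 0.0, math.sinh(1.0), theta=0.0)
```

Integrating y and z together costs little more than integrating y alone, and it keeps z evaluated along the current trajectory, which is what the derivation above requires.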
y'' = f(x, y, y'),  a ≤ x ≤ b,  y(a) = α,  y(b) = β.

(Note: Equations (10.15) and (10.17) are written as first-order systems and solved.)
Step 3  Set w_{1,0} = α; w_{2,0} = TK; u_1 = 0; u_2 = 1.
Step 6  Set
  k_{1,1} = h w_{2,i−1};
  k_{1,2} = h f(x, w_{1,i−1}, w_{2,i−1});
  k_{2,1} = h(w_{2,i−1} + ½ k_{1,2});
  k_{2,2} = h f(x + h/2, w_{1,i−1} + ½ k_{1,1}, w_{2,i−1} + ½ k_{1,2});
  k_{3,1} = h(w_{2,i−1} + ½ k_{2,2});
  k_{3,2} = h f(x + h/2, w_{1,i−1} + ½ k_{2,1}, w_{2,i−1} + ½ k_{2,2});
  k_{4,1} = h(w_{2,i−1} + k_{3,2});
  k_{4,2} = h f(x + h, w_{1,i−1} + k_{3,1}, w_{2,i−1} + k_{3,2});
  w_{1,i} = w_{1,i−1} + (k_{1,1} + 2k_{2,1} + 2k_{3,1} + k_{4,1})/6;
  w_{2,i} = w_{2,i−1} + (k_{1,2} + 2k_{2,2} + 2k_{3,2} + k_{4,2})/6;
  k'_{1,1} = h u_2;
  k'_{1,2} = h[f_y(x, w_{1,i−1}, w_{2,i−1}) u_1 + f_{y'}(x, w_{1,i−1}, w_{2,i−1}) u_2];
  k'_{2,1} = h(u_2 + ½ k'_{1,2});
  k'_{2,2} = h[f_y(x + h/2, w_{1,i−1}, w_{2,i−1})(u_1 + ½ k'_{1,1}) + f_{y'}(x + h/2, w_{1,i−1}, w_{2,i−1})(u_2 + ½ k'_{1,2})];
  k'_{3,1} = h(u_2 + ½ k'_{2,2});
  k'_{3,2} = h[f_y(x + h/2, w_{1,i−1}, w_{2,i−1})(u_1 + ½ k'_{2,1}) + f_{y'}(x + h/2, w_{1,i−1}, w_{2,i−1})(u_2 + ½ k'_{2,2})];
  k'_{4,1} = h(u_2 + k'_{3,2});
  k'_{4,2} = h[f_y(x + h, w_{1,i−1}, w_{2,i−1})(u_1 + k'_{3,1}) + f_{y'}(x + h, w_{1,i−1}, w_{2,i−1})(u_2 + k'_{3,2})];
  u_1 = u_1 + (k'_{1,1} + 2k'_{2,1} + 2k'_{3,1} + k'_{4,1})/6;
  u_2 = u_2 + (k'_{1,2} + 2k'_{2,2} + 2k'_{3,2} + k'_{4,2})/6.
Step 7  If |w_{1,N} − β| ≤ TOL then do Steps 8 and 9.

Step 8  For i = 0, 1, …, N set x = a + ih; OUTPUT (x, w_{1,i}, w_{2,i}).

Step 9  (Procedure is complete.) STOP.

Step 10 Set TK = TK − (w_{1,N} − β)/u_1;  (Newton's method is used to compute TK.)
        k = k + 1.
In Step 7, the best approximation to β we can expect for w_{1,N}(θ_k) is O(hⁿ) if the approximation method selected for Step 6 gives an O(hⁿ) rate of convergence.

The value θ_0 selected in Step 1 is the slope of the straight line through (a, α) and (b, β). If the problem satisfies the hypotheses of Theorem 10.1, any choice of θ_0 will give convergence. In general, the procedure also works for many problems for which these hypotheses are not satisfied, although for such problems a good choice of θ_0 is necessary.
Although Newton's method used with the shooting technique requires the solution of an additional initial-value problem, it will generally be faster than the secant method. Both methods are only locally convergent, since they require good initial approximations whenever the assumptions of Theorem 10.1 do not hold.

For a general discussion of the convergence of the shooting techniques for nonlinear problems, the reader is referred to the excellent text by Keller [75]. In this reference more general boundary conditions are discussed, and it is also noted that the shooting technique for nonlinear problems is sensitive to rounding errors, especially if the solutions y(x) and z(x) are rapidly increasing functions on [a, b].
Although the shooting methods presented earlier in this chapter can be used for both linear and nonlinear boundary-value problems, they often present problems of instability. The methods we present here have better stability characteristics, but generally require more work to obtain a specified accuracy.

Methods involving finite differences for solving boundary-value problems consist of replacing each of the derivatives in the differential equation by an appropriate difference-quotient approximation of the type considered in Section 4.1. The difference quotient is generally chosen so that a certain order of truncation error is maintained.
The linear second-order boundary-value problem

d²y/dt² = p(t) dy/dt + q(t) y + r(t),  a ≤ t ≤ b,  y(a) = α,  y(b) = β,  (10.20)

requires that difference-quotient approximations be used for both y' and y''. To accomplish this, we select an integer N > 0 and divide the interval [a, b] into (N + 1) equal subintervals, whose endpoints are the mesh points t_i = a + ih, for i = 0, 1, 2, …, N + 1, where h = (b − a)/(N + 1). Choosing the constant h in this manner will facilitate the application of a matrix algorithm from Chapter 6, which in this form will require solving a linear system involving an N × N matrix.
At the interior mesh points t_i, i = 1, 2, …, N, the differential equation to be approximated is

y''(t_i) = p(t_i) y'(t_i) + q(t_i) y(t_i) + r(t_i).  (10.21)
Expanding y in a third-degree Taylor polynomial about t_i, evaluated at t_{i+1} and t_{i−1}, gives

y(t_{i+1}) = y(t_i + h) = y(t_i) + h y'(t_i) + (h²/2) y''(t_i) + (h³/6) y'''(t_i) + (h⁴/24) y⁽⁴⁾(ξ_i⁺),  (10.22)

for some ξ_i⁺, t_i < ξ_i⁺ < t_{i+1}, and

y(t_{i−1}) = y(t_i − h) = y(t_i) − h y'(t_i) + (h²/2) y''(t_i) − (h³/6) y'''(t_i) + (h⁴/24) y⁽⁴⁾(ξ_i⁻),  (10.23)

for some ξ_i⁻, t_{i−1} < ξ_i⁻ < t_i, assuming y ∈ C⁴[t_{i−1}, t_{i+1}]. If these equations are added, the terms involving y'(t_i) and y'''(t_i) are eliminated, and a simple algebraic manipulation gives

y''(t_i) = (1/h²)[y(t_{i+1}) − 2y(t_i) + y(t_{i−1})] − (h²/24)[y⁽⁴⁾(ξ_i⁺) + y⁽⁴⁾(ξ_i⁻)].

The intermediate value theorem can be used to simplify this even further:

y''(t_i) = (1/h²)[y(t_{i+1}) − 2y(t_i) + y(t_{i−1})] − (h²/12) y⁽⁴⁾(ξ_i),  (10.24)

for some point ξ_i, t_{i−1} < ξ_i < t_{i+1}. Equation (10.24) is called the centered-difference formula for y''(t_i).
A centered-difference formula for y'(t_i) can be obtained in a similar manner (the details are considered in Section 4.1), resulting in

y'(t_i) = (1/2h)[y(t_{i+1}) − y(t_{i−1})] − (h²/6) y'''(η_i),  (10.25)

for some η_i, where t_{i−1} < η_i < t_{i+1}.
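Both centered-difference formulas are easy to check numerically: halving h should reduce the error by roughly a factor of 4. A minimal sketch (the function names are mine):

```python
import math

def centered_second(y, t, h):
    """(y(t+h) - 2y(t) + y(t-h)) / h^2  ~  y''(t) + O(h^2), as in (10.24)."""
    return (y(t + h) - 2.0 * y(t) + y(t - h)) / h ** 2

def centered_first(y, t, h):
    """(y(t+h) - y(t-h)) / (2h)  ~  y'(t) + O(h^2), as in (10.25)."""
    return (y(t + h) - y(t - h)) / (2.0 * h)

# y = sin(t): y''(1) = -sin(1), y'(1) = cos(1)
e1 = abs(centered_second(math.sin, 1.0, 0.10) + math.sin(1.0))
e2 = abs(centered_second(math.sin, 1.0, 0.05) + math.sin(1.0))
e3 = abs(centered_first(math.sin, 1.0, 0.10) - math.cos(1.0))
```

For y = sin and h = 0.1, the error terms (h²/12)y⁽⁴⁾ and (h²/6)y''' predict errors near 7 × 10⁻⁴ and 9 × 10⁻⁴, and e1/e2 should be close to 4.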
The use of these centered-difference formulas in Eq. (10.21) results in the equation

(y(t_{i+1}) − 2y(t_i) + y(t_{i−1}))/h² = p(t_i)(y(t_{i+1}) − y(t_{i−1}))/(2h) + q(t_i) y(t_i) + r(t_i) − (h²/12)[2p(t_i) y'''(η_i) − y⁽⁴⁾(ξ_i)].

A finite-difference method with truncation error of order O(h²) results from using this equation, together with the boundary conditions y(a) = α and y(b) = β, to define y_0 = α, y_{N+1} = β, and

(2y_i − y_{i+1} − y_{i−1})/h² + p(t_i)(y_{i+1} − y_{i−1})/(2h) + q(t_i) y_i = −r(t_i)  (10.26)

for each i = 1, 2, …, N.
In the form we will consider, Eq. (10.26) is rewritten as

−(1 + (h/2) p(t_i)) y_{i−1} + (2 + h² q(t_i)) y_i − (1 − (h/2) p(t_i)) y_{i+1} = −h² r(t_i),

and the resulting system of equations is expressed in the tridiagonal N × N matrix form

T y = d,  (10.27)
where T is the N × N tridiagonal matrix with entries

T_{i,i} = 2 + h² q(t_i),  i = 1, …, N,
T_{i,i−1} = −1 − (h/2) p(t_i),  i = 2, …, N,
T_{i,i+1} = −1 + (h/2) p(t_i),  i = 1, …, N − 1,

and

y = (y_1, y_2, …, y_N)^T,

d = (−h² r(t_1) + (1 + (h/2) p(t_1)) α,  −h² r(t_2),  −h² r(t_3),  …,  −h² r(t_{N−1}),  −h² r(t_N) + (1 − (h/2) p(t_N)) β)^T.
The following theorem gives conditions under which the tridiagonal linear system (10.27) has a unique solution. Its proof is a consequence of Theorem 6.24 (p. 356) and is considered in Exercise 6.

Theorem 10.3
Suppose that p, q, and r are continuous on [a, b]. If q(x) ≥ 0 on [a, b], then the tridiagonal linear system (10.27) has a unique solution, provided that h < 2/L, where L = max_{a≤x≤b} |p(x)|.

It should be noted that the hypotheses of Theorem 10.3 guarantee a unique solution to the boundary-value problem (10.20), but they do not guarantee that y ∈ C⁴[a, b]. It is necessary to establish that y⁽⁴⁾ is continuous on [a, b] in order to ensure that the truncation error has order O(h²).
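The whole linear finite-difference method fits in a short routine. The sketch below is my illustration (the name `linear_fd` and the demo problem are assumptions, not from the text): it assembles the tridiagonal system (10.27) and solves it by forward elimination and back substitution without pivoting, which is safe when the hypotheses of Theorem 10.3 hold.

```python
import math

def linear_fd(p, q, r, a, b, alpha, beta, N):
    """Centered-difference method for y'' = p(t)y' + q(t)y + r(t),
    y(a)=alpha, y(b)=beta, on N interior mesh points: assemble the
    tridiagonal system (10.27) and solve it by elimination."""
    h = (b - a) / (N + 1)
    t = [a + (i + 1) * h for i in range(N)]
    sub = [-1 - h / 2 * p(ti) for ti in t]      # T[i][i-1]
    dia = [2 + h * h * q(ti) for ti in t]       # T[i][i]
    sup = [-1 + h / 2 * p(ti) for ti in t]      # T[i][i+1]
    d = [-h * h * r(ti) for ti in t]
    d[0] += (1 + h / 2 * p(t[0])) * alpha       # boundary term y_0 = alpha
    d[-1] += (1 - h / 2 * p(t[-1])) * beta      # boundary term y_{N+1} = beta
    for i in range(1, N):                       # forward elimination
        m = sub[i] / dia[i - 1]
        dia[i] -= m * sup[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * N
    y[-1] = d[-1] / dia[-1]
    for i in range(N - 2, -1, -1):              # back substitution
        y[i] = (d[i] - sup[i] * y[i + 1]) / dia[i]
    return t, y

# demo (my test problem): y'' = y, y(0) = 0, y(1) = sinh(1); exact y = sinh(t)
t, y = linear_fd(lambda s: 0.0, lambda s: 1.0, lambda s: 0.0,
                 0.0, 1.0, 0.0, math.sinh(1.0), 49)
```

The elimination pass is the Crout-style factorization for tridiagonal systems mentioned later in this section; its cost is O(N).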
y'' = p(x) y' + q(x) y + r(x),  a ≤ x ≤ b,  y(a) = α,  y(b) = β,
discussed in Section 9.2. A sequence of iterates {(w_1^{(k)}, w_2^{(k)}, …, w_N^{(k)})^t} is generated that will converge to the solution of system (10.31), provided that the initial approximation (w_1^{(0)}, w_2^{(0)}, …, w_N^{(0)})^t is sufficiently close to the solution (w_1, w_2, …, w_N)^t and that the Jacobian matrix for the system, defined in Eqs. (9.11), is nonsingular. For the system (10.31), the Jacobian matrix in (10.32) is tridiagonal, and the assumptions presented at the beginning of this discussion ensure that J is a nonsingular matrix.
Newton's method for nonlinear systems requires that at each iteration the N × N linear system

J(y_1, …, y_N)(v_1, …, v_N)^T =
 −(2y_1 − y_2 + h² f(t_1, y_1, (y_2 − α)/(2h)) − α,
  −y_1 + 2y_2 − y_3 + h² f(t_2, y_2, (y_3 − y_1)/(2h)),
  …,
  −y_{N−2} + 2y_{N−1} − y_N + h² f(t_{N−1}, y_{N−1}, (y_N − y_{N−2})/(2h)),
  −y_{N−1} + 2y_N + h² f(t_N, y_N, (β − y_{N−1})/(2h)) − β)^T

be solved for (v_1, v_2, …, v_N)^T, since

w_i^{(k)} = w_i^{(k−1)} + v_i,  for each i = 1, 2, …, N.
Since J is tridiagonal, this is not as formidable a problem as it might at first appear: the Crout reduction algorithm for tridiagonal systems (Algorithm 6.7) can easily be applied. The entire process is detailed in the following algorithm.
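The nonlinear finite-difference procedure can be sketched compactly in Python. The name `nonlinear_fd` and the demo problem y'' = (32 + 2t³ − y y')/8 (whose exact solution is y = t² + 16/t) are my illustrative choices; each Newton step assembles the residual F and the tridiagonal Jacobian, then solves J v = −F by elimination.

```python
def nonlinear_fd(f, fy, fyp, a, b, alpha, beta, N, tol=1e-8, maxit=25):
    """Newton's method for the nonlinear centered-difference equations
    F_i = 2y_i - y_{i-1} - y_{i+1} + h^2 f(t_i, y_i, (y_{i+1}-y_{i-1})/(2h)) = 0.
    fy and fyp are the partials of f with respect to y and y'."""
    h = (b - a) / (N + 1)
    t = [a + (i + 1) * h for i in range(N)]
    # initial guess: chord through (a, alpha) and (b, beta)
    y = [alpha + (beta - alpha) * (ti - a) / (b - a) for ti in t]
    for _ in range(maxit):
        F, sub, dia, sup = [], [], [], []
        for i in range(N):
            ym = y[i - 1] if i > 0 else alpha
            yp = y[i + 1] if i < N - 1 else beta
            s = (yp - ym) / (2 * h)
            F.append(2 * y[i] - ym - yp + h * h * f(t[i], y[i], s))
            sub.append(-1 - h / 2 * fyp(t[i], y[i], s))   # dF_i/dy_{i-1}
            dia.append(2 + h * h * fy(t[i], y[i], s))     # dF_i/dy_i
            sup.append(-1 + h / 2 * fyp(t[i], y[i], s))   # dF_i/dy_{i+1}
        d = [-Fi for Fi in F]                  # solve J v = -F (tridiagonal)
        for i in range(1, N):
            m = sub[i] / dia[i - 1]
            dia[i] -= m * sup[i - 1]
            d[i] -= m * d[i - 1]
        v = [0.0] * N
        v[-1] = d[-1] / dia[-1]
        for i in range(N - 2, -1, -1):
            v[i] = (d[i] - sup[i] * v[i + 1]) / dia[i]
        y = [yi + vi for yi, vi in zip(y, v)]
        if max(abs(vi) for vi in v) < tol:
            break
    return t, y

# demo (my test problem): y'' = (32 + 2t^3 - y y')/8 on [1,3],
# y(1) = 17, y(3) = 43/3; exact solution y = t^2 + 16/t
t, y = nonlinear_fd(lambda x, u, up: (32 + 2 * x ** 3 - u * up) / 8.0,
                    lambda x, u, up: -up / 8.0,
                    lambda x, u, up: -u / 8.0,
                    1.0, 3.0, 17.0, 43.0 / 3.0, 19)
```

The chord initial guess is the same one the shooting discussion recommends for θ_0, and it is usually adequate for Newton's method to converge here.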
The boundary-value problem

−d/dx (p(x) dy/dx) + q(x) y = f(x),  0 ≤ x ≤ 1,  (10.36)

has y as its solution if and only if y is the unique function in C_0²[0, 1] that minimizes the integral

I[u] = ∫_0^1 { p(x)[u'(x)]² + q(x)[u(x)]² − 2f(x) u(x) } dx.  (10.37)
In the Rayleigh-Ritz procedure, the integral I is minimized not over all the functions in C_0²[0, 1] but over a smaller set of functions consisting of linear combinations of certain basis functions φ_1, φ_2, …, φ_n. The basis functions {φ_i}_{i=1}^n must be linearly independent and satisfy φ_i(0) = φ_i(1) = 0 for each i. The procedure determines constants c_1, c_2, …, c_n that minimize

I[∑_{i=1}^n c_i φ_i] = ∫_0^1 { p(x)[∑_{i=1}^n c_i φ_i'(x)]² + q(x)[∑_{i=1}^n c_i φ_i(x)]² − 2f(x) ∑_{i=1}^n c_i φ_i(x) } dx.  (10.38)
For a minimum to occur, it is necessary, when considering I as a function of c_1, c_2, …, c_n, to have

∂I/∂c_j = 0  for each j = 1, 2, …, n.  (10.39)

Differentiating (10.38) gives

∂I/∂c_j = ∫_0^1 { 2p(x) ∑_{i=1}^n c_i φ_i'(x) φ_j'(x) + 2q(x) ∑_{i=1}^n c_i φ_i(x) φ_j(x) − 2f(x) φ_j(x) } dx,
and substituting into Eq. (10.39) yields
n 1 1
(10.40) 0=∑
i=1 0
for each j = 1,2,…,n.
[ ' '
]
∫ { p( x )φ i ( x )φ j ( x )+q( x )φ i ( x )φ j ( x )} dx c i−∫ f ( x )φ j ( x )dx
0
The equations described in (10.40) can be considered as a linear system Ac = b, where the symmetric matrix A is given by

a_{ij} = ∫_0^1 { p(x) φ_i'(x) φ_j'(x) + q(x) φ_i(x) φ_j(x) } dx,

and b is defined by

b_i = ∫_0^1 f(x) φ_i(x) dx.
The first choice of basis functions we will discuss involves piecewise linear polynomials. We first form a partition of [0, 1] by choosing points x_0, x_1, …, x_{n+1} with 0 = x_0 < x_1 < ⋯ < x_{n+1} = 1, and set h_i = x_{i+1} − x_i for each i = 0, 1, …, n. The basis functions φ_1(x), φ_2(x), …, φ_n(x) are then defined by

φ_i(x) = { 0,                      0 ≤ x ≤ x_{i−1},
           (x − x_{i−1})/h_{i−1},  x_{i−1} < x ≤ x_i,
           (x_{i+1} − x)/h_i,      x_i < x ≤ x_{i+1},
           0,                      x_{i+1} < x ≤ 1,   (10.41)

for each i = 1, 2, …, n. (See Fig. 10.4.)
Since the functions φ_i are piecewise linear, the derivatives φ_i', while not continuous, are constant on each open subinterval:

φ_i'(x) = { 0,         0 ≤ x < x_{i−1},
            1/h_{i−1}, x_{i−1} < x < x_i,
            −1/h_i,    x_i < x < x_{i+1},
            0,         x_{i+1} < x ≤ 1,   (10.42)

for each i = 1, 2, …, n.
Because φ_i and φ_i' are nonzero only on (x_{i−1}, x_{i+1}),

φ_i(x) φ_j(x) ≡ 0  and  φ_i'(x) φ_j'(x) ≡ 0,

except when j is i − 1, i, or i + 1. As a consequence, the linear system (10.40) reduces to an n × n tridiagonal linear system. The nonzero entries in A are given by
a_{ii} = ∫_0^1 { p(x)[φ_i'(x)]² + q(x)[φ_i(x)]² } dx
       = (1/h_{i−1})² ∫_{x_{i−1}}^{x_i} p(x) dx + (1/h_i)² ∫_{x_i}^{x_{i+1}} p(x) dx
         + (1/h_{i−1})² ∫_{x_{i−1}}^{x_i} (x − x_{i−1})² q(x) dx + (1/h_i)² ∫_{x_i}^{x_{i+1}} (x_{i+1} − x)² q(x) dx,
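The assembly of A and b can be sketched directly from these integrals. In the sketch below (a uniform mesh for simplicity), each integral is approximated by composite Simpson's rule rather than by the closed-form quadratures a production algorithm would use; `rayleigh_ritz`, the quadrature choice, and the demo problem −y'' + π²y = 2π² sin(πx) are my assumptions.

```python
import math

def simpson(g, lo, hi, m=20):
    """Composite Simpson's rule on 2*m subintervals of [lo, hi]."""
    step = (hi - lo) / (2 * m)
    s = g(lo) + g(hi)
    for k in range(1, 2 * m):
        s += (4 if k % 2 else 2) * g(lo + k * step)
    return s * step / 3

def rayleigh_ritz(p, q, f, n):
    """Piecewise-linear Rayleigh-Ritz for -(p y')' + q y = f on [0,1],
    y(0)=y(1)=0: assemble the a_ij, b_i integrals of the text on a uniform
    mesh and solve the symmetric tridiagonal system A c = b."""
    h = 1.0 / (n + 1)
    x = [j * h for j in range(n + 2)]
    dia, sup, rhs = [], [], []
    for i in range(1, n + 1):
        dia.append((simpson(p, x[i - 1], x[i]) + simpson(p, x[i], x[i + 1])) / h ** 2
                   + simpson(lambda s: (s - x[i - 1]) ** 2 * q(s), x[i - 1], x[i]) / h ** 2
                   + simpson(lambda s: (x[i + 1] - s) ** 2 * q(s), x[i], x[i + 1]) / h ** 2)
        rhs.append(simpson(lambda s: (s - x[i - 1]) * f(s), x[i - 1], x[i]) / h
                   + simpson(lambda s: (x[i + 1] - s) * f(s), x[i], x[i + 1]) / h)
        if i < n:  # a_{i,i+1}: phi_i' phi_{i+1}' = -1/h^2 on (x_i, x_{i+1})
            sup.append(-simpson(p, x[i], x[i + 1]) / h ** 2
                       + simpson(lambda s: (x[i + 1] - s) * (s - x[i]) * q(s),
                                 x[i], x[i + 1]) / h ** 2)
    sub = [0.0] + sup[:]                      # A is symmetric
    for i in range(1, n):                     # forward elimination
        m_ = sub[i] / dia[i - 1]
        dia[i] -= m_ * sup[i - 1]
        rhs[i] -= m_ * rhs[i - 1]
    c = [0.0] * n
    c[-1] = rhs[-1] / dia[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        c[i] = (rhs[i] - sup[i] * c[i + 1]) / dia[i]
    return x[1:n + 1], c

# demo (my test problem): -y'' + pi^2 y = 2 pi^2 sin(pi x); exact y = sin(pi x)
nodes, c = rayleigh_ritz(lambda s: 1.0,
                         lambda s: math.pi ** 2,
                         lambda s: 2 * math.pi ** 2 * math.sin(math.pi * s), 9)
```

The coefficients c_i approximate the nodal values of the exact solution to O(h²) for this smooth problem.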
(b) S(x_j) = f(x_j) for j = 0, 1, 2, 3, 4.  (5 specified conditions)
(c) S_{j+1}(x_{j+1}) = S_j(x_{j+1}) for j = 0, 1, 2.  (3 specified conditions)

Since uniqueness of solutions requires that the number of constants in (a), 16, must equal the number of conditions in (b) through (f), only one of the boundary conditions in (f) can be specified for the interpolatory cubic splines.
The cubic spline functions we will use for our basis functions are called B-splines, or bell-shaped splines. These splines differ from interpolatory splines in that both sets of boundary conditions in (f) are satisfied. This requires the relaxation of two of the conditions in (b) through (e); the interpolation conditions in (b) are weakened to

(b) S(x_j) = f(x_j) for j = 0, 2, 4.

The basic B-spline S defined below uses the equally spaced nodes x_0 = −2, x_1 = −1, x_2 = 0, x_3 = 1, x_4 = 2, and satisfies the interpolation conditions

(b) S(x_0) = 0, S(x_2) = 1, S(x_4) = 0,
as well as both sets of conditions

(i)  S''(x_0) = S''(x_4) = 0,
(ii) S'(x_0) = S'(x_4) = 0.

As a consequence, S ∈ C_0²(−∞, ∞).
S(x) = (1/4) · { 0,                                       x ≤ −2,
                 (2 − x)³ − 4(1 − x)³ − 6x³ + 4(1 + x)³,  −2 < x ≤ −1,
                 (2 − x)³ − 4(1 − x)³ − 6x³,              −1 < x ≤ 0,
                 (2 − x)³ − 4(1 − x)³,                    0 < x ≤ 1,
                 (2 − x)³,                                1 < x ≤ 2,
                 0,                                       x > 2.
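The piecewise definition of S translates directly into code; note that each successive branch simply adds one more cubic term to the previous one, so the evaluation can be cumulative. (The partition-of-unity check below, Σ_j S(x − j) = 3/2, reflects the normalization S(0) = 1 used here rather than the usual B-spline normalization; it is my added sanity check, not from the text.)

```python
def S(x):
    """The basic cubic B-spline of the text on the nodes -2, -1, 0, 1, 2,
    built cumulatively: each branch adds one cubic term to the previous one."""
    if x <= -2 or x > 2:
        return 0.0
    v = (2 - x) ** 3                # active on all of (-2, 2]
    if x <= 1:
        v -= 4 * (1 - x) ** 3       # added for x <= 1
    if x <= 0:
        v -= 6 * x ** 3             # added for x <= 0
    if x <= -1:
        v += 4 * (1 + x) ** 3       # added for x <= -1
    return v / 4.0
```

The interpolation conditions S(0) = 1, S(±1) = 1/4, S(±2) = 0 follow immediately from the branches, and adjacent branches agree at the interior nodes, so the cumulative form is continuous.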