
10.2 The Shooting Method for Nonlinear Problems

Consider the second-order differential equation with the following boundary values.
(10.9)   d²y/dt² = f(t, y, dy/dt),   a ≤ t ≤ b,   y(a) = α,   y(b) = β.

This looks similar to the linear case, except that the solution of the nonlinear problem cannot be expressed in the simple form used for the linear case. For a nonlinear problem the solution cannot be written as a linear combination of the solutions of two initial-value problems. We therefore need a different strategy: we use the solutions of a sequence of initial-value problems of the following form.
(10.10)   d²y/dt² = f(t, y, dy/dt),   a ≤ t ≤ b,   y(a) = α,   y'(a) = θ,

which involves a parameter θ used to approximate the solution of the second-order boundary-value problem under consideration. We choose the parameters θ = θ_k in such a way that

(10.11)   lim_{k→∞} y(b, θ_k) = y(b) = β,

where y(t, θ_k) is the solution of the initial-value problem (10.10) with θ = θ_k, and y(t) denotes the solution of the boundary-value problem (10.9).
This technique is called the "shooting" method, by analogy with the procedure of firing objects at a stationary target (see Fig. 10.2). We start with a parameter θ_0 that determines the initial elevation at which the object is fired from the point (a, α) along the curve described by the solution of the initial-value problem:
(10.12)   d²y/dt² = f(t, y, dy/dt),   a ≤ t ≤ b,   y(a) = α,   y'(a) = θ_0.

If y(b, θ_0) is not sufficiently close to β, we correct our approximation by choosing another elevation θ_1, and so on, until y(b, θ_k) is sufficiently close to "hitting" β (see Fig. 10.3).

To determine how the parameters θ_k can be chosen, suppose a boundary-value problem of the form (10.9) satisfies the hypotheses of Theorem 10.1. If y(t, θ) is used to denote the solution of the initial-value problem (10.10), the problem is to determine θ so that

(10.13)   y(b, θ) − β = 0.
Since this is a nonlinear equation of the type considered in Chapter 2, a number of methods are available. If we wish to employ the Secant method (Algorithm 2.4 of Section 2.3) to solve the problem, we need to choose initial approximations θ_0 and θ_1 and then generate the remaining terms of the sequence by

    θ_k = θ_{k−1} − ( y(b, θ_{k−1}) − β )( θ_{k−1} − θ_{k−2} ) / ( y(b, θ_{k−1}) − y(b, θ_{k−2}) ),   k = 2, 3, …
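As an illustration only (not part of the text), a minimal Python sketch of the shooting/Secant combination is given below. The hand-coded RK4 integrator, the step count, the tolerance, and the test problem y'' = (32 + 2t³ − y y')/8 with exact solution y = t² + 16/t are all assumptions made here; the starting slopes are deliberately chosen near the true value y'(1) = −14.

    import numpy as np

    def rk4_solve(f, a, b, alpha, theta, N=200):
        """Integrate y'' = f(t, y, y') on [a, b] with y(a) = alpha, y'(a) = theta,
        using classical RK4 on the equivalent first-order system u = (y, y')."""
        def F(t, u):
            return np.array([u[1], f(t, u[0], u[1])])
        h = (b - a) / N
        t = a
        u = np.array([alpha, theta], dtype=float)
        for _ in range(N):
            k1 = h * F(t, u)
            k2 = h * F(t + h / 2, u + k1 / 2)
            k3 = h * F(t + h / 2, u + k2 / 2)
            k4 = h * F(t + h, u + k3)
            u += (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t += h
        return u[0]                      # y(b, theta)

    def shoot_secant(f, a, b, alpha, beta, theta0, theta1, tol=1e-8, maxit=25):
        """Find theta with y(b, theta) = beta by the Secant iteration above."""
        F0 = rk4_solve(f, a, b, alpha, theta0) - beta
        F1 = rk4_solve(f, a, b, alpha, theta1) - beta
        for _ in range(maxit):
            theta2 = theta1 - F1 * (theta1 - theta0) / (F1 - F0)
            F2 = rk4_solve(f, a, b, alpha, theta2) - beta
            if abs(F2) < tol:
                return theta2
            theta0, F0, theta1, F1 = theta1, F1, theta2, F2
        return theta1                    # best available estimate

    # Illustrative test problem: y'' = (32 + 2 t**3 - y*y')/8, y(1) = 17, y(3) = 43/3.
    f = lambda t, y, yp: (32 + 2 * t**3 - y * yp) / 8
    print(shoot_secant(f, 1.0, 3.0, 17.0, 43.0 / 3.0, theta0=-12.0, theta1=-16.0))  # ≈ -14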

To use the more powerful Newton's method to generate the sequence {θ_k}, only one initial value, θ_0, is needed; however, the iteration has the form

(10.14)   θ_k = θ_{k−1} − ( y(b, θ_{k−1}) − β ) / ( (dy/dθ)(b, θ_{k−1}) ),

and it requires knowledge of (dy/dθ)(b, θ_{k−1}). This presents a difficulty, since an explicit representation for y(b, θ) is not known; we know only the values y(b, θ_0), y(b, θ_1), …, y(b, θ_{k−1}).
Suppose we rewrite the initial-value problem (10.10), emphasizing that the solution
depends on both t and θ:
(10.15)   d²y/dt²(t, θ) = f( t, y(t, θ), dy/dt(t, θ) ),   a ≤ t ≤ b,   y(a, θ) = α,   dy/dt(a, θ) = θ,

where we retain the prime notation to indicate differentiation with respect to t. Since we are interested in determining (dy/dθ)(b, θ) when θ = θ_{k−1}, we first take the partial derivative of (10.15) with respect to θ. This implies that

    ∂/∂θ ( d²y/dt² )(t, θ) = ∂f/∂θ ( t, y(t, θ), y'(t, θ) )
                           = (∂f/∂t)( t, y(t, θ), y'(t, θ) ) (∂t/∂θ)
                             + (∂f/∂y)( t, y(t, θ), y'(t, θ) ) (∂y/∂θ)(t, θ)
                             + (∂f/∂y')( t, y(t, θ), y'(t, θ) ) ∂/∂θ ( dy/dt )(t, θ).
Since t and θ are independent, ∂t/∂θ = 0 and

(10.16)   ∂/∂θ ( d²y/dt² )(t, θ) = (∂f/∂y)( t, y(t, θ), y'(t, θ) ) (∂y/∂θ)(t, θ) + (∂f/∂y')( t, y(t, θ), y'(t, θ) ) ∂/∂θ ( dy/dt )(t, θ),
for a ≤ t ≤ b. The initial conditions give

    (∂y/∂θ)(a, θ) = 0   and   ∂/∂θ ( dy/dt )(a, θ) = 1.

If we simplify the notation by using z(t, θ) to denote (∂y/∂θ)(t, θ), and assume that the order of differentiation with respect to t and θ can be reversed, Eq. (10.16) becomes the initial-value problem

(10.17)   d²z/dt² = (∂f/∂y)( t, y, dy/dt ) z + (∂f/∂y')( t, y, dy/dt ) dz/dt,   a ≤ t ≤ b,   z(a) = 0,   z'(a) = 1.
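Computationally, (10.15) and (10.17) are simply integrated together as a single first-order system in the four unknowns y, y', z, z'. A minimal sketch under that reading follows; the particular f, its partial derivatives, the interval, and the use of scipy's solve_ivp are illustrative assumptions, not taken from the text.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative right-hand side y'' = f(t, y, y') and its partial derivatives.
    def f(t, y, yp):    return (32 + 2 * t**3 - y * yp) / 8
    def f_y(t, y, yp):  return -yp / 8        # ∂f/∂y
    def f_yp(t, y, yp): return -y / 8         # ∂f/∂y'

    def augmented_rhs(t, u):
        """u = (y, y', z, z'): combines (10.15) with the sensitivity problem (10.17)."""
        y, yp, z, zp = u
        return [yp, f(t, y, yp), zp, f_y(t, y, yp) * z + f_yp(t, y, yp) * zp]

    a, b, alpha, theta = 1.0, 3.0, 17.0, -14.0          # theta = current slope guess
    sol = solve_ivp(augmented_rhs, (a, b), [alpha, theta, 0.0, 1.0], rtol=1e-9, atol=1e-9)
    y_b, z_b = sol.y[0, -1], sol.y[2, -1]               # y(b, θ) and z(b, θ) = (dy/dθ)(b, θ)
    print(y_b, z_b)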

Newton's method therefore requires that two initial-value problems, (10.15) and (10.17), be solved on each iteration. Then, from Eq. (10.14),

(10.18)   θ_k = θ_{k−1} − ( y(b, θ_{k−1}) − β ) / z(b, θ_{k−1}).
In practice, none of these initial-value problems is likely to be solved exactly; instead, the solutions are approximated by one of the methods discussed in Chapter 5. Algorithm 10.2 uses the fourth-order Runge-Kutta method to approximate both solutions required by Newton's method. A similar procedure for the Secant method is considered in Exercise 4.

Nonlinear Shooting Algorithm 10.2

To approximate the solution of the nonlinear boundary-value problem

    y'' = f(x, y, y'),   a ≤ x ≤ b,   y(a) = α,   y(b) = β:

(Note: Equations (10.15) and (10.17) are written as first-order systems and solved.)

INPUT endpoints a, b; boundary conditions α, β; number of subintervals N; tolerance TOL; maximum number of iterations M.

OUTPUT approximations w_{1,i} to y(x_i) and w_{2,i} to y'(x_i) for each i = 0, 1, …, N, or a message that the maximum number of iterations was exceeded.

Step 1 set h = (b-a)/N;


k = 1;
TK = ( β−α )/(b−a ) .

Step 2 while (k ≤M ) do steps 3-10.

Step 3 set
w 1,0=α ;
w 2,0=TK ;
u1 =0 ;
u2 =1.

Step 4 for i = 1,…, N do steps 5 and 6.


(The Runge-Kutta method for systems is used in Steps 5 and 6.)

Step 5 set x = a + (i-1)h.

Step 6 Set
    k1,1 = h w2,i−1;
    k1,2 = h f(x, w1,i−1, w2,i−1);
    k2,1 = h ( w2,i−1 + (1/2) k1,2 );
    k2,2 = h f( x + h/2, w1,i−1 + (1/2) k1,1, w2,i−1 + (1/2) k1,2 );
    k3,1 = h ( w2,i−1 + (1/2) k2,2 );
    k3,2 = h f( x + h/2, w1,i−1 + (1/2) k2,1, w2,i−1 + (1/2) k2,2 );
    k4,1 = h ( w2,i−1 + k3,2 );
    k4,2 = h f( x + h, w1,i−1 + k3,1, w2,i−1 + k3,2 );
    w1,i = w1,i−1 + ( k1,1 + 2 k2,1 + 2 k3,1 + k4,1 ) / 6;
    w2,i = w2,i−1 + ( k1,2 + 2 k2,2 + 2 k3,2 + k4,2 ) / 6;
    k′1,1 = h u2;
    k′1,2 = h [ fy(x, w1,i−1, w2,i−1) u1 + fy′(x, w1,i−1, w2,i−1) u2 ];
    k′2,1 = h ( u2 + (1/2) k′1,2 );
    k′2,2 = h [ fy(x + h/2, w1,i−1, w2,i−1)( u1 + (1/2) k′1,1 ) + fy′(x + h/2, w1,i−1, w2,i−1)( u2 + (1/2) k′1,2 ) ];
    k′3,1 = h ( u2 + (1/2) k′2,2 );
    k′3,2 = h [ fy(x + h/2, w1,i−1, w2,i−1)( u1 + (1/2) k′2,1 ) + fy′(x + h/2, w1,i−1, w2,i−1)( u2 + (1/2) k′2,2 ) ];
    k′4,1 = h ( u2 + k′3,2 );
    k′4,2 = h [ fy(x + h, w1,i−1, w2,i−1)( u1 + k′3,1 ) + fy′(x + h, w1,i−1, w2,i−1)( u2 + k′3,2 ) ];
    u1 = u1 + ( k′1,1 + 2 k′2,1 + 2 k′3,1 + k′4,1 ) / 6;
    u2 = u2 + ( k′1,2 + 2 k′2,2 + 2 k′3,2 + k′4,2 ) / 6.

Step 7 If |w1,N − β| ≤ TOL then do Steps 8 and 9.

Step 8 For i = 0, 1, …, N
    set x = a + ih;
    OUTPUT (x, w1,i, w2,i).
Step 9 (The procedure is complete.)
    STOP.

Step 10 Set TK = TK − ( w1,N − β ) / u1;
    (Newton's method is used to compute the new TK.)
    k = k + 1.

Step 11 OUTPUT ('Maximum number of iterations exceeded');


STOP.

In Step 7, the best approximation to β that we can expect for w1,N is O(hⁿ) if the approximation method selected for Step 6 has an O(hⁿ) rate of convergence. The value TK = (β − α)/(b − a) selected in Step 1 is the slope of the straight line through (a, α) and (b, β). If the problem satisfies the hypotheses of Theorem 10.1, any choice of this initial slope will give convergence; in general the procedure also works for many problems for which these hypotheses are not satisfied, although a good initial choice of TK is then necessary.
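For comparison only, and not as a substitute for Algorithm 10.2, the same Newton shooting idea can be written compactly: integrate (10.15) and (10.17) together with a fixed-step RK4 sweep and update TK by (10.18). The test problem, step count, and tolerance below are illustrative assumptions.

    import numpy as np

    def newton_shoot(f, fy, fyp, a, b, alpha, beta, N=100, tol=1e-8, M=25):
        """Nonlinear shooting with Newton's method: integrate u = (y, y', z, z')
        by RK4 and update the slope TK by (10.18) until |y(b) - beta| <= tol."""
        def rhs(t, u):
            y, yp, z, zp = u
            return np.array([yp, f(t, y, yp), zp, fy(t, y, yp) * z + fyp(t, y, yp) * zp])
        h = (b - a) / N
        TK = (beta - alpha) / (b - a)            # initial slope, as in Step 1
        for _ in range(M):
            u = np.array([alpha, TK, 0.0, 1.0])
            t = a
            for _ in range(N):                   # one RK4 sweep (compare Step 6)
                k1 = h * rhs(t, u)
                k2 = h * rhs(t + h / 2, u + k1 / 2)
                k3 = h * rhs(t + h / 2, u + k2 / 2)
                k4 = h * rhs(t + h, u + k3)
                u += (k1 + 2 * k2 + 2 * k3 + k4) / 6
                t += h
            if abs(u[0] - beta) <= tol:          # compare Step 7
                return TK
            TK -= (u[0] - beta) / u[2]           # Newton update (10.18); z(b) is u[2]
        raise RuntimeError("maximum number of iterations exceeded")

    # Illustrative problem: y'' = (32 + 2 t**3 - y*y')/8, y(1) = 17, y(3) = 43/3.
    f   = lambda t, y, yp: (32 + 2 * t**3 - y * yp) / 8
    fy  = lambda t, y, yp: -yp / 8
    fyp = lambda t, y, yp: -y / 8
    print(newton_shoot(f, fy, fyp, 1.0, 3.0, 17.0, 43.0 / 3.0))   # ≈ -14.0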

Although Newton's method used with the shooting technique requires the solution of an additional initial-value problem, it is generally faster than the Secant method. Both methods are only locally convergent, since they require good initial approximations whenever the hypotheses of Theorem 10.1 do not hold.
For a general discussion of the convergence of the shooting techniques for nonlinear problems, the reader is referred to the excellent text by Keller [75]. In this reference more general
boundary conditions are discussed, and it is also noted that the shooting technique for nonlinear
problems is sensitive to rounding errors, especially if the solutions y(x) and z(x) are rapidly
increasing functions on [a,b].

10.3 Finite-Difference Methods for Linear Problems

Although the shooting methods presented earlier in this chapter can be used for both linear and nonlinear boundary-value problems, they often suffer from instability. The methods presented here have better stability characteristics, but they generally require more work to attain a specified accuracy.
Methods involving finite differences for solving boundary-value problems replace each of the derivatives in the differential equation by an appropriate difference-quotient approximation of the type considered in Section 4.1. The difference quotient is generally chosen so that a certain order of truncation error is maintained.
The linear second-order boundary-value problem,
2
d y dy
= p (t ) +q(t ) y+r (t ),
(10.20) dt 2 dt a≤t≤b, y(a)=α , y(b )=β .
requires that difference-quotient approximations be used to approximate both y' and y''. To accomplish this, we select an integer N > 0 and divide the interval [a, b] into (N + 1) equal subintervals whose endpoints are the mesh points t_i = a + ih, for i = 0, 1, 2, …, N + 1, where h = (b − a)/(N + 1). Choosing the constant h in this manner facilitates the application of a matrix algorithm from Chapter 6, which in this form requires solving a linear system involving an N × N matrix.
At the interior mesh points t_i, i = 1, 2, …, N, the differential equation to be approximated is

(10.21)   d²y/dt²(t_i) = p(t_i) dy/dt(t_i) + q(t_i) y(t_i) + r(t_i).

Expanding y in a third-degree Taylor polynomial about t_i, evaluated at t_{i+1} and t_{i−1}, we have

(10.22)   y(t_{i+1}) = y(t_i + h) = y(t_i) + h y'(t_i) + (h²/2) y''(t_i) + (h³/6) y'''(t_i) + (h⁴/24) y⁽⁴⁾(ξ_i⁺),

for some ξ_i⁺ with t_i < ξ_i⁺ < t_{i+1}, and

(10.23)   y(t_{i−1}) = y(t_i − h) = y(t_i) − h y'(t_i) + (h²/2) y''(t_i) − (h³/6) y'''(t_i) + (h⁴/24) y⁽⁴⁾(ξ_i⁻),

for some ξ_i⁻ with t_{i−1} < ξ_i⁻ < t_i, assuming y ∈ C⁴[t_{i−1}, t_{i+1}]. If these equations are added, the terms involving y'(t_i) and y'''(t_i) are eliminated, and a simple algebraic manipulation gives

    y''(t_i) = (1/h²) [ y(t_{i+1}) − 2 y(t_i) + y(t_{i−1}) ] − (h²/24) [ y⁽⁴⁾(ξ_i⁺) + y⁽⁴⁾(ξ_i⁻) ].

The intermediate value theorem can be used to simplify this even further:

(10.24)   y''(t_i) = (1/h²) [ y(t_{i+1}) − 2 y(t_i) + y(t_{i−1}) ] − (h²/12) y⁽⁴⁾(ξ_i),

for some point ξ_i with t_{i−1} < ξ_i < t_{i+1}. Equation (10.24) is called the centered-difference formula for y''(t_i).
A centered-difference formula for y'(t_i) can be obtained in a similar manner (the details are considered in Section 4.1), resulting in

(10.25)   y'(t_i) = (1/(2h)) [ y(t_{i+1}) − y(t_{i−1}) ] − (h²/6) y'''(η_i),

for some η_i with t_{i−1} < η_i < t_{i+1}.
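A quick numerical check of (10.24) and (10.25) (purely illustrative; the test function y(t) = sin t and the point t = 1 are arbitrary choices) confirms the expected O(h²) behaviour:

    import numpy as np

    y = np.sin            # illustrative test function
    t = 1.0
    for h in (0.1, 0.05, 0.025):
        d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2     # centered approximation to y''(t)
        d1 = (y(t + h) - y(t - h)) / (2 * h)             # centered approximation to y'(t)
        # errors against y''(t) = -sin t and y'(t) = cos t shrink roughly like h**2
        print(h, abs(d2 + np.sin(t)), abs(d1 - np.cos(t)))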
The use of these centered-difference formulas in Eq. (10.21) results in the equation

    ( y(t_{i+1}) − 2 y(t_i) + y(t_{i−1}) ) / h²
        = p(t_i) [ ( y(t_{i+1}) − y(t_{i−1}) ) / (2h) ] + q(t_i) y(t_i) + r(t_i)
          − (h²/12) [ 2 p(t_i) y'''(η_i) − y⁽⁴⁾(ξ_i) ].
A finite-difference method with truncation error of order O(h²) results from using this equation, together with the boundary conditions y(a) = α and y(b) = β, to define

    y_0 = α,   y_{N+1} = β,

and

(10.26)   ( 2y_i − y_{i+1} − y_{i−1} ) / h² + p(t_i) ( ( y_{i+1} − y_{i−1} ) / (2h) ) + q(t_i) y_i = −r(t_i),

for each i = 1, 2, …, N.
In the form we will consider, Eq. (10.26) is rewritten as

    −( 1 + (h/2) p(t_i) ) y_{i−1} + ( 2 + h² q(t_i) ) y_i − ( 1 − (h/2) p(t_i) ) y_{i+1} = −h² r(t_i),
and the resulting system of equations is expressed in the tridiagonal N × N matrix form

(10.27)   T y = d,

where T is the tridiagonal N × N matrix with entries

    T(i, i)   = 2 + h² q(t_i),        i = 1, 2, …, N,
    T(i, i+1) = −1 + (h/2) p(t_i),    i = 1, 2, …, N − 1,
    T(i, i−1) = −1 − (h/2) p(t_i),    i = 2, 3, …, N,

and

    y = ( y_1, y_2, y_3, …, y_N )ᵀ,

    d = ( −h² r(t_1) + (1 + (h/2) p(t_1)) α,   −h² r(t_2),   −h² r(t_3),   …,   −h² r(t_{N−1}),   −h² r(t_N) + (1 − (h/2) p(t_N)) β )ᵀ.

The following theorem gives conditions under which the tridiagonal linear system (10.27) has a unique solution. Its proof is a consequence of Theorem 6.24 (p. 356) and is considered in Exercise 6.

Theorem 10.3
Suppose that p, q, and r are continuous on [a, b]. If q(x) ≥ 0 on [a, b], then the tridiagonal linear system (10.27) has a unique solution provided that h < 2/L, where L = max_{a≤x≤b} |p(x)|.
It should be noted that the hypotheses of Theorem 10.3 guarantee a unique solution to the boundary-value problem (10.20), but they do not guarantee that y ∈ C⁴[a, b]. It is necessary to establish that y⁽⁴⁾ is continuous on [a, b] in order to ensure that the truncation error has order O(h²).

Algorithm: Solution of a Boundary-Value Differential Equation Using the Finite-Difference Method

To approximate the solution of the boundary-value problem

    y'' = p(t) y' + q(t) y + r(t),   a ≤ t ≤ b,   y(a) = α,   y(b) = β.

INPUT endpoints a, b; boundary conditions α, β; integer N.

OUTPUT approximations w_i to y(t_i) for each i = 0, 1, …, N + 1.

Step 1 Input the endpoints of the interval in t, namely a and b, and the boundary values of y, namely y(a) = α and y(b) = β.

Step 2 Compute h = (b − a)/(N + 1);
    t = a + h;
    a_1 = 2 + h² q(t);
    b_1 = −1 + (h/2) p(t);
    d_1 = −h² r(t) + (1 + (h/2) p(t)) α.
    Set i = 2.
Step 3 For each i = 2, …, N − 1
    set t = a + ih;
    a_i = 2 + h² q(t);
    b_i = −1 + (h/2) p(t);
    c_i = −1 − (h/2) p(t);
    d_i = −h² r(t).

Step 4 Set t = b − h;
    a_N = 2 + h² q(t);
    c_N = −1 − (h/2) p(t);
    d_N = −h² r(t) + (1 − (h/2) p(t)) β.

Step 5 Call the Thomas algorithm to solve the resulting tridiagonal system for y_1, …, y_N.

Step 6 For i = 0, 1, …, N + 1 set t = a + ih;
    OUTPUT (t, w_i), where w_0 = α, w_{N+1} = β, and w_i = y_i for i = 1, …, N.

Step 7 STOP. (The procedure is complete.)
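As a sketch only (not the algorithm above), the same linear finite-difference method can be written compactly in Python; scipy's banded solver is used here in place of the Thomas algorithm, and the test problem and mesh size are illustrative choices.

    import numpy as np
    from scipy.linalg import solve_banded

    def linear_fd_bvp(p, q, r, a, b, alpha, beta, N):
        """Solve y'' = p(t) y' + q(t) y + r(t), y(a) = alpha, y(b) = beta,
        with the O(h^2) scheme (10.26)-(10.27) on N interior mesh points."""
        h = (b - a) / (N + 1)
        t = a + h * np.arange(1, N + 1)                 # interior mesh points t_1..t_N
        main = 2 + h**2 * q(t)                          # diagonal of T
        sup = -1 + (h / 2) * p(t)                       # superdiagonal entries
        sub = -1 - (h / 2) * p(t)                       # subdiagonal entries
        d = -h**2 * r(t)
        d[0] += (1 + (h / 2) * p(t[0])) * alpha         # boundary contributions
        d[-1] += (1 - (h / 2) * p(t[-1])) * beta
        ab = np.zeros((3, N))                           # banded storage for solve_banded
        ab[0, 1:] = sup[:-1]
        ab[1, :] = main
        ab[2, :-1] = sub[1:]
        y = solve_banded((1, 1), ab, d)
        return np.concatenate(([a], t, [b])), np.concatenate(([alpha], y, [beta]))

    # Illustrative test: y'' = -(2/t) y' + (2/t**2) y + sin(ln t)/t**2, y(1) = 1, y(2) = 2.
    p = lambda t: -2 / t
    q = lambda t: 2 / t**2
    r = lambda t: np.sin(np.log(t)) / t**2
    ts, ys = linear_fd_bvp(p, q, r, 1.0, 2.0, 1.0, 2.0, N=9)
    print(ys)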

10.4 Finite-Difference Methods for Nonlinear Problems


For the general nonlinear boundary-value problem

(10.28)   d²y/dt² = f(t, y, dy/dt),   a ≤ t ≤ b,   y(a) = α,   y(b) = β,
the difference method is similar to the method applied to linear problems in Section 10.3. Here,
however, the system of equations that is derived will not be linear, so an iterative process is
required to solve it.
For the development of the procedure, we will assume throughout that f satisfies the following conditions:

i) f and the partial derivatives f_y ≡ ∂f/∂y and f_{y'} ≡ ∂f/∂y' are all continuous on

       D = { (t, y, y') : a ≤ t ≤ b, −∞ < y < ∞, −∞ < y' < ∞ };

ii) f_y(t, y, y') ≥ δ on D for some δ > 0;

iii) constants k and L exist with

       k = max_{(t,y,y')∈D} |f_y(t, y, y')|,   L = max_{(t,y,y')∈D} |f_{y'}(t, y, y')|.
This will ensure, by Theorem 10.1, that a unique solution to Eq.(10.28) exists.
As in the linear case, we divide [a, b] into (N + 1) equal subintervals whose endpoints are at t_i = a + ih for i = 0, 1, …, N + 1. Assuming that the exact solution has a bounded fourth derivative allows us to replace y''(t_i) and y'(t_i) in each of the equations
(10.29)   d²y/dt²(t_i) = f( t_i, y(t_i), dy/dt(t_i) )
by the appropriate centered-difference formulas given in Eqs. (10.24) and (10.25), to obtain, for each i = 1, 2, …, N,
(10.30)   ( y(t_{i+1}) − 2 y(t_i) + y(t_{i−1}) ) / h²
              = f( t_i, y(t_i), ( y(t_{i+1}) − y(t_{i−1}) )/(2h) − (h²/6) y'''(η_i) ) + (h²/12) y⁽⁴⁾(ξ_i),
for some i and i in the interval (ti–1,ti+1).
As in the linear case, the difference method results when the error terms are deleted and the boundary conditions are employed:

    y_0 = α,   y_{N+1} = β,
and

    −( y_{i+1} − 2y_i + y_{i−1} ) / h² + f( t_i, y_i, ( y_{i+1} − y_{i−1} )/(2h) ) = 0,

for each i = 1, 2, …, N.


The N × N nonlinear system obtained from this method,

(10.31)   2y_1 − y_2 + h² f( t_1, y_1, (y_2 − α)/(2h) ) − α = 0,
          −y_1 + 2y_2 − y_3 + h² f( t_2, y_2, (y_3 − y_1)/(2h) ) = 0,
              ⋮
          −y_{N−2} + 2y_{N−1} − y_N + h² f( t_{N−1}, y_{N−1}, (y_N − y_{N−2})/(2h) ) = 0,
          −y_{N−1} + 2y_N + h² f( t_N, y_N, (β − y_{N−1})/(2h) ) − β = 0,
will have a unique solution provided that h < 2/L, as shown in the previously referenced book by
Keller [75], page 86.
To approximate the solution of this system we use Newton's method for nonlinear systems, discussed in Section 9.2. A sequence of iterates { (y_1^(k), y_2^(k), …, y_N^(k))ᵀ } is generated that converges to the solution of system (10.31), provided that the initial approximation (y_1^(0), y_2^(0), …, y_N^(0))ᵀ is sufficiently close to the solution and that the Jacobian matrix for the system, defined in Eq. (9.11), is nonsingular. For the system (10.31) the Jacobian matrix J, given in (10.32), is tridiagonal, and the assumptions presented at the beginning of this discussion ensure that J is nonsingular.
Newton's method for nonlinear systems requires that at each iteration the N × N linear system

    J(y_1, y_2, …, y_N)(v_1, v_2, …, v_N)ᵀ
        = −( 2y_1 − y_2 + h² f( t_1, y_1, (y_2 − α)/(2h) ) − α,
             −y_1 + 2y_2 − y_3 + h² f( t_2, y_2, (y_3 − y_1)/(2h) ),
             …,
             −y_{N−2} + 2y_{N−1} − y_N + h² f( t_{N−1}, y_{N−1}, (y_N − y_{N−2})/(2h) ),
             −y_{N−1} + 2y_N + h² f( t_N, y_N, (β − y_{N−1})/(2h) ) − β )ᵀ

be solved for (v_1, v_2, …, v_N)ᵀ, since

    y_i^(k) = y_i^(k−1) + v_i,   for each i = 1, 2, …, N.
Since J is tridiagonal, this is not as formidable a problem as it might at first appear; the Crout reduction algorithm for tridiagonal systems (Algorithm 6.7) can easily be applied at each iteration. The entire process is detailed in the corresponding nonlinear finite-difference algorithm (not reproduced here).
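Purely as an illustration (not the text's algorithm), the Newton iteration on (10.31) might be sketched as follows; a generic banded solver stands in for the Crout reduction, the straight line through (a, α) and (b, β) is used as the starting vector, and the test problem is an arbitrary choice.

    import numpy as np
    from scipy.linalg import solve_banded

    def nonlinear_fd_bvp(f, fy, fyp, a, b, alpha, beta, N, tol=1e-8, maxit=25):
        """Newton's method for the nonlinear finite-difference system (10.31)."""
        h = (b - a) / (N + 1)
        t = a + h * np.arange(1, N + 1)
        y = alpha + (beta - alpha) * (t - a) / (b - a)      # straight-line initial guess
        for _ in range(maxit):
            yl = np.concatenate(([alpha], y[:-1]))          # y_{i-1}, with y_0 = alpha
            yr = np.concatenate((y[1:], [beta]))            # y_{i+1}, with y_{N+1} = beta
            s = (yr - yl) / (2 * h)                         # centered slope argument
            F = -yl + 2 * y - yr + h**2 * f(t, y, s)        # residuals of (10.31)
            if np.max(np.abs(F)) < tol:
                break
            diag = 2 + h**2 * fy(t, y, s)                   # tridiagonal Jacobian entries
            sup = -1 + (h / 2) * fyp(t, y, s)               # dF_i/dy_{i+1}
            sub = -1 - (h / 2) * fyp(t, y, s)               # dF_i/dy_{i-1}
            ab = np.zeros((3, N))
            ab[0, 1:] = sup[:-1]
            ab[1, :] = diag
            ab[2, :-1] = sub[1:]
            y = y + solve_banded((1, 1), ab, -F)            # solve J v = -F, then update
        return np.concatenate(([a], t, [b])), np.concatenate(([alpha], y, [beta]))

    # Illustrative test: y'' = (32 + 2 t**3 - y*y')/8, y(1) = 17, y(3) = 43/3.
    f   = lambda t, y, yp: (32 + 2 * t**3 - y * yp) / 8
    fy  = lambda t, y, yp: -yp / 8
    fyp = lambda t, y, yp: -y / 8
    ts, ys = nonlinear_fd_bvp(f, fy, fyp, 1.0, 3.0, 17.0, 43.0 / 3.0, N=19)
    print(ys)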

10.5 Rayleigh-Ritz Methods


An important linear two-point boundary-value problem in beam-stress analysis is given by the
differential equation
(10.33)   −(d/dx)( p(x) dy/dx ) + q(x) y = f(x),   for 0 ≤ x ≤ 1,

with the boundary conditions

(10.34)   y(0) = y(1) = 0.
This differential equation describes the deflection y(x) of a beam of length one with variable cross section represented by q(x). The deflection is due to the added stresses p(x) and f(x).
In the discussion that follows we assume that p ∈ C¹[0, 1] and q, f ∈ C[0, 1]. Further, we require that there exists a constant δ > 0 such that

    p(x) ≥ δ   for 0 ≤ x ≤ 1,

and that

    q(x) ≥ 0   for 0 ≤ x ≤ 1.

These assumptions are sufficient to guarantee that the boundary-value problem (10.33)-(10.34) has a unique solution.
As is the case with many boundary-value problems that describe physical phenomena, the solution of the beam equation satisfies a variational property. The variational principle for the beam equation is fundamental to the development of the Rayleigh-Ritz method: it characterizes the solution of the beam equation as the function that minimizes a certain integral over all functions in C₀²[0, 1], the set of those functions u in C²[0, 1] with the property that u(0) = u(1) = 0. The following theorem gives the characterization. The proof of this theorem, while not difficult, is lengthy; it can be found in Schultz [119], pp. 88-89.

Theorem 10.4  Let p ∈ C¹[0, 1], q, f ∈ C[0, 1], and

(10.35)   p(x) ≥ δ > 0,   q(x) ≥ 0,   for 0 ≤ x ≤ 1.

The function y ∈ C₀²[0, 1] is the unique solution of the differential equation

(10.36)   −(d/dx)( p(x) dy/dx ) + q(x) y = f(x),   0 ≤ x ≤ 1,

if and only if y is the unique function in C₀²[0, 1] that minimizes the integral

(10.37)   I[u] = ∫₀¹ { p(x)[u'(x)]² + q(x)[u(x)]² − 2 f(x) u(x) } dx.
In the Rayleigh-Ritz procedure the integral I is minimized not over all functions in C₀²[0, 1] but over a smaller set of functions consisting of linear combinations of certain basis functions φ_1, φ_2, …, φ_n. The basis functions {φ_i}, i = 1, …, n, must be linearly independent and satisfy

    φ_i(0) = φ_i(1) = 0,   for each i = 1, 2, …, n.


An approximation φ(x) = Σ_{i=1}^{n} c_i φ_i(x) to the solution y(x) of Eq. (10.36) is obtained by finding constants c_1, c_2, …, c_n that minimize I[ Σ_{i=1}^{n} c_i φ_i ].
From Eq. (10.37),

(10.38)   I[φ] = I[ Σ_{i=1}^{n} c_i φ_i ]
             = ∫₀¹ { p(x) [ Σ_{i=1}^{n} c_i φ_i'(x) ]² + q(x) [ Σ_{i=1}^{n} c_i φ_i(x) ]² − 2 f(x) Σ_{i=1}^{n} c_i φ_i(x) } dx,
and, for a minimum to occur, it is necessary, when considering I as a function of c_1, c_2, …, c_n, to have

(10.39)   ∂I/∂c_j = 0,   for each j = 1, 2, …, n.
Differentiating (10.38) gives

    ∂I/∂c_j = ∫₀¹ { 2 p(x) Σ_{i=1}^{n} c_i φ_i'(x) φ_j'(x) + 2 q(x) Σ_{i=1}^{n} c_i φ_i(x) φ_j(x) − 2 f(x) φ_j(x) } dx,
and substituting into Eq. (10.39) yields

(10.40)   0 = Σ_{i=1}^{n} [ ∫₀¹ { p(x) φ_i'(x) φ_j'(x) + q(x) φ_i(x) φ_j(x) } dx ] c_i − ∫₀¹ f(x) φ_j(x) dx,

for each j = 1, 2, …, n.

The equations described in (10.40) form a linear system Ac = b, where the symmetric matrix A is given by

    a_{ij} = ∫₀¹ { p(x) φ_i'(x) φ_j'(x) + q(x) φ_i(x) φ_j(x) } dx,

and b is defined by

    b_i = ∫₀¹ f(x) φ_i(x) dx.
The first choice of basis functions we will discuss involves piecewise-linear polynomials. The first step is to form a partition of [0, 1] by choosing points x_0, x_1, …, x_{n+1} with

    0 = x_0 < x_1 < ⋯ < x_n < x_{n+1} = 1.
Letting h_i = x_{i+1} − x_i for each i = 0, 1, …, n, we define the basis functions φ_1(x), φ_2(x), …, φ_n(x) by

(10.41)   φ_i(x) =  { 0,                          0 ≤ x ≤ x_{i−1},
                     { (x − x_{i−1}) / h_{i−1},    x_{i−1} < x ≤ x_i,
                     { (x_{i+1} − x) / h_i,        x_i < x ≤ x_{i+1},
                     { 0,                          x_{i+1} < x ≤ 1,

for each i = 1, 2, …, n (see Fig. 10.4).

Since the functions φ_i are piecewise linear, the derivatives φ_i', while not continuous, are constant on the open subintervals (x_j, x_{j+1}) for each j = 0, 1, …, n. Thus, we have

(10.42)   φ_i'(x) = { 0,             0 ≤ x ≤ x_{i−1},
                     { 1 / h_{i−1},   x_{i−1} < x ≤ x_i,
                     { −1 / h_i,      x_i < x ≤ x_{i+1},
                     { 0,             x_{i+1} < x ≤ 1,

for each i = 1, 2, …, n.

Because φ_i and φ_i' are nonzero only on (x_{i−1}, x_{i+1}), φ_i(x) φ_j(x) ≡ 0 and φ_i'(x) φ_j'(x) ≡ 0 except when j is i − 1, i, or i + 1. As a consequence, the linear system (10.40) reduces to an n × n tridiagonal linear system. The nonzero entries in A are given by
    a_{ii} = ∫₀¹ { p(x) [φ_i'(x)]² + q(x) [φ_i(x)]² } dx

           = (1/h_{i−1})² ∫_{x_{i−1}}^{x_i} p(x) dx + (1/h_i)² ∫_{x_i}^{x_{i+1}} p(x) dx

             + (1/h_{i−1})² ∫_{x_{i−1}}^{x_i} (x − x_{i−1})² q(x) dx + (1/h_i)² ∫_{x_i}^{x_{i+1}} (x_{i+1} − x)² q(x) dx,

for each i = 1, 2, …, n;


    a_{i,i+1} = ∫₀¹ { p(x) φ_i'(x) φ_{i+1}'(x) + q(x) φ_i(x) φ_{i+1}(x) } dx

              = −(1/h_i)² ∫_{x_i}^{x_{i+1}} p(x) dx + (1/h_i)² ∫_{x_i}^{x_{i+1}} (x_{i+1} − x)(x − x_i) q(x) dx,

for each i = 1, 2, …, n − 1; and
    a_{i,i−1} = ∫₀¹ { p(x) φ_i'(x) φ_{i−1}'(x) + q(x) φ_i(x) φ_{i−1}(x) } dx

              = −(1/h_{i−1})² ∫_{x_{i−1}}^{x_i} p(x) dx + (1/h_{i−1})² ∫_{x_{i−1}}^{x_i} (x_i − x)(x − x_{i−1}) q(x) dx,

for each i = 2, 3, …, n. The entries in b are given by
    b_i = ∫₀¹ f(x) φ_i(x) dx

        = (1/h_{i−1}) ∫_{x_{i−1}}^{x_i} (x − x_{i−1}) f(x) dx + (1/h_i) ∫_{x_i}^{x_{i+1}} (x_{i+1} − x) f(x) dx,

for each i = 1, 2, …, n. The entries in c are the unknown coefficients c_1, c_2, …, c_n, from which the Rayleigh-Ritz approximation φ, given by

    φ(x) = Σ_{i=1}^{n} c_i φ_i(x),

is constructed.
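As an illustrative sketch (not the text's Algorithm 10.5), the tridiagonal system Ac = b can be assembled directly from the formulas above, with the integrals evaluated by numerical quadrature; equally spaced nodes, scipy's quad routine, and the test problem are assumptions made here for brevity.

    import numpy as np
    from scipy.integrate import quad
    from scipy.linalg import solve_banded

    def rayleigh_ritz_linear(p, q, f, x, n):
        """Assemble and solve Ac = b for the piecewise-linear Rayleigh-Ritz method
        on nodes 0 = x_0 < x_1 < ... < x_{n+1} = 1, using the entry formulas above."""
        h = np.diff(x)                                   # h_i = x_{i+1} - x_i
        diag, sup, sub, b = (np.zeros(n) for _ in range(4))
        for i in range(1, n + 1):                        # 1-based index as in the text
            hl, hr = h[i - 1], h[i]
            xl, xm, xr = x[i - 1], x[i], x[i + 1]
            diag[i - 1] = (quad(p, xl, xm)[0] / hl**2 + quad(p, xm, xr)[0] / hr**2
                           + quad(lambda s: (s - xl)**2 * q(s), xl, xm)[0] / hl**2
                           + quad(lambda s: (xr - s)**2 * q(s), xm, xr)[0] / hr**2)
            b[i - 1] = (quad(lambda s: (s - xl) * f(s), xl, xm)[0] / hl
                        + quad(lambda s: (xr - s) * f(s), xm, xr)[0] / hr)
            if i < n:                                    # a_{i,i+1}
                sup[i - 1] = (-quad(p, xm, xr)[0] / hr**2
                              + quad(lambda s: (xr - s) * (s - xm) * q(s), xm, xr)[0] / hr**2)
            if i > 1:                                    # a_{i,i-1}
                sub[i - 1] = (-quad(p, xl, xm)[0] / hl**2
                              + quad(lambda s: (xm - s) * (s - xl) * q(s), xl, xm)[0] / hl**2)
        ab = np.zeros((3, n))
        ab[0, 1:] = sup[:-1]
        ab[1, :] = diag
        ab[2, :-1] = sub[1:]
        return solve_banded((1, 1), ab, b)               # coefficients c_1, ..., c_n

    # Illustrative test: -y'' + pi^2 y = 2 pi^2 sin(pi x), y(0) = y(1) = 0, exact y = sin(pi x).
    n = 9
    x = np.linspace(0.0, 1.0, n + 2)
    c = rayleigh_ritz_linear(lambda s: 1.0, lambda s: np.pi**2,
                             lambda s: 2 * np.pi**2 * np.sin(np.pi * s), x, n)
    print(np.max(np.abs(c - np.sin(np.pi * x[1:-1]))))   # small, since c_i ≈ y(x_i)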

Piecewise Linear Rayleigh-Ritz Algorithm 10.5 (skipped)


It can be shown that the tridiagonal matrix A given by the piecewise-linear basis functions is positive definite, so, by Theorem 6.22 on page 337, the linear system is stable. Under the hypotheses presented at the beginning of this section we have

    |φ(x) − y(x)| = O(h²),   0 ≤ x ≤ 1.
The use of piecewise-linear basis functions results in an approximate solution to Eqs. (10.33) and (10.34) that is continuous but not differentiable on [0, 1]. A more complicated set of basis functions is required to construct an approximation that belongs to C₀²[0, 1]. These basis functions are similar to the cubic interpolatory splines that were discussed in Section 3.6.
A cubic interpolatory spline S on the five nodes x0, x1, x2, x3, and x4 for a function f is defined by:

a) S is a cubic polynomial, denoted by S_j, on [x_j, x_{j+1}] for j = 0, 1, 2, 3. (This gives 16 selectable constants, four for each cubic.)

b) S(x_j) = f(x_j) for j = 0, 1, 2, 3, 4.                 (5 specified conditions)

c) S_{j+1}(x_{j+1}) = S_j(x_{j+1}) for j = 0, 1, 2.        (3 specified conditions)

d) S'_{j+1}(x_{j+1}) = S'_j(x_{j+1}) for j = 0, 1, 2.      (3 specified conditions)

e) S''_{j+1}(x_{j+1}) = S''_j(x_{j+1}) for j = 0, 1, 2.    (3 specified conditions)
f) One of the following boundary conditions is satisfied:

   i) Free:    S''(x_0) = S''(x_4) = 0.                        (2 specified conditions)

   ii) Clamped: S'(x_0) = f'(x_0) and S'(x_4) = f'(x_4).       (2 specified conditions)

Since uniqueness of solutions requires that the number of constants in (a), 16, must equal the
number of conditions in (b) through (f), only one of the boundary conditions in (f) can be
specified for the interpolatory cubic splines.
The cubic spline functions we will use for our basis functions are called B-splines, or bell-shaped splines. These splines differ from interpolatory splines in that both sets of boundary conditions in (f) are satisfied. This requires the relaxation of two of the conditions in (b) through (e). Since the spline must have two continuous derivatives on [x_0, x_4], the only reasonable choice is to modify (b) to

    (b)  S(x_j) = f(x_j) for j = 0, 2, 4.
The basic B-spline S defined below uses the equally spaced nodes x_0 = −2, x_1 = −1, x_2 = 0, x_3 = 1, and x_4 = 2. It satisfies the interpolatory conditions

    (b)  S(x_0) = 0,   S(x_2) = 1,   S(x_4) = 0,

as well as both sets of boundary conditions

    i)  S''(x_0) = S''(x_4) = 0,
    ii) S'(x_0) = S'(x_4) = 0.

As a consequence, S ∈ C₀²(−∞, ∞).

            { 0,                                                    x ≤ −2,
            { [ (2 − x)³ − 4(1 − x)³ − 6x³ + 4(1 + x)³ ] / 4,       −2 < x ≤ −1,
    S(x) =  { [ (2 − x)³ − 4(1 − x)³ − 6x³ ] / 4,                   −1 < x ≤ 0,
            { [ (2 − x)³ − 4(1 − x)³ ] / 4,                          0 < x ≤ 1,
            { (2 − x)³ / 4,                                          1 < x ≤ 2,
            { 0,                                                     x > 2.

To construct the basis functions φ_i in C₀²[0, 1] we first partition [0, 1] by choosing a positive integer n and defining h = 1/(n + 1). This produces the equally spaced nodes x_i = ih, for each i = 0, 1, 2, …, n + 1. We then define S_i by S_i(x) = S( (x − x_i)/h ) for each i = 0, 1, 2, …, n + 1. The graph of a typical S_i is shown in Figure 10.5.
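A direct transcription of the piecewise definition of S and of the scaled functions S_i(x) = S((x − x_i)/h) might look as follows (an illustrative sketch; the plot of Figure 10.5 is not reproduced):

    import numpy as np

    def S(x):
        """The basic bell-shaped cubic B-spline on the nodes -2, -1, 0, 1, 2."""
        x = np.atleast_1d(np.asarray(x, dtype=float))
        out = np.zeros_like(x)
        m = (x > -2) & (x <= -1)
        out[m] = ((2 - x[m])**3 - 4 * (1 - x[m])**3 - 6 * x[m]**3 + 4 * (1 + x[m])**3) / 4
        m = (x > -1) & (x <= 0)
        out[m] = ((2 - x[m])**3 - 4 * (1 - x[m])**3 - 6 * x[m]**3) / 4
        m = (x > 0) & (x <= 1)
        out[m] = ((2 - x[m])**3 - 4 * (1 - x[m])**3) / 4
        m = (x > 1) & (x <= 2)
        out[m] = (2 - x[m])**3 / 4
        return out

    def S_i(i, n):
        """S_i(x) = S((x - x_i)/h) for the equally spaced nodes x_i = i h, h = 1/(n+1)."""
        h = 1.0 / (n + 1)
        return lambda x: S((np.asarray(x, dtype=float) - i * h) / h)

    print(S([-2.0, 0.0, 2.0]))     # interpolatory conditions (b): [0. 1. 0.]
    print(S_i(1, 9)(0.1))          # S_1 peaks at x_1 = 0.1: [1.]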
