
Ordinary Differential Equations

Now that we have studied differentiation and integration in isolation, let's look at applying these concepts to differential equations. One of the most famous ordinary differential equations is Newton's Second Law:

F = ma (Force = mass × acceleration)


which may be reorganized slightly as:

\frac{dx}{dt} = v, \qquad \frac{dv}{dt} = \frac{F}{m} \qquad \text{or equivalently} \qquad m\,\frac{d^2x}{dt^2} = F

where x(t) is position and v(t) is velocity. By integrating the differential equation, we may find the velocity and position at any time T:

v(T) = v(t = 0) + \int_0^T \frac{F}{m}\, dt

x(T) = x(t = 0) + \int_0^T v(t)\, dt

The differential equation describes the rate of change of the variables with respect to time.

Consider the ordinary differential equation:

y' = f(t, y)

with

y(t = a) = y_0 \quad \text{and} \quad t \in [a, b]

The solution to the differential equation may be computed analytically:

y(t) = y_0 + \int_a^t f(t, y)\, dt

Instead of trying to compute the value of y for all t \in [a, b], we choose a set of evenly spaced points in time t_k = a + h k, where h = (b - a)/N, so that:

a = t_0 < t_1 < t_2 < \ldots < t_N = b


We will only compute y(t) at the points t = t_k. Starting from the initial value y(t = a) = y_0, integrate the function f(t, y) to find the next value, y(t = a + h):

y(a + h) = y(a) + \int_a^{a+h} f(t, y)\, dt

In general:

y(t_k + h) = y(t_k) + \int_{t_k}^{t_k + h} f(t, y)\, dt

Expand y(t) in a Taylor series about t = t_k:

y(t) = y(t_k) + (t - t_k)\, y'(t_k) + \frac{(t - t_k)^2}{2}\, y''(c)

But the differential equation tells us that y'(t_k) = f(t_k, y_k), so that:

y(t) = y(t_k) + (t - t_k)\, f(t_k, y_k) + \frac{(t - t_k)^2}{2}\, y''(c)

Therefore, we can compute y(t_k + h) = y(t_{k+1}) from:

y(t_{k+1}) = y(t_k) + (t_{k+1} - t_k)\, f(t_k, y_k) = y(t_k) + h\, f(t_k, y_k)

with a local error of order h^2. This is Euler's method.
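As a concrete illustration, here is a minimal MATLAB sketch of Euler's method. The right-hand side y' = -2y, the interval, the step count and the initial value are assumptions chosen only to have something to integrate; they are not part of the notes.

    % Euler's method: y_{k+1} = y_k + h*f(t_k, y_k)
    f = @(t, y) -2*y;          % illustrative right-hand side (assumed)
    a = 0;  b = 2;  N = 40;    % interval [a, b] and number of steps (assumed)
    h = (b - a)/N;
    t = a + h*(0:N);           % evenly spaced points t_k = a + h*k
    y = zeros(1, N+1);
    y(1) = 1;                  % initial condition y(t = a) = y0 (assumed)
    for k = 1:N
        y(k+1) = y(k) + h*f(t(k), y(k));
    end
    plot(t, y, 'o-', t, exp(-2*t), '-');   % compare with the exact solution exp(-2t)
    legend('Euler', 'exact');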

Error
As in numerical integration, there are two types of discretization error:

Global discretization error:

e_k = \underbrace{y(t_k)}_{\text{exact}} - \underbrace{y_k}_{\text{approx.}}

Local discretization error (the bracketed quantity is one step of the numerical method starting from y_{k-1}):

\varepsilon_k = \underbrace{y(t_k)}_{\text{exact}} - \underbrace{\left[\, y_{k-1} + \int_{t_{k-1}}^{t_k} f(t, y)\, dt \,\right]}_{\text{approx.}}

Consider Euler's method:

\varepsilon_k = y(t_k) - \big( y_{k-1} + h\, f(t_{k-1}, y_{k-1}) \big) = \frac{h^2}{2}\, y''(c) \qquad \text{(local truncation error)}

Summing the error over the N intervals between a and b yields the global error:

\sum_{k=1}^{N} \varepsilon_k = \sum_{k=1}^{N} \frac{h^2}{2}\, y''(c) = \frac{N h^2}{2}\, y''(c_1) = \frac{(b - a)\, h}{2}\, y''(c_1) = O(h)
Therefore, for Euler's method, e_k = O(h) and \varepsilon_k = O(h^2). The total error for Euler's method is:

E = |y(b) - y_N| = \frac{(b - a)\, h}{2}\, y''(c_1) = O(h)
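One way to see the O(h) behaviour in practice is to halve h repeatedly and watch the error at t = b halve as well. A small sketch, assuming the test problem y' = -y with exact solution e^{-t} (not taken from the notes):

    f = @(t, y) -y;   yexact = @(t) exp(-t);   % illustrative test problem (assumed)
    a = 0;  b = 1;  y0 = 1;
    for N = [10 20 40 80]
        h = (b - a)/N;  y = y0;
        for k = 1:N
            y = y + h*f(a + (k-1)*h, y);       % one Euler step from t_{k-1}
        end
        fprintf('N = %3d   h = %.4f   |y(b) - yN| = %.2e\n', N, h, abs(yexact(b) - y));
    end
    % The error column should roughly halve each time h is halved, i.e. E = O(h).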

Multistep Methods
Euler's method is simple but not very accurate. To improve the method, the integral \int_{t_k}^{t_{k+1}} f(t, y)\, dt must be approximated using f(t, y) at more time locations. Multistep methods accomplish this by including f(t_{k-1}, y_{k-1}), f(t_{k-2}, y_{k-2}), \ldots. Say that we know the value of y(t) at t_{k-1} and t_k. Let's use those values of y to compute y(t_{k+1}):

y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} f(t, y)\, dt

Our job is to approximate the integral of f(t, y) between t_k and t_{k+1}. What to do? Interpolate f(t, y) using the values of f at t_{k-1} and t_k, then integrate the interpolant from t_k to t_{k+1}. Let's do it. Interpolate:

P(t) = f(t_{k-1}, y_{k-1})\, \frac{t - t_k}{t_{k-1} - t_k} + f(t_k, y_k)\, \frac{t - t_{k-1}}{t_k - t_{k-1}}

But we assume that the points are evenly spaced in time, so that t_k - t_{k-1} = h:

P(t) = -\frac{f_{k-1}}{h} (t - t_k) + \frac{f_k}{h} (t - t_{k-1})

Integrate from tk to tk+1:

y_{k+1} = y_k + \int_{t_k}^{t_{k+1}} f(t, y)\, dt \;\approx\; y_k + \int_{t_k}^{t_{k+1}} P(t)\, dt
        = y_k + \int_{t_k}^{t_{k+1}} \left[ -\frac{f_{k-1}}{h} (t - t_k) + \frac{f_k}{h} (t - t_{k-1}) \right] dt

Change variables to s = t - t_k. (This implies that ds = dt.)

y_{k+1} = y_k + \int_0^h \left[ -\frac{f_{k-1}}{h}\, s + \frac{f_k}{h} (s + h) \right] ds
        = y_k + \left[ -\frac{f_{k-1}}{h}\, \frac{s^2}{2} + \frac{f_k}{h} \left( \frac{s^2}{2} + h s \right) \right]_0^h
        = y_k - \frac{f_{k-1}}{h}\, \frac{h^2}{2} + \frac{f_k}{h} \left( \frac{h^2}{2} + h^2 \right)

This gives us the second-order accurate Adams-Bashforth method:

y_{k+1} = y_k + h \left[ \frac{3}{2} f(t_k, y_k) - \frac{1}{2} f(t_{k-1}, y_{k-1}) \right]

Adams-Bashforth Methods
The general Adams-Bashforth method may be written:

y_{k+1} = y_k + h \sum_{n=0}^{s-1} w_n\, f(t_{k-n}, y_{k-n})

where the sum runs over the s most recent points of an s-step method.

The methods are labeled according to the global discretization error:

First Order (Euler):

y_{k+1} = y_k + h\, f(t_k, y_k)

Second Order:

y_{k+1} = y_k + h \left[ \frac{3}{2} f(t_k, y_k) - \frac{1}{2} f(t_{k-1}, y_{k-1}) \right]

Third Order:

y_{k+1} = y_k + h \left[ \frac{23}{12} f(t_k, y_k) - \frac{16}{12} f(t_{k-1}, y_{k-1}) + \frac{5}{12} f(t_{k-2}, y_{k-2}) \right]

These methods require uniform steps in time (i.e., t_{k+1} - t_k = h for all k) and, except for Euler's method, require starting values (y_0 and y_1 for the second-order method; y_0, y_1 and y_2 for the third-order method). Euler just needs y_0.
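A sketch of the second-order Adams-Bashforth method in MATLAB, using a single Euler step to generate the starting value y_1 (a Runge-Kutta step would also work). The right-hand side, interval and initial value are illustrative assumptions:

    f = @(t, y) -2*y + cos(t);   % illustrative right-hand side (assumed)
    a = 0;  b = 5;  N = 100;  h = (b - a)/N;
    t = a + h*(0:N);
    y = zeros(1, N+1);
    y(1) = 0;                                 % y_0 (assumed initial condition)
    y(2) = y(1) + h*f(t(1), y(1));            % starting value y_1 from one Euler step
    for k = 2:N
        % second-order Adams-Bashforth step
        y(k+1) = y(k) + h*( (3/2)*f(t(k), y(k)) - (1/2)*f(t(k-1), y(k-1)) );
    end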

Implicit Multistep Methods


Implicit methods can be useful to solve ordinary differential equations that are stiff, e.g.

x' = -x, \qquad x(0) = 1
y' = -20y, \qquad y(0) = 1

The solution to this system is x(t) = exp(-t), y(t) = exp(-20t). Note that y changes much more quickly than x.

With an explicit method (one that only uses past or present information to compute x_{n+1} and y_{n+1}), the time step will be restricted by the behavior of y(t). However, an implicit method can take relatively larger time steps and still properly compute the behavior of x(t) and y(t). Note that if f(t, y) is nonlinear, an implicit method will require the solution of a system of nonlinear equations, leading to the use of Newton's method or a similar algorithm.
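Because this stiff system is linear, the backward Euler equations can be solved by hand: x_{n+1} = x_n/(1 + h) and y_{n+1} = y_n/(1 + 20h). A sketch contrasting forward and backward Euler at a step size deliberately chosen beyond the explicit stability limit (h = 0.15 is an arbitrary illustrative choice):

    h = 0.15;  N = 40;  t = h*(0:N);          % h = 0.15 exceeds the explicit limit h <= 0.1 for y' = -20y
    xe = ones(1, N+1);  ye = ones(1, N+1);    % explicit (forward) Euler
    xi = ones(1, N+1);  yi = ones(1, N+1);    % implicit (backward) Euler
    for n = 1:N
        % forward Euler: u_{n+1} = u_n + h*f(t_n, u_n)
        xe(n+1) = xe(n) + h*(-xe(n));
        ye(n+1) = ye(n) + h*(-20*ye(n));
        % backward Euler: u_{n+1} = u_n + h*f(t_{n+1}, u_{n+1}), solved exactly for this linear system
        xi(n+1) = xi(n)/(1 + h);
        yi(n+1) = yi(n)/(1 + 20*h);
    end
    % ye oscillates and grows (|1 - 20h| > 1), while yi decays monotonically like exp(-20t).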

Adams-Moulton methods
These implicit multistep methods follow the same approach as the (explicit) Adams-Bashforth formulas:
y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} f(t, y)\, dt

However, they include f (tn+1, yn+1) in computing an approximation of the integral. Approximating the integral in this way leads immediately to two possibilities:

Backward Euler, O(h):

y_{n+1} = y_n + h\, f(t_{n+1}, y_{n+1})

Trapezoidal Rule, O(h^2):

y_{n+1} = y_n + \frac{h}{2} \left[ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right]

To find higher-order formulas, we can interpolate a polynomial through f(t_{n+1}, y_{n+1}), f(t_n, y_n), f(t_{n-1}, y_{n-1}), \ldots and then integrate that polynomial from t_n to t_{n+1}. The third-order Adams-Moulton method is:

Third Order:

y_{n+1} = y_n + h \left[ \frac{5}{12} f(t_{n+1}, y_{n+1}) + \frac{8}{12} f(t_n, y_n) - \frac{1}{12} f(t_{n-1}, y_{n-1}) \right]
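A sketch of the trapezoidal (second-order Adams-Moulton) rule for a scalar problem. Each step requires solving an implicit equation for y_{n+1}; here MATLAB's scalar root finder fzero is used, seeded with the previous value. The test equation is an assumption for illustration:

    f = @(t, y) -5*y + sin(t);   % illustrative right-hand side (assumed)
    a = 0;  b = 4;  N = 80;  h = (b - a)/N;
    t = a + h*(0:N);
    y = zeros(1, N+1);  y(1) = 1;             % assumed initial condition
    for n = 1:N
        % solve g(z) = z - y_n - (h/2)*( f(t_n, y_n) + f(t_{n+1}, z) ) = 0 for z = y_{n+1}
        g = @(z) z - y(n) - (h/2)*( f(t(n), y(n)) + f(t(n+1), z) );
        y(n+1) = fzero(g, y(n));
    end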

Backward Differentiation methods


Here, we take a different approach to solving for y_{n+1}. Instead of approximating \int_{t_k}^{t_{k+1}} f(t, y)\, dt, we go back to the differential equation and approximate dy/dt:

y' = f(t, y) \quad \text{with} \quad y(t = a) = y_0
Approximate the derivative dy/dt at t = tn+1 using one of our one-sided, backward difference formulas from the last chapter:

\frac{y_{n+1} - y_n}{h} \approx f(t_{n+1}, y_{n+1})


This is actually the same backward Euler formula we found as the first-order Adams-Moulton method. We can construct higher-order backward differentiation methods using higher-order differentiation formulas:

Second Order:

\frac{3}{2} y_{n+1} - 2 y_n + \frac{1}{2} y_{n-1} = h\, f(t_{n+1}, y_{n+1})

Third Order:

\frac{11}{6} y_{n+1} - 3 y_n + \frac{3}{2} y_{n-1} - \frac{1}{3} y_{n-2} = h\, f(t_{n+1}, y_{n+1})

Runge-Kutta Methods
Let's now return to the problem of computing:

y' = f(t, y), \qquad y(t + h) = y(t) + \int_t^{t+h} f(t, y)\, dt

We can compute a more accurate integral by including more values of f(t, y) for t \in [t_n, t_{n+1}] (rather than using values to the left of t_n as in multistep methods). Two possible approximations of the integral:

Trapezoidal rule:

y(t + h) = y(t) + \frac{h}{2} \left[ f(t, y(t)) + f(t + h, y(t + h)) \right]

Midpoint rule:

y(t + h) = y(t) + h\, f\!\left(t + \tfrac{h}{2},\, y(t + \tfrac{h}{2})\right)

We can approximate y(t + h) and y(t + h/2) using Euler's method:

Modified Euler:

y(t + h) = y(t) + \frac{h}{2} \left[ f(t, y) + f(t + h,\, y + h\, f(t, y)) \right]

Midpoint method:

y(t + h) = y(t) + h\, f\!\left(t + \tfrac{h}{2},\, y + \tfrac{h}{2}\, f(t, y)\right)

We can write the midpoint method as:

k_1 = h\, f(t, y(t))
k_2 = h\, f\!\left(t + \tfrac{h}{2},\, y(t) + \tfrac{1}{2} k_1\right)
y(t + h) = y(t) + k_2

Fourth-Order Runge-Kutta
This is one of the most popular algorithms for solving ordinary differential equations. Solve for four different values of f (t , y):

k_1 = h\, f(t_n, y_n)
k_2 = h\, f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{1}{2} k_1\right)
k_3 = h\, f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{1}{2} k_2\right)
k_4 = h\, f(t_n + h,\, y_n + k_3)


We can then integrate from tn to tn + h using:

y_{n+1} = y_n + \frac{1}{6} \left[ k_1 + 2 k_2 + 2 k_3 + k_4 \right]

This scheme has a global error of O(h^4) and a local error of O(h^5). We have been able to construct a more accurate approximation of y_{n+1} by sampling f(t, y) a few times within the interval [t_n, t_n + h].
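A sketch of the classical fourth-order Runge-Kutta loop for a scalar equation; the right-hand side, interval, step count and initial value are assumptions chosen only for illustration:

    f = @(t, y) t - y;           % illustrative right-hand side (assumed)
    a = 0;  b = 3;  N = 30;  h = (b - a)/N;
    t = a + h*(0:N);
    y = zeros(1, N+1);  y(1) = 1;             % assumed initial condition
    for n = 1:N
        k1 = h*f(t(n),       y(n));
        k2 = h*f(t(n) + h/2, y(n) + k1/2);
        k3 = h*f(t(n) + h/2, y(n) + k2/2);
        k4 = h*f(t(n) + h,   y(n) + k3);
        y(n+1) = y(n) + (k1 + 2*k2 + 2*k3 + k4)/6;
    end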

Example
Let's try to use the fourth-order Runge-Kutta formula to solve the following second-order ordinary differential equation.

y'' + A y = B \cos(\omega t)

To solve this, we convert it to a system of two first-order ODEs:

\begin{bmatrix} y \\ v \end{bmatrix}' = \begin{bmatrix} 0 & 1 \\ -A & 0 \end{bmatrix} \begin{bmatrix} y \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ B \cos(\omega t) \end{bmatrix}

where v = y'. So, we can consider x' = f(t, x) where:

x = \begin{bmatrix} y \\ v \end{bmatrix} \quad \text{and} \quad f(t, x) = \begin{bmatrix} 0 & 1 \\ -A & 0 \end{bmatrix} \begin{bmatrix} y \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ B \cos(\omega t) \end{bmatrix}

We can now apply the fourth-order Runge-Kutta method to this problem.
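Since the right-hand side is now a vector, the same RK4 loop applies with y replaced by x = [y; v]. The constants A, B and ω and the initial conditions are not specified in the notes, so the values below are placeholders:

    A = 4;  B = 1;  w = 2;                         % placeholder values (not given in the notes)
    f = @(t, x) [0 1; -A 0]*x + [0; B*cos(w*t)];   % x = [y; v]
    a = 0;  b = 10;  N = 200;  h = (b - a)/N;
    t = a + h*(0:N);
    x = zeros(2, N+1);
    x(:,1) = [1; 0];                               % placeholder initial conditions y(0) = 1, v(0) = 0
    for n = 1:N
        k1 = h*f(t(n),       x(:,n));
        k2 = h*f(t(n) + h/2, x(:,n) + k1/2);
        k3 = h*f(t(n) + h/2, x(:,n) + k2/2);
        k4 = h*f(t(n) + h,   x(:,n) + k3);
        x(:,n+1) = x(:,n) + (k1 + 2*k2 + 2*k3 + k4)/6;
    end
    plot(t, x(1,:));   % displacement y(t)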

Boundary Value Problems


Up to now, we have considered initial value problems, in which the differential equation is specified along with initial conditions for each of the dependent variables. Another type of problem is the boundary value problem, in which a differential equation is valid on a specific interval and boundary values are given for the variables under consideration:

y'' = f(t, y, y'), \qquad t \in [a, b], \qquad \text{with } y(a) = \alpha,\; y(b) = \beta

where \alpha and \beta are the boundary conditions on y. If this were an initial value problem, two initial conditions y(t = a) and y'(t = a) would be required. As long as f(t, y, y') is differentiable and has continuous derivatives, the solution is unique.

Solving Boundary Value Problems: Shooting methods


It may be hard to solve the boundary value problem directly. However, we know how to solve initial value problems. Let's convert the boundary value problem:

y'' = f(t, y, y'), \qquad t \in [a, b], \qquad \text{with } y(a) = \alpha,\; y(b) = \beta

into the initial value problem:

y'' = f(t, y, y'), \qquad t \in [a, b], \qquad \text{with } y(a) = \alpha,\; y'(a) = \;???

We don't have an initial value for y'(a), so we try a number of values for y'(a) and integrate each resulting y(t) until we find one with y(t = b) = \beta.

Algorithm

Integrate the ODE using fourth-order Runge-Kutta with y(a) = \alpha and y'(a) = C. Evaluate y(t = b) and compare with the boundary condition y(b) = \beta. Choose a new C and repeat until y(t = b) = \beta. Note: use the bisection algorithm to find the correct value of C.

Shooting Methods: An Example


Solve the following boundary value problem:

y'' + 2y = \sin(t) \qquad \text{with} \qquad y(0) = 0, \quad y(1) = 0


Formulate as an initial value problem:

y'' + 2y = \sin(t) \qquad \text{with} \qquad y(0) = 0, \quad y'(0) = C


We want to find the value of C which will give y(1) = 0. Use MATLAB's ode45 to integrate from t = 0 to t = 1.
The bisection iteration produces the following trial values:

y(0)    y'(0) = C    y(1)
0       -2           -1.0126
0       -1           -0.3141
0        0            0.3844
0       -0.5          0.0351
0       -0.75        -0.1395
0       -0.625       -0.0522
0       -0.5625      -0.0085

[Figure: shooting method with y(0) = 0; find y'(0) = c such that y(1) = 0. Plot of y(t) on 0 <= t <= 1 for the bisection iterates c1 = -0.5, c2 = -0.75, c3 = -0.625, c4 = -0.5625.]
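A sketch of this shooting iteration using ode45 and bisection. It assumes the equation as reconstructed above, y'' + 2y = sin(t), written as the first-order system [y; y']' = [y'; sin(t) - 2y]; the bisection bracket follows the sign change visible in the table:

    rhs = @(t, x) [x(2); sin(t) - 2*x(1)];     % x = [y; y'], assuming y'' + 2y = sin(t)
    Clo = -2;  Chi = 0;                        % bracket: y(1) changes sign between C = -2 and C = 0
    for iter = 1:30
        C = (Clo + Chi)/2;
        [~, X] = ode45(rhs, [0 1], [0; C]);    % integrate with y(0) = 0, y'(0) = C
        if X(end, 1) > 0                       % y(1) too large -> reduce C (y(1) increases with C)
            Chi = C;
        else
            Clo = C;
        end
    end
    fprintf('C = %.6f, y(1) = %.2e\n', C, X(end, 1));
    % The iterates should approach the value suggested by the table (about -0.55).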

Solving Boundary Value Problems: Relaxation


Return to the boundary value problem:

y'' = f(t, y, y'), \qquad t \in [a, b], \qquad \text{with } y(a) = \alpha,\; y(b) = \beta

Here, we will discretize the differential equation using finite differences and then solve the resulting system either with Gaussian elimination or iteratively. Let's first consider the solution when f(t, y, y') is linear. Then, assume the differential equation becomes:

y'' = p(t)\, y' + q(t)\, y + r(t) \qquad \text{with} \qquad y(a) = \alpha,\; y(b) = \beta


Let's discretize the derivatives using finite difference approximations:

y''(t_i) = \frac{y(t_i + h) - 2 y(t_i) + y(t_i - h)}{h^2} + O(h^2)

y'(t_i) = \frac{y(t_i + h) - y(t_i - h)}{2h} + O(h^2)

Then, we can approximate our boundary value problem as:

\frac{y(t_i + h) - 2 y(t_i) + y(t_i - h)}{h^2} = p(t_i)\, \frac{y(t_i + h) - y(t_i - h)}{2h} + q(t_i)\, y(t_i) + r(t_i)

This equation will be valid for a < ti < b. However, when t = a or t = b, we have to apply the boundary conditions:

y(t = a) = \alpha \quad \text{and} \quad y(t = b) = \beta

This results in the following system of equations (with t_0 = a, t_1 = a + h, t_2 = a + 2h, \ldots, t_{N-1} = b - h, t_N = b):

y_0 = \alpha

\frac{y_2 - 2 y_1 + y_0}{h^2} = p(t_1)\, \frac{y_2 - y_0}{2h} + q(t_1)\, y_1 + r(t_1)

\frac{y_3 - 2 y_2 + y_1}{h^2} = p(t_2)\, \frac{y_3 - y_1}{2h} + q(t_2)\, y_2 + r(t_2)

\vdots

\frac{y_{N-1} - 2 y_{N-2} + y_{N-3}}{h^2} = p(t_{N-2})\, \frac{y_{N-1} - y_{N-3}}{2h} + q(t_{N-2})\, y_{N-2} + r(t_{N-2})

\frac{y_N - 2 y_{N-1} + y_{N-2}}{h^2} = p(t_{N-1})\, \frac{y_N - y_{N-2}}{2h} + q(t_{N-1})\, y_{N-1} + r(t_{N-1})

y_N = \beta

We can rearrange the equations to gather the coefficients of each y_i:

\left(-1 - \frac{h}{2}\, p(t_i)\right) y_{i-1} + \left(2 + h^2 q(t_i)\right) y_i + \left(-1 + \frac{h}{2}\, p(t_i)\right) y_{i+1} = -h^2\, r(t_i)

The system of equations may be written in matrix form:

\begin{bmatrix}
1 & 0 & 0 & 0 & \cdots & 0 \\
-1 - \frac{h}{2} p(t_1) & 2 + h^2 q(t_1) & -1 + \frac{h}{2} p(t_1) & 0 & \cdots & 0 \\
0 & -1 - \frac{h}{2} p(t_2) & 2 + h^2 q(t_2) & -1 + \frac{h}{2} p(t_2) & \cdots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
=
\begin{bmatrix} \alpha \\ -h^2 r(t_1) \\ -h^2 r(t_2) \\ \vdots \\ \beta \end{bmatrix}
This system may be solved using Gaussian elimination, or iteratively using Jacobi, Gauss-Seidel or SOR (successive over-relaxation). If p, q and r are continuous on the interval [a, b] and q(x) \ge 0 for a \le x \le b, then the above system has a unique solution provided h < 2/L, where L = \max_{x \in [a,b]} |p(x)|.
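A sketch of this finite-difference approach for a linear problem, assembling the system row by row and solving it with MATLAB's backslash operator. The coefficient functions p, q, r and the boundary values are placeholders chosen only for illustration:

    p = @(t) 0*t;  q = @(t) 2 + 0*t;  r = @(t) sin(t);   % placeholder p, q, r (assumed)
    a = 0;  b = 1;  alpha = 0;  beta = 0;                % boundary values y(a) = alpha, y(b) = beta (assumed)
    N = 50;  h = (b - a)/N;  t = a + h*(0:N)';           % grid t_0 .. t_N
    M   = zeros(N+1);  rhs = zeros(N+1, 1);
    M(1,1)     = 1;    rhs(1)   = alpha;                 % y_0 = alpha
    M(end,end) = 1;    rhs(end) = beta;                  % y_N = beta
    for i = 2:N                                          % interior rows (grid points t_1 .. t_{N-1})
        M(i, i-1) = -1 - (h/2)*p(t(i));
        M(i, i  ) =  2 + h^2*q(t(i));
        M(i, i+1) = -1 + (h/2)*p(t(i));
        rhs(i)    = -h^2*r(t(i));
    end
    y = M \ rhs;                                         % solve the tridiagonal system
    plot(t, y);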

Nonlinear Systems
y'' = f(t, y, y'), \qquad t \in [a, b], \qquad y(t = a) = \alpha, \quad y(t = b) = \beta
When f is nonlinear, the solution becomes more difficult and requires iteration. We can proceed as before to set up the system (now of nonlinear equations):

\frac{y(t_i - h) - 2 y(t_i) + y(t_i + h)}{h^2} = f\!\left(t_i,\; y(t_i),\; \frac{y(t_i + h) - y(t_i - h)}{2h}\right)


Then, solve the following system of nonlinear equations using Newton's method:

(with t_0 = a, t_1 = a + h, t_2 = a + 2h, \ldots, t_{N-1} = b - h, t_N = b)

y_0 = \alpha

-y_2 + 2 y_1 - y_0 + h^2 f\!\left(t_1, y_1, \frac{y_2 - y_0}{2h}\right) = 0

-y_3 + 2 y_2 - y_1 + h^2 f\!\left(t_2, y_2, \frac{y_3 - y_1}{2h}\right) = 0

\vdots

-y_{N-1} + 2 y_{N-2} - y_{N-3} + h^2 f\!\left(t_{N-2}, y_{N-2}, \frac{y_{N-1} - y_{N-3}}{2h}\right) = 0

-y_N + 2 y_{N-1} - y_{N-2} + h^2 f\!\left(t_{N-1}, y_{N-1}, \frac{y_N - y_{N-2}}{2h}\right) = 0

y_N = \beta
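These interior equations define a nonlinear system F(y_1, ..., y_{N-1}) = 0. As a compact sketch, the system can be handed to fsolve (from MATLAB's Optimization Toolbox) in place of a hand-written Newton iteration; the particular f, boundary values and initial guess below are placeholders:

    f = @(t, y, yp) y.^3 + yp;                  % placeholder nonlinear f(t, y, y') (assumed)
    a = 0;  b = 1;  alpha = 0;  beta = 1;       % placeholder boundary values (assumed)
    N = 40;  h = (b - a)/N;
    t = a + h*(1:N-1)';                         % interior grid points t_1 .. t_{N-1}
    left  = @(y) [alpha; y(1:end-1)];           % vector of y_{i-1} values (y_0 = alpha at the left end)
    right = @(y) [y(2:end); beta];              % vector of y_{i+1} values (y_N = beta at the right end)
    % residual of -y_{i+1} + 2 y_i - y_{i-1} + h^2 f(t_i, y_i, (y_{i+1} - y_{i-1})/(2h)) = 0
    F = @(y) -right(y) + 2*y - left(y) + h^2*f(t, y, (right(y) - left(y))/(2*h));
    y0 = alpha + (beta - alpha)*(t - a)/(b - a);   % straight-line initial guess
    y  = fsolve(F, y0);                         % Newton-type solve (requires the Optimization Toolbox)
    yfull = [alpha; y; beta];                   % solution on the full grid t_0 .. t_N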
