
Numerical Methods for Differential Equations

Chapter 2: Runge–Kutta and Linear Multistep methods

Gustaf Söderlind and Carmen Arévalo


Numerical Analysis, Lund University

Textbooks: A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles
and Introduction to Mathematical Modelling with Differential Equations, by Lennart Edsberg


© Gustaf Söderlind, Numerical Analysis, Mathematical Sciences, Lund University, 2008-09



Chapter 2: contents

◮ Solving nonlinear equations


◮ Fixed points
◮ Newton’s method
◮ Quadrature
◮ Runge–Kutta methods
◮ Embedded RK methods and adaptivity
◮ Implicit Runge–Kutta methods
◮ Stability and the stability function
◮ Linear multistep methods



1. Solving nonlinear equations f(x) = 0
We can have a single equation

x − cos(x) = 0

or a system

4x² − y² = 0
4xy² − x = 1

Nonlinear equations may have


no solution; one solution; any finite number of solutions;
infinitely many solutions



Iteration and convergence
Nonlinear equations are solved iteratively. One computes
a sequence {x[k]} of approximations to the root x∗

For f(x∗) = 0, let e[k] = x[k] − x∗

Definition The method converges if lim_{k→∞} ‖e[k]‖ = 0

Definition The convergence is


◮ linear if ‖e[k+1]‖ ≤ c · ‖e[k]‖ with 0 < c < 1
◮ quadratic if ‖e[k+1]‖ ≤ c · ‖e[k]‖^p with p = 2
◮ superlinear if p > 1
◮ cubic if p = 3, etc.
2. Fixed points

Definition x is called a fixed point of the function g if

x = g(x)

Definition A function g is called contractive if

‖g(x) − g(y)‖ ≤ L[g] · ‖x − y‖

with L[g] < 1 for all x, y in the domain of g



Fixed Point Theorem

Theorem Assume that g is continuously differentiable on


the compact interval I, i.e., g ∈ C¹(I)

◮ If g : I → I, then there exists an x∗ ∈ I such that x∗ = g(x∗)


◮ If in addition L[g] < 1 on I, then x∗ is unique, and . . .
◮ . . . the iteration
xn+1 = g(xn)
converges to the fixed point x∗ for all x0 ∈ I

Note Both conditions are absolutely essential!



Fixed Point Theorem. Existence and uniqueness

[Figure: three iteration diagrams for g on [0, 1]]

Left No condition satisfied: maybe no x∗


Center First condition satisfied: maybe multiple x∗
Right Both conditions satisfied: unique x∗
Fixed point iteration. x = e^{−x}; g(x) = e^{−x}

x[k+1] = e^{−x[k]};  x[0] = 0

g : [0, 1] → [0, 1];  |g′(x)| = e^{−x} < 1 for x > 0

[Figure: cobweb plot of the iteration converging to the fixed point x∗ ≈ 0.567]

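A minimal Python sketch of this iteration (the function name, tolerance and iteration cap are our own choices, not from the slides):

    import math

    def fixed_point(g, x0, tol=1e-12, kmax=100):
        """Iterate x[k+1] = g(x[k]) until successive iterates agree to tol."""
        x = x0
        for k in range(kmax):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new, k + 1
            x = x_new
        return x, kmax

    # Fixed point of g(x) = exp(-x) on [0, 1]
    root, iters = fixed_point(lambda x: math.exp(-x), 0.0)
    print(root, iters)   # x* ~ 0.5671; convergence is only linear (L[g] ~ 0.57)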


Error estimation in fixed point iteration

Assume ‖g′(x)‖ ≤ L < 1 for all x ∈ I.

x[k+1] = g(x[k])
x∗ = g(x∗)

By the mean value theorem,

x[k+1] − x∗ = g(x[k]) − g(x∗)
            = g(x[k]) − g(x[k+1]) + g(x[k+1]) − g(x∗)

‖x[k+1] − x∗‖ ≤ L · ‖x[k] − x[k+1]‖ + L · ‖x[k+1] − x∗‖



Error estimation . . .

(1 − L) · ‖x[k+1] − x∗‖ ≤ L · ‖x[k] − x[k+1]‖

Error estimate (computable error bound)

‖x[k+1] − x∗‖ ≤ (L/(1 − L)) · ‖x[k] − x[k+1]‖

Theorem If L[g] < 1, then the error in fixed point iteration
is bounded by

‖x[k+1] − x∗‖ ≤ (L[g]/(1 − L[g])) · ‖x[k] − x[k+1]‖



Example: The trapezoidal rule
Approximate the solution to y′ = −y² cos t, y(0) = 1/2, t ∈ [0, 8π].

Taking 96 steps, solve the nonlinear equations with one (left) and
four (right) fixed point iterations. Graphs show the absolute error.

[Figure: absolute error curves; maximum ≈ 0.7 with one iteration (left), ≈ 0.01 with four (right)]



3. Newton’s method
Newton’s method solves f(x) = 0 using repeated
linearizations. Linearize at the point (x[k], f(x[k]))!

[Figure: f and its tangent at (x[k], f(x[k])); the tangent’s zero gives the next iterate]



Newton’s method . . .

Straight line equation

y − f(x[k]) = f′(x[k]) · (x − x[k])

Define x = x[k+1] ⇒ y = 0, so that

−f(x[k]) = f′(x[k]) · (x[k+1] − x[k])

Solve for x = x[k+1] to get Newton’s method

x[k+1] = x[k] − f(x[k]) / f′(x[k])

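A Python sketch of the scalar iteration, applied (as our own illustration) to the equation x − cos(x) = 0 from the start of the chapter:

    import math

    def newton(f, fprime, x0, tol=1e-12, kmax=20):
        """Newton's method: x[k+1] = x[k] - f(x[k]) / f'(x[k])."""
        x = x0
        for k in range(kmax):
            dx = f(x) / fprime(x)
            x -= dx
            if abs(dx) < tol:
                return x, k + 1
        return x, kmax

    root, iters = newton(lambda x: x - math.cos(x),
                         lambda x: 1 + math.sin(x), 1.0)
    print(root, iters)   # ~0.739085 after only a handful of iterations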


Newton’s method. Alternative derivation

Expand f(x[k+1]) in a Taylor series around x[k]

f(x[k+1]) = f(x[k] + (x[k+1] − x[k]))
          ≈ f(x[k]) + f′(x[k]) · (x[k+1] − x[k]) := 0

Newton’s method

x[k+1] = x[k] − f(x[k]) / f′(x[k])



Newton’s method. Alternative derivation . . .

This holds also when f is vector valued (equation systems)

f(x[k]) + f′(x[k]) · (x[k+1] − x[k]) := 0

x[k+1] = x[k] − f′(x[k])⁻¹ f(x[k])

Definition f′(x[k]) is the Jacobian matrix of f, defined by

f′(x) = (∂f_i / ∂x_j)



Newton’s method. Convergence
Write Newton’s method as a fixed point iteration with
iteration function g(x) := x − f(x)/f′(x)

x[k+1] = g(x[k])

Note Newton’s method converges fast if f′(x∗) ≠ 0, as

g′(x∗) = f(x∗)f′′(x∗)/f′(x∗)² = 0

Expand g(x) in a Taylor series around x∗

g(x[k]) − g(x∗) ≈ g′(x∗)(x[k] − x∗) + (g′′(x∗)/2)(x[k] − x∗)²

x[k+1] − x∗ ≈ (g′′(x∗)/2)(x[k] − x∗)²
Newton’s method. Convergence . . .

Define the error by ε[k] = x[k] − x∗; then

ε[k+1] ∼ (ε[k])²

Newton’s method is quadratically convergent!

Fixed point iterations are typically only linearly convergent:

ε[k+1] ∼ ε[k]



Implicit Euler. Newton vs Fixed point

As yn+1 = yn + hf(yn+1), we need to solve an equation

y = hf(y) + ψ

Note All implicit methods lead to an equation of this form!

Theorem Fixed point iterations converge if L[hf] < 1,
restricting the step size to h < 1/L[f]!

Stiff equations have L[hf] ≫ 1, so fixed point iterations will
not converge; it is necessary to use Newton’s method!

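A sketch of one implicit Euler step for a scalar autonomous problem y′ = f(y), with the stage equation solved by Newton's method; the explicit Euler predictor and the stiff test problem are our own choices:

    def implicit_euler_step(f, fprime, yn, h, tol=1e-12, kmax=20):
        """One implicit Euler step: solve F(y) = y - h f(y) - yn = 0 by Newton."""
        y = yn + h * f(yn)                 # explicit Euler predictor
        for _ in range(kmax):
            dy = (y - h * f(y) - yn) / (1.0 - h * fprime(y))
            y -= dy
            if abs(dy) < tol:
                break
        return y

    # Stiff test problem y' = -100 y: Newton converges although h*L[f] = 10
    y1 = implicit_euler_step(lambda y: -100.0 * y, lambda y: -100.0, 1.0, 0.1)
    print(y1)   # exact step value 1/(1 + 100*0.1) = 1/11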


Convergence order and rate
Definition The convergence order is p, with (asymptotic)
error constant Cp, if

0 < lim_{k→∞} ‖ε[k+1]‖ / ‖ε[k]‖^p = Cp < ∞

Special cases:

p = 1  Linear convergence
Example Fixed point iteration: Cp = |g′(x∗)|

p = 2  Quadratic convergence
Example Newton iteration: Cp = f′′(x∗)/(2f′(x∗))



4. Quadrature (integration) formulas
Numerical “quadrature” is the approximation of definite integrals

I(f) = ∫_a^b f(x) dx

We replace the integral (an “infinite sum”) by a finite sum: the
integrand is sampled at a finite number of points

I(f) = Σ_{i=1}^n wi f(xi) + Rn

Rn = I(f) − Σ_{i=1}^n wi f(xi) is the quadrature error. The numerical
method is

I(f) ≈ Σ_{i=1}^n wi f(xi)

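In code, a quadrature formula is just a weighted sum of samples. A sketch, using the composite trapezoidal rule's nodes and weights as an assumed example:

    import math

    def quadrature(f, nodes, weights):
        """Approximate I(f) by the finite sum of w_i * f(x_i)."""
        return sum(w * f(x) for x, w in zip(nodes, weights))

    # Composite trapezoidal rule on [a, b] with n subintervals
    a, b, n = 0.0, math.pi, 100
    h = (b - a) / n
    nodes = [a + i * h for i in range(n + 1)]
    weights = [h / 2] + [h] * (n - 1) + [h / 2]
    print(quadrature(math.sin, nodes, weights))   # ~2, quadrature error O(h^2)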


5. Runge–Kutta methods

To solve the IVP y′ = f(t, y), t ≥ t0, y(t0) = y0, we use a
quadrature formula to approximate the integral in

y(tn+1) = y(tn) + ∫_{tn}^{tn+1} f(τ, y(τ)) dτ

y(tn+1) ≈ y(tn) + h Σ_{j=1}^s bj f(tn + cj h, y(tn + cj h))

Let {Yj} denote the numerical approximations to {y(tn + cj h)}

A Runge–Kutta method then has the form

yn+1 = yn + h Σ_{j=1}^s bj f(tn + cj h, Yj)



Stage values and stage derivatives

The vectors Yj are called stage values

The vectors Yj′ = f(tn + cj h, Yj) are called stage derivatives

Stage values and stage derivatives are related through

Yi = yn + h Σ_{j=1}^{i−1} ai,j Yj′

and the method advances one step through

yn+1 = yn + h Σ_{j=1}^s bj Yj′



The RK matrix, weights, nodes and stages
A = {aij} is the RK matrix, b = [b1 b2 · · · bs]^T is the weight vector,
c = [c1 c2 · · · cs]^T are the nodes, and the method has s stages

The Butcher tableau of an explicit RK method is

0   |  0    0    ···  0
c2  |  a21  0    ···  0
⋮   |  ⋮    ⋮         ⋮
cs  |  as1  as2  ···  0
    |  b1   b2   ···  bs

or, compactly,

c | A
  | b^T

Simplifying assumption: ci = Σ_{j=1}^i ai,j,  i = 2, 3, . . . , s
Simplest case: a 2-stage ERK

Y1′ = f(tn, yn)
Y2′ = f(tn + c2 h, yn + h a21 Y1′)
yn+1 = yn + h[b1 Y1′ + b2 Y2′]

0   |  0    0
c2  |  a21  0         c2 = a21
    |  b1   b2





2-stage ERK . . .

Using Y1′ = f(tn, yn), expand in a Taylor series around tn, yn:

Y2′ = f(tn + c2 h, yn + h a21 f(tn, yn))
    = f + h [c2 ft + a21 fy f] + O(h²)

Inserting into yn+1 = yn + h[b1 Y1′ + b2 Y2′], we get

yn+1 = yn + h(b1 + b2)f + h² b2 (c2 ft + a21 fy f) + O(h³)

Taylor expansion of the exact solution:

y′ = f
y′′ = ft + fy y′ = ft + fy f

y(t + h) = y + hf + (h²/2)(ft + fy f) + O(h³)



2-stage ERK . . .
Matching terms in

yn+1 = yn + h(b1 + b2)f + h² b2 (c2 ft + a21 fy f) + O(h³)
y(t + h) = y + hf + (h²/2)(ft + fy f) + O(h³)

and taking c2 = a21, we get the conditions for order 2:

b1 + b2 = 1 (consistency)
b2 c2 = 1/2

Note All consistent RK methods are convergent!

Second order, 2-stage ERK methods have a Butcher tableau

0       |  0       0
1/(2b)  |  1/(2b)  0
        |  1 − b   b



Example 1. The modified Euler method

Put b = 1 to get

0    |  0    0
1/2  |  1/2  0
     |  0    1

Y1′ = f(tn, yn)
Y2′ = f(tn + h/2, yn + hY1′/2)
yn+1 = yn + hY2′

Second order explicit Runge–Kutta (ERK) method

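A Python sketch of the method; the driver loop on the chapter's test problem y′ = −y² cos t (exact solution y(t) = 1/(2 + sin t)) is our own illustration:

    import math

    def modified_euler_step(f, tn, yn, h):
        """One step of the modified Euler method (2-stage ERK, order 2)."""
        k1 = f(tn, yn)                        # Y1' = f(tn, yn)
        k2 = f(tn + h / 2, yn + h * k1 / 2)   # Y2' = f(tn + h/2, yn + h Y1'/2)
        return yn + h * k2                    # yn+1 = yn + h Y2'

    f = lambda t, y: -y**2 * math.cos(t)
    t, y, h = 0.0, 0.5, math.pi / 48
    for _ in range(384):                      # 384 steps of pi/48 reach t = 8*pi
        y = modified_euler_step(f, t, y, h)
        t += h
    print(abs(y - 1 / (2 + math.sin(t))))     # small O(h^2) global error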


Example 2. Heun’s method

Put b = 1/2 to get

0  |  0    0
1  |  1    0
   |  1/2  1/2

Y1′ = f(tn, yn)
Y2′ = f(tn + h, yn + hY1′)
yn+1 = yn + h(Y1′ + Y2′)/2

Second order ERK, compare to the trapezoidal rule!



Third order 3-stage ERK

Conditions for 3rd order: a21 = c2; a31 + a32 = c3, and

b1 + b2 + b3 = 1
b2 c2 + b3 c3 = 1/2
b2 c2² + b3 c3² = 1/3
b3 a32 c2 = 1/6

Classical RK                     Nyström scheme

0    |  0    0    0              0    |  0    0    0
1/2  |  1/2  0    0              2/3  |  2/3  0    0
1    |  −1   2    0              2/3  |  0    2/3  0
     |  1/6  2/3  1/6                 |  1/4  3/8  3/8



Exercise

Construct the Butcher tableau for the 3-stage Heun method.

Y1′ = f(tn, yn)
Y2′ = f(tn + h/3, yn + hY1′/3)
Y3′ = f(tn + 2h/3, yn + 2hY2′/3)
yn+1 = yn + h(Y1′ + 3Y3′)/4

Is it of order 3?



Classic RK4: 4th order, 4-stage ERK

The “original” RK method (1895):

Y1′ = f(tn, yn)
Y2′ = f(tn + h/2, yn + hY1′/2)
Y3′ = f(tn + h/2, yn + hY2′/2)
Y4′ = f(tn + h, yn + hY3′)

yn+1 = yn + (h/6)(Y1′ + 2Y2′ + 2Y3′ + Y4′)



Classic 4th order RK4 . . .

Butcher tableau

0    |
1/2  |  1/2
1/2  |  0    1/2
1    |  0    0    1
     |  1/6  1/3  1/3  1/6

s-stage ERK methods of order p = s exist only for s ≤ 4 (i.e.


there is no 5-stage ERK of order 5)

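A direct Python transcription of the four stages (a sketch; the step interface is our own):

    def rk4_step(f, tn, yn, h):
        """One step of the classic 4th order Runge-Kutta method."""
        k1 = f(tn, yn)
        k2 = f(tn + h / 2, yn + h * k1 / 2)
        k3 = f(tn + h / 2, yn + h * k2 / 2)
        k4 = f(tn + h, yn + h * k3)
        return yn + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6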




Order conditions
An s-stage ERK method has s + s(s − 1)/2 coefficients to
choose, but the order conditions are many

Number of coefficients

stages s        1  2  3  4   5   6   7   8   9   10
# coefficients  1  3  6  10  15  21  28  36  45  55

Number of order conditions

order p       1  2  3  4  5   6   7   8    9    10
# conditions  1  2  4  8  17  37  85  200  486  1205

Maximum order, min stages

order p     1  2  3  4  5  6  7  8
min stages  1  2  3  4  6  7  9  10
6. Embedded RK methods

Two methods in one Butcher tableau (RK34)


Y1′ = f(tn, yn)
Y2′ = f(tn + h/2, yn + hY1′/2)
Y3′ = f(tn + h/2, yn + hY2′/2)
Z3′ = f(tn + h, yn − hY1′ + 2hY2′)
Y4′ = f(tn + h, yn + hY3′)

yn+1 = yn + (h/6)(Y1′ + 2Y2′ + 2Y3′ + Y4′)    order 4

zn+1 = yn + (h/6)(Y1′ + 4Y2′ + Z3′)           order 3

The difference yn+1 − zn+1 can be used as an error estimate



Adaptive RK methods

Example Use an embedded pair, e.g. RK34

Local error estimate rn+1 := ‖yn+1 − zn+1‖ = O(h⁴)

Adjust the step size h so that the local error estimate
equals a prescribed tolerance TOL

Simplest step size change scheme:

hn+1 = hn (TOL/rn+1)^{1/p}

makes rn ≈ TOL
Adaptivity using local error control

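A sketch of such a controller in Python. Here step_pair is a hypothetical function returning the advanced solution together with the local error estimate r = ‖yn+1 − zn+1‖; production codes add safety factors and step size limiters:

    def adaptive_drive(step_pair, f, t, y, t_end, h, tol, p=4):
        """Local error control: keep r close to tol with the elementary
        controller h_new = h * (tol / r)**(1/p). For RK34, r = O(h^4), p = 4."""
        while t < t_end:
            h = min(h, t_end - t)
            y_new, r = step_pair(f, t, y, h)
            if r <= tol:                      # accept the step
                t, y = t + h, y_new
            # accepted or rejected, the same formula proposes the next h
            h *= (tol / max(r, 1e-16)) ** (1.0 / p)
        return y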


7. Implicit Runge–Kutta methods (IRK)
Y1 = yn + h Σ_{j=1}^s a1,j f(tn + cj h, Yj)

Y2 = yn + h Σ_{j=1}^s a2,j f(tn + cj h, Yj)

⋮

Ys = yn + h Σ_{j=1}^s as,j f(tn + cj h, Yj)

yn+1 = yn + h Σ_{j=1}^s bj f(tn + cj h, Yj)



Implicit Runge–Kutta methods . . .

In stage value – stage derivative form

Yi = yn + h Σ_{j=1}^s ai,j Yj′

Yj′ = f(tn + cj h, Yj)

yn+1 = yn + h Σ_{j=1}^s bj Yj′



1-stage IRK method
Implicit Euler (order 1)         Implicit midpoint method (order 2)

1  |  1                           1/2  |  1/2
   |  1                                |  1

Y1′ = f(tn + c1 h, yn + h a11 Y1′)
yn+1 = yn + h b1 Y1′

may be written as

yn+1 = yn + h b1 f(tn + c1 h, ((b1 − a11)/b1) yn + (a11/b1) yn+1)

Taylor expansion:

yn+1 = y + h b1 f(tn + c1 h, y + (a11/b1)(hf + (h²/2)(ft + fy f)) + O(h³))
     = y + h b1 f + h² (b1 c1 ft + a11 fy f) + O(h³)



Taylor expansions for 1-stage IRK

y(tn+1) = y + h f + (h²/2)(ft + fy f) + O(h³)
yn+1 = y + h b1 f + h² (b1 c1 ft + a11 fy f) + O(h³)

Condition for order 1 (consistency): b1 = 1

Conditions for order 2: c1 = a11 = 1/2

Conclusion Implicit Euler is of order 1, and the implicit
midpoint method is the only 1-stage IRK of order 2



8. The stability function

Applying an IRK to the test equation, we get

hYi′ = hλ · (yn + Σ_{j=1}^s ai,j hYj′),  i = 1, . . . , s

Let hY′ = [hY1′ · · · hYs′]^T and 1 = [1 1 · · · 1]^T ∈ R^s; then

(I − hλA) hY′ = hλ 1 yn,  so  hY′ = hλ (I − hλA)⁻¹ 1 yn

yn+1 = yn + Σ_{j=1}^s bj hYj′ = [1 + hλ b^T (I − hλA)⁻¹ 1] yn



The stability function

Theorem For every Runge-Kutta method applied to the


linear test equation y ′ = λy we have

yn+1 = R(hλ)yn

where the rational function

R(z) = 1 + z b^T (I − zA)⁻¹ 1.

If the method is explicit, then R(z) is a polynomial of


degree s

The function R(z) is called the method’s stability function

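R(z) can be evaluated numerically straight from this formula. A sketch using NumPy, with classic RK4 as the assumed test method:

    import numpy as np

    def stability_function(A, b, z):
        """Evaluate R(z) = 1 + z b^T (I - zA)^{-1} 1 for a Runge-Kutta method."""
        s = len(b)
        return 1 + z * b @ np.linalg.solve(np.eye(s) - z * A, np.ones(s))

    # Classic RK4: R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24
    A = np.array([[0, 0, 0, 0],
                  [0.5, 0, 0, 0],
                  [0, 0.5, 0, 0],
                  [0, 0, 1, 0]])
    b = np.array([1/6, 1/3, 1/3, 1/6])
    print(stability_function(A, b, -1.0))   # 0.375 = 1 - 1 + 1/2 - 1/6 + 1/24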


A-stability of RK methods

Definition The method’s stability region is the set

D = {z ∈ C : |R(z)| ≤ 1}

Theorem If R(z) maps all of C− into the unit disc, then
the method is A-stable

Corollary No explicit RK method is A-stable

(For ERK methods, R(z) is a polynomial, and |R(z)| → ∞ as |z| → ∞)



A-stable methods and the Maximum Principle

Theorem |R(z)| ≤ 1 for all z ∈ C− if and only if all the poles of


R have positive real parts and |R(iω)| ≤ 1 for all ω ∈ R

This is the Maximum Principle in complex analysis

Example

0    |  1/4  −1/4           Y1′ = f(yn + hY1′/4 − hY2′/4)
2/3  |  1/4  5/12      ⇒    Y2′ = f(yn + hY1′/4 + 5hY2′/12)
     |  1/4  3/4             yn+1 = yn + h(Y1′ + 3Y2′)/4



Example . . .

Applied to the test equation, we get

yn+1 = (1 + (1/3)hλ) / (1 − (2/3)hλ + (1/6)(hλ)²) · yn

with poles 2 ± i√2 ∈ C+, and

|R(iω)|² = (1 + ω²/9) / (1 + ω²/9 + ω⁴/36) ≤ 1

Conclusion |R(z)| ≤ 1 ∀ z ∈ C−. The method is A-stable



9. Linear Multistep Methods

A multistep method is a method of the type

yn+1 = Φ(f, h, y0, y1, . . . , yn)

using values from several previous steps


◮ Explicit Euler: yn+1 = yn + h f(tn, yn)
◮ Trapezoidal rule: yn+1 = yn + (h/2)(f(tn, yn) + f(tn+1, yn+1))
◮ Implicit Euler: yn+1 = yn + h f(tn+1, yn+1)

are all one-step methods, but also LM methods



Multistep methods and difference equations
A k-step multistep method replaces the ODE y′ = f(t, y) by a
(finite) difference equation

Σ_{j=0}^k ak−j yn−j = h Σ_{j=0}^k bk−j f(tn−j, yn−j)

Generating (characteristic) polynomials

ρ(w) = Σ_{m=0}^k am w^m        σ(w) = Σ_{m=0}^k bm w^m

Normalization ak = 1 or σ(1) = 1 (choose your preference)

If bk = 0, the method is explicit; if bk ≠ 0, it is implicit

EE: ρ(w) = w − 1, σ(w) = 1;   IE: ρ(w) = w − 1, σ(w) = w
Lagrange interpolation

Given a grid {t0, t1, . . . , tk}, construct a degree-k polynomial
basis

Φi(t),  i = 0, 1, . . . , k

such that Φi(tj) = δij (the Kronecker delta)

If the values zj = z(tj) are known for the function z(t), then

P(t) = Σ_{j=0}^k Φj(t) zj

interpolates z(t) on the grid:

P(tj) = zj;  P(t) ≈ z(t) for all t



Adams methods (J.C. Adams, 1880s)

Suppose we have the first n + k approximations

ym = y(tm),  m = 0, 1, . . . , n + k − 1

Rewrite y′ = f(t, y) by integration:

y(tn+k) − y(tn+k−1) = ∫_{tn+k−1}^{tn+k} f(τ, y(τ)) dτ

Adams methods approximate the integrand by an


interpolation polynomial on tn, tn−1, . . .

f(τ, y(τ)) ≈ P(τ)



Adams–Bashforth methods (explicit)
Interpolate the f values with a polynomial of degree k − 1:

P(tn+j) = f(tn+j, y(tn+j)),  j = 0, . . . , k − 1

Then P(τ) = f(τ, y(τ)) + O(h^k) for τ ∈ [tn, tn+k]

y(tn+k) = y(tn+k−1) + ∫_{tn+k−1}^{tn+k} P(τ) dτ + O(h^{k+1})

The k-step Adams–Bashforth method is the order-k method

yn+k = yn+k−1 + h Σ_{j=0}^{k−1} bj f(tn+j, yn+j)

where bj = h⁻¹ ∫_{tn+k−1}^{tn+k} Φj(τ) dτ



Coefficients of AB1
AB1: for k = 1,

yn+1 = yn + h b0 f(tn, yn)

where

b0 = h⁻¹ ∫_{tn}^{tn+1} Φ0(τ) dτ = h⁻¹ ∫_{tn}^{tn+1} 1 dτ = 1   ⇒

yn+1 = yn + h f(tn, yn)

Conclusion AB1 is the explicit Euler method



Coefficients of AB2
Here

Φ1(t) = (t − tn)/(tn+1 − tn);   Φ0(t) = (t − tn+1)/(tn − tn+1)

AB2: for k = 2,

yn+2 = yn+1 + h [b1 f(tn+1, yn+1) + b0 f(tn, yn)]

b0 = h⁻¹ ∫_{tn+1}^{tn+2} Φ0(τ) dτ = −1/2

b1 = h⁻¹ ∫_{tn+1}^{tn+2} Φ1(τ) dτ = 3/2

yn+2 = yn+1 + h [(3/2) f(tn+1, yn+1) − (1/2) f(tn, yn)]



Initializing an Adams method

The first step of AB2 is

y2 = y1 + h [(3/2) f(t1, y1) − (1/2) f(t0, y0)]

so we need the values of y0 and y1 to start.

y0 is obtained from the initial value, but y1 has to be computed
with a one-step method.

Implementing AB2: we may use AB1 for the first step,

y1 = y0 + h f(t0, y0)

yn+2 = yn+1 + h [(3/2) f(tn+1, yn+1) − (1/2) f(tn, yn)],  n ≥ 0

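A Python sketch of this implementation, run (as our own illustration) on the chapter's test problem, whose exact solution is y(t) = 1/(2 + sin t):

    import math

    def ab2(f, t0, y0, h, n_steps):
        """AB2 with an AB1 (explicit Euler) starter step for y1."""
        t, y = [t0], [y0]
        fs = [f(t0, y0)]
        y.append(y[0] + h * fs[0])                      # AB1 step for y1
        t.append(t0 + h)
        fs.append(f(t[1], y[1]))
        for n in range(1, n_steps):
            y.append(y[n] + h * (3 * fs[n] - fs[n - 1]) / 2)
            t.append(t[n] + h)
            fs.append(f(t[n + 1], y[n + 1]))
        return t, y

    f = lambda t, y: -y**2 * math.cos(t)
    t, y = ab2(f, 0.0, 0.5, math.pi / 60, 480)          # h = pi/60 up to 8*pi
    print(abs(y[-1] - 1 / (2 + math.sin(t[-1]))))       # O(h^2) global error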


Example: AB1 vs AB2
Approximate the solution to y′ = −y² cos t, y(0) = 1/2, t ∈ [0, 8π],
with h = π/6 and h = π/60

AB1: solutions and errors (top); AB2: solutions and errors (bottom)

[Figure: computed solutions (left) and errors on a log scale (right)]
How do we check the order of a multistep method?

A multistep method has order of consistency p if the local error satisfies

Σ_{j=0}^k aj y(tn+j) − h Σ_{j=0}^k bj y′(tn+j) = O(h^{p+1})

Test whether the formula holds exactly for polynomials

y(t) = 1, t, t², t³, . . .

Insert y = t^m and y′ = m t^{m−1} into the formula, taking tn+j = jh:

Σ_{j=0}^k aj (jh)^m − h Σ_{j=0}^k bj m (jh)^{m−1} = h^m Σ_{j=0}^k (aj j^m − bj m j^{m−1})



Order conditions for multistep methods

Theorem A k-step method is of consistency order p if and only


if it satisfies the following conditions:
◮ Σ_{j=0}^k j^m aj = m Σ_{j=0}^k j^{m−1} bj,  m = 0, 1, . . . , p

◮ Σ_{j=0}^k j^{p+1} aj ≠ (p + 1) Σ_{j=0}^k j^p bj

Then the multistep formula holds exactly for all polynomials of
degree p or less. (Problems with solution y = P(t) are solved exactly)

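These conditions are easy to check numerically. A sketch that recovers the consistency order from the coefficients, using the BDF2 method constructed later in the chapter as an assumed test case:

    def consistency_order(a, b, pmax=10):
        """Largest p with sum_j j^m a_j = m sum_j j^(m-1) b_j for m = 0..p."""
        k = len(a) - 1
        for m in range(pmax + 1):
            lhs = sum(j**m * a[j] for j in range(k + 1))
            rhs = m * sum(j**(m - 1) * b[j] for j in range(k + 1)) if m else 0.0
            if abs(lhs - rhs) > 1e-12:
                return m - 1       # condition m is the first one to fail
        return pmax

    # BDF2: (3/2) y_{n+2} - 2 y_{n+1} + (1/2) y_n = h f(t_{n+2}, y_{n+2})
    print(consistency_order([0.5, -2.0, 1.5], [0.0, 0.0, 1.0]))   # 2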


The root condition

Definition A polynomial ρ satisfies the root condition if all of its


zeros have moduli less than or equal to one and the zeros of unit
modulus are simple

Examples
◮ ρ(w) = (w − 1)(w − 0.5)(w + 0.9)
◮ ρ(w) = (w − 1)(w + 1)
◮ ρ(w) = (w − 1)2 (w − 0.5)
√ √
◮ ρ(w) = (w − 1)(w − 2)(w + 2)
◮ ρ(w) = (w − 1)(w2 + 0.25)
All Adams methods have ρ(w) = wk−1 (w − 1)

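A numerical root condition check (a sketch; in floating point, "unit modulus" and "simple" need tolerances, and ours are arbitrary):

    import numpy as np

    def satisfies_root_condition(rho, tol=1e-7):
        """rho: polynomial coefficients, highest degree first (as for np.roots).
        All zeros must lie in |w| <= 1; zeros with |w| = 1 must be simple."""
        roots = np.roots(rho)
        for i, r in enumerate(roots):
            if abs(r) > 1 + tol:
                return False
            if abs(abs(r) - 1) <= tol:   # on the unit circle: must be simple
                if np.any(np.abs(np.delete(roots, i) - r) <= tol):
                    return False
        return True

    print(satisfies_root_condition([1, 0, -1]))          # (w-1)(w+1): True
    print(satisfies_root_condition([1, -2.5, 2, -0.5]))  # (w-1)^2 (w-0.5): False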


The Dahlquist equivalence theorem

Definition A method is zero-stable if ρ satisfies the root
condition

Theorem A multistep method is convergent if and only if it
is zero-stable and consistent of order p ≥ 1 (without proof)

Example k-step Adams–Bashforth methods are of order k
and have ρ(w) = w^{k−1}(w − 1) ⇒ they are convergent



Dahlquist’s first barrier

Theorem The maximal order of a zero-stable k-step
method is

p = k                    for explicit methods

p = k + 1 (k odd)
p = k + 2 (k even)       for implicit methods



Backward differentiation formula of order 2

Construct a 2-step method of order 2 of the form

α2 yn+2 + α1 yn+1 + α0 yn = h f(tn+2, yn+2)

Order conditions for p = 2:

α2 + α1 + α0 = 0;  2α2 + α1 = 1;  4α2 + α1 = 4
⇒ α2 = 3/2;  α1 = −2;  α0 = 1/2

ρ(w) = (3/2)(w − 1)(w − 1/3) ⇒ BDF2 is convergent



Backward differentiation formulas
The backward difference operator is defined by
∇^0 yn+k = yn+k and

∇^j yn+k = ∇^{j−1} yn+k − ∇^{j−1} yn+k−1,  j ≥ 1

Theorem (without proof): The k-step BDF method

Σ_{j=1}^k (1/j) ∇^j yn+k = h f(tn+k, yn+k)

is convergent of order p = k if and only if 1 ≤ k ≤ 6

Note BDF methods are suitable for stiff problems



BDF1-6 stability regions
The methods are stable outside the indicated area
[Figure: stability regions of the BDF1–6 methods in the complex plane]
A-stability of multistep methods
Applying a multistep method to the linear test equation
y′ = λy produces a difference equation

Σ_{j=0}^k aj yn+j = hλ Σ_{j=0}^k bj yn+j

The characteristic equation (with z := hλ)

ρ(w) − zσ(w) = 0

has k roots wj(z). The method is A-stable iff

Re z ≤ 0 ⇒ |wj(z)| ≤ 1,

with simple unit modulus roots (root condition)



Dahlquist’s second barrier

Theorem (without proof): The highest order of an


A-stable multistep method is p = 2. Of all 2nd order
A-stable multistep methods, the trapezoidal rule has the
smallest error

Note There is no such order restriction for Runge–Kutta


methods, which can be A-stable for arbitrarily high orders

A multistep method can be useful although it isn’t A-stable

