
FINITE DIFFERENCE METHODS

FOR
SOLVING DIFFERENTIAL EQUATIONS
I-Liang Chern
Department of Mathematics
National Taiwan University
2009
January 18, 2009
Contents

1 Introduction
  1.1 Finite Difference Approximation
  1.2 Basic Numerical Methods for Ordinary Differential Equations
  1.3 Runge-Kutta methods
  1.4 Linear difference equation
  1.5 Stability analysis
    1.5.1 Zero Stability
    1.5.2 Absolute Stability

2 Finite Difference Methods for Linear Parabolic Equations
  2.1 A review of the heat equation
  2.2 Finite Difference Methods for the Heat Equation
    2.2.1 Some discretization methods
    2.2.2 Stability and Convergence for the Forward Euler method
  2.3 L2 Stability: Von Neumann Analysis
  2.4 Energy method
  2.5 Stability Analysis for Monotone Operators: Entropy Estimates
  2.6 Entropy estimate for the backward Euler method
  2.7 Existence Theory
    2.7.1 Existence via forward Euler method
    2.7.2 A Sharper Energy Estimate for the backward Euler method
  2.8 Relaxation of errors
  2.9 Boundary Conditions
    2.9.1 Dirichlet boundary condition
    2.9.2 Neumann boundary condition
  2.10 The discrete Laplacian and its inversion
    2.10.1 Dirichlet boundary condition

3 Finite Difference Methods for Linear Elliptic Equations
  3.1 Discrete Laplacian in two dimensions
    3.1.1 Discretization methods
    3.1.2 The 9-point discrete Laplacian
  3.2 Stability of the discrete Laplacian
    3.2.1 Fourier method
    3.2.2 Energy method

4 Finite Difference Theory For Linear Hyperbolic Equations
  4.1 A review of smooth theory of linear hyperbolic equations
    4.1.1 Linear advection equation
    4.1.2 Linear systems of hyperbolic equations
  4.2 Finite difference methods for linear advection equation
    4.2.1 Design techniques
    4.2.2 Courant-Friedrichs-Lewy condition
    4.2.3 Consistency and Truncation Errors
    4.2.4 Lax's equivalence theorem
    4.2.5 Stability analysis
    4.2.6 Modified equation
  4.3 Finite difference schemes for linear hyperbolic systems with constant coefficients
    4.3.1 Some design techniques
    4.3.2 Stability analysis
  4.4 Finite difference methods for linear systems with variable coefficients

5 Scalar Conservation Laws
  5.1 Physical models
    5.1.1 Traffic flow model
    5.1.2 Burgers equation
    5.1.3 Two-phase flow
  5.2 Basic theory
    5.2.1 Riemann problem
    5.2.2 Entropy conditions
    5.2.3 Riemann problem for nonconvex fluxes
  5.3 Uniqueness and Existence

6 Finite Difference Schemes For Scalar Conservation Laws
  6.1 Major problems
  6.2 Conservative schemes
  6.3 Entropy and Monotone schemes

7 Finite Difference Methods for Hyperbolic Conservation Laws
  7.1 Flux splitting methods
    7.1.1 Total Variation Diminishing (TVD)
    7.1.2 Other Examples for φ(θ)
    7.1.3 Extensions
  7.2 High Order Godunov Methods
    7.2.1 Piecewise-constant reconstruction
    7.2.2 Piecewise-linear reconstruction
  7.3 Multidimension
    7.3.1 Splitting Method
    7.3.2 Unsplitting Methods

8 Systems of Hyperbolic Conservation Laws
  8.1 General Theory
    8.1.1 Rarefaction Waves
    8.1.2 Shock Waves
    8.1.3 Contact Discontinuity (Linear Wave)
  8.2 Physical Examples
    8.2.1 Gas dynamics
    8.2.2 Riemann Problem of Gas Dynamics
Chapter 1
Introduction
The goal of this course is to provide the numerical analysis background for finite difference approximations used to solve partial differential equations. The partial differential equations include
- parabolic equations,
- elliptic equations,
- hyperbolic conservation laws.
I will mainly talk about stability and convergence theory.
1.1 Finite Difference Approximation
Our goal is to approximate differential operators by finite difference operators. How do we perform the approximation? What is the error so produced? In general, we shall assume the underlying functions are smooth. But we should notice that in some classes of problems the underlying functions may not be smooth. Nevertheless, let us limit ourselves to smooth functions for the moment.
Assume the underlying function $u : \mathbb{R} \to \mathbb{R}$ is smooth. Let us define the following finite difference operators:
- Forward difference: $D_+ u(x) := \dfrac{u(x+h) - u(x)}{h}$
- Backward difference: $D_- u(x) := \dfrac{u(x) - u(x-h)}{h}$
- Centered difference: $D_0 u(x) := \dfrac{u(x+h) - u(x-h)}{2h}$

By Taylor expansion, we can get
$$u'(x) = D_+ u(x) + O(h), \qquad u'(x) = D_- u(x) + O(h), \qquad u'(x) = D_0 u(x) + O(h^2).$$
We can also approximate $u'(x)$ with a higher-order error. For example,
$$u'(x) = D_3 u(x) + O(h^3),$$
where
$$D_3 u(x) = \frac{1}{6h}\bigl(2u(x+h) + 3u(x) - 6u(x-h) + u(x-2h)\bigr).$$
These formulae can be derived by performing Taylor expansions of u about x. For instance, we expand
$$u(x+h) = u(x) + u'(x)h + \frac{h^2}{2}u''(x) + \frac{h^3}{3!}u'''(x) + \cdots,$$
$$u(x-h) = u(x) - u'(x)h + \frac{h^2}{2}u''(x) - \frac{h^3}{3!}u'''(x) + \cdots.$$
Subtracting these two equations yields
$$u(x+h) - u(x-h) = 2u'(x)h + \frac{2h^3}{3!}u'''(x) + \cdots.$$
This gives
$$u'(x) = D_0 u(x) - \frac{h^2}{3!}u'''(x) + \cdots = D_0 u(x) + O(h^2).$$
In general, we can derive a finite difference approximation for $u^{(k)}$ at $x$ from the values of u at stencil points $x_j$, $j = 0, \dots, n$, with $n \ge k$. That is,
$$u^{(k)}(x) = \sum_{j=0}^{n} c_j u(x_j) + O(h^{p+1})$$
for some p as large as possible. As we shall see, we can choose p = n. To find the coefficients $c_j$, $j = 0, \dots, n$, we expand
$$u(x_j) = \sum_{i=0}^{p} \frac{1}{i!}(x_j - x)^i u^{(i)}(x) + O(h^{p+1}).$$
Thus,
$$u^{(k)}(x) = \sum_{j=0}^{n} c_j \sum_{i=0}^{p} \frac{1}{i!}(x_j - x)^i u^{(i)}(x) + O(h^{p+1}).$$
Comparing both sides, we obtain
$$\sum_{j=0}^{n} \frac{1}{i!}(x_j - x)^i c_j = \begin{cases} 1 & \text{if } i = k, \\ 0 & \text{otherwise}, \end{cases} \qquad i = 0, \dots, p.$$
There are p + 1 equations here, so it is natural to choose p = n to match the n + 1 unknowns. This is an (n + 1) × (n + 1) Vandermonde system. It is nonsingular if the $x_j$ are distinct. The matlab code fdcoeffV(k,xbar,x) can be used to compute these coefficients.
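The following short Python sketch illustrates this construction by assembling and solving the Vandermonde-type system numerically. It is only a minimal analogue of the fdcoeffV routine mentioned above; the function name and interface here are my own, not taken from that code.

```python
import numpy as np
from math import factorial

def fd_coefficients(k, xbar, x):
    """Weights c_j such that sum_j c_j u(x_j) approximates u^(k)(xbar).

    Solves sum_j (x_j - xbar)^i / i! * c_j = delta_{ik}, i = 0, ..., n,
    which is the (n+1) x (n+1) Vandermonde system derived in the text.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - 1
    A = np.array([[(xj - xbar) ** i / factorial(i) for xj in x] for i in range(n + 1)])
    b = np.zeros(n + 1)
    b[k] = 1.0
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    h = 0.1
    # Centered second-derivative stencil on {x-h, x, x+h}: expect [1, -2, 1] / h^2.
    print(fd_coefficients(2, 0.0, [-h, 0.0, h]))
```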
Homeworks.
1. Consider $x_i = ih$, $i = 0, \dots, n$. Let $\bar{x} = x_m$. Find the coefficients $c_i$ for $u^{(k)}(\bar{x})$ and the coefficient of the leading truncation error for the following cases:
   - $k = 1$, $n = 2, 3$, $m = 0, 1, 2, 3$;
   - $k = 2$, $n = 2$, $m = 0, 1, 2$.
1.2 Basic Numerical Methods for Ordinary Differential Equations
The basic methods for designing numerical algorithms are based on the smoothness of the solution. Techniques of numerical interpolation, numerical integration, or finite difference approximation are adopted.
Euler method
The Euler method is the simplest numerical integrator for ODEs. The ODE
$$y' = f(t, y) \tag{1.1}$$
is discretized by
$$y^{n+1} = y^n + k f(t_n, y^n). \tag{1.2}$$
Here, k is the time step size of the discretization. This is called the forward Euler method. It simply replaces $dy/dt(t_n)$ by the forward finite difference $(y^{n+1} - y^n)/k$. To measure the error, the local truncation error is
$$\tau^n := y'(t_n) - \frac{y(t_{n+1}) - y(t_n)}{k} = O(k).$$
Let $e^n := y^n - y(t_n)$ be the true error.
Theorem 2.1 Assume $f \in C^1$ and that the solution of $y' = f(t, y)$ with $y(0) = y_0$ exists on $[0, T]$. Then the Euler method converges at any $t \in [0, T]$. In fact, the true error $e^n$ has the estimate
$$|e^n| \le \frac{e^{\lambda t}}{\lambda}\, O(k) \to 0 \quad \text{as } n \to \infty. \tag{1.3}$$
Here, $\lambda = \max|\partial f/\partial y|$ and $nk = t$.
Proof. From the regularity of the solution, $y \in C^2[0, T]$ and
$$y(t_{n+1}) = y(t_n) + k f(t_n, y(t_n)) + k\tau^n. \tag{1.4}$$
Taking the difference of (1.2) and (1.4), we obtain
$$|e^{n+1}| \le |e^n| + k\,|f(t_n, y^n) - f(t_n, y(t_n))| + k|\tau^n| \le (1 + \lambda k)|e^n| + k|\tau^n|,$$
where
$$\lambda = \max_{x,y}\,|f(t, x) - f(t, y)|/|x - y|.$$
The finite difference inequality has a fundamental solution $G^n = (1 + \lambda k)^n$, which is positive provided k is small. Multiplying the above inequality by $(1 + \lambda k)^{-(m+1)}$, we obtain
$$e^{m+1}G^{-(m+1)} \le e^m G^{-m} + kG^{-(m+1)}|\tau^m|.$$
Summing in m from m = 0 to n - 1, we get
$$e^n \le \sum_{m=0}^{n-1} G^{n-m-1}k|\tau^m| \le \sum_{m=0}^{n-1} G^m\, O(k^2) = \frac{G^n - 1}{G - 1}\,O(k^2) \le \frac{G^n}{\lambda}\,O(k) \le \frac{e^{\lambda t}}{\lambda}\,O(k),$$
where t = nk and we have used $(1 + \lambda k)^n \le e^{\lambda t}$.
Remarks.
1. The theorem says that the numerical method converges on [0, T] as long as the solution of the ODE exists.
2. One can also prove the existence of the ODE solution through the Euler method. It will be a local existence theorem.
Backward Euler method
In many applications, the system relaxes to a stable solution in a very short time. For instance, consider
$$y' = -\frac{y - \bar{y}}{\epsilon}.$$
The corresponding solution approaches $\bar{y}$ on a time scale $t = O(\epsilon)$. In the above forward Euler method we should, in practice, require
$$\Bigl|1 + k\,\frac{\partial f}{\partial y}\Bigr| \le 1$$
in order to have $G^n$ remain bounded. Here, the Lipschitz constant is $\lambda = 1/\epsilon$. If $\epsilon$ is very small, then the forward Euler method will require a very small k and lead to inefficient computation. In general, the forward Euler method is inefficient (requires small k) if
$$\max\Bigl|\frac{\partial f(t, y)}{\partial y}\Bigr| \gg 1.$$
In the case $\partial f/\partial y \gg 1$, we have no choice but to resolve the details; we have to take a very small k. However, if $\partial f/\partial y < 0$, say for example $y' = -\lambda y$ with $\lambda \gg 1$, then the backward Euler method is recommended:
$$y^{n+1} = y^n + k f(t_{n+1}, y^{n+1}).$$
The error satisfies
$$e^{n+1} \le e^n + \lambda k\,e^{n+1} + O(k^2).$$
The corresponding fundamental solution is $G^n := (1 + \lambda k)^{-n}$. Notice that the error satisfies
$$e^n \le \sum_{m=0}^{n-1}(1 + \lambda k)^{-m}\,O(k^2) \le \frac{1 + \lambda k}{\lambda k}\,O(k^2) \le \frac{e^{\lambda T}}{\lambda}\,O(k).$$
There is no restriction on the size of k.
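To see the practical difference between the two integrators on the stiff model problem above, one can run a small experiment like the sketch below. The parameter values are illustrative choices of mine; for this linear right-hand side the implicit backward Euler update can be solved in closed form.

```python
import numpy as np

# Model problem from the text: y' = -(y - ybar)/eps, stiff relaxation toward ybar.
eps, ybar, y0, T = 1e-3, 1.0, 0.0, 1.0

def forward_euler(k):
    y = y0
    for _ in range(int(T / k)):
        y = y + k * (-(y - ybar) / eps)
    return y

def backward_euler(k):
    # y_{n+1} = y_n + k*(-(y_{n+1}-ybar)/eps)  =>  y_{n+1} = (y_n + k*ybar/eps)/(1 + k/eps)
    y = y0
    for _ in range(int(T / k)):
        y = (y + k * ybar / eps) / (1.0 + k / eps)
    return y

for k in [1e-2, 1e-4]:
    print(f"k={k:g}: forward Euler = {forward_euler(k):.3e}, backward Euler = {backward_euler(k):.3e}")
```

With k = 1e-2 the forward Euler iteration has amplification factor 1 - k/eps = -9 and blows up, while backward Euler returns a value near 1 for every k, illustrating the absence of a step-size restriction.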
Leap frog method
We integrate $y' = f(t, y)$ from $t_{n-1}$ to $t_{n+1}$:
$$y(t_{n+1}) - y(t_{n-1}) = \int_{t_{n-1}}^{t_{n+1}} f(\tau, y(\tau))\,d\tau.$$
Applying the midpoint rule for the numerical integration, we get
$$y(t_{n+1}) - y(t_{n-1}) = 2k f(t_n, y(t_n)) + O(k^3).$$
The midpoint method (also called the leapfrog method) is
$$y^{n+1} - y^{n-1} = 2k f(t_n, y^n). \tag{1.5}$$
This is a two-step explicit method.
Homeworks.
1. Prove the convergence theorem for the backward Euler method.
   Hint: show that $|e^{n+1}| \le |e^n| + \lambda k\,|e^{n+1}| + k|\tau^n|$, where $\lambda$ is the Lipschitz constant of f.
2. Prove the convergence theorem for the leap-frog method.
   Hint: consider the system $y_1^n = y^{n-1}$ and $y_2^n = y^n$.
1.3 Runge-Kutta methods
The Runge-Kutta (RK) method is a strategy to integrate $\int_{t_n}^{t_{n+1}} f\,d\tau$ by some quadrature method. For instance, a second-order RK method, denoted by RK2, is based on the trapezoidal rule of numerical integration: the integral $\int_{t_n}^{t_{n+1}} f(\tau, y(\tau))\,d\tau$ is approximated by $\frac{k}{2}\bigl(f(t_n, y^n) + f(t_{n+1}, y^{n+1})\bigr)$. The latter term involves $y^{n+1}$. An explicit Runge-Kutta method approximates $y^{n+1}$ by $y^n + k f(t_n, y^n)$. Thus, RK2 reads
$$\xi_1 = f(t_n, y^n),$$
$$y^{n+1} = y^n + \frac{k}{2}\bigl(f(t_n, y^n) + f(t_{n+1}, y^n + k\xi_1)\bigr).$$
Another kind of RK2 is based on the midpoint rule of integration. It reads
$$\xi_1 = f(t_n, y^n),$$
$$y^{n+1} = y^n + k\,f\bigl(t_{n+1/2},\, y^n + \tfrac{k}{2}\xi_1\bigr).$$
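A minimal implementation of the trapezoidal-rule RK2 written above might look as follows; it is a sketch for illustration, with the test problem chosen by me.

```python
import numpy as np

def rk2_trapezoid(f, t0, y0, k, nsteps):
    """Explicit trapezoidal RK2 as in the text:
    xi1 = f(t_n, y_n);  y_{n+1} = y_n + k/2 * (f(t_n, y_n) + f(t_{n+1}, y_n + k*xi1))."""
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(nsteps):
        xi1 = f(t, y)
        y = y + 0.5 * k * (xi1 + f(t + k, y + k * xi1))
        t += k
    return y

if __name__ == "__main__":
    # Test on y' = -y, y(0) = 1; the exact value at t = 1 is exp(-1).
    f = lambda t, y: -y
    print(rk2_trapezoid(f, 0.0, 1.0, 0.01, 100), np.exp(-1.0))
```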
1.4 Linear difference equation
Second-order linear difference equations. In the linear case $y' = \lambda y$, the above difference schemes result in linear difference equations. Let us consider a general second-order linear difference equation with constant coefficients:
$$a y^{n+1} + b y^n + c y^{n-1} = 0, \tag{1.6}$$
where $a \ne 0$. To find its general solutions, we try the ansatz $y^n = \rho^n$ for some number $\rho$. Here, the n in $y^n$ is an index, whereas the n in $\rho^n$ is a power. Plugging this ansatz into the equation, we get
$$a\rho^{n+1} + b\rho^n + c\rho^{n-1} = 0.$$
This leads to
$$a\rho^2 + b\rho + c = 0.$$
There are two roots $\rho_1$ and $\rho_2$. In case $\rho_1 \ne \rho_2$, these two solutions are independent. Since the equation is linear, any linear combination of these two solutions is again a solution. Moreover, the general solution can only depend on two free parameters: once $y^0$ and $y^1$ are known, $\{y^n\}_{n\in\mathbb{Z}}$ is uniquely determined. Thus, the general solution is
$$y^n = C_1\rho_1^n + C_2\rho_2^n,$$
where $C_1$, $C_2$ are constants. In case $\rho_1 = \rho_2$, we can use the two solutions $\rho_2^n$ and $\rho_1^n$ with $\rho_2 \ne \rho_1$ but very close, to produce another nontrivial solution:
$$\lim_{\rho_2\to\rho_1}\frac{\rho_2^n - \rho_1^n}{\rho_2 - \rho_1}.$$
This yields the second solution $n\rho_1^{n-1}$. Thus, the general solution is
$$C_1\rho_1^n + C_2\,n\rho_1^{n-1}.$$
Linear finite difference equations of order r. We consider the general linear finite difference equation of order r:
$$a_r y^{n+r} + \cdots + a_0 y^n = 0, \tag{1.7}$$
where $a_r \ne 0$. Since $y^{n+r}$ can be solved in terms of $y^{n+r-1}, \dots, y^n$ for all n, this equation together with initial data $y^0, \dots, y^{r-1}$ has a unique solution. The solution space is r-dimensional.
To find fundamental solutions, we try the ansatz
$$y^n = \rho^n$$
for some number $\rho$. Plugging this ansatz into the equation, we get
$$a_r\rho^{n+r} + \cdots + a_0\rho^n = 0$$
for all n. This implies
$$a(\rho) := a_r\rho^r + \cdots + a_0 = 0. \tag{1.8}$$
The polynomial $a(\rho)$ is called the characteristic polynomial of (1.7) and its roots $\rho_1, \dots, \rho_r$ are called the characteristic roots.
- Simple roots (i.e. $\rho_i \ne \rho_j$ for all $i \ne j$): the fundamental solutions are $\rho_i^n$, $i = 1, \dots, r$.
- Multiple roots: if $\rho_i$ is a multiple root with multiplicity $m_i$, then the corresponding independent solutions are
$$\rho_i^n,\quad n\rho_i^{n-1},\quad C^n_2\rho_i^{n-2},\ \dots,\ C^n_{m_i-1}\rho_i^{n-m_i+1}.$$
Here, $C^n_k := n!/(k!(n-k)!)$. The solution $C^n_2\rho_i^{n-2}$ can be derived by differentiating $C^n_1\rho^{n-1}$ with respect to $\rho$ at $\rho_i$.
In the case of simple roots, we can express the general solution as
$$y^n = C_1\rho_1^n + \cdots + C_r\rho_r^n,$$
where the constants $C_1, \dots, C_r$ are determined by
$$y^i = C_1\rho_1^i + \cdots + C_r\rho_r^i, \quad i = 0, \dots, r-1.$$
Systems of linear difference equations. The above r-th order linear difference equation is equivalent to a first-order linear difference system:
$$A_0\,y^{n+1} = A\,y^n, \tag{1.9}$$
where
$$y^n = \begin{pmatrix} y_1^n \\ \vdots \\ y_r^n \end{pmatrix} = \begin{pmatrix} y^{n-r+1} \\ \vdots \\ y^n \end{pmatrix},$$
$$A_0 = \begin{pmatrix} I_{(r-1)\times(r-1)} & 0 \\ 0 & a_r \end{pmatrix}, \qquad A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{r-1} \end{pmatrix}.$$
We may divide (1.9) by $A_0$ to get
$$y^{n+1} = G\,y^n.$$
We call G the fundamental matrix of (1.9). For this homogeneous equation, the solution is
$$y^n = G^n y^0.$$
Next, we compute $G^n$ in terms of the eigenvalues of G.
In the case that all eigenvalues $\lambda_i$, $i = 1, \dots, r$, of G are distinct, G can be expressed as
$$G = TDT^{-1}, \qquad D = \mathrm{diag}(\lambda_1, \dots, \lambda_r),$$
and the column vectors of T are the corresponding eigenvectors.
When the eigenvalues of G have multiple roots, we can normalize G into Jordan blocks:
$$G = TJT^{-1}, \qquad J = \mathrm{diag}(J_1, \dots, J_s),$$
where the Jordan block $J_i$ corresponds to the eigenvalue $\lambda_i$ with multiplicity $m_i$:
$$J_i = \begin{pmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & 1 \\ 0 & 0 & 0 & \cdots & \lambda_i \end{pmatrix}_{m_i\times m_i},$$
and $\sum_{i=1}^{s} m_i = r$. Indeed, this form also covers the case of distinct eigenvalues.
In the stability analysis below, we are concerned with whether $G^n$ is bounded. It is easy to see that
$$G^n = TJ^nT^{-1}, \qquad J^n = \mathrm{diag}(J_1^n, \dots, J_s^n),$$
$$J_i^n = \begin{pmatrix} \lambda_i^n & n\lambda_i^{n-1} & C^n_2\lambda_i^{n-2} & \cdots & C^n_{m_i-1}\lambda_i^{n-m_i+1} \\ 0 & \lambda_i^n & n\lambda_i^{n-1} & \cdots & C^n_{m_i-2}\lambda_i^{n-m_i+2} \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & n\lambda_i^{n-1} \\ 0 & 0 & 0 & \cdots & \lambda_i^n \end{pmatrix}_{m_i\times m_i},$$
where $C^n_k := \dfrac{n!}{k!(n-k)!}$.
Definition 4.1 The fundamental matrix G is called stable if $G^n$ remains bounded under some norm $\|\cdot\|$ for all n.
Theorem 4.2 The fundamental matrix G is stable if and only if its eigenvalues $\lambda$ satisfy the following condition:
$$\text{either } |\lambda| = 1 \text{ and } \lambda \text{ is a simple root, or } |\lambda| < 1. \tag{1.10}$$
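The root condition of Theorem 4.2 is easy to check numerically. The sketch below builds the companion (fundamental) matrix of a constant-coefficient recurrence and inspects its eigenvalues and powers; the helper names and the crude boundedness test are my own illustration, assuming the companion-matrix form derived above.

```python
import numpy as np

def companion(coeffs):
    """Companion (fundamental) matrix G for a_r y^{n+r} + ... + a_0 y^n = 0.
    coeffs = [a_0, a_1, ..., a_r]."""
    a = np.asarray(coeffs, dtype=float)
    r = len(a) - 1
    G = np.zeros((r, r))
    G[:-1, 1:] = np.eye(r - 1)
    G[-1, :] = -a[:-1] / a[-1]
    return G

def looks_stable(G, nmax=2000, tol=1e6):
    """Crude check of Definition 4.1: is ||G^n|| still bounded at n = nmax?"""
    return np.linalg.norm(np.linalg.matrix_power(G, nmax)) < tol

# Leap-frog applied to y' = 0 gives y^{n+1} - y^{n-1} = 0, i.e. coefficients [-1, 0, 1]:
G = companion([-1.0, 0.0, 1.0])
print(np.linalg.eigvals(G), looks_stable(G))   # roots +1 and -1, both simple -> stable
```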
Nonhomogeneous linear finite difference systems. In general, we consider the nonhomogeneous linear difference system
$$y^{n+1} = G\,y^n + f^n \tag{1.11}$$
with initial data $y^0$. Its solution can be expressed as
$$y^n = G\,y^{n-1} + f^{n-1} = G(G\,y^{n-2} + f^{n-2}) + f^{n-1} = \cdots = G^n y^0 + \sum_{m=0}^{n-1} G^{n-1-m} f^m.$$
Homeworks.
1. Consider the linear ODE
$$y' = \lambda y,$$
where $\lambda$ may be complex. Study the linear difference equations derived from this ODE by the forward Euler, backward Euler, and midpoint methods. Find their general solutions.
2. Consider the linear finite difference equation with source term
$$a y^{n+1} + b y^n + c y^{n-1} = f^n.$$
Given initial data $y^0$ and $y^1$, find its solution.
3. Find the characteristic roots for the Adams-Bashforth and Adams-Moulton schemes with 1-3 steps for the linear equation $y' = \lambda y$.
1.5 Stability analysis
1.5.1 Zero Stability
Our goal is to develop a general convergence theory for numerical ODEs. First, let us see the proof of convergence of the two-stage Runge-Kutta method. The scheme can be expressed as
$$y^{n+1} = y^n + k\Phi(y^n, t_n, k), \tag{1.12}$$
where
$$\Phi(y^n, t_n, k) := f\bigl(y^n + \tfrac{1}{2}k f(y^n)\bigr). \tag{1.13}$$
Suppose $y(\cdot)$ is a true solution; the corresponding truncation error is
$$\tau^n := \frac{y(t_{n+1}) - y(t_n)}{k} - \Phi(y(t_n), t_n, k) = O(k^2).$$
Thus, the true solution satisfies
$$y(t_{n+1}) - y(t_n) = k\Phi(y(t_n), t_n, k) + k\tau^n.$$
The true error $e^n := y^n - y(t_n)$ satisfies
$$e^{n+1} = e^n + k\bigl(\Phi(y^n, t_n, k) - \Phi(y(t_n), t_n, k)\bigr) - k\tau^n.$$
This implies
$$|e^{n+1}| \le |e^n| + k\lambda'|e^n| + k|\tau^n|.$$
Hence, we get
$$|e^n| \le (1 + k\lambda')^n|e^0| + k\sum_{m=0}^{n-1}(1 + k\lambda')^{n-1-m}|\tau^m| \le e^{\lambda' t}|e^0| + \frac{e^{\lambda' t}}{\lambda'}\max_m|\tau^m|.$$
In a numerical scheme, when we fix the final time T = nk and let $n \to \infty$, we want the corresponding numerical solution to remain bounded. A scheme satisfying this property is called stable. We can first investigate stability for the linear equation
$$y' = \lambda y.$$
A finite difference scheme for this linear equation results in a linear finite difference equation. For instance, for the second-order Runge-Kutta method in the previous example, the corresponding finite difference equation is
$$y^{n+1} = y^n + k\lambda\bigl(y^n + \tfrac{1}{2}k\lambda y^n\bigr) = \bigl(1 + k\lambda + \tfrac{1}{2}(k\lambda)^2\bigr)y^n. \tag{1.14}$$
As another example, for a general r-step multistep method the corresponding finite difference equation is
$$\sum_{m=0}^{r} a_m y^{n+1-r+m} = k\lambda\sum_{m=0}^{r} b_m y^{n+1-r+m}.$$
This gives
$$y^n = G(k;\, y^{n-1}, \dots, y^{n-r+1}).$$
We can express this scheme in system form:
$$\mathbf{y}^n = G(k)\,\mathbf{y}^{n-1}.$$
The stability of G means that $\|G^n\|$ is uniformly bounded. The norm $\|G^n\|$ has a lower bound in terms of r(k), the spectral radius of G(k), i.e.
$$r(k) := \max_i|\lambda_i(k)|, \qquad \lambda_i \text{ the eigenvalues of } G.$$
Lemma 5.1
$$r(G)^n \le \|G^n\|.$$
Proof. If y is a unit eigenvector of G with eigenvalue $\lambda$, then
$$|\lambda|^n = \|G^n y\| \le \|G^n\|.$$
This yields $r(G)^n \le \|G^n\|$.
We shall call r(G) the amplification factor. Here are some examples of amplification factors (with $z = \lambda k$):
$$r(z) = |1 + z| \quad \text{forward Euler},$$
$$r(z) = \Bigl|1 + z + \frac{z^2}{2}\Bigr| \quad \text{midpoint, RK2},$$
$$r(z) = \Bigl|\frac{1 + \frac{z}{2}}{1 - \frac{z}{2}}\Bigr| \quad \text{implicit trapezoidal}.$$
Definition 5.2 A scheme is called zero-stable if the spectral radius r(k) of the corresponding amplification matrix G(k) (the Green's function of the scheme) satisfies the von Neumann condition
$$r(k) \le 1 + Ck. \tag{1.15}$$
Theorem 5.3 A necessary condition for
$$\|G^n\| \le C_1$$
is that its spectral radius r(k) satisfies
$$r(k) \le 1 + C_2 k.$$
Here $C_1$, $C_2$ are independent of n.
Proof. The spectral radius of $G^n$ is $r(k)^n$. Thus, $\|G^n\| \le C_1$ implies $r(k) \le C_1^{1/n} \le 1 + C_2/n$ for some constant $C_2$ independent of n. We choose nk = t fixed; then $r(k) \le 1 + C_2 k/t$.
For one-step methods such as the forward Euler method, the backward Euler method, and the Runge-Kutta methods, one can show that they are always zero-stable. For multistep methods, the necessary and sufficient condition is given by the following theorem.
Theorem 5.4 A multistep method is zero-stable if the roots of its characteristic polynomial $a(\rho)$ satisfy
$$\text{either } |\rho| = 1 \text{ and } \rho \text{ is a simple root, or } |\rho| < 1. \tag{1.16}$$
Conversely, if the scheme is zero-stable, then it is necessary that all roots of $a(\rho) = 0$ satisfy $|\rho| \le 1$.
Proof. The characteristic roots of the multistep scheme applied to $y' = \lambda y$ satisfy
$$a(\rho) - k\lambda\,b(\rho) = 0, \tag{1.17}$$
where
$$a(\rho) := \sum_m a_m\rho^m, \qquad b(\rho) := \sum_m b_m\rho^m.$$
From the perturbation theory of polynomial roots, if $\rho_i$ is a root of $a(\rho) = 0$, then there corresponds a root $\rho_i(k)$ of (1.17) satisfying
$$\rho_i(k) = \rho_i + O(k).$$
Thus, the Green's operator G satisfies
$$\|G^n\| \le (1 + Ck)^n \le e^{Cnk}.$$
In the second case, where $\rho_i$ is a repeated root, the perturbed root $\rho_i(z)$ can remain a repeated root or split into several simple roots. In the latter case, the corresponding $G^n$ remains bounded; in the former case it becomes unbounded. Thus, the root condition for repeated roots is only a sufficient condition for stability.
Theorem 5.5 (Dahlquist) For finite difference schemes for the ODE $y' = f(t, y)$,
$$\text{consistency} + \text{zero-stability} \iff \text{convergence}.$$
1.5.2 Absolute Stability
The above convergence analysis says that the scheme converges if the mesh size k is small enough. In practice, we can only choose a finite, fixed mesh size. For instance, for the linear equation $y' = \lambda y$, the forward Euler scheme gives
$$y^n = (1 + \lambda k)^n y^0.$$
In order to have the solution remain bounded as $n \to \infty$ with kn = T fixed, we need to require
$$|1 + \lambda k| < 1.$$
This gives a restriction on k. The region $\{z \mid |1 + z| < 1\}$ is the region of absolute stability for the forward Euler scheme. It is therefore important to study the region of absolute stability in order to choose a proper mesh size in practical computations.
For a general linear system of equations $y' = Ay$, the same condition should be satisfied for all eigenvalues $\lambda_i$ of A. More precisely,
$$|1 + k\lambda_i| \le 1$$
if $\lambda_i$ is an eigenvalue of A with multiplicity 1, and
$$|1 + k\lambda_i| < 1$$
if $\lambda_i$ is an eigenvalue of A with multiplicity greater than 1.
Back to the scalar linear equation $y' = \lambda y$. Let us abbreviate $\lambda k$ by z and denote the amplification factor by r(z). We have
$$r(z) = 1 + z \quad \text{forward Euler},$$
$$r(z) = 1 + z + \frac{z^2}{2} \quad \text{midpoint, RK2},$$
$$r(z) = \frac{1 + \frac{z}{2}}{1 - \frac{z}{2}} \quad \text{implicit trapezoidal}.$$
The stability region is $D := \{z \mid |r(z)| < 1\}$.
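As an illustration (not part of the original notes), one can sample the complex plane to estimate the stability regions defined by these amplification factors; the sampling window and the area diagnostic below are arbitrary choices of mine.

```python
import numpy as np

# Amplification factors r(z) from the text.
factors = {
    "forward Euler":        lambda z: 1 + z,
    "midpoint / RK2":       lambda z: 1 + z + z**2 / 2,
    "implicit trapezoidal": lambda z: (1 + z / 2) / (1 - z / 2),
}

x = np.linspace(-4.0, 2.0, 601)
y = np.linspace(-3.0, 3.0, 601)
Z = x[None, :] + 1j * y[:, None]
cell = (x[1] - x[0]) * (y[1] - y[0])          # area of one sample cell

for name, r in factors.items():
    with np.errstate(divide="ignore", invalid="ignore"):
        inside = np.abs(r(Z)) < 1             # stability region D = {z : |r(z)| < 1}
    print(f"{name:22s}: sampled area of D in the window = {inside.sum() * cell:.2f}")
```

For the forward Euler factor the region is the unit disk centered at -1 (area close to pi), while the implicit trapezoidal factor marks essentially the whole sampled left half-plane as stable.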
Chapter 2
Finite Difference Methods for Linear
Parabolic Equations
2.1 A review of the heat equation
2.2 Finite Difference Methods for the Heat Equation
2.2.1 Some discretization methods
Let us start from the simplest parabolic equation, the heat equation:
$$u_t = u_{xx}.$$
Let $h = \Delta x$, $k = \Delta t$ be the spatial and temporal mesh sizes. Define $x_j = jh$, $j \in \mathbb{Z}$, and $t_n = nk$, $n \ge 0$. Let us abbreviate $u(x_j, t_n)$ by $u^n_j$. We shall approximate $u^n_j$ by $U^n_j$, where $U^n_j$ satisfies some finite difference equations.
Spatial discretization: The simplest choice is the centered finite difference approximation for $u_{xx}$:
$$u_{xx} = \frac{u_{j+1} - 2u_j + u_{j-1}}{h^2} + O(h^2).$$
This results in the following system of ODEs:
$$\dot{u}_j(t) = \frac{u_{j+1}(t) - 2u_j(t) + u_{j-1}(t)}{h^2},$$
or, in vector form,
$$\dot{U} = \frac{1}{h^2}AU,$$
where $U = (u_0, u_1, \dots)^t$ and $A = \mathrm{diag}(1, -2, 1)$ is the tridiagonal matrix with this stencil.
Homeworks.
1. Derive the 4th-order centered finite difference approximation for $u_{xx}$:
$$u_{xx} = \frac{1}{12h^2}\bigl(-u_{j-2} + 16u_{j-1} - 30u_j + 16u_{j+1} - u_{j+2}\bigr) + O(h^4).$$
2. Derive a 2nd-order centered finite difference approximation for $(\kappa(x)u_x)_x$.
Temporal discretization: We can apply the numerical ODE solvers above.
- Forward Euler method:
$$U^{n+1} = U^n + \frac{k}{h^2}AU^n \tag{2.1}$$
- Backward Euler method:
$$U^{n+1} = U^n + \frac{k}{h^2}AU^{n+1} \tag{2.2}$$
- 2nd-order Runge-Kutta (RK2):
$$U^{n+1} - U^n = \frac{k}{h^2}AU^{n+1/2}, \qquad U^{n+1/2} = U^n + \frac{k}{2h^2}AU^n \tag{2.3}$$
- Crank-Nicolson:
$$U^{n+1} - U^n = \frac{k}{2h^2}\bigl(AU^{n+1} + AU^n\bigr). \tag{2.4}$$
These linear finite difference equations can be solved formally as
$$U^{n+1} = GU^n,$$
where
- Forward Euler: $G = 1 + \dfrac{k}{h^2}A$,
- Backward Euler: $G = \Bigl(1 - \dfrac{k}{h^2}A\Bigr)^{-1}$,
- RK2: $G = 1 + \dfrac{k}{h^2}A + \dfrac{1}{2}\Bigl(\dfrac{k}{h^2}\Bigr)^2 A^2$,
- Crank-Nicolson: $G = \dfrac{1 + \frac{k}{2h^2}A}{1 - \frac{k}{2h^2}A}$.
For the forward Euler method, we may abbreviate the scheme as
$$U^{n+1}_j = G(U^n_{j-1}, U^n_j, U^n_{j+1}), \tag{2.5}$$
where
$$G(U_{j-1}, U_j, U_{j+1}) = U_j + \frac{k}{h^2}\bigl(U_{j-1} - 2U_j + U_{j+1}\bigr).$$
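A direct realization of the forward Euler scheme (2.1), here with periodic boundary conditions and a test problem of my own choosing, is sketched below; it respects the stability restriction $k/h^2 \le 1/2$ discussed in the following sections.

```python
import numpy as np

def heat_forward_euler(u0, h, k, nsteps):
    """One possible realization of scheme (2.1) with periodic boundary conditions.
    u0: array of initial grid values; returns the solution after nsteps steps."""
    lam = k / h**2                      # mesh ratio k/h^2
    U = np.asarray(u0, dtype=float).copy()
    for _ in range(nsteps):
        U = U + lam * (np.roll(U, 1) - 2 * U + np.roll(U, -1))
    return U

if __name__ == "__main__":
    N = 100
    h = 1.0 / N
    k = 0.4 * h**2                      # satisfies k/h^2 <= 1/2
    x = np.arange(N) * h
    u0 = np.sin(2 * np.pi * x)
    u = heat_forward_euler(u0, h, k, nsteps=500)
    # The exact solution of u_t = u_xx with this datum decays like exp(-4*pi^2*t).
    t = 500 * k
    print(np.max(np.abs(u - np.exp(-4 * np.pi**2 * t) * u0)))
```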
2.2.2 Stability and Convergence for the Forward Euler method
Our goal is to show under what conditions $U^n_j$ converges to $u(x_j, t_n)$ as the mesh sizes $h, k \to 0$.
To see this, we first examine the local error that a true solution produces. Plugging a true solution u(x, t) into (2.1), we get
$$u^{n+1}_j - u^n_j = \frac{k}{h^2}\bigl(u^n_{j+1} - 2u^n_j + u^n_{j-1}\bigr) + k\tau^n_j, \tag{2.6}$$
where
$$\tau^n_j = \bigl(D_{t,+}u^n_j - (u_t)^n_j\bigr) - \bigl(D_{x,+}D_{x,-}u^n_j - (u_{xx})^n_j\bigr) = O(k) + O(h^2).$$
Let $e^n_j$ denote $u^n_j - U^n_j$. Subtracting (2.1) from (2.6), we get
$$e^{n+1}_j - e^n_j = \frac{k}{h^2}\bigl(e^n_{j+1} - 2e^n_j + e^n_{j-1}\bigr) + k\tau^n_j. \tag{2.7}$$
This can be expressed as
$$e^{n+1}_j = G(e^n_{j-1}, e^n_j, e^n_{j+1}) + k\tau^n_j, \tag{2.8}$$
or, in operator form,
$$e^{n+1} = G(e^n) + k\tau^n, \tag{2.9}$$
where $e^n = (e^n_j)_{j\in\mathbb{Z}}$ and $G(e)_j = G(e_{j-1}, e_j, e_{j+1})$.
Suppose G satisfies
$$\|G(U)\| \le \|U\|$$
under a certain norm $\|\cdot\|$. Then we can accumulate the local truncation errors in time to get the global error as follows:
$$\|e^n\| \le \|Ge^{n-1}\| + k\|\tau^{n-1}\| \le \|e^{n-1}\| + k\|\tau^{n-1}\| \le \|Ge^{n-2}\| + k\bigl(\|\tau^{n-2}\| + \|\tau^{n-1}\|\bigr) \le \cdots \le \|e^0\| + k\bigl(\|\tau^0\| + \cdots + \|\tau^{n-2}\| + \|\tau^{n-1}\|\bigr).$$
If the local truncation error has the estimate
$$\max_n\|\tau^n\| = O(h^2) + O(k)$$
and the initial error $e^0$ satisfies
$$\|e^0\| = O(h^2),$$
then so does the global true error $\|e^n\|$ for all n.
The above analysis leads to the following definitions.
Definition 2.3 A finite difference method is called consistent if its local truncation error satisfies
$$\|\tau^{h,k}\| \to 0 \quad \text{as } h, k \to 0.$$
Definition 2.4 A finite difference scheme $U^{n+1} = G_{h,k}(U^n)$ is called stable under the norm $\|\cdot\|$ in a region $(h, k) \in R$ if
$$\|G^n_{h,k}U\| \le \|U\|$$
for all n with nk fixed.
Definition 2.5 A finite difference method is called convergent if the true error satisfies
$$\|e^{h,k}\| \to 0 \quad \text{as } h, k \to 0.$$
In the above analysis, we have seen that
$$\text{stability} + \text{consistency} \Rightarrow \text{convergence}.$$
2.3 $L^2$ Stability: Von Neumann Analysis
Since we only deal with smooth solutions in this section, the $L^2$-norm or the Sobolev norm is a proper norm for our stability analysis. For the constant-coefficient scalar case, the von Neumann analysis (via the Fourier method) provides a necessary and sufficient condition for stability. For systems with constant coefficients, the von Neumann analysis gives a necessary condition for stability. For systems with variable coefficients, the Kreiss matrix theorem provides characterizations of the stability condition.
Below, we give the $L^2$ stability analysis. We use two methods: one is the energy method, the other is the Fourier method, that is, the von Neumann analysis. We describe the von Neumann analysis below.
Given $\{U_j\}_{j\in\mathbb{Z}}$, we define
$$\|U\|^2 = \sum_j|U_j|^2$$
and its Fourier transform
$$\hat{U}(\xi) = \frac{1}{2\pi}\sum_j U_j e^{-ij\xi}.$$
The advantages of the Fourier method for analyzing finite difference schemes are:
- the shift operator is transformed to a multiplier:
$$\widehat{TU}(\xi) = e^{i\xi}\hat{U}(\xi),$$
where $(TU)_j := U_{j+1}$;
- the Parseval equality
$$\|U\|^2 = \|\hat{U}\|^2 \equiv \int_{-\pi}^{\pi}|\hat{U}(\xi)|^2\,d\xi.$$
If a finite difference scheme is expressed as
$$U^{n+1}_j = (GU^n)_j = \sum_{i=-l}^{m}a_i(T^iU^n)_j,$$
then
$$\widehat{U^{n+1}}(\xi) = \hat{G}(\xi)\,\widehat{U^n}(\xi).$$
From the Parseval equality,
$$\|U^{n+1}\|^2 = \|\widehat{U^{n+1}}\|^2 = \int_{-\pi}^{\pi}|\hat{G}(\xi)|^2\,|\widehat{U^n}(\xi)|^2\,d\xi \le \max_\xi|\hat{G}(\xi)|^2\int_{-\pi}^{\pi}|\widehat{U^n}(\xi)|^2\,d\xi = |\hat{G}|_\infty^2\,\|U^n\|^2.$$
Thus a sufficient condition for stability is
$$|\hat{G}|_\infty \le 1. \tag{2.10}$$
Conversely, suppose $|\hat{G}(\xi_0)| > 1$. Since $\hat{G}$ is a smooth function of $\xi$, we can find $\epsilon$ and $\delta$ such that
$$|\hat{G}(\xi)| \ge 1 + \epsilon \quad \text{for all } |\xi - \xi_0| < \delta.$$
Let us choose an initial datum $U^0$ in $\ell^2$ such that $\hat{U}^0(\xi) = 1$ for $|\xi - \xi_0| \le \delta$. Then
$$\|\hat{U}^n\|^2 = \int|\hat{G}|^{2n}(\xi)\,|\hat{U}^0|^2\,d\xi \ge \int_{|\xi-\xi_0|\le\delta}|\hat{G}|^{2n}(\xi)\,|\hat{U}^0|^2\,d\xi \ge (1 + \epsilon)^{2n}\,2\delta \to \infty \quad \text{as } n \to \infty.$$
Thus, the scheme cannot be stable. We conclude the above discussion with the following theorem.
Theorem 3.6 A finite difference scheme
$$U^{n+1}_j = \sum_{k=-l}^{m}a_kU^n_{j+k}$$
with constant coefficients is stable if and only if
$$\hat{G}(\xi) := \sum_{k=-l}^{m}a_ke^{ik\xi}$$
satisfies
$$\max_\xi|\hat{G}(\xi)| \le 1. \tag{2.11}$$
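As a quick check of Theorem 3.6, the following sketch evaluates the symbol $\hat{G}(\xi)$ of the forward Euler scheme on a fine grid of $\xi$ values; the sampling approach and the parameter values are my own illustration.

```python
import numpy as np

def symbol_forward_euler(xi, lam):
    """Fourier symbol of scheme (2.1): U_j^{n+1} = U_j^n + lam*(U_{j-1} - 2U_j + U_{j+1}).
    Replacing shifts by e^{±i*xi} gives G_hat = 1 - 4*lam*sin^2(xi/2)."""
    return 1.0 + lam * (np.exp(-1j * xi) - 2.0 + np.exp(1j * xi))

xi = np.linspace(-np.pi, np.pi, 2001)
for lam in [0.4, 0.6]:
    gmax = np.max(np.abs(symbol_forward_euler(xi, lam)))
    print(f"lambda = k/h^2 = {lam}: max |G_hat| = {gmax:.3f}",
          "(stable)" if gmax <= 1 + 1e-12 else "(unstable)")
```

The output reflects the familiar restriction: the symbol stays within the unit disk for $k/h^2 \le 1/2$ and exceeds it otherwise.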
Homeworks.
1. Compute $\hat{G}$ for the schemes: forward Euler, backward Euler, RK2, and Crank-Nicolson.
2.4 Energy method
We write the finite difference scheme as
$$U^{n+1}_j = \alpha U^n_{j-1} + \beta U^n_j + \gamma U^n_{j+1}, \tag{2.12}$$
where
$$\alpha, \beta, \gamma \ge 0 \quad \text{and} \quad \alpha + \beta + \gamma = 1.$$
We multiply (2.12) by $U^{n+1}_j$ on both sides, apply the Cauchy-Schwarz inequality, and get
$$(U^{n+1}_j)^2 = \alpha U^n_{j-1}U^{n+1}_j + \beta U^n_jU^{n+1}_j + \gamma U^n_{j+1}U^{n+1}_j \le \frac{\alpha}{2}\bigl((U^n_{j-1})^2 + (U^{n+1}_j)^2\bigr) + \frac{\beta}{2}\bigl((U^n_j)^2 + (U^{n+1}_j)^2\bigr) + \frac{\gamma}{2}\bigl((U^n_{j+1})^2 + (U^{n+1}_j)^2\bigr).$$
Here, we have used $\alpha, \beta, \gamma \ge 0$. We multiply this inequality by h and sum it over $j \in \mathbb{Z}$.
Denote
$$\|U\| := \Bigl(\sum_j|U_j|^2 h\Bigr)^{1/2}.$$
We get
$$\|U^{n+1}\|^2 \le \frac{\alpha}{2}\bigl(\|U^n\|^2 + \|U^{n+1}\|^2\bigr) + \frac{\beta}{2}\bigl(\|U^n\|^2 + \|U^{n+1}\|^2\bigr) + \frac{\gamma}{2}\bigl(\|U^n\|^2 + \|U^{n+1}\|^2\bigr) = \frac{1}{2}\bigl(\|U^n\|^2 + \|U^{n+1}\|^2\bigr),$$
where $\alpha + \beta + \gamma = 1$ has been applied. Thus, we get the energy estimate
$$\|U^{n+1}\|^2 \le \|U^n\|^2. \tag{2.13}$$
Homeworks.
1. Can the RK-2 method possess an energy estimate?
2.5 Stability Analysis for Monotone Operators: Entropy Estimates
Stability in the maximum norm
We notice that the action of G is a convex combination of $U_{j-1}$, $U_j$, $U_{j+1}$, provided
$$0 < \frac{k}{h^2} \le \frac{1}{2}. \tag{2.14}$$
Thus, we get
$$\min\{U^n_{j-1}, U^n_j, U^n_{j+1}\} \le U^{n+1}_j \le \max\{U^n_{j-1}, U^n_j, U^n_{j+1}\}.$$
This leads to
$$\min_jU^{n+1}_j \ge \min_jU^n_j, \qquad \max_jU^{n+1}_j \le \max_jU^n_j,$$
and
$$\max_j|U^{n+1}_j| \le \max_j|U^n_j|.$$
Such an operator G is called a monotone operator.
Entropy estimates
The property that $U^{n+1}$ is a convex combination (average) of $U^n$ is very important. Given any convex function $\eta(u)$, called an entropy function, by Jensen's inequality,
$$\eta(U^{n+1}_j) \le \alpha\eta(U^n_{j-1}) + \beta\eta(U^n_j) + \gamma\eta(U^n_{j+1}). \tag{2.15}$$
Summing over all j and using $\alpha + \beta + \gamma = 1$, we get
$$\sum_j\eta(U^{n+1}_j) \le \sum_j\eta(U^n_j). \tag{2.16}$$
This means that the entropy decreases in time. In particular, we choose:
- $\eta(u) = |u|^2$: we recover the $L^2$ stability;
- $\eta(u) = |u|^p$, $1 \le p < \infty$: we get
$$\sum_j|U^{n+1}_j|^p \le \sum_j|U^n_j|^p,$$
which leads to
$$\Bigl(\sum_j|U^{n+1}_j|^ph\Bigr)^{1/p} \le \Bigl(\sum_j|U^n_j|^ph\Bigr)^{1/p},$$
the general $L^p$ stability. Taking $p \to \infty$, we recover the $L^\infty$ stability;
- $\eta(u) = |u - c|$ for any constant c: we obtain Kruzkov's entropy estimate.
Homeworks.
1. Show that the solution of the difference equation derived from RK2 satisfies the entropy estimate. What conditions on h and k are required for such an entropy estimate?
2.6 Entropy estimate for the backward Euler method
In the backward Euler method, the amplification matrix is given by
$$G = (I - \lambda A)^{-1}, \tag{2.17}$$
where
$$\lambda = \frac{k}{h^2}, \qquad A = \mathrm{diag}(1, -2, 1).$$
The matrix $M := I - \lambda A$ has the following properties:
$$m_{ii} > 0, \qquad m_{ij} \le 0 \text{ for } j \ne i, \qquad \sum_{j\ne i}|m_{ij}| \le m_{ii}. \tag{2.18}$$
Such a matrix is called an M-matrix.
Theorem 6.7 The inverse of an M-matrix is a nonnegative matrix, i.e. all its entries are non-negative.
I shall not prove this general theorem. Instead, I will find the inverse of M explicitly. Let us express
$$M = \frac{1 + 2\lambda}{2}\,\mathrm{diag}(-a, 2, -a),$$
where, in our case,
$$a = \frac{2\lambda}{1 + 2\lambda}.$$
The general solution of the difference equation
$$-au_{j-1} + 2u_j - au_{j+1} = 0 \tag{2.19}$$
has the form
$$u_j = C_1\rho_1^j + C_2\rho_2^j,$$
where $\rho_1$, $\rho_2$ are the characteristic roots, i.e. the roots of the polynomial
$$-a\rho^2 + 2\rho - a = 0.$$
Thus,
$$\rho_i = \frac{1 \pm \sqrt{1 - a^2}}{a}.$$
From the assumption $0 < a < 1$, we have $\rho_1 < 1$ and $\rho_2 > 1$.
Now, we define a fundamental solution:
$$g_j = \begin{cases} \rho_1^j & \text{for } j \ge 0, \\ \rho_2^j & \text{for } j < 0. \end{cases}$$
We can check that $g_j \to 0$ as $|j| \to \infty$, and that $g_j$ satisfies the difference equation (2.19) for $|j| \ge 1$. For j = 0, we have
$$-ag_{-1} + 2g_0 - ag_1 = -a\rho_2^{-1} + 2 - a\rho_1 = 2 - a(\rho_1 + \rho_2^{-1}) =: d.$$
We rescale $g_j \leftarrow g_j/d$. Then we have
$$\sum_j g_{i-j}m_j = \delta_{i,0}.$$
Thus, $M^{-1}$ is a positive matrix (i.e. all its entries are positive). In fact,
$$\sum_j g_{i-j} = 1 \quad \text{for all } i.$$
Such a matrix appears in probability theory as the transition matrix of a Markov chain.
Let us go back to our backward Euler method for the heat equation. We get
$$U^{n+1} = (I - \lambda A)^{-1}U^n = GU^n,$$
where
$$(GU)_i = \sum_j g_{i-j}U_j.$$
We can think of $U^{n+1}_i$ as a weighted average of the $U^n_j$ with weights $g_{i-j}$. These weights have the properties
$$g_j > 0 \quad \text{and} \quad \sum_j g_j = 1.$$
Thus, G is a monotone operator. With this property, we can apply Jensen's inequality to get the entropy estimate:
Theorem 6.8 Let $\eta(u)$ be a convex function. Let $U^n_j$ be a solution of the difference equation derived from the backward Euler method for the heat equation. Then we have
$$\sum_j\eta(U^n_j) \le \sum_j\eta(U^0_j). \tag{2.20}$$
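A possible implementation of one backward Euler time step, sketched below, solves the M-matrix system $(I - \lambda A)U^{n+1} = U^n$ with a sparse factorization; the SciPy-based realization and the test problem are my own choices, not taken from the notes.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def backward_euler_heat(u0, h, k, nsteps):
    """Backward Euler for u_t = u_xx with zero Dirichlet boundary values.
    Each step solves (I - lam*A) U^{n+1} = U^n, where A = diag(1, -2, 1) is the
    discrete Laplacian; this M-matrix structure is what the entropy argument uses."""
    lam = k / h**2
    N = len(u0)
    A = sp.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(N, N))
    M = (sp.identity(N) - lam * A).tocsc()
    solve = spla.factorized(M)          # factor once, reuse every step
    U = np.asarray(u0, dtype=float).copy()
    for _ in range(nsteps):
        U = solve(U)
    return U

if __name__ == "__main__":
    N, h = 99, 1.0 / 100
    x = (np.arange(N) + 1) * h
    u0 = np.sin(np.pi * x)
    u = backward_euler_heat(u0, h, k=1e-3, nsteps=100)   # a large k/h^2 is allowed here
    print(np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * u0)))
```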
Homeworks.
1. Can the Crank-Nicolson method for the heat equation satisfy the entropy estimate?
What is the condition on h and k?
2.7 Existence Theory
2.7.1 Existence via the forward Euler method
From the energy estimate, we get $\|U^n\| \le \|U^0\|$. We can apply a finite difference quotient to the equation (the forward Euler equation, for instance); then $(D_{x,+}U)^n_j := (U^n_{j+1} - U^n_j)/h$ also satisfies the same equation. Thus, it also obeys the same estimate, and similarly for $D^2_{x,+}U$. We have
$$\|D^m_{x,+}U^n\| \le \|D^m_{x,+}U^0\|. \tag{2.21}$$
If we assume the initial data $f \in H^2$, then we get $U^n \in H^2_h$. Here, h = 1/n.
For any discrete function $\{U_j\} \in H^m_h$ we can construct a function $u \in H^m$ defined by
$$u(x) := \sum_jU_j\,\phi_h(x - x_j), \tag{2.22}$$
where $\phi_h(x) = \mathrm{sinc}(x/h)$. We have
$$u_h(x_j) = U_j.$$
It can be shown that
$$\|D^m_xu_h\| \le \|D^m_{x,+}U\|. \tag{2.23}$$
Similarly, the space $L^\infty_k(H^m_h)$ can be embedded into $L^\infty(H^m)$ by defining
$$u_{h,k}(x, t) = \sum_{n\ge 0}\sum_jU^n_j\,\psi_k(t - t_n)\,\phi_h(x - x_j).$$
The discrete norm and the continuous norm are equivalent.
With this background, we get
Theorem 7.9 If the initial data is in $H^m$, $m \ge 2$, and $k/h^2 \le 1/2$, then the solution of the forward Euler equation has the estimates
$$\|D^m_{x,+}U^n\| \le \|D^m_{x,+}U^0\|, \qquad \|D_{t,+}U^n\| \le \|D^2_{x,+}U^0\|. \tag{2.24}$$
Further, the corresponding smoothed function $u_{h,k}$ has the same estimates and has a subsequence converging to a solution u(x, t) of the original equation.
Proof. The functions $u_{h,k}$ are uniformly bounded in $W^{1,\infty}(H^2)$. Hence they have a subsequence converging strongly in $L^\infty(H^1)$ to a function $u \in W^{1,\infty}(H^2)$. The functions $u_{h,k}$ satisfy
$$u_{h,k}(x_j, t_{n+1}) - u_{h,k}(x_j, t_n) = \frac{k}{h^2}\bigl(u_{h,k}(x_{j-1}, t_n) - 2u_{h,k}(x_j, t_n) + u_{h,k}(x_{j+1}, t_n)\bigr).$$
Multiplying by a smooth test function $\phi$, summing over j and n, and using summation by parts, we find that the subsequence converges weakly to a solution of $u_t = u_{xx}$.
2.7.2 A Sharper Energy Estimate for the backward Euler method
Let us see that we can obtain a sharper energy estimate for the finite difference scheme derived from the backward Euler method. Recall that the backward Euler method for solving the heat equation is
$$U^{n+1}_j - U^n_j = \lambda\bigl(U^{n+1}_{j-1} - 2U^{n+1}_j + U^{n+1}_{j+1}\bigr). \tag{2.25}$$
An important technique is summation by parts:
$$\sum_j(U_j - U_{j-1})V_j = -\sum_jU_j(V_{j+1} - V_j). \tag{2.26}$$
There is no boundary term because we consider periodic boundary conditions in the present case.
Now we multiply both sides by $U^{n+1}_j$ and sum over j. We get
$$\sum_jU^{n+1}_j\bigl(U^{n+1}_j - U^n_j\bigr) = -\lambda\sum_j\bigl|U^{n+1}_{j+1} - U^{n+1}_j\bigr|^2.$$
The term on the left satisfies
$$U^{n+1}_j\bigl(U^{n+1}_j - U^n_j\bigr) \ge \frac{1}{2}\bigl((U^{n+1}_j)^2 - (U^n_j)^2\bigr).$$
Hence, we get
$$\frac{1}{2}\sum_j\bigl((U^{n+1}_j)^2 - (U^n_j)^2\bigr) \le -\lambda\sum_j\bigl|U^{n+1}_{j+1} - U^{n+1}_j\bigr|^2,$$
or, after multiplying by h and dividing by k,
$$\frac{1}{2}D_{t,-}\|U^{n+1}\|^2 \le -\|D_{x,+}U^{n+1}\|^2. \tag{2.27}$$
Here,
$$D_{t,-}U^{n+1}_j := \frac{U^{n+1}_j - U^n_j}{k}, \qquad D_{x,+}U^{n+1}_j := \frac{U^{n+1}_{j+1} - U^{n+1}_j}{h}.$$
Theorem 7.10 For the backward Euler method, we have the estimate
$$\|U^N\|^2 + C\sum_{n=1}^{N}k\,\|D_{x,+}U^n\|^2 \le \|U^0\|^2. \tag{2.28}$$
This gives control not only on $\|U^n\|^2$ but also on $\|D_{x,+}U^n\|$.
Homeworks.
1. Show that the Crank-Nicolson method also has a similar energy estimate.
2. Can the forward Euler method have a similar energy estimate?
2.8 Relaxation of errors
In this section, we want to study the evolution of an error. We consider
$$u_t = u_{xx} + f(x) \tag{2.29}$$
with given initial data. The error $e^n_j := u(x_j, t_n) - U^n_j$ satisfies
$$e^{n+1}_j = e^n_j + \lambda\bigl(e^n_{j-1} - 2e^n_j + e^n_{j+1}\bigr) + k\tau^n_j, \tag{2.30}$$
where $\lambda = k/h^2$.
We want to know how the error relaxes to zero from an initial error $e^0$. We study the homogeneous finite difference equation first, that is,
$$e^{n+1}_j = e^n_j + \lambda\bigl(e^n_{j-1} - 2e^n_j + e^n_{j+1}\bigr), \tag{2.31}$$
or $e^{n+1} = G(e^n)$. The matrix G is a tridiagonal matrix. It can be diagonalized by the Fourier method. The eigenfunctions and eigenvalues are
$$v_{k,j} = e^{2\pi ijk/N}, \qquad \rho_k = 1 - 2\lambda + 2\lambda\cos(2\pi k/N) = 1 - 4\lambda\sin^2(\pi k/N), \quad k = 0, \dots, N-1.$$
When $\lambda \le 1/2$, all eigenvalues except $\rho_0$ have magnitude less than one:
$$1 = \rho_0 > |\rho_1| > |\rho_2| > \cdots.$$
The eigenfunction corresponding to $\rho_0$ is
$$v_0 \equiv 1.$$
Hence, the projection of any discrete function U onto this eigenfunction is the average $\frac{1}{N}\sum_jU_j$.
Now, we decompose the error into these eigenmodes:
$$e^n = \sum_k\hat{e}^n_kv_k.$$
Then
$$\hat{e}^{n+1}_k = \rho_k\hat{e}^n_k.$$
Thus,
$$\hat{e}^n_k = \rho_k^n\hat{e}^0_k.$$
We see that $\hat{e}^n_k$ decays exponentially fast except for $\hat{e}^n_0$, which is the average of $e^0$. Thus, the average of the initial error never decays unless we choose it to be zero. To guarantee that the average of $e^0$ is zero, we may choose $U^n_j$ to be the cell average of $u(x, t_n)$ in the j-th cell,
$$U^n_j = \frac{1}{h}\int_{x_{j-1/2}}^{x_{j+1/2}}u(x, t_n)\,dx,$$
instead of the grid data. This implies that the initial error has zero local averages, and thus so does the global average.
The contribution of the truncation error to the true solution satisfies
$$\hat{e}^{n+1}_k = \rho_k\hat{e}^n_k + k\hat{\tau}^n_k.$$
Its solution is
$$\hat{e}^n_k = \rho_k^n\hat{e}^0_k + k\sum_{m=0}^{n-1}\rho_k^{n-1-m}\hat{\tau}^m_k.$$
We see that the mode $\hat{e}^n_0$ does not tend to zero unless $\hat{\tau}^m_0 = 0$. This can be achieved if we choose $U_j$ as well as $f_j$ to be the cell averages instead of the grid data.
Homeworks.
1. Define $U_j := \frac{1}{h}\int_{x_{j-1/2}}^{x_{j+1/2}}u(x)\,dx$. Show that if u(x) is a smooth periodic function on [0, 1], then
$$u''(x_j) = \frac{1}{h^2}\bigl(U_{j-1} - 2U_j + U_{j+1}\bigr) + \tau$$
with $\tau = O(h^2)$.
2.9 Boundary Conditions
2.9.1 Dirichlet boundary condition
The Dirichlet boundary condition is
$$u(0) = a, \qquad u(1) = b. \tag{2.32}$$
The finite difference approximation of $u_{xx}$ at $x_1$ involves u at $x_0 = 0$. We plug in the boundary condition:
$$u_{xx}(x_1) = \frac{U_0 - 2U_1 + U_2}{h^2} + O(h^2) = \frac{a - 2U_1 + U_2}{h^2} + O(h^2).$$
Similarly,
$$u_{xx}(x_{N-1}) = \frac{U_{N-2} - 2U_{N-1} + U_N}{h^2} + O(h^2) = \frac{U_{N-2} - 2U_{N-1} + b}{h^2} + O(h^2).$$
The unknowns are $U^n_1, \dots, U^n_{N-1}$, with the N - 1 finite difference equations at $x_1, \dots, x_{N-1}$. The discrete Laplacian becomes
$$A = \begin{pmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & 1 & -2 \end{pmatrix}. \tag{2.33}$$
This discrete Laplacian is the same as the discrete Laplacian with zero Dirichlet boundary condition.
We can obtain energy estimates and entropy estimates as in the case of periodic boundary conditions.
Next, we examine how the error is relaxed for the forward Euler method with zero Dirichlet boundary condition. From the Fourier method, we observe that the eigenfunctions and eigenvalues for the forward Euler method are
$$v_{k,j} = \sin(2\pi jk/N), \qquad \rho_k = 1 - 2\lambda + 2\lambda\cos(2\pi k/N) = 1 - 4\lambda\sin^2(\pi k/N), \quad k = 1, \dots, N-1.$$
In the present case, all eigenvalues satisfy
$$|\rho_i| < 1, \quad i = 1, \dots, N-1,$$
provided the stability condition $\lambda \le 1/2$ holds.
Thus, the errors $\hat{e}^n_i$ decay to zero exponentially for all $i = 1, \dots, N-1$. The slowest mode is $\rho_1$, which satisfies
$$\rho_1 = 1 - 4\lambda\sin^2(\pi/N) \approx 1 - 4\lambda\Bigl(\frac{\pi}{N}\Bigr)^2$$
and
$$\rho_1^n \approx \Bigl(1 - 4\lambda\Bigl(\frac{\pi}{N}\Bigr)^2\Bigr)^n \approx e^{-4\pi^2t},$$
where we have used that $\lambda = k/h^2$ is fixed and nk = t.
2.9.2 Neumann boundary condition
The Neumann boundary condition is
$$u'(0) = \sigma_0, \qquad u'(1) = \sigma_1. \tag{2.34}$$
We may use the following discretization methods:
- First order:
$$\frac{U_1 - U_0}{h} = \sigma_0.$$
- Second order-I:
$$\frac{U_1 - U_0}{h} = u_x(x_{1/2}) = u_x(0) + \frac{h}{2}u_{xx}(x_0) = \sigma_0 + \frac{h}{2}f(x_0).$$
- Second order-II: we use one-sided extrapolation,
$$\frac{-3U_0 + 4U_1 - U_2}{2h} = \sigma_0.$$
The unknowns are $U^n_j$ with $j = 0, \dots, N$. At the same time, we add these two equations at the boundaries.
Homeworks.
1. Find the eigenfunctions and eigenvalues for the discrete Laplacian with the Neumann boundary condition (consider both the first-order and the second-order approximations at the boundary). Notice that there is a zero eigenvalue.
   Hint: You may use Matlab to find the eigenvalues and eigenvectors.
Here, I will provide another method. Suppose A is the discrete Laplacian with Neumann boundary condition; A is an (N + 1) × (N + 1) matrix. Suppose $Av = \lambda v$. Then for $j = 1, \dots, N-1$, v satisfies
$$v_{j-1} - 2v_j + v_{j+1} = \lambda v_j, \quad j = 1, \dots, N-1.$$
For $v_0$, we have
$$v_0 - v_1 = \lambda v_0.$$
For $v_N$, we have
$$v_N - v_{N-1} = \lambda v_N.$$
Suppose the general solution has the ansatz
$$v_j = C_1\rho_1^j + \rho_2^j, \quad j = 0, \dots, N,$$
where $\rho_1$, $\rho_2$ satisfy
$$\rho^2 - (2 + \lambda)\rho + 1 = 0.$$
At $x_0$, we have $v_0 - v_1 = \lambda v_0$, i.e. we get
$$(1 - \lambda)(C_1 + 1) = C_1\rho_1 + \rho_2.$$
At $x_N$, we have $v_N - v_{N-1} = \lambda v_N$, i.e.
$$(1 - \lambda)\bigl(C_1\rho_1^N + \rho_2^N\bigr) = C_1\rho_1^{N-1} + \rho_2^{N-1}.$$
These are two equations for the two unknowns $\lambda$ and $C_1$. We can get nontrivial solutions for suitable $\lambda$ and $C_1$. Finally, we normalize v to be a unit vector.
Homeworks.
1. Complete the calculation.
2. Consider
$$u_t = u_{xx} + f(x)$$
on [0, 1] with the Neumann boundary condition $u'(0) = u'(1) = 0$. If $\int f(x)\,dx \ne 0$, what will happen to u as $t \to \infty$?
2.10 The discrete Laplacian and its inversion
We consider the elliptic equation
$$u_{xx} - \lambda u = f(x), \quad x \in (0, 1).$$
2.10.1 Dirichlet boundary condition
The Dirichlet boundary condition is
$$u(0) = a, \qquad u(1) = b. \tag{2.35}$$
The finite difference approximation of $u_{xx}$ at $x_1$ involves u at $x_0 = 0$. We plug in the boundary condition:
$$u_{xx}(x_1) = \frac{U_0 - 2U_1 + U_2}{h^2} + O(h^2) = \frac{a - 2U_1 + U_2}{h^2} + O(h^2).$$
Similarly,
$$u_{xx}(x_{N-1}) = \frac{U_{N-2} - 2U_{N-1} + U_N}{h^2} + O(h^2) = \frac{U_{N-2} - 2U_{N-1} + b}{h^2} + O(h^2).$$
The unknowns are $U_1, \dots, U_{N-1}$, with the N - 1 finite difference equations at $x_1, \dots, x_{N-1}$. The discrete Laplacian becomes
$$A = \begin{pmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & 1 & -2 \end{pmatrix}. \tag{2.36}$$
This is the discrete Laplacian with Dirichlet boundary condition. In one dimension, we can find $A^{-1}$ explicitly. Let us solve $(A - 2\epsilon)^{-1}$, where $\epsilon = \lambda h^2/2$. The difference equation
$$U_{j-1} - (2 + 2\epsilon)U_j + U_{j+1} = 0$$
has two independent solutions $\rho_1^j$ and $\rho_2^j$, where $\rho_i$ are the roots of
$$\rho^2 - (2 + 2\epsilon)\rho + 1 = 0,$$
that is,
$$\rho = 1 + \epsilon \pm \sqrt{(1 + \epsilon)^2 - 1}.$$
When $\epsilon = 0$, the two solutions are $U_j = 1$ and $U_j = j$. This gives the fundamental solution
$$G_{i,j} = \begin{cases} jC_i & j \le i, \\ (N - j)C'_i & j \ge i. \end{cases}$$
From $G_{i,i-1} - 2G_{i,i} + G_{i,i+1} = 1$ and $iC_i = (N - i)C'_i$, we get $C_i = -(N - i)/N$ and $C'_i = -i/N$.
When $\epsilon > 0$, the two roots satisfy $\rho_1 < 1$ and $\rho_2 > 1$.
Homeworks.
1. Use Matlab or Maple to find the fundamental solution $G_{i,j} := (A - 2\epsilon)^{-1}$ with $\epsilon > 0$.
2. Is it correct that $G_{i,j}$ has the following form?
$$G_{i,j} = \begin{cases} \rho_1^{j-i} & N - 1 \ge j \ge i, \\ \rho_2^{j-i} & 1 \le j \le i. \end{cases}$$
Let us go back to the original equation:
$$u_{xx} - \lambda u = f(x).$$
The above study of the Green's function of the discrete Laplacian helps us quantify the error produced by the source term. If $AU = f$ and $A^{-1} = G$, then an error $\delta$ in f will produce an error
$$e = G\delta.$$
If the off-diagonal part of G decays exponentially (i.e. $\epsilon > 0$), then the error is localized; otherwise, it pollutes everywhere. The error from the boundary has the same behavior. Indeed, if $\epsilon = 0$, then the discrete solution is
$$u(x_j) = aG_0(j) + bG_1(j) + \sum_iG_{j,i}f_i,$$
where $G_0(j) = 1 - jh$, $G_1(j) = jh$, and $G = A^{-1}$ is the Green's function with zero Dirichlet boundary condition. Here, $G_0$ solves the equation
$$G_0(i-1) - 2G_0(i) + G_0(i+1) = 0, \quad i = 1, \dots, N-1,$$
with $G_0(0) = 1$ and $G_0(N) = 0$; and $G_1$ solves the same equation with $G_1(0) = 0$ and $G_1(N) = 1$.
If $\epsilon > 0$, one can see that both $G_0$ and $G_1$ are also localized.
Project 2. Solve the following equation numerically:
$$u_{xx} - \lambda u + f(x) = 0, \quad x \in [0, 1],$$
with periodic, Dirichlet, and Neumann boundary conditions. Consider the equilibrium solutions for the following sources:
1. A layer structure:
$$f(x) = \begin{cases} 1 & 1/4 < x < 3/4, \\ -1 & \text{otherwise}. \end{cases}$$
2. An impulse:
$$f(x) = \begin{cases} 1/\delta & 1/2 < x < 1/2 + \delta, \\ 0 & \text{otherwise}. \end{cases}$$
3. A dipole:
$$f(x) = \begin{cases} 1/\delta & 1/2 - \delta < x < 1/2, \\ -1/\delta & 1/2 < x < 1/2 + \delta, \\ 0 & \text{otherwise}. \end{cases}$$
You may choose $\delta = 0.1, 1$ and $\lambda = 0.1, 1, 2$. Observe how the solutions change as you vary $\lambda$ and $\delta$.
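As a possible starting point for Project 2 (a sketch under my own choices of discretization, boundary treatment, and parameter names), the code below solves the Dirichlet case with the centered second-order scheme.

```python
import numpy as np

def solve_elliptic_dirichlet(f, lam, N, a=0.0, b=0.0):
    """Solve u_xx - lam*u + f(x) = 0 on [0,1], u(0)=a, u(1)=b, with the centered scheme
    (U_{j-1} - (2 + lam*h^2) U_j + U_{j+1}) / h^2 = -f(x_j) at the interior points."""
    h = 1.0 / N
    x = np.arange(1, N) * h                       # interior grid points x_1 .. x_{N-1}
    M = (np.diag(np.full(N - 1, -2.0 - lam * h**2))
         + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1))
    rhs = -h**2 * f(x)
    rhs[0] -= a                                   # move boundary values to the right-hand side
    rhs[-1] -= b
    return x, np.linalg.solve(M, rhs)

if __name__ == "__main__":
    layer = lambda x: np.where((x > 0.25) & (x < 0.75), 1.0, -1.0)
    for lam in [0.1, 1.0, 2.0]:
        x, u = solve_elliptic_dirichlet(layer, lam, N=200)
        print(f"lambda = {lam}: max|u| = {np.abs(u).max():.4f}")
```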
Project 3. Solve the following equation numerically with the Neumann boundary condition:
$$\epsilon u_{xx} + f(u) = g(x), \quad x \in [0, 1].$$
Here, $f(u) = F'(u)$ and the potential is
$$F(u) = u^4 - u^2.$$
Study the solution as a function of $\epsilon$. Choose a simple g, say a piecewise constant function, a delta function, or a dipole.
Chapter 3
Finite Difference Methods for Linear Elliptic Equations
3.1 Discrete Laplacian in two dimensions
We will solve the Poisson equation
$$\Delta u = f$$
in a domain $\Omega \subset \mathbb{R}^2$ with Dirichlet boundary condition
$$u = g \quad \text{on } \partial\Omega.$$
Such a problem is a core problem in many applications. We may assume g = 0 by subtracting a suitable function from u. Thus, we limit our discussion to the case of zero boundary condition. Let h be the spatial mesh size. For simplicity, let us assume $\Omega = [0, 1] \times [0, 1]$, but much of the discussion below can be extended to general smooth bounded domains.
3.1.1 Discretization methods
Centered finite difference. The Laplacian is approximated by
$$(AU)_{i,j} = \frac{1}{h^2}\bigl(U_{i-1,j} + U_{i+1,j} + U_{i,j-1} + U_{i,j+1} - 4U_{i,j}\bigr).$$
For the square domain, the indices run over $1 \le i, j \le N-1$ and
$$U_{0,j} = U_{N,j} = U_{i,0} = U_{i,N} = 0$$
from the boundary condition.
If we order the unknowns U by $i + j(N-1)$, with j the outer loop index and i the inner loop index, then the matrix form of the discrete Laplacian is
$$A = \frac{1}{h^2}\begin{pmatrix} T & I & & & \\ I & T & I & & \\ & I & T & I & \\ & & \ddots & \ddots & \ddots \\ & & & I & T \end{pmatrix}.$$
This is an $(N-1)\times(N-1)$ block tridiagonal matrix. The block T is the $(N-1)\times(N-1)$ matrix
$$T = \begin{pmatrix} -4 & 1 & & & \\ 1 & -4 & 1 & & \\ & 1 & -4 & 1 & \\ & & \ddots & \ddots & \ddots \\ & & & 1 & -4 \end{pmatrix}.$$
Since this discrete Laplacian is derived by centered finite differencing on a uniform grid, it is second-order accurate: the truncation error is
$$\tau_{i,j} := \frac{1}{h^2}\bigl(u(x_{i-1}, y_j) + u(x_{i+1}, y_j) + u(x_i, y_{j-1}) + u(x_i, y_{j+1}) - 4u(x_i, y_j)\bigr) - \Delta u(x_i, y_j) = O(h^2).$$
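One convenient way to assemble this block-tridiagonal matrix in practice is via Kronecker products; the sketch below is my own illustration of that identity, not a construction taken from the notes.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(N, h):
    """5-point discrete Laplacian on the (N-1)x(N-1) interior grid of the unit square
    with zero Dirichlet data, using A = (I kron L + L kron I)/h^2, L = tridiag(1,-2,1)."""
    n = N - 1
    L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, L) + sp.kron(L, I)) / h**2

if __name__ == "__main__":
    N = 4
    h = 1.0 / N
    A = laplacian_2d(N, h)
    print((A * h**2).toarray())   # shows the T/I block structure with -4 and 1 entries
```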
3.1.2 The 9-point discrete Laplacian
The Laplacian is approximated by the 9-point stencil
$$\nabla^2_9 = \frac{1}{6h^2}\begin{pmatrix} 1 & 4 & 1 \\ 4 & -20 & 4 \\ 1 & 4 & 1 \end{pmatrix}.$$
One can show by Taylor expansion that
$$\nabla^2_9u = \nabla^2u + \frac{1}{12}h^2\nabla^4u + O(h^4).$$
If u is a solution of $\nabla^2u = f$, then
$$\nabla^2_9u = f + \frac{1}{12}h^2\nabla^2f + O(h^4).$$
Thus, we get a 4th-order method:
$$\nabla^2_9U_{ij} = f_{ij} + \frac{h^2}{12}\nabla^2f_{ij}.$$
3.2 Stability of the discrete Laplacian
3.2.1 Fourier method
Since our domain is $\Omega = [0, 1] \times [0, 1]$ and the coefficients are constant, we can apply the Fourier transform. It is then easy to see that the eigenfunctions of the discrete Laplacian are
$$U^{k,\ell}_{i,j} = \sin(ik\pi h)\sin(j\ell\pi h).$$
The corresponding eigenvalues are
$$\lambda_{k,\ell} = \frac{2}{h^2}\bigl(\cos(k\pi h) + \cos(\ell\pi h) - 2\bigr) = -\frac{4}{h^2}\bigl(\sin^2(k\pi h/2) + \sin^2(\ell\pi h/2)\bigr).$$
These are the eigenvalues of the 5-point discrete Laplacian. The smallest eigenvalue (in magnitude) is
$$\lambda_{1,1} = -\frac{8}{h^2}\sin^2(\pi h/2) \approx -2\pi^2.$$
To show stability, we take the Fourier transform of the equation
$$\nabla^2_5U_{ij} = f_{ij}.$$
We get
$$\lambda_{k,\ell}\,\hat{U}_{k,\ell} = \hat{f}_{k,\ell}.$$
We abbreviate this as $\hat{A}\hat{U} = \hat{f}$. The multiplier $\hat{A}$ has the estimate
$$\|\hat{A}\hat{U}\| \ge 2\pi^2\|\hat{U}\|.$$
Hence
$$\|\hat{U}\| \le \frac{1}{2\pi^2}\|\hat{f}\|.$$
From the Parseval equality, we have
$$\|U\| \le \frac{1}{2\pi^2}\|f\|.$$
Homeworks.
1. Compute the eigenvalues and eigenfunctions of the 9-point discrete Laplacian on the domain [0, 1] × [0, 1] with zero boundary condition.
3.2.2 Energy method
The energy method can be extended to more general domains. To perform the energy estimate, we rewrite the discrete Laplacian as
$$(AU)_{i,j} = \frac{1}{h^2}\bigl(U_{i-1,j} + U_{i+1,j} + U_{i,j-1} + U_{i,j+1} - 4U_{i,j}\bigr) = \bigl((D_{x+}D_{x-} + D_{y+}D_{y-})U\bigr)_{i,j},$$
where
$$(D_{x+}U)_{i,j} = \frac{U_{i+1,j} - U_{i,j}}{h}$$
is the forward difference. We multiply the discrete Laplacian by $U_{i,j}$ and sum over all i, j. Applying summation by parts, we get
$$(AU, U) = \bigl((D_{x+}D_{x-} + D_{y+}D_{y-})U, U\bigr) = -(D_{x-}U, D_{x-}U) - (D_{y-}U, D_{y-}U) = -\|\nabla_hU\|^2_h.$$
Here, the discrete $L^2$ norm is defined by
$$\|U\|^2_h = \sum_{i,j}|U_{i,j}|^2h^2.$$
The boundary term does not show up because we consider the zero Dirichlet boundary problem. Thus, the discrete Poisson equation has the estimate
$$\|\nabla_hU\|^2_h = |(f, U)| \le \|f\|_h\|U\|_h. \tag{3.1}$$
Next, for the zero Dirichlet boundary condition we have a Poincare inequality. Before stating the Poincare inequality, we need to clarify the meaning of the zero boundary condition in the discrete sense. We define the Sobolev space $H^1_{h,0}$ to be the completion of the restrictions of all $C^1_0$ functions to the grid points under the discrete $H^1$ norm. Here, a $C^1_0$ function is a $C^1$ function that is zero on the boundary; the discrete $H^1$ norm is
$$\|U\|_{h,1} := \|U\|_h + \|\nabla_hU\|_h.$$
Lemma 2.2 Let $\Omega$ be a bounded domain in $\mathbb{R}^2$. Then there exists a constant $d_\Omega$, the diameter of the domain $\Omega$, such that for any $U \in H^1_{h,0}$,
$$\|U\|_h \le d_\Omega\|\nabla_hU\|_h. \tag{3.2}$$
Proof. Let us take $\Omega = [0, X] \times [0, Y]$ as an example for the proof. We assume X = Mh, Y = Nh. From the zero boundary condition, we have
$$U^2_{i,j} = \Bigl(\sum_{i'=1}^{i}D_{x-}U_{i',j}\,h\Bigr)^2 \le \Bigl(\sum_{i'=1}^{i}1^2\Bigr)\Bigl(\sum_{i'=1}^{i}(D_{x-}U_{i',j})^2\Bigr)h^2 \le i\Bigl(\sum_{i'=1}^{M}(D_{x-}U_{i',j})^2\Bigr)h^2.$$
Multiplying both sides by $h^2$ and summing over all i, j, we get
$$\|U\|^2_h = \sum_{i,j}U^2_{i,j}h^2 \le \frac{M^2}{2}h^2\sum_{i,j}(D_{x-}U_{i,j})^2h^2 = \frac{M^2}{2}h^2\|D_{x-}U\|^2_h.$$
Similarly, we have
$$\|U\|^2_h \le \frac{N^2}{2}h^2\|D_{y-}U\|^2_h.$$
Thus,
$$\|U\|^2_h \le \frac{h^2}{2}\max\{M^2, N^2\}\,\|\nabla_hU\|^2_h \le d^2_\Omega\|\nabla_hU\|^2_h.$$
With the Poincare inequality, we can obtain two estimates for U.
Proposition 1 Consider the discrete Laplacian with zero boundary condition. We have
$$\|U\|_h \le d^2_\Omega\|f\|_h, \tag{3.3}$$
$$\|\nabla_hU\|_h \le d_\Omega\|f\|_h. \tag{3.4}$$
Proof. From
$$\|\nabla_hU\|^2_h \le \|f\|_h\|U\|_h,$$
we apply the Poincare inequality to the left-hand side to obtain
$$\|U\|^2_h \le d^2_\Omega\|\nabla_hU\|^2_h \le d^2_\Omega\|f\|_h\|U\|_h.$$
This yields
$$\|U\|_h \le d^2_\Omega\|f\|_h.$$
If we apply the Poincare inequality to the right-hand side instead, we get
$$\|\nabla_hU\|^2_h \le \|f\|_h\|U\|_h \le \|f\|_h\,d_\Omega\|\nabla_hU\|_h.$$
Thus, we obtain
$$\|\nabla_hU\|_h \le d_\Omega\|f\|_h.$$
Chapter 4
Finite Difference Theory For Linear
Hyperbolic Equations
4.1 A review of smooth theory of linear hyperbolic equations
Hyperbolic equations appear commonly in the physical world. The propagation of acoustic waves, electromagnetic waves, etc., obeys hyperbolic equations. The physical characterization of hyperbolicity is that signals propagate at finite speed. Mathematically, it means that compactly supported initial data yield compactly supported solutions at all times. This hyperbolicity property has been characterized in terms of the coefficients of the corresponding linear partial differential equations through the Fourier method.
There are two techniques for hyperbolic equations: one is based on the Fourier method (Garding et al.), the other is the energy method (Friedrichs' symmetric hyperbolic equations). A good reference is F. John's book. For computational purposes, we shall only study one-dimensional cases. For analysis, the techniques include the method of characteristics, energy methods, and Fourier methods.
4.1.1 Linear advection equation
We start from the Cauchy problem for the linear advection equation in one space dimension:
$$u_t + au_x = 0, \tag{4.1}$$
$$u(x, 0) = u_0(x). \tag{4.2}$$
Its solution is simply a translation of $u_0$, namely,
$$u(x, t) = u_0(x - at).$$
More generally, we can solve the linear advection equation with variable coefficients by the method of characteristics. Consider
$$u_t + a(x, t)u_x = 0.$$
This equation merely says that the directional derivative of u is 0 in the direction $(1, a) \parallel (dt, dx)$. If $x(t, \xi)$ is the solution of the ODE
$$\frac{dx}{dt} = a(x, t)$$
with initial data $x(0, \xi) = \xi$, then
$$\frac{d}{dt}\Big|_\xi u(x(t, \xi), t) = \partial_tu + \partial_xu\,\frac{dx}{dt} = u_t + au_x = 0.$$
In other words, u is unchanged along the curve dx/dt = a. Such a curve is called a characteristic curve. Suppose that from any point (x, t), t > 0, we can find the characteristic curve $\xi(s, t, x)$ backward in time and that it can be extended to s = 0. Namely, $\xi(\cdot, t, x)$ solves the ODE $d\xi/ds = a(\xi, s)$ with $\xi(t, t, x) = x$, and $\xi(\cdot, t, x)$ exists on [0, t]. The solution to the Cauchy problem is then given by $u(x, t) = u_0(\xi(0, t, x))$.
Note that the characteristics are the curves along which signals propagate.
Homeworks
1. Find the solution of
$$u_t - \tanh(x)\,u_x = 0$$
with initial data $u_0$. Also show that $u(x, t) \to 0$ as $t \to \infty$, provided $u_0(x) \to 0$ as $|x| \to \infty$.
2. Show that the initial value problem for
$$u_t + (1 + x^2)u_x = 0$$
is not well defined. (Show that the characteristics issued from the x-axis do not cover the entire domain $x \in \mathbb{R}$, $t \ge 0$.)
4.1.2 Linear systems of hyperbolic equations
Method of characteristics. Second-order hyperbolic equations can be expressed as hyperbolic systems. For example, the wave equation
$$u_{tt} - c^2u_{xx} = 0$$
can be written as
$$\begin{pmatrix} u_x \\ u_t \end{pmatrix}_t - \begin{pmatrix} 0 & 1 \\ c^2 & 0 \end{pmatrix}\begin{pmatrix} u_x \\ u_t \end{pmatrix}_x = 0.$$
In general, systems of hyperbolic equations have the form
$$u_t + A(x, t)u_x = B(x, t)u + f.$$
Here, u is an n-vector and A, B are $n \times n$ matrices. Such a system is called hyperbolic if A is diagonalizable with real eigenvalues. That is, A has real eigenvalues
$$\lambda_1 \le \cdots \le \lambda_n$$
with left/right eigenvectors $l_i$ / $r_i$, respectively. We normalize these eigenvectors so that $l_ir_j = \delta_{i,j}$. Let $R = (r_1, \dots, r_n)$ and $L = (l_1, \dots, l_n)^t$. Then
$$A = R\Lambda L, \qquad \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n), \qquad LR = I.$$
We can use L and R to diagonalize this system. First, we introduce v = Lu, then multiply the equation by L from the left:
$$Lu_t + LAu_x = LBu + Lf.$$
This gives
$$v_t + \Lambda v_x = Cv + g,$$
where $C = LBR + L_tR + \Lambda L_xR$ and $g = Lf$. The i-th equation,
$$v_{i,t} + \lambda_iv_{i,x} = \sum_jc_{i,j}v_j + g_i,$$
is simply an ODE in the direction $dx/dt = \lambda_i(x, t)$. As before, from a point (x, t) with t > 0, we draw characteristic curves $\xi_i(\cdot, t, x)$, $i = 1, \dots, n$:
$$\frac{d\xi_i}{ds} = \lambda_i(\xi_i, s), \quad i = 1, \dots, n, \qquad \xi_i(t, t, x) = x.$$
We integrate the i-th equation along the i-th characteristic to obtain
$$v_i(x, t) = v_{0,i}(\xi_i(0, t, x)) + \int_0^t\Bigl(\sum_jc_{i,j}v_j + g_i\Bigr)(\xi_i(s, t, x), s)\,ds.$$
An immediate conclusion we can draw here is that the domain of dependence of (x, t) is $[\xi_n(0, t, x), \xi_1(0, t, x)]$, which we denote by D(x, t); it is finite. This means that if $u_0$ is zero on D(x, t), then u(x, t) = 0.
One can obtain a local existence theorem from this integral equation, provided $v_0$ and $v_{0,x}$ are bounded. Its proof mimics that of the local existence theorem for ODEs. We define the function space $C_b(\mathbb{R})$ of bounded continuous functions on $\mathbb{R}$, with the sup norm $\|u\|_\infty := \sup_x|u(x)|$, and define the map
$$(Tv)_i = v_{0,i}(\xi_i(0, t, x)) + \int_0^t\Bigl(\sum_jc_{i,j}v_j + g_i\Bigr)(\xi_i(s, t, x), s)\,ds.$$
Then T is a contraction in $C_b$ if the time is short enough. The contraction map T yields a fixed point. This is the solution.
The global existence follows from a priori estimates (for example, $C^1$ estimates) using the above integral equations. A necessary condition for global existence is that all characteristics issued from any point (x, t), $x \in \mathbb{R}$, t > 0, can be traced back to the initial time. A sufficient condition is that A(x, t) is bounded in the upper half-plane of the x-t space.
A nice reference for the method of characteristics for systems of hyperbolic equations in one dimension is John's book, P.D.E., Section 5, Chapter 2.
Energy method for symmetric hyperbolic equations. Many physical systems can be written as symmetric hyperbolic systems:
$$A_0u_t + A(x, t)u_x = B(x, t)u + f,$$
where $A_0$, A are $n \times n$ symmetric matrices and $A_0$ is positive definite. We take the inner product of this equation with u and then integrate over x. For simplicity, we first assume $A_0$ and A are constant matrices. We get
$$\frac{\partial}{\partial t}\frac{1}{2}A_0u\cdot u + \frac{\partial}{\partial x}\frac{1}{2}Au\cdot u = Bu\cdot u + f\cdot u.$$
Here we have used the symmetry of $A_0$ and A:
$$\frac{\partial}{\partial x}(Au\cdot u) = Au_x\cdot u + Au\cdot u_x = 2Au_x\cdot u.$$
As we integrate over the whole space, we get
$$\frac{d}{dt}\frac{1}{2}(A_0u, u) = (Bu, u) + (f, u).$$
The positivity of $A_0$ yields that $(A_0u, u)$ is equivalent to the $L^2$-norm of u; namely, there are two constants $C_1$ and $C_2$ such that for any $u \in L^2(\mathbb{R})$,
$$C_1\int|u|^2\,dx \le (A_0u, u) \le C_2\int|u|^2\,dx.$$
If we use $(A_0u, u)$ as a new norm $\|u\|^2$, then we get
$$\frac{d}{dt}\frac{1}{2}\|u(t)\|^2 \le C\|u\|^2 + \|u\|\,\|f\|.$$
Here, we have used the boundedness of B. Eliminating one factor of $\|u\|$, we get
$$\frac{d}{dt}\|u(t)\| \le C\|u\| + \|f\|.$$
This yields boundedness of $\|u(t)\|$ for all time if $\|u(0)\|$ is bounded.
We can apply this method to the equations for the derivatives of u by differentiating the equations. This gives boundedness of all derivatives. For the general smooth theory of symmetric hyperbolic systems in higher dimensions, we refer to Chapter 6 of John's book.
4.2 Finite difference methods for linear advection equa-
tion
4.2.1 Design techniques
We shall explain some design principles for the linear advection equation:
u
t
+au
x
= 0.
4.2. FINITE DIFFERENCE METHODS FOR LINEAR ADVECTION EQUATION 45
We shall assume a > 0 a constant. Despite of its simplicity, the linear advection equation
is a prototype equation to design numerical methods for nonlinear hyperbolic equations in
multi-dimension.
First, we choose h = x and k = t to be the spatial and temporal mesh sizes,
respectively. We discrete the x t space by the grid points (x
j
, t
n
), where x
j
= jx and
t
n
= nt. We shall use the data U
n
j
to approximate u(x
j
, t
n
). To derive nite difference
schemes, we use nite differences to approximate derivatives. We demonstrate spatial
discretization rst, then the temporal discretization.
1. Spatial discretization. There are two important design principles here, the interpola-
tion and upwinding.
1. Derivatives are replaced by nite differences. For instance, u
xj
can be replaced by
U
j
U
j1
h
, or
U
j+1
U
j1
2h
, or
3U
j
4U
j1
+U
j2
2h
.
The rst one is rst-order, one-side nite differencing, the second one is the central
differencing which is second order, the third one is a one-side, second-order nite
differencing. This formulae can be obtained by make Taylor expansion of u
j+k
about
x
j
.
2. Upwinding. We assume a > 0, this implies that the information comes from left.
Therefore, it is reasonable to approximate u
x
by left-side nite difference:
U
j
U
j1
h
or
3U
j
4U
j1
+U
j2
2h
2. Temporal discretization.
1. Forward Euler: We replace u
t
n
j
by (U
n+1
j
U
n
j
)/k. As conbining with the upwinding
spatial nite differencing, we obtain the above upwinding scheme.
2. backward Euler: We replace u
t
n+1
j
by (U
n+1
j
U
n
j
)/k.
3. Leap frog: We replace u
t
n
j
by (U
n+1
j
U
n1
j
)/2k.
4. An important trick is to replace high-order temporal derivatives by high-order spatial
derivatives through the help of P.D.E.: for instance, in order to achieve high order
approximation of u
t
, we can expand
u
n+1
j
u
n
j
k
= u
n
t,j
+
k
2
u
n
tt,j
+ ,
Here, u
n
j
, u
n
t,j
denotes u(x
j
, t
n
), u
t
(x
j
, t
n
), respectively. We can replace u
tt
by
u
tt
= au
xt
= a
2
u
xx
,
46CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
then approximate u
xx
by central nite difference. Finally, a high order approximation
of u
t
is
u
t

U
n+1
j
U
n
j
k

k
2h
2
(U
n
j+1
2U
n
j
+U
n
j1
).
We list some nite difference schemes below. Let = ak/h.
Upwind : U
n+1
j
= U
n
j
(U
n
j
U
n
j1
)
Lax-Friedrichs : U
n+1
j
=
U
n
j+1
+U
n
j1
2
+

2
(U
n
j+1
U
n
j1
)
Lax-Wendroff : U
n+1
j
= U
n
j


2
(U
n
j+1
U
n
j1
) +

2
2
(U
n
j+1
2U
n
j
+U
n
j1
)
Beam-Warming : U
n+1
j
= U
n
j


2
(3U
n
j
4U
n
j1
+U
n
j2
) +

2
2
(U
n
j
2U
n
j1
+U
n
j2
)
Backward Euler : U
n+1
j
U
n
j
=

2
(U
n+1
j1
U
n+1
j+1
)
In general, an (explicit) nite difference scheme for the linear advection equation can be
expressed as
U
n+1
j
= G(U
n
jl
, U
n
jl+1
, , U
n
j+m
)
=
m

k=l
a
k
U
n
j+k
Remark.
1. From characteristics method, u(x
j
, t
n+1
) = u(x
j
ak, t
n
). We can approximate it by
interpolation at neighboring grid points. For instance, a linear interpolation at x
j1
and x
j
gives
u
n+1
j

ak
h
u
n
j1
+ (1
ak
h
)u
n
j
.
The corresponding nite difference scheme is then dened by
U
n+1
j
=
ak
h
U
n
j1
+ (1
ak
h
)U
n
j
.
This is the well-known upwind scheme. Where the spatial discretization is exactly
the above one-side, rst-order nite differencing.
2. The term (u
n+1
j
u
n
j
)/k in a forward Euler method introduces a anti-diffusion term
a
2
u
xx
, namely,
u
n+1
j
u
n
j
k
= u
t
+
k
2
u
tt
+O(k
2
) = u
t
+
a
2
k
2
u
xx
+O(k
2
).
Thus, a high-order upwind differencing

2
(3U
n
j
4U
n
j1
+ U
n
j2
) for au
x
and rst-
order difference in time will be unstable.
4.2. FINITE DIFFERENCE METHODS FOR LINEAR ADVECTION EQUATION 47
Homeworks.
1. Use the trick u
tt
= a
2
u
xx
and central nite difference to derive Lax-Wendroff scheme
by yourself.
2. Derive a nite difference using method of characteristics and a quadratic interpola-
tion at x
j2
, x
j1
and x
j
. Is this scheme identical to the Beam-Warming scheme?
3. Do the same thing with cubic interpolation at x
j2
, , x
j+1
.
4. Write a computer program using the above listed schemes to the linear advection
equation. Use periodic boundary condition. The initial condition are
(a) square wave,
(b) hat function
(c) Gaussian
(d) e
x
2
/D
sin mx
Rene the mesh by a factor of 2 to check the convergence rates.
4.2.2 Courant-Friedrichs-Levy condition
For a nite difference scheme:
U
n+1
j
= G(U
n
j
, , U
n
j+m
),
We can dene numerical domain of dependence of (x
j
, t
n
) (denoted by D
n
(j, n)) to be
[x
jn
, x
j+nm
]. For instance, the numerical domain of upwind method is [x
jn
, x
j
]. If
U
0
k
= 0 on D
n
(j, n), then U
n
j
= 0. In order to have our nite difference schemes physically
meaningful, a natural condition is
physical domain of dependence numerical domain of dependence
This gives a constraint on the ratio of h and k. Such a condition is called the Courant-
Friedrichs-Levy condition. For the linear advection equation with a > 0, the condition
is
0
ak
h
1
If this condition is violated, we can esaily construct an initial condition which is zero on
numerical domain of dependence of (x, t), yet u(x, t) ,= 0. The nite difference scheme
will produce 0 at (x, t). Thus, its limit is also 0.
Below, we shall x the ratio h/k during the analysis and take h 0 in the approxima-
tion procedure.
48CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
4.2.3 Consistency and Truncation Errors
Let u be a smooth solution of the differential equation. Let us express our difference scheme
in the form:
U
n+1
= GU
n
Given a smooth function u(x, t). Let us denote u(jh, nk) by u
n
j
. Plug u
n
into this nite
difference equation, then make Taylor expansion about (jh, nk). For instance, we plug a
smooth function u into a upwinding scheme:
1
k
(u
n+1
j
u
n
j
) +
1
h
(u
n
j
u
n
j1
) = (u
t
+au
x
) +k(u
tt
u
xx
) + O(h
2
+k
2
)
Thus, we may dene the truncation error as
e
n
=
u
n+1
Gu
n
k
A nite difference scheme is called consistent if e 0 as k 0. Naturally, this is a
minimal requirement of a nite difference scheme. If the scheme is expressed as
U
n+1
j
=
m

k=l
a
k
U
n
j+k
Then a necessary and sufcient condition for consistency is
m

k=l
a
k
= 1.
This is easy to see because the constant is a solution.
If e = O(k
r
), then the scheme is called of order r. We can easily check that e = O(k)
for upwind method and e = O(k
2
) for Lax-Wendroff method.
Homeworks.Find the truncation error of the schemes listed above.
4.2.4 Laxs equivalence theorem
Suppose U
n
is generated from a nite difference scheme: U
n+1
= G(U
n
), we wish the
solution remain bounded under certain norm as we let the mesh size t 0. This is
equivalent to let the time step number n . A scheme is called stable if |U
n
| remains
bounded under certain norm | | for all n.
Let u be an exact solution of some linear hyperbolic P.D.E. and U be the solution of a
corresponding nite difference equation, We want to estimate the true error
n
j
= u
n
j
U
n
j
.
First we estimate how much error accumulated in one time step.
u
n+1
U
n+1
= ke
n
+Gu
n
GU
n
.
If we can have an estimate (called stability condition)
|GU| |U| (4.3)
4.2. FINITE DIFFERENCE METHODS FOR LINEAR ADVECTION EQUATION 49
under certain norm | |, then we obtain
|u
n
U
n
| |u
0
U
0
| +k(e
n1
+ +e
1
).
From the consistency, we obtain |
n
| 0 as k 0. If the scheme is of order r, then we
obtain
|
n
| |u
0
U
0
| +O(k
r
).
We have the following theorems.
Theorem 2.11 (Lax equivalence theorem) Given a linear hyperbolic partial differential
equation. Then a consistent nite difference scheme is stable if and only if is is convergent.
We have proved stability convergence. We shall prove the other part in the next section.
Theorem 2.12 For smooth solutions, the associated true error computed by a nite differ-
ence scheme of order r is O(k
r
).
4.2.5 Stability analysis
Since we only deal with smooth solutions in this section, the L
2
-norm or the Sobolev norm
is a proper norm to our stability analysis. For constant coefcient and scalar case, the von
Neumann analysis (via Fourier method) provides a necessary and sufcient condition for
stability. For systemwith constant coefcients, the von Neumann analysis gives a necessary
condition for statbility. For systems with variable coefcients, the Kreiss matrix theorem
provides characterizations of stability condition. We describe the von Neumann analysis
below.
Given U
j

jZ
, we dene
|U|
2
=

j
[U
j
[
2
and its Fourier transform

U() =
1
2

U
j
e
ij
.
The advantages of Fourier method for analyzing nite difference scheme are
the shift operator is transformed to a multiplier:

TU() = e
i

U(),
where (TU)
j
:= U
j+1
;
the Parseval equility
|U|
2
= |

U|
2

U()[
2
d.
50CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
If a nite difference scheme is expressed as
U
n+1
j
= (GU
n
)
j
=
m

i=l
a
i
(T
i
U
n
)
j
,
then

U
n+1
=

G()

U
n
().
From the Parseval equality,
|U
n+1
|
2
= |

U
n+1
|
2
=
_

G()[
2
[

U
n
()[
2
d
max

G()[
2
_

U
n
()[
2
d
= [

G[
2

|U|
2
Thus a necessary condition for stability is
[

G[

1. (4.4)
Conversely, Suppose [

G(
0
)[ > 1, from

G being a smooth function in , we can nd and


such that
[

G()[ 1 + for all [


0
[ < .
Let us choose an initial data U
0
in
2
such that

U
0
() = 1 for [
0
[ . Then
|

U
n
|
2
=
_
[

G[
2n
()[

U
0
[
2

_
|
0
|
[

G[
2n
()[

U
0
[
2
(1 +)
2n
as n
Thus, the scheme can not be stable. We conclude the above discussion by the following
theorem.
Theorem 2.13 A nite difference scheme
U
n+1
j
=
m

k=l
a
k
U
n
j+k
with constant coefcients is stable if and only if

G() :=
m

k=l
a
k
e
ik
satises
max

G()[ 1. (4.5)
4.2. FINITE DIFFERENCE METHODS FOR LINEAR ADVECTION EQUATION 51
As a simple example, we show that the scheme:
U
n+1
j
= U
n
j
+

2
(U
n
j+1
U
n
j1
)
is unstable. The operator G = 1 +

2
(T T
1
). The corresponding

G() = 1 + i sin ,
which cannot be bounded by 1 in magnitude. One the other hand, the Lax-Friedrichs
scheme replaces U
n
j
in the above scheme by the average (U
n
j1
+U
n
j+1
)/2. The correspond-
ing

G() = cos + i sin , which is bounded by 1 in magnitude provided [[ 1. The
above replacement is equivalent to add a term (U
n
j1
2U
n
j
+ U
n
j+1
)/2 to the right hand
side of the above unstable nite difference. It then stabilizes the scheme. This quantity is
called a numerical viscosity. We see the discussion in the next section.
Homeworks.
1. Compute the

G for the schemes: Lax-Friedrichs, Lax-Wendroff, Leap-Frog, Beam-
Warming, and Backward Euler.
4.2.6 Modied equation
We shall study the performance of a nite difference scheme to a linear hyperbolic equation.
Consider the upwind scheme for the linear advection equation. Let u(x, t) be a smooth
function. Expand u in Taylor series, we obtain
u
n+1
j
G(u
n
)
j
= (u
t
+au
x
)t
(x)
2
2t
(
2
)u
xx
+O((t)
3
).
The truncation error for the upwind method is O(t) if u satises the linear advection
scheme. However, if we x x and t, then the error is O(t
3
) if u satises
u
t
+au
x
u
xx
= 0,
where
=
(x)
2
2t
(
2
).
This equation is called modied equation. The solution of the nite difference equation
is closer to the solution of this modied equation than the original equation. The role of
u
xx
is a dissipation term in the scheme. The constant is called numerical viscosity. We
observe that 0 if and only if 0 1, which is exactly the (C-F-L as well as
von Neumann) stability condition. This is consistent to the well-postedness of diffusion
equations (i.e. 0).
The effect of numerical viscosity is that it will make solution smoother, and will smear
out discontinuities. To see this, let us solve the Cauchy problem:
u
t
+au
x
= u
xx
u(x, 0) = H(x) :=
_
1 if x 0
0 if x < 0
52CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
The function H is called the Heaviside function. The corresponding solution is given by
u(x, t) =
1

4t
_

(xaty)
2
4t
u(y, 0) dy
=
1

4t
_

0
e

(xaty)
2
4t
dy
= erf((x at)/

4t),
where
erf(x) :=
2

_
x

e
z
2
dz.
Let u
e
(x, t) be the exact solution of u
t
+au
x
= 0 with u(x, 0) = H(x). Then
[u
e
(y +at, t) u(y +at, t)[ = erf([y[/

4t).
Hence,
|u
e
(, t) u(, t)|
L
1 = 2
_
0

erf(
y

4t
) dy
= C

t
Since = O(t), we see that
|u
n
e
u
n
| = O(

t).
On the other hand, if U is the solution of the nite difference equation, then we have
|U
n
u
n
| = O(t
2
). Hence
|U
n
u
n
e
|
L
1 = O(

t).
Thus, a rst order scheme is only of half order for linear discontinuities.
One can also observe the smearing (averaging ) of discontinuities from the nite dif-
ference directly. In upwind scheme, U
n+1
j
may be viewed as weighted averages of U
n
j
and
U
n
j1
:
U
n+1
j
= (1 )U
n
j
+U
n
j1
.
If U
n
j1
= 0 and U
n
j
= 1, then U
n+1
j
is a value between 0 and 1. This is a smearing
process (averaging process). The smearing process will spread out. Its width is (

nx) =
O(

t) from the estimate of binomial distribution.


It should be noticed that the magnititute of the numerical viscosity of the upwind
method is smaller than that of the Lax-Friedrichs method. The upwind method uses the
information of charateristic speed whereas the Lax-Friedrichs does not use this informa-
tion.
4.2. FINITE DIFFERENCE METHODS FOR LINEAR ADVECTION EQUATION 53
Homeworks.
1. Find the modied equations for the following schemes:
Lax-Friedrichs : u
t
+au
x
=
(x)
2
2t
(1
2
)u
xx
Lax-Wendroff : u
t
+au
x
=
(x)
2
6
a(
2
1)u
xxx
Beam-Warming : u
t
+au
x
=
(x)
2
6
a(2 3 +
2
)u
xxx
2. Expand u up to u
xxxx
, nd the modied equation with the term u
xxxx
for the Lax-
Wendroff scheme and Beam-Warming. That is
u
t
+au
x
= u
xxx
+u
xxxx
.
Show that the coefcient < 0 for both scheme if and only if the C-F-L stability
condition.
3. Find the solution U
n
j
of the upwind scheme with initial data U
0
j
=
j0
. (Hint: a
binomial distribution.) Now, condider the Heaviside function as our initial data.
Using the above solution formula, superposition principle and the Stirling formula,
show that

j
[u
n
j
U
n
j
[x = O(

nx) = O(

t).
Next, we study second-order scheme for discontinuities. We use Fourier method to
study the solution of the modied equation:
u
t
+au
x
= u
xxx
.
By taking Fourier transform, we nd
u
t
= (ia i
3
) u = i() u
Hence
u(x, t) =
_
e
i(x()t)
u(, 0) d.
The initial data we consider here is the Heaviside function H(x). However, in the discrete
domain, its Fourier expansion is truncated. The corresponding inversion has oscillation on
both side of the discontinuity. The width is O(x), the height is O(1). We propagate such
an initial data by the equation u
t
+ au
x
= u
xxx
. The superposition of waves in different
wave number cause interference of waves. Eventually, it forms a wave package: a high
frequency wave modulated by a low frequency wave. By the method of stationary phase,
we see that the major contribution of the integral is on the set when
d
d
(x ()t) = 0.
The correspond wave e
i(x

()t)
is the modulated wave. Its speed

() is called the group


velocity.
54CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
For the Lax-Wendroff scheme, we see that the group speed is
v
p
= a + 3
2
.
For the Beam-Warming, v
p
= a + 3
2
. Since < 0 for the Lax-Wendroff, while >
0 for the Beam-Warming, we observe that the wave package leaves behind (ahead) the
discontinuity in the Lax-Wendroff (Beam-Warming).
Homeworks.
1. Measure the width of the oscillation as a function of number of time steps n.
One can also observe this oscillation phenomena directly from the scheme. In Beam-
Warming, we know that U
n+1
j
is a quadratic interpolation of U
n
j2
, U
n
j1
and U
n
j
. If U
n
j2
=
0, and U
n
j1
= U
n
j
= 1, then the quadratic interpolation gives an overshoot at U
n+1
j
(that
is, U
n+1
j
> 1). Similarly, in the Lax-Wendroff scheme, U
n+1
j
is a quadratic interpolation of
U
n
j1
, U
n
j
and U
n
j+1
. If U
n
j1
= U
n
j
= 0, and U
n
j+1
= 1, then U
n+1
j
< 0 (an undershoot).
4.3 Finite difference schemes for linear hyperbolic system
with constant coefcients
4.3.1 Some design techniques
We consider the system
u
t
+Au
x
= 0
with A being a constant n n matrix. The rst designing principle is to diagonal the
system. Using the left/right eigenvectors, we decompose
A = RL
= R(
+

)L
= A
+
A

Here, = diag(
1
, ,
n
) and

are the positive/negative parts of .


With this decomposition, we can dene the upwind scheme:
U
n+1
j
= U
n
j
+
t
x
A
+
(U
n
j1
U
n
j
)
t
x
A

(U
n
j+1
U
n
j
).
The Lax-Friedrichs is still
U
n+1
j
=
U
n
j1
+U
n
j+1
2
+
t
2x
A(U
n
j1
U
n
j+1
)
= U
n
j
+
t
2x
A(U
n
j1
U
n
j+1
) +
U
n
j1
2U
n
j
+U
n
j+1
2
4.3. FINITE DIFFERENCE SCHEMES FORLINEARHYPERBOLICSYSTEMWITHCONSTANT COEFFICIENTS55
We see the last term is a dissipation term. In general, we can design modied L-F scheme
as
U
n+1
j
= U
n
j
+
t
2x
A(U
n
j1
U
n
j+1
) + D
U
n
j1
2U
n
j
+U
n
j+1
2
where D is a positive constant.
The Lax-Wendroff scheme is given by
U
n+1
j
= U
n
j
+
t
2x
(U
n
j1
U
n
j+1
) +
t
2(x)
2
A
2
(U
n
j+1
2U
n
j
+U
n
j1
).
The C-F-L condition for upwind, L-F, L-W are
max
i
[
i
[
t
x
1.
Homeworks.
1. Find the modied equation for the above schemes.
2. What is the stability condition for the modied L-F scheme.
3. Write a compute program to compute the solution of the wave equation:
u
t
= v
x
v
t
= c
2
u
x
using upwind, modied L-F, L-W schemes. The initial data is chosen as those for the
linear advection equation. Use the periodic boundary condition.
4.3.2 Stability analysis
The denition of L
2
-stability is that the L
2
-norm of the solution of nite difference scheme

j
[U
n
j
[
2
x
is uniformly bounded.
This L
2
-theory for smooth solutions was well developed in the 60s. First, Laxs equiva-
lence theoremwas originally proved for well-posed linear systems even in multi-dimension.
Thus, the essential issue for nite difference scheme is still the stability problem.
Let us suppose the system is expressed as
u
t
=

i
A
i
u
x
i
+Bu +f
Here, A
i
, B are constant matrices. We assume that the system is hyperbolic. This means
that

i
A
i
is diagonal with real eigenvalues. Suppose the corresponding nite difference
scheme is expressed as
U
n+1
= GU
n
=

U
n
.
56CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
Here, = (
1
, ,
n
) is multi-index, a

are matrices. Consider the Fourier transform of


G:

G(k) =

e
i
P
m

m
k
m
x
m
If we take x
m
as a function of t, then

G is a function of (k, t). Using

G, we have

U
n
=

G
n

U
0
.
From the Parseval equality: |U|
2
=
_
[

U[
2
, we obtain that the stability of a scheme
U
n+1
= GU
n
is equivalent to |

G
n
| is uniformly bounded. Von Neumann gave a nec-
essary condition for stability for system case.
Theorem 3.14 A necessary condition for stability is that all eigenvalues of

G(k, t) sat-
ises
[
i
(k, )[ 1 + O(t), k, t .
Proof. The spectral radius of

G(k, t) is the maximum value of the absolute values of the
its eigenvalues. That is,
(

G) := max
i
[
i
[
Since there is an eigenvector v such that [

Gv[ = [v[, we have that


|

G| := max
u
[

Gu[
[u[
.
Also, the eigenvalues of

G
n
are
n
i
. Hence we have
(

G
n
) = (

G)
n
.
Combine the above two, we obtain
(

G)
n
|

G
n
|.
Now, if |

G
n
| is uniformly bounded, say by a constant C depends on t := nt, then
C
1/n
1 + O(t).
For single equation, we have seen that von Neumann condition is also a sufcient condition
for stability.
In general, Kreiss provided characterization of matrices which are stable.
Denition 3.6 A family of matrices A is stable if there exists a constant C such that for
all A A and all positive integer n,
|A
n
| C.
4.4. FINITE DIFFERENCE METHODS FORLINEARSYSTEMS WITHVARIABLE COEFFICIENTS57
Theorem 3.15 (Kreiss matrix theorem) The stability of A is equivalent to each of the
following statements:
(i) There exists a constant C such that for all A A and z C, [z[ > 1, (A zI)
1
exists and satises
|(A zI)
1
|
C
[z[ 1
.
(ii) There exist constants C
1
and C
2
such that for all A A, there exists nonsingular
matrix S such that (1) |S|, |S
1
| C
1
, and (2) B = SAS
1
is upper triangular
and its off-diagonal elements satisfy
[B
ij
[ C
2
min1 [
i
[, 1 [
j
[
where
i
are the diagonal elements of B.
(iii) There exists a constant C > 0 such that for all A A, there exists a positive
denite matrix H such that
C
1
I H CI
A

HA H
Remarks.
1. In the rst statement, the spectral radius of A is bounded by 1.
2. In the second statement, it is necessary that all [
i
[ 1.
3. The meaning of the last statement means that we should use the norm

[U
j
[
2
=

j
(HU
j
, U
j
) instead of the Euclidean norm. Then G is nonincreasing under this
norm.
4.4 Finite difference methods for linear systems with vari-
able coefcients
Again, the essential issue is stability because Laxs equivalence theorem.
Kreiss showed by an example that the local stability (i.e. the stability for the frozen
coefcients) is neither necessary nor sufcient for overall stability of linear variable sys-
tems. However, if the system u
t
= Au with A being rst order, Strang showed that the
overall stability does imply the local stability. So, for linear rst-order systems with vari-
able coefcients, the von Neumann necessary condition is also a necessary for the overall
stability.
For sufcient condition, we need some numerical dissipation to damp the high fre-
quency component from spatial inhomogeneity. To illustrate this, let us consider the fol-
lowing scalar equation:
u
t
+a(x)u
x
= 0,
58CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
and a nite difference scheme
U
n+1
(x) = A(x)U
n
(x x) + B(x)U
n
(x) +C(x)U
n
(x + x).
For consistency, we need to require
A(x) + B(x) + C(x) = 1
A(x) C(x) = a(x)
Now, we impose another condition for local stability:
0 A(x), B(x), C(x) 1.
We showstability result. Multiply the difference equation by U
n+1
(x), use Cauchy-Schwartz
inequality, we obtain
(U
n+1
(x))
2
= A(x)U
n
(x x)U
n+1
(x) +B(x)U
n
(x)U
n+1
(x) + C(x)U
n
(x + x)U
n+1
(x)

A(x)
2
((U
n
(x x))
2
+ (U
n+1
(x))
2
) +
B(x)
2
((U
n
(x))
2
+ (U
n+1
(x))
2
)
+
C(x)
2
((U
n
(x + x))
2
+ (U
n+1
(x))
2
)
=
A(x)
2
(U
n
(x x))
2
+
B(x)
2
(U
n
(x))
2
+
C(x)
2
(U
n
(x + x))
2
+
1
2
(U
n+1
(x))
2
This implies
(U
n+1
(x))
2
A(x)(U
n
(x x))
2
+B(x)(U
n
(x))
2
+C(x)(U
n
(x + x))
2
= A(x x)(U
n
(x x))
2
+B(x)(U
n
(x))
2
+C(x + x)(U
n
(x + x))
2
+(A(x) A(x x))(U
n
(x x))
2
+ (C(x) C(x + x))(U
n
(x + x))
2
Now, we sum over x = x
j
for j Z. This yields
|U
n+1
|
2
|U
n
|
2
+O(t)|U
n
|
2
Hence,
|U
n
|
2
(1 +O(t))
n
|U
0
|
2
e
Kt
|U
0
|
2
.
The above analysis show that monotone schemes are stable in L
2
. Indeed, the scheme
has some dissipation to damp the errors from the variation of coefcient (i.e. the term like
(A(x) A(x x))).
For higher order scheme, we need to estimate higher order nite difference U, this
will involves [a[|U|, or their higher order nite differences. We need some dissipa-
tion to damp the growth of this high frequency modes. That is, the eigenvalues of the
amplication matrix should satises
[
i
[ 1 [kx[
2r
, when [kx[
for some > 0.
4.4. FINITE DIFFERENCE METHODS FORLINEARSYSTEMS WITHVARIABLE COEFFICIENTS59
To be more precisely, we consider rst-order hyperbolic system in high-space dimen-
sion:
u
t
+

i
a
i
(x)u
x
i
= 0.
Consider a nite difference approximation:
U
n+1
(x) =

(x)T

U
n
(x)
Here = (
1
, ,
n
) is a multi-index.
Let

G(x, t, ) =

e
i
be the Fourier transform of the frozen nite difference
operator.
Denition 4.7 A nite difference scheme with amplication matrix

G(x, t, ) is called
dissipative of order 2r if there exists a constant > 0 such that all eigenvalues of

G satisfy
[
i
(x, t, )[ 1 [[
2r
for all max
i
[
i
[ , all x, and all t < for some constant .
An important theorem due to Kreiss is the following stability theorem.
Theorem 4.16 Suppose the system is symmetric hyperbolic, i.e. the matrices a
i
are sym-
metric. Suppose the coefcients A

are also symmetric. Assume all coefcients are uni-


formly bounded. If the scheme is of order 2r 1 and is dissipative of order r, then the
scheme is stable.
60CHAPTER4. FINITE DIFFERENCE THEORYFORLINEARHYPERBOLICEQUATIONS
Chapter 5
Scalar Conservation Laws
5.1 Physical models
Many partial differential equations are derived from physical conservation laws such as
conservation of mass, momentum, energy, charges, etc. This class of PDEs is called con-
servation laws. The scalar conservation law is a single conservation law.
5.1.1 Trafc ow model
An interesting model is the following trafc owmodel on a high way. We use macroscopic
model, which means that x 100m. Let be the car density, u be the average car
velocity. The car ux at a point x is the number of car passing through x per unit time. In a
time period t, the car which can pass x must be in the region u(x, t)t. Thus, the ux at
x is ((x, t)u(x, t)t)/(t) = (x, t)u(x, t). Now, consider an arbitrary region (a, b), we
have
the change of number of cars in (a, b) = the car ux at a the car ux at b.
In mathematical formula:
d
dt
_
b
a
(x, t) dx = (a, t)u(a, t) (b, t)u(b, t)
=
_
b
a
(u)
x
dx
This holds for any (a, b). Hence, we have

t
+ (u)
x
= 0. (5.1)
This equation is usually called the continuity equation in continuum mechanics. It is not
closed because it involves two knowns and u. Empirically, u can be teated as a function
of which satises u 0 as
max
. For instance,
u() = u
max
(1

max
),
61
62 CHAPTER 5. SCALAR CONSERVATION LAWS
if there is a upper velocity limit, or
u() = a log(
max
/)
if there is no restriction of velocity. We can model u to depend on
x
also. For instance,
u = u()

which means that if the car number becomes denser (rareed) , then the speed is reduced
(increased). Here, is the diffusion coefcient (viscosity) which is a positive number.
Thus, the nal equation is

t
+f()
x
= 0, (5.2)
or

t
+f()
x
=
xx
, (5.3)
where f() = u().
5.1.2 Burgers equation
The Burgers equation is
u
t
+
1
2
(u
2
)
x
= u
xx
. (5.4)
When = 0, this equation is called inviscid Burgers equation. This equation is a prototype
equation to study conservation laws.
Homeworks.
1. The Burgers equation can be linearized by the following nonlinear transform: let
v = e

R
x
u(,t) d
,
show that v satises the heat equation:
v
t
= v
xx
2. Show that the Cauchy problem of the Burgers equation with initial data u
0
has an
explicit solution:
u(x, t) =

2
v
x
v
=
_

_
x y
t
_
p

(x, t, y) dy,
where
p

(x, t, y) =
e

2
I(x,t,y)
_

2
I(x,t,y)
dy
,
I(x, t, y) =
(x y)
2
2t
+
_
y
0
u
0
() d.
5.2. BASIC THEORY 63
5.1.3 Two phase ow
The Buckley-Leverett equation models how oil and water move in a reservoir. The un-
known u is the saturation of water, 0 u 1. The equation is
u
t
+f(u)
x
= 0
where
f(u) =
u
2
u
2
+a(1 u
2
)
2
.
Unlike previous examples, the ux f here is a non-convex function.
5.2 Basic theory
Let consider scalar conservation law
u
t
+f(u)
x
= 0. (5.5)
The equation can be viewed as a directional derivative
t
+ f

(u)
x
of u is zero. That
implies u is constant along the characteristic curve
dx
dt
= f

(u(x, t)).
This yields that the characteristic curve is indeed a straight line. Using this we can solve
the Cauchy problem of (5.5) with initial data u
0
implicitly:
u = u
0
(x ut).
For instance, for inviscid Burgers equation with u
0
(x) = x, the solution u is given by
u = x ut, or u = x/(1 +t).
Homeworks.
1. If f is convex and u
0
is increasing, then the Cauchy problem for equation (5.5) has
global solution.
2. If f is convex and u

0
< 0 at some point, then u
x
at nite time.
The solution may blow up (i.e. [u
x
[ ) in nite time due to the intersection of
characteristic curves. A shock wave (discontinuity) is formed. We have to extend our
solution class to to include these discontinuous solutions. We can view (5.5) in weak
sense. That is, for every smooth test function with compact support in R [0, ),
_

0
_

[u
t
+f(u)
x
] dxdt = 0
Integrate by part, we obtain
_

0
_

[
t
u +
x
f(u)] dxdt +
_

(x, 0)u(x, 0) dx = 0, (5.6)


In this formulation, it allows u to be discontinuous.
64 CHAPTER 5. SCALAR CONSERVATION LAWS
Denition 2.8 A function u is called a weak solution of (5.5) if it satises (5.6) for all
smooth test function with compact support in R [0, ).
Lemma 5.1 Suppose u is a weak solution with discontinuity across a curve x(t). Suppose
u is smooth on the two sides of x(t). Then u satises the following jump condition across
x(t):
dx
dt
[u] = [f(u)] (5.7)
where [u] := u(x(t)+, t) u(x(t), t).
Homeworks.Work out this by yourself.
5.2.1 Riemann problem
The Riemann problem is a Cauchy problem of (5.5) with the following initial data
u(x, 0) =
_
u

for x < 0
u
r
for x > 0
(5.8)
The reasons why Riemann problem is important are:
(i) Discontinuities are generic, therefore Riemann problem is generic locally.
(ii) In physical problems, the far eld states are usually two constant states. Because of
the hyperbolicity, at large time, we expect the solution is a perturbation of solution
to the Riemann problem. Therefore, Riemann problem is also generic globally.
(iii) Both the equation (5.5) and the Riemann data (5.8) are invariant under the Galilean
transform: x x, t t for all > 0. If the uniqueness is true, the solution to
the Riemann problem is self-similar. That is, u = u(x/t). The PDE problem is then
reduced to an ODE problem.
When f

,= 0, say, f

> 0, here are two important solutions.


1. shock wave: u

u
r
u(x, t) =
_
u

for x < t
u
r
for x > t
(5.9)
where = (f(u
r
) f(u

))/(u
r
u

).
2. rarefaction wave: u

< u
r
u(x, t) =
_
_
_
u

for x <

t
u for

< (u) =
x
t
<
r
u
r
for x >
r
t
(5.10)
where (u) = f

(u) is an increasing function.


These two solution are of fundamental importance. We shall denote them by (u

, u
r
).
The weak solution is not unique. For instance, in the case of u

< u
r
, both (5.10)
and (5.9) are weak solutions. Indeed, there are innite many weak solution to such a Rie-
mann problem. Therefore, additional condition is needed to guarantee uniqueness. Such a
condition is called an entropy condition.
5.2. BASIC THEORY 65
5.2.2 Entropy conditions
To nd a suitable entropy condition for general hyperbolic conservation laws, let us go
back to study the gas dynamic problems. The hyperbolic conservation laws are simplied
equations. The original physical equations usually contain a viscous term u
xx
, as that
in the Navier-Stokes equation. We assume the viscous equation has uniqueness property.
Therefore let us make the following denition.
Denition 2.9 A weak solution is called admissible if it is the limit of
u

t
+f(u

)
x
= u

xx
, (5.11)
as 0+.
We shall label this condition by (A). In gas dynamics, the viscosity causes the physical
entropy increases as gas particles passing through a shock front. One can show that such
a condition is equivalent to the admissibility condition. Notice that this entropy increasing
condition does not involve viscosity explicitly. Rather, it is a limiting condition as 0+.
This kind of conditions is what we are looking for. For general hyperbolic conservation
laws, there are many of them. We list some of them below.
(L) Laxs entropy condition: across a shock (u

, u
r
) with speed , the Laxs entropy
condition is

> >
r
(5.12)
where

(
r
) is the left (right) characteristic speed of the shock.
The meaning of this condition is that the information can only enter into a shock,
then disappear. It is not allowed to have information coming out of a shock. Thus, if
we draw characteristic curve from any point (x, t) backward in time, we can always
meet the initial axis. It can not stop at a shock in the middle of time because it
would violate the entropy condition. In other words, all information can be traced
back to initial time. This is a causality property. It is also time irreversible, which
is consistent to the second law of thermodynamics. However, Laxs entropy is only
suitable for ux f with f

,= 0.
(OL) Oleinik-Lius entropy condition: Let
(u, v) :=
f(u) f(v)
u v
.
The Oleinik-Lius entropy condition is that, across a shock
(u

, v) (u

, u
r
) (5.13)
for all v between u

and u
r
. This condition is applicable to nonconvex uxes.
(GL) The above two conditions are conditions across a shock. Lax proposed another
global entropy condition. First, he dene entropy-entropy ux: a pair of function
((u), q(u)) is called an entropy-entropy ux for equation (5.5) A weak solution
66 CHAPTER 5. SCALAR CONSERVATION LAWS
u(x, t) is said to satisfy entropy condition if for any entropy-entropy ux pair (, q),
u(x, t) satises
(u(x, t))
t
+q(u(x, t))
x
0 (5.14)
in weak sense.
(K) Another global entropy proposed by Kruzkov is for any constant c,
_

0
_

[[u c[
t
+ sign(u c)(f(u) f(c))
x
] dx 0 (5.15)
for all positive smooth with compact support in R (0, ). (GL) (K):
For any c, we choose (u) = [u c[, which is a convex function. One can check the
corresponding q(u) = sign(u c)(f(u) f(c)). Thus, (K) is a special case of (GL).
We may remark here that we can choose even simplier entropy-entropy ux:
(u) = u c, q(u) = f(u c),
where u c := maxu, c.
When the ux is convex, each of the above conditions is equivalent to the admissibility
condition. When f is not convex, each but the Laxs entropy condition is equivalent to the
admissibility condition.
We shall not provide general proof here. Rather, we study special case: the weak
solution is only a single shock (u

, u
r
) with speed .
Theorem 5.1 Consider the scalar conservation law (5.5) with convex ux f. Let (u

, u
r
)
be its shock with speed . Then the above entropy conditions are all equivalent.
Proof. (L) (OL);
We need to assume f to be convex. This part is easy. It follows from the convexity of f.
We leave the proof to the reader.
(A) (OL):
We also need to assume f to be convex. Suppose (u

, u
r
) is a shock. Its speed
=
f(u
r
) f(u

)
u
r
u

.
We shall nd a solution of (5.11) such that its zero viscosity limit is (u

, u
r
). Consider a
solution haing the form ((x t)/). In order to have (u

, u
r
), we need to require
far eld condition:
()
_
u


u
r

(5.16)
Plug ((x t)/) into (5.11), integrate in once, we obtain

= F(). (5.17)
where F(u) = f(u) f(u

) (uu

). We nd F(u

) = F(u
r
) = 0. This equation with
far-eld condition (5.16) if and only if, for all u between u

and u
r
, (i) F

(u) > 0 when


5.2. BASIC THEORY 67
u

< u
r
, or (ii) F

(u) < 0 when u

> u
r
. One can check that (i) or (ii) is equivalent to
(OL).
Next, we study global entropy conditions.
(A) (GL)
If u is an admissible solution. This means that it is the limit of u

which satisfy the viscous


conservation law (5.11). Let (, q) be a pair of entropy-entropy ux. Multiply (5.11) by

(u

), we obtain
(u

)
t
+q(u

)
x
=

(u

)u

xx
= (u

)
xx

(u

x
)
2
(u

)
xx
We multiply this equation by any positive smooth test function with compact support in
R (0, ), then integrate by part, and take 0, we obtain
_

0
_

[(u)
t
+q(u)
x
] dxdt 0
This means that (u)
t
+q(u)
x
0 in weak sense.
(K) (OL) for single shock:
Suppose (u

, u
r
) is a shock. Suppose it satises (K). We want to show it satises (OL). The
condition (GL), as applied to a single shock (u

, u
r
), is read as
[] + [q] 0.
Here, we choose = [u c[. The condition becomes
([u
r
c[ [u

c[) + sign(u
r
c)(f(u
r
) f(c)) sign(u

c)(f(u

) f(c)) 0
Or
(u

, u
r
)([u
r
c[ [u

c[) +[u
r
c[(u
r
, c) [u

c[(u

, c) 0 (5.18)
We claim that this condition is equivalent to (OL). First, if c lies outside of u

and u
r
, then
the left-hand side of (5.18) is zero. So (5.18) is always true in this case. Next, if c lies
betrween u

and u
r
, one can easily check it is equivalent to (OL).
5.2.3 Rieman problem for nonconvex uxes
The Oleinik-Lius entropy condition can be interpreted as the follows graphically. Suppose
(u

, u
r
) is a shock, then the condition (OL) is equivalent to one of the follows. Either
u

> u
r
and the graph of f between u

, u
r
lies below the secant (u
r
, f(u
r
)), (u

, f(u

)). Or
u

< u
r
and the graph of f between u

, u
r
lies above the secant ((u

, f(u

)), (u
r
, f(u
r
))).
With this, we can construct the solution to the Riemann problem for nonconvex ux as the
follows.
68 CHAPTER 5. SCALAR CONSERVATION LAWS
If u

> u
r
, then we connect (u

, f(u

)) and (u
r
, f(u
r
)) by a convex envelope of f (i.e.
the largest convex function below f). The straight line of this envelope corresponds to an
entropy shock. In curved part, f

(u) increases, and this portion corresponds to a centered


rarefaction wave. Thus, the solution is a composition of rarefaction waves and shocks. It is
called a copmposite wave.
If u

< u
r
, we simply replace convex envelope by concave envelope.
Example. Consider the cubic ux: f(u) =
1
3
u
3
. If u

< 0, u
r
> 0 From u

, we can
draw a line tangent to the graph of f at u

= u

/2. If u
r
> u

, then the wave structure


is a shock (u

, u

) follows by a rarefaction wave (u

, u
r
). If u
r
u

, then the wave is a


single shock. Notice that in a composite wave, the shock may contact to a rarefaction wave.
Such a shock is called a contact shock.
5.3 Uniqueness and Existence
Theorem 5.2 (Kruzkov) Assume f is Lipschitz continuous and the initial data u
0
is in
L
1
TV . Then there exists a global entropy solution (satisfying condition (K)) to the
Cauchy problem for (5.5). Furthermore, the solution operator is contractive in L
1
, that is,
if u, v are two entropy solutions, then
|u(t) v(t)|
L
1 |u(0) v(0)|
L
1 (5.19)
As a consequence, we have uniqueness theorem and the total variation diminishing prop-
erty:
T.V.u(, t) T.V.u(, 0) (5.20)
Proof. The part of total variation diminishing is easy. We prove it here. The total variation
of u is dened by
T.V.u(, t) = Sup
h>0
_
[u(x +h, t) u(x, t)[
h
dx
We notice that if u(x, t) is an entropy solution, so is u(x + h, t). Apply the contraction
estimate for u(, t) and v = u( +h, t). We obtain the total variation diminishing property.
To prove the L
1
contraction property, we claim that the constant k in the Kruzhkov
entropy condition can be replaced by any other entropy solution v(t, x). That is
_ _
[[u(t, x) v(t, x)[
t
+ sign(u(t, x) v(t, x))(f(u(t, x)) f(v(t, x)))
x
] dxdt 0
for all positive smooth with compact support in R[0, ). To see this, we choose a test
function (s, x, t, y), the entropy conditions for u and v are
_ _
[[u(s, x)k[
s
(s, x, t, y)+sign(u(s, x)k)(f(u(s, x))f(k))
x
(s, x, t, y)] dxds 0
_ _
[[v(t, y)k

[
t
(s, x, t, y)+sign(v(t, y)k

)(f(v(t, y))f(k

))
y
(s, x, t, y)] dxds 0
5.3. UNIQUENESS AND EXISTENCE 69
Set k = v(t, y) in the rst equation and k

= u(s, x) in the second equation. Integrate the


rest varibles and add them together. We get
_ _ _ _
[u(s, x) v(t, y)[(
s
+
t
) + sign(u(s, x) v(t, y)) [f(u(s, x)) f(v(t, y))] (
x
+
y
) dx ds dy dt 0.
Now we choose (s, x, t, y) such that it concentrates at the diagonal s = t and x = y. To do
so, let
h
(x) = h
1
(x/h) be an approximation of the Dirac mass measure. Let (T, X)
be a non-negative test function on (0, ) R. Choosing
(s, x, t, y) =
_
s +t
2
,
x +y
2
_

h
_
s t
2
_

h
_
x y
2
_
,
we get
_ _ _ _

h
_
s t
2
_

h
_
x y
2
__
[u(s, x) v(t, y)[
T
_
s +t
2
,
x +y
2
_
+sign(u(s, x) v(t, y)) [f(u(s, x)) f(u(v(t, y)))]
X
_
s +t
2
,
x +y
2
__
dxdy ds dt 0.
Now we take limit h 0, we can get the desired inequality.
Next, we choose
(t, x) = [
h
(t)
h
(t )] [1
h
([x[ R +L( t))],
where
h
(z) =
_
z

h
(s) ds. We can get the desired L
1
contraction estimate.
The existence theorem mainly based on the same proof of the uniqueness theorem.
Suppose the initial data is in L
1
L

BV , we can construct a sequence of approximate


solutions which satisfy entropy conditions. They can be construncted by nite difference
methods (see the next section), or by viscosity methods, or by wave tracking methods (by
approximate the ux function by piecewise linear functions). Let us suppose the approxi-
mate solutions are constructed via viscosity method, namely, u

are solutions of
u

t
+f(u

)
x
= u

xx
.
Following the same proof for (GL) (K), we can get that the total variation norms of the
approximate solutions u

are bounded by T.V.u


0
. This gives the compactness in L
1
and a
convergent subsequence leads to an entropy solution.
Remark. The general existence theorem can allow only initial data u
0
L
1
L

. Even
the initial data is not in BV , the solution immediately has nite total variation at any t > 0.
70 CHAPTER 5. SCALAR CONSERVATION LAWS
Chapter 6
Finite Difference Schemes For Scalar
Conservation Laws
6.1 Major problems
First of all, we should keep in mind that local stability is necessary in designing nite
difference schemes for hyperbolic conservation laws. Namely, the scheme has to be stable
for hyperbolic conservation laws with frozen coefcients, see Chapter 1. In addition, there
are new things that we should be careful for nonlinear equations. The main issue is how to
compute discontinuities correctly. We list common problems on this issue.
Spurious oscillation appears around discontinuities in every high order schemes. The
reason is that the solution of nite difference scheme is closer to a PDE with higher
order derivatives. The corresponding dispersion formula demonstrates that oscilla-
tion should occur. Also, one may viewthat it is incorrect to approximate weak deriva-
tive at discontinuity by higher order nite differences. The detail spurious structure
can be analyzed by the study of the discrete traveling wave corresponding to a nite
difference scheme.
To cure this problem, we have to lower down the order of approximation near dis-
continuities to avoid oscillation. We shall denote to this issue later.
The approximate solution may converge to a function which is not a weak solution.
For instance, let apply the Courant-Isaacson-Rees (C-I-R) method to compute a sin-
gle shock for the inviscid Burgers equation. The C-I-R method is based on charac-
teristic method. Suppose we want to update the state U
n+1
j
. We draw a characteristic
curve back to time t
n
. However, the slope of the characteristic curve is not known
yet. So, let us approximate it by U
n
j
. Then we apply upwind method:
U
n+1
j
U
n
j
=
t
x
U
n
j
(U
n
j1
U
n
j
) if U
n
j
0
U
n+1
j
U
n
j
=
t
x
U
n
j
(U
n
j
U
n
j+1
) if U
n
j
< 0
71
72CHAPTER6. FINITE DIFFERENCE SCHEMES FORSCALARCONSERVATIONLAWS
Now, we take the following initial data:
U
0
j
=
_
1 for j < 0
0 for j 0
It is easy to see that U
n
j
= U
0
j
. This is a wrong solution. The reason is that we use
wrong characteristic speed U
n
j
when there is a discontinuity passing x
j
from t
n
to
t
n+1
.
To resolve this problem, it is advised that one should use a conservative scheme. We
shall discuss this issue in the next section.
Even the approximate solutions converge to a weak solution, it may not be an entropy
solution. For instance, consider the invisid Burgers equation u
t
+ uu
x
= 0 with the
initial data:
U
0
j
=
_
1 for j < 0
1 for j 0
We dene the scheme by
U
n+1
j
= U
n
j
+
t
x
(F(U
n
j1
, U
n
j
) F(U
n
j
, U
n
j+1
))
where
F(U, V ) =
_
f(U) if U +V 0
f(V ) if U +V < 0
We nd that F(U
n
j
, U
n
j+1
) = F(U
n
j1
, U
n
j
). Thus, the solution is U
n
j
= U
0
j
. This is a
nonentropy solution.
6.2 Conservative schemes
A nite difference scheme is called conservative if it can be written as
U
n+1
j
= U
n
j
+
t
x
(F
n+1/2
j1/2
F
n+1/2
j+1/2
) (6.1)
where F
n+1/2
j+1/2
is a function of U
n
and possibly U
n+1
. The advantage of this formulation is
that the total mass is conservative:

j
U
n
j
=

j
U
n+1
j
(6.2)
There is a nice interpretation of F if we view U
n
j
as an approximation of the cell-average of
the solution u over the cell (x
j1/2
, x
j+1/2
) at time step n. Let us integrate the conservation
law u
t
+f(u)
x
= 0 over the box: (x
j1/2
, x
j+1/2
) (t
n
, t
n+1
). Using divergence theorem,
we obtain
u
n+1
j
= u
n
j
+
t
x
(

f
n+1
j1/2


f
n+1
j+1/2
) (6.3)
6.2. CONSERVATIVE SCHEMES 73
where
u
n
j
=
1
x
_
x
j+1/2
x
j1/2
u(x, t
n
) dx

f
n+1/2
j+1/2
=
1
t
_
t
n+1
t
n
f(u(x
j+1/2
, t)) dt
Thus, in a conservative scheme (6.2), we may view U j
n
as an approximation of the cell
average u
n
j
and F
n+1/2
j+1/2
as an approximation of the ux average

f
n+1/2
j+1/2
. This formulation
is closer to the original integral formulation of a conservation, and it does not involve
derivatives of the unknown quantity u.
A conservative scheme is consistent if F
j+1/2
(U, U) = f(u), where U is a vector with
U
j
= u. For explicit scheme, F
j+1/2
is a function of U
n
only and it only depends on
U
n
j+1
, , U
n
j+m
. That is
F
j+1/2
= F(U
n
j+1
, , U
n
j+m
).
We usually assume that the function is a Lipschitz function.
The most important advantage of conservative schemes is the following Lax-Wendroff
theorem. Which says that its approximate solutions, if converge, must to a weak solution.
Theorem 6.3 (Lax-Wendroff) Suppose U
n
j
be the solution of a conservative scheme
(6.2). The Dene u
x
:= U
n
j
for [x
j1/2
, x
j+1/2
) [t
n
, t
n+1
). Suppose u
x
is uniformly
bounded and converges to u almost everywhere. Then u is a weak solution of (??).
Proof. Let be a smooth test function with compact support on R [0, ). We multiply
(6.2) by
n
j
and sum over j and n to obtain

n=0

j=

n
j
(U
n+1
j
U
n
j
) =
t
x

n=0

j=

n
j
[F
j1/2
(U
n
) F
j+1/2
(U
n
)]
Using summation by part, we obtain

j=

0
j
U
0
j
+

n=1

j=
(
n
j

n1
j
)U
n
j
+

n=0

j=
(
n
j+1

n
j
)F
j+1/2
(U
n
) = 0
Since is of compact support and u
x
, hence F(U
n
), are uniformly bounded, we obtain
the convergence in the above equation is uniformly in j and n. If (x
j
, t
n
) (x, t), then
from the consistency condition, F
j+1/2
(U
n
) f(u(x, t)). We obtain that u is a weak
solution.
Below, we show that many scheme can be written in conservation form. We may view
F
n+1/2
j+1/2
as a numerical ux at x
j+1/2
between t
n
and t
n+1
.
1. Lax-Friedrichs:
F
n+1/2
j+1/2
= F(U
j
, U
j+1
) =
1
2
(f(U
j1
) + f(U
j
)) +
t
2x
(U
j
U
j+1
). (6.4)
The second term is a numerical dissipation.
74CHAPTER6. FINITE DIFFERENCE SCHEMES FORSCALARCONSERVATIONLAWS
2. Two-step Lax-Wendroff:
F
n+1/2
j+1/2
= f(U
n+1/2
j+1/2
)
U
n+1/2
j+1/2
=
U
n
j
+U
n
j+1
2
+
t
2x
_
f(U
n
j
) f(U
n
j+1
)

Homeworks.Construct an example to show that the Lax-Wendroff scheme may produce


nonentropy solution.
6.3 Entropy and Monotone schemes
Denition 3.10 A scheme expressed as
U
n+1
j
= G(U
n
j
, , U
n
j+m
) (6.5)
is called a monotone scheme if
G
U
j+k
0, k = , , m (6.6)
In the case of linear equation, the monotone scheme is
U
n+1
j
=
m

k=
a
k
U
n
j+k
with a
k
0. The consistency condition gives

k
a
k
= 1. Thus, a monotone scheme in
linear cases means that U
n+1
j
is an average of U
n
j
, , U
n
j+m
. In the nonlinear case, this
is more or less true. For instance, the sup norm is nonincreasing, the solution operator is

1
-contraction, and the total variation is dimishing. To be precise, let us dene the norms
for U = U
j
:
[U[

= sup
j
[U
j
[
|U|
1
=

j
[U
j
|x
T.V.(U) =

j
[U
j+1
U
j
[
We have the following theorem.
Theorem 6.4 For a monotone scheme (6.5), we have
(i)

- bound:
[U
n+1
[

[U
n
[

(ii)
1
-contraction: if U, V are two solutions of (??), then
|U
n+1
V
n+1
|
1
|U
n
V
n
|
1
(6.7)
6.3. ENTROPY AND MONOTONE SCHEMES 75
(iii) total variation diminishing:
T.V.
x
(U
n+1
) T.V.
x
(U
n
) (6.8)
(iv) boundedness of total variation: there exists a constant C such that
T.V.
x,t
(U) C (6.9)
Proof.
1.
U
n+1
j
= G(U
n
j
, , U
n
j+m
)
G(max U
n
, , max U
n
)
= max U
n
Hence, we have max U
n+1
max U
n
. Similarly, we also have min U
n+1
min U
n
.
2. Let us denote the vector (U
n
j
) by U
n
, the scheme (6.5) by an operator U
n+1
=
G(U
n
). U V means that U
j
V
j
for each j. Denote by U V for the vec-
tor (maxU
j
, V
j
). The monotonicity reads
G(U) G(V ) if U V.
We have G(U V ) G(V ). Hence,
(G(U) G(V ))
+
((G(U V ) G(V ))
+
= G(U V ) G(V ).
We take summation in j, and use conservative proper of G, namely,

j
(G(U))
j
=

j
U
j
, we obtain

j
(G(U) G(V ))
+
j

j
((U V ) V )
j
=

j
(U V )
+
j
.
Similarly, we have

j
(G(V ) G(U))
+
j

j
(V U)
+
j
.
Adding these two, we obtain the
1
-contraction:

j
[G(U)
j
G(V )
j
[

j
[U
j
V
j
[.
3. Suppose U
n
j
is a solution of (6.5). We take V
n
j
to be U
n
j+1
. Then V
n
j
also satises
(6.5). From the
1
-contraction property, we have

j
[U
n+1
j+1
U
n+1
j
[

j
[U
n
j+1
U
n
j
[
This shows the total variation dimishing property of (6.5).
76CHAPTER6. FINITE DIFFERENCE SCHEMES FORSCALARCONSERVATIONLAWS
4. The total variation of U in x, t with 0 t T is dened by
T.V.
x,t
(U) =
N

n=0

j=
_
[U
n
j+1
U
n
j
[
x
+
[U
n+1
j
U
n
j
[
t
_
x t
=
N

n=0
_
T.V.
x
U
n
t +|U
n+1
U
n
|
L
1

= T.V.
x
U
n
T +
N

n=0
|U
n+1
U
n
|
L
1.
Here Nt = T. We claim that |U
n+1
U
n
|
L
1 O(t). If so, then we obtain the
result with C T +NO(t) T +KT for some constant K. Now, we prove this
claim:
[U
n+1
j
U
n
j
[ = [G(U
n
j
, , U
n
j+m
) G(U
n
j
, , U
n
j
)[
L([U
n
j
U
n
j
[ + +[U
n
j+m
U
n
j
[)
L( +m)
2
T.V.
x
(U
n
).
Here, we have used that G is Lipschitz continuous. Hence, we conclude

j
[U
n+1
j
U
n
j
[x O(t).
The boundedness of total variation of U in (x, t) implies that we can substract a subse-
quence u
x
which converges in L
1
. Below, we show that its limit indeed satises entropy
condition.
Theorem 6.5 The limiting function of the approximate solutions constructed from a mono-
tone scheme satises Kruzkovs entropy condition.
Proof. We choose = (uc)
+
= ucc. The corresponding entropy ux is q(u) = f(u
c) f(c). It is natural to choose the numerical entropy ux to be Q(U
j+1
, , U
j+m
) =
F(U
j+1
c, , U
j+m
c) F(c, , c). We have
(U
n+1
c) = G(U
n
j
, , U
n
j+m
) G(c, , c)
G(U
n
j
c, , U
n
j+m
c)
= U
n
j
c +
t
x
_
F(U
n
j
c, , U
n
j+m1
c) F(U
n
j+1
c, , U
n
j+m
c)

= U
n
j
c +
t
x
_
Q(U
n
j
, , U
n
j+m1
) Q(U
n
j+1
, , U
n
j+m
)

Multiply this inequality by


n
j
, sum over j and n, and apply summation-by-part, then
take limit t, x 0. We obtain that u is an entropy solution.
6.3. ENTROPY AND MONOTONE SCHEMES 77
Theorem 6.6 (Harten-Hyman-Lax) A monotone scheme is at most rst order.
Proof. We claim that the modied equation corresponding to a monotone scheme has the
following form
u
t
+f(u)
x
= t[(u, )u
x
]
x
(6.10)
where = t/x,
=
1
2
2
m

k=
k
2
G
k
(u, , u)
1
2
f

(u)
2
(6.11)
and > 0 except for some exceptional cases. Since for smooth solution, the solution of
nite difference equation is closer to the modied equation, we see that the scheme is at
most rst order.
To show (6.10), we take Taylor expansion of G about (u
0
, , u
0
):
G(u

, , u
m
) = G(u
0
, , u
0
)
+
m

k=
G
k
(u
k
u
0
)
+
1
2
m

j,k=
G
j,k
(u
j
u
0
) (u
k
u
0
) +O(x)
3
= u
0
+ xu
x
m

k=
kG
k
+
1
2
(x)
2
u
xx
m

k=
k
2
G
k
+

j,k
1
2
(x)
2
u
2
x

j,k
jkG
j,k
+O(x)
3
= u
0
+ xu
x
m

k=
kG
k
+
1
2
(x)
2
_
m

k=
k
2
G
k
u
x
_
x
+

j,k
1
2
(x)
2
u
2
x

j,k
(jk k
2
)G
j,k
+O(x)
3
On the other hand,
G(u

, , u
m
) = u
0
+(F( u) F(T u))
where u = (u

, , u
m1
), T u = (u
+1
, , u
m
). We differentiate this equation to obtain
G
k
=
0,k
+[F
k
( u) F
k1
(T u)]
G
j,k
= [F
j,k
( u) F
j1,k1
(T u)]
We differentiate the consistency condition F(u
0
, , u
0
) = f(u
0
) to obtain
m1

F
k
(u
0
, , u
0
) = f

(u
0
).
78CHAPTER6. FINITE DIFFERENCE SCHEMES FORSCALARCONSERVATIONLAWS
Therefore,
m

k=
G
k
= 1
m

k=
kG
k
=

(F
k
F
k1
)k = f

(u
0
)

j,k
(j k)
2
G
j,k
=

(j k)
2
[G
j1,k1
G
j,k
] = 0
Using this and the symmetry G
j,k
= G
k,j
, we obtain

j,k
G
j,k
(jk k
2
) =
1
2

G
j,k
(j k)
2
= 0.
Hence we obtain
G(u

, , u
m
) = u
0
xf

(u)u
x
+ (
1
2
x)
2
u
xx

k
k
2
G
k
+O(x)
3
Now, from the Taylor expansion:
u
1
0
= u
0
+ tu
t
+
1
2
(t)
2
u
tt
+O(t)
3
= u
0
tf(u)
x
+ (
1
2
t)
2
[f

(u)
2
u
x
]
x
+O(t)
3
Combine these two, we obtain that smooth solution of the nite difference equation satisfy
the modied equation up to a truncation error (t)
2
.
To show 0, from the monotonicity G
k
0. Hence

2
f

(u)
2
=
_

k
kG
k
_
2
=
_

k
_
G
k
_
G
k
_
2

k
2
G
k

G
k
=

k
k
2
G
k
The equality holds only when G
k
(u, , u) = 0 for all k except 1. This means that
G(u

, , u
m
) = u
1
. This is a trivial case.
Chapter 7
Finite Difference Methods for
Hyperbolic Conservation Laws
Roughly speaking, modern schemes for hyperbolic conservation laws can be classied into
the following two categories.
1) ux-splitting methods
2) high-order Godunov methods
1) is more algebraic construction while 2) is more geometrical construction.
Among 1), there are
articial viscosity methods,
ux correction transport (FCT),
total variation diminishing (TVD),
total variation bounded (TVB),
central scheme,
relaxation schemes,
relaxed scheme.
Among 2), there are
High order Godunov methods,
MUSCL,
piecewise parabolic method (PPM),
essential nonoscillatory. (ENO)
In 1) we describe total variation diminishing method while in 2) we show the high order
Godunov methods.
79
80CHAPTER7. FINITE DIFFERENCE METHODS FORHYPERBOLICCONSERVATIONLAWS
7.1 Flux splitting methods
The basic thinking for these methods is to add a switch such that the scheme becomes rst
order near discontinuity and remains high order in the smooth region.
Suppose we are given
F
L
a lower order numerical ux
F
H
a higher order numerical ux
Dene
F
j+
1
2
= F
L
j+
1
2
+
j+
1
2
(F
H
j+
1
2
F
L
j+
1
2
)
= F
H
j+
1
2
+ (1
j+
1
2
)(F
L
j+
1
2
F
H
j+
1
2
).
Here,
j+
1
2
is a switch or a limiter. We require

j+
1
2
0, i.e. F
j+
1
2
F
L
j+
1
2
, near a discontinuity,

j+
1
2
1, i.e. F
j+
1
2
F
H
j+
1
2
, in smooth region.
In FCT, is chosen so that max U
n+1
j
max(U
n
j1
, U
n
j
, U
n
j+1
) and min U
n+1
j
min(U
n
j1
, U
n
j
, U
n
j+1
).
Design Criterion for
j+
1
2
7.1.1 Total Variation Diminishing (TVD)
Consider the linear advection equation
u
t
+au
x
= 0, a > 0.
We show the ideas by choosing
F
L
j+
1
2
= aU
j
be upwinds ux, and
F
H
j+
1
2
= aU
j
+
1
2
a(1
at
x
)(U
j+1
U
j
) be Lax-Wendroffs ux.
Then the numerical ux is
F
j+
1
2
= aU
j
+
j+
1
2
(
1
2
a(1
at
x
)(U
j+1
U
j
)). (7.1)
Here

j+
1
2
= (
j+
1
2
),

j+
1
2
:=
U
j
U
j1
U
j+1
U
j
.
7.1. FLUX SPLITTING METHODS 81
Theorem 7.7 1. If is bounded, then the scheme is consistent with the partial differ-
ential equation.
2. If (1) = 1, and is Lipschitz continuous( or C
1
) at = 1, then the scheme is
second order in smooth monoton region.(i.e., u is smooth and u
x
,= 0)
3. If 0
()

2 and 0 () 2, then the scheme is TVD.


Proof.
1. F
j+
1
2
(u, u) = f(u) = au.
2. Hint: Apply truncation error analysis.
3. From (7.1), the next time step U
n+1
j
is
U
n+1
j
= U
n
j
c
n
j1
(U
n
j
U
n
j1
),
where c
n
j1
= +
1
2
(1 )(

j+
1
2
(U
n
j+1
U
n
j
)
j
1
2
(U
n
j
U
n
j1
)
U
n
j
U
n
j1
), =
at
x
.
In other words, U
n+1
j
is the average of U
n
j
and U
n
j1
with weights (1c
n
j1
) and c
n
j1
.
U
n+1
j+1
U
n+1
j
= (U
n
j+1
c
n
j
(U
n
j+1
U
n
j
)) (U
n
j
c
n
j1
(U
n
j
U
n
j1
))
= (1 c
n
j
)(U
n
j+1
U
n
j
) +c
n
j1
(U
n
j
U
n
j1
)
Suppose 1 c
n
j
0 j, n
[U
n+1
j+1
U
n+1
j
[ (1 c
n
j
)[U
n
j+1
U
n
j
[ +c
n
j1
[U
n
j
U
n
j1
[

j
[U
n+1
j+1
U
n+1
j
[

j
(1 c
n
j
)[U
n
j+1
U
n
j
[ +

c
n
j1
[U
n
j
U
n
j1
[
=

j
(1 c
n
j
)[U
n
j+1
U
n
j
[ +

c
n
j
[U
n
j+1
U
n
j
[
=

j
[U
n
j+1
U
n
j
[,
then the computed solution is total variation diminishing.
Next, we need to nd such that 0 c
n
j
1, j, n. Consider

j+
1
2
(U
j+1
U
j
)
j
1
2
(U
j
U
j1
)
U
j
U
j1
=
(
j+
1
2
)

j+
1
2
(
j
1
2
),
=c
n
j1
= +
1
2
(1 )(
(
j+
1
2
)

j+
1
2
(
j
1
2
)) 0 1
A sufcient condition for 0 c
n
j1
1 j is
[
(
j+
1
2
)

j+
1
2
(
j
1
2
)[ 2. (7.2)
If
j+
1
2
< 0, (
j+
1
2
) = 0.
If 0
()

2, 0 () 2, then (7.2) is valid.


82CHAPTER7. FINITE DIFFERENCE METHODS FORHYPERBOLICCONSERVATIONLAWS
0
1
2
1
()

() 2
()

2
2
Figure 7.1: The region in which () should lie so that the scheme will be TVD.
7.1.2 Other Examples for ()
1. () = 1. This is the Lax-Wendroff scheme.
2. () = . This is Beam-Warming.
3. Any between
BW
and
LW
with 0 2, 0
()

2 is second order.
4. Van Leers minmod
() =
+[[
1 +[[
.
It is a smooth limiter with (1) = 1.
5. Roes superbee
() = max(0, min(1, 2), min(, 2))
7.1.3 Extensions
There are two kinds of extensions. One is the a < 0 case, and the other is the linear system
case.
For a < 0, we let
F
L
j+
1
2
=
1
2
a(U
j
+U
j+1
)
1
2
[a[(U
j+1
U
j
)
=
_
aU
j
if a > 0
aU
j+1
if a < 0
F
H
j+
1
2
=
1
2
a(U
j
+U
j+1
)
1
2
a(U
j+1
U
j
) =
at
x
7.1. FLUX SPLITTING METHODS 83
0
1
2
1
()

Lax-Wendroff
2 0
1
2
1
()
2
Beam-Warming
0
1
2
1
()
2
van Leers minmod
0
1
2
1
()
0 2
Roes superbee
Figure 7.2: Several limiters
Then
F
j+
1
2
= F
L
j+
1
2
+
j+
1
2
(F
H
j+
1
2
F
L
j+
1
2
)
= F
L
j+
1
2
+
j+
1
2
1
2
(sign() )a(U
j+1
U
j
).
Where
j+
1
2
= (
j+
1
2
),
j+
1
2
=
U
j

+1
U
j

U
j+1
U
j
, and j

= j sign() = j 1.
In the linear system case, our equation is
u
t
+Au
x
= 0. (7.3)
We can decompose A so that A = RR
1
with = diag(
1
, ,
n
) constituting by As
eigenvalues and R = [r
1
, , r
n
] being right eigenvectors.That is, Ar
i
=
i
r
i
. We know
that U
j+1
U
j
=
n

k=1

j,k
r
k
, let

k
=
k
t
x

j,k
=

j

,k

j,k
j

= j sign(
k
).
Therefore,
F
L
=
1
2
A(U
j
+U
j+1
)
1
2
[A[(U
j+1
U
j
)
F
H
=
1
2
A(U
j
+U
j+1
)
1
2
t
x
A
2
(U
j+1
U
j
)
84CHAPTER7. FINITE DIFFERENCE METHODS FORHYPERBOLICCONSERVATIONLAWS
where $|A| = R|\Lambda|R^{-1}$. The numerical flux is
$$F_{j+\frac12} = F^L_{j+\frac12} + \frac12\sum_k \phi(\theta_{j,k})\left(\operatorname{sign}(\nu_k) - \nu_k\right)\lambda_k\,\alpha_{j,k}\,r_k.$$
7.2 High Order Godunov Methods

Algorithm
1. Reconstruction: starting from the cell averages $U^n_j$, we reconstruct a piecewise polynomial function $\tilde u(x, t^n)$.
2. Exact solver for $\tilde u(x,t)$, $t^n < t < t^{n+1}$. It is a Riemann problem with initial data $\tilde u(x, t^n)$.
3. Define
$$U^{n+1}_j = \frac{1}{\Delta x}\int_{x_{j-\frac12}}^{x_{j+\frac12}} \tilde u(x, t^{n+1})\,dx.$$
If step 2 is an exact solver, then using
$$\int_{t^n}^{t^{n+1}}\!\!\int_{x_{j-\frac12}}^{x_{j+\frac12}} \left(u_t + f(u)_x\right) dx\,dt = 0$$
we have
$$U^{n+1}_j = U^n_j + \frac{\Delta t}{\Delta x}\left(\bar f_{j-\frac12} - \bar f_{j+\frac12}\right),$$
where $\bar f_{j+\frac12} = \dfrac{1}{\Delta t}\displaystyle\int_{t^n}^{t^{n+1}} f(\tilde u(x_{j+\frac12},t))\,dt$ is the average flux. Thus steps 2 and 3 can be replaced by
2'. an exact solver for $\tilde u$ at $x_{j+\frac12}$, $t^n < t < t^{n+1}$, to compute the averaged flux $\bar f_{j+\frac12}$;
3'. $U^{n+1}_j = U^n_j + \dfrac{\Delta t}{\Delta x}\left(\bar f_{j-\frac12} - \bar f_{j+\frac12}\right)$.
1. Reconstruction: We want to construct a polynomial in each cell. The main criteria are
(1) high order in regions where $u$ is smooth and $u_x \neq 0$;
(2) total variation not increasing.
In other words, suppose we are given a function $u(x)$, and let
$$U_j = \frac{1}{\Delta x}\int_{x_{j-\frac12}}^{x_{j+\frac12}} u(x)\,dx.$$
From $U_j$, we use some reconstruction algorithm to construct a function $\tilde u(x)$. We want the reconstruction algorithm to satisfy
(1) $|\tilde u(x) - u(x)| = O(\Delta x)^r$, where $u$ is smooth in $I_j = (x_{j-\frac12}, x_{j+\frac12})$ and $u_x \neq 0$ near $I_j$;
(2) $\mathrm{T.V.}\,\tilde u(x) \le \mathrm{T.V.}\,u(x)\,(1 + O(\Delta x))$.
7.2.1 Piecewise-constant reconstruction

Our equation is
$$u_t + f(u)_x = 0. \tag{7.4}$$
Following the algorithm, we have
(1) approximate $u(x,t)$ by a piecewise constant function, i.e., $U^n_j$ represents the cell average of $u(x,t^n)$ over $(x_{j-\frac12}, x_{j+\frac12})$;
(2) solve the Riemann problem $(u_j, u_{j+1})$ on the edge $x_{j+\frac12}$; its solution $\tilde u(x_{j+\frac12}, t)$, $t^n < t < t^{n+1}$, can be found, and it is a constant;
(3) integrate the equation (7.4) over $(x_{j-\frac12}, x_{j+\frac12})\times(t^n, t^{n+1})$:
$$U^{n+1}_j = \frac{1}{\Delta x}\int_{x_{j-\frac12}}^{x_{j+\frac12}} \tilde u(x, t^{n+1})\,dx
= U^n_j + \frac{\Delta t}{\Delta x}\cdot\frac{1}{\Delta t}\int_{t^n}^{t^{n+1}}\left[f(\tilde u(x_{j-\frac12},t)) - f(\tilde u(x_{j+\frac12},t))\right]dt
= U^n_j + \frac{\Delta t}{\Delta x}\left[f(\tilde u(x_{j-\frac12}, t^{n+\frac12})) - f(\tilde u(x_{j+\frac12}, t^{n+\frac12}))\right].$$
Example 1. $f(u) = au$, $a > 0$. The Riemann problem gives
$$\tilde u(x,t) = \begin{cases} u_j & \text{if } x - x_{j+\frac12} < a(t-t^n),\\ u_{j+1} & \text{if } x - x_{j+\frac12} > a(t-t^n),\end{cases}\qquad t^n < t < t^{n+1},$$
$$u^{n+\frac12}_{j+\frac12} = \tilde u(x_{j+\frac12}, t^{n+\frac12}) = u_j,\qquad
F_{j+\frac12} = a\,U^{n+\frac12}_{j+\frac12} = aU_j.$$
$$\therefore\quad U^{n+1}_j = U^n_j + \frac{\Delta t}{\Delta x}\left(aU^n_{j-1} - aU^n_j\right).$$
This is precisely the upwind scheme.
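A minimal Python sketch of this first-order Godunov (upwind) update is given below; the grid handling, periodic boundary conditions and function name are our own illustrative choices, not part of the text.

```python
import numpy as np

def upwind_advection(u0, a, dx, dt, nsteps):
    """First-order Godunov (upwind) scheme for u_t + a u_x = 0, a > 0,
    with periodic boundary conditions.  Assumes the CFL condition a*dt/dx <= 1."""
    u = np.array(u0, dtype=float)
    for _ in range(nsteps):
        flux = a * u                                   # F_{j+1/2} = a U_j
        u = u + (dt / dx) * (np.roll(flux, 1) - flux)  # F_{j-1/2} - F_{j+1/2}
    return u
```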
Example 2. Linear system
$$u_t + Au_x = 0.$$
Let $R^{-1}AR = \Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_n)$. We need to solve the Riemann problem with initial data $(U_j, U_{j+1})$. Let $L = (\ell_1,\dots,\ell_n)^T = R^{-1}$, with $\ell_i A = \lambda_i\ell_i$, $i=1,\dots,n$, be the left eigenvectors. Project the initial data
$$\tilde u(x,t^n) = \begin{cases} U_j & x < x_{j+\frac12},\\ U_{j+1} & x > x_{j+\frac12},\end{cases}$$
onto $r_1,\dots,r_n$: by $\ell_i r_j = \delta_{ij}$,
$$\sum_i\left(\ell_i\,\tilde u(x,t^n)\right) r_i = \tilde u(x,t^n).$$
From $\ell_i(u_t + Au_x) = 0$ we get, for the characteristic variable $w_i := \ell_i u$,
$$w_{i,t} + \lambda_i\,w_{i,x} = 0
\quad\Rightarrow\quad w_i(x,t) = w_i(x - \lambda_i(t-t^n), t^n) = \ell_i\,\tilde u(x - \lambda_i(t-t^n), t^n).$$
$$u(x,t) = \sum_i\left(\ell_i\,\tilde u(x - \lambda_i(t-t^n), t^n)\right) r_i,$$
$$\tilde u(x_{j+\frac12}, t) = \sum_{\lambda_i \ge 0}\left(\ell_i\,\tilde u(x_{j+\frac12} - \lambda_i(t-t^n), t^n)\right) r_i
+ \sum_{\lambda_i < 0}\left(\ell_i\,\tilde u(x_{j+\frac12} - \lambda_i(t-t^n), t^n)\right) r_i,$$
$$U^{n+\frac12}_{j+\frac12} = \sum_{i,\,\lambda_i\ge 0}\ell_i U_j\, r_i + \sum_{i,\,\lambda_i<0}\ell_i U_{j+1}\, r_i,$$
$$F_{j+\frac12} = A\,U^{n+\frac12}_{j+\frac12} = \sum_{i,\,\lambda_i\ge 0}\lambda_i\,\ell_i U_j\, r_i + \sum_{i,\,\lambda_i<0}\lambda_i\,\ell_i U_{j+1}\, r_i.$$
Solving $\tilde u(x,t)$ for $x/t = \xi$ gives
$$\tilde u(x,t) = \sum_{\lambda_i \ge \xi}\ell_i U_j\, r_i + \sum_{\lambda_i < \xi}\ell_i U_{j+1}\, r_i.$$
Consider the following cases.
(1) $\xi < \lambda_1 < \cdots < \lambda_n$:
$$\tilde u(x,t) = \sum_i \ell_i U_j\, r_i = U_j.$$
(2) $\lambda_1 < \xi < \lambda_2 < \cdots < \lambda_n$:
$$\tilde u(x,t) = \sum_{i=2}^n \ell_i U_j\, r_i + \ell_1 U_{j+1}\, r_1
= \sum_{i=1}^n \ell_i U_j\, r_i + \ell_1 U_{j+1}\, r_1 - \ell_1 U_j\, r_1
= U_j + \ell_1(U_{j+1}-U_j)\, r_1.$$
There is a jump $\ell_1(U_{j+1}-U_j)\,r_1$.
(3) $\lambda_1 < \lambda_2 < \xi < \lambda_3 < \cdots < \lambda_n$:
$$\tilde u(x,t) = U_j + \ell_1(U_{j+1}-U_j)\,r_1 + \ell_2(U_{j+1}-U_j)\,r_2.$$
Therefore the structure of the solution of the Riemann problem is composed of $n$ waves
$$\ell_1(U_{j+1}-U_j)\,r_1,\ \dots,\ \ell_n(U_{j+1}-U_j)\,r_n,$$
with left state $U_j$ and right state $U_{j+1}$. Each wave propagates at speed $\lambda_i$, respectively.
7.2.2 Piecewise-linear reconstruction

(1) Reconstruction. Given the cell averages $U_j$, we want to reconstruct a polynomial $\tilde u(x, t^n)$ in each cell $(x_{j-\frac12}, x_{j+\frac12})$ under the following criteria:
a) high order approximation in smooth regions;
b) TVD or TVB or ENO.

(2) Riemann solver. Solve the equation exactly for $(t^n, t^{n+1})$.

Once we have these two, define
$$U^{n+1}_j = U^n_j + \frac{\Delta t}{\Delta x}\cdot\frac{1}{\Delta t}\int_{t^n}^{t^{n+1}}\left[f(\tilde u(x_{j-\frac12},t)) - f(\tilde u(x_{j+\frac12},t))\right]dt.$$
For second order temporal discretization,
$$\frac{1}{\Delta t}\int_{t^n}^{t^{n+1}} f(\tilde u(x_{j+\frac12},t))\,dt \approx f(\tilde u(x_{j+\frac12}, t^{n+\frac12})),$$
$$U^{n+1}_j = U^n_j + \frac{\Delta t}{\Delta x}\left[f(\tilde u(x_{j-\frac12}, t^{n+\frac12})) - f(\tilde u(x_{j+\frac12}, t^{n+\frac12}))\right].$$
For the Scalar Case

(1) Reconstruction. Suppose $\tilde u(x,t^n) = a + b(x-x_j) + c(x-x_j)^2$; we want to find $a, b, c$ such that the averages of $\tilde u$ match the cell averages:
$$\frac{1}{\Delta x}\int_{x_{j-\frac12}}^{x_{j+\frac12}}\tilde u(x,t^n)\,dx = U_j,\qquad
\frac{1}{\Delta x}\int_{x_{j-\frac32}}^{x_{j-\frac12}}\tilde u(x,t^n)\,dx = U_{j-1},\qquad
\frac{1}{\Delta x}\int_{x_{j+\frac12}}^{x_{j+\frac32}}\tilde u(x,t^n)\,dx = U_{j+1}.$$
$$\Rightarrow\quad a = U_j,\qquad b = \frac{U_{j+1}-U_{j-1}}{2\Delta x},\qquad c = 0.$$
Lemma 7.2. Given a smooth function $u(x)$, let $U_j = \dfrac{1}{\Delta x}\displaystyle\int_{x_{j-\frac12}}^{x_{j+\frac12}} u(x)\,dx$, and let
$$\tilde u(x) = U_j + \Delta U_j\,\frac{x-x_j}{\Delta x},\qquad \Delta U_j = \frac{U_{j+1}-U_{j-1}}{2}.$$
Then $|\tilde u(x) - u(x)| = O(\Delta x)^3$ for $x \in (x_{j-\frac12}, x_{j+\frac12})$.

When $u$ has discontinuities or $u_x$ changes sign, we need to put a limiter on the slope to avoid oscillations of $\tilde u$.
Examples of limiters:
(a)
$$\Delta U_j = \operatorname{minmod}(U_{j+1}-U_j,\ U_j - U_{j-1})
= \begin{cases}
\operatorname{sign}(U_{j+1}-U_j)\,\min\left(|U_{j+1}-U_j|,\ |U_j-U_{j-1}|\right) & \text{if } U_{j+1}-U_j \text{ and } U_j-U_{j-1} \text{ have the same sign},\\
0 & \text{otherwise.}
\end{cases}$$
(b)
$$\Delta U_j = \operatorname{minmod}\left(\frac{U_{j+1}-U_{j-1}}{2},\ 2(U_j-U_{j-1}),\ 2(U_{j+1}-U_j)\right).$$
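A short Python sketch of these two slope limiters (vectorized with periodic neighbors; the helper names are our own):

```python
import numpy as np

def minmod(*args):
    """minmod of several arrays: the argument smallest in magnitude if all
    have the same sign, zero otherwise."""
    a = np.stack(args)
    all_pos = np.all(a > 0, axis=0)
    all_neg = np.all(a < 0, axis=0)
    return np.where(all_pos, a.min(axis=0), 0.0) + np.where(all_neg, a.max(axis=0), 0.0)

def slope_minmod(U):
    """Limiter (a): Delta U_j = minmod(U_{j+1}-U_j, U_j-U_{j-1}), periodic."""
    dR = np.roll(U, -1) - U
    dL = U - np.roll(U, 1)
    return minmod(dR, dL)

def slope_mc(U):
    """Limiter (b): minmod((U_{j+1}-U_{j-1})/2, 2(U_j-U_{j-1}), 2(U_{j+1}-U_j))."""
    dR = np.roll(U, -1) - U
    dL = U - np.roll(U, 1)
    return minmod(0.5 * (dR + dL), 2.0 * dL, 2.0 * dR)
```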
(2) Exact solver for a small time step

Consider the linear advection equation
$$u_t + au_x = 0$$
with the piecewise linear data
$$\tilde u(x,t^n) = \begin{cases}
U_j + \Delta U_j\,\dfrac{x-x_j}{\Delta x} & x < x_{j+\frac12},\\[4pt]
U_{j+1} + \Delta U_{j+1}\,\dfrac{x-x_{j+1}}{\Delta x} & x > x_{j+\frac12}.
\end{cases}$$
Then, for $a > 0$,
$$u^{n+\frac12}_{j+\frac12} = \tilde u\!\left(x_{j+\frac12} - a(t^{n+\frac12}-t^n),\ t^n\right)
= U_j + \Delta U_j\,\frac{x_{j+\frac12} - a(t^{n+\frac12}-t^n) - x_j}{\Delta x}
= U_j + \Delta U_j\left(\frac12 - \frac{a\Delta t}{2\Delta x}\right);\qquad \text{let } \nu = \frac{a\Delta t}{\Delta x}.$$
$$F_{j+\frac12} = a\,U^{n+\frac12}_{j+\frac12} = a\left(U_j + \Delta U_j\Big(\frac12 - \frac{\nu}{2}\Big)\right).$$
To compare with the TVD scheme, let $\Delta U_j = \operatorname{minmod}(U_{j+1}-U_j,\ U_j-U_{j-1})$. Then
$$F_{j+\frac12} = aU_j + \left(\frac12 - \frac{\nu}{2}\right)a\,(U_{j+1}-U_j)\,\phi_{j+\frac12},\qquad
\phi_{j+\frac12} = \frac{\operatorname{minmod}(U_{j+1}-U_j,\ U_j-U_{j-1})}{U_{j+1}-U_j},$$
$$\phi(\theta) = \begin{cases} 0 & \theta \le 0,\\ \theta & 0 \le \theta \le 1,\\ 1 & \theta \ge 1,\end{cases}
\qquad \theta = \frac{U_j - U_{j-1}}{U_{j+1}-U_j}.$$
Its graph is shown in Fig. 7.3.
Figure 7.3: The limiter of the second order Godunov method.
If $a < 0$, then
$$u^{n+\frac12}_{j+\frac12} = U_{j+1} + \Delta U_{j+1}\left(-\frac12 - \frac{a\Delta t}{2\Delta x}\right),\qquad \left|\frac{a\Delta t}{\Delta x}\right| \le 1,$$
$$F_{j+\frac12} = a\left(U_{j+1} + \Delta U_{j+1}\Big(-\frac12 - \frac{\nu}{2}\Big)\right).$$
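A hedged Python sketch of this second order Godunov (MUSCL-type) scheme for linear advection, valid for either sign of $a$; the inline minmod helper, periodic boundaries and naming are our own assumptions.

```python
import numpy as np

def _minmod2(p, q):
    """minmod of two arrays."""
    return np.where(p * q > 0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

def muscl_advection_step(U, a, dx, dt):
    """One step of the second order Godunov scheme for u_t + a u_x = 0
    with minmod-limited piecewise-linear reconstruction, periodic BCs.
    Assumes the CFL condition |a|*dt/dx <= 1."""
    nu = a * dt / dx
    dU = _minmod2(np.roll(U, -1) - U, U - np.roll(U, 1))            # limited slope
    if a > 0:
        u_half = U + dU * (0.5 - 0.5 * nu)                          # traced back into cell j
    else:
        u_half = np.roll(U, -1) + np.roll(dU, -1) * (-0.5 - 0.5 * nu)  # cell j+1
    flux = a * u_half                                               # F_{j+1/2}
    return U + (dt / dx) * (np.roll(flux, 1) - flux)
```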
For the System Case
$$u_t + Au_x = 0. \tag{7.5}$$
(1) Reconstruction. Construct $\tilde u(x,t^n)$ to be a piecewise linear function,
$$\tilde u(x,t^n) = U^n_j + \Delta U^n_j\left(\frac{x-x_j}{\Delta x}\right).$$
The slope is found by $\Delta U^n_j = \operatorname{minmod}(U_j - U_{j-1},\ U_{j+1}-U_j)$. We can write it characteristic-wise: let
$$\Delta^L_{j,k} = \ell_k(U_j - U_{j-1}),\qquad \Delta^R_{j,k} = \ell_k(U_{j+1}-U_j),\qquad \Delta_{j,k} = \operatorname{minmod}(\Delta^L_{j,k}, \Delta^R_{j,k}).$$
Then $\Delta U_j = \sum_k \Delta_{j,k}\, r_k$.
(2) Exact solver. We trace back along the characteristic curves to get $u$ at the half time step:
$$u^{n+\frac12}_{j+\frac12} = \sum_k \ell_k\,\tilde u\!\left(x_{j+\frac12} - \lambda_k(t^{n+\frac12}-t^n),\ t^n\right) r_k
= \sum_{\lambda_k\ge 0}\ell_k\!\left(U_j + \Delta U_j\Big(\frac12 - \frac{\nu_k}{2}\Big)\right) r_k
+ \sum_{\lambda_k<0}\ell_k\!\left(U_{j+1} - \Delta U_{j+1}\Big(\frac12 + \frac{\nu_k}{2}\Big)\right) r_k,$$
which is the initial state of the Riemann data $(U_j, U_{j+1})$ plus
$$\sum_{\lambda_k\ge 0}\Big(\frac12 - \frac{\nu_k}{2}\Big)\,(\ell_k\,\Delta U_j)\, r_k
+ \sum_{\lambda_k<0}\Big(-\frac12 - \frac{\nu_k}{2}\Big)\,(\ell_k\,\Delta U_{j+1})\, r_k.$$
From another viewpoint, let $u^{n+\frac12}_{j+\frac12,L}$ be the solution of (7.5) with initial data
$$\begin{cases}\tilde u(x,t^n) & x\in(x_{j-\frac12}, x_{j+\frac12}),\\ 0 & \text{otherwise.}\end{cases}$$
Then
$$u^{n+\frac12}_{j+\frac12,L} = u^n_j + \sum_{\lambda_k\ge 0}\ell_k\,\Delta U^n_j\left(\frac{x_{j+\frac12} - \lambda_k\frac{\Delta t}{2} - x_j}{\Delta x}\right) r_k
= u^n_j + \sum_{\lambda_k\ge 0}\ell_k\,\Delta U^n_j\left(\frac12 - \frac{\nu_k}{2}\right) r_k,$$
where $\ell_k, r_k$ are the left/right eigenvectors, $\lambda_k$ is the corresponding eigenvalue, and $\nu_k = \lambda_k\Delta t/\Delta x$.
Similarly,
$$u^{n+\frac12}_{j+\frac12,R} = u^n_{j+1} + \sum_{\lambda_k<0}\ell_k\,\Delta U^n_{j+1}\left(\frac{x_{j+\frac12} - \lambda_k\frac{\Delta t}{2} - x_{j+1}}{\Delta x}\right) r_k
= u^n_{j+1} - \sum_{\lambda_k<0}\ell_k\,\Delta U^n_{j+1}\left(\frac12 + \frac{\nu_k}{2}\right) r_k.$$
We then solve (7.5) with $\left(u^{n+\frac12}_{j+\frac12,L},\ u^{n+\frac12}_{j+\frac12,R}\right)$ as the Riemann data. This gives $u^{n+\frac12}_{j+\frac12}$.
Therefore
$$u^{n+\frac12}_{j+\frac12} = u^{n+\frac12}_{j+\frac12,L} + \sum_{\lambda_k\le 0}\ell_k\,\Delta U_{j+\frac12}\left(\frac{\lambda_k\Delta t}{2\Delta x}\right) r_k
= u^{n+\frac12}_{j+\frac12,L} + \sum_{\lambda_k\le 0}\ell_k\,\Delta U_{j+\frac12}\left(\frac{\nu_k}{2}\right) r_k,$$
or
$$u^{n+\frac12}_{j+\frac12} = u^{n+\frac12}_{j+\frac12,R} - \sum_{\lambda_k\ge 0}\ell_k\,\Delta U_{j+\frac12}\left(\frac{\nu_k}{2}\right) r_k,$$
or
$$u^{n+\frac12}_{j+\frac12} = \frac{U^{n+\frac12}_{j+\frac12,L} + U^{n+\frac12}_{j+\frac12,R}}{2}
- \frac12\sum_k \operatorname{sign}(\lambda_k)\,\ell_k\,\Delta U_{j+\frac12}\left(\frac{\nu_k}{2}\right) r_k,$$
where $\Delta U_{j+\frac12} = U^{n+\frac12}_{j+\frac12,R} - U^{n+\frac12}_{j+\frac12,L}$.
(3)
$$U^{n+1}_j = U^n_j + \frac{\Delta t}{\Delta x}\left(f\!\left(U^{n+\frac12}_{j-\frac12}\right) - f\!\left(U^{n+\frac12}_{j+\frac12}\right)\right).$$
7.3 Multidimension
There are two kinds of methods.
1. Splitting method.
2. Unsplitting method.
We consider the two-dimensional case.

7.3.1 Splitting Method

We start from
$$u_t + Au_x + Bu_y = 0. \tag{7.6}$$
This equation can be viewed as
$$u_t = -(A\partial_x + B\partial_y)\,u,$$
so the solution operator is
$$e^{-t(A\partial_x + B\partial_y)},$$
which can be approximated by $e^{-tA\partial_x}\,e^{-tB\partial_y}$ for small $t$. Let $\mathcal A = -A\partial_x$, $\mathcal B = -B\partial_y$; then
$$u = e^{t(\mathcal A + \mathcal B)}u_0.$$
Consider $e^{t(\mathcal A+\mathcal B)}$:
$$e^{t(\mathcal A+\mathcal B)} = 1 + t(\mathcal A+\mathcal B) + \frac{t^2}{2}\left(\mathcal A^2 + \mathcal B^2 + \mathcal A\mathcal B + \mathcal B\mathcal A\right) + \cdots,$$
$$e^{t\mathcal B}e^{t\mathcal A} = \left(1 + t\mathcal B + \frac{t^2}{2}\mathcal B^2 + \cdots\right)\left(1 + t\mathcal A + \frac{t^2}{2}\mathcal A^2 + \cdots\right)
= 1 + t(\mathcal A+\mathcal B) + \frac{t^2}{2}\left(\mathcal A^2 + \mathcal B^2\right) + t^2\,\mathcal B\mathcal A + \cdots.$$
$$\therefore\quad e^{t(\mathcal A+\mathcal B)} - e^{t\mathcal B}e^{t\mathcal A} = t^2\left(\frac{\mathcal A\mathcal B - \mathcal B\mathcal A}{2}\right) + O(t^3).$$
Now we can design the splitting method as follows. Given $U^n_{i,j}$:
1. For each $j$, solve $u_t + Au_x = 0$ with data $U^n_{\cdot,j}$ for a $\Delta t$ step. This gives $\bar U^n_{i,j}$:
$$\bar U^n_{i,j} = U^n_{i,j} + \frac{\Delta t}{\Delta x}\left(F(U^n_{i-1,j}, U^n_{i,j}) - F(U^n_{i,j}, U^n_{i+1,j})\right),$$
where $F(U,V)$ is the numerical flux for $u_t + Au_x = 0$.
2. For each $i$, solve $u_t + Bu_y = 0$ for a $\Delta t$ step with data $\bar U^n_{i,j}$. This gives $U^{n+1}_{i,j}$:
$$U^{n+1}_{i,j} = \bar U^n_{i,j} + \frac{\Delta t}{\Delta y}\left(G(\bar U^n_{i,j-1}, \bar U^n_{i,j}) - G(\bar U^n_{i,j}, \bar U^n_{i,j+1})\right).$$
The splitting error is first order in time: $n\,(\Delta t)^2 = O(\Delta t)$.

To reach higher order time splitting, we may approximate $e^{t(\mathcal A+\mathcal B)}$ by polynomials $P(e^{t\mathcal A}, e^{t\mathcal B})$ or rationals $R(e^{t\mathcal A}, e^{t\mathcal B})$. For example, the Trotter product (or Strang splitting) is given by
$$e^{\Delta t(\mathcal A+\mathcal B)} = e^{\frac12\Delta t\mathcal A}\,e^{\Delta t\mathcal B}\,e^{\frac12\Delta t\mathcal A} + O(\Delta t^3).$$
For $t = n\Delta t$,
$$e^{t(\mathcal A+\mathcal B)}u_0 = \left(e^{\frac12\Delta t\mathcal A}e^{\Delta t\mathcal B}e^{\frac12\Delta t\mathcal A}\right)\cdots\left(e^{\frac12\Delta t\mathcal A}e^{\Delta t\mathcal B}e^{\frac12\Delta t\mathcal A}\right)u_0
= e^{\frac12\Delta t\mathcal A}\,e^{\Delta t\mathcal B}\,e^{\Delta t\mathcal A}\,e^{\Delta t\mathcal B}\,e^{\Delta t\mathcal A}\cdots e^{\Delta t\mathcal A}\,e^{\Delta t\mathcal B}\,e^{\frac12\Delta t\mathcal A}u_0.$$
The Trotter product is second order.
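A hedged Python sketch of one Strang-split 2-D step, assuming one-dimensional update routines `step_x(U, dt)` and `step_y(U, dt)` are available (these names and signatures are our own, not from the text):

```python
def strang_split_step(U, dt, step_x, step_y):
    """One Strang-split time step for u_t + A u_x + B u_y = 0.
    step_x(U, dt) and step_y(U, dt) are one-dimensional solvers applied
    along rows/columns; the half/full/half ordering gives second order."""
    U = step_x(U, 0.5 * dt)   # exp(0.5*dt*A)
    U = step_y(U, dt)         # exp(dt*B)
    U = step_x(U, 0.5 * dt)   # exp(0.5*dt*A)
    return U
```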
7.3.2 Unsplitting Methods

The PDE is
$$u_t + f(u)_x + g(u)_y = 0. \tag{7.7}$$
Integrate this equation over $(x_{i-\frac12}, x_{i+\frac12})\times(y_{j-\frac12}, y_{j+\frac12})\times(t^n, t^{n+1})$. We have
$$U^{n+1}_{i,j} = U^n_{i,j} + \frac{\Delta t}{\Delta x}\left(\bar f^{\,n+\frac12}_{i-\frac12,j} - \bar f^{\,n+\frac12}_{i+\frac12,j}\right)
+ \frac{\Delta t}{\Delta y}\left(\bar g^{\,n+\frac12}_{i,j-\frac12} - \bar g^{\,n+\frac12}_{i,j+\frac12}\right),$$
where
$$\bar f^{\,n+\frac12}_{i+\frac12,j} = \frac{1}{\Delta t}\int_{t^n}^{t^{n+1}} f(u(x_{i+\frac12}, y_j, t))\,dt,\qquad
\bar g^{\,n+\frac12}_{i,j+\frac12} = \frac{1}{\Delta t}\int_{t^n}^{t^{n+1}} g(u(x_i, y_{j+\frac12}, t))\,dt.$$
We look for numerical approximations $F(U^n_{i,j+k}, U^n_{i+1,j+k})$ and $G(U^n_{i+\ell,j}, U^n_{i+\ell,j+1})$ for $\bar f^{\,n+\frac12}_{i+\frac12,j+k}$ and $\bar g^{\,n+\frac12}_{i+\ell,j+\frac12}$.
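As an illustration of the unsplit finite-volume update itself (the array layout, periodic boundaries and naming are our own assumptions; how the interface fluxes are computed is described next), a Python sketch:

```python
import numpy as np

def unsplit_update(U, F_half, G_half, dx, dy, dt):
    """Conservative 2-D update
    U^{n+1}_{ij} = U^n_{ij} + dt/dx (F_{i-1/2,j} - F_{i+1/2,j})
                           + dt/dy (G_{i,j-1/2} - G_{i,j+1/2}),
    where F_half[i, j] approximates the flux at (x_{i+1/2}, y_j) and
    G_half[i, j] the flux at (x_i, y_{j+1/2}); periodic boundaries."""
    dFx = np.roll(F_half, 1, axis=0) - F_half   # F_{i-1/2,j} - F_{i+1/2,j}
    dGy = np.roll(G_half, 1, axis=1) - G_half   # G_{i,j-1/2} - G_{i,j+1/2}
    return U + (dt / dx) * dFx + (dt / dy) * dGy
```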
We consider a Godunov-type method.
1. Reconstruction:
$$\tilde u(x,y,t^n) = u^n_{i,j} + \Delta_x U_{i,j}\left(\frac{x-x_i}{\Delta x}\right) + \Delta_y U_{i,j}\left(\frac{y-y_j}{\Delta y}\right)
\quad\text{in } I = (x_{i-\frac12}, x_{i+\frac12})\times(y_{j-\frac12}, y_{j+\frac12}).$$
For example, $\Delta_x U_{i,j} = \operatorname{minmod}(U_{i,j} - U_{i-1,j},\ U_{i+1,j} - U_{i,j})$.
2. We need to solve $u_t + Au_x + Bu_y = 0$ with data
$$\begin{cases}\tilde u(x,y,t^n) & \text{for } (x,y)\in I,\\ 0 & \text{otherwise.}\end{cases}$$
$$\tilde u\!\left(x_{i+\frac12}, y_j, \tfrac{\Delta t}{2}\right)
= U^n_{i,j} + \sum_{a>0}\left[\Delta_x U_{i,j}\left(\frac{x_{i+\frac12} - a\frac{\Delta t}{2} - x_i}{\Delta x}\right)
+ \Delta_y U_{i,j}\left(\frac{y_j - b\frac{\Delta t}{2} - y_j}{\Delta y}\right)\right]
= U^n_{i,j} + \sum_{a>0}\left[(\Delta_x U^n_{i,j})\left(\frac12 - \frac{\nu_x}{2}\right) + (\Delta_y U^n_{i,j})\left(-\frac{\nu_y}{2}\right)\right],$$
where $\nu_x = \dfrac{a\Delta t}{\Delta x}$ and $\nu_y = \dfrac{b\Delta t}{\Delta y}$. For the system case, $\nu^x_k$ and $\nu^y_k$ are defined through the eigenvalues of $A$ and $B$:
$$U^{n+\frac12}_{i+\frac12,L,j} = U^n_{i,j}
+ \sum_{\nu^x_k\ge 0}\left(\frac12 - \frac{\nu^x_k}{2}\right)\left(\ell^x_k\,\Delta_x U_{i,j}\right) r^x_k
+ \sum_k\left(-\frac{\nu^y_k}{2}\right)\left(\ell^y_k\,\Delta_y U_{i,j}\right) r^y_k;$$
similarly,
$$U^{n+\frac12}_{i+\frac12,R,j} = U^n_{i+1,j}
+ \sum_{\nu^x_k<0}\left(-\frac12 - \frac{\nu^x_k}{2}\right)\left(\ell^x_k\,\Delta_x U_{i+1,j}\right) r^x_k
+ \sum_k\left(-\frac{\nu^y_k}{2}\right)\left(\ell^y_k\,\Delta_y U_{i+1,j}\right) r^y_k.$$
Finally, solve the Riemann problem $u_t + Au_x = 0$ with data
$$\left(U^{n+\frac12}_{i+\frac12,L,j},\ U^{n+\frac12}_{i+\frac12,R,j}\right).$$
$$\therefore\quad U^{n+\frac12}_{i+\frac12,j} = U^{n+\frac12}_{i+\frac12,L,j} + \sum_{\lambda^x_k\le 0}\ell_k\,\Delta U_{i+\frac12,j}\, r_k
\qquad\left(\Delta U_{i+\frac12,j} = U^{n+\frac12}_{i+\frac12,R,j} - U^{n+\frac12}_{i+\frac12,L,j}\right).$$
Chapter 8
Systems of Hyperbolic Conservation
Laws
8.1 General Theory

We consider
$$u_t + f(u)_x = 0,\qquad u = \begin{pmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{pmatrix},\qquad f:\mathbb R^n\to\mathbb R^n \ \text{the flux.} \tag{8.1}$$
The system (8.1) is called hyperbolic if, for every $u$, the $n\times n$ matrix $f'(u)$ is diagonalizable with real eigenvalues $\lambda_1(u)\le\lambda_2(u)\le\cdots\le\lambda_n(u)$. Let us denote its left/right eigenvectors by $\ell_i(u)/r_i(u)$, respectively.

It is important to notice that the system is Galilean invariant, that is, the equation is unchanged under the transform
$$t\to\lambda t,\quad x\to\lambda x,\qquad \forall\lambda>0.$$
This suggests that we can look for special solutions of the form $u\!\left(\frac{x}{t}\right)$.
We plug $u\!\left(\frac{x}{t}\right)$ into (8.1) to obtain
$$u'\left(-\frac{x}{t^2}\right) + f'(u)\,u'\,\frac1t = 0
\quad\Rightarrow\quad f'(u)\,u' = \frac{x}{t}\,u'.$$
This implies that there exists $i$ such that $u' \parallel r_i(u)$ and $\frac{x}{t} = \lambda_i\!\left(u\!\left(\frac{x}{t}\right)\right)$. To find such a solution, we first construct the integral curve of $r_i(u)$: $u' = r_i(u)$. Let $R_i(u_0, s)$ be the integral curve of $r_i(u)$ passing through $u_0$, parameterized by its arclength. Along $R_i$, the speed $\lambda_i$ has the variation
$$\frac{d}{ds}\lambda_i(R_i(u_0,s)) = \nabla\lambda_i\cdot R_i' = \nabla\lambda_i\cdot r_i.$$
We have the following definition.
Definition 1.11. The $i$-th characteristic field is called
1. genuinely nonlinear if $\nabla\lambda_i(u)\cdot r_i(u) \neq 0$ for all $u$;
2. linearly degenerate if $\nabla\lambda_i(u)\cdot r_i(u) \equiv 0$;
3. nongenuinely nonlinear if $\nabla\lambda_i(u)\cdot r_i(u) = 0$ on isolated hypersurfaces in $\mathbb R^n$.
For a scalar equation, genuine nonlinearity is equivalent to convexity (or concavity) of the flux $f$, linear degeneracy means $f(u) = au$, while nongenuine nonlinearity corresponds to nonconvexity of $f$.
8.1.1 Rarefaction Waves

When the $i$-th field is genuinely nonlinear, we define
$$R_i^+(u_0) = \{u\in R_i(u_0)\ :\ \lambda_i(u)\ge\lambda_i(u_0)\}.$$
Now suppose $u_1\in R_i^+(u_0)$. We construct the centered rarefaction wave, denoted by $(u_0,u_1)$:
$$(u_0,u_1)\!\left(\frac{x}{t}\right) = \begin{cases}
u_0 & \text{if } \frac{x}{t}\le\lambda_i(u_0),\\
u_1 & \text{if } \frac{x}{t}\ge\lambda_i(u_1),\\
u & \text{if } \lambda_i(u_0)\le\frac{x}{t}\le\lambda_i(u_1) \text{ and } \lambda_i(u) = \frac{x}{t}.
\end{cases}$$
It is easy to check that this is a solution. We call $(u_0,u_1)$ an $i$-rarefaction wave.
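As a concrete scalar example (ours, a standard illustration rather than part of the text): for Burgers' equation $u_t + \left(\frac{u^2}{2}\right)_x = 0$ we have $f'(u) = u$, $\lambda(u) = u$ and $r = 1$, so the field is genuinely nonlinear. For $u_0 < u_1$ the centered rarefaction connecting them is
$$u\!\left(\frac{x}{t}\right) = \begin{cases} u_0 & \frac{x}{t} \le u_0,\\ \frac{x}{t} & u_0 \le \frac{x}{t} \le u_1,\\ u_1 & \frac{x}{t} \ge u_1,\end{cases}$$
since $\lambda(u) = u = \frac{x}{t}$ inside the fan.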
Figure 8.1: The integral curve of $u' = r_i(u)$ and the rarefaction wave.
8.1.2 Shock Waves

A shock wave is expressed by
$$u\!\left(\frac{x}{t}\right) = \begin{cases} u_0 & \text{for } \frac{x}{t} < \sigma,\\ u_1 & \text{for } \frac{x}{t} > \sigma.\end{cases}$$
Then $(u_0, u_1, \sigma)$ need to satisfy the jump condition
$$f(u_1) - f(u_0) = \sigma\,(u_1 - u_0). \tag{8.2}$$
Lemma 8.3 (Local structure of shock waves)
1. The solution of (8.2) for $(u,\sigma)$ consists of $n$ algebraic curves passing through $u_0$ locally; we name them $S_i(u_0)$, $i = 1,\dots,n$.
2. $S_i(u_0)$ is tangent to $R_i(u_0)$ up to second order, i.e., $S^{(k)}_i(u_0) = R^{(k)}_i(u_0)$, $k = 0,1,2$, where the derivatives are arclength derivatives.
3. $\sigma_i(u_0,u)\to\lambda_i(u_0)$ as $u\to u_0$, and $\sigma_i'(u_0,u_0) = \frac12\lambda_i'(u_0)$, where $\lambda_i' = \nabla\lambda_i\cdot r_i$.
Proof.
1. Let $S(u_0) = \{u\ :\ f(u)-f(u_0) = \sigma(u-u_0) \text{ for some } \sigma\in\mathbb R\}$. We claim that $S(u_0) = \bigcup_{i=1}^n S_i(u_0)$, where $S_i(u_0)$ is a smooth curve passing through $u_0$ with tangent $r_i(u_0)$ at $u_0$. When $u$ is on $S(u_0)$, rewrite the jump condition as
$$f(u) - f(u_0) = \left[\int_0^1 f'(u_0 + t(u-u_0))\,dt\right](u-u_0) = \tilde A(u_0,u)\,(u-u_0) = \sigma\,(u-u_0).$$
$$\therefore\quad u\in S(u_0) \iff (u-u_0) \text{ is an eigenvector of } \tilde A(u_0,u).$$
Assume $A(u) = f'(u)$ has real and distinct eigenvalues $\lambda_1(u) < \cdots < \lambda_n(u)$. Then $\tilde A(u_0,u)$ also has real and distinct eigenvalues $\tilde\lambda_1(u_0,u) < \cdots < \tilde\lambda_n(u_0,u)$, with left/right eigenvectors $\tilde\ell_i(u_0,u)$ and $\tilde r_i(u_0,u)$, respectively, and they converge to $\lambda_i(u_0)$, $\ell_i(u_0)$, $r_i(u_0)$ as $u\to u_0$. Normalize the eigenvectors: $\|\tilde r_i\| = 1$, $\tilde\ell_i\cdot\tilde r_j = \delta_{ij}$. The vectors parallel to $\tilde r_i$ can be characterized by
$$\tilde\ell_k(u_0,u)\,(u-u_0) = 0 \quad\text{for } k\neq i,\ k = 1,\dots,n.$$
Now we define
$$S_i(u_0) = \{u\ :\ \tilde\ell_k(u_0,u)\,(u-u_0) = 0,\ k\neq i,\ k=1,\dots,n\}.$$
We claim this is a smooth curve passing through $u_0$. Choose the coordinate system $r_1(u_0),\dots,r_n(u_0)$. Differentiating the equation $\tilde\ell_k(u_0,u)(u-u_0) = 0$ in the direction $r_j(u_0)$,
$$\frac{\partial}{\partial r_j}\Big|_{u=u_0}\left(\tilde\ell_k(u_0,u)(u-u_0)\right) = \tilde\ell_k(u_0,u_0)\cdot r_j(u_0) = \delta_{jk},$$
which is a full rank matrix. By the implicit function theorem, there exists a unique one-parameter smooth curve $S_i(u_0)$ passing through $u_0$. Therefore $S(u_0) = \bigcup_{i=1}^n S_i(u_0)$.
2, 3. Both $R_i(u_0)$ and $S_i(u_0)$ pass through $u_0$, and on $S_i(u_0)$
$$f(u) - f(u_0) = \sigma_i(u_0,u)\,(u-u_0),\qquad \forall u\in S_i(u_0).$$
Take the arclength derivative along $S_i(u_0)$:
$$f'(u)\,u' = \sigma_i'\,(u-u_0) + \sigma_i\,u', \qquad u' = S_i'.$$
When $u\to u_0$,
$$f'(u_0)\,S_i'(u_0) = \sigma_i(u_0,u_0)\,S_i'(u_0)
\quad\Rightarrow\quad S_i'(u_0) = r_i(u_0) \ \text{ and } \ \sigma_i(u_0,u_0) = \lambda_i(u_0).$$
Consider the second derivative:
$$(f''(u)u', u') + f'(u)\,u'' = \sigma_i''\,(u-u_0) + 2\sigma_i'\,u' + \sigma_i\,u''.$$
At $u = u_0$, $u' = S_i'(u_0) = R_i'(u_0) = r_i(u_0)$ and $u'' = S_i''(u_0)$, so
$$(f''r_i, r_i) + f'S_i'' = 2\sigma_i'\,r_i + \lambda_i\,S_i''.$$
On the other hand, taking the derivative of $f'(u)r_i(u) = \lambda_i(u)r_i(u)$ along $R_i(u_0)$ and evaluating at $u = u_0$,
$$(f''r_i, r_i) + f'(\nabla r_i\cdot r_i) = (\nabla\lambda_i\cdot r_i)\,r_i + \lambda_i\,\nabla r_i\cdot r_i,$$
where $\nabla r_i\cdot r_i = R_i''$.
Subtracting,
$$(f'-\lambda_i)\left(S_i'' - R_i''\right) = \left(2\sigma_i' - \nabla\lambda_i\cdot r_i\right) r_i.$$
Let $S_i'' - R_i'' = \sum_k\alpha_k\,r_k(u_0)$. The left-hand side equals $\sum_{k\neq i}(\lambda_k-\lambda_i)\alpha_k\,r_k$, which has no $r_i$ component, hence
$$\alpha_k = 0\ \ \forall k\neq i \qquad\text{and}\qquad 2\sigma_i' = \nabla\lambda_i\cdot r_i = \lambda_i'\ \text{ at } u_0.$$
Moreover, from the arclength parameterization,
$$(R_i', R_i') = 1 = (S_i', S_i') \quad\text{and}\quad (R_i', R_i'') = 0 = (S_i', S_i''),$$
$$\therefore\ (R_i'' - S_i'')\perp r_i \quad\therefore\ \alpha_i = 0.$$
Hence $R_i'' = S_i''$ at $u_0$. $\Box$
Let
$$S_i^-(u_0) = \{u\in S_i(u_0)\ :\ \lambda_i(u)\le\lambda_i(u_0)\}.$$
If $u_1\in S_i^-(u_0)$, define
$$(u_0,u_1) = \begin{cases} u_0 & \text{for } \frac{x}{t} < \sigma_i(u_0,u_1),\\ u_1 & \text{for } \frac{x}{t} > \sigma_i(u_0,u_1).\end{cases}$$
Then $(u_0,u_1)$ is a weak solution.
We propose the following entropy condition (the Lax entropy condition):
$$\lambda_i(u_0) > \sigma_i(u_0,u_1) > \lambda_i(u_1). \tag{8.3}$$
If the $i$-th characteristic field is genuinely nonlinear, then for $u_1\in S_i^-(u_0)$ with $u_1\to u_0$, (8.3) is always valid. This follows easily from $\sigma_i' = \frac12\lambda_i'$ and $\sigma_i(u_0,u_0) = \lambda_i(u_0)$. For $u_1\in S_i^-(u_0)$, we call the solution $(u_0,u_1)$ an $i$-shock or a Lax shock.
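Again a scalar illustration (ours, not from the text): for Burgers' equation the jump condition (8.2) gives $\sigma = \frac{u_0 + u_1}{2}$, and the Lax condition (8.3) becomes $u_0 > \frac{u_0+u_1}{2} > u_1$, i.e., $u_0 > u_1$. Only compressive jumps are admissible shocks; a jump with $u_0 < u_1$ must instead be resolved by the rarefaction fan of Section 8.1.1.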
8.1.3 Contact Discontinuity (Linear Wave)

If $\nabla\lambda_i(u)\cdot r_i(u) \equiv 0$, we call the $i$-th characteristic field linearly degenerate. In the case of a scalar equation, this corresponds to $f'' = 0$. We claim that
$$R_i(u_0) = S_i(u_0)\qquad\text{and}\qquad \sigma_i(u_0,u) = \lambda_i(u_0)\ \text{ for } u\in S_i(u_0) \text{ or } R_i(u_0).$$
Indeed, along $R_i(u_0)$ we have
$$f'(u)\,u' = \lambda_i(u)\,u',$$
and $\lambda_i(u)$ equals the constant $\lambda_i(u_0)$ by the linear degeneracy. Integrating the above equation from $u_0$ to $u$ along $R_i(u_0)$, we get
$$f(u) - f(u_0) = \lambda_i(u_0)\,(u-u_0).$$
This gives the shock condition. Thus $S_i(u_0)\equiv R_i(u_0)$ and $\sigma(u,u_0)\equiv\lambda_i(u_0)$.
Homeworks.
$$(u_0,u_1) = \begin{cases} u_0 & \frac{x}{t} < \sigma_i(u_0,u_1),\\ u_1 & \frac{x}{t} > \sigma_i(u_0,u_1).\end{cases}$$
Let $T_i(u_0) = R_i^+(u_0)\cup S_i^-(u_0)$ be called the $i$-th wave curve. For $u_1\in T_i(u_0)$, $(u_0,u_1)$ is either a rarefaction wave, a shock, or a contact discontinuity.
Theorem 8.8 (Lax). For the strictly hyperbolic system (8.1), if each field is either genuinely nonlinear or linearly degenerate, then for $u_R$ sufficiently close to $u_L$ the Riemann problem with the two end states $(u_L, u_R)$ has a unique self-similar solution which consists of $n$ elementary waves. Namely, there exist $u_0 = u_L,\dots,u_n = u_R$ such that $(u_{i-1}, u_i)$ is an $i$-wave.

Proof. Given $(\alpha_1,\dots,\alpha_n)\in\mathbb R^n$, define $u_i$ inductively: $u_i\in T_i(u_{i-1})$, with the arclength of $(u_{i-1},u_i)$ on $T_i$ equal to $\alpha_i$.
Thus
$$u_i = f(u_0, \alpha_1,\dots,\alpha_i).$$
We want to find $\alpha_1,\dots,\alpha_n$ such that
$$u_R = f(u_L, \alpha_1,\dots,\alpha_n).$$
First, $u_L = f(u_L, 0,\dots,0)$; when $u_R = u_L$, $(\alpha_1,\dots,\alpha_n) = (0,\dots,0)$ is a solution. When $u_R \approx u_L$, the $r_i(u_0)$ are independent,
$$\frac{\partial f}{\partial\alpha_i}\Big|_{\alpha=0}(u_L,0,\dots,0) = r_i(u_0),\qquad\text{and } f\in C^2.$$
By the inverse function theorem, for $u_R \approx u_L$ there exists a unique $\alpha$ such that $u_R = f(u_L,\alpha)$. Uniqueness is left as an exercise. $\Box$
8.2 Physical Examples

8.2.1 Gas dynamics

The equations of gas dynamics are derived from the conservation of mass, momentum and energy. Before we derive these equations, let us review some thermodynamics. First, the basic thermo variables are the pressure ($p$) and the specific volume ($\tau$), called state variables. The internal energy ($e$) is a function of $p$ and $\tau$; such a relation is called a constitutive equation. The basic assumptions are
$$\frac{\partial e}{\partial p}\Big|_{\tau} > 0,\qquad \frac{\partial e}{\partial\tau}\Big|_{p} > 0.$$
Sometimes it is convenient to express $p$ as a function of $(\tau, e)$.

In an adiabatic process (no heat enters or leaves), the first law of thermodynamics (conservation of energy) reads
$$de + p\,d\tau = 0. \tag{8.4}$$
This is called a Pfaffian equation mathematically. A function $\varphi(e,\tau)$ is called an integral of (8.4) if there exists a function $\mu(e,\tau)$ such that
$$d\varphi = \mu\,(de + p\,d\tau).$$
Thus $\varphi = \text{constant}$ represents a specific adiabatic process. For a Pfaffian equation with only two independent variables, one can always find an integral. First, one can derive an equation for $\mu$: from
$$\varphi_e = \mu \qquad\text{and}\qquad \varphi_\tau = \mu p,$$
and using $\varphi_{e\tau} = \varphi_{\tau e}$, we obtain the equation for $\mu$:
$$\mu_\tau = (\mu p)_e.$$
This is a linear first order equation for $\mu$. It can be solved by the method of characteristics in the region $\tau > 0$, $e > 0$. The solutions $\mu$ and $\varphi$ are not unique: if $\varphi$ is a solution, so is any $\bar\varphi$ with $d\bar\varphi = \nu(\varphi)\,d\varphi$ for any function $\nu$. We can choose this rescaling so that if two systems are in thermo-equilibrium, then they have the same value of the corresponding $\bar\mu$; in other words, $\bar\mu$ is a function of the empirical temperature only. We denote it by $1/T$, and $T$ is called the absolute temperature. The corresponding $\bar\varphi$ is called the physical entropy $S$. The relation $d\varphi = \mu(de + p\,d\tau)$ is then re-expressed as
$$de = T\,dS - p\,d\tau. \tag{8.5}$$
For an ideal gas (the laws of Boyle and Gay-Lussac),
$$p\,\tau = RT, \tag{8.6}$$
where $R$ is the universal gas constant. From this and (8.5), treating $S$ and $\tau$ as independent variables, one obtains
$$R\,e_S(S,\tau) + \tau\,e_\tau(S,\tau) = 0.$$
This implies that $e = h(H)$, where $H = \tau\exp(-S/R)$. We notice that $h' < 0$, because
$$p = -\left(\frac{\partial e}{\partial\tau}\right)_S = -\frac{H}{\tau}\,h'(H) > 0.$$
From
$$T = \left(\frac{\partial e}{\partial S}\right)_\tau = -\frac1R\,h'(H)\,H,$$
we see that $T$ is a function of $H$. In most cases $T$ is a decreasing function of $H$; we shall make this an assumption. With this, $e$ is a function of $T$, say $e(T)$, and $e(T)$ is an increasing function.
Now we have five thermo variables $p, \tau, e, S, T$, and three relations:
$$p\tau = RT,\qquad e = e(T),\qquad de = T\,dS - p\,d\tau.$$
Hence we can choose two of them as independent thermo variables and treat the remaining three as dependent variables.

For instance, suppose $e$ is a linear function of $T$, i.e. $e = c_v T$, where $c_v$ is a constant called the specific heat at constant volume. Such a gas is called a polytropic gas. We then obtain
$$p\tau = RT \qquad\text{and}\qquad e = c_v T = \frac{p\tau}{\gamma - 1}, \tag{8.7}$$
or, in terms of the entropy,
$$p = A(S)\,\tau^{-\gamma},\qquad
T = \frac{A(S)}{R}\,\tau^{-\gamma+1},\qquad
e = \frac{c_v\,A(S)}{R}\,\tau^{-\gamma+1},$$
where
$$A(S) = (\gamma-1)\exp\!\left(\frac{S-S_0}{c_v}\right),\qquad \gamma = 1 + \frac{R}{c_v}.$$
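For example (a standard fact, not from the text): for a diatomic ideal gas such as air, $c_v = \frac52 R$, so $\gamma = 1 + R/c_v = 1.4$; for a monatomic gas, $c_v = \frac32 R$ and $\gamma = \frac53$.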
If we define $dQ = T\,dS$, it is easy to see that $c_v$ and $c_p$ are the specific heats at constant volume and constant pressure, respectively:
$$c_v = \left(\frac{\partial Q}{\partial T}\right)_\tau = \left(\frac{\partial e}{\partial T}\right)_\tau,$$
$$c_p := \left(\frac{\partial Q}{\partial T}\right)_p = \left(\left(\frac{\partial e}{\partial\tau}\right)_p + p\right)\Big/\left(\frac{\partial T}{\partial\tau}\right)_p
= \left(\frac{\partial e}{\partial T}\right)_p + p\left(\frac{\partial\tau}{\partial T}\right)_p.$$
In general $c_p > c_v$, because $c_p$ is the amount of heat added to the system per unit mass at constant pressure: to maintain constant pressure the volume has to expand (otherwise the pressure would increase), and the extra work done by this expansion is supplied by the extra amount of heat, $c_p - c_v$.
Next, we derive the equations of gas dynamics. Let us consider an arbitrary domain $\Omega\subset\mathbb R^3$. The mass flux from outside to inside per unit time per unit area $dS$ is $-\rho v\cdot n$, where $n$ is the outer normal of $\partial\Omega$. Thus, the conservation of mass reads
$$\frac{d}{dt}\int_\Omega \rho\,dx = -\int_{\partial\Omega}[\rho v\cdot n]\,dS = -\int_\Omega \operatorname{div}(\rho v)\,dx.$$
This holds for arbitrary $\Omega$, hence we have
$$\rho_t + \operatorname{div}(\rho v) = 0. \tag{8.8}$$
This is called the continuity equation.
Now we derive the momentum equation. Let us suppose the only surface force is from the pressure (no viscous force). Then the momentum change in $\Omega$ is due to (i) the momentum carried in through the boundary, (ii) the pressure force exerted on the surface, and (iii) the body force. The first term is $-\rho v\,(v\cdot n)$, the second term is $-p\,n$. Thus we have
$$\frac{d}{dt}\int_\Omega \rho v\,dx = \int_{\partial\Omega}\left[-\rho v\,(v\cdot n) - p\,n\right]dS + \int_\Omega \rho F\,dx
= \int_\Omega\left\{-\operatorname{div}\left[\rho\,v\otimes v + pI\right] + \rho F\right\}dx.$$
This yields
$$(\rho v)_t + \operatorname{div}(\rho\,v\otimes v) + \nabla p = \rho F. \tag{8.9}$$
The energy per unit volume is $E = \frac12\rho v^2 + \rho e$. The energy change in $\Omega$ per unit time is due to (i) the energy carried in through the boundary, (ii) the work done by the pressure on the boundary, and (iii) the work done by the body force. The first term is $-Ev\cdot n$, the second term is $-pv\cdot n$, and the third term is $\rho F\cdot v$. The conservation of energy reads
$$\frac{d}{dt}\int_\Omega E\,dx = \int_{\partial\Omega}\left[-Ev\cdot n - pv\cdot n\right]dS + \int_\Omega \rho F\cdot v\,dx.$$
By applying the divergence theorem, we obtain the energy equation:
$$E_t + \operatorname{div}\left[(E+p)v\right] = \rho F\cdot v. \tag{8.10}$$
In one dimension, the equations are (without body force)
$$\rho_t + (\rho u)_x = 0,$$
$$(\rho u)_t + (\rho u^2 + p)_x = 0,$$
$$\left(\tfrac12\rho u^2 + \rho e\right)_t + \left[\left(\tfrac12\rho u^2 + \rho e + p\right)u\right]_x = 0.$$
Here the unknowns are two thermo variables $\rho$ and $e$, and one kinetic variable $u$. The other thermo variable $p$ is given by the constitutive equation $p(\rho,e)$.
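A small Python sketch of the corresponding flux function for a polytropic gas, written in the conserved variables $U = (\rho, \rho u, E)$ with $E = \frac12\rho u^2 + \rho e$ and $p = (\gamma-1)\left(E - \frac12\rho u^2\right)$; the function name and the variable ordering are our own choices.

```python
import numpy as np

def euler_flux(U, gamma=1.4):
    """Flux f(U) of the 1-D Euler equations for a polytropic gas.
    U = (rho, rho*u, E) with E = 0.5*rho*u^2 + rho*e and
    p = (gamma - 1) * (E - 0.5*rho*u^2)."""
    rho, m, E = U
    u = m / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([m,
                     m * u + p,
                     (E + p) * u])
```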
8.2.2 Riemann Problem of Gas Dynamics

We use $(\rho, u, S)$ as our variables:
$$\begin{pmatrix}\rho\\ u\\ S\end{pmatrix}_t
+ \begin{pmatrix} u & \rho & 0\\[2pt] \dfrac{c^2}{\rho} & u & \dfrac{P_S}{\rho}\\[4pt] 0 & 0 & u\end{pmatrix}
\begin{pmatrix}\rho\\ u\\ S\end{pmatrix}_x = 0,$$
where $p(\rho,S) = A(S)\rho^\gamma$, $\gamma > 1$, and $c^2 = \dfrac{\partial P}{\partial\rho}\Big|_S$. The eigenvalues and the corresponding eigenvectors are
$$\lambda_1 = u - c,\qquad \lambda_2 = u,\qquad \lambda_3 = u + c,$$
$$r_1 = \begin{pmatrix}-\rho\\ c\\ 0\end{pmatrix},\qquad
r_2 = \begin{pmatrix}P_S\\ 0\\ -c^2\end{pmatrix},\qquad
r_3 = \begin{pmatrix}\rho\\ c\\ 0\end{pmatrix},$$
$$\ell_1 = \left(-c,\ \rho,\ -\frac{P_S}{c}\right),\qquad \ell_2 = (0,0,1),\qquad \ell_3 = \left(c,\ \rho,\ \frac{P_S}{c}\right).$$
Note that
$$\nabla\lambda_1\cdot r_1 = \frac1c\left(\frac12\rho P_{\rho\rho} + c^2\right) > 0,\qquad
\nabla\lambda_3\cdot r_3 = \frac1c\left(\frac12\rho P_{\rho\rho} + c^2\right) > 0,\qquad
\nabla\lambda_2\cdot r_2 \equiv 0.$$
$R_1$ is the integral curve of $(d\rho, du, dS)\parallel r_1$, i.e., $(d\rho, du, dS)\perp\ell_2$ and $\ell_3$. Therefore, on $R_1$,
$$\begin{cases}(d\rho, du, dS)\cdot(0,0,1) = 0,\\ (d\rho, du, dS)\cdot\left(c,\rho,\frac{P_S}{c}\right) = 0,\end{cases}
\;\Rightarrow\;
\begin{cases} dS = 0 \text{ along } R_1,\\ c\,d\rho + \rho\,du + \frac{P_S}{c}\,dS = 0,\end{cases}
\;\Rightarrow\;
\begin{cases} c\,d\rho + \rho\,du = 0,\\ c^2 d\rho + P_S\,dS + \rho c\,du = 0 \;\Rightarrow\; dP + \rho c\,du = 0.\end{cases}$$
On $R_2$, $(d\rho, du, dS)\perp\ell_1, \ell_3$:
$$\begin{cases} c^2 d\rho + \rho c\,du + P_S\,dS = 0,\\ c^2 d\rho - \rho c\,du + P_S\,dS = 0,\end{cases}
\;\Rightarrow\;
\begin{cases} dP + \rho c\,du = 0,\\ dP - \rho c\,du = 0,\end{cases}
\;\Rightarrow\; dP = 0,\quad du = 0\qquad(\rho c\neq 0).$$
On $R_3$, $(d\rho, du, dS)\perp\ell_1, \ell_2$:
$$\begin{cases} dS = 0,\\ c\,d\rho - \rho\,du = 0.\end{cases}$$
Let $\Phi = \displaystyle\int\frac{c(\rho,S)}{\rho}\,d\rho$. From $c = \sqrt{P_\rho} = \sqrt{A(S)\,\gamma\,\rho^{\gamma-1}}$,
$$\Phi(\rho,S) = \sqrt{A(S)\gamma}\;\frac{2}{\gamma-1}\;\rho^{\frac{\gamma-1}{2}} = \frac{2c}{\gamma-1}.$$
Then on $R_3$,
$$u - u_0 = \int_{\rho_0}^{\rho}\frac{c}{\rho}\,d\rho = \Phi - \Phi_0,$$
and, since $S = S_0$ along the curve, $A(S) = A(S_0) = P_0\,\rho_0^{-\gamma}$. Expressing $\Phi$ in terms of $P$, $P_0$, $\rho_0$ (using $\rho = \rho_0(P/P_0)^{1/\gamma}$) and plugging it in, we obtain
$$\Phi - \Phi_0 = \psi_0(P)
:= \frac{2}{\gamma-1}\left(\sqrt{\frac{\gamma P}{\rho_0}\left(\frac{P_0}{P}\right)^{1/\gamma}} - \sqrt{\frac{\gamma P_0}{\rho_0}}\right)
= \frac{2\sqrt{\gamma}}{(\gamma-1)\,\rho_0^{1/2}}\;P_0^{\frac{1}{2\gamma}}\left(P^{\frac{\gamma-1}{2\gamma}} - P_0^{\frac{\gamma-1}{2\gamma}}\right).$$
Figure 8.2: The integral curves of the first and the third fields on the $(u, P)$ phase plane.
$$\therefore\quad R_1:\ u = u_0 - \psi_0(P),\qquad R_3:\ u = u_0 + \psi_0(P).$$
On $R_2$, which is a contact discontinuity, $du = 0$ and $dP = 0$; therefore $u = u_0$, $P = P_0$.
For $S_1$, $S_3$, we use the jump conditions of
$$\begin{cases}\rho_t + (\rho u)_x = 0,\\ (\rho u)_t + (\rho u^2 + P)_x = 0,\\ \left(\frac12\rho u^2 + \rho e\right)_t + \left(\left(\frac12\rho u^2 + \rho e + P\right)u\right)_x = 0.\end{cases}$$
Suppose the shock propagates with speed $\sigma$. Let $v = u - \sigma$ (standing shock); then
$$[\rho v] = 0,\qquad [\rho v^2 + P] = 0,\qquad \left[\left(\tfrac12\rho v^2 + \rho e + P\right)v\right] = 0.$$
Let
$$m = \rho_0 v_0 = \rho v,$$
which is the first jump condition. The second jump condition says that
$$\rho_0 v_0^2 + P_0 = \rho v^2 + P,\qquad\text{i.e.}\qquad m v_0 + P_0 = m v + P
\qquad\Rightarrow\qquad m = -\frac{P - P_0}{v - v_0},$$
where $\tau = \dfrac1\rho$ is the specific volume and $v = m\tau$.
$$\therefore\quad m^2 = -\frac{P-P_0}{\tau-\tau_0},\qquad v - v_0 = -\frac{P-P_0}{m},\qquad
(u-u_0)^2 = (v-v_0)^2 = -(P-P_0)(\tau-\tau_0).$$
The third jump condition is
$$\left(\tfrac12\rho_0 v_0^2 + \rho_0 e_0 + P_0\right)v_0 = \left(\tfrac12\rho v^2 + \rho e + P\right)v
\qquad\Rightarrow\qquad
\tfrac12 v_0^2 + e_0 + P_0\tau_0 = \tfrac12 v^2 + e + P\tau.$$
Using $v_0^2 = m^2\tau_0^2$, $v^2 = m^2\tau^2$ and $m^2 = -\dfrac{P-P_0}{\tau-\tau_0}$, this becomes
$$H(P,\tau) = e - e_0 + \frac{P+P_0}{2}\,(\tau - \tau_0) = 0.$$
Recall $e = \dfrac{P\tau}{\gamma-1}$. From $H(P,\tau) = 0$,
$$\frac{P\tau}{\gamma-1} - \frac{P_0\tau_0}{\gamma-1} + \frac{P+P_0}{2}\,(\tau-\tau_0) = 0.$$
Solve for $\tau$ in terms of $P$, $P_0$, $\tau_0$, then plug it into
$$(u-u_0)^2 = -(P-P_0)(\tau-\tau_0).$$
Set
$$\phi_0(P) = (P-P_0)\sqrt{\frac{\frac{2}{\gamma+1}\,\tau_0}{P + \frac{\gamma-1}{\gamma+1}P_0}}.$$
Then
$$S_1:\ u = u_0 - \phi_0(P),\qquad S_3:\ u = u_0 + \phi_0(P).$$
Therefore,
$$T_1^{(\ell)}:\ u = \begin{cases} u_0 - \psi_0(P) & P < P_0,\\ u_0 - \phi_0(P) & P \ge P_0,\end{cases}\qquad
T_3^{(\ell)}:\ u = \begin{cases} u_0 + \psi_0(P) & P > P_0,\\ u_0 + \phi_0(P) & P \le P_0,\end{cases}$$
$$T_1^{(r)}:\ u = \begin{cases} u_0 - \psi_0(P) & P > P_0,\\ u_0 - \phi_0(P) & P \le P_0,\end{cases}\qquad
T_3^{(r)}:\ u = \begin{cases} u_0 + \psi_0(P) & P < P_0,\\ u_0 + \phi_0(P) & P \ge P_0.\end{cases}$$
Now we are ready to solve the Riemann problem with initial states $(\rho_L, P_L, u_L)$ and $(\rho_R, P_R, u_R)$. Recall that across the second field, $[P] = [u] = 0$.
The solution consists of two middle states $(\rho_I, P_I, u_I)$ and $(\rho_{II}, P_{II}, u_{II})$, separated by the contact discontinuity, between $(\rho_L, P_L, u_L)$ and $(\rho_R, P_R, u_R)$, with
$$P_I = P_{II} = P^*,\qquad u_I = u_{II} = u^*.$$
The vacuum state. The solution must satisfy $P > 0$. If
$$u_\ell + \frac{2c_\ell}{\gamma-1} \;<\; u_r - \frac{2c_r}{\gamma-1},$$
the two wave curves do not intersect in the region $P > 0$ and there is no solution; a vacuum region appears.
For numerical purposes, Godunov gave a procedure, the Godunov iteration, to find $P^*$:
$$u^*(P) = u_I = u_{II} = u_r + f_r(P),\qquad
f_0(P) = \begin{cases}\psi_0(P) & P < P_0,\\ \phi_0(P) & P \ge P_0.\end{cases}$$
We can solve this equation.
Figure 8.3: The rarefaction waves and shocks of the first and third fields on the $(u, P)$ phase plane at the left/right state.
The Godunov iteration is
$$\begin{cases} Z_R\,(u^* - u_R) = P^* - P_R,\\ Z_L\,(u_L - u^*) = P^* - P_L,\end{cases}$$
where
$$Z_R = \sqrt{P_R\,\rho_R}\;g\!\left(\frac{P^*}{P_R}\right),\qquad
Z_L = \sqrt{P_L\,\rho_L}\;g\!\left(\frac{P^*}{P_L}\right),$$
and
$$g(w) = \begin{cases}
\sqrt{\dfrac{\gamma+1}{2}\,w + \dfrac{\gamma-1}{2}} & w \ge 1,\\[8pt]
\dfrac{\gamma-1}{2\sqrt{\gamma}}\;\dfrac{1-w}{1-w^{\frac{\gamma-1}{2\gamma}}} & w \le 1.
\end{cases}$$
This can be solved by Newton's method.
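A hedged Python sketch of this procedure is given below. It uses the relations $Z_L(u_L - u^*) = P^* - P_L$ and $Z_R(u^* - u_R) = P^* - P_R$ with the $g(w)$ above, but iterates them as a simple fixed-point scheme rather than a full Newton method; the stopping criterion and initial guess are our own choices, and no vacuum check is performed.

```python
import numpy as np

def godunov_pstar(rhoL, uL, PL, rhoR, uR, PR, gamma=1.4, tol=1e-10, maxit=100):
    """Iterate for the middle-state pressure P* of the gas-dynamics
    Riemann problem (fixed-point variant of the Godunov iteration)."""
    def g(w):
        w = max(w, 1e-14)
        if w >= 1.0:                     # shock branch
            return np.sqrt(0.5 * (gamma + 1.0) * w + 0.5 * (gamma - 1.0))
        # rarefaction branch
        return (gamma - 1.0) / (2.0 * np.sqrt(gamma)) \
            * (1.0 - w) / (1.0 - w ** ((gamma - 1.0) / (2.0 * gamma)))
    P = 0.5 * (PL + PR)                  # initial guess
    for _ in range(maxit):
        ZL = np.sqrt(PL * rhoL) * g(P / PL)
        ZR = np.sqrt(PR * rhoR) * g(P / PR)
        # eliminate u*: u* = uL - (P - PL)/ZL = uR + (P - PR)/ZR
        Pnew = (ZR * PL + ZL * PR + ZL * ZR * (uL - uR)) / (ZL + ZR)
        if abs(Pnew - P) < tol * max(P, 1e-14):
            return max(Pnew, 1e-14)
        P = max(Pnew, 1e-14)
    return P
```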
Approximate Riemann Solver. Our equation is $u_t + f(u)_x = 0$ with Riemann data $(u_L, u_R)$, and we look for the middle states. Suppose $u_L \approx u_R$; then the original equation can be replaced by
$$u_t + f'(u)\,u_x = 0,\qquad\text{i.e.}\qquad u_t + A(u)\,u_x = 0.$$
Choose $\bar u = \dfrac{u_L + u_R}{2}$ and solve $u_t + A(\bar u)\,u_x = 0$ with the Riemann data
$$u(x,0) = \begin{cases} u_L & x < 0,\\ u_R & x > 0.\end{cases}$$
Let $\lambda_i$, $\ell_i$, $r_i$ be the eigenvalues and eigenvectors of $A(\bar u)$. Then
$$u\!\left(\frac{x}{t}\right) = u_L + \sum_{\lambda_i < \frac{x}{t}}\left(\ell_i\,(u_R - u_L)\right) r_i.$$
One severe error in this approximate Riemann solver is that rarefaction waves are approximated by discontinuities. This will produce non-entropy shocks. To cure this problem, we expand such a linear discontinuity into a linear fan. Precisely, suppose $\lambda_i(u_{i-1}) < 0$ and $\lambda_i(u_i) > 0$; this suggests that there is a rarefaction fan crossing $\frac{x}{t} = 0$. We then expand this discontinuity into a linear fan. At $\frac{x}{t} = 0$ we thus choose
$$u_m = (1-\alpha)\,u_{i-1} + \alpha\,u_i,\qquad
\alpha = \frac{-\lambda_i(u_{i-1})}{\lambda_i(u_i) - \lambda_i(u_{i-1})}.$$
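A hedged Python sketch of this linearized solver for the state at $x/t = 0$, including the simple rarefaction-fan fix just described, is given below. The averaging $\bar u = (u_L+u_R)/2$, the function name, and the optional callback for evaluating $\lambda_i$ at intermediate states are all our own assumptions; for a genuine Roe-type solver one would replace the arithmetic average by a Roe average.

```python
import numpy as np

def linearized_riemann_state(A_bar, uL, uR, lam_of_state=None):
    """State at x/t = 0 of the linearized Riemann problem u_t + A(u_bar) u_x = 0.
    A_bar: frozen Jacobian (assumed diagonalizable with real eigenvalues);
    uL, uR: numpy arrays (end states);
    lam_of_state(u, i): optional callback returning lambda_i at a state u,
    used only for the rarefaction-fan (entropy) fix."""
    lam, R = np.linalg.eig(A_bar)
    order = np.argsort(lam)                    # sort waves by speed
    lam, R = lam[order], R[:, order]
    L = np.linalg.inv(R)
    alpha = L @ (uR - uL)                      # wave strengths l_i (uR - uL)
    # intermediate states u_0 = uL, u_1, ..., u_n = uR
    states = [np.array(uL, dtype=float)]
    for i in range(len(lam)):
        states.append(states[-1] + alpha[i] * R[:, i])
    u = np.array(uL, dtype=float)
    for i in range(len(lam)):
        if lam[i] < 0:
            u = u + alpha[i] * R[:, i]         # wave lies left of x/t = 0
        elif lam_of_state is not None:
            lm, lp = lam_of_state(states[i], i), lam_of_state(states[i + 1], i)
            if lm < 0 < lp:                    # transonic rarefaction: blend
                a = -lm / (lp - lm)
                u = (1 - a) * states[i] + a * states[i + 1]
            break
        else:
            break
    return u
```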