
Notes taken by Dexter Chua

Michaelmas 2015

Self-adjoint ODEs

Periodic functions. Fourier series: definition and simple properties; Parseval’s theorem.

Equations of second order. Self-adjoint differential operators. The Sturm-Liouville

equation; eigenfunctions and eigenvalues; reality of eigenvalues and orthogonality of

eigenfunctions; eigenfunction expansions (Fourier series as prototype), approximation

in mean square, statement of completeness. [5]

Physical basis of Laplace's equation, the wave equation and the diffusion equation.

General method of separation of variables in Cartesian, cylindrical and spherical

coordinates. Legendre’s equation: derivation, solutions including explicit forms of

P₀, P₁ and P₂, orthogonality. Bessel's equation of integer order as an example of a

self-adjoint eigenvalue problem with non-trivial weight.

Examples including potentials on rectangular and circular domains and on a spherical

domain (axisymmetric case only), waves on a finite string and heat flow down a

semi-infinite rod. [5]

Properties of the Dirac delta function. Initial value problems and forced problems with

two fixed end points; solution using Green’s functions. Eigenfunction expansions of the

delta function and Green’s functions. [4]

Fourier transforms

Fourier transforms: definition and simple properties; inversion and convolution theorems.

The discrete Fourier transform. Examples of application to linear systems. Relationship

of transfer function to Green’s function for initial value problems. [4]

Classification of PDEs in two independent variables. Well posedness. Solution by

the method of characteristics. Green’s functions for PDEs in 1, 2 and 3 independent

variables; fundamental solutions of the wave equation, Laplace’s equation and the

diffusion equation. The method of images. Application to the forced wave equation,

Poisson's equation and forced diffusion equation. Transient solutions of diffusion

problems: the error function. [6]

Contents

0 Introduction
1 Vector spaces
2 Fourier series
  2.1 Fourier series
  2.2 Convergence of Fourier series
  2.3 Differentiability and Fourier series
3 Sturm-Liouville Theory
  3.1 Sturm-Liouville operators
  3.2 Least squares approximation
4 Partial differential equations
  4.1 Laplace's equation
  4.2 Laplace's equation in the unit disk in R²
  4.3 Separation of variables
  4.4 Laplace's equation in spherical polar coordinates
    4.4.1 Laplace's equation in spherical polar coordinates
    4.4.2 Legendre Polynomials
    4.4.3 Solution to radial part
  4.5 Multipole expansions for Laplace's equation
  4.6 Laplace's equation in cylindrical coordinates
  4.7 The heat equation
  4.8 The wave equation
5 Distributions
  5.1 Distributions
  5.2 Green's functions
  5.3 Green's functions for initial value problems
6 Fourier transforms
  6.1 The Fourier transform
  6.2 The Fourier inversion theorem
  6.3 Parseval's theorem for Fourier transforms
  6.4 A word of warning
  6.5 Fourier transformation of distributions
  6.6 Linear systems and response functions
  6.7 General form of transfer function
  6.8 The discrete Fourier transform
  6.9 The fast Fourier transform*
7 More partial differential equations
  7.1 Well-posedness
  7.2 Method of characteristics
  7.3 Characteristics for 2nd order partial differential equations
  7.4 Green's functions for PDEs on Rⁿ
  7.5 Poisson's equation


0 Introduction

In the previous courses, the (partial) differential equations we have seen are mostly linear. For example, we have Laplace's equation:

    ∂²φ/∂x² + ∂²φ/∂y² = 0,

and the heat equation:

    ∂φ/∂t = κ (∂²φ/∂x² + ∂²φ/∂y²).

The Schrödinger equation in quantum mechanics is also linear:

    iℏ ∂Φ/∂t = −(ℏ²/2m) ∂²Φ/∂x² + V(x)Φ(x).

By being linear, these equations have the property that if φ₁, φ₂ are solutions, then so is λ₁φ₁ + λ₂φ₂ (for any constants λᵢ).

Why are all these linear? In general, if we just randomly write down a differential equation, it is most likely not going to be linear. So where does the linearity of the equations of physics come from?

The answer is that the real world is not linear in general. However, often we are not looking for a completely accurate and precise description of the universe. At low energies/speeds/whatever, we can often approximate reality quite accurately by a linear equation. For example, the equations of general relativity are very complicated and nowhere near linear, but for small masses and velocities they reduce to Newton's law of gravitation, which is linear.

The only exception to this seems to be Schrödinger’s equation. While there

are many theories and equations that superseded the Schrödinger equation, these

are all still linear in nature. It seems that linearity is the thing that underpins

quantum mechanics.

Due to the prevalence of linear equations, it is rather important that we understand these equations well, and this is the primary objective of the course.


1 Vector spaces

When dealing with functions and differential equations, we will often think of the space of functions as a vector space. In many cases, we will try to find a "basis" for our space of functions, and expand our functions in terms of the basis. Under different situations, we would want to use a different basis for our space. For example, when dealing with periodic functions, we will want to pick basis elements that are themselves periodic. In other situations, these basis elements would not be that helpful.

A familiar example would be the Taylor series, where we try to approximate a function f by

    f(x) = ∑_{n=0}^{∞} (f⁽ⁿ⁾(0)/n!) xⁿ.

Here we are thinking of {xⁿ : n ∈ N} as the basis of our space, and trying to approximate an arbitrary function as a sum of the basis elements. When writing the function f as a sum like this, it is of course important to consider whether the sum converges, and when it does, whether it actually converges back to f.

Another issue of concern is: if we have a general set of basis functions {y_n}, how can we find the coefficients c_n such that f(x) = ∑ c_n y_n(x)? This is where linear algebra comes in. Finding these coefficients is something we understand well in linear algebra, and we will attempt to borrow those results and apply them to our space of functions.

Other concepts we would want to borrow are eigenvalues and eigenfunctions, as well as self-adjoint ("Hermitian") operators. As we go along the course, we will see some close connections between functions and vector spaces, and we can often get inspiration from linear algebra.

Of course, there is no guarantee that the results from linear algebra apply directly, since most of our linear algebra results were about finite bases and finite linear sums. However, it is often a good starting point, and usually works when dealing with sufficiently nice functions.

We start with some preliminary definitions, which should be familiar from

IA Vectors and Matrices and/or IB Linear Algebra.

Definition (Vector space). A vector space over C (or R) is a set V with an operation + which obeys

(i) u + v = v + u (commutativity)
(ii) (u + v) + w = u + (v + w) (associativity)

We can also multiply vectors by scalars λ ∈ C, which satisfies

(i) λ(μv) = (λμ)v (associativity)
(ii) λ(u + v) = λu + λv (distributivity in V)
(iii) (λ + μ)u = λu + μu (distributivity in C)
(iv) 1v = v (identity)


Often, we don't work with just a bare vector space. We usually give it some additional structure, such as an inner product.

Definition (Inner product). An inner product on V is a map (·,·): V × V → C that satisfies

(i) (u, λv) = λ(u, v) (linearity in second argument)
(ii) (u, v + w) = (u, v) + (u, w) (additivity)
(iii) (u, v) = (v, u)* (conjugate symmetry)
(iv) (u, u) ≥ 0, with equality iff u = 0 (positivity)

Note that the positivity condition makes sense since conjugate symmetry entails that (u, u) ∈ R.

The inner product in turn defines a norm ‖u‖ = √(u, u) that provides the notion of length and distance.

It is important to note that we only have linearity in the second argument. For the first argument, we have (λu, v) = (v, λu)* = λ*(v, u)* = λ*(u, v).

Definition (Basis). A set of vectors {v₁, v₂, ···, v_n} forms a basis of V iff any u ∈ V can be uniquely written as a linear combination

    u = ∑_{i=1}^{n} λᵢvᵢ

for some scalars λᵢ. The dimension of a vector space is the number of basis vectors in its basis.

A basis is orthogonal (with respect to the inner product) if (vᵢ, vⱼ) = 0 whenever i ≠ j.

A basis is orthonormal (with respect to the inner product) if it is orthogonal and (vᵢ, vᵢ) = 1 for all i.

Orthonormal bases are the nice bases, and these are what we want to work

with.

Given an orthonormal basis, we can use the inner product to find the expansion of any u ∈ V in terms of the basis: if

    u = ∑ᵢ λᵢvᵢ,

then taking the inner product with vⱼ gives

    (vⱼ, u) = (vⱼ, ∑ᵢ λᵢvᵢ) = ∑ᵢ λᵢ(vⱼ, vᵢ) = λⱼ.

So the coefficients are given by λᵢ = (vᵢ, u).
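As a quick sanity check of this formula, here is a minimal numerical sketch (Python with NumPy; the random test data and all names are my own choices, not from the course): we build an orthonormal basis of C³ and recover the coefficients of a vector purely from inner products.

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal basis of C^3: the columns of Q from a QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

u = rng.normal(size=3) + 1j * rng.normal(size=3)

# lambda_i = (v_i, u); np.vdot conjugates its first argument, matching
# an inner product that is conjugate-linear in the first slot.
lam = np.array([np.vdot(Q[:, i], u) for i in range(3)])

assert np.allclose(Q @ lam, u)   # u = sum_i lambda_i v_i
```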

We have seen all these so far in IA Vectors and Matrices, where a vector is a list

of finitely many numbers. However, functions can also be thought of as elements

of an (infinite dimensional) vector space.


Functions can be added pointwise: (f + g)(x) = f(x) + g(x). Given a scalar λ, we can also define (λf)(x) = λf(x). This makes intuitive sense: we can simply view a function as a list of numbers, where we list out the values of f at each point. The list could be infinite, but it is a list nonetheless.

Most of the time, we don't want to look at the set of all functions. That would be too huge and uninteresting. A natural class of functions to consider would be the set of solutions to some particular differential equation. However, this doesn't always work. For this class to actually be a vector space, the sum of two solutions and the scalar multiple of a solution must also be solutions. This is exactly the requirement that the differential equation is linear. Hence, the set of solutions to a linear differential equation forms a vector space. Linearity pops up again.

Now what about the inner product? A natural definition is

    (f, g) = ∫_Σ f(x)* g(x) dμ,

where μ is some measure. This measure specifies how much weighting we give to each point x.

Why does this definition make sense? Recall that the usual inner product on finite-dimensional vector spaces is ∑ vᵢ*wᵢ. Here we are just summing the different components of v and w. We just said we can think of the function f as a list of all its values, and this integral is just the sum over all the components of f and g.

For example, if Σ = [a, b], we can take

    (f, g) = ∫_a^b f(x)* g(x) dx.

If Σ is the unit disk in R², in polar coordinates we can take

    (f, g) = ∫₀¹ ∫₀^{2π} f(r, θ)* g(r, θ) dθ r dr.

Note that we were careful and said that dμ is "some measure". Here we are integrating against dθ r dr. We will later see cases where this can be even more complicated.

If Σ has a boundary, we will often want to restrict our functions to take particular values on the boundary, known as boundary conditions. Often, we want the boundary conditions to preserve linearity. We call these nice boundary conditions homogeneous conditions.

Definition (Homogeneous boundary conditions). A boundary condition is homogeneous if whenever f and g satisfy the boundary conditions, then so does λf + μg for any λ, μ ∈ C (or R).

Example. Let Σ = [a, b]. We could require that f(a) + 7f′(b) = 0, or maybe f(a) + 3f″(a) = 0. These are examples of homogeneous boundary conditions. On the other hand, the requirement f(a) = 1 is not homogeneous.


2 Fourier series

2.1 Fourier series

The first type of functions we will consider is periodic functions.

Definition (Periodic function). A function f is periodic if there is some fixed R such that f(x + R) = f(x) for all x.

However, it is often much more convenient to think of such a function as a function f: S¹ → C from the unit circle to C, and parametrize our function by an angle θ ranging from 0 to 2π.

Now why do we care about periodic functions? Apart from the fact that

genuine periodic functions exist, we can also use them to model functions on a

compact domain. For example, if we have a function defined on [0, 1], we can

pretend it is a periodic function on R by making infinitely many copies of the

function to the intervals [1, 2], [2, 3] etc.


As mentioned in the previous chapter, we want to find a set of "basis functions" for periodic functions. We could go for the simplest periodic functions we know of — the exponential functions e^{inθ}. These have period 2π, and are rather easy to work with: we all know how to integrate and differentiate the exponential function.

More importantly, this set of basis functions is orthogonal. We have

    (e^{imθ}, e^{inθ}) = ∫_{−π}^{π} e^{−imθ} e^{inθ} dθ = ∫_{−π}^{π} e^{i(n−m)θ} dθ = 2π δ_{nm}.

We can normalize these to get a set of orthonormal functions {e^{inθ}/√(2π) : n ∈ Z}.

Fourier’s idea was to use this as a basis for any periodic function. Fourier

claimed that any f : S 1 ! C can be expanded in this basis:

X

f (✓) = fˆn ein✓ ,

n2Z

where Z ⇡

1 in✓ 1

fˆn = (e , f ) = e in✓ f (✓) d✓.

2⇡ 2⇡ ⇡

P ˆ ein✓ ⇣ in✓ ⌘

These really should be defined as f (✓) = fn p2⇡ with fˆn = p e

2⇡

, f , but for

convenience reasons, we move all the constant factors to the fˆn coefficients.
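As a concrete sketch of these formulas (Python/NumPy; the test function and truncation order are arbitrary choices of mine), we can approximate f̂_n by a Riemann sum and check that the partial series reproduces a smooth periodic f:

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
f = np.exp(np.cos(theta))           # a smooth 2*pi-periodic test function

def fhat(n):
    # f_hat(n) = (1/2pi) * integral of exp(-i n theta) f(theta) dtheta,
    # approximated by a Riemann sum on the uniform grid.
    return np.mean(np.exp(-1j * n * theta) * f)

# Partial sum of the Fourier series with |n| <= 10.
S = sum(fhat(n) * np.exp(1j * n * theta) for n in range(-10, 11))
print(np.max(np.abs(S - f)))        # tiny: the series converges rapidly here
```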

We can consider the special case where f: S¹ → R is a real function. We might want to make our expansion look a bit more "real". Note that

    (f̂_n)* = ((1/2π) ∫_{−π}^{π} e^{−inθ} f(θ) dθ)* = (1/2π) ∫_{−π}^{π} e^{inθ} f(θ) dθ = f̂_{−n}.


So for a real function we can write

    f(θ) = f̂₀ + ∑_{n=1}^{∞} (f̂_n e^{inθ} + f̂_{−n} e^{−inθ}) = f̂₀ + ∑_{n=1}^{∞} (f̂_n e^{inθ} + f̂_n* e^{−inθ}).

Setting f̂_n = (a_n − ib_n)/2, we can write this as

    f(θ) = f̂₀ + ∑_{n=1}^{∞} (a_n cos nθ + b_n sin nθ)
         = a₀/2 + ∑_{n=1}^{∞} (a_n cos nθ + b_n sin nθ),

where

    a_n = (1/π) ∫_{−π}^{π} cos nθ f(θ) dθ,  b_n = (1/π) ∫_{−π}^{π} sin nθ f(θ) dθ.

This is an alternative formulation of the Fourier series in terms of sin and cos.

So when given a real function, which expansion should we use? It depends. If our function is odd (or even), it is useful to pick the sine (or cosine) expansion, since the other set of terms simply disappears. On the other hand, if we want to stick our function into a differential equation, exponential functions are usually more helpful.

When Fourier first proposed the idea of a Fourier series, people didn't really believe him. How can we be sure that the infinite series actually converges? It turns out that in many cases, it doesn't.

To investigate the convergence of the series, we define the partial Fourier sum as

    S_n f = ∑_{m=−n}^{n} f̂_m e^{imθ}.

The question, then, is whether S_n f converges to f as n → ∞. Here we have to be careful with what we mean by convergence. As we (might) have seen in Analysis, there are many ways of defining convergence of functions. If we have a "norm" on the space of functions, we can define convergence to mean lim_{n→∞} ‖S_n f − f‖ = 0. Our "norm" can be defined by

    ‖S_n f − f‖² = (1/2π) ∫_{−π}^{π} |S_n f(θ) − f(θ)|² dθ.

Technically, this does not necessarily define a norm. Indeed, if lim_{n→∞} S_n f differs from f on finitely or countably many points, the integral will still be zero. In particular, lim S_n f and f can differ at all rational points, but this definition will still say that S_n f converges to f.


Alternatively, we can require that

    lim_{n→∞} S_n f(θ) = f(θ)

for all θ. This is known as pointwise convergence. However, this is often too weak a notion. We can ask for more, and require that the rate of convergence is independent of θ. This is known as uniform convergence, and is defined by

    lim_{n→∞} sup_θ |S_n f(θ) − f(θ)| = 0.

Which definition we adopt matters to whether the series converges. Unfortunately, even if we manage to get our definition of convergence right, it is still difficult to come up with a criterion to decide if a function has a convergent Fourier series. As of today, we still don't really understand how the convergence of Fourier series behaves. Instead of trying to come up with something general (since this is an applied course), let's look at an example instead.

Example. Consider the sawtooth function f(θ) = θ for −π ≤ θ < π, extended 2π-periodically.

The Fourier coefficients (for n ≠ 0) are

    f̂_n = (1/2π) ∫_{−π}^{π} e^{−inθ} θ dθ
        = [−e^{−inθ} θ/(2πin)]_{−π}^{π} + (1/2πin) ∫_{−π}^{π} e^{−inθ} dθ
        = (−1)^{n+1}/(in).

Hence we have

    θ = ∑_{n≠0} ((−1)^{n+1}/(in)) e^{inθ}.

It turns out that this series converges to the sawtooth for all θ ≠ (2m + 1)π, ie. everywhere that the sawtooth is continuous.

Let’s look explicitly at the case where ✓ = ⇡. Each term of the partial Fourier

series is zero. So we can say that the Fourier series converges to 0. This is the

average value of lim f (⇡ ± ").

"!0

This is typical. At an isolated discontinuity, the Fourier series is the average

of the limiting values of the original function as we approach from either side.
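Both behaviours are easy to observe numerically. A small sketch (Python/NumPy; the truncation orders N are arbitrary) evaluating the partial sums at a point of continuity and at the discontinuity θ = π:

```python
import numpy as np

def S(theta, N):
    # Partial sum of  sum_{n != 0} (-1)^(n+1) / (i n) * exp(i n theta).
    n = np.concatenate([np.arange(-N, 0), np.arange(1, N + 1)])
    return np.real(np.sum((-1.0) ** (n + 1) / (1j * n) * np.exp(1j * n * theta)))

for N in (10, 100, 1000):
    print(N, S(1.0, N), S(np.pi, N))
# S(1.0, N) -> 1.0 (a point of continuity), while S(pi, N) stays at 0,
# the average of the left limit pi and the right limit -pi.
```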

Integration is a smoothing operator. If we take, say, the step function

    Θ(x) = { 1   x > 0
           { 0   x < 0,


then its integral

    ∫ Θ(x) dx = { x   x > 0
               { 0   x < 0

is now a continuous function. If we integrate it again, we make the positive side look like a quadratic curve, and it is now differentiable.

[diagram: the step function Θ(x) and its integral, the ramp xΘ(x)]

Conversely, when we differentiate the continuous function ∫Θ(x) dx, we obtain the discontinuous function Θ(x). If we attempt to differentiate this again, we get the δ-(non)-function.

Hence, it is often helpful to characterize the "smoothness" of a function by how many times we can differentiate it. It turns out this is rather relevant to the behaviour of the Fourier series.

Suppose that we have a function that is itself continuous and whose first m − 1 derivatives are also continuous, but f⁽ᵐ⁾ has isolated discontinuities at {θ₁, θ₂, θ₃, ···, θ_r}.

We can look at the kth Fourier coefficient:

    f̂_k = (1/2π) ∫_{−π}^{π} e^{−ikθ} f(θ) dθ
        = [−e^{−ikθ} f(θ)/(2πik)]_{−π}^{π} + (1/2πik) ∫_{−π}^{π} e^{−ikθ} f′(θ) dθ.

The first term vanishes since f(θ) is continuous and takes the same value at −π and π. So we are left with

    f̂_k = (1/2πik) ∫_{−π}^{π} e^{−ikθ} f′(θ) dθ
        = ···
        = (1/(ik)ᵐ) (1/2π) ∫_{−π}^{π} e^{−ikθ} f⁽ᵐ⁾(θ) dθ.

Since f⁽ᵐ⁾ is everywhere finite, we can evaluate this by removing small strips (θᵢ − ε, θᵢ + ε) from our domain of integration, and taking the limit ε → 0. We can write this as

    f̂_k = lim_{ε→0} (1/(ik)ᵐ) (1/2π) [∫_{−π}^{θ₁−ε} + ∫_{θ₁+ε}^{θ₂−ε} + ··· + ∫_{θ_r+ε}^{π}] e^{−ikθ} f⁽ᵐ⁾(θ) dθ

        = (1/(ik)^{m+1}) [(1/2π) ∑_{s=1}^{r} (f⁽ᵐ⁾(θ_s⁺) − f⁽ᵐ⁾(θ_s⁻)) + (1/2π) ∫_{(−π,π)∖{θ_s}} e^{−ikθ} f⁽ᵐ⁺¹⁾(θ) dθ].


We now have to stop, since f⁽ᵐ⁺¹⁾ contains genuine δ-function singularities. So f̂_k decays like (1/k)^{m+1} if our function and its first m − 1 derivatives are continuous. This means that the more differentiable a function is, the more quickly its coefficients decay.

This makes sense. If we have a rather smooth function, then we would expect the first few Fourier terms (with low frequency) to account for most of the variation of f. Hence the coefficients decay really quickly.

However, if the function is jiggly and bumps around all the time, we would expect to need some higher frequency terms to account for the minute variation. Hence the terms would not decay away that quickly. So in general, if we can differentiate a function more times, then its terms should decay quicker.

An important result, which is in some sense a linear algebra result, is Parseval's theorem.

Theorem (Parseval's theorem).

    (f, f) = ∫_{−π}^{π} |f(θ)|² dθ = 2π ∑_{n∈Z} |f̂_n|².

Proof.

    (f, f) = ∫_{−π}^{π} |f(θ)|² dθ
           = ∫_{−π}^{π} (∑_{m∈Z} f̂_m* e^{−imθ}) (∑_{n∈Z} f̂_n e^{inθ}) dθ
           = ∑_{m,n∈Z} f̂_m* f̂_n ∫_{−π}^{π} e^{i(n−m)θ} dθ
           = 2π ∑_{m,n∈Z} f̂_m* f̂_n δ_{mn}
           = 2π ∑_{n∈Z} |f̂_n|².

Note that this is not a fully rigorous proof, since we assumed not only that

the Fourier series converges to the function, but also that we could commute the

infinite sums. However, for the purposes of an applied course, this is sufficient.

Last time, we computed that the sawtooth function f(θ) = θ has Fourier coefficients

    f̂₀ = 0,  f̂_n = i(−1)ⁿ/n  for n ≠ 0.

But why do we care? It turns out this has some applications in number theory. You might have heard of the Riemann ζ-function, defined by

    ζ(s) = ∑_{n=1}^{∞} 1/nˢ.

We will show that this obeys the property that for any m, ζ(2m) = π^{2m} q for some q ∈ Q. This may not be obvious at first sight. So let's apply Parseval's theorem to the sawtooth. We know that

    (f, f) = ∫_{−π}^{π} θ² dθ = 2π³/3.

However, by Parseval’s theorem, we know that

X X1

1

(f, f ) = 2⇡ |fˆn |2 = 4⇡ 2

.

n=1

n

n2Z

X1

1 ⇡2

= ⇣(2) = .

n=1

n2 6

We have just done the case m = 1. But if we integrate the sawtooth function repeatedly, then we can get the general result for all m.
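A quick numerical check of ζ(2) = π²/6 (Python; purely illustrative):

```python
import math

# Partial sum of zeta(2) against the closed form pi^2 / 6.
partial = sum(1.0 / n**2 for n in range(1, 1_000_000))
print(partial, math.pi**2 / 6)   # 1.6449330..., 1.6449340...
```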

At this point, we might ask: why are we choosing e^{imθ} as our basis? Surely there are a lot of different bases we could use. For example, in finite dimensions, we can just arbitrarily choose random vectors (that are not linearly dependent) to get a basis. However, in practice, we don't pick them randomly. We often choose a basis that reveals the symmetry of a system. For example, if we have a rotationally symmetric system, we would like to use polar coordinates. Similarly, for periodic functions, e^{imθ} is often a good choice of basis.


3 Sturm-Liouville Theory

3.1 Sturm-Liouville operators

In finite dimensions, we often consider linear maps M: V → W. If {vᵢ} is a basis for V and {w_a} is a basis for W, then we can represent the map by a matrix with entries

    M_{ai} = (w_a, M vᵢ).

A map M: V → V is called self-adjoint if M† = M as matrices. However, it is not obvious how we can extend this notion to arbitrary maps between arbitrary vector spaces (with an inner product) when they cannot be represented by a matrix.

Instead, we make the following definitions:

Definition (Adjoint and self-adjoint). The adjoint B of a map A: V → V is a map such that

    (Bu, v) = (u, Av)

for all vectors u, v ∈ V. A map M is then self-adjoint if

    (Mu, v) = (u, Mv).

Self-adjoint matrices come with a natural basis. Recall that the eigenvalues of a matrix are the roots of det(M − λI) = 0. The eigenvector corresponding to an eigenvalue λᵢ is defined by M vᵢ = λᵢvᵢ.

In general, eigenvalues can be any complex number. However, self-adjoint maps have real eigenvalues. Suppose

    M vᵢ = λᵢvᵢ.

Then we have

    λᵢ(vᵢ, vᵢ) = (vᵢ, M vᵢ) = (M vᵢ, vᵢ) = λᵢ*(vᵢ, vᵢ).

So λᵢ = λᵢ*.

Furthermore, eigenvectors with distinct eigenvalues are orthogonal with respect to the inner product. Suppose that

    M vᵢ = λᵢvᵢ,  M vⱼ = λⱼvⱼ.

Then

    λᵢ(vⱼ, vᵢ) = (vⱼ, M vᵢ) = (M vⱼ, vᵢ) = λⱼ(vⱼ, vᵢ).

Since λᵢ ≠ λⱼ, we must have (vⱼ, vᵢ) = 0.

Knowing eigenvalues and eigenvectors gives a neat way to solve linear equations of the form

    Mu = f.

Here we are given M and f, and want to find u. Of course, the answer is u = M⁻¹f. However, if we expand in terms of eigenvectors, we obtain

    Mu = M ∑ uᵢvᵢ = ∑ uᵢλᵢvᵢ.

Hence we have

    ∑ uᵢλᵢvᵢ = ∑ fᵢvᵢ.

Taking the inner product with vⱼ, we know that

    uⱼ = fⱼ/λⱼ.

So far, these are all things from IA Vectors and Matrices. Sturm-Liouville theory is the infinite-dimensional analogue.

In our vector space of differentiable functions, our "matrices" would be linear differential operators L. For example, we could have

    L = A_p(x) dᵖ/dxᵖ + A_{p−1}(x) dᵖ⁻¹/dxᵖ⁻¹ + ··· + A₁(x) d/dx + A₀(x).

It is an easy check that this is in fact linear.

We say L has order p if the highest derivative that appears is dᵖ/dxᵖ.

When will such an L be self-adjoint? In the p = 2 case, we have

    Ly = P d²y/dx² + R dy/dx − Qy
       = P [d²y/dx² + (R/P) dy/dx − (Q/P) y]
       = P e^{−∫(R/P)dx} [d/dx (e^{∫(R/P)dx} dy/dx) − (Q/P) e^{∫(R/P)dx} y].

Let p = exp(∫(R/P) dx). Then we can write this as

    Ly = (P/p) [d/dx (p dy/dx) − (Q/P) p y].

We further define q = (Q/P) p. We also drop the overall factor of P/p. Then we are left with

    L = d/dx (p(x) d/dx) − q(x).

This is the Sturm-Liouville form of the operator. Now let's compute (f, Lg). We

integrate by parts numerous times to obtain

    (f, Lg) = ∫_a^b f* (d/dx (p dg/dx) − qg) dx
            = [f* p g′]_a^b − ∫_a^b (p (df*/dx)(dg/dx) + f* q g) dx
            = [(f* g′ − f′* g) p]_a^b + ∫_a^b (d/dx (p df*/dx) − q f*) g dx
            = [(f* g′ − f′* g) p]_a^b + (Lf, g),

assuming p and q are real.


So second order linear differential operators are self-adjoint with respect to this inner product if p, q are real and the boundary terms vanish. When do the boundary terms vanish? One possibility is when p is periodic (with the right period), or if we constrain f and g to be periodic.

Example. We can consider a simple case, where

    L = d²/dx².

Here we have p = 1, q = 0. If we ask for functions to be periodic on [a, b], then

    ∫_a^b f* (d²g/dx²) dx = ∫_a^b (d²f*/dx²) g dx.

Note that if L is first order instead, then we would pick up a negative sign, since we integrate by parts only once.

Just as in finite dimensions, self-adjoint operators have eigenfunctions and eigenvalues with special properties. First, we define a more sophisticated inner product.

Definition (Inner product with weight). An inner product with weight w, written (·,·)_w, is defined by

    (f, g)_w = ∫_a^b f*(x) g(x) w(x) dx,

where w is real, positive, and has only finitely many zeroes.

Why do we want a weight w(x)? In the future, we might want to work with the unit disk, instead of a square in R². When we want to use polar coordinates, we will have to integrate with r dr dθ, instead of just dr dθ. Hence we need a weight of r. Also, we allow it to have finitely many zeroes, so that the radius can be 0 at the origin.

Why can't we have more zeroes? We want the inner product to keep the property that (f, f)_w = 0 iff f = 0 (for continuous f). If w is zero at too many places, then the inner product could be zero without f being zero.

We now define what it means to be an eigenfunction.

Definition (Eigenfunction with weight). An eigenfunction with weight w of L is a function y: [a, b] → C obeying the differential equation

    Ly = λwy,

where λ ∈ C is the eigenvalue.

This might be strange at first sight. It seems like we can take any nonsense y, apply L to get some nonsense Ly, and then write that as some nonsense w times our original y — so any function is an eigenfunction? No! There are many constraints w has to satisfy, like being positive, real and having finitely many zeroes. It turns out this severely restricts what is possible, so not everything will be an eigenfunction. In fact we can develop this theory without the weight function w. However, weight functions are much more convenient when, say, dealing with the unit disk.


Proposition. The eigenvalues of a self-adjoint operator are real.

Proof. Suppose Lyᵢ = λᵢwyᵢ. Then

    λᵢ(yᵢ, yᵢ)_w = (yᵢ, λᵢwyᵢ) = (yᵢ, Lyᵢ) = (Lyᵢ, yᵢ) = (λᵢwyᵢ, yᵢ) = λᵢ*(yᵢ, yᵢ)_w.

Since (yᵢ, yᵢ)_w ≠ 0, we have λᵢ* = λᵢ.

Note that the first and last terms use the weighted inner product, but the middle terms use the unweighted inner product.

Proposition. Eigenfunctions with different eigenvalues (but the same weight) are orthogonal.

Proof. Let Lyᵢ = λᵢwyᵢ and Lyⱼ = λⱼwyⱼ. Then

    λᵢ(yⱼ, yᵢ)_w = (yⱼ, Lyᵢ) = (Lyⱼ, yᵢ) = λⱼ(yⱼ, yᵢ)_w.

Since λᵢ ≠ λⱼ, we must have (yⱼ, yᵢ)_w = 0.

Proposition. On a compact domain, the eigenvalues λ₁, λ₂, ··· form a countably infinite sequence and are discrete.

We will not prove this.

This will be a rather helpful result in quantum mechanics, since in quantum

mechanics, the possible values of, say, the energy are the eigenvalues of the

Hamiltonian operator. Then this result says that the possible values of the

energy are discrete and form an infinite sequence.

Note here the word compact. In quantum mechanics, if we restrict a particle in a well [0, 1], then it will have quantized energy levels, since the domain is compact. However, if the particle is free, then it can have any energy at all, since we no longer have a compact domain. Similarly, angular momentum is quantized, since it describes rotations, which take values in S¹, which is compact.

Proposition. The eigenfunctions are complete: any function f: [a, b] → C (obeying appropriate boundary conditions) can be expanded as

    f(x) = ∑_n f̂_n y_n(x),

where

    f̂_n = ∫ y_n*(x) f(x) w(x) dx.

Example. Let [a, b] = [−L, L], L = d²/dx², w = 1, restricting to periodic functions. Then our eigenfunctions obey

    d²y_n/dx² = λ_n y_n(x),

so they are

    y_n(x) = exp(inπx/L)

with eigenvalues

    λ_n = −n²π²/L²

for n ∈ Z. This is just the Fourier series!


Example (Hermite polynomials). We are going to cheat a little bit and pick our domain to be R. We want to study the differential equation

    ½H″ − xH′ = λH,

with H: R → C. We want to put this in Sturm-Liouville form. We have

    p(x) = exp(∫₀ˣ (−2t) dt) = e^{−x²},

so the Sturm-Liouville form is

    d/dx (e^{−x²} dH/dx) = 2λ e^{−x²} H(x).

So we take our weight function to be w(x) = e^{−x²}.

We now ask that H(x) grows at most polynomially as |x| → ∞. In particular, we want e^{−x²}H(x)² → 0. This ensures that the boundary terms from integration by parts vanish at the infinite boundary, so that our Sturm-Liouville operator is self-adjoint.

The eigenfunctions turn out to be

    H_n(x) = (−1)ⁿ e^{x²} dⁿ/dxⁿ (e^{−x²}).

These are known as the Hermite polynomials. Note that these are indeed polynomials. When we differentiate the e^{−x²} term many times, we get a lot of things from the product rule, but they will always keep a factor of e^{−x²}, which ultimately cancels with e^{x²}.
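We can verify the weighted orthogonality of these polynomials numerically. A sketch in Python/NumPy — `numpy.polynomial.hermite` implements exactly these physicists' Hermite polynomials, and `hermgauss` gives quadrature nodes and weights for integrals against the weight e^{−x²}:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

x, wts = hermgauss(50)   # nodes/weights for integrals against exp(-x^2)

def H(n, xs):
    # Physicists' Hermite polynomial H_n, via the coefficient vector e_n.
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(xs, c)

# Gram matrix (H_m, H_n)_w = integral H_m(x) H_n(x) exp(-x^2) dx.
gram = np.array([[np.sum(wts * H(m, x) * H(n, x)) for n in range(4)]
                 for m in range(4)])
print(np.round(gram, 6))   # diagonal: sqrt(pi) 2^n n!; off-diagonal: ~0
```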

Just as for matrices, we can use the eigenfunction expansion to solve forced differential equations. For example, we might want to solve

    Lg = f(x).

It is convenient to write the forcing term as f(x) = w(x)F(x), so that we solve

    Lg = w(x)F(x).

We expand our g as

    g(x) = ∑_{n∈Z} ĝ_n y_n(x).

Then by linearity,

    Lg = ∑_{n∈Z} ĝ_n Ly_n(x) = ∑_{n∈Z} ĝ_n λ_n w(x) y_n(x).

We can also expand the forcing term:

    w(x)F(x) = w(x) (∑_{n∈Z} F̂_n y_n(x)).


Taking the (regular) inner product with y_m(x) (and noting orthogonality of eigenfunctions), we obtain

    w(x) ĝ_m λ_m = w(x) F̂_m,

so

    ĝ_m = F̂_m/λ_m.

So we have

    g(x) = ∑_{n∈Z} (F̂_n/λ_n) y_n(x).

This is a systematic way of solving forced differential equations. We used to solve these by "being smart": we just looked at the forcing term and tried to guess what would work. Unsurprisingly, this approach does not succeed all the time. Thus it is helpful to have a systematic way of solving the equations.

It is often helpful to rewrite this in another form, using the fact that F̂_n = (y_n, F)_w. So we have

    g(x) = ∑_{n∈Z} (1/λ_n) (y_n, F)_w y_n(x) = ∫_a^b ∑_{n∈Z} (1/λ_n) y_n*(t) y_n(x) w(t) F(t) dt.

Note that we swapped the sum and the integral, which is in general a dangerous thing to do, but we don't really care because this is an applied course. We can further write the above as

    g(x) = ∫_a^b G(x, t) F(t) w(t) dt,

where

    G(x, t) = ∑_{n∈Z} (1/λ_n) y_n*(t) y_n(x).

We call this the Green’s function. Note that this depends on n and yn only.

It depends on the di↵erential operator L, but not the forcing term F . We can

think of this as something like the “inverse matrix”, which we can use to solve

the forced di↵erential equation for any forcing term.

Recall that for a matrix, the inverse exists if the determinant is non-zero,

which is true if the eigenvalues are all non-zero. Similarly, here a necessary

condition for the Green’s function to exist is that all the eigenvalues are non-zero.

We now have a second version of Parseval's theorem: if the y_n are orthonormal with respect to (·,·)_w, then

    (f, f)_w = ∑_{n∈Z} |f̂_n|².


Proof. We have

    (f, f)_w = ∫_Ω f*(x) f(x) w(x) dx
             = ∑_{n,m∈Z} ∫_Ω f̂_n* y_n*(x) f̂_m y_m(x) w(x) dx
             = ∑_{n,m∈Z} f̂_n* f̂_m (y_n, y_m)_w
             = ∑_{n∈Z} |f̂_n|².

3.2 Least squares approximation

So far, we have expanded functions in terms of infinite series. However, in real life, when we ask a computer to do this for us, it is incapable of storing and calculating an infinite series. So we need to truncate it at some point.

Suppose we approximate some function f: Ω → C by a finite set of eigenfunctions y_n(x). Suppose we write the approximation g as

    g(x) = ∑_{k=1}^{n} c_k y_k(x).

The objective here is to figure out what values of the coefficients c_k are "the best", ie. make g represent f as closely as possible.

Firstly, we have to make explicit what we mean by "as closely as possible". Here we take this to mean that we want to minimize the w-norm (f − g, f − g)_w. By linearity of the inner product, we know that

    (f − g, f − g)_w = (f, f)_w + (g, g)_w − (f, g)_w − (g, f)_w.

To minimize this norm, we want

    0 = ∂/∂c_j (f − g, f − g)_w = ∂/∂c_j [(f, f)_w + (g, g)_w − (f, g)_w − (g, f)_w].

The (f, f)_w term vanishes since it does not depend on c_k. Expanding our definition of g (and using orthonormality of the y_k), we get

    0 = ∂/∂c_j [∑_{k=1}^{n} |c_k|² − ∑_{k=1}^{n} f̂_k* c_k − ∑_{k=1}^{n} c_k* f̂_k] = c_j* − f̂_j*.

Note that here we are treating c_j* and c_j as distinct quantities. So when we vary c_j, c_j* is unchanged. To formally justify this treatment, we can vary the real and imaginary parts separately.

Hence, the extremum is achieved at c_j* = f̂_j*. Similarly, varying with respect to c_j*, we get that c_j = f̂_j.

To check that this is indeed a minimum, we can look at the second derivatives:

    ∂²/∂cᵢ∂c_j (f − g, f − g)_w = ∂²/∂cᵢ*∂c_j* (f − g, f − g)_w = 0,

while

    ∂²/∂cᵢ*∂c_j (f − g, f − g)_w = δᵢⱼ ≥ 0.

Hence this is indeed a minimum.

Thus we know that (f − g, f − g)_w is minimized over all g(x) when

    c_k = f̂_k = (y_k, f)_w.

These are exactly the coefficients in our infinite expansion. Hence if we truncate our series at an arbitrary point, it is still the best approximation we can get.

4 Partial differential equations

So far, we have come up with a lot of general theory about functions and stuff. We are now going to look at some applications. In this section, we will develop the general technique of separation of variables, and apply it to many, many problems.

4.1 Laplace's equation

Definition (Laplace’s equation). Laplace’s equation on Rn says that a (twice-

di↵erentiable) equation : Rn ! C obeys

r2 = 0,

where

Xn

@2

r2 = .

i=1

@x2i

Laplace's equation arises in many physical situations. For example, if we have a conservative force F = −∇φ, and we are in a region where the force obeys the source-free Gauss' law ∇·F = 0 (eg. when there is no electric charge in an electromagnetic field), then we get Laplace's equation.

It also arises as the special case of the heat equation ∂φ/∂t = κ∇²φ and the wave equation ∂²φ/∂t² = c²∇²φ when there is no time dependence.

Definition (Harmonic functions). Functions that obey Laplace’s equation are

often called harmonic functions.

Proposition. Suppose Ω is a compact domain with boundary ∂Ω, and we are given f: ∂Ω → R. Then there is a unique solution to ∇²φ = 0 on Ω with φ|_∂Ω = f.

Proof. Suppose φ₁ and φ₂ are both solutions such that φ₁|_∂Ω = φ₂|_∂Ω = f. Then let ψ = φ₁ − φ₂. So ψ = 0 on the boundary, and we have

    0 = ∫_Ω ψ∇²ψ dⁿx = −∫_Ω (∇ψ)·(∇ψ) dⁿx + ∫_∂Ω ψ n·∇ψ dⁿ⁻¹x.

We note that the second term vanishes since ψ = 0 on the boundary. So we have

    0 = ∫_Ω (∇ψ)·(∇ψ) dⁿx.

Hence ∇ψ = 0, so ψ is constant throughout Ω. Since ψ = 0 at the boundary, we have ψ = 0 throughout, ie. φ₁ = φ₂.

Asking that φ|_∂Ω = f(x) is a Dirichlet boundary condition. We can instead ask n·∇φ|_∂Ω = g, a Neumann condition. In this case, we have a unique solution up to an additive constant.

We have seen all of this in IA Vector Calculus, but this time in arbitrary dimensions.


4.2 Laplace's equation in the unit disk in R²

Let Ω = {(x, y) ∈ R² : x² + y² ≤ 1}. Then Laplace's equation becomes

    0 = ∇²φ = ∂²φ/∂x² + ∂²φ/∂y² = 4 ∂²φ/∂z∂z̄,

where we define z = x + iy, z̄ = x − iy. Then the general solution is

    φ(z, z̄) = ψ(z) + χ(z̄)

for some functions ψ, χ.

Suppose we want a solution obeying the Dirichlet boundary condition φ(z, z̄)|_∂Ω = f(θ), where the boundary ∂Ω is the unit circle. Since the domain is the unit circle S¹, we can Fourier-expand it! We can write

    f(θ) = ∑_{n∈Z} f̂_n e^{inθ} = f̂₀ + ∑_{n=1}^{∞} f̂_n e^{inθ} + ∑_{n=1}^{∞} f̂_{−n} e^{−inθ}.

On the boundary, we know that z = e^{iθ} and z̄ = e^{−iθ}. So we can write

    f(θ) = f̂₀ + ∑_{n=1}^{∞} (f̂_n zⁿ + f̂_{−n} z̄ⁿ).

This is defined for |z| = 1. Now we can define φ on the unit disk by

    φ(z, z̄) = f̂₀ + ∑_{n=1}^{∞} f̂_n zⁿ + ∑_{n=1}^{∞} f̂_{−n} z̄ⁿ.

It is clear that this obeys the boundary conditions by construction. Also, φ(z, z̄) certainly converges when |z| < 1 if the series for f(θ) converged on ∂Ω. Also, φ(z, z̄) is of the form ψ(z) + χ(z̄). So it is a solution to Laplace's equation on the unit disk. Since the solution is unique, we're done!

Unfortunately, the use of complex variables is very special to the case where Ω ⊆ R². In higher dimensions, we proceed differently.

4.3 Separation of variables

We let Ω = {(x, y, z) ∈ R³ : 0 ≤ x ≤ a, 0 ≤ y ≤ b, z ≥ 0}. We want the following boundary conditions:

    φ(0, y, z) = φ(a, y, z) = 0,
    φ(x, 0, z) = φ(x, b, z) = 0,
    lim_{z→∞} φ(x, y, z) = 0,
    φ(x, y, 0) = f(x, y).

In other words, the solution should agree with f at z = 0 and vanish at the other boundaries.

The first step is to look for a solution of ∇²φ(x, y, z) = 0 of the form

    φ(x, y, z) = X(x)Y(y)Z(z).


Then we have

    0 = ∇²φ = YZX″ + XZY″ + XYZ″ = XYZ (X″/X + Y″/Y + Z″/Z).

Dividing by XYZ, we obtain

    X″/X + Y″/Y + Z″/Z = 0.

The key point is that each term depends on only one of the variables (x, y, z). If we vary, say, x, then Y″/Y + Z″/Z does not change. For the total sum to be 0, X″/X must be constant. So each term has to be separately constant. We can thus write

    X″ = −λX,  Y″ = −μY,  Z″ = (λ + μ)Z.

The signs before λ and μ are there just to make our life easier later on.

We can solve these to obtain

    X = a sin(√λ x) + b cos(√λ x),
    Y = c sin(√μ y) + d cos(√μ y),
    Z = g exp(√(λ + μ) z) + h exp(−√(λ + μ) z).

We now impose the homogeneous boundary conditions, ie. the conditions that φ vanishes at the walls and at infinity.

– At x = 0, we need φ(0, y, z) = 0. So b = 0.
– At x = a, we need φ(a, y, z) = 0. So λ = (nπ/a)².
– At y = 0, we need φ(x, 0, z) = 0. So d = 0.
– At y = b, we need φ(x, b, z) = 0. So μ = (mπ/b)².
– As z → ∞, φ(x, y, z) → 0. So g = 0.

Here we have n, m ∈ N.

So we have found a solution

    φ(x, y, z) = A_{n,m} sin(nπx/a) sin(mπy/b) exp(−s_{n,m} z),

where A_{n,m} is an arbitrary constant, and

    s²_{n,m} = (n²/a² + m²/b²) π².

This obeys the homogeneous boundary conditions for any n, m ∈ N, but not the inhomogeneous condition at z = 0.

By linearity, the general solution obeying the homogeneous boundary conditions is

    φ(x, y, z) = ∑_{n,m=1}^{∞} A_{n,m} sin(nπx/a) sin(mπy/b) exp(−s_{n,m} z).


It remains to impose the inhomogeneous condition φ(x, y, 0) = f(x, y). In other words, we need

    ∑_{n,m=1}^{∞} A_{n,m} sin(nπx/a) sin(mπy/b) = f(x, y).  (†)

We can use the orthogonality relation we've previously had:

    ∫₀ᵃ sin(kπx/a) sin(nπx/a) dx = (a/2) δ_{k,n}.

So we multiply (†) by sin(kπx/a) and integrate over x:

    ∑_{n,m=1}^{∞} A_{n,m} ∫₀ᵃ sin(kπx/a) sin(nπx/a) dx sin(mπy/b) = ∫₀ᵃ sin(kπx/a) f(x, y) dx,

which by orthogonality becomes

    (a/2) ∑_{m=1}^{∞} A_{k,m} sin(mπy/b) = ∫₀ᵃ sin(kπx/a) f(x, y) dx.

Doing the same with sin(jπy/b) and integrating over y, we obtain

    A_{k,j} = (4/ab) ∫_{[0,a]×[0,b]} sin(kπx/a) sin(jπy/b) f(x, y) dx dy.  (∗)

So the solution we want is

    φ(x, y, z) = ∑_{n,m=1}^{∞} A_{n,m} sin(nπx/a) sin(mπy/b) exp(−s_{n,m} z),

where A_{n,m} is given by (∗). Since we have shown that there is a unique solution to Laplace's equation obeying Dirichlet boundary conditions, we're done.

Note that if we imposed a boundary condition at finite z, say 0 ≤ z ≤ c, then both sets of exponentials exp(±√(λ + μ) z) would have contributed. Similarly, if φ does not vanish at the other boundaries, then the cos terms would also contribute.

In general, to actually find our φ, we have to do the horrible integral for A_{m,n}, and this is not always easy (since integrals are hard). We will just look at a simple example.

Example. Suppose f(x, y) = 1. Then

    A_{m,n} = (4/ab) ∫₀ᵃ sin(nπx/a) dx ∫₀ᵇ sin(mπy/b) dy = { 16/(π²mn)   n, m both odd
                                                           { 0            otherwise.

Hence we have

    φ(x, y, z) = (16/π²) ∑_{n,m odd} (1/nm) sin(nπx/a) sin(mπy/b) exp(−s_{m,n} z).
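A sketch of what this series solution looks like numerically (Python/NumPy; the box dimensions and truncation order are arbitrary choices of mine):

```python
import numpy as np

a = b = 1.0
x, y = np.meshgrid(np.linspace(0, a, 101), np.linspace(0, b, 101), indexing="ij")

def phi(z, N=99):
    total = np.zeros_like(x)
    for n in range(1, N + 1, 2):          # odd n
        for m in range(1, N + 1, 2):      # odd m
            s = np.pi * np.sqrt(n**2 / a**2 + m**2 / b**2)
            total += (16 / (np.pi**2 * n * m) * np.sin(n * np.pi * x / a)
                      * np.sin(m * np.pi * y / b) * np.exp(-s * z))
    return total

print(phi(0.0)[50, 50])   # ~1 in the interior at z = 0 (convergence is slow at edges)
print(phi(1.0)[50, 50])   # the solution decays rapidly with z
```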


4.4.1 Laplace’s equation in spherical polar coordinates

We'll often also be interested in taking

    Ω = {(x, y, z) ∈ R³ : √(x² + y² + z²) ≤ a}.

Since our domain is spherical, it makes sense to use a coordinate system with spherical symmetry. We use spherical coordinates (r, θ, φ), where

    x = r sin θ cos φ,  y = r sin θ sin φ,  z = r cos θ.

The Laplacian is

    ∇²ψ = (1/r²) ∂/∂r (r² ∂ψ/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂ψ/∂θ) + (1/(r² sin²θ)) ∂²ψ/∂φ².

Similarly, the volume element is

    dV = dx dy dz = r² sin θ dr dθ dφ.

In this course, we'll only look at axisymmetric solutions, where ψ(r, θ, φ) = ψ(r, θ) does not depend on φ.

We perform separation of variables: we look for solutions of ∇²ψ = 0 inside Ω of the form ψ(r, θ) = R(r)Θ(θ). Laplace's equation then becomes

    (1/r²) [(1/R) d/dr (r²R′) + (1/(Θ sin θ)) d/dθ (sin θ dΘ/dθ)] = 0.

Similarly, since each term depends only on one variable, each must separately be constant. So we have

    d/dr (r² dR/dr) = λR,   d/dθ (sin θ dΘ/dθ) = −λ sin θ Θ.

Note that both of these are eigenfunction equations for some Sturm-Liouville operator, and the Θ(θ) equation has a non-trivial weight function w(θ) = sin θ.

4.4.2 Legendre Polynomials

The angular equation is known as Legendre's equation. It's standard to set x = cos θ (which is not the Cartesian coordinate, but just a new variable). Then we have

    d/dθ = −sin θ d/dx.

So Legendre's equation becomes

    −sin θ d/dx (sin θ (−sin θ) dΘ/dx) + λ sin θ Θ = 0.

Equivalently, in terms of x, we have

    d/dx ((1 − x²) dΘ/dx) = −λΘ.


This operator is in Sturm-Liouville form, with

    p(x) = 1 − x²,  q(x) = 0.

Since p(±1) = 0 at the boundary x = ±1, the Sturm-Liouville operator is self-adjoint provided our function Θ(x) is regular at x = ±1. Hence we look for solutions of Legendre's equation inside (−1, 1) that remain regular up to and including x = ±1. We can try a power series

    Θ(x) = ∑_{n=0}^{∞} a_n xⁿ.

Substituting this in, Legendre's equation becomes

    (1 − x²) ∑_{n=0}^{∞} a_n n(n − 1) x^{n−2} − 2 ∑_{n=0}^{∞} a_n n xⁿ + λ ∑_{n=0}^{∞} a_n xⁿ = 0.

For this to hold for all x ∈ (−1, 1), the equation must hold for each coefficient of x separately. So

    a_{n+2} = a_n (n(n + 1) − λ)/((n + 2)(n + 1)).

This equation relates a_{n+2} to a_n. So we can choose a₀ and a₁ freely, and we get two linearly independent solutions Θ₀(x) and Θ₁(x), which satisfy

    Θ₀(−x) = Θ₀(x),  Θ₁(−x) = −Θ₁(x).

Explicitly,

    Θ₀(x) = a₀ [1 − (λ/2) x² − ((6 − λ)λ/4!) x⁴ + ···],
    Θ₁(x) = a₁ [x + ((2 − λ)/3!) x³ + ((12 − λ)(2 − λ)/5!) x⁵ + ···].

We now consider the boundary conditions. We know that Θ(x) must be regular (ie. finite) at x = ±1. This seems innocent; however, we have

    lim_{n→∞} a_{n+2}/a_n = 1.

This is fine if we are inside (−1, 1), since the power series will still converge. However, at ±1, the ratio test is not conclusive and does not guarantee convergence. In fact, more sophisticated convergence tests show that the infinite series would diverge on the boundary! To avoid this problem, we need to choose λ such


that the series truncates. That is, if we set λ = ℓ(ℓ + 1) for some ℓ ∈ N, then our power series truncates. Since we are then left with a finite polynomial, it of course converges.

In this case, the finiteness boundary condition fixes our possible values of the eigenvalues. This is how quantization occurs in quantum mechanics; in this case, this process is why angular momentum is quantized.

The resulting polynomial solutions P_ℓ(x) are called Legendre polynomials. For example, we have

    P₀(x) = 1
    P₁(x) = x
    P₂(x) = ½(3x² − 1)
    P₃(x) = ½(5x³ − 3x),

where the overall normalization is chosen to fix P_ℓ(1) = 1.

It turns out that

    P_ℓ(x) = (1/(2^ℓ ℓ!)) d^ℓ/dx^ℓ (x² − 1)^ℓ.

The constants in front are just to ensure normalization. We now check that this indeed gives the desired normalization:

    P_ℓ(1) = (1/(2^ℓ ℓ!)) d^ℓ/dx^ℓ [(x − 1)^ℓ (x + 1)^ℓ] |_{x=1}
           = (1/(2^ℓ ℓ!)) [ℓ!(x + 1)^ℓ + (x − 1)(stuff)] |_{x=1}
           = 1.

We proved in general that eigenfunctions with different eigenvalues are orthogonal. Let's see that directly now for this operator. The weight function is w(x) = 1.

We first prove a short lemma: for 0 ≤ k ≤ ℓ, we have

    d^k/dx^k (x² − 1)^ℓ = Q_{ℓ,k}(x) (x² − 1)^{ℓ−k}

for some degree-k polynomial Q_{ℓ,k}(x).

We show this by induction. It is trivially true when k = 0. Assuming the result for k, we have

    d^{k+1}/dx^{k+1} (x² − 1)^ℓ = d/dx [Q_{ℓ,k}(x) (x² − 1)^{ℓ−k}]
    = Q′_{ℓ,k} (x² − 1)^{ℓ−k} + 2(ℓ − k) Q_{ℓ,k} x (x² − 1)^{ℓ−k−1}
    = (x² − 1)^{ℓ−k−1} [Q′_{ℓ,k} (x² − 1) + 2(ℓ − k) Q_{ℓ,k} x].

Since Q_{ℓ,k} has degree k, Q′_{ℓ,k} has degree k − 1. So the bracketed polynomial has degree k + 1.


Now consider, for ℓ ≥ ℓ′,

    (P_ℓ, P_{ℓ′}) = ∫_{−1}^{1} P_ℓ(x) P_{ℓ′}(x) dx
                 = (1/(2^ℓ ℓ!)) ∫_{−1}^{1} (d^ℓ/dx^ℓ (x² − 1)^ℓ) P_{ℓ′}(x) dx
                 = (1/(2^ℓ ℓ!)) [(d^{ℓ−1}/dx^{ℓ−1} (x² − 1)^ℓ) P_{ℓ′}(x)]_{−1}^{1}
                   − (1/(2^ℓ ℓ!)) ∫_{−1}^{1} (d^{ℓ−1}/dx^{ℓ−1} (x² − 1)^ℓ) (dP_{ℓ′}/dx) dx
                 = −(1/(2^ℓ ℓ!)) ∫_{−1}^{1} (d^{ℓ−1}/dx^{ℓ−1} (x² − 1)^ℓ) (dP_{ℓ′}/dx) dx.

The boundary term disappears since, by the lemma, the (ℓ − 1)th derivative of (x² − 1)^ℓ still has a factor of x² − 1. So integration by parts allows us to transfer the derivative from (x² − 1)^ℓ to P_{ℓ′}.

Now if ℓ ≠ ℓ′, we can wlog assume ℓ′ < ℓ. We keep integrating by parts: each boundary term vanishes by the lemma, and after ℓ steps in total, all ℓ derivatives act on P_{ℓ′}. Since ℓ > ℓ′ = deg P_{ℓ′}, this derivative is zero, so (P_ℓ, P_{ℓ′}) = 0.

In fact, we can show that

    (P_ℓ, P_{ℓ′}) = 2δ_{ℓℓ′}/(2ℓ + 1).

Hence the P_ℓ(x) are orthogonal polynomials.
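A quick numerical confirmation of this orthogonality relation (Python/NumPy; `numpy.polynomial.legendre` implements exactly these P_ℓ, and Gauss-Legendre quadrature is exact for polynomials of this degree):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

x, w = leggauss(20)   # Gauss-Legendre nodes/weights for integrals over [-1, 1]

def P(l, xs):
    c = np.zeros(l + 1); c[l] = 1.0
    return legval(xs, c)

for l in range(4):
    for lp in range(4):
        val = np.sum(w * P(l, x) * P(lp, x))          # (P_l, P_lp)
        expected = 2.0 / (2 * l + 1) if l == lp else 0.0
        assert abs(val - expected) < 1e-12
```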

By the fundamental theorem of algebra, we know that P_ℓ(x) has ℓ roots. In fact, they are always real, and lie in (−1, 1).

Suppose not. Suppose only m < ℓ roots lie in (−1, 1). Then let Q_m(x) = ∏_{r=1}^{m} (x − x_r), where {x₁, x₂, ···, x_m} are these m roots. Consider the polynomial P_ℓ(x)Q_m(x). If we factorize this, we get ∏_{r=m+1}^{ℓ} (x − x_r) ∏_{r=1}^{m} (x − x_r)². The first group of factors have roots outside (−1, 1), and hence do not change sign in (−1, 1). The latter factors are always non-negative. Hence for some appropriate sign, we have

    ± ∫_{−1}^{1} P_ℓ(x) Q_m(x) dx > 0.

On the other hand, since Q_m has degree m, we can expand

    Q_m(x) = ∑_{r=0}^{m} q_r P_r(x),

and since every P_r appearing has r ≤ m < ℓ, orthogonality forces the integral to vanish. This is a contradiction.

4.4.3 Solution to radial part

In our original equation ∇²ψ = 0, we now set Θ(θ) = P_ℓ(cos θ). The radial equation becomes

    (r²R′)′ = ℓ(ℓ + 1)R.

Trying a solution of the form R(r) = r^α, we find that α(α + 1) = ℓ(ℓ + 1). So α = ℓ or −(ℓ + 1). Hence

    ψ(r, θ) = ∑_ℓ (a_ℓ r^ℓ + b_ℓ/r^{ℓ+1}) P_ℓ(cos θ)

is the general axisymmetric solution.


If we want regularity at r = 0, then we need b_ℓ = 0 for all ℓ.

The a_ℓ are fixed if we require ψ(a, θ) = f(θ) on ∂Ω, ie. when r = a. We expand

    f(θ) = ∑_{ℓ=0}^{∞} F_ℓ P_ℓ(cos θ),

where

    F_ℓ = (P_ℓ, f) = ∫₀^π P_ℓ(cos θ) f(θ) d(cos θ).

So

    ψ(r, θ) = ∑_{ℓ=0}^{∞} F_ℓ (r/a)^ℓ P_ℓ(cos θ).

4.5 Multipole expansions for Laplace's equation

One can quickly check that

    φ(r) = 1/|r − r′|

solves Laplace's equation ∇²φ = 0 for all r ∈ R³ ∖ {r′}. For example, if r′ = k̂, where k̂ is the unit vector in the z direction, then

    1/|r − k̂| = 1/√(r² + 1 − 2r cos θ) = ∑_{ℓ=0}^{∞} c_ℓ r^ℓ P_ℓ(cos θ).

To find the c_ℓ, set θ = 0. Since P_ℓ(1) = 1 for all ℓ, and |r − k̂| = 1 − r for r < 1 along this axis, we have

    ∑_{ℓ=0}^{∞} c_ℓ r^ℓ = 1/(1 − r) = ∑_{ℓ=0}^{∞} r^ℓ,

so c_ℓ = 1 for all ℓ.

More generally, we have

    1/|r − r′| = (1/r′) ∑_{ℓ=0}^{∞} (r/r′)^ℓ P_ℓ(r̂ · r̂′).

This is called the multipole expansion, and is valid when r < r′. Thus

    1/|r − r′| = 1/r′ + (r/r′²) r̂ · r̂′ + ···.

The first term 1/r′ is known as the monopole, and the second term is known as the dipole, in analogy to charges in electromagnetism. The monopole term is what we get due to a single charge, and the second term is the result of the interaction of two charges.


4.6 Laplace's equation in cylindrical coordinates

We let

    Ω = {(r, θ, z) ∈ R³ : r ≤ a, z ≥ 0}.

In cylindrical polars, Laplace's equation reads

    ∇²φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z² = 0.

We look for a solution that is regular inside Ω and obeys the boundary conditions

    φ(a, θ, z) = 0,
    φ(r, θ, 0) = f(r, θ),
    lim_{z→∞} φ(r, θ, z) = 0.

Again, we start by separation of variables. We try

    φ(r, θ, z) = R(r)Θ(θ)Z(z).

Then we get

    (1/(rR)) (rR′)′ + (1/r²) (Θ″/Θ) + Z″/Z = 0.

The last term depends only on z, so we immediately know that

    Z″ = μZ.

We now replace Z″/Z by μ, and multiply by r² to obtain

    (r/R) (rR′)′ + Θ″/Θ + μr² = 0.

Now the Θ″/Θ term depends only on θ. So we know that

    Θ″ = −λΘ.

For Θ to be single-valued, we need λ = n² for an integer n; and for φ to decay as z → ∞, we take

    Z(z) = c_μ e^{−√μ z}.

Then we are left with the radial equation

    d/dr (r dR/dr) − (n²/r) R = −μrR.

Here we have

    p(r) = r,  q(r) = n²/r,  w(r) = r.


Introducing x = r√μ, we can rewrite this as

    x² d²R/dx² + x dR/dx + (x² − n²) R = 0.

This is Bessel's equation. Note that this is actually a whole family of differential equations, one for each n. The n is not the eigenvalue we are trying to find in this equation — it is already fixed by the Θ equation. The actual eigenvalue we are finding is μ, not n.

Since Bessel’s equations are second-order, there are two independent solution

for each n, namely Jn (x) and Yn (x). These are called Bessel functions of order

n of the first (Jn ) or second (Yn ) kind. We will not study these functions’

properties, but just state some useful properties of them.

The Jn ’s are all regular at the origin. In particular, as x ! 0 Jn (x) ⇠ xn .

These look like decaying sine waves, but the zeroes are not regularly spaced.

On the other hand, the Yn ’s are similar but singular at the origin. As x ! 0,

Y0 (x) ⇠ ln x, while Yn (x) ⇠ x n .

Now we can write our separable solution as

    φ(r, θ, z) = (a_n sin nθ + b_n cos nθ) e^{−√μ z} [c_{μ,n} J_n(r√μ) + d_{μ,n} Y_n(r√μ)].

Since we want the solution to be regular at r = 0, we must take d_{μ,n} = 0.

We now impose our first boundary condition φ(a, θ, z) = 0. This demands J_n(a√μ) = 0. So we must have

    √μ = k_{ni}/a,

where k_{ni} is the ith root of J_n(x) (since there is not much pattern to these roots, this is the best description we can give!).

So our general solution obeying the homogeneous conditions is

    φ(r, θ, z) = ∑_{n=0}^{∞} ∑_{i∈roots} (A_{ni} sin nθ + B_{ni} cos nθ) exp(−k_{ni} z/a) J_n(k_{ni} r/a).

It remains to impose φ(r, θ, 0) = f(r, θ). To do this, we use the orthogonality condition

    ∫₀ᵃ J_n(k_{nj} r/a) J_n(k_{ni} r/a) r dr = (a²/2) δ_{ij} [J_n′(k_{ni})]²,

which is proved on the example sheets. Note that we have a weight function r here. Also, remember this is the orthogonality relation for different roots of Bessel functions of the same order — it does not relate Bessel functions of different orders.

We now integrate our general solution at z = 0 against cos mθ to obtain

    (1/π) ∫_{−π}^{π} f(r, θ) cos mθ dθ = ∑_{i∈roots} B_{mi} J_m(k_{mi} r/a).


Now we just have Bessel’s function for a single order j. So we can use the

orthogonality relation for the Bessel’s functions to obtain

Z aZ ⇡ ✓ ◆

1 2 kmj r

Bmj = 0 cos m✓J m f (r, ✓)r dr d✓.

[Jm (kmj )] ⇡a2 0 ⇡ a

How can we possibly perform this integral? We don't know what J_m looks like or where the roots k_{mj} are, and we are to multiply all these complicated things together and integrate! Often, we just don't — we ask our computers to do it numerically. There are indeed some rare cases where we are able to get an explicit solution, but these are rare.
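Indeed, the numerical computation is only a few lines. A sketch (Python/SciPy; the boundary profile f and radius a are arbitrary choices of mine) for the axisymmetric m = 0 case, computing the coefficients of the expansion f(r) = ∑ᵢ B_{0i} J₀(k_{0i} r/a):

```python
import numpy as np
from scipy.special import jv, jn_zeros

a = 1.0
r = np.linspace(0, a, 1001)
f = 1.0 - (r / a) ** 2             # an arbitrary axisymmetric boundary profile f(r)

k = jn_zeros(0, 5)                 # first five roots of J_0
for i, k0i in enumerate(k, 1):
    # B_{0i} = (2 / (a^2 [J_0'(k_{0i})]^2)) * integral_0^a J_0(k_{0i} r/a) f(r) r dr,
    # using J_0' = -J_1.
    integrand = jv(0, k0i * r / a) * f * r
    B = 2.0 / (a**2 * jv(1, k0i) ** 2) * np.sum(integrand) * (r[1] - r[0])
    print(i, B)
```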

4.7 The heat equation

We choose Ω = Rⁿ × [0, ∞). We treat Rⁿ as the "space" variable, and [0, ∞) as our time variable.

Definition (Heat equation). The heat equation for a function φ: Ω → C is

    ∂φ/∂t = κ∇²φ,

where κ > 0 is the diffusion constant.

We can think of φ as a function representing the "temperature" at different points in space and time, and this equation says the rate of change of temperature is proportional to ∇²φ.

Our previous study of Laplace's equation comes in handy. When our system is in equilibrium and does not change with time, then we are left with Laplace's equation.

Let's first look at some interesting properties of the heat equation.

The first and most important property of the heat equation is that the total "heat" is conserved under evolution by the heat equation, ie.

    (d/dt) ∫_{Rⁿ} φ(x, t) dⁿx = 0,

since we can write this as

    (d/dt) ∫_{Rⁿ} φ(x, t) dⁿx = ∫_{Rⁿ} (∂φ/∂t) dⁿx = κ ∫_{Rⁿ} ∇²φ dⁿx = 0,

provided ∇φ decays sufficiently rapidly at infinity, since the last integral is a pure boundary term by the divergence theorem.

In particular, on a compact space, ie. Ω = M × [0, ∞) with M compact, there is no boundary term and we have

    (d/dt) ∫_M φ dμ = 0.

The second useful property is a scaling symmetry: if φ(x, t) solves the heat equation, so do φ₁(x, t) = φ(x − x₀, t − t₀) and φ₂(x, t) = Aφ(λx, λ²t).

Let's try to choose A such that

    ∫_{Rⁿ} φ₂(x, t) dⁿx = ∫_{Rⁿ} φ(x, t) dⁿx.


We have

    ∫_{Rⁿ} φ₂(x, t) dⁿx = A ∫_{Rⁿ} φ(λx, λ²t) dⁿx = Aλ⁻ⁿ ∫_{Rⁿ} φ(y, λ²t) dⁿy,

where we substituted y = λx.

So if we set A = λⁿ, then the total heat in φ₂ at time λ²t is the same as the total heat in φ at time t. But the total heat is constant in time. So the total heat is the same at all times.

Note again that if φ obeys ∂φ/∂t = κ∇²φ on Ω × [0, ∞), then so does λⁿφ(λx, λ²t). This means that nothing is lost by letting λ = 1/√t and considering solutions of the form

    φ(x, t) = (1/(κt)^{n/2}) F(x/√(κt)) = (1/(κt)^{n/2}) F(η),

where

    η = x/√(κt).

For example, in 1 + 1 dimensions, we can look for a solution of the form φ = (1/√(κt)) F(η). We have

    ∂φ/∂t = ∂/∂t [(1/√(κt)) F(η)] = −(1/(2√(κt³))) F(η) + (1/√(κt)) (dη/dt) F′(η) = −(1/(2√(κt³))) [F + ηF′].

On the other side of the equation, we have

    κ ∂²φ/∂x² = κ (1/√(κt)) ∂/∂x [(∂η/∂x) F′] = (1/√(κt³)) F″.

So the equation becomes

    0 = 2F″ + ηF′ + F = (2F′ + ηF)′.

So we have

    2F′ + ηF = const,

and the boundary conditions require that the constant is zero. So we get

    F′ = −(η/2) F.

We now see a solution

    F(η) = a exp(−η²/4).

By convention, we now normalize our solution φ(x, t) by requiring

    ∫_{−∞}^{∞} φ(x, t) dx = 1.

This gives

    a = 1/√(4π),

and the full solution is

    φ(x, t) = (1/√(4πκt)) exp(−x²/(4κt)).


Translating in space and time, we obtain the more general solution

    φ(x, t) = (1/(4πκ(t − t₀))^{n/2}) exp(−(x − x₀)²/(4κ(t − t₀))).

At any fixed time t ≠ t₀, the solution is a Gaussian centered on x₀ with standard deviation √(2κ(t − t₀)), and the height of the peak at x = x₀ is 1/√(4πκ(t − t₀)).
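A short numerical sketch of this flattening (Python/NumPy; κ = 1 and the sample times are arbitrary choices):

```python
import numpy as np

def heat_kernel(x, t, kappa=1.0):
    # phi(x, t) = (4 pi kappa t)^(-1/2) exp(-x^2 / (4 kappa t))
    return np.exp(-x**2 / (4 * kappa * t)) / np.sqrt(4 * np.pi * kappa * t)

x = np.linspace(-10, 10, 4001)
for t in (0.1, 1.0, 10.0):
    phi = heat_kernel(x, t)
    mass = np.sum(phi) * (x[1] - x[0])   # total heat, conserved (= 1)
    print(t, phi.max(), mass)            # peak drops like 1/sqrt(t); mass stays 1
```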

We see that the solution gets "flatter" as t increases. This is a general property of the evolution by the heat equation. Suppose we have an eigenfunction y(x) of the Laplacian on Ω, ie. ∇²y = −λy. We want to show that in certain cases, λ must be positive. Consider

    −λ ∫_Ω |y|² dⁿx = ∫_Ω y*(x) ∇²y(x) dⁿx = ∫_∂Ω y* n·∇y dⁿ⁻¹x − ∫_Ω |∇y|² dⁿx.

If there is no boundary term, eg. when Ω is closed and compact (eg. a sphere), then we get

    λ = ∫_Ω |∇y|² dⁿx / ∫_Ω |y|² dⁿx.

Hence the eigenvalues are all positive. What does this mean for heat flow?

Suppose φ(x, t) obeys the heat equation for t > 0. At time t = 0, we have φ(x, 0) = f(x). We can write the solution (somehow) as

    φ(x, t) = e^{t∇²} φ(x, 0),

where we (for convenience) let κ = 1. But we can expand our initial condition in eigenfunctions of the Laplacian,

    φ(x, 0) = f(x) = ∑_I c_I y_I(x).

By linearity, we have

    φ(x, t) = e^{t∇²} (∑_I c_I y_I(x))
            = ∑_I c_I e^{t∇²} y_I(x)
            = ∑_I c_I e^{−λ_I t} y_I(x)
            = ∑_I c_I(t) y_I(x),

where c_I(t) = c_I e^{−λ_I t}. Since the λ_I are all positive, we know that the c_I(t) decay exponentially with time. In particular, the coefficients corresponding to the largest eigenvalues |λ_I| decay most rapidly. We also know that eigenfunctions with larger eigenvalues are in general more "spiky". So these smooth out rather quickly.

When people first came up with the heat equation to describe heat loss, they were slightly skeptical — this flattening out caused by the heat equation is not time-reversible, while Newton's laws of motion are all time-reversible. How could this smoothing arise from reversible equations?


Einstein came up with an example where the heat equation arises naturally from seemingly reversible laws: Brownian motion. The idea is that a particle randomly jumps around in space, where the movement is independent of time and position.

Let the probability that a dust particle moves through a step y over a time Δt be p(y, Δt). For any fixed Δt, we must have

    ∫_{−∞}^{∞} p(y, Δt) dy = 1.

We also assume that p(y, Δt) is independent of time and of the location of the dust particle, that p(y, Δt) is strongly peaked around y = 0, and that p(y, Δt) = p(−y, Δt). Now let P(x, t) be the probability that the dust particle is located at x at time t. Then at time t + Δt, we have

    P(x, t + Δt) = ∫_{−∞}^{∞} P(x − y, t) p(y, Δt) dy.

@P y2 @ 2 P

P (x y, t) ⇡ P (x, t) y (x, t) + (x, t).

@x 2 @x2

So we get

@P 1 @2P

P (x, t + t) ⇡ P (x, t) (x, t)hyi + hy 2 i 2 P (x, t) + · · · ,

@x 2 @x

where Z 1

hy r i = y r p(y, t) dy.

1

peaked at 0, we expect the higher order terms to be small. So we can write

1 2 @2P

P (x, t + t) P (x, t) = hy i 2 .

2 @x

hy 2 i

Suppose that as we take the limit t ! 0, we get 2 t ! for some . Then

this becomes the heat equation

@P @2P

= 2.

@t @x

@

Proposition. Suppose : ⌦⇥[0, 1) ! R satisfies the heat equation @t = r2 ,

and obeys

– Initial conditions (x, 0) = f (x) for all x 2 ⌦

– Boundary condition (x, t)|@⌦ = g(x, t) for all t 2 [0, 1).

Then (x, t) is unique.

35

4 Partial di↵erential equations IB Methods

Z

1 2

E(t) = dV.

2 ⌦

Then we know that E(t) 0. Since 1 , 2 both obey the heat equation, so does

. Also, on the boundary and at t = 0, we know that = 0. We have

Z

dE d

= dV

dt ⌦ dt

Z

= r2 dV

⌦

Z Z

= r · dS (r )2 dV

@⌦ ⌦

Z

2

= (r ) dV

⌦

0.

know that at time t = 0, E = = 0. So E = 0 always. So = 0 always. So

1 = 2.

and the Sun heats up the surface of the Earth through sun light. So the sun will

maintain the soil at some fixed temperature. However, this temperature varies

with time as we move through the day-night cycle and seasons of the year.

We let (x, t) be the temperature of the soil as a function of the depth x,

defined on R 0 ⇥ [0, 1). Then it obeys the heat equation with conditions

⇣ ⌘ ⇣ ⌘

(i) (0, t) = 0 + A cos 2⇡ttD + B cos 2⇡t

tY .

We know that

@ @2

= K 2.

@t @x

We try the separation of variables

(x, t) = T (t)X(x).

T 0 = T, X 00 = X.

K

From the boundary solution, we know that our things will be oscillatory. So we

let be imaginary, and set = i!. So we have

⇣ p p ⌘

(x, t) = ei!t a! e i!/Kx + b! e i!/Kx .

r

i! <(1 + i) 2K

|!|

!>0

= q

K :(i 1) |!| !<0

2K

36

4 Partial di↵erential equations IB Methods

we need a! = 0. Otherwise, we need b! = 0.

To match up at x = 0, we just want terms with

2⇡ 2⇡

|!| = !D = , |!| = !Y = .

tD tY

So we can write out solution as

✓ r ◆ ✓ r ◆

!D !D

(x, t) = 0 + A exp cos !D t x

2K 2K

✓ r ◆ ✓ r ◆

!Y !Y

+ B exp cos !Y t x

2K 2K

We can notice a few things. Firstly, as we go further down, the e↵ect of the sun

decays, and the temperature is more stable. Also, the e↵ect of the day-night

cycle decays more quickly than the annual cycle, which makes sense. We also

see that while the temperature does fluctuate with the same frequency as the

day-night/annual cycle, as we go down, there is a phase shift. This is helpful

since we can store things underground and make them cool in summer, warm in

winter.

Example (Heat conduction in a finite rod). We now have a finite rod of length

2L.

x= L x=0 x=L

(

1 0<x<L

(x, 0) = ⇥(x) =

0 L<x<0

( L, t) = 0, (L, t) = 1

So we start with a step temperature, and then maintain the two ends at fixed

temperatures 0 and 1.

We are going to do separation of variables, but we note that all our boundary

and initial conditions are inhomogeneous. This is not helpful. So we use a

little trick. We first look for any solution satisfying the boundary conditions

S ( L, t) = 0, S (L, t) = 1. For example, we can look for time-independent

2

solutions S (x, t) = S (x). Then we need ddx2S = 0. So we get

x+L

S (x) = .

2L

By linearity, (x, t) = (x, t) s (x) obeys the heat equation with the conditions

( L, t) = (L, t) = 0,

x+L

(x, 0) = ⇥(x) .

2L

37

4 Partial di↵erential equations IB Methods

T0 = T, X0 = X.

Then we have

h p p i

t

(x, t) = a sin( x) + b cos( x) e .

Since initial condition is odd, we can eliminate all cos terms. Our boundary

conditions also requires

n2 ⇡ 2

= , n = 1, 2, · · · .

L2

So we have

1

X ⇣ n⇡x ⌘ ✓ ◆

n2 ⇡ 2

(x, t) = s (x) + an sin exp t ,

n=1

L L2

Z ⇣ n⇡x ⌘

1 L x+L 1

an = ⇥(x) sin dx = .

L L 2L L n⇡

Example (Cooling of a uniform sphere). Once upon a time, Charles Darwin

went around the Earth, looked at species, and decided that evolution happened.

When he came up with his theory, it worked quite well, except that there was

one worry. Was there enough time on Earth for evolution to occur? He knew

well that the Earth started as a ball of molten rock, and obviously life couldn’t

have evolved when the world was still molten rock. So he would want to know

how long it took for the Earth to cool down to its current temperature, and if

that was sufficient for evolution to occur.

We can, unsurprisingly, use the heat equation. We assume that at time t = 0,

the temperature is 0 , the melting temperature of rock. We also assume that

the space is cold, and we let the temperature on the boundary of Earth as 0. We

then solve the heat equation on a sphere (or ball), and see how much time it

takes for Earth to cool down to its present temperature.

We let ⌦ = {(x, y, z) 2 R3 , r R}, and we want a solution (r, t) of the heat

equation that is spherically symmetric and obeys the conditions

– (R, t) = 0

– (r, 0) = 0,

✓ ◆

@ @ @

= r2 = 2 r2 .

@t r @r @r

Again, we do the separation of variables.

38

4 Partial di↵erential equations IB Methods

So we get ✓ ◆

0 2 d 2 dR 2 2

T = T, r = r R.

dr dr

We can simplify this a bit by letting R(r) = S(r)

r , then our radial equation just

becomes

S 00 = 2

S.

We can solve this to get

sin r cos r

R(r) = A +B .

r r

We want a regular solution at r = 0. So we need B = 0. Also, the boundary

condition (R, t) = 0 gives

n⇡

= , n = 1, 2, · · ·

R

So we get ✓ ◆

X An ⇣ n⇡r ⌘ n2 ⇡ 2 t

(r, t) = sin exp ,

r R R2

n2Z

0R

An = ( 1)n+1 .

n⇡

We know that the Earth isn’t just a cold piece of rock. There are still volcanoes.

So we know many terms still contribute to the sum nowadays. This is rather

difficult to work with. So we instead look at the temperature gradient

X ⇣ n⇡r ⌘ ✓ ◆

@ 0 n+1 n2 ⇡ 2 t

= ( 1) cos exp + sin stu↵.

@r r R R2

n2Z

✓ ◆ Z 1 ✓ ◆ r

@ 0

X n2 ⇡ 2 t 0 ⇡y 2 t 1

= exp ⇡ exp dy = 0 .

@r R R R2 R 1 R 2 ⇡t

n2Z

✓ ◆2

0 1

t⇡ ,

V 4⇡K

where

@

V = .

@r R

We can plug the numbers in, and get that the Earth is 100 million years. This is

not enough time for evolution.

Later on, people discovered that fission of metals inside the core of Earth

produce heat, and provides an alternative source of heat. So problem solved!

The current estimate of the age of the universe is around 4 billion years, and

evolution did have enough time to occur.

39

4 Partial di↵erential equations IB Methods

Consider a string x 2 [0, L] undergoing small oscillations described by (x, t).

B

✓B

✓A

(x, t) A

the outward pointing tension tangent to the string at A (B). Since there is no

sideways (x) motion, there is no net horizontal force. So

If the string has mass per unit length µ, then in the vertical direction, Newton’s

second law gives

@2

µ x 2 = TB sin ✓B TA sin ✓A .

@t

We now divide everything by T , noting the relation (⇤), and get

x @2 TB sin ✓B TA sin ✓A

µ =

T @t2 TB cos ✓B TA cos ✓A

= tan ✓B tan ✓A

@ @

=

@x B @x A

2

@

⇡ x .

@x2

Taking the limit x ! 0 and setting c2 = T /µ, we have that (x, t) obeys the

wave equation

1 @2 @2

2 2

= .

c @t @x2

From IA Di↵erential Equations, we’ve learnt that the solution can be written as

f (x ct) + g(x + ct). However, this does not extend to higher dimensions, but

separation of variables does. So let’s do that.

Assume that the string is fixed at both ends. Then (0, t) = (L, t) = 0 for

all t. The we can perform separation of variables, and the general solution can

then be written

X1 ⇣ n⇡x ⌘ ✓

n⇡ct

◆ ✓

n⇡ct

◆

(x, t) = sin An cos + Bn sin .

n=1

L L L

The coefficients An are fixed by the initial profile (x, 0) of the string, while

the coefficients Bn are fixed by the initial string velocity @@t (x, 0). Note that we

need two sets of initial conditions, since the wave equation is second-order in

time.

40

4 Partial di↵erential equations IB Methods

An oscillating string contains has some sort of energy. The kinetic energy of a

small element x of the string is

✓ ◆2

1 @

µ x .

2 @t

The total kinetic energy of the string is hence the integral

Z ✓ ◆2

µ L @

K(t) = dx.

2 0 @t

The string also has potential energy due to tension. The potential energy of a

small element x of the string is

T ⇥ extension = T ( s x)

p

= T ( x2 + 2 x)

✓ ◆2 !

1

⇡T x 1+ + ··· T x

2 x

✓ ◆2

T

= x .

2 x

Hence the total potential energy of the string is

Z ✓ ◆2

µ L 2 @

V (t) = c dx,

2 0 @x

Using our series expansion for , we get

1 ✓ ◆ ✓ ◆ 2

µ⇡ 2 c2 X 2 n⇡ct n⇡ct

K(t) = n An sin Bn cos

4L n=1 L L

1 ✓ ◆ ✓ ◆ 2

µ⇡ 2 c2 X 2 n⇡ct n⇡ct

V (t) = n An cos + Bn sin

4L n=1 L L

1

n⇡ 2 c2 X 2 2

E(t) = n (An + Bn2 ).

4L n=1

What can we see here? Our solution is essentially an (infinite) sum of independent

harmonic oscillators, one for each n. The period of the fundamental mode (n = 1)

is 2⇡ L 2L

! = 2⇡ · ⇡c = c . Thus, averaging over a period, the average kinetic energy

is Z 2L/c Z 2L/c

c c E

K̄ = K(t) dt = V̄ = V (t) dt = .

2L 0 2L 0 2

Hence we have an equipartition of the energy between the kinetic and potential

energy.

The energy also allows us to prove a uniqueness theorem. First we show that

energy is conserved in general.

41

4 Partial di↵erential equations IB Methods

2

Proposition. Suppose : ⌦⇥[0, 1) ! R obeys the wave equation @@t2 = c2 r2

inside ⌦ ⇥ (0, 1), and is fixed at the boundary. Then E is constant.

Proof. By definition, we have

Z 2 ✓ ◆

dE @ @ 2 @

= 2 @t

+c r · r dV.

dt ⌦ @t @t

Z ✓ 2 ◆ Z

dE d @ 2 2 2 @

= 2

c r dV + c r · dS.

dt ⌦ dt @t @⌦ @t

@2

Since @t2 = c 2 r2 by the wave equation, and is constant on @⌦, we know

that

dE

= 0.

dt

@2

Proposition. Suppose : ⌦⇥[0, 1) ! R obeys the wave equation @t2 = c 2 r2

inside ⌦ ⇥ (0, 1), and obeys, for some f, g, h,

@

(ii) @t (x, 0) = g(x); and

(iii) |@⌦⇥[0,1) = h(x).

Then is unique.

Proof. Suppose 1 and 2 are two such solutions. Then = 1 2 obeys the

wave equation

@2

= c 2 r2 ,

@t2

and

@

|@⌦⇥[0,1) = |⌦⇥{0} = = 0.

@t ⌦⇥{0}

We let Z "✓ ◆2 #

1 @ 2

E (t) = +c r ·r dV.

2 ⌦ @t

Then since obeys the wave equation with fixed boundary conditions, we know

E is constant.

Initially, at t = 0, we know that = @@t = 0. So E (0) = 0. At time t, we

have Z ✓ ◆2

1 @

E = + c2 (r ) · (r ) dV = 0.

2 ⌦ @t

@

Hence we must have @t = 0. So is constant. Since it is 0 at the beginning, it

is always 0.

42

4 Partial di↵erential equations IB Methods

✓ ◆

1 @2 2 1 @ 1 @2

= r = r + ,

c2 @t2 r @r r2 @✓2

with the boundary condition |@⌦ = 0. We can imagine this as a drum, where

the membrane can freely oscillate with the boundary fixed.

Separating variables with (r, ✓, t) = T (t)R(r)⇥(✓), we get

Then as before, T and ⇥ are both sine and cosine waves. Since we are in polars

coordinates, we need (t, r, ✓ + 2⇡) = (t, r, ✓). So we must have µ = m2 for

some m 2 N. Then the radial equation becomes

r(rR0 )0 + (r2 m2 )R = 0,

p p

R(r) = am Jm ( r) + bm Ym ( r).

boundary condition |@⌦ = 0, we must choose = kmi , where kmi is the ith

root of Jm .

Hence the general solution is

1

X

(t, r, ✓) = [A0i sin(k0i ct) + B0i cos(k0i ct)]J0 (k0i r)

i=0

1 X

X 1

+ [Ami cos(m✓) + Bmi sin(m✓)] sin kmi ctJm (kmi r)

m=1 i=0

X1 X 1

+ [Cmi cos(m✓) + Dmi sin(m✓)] cos kmi ctJm (kmi r)

m=1 i=0

g(r). So we start with a flat surface and suddenly hit it with some force. By

symmetry, we must have Ami , Bmi , Cmi , Dmi = 0 for m 6= 0. If this does not

seem obvious, we can perform some integrals with the orthogonality relations to

prove this.

At t = 0, we need = 0. So we must have B0i = 0. So we are left with

1

X

= A0i sin(k0i ct)J0 (k0j r).

i=0

Z 1X1 Z 1

k0i cA0i J0 (k0i r)J0 (k0j r)r dr = g(r)J0 (k0j r)r dr.

0 i=0 0

Using the orthogonality relations for J0 from the example sheet, we get

Z 1

2 1

A0i = g(r)J0 (k0j r)r dr.

ck0i [J00 (k0i )]2 0

43

4 Partial di↵erential equations IB Methods

Note that the frequencies come from the roots of the Bessel’s function, and are

not evenly spaced. This is di↵erent from, say, string instruments, where the

frequencies are evenly spaced. So drums sound di↵erently from strings.

This is quite good, as it worked in our examples, but there are a few issues with

it. Of course, we have the problem of whether it converges or not, but there is a

more fundamental problem.

To perform separation of variables, we need to pick a good coordinate system,

such that the boundary conditions come in a nice form. We were able to

perform separation of variables because we can find some coordinates that fit our

boundary conditions well. However, in real life, most things are not nice. Our

domain might have a weird shape, and we cannot easily find good coordinates

for it.

Hence, instead of solving things directly, we want to ask some more general

questions about them. In particular, Kac asked the question “can you hear the

shape of a drum?” — suppose we know all the frequencies of the modes of

oscillation on some domain ⌦. Can we know what ⌦ is like?

The answer is no, and we can explicitly construct two distinct drums that

sound the same. Fortunately, we get an affirmative answer if we restrict ourselves

a bit. If we require ⌦ to be convex, and has a real analytic boundary, then yes!

In fact, we have the following result: let N ( 0 ) be the number of eigenvalues

less than 0 . Then we can show that

N( 0)

4⇡ 2 lim = Area(⌦).

0 !1 0

44

5 Distributions IB Methods

5 Distributions

5.1 Distributions

When performing separation of variables, we also have the problem of convergence

as well. To perform separation of variables, we first find some particular solutions

of the form, say, X(x)Y (y)Z(z). We know that these solve, say, the wave equation.

However, what we do next is take an infinite sum of these functions. First of

all, how can we be sure that this converges at all? Even if it did, how do we

know that the sum satisfies the di↵erential equation? As we have seen in Fourier

series, an infinite sum of continuous functions can be discontinuous. If it is not

even continuous, how can we say it is a solution of a di↵erential equation?

Hence, at first people were rather skeptical of this method. They thought

that while these methods worked, they only work on a small, restricted domain

of problems. However, later people realized that this method is indeed rather

general, as long as we allow some more “generalized” functions that are not

functions in the usual sense. These are known as distributions.

To define a distribution, we first pick a class of “nice” test functions, where

“nice” means we can do whatever we want to them (eg. di↵erentiate, integrate

etc.) A main example is the set of infinitely smooth functions with compact

1

support on some set K ✓ ⌦, written Ccpt (⌦) or D(⌦). For example, we can

have the bump function defined by

( 2

e 1/(1 x ) |x| < 1

(x) =

0 otherwise.

who are doing IB Linear Algebra, this is the dual space of the space of (“nice”)

real-valued functions. For 2 D(⌦), we often write its image under T as T [ ].

Example. The simplest example is just an ordinary function that is integrable

over any compact region. Then we define the distribution Tf as

Z

Tf [ ] = f (x) (x) dx.

⌦

Note that this is a linear map since integration is linear (and multiplication

is commutative and distributes over addition). Hence every function “is” a

distribution.

Of course, this would not be interesting if we only had ordinary functions.

The most important example of a distribution (that is not an ordinary function)

is the Dirac delta “function”.

Definition (Dirac-delta). The Dirac-delta is a distribution defined by

[ ] = (0).

45

5 Distributions IB Methods

By analogy with the first case, we often abuse notation and write

Z

[ ]= (x) (x) dx,

⌦

and pretend (x) is an actual function. Of course, it is not really a function, ie.

it is not a map ⌦ ! R. If it were, then we must have (x) = 0 whenever x 6= 0,

since [ ] = (0) only depends on what happens at 0. But then this integral

will just give 0 if (0) 2 R. Some people like to think of this as a function that

is zero anywhere and “infinitely large” everywhere else. Formally, though, we

have to think of it as a distribution.

Although distributions can be arbitrarily singular and insane, we can nonethe-

less define all their derivatives, defined by

T 0[ ] = T [ 0 ].

This is motivated by the case of regular functions, where we get, after integrating

by parts, Z Z

f 0 (x) (x) dx = f (x) 0 (x) dx,

⌦ ⌦

with no boundary terms since we have a compact support. Since is infinitely

di↵erentiable, we can take arbitrary derivatives of distributions.

So even though distributions can be crazy and singular, everything can be

di↵erentiated. This is good.

Generalized functions can occur as limits of sequences of normal functions.

For example, the family of functions

n

Gn (x) = p exp( n2 x2 )

⇡

are smooth for any finite n, and Gn [ ] ! [ ] for any .

D

0

( )= [ 0] = 0

(0),

as this is the limit of the sequence

Z

lim G0n (x) (x) dx.

n!1 ⌦

It is often convenient to think of (x) as lim Gn (x), and 0 (x) = lim G0n (x)

n!1 n!1

etc, despite the fact that these limits do not exist as functions.

We can look at some properties of (x):

46

5 Distributions IB Methods

– Translation:

Z 1 Z 1

(x a) (x) dx = (y) (y + a) dx.

1 1

– Scaling:

Z 1 Z 1 ⇣ y ⌘ dy 1

(cx) (x) dx = (y) = (0).

1 1 c |c| |c|

– These are both special cases of the following: suppose f (x) is a continuously

di↵erentiable function with isolated simple zeros at xi . Then near any of

its zeros xi , we have f (x) ⇡ (x xi ) @f@x . Then

xi

Z n Z 1

!

1 X @f

(f (x)) (x) dx = (x xi ) (x) dx

1 i=1 1 @x xi

Xn

1

= (xi ).

i=1

|f 0 (xi )|

in the interval [ L, L], and write a Fourier expansion

X

(x) = ˆn ein⇡x/L

n2Z

with Z L

ˆn = 1 ein⇡x/L (x) dx =

1

2L L 2L

So we have

1 X in⇡x/L

(x) = e .

2L

n2Z

This does make sense as a distribution. We let

N

1 X in⇡x/L

SN (x) = e .

2L

n= N

Then

Z L Z L N

X

1

lim SN (x) (x) dx = lim ein⇡x/L (x) dx

N !1 L N !1 2L L n= N

N

" Z #

X 1 L

in⇡x/L

= lim e (x) dx

N !1 2L L

n= N

N

X

= lim ˆ n

N !1

n= N

N

X

= lim ˆn ein⇡0/L

N !1

n= N

= (0),

47

5 Distributions IB Methods

since the Fourier series of the smooth function (x) does converge for all x 2

[ L, L].

We can equally well expand (x) in terms of any other set of orthonormal

eigenfunctions. Let {yn (x)} be a complete set of eigenfunctions on [a, b] that are

orthogonal with respect to a weight function w(x). Then we can write

X

(x ⇠) = cn yn (x),

n

with Z b

cn = yn⇤ (x) (x ⇠)w(x) dx = yn⇤ (⇠)w(⇠).

a

So X X

(x ⇠) = w(⇠) yn⇤ (⇠)yn (x) = w(x) yn⇤ (⇠)yn (x),

n n

w(⇠)

(x ⇠) = (x ⇠).

w(x)

One of the main uses of the function is the Green’s function. Suppose we wish

to solve the 2nd order ordinary di↵erential equation Ly = f , where f (x) is a

bounded forcing term, and L is a bounded di↵erential operator, say

@2 @

L = ↵(x) + (x) + (x).

@x2 @x

As a first step, we try to find the Green’s function for L, which we shall write as

G(x, ⇠), which obeys

LG(x, ⇠) = (x ⇠).

Given G(x, ⇠), we can then define

Z b

y(x) = G(x, ⇠)f (⇠) d⇠.

a

Then we have

Z b Z b

Ly = LG(x, ⇠)f (⇠) d⇠ = (x ⇠)f (⇠) d⇠ = f (x).

a a

We can formally say y = L 1 f , and the Green’s function shows that L 1 means

an integral.

Hence if we can solve for Green’s function, then we have solved the di↵erential

equation, at least in terms of an integral.

To find the Green’s function G(x, ⇠) obeying the boundary conditions

G(a, ⇠) = G(b, ⇠) = 0,

first note that LG = 0 for all x 2 [a, ⇠) [ (⇠, b], ie. everywhere except ⇠ itself.

Thus we must be able to expand it in a basis of solutions in these two regions.

48

5 Distributions IB Methods

[a, b], with boundary conditions y1 (a) = 0, y2 (b) = 0. Then we must have

(

A(⇠)y1 (x) a x < ⇠

G(x, ⇠) =

B(⇠)y2 (x) ⇠ < x b

how to join these solutions together over x = ⇠.

If G(x, ⇠) were discontinuous at x = ⇠, then @x G|x=⇠ would involve a

function, while @x2 G|x=⇠ would involve the derivative of the function. This is

not good, since nothing in LG = (x ⇠) would balance a 0 . So G(x, ⇠) must

be everywhere continuous. Hence we require

Now integrate over a small region (⇠ ", ⇠ + ") surrounding ⇠. Then we have

Z ⇠+" Z ⇠+"

d2 G dG

↵(x) 2 + (x) + (x)G dx = (x ⇠) dx = 1.

⇠ " dx dx ⇠ "

discontinuous, it is still finite. So the G0 term also does not contribute. So we

have Z ⇠+"

lim ↵G00 dx = 1.

"!0 ⇠ "

Z ⇠+"

lim [↵G0 ]⇠+"

⇠ " + ↵0 G0 dx = 1.

"!0 ⇠ "

!

@G @G

↵(⇠) =1

@x ⇠+ @x ⇠

Hence we obtain

1

B(⇠)y20 (⇠) A(⇠)y10 (⇠) = .

↵(⇠)

Together with (⇤), we know that

y2 (⇠) y1 (⇠)

A(⇠) = , B(⇠) = ,

↵(⇠)W (⇠) ↵(⇠)W (⇠)

where W is the Wronskian

W = y1 y20 y2 y10 .

(

1 y2 (⇠)y1 (x) ax⇠

G(x, ⇠) =

↵(⇠)W (⇠) y1 (⇠)y2 (x) ⇠ < x b.

49

5 Distributions IB Methods

1

G(x, ⇠) = [⇥(⇠ x)y2 (⇠)y1 (x) + ⇥(x ⇠)y1 (⇠)y2 (x)].

↵(⇠)W (⇠)

Z b

y(x) = G(x, ⇠)f (⇠) d⇠

a

Z b Z x

f (⇠) f (⇠)

= y2 (⇠)y1 (x) d⇠ + y1 (⇠)y2 (x) d⇠.

x ↵(⇠)W (⇠) a ↵(⇠)W (⇠)

Example. Consider

Ly = y 00 y=f

for x 2 (0, 1) with y(0) = y(1) = 0. We choose our basis solution to satisfy

y 00 = y as {sin x, sin(1 x)}. Then we can compute the Wronskian

1

G(x, ⇠) = [⇥(⇠ x) sin(1 ⇠) sin x + ⇥(x ⇠) sin ⇠ sin(1 x)].

sin 1

Hence we get

Z 1 Z x

sin(1 ⇠) sin ⇠

y(x) = sin x f (⇠) d⇠ + sin(1 x) f (⇠) d⇠.

x sin 1 0 sin 1

Example. Suppose we have a string with mass per unit length µ(x) and held

under a tension T . In the presence of gravity, Newton’s second law gives

@2y @2y

µ 2

= T 2 + µg.

@t @x

We look for the steady state solution ẏ = 0 shape of the string, assuming

y(0, t) = y(L, t) = 0. So we get

@2y µ(x)

= g.

@x2 T

We look for a Green’s function obeying

@2y

= (x ⇠).

@x2

This can be interpreted as the contribution of a pointlike mass located at x = ⇠.

0 ⇠ L

50

5 Distributions IB Methods

y = Ax + B(x L).

So we get (

A(⇠x) 0x<⇠

G(x, ⇠) =

B(⇠)(x L) ⇠ < x L.

Continuity at x = ⇠ gives

A(⇠) B(⇠) = 1.

⇠ L ⇠

A(⇠) = , B(⇠) = .

L L

Hence the Green’s function is

⇠ L ⇠

G(x, ⇠) = ⇥(⇠ x) + (x L)⇥(x ⇠).

L L

Notice that ⇠ is always less that L. So the first term has a negative slope; while

⇠ is always positive, so the second term has positive slope.

For the general case, we can just substitute this into the integral. We can

give this integral a physical interpretation. We can think that we have many

pointlike particles mi at small separations x along the string. So we get

n

X mi g

g(x) = G(x, xi ) ,

i=1

T

Z L

g

g(x) ! G(x, ⇠)µ(⇠) d⇠.

T 0

Consider Ly = f (t) subject to y(t = t0 ) = 0 and y 0 (t = t0 ) = 0. So instead of

having two boundaries, we have one boundary and restrict both the value and

the derivative at this boundary.

As before, let y1 (t), y2 (t) be any basis of solutions to Ly = 0. The Green’s

function obeys

L(G) = (t ⌧ ).

We can write our Green’s function as

(

A(⌧ )y1 (t) + B(⌧ )y2 (t) t0 t < ⌧

G(t, ⌧ ) =

C(⌧ )y1 (t) + D(⌧ )y2 (t) t > ⌧.

51

5 Distributions IB Methods

✓ ◆✓ ◆ ✓ ◆

y1 (t0 ) y2 (t0 ) A 0

=

y10 (t0 ) y20 (t0 ) B 0

basis). So we must have A, B = 0 (which makes sense, since G = 0 is obviously

a solution for t0 t < ⌧ ).

Our continuity and jump conditions now require

1

= C(⌧ )y10 (⌧ ) + D(⌧ )y20 (⌧ ).

↵(⌧ )

Z 1 Z t

y(t) = G(t, ⌧ )f (⌧ ) d⌧ = G(t, ⌧ )f (⌧ ) d⌧,

t0 t0

since we know that when ⌧ > t, the Green’s function G(t, ⌧ ) = 0. Thus the

solution y(t) depends on the forcing term term f only for times < t. This

expresses causality!

Then we have

for some C(⌧ ), D(⌧ ). The continuity and jump conditions gives D(⌧ ) = 1, C(⌧ ) =

0. So we get

G(t, ⌧ ) = ⇥(t ⌧ ) sin t(⌧ ).

So we get Z t

y(t) = sin(t ⌧ )f (⌧ ) d⌧.

0

52

6 Fourier transforms IB Methods

6 Fourier transforms

6.1 The Fourier transform

So far, we have considered writing a function f : S 1 ! C (or a periodic function)

as a Fourier sum

X Z ⇡

1

f (✓) = fˆn ein✓ , fˆn = e in✓ f (✓) d✓.

2⇡ ⇡

n2Z

f : R ! C? Can we still do Fourier analysis?

R⇡ Yes!

To start with, instead of fˆn = 2⇡

1

⇡

e in✓

f (✓) d✓, we define the Fourier

transform as follows:

Definition (Fourier transform). The Fourier transform of an (absolutely inte-

grable) function f : R ! C is defined as

Z 1

f˜(k) = e ikx f (x) dx

1

Note that for any k, we have

Z 1 Z 1 Z 1

|f˜(k)| = e ikx f (x) dx |e ikx

f (x)| dx = |f (x)| dx.

1 1 1

Since we have assumed that our function is absolutely integrable, this is finite,

and the definition makes sense.

We can look at a few basic properties of the Fourier transform.

(i) Linearity: if f, g : R ! C are absolutely integrable and c1 , c2 are constants,

then

F[c1 f (x) + c2 g(x)] = c1 F[f (x)] + c2 F[g(x)].

This follows directly from the linearity of the integral.

(ii) Translation:

Z

ikx

F[f (x a)] = e f (x a) dx

R

Z

ik(y+a)

= e f (y) dy

R

Z

ika iky

=e e f (y) dy

R

ika

=e F[f (x)],

53

6 Fourier transforms IB Methods

(iii) Re-phasing:

Z 1

i`x ikx i`x

F[e f (x)] = e e f (x) dx

1

Z 1

i(k+`)x

= e f (x) dx

1

= f˜(k + `),

(iv) Scaling:

Z 1

ikx

F[f (cx)] = e f (cx) dx

1

Z 1

dy

= e iky/c f (y)

1 |c|

✓ ◆

1 ˜ k

= f .

|c| c

y/c and c is negative, then we will flip the bounds of the integral. The

extra minus sign then turns c into c = |c|.

(v) Convolutions: for functions f, g : R ! C, we define the convolution as

Z 1

f ⇤ g(x) = f (x y)g(y) dy.

1

We then have

Z 1 Z 1

ikx

F[f ⇤ g(x)] = e f (x y)g(y) dy dx

1 1

Z

= eik(x y)

f (x y)e iky

g(y) dy dx

R2

Z Z

iku iky

= e f (u) du e g(y) dy

R R

= F[f ]F[g],

of individual Fourier transforms.

(vi) The most useful property of the Fourier transform is that it “turns di↵er-

entiation into multiplication”. Integrating by parts, we have

Z 1

df

F[f 0 (x)] = e ikx dx

1 dx

Z 1

d

= (e ikx )f (x) dx

1 dx

54

6 Fourier transforms IB Methods

Note that we don’t have any boundary terms since for the function to be

absolutely integrable, it has to decay to zero as we go to infinity. So we

have

Z 1

= ik e ikx f (x) dx

1

= ikF[f (x)]

Conversely,

Z 1 Z 1

d

F[xf (x)] = e ikx

xf (x) dx = i e ikx

f (x) dx = if˜0 (k).

1 dk 1

This are useful results, since if we have a complicated function to find the Fourier

transform of, we can use these to break it apart into smaller pieces and hopefully

make our lives easier.

Suppose we have a di↵erential equation

L(@)y = f,

where

p

X dr

L(@) = cr

r=0

dxr

is a di↵erential operator of pth order with constant coefficients.

Taking the Fourier transform of both sides of the equation, we find

The interesting part is the left hand side, since the Fourier transform turns

di↵erentiation into multiplication. The left hand side becomes

Here L(ik) is a polynomial in ik. Thus taking the Fourier transform has changed

our ordinary di↵erential equation into the algebraic equation

L(ik)ỹ(k) = f˜(k).

f˜(k)

ỹ(k) = .

L(ik)

r2 m2 = ⇢(x),

where

Xn

d2

r2 =

i=1

dx2i

is the Laplacian on Rn .

55

6 Fourier transforms IB Methods

Z

˜(k) = e ik·x f (x) dx = F[ (x)]

Rn

Z Z

ik·x n ik·x

e r·r d x= r(e ) · r dn x

Rn Rn

Z

= r2 (e ik·x ) dn x

Rn

Z

= k·k e ik·x (x) dn x

Rn

= k · k ˜(k).

So we get

˜(k) = ⇢˜(k)

.

|k|2 + m2

So di↵erential equations are trivial in k space. The problem is, of course,

that we want our solution in x space, not k space. So we need to find a way to

restore our function back to x space.

We need to be able to express our original function f (x) in terms of its Fourier

transform f˜(k). Recall that in the periodic case where f (x) = f (x + L), we have

X

f (x) = fˆn e2inx⇡/L .

n2Z

by taking the limit L ! 1. Recall that our coefficients are defined by

Z L/2

1

fˆn = e 2in⇡x/L

f (x) dx.

L L/2

Now define

2⇡

k= .

L

So we have Z

X k L/2

inx k inx k

f (x) = e e f (u) du

2⇡ L/2

n2Z

56

6 Fourier transforms IB Methods

Z L/2 Z 1

e ixn k f (x) dx = e ix(n k) f (x) dx = f˜(n k).

L/2 1

X Z 1

k in 1

f (x) = lim e kx

f˜(n k) = eikx f˜(k) dk.

k!1 2⇡ 2⇡ 1

n2Z

So Z 1

1

f (x) = F 1

[f (k)] = eikx f˜(k) dk.

2⇡ 1

This is a really dodgy argument. We first took the limit as L ! 1 to turn

RL R1

the integral from L to 1 . Then we take the limit as k ! 0. However,

we can’t really do this, since k is defined to be 2⇡/L, and both limits have

to be taken at the same time. So we should really just take this as a heuristic

argument for why this works, instead of a formal proof.

Nevertheless, note that the inverse Fourier transform looks very similar to

1

the Fourier transform itself. The only di↵erences are a factor of 2⇡ , and the sign

of the exponent. We can get rid of the first di↵erence by evenly distributing the

factor among the Fourier transform and inverse Fourier transform. For example,

we can instead define

Z 1

1

F[f (x)] = p e ikx f (x) dx.

2⇡ 1

Then F 1 looks just like F apart from having the wrong sign.

However, we will stick to our original definition so that we don’t have to deal

with messy square roots. Hence we have the duality

1

F[f (x)] = F [f ( x)](2⇡).

This is useful because it means we can use our knowledge of Fourier transforms

to compute inverse Fourier transform.

Note that this does not occur in the case of the Fourier series. In the Fourier

series, we obtain the coefficients by evaluating an integral, and restore the original

function by taking a discrete sum. These operations are not symmetric.

Example. Recall that we defined the convolution as

Z 1

f ⇤ g(x) = f (x y)g(y) dy.

1

We then computed

F[f ⇤ g(x)] = f˜(k)f˜(g).

It follows by definition of the inverse that

F 1

[f˜(k)g̃(k)] = f ⇤ g(x).

Z 1

F[f (x)g(x)] = 2⇡ f˜(k `)g̃(`) d` = 2⇡ f˜ ⇤ g̃(k).

1

57

6 Fourier transforms IB Methods

˜(k) = ⇢˜(k)

.

|k|2 + m2

Z

1 ⇢˜(k)

(x) = F 1

[ ˜(k)] = eik·x dn x.

(2⇡)n Rn |k|2 + m2

Note that we have (2⇡)n instead of 2⇡ since we get a factor of 2⇡ for each

dimension (and the negative sign was just brought down from the original

expression).

Since we are taking the inverse Fourier transform of a product,

⇣ we know

⌘ that

the result is just a convolution of F 1 [˜⇢(k)] = ⇢(x) and F 1 |k|2 1+m2 .

Recall that when we worked with Green’s function, if we have a forcing term,

then the final solution is the convolution of our Green’s function with the forcing

term. Here we get a similar expression in higher dimensions.

Theorem (Parseval’s theorem (again)). Suppose f, g : R ! C are sufficiently

well-behaved that f˜ and g̃ exist and we indeed have F 1 [f˜] = f, F 1 [g̃] = g.

Then Z

1 ˜

(f, g) = f ⇤ (x)g(x) dx = (f , g̃).

R 2⇡

In particular, if f = g, then

1 ˜2

kf k2 = kf k .

2⇡

So taking the Fourier transform preserves the L2 norm (up to a constant

1

factor of 2⇡ ).

Proof.

Z

(f, g) = f ⇤ (x)g(x) dx

ZR1 Z 1

1

= f ⇤ (x) eikx g̃(x) dk dx

1 2⇡ 1

Z 1 Z 1

1 ⇤

= f (x)eikx dx g̃(k) dk

2⇡ 1 1

Z 1 Z 1 ⇤

1

= f (x)e ikx dx g̃(k) dk

2⇡ 1 1

Z 1

1

= f˜⇤ (k)g̃(k) dk

2⇡ 1

1 ˜

= (f , g̃).

2⇡

58

6 Fourier transforms IB Methods

Suppose f (x) is defined by

(

1 |x| < 1

f (x) =

0 |x| 1.

This function looks rather innocent. Sure it has discontinuities at ±1, but these

are not too absurd, and we are just integrating. This is certainly absolutely

integrable. We can easily compute the Fourier transform as

Z 1 Z 1

2 sin k

f˜(k) = e ikx f (x) dx = e ikx dx = .

1 1 k

Is our original function equal to the integral F 1 [f˜(l)]? We have

Z

2 sin k 1 1 ikx sin k

F 1 = e dk.

k ⇡ 1 k

This is hard to do. So let’s first see if this function is absolutely integrable. We

have

Z 1 Z 1

ikx sin k sin k

e dx = dk

1 k 1 k

Z 1

sin k

=2 dk

0 k

XN Z (n+3/4)⇡

sin k

2 dk.

n=0 (n+1/4)⇡

k

The idea here is instead of looking at the integral over the whole real line, we

just pick out segments of it. In these small segments, we know that | sin k| p12 .

So we can bound this by

Z 1 XN Z (n+3/4)⇡

ikx sin k 1 dk

e dx 2 p

1 k n=0

2 (n+1/4)⇡ k

N Z (n+3/4)⇡

p X dk

2

n=0 (n+1/4)⇡

(n + 1)⇡

p N

2X 1

= ,

2 n=0 n + 1

have to be careful when we are doing these things. Well, in theory. We actually

won’t. We’ll just ignore this issue.

59

6 Fourier transforms IB Methods

We have Z 1

F[ (x)] = eikx (x) dx = 1.

1

Hence we have Z 1

1 1

F [1] = eikx dk = (x).

2⇡ 1

diverges as a normal integral, but we can just have faith and believe this makes

sense as long as we are talking about distributions.

Similarly, from our rules of translations, we get

F[ (x a)] = eika

and

F[ei`x ] = 2⇡ (k `),

Hence we get

1 i`x i`x 1 1

F[cos(`x)] = F (e + e ) = F[ei`x ]+ F[e `x

] = ⇡[ (k `)+ (k +`)].

2 2 2

We see that highly localized functions in x-space have very spread-out behaviour

in k-space, and vice versa.

Suppose we have an amplifier that modifies an input signal I(t) to produce an

output O(t). Typically, amplifiers work by modifying the amplitudes and phases

of specific frequencies in the output. By Fourier’s inversion theorem, we know

Z

1 ˜

I(t) = ei!t I(!) d!.

2⇡

˜

This I(!) is the resolution of I(t).

We specify what the amplifier does by the transfer function R̃(!) such that

the output is given by

Z 1

1 ˜

O(t) = ei!t R̃(!)I(!) d!.

2⇡ 1

˜

Since this R̃(!)I(!) is a product, on computing O(t) = F 1 ˜

[R̃(!)I(!)], we

obtain a convolution

Z 1

O(t) = R(t u)I(u) du,

1

where Z 1

1

R(t) = ei!t R̃(!) d!

2⇡ 1

is the response function. By plugging it directly into the equation above, we see

that R(t) is the response to an input signal I(t) = (t).

60

6 Fourier transforms IB Methods

Now note that causality implies that the amplifier cannot “respond” before

any input has been given. So we must have R(t) = 0 for all t < 0.

Assume that we only start providing input at t = 0. Then

Z 1 Z t

O(t) = R(t u)I(u) du = R(t u)I(u) du.

1 0

This is exactly the same form of solution as we found for initial value problems

with the response function R(t) playing the role of the Green’s function.

To model the situation, suppose the amplifier’s operation is described by the

ordinary di↵erential equation

I(t) = Lm O(t),

where

m

X di

Lm = ai

i=0

dti

with ai 2 C. In other words, we have an mth order ordinary di↵erential equation

with constant coefficients, and the input is the forcing function. Notice that

we are not saying that O(t) can be expressed as a function of derivatives of

I(t). Instead, I(t) influences O(t) by being the forcing term the di↵erential

equation has to satisfy. This makes sense because when we send an input into

the amplifier, this input wave “pushes” the amplifier, and the amplifier is forced

to react in some way to match the input. ⇥ ⇤

Taking the Fourier transform and using F dO dt = i! Õ(!), we have

m

X

˜

I(!) = aj (i!)j Õ(!).

j=0

So we get

˜

I(!)

Õ(!) = .

a0 + i!a1 + · · · + (i!)m am

Hence, the transfer function is just

1

R̃(!) = .

a0 + i!a1 + · · · + (i!)m am

The denominator is an nth order polynomial. By the fundamental theorem of

algebra, it has m roots, say cj 2 C for j = 1, · · · , J, where cj has multiplicity

kj . Then we can write our transfer function as

J

1 Y 1

R̃(!) = .

am j=1 (i! cj )kj

kj

J X

X rj

R̃(!) =

j=1 r=1

(i! cj )r

61

6 Fourier transforms IB Methods

By linearity of the (inverse) Fourier transform, we can find O(t) if we know

the inverse Fourier transform of all functions of the form

1

.

(i! ↵)p

To compute this, there are many possible approaches. We can try to evaluate

the integral directly, learn some fancy tricks from complex analysis, or cheat,

and let the lecturer tell you the answer. Consider the function

(

e↵t t > 0

h0 (t) =

0 otherwise

Z 1 Z 1

i!t 1

h̃o (!) = e h0 (t) dt = e(↵ i!)t

dt =

1 0 i! ↵

! 0 as t ! 1). Similarly, we can let

(

te↵t t>0

h1 (t) =

0 otherwise

Then we get

d 1

h̃1 (!) = F[th0 (t)] = i F[h0 (t)] = .

d! (i! ↵)2

Proceeding inductively, we can define

( p

t ↵t

p! e t>0

hp (t) = ,

0 otherwise

1

h̃p (!) = ,

(i! ↵)p+1

again provided Re(↵) < 0. So the response function is a linear combination

of these functions hp (t) (if any of the roots cj have non-negative real part,

then it turns out the system is unstable, and is better analysed by the Laplace

transform). We see that the response function does indeed vanish for all t < 0.

In fact, each term (except h0 ) increases from zero at t = 0 to rise to some

maximum before eventually decaying exponentially.

Fourier transforms can also be used to solve ordinary di↵erential equations

on the whole real line R, provided the functions are sufficiently well-behaved.

For example, suppose y : R ! R solves

y 00 A2 y = f (x)

f˜(k)

ỹ(k) = .

k2 + A2

62

6 Fourier transforms IB Methods

Since this is a product, the inverse Fourier transform will be a convolution of f (x)

1

and an object whose Fourier transform is k2 +A 2 . Again, we’ll cheat. Consider

1 µ|x|

h(x) = e .

2µ

with µ > 0. Then

Z 1

1

f˜(k) = e ikx e µ|x| dx

2µ 1

Z 1

1

= Re e (µ+ik)x dx

µ 0

1 1

= Re

µ ik + µ

1

= 2 .

µ + k2

Therefore we get Z 1

1 A|x u|

y(x) = e f (u) du.

2A 1

So far, we have done Fourier analysis over some abelian groups. For example,

we’ve done it over S 1 , which is an abelian group under multiplication, and R,

which is an abelian group under addition. We will next look at Fourier analysis

over another abelian group, Zm , known as the discrete Fourier transform.

Recall that the Fourier transform is defined as

Z

˜

f (k) = e ikx f (x) dx.

R

To find the Fourier transform, we have to know how to perform the integral. If

we cannot do the integral, then we cannot do the Fourier transform. However,

it is usually difficult to perform integrals for even slightly more complicated

functions.

A more serious problem is that we usually don’t even have a closed form of

the function f for us to perform the integral. In real life, f might be some radio

signal, and we are just given some data of what value f takes at di↵erent points..

There is no hope that we can do the integral exactly.

Hence, we first give ourselves a simplifying assumption. We suppose that f

is mostly concentrated in some finite region [ R, S]. More precisely, |f (x)| ⌧ 1

for x 62 [ R, S] for some R, S > 0. Then we can approximate our integral as

Z S

f˜(k) = e ikx

f (x) dx.

R

R+S

x = xj = R+j

N

63

6 Fourier transforms IB Methods

N 1

R+S X

f˜(k) ⇡ f (xj )e ikxj

.

N j=0

Similarly, our computer can only store the result f˜(k) for some finite list of

k. Let’s choose these to be at

2⇡m

k = km = .

R+S

Then after some cancellation,

N

X1

R + S 2⇡imR 2⇡i

f˜(km ) ⇡ e R+S f (xj )e N jm

N j=0

2 3

N 1

2⇡imR 1 X

= (R + S)e R+S 4 f (xj )! jm 5

,

N j=0

where

2⇡i

!=e N

N 1

1 X jm

F (m) = f (xj )! .

N j=0

Of course, we’ve thrown away lots of information about our original function

f (x), since we made approximations all over the place. For example, we have

already lost all knowledge of structures varying more rapidly than our sampling

interval R+SN . Also, F (m + N ) = F (m), since !

N

= 1. So we have “forced” some

˜

periodicity into our function F , while f (k) itself was not necessarily periodic.

For the usual Fourier transform, we were able to re-construct our original

function from the f˜(k), but here we clearly cannot. However, if we know the

F (m) for all m = 0, 1, · · · , N 1, then we can reconstruct the exact values of

f (xj ) for all j by just solving linear equations. To make this more precise, we

want to put what we’re doing into the linear algebra framework we’ve been using.

Let G = {1, !, ! 2 , · · · , ! N 1 }. For our purposes below, we can just treat this as

a discrete set, and ! i are just meaningless symbols that happen to visually look

like the powers of our N th roots of unity.

Consider a function g : G ! C defined by

g(! j ) = f (xj ).

This is actually nothing but just a new notation for f (xj ). Then using this new

notation, we have

N 1

1 X

F (m) = ! jm g(! j ).

N j=0

64

6 Fourier transforms IB Methods

isomorphic to CN . This has an inner product

N 1

1 X ⇤ j

(f, g) = f (! )g(! j ).

N j=0

(f, c1 g1 + c2 g2 ) = c1 (f, g1 ) + c2 (f, g2 )

(f, f ) 0 with equality i↵ f (!j ) = 0

em (! j ) = ! jm .

with respect to our inner product. To show this, we can compute

N 1

1 X ⇤ j

(em , em ) = e (! )em (! j )

N j=0 m

N 1

1 X jm

= ! ! jm

N j=0

N 1

1 X

= 1

N j=0

= 1.

For n 6= m, we have

N 1

1 X ⇤ j m j

(en , em ) = e (! )e (! )

N j=0 n

N 1

1 X j(m n)

= !

N j=0

1 1 ! (m n)N

= .

N 1 !m n

So the numerator is 0. However, since n 6= m, m n 6= 0. So the denominator is

non-zero. So we get 0. Hence

(en , em ) = 0.

N 1 N 1

1 X jm 1 X ⇤ j

F (m) = ! f (xj ) = e (! )g(! j ) = (em , g).

N j=0 N j=0 m

65

6 Fourier transforms IB Methods

N

X1 N

X1

g= (em , g)em = F (m)em .

m=0 m=0

N

X1

f (xj ) = g(! j ) = F (m)em (! j ).

m=0

If we forget about our f s and just look at the g, what we have e↵ectively done is

take the Fourier transform of functions taking values on G = {1, !, · · · , ! N 1 } ⇠

=

ZN .

This is exactly analogous to what we did for the Fourier transform, except

that everything is now discrete, and we don’t have to worry about convergence

since these are finite sums.

What we’ve said so far is we’ve defined

N 1

1 X mj

F (m) = ! g(! j ).

N j=0

To compute this directly, even given the values of ! mj for all j, m, this takes

N 1 additions and N + 1 multiplications. This is 2N operations for a single

value of m. To know F (m) for all m, this takes 2N 2 operations.

This is a problem. Historically, during the cold war, especially after the

Cuban missile crisis, people were in fear that the world will one day go into

a huge nuclear war, and something had to be done to prevent this. Thus the

countries decided to come up with a treaty to ensure people don’t perform

nuclear testings anymore.

However, the obvious problem is that we can’t easily check if other countries

are doing nuclear tests. Nuclear tests are easy to spot if the test is done above

ground, since there will be a huge explosion. However, if it is done underground,

it is much harder to test.

They then came up with the idea to put some seismometers all around

the world, and record vibrations in the ground. To distinguish normal crust

movements from nuclear tests, they wanted to take the Fourier transform and

analyze the frequencies of the vibrations. However, they had a large amount of

data, and the value of N is on the order of magnitude of a few million. So 2N 2

will be a huge number that the computers at that time were not able to handle.

So they need to find a trick to perform Fourier transforms quickly. This is known

as the fast Fourier transform. Note that this is nothing new mathematically.

This is entirely a computational trick.

66

6 Fourier transforms IB Methods

2M 1

1 X

F (m) = ! jm g(! j )

2M j=0

" M 1 #

1 1 X

= ! 2km g(! 2k ) + ! (2k+1)m

g(! 2k+1

) .

2 M

k=0

We then have

" M 1 M 1

#

1 1 X km k ! m X km k

F (m) = ⌘ G(⌘ ) + ⌘ H(⌘ )

2 M M

k=0 k=0

1 m

= [G̃(m) + ! H̃(m)].

2

Suppose we are given the values of G̃(m) and H̃(m) and ! m for all m =

{0, · · · , N 1}. Then we can compute F (m) using 3 operations per value of m.

So this takes 3 ⇥ N = 6M operations for all M .

We can compute ! m for all m using at most 2N operations, and suppose it

takes PM operations to compute G̃(m) for all m. Then the number of operations

needed to compute F (m) is

PN 4N log2 N ⌧ N 2 .

So by breaking the Fourier transform apart, we are able to greatly reduce the

number of operations needed to compute the Fourier transform. This is used

nowadays everywhere for everything.

67

7 More partial di↵erential equations IB Methods

7.1 Well-posedness

Recall that to find a unique solution of Laplace’s equation r2 = 0 on a bounded

domain ⌦ ✓ Rn , we imposed the Dirichlet boundary condition |@⌦ = f or the

Neumann boundary condition n · r |@⌦ = g.

For the heat equation @@t = r2 on ⌦ ⇥ [0, 1), we asked for @|@⌦⇥[0,1) = f

and also |⌦⇥{0} = g.

2

For the wave equation @@t2 = c2 r2 on ⌦⇥[0, 1), we imposed |@⌦⇥[0,1) = f ,

|⌦⇥{0} = g and also @t |⌦⇥{0} = h.

All the boundary and initial conditions restrict the value of on some co-

dimension 1 surface, ie. a surface whose dimension is one less than the domain.

This is known as Cauchy data for our partial di↵erential equation.

Definition (Well-posed problem). A partial di↵erential equation problem is

said to be well-posed if its Cauchy data means

(ii) The solution is unique;

(iii) A “small change” in the Cauchy data leads to a “small change” in the

solution.

“small change”. To do this properly, we need to impose some topology on our

space of functions, which is some technicalities we will not go into. Instead, we

can look at a simple example.

Suppose we have the heat equation @t = r2 . We know that whatever

starting condition we have, the solution quickly smooths out with time. Any

spikiness of the initial conditions get exponentially suppressed. Hence this is

a well-posed problem — changing the initial condition slightly will result in a

similar solution.

However, if we take the heat equation but run it backwards in time, we get

a non-well-posed problem. If we provide a tiny, small change in the “ending

condition”, as we go back in time, this perturbation grows exponentially, and

the result could vary wildly.

Another example is as follows: consider the Laplace’s equation r2 on the

upper half plane (x, y) 2 R ⇥ R 0 subject to the boundary conditions

However, if we take g(x) = sinAAx , then we get the unique solution

sin(Ax) sinh(Ay)

(x, y) = .

A2

So far so good. However, now consider the limit as A ! 1. Then

sin(Ax)

g(x) = ! 0.

A

68

7 More partial di↵erential equations IB Methods

2A , y , we get

⇣ ⇡ ⌘ sinh(Ay)

,y = ! A 2 eAy ,

2A A2

which is unbounded. So as we take the limit as our boundary conditions g(x) ! 0,

we get an unbounded solution.

How can we make sure this doesn’t happen? We’re first going to look at first

order equations.

Curves in the plane

Suppose we have a curve on the plane x : R ! R2 given by s 7! (x(s), y(s)).

Definition (Tangent vector). The tangent vector to a smooth curve C given

by x : R ! R2 with x(s) = (x(s), y(s)) is

✓ ◆

dx dy

, .

ds ds

If we have some quantity : R2 ! R, then its value along the curve C is just

(x(s), y(s)).

A vector field V(x, y) : R2 ! R2 defines a family of curves. To imagine this,

suppose we are living in a river. The vector field tells us the how the water flows

at each point. We can then obtain a curve by starting a point and flow along

with the water. More precisely,

Definition (Integral curve). Let V(x, y) : R2 ! R2 ⇣ be a vector

⌘ field. The

dy

integral curves associated to V are curves whose tangent dx ,

ds ds is just V(x, y).

For sufficiently regular vector fields, we can fill the space with di↵erent curves.

We will parametrize which curve we are on by the parameter t. More precisely,

we have a curve B = (x(t), y(t)) that is transverse (ie. nowhere parallel) to our

family of curves, and we can label the members of our family by the value of t

at which they intersect B.

B(t)

C4

C3

C2

C1

We can thus label our family of curves by (x(s, t), y(s, t)). If the Jacobian

@x @y @x @y

J= 6= 0,

@s @t @t @s

then we can invert this to find (s, t) as a function of (x, y), ie. at any point, we

know which curve we are on, and how far along the curve we are.

This means we now have a new coordinate system (s, t) for our points in

R2 . It turns out by picking the right vector field V, we can make di↵erential

equations much easier to solve in this new coordinate system.

69

7 More partial di↵erential equations IB Methods

Suppose : R2 ! R obeys

@ @

a(x, y) + b(x, y) = 0.

@x @y

We can define our vector field as

✓ ◆

a(x, y)

V(x, y) = .

b(x, y)

V · r = 0.

@ dx(s) @ dy(s) @

= + =V·r ,

@s ds @x ds @y

where the integral curves of V are determined by

@x @y

= a(x, y), = b(x, y).

@s t @s t

Hence our partial di↵erential equation just becomes the equation

@

= 0.

@s t

along a transverse curve B. We pick our transverse curve as s = 0, and we

suppose we are given the initial data

(0, t) = h(t).

(s, t) = h(t).

Then (x, y) is given by inverting the characteristic equations to find t(x, y).

We do a few examples to make this clear.

Example (Trivial). Suppose obeys

@

= 0.

@x y

(0, y) = f (y).

The solution is obvious, since the di↵erential equation tells us the function is

just a function of y, and then the solution is obviously (x, y) = h(y). However,

70

7 More partial di↵erential equations IB Methods

we can try to use the method of characteristics to practice our skills. Our vector

field is given by ✓ ◆ ✓ dx ◆

1 ds

V= = dy .

0 ds

Hence, we get

dx dy

= 1, = 0.

ds ds

So we have

x = s + c, y = d.

Our initial curve is given by y = t, x = 0. Since we want this to be the curve

s = 0, we find our integral curves to be

x = s, y = t.

@

= 0, (s, t) = h(t) = f (t).

@s t

So we know that

(x, y) = f (y).

@ @

ex + = 0.

@x @y

with the boundary condition

(x, 0) = cosh x.

✓ x◆

e

V=

1

This gives

dx dy

= ex , = 1.

ds ds

Thus we have

x

e = s + c, y = s + d.

We pick our curve as x = t, y = 0. So our relation becomes

x

e = s + e t, y = s.

We thus have

@ x

= 0, (s, t) = (0, t) = cosh(t) = cosh[ ln(y + e )]

@s t

So done.

71

7 More partial di↵erential equations IB Methods

@x + 2@y = yex

with (x, x) = sin x.

We can still use the method of characteristics. We have

✓ ◆

1

u= .

2

So the characteristic curves obey

dx dy

= 1, = 2.

ds ds

This gives

x = s + t, y = 2s + t

so that x = y = t at s = 0. We can invert this to obtain the relations

s=y x, t = 2x y.

The partial di↵erential equation now becomes

d

= u · r = yex = (2s + t)es+t

ds t

constant. We have the boundary conditions

(s = 0, t) = sin t.

So we get

(x(s, t), y(s, t)) = (2 t)et (1 es ) + sin t + 2ses+t .

Putting it in terms of x and y, we have

(x, y) = (2 2x + y)e2x y

+ sin(2x y) + (y 2)ex .

We see that our Cauchy data (x, x) = sin x should be specified on a curve

B that intersects each characteristic curve exactly once.

If we tried to use a characteristic curve B that intersects the characteristics

multiple times, if we want a solution at all, we cannot specify data freely along

B. its values at the points of intersection have to compatible with the partial

di↵erential equation.

For example, if we have a homogeneous equation, we saw the solution will

be constant along the same characteristic. Hence if our Cauchy data requires

the solution to take di↵erent values on the same characteristic, we will have no

solution.

Moreover, even if it is compatible, if B does not intersect all characteristics,

we will not get a unique solution. So the solution is not fixed along such

characteristics.

On the other hand, if B intersects each characteristic curve transversely,

the problem is well-posed. So there exists a unique solution, at least in the

neighbourhood of B. Notice that data is never transferred between characteristic

curves.

72

7 More partial di↵erential equations IB Methods

tions

Whether the method of characteristics works for a 2nd order partial di↵erential

equation, or even higher order ones, depends on the “type” of the di↵erential

equation.

To classify these di↵erential equations, we need to define

Definition (Symbol and principal part). Let L be the general 2nd order di↵er-

ential operator on Rn . We can write it as

n

X X n

@2 @

L= aij (x) + bi (x) i + c(X),

i,j=1

@xi @xj i=1

@x

We define the symbol (k, x) of L to be

n

X n

X

(k, x) = aij (x)ki kj + bi (x)ki + c(x).

i,j=1 i=1

The principal part of the symbol is the leading term

n

X

p

(k, x) = aij (x)ki kj .

i,j=1

Example. If L = r2 , then

n

X

p

(k, x) = (k, x) = (ki )2 .

i=1

If L is the heat operator (where we think of the last coordinate xn as the time,

and others as space), then the operator is given by

n

X1

@ @2

L= .

@xn i=1

@xi2

n

X1

(k, x) = kn (ki )2 ,

i=1

n

X1

p

(k, x) = (ki )2 .

i=1

Note that the symbol is closely related to the Fourier transform of the

di↵erential operator, since both turn di↵erentiation into multiplication. Indeed,

they are equal if the coefficients are constant. However, we define this symbol

for arbitrary di↵erential operators with non-constant coefficients.

73

7 More partial di↵erential equations IB Methods

p

(k, x) = kT Ak,

where A(x) has elements aij (x). Recall that a real symmetric matrix (such as

A) has all real eigenvalues. So we define the following:

Definition (Elliptic, hyperbolic, ultra-hyperbolic and parabolic di↵erential

operators). We say a di↵erential operator L is elliptic if all eigenvalues have the

same sign.

We say it is hyperbolic if all but one eigenvalues have the same sign.

We say it is ultra-hyperbolic if there are more than one eigenvalues of each

sign.

We say it is parabolic if A is degenerate, ie. has a zero eigenvalue.

These names are supposed to remind us of conic sections, and indeed if we

think of kT Ak as an equation in k, then we get a conic section.

Note that the type of L depends on the values of aij (x), which in turn

depends on the value of x. Hence it is possible that L has di↵erent types at

di↵erent regions of space.

Example. Let

@2 @2 @2 @ @

L = a(x, y) 2

+ 2b(x, y) + c(x, y) 2

+ d(x, y) + e(x, y) + f (x, y).

@x @x@y @y @x @y

Then the principal part of the symbol is

✓ ◆✓ ◆

p a(x, y) b(x, y) kx

(k, x) = kx ky

b(x, y) c(x, y) ky

b2 ac = 0.

Note that since we only have two dimensions and hence only two dimensions,

we cannot possibly have an ultra-hyperbolic equation.

Characteristic surfaces

Definition (Characteristic surface). Given a di↵erential operator L, let

f (x1 , x2 , · · · , xn ) = 0

n

X @f @f

aij (x) = (rf )T A(rf ) = 0.

i,j=1

@xi @xj

In the case where we only have two dimensions, a characteristic surface is just a

curve.

We see that the characteristic equation restricts what values rf can take.

Recall that rf is the normal to the surface C. So in general, at any point, we

can find what the normal of the surface should be, and stitch these together to

form the full characteristic surfaces.

74

7 More partial di↵erential equations IB Methods

For an elliptic operator, all the eigenvalues of A have the same sign. So there

are no non-trivial real solutions to this equation (rf )T A(rf ) = 0. Consequently,

elliptic operators have no real characteristics. So the method of characteristics

would not be of any use when studying, say, Laplace’s equation, at least if we

want to stay in the realm of real numbers.

If L is parabolic, we for simplicity assume that f has exactly one zero

eigenvector, say n, and the other eigenvalues have the same sign. This is the

case when, say, there are just two dimensions. So we have An = nT A = 0. We

normalize n such that n · n = 1.

For any rf , we can always decompose it as

rf = n(n · rf ) + [rf n · (n · rf )].

This is a relation that is trivially true, since we just add and subtract the same

thing. Note, however, that the first term points along n, while the latter term is

orthogonal to n. To save some writing, we adopt the notation

r? f = rf n(n · rf ).

So we have

rf = n(n · rf ) + rf? .

Then we can compute

(rf )T A(rf ) = [n(n · rf ) + r? f ]T A[n(n · rf ) + r? f ]

= (r? f )T A(r? f ).

there are no non-trivial solutions. Hence, if f defines a characteristic surface,

then r? f = 0. In other words, f is parallel to n. So at any point, the normal to

a characteristic surface must be n, and there is only one possible characteristic.

If L is hyperbolic, we assume all but one eigenvalues are positive, and let

be the unique negative eigenvalue. We let n be the corresponding unit

eigenvector, where we normalize it such that n · n = 1. We say f is characteristic

if

0 = (rf )T A(rf )

= [n(n · rf ) + r? f ]T A[n(n · rf ) + r? f ]

= (n · rf )2 + (r? f )T A(r? f ).

Consequently, for this to be a characteristic, we need

r

(r? f )T A(r? f )

n · rf = ± .

have two characteristic surfaces through any point.

This is not too helpful in general. However, in the case where we have two

dimensions, we can find the characteristic curves explicitly. Suppose our curve

is given by f (x, y) = 0. We can write y = y(x). Then since f is constant along

each characteristic, by the chain rule, we know

@f @f dy

0= + .

@x @y dx

75

7 More partial di↵erential equations IB Methods

p

dy b± b2 ac

= .

dx a

We now see explicitly how the type of the differential equation influences the number of characteristics — if $b^2 - ac > 0$, then we obtain two distinct differential equations and hence two families of characteristics; if $b^2 - ac = 0$, then we only have one equation; if $b^2 - ac < 0$, then there are no real characteristics.

Example. Consider
$$\frac{\partial^2 \phi}{\partial y^2} - xy \frac{\partial^2 \phi}{\partial x^2} = 0$$
on $\mathbb{R}^2$. Then $a = -xy$, $b = 0$, $c = 1$. So $b^2 - ac = xy$. So the type is elliptic if $xy < 0$, hyperbolic if $xy > 0$, and parabolic if $xy = 0$.

In the regions where it is hyperbolic, we find
$$\frac{b \pm \sqrt{b^2 - ac}}{a} = \pm \frac{1}{\sqrt{xy}},$$
the two signs labelling the two families of characteristics. So the characteristics satisfy
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \pm \frac{1}{\sqrt{xy}}.$$
This is separable, and integrating gives the characteristic curves
$$\frac{1}{3} y^{3/2} \pm x^{1/2} = c.$$

We now let
$$u = \frac{1}{3} y^{3/2} + x^{1/2}, \qquad v = \frac{1}{3} y^{3/2} - x^{1/2}.$$
Then the equation becomes
$$\frac{\partial^2 \phi}{\partial u \partial v} + \text{lower order terms} = 0.$$
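As a quick sanity check (a small sympy sketch, not part of the original notes), we can verify that the curves $u = \text{const}$ and $v = \text{const}$ found above really are characteristics, i.e. that $f = \frac{1}{3}y^{3/2} \pm x^{1/2}$ satisfies $af_x^2 + 2bf_xf_y + cf_y^2 = 0$ with $a = -xy$, $b = 0$, $c = 1$:

```python
# Sketch: verify the characteristics of phi_yy - x*y*phi_xx = 0 symbolically.
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # work in the hyperbolic region xy > 0
a, b, c = -x*y, 0, 1

for sign in (+1, -1):
    f = sp.Rational(1, 3)*y**sp.Rational(3, 2) + sign*sp.sqrt(x)
    fx, fy = sp.diff(f, x), sp.diff(f, y)
    # characteristic condition (grad f)^T A (grad f) = 0
    print(sp.simplify(a*fx**2 + 2*b*fx*fy + c*fy**2))  # prints 0 both times
```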

Example. Consider the wave equation
$$\frac{\partial^2 \phi}{\partial t^2} - c^2 \frac{\partial^2 \phi}{\partial x^2} = 0$$
on $\mathbb{R}^{1,1}$. Then the equation is hyperbolic everywhere, and the characteristic curves are $x \pm ct = \text{const}$. Let's look for a solution to the wave equation that obeys
$$\phi(x, 0) = f(x), \qquad \partial_t \phi(x, 0) = g(x).$$
Now put $u = x - ct$, $v = x + ct$. Then the wave equation becomes
$$\frac{\partial^2 \phi}{\partial u \partial v} = 0.$$

So the general solution is $\phi = G(u) + H(v) = G(x - ct) + H(x + ct)$ for arbitrary functions $G$ and $H$. Imposing the initial conditions fixes these, and we obtain d'Alembert's solution
$$\phi(x, t) = \frac{1}{2}[f(x - ct) + f(x + ct)] + \frac{1}{2c} \int_{x - ct}^{x + ct} g(y)\, \mathrm{d}y.$$

Note that the value of $\phi$ at any point $(x, t)$ is completely determined by $f, g$ in the interval $[x - ct, x + ct]$. This is known as the (past) domain of dependence of the solution at $(x, t)$, written $D^-(x, t)$. Similarly, at any time $t$, the initial data at $(x_0, 0)$ only affects the solution within the region $x_0 - ct \leq x \leq x_0 + ct$. This is the range of influence of data at $x_0$, written $D^+(x_0)$.

[Diagram: the past domain of dependence $D^-(p)$ of a point $p$, and the future range of influence $D^+(S)$ of a region $S$.]
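As an illustration (a numerical sketch, not from the notes; the profiles $f$ and $g$ below are made up), we can evaluate d'Alembert's formula directly and check that it satisfies the wave equation by finite differences:

```python
# Sketch: evaluate d'Alembert's solution and check the PDE residual.
import numpy as np

c = 1.0
f = lambda x: np.exp(-x**2)        # made-up initial displacement
g = lambda x: x*np.exp(-x**2)      # made-up initial velocity
G = lambda x: -0.5*np.exp(-x**2)   # an antiderivative of g (else integrate numerically)

def phi(x, t):
    # 0.5*[f(x - ct) + f(x + ct)] + (1/2c) * integral of g over [x - ct, x + ct]
    return 0.5*(f(x - c*t) + f(x + c*t)) + (G(x + c*t) - G(x - c*t))/(2*c)

# check phi_tt - c^2 phi_xx ~ 0 at a sample point, by central differences
x0, t0, h = 0.3, 0.7, 1e-3
res = ((phi(x0, t0 + h) - 2*phi(x0, t0) + phi(x0, t0 - h))
       - c**2*(phi(x0 + h, t0) - 2*phi(x0, t0) + phi(x0 - h, t0)))/h**2
print(res)  # small: zero up to finite-difference error
```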

Green's functions for the heat equation

Suppose $\phi : \mathbb{R}^n \times [0, \infty) \to \mathbb{R}$ solves the heat equation
$$\partial_t \phi = D \nabla^2 \phi, \qquad \phi|_{\mathbb{R}^n \times \{0\}} = f.$$
To solve this, we take the Fourier transform of the heat equation in the spatial variables. For simplicity of notation, we take $n = 1$. Writing $\mathcal{F}[\phi(x, t)] = \tilde{\phi}(k, t)$, we get
$$\partial_t \tilde{\phi}(k, t) = -Dk^2 \tilde{\phi}(k, t)$$
with the boundary conditions
$$\tilde{\phi}(k, 0) = \tilde{f}(k).$$

Viewed as a function of $t$, this is just an ordinary differential equation, with $k$ acting as a parameter for each $k$. So we have
$$\tilde{\phi}(k, t) = \tilde{f}(k) e^{-Dk^2 t}.$$


Taking the inverse Fourier transform, we get
$$\phi(x, t) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{ikx} \tilde{f}(k) e^{-Dk^2 t}\, \mathrm{d}k.$$
This is the inverse Fourier transform of a product. This thus gives the convolution
$$\phi(x, t) = f * \mathcal{F}^{-1}[e^{-Dk^2 t}].$$

So we are done if we can find the inverse Fourier transform of the Gaussian. This is an easy exercise on example sheet 3, where we find
$$\mathcal{F}[e^{-a^2 x^2}] = \frac{\sqrt{\pi}}{a} e^{-k^2/4a^2}.$$
So setting
$$a^2 = \frac{1}{4Dt},$$
we get
$$\mathcal{F}^{-1}[e^{-Dk^2 t}] = \frac{1}{\sqrt{4\pi D t}} \exp\left(-\frac{x^2}{4Dt}\right).$$

We shall call this $S_1(x, t)$, where the subscript 1 tells us we are in $1 + 1$ dimensions. This is known as the fundamental solution of the heat equation. We then get
$$\phi(x, t) = \int_{-\infty}^{\infty} f(y) S_1(x - y, t)\, \mathrm{d}y.$$
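It is easy to check symbolically that $S_1$ really solves the heat equation for $t > 0$ and carries unit total heat (a sympy sketch, not part of the notes):

```python
# Sketch: S_1 satisfies the heat equation and integrates to 1.
import sympy as sp

x = sp.symbols('x', real=True)
t, D = sp.symbols('t D', positive=True)
S1 = sp.exp(-x**2/(4*D*t))/sp.sqrt(4*sp.pi*D*t)

print(sp.simplify(sp.diff(S1, t) - D*sp.diff(S1, x, 2)))  # 0
print(sp.integrate(S1, (x, -sp.oo, sp.oo)))               # 1 for all t > 0
```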

Example. Suppose our initial data is
$$f(x) = \phi_0 \delta(x).$$
So we start with a really cold room, with a huge spike in temperature at the middle of the room. Then we get
$$\phi(x, t) = \frac{\phi_0}{\sqrt{4\pi D t}} \exp\left(-\frac{x^2}{4Dt}\right).$$

What this shows, as we've seen before, is that if we start with a delta function, then as time evolves, we get a Gaussian that gets shorter and broader.

Now note that at $t = 0$, $\phi$ vanishes everywhere outside the origin. However, after any infinitesimally small time $t$, $\phi$ becomes non-zero everywhere, instantly. Unlike the wave equation, information travels instantly to all of space, i.e. heat propagates arbitrarily fast according to this equation (of course in reality, it doesn't). Fundamentally, this is because the heat equation is parabolic, and so has only one characteristic.

Now suppose instead that $\phi$ satisfies the inhomogeneous, forced, heat equation
$$\partial_t \phi - D\nabla^2 \phi = F(x, t), \qquad \phi(x, 0) = 0.$$
Physically, this represents having an external heat source $F(x, t)$ somewhere, with the temperature starting at zero.


Note that if we can solve this, then we have completely solved the heat equation. If we have both an inhomogeneous equation and an inhomogeneous initial condition, then we can solve the forced problem with homogeneous initial conditions to get $\phi_F$, and solve the unforced equation with inhomogeneous initial conditions to get $\phi_H$. Then the sum $\phi = \phi_F + \phi_H$ solves the full equation.

As before, we take the Fourier transform of the forced equation with respect to the spatial variables. As before, we will just do it in the case with one spatial dimension. We find
$$\partial_t \tilde{\phi}(k, t) + Dk^2 \tilde{\phi}(k, t) = \tilde{F}(k, t),$$
with
$$\tilde{\phi}(k, 0) = 0.$$
Again, this is an ordinary differential equation in $t$. Using an integrating factor, we can rewrite this as
$$\frac{\partial}{\partial t}\left[e^{Dk^2 t} \tilde{\phi}(k, t)\right] = e^{Dk^2 t} \tilde{F}(k, t).$$

The solution is then
$$\tilde{\phi}(k, t) = e^{-Dk^2 t} \int_0^t e^{Dk^2 u} \tilde{F}(k, u)\, \mathrm{d}u.$$

The Green's function corresponds to the point forcing $F(x, t) = \delta(x - y)\delta(t - \tau)$, i.e. $\tilde{F}(k, u) = e^{-iky}\delta(u - \tau)$. Then
$$\tilde{G}(k, t; y, \tau) = e^{-Dk^2 t} \int_0^t e^{Dk^2 u} e^{-iky} \delta(u - \tau)\, \mathrm{d}u,$$
and doing the integral gives
$$\tilde{G}(k, t; y, \tau) = \begin{cases} 0 & t < \tau \\ e^{-iky} e^{-Dk^2 (t - \tau)} & t > \tau \end{cases} = \Theta(t - \tau) e^{-iky} e^{-Dk^2 (t - \tau)}.$$

Inverting the Fourier transform, we get
$$G(x, t; y, \tau) = \frac{\Theta(t - \tau)}{2\pi} \int_{\mathbb{R}} e^{ik(x - y)} e^{-Dk^2(t - \tau)}\, \mathrm{d}k.$$
This integral is just the inverse Fourier transform of the Gaussian with a phase shift. So we end up with

$$G(x, t; y, \tau) = \frac{\Theta(t - \tau)}{\sqrt{4\pi D (t - \tau)}} \exp\left(-\frac{(x - y)^2}{4D(t - \tau)}\right) = \Theta(t - \tau)\, S_1(x - y; t - \tau).$$

By linearity, the solution for a general forcing $F$ is
$$\phi(x, t) = \int_0^t \int_{\mathbb{R}} F(y, \tau)\, G(x, t; y, \tau)\, \mathrm{d}y\, \mathrm{d}\tau.$$


It is interesting that the solution to the forced equation involves the same function

S1 (x, t) as the homogeneous equation with inhomogeneous boundary conditions.

In general, $S_n(\mathbf{x}, t)$ solves
$$\frac{\partial S_n}{\partial t} - D\nabla^2 S_n = 0$$
with boundary conditions $S_n(\mathbf{x}, 0) = \delta^{(n)}(\mathbf{x} - \mathbf{y})$, and we can find
$$S_n(\mathbf{x}, t) = \frac{1}{(4\pi D t)^{n/2}} \exp\left(-\frac{|\mathbf{x} - \mathbf{y}|^2}{4Dt}\right).$$
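As a quick check (a sympy sketch, not part of the notes; we take $\mathbf{y} = \mathbf{0}$ without loss of generality), $S_3$ solves the three-dimensional heat equation and has unit total integral:

```python
# Sketch: the 3d fundamental solution solves the heat equation and
# carries unit total heat.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
t, D, r = sp.symbols('t D r', positive=True)

S3 = sp.exp(-(x**2 + y**2 + z**2)/(4*D*t))/(4*sp.pi*D*t)**sp.Rational(3, 2)
lap = sp.diff(S3, x, 2) + sp.diff(S3, y, 2) + sp.diff(S3, z, 2)
print(sp.simplify(sp.diff(S3, t) - D*lap))  # 0

# total heat: integrate the radial profile over all of R^3
S3r = sp.exp(-r**2/(4*D*t))/(4*sp.pi*D*t)**sp.Rational(3, 2)
print(sp.simplify(sp.integrate(S3r*4*sp.pi*r**2, (r, 0, sp.oo))))  # 1
```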

Then the general solution is
$$\phi(\mathbf{x}, t) = \int_{\mathbb{R}^n} f(\mathbf{y}) S_n(\mathbf{x} - \mathbf{y}, t)\, \mathrm{d}^n y.$$
Similarly, the Green's function for the forced problem solves
$$\frac{\partial G_n}{\partial t} - D\nabla^2 G_n = \delta(t - \tau)\, \delta^{(n)}(\mathbf{x} - \mathbf{y}),$$
with the boundary condition $G_n(\mathbf{x}, 0; \mathbf{y}, \tau) = 0$. The solution is
$$G_n(\mathbf{x}, t; \mathbf{y}, \tau) = \Theta(t - \tau)\, S_n(\mathbf{x} - \mathbf{y}, t - \tau).$$

Given our Green's function, the general solution to the forced heat equation
$$\frac{\partial \phi}{\partial t} - D\nabla^2 \phi = F(\mathbf{x}, t), \qquad \phi(\mathbf{x}, 0) = 0$$
is just
$$\phi(\mathbf{x}, t) = \int_0^\infty \int_{\mathbb{R}^n} F(\mathbf{y}, \tau) G(\mathbf{x}, t; \mathbf{y}, \tau)\, \mathrm{d}^n y\, \mathrm{d}\tau = \int_0^t \int_{\mathbb{R}^n} F(\mathbf{y}, \tau) G(\mathbf{x}, t; \mathbf{y}, \tau)\, \mathrm{d}^n y\, \mathrm{d}\tau.$$

We can write this as
$$\phi(\mathbf{x}, t) = \int_0^t \phi_F(\mathbf{x}, t; \tau)\, \mathrm{d}\tau,$$
where
$$\phi_F(\mathbf{x}, t; \tau) = \int_{\mathbb{R}^n} F(\mathbf{y}, \tau) S_n(\mathbf{x} - \mathbf{y}, t - \tau)\, \mathrm{d}^n y$$
solves the homogeneous heat equation for $t > \tau$ with the "initial" condition $\phi_F(\mathbf{x}, \tau; \tau) = F(\mathbf{x}, \tau)$ imposed at time $\tau$.

Hence in general, we can think of the forcing term as providing a whole

sequence of “initial conditions” for all t > 0. We then integrate over the times

at which these conditions were imposed to find the full solution to the forced

problem. This interpretation is called Duhamel’s principle.
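As a concrete illustration of Duhamel's principle (a numerical sketch, not from the notes; the forcing $F$ below is made up), we can superpose heat kernels launched at each forcing time $\tau$:

```python
# Sketch: solve phi_t - D phi_xx = F with phi(x, 0) = 0 by superposing
# fundamental solutions over the forcing times (Duhamel's principle).
import numpy as np

D = 1.0
S1 = lambda x, t: np.exp(-x**2/(4*D*t))/np.sqrt(4*np.pi*D*t)
F = lambda y, tau: np.exp(-y**2)*(tau < 1.0)   # source switched off at tau = 1

y = np.linspace(-20.0, 20.0, 2001)

def phi(x, t, n_tau=400):
    # midpoint rule in tau avoids the (integrable) kernel spike at tau = t
    taus = (np.arange(n_tau) + 0.5)*t/n_tau
    inner = [np.trapz(F(y, tau)*S1(x - y, t - tau), y) for tau in taus]
    return np.sum(inner)*t/n_tau

print(phi(0.0, 0.5), phi(0.0, 2.0))  # the solution builds up while the source is on
```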


Green's functions for the wave equation

Suppose $\phi : \mathbb{R}^n \times [0, \infty) \to \mathbb{C}$ solves the inhomogeneous wave equation
$$\frac{\partial^2 \phi}{\partial t^2} - c^2 \nabla^2 \phi = F(\mathbf{x}, t)$$
with
$$\phi(\mathbf{x}, 0) = \frac{\partial \phi}{\partial t}(\mathbf{x}, 0) = 0.$$

We look for a Green's function $G_n(\mathbf{x}, t; \mathbf{y}, \tau)$ that solves
$$\frac{\partial^2 G_n}{\partial t^2} - c^2 \nabla^2 G_n = \delta(t - \tau)\, \delta^{(n)}(\mathbf{x} - \mathbf{y}) \tag{$*$}$$
with the same initial conditions
$$G_n(\mathbf{x}, 0; \mathbf{y}, \tau) = \frac{\partial}{\partial t} G_n(\mathbf{x}, 0; \mathbf{y}, \tau) = 0.$$

Just as before, we take the Fourier transform of this equation with respect to the spatial variables $\mathbf{x}$. We get
$$\frac{\partial^2}{\partial t^2} \tilde{G}_n + c^2 |\mathbf{k}|^2 \tilde{G}_n = \delta(t - \tau) e^{-i\mathbf{k} \cdot \mathbf{y}}.$$
This is just an ordinary differential equation from the point of view of $t$, and is of the same type of initial value problem that we studied earlier, and the solution is
$$\tilde{G}_n(\mathbf{k}, t; \mathbf{y}, \tau) = \Theta(t - \tau)\, e^{-i\mathbf{k} \cdot \mathbf{y}}\, \frac{\sin |\mathbf{k}| c (t - \tau)}{|\mathbf{k}| c}.$$
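One can check this directly (a sympy sketch, not part of the notes): away from $t = \tau$ the factor $\sin(kc(t - \tau))/(kc)$ solves the homogeneous oscillator equation, and it satisfies the jump conditions $\tilde{G} = 0$, $\partial_t \tilde{G} = 1$ at $t = \tau$ that the delta-function forcing imposes (up to the constant phase $e^{-i\mathbf{k}\cdot\mathbf{y}}$):

```python
# Sketch: verify sin(kc(t - tau))/(kc) solves G'' + (kc)^2 G = 0 for t > tau
# with G(tau) = 0 and G'(tau) = 1 (the jump forced by the delta function).
import sympy as sp

t, tau, k, c = sp.symbols('t tau k c', positive=True)
G = sp.sin(k*c*(t - tau))/(k*c)

print(sp.simplify(sp.diff(G, t, 2) + (k*c)**2*G))  # 0
print(G.subs(t, tau), sp.diff(G, t).subs(t, tau))  # 0 and 1
```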

To recover the Green's function itself, we have to compute the inverse Fourier transform, and find
$$G_n(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i\mathbf{k} \cdot \mathbf{x}}\, \Theta(t - \tau)\, e^{-i\mathbf{k} \cdot \mathbf{y}}\, \frac{\sin |\mathbf{k}| c(t - \tau)}{|\mathbf{k}| c}\, \mathrm{d}^n k.$$

Unlike the case of the heat equation, the form of the answer we get here does depend on the number of spatial dimensions $n$. For definiteness, we look at the case where $n = 3$, since our world (probably) has three dimensions. Then our Green's function is
$$G(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{\Theta(t - \tau)}{(2\pi)^3 c} \int_{\mathbb{R}^3} e^{i\mathbf{k} \cdot (\mathbf{x} - \mathbf{y})}\, \frac{\sin |\mathbf{k}| c(t - \tau)}{|\mathbf{k}|}\, \mathrm{d}^3 k.$$

We use spherical polar coordinates with the $z$-axis in $\mathbf{k}$-space aligned along the direction of $\mathbf{x} - \mathbf{y}$. Hence $\mathbf{k} \cdot (\mathbf{x} - \mathbf{y}) = kr\cos\theta$, where $r = |\mathbf{x} - \mathbf{y}|$ and $k = |\mathbf{k}|$.

Note that nothing in our integral depends on $\varphi$, so we can pull out a factor of $2\pi$, and get
$$G(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{\Theta(t - \tau)}{(2\pi)^2 c} \int_0^\infty \int_0^\pi e^{ikr\cos\theta}\, \frac{\sin kc(t - \tau)}{k}\, k^2 \sin\theta\, \mathrm{d}\theta\, \mathrm{d}k.$$


Now the $\theta$ integrand is an exact differential:
$$\int_0^\pi e^{ikr\cos\theta} \sin\theta\, \mathrm{d}\theta = \left[\frac{e^{ikr\cos\theta}}{-ikr}\right]_0^\pi = \frac{e^{ikr} - e^{-ikr}}{ikr}.$$
So we get

$$\begin{aligned}
G(\mathbf{x}, t; \mathbf{y}, \tau) &= \frac{\Theta(t - \tau)}{(2\pi)^2 c} \int_0^\infty \frac{e^{ikr} - e^{-ikr}}{ikr}\, \frac{\sin kc(t - \tau)}{k}\, k^2\, \mathrm{d}k \\
&= \frac{\Theta(t - \tau)}{(2\pi)^2 icr} \left[\int_0^\infty e^{ikr} \sin k\alpha\, \mathrm{d}k - \int_0^\infty e^{-ikr} \sin k\alpha\, \mathrm{d}k\right] \\
&= \frac{\Theta(t - \tau)}{2\pi i cr} \cdot \frac{1}{2\pi} \int_{-\infty}^\infty e^{ikr} \sin k\alpha\, \mathrm{d}k \\
&= \frac{\Theta(t - \tau)}{2\pi i cr}\, \mathcal{F}^{-1}[\sin k\alpha],
\end{aligned}$$
where we let $\alpha = c(t - \tau)$, and in the third line we substituted $k \to -k$ in the second integral to combine the two into one integral over the whole real line.

Now recall that $\mathcal{F}[\delta(x - \alpha)] = e^{-ik\alpha}$. So
$$\mathcal{F}^{-1}[\sin k\alpha] = \mathcal{F}^{-1}\left[\frac{e^{ik\alpha} - e^{-ik\alpha}}{2i}\right] = \frac{1}{2i}[\delta(x + \alpha) - \delta(x - \alpha)].$$
Hence our Green's function is
$$G(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{\Theta(t - \tau)}{4\pi c |\mathbf{x} - \mathbf{y}|} \Big[\delta\big(|\mathbf{x} - \mathbf{y}| - c(t - \tau)\big) - \delta\big(|\mathbf{x} - \mathbf{y}| + c(t - \tau)\big)\Big].$$

Now we look at our delta functions. The step function is non-zero only if $t > \tau$, in which case $|\mathbf{x} - \mathbf{y}| + c(t - \tau)$ is always positive. So $\delta(|\mathbf{x} - \mathbf{y}| + c(t - \tau))$ does not contribute. On the other hand, $\delta(|\mathbf{x} - \mathbf{y}| - c(t - \tau))$ is non-zero only if $t > \tau$, and there $\Theta(t - \tau) = 1$. So we can write our Green's function as
$$G(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{1}{4\pi c}\, \frac{1}{|\mathbf{x} - \mathbf{y}|}\, \delta\big(|\mathbf{x} - \mathbf{y}| - c(t - \tau)\big).$$

As always, given our Green's function, the general solution to the forced equation
$$\frac{\partial^2 \phi}{\partial t^2} - c^2 \nabla^2 \phi = F(\mathbf{x}, t)$$
is
$$\phi(\mathbf{x}, t) = \int_0^\infty \int_{\mathbb{R}^3} \frac{F(\mathbf{y}, \tau)}{4\pi c |\mathbf{x} - \mathbf{y}|}\, \delta\big(|\mathbf{x} - \mathbf{y}| - c(t - \tau)\big)\, \mathrm{d}^3 y\, \mathrm{d}\tau.$$

We can use the delta function to do one of the integrals. It is up to us which integral we do, but we pick the time integral. As a function of $\tau$, the delta function is $\delta(|\mathbf{x} - \mathbf{y}| - c(t - \tau)) = \frac{1}{c}\delta(\tau - t_{\mathrm{ret}})$, so we get
$$\phi(\mathbf{x}, t) = \frac{1}{4\pi c^2} \int_{\mathbb{R}^3} \frac{F(\mathbf{y}, t_{\mathrm{ret}})}{|\mathbf{x} - \mathbf{y}|}\, \mathrm{d}^3 y,$$
where
$$t_{\mathrm{ret}} = t - \frac{|\mathbf{x} - \mathbf{y}|}{c}.$$

This shows that the effect of the forcing term at some point $\mathbf{y} \in \mathbb{R}^3$ affects the solution at some other point $\mathbf{x}$ not instantaneously, but only after time $|\mathbf{x} - \mathbf{y}|/c$ has elapsed. This is just as we saw for characteristics. This, again, tells us that information travels at speed $c$.


Also, we see the effect of the forcing term gets weaker and weaker as we move further away from the source. This dispersion depends on the number of dimensions of the space. As we spread out in a three-dimensional space, the "energy" from the forcing term has to be distributed over a larger sphere, and the effect diminishes. On the contrary, in one-dimensional space, there is no spreading out to do, and we don't have this reduction. In fact, in one dimension, we get
$$\phi(x, t) = \int_0^t \int_{\mathbb{R}} F(y, \tau)\, \frac{\Theta\big(c(t - \tau) - |x - y|\big)}{2c}\, \mathrm{d}y\, \mathrm{d}\tau.$$
We see there is now no suppression factor at the bottom, as expected.

Poisson's equation

Let $\phi : \mathbb{R}^3 \to \mathbb{R}$ satisfy Poisson's equation
$$\nabla^2 \phi = -F.$$
The fundamental solution to this equation is defined to be $G_3(\mathbf{x}, \mathbf{y})$, where
$$\nabla^2 G_3(\mathbf{x}, \mathbf{y}) = \delta^{(3)}(\mathbf{x} - \mathbf{y}).$$
By symmetry, $G_3$ depends only on $r = |\mathbf{x} - \mathbf{y}|$. Integrating the defining equation over the ball
$$B_r = \{\mathbf{x} \in \mathbb{R}^3 : |\mathbf{x} - \mathbf{y}| \leq r\},$$
we have

$$\begin{aligned}
1 &= \int_{B_r} \nabla^2 G_3\, \mathrm{d}V \\
&= \int_{\partial B_r} \mathbf{n} \cdot \nabla G_3\, \mathrm{d}S \\
&= \int_{S^2} \frac{\mathrm{d}G_3}{\mathrm{d}r}\, r^2 \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\varphi \\
&= 4\pi r^2 \frac{\mathrm{d}G_3}{\mathrm{d}r}.
\end{aligned}$$

So we know
$$\frac{\mathrm{d}G_3}{\mathrm{d}r} = \frac{1}{4\pi r^2},$$
and hence
$$G_3(\mathbf{x}, \mathbf{y}) = -\frac{1}{4\pi |\mathbf{x} - \mathbf{y}|} + c.$$
We often set $c = 0$, such that
$$\lim_{|\mathbf{x}| \to \infty} G_3 = 0.$$
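Both properties are easy to confirm (a sympy sketch, not part of the notes): $G_3$ is harmonic away from the origin, and the flux of $\nabla G_3$ through a sphere of any radius is $1$:

```python
# Sketch: G_3 = -1/(4 pi r) is harmonic for r != 0, and the flux
# 4 pi r^2 dG_3/dr through any sphere equals 1.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G3 = -1/(4*sp.pi*r)

lap = sp.diff(G3, x, 2) + sp.diff(G3, y, 2) + sp.diff(G3, z, 2)
print(sp.simplify(lap))  # 0 (away from the origin)

rho = sp.symbols('rho', positive=True)
print(sp.simplify(4*sp.pi*rho**2*sp.diff(-1/(4*sp.pi*rho), rho)))  # 1
```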


Green's identities

To make use of this fundamental solution in solving Poisson's equation, we first obtain some useful identities.

Suppose $\phi, \psi : \mathbb{R}^3 \to \mathbb{R}$ are both smooth everywhere in some region $\Omega \subseteq \mathbb{R}^3$ with boundary $\partial\Omega$. Then the divergence theorem gives
$$\int_{\partial\Omega} \phi\, \mathbf{n} \cdot \nabla\psi\, \mathrm{d}S = \int_\Omega \nabla \cdot (\phi\nabla\psi)\, \mathrm{d}V = \int_\Omega \big(\phi\nabla^2\psi + (\nabla\phi)\cdot(\nabla\psi)\big)\, \mathrm{d}V.$$
So we get Green's first identity,
$$\int_{\partial\Omega} \phi\, \mathbf{n} \cdot \nabla\psi\, \mathrm{d}S = \int_\Omega \big(\phi\nabla^2\psi + (\nabla\phi)\cdot(\nabla\psi)\big)\, \mathrm{d}V.$$
This is just the divergence theorem in disguise, though remarkably, when Green first came up with this, the divergence theorem did not yet exist. Similarly, swapping the roles of $\phi$ and $\psi$, we obtain
$$\int_{\partial\Omega} \psi\, \mathbf{n} \cdot \nabla\phi\, \mathrm{d}S = \int_\Omega \big(\psi\nabla^2\phi + (\nabla\phi)\cdot(\nabla\psi)\big)\, \mathrm{d}V.$$
Subtracting the second from the first gives

Proposition (Green's second identity).
$$\int_\Omega \big(\phi\nabla^2\psi - \psi\nabla^2\phi\big)\, \mathrm{d}V = \int_{\partial\Omega} \big(\phi\, \mathbf{n}\cdot\nabla\psi - \psi\, \mathbf{n}\cdot\nabla\phi\big)\, \mathrm{d}S.$$

Why is this useful? On the left, we have things like $\nabla^2\psi$ and $\nabla^2\phi$. These are things we are given by Poisson's or Laplace's equation. On the right, we have things on the boundary, and these are often the boundary conditions we are given. So this can be rather helpful when we have to solve these equations.

We wish to apply this result to the case $\psi = G_3(\mathbf{x}, \mathbf{y})$. However, recall that when deriving Green's identity, we assumed $\phi$ and $\psi$ are smooth everywhere in our domain. However, our Green's function is singular at $\mathbf{x} = \mathbf{y}$, but we want to integrate over this region as well. So we need to do this carefully. Because of the singularity, we want to take
$$\Omega = B_R \setminus B_\varepsilon(\mathbf{y}).$$
In other words, we remove a small region of radius $\varepsilon$ centred on $\mathbf{y}$ from the domain.

In this choice of $\Omega$, it is completely safe to use Green's identity, since our Green's function is certainly regular everywhere in this $\Omega$. First note that since $\nabla^2 G_3 = 0$ everywhere except at $\mathbf{x} = \mathbf{y}$, we get
$$\int_\Omega \big(\phi\nabla^2 G_3 - G_3\nabla^2\phi\big)\, \mathrm{d}V = -\int_\Omega G_3\nabla^2\phi\, \mathrm{d}V.$$


Then Green's second identity gives
$$-\int_\Omega G_3\nabla^2\phi\, \mathrm{d}V = \int_{S_R^2} \big(\phi(\mathbf{n}\cdot\nabla G_3) - G_3(\mathbf{n}\cdot\nabla\phi)\big)\, \mathrm{d}S + \int_{S_\varepsilon^2} \big(\phi(\mathbf{n}\cdot\nabla G_3) - G_3(\mathbf{n}\cdot\nabla\phi)\big)\, \mathrm{d}S,$$
where $S_R^2$ is the outer boundary and $S_\varepsilon^2$ the boundary of the small removed ball.

Note that on the inner boundary, the outward normal of $\Omega$ points towards $\mathbf{y}$, so $\mathbf{n} = -\hat{\mathbf{r}}$. Also, at $S_\varepsilon^2$, we have
$$G_3|_{S_\varepsilon^2} = -\frac{1}{4\pi\varepsilon}, \qquad \left.\frac{\mathrm{d}G_3}{\mathrm{d}r}\right|_{S_\varepsilon^2} = \frac{1}{4\pi\varepsilon^2}.$$

So the inner boundary term is
$$\int_{S_\varepsilon^2} \big(\phi(\mathbf{n}\cdot\nabla G_3) - G_3(\mathbf{n}\cdot\nabla\phi)\big)\, \varepsilon^2 \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\varphi = -\frac{\varepsilon^2}{4\pi\varepsilon^2} \int_{S_\varepsilon^2} \phi \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\varphi + \frac{\varepsilon^2}{4\pi\varepsilon} \int_{S_\varepsilon^2} (\mathbf{n}\cdot\nabla\phi) \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\varphi.$$

Now the final integral is bounded by the assumption that $\phi$ is everywhere smooth. So as we take the limit $\varepsilon \to 0$, the final term vanishes. In the first term, the $\varepsilon$'s cancel. So we are left with
$$-\frac{1}{4\pi} \int_{S_\varepsilon^2} \phi\, \mathrm{d}\Omega = -\bar{\phi} \to -\phi(\mathbf{y})$$
as $\varepsilon \to 0$, where $\bar{\phi}$ is the average value of $\phi$ on the small sphere.

Now suppose $\nabla^2\phi = -F$. Then this gives

Proposition (Green's third identity).
$$\phi(\mathbf{y}) = \int_{\partial\Omega} \big(\phi(\mathbf{n}\cdot\nabla G_3) - G_3(\mathbf{n}\cdot\nabla\phi)\big)\, \mathrm{d}S - \int_\Omega G_3(\mathbf{x}, \mathbf{y}) F(\mathbf{x})\, \mathrm{d}^3 x.$$

This expresses $\phi$ anywhere in the domain in terms of the fundamental solution $G_3$, the forcing term $F$ and boundary data. In particular, if the boundary values of $\phi$ and $\mathbf{n}\cdot\nabla\phi$ vanish as we take $r \to \infty$, then we have
$$\phi(\mathbf{y}) = -\int_{\mathbb{R}^3} G_3(\mathbf{x}, \mathbf{y}) F(\mathbf{x})\, \mathrm{d}^3 x.$$
So the fundamental solution is the Green's function for Poisson's equation on $\mathbb{R}^3$.

However, there is a puzzle. Suppose $F = 0$, so that $\nabla^2\phi = 0$. Then Green's identity says
$$\phi(\mathbf{y}) = \int_{S_R^2} \left(\phi \frac{\mathrm{d}G_3}{\mathrm{d}r} - G_3 \frac{\mathrm{d}\phi}{\mathrm{d}r}\right) \mathrm{d}S.$$
But we know there is a unique solution to Laplace's equation on every bounded domain once we specify the boundary value $\phi|_{\partial\Omega}$ (Dirichlet), or a unique-up-to-constant solution if we specify the boundary value of $\mathbf{n}\cdot\nabla\phi|_{\partial\Omega}$ (Neumann).


However, to get $\phi(\mathbf{y})$ using Green's identity, we need to know both $\phi$ and $\mathbf{n}\cdot\nabla\phi$ on the boundary. This is too much information.

Green's third identity is a valid relation obeyed by solutions to Poisson's equation, but it is not constructive: we cannot specify $\phi$ and $\mathbf{n}\cdot\nabla\phi$ freely. What we would like is a formula for $\phi$ given, say, just the value of $\phi$ on the boundary.

To overcome this (in the Dirichlet case), we seek to modify $G_3$ via
$$G_3 \to G = G_3 + H(\mathbf{x}, \mathbf{y}),$$
where $\nabla^2 H = 0$ everywhere in $\Omega$ and $H$ is chosen so that $G|_{\partial\Omega} = 0$. In other words, we find some $H$ that does not affect the relations on $G$ when acted on by $\nabla^2$, but now our $G$ will have boundary value $0$. We will find this $H$ later, but given such an $H$, we can replace $G_3$ with $G = G_3 + H$ in Green's third identity, and see that all the $H$ terms fall out, i.e. $G$ also satisfies Green's third identity. So

$$\begin{aligned}
\phi(\mathbf{y}) &= \int_{\partial\Omega} \big(\phi\, \mathbf{n}\cdot\nabla G - G\, \mathbf{n}\cdot\nabla\phi\big)\, \mathrm{d}S - \int_\Omega F G\, \mathrm{d}V \\
&= \int_{\partial\Omega} \phi\, \mathbf{n}\cdot\nabla G\, \mathrm{d}S - \int_\Omega F G\, \mathrm{d}V,
\end{aligned}$$
since $G$ vanishes on $\partial\Omega$.

So as long as we find $H$, we can express the value of $\phi(\mathbf{y})$ in terms of the values of $\phi$ on the boundary. Similarly, if we are given a Neumann condition, i.e. the value of $\mathbf{n}\cdot\nabla\phi$ on the boundary, we have to find an $H$ that kills off $\mathbf{n}\cdot\nabla G$ on the boundary, and we get a similar result.

In general, finding a harmonic function $H$ with $\nabla^2 H = 0$ and
$$H|_{\partial\Omega} = \left.\frac{1}{4\pi|\mathbf{x} - \mathbf{x}_0|}\right|_{\partial\Omega}$$
is itself a hard problem. However, the method of images allows us to construct such an $H$ in some special cases with lots of symmetry.

Example. Suppose
$$\Omega = \{(x, y, z) \in \mathbb{R}^3 : z \geq 0\}.$$
We wish to find a solution to $\nabla^2\phi = -F$ in $\Omega$ with $\phi \to 0$ rapidly as $|\mathbf{x}| \to \infty$, with boundary condition $\phi(x, y, 0) = g(x, y)$.

The fundamental solution
$$G_3(\mathbf{x}, \mathbf{x}_0) = -\frac{1}{4\pi}\, \frac{1}{|\mathbf{x} - \mathbf{x}_0|}$$
obeys all the conditions we need except
$$G_3|_{z=0} = -\frac{1}{4\pi}\, \frac{1}{[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{1/2}} \neq 0.$$

However, let $\mathbf{x}_0^R$ be the point $(x_0, y_0, -z_0)$. This is the reflection of $\mathbf{x}_0$ in the boundary plane $z = 0$. Since the point $\mathbf{x}_0^R$ is outside our domain, $G_3(\mathbf{x}, \mathbf{x}_0^R)$ obeys
$$\nabla^2 G_3(\mathbf{x}, \mathbf{x}_0^R) = 0$$
everywhere in $\Omega$, and


$$G_3(\mathbf{x}, \mathbf{x}_0^R)|_{z=0} = G_3(\mathbf{x}, \mathbf{x}_0)|_{z=0}.$$
Hence we take
$$G(\mathbf{x}, \mathbf{x}_0) = G_3(\mathbf{x}, \mathbf{x}_0) - G_3(\mathbf{x}, \mathbf{x}_0^R).$$

Then $G$ vanishes on the boundary, and, recalling that the outward normal to $\Omega$ there is $\mathbf{n} = -\hat{\mathbf{z}}$,
$$\mathbf{n}\cdot\nabla G|_{z=0} = -\frac{1}{4\pi}\left[\frac{(z - z_0)}{|\mathbf{x} - \mathbf{x}_0|^3} - \frac{(z + z_0)}{|\mathbf{x} - \mathbf{x}_0^R|^3}\right]_{z=0} = \frac{1}{2\pi}\, \frac{z_0}{[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{3/2}},$$
where we used that $|\mathbf{x} - \mathbf{x}_0| = |\mathbf{x} - \mathbf{x}_0^R|$ on $z = 0$.

Using Green's third identity with this $G$, we obtain
$$\phi(\mathbf{x}_0) = \frac{1}{4\pi} \int_\Omega F(\mathbf{x}) \left[\frac{1}{|\mathbf{x} - \mathbf{x}_0|} - \frac{1}{|\mathbf{x} - \mathbf{x}_0^R|}\right] \mathrm{d}^3 x + \frac{z_0}{2\pi} \int_{\mathbb{R}^2} \frac{g(x, y)}{[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{3/2}}\, \mathrm{d}x\, \mathrm{d}y.$$
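The boundary term here is the Poisson kernel for the half-space. As a sanity check (a numerical sketch, not from the notes; the boundary data $g$ is made up), it should reproduce $g(x_0, y_0)$ as $z_0 \to 0$:

```python
# Sketch: the half-space kernel z0/(2 pi rho^3) integrates to 1 and
# reproduces the boundary data g as z0 -> 0.
import numpy as np

g = lambda x, y: np.exp(-(x**2 + y**2))  # made-up Dirichlet data

xs = np.linspace(-10.0, 10.0, 801)
X, Y = np.meshgrid(xs, xs)

def boundary_term(x0, y0, z0):
    K = z0/(2*np.pi*((X - x0)**2 + (Y - y0)**2 + z0**2)**1.5)
    return np.trapz(np.trapz(g(X, Y)*K, xs, axis=1), xs)

for z0 in (2.0, 0.5, 0.1):
    print(z0, boundary_term(0.0, 0.0, z0))  # tends to g(0, 0) = 1
```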

What have we actually done here? The Green's function $G_3(\mathbf{x}, \mathbf{x}_0)$ in some sense represents a "charge" at $\mathbf{x}_0$. We can imagine that the term $-G_3(\mathbf{x}, \mathbf{x}_0^R)$ represents the contribution to our solution from a point charge of opposite sign located at $\mathbf{x}_0^R$. Then by symmetry, $G$ is zero at $z = 0$.

[Diagram: the charge at $\mathbf{x}_0$ above the plane $z = 0$, with the opposite mirror charge at $\mathbf{x}_0^R$ below it.]

In the region $z < 0$, the formula we wrote down need not mean anything physical — it is simply what we get by choosing two mirror charge distributions. But this is irrelevant for our present purposes. We are only concerned with finding a solution in the region $z \geq 0$. This is just an artificial device we have in order to solve the problem.

Example. Suppose a chimney produces smoke, such that the density $\phi(\mathbf{x}, t)$ of smoke obeys
$$\partial_t \phi - D\nabla^2\phi = F(\mathbf{x}, t).$$
The left side is just the heat equation, modelling the diffusion of smoke, while the forcing term on the right describes the production of smoke by the chimney.

If this were a problem for $\mathbf{x} \in \mathbb{R}^3$, then the solution would be
$$\phi(\mathbf{x}, t) = \int_0^t \int_{\mathbb{R}^3} F(\mathbf{y}, \tau) S_3(\mathbf{x} - \mathbf{y}, t - \tau)\, \mathrm{d}^3 y\, \mathrm{d}\tau,$$


where
$$S_3(\mathbf{x} - \mathbf{y}, t - \tau) = \frac{1}{[4\pi D(t - \tau)]^{3/2}} \exp\left(-\frac{|\mathbf{x} - \mathbf{y}|^2}{4D(t - \tau)}\right).$$

This is true only if the smoke can diffuse in all of $\mathbb{R}^3$. However, this is not true in our current circumstances, since smoke does not diffuse into the ground. To account for this, we should find a Green's function that still obeys
$$\partial_t G - D\nabla^2 G = \delta(t - \tau)\, \delta^{(3)}(\mathbf{x} - \mathbf{y}),$$
but in addition satisfies the no-flux condition
$$\mathbf{n}\cdot\nabla G|_{z=0} = 0.$$
This is achieved by adding a mirror source: we pick
$$G(\mathbf{x}, t; \mathbf{y}, \tau) = \Theta(t - \tau)\big[S_3(\mathbf{x} - \mathbf{y}, t - \tau) + S_3(\mathbf{x} - \mathbf{y}^R, t - \tau)\big],$$
where $\mathbf{y}^R$ is the reflection of $\mathbf{y}$ in the ground plane $z = 0$. Hence the smoke density is given by
$$\phi(\mathbf{x}, t) = \int_0^t \int_\Omega F(\mathbf{y}, \tau)\big[S_3(\mathbf{x} - \mathbf{y}, t - \tau) + S_3(\mathbf{x} - \mathbf{y}^R, t - \tau)\big]\, \mathrm{d}^3 y\, \mathrm{d}\tau.$$

We can think of the second term as the contribution from a “mirror chimney”.

Without a mirror chimney, we will have smoke flowing into the ground. With a

mirror chimney, we will have equal amounts of mirror smoke flowing up from

the ground, so there is no net flow. Of course, there are no mirror chimneys in

reality. These are just artifacts we use to find the solution we want.

Example. Suppose we want to solve the wave equation in the region $x > 0$ with boundary conditions
$$\phi(x, 0) = b(x), \qquad \partial_t \phi(x, 0) = 0, \qquad \phi(0, t) = 0.$$
On the whole real line, d'Alembert would give
$$\phi(x, t) = \frac{1}{2}[b(x - ct) + b(x + ct)].$$
This is not what we want, since eventually we will have a wave moving past the $x = 0$ line.

[Diagram: the initial pulse in $x > 0$, part of which would propagate past the boundary $x = 0$.]

To compensate, we introduce mirror waves in the region $x < 0$, of opposite sign and moving in the opposite direction, such that as they pass through each other at $x = 0$, there is no net flow across the boundary.


[Diagram: the pulse and its inverted mirror image approaching $x = 0$ from opposite sides.]

Then the general solution is
$$\phi(x, t) = \frac{1}{2}\big[b(x - ct) + b(x + ct) - b(-x - ct) - b(-x + ct)\big],$$
where we set $b(x) = 0$ when $x < 0$. In the region $x > 0$ we are interested in, only the $b(x \pm ct)$ terms contribute until the wave reaches the boundary; the mirror terms $-b(-x \pm ct)$ are exactly what is needed to keep $\phi(0, t) = 0$ for all time.
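A quick numerical sketch (the pulse $b$ below is made up) confirms that this combination vanishes at $x = 0$ for all time, and describes the pulse reflecting off the boundary with flipped sign:

```python
# Sketch: odd-image solution of the wave equation on the half-line x > 0.
import numpy as np

c = 1.0
b = lambda x: np.where(x > 0, np.exp(-10*(x - 3.0)**2), 0.0)  # b = 0 for x < 0

def phi(x, t):
    return 0.5*(b(x - c*t) + b(x + c*t) - b(-x - c*t) - b(-x + c*t))

print(phi(0.0, 1.7))                        # exactly 0: boundary condition holds
print(phi(np.array([1.0, 2.0, 3.0]), 5.0))  # reflected pulse, sign flipped
```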

