Richard Beals
Advanced
Mathematical
Analysis
Periodic Functions and Distributions,
Complex Analysis, Laplace Transform
and Applications
Richard Beals
Professor of Mathematics
University of Chicago
Department of Mathematics
5734 University Avenue
Chicago, Illinois 60637
Managing Editors
P. R. Halmos
C. C. Moore
Indiana University
Department of Mathematics
Swain Hall East
Bloomington, Indiana 47401
University of California
at Berkeley
Department of Mathematics
Berkeley, California 94720
ISBN 978-0-387-90066-7
ISBN 978-1-4684-9886-8 (eBook)
DOI 10.1007/978-1-4684-9886-8
to Nancy
PREFACE
Once upon a time students of mathematics and students of science or
engineering took the same courses in mathematical analysis beyond calculus.
Now it is common to separate "advanced mathematics for science and engineering" from what might be called "advanced mathematical analysis for mathematicians." It seems to me both useful and timely to attempt a reconciliation.
The separation between kinds of courses has unhealthy effects. Mathematics students reverse the historical development of analysis, learning the
unifying abstractions first and the examples later (if ever). Science students
learn the examples as taught generations ago, missing modern insights. A
choice between encountering Fourier series as a minor instance of the representation theory of Banach algebras, and encountering Fourier series in
isolation and developed in an ad hoc manner, is no choice at all.
It is easy to recognize these problems, but less easy to counter the legitimate pressures which have led to a separation. Modern mathematics has
broadened our perspectives by abstraction and bold generalization, while
developing techniques which can treat classical theories in a definitive way.
On the other hand, the applier of mathematics has continued to need a variety
of definite tools and has not had the time to acquire the broadest and most
definitive grasp: to learn necessary and sufficient conditions when simple
sufficient conditions will serve, or to learn the general framework encompassing different examples.
This book is based on two premises. First, the ideas and methods of the
theory of distributions lead to formulations of classical theories which are
satisfying and complete mathematically, and which at the same time provide
the most useful viewpoint for applications. Second, mathematics and science
students alike can profit from an approach which treats the particular in a
careful, complete, and modern way, and which treats the general as obtained
by abstraction for the purpose of illuminating the basic structure exemplified
in the particular. As an example, the basic L² theory of Fourier series can be
established quickly and with no mention of measure theory once L²(0, 2π) is
known to be complete. Here L²(0, 2π) is viewed as a subspace of the space of
periodic distributions and is shown to be a Hilbert space. This leads to a discussion of abstract Hilbert space and orthogonal expansions. It is easy to
derive necessary and sufficient conditions that a formal trigonometric series
be the Fourier series of a distribution, an L² distribution, or a smooth
function. This in turn facilitates a discussion of smooth solutions and distribution solutions of the wave and heat equations.
The book is organized as follows. The first two chapters provide background material which many readers may profitably skim or skip. Chapters
3, 4, and 5 treat periodic functions and distributions, Fourier series, and
applications. Included are convolution and approximation (including the
TABLE OF CONTENTS

Chapter One. Basic concepts
Chapter Two. Continuous functions
Chapter Three
Chapter Four
Chapter Five
Chapter Six. Complex analysis
    1. Complex differentiation
    2. Complex integration
    3. The Cauchy integral formula
    4. The local behavior of a holomorphic function
    5. Isolated singularities
    6. Rational functions; Laurent expansions; residues
    7. Holomorphic functions in the unit disc
Notation index
Subject index
Chapter 1
Basic Concepts
1. Sets and functions
One feature of modern mathematics is the use of abstract concepts to
provide a language and a unifying framework for theories encompassing
numerous special cases and examples. Two important examples of such
concepts, that of "metric space" and that of "vector space," will be taken up
later in this chapter. In this section we discuss briefly the concepts, even more
basic, of "set" and of "function."
We assume that the intuitive notions of a "set" and of an "element" of a
set are familiar. A set is determined when its elements are specified in some
manner. The exact manner of specification is irrelevant, provided the elements
are the same. Thus
A = {3, 5, 7}
means that A is the set with three elements, the integers 3, 5, and 7. This is the
same as
A = {7, 3, 5},
or
A = {n | n is an odd positive integer between 2 and 8}
or
A = {2n + 1 | n = 1, 2, 3}.
In expressions such as the last two, the phrase after the vertical line is supposed to prescribe exactly what precedes the vertical line, thus prescribing
the set. It is convenient to allow repetitions; thus A above is also
{5, 3, 7, 3, 3},
still a set with three elements. If x is an element of A we write
    x ∈ A  or  A ∋ x;
if x is not an element of A we write
    x ∉ A  or  A ∌ x.
The sets of all integers and of all positive integers are denoted by ℤ and
ℤ⁺ respectively:
    ℤ = {..., −2, −1, 0, 1, 2, ...},  ℤ⁺ = {1, 2, 3, ...}.
The union of sets A₁, A₂, ..., A_m is denoted by
    A₁ ∪ A₂ ∪ ... ∪ A_m  or  ⋃_{j=1}^{m} A_j,
and the intersection by
    A₁ ∩ A₂ ∩ ... ∩ A_m  or  ⋂_{j=1}^{m} A_j.
The union and the intersection of an infinite family of sets A₁, A₂, ... indexed
by ℤ⁺ are denoted by
    ⋃_{j=1}^{∞} A_j  and  ⋂_{j=1}^{∞} A_j.
More generally, suppose J is a set, and suppose that for each j ∈ J we are
given a set A_j. The union and intersection of all the A_j are denoted by
    ⋃_{j∈J} A_j  and  ⋂_{j∈J} A_j.
If A and B are sets and A ⊂ B (A is a subset of B), the complement of A in B is the set
    C = {x | x ∈ B, x ∉ A}.
Then
    A ∪ C = B,  A ∩ C = ∅.
The product of two sets A and B is the set of ordered pairs (x, y) where
x ∈ A and y ∈ B; this is written A × B. More generally, if A₁, A₂, ..., A_n are
sets then
    A₁ × A₂ × ... × A_n
is the set of ordered n-tuples (x₁, x₂, ..., x_n), where each x_j ∈ A_j. The product
    A × A × ... × A
of n copies of A is also written Aⁿ.
A function from a set A to a set B is an assignment, to each element of A,
of some unique element of B. We write
    f: A → B
for a function f from A to B. A function f: A → B is said to be 1-1 (one-to-one) if it assigns distinct elements of B to distinct elements of A, and onto if every element of B is assigned to some element of A; f is bijective if it is both 1-1 and onto. A bijective function f: A → B has an inverse f⁻¹: B → A, where f⁻¹(y) is the unique x ∈ A with f(x) = y. If f: A → B and g: B → C, the composition g∘f: A → C is defined by
    g∘f(x) = g(f(x)),  for all x ∈ A.

As an example, define f, g, h: ℤ → ℤ by
    f(n) = n² + 1,  n ∈ ℤ,
    g(n) = 2n,  n ∈ ℤ,
    h(n) = 1 − n,  n ∈ ℤ.
Then f is neither 1-1 nor onto, g is 1-1 but not onto, h is bijective, h⁻¹(n) =
1 − n, and f∘h(n) = n² − 2n + 2.
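The claims in this example can be checked mechanically. The following Python sketch (an illustration added here, not part of the text) verifies them on a finite window of ℤ:

```python
# Illustrative check of the example maps on Z:
# f(n) = n^2 + 1, g(n) = 2n, h(n) = 1 - n.
def f(n): return n * n + 1
def g(n): return 2 * n
def h(n): return 1 - n

window = range(-50, 51)

# f is not 1-1: f(1) == f(-1); f is not onto: e.g. 0 is never a value.
assert f(1) == f(-1) == 2
# g is 1-1 on the window (distinct inputs give distinct outputs).
assert len({g(n) for n in window}) == len(list(window))
# h is its own inverse, so h is bijective.
assert all(h(h(n)) == n for n in window)
# f(h(n)) = (1 - n)^2 + 1 = n^2 - 2n + 2.
assert all(f(h(n)) == n * n - 2 * n + 2 for n in window)
```

Of course a finite check is no proof; the point is only that each claim reduces to an elementary computation.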
A set A is said to be finite if either A = ∅ or there is an n ∈ ℤ⁺ and a
bijective function f from A to the set {1, 2, ..., n}. The set A is said to be
countable if there is a bijective f: A → ℤ⁺. This is equivalent to requiring that
there be a bijective g: ℤ⁺ → A (since if such an f exists, we can take g = f⁻¹;
if such a g exists, take f = g⁻¹). The following elementary criterion is
convenient.
Proposition 1.1. If there is a surjective (onto) function f: ℤ⁺ → A, then A
is either finite or countable.
Proof. Suppose A is not finite. Define g: ℤ⁺ → ℤ⁺ as follows. Let
g(1) = 1. Since A is not finite, A ≠ {f(1)}. Let g(2) be the first integer m such
that f(m) ≠ f(1). Having defined g(1), g(2), ..., g(n), let g(n + 1) be the first
integer m such that f(m) ∉ {f∘g(1), f∘g(2), ..., f∘g(n)}. The function g defined
inductively on all of ℤ⁺ in this way has the property that f∘g: ℤ⁺ → A is
bijective. In fact, it is 1-1 by the construction. It is onto because f is onto and,
by the construction, for each n the set {f(1), f(2), ..., f(n)} is a subset of
{f∘g(1), f∘g(2), ..., f∘g(n)}. ∎
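The inductive construction of g can be mirrored concretely. The sketch below (an illustration, not from the text; it represents f by a Python function on 1, 2, 3, ...) builds the first few values of g so that f∘g lists the distinct values of f in order of first appearance:

```python
# Sketch of the construction in the proof of Proposition 1.1:
# g(n+1) is the first m with f(m) not among the values already chosen.
def build_g(f, steps):
    g = [1]                      # g(1) = 1
    seen = {f(1)}
    m = 1
    while len(g) < steps:
        m += 1
        if f(m) not in seen:     # first m producing a new value
            g.append(m)
            seen.add(f(m))
    return g

# Example: f repeats values, cycling through a 4-element range.
values = ["a", "b", "b", "c", "a", "d"]
f = lambda n: values[(n - 1) % len(values)]
g = build_g(f, 4)
# f∘g is 1-1: each value appears once, in order of first appearance.
assert [f(m) for m in g] == ["a", "b", "c", "d"]
```

Note the hypothesis matters: the loop terminates at each stage only because f keeps producing new values, which is exactly where the proof uses that A is not finite.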
Corollary 1.2. If B is countable and A ⊂ B, then A is finite or countable.
Proposition 1.3. If A₁, A₂, A₃, ... are finite or countable, then the union
⋃_{n=1}^{∞} A_n is finite or countable.
Proposition 1.4. If A₁, A₂, ..., A_n are countable sets, then the product set
A₁ × A₂ × ... × A_n is countable.
Proposition 1.5. The set S of all sequences in the set {0, 1} is neither finite
nor countable.

Proof. Suppose f: ℤ⁺ → S. We shall show that f is not surjective. For
each m ∈ ℤ⁺, f(m) is a sequence (a_{n,m})_{n=1}^∞ = (a_{1,m}, a_{2,m}, ...), where each
a_{n,m} is 0 or 1. Define a sequence (a_n)_{n=1}^∞ by setting a_n = 0 if a_{n,n} = 1, a_n = 1
if a_{n,n} = 0. Then for each m ∈ ℤ⁺, (a_n)_{n=1}^∞ ≠ (a_{n,m})_{n=1}^∞ = f(m). Thus f is not
surjective. ∎
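The diagonal construction in this proof can be made concrete. The following sketch (an illustration, not from the text) flips the diagonal of any finite list of 0-1 sequences; the result differs from row m in position m, so it equals no listed row:

```python
# Cantor's diagonal construction on a finite list of 0-1 sequences:
# entry m of the new sequence is 1 - (entry m of row m).
def diagonal_flip(rows):
    return [1 - rows[m][m] for m in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
a = diagonal_flip(rows)
assert a == [1, 0, 1, 0]
# a disagrees with row m at index m, hence is not any row.
assert all(a[m] != rows[m][m] for m in range(len(rows)))
assert a not in rows
```

In the proof the same device is applied to an infinite list, defeating every candidate enumeration f at once.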
2. The real and complex numbers

Certain subsets of ℝ, the intervals, are denoted as follows: for a, b ∈ ℝ, a < b,
    (a, b) = {x | x ∈ ℝ, a < x < b},
    [a, b] = {x | x ∈ ℝ, a ≤ x ≤ b},
    [a, b) = {x | x ∈ ℝ, a ≤ x < b},
    (a, b] = {x | x ∈ ℝ, a < x ≤ b}.
Also,
    (a, ∞) = {x | x ∈ ℝ, a < x},
    (−∞, a] = {x | x ∈ ℝ, x ≤ a}, etc.
Axioms of addition

A1. (x + y) + z = x + (y + z), all x, y, z ∈ ℝ.
A2. x + y = y + x, all x, y ∈ ℝ.
A3. There is an element 0 ∈ ℝ such that x + 0 = x for every x ∈ ℝ.
A4. For each x ∈ ℝ there is an element −x ∈ ℝ such that x + (−x) = 0.

Note that the element 0 is unique. In fact, if 0' is an element such that
x + 0' = x for every x, then
    0' = 0' + 0 = 0 + 0' = 0.
Also, given x the element −x is unique. In fact, if x + y = 0, then
    y = y + 0 = y + (x + (−x)) = (y + x) + (−x)
      = (x + y) + (−x) = 0 + (−x) = (−x) + 0 = −x.
This uniqueness implies −(−x) = x, since (−x) + x = x + (−x) = 0.
Axioms of multiplication

M1. (xy)z = x(yz), all x, y, z ∈ ℝ.
M2. xy = yx, all x, y ∈ ℝ.
M3. There is an element 1 ∈ ℝ, 1 ≠ 0, such that x·1 = x for every x ∈ ℝ.
M4. For each x ∈ ℝ, x ≠ 0, there is an element x⁻¹ ∈ ℝ such that
x·x⁻¹ = 1.

Note that 1 and x⁻¹ are unique. We leave the proofs as an exercise.
Distributive law

DL. x(y + z) = xy + xz, all x, y, z ∈ ℝ.

We can now readily deduce some other well-known facts. For example,
    0·x = (0 + 0)x = 0·x + 0·x,
so 0·x = 0. Then
    x + (−1)x = 1·x + (−1)x = (1 + (−1))x = 0·x = 0,
so (−1)x = −x, and therefore
    (−x)y = ((−1)x)y = (−1)(xy) = −(xy).
The axioms A1-A4, M1-M4, and DL do not determine ℝ. In fact there
is a set consisting of two elements, together with operations of addition and
multiplication, such that the axioms above are all satisfied: if we denote the
elements of the set by 0, 1, we can define addition and multiplication by
    0 + 0 = 1 + 1 = 0,  0 + 1 = 1 + 0 = 1,
    0·0 = 1·0 = 0·1 = 0,  1·1 = 1.
There is an additional familiar notion in ℝ, that of positivity, from which
one can derive the notion of an ordering of ℝ. We axiomatize this by introducing a subset P ⊂ ℝ, the set of "positive" elements.

Axioms of order

O1. If x ∈ ℝ, then exactly one of the following holds: x ∈ P, x = 0, or
−x ∈ P.
O2. If x, y ∈ P, then x + y ∈ P.
O3. If x, y ∈ P, then xy ∈ P.
We write x < y (or y > x) to mean y − x ∈ P; in particular, x > 0 if and
only if x ∈ P. The ordering is transitive: if x < y and y < z, then
    z − x = (z − y) + (y − x) ∈ P,
so x < z. One further axiom of order is the following.

O4 (Archimedean axiom). If x, y ∈ P, then there is a positive integer n
such that the n-fold sum x + x + ... + x is > y.

(One can think of this as saying that, given enough time, one can empty
a large bathtub with a small spoon.)
The axioms given so far still do not determine ℝ; they are all satisfied by
the subset ℚ of rational numbers. The following notions will make a distinction between these two sets.
A nonempty subset A ⊂ ℝ is said to be bounded above if there is an x ∈ ℝ
such that every y ∈ A satisfies y ≤ x (as usual, y ≤ x means y < x or y = x).
Such a number x is called an upper bound for A. Similarly, if there is an
x ∈ ℝ such that every y ∈ A satisfies x ≤ y, then A is said to be bounded below
and x is called a lower bound for A.
A number x ∈ ℝ is said to be a least upper bound for a nonempty set
A ⊂ ℝ if x is an upper bound, and if every other upper bound x' satisfies
x' ≥ x. If such an x exists it is clearly unique, and we write
    x = lub A.
Similarly, a greatest lower bound for a nonempty set bounded below is a
lower bound x such that every other lower bound x' satisfies x' ≤ x; if it
exists it is unique, and we write
    x = glb A.
The final axiom of order is:

O5. If A ⊂ ℝ is nonempty and bounded above, then A has a least upper
bound.

Theorem 2.1. There is a nonempty subset of ℚ which is bounded above but
has no least upper bound in ℚ; thus ℚ does not satisfy O5.
Proof. Recall that there is no rational p/q, p, q ∈ ℤ, such that (p/q)² = 2:
in fact if there were, we could reduce to lowest terms and assume either p or q
is odd. But p² = 2q² is even, so p is even, so p = 2m, m ∈ ℤ. Then 4m² = 2q²,
so q² = 2m² is even and q is also even, a contradiction.
Let A = {x | x ∈ ℚ, x² < 2}. This is nonempty, since 0, 1 ∈ A. It is
bounded above, since x ≥ 2 implies x² ≥ 4, so 2 is an upper bound. We shall
show that no x ∈ ℚ is a least upper bound for A.
If x ≤ 0, then x < 1 ∈ A, so x is not an upper bound. Suppose x > 0 and
x² < 2. Suppose h ∈ ℚ and 0 < h < 1. Then x + h ∈ ℚ and x + h > x.
Also, (x + h)² = x² + 2xh + h² < x² + 2xh + h = x² + (2x + 1)h. If we
choose h > 0 so small that h < 1 and h < (2 − x²)/(2x + 1), then (x + h)²
< 2. Then x + h ∈ A, and x + h > x, so x is not an upper bound of A.
Finally, suppose x ∈ ℚ, x > 0, and x² > 2. Suppose h ∈ ℚ and 0 < h < x.
Then x − h ∈ ℚ and x − h > 0. Also, (x − h)² = x² − 2xh + h² > x² − 2xh. If we choose h > 0 so small that h < 1 and h < (x² − 2)/2x, then
(x − h)² > 2. It follows that if y ∈ A, then y < x − h. Thus x − h is an
upper bound for A less than x, and x is not the least upper bound. ∎
We used the nonexistence of a square root of 2 in ℚ to show that O5 does
not hold for ℚ. We may turn the argument around to show, using O5, that there is
a real number x > 0 such that x² = 2. In fact, let A = {y | y ∈ ℝ, y² < 2}.
The argument proving Theorem 2.1 proves the following: A is bounded
above; its least upper bound x is positive; if x² < 2 then x would not be an
upper bound, while if x² > 2 then x would not be the least upper bound.
Thus x² = 2.
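The least-upper-bound argument can be mimicked numerically. The following Python sketch (an illustration added here, not part of the text) uses bisection, keeping the lower endpoint inside A = {y | y² < 2} and the upper endpoint an upper bound, so the two close in on lub A = √2:

```python
# Bisection modelled on the lub argument: lo stays in A, hi stays an
# upper bound for A, and the gap halves at each step.
def lub_sqrt2(iterations):
    lo, hi = 0.0, 2.0            # 0 is in A; 2 is an upper bound
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid             # mid is in A, hence not an upper bound
        else:
            hi = mid             # mid is an upper bound
    return lo, hi

lo, hi = lub_sqrt2(60)
assert lo * lo < 2 <= hi * hi    # the invariants survive every step
assert hi - lo < 1e-12           # the endpoints have essentially met
```

The invariants of the loop are exactly the two halves of the proof: a point of A is never an upper bound, and a point whose square exceeds 2 always is.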
Two important questions arise concerning the above axioms. Are the
axioms consistent, and satisfied by some set ℝ? Is the set of real numbers the
only set satisfying these axioms?
The consistency of the axioms and the existence of ℝ can be demonstrated
(to the satisfaction of most mathematicians) by constructing ℝ, starting with
the rationals.
In one sense the axioms do not determine ℝ uniquely. For example, let
ℝ⁰ be the set of all symbols x⁰, where x is (the symbol for) a real number.
Define addition and multiplication of elements of ℝ⁰ by
    x⁰ + y⁰ = (x + y)⁰,  x⁰y⁰ = (xy)⁰.
To construct the complex numbers, let ℂ⁰ denote the product set ℝ × ℝ with
addition and multiplication defined by
    (x, y) + (x', y') = (x + x', y + y'),
    (x, y)(x', y') = (xx' − yy', xy' + x'y).
The set ℂ⁰ with these operations satisfies A1, A2, M1, M2, and DL. To verify the remaining
algebraic axioms, note that
    (x, y) + (0, 0) = (x, y),
    (x, y) + (−x, −y) = (0, 0),
    (x, y)(1, 0) = (x, y),
    (x, y)(x/(x² + y²), −y/(x² + y²)) = (1, 0)  if (x, y) ≠ (0, 0).
For x ∈ ℝ write x⁰ = (x, 0), and let i⁰ = (0, 1).
Also, (i⁰)² = (0, 1)(0, 1) = (−1, 0) = −1⁰. Thus we can write any element
of ℂ⁰ uniquely as x⁰ + i⁰y⁰, x, y ∈ ℝ, where (i⁰)² = −1⁰. We now drop the
superscripts and write x + iy for x⁰ + i⁰y⁰ and ℂ for ℂ⁰: this is legitimate,
since for elements of ℝ the new operations coincide with the old: x⁰ + y⁰ =
(x + y)⁰, x⁰y⁰ = (xy)⁰. Often we shall denote elements of ℂ by z or w. When
we write z = x + iy, we shall understand that x, yare real. They are called
the real part and the imaginary part of z, respectively:
z = x
+ iy,
= Re (z),
y = 1m (z).
+ iy)* =
x  iy.
(z
w)* = z* + w*,
(zw)* = z*w*,
(z*)* = z,
z*z = x 2 + y2.
z = x
+ iy.
Then if z =P 0,
1 = z*zlzl2 = z(z*lzl2),
or
+ z*
z  z* = 2iy
2x,
if z
= x + iy.
Thus
Re (z) = !(z
+ z*),
= !i1(z
1m (z)
 z*).
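These identities are easy to spot-check by machine. The sketch below (an illustration, not from the text) uses Python's built-in complex type, where `j` plays the role of i and `z.conjugate()` the role of z*:

```python
# Spot-check of the conjugation and modulus identities.
z, w = 3 + 4j, 1 - 2j

assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
assert z.conjugate().conjugate() == z
assert z.conjugate() * z == abs(z) ** 2          # z*z = x^2 + y^2 = |z|^2
assert z * (z.conjugate() / abs(z) ** 2) == 1    # z^{-1} = z*|z|^{-2}
assert (z + z.conjugate()) / 2 == z.real         # Re(z) = (z + z*)/2
assert (z - z.conjugate()) / 2j == z.imag        # Im(z) = (z - z*)/(2i)
```

The values 3 + 4i and 1 − 2i are chosen so that every quantity above is exact in floating point (|3 + 4i| = 5).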
Exercises

1. There is a unique real number x > 0 such that x³ = 2.
2. Show that Re (z + w) = Re (z) + Re (w), Im (z + w) = Im (z) + Im (w).
3. Suppose z = x + iy, x, y ∈ ℝ. Then
    |x| ≤ |z|,  |y| ≤ |z|,  |z| ≤ |x| + |y|.
4. For any z, w ∈ ℂ,
    |zw*| = |z||w|.
5. For any z, w ∈ ℂ,
    |z + w| ≤ |z| + |w|.
(Hint: |z + w|² = (z + w)*(z + w) = |z|² + 2 Re (zw*) + |w|²; apply Exercises 3 and 4 to estimate |Re (zw*)|.)
6. The Archimedean axiom O4 can be deduced from the other axioms for
the real numbers. (Hint: use O5.)
7. If a > 0 and n is a positive integer, there is a unique b > 0 such that
bⁿ = a.
3. Sequences of real and complex numbers

A sequence (z_n)_{n=1}^∞ in ℂ is said to converge to z ∈ ℂ if for each ε > 0
there is an N such that |z_n − z| < ε whenever n ≥ N. If so, we write
    z_n → z,  or  lim_{n→∞} z_n = z,  or  lim z_n = z.
The number z is called the limit of the sequence (z_n)_{n=1}^∞. Note that the limit is
unique: suppose z_n → z and also z_n → w. Given any ε > 0, we can take n so
large that |z_n − z| < ε and also |z_n − w| < ε. Then
    |z − w| ≤ |z − z_n| + |z_n − w| < ε + ε = 2ε.
Since |z − w| < 2ε for every ε > 0, we must have |z − w| = 0, i.e., z = w.
Proposition 3.1. Suppose (z_n)_{n=1}^∞ and (w_n)_{n=1}^∞ are sequences in ℂ, and
write z_n = x_n + iy_n, z = x + iy, with x_n, y_n, x, y real. Then
(a) z_n → z if x_n → x and y_n → y;
(b) conversely, if z_n → z, then x_n → x and y_n → y;
(c) if z_n → z and w_n → w, then z_n + w_n → z + w and z_n w_n → zw;
(d) if z_n → z ≠ 0 and each z_n ≠ 0, then z_n⁻¹ → z⁻¹.

Proof. (a) This follows from |z_n − z| ≤ |x_n − x| + |y_n − y|.
(b) By Exercise 3 of §2, |x_n − x| ≤ |z_n − z| and |y_n − y| ≤ |z_n − z|.
(c) For the sums,
    |(z_n + w_n) − (z + w)| = |(z_n − z) + (w_n − w)| ≤ |z_n − z| + |w_n − w|.
For the products, choose M so large that if n ≥ M, then |z_n − z| < 1. Then for
n ≥ M, |z_n| ≤ |z| + 1, and
    |z_n w_n − zw| = |z_n(w_n − w) + (z_n − z)w| ≤ (|z| + 1)|w_n − w| + |w||z_n − z|.
(d) Choose M so large that if n ≥ M, then |z_n − z| < ½|z|. Then for
n ≥ M,
    |z_n| ≥ |z_n + (z − z_n)| − |z − z_n| = |z| − |z − z_n| ≥ ½|z|,
so
    |z_n⁻¹ − z⁻¹| = |z − z_n| / (|z_n||z|) ≤ K|z − z_n|,  K = 2|z|⁻²,
and z_n⁻¹ → z⁻¹. ∎
If A ⊂ ℝ is bounded above, the least upper bound of A is often called the
supremum of A, written sup A. Thus
    sup A = lub A.
Similarly, the greatest lower bound of a set B ⊂ ℝ which is bounded below
is also called the infimum of B, written inf B:
    inf B = glb B.
Proposition 3.2. A bounded monotone sequence of real numbers converges:
if (x_n)_{n=1}^∞ is increasing and bounded above, then x_n → lub {x_n | n ∈ ℤ⁺},
while if it is decreasing and bounded below, then x_n → glb {x_n | n ∈ ℤ⁺}.

Given a bounded real sequence (x_n)_{n=1}^∞, let A_n = {x_n, x_{n+1}, ...} and set
    x'_n = inf A_n,  x''_n = sup A_n,  all n.
The sequence (x'_n)_{n=1}^∞ is increasing and bounded, the sequence (x''_n)_{n=1}^∞
decreasing and bounded; by Proposition 3.2 both converge, and we define
    lim inf x_n = lim x'_n,  lim sup x_n = lim x''_n.
A sequence (z_n)_{n=1}^∞ in ℂ is said to be a Cauchy sequence if for each ε > 0
there is an N such that |z_n − z_m| < ε whenever n, m ≥ N.
Theorem 3.3. A sequence in ℂ (or ℝ) converges if and only if it is a Cauchy
sequence.

Proof. Suppose first that z_n → z. Given ε > 0, choose N so that
|z_n − z| < ½ε whenever n ≥ N. If n, m ≥ N, then
    |z_n − z_m| ≤ |z_n − z| + |z − z_m| < ½ε + ½ε = ε,
so the sequence is a Cauchy sequence.
Conversely, suppose (x_n)_{n=1}^∞ is a real Cauchy sequence. Choose M so that
|x_n − x_M| < 1 whenever n ≥ M, and let
K = max {|x₁|, |x₂|, ..., |x_{M−1}|, |x_M| + 1}. Then for any n, |x_n| ≤ K.
Now since the sequence is bounded, we can associate the sequences (x'_n)_{n=1}^∞
and (x''_n)_{n=1}^∞ as above. Given ε > 0, choose N so that |x_n − x_m| < ε if
n, m ≥ N. Now suppose n ≥ m ≥ N. It follows that
    x_m − ε ≤ x_n ≤ x_m + ε,  n ≥ m ≥ N,
and therefore
    x_m − ε ≤ x'_n ≤ x''_n ≤ x_m + ε,  n ≥ m ≥ N.
Letting n → ∞, we get
    x_m − ε ≤ lim inf x_n ≤ lim sup x_n ≤ x_m + ε,  m ≥ N.
Since ε > 0 was arbitrary, lim inf x_n = lim sup x_n = x, say; and since
x'_n ≤ x_n ≤ x''_n for every n, x_n → x.
Finally, if (z_n)_{n=1}^∞ is a complex Cauchy sequence, z_n = x_n + iy_n, then
|x_n − x_m| ≤ |z_n − z_m| and |y_n − y_m| ≤ |z_n − z_m|, so (x_n) and (y_n) are
real Cauchy sequences. If x_n → x and y_n → y, then z_n → x + iy by
Proposition 3.1. ∎
Similarly, lim sup x_n is the unique number x'' with the properties
(i)'' for any ε > 0, there is an N such that x_n < x'' + ε whenever n ≥ N;
(ii)'' for any ε > 0 and any N, there is an n ≥ N such that x_n > x'' − ε.

Proof. We shall prove only the assertion about lim inf x_n. First, let
x'_n = inf {x_n, x_{n+1}, ...} = inf A_n as above, and let x' = lim x'_n = lim inf x_n.
Suppose ε > 0. Choose N so that x'_N > x' − ε. Then n ≥ N implies x_n ≥
x'_N > x' − ε, so (i)' holds. Given ε > 0 and N, we have x'_N ≤ x' < x' + ½ε.
Therefore x' + ½ε is not a lower bound for A_N, so there is an n ≥ N such that
x_n ≤ x' + ½ε < x' + ε. Thus (ii)' holds.
Now suppose x' is a number satisfying (i)' and (ii)'. From (i)' it follows
that inf A_n ≥ x' − ε whenever n ≥ N. Thus lim inf x_n ≥ x' − ε, all ε, so
lim inf x_n ≥ x'. From (ii)' it follows that for any N and any ε, inf A_N <
x' + ε. Thus for any N, inf A_N ≤ x', so lim inf x_n ≤ x'. We have lim inf
x_n = x'. ∎
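The auxiliary sequence x'_n = inf A_n can be computed directly for a concrete example. The sketch below (an illustration, not from the text) uses x_n = (−1)ⁿ + 1/n, for which lim inf x_n = −1:

```python
# Running infima over tails: out[i] = inf {x_i, x_{i+1}, ...} for a finite list.
def running_inf(xs):
    out, cur = [], float("inf")
    for x in reversed(xs):          # sweep from the tail inward
        cur = min(cur, x)
        out.append(cur)
    return list(reversed(out))

xs = [(-1) ** n + 1 / n for n in range(1, 2001)]
tail_infs = running_inf(xs)
# x'_n is increasing, as the infimum is taken over ever smaller tails.
assert tail_infs == sorted(tail_infs)
# It approaches lim inf x_n = -1 (within the truncation error of the list).
assert abs(tail_infs[0] - (-1)) < 1e-3
```

The symmetric computation with running suprema approximates lim sup x_n = 1 for the same sequence.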
Exercises

1. The sequence (1/n)_{n=1}^∞ has limit 0. (Use the Archimedean axiom, §2.)
2. If x_n > 0 and x_n → 0, then x_n^{1/2} → 0.
3. If a > 0, then a^{1/n} → 1 as n → ∞. (Hint: if a ≥ 1, let a^{1/n} = 1 + x_n.
By the binomial expansion, or by induction, a = (1 + x_n)ⁿ ≥ 1 + nx_n. Thus
x_n < n⁻¹a → 0. If a < 1, then a^{1/n} = (b^{1/n})⁻¹ where b = a⁻¹ > 1.)
4. n^{1/n} → 1 as n → ∞.
5. If a > 0 and (x_n)_{n=1}^∞ is a bounded real sequence, then
lim sup (ax_n) = a lim sup x_n.
4. Series

Suppose (z_n)_{n=1}^∞ is a sequence in ℂ. We associate to it a second sequence
(s_n)_{n=1}^∞, where
    s_n = ∑_{m=1}^{n} z_m = z₁ + z₂ + ... + z_n.
If the sequence (s_n)_{n=1}^∞ converges to s, the series ∑ z_n is said to converge,
and s is called its sum; we write
(4.1)    s = ∑_{n=1}^∞ z_n.
(Of course if the sequence is indexed differently, e.g., (z_n)_{n=0}^∞, we make the
corresponding changes in defining s_n and in (4.1).) If the sequence (s_n)_{n=1}^∞ does
not converge, the series ∑ z_n is said to diverge.
In particular, suppose (x_n)_{n=1}^∞ is a real sequence, and suppose each x_n ≥ 0.
Then the sequence (s_n)_{n=1}^∞ of partial sums is clearly an increasing sequence.
Either it is bounded, so (by Proposition 3.2) convergent, or for each M > 0
there is an N such that
    ∑_{m=1}^{n} x_m > M  whenever n ≥ N.
In the first case we write
    ∑_{n=1}^∞ x_n < ∞  if ∑ x_n converges,
and in the second case
    ∑_{n=1}^∞ x_n = ∞  if ∑ x_n diverges.
Examples

1. Consider the series ∑_{n=1}^∞ n⁻¹. We claim that (symbolically)
∑_{n=1}^∞ n⁻¹ = ∞. In fact
    ∑ n⁻¹ = 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ...
          ≥ 1 + 1/2 + 2(1/4) + 4(1/8) + ...
          = 1 + 1/2 + 1/2 + 1/2 + ... = ∞.
2. ∑_{n=1}^∞ n⁻² < ∞. In fact
    ∑ n⁻² = 1 + (1/2² + 1/3²) + (1/4² + ... + 1/7²) + ...
          ≤ 1 + 2(1/4) + 4(1/16) + ...
          = 1 + 1/2 + 1/4 + ... = 2 < ∞.
(We leave it to the reader to make the above rigorous by considering the
respective partial sums.)
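The contrast between the two examples is visible numerically. The sketch below (an illustration, not from the text) computes partial sums, checking that each doubling of the range adds at least 1/2 to the harmonic sum while ∑ 1/n² stays below 2:

```python
# Partial sums of sum 1/n and sum 1/n^2.
def partial_sum(terms, N):
    return sum(terms(n) for n in range(1, N + 1))

h = [partial_sum(lambda n: 1 / n, 2 ** k) for k in (4, 8, 12)]
# The grouping argument: the terms from N+1 to 2N alone exceed 1/2,
# so quadrupling N (two doublings) adds at least 1.
assert h[1] - h[0] >= 1 and h[2] - h[1] >= 1
s = partial_sum(lambda n: 1 / n ** 2, 100_000)
assert 1.6449 < s < 2          # the actual sum is pi^2/6 = 1.6449...
```

No finite computation proves divergence, of course; the grouping inequality in Example 1 is what turns the observed growth into a proof.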
How does one tell whether a series converges? The question is whether the
sequence (s_n)_{n=1}^∞ of partial sums converges. Theorem 3.3 gives a necessary and
sufficient condition for convergence of this sequence: that it be a Cauchy
sequence. However this only refines our original question to: how does one
tell whether a series has a sequence of partial sums which is a Cauchy
sequence? The five propositions below give some answers.
Proposition 4.1. If the series ∑ z_n converges, then z_n → 0 as n → ∞.

Proof. The partial sums form a Cauchy sequence, and z_n = s_n − s_{n−1} → 0. ∎

The converse is false: 1/n → 0, but ∑ 1/n diverges.

Proposition 4.2. If z ∈ ℂ and |z| < 1, then ∑_{n=0}^∞ zⁿ converges, with sum
(1 − z)⁻¹. If |z| ≥ 1, the series diverges.

Proof. Here
    s_n = 1 + z + z² + ... + z^{n−1} = (1 − zⁿ)(1 − z)⁻¹.
If |z| < 1 then zⁿ → 0, so s_n → (1 − z)⁻¹. If |z| ≥ 1, the terms zⁿ do not
converge to 0, and Proposition 4.1 applies. ∎

Proposition 4.3 (Comparison test). Suppose ∑ a_n is a convergent series with
all a_n ≥ 0, and suppose (z_n)_{n=1}^∞ is a sequence in ℂ such that for some constants
M > 0 and N, |z_n| ≤ Ma_n whenever n ≥ N. Then ∑ z_n converges.

Proof. Let (s_n)_{n=1}^∞ and (b_n)_{n=1}^∞ be the sequences of partial sums of
∑ z_n and ∑ a_n. If n > m ≥ N, then
    |s_n − s_m| = |∑_{j=m+1}^{n} z_j| ≤ ∑_{j=m+1}^{n} |z_j| ≤ M ∑_{j=m+1}^{n} a_j = M(b_n − b_m).
Since (b_n)_{n=1}^∞ converges, it is a Cauchy sequence; therefore (s_n)_{n=1}^∞ is a
Cauchy sequence, and ∑ z_n converges. ∎
Proposition 4.4 (Ratio test). Suppose z_n ≠ 0, all n.
(a) If lim sup |z_{n+1}/z_n| < 1, then ∑ z_n converges.
(b) If lim inf |z_{n+1}/z_n| > 1, then ∑ z_n diverges.

Proof. (a) In this case, take r so that lim sup |z_{n+1}/z_n| < r < 1. By
Proposition 3.4, there is an N so that |z_{n+1}/z_n| ≤ r whenever n ≥ N. Thus if
n > N,
    |z_n| ≤ r|z_{n−1}| ≤ r·r|z_{n−2}| ≤ ... ≤ r^{n−N}|z_N| = Mrⁿ,
where M = r^{−N}|z_N|. Propositions 4.2 and 4.3 imply convergence.
(b) In this case, by Proposition 3.4 there is an N so that |z_{n+1}/z_n| ≥ 1
whenever n ≥ N. Then |z_n| ≥ |z_N| > 0 for n ≥ N, so the terms do not
converge to 0, and Proposition 4.1 implies divergence. ∎

Corollary 4.5. If z_n ≠ 0 for n = 1, 2, ... and if lim |z_{n+1}/z_n| exists,
then the series ∑ z_n converges if the limit is < 1 and diverges if the limit is > 1.
Note that for both the series ∑ 1/n and ∑ 1/n², the limit in Corollary 4.5
equals 1. Thus either convergence or divergence is possible in this case.
Proposition 4.6 (Root test).
(a) If lim sup |z_n|^{1/n} < 1, then ∑ z_n converges.
(b) If lim sup |z_n|^{1/n} > 1, then ∑ z_n diverges.

Proof. (a) In this case, take r so that lim sup |z_n|^{1/n} < r < 1. By Proposition 3.4, there is an N so that |z_n|^{1/n} ≤ r whenever n ≥ N. If n ≥ N,
then |z_n| ≤ rⁿ. Propositions 4.2 and 4.3 imply convergence.
(b) In this case, Proposition 3.4 implies that |z_n|^{1/n} ≥ 1 for infinitely
many values of n. Thus Proposition 4.1 implies divergence. ∎

Note the tacit assumption in the statement and proof that (|z_n|^{1/n})_{n=1}^∞ is a
bounded sequence, so that the upper and lower limits exist. However, if this
sequence is not bounded, then in particular |z_n| ≥ 1 for infinitely many
values of n, and Proposition 4.1 implies divergence.
Corollary 4.7. If lim |z_n|^{1/n} exists, then the series ∑ z_n converges if the
limit is < 1 and diverges if the limit is > 1.

Note that for both the series ∑ 1/n and ∑ 1/n², the limit in Corollary 4.7
equals 1 (see Exercise 4 of §3). Thus either convergence or divergence is
possible in this case.
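It can be instructive to watch the two statistics numerically. The sketch below (an illustration added here, not part of the text) evaluates the ratio and root quantities at a large n for the borderline series and for a rapidly convergent one:

```python
# The quantities examined by the ratio test (4.4) and root test (4.6).
def ratio_stat(term, n):
    return abs(term(n + 1) / term(n))

def root_stat(term, n):
    return abs(term(n)) ** (1.0 / n)

n = 1000
# For 1/n and 1/n^2 both statistics are near 1: the tests are inconclusive.
assert abs(ratio_stat(lambda k: 1 / k, n) - 1) < 0.01
assert abs(root_stat(lambda k: 1 / k ** 2, n) - 1) < 0.02
# For z_k = x^k/k! the ratio is x/(k+1) -> 0, so the series converges
# for every x (compare Exercise 8 below).
x = 5.0
assert x / (n + 1) < 0.01
```

The inconclusive borderline is exactly why the comparison and integral tests remain useful alongside Corollaries 4.5 and 4.7.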
A particularly important class of series are the power series. If (a_n)_{n=0}^∞ is
a sequence in ℂ and z₀ a fixed element of ℂ, then the series
(4.2)    ∑_{n=0}^∞ a_n(z − z₀)ⁿ
is the power series around z₀ with coefficients (a_n)_{n=0}^∞. Here we use the convention that w⁰ = 1 for all w ∈ ℂ, including w = 0. Thus (4.2) is defined, as a
series, for each z ∈ ℂ. For z = z₀ it converges (with sum a₀), but for other
values of z it may or may not converge.

Theorem 4.8. Define R by
    R = 0  if (|a_n|^{1/n})_{n=1}^∞ is not a bounded sequence,
    R = (lim sup |a_n|^{1/n})⁻¹  if lim sup |a_n|^{1/n} > 0,
    R = ∞  if lim sup |a_n|^{1/n} = 0.
Then the power series (4.2) converges if |z − z₀| < R and diverges if
|z − z₀| > R.

Proof. We have
(4.3)    |a_n(z − z₀)ⁿ|^{1/n} = |a_n|^{1/n}|z − z₀|.
Suppose z ≠ z₀. If (|a_n|^{1/n})_{n=1}^∞ is not a bounded sequence, then neither is
(4.3), and we have divergence. Otherwise the conclusions follow from (4.3)
and the root test, Proposition 4.6. ∎

The number R defined in the statement of Theorem 4.8 is called the radius
of convergence of the power series (4.2). It is the radius of the largest circle in
the complex plane inside which (4.2) converges.
Theorem 4.8 is quite satisfying from a theoretical point of view: the
radius of convergence is shown to exist and is (in principle) determined in all
cases. However, recognizing lim sup |a_n|^{1/n} may be very difficult in practice.
The following is often helpful.

Theorem 4.9. Suppose a_n ≠ 0 for n ≥ N, and suppose lim |a_{n+1}/a_n| exists.
Then the radius of convergence R of the power series (4.2) is given by
    R = (lim |a_{n+1}/a_n|)⁻¹  if lim |a_{n+1}/a_n| > 0,
    R = ∞  if lim |a_{n+1}/a_n| = 0.

Proof. If z ≠ z₀, then
    |a_{n+1}(z − z₀)^{n+1}| / |a_n(z − z₀)ⁿ| = |a_{n+1}/a_n||z − z₀|,
and the conclusions follow from Corollary 4.5. ∎
Exercises

1. If ∑_{n=1}^∞ z_n converges with sum s and ∑_{n=1}^∞ w_n converges with sum t,
then ∑_{n=1}^∞ (z_n + w_n) converges with sum s + t.
2. Suppose ∑ a_n and ∑ b_n each have all nonnegative terms. If there are
constants M > 0 and N such that b_n ≥ Ma_n whenever n ≥ N, and if ∑ a_n =
∞, then ∑ b_n = ∞.
3. Show that ∑_{n=1}^∞ (n + 1)/(2n² + 1) diverges and ∑_{n=1}^∞ (n + 1)/(2n³ + 1)
converges. (Hint: use Proposition 4.3 and Exercise 2, and compare these series to
∑ 1/n, ∑ 1/n².)
4. (2^k test). Suppose a₁ ≥ a₂ ≥ ... ≥ a_n ≥ ... ≥ 0, all n. Then ∑_{n=1}^∞ a_n <
∞ if and only if ∑_{k=1}^∞ 2^k a_{2^k} < ∞. (Hint: use the methods used to show divergence of
∑ 1/n and convergence of ∑ 1/n².)
5. (Integral test). Suppose a₁ ≥ a₂ ≥ ... ≥ a_n ≥ ... ≥ 0, all n. Suppose
f: [1, ∞) → ℝ is a continuous function such that f(n) = a_n, all n, and f(y) ≤
f(x) if y ≥ x. Then ∑_{n=1}^∞ a_n < ∞ if and only if ∫₁^∞ f(x) dx < ∞.
6. Suppose p > 0. The series ∑_{n=1}^∞ n^{−p} converges if p > 1 and diverges
if p ≤ 1. (Use Exercise 4 or Exercise 5.)
7. The series ∑_{n=2}^∞ n⁻¹(log n)⁻² converges; the series ∑_{n=2}^∞ n⁻¹(log n)⁻¹
diverges.
8. The series ∑_{n=0}^∞ zⁿ/n! converges for any z ∈ ℂ. (Here 0! = 1, n! =
n(n − 1)(n − 2)···1.)
9. Determine the radius of convergence of
    ∑_{n=0}^∞ n! zⁿ,  ∑_{n=0}^∞ n! zⁿ/(2n)!.
10. (Alternating series). Suppose |x₁| ≥ |x₂| ≥ ... ≥ |x_n| ≥ ..., all n, x_n ≥ 0
if n is odd, x_n ≤ 0 if n is even, and x_n → 0. Then ∑ x_n converges. (Hint: the partial
sums satisfy s₂ ≤ s₄ ≤ s₆ ≤ ... ≤ s₅ ≤ s₃ ≤ s₁.)
11. ∑_{n=1}^∞ (−1)ⁿ/n converges.
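The bracketing in the hint to Exercise 10 is easy to observe numerically. The following sketch (an illustration, not from the text) computes partial sums of the alternating series with x_n = (−1)^{n+1}/n, which matches the sign convention of Exercise 10:

```python
# Partial sums s_1, s_2, ..., s_100 of sum (-1)^{n+1}/n.
sums, s = [], 0.0
for n in range(1, 101):
    s += (-1) ** (n + 1) / n
    sums.append(s)

even = sums[1::2]      # s2, s4, s6, ...
odd = sums[0::2]       # s1, s3, s5, ...
assert all(a <= b for a, b in zip(even, even[1:]))   # even sums increase
assert all(a >= b for a, b in zip(odd, odd[1:]))     # odd sums decrease
assert max(even) <= min(odd)                         # the two lists nest
```

The even partial sums increase, the odd ones decrease, and every even sum lies below every odd sum; the common limit squeezed between them is the sum of the series.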
5. Metric spaces

A metric on a set S is a function d from the product set S × S to ℝ, with
the properties
D1. d(x, y) ≥ 0, all x, y ∈ S, and d(x, y) = 0 if and only if x = y;
D2. d(x, y) = d(y, x), all x, y ∈ S;
D3. d(x, z) ≤ d(x, y) + d(y, z), all x, y, z ∈ S.
A metric space is a set S together with a metric d on it; we denote it (S, d).
As an example, take S to be the Euclidean plane ℝ² and consider
    d₁((x, y), (x', y')) = |x − x'| + |y − y'|,
(5.1)    d₂((x, y), (x', y')) = [(x − x')² + (y − y')²]^{1/2},
    d₃((x, y), (x', y')) = 1 if (x, y) ≠ (x', y'), 0 if (x, y) = (x', y').
If we coordinatize the Euclidean plane in the usual way, and if (x, y), (x', y')
are the coordinates of points P and P' respectively, then (5.1) gives the length
of the line segment PP' (Pythagorean theorem). In this example, D3 is the
analytic expression of the fact that the length of one side of a triangle is at
most the sum of the lengths of the other two sides. The same example in
different guise is obtained by letting S = ℂ and taking
(5.2)    d(z, w) = |z − w|.
Verification that the functions d₁, d₂, and d₃ satisfy the conditions D1, D2, D3
is left as an exercise. Note that d₃ works for any set S: if x, y ∈ S we set
d(x, y) = 1 if x ≠ y and 0 if x = y.
A still simpler example of a metric space is ℝ, with distance function d
given by
(5.3)    d(x, y) = |x − y|.
Again this coincides with the usual notion of the distance between two points
on the (coordinatized) line.
Another important example is ℝⁿ, the space of ordered n-tuples x =
(x₁, x₂, ..., x_n) of elements of ℝ. There are various possible metrics on ℝⁿ
like the metrics d₁, d₂, d₃ defined above for ℝ², but we shall consider here
only the generalization of the Euclidean distance in ℝ² and ℝ³. If x =
(x₁, x₂, ..., x_n) and y = (y₁, y₂, ..., y_n) we set
(5.4)    d(x, y) = [∑_{j=1}^{n} (x_j − y_j)²]^{1/2}.
The function d given by (5.4) satisfies D1 and D2, but condition D3 is not so easy to verify.
For now we shall simply assert that d satisfies D3; a proof will be given in a
more general setting in Chapter 4.
Often when the metric d is understood, one refers to the set S alone as a
metric space. For example, when we refer to ℝ, ℂ, or ℝⁿ as a metric space with
no indication what metric is taken, we mean the metric to be given by (5.3),
(5.2), or (5.4) respectively.
Suppose (S, d) is a metric space and T is a subset of S. We can consider
T as a metric space by taking the distance function on T × T to be the
restriction of d to T × T.
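Although the general proof of D3 for (5.4) is deferred to Chapter 4, the conditions can be spot-checked numerically. The following Python sketch (an illustration, not part of the text) tests D1-D3 for the Euclidean distance on ℝ³ over random sample points:

```python
# Spot-check of D1-D3 for the Euclidean metric (5.4) on R^3.
import itertools
import math
import random

def d(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

random.seed(0)
pts = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(15)]
for x, y, z in itertools.product(pts, repeat=3):
    assert d(x, y) >= 0                           # D1 (nonnegativity)
    assert d(x, y) == d(y, x)                     # D2 (symmetry)
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12   # D3, up to rounding
assert d(pts[0], pts[0]) == 0.0                   # D1 (vanishing on the diagonal)
```

A passing check over samples is of course no substitute for the proof; the small tolerance in D3 only absorbs floating-point rounding.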
The concept of metric space has been introduced to provide a uniform
treatment of such notions as distance, convergence, and limit which occur in
many contexts in analysis. Later we shall encounter metric spaces much more
exotic than ℝⁿ and ℂ.
Suppose (S, d) is a metric space, x is a point of S, and r is a positive real
number. The ball of radius r about x is defined to be the subset of S consisting
of all points in S whose distance from x is less than r:
    B_r(x) = {y | y ∈ S, d(x, y) < r}.
Clearly x ∈ B_r(x).

Examples
When S = ℝ (metric understood), B_r(x) is the open interval (x − r, x + r).
When S = ℝ² or ℂ, B_r(z) is the open disc of radius r centered at z. Here we
take the adjective "open" as understood; we shall see that the interval and
the disc in question are also open in the sense defined below.
A subset A ⊂ S is said to be a neighborhood of the point x ∈ S if A contains B_r(x) for some r > 0. Roughly speaking, this says that A contains all
points sufficiently close to x. In particular, if A is a neighborhood of x it
contains x itself.
A subset A ⊂ S is said to be open if it is a neighborhood of each of its
points. Note that the empty set is an open subset of S: since it has no points
(elements), it is a neighborhood of each one it has.
Proposition 5.1. Let (S, d) be a metric space.
(a) Any ball B_r(x) is open.
(b) If A₁, A₂, ..., A_n are open subsets of S, then ⋂_{m=1}^{n} A_m is also open.
(c) If {A_j}_{j∈J} is any collection of open subsets of S, then ⋃_{j∈J} A_j is open.

Proof. (a) Suppose y ∈ B_r(x), and let s = r − d(y, x) > 0. If z ∈ B_s(y),
then
    d(z, x) ≤ d(z, y) + d(y, x) < s + d(y, x) = r.
Thus z ∈ B_r(x), so B_s(y) ⊂ B_r(x) and B_r(x) is a neighborhood of y.
(b) Suppose x ∈ ⋂_{m=1}^{n} A_m. Since each A_m is open, there is r(m) > 0 so that
B_{r(m)}(x) ⊂ A_m. Let r = min {r(1), r(2), ..., r(n)}. Then r > 0 and B_r(x) ⊂
B_{r(m)}(x) ⊂ A_m, so B_r(x) ⊂ ⋂_{m=1}^{n} A_m. (Why is it necessary here to assume that
A₁, A₂, ..., A_n is a finite collection of sets?)
(c) Suppose x ∈ A = ⋃_{j∈J} A_j. Then for some particular j, x ∈ A_j. Since
A_j is open, there is an r > 0 so that B_r(x) ⊂ A_j ⊂ A. Thus A is open. ∎
Again suppose (S, d) is a metric space and suppose A ⊂ S. A point x ∈ S
is said to be a limit point of A if for every r > 0 there is a point of A with
distance from x less than r:
    B_r(x) ∩ A ≠ ∅,  all r > 0.
The set A is said to be closed if every limit point of A belongs to A.

The interval (0, 1] ⊂ ℝ has as its set of limit points the closed interval
[0, 1]. In fact if 0 < x ≤ 1, then x is certainly a limit point. If x = 0 and
r > 0, then B_r(0) ∩ (0, 1] = (−r, r) ∩ (0, 1] ≠ ∅. If x < 0 and r = |x|, then
B_r(x) ∩ (0, 1] = ∅, while if x > 1 and r = x − 1, then B_r(x) ∩ (0, 1] = ∅.
Thus the interval (0, 1] is neither open nor closed. The exact relationship
between open sets and closed sets is given in Proposition 5.3 below.

As another example, the set C = {y | y ∈ S, d(y, x) ≤ r} is closed. In fact,
suppose z is a limit point of C. Given ε > 0, there is a point y ∈ C with d(z, y) < ε, so
    d(z, x) ≤ d(z, y) + d(y, x) < ε + r.
Since this is true for every ε > 0, we must have d(z, x) ≤ r. Thus z ∈ C.
(b) Suppose x ∉ A = A₁ ∪ A₂ ∪ ... ∪ A_n. For each m, x is not a limit point of A_m,
so there is r(m) > 0 such that B_{r(m)}(x) ∩ A_m = ∅. Let
    r = min {r(1), r(2), ..., r(n)}.
A sequence (x_n)_{n=1}^∞ of points of S is said to converge to x ∈ S if for each
ε > 0 there is an N such that d(x_n, x) < ε whenever n ≥ N; we write
    lim_{n→∞} x_n = x  or  x_n → x.
When S = ℝ or ℂ (with the usual metric), this coincides with the definition
in §3. Again the limit, if any, is unique.
A sequence (x_n)_{n=1}^∞ in S is said to be a Cauchy sequence if for each ε > 0
there is an N so that d(x_n, x_m) < ε if n, m ≥ N. Again when S = ℝ or ℂ, this
coincides with the definition in §3.
The metric space (S, d) is said to be complete if every Cauchy sequence in
S converges to a point of S. As an example, Theorem 3.3 says precisely that
ℝ and ℂ are complete metric spaces with respect to the usual metrics.
Many processes in analysis produce sequences of numbers, functions,
etc., in various metric spaces. It is important to know when such sequences
converge. Knowing that the metric space in question is complete is a powerful
tool, since the condition that the sequence be a Cauchy sequence is then a
necessary and sufficient condition for convergence. We have already seen this
in our discussion of series, for example.
It follows that a sequence of points in ℝⁿ converges if and only if each of the
n corresponding sequences of coordinates converges in ℝ. Similarly, a sequence of points in ℝⁿ is a Cauchy sequence if and only if each of the n
corresponding sequences of coordinates is a Cauchy sequence in ℝ. Thus
completeness of ℝⁿ follows from completeness of ℝ. (This is simply a generalization of the argument showing ℂ is complete.)
Exercises
1. If (S, d) is a metric space, x ∈ S, and r ≥ 0, then {y | y ∈ S, d(x, y) ≤ r}
is a closed subset of S.
6. Compact sets

Suppose that (S, d) is a metric space, and suppose A is a subset of S. The
subset A is said to be compact if it has the following property: suppose that
for each x ∈ A there is given a neighborhood of x, denoted N(x); then there
are finitely many points x₁, x₂, ..., x_n in A such that A is contained in the
union of N(x₁), N(x₂), ..., N(x_n). (Note that we are saying that this is true
for any choice of neighborhoods of points of A, though the selection of
points x₁, x₂, ... may depend on the selection of neighborhoods.) It is
obvious that any finite subset A is compact.
Examples
1. The infinite interval (0, 00) c III is not compact. For example, let
N(x) = (x  I, x + I), x E (0, 00). Clearly no finite collection of these
intervals of finite length can cover all of (0, 00).
2. Even the finite interval (0, I] c III is not compact. To see this, let
N(x) = (!x,2), x E (0, I]. For any Xl, X2, ... , XlI E (0, 1], the union of the
intervals N(xj) will not contain y if y :::; ! min {Xl> X2, ... , x lI }.
3. The set A = {o} U {I, !, 1, t, ... } c IR is compact. In fact, suppose
for each x E A we are given a neighborhood N(x). In particular, the neighborhood N(O) of 0 contains an interval (e, e). Let M be a positive integer
larger than lie. Then lin E N(O) for n ~ M, and it follows that A c N(O) U
N(l) U NH) U ... U N(IIM).
The first two examples illustrate general requirements which compact sets
must satisfy. A subset A of S, where (S, d) is a metric space, is said to be
bounded if there is a ball B_r(x) containing A.

Proposition 6.1. If A is a compact subset of a metric space (S, d), then A
is bounded.

Proof. We may suppose A ≠ ∅. For each x ∈ A let N(x) = B₁(x). By
compactness, there are points x₁, x₂, ..., x_n ∈ A such that A ⊂ B₁(x₁) ∪ ... ∪
B₁(x_n). Let
    r = 1 + max {d(x₁, x₁), d(x₂, x₁), ..., d(x_n, x₁)}.
If y ∈ A then for some m, d(y, x_m) < 1. Therefore d(y, x₁) ≤ d(y, x_m) +
d(x_m, x₁) < 1 + d(x_m, x₁) ≤ r, and we have A ⊂ B_r(x₁). ∎
in some closed interval [a, b]. Suppose for each x ∈ A, we are given a neighborhood N(x) of x. We shall say that a closed subinterval of [a, b] is nice if
there are points x₁, x₂, …, xₘ ∈ A such that ⋃ⱼ₌₁ᵐ N(xⱼ) contains the intersection of the subinterval with A; we are trying to show that [a, b] itself is
nice. Suppose it is not. Consider the two subintervals [a, c] and [c, b], where
c = ½(a + b) is the midpoint of [a, b]. If both of these were nice, it would
follow that [a, b] itself is nice. Therefore we must have one of them not nice;
denote its endpoints by a₁, b₁, and let c₁ = ½(a₁ + b₁). Again, one of the
intervals [a₁, c₁] and [c₁, b₁] must not be nice; denote it by [a₂, b₂]. Continuing in this way we get a sequence of intervals [aₘ, bₘ], m = 0, 1, 2, …, such
that [a₀, b₀] = [a, b], each [aₘ, bₘ] is the left or right half of the interval
[aₘ₋₁, bₘ₋₁], and each interval [aₘ, bₘ] is not nice. It follows that a₀ ≤ a₁ ≤
⋯ ≤ aₘ ≤ bₘ ≤ ⋯ ≤ b₁ ≤ b₀ and bₘ − aₘ = 2⁻ᵐ(b₀ − a₀) → 0. Therefore there is a point x such that aₘ → x and bₘ → x. Moreover, aₘ ≤ x ≤ bₘ
for all m. We claim that x ∈ A; it is here that we use the assumption that A is
closed. Since [aₘ, bₘ] is not nice, it must contain points of A: otherwise
A ∩ [aₘ, bₘ] = ∅ would be contained in any ⋃ⱼ₌₁ᵐ N(xⱼ). Let
Thus, (yₖ)ₖ₌₁^∞ is just a selection of some (possibly all) of the xₙ's, taken in
order. As an example, if (xₙ)ₙ₌₁^∞ ⊂ ℝ has xₙ = (−1)ⁿ/n, and if we take
nₖ = 2k, then (xₙ)ₙ₌₁^∞ = (−1, ½, −⅓, ¼, −⅕, …) and (yₖ)ₖ₌₁^∞ = (½, ¼, ⅙, …).
As a second example, let (xₙ)ₙ₌₁^∞ be an enumeration of the rationals. Then for
any real number x, there is a subsequence of (xₙ)ₙ₌₁^∞ which converges to x.
Exercises
1. Suppose (xₙ)ₙ₌₁^∞ is a sequence in a metric space (S, d) which converges
to x ∈ S. Let A = {x} ∪ {xₙ}ₙ₌₁^∞. Then A is compact and sequentially compact.
2. Let ℚ, the rationals, have the usual metric. Let A = {x | x ∈ ℚ, x² < 2}.
Then A is bounded, and is closed as a subset of ℚ, but is not compact.
3. Suppose A is a compact subset of a metric space (S, d). Then A is
sequentially compact. (Hint: otherwise there is a sequence (xₙ)ₙ₌₁^∞ in A with
no subsequence converging to a point of A. It follows that for each x ∈ A
there is an r(x) > 0 such that the ball N(x) = B_{r(x)}(x) contains xₙ for only
finitely many values of n. Since A is compact, this would imply that
{1, 2, 3, …} is finite, a contradiction.)
4. A metric space is said to be separable if there is a dense subset which is
countable. If (S, d) is separable and A ⊂ S is sequentially compact, then A is
compact. (Hint: suppose for each x ∈ A we are given a neighborhood N(x).
Let {x₁, x₂, x₃, …} be a dense subset of S. For each x ∈ A we can choose an
integer m and a rational rₘ such that x ∈ B_{rₘ}(xₘ) ⊂ N(x). The collection of
balls B_{rₘ}(xₘ) so obtained is (finite or) countable; enumerate them as C₁,
C₂, …. Since each Cⱼ is contained in some N(x), it is sufficient to show that
for some n, ⋃ⱼ₌₁ⁿ Cⱼ ⊃ A. If this were not the case, we could take yₙ ∈ A,
yₙ ∉ ⋃ⱼ₌₁ⁿ Cⱼ, n = 1, 2, …. Applying the assumption of sequential compactness to this sequence and noting how the Cⱼ were obtained, we get a
contradiction.)
7. Vector spaces
A vector space over ℝ is a set X in which there are an operation of addition and an operation of multiplication by real numbers which satisfy certain
conditions. These are abstracted from the well-known operations with directed line
segments in Euclidean 3-space.
Specifically, we assume that there is a function from X × X to X, called
addition, which assigns to the ordered pair (x, y) ∈ X × X an element of X
denoted x + y. We assume
V1. (x + y) + z = x + (y + z), all x, y, z ∈ X.
V2. x + y = y + x, all x, y ∈ X.
V3. There exists 0 ∈ X such that x + 0 = x, all x.
V4. For all x ∈ X, there exists −x ∈ X such that x + (−x) = 0.
Summarizing: a vector space over ℝ, or a real vector space, is a set X with
addition satisfying V1–V4 and scalar multiplication satisfying V5–V8. The
elements of X are called vectors and the elements of ℝ, in this context, are
often called scalars.
Similarly, a vector space over C, or a complex vector space, is a set X
together with addition satisfying V1–V4 and scalar multiplication defined
from C × X to X and satisfying V5–V8. Here the scalars are, of course,
complex numbers.
Examples
1. IR is a vector space over IR, with addition as usual and the usual
multiplication as scalar multiplication.
2. The set with one element 0 is a vector space over ℝ or C with 0 + 0 =
0, a0 = 0, all a.
3. ℝⁿ is a vector space over ℝ if we take addition and scalar multiplication as
(x₁, x₂, …, xₙ) + (y₁, y₂, …, yₙ) = (x₁ + y₁, x₂ + y₂, …, xₙ + yₙ),
a(x₁, x₂, …, xₙ) = (ax₁, ax₂, …, axₙ).
4. The set of all functions f: S → ℝ, where S is any set, is a vector space
over ℝ if we take
(f + g)(s) = f(s) + g(s), s ∈ S,
(af)(s) = af(s), s ∈ S, a ∈ ℝ.
Note that 0x = 0 for every x ∈ X, where the 0 on the right is the element
0 of assumption V3. In fact,
0x = 0x + 0 = 0x + [0x + (−0x)]
= [0x + 0x] + (−0x) = (0 + 0)x + (−0x)
= 0x + (−0x) = 0.
Note also that the element −x in V4 is unique. In fact if x + y = 0, then
y = y + 0 = y + [x + (−x)] = [y + x] + (−x)
= [x + y] + (−x) = 0 + (−x) = (−x) + 0 = −x.
This implies that (−1)x = −x, since
x + (−1)x = [1 + (−1)]x = 0x = 0.
called the span of S. We write Y = span(S). The set S is said to span Y. Note
that if S is empty, the span is the subspace {0}.
Examples
Let X be the space of all polynomials with real coefficients. Let fₘ be the
polynomial defined by fₘ(x) = xᵐ. Then span{f₀, f₁, …, fₙ} is the subspace
Xₙ of polynomials of degree ≤ n.
A linear combination a₁x₁ + a₂x₂ + ⋯ + aₙxₙ of the vectors x₁, x₂, …,
xₙ is said to be nontrivial if at least one of the coefficients a₁, a₂, …, aₙ is not
zero. The vectors x₁, x₂, …, xₙ are said to be linearly dependent if some nontrivial linear combination of them is the zero vector. Otherwise they are said
to be linearly independent. More generally, an arbitrary (possibly infinite)
subset S is said to be linearly dependent if some nontrivial linear combination
of finitely many distinct elements of S is the zero vector; otherwise S is said
to be linearly independent. (Note that with this definition, the empty set is
linearly independent.)
Lemma 7.2. Vectors x₁, x₂, …, xₙ in X, n ≥ 2, are linearly dependent if
and only if some xⱼ is a linear combination of the others.
Proof. If x₁, x₂, …, xₙ are linearly dependent, there are scalars a₁,
a₂, …, aₙ, not all 0, such that Σ aⱼxⱼ = 0. Renumbering, we may suppose
a₁ ≠ 0. Then x₁ = Σⱼ₌₂ⁿ (−a₁⁻¹aⱼ)xⱼ.
Conversely, suppose x₁, say, is a linear combination Σⱼ₌₂ⁿ bⱼxⱼ. Letting
a₁ = −1, and aⱼ = bⱼ for j ≥ 2, we have Σ aⱼxⱼ = 0. □
The vector space X is said to be finite-dimensional if there is a finite subset
which spans X. Otherwise, X is said to be infinite-dimensional. A basis of a
(finite-dimensional) space X is an ordered finite subset (x₁, x₂, …, xₙ) which
is linearly independent and spans X.
Examples
1. ℝⁿ has basis vectors (e₁, e₂, …, eₙ), where e₁ = (1, 0, 0, …, 0),
e₂ = (0, 1, 0, …, 0), …, eₙ = (0, 0, …, 0, 1). This is called the standard
basis in ℝⁿ.
2. The set consisting of the single vector 1 is a basis for C as a complex
vector space, but not as a real vector space. The set (1, i) is a basis for C as a
real vector space, but is linearly dependent if C is considered as a complex
vector space.
Theorem 7.3. A finite-dimensional vector space X has a basis. Any two
bases of X have the same number of elements.
Proof. Let {x₁, x₂, …, xₙ} span X. If these vectors are linearly independent then we may order this set in any way and have a basis. Otherwise
if n ≥ 2 we may use Lemma 7.2 and renumber, so that xₙ is a linear combination Σⱼ₌₁ⁿ⁻¹ aⱼxⱼ. Since span{x₁, x₂, …, xₙ} = X, any x ∈ X is a linear combination
x = Σⱼ₌₁ⁿ bⱼxⱼ = Σⱼ₌₁ⁿ⁻¹ (bⱼ + bₙaⱼ)xⱼ.
Thus span{x₁, x₂, …, xₙ₋₁} = X. If these vectors are not linearly independent, we may renumber and argue as before to show that
span{x₁, x₂, …, xₙ₋₂} = X.
Eventually we reach a linearly independent subset which spans X, and thus
get a basis, or else we reach a linearly dependent set {x₁} spanning X and consisting of one element. This implies x₁ = 0, so X = {0}, and the empty set is
the basis.
Now suppose (x₁, x₂, …, xₙ) and (y₁, y₂, …, yₘ) are bases of X, and
suppose m ≤ n. If n = 0, then m = 0. Otherwise x₁ ≠ 0. The yⱼ's span X, so
x₁ = Σⱼ₌₁ᵐ aⱼyⱼ. Renumbering, we may assume a₁ ≠ 0. Then
y₁ = a₁⁻¹(x₁ − Σⱼ₌₂ᵐ aⱼyⱼ).
Thus y₁ is a linear combination of x₁, y₂, …, yₘ. It follows easily that
span{x₁, y₂, …, yₘ} ⊃ span{y₁, y₂, …, yₘ} = X. If m = 1 this shows that
span{x₁} = X, and the linear independence of the xⱼ's then implies n = 1.
Otherwise x₂ = bx₁ + Σⱼ₌₂ᵐ bⱼyⱼ. The independence of x₁ and x₂ implies
some bⱼ ≠ 0, j ≥ 2. Renumbering, we assume b₂ ≠ 0. Then
y₂ = b₂⁻¹(x₂ − bx₁ − Σⱼ₌₃ᵐ bⱼyⱼ),
so
span{x₁, x₂, y₃, …, yₘ} ⊃ X.
Continuing in this way, we see that after the yⱼ's are suitably renumbered,
each set {x₁, x₂, …, xₖ, yₖ₊₁, …, yₘ} spans X, k ≤ m. In particular, taking
k = m we have that {x₁, x₂, …, xₘ} spans X. Since the xⱼ's were assumed
linearly independent, we must have n ≤ m. Thus n = m. □
Theorem 7.4. Suppose X is a finite-dimensional vector space with dimension n. Any subset of X which spans X has at least n elements. Any subset of X
which is linearly independent has at most n elements. An ordered subset of n
elements which either spans X or is linearly independent is a basis.
Suppose (x₁, x₂, …, xₙ) is a basis of X. Then any x ∈ X can be written as
a linear combination x = Σ aⱼxⱼ. The scalars a₁, a₂, …, aₙ are unique; in
fact if x = Σ bⱼxⱼ, then
0 = x − x = Σ (aⱼ − bⱼ)xⱼ.
Since the xⱼ's are linearly independent, each aⱼ − bⱼ = 0, i.e., aⱼ = bⱼ. Thus
the equation x = Σ aⱼxⱼ associates to each x ∈ X a unique ordered n-tuple
(a₁, a₂, …, aₙ) of scalars, called the coordinates of x with respect to the basis
(x₁, …, xₙ). Note that if x and y correspond respectively to (a₁, a₂, …, aₙ)
and (b₁, b₂, …, bₙ), then ax corresponds to (aa₁, aa₂, …, aaₙ) and x + y
corresponds to (a₁ + b₁, a₂ + b₂, …, aₙ + bₙ). In other words, the basis
(x₁, x₂, …, xₙ) gives rise to a function from X onto ℝⁿ or Cⁿ which preserves
the vector operations.
Suppose X and Y are vector spaces, either both real or both complex.
A function T: X → Y is said to be linear if for all vectors x, y ∈ X and all
scalars a,
T(ax) = aT(x),  T(x + y) = T(x) + T(y).
A linear function is often called a linear operator or a linear transformation.
A linear function T: X → ℝ (for X a real vector space) or T: X → C (for X
a complex vector space) is called a linear functional.
Examples
1. Suppose X is a real vector space and (x₁, x₂, …, xₙ) a basis. Let
T(Σ aⱼxⱼ) = (a₁, a₂, …, aₙ). Then T: X → ℝⁿ is a linear transformation.
2. Let T(z) = z*, z ∈ C. Then T is a linear transformation of C into itself
if C is considered as a real vector space, but is not linear if C is considered as
a complex vector space.
3. Let fⱼ: ℝⁿ → ℝ be defined by fⱼ(x₁, x₂, …, xₙ) = xⱼ. Then fⱼ is a linear
functional.
4. Let X be the space of polynomials with real coefficients. The two functions S, T defined below are linear transformations from X to itself. If
f(x) = Σᵢ₌₀ⁿ aᵢxⁱ, then
S(f)(x) = Σᵢ₌₀ⁿ (i + 1)⁻¹aᵢxⁱ⁺¹,
T(f)(x) = Σᵢ₌₁ⁿ iaᵢxⁱ⁻¹.
Note that T(S(f)) = f, while S(T(f)) = f if and only if a₀ = 0.
Exercises
1. If the linearly independent finite set {x₁, x₂, …, xₙ} does not span X,
then there is a vector xₙ₊₁ ∈ X such that {x₁, x₂, …, xₙ, xₙ₊₁} is linearly
independent.
are subspaces of X and Y, respectively. (They are called the null space or
kernel of T, and the range of T, respectively.) T is 1-1 if and only if N(T) = {0}.
12. If X is finite-dimensional, the subspaces N(T) and R(T) in problem 11
satisfy dim N(T) + dim R(T) = dim X. In particular, if dim Y = dim X,
then T is 1-1 if and only if it is onto. (Hint: choose a basis for N(T) and use
problem 2 to extend to a basis for X. Then the images under T of the basis
elements not in N(T) are a basis for R(T).)
Chapter 2
Continuous Functions
1. Continuity, uniform continuity, and compactness
Suppose (S, d) and (S′, d′) are metric spaces. A function f: S → S′ is
said to be continuous at the point x ∈ S if for each ε > 0 there is a δ > 0 such
that
d′(f(x), f(y)) < ε  if d(x, y) < δ.
In particular, if S and S′ are subsets of ℝ or of C (with the usual metrics) then
the condition is
|f(y) − f(x)| < ε  if |y − x| < δ.
(This definition is equivalent to the following one, given in terms of convergence of sequences: f is continuous at x if f(xₙ) → f(x) whenever (xₙ)ₙ₌₁^∞
is a sequence in S which converges to x. The equivalence is left as an
exercise.)
Recall that we can add, multiply, and take scalar multiples of functions
with values in C (or ℝ): if f, g: S → C and a ∈ C, x ∈ S, then
(f + g)(x) = f(x) + g(x),
(af)(x) = af(x),
(fg)(x) = f(x)g(x),
(f/g)(x) = f(x)/g(x)  if g(x) ≠ 0.
|g(x)| = |g(y) + g(x) − g(y)| ≤ |g(y)| + ½|g(x)|,
so |g(y)| ≥ ½|g(x)| > 0. Thus 1/g is defined on B_r(x). Since the product of
functions continuous at x is continuous at x, we only need show that 1/g is
continuous at x. But if y ∈ B_r(x), then
Thus f + g, af, fg, and 1/g are continuous at x. The function f: S → S′ is
said to be uniformly continuous if for each ε > 0 there is a δ > 0 such that
d′(f(x), f(y)) < ε  if d(x, y) < δ,  for all x, y ∈ S;
when S and S′ are subsets of ℝ or C, the condition reads
|f(y) − f(x)| < ε  if |y − x| < δ.
Theorem 1.3. Suppose (S, d) and (S′, d′) are metric spaces and suppose
f: S → S′ is continuous. If S is compact, then f is uniformly continuous.
Proof. Given ε > 0, we know that for each x ∈ S there is a number
δ(x) > 0 such that
d′(f(x), f(y)) < ½ε  if d(x, y) ≤ 2δ(x).
The balls B_{δ(x)}(x), x ∈ S, cover S; by compactness there are points
x₁, …, xₙ such that the balls B_{δ(xⱼ)}(xⱼ) cover S. Let δ = min{δ(x₁), …, δ(xₙ)}.
Suppose d(x, y) < δ, and choose j so that d(x, xⱼ) < δ(xⱼ). Then
d(xⱼ, y) ≤ d(xⱼ, x) + d(x, y) ≤ 2δ(xⱼ),
so
d′(f(x), f(y)) ≤ d′(f(x), f(xⱼ)) + d′(f(xⱼ), f(y)) ≤ ½ε + ½ε = ε. □
Theorem 1.4. Suppose f: S → C is continuous and S is compact. Then f is
bounded: there is a constant M such that
|f(x)| ≤ M,  all x ∈ S.
Theorem 1.5. Suppose f: S → ℝ is continuous and S is compact. Then
there are points x₊, x₋ ∈ S such that
f(x₊) = sup{f(x) | x ∈ S},  f(x₋) = inf{f(x) | x ∈ S}.
Theorem 1.6 (Intermediate Value Theorem). Suppose f: [a, b] → ℝ is
continuous, and suppose f(a) ≤ c ≤ f(b) or f(b) ≤ c ≤ f(a). Then there is a
point x₀ ∈ [a, b] such that f(x₀) = c.
Proof. We consider only the case f(a) ≤ c ≤ f(b). Consider the two
intervals [a, ½(a + b)], [½(a + b), b]. For at least one of these, c lies between
the values of f at the endpoints; denote this subinterval by [a₁, b₁]. Thus
f(a₁) ≤ c ≤ f(b₁). Continuing in this way we get a sequence of intervals
[aₙ, bₙ] with [aₙ₊₁, bₙ₊₁] ⊂ [aₙ, bₙ], bₙ₊₁ − aₙ₊₁ = ½(bₙ − aₙ), and f(aₙ) ≤
c ≤ f(bₙ). Then there is x₀ ∈ [a, b] such that aₙ → x₀, bₙ → x₀. Thus
f(x₀) = lim f(aₙ) ≤ c,  f(x₀) = lim f(bₙ) ≥ c,
so f(x₀) = c. □
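The interval-halving in the proof above is constructive. As a sketch (not the book's code), it yields a bisection procedure, applied here to the existence of √2 from Exercise 2:

```python
def ivt_bisect(f, a, b, c, tol=1e-12):
    """Find x0 in [a, b] with f(x0) = c, assuming f is continuous and
    f(a) <= c <= f(b), by repeatedly halving the interval as in the proof."""
    assert f(a) <= c <= f(b)
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) <= c:
            a = m          # c still lies between f(a) and f(b)
        else:
            b = m
    return 0.5 * (a + b)

# Existence of sqrt(2): solve x^2 = 2 on [0, 2].
root = ivt_bisect(lambda x: x * x, 0.0, 2.0, 2.0)
```

Each iteration halves the interval, exactly as the nested intervals [aₙ, bₙ] do in the proof.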
Exercises
1. Prove the equivalence of the two definitions of continuity at a point.
2. Use Theorem 1.6 to give another proof of the existence of √2. Prove
that any positive real number has a positive nth root, n = 1, 2, ….
3. Suppose f: S → S′, where (S, d) and (S′, d′) are metric spaces. Prove
that the following are equivalent:
(a) f is continuous;
(b) for each open set A′ ⊂ S′, f⁻¹(A′) is open;
(c) for each closed set A′ ⊂ S′, f⁻¹(A′) is closed.
4. Find continuous functions fⱼ: (0, 1) → ℝ, j = 1, 2, 3, such that
f₁ is not bounded,
f₂ is bounded but not uniformly continuous,
f₃ is bounded but there are no points x₊, x₋ ∈ (0, 1) such that f₃(x₊) =
sup{f₃(x) | x ∈ (0, 1)}, f₃(x₋) = inf{f₃(x) | x ∈ (0, 1)}.
5. Suppose f: S → S′ is continuous and S is compact. Prove that f(S) is
compact.
6. Use Exercise 5 and Theorem 6.2 of Chapter 1 to give another proof of
Theorem 1.4.
7. Use Exercise 3 of Chapter 1, §6 to give a third proof of Theorem 1.4.
(Hint: take (xₙ)ₙ₌₁^∞ ⊂ S such that lim |f(xₙ)| = sup{|f(x)| | x ∈ S}, etc.)
8. Suppose (S, d) is a metric space, x ∈ S, and r > 0. Show that there is
a continuous function f: S → ℝ with the properties: 0 ≤ f(y) ≤ 1, all y ∈ S,
f(y) = 0 if y ∉ B_r(x), f(x) = 1. (Hint: take f(y) = max{1 − r⁻¹d(y, x), 0}.)
9. Suppose (S, d) is a metric space and suppose S is not compact. Show
that there is a continuous f: S → ℝ which is not bounded. (Hint: use Exercise
8.)
a = x₀ < x₁ < ⋯ < xₙ = b.
If f: [a, b] → C is a bounded function and P = (x₀, x₁, …, xₙ) is a partition of [a, b], then the Riemann sum of f associated with the partition P is the
number
S(f; P) = Σⱼ₌₁ⁿ f(xⱼ)(xⱼ − xⱼ₋₁).
More precisely, we mean that for any ε > 0 there is a δ > 0 such that
(2.1) |S(f; P) − z| < ε  if |P| < δ.
If this is the case, the number z is called the integral of f on [a, b] and denoted
by
∫_a^b f  or  ∫_a^b f(x) dx.
If |f(x)| ≤ M for all x ∈ [a, b], then every Riemann sum satisfies
|S(f; P)| ≤ Σ |f(xⱼ)|(xⱼ − xⱼ₋₁) ≤ M Σ (xⱼ − xⱼ₋₁) = M(b − a),
so
(2.2) |∫_a^b f| ≤ M(b − a).
Recall that f: [a, b] → C is a sum f = g + ih where g and h are real-valued functions. The functions g and h are called the real and imaginary
parts of f and are defined by
g(x) = Re(f(x)),  h(x) = Im(f(x)),  x ∈ [a, b].
Then f is integrable if and only if Re f and Im f are integrable, and
∫_a^b f = ∫_a^b Re f + i ∫_a^b Im f.
Proof. Recall that if z = x + iy, x, y ∈ ℝ, then
(2.3) ½|x| + ½|y| ≤ |z| ≤ |x| + |y|.
Let z = x + iy, x, y ∈ ℝ. Then
S(f; P) = S(Re f; P) + iS(Im f; P),
and S(Re f; P), S(Im f; P) are real. Apply (2.3) to S(f; P) − z. We get
½|S(Re f; P) − x| + ½|S(Im f; P) − y| ≤ |S(f; P) − z|
≤ |S(Re f; P) − x| + |S(Im f; P) − y|,
and the conclusion follows. □
Proof. For any partition P,
S(f + g; P) = S(f; P) + S(g; P),  S(cf; P) = cS(f; P),
so
∫_a^b (f + g) = ∫_a^b f + ∫_a^b g,  ∫_a^b cf = c ∫_a^b f.
The conclusions follow easily from these identities and the definition. □
Then
|S(f; P) − S(f; P′)| = |Σᵢ (f(xⱼ) − f(yᵢ))(yᵢ − yᵢ₋₁)| ≤ ε Σᵢ (yᵢ − yᵢ₋₁) = ε(b − a).
Similarly,
|S(f; Q) − S(f; P′)| < ε(b − a),
so
|S(f; P) − S(f; Q)| < 2ε(b − a).
Choose δ > 0 so small that
(2.5) |S(f; P) − ∫_a^b f| < ½ε,  |S(f; Q) − ∫_b^c f| < ½ε
if P, Q are partitions of [a, b], [b, c] respectively, |P| < δ, |Q| < δ. Suppose
P′ is a partition of [a, c], |P′| < δ. If b is a point of P′, then P′ determines
partitions P of [a, b] and Q of [b, c], |P| < δ, |Q| < δ. It follows from (2.5)
that
(2.6) |S(f; P′) − ∫_a^b f − ∫_b^c f| < ε.
If b is not a point of P′, let P″ be the partition obtained by adjoining b. Then
(2.6) holds with P″ in place of P′. Suppose |f(x)| ≤ M, all x ∈ [a, b]. The
sums S(f; P′) and S(f; P″) differ only in terms corresponding to the subinterval determined by P′ which contains b. It is easy to check, then, that
|S(f; P′) − S(f; P″)| < 2δM.
Thus
|S(f; P′) − ∫_a^b f − ∫_b^c f| < ε + 2δM.
Suppose f: [a₀, b₀] → C is integrable, and suppose a, b ∈ [a₀, b₀]. If a < b,
then f is integrable on [a, b]. (In fact f is integrable on [a₀, b], therefore on
[a, b], by two applications of Proposition 2.5.) If b < a, then f is integrable
on [b, a] and we define
∫_a^b f = −∫_b^a f.
We also define
∫_a^a f = 0.
Then one can easily check, case by case, that for any a, b, c ∈ [a₀, b₀],
(2.7) ∫_a^c f = ∫_a^b f + ∫_b^c f.
If f: [a, ∞) → C is integrable on each interval [a, b], we define
∫_a^∞ f = lim_{b→∞} ∫_a^b f
when the limit exists, i.e., when there is z ∈ C such that for each ε > 0 there
is b(ε) > 0 with |∫_a^b f − z| < ε if b ≥ b(ε).
Exercises
1. Show that if f: [a, b] → C is integrable, then
|∫_a^b f| ≤ M(b − a),
where
M = sup{|f(x)| | x ∈ [a, b]}.
Suppose f: (a, b) → C and x ∈ (a, b). The function f is said to be differentiable at x if the limit
(3.1) lim_{y→x} [f(y) − f(x)]/(y − x) = z
exists: i.e., if there is z ∈ C such that for each ε > 0 there is a δ > 0 with
(3.2) |[f(y) − f(x)]/(y − x) − z| < ε  if 0 < |y − x| < δ.
If so, the (unique) number z is called the derivative of f at x and denoted
variously by
f′(x),  Df(x),  or  df/dx (x).
(a, b), then f is said to be differentiable on (a, b). A function differentiable at
x ∈ (a, b) is continuous at x:
Proof. Choose δ > 0 so that (3.2) holds with z = f′(x) and ε = 1. Then
when |y − x| < δ we have
|f(y) − f(x)| ≤ |z(y − x)| + |y − x| = (|z| + 1)|y − x|.
As y → x, f(y) → f(x). □
Write f = g + ih, where g and h are real-valued. Then f is differentiable at
x if and only if g and h are differentiable at x, and then f′(x) = g′(x) + ih′(x).
Proof. As in the proof of Proposition 2.1, the limit (3.1) exists if and
only if the limits of the real and imaginary parts of this expression exist. If so,
these are respectively g′(x) and h′(x). □
Proposition 3.3. Suppose f: (a, b) → C and g: (a, b) → C are differentiable at x ∈ (a, b), and suppose c ∈ C. Then the functions f + g, cf, and fg are
differentiable at x, and
(f + g)′(x) = f′(x) + g′(x),
(cf)′(x) = cf′(x),
(fg)′(x) = f′(x)g(x) + f(x)g′(x).
If g(x) ≠ 0, then f/g is differentiable at x and
(f/g)′(x) = [f′(x)g(x) − f(x)g′(x)]g(x)⁻².
For the product rule, write
f(y)g(y) − f(x)g(x) = [f(y) − f(x)]g(y) + f(x)[g(y) − g(x)].
Divide by (y − x) and let y → x. Since g(y) → g(x), the first term converges
to f′(x)g(x). The second converges to f(x)g′(x). □
We recall the following theorem, which is only valid for realvalued
functions.
Theorem 3.4 (Mean Value Theorem). Suppose f: [a, b] → ℝ is continuous, and is differentiable at each point of (a, b). Then there is a c ∈ (a, b) such
that
f′(c) = [f(b) − f(a)](b − a)⁻¹.
Proof. Suppose first that f(b) = f(a). By Theorem 1.5 there are points
c₊ and c₋ in [a, b] such that f(c₊) ≥ f(x), all x ∈ [a, b], and f(c₋) ≤ f(x), all
x ∈ [a, b]. If c₊ and c₋ are both either a or b, then f is constant and f′(c) = 0,
all c ∈ (a, b). Otherwise, suppose c₊ ∈ (a, b). It follows that [f(y) − f(c₊)] ×
(y − c₊)⁻¹ is ≥ 0 if y < c₊ and ≤ 0 if y > c₊. Therefore the limit as y → c₊
is zero. Similarly, if c₋ ≠ a and c₋ ≠ b, then f′(c₋) = 0. Thus in this case
f′(c) = 0 for some c ∈ (a, b).
In the general case, let
g(x) = f(x) − [f(b) − f(a)](b − a)⁻¹(x − a).
Then g(a) = f(a) = g(b). By what we have just proved, there is a c ∈ (a, b)
such that
0 = g′(c) = f′(c) − [f(b) − f(a)](b − a)⁻¹. □
F(x) = ∫_a^x f.
Let g be the constant function g(t) = f(x), and given ε > 0 choose δ > 0 so
that |f(t) − f(x)| < ε if |t − x| < δ. Then
(3.3) F(y) − F(x) = ∫_a^y f − ∫_a^x f = ∫_x^y f = ∫_x^y g + ∫_x^y (f − g)
= f(x)(y − x) + ∫_x^y (f − g).
If |y − x| < δ, then
|∫_x^y (f − g)| ≤ |y − x| sup{|f(t) − f(x)| : t between x and y} < ε|y − x|.
on (a, b). For each y ∈ [f(a), f(b)] there is a unique point x = g(y) ∈ [a, b]
such that f(x) = y. The function g = f⁻¹ is differentiable at each point of
(f(a), f(b)), and
g′(y) = [f′(g(y))]⁻¹.
Proof. If x, y ∈ [a, b] and x < y, application of Theorem 3.4 to [x, y]
shows that f(x) < f(y). In particular, f(a) < f(b). By Theorem 1.6, if
f(a) ≤ y ≤ f(b) there is x ∈ [a, b] with f(x) = y. Since f is strictly increasing,
x is unique. Letting g = f⁻¹ we note that g is continuous. In fact, suppose
y ∈ (f(a), f(b)) and ε > 0. Let x = g(y) and take x′, x″ such that
a ≤ x′ < x < x″ ≤ b
and x − x′ ≤ ε, x″ − x ≤ ε. Let y′ = f(x′), y″ = f(x″). Then
y′ < y < y″. Let δ = min{y″ − y, y − y′}. If |y − w| < δ then w ∈ (y′, y″),
so g(w) ∈ (x′, x″), so |g(w) − g(y)| = |g(w) − x| < ε. Continuity at f(a) and
f(b) is proved similarly.
Finally, let x = g(y), x′ = g(y′). Then
[g(y′) − g(y)]/(y′ − y) = (x′ − x)/[f(x′) − f(x)].
As y′ → y, the continuity of g gives x′ → x, and the right side converges to
[f′(x)]⁻¹ = [f′(g(y))]⁻¹. □
If g is differentiable at x and f is differentiable at g(x), then the composite
function f ∘ g is differentiable at x, and
(f ∘ g)′(x) = f′(g(x))g′(x).
Combined with differentiation of the indefinite integral, this gives the change
of variables formula
∫_{g(a)}^{g(b)} f = ∫_a^b (f ∘ g)g′.
Proof. Let
F(y) = ∫_{g(a)}^y f,  G(x) = ∫_a^x (f ∘ g)g′.
We want to prove that F(g(b)) = G(b); we shall in fact show that F ∘ g = G
on [a, b]. Since F(g(a)) = G(a) = 0, it suffices to prove that the derivatives
are the same. But
(F ∘ g)′(x) = F′(g(x))g′(x) = f(g(x))g′(x) = G′(x). □
The higher derivatives are defined by
f⁽⁰⁾(x) = f(x),  f⁽¹⁾(x) = f′(x),  f⁽ᵏ⁺¹⁾(x) = (f⁽ᵏ⁾)′(x),  k = 0, 1, 2, ….
The function f: (a, b) → C is said to be of class Cᵏ, or k-times continuously
differentiable, if each of the derivatives f, f′, …, f⁽ᵏ⁾ is a continuous function
on (a, b). The function is said to be of class C^∞, or infinitely differentiable, if
f⁽ᵏ⁾ is continuous on (a, b) for every integer k ≥ 0.
Exercises
1. Show that any polynomial is infinitely differentiable.
2. Show that the Mean Value Theorem is not true for complex-valued
functions, in general, by finding a differentiable function f such that f(0) =
0 = f(1) but f′(x) ≠ 0 for 0 < x < 1.
3. State and prove a theorem analogous to Theorem 3.7 when f′(x) < 0,
all x ∈ (a, b).
4. Suppose f, g are of class Cᵏ and c ∈ C. Show that f + g, cf, and fg are
of class Cᵏ.
5. Suppose p is a polynomial with real coefficients. Show that between
any two distinct real roots of p there is a real root of p′.
6. Show that for any k = 0, 1, 2, … there is a function f: ℝ → ℝ which
is of class Cᵏ, such that f(x) = 0 if x ≤ 0, f(x) > 0 if x > 0. Is there a
function of class C^∞ having this property?
7. Prove the following extension of the mean value theorem: if f and g are
continuous real-valued functions on [a, b], and if the derivatives exist at each
point of (a, b), then there is c ∈ (a, b) such that
f′(c)[g(b) − g(a)] = g′(c)[f(b) − f(a)].
A sequence (fₙ)ₙ₌₁^∞ of complex-valued functions on a set S converges
uniformly to f if
lim_{n→∞} |fₙ − f| = 0,
where |g| = sup{|g(x)| : x ∈ S}. The sequence (fₙ)ₙ₌₁^∞ is said to be a uniform
Cauchy sequence if for each ε > 0 there is an integer N so that
(4.1) |fₙ(x) − fₘ(x)| ≤ ε,  all x ∈ S,  if n, m ≥ N.
Finally, suppose each fₙ is continuous on the metric space S. Suppose
x ∈ S and ε > 0. Choose N as above, and choose δ > 0 so small that
|f_N(y) − f_N(x)| < ε  if d(y, x) < δ.
Then, if d(y, x) < δ,
|f(y) − f(x)| ≤ |f(y) − f_N(y)| + |f_N(y) − f_N(x)| + |f_N(x) − f(x)| < 3ε.
∫_a^b f = lim_{n→∞} ∫_a^b fₙ.
Proof. By (2.2),
|∫_a^b f − ∫_a^b fₙ| = |∫_a^b (f − fₙ)| ≤ |fₙ − f|·(b − a).
As n → ∞, this → 0. □
Example
For each positive integer n, let fₙ: [0, 1] → ℝ be the function whose graph
consists of the line segments joining the pairs of points (0, 0), ((2n)⁻¹, 2n);
((2n)⁻¹, 2n), (n⁻¹, 0); (n⁻¹, 0), (1, 0). Then fₙ is continuous, fₙ(x) → 0 as
n → ∞ for each x ∈ [0, 1], but ∫₀¹ fₙ = 1, all n.
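A numerical sketch of this example (function names and step counts are my choices): each tent fₙ has integral 1, yet fₙ(x) → 0 for every fixed x, so pointwise convergence does not let us pass to the limit under the integral.

```python
def f_n(n, x):
    """The n-th tent: linear from (0,0) up to (1/(2n), 2n), down to (1/n, 0), then 0."""
    if x <= 1 / (2 * n):
        return 4 * n * n * x            # slope 4n^2 reaches height 2n at x = 1/(2n)
    if x <= 1 / n:
        return 2 * n * (2 - 2 * n * x)  # back down to 0 at x = 1/n
    return 0.0

def trapezoid_integral(n, steps=200000):
    """Composite trapezoid approximation of the integral of f_n over [0, 1]."""
    h = 1.0 / steps
    inner = sum(f_n(n, i * h) for i in range(1, steps))
    return h * (0.5 * f_n(n, 0.0) + inner + 0.5 * f_n(n, 1.0))
```

Geometrically the integral is the area of a triangle with base 1/n and height 2n, i.e. exactly 1 for every n.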
Consider now a power series
(4.2) Σₙ₌₀^∞ aₙ(z − z₀)ⁿ,  z ∈ C.
Recall from §3 of Chapter 1 that there is a number R, 0 ≤ R ≤ ∞, such that
(4.2) converges when |z − z₀| < R and diverges when |z − z₀| > R; R is
called the radius of convergence of (4.2). The partial sums
(4.3) fₙ(z) = Σₘ₌₀ⁿ aₘ(z − z₀)ᵐ
are continuous functions on C which converge at each point z with |z − z₀| <
R to the function
(4.4) f(z) = Σₘ₌₀^∞ aₘ(z − z₀)ᵐ,  |z − z₀| < R.
Theorem 4.3. Let R be the radius of convergence of the power series (4.2).
Then the function f defined by (4.4) is a continuous function. Moreover, the
functions fₙ defined by (4.3) converge to f uniformly on each disc
{z : |z − z₀| ≤ r},  0 < r < R.
Proof. Take s with r < s < R. Since the series converges at z₀ + s, the
terms aₙsⁿ are bounded: there is an M such that
(4.5) |aₙ|sⁿ ≤ M,  n = 0, 1, ….
Let δ = r/s < 1. If |z − z₀| ≤ r and n > m, then
|fₙ(z) − fₘ(z)| = |Σⱼ₌ₘ₊₁ⁿ aⱼ(z − z₀)ʲ| ≤ Σⱼ₌ₘ₊₁ⁿ |aⱼ||z − z₀|ʲ
≤ M Σⱼ₌ₘ₊₁ⁿ δʲ = M(δ^{m+1} − δ^{n+1})/(1 − δ) ≤ Mδ^{m+1}/(1 − δ).
Thus (fₙ)ₙ₌₁^∞ is a uniform Cauchy sequence on this disc, and the conclusion
follows. □
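The tail bound in the proof of Theorem 4.3 can be checked numerically. A sketch with data of my choosing (not from the text): take aₙ = 1 and z₀ = 0, so f(z) = 1/(1 − z) and R = 1.

```python
# With r = 0.5 < s = 0.8 < R = 1 we may take M = 1, since |a_n| s^n = 0.8^n <= 1.
r, s, M = 0.5, 0.8, 1.0
delta = r / s                                   # delta < 1

def f_partial(z, n):
    """Partial sum f_n(z) = sum_{k=0}^{n} z^k."""
    return sum(z**k for k in range(n + 1))

f = lambda z: 1.0 / (1.0 - z)                   # the limit function, |z| < 1

m = 10
tail_bound = M * delta**(m + 1) / (1 - delta)   # uniform bound for |f - f_m| on |z| <= r
worst = max(abs(f(z) - f_partial(z, m))
            for z in [r, -r, 0.3, -0.3, 0.5j, -0.5j])
```

The sampled errors stay below the geometric bound, uniformly over the smaller disc.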
Consider now a power series in a real variable,
(4.6) Σₙ₌₀^∞ aₙ(x − x₀)ⁿ,
with radius of convergence R, converging in the interval (x₀ − R, x₀ + R). Is this
function differentiable?
Theorem 4.4. Suppose the power series (4.6) has radius of convergence R.
Then the function f defined by this series is differentiable, and
(4.7) f′(x) = Σₙ₌₁^∞ naₙ(x − x₀)ⁿ⁻¹,  |x − x₀| < R.
Proof. We may assume x₀ = 0. We show first that the two series
(4.8) Σₙ₌₁^∞ naₙxⁿ⁻¹,  Σₙ₌₂^∞ n(n − 1)aₙxⁿ⁻²
converge uniformly for |x| ≤ r < R. Take r < s < R. Then (4.5) holds. It
follows that
|maₘxᵐ⁻¹| ≤ Mms⁻ᵐrᵐ⁻¹ = (M/r)mδᵐ,  δ = r/s,  |x| ≤ r.
Choose ε > 0 so small that (1 + ε)δ < 1. Then m ≤ (1 + ε)ᵐ for large m, so
mδᵐ ≤ [(1 + ε)δ]ᵐ for large m.
This last series converges, so the first series in (4.8) converges uniformly for
|x| ≤ r. Similarly, m² ≤ (1 + ε)ᵐ for large m, and the second series in (4.8)
converges uniformly for |x| ≤ r.
Let g be the function defined by the first series in (4.8). Recall that we are
taking x₀ to be 0. We want to show that
(4.9) f′(x) = g(x),  |x| < R.
Now
(4.10) f(y) − f(x) − g(x)(y − x) = Σₙ₌₂^∞ aₙ[yⁿ − xⁿ − nxⁿ⁻¹(y − x)].
For k ≥ 1, write yᵏ − xᵏ = (y − x)gₖ(x, y), where
gₖ(x, y) = yᵏ⁻¹ + yᵏ⁻²x + ⋯ + xᵏ⁻¹.
Thus |gₖ(x, y)| ≤ krᵏ⁻¹ if |x| ≤ r, |y| ≤ r. Then
yⁿ − xⁿ − nxⁿ⁻¹(y − x)
= (y − x)[yⁿ⁻¹ + xyⁿ⁻² + ⋯ + xⁿ⁻¹ − nxⁿ⁻¹]
= (y − x)[(yⁿ⁻¹ − xⁿ⁻¹) + (yⁿ⁻² − xⁿ⁻²)x + ⋯ + (y − x)xⁿ⁻²]
= (y − x)²[gₙ₋₁(x, y) + gₙ₋₂(x, y)x + ⋯ + g₁(x, y)xⁿ⁻²],
51
so
I[f(y) 
2:
co
n=2
Iy  xl
xl,
Applying Theorem 4.4 repeatedly, we see that f is infinitely differentiable and
(4.11) f⁽ᵏ⁾(x) = Σₙ₌ₖ^∞ n(n − 1)(n − 2)⋯(n − k + 1)aₙ(x − x₀)ⁿ⁻ᵏ,  |x − x₀| < R.
In particular,
(4.12) f⁽ᵏ⁾(x₀) = k!aₖ.
This means that the coefficients of the power series (4.6) are determined
uniquely by the function f (provided the radius of convergence is positive).
Exercises
1. Find the function defined for |x| < 1 by f(x) = Σₙ₌₁^∞ xⁿ/n. (Hint:
f(x) = ∫₀^x f′.)
2. Show that if f is defined by (4.6), then
∫_{x₀}^x f = Σₙ₌₀^∞ (n + 1)⁻¹aₙ(x − x₀)ⁿ⁺¹.
3. Find the function defined for |x| < 1 by f(x) = Σₙ₌₁^∞ nxⁿ⁻¹.
4. Suppose there is a sequence (xₙ)ₙ₌₁^∞ such that |xₙ₊₁ − x₀| < |xₙ − x₀|,
xₙ → x₀, and f(xₙ) = 0 for each n, where f is defined by (4.6). Show that
f(x) = 0 for all x. (Hint: show that a₀, a₁, a₂, … are each zero.)
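Exercise 3's series can be explored numerically. A sketch (mine, not the book's), assuming the closed form (1 − x)⁻² obtained by differentiating the geometric series Σ xⁿ = (1 − x)⁻¹ term by term, as Theorem 4.4 permits:

```python
def deriv_series(x, terms=200):
    """Partial sum of Exercise 3's series: sum_{n>=1} n x^(n-1)."""
    return sum(n * x**(n - 1) for n in range(1, terms + 1))

x = 0.5
lhs = deriv_series(x)
rhs = 1.0 / (1.0 - x)**2   # candidate closed form (an assumption to be checked)
```

At x = 0.5 the 200-term partial sum already matches the closed form to machine precision.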
(5.1) E(0) = 1,  E′(x) = E(x),  x ∈ ℝ.
Let us look for a solution of (5.1) in the form of a power series
E(x) = Σₙ₌₀^∞ aₙxⁿ.
Then (5.1) and Theorem 4.4 imply
Σₙ₌₁^∞ naₙxⁿ⁻¹ = Σₙ₌₀^∞ aₙxⁿ,
or
aₙ₊₁ = aₙ/(n + 1),  n = 0, 1, 2, ….
But
a₀ = E(0) = 1,
so inductively aₙ = (n!)⁻¹ and
(5.2) E(x) = Σₙ₌₀^∞ (n!)⁻¹xⁿ.
The ratio test shows that (5.2) converges for all real or complex x, and
application of Theorem 4.4 shows that E is indeed a solution of (5.1). We
shall see that it is the only solution.
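The recursion aₙ₊₁ = aₙ/(n + 1) makes (5.2) easy to evaluate. A sketch (not from the text) comparing partial sums with the library exponential:

```python
import math

def E(x, terms=30):
    """Partial sum of (5.2), built with the recursion a_{n+1} = a_n / (n + 1)."""
    total, term = 0.0, 1.0        # term holds x^n / n!, starting at n = 0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total
```

For moderate x, thirty terms already agree with math.exp to machine precision, reflecting the rapid decay of 1/n!.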
Theorem 5.1. For each a, c ∈ C there is a unique continuously differentiable function f: ℝ → C such that
(5.3) f(0) = c,  f′(x) = af(x),  x ∈ ℝ.
This function is
(5.4) f(x) = cE(ax) = c Σₙ₌₀^∞ (n!)⁻¹aⁿxⁿ.
Proof. The function given by (5.4) can be found by the argument used
to find E, and Theorem 4.4 shows that it is a solution of (5.3). To show
uniqueness, suppose f is any solution of (5.3), and let g(x) = E(−ax), so that
g(0) = 1,  g′(x) = −ag(x).
Then
(fg)′ = f′g + fg′ = afg − afg = 0,
so fg is constant; since f(0)g(0) = c,
f(x) = c/g(x),  all x.
Thus f is unique. □
Next we consider the inhomogeneous problem: given a continuous function
h, find f such that
(5.5) f(0) = c,  f′(x) = af(x) + h(x),  x ∈ ℝ.
Let f₀(x) = E(ax), g(x) = E(−ax), and write f = f₁f₀, where f₁ = gf. Then
f′ = f₁′f₀ + f₁f₀′ = f₁′f₀ + af,
so f satisfies (5.5) if and only if
f₁(0) = c,  f₁′(x)f₀(x) = h(x),
i.e., f₁′ = gh. Thus
f₁(x) = c + ∫₀^x gh,
and
(5.6) f(x) = cf₀(x) + f₀(x) ∫₀^x gh = cE(ax) + E(ax) ∫₀^x E(−at)h(t) dt.
Consider now the second order problem: given h, find f such that
(5.7) f(0) = d₀,  f′(0) = d₁,
(5.8) f″(x) + bf′(x) + cf(x) = h(x),  x ∈ ℝ.
Write
z² + bz + c = (z − a₁)(z − a₂),  all z ∈ C.
Thus f is a solution of (5.7), (5.8) if and only if
(5.9) f(0) = d₀,  f′ − a₂f = g,
where
(5.10) g(0) = d₁ − a₂d₀,  g′ − a₁g = h.
But (5.10) has a unique solution g, and once g has been found then (5.9) has
a unique solution. It follows that (5.7), (5.8) has a unique solution. □
Now we return to the function E,
E(z) = Σₙ₌₀^∞ (n!)⁻¹zⁿ,  z ∈ C.
We define the number e by
(5.11) e = E(1) = Σₙ₌₀^∞ (n!)⁻¹.
Theorem 5.4. The function E, restricted to ℝ, has the following properties:
(a) E(x) > 0, all x ∈ ℝ;
(b) for each y > 0 there is a unique x ∈ ℝ such that E(x) = y;
(c) E(x + y) = E(x)E(y), all x, y ∈ ℝ.
To prove (b), we wish to apply Theorem 1.6. Taking the first two terms in
the series shows (since y > 0) that E(y) > 1 + y > y. Also, E(y⁻¹) > y⁻¹,
so
E(−y⁻¹) = E(y⁻¹)⁻¹ < (y⁻¹)⁻¹ = y.
Thus there is x ∈ (−y⁻¹, y) such that E(x) = y. Since E′ = E > 0, E is
strictly increasing and x is unique.
We have proved (c) when x = −y. Multiplying by E(−x)E(−y), we want
to show
E(x + y)E(−x)E(−y) = 1,  all x, y ∈ ℝ.
From (c) it follows by induction that
E(nx) = E(x)ⁿ,  n = 1, 2, 3, ….
Thus
e = E(1) = E(n/n) = E(1/n)ⁿ,
so
e^{1/n} = E(1/n),  n = 1, 2, 3, …,
and
e^{m/n} = (e^{1/n})ᵐ = E(1/n)ᵐ = E(m/n).
Because of this it is customary to write
(5.12) eᶻ = E(z) = Σₙ₌₀^∞ (n!)⁻¹zⁿ.
The notation
(5.13) eᶻ = exp z
is also common.
We extend part of Theorem 5.4 to the complex exponential function.
(Recall that z* denotes the complex conjugate of z E C.)
(5.14) E(z + w) = E(z)E(w),  E(z)E(−z) = 1,  E(z*) = E(z)*,  all z, w ∈ C.
Proof. Let
g(x) = E((z + w)x)E(−zx)E(−wx),  x ∈ ℝ.
The notation (5.12) and the identity (5.14) can be used to consolidate
expressions for the solutions of the differential equations above. The unique
solution of
f(0) = c,  f′ = af + h
is
(5.15) f(x) = ce^{ax} + ∫₀^x e^{a(x−t)}h(t) dt.
The unique solution of
f(0) = d₀,  f′(0) = d₁,  f″ + bf′ + cf = h
is given by the corresponding formula (5.16), where a₁, a₂ are the roots of
z² + bz + c.
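Formula (5.15) can be checked numerically. The following sketch uses sample data of my choosing (a = 0.5, c = 1, h = cos), a composite trapezoid quadrature, and a central difference, and verifies f′ ≈ af + h:

```python
import math

a, c = 0.5, 1.0        # sample coefficients (my choice)
h = math.cos           # sample forcing term (my choice)

def trapezoid(g, lo, hi, steps=4000):
    """Composite trapezoid rule for the integral of g over [lo, hi]."""
    w = (hi - lo) / steps
    return w * (0.5 * g(lo) + sum(g(lo + i * w) for i in range(1, steps)) + 0.5 * g(hi))

def f(x):
    """(5.15): f(x) = c e^{ax} + integral_0^x e^{a(x-t)} h(t) dt."""
    return c * math.exp(a * x) + math.exp(a * x) * trapezoid(
        lambda t: math.exp(-a * t) * h(t), 0.0, x)

# Check the differential equation f' = a f + h at a sample point.
x0, d = 1.2, 1e-5
fprime = (f(x0 + d) - f(x0 - d)) / (2 * d)
residual = abs(fprime - (a * f(x0) + h(x0)))
```

The residual is limited only by the quadrature and difference step sizes, consistent with (5.15) solving the equation exactly.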
Exercises
1. Find the solution of
f″ − 2f′ + f = 0,  f(0) = 1,  f′(0) = 0.
2. Let f and g be the solutions of
(*) f″ + bf′ + cf = 0,  g″ + bg′ + cg = 0
with
f(0) = 1,  f′(0) = 0,  g(0) = 0,  g′(0) = 1.
If d₁, d₂ ∈ C, then h = d₁f + d₂g is a solution of
h″ + bh′ + ch = 0.
Show that conversely if h is a solution of this equation then there are unique
constants d₁, d₂ ∈ C such that h = d₁f + d₂g. (This shows that the set of
solutions of (*) is a two-dimensional complex vector space, and (f, g) is a
basis.)
3. Suppose h(x) = Σₙ₌₀^∞ dₙxⁿ, the series converging for all x. Show that
the solution of
f(0) = 0 = f′(0),  f″ + bf′ + cf = h
is of the form Σₙ₌₀^∞ aₙxⁿ, where this series converges for all x. (Hint: determine the coefficients a₀, a₁, … inductively, and prove convergence.)
4. Suppose z³ + bz² + cz + d = (z − a₁)(z − a₂)(z − a₃), all z ∈ C.
Discuss the problem of finding a function f such that
f(0) = e₀,  f′(0) = e₁,  f″(0) = e₂,
f‴ + bf″ + cf′ + df = h.
(6.1) S(0) = 0,  S′(0) = 1,  S″ + S = 0,
(6.2) C(0) = 1,  C′(0) = 0,  C″ + C = 0.
where
g(x) = e^{ix}.
Thus
S(x) = ∫₀^x e^{−i(x−t)}e^{it} dt = e^{−ix} ∫₀^x e^{2it} dt
= e^{−ix}(2i)⁻¹e^{2it} |₀^x = (2i)⁻¹e^{−ix}(e^{2ix} − 1),
so
(6.3) S(x) = (2i)⁻¹(e^{ix} − e^{−ix}).
Similarly,
C(x) = ½(e^{ix} + e^{−ix}),  x ∈ ℝ.
(c) there is a smallest positive number p such that C(p) = 0;
(d) if p is the number in (c), then
S(x + 4p) = S(x),  C(x + 4p) = C(x),  all x ∈ ℝ.
Proof. Since the exponential function exp(ax) is of class C^∞ as a function of x for each a ∈ C, S and C are of class C^∞. Since (exp(ix))* =
exp(−ix), S and C are real-valued. In fact,
S(x) = Im(e^{ix}),  C(x) = Re(e^{ix}),
so
e^{ix} = C(x) + iS(x).
Suppose 0 < C(x) for all x > 0. Since S′ = C, S would be strictly increasing,
and for x ≥ 1,
C(x) = C(1) + ∫₁^x C′(t) dt = C(1) − ∫₁^x S(t) dt ≤ C(1) − S(1)(x − 1),  x ≥ 1.
But for large x the last expression is negative, a contradiction. Thus C(x) = 0
for some x > 0. Let p = inf{x | x > 0, C(x) = 0}. Then p ≥ 0. There is a
sequence (xₙ)₁^∞ such that C(xₙ) = 0, p ≤ xₙ ≤ p + 1/n. Thus C(p) = 0, and
p is the smallest positive number at which C vanishes.
To prove (d) we note that
1 = S(p)² + C(p)² = S(p)²,
so S(p) = ±1; since S(0) = 0 and S′ = C > 0 on [0, p), S(p) = 1. It follows
that
S(x + p) = C(x),  C(x + p) = −S(x),  x ∈ ℝ.
Then
S(x + 4p) = S(x),  C(x + 4p) = C(x).
We define π by
π = 2p,
59
p the number in (c), (d) of Theorem 6.3. We define the functions sine and
cosine for all
Z E
C by
(6.5)
1
sinz = _.(e I2  e 12 ) =
2: [(2n + I)!]1(I)"z2"+1,
,,=0
(6.6)
1
cos z =  (e 12
2: [2n!]1( I)"z2".
,,=0
2l
+ e 12 ) =
co
co
Note that because of the way we have defined π and the sine and cosine
functions, it is necessary to prove that they have the usual geometric significance.
Let γ(s) = (cos s, sin s), 0 ≤ s ≤ t, and take a partition
0 = t₀ < t₁ < ⋯ < tₙ = t.
The sum of the lengths of the line segments joining the points γ(tᵢ₋₁) and
γ(tᵢ), i = 1, 2, …, n, is
(6.7) Σᵢ₌₁ⁿ [(cos tᵢ − cos tᵢ₋₁)² + (sin tᵢ − sin tᵢ₋₁)²]^{1/2}.
By the Mean Value Theorem, there are tᵢ′ and tᵢ″ between tᵢ₋₁ and tᵢ such that
(6.7) equals
Σᵢ₌₁ⁿ [(sin tᵢ′)² + (cos tᵢ″)²]^{1/2}(tᵢ − tᵢ₋₁).
Since sine and cosine are continuous, hence uniformly continuous on [0, t],
and since (sin t)² + (cos t)² = 1, it is not hard to show that as the maximum
length |tᵢ − tᵢ₋₁| → 0, (6.7) approaches t. □
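The convergence of the inscribed polygon length (6.7) to t can be seen numerically. A sketch using a uniform partition (my choice; any partition with small mesh works):

```python
import math

def inscribed_length(t, n):
    """The polygonal length (6.7) for gamma(s) = (cos s, sin s) with the
    uniform partition t_i = i t / n."""
    pts = [(math.cos(i * t / n), math.sin(i * t / n)) for i in range(n + 1)]
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, n + 1))

t = math.pi / 2          # a quarter of the circle
L = inscribed_length(t, 1000)
```

Each chord is shorter than the arc it subtends, so L increases toward t as the partition is refined.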
This theorem shows that sine, cosine, and π as defined above do indeed
have the usual interpretation. Next we consider them as functions from
C to C.
Theorem 6.4. The sine, cosine, and exponential functions have the following
properties:
(a) exp(iz) = cos z + i sin z, all z ∈ C,
(b) sin(z + 2π) = sin z, cos(z + 2π) = cos z, exp(z + 2πi) = exp z,
all z ∈ C,
(c) if w ∈ C and w ≠ 0, there is a z ∈ C such that w = exp(z). If also
w = exp(z′), then there is an integer n such that z′ = z + 2nπi.
Proof. The identity (a) follows from solving (6.5) and (6.6) for exp(iz).
By Theorem 6.3 and the definition of π,
exp(2πi) = cos 2π + i sin 2π = 1.
Then since exp(z + w) = exp(z)exp(w),
exp(z + 2πi) = exp z,
and (b) follows. If y ∈ ℝ,
|exp(iy)|² = |cos y + i sin y|² = (cos y)² + (sin y)² = 1.
Therefore
|exp(x + iy)| = |exp x||exp(iy)| = exp x.
61
or
y'
= y + 2mr + h,
[0, 2TT).
= exp (iy +
ih),
so
1
+ 2nTT.
The trigonometric functions tangent, secant, etc., are defined for complex
values by
tan z = sin z/cos z,  cos z ≠ 0,
etc. If w, z ∈ C and w = exp z, then z is said to be a logarithm of w, written
z = log w.
Theorem 6.4 shows that any w ≠ 0 has a logarithm; in fact it has infinitely
many, whose imaginary parts differ by integral multiples of 2π. Thus log w is
not a function of w, in the usual sense. It can be considered a function by
restricting the range of values of the imaginary part. For example, if w ≠ 0,
the z such that exp z = w, Im z ∈ [a, a + 2π), is unique, for any given choice
of a ∈ ℝ.
If x > 0, it is customary to take for log x the unique real y such that exp y = x. Thus as a function from (0, ∞) to ℝ, the logarithm is the inverse of the exponential function. Theorem 3.7 shows that it is differentiable, with
d/dx (log x) = ((d/dy) e^y |_{y = log x})⁻¹ = (e^{log x})⁻¹ = x⁻¹.
Thus
(6.8)   log x = ∫₁ˣ t⁻¹ dt,   x > 0.
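The identity (6.8) is easy to check numerically. The following sketch (in Python, not part of the original text; the grid size and tolerance are arbitrary choices) compares a midpoint-rule approximation of ∫₁ˣ t⁻¹ dt with the library logarithm.

```python
import math

def integral_reciprocal(x, n=100000):
    # Midpoint-rule approximation of the integral of 1/t over [1, x].
    h = (x - 1.0) / n
    return sum(1.0 / (1.0 + (k + 0.5) * h) for k in range(n)) * h

# (6.8): log x equals the integral of t^{-1} from 1 to x
for x in (2.0, 5.0, 10.0):
    assert abs(integral_reciprocal(x) - math.log(x)) < 1e-6
```

The midpoint rule has error O(h²), so agreement to six decimal places is expected with this step size.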
Exercises
1. Prove the identities
sin (z + w) = sin z cos w + cos z sin w,
cos (z + w) = cos z cos w − sin z sin w.
2. Show that tan x is a strictly increasing function from (−½π, ½π) onto ℝ. Show that the inverse function tan⁻¹ x satisfies
(d/dx)(tan⁻¹ x) = (1 + x²)⁻¹.
3. Show that
∫_{−∞}^{∞} (1 + x²)⁻¹ dx = π.
4. Show that
log (1 + x) = ∫₀ˣ (1 + t)⁻¹ dt,   −1 < x < ∞.
5. Show that
log (1 + x) = ∑_{n=1}^∞ (−1)^{n−1} xⁿ/n,   −1 < x < 1.
Suppose A ⊂ ℝ² is an open set: for each (x₀, y₀) ∈ A there is an r > 0 such that A contains every point at distance less than r from (x₀, y₀). In particular, A contains (x, y₀) for each x in the open interval x₀ − r < x < x₀ + r, and A contains (x₀, y) for each y in the open interval y₀ − r < y < y₀ + r.
Suppose f: A → C. It makes sense to ask whether f(x, y₀) is differentiable as a function of x at x₀. If so, we denote the derivative by D₁f(x₀, y₀). Similarly, D₂f(x₀, y₀) denotes the derivative of f(x₀, y) with respect to y at y₀, if it exists.
The derivatives D₁f, D₂f are called the first order partial derivatives of f. The second order partial derivatives are the first order derivatives of the first order derivatives:
D₁²f = D₁(D₁f),   D₁D₂f = D₁(D₂f),
D₂²f = D₂(D₂f),   D₂D₁f = D₂(D₁f).
In classical notation these are written
∂²f/∂x²,   ∂²f/∂x∂y,   ∂²f/∂y²,   ∂²f/∂y∂x,
etc.
Theorem 7.1. Suppose A ⊂ ℝ² is open and f: A → C has continuous first and second order partial derivatives. Then D₁D₂f = D₂D₁f.
Proof. Suppose (a, b) ∈ A. Choose r > 0 so small that A contains the closed square with center (a, b), edges parallel to the coordinate axes, and sides of length 2r. Thus (x, y) ∈ A if
|x − a| ≤ r   and   |y − b| ≤ r.
Let g(y) = f(a, y). We claim that
(7.1)   D₂f(x, y) = ∫_a^x D₂D₁f(s, y) ds + g′(y).
In fact,
e⁻¹[f(x, y + e) − f(x, y)] − ∫_a^x D₂D₁f(s, y) ds − g′(y)
  = ∫_a^x [e⁻¹(D₁f(s, y + e) − D₁f(s, y)) − D₂D₁f(s, y)] ds + [e⁻¹(g(y + e) − g(y)) − g′(y)].
The second term in brackets → 0 as e → 0. If f is real-valued, we may apply the Mean Value Theorem to the first term and conclude that for each s and y and for each small e, there is a point y′ = y′(s, y, e) between y and y + e such that
(7.2)   e⁻¹(D₁f(s, y + e) − D₁f(s, y)) − D₂D₁f(s, y) = D₂D₁f(s, y′) − D₂D₁f(s, y).
Now |y′ − y| < e, so |(s, y′) − (s, y)| < e. Since D₂D₁f is uniformly continuous on the square |x − a| ≤ r, |y − b| ≤ r, it follows that the maximum value of (7.2) converges to zero as e → 0. This implies convergence to zero of the integral of (7.1) with respect to s, proving (7.1) when f is real. In the general case, we look at the real and imaginary parts of f separately. Differentiating (7.1) with respect to x gives D₁D₂f = D₂D₁f. □
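Theorem 7.1 can be illustrated numerically: for a smooth function, central-difference approximations of D₁D₂f and D₂D₁f agree to high accuracy. A sketch in Python (the function f below is just an arbitrary smooth example, not from the text):

```python
import math

def f(x, y):
    # an arbitrary smooth test function
    return math.sin(x * y) + x**3 * y**2

def d1d2(g, x, y, h=1e-4):
    # central-difference approximation of D1(D2 g)(x, y)
    d2 = lambda s: (g(s, y + h) - g(s, y - h)) / (2 * h)
    return (d2(x + h) - d2(x - h)) / (2 * h)

def d2d1(g, x, y, h=1e-4):
    # central-difference approximation of D2(D1 g)(x, y)
    d1 = lambda t: (g(x + h, t) - g(x - h, t)) / (2 * h)
    return (d1(y + h) - d1(y - h)) / (2 * h)

x, y = 0.7, 1.3
# exact mixed partial of this particular f: cos(xy) - xy sin(xy) + 6x²y
exact = math.cos(x * y) - x * y * math.sin(x * y) + 6 * x**2 * y
assert abs(d1d2(f, x, y) - exact) < 1e-4
assert abs(d2d1(f, x, y) - exact) < 1e-4
```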
Remarks. In the course of proving Theorem 7.1 we have, in effect, proved the following. If f is a complex-valued function of class C¹ defined on a rectangle |x − a| < r₁, |y − b| < r₂, then the derivative with respect to y of
∫_a^x f(s, y) ds
is
∫_a^x D₂f(s, y) ds.
Similarly, the derivative with respect to x of
∫_b^y f(x, t) dt
is
∫_b^y D₁f(x, t) dt.
Suppose f is of class C¹ on the square |x − a| ≤ r, |y − a| ≤ r. Then
F(y) = ∫_a^y f(s, y) ds
is defined for |y − a| ≤ r, and
(7.3)   F′(y) = ∫_a^y D₂f(s, y) ds + f(y, y).
In fact
F(y + e) − F(y) = ∫_a^y [f(s, y + e) − f(s, y)] ds + ∫_y^{y+e} f(s, y + e) ds.
Divide by e and let e → 0. By the argument above, the first integral converges to
∫_a^y D₂f(s, y) ds.
In the second integral, we are integrating a function whose values are very close to f(y, y), over an interval of length e. Then, dividing by e, we see that the limit is f(y, y).
We need two results on change of order of integration.
Theorem 7.2. Suppose f is a continuous complex-valued function on the rectangle
A = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}.
Let
g(x) = ∫_c^d f(x, t) dt,   h(y) = ∫_a^b f(s, y) ds.
Then
∫_a^b g(x) dx = ∫_c^d h(y) dy.
Proof. The preceding remarks show that g and h are not only continuous but differentiable. More generally, set
F₁(x, y) = ∫_a^x {∫_c^y f(s, t) dt} ds,   F₂(x, y) = ∫_c^y {∫_a^x f(s, t) ds} dt.
We want to show that F₁(b, d) = F₂(b, d). The remarks preceding this theorem show that, for each fixed x,
D₂F₁(x, y) = ∫_a^x f(s, y) ds = D₂F₂(x, y).
Since F₁(x, c) = F₂(x, c) = 0, it follows that F₁ = F₂. □
Theorem 7.3. Suppose f is a continuous complex-valued function on the triangle
B = {(x, y) | 0 ≤ y ≤ x ≤ a}.
Then
∫₀^a {∫₀^x f(x, y) dy} dx = ∫₀^a {∫_y^a f(x, y) dx} dy.
Proof. Consider the two functions of t, 0 ≤ t ≤ a, defined by
G₁(t) = ∫₀^t {∫₀^x f(x, y) dy} dx,   G₂(t) = ∫₀^t {∫_y^t f(x, y) dx} dy.
By the remarks following Theorem 7.1, the derivatives of these functions with respect to t are
G₁′(t) = ∫₀^t f(t, y) dy,   G₂′(t) = ∫₀^t f(t, y) dy + ∫_t^t {⋯} dy = ∫₀^t f(t, y) dy.
Thus these functions differ by a constant. Since both are zero when t = 0, they are identical. □
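The conclusion of Theorem 7.3 can be checked numerically for a particular integrand. A sketch in Python (the midpoint sums, grid sizes, and the test function x·y are arbitrary choices, not from the text):

```python
def inner_then_outer(f, a, n=400):
    # midpoint-sum approximation of ∫_0^a { ∫_0^x f(x, y) dy } dx
    h = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        m = max(1, int(n * x / a))
        hy = x / m
        total += sum(f(x, (j + 0.5) * hy) for j in range(m)) * hy * h
    return total

def outer_then_inner(f, a, n=400):
    # midpoint-sum approximation of ∫_0^a { ∫_y^a f(x, y) dx } dy
    h = a / n
    total = 0.0
    for j in range(n):
        y = (j + 0.5) * h
        m = max(1, int(n * (a - y) / a))
        hx = (a - y) / m
        total += sum(f(y + (i + 0.5) * hx, y) for i in range(m)) * hx * h
    return total

f = lambda x, y: x * y
A = inner_then_outer(f, 1.0)
B = outer_then_inner(f, 1.0)
assert abs(A - 0.125) < 1e-3   # exact value of both iterated integrals is 1/8
assert abs(A - B) < 1e-3
```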
Finally we need to discuss polar coordinates. If (x, y) ∈ ℝ² and (x, y) ≠ (0, 0), let
r = (x² + y²)^{1/2}.
Then (x/r)² + (y/r)² = 1, so there is a unique θ, 0 ≤ θ < 2π, with x/r = cos θ, y/r = sin θ. This means
x = r cos θ,   y = r sin θ,
r = (x² + y²)^{1/2},   θ = tan⁻¹ (y/x).
Thus any point p of the plane other than the origin is determined uniquely either by its Cartesian coordinates (x, y) or by its polar coordinates r, θ. A function defined on a subset of ℝ² can be expressed either as f(x, y) or g(r, θ). These are related by
(7.4)   g(r, θ) = f(r cos θ, r sin θ),
(7.5)   f(x, y) = g((x² + y²)^{1/2}, tan⁻¹ (y/x)).
Theorem 7.4. Suppose f is a continuous function on the disc x² + y² ≤ B², and let g be the same function expressed in polar coordinates. Then
∫_{−B}^{B} {∫_{−(B²−y²)^{1/2}}^{(B²−y²)^{1/2}} f(x, y) dx} dy = ∫₀^B {∫₀^{2π} g(r, θ) dθ} r dr.
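Theorem 7.4 is also easy to test on an example. The sketch below (Python, not from the text; the integrand x² + y² and grid sizes are arbitrary) compares the Cartesian iterated integral over a disc with the closed form obtained from the polar side, ∫₀^B ∫₀^{2π} r²·r dθ dr = πB⁴/2.

```python
import math

def disc_integral_cartesian(B, n=600):
    # midpoint sums for ∫_{-B}^{B} { ∫_{-√(B²-y²)}^{√(B²-y²)} (x² + y²) dx } dy
    h = 2 * B / n
    total = 0.0
    for j in range(n):
        y = -B + (j + 0.5) * h
        half = math.sqrt(max(B * B - y * y, 0.0))
        m = max(1, int(n * half / B))
        hx = 2 * half / m
        row = sum((-half + (i + 0.5) * hx) ** 2 + y * y for i in range(m)) * hx
        total += row * h
    return total

B = 1.0
exact = math.pi * B ** 4 / 2   # from the polar-coordinate side
assert abs(disc_integral_cartesian(B) - exact) < 1e-2
```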
Exercises
1. Suppose A ⊂ ℝ² is open and suppose g: A → C and h: A → C are of class C¹. Show that a necessary condition for the existence of f: A → C such that
D₁f = g,   D₂f = h
is that D₂g = D₁h.
2. In Exercise 1, suppose A is a disc {(x, y) | (x − x₀)² + (y − y₀)² < R²}. Show that the condition D₂g = D₁h is sufficient. (Hint: consider
f(x, y) = ∫_{x₀}^x g(s, y) ds + ∫_{y₀}^y h(x₀, t) dt.)
Suppose f has a convergent power series expansion around x₀:
f(x) = ∑_{n=0}^∞ aₙ(x − x₀)ⁿ.
We know
aₙ = (n!)⁻¹ f⁽ⁿ⁾(x₀).
In particular, if all derivatives of f are zero at x₀, then f is identically zero.
There are infinitely differentiable functions which do not have this property. We define f by
f(x) = 0,   x ≤ 0,
f(x) = exp (−1/x),   x > 0.
Since
e^y = ∑_{n=0}^∞ yⁿ/n! > y^m/m!,   y > 0, m = 0, 1, …,
it follows that if x > 0,
f(x) = e^{−1/x} < m! (1/x)^{−m} = m! x^m.
Thus x^{−m} f(x) → 0 as x → 0, for every m. It is easy to show by induction that for x > 0,
f⁽ᵏ⁾(x) = P_k(x⁻¹) f(x),
where P_k is a polynomial of degree 2k. Suppose we have shown that f⁽ᵏ⁾(0) exists and is zero. Then for x > 0,
(8.1)   x⁻¹[f⁽ᵏ⁾(x) − f⁽ᵏ⁾(0)] = x⁻¹ P_k(x⁻¹) f(x).
Since x⁻¹P_k(x⁻¹) is a polynomial in x⁻¹ of degree 2k + 1 and |f(x)| ≤ (2k + 2)! x^{2k+2}, it follows that the right side of (8.1) converges to zero as x → 0. Thus f⁽ᵏ⁺¹⁾(0) exists and is zero. Similarly, f⁽ᵏ⁺¹⁾(x) = P_{k+1}(x⁻¹)f(x) → 0 as x → 0, so f⁽ᵏ⁺¹⁾ is continuous. □
Note that all derivatives of the preceding function vanish at zero, but f is
not identically zero. Therefore f does not have a convergent power series
expansion around zero.
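The flatness of this function at 0 is easy to observe numerically. A sketch in Python (not part of the original text; the sample points are arbitrary):

```python
import math

def f(x):
    # f(x) = exp(-1/x) for x > 0, f(x) = 0 for x ≤ 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

# f vanishes to infinite order at 0: f(x) < m! x^m for every m
x = 1e-2
for m in (1, 2, 5, 10):
    assert f(x) < math.factorial(m) * x ** m

# so difference quotients of every order tend to 0; e.g. f(h)/h decreases rapidly
assert f(0.02) / 0.02 < f(0.1) / 0.1 < 1e-3
```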
Corollary 8.2. Given a < b, there are infinitely differentiable functions g and h from ℝ to ℝ such that
g(x) = 0,   x ∉ (a, b),   g(x) > 0,   x ∈ (a, b);
h(x) = 0,   x ≤ a,   0 < h(x) < 1,   a < x < b,   h(x) = 1,   x ≥ b.
Proof. With f the function above, let g(x) = f(x − a)f(b − x); g is infinitely differentiable, positive on (a, b), and zero elsewhere. Let
h(x) = c ∫_{−∞}^x g(t) dt,   c = (∫_{−∞}^∞ g(t) dt)⁻¹. □
Chapter 3
A function u defined on ℝ is said to be periodic with period a if for each x
u(x + a) = u(x).
Then also
u(x + 2a) = u((x + a) + a) = u(x + a) = u(x).
Thus u is also periodic with period 2a. More generally, u is periodic with period na for each positive integer n. If u is periodic with period a ≠ 0, then the function v,
v(x) = u(|a| x/2π),
is periodic with period 2π. It is convenient to choose a fixed period for our study of periodic functions, and the period 2π is particularly convenient.
From now on the statement "u is periodic" will mean "u is periodic with period 2π." In this section we are concerned with continuous periodic functions. We denote the set of all continuous periodic functions from ℝ to C by 𝒞. This set includes, in particular, the functions
sin nx,   cos nx,   exp (inx) = cos nx + i sin nx.
The set 𝒞 is a vector space: if we define
(1)   (u + v)(x) = u(x) + v(x),   u, v ∈ 𝒞, x ∈ ℝ;
(2)   (au)(x) = au(x),   u ∈ 𝒞, a ∈ C, x ∈ ℝ,
it is easily checked that the functions u + v and au are periodic. By Proposition 1.1 of Chapter 2, they are also continuous. Thus u + v ∈ 𝒞, au ∈ 𝒞. The axioms V1–V8 for a vector space are easily verified. We note also that there is a natural multiplication of elements of 𝒞,
(uv)(x) = u(x)v(x),   u, v ∈ 𝒞, x ∈ ℝ.
The set 𝒞 may also be considered as a metric space. Since the interval [0, 2π] is a compact set in ℝ and since u ∈ 𝒞 is continuous,
sup_{x ∈ [0, 2π]} |u(x)| < ∞.
Since u has period 2π, we may define the norm of u by
(3)   |u| = sup_{x ∈ ℝ} |u(x)| = sup_{x ∈ [0, 2π]} |u(x)|.
This norm has the properties
(4)   |u| ≥ 0, and |u| = 0 only if u = 0,   u ∈ 𝒞;
(5)   |au| = |a| |u|,   a ∈ C, u ∈ 𝒞;
(6)   |u + v| ≤ |u| + |v|,   u, v ∈ 𝒞.
The properties (4) and (5) are easily checked. As for (6), suppose x ∈ ℝ. Then
|(u + v)(x)| = |u(x) + v(x)| ≤ |u(x)| + |v(x)| ≤ |u| + |v|,
so |u + v| ≤ |u| + |v|. Associated to the norm is the metric
(7)   d(u, v) = |u − v|.
The triangle inequality for d follows from (6):
|u − w| = |(u − v) + (v − w)| ≤ |u − v| + |v − w|.
A normed linear space is a vector space X together with a norm |u|. Associated to the norm is the metric
d(u, v) = |u − v|.
If the normed linear space is complete with respect to this metric, it is said to be a Banach space.
In this terminology, Theorem 1.1 has a very brief statement: 𝒞 is a Banach space.
A linear functional F on a normed linear space X is said to be bounded if there is a constant c such that
|F(u)| ≤ c|u|,   all u ∈ X.
A bounded linear functional is continuous, since
|F(u) − F(v)| = |F(u − v)| ≤ c|u − v|;
conversely, a continuous linear functional is easily seen to be bounded.
It is important both in theory and practice to determine all the continuous linear functionals on a given space of functions. The reason is that
many problems, in theory and in practice, can be interpreted as problems
about existence or uniqueness of linear functionals satisfying given conditions. The examples below show that it is not obvious that there is any
way to give a unified description of all the continuous linear functionals on 𝒞. In fact one can give such a description (in terms of Riemann–Stieltjes integrals, or integrals with respect to a bounded Borel measure), but we shall
not do this here. Instead we introduce a second useful space of periodic
functions and determine the continuous linear functionals on this second
space.
Exercises
1. Suppose
∑_{n=−∞}^∞ |aₙ| < ∞,   where ∑_{n=−∞}^∞ |aₙ| = ∑_{n=0}^∞ |aₙ| + ∑_{n=1}^∞ |a₋ₙ|.
Show that
u(x) = ∑_{n=−∞}^∞ aₙ exp (inx)
defines a function u ∈ 𝒞.
2. Suppose u is a continuous complex-valued function on ℝ such that the series
v(x) = ∑_{n=−∞}^∞ u(x + 2nπ)
converges uniformly on bounded intervals. Show that v is periodic.
3. For u ∈ 𝒞, let
|u|′ = (2π)⁻¹ ∫₀^{2π} |u(x)| dx.
Show that |u|′ is a norm on 𝒞.
4. Show that 𝒞 is not complete with respect to the norm |u|′ of Exercise 3. (Hint: let
uₙ(x) = 1,   x ∈ [0, π/2 − 1/n],
uₙ(x) = 0,   x ∈ [π/2, 2π],
uₙ linear on [π/2 − 1/n, π/2].
Then |uₙ − uₘ|′ → 0 as n, m → ∞. If u ∈ 𝒞, there is an open interval (π/2 − δ, π/2 + δ) on which either |u(x)| > ½ or |u(x) − 1| > ½. Show that |uₙ − u|′ > δ/6π for large values of n.)
5. Which of the following are bounded linear functionals on 𝒞, with respect to the norm |u|?
(a) F(u) = u(π/2).
(b) F(u) = ∫₀^{2π} sin nx u(x) dx.
(c) F(u) = ∫₀^{2π} (u(x))² dx.
(d) F(u) = πu(0) + ∫₀^{2π} u(x) dx.
(e) F(u) = 3|u(0)|.
6. Suppose X is a normed linear space. Let X′ be the set of all bounded linear functionals on X. Then X′ is a vector space. For F ∈ X′, let
|F| = sup {|F(u)| : u ∈ X, |u| ≤ 1}.
Show that |F| is a norm on X′, and that
|F(u)| ≤ |F| |u|,   all u ∈ X.
2. Smooth periodic functions
If u is periodic and differentiable, then its derivative is also periodic:
Du(x + 2π) = lim_{h→0} h⁻¹[u(x + 2π + h) − u(x + 2π)] = lim_{h→0} h⁻¹[u(x + h) − u(x)] = Du(x).
In particular, if u is infinitely differentiable and periodic, then each derivative Du, D²u, …, Dᵏu, … is in 𝒞.
We shall denote by 𝒫 the subset of 𝒞 which consists of all functions u ∈ 𝒞 which are smooth, i.e., infinitely differentiable. Such a function will be called a smooth periodic function. If u is in 𝒫 then the derivatives Du, D²u, … are also in 𝒫.
The set 𝒫 is a subspace of 𝒞 in the sense of vector spaces, so it is itself a vector space. The function |sin x| is in 𝒞 but not in 𝒫, so 𝒫 ≠ 𝒞. We could consider 𝒫 as a metric space with respect to the metric on 𝒞 given in the previous section, but we shall see later that 𝒫 is not complete with respect to that metric. To be able to consider 𝒫 as a complete space we shall introduce a new notion of convergence for functions in 𝒫.
A sequence of functions (uₙ)₁^∞ ⊂ 𝒫 is said to converge to u ∈ 𝒫 in the sense of 𝒫 if for each k = 0, 1, 2, …,
|Dᵏuₙ − Dᵏu| → 0   as n → ∞.
Thus (uₙ)₁^∞ converges to u in the sense of 𝒫 if and only if each derivative of uₙ converges uniformly to the corresponding derivative of u as n → ∞.
A sequence of functions (uₙ)₁^∞ is said to be a Cauchy sequence in the sense of 𝒫 if for each k = 0, 1, …, (Dᵏuₙ)₁^∞ is a Cauchy sequence in 𝒞. Thus
|Dᵏuₙ − Dᵏuₘ| → 0   as n, m → ∞, for each k.
When there is no danger of confusion we shall speak simply of "convergence" and of a "Cauchy sequence," without referring to "the sense of 𝒫." The statement of the following theorem is to be understood in this way.
Theorem 2.1. The set 𝒫 of all smooth periodic functions is a vector space. If (uₙ)₁^∞ ⊂ 𝒫 is a Cauchy sequence in the sense of 𝒫, then there is a unique u ∈ 𝒫 such that uₙ → u in the sense of 𝒫.
Proof. For each k, (Dᵏuₙ)₁^∞ is a Cauchy sequence in 𝒞, so by Theorem 1.1 it converges uniformly to a function vₖ ∈ 𝒞. Now
Dᵏuₙ(x) = Dᵏuₙ(0) + ∫₀ˣ Dᵏ⁺¹uₙ(t) dt.
Letting n → ∞ we get
vₖ(x) = lim_{n→∞} Dᵏuₙ(x) = lim_{n→∞} Dᵏuₙ(0) + lim_{n→∞} ∫₀ˣ Dᵏ⁺¹uₙ(t) dt = vₖ(0) + ∫₀ˣ vₖ₊₁(t) dt.
Therefore Dvₖ = vₖ₊₁, all k. This means that if u = v₀, then vₖ = Dᵏu and |Dᵏuₙ − Dᵏu| → 0 as n → ∞. Thus uₙ → u (in the sense of 𝒫). □
The remainder of this section is not necessary for the subsequent development. We show that there is no way of choosing a norm on 𝒫 so that convergence as defined above is equivalent to convergence in the sense of the metric associated with the norm. However, there is a way of choosing a metric on 𝒫 (not associated with a norm) such that convergence in the sense of 𝒫 is equivalent to convergence in the sense of the metric. Finally, we introduce the abstract concept which is related to 𝒫 in the way that the concept of "Banach space" is related to 𝒞.
Suppose there were a norm |u|′ on 𝒫 such that a sequence (uₙ)₁^∞ ⊂ 𝒫 converges in the sense of 𝒫 to u ∈ 𝒫 if and only if
|uₙ − u|′ → 0.
Then there would be a constant M and an integer N such that
(1)   |u|′ ≤ M(|u| + |Du| + ⋯ + |Dᴺu|),   all u ∈ 𝒫.
In fact, suppose (1) is false for every M, N. Then for each integer n there would be a uₙ ∈ 𝒫 such that
|uₙ|′ > n(|uₙ| + |Duₙ| + ⋯ + |Dⁿuₙ|).
Let
vₙ = (|uₙ|′)⁻¹ uₙ.
Then
|Dᵏvₙ| < 1/n   if n ≥ k,
so vₙ → 0 in the sense of 𝒫. But |vₙ|′ = 1, all n. This shows that the norm |u|′ must satisfy (1) for some M, N. Now let
wₙ(x) = n^{−N−1} exp (inx),   n = 1, 2, ….
Then |Dᵏwₙ| = n^{k−N−1} ≤ n⁻¹ for k ≤ N. Thus by (1), |wₙ|′ → 0, so wₙ → 0 in the sense of 𝒫. But |Dᴺ⁺¹wₙ| = 1, all n, so (wₙ)₁^∞ does not converge to 0 in the sense of 𝒫. This contradicts our assumption about the norm |u|′.
Now define, for u, v ∈ 𝒫,
d′(u, v) = ∑_{k=0}^∞ 2^{−k−1} |Dᵏu − Dᵏv| [1 + |Dᵏu − Dᵏv|]⁻¹.
The series converges, and d′(u, v) < 1, all u, v ∈ 𝒫. It is clear that
d′(u, v) ≥ 0;   d′(u, v) = 0 implies u = v;   d′(u, v) = d′(v, u).
To prove the triangle inequality, note that for the metric d(u, v) = |u − v| on 𝒞, the function
d*(u, v) = d(u, v)[1 + d(u, v)]⁻¹
also satisfies the triangle inequality
d*(u, w) ≤ d*(u, v) + d*(v, w).
Then
d′(u, w) = ∑_{k=0}^∞ 2^{−k−1} d*(Dᵏu, Dᵏw) ≤ ∑_{k=0}^∞ 2^{−k−1} [d*(Dᵏu, Dᵏv) + d*(Dᵏv, Dᵏw)] = d′(u, v) + d′(v, w).
Thus d′ is a metric on 𝒫. Cauchy sequences for d′ are the same as Cauchy sequences in the sense of 𝒫. Suppose first that (uₙ)₁^∞ is a Cauchy sequence in the sense of 𝒫. Given ε > 0, choose k so large that 2^{−k−1} < ½ε, and then choose N so large that m, n ≥ N implies
|Dʲuₘ − Dʲuₙ| < ½ε,   j = 0, 1, …, k.
Then if m, n ≥ N,
d′(uₘ, uₙ) = ∑_{j=0}^k 2^{−j−1} d*(Dʲuₘ, Dʲuₙ) + ∑_{j=k+1}^∞ 2^{−j−1} d*(Dʲuₘ, Dʲuₙ)
  < ∑_{j=0}^k 2^{−j−1}(½ε) + ∑_{j=k+1}^∞ 2^{−j−1} < ½ε + 2^{−k−1} < ½ε + ½ε = ε.
Conversely, suppose (uₙ)₁^∞ is a Cauchy sequence in the sense of the metric d′. Given an integer k ≥ 0 and an ε > 0, choose N so large that if m, n ≥ N then
d′(uₘ, uₙ) < 2^{−k−1} ε[1 + ε]⁻¹.
For m, n ≥ N, it follows that d*(Dᵏuₘ, Dᵏuₙ) < ε[1 + ε]⁻¹, and therefore |Dᵏuₘ − Dᵏuₙ| < ε. Thus (Dᵏuₙ)₁^∞ is a Cauchy sequence in 𝒞, for each k.
Suppose X is a vector space. A seminorm on X is a function u ↦ |u| from X to ℝ such that
|u| ≥ 0,   |au| = |a| |u|,   |u + v| ≤ |u| + |v|.
(Thus a seminorm is a norm if and only if |u| = 0 implies u = 0.) Suppose there is given a sequence of seminorms on X, |u|₁, |u|₂, …, with the property that
(2)   |u|ₖ = 0, all k, implies u = 0.
Then
d′(u, v) = ∑_{k=1}^∞ 2^{−k} |u − v|ₖ [1 + |u − v|ₖ]⁻¹
defines a metric on X, and
d′(uₙ, v) → 0 as n → ∞
is equivalent to
|uₙ − v|ₖ → 0   as n → ∞, for all k.
If X is complete with respect to this metric, X is called a Fréchet space. In the case X = 𝒫, with the seminorms
|u|ₖ = |Dᵏ⁻¹u|,
this metric agrees with d′ as defined above. Thus Theorems 3.1 and 3.2 say that 𝒫 is a Fréchet space.
Exercises
1. Which of the following are Cauchy sequences in the sense of 𝒫?
(a) uₙ(x) = n⁻³ cos nx;
(b) vₙ(x) = (n!)⁻¹ sin nx;
(c) wₙ(x) = ∑_{m=1}^n m⁻ᵐ exp (imx).
Consider the functions
u₁(x) = |x|,
u₂(x) = ½|x − c| + ½|x + c|,
u₃(x) = ⅓|x − c| + ⅓|x| + ⅓|x + c|,
where c > 0.
If u ∈ 𝒞 and t ∈ ℝ, the translation of u by t is the function Tₜu,
(Tₜu)(x) = u(x − t),   x ∈ ℝ.
Then Tₜu ∈ 𝒞. The graph of Tₜu is the graph of u shifted t units to the right (i.e., shifted |t| units to the left, if t < 0). In these terms the functions above are
u₂ = ½T_c u₁ + ½T_{−c} u₁,   u₃ = ⅓T_c u₁ + ⅓u₁ + ⅓T_{−c} u₁.
More generally, one could consider weighted averages of translates, of the form
(1)   v = a₀T_{t₀}u + a₁T_{t₁}u + ⋯ + a_r T_{t_r}u,
where
a₀ + a₁ + ⋯ + a_r = 1,   0 ≤ t₀ < t₁ < ⋯ < t_r = 2π.
If b is a continuous function with (2π)⁻¹ ∫₀^{2π} b(t) dt = 1 and we set
aⱼ = (2π)⁻¹ b(tⱼ)(tⱼ − tⱼ₋₁),
then (1) is a Riemann sum approximating the continuous weighted average
(2)   v = (2π)⁻¹ ∫₀^{2π} b(t)Tₜu dt,
i.e.,
(3)   v(x) = (2π)⁻¹ ∫₀^{2π} b(t)u(x − t) dt.
Given u, v ∈ 𝒞, the convolution of u and v is the function u * v,
(4)   (u * v)(x) = (2π)⁻¹ ∫₀^{2π} u(x − y)v(y) dy.
Proposition 3.1. Suppose u, v, w ∈ 𝒞, a ∈ C, t ∈ ℝ. Then u * v is in 𝒞, and
(5)   |u * v| ≤ |u| |v|,
(6)   u * v = v * u,
(7)   (au) * v = a(u * v),
(8)   (u + v) * w = u * w + v * w,
(9)   (u * v) * w = u * (v * w),
(10)   Tₜ(u * v) = (Tₜu) * v = u * (Tₜv).
Proof. First we show that u * v ∈ 𝒞. For any t,
(11)   Tₜ(u * v)(x) = (u * v)(x − t) = (2π)⁻¹ ∫₀^{2π} u(x − t − y)v(y) dy = (2π)⁻¹ ∫₀^{2π} (Tₜu)(x − y)v(y) dy = ((Tₜu) * v)(x).
Therefore
(12)   |Tₜ(u * v) − u * v| = |(Tₜu − u) * v| ≤ |Tₜu − u| |v|,
where we have used (5), (7), and (8). Now u is uniformly continuous on [0, 2π] and is periodic; it follows easily that u is uniformly continuous on ℝ. Therefore |Tₜu − u| → 0 as t → 0. Then (12) implies continuity of u * v. Also,
T_{2π}(u * v) = (T_{2π}u) * v = u * v,
so u * v is periodic.
The equality (6) follows from a change of variables in (4): let y′ = x − y, and use the periodicity of u and v. Equalities (7), (8), and (9) are easy computations. The last part of (10) follows from (11) and (6). □
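The algebra of convolution can be tested on a grid. The sketch below (Python, not part of the original text; the sample functions and grid size are arbitrary) approximates (4) by a circular Riemann sum and checks commutativity, u * v = v * u.

```python
import math

N = 256
xs = [2 * math.pi * k / N for k in range(N)]

def conv(u, v):
    # Riemann-sum approximation of (u*v)(x_k) = (2π)^{-1} ∫_0^{2π} u(x_k - y)v(y) dy
    return [sum(u[(k - j) % N] * v[j] for j in range(N)) / N for k in range(N)]

u = [math.sin(x) + 0.5 * math.cos(3 * x) for x in xs]
v = [math.exp(math.cos(x)) for x in xs]

uv, vu = conv(u, v), conv(v, u)
assert max(abs(a - b) for a, b in zip(uv, vu)) < 1e-9
```

The discrete circular sum is exactly symmetric in u and v (substitute j′ = k − j mod N), which mirrors the change of variables used in the proof.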
Lemma 3.2. Suppose u ∈ 𝒞 is differentiable with Du ∈ 𝒞. Then
t⁻¹[u(x + t) − u(x)] = t⁻¹[T₋ₜu − u](x)
converges to Du(x) uniformly in x as t → 0, i.e.,
|t⁻¹(T₋ₜu − u) − Du| → 0   as t → 0.
Corollary 3.3. If u ∈ 𝒫, then
t⁻¹(T₋ₜu − u) → Du (𝒫)   as t → 0.
Proof. For each k,
Dᵏ[t⁻¹(T₋ₜu − u)] = t⁻¹(T₋ₜDᵏu − Dᵏu),
which by Lemma 3.2 converges uniformly to D(Dᵏu) = Dᵏ(Du). □
(13)
Proof
By Proposition 3.1,
(l[Lt(u * v)  (u
(14)
* v)]
= [r1(Ltu  u)]
* v.
By Lemma 3.2 and (5), the expression on the right in (14) converges uniformly to (Du) * v as t + O. Thus
D(u * v)
and u
(Du)
Corcllary 3.5.
each u E&!
* v,
* v) =
(D 2u)
* V.
VE~,
and
IV n
u * Vn + U * v (.9').
Proof
For each k,
Dk(U * Vn
U * v)
Dku * (v n
v).
80
IDk(U * Vn
all k.
* v)1 + 0,
A sequence (φₙ)₁^∞ ⊂ 𝒞 is called an approximate identity if each φₙ ≥ 0,
(2π)⁻¹ ∫₀^{2π} φₙ(x) dx = 1,
and for each δ ∈ (0, π),
∫_δ^{2π−δ} φₙ(x) dx → 0   as n → ∞.
Theorem 3.6. Suppose (φₙ)₁^∞ is an approximate identity. Then for each u ∈ 𝒞,
|φₙ * u − u| → 0   as n → ∞.
Moreover, if u ∈ 𝒫, then
φₙ * u → u (𝒫).
Proof. Since (2π)⁻¹ ∫₀^{2π} φₙ(y) dy = 1,
|(φₙ * u)(x) − u(x)| = (2π)⁻¹ |∫₀^{2π} φₙ(y)[u(x − y) − u(x)] dy|
  ≤ sup_{|s| ≤ δ} |T_s u − u| ⋅ (2π)⁻¹ (∫₀^δ φₙ + ∫_{2π−δ}^{2π} φₙ) + 2|u| (2π)⁻¹ ∫_δ^{2π−δ} φₙ;
here we have used the fact that for y ∈ [2π − δ, 2π], translation by y is the same as translation by s = y − 2π, |s| ≤ δ. Now u is uniformly continuous, so given ε > 0 we may choose δ ∈ (0, π) so that
|T_s u − u| < ε/2   if |s| ≤ δ,
and then choose N so large that
2|u| (2π)⁻¹ ∫_δ^{2π−δ} φₙ < ε/2   if n ≥ N.
If n ≥ N, then
|φₙ * u − u| < ε.
This proves the first assertion. Now suppose u ∈ 𝒫. For each k,
Dᵏ(φₙ * u) = Dᵏ(u * φₙ) = (Dᵏu) * φₙ = φₙ * (Dᵏu),
which converges uniformly to Dᵏu by the first assertion. Thus φₙ * u → u (𝒫). □
In the next section we shall construct a sequence in 𝒫 which is an approximate identity. It will follow, using Proposition 3.4 and Theorem 3.6, that 𝒫 is dense in 𝒞.
Exercises
1. Let eₖ(x) = exp (ikx), k = 0, ±1, ±2, …. These functions are in 𝒫 (see §6 of Chapter 2). Suppose u ∈ 𝒞. Show that
((∑_{k=−n}^n eₖ) * u)(x) = ∑_{k=−n}^n aₖ exp (ikx),
where
aₖ = (2π)⁻¹ ∫₀^{2π} exp (−iky) u(y) dy.
2. Show that eₖ * eₖ = eₖ, and eⱼ * eₖ = 0 if j ≠ k.
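Both exercises are easy to confirm on a grid, since the discrete circular convolution reproduces the relations among the eₖ exactly. A Python sketch (not part of the original text; the grid size is an arbitrary choice):

```python
import cmath, math

N = 512
xs = [2 * math.pi * k / N for k in range(N)]
h = 2 * math.pi / N

def conv(u, v):
    # (u*v)(x_k) ≈ (2π)^{-1} Σ_j u(x_k - x_j) v(x_j) h
    return [sum(u[(k - j) % N] * v[j] for j in range(N)) * h / (2 * math.pi)
            for k in range(N)]

def e(k):
    return [cmath.exp(1j * k * x) for x in xs]

# e_k * e_k = e_k ...
r = conv(e(3), e(3))
assert max(abs(r[i] - e(3)[i]) for i in range(N)) < 1e-9
# ... while e_j * e_k = 0 for j ≠ k
r = conv(e(2), e(5))
assert max(abs(z) for z in r) < 1e-9
```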
A trigonometric polynomial is a function of the form
(1)   φ(x) = ∑_{k=−n}^n aₖ exp (ikx),
where the coefficients aₖ are in C. The reason for the terminology is that for k > 0,
exp (±ikx) = [exp (±ix)]ᵏ = (cos x ± i sin x)ᵏ.
Therefore any function of the form (1) can be written as a polynomial in the trigonometric functions cos x and sin x. Conversely, recall that
cos x = ½[exp (ix) + exp (−ix)],   sin x = (2i)⁻¹[exp (ix) − exp (−ix)].
Therefore any polynomial in cos x and sin x can be written in the form (1).
To construct an approximate identity of trigonometric polynomials, we look for a trigonometric polynomial φ with
φ(0) = φ(2π) = 1,   φ(x) < 1 for 0 < x < 2π.
Then successive powers of φ will take values at points near 0 and 2π which are relatively much greater than those taken at points between 0 and 2π. We may take
φ(x) = ½(1 + cos x)
and set
φₙ(x) = cₙ[½(1 + cos x)]ⁿ,
where cₙ is chosen so that
∫₀^{2π} φₙ(x) dx = 2π.
Lemma 4.1. The sequence (φₙ)₁^∞ is an approximate identity: for each δ ∈ (0, π),
(2)   ∫_δ^{2π−δ} φₙ(x) dx → 0   as n → ∞.
Proof. There is an r < 1 such that
(3)   1 + cos x < r(1 + cos y)   if x ∈ [δ, 2π − δ], y ∈ [0, ½δ].
Since
2π = ∫₀^{2π} φₙ ≥ ∫₀^{δ/2} φₙ(y) dy ≥ ½δ cₙ[½(1 + cos ½δ)]ⁿ,
we get from (3)
∫_δ^{2π−δ} φₙ(x) dx ≤ 2π cₙ[½(1 + cos δ)]ⁿ ≤ 2π rⁿ cₙ[½(1 + cos ½δ)]ⁿ ≤ (8π²/δ) rⁿ → 0. □
That is, if u ∈ 𝒞 and v ∈ 𝒫, there are sequences (uₙ)₁^∞ and (vₙ)₁^∞ of trigonometric polynomials such that
uₙ → u   uniformly
and
vₙ → v (𝒫).
Proof. Let (φₙ)₁^∞ be a sequence of trigonometric polynomials which is an approximate identity, as in Lemma 4.1. Let
uₙ = φₙ * u,   vₙ = φₙ * v.
These are trigonometric polynomials, and by Theorem 3.6 they converge as asserted. □
Theorem 4.3 is due to Weierstrass. There is a better-known approximation theorem, also due to Weierstrass, which can be deduced from Theorem 4.3: every continuous complex-valued function u on a closed interval [a, b] is the uniform limit on [a, b] of a sequence of polynomials. To deduce this, set
v(x) = u(a + (b − a)x/π),   x ∈ [0, π],
extend v to be an even continuous function with period 2π, and approximate v by trigonometric polynomials on [0, 2π]; each trigonometric polynomial, in turn, can be approximated uniformly on bounded sets by partial sums of its power series.
Exercises
1. Suppose u ∈ 𝒞 and suppose that for each integer k,
∫₀^{2π} u(x) exp (−ikx) dx = 0.
Show that u = 0.
2. Suppose u: [a, b] → C is continuous, and for each integer n ≥ 0,
∫_a^b u(x)xⁿ dx = 0.
Show that u = 0.
5. Periodic distributions
In general, a "distribution" is a continuous linear functional on some
space of functions. A periodic distribution is a continuous linear functional
on the space 𝒫. Thus a periodic distribution is a mapping F: 𝒫 → C such that
F(au) = aF(u),   a ∈ C, u ∈ 𝒫;
F(u + v) = F(u) + F(v),   u, v ∈ 𝒫;
F(uₙ) → F(u)   if uₙ → u (𝒫).
One example is given by integration against a function v ∈ 𝒞:
(1)   Fᵥ(u) = (2π)⁻¹ ∫₀^{2π} v(x)u(x) dx,   u ∈ 𝒫.
Distinct functions give distinct distributions: if F_w = Fᵥ, choose (uₙ)₁^∞ ⊂ 𝒫 converging uniformly to (w − v)*. Then
0 = 2π(F_w(uₙ) − Fᵥ(uₙ)) = ∫₀^{2π} (w(x) − v(x))uₙ(x) dx → ∫₀^{2π} |w(x) − v(x)|² dx,
so w = v.
Not every periodic distribution is of this form. Let
δ(u) = u(0),   u ∈ 𝒞.
Then the restriction of δ to 𝒫 is a periodic distribution. It is called the δ-distribution, or Dirac δ-distribution. To see that it is not a function, i.e., not of the form Fᵥ, let
uₙ(x) = [½(1 + cos x)]ⁿ.
Then δ(uₙ) = 1, all n. But uₙ(x) → 0 uniformly for x ∈ [ε, 2π − ε], any ε > 0. Also 0 ≤ uₙ(x) ≤ 1, all x, n. It follows from this that for any v ∈ 𝒞, Fᵥ(uₙ) → 0. Thus δ ≠ Fᵥ.
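The contrast between δ(uₙ) and Fᵥ(uₙ) in this argument can be seen numerically. A Python sketch (not part of the original text; v ≡ 1 and the grid size are arbitrary choices):

```python
import math

def u_n(n, x):
    return (0.5 * (1 + math.cos(x))) ** n

# δ(u_n) = u_n(0) = 1 for every n ...
assert u_n(1000, 0.0) == 1.0

# ... while for v ≡ 1, F_v(u_n) = (2π)^{-1} ∫ u_n decreases toward 0:
N = 2048
h = 2 * math.pi / N
means = [sum(u_n(n, (k + 0.5) * h) for k in range(N)) * h / (2 * math.pi)
         for n in (10, 100, 1000)]
assert means[2] < means[1] < means[0] < 0.3
```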
The set of all periodic distributions is denoted by 𝒫′. We consider 𝒫′ as a vector space in the usual way: if F, G ∈ 𝒫′, u ∈ 𝒫, a ∈ C, then
(F + G)(u) = F(u) + G(u),   (aF)(u) = aF(u),   all u ∈ 𝒫.
Operations on functions v ∈ 𝒞 correspond to operations on the distributions Fᵥ. For example, if v* denotes the complex conjugate function, v*(x) = (v(x))*, then
F_{v*}(u) = (2π)⁻¹ ∫₀^{2π} v(x)*u(x) dx = ((2π)⁻¹ ∫₀^{2π} v(x)u*(x) dx)* = (Fᵥ(u*))*.
Accordingly, for any F ∈ 𝒫′ we define the complex conjugate F* by
(3)   F*(u) = F(u*)*,   u ∈ 𝒫.
Similarly, the reversal of v is the function ṽ, ṽ(x) = v(−x).
For v ∈ 𝒞,
F_{ṽ}(u) = (2π)⁻¹ ∫₀^{2π} v(−x)u(x) dx = (2π)⁻¹ ∫₀^{2π} v(x)u(−x) dx = Fᵥ(ũ),
so for F ∈ 𝒫′ we define the reversal F̃ by
(4)   F̃(u) = F(ũ),   u ∈ 𝒫.
To see how translation should be defined, suppose v ∈ 𝒞 and let Tₜv(x) = v(x − t). Then
(2π)⁻¹ ∫₀^{2π} Tₜv(x)u(x) dx = (2π)⁻¹ ∫₀^{2π} v(x − t)u(x) dx = (2π)⁻¹ ∫₀^{2π} v(x)u(x + t) dx = Fᵥ(T₋ₜu).
Accordingly, we define
(5)   (TₜF)(u) = F(T₋ₜu),   u ∈ 𝒫.
Finally, if v ∈ 𝒫, integration by parts gives
(2π)⁻¹ ∫₀^{2π} Dv(x)u(x) dx = −(2π)⁻¹ ∫₀^{2π} v(x)Du(x) dx = −Fᵥ(Du),
and we define
(6)   (DF)(u) = −F(Du),   u ∈ 𝒫.
Then inductively,
(7)   (DᵏF)(u) = (−1)ᵏF(Dᵏu),   u ∈ 𝒫.
Each of these is again a periodic distribution. For example, DF is clearly linear, and if uₙ → u (𝒫) then Duₙ → Du (𝒫), so
(DF)(uₙ) = −F(Duₙ) → −F(Du) = DF(u),
so the derivative DF is continuous. Similarly, F*, F̃, TₜF, and DᵏF are in 𝒫′.
In particular, let us take F = δ. Then
(8)   δ = δ* = δ̃,
(9)   Tₜδ(u) = u(t),
(10)   Dᵏδ(u) = (−1)ᵏDᵏu(0),   u ∈ 𝒫.
Proposition 5.1. The operations in 𝒫′ defined by equations (3), (4), (5), and (6) are continuous, in the sense that if Fₙ → F (𝒫′) then
Fₙ* → F* (𝒫′),   F̃ₙ → F̃ (𝒫′),   TₜFₙ → TₜF (𝒫′),   DFₙ → DF (𝒫′).
Proposition 5.2. Suppose F ∈ 𝒫′. Then
(11)   t⁻¹(T₋ₜF − F) → DF (𝒫′)   as t → 0.
Proof. By the definition (5),
(12)   [t⁻¹(T₋ₜF − F)](u) = F(t⁻¹(Tₜu − u)).
Now
(13)   t⁻¹(Tₜu − u)(x) = t⁻¹[u(x − t) − u(x)].
An argument like that proving Lemma 3.2 and Corollary 3.3 shows that the expression in (13) converges to −Du in the sense of 𝒫 as t → 0. From this fact and (12) we get (11), since F(−Du) = (DF)(u). □
As an example,
t⁻¹(T₋ₜδ − δ)(u) = t⁻¹[u(−t) − u(0)] → −Du(0) = (Dδ)(u).
The real and imaginary parts of a function v ∈ 𝒞 can be defined by
Re v = ½(v + v*),   Im v = (2i)⁻¹(v − v*).
Accordingly, for F ∈ 𝒫′ we define
Re F = ½(F + F*),   Im F = (2i)⁻¹(F − F*).
A function v ∈ 𝒞 is even if ṽ = v and odd if ṽ = −v. Similarly, we say F ∈ 𝒫′ is even if F̃ = F, and we say F is odd if F̃ = −F.
Exercises
1. Which of the following define periodic distributions?
(a) F(u) = Du(1) − 3u(2π).
(b) F(u) = ∫₀^{2π} (u(x))² dx.
(c) F(u) = ∫₀^{2π} u(x)(1 + x)⁻¹ dx.
2. Suppose v is a continuously differentiable complex-valued function on the interval [0, 2π], and let w(x) = Dv(x). Show that
(DFᵥ)(u) = (2π)⁻¹[v(0) − v(2π)]u(0) + (2π)⁻¹ ∫₀^{2π} w(x)u(x) dx,   u ∈ 𝒫.
In other words,
DFᵥ = F_w + (2π)⁻¹[v(0) − v(2π)]δ.
3. With v as in Exercise 2, compute (D²Fᵥ)(u), u ∈ 𝒫.
Theorem 6.1. Suppose F is a periodic distribution. Then there are a function v ∈ 𝒞, a constant function f, and an integer k ≥ 0 such that
(1)   F = DᵏFᵥ + F_f.
The proof of this theorem will be given later in this section, after several other lemmas and theorems. First we need the notion of the order of a periodic distribution. A periodic distribution F is said to be of order k (k an integer ≥ 0) if there is a constant c such that
|F(u)| ≤ c{|u| + |Du| + ⋯ + |Dᵏu|},   all u ∈ 𝒫.
Lemma 6.2. Suppose F ∈ 𝒫′. Then there is a k ≥ 0 such that F is of order k.
Proof. If F is not of order k with constant k + 1, there is a uₖ ∈ 𝒫 such that
|F(uₖ)| > (k + 1){|uₖ| + |Duₖ| + ⋯ + |Dᵏuₖ|}.
Let
vₖ = (k + 1)⁻¹{|uₖ| + |Duₖ| + ⋯ + |Dᵏuₖ|}⁻¹uₖ.
Then we have
(2)   |F(vₖ)| > 1,
while
(3)   |Dʲvₖ| ≤ (k + 1)⁻¹,   j = 0, 1, …, k.
Suppose now that F were not of order k for any k ≥ 0. Then we could find a sequence (vₖ)₁^∞ ⊂ 𝒫 satisfying (2) and (3) for each k. But (3) implies
vₖ → 0 (𝒫).
Then (2) contradicts the continuity of F. Thus F must be of order k, some k. □
Lemma 6.3. Suppose F ∈ 𝒫′ is of order 0. Then there is a unique continuous linear functional F₁: 𝒞 → C such that
(4)   F₁(u) = F(u),   all u ∈ 𝒫.
Proof. By assumption there is a c with
|F(u)| ≤ c|u|,   u ∈ 𝒫.
Given u ∈ 𝒞, choose (uₙ)₁^∞ ⊂ 𝒫 with uₙ → u uniformly; this is possible by the results of §4. Then
|F(uₙ) − F(uₘ)| ≤ c|uₙ − uₘ| → 0,
so the limit F₁(u) = lim F(uₙ) exists; if also (uₙ′)₁^∞ ⊂ 𝒫 converges uniformly to u, then |F(uₙ) − F(uₙ′)| ≤ c|uₙ − uₙ′| → 0, so the limit is independent of the choice of sequence.
The functional F₁: 𝒞 → C defined in this way is easily seen to be linear. It is continuous (= bounded), because
|F₁(u)| = lim |F(uₙ)| ≤ c lim |uₙ| = c|u|.
Conversely, suppose F₂: 𝒞 → C is continuous and suppose F₂(u) = F(u), all u ∈ 𝒫. For any u ∈ 𝒞, let (uₙ)₁^∞ ⊂ 𝒫 be such that uₙ → u uniformly. Then
F₂(u) = lim F₂(uₙ) = lim F(uₙ) = F₁(u). □
(The remainder of this section is not needed subsequently.)
Lemma 6.4. Suppose F ∈ 𝒫′ is of order 0, and suppose F(w) = 0 if w is a constant function. Then there is a function v ∈ 𝒞 such that
D²Fᵥ = F.
Proof. Let us suppose first that F = F_f, where f ∈ 𝒞. We shall try to find a periodic function v such that D²v = f. Then we must have
Dv(x) = Dv(0) + ∫₀ˣ f(t) dt = a + ∫₀ˣ f(t) dt,
where a = Dv(0), and, taking v(0) = 0,
v(x) = ∫₀ˣ Dv(t) dt = ∫₀ˣ [a + ∫₀ᵗ f(s) ds] dt = ax + ∫₀ˣ ∫₀ᵗ f(s) ds dt.
We use Theorem 7.3 of Chapter 2 to reverse the order of integration and get
v(x) = ax + ∫₀ˣ f(s)(x − s) ds.
Let
(x − s)₊ = x − s   if x ≥ s,   (x − s)₊ = 0   if x < s.
Then
(5)   v(x) = ax + ∫₀^{2π} f(s)(x − s)₊ ds.
By assumption on f,
∫₀^{2π} f(s) ds = 2πF(e) = 0,
where e is the constant function e(x) = 1, so Dv(2π) = Dv(0). For v itself to be periodic we also need
0 = v(2π) = 2πa + ∫₀^{2π} f(s)(2π − s) ds = 2πa − ∫₀^{2π} sf(s) ds.
Thus
a = (2π)⁻¹ ∫₀^{2π} sf(s) ds,
and (5) becomes
(6)   v(x) = (2π)⁻¹ ∫₀^{2π} f(s)[sx + 2π(x − s)₊] ds.
In the general case, let F₁: 𝒞 → C be the extension of F given by Lemma 6.3, let u_x ∈ 𝒞 be (the periodic extension of) the function
u_x(s) = sx + 2π(x − s)₊,   0 ≤ s ≤ 2π,
and define v by
(7)   v(x) = F₁(u_x).
We want to show that v ∈ 𝒞 and D²Fᵥ = F. It is easy to check that x ↦ u_x is a continuous map from [0, 2π] to 𝒞, so v is continuous; also u₀ = 0 and u_{2π} = 4π²e, so v(0) = 0 = 4π²F(e) = v(2π), and v is periodic. For w ∈ 𝒫,
(D²Fᵥ)(w) = Fᵥ(D²w) = (2π)⁻¹ ∫₀^{2π} v(x)D²w(x) dx.
Approximating this integral by sums ∑ v(xⱼ)D²w(xⱼ)(xⱼ − xⱼ₋₁) over partitions (x₀, x₁, …, xₙ) of (0, 2π), and using the linearity and continuity of F₁ (the functions on which F₁ acts converge uniformly as the mesh shrinks), we find
(D²Fᵥ)(w) = (2π)⁻¹ F₁(g),   g(s) = ∫₀^{2π} u_x(s)D²w(x) dx.
Integrating by parts twice and using the periodicity of w and Dw,
g(s) = s ∫₀^{2π} x D²w(x) dx + 2π ∫_s^{2π} (x − s)D²w(x) dx = 2πw(s) + 4π²Dw(0) − 2πw(0).
The last two terms are constant in s, and F₁ annihilates constants. Therefore
(D²Fᵥ)(w) = (2π)⁻¹ F₁(2πw) = F₁(w) = F(w),
and D²Fᵥ = F. □
Lemma 6.5. Suppose F ∈ 𝒫′ and suppose F(w) = 0 if w is a constant function. Then there is a unique G ∈ 𝒫′ such that DG = F and G(w) = 0 if w is a constant function. If F is of order k ≥ 1, then G is of order k − 1.
Proof. If u ∈ 𝒫, it is not necessarily the derivative of a periodic function. We can get a periodic function by setting
(Su)(x) = ∫₀ˣ [u(t) − Fₑ(u)] dt,
where
e(x) = 1, all x,   Fₑ(u) = (2π)⁻¹ ∫₀^{2π} u(t) dt.
Then
D(Su) = u − Fₑ(u)e.
It follows that if DG = F and G(e) = 0, then
(10)   G(u) = G(D(Su)) + Fₑ(u)G(e) = −(DG)(Su) = −F(Su).
Thus G is unique. To prove existence, we use (10) to define G. Since S: 𝒫 → 𝒫 is linear, G is linear. If uₙ → u (𝒫), then Suₙ → Su (𝒫), so
G(uₙ) = −F(Suₙ) → −F(Su) = G(u),
and G is continuous. Moreover S(Du) = u − u(0)e, so
(DG)(u) = −G(Du) = F(S(Du)) = F(u) − u(0)F(e) = F(u),
and G(e) = −F(Se) = −F(0) = 0. Finally, suppose F is of order k ≥ 1. Since |Su| ≤ 4π|u|, |D(Su)| ≤ 2|u|, and Dʲ(Su) = Dʲ⁻¹u for j ≥ 2, we get
|G(u)| = |F(Su)| ≤ c{|Su| + |D(Su)| + ⋯ + |Dᵏ(Su)|} ≤ c′{|u| + |Du| + ⋯ + |Dᵏ⁻¹u|},
and G is of order k − 1. □
Corollary 6.6. If G ∈ 𝒫′ and DG = 0, then G = F_f, where f is a constant function.
Proof. Let f be the constant function with value G(e), and let B = G − F_f. Then DB = 0 and B(e) = 0, so by the uniqueness assertion of Lemma 6.5 (applied with F = 0), B = 0. Thus G = F_f. □
Proof of Theorem 6.1. By Lemma 6.2, F is of order k − 2 for some integer k ≥ 2. Let f be the constant function with value F(e), and let F₀ = F − F_f; then F₀ is of order k − 2 and F₀(e) = 0. By repeated applications of Lemma 6.5 there are F₁, F₂, …, F_{k−2} ∈ 𝒫′ so that
DFⱼ = Fⱼ₋₁,   Fⱼ(e) = 0,
and Fⱼ is of order k − 2 − j. Then F_{k−2} is of order 0. By Lemma 6.4, there is a v ∈ 𝒞 such that D²Fᵥ = F_{k−2}. Then
F = F₀ + F_f = D^{k−2}F_{k−2} + F_f = DᵏFᵥ + F_f. □
Exercises
1. To what extent are the functions v and f in Theorem 6.1 uniquely
determined?
2. Find v and f as in Theorem 6.1 when F = δ − T_πδ, i.e.,
F(u) = u(0) − u(π).
3. Find v ∈ 𝒞 and a constant function f such that
δ = D²Fᵥ + F_f.
7. Convolution of distributions
Suppose v ∈ 𝒞 and u ∈ 𝒫. The convolution v * u can be written as
(v * u)(x) = (u * v)(x) = (2π)⁻¹ ∫₀^{2π} u(x − y)v(y) dy = (2π)⁻¹ ∫₀^{2π} v(y)ũ(y − x) dy = Fᵥ(Tₓũ);
here again ũ(x) = u(−x). Because of this it is natural to define the convolution of a periodic distribution F and a smooth periodic function u by the formula
(1)   (F * u)(x) = F(Tₓũ).
Proposition 7.1. Suppose F, G ∈ 𝒫′, u, v ∈ 𝒫, a ∈ C, and t ∈ ℝ. Then
(2)   F * u ∈ 𝒞,
(3)   (aF) * u = a(F * u) = F * (au),
(4)   (F + G) * u = F * u + G * u,   F * (u + v) = F * u + F * v,
(5)   Tₜ(F * u) = (TₜF) * u = F * (Tₜu),
(6)   D(F * u) = (DF) * u = F * (Du).
Proof. The identities (2)–(5) follow from the definition (1) by elementary manipulations. For example,
[(TₜF) * u](x) = (TₜF)(Tₓũ) = F(T₋ₜTₓũ) = F(T_{x−t}ũ) = (F * u)(x − t) = [Tₜ(F * u)](x).
To prove (6), note that
(F * u)(x + t) − (F * u)(x) = [T₋ₜ(F * u) − F * u](x) = [F * (T₋ₜu − u)](x).
We know that t⁻¹(T₋ₜu − u) → Du (𝒫) as t → 0. Therefore
t⁻¹[(F * u)(x + t) − (F * u)(x)] = (F * [t⁻¹(T₋ₜu − u)])(x) → (F * (Du))(x),
since F is continuous. Thus D(F * u) = F * (Du); the identity (DF) * u = F * (Du) also follows from the definitions, since D(Tₓũ) = −Tₓ((Du)~). □
In particular,
(7)   δ * u = u,   u ∈ 𝒫.
Note also that for u, v ∈ 𝒞 and w ∈ 𝒫,
(8)   F_{v*u}(w) = F_{u*v}(w) = (2π)⁻² ∫₀^{2π} ∫₀^{2π} u(x − y)v(y)w(x) dy dx = Fᵥ(ũ * w).
Lemma 7.2. Suppose u, v ∈ 𝒞, and let w = v * u. Let
(9)   wₙ = n⁻¹ ∑_{m=1}^n v(2πm/n) T_{2πm/n} u.
Then |wₙ − w| → 0 as n → ∞.
Proof. Let x_{m,n} = 2πm/n. Then
2π(wₙ(x) − w(x)) = ∑_{m=1}^n ∫_{x_{m−1,n}}^{x_{m,n}} [v(x_{m,n})u(x − x_{m,n}) − v(y)u(x − y)] dy.
Now u, v are uniformly continuous, and over the range of integration of the mth summand the integrand is uniformly small when n is large. Thus |wₙ − w| → 0 as n → ∞. □
Lemma 7.3. Suppose F ∈ 𝒫′, u ∈ 𝒫, v ∈ 𝒞, and let f = F * u. Then
F(ũ * v) = F_f(v).
Proof. Applying Lemma 7.2 to ũ and v, the sums
wₙ = n⁻¹ ∑_{m=1}^n v(2πm/n) T_{2πm/n} ũ
converge to ũ * v, in fact in the sense of 𝒫 (apply the lemma to each Dᵏũ). Thus
F(ũ * v) = lim_{n→∞} F(wₙ).
But
F(wₙ) = n⁻¹ ∑_{m=1}^n v(2πm/n) F(T_{2πm/n}ũ) = n⁻¹ ∑_{m=1}^n v(2πm/n) f(2πm/n) → (2π)⁻¹ ∫₀^{2π} v(x)f(x) dx = F_f(v). □
Guided by (8) and Lemma 7.3, we define the convolution of two periodic distributions F, G ∈ 𝒫′ by
(10)   (F * G)(u) = F(G̃ * u),   u ∈ 𝒫.
Here G̃ * u ∈ 𝒫, so the right side makes sense. The functional F * G is clearly linear; it is also continuous. In fact, suppose uₙ → u (𝒫). For each x,
|(G̃ * uₙ)(x) − (G̃ * u)(x)| = |(G̃ * (uₙ − u))(x)| = |G̃(Tₓ((uₙ − u)~))|.
Since G̃ is of some finite order k, the right side is bounded by c{|uₙ − u| + ⋯ + |Dᵏ(uₙ − u)|} → 0. Thus G̃ * uₙ → G̃ * u uniformly. Similarly, for each j, Dʲ(G̃ * uₙ) = G̃ * Dʲuₙ → G̃ * Dʲu uniformly. Thus
G̃ * uₙ → G̃ * u (𝒫),
so
(F * G)(uₙ) → (F * G)(u).
This shows that F * G ∈ 𝒫′. As an example,
(11)   δ * F = F * δ = F.
Lemma 7.5. Suppose Fₙ → F (𝒫′) and u ∈ 𝒫. Then
Fₙ * u → F * u (𝒫).
Corollary 7.6. Suppose Fₙ → F (𝒫′) and Gₙ → G (𝒫′). Then
Fₙ * G → F * G (𝒫′)
and
F * Gₙ → F * G (𝒫′).
Proof. For u ∈ 𝒫,
(Fₙ * G)(u) = Fₙ(G̃ * u) → F(G̃ * u) = (F * G)(u).
Also, G̃ₙ → G̃ (𝒫′), so by Lemma 7.5, G̃ₙ * u → G̃ * u (𝒫), and
(F * Gₙ)(u) = F(G̃ₙ * u) → F(G̃ * u) = (F * G)(u). □
Theorem 7.7. Suppose F ∈ 𝒫′. Then there is a sequence (fₙ)₁^∞ ⊂ 𝒫 such that F_{fₙ} → F (𝒫′). In fact, let (φₙ)₁^∞ ⊂ 𝒫 be an approximate identity with each φₙ even, and let fₙ = F * φₙ. Then F_{fₙ} → F (𝒫′).
Proof. Let Fₙ = F_{fₙ}. By Lemma 7.3,
Fₙ(u) = F(φ̃ₙ * u) = F(φₙ * u),   u ∈ 𝒫,
and φₙ * u → u (𝒫) by Theorem 3.6. Therefore
Fₙ(u) → F(u). □
If (φₙ)₁^∞ is an approximate identity consisting of trigonometric polynomials, then the functions fₙ = F * φₙ are also trigonometric polynomials. In fact, let
φₙ(x) = ∑_{k=−m}^m aₖ exp (ikx) = ∑_{k=−m}^m aₖ eₖ(x).
Then
(F * eₖ)(x) = F(Tₓẽₖ).
But
Tₓẽₖ(y) = exp (−ik(y − x)) = exp (ikx) exp (−iky),
so
(F * eₖ)(x) = F(e₋ₖ) exp (ikx).
Thus
fₙ = F * φₙ = ∑_{k=−m}^m aₖ F(e₋ₖ) eₖ
is a trigonometric polynomial.
Theorem 7.8. Suppose F, G, H ∈ 𝒫′, a ∈ C, t ∈ ℝ, and k is a positive integer. Then
(12)   F * G = G * F,
(13)   (aF) * G = a(F * G) = F * (aG),
(14)   (F + G) * H = F * H + G * H,
(15)   (F * G) * H = F * (G * H),
(16)   Tₜ(F * G) = (TₜF) * G = F * (TₜG),
(17)   D(F * G) = (DF) * G = F * (DG),
(18)   Dᵏ(F * G) = (DᵏF) * G = F * (DᵏG).
Proof. All of these identities except (12) and (15) follow from the definitions by a sequence of elementary manipulations. As an example, we shall prove part of (16):
[Tₜ(F * G)](u) = (F * G)(T₋ₜu) = F(G̃ * T₋ₜu) = F(T₋ₜ(G̃ * u)) = (TₜF)(G̃ * u) = ((TₜF) * u... = ((TₜF) * G)(u).
To prove (12) and (15) we use Theorem 7.7 and Corollary 7.6. First, suppose G = F_g, g ∈ 𝒫. Take (fₙ)₁^∞ ⊂ 𝒫 such that
Fₙ = F_{fₙ} → F (𝒫′).
Let hₙ = fₙ * g = g * fₙ. Then
Fₙ * G = F_{fₙ} * F_g = F_{hₙ} = F_g * F_{fₙ} = G * Fₙ,
since convolution of functions is commutative. Letting n → ∞ and using Corollary 7.6 gives F * G = G * F when G = F_g; for general G, approximate G by F_{gₙ}, gₙ ∈ 𝒫, and pass to the limit again. The proof of (15) is similar: after approximating, it reduces to the associativity of convolution of functions, (u * v) * w = u * (v * w). □
Exercises
1. Prove the identities (2), (3), (4).
2. Prove the identities (7), (11).
3. Prove the identities (13), (14), (16), (17), (18) directly from the definitions.
4. Prove the identities in Exercise 3 by approximating the distributions
F, G, H by smooth periodic functions.
The main notations and results of this chapter may be summarized as follows. Convergence in the sense of 𝒫 means
uₙ → u (𝒫)   if |Dᵏuₙ − Dᵏu| → 0, all k.
Fᵥ(u) = (2π)⁻¹ ∫₀^{2π} v(x)u(x) dx,
δ(u) = u(0).
The sum, scalar multiple, complex conjugate, reversal, and translation of distributions are defined by
(F + G)(u) = F(u) + G(u),
(aF)(u) = aF(u),
F*(u) = (F(u*))*   (u*(x) = u(x)*),
F̃(u) = F(ũ)   (ũ(x) = u(−x)),
(TₜF)(u) = F(T₋ₜu)   (Tₜu(x) = u(x − t)).
Derivatives are defined by
(DᵏF)(u) = (−1)ᵏF(Dᵏu).
We say Fₙ → F (𝒫′) if
Fₙ(u) → F(u),   all u ∈ 𝒫.
In particular,
δ = δ* = δ̃,
(Fᵥ)* = F_{v*},   (Fᵥ)~ = F_{ṽ}.
Convolution is defined by
(F * u)(x) = F(Tₓũ),   u ∈ 𝒫.
Then
Fᵥ * u = v * u,   δ * u = u,   u ∈ 𝒫,
and
(F * G)(u) = F(G̃ * u),   u ∈ 𝒫.
In particular,
F * Fᵥ = F_f,   where f = F * v.
Clearly δ * F = F. If Fₙ → F (𝒫′), then
Fₙ * G → F * G (𝒫′).
The algebraic identities
F * G = G * F,
(aF) * G = a(F * G) = F * (aG),
(F + G) * H = F * H + G * H,
(F * G) * H = F * (G * H)
hold for all F, G, H ∈ 𝒫′ and a ∈ C.
Any F ∈ 𝒫′ can be written uniquely as
F = G + iH,   G, H real.
In fact
G = Re F = ½(F + F*),   H = Im F = (2i)⁻¹(F − F*).
Any F ∈ 𝒫′ can also be written uniquely as
F = G + H,   G even, H odd.
In fact
G = ½(F + F̃),   H = ½(F − F̃).
A distribution F ∈ 𝒫′ is of order k if there is a constant c such that
|F(u)| ≤ c(|u| + |Du| + ⋯ + |Dᵏu|),   all u ∈ 𝒫.
Every F ∈ 𝒫′ is of order k for some k ≥ 0, and if F is of order k, then there are v ∈ 𝒞 and a constant function f such that
F = Dᵏ⁺²Fᵥ + F_f.
Chapter 4
1. An inner product in 𝒞
For u, v ∈ 𝒞 we define the inner product
(1)   (u, v) = (2π)⁻¹ ∫₀^{2π} u(x)v(x)* dx.
It has the properties
(2)   (au, v) = a(u, v),
(3)   (u₁ + u₂, v) = (u₁, v) + (u₂, v),
(4)   (u, v₁ + v₂) = (u, v₁) + (u, v₂),
(5)   (v, u) = (u, v)*,
(6)   (u, u) ≥ 0, and (u, u) = 0 only if u = 0.
We set
(7)   ‖u‖ = (u, u)^{1/2} = ((2π)⁻¹ ∫₀^{2π} |u(x)|² dx)^{1/2}.
Lemma 1.1. If u, v ∈ 𝒞, then
(8)   |(u, v)| ≤ ‖u‖ ‖v‖.
Proof. If v = 0 then
(u, v) = (u, 0v) = 0(u, v) = 0,
and (8) is true. Suppose v ≠ 0. Note that for any complex number a,
(9)   0 ≤ ‖u − av‖² = (u, u) − a*(u, v) − a(v, u) + |a|²(v, v).
Let
a = (u, v)/(v, v).
Then (9) becomes
0 ≤ (u, u) − |(u, v)|²/(v, v),
and this implies (8). □
The inequality (8) is known as the Schwarz inequality. Note that only the properties (2)–(6) were used in the proof, and no other features of the inner product (1).
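The Schwarz inequality survives discretization, since the proof uses only (2)–(6), which hold for finite sums as well. A Python sketch (not part of the original text; the sample functions and grid are arbitrary, and only real-valued u, v are used to keep the code short):

```python
import math

N = 1000
xs = [2 * math.pi * (k + 0.5) / N for k in range(N)]
h = 2 * math.pi / N

def inner(u, v):
    # (u, v) = (2π)^{-1} ∫_0^{2π} u v dx for real u, v, approximated on a grid
    return sum(a * b for a, b in zip(u, v)) * h / (2 * math.pi)

u = [math.sin(x) + 0.3 * math.cos(2 * x) for x in xs]
v = [math.exp(math.sin(x)) for x in xs]
lhs = abs(inner(u, v))
rhs = math.sqrt(inner(u, u)) * math.sqrt(inner(v, v))
assert lhs <= rhs + 1e-12
```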
The norm ‖u‖ has the properties
(10)   ‖u‖ ≥ 0, and ‖u‖ = 0 only if u = 0,
(11)   ‖au‖ = |a| ‖u‖,   a ∈ C,
(12)   ‖u + v‖ ≤ ‖u‖ + ‖v‖.
Property (10) follows from (6) and property (11) follows from (2). To prove (12), we take the square and use the Schwarz inequality:
‖u + v‖² = (u + v, u + v) = ‖u‖² + (u, v) + (v, u) + ‖v‖² ≤ ‖u‖² + 2‖u‖ ‖v‖ + ‖v‖² = (‖u‖ + ‖v‖)².
Note also that
(13)   ‖u‖ ≤ |u|,
where |u| is the sup norm of Chapter 3.
The two norms are not equivalent. For example, let u ∈ 𝒞 be the piecewise linear function whose graph joins the points
(0, 0),   (½π − δ, 0),   (½π, 1),   (½π + δ, 0),   (2π, 0).
Then |u| = 1, while ‖u‖² ≤ δ/π, which is small when δ is small.
In order to get a complete space which contains 𝒞 with this inner product, we turn to the space of periodic distributions. Suppose (uₙ)₁^∞ ⊂ 𝒞 is a Cauchy sequence with respect to the metric induced by the norm ‖u‖, i.e.,
‖uₙ − uₘ‖ → 0   as n, m → ∞.
For v ∈ 𝒫, let Fₙ = F_{uₙ}, so that
Fₙ(v) = (2π)⁻¹ ∫₀^{2π} uₙ(x)v(x) dx = (v, uₙ*),
where again uₙ*(x) = (uₙ(x))*. By the Schwarz inequality,
|Fₙ(v) − Fₘ(v)| = |(v, uₙ* − uₘ*)| ≤ ‖v‖ ‖uₙ − uₘ‖.
Thus (Fₙ(v))₁^∞ is a Cauchy sequence of complex numbers, and we may define
(14)   F(v) = lim_{n→∞} Fₙ(v),   v ∈ 𝒫.
The functional F: 𝒫 → C defined by (14) is clearly linear, since each Fₙ is linear. In fact, F is a periodic distribution. To see this, we take N so large that
‖uₙ − uₘ‖ < 1   if n, m ≥ N.
Let
M = 1 + max {‖u₁‖, ‖u₂‖, …, ‖u_N‖}.
Then for any n ≤ N, ‖uₙ‖ < M, while if n > N,
‖uₙ‖ = ‖uₙ − u_N + u_N‖ < 1 + ‖u_N‖ ≤ M.
Therefore
|F(v)| = lim |Fₙ(v)| ≤ M‖v‖ ≤ M|v|,
so
Fₙ = F_{uₙ} converges in the sense of 𝒫′ to a distribution F, which is of order 0.
It is important to know when two Cauchy sequences in 𝒞 give rise to the same distribution.
Lemma 1.2. Suppose (uₙ)₁^∞ ⊂ 𝒞 and (vₙ)₁^∞ ⊂ 𝒞 are Cauchy sequences with respect to the norm ‖u‖. Let Fₙ = F_{uₙ} and Gₙ = F_{vₙ} be the corresponding distributions, and let F, G be the limits. Then
F = G   if and only if   ‖uₙ − vₙ‖ → 0.
Proof. Let wₙ = uₙ − vₙ and Hₙ = Fₙ − Gₙ = F_{wₙ}. We must show that
Hₙ → 0 (𝒫′)   if and only if   ‖wₙ‖ → 0.
Suppose ‖wₙ‖ → 0. Then for any u ∈ 𝒫,
|Hₙ(u)| = |(u, wₙ*)| ≤ ‖u‖ ‖wₙ‖ → 0.
Conversely, suppose Hₙ → 0 (𝒫′). Note that (wₙ)₁^∞ is a Cauchy sequence with respect to ‖·‖, so the norms ‖wₙ‖ are bounded, say by C, and
|Hₙ(v)| ≤ ‖v‖ ‖wₙ‖ ≤ C‖v‖,   v ∈ 𝒫.
Given ε > 0, choose N so that
(15)   ‖wₙ − wₘ‖ < ε   if m, n ≥ N.
Fix m ≥ N and choose u ∈ 𝒫 with ‖wₘ* − u‖ < ε. Since
(wₘ, wₙ)* = (2π)⁻¹ ∫₀^{2π} wₙ(x)wₘ(x)* dx,
we get
|(wₘ, wₙ)| ≤ |Hₙ(u)| + ‖wₙ‖ ‖wₘ* − u‖ ≤ |Hₙ(u)| + Cε.
Then if n ≥ N,
‖wₘ‖² = (wₘ, wₘ − wₙ) + (wₘ, wₙ) ≤ ε‖wₘ‖ + |Hₙ(u)| + Cε.
Letting n → ∞ gives ‖wₘ‖² ≤ ε‖wₘ‖ + Cε, all m ≥ N. It follows that ‖wₙ‖ → 0. □
V and write
Un +F (V).
Lemma 1.2 can be rephrased: if
Un +F (V),
Vn
+ G
(V),
then
F=G
if and only if
Ilun vnll + O.
+ F
(V)
and
Vn
+ G (V),
An inner product in
~,
107
then
Un
+ Vn + F + G (V).
Vn +
G (V),
let
(17)
The existence of this limit is left as an exercise. Lemma 1.2 shows that the
limit is independent of the particular sequences (un)f and (vn)f. That is,
if also
Un + F
(V),
v' + G (V)
then
(18)
Theorem 1.3. The inner product in V defined by (17) satisfies the identities
(2), (3), (4), (5), (6). If we define
(19)
then this is a norm on L2. The space V is complete with respect to this norm.
Proof The fact that (2)(6) hold is a consequence of (17) and (2)(6)
for functions. We also have the Schwarz inequality in V:
I(F,
IIF I is a norm.
Finally, suppose (Fn)f C L2 is a Cauchy sequence with respect to this
norm. First, note that if
It follows that
un+F (V)
Vn
= v, all n, then
It follows that
i.e., F can be approximated in L2 by functions. Therefore, for each n =
1, 2, . . . we can find a function Vn E 'C such that
Then
Ilvn  vmll = IlFvn  Fvmll ::;; IlFvn  Fnll + IlFn  Fmll + IlFm  Fvmll
< IlFn  Fmll + n 1 + m 1 + 0 as n, m + 00.
108
v" + F (L2).
But then
Exercises
lIull
C(;'
Define
F(v) =
2~
f"
e [a, b),
I(x) = 1,
I(x) = 0,
x [a.. b).
I(x)v(x) dx =
2~
vex) dx,
bE
o=
Xo <
Xl
Define
F(v)
1
2
[Xllo XI),
17 .0
u" +F
(L2).
IITtu  T.ull+ 0
as t + s.
IITtF  T.FII+ 0
as t + s.
IIFII
= sup
I}.
109
Hilbert space
2. Hilbert space
In this section we consider an abstract version of the space L2 of l.
This clarifies the nature of certain theorems. In addition, the abstract version
describes other spaces which are obtained in very different ways.
Suppose H is a vector space over the real or complex numbers. An
inner product in H is a function assigning to each ordered pair of elements
u, v E H a real or complex number denoted by (u, v), such that
(U1
+ U2, v) =
(I)
where
(2)
Then lIuli is a norm on H. If H is complete with respect to the metric associated with this norm, then H is said to be a Hilbert space. In particular, V
is a Hilbert space. Clearly any Hilbert space is a Banach space.
A more mundane example than L² is the finite-dimensional vector space ℂ^N of N-tuples of complex numbers, with
(a, b) = Σ_{n=1}^N a_n b_n*.
In the real space ℝ² this reduces to the familiar dot product
a·b = a₁b₁ + a₂b₂
when a = (a₁, a₂), b = (b₁, b₂); here we use the dot to avoid confusing the inner product with ordered pairs. It is worth noting that the law of cosines of trigonometry can be written
a·b = |a||b| cos θ,
where θ is the angle between the line segment from 0 to a and the line segment from 0 to b. Therefore |a·b| = |a||b| if and only if the segments lie on the same line. Similarly, a·b = 0 if and only if the segments form a right angle.
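The law of cosines identity can be checked numerically; the vectors below are illustrative values of my own choosing, not taken from the text.

```python
import math

# Two vectors in R^2 (illustrative values).
a = (3.0, 0.0)
b = (1.0, 1.0)

dot = a[0] * b[0] + a[1] * b[1]          # a . b = a1 b1 + a2 b2
norm_a = math.hypot(*a)
norm_b = math.hypot(*b)
# Angle between the segments from 0 to a and from 0 to b.
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])

# Law of cosines: a . b = |a| |b| cos(theta)
assert abs(dot - norm_a * norm_b * math.cos(theta)) < 1e-12
```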
Elements u and v of a Hilbert space H are said to be orthogonal if the inner product (u, v) is zero. In ℝ² this means that the corresponding line segments are perpendicular. We write
u ⊥ v
when u and v are orthogonal. More generally, u ∈ H is said to be orthogonal to the subset S ⊂ H if
u ⊥ v, all v ∈ S.
If so we write
u ⊥ S.
If u ⊥ v then
(3) ‖u + v‖² = ‖u‖² + ‖v‖².
In ℝ², this is essentially the Pythagorean theorem, and we shall give the identity (3) that name in any case. Another simple identity with a classical geometric interpretation is the parallelogram law:
(4) ‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖².
This follows immediately from the properties of the inner product. In ℝ² it says that the sum of the squares of the lengths of the diagonals of the parallelogram with vertices 0, u, v, u + v is equal to the sum of the squares of the lengths of the (four) sides.
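The parallelogram law (4) follows from expanding the inner products; a quick numerical check in ℂ³, with made-up vectors, is:

```python
# Numerical check of the parallelogram law (4) in C^3,
# using the inner product (a, b) = sum a_n * conj(b_n).
u = [1 + 2j, -0.5j, 3.0]
v = [0.25, 1 - 1j, -2 + 1j]

def norm_sq(w):
    # ||w||^2 = (w, w) = sum |w_n|^2
    return sum(abs(z) ** 2 for z in w)

lhs = (norm_sq([x + y for x, y in zip(u, v)])
       + norm_sq([x - y for x, y in zip(u, v)]))
rhs = 2 * norm_sq(u) + 2 * norm_sq(v)
assert abs(lhs - rhs) < 1e-12
```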
When speaking of convergence in a Hilbert space, we shall always mean convergence with respect to the metric associated with the norm. Thus u_n → u means
‖u_n − u‖ → 0.
The Schwarz inequality shows that the inner product is a continuous function.

Lemma 2.1. If (u_n)₁^∞, (v_n)₁^∞ ⊂ H and u_n → u, v_n → v, then
(u_n, v_n) → (u, v).
Corollary 2.2. If u ⊥ S, then u is orthogonal to the closure of S.

Theorem 2.3. Suppose H₁ is a closed subspace of the Hilbert space H and suppose u ∈ H. Then there is a unique element v of H₁ closest to u, i.e., a unique v ∈ H₁ with
‖u − v‖ = d = inf{‖u − w‖ : w ∈ H₁}.

Proof. Choose a sequence (v_n)₁^∞ ⊂ H₁ with ‖u − v_n‖ → d. If we can show that (v_n)₁^∞ is a Cauchy sequence, then it has a limit v ∈ H. Since H₁ is closed, we would have v ∈ H₁ and ‖u − v‖ = d as desired.
Geometrically the argument that (v_n)₁^∞ is a Cauchy sequence is as follows. The midpoint ½(v_n + v_m) of the line segment joining v_n and v_m has distance ≥ d from u, by the definition of d. Therefore the square of the length of one diagonal of the parallelogram with vertices u, v_n, v_m, v_n + v_m − u is nearly equal to the sum of the squares of the lengths of the sides. It follows that the length ‖v_n − v_m‖ of the other diagonal is small. Algebraically, we use (4) to get
‖v_n − v_m‖² = 2‖u − v_n‖² + 2‖u − v_m‖² − 4‖u − ½(v_n + v_m)‖²
≤ 2‖u − v_n‖² + 2‖u − v_m‖² − 4d² → 2d² + 2d² − 4d² = 0 as n, m → ∞.
To show uniqueness, suppose that v and w both are closest to u in the above sense. Then another application of the parallelogram law gives
‖v − w‖² = 2‖u − v‖² + 2‖u − w‖² − 4‖u − ½(v + w)‖² ≤ 4d² − 4d² = 0,
so we must have v = w. ∎
Theorem 2.4. Suppose H₁ is a closed subspace of the Hilbert space H, suppose u ∈ H, and suppose v is the element of H₁ closest to u. Then u₁ = u − v is orthogonal to H₁. Conversely, if v ∈ H₁ and u − v ⊥ H₁, then v is the element of H₁ closest to u.

Proof. Let u₁ = u − v. If w ∈ H₁ and a is a scalar, then v + aw ∈ H₁, so
‖u₁ − aw‖² = ‖u − (v + aw)‖² ≥ ‖u − v‖² = ‖u₁‖²,
or
−2 Re[a*(u₁, w)] + |a|²‖w‖² ≥ 0.
Let
(5) a = t(u₁, w),  t > 0.
Then (5) becomes
2t|(u₁, w)|² ≤ t²|(u₁, w)|²‖w‖²,
and letting t → 0 we get (u₁, w) = 0. Thus u₁ ⊥ H₁.
Conversely, suppose u − v ⊥ H₁ and suppose w ∈ H₁. Then v − w ∈ H₁, so
(u − v) ⊥ (v − w).
By the Pythagorean theorem,
‖u − w‖² = ‖u − v‖² + ‖v − w‖² ≥ ‖u − v‖²,
so v is closest to u. ∎

Corollary 2.5. If H₁ is a closed subspace of H and H₁ ≠ H, then there is a nonzero u₁ ∈ H with u₁ ⊥ H₁.

Proof. If H₁ ≠ H, take u₀ ∈ H, u₀ ∉ H₁, and let v₀ be the element of H₁ closest to u₀. Then u₁ = u₀ − v₀ is nonzero and, by Theorem 2.4, orthogonal to H₁. ∎

Suppose v ∈ H. The inner product with v defines a functional
ℓ_v(u) = (u, v),  u ∈ H.
This functional is linear, and by the Schwarz inequality
|ℓ_v(u)| ≤ ‖v‖ ‖u‖.
Thus ℓ_v is bounded.

Theorem 2.6 (Riesz representation theorem). Suppose L is a bounded linear functional on the Hilbert space H. Then there is a unique v ∈ H such that L = ℓ_v.

Proof. If L = 0 we may take v = 0. Otherwise, let
H₁ = {u ∈ H | L(u) = 0}.
Then H₁ is a subspace of H, since L is linear; H₁ is closed, since L is continuous. Since L ≠ 0, H₁ is not H. Take a nonzero u ∈ H which is orthogonal to H₁, and let
v = ‖u‖⁻² L(u)* u.
Then also v ⊥ H₁, so
ℓ_v(w) = L(w) = 0,  all w ∈ H₁.
Moreover,
ℓ_v(u) = (u, v) = ‖u‖⁻² L(u)(u, u) = L(u).
If w is any element of H, then
w − L(u)⁻¹L(w)u ∈ H₁.
Thus any element of H is of the form
au + w₁,  w₁ ∈ H₁,
and ℓ_v and L agree on all such elements, so L = ℓ_v. Finally, if ℓ_v = ℓ_{v′} then (u, v − v′) = 0 for all u ∈ H; taking u = v − v′ gives v = v′. ∎
Exercises
1. Prove (2).
2. Suppose F ∈ 𝒫′. Show that F ∈ L² if and only if there is a constant c such that
|F(u)| ≤ c‖u‖,  all u ∈ 𝒫.
3. Let H be any Hilbert space and let H1 be a closed subspace of H. Let
H2 = {u E H I u 1 H 1}.
Show that H2 is a closed subspace of H. Show that for any u E H there are
unique vectors U1 E H1 and U2 E H2 such that
u=
U1
+ U2.
4. Show that if x = (a₁, a₂, ..., a_N) ∈ ℂ^N, then
‖x‖² = Σ_{n=1}^N |a_n|².
Let ℓ₊² denote the set of sequences
x = (a_n)₁^∞ ⊂ ℂ
such that
(1) Σ_{n=1}^∞ |a_n|² < ∞.
If
x = (a_n)₁^∞ ∈ ℓ₊² and y = (b_n)₁^∞ ∈ ℓ₊²,
we set
(2) (x, y) = Σ_{n=1}^∞ a_n b_n*.

Theorem 3.1. With the inner product (2), ℓ₊² is a Hilbert space.
Proof. Suppose
x = (a_n)₁^∞ ∈ ℓ₊² and y = (b_n)₁^∞ ∈ ℓ₊².
If a ∈ ℂ, then clearly ax = (aa_n)₁^∞ ∈ ℓ₊². As for x + y, we have
Σ |a_n + b_n|² ≤ Σ (2|a_n|² + 2|b_n|²) < ∞.
Thus ℓ₊² is a vector space. To show that the inner product (2) is defined for all x, y ∈ ℓ₊², we use the Schwarz inequality of §2. For each N,
Σ_{n=1}^N |a_n b_n*| ≤ (Σ_{n=1}^N |a_n|²)^(1/2) (Σ_{n=1}^N |b_n|²)^(1/2) ≤ ‖x‖ ‖y‖.
Therefore the series in (2) converges absolutely, and (2) is defined. It is easy to check that (x, y) has the properties of an inner product. The only remaining question is whether ℓ₊² is complete.
Suppose (x_m)₁^∞ ⊂ ℓ₊² is a Cauchy sequence with respect to the norm ‖x‖ = (x, x)^(1/2), say x_m = (a_{m,n})_{n=1}^∞, m = 1, 2, .... Since
|a_{m,n} − a_{p,n}| ≤ ‖x_m − x_p‖ → 0,
for each n the scalar sequence (a_{m,n})_{m=1}^∞ is a Cauchy sequence; let a_n = lim_{m→∞} a_{m,n}, and let x = (a_n)₁^∞. Since (x_m)₁^∞ is a Cauchy sequence, the norms are bounded:
‖x_m‖ ≤ K, all m.
Therefore for any N,
Σ_{n=1}^N |a_n|² = lim_{m→∞} Σ_{n=1}^N |a_{m,n}|² ≤ K²,
so x ∈ ℓ₊². Finally, given ε > 0, choose M so large that m, p ≥ M implies
‖x_m − x_p‖ < ε.
Then for any N and any m ≥ M,
Σ_{n=1}^N |a_{m,n} − a_n|² = lim_{p→∞} Σ_{n=1}^N |a_{m,n} − a_{p,n}|² ≤ ε².
Thus ‖x_m − x‖ ≤ ε if m ≥ M, so x_m → x. ∎
Similarly, let ℓ² denote the set of two-sided sequences
x = (a_n)_{−∞}^∞ ⊂ ℂ
such that
(3) Σ_{n=−∞}^∞ |a_n|² = lim_{M,N→+∞} Σ_{n=−M}^N |a_n|² < ∞.
If x = (a_n)_{−∞}^∞ and y = (b_n)_{−∞}^∞ are in ℓ², we let
(4) (x, y) = Σ_{n=−∞}^∞ a_n b_n*.

Theorem 3.2. With the inner product (4), ℓ² is a Hilbert space.

The proof of this theorem is very similar to the proof of Theorem 3.1.
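Membership in ℓ₊² and the Schwarz inequality used above can be illustrated numerically. The sketch below truncates the sequences x_n = 1/n and y_n = 1/n² (illustrative choices of mine, not from the text) at a large index:

```python
import math

# x_n = 1/n and y_n = 1/n^2 are both in l_+^2:
# sum 1/n^2 and sum 1/n^4 both converge.
N = 200_000
x = lambda n: 1.0 / n
y = lambda n: 1.0 / n ** 2

norm_x = math.sqrt(sum(x(n) ** 2 for n in range(1, N)))
norm_y = math.sqrt(sum(y(n) ** 2 for n in range(1, N)))
ip = sum(x(n) * y(n) for n in range(1, N))

# Schwarz inequality |(x, y)| <= ||x|| ||y|| for the truncated sequences
assert ip <= norm_x * norm_y
# sum 1/n^2 = pi^2/6, so ||x||^2 should be close to pi^2/6
assert abs(norm_x ** 2 - math.pi ** 2 / 6) < 1e-3
```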
Exercises
1. Prove Theorem 3.2.
2. Let e_m ∈ ℓ₊² be the sequence whose mth term is 1 and whose other terms are all 0. Show that
‖e_m‖ = 1;  e_m ⊥ e_p if m ≠ p;  (x, e_m) → 0 as m → ∞, for each x ∈ ℓ₊².
3. Show that the closed unit ball B = {x ∈ ℓ₊² | ‖x‖ ≤ 1} is not compact.
4. Orthonormal bases
The Hilbert space ℂ^N is a finite-dimensional vector space. Therefore any element of ℂ^N can be written uniquely as a linear combination of a given set of basis vectors. It follows that the inner product of two elements of ℂ^N can be computed if we know the expression of each element as such a linear combination. Conversely, the inner product makes possible a very convenient way of expressing a given vector as a linear combination of basis vectors.
Specifically, let e_n ∈ ℂ^N be the N-tuple
e_n = (0, ..., 0, 1, 0, ..., 0),
where the 1 is in the nth place. Then {e₁, e₂, ..., e_N} is a basis for ℂ^N. Moreover it is clear that
(1) (e_n, e_m) = 1 if n = m;  (e_n, e_m) = 0 if n ≠ m.
If x = (a₁, a₂, ..., a_N) ∈ ℂ^N then the expression for x as a linear combination of the basis vectors e_n is
(2) x = Σ_{n=1}^N a_n e_n.
Because of (1),
a_n = (x, e_n).
Thus we may rewrite (2) as
(3) x = Σ_{n=1}^N (x, e_n) e_n.
If also y = (b₁, b₂, ..., b_N), then
(4) (x, y) = Σ_{n=1}^N a_n b_n* = Σ_{n=1}^N (x, e_n)(y, e_n)*.
In particular,
(5) ‖x‖² = Σ_{n=1}^N |(x, e_n)|².
The aim of this section and the next is to carry this development over
to a class of Hilbert spaces which are not finite dimensional. We look for
infinite subsets (e_n)₁^∞ with the properties (1), and try to write elements as
convergent infinite sums analogous to (3).
A subset S of a Hilbert space H is said to be orthonormal if each u ∈ S has norm 1, while
(u, v) = 0 if u, v ∈ S, u ≠ v.
The following procedure for producing orthonormal sets is called the Gram–Schmidt method.

Lemma 4.1. Suppose {u₁, u₂, ...} is a finite or infinite set of elements of a Hilbert space H. Then there is a finite or infinite set S = {e₁, e₂, ...} of elements of H such that S is orthonormal and such that each u_n is in the subspace spanned by {e₁, e₂, ..., e_n}.
Proof. Discarding any u_j which is a linear combination of its predecessors, we may assume that no u_j is in the subspace spanned by u₁, ..., u_{j−1}; in particular each u_j ≠ 0. Let e₁ = ‖u₁‖⁻¹u₁. Inductively, suppose e₁, ..., e_m have been chosen. Let
v_{m+1} = u_{m+1} − Σ_{n=1}^m (u_{m+1}, e_n) e_n.
Then v_{m+1} ≠ 0 and (v_{m+1}, e_n) = 0 for n ≤ m. Let e_{m+1} = ‖v_{m+1}‖⁻¹v_{m+1}. The set S so obtained is orthonormal, and each u_n is in the subspace spanned by {e₁, ..., e_n}. ∎
Note that completeness of H was not used. Thus Lemma 4.1 is valid in any space with an inner product.
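The Gram–Schmidt procedure of Lemma 4.1 can be sketched in code. The following is a minimal illustration in ℂ^N; the function name and the test vectors are my own choices, not from the text.

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize vectors in C^N, following Lemma 4.1:
    subtract the projections onto the e_n already found, then normalize."""
    basis = []
    for u in vectors:
        w = list(u)
        for e in basis:
            c = sum(a * b.conjugate() for a, b in zip(w, e))  # (w, e)
            w = [a - c * b for a, b in zip(w, e)]
        norm = math.sqrt(sum(abs(a) ** 2 for a in w))
        if norm > 1e-12:          # skip vectors already in the span
            basis.append([a / norm for a in w])
    return basis

basis = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
# Check orthonormality: (e_m, e_p) = 1 if m = p, 0 otherwise.
for m, em in enumerate(basis):
    for p, ep in enumerate(basis):
        ip = sum(a * b.conjugate() for a, b in zip(em, ep))
        assert abs(ip - (1 if m == p else 0)) < 1e-9
```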
An orthonormal basis for a Hilbert space H is an orthonormal set S c H
such that span (S) is dense in H. This means that for any U E H and any
e > 0, there is a v, which is a linear combination of elements of S, such that
Ilu  vii < e.
A Hilbert space H is said to be separable if there is a sequence (u_n)₁^∞ ⊂ H which is dense in H. This means that for any u ∈ H and any ε > 0, there is an n such that ‖u − u_n‖ < ε.
Theorem 4.2. Suppose H is a separable Hilbert space. Then H has an
orthonormal basis S, which is finite or countable.
Conversely, if H is a Hilbert space which has a finite or countable orthonormal basis, then H is separable.
Proof. Suppose first that (u_n)₁^∞ is dense in H. Applying the Gram–Schmidt process of Lemma 4.1 to {u₁, u₂, ...}, we obtain a finite or countable orthonormal set S whose span contains every u_n. Since (u_n)₁^∞ is dense, span(S) is dense, so S is an orthonormal basis.
Conversely, suppose S = {e₁, e₂, ...} is a finite or countable orthonormal basis for H. Let T be the set of all linear combinations
Σ_{n=1}^N a_n e_n,
where N is arbitrary, the e_n are in S, and the a_n are complex numbers whose real and imaginary parts are rational. It is not difficult to show that T is countable, so the elements of T may be arranged in a sequence (u_n)₁^∞. Any complex number is the limit of a sequence of complex numbers with rational real and imaginary parts. It follows that any linear combination of elements of S is a limit of a sequence of elements in T. Since S is assumed to be an orthonormal basis, this implies that (u_n)₁^∞ is dense in H. ∎
To complete Theorem 4.2, we want to know whether two orthonormal
bases in a separable Hilbert space have the same number of elements.
Theorem 4.3. Suppose H is a separable Hilbert space. If dim H = N < ∞, then any orthonormal basis for H is a basis for H as a vector space, and therefore has N elements.
If H is not finite-dimensional, then any orthonormal basis for H is countable.
Proof. Suppose dim H = N < ∞, and suppose S ⊂ H is an orthonormal basis. If e₁, ..., e_M are distinct elements of S and
Σ_{m=1}^M a_m e_m = 0,
then taking the inner product with e_m gives
a_m = 0,  m = 1, ..., M.
Thus the elements of S are linearly independent, so S has ≤ N elements. Let S = {e₁, e₂, ..., e_M}. We want to show that S is a basis. Let H₁ be the subspace spanned by S. Given u ∈ H, let
u₁ = Σ_{n=1}^M (u, e_n) e_n.
Then
(u − u₁, e_n) = 0,  n = 1, ..., M.
It follows that u − u₁ is orthogonal to H₁. But H₁ is finite-dimensional, hence closed, and the span of S is dense in H; therefore H₁ = H and u = u₁. Thus S is a basis, and M = N.
Suppose now that H is not finite-dimensional, and let S be an orthonormal basis. If e, f ∈ S and e ≠ f, then
‖e − f‖² = ‖e‖² − (e, f) − (f, e) + ‖f‖² = 1 + 0 + 0 + 1 = 2,
so ‖e − f‖ = √2. Let (u_n)₁^∞ be dense in H. For each e ∈ S there is an index n(e) with ‖e − u_{n(e)}‖ < √2/2. If e ≠ f then u_{n(e)} ≠ u_{n(f)}, so the mapping e ↦ n(e) is one-to-one from S into the positive integers. Thus S is countable. ∎
Exercises
1. Let X₁ be the vector space of continuous complex-valued functions defined on the interval [−1, 1]. If u, v ∈ X₁, let an inner product (u, v)₁ be defined by
(u, v)₁ = ∫_{−1}^{1} u(x)v(x)* dx.
Let u_n(x) = x^(n−1), n = 1, 2, .... Carry out the Gram–Schmidt process of Lemma 4.1 to find polynomials p_n, n = 1, 2, 3, 4, such that p_n is of degree n − 1, p_n has real coefficients, the leading coefficient is positive, and
(p_n, p_m)₁ = 1 if n = m,  (p_n, p_m)₁ = 0 if n ≠ m.
2. Let X₂ be the space of continuous complex-valued functions u on ℝ such that
∫_{−∞}^{∞} |u(x)|² e^(−(1/2)x²) dx < ∞,
with inner product
(u, v)₂ = ∫_{−∞}^{∞} u(x)v(x)* e^(−(1/2)x²) dx.
Let u_n(x) = x^(n−1), n = 1, 2, .... Carry out the Gram–Schmidt process to find polynomials p_n such that p_n is of degree n − 1 with positive leading coefficient, and
∫_{−∞}^{∞} |p_n(x)|² e^(−(1/2)x²) dx = 1,  ∫_{−∞}^{∞} p_n(x)p_m(x) e^(−(1/2)x²) dx = 0 if n ≠ m.
5. Orthogonal expansions

Suppose H is a finite-dimensional Hilbert space with orthonormal basis {e₁, e₂, ..., e_N}. By the results of §4, each u ∈ H can be written uniquely as
(1) u = Σ_{n=1}^N a_n e_n,
where
(2) a_n = (u, e_n).
Moreover,
(3) (u, v) = Σ_{n=1}^N (u, e_n)(v, e_n)*,  all u, v ∈ H.
In particular,
(4) ‖u‖² = Σ_{n=1}^N |(u, e_n)|².
The expression (1) for u ∈ H with coefficients given by (2) is called the orthogonal expansion of u with respect to the orthonormal basis {e₁, ..., e_N}.
We are now in a position to carry (1)–(4) over to an infinite-dimensional separable Hilbert space.
Theorem 5.1. Suppose H is an infinite-dimensional separable Hilbert space with orthonormal basis (e_n)₁^∞. Then each u ∈ H has a unique convergent expansion
(6) u = Σ_{n=1}^∞ a_n e_n.
The coefficients are given by
(7) a_n = (u, e_n),
and they satisfy
(8) Σ_{n=1}^∞ |a_n|² = ‖u‖².
More generally, if
(9) u = Σ_{n=1}^∞ a_n e_n  and  v = Σ_{n=1}^∞ b_n e_n,
then
(10) (u, v) = Σ_{n=1}^∞ a_n b_n*.
Conversely, if (a_n)₁^∞ ⊂ ℂ and
(11) Σ_{n=1}^∞ |a_n|² < ∞,
then there is a unique u ∈ H for which (6) and (7) hold.
Proof. Given u ∈ H, define a_n by (7) and let
(12) u_N = Σ_{n=1}^N a_n e_n.
Then (u_N, e_n) = a_n = (u, e_n) if 1 ≤ n ≤ N, so (u − u_N) ⊥ u_N. By the Pythagorean theorem,
‖u‖² = ‖u − u_N‖² + ‖u_N‖² ≥ ‖u_N‖² = Σ_{n=1}^N |a_n|².
Since (e_n)₁^∞ is a basis, given ε > 0 there is a linear combination v = Σ_{n=1}^N c_n e_n with
‖u − v‖ < ε.
But u − u_N is orthogonal to the span of {e₁, ..., e_N}, so u_N is the element of that span closest to u, and
‖u − u_N‖ ≤ ‖u − v‖ < ε.
Therefore u_N → u, which proves (6). By Lemma 2.1,
‖u‖² = lim (u_N, u_N) = lim Σ_{n=1}^N |a_n|² = Σ_{n=1}^∞ |a_n|²,
which is (8).
More generally, suppose u and v are given by (9). Let u_N be defined by (12), and let v_N be defined in a similar way. Then by Lemma 2.1,
(u, v) = lim (u_N, v_N) = lim Σ_{n=1}^N a_n b_n* = Σ_{n=1}^∞ a_n b_n*.
Finally, suppose (a_n)₁^∞ satisfies (11), and let u_N be given by (12). If M < N, then
(13) ‖u_N − u_M‖² = Σ_{n=M+1}^N |a_n|².
Because of (11), the right side of (13) converges to zero as M, N → ∞. Thus (u_N)₁^∞ is a Cauchy sequence and has a limit u ∈ H satisfying (6); by Lemma 2.1 the coefficients of u are necessarily the a_n, so u is unique. ∎
The corresponding results hold for an orthonormal basis indexed by the set of all integers, (e_n)_{−∞}^∞.

Theorem 5.2. Suppose H is an infinite-dimensional separable Hilbert space with orthonormal basis (e_n)_{−∞}^∞. Then each u ∈ H has a unique expansion
(14) u = Σ_{n=−∞}^∞ a_n e_n,
in the sense that
(15) Σ_{n=−N}^N a_n e_n → u as N → ∞.
The coefficients are given by
(16) a_n = (u, e_n),
and they satisfy
(17) Σ_{n=−∞}^∞ |a_n|² = ‖u‖².
More generally, if u = Σ_{−∞}^∞ a_n e_n and v = Σ_{−∞}^∞ b_n e_n, then
(18) (u, v) = Σ_{n=−∞}^∞ a_n b_n*.
Conversely, suppose (a_n)_{−∞}^∞ ⊂ ℂ and
(19) Σ_{n=−∞}^∞ |a_n|² < ∞.
Then there is a unique u ∈ H satisfying (14) and (16), in the sense that
(20) ‖u − Σ_{n=−N}^N a_n e_n‖ → 0 as N → ∞.
The proof is essentially the same as that of Theorem 5.1.
Exercises
1. Suppose H and H₁ are two infinite-dimensional separable complex Hilbert spaces. Use Theorems 4.2, 4.3, and 5.1 to show that there is a linear transformation U from H onto H₁ such that
(Uu, Uv) = (u, v),  all u, v ∈ H.
Such a transformation is called a unitary equivalence.
2. Let U: ℓ₊² → ℓ₊² be defined by
U((a₁, a₂, a₃, ...)) = (0, a₁, a₂, a₃, ...).
Show that U is a 1-1 linear transformation such that
(Ux, Uy) = (x, y),  all x, y ∈ ℓ₊².
Show that U is not onto.
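A finitely supported sketch of the shift operator in Exercise 2 (the sample vectors are arbitrary illustrations of mine):

```python
# The shift U((a1, a2, ...)) = (0, a1, a2, ...) preserves inner products
# but is not onto: no element maps to (1, 0, 0, ...).
x = [1 + 1j, 2, 3]          # finitely supported elements of l_+^2
y = [0.5, -1j, 4]

def ip(a, b):
    # (a, b) = sum a_n * conj(b_n)
    return sum(p * q.conjugate() for p, q in zip(a, b))

Ux, Uy = [0] + x, [0] + y
assert abs(ip(Ux, Uy) - ip(x, y)) < 1e-12   # (Ux, Uy) = (x, y)
assert Ux[0] == 0                            # every Ux has first coordinate 0
```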
6. Fourier series
Let L² be the Hilbert space introduced in §1. Thus L² consists of each periodic distribution F which is the limit, in the sense of L², of a sequence (u_n)₁^∞ ⊂ 𝒞, i.e.,
‖u_n − u_m‖ → 0,  F_{u_n} → F (𝒫′).
We can identify the space 𝒞 of continuous periodic functions with a subspace of L² by identifying the function u with the distribution F_u. Then
‖F_u‖² = ‖u‖² = (1/2π)∫₀^{2π} |u(x)|² dx.
Theorem 6.1. The functions
e_n(x) = exp(inx),  n = 0, ±1, ±2, ...,
considered as elements of L², are an orthonormal basis of L².

Proof. Clearly ‖e_n‖ = 1. If m ≠ n, then
(e_n, e_m) = (1/2π)∫₀^{2π} e_n(x)e_m(x)* dx = (1/2π)∫₀^{2π} exp(i(n − m)x) dx
= [exp(i(n − m)x)/(2πi(n − m))]₀^{2π} = 0.
Thus the set is orthonormal. To see that its span is dense, suppose F ∈ L² and ε > 0. There is a u ∈ 𝒞 such that ‖F − F_u‖ < ε, and there is a linear combination v of the functions e_n, i.e., a trigonometric polynomial, such that
|v − u| < ε.
Then ‖F_u − F_v‖ ≤ |u − v| < ε, so ‖F − F_v‖ < 2ε. ∎
Theorem 6.2. Suppose F ∈ L². Then
(1) F = Σ_{n=−∞}^∞ a_n exp(inx),
in the sense that
(2) Σ_{n=−N}^N a_n e_n → F (L²) as N → ∞.
The coefficients are given by
(3) a_n = F(e_{−n}),
and they satisfy
(4) Σ_{n=−∞}^∞ |a_n|² = ‖F‖².
More generally, if
F = Σ_{−∞}^∞ a_n exp(inx)  and  G = Σ_{−∞}^∞ b_n exp(inx),
then
(5) (F, G) = Σ_{−∞}^∞ a_n b_n* = Σ_{−∞}^∞ F(e_{−n})G*(e_n).
Conversely, suppose (a_n)_{−∞}^∞ ⊂ ℂ and Σ_{−∞}^∞ |a_n|² < ∞. Then there is a unique F ∈ L² such that (1) is true, in the sense of (2).

Proof. In view of Theorems 5.2 and 6.1, all that remains is to show that
(F, e_n) = F(e_{−n}),  (F, G) = Σ F(e_{−n})G*(e_n).
The second identity follows from the first and the definition of G*. To prove the first, take (u_m)₁^∞ ⊂ 𝒞, u_m → F (L²). Then
(F, e_n) = lim (u_m, e_n) = lim (1/2π)∫₀^{2π} u_m(x)exp(−inx) dx = lim F_{u_m}(e_{−n}) = F(e_{−n}). ∎

The a_n in (3) are called the Fourier coefficients of the distribution F. The formal series on the right in (1) is called the Fourier series of F.
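Parseval's relation (4) can be checked numerically. The sketch below approximates the coefficients by the trapezoid rule for the smooth periodic function u(x) = e^(cos x), an illustrative choice of mine; for smooth periodic integrands this rule is extremely accurate.

```python
import cmath, math

# a_n = (1/2pi) * integral of u(x) exp(-inx) dx, approximated on a grid.
M = 512                                   # grid points on [0, 2pi)
xs = [2 * math.pi * k / M for k in range(M)]
u = [math.exp(math.cos(x)) for x in xs]

def coeff(n):
    return sum(u[k] * cmath.exp(-1j * n * xs[k]) for k in range(M)) / M

# Parseval: sum |a_n|^2 = ||u||^2 = (1/2pi) * integral |u(x)|^2 dx.
# The coefficients of exp(cos x) decay so fast that |n| <= 20 suffices.
lhs = sum(abs(coeff(n)) ** 2 for n in range(-20, 21))
rhs = sum(v ** 2 for v in u) / M
assert abs(lhs - rhs) < 1e-8
```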
Suppose F = F_u, where u ∈ 𝒞. Then (2) is equivalent to
(6) Σ_{n=−N}^N a_n exp(inx) → u (L²) as N → ∞,
where
(7) a_n = (1/2π)∫₀^{2π} u(x) exp(−inx) dx.
In fact, (6) and (7) remain valid when u is simply assumed to be an integrable function on [0, 2π]. In this case the a_n are called the Fourier coefficients of the function u, and the formal series
Σ_{−∞}^∞ a_n exp(inx)
is called the Fourier series of the function u. The fact that (6) and (7) remain valid in this case is easily established, as follows. If u: [0, 2π] → ℂ is integrable, then again it defines a distribution F_u by
F_u(v) = (1/2π)∫₀^{2π} u(x)v(x) dx,
and
|F_u(v)| ≤ ‖u‖ ‖v‖.
By Exercise 2 of §2, F_u ∈ L².
When u ∈ 𝒞, it is tempting to interpret (1) for F_u as
u(x) = Σ_{n=−∞}^∞ a_n exp(inx).
In general, however, the series on the right may diverge for some values of x, and it will certainly not converge uniformly without further restrictions on u. It is sufficient to assume that u has a continuous derivative.
Lemma 6.3. If (a_n)_{−∞}^∞ ⊂ ℂ and
(8) Σ_{n=−∞}^∞ |a_n| < ∞,
then the partial sums Σ_{n=−N}^N a_n exp(inx) converge uniformly to a continuous periodic function u with
u(x) = Σ_{n=−∞}^∞ a_n exp(inx),  all x ∈ ℝ.

Theorem 6.4. Suppose u is a continuously differentiable periodic function. Then the partial sums of the Fourier series of u converge uniformly to u. Moreover, if u ∈ 𝒫 then the partial sums converge to u in the sense of 𝒫.
Proof. Let v = Du. There is a relation between the Fourier coefficients of v and those of u. In fact integration by parts gives
(9) b_n = (v, e_n) = (1/2π)∫₀^{2π} Du(x)exp(−inx) dx = (in/2π)∫₀^{2π} u(x)exp(−inx) dx = in a_n.
By the Schwarz inequality in ℓ²,
Σ_{−∞}^∞ |a_n| ≤ |a₀| + (2Σ_{n=1}^∞ n⁻²)^(1/2)(Σ_{−∞}^∞ |b_n|²)^(1/2) ≤ |a₀| + (2Σ_{n=1}^∞ n⁻²)^(1/2)‖Du‖ < ∞.
By Lemma 6.3, the partial sums of the Fourier series of u converge uniformly to u.
Now suppose u ∈ 𝒫 and let
u_N = Σ_{n=−N}^N a_n e_n.
Then
Du_N = Σ_{n=−N}^N in a_n e_n = Σ_{n=−N}^N b_n e_n
is the Nth partial sum of the Fourier series of Du. Similarly, D^k u_N is the Nth partial sum of the Fourier series of D^k u. Each D^k u is continuously differentiable, so each D^k u_N → D^k u uniformly. ∎
Fourier series expansions are very commonly written in terms of sine and cosine functions, rather than the exponential function. This is particularly natural when the function u or distribution F is real. Suppose F ∈ L². Let
(10) b_n = 2F(cos nx),  c_n = 2F(sin nx),  n = 0, 1, 2, ....
Since
e_{−n}(x) = cos nx − i sin nx,
we have
a_n = ½(b_n − ic_n),  a_{−n} = ½(b_n + ic_n),  n ≥ 0.
Thus for n > 0,
a_n e_n(x) + a_{−n}e_{−n}(x) = a_n(cos nx + i sin nx) + a_{−n}(cos nx − i sin nx) = b_n cos nx + c_n sin nx.
Then
Σ_{n=−N}^N a_n e_n(x) = ½b₀ + Σ_{n=1}^N (b_n cos nx + c_n sin nx),
and the series
(11) ½b₀ + Σ_{n=1}^∞ (b_n cos nx + c_n sin nx)
is also called the Fourier series of F, and the coefficients b_n, c_n given by (10) are also called the Fourier coefficients of F. If F is real, then b_n, c_n are real, and (11) is a series of real-valued functions of x. Theorems 6.2 and 6.4 may be restated using the series (11).
Exercises
1. Find the Fourier coefficients of the following integrable functions on [0, 2π]:
(a) u(x) = |x − π|,
(b) u(x) = (x − π)²,
(c) u(x) = x,
(d) u(x) = |cos x|.
2. Suppose u ∈ 𝒞 and suppose b_n, c_n are as in (10). Show that if u is even then c_n = 0, all n. Show that if u is odd, then b_n = 0, all n. (It is convenient to integrate over [−π, π] instead of [0, 2π].) Show that if u is real then
¼b₀² + ½Σ_{n=1}^∞ (b_n² + c_n²) = (1/2π)∫₀^{2π} u(x)² dx.
3. Suppose u ∈ 𝒞, and let
u_N = Σ_{n=−N}^N (u, e_n)e_n.
Show that
u_N(x) = (1/2π)∫₀^{2π} D_N(x − y)u(y) dy,
where
D_N(x) = sin((N + ½)x)/sin(½x).
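The identity in Exercise 3 can be verified on a grid. The test function, grid size, and tolerance below are illustrative choices of mine:

```python
import cmath, math

def dirichlet(N, x):
    # D_N(x) = sin((N + 1/2)x) / sin(x/2), with D_N(0) = 2N + 1
    s = math.sin(x / 2)
    if abs(s) < 1e-12:
        return 2 * N + 1
    return math.sin((N + 0.5) * x) / s

# Compare the partial sum of the Fourier series of u with the
# convolution (1/2pi) * integral D_N(x - y) u(y) dy, both discretized.
M, N = 400, 5
ys = [2 * math.pi * k / M for k in range(M)]
u = [math.exp(math.cos(y)) for y in ys]   # a smooth periodic test function

a = {n: sum(u[k] * cmath.exp(-1j * n * ys[k]) for k in range(M)) / M
     for n in range(-N, N + 1)}

x = 1.3
partial = sum(a[n] * cmath.exp(1j * n * x) for n in range(-N, N + 1)).real
conv = sum(dirichlet(N, x - y) * uv for y, uv in zip(ys, u)) / M
assert abs(partial - conv) < 1e-6
```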
Chapter 5

Suppose u ∈ 𝒫 is a smooth periodic function, with Fourier coefficients
a_n = F_u(e_{−n}),  n = 0, ±1, ±2, ....
By Theorem 6.4 of Chapter 4, the partial sums
u_N(x) = Σ_{n=−N}^N a_n exp(inx)
of the Fourier series converge to u in the sense of 𝒫. Therefore it makes sense to ask: what are necessary and sufficient conditions on a two-sided sequence (a_n)_{−∞}^∞ ⊂ ℂ that it be the sequence of Fourier coefficients of a function u ∈ 𝒫? The question is not hard to answer.
A sequence (a_n)_{−∞}^∞ ⊂ ℂ is said to be of rapid decrease if for every r > 0 there is a constant c = c(r) such that
(2) |a_n| ≤ c|n|^(−r),  all n ≠ 0.

Theorem 1.1. A sequence (a_n)_{−∞}^∞ ⊂ ℂ is the sequence of Fourier coefficients of a function u ∈ 𝒫 if and only if it is of rapid decrease.

Proof. Suppose first that u ∈ 𝒫 has (a_n)_{−∞}^∞ as its sequence of Fourier coefficients. Given r > 0, take an integer k ≥ r. In proving Theorem 6.4 of Chapter 4 we noted that (in a_n)_{−∞}^∞ is the sequence of Fourier coefficients of Du. It follows by induction that ((in)^k a_n)_{−∞}^∞ is the sequence of Fourier coefficients of D^k u, so
|n|^k |a_n| ≤ |D^k u|,
i.e., |a_n| ≤ |D^k u| |n|^(−k) ≤ |D^k u| |n|^(−r) for n ≠ 0.
Conversely, suppose (a_n)_{−∞}^∞ is of rapid decrease. For each k ≥ 0, taking r = k + 2 implies
|n|^k |a_n| ≤ c(k + 2)|n|⁻²,  n ≠ 0,
so each of the series
D^k u_N = Σ_{n=−N}^N (in)^k a_n e_n
converges uniformly as N → ∞. Therefore u = lim u_N is in 𝒫, and its Fourier coefficients are the a_n. ∎
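The dichotomy between rapid decrease (smooth functions) and slower decay can be seen numerically. Below, Fourier coefficients are approximated by discrete sums; the test functions e^(cos x) (smooth) and |x − π| (merely continuous, with |a_n| behaving like n⁻² for odd n) are illustrative choices of mine.

```python
import cmath, math

M = 1024
xs = [2 * math.pi * k / M for k in range(M)]

def coeff_abs(f, n):
    # |a_n|, approximated by a discrete sum over the grid
    return abs(sum(f(x) * cmath.exp(-1j * n * x) for x in xs)) / M

smooth = lambda x: math.exp(math.cos(x))
triangle = lambda x: abs(x - math.pi)

# For the smooth function, |a_n| * n^5 still decreases with n...
assert coeff_abs(smooth, 20) * 20 ** 5 < coeff_abs(smooth, 5) * 5 ** 5
# ...while the triangle's coefficients behave like c / n^2 for odd n,
# so |a_n| * n^2 is roughly constant there.
r5 = coeff_abs(triangle, 5) * 5 ** 2
r21 = coeff_abs(triangle, 21) * 21 ** 2
assert 0.5 < r5 / r21 < 2.0
```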
The Fourier coefficients of u ∈ 𝒫 are thus given by
(3) a_n = F_u(e_{−n}),  n = 0, ±1, ±2, ....
Since e_{−n} ∈ 𝒫, the expression in (3) makes sense for any periodic distribution F, whether or not it is in L². Thus given any F ∈ 𝒫′, we define its Fourier coefficients to be the sequence (a_n)_{−∞}^∞ defined by (3). We know that if all the a_n are zero, then F = 0 (see Chapter 3, Theorem 4.3). Therefore, F ∈ 𝒫′ is uniquely determined by its Fourier coefficients, and we may ask: what are necessary and sufficient conditions on a sequence (a_n)_{−∞}^∞ ⊂ ℂ that it be the sequence of Fourier coefficients of a periodic distribution F? Again, the answer is not difficult.
A sequence (a_n)_{−∞}^∞ ⊂ ℂ is said to be of slow growth if there are positive constants c and r such that
(4) |a_n| ≤ c|n|^r,  all n ≠ 0.

Theorem 1.2. A sequence (a_n)_{−∞}^∞ ⊂ ℂ is the sequence of Fourier coefficients of a distribution F ∈ 𝒫′ if and only if it is of slow growth.
Proof. Suppose first that F ∈ 𝒫′ has (a_n)_{−∞}^∞ as its sequence of Fourier coefficients. Recall that for some integer k, F is of order k: for u ∈ 𝒫,
|F(u)| ≤ c(|u| + |Du| + ... + |D^k u|).
Taking u = e_{−n} gives
(5) |a_n| ≤ c(1 + |n| + ... + |n|^k) ≤ 2c|n|^k,  n ≠ 0.
Thus (a_n)_{−∞}^∞ is of slow growth.
Conversely, suppose (a_n)_{−∞}^∞ is of slow growth, with |a_n| ≤ c|n|^r for n ≠ 0, and choose an integer k ≥ r + 2. Let b₀ = 0 and
b_n = (in)^(−k) a_n,  n ≠ 0.
Then
Σ_{n=−∞}^∞ |b_n| ≤ c Σ_{n≠0} |n|^(r−k) < ∞,
so by Lemma 6.3 of Chapter 4 the partial sums v_N = Σ_{−N}^N b_n e_n converge uniformly to a continuous periodic function v. Let F = a₀ + D^k F_v. Then F is a periodic distribution whose sequence of Fourier coefficients is (a_n)_{−∞}^∞. ∎
In the course of the preceding proof we gave a second proof of the characterization theorem for periodic distributions, Theorem 6.1 of Chapter 3. In fact, the whole theory of 𝒫 and 𝒫′ in Chapter 3 can be derived from the point of view of Fourier series. We shall do much of such a derivation in this section and the next. An important feature of such a program is to express the action of F ∈ 𝒫′ on u ∈ 𝒫 in terms of the respective sequences of Fourier coefficients.

Theorem 1.3. Suppose F ∈ 𝒫′ has (a_n)_{−∞}^∞ as its sequence of Fourier coefficients, and suppose u ∈ 𝒫 has (b_n)_{−∞}^∞ as its sequence of Fourier coefficients. Then
(6) F(u) = Σ_{−∞}^∞ a_n b_{−n} = Σ_{−∞}^∞ a_{−n} b_n.
Proof. Let
u_N(x) = Σ_{n=−N}^N b_n exp(inx).
We know u_N → u (𝒫). Therefore
F(u) = lim F(u_N) = lim Σ_{n=−N}^N b_n F(e_n) = lim Σ_{n=−N}^N b_n a_{−n} = Σ_{−∞}^∞ a_{−n} b_n. ∎

Implicit in the proof of Theorem 1.3 is the proof that the series in (6) converges. A more direct proof uses the criteria in Theorems 1.1 and 1.2. In fact, (a_n) is of slow growth and (b_n) is of rapid decrease, so the terms a_{−n}b_n are bounded by a constant multiple of |n|⁻², and the series converges absolutely.
Theorem 1.4. Suppose F ∈ 𝒫′ has Fourier coefficients (a_n)_{−∞}^∞, and let F_N be the distribution defined by the function
Σ_{n=−N}^N a_n exp(inx).
Then F_N → F (𝒫′) as N → ∞.

Proof. With u, u_N as in Theorem 1.3,
F_N(u) = Σ_{n=−N}^N a_n b_{−n} → Σ_{n=−∞}^∞ a_n b_{−n} = F(u). ∎
Exercises
1. Compute the Fourier coefficients of S and of D^k S.
2. Suppose F ∈ 𝒫′ has Fourier coefficients (a_n)_{−∞}^∞. Compute the Fourier coefficients of F*.
4. Suppose (φ_m)₁^∞ ⊂ 𝒫 is an approximate identity. Compute the limit of the Fourier coefficients of φ_m as m → ∞.
Theorem 2.1. Suppose F ∈ 𝒫′ has Fourier coefficients (a_n)_{−∞}^∞ and u ∈ 𝒫 has Fourier coefficients (c_n)_{−∞}^∞. Then the Fourier coefficients of F * u are (a_n c_n)_{−∞}^∞.

Proof. Note that
(T_x e_{−n})(y) = e_{−n}(y − x) = e_n(x)e_{−n}(y).
Therefore, if G ∈ 𝒫′ has Fourier coefficients (b_n)_{−∞}^∞, then
(G * e_n)(x) = e_n(x)G(e_{−n}) = b_n e_n(x).
Now u_N = Σ_{n=−N}^N c_n e_n → u (𝒫), so F * u_N → F * u, and
F * u_N = Σ_{n=−N}^N c_n(F * e_n) = Σ_{n=−N}^N a_n c_n e_n.
Therefore the Fourier coefficients of F * u are (a_n c_n)_{−∞}^∞. ∎
Using Theorem 2.1 and Theorems 1.1 and 1.2, we may easily give a second proof that F * u ∈ 𝒫. Similarly, if u ∈ 𝒫 and G = F_u, then F * G is defined, and in fact F * G and F * u have the same Fourier coefficients.
The approximation theorems of Chapter 3 may be proved using Theorem 2.1 and the following two general approximation theorems.
Theorem 2.2. Suppose (u_m)₁^∞ ⊂ 𝒫, and suppose the Fourier coefficients of u_m are (a_{m,n})_{n=−∞}^∞. Suppose that for each r > 0 there is a constant c = c(r) such that
|a_{m,n}| ≤ c|n|^(−r),  all m, all n ≠ 0.
Suppose, finally, that for each n,
a_{m,n} → a_n as m → ∞.
Then (a_n)_{−∞}^∞ is the sequence of Fourier coefficients of a function u ∈ 𝒫, and moreover
u_m → u (𝒫).
Proof. Letting m → ∞ in the hypothesis gives
|a_n| ≤ c|n|^(−r),  all n ≠ 0.
Thus (a_n)_{−∞}^∞ is of rapid decrease, so it is the sequence of Fourier coefficients of a function u ∈ 𝒫.
Given ε > 0, choose N so large that
Σ_{|n|≥N} c(2)|n|⁻² < ε,
and then choose M so large that m ≥ M implies
Σ_{|n|<N} |a_{m,n} − a_n| < ε.
It follows that if m ≥ M then
Σ_{n=−∞}^∞ |a_{m,n} − a_n| < 3ε.
Since
|u_m(x) − u(x)| ≤ Σ |a_{m,n} − a_n|,
this implies that u_m → u uniformly. Applying the same argument to the coefficients ((in)^k a_{m,n}) of D^k u_m, with r = k + 2, we get D^k u_m → D^k u uniformly for each k. Thus u_m → u (𝒫). ∎
Theorem 2.3. Suppose (F_m)₁^∞ ⊂ 𝒫′, and suppose the Fourier coefficients of F_m are (a_{m,n})_{n=−∞}^∞. Suppose there are constants c and r such that
|a_{m,n}| ≤ c|n|^r,  all m, all n ≠ 0,
and suppose that for each n,
a_{m,n} → a_n as m → ∞.
Then (a_n)_{−∞}^∞ is the sequence of Fourier coefficients of a distribution F ∈ 𝒫′, and F_m → F (𝒫′).

Proof. Letting m → ∞ gives
|a_n| ≤ c|n|^r,  all n ≠ 0,
so (a_n)_{−∞}^∞ is of slow growth and is the sequence of Fourier coefficients of a distribution F. Choose an integer k ≥ r + 2 and let b_{m,n} = (in)^(−k)a_{m,n}, b_n = (in)^(−k)a_n for n ≠ 0. Then
|b_{m,n}| ≤ c|n|⁻²,  n ≠ 0,  and  b_{m,n} → b_n as m → ∞.
Similarly, (b_n)_{−∞}^∞ satisfies the same bound. By the argument of Theorem 2.2 the corresponding continuous functions converge uniformly, and the n = 0 terms converge. It follows that F_m → F (𝒫′). ∎
Exercises
1. Show that if u ∈ 𝒫 and (φ_m)₁^∞ is an approximate identity, then
φ_m * u → u (𝒫).
2. Show that if F ∈ 𝒫′, then
F * φ_m → F (𝒫′).
3. Show that if F ∈ L², then
F * φ_m → F (L²).
4. Prove the converse of Theorem 2.2: if u_m → u (𝒫) then the Fourier coefficients satisfy the hypotheses of Theorem 2.2.
(1) (∂/∂t)u(x, t) = K(∂/∂x)²u(x, t),  x ∈ (0, π),  t > 0,
(2) u(x, 0) = g(x),  x ∈ [0, π],
and either
(3) u(0, t) = u(π, t) = 0,  t ≥ 0,
or
(3)′ (∂/∂x)u(0, t) = (∂/∂x)u(π, t) = 0,  t > 0.
The function u describes the temperature distribution in a thin homogeneous metal rod of length π. The number x represents the distance of the point P on the rod from one end of the rod, the number t represents the time, and the number u(x, t) the temperature at the point P at time t. Equation (1) expresses the assumption that the rod is in an insulating medium, with no heat gained or lost except possibly at the ends. The constant K > 0 is proportional to the thermal conductivity of the metal, and we may assume units are chosen so that K = 1. Equation (2) expresses the assumption that the temperature distribution is known at time t = 0. Equation (3) expresses the assumption that the ends of the rod are kept at the constant temperature 0, while the alternative equation (3)′ applies if the ends are assumed insulated. Later we shall sketch the derivation of Equation (1) and indicate some other physical processes it describes.
Let us convert the two problems (1), (2), (3) and (1), (2), (3)′ into a single problem for a function periodic in x. Note that if (2) and (3) are both to hold, we should have
(4) g(0) = g(π) = 0.
In this case, let g(−x) = −g(x) for x ∈ (−π, 0). Then g has a unique extension to all of ℝ which is periodic; this extension is odd, and we might expect that u is odd as a function of x for each t ≥ 0. If this is so, then necessarily (3) is true.
Similarly, if (2) and (3)′ are both to hold, we would expect that if g is smooth then
Dg(0) = Dg(π) = 0.
In this case, let g(−x) = g(x), x ∈ (−π, 0). Then g has a unique extension to all of ℝ which is even and periodic. If g is of class C¹ on [0, π], the extension is of class C¹ on ℝ. Again, if u were a solution of (1), (2) which is periodic in x for all t, we might expect u to be even for all t. Then (3)′ is necessarily true.
The above considerations suggest that we replace the two problems by the single periodic problem
(5) (∂/∂t)u(x, t) = (∂/∂x)²u(x, t),  x ∈ ℝ,  t > 0,
(6) u(x, 0) = g(x),  x ∈ ℝ,
where g is continuous and periodic. In terms of periodic distributions: given G ∈ 𝒫′, we look for a family of distributions (F_t)_{t>0} ⊂ 𝒫′ such that
(7) (d/dt)F_s = D²F_s,  all s > 0,
(8) F_t → G (𝒫′) as t → 0.

Theorem 3.1. For each G ∈ 𝒫′ there is a unique family (F_t)_{t>0} ⊂ 𝒫′ such that (7) and (8) hold. For each t > 0, F_t is a function u_t(x) = u(x, t) which is infinitely differentiable in both variables and satisfies (5).
Proof. Let a_n(t) = F_t(e_{−n}), n = 0, ±1, ±2, .... Condition (7) implies that
(t − s)⁻¹(a_n(t) − a_n(s)) → −n²a_n(s)
as t → s. In other words, a_n is differentiable and
(9) Da_n(t) = −n²a_n(t),  t > 0.
As t → 0 we have
(10) a_n(t) → b_n = G(e_{−n}).
The unique function a_n(t), t > 0, satisfying (9) and (10) is
(11) a_n(t) = b_n exp(−n²t).
This shows that the a_n(t) are uniquely determined, so the distributions F_t are uniquely determined.
To show existence, we want to show that the a_n(t) defined by (11) are the Fourier coefficients of a smooth function for t > 0. Recall that if y > 0 then
e^y = Σ_{m=0}^∞ y^m/m! ≥ y^m/m!,
so
e^(−y) ≤ m! y^(−m).
Therefore
|a_n(t)| ≤ m! |b_n| |n|^(−2m) t^(−m),  m = 0, 1, 2, ....
But
|b_n| ≤ c|n|^k,  n ≠ 0,
for some c and k. Therefore
|a_n(t)| ≤ c m! |n|^(k−2m) t^(−m),  m = 0, 1, 2, ....
It follows that for each t > 0, (a_n(t))_{−∞}^∞ is of rapid decrease, hence is the sequence of Fourier coefficients of a function u_t ∈ 𝒫. Then u_t is the uniform limit of the functions
u_N(x, t) = Σ_{n=−N}^N b_n exp(inx − n²t).
The Fourier coefficients a_{n,l,r}(t) of the derivatives (∂/∂t)^l(∂/∂x)^r u_N are (−n²)^l(in)^r a_n(t), and as above
|a_{n,l,r}(t)| ≤ c m! |n|^(k+2l+r−2m) t^(−m),  m = 0, 1, 2, ....
It follows that all these derivatives converge uniformly for t ≥ t₀ > 0, so
u(x, t) = u_t(x)
is smooth for x ∈ ℝ, t > 0. For each N, u_N satisfies (5). Therefore u satisfies (5).
Let F_t be the distribution determined by u_t, t > 0. Thus the Fourier coefficients of F_t are the same as those of u_t. It follows from (11), together with the estimates above and Theorem 2.3, that F_t → G (𝒫′) as t → 0, so (8) holds. Similarly,
(t − s)⁻¹(u_t − u_s) → (∂/∂t)u_s
uniformly as t → s > 0. Therefore (7) also holds. ∎
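A hedged sketch of the solution formula just proved, for a simple odd initial function of my own choosing (g has only two Fourier modes, so the series b_n e^(−n²t) is finite and explicit):

```python
import math

# With g(x) = sin x + 0.3 sin 3x, the periodic heat-equation solution is
# u(x, t) = e^{-t} sin x + 0.3 e^{-9t} sin 3x  (coefficients b_n e^{-n^2 t}).
def u(x, t):
    return (math.exp(-1 * t) * math.sin(x)
            + 0.3 * math.exp(-9 * t) * math.sin(3 * x))

# Check du/dt = d^2u/dx^2 by central finite differences.
x0, t0, h = 0.7, 0.5, 1e-4
du_dt = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
d2u_dx2 = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
assert abs(du_dt - d2u_dx2) < 1e-5

# As t -> infinity the solution decays to 0 (the mean value of g).
assert abs(u(x0, 50.0)) < 1e-20
```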
Exercises
1. In Theorem 3.1, suppose G is even or odd. Show that each F_t is respectively even or odd. Show that if G is real, then each F_t is real.
2. Discuss the behavior of F_t as t → ∞.
3. Formulate the correct conditions on the function u if it represents the temperature in a rod with one end insulated and the other held at constant temperature.
4. In Theorem 3.1, suppose G = F_w, the distribution defined by a function w ∈ 𝒫. Show that the functions u_t → w (𝒫) as t → 0.
5. Let
g_t(x) = Σ_{n=−∞}^∞ exp(−n²t + inx),  t > 0,  x ∈ ℝ.
Show that the solution of Theorem 3.1 is given by F_t = G * g_t.
6. Show that g_t * g_s = g_{t+s}.
7. The backwards heat equation is the equation (1) considered for t ≤ 0, with initial (or "final") condition (2). Is it reasonable to expect that solutions for this problem will exist? Specifically, given G ∈ 𝒫′, will there always be a family of distributions (F_t)_{t<0} ⊂ 𝒫′ such that
(d/dt)F_s = D²F_s,  all s < 0,
F_t → G (𝒫′) as t → 0?
8. Discuss the corresponding questions for the equation
(∂/∂t)u = i(∂/∂x)²u,
with (d/dt)F_s = iD²F_s, all s > 0, and F_t → G (𝒫′) as t → 0.
Suppose u = u(x, t) is continuous for x ∈ [0, π], t ≥ 0, has continuous derivatives (∂/∂t)u and (∂/∂x)²u for t > 0, and satisfies
(1) (∂/∂t)u(x, t) = (∂/∂x)²u(x, t),  x ∈ (0, π),  t > 0,
(2) u(x, 0) = g(x),  x ∈ (0, π),
and either
(3) u(0, t) = u(π, t) = 0,  t ≥ 0,
or
(3)′ (∂/∂x)u(0, t) = (∂/∂x)u(π, t) = 0,  t > 0.
Such a function u is called a classical solution of the problem (1), (2), (3) or (1), (2), (3)′, in contrast to the distribution solution for the periodic problem given by Theorem 3.1. In this section we complete the discussion by showing that a classical solution exists and is unique, and that it is given by the distribution solution. We consider the problem (1), (2), (3), and leave the problem (1), (2), (3)′ as an exercise.
Given g ∈ C[0, π] with g(0) = g(π) = 0 (so that (3) is reasonable), extend g to be an odd periodic function in 𝒞, and let G = F_g. By Theorem 3.1, there is a function u(x, t) = u_t(x) which is smooth in x, t for x ∈ ℝ, t > 0, satisfies (1) for all such x, t, and which converges to G in the sense of 𝒫′ as t → 0. By Exercise 1 of §3, u is odd as a function of x for each t > 0. Since u is also periodic as a function of x, this implies that (3) holds when t > 0. If we knew that u_t → g uniformly as t → 0, it would follow that the restriction of u to 0 ≤ x ≤ π, t ≥ 0 is a classical solution of (1), (2), (3). Note that this is true when (the extension of) g is smooth: see Exercise 4 of §3. Everything else we need to know follows from the maximum principle stated in the following theorem.

Theorem 4.1. Suppose u is a classical solution of (1). Then the maximum value of u on the rectangle 0 ≤ x ≤ π, 0 ≤ t ≤ T is attained on one of the three edges t = 0, x = 0, or x = π.
Proof. Given ε > 0, let v(x, t) = u(x, t) − εt; it is enough to show that the maximum value of v is attained on one of the edges in question, and then let ε → 0. Otherwise, the maximum of v would be attained at (x₀, t₀), where
x₀ ∈ (0, π),  t₀ > 0.
But then at (x₀, t₀) we would have (∂/∂t)v ≥ 0, so
(∂/∂x)²v(x₀, t₀) = (∂/∂t)v(x₀, t₀) + ε ≥ ε > 0,
which is impossible at a maximum in x. ∎

Applying Theorem 4.1 to u and to −u, we conclude that a classical solution of (1), (2), (3) satisfies
|u(x, t)| ≤ |g|,  all x ∈ [0, π],  t ≥ 0.
The heat equation itself may be derived as follows. The net rate at which heat flows to the point x from the nearby portions of the rod is approximately proportional to
[u(x + ε, t) − u(x, t)] + [u(x − ε, t) − u(x, t)] = u(x + ε, t) + u(x − ε, t) − 2u(x, t)
for small ε > 0. Dividing by ε² and letting ε → 0 gives
(4) (∂/∂t)u(x, t) = K(∂/∂x)²u(x, t).
Note that essentially the same reasoning applies to the following general
situation. A (relatively) narrow cylinder contains a large number of individual
objects which move rather randomly about. The random motion of each
object is assumed symmetric in direction (left or right is equally likely) and
essentially independent of position in the cylinder, past motion, or the
presence of the other objects. As examples one can picture diffusion of
molecules or dye in a tube of water kicked about by thermal motion of the
water molecules, or the late stages of a large cocktail party in a very long
narrow room. If u(x, t) represents the density of the objects near the point x
at time t, then equation (4) arises again. Boundary conditions like (3)′ correspond to the ends of the cylinder being closed, while those like (3) correspond to having one-way doors at the ends, to allow egress but not ingress.
Exercises
1. Suppose G = F_w, w ∈ 𝒞, and suppose (u_t)_{t>0} is the family of functions in Theorem 3.1. Show that if w ≥ 0, then each u_t is ≥ 0.
2. As in Exercise 5 of §3, let
g_t(x) = Σ_{n=−∞}^∞ exp(−n²t + inx),  t > 0.
Show that
(1/2π)∫₀^{2π} g_t(x) dx = 1.
Show that if w ∈ 𝒫 and w ≥ 0, then g_t * w ≥ 0. Show, by using any approximate identity in 𝒫, that g_t ≥ 0.
3. Show that (g_t)_{t>0} is an approximate identity as t → 0, i.e., (in addition to the conclusions of Exercise 2) for each 0 < δ < π,
lim_{t→0} ∫_δ^{2π−δ} g_t(x) dx = 0.
(1) (∂/∂t)²u(x, t) = (∂/∂x)²u(x, t),  x ∈ (0, π),  t > 0,
(2) u(x, 0) = g(x),  (∂/∂t)u(x, 0) = h(x),  x ∈ [0, π],
(3) u(0, t) = u(π, t) = 0,  t ≥ 0.
As in the case of the heat equation we begin by formulating a corresponding problem for periodic distributions and solving it. The conditions (2) suggest that we extend g, h to be odd and periodic and look for a solution periodic in x. The procedure is essentially the same as in §3.
Theorem 5.1. For each G, H ∈ 𝒫′ there is a unique family (F_t)_{t>0} ⊂ 𝒫′ with the following properties:
(4) (d/dt)²F_s = D²F_s,  all s > 0,
(5) F_t → G (𝒫′) as t → 0,
(6) (d/dt)F_t → H (𝒫′) as t → 0.
As in the proof of Theorem 3.1, conditions (4), (5), (6) imply that each coefficient a_n(t) = F_t(e_{−n}) is twice continuously differentiable for t > 0, a_n and Da_n have limits at t = 0, and
D²a_n(t) = −n²a_n(t),  a_n(0) = b_n,  Da_n(0) = c_n,
where (b_n) and (c_n) are the Fourier coefficients of G and H. The unique solutions are
(7) a_n(t) = b_n cos nt + n⁻¹c_n sin nt,  n ≠ 0,
(8) a₀(t) = b₀ + c₀t.
Thus we have proved uniqueness. On the other hand, the functions (7), (8) satisfy
(9) |a_n(t)| ≤ |b_n| + |c_n|,  n ≠ 0,
(10) a_n(t) → b_n as t → 0, all n,
(11) |Da_n(t)| ≤ |n||b_n| + |c_n|,  n ≠ 0,
(12) Da_n(t) → c_n as t → 0, all n.
It follows from (9) and Theorem 1.2 that (a_n(t))_{−∞}^∞ is the sequence of Fourier coefficients of a distribution F_t. It follows from (10) and Theorem 2.3 that (5) is true. It follows from (11), the mean value theorem, and Theorem 2.3 that (d/dt)F_t exists and has Fourier coefficients
(Da_n(s))_{−∞}^∞.
It follows in the same way that
(d/dt)²F_t |_{t=s} = D²F_s,
and from (12) that (6) holds. ∎
Let us look more closely at the distribution F_t in the case when G and H are the distributions defined by functions g and h in 𝒫. First, suppose h = 0. The Fourier series for g converges:
g(x) = Σ_{−∞}^∞ a_n exp(inx).
Inequality (9) implies that the Fourier series for F_t also converges; then F_t is the distribution defined by the function u_t ∈ 𝒞, where
u(x, t) = u_t(x) = Σ_{−∞}^∞ a_n · ½[exp(int) + exp(−int)] exp(inx)
= ½ Σ_{−∞}^∞ a_n exp(in(x + t)) + ½ Σ_{−∞}^∞ a_n exp(in(x − t)),
or
(13) u(x, t) = ½g(x + t) + ½g(x − t).
Next, suppose g = 0, so that
u(x, 0) = 0,  (∂u/∂t)(x, 0) = h(x).
Here a_n(t) = n⁻¹c_n sin nt for n ≠ 0, and a₀(t) = c₀t, where (c_n)_{−∞}^∞ are the Fourier coefficients of h. Again the series for F_t converges, and differentiating with respect to x,
(∂/∂x)u(x, t) = Σ_{n≠0} i c_n sin nt exp(inx)
= ½ Σ_{−∞}^∞ c_n exp(in(x + t)) − ½ Σ_{−∞}^∞ c_n exp(in(x − t)),
so that
u(x, t) = ½ ∫_{x−t}^{x+t} h(y) dy + a(t),
where the function a(t) is determined by the conditions
u(x, 0) = 0,  (∂u/∂t)(x, 0) = h(x).
Equation (1) for the vibrating string may be derived as follows. Approximate the curve u_t representing the displacement at time t by a polygonal line joining the points ..., (x − ε, u(x − ε, t)), (x, u(x, t)), (x + ε, u(x + ε, t)), .... The force on the string at the point x is due to the tension of the string. In this approximation the force due to tension is directed along the line segments from (x, u(x, t)) to (x ± ε, u(x ± ε, t)). The net vertical component of force is then proportional to
u(x + ε, t) + u(x − ε, t) − 2u(x, t),
and Newton's law gives, in the limit,
(∂/∂t)²u(x, t) = c² lim_{ε→0} ε⁻²[u(x + ε, t) + u(x − ε, t) − 2u(x, t)] = c²(∂/∂x)²u(x, t).
For zero initial velocity the solutions vanishing at the ends have the form
(14) u(x, t) = Σ_{n=1}^∞ b_n sin nx cos cnt.
A single term b_n sin nx cos cnt is periodic in t with period 2π/cn. Thus the frequency for this term is cn/2π, an integral multiple of the lowest frequency
(15) c/2π,
where c is proportional to the square root of the tension and inversely proportional to the length and to the radius of the string.
If only finitely many of the coefficients are nonzero,
(16) u(x, t) = Σ_{n=1}^N b_n sin nx cos cnt.
In general, if the basic frequency (15) is not too low (or high) it is heard as the pitch of the string, and the coefficients b₁, b₂, ..., b_N determine the purity: a pure tone corresponds to all but one b_n being zero. Formula (15) shows that the pitch varies inversely with length and radius, and directly with the square root of the tension.
Exercises
1. In Theorem 5.1, suppose DG and H are in L². Show that
∫ |(∂/∂x)u(x, t)|² dx + ∫ |(∂/∂t)u(x, t)|² dx
is independent of t.
2. Show that the solution of Theorem 5.1 can be written
F_t = ½(T_t G + T_{−t}G) + ½(T_{−t}SH − T_t SH).
Here again T_t denotes translation, while S is the operator from 𝒫′ to 𝒫′ defined by
SH(u) = H(v),  where v(x) = ∫_x^{2π} u(t) dt.
(1) ∂²u/∂x² + ∂²u/∂y² = 0.
We consider this equation in the unit disc x² + y² < 1, with prescribed boundary values
g = g(θ) = u(r, θ)|_{r=1},
where (r, θ) are polar coordinates,
x = r cos θ,  y = r sin θ;  r = (x² + y²)^(1/2),  θ = tan⁻¹(yx⁻¹).
Then
∂u/∂x = (∂u/∂r)(∂r/∂x) + (∂u/∂θ)(∂θ/∂x),
and similarly for the other derivatives; computing, one finds
∂²u/∂x² + ∂²u/∂y² = ∂²u/∂r² + r⁻¹ ∂u/∂r + r⁻² ∂²u/∂θ².
Multiplying by r², the problem becomes
(3) r² ∂²u/∂r² + r ∂u/∂r + ∂²u/∂θ² = 0,  0 ≤ r < 1,  θ ∈ ℝ,
(4) u(1, θ) = g(θ),  θ ∈ ℝ.
151
O<r<1.
n # O.
(7)
We look for a solution an(r) of the form bnr c , where e = e(n) is a constant,
for each n. This will be a solution if and only if
e(e  1)
or
c2 =
+e
n2 = 0,
u(r, e) =
00
Pr(e) =
f!JJ. In fact,
2: rlnlelno
00
00
= 1+
i
1
(reiO)n
(reiO)n
2~ {" Pr(e) de = 1
and the last expression shows that pre e) ;;:: 0 and
1
2
lim
r+1 _ "   Pr(e) de = 0
for each 0 < 8 <
7T.
The function
per, e) = Pr(e)
is called the Poisson kernel for the Dirichlet problem in the unit disc.
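The closed form of the Poisson kernel and its normalization can be checked numerically; the truncation, grid sizes, and tolerances below are arbitrary choices of mine.

```python
import math

# P_r(theta) = sum r^{|n|} e^{in theta}; summing the two geometric series
# gives (1 - r^2) / (1 - 2 r cos(theta) + r^2).
def poisson_series(r, theta, N=200):
    return sum(r ** abs(n) * math.cos(n * theta) for n in range(-N, N + 1))

def poisson_closed(r, theta):
    return (1 - r ** 2) / (1 - 2 * r * math.cos(theta) + r ** 2)

r, theta = 0.8, 2.1
assert abs(poisson_series(r, theta) - poisson_closed(r, theta)) < 1e-9
assert poisson_closed(r, theta) > 0            # P_r >= 0

# (1/2pi) * integral of P_r over a period is 1 (only n = 0 survives).
M = 1000
avg = sum(poisson_closed(r, 2 * math.pi * k / M) for k in range(M)) / M
assert abs(avg - 1.0) < 1e-9
```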
Theorem 6.1. Suppose F ∈ 𝒫′. Then there is a unique u = u(r, θ), smooth for 0 ≤ r < 1, which satisfies (3) and has F as boundary value in the sense that u_r → F (𝒫′) as r → 1, where u_r(θ) = u(r, θ). It is given by
u_r = F * P_r.

Proof. Uniqueness was proved in the derivation above. Let u_r = F * P_r. Since P_r is an approximate identity, we do have u_r → F (𝒫′) as r → 1, and u_r → g uniformly if F = F_g, g ∈ 𝒞. We must show that u is smooth and satisfies (3). Note that when F = F_g, g ∈ 𝒞, then explicitly
u(r, θ) = (1/2π)∫₀^{2π} P(r, θ − φ)g(φ) dφ,
and we may differentiate under the integral sign to prove that u is smooth. Moreover, since P(r, θ) satisfies (3), so does u.
Finally, suppose u has merely a distribution F as its value on the boundary. Note that if 0 ≤ r, s < 1 then (by computing Fourier coefficients, for example)
F * P_{rs} = (F * P_r) * P_s.
In particular, choose any R > 0, R < 1. It suffices to show that u is smooth in the disc r < R and satisfies (3) there. But when r < R,
rR⁻¹ < 1
and
u(r, θ) = (1/2π)∫₀^{2π} P(rR⁻¹, θ − φ)u_R(φ) dφ,  0 ≤ r < R.
Again, differentiation under the integral sign shows that u is smooth and satisfies (3). ∎
Theorem 6.2. Suppose u is harmonic in an open subset of ℝ² containing (x₀, y₀). Then there are an ε > 0 and a power series
f(z) = Σ_{n=0}^∞ a_n(z − z₀)^n,  |z − z₀| < ε,  z₀ = x₀ + iy₀,
such that
u(x, y) = Re f(x + iy)
when
|x + iy − z₀| < ε.
Proof. Suppose first that (x₀, y₀) = (0, 0) and that the set in which u is defined contains the closed disc x² + y² ≤ 1. Let
g(θ) = u(cos θ, sin θ),  θ ∈ ℝ.
Then u is the unique solution of the Dirichlet problem in the unit disc with g as value on the boundary. If (b_n)_{−∞}^∞ are the Fourier coefficients of g, then
u(r cos θ, r sin θ) = Σ_{−∞}^∞ b_n r^|n| e^(inθ).
Let f be defined by
f(z) = Σ_{n=0}^∞ a_n z^n,
where
a₀ = b₀,  a_n = 2b_n,  n > 0.
Since u is real, b_{−n} = b_n*, and
u(x, y) = Re(Σ_{n=0}^∞ a_n(re^(iθ))^n) = Re(f(re^(iθ))) = Re(f(x + iy)),  x² + y² < 1.
In the general case, assume that u is defined on a set containing the closed disc of radius ε centered at (x₀, y₀), and let
u₁(x, y) = u(x₀ + εx, y₀ + εy).
Then u₁ is harmonic in a set containing the closed unit disc, so u₁ = Re f₁ there. Letting f(z) = f₁(ε⁻¹(z − z₀)), where z₀ = x₀ + iy₀, we get u = Re f. ∎
Exercises

1. Show that if
f(z) = Σ_{n=0}^{∞} aₙ(z − z₀)ⁿ,  |z − z₀| < R,
and we let
u(x, y) = Re f(x + iy),  |x + iy − z₀| < R,
then u is harmonic.
2. There is a maximum principle for harmonic functions analogous to the maximum principle for solutions of the heat equation discussed in §4.
(a) Show that if u is of class C² on an open set A in ℝ² and
∂²u/∂x² + ∂²u/∂y² > 0
at each point of A, then u does not have a local maximum at any point of A.
(b) Suppose u is of class C² and harmonic in an open disc in ℝ² and continuous on the closure of this disc. Show, by considering the functions
u_ε(x, y) = u(x, y) + εx² + εy²,
that u attains its maximum on the boundary circle.
Chapter 6
Complex Analysis
1. Complex differentiation
Suppose Ω is an open subset of the complex plane ℂ. Recall that this means that for each z₀ ∈ Ω there is a δ > 0 so that Ω contains the disc of radius δ around z₀:
z ∈ Ω if |z − z₀| < δ.
A function f: Ω → ℂ is said to be differentiable at z ∈ Ω if the limit
lim_{w→z} (w − z)⁻¹[f(w) − f(z)]
exists. If so, the limit is called the derivative of f at z and denoted f′(z).
These definitions are formally the same as those given for functions defined on open subsets of ℝ, and the proofs of the three propositions below are also identical to the proofs for functions of a real variable.
Proposition 1.1. If f: Ω → ℂ is differentiable at z ∈ Ω, then it is continuous at z.

Similarly, sums and products of differentiable functions are differentiable, and for a ∈ ℂ, (af)′(z) = af′(z).
The proof of the following theorem is also identical to the proof of the
corresponding Theorem 4.4 of Chapter 2.
Theorem 1.4. Suppose
f(z) = Σ_{n=0}^{∞} aₙ(z − z₀)ⁿ,  |z − z₀| < R.
Then f is differentiable at each point of this disc, and
f′(z) = Σ_{n=1}^{∞} n aₙ(z − z₀)^{n−1}.
In particular, the exponential and the sine and cosine functions are differentiable as functions defined on ℂ, and (exp z)′ = exp z, (cos z)′ = −sin z, (sin z)′ = cos z.
A remarkable fact about complex differentiation is that a converse of Theorem 1.4 is true: if f is defined in the disc |z − z₀| < R and differentiable at each point of this disc, then f can be expressed as the sum of a power series which converges in the disc. We shall sketch one proof of this fact in the Exercises at the end of this section, and give a second proof in §3 and a third in §7 (under the additional hypothesis that the derivative is continuous). Here we want to give some indication why the hypothesis of differentiability is so much more powerful in the complex case than in the real case. Consider the function
f(z) = z*,  or  f(x + iy) = x − iy.
Any such f determines two real-valued functions
u(x, y) = Re f(x + iy),  v(x, y) = Im f(x + iy).
Thus
f(x + iy) = u(x, y) + iv(x, y).
We shall speak of u and v as the real and imaginary parts of f and write f = u + iv. Consider the four first partial derivatives
(1)  ∂u/∂x,  ∂u/∂y,  ∂v/∂x,  ∂v/∂y.
Theorem 1.5. If f = u + iv is differentiable at z = x + iy, then the partial derivatives (1) exist at (x, y) and satisfy
(2)  (∂/∂x)u(x, y) = (∂/∂y)v(x, y),
(3)  (∂/∂y)u(x, y) = −(∂/∂x)v(x, y).
Conversely, suppose that the partial derivatives (1) all exist and are continuous in an open set containing (x, y) and satisfy (2), (3) at (x, y). Then f is differentiable at z = x + iy.
The equations (2) and (3) are called the Cauchy–Riemann equations. They provide a precise analytical version of the requirement that the limit defining f′(z) be independent of the direction of approach. Note that in the example f(z) = z* we have
(4)  ∂u/∂x = 1,  ∂v/∂y = −1,  ∂u/∂y = ∂v/∂x = 0,
so (2) fails at every point.
Proof. Suppose f is differentiable at z = x + iy. Then, taking t real,
f′(z) = lim_{t→0} t⁻¹[f(z + t) − f(z)] = (∂/∂x)u(x, y) + i(∂/∂x)v(x, y),
f′(z) = lim_{t→0} (it)⁻¹[f(z + it) − f(z)] = −i(∂/∂y)u(x, y) + (∂/∂y)v(x, y).
Equating the real and imaginary parts of these two expressions, we get (2)
and (3).
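The two directional limits above can be probed numerically with central differences: a holomorphic function such as z² satisfies (2) and (3), while f(z) = z* fails them. (A sketch of ours; the helper name is an assumption.)

```python
import numpy as np

def cr_residuals(f, z, h=1e-6):
    """Finite-difference Cauchy-Riemann residuals (u_x - v_y, u_y + v_x) at z."""
    fx = (f(z + h) - f(z - h)) / (2*h)        # d/dx of f = u_x + i v_x
    fy = (f(z + 1j*h) - f(z - 1j*h)) / (2*h)  # d/dy of f = u_y + i v_y
    ux, vx = fx.real, fx.imag
    uy, vy = fy.real, fy.imag
    return ux - vy, uy + vx

z = 0.3 + 0.4j
holo = cr_residuals(lambda w: w**2, z)            # holomorphic: both near 0
conj = cr_residuals(lambda w: w.conjugate(), z)   # f(z) = z*: residuals (2, 0)
```

For z*, u = x and v = −y, so u_x − v_y = 2 everywhere: the Cauchy–Riemann equations fail at every point, matching (4).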
Conversely, suppose the first partial derivatives of u and v exist and are continuous near (x, y), and suppose (2) and (3) are true. Let
h = a + ib,
where a and b are real and near zero, h ≠ 0. Applying the Mean Value Theorem to u and v, f(z + h) − f(z) differs from
[(∂/∂x)u(x, y) + i(∂/∂x)v(x, y)]a + [(∂/∂y)u(x, y) + i(∂/∂y)v(x, y)]b
by terms of the form
(∂/∂y)u(x + a, y + θb) − (∂/∂y)u(x, y)
(and the like), evaluated at intermediate points and multiplied by a or b; by the continuity of the partial derivatives these error terms are o(|h|). By (2) and (3) the coefficient of b is i times the coefficient of a, so the main term is [(∂/∂x)u(x, y) + i(∂/∂x)v(x, y)](a + ib). Therefore
h⁻¹[f(z + h) − f(z)] → (∂/∂x)u(x, y) + i(∂/∂x)v(x, y),
and f is differentiable at z. □
Proof. Let u, v be the real and imaginary parts of g. We want to determine real functions q, r such that
f = q + ir
has derivative g. Because of Theorem 1.5 we can see that this will be true if and only if
∂q/∂x = ∂r/∂y = u,  ∂q/∂y = −∂r/∂x = −v.
The condition that u, −v be the partial derivatives of a function q is (by Exercises 1 and 2 of §7, Chapter 2)
∂u/∂y = −∂v/∂x,
and the condition that v, u be the partial derivatives of a function r is
∂v/∂y = ∂u/∂x.
Both conditions hold, by the Cauchy–Riemann equations for g.
Exercises

1. Let f(x + iy) = x² + y². Show that f is differentiable only at z = 0.
2. Suppose f: ℂ → ℝ. Show that f is differentiable at every point if and only if f is constant.
3. Let f(0) = 0 and
f(x + iy) = …,  x + iy ≠ 0.
Show that the first partial derivatives of f exist at each point and are both zero at x = y = 0. Show that f is not differentiable (in fact not continuous) at z = 0. Why does this not contradict Theorem 1.5?
4. Suppose f is holomorphic in Ω and suppose the real and imaginary parts u, v are of class C² in Ω. Show that u and v are harmonic.
5. Suppose f is as in Exercise 4, and suppose the disc |z − z₀| ≤ R is contained in Ω. Use Exercise 4 together with Theorem 6.2 of Chapter 5 to show that there is a power series Σ aₙ(z − z₀)ⁿ converging to f(z) for |z − z₀| < R.
6. Suppose g is holomorphic in Ω, and suppose Ω contains the disc |z − z₀| ≤ R. Let f be such that f′(z) = g(z) for |z − z₀| ≤ R (using Corollary 1.7). Show that the real and imaginary parts of f are of class C².
7. Use the results of Exercises 5 and 6, together with Theorem 1.4, to prove the following theorem.
If g is holomorphic in Ω and z₀ ∈ Ω, then there is a power series such that
g(z) = Σ_{n=0}^{∞} aₙ(z − z₀)ⁿ,  |z − z₀| < R,
for any R such that Ω contains the disc of radius R with center z₀.
2. Complex integration

Suppose Ω ⊂ ℂ is open. A curve in Ω is, by definition, a continuous function γ from a closed interval [a, b] ⊂ ℝ into Ω. The curve γ is said to be smooth if it is a function of class C¹ on the open interval (a, b) and if the one-sided derivatives exist at the endpoints:
(t − a)⁻¹[γ(t) − γ(a)] converges as t → a, t > a;
(t − b)⁻¹[γ(t) − γ(b)] converges as t → b, t < b.
The curve γ is said to be piecewise smooth if there are points a₀, a₁, …, a_r with
a = a₀ < a₁ < … < a_r = b
such that the restriction of γ to [a_{j−1}, a_j] is a smooth curve, 1 ≤ j ≤ r. An example is
γ(t) = z₀ + ε exp(it),  t ∈ [0, 2π];
the image
{γ(t) | t ∈ [0, 2π]}
is the circle of radius ε around z₀. This is a smooth curve. A second example is
γ(t) = t,  t ∈ [0, 1],
γ(t) = 1 + i(t − 1),  t ∈ (1, 2],
γ(t) = 1 + i − (t − 2),  t ∈ (2, 3],
γ(t) = i − (t − 3)i,  t ∈ (3, 4].
The integral of f over γ, denoted ∫_γ f, is defined to be the limit, as the mesh of the partition P = (t₀, t₁, …, t_n) of [a, b] goes to zero, of
Σ_{j=1}^{n} f(γ(t_j))[γ(t_j) − γ(t_{j−1})].
This limit exists, and
(1)  ∫_γ f = ∫_a^b f(γ(t)) γ′(t) dt.
Proof. The integral on the right exists, since the integrand is bounded and is continuous except possibly at finitely many points of [a, b]. To prove that (1) holds we assume first that γ is smooth. Let P = (t₀, t₁, …, t_n) be a partition of [a, b]. Then
(2)  Σ f(γ(t_j))[γ(t_j) − γ(t_{j−1})] = Σ f(γ(t_j)) γ′(t_j)[t_j − t_{j−1}] + R,
where R collects the error terms. Applying the Mean Value Theorem to the real and imaginary parts of γ on [t_{j−1}, t_j] and using the continuity of γ′, we see that
R → 0 as the mesh |P| → 0.
On the other hand, the sum on the right side of (2) is a Riemann sum for the integral on the right side of (1). Thus we have shown that the limit exists and (1) is true.
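Formula (1) can be watched in action numerically: midpoint Riemann sums of f(γ(t))γ′(t) converge to the integral. A sketch of ours, applied to f(z) = (z − z₀)⁻¹ over the circle γ(t) = z₀ + εe^{it}, where the value is 2πi:

```python
import numpy as np

def contour_integral(f, gamma, dgamma, a, b, n=100000):
    """Approximate (1): the integral over gamma of f, as int_a^b f(gamma(t)) gamma'(t) dt."""
    # midpoint rule; spectrally accurate for smooth periodic integrands
    t = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return np.sum(f(gamma(t)) * dgamma(t)) * (b - a) / n

z0, eps = 1 + 1j, 0.5
gamma  = lambda t: z0 + eps * np.exp(1j * t)
dgamma = lambda t: 1j * eps * np.exp(1j * t)

I = contour_integral(lambda z: 1 / (z - z0), gamma, dgamma, 0.0, 2*np.pi)
# I is (numerically) 2*pi*i, independent of eps
```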
The integral depends on the direction in which the image is traced. Furthermore, it matters how many times the point set is traced out by γ: if
γₙ(t) = exp(int),  t ∈ [0, 2π],
then each γₙ traces the unit circle, and
∫_{γₙ} f = n ∫_{γ₁} f.
A curve γ: [a, b] → Ω is said to be closed if γ(a) = γ(b). If γ is a constant curve, then
∫_γ f = 0,  all f.
Suppose γ₀ and γ₁ are closed curves defined on [a, b]. A homotopy from γ₀ to γ₁ in Ω is a continuous function Γ from [a, b] × [0, 1] into Ω such that
Γ(t, 0) = γ₀(t),  Γ(t, 1) = γ₁(t),
Γ(a, s) = Γ(b, s),  all s ∈ [0, 1].
Let
γ_s(t) = Γ(t, s),  s ∈ (0, 1).
Then each γ_s is a closed curve, and we think of these as being a family of curves varying continuously from γ₀ to γ₁ within Ω.
Corollary 2.3. Suppose Ω is either a disc
{z | |z − z₀| < R}
or a rectangle, and suppose f is holomorphic in Ω. If γ is a closed piecewise smooth curve in Ω, then
∫_γ f = 0.
Proof. It is easy to see that in this case γ is homotopic to a constant curve γ₀, so that the conclusion follows from Cauchy's Theorem. However, let us give a different proof. By Corollary 1.7, or by the analogous result for a rectangle in place of a disc, there is a function h, holomorphic in Ω, such that h′ = f. But then
∫_γ f = ∫_a^b f(γ(t)) γ′(t) dt = ∫_a^b h′(γ(t)) γ′(t) dt = h(γ(b)) − h(γ(a)) = 0,
since γ is closed.
'a
YB(t)
Let
= ret, s),
t E [a,
b],
SE
(0,1).
Assume for the moment that each curve YB is piecewise smooth. We would like
to show that the integral off over YB is independent of s, ;5; s ;5; 1.
Assume first that r is of class C2 on the square [a, b] x (0,1), that the
first partial derivatives are uniformly bounded, and that
Y; ~ y~
,,~+y~
as s ~ 0,
as s+ 1
y~
or
y~,
respectively, is
Let F(s) = ∫_{γ_s} f. Then
F′(s) = ∫_a^b (∂/∂s)[f(Γ(t, s)) (∂/∂t)Γ(t, s)] dt
= ∫_a^b (∂/∂t)[f(Γ(t, s)) (∂/∂s)Γ(t, s)] dt
= f(Γ(b, s)) (∂/∂s)Γ(b, s) − f(Γ(a, s)) (∂/∂s)Γ(a, s) = 0,
since Γ(a, s) = Γ(b, s) for all s. Thus F is constant.
In the general case we smooth Γ in the s-variable. Choose a smooth function φ with
φ(x) ≥ 0, all x,  ∫ φ(x) dx = 1,  φ(x) = 0 if |x| ≥ 1,
and let φₙ(x) = nφ(nx), so that
∫ φₙ(x) dx = 1.
Let
Γₙ(t, s) = ∫ Γ(t, s − x) φₙ(x) dx.
Then Γₙ is smooth in s, and the preceding argument gives
∫_{γ₀,ₙ} f = ∫_{γ₁,ₙ} f,
where
γ_{s,n}(t) = Γₙ(t, s),  t ∈ [a, b],  s ∈ [0, 1].
It remains to show that
(3)  ∫_{γ_{s,n}} f → ∫_{γ_s} f as n → ∞,  s = 0 or 1.
We may assume that Γ has been reparametrized so that
Γ(t, s) = γ₀(t),  0 ≤ s ≤ 1/3,
Γ(t, s) = Γ₀(t, 3s − 1),  1/3 < s < 2/3,
Γ(t, s) = γ₁(t),  2/3 ≤ s ≤ 1,
where Γ₀ is the original homotopy. If Γ has these properties, then when n ≥ 3 we have
Γₙ(t, s) = Γ(t, s),  s = 0 or 1.
It follows that γ_{s,n} → γ_s uniformly, s = 0 or 1. It also follows, by differentiating with respect to t, that γ′_{s,n} is uniformly bounded and converges to γ′_s wherever γ′_s is continuous, s = 0 or 1. Therefore (3) holds. □
Exercises

1. Suppose γ₁: [a, b] → Ω and γ₂: [c, d] → Ω are two piecewise smooth curves with the same image C. Suppose these curves trace out the image in the same direction, i.e., … Show that
∫_{γ₁} f = ∫_{γ₂} f.
This justifies writing the integral as an integral over the point set C:
∫_{γ₁} f = ∫_C f(z) dz.
2. Show that if |f(z)| ≤ M on the image of a piecewise smooth curve γ of length L, then
|∫_γ f(z) dz| ≤ ML;
in particular, for a circle of radius R traced once, the bound is M·2πR.
3. Let γ(t) = z₀ + εe^{it}, t ∈ [0, 2π]. Show that
∫_γ (z − z₀)⁻¹ dz = 2πi.
4. Let γₙ(t) = z₁ + εe^{int}, t ∈ [0, 2π]. Show that
∫_{γₙ} (z − z₁)⁻¹ dz = 2nπi.
5. Let Ω = {z ∈ ℂ | z ≠ 0}. Use the result of Exercise 4 to show that the curves γₙ and γₘ are not homotopic in Ω if n ≠ m. Show that each γₙ is homotopic to γ₀ in ℂ, however.
6. Use Exercise 4 to show that there is no function f, defined for all z ≠ 0, such that f′(z) = z⁻¹, all z ≠ 0. Compare this to Corollary 1.7.
7. Let Ω be a disc with a point removed:
Ω = {z | |z − z₀| < R, z ≠ z₁},
and let
γ₀(t) = z₀ + re^{it},  t ∈ [0, 2π],
γ₁(t) = z₁ + εe^{it},  t ∈ [0, 2π].
Here |z₁ − z₀| < r < R and ε > 0 is chosen so that |γ₀(t) − z₁| > ε, all t. Construct a homotopy from γ₀ to γ₁.
8. Suppose Ω contains the square
{x + iy | 0 ≤ x, y ≤ 1},
suppose f is differentiable at each point of Ω, and let C be the boundary of the square, traced counterclockwise. Show that
∫_C f(z) dz = 0.
This extension of Corollary 2.3 is due to Goursat. (Hint: for each integer k > 0, divide the square into 4^k smaller squares with edges of length 2^{−k}. Let C_{k,1}, …, C_{k,4^k} be the boundaries of these smaller squares, with the counterclockwise direction. Then show that
∫_C f(z) dz = Σ_j ∫_{C_{k,j}} f(z) dz.
It follows that if
|∫_C f(z) dz| ≥ M > 0,
then for each k some small square contributes at least 4^{−k} M. Such squares may be chosen nested, shrinking to a point z. Since f is differentiable at z,
f(w) = f(z) + f′(z)(w − z) + r(w),
where
|r(w)(w − z)⁻¹| → 0 as w → z.
Therefore for each ε > 0 there is a δ > 0 such that if C₀ is the boundary of a square with sides of length h lying in the disc |w − z| < δ, then
|∫_{C₀} f(w) dw| = |∫_{C₀} r(w) dw| ≤ εh²,
since the integral of f(z) + f′(z)(w − z) over the closed curve C₀ is zero. This contradicts the lower bound when ε is small.)

The Cauchy integral formula: under the hypotheses below,
(1)  f(w) = (1/2πi) ∫_{γ₀} f(z)(z − w)⁻¹ dz.
We can find a piecewise smooth curve γ₁: [a, b] → Ω which traces out C_ε once in the counterclockwise direction and is homotopic to γ₀ in the region Ω with the point w removed. Granting this for the moment, let us derive (1). By Exercise 1 of §2, and Cauchy's Theorem applied to g(z) = f(z)(z − w)⁻¹, we have
∫_{γ₀} f(z)(z − w)⁻¹ dz = ∫_{C_ε} f(z)(z − w)⁻¹ dz,
so
(2)  ∫_{γ₀} f(z)(z − w)⁻¹ dz = ∫_{C_ε} f(w)(z − w)⁻¹ dz + ∫_{C_ε} [f(z) − f(w)](z − w)⁻¹ dz.
By Exercise 2 of §2, the first integral on the right in (2) equals 2πi f(w). Since f is differentiable, the integrand in the second integral on the right is bounded as ε → 0. But the integration takes place over a curve of length 2πε, so this integral converges to zero as ε → 0. Therefore (1) is true.
Finally, let us construct the curve γ₁ and the homotopy. For t ∈ [a, b], let γ₁(t) be the point at which C_ε and the line segment joining w to γ₀(t) intersect. Then for 0 ≤ s ≤ 1, let
Γ(t, s) = (1 − s)γ₀(t) + sγ₁(t);
each such point lies in Ω with w removed.
In particular, (1) holds in the form
(3)  f(w) = (1/2πi) ∫_C f(z)(z − w)⁻¹ dz,
where C is a circle centered at z₀, traced counterclockwise, and w is inside C.

Suppose f is holomorphic in the disc
{z | |z − z₀| < R}.
Then there is a unique power series such that
(4)  f(w) = Σ_{n=0}^{∞} aₙ(w − z₀)ⁿ,  |w − z₀| < R.
Proof. Suppose 0 < r < R, and let C be the circle of radius r centered at z₀, with the counterclockwise direction. If |w − z₀| < r and z ∈ C then
|(w − z₀)(z − z₀)⁻¹| = |w − z₀| r⁻¹ < 1.
Let
g_N(z) = f(z) Σ_{n=0}^{N} (w − z₀)ⁿ(z − z₀)^{−n−1}.
Then g_N converges uniformly on C, as N → ∞, to f(z)(z − w)⁻¹. Integrating over C and using (3), we get
f(w) = Σ_{n=0}^{∞} aₙ(w − z₀)ⁿ,  aₙ = (1/2πi) ∫_C f(z)(z − z₀)^{−n−1} dz.
This argument shows that the series exists and converges for |w − z₀| < r. The series is unique, since repeated differentiation shows that
(6)  aₙ = f⁽ⁿ⁾(z₀)/n!.
Since r < R was arbitrary, and since the series is unique, it follows that it converges for all |w − z₀| < R. □
Note that our two expressions for aₙ can be combined to give
(7)  f⁽ⁿ⁾(z₀) = (n!/2πi) ∫_C f(w)(w − z₀)^{−n−1} dw.
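Read in reverse, the coefficient formula aₙ = f⁽ⁿ⁾(z₀)/n! = (1/2πi)∫_C f(w)(w − z₀)^{−n−1} dw computes Taylor coefficients as contour integrals; on a circle it discretizes to a Fourier average. A numerical sketch of ours (function names are assumptions), using f = exp at z₀ = 0, where aₙ = 1/n!:

```python
import math
import numpy as np

def taylor_coeffs(f, z0, r, N, M=256):
    """a_n = (1/2pi i) * integral over the circle |w - z0| = r of f(w) (w-z0)^{-n-1} dw.
    Parametrizing w = z0 + r e^{it} reduces this to mean(f(w) e^{-int}) / r^n."""
    t = np.arange(M) * 2*np.pi / M
    w = z0 + r * np.exp(1j * t)
    vals = f(w)
    return [np.mean(vals * np.exp(-1j * n * t)) / r**n for n in range(N)]

coeffs = taylor_coeffs(np.exp, 0.0, 1.0, 6)
# coeffs[n] should be close to 1/n!
```

The trapezoid rule on a circle is spectrally accurate, so even M = 256 samples reproduce the coefficients essentially to machine precision.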
Corollary 3.5 (Liouville's Theorem). A function holomorphic and bounded in the whole plane is constant.
Proof. We are assuming that there is a constant M such that |f(z)| ≤ M, all z ∈ ℂ. It is sufficient to show that f′ ≡ 0. Given w ∈ ℂ and R > 0, let C be the circle with radius R centered at w. Then
f′(w) = (1/2πi) ∫_C f(z)(z − w)⁻² dz,
so
|f′(w)| ≤ (1/2π) · M R⁻² · 2πR = MR⁻¹.
Letting R → ∞ we get f′(w) = 0. □
Corollary 3.6 (Fundamental Theorem of Algebra). Any nonconstant polynomial has a complex root.
Proof. If p is a polynomial of degree m ≥ 1 with no root, then f = 1/p is holomorphic in the whole plane, and there are constants c > 0, R > 0 with
(8)  |p(z)| ≥ c|z|^m  if |z| ≥ R.
Thus f is bounded, hence constant by Corollary 3.5, a contradiction. □
Exercises

1. Verify the Cauchy Integral Formula in the form (1) by direct computation when f(z) = e^z and C is a circle.
2. Compute the power series expansion (4) in the following cases. (Hint: (6) is not always the simplest way to obtain the aₙ.)
(a) f(z) = sin z, z₀ = ¼π.
(b) f(z) = e^z, z₀ arbitrary.
3. Suppose f is holomorphic in a region containing the closed annulus bounded by the circles C₁ = {z | |z − z₀| = r} and C₂ = {z | |z − z₀| = R}, r < R, both traced counterclockwise, and suppose r < |w − z₀| < R. Show that
f(w) = (1/2πi) ∫_{C₂} f(z)(z − w)⁻¹ dz − (1/2πi) ∫_{C₁} f(z)(z − w)⁻¹ dz.
(Hint: choose a ∈ ℂ, |a| …)
4. Suppose f is holomorphic in Ω except at the points z₁, …, z_m, let C be a counterclockwise circle in Ω enclosing these points, and let C₁, …, C_m be small counterclockwise circles around them. Show that for w enclosed by C, w ≠ z_j,
f(w) = (1/2πi) ∫_C f(z)(z − w)⁻¹ dz − Σ_{j=1}^{m} (1/2πi) ∫_{C_j} f(z)(z − w)⁻¹ dz.
Lemma 4.1. Suppose f is holomorphic in Ω, z₀ ∈ Ω, and f is not constant near z₀. If m ≥ 1 is the smallest index with aₘ ≠ 0 in the expansion of f around z₀, then
(1)  f(z) = a₀ + aₘ(z − z₀)^m h(z),
where a₀ and aₘ are constants, aₘ ≠ 0, m ≥ 1, and h is holomorphic in Ω with h(z₀) = 1.
Proof. Let
(2)  h(z) = Σ_{n=m}^{∞} aₙ aₘ⁻¹(z − z₀)^{n−m}.
This function is holomorphic near z₀, and h(z₀) = 1. On the other hand, (1) defines a function h in Ω except at z₀, and the function so defined is holomorphic. Thus there is a single such function holomorphic throughout Ω. □
Our first theorem here is the Inverse Function Theorem for holomorphic
functions.
Theorem 4.2. Suppose f is holomorphic in an open set Ω, and suppose z₀ ∈ Ω, f′(z₀) ≠ 0. Let w₀ = f(z₀). Then there is an ε₁ > 0 and a holomorphic function g defined on the disc |w − w₀| < ε₁ such that g(w₀) = z₀ and f(g(w)) = w, |w − w₀| < ε₁.
To see how g might be constructed, let δ > 0 be small, let
γ₀(t) = z₀ + δe^{it},  γ(t) = f(γ₀(t)),  t ∈ [0, 2π],
and let C be the circle of radius δ around z₀. Assuming the truth of the theorem and assuming that γ enclosed w₁, we should have
g(w₁) = (1/2πi) ∫₀^{2π} g(γ(t)) γ′(t)[γ(t) − w₁]⁻¹ dt
= (1/2πi) ∫₀^{2π} γ₀(t) f′(γ₀(t)) γ₀′(t)[f(γ₀(t)) − w₁]⁻¹ dt,
or
(3)  g(w₁) = (1/2πi) ∫_C z f′(z)[f(z) − w₁]⁻¹ dz.
Our aim now is to use (3) to define g and show that it has the desired properties. First, note that (1) holds with m = 1. We may restrict δ still further, so that |h(z)| ≥ ½ for z ∈ C. This implies that f(z) ≠ w₀ if z ∈ C. Then we may choose ε > 0 so that
f(z) ≠ w₁  if |w₁ − w₀| < ε and z ∈ C.
With this choice of δ and ε, (3) defines a function g on the disc |w₁ − w₀| < ε. This function is holomorphic; in fact it may be differentiated under the integral sign.
Suppose
f(z₁) = w₁  and  |w₁ − w₀| < ε.
We can, and shall, assume that δ is chosen so small that f′(z) ≠ 0 when |z − z₀| < δ. Then
f(z) − w₁ = f(z) − f(z₁) = (z − z₁)k(z),
where k is holomorphic near z₁ and k(z₁) = f′(z₁) ≠ 0. The integrand in (3) is then z f′(z)[(z − z₁)k(z)]⁻¹, with residue z₁ f′(z₁) k(z₁)⁻¹ = z₁.
Thus g(f(z₁)) = z₁ for z₁ near z₀, and we have shown that f is 1-1 near z₀.
Also
1 = (g ∘ f)′(z₀) = g′(w₀) f′(z₀),
so g′(w₀) ≠ 0. Therefore g is 1-1 near w₀. We may take ε₀ > 0 so small that ε₀ ≤ ε, and so that g is 1-1 on the disc |w − w₀| < ε₀. We may also assume ε₁ ≤ ε₀ so small that
|w − w₀| < ε₁ implies
f(g(w)) = w.
As an application, consider nth roots. For w ≠ 0, define
w^{1/n} = exp((1/n) log w);
then
(w^{1/n})ⁿ = exp(n · (1/n) log w) = exp(log w) = w.
If z₀ⁿ = z₁ⁿ ≠ 0, then
n log z₀ = n log z₁ + 2mπi
for some integer m, so
z₀ = z₁ exp(2mπi n⁻¹).
Suppose f is holomorphic near z₀ and f − w₀ has a zero of order m ≥ 1 at z₀, w₀ = f(z₀). Then, for suitable ε, δ > 0, each w with
0 < |w − w₀| < ε
is attained by f at exactly m points z with 0 < |z − z₀| < δ.
Proof. By Lemma 4.1,
f(z) = w₀ + (z − z₀)^m h(z),
with h holomorphic near z₀, h(z₀) ≠ 0. Choosing a holomorphic mth root function g near h(z₀), we may write
f(z) = w₀ + [(z − z₀) g(h(z))]^m = w₀ + k(z)^m,
where k is holomorphic near z₀. Then k(z₀) = 0, k′(z₀) = g(h(z₀)) ≠ 0. For z near z₀, z ≠ z₀, we have
0 < |z − z₀| and k(z) ≠ 0,
and by Theorem 4.2 the map k is invertible near z₀; the equation f(z) = w becomes k(z)^m = w − w₀, which has exactly m solutions of modulus |w − w₀|^{1/m}.
The following corollary is called the open mapping property of holomorphic functions.
2. With f as in Exercise 1, show that g(z) = |f(z)| does not have a local minimum at z₀ unless f(z₀) = 0.
3. Use Exercise 2 to give another proof of the Fundamental Theorem of Algebra.
4. Suppose z = log w. Show that Re z = log |w|.
5. Suppose f is holomorphic near z₀ and f(z₀) ≠ 0. Show that log |f(z)| is harmonic near z₀.
6. Use Exercise 5 and the maximum principle for harmonic functions to give another proof of the Maximum Modulus Theorem.
7. Use the Cauchy integral formula (for a circle with center z₀) to give still another proof of the Maximum Modulus Theorem.
8. Use the Maximum Modulus Theorem to prove Corollary 4.4. (Hint: let w₀ = f(z₀) and let C be a small circle around z₀ such that f(z) ≠ w₀ if z ∈ C. Choose ε > 0 so that |f(z) − w₀| ≥ 2ε if z ∈ C. If |w − w₀| < ε, can (f(z) − w)⁻¹ be holomorphic inside C?)
9. A set Ω ⊂ ℂ is connected if for any points z₀, z₁ ∈ Ω there is a (continuous) curve γ: [a, b] → Ω with γ(a) = z₀, γ(b) = z₁. Suppose Ω is open and connected, and suppose f is holomorphic in Ω. Show that if f is identically zero in any nonempty open subset Ω₁ ⊂ Ω, then f ≡ 0 in Ω.
10. Let Ω be the union of two disjoint open discs. Show that Ω is not connected.
5. Isolated singularities

Suppose f is a function holomorphic in an open set Ω. A point z₀ is said to be an isolated singularity of f if z₀ ∉ Ω but every point sufficiently close to z₀ is in Ω. Precisely, there is a δ > 0 such that
z ∈ Ω if 0 < |z − z₀| < δ.
The basic fact is that f extends holomorphically across z₀ if and only if f is bounded near z₀:
(1)  |f(z)| ≤ M if 0 < |z − z₀| < δ.
Conversely, suppose (1) is true. Choose r with 0 < r < δ, and let C be the circle with center z₀ and radius r. Given w with 0 < |w − z₀| < r, choose ε so small that 0 < ε < |w − z₀|, and let C_ε be the circle with center z₀ and radius ε. By Exercise 5 of §3,
f(w) = (1/2πi) ∫_C f(z)(z − w)⁻¹ dz − (1/2πi) ∫_{C_ε} f(z)(z − w)⁻¹ dz.
Because of (1), the second integral on the right converges to zero as ε → 0, while the first does not depend on ε. Therefore
(2)  f(w) = (1/2πi) ∫_C f(z)(z − w)⁻¹ dz,  0 < |w − z₀| < r.
We may define f(z₀) by (2) with w = z₀, and then (2) will hold for all w, |w − z₀| < r. The resulting function is then holomorphic. □
An isolated singularity z₀ for a function f is said to be a pole of order n for f, where n is an integer ≥ 1, if f is of the form
(3)  f(z) = (z − z₀)⁻ⁿ g(z),
where g is holomorphic near z₀ and g(z₀) ≠ 0. A pole of order 1 is called a simple pole. Equivalently, z₀ is a pole of order n if and only if (z − z₀)ⁿ f(z) is bounded near z₀ while (z − z₀)^{n−1} f(z) is not.
Proof. It follows easily from the definition that if z₀ is a pole of order n the asserted consequences are true.
Conversely, suppose (z − z₀)ⁿ f(z) = g(z) is bounded near z₀. Then z₀ is an isolated singularity, so we may extend g to be defined at z₀ and holomorphic. We want to show that g(z₀) ≠ 0 if (z − z₀)^{n−1} f(z) is not bounded near z₀. But if g(z₀) = 0 then by Lemma 4.1,
g(z) = (z − z₀)^m h(z)
for some m ≥ 1 and some h holomorphic near z₀. But then (z − z₀)^{n−1} f(z) = (z − z₀)^{m−1} h(z) is bounded near z₀. □
Thus near a removable singularity or a pole, either f(z) → a, a finite, or |f(z)| → ∞, as z → z₀. This is most emphatically not true near an essential singularity (an isolated singularity which is neither removable nor a pole).
Theorem (the Casorati–Weierstrass theorem). Suppose z₀ is an essential singularity of f. Then for each a ∈ ℂ and each ε > 0 there is a z with
0 < |z − z₀| < ε  and  |f(z) − a| < ε.
Proof. Suppose the conclusion is not true. Then for some ε > 0 and some a ∈ ℂ we have
|f(z) − a| ≥ ε  where 0 < |z − z₀| < ε.
Therefore h(z) = (f(z) − a)⁻¹ is bounded near z₀. It follows that h can be extended so as to be defined at z₀ and holomorphic near z₀. Then for some m ≥ 0,
h(z) = (z − z₀)^m k(z),
where k is holomorphic near z₀ and k(z₀) ≠ 0. We have
f(z) = a + (z − z₀)⁻^m k(z)⁻¹,  0 < |z − z₀| < ε,
so z₀ is a removable singularity or a pole, a contradiction. □
Actually, much more is true. Picard proved that if z₀ is an isolated essential singularity for f, then for any ε > 0 and any a ∈ ℂ, with at most one exception, there is a z such that 0 < |z − z₀| < ε and f(z) = a. An example is f(z) = exp(1/z), z ≠ 0, which takes any value except zero in any disc around zero.
Isolated singularities occur naturally in operations with holomorphic functions. Suppose, for example, that f is holomorphic in Ω and z₀ ∈ Ω. If f(z₀) ≠ 0, then we know that f(z)⁻¹ is holomorphic near z₀. The function f is said to have a zero of order n (or multiplicity n) at z₀, n an integer ≥ 0, if
f⁽ᵏ⁾(z₀) = 0,  0 ≤ k < n,  f⁽ⁿ⁾(z₀) ≠ 0.
(In particular, f has a zero of order zero at z₀ if f(z₀) ≠ 0.) A zero of order one is called a simple zero.
Lemma 5.4. If f is holomorphic near z₀ and has a zero of order n at z₀, then f(z)⁻¹ has a pole of order n at z₀.
Proof. By Lemma 4.1,
(4)  f(z) = (z − z₀)ⁿ h(z),
where h is holomorphic near z₀ and h(z₀) ≠ 0, so f(z)⁻¹ = (z − z₀)⁻ⁿ h(z)⁻¹ with h(z)⁻¹ holomorphic near z₀. □
Suppose f and g are meromorphic in Ω. Then af, f + g, and fg are meromorphic in Ω. If Ω is connected and g ≢ 0 in Ω, then f/g is meromorphic in Ω.

Exercises

1. …
2. …
3. …
4. Suppose f has a pole of order n at z₀. Show that there are an R > 0 and a δ > 0 such that if |w| > R then there are exactly n points z such that
f(z) = w,  |z − z₀| < δ.
5. Suppose f is holomorphic in
{z | |z| > R},
and let
g(z) = f(1/z),
defined by substituting 1/z in the domain of f. Then f is said to be holomorphic at ∞ if 0 is a removable singularity for g. Similarly, ∞ is said to be a zero or pole of order n for f if 0 is a zero or pole of order n for g. Discuss the status of ∞ for the following functions:
(a) …
(b) …
(c) …
(d) …
A rational function is a quotient
f(z) = p(z)/q(z),
where p and q are polynomials, q ≢ 0. By Theorem 5.4, a rational function is meromorphic in the whole plane ℂ. (In fact it is also meromorphic at ∞; see Exercise 5 of §5.)
It is easy to see that sums, scalar multiples, and products of rational functions are rational functions. If f and g are rational functions and g ≢ 0, then f/g is a rational function. In particular, any function of the form
(1)  a₁(z − z₀)⁻¹ + a₂(z − z₀)⁻² + … + aₙ(z − z₀)⁻ⁿ
is rational; (1) can be written p((z − z₀)⁻¹), where p is a polynomial with p(0) = 0. It turns out that any rational function is the sum of a polynomial and rational functions of the form (1).
Theorem 6.1. Suppose f is a rational function with poles at the distinct points z₁, z₂, …, z_m and no other poles. There are unique polynomials p₀, p₁, …, p_m such that p_j(0) = 0 if j ≠ 0 and
f(z) = p₀(z) + Σ_{j=1}^{m} p_j((z − z_j)⁻¹).
Proof. Suppose z₀ is a pole of f of order r. Near z₀,
f(z) = (z − z₀)⁻ʳ h(z),  h(z) = Σ_{n=0}^{∞} aₙ(z − z₀)ⁿ.
Therefore
f(z) = a₀(z − z₀)⁻ʳ + a₁(z − z₀)^{1−r} + … + a_{r−1}(z − z₀)⁻¹ + k(z),
where k is holomorphic near z₀. Thus the singular part of f at each pole z_j has the form p_j((z − z_j)⁻¹) with p_j(0) = 0; subtracting these from f leaves a rational function with no poles, which must be a polynomial p₀. Uniqueness follows by examining limits, for example
lim_{z→∞} z⁻ᵏ p₀(z),
which determines the coefficients of p₀.
Near a pole z₀ of order r this expansion may be written
f(z) = Σ_{n=−r}^{∞} bₙ(z − z₀)ⁿ,  0 < |z − z₀| < δ,
some δ > 0. This generalization of the power series expansion of a holomorphic function is called a Laurent expansion. It is one case of a general result valid, in particular, near any isolated singularity.
Suppose f is holomorphic in the annulus r < |z − z₀| < R. Then there is a unique two-sided sequence (aₙ)_{−∞}^{∞} ⊂ ℂ such that
(4)  f(z) = Σ_{n=−∞}^{∞} aₙ(z − z₀)ⁿ,  r < |z − z₀| < R.
Proof. Given z in the annulus, choose r₁, R₁ with
r < r₁ < |z − z₀| < R₁ < R.
Let C₁ be the circle of radius r₁ and C₂ the circle of radius R₁ centered at z₀.
By Exercise 5 of 3,
.f
1
fez) = 2
TTl
C2
.f
1
f(w)(w  zo)1 dw  2
TTl
Cl
f(w)(w  zo)l dw
= f2(Z) + f1(Z).
Here f2 and f1 are defined by the respective integrals. We consider f2 as
being defined for Iz  zol < R1 and f1 as being defined for Iz  zol > r1.
Thenf2 is holomorphic and has the power series expansion
f2(Z) =
(5)
L:'" an(z 
n=O
zo)n,
'" (w L:
n=1
zo)n1(z  zo)n.
f1(Z) =
'" a_n(z L:
n=1
zo)n
where
a₋ₙ = (1/2πi) ∫_{C₁} f(w)(w − z₀)^{n−1} dw.
This proves the existence of the expansion (4). Uniqueness follows from the identities
∫_C (z − z₀)ⁿ dz = 0, n ≠ −1;  ∫_C (z − z₀)⁻¹ dz = 2πi,
valid for any circle C centered at z₀ within the annulus; these give
(7)  a₋₁ = (1/2πi) ∫_C f(z) dz
and, more generally,
(8)  aₘ = (1/2πi) ∫_C f(z)(z − z₀)^{−m−1} dz.
Suppose now that f is holomorphic in a punctured disc
0 < |z − z₀| < R,
some R > 0. The coefficient a₋₁ is called the residue of f at z₀. Equation (7) determines the residue by evaluating an integral; reversing the viewpoint, we may evaluate the integral if we can determine the residue. These observations are the basis for the "calculus of residues." The following theorem is sufficient for many applications.
Suppose f is holomorphic in Ω except for isolated singularities at z₁, …, z_m, with residues b₁, …, b_m, and suppose C is a closed curve in Ω enclosing each z_j once, counterclockwise. Then
∫_C f(z) dz = 2πi(b₁ + b₂ + … + b_m).
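The residue formula a₋₁ = (1/2πi)∫_C f(z) dz lends itself to direct numerical computation: discretize a small circle around each singularity. A sketch of ours (the helper name and the test function are assumptions), for f(z) = 1/(z(z − 1)), whose residues at 0 and 1 are −1 and +1:

```python
import numpy as np

def residue(f, z0, r=0.25, M=4096):
    """a_{-1} = (1/2pi i) * integral of f over a small circle around z0,
    approximated by the (spectrally accurate) trapezoid rule."""
    t = np.arange(M) * 2*np.pi / M
    z = z0 + r * np.exp(1j * t)
    dz = 1j * r * np.exp(1j * t)       # gamma'(t)
    # (1/2pi i) * sum f(z_k) dz_k * (2pi/M) simplifies to mean(f*dz)/i
    return np.mean(f(z) * dz) / 1j

f = lambda z: 1 / (z * (z - 1))
res_at_0 = residue(f, 0.0)   # expect -1
res_at_1 = residue(f, 1.0)   # expect +1
```

The circle radius must be small enough that no other singularity is enclosed; here r = 0.25 keeps the two poles separated.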
If f has a pole of order n at z₀, then near z₀
f(z) = (z − z₀)⁻ⁿ h(z) = (z − z₀)⁻ⁿ Σ_{m=0}^{∞} bₘ(z − z₀)ᵐ,
so the residue at z₀ is b_{n−1}.
As an example, consider
∫_{−∞}^{∞} (1 + t²)⁻¹ dt.
Let C_R, R > 0, be the square with vertices ±R and ±R + Ri. Let f(z) = (1 + z²)⁻¹. The integral of f over the three sides of C_R which do not lie on the real axis is easily seen to approach zero as R → ∞. Therefore
∫_{−∞}^{∞} (1 + t²)⁻¹ dt = lim_{R→∞} ∫_{−R}^{R} (1 + t²)⁻¹ dt = lim_{R→∞} ∫_{C_R} f(z) dz.
For R > 1 the only singularity of f enclosed by C_R is the simple pole at z = i; since 1 + z² = (z − i)(z + i), the residue there is (2i)⁻¹. Therefore
∫_{−∞}^{∞} (1 + t²)⁻¹ dt = 2πi · (2i)⁻¹ = π.
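The value π can be confirmed by brute force (a numerical sketch of ours, not part of the text): truncate the real line generously and sum.

```python
import numpy as np

# Riemann-sum check of: integral over R of (1 + t^2)^{-1} dt = pi.
# The tails beyond |t| = 1000 contribute about 2 * arctan(1/1000) ~ 0.002,
# so a generous truncation plus a fine grid reproduces pi to ~2 decimals.
t = np.linspace(-1000.0, 1000.0, 200001)
dt = t[1] - t[0]
approx = np.sum(1 / (1 + t**2)) * dt
```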
Exercises

1. Compute the partial fractions decomposition of
….
2. Compute
∫₀^∞ t²(…)⁻¹ dt.
3. Compute
∫₀^∞ (… + 1)⁻¹ dt.
4. Show that
∫₀^∞ t⁻¹ sin t dt = ½π.
(Hint: the integrand is even, and it is the imaginary part of f(z) = z⁻¹e^{iz}. Integrate f over rectangles lying in the half-plane Im z ≥ 0, but with the segment −ε < t < ε replaced by a semicircle of radius ε in the same half-plane, and let the rectangles grow long in proportion to their height.)
5. Show that a rational function is holomorphic at ∞ or has a pole at ∞.
6. Show that any function which is meromorphic in the whole plane and is holomorphic at ∞, or has a pole there, is a rational function.
7. Show that if Re z > 0, the integral
Γ(z) = ∫₀^∞ t^{z−1} e^{−t} dt
exists and is a holomorphic function of z for Re z > 0. This is called the Gamma function.
8. Integrate by parts to show that
Γ(z + 1) = zΓ(z),  Re z > 0.
9. Define Γ(z) for Re z > −1, z ≠ 0, by
Γ(z) = z⁻¹ Γ(z + 1).
Show that Γ is meromorphic for Re z > −1, with a simple pole at zero.
10. Use the procedure of Exercise 9 to extend Γ so as to be meromorphic in the whole plane, with simple poles at 0, −1, −2, ….
11. Show inductively that the residue of Γ at −n is (−1)ⁿ(n!)⁻¹.
12. Is Γ a rational function?
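Exercises 8 and 11 can be spot-checked numerically with the standard-library gamma function, which extends Γ to negative non-integers. (A sketch of ours; the sample points and the tiny offset ε used to probe the poles are assumptions.)

```python
import math

# Functional equation Gamma(z+1) = z * Gamma(z), checked at some real z > 0
func_eq_err = max(abs(math.gamma(x + 1) - x * math.gamma(x))
                  for x in (0.5, 1.5, 3.2))

# Near z = -n, Gamma(z) behaves like (-1)^n / (n! * (z + n)); so
# eps * Gamma(-n + eps) approaches the residue (-1)^n / n! as eps -> 0.
eps = 1e-7
residues = [eps * math.gamma(-n + eps) for n in range(4)]
# residues should be close to 1, -1, 1/2, -1/6
```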
If f is holomorphic in the disc |z − z₀| < R, then setting
f₁(w) = f(z₀ + Rw)
we get a function f₁ holomorphic for |w| < 1. Similarly, if f has an isolated singularity at z₀, we may transform it to a function with an isolated singularity at zero and holomorphic elsewhere in the unit disc. Since f can be recovered from f₁ by
f(z) = f₁(R⁻¹(z − z₀)),
all the information about local behavior can be deduced from study of f₁ instead.
Suppose f is holomorphic in D. Then the function
g(r, θ) = f(re^{iθ})
is smooth in each variable. Indeed
h⁻¹[g(r + h, θ) − g(r, θ)] → e^{iθ} f′(re^{iθ})
as h → 0, so
∂g/∂r = e^{iθ} f′(re^{iθ}).
Similarly,
h⁻¹[g(r, θ + h) − g(r, θ)] → ire^{iθ} f′(re^{iθ}),
so
(1)  ∂g/∂θ = ir ∂g/∂r.
Now let g_r(θ) = g(r, θ), 0 ≤ r < 1. Since g_r is continuous, periodic, and continuously differentiable as a function of θ, it is the sum of its Fourier series:
(2)  g_r(θ) = Σ_{n=−∞}^{∞} aₙ(r) e^{inθ},
aₙ(r) = (1/2π) ∫₀^{2π} g(r, θ) e^{−inθ} dθ.
It follows that aₙ(r) is continuous for 0 ≤ r < 1 and differentiable for 0 < r < 1, with derivative
(3)  aₙ′(r) = (1/2π) ∫₀^{2π} (∂g/∂r)(r, θ) e^{−inθ} dθ = n r⁻¹ aₙ(r),  n ≠ 0,
and a₀′(r) = 0. Thus a₀ is constant. The equation (3) may be solved for aₙ as follows, n ≠ 0. The real and imaginary parts of aₙ are each real solutions of
u′(r) = r⁻¹ n u(r).
On any interval where u(r) ≠ 0 this is equivalent to
(log |u(r)|)′ = n/r,
so u(r) = c rⁿ. Since aₙ(r) is continuous at r = 0, it follows that aₙ(r) = aₙ rⁿ with aₙ constant for n ≥ 0, while aₙ(r) = 0 for n < 0. Therefore
f(re^{iθ}) = Σ_{n=0}^{∞} aₙ rⁿ e^{inθ},  0 ≤ r < 1.
Thus
f(z) = Σ_{n=0}^{∞} aₙ zⁿ,  |z| < 1.
Formally, then,
(5)  f(re^{iθ}) = (g₁ * Q_r)(θ),
where Q_r is the periodic distribution with Fourier coefficients bₙ = rⁿ, n ≥ 0, and bₙ = 0, n < 0. Then
(6)  Q_r(θ) = Σ_{n=0}^{∞} rⁿ e^{inθ} = Σ_{n=0}^{∞} (re^{iθ})ⁿ = (1 − re^{iθ})⁻¹.
Equation (5) can be written
f(w) = (1/2πi) ∫_C f(z)(z − w)⁻¹ dz,
where C is the unit circle. Thus (5) may be regarded as a version of the Cauchy integral formula.
Note that Q_r is a smooth periodic function when r < 1. Therefore (5) defines a function in the disc if g₁ is only assumed to be a periodic distribution. In terms of the Fourier coefficients, if g₁ has Fourier coefficients (aₙ)_{−∞}^{∞} then those of g_r are (aₙ(r))_{−∞}^{∞}, where aₙ(r) = 0, n < 0, and aₙ(r) = aₙ rⁿ, n ≥ 0. These observations and the results of §1 and §2 of Chapter 5 lead to the following theorem.
Theorem 7.1. Suppose F is a periodic distribution with Fourier coefficients (aₙ)_{−∞}^{∞}, where aₙ = 0 for n < 0. Then
(8)  f(re^{iθ}) = (F * Q_r)(θ)
defines a function f holomorphic in the unit disc,
(9)  f(z) = Σ_{n=0}^{∞} aₙ zⁿ,  |z| < 1,
and F is the boundary value of f in the sense above. Conversely, if f is given by a power series (9) whose coefficients satisfy a growth condition
(10)  |aₙ| ≤ C(1 + n)ᵏ,  n > 0,
for some constants C, k, then there is a distribution F such that (8) holds. If we require that the Fourier coefficients of F with negative indices vanish, then F is unique and is the boundary value of f in the sense above.
Condition (10) is not necessarily satisfied by the coefficients of a power series (9) converging in the disc. An example is
aₙ = …,  n > 0.
Thus the condition (10) specifies a subset of the set of all holomorphic functions in the disc. This set of holomorphic functions is a vector space. Theorem 7.1 shows that this space corresponds naturally to the subspace of 𝒫′ consisting of distributions whose negative Fourier coefficients all vanish.
Recall that F ∈ 𝒫′ is in the Hilbert space L² if and only if its Fourier coefficients (aₙ)_{−∞}^{∞} satisfy
(11)  Σ_{n=−∞}^{∞} |aₙ|² < ∞.
For such distributions there is a result exactly like the preceding theorem.
Theorem 7.2. Suppose F ∈ L² has Fourier coefficients (aₙ)_{−∞}^{∞} with aₙ = 0, n < 0. Then the function
(12)  f(z) = Σ_{n=0}^{∞} aₙ zⁿ,  |z| < 1,
has F as boundary value, with
(13)  sup_{0≤r<1} (1/2π) ∫₀^{2π} |f(re^{iθ})|² dθ = ‖F‖² < ∞.
Conversely, suppose f is defined in the unit disc by (12). Suppose either that (13) is true or that
(14)  sup_{0≤r<1} ∫₀^{2π} |f(re^{iθ})|² dθ < ∞.
Then both (13) and (14) hold, and the boundary value of f is a distribution F ∈ L².
Proof. The first part of the theorem follows from Theorem 7.1 and the fact that (11) is a necessary condition for F to be in L². The second part of the theorem is based on the identity
(15)  (1/2π) ∫₀^{2π} |f(re^{iθ})|² dθ = Σ_{n=0}^{∞} |aₙ|² r²ⁿ,  0 ≤ r < 1,
which is true because the Fourier coefficients of f_r are aₙrⁿ for n ≥ 0 and zero for n < 0. If (13) is true then (14) follows. Conversely, if (13) is false, then (15) shows that the integrals in (14) will increase to ∞ as r → 1. Thus (13) and (14) are equivalent. By Theorem 7.1, if (13) holds then f has a distribution F as boundary value. The aₙ are the Fourier coefficients of F, so (13) implies F ∈ L². □
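Identity (15) is easy to test numerically: sample f on a circle of radius r, average |f|², and compare with the coefficient sum. (A sketch of ours; the coefficient choice aₙ = 1/(n + 1), a square-summable sequence, is an assumption.)

```python
import numpy as np

N = 50
a = 1.0 / (np.arange(N) + 1)              # a_n = 1/(n+1), sum |a_n|^2 < infinity
r = 0.9
theta = np.arange(4096) * 2*np.pi / 4096  # uniform grid on [0, 2*pi)

# f(re^{i theta}) = sum_{n<N} a_n (r e^{i theta})^n  (a polynomial, so exact)
z = r * np.exp(1j * theta)
f_vals = np.polyval(a[::-1], z)           # polyval wants highest degree first

lhs = np.mean(np.abs(f_vals)**2)                  # (1/2pi) * integral of |f|^2
rhs = np.sum(a**2 * r**(2*np.arange(N)))          # sum |a_n|^2 r^{2n}
```

Since |f|² here is a trigonometric polynomial of degree 98 and the grid has 4096 points, the sample mean equals the integral exactly (no quadrature error), making the comparison clean.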
The set of holomorphic functions in the disc which satisfy (14) is a vector space which can be identified with the closed subspace of L² consisting of distributions whose negative Fourier coefficients are all zero. Looked at either way this is a Hilbert space, usually denoted by H² or H²(D).
Exercises

1. Verify that
f(z) = Σ_{n=1}^{∞} … zⁿ
converges for |z| < 1, …
2. … Use g_r(θ) = f(re^{iθ}) to deduce
Σ_{n=m}^{∞} … aₙ zⁿ,  |z| < 1.
Carry …
Chapter 7
The Laplace Transform

1. Introduction

A periodic function (or distribution) u with period 2π has a representation
(1)  u(x) = Σ_{n=−∞}^{∞} aₙ e^{inx},
where
(2)  aₙ = (1/2π) ∫₀^{2π} u(x) e^{−inx} dx.
Of course the particular exponential functions which occur here are precisely those which are periodic (period 2π). If u: ℝ → ℂ is a function which is not periodic, then there is no such natural way to single out a sequence of exponential functions for a representation like (1). One might suspect that (1) would be replaced by a continuous sum, i.e., an integral. This suspicion is correct. To derive an appropriate formula we start with the analogue of (2). Let
(2). Let
(3)
g(z) =
i:
u(t)eIlt dt,
when the integral exists. (Of course it may not exist for any z E C unless
restrictions are placed on u.)
If we are interested in functions u defined only on the half-line [0, ∞), we may extend such a function to be zero on (−∞, 0]. Then (3) for the extended function is equivalent to
(4)  g(z) = ∫₀^∞ u(t) e^{−zt} dt.
If u is bounded and continuous, then the integral (4) will exist for each z ∈ ℂ which has positive real part. More generally, if a ∈ ℝ and e^{−at}u(t) is continuous and bounded for t > 0, then the integral (4) exists for Re z > a. Moreover, the function
(5)  g = Lu
is holomorphic in this half-plane, with
g′(z) = −∫₀^∞ t u(t) e^{−zt} dt,  Re z > a.
In particular, let
u_w(t) = e^{wt}, t > 0;  u_w(t) = 0, t ≤ 0,  w ∈ ℂ.
Then for Re z > Re w,
Lu_w(z) = ∫₀^∞ e^{(w−z)t} dt = (z − w)⁻¹.
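The identity Lu_w(z) = (z − w)⁻¹ is easy to confirm numerically, truncating the integral far enough out that the tail is negligible. (A sketch of ours; the particular w, z, and truncation point are assumptions.)

```python
import numpy as np

# Check: integral from 0 to infinity of e^{(w-z)t} dt = 1/(z - w), Re z > Re w.
# Here Re(z - w) = 1.5, so truncating at T = 60 leaves a tail of order e^{-90}.
w = -0.5 + 2.0j
z = 1.0 + 1.0j
t = np.linspace(0.0, 60.0, 600001)
dt = t[1] - t[0]
y = np.exp((w - z) * t)

approx = dt * (y[0]/2 + y[1:-1].sum() + y[-1]/2)   # trapezoid rule
exact = 1 / (z - w)
```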
The operator L defined by (3) or (4) and (5) assigns, to certain functions on ℝ, functions holomorphic in half-planes in ℂ. This operator is clearly linear. We would like to invert it: given g = Lu, find u. Let us proceed formally, with no attention to convergence. Since Lu is holomorphic in some half-plane Re z > a, it is natural to invoke the Cauchy integral formula. Given z with Re z > a, let C be the vertical line {w | Re w = a}, traced upward, so that z lies to the right of C. Formally,
g(z) = Lu(z) = (1/2πi) ∫_C g(w)(z − w)⁻¹ dw = (1/2πi) ∫_C g(w) Lu_w(z) dw.
Since L is linear, this suggests
(6)  u(t) = (1/2πi) ∫_C g(w) e^{wt} dw,
or
u(t) = (1/2π) ∫_{−∞}^{∞} g(a + is) e^{(a+is)t} ds.
Thus (3) or (4) and (6) are our analogues of (2) and (1) for periodic functions.
It is convenient for applications to interpret (3) and (6) for an appropriate class of distributions F. Thus if F is a continuous linear functional on a suitable space ℒ of functions, we interpret (3) as the action of F on the exponential function e₋_z. The space ℒ will be chosen in such a way that each such continuous linear functional F can be extended to act on all the functions e_z for z in some half-plane Re z > a. Then the function LF will be holomorphic in this half-plane. We shall characterize those functions g such that g = LF for some F, and give an appropriate version of the inversion formula (6).
Integration by parts gives, formally,
(7)  [Lu′](z) = z Lu(z).
Let p(D) be a constant coefficient differential operator,
p(D) = a_m D^m + … + a₀.
Then formally
(8)  [Lp(D)u](z) = p(z) Lu(z).
Thus to solve the differential equation
p(D)u = v
we want
(9)  p(z) Lu(z) = Lv(z).
As we shall see, all these purely formal manipulations can be justified.
Exercises

1. Show that the inversion formula (6) is valid for the functions u_w, i.e., if a > Re w and C = {z | Re z = a} then
(1/2πi) ∫_C (z − w)⁻¹ e^{zt} dz = e^{wt}, t > 0;
 = 0, t < 0.
2. Integrate by parts to show that
∫₀^∞ e^{−zt} u′(t) dt = z ∫₀^∞ e^{−zt} u(t) dt − u(0).
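Exercise 2 can be verified numerically for a concrete u; with u(t) = e⁻ᵗ both sides reduce to −(z + 1)⁻¹. (A sketch of ours; the choice of u and of z are assumptions.)

```python
import numpy as np

z = 2.0 + 0.5j
t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]

def trapezoid(y):
    # composite trapezoid rule on the fixed grid t
    return dt * (y[0]/2 + y[1:-1].sum() + y[-1]/2)

u  = np.exp(-t)     # u(t) = e^{-t}, so u(0) = 1
du = -np.exp(-t)    # u'(t)

lhs = trapezoid(np.exp(-z*t) * du)            # L[u'](z)
rhs = z * trapezoid(np.exp(-z*t) * u) - 1.0   # z * Lu(z) - u(0)
```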
2. The space ℒ
Let ℒ be the set of smooth complex-valued functions u on ℝ such that for each integer k ≥ 0 and each a ∈ ℝ,
(1)  e^{at} Dᵏu(t) → 0 as t → +∞.
This is equivalent to the requirement that for each integer k ≥ 0, each a ∈ ℝ, and each M ∈ ℝ the function
(2)  e^{at} Dᵏu(t)
is bounded on the interval [M, ∞). In fact (1) implies that (2) is bounded on every such interval. Conversely if (2) is bounded on [M, ∞) then (1) holds when a is replaced by any smaller number a′ < a.
It follows that if u ∈ ℒ then for each k, a, M we have that
(3)  |u|_{k,a,M} = sup{e^{at} |Dᵏu(t)| : t ≥ M}
is finite. Conversely, if (3) is finite for every integer k ≥ 0 and every a ∈ ℝ, M ∈ ℝ, then u ∈ ℒ.
The set of functions ℒ is a vector space: it is easily checked that if u, v ∈ ℒ and b ∈ ℂ then bu and u + v are in ℒ. Moreover,
(4)  |bu|_{k,a,M} = |b| |u|_{k,a,M},
(5)  |u + v|_{k,a,M} ≤ |u|_{k,a,M} + |v|_{k,a,M}.
We say that a sequence (u_n)₁^∞ ⊂ ℒ converges to u in the sense of ℒ if for each k, a, M,
|u_n − u|_{k,a,M} → 0 as n → ∞.
If so, we write
u_n → u (ℒ).
The sequence (u_n)₁^∞ ⊂ ℒ is said to be a Cauchy sequence in the sense of ℒ if for each k, a, M,
|u_n − u_m|_{k,a,M} → 0 as m, n → ∞.
Theorem 2.1. ℒ is a vector space. It is complete with respect to convergence as defined by the expressions (3); i.e., if (u_n)₁^∞ ⊂ ℒ is a Cauchy sequence in the sense of ℒ, then there is a unique u ∈ ℒ such that u_n → u (ℒ).
Proof. Let (u_n)₁^∞ be a Cauchy sequence in the sense of ℒ. Taking (3) with a = 0, we see that each sequence of derivatives (D^k u_n)₁^∞ is a uniform Cauchy sequence on each interval [M, ∞). It follows, by Theorem 4.1 of Chapter 2, that there is a unique smooth function u such that D^k u_n → D^k u uniformly on each [M, ∞). Now let a be arbitrary. Since (e^{at}D^k u_n)₁^∞ is also a uniform Cauchy sequence on [M, ∞), it follows that this sequence converges uniformly to e^{at}D^k u. Thus u ∈ ℒ and u_n → u (ℒ). □
It follows immediately from the definition of ℒ that certain operations on functions in ℒ give functions in ℒ. In particular, this is true of differentiation:
Du ∈ ℒ if u ∈ ℒ;
translation:
T_s u ∈ ℒ if u ∈ ℒ, where T_s u(t) = u(t − s);
and complex conjugation:
u* ∈ ℒ if u ∈ ℒ, where u*(t) = u(t)*.
It follows that if u ∈ ℒ, so are the real and imaginary parts:
Re u = ½(u + u*),  Im u = (1/2i)(u − u*).
The same is true of integration. For u ∈ ℒ, set
S₊u(t) = −∫_t^∞ u(s) ds.
In fact DS₊u = u, so
(6)  D^k S₊u = D^{k−1}u,  k ≥ 1.
For k = 0, t ≥ M, and a > 0,
|S₊u(t)| ≤ ∫_t^∞ |u(s)| ds ≤ |u|_{0,a,M} ∫_t^∞ e^{-as} ds = a^{-1}|u|_{0,a,M} e^{-at}.
Thus
(7)  |S₊u|_{0,a,M} ≤ a^{-1}|u|_{0,a,M},  a > 0.
The finiteness of |S₊u|_{0,a,M} for a ≤ 0 follows from finiteness for any a > 0. Thus S₊u ∈ ℒ.
Lemma 2.2. The operations of differentiation, translation, complex conjugation, and integration are continuous from ℒ to ℒ with respect to convergence in the sense of ℒ. Moreover, if u ∈ ℒ then the difference quotient converges:
s^{-1}[T_s u − u] → −Du (ℒ)  as s → 0.

Proof. These statements chiefly involve routine verifications. We shall prove the final statement. Given an integer k ≥ 0, let v = D^k u. Suppose t ≥ M and 0 < |s| ≤ 1. The Mean Value Theorem implies
s^{-1}[T_s v(t) − v(t)] = −Dv(r),
where |t − r| ≤ |s|. Then
(8)  |s^{-1}[T_s v(t) − v(t)] + Dv(t)| ≤ sup {|Dv(t) − Dv(r)| : |t − r| ≤ |s|},  t ≥ M.
The left side of (8) converges to zero as s → 0, uniformly on bounded intervals. It follows that s^{-1}[T_s u − u] → −Du (ℒ). □
The functions e_z,
(9)  e_z(t) = e^{-zt},  t ∈ ℝ,
are not in ℒ for any z ∈ ℂ. However, they may be approximated by functions from ℒ in a suitable sense.

Lemma 2.3. Suppose z ∈ ℂ and a < Re z. Then there is a sequence (u_n)₁^∞ ⊂ ℒ such that for each k and each M,
|u_n − e_z|_{k,a,M} → 0 as n → ∞.

Proof. Let φ: ℝ → ℝ be a smooth function such that φ(t) = 1 for t ≤ 1 and φ(t) = 0 for t ≥ 2, and let
φ_n(t) = φ(t/n),  u_n = φ_n e_z.
Then u_n is smooth and vanishes for t ≥ 2n, so u_n ∈ ℒ. We shall consider in detail only the (typical) case k = 1.
Now both 1 − φ_n(t) and Dφ_n(t) are bounded independent of t and n, and vanish except on the interval [n, 2n]. Since a < Re z, it follows that
|u_n − e_z|_{1,a,M} ≤ c(1 + |z|) sup {e^{(a − Re z)t} : t ≥ n} → 0 as n → ∞. □

Lemma 2.4. Suppose 0 ≤ k ≤ k′, a ≤ a′, and M ≥ M′. Then there is a constant c such that
(10)  |u|_{k,a,M} ≤ c|u|_{k′,a′,M′},  all u ∈ ℒ.
Proof. It is sufficient to prove (10) in all cases when two of the three indices are the same. The case k = k′, a = a′ is trivial. The case k = k′, M = M′ is straightforward. Thus, suppose a = a′ > 0 and M = M′. Let k′ = k + j and set v = D^{k′}u. We may obtain D^k u from v by repeated integrations:
D^k u = (S₊)^j v.
We use (7) repeatedly to get
|u|_{k,a,M} = |D^k u|_{0,a,M} ≤ a^{-j}|v|_{0,a,M} = a^{-j}|D^{k′}u|_{0,a,M} = a^{-j}|u|_{k′,a,M}. □
If u ∈ ℒ, we set
|u|_k = |u|_{k,k,−k} = sup {|e^{kt}D^k u(t)| : t ≥ −k}.
By Lemma 2.4, each seminorm |u|_{k′,a′,M′} is dominated by a constant multiple of some |u|_k. Thus:

Corollary 2.5. u_n → u (ℒ) if and only if |u_n − u|_k → 0 as n → ∞, for each k.
3. Show that u ∈ ℒ, z ∈ ℂ implies e_z u ∈ ℒ.
4. Complete the proof of Lemma 2.2.
A linear functional on ℒ is a function F: ℒ → ℂ such that
F(au) = aF(u),  F(u + v) = F(u) + F(v).
It is said to be continuous if
u_n → u (ℒ) implies F(u_n) → F(u).
The set of all continuous linear functionals on ℒ will be denoted ℒ′. An element F ∈ ℒ′ will be called a distribution of type ℒ′, or simply a distribution. An example is the δ-distribution defined by
(1)  δ(u) = u(0).
Other examples come from functions: suppose f: ℝ → ℂ is continuous and for some a, M ∈ ℝ,
(3)  f(t) = 0,  t < M,
(4)  e^{-at}f(t) is bounded.
We may define F_f: ℒ → ℂ by
(5)  F_f(u) = ∫_{-∞}^∞ f(t)u(t) dt.
If a′ > a, then
|F_f(u)| ≤ c|u|_{0,a′,M},  u ∈ ℒ,
so F_f is continuous.
One checks easily that
(6)  T_sF_f(u) = F_{T_sf}(u),  F_f*(u) = F_f(u*)*,  u ∈ ℒ.
Similarly, we shall define the translates and the complex conjugate of an arbitrary F ∈ ℒ′ by
(7)  T_sF(u) = F(T_{-s}u),  u ∈ ℒ,
(8)  F*(u) = F(u*)*,  u ∈ ℒ.
The real and imaginary parts are defined by
(9)  Re F = ½(F + F*),
(10)  Im F = (1/2i)(F − F*).
If f is continuously differentiable, integration by parts gives
(11)  F_{Df}(u) = −F_f(Du),  u ∈ ℒ.
Accordingly, we define the derivative of an arbitrary F ∈ ℒ′ by
(12)  DF(u) = −F(Du),  u ∈ ℒ.
Proposition 3.1. The set ℒ′ is a vector space. If F ∈ ℒ′, then the translates T_sF, the complex conjugate F*, and the derivatives D^kF are in ℒ′.

Proof. All these statements follow easily from the definitions and the continuity of the operations in ℒ; see Lemma 2.2. For example, D^kF: ℒ → ℂ is certainly linear. If u_n → u (ℒ), then
D^k u_n → D^k u (ℒ),
so
D^kF(u_n) = (−1)^kF(D^k u_n) → (−1)^kF(D^k u) = D^kF(u). □
We say that a sequence (F_n)₁^∞ ⊂ ℒ′ converges to F in the sense of ℒ′ if
F_n(u) → F(u),  each u ∈ ℒ.
We denote this by
F_n → F (ℒ′).
The operations defined above are continuous with respect to this notion of convergence.

Proposition 3.2. Suppose F_n → F (ℒ′), G_n → G (ℒ′), and suppose a ∈ ℂ, s ∈ ℝ. Then
F_n + G_n → F + G (ℒ′),
aF_n → aF (ℒ′),
T_sF_n → T_sF (ℒ′),
F_n* → F* (ℒ′),
D^kF_n → D^kF (ℒ′).

Moreover, if F ∈ ℒ′, the difference quotients converge: s^{-1}[T_{-s}F − F] → DF (ℒ′) as s → 0, since by Lemma 2.2
F(s^{-1}[T_s u − u]) → F(−Du) = (DF)(u).
The following theorem gives a very useful necessary and sufficient condition for a linear functional on ℒ to be continuous.

Theorem 3.3. Suppose F: ℒ → ℂ is linear. Then F is continuous if and only if there are an integer k ≥ 0 and constants a, M, K ∈ ℝ such that
(13)  |F(u)| ≤ K|u|_{k,a,M},  all u ∈ ℒ.

Proof. Suppose (13) holds. If u_n → u (ℒ), then
|F(u_n) − F(u)| ≤ K|u_n − u|_{k,a,M} → 0.
Thus F is continuous.
To prove the converse, suppose that (13) is not true for any k, a, M, K. In particular, for each positive integer k we may find a v_k ∈ ℒ such that
|F(v_k)| ≥ k|v_k|_k.
Let u_k ∈ ℒ be u_k = (k|v_k|_k)^{-1}v_k. Then
|u_k|_k = k^{-1},
while |F(u_k)| ≥ 1. It follows from Lemma 2.4 that u_k → 0 (ℒ); since F(u_k) does not converge to 0, F is not continuous. □
If F ∈ ℒ′, then by Theorem 3.3 there is M ∈ ℝ such that (13) holds; if u ∈ ℒ vanishes on [M, ∞), then |u|_{k,a,M} = 0 and therefore F(u) = 0. We say that the support of F is contained in [M, ∞). The smallest integer k for which an estimate of the form (13) holds is called the order of F.
Exercises
1. Compute the following in the case F = δ:
T_sF(u),  F*(u),  Re F,  Im F,  DF + zF.
Suppose F ∈ ℒ′ is of order k, so that
(2)  |F(u)| ≤ K|u|_{k,a,M},  all u ∈ ℒ.
Suppose first that F = F_f, where supp(f) ⊂ [M, ∞), and let g be the integral of f from the left: g(t) = ∫_{-∞}^t f(s) ds. If u ∈ ℒ and v = S₊u, then Dv = u. Integration by parts gives
F_g(u) = ∫ g(t)Dv(t) dt = −∫ f(t)v(t) dt = −F_f(S₊u).
For an arbitrary F ∈ ℒ′ we define the integral of F (from the left), denoted S₋F, by
S₋F(u) = −F(S₊u),  u ∈ ℒ.

Proposition 4.1. If F ∈ ℒ′, then
(3)  D(S₋F) = F = S₋(DF).
If F is of order k ≥ 1, then S₋F is of order k − 1.
Suppose f is smooth, with supp(f) ⊂ [M, ∞). Then
f(t) = ∫_{-∞}^t Df(s) ds = ∫_{-∞}^t ∫_{-∞}^s D²f(r) dr ds
= ∫_{-∞}^t ∫_r^t D²f(r) ds dr = ∫_{-∞}^t (t − r)D²f(r) dr.
Let
(4)  h(t) = |t|,  t ≤ 0;  h(t) = 0,  t > 0.
Then
f(t) = ∫_{-∞}^∞ h(r − t)D²f(r) dr,
or
(5)  f(t) = F_{D²f}(T_t h).
We would like to interpret (5) as the action of the distribution defined by D²f on the function T_t h; however T_t h is not in ℒ. Nevertheless, h can be approximated by elements of ℒ.
Lemma 4.2. Let h be defined by (4). There is a sequence (h_n)₁^∞ ⊂ ℒ such that for each a ∈ ℝ and M ∈ ℝ,
(6)  |h_n − h|_{0,a,M} → 0 as n → ∞.

Proof. There is a smooth function φ: ℝ → ℝ such that 0 ≤ φ(t) ≤ 1 for all t and φ(t) = 1, t ≤ −2, φ(t) = 0, t ≥ −1; see §8 of Chapter 2. Let
h_n(t) = φ(nt)h(t) = φ_n(t)h(t).
Then h_n is smooth, since φ_n is zero in an interval around 0 and h is smooth except at 0. Also h_n(t) = h(t) except in the interval (−2/n, 0), and there
|h_n(t) − h(t)| ≤ |h(t)| = |t| ≤ 2/n,  t ∈ (−2/n, 0). □
Theorem 4.3. Suppose F ∈ ℒ′ is of order ≤ k − 2, where k ≥ 2. Then there is a continuous function f such that
(7)  F = D^k(F_f).

Proof. We may suppose a > 0, so that
(8)  |F(u)| ≤ K|u|_{k−2,a,M},  all u ∈ ℒ.
Consider first the case of order 0, k = 2. Let (h_n)₁^∞ be as in Lemma 4.2, and set
(9)  f(s) = lim_{n→∞} F(T_s h_n).
But T_s h vanishes on [M, ∞) if s ≤ M, so
(10)  |T_s h|_{0,a,M} = 0,  if s ≤ M,
(11)  |T_s h|_{0,a,M} ≤ (s − M)e^{as},  if s > M.
These estimates, together with (6) and (8), show that the limit (9) exists, uniformly on bounded intervals, and that f is continuous. Thus
supp(f) ⊂ [M, ∞),  |f(s)| ≤ ce^{a′s},  all s ∈ ℝ,
for any a′ > a. We must show that D²F_f = F.
For u ∈ ℒ, let
v_n(t) = ∫_{-∞}^∞ T_s h_n(t)D²u(s) ds = ∫_{-∞}^∞ h_n(t − s)D²u(s) ds.
For t ≥ M and a > 0,
|v_n(t)| ≤ c ∫_t^∞ |t − s|e^{-as} ds ≤ c′e^{-at},
and similar estimates hold for the derivatives, so v_n ∈ ℒ. Now let
v(t) = ∫_{-∞}^∞ h(t − s)D²u(s) ds.
Then v_n → v (ℒ). Therefore
D²F_f(u) = F_f(D²u) = ∫ f(s)D²u(s) ds = lim_{n→∞} F(v_n) = F(v).
But
v(t) = ∫_t^∞ (s − t)D²u(s) ds = u(t).
Thus D²F_f = F. In the general case, when F has order ≤ k − 2, let G = (S₋)^{k−2}F. By Proposition 4.1, G has order 0, so G = D²F_f for some f as above. But then
D^kF_f = D^{k−2}(D²F_f) = D^{k−2}G = F. □
Suppose f: ℝ → ℂ is a continuous function such that for some constants M, K, and a,
(1)  f(t) = 0,  t < M,
(2)  |f(t)| ≤ Ke^{at},  all t.
If Re z = b > a, then
|f(t)e_z(t)| = |f(t)|e^{-bt} ≤ Ke^{(a−b)t},  t ≥ M,
so the integral
(3)  ∫_{-∞}^∞ f(t)e_z(t) dt = ∫_M^∞ f(t)e_z(t) dt
exists when Re z > a. The Laplace transform of the function f is the function Lf defined by (3):
(4)  Lf(z) = ∫_M^∞ e^{-zt}f(t) dt,  Re z > a.

Theorem 5.1. Suppose f satisfies (1) and (2). Then Lf is holomorphic in the half-plane Re z > a, and
(5)  (Lf)′(z) = −∫_M^∞ te^{-zt}f(t) dt,
(6)  |Lf(z)| ≤ K(Re z − a)^{-1} exp [M(a − Re z)],  Re z > a.
Proof. For w ≠ z,
(w − z)^{-1}[Lf(w) − Lf(z)] = ∫_M^∞ g(w, z, t)f(t) dt,
where
g(w, z, t) = (w − z)^{-1}[e^{-wt} − e^{-zt}].
Suppose Re z > b > a and Re w > b. Let
h(s) = exp (−[z + s(w − z)]t),  0 ≤ s ≤ 1.
Then
g(w, z, t) = (w − z)^{-1}[h(1) − h(0)].
An application of the Mean Value Theorem to the real and imaginary parts of h shows that
|h(1) − h(0)| ≤ c|w − z|te^{-bt},  t ≥ 0,
so |g(w, z, t)| ≤ cte^{-bt} ≤ c₁e^{-(b−ε)t}, where ε > 0.
Moreover,
g(w, z, t)f(t) → −te^{-zt}f(t)
as w → z, uniformly on each interval [M, N]. It follows that Lf is differentiable and that (5) is true.
The estimate (6) follows directly:
|Lf(z)| ≤ ∫_M^∞ |f(t)e^{-zt}| dt ≤ K ∫_M^∞ exp [t(a − Re z)] dt = K(Re z − a)^{-1} exp [M(a − Re z)]. □
We now show that f is determined by g = Lf. Let C = C(b) = {z | Re z = b}, b > a, and set
(7)  F(t) = (1/2πi) ∫_C e^{zt}z^{-2}g(z) dz.
If
g_N(z) = ∫_{-N}^N f(s)e^{-zs} ds,
then the g_N are bounded uniformly on the line C and converge uniformly to g. Thus F(t) is the limit as N → ∞ of F_N, where
F_N(t) = ∫_{-N}^N {(1/2πi) ∫_C e^{z(t−s)}z^{-2} dz} f(s) ds.
Let us consider the integral in braces. When s > t the integrand is holomorphic to the right of C and has modulus ≤ k|z|^{-2} for some constant k. Let C_N be the curve consisting of the segment {Re z = b, |z − b| ≤ N} and the semicircle {Re z > b, |z − b| = N}. Then the integral of e^{z(t−s)}z^{-2} over C_N vanishes, while the contribution of the semicircle tends to 0 as N → ∞. Thus the integral in braces vanishes for s > t. When s < t, let C_N′ be the reflection of C_N about the line C. Then for N > b,
(1/2πi) ∫_{C_N′} e^{z(t−s)}z^{-2} dz = t − s,
the residue of e^{z(t−s)}z^{-2} at z = 0 being t − s. Letting N → ∞,
(1/2πi) ∫_C e^{z(t−s)}z^{-2} dz = t − s,  s < t.
Thus
F(t) = ∫_{-∞}^t (t − s)f(s) ds.
Theorem 5.3. Suppose g is holomorphic in the half-plane Re z > a and satisfies
(8)  |g(z)| ≤ K(1 + |z|)^{-2} exp (−M′ Re z),  Re z > a,
for some M′. Then there is a unique continuous function f such that g = Lf,
(10)  supp(f) ⊂ [M′, ∞),
(11)  |f(t)| ≤ c_b e^{bt},  each b > a.
In fact, for any b > a,
(12)  f(t) = (1/2πi) ∫_{C(b)} e^{zt}g(z) dz.

It follows from (8) that the integral exists and defines a continuous function. It follows from (8) and an elementary contour integration argument that (12) is independent of b, provided b > a. Moreover, (8) gives the estimate
(13)  |f(t)| ≤ c_b e^{bt},  b > a;  f(t) = 0,  t < M′.
Thus the Laplace transform of f can be defined for Re z > a. If Re w > a, choose
a < b < Re w.
Then
Lf(w) = ∫_{M′}^∞ e^{-wt} {(1/2πi) ∫_{C(b)} e^{zt}g(z) dz} dt.
Since
|e^{-wt}e^{zt}g(z)| ≤ c(1 + |z|)^{-2} exp [t(b − Re w)],
the double integral converges absolutely, and we may interchange the order of integration:
Lf(w) = (1/2πi) ∫_{C(b)} g(z)e^{(z−w)M′}(w − z)^{-1} dz.
Closing the contour to the right as above and taking the residue at z = w, we conclude that Lf(w) = g(w). Thus g = Lf. Uniqueness follows from the formula F(t) = ∫_{-∞}^t (t − s)f(s) ds above. □
Exercises
1. Show that, under appropriate hypotheses on f,
f(t) = (1/2πi) ∫_{C(b)} e^{zt}Lf(z) dz.
4. Compute Lf when
f(t) = 0, t < 0;  f(t) = t^n, t > 0.
6. Compute Lf when
f(t) = 0, t < 0;  f(t) = e^{wt}t^n, t > 0.
7. Compute Lf when
f(t) = 0, t < 0;  f(t) = ∫_0^t sin s ds, t > 0.
Suppose F ∈ ℒ′, so that for some k, a, M, K,
(1)  |F(u)| ≤ K|u|_{k,a,M},  all u ∈ ℒ.
Suppose Re z > a. By Lemma 2.3 we may choose a sequence (u_n)₁^∞ ⊂ ℒ such that
(2)  |u_n − e_z|_{k,a,M} → 0 as n → ∞,
where e_z(t) = e^{-zt}. Now (1) and (2) imply that (F(u_n))₁^∞ is a Cauchy sequence in ℂ. We shall define the Laplace transform LF by
(3)  LF(z) = F(e_z) = lim_{n→∞} F(u_n).
The limit does not depend on the sequence chosen: if
|v_n − e_z|_{k,a,M} → 0 as n → ∞,
then
|F(v_n) − F(u_n)| ≤ K|v_n − u_n|_{k,a,M} → 0.

Proposition 6.1. Suppose F, G ∈ ℒ′ satisfy (1), and suppose Re z > a. Then
(5)  L(bF)(z) = bLF(z),  b ∈ ℂ;
(6)  L(F + G) = LF + LG;
(7)  L(T_sF)(z) = e^{-zs}LF(z);
(8)  L(D^kF)(z) = z^kLF(z);
(9)  L(S₋F)(z) = z^{-1}LF(z),  z ≠ 0.
Moreover,
(10)  L(e_wF)(z) = LF(z + w),
(11)  if F is defined by a function f, then LF = Lf.

Proof. The identities (5) and (6) follow immediately from the definitions. If (u_n)₁^∞ ⊂ ℒ satisfies (2), then the sequence (T_{-s}u_n)₁^∞ approximates T_{-s}e_z in the same sense. But
T_{-s}e_z = e^{-zs}e_z,
so
L(T_sF)(z) = lim T_sF(u_n) = lim F(T_{-s}u_n) = e^{-zs} lim F(u_n) = e^{-zs}LF(z).
This proves (7), and the proofs of (8), (9), and (10) are similar. Note that u ∈ ℒ implies e_wu ∈ ℒ and that
u_n → u (ℒ) implies e_wu_n → e_wu (ℒ).
Moreover, if F = D^{k+2}F_f as in Theorem 4.3, then f may be recovered from LF:
(13)  f(t) = (1/2πi) ∫_{C(b)} e^{zt}z^{-k-2}LF(z) dz.

The functions g of the form g = LF may be characterized as follows.

Theorem 6.3. Suppose F ∈ ℒ′ satisfies (1). Then g = LF is holomorphic in the half-plane Re z > a and satisfies
(18)  |g(z)| ≤ K₁(1 + |z|)^k exp (−M Re z),  Re z > a,
where K₁ = Ke^{aM}. Conversely, if g is holomorphic for Re z > a and satisfies (18), then there is a unique F ∈ ℒ′ with g = LF.

Proof. Suppose first a ≥ 0. By Theorem 4.3, F = D^{k+2}F_f, where f is continuous with supp(f) ⊂ [M, ∞). It was shown in the proof of Theorem 4.3 that for any b > max {a, 0} there is a constant c such that
|f(t)| ≤ ce^{bt},  all t.
Therefore Lf is holomorphic for Re z > max {a, 0}. By Proposition 6.1,
(15)  LF(z) = z^{k+2}Lf(z),
so LF is holomorphic in this half-plane. This completes the proof of the first statement in the case a ≥ 0. When a < 0, let G = e_aF. Then (1) implies a corresponding estimate for G, and
(16)  LG(z) = LF(z + a)
is holomorphic for Re z > 0; thus LF is holomorphic for Re z > a. The estimate (18) follows from (1), since
|LF(z)| = |F(e_z)| ≤ K|e_z|_{k,a,M} = K|z|^k exp [(a − Re z)M] ≤ K₁(1 + |z|)^k exp (−M Re z).
Conversely, suppose (18) is true. Take b > max {a, 0}. We may apply Theorem 5.3 to
h(z) = z^{-k-2}g(z)
to conclude that
h = Lf,
where f is continuous,
supp(f) ⊂ [M, ∞),  |f(t)| ≤ c_b e^{bt},  Re z > b.
Let F = D^{k+2}F_f; then LF(z) = z^{k+2}Lf(z) = g(z) for Re z > b. Since this is true whenever b > max {a, 0}, the proof is complete in the case a ≥ 0. When a < 0, let
g₁(z) = g(z + a).
It follows that g₁ = LF₁ for some F₁ ∈ ℒ′, and then
g = LF,  F = e_{-a}F₁. □
Exercises
1. Compute the Laplace transforms of the translates T_sδ, s ∈ ℝ.
2. Compute the Laplace transform of F when
7. Differential equations
In §5 and §6 of Chapter 2 we discussed differential equations of the form
u′(x) + au(x) = f(x).
In this section we turn to the theory and practice of solving general nth order linear differential equations with constant coefficients:
(1)  p(D)u = f,
where
p(z) = Σ_{k=0}^n a_k z^k,  a_n ≠ 0,
and
(2)  p(D) = Σ_{k=0}^n a_k D^k.
Before discussing (1) for functions, let us look at the corresponding problem for distributions: given H ∈ ℒ′, find F ∈ ℒ′ such that
(3)  p(D)F = H.

Theorem 7.1. Suppose p is a polynomial of degree n > 0, and suppose H ∈ ℒ′. Then there is a unique distribution F ∈ ℒ′ such that (3) holds.

Proof. By (8) of §6, equation (3) is equivalent to
(4)  p(z)LF(z) = LH(z),  Re z > a.
We may choose a so large that p(z) ≠ 0 if Re z ≥ a, and so that LH is holomorphic for Re z > a and satisfies the estimate given in Theorem 6.3. Then we may define
g(z) = p(z)^{-1}LH(z),  Re z > a.
This function is holomorphic and satisfies
|g(z)| ≤ K(1 + |z|)^k exp (−M Re z),  Re z > a.
Theorem 6.3 assures us that there is a unique F ∈ ℒ′ such that LF = g, and then (4) holds. □
The proof just given provides us, in principle, with a way to calculate F, given H. Let us carry out the calculation formally, treating F and H as though they were functions:
F(t) = (1/2πi) ∫ e^{tz}LF(z) dz
= (1/2πi) ∫ e^{tz}p(z)^{-1}LH(z) dz
= (1/2πi) ∫∫ e^{tz}p(z)^{-1}e^{-sz}H(s) ds dz
= ∫ [(1/2πi) ∫ e^{(t−s)z}p(z)^{-1} dz] H(s) ds
= ∫ G(t − s)H(s) ds,
where
(5)  G(t) = (1/2πi) ∫_C e^{tz}p(z)^{-1} dz,
interpreted as the limit
(5)′  G(t) = lim_{R→∞} (1/2πi) ∫_{C_R} e^{tz}p(z)^{-1} dz,
where C_R is the segment {Re z = b, |z − b| ≤ R}, with b so large that p has no roots with Re z ≥ b.
Theorem 7.2. Let z₁, ..., z_r be the distinct roots of p, with multiplicities m₁, ..., m_r. Define
g_{jk}(t) = 0,  t < 0;  g_{jk}(t) = t^k exp (z_jt),  t > 0.
Then the limit (5)′ exists for each t ≠ 0; G(t) = 0 for t < 0, while for t > 0, G(t) is a linear combination of the functions g_{jk}, 0 ≤ k < m_j.
Proof Suppose t < O. Let DR denote the rectangle with vertices b iR,
(b + Rl/2) iR. When Re Z ~ b,
(6)
le2tp(z)11
c(t)(1
216
where c(t) is independent of z. Now the line segment CB is one side of the
rectangle DB, and the estimates (6) show that the integral of e2tp(z)1 over
the other sides converges to 0 as R + 00. On the other hand, the integral
over all of DB vanishes, because the integrand is holomorphic inside DB.
Thus the limit in (5)' exists and is 0 when t < 0 (and also when t = 0, if
n> 1).
For t > 0, the contour may instead be closed to the left. Let D_R now denote a closed curve enclosing all the roots of p; then
(5)″  G(t) = (1/2πi) ∮_{D_R} e^{zt}p(z)^{-1} dz,  t > 0.
Now we may apply Theorem 6.3 of Chapter 6: G(t) is the sum of the residues of the meromorphic function e^{zt}p(z)^{-1}. The point z_j is a pole of order m_j, so near z_j we have a Laurent expansion
p(z)^{-1} = Σ_{m ≥ −m_j} b_m(z − z_j)^m.
Since
e^{zt} = e^{z_jt} Σ_{m ≥ 0} (m!)^{-1}(z − z_j)^m t^m,
we see that the residue (the coefficient of (z − z_j)^{-1} in the Laurent expansion) at z_j is a linear combination of
t^k exp (z_jt),  0 ≤ k ≤ m_j − 1. □
Here values at 0+ are interpreted as limits from the right:
G(0+) = lim_{t→0+} G(t),
when the limit on the right exists as t approaches 0 from the right. We take the Green's function for p(D) to be 0 at t = 0; when n > 1 this agrees with (5)′.
Theorem 7.3. Let p be a polynomial of degree n > 0, with leading coefficient a_n ≠ 0. Let G be the Green's function for p(D). Then G is the unique function from ℝ to ℂ having the following properties:
(7)  G(t) = 0,  t ≤ 0;
(8)  D^kG(0+) = 0,  0 ≤ k ≤ n − 2;
(9)  G is smooth on (0, ∞);
(10)  p(D)G(t) = 0,  t > 0;
(11)  a_nD^{n−1}G(0+) = 1.
Proof. From (5)″, differentiating under the integral sign,
(12)  D^kG(t) = (1/2πi) ∮_{D_R} z^ke^{zt}p(z)^{-1} dz.
Thus
D^kG(0+) = (1/2πi) ∮_{D_R} z^kp(z)^{-1} dz.
We may replace D_R by a very large circle centered at the origin and conclude that
D^kG(0+) = 0,  k ≤ n − 2.
Therefore (8) is true. Let us apply the same argument when k = n − 1. Over the large circle the integrand is close to
z^{n−1}(a_nz^n)^{-1} = a_n^{-1}z^{-1},
so
D^{n−1}G(0+) = a_n^{-1},
which gives (11). Finally, (12) gives
p(D)G(t) = (1/2πi) ∮_{D_R} p(z)e^{zt}p(z)^{-1} dz = (1/2πi) ∮_{D_R} e^{zt} dz = 0,  t > 0,
which is (10).
To prove uniqueness, suppose G₁ also has the properties (7)–(11), and let f(t) = G(t) − G₁(t), t > 0. Let z₁, ..., z_n be the roots of p, each repeated according to multiplicity, and set
f₀ = f,  f_k = (D − z_k)f_{k−1},  k > 0.
Then each f_k is a linear combination of D^jf, 0 ≤ j ≤ k, so
f_k(0+) = 0,  k ≤ n − 1.
Moreover,
f_n = (D − z_n)(D − z_{n−1}) ⋯ (D − z₁)f = a_n^{-1}p(D)f = 0.
Thus
f_{n−1}(0) = 0,  Df_{n−1} − z_nf_{n−1} = f_n = 0,
so f_{n−1} = 0. Then
f_{n−2}(0) = 0,  Df_{n−2} − z_{n−1}f_{n−2} = 0,
so f_{n−2} = 0. (We are using Theorem 5.1 of Chapter 2.) Inductively, each f_k = 0, k ≤ n, so f = 0 and G = G₁. □
Now let us return to differential equations for functions.

Theorem 7.4. Suppose p is a polynomial of degree n > 0, and suppose f: [0, ∞) → ℂ is a continuous function. Then there is a unique solution u: [0, ∞) → ℂ to the problem
(13)  p(D)u(t) = f(t),  t > 0;
(14)  D^ju(0+) = 0,  0 ≤ j ≤ n − 1.
It is given by
(15)  u(t) = ∫_0^t G(t − s)f(s) ds,
where G is the Green's function for p(D).

Proof. Differentiating (15) and using G(0+) = 0, we get
(16)  Du(t) = G(0+)f(t) + ∫_0^t DG(t − s)f(s) ds = ∫_0^t DG(t − s)f(s) ds.
Inductively, using (8),
(17)  D^ku(t) = ∫_0^t D^kG(t − s)f(s) ds,  k ≤ n − 1,
while (11) gives
(18)  D^nu(t) = a_n^{-1}f(t) + ∫_0^t D^nG(t − s)f(s) ds.
Thus D^ju(0+) = 0 for j ≤ n − 1, and
p(D)u(t) = f(t) + ∫_0^t p(D)G(t − s)f(s) ds = f(t).
Uniqueness follows from the argument used to prove uniqueness in Theorem 7.3. □
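As a sanity check on (15), here is a concrete case worked with sympy (our example, not from the text): for p(z) = z² − 1 the Green's function works out to G(t) = sinh t for t > 0, and convolving G with f(t) = sin t as in (15) does produce a solution of (13)–(14):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

G = sp.sinh(t)      # Green's function for p(D) = D^2 - 1
f = sp.sin(t)

# Properties (7)-(11): p(D)G = 0, G(0+) = 0, DG(0+) = 1 (a_n = 1)
assert sp.simplify(sp.diff(G, t, 2) - G) == 0
assert G.subs(t, 0) == 0
assert sp.diff(G, t).subs(t, 0) == 1

# Formula (15): u(t) = ∫_0^t G(t - s) f(s) ds
u = sp.integrate(G.subs(t, t - s) * f.subs(t, s), (s, 0, t))

# u solves p(D)u = sin t with zero initial data
assert sp.simplify(sp.diff(u, t, 2) - u - sp.sin(t)) == 0
```

The convolution evaluates to u(t) = ½ sinh t − ½ sin t, matching the solution obtained by transform methods in Remark 3 below.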
Remarks. 1. Suppose f is continuous, supp(f) ⊂ [0, ∞), and e^{-at}f(t) is bounded. Then we may define a distribution H ∈ ℒ′ by
H(v) = ∫_{-∞}^∞ f(t)v(t) dt,  v ∈ ℒ,
and consider the distribution equation
(19)  p(D)F = H.
2. The problem (13)–(14) is often solved by taking Laplace transforms and consulting a table. In the context of this chapter, this amounts to setting f(t) = 0 for t < 0 and considering the distribution determined by f, exactly as in Remark 1.
3. Let us consider an example of the situation described in Remark 2. A table of Laplace transforms may read, in part,

f        Lf
sin t    (z² + 1)^{-1}
sinh t   (z² − 1)^{-1}

(As noted in Remark 2, the function sin t in the table is considered only for t ≥ 0, or is extended to vanish for t < 0.)
Now suppose we wish to solve:
(20)  u″(t) − u(t) = sin t,  t > 0;
(21)  u(0) = u′(0) = 0.
Let p(z) = z² − 1. Our problem is
p(D)u = sin t,  t > 0;  u(0) = Du(0) = 0.
The solution u is the function whose Laplace transform is
p(z)^{-1}L(sin t)(z) = (z² − 1)^{-1}(z² + 1)^{-1}.
But
(z² − 1)^{-1}(z² + 1)^{-1} = ½(z² − 1)^{-1} − ½(z² + 1)^{-1}.
Consulting the table again, we get
u(t) = ½ sinh t − ½ sin t,  t ≥ 0.
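The partial fractions step and the final answer can be verified mechanically. A check with sympy; the variable names are ours:

```python
import sympy as sp

t, z = sp.symbols('t z')

# Partial fractions: 1/((z^2-1)(z^2+1)) = (1/2)/(z^2-1) - (1/2)/(z^2+1)
expr = 1/((z**2 - 1)*(z**2 + 1))
half = sp.Rational(1, 2)
assert sp.simplify(expr - (half/(z**2 - 1) - half/(z**2 + 1))) == 0

# The candidate solution from the table lookup
u = sp.sinh(t)/2 - sp.sin(t)/2

# u'' - u = sin t, with zero initial data
assert sp.simplify(sp.diff(u, t, 2) - u - sp.sin(t)) == 0
assert u.subs(t, 0) == 0
assert sp.diff(u, t).subs(t, 0) == 0
```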
4. In cases where the above method fails, either because the given function f grows too fast to have a Laplace transform or because the function Lu cannot be located in a table, one may wish to compute the Green's function G and use (15). The Green's function may be computed explicitly if the roots of the polynomial p are known (of course (5)″ gives us G in principle). In fact, suppose the roots are z₁, z₂, ..., z_r with multiplicities m₁, m₂, ..., m_r. We know that G, for t > 0, is a linear combination of the n functions t^ke^{z_jt}, 0 ≤ k < m_j. Thus
G(t) = Σ_{j=1}^r Σ_{k=0}^{m_j−1} c_{jk}t^ke^{z_jt},  t > 0,
where we must determine the constants c_{jk}. The conditions (8) and (11) on the D^kG(0+) give n linear equations:
0 = Σ_j c_{j0},
0 = Σ_j (z_jc_{j0} + c_{j1}),
etc.
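For example, for p(z) = z² − 4z − 5 = (z − 5)(z + 1) (the first polynomial of Exercise 1 below), summing the residues of e^{zt}p(z)^{-1} as in (5)″ can be carried out with sympy; this is a sketch, and the code and names are ours:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
z = sp.symbols('z')

p = z**2 - 4*z - 5    # roots 5 and -1

# (5)'': G(t) = sum of residues of e^{zt} p(z)^{-1}, t > 0
G = sum(sp.residue(sp.exp(z*t)/p, z, r) for r in sp.roots(p))

# G(t) = (e^{5t} - e^{-t})/6
assert sp.simplify(G - (sp.exp(5*t) - sp.exp(-t))/6) == 0

# Defining properties: p(D)G = 0 for t > 0, G(0+) = 0, DG(0+) = 1
assert sp.simplify(sp.diff(G, t, 2) - 4*sp.diff(G, t) - 5*G) == 0
assert G.subs(t, 0) == 0
assert sp.diff(G, t).subs(t, 0) == 1
```

The same residue computation handles multiple roots (as for z² − 4z + 4), where the residues produce the terms t^k e^{z_j t}.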
5. The more general problem
(22)  p(D)u(t) = f(t),  t > 0;
(23)  D^ku(0+) = b_k,  0 ≤ k < n,
may be reduced to (13)–(14). Two ways of doing this are given in the exercises.
6. The formal calculation after Theorem 7.1 led to a formula
F(t) = ∫ G(t − s)H(s) ds,
which expresses the solution of p(D)F = H as a convolution; the exercises below make this precise.
Exercises
1. Compute the Green's function for the operator p(D) in each of the
following cases:
p(z) = z² − 4z − 5
p(z) = z² − 4z + 4
p(z) = z³ + 2z² − z
p(z) = z³ − 3z + 2.
2. Solve for u:
u″(t) − 4u′(t) + 4u(t) = e^t,  t > 0,
u(0+) = u′(0+) = 0.
3. Solve for u:
u‴(t) − 3u′(t) + 2u(t) = t² − cos t,  t > 0,
u(0+) = u′(0+) = u″(0+) = 0.
4. Let u₀: (0, ∞) → ℂ be given by
u₀(t) = Σ_{k=0}^{n−1} (k!)^{-1}b_kt^k.
Show that
D^ku₀(0+) = b_k,  0 ≤ k ≤ n − 1.
5. Suppose u₀: (0, ∞) → ℝ is such that
D^ku₀(0+) = b_k,  0 ≤ k ≤ n − 1.
Show that u is a solution of (22)–(23) if and only if u = u₀ + u₁, where u₁ is the solution of
p(D)u₁(t) = f(t) − p(D)u₀(t),  t > 0;  D^ku₁(0+) = 0,  0 ≤ k ≤ n − 1.
9. Suppose u is smooth for t > 0, and for k ≥ 0 let F_k be the distribution determined by the function equal to D^ku(t) for t > 0 and to 0 for t < 0. Show that
DF₀ = F₁ + u(0+)δ,
and in general
D^kF₀ = F_k + Σ_{j=0}^{k−1} D^ju(0+)D^{k−1−j}δ.
10. In Exercise 9 let u(t) = G(t), t > 0, where G is the Green's function for p(D). Show that
p(D)F₀ = δ.
13. For F ∈ ℒ′ and u ∈ ℒ, the convolution F * u is defined by
F * u(t) = F(T_tũ),  where ũ(s) = u(−s).
(a) Show that if F is determined by a function f, then
∫ f(t − s)u(s) ds = F * u(t).
(b) Show that for each F ∈ ℒ′ and u ∈ ℒ, the function F * u is in ℒ.
14. If F, H ∈ ℒ′, set
(F * H)(u) = F(H̃ * u),  u ∈ ℒ.
Show that the solution of
p(D)F = H
is
F = G * H,
where G is as in Exercise 17.
NOTATION INDEX
ℂ  complex numbers, 8
ℚ  rational numbers, 4
ℝ  real numbers, 5
ℤ  integers, 1
ℤ⁺  positive integers, 1
L²  Hilbert space of periodic distributions, 106
𝒞  continuous periodic functions, 69
ℒ  smooth functions of fast decrease at +∞, 193
ℒ′  distributions acting on ℒ, 197
𝒫  smooth periodic functions, 73
𝒫′  periodic distributions, 84
D  differentiation operator, 72, 86, 198
L  Laplace transform operator, 192, 206, 210
u * v, F * u, F * G  convolution, 78, 94, 96, 100, 101, 222
u_n → u (ℒ)  193
u_n → u (𝒫)  73
u_n → F (L²)  106
F_n → F (ℒ′)  199
F_n → F (𝒫′)  85, 100
SUBJECT INDEX
approximate identity, 80
Liouville's theorem, 169
logarithm, 61, 173
lower bound, 7
lower limit, 12
lub, 7
maximum modulus theorem, 174
maximum principle, for harmonic functions, 154
– for heat equation, 142
mean value theorem, 43
meromorphic function, 178
mesh, of partition, 38
metric, metric space, 19
modulus, 9
neighborhood, 20
norm, normed linear space, 70
null space, 33
odd function, 87
odd periodic distribution, 88, 101
open mapping property, 174
open set, 20
order, of distribution in ℒ′, 201
– of periodic distribution, 89, 102
– of pole, 177
– of zero, 177
orthogonal expansion, 121, 124
orthogonal vectors, 110
orthonormal set, orthonormal basis,
117
parallelogram law, 110
Parseval's identity, 124
partial fractions decomposition, 180
partial sum, of series, 14
partition, 38
period, 69
periodic distribution, 84, 100
periodic function, 69
Poisson kernel, 151
polar coordinates, 66
pole, simple pole, 176
power series, 17
product, of sets, 2
Pythagorean theorem, in Hilbert
space, 110
radius of convergence, 17
rapid decrease, 131
ratio test, 16
rational function, 179
rational number, 4
real part, of complex number, 9
– of distribution in ℒ′, 198
– of function, 38
– of periodic distribution, 87, 101
real distribution in ℒ′, 198
real periodic distribution, 87, 101
removable singularity, 175
residue, 182
Riemann sum, 38
Riesz representation theorem, 112
root test, 16
scalar, 28
scalar multiplication, 27
Schrödinger equation, 141
Schwarz inequality, 103, 109
seminorm, 76
separable, 27
sequence, 4
sequentially compact set, 26
series, 14
simple pole, simple zero, 177
singularity, essential, 176
– isolated, 175
– removable, 175
slow growth, 132
smooth function, 73
span, 30
standard basis, 30
subset, 2
subsequence, 24
subspace, 29
sup, 11
support, 200
supremum, 11
A student approaching mathematical research is often discouraged by the sheer
volume of the literature and the long history of the subject, even when the actual
problems are readily understandable. Graduate Texts in Mathematics is intended to bridge the gap between passive study and creative understanding; it offers introductions on a suitably advanced level to areas of current research. These introductions are neither complete surveys nor brief accounts of the latest results only.
They are textbooks carefully designed as teaching aids; the purpose of the authors
is, in every case, to highlight the characteristic features of the theory.
Graduate Texts in Mathematics can serve as the basis for advanced courses.
They can be either the main or subsidiary sources for seminars, and they can be
used for private study. Their guiding principle is to convince the student that
mathematics is a living science.