
Physics 451 Fall 2004

Homework Assignment #1 Solutions


Textbook problems: Ch. 1: 1.1.5, 1.3.3, 1.4.7, 1.5.5, 1.5.6
Ch. 3: 3.2.4, 3.2.19, 3.2.27
Chapter 1
1.1.5 A sailboat sails for 1 hr at 4 km/hr (relative to the water) on a steady compass heading of 40° east of north. The sailboat is simultaneously carried along by a current. At the end of the hour the boat is 6.12 km from its starting point. The line from its starting point to its location lies 60° east of north. Find the x (easterly) and y (northerly) components of the water velocity.

This is a straightforward relative velocity (vector addition) problem. Let $\vec v_{bl}$ denote the velocity of the boat with respect to land, $\vec v_{bw}$ the velocity of the boat with respect to the water, and $\vec v_{wl}$ the velocity of the water with respect to land. Then

$$\vec v_{bl} = \vec v_{bw} + \vec v_{wl}$$

where

$$\vec v_{bw} = 4\ \mathrm{km/hr}\ @\ 50^\circ = (2.57\,\hat x + 3.06\,\hat y)\ \mathrm{km/hr}$$
$$\vec v_{bl} = 6.12\ \mathrm{km/hr}\ @\ 30^\circ = (5.30\,\hat x + 3.06\,\hat y)\ \mathrm{km/hr}$$

(The angles are measured from the easterly x axis, so a heading 40° east of north is 50° from the x axis.) Thus

$$\vec v_{wl} = \vec v_{bl} - \vec v_{bw} = 2.73\,\hat x\ \mathrm{km/hr}$$
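As a quick numerical sanity check (a Python sketch, not part of the original solution; the helper name `polar` is ours), the components above follow from resolving each velocity along the easterly and northerly axes:

```python
import math

def polar(speed, deg_from_x):
    """Resolve a speed at an angle measured from the +x (east) axis into (x, y)."""
    rad = math.radians(deg_from_x)
    return (speed * math.cos(rad), speed * math.sin(rad))

# 40 deg east of north is 50 deg from the x axis; 60 deg east of north is 30 deg
v_bw = polar(4.0, 50.0)     # boat relative to water
v_bl = polar(6.12, 30.0)    # boat relative to land
v_wl = (v_bl[0] - v_bw[0], v_bl[1] - v_bw[1])   # water relative to land

assert abs(v_wl[0] - 2.73) < 0.01   # easterly component, km/hr
assert abs(v_wl[1]) < 0.01          # northerly component vanishes
```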
1.3.3 The vector $\vec r$, starting at the origin, terminates at and specifies the point in space (x, y, z). Find the surface swept out by the tip of $\vec r$ if

(a) $(\vec r - \vec a\,)\cdot\vec a = 0$

The vanishing of the dot product indicates that the vector $\vec r - \vec a$ is perpendicular to the constant vector $\vec a$. As a result, $\vec r - \vec a$ must lie in a plane perpendicular to $\vec a$. This means $\vec r$ itself must lie in a plane passing through the tip of $\vec a$ and perpendicular to $\vec a$.

(b) $(\vec r - \vec a\,)\cdot\vec r = 0$

This time the vector $\vec r - \vec a$ has to be perpendicular to the position vector $\vec r$ itself. It is perhaps harder to see what this is in three dimensions. However, for two dimensions, we find a circle. In three dimensions, this is a sphere. Note that we can also complete the square to obtain

$$(\vec r - \vec a\,)\cdot\vec r = |\vec r - \tfrac12\vec a\,|^2 - |\tfrac12\vec a\,|^2$$

Hence we end up with the equation for a sphere of radius $|\vec a\,|/2$ centered at the point $\vec a/2$

$$|\vec r - \tfrac12\vec a\,|^2 = |\tfrac12\vec a\,|^2$$
1.4.7 Prove that $(\vec A\times\vec B\,)\cdot(\vec A\times\vec B\,) = (AB)^2 - (\vec A\cdot\vec B\,)^2$.

This can be shown just by a straightforward computation. Since

$$\vec A\times\vec B = (A_yB_z - A_zB_y)\hat x + (A_zB_x - A_xB_z)\hat y + (A_xB_y - A_yB_x)\hat z$$

we find

$$\begin{aligned}
|\vec A\times\vec B\,|^2 &= (A_yB_z - A_zB_y)^2 + (A_zB_x - A_xB_z)^2 + (A_xB_y - A_yB_x)^2\\
&= A_x^2B_y^2 + A_x^2B_z^2 + A_y^2B_x^2 + A_y^2B_z^2 + A_z^2B_x^2 + A_z^2B_y^2\\
&\qquad - 2A_xB_xA_yB_y - 2A_xB_xA_zB_z - 2A_yB_yA_zB_z\\
&= (A_x^2 + A_y^2 + A_z^2)(B_x^2 + B_y^2 + B_z^2) - (A_xB_x + A_yB_y + A_zB_z)^2
\end{aligned}$$

where we had to add and subtract $A_x^2B_x^2 + A_y^2B_y^2 + A_z^2B_z^2$ and do some factorization to obtain the last line.

However, there is a more elegant approach to this problem. Recall that cross products are related to $\sin\theta$ and dot products are related to $\cos\theta$. Then

$$|\vec A\times\vec B\,|^2 = (AB\sin\theta)^2 = (AB)^2(1 - \cos^2\theta) = (AB)^2 - (AB\cos\theta)^2 = (AB)^2 - (\vec A\cdot\vec B\,)^2$$
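This is just Lagrange's identity, and it is easy to spot-check numerically (an illustrative Python sketch with arbitrary test vectors):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

A, B = (1.0, -2.0, 3.0), (4.0, 0.5, -1.0)
lhs = dot(cross(A, B), cross(A, B))            # |A x B|^2
rhs = dot(A, A) * dot(B, B) - dot(A, B)**2     # (AB)^2 - (A.B)^2
assert abs(lhs - rhs) < 1e-9
```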
1.5.5 The orbital angular momentum $\vec L$ of a particle is given by $\vec L = \vec r\times\vec p = m\vec r\times\vec v$, where $\vec p$ is the linear momentum. With linear and angular velocity related by $\vec v = \vec\omega\times\vec r$, show that

$$\vec L = mr^2[\vec\omega - \hat r(\hat r\cdot\vec\omega)]$$

Here, $\hat r$ is a unit vector in the $\vec r$ direction.

Using $\vec L = m\vec r\times\vec v$ and $\vec v = \vec\omega\times\vec r$, we find

$$\vec L = m\,\vec r\times(\vec\omega\times\vec r)$$

Because of the double cross product, this is the perfect opportunity to use the BAC–CAB rule: $\vec A\times(\vec B\times\vec C) = \vec B(\vec A\cdot\vec C) - \vec C(\vec A\cdot\vec B)$

$$\vec L = m[\vec\omega(\vec r\cdot\vec r) - \vec r(\vec r\cdot\vec\omega)] = m[\vec\omega r^2 - \vec r(\vec r\cdot\vec\omega)]$$

Using $\vec r = r\hat r$, and factoring out $r^2$, we then obtain

$$\vec L = mr^2[\vec\omega - \hat r(\hat r\cdot\vec\omega)]\qquad(1)$$
1.5.6 The kinetic energy of a single particle is given by $T = \frac12mv^2$. For rotational motion this becomes $\frac12m(\vec\omega\times\vec r\,)^2$. Show that

$$T = \tfrac12m[r^2\omega^2 - (\vec r\cdot\vec\omega)^2]$$

We can use the result of problem 1.4.7:

$$T = \tfrac12m(\vec\omega\times\vec r\,)^2 = \tfrac12m[(\omega r)^2 - (\vec\omega\cdot\vec r\,)^2] = \tfrac12m[r^2\omega^2 - (\vec r\cdot\vec\omega)^2]$$

Note that we could have written this in terms of unit vectors

$$T = \tfrac12mr^2[\omega^2 - (\hat r\cdot\vec\omega)^2]$$

Comparing this with (1) above, we find that

$$T = \tfrac12\vec\omega\cdot\vec L$$

which is not a coincidence.
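The two expressions for $\vec L$, and the relation $T = \frac12\vec\omega\cdot\vec L$, can be spot-checked numerically (a sketch; m, r, and the angular velocity are arbitrary test values):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

m = 2.0
r = (1.0, 2.0, 2.0)      # position; |r| = 3
w = (0.5, -1.0, 0.25)    # angular velocity

r2 = dot(r, r)
rhat = tuple(c / r2**0.5 for c in r)
L_direct = tuple(m * c for c in cross(r, cross(w, r)))                       # m r x (w x r)
L_formula = tuple(m * r2 * (wc - rh * dot(rhat, w))                           # m r^2 [w - rhat (rhat.w)]
                  for wc, rh in zip(w, rhat))
T = 0.5 * m * dot(cross(w, r), cross(w, r))                                   # (1/2) m (w x r)^2

assert all(abs(a - b) < 1e-9 for a, b in zip(L_direct, L_formula))
assert abs(T - 0.5 * dot(w, L_direct)) < 1e-9
```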
Chapter 3
3.2.4 (a) Complex numbers, a + ib, with a and b real, may be represented by (or are isomorphic with) 2×2 matrices:

$$a + ib \leftrightarrow \begin{pmatrix}a & b\\ -b & a\end{pmatrix}$$

Show that this matrix representation is valid for (i) addition and (ii) multiplication.

Let us start with addition. For complex numbers, we have (straightforwardly)

$$(a + ib) + (c + id) = (a + c) + i(b + d)$$

whereas, if we used matrices we would get

$$\begin{pmatrix}a & b\\ -b & a\end{pmatrix} + \begin{pmatrix}c & d\\ -d & c\end{pmatrix} = \begin{pmatrix}a + c & b + d\\ -(b + d) & a + c\end{pmatrix}$$

which shows that the sum of matrices yields the proper representation of the complex number (a + c) + i(b + d).

We now handle multiplication in the same manner. First, we have

$$(a + ib)(c + id) = (ac - bd) + i(ad + bc)$$

while matrix multiplication gives

$$\begin{pmatrix}a & b\\ -b & a\end{pmatrix}\begin{pmatrix}c & d\\ -d & c\end{pmatrix} = \begin{pmatrix}ac - bd & ad + bc\\ -(ad + bc) & ac - bd\end{pmatrix}$$

which is again the correct result.
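A minimal Python sketch of this isomorphism (the helper names are ours), checking both addition and multiplication against Python's built-in complex arithmetic:

```python
# The 2x2 matrix [[a, b], [-b, a]] mirrors the complex number a + ib
def to_mat(z):
    return [[z.real, z.imag], [-z.imag, z.real]]

def mat_add(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
assert mat_add(to_mat(z), to_mat(w)) == to_mat(z + w)   # addition agrees
assert mat_mul(to_mat(z), to_mat(w)) == to_mat(z * w)   # multiplication agrees
```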


(b) Find the matrix corresponding to $(a + ib)^{-1}$.

We can find the matrix in two ways. We first do standard complex arithmetic

$$(a + ib)^{-1} = \frac{1}{a + ib} = \frac{a - ib}{(a + ib)(a - ib)} = \frac{1}{a^2 + b^2}(a - ib)$$

This corresponds to the 2×2 matrix

$$(a + ib)^{-1} \leftrightarrow \frac{1}{a^2 + b^2}\begin{pmatrix}a & -b\\ b & a\end{pmatrix}$$

Alternatively, we first convert to a matrix representation, and then find the inverse matrix

$$(a + ib)^{-1} \leftrightarrow \begin{pmatrix}a & b\\ -b & a\end{pmatrix}^{-1} = \frac{1}{a^2 + b^2}\begin{pmatrix}a & -b\\ b & a\end{pmatrix}$$

Either way, we obtain the same result.
3.2.19 An operator $\hat P$ commutes with $J_x$ and $J_y$, the x and y components of an angular momentum operator. Show that $\hat P$ commutes with the third component of angular momentum; that is,

$$[\hat P, J_z] = 0$$

We begin with the statement that $\hat P$ commutes with $J_x$ and $J_y$. This may be expressed as $[\hat P, J_x] = 0$ and $[\hat P, J_y] = 0$, or equivalently as $\hat P J_x = J_x\hat P$ and $\hat P J_y = J_y\hat P$. We also take the hint into account and note that $J_x$ and $J_y$ satisfy the commutation relation

$$[J_x, J_y] = iJ_z$$

or equivalently $J_z = -i[J_x, J_y]$. Substituting this in for $J_z$, we find the double commutator

$$[\hat P, J_z] = [\hat P, -i[J_x, J_y]] = -i[\hat P, [J_x, J_y]]$$

Note that we are able to pull the $-i$ factor out of the commutator. From here, we may expand all the commutators to find

$$\begin{aligned}
[\hat P, [J_x, J_y]] &= \hat PJ_xJ_y - \hat PJ_yJ_x - J_xJ_y\hat P + J_yJ_x\hat P\\
&= J_x\hat PJ_y - J_y\hat PJ_x - J_x\hat PJ_y + J_y\hat PJ_x = 0
\end{aligned}$$

To get from the first to the second line, we commuted $\hat P$ past either $J_x$ or $J_y$ as appropriate. Of course, a quicker way to do this problem is to use the Jacobi identity $[A, [B, C]] = [B, [A, C]] - [C, [A, B]]$ to obtain

$$[\hat P, [J_x, J_y]] = [J_x, [\hat P, J_y]] - [J_y, [\hat P, J_x]]$$

The right hand side clearly vanishes, since $\hat P$ commutes with both $J_x$ and $J_y$.
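The Jacobi identity used in the shortcut holds for arbitrary matrices, which is easy to spot-check (an illustrative sketch with small integer test matrices):

```python
# Spot-check of the Jacobi identity [A,[B,C]] = [B,[A,C]] - [C,[A,B]] on 2x2 matrices
def mul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sub(M, N):
    return [[M[i][j] - N[i][j] for j in range(2)] for i in range(2)]

def comm(M, N):
    return sub(mul(M, N), mul(N, M))

A = [[1, 2], [0, -1]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 3]]
assert comm(A, comm(B, C)) == sub(comm(B, comm(A, C)), comm(C, comm(A, B)))
```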
3.2.27 (a) The operator Tr replaces a matrix A by its trace; that is

$$\mathrm{Tr}(A) = \mathrm{trace}(A) = \sum_i a_{ii}$$

Show that Tr is a linear operator.

Recall that to show that Tr is linear we may prove that $\mathrm{Tr}(\alpha A + \beta B) = \alpha\,\mathrm{Tr}(A) + \beta\,\mathrm{Tr}(B)$, where $\alpha$ and $\beta$ are numbers. However, this is a simple property of arithmetic

$$\mathrm{Tr}(\alpha A + \beta B) = \sum_i(\alpha a_{ii} + \beta b_{ii}) = \alpha\sum_ia_{ii} + \beta\sum_ib_{ii} = \alpha\,\mathrm{Tr}(A) + \beta\,\mathrm{Tr}(B)$$

(b) The operator det replaces a matrix A by its determinant; that is

$$\det(A) = \text{determinant of }A$$

Show that det is not a linear operator.

In this case all we need to do is to find a single counterexample. For example, for an n×n matrix, the properties of the determinant yield

$$\det(\alpha A) = \alpha^n\det(A)$$

This is not linear unless n = 1 (in which case A is really a single number and not a matrix). There are of course many other examples that one could come up with to show that det is not a linear operator.
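A concrete counterexample in code (a sketch for n = 2): the trace respects scaling while the determinant picks up a factor of $\alpha^n$:

```python
# Trace is linear; det is not (counterexample: scaling the 2x2 identity by 2)
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

I2 = [[1, 0], [0, 1]]
twoI = [[2, 0], [0, 2]]
assert trace(twoI) == 2 * trace(I2)   # linearity holds for the trace
assert det2(twoI) == 4 * det2(I2)     # det(2A) = 2^2 det(A), not 2 det(A)
assert det2(twoI) != 2 * det2(I2)
```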
Physics 451 Fall 2004
Homework Assignment #2 Solutions
Textbook problems: Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30
Chapter 3
3.3.1 Show that the product of two orthogonal matrices is orthogonal.

Suppose matrices A and B are orthogonal. This means that $A\tilde A = I$ and $B\tilde B = I$. We now denote the product of A and B by C = AB. To show that C is orthogonal, we compute $C\tilde C$ and see what happens. Recalling that the transpose of a product is the reversed product of the transposes, we have

$$C\tilde C = (AB)(\widetilde{AB}) = AB\tilde B\tilde A = A\tilde A = I$$

The statement that this is a key step in showing that the orthogonal matrices form a group is because one of the requirements of being a group is that the product of any two elements (i.e. A and B) in the group yields a result (i.e. C) that is also in the group. This is also known as closure. Along with closure, we also need to show associativity (okay for matrices), the existence of an identity element (also okay for matrices) and the existence of an inverse (okay for orthogonal matrices). Since all four conditions are satisfied, the set of n×n orthogonal matrices forms the orthogonal group, denoted O(n). While general orthogonal matrices have determinants ±1, the subgroup of matrices with determinant +1 forms the special orthogonal group SO(n).
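A quick numerical illustration of closure (a sketch; `is_orthogonal` is our helper): the product of a rotation and a rotation-plus-reflection is again orthogonal:

```python
import math

def mul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def is_orthogonal(M, tol=1e-12):
    P = mul(transpose(M), M)          # should be the identity
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(2) for j in range(2))

t, p = 0.3, 1.1
A = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]   # a rotation (det +1)
B = [[math.cos(p), math.sin(p)], [math.sin(p), -math.cos(p)]]   # rotation + reflection (det -1)
assert is_orthogonal(A) and is_orthogonal(B) and is_orthogonal(mul(A, B))
```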
3.3.12 A is 2×2 and orthogonal. Find the most general form of

$$A = \begin{pmatrix}a & b\\ c & d\end{pmatrix}$$

Compare with two-dimensional rotation.

Since A is orthogonal, it must satisfy the condition $A\tilde A = I$, or

$$\begin{pmatrix}a & b\\ c & d\end{pmatrix}\begin{pmatrix}a & c\\ b & d\end{pmatrix} = \begin{pmatrix}a^2 + b^2 & ac + bd\\ ac + bd & c^2 + d^2\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$$

This gives three conditions

i) $a^2 + b^2 = 1$,  ii) $c^2 + d^2 = 1$,  iii) $ac + bd = 0$

These are three equations for four unknowns, so there will be a free parameter left over. There are many ways to solve the equations. However, one nice way is to notice that $a^2 + b^2 = 1$ is the equation for a unit circle in the ab plane. This means we can write a and b in terms of an angle

$$a = \cos\theta,\qquad b = \sin\theta$$

Similarly, $c^2 + d^2 = 1$ can be solved by setting

$$c = \cos\varphi,\qquad d = \sin\varphi$$

Of course, we have one more equation to solve, $ac + bd = 0$, which becomes

$$\cos\theta\cos\varphi + \sin\theta\sin\varphi = \cos(\theta - \varphi) = 0$$

This means that $\varphi = \theta - \pi/2$ or $\varphi = \theta - 3\pi/2$. We must consider both cases separately.

$\varphi = \theta - \pi/2$: This gives

$$c = \cos(\theta - \pi/2) = \sin\theta,\qquad d = \sin(\theta - \pi/2) = -\cos\theta$$

or

$$A_1 = \begin{pmatrix}\cos\theta & \sin\theta\\ \sin\theta & -\cos\theta\end{pmatrix}\qquad(1)$$

This looks almost like a rotation, but not quite (since the minus sign is in the wrong place).

$\varphi = \theta - 3\pi/2$: This gives

$$c = \cos(\theta - 3\pi/2) = -\sin\theta,\qquad d = \sin(\theta - 3\pi/2) = \cos\theta$$

or

$$A_2 = \begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix}\qquad(2)$$

which is exactly a rotation.

Note that we can tell the difference between matrices of type (1) and (2) by computing the determinant. We see that $\det A_1 = -1$ while $\det A_2 = 1$. In fact, the $A_2$ type of matrices form the SO(2) group, which is exactly the group of rotations in the plane. On the other hand, the $A_1$ type of matrices represent rotations followed by a mirror reflection $y \to -y$. This can be seen by writing

$$A_1 = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix}$$

Note that the set of $A_1$ matrices by themselves do not form a group (since they do not contain the identity, and since they do not close under multiplication). However the set of all orthogonal matrices $\{A_1, A_2\}$ forms the O(2) group, which is the group of rotations and mirror reflections in two dimensions.
3.3.13 Here $|x\rangle$ and $|y\rangle$ are column vectors. Under an orthogonal transformation S, $|x'\rangle = S|x\rangle$, $|y'\rangle = S|y\rangle$. Show that the scalar product $\langle x|y\rangle$ is invariant under this orthogonal transformation.

To prove the invariance of the scalar product, we compute

$$\langle x'|y'\rangle = \langle x|\tilde SS|y\rangle = \langle x|y\rangle$$

where we used $\tilde SS = I$ for an orthogonal matrix S. This demonstrates that the scalar product is invariant (same in primed and unprimed frame).
3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation.

We take the hint, and start by denoting the real non-symmetric matrix by A. Assuming that A can be diagonalized by an orthogonal similarity transformation, that means there exists an orthogonal matrix S such that

$$\Lambda = SA\tilde S\qquad\text{where }\Lambda\text{ is diagonal}$$

We can invert this relation by multiplying both sides on the left by $\tilde S$ and on the right by S. This yields

$$A = \tilde S\Lambda S$$

Taking the transpose of A, we find

$$\tilde A = \widetilde{(\tilde S\Lambda S)} = \tilde S\tilde\Lambda\tilde{\tilde S}$$

However, the transpose of a transpose is the original matrix, $\tilde{\tilde S} = S$, and the transpose of a diagonal matrix is the original matrix, $\tilde\Lambda = \Lambda$. Hence

$$\tilde A = \tilde S\Lambda S = A$$

Since the matrix A is equal to its transpose, A has to be a symmetric matrix. However, recall that A is supposed to be non-symmetric. Hence we run into a contradiction. As a result, we must conclude that A cannot be diagonalized by an orthogonal similarity transformation.
3.5.6 A has eigenvalues $\lambda_i$ and corresponding eigenvectors $|x_i\rangle$. Show that $A^{-1}$ has the same eigenvectors but with eigenvalues $\lambda_i^{-1}$.

If A has eigenvalues $\lambda_i$ and eigenvectors $|x_i\rangle$, that means

$$A|x_i\rangle = \lambda_i|x_i\rangle$$

Multiplying both sides by $A^{-1}$ on the left, we find

$$A^{-1}A|x_i\rangle = \lambda_iA^{-1}|x_i\rangle$$

or

$$|x_i\rangle = \lambda_iA^{-1}|x_i\rangle$$

Rewriting this as

$$A^{-1}|x_i\rangle = \lambda_i^{-1}|x_i\rangle$$

it is now obvious that $A^{-1}$ has the same eigenvectors, but eigenvalues $\lambda_i^{-1}$.
3.5.9 Two Hermitian matrices A and B have the same eigenvalues. Show that A and B are related by a unitary similarity transformation.

Since both A and B have the same eigenvalues, they can both be diagonalized according to

$$\Lambda = UAU^\dagger,\qquad \Lambda = VBV^\dagger$$

where $\Lambda$ is the same diagonal matrix of eigenvalues. This means

$$UAU^\dagger = VBV^\dagger\qquad\Rightarrow\qquad B = V^\dagger UAU^\dagger V$$

If we let $W = V^\dagger U$, its Hermitian conjugate is $W^\dagger = (V^\dagger U)^\dagger = U^\dagger V$. This means that

$$B = WAW^\dagger\qquad\text{where } W = V^\dagger U$$

and $WW^\dagger = V^\dagger UU^\dagger V = I$. Hence A and B are related by a unitary similarity transformation.
3.5.30 a) Determine the eigenvalues and eigenvectors of

$$\begin{pmatrix}1 & \epsilon\\ \epsilon & 1\end{pmatrix}$$

Note that the eigenvalues are degenerate for $\epsilon = 0$ but the eigenvectors are orthogonal for all $\epsilon \ne 0$ and $\epsilon \to 0$.

We first find the eigenvalues through the secular equation

$$\begin{vmatrix}1 - \lambda & \epsilon\\ \epsilon & 1 - \lambda\end{vmatrix} = (1 - \lambda)^2 - \epsilon^2 = 0$$

This is easily solved

$$(1 - \lambda)^2 - \epsilon^2 = 0\qquad\Rightarrow\qquad (\lambda - 1)^2 = \epsilon^2\qquad\Rightarrow\qquad \lambda - 1 = \pm\epsilon\qquad(3)$$

Hence the two eigenvalues are $\lambda_+ = 1 + \epsilon$ and $\lambda_- = 1 - \epsilon$.

For the eigenvectors, we start with $\lambda_+ = 1 + \epsilon$. Substituting this into the eigenvalue problem $(A - \lambda I)|x\rangle = 0$, we find

$$\begin{pmatrix}-\epsilon & \epsilon\\ \epsilon & -\epsilon\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = 0\qquad\Rightarrow\qquad \epsilon(a - b) = 0\qquad\Rightarrow\qquad a = b$$

Since the problem did not ask to normalize the eigenvectors, we can take simply

$$\lambda_+ = 1 + \epsilon:\qquad |x_+\rangle = \begin{pmatrix}1\\ 1\end{pmatrix}$$

For $\lambda_- = 1 - \epsilon$, we obtain instead

$$\begin{pmatrix}\epsilon & \epsilon\\ \epsilon & \epsilon\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = 0\qquad\Rightarrow\qquad \epsilon(a + b) = 0\qquad\Rightarrow\qquad a = -b$$

This gives

$$\lambda_- = 1 - \epsilon:\qquad |x_-\rangle = \begin{pmatrix}1\\ -1\end{pmatrix}$$

Note that the eigenvectors $|x_+\rangle$ and $|x_-\rangle$ are orthogonal and independent of $\epsilon$. In a way, we are just lucky that they are independent of $\epsilon$ (they did not have to turn out that way). However, orthogonality is guaranteed so long as the eigenvalues are distinct (i.e. $\epsilon \ne 0$). This was something we proved in class.
b) Determine the eigenvalues and eigenvectors of

$$\begin{pmatrix}1 & 1\\ \epsilon^2 & 1\end{pmatrix}$$

Note that the eigenvalues are degenerate for $\epsilon = 0$ and for this (nonsymmetric) matrix the eigenvectors ($\epsilon \to 0$) do not span the space.

In this nonsymmetric case, the secular equation is

$$\begin{vmatrix}1 - \lambda & 1\\ \epsilon^2 & 1 - \lambda\end{vmatrix} = (1 - \lambda)^2 - \epsilon^2 = 0$$

Interestingly enough, this equation is the same as (3), even though the matrix is different. Hence this matrix has the same eigenvalues $\lambda_+ = 1 + \epsilon$ and $\lambda_- = 1 - \epsilon$.

For $\lambda_+ = 1 + \epsilon$, the eigenvector equation is

$$\begin{pmatrix}-\epsilon & 1\\ \epsilon^2 & -\epsilon\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = 0\qquad\Rightarrow\qquad -\epsilon a + b = 0\qquad\Rightarrow\qquad b = \epsilon a$$

Up to normalization, this gives

$$\lambda_+ = 1 + \epsilon:\qquad |x_+\rangle = \begin{pmatrix}1\\ \epsilon\end{pmatrix}\qquad(4)$$

For the other eigenvalue, $\lambda_- = 1 - \epsilon$, we find

$$\begin{pmatrix}\epsilon & 1\\ \epsilon^2 & \epsilon\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = 0\qquad\Rightarrow\qquad \epsilon a + b = 0\qquad\Rightarrow\qquad b = -\epsilon a$$

Hence, we obtain

$$\lambda_- = 1 - \epsilon:\qquad |x_-\rangle = \begin{pmatrix}1\\ -\epsilon\end{pmatrix}\qquad(5)$$

In this nonsymmetric case, the eigenvectors do depend on $\epsilon$. And furthermore, when $\epsilon = 0$ it is easy to see that both eigenvectors degenerate into the same $\begin{pmatrix}1\\ 0\end{pmatrix}$.
c) Find the cosine of the angle between the two eigenvectors as a function of $\epsilon$ for $0 \le \epsilon \le 1$.

For the eigenvectors of part a), they are orthogonal, so the angle is 90°. Thus this part really refers to the eigenvectors of part b). Recalling that the angle can be defined through the inner product, we have

$$\langle x_+|x_-\rangle = |x_+|\,|x_-|\cos\theta$$

or

$$\cos\theta = \frac{\langle x_+|x_-\rangle}{\langle x_+|x_+\rangle^{1/2}\langle x_-|x_-\rangle^{1/2}}$$

Using the eigenvectors of (4) and (5), we find

$$\cos\theta = \frac{1 - \epsilon^2}{\sqrt{1 + \epsilon^2}\sqrt{1 + \epsilon^2}} = \frac{1 - \epsilon^2}{1 + \epsilon^2}$$

Recall that the Cauchy–Schwarz inequality guarantees that $\cos\theta$ lies between −1 and +1. When $\epsilon = 0$ we find $\cos\theta = 1$, so the eigenvectors are collinear (and degenerate), while for $\epsilon = 1$, we find instead $\cos\theta = 0$, so the eigenvectors are orthogonal.
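The eigenpairs (4) and (5) and the angle formula can be verified numerically for a sample value of $\epsilon$ (an illustrative sketch):

```python
import math

eps = 0.5
A = [[1.0, 1.0], [eps**2, 1.0]]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

# eigenpairs from (4) and (5): lambda = 1 +/- eps with |x> = (1, +/- eps)
for lam, v in ((1 + eps, (1.0, eps)), (1 - eps, (1.0, -eps))):
    Av = matvec(A, v)
    assert all(abs(a - lam * c) < 1e-12 for a, c in zip(Av, v))

# cosine of the angle between the two eigenvectors
vp, vm = (1.0, eps), (1.0, -eps)
cos_theta = (vp[0]*vm[0] + vp[1]*vm[1]) / (math.sqrt(1 + eps**2) * math.sqrt(1 + eps**2))
assert abs(cos_theta - (1 - eps**2) / (1 + eps**2)) < 1e-12
```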
Physics 451 Fall 2004
Homework Assignment #3 Solutions
Textbook problems: Ch. 1: 1.7.1, 1.8.11, 1.8.16, 1.9.12, 1.10.4, 1.12.9
Ch. 2: 2.4.8, 2.4.11
Chapter 1
1.7.1 For a particle moving in a circular orbit $\vec r = \hat x\,r\cos\omega t + \hat y\,r\sin\omega t$

(a) evaluate $\vec r\times\dot{\vec r}$

Taking a time derivative of $\vec r$, we obtain

$$\dot{\vec r} = -\hat x\,r\omega\sin\omega t + \hat y\,r\omega\cos\omega t\qquad(1)$$

Hence

$$\begin{aligned}
\vec r\times\dot{\vec r} &= (\hat x\,r\cos\omega t + \hat y\,r\sin\omega t)\times(-\hat x\,r\omega\sin\omega t + \hat y\,r\omega\cos\omega t)\\
&= (\hat x\times\hat y)\,r^2\omega\cos^2\omega t - (\hat y\times\hat x)\,r^2\omega\sin^2\omega t\\
&= \hat z\,r^2\omega(\sin^2\omega t + \cos^2\omega t) = \hat z\,r^2\omega
\end{aligned}$$

(b) Show that $\ddot{\vec r} + \omega^2\vec r = 0$

The acceleration is the time derivative of (1)

$$\ddot{\vec r} = -\hat x\,r\omega^2\cos\omega t - \hat y\,r\omega^2\sin\omega t = -\omega^2(\hat x\,r\cos\omega t + \hat y\,r\sin\omega t) = -\omega^2\vec r$$

Hence $\ddot{\vec r} + \omega^2\vec r = 0$. This is of course the standard kinematics of uniform circular motion.
1.8.11 Verify the vector identity

$$\vec\nabla\times(\vec A\times\vec B) = (\vec B\cdot\vec\nabla)\vec A - (\vec A\cdot\vec\nabla)\vec B - \vec B(\vec\nabla\cdot\vec A) + \vec A(\vec\nabla\cdot\vec B)$$

This looks like a good time for the BAC–CAB rule. However, we have to be careful since $\vec\nabla$ has both derivative and vector properties. As a derivative, it operates on both $\vec A$ and $\vec B$. Therefore, by the product rule of differentiation, we can write

$$\vec\nabla\times(\vec A\times\vec B) = \vec\nabla_A\times(\vec A\times\vec B) + \vec\nabla_B\times(\vec A\times\vec B)$$

where the subscripts indicate which vector the derivative is acting on. Now that we have specified exactly where the derivative goes, we can treat $\vec\nabla$ as a vector. Using the BAC–CAB rule (once for each term) gives

$$\vec\nabla\times(\vec A\times\vec B) = \vec A(\vec\nabla_A\cdot\vec B) - \vec B(\vec\nabla_A\cdot\vec A) + \vec A(\vec\nabla_B\cdot\vec B) - \vec B(\vec\nabla_B\cdot\vec A)\qquad(2)$$

The first and last terms on the right hand side are backwards, since in them the derivative sits inside a dot product with the vector it does not act on. However, we can turn them around. For example

$$\vec A(\vec\nabla_A\cdot\vec B) = \vec A(\vec B\cdot\vec\nabla_A) = (\vec B\cdot\vec\nabla)\vec A$$

With the derivatives acting in the right place [after flipping the first and last terms in (2)], we find simply

$$\vec\nabla\times(\vec A\times\vec B) = (\vec B\cdot\vec\nabla)\vec A - \vec B(\vec\nabla\cdot\vec A) + \vec A(\vec\nabla\cdot\vec B) - (\vec A\cdot\vec\nabla)\vec B$$

which is what we set out to prove.
1.8.16 An electric dipole of moment $\vec p$ is located at the origin. The dipole creates an electric potential at $\vec r$ given by

$$\psi(\vec r\,) = \frac{\vec p\cdot\vec r}{4\pi\epsilon_0r^3}$$

Find the electric field, $\vec E = -\vec\nabla\psi$, at $\vec r$.

We first use the quotient rule to write

$$\vec E = -\vec\nabla\psi = -\frac{1}{4\pi\epsilon_0}\vec\nabla\left(\frac{\vec p\cdot\vec r}{r^3}\right) = -\frac{1}{4\pi\epsilon_0}\,\frac{r^3\vec\nabla(\vec p\cdot\vec r\,) - (\vec p\cdot\vec r\,)\vec\nabla(r^3)}{r^6}$$

Applying the chain rule to the second term in the numerator, we obtain

$$\vec E = -\frac{1}{4\pi\epsilon_0}\,\frac{r^3\vec\nabla(\vec p\cdot\vec r\,) - 3r^2(\vec p\cdot\vec r\,)\vec\nabla r}{r^6}$$

We now evaluate the two separate gradients

$$\vec\nabla(\vec p\cdot\vec r\,) = \hat x_i\frac{\partial}{\partial x_i}(p_jx_j) = \hat x_i\,p_j\frac{\partial x_j}{\partial x_i} = \hat x_i\,p_j\delta_{ij} = \hat x_i\,p_i = \vec p$$

and

$$\vec\nabla r = \hat x_i\frac{\partial}{\partial x_i}\sqrt{x_1^2 + x_2^2 + x_3^2} = \hat x_i\,\frac12\,\frac{2x_i}{\sqrt{x_1^2 + x_2^2 + x_3^2}} = \frac{\hat x_i\,x_i}{r} = \frac{\vec r}{r} = \hat r$$

Hence

$$\vec E = -\frac{1}{4\pi\epsilon_0}\,\frac{r^3\vec p - 3r^2(\vec p\cdot\vec r\,)\hat r}{r^6} = \frac{3(\vec p\cdot\hat r)\hat r - \vec p}{4\pi\epsilon_0r^3}$$

Note that we have used the fact that $\vec p$ is a constant, although this was never stated in the problem.
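The gradient computation can be checked by differentiating the potential numerically (a finite-difference sketch in units where $4\pi\epsilon_0 = 1$; the dipole moment and field point are arbitrary test values):

```python
def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

p = (0.2, -0.4, 1.0)   # dipole moment (test values)

def psi(r):
    rr = dot(r, r) ** 0.5
    return dot(p, r) / rr**3

def grad(f, r, h=1e-6):
    # central-difference gradient of a scalar function of a 3-vector
    g = []
    for i in range(3):
        rp, rm = list(r), list(r)
        rp[i] += h
        rm[i] -= h
        g.append((f(tuple(rp)) - f(tuple(rm))) / (2 * h))
    return tuple(g)

r = (1.0, 2.0, -1.5)
rr = dot(r, r) ** 0.5
rhat = tuple(c / rr for c in r)
E_formula = tuple((3 * dot(p, rhat) * rh - pc) / rr**3 for rh, pc in zip(rhat, p))
E_numeric = tuple(-g for g in grad(psi, r))   # E = -grad(psi)
assert all(abs(a - b) < 1e-6 for a, b in zip(E_formula, E_numeric))
```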
1.9.12 Show that any solution of the equation

$$\vec\nabla\times(\vec\nabla\times\vec A) - k^2\vec A = 0$$

automatically satisfies the vector Helmholtz equation

$$\nabla^2\vec A + k^2\vec A = 0$$

and the solenoidal condition

$$\vec\nabla\cdot\vec A = 0$$

We actually follow the hint and demonstrate the solenoidal condition first. Taking the divergence of the first equation, we find

$$\vec\nabla\cdot[\vec\nabla\times(\vec\nabla\times\vec A)] - k^2\vec\nabla\cdot\vec A = 0$$

However, the divergence of a curl vanishes identically. Hence the first term is automatically equal to zero, and we are left with $k^2\vec\nabla\cdot\vec A = 0$ or (upon dividing by the constant $k^2$) $\vec\nabla\cdot\vec A = 0$.

We now return to the first equation and simplify the double curl using the BAC–CAB rule (taking into account the fact that all derivatives must act on $\vec A$)

$$\vec\nabla\times(\vec\nabla\times\vec A) = \vec\nabla(\vec\nabla\cdot\vec A) - \nabla^2\vec A\qquad(3)$$

As a result, the first equation becomes

$$\vec\nabla(\vec\nabla\cdot\vec A) - \nabla^2\vec A - k^2\vec A = 0$$

However, we have shown above that $\vec\nabla\cdot\vec A = 0$ for this problem. Thus (3) reduces to

$$\nabla^2\vec A + k^2\vec A = 0$$

which is what we wanted to show.
1.10.4 Evaluate $\oint\vec r\cdot d\vec r$

We have evaluated this integral in class. For a line integral from point 1 to point 2, we have

$$\int_1^2\vec r\cdot d\vec r = \frac12\int_1^2d(r^2) = \frac12r^2\Big|_1^2 = \frac12r_2^2 - \frac12r_1^2$$

However for a closed path, point 1 and point 2 are the same. Thus the integral along a closed loop vanishes, $\oint\vec r\cdot d\vec r = 0$. Note that this vanishing of the line integral around a closed loop is the sign of a conservative force.

Alternatively, we can apply Stokes' theorem

$$\oint\vec r\cdot d\vec r = \int_S\vec\nabla\times\vec r\cdot d\vec\sigma$$

It is easy to see that $\vec r$ is curl-free. Hence the surface integral on the right hand side vanishes.
1.12.9 Prove that

$$\oint u\vec\nabla v\cdot d\vec\lambda = -\oint v\vec\nabla u\cdot d\vec\lambda$$

This is an application of Stokes' theorem. Let us write

$$\oint(u\vec\nabla v + v\vec\nabla u)\cdot d\vec\lambda = \int_S\vec\nabla\times(u\vec\nabla v + v\vec\nabla u)\cdot d\vec\sigma\qquad(4)$$

We now expand the curl using

$$\vec\nabla\times(u\vec\nabla v) = (\vec\nabla u)\times(\vec\nabla v) + u\,\vec\nabla\times\vec\nabla v = (\vec\nabla u)\times(\vec\nabla v)$$

where we have also used the fact that the curl of a gradient vanishes. Returning to (4), this indicates that

$$\oint(u\vec\nabla v + v\vec\nabla u)\cdot d\vec\lambda = \int_S[(\vec\nabla u)\times(\vec\nabla v) + (\vec\nabla v)\times(\vec\nabla u)]\cdot d\vec\sigma = 0$$

where the vanishing of the right hand side is guaranteed by the antisymmetry of the cross-product, $\vec A\times\vec B = -\vec B\times\vec A$.
Chapter 2
2.4.8 Find the circular cylindrical components of the velocity and acceleration of a moving particle.

We first explore the time derivatives of the cylindrical coordinate basis vectors. Since

$$\hat\rho = (\cos\varphi, \sin\varphi, 0),\qquad \hat\varphi = (-\sin\varphi, \cos\varphi, 0),\qquad \hat z = (0, 0, 1)$$

their derivatives are

$$\frac{\partial\hat\rho}{\partial\varphi} = (-\sin\varphi, \cos\varphi, 0) = \hat\varphi,\qquad \frac{\partial\hat\varphi}{\partial\varphi} = (-\cos\varphi, -\sin\varphi, 0) = -\hat\rho$$

Using the chain rule, this indicates that

$$\dot{\hat\rho} = \dot\varphi\,\hat\varphi,\qquad \dot{\hat\varphi} = -\dot\varphi\,\hat\rho\qquad(5)$$

Now, we note that the position vector is given by

$$\vec r = \rho\hat\rho + z\hat z$$

So all we have to do to find the velocity is to take a time derivative

$$\vec v = \dot{\vec r} = \dot\rho\,\hat\rho + \dot z\,\hat z + \rho\,\dot{\hat\rho} = \dot\rho\,\hat\rho + \dot z\,\hat z + \rho\dot\varphi\,\hat\varphi$$

Note that we have used the expression for $\dot{\hat\rho}$ in (5). Taking one more time derivative yields the acceleration

$$\begin{aligned}
\vec a = \dot{\vec v} &= \ddot\rho\,\hat\rho + \ddot z\,\hat z + (\dot\rho\dot\varphi + \rho\ddot\varphi)\hat\varphi + \dot\rho\,\dot{\hat\rho} + \rho\dot\varphi\,\dot{\hat\varphi}\\
&= \ddot\rho\,\hat\rho + \ddot z\,\hat z + (\dot\rho\dot\varphi + \rho\ddot\varphi)\hat\varphi + \dot\rho\dot\varphi\,\hat\varphi - \rho\dot\varphi^2\,\hat\rho\\
&= (\ddot\rho - \rho\dot\varphi^2)\hat\rho + \ddot z\,\hat z + (\rho\ddot\varphi + 2\dot\rho\dot\varphi)\hat\varphi
\end{aligned}$$
2.4.11 For the flow of an incompressible viscous fluid the Navier–Stokes equations lead to

$$\vec\nabla\times(\vec v\times(\vec\nabla\times\vec v\,)) = \frac{\eta}{\rho_0}\nabla^2(\vec\nabla\times\vec v\,)$$

Here $\eta$ is the viscosity and $\rho_0$ the density of the fluid. For axial flow in a cylindrical pipe we take the velocity $\vec v$ to be

$$\vec v = \hat z\,v(\rho)$$

From Example 2.4.1,

$$\vec\nabla\times(\vec v\times(\vec\nabla\times\vec v\,)) = 0$$

for this choice of $\vec v$. Show that

$$\nabla^2(\vec\nabla\times\vec v\,) = 0$$

leads to the differential equation

$$\frac1\rho\frac{d}{d\rho}\left(\rho\,\frac{d^2v}{d\rho^2}\right) - \frac{1}{\rho^2}\frac{dv}{d\rho} = 0$$

and that this is satisfied by

$$v = v_0 + a\rho^2$$

This problem is an exercise in applying the vector differential operators in cylindrical coordinates. Let us first compute $\vec V = \vec\nabla\times\vec v$

$$\vec V = \vec\nabla\times\vec v = \frac1\rho\begin{vmatrix}\hat\rho & \rho\hat\varphi & \hat z\\ \partial_\rho & \partial_\varphi & \partial_z\\ 0 & 0 & v(\rho)\end{vmatrix} = -\hat\varphi\,\frac{dv}{d\rho}\qquad\Rightarrow\qquad V_\varphi = -\frac{dv}{d\rho}$$

Note that, since $v(\rho)$ is a function of a single variable, partial derivatives of v are the same as ordinary derivatives. Next we need to compute the vector Laplacian $\nabla^2(\vec\nabla\times\vec v\,) = \nabla^2\vec V$. Using (2.35) in the textbook, and the fact that only the $V_\varphi$ component is non-vanishing, we find

$$(\nabla^2\vec V)_\rho = -\frac{2}{\rho^2}\frac{\partial V_\varphi}{\partial\varphi} = 0$$
$$(\nabla^2\vec V)_\varphi = \nabla^2(V_\varphi) - \frac{1}{\rho^2}V_\varphi = -\nabla^2\left(\frac{dv}{d\rho}\right) + \frac{1}{\rho^2}\frac{dv}{d\rho}$$
$$(\nabla^2\vec V)_z = 0$$

This indicates that only the $\varphi$ component of the vector Laplacian gives a non-trivial equation. Finally, we evaluate the scalar Laplacian $\nabla^2(dv/d\rho)$ to obtain

$$(\nabla^2\vec V)_\varphi = -\frac1\rho\frac{d}{d\rho}\left(\rho\,\frac{d^2v}{d\rho^2}\right) + \frac{1}{\rho^2}\frac{dv}{d\rho}\qquad(6)$$

Setting this equal to zero gives the equation that we were asked to prove.

To prove that $v = v_0 + a\rho^2$ satisfies the (third order!) differential equation, all we have to do is substitute it in. However, it is more fun to go ahead and solve the equation. First we notice that v only enters through its derivative $f = dv/d\rho$. Substituting this into (6), we find

$$\frac1\rho\frac{d}{d\rho}\left(\rho\,\frac{df}{d\rho}\right) - \frac{1}{\rho^2}f = 0$$

Expanding the derivatives in the first term yields

$$\frac{d^2f}{d\rho^2} + \frac1\rho\frac{df}{d\rho} - \frac{1}{\rho^2}f = 0$$

Since this is a homogeneous equation, we may substitute in $f = \rho^\alpha$ to obtain the algebraic equation

$$\alpha(\alpha - 1) + \alpha - 1 = 0\qquad\Rightarrow\qquad \alpha = \pm1$$

This indicates that the general solution for $f(\rho)$ is of the form

$$f = 2a\rho + b\rho^{-1}$$

where the factor of 2 is chosen for later convenience. Integrating f once to obtain v, we find

$$v = \int f\,d\rho = v_0 + a\rho^2 + b\log\rho$$

which agrees with the given solution, except for the log term. However, now we can appeal to physical boundary conditions for fluid flow in the cylindrical pipe. The point $\rho = 0$ corresponds to the central axis of the pipe. At this point, the fluid velocity should not be infinite. Hence we must throw away the log, or in other words we must set b = 0, so that $v = v_0 + a\rho^2$.

Incidentally, the fluid flow boundary conditions should be that the velocity vanishes at the wall of the pipe. If we let R be the radius of the pipe, this means that we can write the solution as

$$v(\rho) = v_{\rm max}\left(1 - \frac{\rho^2}{R^2}\right)$$

where the maximum velocity $v_{\rm max}$ is for the fluid along the central axis (with the velocity going to zero quadratically as a function of the radius).
Physics 451 Fall 2004
Homework Assignment #4 Solutions
Textbook problems: Ch. 2: 2.5.11, 2.6.5, 2.9.6, 2.9.12, 2.10.6, 2.10.11, 2.10.12
Chapter 2
2.5.11 A particle m moves in response to a central force according to Newton's second law

$$m\ddot{\vec r} = \hat r\,f(\vec r\,)$$

Show that $\vec r\times\dot{\vec r} = \vec c$, a constant, and that the geometric interpretation of this leads to Kepler's second law.

Actually, $\vec r\times\dot{\vec r}$ is basically the angular momentum, $\vec L = \vec r\times\vec p = m\vec r\times\dot{\vec r}$. To show that $\vec L$ is constant, we can take its time derivative

$$\dot{\vec L} = \frac{d}{dt}(m\vec r\times\dot{\vec r}) = m\dot{\vec r}\times\dot{\vec r} + m\vec r\times\ddot{\vec r}$$

The first cross-product vanishes. So, by using Newton's second law, we end up with

$$\dot{\vec L} = \vec r\times\hat r\,f(\vec r\,) = (\vec r\times\vec r\,)\frac{f(\vec r\,)}{r} = 0$$

This indicates that the angular momentum $\vec L$ is a constant in time (i.e. that it is conserved). The constant vector $\vec c$ of the problem is just $\vec L/m$. Note that this proof works for any central force, not just the inverse square force law.

For the geometric interpretation, consider the orbit of the particle m. [Figure: the orbit, showing the position $\vec r$ and the displacement $d\vec r$.] The amount of area swept out by the particle is given by the area of the triangle

$$dA = \frac12|\vec r\times d\vec r\,|$$

So the area swept out in a given time dt is simply

$$\frac{dA}{dt} = \frac12\left|\vec r\times\frac{d\vec r}{dt}\right| = \frac12|\vec r\times\dot{\vec r}\,|$$

Since this is a constant, we find that equal areas are swept out in equal times. This is just Kepler's second law (which is also the law of conservation of angular momentum).
2.6.5 The four-dimensional, fourth-rank Riemann–Christoffel curvature tensor of general relativity $R_{iklm}$ satisfies the symmetry relations

$$R_{iklm} = -R_{ikml} = -R_{kilm}$$

With the indices running from 0 to 3, show that the number of independent components is reduced from 256 to 36 and that the condition

$$R_{iklm} = R_{lmik}$$

further reduces the number of independent components to 21. Finally, if the components satisfy an identity $R_{iklm} + R_{ilmk} + R_{imkl} = 0$, show that the number of independent components is reduced to 20.

Here we just have to do some counting. For a general rank-4 tensor in four dimensions, since each index can take any of four possible values, the number of independent components is simply

$$\text{independent components} = 4^4 = 256$$

Taking into account the first symmetry relation, the first part

$$R_{iklm} = -R_{ikml}$$

indicates that the Riemann tensor is antisymmetric when the last pair of indices is switched. Thinking of the last pair of indices as specifying a 4×4 antisymmetric matrix, this means instead of having $4^2 = 16$ independent elements, we actually only have $\frac12(4)(3) = 6$ independent choices for the last index pair (this is the number of elements in an antisymmetric 4×4 matrix). Similarly, the second part of the first symmetry relation

$$R_{iklm} = -R_{kilm}$$

indicates that the Riemann tensor is antisymmetric in the first pair of indices. As a result, the same argument gives only 6 independent choices for the first index pair. This accounts for

$$\text{independent components} = 6\times6 = 36$$

We are now able to handle the second condition

$$R_{iklm} = R_{lmik}$$

By now, it should be obvious that this statement indicates that the Riemann tensor is symmetric when the first index pair is interchanged with the second index pair. The counting of independent components is then the same as that for a 6×6 symmetric matrix. This gives

$$\text{independent components} = \frac12(6)(7) = 21$$

Finally, the last identity is perhaps the trickiest to deal with. As indicated in the note, this only gives additional information when all four indices are different. Setting iklm to be 0123, this gives

$$R_{0123} + R_{0231} + R_{0312} = 0\qquad(1)$$

As a result, this can be used to remove one more component, leading to

$$\text{independent components} = 21 - 1 = 20$$

We can, of course, worry that a different combination of iklm (say 1302 or something like that) will give further relations that can be used to remove additional components. However, this is not the case, as can be seen by applying the first two relations.

Note that it is an interesting exercise to count the number of independent components in the Riemann tensor in d dimensions. The result is

$$\text{independent components for }d\text{ dimensions} = \frac{1}{12}d^2(d^2 - 1)$$

Putting in d = 4 yields the expected 20. However, it is fun to note that putting in d = 1 gives 0 (you cannot have curvature in only one dimension) and putting in d = 2 gives 1 (there is exactly one independent measure of curvature in two dimensions).
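The d-dimensional counting formula is easy to tabulate (a trivial sketch):

```python
# Independent components of the Riemann tensor in d dimensions: d^2 (d^2 - 1) / 12
def riemann_components(d):
    return d**2 * (d**2 - 1) // 12

# d = 1: no curvature; d = 2: one curvature scalar; d = 4: the 20 found above
assert [riemann_components(d) for d in (1, 2, 3, 4)] == [0, 1, 6, 20]
```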
2.9.6 a) Show that the inertia tensor (matrix) of Section 3.5 may be written

$$I_{ij} = m(r^2\delta_{ij} - x_ix_j)\qquad\text{[typo corrected!]}$$

for a particle of mass m at $(x_1, x_2, x_3)$.

Note that, for a single particle, the inertia tensor of Section 3.5 is specified as

$$I_{xx} = m(r^2 - x^2),\qquad I_{xy} = -mxy,\qquad\text{etc.}$$

Using i = 1, 2, 3 notation, this is the same as indicating

$$I_{ij} = \begin{cases}m(r^2 - x_i^2) & i = j\\ -mx_ix_j & i \ne j\end{cases}$$

We can enforce the condition i = j by using the Kronecker delta, $\delta_{ij}$. Similarly, the condition $i \ne j$ can be enforced by the opposite expression $1 - \delta_{ij}$. This means we can write

$$I_{ij} = m(r^2 - x_i^2)\delta_{ij} - mx_ix_j(1 - \delta_{ij})\qquad\text{(no sum)}$$

Distributing the factors out, and noting that it is safe to set $x_ix_j\delta_{ij} = x_i^2\delta_{ij}$, we end up with

$$I_{ij} = mr^2\delta_{ij} - mx_i^2\delta_{ij} - mx_ix_j + mx_i^2\delta_{ij} = m(r^2\delta_{ij} - x_ix_j)$$

Note that there is a typo in the book's version of the homework exercise!
b) Show that

$$I_{ij} = -M_{il}M_{lj} = -m\,\epsilon_{ilk}x_k\,\epsilon_{ljm}x_m$$

where $M_{il} = m^{1/2}\epsilon_{ilk}x_k$. This is the contraction of two second-rank tensors and is identical with the matrix product of Section 3.2. (Since M is antisymmetric, the diagonal entries of $M_{il}M_{lj}$ are non-positive, so the overall minus sign is required.)

We may calculate

$$-M_{il}M_{lj} = -m\,\epsilon_{ilk}x_k\,\epsilon_{ljm}x_m = -m\,\epsilon_{lki}\epsilon_{ljm}x_kx_m$$

Note that the product of two epsilons can be re-expressed as

$$\epsilon_{lki}\epsilon_{ljm} = \delta_{kj}\delta_{im} - \delta_{km}\delta_{ij}\qquad(2)$$

This is actually the BAC–CAB rule in index notation. Hence

$$-M_{il}M_{lj} = -m(\delta_{kj}\delta_{im} - \delta_{km}\delta_{ij})x_kx_m = -m(\delta_{kj}x_k\,\delta_{im}x_m - \delta_{km}x_kx_m\,\delta_{ij}) = -m(x_jx_i - x_kx_k\,\delta_{ij}) = m(r^2\delta_{ij} - x_ix_j) = I_{ij}$$

Note that we have used the fact that $x_kx_k = x_1^2 + x_2^2 + x_3^2 = r^2$.
2.9.12 Given $A_k = \frac12\epsilon_{ijk}B_{ij}$ with $B_{ij} = -B_{ji}$, antisymmetric, show that

$$B_{mn} = \epsilon_{mnk}A_k$$

Given $A_k = \frac12\epsilon_{ijk}B_{ij}$, we compute

$$\epsilon_{mnk}A_k = \frac12\epsilon_{mnk}\epsilon_{ijk}B_{ij} = \frac12\epsilon_{kmn}\epsilon_{kij}B_{ij} = \frac12(\delta_{mi}\delta_{nj} - \delta_{mj}\delta_{ni})B_{ij} = \frac12(B_{mn} - B_{nm}) = B_{mn}$$

We have used the antisymmetric nature of $B_{ij}$ in the last step.
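The round trip $B_{ij}\to A_k\to B_{mn}$ can be checked directly (a sketch with a sample antisymmetric matrix; the polynomial formula for the Levi-Civita symbol is a standard trick):

```python
# Round trip: B -> A_k = (1/2) eps_ijk B_ij -> eps_mnk A_k recovers B
def eps(i, j, k):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (j - i) * (k - j) * (k - i) // 2

B = [[0, 3, -1], [-3, 0, 2], [1, -2, 0]]   # sample antisymmetric matrix
A = [sum(eps(i, j, k) * B[i][j] for i in range(3) for j in range(3)) / 2
     for k in range(3)]
B_back = [[sum(eps(m, n, k) * A[k] for k in range(3)) for n in range(3)]
          for m in range(3)]
assert all(B_back[m][n] == B[m][n] for m in range(3) for n in range(3))
```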
2.10.6 Derive the covariant and contravariant metric tensors for circular cylindrical coordinates.

There are several ways to derive the metric. For example, we may use the relation between Cartesian and cylindrical coordinates

$$x = \rho\cos\varphi,\qquad y = \rho\sin\varphi,\qquad z = z\qquad(3)$$

to compute the differentials

$$dx = d\rho\cos\varphi - \rho\sin\varphi\,d\varphi,\qquad dy = d\rho\sin\varphi + \rho\cos\varphi\,d\varphi,\qquad dz = dz$$

The line element is then

$$ds^2 = dx^2 + dy^2 + dz^2 = (d\rho\cos\varphi - \rho\sin\varphi\,d\varphi)^2 + (d\rho\sin\varphi + \rho\cos\varphi\,d\varphi)^2 + dz^2 = d\rho^2 + \rho^2d\varphi^2 + dz^2$$

Since $ds^2 = g_{ij}dx^idx^j$ [where $(x^1, x^2, x^3) = (\rho, \varphi, z)$] we may write the covariant metric tensor (matrix) as

$$g_{ij} = \begin{pmatrix}1 & 0 & 0\\ 0 & \rho^2 & 0\\ 0 & 0 & 1\end{pmatrix}\qquad(4)$$

Alternatively, the metric is given by $g_{ij} = \vec e_i\cdot\vec e_j$ where the basis vectors are

$$\vec e_i = \frac{\partial\vec r}{\partial x^i}$$

Taking partial derivatives of (3), we obtain

$$\vec e_\rho = \hat x\cos\varphi + \hat y\sin\varphi$$
$$\vec e_\varphi = \rho(-\hat x\sin\varphi + \hat y\cos\varphi)$$
$$\vec e_z = \hat z$$

Then

$$g_{\rho\rho} = \vec e_\rho\cdot\vec e_\rho = (\hat x\cos\varphi + \hat y\sin\varphi)\cdot(\hat x\cos\varphi + \hat y\sin\varphi) = \cos^2\varphi + \sin^2\varphi = 1$$
$$g_{\rho\varphi} = \vec e_\rho\cdot\vec e_\varphi = (\hat x\cos\varphi + \hat y\sin\varphi)\cdot\rho(-\hat x\sin\varphi + \hat y\cos\varphi) = \rho(-\cos\varphi\sin\varphi + \sin\varphi\cos\varphi) = 0$$

etc.

The result is the same as (4).

The contravariant components of the metric are given by the matrix inverse of (4)

$$g^{ij} = \begin{pmatrix}1 & 0 & 0\\ 0 & \rho^{-2} & 0\\ 0 & 0 & 1\end{pmatrix}\qquad(5)$$
2.10.11 From the circular cylindrical metric tensor $g_{ij}$ calculate the $\Gamma^k_{ij}$ for circular cylindrical coordinates.

We may compute the Christoffel components using the expression

$$\Gamma_{ijk} = \frac12(\partial_kg_{ij} + \partial_jg_{ik} - \partial_ig_{jk})$$

However, instead of working out all the components one at a time, it is more efficient to examine the metric (4) and to note that the only non-vanishing derivative is

$$\partial_\rho g_{\varphi\varphi} = 2\rho$$

This indicates that the only non-vanishing Christoffel symbols $\Gamma_{ijk}$ are the ones where the three indices ijk are some permutation of $\rho\varphi\varphi$. It is then easy to see that

$$\Gamma_{\varphi\rho\varphi} = \Gamma_{\varphi\varphi\rho} = \rho,\qquad \Gamma_{\rho\varphi\varphi} = -\rho$$

Finally, raising the first index using the inverse metric (5) yields

$$\Gamma^\varphi_{\rho\varphi} = \Gamma^\varphi_{\varphi\rho} = \frac1\rho,\qquad \Gamma^\rho_{\varphi\varphi} = -\rho\qquad(6)$$

Note that the Christoffel symbols are symmetric in the last two indices.
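These symbols can be recovered numerically from the metric by finite differences of $g_{ij}$ (a sketch; the index order `G[k][i][j]` corresponds to $\Gamma^k_{ij}$):

```python
# Finite-difference check for the cylindrical metric g = diag(1, rho^2, 1),
# coordinates (rho, phi, z): Gamma^rho_{phi phi} = -rho, Gamma^phi_{rho phi} = 1/rho
def g(x):
    rho = x[0]
    return [[1.0, 0, 0], [0, rho**2, 0], [0, 0, 1.0]]

def dg(x, k, h=1e-6):
    # central difference of the metric with respect to coordinate k
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    gp, gm = g(xp), g(xm)
    return [[(gp[i][j] - gm[i][j]) / (2*h) for j in range(3)] for i in range(3)]

def christoffel(x):
    rho = x[0]
    ginv = [[1.0, 0, 0], [0, 1.0/rho**2, 0], [0, 0, 1.0]]
    d = [dg(x, k) for k in range(3)]          # d[k][i][j] = partial_k g_ij
    return [[[0.5 * sum(ginv[k][l] * (d[i][l][j] + d[j][l][i] - d[l][i][j])
                        for l in range(3))
              for j in range(3)] for i in range(3)] for k in range(3)]

x = (1.7, 0.4, 0.0)                 # an arbitrary point with rho = 1.7
G = christoffel(x)
assert abs(G[0][1][1] - (-1.7)) < 1e-6    # Gamma^rho_{phi phi} = -rho
assert abs(G[1][0][1] - 1/1.7) < 1e-6     # Gamma^phi_{rho phi} = 1/rho
```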
2.10.12 Using the $\Gamma^k_{ij}$ from Exercise 2.10.11, write out the covariant derivatives $V^i{}_{;j}$ of a vector $\vec V$ in circular cylindrical coordinates.

Recall that the covariant derivative of a contravariant vector is given by

$$V^i{}_{;j} = V^i{}_{,j} + \Gamma^i_{jk}V^k$$

where the semi-colon indicates covariant differentiation and the comma indicates ordinary partial differentiation. To work out the covariant derivative, we just have to use (6) for the non-vanishing Christoffel connections. The result is

$$V^\rho{}_{;\rho} = V^\rho{}_{,\rho}$$
$$V^\varphi{}_{;\rho} = V^\varphi{}_{,\rho} + \Gamma^\varphi_{\rho k}V^k = V^\varphi{}_{,\rho} + \frac1\rho V^\varphi$$
$$V^z{}_{;\rho} = V^z{}_{,\rho}$$
$$V^\rho{}_{;\varphi} = V^\rho{}_{,\varphi} + \Gamma^\rho_{\varphi k}V^k = V^\rho{}_{,\varphi} - \rho V^\varphi$$
$$V^\varphi{}_{;\varphi} = V^\varphi{}_{,\varphi} + \Gamma^\varphi_{\varphi k}V^k = V^\varphi{}_{,\varphi} + \frac1\rho V^\rho$$
$$V^z{}_{;\varphi} = V^z{}_{,\varphi}$$
$$V^\rho{}_{;z} = V^\rho{}_{,z}$$
$$V^\varphi{}_{;z} = V^\varphi{}_{,z}$$
$$V^z{}_{;z} = V^z{}_{,z}$$

Note that, corresponding to the three non-vanishing Christoffel symbols, only three of the expressions are modified in the covariant derivative.
Physics 451 Fall 2004
Homework Assignment #5 Solutions
Textbook problems: Ch. 5: 5.1.1, 5.1.2
Chapter 5
5.1.1 Show that
$$\sum_{n=1}^\infty \frac{1}{(2n-1)(2n+1)} = \frac12$$
We take the hint and use mathematical induction. First, we assume that
$$s_m = \frac{m}{2m+1} \qquad(1)$$
In this case, the next partial sum becomes
$$s_{m+1} = s_m + a_{m+1} = \frac{m}{2m+1} + \frac{1}{(2(m+1)-1)(2(m+1)+1)} = \frac{m}{2m+1} + \frac{1}{(2m+1)(2m+3)}$$
$$= \frac{m(2m+3)+1}{(2m+1)(2m+3)} = \frac{2m^2+3m+1}{(2m+1)(2m+3)} = \frac{(m+1)(2m+1)}{(2m+1)(2m+3)} = \frac{m+1}{2(m+1)+1}$$
which is of the correct form (1). Finally, by explicit computation, we see that $s_1 = 1/(1\cdot3) = 1/3 = 1/(2\cdot1+1)$, so that (1) is correct for $s_1$. Therefore, by induction, we conclude that the $m$th partial sum is exactly $s_m = m/(2m+1)$. It is now simple to take the limit to obtain
$$S = \lim_{m\to\infty}s_m = \lim_{m\to\infty}\frac{m}{2m+1} = \frac12$$
Note that we could also have evaluated this sum by partial fraction expansion
$$\sum_{n=1}^\infty\frac{1}{(2n-1)(2n+1)} = \sum_{n=1}^\infty\left(\frac{1}{2(2n-1)} - \frac{1}{2(2n+1)}\right)$$
Since this is a telescoping series, we have
$$s_m = \frac{1}{2(2\cdot1-1)} - \frac{1}{2(2m+1)} = \frac{m}{2m+1}$$
which agrees with (1).
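As a quick numerical sanity check (not part of the original solution; the function names are mine), the closed form $s_m = m/(2m+1)$ can be compared against a direct partial sum using exact rational arithmetic:

```python
from fractions import Fraction

def partial_sum(m):
    # Direct partial sum of 1/((2n-1)(2n+1)) in exact rational arithmetic
    return sum(Fraction(1, (2*n - 1)*(2*n + 1)) for n in range(1, m + 1))

def closed_form(m):
    # The induction/telescoping result s_m = m/(2m+1)
    return Fraction(m, 2*m + 1)

# The two agree exactly for every m, and s_m -> 1/2 as m grows
for m in (1, 2, 10, 100):
    assert partial_sum(m) == closed_form(m)
print(float(partial_sum(1000)))  # close to 0.5
```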
5.1.2 Show that
$$\sum_{n=1}^\infty\frac{1}{n(n+1)} = 1$$
This problem may be solved in a similar manner. While no hint is given for the form of the partial sum, we may try a few terms
$$s_1 = \frac12,\qquad s_2 = s_1 + \frac{1}{2\cdot3} = \frac23,\qquad s_3 = s_2 + \frac{1}{3\cdot4} = \frac34$$
This suggests that
$$s_m = \frac{m}{m+1} \qquad(2)$$
We now prove this statement by induction. Starting from $s_m$, we compute
$$s_{m+1} = s_m + a_{m+1} = \frac{m}{m+1} + \frac{1}{(m+1)(m+2)} = \frac{m(m+2)+1}{(m+1)(m+2)} = \frac{(m+1)^2}{(m+1)(m+2)} = \frac{m+1}{m+2} = \frac{m+1}{(m+1)+1}$$
Therefore if (2) holds for $m$, it also holds for $m+1$. Finally, since (2) is correct for $s_1 = 1/2$, it must be true for all $m$ by induction.
Taking the limit yields
$$S = \lim_{m\to\infty}s_m = \lim_{m\to\infty}\frac{m}{m+1} = 1$$
The partial fraction approach to this problem is to note that
$$\sum_{n=1}^\infty\frac{1}{n(n+1)} = \sum_{n=1}^\infty\left(\frac1n - \frac{1}{n+1}\right)$$
Hence
$$s_m = \frac11 - \frac{1}{m+1} = \frac{m}{m+1}$$
which reproduces (2).
Additional Problems
1. The metric for a three-dimensional hyperbolic (non-Euclidean) space can be written as
$$ds^2 = L^2\,\frac{dx^2 + dy^2 + dz^2}{z^2}$$
where $L$ is a constant with dimensions of length. Calculate the non-vanishing Christoffel coefficients for this metric.

We first note that the metric is given in matrix form as
$$g_{ij} = \begin{pmatrix}L^2/z^2 & 0 & 0\\ 0 & L^2/z^2 & 0\\ 0 & 0 & L^2/z^2\end{pmatrix}$$
so the non-zero components are
$$g_{xx} = \frac{L^2}{z^2},\qquad g_{yy} = \frac{L^2}{z^2},\qquad g_{zz} = \frac{L^2}{z^2} \qquad(3)$$
The covariant components of the Christoffel connection are obtained from the metric by
$$\Gamma_{ijk} = \tfrac12\left(g_{ij,k} + g_{ik,j} - g_{jk,i}\right)$$
where the comma denotes partial differentiation. According to (3), the only non-zero metric components have repeated indices. In addition, only the $z$-derivative is non-vanishing. Hence we conclude that the only non-vanishing Christoffel symbols must have two repeated indices combined with a $z$ index. Recalling that $\Gamma_{ijk}$ is symmetric in the last two indices, we compute
$$\Gamma_{zxx} = -\tfrac12 g_{xx,z} = \frac{L^2}{z^3},\qquad \Gamma_{xzx} = \Gamma_{xxz} = \tfrac12 g_{xx,z} = -\frac{L^2}{z^3}$$
$$\Gamma_{zyy} = -\tfrac12 g_{yy,z} = \frac{L^2}{z^3},\qquad \Gamma_{yzy} = \Gamma_{yyz} = \tfrac12 g_{yy,z} = -\frac{L^2}{z^3}$$
$$\Gamma_{zzz} = \tfrac12 g_{zz,z} = -\frac{L^2}{z^3}$$
Raising the first index using the inverse metric $g^{ij} = (z^2/L^2)\,\delta^{ij}$ finally yields
$$\Gamma^z_{xx} = \frac1z,\qquad \Gamma^x_{zx} = \Gamma^x_{xz} = -\frac1z$$
$$\Gamma^z_{yy} = \frac1z,\qquad \Gamma^y_{zy} = \Gamma^y_{yz} = -\frac1z$$
$$\Gamma^z_{zz} = -\frac1z \qquad(4)$$
2. The motion of a free particle moving along a path $x^i(t)$ in hyperbolic space is governed by the geodesic equation
$$\ddot x^i(t) + \Gamma^i_{jk}\,\dot x^j(t)\,\dot x^k(t) = 0$$
Taking $(x^1, x^2, x^3)$ to be $(x, y, z)$, and using the Christoffel coefficients calculated above, show that the geodesic equation is given explicitly by
$$\ddot x - \frac2z\dot x\dot z = 0,\qquad \ddot y - \frac2z\dot y\dot z = 0,\qquad \ddot z + \frac1z(\dot x^2 + \dot y^2 - \dot z^2) = 0$$
Using the Christoffel coefficients in (4), we compute the three components of the geodesic equation
$$\ddot x + \Gamma^x_{xz}\dot x\dot z + \Gamma^x_{zx}\dot z\dot x = 0 \quad\Rightarrow\quad \ddot x - \frac2z\dot x\dot z = 0 \qquad(5)$$
$$\ddot y + \Gamma^y_{yz}\dot y\dot z + \Gamma^y_{zy}\dot z\dot y = 0 \quad\Rightarrow\quad \ddot y - \frac2z\dot y\dot z = 0 \qquad(6)$$
$$\ddot z + \Gamma^z_{xx}\dot x\dot x + \Gamma^z_{yy}\dot y\dot y + \Gamma^z_{zz}\dot z\dot z = 0 \quad\Rightarrow\quad \ddot z + \frac1z(\dot x^2 + \dot y^2 - \dot z^2) = 0 \qquad(7)$$
The geodesic equation is important because it describes the motion of free particles in curved space. However, for this problem, all that is necessary is to show that it gives the system of coupled ordinary differential equations (5), (6), (7).
3. Show that a solution to the geodesic equation of Problem 2 is given by
$$x = x_0 + R\cos\theta\tanh(v_0t),\qquad y = y_0 + R\sin\theta\tanh(v_0t),\qquad z = R\,\mathrm{sech}(v_0t)$$
where $x_0$, $y_0$, $\theta$, $R$, and $v_0$ are constants. Show that the path of the particle lies on a sphere of radius $R$ centered at $(x_0, y_0, 0)$ in the Cartesian coordinate space given by $(x, y, z)$. Note that this demonstrates the non-Euclidean nature of hyperbolic space; in reality the sphere is flat, while the space is curved.

It should be a straightforward exercise to insert the $x$, $y$ and $z$ equations into (5), (6) and (7) to show that it is a solution. However it is actually more interesting to solve the equations directly. We start with the $x$ equation, (5). If we are somewhat clever, we could rewrite (5) as
$$\frac{\ddot x}{\dot x} = 2\frac{\dot z}{z} \quad\Rightarrow\quad \frac{d}{dt}\log\dot x = 2\frac{d}{dt}\log z$$
Both sides of this may be integrated in time to get
$$\dot x = a_x z^2 \qquad(8)$$
where $a_x$ is a constant. It should be clear that the $y$ equation, (6), can be worked on in a similar manner to get
$$\dot y = a_y z^2 \qquad(9)$$
Of course, we have not yet completely solved for $x$ and $y$. But we are a step closer to the solution. Now, inserting (8) and (9) into the $z$ equation, (7), we obtain
$$\ddot z + (a_x^2 + a_y^2)z^3 - \frac{\dot z^2}{z} = 0$$
This non-linear differential equation can be simplified by performing the substitution $z(t) = 1/u(t)$. Noting that
$$\dot z = -\frac{\dot u}{u^2},\qquad \ddot z = -\frac{\ddot u}{u^2} + 2\frac{\dot u^2}{u^3}$$
the $z$ equation may be rewritten as
$$u\ddot u - \dot u^2 = (a_x^2 + a_y^2)$$
While this equation is still non-linear, it is possible to obtain a general solution
$$u(t) = \frac{1}{v_0}\sqrt{a_x^2 + a_y^2}\,\cosh(v_0(t - t_0))$$
where $v_0$ and $t_0$ are constants.
Given the solution for $z = 1/u$, we now insert this back into (8) to obtain
$$\dot x = \frac{a_x}{u^2} = \frac{v_0^2\,a_x}{a_x^2 + a_y^2}\,\mathrm{sech}^2(v_0(t - t_0))$$
which may be integrated to yield
$$x(t) = x_0 + \frac{v_0\,a_x}{a_x^2 + a_y^2}\tanh(v_0(t - t_0))$$
Similarly, for $y$, we integrate (9) to find
$$y(t) = y_0 + \frac{v_0\,a_y}{a_x^2 + a_y^2}\tanh(v_0(t - t_0))$$
Note that the three (coupled) second order differential equations give rise to six constants of integration, $(x_0, y_0, a_x, a_y, v_0, t_0)$. The expressions may be simplified by defining
$$a_x = \frac{v_0}{R}\cos\theta,\qquad a_y = \frac{v_0}{R}\sin\theta$$
in which case we see that
$$x = x_0 + R\cos\theta\tanh(v_0(t - t_0)),\qquad y = y_0 + R\sin\theta\tanh(v_0(t - t_0)),\qquad z = R\,\mathrm{sech}(v_0(t - t_0))$$
which is the answer we wanted to show, except that here we have retained an extra constant $t_0$ related to the time translation invariance of the system.
Finally, to show that the path of the particle lies on a sphere, all we need to demonstrate is that
$$(x - x_0)^2 + (y - y_0)^2 + z^2 = R^2\cos^2\theta\tanh^2(v_0t) + R^2\sin^2\theta\tanh^2(v_0t) + R^2\,\mathrm{sech}^2(v_0t)$$
$$= R^2\left(\tanh^2(v_0t) + \mathrm{sech}^2(v_0t)\right) = R^2$$
This is indeed the equation for a sphere, $(x - x_0)^2 + (y - y_0)^2 + z^2 = R^2$.
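The claim can also be checked numerically (this is my own sanity check, not part of the original solution): integrate the geodesic equations (5)-(7) with a hand-written RK4 stepper, starting from initial conditions matching the analytic solution with $x_0 = y_0 = 0$, $\theta = 0$, $R = 1$, $v_0 = 1$, $t_0 = 0$, and verify the trajectory both tracks $x = \tanh t$, $z = \mathrm{sech}\,t$ and stays on the unit sphere:

```python
import math

def deriv(state):
    # state = (x, y, z, vx, vy, vz); geodesic equations (5)-(7)
    x, y, z, vx, vy, vz = state
    ax = 2.0*vx*vz/z
    ay = 2.0*vy*vz/z
    az = -(vx*vx + vy*vy - vz*vz)/z
    return (vx, vy, vz, ax, ay, az)

def rk4_step(state, h):
    def add(s, k, f):
        return tuple(si + f*ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h/2))
    k3 = deriv(add(state, k2, h/2))
    k4 = deriv(add(state, k3, h))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# x = tanh t, y = 0, z = sech t  =>  at t = 0: x = 0, z = 1, xdot = 1, zdot = 0
state = (0.0, 0.0, 1.0, 1.0, 0.0, 0.0)
h, steps = 0.001, 2000
for _ in range(steps):
    state = rk4_step(state, h)

t = h*steps
x, y, z = state[0], state[1], state[2]
print(abs(x - math.tanh(t)), abs(z - 1.0/math.cosh(t)), abs(x*x + y*y + z*z - 1.0))
```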
Physics 451 Fall 2004
Homework Assignment #6 Solutions
Textbook problems: Ch. 5: 5.2.6, 5.2.8, 5.2.9, 5.2.19, 5.3.1
Chapter 5
5.2.6 Test for convergence
a) $\sum_{n=2}^\infty(\ln n)^{-1}$
As in all these convergence tests, it is good to first have a general idea of whether we expect this to converge or not, and then find an appropriate test to confirm our hunch. For this one, we can imagine that $\ln n$ grows very slowly, so that its inverse goes to zero very slowly — too slowly, in fact, to converge. To prove this, we can perform a simple comparison test. Since $\ln n < n$ for $n \ge 2$, we see that
$$a_n = (\ln n)^{-1} > n^{-1}$$
Since the harmonic series diverges, and each term is larger than the corresponding harmonic series term, this series must diverge.
Note that in this and all subsequent tests, there may be more than one way to prove convergence/divergence. Your solution may be different than that given here. But any method is okay, so long as the calculations are valid.
b) $\sum_{n=1}^\infty\dfrac{n!}{10^n}$
In this case, when $n$ gets large (which is the only limit we care about), the factorial in the numerator will start to dominate over the power in the denominator. So we expect this to diverge. As a proof, we can perform a simple ratio test
$$a_n = \frac{n!}{10^n} \quad\Rightarrow\quad \frac{a_n}{a_{n+1}} = \frac{10}{n+1}$$
Taking the limit, we obtain
$$\lim_{n\to\infty}\frac{a_n}{a_{n+1}} = 0$$
hence the series diverges by the ratio test.
c) $\sum_{n=1}^\infty\dfrac{1}{2n(2n+1)}$
We first note that this series behaves like $1/4n^2$ for large $n$. As a result, we expect it to converge. To see this, we may consider a simple comparison test
$$a_n = \frac{1}{2n(2n+1)} < \frac{1}{2n\cdot2n} = \frac14\left(\frac{1}{n^2}\right)$$
Since the series $\zeta(2) = \sum_{n=1}^\infty(1/n^2)$ converges, this series converges as well.
d) $\sum_{n=1}^\infty[n(n+1)]^{-1/2}$
This series behaves as $1/n$ for large $n$. Thus we expect it to diverge. While the square root may be a bit awkward to manipulate, we can actually perform a simple comparison test with the harmonic series
$$a_n = \frac{1}{\sqrt{n(n+1)}} > \frac{1}{\sqrt{(n+1)(n+1)}} = \frac{1}{n+1}$$
Because the harmonic series diverges (and we do not care that the comparison starts with the second term in the harmonic series, and not the first) this series also diverges.
e) $\sum_{n=0}^\infty\dfrac{1}{2n+1}$
Since this behaves as $1/2n$ for large $n$, the series ought to diverge. We may either compare this with the harmonic series or perform an integral test. Consider the integral test
$$\int_0^\infty\frac{dx}{2x+1} = \frac12\ln(2x+1)\Big|_0^\infty = \infty$$
Thus the series diverges.
5.2.8 For what values of $p$ and $q$ will the following series converge? $\sum_{n=2}^\infty 1/\left[n^p(\ln n)^q\right]$
Since the $\ln n$ term is not as dominant as the power term $n^p$, we may have some idea that the series ought to converge or diverge as the $1/n^p$ series. To make this more precise, we can use Raabe's test
$$a_n = \frac{1}{n^p(\ln n)^q} \quad\Rightarrow\quad
\frac{a_n}{a_{n+1}} = \frac{(n+1)^p(\ln(n+1))^q}{n^p(\ln n)^q}
= \left(1+\frac1n\right)^p\left(1+\frac{\ln(1+\frac1n)}{\ln n}\right)^q$$
$$= \left(1+\frac1n\right)^p\left(1+\frac{1}{n\ln n}+\cdots\right)^q
= \left(1+\frac pn+\cdots\right)\left(1+\frac{q}{n\ln n}+\cdots\right)
= 1+\frac pn+\frac{q}{n\ln n}+\cdots$$
Note that we have Taylor (or binomial) expanded the expressions several times. Raabe's test then yields
$$\lim_{n\to\infty}n\left(\frac{a_n}{a_{n+1}}-1\right) = \lim_{n\to\infty}\left(p+\frac{q}{\ln n}+\cdots\right) = p$$
This gives convergence for $p > 1$ and divergence for $p < 1$.
For $p = 1$, Raabe's test is ambiguous. However, in this case we can perform an integral test. Since
$$p = 1 \quad\Rightarrow\quad a_n = \frac{1}{n(\ln n)^q}$$
we evaluate
$$\int_2^\infty\frac{dx}{x(\ln x)^q} = \int_{\ln 2}^\infty\frac{du}{u^q}$$
where we have used the substitution $u = \ln x$. This converges for $q > 1$ and diverges otherwise. Hence the final result is
$p > 1$, any $q$: converge
$p = 1$, $q > 1$: converge
$p = 1$, $q \le 1$: diverge
$p < 1$, any $q$: diverge
5.2.9 Determine the range of convergence for Gauss's hypergeometric series
$$F(\alpha,\beta,\gamma;x) = 1 + \frac{\alpha\beta}{1!\,\gamma}x + \frac{\alpha(\alpha+1)\beta(\beta+1)}{2!\,\gamma(\gamma+1)}x^2 + \cdots$$
We first consider non-negative values of $x$ (so that this is a positive series). More or less, this is a power series in $x$. So as long as $\alpha$, $\beta$, $\gamma$ are well behaved, this series ought to converge for $x < 1$ (just like an ordinary geometric series). To see this (and to prepare for Gauss's test), we compute the ratio
$$a_n = \frac{\alpha(\alpha+1)\cdots(\alpha+n-1)\,\beta(\beta+1)\cdots(\beta+n-1)}{n!\,\gamma(\gamma+1)\cdots(\gamma+n-1)}x^n
\quad\Rightarrow\quad \frac{a_n}{a_{n+1}} = \frac{(n+1)(\gamma+n)}{(\alpha+n)(\beta+n)}x^{-1}$$
This allows us to begin with the ratio test
$$\lim_{n\to\infty}\frac{a_n}{a_{n+1}} = \lim_{n\to\infty}\frac{(n+1)(\gamma+n)}{(\alpha+n)(\beta+n)}x^{-1} = x^{-1}$$
Hence the series converges for $x < 1$ and diverges for $x > 1$. However, the ratio test is indeterminate for $x = 1$. This is where we must appeal to Gauss's test. Setting $x = 1$, we have
$$\frac{a_n}{a_{n+1}} = \frac{(n+1)(\gamma+n)}{(\alpha+n)(\beta+n)}$$
Since this approaches 1 as $n\to\infty$, we may highlight this leading behavior by adding and subtracting 1
$$\frac{a_n}{a_{n+1}} = 1+\left[\frac{(n+1)(\gamma+n)}{(\alpha+n)(\beta+n)}-1\right] = 1+\frac{(\gamma-\alpha-\beta+1)n+\gamma-\alpha\beta}{(\alpha+n)(\beta+n)}$$
We can now see that the fraction approaches $(\gamma-\alpha-\beta+1)/n$ as $n$ gets large. This is the $h/n$ behavior that we need to extract for Gauss's test: $a_n/a_{n+1} = 1+h/n+B(n)/n^2$. In principle, we may add and subtract $h/n$ where $h = \gamma-\alpha-\beta+1$ in order to obtain an explicit expression for the remainder term $B(n)/n^2$. However, it should be clear based on a power series expansion that this remainder will indeed behave as $1/n^2$, which is the requirement for applying Gauss's test.
Thus, with $h = \gamma-\alpha-\beta+1$, we see that the hypergeometric series $F(\alpha,\beta,\gamma;1)$ converges for $\gamma > \alpha+\beta$ ($h > 1$) and diverges otherwise.
To summarize, we have proven that for non-negative $x$, the hypergeometric series converges for $x < 1$ (any $\alpha$, $\beta$, $\gamma$) and for $x = 1$ if $\gamma > \alpha+\beta$, and diverges otherwise.
In fact, for negative values of $x$, we may consider the series for $|x|$. In this case, we have absolute convergence for $|x| < 1$ and for $|x| = 1$ if $\gamma > \alpha+\beta$. Based on the ratio test, it is not hard to see that the series also diverges for $|x| > 1$ (for negative $x$, each subsequent term gets larger than the previous one). However, there is also conditional convergence at $x = -1$ for $\alpha+\beta-1 < \gamma \le \alpha+\beta$ (this is harder to show).
5.2.19 Show that the following series is convergent.
$$\sum_{s=0}^\infty\frac{(2s-1)!!}{(2s)!!\,(2s+1)}$$
It is somewhat hard to see what happens when $s$ gets large. However, we can perform Raabe's test
$$a_s = \frac{(2s-1)!!}{(2s)!!\,(2s+1)} \quad\Rightarrow\quad
\frac{a_s}{a_{s+1}} = \frac{(2s-1)!!}{(2s)!!\,(2s+1)}\cdot\frac{(2s+2)!!\,(2s+3)}{(2s+1)!!}
= \frac{(2s-1)!!\,(2s+2)!!\,(2s+3)}{(2s+1)!!\,(2s)!!\,(2s+1)} = \frac{(2s+2)(2s+3)}{(2s+1)(2s+1)}$$
By adding and subtracting 1, we obtain
$$\frac{a_s}{a_{s+1}} = 1+\left[\frac{(2s+2)(2s+3)}{(2s+1)^2}-1\right] = 1+\frac{6s+5}{(2s+1)^2}$$
Then
$$\lim_{s\to\infty}s\left(\frac{a_s}{a_{s+1}}-1\right) = \lim_{s\to\infty}s\left(\frac{6s+5}{(2s+1)^2}\right) = \frac32$$
Since this is greater than 1, the series converges.
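The Raabe limit of $3/2$ is easy to confirm numerically (my own check; the double factorial ratio is accumulated term by term to avoid overflowing floats):

```python
def a(s):
    # a_s = (2s-1)!!/((2s)!!(2s+1)); accumulate the ratio (2k-1)/(2k)
    # factor by factor so the huge double factorials never appear explicitly
    r = 1.0
    for k in range(1, s + 1):
        r *= (2*k - 1)/(2*k)
    return r/(2*s + 1)

def raabe(s):
    # s*(a_s/a_{s+1} - 1), which should approach 3/2
    return s*(a(s)/a(s + 1) - 1.0)

print(raabe(10), raabe(100), raabe(2000))
```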
5.3.1 a) From the electrostatic two hemisphere problem we obtain the series
$$\sum_{s=0}^\infty(-1)^s(4s+3)\frac{(2s-1)!!}{(2s+2)!!}$$
Test it for convergence.
Since this is an alternating series, we may check if it is monotonic decreasing. Taking the ratio, we see that
$$\frac{|a_s|}{|a_{s+1}|} = \frac{(4s+3)(2s-1)!!\,(2s+4)!!}{(4s+7)(2s+1)!!\,(2s+2)!!} = \frac{(4s+3)(2s+4)}{(4s+7)(2s+1)} = \frac{8s^2+22s+12}{8s^2+18s+7} = 1+\frac{4s+5}{8s^2+18s+7} > 1$$
As a result
$$|a_s| > |a_{s+1}|$$
and hence the series converges based on the Leibniz criterion. (Actually, to be careful, we must also show that $\lim_{s\to\infty}a_s = 0$. However, I have ignored this subtlety.)
b) The corresponding series for the surface charge density is
$$\sum_{s=0}^\infty(-1)^s(4s+3)\frac{(2s-1)!!}{(2s)!!}$$
Test it for convergence.
This series is rather similar to that of part a). However the denominator is missing a factor of $(2s+2)$. This makes the series larger (term by term) than the above. To see whether the terms get too large, we may take the ratio
$$\frac{|a_s|}{|a_{s+1}|} = \frac{(4s+3)(2s-1)!!\,(2s+2)!!}{(4s+7)(2s+1)!!\,(2s)!!} = \frac{(4s+3)(2s+2)}{(4s+7)(2s+1)} = \frac{8s^2+14s+6}{8s^2+18s+7} = 1-\frac{4s+1}{8s^2+18s+7} < 1$$
In this case
$$|a_s| < |a_{s+1}|$$
and the series diverges since the terms get larger as $s\to\infty$.
Physics 451 Fall 2004
Homework Assignment #7 Solutions
Textbook problems: Ch. 5: 5.4.1, 5.4.2, 5.4.3, 5.5.2, 5.5.4
Chapter 5
5.4.1 Given the series
$$\ln(1+x) = x-\frac{x^2}2+\frac{x^3}3-\frac{x^4}4+\cdots,\qquad -1 < x \le 1$$
show that
$$\ln\left(\frac{1+x}{1-x}\right) = 2\left(x+\frac{x^3}3+\frac{x^5}5+\cdots\right),\qquad -1 < x < 1$$
We use the property $\ln(a/b) = \ln a-\ln b$ to write
$$\ln\left(\frac{1+x}{1-x}\right) = \ln(1+x)-\ln(1-x) = \sum_{n=1}^\infty(-1)^{n+1}\frac{x^n}n-\sum_{n=1}^\infty(-1)^{n+1}\frac{(-x)^n}n = \sum_{n=1}^\infty\left((-1)^{n+1}+1\right)\frac{x^n}n = 2\sum_{n\ \mathrm{odd}}\frac{x^n}n$$
Note that, since we use the $\ln(1+x)$ series for both $+x$ and $-x$, the common range of convergence is the intersection of $-1 < x \le 1$ and $-1 \le x < 1$, namely $|x| < 1$.
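A truncated version of this odd-power series can be compared directly against the logarithm (a quick numerical check of my own, not from the original solution):

```python
import math

def log_ratio_series(x, terms=200):
    # 2*(x + x^3/3 + x^5/5 + ...), truncated after `terms` odd powers
    return 2.0*sum(x**n/n for n in range(1, 2*terms, 2))

x = 0.5
lhs = math.log((1 + x)/(1 - x))
rhs = log_ratio_series(x)
print(lhs, rhs)
```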
5.4.2 Determine the values of the coefficients $a_1$, $a_2$, and $a_3$ that will make $(1+a_1x+a_2x^2+a_3x^3)\ln(1+x)$ converge as $n^{-4}$. Find the resulting series.
Using the expansion for $\ln(1+x)$, we write
$$(1+a_1x+a_2x^2+a_3x^3)\ln(1+x) = \sum_{n=1}^\infty(-1)^{n+1}\left[\frac{x^n}n+\frac{a_1x^{n+1}}n+\frac{a_2x^{n+2}}n+\frac{a_3x^{n+3}}n\right]$$
We want to collect identical powers of $x$ on the right hand side. To do this, we must shift the index $n$ according to $n\to n-1$, $n\to n-2$ and $n\to n-3$ for the second, third and last terms on the right hand side, respectively. After doing so, we may combine terms with powers $x^4$ and higher. The first few terms ($x$, $x^2$ and $x^3$) may be treated as exceptions. The result is
$$(1+a_1x+a_2x^2+a_3x^3)\ln(1+x) = (x-\tfrac12x^2+\tfrac13x^3)+a_1(x^2-\tfrac12x^3)+a_2x^3+\sum_{n=4}^\infty(-1)^{n+1}\left[\frac{x^n}n-\frac{a_1x^n}{n-1}+\frac{a_2x^n}{n-2}-\frac{a_3x^n}{n-3}\right]$$
$$= x+(a_1-\tfrac12)x^2+(a_2-\tfrac12a_1+\tfrac13)x^3+\sum_{n=4}^\infty(-1)^{n+1}\left[\frac1n-\frac{a_1}{n-1}+\frac{a_2}{n-2}-\frac{a_3}{n-3}\right]x^n \qquad(1)$$
Combining the terms over a common denominator yields
$$\frac1n-\frac{a_1}{n-1}+\frac{a_2}{n-2}-\frac{a_3}{n-3} = \frac{(n-1)(n-2)(n-3)-a_1n(n-2)(n-3)+a_2n(n-1)(n-3)-a_3n(n-1)(n-2)}{n(n-1)(n-2)(n-3)}$$
$$= \frac{(1-a_1+a_2-a_3)n^3+(-6+5a_1-4a_2+3a_3)n^2+(11-6a_1+3a_2-2a_3)n-6}{n(n-1)(n-2)(n-3)}$$
To make this converge as $n^{-4}$, we need to cancel the coefficients of $n^3$, $n^2$ and $n$ in the numerator. Solving
$$1-a_1+a_2-a_3 = 0,\qquad -6+5a_1-4a_2+3a_3 = 0,\qquad 11-6a_1+3a_2-2a_3 = 0$$
yields the solution
$$a_1 = 3,\qquad a_2 = 3,\qquad a_3 = 1$$
Finally, inserting this back into (1), we obtain
$$(1+3x+3x^2+x^3)\ln(1+x) = x+\frac52x^2+\frac{11}6x^3+6\sum_{n=4}^\infty\frac{(-1)^nx^n}{n(n-1)(n-2)(n-3)}$$
or
$$\ln(1+x) = \frac{x+\frac52x^2+\frac{11}6x^3+6\sum_{n=4}^\infty\frac{(-1)^nx^n}{n(n-1)(n-2)(n-3)}}{(1+x)^3}$$
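The final identity can be verified numerically (a check of my own devising, with the two sides coded directly from the formulas above); the $1/[n(n-1)(n-2)(n-3)]$ terms make the series converge noticeably faster than the raw $\ln(1+x)$ expansion:

```python
import math

def lhs(x):
    return (1 + 3*x + 3*x**2 + x**3)*math.log(1 + x)

def rhs(x, nmax=60):
    # x + 5/2 x^2 + 11/6 x^3 + 6*sum_{n>=4} (-1)^n x^n / (n(n-1)(n-2)(n-3))
    total = x + 2.5*x**2 + (11.0/6.0)*x**3
    for n in range(4, nmax + 1):
        total += 6*(-1)**n * x**n/(n*(n - 1)*(n - 2)*(n - 3))
    return total

x = 0.5
print(lhs(x), rhs(x))
```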
5.4.3 Show that
a) $\sum_{n=2}^\infty[\zeta(n)-1] = 1$
Using the sum formula for the Riemann zeta function, we have
$$\sum_{n=2}^\infty[\zeta(n)-1] = \sum_{n=2}^\infty\left[\sum_{p=1}^\infty\frac1{p^n}-1\right] = \sum_{n=2}^\infty\sum_{p=2}^\infty\frac1{p^n} = \sum_{p=2}^\infty\sum_{n=2}^\infty\frac1{p^n}$$
where in the last step we have rearranged the order of summation. In doing so, we have now changed this to a geometric series, with sum
$$\sum_{n=2}^\infty p^{-n} = \frac{p^{-2}}{1-p^{-1}} = \frac1{p(p-1)}$$
In this case
$$\sum_{n=2}^\infty[\zeta(n)-1] = \sum_{p=2}^\infty\frac1{p(p-1)} = \sum_{p=2}^\infty\left[\frac1{p-1}-\frac1p\right] = 1$$
since this is a telescoping series.
b) $\sum_{n=2}^\infty(-1)^n[\zeta(n)-1] = \frac12$
The solution to this is similar to that of part a). The addition of $(-1)^n$ yields
$$\sum_{n=2}^\infty(-1)^n[\zeta(n)-1] = \sum_{n=2}^\infty(-1)^n\sum_{p=2}^\infty\frac1{p^n} = \sum_{n=2}^\infty\sum_{p=2}^\infty\frac1{(-p)^n} = \sum_{p=2}^\infty\sum_{n=2}^\infty\frac1{(-p)^n}$$
The sum over $n$ is still a geometric series, this time with
$$\sum_{n=2}^\infty(-p)^{-n} = \frac{(-p)^{-2}}{1-(-p)^{-1}} = \frac1{p(p+1)}$$
In this case
$$\sum_{n=2}^\infty(-1)^n[\zeta(n)-1] = \sum_{p=2}^\infty\frac1{p(p+1)} = \sum_{p=2}^\infty\left[\frac1p-\frac1{p+1}\right] = \frac12$$
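Both sums can be checked with truncated double sums (my own numerical check; the truncation points are arbitrary, chosen so the tails are well below the assertion tolerance):

```python
def zeta_minus_one(n, pmax=20000):
    # zeta(n) - 1 = sum_{p>=2} p^{-n}, truncated at pmax
    return sum(p**-n for p in range(2, pmax))

# Tails beyond n = 60 contribute at most ~2^-60, so the cutoff is harmless
s_a = sum(zeta_minus_one(n) for n in range(2, 60))
s_b = sum((-1)**n * zeta_minus_one(n) for n in range(2, 60))
print(s_a, s_b)  # near 1 and 1/2
```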
5.5.2 For what range of $x$ is the geometric series $\sum_{n=0}^\infty x^n$ uniformly convergent?
We use the Weierstrass M test. We first note that the geometric series $\sum_{n=0}^\infty x^n$ is absolutely convergent for $|x| < 1$. This means that the series $\sum_{n=0}^\infty s^n$ is convergent for $0 \le s < 1$. While this is all very obvious, the introduction of this convergent series in $s$ allows us to bound the $x$ series by an $x$-independent convergent one. This is precisely the setup of the Weierstrass M test.
We simply choose $M_n = s^n$. Then, so long as $|x|^n \le M_n$ (i.e. $|x| \le s$), the geometric series is uniformly convergent. Therefore we have shown that $\sum_{n=0}^\infty x^n$ is uniformly convergent provided $|x| \le s < 1$.
5.5.4 If the series of the coefficients $\sum a_n$ and $\sum b_n$ are absolutely convergent, show that the Fourier series
$$\sum(a_n\cos nx+b_n\sin nx)$$
is uniformly convergent for $-\infty < x < \infty$.
This is also a case for the Weierstrass M test. Note that, if we let $f_n(x) = a_n\cos nx+b_n\sin nx$ denote the $n$-th element of the series, then
$$|f_n(x)| = |a_n\cos nx+b_n\sin nx| \le |a_n\cos nx|+|b_n\sin nx| \le |a_n|+|b_n|$$
for the entire domain $x\in(-\infty,\infty)$. Since the problem states that $\sum a_n$ and $\sum b_n$ are absolutely convergent, we now take simply $M_n = |a_n|+|b_n|$. Clearly, $\sum M_n$ converges, and since $|f_n(x)| \le M_n$, we conclude that $\sum f_n(x)$ is uniformly convergent for $x\in(-\infty,\infty)$.
Physics 451 Fall 2004
Homework Assignment #8 Solutions
Textbook problems: Ch. 5: 5.6.2, 5.6.19, 5.7.4, 5.7.15, 5.9.11, 5.10.1, 5.10.7
Chapter 5
5.6.2 Derive a series expansion of $\cot x$ by dividing $\cos x$ by $\sin x$.
Since $\cos x = 1-\tfrac12x^2+\tfrac1{24}x^4-\cdots$ and $\sin x = x-\tfrac16x^3+\tfrac1{120}x^5-\cdots$, we divide to obtain
$$\cot x = \frac{1-\tfrac12x^2+\tfrac1{24}x^4-\cdots}{x-\tfrac16x^3+\tfrac1{120}x^5-\cdots} = \frac{1-\tfrac12x^2+\tfrac1{24}x^4-\cdots}{x\left(1-\tfrac16x^2+\tfrac1{120}x^4-\cdots\right)}$$
We now run into an issue of dividing one series by another. However, instead of division, we may change this into a multiplication problem by using $(1-r)^{-1} = 1+r+r^2+r^3+\cdots$ to rewrite the denominator
$$\left(1-\tfrac16x^2+\tfrac1{120}x^4-\cdots\right)^{-1} = 1+\left(\tfrac16x^2-\tfrac1{120}x^4+\cdots\right)+\left(\tfrac16x^2-\tfrac1{120}x^4+\cdots\right)^2+\cdots$$
$$= 1+\tfrac16x^2+\left(-\tfrac1{120}+\tfrac1{36}\right)x^4+\cdots = 1+\tfrac16x^2+\tfrac7{360}x^4+\cdots$$
where we have only kept terms up to $\mathcal O(x^4)$. Returning to $\cot x$, we now find
$$\cot x = x^{-1}\left(1-\tfrac12x^2+\tfrac1{24}x^4-\cdots\right)\left(1+\tfrac16x^2+\tfrac7{360}x^4+\cdots\right)$$
$$= x^{-1}\left(1+\left(-\tfrac12+\tfrac16\right)x^2+\left(\tfrac1{24}-\tfrac1{12}+\tfrac7{360}\right)x^4+\cdots\right) = x^{-1}\left(1-\tfrac13x^2-\tfrac1{45}x^4-\cdots\right)$$
In principle, we could work this out to higher orders by keeping more powers of $x$ in the series expansions.
Note that there is a nice expression for $\cot x$ in terms of the Bernoulli numbers. This may be obtained by noting that the generating function definition of $B_n$ is
$$\frac{x}{e^x-1} = \sum_{n=0}^\infty\frac{B_n}{n!}x^n = -\frac12x+\sum_{p=0}^\infty\frac{B_{2p}}{(2p)!}x^{2p}$$
where we have used the fact that all odd Bernoulli numbers vanish except for $B_1 = -\frac12$. Moving the $-\frac12x$ to the left hand side, and using the identity
$$\frac{x}{e^x-1}+\frac12x = \frac x2\cdot\frac{e^x+1}{e^x-1} = \frac x2\coth\frac x2$$
we obtain
$$\frac x2\coth\frac x2 = \sum_{p=0}^\infty\frac{B_{2p}}{(2p)!}x^{2p}$$
or, by substituting $x\to2x$ and dividing through by $x$
$$\coth x = \sum_{p=0}^\infty\frac{2B_{2p}}{(2p)!}(2x)^{2p-1}$$
Finally, to change coth into cot, we may work in the complex domain and note that $\coth iz = -i\cot z$. Therefore we make the substitution $x\to ix$ to yield
$$-i\cot x = \sum_{p=0}^\infty\frac{2B_{2p}}{(2p)!}(2ix)^{2p-1}$$
Multiplying through by $i$ and simplifying then gives the expression
$$\cot x = \sum_{p=0}^\infty\frac{(-1)^p2^{2p}B_{2p}}{(2p)!}x^{2p-1}$$
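The first three terms of the expansion are easily checked against the direct ratio $\cos x/\sin x$ (a small numerical check of my own; the omitted term is $-2x^5/945$, so for $x = 0.1$ the truncation error is around $2\times10^{-8}$):

```python
import math

def cot(x):
    return math.cos(x)/math.sin(x)

def cot_series(x):
    # Truncated expansion derived above: x^{-1}(1 - x^2/3 - x^4/45)
    return (1.0/x)*(1.0 - x**2/3.0 - x**4/45.0)

x = 0.1
print(cot(x), cot_series(x))
```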
5.6.19 a) Planck's theory of quantized oscillators leads to an average energy
$$\langle\varepsilon\rangle = \frac{\sum_{n=1}^\infty n\varepsilon_0\exp(-n\varepsilon_0/kT)}{\sum_{n=0}^\infty\exp(-n\varepsilon_0/kT)}$$
where $\varepsilon_0$ is a fixed energy. Identify the numerator and denominator as binomial expansions and show that the ratio is
$$\langle\varepsilon\rangle = \frac{\varepsilon_0}{\exp(\varepsilon_0/kT)-1}$$
To simplify the expressions, we begin with the substitution $r = \exp(-\varepsilon_0/kT)$. This yields $\langle\varepsilon\rangle = N/D$ where the numerator and denominator are
$$N = \sum_{n=1}^\infty n\varepsilon_0r^n,\qquad D = \sum_{n=0}^\infty r^n$$
We now see that the denominator is a simple geometric series. Hence $D = 1/(1-r)$. For the numerator, we note that $nr^n = r\frac d{dr}(r^n)$. Hence we may write
$$N = \varepsilon_0r\frac d{dr}\sum_{n=1}^\infty r^n = \varepsilon_0r\frac d{dr}\frac r{1-r} = \frac{\varepsilon_0r}{(1-r)^2}$$
Dividing the numerator and denominator finally yields
$$\langle\varepsilon\rangle = \frac{\varepsilon_0r}{1-r} = \frac{\varepsilon_0}{r^{-1}-1} = \frac{\varepsilon_0}{\exp(\varepsilon_0/kT)-1}$$
b) Show that the $\langle\varepsilon\rangle$ of part (a) reduces to $kT$, the classical result, for $kT\gg\varepsilon_0$.
In this limit, $\varepsilon_0/kT\ll1$, we may expand the exponential in the denominator
$$\exp(\varepsilon_0/kT) \approx 1+\frac{\varepsilon_0}{kT}+\cdots$$
As a result
$$\langle\varepsilon\rangle \approx \frac{\varepsilon_0}{\varepsilon_0/kT+\cdots} \approx kT$$
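Both the closed form and the classical limit can be checked directly (my own sketch; the sum cutoff `nmax` is arbitrary but far past where the Boltzmann factors underflow):

```python
import math

def avg_energy_direct(e0, kT, nmax=2000):
    # Direct ratio of the two sums defining <epsilon>
    num = sum(n*e0*math.exp(-n*e0/kT) for n in range(1, nmax))
    den = sum(math.exp(-n*e0/kT) for n in range(0, nmax))
    return num/den

def avg_energy_closed(e0, kT):
    return e0/(math.exp(e0/kT) - 1.0)

e0, kT = 1.0, 2.0
print(avg_energy_direct(e0, kT), avg_energy_closed(e0, kT))
print(avg_energy_closed(1.0, 100.0), "vs classical kT =", 100.0)
```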
5.7.4 The analysis of the diffraction pattern of a circular opening involves
$$\int_0^{2\pi}\cos(c\cos\varphi)\,d\varphi$$
Expand the integrand in a series and integrate by using
$$\int_0^{2\pi}\cos^{2n}\varphi\,d\varphi = \frac{(2n)!}{2^{2n}(n!)^2}\,2\pi,\qquad \int_0^{2\pi}\cos^{2n+1}\varphi\,d\varphi = 0$$
Setting $x = c\cos\varphi$, we expand
$$\cos x = \sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}x^{2n}$$
so that
$$\int_0^{2\pi}\cos(c\cos\varphi)\,d\varphi = \int_0^{2\pi}\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}c^{2n}\cos^{2n}\varphi\,d\varphi = \sum_{n=0}^\infty\frac{(-1)^nc^{2n}}{(2n)!}\int_0^{2\pi}\cos^{2n}\varphi\,d\varphi$$
$$= \sum_{n=0}^\infty\frac{(-1)^nc^{2n}}{(2n)!}\cdot\frac{2\pi(2n)!}{2^{2n}(n!)^2} = 2\pi\sum_{n=0}^\infty\frac{(-1)^n}{(n!)^2}\left(\frac c2\right)^{2n}$$
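The resulting series (which is $2\pi J_0(c)$) can be compared with a direct quadrature of the original integral. This is my own check; the midpoint rule is exceptionally accurate here because the integrand is smooth and periodic:

```python
import math

def series(c, nmax=40):
    # 2*pi * sum_n (-1)^n / (n!)^2 * (c/2)^(2n)
    total = 0.0
    for n in range(nmax):
        total += (-1)**n/(math.factorial(n)**2)*(c/2.0)**(2*n)
    return 2.0*math.pi*total

def quadrature(c, m=20000):
    # Midpoint rule for the original integral over [0, 2*pi]
    h = 2.0*math.pi/m
    return h*sum(math.cos(c*math.cos((k + 0.5)*h)) for k in range(m))

c = 1.5
print(series(c), quadrature(c))
```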
5.7.15 The Klein-Nishina formula for the scattering of photons by electrons contains a term of the form
$$f(\lambda) = \frac{(1+\lambda)}{\lambda^2}\left[\frac{2+2\lambda}{1+2\lambda}-\frac{\ln(1+2\lambda)}\lambda\right]$$
Here $\lambda = h\nu/mc^2$, the ratio of the photon energy to the electron rest mass energy. Find
$$\lim_{\lambda\to0}f(\lambda)$$
This problem is an exercise in taking Taylor series. Note that, if we simply set $\lambda = 0$ in $f(\lambda)$, the first term $(1+\lambda)/\lambda^2$ would diverge as $\lambda^{-2}$. Hence this provides a hint that we should keep at least two powers of $\lambda$ in any series expansion we perform. Keeping this in mind, we first work on the fraction
$$\frac{2+2\lambda}{1+2\lambda} = 2(1+\lambda)(1+2\lambda)^{-1} = 2(1+\lambda)(1-2\lambda+4\lambda^2-\cdots) = 2(1-\lambda+2\lambda^2+\cdots) \qquad(1)$$
Next we turn to the log
$$\frac{\ln(1+2\lambda)}\lambda = \frac1\lambda\left(2\lambda-\tfrac12(2\lambda)^2+\tfrac13(2\lambda)^3-\cdots\right) = 2-2\lambda+\tfrac83\lambda^2-\cdots \qquad(2)$$
Subtracting (2) from (1), and combining with the prefactor $(1+\lambda)/\lambda^2$, we find
$$f(\lambda) = \frac{(1+\lambda)}{\lambda^2}\left[2(1-\lambda+2\lambda^2+\cdots)-(2-2\lambda+\tfrac83\lambda^2-\cdots)\right] = \frac{(1+\lambda)}{\lambda^2}\left[4\lambda^2-\tfrac83\lambda^2+\cdots\right] = \frac43(1+\lambda)[1+\cdots]$$
We are now in a position to take the limit $\lambda\to0$ to obtain
$$\lim_{\lambda\to0}f(\lambda) = \frac43$$
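The limit is easy to probe numerically (my own check; $\lambda$ should not be pushed much below $10^{-4}$ here, since the bracketed difference of two nearly-equal quantities eventually loses floating-point precision):

```python
import math

def f(lam):
    # The Klein-Nishina term f(lambda) coded directly from the formula
    return (1 + lam)/lam**2 * ((2 + 2*lam)/(1 + 2*lam)
                               - math.log(1 + 2*lam)/lam)

for lam in (1e-2, 1e-3, 1e-4):
    print(lam, f(lam))
print("limit:", 4.0/3.0)
```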
5.9.11 The integral
$$\int_0^1[\ln(1-x)]^2\frac{dx}x$$
appears in the fourth-order correction to the magnetic moment of the electron. Show that it equals $2\zeta(3)$.
We begin with the variable substitution
$$1-x = e^{-t},\qquad dx = e^{-t}dt$$
to obtain
$$\int_0^1[\ln(1-x)]^2\frac{dx}x = \int_0^\infty t^2\frac{e^{-t}}{1-e^{-t}}dt$$
This integral involves powers and exponentials, and is not so easy to do. Thus we expand the fraction as a series
$$\frac{e^{-t}}{1-e^{-t}} = e^{-t}(1-e^{-t})^{-1} = e^{-t}(1+e^{-t}+e^{-2t}+e^{-3t}+\cdots) = \sum_{n=1}^\infty e^{-nt}$$
This gives
$$\int_0^1[\ln(1-x)]^2\frac{dx}x = \int_0^\infty t^2\sum_{n=1}^\infty e^{-nt}dt = \sum_{n=1}^\infty\int_0^\infty e^{-nt}t^2dt$$
This integral may be evaluated by integration by parts (twice). Alternatively, we make the substitution $s = nt$ to arrive at
$$\int_0^1[\ln(1-x)]^2\frac{dx}x = \sum_{n=1}^\infty n^{-3}\int_0^\infty e^{-s}s^2ds = \sum_{n=1}^\infty n^{-3}\,\Gamma(3) = 2\zeta(3)$$
Here we have used the definition of the Gamma function
$$\Gamma(z) = \int_0^\infty e^{-s}s^{z-1}ds$$
as well as the zeta function
$$\zeta(z) = \sum_{n=1}^\infty n^{-z}$$
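The result $2\zeta(3)\approx2.404$ can be confirmed two ways, from the series and from a crude quadrature of the original integral (my own check; the midpoint rule converges slowly near the logarithmic singularity at $x = 1$, hence the loose tolerance there):

```python
import math

def integral_via_series(nmax=200000):
    # sum 2/n^3 = 2*zeta(3), from the term-by-term integration above
    return sum(2.0/n**3 for n in range(1, nmax))

def integral_midpoint(m=200000):
    # Midpoint rule for int_0^1 [ln(1-x)]^2 / x dx
    h = 1.0/m
    total = 0.0
    for k in range(m):
        x = (k + 0.5)*h
        total += math.log(1 - x)**2/x*h
    return total

print(integral_via_series(), integral_midpoint())
```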
5.10.1 Stirling's formula for the logarithm of the factorial function is
$$\ln(x!) = \frac12\ln2\pi+\left(x+\frac12\right)\ln x-x+\sum_{n=1}^N\frac{B_{2n}}{2n(2n-1)}x^{1-2n}$$
The $B_{2n}$ are the Bernoulli numbers. Show that Stirling's formula is an asymptotic expansion.
Instead of using the textbook definition of an asymptotic series $\sum a_n(x)$, we aim to demonstrate the two principal facts: i) that the series diverges for fixed $x$ when $N\to\infty$, and ii) that the remainder vanishes for fixed $N$ when $x\to\infty$. To do so, we first examine the form of $a_n(x)$
$$a_n(x) = \frac{B_{2n}}{2n(2n-1)}x^{1-2n}$$
Using the relation
$$B_{2n} = \frac{(-1)^{n+1}2(2n)!}{(2\pi)^{2n}}\zeta(2n)$$
we find
$$|a_n(x)| = \frac{2(2n-2)!\,\zeta(2n)}{(2\pi)^{2n}}x^{1-2n}$$
For condition i), in order to show that the series diverges for fixed $x$, we may perform the ratio test
$$\frac{|a_n|}{|a_{n+1}|} = \frac{2(2n-2)!\,\zeta(2n)}{(2\pi)^{2n}}\cdot\frac{(2\pi)^{2n+2}}{2(2n)!\,\zeta(2n+2)}x^2 = \frac{(2\pi)^2}{2n(2n-1)}\frac{\zeta(2n)}{\zeta(2n+2)}x^2 \qquad(3)$$
Since $\lim_{n\to\infty}\zeta(n) = 1$, and since there are factors of $n$ in the denominator, we see that
$$\lim_{n\to\infty}\frac{|a_n|}{|a_{n+1}|} = 0\qquad(\text{for fixed }x)$$
and hence the ratio test demonstrates that the series diverges.
For showing condition ii), on the other hand, we suppose the series stops at term $n = N$. Then the error or remainder is related to the subsequent terms $a_{N+1}$, $a_{N+2}$, etc. However, according to (3), which grows as $x^2$, if we take the limit $x\to\infty$ for fixed $N$ we have
$$\lim_{x\to\infty}\frac{|a_N|}{|a_{N+1}|} = \infty \quad\Rightarrow\quad \frac{|a_{N+1}|}{|a_N|}\to0\ \text{as}\ x\to\infty$$
Hence the remainder terms fall off sufficiently fast to satisfy the criteria for an asymptotic series. We thus conclude that Stirling's formula is an asymptotic expansion.
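A few terms of the expansion already reproduce $\ln(x!) = \ln\Gamma(x+1)$ to high accuracy even at modest $x$, which is the practical appeal of an asymptotic series. A quick comparison of my own against `math.lgamma`:

```python
import math

B = [1.0/6.0, -1.0/30.0, 1.0/42.0]  # Bernoulli numbers B_2, B_4, B_6

def ln_factorial_stirling(x, N=3):
    # (1/2)ln(2*pi) + (x + 1/2)ln x - x + sum_n B_{2n}/(2n(2n-1) x^{2n-1})
    total = 0.5*math.log(2*math.pi) + (x + 0.5)*math.log(x) - x
    for n in range(1, N + 1):
        total += B[n - 1]/((2*n)*(2*n - 1)*x**(2*n - 1))
    return total

x = 10.0
print(ln_factorial_stirling(x), math.lgamma(x + 1))
```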
5.10.7 Derive the following Bernoulli number asymptotic series for the Euler-Mascheroni constant
$$\gamma = \sum_{s=1}^ns^{-1}-\ln n-\frac1{2n}+\sum_{k=1}^N\frac{B_{2k}}{(2k)n^{2k}}$$
Let us start by recalling the useful definition of the Euler-Mascheroni constant
$$\gamma = \lim_{n\to\infty}\left(\sum_{s=1}^ns^{-1}-\ln n\right)$$
Essentially, the constant $\gamma$ is the difference between the sum and the integral approximation. This suggests that we begin by inserting the function $f(x) = 1/x$ into the Euler-Maclaurin sum formula
$$\sum_{x=1}^nf(x) = \int_1^nf(x)dx+\frac12f(1)+\frac12f(n)+\sum_{p=1}^N\frac1{(2p)!}B_{2p}\left[f^{(2p-1)}(n)-f^{(2p-1)}(1)\right]-\frac1{(2N)!}\int_0^1B_{2N}(x)\sum_{\nu=1}^{n-1}f^{(2N)}(x+\nu)dx \qquad(4)$$
However, we first note that, for $f(x) = 1/x$ we have
$$\int_1^nf(x)dx = \int_1^n\frac{dx}x = \ln n$$
as well as
$$f^{(k)}(x) = (-1)^k\frac{k!}{x^{k+1}}$$
Using these results, and returning to (4), we find
$$\sum_{s=1}^ns^{-1} = \ln n+\frac12+\frac1{2n}-\sum_{p=1}^N\frac{B_{2p}}{2p}\left[n^{-2p}-1\right]-\int_0^1B_{2N}(x)\sum_{\nu=1}^{n-1}(x+\nu)^{-2N-1}dx$$
or
$$\sum_{s=1}^ns^{-1}-\ln n = \frac12+\frac1{2n}-\sum_{p=1}^N\frac{B_{2p}}{2p}\left[n^{-2p}-1\right]+R_N(n) \qquad(5)$$
where the remainder $R_N(n)$ is given by
$$R_N(n) = -\int_0^1B_{2N}(x)\sum_{\nu=1}^{n-1}(x+\nu)^{-2N-1}dx \qquad(6)$$
At this point, we may note that the left hand side of (5) is close to the expression we want for the Euler-Mascheroni constant. However, we must recall that the sum formula (4) generally yields an asymptotic expansion (since the Bernoulli numbers diverge). Thus we have to be careful about the remainder term.
Of course, we can still imagine taking the limit $n\to\infty$ in (5) to obtain
$$\gamma = \lim_{n\to\infty}\left(\sum_{s=1}^ns^{-1}-\ln n\right) = \frac12+\sum_{p=1}^N\frac{B_{2p}}{2p}+R_N(\infty) \qquad(7)$$
Noting that the remainder (6) is a sum of terms
$$R_N(n) = -\int_0^1B_{2N}(x)\left[\frac1{(x+1)^{2N+1}}+\frac1{(x+2)^{2N+1}}+\cdots+\frac1{(x+n-1)^{2N+1}}\right]dx$$
and that the first few terms in the sum dominate, we may eliminate most (but not all) of the remainder by subtracting (5) from (7)
$$\gamma-\sum_{s=1}^ns^{-1}+\ln n = -\frac1{2n}+\sum_{p=1}^N\frac{B_{2p}}{2p}\frac1{n^{2p}}+\left[R_N(\infty)-R_N(n)\right]$$
Finally, dropping the difference of remainders, we obtain the result
$$\gamma = \sum_{s=1}^ns^{-1}-\ln n-\frac1{2n}+\sum_{p=1}^N\frac{B_{2p}}{2p}\frac1{n^{2p}}$$
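Even at $n = 10$ with only three Bernoulli terms, the formula reproduces $\gamma$ to roughly ten digits. A quick numerical sketch of my own, using $B_2 = 1/6$, $B_4 = -1/30$, $B_6 = 1/42$:

```python
import math

B = {2: 1.0/6.0, 4: -1.0/30.0, 6: 1.0/42.0}  # Bernoulli numbers

def gamma_asymptotic(n, N=3):
    # gamma ~ sum_{s<=n} 1/s - ln n - 1/(2n) + sum_k B_{2k}/((2k) n^{2k})
    total = sum(1.0/s for s in range(1, n + 1)) - math.log(n) - 1.0/(2*n)
    for k in range(1, N + 1):
        total += B[2*k]/((2*k)*n**(2*k))
    return total

EULER_GAMMA = 0.5772156649015329
print(gamma_asymptotic(10), EULER_GAMMA)
```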
Physics 451 Fall 2004
Homework Assignment #9 Solutions
Textbook problems: Ch. 6: 6.1.3, 6.1.7, 6.2.5, 6.2.6, 6.3.3, 6.4.3, 6.4.4
Chapter 6
6.1.3 Prove algebraically that
$$|z_1|-|z_2| \le |z_1+z_2| \le |z_1|+|z_2|$$
Interpret this result in terms of vectors. Prove that
$$|z-1| < \left|\sqrt{z^2-1}\right| < |z+1|,\qquad\text{for }\Re(z) > 0$$
We start by evaluating $|z_1+z_2|^2$
$$|z_1+z_2|^2 = (z_1+z_2)(z_1^*+z_2^*) = |z_1|^2+|z_2|^2+z_1z_2^*+z_1^*z_2 = |z_1|^2+|z_2|^2+(z_1z_2^*)+(z_1z_2^*)^* = |z_1|^2+|z_2|^2+2\Re(z_1z_2^*) \qquad(1)$$
We now put a bound on the real part of $z_1z_2^*$. First note that, for any complex quantity $\zeta$, we have $|\zeta|^2 = (\Re\zeta)^2+(\Im\zeta)^2 \ge (\Re\zeta)^2$. Taking a square root gives $|\zeta| \ge |\Re\zeta|$, or $-|\zeta| \le \Re\zeta \le |\zeta|$. For the present case (where $\zeta = z_1z_2^*$) this gives $-|z_1||z_2| \le \Re(z_1z_2^*) \le |z_1||z_2|$. Using this inequality in (1), we obtain
$$|z_1|^2+|z_2|^2-2|z_1||z_2| \le |z_1+z_2|^2 \le |z_1|^2+|z_2|^2+2|z_1||z_2|$$
or
$$(|z_1|-|z_2|)^2 \le |z_1+z_2|^2 \le (|z_1|+|z_2|)^2$$
Taking the square root then proves the triangle inequality. The reason this is called the triangle inequality is that, in terms of vectors, we can think of $z_1$, $z_2$ and $z_1+z_2$ as the three sides of a triangle.
[Figure: a triangle with sides $z_1$, $z_2$, and $z_1+z_2$.]
Then the third side ($|z_1+z_2|$) of the triangle can be no longer than the sum of the lengths of the other two sides ($|z_1|+|z_2|$) nor shorter than the difference of lengths ($|z_1|-|z_2|$).
Finally, for the second inequality, we start by proving that
$$|z+1|^2 = |z|^2+1+2\Re z = (|z|^2+1-2\Re z)+4\Re z = |z-1|^2+4\Re z > |z-1|^2$$
for $\Re z > 0$. This implies that $|z+1| > |z-1|$ for $\Re z > 0$. The picture here is that if $z$ is on the right half of the complex plane then it is closer to the point $1$ than to the point $-1$.
[Figure: a point $z$ in the right half-plane, with the distances $|z-1|$ and $|z+1|$ indicated.]
Given this result, it is simple to see that
$$|z-1|^2 < |z-1||z+1| < |z+1|^2$$
or, by taking a square root
$$|z-1| < \left|\sqrt{(z-1)(z+1)}\right| < |z+1|$$
which is what we set out to prove.
6.1.7 Prove that
a) $\displaystyle\sum_{n=0}^{N-1}\cos nx = \frac{\sin(Nx/2)}{\sin x/2}\cos(N-1)\frac x2$
b) $\displaystyle\sum_{n=0}^{N-1}\sin nx = \frac{\sin(Nx/2)}{\sin x/2}\sin(N-1)\frac x2$
We may solve parts a) and b) simultaneously by taking the complex combination
$$S = \sum_{n=0}^{N-1}\cos nx+i\sum_{n=0}^{N-1}\sin nx = \sum_{n=0}^{N-1}(\cos nx+i\sin nx) = \sum_{n=0}^{N-1}e^{inx}$$
The real part of $S$ gives part a) and the imaginary part of $S$ gives part b). When written in this fashion, we see that $S$ is a terminating geometric series with ratio $r = e^{ix}$. Thus
$$S = \sum_{n=0}^{N-1}r^n = \frac{1-r^N}{1-r} = \frac{1-e^{Nix}}{1-e^{ix}} = \frac{e^{\frac12Nix}\left(e^{-\frac12Nix}-e^{\frac12Nix}\right)}{e^{\frac12ix}\left(e^{-\frac12ix}-e^{\frac12ix}\right)}$$
We performed the last step in order to balance positive and negative exponentials inside the parentheses. This is so that we may relate both the numerator and denominator to $\sin\theta = (e^{i\theta}-e^{-i\theta})/2i$. The result is
$$S = e^{\frac12(N-1)ix}\,\frac{\sin(Nx/2)}{\sin x/2} = \left[\cos\tfrac12(N-1)x+i\sin\tfrac12(N-1)x\right]\frac{\sin(Nx/2)}{\sin x/2}$$
It should now be apparent that the real and imaginary parts are indeed the solutions to parts a) and b).
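Both closed forms can be spot-checked against the direct sums (a short numerical check of my own):

```python
import math

def cos_sum(N, x):
    return sum(math.cos(n*x) for n in range(N))

def sin_sum(N, x):
    return sum(math.sin(n*x) for n in range(N))

def cos_closed(N, x):
    return math.sin(N*x/2)/math.sin(x/2)*math.cos((N - 1)*x/2)

def sin_closed(N, x):
    return math.sin(N*x/2)/math.sin(x/2)*math.sin((N - 1)*x/2)

N, x = 7, 0.3
print(cos_sum(N, x), cos_closed(N, x))
print(sin_sum(N, x), sin_closed(N, x))
```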
6.2.5 Find the analytic function
$$w(z) = u(x,y)+iv(x,y)$$
a) if $u(x,y) = x^3-3xy^2$
We use the Cauchy-Riemann relations
$$\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} = 6xy \quad\Rightarrow\quad v = 3x^2y+C(y)$$
$$\frac{\partial v}{\partial y} = \frac{\partial u}{\partial x} = 3x^2-3y^2 \quad\Rightarrow\quad v = 3x^2y-y^3+D(x)$$
In order for these two expressions to agree, the functions $C(y)$ and $D(x)$ must have the form $C(y) = -y^3+c$ and $D(x) = c$ where $c$ is an arbitrary constant. As a result, we find that $v(x,y) = 3x^2y-y^3+c$, or
$$w(z) = (x^3-3xy^2)+i(3x^2y-y^3)+ic = z^3+ic$$
The constant $c$ is unimportant.
b) $v(x,y) = e^{-y}\sin x$
As above, we have
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} = -e^{-y}\sin x \quad\Rightarrow\quad u = e^{-y}\cos x+C(y)$$
$$\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} = -e^{-y}\cos x \quad\Rightarrow\quad u = e^{-y}\cos x+D(x)$$
Thus we must have $C(y) = D(x) = c$ with $c$ a constant. The complex function $w(z)$ is
$$w(z) = c+e^{-y}\cos x+ie^{-y}\sin x = c+e^{-y}(\cos x+i\sin x) = c+e^{ix-y} = c+e^{iz}$$
6.2.6 If there is some common region in which $w_1 = u(x,y)+iv(x,y)$ and $w_2 = w_1^* = u(x,y)-iv(x,y)$ are both analytic, prove that $u(x,y)$ and $v(x,y)$ are constants.
If $u+iv$ and $u-iv$ are both analytic, then they must both satisfy the Cauchy-Riemann equations. This corresponds to
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$
(from $u+iv$) and
$$\frac{\partial u}{\partial x} = -\frac{\partial v}{\partial y},\qquad \frac{\partial u}{\partial y} = \frac{\partial v}{\partial x}$$
(from $u-iv$). Clearly this indicates that
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = 0,\qquad \frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} = 0$$
Since all partial derivatives vanish, $u$ and $v$ can only be constants.
6.3.3 Verify that
$$\int_0^{1+i}z^*\,dz$$
depends on the path by evaluating the integral for the two paths shown in Fig. 6.10.
[Figure 6.10: the two paths from $0$ to $1+i$ — path 1 runs along the $x$-axis to $1$ and then up; path 2 runs up the $y$-axis to $i$ and then across.]
We perform this integral as a two-dimensional line integral
$$\int z^*dz = \int(x-iy)(dx+i\,dy)$$
For path 1, we first integrate along the $x$-axis ($y = 0$; $dy = 0$) and then along the $y$-axis ($x = 1$; $dx = 0$)
$$\int_0^{1+i}z^*dz = \int_0^1(x-iy)\Big|_{y=0}dx+\int_0^1(x-iy)\Big|_{x=1}i\,dy = \int_0^1x\,dx+\int_0^1(i+y)dy = \frac12x^2\Big|_0^1+\left(iy+\frac12y^2\right)\Big|_0^1 = 1+i$$
Similarly, for path 2, we find
$$\int_0^{1+i}z^*dz = \int_0^1(x-iy)\Big|_{x=0}i\,dy+\int_0^1(x-iy)\Big|_{y=1}dx = \int_0^1y\,dy+\int_0^1(x-i)dx = \frac12y^2\Big|_0^1+\left(\frac12x^2-ix\right)\Big|_0^1 = 1-i$$
So we see explicitly that the integral depends on the path taken ($1+i \ne 1-i$).
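The path dependence is easy to reproduce numerically by integrating $\bar z\,dz$ along straight segments (my own sketch; since $\bar z$ is linear in $x$ and $y$, the midpoint rule below is essentially exact here):

```python
def path_integral(points, m=10000):
    # Midpoint rule for integral of conj(z) dz along straight segments
    total = 0j
    for a, b in zip(points, points[1:]):
        dz = (b - a)/m
        for k in range(m):
            z = a + (k + 0.5)*dz
            total += z.conjugate()*dz
    return total

path1 = [0, 1, 1 + 1j]   # along the x-axis, then up
path2 = [0, 1j, 1 + 1j]  # up the y-axis, then across
print(path_integral(path1), path_integral(path2))  # 1+1j vs 1-1j
```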
6.4.3 Solve Exercise 6.3.4 [$\oint_C dz/(z^2+z)$ where $C$ is a circle defined by $|z| > 1$] by separating the integrand into partial fractions and then applying Cauchy's integral theorem for multiply connected regions.
Note that, by applying Cauchy's integral formula to a constant function $f(z) = 1$, we may derive the useful expression
$$\oint_C\frac{dz}{z-z_0} = 2\pi i \qquad(2)$$
provided point $z_0$ is contained inside the contour $C$ (it is zero otherwise). Then, using partial fractions, we see that
$$\oint_C\frac{dz}{z^2+z} = \oint_C\frac{dz}{z(z+1)} = \oint_C\left[\frac1z-\frac1{z+1}\right]dz = \oint_C\frac{dz}z-\oint_C\frac{dz}{z+1}$$
Since $C$ is a circle of radius greater than one, it encompasses both points $z_0 = 0$ and $z_0 = -1$. Thus, using (2), we find
$$\oint_C\frac{dz}{z^2+z} = 2\pi i-2\pi i = 0$$
Note that, if the radius of $C$ is less than one, we would have encircled only the pole at $z_0 = 0$. The result would then have been $2\pi i$ instead of zero.
6.4.4 Evaluate

    ∮_C dz/(z² − 1)

where C is the circle |z| = 2.

Again, we use partial fractions and (2)

    ∮_C dz/(z² − 1) = ∮_C dz/[(z + 1)(z − 1)]
                    = ∮_C [(1/2)/(z − 1) − (1/2)/(z + 1)] dz
                    = ½ ∮_C dz/(z − 1) − ½ ∮_C dz/(z + 1)
                    = πi − πi = 0

Here it is important that the contour of radius 2 encircles both points z₀ = 1
and z₀ = −1.
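Both of these contour integrals can be sketched numerically by discretizing
the circle; the test function 1/z, with a single enclosed pole, recovers 2πi:

```python
# Sketch: trapezoidal evaluation of a counterclockwise circular contour integral.
import numpy as np

def contour_integral(f, center=0.0, radius=2.0, n=20000):
    """Approximate ∮ f(z) dz over a circle of given center and radius."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)  # dz = iz dθ
    return np.sum(f(z) * dz)

I1 = contour_integral(lambda z: 1.0 / (z**2 + z))   # both poles inside → 0
I2 = contour_integral(lambda z: 1.0 / (z**2 - 1))   # both poles inside → 0
I3 = contour_integral(lambda z: 1.0 / z)            # one pole inside → 2πi
```

For smooth periodic integrands the trapezoidal sum converges exponentially,
so even a modest n gives essentially exact residues.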
Physics 451 Fall 2004
Homework Assignment #10 Solutions
Textbook problems: Ch. 6: 6.5.2, 6.5.8, 6.6.2, 6.6.7
Ch. 7: 7.1.2, 7.1.4
Chapter 6
6.5.2 Derive the binomial expansion

    (1 + z)^m = 1 + mz + [m(m−1)/(1·2)] z² + ··· = Σ_{n=0}^∞ \binom{m}{n} zⁿ

for m any real number. The expansion is convergent for |z| < 1. Why?

To derive the binomial expansion, consider generating the Taylor series for
f(z) around z = 0 where f(z) = (1 + z)^m. Taking derivatives of f(z), we find

    f′(z) = m(1 + z)^{m−1},
    f″(z) = m(m−1)(1 + z)^{m−2},
    f‴(z) = m(m−1)(m−2)(1 + z)^{m−3},   etc.

In general, the n-th derivative is given by

    f⁽ⁿ⁾(z) = m(m−1)(m−2)···(m−n+1)(1 + z)^{m−n} = [m!/(m−n)!](1 + z)^{m−n}

where the factorial for non-integer m may be defined by the Gamma function, or
by the expression indicated. In particular, f⁽ⁿ⁾(0) = m!/(m−n)!. Hence the
Taylor series has the form

    f(z) = Σ_{n=0}^∞ (1/n!) f⁽ⁿ⁾(0) zⁿ = Σ_{n=0}^∞ [m!/(n!(m−n)!)] zⁿ
         = Σ_{n=0}^∞ \binom{m}{n} zⁿ

For non-integer m (but integer n), the binomial coefficient may be defined by
the Gamma function, or alternately by

    \binom{m}{n} = m(m−1)(m−2)···(m−n+1)/(1·2·3···n) = Π_{k=1}^n (m−k+1)/k

Note that, for non-integer m, the expression (1+z)^m has a branch point at
z = −1. (This is explored in problem 6.6.7 below.) Since the radius of
convergence of the Taylor series is the distance to the nearest singularity,
this explains why |z| < 1 is necessary for convergence. For negative integer m,
there is no branch point, but there is still a pole (of order |m|) at z = −1.
The pole also results in a radius of convergence of |z| < 1. On the other hand,
for m a non-negative integer, the series terminates (giving a traditional
binomial expansion for (1 + z) raised to an integer power), and the radius of
convergence is infinite. This is consistent with the absence of any
singularity in this case.
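The expansion is easy to check numerically inside the circle of convergence;
the sample values m = 1/2 and m = 2 below are illustrative choices:

```python
# Sketch: sum the binomial series term by term using the product formula
# binom(m, n+1) = binom(m, n) * (m - n)/(n + 1), and compare with (1 + z)**m.
def binomial_series(z, m, terms=200):
    total, coeff = 0.0, 1.0          # coeff = binom(m, n), starting at n = 0
    for n in range(terms):
        total += coeff * z**n
        coeff *= (m - n) / (n + 1)
    return total
```

For integer m the coefficient hits zero and the series terminates exactly, as
noted above; for |z| < 1 and non-integer m the partial sums converge to
(1 + z)^m.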
6.5.8 Develop the first three nonzero terms of the Laurent expansion of

    f(z) = (e^z − 1)^{−1}

about the origin.

Since the Laurent expansion is a unique result, we may obtain the expansion any
way we wish. What we can do here is to start with a Taylor expansion of the
denominator

    e^z − 1 = z + ½z² + (1/6)z³ + ··· = z(1 + ½z + (1/6)z² + ···)

Hence

    f(z) = (e^z − 1)^{−1} = z^{−1}(1 + ½z + (1/6)z² + ···)^{−1}

For small z, we invert the series using (1 + r)^{−1} = 1 − r + r² − ··· where
r = ½z + (1/6)z² + ···. This gives

    f(z) = z^{−1}[1 − (½z + (1/6)z² + ···) + (½z + (1/6)z² + ···)² − ···]
         = z^{−1}[1 − ½z + (1/12)z² + ···]
         = 1/z − ½ + z/12 + ···                                      (1)

Of course, we could also take the hint and use the generating function of the
Bernoulli numbers to write

    f(z) = 1/(e^z − 1) = z^{−1}[z/(e^z − 1)] = z^{−1} Σ_{n=0}^∞ (Bₙ/n!) zⁿ
         = B₀/z + B₁ + ½B₂z + (1/6)B₃z² + ···

Inserting B₀ = 1, B₁ = −½ and B₂ = 1/6 then immediately yields the last line of
(1). However, this method requires us to either remember or look up the values
of the Bernoulli numbers.
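Since the next term in the Laurent series is −z³/720, the truncated expansion
(1) should differ from 1/(e^z − 1) by roughly z³/720 for small z, which we can
sketch numerically:

```python
# Sketch: compare 1/(e^z - 1) with its Laurent expansion 1/z - 1/2 + z/12.
import math

def f(z):
    return 1.0 / math.expm1(z)      # expm1 avoids cancellation for small z

def laurent(z):
    return 1.0 / z - 0.5 + z / 12.0

# error shrinks like z**3 (next Laurent term is -z**3/720)
err_small = abs(f(1e-2) - laurent(1e-2))
err_large = abs(f(0.1) - laurent(0.1))
```

The two error values scale roughly as the cube of z, consistent with the
omitted −z³/720 term.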
6.6.2 What part of the z-plane corresponds to the interior of the unit circle
in the w-plane if

a) w = (z − 1)/(z + 1)

Note that, by trying a few numbers, we can see that z = 0 gets mapped to
w = −1 and z = 1 gets mapped to w = 0.

In fact, the unit circle in the w-plane is given by the equation |w| = 1, which
maps to |z − 1| = |z + 1| in the z-plane. Geometrically, this is saying that
the point z is equidistant to both +1 and −1. This can only happen on the
imaginary axis (x = 0). Hence the imaginary axis maps to the circumference of
the circle. Furthermore, since z = 1 gets mapped into the interior of the
circle, we may conclude that the right half (first and fourth quadrants) of
the complex z-plane gets mapped to the interior of the unit circle.

b) w = (z − i)/(z + i)

This map is similar to that of part a), except that the distances are measured
to the points +i and −i instead. Thus in this case the real axis (y = 0) gets
mapped to the circle. The upper half plane (first and second quadrants) gets
mapped to the interior of the unit circle.
6.6.7 For noninteger m, show that the binomial expansion of Exercise 6.5.2
holds only for a suitably defined branch of the function (1 + z)^m. Show how
the z-plane is cut. Explain why |z| < 1 may be taken as the circle of
convergence for the expansion of this branch, in light of the cut you have
chosen.

Returning to the binomial expansion of f(z) = (1 + z)^m, we note that if
w = 1 + z, we end up with a function f(w) = w^m which is multi-valued under
w → w e^{2πi} whenever m is nonintegral. This indicates that w = 0 is a branch
point, and a suitable branch must be defined. We take the branch cut to run
from w = 0 along the negative real axis in the w-plane. However, the simple
transformation z = w − 1 allows us to return to the original z-plane. In this
case, w = 0 is the same as z = −1, so the branch point is at z = −1, with a
cut running to the left along the real axis. Writing 1 + z = |1 + z| e^{iθ},
the principal value is taken to be −π < θ ≤ π. In this case,
f(z) = |1 + z|^m e^{imθ}. Since the Taylor series is expanded about z = 0, the
radius of convergence is |z| < 1, which is the distance to the nearest
singularity (the branch point at z = −1). This is why it is desired to take
the branch cut running along the left (otherwise, if it goes inside the unit
circle, it will reduce or eliminate the radius of convergence).
Chapter 7
7.1.2 A function f(z) can be represented by

    f(z) = f₁(z)/f₂(z)

in which f₁(z) and f₂(z) are analytic. The denominator f₂(z) vanishes at
z = z₀, showing that f(z) has a pole at z = z₀. However, f₁(z₀) ≠ 0,
f₂′(z₀) ≠ 0. Show that a₋₁, the coefficient of (z − z₀)^{−1} in a Laurent
expansion of f(z) at z = z₀, is given by

    a₋₁ = f₁(z₀)/f₂′(z₀)

Since f₁(z) and f₂(z) are both analytic, they may be expanded as Taylor series

    f₁(z) = f₁(z₀) + f₁′(z₀)(z − z₀) + ···,
    f₂(z) = f₂′(z₀)(z − z₀) + ½f₂″(z₀)(z − z₀)² + ···

Here we have already set f₂(z₀) = 0 since the function vanishes at z = z₀. As
a result, we have

    f(z) = f₁(z)/f₂(z)
         = [f₁(z₀) + f₁′(z₀)(z − z₀) + ···]
             / [f₂′(z₀)(z − z₀) + ½f₂″(z₀)(z − z₀)² + ···]
         = [f₁(z₀)/f₂′(z₀)]/(z − z₀)
             × [1 + (f₁′/f₁)(z − z₀) + ···]/[1 + ½(f₂″/f₂′)(z − z₀) + ···]

For z ≈ z₀, the denominator 1 + ½(f₂″/f₂′)(z − z₀) + ··· may be inverted using
the geometric series relation 1/(1 + r) = 1 − r + r² − ···. The result is a
Laurent series of the form

    f(z) = [f₁(z₀)/f₂′(z₀)]/(z − z₀) [1 + (f₁′/f₁ − f₂″/2f₂′)(z − z₀) + ···]

This expansion has a simple pole, and its residue is simply

    a₋₁ = f₁(z₀)/f₂′(z₀)
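The residue formula can be sketched numerically. The example f₁ = cos,
f₂ = sin at z₀ = 0 is an illustrative choice: the formula predicts
a₋₁ = cos(0)/cos(0) = 1 for cot z, which we verify with a small contour
integral:

```python
# Sketch: residue via (1/2πi) ∮ f(z) dz on a tiny circle around z0 = 0.
import cmath, math

def residue_at_zero(f, radius=1e-2, n=4096):
    total = 0j
    for k in range(n):
        t = 2.0 * math.pi * k / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * z * (2.0 * math.pi / n)   # dz = iz dθ
        total += f(z) * dz
    return total / (2j * math.pi)

res = residue_at_zero(lambda z: cmath.cos(z) / cmath.sin(z))  # cot z
```

The trapezoidal sum on a circle is exponentially accurate, so `res` matches
the predicted residue 1 essentially to roundoff.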
7.1.4 The Legendre function of the second kind Q_ν(z) has branch points at
z = ±1. The branch points are joined by a cut line along the real (x) axis.

a) Show that Q₀(z) = ½ ln((z + 1)/(z − 1)) is single-valued (with the real
axis −1 ≤ x ≤ 1 taken as a cut line).

Because ln w has a branch point at w = 0, this ratio of logs has branch points
at z = ±1 as promised. We join the branch points by a cut line along the real
axis between −1 and 1. Of course, to make this picture well defined, we
provide principal values for the arguments

    z + 1 = |z + 1| e^{iθ},   −π < θ ≤ π,
    z − 1 = |z − 1| e^{iφ},   −π < φ ≤ π

Thus

    Q₀(z) = ½ ln(z + 1) − ½ ln(z − 1)
          = ½ ln|(z + 1)/(z − 1)| + (i/2)(θ − φ)                     (2)

It is the manner in which the arguments θ and φ show up in (2) that indicates
the branch cut is as written. For x > 1 on the real axis, both θ and φ are
smooth, θ ≈ 0 and φ ≈ 0 for going either a little bit above or below the axis.
Hence there is no discontinuity in Q₀(x > 1) and thus no branch cut. For
−1 < x < 1, on the other hand, the argument θ ≈ 0 is smooth infinitesimally
above or below the axis, but the argument φ is discontinuous: φ ≈ π above the
axis, but φ ≈ −π below the axis. This shows that the value of Q₀ changes by
∓iπ when crossing the real axis. For x < −1, the situation is more
interesting, as both θ and φ jump when crossing the axis. However the
difference (θ − φ) is unchanged. In this sense, the two branch cuts cancel
each other out, so that the function Q₀(x < −1) is well defined without a cut.

Essentially, the branch cut prevents us from going around either of the points
z = 1 or z = −1 individually. However, we can take a big circle around both
points. In this case, θ → θ + 2π and φ → φ + 2π, but once again the difference
(θ − φ) in (2) is single-valued. So this is an appropriate branch cut
prescription.

b) For real argument x and |x| < 1 it is convenient to take

    Q₀(x) = ½ ln[(1 + x)/(1 − x)]

Show that

    Q₀(x) = ½[Q₀(x + i0) + Q₀(x − i0)]

The branch cut prescription described in part a) is somewhat unfortunate for
real arguments |x| < 1, since those values of x sit right on top of the cut.
To make this well defined for real x, we must provide a prescription for
avoiding the cut. This is what the x + i0 (above the cut) and x − i0 (below
the cut) prescription is doing for us. Noting that (for |x| < 1) the arguments
have the following values

    x + i0 (above the cut):  θ ≈ 0,  φ ≈ π,
    x − i0 (below the cut):  θ ≈ 0,  φ ≈ −π

the expression of (2) yields

    Q₀(x + i0) = ½ ln|(x + 1)/(x − 1)| − iπ/2,
    Q₀(x − i0) = ½ ln|(x + 1)/(x − 1)| + iπ/2                        (3)

Taking the average gives

    Q₀(x) = ½[Q₀(x + i0) + Q₀(x − i0)] = ½ ln|(x + 1)/(x − 1)|
          = ½ ln[(1 + x)/(1 − x)]

where we have used the fact that |x − 1| = 1 − x for |x| < 1. In this case, we
see that averaging the function below and above the cut cancels the imaginary
parts, ∓iπ/2 in (3).
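The averaging prescription can be sketched numerically, since Python's
`cmath.log` already uses the principal branch −π < arg ≤ π assumed in (2):

```python
# Sketch: Q0 with separate principal-value logs for z+1 and z-1, as in (2).
import cmath

def Q0(z):
    return 0.5 * (cmath.log(z + 1) - cmath.log(z - 1))

x, eps = 0.3, 1e-9                      # illustrative point on the cut
above = Q0(x + 1j * eps)                # Q0(x + i0)
below = Q0(x - 1j * eps)                # Q0(x - i0)
avg = 0.5 * (above + below)
expected = 0.5 * cmath.log((1 + x) / (1 - x))
```

The values above and below the cut carry imaginary parts ∓iπ/2 that cancel in
the average, leaving the real expression ½ ln[(1 + x)/(1 − x)].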
Physics 451 Fall 2004
Homework Assignment #11 Solutions
Textbook problems: Ch. 7: 7.2.5, 7.2.7, 7.2.14, 7.2.20, 7.2.22
Chapter 7
7.2.5 The unit step function is defined as

    u(s − a) = { 0,  s < a
               { 1,  s > a

Show that u(s) has the integral representations

a) u(s) = lim_{ε→0⁺} (1/2πi) ∫_{−∞}^∞ [e^{ixs}/(x − iε)] dx

Let us first suppose we may close the contour with a semi-circle in the upper
half plane, so that the pole at z = iε lies inside. Always assuming the limit
ε → 0⁺, we see that the real integral for u(s) may be promoted to a closed
contour integral

    (1/2πi) ∮_C [e^{izs}/(z − iε)] dz = u(s) + I_R                   (1)

where I_R denotes the integral along the semi-circle at infinity. We now show
that, at least for s > 0, the integral I_R vanishes. To do so, we make an
explicit variable substitution

    z = Re^{iθ},   dz = iRe^{iθ} dθ

so that

    I_R = (1/2πi) ∫₀^π [e^{isRe^{iθ}}/(Re^{iθ} − iε)] iRe^{iθ} dθ
        = (1/2π) ∫₀^π e^{isR(cos θ + i sin θ)} dθ

where we have safely taken the limit ε → 0⁺. Expanding out the exponent, we
find

    I_R = (1/2π) ∫₀^π e^{isR cos θ} e^{−sR sin θ} dθ                 (2)

This vanishes by Jordan's lemma provided sR sin θ > 0, so that the real
exponential is suppressed instead of blowing up (in fact, this is Jordan's
lemma). Since R is positive and sin θ > 0 in the upper half plane, this
corresponds to the requirement s > 0 (as alluded to above). In this case,
since I_R = 0, (1) simplifies to

    u(s) = (1/2πi) ∮_C [e^{izs}/(z − iε)] dz
         = residue of e^{izs}/(z − iε) at z = iε   (s > 0)

The residue at iε is simply lim_{ε→0⁺} e^{−εs} = 1. Hence we have confirmed
that u(s) = 1 for s > 0.

For s < 0, on the other hand, Jordan's lemma makes it clear that we should
instead close the contour with a semi-circle in the lower half plane. Since
there are no residues inside that contour, we simply obtain u(s) = 0 for
s < 0. Although the problem does not discuss the case when s = 0, it is worth
considering. In this case, we might as well close the contour in the upper
half plane. Then I_R can be directly evaluated by inserting s = 0 into (2).
The result is simply I_R = 1/2. Since the contour integral still has the value
of 1 (residue at the pole at iε), inserting I_R = 1/2 into (1) gives

    1 = u(0) + 1/2   ⇒   u(0) = 1/2

which is a nice result indicating that the step function is given completely by

    u(s − a) = { 0,   s < a
               { 1/2, s = a
               { 1,   s > a

at least using this definition.

b) u(s) = 1/2 + (1/2πi) P ∫_{−∞}^∞ (e^{ixs}/x) dx

The principal value integral can be evaluated by deforming the contour above
and below the pole at z = 0 and then taking the average of the results. For
s > 0, we again close the contour in the upper half plane. As in part a), the
residue of the pole at z = 0 is simply 1. So for the contour deformed below
the pole (pole inside) we have (1/2πi) ∮ (e^{izs}/z) dz = 1, while for the one
deformed above the pole (no poles inside) we have 0. The principal value then
gives the average of 1 and 0. This indicates that

    u(s) = 1/2 + (1 + 0)/2 = 1   (s > 0)

For s < 0, on the other hand, we close the contour in the lower half plane.
Again, we average between the case when the pole is inside and when it is
outside the contour. However, it is important to realize that by closing the
contour in the lower half plane, we are actually choosing a clockwise (wrong
direction) contour. This means the contour integral gives either −1 or 0
depending on whether the pole is inside or outside. The principal value
prescription then yields

    u(s) = 1/2 + (−1 + 0)/2 = 0   (s < 0)

If we wanted to be careful, we could also work this out for s = 0 to find the
same answer u(0) = 1/2.
7.2.7 Generalizing Example 7.2.1, show that

    ∫₀^{2π} dθ/(a ± b cos θ) = ∫₀^{2π} dθ/(a ± b sin θ) = 2π/(a² − b²)^{1/2}

for a > |b|. What happens if |b| > |a|?

Since this integral is over a complete period, we note that we would get the
same answer whether we integrate cos θ or sin θ. Furthermore, it does not
matter whether we integrate a + b cos θ or a − b cos θ. This can be proven
directly by considering the substitutions θ → θ ± π/2 or θ → θ + π in the
integral. In any case, this means we only need to consider

    I = ∫₀^{2π} dθ/(a + b cos θ)

where we assume a > b > 0. For these types of trig integrals, we make the
substitutions

    z = e^{iθ},   dz = ie^{iθ} dθ = iz dθ,   cos θ = (z + z^{−1})/2

to change the real integral into a contour integral on the unit circle |z| = 1

    I = ∮_C dz/[iz(a + (b/2)(z + z^{−1}))] = −(2i/b) ∮_C dz/[z² + (2a/b)z + 1]

Since the contour is already closed, we do not need to worry about finding a
way to close the contour. All we need is to identify the poles inside the
contour and their residues. To do this, we solve the quadratic equation in the
denominator to obtain

    I = −(2i/b) ∮_C dz/[(z − z₊)(z − z₋)]

where

    z± = −a/b ± √(a²/b² − 1)                                         (3)

Since we have assumed a > b > 0, the two zeros of the denominator, z₋ and z₊,
lie on the negative real axis, with z₋ outside and z₊ inside the unit circle.
In particular, it is not hard to check that the pole at z₊ lies inside the
circle of unit radius. As a result

    I = (2πi)(−2i/b) [residue of 1/((z − z₊)(z − z₋)) at z = z₊]
      = (4π/b) · 1/(z₊ − z₋) = (4π/b) · 1/[2√(a²/b² − 1)] = 2π/√(a² − b²)

Note that, for a < 0 (with |a| > |b|), the integrand would always be negative.
In this case, I would be negative. Thus the complete answer is

    I = [2π/√(a² − b²)] sign(a)

For |b| > |a|, the integrand blows up where a + b cos θ = 0, so the integral
is not defined. What happens in this case is that, on the complex plane, the
two poles z₊ and z₋, which still solve (3), move off the real axis but stay on
the unit circle contour itself. So the complex integral is just as bad as the
real integral. This is an example where we could consider using a principal
value prescription to make the integral well defined.
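A direct numerical sketch of the θ-integral confirms the residue result; the
parameter choice a = 3, b = 1 is illustrative:

```python
# Sketch: midpoint rule for I = ∫_0^{2π} dθ / (a + b cos θ).
import math

def trig_integral(a, b, n=20000):
    h = 2.0 * math.pi / n
    return sum(h / (a + b * math.cos((k + 0.5) * h)) for k in range(n))

I_pos = trig_integral(3.0, 1.0)    # expect  2π/√8
I_neg = trig_integral(-3.0, 1.0)   # expect -2π/√8, the sign(a) factor
```

The midpoint rule on a smooth periodic integrand converges exponentially, so
both values match 2π sign(a)/√(a² − b²) to near machine precision.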
7.2.14 Show that (a > 0)

a) ∫_{−∞}^∞ [cos x/(x² + a²)] dx = (π/a) e^{−a}

How is the right-hand side modified if cos x is replaced by cos kx?

For these types of integrals with sin or cos in the numerator, it is best to
consider sin x or cos x as the imaginary or real part of the complex
exponential e^{ix}. In this case, we write

    I = ∫_{−∞}^∞ [cos x/(x² + a²)] dx = Re ∫_{−∞}^∞ [e^{ix}/(x² + a²)] dx

Using Jordan's lemma, we may close the contour using a semi-circle in the
upper half plane, enclosing the pole at z = ia. Since I_R = 0 (by Jordan's
lemma), we have simply

    I = ∮ [e^{iz}/(z² + a²)] dz = ∮ e^{iz}/[(z − ia)(z + ia)] dz
      = 2πi [e^{−a}/(2ia)] = (π/a) e^{−a}

(for a positive).

If cos x is replaced by cos kx, we would write the numerator as e^{ikx}. In
this case, for k > 0 we would close the contour in the upper half plane as
before. In addition, the exponential factor in the residue would be e^{−ka},
so for cos kx the integral would be (π/a) e^{−ka}. For k < 0, on the other
hand, we could close the contour in the lower half plane. However, it is
actually easier to see that cos(−kx) = cos kx, so the answer should be
independent of the sign of k. Hence

    ∫_{−∞}^∞ [cos kx/(x² + a²)] dx = (π/|a|) e^{−|ka|}

is valid for any sign of k and a.

b) ∫_{−∞}^∞ [x sin x/(x² + a²)] dx = π e^{−a}

How is the right-hand side modified if sin x is replaced by sin kx?

As above, we write sin x = Im e^{ix}. Closing the contour in the same manner,
and using Jordan's lemma to argue that I_R = 0, we obtain

    ∫_{−∞}^∞ [x sin x/(x² + a²)] dx = Im ∮ [z e^{iz}/(z² + a²)] dz
      = Im ∮ z e^{iz}/[(z − ia)(z + ia)] dz
      = Im 2πi [ia e^{−a}/(2ia)] = π e^{−a}

If sin x is replaced by sin kx, the residue would get modified so that e^{−a}
is replaced by e^{−ka}. As a result

    ∫_{−∞}^∞ [x sin kx/(x² + a²)] dx = π e^{−|ka|} sign(k)

The absolute value in the exponent arises for the same reason as in part a)
above, while the overall sign flips with k since sin(−kx) = −sin kx.
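The result of part a) can be sketched numerically; since the integrand decays
like 1/x², a finite cutoff (chosen here at 500, an illustrative value)
suffices:

```python
# Sketch: trapezoidal evaluation of ∫ cos(kx)/(x²+a²) dx over [-cutoff, cutoff].
import math
import numpy as np

def cos_integral(k, a, cutoff=500.0, n=1_000_001):
    x = np.linspace(-cutoff, cutoff, n)
    y = np.cos(k * x) / (x**2 + a**2)
    h = 2.0 * cutoff / (n - 1)
    return np.sum((y[:-1] + y[1:]) / 2.0) * h

val1 = cos_integral(1.0, 1.0)   # expect (π/1) e^{-1}
val2 = cos_integral(2.0, 0.5)   # expect (π/0.5) e^{-|2·0.5|}
```

The truncation error at the cutoff is of order 1/cutoff² thanks to the
oscillatory cancellation of the tail, so the agreement with (π/|a|) e^{−|ka|}
is comfortably better than 10⁻³.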
7.2.20 Show that

    ∫₀^∞ dx/(x² + a²)² = π/(4a³),   a > 0

This problem involves a double pole at z = ia. Choosing to close the contour
in the upper half plane, we obtain

    I = ∫₀^∞ dx/(x² + a²)² = ½ ∫_{−∞}^∞ dx/(x² + a²)²
      = ½ ∮_C dz/(z² + a²)² = πi (residue at z = ia)                 (4)

Although this is a double pole, it may still have a residue. To see this,
imagine expanding the integrand in a power series near the pole at z = ia

    1/(z² + a²)² = (z − ia)^{−2}(z + ia)^{−2}
                 = (z − ia)^{−2}[2ia + (z − ia)]^{−2}
                 = (z − ia)^{−2}(2ia)^{−2}[1 + (z − ia)/(2ia)]^{−2}
                 = −(1/4a²)(z − ia)^{−2}[1 − 2((z − ia)/(2ia))
                       + (2·3/2)((z − ia)/(2ia))² − ···]
                 = (−1/4a²)/(z − ia)² + (−i/4a³)/(z − ia) + (3/16a⁴) + ···

Here we have used the binomial expansion for (1 + r)^{−2}. This shows that, in
addition to the obvious double pole, there is a simple pole hidden on top of
it with residue a₋₁ = −i/4a³. Alternatively, we could have computed the
residue much more quickly by noting that for a double pole in
f(z) = 1/(z² + a²)², we form the non-singular function
g(z) = (z − ia)²f(z) = 1/(z + ia)². The residue is then the derivative

    a₋₁ = g′(ia) = d/dz [1/(z + ia)²]|_{z=ia} = −2/(z + ia)³|_{z=ia}
        = −2/(2ia)³ = −i/4a³

In either case, using this residue in (4), we find

    I = πi (−i/4a³) = π/(4a³)

or more precisely I = π/4|a|³, which is valid for either sign of a.

It is worth noting that, for this problem, the integrand falls off
sufficiently fast at infinity that we could choose to close the contour either
in the upper half plane or the lower half plane. Had we worked with the lower
half plane, we would have found a pole at −ia with opposite sign for the
residue. On the other hand, the clockwise contour would have contributed
another minus sign. So overall we would have found the same result either way
(which is a good thing).
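Since the integrand falls off like 1/x⁴, the result is easy to sketch by
direct quadrature with a modest cutoff:

```python
# Sketch: midpoint rule for ∫_0^∞ dx/(x²+a²)²; the tail beyond the cutoff
# contributes only about 1/(3·cutoff³).
import math

def quartic_integral(a, cutoff=200.0, n=200000):
    h = cutoff / n
    return sum(h / (((k + 0.5) * h) ** 2 + a * a) ** 2 for k in range(n))

val_a1 = quartic_integral(1.0)   # expect π/4
val_a2 = quartic_integral(2.0)   # expect π/(4·2³) = π/32
```

Both values agree with π/(4a³) to better than 10⁻⁶, consistent with the
residue computation above.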
7.2.22 Show that

    ∫₀^∞ cos(t²) dt = ∫₀^∞ sin(t²) dt = (1/2)√(π/2)

Again, when we see sin or cos, it is worth considering this as the imaginary
or real parts of the complex exponential. Hence we first choose to evaluate
the integral

    I = ∫₀^∞ e^{it²} dt

Taking the hint into account, we write down a (closed) contour integral using
a pie-shaped wedge of opening angle π/4: out along the real axis (this piece
is I), around the arc at radius R (I_R), and back along the ray at angle π/4
(I₂). Thus

    ∮_C e^{iz²} dz = I + I_R + I₂

We start by evaluating the contour integral on the left. Although e^{iz²} has
an essential singularity at infinity, that actually lies outside the contour
(this is certainly true for any fixed large radius R; it also remains outside
the contour in the limit R → ∞). Since there are no poles and no singularities
inside the contour, the contour integral vanishes. As a result,

    0 = I + I_R + I₂   ⇒   I = −I_R − I₂

We now show that the integral on I_R vanishes as well by a simple modification
to Jordan's lemma. For I_R, we let z = Re^{iθ}, so that

    I_R = ∫₀^{π/4} e^{iR²e^{2iθ}} iRe^{iθ} dθ
        = iR ∫₀^{π/4} e^{iR² cos 2θ} e^{−R² sin 2θ} e^{iθ} dθ

Hence

    |I_R| ≤ R ∫₀^{π/4} e^{−R² sin 2θ} dθ

Using the same argument as in Jordan's lemma, we can show that the integral
∫₀^{π/4} e^{−R² sin 2θ} dθ may be bounded by 1/R². Hence |I_R| itself falls
off as 1/R, and vanishes when we take R → ∞. As a result, we are left with
the observation

    I = −I₂

To examine I₂, we note that the path of integration is a line of constant
slope in the complex plane

    z = ρe^{iπ/4},   dz = e^{iπ/4} dρ

Thus

    I₂ = ∫_∞^0 e^{i(ρe^{iπ/4})²} e^{iπ/4} dρ = −e^{iπ/4} ∫₀^∞ e^{−ρ²} dρ

Note that the minus sign came from consideration of the direction of
integration along the I₂ contour. At this point, complex analysis does not
really help us, and we must recall (or look up) the gaussian integral

    ∫₀^∞ e^{−ρ²} dρ = ½ ∫_{−∞}^∞ e^{−ρ²} dρ = √π/2

Thus

    I = −I₂ = e^{iπ/4} √π/2 = [cos(π/4) + i sin(π/4)] √π/2
      = (1 + i)(1/2)√(π/2)

Since I = ∫₀^∞ e^{it²} dt, we may now take the real (cos) and imaginary (sin)
parts of I to obtain

    ∫₀^∞ cos(t²) dt = ∫₀^∞ sin(t²) dt = (1/2)√(π/2)

This is a curious result, as it is not directly obvious why integrating
cos(t²) and sin(t²) would give identical results.
Physics 451 Fall 2004
Homework Assignment #12 Solutions
Textbook problems: Ch. 8: 8.2.2, 8.2.5, 8.2.6, 8.2.10, 8.2.16
Chapter 8
8.2.2 The Laplace transform of Bessel's equation (n = 0) leads to

    (s² + 1)f′(s) + sf(s) = 0

Solve for f(s).

This equation is amenable to separation of variables

    df/f = −[s/(s² + 1)] ds
    ⇒ ∫ df/f = −∫ [s/(s² + 1)] ds
    ⇒ ln f = −½ ln(s² + 1) + c

Exponentiating this and redefining the constant, we obtain

    f(s) = C/√(s² + 1)
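As a sketch, we can confirm that f(s) = 1/√(s² + 1) (taking C = 1 as an
illustrative normalization) satisfies the transformed equation, using a
central finite difference for f′:

```python
# Sketch: check (s² + 1) f'(s) + s f(s) = 0 for f(s) = 1/sqrt(s² + 1).
import math

def f(s):
    return 1.0 / math.sqrt(s * s + 1.0)

def ode_residual(s, h=1e-6):
    fprime = (f(s + h) - f(s - h)) / (2.0 * h)  # central difference, O(h²)
    return (s * s + 1.0) * fprime + s * f(s)

r1 = ode_residual(0.7)
r2 = ode_residual(3.0)
```

Both residuals vanish to within the finite-difference error, of order h² plus
roundoff.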
8.2.5 A boat, coasting through the water, experiences a resisting force
proportional to vⁿ, v being the instantaneous velocity of the boat. Newton's
second law leads to

    m dv/dt = −kvⁿ

With v(t = 0) = v₀, x(t = 0) = 0, integrate to find v as a function of time
and then the distance.

This equation is separable

    dv/vⁿ = −(k/m) dt

For n ≠ 1, this may be integrated to give

    ∫_{v₀}^v dv′/v′ⁿ = −(k/m) ∫₀^t dt′
    ⇒ −[1/(n−1)][1/v^{n−1} − 1/v₀^{n−1}] = −(k/m)t
    ⇒ v(t) = [v₀^{−(n−1)} + (n−1)kt/m]^{−1/(n−1)}                    (1)

This may be integrated once more to obtain x as a function of t

    x(t) = ∫₀^t v(t′) dt′ = ∫₀^t [v₀^{−(n−1)} + (n−1)kt′/m]^{−1/(n−1)} dt′

Although this may look somewhat scary, it is in fact trivial to integrate, as
it is essentially t′ (plus a constant) to some fractional power. The only
difficulty is bookkeeping the various constants. For n ≠ 2, the result is

    x(t) = [1/(1 − 1/(n−1))][m/((n−1)k)]
               [v₀^{−(n−1)} + (n−1)kt/m]^{1−1/(n−1)} |₀^t
         = [m/((n−2)k)] { [v₀^{−(n−1)} + (n−1)kt/m]^{(n−2)/(n−1)}
               − v₀^{−(n−2)} }                                       (2)

If desired, the position and velocity, (2) and (1), may be rewritten as

    x(t) = [m/((n−2)k v₀^{n−2})] { [1 + (n−1)k v₀^{n−1} t/m]^{(n−2)/(n−1)} − 1 }
    v(t) = v₀ [1 + (n−1)k v₀^{n−1} t/m]^{−1/(n−1)}

As a result, it is possible to eliminate t and obtain the velocity as a
function of position

    v = v₀ [1 + (n−2)k v₀^{n−2} x/m]^{−1/(n−2)}                      (3)

Note that we may define

    x_k = m/[(n−2)k v₀^{n−2}]

which represents a length scale related to the resisting force and initial
velocity. In terms of x_k, the velocity and position relation may be given as

    v/v₀ = [1 + x/x_k]^{−1/(n−2)}   or   (v₀/v)^{n−2} = 1 + x/x_k

Note that, in fact, it is possible to obtain (3) directly from Newton's second
law by rewriting it as

    dv/v^{n−1} = −(k/m) v dt = −(k/m) dx

and then integrating

    ∫_{v₀}^v dv′/v′^{n−1} = −(k/m) ∫₀^x dx′
    ⇒ −[1/(n−2)][1/v^{n−2} − 1/v₀^{n−2}] = −(k/m)x
    ⇒ (v₀/v)^{n−2} = 1 + (n−2)k v₀^{n−2} x/m

So far, what we have done does not apply to the special cases n = 1 or n = 2.
For n = 1, we have

    dv/v = −(k/m) dt  ⇒  ln(v/v₀) = −(k/m)t  ⇒  v(t) = v₀ e^{−(k/m)t}

Integrating once more yields

    x(t) = (mv₀/k)(1 − e^{−(k/m)t})  ⇒  v/v₀ = 1 − kx/(mv₀)

which is in fact consistent with setting n = 1 in (3).

For n = 2, we have

    dv/v² = −(k/m) dt  ⇒  −1/v + 1/v₀ = −(k/m)t
    ⇒  v(t) = v₀/[1 + (kv₀/m)t]

Integrating this for position yields

    x(t) = (m/k) ln[1 + (kv₀/m)t]  ⇒  kx/m = ln(v₀/v)
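The closed form (1) is easy to sketch against a direct numerical integration
of Newton's law; the values n = 3, v₀ = 2, k = 0.5, m = 1 below are
illustrative choices:

```python
# Sketch: integrate m dv/dt = -k v^n with RK4 and compare with the closed form
# v(t) = v0 [1 + (n-1) k v0^(n-1) t / m]^(-1/(n-1)).
def v_exact(t, v0=2.0, k=0.5, m=1.0, n=3):
    return v0 * (1.0 + (n - 1) * k * v0 ** (n - 1) * t / m) ** (-1.0 / (n - 1))

def v_rk4(t_end, v0=2.0, k=0.5, m=1.0, n=3, steps=10000):
    h = t_end / steps
    v = v0
    f = lambda v: -k * v ** n / m
    for _ in range(steps):
        k1 = f(v); k2 = f(v + h * k1 / 2); k3 = f(v + h * k2 / 2); k4 = f(v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return v
```

With 10⁴ RK4 steps the numerical and analytic velocities agree to well below
10⁻⁹, confirming the bookkeeping of constants in (1).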
8.2.6 In the first-order differential equation dy/dx = f(x, y) the function
f(x, y) is a function of the ratio y/x:

    dy/dx = g(y/x)

Show that the substitution of u = y/x leads to a separable equation in u and x.

If we let u = y/x, this means that y = xu. So, by the product rule

    dy/dx = x du/dx + u

The above differential equation now becomes

    x du/dx + u = g(u)   ⇒   du/[g(u) − u] = dx/x

which is separated in u and x.
8.2.10 A certain differential equation has the form

    f(x)dx + g(x)h(y)dy = 0

with none of the functions f(x), g(x), h(y) identically zero. Show that a
necessary and sufficient condition for this equation to be exact is that
g(x) = const.

The check for exactness is

    ∂f(x)/∂y = ∂[g(x)h(y)]/∂x   or   0 = [dg(x)/dx] h(y)

Since h(y) is not identically zero, we may divide out by h(y) (at least in any
domain away from isolated zeros of h), leading to dg(x)/dx = 0, which
indicates that g(x) must be constant.
8.2.16 Bernoulli's equation

    dy/dx + f(x)y = g(x)yⁿ

is nonlinear for n ≠ 0 or 1. Show that the substitution u = y^{1−n} reduces
Bernoulli's equation to a linear equation.

For n ≠ 1, the substitution u = y^{1−n} is equivalent to y = u^{1/(1−n)}. Thus

    dy/dx = [1/(1−n)] u^{1/(1−n)−1} du/dx = [1/(1−n)] u^{n/(1−n)} du/dx

Bernoulli's equation then becomes

    [1/(1−n)] u^{n/(1−n)} du/dx + f(x)u^{1/(1−n)} = g(x)u^{n/(1−n)}

Multiplying by u^{−n/(1−n)} gives

    [1/(1−n)] du/dx + f(x)u = g(x)

or

    du/dx + (1 − n)f(x)u = (1 − n)g(x)
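As a sketch of the substitution at work, take the assumed example f(x) = 1,
g(x) = x, n = 2 (these choices are illustrative, not from the text). Then
u = 1/y turns dy/dx + y = xy² into the linear equation du/dx − u = −x, whose
general solution is u = Ce^x + x + 1. We can verify that y = 1/u solves the
original nonlinear equation:

```python
# Sketch: check y = 1/(C e^x + x + 1) against dy/dx + y = x y² (f=1, g=x, n=2).
import math

def y(x, C=1.0):
    return 1.0 / (C * math.exp(x) + x + 1.0)

def bernoulli_residual(x, C=1.0, h=1e-6):
    dydx = (y(x + h, C) - y(x - h, C)) / (2.0 * h)  # central difference
    return dydx + y(x, C) - x * y(x, C) ** 2

r1 = bernoulli_residual(0.5)
r2 = bernoulli_residual(-1.0)
```

The residual vanishes to finite-difference accuracy, confirming that solving
the linear u-equation solves the nonlinear y-equation.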
Physics 451 Fall 2004
Homework Assignment #13 Solutions
Textbook problems: Ch. 8: 8.4.1, 8.4.3, 8.5.6, 8.5.11, 8.5.14, 8.5.17
Chapter 8
8.4.1 Show that Legendre's equation has regular singularities at x = −1, 1,
and ∞.

Legendre's equation may be written as

    y″ − [2x/(1 − x²)]y′ + [l(l + 1)/(1 − x²)]y = 0

so that

    P(x) = −2x/(1 − x²) = 2x/[(x − 1)(x + 1)],
    Q(x) = l(l + 1)/(1 − x²) = −l(l + 1)/[(x − 1)(x + 1)]

Written in this fashion, we see that both P(x) and Q(x) have simple poles at
x = 1 and x = −1. This is sufficient to indicate that these two points are
regular singular points.

For the point at ∞, we make the substitution x = 1/z. As worked out in the
text, we end up with

    P̃(z) = [2z − P(z^{−1})]/z² = [2z + 2z^{−1}/(1 − z^{−2})]/z²
          = 2/z + 2/[z(z² − 1)] = 2z/(z² − 1)

and

    Q̃(z) = Q(z^{−1})/z⁴ = [l(l + 1)/(1 − z^{−2})]/z⁴ = l(l + 1)/[z²(z² − 1)]

Examining the behavior of P̃ and Q̃ as z → 0, we see that P̃ is regular, while
Q̃ has a double pole. Because of the double pole in Q̃, Legendre's equation
also has a regular singularity at x = ∞.
8.4.3 Show that the substitution

    x → (1 − x)/2,   a = −l,   b = l + 1,   c = 1

converts the hypergeometric equation into Legendre's equation.

Making the above substitution (along with dx → −½dx, which implies
y′ → (−2)y′ and y″ → (−2)²y″) into the hypergeometric equation, we find

    x(x − 1)y″ + [(1 + a + b)x − c]y′ + aby = 0
    ⇒ [(1 − x)/2][(1 − x)/2 − 1](−2)²y″
          + [(1 − l + (l + 1))(1 − x)/2 − 1](−2)y′ − l(l + 1)y = 0
    ⇒ −(1 − x²)y″ + 2xy′ − l(l + 1)y = 0

Changing an overall sign yields Legendre's equation

    (1 − x²)y″ − 2xy′ + l(l + 1)y = 0

This indicates that Legendre's equation is in fact a special case of the more
general hypergeometric equation.
8.5.6 Develop series solutions for Hermite's differential equation

a) y″ − 2xy′ + 2αy = 0

Since x = 0 is an ordinary point, we develop a simple Taylor series solution

    y = Σ_{n=0}^∞ aₙxⁿ,   y′ = Σ_{n=0}^∞ naₙx^{n−1},
    y″ = Σ_{n=0}^∞ n(n−1)aₙx^{n−2}

Substituting this into Hermite's equation, we find

    Σ_{n=0}^∞ [n(n−1)aₙx^{n−2} − 2naₙxⁿ + 2αaₙxⁿ] = 0
    ⇒ Σ_{n=0}^∞ [(n+2)(n+1)a_{n+2} + 2(α − n)aₙ]xⁿ = 0

To obtain the second line, we have made the substitution n → n + 2 in the
first term of the series so that we could collect identical powers of xⁿ.
Since this series vanishes for all values of x, each coefficient must vanish.
This yields the recursion relation

    a_{n+2} = [2(n − α)/((n+2)(n+1))] aₙ                             (1)

which determines all higher aₙ's, given a₀ and a₁ as a starting point.

In fact, we obtain two series, one for n even and one for n odd. For n even,
we set a₀ = 1 and find

    a₀ = 1,  a₂ = 2(−α)/2!,
    a₄ = [2(2−α)/(4·3)]a₂ = 2²(−α)(2−α)/4!,  etc.

This gives the even solution

    y_even = 1 + 2(−α)x²/2! + 2²(−α)(2−α)x⁴/4!
               + 2³(−α)(2−α)(4−α)x⁶/6! + ···                         (2)

For n odd, we set a₁ = 1 and find

    a₁ = 1,  a₃ = 2(1−α)/3!,
    a₅ = [2(3−α)/(5·4)]a₃ = 2²(1−α)(3−α)/5!,  etc.

This results in the odd solution

    y_odd = x + 2(1−α)x³/3! + 2²(1−α)(3−α)x⁵/5!
              + 2³(1−α)(3−α)(5−α)x⁷/7! + ···                         (3)

Note that, at an ordinary point, we did not have to solve the indicial
equation. However, if we had chosen to do so, we would have found k = 0 or
k = 1, yielding the even and odd solutions, respectively.

b) Show that both series solutions are convergent for all x, the ratio of
successive coefficients behaving, for large index, like the corresponding
ratio in the expansion of exp(2x²).

To test for convergence, all we need is the ratio test

    lim_{n→∞} |aₙxⁿ/(a_{n+2}x^{n+2})| = lim_{n→∞} |(n+2)(n+1)/(2(n−α)x²)|
        = lim_{n→∞} n/(2x²) = ∞                                      (4)

Since this is larger than 1, the series converges for all values of x. Note
that the ratio aₙ/a_{n+2} was directly obtained from the recursion relation
(1), and this result is valid for both y_even and y_odd. Furthermore, if we
compare this with exp(2x²), we see that the n-th term in the Taylor series of
the exponential is bₙ = (2x²)ⁿ/n!, which leads to a ratio

    b_{n−1}/bₙ = n/(2x²)

in direct correspondence with that of (4). Hence the solutions to Hermite's
equation are (generically) asymptotic to exp(2x²).

c) Show that by appropriate choice of α the series solutions may be cut off
and converted to finite polynomials.

Examination of the series solutions (2) and (3) indicates that y_even
terminates for α = 0, 2, 4, . . . and y_odd terminates for α = 1, 3, 5, . . ..
This means that for α a non-negative integer either y_even or y_odd
(depending on α being even or odd) terminates, yielding a finite Hermite
polynomial.
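The termination is easy to sketch directly from the recursion (1). For the
illustrative choice α = 4, the even series stops after x⁴ and yields a
polynomial proportional to the Hermite polynomial H₄(x) = 16x⁴ − 48x² + 12:

```python
# Sketch: build series coefficients from a_{n+2} = 2(n - α)/((n+2)(n+1)) a_n.
def series_coeffs(alpha, a0=1.0, a1=0.0, nmax=12):
    a = [0.0] * (nmax + 1)
    a[0], a[1] = a0, a1
    for n in range(nmax - 1):
        a[n + 2] = 2.0 * (n - alpha) / ((n + 2) * (n + 1)) * a[n]
    return a

coeffs = series_coeffs(4)   # even solution, α = 4
# coeffs gives y = 1 - 4x² + (4/3)x⁴, i.e. H_4(x)/12; all higher terms vanish
```

The factor (n − α) in the recursion kills a₆ and all subsequent even
coefficients once n reaches α, exactly as argued above.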
8.5.11 Obtain two series solutions of the conuent hypergeometric equation
xy

+ (c x)y

ay = 0
Test your solutions for convergence.
We rst observe that this equation has a regular singular point at x = 0 and an
irregular one at x = . We would like to develop a series solution around the
regular singular point at x = 0. Thus we start with the indicial equation
y

+
c x
x
y

a
x
y = 0 p
0
= c, q
0
= 0
and
k(k 1) +p
0
k +q
0
= 0 k(k 1) +ck = 0 k(k +c 1) = 0
This shows that the indices at x = 0 are k
1
= 0 and k
2
= 1 c. We start with
k
1
= 0. Since the index vanishes, we attempt an ordinary Taylor series solution
y =

n=0
a
n
x
n
, y

n=0
na
n
x
n1
, y

n=0
n(n 1)a
n
x
n2
Substituting this into the conuent hypergeometric equation, we obtain

n=0
[n(n 1)a
n
x
n1
+ nca
n
x
n1
na
n
x
n
aa
n
x
n
] = 0
Making the substition n n + 1 in the rst two terms and simplifying gives

n=0
[(n + 1)(c + n)a
n+1
(a + n)a
n
]x
n
= 0
Therefore we have a recursion relation of the form
a
n+1
=
a + n
(n + 1)(c + n)
a
n
(5)
Setting a
0
= 1, the rst few terms in the series becomes
a
0
= 1, a
1
=
a
c
, a
2
=
a + 1
2(c + 1)
a
1
=
a(a + 1)
2!c(c + 1)
,
a
3
=
a + 2
3(c + 2)
a
2
=
a(a + 1)(a + 2)
3!c(c + 1)(c + 2)
This indicates that
y = 1 +
a
c
x +
a(a + 1)
c(c + 1)
x
2
2!
+
a(a + 1)(a + 2)
c(c + 1)(c + 2)
x
3
3!
+
=

n=0
(a)
n
(c)
n
x
n
n!
(6)
where the notation (a)
n
is given by
(a)
n
= a(a + 1)(a + 2) (a + n 2)(a + n 1) =
(a + n)
(a)
(7)
This is the regular solution of the conuent hypergeometric equation. We now
test this series for convergence using the ratio test. Given the recursion relation
(5), we nd
lim
n
a
n
x
n
a
n+1
x
n+1
= lim
n
(n + 1)(c + n)
(a + n)x
= lim
n
n
x
=
Therefore this series converges for all values of x, unless c is a non-positive integer,
in which case the denominators in (6) will eventually all blow up.
Turning next to k
2
= 1 c, we seek a series solution of the form
y = x
1c

n=0
a
n
x
n
, y

= x
c

n=0
(n + 1 c)a
n
x
n
,
y

= x
1c

n=0
(n + 1 c)(n c)a
n
x
n
Substituting this into the conuent hypergeometric equation, we nd
x
1c

n=0
[(n+1c)(nc)a
n
x
n1
+c(n+1c)a
n
x
n1
(n+1c)a
n
x
n
aa
n
x
n
] = 0
Performing the shift n n + 1 in the rst two terms and simplifying, we obtain
x
1c

n=0
[(n + 2 c)(n + 1)a
n+1
(n + 1 + a c)a
n
]x
n
= 0
which yields the recursion relation
a
n+1
=
n + 1 + a c
(n + 2 c)(n + 1)
a
n
Supposing that a
0
= 1, the rst few terms in this series are given by
a
0
= 1, a
1
=
1 + a c
2 c
, a
2
=
2 + a c
2(3 c)
a
1
=
(1 + a c)(2 + a c)
2!(2 c)(3 c)
,
a
3
=
3 + a c
3(4 c)
a
2
=
(1 + a c)(2 + a c)(3 + a c)
3!(2 c)(3 c)(4 c)
Following the notation of (7), we may write the series solution as
y
new
= x
1c

n=0
(1 + a c)
n
(2 c)
n
x
n
n!
(8)
This series is rather similar to the standard one (6). In fact, the solution of (6)
may be converted into y
new
by making the substitions a a+1c and c 2c
and multiplying y by the prefactor x
1c
. [Why this works may be seen by making
the substitutions directly into the conuent hypergeometric equation itself.] As
a result, by the same ratio test argument as before, y
new
converges for all values
of x, except when c = 2, 3, 4, . . . where the denominators in (8) would eventually
all blow up.
To summarize, for non-integer values of c, the two solutions (6) and (8) form
a complete linearly independent set. For c = 1, both (6) and (8) are precisely
the same, and we have found only one solution. For other integer values of c,
only one of (6) or (8) makes sense (and the other one blows up because of a bad
denominator). So in fact for all integer c, we have only obtained one solution
by the series method, and the second solution would be of the irregular form
(which is not fun at all).
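The substitution claim above can be checked at the level of the series coefficients. The sketch below (plain Python, with arbitrarily chosen non-integer parameters) confirms that the recursion behind (8) is exactly the recursion behind (6) after a → a+1−c and c → 2−c, the x^{1−c} prefactor aside:

```python
from math import isclose

def kummer_coeffs(a, c, nmax):
    """Coefficients of series (6): c_n = (a)_n / ((c)_n n!)."""
    coeffs = [1.0]
    for n in range(nmax - 1):
        coeffs.append(coeffs[-1] * (a + n) / ((c + n) * (n + 1)))
    return coeffs

def ynew_coeffs(a, c, nmax):
    """Coefficients of series (8), from a_{n+1} = (n+1+a-c)/((n+2-c)(n+1)) a_n."""
    coeffs = [1.0]
    for n in range(nmax - 1):
        coeffs.append(coeffs[-1] * (n + 1 + a - c) / ((n + 2 - c) * (n + 1)))
    return coeffs

# Coefficient-by-coefficient, y_new is series (6) with a -> a+1-c, c -> 2-c.
a, c = 0.7, 0.4
for u, v in zip(ynew_coeffs(a, c, 10), kummer_coeffs(a + 1 - c, 2 - c, 10)):
    assert isclose(u, v, rel_tol=1e-12)
print("substitution identity verified")
```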
8.5.14 To a good approximation, the interaction of two nucleons may be described by a
mesonic potential

    V = A e^{−ax} / x

attractive for A negative. Develop a series solution of the resultant Schrödinger wave
equation

    (ħ²/2m) d²ψ/dx² + (E − V)ψ = 0
We begin by substituting the explicit potential in the Schrödinger equation

    d²ψ/dx² + ( 2mE/ħ² − (2mA/ħ²) e^{−ax}/x ) ψ = 0

As in the text, it would be convenient to define

    E′ = 2mE/ħ²,    A′ = 2mA/ħ²

In this case, we want to solve the second order equation

    ψ″ + ( E′ − A′ e^{−ax}/x ) ψ = 0        (9)
which has a regular singular point at x = 0 and an irregular one at x = ∞. We
now develop a series solution around x = 0. Noting that

    P(x) = 0,    Q(x) = E′ − A′ e^{−ax}/x    ⇒    p_0 = 0,    q_0 = 0

the indicial equation is trivial, k(k − 1) = 0. Since we have k_1 = 1 and k_2 = 0,
we look for the k_1 = 1 series (the larger-index one always works). Here we have
to worry that e^{−ax} is non-polynomial. As a result, we will not be able to obtain
a simple recursion relation. We thus content ourselves with just working out a
few terms in the series. Normalizing the first term in the series to be x, we take
    y = x + a_2 x² + a_3 x³ + ⋯,    y′ = 1 + 2a_2 x + 3a_3 x² + ⋯,    y″ = 2a_2 + 6a_3 x + ⋯
Substitution into (9) gives

    2a_2 + 6a_3 x + ⋯ + (E′x − A′ e^{−ax})(1 + a_2 x + a_3 x² + ⋯) = 0
Since we have used a series for the wavefunction ψ(x), we ought to also expand
the exponential as a series, e^{−ax} = 1 − ax + (1/2)a²x² − ⋯. Keeping appropriate
powers of x, we find

    0 = 2a_2 + 6a_3 x + ⋯ + (E′x − A′(1 − ax + ⋯))(1 + a_2 x + ⋯)
      = 2a_2 + 6a_3 x + ⋯ + (−A′ + (aA′ + E′)x + ⋯)(1 + a_2 x + ⋯)
      = 2a_2 + 6a_3 x + ⋯ + (−A′) + (aA′ + E′ − a_2 A′)x + ⋯
      = (2a_2 − A′) + (6a_3 + aA′ + E′ − a_2 A′)x + ⋯
Setting the coefficients to zero gives

    a_2 = (1/2)A′,    a_3 = (1/6)(a_2 A′ − E′ − aA′) = (1/6)((1/2)A′² − E′ − aA′)

The series solution is then of the form

    ψ = x + (1/2)A′ x² + (1/6)((1/2)A′² − E′ − aA′) x³ + ⋯
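The coefficients a_2 and a_3 can be sanity-checked numerically: if they are right, the truncated cubic leaves a residual of order x² when inserted into (9). A sketch in plain Python (the values of a, E′, A′ are arbitrary, written Ep and Ap below):

```python
from math import exp

# Hedged numerical check: with a2 = A'/2 and a3 = (A'^2/2 - E' - a A')/6,
# the truncated series psi = x + a2 x^2 + a3 x^3 should satisfy (9) up to O(x^2).
a, Ep, Ap = 1.3, 0.8, 2.1   # arbitrary illustrative parameter values
a2 = Ap / 2
a3 = (Ap**2 / 2 - Ep - a * Ap) / 6

def residual(x):
    psi = x + a2 * x**2 + a3 * x**3
    psi2 = 2 * a2 + 6 * a3 * x          # exact second derivative of the cubic
    return psi2 + (Ep - Ap * exp(-a * x) / x) * psi

# Halving x should quarter the residual if the residual is O(x^2).
r1, r2 = residual(1e-3), residual(5e-4)
assert abs(r2 / r1 - 0.25) < 0.01
print("residual scales as x^2: coefficients a2, a3 check out")
```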
8.5.17 The modified Bessel function I_0(x) satisfies the differential equation

    x² (d²/dx²) I_0(x) + x (d/dx) I_0(x) − x² I_0(x) = 0

From Exercise 7.4.4 the leading term in an asymptotic expansion is found to be

    I_0(x) ∼ e^x / √(2πx)

Assume a series of the form

    I_0(x) ∼ (e^x / √(2πx)) (1 + b_1 x^{−1} + b_2 x^{−2} + ⋯)

Determine the coefficients b_1 and b_2.
The (modified) Bessel equation has a regular singular point at x = 0 and an
irregular one at x = ∞. Here we are asked to develop an asymptotic expansion
around x = ∞. Although this is an irregular point (witness the essential singularity
e^x), we are given the form of the series. As a result, all we have to do is to take
derivatives and insert the expressions into the differential equation. To make it
easier to obtain the derivatives, we write

    I_0(x) ∼ (e^x / √(2π)) ( x^{−1/2} + b_1 x^{−3/2} + b_2 x^{−5/2} + b_3 x^{−7/2} + ⋯ )
The derivative d/dx acts either on the e^x factor or on the series in the parentheses.
The resulting first derivative is

    I_0′(x) ∼ (e^x / √(2π)) ( x^{−1/2} + (b_1 − 1/2) x^{−3/2} + (b_2 − (3/2)b_1) x^{−5/2} + (b_3 − (5/2)b_2) x^{−7/2} + ⋯ )
Taking one more derivative yields

    I_0″(x) ∼ (e^x / √(2π)) ( x^{−1/2} + (b_1 − 1) x^{−3/2} + (b_2 − 3b_1 + 3/4) x^{−5/2} + (b_3 − 5b_2 + (15/4)b_1) x^{−7/2} + ⋯ )
Substituting the above into the modified Bessel equation and collecting like powers
of x, we find

    0 ∼ (e^x / √(2π)) [ x^{3/2} + (b_1 − 1) x^{1/2} + (b_2 − 3b_1 + 3/4) x^{−1/2} + (b_3 − 5b_2 + (15/4)b_1) x^{−3/2} + ⋯
                        + x^{1/2} + (b_1 − 1/2) x^{−1/2} + (b_2 − (3/2)b_1) x^{−3/2} + ⋯
                        − x^{3/2} − b_1 x^{1/2} − b_2 x^{−1/2} − b_3 x^{−3/2} − ⋯ ]
      ∼ (e^x / √(2π)) [ (1/4 − 2b_1) x^{−1/2} + ((9/4)b_1 − 4b_2) x^{−3/2} + ⋯ ]
Setting the coefficients to zero gives

    b_1 = 1/8,    b_2 = (9/16) b_1 = 9/128

so that the asymptotic series develops as

    I_0(x) ∼ (e^x / √(2πx)) (1 + (1/8) x^{−1} + (9/128) x^{−2} + ⋯)
Note that, in order to find b_1 and b_2, we needed to keep track of the b_3 coefficient,
even though it dropped out in the end.
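The result can be compared against the convergent power series I_0(x) = Σ_k (x/2)^{2k}/(k!)², which is exact (for moderate x) and needs nothing beyond the standard library. A sketch in plain Python:

```python
from math import exp, pi, sqrt

def i0_series(x, nmax=120):
    """Modified Bessel I_0 from its convergent power series,
    I_0(x) = sum_k (x/2)^(2k) / (k!)^2 (adequate for moderate x)."""
    term, total = 1.0, 1.0
    for k in range(1, nmax):
        term *= (x / 2) ** 2 / k**2
        total += term
    return total

def i0_asymptotic(x):
    """Two-term asymptotic series derived above."""
    return exp(x) / sqrt(2 * pi * x) * (1 + 1 / (8 * x) + 9 / (128 * x**2))

x = 20.0
exact = i0_series(x)
assert abs(i0_asymptotic(x) / exact - 1) < 1e-4
# The correction terms genuinely help: the bare leading term is worse.
leading = exp(x) / sqrt(2 * pi * x)
assert abs(leading / exact - 1) > abs(i0_asymptotic(x) / exact - 1)
print("two-term asymptotic series matches I_0(20) closely")
```

At x = 20 the two-term series is already accurate to a few parts in 10^5, while the bare leading term is off by roughly the expected 1/(8x).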