
# Physics 451

Fall 2004
Homework Assignment #1 Solutions

## Textbook problems: Ch. 1: 1.1.5, 1.3.3, 1.4.7, 1.5.5, 1.5.6; Ch. 3: 3.2.4, 3.2.19, 3.2.27

Chapter 1
1.1.5 A sailboat sails for 1 hr at 4 km/hr (relative to the water) on a steady compass heading of 40° east of north. The sailboat is simultaneously carried along by a current. At the end of the hour the boat is 6.12 km from its starting point. The line from its starting point to its location lies 60° east of north. Find the x (easterly) and y (northerly) components of the water's velocity.

This is a straightforward relative velocity (vector addition) problem. Let $\vec v_{bl}$ denote the velocity of the boat with respect to land, $\vec v_{bw}$ the velocity of the boat with respect to the water, and $\vec v_{wl}$ the velocity of the water with respect to land. Then
$$\vec v_{bl} = \vec v_{bw} + \vec v_{wl}$$
where
$$\vec v_{bw} = 4\text{ km/hr} \;@\; 50^\circ = (2.57\,\hat x + 3.06\,\hat y)\text{ km/hr}$$
$$\vec v_{bl} = 6.12\text{ km/hr} \;@\; 30^\circ = (5.30\,\hat x + 3.06\,\hat y)\text{ km/hr}$$
(the angles are measured counterclockwise from the easterly $x$ axis, so 40° east of north is 50° and 60° east of north is 30°). Thus
$$\vec v_{wl} = \vec v_{bl} - \vec v_{bw} = 2.73\,\hat x\text{ km/hr}$$
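As a quick numerical sanity check (my own addition, not part of the original solution), the vector addition above can be verified directly; the 0.01 km/hr tolerance absorbs the rounding in the quoted components:

```python
import math

def polar(speed, deg):
    """Components of a vector given magnitude and angle from the +x (east) axis."""
    rad = math.radians(deg)
    return (speed * math.cos(rad), speed * math.sin(rad))

v_bw = polar(4.0, 50.0)    # boat w.r.t. water: 40 deg east of north = 50 deg from east
v_bl = polar(6.12, 30.0)   # boat w.r.t. land: 60 deg east of north = 30 deg from east
v_wl = (v_bl[0] - v_bw[0], v_bl[1] - v_bw[1])  # water w.r.t. land

assert abs(v_wl[0] - 2.73) < 0.01   # easterly component, ~2.73 km/hr
assert abs(v_wl[1]) < 0.01          # northerly component, ~0
```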
1.3.3 The vector $\vec r$, starting at the origin, terminates at and specifies the point in space $(x, y, z)$. Find the surface swept out by the tip of $\vec r$ if

(a) $(\vec r - \vec a\,) \cdot \vec a = 0$

The vanishing of the dot product indicates that the vector $\vec r - \vec a$ is perpendicular to the constant vector $\vec a$. As a result, $\vec r - \vec a$ must lie in a plane perpendicular to $\vec a$. This means $\vec r$ itself must lie in a plane passing through the tip of $\vec a$ and perpendicular to $\vec a$.

*(figure: the vectors $\vec a$, $\vec r$ and $\vec r - \vec a$, with $\vec r - \vec a$ lying in the plane)*

(b) $(\vec r - \vec a\,) \cdot \vec r = 0$

This time the vector $\vec r - \vec a$ has to be perpendicular to the position vector $\vec r$ itself. It is perhaps harder to see what this is in three dimensions. However, for two dimensions, we find

*(figure: $\vec r$ and $\vec r - \vec a$ meeting at right angles on a circle through the origin and the tip of $\vec a$)*

which gives a circle. In three dimensions, this is a sphere. Note that we can also complete the square to obtain
$$(\vec r - \vec a\,) \cdot \vec r = |\vec r - \tfrac12\vec a\,|^2 - |\tfrac12\vec a\,|^2$$
Hence we end up with the equation for a sphere of radius $|\vec a\,|/2$ centered at the point $\vec a/2$:
$$|\vec r - \tfrac12\vec a\,|^2 = |\tfrac12\vec a\,|^2$$
1.4.7 Prove that $(\vec A \times \vec B\,) \cdot (\vec A \times \vec B\,) = (AB)^2 - (\vec A \cdot \vec B\,)^2$.

This can be shown just by a straightforward computation. Since
$$\vec A \times \vec B = (A_yB_z - A_zB_y)\hat x + (A_zB_x - A_xB_z)\hat y + (A_xB_y - A_yB_x)\hat z$$
we find
$$\begin{aligned}
|\vec A \times \vec B\,|^2 &= (A_yB_z - A_zB_y)^2 + (A_zB_x - A_xB_z)^2 + (A_xB_y - A_yB_x)^2\\
&= A_x^2B_y^2 + A_x^2B_z^2 + A_y^2B_x^2 + A_y^2B_z^2 + A_z^2B_x^2 + A_z^2B_y^2\\
&\qquad - 2A_xB_xA_yB_y - 2A_xB_xA_zB_z - 2A_yB_yA_zB_z\\
&= (A_x^2 + A_y^2 + A_z^2)(B_x^2 + B_y^2 + B_z^2) - (A_xB_x + A_yB_y + A_zB_z)^2
\end{aligned}$$
where we had to add and subtract $A_x^2B_x^2 + A_y^2B_y^2 + A_z^2B_z^2$ and do some factorization to obtain the last line.

However, there is a more elegant approach to this problem. Recall that cross products are related to $\sin\theta$ and dot products are related to $\cos\theta$. Then
$$|\vec A \times \vec B\,|^2 = (AB\sin\theta)^2 = (AB)^2(1 - \cos^2\theta) = (AB)^2 - (AB\cos\theta)^2 = (AB)^2 - (\vec A \cdot \vec B\,)^2$$
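The identity is easy to spot-check numerically; this is just a sketch with arbitrary test vectors (the helper names are mine):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

A, B = (1.0, 2.0, 3.0), (-4.0, 5.0, 0.5)     # arbitrary test vectors
lhs = dot(cross(A, B), cross(A, B))           # |A x B|^2
rhs = dot(A, A)*dot(B, B) - dot(A, B)**2      # (AB)^2 - (A.B)^2

assert abs(lhs - rhs) < 1e-12
```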

1.5.5 The orbital angular momentum $\vec L$ of a particle is given by $\vec L = \vec r \times \vec p = m\vec r \times \vec v$, where $\vec p$ is the linear momentum. With linear and angular velocity related by $\vec v = \vec\omega \times \vec r$, show that
$$\vec L = mr^2[\vec\omega - \hat r(\hat r \cdot \vec\omega)]$$
Here, $\hat r$ is a unit vector in the $\vec r$ direction.

Using $\vec L = m\vec r \times \vec v$ and $\vec v = \vec\omega \times \vec r$, we find
$$\vec L = m\,\vec r \times (\vec\omega \times \vec r\,)$$
Because of the double cross product, this is the perfect opportunity to use the BAC–CAB rule $\vec A \times (\vec B \times \vec C\,) = \vec B(\vec A \cdot \vec C\,) - \vec C(\vec A \cdot \vec B\,)$:
$$\vec L = m[\vec\omega(\vec r \cdot \vec r\,) - \vec r(\vec r \cdot \vec\omega)] = m[\vec\omega\,r^2 - \vec r(\vec r \cdot \vec\omega)]$$
Using $\vec r = r\hat r$, and factoring out $r^2$, we then obtain
$$\vec L = mr^2[\vec\omega - \hat r(\hat r \cdot \vec\omega)] \qquad (1)$$

1.5.6 The kinetic energy of a single particle is given by $T = \frac12 mv^2$. For rotational motion this becomes $\frac12 m(\vec\omega \times \vec r\,)^2$. Show that
$$T = \tfrac12 m[r^2\omega^2 - (\vec r \cdot \vec\omega)^2]$$
We can use the result of problem 1.4.7:
$$T = \tfrac12 m(\vec\omega \times \vec r\,)^2 = \tfrac12 m[(\omega r)^2 - (\vec\omega \cdot \vec r\,)^2] = \tfrac12 m[r^2\omega^2 - (\vec r \cdot \vec\omega)^2]$$
Note that we could have written this in terms of unit vectors:
$$T = \tfrac12 mr^2[\omega^2 - (\hat r \cdot \vec\omega)^2]$$
Comparing this with (1) above, we find that
$$T = \tfrac12\vec L \cdot \vec\omega$$
which is not a coincidence.
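Both results, the angular momentum formula (1) and the relation $T = \tfrac12\vec L\cdot\vec\omega$, can be spot-checked numerically (a sketch of my own with arbitrary $\vec r$ and $\vec\omega$):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def scale(c, a):
    return tuple(c*x for x in a)

m = 2.0
r = (1.0, -2.0, 2.0)             # |r| = 3
w = (0.5, 1.5, -1.0)             # angular velocity
v = cross(w, r)                  # v = w x r

L = scale(m, cross(r, v))        # L = m r x (w x r)
r2 = dot(r, r)
rhat = scale(r2**-0.5, r)
L_formula = scale(m*r2, tuple(wi - dot(rhat, w)*ri for wi, ri in zip(w, rhat)))

T = 0.5 * m * dot(v, v)          # kinetic energy of rotation

assert all(abs(a - b) < 1e-12 for a, b in zip(L, L_formula))
assert abs(T - 0.5*dot(L, w)) < 1e-12
```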

Chapter 3
3.2.4 (a) Complex numbers, $a + ib$, with $a$ and $b$ real, may be represented by (or are isomorphic with) $2\times2$ matrices:
$$a + ib \leftrightarrow \begin{pmatrix} a & b\\ -b & a \end{pmatrix}$$
Show that this matrix representation is valid for (i) addition and (ii) multiplication.

Let us start with addition. For complex numbers, we have (straightforwardly)
$$(a + ib) + (c + id) = (a + c) + i(b + d)$$
whereas, if we used matrices we would get
$$\begin{pmatrix} a & b\\ -b & a \end{pmatrix} + \begin{pmatrix} c & d\\ -d & c \end{pmatrix} = \begin{pmatrix} a + c & b + d\\ -(b + d) & a + c \end{pmatrix}$$
which shows that the sum of matrices yields the proper representation of the complex number $(a + c) + i(b + d)$.

We now handle multiplication in the same manner. First, we have
$$(a + ib)(c + id) = (ac - bd) + i(ad + bc)$$
while matrix multiplication gives
$$\begin{pmatrix} a & b\\ -b & a \end{pmatrix}\begin{pmatrix} c & d\\ -d & c \end{pmatrix} = \begin{pmatrix} ac - bd & ad + bc\\ -(ad + bc) & ac - bd \end{pmatrix}$$
which is again the correct result.

(b) Find the matrix corresponding to $(a + ib)^{-1}$.

We can find the matrix in two ways. We first do standard complex arithmetic
$$(a + ib)^{-1} = \frac{1}{a + ib} = \frac{a - ib}{(a + ib)(a - ib)} = \frac{1}{a^2 + b^2}(a - ib)$$
This corresponds to the $2\times2$ matrix
$$(a + ib)^{-1} \leftrightarrow \frac{1}{a^2 + b^2}\begin{pmatrix} a & -b\\ b & a \end{pmatrix}$$
Alternatively, we first convert to a matrix representation, and then find the inverse matrix
$$(a + ib)^{-1} \leftrightarrow \begin{pmatrix} a & b\\ -b & a \end{pmatrix}^{-1} = \frac{1}{a^2 + b^2}\begin{pmatrix} a & -b\\ b & a \end{pmatrix}$$
Either way, we obtain the same result.
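The isomorphism can be exercised in a few lines of Python; the helper names (`to_mat`, `mmul`, ...) are my own, not from the text:

```python
def to_mat(z):
    """Represent a + ib as the 2x2 matrix ((a, b), (-b, a))."""
    return ((z.real, z.imag), (-z.imag, z.real))

def madd(A, B):
    return tuple(tuple(x + y for x, y in zip(ra, rb)) for ra, rb in zip(A, B))

def mmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def minv(A):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # = a^2 + b^2 for this representation
    return ((A[1][1]/det, -A[0][1]/det), (-A[1][0]/det, A[0][0]/det))

def close(A, B):
    return all(abs(x - y) < 1e-12 for ra, rb in zip(A, B) for x, y in zip(ra, rb))

z, w = 3 + 4j, 1 - 2j
assert madd(to_mat(z), to_mat(w)) == to_mat(z + w)   # addition is preserved
assert mmul(to_mat(z), to_mat(w)) == to_mat(z * w)   # multiplication is preserved
assert close(minv(to_mat(z)), to_mat(1 / z))         # and so are inverses
```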
3.2.19 An operator $\vec P$ commutes with $J_x$ and $J_y$, the $x$ and $y$ components of an angular momentum operator. Show that $\vec P$ commutes with the third component of angular momentum; that is,
$$[\vec P, J_z] = 0$$
We begin with the statement that $\vec P$ commutes with $J_x$ and $J_y$. This may be expressed as $[\vec P, J_x] = 0$ and $[\vec P, J_y] = 0$, or equivalently as $\vec P J_x = J_x\vec P$ and $\vec P J_y = J_y\vec P$. We also take the hint into account and note that $J_x$ and $J_y$ satisfy the commutation relation
$$[J_x, J_y] = iJ_z$$
or equivalently $J_z = -i[J_x, J_y]$. Substituting this in for $J_z$, we find the double commutator
$$[\vec P, J_z] = [\vec P, -i[J_x, J_y]] = -i[\vec P, [J_x, J_y]]$$
Note that we are able to pull the $-i$ factor out of the commutator. From here, we may expand all the commutators to find
$$\begin{aligned}
[\vec P, [J_x, J_y]] &= \vec P J_xJ_y - \vec P J_yJ_x - J_xJ_y\vec P + J_yJ_x\vec P\\
&= J_x\vec P J_y - J_y\vec P J_x - J_x\vec P J_y + J_y\vec P J_x\\
&= 0
\end{aligned}$$
To get from the first to the second line, we commuted $\vec P$ past either $J_x$ or $J_y$ as appropriate. Of course, a quicker way to do this problem is to use the Jacobi identity $[A, [B, C]] = [B, [A, C]] - [C, [A, B]]$ to obtain
$$[\vec P, [J_x, J_y]] = [J_x, [\vec P, J_y]] - [J_y, [\vec P, J_x]]$$
The right hand side clearly vanishes, since $\vec P$ commutes with both $J_x$ and $J_y$.
3.2.27 (a) The operator $\operatorname{Tr}$ replaces a matrix $A$ by its trace; that is
$$\operatorname{Tr}(A) = \operatorname{trace}(A) = \sum_i a_{ii}$$
Show that $\operatorname{Tr}$ is a linear operator.

Recall that to show that $\operatorname{Tr}$ is linear we may prove that $\operatorname{Tr}(\alpha A + \beta B) = \alpha\operatorname{Tr}(A) + \beta\operatorname{Tr}(B)$, where $\alpha$ and $\beta$ are numbers. However, this is a simple property of arithmetic
$$\operatorname{Tr}(\alpha A + \beta B) = \sum_i(\alpha a_{ii} + \beta b_{ii}) = \alpha\sum_i a_{ii} + \beta\sum_i b_{ii} = \alpha\operatorname{Tr}(A) + \beta\operatorname{Tr}(B)$$

(b) The operator $\det$ replaces a matrix $A$ by its determinant; that is
$$\det(A) = \text{determinant of } A$$
Show that $\det$ is not a linear operator.

In this case all we need to do is to find a single counterexample. For example, for an $n\times n$ matrix, the properties of the determinant yield
$$\det(\lambda A) = \lambda^n\det(A)$$
This is not linear unless $n = 1$ (in which case $A$ is really a single number and not a matrix). There are of course many other examples that one could come up with to show that $\det$ is not a linear operator.
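A concrete check of both statements for $2\times2$ matrices (my own sketch; the helper names are not from the text):

```python
def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def comb(alpha, A, beta, B):
    """The linear combination alpha*A + beta*B, entrywise."""
    return [[alpha*a + beta*b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, -1], [0, 2]]
alpha, beta = 3, -2

# Tr is linear ...
assert trace(comb(alpha, A, beta, B)) == alpha*trace(A) + beta*trace(B)

# ... but det is not: det(lam*A) = lam^2 det(A) for 2x2 matrices
lam = 3
assert det2(comb(lam, A, 0, B)) == lam**2 * det2(A)
assert det2(comb(lam, A, 0, B)) != lam * det2(A)
```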

# Physics 451

Fall 2004
Homework Assignment #2 Solutions

## Textbook problems: Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30
Chapter 3
3.3.1 Show that the product of two orthogonal matrices is orthogonal.

Suppose matrices $A$ and $B$ are orthogonal. This means that $A\tilde A = I$ and $B\tilde B = I$. We now denote the product of $A$ and $B$ by $C = AB$. To show that $C$ is orthogonal, we compute $C\tilde C$ and see what happens. Recalling that the transpose of a product is the reversed product of the transposes, we have
$$C\tilde C = (AB)\widetilde{(AB)} = AB\tilde B\tilde A = A\tilde A = I$$
The statement that this is a key step in showing that the orthogonal matrices form a group is because one of the requirements of being a group is that the product of any two elements (i.e. $A$ and $B$) in the group yields a result (i.e. $C$) that is also in the group. This is also known as closure. Along with closure, we also need to show associativity (okay for matrices), the existence of an identity element (also okay for matrices) and the existence of an inverse (okay for orthogonal matrices). Since all four conditions are satisfied, the set of $n\times n$ orthogonal matrices forms the orthogonal group, denoted $O(n)$. While general orthogonal matrices have determinants $\pm1$, the subgroup of matrices with determinant $+1$ forms the special orthogonal group $SO(n)$.
3.3.12 $A$ is $2\times2$ and orthogonal. Find the most general form of
$$A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}$$
Compare with two-dimensional rotation.

Since $A$ is orthogonal, it must satisfy the condition $A\tilde A = I$, or
$$\begin{pmatrix} a & b\\ c & d \end{pmatrix}\begin{pmatrix} a & c\\ b & d \end{pmatrix} = \begin{pmatrix} a^2 + b^2 & ac + bd\\ ac + bd & c^2 + d^2 \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}$$
This gives three conditions

i) $a^2 + b^2 = 1$,  ii) $c^2 + d^2 = 1$,  iii) $ac + bd = 0$

These are three equations for four unknowns, so there will be a free parameter left over. There are many ways to solve the equations. However, one nice way is to notice that $a^2 + b^2 = 1$ is the equation for a unit circle in the $ab$ plane. This means we can write $a$ and $b$ in terms of an angle
$$a = \cos\theta, \qquad b = \sin\theta$$
Similarly, $c^2 + d^2 = 1$ gives
$$c = \cos\phi, \qquad d = \sin\phi$$
Of course, we have one more equation to solve, $ac + bd = 0$, which becomes
$$\cos\theta\cos\phi + \sin\theta\sin\phi = \cos(\theta - \phi) = 0$$
This means that $\phi = \theta - \pi/2$ or $\phi = \theta - 3\pi/2$. We must consider both cases separately.

$\phi = \theta - \pi/2$: This gives
$$c = \cos(\theta - \pi/2) = \sin\theta, \qquad d = \sin(\theta - \pi/2) = -\cos\theta$$
or
$$A_1 = \begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix} \qquad (1)$$
This looks almost like a rotation, but not quite (since the minus sign is in the wrong place).

$\phi = \theta - 3\pi/2$: This gives
$$c = \cos(\theta - 3\pi/2) = -\sin\theta, \qquad d = \sin(\theta - 3\pi/2) = \cos\theta$$
or
$$A_2 = \begin{pmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{pmatrix} \qquad (2)$$
which is exactly a rotation.

Note that we can tell the difference between matrices of type (1) and (2) by computing the determinant. We see that $\det A_1 = -1$ while $\det A_2 = 1$. In fact, the $A_2$ type of matrices form the $SO(2)$ group, which is exactly the group of rotations in the plane. On the other hand, the $A_1$ type of matrices represent rotations followed by a mirror reflection $y \to -y$. This can be seen by writing
$$A_1 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{pmatrix}$$
Note that the set of $A_1$ matrices by themselves do not form a group (since they do not contain the identity, and since they do not close under multiplication). However the set of all orthogonal matrices $\{A_1, A_2\}$ forms the $O(2)$ group, which is the group of rotations and mirror reflections in two dimensions.
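A numerical spot check of the two families (my own sketch; the function names `A1`, `A2` are just labels matching equations (1) and (2)):

```python
import numpy as np

def A1(t):   # improper: rotation followed by the reflection y -> -y
    return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])

def A2(t):   # proper rotation
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

t = 0.7
for A, d in ((A1(t), -1.0), (A2(t), 1.0)):
    assert np.allclose(A @ A.T, np.eye(2))      # orthogonality
    assert np.isclose(np.linalg.det(A), d)      # det distinguishes the two types

# A1 is a reflection times a rotation, and two A1's compose to a rotation (det +1)
refl = np.diag([1.0, -1.0])
assert np.allclose(A1(t), refl @ A2(t))
assert np.isclose(np.linalg.det(A1(0.3) @ A1(t)), 1.0)
```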
3.3.13 Here $|\vec x\,\rangle$ and $|\vec y\,\rangle$ are column vectors. Under an orthogonal transformation $S$, $|\vec x^{\,\prime}\rangle = S|\vec x\,\rangle$, $|\vec y^{\,\prime}\rangle = S|\vec y\,\rangle$. Show that the scalar product $\langle\vec x\,|\vec y\,\rangle$ is invariant under this orthogonal transformation.

To prove the invariance of the scalar product, we compute
$$\langle\vec x^{\,\prime}|\vec y^{\,\prime}\rangle = \langle\vec x\,|\tilde S S|\vec y\,\rangle = \langle\vec x\,|\vec y\,\rangle$$
where we used $\tilde S S = I$ for an orthogonal matrix $S$. This demonstrates that the scalar product is invariant (the same in the primed and unprimed frames).
3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation.

We take the hint, and start by denoting the real non-symmetric matrix by $A$. Assuming that $A$ can be diagonalized by an orthogonal similarity transformation, that means there exists an orthogonal matrix $S$ such that
$$\Lambda = SA\tilde S \qquad \text{where } \Lambda \text{ is diagonal}$$
We can invert this relation by multiplying both sides on the left by $\tilde S$ and on the right by $S$. This yields
$$A = \tilde S\Lambda S$$
Taking the transpose of $A$, we find
$$\tilde A = \widetilde{(\tilde S\Lambda S)} = \tilde S\tilde\Lambda\tilde{\tilde S}$$
However, the transpose of a transpose is the original matrix, $\tilde{\tilde S} = S$, and the transpose of a diagonal matrix is the original matrix, $\tilde\Lambda = \Lambda$. Hence
$$\tilde A = \tilde S\Lambda S = A$$
Since the matrix $A$ is equal to its transpose, $A$ has to be a symmetric matrix. However, recall that $A$ is supposed to be non-symmetric. Hence we run into a contradiction. As a result, we must conclude that $A$ cannot be diagonalized by an orthogonal similarity transformation.

3.5.6 $A$ has eigenvalues $\lambda_i$ and corresponding eigenvectors $|\vec x_i\rangle$. Show that $A^{-1}$ has the same eigenvectors but with eigenvalues $\lambda_i^{-1}$.

If $A$ has eigenvalues $\lambda_i$ and eigenvectors $|\vec x_i\rangle$, that means
$$A|\vec x_i\rangle = \lambda_i|\vec x_i\rangle$$
Multiplying both sides by $A^{-1}$ on the left, we find
$$A^{-1}A|\vec x_i\rangle = \lambda_iA^{-1}|\vec x_i\rangle$$
or
$$|\vec x_i\rangle = \lambda_iA^{-1}|\vec x_i\rangle$$
Rewriting this as
$$A^{-1}|\vec x_i\rangle = \lambda_i^{-1}|\vec x_i\rangle$$
it is now obvious that $A^{-1}$ has the same eigenvectors, but eigenvalues $\lambda_i^{-1}$.
3.5.9 Two Hermitian matrices $A$ and $B$ have the same eigenvalues. Show that $A$ and $B$ are related by a unitary similarity transformation.

Since both $A$ and $B$ have the same eigenvalues, they can both be diagonalized according to
$$\Lambda = UAU^\dagger, \qquad \Lambda = VBV^\dagger$$
where $\Lambda$ is the same diagonal matrix of eigenvalues. This means
$$UAU^\dagger = VBV^\dagger \qquad \Rightarrow \qquad B = V^\dagger UAU^\dagger V$$
If we let $W = V^\dagger U$, its Hermitian conjugate is $W^\dagger = (V^\dagger U)^\dagger = U^\dagger V$. This means that
$$B = WAW^\dagger \qquad \text{where } W = V^\dagger U$$
and $WW^\dagger = V^\dagger UU^\dagger V = I$. Hence $A$ and $B$ are related by a unitary similarity transformation.
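The construction can be tested numerically by building $A$ and $B$ from a common eigenvalue matrix (a sketch; `random_unitary` is my own helper, using the QR decomposition to manufacture unitaries):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """Unitary matrix from the QR decomposition of a random complex matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)))
    return q

lam = np.diag([1.0, 2.0, 3.0])      # common eigenvalue matrix Lambda
U, V = random_unitary(3), random_unitary(3)
A = U.conj().T @ lam @ U            # so that Lambda = U A U^dagger
B = V.conj().T @ lam @ V            # so that Lambda = V B V^dagger

W = V.conj().T @ U
assert np.allclose(W @ W.conj().T, np.eye(3))   # W is unitary
assert np.allclose(B, W @ A @ W.conj().T)       # B = W A W^dagger
```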
3.5.30 a) Determine the eigenvalues and eigenvectors of
$$\begin{pmatrix} 1 & \epsilon\\ \epsilon & 1 \end{pmatrix}$$
Note that the eigenvalues are degenerate for $\epsilon = 0$ but the eigenvectors are orthogonal for all $\epsilon \ne 0$ and $\epsilon \to 0$.

The secular equation is
$$\begin{vmatrix} 1 - \lambda & \epsilon\\ \epsilon & 1 - \lambda \end{vmatrix} = (1 - \lambda)^2 - \epsilon^2 = 0$$
so that
$$(\lambda - 1)^2 = \epsilon^2 \qquad \Rightarrow \qquad \lambda - 1 = \pm\epsilon \qquad (3)$$
Hence the two eigenvalues are $\lambda_+ = 1 + \epsilon$ and $\lambda_- = 1 - \epsilon$.

For the eigenvectors, we start with $\lambda_+ = 1 + \epsilon$. Substituting this into the eigenvalue problem $(A - \lambda I)|x\rangle = 0$, we find
$$\begin{pmatrix} -\epsilon & \epsilon\\ \epsilon & -\epsilon \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = 0 \qquad \Rightarrow \qquad \epsilon(a - b) = 0 \qquad \Rightarrow \qquad a = b$$
Since the problem did not ask us to normalize the eigenvectors, we can take simply
$$\lambda_+ = 1 + \epsilon: \qquad |x_+\rangle = \begin{pmatrix} 1\\ 1 \end{pmatrix}$$
For $\lambda_- = 1 - \epsilon$, we obtain instead
$$\begin{pmatrix} \epsilon & \epsilon\\ \epsilon & \epsilon \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = 0 \qquad \Rightarrow \qquad \epsilon(a + b) = 0 \qquad \Rightarrow \qquad a = -b$$
This gives
$$\lambda_- = 1 - \epsilon: \qquad |x_-\rangle = \begin{pmatrix} 1\\ -1 \end{pmatrix}$$
Note that the eigenvectors $|x_+\rangle$ and $|x_-\rangle$ are orthogonal and independent of $\epsilon$. In a way, we are just lucky that they are independent of $\epsilon$ (they did not have to turn out that way). However, orthogonality is guaranteed so long as the eigenvalues are distinct (i.e. $\epsilon \ne 0$). This was something we proved in class.

b) Determine the eigenvalues and eigenvectors of
$$\begin{pmatrix} 1 & 1\\ \epsilon^2 & 1 \end{pmatrix}$$
Note that the eigenvalues are degenerate for $\epsilon = 0$ and for this (nonsymmetric) matrix the eigenvectors ($\epsilon = 0$) do not span the space.

In this nonsymmetric case, the secular equation is
$$\begin{vmatrix} 1 - \lambda & 1\\ \epsilon^2 & 1 - \lambda \end{vmatrix} = (1 - \lambda)^2 - \epsilon^2 = 0$$
Interestingly enough, this equation is the same as (3), even though the matrix is different. Hence this matrix has the same eigenvalues $\lambda_+ = 1 + \epsilon$ and $\lambda_- = 1 - \epsilon$.

For $\lambda_+ = 1 + \epsilon$, the eigenvector equation is
$$\begin{pmatrix} -\epsilon & 1\\ \epsilon^2 & -\epsilon \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = 0 \qquad \Rightarrow \qquad -\epsilon a + b = 0 \qquad \Rightarrow \qquad b = \epsilon a$$
so that
$$\lambda_+ = 1 + \epsilon: \qquad |x_+\rangle = \begin{pmatrix} 1\\ \epsilon \end{pmatrix} \qquad (4)$$
Similarly, for $\lambda_- = 1 - \epsilon$, the eigenvector equation is
$$\begin{pmatrix} \epsilon & 1\\ \epsilon^2 & \epsilon \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = 0 \qquad \Rightarrow \qquad \epsilon a + b = 0 \qquad \Rightarrow \qquad b = -\epsilon a$$
Hence, we obtain
$$\lambda_- = 1 - \epsilon: \qquad |x_-\rangle = \begin{pmatrix} 1\\ -\epsilon \end{pmatrix} \qquad (5)$$
In this nonsymmetric case, the eigenvectors do depend on $\epsilon$. Furthermore, when $\epsilon = 0$ it is easy to see that both eigenvectors degenerate into the same vector $\begin{pmatrix} 1\\ 0 \end{pmatrix}$.

c) Find the cosine of the angle between the two eigenvectors as a function of $\epsilon$ for $0 \le \epsilon \le 1$.

For the eigenvectors of part a), they are orthogonal, so the angle is $90^\circ$. Thus this part really refers to the eigenvectors of part b). Recalling that the angle can be defined through the inner product, we have
$$\langle x_+|x_-\rangle = |x_+|\,|x_-|\cos\theta$$
or
$$\cos\theta = \frac{\langle x_+|x_-\rangle}{\langle x_+|x_+\rangle^{1/2}\langle x_-|x_-\rangle^{1/2}}$$
Using the eigenvectors of (4) and (5), we find
$$\cos\theta = \frac{1 - \epsilon^2}{\sqrt{1 + \epsilon^2}\sqrt{1 + \epsilon^2}} = \frac{1 - \epsilon^2}{1 + \epsilon^2}$$
Recall that the Cauchy–Schwarz inequality guarantees that $\cos\theta$ lies between $-1$ and $+1$. When $\epsilon = 0$ we find $\cos\theta = 1$, so the eigenvectors are collinear (and degenerate), while for $\epsilon = 1$, we find instead $\cos\theta = 0$, so the eigenvectors are orthogonal.
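A quick numpy check of both matrices at $\epsilon = 0.5$ (my own sketch, not part of the original solution):

```python
import numpy as np

eps = 0.5
sym = np.array([[1.0, eps], [eps, 1.0]])
nonsym = np.array([[1.0, 1.0], [eps**2, 1.0]])

# Both matrices share the eigenvalues 1 +- eps
for M in (sym, nonsym):
    vals = np.linalg.eigvals(M).real
    assert np.allclose(np.sort(vals), [1 - eps, 1 + eps])

# Angle between the eigenvectors (1, eps) and (1, -eps) of the nonsymmetric matrix
vals, vecs = np.linalg.eig(nonsym)
vals = vals.real
vp, vm = vecs[:, np.argmax(vals)].real, vecs[:, np.argmin(vals)].real
cos_theta = abs(vp @ vm) / (np.linalg.norm(vp) * np.linalg.norm(vm))
assert np.isclose(cos_theta, (1 - eps**2) / (1 + eps**2))   # = 0.6 at eps = 0.5
```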

# Physics 451

Fall 2004
Homework Assignment #3 Solutions

## Textbook problems: Ch. 1: 1.7.1, 1.8.11, 1.8.16, 1.9.12, 1.10.4, 1.12.9; Ch. 2: 2.4.8, 2.4.11

Chapter 1
1.7.1 For a particle moving in a circular orbit $\vec r = \hat x\,r\cos\omega t + \hat y\,r\sin\omega t$

(a) evaluate $\vec r \times \dot{\vec r}$

Taking a time derivative of $\vec r$, we obtain
$$\dot{\vec r} = -\hat x\,r\omega\sin\omega t + \hat y\,r\omega\cos\omega t \qquad (1)$$
Hence
$$\begin{aligned}
\vec r \times \dot{\vec r} &= (\hat x\,r\cos\omega t + \hat y\,r\sin\omega t) \times (-\hat x\,r\omega\sin\omega t + \hat y\,r\omega\cos\omega t)\\
&= (\hat x \times \hat y)\,r^2\omega\cos^2\omega t - (\hat y \times \hat x)\,r^2\omega\sin^2\omega t\\
&= \hat z\,r^2\omega(\sin^2\omega t + \cos^2\omega t) = \hat z\,r^2\omega
\end{aligned}$$

(b) Show that $\ddot{\vec r} + \omega^2\vec r = 0$

The acceleration is the time derivative of (1)
$$\ddot{\vec r} = -\hat x\,r\omega^2\cos\omega t - \hat y\,r\omega^2\sin\omega t = -\omega^2(\hat x\,r\cos\omega t + \hat y\,r\sin\omega t) = -\omega^2\vec r$$
Hence $\ddot{\vec r} + \omega^2\vec r = 0$. This is of course the standard kinematics of uniform circular motion.
1.8.11 Verify the vector identity
$$\vec\nabla \times (\vec A \times \vec B\,) = (\vec B \cdot \vec\nabla)\vec A - (\vec A \cdot \vec\nabla)\vec B - \vec B(\vec\nabla \cdot \vec A\,) + \vec A(\vec\nabla \cdot \vec B\,)$$
This looks like a good time for the BAC–CAB rule. However, we have to be careful, since $\vec\nabla$ has both derivative and vector properties. As a derivative, it operates on both $\vec A$ and $\vec B$. Therefore, by the product rule of differentiation, we can write
$$\vec\nabla \times (\vec A \times \vec B\,) = \vec\nabla_A \times (\vec A \times \vec B\,) + \vec\nabla_B \times (\vec A \times \vec B\,)$$
where the subscripts (arrows in the original) indicate which vector the derivative is acting on. Now that we have specified exactly where the derivative goes, we can treat $\vec\nabla$ as a vector. Using the BAC–CAB rule (once for each term) gives
$$\vec\nabla \times (\vec A \times \vec B\,) = \vec A(\vec\nabla_A \cdot \vec B\,) - \vec B(\vec\nabla_A \cdot \vec A\,) + \vec A(\vec\nabla_B \cdot \vec B\,) - \vec B(\vec\nabla_B \cdot \vec A\,) \qquad (2)$$
The first and last terms on the right hand side are written backwards, with the derivative standing away from the vector it acts on. However, we can turn them around. For example,
$$\vec A(\vec\nabla_A \cdot \vec B\,) = \vec A(\vec B \cdot \vec\nabla_A) = (\vec B \cdot \vec\nabla)\vec A$$
With all the derivatives acting in the right place [after flipping the first and last terms in (2)], we find simply
$$\vec\nabla \times (\vec A \times \vec B\,) = (\vec B \cdot \vec\nabla)\vec A - \vec B(\vec\nabla \cdot \vec A\,) + \vec A(\vec\nabla \cdot \vec B\,) - (\vec A \cdot \vec\nabla)\vec B$$
which is what we set out to prove.

1.8.16 An electric dipole of moment $\vec p$ is located at the origin. The dipole creates an electric potential at $\vec r$ given by
$$\psi(\vec r\,) = \frac{\vec p \cdot \vec r}{4\pi\epsilon_0 r^3}$$
Find the electric field, $\vec E = -\vec\nabla\psi$, at $\vec r$.

We first use the quotient rule to write
$$\vec E = -\vec\nabla\psi = -\frac{1}{4\pi\epsilon_0}\vec\nabla\left(\frac{\vec p \cdot \vec r}{r^3}\right) = -\frac{1}{4\pi\epsilon_0}\frac{r^3\vec\nabla(\vec p \cdot \vec r\,) - (\vec p \cdot \vec r\,)\vec\nabla(r^3)}{r^6}$$
Applying the chain rule to the second term in the numerator, we obtain
$$\vec E = -\frac{1}{4\pi\epsilon_0}\frac{r^3\vec\nabla(\vec p \cdot \vec r\,) - 3r^2(\vec p \cdot \vec r\,)\vec\nabla r}{r^6}$$
We now evaluate the two separate gradients
$$\vec\nabla(\vec p \cdot \vec r\,) = \hat x_i\frac{\partial}{\partial x_i}(p_jx_j) = \hat x_ip_j\frac{\partial x_j}{\partial x_i} = \hat x_ip_j\delta_{ij} = \hat x_ip_i = \vec p$$
and
$$\vec\nabla r = \hat x_i\frac{\partial}{\partial x_i}\sqrt{x_1^2 + x_2^2 + x_3^2} = \hat x_i\frac{2x_i}{2\sqrt{x_1^2 + x_2^2 + x_3^2}} = \frac{\hat x_ix_i}{r} = \frac{\vec r}{r} = \hat r$$
Hence
$$\vec E = -\frac{1}{4\pi\epsilon_0}\frac{r^3\vec p - 3r^2(\vec p \cdot \vec r\,)\hat r}{r^6} = \frac{1}{4\pi\epsilon_0}\frac{3(\vec p \cdot \hat r\,)\hat r - \vec p}{r^3}$$
Note that we have used the fact that $\vec p$ is a constant, although this was never stated in the problem.
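The gradient computation can be verified symbolically (a sympy sketch of my own, with $4\pi\epsilon_0$ set to 1):

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
p_dot_r = px*x + py*y + pz*z

# Dipole potential (units with 4*pi*eps0 = 1) and the field E = -grad(psi)
psi = p_dot_r / r**3
E = [-sp.diff(psi, c) for c in (x, y, z)]

# Expected dipole field: (3 (p.rhat) rhat - p) / r^3, component by component
expected = [(3*p_dot_r*c/r**2 - pc) / r**3
            for c, pc in ((x, px), (y, py), (z, pz))]

assert all(sp.simplify(e - f) == 0 for e, f in zip(E, expected))
```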
1.9.12 Show that any solution of the equation
$$\vec\nabla \times (\vec\nabla \times \vec A\,) - k^2\vec A = 0$$
automatically satisfies the vector Helmholtz equation
$$\nabla^2\vec A + k^2\vec A = 0$$
and the solenoidal condition
$$\vec\nabla \cdot \vec A = 0$$
We actually follow the hint and demonstrate the solenoidal condition first. Taking the divergence of the first equation, we find
$$\vec\nabla \cdot (\vec\nabla \times (\vec\nabla \times \vec A\,)) - k^2\vec\nabla \cdot \vec A = 0$$
However, the divergence of a curl vanishes identically. Hence the first term is automatically equal to zero, and we are left with $k^2\vec\nabla \cdot \vec A = 0$, or (upon dividing by the constant $k^2$) $\vec\nabla \cdot \vec A = 0$.

We now return to the first equation and simplify the double curl using the BAC–CAB rule (taking into account the fact that all derivatives must act on $\vec A$)
$$\vec\nabla \times (\vec\nabla \times \vec A\,) = \vec\nabla(\vec\nabla \cdot \vec A\,) - \nabla^2\vec A \qquad (3)$$
As a result, the first equation becomes
$$\vec\nabla(\vec\nabla \cdot \vec A\,) - \nabla^2\vec A - k^2\vec A = 0$$
However, we have shown above that $\vec\nabla \cdot \vec A = 0$ for this problem. Thus (3) reduces to
$$\nabla^2\vec A + k^2\vec A = 0$$
which is what we wanted to show.

1.10.4 Evaluate $\oint \vec r \cdot d\vec r$

We have evaluated this integral in class. For a line integral from point 1 to point 2, we have
$$\int_1^2\vec r \cdot d\vec r = \tfrac12\int_1^2 d(r^2) = \tfrac12 r^2\Big|_1^2 = \tfrac12 r_2^2 - \tfrac12 r_1^2$$
However, for a closed path, point 1 and point 2 are the same. Thus the integral along a closed loop vanishes, $\oint \vec r \cdot d\vec r = 0$. Note that this vanishing of the line integral around a closed loop is the sign of a conservative force.

Alternatively, we can apply Stokes' theorem
$$\oint \vec r \cdot d\vec r = \int_S(\vec\nabla \times \vec r\,) \cdot d\vec\sigma$$
It is easy to see that $\vec r$ is curl-free. Hence the surface integral on the right hand side vanishes.
1.12.9 Prove that
$$\oint u\vec\nabla v \cdot d\vec\lambda = -\oint v\vec\nabla u \cdot d\vec\lambda$$
Applying Stokes' theorem to the combination $u\vec\nabla v + v\vec\nabla u$, we write
$$\oint(u\vec\nabla v + v\vec\nabla u) \cdot d\vec\lambda = \int_S\vec\nabla \times (u\vec\nabla v + v\vec\nabla u) \cdot d\vec\sigma \qquad (4)$$
We now expand the curl using
$$\vec\nabla \times (u\vec\nabla v) = (\vec\nabla u) \times (\vec\nabla v) + u\,\vec\nabla \times \vec\nabla v = (\vec\nabla u) \times (\vec\nabla v)$$
where we have also used the fact that the curl of a gradient vanishes. Returning to (4), this indicates that
$$\oint(u\vec\nabla v + v\vec\nabla u) \cdot d\vec\lambda = \int_S[(\vec\nabla u) \times (\vec\nabla v) + (\vec\nabla v) \times (\vec\nabla u)] \cdot d\vec\sigma = 0$$
where the vanishing of the right hand side is guaranteed by the antisymmetry of the cross product, $\vec A \times \vec B = -\vec B \times \vec A$.

Chapter 2

2.4.8 Find the circular cylindrical components of the velocity and acceleration of a moving particle.

We first explore the time derivatives of the cylindrical coordinate basis vectors. Since
$$\hat\rho = (\cos\varphi, \sin\varphi, 0), \qquad \hat\varphi = (-\sin\varphi, \cos\varphi, 0), \qquad \hat z = (0, 0, 1)$$
their derivatives are
$$\frac{\partial\hat\rho}{\partial\varphi} = (-\sin\varphi, \cos\varphi, 0) = \hat\varphi, \qquad \frac{\partial\hat\varphi}{\partial\varphi} = (-\cos\varphi, -\sin\varphi, 0) = -\hat\rho$$
so that, by the chain rule,
$$\dot{\hat\rho} = \dot\varphi\,\hat\varphi, \qquad \dot{\hat\varphi} = -\dot\varphi\,\hat\rho \qquad (5)$$
Now, we note that the position vector is given by
$$\vec r = \rho\hat\rho + z\hat z$$
So all we have to do to find the velocity is to take a time derivative
$$\vec v = \dot{\vec r} = \dot\rho\,\hat\rho + \rho\,\dot{\hat\rho} + \dot z\,\hat z = \dot\rho\,\hat\rho + \rho\dot\varphi\,\hat\varphi + \dot z\,\hat z$$
Note that we have used the expression for $\dot{\hat\rho}$ in (5). Taking one more time derivative yields the acceleration
$$\begin{aligned}
\vec a = \dot{\vec v} &= \ddot\rho\,\hat\rho + \dot\rho\,\dot{\hat\rho} + (\dot\rho\dot\varphi + \rho\ddot\varphi)\hat\varphi + \rho\dot\varphi\,\dot{\hat\varphi} + \ddot z\,\hat z\\
&= \ddot\rho\,\hat\rho + \dot\rho\dot\varphi\,\hat\varphi + (\dot\rho\dot\varphi + \rho\ddot\varphi)\hat\varphi - \rho\dot\varphi^2\,\hat\rho + \ddot z\,\hat z\\
&= (\ddot\rho - \rho\dot\varphi^2)\hat\rho + (\rho\ddot\varphi + 2\dot\rho\dot\varphi)\hat\varphi + \ddot z\,\hat z
\end{aligned}$$
2.4.11 For the flow of an incompressible viscous fluid the Navier–Stokes equations lead to
$$\vec\nabla \times (\vec v \times (\vec\nabla \times \vec v\,)) = \frac{\eta}{\rho_0}\nabla^2(\vec\nabla \times \vec v\,)$$
Here $\eta$ is the viscosity and $\rho_0$ the density of the fluid. For axial flow in a cylindrical pipe we take the velocity $\vec v$ to be
$$\vec v = \hat z\,v(\rho)$$
Show that
$$\vec\nabla \times (\vec v \times (\vec\nabla \times \vec v\,)) = 0$$
for this choice of $\vec v$. Show that
$$\nabla^2(\vec\nabla \times \vec v\,) = 0$$
leads to the differential equation
$$\frac{1}{\rho}\frac{d}{d\rho}\left(\rho\frac{d^2v}{d\rho^2}\right) - \frac{1}{\rho^2}\frac{dv}{d\rho} = 0$$
and that this is satisfied by
$$v = v_0 + a_2\rho^2$$
This problem is an exercise in applying the vector differential operators in cylindrical coordinates. Let us first compute $\vec V = \vec\nabla \times \vec v$:
$$\vec V = \vec\nabla \times \vec v = \frac{1}{\rho}\begin{vmatrix} \hat\rho & \rho\hat\varphi & \hat z\\ \partial_\rho & \partial_\varphi & \partial_z\\ 0 & 0 & v(\rho) \end{vmatrix} = -\hat\varphi\frac{dv}{d\rho} \qquad \Rightarrow \qquad V_\varphi = -\frac{dv}{d\rho}$$
Note that, since $v(\rho)$ is a function of a single variable, partial derivatives of $v$ are the same as ordinary derivatives. Next we need to compute the vector Laplacian $\nabla^2(\vec\nabla \times \vec v\,) = \nabla^2\vec V$. Using (2.35) in the textbook, and the fact that only the $V_\varphi$ component is non-vanishing, we find
$$\begin{aligned}
(\nabla^2\vec V\,)_\rho &= 0\\
(\nabla^2\vec V\,)_\varphi &= \nabla^2(V_\varphi) - \frac{1}{\rho^2}V_\varphi = -\nabla^2\left(\frac{dv}{d\rho}\right) + \frac{1}{\rho^2}\frac{dv}{d\rho}\\
(\nabla^2\vec V\,)_z &= 0
\end{aligned}$$
This indicates that only the $\hat\varphi$ component of the vector Laplacian gives a non-trivial equation. Finally, we evaluate the scalar Laplacian $\nabla^2(dv/d\rho)$ to obtain
$$(\nabla^2\vec V\,)_\varphi = -\frac{1}{\rho}\frac{d}{d\rho}\left(\rho\frac{d^2v}{d\rho^2}\right) + \frac{1}{\rho^2}\frac{dv}{d\rho} \qquad (6)$$
Setting this equal to zero gives the equation that we were asked to prove.

To prove that $v = v_0 + a_2\rho^2$ satisfies the (third order!) differential equation, all we have to do is substitute it in. However, it is more fun to go ahead and solve the equation. First we notice that $v$ only enters through its derivative $f = dv/d\rho$. Substituting this into (6), we find
$$\frac{1}{\rho}\frac{d}{d\rho}\left(\rho\frac{df}{d\rho}\right) - \frac{1}{\rho^2}f = 0$$
Expanding the derivatives in the first term yields
$$\frac{d^2f}{d\rho^2} + \frac{1}{\rho}\frac{df}{d\rho} - \frac{1}{\rho^2}f = 0$$
Since this is a homogeneous equation, we may substitute in $f = \rho^\alpha$ to obtain the algebraic equation
$$\alpha(\alpha - 1) + \alpha - 1 = 0 \qquad \Rightarrow \qquad \alpha = \pm1$$
This indicates that the general solution for $f(\rho)$ is of the form
$$f = 2a\rho + b\rho^{-1}$$
where the factor of 2 is chosen for later convenience. Integrating $f$ once to obtain $v$, we find
$$v = \int f\,d\rho = v_0 + a\rho^2 + b\log\rho$$
which agrees with the given solution, except for the log term. However, now we can appeal to physical boundary conditions for fluid flow in the cylindrical pipe. The point $\rho = 0$ corresponds to the central axis of the pipe. At this point, the fluid velocity should not be infinite. Hence we must throw away the log, or in other words we must set $b = 0$, so that $v = v_0 + a\rho^2$.

Incidentally, the fluid flow boundary condition should be that the velocity vanishes at the wall of the pipe. If we let $R$ be the radius of the pipe, this means that we can write the solution as
$$v(\rho) = v_{\rm max}\left(1 - \frac{\rho^2}{R^2}\right)$$
where the maximum velocity $v_{\rm max}$ is for the fluid along the central axis (with the velocity going to zero quadratically as a function of the radius).
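The claimed solutions of the third-order equation are easy to confirm symbolically (a sympy sketch; `pipe_ode` is my own name for the left-hand side of the equation):

```python
import sympy as sp

rho, v0, a2, b, vmax, R = sp.symbols('rho v_0 a_2 b v_max R', positive=True)

def pipe_ode(v):
    """LHS of (1/rho) d/drho (rho v'') - (1/rho^2) v'."""
    return sp.diff(rho*sp.diff(v, rho, 2), rho)/rho - sp.diff(v, rho)/rho**2

# The quadratic profile solves the equation ...
assert sp.simplify(pipe_ode(v0 + a2*rho**2)) == 0
# ... and so does the log term that is discarded on physical grounds
assert sp.simplify(pipe_ode(v0 + a2*rho**2 + b*sp.log(rho))) == 0
# The parabolic profile with no-slip walls also satisfies it
assert sp.simplify(pipe_ode(vmax*(1 - rho**2/R**2))) == 0
```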

# Physics 451

Fall 2004
Homework Assignment #4 Solutions

## Textbook problems: Ch. 2: 2.5.11, 2.6.5, 2.9.6, 2.9.12, 2.10.6, 2.10.11, 2.10.12

Chapter 2
2.5.11 A particle $m$ moves in response to a central force according to Newton's second law
$$m\ddot{\vec r} = \hat r\,f(\vec r\,)$$
Show that $\vec r \times \dot{\vec r} = \vec c$, a constant, and that the geometric interpretation of this leads to Kepler's second law.

Actually, $\vec r \times \dot{\vec r}$ is basically the angular momentum, $\vec L = \vec r \times \vec p = m\vec r \times \dot{\vec r}$. To show that $\vec L$ is constant, we can take its time derivative
$$\dot{\vec L} = \frac{d}{dt}(m\vec r \times \dot{\vec r}) = m\dot{\vec r} \times \dot{\vec r} + m\vec r \times \ddot{\vec r}$$
The first cross product vanishes. So, by using Newton's second law, we end up with
$$\dot{\vec L} = \vec r \times \hat r\,f(\vec r\,) = (\vec r \times \vec r\,)\frac{f(\vec r\,)}{r} = 0$$
This indicates that the angular momentum $\vec L$ is constant in time (i.e. that it is conserved). The constant vector $\vec c$ of the problem is just $\vec L/m$. Note that this proof works for any central force, not just the inverse square force law.

For the geometric interpretation, consider the orbit of the particle $m$.

*(figure: the position $\vec r$ and displacement $d\vec r$ sweeping out a thin triangle along the orbit)*

The amount of area swept out by the particle is given by the area of the triangle
$$dA = \tfrac12|\vec r \times d\vec r\,|$$
So the area swept out in a given time $dt$ is simply
$$\frac{dA}{dt} = \tfrac12\left|\vec r \times \frac{d\vec r}{dt}\right| = \tfrac12|\vec r \times \dot{\vec r}\,|$$
Since this is a constant, we find that equal areas are swept out in equal times. This is just Kepler's second law (which is also the law of conservation of angular momentum).

2.6.5 The four-dimensional, fourth-rank Riemann–Christoffel curvature tensor of general relativity, $R_{iklm}$, satisfies the symmetry relations
$$R_{iklm} = -R_{ikml} = -R_{kilm}$$
With the indices running from 0 to 3, show that the number of independent components is reduced from 256 to 36 and that the condition
$$R_{iklm} = R_{lmik}$$
further reduces the number of independent components to 21. Finally, if the components satisfy an identity $R_{iklm} + R_{ilmk} + R_{imkl} = 0$, show that the number of independent components is reduced to 20.

Here we just have to do some counting. For a general rank-4 tensor in four dimensions, since each index can take any of four possible values, the number of independent components is simply
$$\text{independent components} = 4^4 = 256$$
Taking into account the first symmetry relation, the first part
$$R_{iklm} = -R_{ikml}$$
indicates that the Riemann tensor is antisymmetric when the last pair of indices is switched. Thinking of the last pair of indices as specifying a $4\times4$ antisymmetric matrix, this means instead of having $4^2 = 16$ independent elements, we actually only have $\frac12(4)(3) = 6$ independent choices for the last index pair (this is the number of elements in an antisymmetric $4\times4$ matrix). Similarly, the second part of the first symmetry relation
$$R_{iklm} = -R_{kilm}$$
indicates that the Riemann tensor is antisymmetric in the first pair of indices. As a result, the same argument gives only 6 independent choices for the first index pair. This accounts for
$$\text{independent components} = 6 \times 6 = 36$$
We are now able to handle the second condition
$$R_{iklm} = R_{lmik}$$
By now, it should be obvious that this statement indicates that the Riemann tensor is symmetric when the first index pair is interchanged with the second index pair. The counting of independent components is then the same as that for a $6\times6$ symmetric matrix. This gives
$$\text{independent components} = \tfrac12(6)(7) = 21$$
Finally, the last identity is perhaps the trickiest to deal with. As indicated in the note, this only gives additional information when all four indices are different. Setting $iklm$ to be 0123, this gives
$$R_{0123} + R_{0231} + R_{0312} = 0 \qquad (1)$$
As a result, this can be used to remove one more component, leading to
$$\text{independent components} = 21 - 1 = 20$$
We can, of course, worry that a different combination of $iklm$ (say 1302 or something like that) will give further relations that can be used to remove additional components. However, this is not the case, as can be seen by applying the first two relations.

Note that it is an interesting exercise to count the number of independent components of the Riemann tensor in $d$ dimensions. The result is
$$\text{independent components for } d \text{ dimensions} = \tfrac{1}{12}d^2(d^2 - 1)$$
Putting in $d = 4$ yields the expected 20. However, it is fun to note that putting in $d = 1$ gives 0 (you cannot have curvature in only one dimension) and putting in $d = 2$ gives 1 (there is exactly one independent measure of curvature in two dimensions).
2.9.6 a) Show that the inertia tensor (matrix) of Section 3.5 may be written
$$I_{ij} = m(r^2\delta_{ij} - x_ix_j) \qquad \text{[typo corrected!]}$$
for a particle of mass $m$ at $(x_1, x_2, x_3)$.

Note that, for a single particle, the inertia tensor of Section 3.5 is specified as
$$I_{xx} = m(r^2 - x^2), \qquad I_{xy} = -mxy, \qquad \text{etc.}$$
or, in other words,
$$I_{ij} = \begin{cases} m(r^2 - x_i^2) & i = j\\ -mx_ix_j & i \ne j \end{cases}$$
We can enforce the condition $i = j$ by using the Kronecker delta, $\delta_{ij}$. Similarly, the condition $i \ne j$ can be enforced by the opposite expression $1 - \delta_{ij}$. This means we can write
$$I_{ij} = m(r^2 - x_i^2)\delta_{ij} - mx_ix_j(1 - \delta_{ij}) \qquad \text{(no sum)}$$
Distributing the factors out, and noting that it is safe to set $x_ix_j\delta_{ij} = x_i^2\delta_{ij}$, we end up with
$$I_{ij} = mr^2\delta_{ij} - mx_i^2\delta_{ij} - mx_ix_j + mx_i^2\delta_{ij} = m(r^2\delta_{ij} - x_ix_j)$$
Note that there is a typo in the book's version of the homework exercise!

b) Show that
$$I_{ij} = M_{il}M_{jl} \qquad \text{where} \quad M_{il} = m^{1/2}\epsilon_{ilk}x_k$$
This is the contraction of two second-rank tensors and is identical with the matrix product $M\tilde M$ of Section 3.2. (Note that, since $M$ is antisymmetric, $\tilde M = -M$; the contraction must run over the second index of each factor, $M_{il}M_{jl}$, to get the overall sign right.)

We may calculate
$$M_{il}M_{jl} = m\epsilon_{ilk}x_k\epsilon_{jlm}x_m = m\epsilon_{lki}\epsilon_{lmj}x_kx_m$$
Note that the product of two epsilons can be re-expressed as
$$\epsilon_{lki}\epsilon_{lmj} = \delta_{km}\delta_{ij} - \delta_{kj}\delta_{im}$$
This is actually the BAC–CAB rule in index notation. Hence
$$M_{il}M_{jl} = m(\delta_{km}\delta_{ij} - \delta_{kj}\delta_{im})x_kx_m = m(x_kx_k\delta_{ij} - x_jx_i) = m(r^2\delta_{ij} - x_ix_j) = I_{ij}$$
Note that we have used the fact that $x_kx_k = x_1^2 + x_2^2 + x_3^2 = r^2$.
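A numerical check of both parts; since $M$ is antisymmetric, the product must be taken as $M\tilde M = -M^2$, which is exactly what the code verifies (a sketch of my own, not from the text):

```python
import numpy as np

def levi_civita():
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
    return eps

m = 2.0
x = np.array([1.0, -2.0, 3.0])
r2 = x @ x

I = m * (r2*np.eye(3) - np.outer(x, x))              # I_ij = m (r^2 delta_ij - x_i x_j)
M = np.sqrt(m) * np.einsum('ilk,k->il', levi_civita(), x)

assert np.allclose(M, -M.T)        # M is antisymmetric
assert np.allclose(I, M @ M.T)     # I = M M~ (= -M^2), contraction on second indices
assert not np.allclose(I, M @ M)   # note the sign: I != M M
```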
2.9.12 Given $A_k = \frac12\epsilon_{ijk}B_{ij}$ with $B_{ij} = -B_{ji}$, antisymmetric, show that
$$B_{mn} = \epsilon_{mnk}A_k$$
Given $A_k = \frac12\epsilon_{ijk}B_{ij}$, we compute
$$\epsilon_{mnk}A_k = \tfrac12\epsilon_{mnk}\epsilon_{ijk}B_{ij} = \tfrac12\epsilon_{kmn}\epsilon_{kij}B_{ij} = \tfrac12(\delta_{mi}\delta_{nj} - \delta_{mj}\delta_{ni})B_{ij} = \tfrac12(B_{mn} - B_{nm}) = B_{mn} \qquad (2)$$
We have used the antisymmetric nature of $B_{ij}$ in the last step.
2.10.6 Derive the covariant and contravariant metric tensors for circular cylindrical coordinates.

There are several ways to derive the metric. For example, we may use the relation between Cartesian and cylindrical coordinates
$$x = \rho\cos\varphi, \qquad y = \rho\sin\varphi, \qquad z = z \qquad (3)$$
to compute the differentials
$$dx = d\rho\cos\varphi - \rho\sin\varphi\,d\varphi, \qquad dy = d\rho\sin\varphi + \rho\cos\varphi\,d\varphi, \qquad dz = dz$$
The line element is then
$$\begin{aligned}
ds^2 = dx^2 + dy^2 + dz^2 &= (d\rho\cos\varphi - \rho\sin\varphi\,d\varphi)^2 + (d\rho\sin\varphi + \rho\cos\varphi\,d\varphi)^2 + dz^2\\
&= d\rho^2 + \rho^2\,d\varphi^2 + dz^2
\end{aligned}$$
Since $ds^2 = g_{ij}\,dx^i\,dx^j$ [where $(x^1, x^2, x^3) = (\rho, \varphi, z)$] we may write the covariant metric tensor (matrix) as
$$g_{ij} = \begin{pmatrix} 1 & 0 & 0\\ 0 & \rho^2 & 0\\ 0 & 0 & 1 \end{pmatrix} \qquad (4)$$
Alternatively, the metric is given by $g_{ij} = \vec e_i \cdot \vec e_j$ where the basis vectors are
$$\vec e_i = \frac{\partial\vec r}{\partial x^i}$$
Taking partial derivatives of (3), we obtain
$$\begin{aligned}
\vec e_\rho &= \hat x\cos\varphi + \hat y\sin\varphi\\
\vec e_\varphi &= \rho(-\hat x\sin\varphi + \hat y\cos\varphi)\\
\vec e_z &= \hat z
\end{aligned}$$
Then
$$\begin{aligned}
g_{\rho\rho} &= \vec e_\rho \cdot \vec e_\rho = (\hat x\cos\varphi + \hat y\sin\varphi) \cdot (\hat x\cos\varphi + \hat y\sin\varphi) = \cos^2\varphi + \sin^2\varphi = 1\\
g_{\rho\varphi} &= \vec e_\rho \cdot \vec e_\varphi = (\hat x\cos\varphi + \hat y\sin\varphi) \cdot \rho(-\hat x\sin\varphi + \hat y\cos\varphi)\\
&= \rho(-\cos\varphi\sin\varphi + \sin\varphi\cos\varphi) = 0\\
&\;\;\text{etc.}\;\ldots
\end{aligned}$$
The result is the same as (4).

The contravariant components of the metric are given by the matrix inverse of (4):
$$g^{ij} = \begin{pmatrix} 1 & 0 & 0\\ 0 & \rho^{-2} & 0\\ 0 & 0 & 1 \end{pmatrix} \qquad (5)$$
2.10.11 From the circular cylindrical metric tensor $g_{ij}$ calculate the $\Gamma^k{}_{ij}$ for circular cylindrical coordinates.

We may compute the Christoffel components using the expression
$$\Gamma_{ijk} = \tfrac12(\partial_kg_{ij} + \partial_jg_{ik} - \partial_ig_{jk})$$
However, instead of working out all the components one at a time, it is more efficient to examine the metric (4) and to note that the only non-vanishing derivative is
$$\partial_\rho g_{\varphi\varphi} = 2\rho$$
This indicates that the only non-vanishing Christoffel symbols $\Gamma_{ijk}$ are the ones where the three indices $ijk$ are some permutation of $\rho\varphi\varphi$. It is then easy to see that
$$\Gamma_{\rho\varphi\varphi} = -\rho, \qquad \Gamma_{\varphi\rho\varphi} = \Gamma_{\varphi\varphi\rho} = \rho$$
Finally, raising the first index using the inverse metric (5) yields
$$\Gamma^\rho{}_{\varphi\varphi} = -\rho, \qquad \Gamma^\varphi{}_{\rho\varphi} = \Gamma^\varphi{}_{\varphi\rho} = \frac{1}{\rho} \qquad (6)$$
Note that the Christoffel symbols are symmetric in the last two indices.

2.10.12 Using the $\Gamma^k{}_{ij}$ from Exercise 2.10.11, write out the covariant derivatives $V^i{}_{;j}$ of a vector $\vec V$ in circular cylindrical coordinates.

Recall that the covariant derivative of a contravariant vector is given by
$$V^i{}_{;j} = V^i{}_{,j} + \Gamma^i{}_{jk}V^k$$
where the semi-colon indicates covariant differentiation and the comma indicates ordinary partial differentiation. To work out the covariant derivatives, we just have to use (6) for the non-vanishing Christoffel connections. The result is
$$\begin{aligned}
V^\rho{}_{;\rho} &= V^\rho{}_{,\rho} & V^\rho{}_{;\varphi} &= V^\rho{}_{,\varphi} + \Gamma^\rho{}_{\varphi\varphi}V^\varphi = V^\rho{}_{,\varphi} - \rho V^\varphi & V^\rho{}_{;z} &= V^\rho{}_{,z}\\
V^\varphi{}_{;\rho} &= V^\varphi{}_{,\rho} + \Gamma^\varphi{}_{\rho\varphi}V^\varphi = V^\varphi{}_{,\rho} + \frac{1}{\rho}V^\varphi & V^\varphi{}_{;\varphi} &= V^\varphi{}_{,\varphi} + \Gamma^\varphi{}_{\varphi\rho}V^\rho = V^\varphi{}_{,\varphi} + \frac{1}{\rho}V^\rho & V^\varphi{}_{;z} &= V^\varphi{}_{,z}\\
V^z{}_{;\rho} &= V^z{}_{,\rho} & V^z{}_{;\varphi} &= V^z{}_{,\varphi} & V^z{}_{;z} &= V^z{}_{,z}
\end{aligned}$$
Note that, corresponding to the three non-vanishing Christoffel symbols, only three of the expressions are modified in the covariant derivative.

# Physics 451

Fall 2004
Homework Assignment #5 Solutions

## Textbook problems: Ch. 5: 5.1.1, 5.1.2

Chapter 5
5.1.1 Show that

  Σ_{n=1}^∞ 1/((2n - 1)(2n + 1)) = 1/2

We take the hint and use mathematical induction. First, we assume that

  s_m = m/(2m + 1)   (1)

In this case, the next partial sum becomes

  s_{m+1} = s_m + a_{m+1} = m/(2m + 1) + 1/((2(m+1) - 1)(2(m+1) + 1))
          = m/(2m + 1) + 1/((2m + 1)(2m + 3))
          = (m(2m + 3) + 1)/((2m + 1)(2m + 3))
          = (2m^2 + 3m + 1)/((2m + 1)(2m + 3))
          = ((m + 1)(2m + 1))/((2m + 1)(2m + 3))
          = (m + 1)/(2(m + 1) + 1)

which is of the correct form (1). Finally, by explicit computation, we see that s_1 = 1/(1 · 3) = 1/3 = 1/(2 · 1 + 1), so that (1) is correct for s_1. Therefore, by induction, we conclude that the m-th partial sum is exactly s_m = m/(2m + 1). It is now simple to take the limit to obtain

  S = lim_{m→∞} s_m = lim_{m→∞} m/(2m + 1) = 1/2

Note that we could also have evaluated this sum by partial fraction expansion

  Σ_{n=1}^∞ 1/((2n - 1)(2n + 1)) = Σ_{n=1}^∞ [1/(2(2n - 1)) - 1/(2(2n + 1))]

Since this is a telescoping series, we have

  s_m = 1/(2(2·1 - 1)) - 1/(2(2m + 1)) = m/(2m + 1)

which agrees with (1).
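The induction result (1) is simple to spot-check by machine. A short illustration (not part of the original solution) using exact rational arithmetic:

```python
from fractions import Fraction

# Partial sums of 1/((2n-1)(2n+1)) computed exactly; they should match
# s_m = m/(2m+1) from (1) and drift toward the limit 1/2.

def partial_sum(m):
    return sum(Fraction(1, (2 * n - 1) * (2 * n + 1)) for n in range(1, m + 1))

for m in (1, 2, 10, 100):
    assert partial_sum(m) == Fraction(m, 2 * m + 1)

print(float(partial_sum(1000)))   # 0.4997..., approaching 1/2
```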

5.1.2 Show that

  Σ_{n=1}^∞ 1/(n(n + 1)) = 1

This problem may be solved in a similar manner. While there is no hint given for the partial sum, we may try a few terms

  s_1 = 1/2,   s_2 = s_1 + 1/(2·3) = 2/3,   s_3 = s_2 + 1/(3·4) = 3/4

which suggests

  s_m = m/(m + 1)   (2)

We now prove this statement by induction. Starting from s_m, we compute

  s_{m+1} = s_m + a_{m+1} = m/(m + 1) + 1/((m + 1)(m + 2))
          = (m(m + 2) + 1)/((m + 1)(m + 2))
          = (m + 1)^2/((m + 1)(m + 2))
          = (m + 1)/(m + 2) = (m + 1)/((m + 1) + 1)

Therefore if (2) holds for m, it also holds for m + 1. Finally, since (2) is correct for s_1 = 1/2, it must be true for all m by induction. Taking the limit yields

  S = lim_{m→∞} s_m = lim_{m→∞} m/(m + 1) = 1

Alternatively, we may use a partial fraction expansion

  Σ_{n=1}^∞ 1/(n(n + 1)) = Σ_{n=1}^∞ [1/n - 1/(n + 1)]

Hence, since this is a telescoping series,

  s_m = 1/1 - 1/(m + 1) = m/(m + 1)

which reproduces (2).
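The telescoping structure can be checked in the same spirit (again an illustration, not part of the original solution):

```python
from fractions import Fraction

# The telescoping form 1/(n(n+1)) = 1/n - 1/(n+1) collapses the partial sum
# to 1 - 1/(m+1) = m/(m+1); check it exactly for the first few m.

def s(m):
    return sum(Fraction(1, n * (n + 1)) for n in range(1, m + 1))

assert all(s(m) == 1 - Fraction(1, m + 1) for m in range(1, 50))
print(float(s(10000)))   # approaching the limit 1
```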

Additional Problems

1. The metric for a three-dimensional hyperbolic (non-Euclidean) space can be written as

  ds^2 = L^2 (dx^2 + dy^2 + dz^2)/z^2

where L is a constant with dimensions of length. Calculate the non-vanishing Christoffel coefficients for this metric.

We first note that the metric is given in matrix form as

  g_ij = diag(L^2/z^2, L^2/z^2, L^2/z^2)

so the non-zero components are

  g_xx = L^2/z^2,   g_yy = L^2/z^2,   g_zz = L^2/z^2   (3)

The covariant components of the Christoffel connection are obtained from the metric by

  Γ_ijk = (1/2)(g_ij,k + g_ik,j - g_jk,i)

where the comma denotes partial differentiation. According to (3), the only non-zero metric components have repeated indices. In addition, only the z-derivative is non-vanishing. Hence we conclude that the only non-vanishing Christoffel symbols must have two repeated indices combined with a z index. Recalling that Γ_ijk is symmetric in the last two indices, we compute

  Γ_zxx = -(1/2) g_xx,z = L^2/z^3,   Γ_xzx = Γ_xxz = (1/2) g_xx,z = -L^2/z^3
  Γ_zyy = -(1/2) g_yy,z = L^2/z^3,   Γ_yzy = Γ_yyz = (1/2) g_yy,z = -L^2/z^3
  Γ_zzz = (1/2) g_zz,z = -L^2/z^3

Raising the first index using the inverse metric g^ij = (z^2/L^2) δ^ij finally yields

  Γ^z_xx = 1/z,   Γ^x_zx = Γ^x_xz = -1/z
  Γ^z_yy = 1/z,   Γ^y_zy = Γ^y_yz = -1/z
  Γ^z_zz = -1/z   (4)
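The raised components in (4) can be verified numerically. This finite-difference sketch (an illustration, not from the text; L and the sample point z = 2 are arbitrary choices) checks the three independent values:

```python
# Finite-difference check of the raised Christoffel symbols (4) for the
# conformally flat metric g_ij = (L^2/z^2) delta_ij.

def gamma_components(z, L=3.0, h=1e-6):
    g = lambda zz: (L / zz) ** 2              # common diagonal entry L^2/z^2
    dgdz = (g(z + h) - g(z - h)) / (2 * h)    # d/dz of that entry (negative)
    ginv = 1.0 / g(z)                         # inverse metric entry z^2/L^2
    gamma_z_xx = -0.5 * ginv * dgdz           # expected  1/z  (i=z, j=k=x)
    gamma_x_xz = 0.5 * ginv * dgdz            # expected -1/z  (i=j=x, k=z)
    gamma_z_zz = 0.5 * ginv * dgdz            # expected -1/z  (i=j=k=z)
    return gamma_z_xx, gamma_x_xz, gamma_z_zz

gz_xx, gx_xz, gz_zz = gamma_components(2.0)
print(gz_xx, gx_xz, gz_zz)   # about 0.5, -0.5, -0.5 at z = 2
```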

2. The motion of a free particle moving along a path x^i(t) in hyperbolic space is governed by the geodesic equation

  x''^i(t) + Γ^i_jk x'^j(t) x'^k(t) = 0

Taking (x^1, x^2, x^3) to be (x, y, z), and using the Christoffel coefficients calculated above, show that the geodesic equation is given explicitly by

  x'' - (2/z) x'z' = 0
  y'' - (2/z) y'z' = 0
  z'' + (1/z)(x'^2 + y'^2 - z'^2) = 0

Using the Christoffel coefficients in (4), we compute the three components of the geodesic equation

  x'' + Γ^x_xz x'z' + Γ^x_zx z'x' = 0   ⇒   x'' - (2/z) x'z' = 0   (5)
  y'' + Γ^y_yz y'z' + Γ^y_zy z'y' = 0   ⇒   y'' - (2/z) y'z' = 0   (6)
  z'' + Γ^z_xx x'x' + Γ^z_yy y'y' + Γ^z_zz z'z' = 0   ⇒   z'' + (1/z)(x'^2 + y'^2 - z'^2) = 0   (7)

The geodesic equation is important because it describes the motion of free particles in curved space. However, for this problem, all that is necessary is to show that it gives the system of coupled ordinary differential equations (5), (6), (7).

3. Show that a solution to the geodesic equation of Problem 2 is given by

  x = x_0 + R cos φ tanh(v_0 t)
  y = y_0 + R sin φ tanh(v_0 t)
  z = R sech(v_0 t)

where x_0, y_0, R, φ and v_0 are constants. Show that the path of the particle lies on a sphere of radius R centered at (x_0, y_0, 0) in the Cartesian coordinate space given by (x, y, z). Note that this demonstrates the non-Euclidean nature of hyperbolic space; in reality the sphere is flat, while the space is curved.

It would be a straightforward exercise to insert the x, y and z equations into (5), (6) and (7) to show that they give a solution. However it is actually more interesting to solve the equations directly. We start with the x equation, (5). If we are somewhat clever, we can rewrite (5) as

  x''/x' = 2 z'/z   ⇒   (d/dt) log x' = 2 (d/dt) log z

Both sides of this may be integrated in time to get

  x' = a_x z^2   (8)

where a_x is a constant. It should be clear that the y equation, (6), can be worked on in a similar manner to get

  y' = a_y z^2   (9)

Of course, we have not yet completely solved for x and y. But we are a step closer to the solution. Now, inserting (8) and (9) into the z equation, (7), we obtain

  z'' + (a_x^2 + a_y^2) z^3 - z'^2/z = 0

This non-linear differential equation can be simplified by performing the substitution z(t) = 1/u(t). Noting that

  z' = -u'/u^2,   z'' = -u''/u^2 + 2u'^2/u^3

the z equation may be rewritten as

  u''u - u'^2 = a_x^2 + a_y^2

While this equation is still non-linear, it is possible to obtain a general solution

  u(t) = (1/v_0) sqrt(a_x^2 + a_y^2) cosh(v_0(t - t_0))

where v_0 and t_0 are constants. Given the solution for z = 1/u, we now insert this back into (8) to obtain

  x' = a_x/u^2 = (v_0^2 a_x/(a_x^2 + a_y^2)) sech^2(v_0(t - t_0))

which may be integrated to yield

  x(t) = x_0 + (v_0 a_x/(a_x^2 + a_y^2)) tanh(v_0(t - t_0))

Similarly, for y, we integrate (9) to find

  y(t) = y_0 + (v_0 a_y/(a_x^2 + a_y^2)) tanh(v_0(t - t_0))

Note that the three (coupled) second order differential equations give rise to six constants of integration, (x_0, y_0, a_x, a_y, v_0, t_0). The expressions may be simplified by defining

  a_x = (v_0/R) cos φ,   a_y = (v_0/R) sin φ

in which case we see that

  x = x_0 + R cos φ tanh(v_0(t - t_0))
  y = y_0 + R sin φ tanh(v_0(t - t_0))
  z = R sech(v_0(t - t_0))

which is the answer we wanted to show, except that here we have retained an extra constant t_0 related to the time translation invariance of the system.

Finally, to show that the path of the particle lies on a sphere, all we need to demonstrate is that

  (x - x_0)^2 + (y - y_0)^2 + z^2
    = R^2 cos^2 φ tanh^2(v_0 t) + R^2 sin^2 φ tanh^2(v_0 t) + R^2 sech^2(v_0 t)
    = R^2 (tanh^2(v_0 t) + sech^2(v_0 t)) = R^2

This is indeed the equation for a sphere, (x - x_0)^2 + (y - y_0)^2 + z^2 = R^2, of radius R centered at (x_0, y_0, 0).
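The first route mentioned above, direct substitution of the closed-form solution into (5)-(7), is easy to carry out numerically. A sketch (with arbitrarily chosen constants; not part of the original solution):

```python
import math

# Substitute x, y, z from Problem 3 into the geodesic equations (5)-(7)
# using numerical derivatives, and also verify the sphere constraint.

x0, y0, R, v0, phi = 1.0, -2.0, 1.5, 0.7, 0.4

X = lambda t: x0 + R * math.cos(phi) * math.tanh(v0 * t)
Y = lambda t: y0 + R * math.sin(phi) * math.tanh(v0 * t)
Z = lambda t: R / math.cosh(v0 * t)

def d(f, t, h=1e-5):
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-5):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

t = 0.3
z = Z(t)
res_x = d2(X, t) - (2 / z) * d(X, t) * d(Z, t)
res_y = d2(Y, t) - (2 / z) * d(Y, t) * d(Z, t)
res_z = d2(Z, t) + (d(X, t) ** 2 + d(Y, t) ** 2 - d(Z, t) ** 2) / z
sphere = (X(t) - x0) ** 2 + (Y(t) - y0) ** 2 + z ** 2
print(res_x, res_y, res_z, sphere - R ** 2)   # all near zero
```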

Physics 451

Fall 2004
Homework Assignment #6 Solutions

## Textbook problems: Ch. 5: 5.2.6, 5.2.8, 5.2.9, 5.2.19, 5.3.1

Chapter 5
5.2.6 Test for convergence

a) Σ_{n=2}^∞ (ln n)^-1

As in all these convergence tests, it is good to first have a general idea of whether we expect the series to converge or not, and then find an appropriate test to confirm our hunch. For this one, we can imagine that ln n grows very slowly, so that its inverse goes to zero very slowly (too slowly, in fact, for the series to converge). To prove this, we can perform a simple comparison test. Since ln n < n for n ≥ 2, we see that

  a_n = (ln n)^-1 > n^-1

Since the harmonic series diverges, and each term is larger than the corresponding harmonic series term, this series must diverge.

Note that in this and all subsequent tests, there may be more than one way to prove convergence/divergence. Your solution may be different than that given here. But any method is okay, so long as the calculations are valid.

b) Σ_{n=1}^∞ n!/10^n

In this case, when n gets large (which is the only limit we care about), the factorial in the numerator will start to dominate over the power in the denominator. So we expect this to diverge. As a proof, we can perform a simple ratio test. With

  a_n = n!/10^n   ⇒   a_n/a_{n+1} = 10/(n + 1)

taking the limit, we obtain

  lim_{n→∞} a_n/a_{n+1} = 0

Hence the series diverges by the ratio test.

c) Σ_{n=1}^∞ 1/(2n(2n + 1))

We first note that this series behaves like 1/(4n^2) for large n. As a result, we expect it to converge. To see this, we may consider a simple comparison test

  a_n = 1/(2n(2n + 1)) < 1/(2n · 2n) = (1/4)(1/n^2)

Since the series ζ(2) = Σ_{n=1}^∞ 1/n^2 converges, this series converges as well.

d) Σ_{n=1}^∞ [n(n + 1)]^{-1/2}

This series behaves as 1/n for large n. Thus we expect it to diverge. While the square root may be a bit awkward to manipulate, we can actually perform a simple comparison test with the harmonic series

  a_n = 1/sqrt(n(n + 1)) > 1/sqrt((n + 1)(n + 1)) = 1/(n + 1)

Because the harmonic series diverges (and we do not care that the comparison starts with the second term of the harmonic series, and not the first) this series also diverges.

e) Σ_{n=0}^∞ 1/(2n + 1)

Since this behaves as 1/(2n) for large n, the series ought to diverge. We may either compare it with the harmonic series or perform an integral test. Consider the integral test

  ∫_0^∞ dx/(2x + 1) = (1/2) ln(2x + 1) |_0^∞ = ∞

Thus the series diverges.
5.2.8 For what values of p and q will the following series converge?

  Σ_{n=2}^∞ 1/(n^p (ln n)^q)

Since the ln n term is not as dominant as the power term n^p, we may have some idea that the series ought to converge or diverge as the 1/n^p series does. To make this more precise, we can use Raabe's test. With

  a_n = 1/(n^p (ln n)^q)

we have

  a_n/a_{n+1} = (n + 1)^p (ln(n + 1))^q / (n^p (ln n)^q)
    = (1 + 1/n)^p (1 + ln(1 + 1/n)/ln n)^q
    = (1 + 1/n)^p (1 + 1/(n ln n) + ···)^q
    = (1 + p/n + ···)(1 + q/(n ln n) + ···)
    = 1 + p/n + q/(n ln n) + ···

Note that we have Taylor (or binomial) expanded the expressions several times. Raabe's test then yields

  lim_{n→∞} n (a_n/a_{n+1} - 1) = lim_{n→∞} (p + q/ln n) = p

This gives convergence for p > 1 and divergence for p < 1.

For p = 1, Raabe's test is ambiguous. However, in this case we can perform an integral test. Since

  p = 1   ⇒   a_n = 1/(n(ln n)^q)

we evaluate

  ∫_2^∞ dx/(x(ln x)^q) = ∫_{ln 2}^∞ du/u^q

where we have used the substitution u = ln x. This converges for q > 1 and diverges otherwise. Hence the final result is

  p > 1, any q:   converge
  p = 1, q > 1:   converge
  p = 1, q ≤ 1:   diverge
  p < 1, any q:   diverge
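The Raabe limit computed above emerges cleanly in a numeric experiment (an illustration, not part of the original solution; the values of p, q and n are arbitrary):

```python
import math

# For a_n = 1/(n^p (ln n)^q), the combination n(a_n/a_{n+1} - 1) behaves as
# p + q/ln n, which tends (slowly) to p as n grows.

def raabe(p, q, n):
    a = lambda m: 1.0 / (m ** p * math.log(m) ** q)
    return n * (a(n) / a(n + 1) - 1.0)

p, q, n = 2.0, 3.0, 10 ** 6
print(raabe(p, q, n), p + q / math.log(n))   # the two agree closely
```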

5.2.9 Determine the range of convergence for Gauss's hypergeometric series

  F(α, β, γ; x) = 1 + (αβ/(1!γ)) x + (α(α + 1)β(β + 1)/(2!γ(γ + 1))) x^2 + ···

We first consider non-negative values of x (so that this is a positive series). More or less, this is a power series in x. So as long as α, β, γ are well behaved, this series ought to converge for x < 1 (just like an ordinary geometric series). To see this (and to prepare for Gauss's test), we compute the ratio of successive terms. With

  a_n = (α(α + 1)···(α + n - 1) β(β + 1)···(β + n - 1))/(n! γ(γ + 1)···(γ + n - 1)) x^n

we find

  a_n/a_{n+1} = ((n + 1)(γ + n)/((α + n)(β + n))) x^-1

This allows us to begin with the ratio test

  lim_{n→∞} a_n/a_{n+1} = lim_{n→∞} ((n + 1)(γ + n)/((α + n)(β + n))) x^-1 = x^-1

Hence the series converges for x < 1 and diverges for x > 1. However, the ratio test is indeterminate for x = 1. This is where we must appeal to Gauss's test. Setting x = 1, we have

  a_n/a_{n+1} = (n + 1)(γ + n)/((α + n)(β + n))

Since this approaches 1 as n → ∞, we may highlight the leading behavior by adding and subtracting 1

  a_n/a_{n+1} = 1 + [(n + 1)(γ + n)/((α + n)(β + n)) - 1]
    = 1 + ((γ - α - β + 1)n + γ - αβ)/((α + n)(β + n))

We can now see that the fraction approaches (γ - α - β + 1)/n as n gets large. This is the h/n behavior that we need to extract for Gauss's test: a_n/a_{n+1} = 1 + h/n + B(n)/n^2. In principle, we may add and subtract h/n where h = γ - α - β + 1 in order to obtain an explicit expression for the remainder term B(n)/n^2. However, it should be clear based on a power series expansion that this remainder will indeed behave as 1/n^2, which is the requirement for applying Gauss's test. Thus, with h = γ - α - β + 1, we see that the hypergeometric series F(α, β, γ; 1) converges for γ > α + β (h > 1) and diverges otherwise.

To summarize, we have proven that for non-negative x, the hypergeometric series converges for x < 1 (any α, β, γ) and for x = 1 if γ > α + β, and diverges otherwise. In fact, for negative values of x, we may consider the series for |x|. In this case, we have absolute convergence for |x| < 1, and for |x| = 1 if γ > α + β. Based on the ratio test, it is not hard to see that the series also diverges for |x| > 1 (for negative x, each subsequent term eventually gets larger than the previous one). However, there is also conditional convergence for α + β - 1 < γ ≤ α + β (this is harder to show).

5.2.19 Show that the following series is convergent.

  Σ_{s=0}^∞ (2s - 1)!!/((2s)!!(2s + 1))

It is somewhat hard to see what happens when s gets large. However, we can perform Raabe's test. With

  a_s = (2s - 1)!!/((2s)!!(2s + 1))

we compute

  a_s/a_{s+1} = [(2s - 1)!!/((2s)!!(2s + 1))] × [(2s + 2)!!(2s + 3)/(2s + 1)!!]
    = ((2s - 1)!!/(2s + 1)!!) × ((2s + 2)!!/(2s)!!) × ((2s + 3)/(2s + 1))
    = (2s + 2)(2s + 3)/((2s + 1)(2s + 1))

so that

  a_s/a_{s+1} = 1 + [(2s + 2)(2s + 3)/(2s + 1)^2 - 1] = 1 + (6s + 5)/(2s + 1)^2

Then

  lim_{s→∞} s (a_s/a_{s+1} - 1) = lim_{s→∞} s (6s + 5)/(2s + 1)^2 = 3/2

Since 3/2 > 1, the series converges by Raabe's test.

5.3.1 a) The series

  Σ_{s=0}^∞ (-1)^s (4s + 3) (2s - 1)!!/(2s + 2)!!

Test it for convergence.

Since this is an alternating series, we may check whether its terms are monotonic decreasing. Taking the ratio, we see that

  |a_s|/|a_{s+1}| = ((4s + 3)(2s - 1)!!(2s + 4)!!)/((4s + 7)(2s + 1)!!(2s + 2)!!)
    = (4s + 3)(2s + 4)/((4s + 7)(2s + 1))
    = (8s^2 + 22s + 12)/(8s^2 + 18s + 7)
    = 1 + (4s + 5)/(8s^2 + 18s + 7) > 1

As a result

  |a_s| > |a_{s+1}|

and hence the series converges based on the Leibniz criterion. (Actually, to be careful, we must also show that lim_{s→∞} a_s = 0. However, I have ignored this subtlety.)

b) The corresponding series for the surface charge density is

  Σ_{s=0}^∞ (-1)^s (4s + 3) (2s - 1)!!/(2s)!!

Test it for convergence.

This series is rather similar to that of part a). However the denominator is missing a factor of (2s + 2). This makes the series larger (term by term) than the above. To see whether the terms get too large, we may take the ratio

  |a_s|/|a_{s+1}| = ((4s + 3)(2s - 1)!!(2s + 2)!!)/((4s + 7)(2s + 1)!!(2s)!!)
    = (4s + 3)(2s + 2)/((4s + 7)(2s + 1))
    = (8s^2 + 14s + 6)/(8s^2 + 18s + 7)
    = 1 - (4s + 1)/(8s^2 + 18s + 7) < 1

In this case

  |a_s| < |a_{s+1}|

and the series diverges since the terms get larger as s → ∞.

Physics 451

Fall 2004
Homework Assignment #7 Solutions

## Textbook problems: Ch. 5: 5.4.1, 5.4.2, 5.4.3, 5.5.2, 5.5.4

Chapter 5
5.4.1 Given the series

  ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ···,   -1 < x ≤ 1

show that

  ln((1 + x)/(1 - x)) = 2(x + x^3/3 + x^5/5 + ···),   -1 < x < 1

We use the property ln(a/b) = ln a - ln b to write

  ln((1 + x)/(1 - x)) = ln(1 + x) - ln(1 - x)
    = Σ_{n=1}^∞ (-1)^{n+1} x^n/n - Σ_{n=1}^∞ (-1)^{n+1} (-x)^n/n
    = Σ_{n=1}^∞ ((-1)^{n+1} + 1) x^n/n = 2 Σ_{n odd} x^n/n

Note that, since we use the ln(1 + x) series for both +x and -x, the common range of convergence is the intersection of -1 < x ≤ 1 and -1 ≤ x < 1, namely |x| < 1.
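The odd-power series is easy to check at a sample point inside the region of convergence (a quick numeric illustration, not part of the original solution):

```python
import math

# Partial sum of 2(x + x^3/3 + x^5/5 + ...) compared with ln((1+x)/(1-x)).

def log_ratio_series(x, nmax=400):
    return 2 * sum(x ** n / n for n in range(1, nmax, 2))

x = 0.5
print(log_ratio_series(x), math.log((1 + x) / (1 - x)))   # both ln 3 = 1.0986...
```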
5.4.2 Determine the values of the coefficients a_1, a_2, and a_3 that will make (1 + a_1 x + a_2 x^2 + a_3 x^3) ln(1 + x) converge as n^-4. Find the resulting series.

Using the expansion for ln(1 + x), we write

  (1 + a_1 x + a_2 x^2 + a_3 x^3) ln(1 + x)
    = Σ_{n=1}^∞ (-1)^{n+1} (x^n/n + a_1 x^{n+1}/n + a_2 x^{n+2}/n + a_3 x^{n+3}/n)   (1)

We want to collect identical powers of x on the right hand side. To do this, we must shift the summation index according to n → n - 1, n → n - 2 and n → n - 3 for the second, third and last terms on the right hand side, respectively. After doing so, we may combine terms with powers x^4 and higher. The first few terms (x, x^2 and x^3) may be treated as exceptions. The result is

  (1 + a_1 x + a_2 x^2 + a_3 x^3) ln(1 + x)
    = (x - (1/2)x^2 + (1/3)x^3) + a_1(x^2 - (1/2)x^3) + a_2 x^3
      + Σ_{n=4}^∞ (-1)^{n+1} (1/n - a_1/(n - 1) + a_2/(n - 2) - a_3/(n - 3)) x^n
    = x + (a_1 - 1/2)x^2 + (a_2 - (1/2)a_1 + 1/3)x^3
      + Σ_{n=4}^∞ (-1)^{n+1} (1/n - a_1/(n - 1) + a_2/(n - 2) - a_3/(n - 3)) x^n

Combining the terms over a common denominator yields

  1/n - a_1/(n - 1) + a_2/(n - 2) - a_3/(n - 3)
    = [(n - 1)(n - 2)(n - 3) - a_1 n(n - 2)(n - 3) + a_2 n(n - 1)(n - 3) - a_3 n(n - 1)(n - 2)]
      / [n(n - 1)(n - 2)(n - 3)]
    = [(1 - a_1 + a_2 - a_3)n^3 + (-6 + 5a_1 - 4a_2 + 3a_3)n^2 + (11 - 6a_1 + 3a_2 - 2a_3)n - 6]
      / [n(n - 1)(n - 2)(n - 3)]

To make this converge as n^-4, we need to cancel the coefficients of n^3, n^2 and n in the numerator. Solving

  1 - a_1 + a_2 - a_3 = 0,   -6 + 5a_1 - 4a_2 + 3a_3 = 0,   11 - 6a_1 + 3a_2 - 2a_3 = 0

gives

  a_1 = 3,   a_2 = 3,   a_3 = 1

Finally, inserting this back into (1), we obtain

  (1 + 3x + 3x^2 + x^3) ln(1 + x)
    = x + (5/2)x^2 + (11/6)x^3 + 6 Σ_{n=4}^∞ (-1)^n x^n/(n(n - 1)(n - 2)(n - 3))

or

  ln(1 + x) = [x + (5/2)x^2 + (11/6)x^3 + 6 Σ_{n=4}^∞ (-1)^n x^n/(n(n - 1)(n - 2)(n - 3))]/(1 + x)^3
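The closed form just obtained, whose tail terms indeed fall off as n^-4, can be confirmed numerically (an illustration, not part of the original solution):

```python
import math

# Compare (1+x)^3 ln(1+x) with
# x + (5/2)x^2 + (11/6)x^3 + 6 sum_{n>=4} (-1)^n x^n / (n(n-1)(n-2)(n-3)).

def series(x, nmax=400):
    tail = 6 * sum((-1) ** n * x ** n / (n * (n - 1) * (n - 2) * (n - 3))
                   for n in range(4, nmax))
    return x + 2.5 * x ** 2 + (11.0 / 6.0) * x ** 3 + tail

x = 0.3
print(series(x), (1 + x) ** 3 * math.log(1 + x))   # the two agree
```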

5.4.3 Show that

a) Σ_{n=2}^∞ [ζ(n) - 1] = 1

Using the sum formula for the Riemann zeta function, ζ(n) = Σ_{p=1}^∞ 1/p^n, we have

  Σ_{n=2}^∞ [ζ(n) - 1] = Σ_{n=2}^∞ [Σ_{p=1}^∞ 1/p^n - 1]
    = Σ_{n=2}^∞ Σ_{p=2}^∞ 1/p^n = Σ_{p=2}^∞ Σ_{n=2}^∞ 1/p^n

where in the last step we have rearranged the order of summation. In doing so, we have now changed this to a geometric series, with sum

  Σ_{n=2}^∞ p^-n = p^-2/(1 - p^-1) = 1/(p(p - 1))

In this case

  Σ_{n=2}^∞ [ζ(n) - 1] = Σ_{p=2}^∞ 1/(p(p - 1)) = Σ_{p=2}^∞ (1/(p - 1) - 1/p) = 1

since this is a telescoping series.

b) Σ_{n=2}^∞ (-1)^n [ζ(n) - 1] = 1/2

The solution to this is similar to that of part a). The addition of (-1)^n yields

  Σ_{n=2}^∞ (-1)^n [ζ(n) - 1] = Σ_{n=2}^∞ (-1)^n Σ_{p=2}^∞ 1/p^n
    = Σ_{n=2}^∞ Σ_{p=2}^∞ 1/(-p)^n = Σ_{p=2}^∞ Σ_{n=2}^∞ 1/(-p)^n

This time the geometric series sums to

  Σ_{n=2}^∞ (-p)^-n = (-p)^-2/(1 - (-p)^-1) = 1/(p(p + 1))

In this case

  Σ_{n=2}^∞ (-1)^n [ζ(n) - 1] = Σ_{p=2}^∞ 1/(p(p + 1)) = Σ_{p=2}^∞ (1/p - 1/(p + 1)) = 1/2

since this is again a telescoping series, with first term 1/2.
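Both zeta sums can be checked by brute force, evaluating ζ(n) - 1 directly as a truncated sum over p (an approximation, hence the loose agreement; not part of the original solution):

```python
# Numeric check of sum_{n>=2} [zeta(n)-1] = 1 and its alternating version = 1/2.

def zeta_minus_one(n, pmax=20000):
    return sum(p ** (-n) for p in range(2, pmax))

plain = sum(zeta_minus_one(n) for n in range(2, 60))
alternating = sum((-1) ** n * zeta_minus_one(n) for n in range(2, 60))
print(plain, alternating)   # close to 1 and 0.5
```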

5.5.2 For what range of x is the geometric series Σ_{n=0}^∞ x^n uniformly convergent?

We use the Weierstrass M test. We first note that the geometric series Σ_{n=0}^∞ x^n is absolutely convergent for |x| < 1. This means that the series Σ_{n=0}^∞ s^n is convergent for 0 ≤ s < 1. While this is all very obvious, the introduction of this convergent series in s allows us to bound the x series by an x-independent convergent one. This is precisely the setup of the Weierstrass M test. We simply choose M_n = s^n. Then, so long as |x|^n ≤ M_n (i.e. |x| ≤ s), the geometric series is uniformly convergent. Therefore we have shown that Σ_{n=0}^∞ x^n is uniformly convergent provided |x| ≤ s < 1.
5.5.4 If the series of the coefficients Σ a_n and Σ b_n are absolutely convergent, show that the Fourier series

  Σ (a_n cos nx + b_n sin nx)

is uniformly convergent for -∞ < x < ∞.

This is also a case for the Weierstrass M test. Note that, if we let f_n(x) = a_n cos nx + b_n sin nx denote the n-th element of the series, then

  |f_n(x)| = |a_n cos nx + b_n sin nx| ≤ |a_n cos nx| + |b_n sin nx| ≤ |a_n| + |b_n|

for the entire domain x ∈ (-∞, ∞). Since the problem states that Σ a_n and Σ b_n are absolutely convergent, we now simply take M_n = |a_n| + |b_n|. Clearly, Σ M_n converges, and since |f_n(x)| ≤ M_n, we conclude that Σ f_n(x) is uniformly convergent for x ∈ (-∞, ∞).

Physics 451

Fall 2004
Homework Assignment #8 Solutions

Textbook problems: Ch. 5: 5.6.2, 5.6.19, 5.7.4, 5.7.15, 5.9.11, 5.10.1, 5.10.7
Chapter 5
5.6.2 Derive a series expansion of cot x by dividing cos x by sin x.

Since cos x = 1 - (1/2)x^2 + (1/24)x^4 - ··· and sin x = x - (1/6)x^3 + (1/120)x^5 - ···, we divide to obtain

  cot x = (1 - (1/2)x^2 + (1/24)x^4 - ···)/(x - (1/6)x^3 + (1/120)x^5 - ···)
        = (1 - (1/2)x^2 + (1/24)x^4 - ···)/(x(1 - (1/6)x^2 + (1/120)x^4 - ···))

We now run into an issue of dividing one series by another. However, instead of division, we may change this into a multiplication problem by using (1 - r)^-1 = 1 + r + r^2 + r^3 + ··· to rewrite the denominator

  (1 - (1/6)x^2 + (1/120)x^4 - ···)^-1
    = 1 + ((1/6)x^2 - (1/120)x^4 + ···) + ((1/6)x^2 - (1/120)x^4 + ···)^2 + ···
    = 1 + (1/6)x^2 + (-1/120 + 1/36)x^4 + ···
    = 1 + (1/6)x^2 + (7/360)x^4 + ···

where we have only kept terms up to O(x^4). Returning to cot x, we now find

  cot x = x^-1 (1 - (1/2)x^2 + (1/24)x^4 - ···)(1 + (1/6)x^2 + (7/360)x^4 + ···)
        = x^-1 (1 + (-1/2 + 1/6)x^2 + (1/24 - 1/12 + 7/360)x^4 + ···)
        = x^-1 (1 - (1/3)x^2 - (1/45)x^4 - ···)

In principle, we could work this out to higher orders by keeping more powers of x in the series expansions.

Note that there is a nice expression for cot x in terms of the Bernoulli numbers. This may be obtained by noting that the generating function definition of B_n is

  x/(e^x - 1) = Σ_{n=0}^∞ B_n x^n/n! = -(1/2)x + Σ_{p=0}^∞ B_2p x^2p/(2p)!

where we have used the fact that all odd Bernoulli numbers vanish except for B_1 = -1/2. Moving the -(1/2)x to the left hand side, and using the identity

  x/(e^x - 1) + (1/2)x = (x/2)(e^x + 1)/(e^x - 1) = (x/2) coth(x/2)

we obtain

  (x/2) coth(x/2) = Σ_{p=0}^∞ B_2p x^2p/(2p)!

or, by substituting x → 2x and dividing through by x

  coth x = Σ_{p=0}^∞ (2B_2p/(2p)!) (2x)^{2p-1}

Finally, to change coth into cot, we may work in the complex domain and note that coth iz = -i cot z. Therefore we make the substitution x → ix to yield

  -i cot x = Σ_{p=0}^∞ (2B_2p/(2p)!) (2ix)^{2p-1}
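The leading terms of the expansion can be confirmed at a small sample point (a quick numeric illustration, not part of the original solution):

```python
import math

# Compare cot x against the leading Laurent terms 1/x - x/3 - x^3/45.

x = 0.1
approx = 1 / x - x / 3 - x ** 3 / 45
exact = math.cos(x) / math.sin(x)
print(exact, approx)   # differ only at order x^5
```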
Multiplying by i then gives the quoted result,

  cot x = Σ_{p=0}^∞ ((-1)^p 2^2p B_2p/(2p)!) x^{2p-1}

5.6.19 a) The average energy of a quantized oscillator is

  ⟨ε⟩ = Σ_{n=1}^∞ nε_0 exp(-nε_0/kT) / Σ_{n=0}^∞ exp(-nε_0/kT)

where ε_0 is a fixed energy. Identify the numerator and denominator as binomial expansions and show that the ratio is

  ⟨ε⟩ = ε_0/(exp(ε_0/kT) - 1)

To simplify the expressions, we begin with the substitution r = exp(-ε_0/kT). This yields ⟨ε⟩ = N/D where the numerator and denominator are

  N = Σ_{n=1}^∞ nε_0 r^n,   D = Σ_{n=0}^∞ r^n

We now see that the denominator is a simple geometric series. Hence D = 1/(1 - r). For the numerator, we note that nr^n = r (d/dr)(r^n). Hence we may write

  N = ε_0 r (d/dr) Σ_{n=1}^∞ r^n = ε_0 r (d/dr) (r/(1 - r)) = ε_0 r/(1 - r)^2

Dividing the numerator by the denominator finally yields

  ⟨ε⟩ = ε_0 r/(1 - r) = ε_0/(r^-1 - 1) = ε_0/(exp(ε_0/kT) - 1)

b) Show that the ⟨ε⟩ of part (a) reduces to kT, the classical result, for kT ≫ ε_0.

In this limit, ε_0/kT ≪ 1, and we may expand the exponential in the denominator

  exp(ε_0/kT) ≈ 1 + ε_0/kT + ···

As a result

  ⟨ε⟩ ≈ ε_0/(ε_0/kT + ···) ≈ kT
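Both the closed form and the classical limit are easy to check numerically (an illustration with arbitrary units, not part of the original solution):

```python
import math

# Compare <eps> = eps0/(exp(eps0/kT) - 1) with a direct (truncated)
# evaluation of the defining sums, then look at the limit kT >> eps0.

def avg_energy_sums(eps0, kT, nmax=5000):
    num = sum(n * eps0 * math.exp(-n * eps0 / kT) for n in range(1, nmax))
    den = sum(math.exp(-n * eps0 / kT) for n in range(0, nmax))
    return num / den

def avg_energy_closed(eps0, kT):
    return eps0 / (math.exp(eps0 / kT) - 1)

print(avg_energy_sums(1.0, 2.0), avg_energy_closed(1.0, 2.0))
print(avg_energy_closed(1.0, 100.0))   # close to kT = 100, the classical value
```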

5.7.4 Given the integrals

  ∫_0^{2π} cos^{2n} θ dθ = 2π (2n)!/(2^{2n}(n!)^2),   ∫_0^{2π} cos^{2n+1} θ dθ = 0

evaluate ∫_0^{2π} cos(c cos θ) dθ as a series. Setting x = c cos θ, we expand

  cos x = Σ_{n=0}^∞ ((-1)^n/(2n)!) x^{2n}

so that

  ∫_0^{2π} cos(c cos θ) dθ = ∫_0^{2π} Σ_{n=0}^∞ ((-1)^n/(2n)!) c^{2n} cos^{2n} θ dθ
    = Σ_{n=0}^∞ ((-1)^n c^{2n}/(2n)!) ∫_0^{2π} cos^{2n} θ dθ
    = Σ_{n=0}^∞ ((-1)^n c^{2n}/(2n)!) (2π(2n)!/(2^{2n}(n!)^2))
    = 2π Σ_{n=0}^∞ ((-1)^n/(n!)^2) (c/2)^{2n}

5.7.15 The Klein-Nishina formula for the scattering of photons by electrons contains a term of the form

  f(ε) = ((1 + ε)/ε^2) [2(1 + ε)/(1 + 2ε) - ln(1 + 2ε)/ε]

Here ε = hν/mc^2, the ratio of the photon energy to the electron rest mass energy. Find

  lim_{ε→0} f(ε)

This problem is an exercise in taking Taylor series. Note that, if we simply set ε = 0 in f(ε), the prefactor (1 + ε)/ε^2 would diverge as ε^-2. Hence this provides a hint that we should keep at least two powers of ε in any series expansion we perform. Keeping this in mind, we first work on the fraction

  2(1 + ε)/(1 + 2ε) = 2(1 + ε)(1 - 2ε + 4ε^2 - ···) = 2(1 - ε + 2ε^2 + ···)   (1)

Next we turn to the log

  ln(1 + 2ε)/ε = (1/ε)(2ε - (1/2)(2ε)^2 + (1/3)(2ε)^3 - ···) = 2 - 2ε + (8/3)ε^2 - ···   (2)

Subtracting (2) from (1), and combining with the prefactor (1 + ε)/ε^2, we find

  f(ε) = ((1 + ε)/ε^2)[2(1 - ε + 2ε^2 + ···) - (2 - 2ε + (8/3)ε^2 - ···)]
       = ((1 + ε)/ε^2)[4ε^2 - (8/3)ε^2 + ···]
       = (4/3)(1 + ε)(1 + O(ε))

Therefore

  lim_{ε→0} f(ε) = 4/3
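The limit can be watched emerging numerically (a quick illustration, not part of the original solution):

```python
import math

# Evaluate f(eps) for shrinking eps; the values should approach 4/3.

def f(eps):
    return ((1 + eps) / eps ** 2) * (2 * (1 + eps) / (1 + 2 * eps)
                                     - math.log(1 + 2 * eps) / eps)

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, f(eps))
print(4 / 3)
```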

5.9.11 The integral

  ∫_0^1 [ln(1 - x)]^2 dx/x

appears in the fourth-order correction to the magnetic moment of the electron. Show that it equals 2ζ(3).

We begin with the variable substitution

  1 - x = e^-t,   dx = e^-t dt

to obtain

  ∫_0^1 [ln(1 - x)]^2 dx/x = ∫_0^∞ t^2 e^-t/(1 - e^-t) dt

This integral involves powers and exponentials, and is not so easy to do directly. Thus we expand the fraction as a series

  e^-t/(1 - e^-t) = e^-t (1 - e^-t)^-1 = e^-t (1 + e^-t + e^-2t + e^-3t + ···) = Σ_{n=1}^∞ e^-nt

This gives

  ∫_0^1 [ln(1 - x)]^2 dx/x = Σ_{n=1}^∞ ∫_0^∞ e^-nt t^2 dt

This integral may be evaluated by integration by parts (twice). Alternatively, we make the substitution s = nt to arrive at

  ∫_0^1 [ln(1 - x)]^2 dx/x = Σ_{n=1}^∞ n^-3 ∫_0^∞ e^-s s^2 ds = Σ_{n=1}^∞ n^-3 Γ(3) = 2ζ(3)

Here we have used the definition of the Gamma function

  Γ(z) = ∫_0^∞ e^-s s^{z-1} ds

as well as that of the zeta function

  ζ(z) = Σ_{n=1}^∞ n^-z
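A rough quadrature check of the result (not part of the original solution; the midpoint rule is crude near x = 1, hence the loose agreement):

```python
import math

# Midpoint-rule estimate of int_0^1 [ln(1-x)]^2 dx/x, compared with 2*zeta(3).
# The integrand is integrable at both endpoints, so a plain midpoint sum works.

N = 100000
integral = sum(math.log(1 - (k + 0.5) / N) ** 2 / ((k + 0.5) / N) / N
               for k in range(N))
two_zeta3 = 2 * sum(1.0 / n ** 3 for n in range(1, 10000))
print(integral, two_zeta3)   # both near 2.404
```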

5.10.1 Stirling's formula for the logarithm of the factorial function is

  ln(x!) = (1/2) ln 2π + (x + 1/2) ln x - x + Σ_{n=1}^N B_2n/(2n(2n - 1)) x^{1-2n}

The B_2n are the Bernoulli numbers. Show that Stirling's formula is an asymptotic expansion.

Instead of using the textbook definition of an asymptotic series Σ a_n(x), we aim to demonstrate the two principal facts: i) that the series diverges for fixed x when N → ∞, and ii) that the remainder vanishes for fixed N when x → ∞. To do so, we first examine the form of a_n(x)

  a_n(x) = B_2n/(2n(2n - 1)) x^{1-2n}

Using the relation

  B_2n = ((-1)^{n+1} 2(2n)!/(2π)^{2n}) ζ(2n)

we find

  |a_n(x)| = (2(2n - 2)! ζ(2n)/(2π)^{2n}) x^{1-2n}

For condition i), in order to show that the series diverges for fixed x, we may perform the ratio test

  |a_n|/|a_{n+1}| = (2(2n - 2)! ζ(2n)/(2π)^{2n}) x^{1-2n} × ((2π)^{2n+2}/(2(2n)! ζ(2n + 2))) x^{2n+1}
    = ((2π)^2 ζ(2n)/(2n(2n - 1) ζ(2n + 2))) x^2   (3)

Since lim_{n→∞} ζ(n) = 1, and since there are factors of n in the denominator, we see that

  lim_{n→∞} |a_n|/|a_{n+1}| = 0   (for fixed x)

and hence the ratio test demonstrates that the series diverges (the terms eventually grow without bound).

For showing condition ii), on the other hand, we suppose the series stops at term n = N. Then the error or remainder is related to the subsequent terms a_{N+1}, a_{N+2}, etc. However, according to (3), if we take the limit x → ∞ for fixed N we have

  lim_{x→∞} |a_N|/|a_{N+1}| = ∞   ⇒   |a_{N+1}| ≪ |a_N| as x → ∞

Hence the remainder terms fall off sufficiently fast to satisfy the criteria for an asymptotic series. We thus conclude that Stirling's formula is an asymptotic expansion.
5.10.7 Derive the following Bernoulli number asymptotic series for the Euler-Mascheroni constant

  γ ≅ Σ_{s=1}^n s^-1 - ln n - 1/(2n) + Σ_{k=1}^N B_2k/(2k n^2k)

Let us start by recalling the definition of the Euler-Mascheroni constant

  γ = lim_{n→∞} (Σ_{s=1}^n s^-1 - ln n)

Essentially, the constant γ is the difference between the sum and the integral approximation. This suggests that we begin by inserting the function f(x) = 1/x into the Euler-Maclaurin sum formula

  Σ_{x=1}^n f(x) = ∫_1^n f(x) dx + (1/2)f(1) + (1/2)f(n)
    + Σ_{p=1}^N (B_2p/(2p)!) [f^(2p-1)(n) - f^(2p-1)(1)]
    - (1/(2N)!) ∫_0^1 B_2N(x) Σ_{ν=1}^{n-1} f^(2N)(x + ν) dx   (4)

However, we first note that, for f(x) = 1/x, we have

  ∫_1^n f(x) dx = ∫_1^n dx/x = ln n

as well as

  f^(k)(x) = (-1)^k k!/x^{k+1}

Using these results, and returning to (4), we find

  Σ_{s=1}^n 1/s = ln n + 1/2 + 1/(2n) - Σ_{p=1}^N (B_2p/2p)[n^{-2p} - 1] + R_N(n)

or

  Σ_{s=1}^n 1/s - ln n = 1/2 + 1/(2n) - Σ_{p=1}^N (B_2p/2p)[n^{-2p} - 1] + R_N(n)   (5)

where the remainder R_N(n) is given by

  R_N(n) = -∫_0^1 B_2N(x) Σ_{ν=1}^{n-1} (x + ν)^{-2N-1} dx   (6)

At this point, we may note that the left hand side of (5) is close to the expression we want for the Euler-Mascheroni constant. However, we must recall that the sum formula (4) generally yields an asymptotic expansion (since the Bernoulli numbers diverge). Thus we have to be careful about the remainder term. Of course, we can still imagine taking the limit n → ∞ in (5) to obtain

  γ = lim_{n→∞} (Σ_{s=1}^n s^-1 - ln n) = 1/2 + Σ_{p=1}^N B_2p/2p + R_N(∞)   (7)

Noting that the remainder (6) is a sum of terms

  R_N(n) = -∫_0^1 B_2N(x) [1/(x + 1)^{2N+1} + 1/(x + 2)^{2N+1} + ··· + 1/(x + n - 1)^{2N+1}] dx

and that the first few terms in the sum dominate, we may eliminate most (but not all) of the remainder by subtracting (5) from (7)

  γ - Σ_{s=1}^n s^-1 + ln n = -1/(2n) + Σ_{p=1}^N (B_2p/2p) n^{-2p} + [R_N(∞) - R_N(n)]

Finally, dropping the difference of remainders, we obtain the result

  γ ≅ Σ_{s=1}^n s^-1 - ln n - 1/(2n) + Σ_{p=1}^N (B_2p/2p) n^{-2p}
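The quality of this asymptotic formula at modest n is striking; a short check (not part of the original solution) using the first few Bernoulli numbers B_2 = 1/6, B_4 = -1/30, B_6 = 1/42:

```python
import math

# Evaluate gamma ~ H_n - ln n - 1/(2n) + sum_k B_2k/(2k n^(2k)) at n = 10;
# already this reproduces gamma = 0.5772156649... to several digits.

def gamma_approx(n, bernoulli=(1.0 / 6, -1.0 / 30, 1.0 / 42)):
    total = sum(1.0 / s for s in range(1, n + 1)) - math.log(n) - 1.0 / (2 * n)
    for k, b2k in enumerate(bernoulli, start=1):
        total += b2k / (2 * k * n ** (2 * k))
    return total

print(gamma_approx(10))
```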

Physics 451

Fall 2004
Homework Assignment #9 Solutions

Textbook problems: Ch. 6: 6.1.3, 6.1.7, 6.2.5, 6.2.6, 6.3.3, 6.4.3, 6.4.4
Chapter 6
6.1.3 Prove algebraically that

  |z_1| - |z_2| ≤ |z_1 + z_2| ≤ |z_1| + |z_2|

Interpret this result in terms of vectors. Prove that

  |z - 1| < |sqrt(z^2 - 1)| < |z + 1|,   for ℜ(z) > 0

We start by evaluating |z_1 + z_2|^2

  |z_1 + z_2|^2 = (z_1 + z_2)(z_1* + z_2*) = |z_1|^2 + |z_2|^2 + z_1 z_2* + z_1* z_2
    = |z_1|^2 + |z_2|^2 + (z_1 z_2*) + (z_1 z_2*)* = |z_1|^2 + |z_2|^2 + 2ℜ(z_1 z_2*)   (1)

We now put a bound on the real part of z_1 z_2*. First note that, for any complex quantity ζ, we have |ζ|^2 = (ℜζ)^2 + (ℑζ)^2 ≥ (ℜζ)^2. Taking a square root gives |ζ| ≥ |ℜζ|, or -|ζ| ≤ ℜζ ≤ |ζ|. For the present case (where ζ = z_1 z_2*) this gives -|z_1||z_2| ≤ ℜ(z_1 z_2*) ≤ |z_1||z_2|. Using this inequality in (1), we obtain

  |z_1|^2 + |z_2|^2 - 2|z_1||z_2| ≤ |z_1 + z_2|^2 ≤ |z_1|^2 + |z_2|^2 + 2|z_1||z_2|

or

  (|z_1| - |z_2|)^2 ≤ |z_1 + z_2|^2 ≤ (|z_1| + |z_2|)^2

Taking the square root then proves the triangle inequality. The reason this is called the triangle inequality is that, in terms of vectors, we can think of z_1, z_2 and z_1 + z_2 as the three sides of a triangle.

  [Figure: triangle with sides z_1, z_2 and z_1 + z_2]

Then the third side (|z_1 + z_2|) of a triangle can be no longer than the sum of the lengths of the other two sides (|z_1| + |z_2|) nor shorter than the difference of lengths (|z_1| - |z_2|).

Finally, for the second inequality, we start by proving that

  |z + 1|^2 = |z|^2 + 1 + 2ℜz = (|z|^2 + 1 - 2ℜz) + 4ℜz = |z - 1|^2 + 4ℜz > |z - 1|^2

for ℜz > 0. This implies that |z + 1| > |z - 1| for ℜz > 0. The picture here is that if z is on the right half of the complex plane then it is closer to the point 1 than to the point -1.

  [Figure: point z in the right half-plane, with distances to the points ±1]

Given this result, it is simple to see that

  |z - 1|^2 < |z - 1||z + 1| < |z + 1|^2

or, by taking a square root

  |z - 1| < |sqrt((z - 1)(z + 1))| < |z + 1|

which is what we set out to prove.
6.1.7 Prove that

a) Σ_{n=0}^{N-1} cos nx = cos((N - 1)x/2) sin(Nx/2)/sin(x/2)

b) Σ_{n=0}^{N-1} sin nx = sin((N - 1)x/2) sin(Nx/2)/sin(x/2)

Consider the combined sum

  S = Σ_{n=0}^{N-1} cos nx + i Σ_{n=0}^{N-1} sin nx = Σ_{n=0}^{N-1} (cos nx + i sin nx) = Σ_{n=0}^{N-1} e^{inx}

The real part of S gives part a) and the imaginary part of S gives part b). When written in this fashion, we see that S is a terminating geometric series with ratio r = e^{ix}. Thus

  S = Σ_{n=0}^{N-1} r^n = (1 - r^N)/(1 - r) = (1 - e^{iNx})/(1 - e^{ix})
    = [e^{iNx/2}(e^{-iNx/2} - e^{iNx/2})]/[e^{ix/2}(e^{-ix/2} - e^{ix/2})]

We performed the last step in order to balance positive and negative exponentials inside the parentheses. This is so that we may relate both the numerator and denominator to sin θ = (e^{iθ} - e^{-iθ})/2i. The result is

  S = e^{i(N-1)x/2} sin(Nx/2)/sin(x/2)
    = [cos((N - 1)x/2) + i sin((N - 1)x/2)] sin(Nx/2)/sin(x/2)

It should now be apparent that the real and imaginary parts are indeed the solutions to parts a) and b).
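Both closed forms are easy to verify at sample values of N and x (a quick numeric illustration, not part of the original solution):

```python
import math

# Compare the direct sums of cos(nx) and sin(nx) with the closed forms.

N, x = 7, 0.9
pref = math.sin(N * x / 2) / math.sin(x / 2)
cos_sum = sum(math.cos(n * x) for n in range(N))
sin_sum = sum(math.sin(n * x) for n in range(N))
print(cos_sum, math.cos((N - 1) * x / 2) * pref)
print(sin_sum, math.sin((N - 1) * x / 2) * pref)
```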
6.2.5 Find the analytic function

  w(z) = u(x, y) + iv(x, y)

a) if u(x, y) = x^3 - 3xy^2

We use the Cauchy-Riemann relations

  ∂v/∂x = -∂u/∂y = 6xy   ⇒   v = 3x^2 y + C(y)
  ∂v/∂y = ∂u/∂x = 3x^2 - 3y^2   ⇒   v = 3x^2 y - y^3 + D(x)

In order for these two expressions to agree, the functions C(y) and D(x) must have the form C(y) = -y^3 + c and D(x) = c where c is an arbitrary constant. As a result, we find that v(x, y) = 3x^2 y - y^3 + c, or

  w(z) = (x^3 - 3xy^2) + i(3x^2 y - y^3) + ic = z^3 + ic

The constant c is unimportant.

b) if v(x, y) = e^-y sin x

As above, we have

  ∂u/∂x = ∂v/∂y = -e^-y sin x   ⇒   u = e^-y cos x + C(y)
  ∂u/∂y = -∂v/∂x = -e^-y cos x   ⇒   u = e^-y cos x + D(x)

Thus we must have C(y) = D(x) = c with c a constant. The complex function w(z) is

  w(z) = c + e^-y cos x + ie^-y sin x = c + e^-y (cos x + i sin x) = c + e^{ix-y} = c + e^{iz}

6.2.6 If there is some common region in which w_1 = u(x, y) + iv(x, y) and w_2 = w_1* = u(x, y) - iv(x, y) are both analytic, prove that u(x, y) and v(x, y) are constants.

If u + iv and u - iv are both analytic, then they must both satisfy the Cauchy-Riemann equations. This corresponds to

  ∂u/∂x = ∂v/∂y,   ∂u/∂y = -∂v/∂x   (for u + iv)

and

  ∂u/∂x = -∂v/∂y,   ∂u/∂y = ∂v/∂x   (for u - iv)

Adding these equations in pairs gives

  ∂u/∂x = ∂u/∂y = 0,   ∂v/∂x = ∂v/∂y = 0

Since all partial derivatives vanish, u and v can only be constants.

6.3.3 Verify that

  ∫_0^{1+i} z* dz

depends on the path by evaluating the integral for the two paths shown in Fig. 6.10.

  [Fig. 6.10: two rectangular paths from 0 to 1 + i in the (x, y) plane]

We perform this integral as a two-dimensional line integral

  ∫ z* dz = ∫ (x - iy)(dx + i dy)

For path 1, we first integrate along the x-axis (y = 0, dy = 0) and then along the vertical line x = 1 (dx = 0)

  ∫_0^{1+i} z* dz = ∫_0^1 (x - iy)|_{y=0} dx + ∫_0^1 (x - iy)|_{x=1} i dy
    = ∫_0^1 x dx + ∫_0^1 (i + y) dy
    = (1/2)x^2 |_0^1 + (iy + (1/2)y^2) |_0^1 = 1 + i

Similarly, for path 2, we first integrate up the y-axis (x = 0, dx = 0) and then along the horizontal line y = 1 (dy = 0)

  ∫_0^{1+i} z* dz = ∫_0^1 (x - iy)|_{x=0} i dy + ∫_0^1 (x - iy)|_{y=1} dx
    = ∫_0^1 y dy + ∫_0^1 (x - i) dx
    = (1/2)y^2 |_0^1 + ((1/2)x^2 - ix) |_0^1 = 1 - i

So we see explicitly that the integral depends on the path taken (1 + i ≠ 1 - i).
6.4.3 Solve Exercise 6.3.4 [∮_C dz/(z^2 + z) where C is a circle defined by |z| > 1] by separating the integrand into partial fractions and then applying Cauchy's integral theorem for multiply connected regions.

Note that, by applying Cauchy's integral formula to the constant function f(z) = 1, we may derive the useful expression

  ∮_C dz/(z - z_0) = 2πi   (2)

provided the point z_0 is contained inside the contour C (the integral is zero otherwise). Then, using partial fractions, we see that

  ∮_C dz/(z^2 + z) = ∮_C dz/(z(z + 1)) = ∮_C (1/z - 1/(z + 1)) dz
    = ∮_C dz/z - ∮_C dz/(z + 1)

Since C is a circle of radius greater than one, it encompasses both points z_0 = 0 and z_0 = -1. Thus, using (2), we find

  ∮_C dz/(z^2 + z) = 2πi - 2πi = 0

Note that, if the radius of C were less than one, we would have encircled only the pole at z_0 = 0. The result would then have been 2πi instead of zero.
6.4.4 Evaluate

  ∮_C dz/(z^2 - 1)

where C is the circle |z| = 2.

Again, we use partial fractions and (2)

  ∮_C dz/(z^2 - 1) = ∮_C dz/((z + 1)(z - 1)) = ∮_C ((1/2)/(z - 1) - (1/2)/(z + 1)) dz
    = (1/2) ∮_C dz/(z - 1) - (1/2) ∮_C dz/(z + 1) = πi - πi = 0

Here it is important that the contour of radius 2 encircles both points z_0 = 1 and z_0 = -1.
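Both contour integrals can be checked by direct quadrature on the circle (an illustration, not part of the original solution; the discretization count is an arbitrary choice):

```python
import cmath
import math

# Parametrize the contour z = R e^{it}, dz = i z dt, and sum over a uniform
# grid in t; for R = 2 both integrals above should come out (numerically) zero.

def contour_integral(f, R, N=4096):
    total = 0j
    for k in range(N):
        t = 2 * math.pi * (k + 0.5) / N
        z = R * cmath.exp(1j * t)
        total += f(z) * 1j * z * (2 * math.pi / N)
    return total

I1 = contour_integral(lambda z: 1 / (z ** 2 + z), 2.0)
I2 = contour_integral(lambda z: 1 / (z ** 2 - 1), 2.0)
print(abs(I1), abs(I2))   # both tiny
```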

Physics 451

Fall 2004
Homework Assignment #10 Solutions

## Textbook problems: Ch. 6: 6.5.2, 6.5.8, 6.6.2, 6.6.7

Ch. 7: 7.1.2, 7.1.4
Chapter 6
6.5.2 Derive the binomial expansion
m

(1 + z)



X
m(m 1) 2
m
zn
= 1 + mz +
z + =
n
12
n=0

for m any real number. The expansion is convergent for |z| < 1. Why?
To derive the binomial expansion, consider generating the Taylor series for f(z)
around z = 0 where f(z) = (1 + z)^m. Taking derivatives of f(z), we find
f'(z) = m(1+z)^{m-1},
f''(z) = m(m-1)(1+z)^{m-2},
f'''(z) = m(m-1)(m-2)(1+z)^{m-3},
etc.
In general, the n-th derivative is given by
f^{(n)}(z) = m(m-1)(m-2)\cdots(m-n+1)(1+z)^{m-n} = \frac{m!}{(m-n)!}(1+z)^{m-n}
where the factorial for non-integer m may be defined by the Gamma function,
or by the expression indicated. In particular, f^{(n)}(0) = m!/(m-n)!. Hence the
Taylor series has the form
f(z) = \sum_{n=0}^\infty \frac{1}{n!} f^{(n)}(0) z^n = \sum_{n=0}^\infty \frac{m!}{n!(m-n)!} z^n = \sum_{n=0}^\infty \binom{m}{n} z^n
For non-integer m (but integer n), the binomial coefficient may be defined by the
Gamma function, or alternately by
\binom{m}{n} = \frac{m(m-1)(m-2)\cdots(m-n+1)}{1 \cdot 2 \cdot 3 \cdots n} = \prod_{k=1}^n \frac{m-k+1}{k}
Note that, for non-integer m, the expression (1+z)^m has a branch point at z = -1.
(This is explored in problem 6.6.7 below.) Since the radius of convergence of the
Taylor series is the distance to the nearest singularity, this explains why |z| < 1
is necessary for convergence. For negative integer m, there is no branch point,
but there is still a pole (of order |m|) at z = -1. The pole also results in a radius
of convergence of |z| < 1. On the other hand, for m a non-negative integer, the
series terminates (giving a traditional binomial expansion for (1 + z) raised to an
integer power), and the radius of convergence is infinite. This is consistent with
the absence of any singularity in this case.
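The generalized binomial series is easy to sum term by term, which gives a quick check of the expansion inside the unit circle. This sketch is not part of the original solution; the helper name `binom_series` and the truncation at 200 terms are arbitrary choices:

```python
def binom_series(m, z, terms=200):
    """Partial sum of sum_n C(m, n) z^n using C(m, n+1) = C(m, n)(m - n)/(n + 1)."""
    total, coeff = 0.0, 1.0   # coeff starts at C(m, 0) = 1
    for n in range(terms):
        total += coeff * z**n
        coeff *= (m - n) / (n + 1)
    return total

m, z = 0.5, 0.3
approx = binom_series(m, z)
exact = (1 + z)**m
print(approx, exact)   # agree for |z| < 1
```

For |z| > 1 the same partial sums grow without bound, consistent with the branch point at z = -1 limiting the radius of convergence.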
6.5.8 Develop the first three nonzero terms of the Laurent expansion of
f(z) = (e^z - 1)^{-1}
about the origin.
Since the Laurent expansion is a unique result, we may obtain the expansion any
way we wish. What we can do here is to start with a Taylor expansion of the
denominator
e^z - 1 = z + \tfrac12 z^2 + \tfrac16 z^3 + \cdots = z(1 + \tfrac12 z + \tfrac16 z^2 + \cdots)
Hence
f(z) = (e^z - 1)^{-1} = z^{-1}(1 + \tfrac12 z + \tfrac16 z^2 + \cdots)^{-1}
For small z, we invert the series using (1 + r)^{-1} = 1 - r + r^2 - \cdots where
r = \tfrac12 z + \tfrac16 z^2 + \cdots. This gives
f(z) = z^{-1}\left[ 1 - (\tfrac12 z + \tfrac16 z^2 + \cdots) + (\tfrac12 z + \tfrac16 z^2 + \cdots)^2 - \cdots \right]
= z^{-1}\left[ 1 - \tfrac12 z + \tfrac1{12} z^2 + \cdots \right]
= \frac{1}{z} - \frac12 + \frac{z}{12} + \cdots \qquad (1)
Of course, we could also take the hint and use the generating function of Bernoulli
numbers to write
f(z) = \frac{1}{e^z - 1} = z^{-1} \left( \frac{z}{e^z - 1} \right) = z^{-1} \sum_{n=0}^\infty \frac{B_n}{n!} z^n = \frac{B_0}{z} + B_1 + \frac{B_2}{2} z + \frac{B_3}{6} z^2 + \cdots
Inserting B_0 = 1, B_1 = -\tfrac12 and B_2 = \tfrac16 then immediately yields the last line of
(1). However, this method requires us to either remember or look up the values
of the Bernoulli numbers.
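The first three Laurent terms can be verified numerically: the difference between f(z) and the truncated expansion should shrink like z^3, since the next nonzero term is -z^3/720. This check is not part of the original solution (the choice of sample points is arbitrary):

```python
import math

def f(x):
    """f(x) = 1/(e^x - 1), with expm1 used for accuracy at small x."""
    return 1.0 / math.expm1(x)

diffs = []
for x in (0.1, 0.05, 0.01):
    laurent = 1/x - 0.5 + x/12          # 1/z - 1/2 + z/12
    diffs.append(f(x) - laurent)
print(diffs)   # shrinks roughly like -x**3/720
```

At x = 0.01 the remainder is of order 1e-9, confirming the coefficients in (1).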
6.6.2 What part of the z-plane corresponds to the interior of the unit circle in the w-plane
if
a) w = \frac{z-1}{z+1}
Note that, by trying a few numbers, we can see that z = 0 gets mapped to w = -1
and z = 1 gets mapped to w = 0.
In fact, the unit circle in the w-plane is given by the equation |w| = 1, which
maps to |z - 1| = |z + 1| in the z-plane. Geometrically, this is saying that the
point z is equidistant to both +1 and -1. This can only happen on the imaginary
axis (x = 0). Hence the imaginary axis maps to the circumference of the circle.
Furthermore, since z = 1 gets mapped into the interior of the circle, we may
conclude that the right half (first and fourth quadrants) of the complex z-plane
gets mapped to the interior of the unit circle.
b) w = \frac{z-i}{z+i}
This map is similar to that of part a), except that the distances are measured to
the points +i and -i instead. Thus in this case the real axis (y = 0) gets mapped
to the circle. The upper half plane (first and second quadrants) gets mapped to
the interior of the unit circle.
6.6.7 For noninteger m, show that the binomial expansion of Exercise 6.5.2 holds only for
a suitably defined branch of the function (1 + z)^m. Show how the z-plane is cut.
Explain why |z| < 1 may be taken as the circle of convergence for the expansion of
this branch, in light of the cut you have chosen.
Returning to the binomial expansion of f(z) = (1+z)^m, we note that if w = 1 + z,
we end up with a function f(w) = w^m which is multi-valued under w \to w e^{2\pi i}
whenever m is nonintegral. This indicates that w = 0 is a branch point, and a
suitable branch must be defined. We take the branch cut to run from w = 0
along the negative real axis in the w-plane. However, the simple transformation
z = w - 1 allows us to return to the original z-plane. In this case, w = 0 is the
same as z = -1, so the branch point is at z = -1, with a cut running to the left
along the negative real axis. Writing z + 1 = |z + 1| e^{i\theta}, where the principal
value is taken to be -\pi < \theta \le \pi, this branch is f(z) = |1 + z|^m e^{im\theta}. Since
the Taylor series is expanded about z = 0, the radius of convergence is |z| < 1,
which is the distance to the nearest singularity (the branch point at z = -1).
This is why it is desired to take the branch cut running along the left (otherwise,
if it goes inside the unit circle, it will reduce or eliminate the radius of
convergence).
Chapter 7
7.1.2 A function f(z) can be represented by
f(z) = \frac{f_1(z)}{f_2(z)}
in which f_1(z) and f_2(z) are analytic. The denominator f_2(z) vanishes at z = z_0,
showing that f(z) has a pole at z = z_0. However, f_1(z_0) \neq 0, f_2'(z_0) \neq 0. Show that
a_{-1}, the coefficient of (z - z_0)^{-1} in a Laurent expansion of f(z) at z = z_0, is given by
a_{-1} = \frac{f_1(z_0)}{f_2'(z_0)}
Since f_1(z) and f_2(z) are both analytic, they may be expanded as Taylor series
f_1(z) = f_1(z_0) + f_1'(z_0)(z - z_0) + \cdots,
f_2(z) = f_2'(z_0)(z - z_0) + \tfrac12 f_2''(z_0)(z - z_0)^2 + \cdots
Here we have already set f_2(z_0) = 0 since the function vanishes at z = z_0. As a
result, we have
f(z) = \frac{f_1(z)}{f_2(z)} = \frac{f_1(z_0) + f_1'(z_0)(z - z_0) + \cdots}{f_2'(z_0)(z - z_0) + \tfrac12 f_2''(z_0)(z - z_0)^2 + \cdots}
= \frac{f_1(z_0)/f_2'(z_0)}{z - z_0} \cdot \frac{1 + (f_1'/f_1)(z - z_0) + \cdots}{1 + \tfrac12 (f_2''/f_2')(z - z_0) + \cdots}
For z \to z_0, the denominator 1 + \tfrac12 (f_2''/f_2')(z - z_0) + \cdots may be inverted using
the geometric series relation 1/(1 + r) = 1 - r + r^2 - \cdots. The result is a Laurent
series of the form
f(z) = \frac{f_1(z_0)/f_2'(z_0)}{z - z_0} \left[ 1 + \left( \frac{f_1'}{f_1} - \frac{f_2''}{2f_2'} \right)(z - z_0) + \cdots \right]
This expansion has a single pole, and its residue is simply
a_{-1} = \frac{f_1(z_0)}{f_2'(z_0)}

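The residue formula a_{-1} = f_1(z_0)/f_2'(z_0) is easy to test against a direct numerical contour integral. This sketch is not part of the original solution; the example functions (f_1 = e^z, f_2 = \sin z at z_0 = 0) and the helper name are illustrative choices:

```python
import cmath
import math

def residue_via_contour(f, z0, r=0.5, n=4096):
    """Residue at z0 from (1/2πi) ∮ f(z) dz around a small circle, midpoint rule."""
    total = 0.0 + 0.0j
    for k in range(n):
        th = 2 * math.pi * (k + 0.5) / n
        z = z0 + r * cmath.exp(1j * th)
        total += f(z) * 1j * (z - z0) * (2 * math.pi / n)   # dz = i(z - z0) dθ
    return total / (2j * math.pi)

z0 = 0.0
formula = cmath.exp(z0) / cmath.cos(z0)   # f1(z0)/f2'(z0) = 1 for e^z/sin z
numeric = residue_via_contour(lambda z: cmath.exp(z) / cmath.sin(z), z0)
print(formula, numeric)
```

The circle of radius 0.5 stays well inside the neighboring poles of e^z/\sin z at z = \pm\pi, so the two values agree closely.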
7.1.4 The Legendre function of the second kind Q_\nu(z) has branch points at z = \pm 1. The
branch points are joined by a cut line along the real (x) axis.
a) Show that Q_0(z) = \tfrac12 \ln((z+1)/(z-1)) is single-valued (with the real axis
-1 \le x \le 1 taken as a cut line).
Because \ln w has a branch point at w = 0, this ratio of logs has branch points at
z = \pm 1 as promised. We join the branch points by a cut line along the real axis
between -1 and +1. To make this well defined, we provide a principal value for
the arguments
z + 1 = |z + 1| e^{i\theta}, \qquad -\pi < \theta \le \pi,
z - 1 = |z - 1| e^{i\varphi}, \qquad -\pi < \varphi \le \pi
Thus
Q_0(z) = \tfrac12 \ln(z+1) - \tfrac12 \ln(z-1) = \tfrac12 \ln\left| \frac{z+1}{z-1} \right| + \frac{i}{2}(\theta - \varphi) \qquad (2)
It is the manner in which the arguments \theta and \varphi show up in (2) that indicates the
branch cut is as written. For x > 1 on the real axis, both \theta and \varphi are smooth,
\theta \approx 0 and \varphi \approx 0 for going either a little bit above or below the axis. Hence there
is no discontinuity in Q_0(x > 1) and thus no branch cut. For -1 < x < 1, on the
other hand, the argument \theta \approx 0 is smooth infinitesimally above or below the axis,
but the argument \varphi is discontinuous: \varphi \approx \pi above the axis, but \varphi \approx -\pi below
the axis. This shows that the value of Q_0 changes by \mp i\pi when crossing the real
axis. For x < -1, the situation is more interesting, as both \theta and \varphi jump when
crossing the axis. However the difference (\theta - \varphi) is unchanged. In this sense, the
two branch cuts cancel each other out, so that the function Q_0(x < -1) is well
defined without a cut.
Essentially, the branch cut prevents us from going around either of the points
z = 1 or z = -1 individually. However, we can take a big circle around both
points. In this case, \theta \to \theta + 2\pi and \varphi \to \varphi + 2\pi, but once again the difference
(\theta - \varphi) in (2) is single-valued. So this is an appropriate branch cut prescription.
b) For real argument x and |x| < 1 it is convenient to take
Q_0(x) = \tfrac12 \ln \frac{1+x}{1-x}
Show that
Q_0(x) = \tfrac12 [Q_0(x + i0) + Q_0(x - i0)]
The branch cut prescription described in part a) is somewhat unfortunate for real
arguments |x| < 1, since those values of x sit right on top of the cut. To make
this well defined for real x, we must provide a prescription for avoiding the cut.
This is what the x + i0 (above the cut) and x - i0 (below the cut) prescription is
doing for us. Noting that (for |x| < 1) the arguments have the following values
x + i0 (above the cut): \quad \theta \approx 0, \quad \varphi \approx \pi,
x - i0 (below the cut): \quad \theta \approx 0, \quad \varphi \approx -\pi
we find from (2)
Q_0(x + i0) = \tfrac12 \ln\left| \frac{x+1}{x-1} \right| - \frac{i\pi}{2}, \qquad Q_0(x - i0) = \tfrac12 \ln\left| \frac{x+1}{x-1} \right| + \frac{i\pi}{2} \qquad (3)
Taking the average gives
Q_0(x) = \tfrac12 [Q_0(x + i0) + Q_0(x - i0)] = \tfrac12 \ln\left| \frac{x+1}{x-1} \right| = \tfrac12 \ln \frac{1+x}{1-x}
where we have used the fact that |x - 1| = 1 - x for |x| < 1. In this case, we see
that averaging the function below and above the cut cancels the imaginary parts,
\mp i\pi/2 in (3).
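This averaging prescription can be checked directly, since the principal branch of the complex logarithm uses the same -\pi < \arg \le \pi convention adopted above. A minimal sketch (not part of the original solution; the sample point x = 0.4 and offset eps are arbitrary):

```python
import cmath
import math

def Q0(z):
    """Q_0(z) = (1/2) ln((z+1)/(z-1)) on the principal branch of cmath.log."""
    return 0.5 * (cmath.log(z + 1) - cmath.log(z - 1))

x, eps = 0.4, 1e-12
above = Q0(complex(x, +eps))   # x + i0: imaginary part ≈ -iπ/2
below = Q0(complex(x, -eps))   # x - i0: imaginary part ≈ +iπ/2
avg = 0.5 * (above + below)
exact = 0.5 * math.log((1 + x) / (1 - x))
print(above, below, avg.real, exact)
```

The imaginary parts of the two one-sided values are equal and opposite, and their average reproduces the real formula for Q_0(x) on the cut.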

Physics 451

Fall 2004
Homework Assignment #11 Solutions

## Textbook problems: Ch. 7: 7.2.5, 7.2.7, 7.2.14, 7.2.20, 7.2.22

Chapter 7
7.2.5 The unit step function is defined as
u(s - a) = \begin{cases} 0, & s < a \\ 1, & s > a \end{cases}
Show that u(s) has the integral representations
a) u(s) = \lim_{\epsilon \to 0^+} \frac{1}{2\pi i} \int_{-\infty}^\infty \frac{e^{ixs}}{x - i\epsilon}\, dx
Let us first suppose we may close the contour with a semi-circle I_R of radius R
in the upper half plane, enclosing the pole at z = i\epsilon.
Always assuming the limit \epsilon \to 0^+, we see that the real integral for u(s) may be
promoted to a closed contour integral
\frac{1}{2\pi i} \oint_C \frac{e^{izs}}{z - i\epsilon}\, dz = u(s) + I_R \qquad (1)
where I_R denotes the integral along the semi-circle at infinity. We now show that,
at least for s > 0, the integral I_R vanishes. To do so, we make an explicit variable
substitution
z = R e^{i\theta}, \qquad dz = iR e^{i\theta}\, d\theta
so that
I_R = \frac{1}{2\pi i} \int_0^\pi \frac{e^{isRe^{i\theta}}}{R e^{i\theta} - i\epsilon}\, iR e^{i\theta}\, d\theta = \frac{1}{2\pi} \int_0^\pi e^{isR(\cos\theta + i\sin\theta)}\, d\theta
where we have safely taken the limit \epsilon \to 0^+. Expanding out the exponent, we
find
I_R = \frac{1}{2\pi} \int_0^\pi e^{isR\cos\theta} e^{-sR\sin\theta}\, d\theta \qquad (2)
This vanishes by Jordan's lemma provided sR\sin\theta > 0 so that the real exponential is suppressed instead of blowing up (in fact, this is Jordan's lemma). Since
R is positive and \sin\theta > 0 in the upper half plane, this corresponds to the
requirement s > 0 (as alluded to above). In this case, since I_R = 0, (1) simplifies
to
u(s) = \frac{1}{2\pi i} \oint_C \frac{e^{izs}}{z - i\epsilon}\, dz = \text{residue of } \frac{e^{izs}}{z - i\epsilon} \text{ at } z = i\epsilon \qquad (s > 0)
The residue at i\epsilon is simply \lim_{\epsilon \to 0^+} e^{-\epsilon s} = 1. Hence we have confirmed that
u(s) = 1 for s > 0.
For s < 0, on the other hand, Jordan's lemma makes it clear that we should
instead close the contour with a semi-circle in the lower half plane.
Since there are no residues inside the contour, we simply obtain u(s) = 0 for
s < 0. Although the problem does not discuss the case when s = 0, it is worth
considering. In this case, we might as well close the contour in the upper half
plane. Then I_R can be directly evaluated by inserting s = 0 into (2). The result
is simply I_R = \tfrac12. Since the contour integral still has the value of 1 (residue at
the pole at i\epsilon), inserting I_R = \tfrac12 into (1) gives
1 = u(0) + \tfrac12 \quad \Rightarrow \quad u(0) = \tfrac12
which is a nice result indicating that the step function is given completely by
u(s - a) = \begin{cases} 0, & s < a \\ \tfrac12, & s = a \\ 1, & s > a \end{cases}
at least using this definition.
b) u(s) = \frac12 + \frac{1}{2\pi i} P \int_{-\infty}^\infty \frac{e^{ixs}}{x}\, dx
The principal value integral can be evaluated by deforming the contour above
and below the pole at z = 0 and then taking the average of the results. For
s > 0, we close the contour in the upper half plane.
As in part a), the residue of the pole at z = 0 is simply 1. So for the contour
deformed below the pole (pole inside) we have \frac{1}{2\pi i}\oint \frac{e^{izs}}{z}\, dz = 1,
while for the one deformed above the pole (no poles inside) we have 0. The
principal value then gives 1/2 (the average of 1 and 0). This indicates that
u(s) = \frac12 + \frac{1 + 0}{2} = 1 \qquad (s > 0)
For s < 0, on the other hand, we close the contour on the lower half plane. Again,
we average between the case when the pole is inside and when it is outside the
contour. However, it is important to realize that by closing the contour on the
lower half plane, we are actually choosing a clockwise (wrong direction) contour.
This means the contour integral gives either -1 or 0 depending on whether the
pole is inside or outside. The principal value prescription then yields
u(s) = \frac12 + \frac{-1 + 0}{2} = 0 \qquad (s < 0)
If we wanted to be careful, we could also work this out for s = 0 to find the same
answer u(0) = \tfrac12.
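Because P\int \cos(xs)/x\, dx vanishes by oddness, the representation in b) reduces to u(s) = 1/2 + (1/\pi)\int_0^\infty \sin(xs)/x\, dx, which can be checked numerically. This is a rough sketch, not part of the original solution; the cutoff X and grid size are arbitrary, and the oscillatory tail limits the accuracy to a few parts in a thousand:

```python
import math

def u_numeric(s, X=200.0, n=200000):
    """u(s) ≈ 1/2 + (1/π) ∫₀^X sin(xs)/x dx, midpoint rule on a finite cutoff."""
    h = X / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += math.sin(x * s) / x * h
    return 0.5 + total / math.pi

u_plus = u_numeric(1.0)    # expect ≈ 1
u_minus = u_numeric(-1.0)  # expect ≈ 0
print(u_plus, u_minus)
```

The truncated integral oscillates around \pi/2 sign(s) with amplitude of order 1/X, so the results land within about 0.01 of the step values.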
7.2.7 Generalizing Example 7.2.1, show that
\int_0^{2\pi} \frac{d\theta}{a \pm b\cos\theta} = \int_0^{2\pi} \frac{d\theta}{a \pm b\sin\theta} = \frac{2\pi}{(a^2 - b^2)^{1/2}}
for a > |b|. What happens if |b| > |a|?
Since this integral is over a complete period, we note that we would get the
same answer whether we integrate \cos\theta or \sin\theta. Furthermore, it does not matter
whether we integrate a + b\cos\theta or a - b\cos\theta. This can be proven directly by
considering the substitutions \theta \to 2\pi - \theta or \theta \to \theta + \pi into the integral. In any
case, this means we only need to consider
I = \int_0^{2\pi} \frac{d\theta}{a + b\cos\theta}
where we assume a > b > 0. For these types of trig integrals, we make the
substitutions
z = e^{i\theta}, \qquad dz = ie^{i\theta}\, d\theta = iz\, d\theta, \qquad \cos\theta = \frac{z + z^{-1}}{2}
to change the real integral into a contour integral on the unit circle |z| = 1
I = \oint_C \frac{dz}{iz\left(a + \tfrac{b}{2}(z + z^{-1})\right)} = -\frac{2i}{b} \oint_C \frac{dz}{z^2 + \tfrac{2a}{b} z + 1}
Since the contour is already closed, we do not need to worry about finding a way
to close the contour. All we need is to identify the poles inside the contour and
their residues. To do this, we solve the quadratic equation in the denominator to
obtain
I = -\frac{2i}{b} \oint_C \frac{dz}{(z - z_+)(z - z_-)}
where
z_\pm = -\frac{a}{b} \pm \sqrt{\frac{a^2}{b^2} - 1} \qquad (3)
Since we have assumed a > b > 0, the two zeros of the denominator both lie on
the negative real axis, with z_- outside the unit circle and z_+ inside it.
In particular, it is not hard to check that the pole at z_+ lies inside the circle of
unit radius. As a result
I = -\frac{2i}{b}(2\pi i)\left[ \text{residue of } \frac{1}{(z - z_+)(z - z_-)} \text{ at } z = z_+ \right]
= \frac{4\pi}{b} \frac{1}{z_+ - z_-} = \frac{4\pi}{b} \frac{1}{2\sqrt{a^2/b^2 - 1}} = \frac{2\pi}{\sqrt{a^2 - b^2}}
Note that, for a < 0, the integrand would always be negative. In this case, I
would be negative. Thus the complete answer is
I = \frac{2\pi}{\sqrt{a^2 - b^2}}\, \mathrm{sign}(a)
For |b| > |a|, the integrand would blow up when \theta = \cos^{-1}(-a/b) so the integral
is not defined. What happens in this case is that, on the complex plane, the two
poles z_+ and z_-, which still solve (3), move off the real axis but stay on the unit
circle contour itself.
So the complex integral is just as bad as the real integral. This is an example
where we could consider using a principal value prescription to make the integral
well defined.
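A direct numerical quadrature confirms the closed form. This is a quick sketch outside the original solution; the values a = 3, b = 2 and the point count are arbitrary:

```python
import math

def trig_integral(a, b, n=4096):
    """Midpoint rule for ∫₀^{2π} dθ/(a + b cos θ); spectrally accurate for a > |b|."""
    h = 2 * math.pi / n
    return sum(h / (a + b * math.cos((k + 0.5) * h)) for k in range(n))

a, b = 3.0, 2.0
numeric = trig_integral(a, b)
exact = 2 * math.pi / math.sqrt(a * a - b * b)
print(numeric, exact)
```

Because the integrand is periodic and analytic for a > |b|, the midpoint rule converges geometrically and the two values agree to machine precision.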

## 7.2.14 Show that (a > 0)

Z
cos x

dx = ea
a)
2
2
a
x + a
How is the right-hand side modified if cos x is replaced by cos kx?
For these types of integrals with sin or cos in the numerator, it is best to consider
sin x or cos x as the imaginary or real part of the complex exponential eix . In
this case, we write
Z
Z
eix
cos x
dx
=
<
dx
I=
2
2
2
2
x + a
x + a
Using Jordans lemma, we may close the contour using a semi-circle in the upper
half plane.
z
IR
ia
C

## Since IR = 0 (by Jordans lemma), we have simply



I
I
ea
eiz

eiz
I=<
dz = < 2i
dz = <
= ea
2
2
z +a
(z ia)(z + ia)
2ia
a
(for a positive).
If \cos x is replaced by \cos kx, we would write the numerator as \Re\, e^{ikx}. In this
case, for k > 0 we would close the contour in the upper half plane as before.
In addition, the exponential factor in the residue would be e^{-ka}, so for \cos kx,
the integral would be (\pi/a)e^{-ka}. For k < 0, on the other hand, we could close
the contour in the lower half plane. However, it is actually easier to see that
\cos(-kx) = \cos kx, so the answer should be independent of the sign of k. Hence
\int_{-\infty}^\infty \frac{\cos kx}{x^2 + a^2}\, dx = \frac{\pi}{|a|} e^{-|ka|}
is valid for any sign of k and a.
b) \int_{-\infty}^\infty \frac{x \sin x}{x^2 + a^2}\, dx = \pi e^{-a}
How is the right-hand side modified if \sin x is replaced by \sin kx?
As above, we write \sin x = \Im\, e^{ix}. Closing the contour in the same manner, and
using Jordan's lemma to argue that I_R = 0, we obtain
\int_{-\infty}^\infty \frac{x \sin x}{x^2 + a^2}\, dx = \Im \oint_C \frac{z e^{iz}}{(z - ia)(z + ia)}\, dz = \Im\left[ 2\pi i\, \frac{ia\, e^{-a}}{2ia} \right] = \pi e^{-a}
If \sin x is replaced by \sin kx, the residue would get modified so that e^{-a} is replaced
by e^{-ka}. As a result
\int_{-\infty}^\infty \frac{x \sin kx}{x^2 + a^2}\, dx = \pi e^{-|ka|}
The reason for the absolute value is the same as for part a) above.
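A brute-force quadrature over a large but finite interval reproduces the result of part a). This check is not part of the original solution; the cutoff X and grid are arbitrary, and the 1/x^2 tail limits the accuracy to about 1e-6:

```python
import math

def cos_integral(k, a, X=2000.0, n=400000):
    """≈ ∫_{-∞}^{∞} cos(kx)/(x²+a²) dx, using evenness and a midpoint rule on [0, X]."""
    h = X / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += math.cos(k * x) / (x * x + a * a) * h
    return 2 * s

k, a = 1.0, 2.0
numeric = cos_integral(k, a)
exact = math.pi / a * math.exp(-abs(k * a))
print(numeric, exact)   # both ≈ (π/2) e^{-2}
```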
7.2.20 Show that
\int_0^\infty \frac{dx}{(x^2 + a^2)^2} = \frac{\pi}{4a^3}, \qquad a > 0
This problem involves a double pole at z = ia. Choosing to close the contour in
the upper half plane, we obtain
I = \int_0^\infty \frac{dx}{(x^2 + a^2)^2} = \frac12 \int_{-\infty}^\infty \frac{dx}{(x^2 + a^2)^2} = \frac12 \oint_C \frac{dz}{(z^2 + a^2)^2} \qquad (4)
= \pi i\, (\text{residue at } z = ia)
Although this is a double pole, it may still have a residue. To see this, imagine
expanding the integrand in a power series near the pole at z = ia
\frac{1}{(z^2 + a^2)^2} = (z - ia)^{-2}(z + ia)^{-2} = (z - ia)^{-2}\left[ 2ia + (z - ia) \right]^{-2}
= (z - ia)^{-2}(2ia)^{-2}\left( 1 + \frac{z - ia}{2ia} \right)^{-2}
= -\frac{1}{4a^2}(z - ia)^{-2}\left[ 1 - 2\left( \frac{z - ia}{2ia} \right) + 3\left( \frac{z - ia}{2ia} \right)^2 - \cdots \right]
= \frac{-1/4a^2}{(z - ia)^2} + \frac{-i/4a^3}{z - ia} + \frac{3}{16a^4} + \cdots
Here we have used the binomial expansion for (1 + r)^{-2}. This shows that, in
addition to the obvious double pole, there is a single pole hidden on top of it
with residue a_{-1} = -i/4a^3. Alternatively, we could have computed the residue
much more quickly by noting that for a double pole in f(z) = 1/(z^2 + a^2)^2, we
form the non-singular function g(z) = (z - ia)^2 f(z) = 1/(z + ia)^2. The residue
is then the derivative
a_{-1} = g'(ia) = \frac{d}{dz} \frac{1}{(z + ia)^2}\bigg|_{z = ia} = \frac{-2}{(z + ia)^3}\bigg|_{z = ia} = \frac{-2}{(2ia)^3} = -\frac{i}{4a^3}
In either case, using this residue in (4), we find
I = \pi i \left( -\frac{i}{4a^3} \right) = \frac{\pi}{4a^3}
or more precisely I = \pi/4|a|^3, which is valid for either sign of a.
It is worth noting that, for this problem, the integrand falls off sufficiently fast at
infinity that we could choose to close the contour either in the upper half plane
or the lower half plane. Had we worked with the lower half plane, we would
have found a pole at -ia with opposite sign for the residue. On the other hand,
the clockwise contour would have contributed another minus sign. So overall we
would have found the same result either way (which is a good thing).
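The rapid 1/x^4 falloff makes this integral easy to verify on a finite interval. A minimal sketch outside the original solution (the cutoff X = 200 and a = 1.5 are arbitrary choices):

```python
import math

a, X, n = 1.5, 200.0, 200000
h = X / n
# midpoint rule for ∫₀^X dx/(x²+a²)²; the neglected tail is below 1/(3X³) ≈ 4e-8
numeric = sum(h / (((k + 0.5) * h) ** 2 + a * a) ** 2 for k in range(n))
exact = math.pi / (4 * a**3)
print(numeric, exact)
```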
7.2.22 Show that
\int_0^\infty \cos(t^2)\, dt = \int_0^\infty \sin(t^2)\, dt = \frac{\sqrt{\pi}}{2\sqrt{2}}
Again, when we see \sin or \cos, it is worth considering this as the imaginary or real
parts of the complex exponential. Hence we first choose to evaluate the integral
I = \int_0^\infty e^{it^2}\, dt
Taking the hint into account, we write down a (closed) contour integral using a
pie-shaped wedge of opening angle \pi/4: out along the real axis (giving I), around
an arc I_R at radius R, and back along the ray at angle \pi/4 (giving I_2). Thus
\oint_C e^{iz^2}\, dz = I + I_R + I_2
We start by evaluating the contour integral on the left. Although e^{iz^2} has an
essential singularity at infinity, that actually lies outside the contour (this is
certainly true for any fixed large radius R; it also remains outside the contour
in the limit R \to \infty). Since there are no poles and no singularities inside the
contour, the contour integral vanishes. As a result,
0 = I + I_R + I_2 \quad \Rightarrow \quad I = -I_R - I_2
We now show that the integral on I_R vanishes as well by a simple modification
of Jordan's lemma. For I_R, we let z = Re^{i\theta}, so that
I_R = \int_0^{\pi/4} e^{iR^2 e^{2i\theta}}\, iRe^{i\theta}\, d\theta = iR \int_0^{\pi/4} e^{iR^2\cos 2\theta}\, e^{-R^2\sin 2\theta}\, e^{i\theta}\, d\theta
Hence
|I_R| \le R \int_0^{\pi/4} e^{-R^2 \sin 2\theta}\, d\theta
Using the same argument as in Jordan's lemma, we can show that the integral
\int_0^{\pi/4} e^{-R^2 \sin 2\theta}\, d\theta may be bounded by a constant times 1/R^2. Hence |I_R| itself
falls off as 1/R, and vanishes when we take R \to \infty. As a result, we are left with
the observation
I = -I_2
To examine I_2, we note that the path of integration is a line of constant slope in
the complex plane
z = \ell e^{i\pi/4}, \qquad dz = e^{i\pi/4}\, d\ell
Thus
I_2 = -\int_0^\infty e^{i(\ell e^{i\pi/4})^2}\, e^{i\pi/4}\, d\ell = -e^{i\pi/4} \int_0^\infty e^{-\ell^2}\, d\ell
Note that the minus sign came from consideration of the direction of integration
along the I_2 contour. At this point, complex analysis does not really help us, and
we must recall (or look up) the gaussian integral
\int_0^\infty e^{-\ell^2}\, d\ell = \frac12 \int_{-\infty}^\infty e^{-\ell^2}\, d\ell = \frac{\sqrt{\pi}}{2}
Thus
I = -I_2 = e^{i\pi/4}\, \frac{\sqrt{\pi}}{2} = \frac{\sqrt{\pi}}{2}\left( \cos\frac{\pi}{4} + i\sin\frac{\pi}{4} \right) = \frac{\sqrt{\pi}}{2\sqrt{2}}(1 + i)
Since I = \int_0^\infty e^{it^2} dt, we may now take the real (\cos) and imaginary (\sin) parts of
I to obtain
\int_0^\infty \cos(t^2)\, dt = \int_0^\infty \sin(t^2)\, dt = \frac{\sqrt{\pi}}{2\sqrt{2}}
This is a curious result, as it is not directly obvious why integrating \cos(t^2) and
\sin(t^2) would give identical results.

Physics 451

Fall 2004
Homework Assignment #12 Solutions

## Textbook problems: Ch. 8: 8.2.2, 8.2.5, 8.2.6, 8.2.10, 8.2.16

Chapter 8
8.2.2 The Laplace transform of Bessel's equation (n = 0) leads to
(s^2 + 1)f'(s) + s f(s) = 0
Solve for f(s)
This equation is amenable to separation of variables
\frac{df}{f} = -\frac{s\, ds}{s^2 + 1} \quad \Rightarrow \quad \int \frac{df}{f} = -\int \frac{s\, ds}{s^2 + 1} \quad \Rightarrow \quad \ln f = -\tfrac12 \ln(s^2 + 1) + c
Exponentiating this and redefining the constant, we obtain
f(s) = \frac{C}{\sqrt{s^2 + 1}}
8.2.5 A boat, coasting through the water, experiences a resisting force proportional to v^n,
v being the instantaneous velocity of the boat. Newton's second law leads to
m \frac{dv}{dt} = -k v^n
With v(t = 0) = v_0, x(t = 0) = 0, integrate to find v as a function of time and then
the distance.
This equation is separable
\frac{dv}{v^n} = -\frac{k}{m}\, dt
For n \neq 1, this may be integrated to give
\int_{v_0}^v \frac{dv'}{v'^n} = -\frac{k}{m} \int_0^t dt' \quad \Rightarrow \quad \frac{1}{n-1}\left( \frac{1}{v^{n-1}} - \frac{1}{v_0^{n-1}} \right) = \frac{k}{m} t
\Rightarrow \quad v(t) = \left[ v_0^{-(n-1)} + \frac{(n-1)k}{m}\, t \right]^{-1/(n-1)} \qquad (1)
This may be integrated once more to obtain x as a function of t
x(t) = \int_0^t v(t')\, dt' = \int_0^t \left[ v_0^{-(n-1)} + \frac{(n-1)k}{m}\, t' \right]^{-1/(n-1)} dt'
Although this may look somewhat scary, it is in fact trivial to integrate, as it is
essentially t' (plus a constant) to some fractional power. The only difficulty is
bookkeeping the various constants. For n \neq 2, the result is
x(t) = \frac{m}{(n-2)k} \left\{ \left[ v_0^{-(n-1)} + \frac{(n-1)k}{m}\, t \right]^{(n-2)/(n-1)} - v_0^{-(n-2)} \right\} \qquad (2)
If desired, the position and velocity, (2) and (1), may be rewritten as
x(t) = \frac{m}{(n-2)k v_0^{n-2}} \left[ \left( 1 + \frac{(n-1)k v_0^{n-1}}{m}\, t \right)^{(n-2)/(n-1)} - 1 \right]
v(t) = v_0 \left[ 1 + \frac{(n-1)k v_0^{n-1}}{m}\, t \right]^{-1/(n-1)}
As a result, it is possible to eliminate t and obtain the velocity as a function of
position
v = v_0 \left[ 1 + \frac{(n-2)k v_0^{n-2}\, x}{m} \right]^{-1/(n-2)} \qquad (3)
Note that we may define
x_k = \frac{m}{(n-2)k v_0^{n-2}}
which represents a length scale related to the resisting force and initial velocity.
In terms of x_k, the velocity and position relation may be given as
\frac{v}{v_0} = \left( 1 + \frac{x}{x_k} \right)^{-1/(n-2)} \qquad \text{or} \qquad \left( \frac{v_0}{v} \right)^{n-2} = 1 + \frac{x}{x_k}
Note that, in fact, it is possible to obtain (3) directly from Newton's second law
by rewriting it as
m \frac{dv}{dt} = m v \frac{dv}{dx} = -k v^n \quad \Rightarrow \quad \frac{dv}{v^{n-1}} = -\frac{k}{m}\, dx
and then integrating
\int_{v_0}^v \frac{dv'}{v'^{n-1}} = -\frac{k}{m} \int_0^x dx' \quad \Rightarrow \quad \frac{1}{n-2}\left( \frac{1}{v^{n-2}} - \frac{1}{v_0^{n-2}} \right) = \frac{k}{m} x
\Rightarrow \quad \left( \frac{v_0}{v} \right)^{n-2} = 1 + \frac{(n-2)k v_0^{n-2}}{m}\, x
So far, what we have done does not apply to the special cases n = 1 or n = 2.
For n = 1, we have
\frac{dv}{v} = -\frac{k}{m}\, dt \quad \Rightarrow \quad \ln\frac{v}{v_0} = -\frac{k}{m} t \quad \Rightarrow \quad v(t) = v_0 e^{-(k/m)t}
Integrating once more yields
x(t) = \frac{m v_0}{k}\left( 1 - e^{-(k/m)t} \right) \quad \Rightarrow \quad \frac{v}{v_0} = 1 - \frac{kx}{m v_0}
which is in fact consistent with setting n = 1 in (3).
For n = 2, we have
\frac{dv}{v^2} = -\frac{k}{m}\, dt \quad \Rightarrow \quad -\frac{1}{v} + \frac{1}{v_0} = -\frac{k}{m} t \quad \Rightarrow \quad v(t) = \frac{v_0}{1 + (k v_0/m)t}
Integrating this for position yields
x(t) = \frac{m}{k} \ln\left( 1 + \frac{k v_0}{m}\, t \right) \quad \Rightarrow \quad \frac{kx}{m} = \ln\frac{v_0}{v}
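The closed forms for v(t), including the special case n = 1, can be checked against a direct numerical integration of m\,dv/dt = -kv^n. This sketch is not part of the original solution; the parameter values and the use of classical RK4 are arbitrary choices:

```python
import math

def v_exact(t, v0, k, m, n):
    """Closed-form speed: equation (1) for n ≠ 1, exponential decay for n = 1."""
    if n == 1:
        return v0 * math.exp(-k * t / m)
    return (v0**(1 - n) + (n - 1) * k * t / m) ** (-1.0 / (n - 1))

def v_rk4(t, v0, k, m, n, steps=10000):
    """Integrate m dv/dt = -k v^n with classical fourth-order Runge-Kutta."""
    h = t / steps
    f = lambda v: -(k / m) * v**n
    v = v0
    for _ in range(steps):
        k1 = f(v); k2 = f(v + h * k1 / 2); k3 = f(v + h * k2 / 2); k4 = f(v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return v

pairs = [(v_exact(2.0, 1.0, 0.5, 1.0, n), v_rk4(2.0, 1.0, 0.5, 1.0, n)) for n in (1, 2, 3)]
for n, (ve, vn) in zip((1, 2, 3), pairs):
    print(n, ve, vn)
```

For n = 2 the exact value at these parameters is v_0/(1 + kv_0t/m) = 1/2, which the RK4 result reproduces to high accuracy.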
8.2.6 In the first-order differential equation dy/dx = f(x, y) the function f(x, y) is a function
of the ratio y/x:
\frac{dy}{dx} = g\left( \frac{y}{x} \right)
Show that the substitution of u = y/x leads to a separable equation in u and x.
If we let u = y/x, this means that y = xu. So, by the product rule
\frac{dy}{dx} = x \frac{du}{dx} + u
The above differential equation now becomes
x \frac{du}{dx} + u = g(u) \quad \Rightarrow \quad \frac{du}{g(u) - u} = \frac{dx}{x}
which is separated in u and x.

8.2.10 A certain differential equation has the form
f (x)dx + g(x)h(y)dy = 0
with none of the functions f (x), g(x), h(y) identically zero. Show that a necessary
and sufficient condition for this equation to be exact is that g(x) = const.

## The check for exactness is

f (x) =
(g(x)h(y))
y
x
or
0=

dg(x)
h(y)
dx

Since h(y) is not identically zero, we may divide out by h(y) (at least in any
domain away from isolated zeros of h), leading to dg(x)/dx = 0, which indicates
that g(x) must be constant.
8.2.16 Bernoulli's equation
\frac{dy}{dx} + f(x)\, y = g(x)\, y^n
is nonlinear for n \neq 0 or 1. Show that the substitution u = y^{1-n} reduces Bernoulli's
equation to a linear equation.
For n \neq 1, the substitution u = y^{1-n} is equivalent to y = u^{1/(1-n)}. Thus
\frac{dy}{dx} = \frac{1}{1-n}\, u^{1/(1-n) - 1} \frac{du}{dx} = \frac{1}{1-n}\, u^{n/(1-n)} \frac{du}{dx}
Bernoulli's equation then becomes
\frac{1}{1-n}\, u^{n/(1-n)} \frac{du}{dx} + f(x)\, u^{1/(1-n)} = g(x)\, u^{n/(1-n)}
Multiplying by u^{-n/(1-n)} gives
\frac{1}{1-n} \frac{du}{dx} + f(x)\, u = g(x) \quad \text{or} \quad \frac{du}{dx} + (1-n)f(x)\, u = (1-n)g(x)
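As a concrete illustration of the substitution (not part of the original solution; the particular equation and initial condition are my own choices): for y' + y = y^2, i.e. f = g = 1 and n = 2, the variable u = y^{-1} obeys u' - u = -1, whose solution u = 1 + Ce^x gives y = 1/(1 + Ce^x). The residual of the original nonlinear equation can then be checked directly:

```python
import math

# y' + y = y^2 with y(0) = 1/2, so C = 1 and y = 1/(1 + e^x)
y = lambda x: 1.0 / (1.0 + math.exp(x))
yp = lambda x: -math.exp(x) / (1.0 + math.exp(x))**2   # analytic derivative of y

residuals = [yp(x) + y(x) - y(x)**2 for x in (-1.0, 0.0, 2.0)]
print(residuals)   # each ≈ 0
```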

Physics 451

Fall 2004
Homework Assignment #13 Solutions

## Textbook problems: Ch. 8: 8.4.1, 8.4.3, 8.5.6, 8.5.11, 8.5.14, 8.5.17

Chapter 8
8.4.1 Show that Legendre's equation has regular singularities at x = -1, 1, and \infty.
Legendre's equation may be written as
y'' - \frac{2x}{1 - x^2}\, y' + \frac{l(l+1)}{1 - x^2}\, y = 0
so that
P(x) = -\frac{2x}{1 - x^2} = \frac{2x}{(x-1)(x+1)}, \qquad Q(x) = \frac{l(l+1)}{1 - x^2} = -\frac{l(l+1)}{(x-1)(x+1)}
Written in this fashion, we see that both P(x) and Q(x) have simple poles at
x = 1 and x = -1. This is sufficient to indicate that these two points are regular
singular points.
For the point at \infty, we make the substitution x = 1/z. As worked out in the
text, we end up with
\tilde P(z) = \frac{2z - P(z^{-1})}{z^2} = \frac{2z + 2z^{-1}/(1 - z^{-2})}{z^2} = \frac{2}{z} + \frac{2}{z(z^2 - 1)} = \frac{2z}{z^2 - 1}
and
\tilde Q(z) = \frac{Q(z^{-1})}{z^4} = \frac{l(l+1)/(1 - z^{-2})}{z^4} = \frac{l(l+1)}{z^2(z^2 - 1)}
Examining the behavior of \tilde P and \tilde Q as z \to 0, we see that \tilde P is regular, while \tilde Q
has a double pole. Because of the double pole in \tilde Q, Legendre's equation also has
a regular singularity at x = \infty.
8.4.3 Show that the substitution
x \to \frac{1-x}{2}, \qquad a = -l, \qquad b = l + 1, \qquad c = 1
converts the hypergeometric equation into Legendre's equation.
Making the above substitution (along with dx \to -\tfrac12 dx, which implies y' \to (-2)y'
and y'' \to (-2)^2 y'') into the hypergeometric equation, we find
x(x-1)y'' + [(1 + a + b)x - c]y' + ab\, y = 0
\to \quad \frac{1-x}{2}\left( \frac{1-x}{2} - 1 \right)(-2)^2 y'' + \left[ (1 - l + (l+1))\frac{1-x}{2} - 1 \right](-2)y' - l(l+1)y = 0
\to \quad -(1 - x^2)y'' + 2xy' - l(l+1)y = 0
Changing an overall sign yields Legendre's equation
(1 - x^2)y'' - 2xy' + l(l+1)y = 0
This indicates that Legendre's equation is in fact a special case of the more general
hypergeometric equation.
8.5.6 Develop series solutions for Hermite's differential equation
a) y'' - 2xy' + 2\alpha y = 0
Since x = 0 is a regular point, we develop a simple Taylor series solution
y = \sum_{n=0}^\infty a_n x^n, \qquad y' = \sum_{n=0}^\infty n a_n x^{n-1}, \qquad y'' = \sum_{n=0}^\infty n(n-1) a_n x^{n-2}
Substituting these into the differential equation and collecting powers of x gives
\sum_{n=0}^\infty \left[ (n+2)(n+1) a_{n+2} + 2(\alpha - n) a_n \right] x^n = 0
To obtain this, we made the substitution n \to n + 2 in the y'' term of the series
so that we could collect identical powers of x^n. Since this series
vanishes for all values of x, each coefficient must vanish. This yields the recursion
relation
a_{n+2} = \frac{2(n - \alpha)}{(n+2)(n+1)}\, a_n \qquad (1)
which determines all higher a_n's, given a_0 and a_1 as a starting point.
In fact, we obtain two series, one for n even and one for n odd. For n even, we
set a_0 = 1 and find
a_0 = 1, \qquad a_2 = \frac{2(-\alpha)}{2!}, \qquad a_4 = \frac{2(2 - \alpha)}{4 \cdot 3}\, a_2 = \frac{2^2(-\alpha)(2 - \alpha)}{4!}, \qquad \text{etc.}
This gives the even solution
y_{\rm even} = 1 + 2(-\alpha)\frac{x^2}{2!} + 2^2(-\alpha)(2 - \alpha)\frac{x^4}{4!} + 2^3(-\alpha)(2 - \alpha)(4 - \alpha)\frac{x^6}{6!} + \cdots \qquad (2)
For n odd, we instead set a_1 = 1 and find
a_1 = 1, \qquad a_3 = \frac{2(1 - \alpha)}{3!}, \qquad a_5 = \frac{2(3 - \alpha)}{5 \cdot 4}\, a_3 = \frac{2^2(1 - \alpha)(3 - \alpha)}{5!}, \qquad \text{etc.}
This results in the odd solution
y_{\rm odd} = x + 2(1 - \alpha)\frac{x^3}{3!} + 2^2(1 - \alpha)(3 - \alpha)\frac{x^5}{5!} + 2^3(1 - \alpha)(3 - \alpha)(5 - \alpha)\frac{x^7}{7!} + \cdots \qquad (3)
Note that, at an ordinary point, we did not have to solve the indicial equation.
However, if we had chosen to do so, we would have found k = 0 or k = 1, yielding
the even and odd solutions, respectively.
b) Show that both series solutions are convergent for all x, the ratio of successive
coefficients behaving, for large index, like the corresponding ratio in the expansion
of \exp(2x^2).
To test for convergence, all we need is to use the ratio test
\lim_{n \to \infty} \left| \frac{a_n x^n}{a_{n+2} x^{n+2}} \right| = \lim_{n \to \infty} \frac{(n+2)(n+1)}{2(n - \alpha) x^2} = \lim_{n \to \infty} \frac{n}{2x^2} = \infty \qquad (4)
Since this is larger than 1, the series converges for all values of x. Note that
the ratio a_n/a_{n+2} was directly obtained from the recursion relation (1), and this
result is valid for both y_{\rm even} and y_{\rm odd}. Furthermore, if we compare this with
\exp(2x^2), we see that the n-th term in the Taylor series of the exponential
is b_n = (2x^2)^n/n!, which leads to a ratio
\frac{b_{n-1}}{b_n} = \frac{n}{2x^2}
in direct correspondence with that of (4). Hence the solutions to Hermite's equation are (generically) asymptotic to \exp(2x^2).
c) Show that by appropriate choice of \alpha the series solutions may be cut off and
converted to finite polynomials.
Examination of the series solutions (2) and (3) indicates that y_{\rm even} terminates
for \alpha = 0, 2, 4, \ldots and y_{\rm odd} terminates for \alpha = 1, 3, 5, \ldots. This means that for
\alpha a non-negative integer either y_{\rm even} or y_{\rm odd} (depending on \alpha being even or odd)
terminates, yielding a finite Hermite polynomial.
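The recursion relation (1) makes the termination easy to see by machine. This sketch is not part of the original solution; the helper name and the normalization a_start = 1 are my own choices:

```python
def hermite_series_coeffs(alpha, parity, nmax=12):
    """Coefficients a_n from a_{n+2} = 2(n - alpha)/((n+2)(n+1)) a_n, starting at 1.

    parity = 0 generates the even series, parity = 1 the odd series."""
    coeffs = {parity: 1.0}
    n = parity
    while n + 2 <= nmax:
        coeffs[n + 2] = 2.0 * (n - alpha) / ((n + 2) * (n + 1)) * coeffs[n]
        n += 2
    return coeffs

# alpha = 4 (even): the series terminates after x^4, giving a Hermite polynomial
c = hermite_series_coeffs(4, 0)
print(c)   # 1 - 4x^2 + (4/3)x^4, with all higher coefficients zero
```

The resulting polynomial 1 - 4x^2 + (4/3)x^4 is proportional to H_4(x) = 16x^4 - 48x^2 + 12, as expected.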

## 8.5.11 Obtain two series solutions of the confluent hypergeometric equation

xy 00 + (c x)y 0 ay = 0
Test your solutions for convergence.
We first observe that this equation has a regular singular point at x = 0 and an
irregular one at x = . We would like to develop a series solution around the
regular singular point at x = 0. Thus we start with the indicial equation
y 00 +

cx 0 a
y y=0
x
x

p0 = c,

q0 = 0

and
k(k 1) + p0 k + q0 = 0

k(k 1) + ck = 0

k(k + c 1) = 0

## This shows that the indices at x = 0 are k1 = 0 and k2 = 1 c. We start with

k1 = 0. Since the index vanishes, we attempt an ordinary Taylor series solution
y=

an xn ,

y0 =

nan xn1 ,

y 00 =

n=0

n=0

n=0

## [n(n 1)an xn1 + ncan xn1 nan xn aan xn ] = 0

n=0

Making the substition n n + 1 in the first two terms and simplifying gives

n=0

an+1 =

a+n
an
(n + 1)(c + n)

Setting a_0 = 1, the first few terms in the series become
a_0 = 1, \qquad a_1 = \frac{a}{c}, \qquad a_2 = \frac{a+1}{2(c+1)}\, a_1 = \frac{a(a+1)}{2!\, c(c+1)}, \qquad a_3 = \frac{a+2}{3(c+2)}\, a_2 = \frac{a(a+1)(a+2)}{3!\, c(c+1)(c+2)}
This indicates that
y = 1 + \frac{a}{c}\, x + \frac{a(a+1)}{c(c+1)} \frac{x^2}{2!} + \frac{a(a+1)(a+2)}{c(c+1)(c+2)} \frac{x^3}{3!} + \cdots = \sum_{n=0}^\infty \frac{(a)_n}{(c)_n} \frac{x^n}{n!} \qquad (6)
where the notation (a)_n is given by
(a)_n = a(a+1)(a+2)\cdots(a+n-2)(a+n-1) = \frac{\Gamma(a+n)}{\Gamma(a)} \qquad (7)
This is the regular solution of the confluent hypergeometric equation. We now
test this series for convergence using the ratio test. Given the recursion relation
(5), we find
\lim_{n \to \infty} \left| \frac{a_n x^n}{a_{n+1} x^{n+1}} \right| = \lim_{n \to \infty} \frac{(n+1)(c+n)}{(a+n)x} = \lim_{n \to \infty} \frac{n}{x} = \infty
Therefore this series converges for all values of x, unless c is a non-positive integer,
in which case the denominators in (6) will eventually all blow up.
Turning next to k_2 = 1 - c, we seek a series solution of the form
y = x^{1-c} \sum_{n=0}^\infty a_n x^n, \qquad y' = x^{-c} \sum_{n=0}^\infty (n + 1 - c) a_n x^n, \qquad y'' = x^{-1-c} \sum_{n=0}^\infty (n + 1 - c)(n - c) a_n x^n
Substituting into the differential equation gives
x^{1-c} \sum_{n=0}^\infty \left[ (n+1-c)(n-c) a_n x^{n-1} + c(n+1-c) a_n x^{n-1} - (n+1-c) a_n x^n - a\, a_n x^n \right] = 0
Performing the shift n \to n + 1 in the first two terms and simplifying, we obtain
the recursion relation
a_{n+1} = \frac{n + 1 + a - c}{(n + 2 - c)(n + 1)}\, a_n
Supposing that a_0 = 1, the first few terms in this series are given by
a_0 = 1, \qquad a_1 = \frac{1 + a - c}{2 - c}, \qquad a_2 = \frac{2 + a - c}{2(3 - c)}\, a_1 = \frac{(1 + a - c)(2 + a - c)}{2!\, (2 - c)(3 - c)},
a_3 = \frac{3 + a - c}{3(4 - c)}\, a_2 = \frac{(1 + a - c)(2 + a - c)(3 + a - c)}{3!\, (2 - c)(3 - c)(4 - c)}
Following the notation of (7), we may write the series solution as
y_{\rm new} = x^{1-c} \sum_{n=0}^\infty \frac{(1 + a - c)_n}{(2 - c)_n} \frac{x^n}{n!} \qquad (8)
This series is rather similar to the standard one (6). In fact, the solution of (6)
may be converted into y_{\rm new} by making the substitutions a \to a + 1 - c and c \to 2 - c
and multiplying y by the prefactor x^{1-c}. [Why this works may be seen by making
the substitutions directly into the confluent hypergeometric equation itself.] As
a result, by the same ratio test argument as before, y_{\rm new} converges for all values
of x, except when c = 2, 3, 4, \ldots where the denominators in (8) would eventually
all blow up.
To summarize, for non-integer values of c, the two solutions (6) and (8) form
a complete linearly independent set. For c = 1, both (6) and (8) are precisely
the same, and we have found only one solution. For other integer values of c,
only one of (6) or (8) makes sense (and the other one blows up because of a bad
denominator). So in fact for all integer c, we have only obtained one solution
by the series method, and the second solution would be of the irregular form
(which is not fun at all).
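The regular series (6) can be summed directly from the recursion (5). A minimal sketch, not part of the original solution (the helper name is mine; the check uses the elementary special case a = c, for which the series reduces to the exponential series):

```python
import math

def kummer_series(a, c, x, terms=60):
    """Partial sum of series (6): sum_n (a)_n/(c)_n x^n/n!; assumes c is not
    a non-positive integer, the case where the denominators blow up."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) / ((c + n) * (n + 1)) * x   # recursion (5), with the x^n/n! factor
    return total

numeric = kummer_series(1.0, 1.0, 2.0)   # a = c: (a)_n/(c)_n = 1, series is e^x
exact = math.exp(2.0)
print(numeric, exact)
```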
8.5.14 To a good approximation, the interaction of two nucleons may be described by a
mesonic potential
Aeax
V =
x
attractive for A negative. Develop a series solution of the resultant Schr
odinger wave
equation
h2 d 2

+ (E V ) = 0
2m dx2
We begin by substituting the explicit potential in the Schr
odinger equation


2mE
d2
2mAeax
+

=0
dx2
h2

h2 x

E=

2mE
,
h2

A=

2mA
h2

eax
+ E A
x
00


=0

(9)

## which has a regular singular point at x = 0 and an irregular one at x = . We

now develop a series solution around x = 0. Noting that
eax
Q(x) =
x

P (x) = 0,

p0 = 0,

q0 = 0

the indicial equation is trivial, k(k − 1) = 0. Since we have k1 = 1 and k2 = 0,
we look for the k1 = 1 series (the larger index one always works). Here we have
to worry that e^(−ax) is non-polynomial. As a result, we will not be able to obtain
a simple recursion relation. We thus content ourselves with just working out a
few terms in the series. Normalizing the first term in the series to be x, we take

    ψ = x + a2 x² + a3 x³ + ...,
    ψ' = 1 + 2a2 x + 3a3 x² + ...,
    ψ'' = 2a2 + 6a3 x + ...

Substitution into (9) gives

    2a2 + 6a3 x + ... + (E′x − A′e^(−ax))(1 + a2 x + a3 x² + ...) = 0
Since we have used a series for the wavefunction ψ(x), we ought to also expand
the exponential as a series, e^(−ax) = 1 − ax + (1/2)a²x² − ... . Keeping appropriate
powers of x, we find

    0 = 2a2 + 6a3 x + ... + (E′x − A′(1 − ax + ...))(1 + a2 x + ...)
      = 2a2 + 6a3 x + ... + (−A′ + (aA′ + E′)x + ...)(1 + a2 x + ...)
      = 2a2 + 6a3 x + ... + (−A′) + (aA′ + E′ − a2 A′)x + ...
      = (2a2 − A′) + (6a3 + aA′ + E′ − a2 A′)x + ...

Setting the coefficients to zero gives

    a2 = (1/2)A′,    a3 = −(1/6)(aA′ + E′ − a2 A′) = (1/6)((1/2)A′² − E′ − aA′)

The series solution is then of the form

    ψ = x + (1/2)A′x² + (1/6)((1/2)A′² − E′ − aA′)x³ + ...
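A quick numerical check of the three-term result (a sketch, not in the original; the sample values of a, E′, A′ below are arbitrary): with a2 and a3 as found above, the residual ψ'' + (E′ − A′e^(−ax)/x)ψ vanishes through order x, so it should shrink like x² as x → 0.

```python
from math import exp

# arbitrary sample values; Ep, Ap play the role of E' = 2mE/hbar^2, A' = 2mA/hbar^2
a, Ep, Ap = 1.3, -0.5, 2.0

a2 = Ap / 2.0
a3 = (Ap * Ap / 2.0 - Ep - a * Ap) / 6.0

def psi(x):
    return x + a2 * x**2 + a3 * x**3

def psi_pp(x):
    return 2 * a2 + 6 * a3 * x   # exact second derivative of the cubic

def residual(x):
    # left-hand side of (9) evaluated on the truncated series
    return psi_pp(x) + (Ep - Ap * exp(-a * x) / x) * psi(x)

for x in (1e-1, 1e-2, 1e-3):
    print(x, residual(x) / x**2)   # ratio tends to a constant: residual = O(x^2)
```

The constant and order-x terms cancel exactly by construction; what remains is the order-x² error of the truncated series.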
8.5.17 The modified Bessel function I0(x) satisfies the differential equation

    x² (d²/dx²) I0(x) + x (d/dx) I0(x) − x² I0(x) = 0

From Exercise 7.4.4 the leading term in an asymptotic expansion is found to be

    I0(x) ≈ e^x / √(2πx)

Assume a series of the form

    I0(x) ≈ (e^x / √(2πx)) (1 + b1 x^(−1) + b2 x^(−2) + ...)

Determine the coefficients b1 and b2
The (modified) Bessel equation has a regular singular point at x = 0 and an
irregular one at x = ∞. Here we are asked to develop an asymptotic expansion
around x = ∞. Although this is an irregular one (witness the essential singularity
e^x), we are given the form of the series. As a result, all we have to do is to take
derivatives and insert the expressions into the differential equation. To make it
easier to obtain the derivatives, we write

    I0(x) ≈ (e^x/√(2π)) (x^(−1/2) + b1 x^(−3/2) + b2 x^(−5/2) + b3 x^(−7/2) + ...)

The derivative d/dx acts either on the e^x factor or on the series in the parentheses.
The resulting first derivative is

    I0'(x) ≈ (e^x/√(2π)) (x^(−1/2) + (b1 − 1/2)x^(−3/2) + (b2 − (3/2)b1)x^(−5/2) + (b3 − (5/2)b2)x^(−7/2) + ...)

Taking one more derivative yields

    I0''(x) ≈ (e^x/√(2π)) (x^(−1/2) + (b1 − 1)x^(−3/2) + (b2 − 3b1 + 3/4)x^(−5/2) + (b3 − 5b2 + (15/4)b1)x^(−7/2) + ...)
Substituting the above into the modified Bessel equation and collecting like powers of x, we find
    0 = x² I0'' + x I0' − x² I0
      ≈ (e^x/√(2π)) [ x^(3/2) + (b1 − 1)x^(1/2) + (b2 − 3b1 + 3/4)x^(−1/2) + (b3 − 5b2 + (15/4)b1)x^(−3/2) + ...
                     + x^(1/2) + (b1 − 1/2)x^(−1/2) + (b2 − (3/2)b1)x^(−3/2) + ...
                     − x^(3/2) − b1 x^(1/2) − b2 x^(−1/2) − b3 x^(−3/2) − ... ]
      = (e^x/√(2π)) ((−2b1 + 1/4)x^(−1/2) + (−4b2 + (9/4)b1)x^(−3/2) + ...)
Setting the coefficients to zero gives

    b1 = 1/8,    b2 = (9/16)b1 = 9/128

so that the asymptotic series develops as

    I0(x) ≈ (e^x/√(2πx)) (1 + (1/8)x^(−1) + (9/128)x^(−2) + ...)
Note that, in order to find b1 and b2 , we needed to keep track of the b3 coefficient,
even though it dropped out in the end.
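As a numerical cross-check of the coefficients (not part of the original solution), the sketch below compares the asymptotic series against I0 computed from its integral representation I0(x) = (1/π)∫₀^π e^(x cos t) dt, which is exact and easy to evaluate by the trapezoidal rule. At x = 20 each extra term of the series should cut the relative error substantially.

```python
from math import exp, cos, pi, sqrt

def I0_quad(x, n=4000):
    """I0 via the integral representation (1/pi) * int_0^pi exp(x cos t) dt,
    evaluated with the trapezoidal rule (very accurate for this smooth integrand)."""
    h = pi / n
    s = 0.5 * (exp(x) + exp(-x))          # endpoint terms t = 0 and t = pi
    for k in range(1, n):
        s += exp(x * cos(k * h))
    return s * h / pi

def I0_asym(x, nterms):
    """Truncated asymptotic series e^x/sqrt(2 pi x) * (1 + 1/(8x) + 9/(128 x^2))."""
    coeffs = [1.0, 1.0 / 8.0, 9.0 / 128.0][:nterms]
    return exp(x) / sqrt(2 * pi * x) * sum(c / x**k for k, c in enumerate(coeffs))

x = 20.0
exact = I0_quad(x)
for n in (1, 2, 3):
    print(n, abs(I0_asym(x, n) / exact - 1))   # error drops with each extra term
```

With b1 = 1/8 and b2 = 9/128 the two- and three-term truncations track the exact value progressively better, as an asymptotic series at large x should.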