
Crash Course: The Math of Quantum Mechanics

March 19, 2013


Quantum mechanics uses a wide range of mathematics including
complex numbers, linear algebra, calculus, and differential equations. In this handout, we give a crash course on some of the basic
mathematics used.

See also: http://simple.wikipedia.org/wiki/Quantum_mechanics

Complex Numbers
Complex numbers give us a way of saying that the square of a number
can be negative. To that end, we have to introduce the imaginary
number i such that

    i = √(−1).                                                   (1)

A complex number then has the form z = a + ib, where a and b are the
usual real numbers we use every day. The number z can be treated
just as any number with a variable i; you just have to remember that
i^2 = −1. (For another simple explanation, see also:
http://simple.wikipedia.org/wiki/Complex_numbers)

Every complex number also has a complex conjugate. This number is
made by going through a complex number and replacing i with −i, and
is represented in physics by z^* = a − ib. One important property of
this conjugation is

    z^* z = (a − ib)(a + ib) = a^2 + iab − iab − i^2 b^2 = a^2 + b^2.   (2)

Since a and b are the usual real numbers, z^* z is not only real but
also non-negative (and positive whenever z ≠ 0).
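
As a quick check of Eq. (2) in code, here is a minimal Python sketch
(the sample values are our own choosing):

    # Check that z* z = a^2 + b^2 is real and non-negative.
    z = 3 + 4j                 # a = 3, b = 4
    print(z.conjugate() * z)   # (25+0j), i.e. 3^2 + 4^2
    print(abs(z)**2)           # 25.0, the same number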

Exponentiation
A very important operation for complex numbers and many physical
applications is the exponential function (see also:
http://simple.wikipedia.org/wiki/Exponential_function). We define it
with an infinite sum:

    e^x = ∑_{n=0}^{∞} x^n/n! = 1 + x/1 + x^2/(2·1) + x^3/(3·2·1) + ⋯   (3)

It has the important property that if you take its derivative, you
get back the same function:

    d/dx e^x = e^x.                                              (4)

We will see this a lot in the form d/dt e^{it} = i e^{it} or
d/dx e^{ipx} = ip e^{ipx}.
Now, this function has the particular property that if instead of x
you use an imaginary number, you obtain

    e^{iθ} = cos θ + i sin θ.                                    (5)

This property gives the famous e^{iπ} = −1. In fact, any complex
number can be represented by z = r e^{iθ}, where r and θ are both
real numbers. We can see from this that z^* z = r^2 (remember to
change i to −i, so z^* = r e^{−iθ}, and also remember that
e^x e^y = e^{x+y}), so if we look at Eq. (2), we can see that
r = √(a^2 + b^2).
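
Eq. (5) and the polar form are easy to verify numerically; here is a
short sketch using Python's cmath module (the numbers are made up):

    import cmath, math

    theta = 0.7
    lhs = cmath.exp(1j * theta)                  # e^{i theta}
    rhs = math.cos(theta) + 1j * math.sin(theta)
    print(abs(lhs - rhs))                        # ~0: Euler's formula, Eq. (5)

    z = 3 + 4j
    r, phi = abs(z), cmath.phase(z)              # polar form z = r e^{i phi}
    print(r)                                     # 5.0 = sqrt(a^2 + b^2)
    print(r * cmath.exp(1j * phi))               # (3+4j) again
    print(cmath.exp(1j * math.pi))               # ~ -1, the famous identity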


Linear algebra
Linear algebra is at the heart of quantum mechanics. It deals with
vectors and matrices in many dimensions.

Vectors
A column vector, denoted |v⟩, is defined as just an ordered set of
numbers in a column:

    |v⟩ = [ z_1 ]                                                (6)
          [ z_2 ]
          [  ⋮  ]
          [ z_N ]

The |·⟩ is called a ket in quantum mechanics. The number N is called
the dimension. We can define the corresponding row vector, ⟨v|, and
it looks like

    ⟨v| = [ z_1^*  z_2^*  ⋯  z_N^* ].                            (7)

The ⟨·| is called a bra in quantum mechanics.

Notice how in making the column vector into a row vector we took the
complex conjugate of each entry. This is very important, since now
we will define the inner product of two vectors |v⟩ and |w⟩ as ⟨v|w⟩
(just replace z with w to obtain the entries of |w⟩), where for the
above we get

    ⟨v|w⟩ = z_1^* w_1 + z_2^* w_2 + ⋯ + z_N^* w_N.               (8)
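
In code, kets can be modeled as arrays of complex numbers, and the
conjugation of the bra entries in Eq. (8) is exactly what NumPy's
np.vdot applies to its first argument. A minimal sketch with
made-up entries:

    import numpy as np

    v = np.array([1 + 2j, 3 - 1j])   # entries of |v>
    w = np.array([2 + 0j, 1 + 1j])   # entries of |w>

    # <v|w> = z1* w1 + z2* w2; np.vdot conjugates its first argument.
    print(np.vdot(v, w))
    print(np.sum(v.conj() * w))      # the same sum written out
    print(np.vdot(v, v))             # real and non-negative, like z* z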

Matrices
Now, what makes linear algebra linear is that we act on vectors with
matrices, which have the property that if M is a matrix, |v⟩ and |w⟩
are vectors, and a, b are just numbers, then

    M(a|v⟩ + b|w⟩) = aM|v⟩ + bM|w⟩.                              (9)

When we write a|v⟩, it means multiply each entry of |v⟩ by the
number a. A matrix looks like (strictly speaking, this is a square
matrix; in general, the numbers of rows and columns need not be the
same, but most of our use will be with square matrices)

    M = [ m_11  m_12  ⋯  m_1N ]
        [ m_21  m_22  ⋯  m_2N ]
        [  ⋮     ⋮    ⋱    ⋮  ]
        [ m_N1  m_N2  ⋯  m_NN ]

Matrices can be multiplied, but the procedure for this is similar to
that of the inner product and not how one might first guess. To
develop this, consider two ways of thinking of a matrix: as a row
vector of column vectors, or as a column vector of row vectors:


    M = [ |m_1⟩  |m_2⟩  ⋯  |m_N⟩ ]   and   M = [ ⟨m̃_1| ]        (10)
                                               [ ⟨m̃_2| ]
                                               [   ⋮   ]
                                               [ ⟨m̃_N| ]

Here we have defined, for example,

    |m_1⟩ = [ m_11 ]   and   ⟨m̃_1| = [ m_11  m_12  ⋯  m_1N ].
            [ m_21 ]
            [  ⋮   ]
            [ m_N1 ]

And like this we can define matrix multiplication:

    MN = [ ⟨m̃_1| ]                                              (11)
         [ ⟨m̃_2| ] [ |n_1⟩  |n_2⟩  ⋯  |n_N⟩ ]
         [   ⋮   ]
         [ ⟨m̃_N| ]

       = [ ⟨m̃_1|n_1⟩  ⟨m̃_1|n_2⟩  ⋯  ⟨m̃_1|n_N⟩ ]
         [ ⟨m̃_2|n_1⟩  ⟨m̃_2|n_2⟩  ⋯  ⟨m̃_2|n_N⟩ ]
         [     ⋮           ⋮       ⋱       ⋮    ]
         [ ⟨m̃_N|n_1⟩  ⟨m̃_N|n_2⟩  ⋯  ⟨m̃_N|n_N⟩ ]

For a real example of this, see:
http://simple.wikipedia.org/wiki/Matrix_(mathematics)

Multiplication by a vector is similar. In fact we can define it for
column and row vectors:

    M|v⟩ = [ ⟨m̃_1| ]        [ ⟨m̃_1|v⟩ ]                        (12)
           [ ⟨m̃_2| ] |v⟩ =  [ ⟨m̃_2|v⟩ ]
           [   ⋮   ]        [    ⋮    ]
           [ ⟨m̃_N| ]        [ ⟨m̃_N|v⟩ ]

and

    ⟨v|M = ⟨v| [ |m_1⟩  |m_2⟩  ⋯  |m_N⟩ ]                        (13)
         = [ ⟨v|m_1⟩  ⟨v|m_2⟩  ⋯  ⟨v|m_N⟩ ].
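
The row-times-column rule of Eq. (11) is what NumPy's @ operator
computes, and Eq. (12) is the same rule applied to a single column.
A short sketch (the matrices are made up):

    import numpy as np

    M = np.array([[1, 2], [3, 4]])
    N = np.array([[0, 1], [1, 0]])

    print(M @ N)              # matrix product, entries <m~_i|n_j>
    print(M[0, :] @ N[:, 1])  # entry (0,1): row 0 of M times column 1 of N

    v = np.array([1, 1])
    print(M @ v)              # M|v>: each entry is <m~_i|v>, Eq. (12)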

Now, we can also construct the hermitian conjugate of a matrix, M†,
by just writing

    M† = [ m_11^*  m_21^*  ⋯  m_N1^* ]                           (14)
         [ m_12^*  m_22^*  ⋯  m_N2^* ]
         [   ⋮       ⋮     ⋱     ⋮   ]
         [ m_1N^*  m_2N^*  ⋯  m_NN^* ]

Notice how every entry is the complex conjugate and flipped across
the diagonal (this is also sometimes called the conjugate
transpose). A matrix is hermitian if M = M†. Note that a hermitian
matrix doesn't necessarily have all real entries, just that the
entries above the diagonal are conjugate to those below the
diagonal, as defined by m_ij = m_ji^*.

There is also a special matrix called the identity matrix, which has
1 on the diagonal and zero off the diagonal:

    I = [ 1  0  ⋯  0 ]                                           (15)
        [ 0  1  ⋯  0 ]
        [ ⋮  ⋮  ⋱  ⋮ ]
        [ 0  0  ⋯  1 ]

This has the property that MI = M and IM = M for any matrix M
(check this as an exercise).
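
A sketch of Eqs. (14) and (15) in NumPy: the hermitian conjugate is
the conjugate transpose, and np.eye builds the identity (the sample
matrix is ours):

    import numpy as np

    M = np.array([[2 + 0j, 1 - 1j],
                  [1 + 1j, 3 + 0j]])

    M_dag = M.conj().T            # M†: conjugate, then flip across the diagonal
    print(np.allclose(M, M_dag))  # True: this M is hermitian, m_ij = m_ji*

    I = np.eye(2)                 # identity matrix, Eq. (15)
    print(np.allclose(M @ I, M))  # True: M I = M
    print(np.allclose(I @ M, M))  # True: I M = M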

Eigenvalues and Eigenvectors


Matrices can have special vectors called eigenvectors, which have
the property that if |E⟩ is an eigenvector of M,

    M|E⟩ = E|E⟩.                                                 (16)

We sometimes identify eigenvectors by their eigenvalues, calling |E⟩
the vector with eigenvalue E. If M is hermitian, as it usually is in
our applications, then the eigenvalues E are real numbers (not
imaginary or complex).

Note that if |E⟩ is an eigenvector, so is a|E⟩. This ambiguity
oftentimes lets us normalize the vectors by using the eigenvector
with ⟨E|E⟩ = 1. This condition is also related to probabilities in
quantum mechanics.

And if M is an N × N matrix, there are in fact N eigenvectors
associated with it. One of the astounding properties of these
vectors is that any vector can be written as a sum of them, i.e.,
they are a basis. For example, if we have a three-dimensional matrix
M with eigenvectors |E_1⟩, |E_2⟩, and |E_3⟩, then any vector can be
written

    |v⟩ = v_1 |E_1⟩ + v_2 |E_2⟩ + v_3 |E_3⟩.                     (17)

In arbitrary dimension N this takes the form

    |v⟩ = ∑_{i=1}^{N} v_i |E_i⟩.                                 (18)

Lastly, these vectors are orthogonal, which means ⟨E_i|E_j⟩ = 0 if i
and j are different. The proof of this is simple. There are two ways
to evaluate ⟨E_i|M|E_j⟩, letting M act to the left or to the right:

    ⟨E_i|M|E_j⟩ = E_i ⟨E_i|E_j⟩ = E_j ⟨E_i|E_j⟩.                 (19)

Thus, rewriting the expressions,

    (E_i − E_j) ⟨E_i|E_j⟩ = 0,                                   (20)

so either E_i = E_j or ⟨E_i|E_j⟩ = 0, and since we have assumed
E_i ≠ E_j, we get that ⟨E_i|E_j⟩ = 0 necessarily. This proves they
are orthogonal. (Sometimes E_i = E_j for two different eigenvectors;
this is called a "degeneracy". A modified version of this property
holds in that case.)
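
For a hermitian matrix, NumPy's np.linalg.eigh returns real
eigenvalues and orthonormal eigenvectors, so Eqs. (16)-(20) can all
be checked at once. A sketch with a made-up hermitian matrix:

    import numpy as np

    M = np.array([[2 + 0j, 1 - 1j],
                  [1 + 1j, 3 + 0j]])

    E, V = np.linalg.eigh(M)      # eigenvalues E[i], eigenvectors V[:, i]
    print(E)                      # real numbers, since M is hermitian

    # M|E_i> = E_i|E_i>, Eq. (16):
    print(np.allclose(M @ V[:, 0], E[0] * V[:, 0]))

    # Orthonormality: <E_i|E_j> = 0 for i != j (and 1 for i = j).
    print(np.allclose(V.conj().T @ V, np.eye(2)))

    # Expansion coefficients v_i = <E_i|v> reproduce |v>, as in Eq. (18):
    v = np.array([1.0, 2j])
    coeffs = V.conj().T @ v
    print(np.allclose(V @ coeffs, v))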


Functions and Fourier transforms


Sometimes, instead of the usual vectors and matrices, we are working
with functions and differentiation (more generally, functions and
linear operators: in fact, all of linear algebra can be applied to
functions and linear operators defined by differentials and
integrals, and this section is a taste of that). For many purposes
functions actually behave like vectors, and differentiation like a
matrix. For instance, take the differential equation

    −i d/dx f(x) = k f(x).                                       (21)

This looks very much like the eigenvalue equation seen in Eq. (16)
if we change M to −i d/dx, |E⟩ to f(x), and E to k. The solution to
it (which you can check using Eq. (4)) is

    f(x) = e^{ikx}.                                              (22)
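
Eq. (21) can be spot-checked on a grid with a finite-difference
derivative; the grid and the value of k below are arbitrary choices:

    import numpy as np

    k = 2.5
    x = np.linspace(-5, 5, 2001)
    dx = x[1] - x[0]
    f = np.exp(1j * k * x)                # candidate eigenfunction e^{ikx}

    dfdx = (f[2:] - f[:-2]) / (2 * dx)    # central differences, interior points

    # -i df/dx should equal k f, Eq. (21), up to discretization error:
    print(np.max(np.abs(-1j * dfdx - k * f[1:-1])))   # a small number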

The relation to eigenvectors goes even further. Any function can be
written as an integral of these functions; when going from vectors
to functions, sums are (sometimes) replaced by integrals. Then any
g(x) can be written as

    g(x) = ∫ g̃(k) e^{ikx} dk/2π.                                 (23)

This decomposition into basis functions is called a Fourier
decomposition or an inverse Fourier transform. The reason it is an
inverse Fourier transform is that we can actually find the function
g̃(k) (called the Fourier transform) by taking

    g̃(k) = ∫ g(x) e^{−ikx} dx.                                   (24)

The introduction of 2π is necessary, though it is sometimes placed
elsewhere in the definitions; in much of quantum mechanics, the
1/2π is placed with the inverse Fourier transform.

To check that this is the case, we can substitute Eq. (24) into
Eq. (23), changing the dummy variable x to y, to see

    g(x) = ∫∫ g(y) e^{ik(x−y)} dy dk/2π.                         (25)

Now the k-integral can be done, and it gives a new function we call
the δ-function:

    δ(x−y) = ∫ e^{ik(x−y)} dk/2π.                                (26)

This function has the property that if we do the k-integral in
Eq. (25), we see:

    g(x) = ∫ g(y) δ(x−y) dy.                                     (27)

Eq. (27) is really what defines the δ-function, but you can think of
δ(x−y) as a function that is zero everywhere except when x − y = 0,
where it is infinitely large. For some pictures and a more in-depth
discussion, see: http://en.wikipedia.org/wiki/Dirac_delta_function
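
One way to picture Eq. (26) is to cut the k-integral off at ±K,
which gives sin(K(x−y))/(π(x−y)); as K grows this becomes an ever
sharper spike, and smearing a function against it reproduces the
function, as in Eq. (27). A rough numerical sketch (grid, cutoff,
and test function are our choices):

    import numpy as np

    K = 50.0                                 # cutoff of the k-integral
    x = 1.0
    y = np.linspace(-10, 10, 4001)
    dy = y[1] - y[0]

    # Truncated Eq. (26): sin(K(x-y))/(pi(x-y)), via np.sinc to avoid 0/0.
    delta_K = (K / np.pi) * np.sinc(K * (x - y) / np.pi)

    g = np.exp(-y**2 / 2)                    # a test function g(y)
    print(np.sum(g * delta_K) * dy)          # ~ g(x), Eq. (27)
    print(np.exp(-x**2 / 2))                 # g(x) itself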
There are many more relations between functions and linear algebra;
in fact, the two are intimately connected. Throughout the course, we
will find different basis functions by solving eigenvalue equations
of functions. Much of the machinery here will carry over to those
problems as well.



Inner product of functions


We defined inner products for vectors, but just as we discussed in
the last section that linear algebra works for functions, we briefly
define the inner product of functions. If we have functions f(x) and
g(x), then their inner product is defined by

    ⟨f|g⟩ = ∫ f^*(x) g(x) dx.                                    (28)

Depending on the problem, the integral might not be from −∞ to +∞;
the bounds of the integral can change. We can also write

    ⟨f| −i d/dx |g⟩ = ∫ f^*(x) (−i dg(x)/dx) dx.                 (29)

As an exercise, use the Fourier transforms and the δ-function from
the last section to show that

    ⟨f| −i d/dx |g⟩ = ∫ k f̃^*(k) g̃(k) dk/2π.                     (30)
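
For those who want a numerical sanity check of Eq. (30), both sides
can be discretized on a grid; the functions, grids, and step sizes
below are arbitrary choices:

    import numpy as np

    x = np.linspace(-15, 15, 1501)
    dx = x[1] - x[0]
    k = np.linspace(-8, 8, 801)
    dk = k[1] - k[0]

    f = np.exp(-x**2)                        # f(x), a Gaussian
    g = np.exp(-(x - 1)**2)                  # g(x), a shifted Gaussian

    # Left side: integral of f*(x) (-i dg/dx) dx, Eq. (29).
    lhs = np.sum(f.conj() * (-1j) * np.gradient(g, dx)) * dx

    # Fourier transforms on the k-grid, Eq. (24).
    ft = np.exp(-1j * np.outer(k, x)) @ (f * dx)
    gt = np.exp(-1j * np.outer(k, x)) @ (g * dx)

    # Right side: integral of k f~*(k) g~(k) dk/2pi, Eq. (30).
    rhs = np.sum(k * ft.conj() * gt) * dk / (2 * np.pi)
    print(lhs, rhs)                          # the two should agree closely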

Gaussian integrals
An important integral for physics is the Gaussian integral. Written
out in full glory, it is

    ∫ e^{−ax^2 + bx} dx = √(π/a) e^{b^2/(4a)}.                   (31)

With this identity we can actually see that the Fourier transform of
a Gaussian is a Gaussian. Take the function g(x) = e^{−ax^2}; then
we can write

    g̃(k) = ∫ e^{−ax^2} e^{−ikx} dx.                              (32)

Thus, if we let b = −ik, the right-hand side of Eq. (32) can be
evaluated to be

    g̃(k) = √(π/a) e^{−k^2/(4a)}.                                 (33)

As an exercise, perform the inverse Fourier transform to recover
g(x) = e^{−ax^2}.

For those interested, this can be both proven and generalized, as
seen in http://en.wikipedia.org/wiki/Gaussian_integral
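
Eqs. (31)-(33) are also easy to confirm numerically; a brute-force
sketch with made-up parameter values:

    import numpy as np

    a, b = 1.5, 0.7
    x = np.linspace(-20, 20, 40001)
    dx = x[1] - x[0]

    # Left side of Eq. (31) by direct summation:
    print(np.sum(np.exp(-a * x**2 + b * x)) * dx)
    print(np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a)))   # right side

    # Fourier transform of a Gaussian at one value of k, Eqs. (32)-(33):
    k = 2.0
    gt = np.sum(np.exp(-a * x**2) * np.exp(-1j * k * x)) * dx
    print(gt)                                            # imaginary part ~ 0
    print(np.sqrt(np.pi / a) * np.exp(-k**2 / (4 * a)))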
