
Monte Carlo Methods

H. Rieger, Saarland University, Saarbrücken, Germany

Summer School on Computational Statistical Physics, 4.-11.8.2010


NCCU Taipei, Taiwan

Monte Carlo Methods =

Stochastic evaluation of physical quantities using random numbers.
Example: the pebble game on the beach of Monaco
= computation of π using random events.
Kids' game: direct sampling

Adults' game: Markov chain sampling

[after W. Krauth, Statistical Mechanics: Algorithms and Computations, Oxford Univ. Press]
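As an illustration of the kids' game (direct sampling), here is a minimal C sketch: pebbles are thrown uniformly into the square [-1,1]^2, and the fraction that lands inside the inscribed unit circle estimates π/4. The sample size and the use of rand() are illustrative choices, not part of the lecture.

/* direct_pi.c - direct-sampling estimate of pi (kids' pebble game) */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long trials = 1000000, hits = 0;
    srand(42);                                     /* fixed seed for reproducibility */
    for (long t = 0; t < trials; t++) {
        double x = 2.0 * rand() / RAND_MAX - 1.0;  /* uniform in [-1,1] */
        double y = 2.0 * rand() / RAND_MAX - 1.0;
        if (x * x + y * y <= 1.0)                  /* pebble inside the unit circle? */
            hits++;
    }
    printf("pi estimate: %f\n", 4.0 * (double)hits / trials);
    return 0;
}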

Direct sampling

Pebbles in the square: easy.
Hard spheres in 2d: hard!

There is no direct sampling algorithm for hard spheres
(random sequential adsorption is NOT one!)

Markov chain sampling

What to do when a stone thrown by the lady lands here, outside the heliport?
1) Simply move on
2) Climb over the fence and continue until, by accident, she re-enters the heliport
3) ... ?

Move a pebble on a 3x3 lattice probabilistically,
such that each site a is visited with the same probability p(a) = 1/9.

Transition probabilities: w(a→b) = 1/4 for each nearest neighbor b ≠ a,
and the staying probability depends on the position of a:
w(a→a) = 0 for the center site (4 neighbors),
w(a→a) = 1/4 for an edge site (3 neighbors),
w(a→a) = 1/2 for a corner site (2 neighbors).

Detailed Balance

For the desired distribution p(a) the following stationarity condition must hold:
p(a) = Σ_b p(b) w(b→a)

Together with the normalization Σ_b w(a→b) = 1 this yields:
Σ_b [ p(b) w(b→a) − p(a) w(a→b) ] = 0

This condition is fulfilled when
p(a) w(a→b) = p(b) w(b→a),  p(a) w(a→c) = p(c) w(c→a),  etc.

This is called the detailed balance condition;
for a corner site a with neighbors b and c it leads here to w(a→b) = w(a→c) = 1/4 and w(a→a) = 1/2.

Rejections

Adults' pebble game solution: if a stone lands outside the heliport,
put a stone on top of the present stone and continue from there,
i.e. reject moves outside the square!

The rejections lead to more or less large piles of stones close to the boundary.
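A minimal C sketch of the adults' game (Markov chain sampling with rejections), under the same illustrative assumptions as above: the pebble performs a random walk inside the square, attempted moves outside are rejected and the current position is counted again, and the fraction of counted positions inside the circle again estimates π/4. The step size delta is an arbitrary choice.

/* markov_pi.c - Markov-chain estimate of pi (adults' pebble game) */
#include <stdio.h>
#include <stdlib.h>

double uniform(double a, double b)          /* uniform random number in [a,b] */
{
    return a + (b - a) * rand() / RAND_MAX;
}

int main(void)
{
    long trials = 1000000, hits = 0;
    double x = 1.0, y = 1.0, delta = 0.3;   /* start in a corner of [-1,1]^2 */
    srand(42);
    for (long t = 0; t < trials; t++) {
        double xn = x + uniform(-delta, delta);
        double yn = y + uniform(-delta, delta);
        if (xn >= -1.0 && xn <= 1.0 && yn >= -1.0 && yn <= 1.0) {
            x = xn;                         /* accept: move the pebble */
            y = yn;
        }                                   /* else: reject, stay (stones pile up) */
        if (x * x + y * y <= 1.0)
            hits++;
    }
    printf("pi estimate: %f\n", 4.0 * (double)hits / trials);
    return 0;
}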

Master equation

The Markov chain is described by a master equation (t = time step):
p_{t+1}(a) − p_t(a) = Σ_b [ p_t(b) w(b→a) − p_t(a) w(a→b) ]

The time-independent probability distribution p(a) is a stationary solution of this equation
if the transition probabilities w(a→b) etc. fulfill detailed balance:
p(a) w(a→b) = p(b) w(b→a)

For a given p(a) one can, for instance, choose
w(a→b) = min[ 1, p(b)/p(a) ],
which is the Metropolis rule.

Monte Carlo for Thermodynamic Equilibrium

a are the configurations of a many-particle system,
E(a) the energy of configuration a.
Thermodynamic equilibrium at temperature T is then described by the
Boltzmann distribution
p(a) = exp(−β E(a)) / Z,   β = 1/(k_B T) the inverse temperature.

Z = Σ_a exp(−β E(a)) is the normalization, called the partition function.

Thus the Metropolis rule for setting up a Markov chain leading to
the Boltzmann distribution is
w(a→b) = min[ 1, exp(−β (E(b) − E(a))) ].

Hard spheres

a = (r1, r2, ..., rN)
- configurations = coordinates of all N spheres in d dimensions.

All spheres have radius σ and are not allowed to overlap;
otherwise all configurations have the same energy (no interactions), i.e.:

H(a) = ∞ if there is a pair (i,j) with |ri − rj| < 2σ
H(a) = 0 otherwise.

Define w(a→b) in the following way:

In configuration a choose randomly a particle i
and displace it by a random vector δ - this constitutes configuration b.
w(a→b) = 1 if b is allowed (no overlaps),
w(a→b) = 0 (reject) if the displaced particle overlaps with some other particle.
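A minimal sketch of one such hard-sphere Monte Carlo move, here for disks in a 2d periodic box; the particle number, box size, radius SIGMA and maximum displacement DELTA are illustrative assumptions, not values from the lecture.

/* One Monte Carlo move for N hard disks in a periodic box of size L (2d sketch). */
#include <stdlib.h>
#include <math.h>

#define N 64
#define L 10.0
#define SIGMA 0.5
#define DELTA 0.2

double rx[N], ry[N];                          /* particle coordinates */

static double rnd(double a, double b) { return a + (b - a) * rand() / RAND_MAX; }

static double pbc(double x) { return x - L * floor(x / L); }   /* fold into [0,L) */

static double min_image(double d)             /* minimum-image distance component */
{
    if (d >  0.5 * L) d -= L;
    if (d < -0.5 * L) d += L;
    return d;
}

void hard_disk_move(void)
{
    int i = rand() % N;                       /* pick a random particle */
    double xn = pbc(rx[i] + rnd(-DELTA, DELTA));
    double yn = pbc(ry[i] + rnd(-DELTA, DELTA));
    for (int j = 0; j < N; j++) {             /* overlap check against all others */
        if (j == i) continue;
        double dx = min_image(xn - rx[j]);
        double dy = min_image(yn - ry[j]);
        if (dx * dx + dy * dy < 4.0 * SIGMA * SIGMA)
            return;                           /* overlap: reject, keep old position */
    }
    rx[i] = xn;                               /* no overlap: accept the move */
    ry[i] = yn;
}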

Hard spheres (2)

[Figure: snapshots of the hard-sphere configuration at successive iterations t,
with a tagged particle highlighted.]

Soft spheres / interacting particles

a = {(r1,p1), (r2,p2), ..., (rN,pN)}
- configurations = coordinates and momenta of all N particles in d dimensions.

Energy:
H(a) = Σ_i p_i²/(2m) + V(r1, ..., rN)

Partition function:
Z ∝ ∫ d^{dN}p d^{dN}r exp(−β H(a))

Performing the momentum integral (Gaussian) one is left with the
configurational integral:
I = ∫ d^{dN}r exp(−β V(r1, ..., rN))

Example: LJ (Lennard-Jones) pair potential
V = Σ_{i<j} v(|ri − rj|),  v(r) = 4ε [ (σ/r)^12 − (σ/r)^6 ],  ri ∈ [0,L]^d,  L = box size.

MC simulation for soft spheres: Metropolis

Choose randomly a particle i; its position is ri.
Define the new position ri' = ri + δ, with δ a random displacement vector, δ ∈ [−ε, ε]^3.
All other particles remain fixed.
Acceptance probability for the new position:
w = 1                if ΔE = E(new) − E(old) ≤ 0
w = exp(−β ΔE)       otherwise.
Repeat many times.
Measurements:
energy, specific heat, spatial correlation functions, structure function.
Do not forget equilibration!

Note: gives the same results for equilibrium averages
as molecular dynamics.
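A minimal sketch of one such Metropolis move for Lennard-Jones particles in a 3d periodic box; only the energy of the displaced particle has to be recomputed. The parameters (N, box size, ε, σ, maximum displacement) and the absence of a potential cutoff are simplifying assumptions of this sketch.

/* One Metropolis move for N Lennard-Jones particles in a periodic cubic box (sketch). */
#include <stdlib.h>
#include <math.h>

#define N 100
#define BOX 10.0
#define EPS 1.0
#define SIG 1.0
#define DMAX 0.1

double r[N][3];                               /* particle coordinates */

static double rnd(double a, double b) { return a + (b - a) * rand() / RAND_MAX; }

static double pbc(double x) { return x - BOX * floor(x / BOX); }   /* fold into [0,BOX) */

static double min_image(double d)             /* minimum-image convention */
{
    if (d >  0.5 * BOX) d -= BOX;
    if (d < -0.5 * BOX) d += BOX;
    return d;
}

/* Lennard-Jones energy of particle i at position pos with all other particles. */
static double energy_of(int i, const double pos[3])
{
    double e = 0.0;
    for (int j = 0; j < N; j++) {
        if (j == i) continue;
        double r2 = 0.0;
        for (int k = 0; k < 3; k++) {
            double d = min_image(pos[k] - r[j][k]);
            r2 += d * d;
        }
        double s6 = pow(SIG * SIG / r2, 3.0);
        e += 4.0 * EPS * (s6 * s6 - s6);      /* 4*eps*[(sig/r)^12 - (sig/r)^6] */
    }
    return e;
}

void lj_metropolis_move(double beta)
{
    int i = rand() % N;
    double rnew[3];
    for (int k = 0; k < 3; k++)               /* random displacement in [-DMAX,DMAX]^3 */
        rnew[k] = pbc(r[i][k] + rnd(-DMAX, DMAX));
    double dE = energy_of(i, rnew) - energy_of(i, r[i]);
    if (dE <= 0.0 || rnd(0.0, 1.0) < exp(-beta * dE))
        for (int k = 0; k < 3; k++)           /* accept the new position */
            r[i][k] = rnew[k];
    /* else: reject, keep the old position */
}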

Discrete systems: Ising spins

System of N interacting Ising spins Si ∈ {+1, −1},
placed on the nodes of a d-dimensional lattice.
a = (S1, S2, ..., SN): spin configurations.
Energy:
H(a) = − Σ_{(i,j)} Jij Si Sj − h Σ_i Si
Jij = coupling strengths, e.g. Jij = J > 0 for all nearest-neighbor pairs (i,j): ferromagnet.
h = external field strength.
For instance in 1d:
H = −J Σ_{i=1}^{N} Si S_{i+1} − h Σ_i Si,  with periodic boundary conditions S_{N+1} = S_1.

Quantities of interest / Measurements

Magnetization:   m = <M>/N,  M(a) = Σ_i Si
Susceptibility:  χ = β ( <M²> − <M>² ) / N
Average energy:  <E> = <H>
Specific heat:   C = β² ( <H²> − <H>² ) / N

How to compute:  <O> ≈ (1/t_max) Σ_{t=1}^{t_max} O(a_t),
where a_t are the configurations generated by the
Markov chain (the simulation) at time step t.

Ising spins: Metropolis update

ΔH(S, S') = H(S') − H(S)
w(S→S') = 1             for ΔH ≤ 0
w(S→S') = exp(−β ΔH)    for ΔH > 0

Procedure Ising Metropolis:
  Initialize S = (S1, ..., SN)
label: Generate new configuration S'
  Calculate ΔH = H(S, S')
  if ΔH ≤ 0
    accept S' (i.e. S ← S')
  else
    generate random number x ∈ [0,1]
    if x < exp(−β ΔH)
      accept S' (i.e. S ← S')
  compute O(S)
  goto label

Single spin flip Metropolis for 2d Ising

Procedure single spin flip
  Input L, T, N = L*L
  Define arrays: S[i], i=1,...,N, h[i], i=1,...,N, etc.
  Initialize S[i], nxm[i], nxp[i], ..., h[i]
  step = 0
  while (step < max_step)
    choose random site i
    calculate dE = 2*h[i]*S[i]          (h[i] = local field at site i)
    if ( dE <= 0 )
      S[i] = -S[i]; update h[nxm[i]], h[nxp[i]], ...
    else
      p = exp(-dE/T)
      x = rand()                        (uniform random number in [0,1])
      if ( x < p )
        S[i] = -S[i]; update h[nxm[i]], h[nxp[i]], ...
    compute M(S), E(S), ...
    accumulate M, E, ...
    step++
  Output m, e, ...

Implementation issues

Periodic boundary conditions
Neighbor tables

e.g. for the neighbor in the +x direction on an L×L lattice (sites i = 1, ..., N):
nxp[i] = i + 1        if i mod L ≠ 0
nxp[i] = i − L + 1    if i mod L = 0
and analogously nxm[i], nyp[i], nym[i] for the other directions.
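A minimal C sketch of building such neighbor tables for an L×L lattice with periodic boundary conditions; it uses 0-indexed sites (as natural in C), whereas the formulas above number the sites 1, ..., N.

/* Build neighbor tables for an LxL square lattice with periodic b.c. (0-indexed). */
#define L 32
#define N (L * L)

int nxp[N], nxm[N], nyp[N], nym[N];        /* right, left, up, down neighbors */

void build_neighbor_tables(void)
{
    for (int i = 0; i < N; i++) {
        int x = i % L, y = i / L;
        nxp[i] = y * L + (x + 1) % L;          /* right neighbor, wraps around */
        nxm[i] = y * L + (x - 1 + L) % L;      /* left neighbor */
        nyp[i] = ((y + 1) % L) * L + x;        /* upper neighbor */
        nym[i] = ((y - 1 + L) % L) * L + x;    /* lower neighbor */
    }
}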

Implementation issues (2)

With single spin flips E(S) and E(S') differ only by 4 terms in 2d (6 terms in 3d):
flipping spin i means Si' = −Si, all other spins fixed, i.e. Sj' = Sj for all j ≠ i.

Tabulate the exponentials exp(−4βJ), exp(−8βJ), ...
to avoid transcendental functions in the innermost loop.

Use the array h[i] for the local fields: if the flip is rejected nothing needs to be done,
if the flip is accepted update Si and h[nxm[i]], h[nxp[i]], etc.
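A minimal sketch of one single-spin-flip step that uses a precomputed table of Boltzmann factors and the local-field array h[i], assuming h = 0, J = 1 and the neighbor tables from the previous sketch; the arrays S[] and h[] are assumed to be initialized consistently elsewhere.

/* One Metropolis single-spin-flip step with tabulated Boltzmann factors (h = 0, J = 1). */
#include <stdlib.h>
#include <math.h>

extern int nxp[], nxm[], nyp[], nym[];   /* neighbor tables from the previous sketch */
extern int S[];                          /* spins, +1 or -1 */
extern int h[];                          /* local fields: h[i] = sum of the 4 neighbor spins */
double w[9];                             /* w[dE] = exp(-beta*dE) for dE = 4, 8 */

void init_table(double beta)
{
    w[4] = exp(-4.0 * beta);
    w[8] = exp(-8.0 * beta);
}

void metropolis_step(int N)
{
    int i = rand() % N;
    int dE = 2 * S[i] * h[i];                          /* energy change of the flip */
    if (dE <= 0 || rand() / (double)RAND_MAX < w[dE]) {
        S[i] = -S[i];                                  /* accept: flip the spin ... */
        h[nxp[i]] += 2 * S[i];                         /* ... and update the local fields */
        h[nxm[i]] += 2 * S[i];                         /*     of the four neighbors */
        h[nyp[i]] += 2 * S[i];
        h[nym[i]] += 2 * S[i];
    }                                                  /* reject: nothing to do */
}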

Study of phase transitions with MC

The Ising model in d > 1 has a 2nd order phase transition at h = 0, T = Tc.

Magnetization (order parameter): m(T, h).

[Figure: m vs. T at h = 0 - a 2nd order phase transition as a function of T at h = 0;
the (T, h) phase diagram;
m vs. h at T < Tc - a 1st order phase transition as a function of h.]

Critical behavior

Singularities at Tc in the thermodynamic limit (N → ∞), with t = (T − Tc)/Tc:

Magnetization:         m ~ (−t)^β   for T < Tc   (here β denotes the order-parameter exponent)
Susceptibility:        χ ~ |t|^(−γ)
Specific heat:         C ~ |t|^(−α)
Correlation function:  G(r) = <S0 Sr> − <S0><Sr> ~ r^(−(d−2+η)) exp(−r/ξ)
Correlation length:    ξ ~ |t|^(−ν)
Scaling relations:     e.g. α + 2β + γ = 2,  γ = ν(2 − η),  2 − α = dν (hyperscaling)

Finite size behavior

[Figure: finite-size behavior of the observables near Tc, with periodic b.c.;
the singularities are rounded and shifted in a finite system.]

Finite Size Scaling

FSS forms (t = (T − Tc)/Tc):
m_L(T) ≈ L^(−β/ν) f_m( t L^(1/ν) )
χ_L(T) ≈ L^(γ/ν) f_χ( t L^(1/ν) )

4th order cumulant (Binder cumulant):
U_L = 1 − <m^4>_L / ( 3 <m^2>_L² )

Dimensionless (no L-dependent prefactor): U_L(T) ≈ f_U( t L^(1/ν) ),
so the curves for different L intersect at Tc
- good for the localization of the critical point.
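A minimal sketch of accumulating the moments <m²> and <m⁴> during a run and forming the Binder cumulant from them; the accumulator names are illustrative.

/* Accumulate <m^2> and <m^4> during the run and form the Binder cumulant. */
double sum_m2 = 0.0, sum_m4 = 0.0;
long   n_meas = 0;

void measure(const int *S, int N)
{
    long M = 0;
    for (int i = 0; i < N; i++)
        M += S[i];
    double m = (double)M / N;               /* magnetization per spin */
    sum_m2 += m * m;
    sum_m4 += m * m * m * m;
    n_meas++;
}

double binder_cumulant(void)
{
    double m2 = sum_m2 / n_meas, m4 = sum_m4 / n_meas;
    return 1.0 - m4 / (3.0 * m2 * m2);      /* U_L = 1 - <m^4> / (3 <m^2>^2) */
}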

Critical exponents of the d-dim. Ising model

Slowing down at the critical point

The quality of the MC estimates of thermodynamic expectation values depends
on the number of uncorrelated configurations.
One needs an estimate for the correlation time τ of the Markov process!

Autocorrelation function:
A_O(t) = ( <O(t0) O(t0+t)> − <O>² ) / ( <O²> − <O>² ) ~ exp(−t/τ)

Schematically: τ diverges for T → Tc (critical slowing down).
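A minimal sketch of estimating this autocorrelation function from a recorded time series of an observable (e.g. the magnetization); the correlation time τ can then be read off from its decay. Array names and the lack of error analysis are simplifications.

/* Normalized autocorrelation function A(t) of a recorded time series O[0..n-1], t < n. */
double autocorr(const double *O, int n, int t)
{
    double mean = 0.0, var = 0.0, c = 0.0;
    for (int s = 0; s < n; s++)
        mean += O[s];
    mean /= n;
    for (int s = 0; s < n; s++)
        var += (O[s] - mean) * (O[s] - mean);
    var /= n;
    for (int s = 0; s < n - t; s++)
        c += (O[s] - mean) * (O[s + t] - mean);
    c /= (n - t);
    return c / var;                  /* A(0) = 1; decays roughly like exp(-t/tau) */
}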

Configurations should decorrelate
faster than with single spin flips!
Solution: cluster moves.

Cluster Algorithms

Construction process of the clusters in the Wolff algorithm:

Start from an initial + site, include neighboring + sites with probability p (left).
The whole cluster (gray) is then flipped.

c1 and c2 denote the numbers of boundary bonds that connect the cluster to parallel spins
before the flip (configuration a) and after the flip (configuration b); in the figure c1 = 10, c2 = 14.

A priori or construction probabilities of this cluster:
A(a→b) ∝ (1 − p)^c1,   A(b→a) ∝ (1 − p)^c2

Detailed balance condition: p(a) A(a→b) w(a→b) = p(b) A(b→a) w(b→a)

Wolff algorithm (cont.)

Once the cluster is constructed with a given p, one gets c1 and c2,
with which one can compute the acceptance probability w(a→b) from detailed balance:
w(a→b) / w(b→a) = exp(−2βJ(c1 − c2)) (1 − p)^(c2 − c1)

But with p = 1 − exp(−2βJ) the acceptance probability w(a→b) becomes 1!

Thus with p = 1 − exp(−2βJ) the constructed cluster is always flipped!
Remarkable speed-up, no critical slowing down at the critical point!

Wolff cluster flipping for Ising

(1) Randomly choose a site i.

(2) Draw bonds to all nearest neighbors j
    with probability p = 1 − exp(−2βJ δ_{Si,Sj}).

(3) If bonds have been drawn to any site j, draw bonds to all
    nearest neighbors k of j with probability p = 1 − exp(−2βJ δ_{Sj,Sk}).

(4) Repeat step (3) until no more bonds are created.

(5) Flip all spins in the cluster.

(6) Go to (1).

(Note: δ_{S,S'} = 1 for S = S' and δ_{S,S'} = 0 for S ≠ S', such that p = 0 for Sj ≠ Sk.)
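A minimal runnable sketch of this Wolff update for the 2d Ising model at h = 0, J = 1 with periodic boundary conditions; it uses the standard trick of flipping spins as they are added to the cluster, so the flipped spins also mark the sites already visited. Lattice size and helper names are illustrative.

/* Wolff single-cluster update for the 2d Ising model (J = 1, h = 0), periodic b.c. */
#include <stdlib.h>
#include <math.h>

#define L 32
#define N (L * L)

int S[N];                                  /* spins, +1 or -1 */

static int nbr(int i, int dir)             /* neighbor of site i in direction dir = 0..3 */
{
    int x = i % L, y = i / L;
    switch (dir) {
    case 0:  return y * L + (x + 1) % L;          /* right */
    case 1:  return y * L + (x - 1 + L) % L;      /* left  */
    case 2:  return ((y + 1) % L) * L + x;        /* up    */
    default: return ((y - 1 + L) % L) * L + x;    /* down  */
    }
}

void wolff_update(double beta)
{
    double p = 1.0 - exp(-2.0 * beta);     /* bond probability for parallel spins */
    int stack[N], top = 0;
    int seed = rand() % N;                 /* (1) randomly choose a site */
    int cluster_spin = S[seed];
    S[seed] = -cluster_spin;               /* flip immediately; also marks the site as visited */
    stack[top++] = seed;
    while (top > 0) {                      /* (2)-(4) grow the cluster */
        int i = stack[--top];
        for (int dir = 0; dir < 4; dir++) {
            int j = nbr(i, dir);
            if (S[j] == cluster_spin &&                 /* parallel and not yet in the cluster */
                rand() / (double)RAND_MAX < p) {        /* bond drawn with probability p */
                S[j] = -cluster_spin;                   /* (5) flip as we go */
                stack[top++] = j;
            }
        }
    }
}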

Swendsen-Wang algorithm

Similar to Wolff, but:

(1) Draw bonds between ALL pairs of parallel nearest neighbors
    with probability p = 1 − exp(−2βJ).

(2) Identify the connected clusters.

(3) Flip each individual cluster with probability 1/2.

Anda mungkin juga menyukai