
PH2610 Classical and Statistical Thermodynamics Course Content

Section 1 Introduction to Classical Thermodynamics


1.1 Introduction: Macroscopic and microscopic descriptions
1.2 Systems and Variables
1.3 Thermodynamic Equilibrium
1.3.1 Equilibrium
1.3.2 Thermodynamic state
1.3.3 Functions of state
1.3.4 Reversible processes
1.3.5 Cyclic processes
1.3.6 Temperature, thermal equilibrium and the zeroth law of thermodynamics
1.3.7 Temperature scales
1.4 The First Law of Thermodynamics
1.5 Work
1.5.1 Expression for work
1.5.2 An example: work done on a gas
1.5.3 Other examples of expressions for work

Section 2

Introduction to Statistical Mechanics

2.1 Introducing entropy
2.1.1 Boltzmann's formula
2.2 The spin 1/2 paramagnet as a model system
2.2.1 Quantum states of a spin 1/2 paramagnet
2.2.2 The notion of a microstate
2.2.3 Counting the microstates
2.2.4 Distribution of particles among states
2.2.5 The average distribution and the most probable distribution
2.3 Entropy and the second law of thermodynamics
2.3.1 Order and entropy
2.3.2 The second law of thermodynamics
2.3.3 Thermal interaction between systems and temperature
2.4 More on the S = 1/2 paramagnet
2.4.1 Energy distribution of the S = 1/2 paramagnet
2.4.2 Magnetisation of the S = 1/2 magnet: Curie's law
2.4.3 Entropy of the S = 1/2 magnet
2.5 The Boltzmann distribution
2.5.1 Thermal interaction with the rest of the world: using a Gibbs ensemble
2.5.2 Most likely distribution
2.5.3 What are α and β?
2.5.4 Link between the partition function and thermodynamic variables
2.5.5 Finding Thermodynamic Variables

2.6 Localised systems
2.6.1 Working in terms of the partition function of a single particle
2.6.2 Using the partition function I: the S = 1/2 paramagnet (again)
2.6.3 Using the partition function II: the Einstein model of a solid

Section 3

Entropy and Classical Thermodynamics

3.1 Entropy in thermodynamics and statistical mechanics
3.1.1 The Second Law of Thermodynamics
3.1.2 Restatement of the First Law
3.1.3 Microscopic interpretation of the first law
3.1.4 Entropy changes in irreversible processes
3.2 Alternative statements of the Second Law
3.2.1 Some statements of the Second Law
3.2.2 Demonstration that the law of increasing entropy implies statement 1a
3.3 The Carnot cycle
3.3.1 Introduction to Carnot cycles: thermodynamic temperature
3.3.2 Efficiency of heat engines and heat pumps
3.3.3 Equivalence of ideal gas and thermodynamic temperatures
3.4 Thermodynamic potentials
3.4.1 Equilibrium states
3.4.2 Constant temperature (and volume): the Helmholtz potential
3.4.3 Constant pressure and energy: the Enthalpy function
3.4.4 Constant pressure and temperature: the Gibbs free energy
3.4.5 Differential expressions for the potentials
3.4.6 Natural variables and the Maxwell relations
3.5 Some applications
3.5.1 Entropy of an ideal gas
3.5.2 General expression for Cp − CV
3.5.3 Joule expansion: ideal gas
3.5.4 Joule expansion: real (non-ideal) gas
3.5.5 Joule-Kelvin or Joule-Thomson process: throttling
3.5.6 Joule-Thomson coefficient
3.5.7 Joule-Thomson effect for real and ideal gases
3.5.8 Inversion temperature and liquefaction of gases

Section 4

Statistical Thermodynamics of delocalised particles

4.1 Classical Ideal Gas
4.1.1 Indistinguishability
4.1.2 Classical approximation
4.1.3 Specifying the single-particle energy states
4.1.4 Density of states
4.1.5 Calculating the partition function
4.1.6 Thermodynamic properties
4.1.7 Thermal de Broglie wavelength
4.1.8 Breakdown of the classical approximation

4.2 Quantum statistics
4.2.2 Bosons and Fermions
4.2.3 The quantum distribution functions
4.2.4 The chemical potential
4.2.5 Methodology for quantum gases
4.3 The Fermi gas
4.3.1 The Fermi-Dirac distribution
4.3.2 Fermi gas at zero temperature
4.3.3 Fermi temperature and Fermi wavevector
4.3.4 Qualitative behaviour of a degenerate Fermi gas
4.3.5 Fermi gas at low temperatures: simple model
4.3.6 Internal energy and thermal capacity
4.3.7 Equation of state
4.4 The Bose gas
4.4.1 Generalisation of the density of states function
4.4.2 Examples of Bose systems
4.4.3 Helium-4
4.4.4 Phonons and photons: quantised waves
4.4.5 Photons in thermal equilibrium: black body radiation
4.4.6 The spectrum maximum
4.4.7 Internal energy and thermal capacity of a photon gas
4.4.8 Energy flux
4.4.9 Phonons: Debye model of a solid
4.4.10 Phonon internal energy and thermal capacity
4.4.11 Limiting forms at high and low temperatures

Section 5

Further Thermodynamics

5.1 Phase equilibrium
5.1.1 Conditions for equilibrium coexistence
5.1.2 The phase diagram
5.1.3 Clausius-Clapeyron equation
5.1.4 Saturated vapour pressure
5.2 The Third Law of thermodynamics
5.2.1 History
5.2.2 Entropy
5.2.3 Quantum viewpoint
5.2.4 Unattainability of absolute zero
5.2.5 Heat capacity at low temperatures
5.2.6 Other consequences of the Third Law
5.2.7 Pessimist's statement of the laws of thermodynamics

Section 0 Aims and Objectives of the Course


0.1 Thermodynamics and Statistical Mechanics
0.1.1 The nature of these disciplines
Thermodynamics provides a description of the macroscopic properties of matter and their interrelationships. The laws of thermodynamics impose inviolable constraints on the behaviour of systems. Thus, for instance, we might find a relation between the thermal capacity of a gas measured at constant pressure and that measured at constant volume. Thermodynamics provides relationships between thermodynamic variables.

Statistical Mechanics explains the macroscopic properties of matter in terms of its microscopic details. Thus it enables us to calculate the thermodynamic variables of a system from a microscopic model of that system. It helps us understand the significance of the thermodynamic variables, particularly those such as temperature and entropy which have no microscopic/mechanical explanation. Statistical mechanics will also explain the laws of thermodynamics from microscopic first principles.

Statistical mechanics has predictive power. We construct a microscopic model; using the tools of statistical mechanics we can calculate something which is measurable, thereby offering the possibility of an experimental test of our model. The tools of statistical mechanics are general, but the microscopic models are system-specific. In this course we will introduce the tools and see them in action on various model systems.

Thermodynamics, on the other hand, is completely general. It must apply to all systems, and therein lies its power: its results do not depend on the microscopic model of the system. If we find that somehow or other we have managed to violate the laws of thermodynamics then we have a problem! Thermodynamics also makes life easier. It tells us about simple relationships between different properties of a system, so if used in conjunction with statistical mechanics we don't have to calculate all properties starting from the microscopic model.

0.1.2 Historical Perspective
Thermodynamics and statistical mechanics evolved as separate disciplines. Thermodynamics is quite old, having started with people such as Robert Boyle (1627-1691), Benjamin Thompson, Count Rumford (1753-1814) and Sadi Carnot (1796-1832). Statistical mechanics really got under way with the work of Josiah Willard Gibbs (1839-1903) and Ludwig Boltzmann (1844-1906).


Section 1 Introduction to Classical Thermodynamics


1.1 Introduction Macroscopic and microscopic descriptions
Any system may be described macroscopically or microscopically (G pp. 1-4, F pp. xi-xii). The macroscopic description considers the system as a whole. The system is characterised, in equilibrium, by a relatively small number of variables, e.g. pressure, volume, temperature, internal energy, entropy. The state of a system described in this way is referred to as a macrostate. The macrostate of a system can be changed in two principal ways: we can do work on the system and/or heat can flow in. (The macrostate would also change if particles flowed into or out of the system.) The general rules governing these macroscopic variables and their inter-relationships are the subject of thermodynamics.

The microscopic description considers the detailed microscopic nature of the system. At this level the system is characterised by a large number of variables which specify the state of the microscopic entities that make it up. For example, classically the state of a gas is given at any instant by the position and momentum of each particle. So for N particles we need 6N variables. This is a very large number: for 1 cm³ of gas at STP, N is of order 10^19. The state of a system described in this way is referred to as a microstate. The positions and momenta of the particles, and thus the microstate, evolve with time according to the laws of mechanics. Practically, we could never hope to perform such a calculation!

In the quantum mechanical description a microstate corresponds to the system having a given wave function. The microstate is specified by a set of quantum numbers for the system. In specifying microstates of a system we note that the energy of each atom may take on a continuous or a discrete set of values. In equilibrium there will be a certain distribution of atoms among these energy levels. This distribution tells us how many atoms have, on average, energy in the range E to E + dE. This probabilistic description is enough to determine the macroscopic thermodynamic variables. This is the subject of statistical mechanics: to connect a probabilistic microscopic mechanical description of a system to the macroscopic thermodynamic description. Historically the two most important figures in the unification of thermal physics and mechanics were Boltzmann and Gibbs.

Here we will review some fundamental macroscopic concepts. You should read the first three chapters of Finn for more details.

1.2 Systems and Variables


A system is separated from its surroundings by walls. In general we consider closed systems that do not exchange matter with their surroundings. Walls can then be classified into two types:
Adiabatic: no heat can pass through the wall.
Diathermal: heat can pass through the wall.


Two systems we will use a lot are
an isolated system surrounded by an adiabatic wall (an adiabatic system);
a system in contact with a heat bath via a diathermal wall (an isothermal system).

A system is described by thermodynamic variables. These are of two kinds: extensive and intensive. Intensive variables are independent of the size of a system and they can vary throughout the system (i.e. they can be defined locally). Examples are pressure, temperature, density. Extensive variables scale with the size of the system, and are proportional to its volume if other conditions are kept constant. Examples are volume (obviously!), numbers of particles, mass, internal energy. The appropriate variables depend on the particular system; examples are given below. Extensive and intensive variables tend to come in pairs (conjugate variables). The incremental work done on a body is (it turns out) the product of an intensive variable and the increment in the conjugate extensive variable: đW = X dx.

Conjugate variables:

  system        intensive variable (generalised force X)    extensive variable (generalised displacement x)
  generalised   X                                           x
  fluid         pressure p                                  volume V
  wire          tension F                                   length l
  film          surface tension γ                           area A

We will see that the incremental increase in energy of a body due to the flow of heat may also be expressed in the same way: for all systems the intensive variable is the temperature T and the conjugate extensive variable is the entropy S.

______________________________________________________________ End of lecture 1

1.3 Thermodynamic Equilibrium


1.3.1 Equilibrium
The concept of equilibrium is fundamental to thermodynamics. In equilibrium the thermodynamic properties (i.e. the macroscopic physical properties) are constant throughout the system and they do not vary with time. Imagine an isolated cylinder of gas with a piston at one end. Rapidly move the piston so as to expand the gas. Then the gas will be in a chaotic state; there may be a shock wave associated with the turbulent flow of the gas, and the intensive variables such as temperature, pressure and density will be non-uniform and changing. After waiting a sufficiently long time the gas will achieve internal equilibrium (in the microscopic view we know this happens because of collisions between the gas molecules). In the equilibrium state the temperature, pressure and density are constant throughout the gas and they do not vary subsequently in time, so long as the system is not disturbed. It is equilibrium states which are able to be described by a small number of macroscopic variables. In this course we shall be concerned exclusively with equilibrium states.

1.3.2 Thermodynamic state
A thermodynamic (macro)state is specified by the appropriate choice of state variables. For example, for a gas p, V, T are possible state variables. But they cannot be chosen independently because they are related by the equation of state. For an ideal gas of N atoms pV = NkT (the ideal gas law). So we have three variables and one constraint, leading to two independent variables. In other words you only need to fix, say, V and T. Then the state of the gas (comprising N atoms) is determined. If you want to know the pressure in this state then plug the values of V and T into the equation of state.

1.3.3 Functions of state
We will encounter a number of quantities which are uniquely determined by the thermodynamic state of the system. These are called functions of state. An example is the internal energy E. The state is fixed by the pair V, T (or p, T or p, V). Since E is a function of state it is determined by V, T. Mathematically it is a function of two variables and this may be written E(V, T). Other examples of functions of state we will encounter are entropy S, enthalpy H, Helmholtz free energy F, and Gibbs free energy G. These must be distinguished from other quantities which are not functions of state. For example, the work done on a system and the heat flowing into a system are not functions of state.

1.3.4 Reversible processes
You might think that a system sitting at thermodynamic equilibrium is not that interesting. Indeed, historically the development of thermodynamics was driven by the need to understand the operation of steam engines. Essentially these involve a sequence of processes in which the thermodynamic state of a gas is changed. Well, steam engines are rather complicated, so let's do the usual trick of the physicist: simplify and idealise. Consider a process in which the system goes from one equilibrium state to another. We will continue to use a gas as our workhorse example of a physical system, but of course the ideas apply to any system with the appropriate choice of state variables. So for a gas, the end points of the process, the initial and the final states, are specified by the state variables. Here we use pressure and volume. The initial state 1 is (p1, V1) and the final state 2 is (p2, V2). We can represent these on a two-dimensional plot called an indicator diagram.


[Figure: indicator diagram showing states 1 (p1, V1) and 2 (p2, V2) in the p-V plane, joined by an irreversible process and by a quasistatic path]

Suppose the change from 1 to 2 is accomplished sufficiently slowly (by moving the piston very slowly) so that at each stage of the process the gas is in equilibrium. Such a process, passing through a succession of equilibrium states, is called quasistatic. Only in this case may it be represented by a trajectory on the indicator diagram. Now suppose that no dissipative forces such as friction are present (these may be reduced to insignificance by moving the piston very slowly). Then the direction of the process can be reversed by an infinitesimal change in the force bringing it about; in our example this is the pressure differential across the piston. This then leads us to the notion of a reversible process: one which is quasistatic and for which no dissipative forces are present. In a reversible process the system can be moved from 1 to 2; this process can be reversed and the system returned to 1. Thus the state of the system and the rest of the world are left unchanged when the system returns to its original state (an alternative definition). Important examples of reversible processes are:
Reversible adiabatic process. Here, complying with the above conditions, the state of the thermally isolated system is changed. No heat may enter or leave. Later on we shall see that for such a process the entropy is constant (the process is called isentropic).
Reversible isothermal process. During this process the system is in thermal contact with a heat bath with which it can exchange heat energy. The temperature is constant and equal to that of the bath.

1.3.5 Cyclic processes
When a system passes through a sequence of states, ending up back in its original state, such a process is called a cyclic process. Clearly a cyclic process is described by a closed curve on an indicator diagram. Cyclic processes have a close connection with functions of state, since these will all be at their original values at the end of the cycle. Other variables, such as the work done on a substance during such a cycle or the heat input/output, will change.


Satisfy yourself that the work done on a fluid is given by the area of the closed curve of the p-V indicator diagram. Cyclic processes may be used to make general deductions about non-functions of state, such as the interconversion of heat and work. Thus Carnot deduced that the efficiency of a steam engine did not depend on the working substance.

1.3.6 Temperature, thermal equilibrium and the zeroth law of thermodynamics
We need to clean up our ideas a little on thermal equilibrium between systems and how this leads to the concept of temperature. Two volumes of gas, each in internal equilibrium, are placed in thermal contact. In general they will not be in thermal equilibrium with each other. Thus the states of the two samples of gas will change, as observed by changing pressures, for example. When no further change takes place, thermal equilibrium has been achieved. The two systems are then said to be at the same temperature. Thus we assert that all systems possess a common property called temperature: the property of a system that determines whether it is in thermal equilibrium with other systems. This view is encapsulated by the zeroth law of thermodynamics (F pp. 4-5).

Zeroth law of thermodynamics: If two systems are separately in thermal equilibrium with a third then they are in thermal equilibrium with each other.

1.3.7 Temperature scales
You should be familiar with the Celsius scale and the ideal gas scale. These are covered in Finn, chapter 1. Read this. The ideal gas temperature scale is based on the equation of state of an ideal gas, pV = NkT. We shall see that this temperature is identical to that defined on the Kelvin/thermodynamic scale. It is also equivalent to statistical temperature, which arises from a microscopic treatment of systems. These equivalences will be demonstrated later on in the course. Thus the temperature which crops up in all the equations of this course is called absolute temperature. Absolute zero, T = 0, corresponds to −273.15 °C. Absolute zero is unattainable, but you can get infinitesimally close to it in principle. This result follows from the third law of thermodynamics, covered later in the course. This is the province of ultralow temperature physics, one of the research areas of this department. It is the low energy frontier of physics where new discoveries await the intrepid explorer. The record low temperature for a system in internal thermal equilibrium is held by colleagues at the University of Lancaster: about 7 μK in a brush of metal wires. In other systems you can play tricks to cool only parts to temperatures of a few nK. You can also have negative temperatures, but these are hotter than T = ∞. Confused?! Well, stay tuned!
______________________________________________________________ End of lecture 2


1.4 The First Law of Thermodynamics


There is a function of state called the internal energy, E. When a system is taken from one equilibrium state 1 to another 2, the change in internal energy ΔE is given by the sum of the work done on the system W and the heat flowing into the system Q:

ΔE = W + Q,  where ΔE = E2 − E1.

(Note the sign convention for W and Q.)

This law is a statement of the equivalence of heat and work. It is the principle of conservation of energy generalised to thermal systems. Heating is thus nothing more than the flow of energy into a system from its surroundings. From a modern perspective we see the first law simply as a statement of the conservation of energy. But historically it was important since there was uncertainty about the nature of heat (there was the caloric theory, phlogiston theory, etc.). When formulated, the first law was telling people simply that heat is another form of energy.

The complete conversion of work into heat is possible and it is allowed by the first law. This was what Joule spent so much of his time investigating, with paddles in water. The complete conversion of heat into work is not possible in a cyclic process. (It is possible in a single-shot process, e.g. the reversible isothermal expansion of an ideal gas against a piston.) Although allowed by the first law, the continuous conversion of heat into work is forbidden by the second law of thermodynamics.

Historically the first law followed from the experiments of Joule, who demonstrated that for a thermally isolated system (Q = 0) the work necessary to perform a given change in its thermodynamic state (for example, to increase the temperature by one degree) does not depend on how the work is done or the choice of path between the states. This led to the introduction of internal energy E as a function of state. In the case of a non-adiabatic process ΔE is no longer equal to W. The difference (the heat Q) is the non-mechanical exchange of energy of the system with the surroundings, arising from a temperature difference. Note importantly that ΔE = W + Q applies to both reversible and irreversible changes.

For an infinitesimal change we have dE = đW + đQ. The bars in the differentials đW, đQ are to emphasise that we are not taking the differential of a function; W and Q are not functions of state. đW and đQ are inexact differentials: they represent infinitesimal quantities only.
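A minimal numerical illustration of this bookkeeping (all values below are invented for the example, not taken from the notes): for the reversible isothermal expansion mentioned above, ΔE = 0, so the heat absorbed equals the work done by the gas.

```python
from math import log

# First-law bookkeeping, dE = W + Q, for the reversible isothermal
# expansion of an ideal gas (illustrative, assumed values throughout).
N = 1.0e22        # number of molecules (assumed)
k = 1.38e-23      # Boltzmann's constant, J/K
T = 300.0         # temperature, K
V1, V2 = 1.0e-3, 2.0e-3   # initial and final volumes, m^3

W = -N * k * T * log(V2 / V1)   # work done ON the gas; negative for an expansion
dE = 0.0                        # ideal-gas E depends only on T, which is constant
Q = dE - W                      # heat flowing in: here Q = -W > 0

print(W, Q)   # the absorbed heat is completely converted to work (single shot)
```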


1.5 Work
See Finn pp. 22-33.
1.5.1 Expression for work
So far we have considered work and heat in general terms. However an expression for the work done on a system in a process may be given in terms of the particular state variables appropriate to that system, if the process is reversible. In some cases we can use the fact that work = force × distance. Thus in an infinitesimal change đW = F dx. Since in general F is a function of x, we should write đW = F(x) dx. The work done in a process in which x is changed by a finite amount, from x1 to x2, is then given by

W = ∫_{x1}^{x2} F(x) dx.

1.5.2 An example: work done on a gas
Consider the work done in reversibly compressing a gas. The force exerted by the gas, of pressure p, on the piston, of area A, is pA (pressure = force per unit area, remember?).

[Figure: compression of a gas: a piston of area A pushed in a distance dx by a force F + dF]

To hold the piston stationary we must exert a force F = pA. To move the piston in by dx we increase the force F by an infinitesimal amount dF. The piston is frictionless so it moves! The incremental work done is đW = (F + dF) dx ≈ F dx = pA dx. But the volume of the gas has changed by dV = −A dx (the volume has decreased). So

đW = −p dV

is the expression for the work done on the gas. Note the sign convention: work done on a system is positive; this is important. This gives đW in terms of the state variables, but don't forget we are dealing with the special case of a reversible compression. So you only do work on a gas if you change its volume; remember that! In a finite quasistatic volume change (all intermediate states equilibrium states) from volume V1 to V2:

W = −∫_{V1}^{V2} p(V) dV.

Remember that as the volume is decreased the pressure will go up, i.e. the pressure is a function of volume, so we must do an integral.

The result will depend on the conditions under which the compression is performed. This determines the form of p(V) (that is, the path of the process as represented on an indicator diagram). And the area under that path is the work.

[Figure: work done in compressing a fluid: the path from 1 (p1, V1) to 2 (p2, V2) on the p-V indicator diagram; the work done is the area under the path]

Clearly the area depends on the path; work is not a function of state. For an ideal gas (as an exercise, prove these expressions for W):

Isothermal compression: pV = NkT, so in a compression from V1 to V2,

W = NkT ln(V1/V2).

Adiabatic compression: pV^γ = constant, so p = constant × V^(−γ). Hence in a compression

W = (γ − 1)^(−1) [p2V2 − p1V1].

For a compressible solid or fluid see Finn p32.
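These closed forms are easy to check numerically. The sketch below (with assumed values for N, T, the volumes, and γ = 5/3 for a monatomic gas) integrates −∫p dV along each path and compares with the expressions quoted above.

```python
from math import log

# Numerical check of the two work expressions for compressing an ideal gas.
N, k, T = 1.0e22, 1.38e-23, 300.0     # illustrative, assumed values
V1, V2 = 2.0e-3, 1.0e-3               # a compression: V2 < V1
gamma = 5.0 / 3.0                     # monatomic ideal gas (assumed)

def work(p_of_V, Va, Vb, n=100000):
    """W = -integral from Va to Vb of p(V) dV, by the trapezium rule."""
    h = (Vb - Va) / n
    s = 0.5 * (p_of_V(Va) + p_of_V(Vb)) + sum(p_of_V(Va + i * h) for i in range(1, n))
    return -s * h

# Isothermal path: p(V) = NkT/V, so W should equal NkT ln(V1/V2)
print(work(lambda V: N * k * T / V, V1, V2), N * k * T * log(V1 / V2))

# Adiabatic path: p V^gamma = p1 V1^gamma, so W = (p2 V2 - p1 V1)/(gamma - 1)
p1 = N * k * T / V1
p2 = p1 * (V1 / V2) ** gamma
print(work(lambda V: p1 * (V1 / V) ** gamma, V1, V2), (p2 * V2 - p1 * V1) / (gamma - 1))
```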

1.5.3 Other examples of expressions for work
(These are discussed in all the books; see Finn pp. 28-31.)
Stretching a wire under tension F: đW = F dx.
Increasing the surface area of a liquid film: đW = γ dA, where γ is the surface tension (usually independent of A).
______________________________________________________________ End of lecture 3


Section 2 Introduction to Statistical Mechanics


2.1 Introducing entropy
2.1.1 Boltzmann's formula
A very important thermodynamic concept is that of entropy S. Entropy is a function of state, like the internal energy. It measures the relative degree of order (as opposed to disorder) of the system when in this state. An understanding of the meaning of entropy thus requires some appreciation of the way systems can be described microscopically. This connection between thermodynamics and statistical mechanics is enshrined in the formula due to Boltzmann and Planck:

S = k ln Ω

where Ω is the number of microstates accessible to the system (the meaning of that phrase to be explained). We will first try to explain what a microstate is and how we count them. This leads to a statement of the second law of thermodynamics, which as we shall see has to do with maximising the entropy of an isolated system. As a thermodynamic function of state, entropy is easy to understand: entropy changes of a system are intimately connected with heat flow into it. In an infinitesimal reversible process đQ = T dS; the heat flowing into the system is the product of the increment in entropy and the temperature. Thus while heat is not a function of state, entropy is.

2.2 The spin 1/2 paramagnet as a model system


Here we introduce the concepts of microstate, distribution, average distribution and the relationship to entropy using a model system. This treatment is intended to complement that of Guenault (chapter 1), which is very clear and which you must read. We choose as our main example a system that is very simple: one in which each particle has only two available quantum states. We use it to illustrate many of the principles of statistical mechanics. (Precisely the same arguments apply when considering the number of atoms in two halves of a box.)

2.2.1 Quantum states of a spin 1/2 paramagnet
A spin S = 1/2 has two possible orientations. (These are the two possible values of the projection of the spin on the z axis: ms = ±1/2.) Associated with each spin is a magnetic moment which has the two possible values ±μ. An example of such a system is the nucleus of ³He, which has S = 1/2 (this is due to an unpaired neutron; the other isotope of helium, ⁴He, is non-magnetic). Another example is the electron. In a magnetic field the two states are of different energy, because magnetic moments prefer to line up parallel (rather than antiparallel) with the field. Thus since E = −μ·B, the energy of the two states is E↑ = −μB, E↓ = +μB. (Here the arrows refer to the direction of the nuclear magnetic moment: ↑ means that the moment is parallel to the field and ↓ means antiparallel. There is potential confusion because for ³He the moment points opposite to the spin!)

2.2.2 The notion of a microstate
So much for a single particle. But we are interested in a system consisting of a large number of such particles, N. A microscopic description would necessitate specifying the state of each particle. In a localised assembly of such particles, each particle has only two possible quantum states, ↑ or ↓. (By localised we mean the particles are fixed in position, like in a solid. We can label them if we like by giving a number to each position. In this way we can tell them apart; they are distinguishable.) In a magnetic field the energy of each particle has only two possible values. (This is an example of a two-level system.) You can't get much simpler than that!

Now we have set up the system, let's explain what we mean by a microstate and enumerate them. First consider the system in zero field. Then the states ↑ and ↓ have the same energy. We specify a microstate by giving the quantum state of each particle, whether it is ↑ or ↓. For N = 10 spins, for example,

↑↑↑↑↑↑↑↑↑↑,  ↑↓↑↑↓↑↓↓↑↓,  ↓↑↑↓↑↓↑↑↓↓

are possible microstates. That's it!

2.2.3 Counting the microstates
What is the total number of such microstates accessible to the system? This is called Ω. Well, for each particle the spin can be ↑ or ↓ (there is no restriction on this in zero field). Two possibilities for each particle gives 2^10 arrangements for N = 10 (we have merely given three examples). For N particles, Ω = 2^N.

2.2.4 Distribution of particles among states
This list of possible microstates is far too detailed to handle. What's more, we don't need all this detail to calculate the properties of the system. For example, the total magnetic moment of all the particles is M = (N↑ − N↓)μ; it just depends on the relative number of up and down moments and not on the detail of which are up or down. So we collect together all those microstates with the same number of up moments and down moments. Since the relative number of ups and downs is constrained by N↑ + N↓ = N (the moments must be either up or down), such a state can be characterised by one number:

m = N↑ − N↓.

This number m tells us the distribution of the N particles among the possible states:

N↑ = (N + m)/2,  N↓ = (N − m)/2.

Now we ask the question: how many microstates are there consistent with a given distribution (given value of m; given values of N↑, N↓)? Call this t(m). (This is Guenault's notation.) Look at N = 10. For m = 10 (all spins up) and m = −10 (all spins down), that's easy: t = 1. Now for m = 8 (one moment down) t(m = 8) = 10: there are ten ways of reversing one moment.

The general result is t(m) = N!/(N↑! N↓!). This may be familiar to you as a binomial coefficient. It is the number of ways of dividing N objects into one set of N↑ identical objects and a different set of N↓ identical objects (red balls and blue balls, or tossing an unbiased coin and getting heads or tails). In terms of m the function t(m) may be written

t(m) = N! / {[(N + m)/2]! [(N − m)/2]!}.

If you plot the function t(m) it is peaked at m = 0 (N↑ = N↓ = N/2). Here we illustrate that for N = 10. The sharpness of the distribution increases with the size of the system N, since the standard deviation goes as √N.

[Figure: plot of the function t(m) for 10 spins: peak value t(0) = 252, with t = 210, 120, 45, 10, 1 at m = ±2, ±4, ±6, ±8, ±10]

When N = 100 the function is already much narrower:

[Figure: plot of the function t(m) for 100 spins, normalised to its peak, over the range m = −100 to 100]

When N is large, t(m) approaches the normal (Gaussian) function

t(m) ≈ 2^N √(2/πN) exp(−m²/2N).

This may be shown using Stirling's approximation (Guenault, Appendix 2). Note that the RMS width of the function is √N.
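A quick numerical check of these counting results; this sketch simply evaluates the exact binomial weights for the N = 10 case of the text and compares them with the Gaussian form.

```python
from math import comb, exp, pi, sqrt

# Exact statistical weight t(m) = N!/(N_up! N_down!) for N = 10 spins,
# compared with the large-N Gaussian form quoted above.
N = 10
for m in range(-N, N + 1, 2):
    n_up = (N + m) // 2
    t_exact = comb(N, n_up)
    t_gauss = 2 ** N * sqrt(2 / (pi * N)) * exp(-m ** 2 / (2 * N))
    print(m, t_exact, round(t_gauss, 1))   # 252 vs ~258 at m = 0: close even for N = 10

# The weights must account for all microstates: sum over m of t(m) = 2^N
assert sum(comb(N, (N + m) // 2) for m in range(-N, N + 1, 2)) == 2 ** N
```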


2.2.5 The average distribution and the most probable distribution
The physical significance of this result derives from the fundamental assumption of statistical physics that each of these microstates is equally likely. It follows that t(m) is the statistical weight of the distribution m (recall m determines N↑ and N↓), that is, the relative probability of that distribution occurring. Hence we can work out the average distribution; in this example this is just the average value of m. The probability of a particular value of m is just t(m)/Ω, i.e. the number of microstates with that value of m divided by the total number of microstates (Ω = Σ_m t(m)). So the average value of m is

⟨m⟩ = Σ_m m t(m)/Ω.

In this example, because the distribution function t(m) is symmetrical, it is clear that ⟨m⟩ = 0. Also the value of m for which t(m) is maximum is m = 0.

So on average, for a system of spins in zero field, m = 0: there are equal numbers of up and down spins. This average value is also the most probable value. For large N the function t(m) is very strongly peaked at m = 0; there are far more microstates with m = 0 than any other value. If we observe a system of spins as a function of time, for most of the time we will find m to be at or near m = 0. The observed m will fluctuate about m = 0, and the (relative) extent of these fluctuations decreases with N. States far away from m = 0, such as m = N (all spins up), are highly unlikely; the probability of observing that state is 1/Ω = 1/2^N, since there is only one such state out of a total of 2^N. (Note: according to the Gibbs method of ensembles we represent such a system by an ensemble of systems, each in a definite microstate, one for every microstate. The thermodynamic properties are obtained by an average over the ensemble. The equivalence of the ensemble average to the time average for the system is a subtle point and is the subject of the ergodic hypothesis.)

It is worthwhile to note that the model treated here is applicable to a range of different phenomena. For example, one can consider the number of particles in two halves of a container, and examine the density fluctuations. Each particle can be on either side of the imaginary division, so the distribution of density in either half would follow the same distribution as derived above.
______________________________________________________________ End of lecture 4

2.3 Entropy and the second law of thermodynamics


2.3.1 Order and entropy
Suppose now that somehow or other we could set up the spins, in zero magnetic field, such that they were all in the up state: m = N. This is not a state of internal thermodynamic equilibrium; it is a highly ordered state. We know the equilibrium state is random, with equal numbers of up and down spins. It also has a large number of microstates associated with it, so it is by far the most probable state. The initial state of the system will therefore spontaneously evolve towards the more likely, more disordered m = 0 equilibrium state. Once there, the fluctuations about m = 0 will be small, as we have seen, and the probability of the system returning fleetingly to the initial ordered state is infinitesimally small. The tendency of isolated systems to evolve in the direction of increasing disorder thus follows from (i) disordered distributions having a larger number of microstates (mathematical fact), and (ii) all microstates being equally likely (physical hypothesis).

The thermodynamic quantity intimately linked to this disorder is the entropy. Entropy is a function of state and is defined for a system in thermodynamic equilibrium. (It is possible to introduce a generalised entropy for systems not in equilibrium, but let's not complicate the issue.) Entropy was defined by Boltzmann and Planck as

S = k ln Ω

where Ω is the total number of microstates accessible to a system. Thus for our example of spins, Ω = 2^N, so that S = Nk ln 2. Here k is Boltzmann's constant.

2.3.2 The second law of thermodynamics
The second law of thermodynamics is the law of increasing entropy. During any real process the entropy of an isolated system always increases. In the state of equilibrium the entropy attains its maximum value. This law may be illustrated by asking what happens when a constraint is removed on an isolated composite system. Will the number of microstates of the final equilibrium state be smaller, the same, or bigger? We expect the system to evolve towards a more probable state. Hence the number of accessible microstates of the final state must be greater or the same, and the entropy increases or stays the same. A nice simple example is given by Guenault on pp. 12-13. The mixing of two different gases is another. For a composite system Ω = Ω1Ω2. If the two systems are allowed to exchange energy, then in the final equilibrium state the total number of accessible microstates is always greater. The macroscopic description of this is that heat flows until the temperature of the two systems is the same. This leads us, in the next section, to a definition of statistical temperature as

1/T = (∂S/∂E)_{V,N}.

2.3.3 Thermal interaction between systems and temperature
Consider two isolated systems of given volume, number of particles and total energy. When separated and in equilibrium they will individually have

Ω1 = Ω1(E1, V1, N1) and Ω2 = Ω2(E2, V2, N2)

microstates. The total number of microstates for the combined system is

Ω = Ω1Ω2.

Now suppose the two systems are brought into contact through a diathermal wall, so they can now exchange energy.

[Figure: thermal interaction: two systems (E1, V1, N1) and (E2, V2, N2) brought into contact through a fixed diathermal wall]

The composite system is isolated, so its total energy is constant. So while the systems exchange energy (E1 and E2 can vary) we must keep E1 + E2 = E0 = const. And since V1, N1, V2, N2 all remain fixed, they can be ignored (and kept constant in any differentiation). Our problem is this: after the two systems are brought together, what will be the equilibrium state? We know that they will exchange energy, and they will do this so as to maximise the total number of microstates for the composite system. Writing

Ω = Ω1(E) Ω2(E0 − E),

we allow the systems to vary E so that Ω is a maximum:

∂Ω/∂E = (∂Ω1/∂E) Ω2 − Ω1 (∂Ω2/∂E) = 0,

or

(1/Ω1) ∂Ω1/∂E = (1/Ω2) ∂Ω2/∂E,

i.e.

∂ ln Ω1/∂E = ∂ ln Ω2/∂E.

But from the definition of entropy, S = k ln Ω, we see this means that the equilibrium state is characterised by

∂S1/∂E = ∂S2/∂E.

In other words, when the systems have reached equilibrium the quantity ∂S/∂E of system 1 is equal to ∂S/∂E of system 2. Since we have defined temperature to be that quantity which is the same for two systems in thermal equilibrium, it is clear that ∂S/∂E (or ∂ ln Ω/∂E) must be related (somehow) to the temperature. We define statistical temperature as

1/T = (∂S/∂E)_{V,N}

(recall that V and N are kept constant in the process). With this definition it turns out that statistical temperature is identical to absolute temperature.

It follows from this definition and the law of increasing entropy that heat must flow from high temperature to low temperature (this is actually the Clausius statement of the second law of thermodynamics; we will discuss all the various formulations of this law a bit later on). Let us prove this. For a composite system Ω = Ω1Ω2, so that S = S1 + S2, as we have seen. According to the law of increasing entropy ΔS ≥ 0, so that:

ΔS = (∂S1/∂E1 − ∂S2/∂E2) ΔE1 ≥ 0,

or, using our definition of statistical temperature:

ΔS = (1/T1 − 1/T2) ΔE1 ≥ 0.

This means that
E1 increases if T2 > T1;
E1 decreases if T2 < T1;
so energy flows from systems at higher temperatures to systems at lower temperatures.
______________________________________________________________ End of lecture 5
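Before moving on, the argument of section 2.3.3 can be illustrated numerically. The sketch below invents a toy pair of two-level systems sharing a fixed number of energy quanta, with Ω_i given by binomial coefficients (sizes and quanta are assumptions of the example), and shows that the most probable division of energy equalises ∂ ln Ω/∂E for the two systems.

```python
from math import comb, log

# Two model systems of N1 and N2 two-level particles share q0 energy quanta:
# Omega_i(q) = C(N_i, q). The most probable split maximises Omega1 * Omega2.
N1, N2, q0 = 300, 600, 450       # toy sizes, invented for the example

best_q1 = max(range(0, min(N1, q0) + 1),
              key=lambda q1: log(comb(N1, q1)) + log(comb(N2, q0 - q1)))
print(best_q1)   # ~150: the quanta divide in proportion to system size (1:2)

# At the maximum the 'statistical temperatures' match:
# d(ln Omega)/dq is the same for both systems (finite differences).
def dlnOmega(N, q):
    return log(comb(N, q + 1)) - log(comb(N, q))

print(dlnOmega(N1, best_q1), dlnOmega(N2, q0 - best_q1))   # nearly equal
```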

2.4 More on the S=1/2 paramagnet


2.4.1 Energy distribution of the S = 1/2 paramagnet
Having motivated this definition of temperature we can now return to our simple model system and solve it to find N↑ and N↓ as a function of magnetic field B and temperature T. Previously we had considered the behaviour in zero magnetic field only, and we had not discussed temperature. As discussed already, in a magnetic field the energies of the two possible spin states are E↑ = −μB and E↓ = +μB. We shall write these energies as ∓ε.

Now for an isolated system, with total energy E, number N and volume V fixed, it turns out that N↑ and N↓ are uniquely determined (since the energy depends on N↑ − N↓). Thus m = N↑ − N↓ is fixed, and therefore it exhibits no fluctuations (as it did in zero field about the average value m = 0). This follows since we must have both

N = N↑ + N↓,
E = (N↓ − N↑)ε.

These two equations are solved to give

N↑ = (N − E/ε)/2,
N↓ = (N + E/ε)/2.

All microstates have the same distribution of particles between the two energy levels, N↑ and N↓. But there are still things we would like to know about this system; in particular we know nothing about the temperature. We can approach this from the entropy in our now-familiar way. The total number of microstates is given by

Ω = N! / (N↑! N↓!).

So we can calculate the entropy via S = k ln Ω:

S = k {ln N! − ln N↑! − ln N↓!}.

Now we use Stirling's approximation (Guenault, Appendix 2), which says

ln x! ≈ x ln x − x;

this is important: you should remember this.

Hence

S = k {N ln N − N↑ ln N↑ − N↓ ln N↓},

where we have used N↑ + N↓ = N. Into this we substitute for N↑ and N↓, since we need S in terms of E for differentiation to get temperature:

S = k {N ln N − ½(N − E/ε) ln[½(N − E/ε)] − ½(N + E/ε) ln[½(N + E/ε)]}.

Now we use the definition of statistical temperature, 1/T = (∂S/∂E)_{N,V}, to obtain the temperature:

1/T = (k/2ε) ln[(N − E/ε)/(N + E/ε)].

(You can check this differentiation with Maple or Mathematica if you wish!)

Recalling the expressions for N↑ and N↓, the temperature expression can be written:

1/T = (k/2ε) ln(N↑/N↓),

which can be inverted to give the ratio of the populations as

N↓/N↑ = exp(−2ε/kT).

Or, since N↑ + N↓ = N, we find

N↑/N = exp(+ε/kT) / [exp(+ε/kT) + exp(−ε/kT)],
N↓/N = exp(−ε/kT) / [exp(+ε/kT) + exp(−ε/kT)].
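Since this derivation leans on Stirling's approximation, here is a quick numerical sanity check of its accuracy (a side calculation, not part of the derivation; lgamma gives ln x! exactly):

```python
from math import lgamma, log

# Accuracy of Stirling's approximation, ln x! ~ x ln x - x, as used in the
# entropy derivation above; ln x! is evaluated exactly as lgamma(x + 1).
for x in (10, 100, 1000, 10000):
    exact = lgamma(x + 1)
    stirling = x * log(x) - x
    print(x, exact, stirling, (exact - stirling) / exact)   # relative error shrinks
```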

[Figure: temperature variation of the up and down populations: N↑/N falls from 1.0 towards 0.5 and N↓/N rises from 0 towards 0.5 as kT/ε increases from 0 to 16]

This is our final result. It tells us the fraction of particles in each of the two energy states as a function of temperature. This is the distribution function. On inspection you see that it can be written rather concisely as

n(ε) = N exp(−ε/kT) / z,

where the quantity z,

z = exp(−ε/kT) + exp(+ε/kT),

is called the (single-particle) partition function. It is the sum over the possible states of the factor exp(−ε/kT). This distribution among energy levels is an example of the Boltzmann distribution. This distribution applies generally (to distinguishable particles) where there is a whole set of possible energies available, and not merely two as we have considered thus far. This problem we treat in the next section. In general

z = Σ_{states} exp(−ε/kT),

where the sum is taken over all the possible energies of a single particle. In the present example there are only two energy levels. In the general case n(ε) is the average number of particles in the state of energy ε (in the present special case of only two levels n(ε↑) and n(ε↓) are uniquely determined). To remind you, an example of counting the microstates and evaluating the average distribution for a system of a few particles with a large number of available energies is given in Guenault, chapter 1. The fluctuations in n(ε) about this average value are small for a sufficiently large number of particles (the 1/√N factor).

2.4.2 Magnetisation of the S = 1/2 magnet: Curie's law
We can now obtain an expression for the magnetisation of the spin 1/2 paramagnet in terms of temperature and applied magnetic field. The magnetisation (total magnetic moment) is given by

M = (N↑ − N↓)μ.

We have expressions for N↑ and N↓, giving the expression for M as

M = Nμ [exp(+ε/kT) − exp(−ε/kT)] / [exp(+ε/kT) + exp(−ε/kT)]

or, since ε = μB, the magnetisation in terms of the magnetic field is

M = Nμ tanh(μB/kT).


[Figure: magnetisation of the paramagnet: M/Nμ = tanh(μB/kT) plotted against μB/kT, showing the linear region M ≈ Nμ²B/kT at small μB/kT and saturation at M = Nμ]

The general behaviour of the magnetisation as a function of magnetic field is nonlinear. At low fields the magnetisation starts linearly, but at higher fields it saturates as all the moments become aligned. The low-field, linear behaviour may be found by expanding the tanh:

M ≈ Nμ²B/kT.

The magnetisation has the general form

M = CB/T,

proportional to B and inversely proportional to T. This behaviour is referred to as Curie's law, and the constant C = Nμ²/k is called the Curie constant.
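A two-line numerical check of the saturation and Curie-law regimes, working in the single dimensionless variable x = μB/kT:

```python
from math import tanh

# M/(N mu) = tanh(x), x = mu B / kT: compare with the Curie-law line M/(N mu) = x.
for x in (0.01, 0.1, 1.0, 3.0):
    print(x, tanh(x), x)   # tanh(x) ~ x for x << 1; tanh(x) -> 1 (saturation) for x >> 1
```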

2.4.3 Entropy of the S = 1/2 magnet
We can see immediately that for the spin system the entropy is a function of the magnetic field. In particular, at large B/T all the moments are parallel to the field. There is then only one possible microstate and the entropy is zero. But it is quite easy to determine the entropy at all B/T from S = k ln Ω. As we have seen already,

S/k = N ln N − N↑ ln N↑ − N↓ ln N↓.

Since N↑ + N↓ = N, this can be written

S/k = −N↑ ln(N↑/N) − N↓ ln(N↓/N),

and substituting for N↑ and N↓ we then get

S = Nk ln[1 + exp(−2μB/kT)] + Nk (2μB/kT) exp(−2μB/kT) / [1 + exp(−2μB/kT)].

[Figure: entropy of a spin 1/2 paramagnet against temperature for two field values; S rises from 0 and approaches Nk ln 2, the lower-B curve reaching it at lower temperature]


1 At low temperatures, kT ≪ 2μB. Then the first term is negligible and the 1s may be ignored. In this case S ≈ Nk (2μB/kT) exp(−2μB/kT). Clearly S → 0 as T → 0, in agreement with our earlier argument.
2 At high temperatures, kT ≫ 2μB. Then, on expanding, the linear corrections from the two terms are equal and opposite, and we find S = Nk ln 2. There are equal numbers of up and down spins: maximum disorder.
3 The entropy is a function of B/T, and increasing B at constant T reduces S: the spins tend to line up; the system becomes more ordered.
______________________________________________________________ End of lecture 6
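A quick numerical check of these two limits, using the entropy expression above written per spin:

```python
from math import exp, log

# Entropy per spin, s(x) = S/Nk with x = mu B / kT:
# s = ln(1 + e^{-2x}) + 2x e^{-2x} / (1 + e^{-2x}); check the quoted limits.
def s(x):
    e = exp(-2 * x)
    return log(1 + e) + 2 * x * e / (1 + e)

print(s(1e-8), log(2))                     # high temperature: s -> ln 2
for x in (2.0, 4.0, 8.0):
    print(x, s(x), 2 * x * exp(-2 * x))    # low temperature: s ~ 2x e^{-2x} -> 0
```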

2.5 The Boltzmann distribution


2.5.1 Thermal interaction with the rest of the world: using a Gibbs ensemble
For an isolated system all microstates are equally likely; this is our Fundamental Postulate. But what about a non-isolated system? What can we say about the probabilities of microstates of such a system? Here the probability of a microstate will depend on its energy and on properties of the surroundings. A real system will have its temperature determined by its environment. That is, it is not isolated; its energy is not fixed; it will fluctuate (but the relative fluctuations are very small because of the 1/√N rule). All we really know about is isolated systems; there all quantum states are equally probable. So to examine the properties of a non-isolated system we shall adopt a trick. We will take many copies of our system. These will be allowed to exchange (heat) energy with each other, but the entire collection will be isolated from the outside world. This collection of systems is called an ensemble. The point is that we now know that all microstates of the composite ensemble are equally likely. Of the N individual elements of the ensemble, there will be ni in the microstate of energy εi, so the probability of an element being in this microstate will be ni/N.

[Figure: a Gibbs ensemble of systems used to calculate {ni}: a large collection of identical systems, one of which is shown in the energy state εj]

There are many distributions {ni} which are possible for this ensemble. In particular we know that, because the whole ensemble is isolated, the number of elements and the total energy are fixed. In other words any distribution {ni} must satisfy the requirements

Σ_i ni = N,
Σ_i ni εi = E.

By analogy with the case of the spin 1/2 paramagnet, we will denote the number of microstates of the ensemble corresponding to a given distribution {ni} by t({ni}). Then the most likely distribution is that for which t({ni}) is maximised.

2.5.2 Most likely distribution
The value of t({ni}) is given by

t({ni}) = N! / Π_i ni!,

since there are N elements all together and there are ni in the ith state. So the most probable distribution is found by varying the various ni to give the maximum value for t. It is actually more convenient (and mathematically equivalent) to maximise the logarithm of t.

The maximum in ln t corresponds to the place where its differential is zero:

d ln t({ni}) = (∂ln t/∂n1) dn1 + (∂ln t/∂n2) dn2 + ⋯ = Σ_i (∂ln t/∂ni) dni = 0.

If the ni could be varied independently then we would have the following set of equations:

∂ln t({ni})/∂ni = 0,  i = 1, 2, 3, …

Unfortunately the ni cannot all be varied independently, because all possible distributions {ni} must satisfy the two requirements

Σ_i ni = N,  Σ_i ni εi = E;

there are two constraints on the distribution {ni}. These may be incorporated in the following way. Since the constraints imply that

d Σ_i ni = 0 and d Σ_i ni εi = 0,

we can consider the differential of the expression

ln t({ni}) − α Σ_i ni − β Σ_i ni εi,

where α and β are, at this stage, undetermined. This gives us two extra degrees of freedom, so that we can maximise this by varying all the ni independently. In other words, the maximum in ln t is also specified by

d [ln t({ni}) − α Σ_i ni − β Σ_i ni εi] = 0.

Now we have recovered the two lost degrees of freedom, so that the ni can be varied independently. But then the multipliers α and β are no longer free variables and their values must be found from the constraints fixing N and E. (This is called the method of Lagrange's undetermined multipliers.) The maximisation can now be performed by setting the N partial derivatives to zero:

∂/∂nj [ln t({ni}) − α Σ_i ni − β Σ_i ni εi] = 0,  j = 1, 2, 3, …

We use Stirling's approximation for ln t, given by

ln t({ni}) = (Σ_i ni) ln(Σ_i ni) − Σ_i ni ln ni,

so that upon evaluating the partial derivatives we obtain

ln N − ln nj − α − β εj = 0,

which has solution

nj = N e^{−α} e^{−β εj}.

These are the values of nj which maximise t subject to N and E being fixed. This gives us the most probable distribution {ni}. The probability that a system will be in the state j is then found by dividing by N:

pj = e^{−α} e^{−β εj}.

So the remaining step, then, is to find the constants α and β.

2.5.3 What are α and β?
For the present we shall sidestep the question of α by appealing to the normalisation of the probability distribution. Since we must have

Σ_j pj = 1,

it follows that we can express the probabilities as

pj = e^{−β εj} / Z,

where the normalisation constant Z is given by

Z = Σ_j e^{−β εj}.

The constant Z will turn out to be of central importance; from this all thermodynamic properties can be found. It is called the partition function, and it is given the symbol Z because of its name in German: Zustandssumme (sum over states).


But what is this β? For the spin 1/2 paramagnet we had many similar expressions with 1/kT where we have β here. We shall now show that this identification is correct. We will approach this from our definition of temperature:

1/T = (∂S/∂E)_{V,N}.

Now the fundamental expression for entropy is S = k ln Ω, where Ω is the total number of microstates of the ensemble. This is a little difficult to obtain. However, we know t({ni}), the number of microstates corresponding to a given distribution {ni}. And this t is a very sharply peaked function. Therefore we make negligible error in approximating Ω by the value of t corresponding to the most probable distribution. In other words, we take

S = k ln [N! / Π_i ni!],  where nj = N e^{−α} e^{−β εj}.

Using Stirling's approximation we have

S = k {N ln N − Σ_i ni ln ni} = k {αN + βE}.

From this we immediately identify

kβ = (∂S/∂E)|_{V,N},

so that

β = 1/kT,

as we asserted.

The probability of a (non-isolated) system being found in the ith microstate is then

pi = e^{−εi/kT} / Z.

This is known as the Boltzmann distribution or the Boltzmann factor. It is a key result. Feynman says: "This fundamental law is the summit of statistical mechanics, and the entire subject is either a slide-down from the summit, as the principle is applied to various cases, or the climb-up to where the fundamental law is derived and the concepts of thermal equilibrium and temperature clarified."
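As a sketch of this key result, the probabilities for an arbitrary (hypothetical) level scheme can be computed directly from the definitions above:

```python
from math import exp

# Boltzmann probabilities p_j = e^{-eps_j/kT} / Z; the four energies below
# are hypothetical and measured in units of kT, purely for illustration.
def boltzmann(eps_over_kT):
    weights = [exp(-e) for e in eps_over_kT]
    Z = sum(weights)               # the partition function, Z = sum_j e^{-eps_j/kT}
    return [w / Z for w in weights]

p = boltzmann([0.0, 1.0, 2.0, 3.0])
print(p, sum(p))                   # normalised to 1; lower energies are more probable
```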

______________________________________________________________ End of lecture 7


2.5.4 Link between the partition function and thermodynamic variables
All the important thermodynamic variables for a system can be derived from Z and its derivatives. We can see this from the expression for entropy. We have

S = k {N ln N − Σ_i ni ln ni} = k {αN + βE}.

And since e^{−α} = 1/Z, we have, on taking logarithms, α = ln Z, so that

S = Nk ln Z + E/T.

Here both S and E refer to the ensemble of N elements. So for the entropy and internal energy of a single system,

S = k ln Z + E/T,

which, upon rearrangement, can be written

E − TS = −kT ln Z.

Now the thermodynamic function F = E − TS is known as the Helmholtz free energy, or the Helmholtz potential. We then have the memorable result

F = −kT ln Z.

2.5.5 Finding Thermodynamic Variables
A host of thermodynamic variables can be obtained from the partition function. This is seen from the differential of the free energy. Since

dE = T dS − p dV,

it follows that

dF = −S dT − p dV.

We can then identify the various partial derivatives:

S = −(∂F/∂T)_V = kT (∂ln Z/∂T)_V + k ln Z,
p = −(∂F/∂V)_T = kT (∂ln Z/∂V)_T.

Since E = F + TS, we can then express the internal energy as

E = kT² (∂ln Z/∂T)_V.

Thus we see that once the partition function is evaluated by summing over the states, all relevant thermodynamic variables can be obtained by differentiating Z.

It is instructive to examine this expression for the internal energy further. This will also confirm the identification of this function of state as the actual energy content of the system. If pj is the probability of the system being in the eigenstate corresponding to energy εj, then the mean energy of the system may be expressed as

E = Σ_j εj pj = (1/Z) Σ_j εj e^{−β εj},

where Z is the previously-defined partition function, and it is convenient here to work in terms of β rather than converting to 1/kT. In examining the sum we note that

εj e^{−β εj} = −(∂/∂β) e^{−β εj},

so that the expression for E may be written, after interchanging the differentiation and the summation,

E = −(1/Z) (∂/∂β) Σ_j e^{−β εj}.

But the sum here is just the partition function, so that

E = −(1/Z) ∂Z/∂β = −∂ln Z/∂β.

And since β = 1/kT, this is equivalent to our previous expression

E = kT² (∂ln Z/∂T)_V;

however, the mathematical manipulations are often more convenient in terms of β.
2.5.6 Summary of methodology
We have seen that the partition function provides a means of calculating all thermodynamic information from the microscopic description of a system. We summarise this procedure as follows:
1 Write down the possible energies for the system.
2 Evaluate the partition function for the system.
3 The Helmholtz free energy then follows from this.
4 All thermodynamic variables follow from differentiation of the Helmholtz free energy.
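This recipe translates directly into a short numerical sketch. Units with k = 1 and a finite-difference temperature derivative are assumptions made here for simplicity; the energy levels can be any finite list.

```python
from math import exp, log

# The four-step recipe in code, for any finite list of energy levels.
k = 1.0                       # work in units where k = 1

def Z(T, eps):                # step 2: the partition function
    return sum(exp(-e / (k * T)) for e in eps)

def F(T, eps):                # step 3: Helmholtz free energy, F = -kT ln Z
    return -k * T * log(Z(T, eps))

def S(T, eps, h=1e-6):        # step 4: S = -(dF/dT)_V, by central difference
    return -(F(T + h, eps) - F(T - h, eps)) / (2 * h)

def E(T, eps):                # internal energy from E = F + TS
    return F(T, eps) + T * S(T, eps)

eps = [-1.0, 1.0]             # step 1: e.g. the two-level system of section 2.4
print(F(2.0, eps), S(2.0, eps), E(2.0, eps))
```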

2.6 Localised systems


2.6.1 Working in terms of the partition function of a single particle
The partition function methodology described in the previous sections is general and very powerful. However, a considerable simplification follows if one is considering systems made up of non-interacting distinguishable or localised particles. The point about localised particles is that even though they may be indistinguishable (perhaps a solid made up of argon atoms), the particles can be distinguished by their positions.

Since the partition function for distinguishable systems is the product of the partition functions for each system, if we have an assembly of N localised identical particles, and if the partition function for a single such particle is z, then the partition function for the assembly is

Z = z^N.

It then follows that the Helmholtz free energy for the assembly is

F = −kT ln Z = −NkT ln z;

in other words, the free energy is N times the free energy contribution of a single particle; the free energy is an extensive quantity, as expected. This allows an important simplification to the general methodology outlined above. For localised systems we need only consider the energy levels of a single particle. We then evaluate the partition function z of a single particle and use the relation F = −NkT ln z in order to find the thermodynamic properties of the system. We will consider two examples of this, one familiar and one new.
______________________________________________________________ End of lecture 8

2.6.2 Using the partition function I: the S = 1/2 paramagnet (again)
Step 1) Write down the possible energies for the system
The assembly of magnetic moments is placed in a magnetic field B. Each spin has two quantum states, which we label by ↑ and ↓. The two energy levels are then

ε↑ = −μB,  ε↓ = +μB.

Step 2) Evaluate the partition function for the system
We require the partition function for a single spin. This is

z = Σ_{states i} e^{−εi/kT} = e^{μB/kT} + e^{−μB/kT}.

This time we shall obtain the results in terms of hyperbolic functions rather than exponentials, for variety. The partition function is expressed as the hyperbolic cosine:

z = 2 cosh(μB/kT).

Step 3) The Helmholtz free energy then follows from this Here we use the relation F
= NkT = NkT

ln z ln {2 cosh B / kT } .

Step 4) All thermodynamic variables follow from differentiation of the Helmholtz free energy
Before proceeding with this step we must pause to consider the performance of magnetic work. Here we don't have pressure and volume as our work variables; we have magnetisation M and magnetic field B. The expression for magnetic work is

dW = -M\,dB,

so comparing this with our familiar -p\,dV we see that, when dealing with magnetic systems, we must make the identifications

p \to M, \qquad V \to B.

The internal energy differential of a magnetic system is then

dE = T\,dS - M\,dB

and, of more immediate importance, the Helmholtz free energy F = E - TS has the differential

dF = -S\,dT - M\,dB.

We can then identify the various partial derivatives:

S = -\left.\frac{\partial F}{\partial T}\right|_B = NkT \left.\frac{\partial \ln z}{\partial T}\right|_B + Nk \ln z,

M = -\left.\frac{\partial F}{\partial B}\right|_T = NkT \left.\frac{\partial \ln z}{\partial B}\right|_T.

Upon differentiation we then obtain

S = -\frac{N\mu B}{T}\tanh(\mu B/kT) + Nk\ln\{2\cosh(\mu B/kT)\},

M = N\mu \tanh(\mu B/kT).

The internal energy is best obtained now from E = F + TS:

E = -NkT\ln\{2\cosh(\mu B/kT)\} + T\left[Nk\ln\{2\cosh(\mu B/kT)\} - \frac{N\mu B}{T}\tanh(\mu B/kT)\right],

giving

E = -N\mu B \tanh(\mu B/kT).

We note that the internal energy can be written as E = -MB, as expected.

By differentiating the internal energy with respect to temperature we obtain the thermal capacity at constant field:

C_B = Nk \left(\frac{\mu B}{kT}\right)^2 \mathrm{sech}^2(\mu B/kT).

Some of these expressions were derived before, from the microcanonical approach; you should check that the exponential expressions there are equivalent to the hyperbolic expressions here. We plot the magnetisation as a function of inverse temperature. Recall that at high temperatures we have a linear region, where Curie's law holds. At low temperatures the magnetisation saturates as all the moments become aligned along the applied field. Incidentally, we note that the expression

M = N\mu \tanh(\mu B/kT)

is the equation of state for the paramagnet. This is a nonlinear equation. But recall that we found linear behaviour in the high T/B region, where

M \approx \frac{N\mu^2 B}{kT},

just as it is in the high temperature region that the ideal gas equation of state p = NkT/V is valid.

[Figure: magnetisation M/Nμ of the spin-1/2 paramagnet against μB/kT, showing the linear (Curie law) region M ≈ Nμ²B/kT at small μB/kT and saturation M/Nμ → 1 at large μB/kT.]

Next we plot the entropy, internal energy and thermal capacity as functions of temperature. Observe that they all go to zero as T → 0. At low temperatures the entropy goes to zero, as expected, while at high temperatures the entropy goes to the classical two-state value of k ln 2 per particle:

S \approx 2Nk \frac{\mu B}{kT}\, e^{-2\mu B/kT}, \qquad T \to 0,

S \to Nk \ln 2, \qquad T \to \infty.

The thermal capacity is particularly important as it is readily measurable. It exhibits a maximum of C_B ≈ 0.44 Nk at a temperature T ≈ 0.83 μB/k. This is known as a Schottky anomaly. Ordinarily the thermal capacity of a substance increases with temperature, saturating at a constant value at high temperatures. Spin systems are unusual in that the energy states of a spin are finite in number and therefore bounded from above. The system then has a maximum entropy. As the entropy increases towards this maximum value it becomes increasingly difficult to pump in more heat energy.

At both high and low temperatures the thermal capacity goes to zero, as may be seen by expanding the expression for C_B:

C_B \approx 4Nk \left(\frac{\mu B}{kT}\right)^2 e^{-2\mu B/kT}, \qquad T \to 0,

C_B \approx Nk \left(\frac{\mu B}{kT}\right)^2, \qquad T \to \infty.

[Figure: entropy of spin-1/2 paramagnet against kT/μB, rising towards the high-temperature value Nk ln 2; the curve shifts to lower temperatures for lower B.]
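An added numerical check (not in the original notes) of the quoted Schottky maximum, using the hyperbolic form C_B/Nk = x² sech²x with x = μB/kT derived above:

    import numpy as np

    x = np.linspace(0.01, 10.0, 200000)
    cB = x**2 / np.cosh(x)**2    # C_B / Nk as a function of x = mu*B/(k*T)

    i = np.argmax(cB)
    print(cB[i])        # ~ 0.44, the maximum of C_B/Nk
    print(1.0 / x[i])   # ~ 0.83, the corresponding kT/(mu*B)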


[Figure: internal energy E/NμB of the spin-1/2 paramagnet against kT/μB, rising from -1 at T = 0 towards 0 at high temperature.]


[Figure: thermal capacity C_B/Nk of the spin-1/2 paramagnet against kT/μB, showing the Schottky maximum of about 0.44 near kT/μB ≈ 0.83.]
______________________________________________________________
End of lecture 9

2.6.3 Using the partition function II: the Einstein model of a solid
One of the challenges faced by Einstein was the explanation of why the thermal capacity of solids tended to zero at low temperatures. He was concerned with nonmagnetic insulators, and he had an inkling that the explanation was something to do with quantum mechanics. The thermal excitations in the solid are due to the vibrations of the atoms, and Einstein constructed a simple model of this which was partially successful in explaining the thermal capacity.

In Einstein's model each atom of the solid is regarded as a simple harmonic oscillator, vibrating in the potential energy minimum produced by its neighbours. Each atom sees a similar potential, so they all oscillate at the same frequency; let us call this \omega/2\pi. And since each atom can vibrate in three independent directions, the solid of N atoms is modelled as a collection of 3N identical harmonic oscillators. We shall follow the procedures outlined above.

Step 1) Write down the possible energies for the system
The energy states of the harmonic oscillator form an infinite ladder of equally spaced levels (\hbar\omega/2, 3\hbar\omega/2, 5\hbar\omega/2, \ldots); you should be familiar with this from your quantum mechanics course. The single-particle energies are

\varepsilon_j = \left(j + \tfrac{1}{2}\right)\hbar\omega, \qquad j = 0, 1, 2, 3, \ldots

Step 2) Evaluate the partition function for the system
We require the partition function for a single harmonic oscillator. This is

z = \sum_{j=0}^{\infty} e^{-\varepsilon_j/kT} = \sum_{j=0}^{\infty} \exp\left\{-\left(j + \tfrac{1}{2}\right)\frac{\hbar\omega}{kT}\right\}.

To proceed, let us define a characteristic temperature \theta, related to the oscillator frequency by \theta = \hbar\omega/k. Then the partition function may be written as

z = \sum_{j=0}^{\infty} \exp\left\{-\left(j + \tfrac{1}{2}\right)\frac{\theta}{T}\right\} = e^{-\theta/2T} \sum_{j=0}^{\infty} \left(e^{-\theta/T}\right)^j,

and we observe the sum here to be a (convergent) geometric progression. You should recall the result

\sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \ldots = \frac{1}{1-x}.

If you don't remember this result then multiply the power series by 1 - x and check the sum. The harmonic oscillator partition function is then given by

z = \frac{e^{-\theta/2T}}{1 - e^{-\theta/T}}.

Step 3) The Helmholtz free energy then follows from this
Here we use the relation

F = -3NkT \ln z = \frac{3Nk\theta}{2} + 3NkT \ln\left\{1 - e^{-\theta/T}\right\}.

Step 4) All thermodynamic variables follow from differentiation of the Helmholtz free energy
Here we have no explicit volume dependence, indicating that the equilibrium solid is at zero pressure. If an external pressure were applied then the vibration frequency \omega/2\pi would be a function of the interparticle spacing or, equivalently, the volume. This would give a volume dependence to the free energy, from which the pressure could be found. However, we shall ignore this for now. The thermodynamic variables we are interested in are then the entropy, internal energy and thermal capacity. The entropy is

S = -\left.\frac{\partial F}{\partial T}\right|_V = -3Nk \ln\left(1 - e^{-\theta/T}\right) + \frac{3Nk\,\theta/T}{e^{\theta/T} - 1}.

In the low temperature limit S goes to zero, as one would expect. The limiting low temperature behaviour is

S \approx 3Nk \frac{\theta}{T} e^{-\theta/T}, \qquad T \to 0.

At high temperatures the entropy tends towards a logarithmic increase,

S \approx 3Nk \ln\left(\frac{T}{\theta}\right), \qquad T \gg \theta.

The internal energy is found from

E = -3N \frac{\partial \ln z}{\partial \beta},

where we write ln z as

\ln z = -\frac{\beta\hbar\omega}{2} - \ln\left\{1 - e^{-\beta\hbar\omega}\right\}.

Then on differentiation we find

E = 3N\left\{\frac{\hbar\omega}{2} + \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}\right\}.

The first term represents the contribution to the internal energy from the zero point oscillations; it is present even at T = 0. At high temperatures the variation of the internal energy is

E \approx 3NkT\left[1 + \frac{1}{12}\left(\frac{\theta}{T}\right)^2 - \ldots\right].

The first term is the classical equipartition part, which is independent of the vibration frequency.

Turning now to the thermal capacity, we have

C_V = \left.\frac{\partial E}{\partial T}\right|_V = 3Nk\frac{\partial}{\partial T}\left\{\frac{\theta}{e^{\theta/T} - 1}\right\}.

Upon differentiation this then gives

C_V = 3Nk\left(\frac{\theta}{T}\right)^2 \frac{e^{\theta/T}}{\left(e^{\theta/T} - 1\right)^2}.

At high temperatures the variation of the thermal capacity is

C_V \approx 3Nk - \frac{Nk}{4}\left(\frac{\theta}{T}\right)^2 + \ldots,

where the first term is the constant equipartition value. At low temperatures we find

C_V \approx 3Nk\left(\frac{\theta}{T}\right)^2 e^{-\theta/T}.

We see that this model does indeed predict the decrease of C_V to zero as T → 0, which was Einstein's challenge. The decrease in heat capacity below the classical temperature-independent value is seen to arise from the quantisation of the energy levels. However, the observed low temperature behaviour of such thermal capacities is a simple T³ variation, rather than the more complicated variation predicted by this model. The explanation is that the atoms do not all oscillate independently: the coupling of their motion leads to normal mode waves propagating in the solid with a continuous range of frequencies. This was explained by the Debye model of solids.

The Einstein model introduces a single new parameter, the frequency \omega/2\pi of the oscillations or, equivalently, the characteristic temperature \theta = \hbar\omega/k. The prediction is that C_V is a universal function of T/\theta. To the extent that the model has some validity, each substance should have its own value of \theta; as the C_V figure shows, the value for diamond is 1300 K. This is an example of a law of corresponding states: when the temperatures are scaled appropriately, all substances should exhibit the same behaviour.
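As an added illustration (not in the original notes), the Einstein C_V is easy to evaluate numerically; the sketch below uses the \theta = 1300 K value quoted for diamond and shows the approach to the classical value 3Nk.

    import numpy as np

    def cv_einstein(T, theta):
        # C_V/Nk = 3 (theta/T)^2 e^(theta/T) / (e^(theta/T) - 1)^2, from above
        x = theta / T
        return 3.0 * x**2 * np.exp(x) / np.expm1(x)**2

    theta = 1300.0   # characteristic temperature quoted for diamond, in K
    for T in (100.0, 300.0, 1300.0, 5000.0):
        print(T, cv_einstein(T, theta))   # rises towards the classical value 3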

[Figure: entropy S/Nk of the Einstein solid against T/θ.]

[Figure: internal energy E/Nkθ of the Einstein solid against T/θ, showing the classical limit E_cl = 3NkT and the zero-point value E_0 = (3/2)Nkθ.]

[Figure: thermal capacity C_V/Nk of the Einstein solid against T/θ, with experimental points for diamond fitted by θ = 1300 K.]

Finally we give the calculated properties of the Einstein model in terms of hyperbolic functions; you should check these. The partition function for the simple harmonic oscillator can be written as

z = \frac{1}{2}\,\mathrm{cosech}\left(\frac{\theta}{2T}\right),

so that the Helmholtz free energy for the solid comprising 3N such oscillators is

F = 3NkT \ln\left\{2\sinh\left(\frac{\theta}{2T}\right)\right\}.

And from this the various properties follow:

S = 3Nk\left\{\frac{\theta}{2T}\coth\left(\frac{\theta}{2T}\right) - \ln\left[2\sinh\left(\frac{\theta}{2T}\right)\right]\right\},

E = \frac{3Nk\theta}{2}\coth\left(\frac{\theta}{2T}\right),

C_V = 3Nk\left(\frac{\theta}{2T}\right)^2 \mathrm{cosech}^2\left(\frac{\theta}{2T}\right).

______________________________________________________________
End of lecture 10

Equation of state for the Einstein solid
We have no equation of state for this model so far. No p-V relation has been found, since there was no volume-dependence in the energy levels, and thus none in the partition function and everything which followed from it. This deficiency can be rectified by recognising that the Einstein frequency, or equivalently the temperature \theta, may vary with volume. Then the pressure may be found from

p = -\left.\frac{\partial F}{\partial V}\right|_T = -\frac{3Nk}{2}\coth\left(\frac{\theta}{2T}\right)\frac{d\theta}{dV}.

We note that the equilibrium state of the solid when no pressure is applied corresponds to the vanishing of d\theta/dV. In fact \theta will be a minimum at the equilibrium volume V_0 (why?). For small changes of volume from this equilibrium we may then write

\theta = \theta_0 + \frac{1}{2}\left.\frac{d^2\theta}{dV^2}\right|_{V_0}(V - V_0)^2,

so that

\frac{d\theta}{dV} = \left.\frac{d^2\theta}{dV^2}\right|_{V_0}(V - V_0).

Writing the curvature as d^2\theta/dV^2|_{V_0} = 2\theta_0/V_0^2, the equation of state for the solid is

p = 3Nk\theta_0\frac{(V_0 - V)}{V_0^2}\coth\left(\frac{\theta_0}{2T}\right).

The high-temperature limit of this is

p = 6Nk\frac{(V_0 - V)}{V_0^2}\,T,

and the low-temperature, temperature-independent limit is

p = 3Nk\theta_0\frac{(V_0 - V)}{V_0^2}.

Section 3 Entropy and Classical Thermodynamics


3.1 Entropy in thermodynamics and statistical mechanics
3.1.1 The Second Law of Thermodynamics
There are various statements of the second law of thermodynamics. These must obviously be logically equivalent. In the spirit of our approach we shall adopt the following statement:

There exists an extensive function of state called entropy, such that in any process the entropy of an isolated system increases or remains constant, but cannot decrease.

The thermodynamic definition of entropy is as follows. If an infinitesimal amount of heat dQ is added reversibly to a system at temperature T (where T is the absolute temperature of the system), then the entropy of the system increases by dS = dQ/T. We shall see that this definition (and note that in thermodynamics only entropy changes are defined) is consistent with the statistical mechanics definition of entropy S = k ln \Omega (here the absolute value of the entropy is defined). We have seen already that the law of increasing entropy is a natural consequence of the equation S = k ln \Omega. In some thermodynamics texts (the classic is the book by H. B. Callen) this law is given the status of a postulate or axiom. It may be summarised by

\Delta S \geq 0.

3.1.2 Restatement of the First Law
Let us now ask what form the first law takes, given this connection between entropy and heat. For an infinitesimal change

dE = dQ + dW \qquad \text{(true always)}.

Taking a fluid (p-V variables) as our thermodynamic system and considering a reversible process, we have

dW = -p\,dV \qquad \text{(reversible process)}

and also

dQ = T\,dS \qquad \text{(reversible process)}.

Thus

dE = T\,dS - p\,dV \qquad \text{(reversible process)}.

But all the variables in this last expression are state variables. In a process connecting two states,

\Delta E = E_{\text{final}} - E_{\text{initial}}, \text{ etc.},

and this does not depend on any details of the process. So we conclude (Finn p. 86) that for all processes, reversible or irreversible,

dE = T\,dS - p\,dV \qquad \text{always}.

This form of the first law is very important. Note that all the differentials are exact (i.e. they are differentials of well-defined functions of state). You see that T = \partial E/\partial S|_V, in agreement with the statistical definition of temperature.

3.1.3 Microscopic interpretation of the first law
Returning to the microscopic view, the internal energy of a system is given by

E = \sum_j p_j \varepsilon_j,

where p_j is the probability that the system is in the jth energy state. So E can change because of changes in \varepsilon_j, the energies of the possible quantum states of the system, or changes in p_j, the probability distribution for the system. In an infinitesimal change we can write

dE = \sum_j \varepsilon_j\, dp_j + \sum_j p_j\, d\varepsilon_j.

The first (thermal) term is identified with T dS, the second (mechanical) term with -p dV. Entropy changes are associated with changes in the occupancies of states, while the mechanical term (work) relates to the energy change of the states. Let us establish this equivalence.

a) Work term
Suppose the energies \varepsilon_j depend on volume (as they do for particles in a box, i.e. a gas). Then

d\varepsilon_j = \frac{\partial \varepsilon_j}{\partial V} dV.

We suppose the derivative to be taken at constant p_j, i.e. constant S. Then

\sum_j p_j\, d\varepsilon_j = \sum_j p_j \left.\frac{\partial \varepsilon_j}{\partial V}\right|_S dV,

and since the derivative is taken at constant p_j, the differentiation can be taken outside of the sum:

\sum_j p_j\, d\varepsilon_j = \left.\frac{\partial}{\partial V}\right|_S \left(\sum_j p_j \varepsilon_j\right) dV = \left.\frac{\partial E}{\partial V}\right|_S dV = -p\,dV.

Thus we have identified the \sum_j p_j d\varepsilon_j term of dE as the work done on the system.

b) Heat term
The fact that T dS = \sum_j \varepsilon_j dp_j takes a bit longer to prove (see also Guenault p. 25, although our approach differs in detail from his; we have already shown \beta = 1/kT). We must make the connection with entropy, so we start from the expression for S for the Gibbs ensemble:

S_{\text{ensemble}} = k\left(N \ln N - \sum_j n_j \ln n_j\right).

This is the entropy for the ensemble of N equivalent systems. What we are really interested in is the (mean) entropy of a single system, which of course should be independent of N. We express the first N as the sum \sum_j n_j, so that

S_{\text{ensemble}} = k\left(\sum_j n_j \ln N - \sum_j n_j \ln n_j\right) = -k\sum_j n_j \ln\left(\frac{n_j}{N}\right).

Then the (mean) entropy of a single system is one Nth of this:

S = -k\sum_j \left(\frac{n_j}{N}\right)\ln\left(\frac{n_j}{N}\right),

but here we can identify n_j/N as p_j, the probability that a system will be in the jth state. So we have the useful formula for the entropy of a non-isolated system:

S = -k\sum_j p_j \ln p_j.

We shall now use this expression to make the connection between \sum_j \varepsilon_j dp_j and heat. The differential of S is

dS = -k\sum_j \left(dp_j \ln p_j + dp_j\right).

Now the second dp_j sums to zero, since the total probability is constant. Thus we have

dS = -k\sum_j dp_j \ln p_j,

and for ln p_j we shall use the Boltzmann factor

p_j = \frac{e^{-\varepsilon_j/kT}}{Z}, \qquad \text{so that} \qquad \ln p_j = -\frac{\varepsilon_j}{kT} - \ln Z,

giving

dS = \frac{k}{kT}\sum_j \varepsilon_j\, dp_j + k\ln Z \sum_j dp_j.

The second term in the sum is zero since Z is a constant and \sum_j dp_j is zero, as above. So only the first term contributes and we obtain

dS = \frac{1}{T}\sum_j \varepsilon_j\, dp_j, \qquad \text{or} \qquad T\,dS = \sum_j \varepsilon_j\, dp_j,

which completes the demonstration.
______________________________________________________________
End of lecture 11
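An added numerical illustration (not in the original notes) of T dS = \sum_j \varepsilon_j dp_j: for fixed energy levels (no work), a small temperature change alters only the p_j, and the two sides agree. The three energy values are invented.

    import numpy as np

    k = 1.0                             # work in units where k = 1
    eps = np.array([0.0, 1.0, 2.5])     # invented energy levels (volume fixed)

    def probs(T):
        w = np.exp(-eps / (k * T))      # Boltzmann factors
        return w / w.sum()

    def S(T):
        p = probs(T)
        return -k * np.sum(p * np.log(p))   # S = -k sum_j p_j ln p_j

    T, dT = 2.0, 1e-6
    lhs = T * (S(T + dT) - S(T))                      # T dS
    rhs = np.sum(eps * (probs(T + dT) - probs(T)))    # sum_j eps_j dp_j
    print(lhs, rhs)                     # the two sides agree to high accuracy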


3.1.4 Entropy changes in irreversible processes
We have seen that dS = dQ/T, and we stated that this applies in a reversible change. (In a separate handout we gave some examples of calculating entropy changes.) What happens to the entropy in an irreversible process?

To highlight matters let us take an isolated system, for which dQ = 0. Consider a process undergone by this system on removing a constraint. The microscopic understanding of the second law tells us the entropy will increase, as the system goes to the new (more probable) equilibrium macrostate with a larger number of microstates. Since for an isolated system dQ = 0, the expression dS = dQ/T certainly does not apply. Indeed, in this case T dS > dQ. This is generally true for irreversible processes.

Consider a specific example of an irreversible process: the infinitesimal adiabatic free expansion of a gas, by the removal of a partition in a chamber. The partition is the constraint; replacing the partition will certainly not get us back to the initial state. In this case

dQ = 0 \quad \text{and} \quad dW = 0,

so then dE = 0 by the first law. Since the expansion is infinitesimal, p is essentially constant, and

p\,dV > 0.

But we know that

dE = T\,dS - p\,dV

for all processes. Therefore in the free expansion, for which dE = 0, if p dV > 0 we must correspondingly have T dS > 0. So although no heat has flowed into the system, the entropy has increased. This is a consequence of the irreversible nature of the process. To summarise:

T\,dS > dQ \quad \text{and} \quad -p\,dV < dW \qquad \text{in an irreversible process.}

3.2 Alternative statements of the Second Law


3.2.1 Some statements of the Second Law
Now when you read Chapter 4 of Finn you will find that the second law is introduced through a discussion of cyclic processes and heat engines, and the statement of the law takes a rather different form from the one we have given. That is how the subject developed historically. But we are approaching things from a somewhat different angle: we started from statistical mechanics, and the principle of increasing entropy follows easily from that treatment. We'll look at these cyclic processes in a bit. Here are various statements of the second law:

1a. Heat cannot pass spontaneously from a lower to a higher temperature, while the constraints on the system and the state of the rest of the world are left unchanged.

1b. It is impossible to construct a device that, operating in a cycle, produces no effect other than the transfer of heat from a colder to a hotter body. (Clausius)

2. It is impossible to construct a device that, operating in a cycle, produces no effect other than the extraction of heat from a body and the performance of an equivalent amount of work. (Kelvin-Planck)

These various statements can be shown to follow from each other (Finn p. 57) and from the law of increasing entropy.

3.2.2 Demonstration that the law of increasing entropy implies statement 1a
Let us show that \Delta S \geq 0 implies statement 1a. (The argument here is identical to that in Section 2.3.3.) Consider two systems A and B separated by a fixed diathermal wall. The volumes of each system are thus constant, so no work can be done. Since the composite system is thermally isolated, \Delta S \geq 0. If the two systems are not in thermal equilibrium with each other (i.e. they are at different temperatures) then heat will flow between them. Consider the exchange of an infinitesimal amount of heat dQ.

[Diagram: systems A and B separated by a fixed diathermal wall; heat dQ flows into A.]

The law of entropy increase tells us that

dS = dS_A + dS_B = \left.\frac{\partial S_A}{\partial E_A}\right|_{V_A} dE_A + \left.\frac{\partial S_B}{\partial E_B}\right|_{V_B} dE_B = \frac{1}{T_A}dE_A + \frac{1}{T_B}dE_B \geq 0,

since, in general, 1/T = \partial S/\partial E|_V. Now since no work is done on the system,

dQ = dE_A = -dE_B.

Thus

dQ\left(\frac{1}{T_A} - \frac{1}{T_B}\right) \geq 0,

and so if dQ is positive this expression implies that T_A \leq T_B. So heat must flow spontaneously from hot to cold. This is part of our own experience. Note that we can make heat flow from cold to hot if we perform work; a device to do this is called a heat pump (see later).

3.3 The Carnot cycle


3.3.1 Introduction to Carnot cycles: thermodynamic temperature
The Carnot cycle is an important example of a cyclic process. In such a process the state of the working substance is varied but returns to the original state, so at the end of the cycle the functions of state of the working substance are unchanged and the cycle can be repeated. It thus serves as an idealised model for the operation of real heat engines. It was of practical value at the time of the Industrial Revolution to achieve an understanding of the limitations to the conversion of heat into work. The analysis by Carnot in 1824 was his only publication, but a seminal achievement.

We represent the cycle on an indicator diagram, as all steps are presumed to be reversible processes. In one cycle a quantity of heat Q1 is taken from a source at high temperature T1, and Q2 is rejected to a sink at a lower temperature T2.

[Figure: the Carnot cycle. Indicator diagram a→b→c→d bounded by the T1 and T2 isotherms and two adiabats, with a schematic representation of the engine taking Q1 from the T1 source and rejecting Q2 to the T2 sink.]

Since the process is cyclic the working substance returns to its initial state. And since E and S are both functions of state, it follows that for the working substance

\Delta E = 0 \quad \text{and} \quad \Delta S = 0.

Since \Delta E = 0, the first law tells us

W = Q_1 - Q_2.

Now the whole system, including the reservoirs, is isolated. So according to our statement of the second law,

\Delta S_{\text{total}} \geq 0;

the total entropy change is the sum of that of reservoirs 1 and 2 and that of the working substance:

\Delta S_1 + \Delta S_2 + \Delta S \geq 0.

Since

\Delta S = 0, \qquad \Delta S_1 = -\frac{Q_1}{T_1}, \qquad \Delta S_2 = +\frac{Q_2}{T_2},

then

-\frac{Q_1}{T_1} + \frac{Q_2}{T_2} \geq 0.

Now imagine that the cycle is operated backwards, so that we have a heat pump. The analysis is identical except that

Q_1 \to -Q_1, \qquad Q_2 \to -Q_2.

Then the law of entropy increase tells us that

\frac{Q_1}{T_1} - \frac{Q_2}{T_2} \geq 0.

It is clear that the equality must hold in such a reversible cycle, so

\frac{Q_1}{Q_2} = \frac{T_1}{T_2}.

In thermodynamics texts this is taken to be the definition of the thermodynamic temperature scale. By taking the working substance to be an ideal gas we may show the identity of this temperature scale with the ideal gas scale.
______________________________________________________________
End of lecture 12

3.3.2 Efficiency of heat engines and heat pumps
The efficiency of an engine characterises how well the engine transforms heat into work.
[Figure: a heat engine. Heat Q1 in from the T1 source, work W out, heat Q2 rejected to the T2 sink.]

The efficiency of the engine is defined as

\eta = \frac{\text{work out}}{\text{heat in}} = \frac{W}{Q_1}.

Since W = Q_1 - Q_2, it follows that

\eta = \frac{Q_1 - Q_2}{Q_1} = 1 - \frac{Q_2}{Q_1} = 1 - \frac{T_2}{T_1},

so the efficiency of a Carnot engine depends only on the temperatures of the reservoirs.

The significance of this result is that the efficiency depends only on the choice of reservoir temperatures (and not on the choice of working substance). It may be shown (Finn p. 59) that no engine operating between the same two reservoirs can be more efficient than a Carnot engine, and that all reversible engines operating between the same two reservoirs are equally efficient. In practical engines T2 is fixed (near ambient temperature), so in order to increase \eta we must increase T1 by, for example, the use of superheated steam. Note that in principle 100% efficiency would be achievable for T2 = 0; the unattainability of absolute zero (the third law of thermodynamics) disallows this, however. A heat pump is a heat engine operating in reverse. In this case work is done on the working substance, and as a consequence heat is pumped from a cooler reservoir to a hotter one.
[Figure: a heat pump. Work W in, heat Q2 extracted from the T2 source, heat Q1 delivered to the T1 sink.]

The efficiency of a heat pump is defined as

\eta_{\text{pump}} = \frac{\text{heat out}}{\text{work in}} = \frac{Q_1}{W}.

As an example of a heat pump, the Festival Hall is heated from the (cooler) River Thames. You see that the efficiency of a heat pump is defined differently from that of a heat engine. In fact the efficiency of a heat pump will always be greater than 100%! Otherwise we might as well just use an electric heater and get exactly 100%. The efficiency of a Carnot heat pump is given by

\eta_{\text{pump}} = \frac{Q_1}{Q_1 - Q_2} = \frac{1}{1 - T_2/T_1};

it also depends only on the initial and final temperatures. Observe that the efficiency is greatest when the two temperatures are close together. For this reason heat pumps are best used for providing low-level background heating.
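A small numerical illustration (added here; the temperatures are invented examples, not from the notes):

    def carnot_engine_efficiency(T1, T2):
        # eta = 1 - T2/T1 for a Carnot engine between hot T1 and cold T2 (in K)
        return 1.0 - T2 / T1

    def carnot_heat_pump_efficiency(T1, T2):
        # eta = 1/(1 - T2/T1): heat delivered per unit work; always exceeds 1
        return 1.0 / (1.0 - T2 / T1)

    print(carnot_engine_efficiency(500.0, 300.0))     # 0.4
    print(carnot_heat_pump_efficiency(300.0, 280.0))  # 15.0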

3.3.3 Equivalence of ideal gas and thermodynamic temperatures
Thus far we have seen that our originally introduced statistical temperature, defined by

\frac{1}{T} = \left.\frac{\partial S}{\partial E}\right|_V \qquad \text{(no work)},

is equivalent to thermodynamic temperature, defined by

\frac{T_2}{T_1} = \frac{Q_2}{Q_1},

the heat ratio for a Carnot cycle. In this section we will demonstrate the equivalence with the ideal gas temperature. For clarity we denote the ideal gas temperature by t; we take the ideal gas equation of state as

pV = Nkt,

and our task is to calculate the heat ratio for a Carnot cycle having a working substance which obeys this equation of state.
[Figure: ideal gas Carnot cycle. Indicator diagram with isotherms pV = Nkt1 (a→b, heat Q1 in) and pV = Nkt2 (c→d, heat Q2 out), joined by adiabats pV^γ = C1 (b→c) and pV^γ = C2 (d→a).]
Two other properties of the ideal gas will be used. We have the adiabatic equation

pV^\gamma = \text{const},

and we will use the fact that the internal energy of an ideal gas depends only on its temperature; in particular, E will remain constant along an isotherm. The cycle consists of four steps:

1 (a → b) During the isothermal expansion a → b, heat Q1 is taken in at constant temperature t1. The internal energy is constant during this step, so the work done on the system is minus the heat flow Q1 into the system.

2 (b → c) During the adiabatic expansion b → c there is no heat flow, but the gas cools from temperature t1 to temperature t2.

3 (c → d) During the isothermal compression c → d, heat Q2 is given out at constant temperature t2. The internal energy is constant during this step, so the work done on the system is equal to the heat flow Q2 out of the system.

4 (d → a) During the adiabatic compression d → a there is no heat flow, but the gas is warmed from temperature t2 up to the starting temperature t1. The initial state has been recovered.

We must calculate the heat flows in steps 1 and 3. In step 1 the heat flow Q1 into the system is minus the work done on the system:

Q_1 = +\int_{V_a}^{V_b} p\, dV,

but the path is an isotherm, so that p is given by

p = \frac{Nkt_1}{V}.

Integrating up the expression for Q1 we then find

Q_1 = Nkt_1 \int_{V_a}^{V_b} \frac{dV}{V}, \qquad \text{or} \qquad Q_1 = Nkt_1 \ln\frac{V_b}{V_a}.

In step 3 the heat flow Q2 out of the system is equal to the work done on the system:

Q_2 = -\int_{V_c}^{V_d} p\, dV,

but the path is an isotherm, so that p is given by

p = \frac{Nkt_2}{V}.

Integrating up the expression for Q2 we then find

Q_2 = -Nkt_2 \int_{V_c}^{V_d} \frac{dV}{V}, \qquad \text{or} \qquad Q_2 = Nkt_2 \ln\frac{V_c}{V_d}.

And the ratio of the heats in and out is then

\frac{Q_1}{Q_2} = \frac{t_1 \ln(V_b/V_a)}{t_2 \ln(V_c/V_d)}.

The ratio t1/t2 looks promising, but we must now deal with the volume factors. First we summarise the simultaneous relations between p and V at the points a, b, c, d:

a: \quad p_a V_a = Nkt_1, \qquad p_a V_a^\gamma = C_2,
b: \quad p_b V_b = Nkt_1, \qquad p_b V_b^\gamma = C_1,
c: \quad p_c V_c = Nkt_2, \qquad p_c V_c^\gamma = C_1,
d: \quad p_d V_d = Nkt_2, \qquad p_d V_d^\gamma = C_2.

To eliminate the p's, let us divide each second (adiabatic) equation by the corresponding first (isothermal). This gives

V_a^{\gamma-1} = \frac{C_2}{Nkt_1}, \qquad V_b^{\gamma-1} = \frac{C_1}{Nkt_1}, \qquad V_c^{\gamma-1} = \frac{C_1}{Nkt_2}, \qquad V_d^{\gamma-1} = \frac{C_2}{Nkt_2},

and we then have the results

(V_b/V_a)^{\gamma-1} = C_1/C_2, \qquad (V_c/V_d)^{\gamma-1} = C_1/C_2.

These two expressions are equal, so that

\ln(V_b/V_a) = \ln(V_c/V_d),

giving the result

\frac{Q_1}{Q_2} = \frac{t_1}{t_2},

relating the heat input and output for an ideal gas Carnot cycle to the ideal gas temperature. But the identical relation was used as an operational definition of thermodynamic temperature. Thus we have demonstrated the equivalence of the ideal gas and the thermodynamic temperature scales (to within a multiplicative constant). Recall that our definition of thermodynamic temperature followed from our definition of statistical temperature. So in reality we now have the equivalence of the ideal gas, thermodynamic, and statistical temperatures.

[Diagram: triangle of equivalences between the thermodynamic, statistical and ideal gas temperature scales.]

Logically there is no need to complete the triangle and prove directly that the statistical and ideal gas scales are equivalent, but we will do that when studying the statistical mechanics of the ideal gas.
______________________________________________________________
End of lecture 13

3.4 Thermodynamic potentials


3.4.1 Equilibrium states
We have seen that the equilibrium state of an isolated system characterised by E, V, N is determined by maximising the entropy S. On the other hand, we know that a purely mechanical system settles down to the equilibrium state which minimises the energy: the state where the forces F_i = -\partial E/\partial x_i vanish. In this section we shall see how to relate these two ideas, and in the following sections we shall see how to extend them.

By a purely mechanical system we mean one with no thermal degrees of freedom: no changing of the populations of the different quantum states, i.e. constant entropy. But this should also apply for a thermodynamic system at constant entropy. So we should be able to find the equilibrium state in two ways:

- Maximise the entropy at constant energy.
- Minimise the energy at constant entropy.

That these approaches are equivalent may be seen by considering the E-S-X surface for a system. Here X is some extensive quantity that will vary when the system approaches equilibrium, like the energy in, or the number of particles in, one half of the system.

[Figure: the E-S-X surface cut by the plane S = S0, showing the equilibrium state P as a point of minimum energy at constant entropy; and cut by the plane E = E0, showing P as a point of maximum entropy at constant energy.]

At constant energy the plane E = E0 intersects the E-S-X surface along a line of possible states of the system. We have seen that the equilibrium state (the state of maximum probability) will be the point of maximum entropy: the point P on the curve. But now consider the same system at constant entropy. The plane S = S0 intersects the E-S-X surface along a different line of possible states. Comparing the two pictures, we see that the equilibrium point P is now the point of minimum energy.

Equivalence of the entropy maximum and the energy minimum principles depends on the shape of the E-S-X surface. In particular, it relies on:

a) S having a maximum with respect to X (guaranteed by the second law),
b) S being an increasing function of E (\partial S/\partial E = 1/T > 0 means positive temperatures),
c) E being a single-valued continuous function of S.

To demonstrate the equivalence of the entropy maximum and the energy minimum principles, we shall show that the converse would lead to a violation of the second law. Assume that the energy E is not the minimum value consistent with a given entropy S. Then let us take some energy out of the system in the form of work and return it in the form of heat. The energy is then the same but the entropy will have increased. So the original state could not have been an equilibrium state.

The equivalence of the energy minimum and the entropy maximum principle is rather like describing a circle as having the maximum area at fixed circumference, or the minimum circumference for a given area. We shall now look at the specification of equilibrium states when, instead of energy or entropy, other variables are held constant.

3.4.2 Constant temperature (and volume): the Helmholtz potential
To maintain a system of fixed volume at constant temperature we shall put it in contact with a heat reservoir. The equilibrium state can be determined by maximising the total entropy while keeping the total energy constant. The total entropy is the sum of the system entropy and that of the reservoir. The entropy maximum condition is then

dS_T = dS + dS_{\text{res}} = 0, \qquad d^2 S_T < 0.

The entropy differential for the reservoir is

dS_{\text{res}} = \frac{dE_{\text{res}}}{T_{\text{res}}} = -\frac{dE}{T},

since the total energy is constant. The total entropy maximum condition is then

dS - \frac{dE}{T} = 0, \qquad d^2 S - \frac{d^2 E}{T} < 0.

Or, since T is constant,

d(E - TS) = 0, \qquad d^2(E - TS) > 0,

which is the condition for a minimum in E - TS. But we have encountered E - TS before, in the consideration of the link between the partition function and thermodynamic properties: this function is the Helmholtz free energy F. So we conclude that

at constant N, V and T, F = E - TS is a minimum (the Helmholtz minimum principle).

We can understand this as a competition between two opposing effects. At high temperatures the entropy tends to a maximum, while at low temperatures the energy tends to a minimum. The balance between these competing processes is given, at general temperatures, by minimising the combination F = E - TS.

3.4.3 Constant pressure and energy: the Enthalpy function
To maintain a system at constant pressure we shall put it in mechanical contact with a volume reservoir. That is, it will be connected by a movable, thermally isolated piston to a very large volume. As before, we can determine the equilibrium state by maximising the total entropy while keeping the total energy constant; alternatively and equivalently, we can keep the total entropy constant while minimising the total energy. The energy minimum condition is

dE_T = dE + dE_{\text{res}} = 0, \qquad d^2 E_T > 0.

In this case the reservoir may do mechanical work on our system:

dE_{\text{res}} = -p_{\text{res}}\,dV_{\text{res}} = +p\,dV,

since the total volume is fixed. We then write the energy minimum condition as

dE + p\,dV = 0, \qquad d^2 E + p\,d^2 V > 0,

or, since p is constant,

d(E + pV) = 0, \qquad d^2(E + pV) > 0.

This is the condition for a minimum in E + pV. This function is called the enthalpy, and it is given the symbol H. So we conclude that

at constant N, p and S, H = E + pV is a minimum (the Enthalpy minimum principle).

3.4.4 Constant pressure and temperature: the Gibbs free energy
In this case our system can exchange both thermal and mechanical energy with a reservoir: both heat energy and volume may be exchanged. We work in terms of the minimum energy at constant entropy condition for the combined system plus reservoir:

dE_T = dE + dE_{\text{res}} = 0, \qquad d^2 E_T > 0.

In this case the reservoir may give heat energy and/or do mechanical work on our system:

dE_{\text{res}} = T_{\text{res}}\,dS_{\text{res}} - p_{\text{res}}\,dV_{\text{res}} = -T\,dS + p\,dV,

since the total entropy and total volume are fixed. We then write the energy minimum condition as

dE - T\,dS + p\,dV = 0, \qquad d^2 E - T\,d^2 S + p\,d^2 V > 0,

or, since T and p are constant,

d(E - TS + pV) = 0, \qquad d^2(E - TS + pV) > 0.

This is the condition for a minimum in E - TS + pV. This function is called the Gibbs free energy, and it is given the symbol G. So we conclude that

at constant N, p and T, G = E - TS + pV is a minimum (the Gibbs free energy minimum principle).

3.4.5 Differential expressions for the potentials
The internal energy, Helmholtz free energy, enthalpy and Gibbs free energy are called thermodynamic potentials. Clearly they are all functions of state. From the definitions

F = E - TS \qquad \text{(Helmholtz function)},
H = E + pV \qquad \text{(Enthalpy function)},
G = E - TS + pV \qquad \text{(Gibbs function)},

and the differential expression for the internal energy

dE = T\,dS - p\,dV,

we obtain the differential expressions for the potentials:

dF = -S\,dT - p\,dV,
dH = T\,dS + V\,dp,
dG = -S\,dT + V\,dp.

Considering the differentials as virtual changes, i.e. the system feeling out the situation in the neighbourhood of the equilibrium state, we immediately see that:

at fixed S and V: dS = 0 and dV = 0, so that dE = 0 and E is minimised;
at fixed T and V: dT = 0 and dV = 0, so that dF = 0 and F is minimised;
at fixed S and p: dS = 0 and dp = 0, so that dH = 0 and H is minimised;
at fixed T and p: dT = 0 and dp = 0, so that dG = 0 and G is minimised.

This summarises compactly the extremum principles obtained in the previous section.

3.4.6 Natural variables and the Maxwell relations
Each of the thermodynamic potentials has its own natural variables. For instance, taking E, the differential expression for the first law is

dE = T\,dS - p\,dV.

Thus if E is known as a function of S and V, then everything else, like T and p, can be obtained by differentiation, since T = \partial E/\partial S|_V and p = -\partial E/\partial V|_S. If, instead, E were known as a function of, say, T and V, then we would need more information, like an equation of state, to determine the remaining thermodynamic functions completely. All the potentials have their natural variables, in terms of which the dependent variables may be found by differentiation:

E(S, V): \quad dE = T\,dS - p\,dV, \qquad T = \left.\frac{\partial E}{\partial S}\right|_V, \qquad p = -\left.\frac{\partial E}{\partial V}\right|_S,

F(T, V): \quad dF = -S\,dT - p\,dV, \qquad S = -\left.\frac{\partial F}{\partial T}\right|_V, \qquad p = -\left.\frac{\partial F}{\partial V}\right|_T,

H(S, p): \quad dH = T\,dS + V\,dp, \qquad T = \left.\frac{\partial H}{\partial S}\right|_p, \qquad V = \left.\frac{\partial H}{\partial p}\right|_S,

G(T, p): \quad dG = -S\,dT + V\,dp, \qquad S = -\left.\frac{\partial G}{\partial T}\right|_p, \qquad V = \left.\frac{\partial G}{\partial p}\right|_T.

If we differentiate one of these results with respect to a further variable, then the order of differentiation is immaterial; differentiation is commutative. Thus, for instance, using the energy natural variables we see that

\left.\frac{\partial}{\partial V}\right|_S \left.\frac{\partial}{\partial S}\right|_V = \left.\frac{\partial}{\partial S}\right|_V \left.\frac{\partial}{\partial V}\right|_S,

and operating on E with this we obtain

\left.\frac{\partial}{\partial V}\right|_S \left.\frac{\partial E}{\partial S}\right|_V = \left.\frac{\partial}{\partial S}\right|_V \left.\frac{\partial E}{\partial V}\right|_S,

so that we obtain the result

\left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial p}{\partial S}\right|_V.

Similarly, we get one relation for each potential by differentiating it with respect to its two natural variables:

\left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial p}{\partial S}\right|_V \quad (\text{from } E), \qquad \left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial p}{\partial T}\right|_V \quad (\text{from } F),

\left.\frac{\partial T}{\partial p}\right|_S = \left.\frac{\partial V}{\partial S}\right|_p \quad (\text{from } H), \qquad \left.\frac{\partial S}{\partial p}\right|_T = -\left.\frac{\partial V}{\partial T}\right|_p \quad (\text{from } G).

The Maxwell relations give equations between seemingly different quantities. In particular, they often connect easily measured, but uninteresting quantities to difficult-to-measure, but very interesting ones. ______________________________________________________________ End of lecture 14
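As an added illustration (not in the original notes), a Maxwell relation can be verified symbolically. The sketch below uses sympy and a Helmholtz function of the ideal-gas form derived later in Section 4.1; the symbol a is an assumed constant lumping together the factors that do not affect the derivatives.

    import sympy as sp

    T, V, N, k, a = sp.symbols('T V N k a', positive=True)

    # Helmholtz free energy of ideal-gas form, F = -NkT [ln(a T^{3/2} V / N) + 1]
    F = -N*k*T*(sp.log(a*T**sp.Rational(3, 2)*V/N) + 1)

    S = -sp.diff(F, T)   # S = -dF/dT at constant V
    p = -sp.diff(F, V)   # p = -dF/dV at constant T

    # Maxwell relation from F: dS/dV|_T should equal dp/dT|_V
    print(sp.simplify(sp.diff(S, V) - sp.diff(p, T)))  # -> 0
    print(sp.simplify(p))                              # -> N*k*T/V, the ideal gas law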

3.5 Some applications


3.5.1 Entropy of an ideal gas
We want to obtain an expression for the entropy of an ideal gas. Of course the mathematical expression will depend on the choice of independent variables. We take as independent variables temperature and volume (the natural variables of the canonical distribution, or F). There is a function S(T, V) that we want to find. We shall do this by taking its differential, identifying the coefficients, and then integrating up. So we start from

dS = \left.\frac{\partial S}{\partial T}\right|_V dT + \left.\frac{\partial S}{\partial V}\right|_T dV.

The first coefficient is related to the thermal capacity, since

C_V = T\left.\frac{\partial S}{\partial T}\right|_V.

The second coefficient can be transformed with a Maxwell relation,

\left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial p}{\partial T}\right|_V,

and the derivative on the right hand side may be found from the equation of state p = NkT/V, so that

\left.\frac{\partial p}{\partial T}\right|_V = \frac{Nk}{V}.

Then the differential of S becomes

dS = \frac{C_V}{T}dT + \frac{Nk}{V}dV.

The second term can be integrated immediately, and on the assumption that C_V is a constant, the first term can be integrated also. In this case we find

S = C_V \ln T + Nk \ln V + S_0,

where S_0 is a constant. Note that from purely macroscopic considerations the entropy cannot be determined absolutely; there is always an arbitrary constant. Furthermore, beware of considering separate terms of this expression separately: it makes no sense to ascribe one contribution to the entropy from the temperature and another from the volume, since these would change upon changing temperature and volume units. What is well-defined is entropy changes. Writing

S_2 - S_1 = C_V \ln\frac{T_2}{T_1} + Nk \ln\frac{V_2}{V_1},

the arguments of the logarithms are dimensionless, and then no ambiguity occurs in ascribing different contributions to the different terms of the equation.

We also note that the microscopic approach will give an expression of the above form, with a well-defined value for the arbitrary constant. For a monatomic gas we will see later on that statistical mechanics gives the result

S = \frac{3}{2}Nk\ln T + Nk\ln V - Nk\ln N + \frac{3}{2}Nk\ln\frac{mk}{2\pi\hbar^2} + \frac{5}{2}Nk,

sometimes called the Sackur-Tetrode equation. The first two terms give the ln T and ln V terms as before where, of course, for the monatomic gas C_V = (3/2)Nk. The last three terms give an absolute value for the constant S_0. A microscopic parameter enters here in the mass m of the particles.

3.5.2 General expression for Cp - CV
The thermal capacity of an object is found by measuring the temperature rise when a small quantity of heat is applied. It is much easier to do this while keeping the body at constant pressure (possibly atmospheric pressure) than at constant volume. However, if we are calculating the properties of a system, then this is much easier done at constant volume, when the energy levels do not change. The relation between these two thermal capacities provides a nice example of the use of Maxwell relations. We know that for an ideal gas the difference between the thermal capacities is given by

C_p - C_V = Nk.

This is not true for a general system. Here we shall investigate the relation for a real gas. We start from the expression for C_V,

C_V = T\left.\frac{\partial S}{\partial T}\right|_V,

which suggests we should consider the entropy as a function of T and V (as we did above). The differential of S(T, V) is

dS = \left.\frac{\partial S}{\partial T}\right|_V dT + \left.\frac{\partial S}{\partial V}\right|_T dV = \frac{C_V}{T}dT + \left.\frac{\partial S}{\partial V}\right|_T dV.

What we want to find is

C_p = T\left.\frac{\partial S}{\partial T}\right|_p,

which suggests we divide the dS equation by dT at constant pressure. This gives

\frac{C_p}{T} = \frac{C_V}{T} + \left.\frac{\partial S}{\partial V}\right|_T \left.\frac{\partial V}{\partial T}\right|_p,

or

C_p - C_V = T\left.\frac{\partial S}{\partial V}\right|_T \left.\frac{\partial V}{\partial T}\right|_p.

Ideally, this expression should be related to readily-measurable quantities. Now the \partial V/\partial T term is connected with the thermal expansion coefficient, but the \partial S/\partial V term is not so easily understood; entropy is not easy to measure. However, a Maxwell relation will rescue us:

\left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial p}{\partial T}\right|_V,

so that

C_p - C_V = T\left.\frac{\partial p}{\partial T}\right|_V \left.\frac{\partial V}{\partial T}\right|_p.

The readily available quantities are usually the isobaric expansion coefficient,

\beta_p = \frac{1}{V}\left.\frac{\partial V}{\partial T}\right|_p,

and the isothermal compressibility,

k_T = -\frac{1}{V}\left.\frac{\partial V}{\partial p}\right|_T.

The \partial p/\partial T term may be transformed using the cyclic rule of partial differentiation,

\left.\frac{\partial x}{\partial y}\right|_z \left.\frac{\partial y}{\partial z}\right|_x \left.\frac{\partial z}{\partial x}\right|_y = -1,

and the reciprocal rule,

\left.\frac{\partial x}{\partial y}\right|_z = 1\Big/\left.\frac{\partial y}{\partial x}\right|_z.

So we can write the \partial p/\partial T term as

\left.\frac{\partial p}{\partial T}\right|_V = -\left.\frac{\partial V}{\partial T}\right|_p \Big/ \left.\frac{\partial V}{\partial p}\right|_T.

This enables us to express the difference in the thermal capacities as

C_p - C_V = -T\left(\left.\frac{\partial V}{\partial T}\right|_p\right)^2 \Big/ \left.\frac{\partial V}{\partial p}\right|_T,

or

C_p - C_V = \frac{TV\beta_p^2}{k_T}.

This is actually quite an important result. From this equation a number of deductions follow:

- Since k_T must be positive (a stability requirement) and \beta_p^2 will be positive, it follows that C_p can never be less than C_V.
- The thermal capacities will become equal as T → 0.
- At finite temperatures the thermal capacities will become equal when the expansion coefficient becomes zero: a maximum or minimum in density. Water at 4°C is an example.

______________________________________________________________
End of lecture 15

3.5.3 Joule Expansion: ideal gas
Joule wanted to know how the internal energy of a gas depended on its volume. Clearly this is related to the interactions between the molecules and how these vary with their separation. Joule had two containers joined by a tap. One container had air at a high pressure and the other was evacuated. The whole assembly was placed in a bucket of water, and Joule looked for a change in the temperature of the system when the tap was opened.
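An added sanity check (not in the notes): for the ideal gas, \beta_p = 1/T and k_T = 1/p, so the general formula reduces to the Nk quoted above. The sympy sketch below confirms this.

    import sympy as sp

    T, p, N, k = sp.symbols('T p N k', positive=True)
    V = N*k*T/p   # ideal gas: V as a function of T and p

    beta_p = sp.diff(V, T) / V     # isobaric expansion coefficient -> 1/T
    kappa_T = -sp.diff(V, p) / V   # isothermal compressibility -> 1/p

    print(sp.simplify(T * V * beta_p**2 / kappa_T))   # -> N*k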

[Figure: Joule's experiment.]

As we have seen already, when the tap is opened the subsequent process is irreversible. We cannot follow the evolution of the system, but once things have settled down and the system has come to an equilibrium, we can consider the various functions of state, and we can construct any convenient reversible path between these states. In this case, since the assembly is isolated, the internal energy will be unchanged. So writing E as a function of temperature and volume, its differential is

dE = \left.\frac{\partial E}{\partial V}\right|_T dV + \left.\frac{\partial E}{\partial T}\right|_V dT.

But since the internal energy remains unchanged, dE = 0, it follows that

\left.\frac{\partial E}{\partial V}\right|_T dV + \left.\frac{\partial E}{\partial T}\right|_V dT = 0.

Now in his experiments Joule could not detect any temperature change. So he concluded that

\left.\frac{\partial E}{\partial V}\right|_T = 0.

That is, he deduced that while the internal energy might depend on temperature, it cannot depend on the volume. This meant that the energy of the molecules was independent of their separation; in other words, no forces of interaction. And of course with hindsight, we understand an ideal gas as one with negligible interactions between the particles.

In fact Joule's experiment was extremely insensitive because of the thermal capacity of the surroundings, which tended to reduce the magnitude of any possible temperature change. Rather than a single-shot experiment, greater sensitivity could be obtained with a continuous flow experiment. Before considering this, however, we shall examine the Joule experiment for a real, non-ideal gas.

3.5.4 Joule Expansion: real non-ideal gas
A real gas does have interactions between the constituent particles. So in a free expansion there might well be a temperature change. Although the process is fundamentally irreversible, in line with our previous discussion we can evaluate functions of state by taking any reversible path between the initial and final states. The free expansion occurs at constant internal energy. When the volume increases by \Delta V there may

be a temperature increase \Delta T, and to quantify this we introduce the Joule coefficient \alpha_J:

\alpha_J = \left.\frac{\partial T}{\partial V}\right|_E.

We want to relate this coefficient to the behaviour of a real gas, involving such things as a realistic equation of state. In evaluating \alpha_J, the first thing we observe is that the derivative is taken at constant internal energy. This is a little awkward to handle in realistic calculations, so let us transform it away using the cyclical rule:

\left.\frac{\partial T}{\partial V}\right|_E \left.\frac{\partial V}{\partial E}\right|_T \left.\frac{\partial E}{\partial T}\right|_V = -1,

so that

\left.\frac{\partial T}{\partial V}\right|_E = -\left.\frac{\partial E}{\partial V}\right|_T \Big/ \left.\frac{\partial E}{\partial T}\right|_V,

or

\alpha_J = -\frac{1}{C_V}\left.\frac{\partial E}{\partial V}\right|_T.

Now since we know

dE = T\,dS - p\,dV,

it follows that

\left.\frac{\partial E}{\partial V}\right|_T = T\left.\frac{\partial S}{\partial V}\right|_T - p,

and the derivative here may be transformed using the Maxwell relation \partial S/\partial V|_T = \partial p/\partial T|_V, giving, finally,

\alpha_J = -\frac{1}{C_V}\left\{T\left.\frac{\partial p}{\partial T}\right|_V - p\right\}.

We may represent the equation of state of a real gas by a virial expansion (an expansion in powers of the density):

\frac{p}{kT} = \frac{N}{V} + B_2(T)\left(\frac{N}{V}\right)^2 + B_3(T)\left(\frac{N}{V}\right)^3 + \ldots

The B factors are called virial coefficients; B_n is the nth virial coefficient. The derivative is evaluated as

\left.\frac{\partial p}{\partial T}\right|_V = k\frac{N}{V} + kB_2(T)\left(\frac{N}{V}\right)^2 + kT\frac{dB_2(T)}{dT}\left(\frac{N}{V}\right)^2 + \ldots,

giving the Joule coefficient, to leading order in density,

\alpha_J = -\frac{kT^2}{C_V}\frac{dB_2(T)}{dT}\left(\frac{N}{V}\right)^2;

the smaller higher-order terms have been neglected.

[Figure: second virial coefficient B2(T) of argon, in cm³ mol⁻¹, against temperature (100-800 K).]

As an example, for argon at 0°C, dB2/dT = 0.25 cm³ mol⁻¹ K⁻¹ and C_V = (3/2)Nk, giving \alpha_J as

\alpha_J = -2.5 \times 10^{-5}\ \text{K mol cm}^{-3}.

The temperature change in a finite free expansion is found by integrating:

\Delta T = \int_{V_1}^{V_2} \alpha_J\, dV.

For one mole of argon at STP, if we double its volume the temperature will drop by only about 0.6 K. This was too small for Joule to measure.

Free expansion always results in cooling. Temperature is a measure of the kinetic energy of the molecular motion. On expansion the mean spacing of the molecules increases. The attractive tail of the interparticle interaction then gets weaker, tending to zero from its negative value. Since the free expansion takes place at constant E, or total energy, this increase in potential energy is accompanied by a corresponding decrease in kinetic energy. So the temperature decreases.

Incidentally, we note from the relation

\alpha_J = -\frac{1}{C_V}\left.\frac{\partial E}{\partial V}\right|_T

and our result

\alpha_J = -\frac{1}{C_V}\left\{T\left.\frac{\partial p}{\partial T}\right|_V - p\right\}

that for a gas obeying the ideal gas equation of state, \partial E/\partial V|_T = 0. In other words, we have demonstrated that for an ideal gas the internal energy is a function only of its temperature.

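An added numerical version of this estimate, using the values quoted in the notes (we work per mole, so Nk → R; the final line treats \alpha_J as roughly constant over the expansion, as the order-of-magnitude estimate above effectively does):

    R = 8.314            # gas constant, J/(mol K)
    T = 273.0            # K, i.e. 0 C
    dB2_dT = 0.25e-6     # dB2/dT for argon: 0.25 cm^3 mol^-1 K^-1 in m^3/(mol K)
    v0 = 22.4e-3         # molar volume at STP, m^3/mol

    def alpha_J(v):
        # alpha_J = -(k T^2 / C_V)(dB2/dT)(N/V)^2 with C_V = (3/2)Nk, per mole
        return -(2.0 / 3.0) * T**2 * dB2_dT / v**2

    print(alpha_J(v0) * 1e-6)   # ~ -2.5e-5 K per cm^3, as quoted
    print(alpha_J(v0) * v0)     # crude estimate for doubling the volume: ~ -0.6 K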

3.5.5 Joule-Kelvin or Joule-Thomson process: throttling
The names Kelvin and Thomson refer to the same person; William Thomson became Lord Kelvin. In a throttling process a gas is forced through a flow impedance such as a porous plug. For a continuous process, in the steady state, the pressure will be constant (but different) on either side of the impedance. As this is a continuous flow process, the walls of the container will be in equilibrium with the gas; their thermal capacity is thus irrelevant. Such a throttling process, in which heat neither enters nor leaves the system, is referred to as a Joule-Kelvin or Joule-Thomson process.

[Figure: throttling of a gas. Gas flows from pressure and temperature p1, T1 through a porous plug to p2, T2.]

This is fundamentally an irreversible process, but the arguments of thermodynamics may be applied by considering the equilibrium initial state and the equilibrium final state, which apply well before and well after the actual process. To study this throttling process we shall focus attention on a fixed mass of the gas. We can then regard this portion of the gas as being held between two moving pistons. The pistons move so as to keep the pressures p1 and p2 constant.

[Figure: modelling the Joule-Kelvin process. Before: the gas occupies V1 at p1, T1; after: it occupies V2 at p2, T2.]

Work must be done to force the gas through the plug. The work done is

W = -\int_{V_1}^{0} p_1\, dV - \int_{0}^{V_2} p_2\, dV = p_1 V_1 - p_2 V_2.

Since the system is thermally isolated, the change in the internal energy is due entirely to the work done:

E_2 - E_1 = p_1 V_1 - p_2 V_2,

or

E_1 + p_1 V_1 = E_2 + p_2 V_2.

The enthalpy H is defined by H = E + pV, so we conclude that in a Joule-Kelvin process the enthalpy is conserved. The interest in the throttling process is that whereas for an ideal gas the temperature remains constant, it is possible to have either cooling or warming when the process happens to a non-ideal gas. The operation of most refrigerators is based on this. By contrast, with Joule expansion one only has cooling.
______________________________________________________________
End of lecture 16

3.5.6 Joule-Thomson coefficient
The fundamental differential relation for the enthalpy is

dH = T\,dS + V\,dp,

and dH is zero for this process. It is, however, rather more convenient to use T and p as the independent variables, rather than the natural S and p, since ultimately we want to relate dT to dp. This is effected by expressing dS in terms of dT and dp:

dS = \left.\frac{\partial S}{\partial T}\right|_p dT + \left.\frac{\partial S}{\partial p}\right|_T dp.

But

\left.\frac{\partial S}{\partial T}\right|_p = \frac{C_p}{T},

and using a Maxwell relation we have

\left.\frac{\partial S}{\partial p}\right|_T = -\left.\frac{\partial V}{\partial T}\right|_p,

so substituting these into the expression for dH gives

dH = C_p\,dT + \left\{V - T\left.\frac{\partial V}{\partial T}\right|_p\right\} dp.

Now since H is conserved in the throttling process, dH = 0, so that

dT = \frac{1}{C_p}\left\{T\left.\frac{\partial V}{\partial T}\right|_p - V\right\} dp,

which tells us how the temperature change is determined by the pressure change. The Joule-Thomson coefficient \mu_J is defined as the derivative

\mu_J = \left.\frac{\partial T}{\partial p}\right|_H,

giving

\mu_J = \frac{1}{C_p}\left\{T\left.\frac{\partial V}{\partial T}\right|_p - V\right\}.

This is zero for the ideal gas. When \mu_J is positive, the temperature decreases in a throttling process, when a gas is forced through a porous plug.

[Figure: isenthalps and inversion curve for nitrogen; temperature (0-600 K) against pressure (0-50 MPa). Inside the inversion curve \mu_J is positive (cooling); outside it \mu_J is negative (heating); \mu_J = 0 on the curve.]

3.5.7 Joule-Thomson effect for real and ideal gases
Let us consider a real gas, with an equation of state represented by a virial expansion. We consider the case where the second virial coefficient gives a good approximation to the equation of state. Thus we are assuming that the density is low enough so that the third and higher coefficients can be ignored. This means that the second virial coefficient correction to the ideal gas equation is small, and then solving for V in the limit of small B2(T) gives

V = \frac{NkT}{p} + NB_2(T),

so that the Joule-Thomson coefficient is then

\mu_J = \frac{NT}{C_p}\left\{\frac{dB_2(T)}{dT} - \frac{B_2(T)}{T}\right\}.

Within the low density approximation it is appropriate to use the ideal gas thermal capacity C_p = (5/2)Nk, so that

\mu_J = \frac{2T}{5k}\left\{\frac{dB_2(T)}{dT} - \frac{B_2(T)}{T}\right\}.

3.5.8 Inversion temperature and liquefaction of gases
The inversion curve for nitrogen is shown in the figure. We see that at high temperatures \mu_J is negative, and the throttling process leads to heating. As the temperature is decreased the inversion curve is crossed (at the inversion temperature T_i) and \mu_J becomes positive. For temperatures T < T_i throttling will lead to cooling, and this may be used for liquefaction of gases. Joule-Kelvin expansion may thus be used to liquefy a gas so long as the temperature starts below the inversion temperature. For nitrogen T_i = 621 K, so room temperature nitrogen will be cooled. However, the inversion temperature of helium is 23.6 K, so helium gas must be cooled below this temperature before the Joule-Kelvin process can be used for liquefaction. This may be done by precooling with liquid hydrogen, but that is dangerous. Alternatively, free expansion may be used to attain the inversion temperature: helium at a high pressure is initially cooled to 77 K with liquid nitrogen, then expanded to cool below T_i, so that throttling can be used for the final stage of cooling. The inversion temperature is given by the solution of the equation

\frac{dB_2(T)}{dT} - \frac{B_2(T)}{T} = 0

within the second virial coefficient approximation. Note, however, that this approximation fails to give any pressure dependence of the inversion temperature.
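As an added illustration (not in the original notes): for a van der Waals gas the second virial coefficient is approximately B_2(T) = b - a/kT, with a and b the usual attraction and excluded-volume parameters. The inversion condition then gives

\frac{dB_2}{dT} - \frac{B_2}{T} = \frac{a}{kT^2} - \frac{b}{T} + \frac{a}{kT^2} = 0 \quad \Rightarrow \quad T_i = \frac{2a}{kb},

a single, pressure-independent inversion temperature, consistent with the limitation just noted.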

[Figure: liquefaction of gas using the Joule-Kelvin effect. A compressor feeds high-pressure gas through a cooler and a counterflow heat exchanger to a throttle; the low-pressure return stream passes back through the heat exchanger to the compressor.]
______________________________________________________________
End of lecture 17

Section 4 Statistical Thermodynamics of delocalised particles


4.1 Classical Ideal Gas
4.1.1 Indistinguishability
Since the partition function is proportional to probabilities, it follows that for composite systems the partition function is a product of the partition functions for the individual subsystems. The free energy is proportional to the logarithm of the partition function, and this leads to the extensive variables of composite systems being additive.

In this section we shall examine how the (canonical) partition function of a many-particle system is related to the partition function of a single particle. In Section 2.6 we saw how this could be done for localised systems where the particles, although indistinguishable, could be individually identified by the sites they occupied. In that case, for an assembly of N identical but distinguishable particles, the resultant partition function is the product of the (same) partition functions of a single particle, z:

Z = z^N.

For delocalised particles, as in a gas, this is not possible. The key question is that of indistinguishability of the atoms or molecules of a many-body system. When two identical molecules are interchanged the system is still in the same microstate, so the distinguishable-particle result overcounts the states in this case. Now the number of ways of redistributing N particles when there are n1 particles in the first state, n2 particles in the second state, etc., is

\frac{N!}{n_1!\, n_2!\, n_3!\ldots},

so that for a given distribution {n_i} the partition function for identical indistinguishable particles is

Z = \frac{n_1!\, n_2!\, n_3!\ldots}{N!}\, z^N.

4.1.2 Classical approximation
The problem here is the occupation numbers {n_i}; we do not know these in advance. However, at high temperatures the probability of occupancy of any state is small; the probability of multiple occupancy is then negligible. This is the classical regime. Under these circumstances the factors n1! n2! n3! ... can all be taken as 1 and we have a soluble problem. In the classical case we then have

Z = \frac{1}{N!}z^N.

The Helmholtz free energy is thus

F = -kT\ln Z = -NkT\ln z + kT\ln N!.

This is N times the Helmholtz free energy for a single particle, plus an extra term depending on T and N. So the second term can be ignored so long as we differentiate with respect to something other than T or N. Thus when differentiating with respect to volume to find the pressure, the result is N times that for a single particle.

The problem then boils down to finding the partition function z for a single particle,

z = \sum_i e^{-\varepsilon_i/kT},

so the first thing we must do is to find what these energies \varepsilon_i are.

4.1.3 Specifying the single-particle energy states
We consider a cubic box of volume V. Each side has length V^{1/3}. Elementary quantum mechanics tells us that the wave function must go to zero at the walls of the box: only standing waves are allowed.

[Figure: standing waves in a box of side V^{1/3}; the first three modes have wavelengths \lambda_1 = 2V^{1/3}, \lambda_2 = V^{1/3}, \lambda_3 = \tfrac{2}{3}V^{1/3}.]

In the general case the allowed wavelengths satisfy \lambda/2 = V^{1/3}/n, or

\lambda_n = \frac{2V^{1/3}}{n}, \qquad n = 1, 2, 3, 4, \ldots

In three dimensions there will be such an n for each of the x, y and z directions:

\lambda_{n_x} = \frac{2V^{1/3}}{n_x}, \qquad \lambda_{n_y} = \frac{2V^{1/3}}{n_y}, \qquad \lambda_{n_z} = \frac{2V^{1/3}}{n_z}.

We can now use the de Broglie relation p = h/\lambda = 2\pi\hbar/\lambda to obtain the momentum, and hence the energy:

p_x = \frac{\pi\hbar n_x}{V^{1/3}}, \qquad p_y = \frac{\pi\hbar n_y}{V^{1/3}}, \qquad p_z = \frac{\pi\hbar n_z}{V^{1/3}},

and so for a free particle the energy is then

\varepsilon = \frac{p^2}{2m} = \frac{p_x^2 + p_y^2 + p_z^2}{2m},

which is

\varepsilon = \frac{\pi^2\hbar^2}{2mV^{2/3}}\left\{n_x^2 + n_y^2 + n_z^2\right\}.

In this expression it is the triple of quantum numbers (n_x, n_y, n_z) which specifies the quantum state, so we may write

\varepsilon = \varepsilon(n_x, n_y, n_z).

4.1.4 Density of states

In principle, since the quantum states are now specified, it is possible to evaluate the partition function sum. In practice, for a box of macroscopic dimensions the energy levels are extremely closely spaced, and it proves convenient to approximate the sum by an integral:

$$\sum_i e^{-\varepsilon_i/kT} \;\longrightarrow\; \int_0^\infty g(\varepsilon)\,e^{-\varepsilon/kT}\,\mathrm{d}\varepsilon,$$

where $g(\varepsilon)\,\mathrm{d}\varepsilon$ is the number of single-particle quantum states with energy between $\varepsilon$ and $\varepsilon + \mathrm{d}\varepsilon$. The quantity $g(\varepsilon)$ is called the density of states.

To find an expression for the density of states we note that each triple of quantum numbers $(n_x, n_y, n_z)$ specifies a point on a cubic grid. If we put

$$R^2 = n_x^2 + n_y^2 + n_z^2,$$

then the energy is given by

$$\varepsilon = \frac{\pi^2\hbar^2 R^2}{2mV^{2/3}}.$$

[Figure: counting of quantum states — points on the $(n_x, n_y)$ grid, with a shell of radius $R$ and thickness $\mathrm{d}R$]

Now the number of states of energy up to $\varepsilon$, denoted by $N(\varepsilon)$, is given by the number of points in the octant up to $R(\varepsilon)$. (An octant, since $n_x$, $n_y$ and $n_z$ are positive.) And the number of points in the octant is approximately equal to the volume of the octant:

$$N(\varepsilon) = \frac{1}{8}\,\frac{4}{3}\pi R^3.$$

But since

$$R = \frac{(2m\varepsilon)^{1/2}\,V^{1/3}}{\pi\hbar},$$

we then obtain

$$N(\varepsilon) = \frac{V}{6\pi^2\hbar^3}\,(2m\varepsilon)^{3/2}.$$

Recall that the density of states $g(\varepsilon)$ is defined by saying that the number of states with energy between $\varepsilon$ and $\varepsilon + \mathrm{d}\varepsilon$ is $g(\varepsilon)\,\mathrm{d}\varepsilon$. In other words

$$g(\varepsilon)\,\mathrm{d}\varepsilon = N(\varepsilon + \mathrm{d}\varepsilon) - N(\varepsilon),$$

or, simply,

$$g(\varepsilon) = \frac{\mathrm{d}N(\varepsilon)}{\mathrm{d}\varepsilon}.$$

So differentiating $N(\varepsilon)$ we obtain

$$g(\varepsilon) = \frac{V}{4\pi^2\hbar^3}\,(2m)^{3/2}\,\varepsilon^{1/2},$$

which is the required expression for the density of states.

4.1.5 Calculating the partition function

The partition function, in integral form, is given by
$$z = \int_0^\infty g(\varepsilon)\,e^{-\varepsilon/kT}\,\mathrm{d}\varepsilon = \frac{V}{4\pi^2\hbar^3}\,(2m)^{3/2}\int_0^\infty \varepsilon^{1/2}\,e^{-\varepsilon/kT}\,\mathrm{d}\varepsilon.$$

This integral is evaluated by making the substitution $x = \varepsilon/kT$, so that

$$z = \frac{V}{4\pi^2\hbar^3}\,(2mkT)^{3/2}\int_0^\infty x^{1/2}\,e^{-x}\,\mathrm{d}x.$$

Here the physics is all outside the integral. The integral is a pure number (an example of the gamma function), given by $\sqrt{\pi}/2$, so that

$$z = V\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}.$$

For a gas of $N$ particles we then have

$$Z = \frac{1}{N!}\,z^N.$$

We use Stirling's approximation when evaluating the logarithm:

$$\ln Z = -N\ln N + N + N\ln z = N\ln\left[\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}\frac{Ve}{N}\right],$$

from which all thermodynamic properties can be found.
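The single-particle partition function can be checked numerically. Below is a minimal sketch in Python (not part of the original notes): it evaluates the dimensionless integral and compares the density-of-states form of $z$ with the closed form; the choice of a helium atom in a 1 cm³ box at 300 K is an illustrative assumption.

```python
# Numerical check of z = V (m k T / 2 pi hbar^2)^{3/2}.
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34   # J s
k    = 1.380649e-23      # J / K
m    = 6.646e-27         # kg, helium-4 atom (assumed example)
V    = 1e-6              # m^3
T    = 300.0             # K

# Dimensionless integral: int_0^inf sqrt(x) e^{-x} dx, which should be sqrt(pi)/2
I, _ = quad(lambda x: np.sqrt(x) * np.exp(-x), 0, np.inf)

z_integral = V * (2*m*k*T)**1.5 / (4 * np.pi**2 * hbar**3) * I
z_closed   = V * (m*k*T / (2*np.pi*hbar**2))**1.5

print(I, np.sqrt(np.pi)/2)        # 0.8862..., 0.8862...
print(z_integral / z_closed)      # -> 1.0, confirming the algebra
```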


We note parenthetically that the (single-particle) partition function could have been evaluated classically, without enumeration of the quantum states and without the consequent need for the energy density of states. The classical partition function is given by the integral

$$z = \frac{1}{h^3}\int e^{-\varepsilon/kT}\,\mathrm{d}^3p\,\mathrm{d}^3q,$$

where the classical state is specified as a cell in $p$–$q$ space, or phase space. The extent in phase space of such a state is given by the (so far unspecified) quantity $h$. For the ideal gas $\varepsilon = p^2/2m$. Thus the $q$ integrals are trivial, giving a factor $V$, and we have

$$z = \frac{V}{h^3}\left[\int_{-\infty}^{\infty} e^{-p^2/2mkT}\,\mathrm{d}p\right]^3.$$

The integral is transformed to a pure number by changing variables, $x = p/\sqrt{2mkT}$, so that

$$z = \frac{V}{h^3}\,(2mkT)^{3/2}\left[\int_{-\infty}^{\infty} e^{-x^2}\,\mathrm{d}x\right]^3.$$

The integral is $\sqrt{\pi}$, so that

$$z = V\left(\frac{2\pi mkT}{h^2}\right)^{3/2},$$

just as in the quantum calculation. Thus we obtain the partition function from purely classical arguments, and as a bonus we see, by comparison with the quantum result, that this justifies the use of Planck's constant in the normalisation factor for the classical state element of phase space.
______________________________________________________________
End of lecture 18
4.1.6 Thermodynamic properties

We start from the Helmholtz free energy:

$$F = -kT\ln Z = -NkT\ln\left[\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}\frac{Ve}{N}\right],$$

giving the pressure

$$p = -\left.\frac{\partial F}{\partial V}\right|_T = kT\left.\frac{\partial\ln Z}{\partial V}\right|_T = NkT\left.\frac{\partial\ln z}{\partial V}\right|_T = \frac{NkT}{V}.$$

This is the ideal gas equation, and from it we identify directly our statistical temperature $T$ with the temperature as measured by an ideal gas thermometer.

The internal energy is

$$E = kT^2\left.\frac{\partial\ln Z}{\partial T}\right|_V = NkT^2\,\frac{\mathrm{d}\ln T^{3/2}}{\mathrm{d}T} = \frac{3}{2}NkT.$$

This is another important property of an ideal gas. On purely macroscopic grounds we saw that the ideal gas equation of state leads to an internal energy which depends only on temperature (not pressure or density). Here we have calculated this dependence for a monatomic gas. From the energy expression we obtain the thermal capacity

$$C_V = \left.\frac{\partial E}{\partial T}\right|_V = \frac{3}{2}Nk.$$

This is a constant, independent of temperature, in violation of the Third Law. This is a consequence of the classical approximation — ignoring multiple state occupancy, etc.

The entropy is found from

$$S = -\left.\frac{\partial F}{\partial T}\right|_V,$$

which leads to

$$S = Nk\ln\left[\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}\frac{Ve^{5/2}}{N}\right].$$

This is the Sackur-Tetrode equation, quoted before in Section 3.5.1. Note that as $T \to 0$ this entropy tends to $-\infty$. This is totally unphysical and in violation of the Third Law of thermodynamics. Of course this problem is all tied up with quantum mechanics and multiple occupancy; the semi-classical model lets us down here.
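As a numerical illustration (a minimal sketch, not from the notes), the Sackur-Tetrode equation can be evaluated directly; one mole of argon at 300 K and 1 atm is an assumed example.

```python
# Sackur-Tetrode entropy: S = N k ln[(m k T / 2 pi hbar^2)^{3/2} (V/N) e^{5/2}].
import numpy as np

hbar, k, NA = 1.054571817e-34, 1.380649e-23, 6.02214076e23
m = 39.95 * 1.66054e-27      # kg, argon atom (assumed example)
T, p = 300.0, 101325.0       # K, Pa
N = NA
V = N * k * T / p            # ideal gas volume of one mole

S = N * k * np.log((m*k*T / (2*np.pi*hbar**2))**1.5 * (V/N) * np.e**2.5)
print(S)                     # ~ 155 J/K per mole, in line with tabulated values
```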

4.1.7 Thermal deBroglie wavelength

It is instructive to examine further the expression we have derived for the single-particle partition function. We found

$$z = V\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2},$$

and since $z$ must be dimensionless it follows that the quantity $2\pi\hbar^2/mkT$ must have the dimensions of a length squared. This can be verified directly. Let us therefore define a quantity $\Lambda$ by

$$\Lambda = \left(\frac{2\pi\hbar^2}{mkT}\right)^{1/2},$$

which we shall call the thermal deBroglie wavelength. Then in terms of this the partition function is given, simply, by

$$z = \frac{V}{\Lambda^3}.$$

The explanation for the name is as follows. A temperature $T$ is associated with a thermal energy $kT$ which manifests itself in the form of kinetic energy. We may therefore equate this to $p^2/2m$, which gives us a corresponding thermal momentum. And from the momentum we can find a wavelength using the deBroglie relation $p = h/\lambda = 2\pi\hbar/\lambda$. Thus we have

$$kT = \frac{p^2}{2m} = \frac{(2\pi\hbar/\lambda)^2}{2m},$$

from which we find

$$\lambda^2 = \frac{2\pi^2\hbar^2}{mkT},$$

which, apart from a numerical factor, corresponds to the thermal deBroglie wavelength defined above.

So $\Lambda$ represents the quantum-mechanical size of the particle due to its thermal energy. By "size" here we mean the distance over which the particle may be found — the uncertainty in its position. We then have a simple interpretation of the partition function:

$$z = \frac{\text{volume of the box}}{\text{thermal volume of the particle}}.$$

4.1.8 Breakdown of the classical approximation

Let us now briefly examine the region of validity of our treatment of the ideal gas. In particular, we are concerned with the adding together of the independent effects of a number of particles, and clearly this must be related to the question of multiple occupancy of quantum states. The classical approximation can only hold if the particles can be regarded as being truly independent. In other words, we require that quantum-mechanical effects do not cause the particles to "see" one another. We may express this condition as

thermal volume of particle ≪ volume of box / number of particles,

or

$$\frac{\Lambda^3 N}{V} \ll 1.$$

We tabulate below this parameter $\Lambda^3 N/V$ for a number of gases at their normal boiling point $T$:

         T in K     Λ³N/V
  He     4.2        1.5
  H₂     20.4       0.44
  Ne     27.2       0.015
  A      87.4       0.00054

Since $\Lambda \propto m^{-1/2}$, it follows that deviations from classical behaviour are more likely to be observed with lighter atoms. The table indicates that the classical condition is satisfied for most gases right down to the liquefaction point, except for helium. In considering liquid and gaseous helium the effects of quantum mechanics must be taken into account — hence the remarkable properties of cooled helium.
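The helium entry in the table can be reproduced numerically. A minimal sketch follows; the liquid-helium density of about 125 kg/m³ is an assumed input, which appears to be what the tabulated value of ~1.5 corresponds to.

```python
# Degeneracy parameter Lambda^3 N / V for helium at its boiling point.
import numpy as np

hbar, k = 1.054571817e-34, 1.380649e-23
m   = 6.646e-27          # kg, helium-4 atom
T   = 4.2                # K, normal boiling point
rho = 125.0              # kg/m^3, liquid helium density (assumption)

Lam = np.sqrt(2*np.pi*hbar**2 / (m*k*T))   # thermal deBroglie wavelength
n   = rho / m                              # number density N/V
print(Lam)               # ~ 4e-10 m
print(n * Lam**3)        # ~ 1.5 -> the classical approximation fails
```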

4.2 Quantum statistics


4.2.2 Bosons and Fermions

All particles in nature can be classified into two groups according to the behaviour of the wave function under the exchange of identical particles. For simplicity let us consider only two identical particles. The wave function can then be written

$$\psi = \psi(\mathbf{r}_1, \mathbf{r}_2),$$

where $\mathbf{r}_1$ is the position of the first particle and $\mathbf{r}_2$ is the position of the second particle.

Let us interchange the particles. The operator to effect this is denoted by $\hat{P}$ (the permutation operator). Then

$$\hat{P}\,\psi(\mathbf{r}_1, \mathbf{r}_2) = \psi(\mathbf{r}_2, \mathbf{r}_1).$$

We are interested in the behaviour of the wave function under interchange of the particles. Since the particles are indistinguishable, all observed properties will be the same before and after the interchange. This means that the wave function can only be multiplied by a phase factor $p = e^{i\varphi}$:

$$\hat{P}\psi = e^{i\varphi}\psi.$$

Thus far it is not much of a conclusion. But let us now perform the swapping operation again. Then we have

$$\hat{P}^2\,\psi(\mathbf{r}_1, \mathbf{r}_2) = \hat{P}\,\psi(\mathbf{r}_2, \mathbf{r}_1) = \psi(\mathbf{r}_1, \mathbf{r}_2);$$

the effect is to return the particles to their original states. Thus the operator $\hat{P}$ must obey

$$\hat{P}^2 = 1,$$

and taking the square root of this we find for $p$, the eigenvalues of $\hat{P}$:

$$p = \pm 1.$$

In other words, the effect of swapping two identical particles is either to leave the wave function unchanged or to change the sign of the wave function.

All particles in nature belong to one class or the other. Particles for which $p = +1$ are called Bosons, while those for which $p = -1$ are called Fermions. This property continues for all time, since the permutation operator commutes with the Hamiltonian.

Fermions have the important property of not permitting multiple occupancy of quantum states. Consider two particles in the same state, at the same position $\mathbf{r}$. The wave function is then

$$\psi = \psi(\mathbf{r}, \mathbf{r}).$$

Swapping over the particles we have $\hat{P}\psi = \psi(\mathbf{r}, \mathbf{r})$, so that $\hat{P}\psi = \psi$. But for Fermions $\hat{P}\psi = -\psi$, since both particles are in the same state. The conclusion is that

$$\psi(\mathbf{r}, \mathbf{r}) = -\psi(\mathbf{r}, \mathbf{r}),$$

and this can only be so if

$$\psi(\mathbf{r}, \mathbf{r}) = 0.$$

Now since $\psi$ is related to the probability of finding particles in the given state, the result $\psi = 0$ implies a state of zero probability — an impossible state. We conclude that it is impossible to have more than one Fermion in a given quantum state.

This discussion was carried out using $\mathbf{r}_1$ and $\mathbf{r}_2$ to denote position states. However, that is not an important restriction; they could have designated any sort of quantum state and the same argument would follow. This is the explanation of the Pauli exclusion principle obeyed by electrons. We conclude:

For Bosons we can have any number of particles in a quantum state.
For Fermions we can have either 0 or 1 particle in a quantum state.

But what determines whether a given particle is a Boson or a Fermion? The answer is provided by relativistic quantum mechanics, and it depends on the spin of the particle. Particles whose spin angular momentum is an integral multiple of $\hbar$ are Bosons, while particles whose spin angular momentum is an odd half-integer multiple of $\hbar$ are Fermions. (In quantum theory $\hbar/2$ is the smallest unit of spin angular momentum.) For some elementary particles we have:

  electrons     S = 1/2    Fermions
  protons       S = 1/2    Fermions
  neutrons      S = 1/2    Fermions
  photons       S = 1      Bosons
  π mesons      S = 0      Bosons
  K mesons      S = 0      Bosons

For composite particles (such as atoms) we simply add the spins of the constituent parts. And since protons, neutrons and electrons are all Fermions, we can say:

  odd number of Fermions  → Fermion
  even number of Fermions → Boson.

The classic example of this is the two isotopes of helium. Thus

  ³He is a Fermion;
  ⁴He is a Boson.

At low temperatures these isotopes have very different behaviour.
______________________________________________________________
End of lecture 19

4.2.3 The quantum distribution functions

We shall obtain the distribution functions for particles obeying Fermi-Dirac statistics and those obeying Bose-Einstein statistics; that is, we want to know the mean number of particles which may be found in a given quantum state. The method is based on an idea in Feynman's book Statistical Mechanics, Benjamin (1972).

We start by considering an idealised model: a subsystem comprising a single quantum state of energy $\varepsilon$, in thermal equilibrium with a reservoir of many particles. The mean energy of a particle in the reservoir is denoted by $\mu$ (we will tighten up on the precise definition of this mean energy later).

[Figure: transfer of a particle from the reservoir (mean energy per particle $\mu$) to the subsystem (a single state of energy $\varepsilon$)]

A particle may be in the reservoir or it may be in the subsystem. The probability that it is in the subsystem is proportional to the Boltzmann factor $\exp(-\varepsilon/kT)$, while the probability that it is in the reservoir is proportional to $\exp(-\mu/kT)$. If $P(1)$ is the probability that there is one particle in the subsystem and $P(0)$ the probability of no particles in the subsystem, then we may write

$$\frac{P(1)}{P(0)} = \exp\left(\frac{\mu - \varepsilon}{kT}\right) \qquad\text{or}\qquad P(1) = P(0)\exp\left(\frac{\mu - \varepsilon}{kT}\right).$$

If the statistics allow (for Bosons, but not for Fermions), then we may transfer more particles from the reservoir to the subsystem. Each particle transferred loses an energy $\mu$ and gains an energy $\varepsilon$. Associated with the transfer of $n$ particles there will therefore be a Boltzmann factor $\exp[n(\mu - \varepsilon)/kT]$, and so the probability of having $n$ particles in the subsystem is

$$P(n) = P(0)\exp\left[\frac{n(\mu - \varepsilon)}{kT}\right].$$

Let us put

$$x = \exp\left(\frac{\mu - \varepsilon}{kT}\right). \tag{1}$$

Then

$$P(n) = P(0)\,x^n. \tag{2}$$

Normalisation requires that all possible probabilities sum to unity. For Fermions we know that $n$ can take on only the values 0 and 1, while for Bosons $n$ can be any integer. Thus we have

$$\sum_{n=0}^{a} P(n) = 1, \qquad a = 1 \text{ for Fermions},\quad a = \infty \text{ for Bosons}.$$

Since $P(n)$ is given by Equation (2), the normalisation requirement may be expressed as

$$P(0)\sum_{n=0}^{a} x^n = 1,$$

which gives us $P(0)$. We will be encountering this sum of powers of $x$ quite frequently, so let us denote it by the symbol $\sigma$:

$$\sigma = \sum_{n=0}^{a} x^n. \tag{3}$$

In terms of this,

$$P(0) = \sigma^{-1},$$

and then from Equation (2)

$$P(n) = x^n/\sigma.$$

What we want to know is the mean number of particles in the subsystem. That is, we want to calculate

$$\bar{n} = \sum_{n=0}^{a} n\,P(n) = \frac{1}{\sigma}\sum_{n=0}^{a} n\,x^n.$$

The sum of $n x^n$ may be found by using a trick (which is really at the heart of many statistical mechanics calculations). The sum differs from the previous sum $\sigma$ because of the extra factor of $n$. Now we can bring down an $n$ from $x^n$ by differentiation. Thus we write

$$n x^n = x\,\frac{\mathrm{d}}{\mathrm{d}x}\,x^n,$$

so that

$$\sum_{n=0}^{a} n x^n = x\,\frac{\mathrm{d}}{\mathrm{d}x}\sum_{n=0}^{a} x^n.$$

Observe that the sum on the right-hand side here is our original sum $\sigma$. This means that $\bar{n}$ can be expressed as

$$\bar{n} = \frac{x}{\sigma}\,\frac{\mathrm{d}\sigma}{\mathrm{d}x} = x\,\frac{\mathrm{d}\ln\sigma}{\mathrm{d}x}.$$

It remains, then, to evaluate $\sigma$ for the two cases. For Fermions we know that $a = 1$, so that the sum in Equation (3) is $1 + x$. For Bosons $a$ is infinity; the sum is an infinite (convergent) geometric series, whose sum is $1/(1 - x)$. Thus we have:

Fermions: $\sigma = 1 + x$, so $\ln\sigma = \ln(1 + x)$ and, upon differentiating,
$$\bar{n} = x\,\frac{\mathrm{d}\ln\sigma}{\mathrm{d}x} = \frac{x}{1 + x} = \frac{1}{x^{-1} + 1}.$$

Bosons: $\sigma = (1 - x)^{-1}$, so $\ln\sigma = -\ln(1 - x)$ and
$$\bar{n} = x\,\frac{\mathrm{d}\ln\sigma}{\mathrm{d}x} = \frac{x}{1 - x} = \frac{1}{x^{-1} - 1}.$$

Finally, substituting for $x$ from Equation (1):

$$\bar{n} = \frac{1}{\exp[(\varepsilon - \mu)/kT] + 1} \quad\text{(Fermions)}, \qquad \bar{n} = \frac{1}{\exp[(\varepsilon - \mu)/kT] - 1} \quad\text{(Bosons)}.$$

These expressions will be recognised as the Fermi-Dirac and the Bose-Einstein distribution functions. However, it is necessary to understand the way in which this idealised model relates to realistic assemblies of Bosons or Fermions. We have focussed attention on a given quantum state and treated it as if it were apart from the reservoir. In reality the reservoir is the entire system, and the quantum state of interest is in that system. The entire analysis then follows through so long as the mean energy of a particle, $\mu$, in the system is changed by a negligible amount if a single quantum state is excluded. And this must be so for any macroscopic system.
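The two distribution functions just derived are easy to explore numerically. A minimal sketch (the energy grid and chemical potential values are illustrative assumptions):

```python
# Fermi-Dirac and Bose-Einstein occupation numbers, as derived above.
import numpy as np

def n_fd(eps, mu, kT):
    """Mean occupation of a state of energy eps, for fermions."""
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

def n_be(eps, mu, kT):
    """Mean occupation for bosons; requires eps > mu for convergence."""
    return 1.0 / (np.exp((eps - mu) / kT) - 1.0)

eps = np.linspace(0.1, 5.0, 5)      # energies in units of kT (assumed)
print(n_fd(eps, mu=1.0, kT=1.0))    # falls from ~0.7 to ~0, passing 1/2 at eps = mu
print(n_be(eps, mu=0.0, kT=1.0))    # large as eps -> mu, Boltzmann tail at high eps
```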

4.2.4 The chemical potential

We now turn to an examination of the meaning of $\mu$ within the spirit of this picture. We said that it was the mean energy lost when a particle is removed from the reservoir, which we now understand to mean the entire system. When a particle is removed, the system remains otherwise unchanged — in two ways. The distribution of particles among the other (single-particle) energy states is unchanged: the entropy remains constant. And the energies of the various states (the single-particle energy levels) are unchanged: the volume remains constant. Thus our $\mu$ is equal to $\partial E/\partial N$ at constant $S$ and $V$.

This tells us that we have an extension of the First Law of thermodynamics; there is another way in which the internal energy of a system can be changed. As well as adding heat and doing work, we can add particles. So the extended differential expression for $E$ is

$$\mathrm{d}E = T\mathrm{d}S - p\mathrm{d}V + \mu\mathrm{d}N.$$

The quantity $\mu$ is called the chemical potential, and it is defined as

$$\mu = \left.\frac{\partial E}{\partial N}\right|_{S,V}.$$

An expression in terms of the Helmholtz free energy is more amenable to calculation (from the partition function). Recall that $F$ is defined as $F = E - TS$, from which we obtain

$$\mathrm{d}F = \mathrm{d}E - T\mathrm{d}S - S\mathrm{d}T.$$

Using the extended version of the First Law expression for $\mathrm{d}E$, this then gives

$$\mathrm{d}F = -S\mathrm{d}T - p\mathrm{d}V + \mu\mathrm{d}N,$$

from which we may express the chemical potential as

$$\mu = \left.\frac{\partial F}{\partial N}\right|_{T,V}.$$

For the classical ideal gas, whose free energy was found to be

$$F = -NkT\ln\left[\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}\frac{Ve}{N}\right],$$

the chemical potential is obtained by differentiation:

$$\mu = -kT\ln\left[\left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}\frac{V}{N}\right].$$

At high temperatures $\mu$ is large and negative. It goes to zero as $T \to 0$, but of course the classical approximation then breaks down, and the correct quantum-mechanical expression for $\mu$ must be used, which will be different for Fermions and Bosons.
______________________________________________________________
End of lecture 20
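The classical-gas chemical potential is easily evaluated. A minimal sketch, with helium at 1 atm as an assumed example; note how $\mu/kT$ is large and negative in the classical régime:

```python
# Chemical potential of the classical ideal gas:
#   mu = -kT ln[(m k T / 2 pi hbar^2)^{3/2} V / N] = -kT ln(1 / (n Lambda^3)).
import numpy as np

hbar, k = 1.054571817e-34, 1.380649e-23
m, p = 6.646e-27, 101325.0           # helium mass (kg), pressure (Pa), assumed

def mu_classical(T):
    n = p / (k * T)                              # number density N/V
    Lam = np.sqrt(2*np.pi*hbar**2 / (m*k*T))     # thermal deBroglie wavelength
    return -k * T * np.log(1.0 / (n * Lam**3))

for T in (300.0, 30.0, 4.2):
    print(T, mu_classical(T) / (k * T))  # mu/kT rises towards 0 as T falls
```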

4.2.5 Methodology for quantum gases

The Bose-Einstein and the Fermi-Dirac distribution functions give the mean number of particles in a given single-particle quantum state in terms of the temperature $T$ and the chemical potential $\mu$. These are the intensive variables which determine the equilibrium distribution $n(\varepsilon)$.

Now we have a good intuitive feel for the temperature of a system. But the chemical potential is different: it determines the number of particles in the system when it is in a state of equilibrium. In reality, however, it is more intuitive to speak of a system containing a given (mean) number of particles. In that case it is the number of particles in the system which determines the chemical potential. The number of particles in the system is given by

$$N = \sum_i n(\varepsilon_i),$$

which converts to the integral

$$N = \alpha\int_0^\infty n(\varepsilon)\,g(\varepsilon)\,\mathrm{d}\varepsilon,$$

where $\alpha$ is the factor which accounts for the degeneracy of the particles' spin states. This is 2 for electrons, since there are two spin states for spin $\tfrac{1}{2}$; in general it will be $(2S + 1)$. The expression for $N$ is inverted to give $\mu$, which can then be used in the distribution function to find the other properties of the system. For instance, the internal energy of the system would be found from

$$E = \sum_i \varepsilon_i\,n(\varepsilon_i),$$

or, in integral form,

$$E = \alpha\int_0^\infty \varepsilon\,n(\varepsilon)\,g(\varepsilon)\,\mathrm{d}\varepsilon.$$
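The inversion of the $N$ integral to find $\mu$ generally has to be done numerically. Below is a minimal sketch for a Fermi gas; the electron density (~2.7 × 10²² cm⁻³), the integration cutoff and the root-bracketing interval are illustrative assumptions rather than part of the notes.

```python
# Inverting N/V = alpha * Int n(eps) g(eps) d(eps) to find mu(T) for fermions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, k, m = 1.054571817e-34, 1.380649e-23, 9.109e-31
alpha = 2                    # spin degeneracy for spin-1/2
n_target = 2.7e28            # electrons per m^3 (assumed)

def density(mu, T):
    """Number density from the Fermi-Dirac distribution with g(eps) ~ sqrt(eps)."""
    pref = alpha * (2*m)**1.5 / (4*np.pi**2 * hbar**3)
    f = lambda e: np.sqrt(e) / (np.exp(min((e - mu)/(k*T), 700.0)) + 1.0)
    upper = max(mu, 0.0) + 40*k*T        # integrand negligible beyond this
    val, _ = quad(f, 0.0, upper)
    return pref * val

eF = hbar**2/(2*m) * (3*np.pi**2*n_target)**(2/3)   # zero-T value, sets the bracket

def mu_of_T(T):
    return brentq(lambda mu: density(mu, T) - n_target, -5*eF, 2*eF)

for T in (300.0, 10000.0, 40000.0):
    print(T, mu_of_T(T)/eF)  # mu/eF stays near 1 for T << T_F, then falls
```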

4.3 The Fermi Gas


4.3.1 The Fermi-Dirac distribution

One has to use the Fermi distribution for fermions whenever the classical low-density / high-temperature approximation breaks down; that is, when

$$\frac{N}{V}\left(\frac{2\pi\hbar^2}{mkT}\right)^{3/2} \gtrsim 1.$$

Some examples of this are:

Conduction electrons in a metal. Because the electrons are so light, even at room temperature one is well into the quantum régime.

Liquid ³He. When the helium is a liquid, at temperatures in the region of 1 K, one is in the quantum régime because the helium atoms are sufficiently light.

Neutron stars. In such stars the density of matter is very high indeed.

Nuclear matter. Although the small number of particles involved makes the concept of thermal equilibrium rather suspect, the high density of nuclear matter requires a quantum treatment.

On the other hand, the density of electrons in a semiconductor is usually sufficiently small to permit the use of classical statistics.

The Fermi-Dirac distribution for the mean number of fermions in the state of energy $\varepsilon$ is given by

$$n(\varepsilon) = \frac{1}{\exp[(\varepsilon - \mu)/kT] + 1}.$$

We see that $n(\varepsilon)$ goes from zero at high energies to one at low energies. The changeover occurs at $\varepsilon = \mu$, where $n(\varepsilon) = \tfrac{1}{2}$.

[Figure: the Fermi-Dirac distribution $n(\varepsilon)$]

The transition from zero to one becomes sharper as the temperature is reduced.

4.3.2 Fermi gas at zero temperature

At zero temperature the Fermi-Dirac distribution function

$$n(\varepsilon) = \frac{1}{\exp[(\varepsilon - \mu)/kT] + 1}$$

becomes a box function:

$$n(\varepsilon) = 1, \quad \varepsilon < \mu; \qquad n(\varepsilon) = 0, \quad \varepsilon > \mu.$$

Note that in general the chemical potential depends on temperature. Its zero-temperature value is called the Fermi energy:

$$\varepsilon_F = \mu(T = 0).$$

[Figure: the Fermi-Dirac distribution at T = 0 — all states filled up to $\mu$, empty above]

In accordance with the methodology described in the previous section, the first thing to do is to evaluate the total number of particles in the system, in order to find the chemical potential — the Fermi energy in the $T = 0$ case.

The density of states is given by

$$\alpha\,g(\varepsilon) = \alpha\,\frac{V}{4\pi^2\hbar^3}\,(2m)^{3/2}\,\varepsilon^{1/2},$$

and we shall consider spin-$\tfrac{1}{2}$ particles, so we have set $\alpha = 2$. The total number of particles in the system is then

$$N = \int_0^{\varepsilon_F} \frac{V}{2\pi^2\hbar^3}\,(2m)^{3/2}\,\varepsilon^{1/2}\,\mathrm{d}\varepsilon = \frac{V}{3\pi^2\hbar^3}\,(2m\varepsilon_F)^{3/2}.$$

This may be inverted to obtain the Fermi energy:

$$\varepsilon_F = \frac{\hbar^2}{2m}\left(\frac{3\pi^2 N}{V}\right)^{2/3}.$$

This gives the chemical potential at zero temperature. Observe that it depends on the density of particles in the system, so the Fermi energy is, as expected, an intensive variable.

Having obtained the zero-temperature chemical potential, the Fermi-Dirac function is now completely specified at $T = 0$, and we can proceed to find the internal energy of the system. This is given by

$$E = \int_0^{\varepsilon_F} \frac{V}{2\pi^2\hbar^3}\,(2m)^{3/2}\,\varepsilon^{3/2}\,\mathrm{d}\varepsilon,$$

which may be expressed as

$$E = \frac{3}{5}\,N\varepsilon_F.$$

The internal energy is proportional to the number of particles in the system, and so it is, as expected, an extensive quantity.

4.3.3 Fermi temperature and Fermi wavevector

Corresponding to the Fermi energy it proves convenient to introduce a Fermi temperature $T_F$, defined by

$$kT_F = \varepsilon_F.$$

The Fermi temperature is then given by

$$T_F = \frac{\hbar^2}{2mk}\left(\frac{3\pi^2 N}{V}\right)^{2/3},$$

and in terms of this the condition for the classical approximation to be valid,

$$\frac{N}{V}\left(\frac{2\pi\hbar^2}{mkT}\right)^{3/2} \ll 1,$$

becomes, to within a numerical factor, $T \gg T_F$.

We can estimate $T_F$ for electrons in a metal. The interatomic spacing $(V/N)^{1/3}$ is roughly 0.3 nm, giving a number density of

$$\frac{N}{V} \approx 2.7\times10^{22}\ \text{electrons per cm}^3,$$

if we assume that each atom contributes one electron. This gives

$$\varepsilon_F \approx 3.3\ \mathrm{eV}$$

or

$$T_F \approx 4\times10^4\ \mathrm{K}.$$

The conclusion is that the behaviour of electrons in a metal at room temperature is determined very much by quantum mechanics and the exclusion principle. For temperatures $T \ll T_F$ most of the energy states below $\varepsilon_F$ are filled, while most above are empty. In this case the system is said to be degenerate. Thus electrons in a metal are degenerate at room temperature.

Some values of Fermi temperatures calculated from the known electron densities of the monovalent metals are:

        electron concentration in cm⁻³    T_F in K
  Li    4.7 × 10²²                        5.5 × 10⁴
  Na    2.7 × 10²²                        3.8 × 10⁴
  K     1.4 × 10²²                        2.5 × 10⁴
  Rb    1.2 × 10²²                        2.2 × 10⁴
  Cs    0.9 × 10²²                        1.8 × 10⁴

The case of ³He is a little more complicated. The calculated value of $T_F$ is 5 K, but the observed value is more in the region of 0.05 K. This discrepancy is understood in terms of the interatomic interactions. A remarkably successful theory for this has been developed — Landau's Fermi liquid theory.

Other parameters can be used to specify the properties of the highest-energy fermions at $T = 0$. The energy of such a particle is $\varepsilon_F$. Alternatively we could talk of the velocity of the particle — the Fermi velocity. Similarly we can specify the Fermi momentum or the Fermi wavevector. The wavevector of the particles of energy $\varepsilon_F$ is found from the expression

$$\varepsilon_F = \frac{\hbar^2 k_F^2}{2m},$$

or

$$k_F = \frac{1}{\hbar}\sqrt{2m\varepsilon_F}.$$

Using the expression for $\varepsilon_F$ we then find

$$k_F = \left(\frac{3\pi^2 N}{V}\right)^{1/3},$$

which depends only on concentration. In fact we see that the inverse Fermi wavevector is approximately equal to the inter-particle spacing.
______________________________________________________________
End of lecture 21
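The three Fermi parameters follow directly from the electron density. A minimal sketch, with the sodium-like density from the table above as the assumed input:

```python
# Fermi wavevector, energy and temperature from the electron density:
#   k_F = (3 pi^2 N/V)^{1/3},  eps_F = hbar^2 k_F^2 / 2m,  T_F = eps_F / k.
import numpy as np

hbar, k, m_e, eV = 1.054571817e-34, 1.380649e-23, 9.109e-31, 1.602e-19
n = 2.7e22 * 1e6             # electrons per m^3 (assumed, ~2.7e22 cm^-3)

kF   = (3 * np.pi**2 * n)**(1/3)
epsF = hbar**2 * kF**2 / (2 * m_e)
print(kF)                    # ~ 9.3e9 m^-1, so 1/kF ~ interparticle spacing
print(epsF / eV)             # ~ 3.3 eV
print(epsF / k)              # T_F ~ 3.8e4 K, matching the table above
```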

4.3.4 Qualitative behaviour of a degenerate Fermi gas

The effect of a small finite temperature may be modelled very simply by approximating the Fermi-Dirac distribution function by a piecewise linear function. This must match the slope of the true curve at $\varepsilon = \mu$, where the derivative is found to be

$$\left.\frac{\mathrm{d}n(\varepsilon)}{\mathrm{d}\varepsilon}\right|_{\varepsilon=\mu} = -\frac{1}{4kT}.$$

[Figure: simple modelling of the Fermi-Dirac distribution — all states filled below $\mu - 2kT$, partially filled states in a band of width $4kT$, empty states above $\mu + 2kT$]

In this model we see that there is a region of partially occupied states extending from $\varepsilon = \mu - 2kT$ to $\varepsilon = \mu + 2kT$. States below this are totally filled and states above are empty. Most of the particles are "frozen in" to states below the Fermi energy. Only a small fraction, $\sim T/T_F$, of the particles are practically excitable, having vacant states above and below them. It is only this fraction of the particles which is available for responding to stimuli.

Very crudely, this is saying that at a temperature $T$ the extensive properties of a degenerate Fermi gas will be a fraction $T/T_F$ of those of the corresponding classical system. The thermal capacity of a classical gas is $\tfrac{3}{2}Nk$, so we would expect the thermal capacity of a Fermi gas to be

$$C \sim Nk\,\frac{T}{T_F}:$$

the prediction is that $C$ depends linearly on temperature. And indeed it does, as we shall see. Similarly, we saw that the magnetisation of a classical system of magnetic moments is inversely proportional to temperature: $M = CB/T$ (Curie's law, where $C$ here is the Curie constant). So we expect the magnetisation of a Fermi gas to be

$$M \sim \frac{CB}{T_F}:$$

the prediction is that the magnetisation will tend to a temperature-independent value. This is observed to be so.

4.3.5 Fermi gas at low temperatures — simple model

We shall now use the above piecewise approximation to calculate properties of the Fermi gas. At each stage we will compare the approximate result with the correct expression. The distribution function, in this approximation, is

$$n(\varepsilon) = 1, \qquad \varepsilon < \mu - 2kT;$$
$$n(\varepsilon) = \frac{1}{2} - \frac{\varepsilon - \mu}{4kT}, \qquad \mu - 2kT < \varepsilon < \mu + 2kT;$$
$$n(\varepsilon) = 0, \qquad \varepsilon > \mu + 2kT;$$

from which the thermodynamic properties may be calculated.

The chemical potential may now be found by considering the total number of particles in the system:

$$N = \frac{4\pi V}{h^3}\,(2m)^{3/2}\int_0^\infty n(\varepsilon)\,\varepsilon^{1/2}\,\mathrm{d}\varepsilon.$$

Using the approximate form for $n(\varepsilon)$ this gives

$$N = \frac{8\pi V}{3h^3}\,(2m)^{3/2}\,\mu^{3/2} + \frac{4\pi V}{3h^3}\,(2m)^{3/2}\,\frac{(kT)^2}{\mu^{1/2}} + \cdots.$$

If we express $N$ in terms of the previously calculated Fermi energy — in other words, in terms of the zero-temperature chemical potential — then we have

$$\varepsilon_F^{3/2} = \mu^{3/2}\left[1 + \frac{1}{2}\left(\frac{kT}{\mu}\right)^2 + \cdots\right],$$

which, by the binomial theorem, gives $\varepsilon_F$ as

$$\varepsilon_F = \mu\left[1 + \frac{1}{2}\left(\frac{kT}{\mu}\right)^2 + \cdots\right]^{2/3} = \mu\left[1 + \frac{1}{3}\left(\frac{kT}{\mu}\right)^2 + \cdots\right].$$

This may be solved for $\mu$ and re-expanded in powers of $T$ to give (in terms of $T_F$ rather than $\varepsilon_F$)

$$\mu = \varepsilon_F\left[1 - \frac{1}{3}\left(\frac{T}{T_F}\right)^2 + \cdots\right].$$

The correct expression (i.e. not approximating the Fermi distribution) is very similar:

$$\mu = \varepsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2 + \cdots\right],$$

the leading temperature term being some 2.5 times greater. This result shows that as the temperature is increased from $T = 0$ the chemical potential decreases from its zero-temperature value, and the leading correction is in $T^2$.

From a knowledge of the temperature dependence of the chemical potential, the Fermi-Dirac distribution is then given as a function of temperature and energy. The function is plotted below for electrons in a metal, for which $T_F = 5\times10^4$ K.
[Figure: Fermi-Dirac distribution function at temperatures of 500 K, 5000 K, 5 × 10⁴ K and 10 × 10⁴ K, plotted against $\varepsilon/k$ in units of 10⁴ K, for a system with $T_F = 5\times10^4$ K (electrons in a metal)]

4.3.6 Internal energy and thermal capacity

In the spirit of the piecewise approximation of the Fermi-Dirac distribution, we can take the approximate chemical potential and use it in the piecewise approximation to find the internal energy of the fermions. In this way one finds

$$E = E_0\left[1 + \frac{5}{3}\left(\frac{T}{T_F}\right)^2\right],$$

while the exact series expression is

$$E = E_0\left[1 + \frac{5\pi^2}{12}\left(\frac{T}{T_F}\right)^2 + \cdots\right]$$

up to terms in $T^4$; here $E_0 = \tfrac{3}{5}N\varepsilon_F$ is the zero-temperature internal energy. The thermal capacity at low temperatures is found by differentiating this expression for the internal energy. Thus we obtain

$$C_V = \left.\frac{\partial E}{\partial T}\right|_V = \frac{5\pi^2}{6}\,E_0\,\frac{T}{T_F^2},$$

or, eliminating $E_0$ in favour of $T_F$,

$$C_V = \frac{\pi^2}{2}\,Nk\,\frac{T}{T_F}.$$

So at low temperatures the thermal capacity of a Fermi gas is linear in temperature. Thus $C_V$ goes to zero as $T$ goes to zero, as required by the Third Law of thermodynamics.

The figure below shows the measured thermal capacity of potassium at low temperatures. It is conventional to plot $C_V/T$ against $T^2$ in order to separate the linear electronic thermal capacity from the cubic phonon thermal capacity. In other words, the expected behaviour when the effect of lattice vibrations is included is

$$C_V = \frac{\pi^2}{2}\,Nk\,\frac{T}{T_F} + \mathrm{const}\times T^3,$$

so that

$$\frac{C_V}{T} = \frac{\pi^2}{2}\,\frac{Nk}{T_F} + \mathrm{const}\times T^2.$$

Thus when $C_V/T$ is plotted against $T^2$ the intercept gives the electronic contribution to $C_V$. The straight-line fit to the data is given by

$$C_V/T = 2.08 + 2.57\,T^2$$

(in mJ mol⁻¹ K⁻², with $T$ in K). This corresponds to a Fermi temperature of $1.97\times10^4$ K (check this). This is in qualitative agreement with the $T_F$ calculated from the known electron density, which was $2.5\times10^4$ K.
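The "check this" is easily done. A minimal sketch, using the quoted intercept of 2.08 mJ mol⁻¹ K⁻²:

```python
# Extracting T_F from the electronic heat-capacity intercept:
#   C_V/T -> gamma = (pi^2/2) N k / T_F  as  T -> 0.
import numpy as np

R = 8.314            # J / (mol K): N k for one mole
gamma = 2.08e-3      # J / (mol K^2): intercept of the C_V/T vs T^2 fit

T_F = np.pi**2 * R / (2 * gamma)
print(T_F)           # ~ 1.97e4 K, as quoted in the text
```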


[Figure: thermal capacity of potassium — $C_V/T$ in mJ mol⁻¹ K⁻² plotted against $T^2$ in K², from 0 to 0.3 K²]

The thermal capacity of liquid ³He at very low temperatures exhibits the expected linear temperature dependence. Measurements of $C_p$ taken over a range of pressures (0 to 29.2 bar) are shown in the figure below; the linear behaviour is quite apparent. The strange behaviour occurring at the very lowest temperatures corresponds to the superfluid phase transition. But that is another story…

[Figure: thermal capacity of liquid ³He at different pressures — $C_p$ in mJ mol⁻¹ K⁻¹ against temperature in mK, for pressures of 0.0 to 29.2 bar]

4.3.7 Equation of state

Not covered.
______________________________________________________________
End of lecture 22


4.4 The Bose Gas


4.4.1 Generalisation of the density of states function

This section on the density of states is not specific to bosons; it is just convenient to take a fresh look at the density of states at this stage, with a view to some future applications, which happen to be boson systems.

Recall that the fundamental problem which the density of states addresses is the conversion of sums over states to integrals — in the case considered, integrals over energy. The energy density of states $g(\varepsilon)$ was defined by saying that the number of states with energy between $\varepsilon$ and $\varepsilon + \mathrm{d}\varepsilon$ is $g(\varepsilon)\,\mathrm{d}\varepsilon$. This is compactly expressed in the equation

$$g(\varepsilon) = \frac{\mathrm{d}N(\varepsilon)}{\mathrm{d}\varepsilon},$$

where $N(\varepsilon)$ is the number of states having energy less than $\varepsilon$. In Section 4.1.4 we obtained the expression

$$g(\varepsilon) = \frac{V}{4\pi^2\hbar^3}\,(2m)^{3/2}\,\varepsilon^{1/2};$$

in particular, the energy dependence goes as $\varepsilon^{1/2}$.

The derivation of this expression relied on a number of assumptions. It was for free and non-relativistic particles: $\varepsilon = p^2/2m$. And the result applied to three-dimensional systems: we had three quantum numbers for the three spatial dimensions, and we evaluated the volume of an octant. (So, for instance, we could not use the result for the treatment of surface waves or adsorbed films.) In this section we shall relax the first restriction, allowing for other energy–momentum relations. The extension to other dimensions will simply be quoted; that derivation (actually quite simple) is left to the student.

The general treatment is best approached through the density of states in $k$ space. The starting point for the specification of the states of a system confined to a box is the fact that the wave function must go to zero at the walls of the box. This leads us to admit only states which comprise an integral number of half wavelengths within the box. The allowed values were denoted by

$$\lambda_{n_x} = \frac{2V^{1/3}}{n_x},\quad \lambda_{n_y} = \frac{2V^{1/3}}{n_y},\quad \lambda_{n_z} = \frac{2V^{1/3}}{n_z},$$

where $n_x, n_y, n_z = 1, 2, 3, 4, \ldots$. This gives us the components of the wave vector as

$$k_x = \frac{\pi n_x}{V^{1/3}},\quad k_y = \frac{\pi n_y}{V^{1/3}},\quad k_z = \frac{\pi n_z}{V^{1/3}},$$

so that the wavevector is simply

$$\mathbf{k} = \frac{\pi}{V^{1/3}}\left\{n_x\mathbf{i} + n_y\mathbf{j} + n_z\mathbf{k}\right\}.$$

This is telling us that the allowed states of the system can be represented by the points on a rectangular grid in $k$ space, and in particular this indicates that the density of points in $k$ space is uniform.

To discuss the magnitude $k$ of the wavevector $\mathbf{k}$ it is convenient to define the quantity $R$ as

$$R^2 = n_x^2 + n_y^2 + n_z^2,$$

so that

$$k = \frac{\pi R}{V^{1/3}}.$$

Then we may say that the number of states for which the magnitude of the wavevector is less than $k$ is given by the volume of the octant of radius $R$. That is,

$$N(k) = \frac{1}{8}\,\frac{4}{3}\pi R^3,$$

giving us

$$N(k) = \frac{V}{6\pi^2}\,k^3.$$

The density of states in $k$ space is the derivative of this:

$$g(k) = \frac{\mathrm{d}N}{\mathrm{d}k} = \frac{V}{2\pi^2}\,k^2.$$

Previously we were interested in the energy density of states $g(\varepsilon) = \mathrm{d}N/\mathrm{d}\varepsilon$, and this can now be found by the chain rule:

$$g(\varepsilon) = \frac{\mathrm{d}N}{\mathrm{d}k}\,\frac{\mathrm{d}k}{\mathrm{d}\varepsilon} = \frac{V}{2\pi^2}\,k^2\left(\frac{\mathrm{d}\varepsilon}{\mathrm{d}k}\right)^{-1},$$

which is the most convenient way to represent the energy–wavevector relation. We also note here that the procedure generalises to the density of states expressed in terms of any other variable. For instance, we will be using (angular) frequency $\omega$ in a future application, and the chain rule may be used in exactly the same way to give

$$g(\omega) = \frac{\mathrm{d}N}{\mathrm{d}k}\,\frac{\mathrm{d}k}{\mathrm{d}\omega} = \frac{V}{2\pi^2}\,k^2\left(\frac{\mathrm{d}\omega}{\mathrm{d}k}\right)^{-1}.$$

Finally we quote the density of states in $k$ space for other dimensions:

$$g(k) = \frac{L}{\pi} \qquad\text{one dimension},$$
$$g(k) = \frac{A}{2\pi}\,k \qquad\text{two dimensions},$$
$$g(k) = \frac{V}{2\pi^2}\,k^2 \qquad\text{three dimensions}.$$

From these the density of states in any other variable may be found.


4.4.2 Examples of Bose systems

There are two important cases of bosons to consider:

real particles, such as helium-4, rubidium vapour, etc.;
zero-mass "particles", such as photons and phonons.

Only the first is a real particle in the classical meaning of the word, and the number of particles is conserved only in the first case. This is in marked contrast to the second case: the number of photons or phonons is not conserved. This has important thermodynamic consequences, which we will explore later. We start the discussion of bosons with a survey of the properties of ⁴He.

4.4.3 Helium-4

Liquid ⁴He displays the remarkable property of superfluidity below a temperature of 2.17 K. Two of the most startling properties of the system are

viscosity → 0 and thermal conductivity → ∞ !!

On the other hand, liquid ³He shows no such behaviour (at these temperatures). This implies that superfluidity is closely related to the statistics of the particles.

If we consider an assembly of non-interacting bosons, it is quite clear that at $T = 0$ all particles will be in the same quantum state — the single-particle ground state. This is in contrast to the Fermi case, where the Pauli exclusion principle forbids such behaviour. For bosons it then seems obvious that at very low temperatures there will be a macroscopic number of particles still in the ground state. This number may be calculated:

$$N_0 = N\left[1 - \left(\frac{T}{T_B}\right)^{3/2}\right],$$

where $N$ is the total number of particles in the system and the temperature $T_B$ in the expression is given by

$$T_B = \frac{h^2}{2\pi mk}\left(\frac{N}{2.612\,V}\right)^{2/3}.$$

The calculation of $N_0$ is slightly messy. It follows the calculation of $N$ as in the Fermi case, converting the sum over single-particle states for $N$ into an integral using the energy density of states $g(\varepsilon)$. But in this case one must recognise that the density of states, being proportional to $\varepsilon^{1/2}$, excludes the ground state, which must be put in by hand.

The temperature $T_B$ plays a role analogous to the Fermi temperature: it indicates the temperature below which specifically quantum behaviour will occur. Below we plot a graph of the ground state occupation.

[Figure: ground state occupation $N_0/N$ for a Bose gas, falling to zero at $T = T_B$]

For temperatures $T > T_B$ there will be some particles in the ground state; however, it will not be a macroscopic number. The plot indicates that for temperatures $T < T_B$ there will be a macroscopic number of particles in the ground state. The particles in the ground state will have no entropy. It is therefore to be expected that the behaviour of this system will be different below and above this temperature $T_B$.

If we estimate this temperature for liquid ⁴He, using the measured molar volume of 27 cm³, then we obtain

$$T_B \approx 3.13\ \mathrm{K}.$$

The agreement with the superfluid transition temperature of liquid ⁴He, 2.17 K, is fair when we consider that the atoms of ⁴He do actually interact, and so they cannot really be considered as an ideal (non-interacting) gas.

This collapse of particles into the ground state below $T_B$ is known as Bose-Einstein condensation. It is the only example of a phase transition occurring in a system of non-interacting particles. We see, then, that the superfluidity of ⁴He is understood in terms of a Bose-Einstein condensation. Superfluidity of ³He is observed at temperatures below 3 mK; we saw the indication of this in the heat capacity curves for liquid ³He above. And superfluidity of electrons — superconductivity — is observed in some metals and metal oxide compounds. These examples of the superfluidity of fermions cannot be understood in terms of a simple Bose-Einstein condensation. In these cases it is believed that the fermions form pairs, which will be bosons. Superfluidity in Fermi systems is understood in terms of a Bose-Einstein condensation of these pair bosons.
______________________________________________________________
End of lecture 23
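The estimate of $T_B$ for liquid ⁴He is easily reproduced. A minimal sketch, using the molar volume of 27 cm³ quoted above:

```python
# Bose condensation temperature T_B = (2 pi hbar^2 / m k) (N / 2.612 V)^{2/3}
# evaluated for liquid helium-4 with molar volume 27 cm^3.
import numpy as np

hbar, k, NA = 1.054571817e-34, 1.380649e-23, 6.02214076e23
m = 6.646e-27                 # kg, helium-4 atom
n = NA / 27e-6                # number density for 27 cm^3 per mole

T_B = (2*np.pi*hbar**2 / (m*k)) * (n / 2.612)**(2/3)
print(T_B)                    # ~ 3.1 K, compared with the observed 2.17 K
```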


4.4.4 Phonons and photons — quantised waves

The harmonic oscillator has the very important property that its energy eigenvalues are equally spaced:

$$\varepsilon_n = n\hbar\omega + \text{zero point energy}.$$

We have already calculated the internal energy of a harmonic oscillator. We shall write this result here as

$$E = \frac{\hbar\omega}{\exp(\hbar\omega/kT) - 1} + \text{zero point contribution}.$$

We shall, in the following sections, ignore the (constant) zero point energy contribution. This is allowed since in reality we would always differentiate such expressions to obtain measurable quantities.

The expression for $E$ can be reinterpreted in terms of the Bose distribution. The internal energy has the form

$$E = \hbar\omega\,\bar{n},$$

where $\bar{n}$ is the Bose distribution — the mean number of bosons of energy $\hbar\omega$ — but here the chemical potential is zero. The conclusion is that we can regard a harmonic oscillator of (angular) frequency $\omega$ as a collection of bosons of energy $\hbar\omega$, having zero chemical potential. The fact that we can regard the harmonic oscillator as a collection of bosons is a consequence of the equal spacing of the oscillator energy levels. The vanishing of the chemical potential is due to the fact that the number of these bosons is not conserved.

We shall explore this by considering an isolated system of particles. If $N$ is conserved then the number is determined — it is given, and it will remain constant for the system. On the other hand, if the number of particles is not conserved, then one must determine the equilibrium number by maximising the entropy:

$$\left.\frac{\partial S}{\partial N}\right|_{E,V} = 0$$

($E$ and $V$ are constant since the system is isolated). Now from the differential expression for the First Law,

$$\mathrm{d}E = T\mathrm{d}S - p\mathrm{d}V + \mu\mathrm{d}N,$$

we see that

$$\mathrm{d}S = \frac{1}{T}\left\{\mathrm{d}E + p\mathrm{d}V - \mu\mathrm{d}N\right\}.$$

So the entropy derivative is

$$\left.\frac{\partial S}{\partial N}\right|_{E,V} = -\frac{\mu}{T},$$

and we conclude that the equilibrium condition for this system (at finite temperature) is simply

$$\mu = 0 \quad\text{for non-conserved particles.}$$


We seem to have arrived at a particle description through the quantisation of harmonic vibrations. These ideas can be applied to simple harmonic oscillations in two systems:

waves in solids (and liquids) → phonons;
electromagnetic waves → photons.

In both cases we may have a range of frequencies or energies ($\varepsilon = \hbar\omega$), so that in calculations of thermodynamic properties we will need to calculate the density of these states.

4.4.5 Photons in thermal equilibrium — black body radiation

Black body radiation was a big problem before the advent of quantum theory: a simply-stated problem without a satisfactory solution. It is known that bodies glow and emit light when heated sufficiently. The spectrum of colours seems to depend little on the nature of the body, particularly if the surface is black. The problem is to explain this universal behaviour — the shape of the spectrum as a function of temperature.

In considering the spectrum of the radiation, the universality gives the first hint of the solution: it must be a property of the radiation and not of the particular body under consideration. Thus we can make a model system — an idealisation of the situation which retains the important features of the problem, but which is possible to solve. Our model is simply a cavity which is connected to the outside world by a small hole. We shall look, through the hole, at the spectrum of the radiation in the cavity.

[Figure: radiation from a cavity]

We consider the light waves in the cavity to be in thermal equilibrium with the walls. The photons will have a distribution given by the Bose-Einstein formula, but with zero chemical potential. To calculate the properties of this system we then use the Bose-Einstein distribution function together with the photon density of states. The internal energy can immediately be written down:

$$E = \alpha\int_0^\infty \varepsilon(\omega)\,g(\omega)\,\bar{n}(\omega)\,\mathrm{d}\omega,$$

where $\alpha$ is the degeneracy factor, here 2 for the two transverse polarisations of the photon. The energy of a photon of frequency $\omega$ is

$$\varepsilon(\omega) = \hbar\omega.$$

The formula for the density of frequency states is

$$g(\omega) = \frac{V}{2\pi^2}\,k^2\left(\frac{\mathrm{d}\omega}{\mathrm{d}k}\right)^{-1},$$

and since for photons $\omega = ck$, where $c$ is the speed of light, this gives us the frequency density of states as

$$g(\omega) = \frac{V\omega^2}{2\pi^2 c^3}.$$

The internal energy (including the factor of 2 for the polarisations) is then

$$E = \frac{V}{\pi^2 c^3}\int_0^\infty \frac{\hbar\omega^3}{\exp(\hbar\omega/kT) - 1}\,\mathrm{d}\omega.$$

Before performing this integral we note that its argument gives the energy density per unit frequency range:

$$\frac{\mathrm{d}E}{\mathrm{d}\omega} = \frac{V}{\pi^2 c^3}\,\frac{\hbar\omega^3}{\exp(\hbar\omega/kT) - 1}.$$

This is Planck's formula for the spectrum of black body radiation. An example is shown in the figure, but plotted as a function of wavelength, which is more popular with spectroscopists.
[Figure: black body radiation at three different temperatures, 4000 K, 5800 K and 8000 K — intensity against wavelength in μm, with the blue, green and red regions of the visible marked; the wiggly line is the spectrum from the sun]

The spectrum from the sun indicates that the sun's surface temperature is about 5800 K. It is also interesting to note that the peak of the sun's spectrum corresponds to the visible region — the region of greatest sensitivity of the human eye.

A remarkable example of black body radiation is the spectrum of electromagnetic radiation observed arriving from outer space. It is found that when looking into space with radio telescopes, a uniform background of electromagnetic noise is seen. The spectrum of this is found to fit the black body curve for a temperature of about 2.7 K. The conclusion is that the equilibrium temperature of the universe is 2.7 K, which is understood as the remaining glow from the Big Bang. The data shown below come from the COBE satellite, and they fit the Planck black body curve for a temperature of 2.735 K. The quality of the fit of the data to the theoretical curve is quite remarkable.

[Figure: cosmic background radiation measured by COBE, plotted on the 2.735 K black body curve — intensity against inverse wavelength in cm⁻¹]
______________________________________________________________
End of lecture 24

4.4.6 The spectrum maximum

When a body is heated it goes from a dull red at lower temperatures to a bluish white at higher temperatures. The perceived colour of the radiation can be found by examining the peak in the emission spectrum. Let us first find the maximum of the energy density

$$\frac{\mathrm{d}E}{\mathrm{d}\omega} = \frac{V}{\pi^2 c^3}\,\frac{\hbar\omega^3}{\exp(\hbar\omega/kT) - 1}.$$

Upon differentiating and setting the result equal to zero we obtain

$$\left(3 - \frac{\hbar\omega}{kT}\right)\exp\left(\frac{\hbar\omega}{kT}\right) = 3.$$

This specifies the maximum of the spectrum. It cannot be solved analytically, but the solution may be found very easily by iteration using a pocket calculator. The result is

$$\hbar\omega_{\max} = 2.8214\,kT,$$

which in S.I. units is

$$\omega_{\max} = 3.67\times10^{11}\,T.$$

The peak in the spectrum, $\omega_{\max}$, is proportional to the temperature. This is known as Wien's displacement law. The magnitude of the energy density at the maximum may be found by substituting $\omega_{\max}$ back into the expression for the energy density spectrum:

$$\left.\frac{\mathrm{d}E}{\mathrm{d}\omega}\right|_{\text{peak}} = 1.421\,\frac{Vk^3T^3}{\pi^2c^3\hbar^2},$$

so the magnitude of the peak scales with the cube of the temperature.
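The iteration mentioned above is trivial to carry out. A minimal sketch, rewriting the condition as the fixed-point form $x = 3(1 - e^{-x})$ with $x = \hbar\omega/kT$:

```python
# Solving (3 - x) e^x = 3 for the Wien peak by fixed-point iteration.
import math

x = 3.0                          # starting guess
for _ in range(30):
    x = 3.0 * (1.0 - math.exp(-x))
print(x)                         # 2.8214...

# hbar * omega_max = x k T  =>  omega_max / T in SI units:
hbar, k = 1.054571817e-34, 1.380649e-23
print(x * k / hbar)              # ~ 3.7e11 rad s^-1 K^-1
```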

4.4.7 Internal energy and thermal capacity of a photon gas

The internal energy is found by integrating the expression for the energy density. We have already encountered the expression

$$E = \frac{V}{\pi^2 c^3}\int_0^\infty \frac{\hbar\omega^3}{\exp(\hbar\omega/kT) - 1}\,\mathrm{d}\omega.$$

By changing the variable of integration to

$$x = \frac{\hbar\omega}{kT},$$

the integral becomes a dimensionless number:

$$E = \frac{V}{\pi^2 c^3\hbar^3}\,(kT)^4\int_0^\infty \frac{x^3}{e^x - 1}\,\mathrm{d}x.$$

The integral may be evaluated numerically, or it may be solved in terms of the gamma function, to give

$$\int_0^\infty \frac{x^3}{e^x - 1}\,\mathrm{d}x = \frac{\pi^4}{15},$$

so that the internal energy is

$$E = \frac{\pi^2 V}{15\,c^3\hbar^3}\,(kT)^4.$$

From the expression for the internal energy we may find the pressure of the photon gas using

$$pV = \frac{1}{3}E,$$

which gives, in S.I. units,

$$p \approx 2.5\times10^{-16}\,T^4\ \mathrm{Pa}.$$

This is obviously related to the radiation pressure treated in electromagnetism. Note that the pressure is very small indeed.

Turning now to the thermal capacity, we obtain this by differentiating the internal energy, $C_V = \partial E/\partial T|_V$, giving

$$C_V = \frac{4\pi^2 V k^4}{15\,\hbar^3 c^3}\,T^3.$$

We see that $C_V$ is proportional to $T^3$. Also note that the thermal capacity goes to zero as $T \to 0$, as required by the Third Law.
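Both the dimensionless integral and the smallness of the radiation pressure can be checked directly. A minimal sketch; the choice of 300 K is illustrative:

```python
# Photon gas: verify Int x^3/(e^x - 1) dx = pi^4/15 and evaluate the
# energy density and radiation pressure at room temperature.
import numpy as np
from scipy.integrate import quad

hbar, k, c = 1.054571817e-34, 1.380649e-23, 2.998e8

I, _ = quad(lambda x: x**3 / np.expm1(x), 0, np.inf)
print(I, np.pi**4 / 15)          # 6.4939..., 6.4939...

T = 300.0
u = np.pi**2 * (k*T)**4 / (15 * c**3 * hbar**3)   # energy per unit volume
print(u)                         # ~ 6e-6 J/m^3
print(u / 3)                     # pressure ~ 2e-6 Pa: very small indeed
```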

4.4.8 Energy flux

Finally we consider the energy carried by a photon gas. This is particularly important when treating the flow of energy, through radiation, from a hot body to a cold body. Kinetic theory tells us that the energy flux — the power per unit area, $e$ — carried by particles of velocity $c$ is given by

$$e = \frac{1}{4}\,c\,\frac{E}{V}.$$

We thus find immediately

$$e = \frac{\pi^2 k^4}{60\,\hbar^3 c^2}\,T^4.$$

The energy flux is proportional to the fourth power of the temperature. This is known as the Stefan-Boltzmann radiation law. The result is conveniently written as

$$e = \sigma T^4,$$

where $\sigma$ is Stefan's constant:

$$\sigma = 5.68\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}.$$

The net energy flux between two bodies at different temperatures is given by the difference between the fluxes in the two directions. Thus

$$e_{\text{net}} = \sigma\left(T_h^4 - T_c^4\right),$$

where $T_h$ is the temperature of the hot body and $T_c$ is the temperature of the cold body. Remember that these results hold for black bodies — perfect absorbers of the radiation. The flux between shiny surfaces will be considerably reduced.
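Stefan's constant follows from the fundamental constants, and the net-flux formula is then immediate. A minimal sketch; the two surface temperatures are assumed examples:

```python
# Stefan-Boltzmann law: sigma = pi^2 k^4 / (60 hbar^3 c^2), then the net
# radiative flux between two black surfaces.
import numpy as np

hbar, k, c = 1.054571817e-34, 1.380649e-23, 2.998e8
sigma = np.pi**2 * k**4 / (60 * hbar**3 * c**2)
print(sigma)                         # ~ 5.67e-8 W m^-2 K^-4

Th, Tc = 500.0, 300.0                # assumed hot and cold temperatures, K
print(sigma * (Th**4 - Tc**4))       # ~ 3.1e3 W/m^2 net flux
```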
4.4.9 Phonons — Debye model of a solid

The Einstein model of a solid was successful in that it showed that the thermal capacity went to zero as the temperature was reduced. But the exponential reduction in $C_V$ was not in accord with the experimentally observed $T^3$ behaviour. The problem with the Einstein model is that it treats each atom as if its motion were independent of that of its neighbours. In practice the vibrations are coupled, and this leads to the propagation of waves throughout the solid with a range of oscillation frequencies.

Phonons (sound waves) are different from photons (light waves) in that they propagate in a discrete medium. This leads to a maximum frequency of oscillation, since the wavelength cannot be less than the interparticle spacing. Since there are $N$ particles, each with three directions of oscillation, the system will have $3N$ degrees of freedom. There will then be $3N$ modes of oscillation — recall ideas on normal modes from your mechanics courses. This will enable us to find the maximum frequency of oscillation of the propagating waves. Counting all the modes up to the maximum frequency, we have

$$\alpha\int_0^{\omega_{\max}} g(\omega)\,\mathrm{d}\omega = 3N,$$

where $\alpha$ is the degeneracy factor, here 3 for the three polarisations of the wave (or three directions of oscillation), and $g(\omega)$ is the frequency density of states given by

$$g(\omega) = \frac{V}{2\pi^2}\,k^2\left(\frac{\mathrm{d}\omega}{\mathrm{d}k}\right)^{-1}.$$

To proceed we must thus find the $\omega$–$k$ relation — the dispersion relation — for the propagating waves. You might recall having calculated $\omega \propto |\sin(ka/2)|$ for a one-dimensional chain. For small wavenumber $k$ this reduces to a linear relation, $\omega = ck$, where the speed of propagation is independent of $k$. In the Debye model this linear relation is assumed to hold over the entire allowed frequency range. We have an indication that we should be on the right track to obtain the correct low-temperature behaviour, since at low temperatures only the lowest-$k$ states will be excited; and we have already seen that a linear dispersion relation leads, in three dimensions, to a cubic thermal capacity (photons).

So we are assuming the relation $\omega = ck$, where $c$ is the speed of sound, up to the cutoff frequency. This gives us the frequency density of states as in the photon case:

$$g(\omega) = \frac{V\omega^2}{2\pi^2 c^3}.$$

In practice the velocity of propagation may be different for the different polarisations. This may be accounted for by writing

$$\frac{3}{c^3} = \frac{1}{c_1^3} + \frac{1}{c_2^3} + \frac{1}{c_3^3}.$$

The cutoff frequency $\omega_{\max}$ is found from

$$3\int_0^{\omega_{\max}} \frac{V\omega^2}{2\pi^2 c^3}\,\mathrm{d}\omega = 3N,$$

which may be integrated up to give

$$\omega_{\max}^3 = \frac{6N\pi^2 c^3}{V}.$$

And the density of states may be expressed in terms of this frequency:

$$g(\omega) = \frac{3N\omega^2}{\omega_{\max}^3}.$$

4.4.10 Phonon internal energy and thermal capacity

The internal energy is similar to that for the photon case, except that here the integral over frequency has a cutoff rather than going off to infinity:

$$E = \alpha\int_0^{\omega_{\max}} \varepsilon(\omega)\,g(\omega)\,\bar{n}(\omega)\,\mathrm{d}\omega.$$

Here $\alpha$ is the degeneracy factor 3, and the energy of a phonon of frequency $\omega$ is $\varepsilon(\omega) = \hbar\omega$. Using our above formula for the phonon density of frequency states, this gives

$$E = \frac{9N}{\omega_{\max}^3}\int_0^{\omega_{\max}} \frac{\hbar\omega^3}{\exp(\hbar\omega/kT) - 1}\,\mathrm{d}\omega.$$

Because of the finite upper limit of the integral, it is impossible to obtain an analytic expression for $E$; only a numerical solution is possible. To find the thermal capacity we must differentiate the internal energy. Only the Bose-Einstein factor depends on temperature, and differentiating this gives

$$\frac{\mathrm{d}}{\mathrm{d}T}\,\frac{1}{\exp(\hbar\omega/kT) - 1} = \frac{\hbar\omega}{kT^2}\,\frac{\exp(\hbar\omega/kT)}{\left[\exp(\hbar\omega/kT) - 1\right]^2}.$$

The expression for $C_V$ is then

$$C_V = \left.\frac{\partial E}{\partial T}\right|_V = \frac{9N\hbar^2}{\omega_{\max}^3\,kT^2}\int_0^{\omega_{\max}} \frac{\omega^4\exp(\hbar\omega/kT)}{\left[\exp(\hbar\omega/kT) - 1\right]^2}\,\mathrm{d}\omega.$$

Again, this is impossible to integrate analytically. However, simplification is possible by changing the variable of integration to

$$x = \frac{\hbar\omega}{kT}.$$

We also introduce a temperature corresponding to the cutoff frequency:

$$\theta_D = \frac{\hbar\omega_{\max}}{k}.$$

The thermal capacity can then be written as

$$C_V = 9Nk\left(\frac{T}{\theta_D}\right)^3\int_0^{\theta_D/T} \frac{x^4 e^x}{(e^x - 1)^2}\,\mathrm{d}x.$$

The integral is a function of $\theta_D/T$ only, and it may be calculated numerically. The function

$$F_D(y) = \frac{3}{y^3}\int_0^{y} \frac{x^4 e^x}{(e^x - 1)^2}\,\mathrm{d}x$$

is called the Debye function, and it is tabulated in the American Institute of Physics handbook. In terms of this function the thermal capacity is

$$C_V = 3Nk\,F_D\!\left(\frac{\theta_D}{T}\right).$$

Observe that the thermal capacity is a universal function of $T/\theta_D$.

The graph below shows thermal capacity against $T/\theta_D$ for a variety of substances. The fit is impressive and it supports the universality idea. ($C_p$ is shown rather than $C_V$ as it is easier to measure; but for solids we know that $C_p \approx C_V$.) Certainly the Debye model gives a better fit to the data than does the Einstein model.
[Figure: thermal capacity of various solids — $C_p/3Nk$ against $T/\theta_D$ on logarithmic scales; the solid line is the fit to the Debye model, the dotted line the Einstein model with $\theta_E = \theta_D$. The fitted Debye temperatures are: ice 192 K, copper 344.5 K, diamond 2200 K, silicon 640 K, argon 92.0 K]
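The Debye function is straightforward to evaluate numerically, as the text suggests. A minimal sketch; copper, with $\theta_D = 344.5$ K taken from the figure above, is the assumed example.

```python
# Debye heat capacity C_V = 3 N k F_D(theta_D / T), with
#   F_D(y) = (3/y^3) Int_0^y x^4 e^x / (e^x - 1)^2 dx.
import numpy as np
from scipy.integrate import quad

def F_D(y):
    val, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2, 0, y)
    return 3.0 * val / y**3

R, theta_D = 8.314, 344.5            # per mole; copper (assumed)
for T in (10.0, 50.0, 300.0, 1000.0):
    print(T, 3 * R * F_D(theta_D / T))
# Low T: approaches (12 pi^4 / 5) R (T/theta_D)^3; high T: approaches 3R ~ 24.9 J/mol/K
```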

4.4.11 Limiting forms at high and low temperatures

At high temperatures we recover the usual classical behaviour:

$$C_V = 3Nk \qquad\text{for } T \gg \theta_D.$$

As $T \to 0$ the upper limit of the integral tends to infinity and the integral tends to a constant number, just as in the photon case. The thermal capacity then has the characteristic $T^3$ behaviour, in this case

$$C_V = \frac{12\pi^4}{5}\,Nk\left(\frac{T}{\theta_D}\right)^3.$$

This is the important cubic behaviour that is observed experimentally, but which the Einstein model did not give.

The following figure shows the heat capacity of solid argon plotted against $T^3$. This shows the quality of the fit at low temperatures.

[Figure: heat capacity of solid argon at low temperatures — heat capacity in mJ mol⁻¹ K⁻¹ against $T^3$ in K³]

The Debye model should be regarded as an interpolation procedure. It gives the correct behaviour at high and low temperatures, but it is not quite so good in the middle range. Here the precise density of states, obtained from the correct dispersion relation, is important. Our expression for $g(\omega)$ was only an approximation, treating the solid as a continuum (linear $\omega$–$k$ relation) but with a frequency cutoff. In reality the equations of motion for the system must be solved to obtain the correct $\omega(k)$ relation, and from that the real density of states $g(\omega)$ — a much harder task. The figure below shows $g(\omega)$ for a real solid together with the forms predicted by the two models:

Einstein model: $g(\omega) \propto \delta(\omega - \omega_E)$;
Debye model: $g(\omega) \propto \omega^2$ up to the cutoff.

[Figure: density of states $g(\omega)$ for a real solid compared with the Einstein and Debye model forms; inset — the "force-fit" Debye temperature $\theta_D(T)$ in K, varying between about 105 K and 120 K over the range 0–20 K]

The real density of states contains kinks and spikes; in this respect it has some features of the Einstein model delta function. So both the Einstein and the Debye models contain some aspects of the truth. But of course, as we have argued already, the Debye model is the more appropriate for explaining the low-temperature behaviour. An alternative viewpoint is to adopt the Debye model but to force agreement with the observed temperature dependence of the thermal capacity by allowing the Debye temperature to vary with temperature. Such a temperature variation is shown in the inset to the figure.

______________________________________________________________ End of lecture 25


Section 5 Further Thermodynamics


5.1 Phase equilibrium
5.1.1 Conditions for equilibrium — coexistence

It is an observed fact that the physical state of a system can sometimes be changed dramatically when its external conditions are changed only slightly. Thus ice melts when the temperature is increased from slightly below 0°C to slightly above this temperature. Different physical states of the same substance are referred to as phases, and the study of transitions between phases is one of the most interesting problems in statistical thermodynamics. This is partly because the question of predicting when and how a phase transition will occur is still not a fully solved problem. There is the further point that a comprehensive understanding of phase transition phenomena might have wider application, to such things as the outbreak of a war or a stock market crash.

Very generally, phase transitions are due to interactions between the constituent particles of a system. The model-dependence of the behaviour would suggest that full understanding can only come from a statistical-mechanical study, not from thermodynamics. However, there are many aspects of phase transitions which seem to be general and common to many systems. Macroscopic thermodynamics is of help here, since it is model-independent and it connects seemingly unrelated properties of the system. This frees us to concentrate on the few thermodynamic variables of the system instead of getting bogged down in the microscopic detail.

We start by considering the conditions for equilibrium to exist between two phases of the same substance — such as ice and water, for example.

[Figure: two phases in coexistence]

The fact that the two phases are in intimate contact with one another means that they must be at the same temperature and pressure. And we know that the condition for equilibrium in a system at given temperature and pressure is that the Gibbs free energy be a minimum. Note that this condition is independent of the nature of the constraints on the composite system. To summarise:

Phase equilibrium is determined by minimising the Gibbs free energy $G$, regardless of the constraints on the system.


We shall now make an important connection with the chemical potential. From the definition of $G$ we know that its differential expression is

$$\mathrm{d}G = -S\mathrm{d}T + V\mathrm{d}p + \mu\mathrm{d}N,$$

and the proper variables for $G$ are then $T$, $p$ and $N$. But $G(T, p, N)$ is an extensive quantity. This means that

$$G(T, p, N) = N\,G(T, p, 1),$$

since both the other variables, $T$ and $p$, are intensive. From the expression for $\mathrm{d}G$ we see that $\mu$ may be specified as

$$\mu = \left.\frac{\partial G(T, p, N)}{\partial N}\right|_{T,p} = \left.\frac{\partial\,N G(T, p, 1)}{\partial N}\right|_{T,p} = G(T, p, 1).$$

In other words, the chemical potential is none other than the Gibbs free energy per particle, which we shall sometimes denote by $g$.

The equilibrium state of the system is that for which $G$ is a minimum:

$$\mathrm{d}G = \mathrm{d}(G_1 + G_2) = 0.$$

Now if there are $N_1$ particles in phase 1 and $N_2$ particles in phase 2, then since

$$G_1 = \mu_1 N_1 \qquad\text{and}\qquad G_2 = \mu_2 N_2,$$

the equilibrium condition may be written as

$$\mathrm{d}(\mu_1 N_1 + \mu_2 N_2) = 0.$$

The total number of particles in the composite system is fixed: $N_1 + N_2 = \mathrm{const}$. So if we consider movement of particles between the phases, then

$$\mu_1\,\mathrm{d}N_1 + \mu_2\,\mathrm{d}N_2 = 0, \qquad \mathrm{d}N_1 + \mathrm{d}N_2 = 0,$$

or

$$(\mu_1 - \mu_2)\,\mathrm{d}N_1 = 0,$$

which can only be satisfied if

$$\mu_1 = \mu_2.$$

In other words, when equilibrium is established between two phases, not only are the temperature and the pressure equalised, but the chemical potentials are the same as well. In retrospect this is obvious, since $\mu$ is the force which drives particle flow:

$\mu_1 < \mu_2$ means particles flow from phase 2 to phase 1;
$\mu_1 > \mu_2$ means particles flow from phase 1 to phase 2.

So when equilibrium is established, there is no net flow of particles between the phases.

5.1.2 The phase diagram The phase diagram of a typical p V T system is shown in the figure below. There are three distinct phases: solid, liquid and gas. The lines separating the phases are the transition lines: the melting line, the boiling line and the sublimation line. When a transition line is crossed there is a change of phase. Along a transition line one has a coexistence of the two phases. And at a single point one can have a coexistence of three phases. This is called the triple point.

[Figure: typical phase diagram in the p–T plane, showing the solid, liquid and gas regions separated by the melting, boiling and sublimation lines, with the triple point and the critical point marked.]

The symmetry of the liquid and the gas phases is the same. For this reason the transition line between liquid and gas, the boiling line, can terminate; the termination point is called the critical point. Conversely, the symmetry of the solid and the liquid/gas phases is different. Now symmetry cannot change continuously. This means that the melting line cannot terminate; it goes on forever. The equilibrium density of a system at specified temperature and pressure is found by minimising the Gibbs free energy of the system. In other words the Gibbs free energy (or the Gibbs free energy per particle μ) will be a minimum as a function of density (or of the specific volume v).


[Figure: μ as a function of specific volume v, with a single minimum at the equilibrium specific volume.]

Now on a transition line two phases, with different densities, can coexist. In terms of the Gibbs free energy this is telling us that there must be two minima of equal depth, at two different densities. So there is a double minimum.

[Figure: μ(v) with two minima of equal depth, one at the liquid specific volume and one at the gas specific volume: the coexistence of two phases.]

We can extend these arguments to other regions of the phase diagram. Along the coexistence curve there will be two minima of equal depth. Moving along the coexistence curve towards the critical point, the two minima become closer together as the liquid and gas densities become closer. At the critical point the two minima coalesce; they become degenerate. Away from the coexistence curve the two minima will be of unequal depth. In the liquid phase the higher-density minimum will be lower, while in the gas phase the lower-density minimum will be lower.
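To visualise this, here is a schematic sketch I have added: a quartic "free energy" g(v) whose tilt parameter c plays the role of distance from the coexistence line. The functional form and numbers are invented purely for illustration.

```python
# Schematic double-well Gibbs free energy per particle g(v).
import numpy as np

def g(v, b=1.0, c=0.0, v0=1.0):
    """b > 0 produces two minima (the two competing phases);
    the linear tilt c mimics moving off the coexistence line.
    Purely illustrative, not a real equation of state."""
    x = v - v0
    return x**4 - b * x**2 + c * x

v = np.linspace(0.0, 2.0, 20001)
for c in (-0.3, 0.0, 0.3):
    i = int(np.argmin(g(v, c=c)))   # crude grid search for the minimum
    print(f"tilt c = {c:+.1f}: global minimum at v = {v[i]:.3f}")
# c = 0: two degenerate minima (coexistence); c < 0 favours the
# large-v (gas-like) minimum, c > 0 the small-v (liquid-like) one.
```

Letting b shrink to zero makes the two minima merge into one, mimicking the coalescence of the liquid and gas minima at the critical point.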


[Figure: variation of μ(v) around the phase diagram. In the liquid phase the minimum at the liquid specific volume is the lower; in the gas phase the minimum at the gas specific volume is the lower; at the critical point the two minima coincide.]
______________________________________________________________
End of lecture 26

5.1.3 Clausius-Clapeyron equation
Equilibrium will be maintained between two phases over a range of temperatures so long as particles can flow so as to equalise the chemical potentials. This gives a line of equilibrium coexistence in the T–p plane, as we have seen.

[Figure: coexistence curve in the p–T plane. Phase 1 is stable on one side (μ₁ < μ₂), phase 2 on the other (μ₁ > μ₂); on the curve itself μ₁ = μ₂.]

We shall examine the differential relation satisfied by the coexistence curve. This is a relation which holds between the phases as the external conditions are varied.


Along the coexistence curve we must have μ₁ = μ₂, although the values of μ₁ and μ₂ will change as one moves along the curve. For a small displacement along the coexistence curve we must have

    δμ₁ = δμ₂

or, in terms of T and p:

    (∂μ₁/∂T)|_p δT + (∂μ₁/∂p)|_T δp = (∂μ₂/∂T)|_p δT + (∂μ₂/∂p)|_T δp.

Now since μ is the Gibbs free energy per particle, we can write

    μ = e − Ts + pv

where e is the energy per particle, s is the entropy per particle and v is the volume per particle. The derivatives of μ are then given by

    (∂μ/∂T)|_p = −s,    (∂μ/∂p)|_T = v,

so that we have

    −s₁ δT + v₁ δp = −s₂ δT + v₂ δp

or

    (s₂ − s₁) δT = (v₂ − v₁) δp.

Letting δT and δp tend to zero, we obtain the slope of the coexistence curve as

    dp/dT = (s₂ − s₁)/(v₂ − v₁)

or

    dp/dT = Δs/Δv,

where Δs is the change in specific entropy and Δv is the change in specific volume either side of the coexistence line.

The entropy change per particle is the difference in a particle's entropy in the two phases at a specified temperature and pressure. We are thus considering an entropy change, or heat flow, at constant temperature. This is related to the latent heat of transformation:

    l₁→₂ = T(s₂ − s₁),

where l is the latent heat per particle. We can then write the Clausius-Clapeyron equation as

    dp/dT = l/(T Δv).
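As an added order-of-magnitude check (not in the original notes), here is the slope of the boiling line of water at atmospheric pressure, using rounded handbook values per unit mass:

```python
# Clausius-Clapeyron slope dp/dT = L / (T * dv), per unit mass.
# Approximate handbook values for water at 100 C:
L = 2.26e6        # latent heat of vaporisation, J/kg
T = 373.15        # boiling temperature at 1 atm, K
v_liq = 1.04e-3   # specific volume of liquid water, m^3/kg
v_gas = 1.673     # specific volume of steam at 1 atm, m^3/kg

dp_dT = L / (T * (v_gas - v_liq))
print(f"dp/dT = {dp_dT:.0f} Pa/K")   # about 3.6 kPa/K
```

So near 100 °C the boiling point of water rises by roughly 1 K for every 3.6 kPa of extra pressure.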


In melting one usually finds:

•  The entropy increases (the liquid is more disordered than the solid), so Δs is positive.
•  On melting, a solid expands, so Δv is positive.

So the slope dp/dT of the coexistence curve is usually positive. There are two notable exceptions to this normal behaviour.

In ³He below about 0.3 K the solid is more disordered than the liquid! Thus Δs is negative. But solid ³He does expand on melting, so Δv is positive. So the slope dp/dT of the coexistence curve is negative.

When ice melts the liquid is indeed more disordered than the solid; Δs is positive. But ice contracts on melting; Δv is negative. So here also the slope dp/dT of the coexistence curve is negative. This explains why ice can be melted by increasing the pressure, a fact of importance to ice skaters but rather unfortunate for motorists.
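The same one-line calculation with rounded handbook values for ice (again my addition) shows the anomalous negative slope:

```python
# Melting line slope for ice/water at 0 C: dv < 0, so dp/dT < 0.
L = 3.34e5             # latent heat of fusion, J/kg (approximate)
T = 273.15             # melting temperature, K
v_ice   = 1.0 / 917.0  # specific volume of ice, m^3/kg
v_water = 1.0 / 1000.0 # specific volume of water, m^3/kg

dp_dT = L / (T * (v_water - v_ice))
print(f"dp/dT = {dp_dT/1e6:.1f} MPa/K")  # about -13 MPa/K:
# roughly 130 atm of extra pressure lowers the melting point by ~1 K
```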

[Figure: melting curves in the p–T plane for a normal system (positive slope: increasing the pressure causes freezing) and for an anomalous system such as ice, or ³He below 0.3 K (negative slope: increasing the pressure causes melting).]

5.1.4 Saturated vapour pressure
From the Clausius-Clapeyron equation we may obtain an approximate expression for the vapour pressure over a solid or a liquid. Along the coexistence curve there are no free variables. Thus so long as both phases are present, p is a unique function of T. In fact at low temperatures (from about 5 K down to about 0.3 K) the standard experimental temperature scale is defined in terms of the vapour pressure above liquid helium. Tables and a formula are given for this. (For temperatures below 0.3 K the pressure along the solid–liquid coexistence curve of ³He is used for thermometry.)

We shall use the Clausius-Clapeyron relation to derive an approximate expression relating the temperature and pressure of a saturated vapour. So we start from

    dp/dT = l/(T Δv)

and we make the following assumptions:

•  The specific volume of the condensed phase can be neglected in comparison with that of the vapour: Δv = v₂ − v₁ ≈ v₂.
•  The vapour may be treated as an ideal gas: pv₂ = kT (v₂ is the volume per particle).
•  The latent heat is independent of temperature: l = const.

Using these approximations in the Clausius-Clapeyron equation gives

    dp/dT = lp/(kT²).

Writing this as

    (1/p) dp/dT = l/(kT²)

we can integrate to obtain

    ln p = −l/(kT) + const

or

    p = p₀ exp(−l/kT).

Thus the vapour pressure is a rapidly increasing function of temperature. This also gives a rather convenient way of measuring the latent heat of vapourisation, which is a measure of the binding energy in the condensed phase.
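Since this result makes ln p linear in 1/T, two vapour-pressure readings determine l. Here is a small added sketch using rounded tabulated values for water (the input numbers are approximate and serve only as an illustration):

```python
# Estimate the latent heat from two points on the vapour-pressure curve:
# ln(p2/p1) = (l/k) * (1/T1 - 1/T2)   (constant-l approximation)
import math

k  = 1.380649e-23     # Boltzmann constant, J/K
NA = 6.02214076e23    # Avogadro's number, 1/mol

# Approximate tabulated values for water (rounded):
T1, p1 = 353.0, 47.4e3     # 80 C, ~47.4 kPa
T2, p2 = 373.15, 101.3e3   # 100 C, ~1 atm

l = k * math.log(p2 / p1) / (1.0/T1 - 1.0/T2)   # per particle, J
print(f"l = {l:.3e} J per particle, i.e. {l*NA/1e3:.1f} kJ/mol")
# ~41 kJ/mol, close to the accepted ~40.7 kJ/mol for vaporisation
```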

There are transitions for which ΔS and ΔV are zero. Clearly there is no latent heat associated with such transitions. These are called second order transitions, while those considered above are called first order transitions. Second order transitions are studied in PH361.
______________________________________________________________
End of lecture 27

5.2 The Third Law of thermodynamics

5.2.1 History
The third law of thermodynamics arose as the result of experimental work in chemistry, principally by the famous chemist Nernst. He published what he called his heat theorem in 1906. Nernst measured the change in Gibbs free energy and the change in enthalpy for chemical reactions which started and finished at the same temperature. At lower and lower temperatures he found that the changes in G and the changes in H became closer and closer.


[Figure: Nernst's observations. ΔG and ΔH for a reaction, plotted against T, approach one another as T → 0.]

Nernst was led to conclude that at T = 0 the changes in G and H were the same. And from some elementary thermodynamic arguments he was able to infer the behaviour of the entropy at low temperatures. Changes in H and G are given by

    ΔH = T ΔS + V Δp,
    ΔG = −S ΔT + V Δp.

Thus ΔG and ΔH are related by

    ΔG = ΔH − T ΔS − S ΔT,

and if the temperature is the same before and after, ΔT = 0, so then

    ΔG = ΔH − T ΔS.

This is a very important equation for chemists. Now Nernst's observation may be stated as

    ΔH − ΔG → 0 as T → 0,

which he realised implied that

    T ΔS → 0 as T → 0.

5.2.2 Entropy
On the face of it this result is no surprise, since the factor T will ensure the product TΔS goes to zero. But Nernst took the result further. He studied how fast ΔH − ΔG tended to zero. And his observation was that it went faster than linearly. In other words he concluded that

    (ΔH − ΔG)/T → 0 as T → 0.

So even though 1/T was getting bigger and bigger, the quotient (ΔH − ΔG)/T still tended to zero. But we know that

    (ΔH − ΔG)/T = ΔS.

So from this Nernst drew the conclusion

    ΔS → 0 as T → 0.

The entropy change in a process tends to zero at T = 0. The entropy thus remains a constant in any process at absolute zero. We conclude: the entropy of a body at zero temperature is a constant, independent of all other external parameters. This was the conclusion of Nernst, sometimes called Nernst's heat theorem. It was subsequently to be developed into the Third Law of thermodynamics.

5.2.3 Quantum viewpoint
From the purely macroscopic perspective the Third Law is as stated above: at T = 0 the entropy of a body is a constant. And many conclusions can be drawn from this. One might ask the question "what is the constant?". However we do know that thermodynamic conclusions about measurable quantities are not influenced by any such additive constants, since one usually differentiates to find observables. (But a constant of minus infinity, as we found for the classical ideal gas, might be problematic.) If we want to ask about the constant then we must look into the microscopic model for the system under investigation. Recall the Boltzmann expression for entropy:

    S = k ln Ω,

where Ω is the number of microstates in the macrostate. Now consider the situation at T = 0. Then we know the system will be in its ground state, the lowest energy state. But this is a unique quantum state. Thus for the ground state

    Ω = 1

and so

    S = 0.

Nernst's constant is thus zero and we then have the expression for the Third Law: as the absolute zero of temperature is approached the entropy of all bodies tends to zero. We note that this applies specifically to bodies which are in thermal equilibrium. The Third Law can be summarised as

    S → 0 as T → 0, for anything (in thermal equilibrium).

5.2.4 Unattainability of absolute zero
The Third Law has important implications concerning the possibility of cooling a body to absolute zero. Let us consider a sequence of adiabatic and isothermal operations on two systems, one obeying the Third Law and one not.


[Figure: approaching absolute zero. Entropy–temperature diagram showing a sequence of isothermal and adiabatic steps between two values of an external parameter. For a system not obeying the Third Law one can get to T = 0 in two steps; for a system obeying the Third Law the entropy curves meet at T = 0, and one cannot get to T = 0 in a finite number of steps.]

Taking a sequence of adiabatics and isothermals between two values of some external parameter, we see that the existence of the Third Law implies that you cannot get to T = 0 in a finite number of steps. This is, in fact, another possible statement of the Third Law. Although one cannot get all the way to T = 0, it is possible to get closer and closer.
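As an added caricature of the stepping argument, consider an ideal paramagnet, whose entropy depends on B and T only through the ratio B/T (a result of the spin-1/2 model treated earlier in the course). Each isothermal magnetisation followed by an adiabatic demagnetisation then multiplies T by B_low/B_high. The field values below are invented for illustration:

```python
# Each cycle: magnetise isothermally (B_low -> B_high), then demagnetise
# adiabatically (B_high -> B_low).  For an ideal paramagnet S = S(B/T),
# so the adiabat keeps B/T fixed and hence T -> T * (B_low / B_high).
B_low, B_high = 0.1, 1.0   # tesla (illustrative values)
T = 1.0                    # starting temperature, K

for step in range(1, 11):
    T *= B_low / B_high    # one complete cycle
    print(f"after step {step:2d}: T = {T:.1e} K")
# T shrinks by a factor of 10 per step but is never exactly zero;
# in a real magnet, interactions make the entropy curves at B_low and
# B_high meet at T = 0 (Third Law), so the steps shrink even faster.
```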
The following graph indicates the success in this venture.

[Figure: the road to absolute zero. Lowest temperature achieved versus year, from Faraday's gas liquefaction work in the 1840s, through the liquefaction of O₂, N₂, H₂ and ⁴He, then ³He and ³He–⁴He mixtures and magnetic refrigeration, down to ultra-low temperatures below 10⁻⁶ K by 2000.]
______________________________________________________________
End of lecture 28

5.2.5 Heat capacity at low temperatures
The Third Law has important consequences for the thermal capacity of bodies at low temperatures. Since

    C = ΔQ/ΔT = T ΔS/ΔT,

and the Third Law tells us that ΔS → 0 as T → 0, we then have

    C → 0 as T → 0.

Classical models often give a constant thermal capacity. Recall that for an ideal gas

    C_V = (3/2) Nk,

independent of temperature. The Third Law tells us that this cannot hold at low temperatures. And indeed we saw that for a Fermi gas (and a Bose gas) C_V does indeed go to zero at T = 0.
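To make the contrast concrete, here is a small added comparison of the classical constant with the standard low-temperature results quoted earlier in the course, C ≈ (π²/2)Nk(T/T_F) for the degenerate Fermi gas and C ≈ (12π⁴/5)Nk(T/Θ_D)³ for Debye phonons. The values of T_F and Θ_D are merely illustrative:

```python
import math

# Thermal capacity per particle, in units of k.  The prefactors are the
# standard low-T results; T_F and Theta_D values are illustrative only.
T_F, Theta_D = 5.0e4, 300.0   # Fermi and Debye temperatures, K

def c_classical(T):
    return 1.5                                  # ideal gas: 3/2 k

def c_fermi(T):
    return (math.pi**2 / 2) * T / T_F           # degenerate Fermi gas

def c_debye(T):
    return (12 * math.pi**4 / 5) * (T / Theta_D)**3  # valid for T << Theta_D

for T in (30.0, 10.0, 1.0, 0.01):
    print(f"T = {T:6.2f} K: classical {c_classical(T):.2f}, "
          f"Fermi {c_fermi(T):.2e}, Debye {c_debye(T):.2e}")
# The quantum forms vanish as T -> 0, as the Third Law requires;
# the classical constant does not, so it must fail at low temperatures.
```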

5.2.6 Other consequences of the Third Law
Most response functions or susceptibilities (generalised spring constants) go to zero or to a constant as T → 0 as a consequence of the Third Law. This is best seen by examining the relevant Maxwell relation. For example consider the thermal expansion coefficient. The Maxwell relation here is

    (∂V/∂T)|_p = −(∂S/∂p)|_T.

The right hand side is zero by virtue of the Third Law. Thus we conclude that

    (∂V/∂T)|_p → 0 as T → 0;

the expansion coefficient goes to zero.

An interesting example is the susceptibility of a paramagnet. The connection with the model p–V system is made via p → −B, V → M. The magnetic susceptibility is (neglecting factors of μ₀)

    χ = (1/V) (∂M/∂B)|_T.

There is no Maxwell relation for this, but consider the variation of the susceptibility with temperature:

    ∂χ/∂T = (1/V) ∂²M/∂T∂B.

The order of differentiation can be reversed here. In other words we may consider ∂/∂B of (∂M/∂T)|_B. And now we do have a Maxwell relation: the analogue of

    (∂p/∂T)|_V = (∂S/∂V)|_T    is    (∂M/∂T)|_B = (∂S/∂B)|_T.

The Third Law tells us that the right hand side of these equations goes to zero as T → 0. We conclude then that

    ∂χ/∂T → 0 as T → 0,

or

    χ → const as T → 0.

The Third Law tells us that the magnetic susceptibility becomes constant as T → 0. But what did Curie's law give? This stated

    χ = C/T

so that

    χ → ∞ as T → 0 !!

This is completely incompatible with the Third Law.

But Curie's law is a specifically high-temperature result. Also it does not consider the interactions between magnetic moments, which have to occur. In fluid systems, where the particles must be treated as delocalised, the statistics will also have an effect. Recall the behaviour of fermions at low temperatures. We saw that, very roughly, only a fraction T/T_F of the particles are free and available to participate in normal behaviour. We then expect that the Curie law behaviour will be modified to

    χ ∝ (C/T)(T/T_F) = C/T_F,

which is indeed a constant, in conformity with the Third Law. This result is correct, but a numerical calculation must be done to determine the numerical constants involved.
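A rough added illustration of this last point follows. The interpolation formula below is a crude guess of my own, not a derived result, but it shows the qualitative change from Curie divergence to a Third-Law-compatible constant:

```python
# Curie law diverges as T -> 0; the Fermi-suppressed susceptibility
# saturates at ~ C/T_F.  C and T_F below are arbitrary illustrative
# numbers; the max(T, T_F) interpolation is only a crude sketch.
C, T_F = 1.0, 5.0e4   # Curie constant (arb. units), Fermi temperature (K)

def chi_curie(T):
    return C / T

def chi_fermi(T):
    return C / max(T, T_F)   # crude interpolation, not a real calculation

for T in (3e5, 1e4, 100.0, 1.0, 0.01):
    print(f"T = {T:9.2f} K: Curie {chi_curie(T):.2e}, "
          f"degenerate {chi_fermi(T):.2e}")
# Above T_F the two agree; below T_F the degenerate-gas susceptibility
# is pinned at C/T_F, a constant, while the Curie form blows up.
```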


5.2.7 Pessimist's statement of the laws of thermodynamics
As we have now covered all the laws of thermodynamics, we can present statements of them in terms of what they prohibit in the operation of Nature.

First Law: You cannot convert heat to work at greater than 100% efficiency.
Second Law: You cannot even achieve 100% efficiency, except at T = 0.
Third Law: You cannot get to T = 0.

This is a simplification, but it encapsulates the underlying truths, and it is easy to remember. ______________________________________________________________ End of lecture 29

