$^{38}_{17}\mathrm{Cl} \rightarrow \, ^{38}_{18}\mathrm{Ar} + \, ^{0}_{-1}\mathrm{e}$
Previous PH6 examinations contain similar passages with the difference that, in the legacy
specification, candidates had not previously seen the passage and were expected to read it in
the examination. Nevertheless, these passages form a good resource for introducing this
section of the paper. Because candidates will be expected to have studied the passage prior to
the examination, no allowance for reading is built into the duration of PH5, which is 105
minutes.
SECTION C Options
There are five optional topics:
Option A
Option B: Revolutions in Physics
Option C: Materials
Option D: Biological Measurement and Medical Imaging
Option E: Energy Matters
Each topic is designed to be studied in approximately 15 hours of teaching time. They could
all be taught at the end of PH5. They fit in with the rest of the specification in different ways,
which suggests that different teaching strategies are appropriate:
Option A follows on immediately from the electromagnetism and A.C. material in PH5. The
filters section relates also to the potential divider ideas in PH1.
The Electromagnetic Revolution aspect of Option B, which will be the setting for questions
for the first 3 years, relates in the early stages to the optics material in PH2, the electrostatics
in PH4 and the electromagnetism in PH5. There is a strong case, if this option is to be offered, for incorporating its ideas throughout the teaching of the rest of the course.
Option C, Materials, consists of ideas which were previously in the compulsory specification
and are now optional. There are few strong links with other sections of the specification.
Option D, Biological Measurement and Medical Imaging, has links to PH2 and PH5.
Option E, Energy Matters, links to PH1, PH2 and PH4, and so a possible approach would be to introduce the content throughout the course.
Guidance notes on each of the options follow:
The main component in determining the Q factor of the circuit is the resistance of the circuit
because it is the resistance that dissipates energy away from the circuit. This is similar to pushing a swing back and forth: if there is a lot of friction taking energy away from the swing, it is difficult to achieve a high amplitude and a sharp resonance. The easiest way to define the Q factor is as follows:
$$Q = \frac{\text{p.d. across the inductor (or capacitor) at resonance}}{\text{p.d. across the resistor}}$$
As the capacitor and inductor have equal reactance at resonance, the Q factor can also be
written:
$$Q = \frac{I\omega_0 L}{IR} = \frac{\omega_0 L}{R} \quad \text{and also} \quad Q = \frac{I/\omega_0 C}{IR} = \frac{1}{\omega_0 C R}$$

Since $\omega_0 = \frac{1}{\sqrt{LC}}$, then

$$\frac{\omega_0 L}{R} = \frac{1}{\omega_0 C R} = \frac{1}{R}\sqrt{\frac{L}{C}}$$
Note that, in the expressions for the Q factor, we can eliminate L, C and ω₀, but we cannot eliminate R: it is in all 3 expressions. Note also that the Q factor is a ratio and it has no units.
Now consider this circuit:
[Series circuit: 10 V supply, 10 nF capacitor, 10 mH inductor and 10 Ω resistor.]
These values for R, C, L make our arithmetic reasonably easy. They give us the following figures:

$$\omega_0 = \frac{1}{\sqrt{LC}} = \frac{1}{\sqrt{10^{-2} \times 10^{-8}}} = \frac{1}{\sqrt{10^{-10}}} = \frac{1}{10^{-5}} = 10^5\ \text{s}^{-1}$$
and

$$Q = \frac{\omega_0 L}{R} = \frac{10^5 \times 10^{-2}}{10} = 100$$
We can also calculate the current flowing at resonance because the whole of the supply p.d. is
across the resistor at resonance (p.d.s across the inductor and capacitance are equal and
opposite, so cancel).
$$I = \frac{V}{R} = \frac{10}{10} = 1\ \text{A}$$
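The figures in this worked example can be checked numerically. The following is a minimal Python sketch that simply recomputes them from the component values given above:

```python
from math import sqrt

# Component values from the worked example:
# R = 10 ohm, C = 10 nF, L = 10 mH, supply p.d. = 10 V.
R, C, L, V = 10.0, 10e-9, 10e-3, 10.0

omega0 = 1 / sqrt(L * C)   # resonance angular frequency, ~1e5 rad/s
Q = omega0 * L / R         # Q factor (equals 1/(omega0*C*R)), here 100
I = V / R                  # current at resonance, 1 A

print(omega0, Q, I)
```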
All seems nice and straightforward until we look at the p.d. across the capacitor or inductor: at resonance this is Q times the supply p.d., i.e. 100 × 10 V = 1000 V.
[Circuit diagram: an antenna coupled to a loop consisting of an inductor and a variable capacitor (an LC loop).]
If you look at the loop in the above circuit, you'll notice that there is no resistor. It is an LC circuit without the R. Why is this? Remember that we want a high Q factor, and one of the ways that this is achieved is to keep the resistance low. Does this mean that the resistance in the LC loop is zero? Obviously the resistance cannot be zero because the connecting wires aren't made of superconductors. But the main source of resistance in the LC loop is the inductor. Remember that an inductor is a long wire wound into a coil. In order to make a large number of loops we need a thin wire, and this increases the resistance of the inductor (a bit of a Catch-22 situation).
A simplified way of analysing the performance of the detecting circuit above is to consider it
as follows:
[Circuit diagram: a.c. source (~) in series with a resistor, an inductor and a variable capacitor; VOUT is taken across the capacitor.]
i.e. we have a series LCR circuit and the voltage across the variable capacitor is the output
voltage. Note also that we have redrawn the inductor as a resistor and inductor in series
because of the inherent resistance of the wires of the inductor.
If we have resonance in the LCR circuit we know (from the definition of the Q factor) that the
p.d. across the capacitor will be Q times the supply p.d. Hence, we can amplify the input p.d.
by a factor of Q. Also, because of the shape of the resonance curve we only amplify the
frequencies around the resonance frequency, so we have selectivity.
So why do we use a variable capacitor? This is because we can vary the resonance frequency
by varying the capacitance. We obtain the resonance frequency from the equation below.
$$f_0 = \frac{\omega_0}{2\pi} = \frac{1}{2\pi\sqrt{LC}}$$
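To illustrate the tuning idea, here is a short Python sketch that finds the capacitance needed to resonate at a chosen frequency. The inductance of 0.15 mH is the value that appears in the radio example in these notes; the 1 MHz station frequency is an illustrative choice, not a value from a particular question:

```python
from math import pi, sqrt

L = 0.15e-3  # H, inductance from the radio example

def capacitance_for(f0):
    """Capacitance giving resonance at f0, from f0 = 1/(2*pi*sqrt(L*C))."""
    return 1 / ((2 * pi * f0) ** 2 * L)

def resonance_frequency(C):
    """Resonance frequency of the LC combination."""
    return 1 / (2 * pi * sqrt(L * C))

C = capacitance_for(1.0e6)  # tune to an assumed 1 MHz station
print(C)                    # about 1.7e-10 F, i.e. roughly 170 pF
```

Varying C over the range of a typical variable capacitor sweeps f₀ across the medium-wave band, which is exactly the tuning behaviour described above.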
Example
This circuit is used in a simple radio. [Circuit: 10 Ω resistance, 0.15 mH inductor and a 6–600 pF variable capacitor; question parts (i) and (ii) are not reproduced in this extract.]

$$V_{IN} = \sqrt{V_C^2 + V_R^2}, \quad \text{not} \quad V_{IN} = V_C + V_R$$
Low pass filter:
[Left circuit: a.c. source Vin (~) in series with R and C; Vout is taken across C. Right circuit, for comparison: voltage divider with R1 and R2 in series across the supply; Vout is taken across R2.]
The easiest way to explain how the above circuit behaves as a low pass filter is to compare it
with a voltage divider.
In the circuit on the right, the supply voltage is shared between the two resistors. In the low
pass filter, on the left, the voltage is divided between the capacitor and the resistor.
Remember that the reactance of the capacitor is given by: $X_C = \frac{1}{\omega C}$
From the above equation, at low frequencies XC will be very large. So at low frequencies we
have a voltage divider with a very large resistance in the R2 position. This means that nearly
all the supply voltage will be across the capacitor at low frequencies.
At high frequencies XC will be very small. So at high frequencies we have a voltage divider with a very low resistance in the R2 position. This means that nearly all the supply voltage will be across the resistor at high frequencies, i.e. there will be a very low p.d. across the capacitor.
If we were to draw a graph of Vout/Vin against frequency we would get:
[Graph: low pass filter output. Gain Vout/Vin (scale 0 to 1.0) against frequency/Hz on a logarithmic scale; the gain starts at 1 at low frequency and falls towards zero at high frequency.]
Note that Vout/Vin is usually called the gain and that it starts at 1 and drops to zero (this is
because Vout = Vin at very low frequencies and Vout = 0 at very high frequencies).
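The behaviour just described can be sketched numerically. Treating the divider with phasor magnitudes gives the gain as XC/√(R² + XC²); the component values below are the 1 kΩ and 1 nF used in the example that follows:

```python
from math import pi, sqrt

R, C = 1000.0, 1e-9  # 1 kohm, 1 nF

def gain(f):
    """Low-pass gain Vout/Vin at frequency f (Hz)."""
    Xc = 1 / (2 * pi * f * C)        # capacitor reactance
    return Xc / sqrt(R**2 + Xc**2)   # voltage-divider ratio

print(gain(100.0))   # close to 1 at low frequency
print(gain(1e8))     # close to 0 at high frequency
```

At the frequency where XC = R the gain is 1/√2, which is the break frequency worked out in the example below.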
Example
[Circuit: 10 V a.c. supply (~) in series with a 1 kΩ resistor and a 1 nF capacitor; Vout is taken across the capacitor.]
Answers
Using $\omega = 2\pi f$, we get:

$$f = \frac{1}{2\pi CR} = \frac{1}{2\pi \times 10^{-9} \times 1000} = 159\ \text{kHz}$$
Beware:
There are 3 pitfalls to avoid if you want to obtain the correct answer even after you've obtained the equation.
First, you must remember that kΩ means 1000 Ω.
Second, you must remember that nF means 10⁻⁹ F.
Third (but this only applies if you have an EXP button on your calculator and if you're too lazy to do the powers of 10 in your head!), when putting 10⁻⁹ in your calculator you cannot type 10 EXP −9, because this is the same as 10 × 10⁻⁹. You must type (and this might seem strange until you think about it carefully) 1 EXP −9, because this is 1 × 10⁻⁹.
So $V_S = \sqrt{2}\,V_C$, $V_C = \frac{V_S}{\sqrt{2}}$ and hence $V_C = \frac{10}{\sqrt{2}} = 7.07\ \text{V}$.
So the correct answer is that the p.d. across both the capacitor and the resistor is 7.07 V.
Beware:
Do not fall into the trap of saying that both rms p.d.s must be 5 V so that they add up to 10 V. Although this sort of argument applies to instantaneous p.d.s, it is completely wrong for obtaining rms p.d.s because the p.d. across the capacitor is 90° out of phase with the p.d. across the resistor.
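The correct phasor arithmetic can be checked in a couple of lines. This sketch assumes the case where XC = R, so the two rms p.d.s are equal:

```python
from math import sqrt

Vs = 10.0            # rms supply p.d. (V)

# With Xc = R the two rms p.d.s are equal, and Vs^2 = Vc^2 + Vr^2 gives:
Vc = Vr = Vs / sqrt(2)

print(Vc)                      # about 7.07 V, not 5 V
print(sqrt(Vc**2 + Vr**2))     # recovers the 10 V supply
print(Vc + Vr)                 # about 14.1 V: the wrong, in-phase sum
```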
[Left circuit (high pass filter): a.c. source Vin (~) in series with C and R; Vout is taken across R. Right circuit, for comparison: voltage divider with R1 and R2 in series across the supply; Vout is taken across R2.]
Again, the easiest way to explain how the above circuit behaves as a high pass filter is to
compare it with the voltage divider (on the right).
In the circuit on the right, the supply voltage is shared between the two resistors. In the high
pass filter, on the left, the voltage is divided between the capacitor and the resistor.
Again, remember that the reactance of the capacitor is given by: $X_C = \frac{1}{\omega C}$

From the above equation, at low frequencies XC will be very large. So at low frequencies we have a voltage divider with a very large resistance in the R1 position. This means that the output voltage across the resistor at low frequencies will be close to zero.
At high frequencies XC will be very small. So at high frequencies we have a voltage divider
with a very low resistance in the R1 position. This means that nearly all the supply voltage
will be across the resistor at high frequencies.
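By the same voltage-divider argument, the high-pass gain is R/√(R² + XC²). A minimal sketch, with the same illustrative 1 kΩ and 1 nF values as for the low-pass case:

```python
from math import pi, sqrt

R, C = 1000.0, 1e-9  # illustrative values

def gain_hp(f):
    """High-pass gain Vout/Vin at frequency f (Hz): output taken across R."""
    Xc = 1 / (2 * pi * f * C)
    return R / sqrt(R**2 + Xc**2)

print(gain_hp(100.0))  # close to 0 at low frequency
print(gain_hp(1e8))    # close to 1 at high frequency
```

Note that this is exactly the complement of the low-pass case: the squares of the two gains sum to 1 at every frequency.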
If we were to draw a graph of Vout/Vin against frequency we would now get:
[Graph: high pass filter output. Gain Vout/Vin (scale 0 to 1.0) against frequency/Hz on a logarithmic scale; the gain rises from near zero at low frequency towards 1 at high frequency.]
Note:
Filters are usually drawn in the following manner:
[Circuit: VIN at the left and VOUT at the right, with a common 0 V rail along the bottom; the capacitor C connects the output line down to the 0 V rail.]
This makes it easier to draw higher order filters (i.e. one filter feeding into another to provide
more filtering). This notation has not been used here so that students can compare the circuit
more easily with a potential divider. However, the above notation may well be used in an
examination.
Unit PH5 Option B Revolutions in Physics
OPTION B: REVOLUTIONS IN PHYSICS
ELECTROMAGNETISM AND SPACE-TIME
1.
When a History of Physics option was proposed, two periods of revolutionary change immediately
suggested themselves for study: the century of Kepler, Galileo and Newton, and the century of Young,
Faraday and Maxwell. Rather more interest was expressed in the second of these, and only the
Electromagnetism and Space-time revolution will be examined in 2010, 2011 and 2012. After the new
A-Level has been running for a year or two, teachers will be consulted on whether or not a change
should be made for examinations in 2013 and beyond.
2.
Content
One of the most exciting things in Physics is to discover relationships between phenomena which are
seemingly very different in nature. What happened in electromagnetism in the nineteenth century is a
wonderful example. In the year 1800 there were only the vaguest indications that magnetism had
anything to do with moving electric charges, and no evidence at all that light had anything to do with
electricity or magnetism. By 1900 magnetism and electricity had been firmly linked, and light had
been shown to be an electromagnetic wave.
The seemingly obvious need for electromagnetic waves to have a propagation medium (the ether)
created problems. These were resolved in a very radical way by Einstein's Special Theory of Relativity.
The structure of the course is shown in a little more detail in the diagram below. The first main block
deals with events leading to the acceptance of the wave theory of light, starting with a careful look at Thomas Young's description of his two slits experiment. Electromagnetism is the subject of the next main block, starting with Ørsted's discovery of the magnetic effect of a current, and considering at some length the subsequent work of Ampère and Faraday.
Maxwell arrived at the conclusion that light was an electromagnetic wave using what we would now
call a mechanical model of electric and magnetic fields. How he made the synthesis is looked at in
some detail, as are the beautifully simple confirmatory experiments of Hertz. The Michelson-Morley
experiment is then outlined, as are responses to its failure to yield the expected evidence for the ether.
Finally there is a small taste of Special Relativity theory (a simple treatment of time dilation).
A brief survey of light, electricity and magnetism before 1800 (NEWTON, HUYGENS, GILBERT, GALVANI, VOLTA) is followed by a more detailed study of:
YOUNG, FRESNEL, ØRSTED, AMPÈRE, FARADAY, then the synthesis: MAXWELL, followed by HERTZ, MICHELSON and EINSTEIN.
3.
Serving Suggestions
All the material to be tested in the examination is contained in the 34 sides of WJEC notes, which are
available in electronic form from the Physics section of the WJEC website <link to be inserted> or as
hard copy from the WJEC subject officer.
The notes contain many self-test questions and could be used by a student for self-study. They are also
peppered with links to websites which help to bring the basic material of the option alive and make it
easier to learn. The sites often contain pictures and diagrams.
The first half of the course (Young, Ørsted, Ampère, Faraday) deals largely with concepts in light and
electromagnetism which are key parts of the non-optional A-level specification, but comes at them
from a different angle, adding human interest, and (obviously) a historical perspective. The result
should be reinforcement. A possible teaching strategy is to integrate the material of the first part of
this option with the normal teaching of the relevant topics. The second half of the material might lend
itself to self-study with lessons on specific topics, such as time dilation.
4.
The extracts contained in the WJEC notes are short but they do give the student something
approaching direct contact with great physicists of the past. They are supported by explanatory notes
and self-test questions to help with understanding. In the examination, part of the Option B question
might present the student with a snippet from one of the extracts and ask him or her to explain certain
points, or to put the extract in its historical context.
Those who associate studying history with the enforced learning of dates need not have too many
fears about this option. Placing discoveries in the right half decade will suffice.
5.
Books
Two thinnish and very readable books which provide good support are
Michael Faraday and the Royal Institution: by John Meurig Thomas (ISBN 0-7503-0145-7).
Relativity and its Roots: by Banesh Hoffmann (ISBN 0-486-40676-8).
Chapter 4 tells pretty much the same story as this course, but, as the book's title makes clear, Hoffmann has a special agenda, and his emphases are different.
Examination questions are restricted, however, to the material in the WJEC notes (though the student
will be assumed to have tackled the embedded self-test questions which occasionally call for him or
her to find facts elsewhere).
Table 1. Selected book references against specification references.
[The original table maps specification references (a) to (q), including the asterisked statements (g*, h*, i*, k*, m*, o* and p*), to page and section references in three texts: Duncan, Advanced Physics for You and Muncaster. The row-by-row layout of the table is not recoverable in this extract.]
Detailed guidance
Specification references (a) (f), (j). (l), (n), (q) and (r) are treated in sufficient detail in
standard A level text books to obviate the need for guidance in this document.
Elastic and Plastic Strain
The process of deformation of ductile materials, including the movement of edge
dislocations, is treated at the molecular level in the WJEC document:
The plastic behaviour of ductile metals.
SUPERALLOYS
Statement (i) draws upon statements (h) and (j). Candidates should have an understanding of
the effects of dislocations at the molecular level, and the strengthening and stiffening of
materials by the introduction of dislocation barriers such as foreign atoms, other dislocations
and grain boundaries (specification statement (h)). Candidates should also be able to describe
failure mechanisms in ductile materials and have an understanding of creep and fatigue
(specification statement (j)).
Introduction
Aircraft jet engines are required to operate within extreme conditions of temperature and
pressure. Jet engine turbine blades rotate at a typical speed of 10,000 rpm for long periods in an environment of combustion products at working temperatures of 1250 °C (though the inlet temperatures of high performance engines can exceed 1650 °C); non-aviation gas turbines operate at approximately 1500 °C. The blades must be able to withstand impact and erosion
from debris drawn in with the air stream. In addition, different parts of the blade may be at
different temperatures and they will be subjected to large and rapid temperature changes
when the engine is started up and turned off.
The following is a list of the properties required of the material from which the blades are
made:
Creep Resistance
Centripetal forces acting on the blade at high rotational speeds provide a considerable
load along the turbine blade axis. Over prolonged periods of time this can cause creep. It
becomes increasingly pronounced as temperature increases. Creep could cause a turbine
blade to deform sufficiently that it might touch the engine casing.
Corrosion Resistance
Iron corrodes to form rust. At high temperatures, the presence of carbon dioxide, water
vapour and other products of the combustion of fuel constitute a highly corrosive
environment.
Toughness
The blades must resist impact with debris passing through the engine. In addition,
stresses generated by expansion and contraction, between different parts of the blade at
different temperatures, must not give rise to cracking.
18
Metallurgical Stability
The mechanical properties of metals can be modified by heat treatment. Blade materials
must be resistant to such changes and the microstructure must remain stable at high
temperatures.
Density
The density must be low to keep engine weight as low as possible.
Work hardening.
This is a process which makes a metal stronger. The metal is worked or deformed (by
hammering or repeated bending) when cold to make it stronger and harder. The effect of
working the metal is to increase the number of dislocations, so increasing its strength. The
effects of work-hardening can be felt by bending the wire of a steel coat hanger
backwards and forwards until it snaps.
Quench hardening.
Suggested experiment: Heat one end of a 20 cm (approx.) length of steel wire (held with tongs) in a Bunsen flame until it becomes cherry red in colour (about 800 °C). Then plunge the hot end of the wire quickly into cold water. When the rest of the wire has cooled, try to bend the quenched end. What do you notice?
Rapid cooling freezes a particular grain structure into the metal. The higher the
quenching temperature the smaller the grains and the harder and more brittle the resulting
metal.
Annealing.
Suggested experiment: Use the same sample of wire as above and heat the other end until it is red hot, and keep it at red heat for about 15 seconds. Withdraw it from the heat very slowly so that it cools gradually. When cool, try to bend the annealed end. What do you notice? This experiment can be carried out using a length of copper instead of steel.
Slow cooling allows grains to grow larger, making the metal softer, more easily bent,
hammered or scratched.
[Graph: strength of glass fibres plotted against diameter/μm; the strength increases sharply as the diameter decreases.]
The glass fractures by a process known as brittle fracture. This is accelerated by the
presence of surface imperfections or cracks.
This is shown in the diagrams on the next page. The stress becomes concentrated around the
tips of a crack. Bonds near the crack will break, increasing the load on neighbouring bonds
which are still intact, causing them to break, and the crack propagates rapidly [at approximately the speed of sound in the glass].
Stress lines and stress concentrations can be photographed by making specimens out of
Perspex and stressing them between crossed polaroids. The pictures on the next page show
this. The picture on the left shows stress lines in a uniform bar which is stressed. The bar on
the right has a small crack half way up its left side, resulting in a high concentration of stress
around the tip.
This diagram represents the atoms and bonds around the tip of a crack:
The force in bond A will be large since it has to balance the
forces exerted on molecules X and Y from above and
below.
The top two lines are incomplete because of the crack, so
that the stress they carry is transferred to the line of atoms
below. The bond A at the bottom of the crack is therefore
carrying a much higher stress than the rest of the bonds.
The stress can exceed the breaking stress of the material in this region only, causing the bond to break, increasing the
size of the crack and also the stress concentration. The
crack will therefore propagate quickly through the material
causing it to fracture.
In the case of the glass fibres, surface cracks are caused
among other things by differential cooling at the surface and in the centre. The narrower the
thread, the more uniform the temperature and so the less significant are any cracks that form.
This makes the small diameter rods much stronger. The very narrowest glass threads [~ 1 μm]
approach the theoretical strength predicted by Griffith. For very narrow threads, inducing
cracks by simply touching the surface brings their tensile strength back to that of the bulk
glass.
This property of brittle materials is exploited by glaziers when cutting a piece of glass to size, by putting a scratch in it and then snapping it; similarly with tiles.
Since amorphous solids break by brittle fracture, they will be weak under tension, but under
compression they will be very strong as the stress will cause the cracks to close preventing
propagation. When amorphous solids such as brick are used for building, the structures
produced are strong provided the material is kept under compression.
[Graph: stress against strain for rubber; point A marks the change from the low-modulus region to the much stiffer region.]
Between points O and A, the deformation is elastic and from the slope of the graph it can be seen that, after an initial stiff region, Young's modulus is small. At stresses greater than A, the deformation is still elastic, but the value of Young's modulus is much greater. If sufficiently stressed, the material breaks.
Initially, as the rubber is deformed, no bonds are extended; the long chain molecules are
straightened against their thermal motion (which tends to increase the amount of folding in
the molecular chains). The van der Waals bonds are responsible for the initial stiff region,
but once they are overcome, the rubber molecules unfold and the material can extend by
several times its original length. Because bonds are not being broken here, the additional
stress needed to do this is small. The structure of the material changes as shown in the
following diagram.
[Diagram 2. Stretched rubber: molecule 1 shown as a solid line, molecule 2 as a dashed line.]
At point A, the sections of the molecules which are free to unwind are more or less straight; therefore, if any further extension of the material takes place, bonds are stretched. This is far more difficult to do than straightening the molecules, therefore the value of Young's modulus increases at this point.
When the stress is removed the thermal motion of the chain molecules makes the polymer
return to its original dimensions. The value of Young's modulus for such a polymer increases
with temperature, the opposite to the variation in crystalline and amorphous solids. This is
due to the fact that the chain molecules have to be straightened against their thermal motion.
As the temperature increases, the thermal motion increases the amount of folding, so that the
average end-to-end distance in an individual molecule decreases, with the result that
straightening the molecules becomes more difficult.
These polymers also show elastic hysteresis i.e.
the stress-strain curves for the loading and
unloading do not coincide. This is shown in this graph:
[Graph: stress against strain, with separate loading and unloading curves enclosing a hysteresis loop.]
This closed curve is called a hysteresis loop; its area is the energy per unit volume converted
into internal energy [or, lost as heat in common parlance] during one cycle. Thus when a
polymer is repeatedly stressed, its temperature increases. Rubber with a hysteresis loop of
small area is said to have resilience. This is an important property where the rubber
undergoes continual compression and relaxation as does a car tyre when it touches the road
as it rolls on. If the rubber used did not have a high resilience, there would be appreciable
loss of energy, resulting in increased petrol consumption and lower maximum speed. Heat build-up could even lead to tyre disintegration.
Not for examination: Chemical engineers alter the properties of natural rubber by the process of
vulcanization, in which strong covalent bonds are deliberately introduced between the long
molecules. This has the effect of making the rubber stiffer and increasing its resilience. This very
stiff form of rubber is useful for applications which involve repeated deformation, e.g. car tyres.
Polymers exhibit a property called creep during which the chain segments slowly disentangle
under a constant stress as a consequence of the thermal motion of the chain segments.
On the release of stress, thermal motion restores the mixing, but slowly, since the segments
get in each others way during the shuttling process.
[Diagram (ii): folded polymer chains forming a lamella, with opposing forces applied at each end.]
These lamellae, as they are called, (from the word laminate) form small, spherical grains
called spherulites. [Note: The crystalline regions of a thin film of high density polythene
can be observed using polarised light and a microscope]. Between these crystalline grains,
the polymer molecules are tangled together in an amorphous state, where little long range
order exists. The graph below shows how the stress varies with the strain for a strip of
high density polythene when it is stretched.
Between O and A, the strip can regain its original length as its behaviour is elastic. The parallel parts within the lamellar crystals are held together by weak van der Waals bonds and at low strains these bonds resist the applied stress.
From A to B a neck forms in the strip as the tangled molecules in the amorphous regions
start to align with each other.
From B to C the width of the neck remains unchanged as the strip is extended. In this region the van der Waals forces between the lamellae are overcome and they begin to unfold and become parallel to each other.
The strip is now said to be cold-drawn and is very strong along the axis of the applied
stress. Beyond C, the stress is resisted by the strong covalent bonds between the carbon
atoms within each polymer molecule.
Investigating force-extension curves for rubber and polythene.
In these experiments you will investigate how rubber and polythene behave under
tension. It is not intended that you should obtain accurate values for the mechanical
properties of these materials, but basic quantities such as the elongation at fracture and the
breaking strength may be determined from the force-extension or stress-strain curves.
(a) (i) Rubber Band (cross-linked polymer).
(1) Hang a (cut) rubber band of (approximate) cross-section 1 mm by 2 mm vertically from a stand, boss and clamp (Hoffmann clips are useful here to suspend the rubber band). The base of the stand should be secured using a G-clamp. Attach another Hoffmann clip to the other end of the rubber band and use this to hang a 50 gram mass holder from it. Place a metre rule as close as possible to the mass holder. The extension may be read using an optical pin attached to the base of the mass holder.
(2) Measure the length, width and thickness of the rubber when it is supporting the 50
gram holder. Try to avoid squashing the rubber with the micrometer screw gauge.
(3) Increase the mass in 50 gram steps. Depending on the thickness of the rubber, you
may need to change the smaller masses for a single 0.5 kg mass in order to exceed the
elastic limit. Continue to add 50 gram masses until the rubber band breaks.
(4) Plot the force-extension curve and determine the Young modulus from the linear section.
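As a sketch of the analysis in step (4): the Young modulus follows from the slope of the linear section via E = (slope) × L₀/A. All readings below are invented for illustration, not measured values:

```python
# Hypothetical readings from the rubber-band experiment.
L0 = 0.10             # original length between the clips, m (assumed)
A = 1e-3 * 2e-3       # cross-section 1 mm x 2 mm, m^2
g = 9.81              # gravitational field strength, N/kg

masses = [0.05, 0.10, 0.15, 0.20]      # kg, 50 gram steps (assumed linear region)
extensions = [0.02, 0.04, 0.06, 0.08]  # m, illustrative readings

# Slope of the force-extension line (taken from the end points here;
# in practice a best-fit line through all the points is better):
slope = (masses[-1] - masses[0]) * g / (extensions[-1] - extensions[0])  # N/m

E = slope * L0 / A    # Young modulus, Pa
print(E)              # about 1.2e6 Pa, a plausible order of magnitude for rubber
```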
(ii) Natural Rubber.
A similar experiment can be carried out with a natural rubber strip (linear polymer chains with little or no cross-linking), cut so its width is about 5 mm. Follow the procedure as for the rubber band, increasing the mass in 50 gram steps up to 500 gram. You will probably
need to cater for an extension greater than 1.0 m. From the force-extension curve you
should be able to identify two regions which are approximately linear. If this is possible,
calculate two values of E for natural rubber.
(b) (i) Low density Polythene.
(1) Cut a strip of polythene about 1 metre in length from a thin polythene bag. You may need to fold the polythene several times in order to get a measurable thickness using a micrometer screw gauge. You can then calculate the thickness of a single sheet.
(2) Increase the mass, initially in 50 gram steps, measuring the extension for each mass
added. The polythene, at some stage, will increase in length without any further
increase in load (A to C in above graph). If you plot a stress-strain graph as you go
along it is possible to find an accurate value for this increase in extension. Using 5
gram and 10 gram masses will also help you identify the point where this region
begins. When this region ends (beyond C), larger loads will be needed to produce any
further extension.
(ii) High density polythene.
Cut a strip of high density polythene from the rings used to hold cans of drink
together (a good excuse here for buying beer!!). Mark the strip at two points along its
length and, using the above procedure, slowly stretch it as much as possible without
breaking it. After the neck forms, observe how it lengthens without becoming
narrower. The strip is now cold-drawn and you can measure the sample's breaking stress.
The behaviour of polythene when it is stressed.
At the point B, the chains are straight and any additional strain is due once more to the
stretching of bonds, thus the behaviour shown beyond this point is once more elastic. At C, the
material breaks.
[Diagram: X-ray tube. Labels: very high voltage supply; copper/tungsten block (anode); heater current supply; heater filament; electron beam; focussing anode; X-rays; cooling fins.]
The heater boils off electrons by thermionic emission. These are then accelerated to very
high velocities by the p.d. between the heater filament and anode. They are collimated by the
focussing anode. The tube is evacuated so the electrons travel in straight lines and collide
with a tungsten target (the anode) embedded in a copper block. The resulting deceleration
produces an enormous amount of heat (up to 99% of the energy input) and also X-rays, which
emerge from a window in the lead housing.
A continuous spectrum, then, can be obtained by electrons decelerating rapidly in the target
and transferring their energy to single photons. This radiation is known as Bremsstrahlung
or braking radiation. Superimposed on the continuous spectrum are several sharp lines. These
result from the bombarding electrons knocking out orbital electrons from the innermost shells
of the target atoms. Electrons from outer shells will then make transitions to fill the gaps in
the inner shells, emitting photons whose energies are characteristic of the target atom.
Transitions into the K shell give rise to K lines, the L shell L lines and so on. For heavy metal
targets the resulting photons are in the X-ray range. A typical intensity spectrum would be:
[Graph: relative intensity (scale 0 to 1.0) against wavelength/nm (0.05 to 0.15), showing the continuous spectrum with its cut-off at λmin and sharp characteristic lines superimposed.]
The intensity of X-rays is defined as the energy per second per unit area passing through a
surface. This can be increased by increasing the voltage of the X-ray tube, or by increasing
the current supplied to the filament. The photon energy is also determined by the tube voltage
with the maximum photon energy being given by:

$$E_{\max} = \frac{hc}{\lambda_{\min}} = eV,$$

where V is the tube voltage. The optimum photon energy for radiography is around 30 keV, which is obtained using a peak tube voltage of 80–100 kV.
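A quick numerical check of this relationship, using the physical constants to three significant figures:

```python
# Minimum X-ray wavelength from E_max = hc/lambda_min = eV.
h = 6.63e-34   # Planck constant, J s
c = 3.00e8     # speed of light, m/s
e = 1.60e-19   # electronic charge, C

def lambda_min(V):
    """Minimum X-ray wavelength (m) for tube voltage V (volts)."""
    return h * c / (e * V)

print(lambda_min(100e3))   # about 1.24e-11 m (0.0124 nm) at 100 kV
```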
A narrow beam of X-rays is preferred as this reduces scattering and so leads to a sharper
image. Blurring can also occur due to scattered radiation. This can be reduced by introducing
a grid directly in front of the detector. This grid consists of a large number of lead strips so
that only primary or direct radiation will get through to the film.
[Diagram: the X-ray beam passes through the patient; scattered radiation is absorbed by the grid of lead strips, while transmitted radiation reaches the photographic film.]
When X-rays pass through matter they are absorbed and scattered and therefore the beam is
attenuated. This attenuation can be calculated using the equation:
$$I = I_0 e^{-\mu x}$$

where I = intensity at a depth x, I₀ = intensity at the surface and μ = the attenuation coefficient.
Note that the half-value thickness, $x_{1/2}$, can be calculated using

$$x_{1/2} = \frac{\ln 2}{\mu}$$
This can be derived in the same way as the half life equation in radioactivity.
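The half-value relationship can be verified numerically; the attenuation coefficient below is an arbitrary illustrative value, not a value for any particular tissue:

```python
from math import exp, log

mu = 0.5       # attenuation coefficient, per cm (illustrative)
I0 = 100.0     # surface intensity, arbitrary units

x_half = log(2) / mu          # half-value thickness
I = I0 * exp(-mu * x_half)    # intensity after one half-value thickness

print(x_half)  # about 1.39 cm for this mu
print(I)       # 50.0: half the surface intensity, as expected
```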
The main absorption mechanism of X-rays in the body is the photoelectric effect. The X-ray
photon is absorbed by an electron which then leaves the atom. This is more efficient for
atoms with larger numbers of electrons i.e. higher atomic numbers. Consequently denser
materials such as bone will absorb more X-ray photons than less dense areas such as soft
tissue. This will lead to a large contrast between bones and soft tissue and therefore a sharp
image. If there is not a great contrast between the areas of the body being studied then sometimes a contrast medium is used, e.g. a barium meal when studying the stomach or intestines.
Computed tomography (CT or CAT scan) also uses X-rays, but in this case the X-ray tube
moves in a circle around the patient taking images of the body at all different angles. A
computer combines these images to produce a cross sectional image of the body. By adding
these slices together a 3-D image can be produced.
CT scans are very quick to produce and show a wide range of different tissues clearly. They do
however subject a patient to a high dose of radiation and the machines are very expensive.
Ultrasound
Ultrasound can be generated using piezoelectric crystals. If you apply an alternating p.d.
across the crystal you cause it to become deformed, with the crystal vibrating at the same
frequency as the applied p.d. This can be used to generate ultrasound. The process also works
in reverse, with the crystal receiving ultrasound and converting it to an alternating p.d. The
crystal, then, can act as both an emitter and a receiver of ultrasound.
Ultrasound can be used in diagnosis in two different ways:
1. A-scans, or amplitude scans, in which a short pulse of ultrasound is sent into the body and a detector (usually connected to a C.R.O.) scans for reflected pulses. Using the time base, the time the echo takes to return can be found and the distance between structures in the body can be calculated. A-scans are usually used when the anatomy of the section is well known but a precise depth is needed, e.g. a delay in measuring a known position in the brain could indicate the presence of a tumour.
2. B-scans, or brightness scans, in which the reflected pulse is displayed by the brightness of the spot. If an array of transducers is used, a 2-D image can be built up. This is widely used to assess the health and growth of a foetus.
Doppler ultrasound compares the frequency of the pulse reflected from moving blood with that of the transmitted pulse. For blood moving at speed v at an angle θ to the beam, the frequency shift is

Δf = 2fv cos θ / c

where c = the speed of the ultrasound wave. Note the 2 in the equation: the wave is shifted once on reception by the moving blood and again on reflection back to the transducer.
This technique will show up any changes in the blood flow through a vein or artery and so
can be used to detect clots or thrombosis.
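The size of a typical Doppler shift can be estimated with a short sketch. The probe frequency, blood speed and beam angle below are illustrative values, not taken from the text; 1540 m/s is the usual figure for the speed of ultrasound in soft tissue.

```python
import math

def doppler_shift(f, v, theta_deg, c=1540.0):
    """Frequency shift (Hz) for blood moving at speed v (m/s) at angle
    theta to a beam of frequency f (Hz); c is the speed of ultrasound
    in soft tissue. The factor 2 arises because the wave is shifted
    twice: on reception by the blood and again on reflection."""
    return 2 * f * v * math.cos(math.radians(theta_deg)) / c

# Illustrative: 5 MHz probe, blood at 0.3 m/s, beam at 60 degrees to the flow.
print(doppler_shift(5e6, 0.3, 60.0))  # ~974 Hz, easily measurable
```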
Magnetic Resonance Imaging [MRI]
Nucleons (protons and neutrons) possess spin, which makes them behave like small magnets.
Usually these will cancel each other out. However a hydrogen nucleus only has one proton
and it is this nucleus that is studied using an MRI scan. Under normal conditions the
hydrogen nuclei will be randomly arranged and cancel each other out. However if a strong
magnetic field is applied they tend to align themselves, in almost equal numbers, either with
the field lines (in parallel) or exactly opposite to the field (antiparallel).
The nuclei are in continuous motion, due to thermal energies, and will all wobble or precess around the field lines at the same frequency (called the Larmor frequency).
If radio waves are directed at the hydrogen nuclei at the same frequency as they are precessing (the Larmor frequency), they will resonate and flip from one alignment to the other, so producing a changing magnetic field. When the radio waves are switched off the nuclei revert to their original state, giving off electromagnetic radiation. It is this signal that is detected by the scanner. The time taken for the nuclei to switch back is called the relaxation time, and
depends on what tissue type the nuclei are in. By measuring the various properties of the MRI
signal along with the relaxation time a detailed image of a cross section of the body can be
built up.
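The Larmor frequency scales with the applied field, which shows why radio waves are the right part of the spectrum. A minimal sketch, using the standard value of about 42.6 MHz per tesla for protons (a figure not given in the text):

```python
# Proton precession (Larmor) frequency: f = gamma * B / (2*pi),
# where gamma/(2*pi) is about 42.58 MHz per tesla for hydrogen nuclei.
GAMMA_OVER_2PI = 42.58e6  # Hz per tesla, for protons

def larmor_frequency(B):
    """Proton precession frequency (Hz) in a field of B tesla."""
    return GAMMA_OVER_2PI * B

# A typical 1.5 T clinical scanner:
print(larmor_frequency(1.5) / 1e6)  # ~63.9 MHz, in the radio-wave range
```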
Magnetic resonance imaging is particularly good in obtaining high quality images of soft
tissue such as the brain, but is not as good for harder objects such as bone.
Some of the advantages and disadvantages of X-rays, ultrasound and MRI for examining
internal structures can be summarised by the following table:
Technique    Advantages                                        Disadvantages
X-rays       Quick and cheap; good contrast for bone           Ionising radiation; poor soft-tissue contrast
Ultrasound   Non-ionising and safe; gives real-time images     Lower resolution; cannot image through bone or air
MRI          Excellent soft-tissue detail; non-ionising        Very expensive; long scan times; unsuitable for patients with metal implants
Electrocardiograph (ECG)
The heart is a large muscle which acts like a double pump. The left hand side receives
oxygenated blood from the lungs and pumps it to the rest of the body. The right hand side
pumps the blood returning from the body, at low pressure, to the lungs.
The heart typically beats at between 60 and 100 times a minute. Each beat is triggered by a
pulse starting in the upper right region by a cluster of cells called the sinoatrial node. This
signal spreads through the atria causing them to contract, forcing blood into the ventricles. A
short time later the electrical pulse reaches the ventricles causing them to contract forcing
blood out of the heart. There is a one way valve between the atrium and the ventricle to
ensure the blood flows the right way.
by a bank of photomultiplier tubes which build up an image of the levels of gamma radiation
being emitted from different parts of the tissue.
Unit PH5 Option E: Energy Matters
The aim of this option is to allow students to explore arguably the most pressing topic of our age in the context of underlying physical principles. After studying this option, students will be in a position to absorb and dissect information, often contradictory and misleading, which is presented in the popular media, and to make informed decisions. Energy already pervades much of the specification. The table below identifies these earlier energy-related topics and gives an overview in the centre box of how they are developed, extended and linked in this option.
[Overview diagram. Centre box (ENERGY): Renewables (tidal, hydroelectric, wind, storage); Solar energy; Greenhouse effect; Saving (thermal conductivity); Stocks; Hazards; Wider issues (economic, social, political). Linked earlier topics: PH1 Mechanics; PH1.5 Superconductivity; PH2.5 Black body radiation, Stefan & Wien laws; PH4 Thermal physics; PH5.2 B fields; PH5.5 Nuclear physics, covering Fission (enrichment, breeding) and Fusion, problems & prospects (JET, ITER).]
very strong B fields are required to deflect and therefore contain fast particles (see
PH5.2(m) for background);
fields of around 5 T require currents of around 7 MA which are only really achievable
with superconducting coils (see PH1.5 (k) to (p)).
The problems are immense and continuous energy through fusion is still a long way off, but huge efforts and investments are being made. The International Thermonuclear Experimental Reactor, ITER (www.iter.org), a joint venture by most of the great nations, including China, India, the USA, Russia and the EU, is under way in France and due to power up in 2016. Fusion, if it works, will provide all our energy needs. Deuterium is abundant in seawater [~1 in 10^4 hydrogen atoms is deuterium] and tritium can be obtained by neutron capture by lithium in the reactor itself. There are no toxic products.
Heat transfer processes: convection and conduction
These are familiar topics well covered in standard textbooks. Emphasis will be on thermal insulation and energy saving, but note that publications on these topics use thermal transmittance (the U value) rather than the thermal conductivity K. The U value of a slab of thickness d is given by
U = K/d.
This will be provided, if required, in any question.
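The U value feeds directly into the rate of heat loss through a slab, P = UAΔT. A minimal sketch; the material figures below (a brick wall) are illustrative values, not taken from the text, and the calculation covers conduction through the slab alone.

```python
def u_value(K, d):
    """Thermal transmittance U = K/d, in W m^-2 K^-1."""
    return K / d

def heat_loss_rate(U, area, temp_diff):
    """Rate of heat loss through a slab: P = U * A * delta_T, in watts."""
    return U * area * temp_diff

# Illustrative: a 15 m^2 brick wall, K = 0.6 W m^-1 K^-1, 0.3 m thick,
# with a 15 K temperature difference across it.
U = u_value(0.6, 0.3)           # U = 2 W m^-2 K^-1
print(heat_loss_rate(U, 15.0, 15.0))  # about 450 W
```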
Solar radiation as an energy source
A form of renewable energy quite different from those treated earlier is solar energy. The background physics has already been developed in PH2.5(a to d). Revision of this material is a good starting point, with emphasis on the laws of Wien and Stefan, the inverse square law and what is meant by a black body.
A key quantity is the Solar Constant: the total radiated power per square metre crossing a plane perpendicular to the earth-sun radius, measured just outside the earth's atmosphere. The value is not constant (despite the name), as the earth-sun distance varies over the year, but averages at 1.35 kW m^-2. We can estimate the rate at which solar energy arrives at the earth, ignoring clouds, atmospheric absorption etc., as of the order of 10^17 W and compare this with the rate at which energy is consumed throughout the world (of the order of 10^13 W). So there is abundant
solar energy; the problem is harnessing it effectively. There are two ways:
solar panels;
photovoltaic cells.
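The order-of-magnitude estimate above follows from the solar constant and the earth's cross-sectional area. A quick numerical check (the earth's radius, 6.37 x 10^6 m, is a standard figure not given in the text):

```python
import math

SOLAR_CONSTANT = 1.35e3   # W m^-2, average just outside the atmosphere
EARTH_RADIUS = 6.37e6     # m

# The earth intercepts sunlight over its cross-sectional disc, pi * R^2:
power_received = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
print(power_received)     # ~1.7e17 W, i.e. of order 10^17 W as in the text

WORLD_CONSUMPTION = 1e13  # W, order of magnitude
print(power_received / WORLD_CONSUMPTION)  # ~1.7e4 times world consumption
```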
In the solar panel, water is heated directly from sunlight. The panel contains a flat coil of pipe
connected to the domestic hot water cylinder and is placed, ideally, on a South-facing roof.
As the name suggests, the photovoltaic cell produces electrical energy from solar energy. At present, photovoltaic cells make very little contribution to our energy supply because of the high manufacturing cost. Most of the cells currently in use are made of very pure silicon which has to be doped and cut in a special way, all very expensive.
Not for examination: As semiconductor devices and band theory are not in the specification, this
rough outline may be helpful: the silicon cell consists of n- and p-doped regions forming a p-n
junction. Incident photons excite electrons into the conduction band creating electron-hole pairs
which migrate to form an electric current. Detailed knowledge will not be expected.
Much work is in hand to develop cheaper and more efficient cells using, for example, composite materials. The Energy Conversion Efficiency of a photovoltaic cell is defined by the usual efficiency equation, in this case written as:

Conversion Efficiency = (electrical power output / incident solar power) x 100%.

Values range from around 6% for the cheapest commercial cells to around 40% for the most expensive state-of-the-art cells. It follows that large areas of cell are required for even moderate power such as typical domestic consumption, but small cells of around 5 cm^2 are sufficient to power pocket calculators, which require less than 1 mW.
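The area argument can be made concrete. A minimal sketch; the 1000 W m^-2 insolation and the 1 kW domestic load are illustrative assumptions, while the 6% efficiency and 1 mW calculator figures come from the text.

```python
def cell_area(power_needed, efficiency, insolation=1000.0):
    """Cell area (m^2) needed to deliver power_needed (W) at a given
    conversion efficiency, for an assumed insolation in W m^-2."""
    return power_needed / (efficiency * insolation)

# A 6%-efficient cheap cell supplying a 1 kW domestic load:
print(cell_area(1000.0, 0.06))       # ~16.7 m^2 of cell
# A 1 mW pocket calculator at the same efficiency:
print(cell_area(1e-3, 0.06) * 1e4)   # ~0.17 cm^2, a tiny cell
```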
Carbon footprint
Candidates cannot be expected to remember statistics, but some key figures are worth bearing in mind. For example, about one fifth of UK electricity is from nuclear reactors and three quarters from fossil fuels (coal, oil and natural gas). Fossil fuels have two major drawbacks. First, the stocks are finite. Secondly, they produce carbon dioxide gas which is harmful if allowed to escape into the atmosphere. In a chemical reaction in which carbon is oxidised (burned), each carbon atom combines with two oxygen atoms from the atmosphere to form carbon dioxide, CO2. Straightforward calculation from the atomic masses (12 for carbon, 16 for oxygen, so 44 for CO2) shows that one kilogram of carbon produces 3.67 kg of carbon dioxide.
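The straightforward calculation from the atomic masses can be written out:

```python
# Mass of CO2 produced per kilogram of carbon burned, from atomic masses:
# C = 12, O = 16, so CO2 = 12 + 2*16 = 44.
M_C, M_O = 12.0, 16.0
M_CO2 = M_C + 2 * M_O

mass_co2_per_kg_carbon = M_CO2 / M_C
print(round(mass_co2_per_kg_carbon, 2))  # 3.67 kg of CO2 per kg of carbon
```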
Carbon dioxide is a greenhouse gas and its increased presence in the atmosphere leads to
global warming in the following way:
The solar radiation spectrum covers a range of wavelengths with maximum power at around 480 nm, which is at the blue end of the visible spectrum. This value is determined by the temperature of the sun's surface (Wien's displacement law: λmax ∝ T^-1).
The atmosphere is essentially transparent to this wavelength so the solar energy passes through and is absorbed at the earth's surface.
The earth in turn radiates thermal energy but, because the earth's surface temperature is much lower than that of the sun, this peaks at around 10 μm, which is in the far infrared region.
Carbon dioxide absorbs strongly at this wavelength, and re-emits in all directions including back to earth, leading to global warming. Other polyatomic molecules such as methane and nitrous oxide behave similarly but carbon dioxide is more abundant. This is how greenhouses heat up, hence the name; glass, like CO2, is transparent in the visible but absorbs in the IR.
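The two wavelength peaks quoted above both follow from Wien's displacement law. A quick check, using standard surface temperatures for the sun and the earth (roughly 5800 K and 288 K, figures not given in the text):

```python
WIEN_CONSTANT = 2.90e-3  # m K

def peak_wavelength(T):
    """Wavelength of maximum spectral power: lambda_max = W / T (Wien's law)."""
    return WIEN_CONSTANT / T

print(peak_wavelength(5800) * 1e9)  # ~500 nm: sun's surface, visible light
print(peak_wavelength(288) * 1e6)   # ~10 micrometres: earth's surface, far IR
```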
Experiments show that burning 1 kg of carbon produces about 13 kWh of energy, which works out at around 6 eV per carbon atom. It is interesting to compare this with the 200 MeV produced by the fission of one U-235 nucleus.
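The figure of roughly 6 eV per atom can be verified from 13 kWh per kilogram:

```python
# Check: 13 kWh per kg of carbon burned corresponds to ~6 eV per atom.
energy_per_kg = 13 * 3.6e6               # 13 kWh in joules
atoms_per_kg = (1000 / 12.0) * 6.02e23   # moles of carbon times Avogadro's number
energy_per_atom_J = energy_per_kg / atoms_per_kg
energy_per_atom_eV = energy_per_atom_J / 1.60e-19
print(round(energy_per_atom_eV, 1))      # 5.8, i.e. about 6 eV per carbon atom
```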
The consequences of increasing greenhouse gases need to be recognised: global warming,
polar icecap melting, weather changes, flooding, more hurricanes etc. Also important is the
decline of vegetation, particularly the rain forests, which remove CO 2 from the atmosphere
through organic growth. Important too is recognition that the world's population is increasing, as is the industrialisation (and hence energy requirements) of emerging nations; also to be noted are the measures to counter the ill effects (national and international reduction targets, the Kyoto protocol, carbon footprints, Environmental Impact ratings etc.).
Fuel cells
Interest is reviving in fuel cells as they offer the possibility of efficient and environmentally
friendly energy production, especially for transport, and could become a replacement for the
internal combustion engine when the oil runs out. Prototype cars powered by fuel cells already exist. The fuel cell is electrolysis in reverse. As electrolysis has long disappeared from physics specifications, a brief outline is necessary.
When an electric current is passed through water, ionization of the water molecules occurs
through collision with the charge carriers. Avoiding the detailed chemistry, the upshot of this
is that water is broken down into its constituent gases with hydrogen bubbles collecting at the
cathode and oxygen at the anode. The apparatus is simple: a dish of water with two electrodes
each with an inverted jar above it to collect the gases, and a current source. A drop of acid is
needed to make the water conducting and the anode is made of platinum to avoid oxidation.
So, in summary, electrical energy breaks water down into oxygen and hydrogen gases. Can
the reverse take place in which hydrogen is recombined with oxygen to provide electrical
energy? Fuel cells do just this. The process is complex but in crude outline the following is
what happens in the Polymer Electrolyte Fuel Cell (PEFC):
Hydrogen is supplied to the anode where the atoms are ionized by a catalyst.
The freed electrons travel to the cathode via an external circuit, forming a useable electric current.
The protons continue through the polymer electrolyte to the cathode where they recombine with the electrons and react with oxygen, which is fed directly into the cathode, to form water.
One great advantage is that there are no damaging products, particularly no carbon dioxide. Another great advantage is that there is no heating: useful energy is not being obtained from heat; this will be returned to later, after Heat Engines. Also the cell can be connected directly to electric motors on the drive wheels of cars, cutting out the heavy and inefficient engines (cylinders, reciprocating pistons etc.) of traditional cars. Hydrogen of course is a hazard and there are difficulties over its delivery and storage, but its supply will become plentiful if fusion succeeds (hydrogen and oxygen produced by electrolysis of water using electricity from turbines powered by fusion reactors), but this is a long way off. At present, the experimental fuel cell cars are extremely expensive and the electrolyte polymer degrades and has to be replaced within the lifetime of the car.
A useful website, albeit commercial is:
http://automobiles.honda.com/fcx-clarity/how-fcx-works/v-flow/
Thermodynamics: Carnot cycles, heat pumps and the 2nd Law of Thermodynamics
The groundwork for understanding the central problem of obtaining useful work from heat
has been developed already in PH4.3 Thermal Physics (i) to (d) and a good starting point is to
look again at these. One ideal heat engine consists of a cylinder with a piston and containing
ideal gas (the working substance). The engine is also ideal in the sense that the walls and
the piston are perfect heat insulators, the base is a perfect conductor and there are no friction
losses as the piston moves. The process involved is treated in many textbooks but is set out
here for convenience.
1. The engine is first placed on a heat reservoir at temperature T1; initially the state of the gas is shown by point A on the p-V diagram.
Heat Q1 passes from the reservoir into the gas as the state of the gas changes from A to B. Work is done as the gas expands (the piston goes up) but the internal energy does not change as the curve AB is an isotherm (temperature constant so internal energy constant). The changes along AB are given by the First Law of Thermodynamics (in the form ΔU = Q - W) and are shown in the first line of the table.
2. The engine is now transferred to a perfectly insulating stand and allowed to expand
further to state C; no heat is transferred but work is done (no need to mention
adiabatic); changes are given in line 2.
3. The engine is now transferred to a second reservoir at T2 and the gas is compressed isothermally to state D; changes are given on line 3, but note particularly that heat is ejected from the gas into the second reservoir, the sink.
4. Finally the cylinder is placed back on the insulating stand and compressed to return to
state A. The cycle is complete, and we can see from the bookkeeping table that an
amount of work (Q1-Q2) has been obtained from an amount of heat Q1.
[Diagram: the engine on the heat reservoir at T1 (step AB), on the insulating stand (steps BC and DA), and on the heat sink at T2 (step CD).]

Bookkeeping table (signs follow the First Law in the form ΔU = Q - W):

Step      Heat supplied to gas, Q    Work done by gas, W    Change in internal energy, ΔU
A to B    Q1                         Q1                     0
B to C    0                          U1 - U2                -(U1 - U2)
C to D    -Q2                        -Q2                    0
D to A    0                          -(U1 - U2)             U1 - U2
[Indicator diagram for a Carnot cycle: pressure p against volume V. The gas expands along the T1 isotherm from A to B (heat Q1 in), expands further from B to C with no heat transfer, is compressed along the T2 isotherm from C to D (heat Q2 out), and is compressed from D back to A with no heat transfer.]

[Schematic: the engine takes heat Q1 from the HOT reservoir at T1, delivers work W = Q1 - Q2, and ejects heat Q2 to the COLD reservoir at T2.]
The question arises, "Why does heat have to be ejected?" Or, equivalently, "Why does there have to be a sink?"
If the process terminated at B the engine would be of no further use. To obtain useful work the process must be continuous, which requires that the cycle be repeated over and over again for as long as is necessary. This means repeatedly returning the gas to its original state, something which can only be achieved by ejecting heat at one stage in each cycle.
The efficiency of the heat engine is defined by:

Efficiency = (Useful work output / Total energy input) x 100%.

So

Efficiency = (Q1 - Q2)/Q1 = 1 - Q2/Q1.    (1)
All that we know, or need to know, about the source and sink is that they have temperatures
of, respectively, T1 and T2. Clearly the amounts of heat transferred will be governed by these
temperatures. In fact, the ratio Q2/Q1 is used to define the ratio of Kelvin temperatures so
that
Q2/Q1 = T2/T1

and equation (1) above becomes

Efficiency = 1 - T2/T1.    (2)
So a Carnot engine working between 100 °C and 0 °C [373 K and 273 K] cannot have an efficiency of greater than 27%. In practice there will be other factors such as friction and heat loss which will make the actual efficiency even less. In the ideal engine described here there
are no such losses, and the cycle is therefore reversible in that if the operations described are
performed in the opposite (anticlockwise) sense the system is returned to precisely its initial
state. This would not happen if there were losses.
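The 27% figure quoted above follows directly from equation (2):

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum (Carnot) efficiency of a heat engine working between
    two Kelvin temperatures: 1 - T_cold / T_hot."""
    return 1 - T_cold / T_hot

# An engine between 100 degrees C and 0 degrees C (373 K and 273 K):
print(round(carnot_efficiency(373, 273), 2))  # 0.27, i.e. 27%
```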
All this shows a fundamental difference between heat and other forms of energy. While electrical energy can be transformed entirely into heat (I²R heating in a resistor), only a fraction of heat can be transformed into useful work. This has been shown here to be true for
the ideal Carnot engine; it is in fact generally true and is formulated in the Second Law of
Thermodynamics which will be looked at later.
Details of actual engines, to which the same principles apply, will not be expected. The
simplest example is probably the steam engine, but this is now of historical rather than
practical interest. The Otto cycle for the internal combustion engine is worth looking at as a
more complex example.
The running of the Carnot cycle in reverse has already been mentioned and this is the basis of the refrigerator and the heat pump. They operate on the same principle, that of extracting heat from a cold source and ejecting it at a hot sink, and the Carnot cycle runs anticlockwise around the indicator diagram. Work is required to achieve this and a schematic diagram for the process is given below. Note that the same diagram applies both to the refrigerator and the heat pump, but that we do not use the term efficiency to describe their effectiveness; efficiency is reserved for the heat engine. Instead, the figure of merit for these devices is the Coefficient of Performance. The definition of the CoP is essentially the same as that for efficiency:
Coefficient of Performance = (Useful energy transfer / Work input) x 100%.

The nature of the useful energy transfer differs, however, and the CoP is defined separately for each case below.
[Schematic: work W = Q1 - Q2 is supplied to the device, which extracts heat Q2 from the COLD source at T2 and delivers heat Q1 to the HOT sink at T1.]

For the ideal (Carnot) cycle:

CoP (refrigerator) = Q2/W = T2/(T1 - T2)
CoP (heat pump) = Q1/W = T1/(T1 - T2)
For refrigeration, the source is the inside of the fridge and the sink is the kitchen or, more accurately, the cooling grill at the back of the cabinet. In the case of the heat pump the source is the ground, preferably ground water or a river for better heat transfer, and the sink is the radiators inside the house to be heated. Practical details are not expected but the broad principles should be understood.
For example, the working substance or refrigerant might be a liquid which can be made to evaporate at the cold source, thus absorbing heat, and then to eject heat at the hot sink where the vapour condenses back to liquid. Work is done in circulating the refrigerant and bringing about the necessary phase changes.
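Inserting numbers into the two CoP formulae is instructive. A minimal sketch; the temperatures below (ground water and house radiators, fridge interior and kitchen) are illustrative values, not taken from the text.

```python
def cop_refrigerator(T_hot, T_cold):
    """Ideal coefficient of performance of a refrigerator: T2/(T1 - T2)."""
    return T_cold / (T_hot - T_cold)

def cop_heat_pump(T_hot, T_cold):
    """Ideal coefficient of performance of a heat pump: T1/(T1 - T2)."""
    return T_hot / (T_hot - T_cold)

# Illustrative heat pump: ground water at 280 K, house radiators at 320 K.
print(cop_heat_pump(320, 280))               # 8.0, eight joules delivered per joule of work
# A fridge interior at 275 K in a 295 K kitchen:
print(round(cop_refrigerator(295, 275), 2))  # 13.75
```

These are theoretical ceilings; real devices fall well short, as the text notes.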
The COPs are, like the Carnot efficiency, theoretical maxima. In practice, performance is much poorer owing to the usual losses, but it is still instructive to insert numbers for typical situations. Heat pumps seem a good proposition, but the capital expenditure is high and the losses great. They are used in major buildings (the Festival Hall, Buckingham Palace and, more recently, the Senedd at Cardiff) while many firms offer domestic appliances (try inserting "heat pump" into a search engine).
As already mentioned, the limitations on converting heat into work are a consequence of the Second Law of Thermodynamics, the Kelvin statement of which is: "No process is possible the only result of which is the total conversion of heat into work." To illustrate this, look again at the Carnot cycle and the step from A to B. This would seem at first to contradict the law, for all the heat absorbed in the step is converted into work. But this is not the only result because, in the process, the state of the gas (pressure and volume) has changed. The law is fundamental, far-reaching and one of the great cornerstones of science. For example, an essential stage in obtaining work from a nuclear or any other power plant is that of transforming heat to drive a turbine, and all the limitations of the Carnot cycle apply. Suppose steam enters the turbine at 500 °C and then condenses to water at 60 °C; the Carnot efficiency is 0.57 and the actual efficiency will be much less again. So, most of the energy is wasted. One way to recover some of this is to pump the hot water from the power station condenser around local housing in a massive central heating system. This is known as a Combined Heat and Power (CHP) scheme and was pioneered in Britain at the old Battersea power station.
The overall efficiency obtained in a CHP scheme is much greater than the Carnot efficiency because a large fraction of the waste heat Q2 now becomes useful, though there will be some losses between the power station and the houses. In practice, CHP generators are run with a higher temperature cold sink [~100 °C], thus reducing the efficiency of the electricity generation but allowing for more useful heat energy distribution. It is easy to understand now why electricity should not be used for heating: most of the original energy has already been wasted as heat. We can also now better understand a great advantage of fuel cells: no heat is involved, so there is no Carnot wastage in the cell itself, though heat may have been involved at an earlier stage in producing the hydrogen and oxygen.