

NUCLEAR MEDICINE
PRACTICAL PHYSICS,
ARTIFACTS, AND PITFALLS

Daniel A. Pryma, MD
Associate Professor of Radiology
Clinical Director of Nuclear Medicine & Molecular Imaging
University of Pennsylvania Perelman School of Medicine
Philadelphia, Pennsylvania

Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide.
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trademark of Oxford University Press
in the UK and certain other countries.
Published in the United States of America by
Oxford University Press
198 Madison Avenue, New York, NY 10016
© Oxford University Press 2014

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data


Pryma, Daniel A., author.
Nuclear medicine : practical physics, artifacts, and pitfalls / Daniel A. Pryma.
p. ; cm.
Includes index.
ISBN 978–0–19–991803–4 (alk. paper)
I. Title.
[DNLM:  1.  Nuclear Medicine––Case Reports.  2.  Nuclear Physics––methods––Case Reports.
3.  Radiation––Case Reports. WN 440]
R895
616.07′57—dc23
2014014141

This material is not intended to be, and should not be considered, a substitute for medical or other professional advice. Treatment
for the conditions described in this material is highly dependent on the individual circumstances. And, while this material is designed
to offer accurate information with respect to the subject matter covered and to be current as of the time it was written, research and
knowledge about medical and health issues is constantly evolving and dose schedules for medications are being revised continually,
with new side effects recognized and accounted for regularly. Readers must therefore always check the product information and
clinical procedures with the most up-to-date published product information and data sheets provided by the manufacturers and the
most recent codes of conduct and safety regulation. The publisher and the authors make no representations or warranties to readers,
express or implied, as to the accuracy or completeness of this material. Without limiting the foregoing, the publisher and the authors
make no representations or warranties as to the accuracy or efficacy of the drug dosages mentioned in the material. The authors and
the publisher do not accept, and expressly disclaim, any responsibility for any liability, loss or risk that may be claimed or incurred as
a consequence of the use and/or application of any of the contents of this material.

1 3 5 7 9 8 6 4 2
Printed in the United States of America
on acid-free paper
DEDICATION
This book is dedicated to Katie, Reilly, Marin, and Rowan for their constant love and support.
ACKNOWLEDGMENTS
Many thanks to Eleanor Mantel, CNMT, Joshua Scheuermann, MMP, Janet Reddin, PhD, and Joel
Karp, PhD, for providing me with several images and for serving as sounding boards and excellent
accuracy checkers. Also to David Mankoff, MD, PhD, for his insight, unwavering support, and valued
mentoring. Thank you as well to Andrea Seils and Rebecca Suzan for their patience and guidance.
More globally, I will be forever grateful to Harvey Mudd College Professors Helliwell and Chen who
first stoked my love of physics despite my primarily studying biology. That passion for physics bloomed
under the patient support of John Humm, PhD, Joseph O’Donaghue, PhD, and others in Medical
Physics and Nuclear Medicine at Memorial Sloan-Kettering Cancer Center. Without the teaching and
encouragement I have been lucky to receive, I would never have undertaken this book.
CONTENTS
Preface xi

1  INTRODUCTION TO NUCLEAR MEDICINE 1

2  RADIATION 5
X-rays 6
Nuclear nomenclature 8
Nuclear radiation 8
Electron capture 9
Beta emission 9
Positron emission 10
Alpha emission 10
Isomeric transition 11
Gamma radiation 11
Internal conversion 12
Auger electrons 12
Units of radioactivity 12

3  RADIOBIOLOGY 15
Units of radiation exposure 16
Deterministic effects 17
Stochastic effects 18
Radiation safety 21

4  RADIATION DETECTORS—IONIZATION DETECTORS 23


Ionization chambers 24
Dose calibrators 25
Survey meters 27
Proportional counters 30

5  RADIATION DETECTORS—SINGLE PHOTON 31


Collimators 32
Scintillators 37
Photomultiplier tubes 39
The gamma camera 41
Static planar imaging 42
Dynamic imaging 44
Gated imaging 46
Single photon emission computed tomography 48
SPECT/CT 52
Gamma probes and well counters 54

6  RADIATION DETECTION—PET 55
PET principles 56
PET acquisition and reconstruction 59
Time of flight 62
PET/CT 63
PET/MRI 65

7  IONIZATION CHAMBER/DOSE CALIBRATOR ARTIFACTS 67


Case 1. Altitude 69
Case 2. Geometry 70
Case 3. Materials 71

8  GAMMA CAMERA ARTIFACTS 73


Case 1. Cracked crystal 75
Case 2. Hygroscopic crystal 76
Case 3. PMT malfunction 78
Case 4. Flood nonuniformity 80

9  PLANAR ACQUISITION ARTIFACTS 83


Case 1. Off-peak acquisition 85
Case 2. Motion artifact 88
Case 3. Dose infiltration 91
Case 4. Collimator penetration 93

10  SPECT ACQUISITION ARTIFACTS 95


Case 1. Center of rotation error 97
Case 2. Filtered back projection streak 98
Case 3. Noisy images 101
Case 4. Iterative reconstruction errors 104
Case 5. Motion artifact 106

11  PET ACQUISITION ARTIFACTS 109


Case 1. PMT malfunction 111
Case 2. Crystal temperature instability 113
Case 3. Table misregistration 114
Case 4. Scatter correction errors 116
Case 5. Attenuation correction errors 118
Case 6. CT artifacts affecting PET reconstruction 121
12  DOSE CALIBRATOR PITFALLS 123


Case 1. Dose calibrator contamination 125
Case 2. Wrong setting used on dose calibrator 127
Case 3. High background activity 128

13  SINGLE PHOTON PITFALLS 129


Case 1. Prostheses 131
Case 2. Recent prior study 133
Case 3. Contamination 135
Case 4. Poor dynamic timing 137
Case 5. Background activity 140

14  PET PITFALLS 145


Case 1. Infiltration 147
Case 2. Treatment effect mimics new disease 150
Case 3. Misregistration and attenuation correction 152
Case 4. Respiratory motion artifact 154

15  THERAPY PITFALLS 157


Case 1. Empiric dosing exceeds safe limits 159
Case 2. Gastrointestinal toxicity 162
Case 3. Radioactive vomit 163
Case 4. Therapy infusion via indwelling catheter 165

16  PUZZLERS 167


Answers 178

Index 181
PREFACE
I love to teach nuclear medicine physics to our students, residents, and fellows. However, I have observed
that physics instills almost universal fear and angst like no other subject. Most students assume that
the physics behind nuclear medicine will be impossibly complex and beyond the abilities of all but PhD
physicists. Furthermore, they believe that only the physicists that design and build instrumentation
really need to understand the physics, and that a detailed understanding of physics is not necessary for
the practitioner. Nothing could be further from the truth.
A basic understanding of the physics behind nuclear medicine imaging and therapy is truly quite
approachable. Furthermore, once a person understands the physics behind the procedure he or she will
not only recognize artifacts and pitfalls but also quickly recognize the best way to mitigate the root
problems causing the artifacts.
I am not a physicist, but rather a nuclear medicine physician with a great fondness for physics.
Although my students truly want to learn nuclear medicine physics, they struggle to find a book that
has some middle ground between a cursory overview and a terrifying amount of detail. My goal in
writing this book is to dispel the fear of physics with simple explanations. Some subjects are covered
necessarily simplistically, but I endeavor to present a sufficient depth to understand how nuclear medi-
cine procedures work without getting overwhelmed with an unnecessary degree of detail.
1 INTRODUCTION TO NUCLEAR MEDICINE

In the grand scheme of medicine, the field of nuclear medicine is quite young, spanning little more
than a century (Figure 1.1). The pioneering work of Henri Becquerel and Marie Curie in identifying
and beginning to understand radioactive decay laid the framework on which nuclear medicine was
built. These seminal discoveries occurred in the late 1800s, and experimentation with radioactive materials in humans and animals soon followed.
The next major advancement in the birth of nuclear medicine was the invention of the cyclotron in
1930 by Ernest Orlando Lawrence. A cyclotron is a device that accelerates charged particles (often
protons or deuterons) and then allows them to bombard a target. The target can be solid, liquid,
or gas, but in any event, the target is a very pure sample of a specific isotope of a specific element.

1896: Henri Becquerel discovers “rays” from uranium
1897: Marie Curie names rays “radioactivity”
1913: Frederick Proescher publishes study on IV radium for treatment of various diseases
1924: Georg de Hevesy et al perform the first radiotracer studies in animals
1932: Ernest Lawrence and Stanley Livingston publish the article that forms the basis for the invention of the cyclotron
1936: John H. Lawrence uses P-32 to treat leukemia
1938: Emilio Segre and Glenn Seaborg discover Tc-99m
1946: Samuel Seidlin, Leo Marinelli, and Eleanor Oshry treat a thyroid cancer patient with I-131
1951: FDA approves sodium I-131 for thyroid patients, the first approved radiopharmaceutical
1958: Hal Anger invents the scintillation camera
1962: David Kuhl introduces emission reconstruction tomography
1970: W. Eckelman and P. Richards develop Tc-99m “instant kit” radiopharmaceuticals
1971: The AMA officially recognizes nuclear medicine as a medical specialty
1973: William Strauss introduces exercise myocardial perfusion imaging
1990: Steve Lamberts and Eric Krenning image endocrine tumors with somatostatin receptor-binding radiotracers
1998: FDG PET used to predict response to high dose chemotherapy
2000: Time Magazine recognizes the Siemens Biograph PET/CT as invention of the year
2008: First human PET/MRI device installed

Figure 1.1.  A timeline of nuclear medicine. AMA, American Medical Association; CT, computed tomography; FDA, Food and Drug
Administration; FDG, fluorodeoxyglucose; MRI, magnetic resonance imaging; PET, positron emission tomography. Adapted with permis-
sion from the Society of Nuclear Medicine and Molecular Imaging.

When the rapidly accelerated particles contact the atoms in the target, they can knock other par-
ticles from the nuclei of the atoms in the target, for example replacing a neutron with a proton, thus
changing the element of the atom. The target and incident particles can be chosen to create atoms of
a specific radioactive isotope in the target. Lawrence’s invention opened the door to greater quantities
of radioactive materials for experimentation and, for the first time, to the creation of new isotopes.
Just a few short years later, in the late 1930s, iodine-131 and technetium-99m were discovered and
continue to be widely used in clinical nuclear medicine today.
As World War II progressed, research in radioactivity, particularly in nuclear fission, accelerated.
When uranium-235 undergoes nuclear fission, a great deal of iodine-131 and molybdenum-99 is
produced. Molybdenum-99 decays to technetium-99m, which is a valuable diagnostic isotope. Thus,
one of the byproducts of the development of the atomic bomb as well as developments in the use
of controlled nuclear fission for power generation was the ever-increasing supply of iodine-131 and
molybdenum-99. This supply allowed the more widespread dissemination of medical isotopes and a
faster pace of research. Indeed, starting in the late 1940s a commercial distributor of radioisotopes
became available.
Availability of radioisotopes is only half the battle; a means to form an image from the radioactivity
emitted from the patient was also needed. Benedict Cassen invented the rectilinear scanner in 1950.
This scanner had a radiation detector that moved laterally along a line and detected the amount of
activity at each position. As the scanner moved through an area one line at a time, it was able to form
an image map of the distribution of radioactivity. However, because of the scan time involved, it was
limited to imaging only static processes.
Shortly thereafter, in 1956, Hal Anger released a revolutionary design: the Anger gamma camera.
This design used a scintillator (a material that emits visible light when radioactive photons interact
with it) coupled with multiple photomultiplier tubes. The Anger gamma camera was able to form an
image of an entire organ at once, opening the door to dynamic imaging that was not feasible with the
rectilinear scanner.
In 1962, David Kuhl opened another dimension in nuclear medicine imaging with the invention
of emission tomography. After acquiring images of an object from multiple projections around the
object, emission tomographic reconstruction creates a three-dimensional rendering of the object
that can be sliced and reviewed. Suddenly, the innermost structures of the body could be accurately
visualized!
The basic principles described by Anger and Kuhl more than 50  years ago continue to be used
today. Many important discoveries have built on their contributions in imaging and have been nicely
paired with novel radioactive compounds that can be used to image and treat a myriad of human
medical conditions. It is important to note that by their very nature these radioactive compounds are
continuously decaying. Therefore, they must be used within a relatively short time of synthesis. Thus,
the development of kit-based preparation of many of these compounds has expanded the availability
of clinical nuclear medicine imaging literally to the ends of the earth.
In the United States today, nuclear medicine as a clinical medical specialty is generally considered
a subspecialty of radiology. Most practitioners in nuclear medicine have either trained in nuclear
medicine alone or received nuclear medicine training as part of a radiology residency. Similarly, tech-
nologists in nuclear medicine may have received training specifically in nuclear medicine or may have
subspecialized after training in radiologic technology.

A wide range of diagnostic and therapeutic procedures is commonly performed in nuclear medicine
departments worldwide. The most common procedures include myocardial perfusion imaging at rest
and stress, bone scanning, dynamic renal imaging, cancer imaging with fluorodeoxyglucose positron
emission tomography/computed tomography, and thyroid cancer imaging and therapy. Still a young
and dynamic field, clinical nuclear medicine is constantly changing with new techniques and radio-
pharmaceuticals. As we begin to enter the era of precision medicine, nuclear medicine is poised to
play a critical role in understanding which treatments may work best in a given patient. The future
is bright!
2 RADIATION

Radiation, at its most basic level, is nothing more than moving energy. It can be in the form of elec-
tromagnetic waves, such as x-rays, or as particles ejected from an atom (therefore, subatomic particles
because they are a small piece of what is in the atom). Because radiation is energy, it can exert effects
on objects, both living and inanimate. Furthermore, because that energy is invisible to the naked eye
(although it is most certainly detectable; see Chapters  4–6), it is a cause of great fear (sometimes
extending into the irrational). In this chapter, we review the different forms of radiation and how
they are generated.
Before discussing the forms of radiation, it is important to define the unit of energy used to
describe radiation:  the electron volt (eV). An electron volt is defined as the amount of energy
gained by a single electron when passed through a one volt electric potential. Although that is not
an intuitive amount of energy, it forms the basis on which energies are reported in nuclear phys-
ics. If it sounds like a tiny amount of energy, it is—when converted into the SI unit of joules (J),
it amounts to only 1.6 × 10⁻¹⁹ J. Relevant forms of radioactive waves or particles have energies
in the thousands or millions of electron volts and so are designated as kiloelectron volts (keV) or
megaelectron volts (MeV), respectively.
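
For readers who want to see the arithmetic, here is a minimal Python sketch of the conversion between electron volts and joules (the example energies, 140 keV and 511 keV photons, are illustrative and are discussed later in this chapter):

```python
# Convert energies quoted in eV, keV, or MeV into joules.
EV_TO_JOULE = 1.602e-19  # energy gained by one electron across a 1 V potential

def to_joules(energy, unit="eV"):
    """Return the energy in joules for a value given in eV, keV, or MeV."""
    scale = {"eV": 1.0, "keV": 1e3, "MeV": 1e6}
    return energy * scale[unit] * EV_TO_JOULE

print(to_joules(140, "keV"))  # ~2.2e-14 J
print(to_joules(511, "keV"))  # ~8.2e-14 J
```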
As radiation energy moves through matter, it deposits its energy along the way. A critically impor-
tant aspect of the effect of radiation on tissues is the rate at which the energy is deposited. Next we
review many different forms of radiation, some with high energy and some with low energy, but the
energy itself is just one piece of the equation. The other critical piece of the puzzle is the distance
over which the radiation deposits its energy and how readily that energy forms ionizations. Thus, the
concentration of ionizations caused by the radiation is called the linear energy transfer (LET). Two
different radioactive emissions may both have an energy of 1 MeV, but one could deposit its energy
over many meters, resulting in very few ionizations per unit distance, producing only a small (or even
no) effect in any given piece of tissue; the other may deposit all of its energy in a single cell causing
numerous ionizations to devastating effect.

X-rays

The most common form of radiation used in medicine is the x-ray. X-rays are photons of light but
with much higher energy than visible light. Because energy and wavelength are inversely propor-
tional, x-rays have very short wavelengths, much shorter than visible light. Their high energy allows
x-rays to penetrate through many materials and to deposit some of their energy within the material.
One way to remember this is to think about going to the beach in the summer. It has become fairly
ingrained that the ultraviolet light in sunlight can induce changes in the skin, from tanning to severe
damage from burns to induction of mutation causing skin cancer (see Chapter 3). How does it do
that? If you remember the colors of the rainbow, violet is the last color, so it is the shortest wave-
length, highest energy color of visible light. Ultraviolet, as its name implies, is higher energy than
violet. It is no longer visible light and now has enough energy to penetrate through the epidermis and
deposit its energy in the skin cells below. X-rays are physically the same as visible or ultraviolet light; they are just one more step up in energy and so have greater ability to penetrate. Medically used
diagnostic x-rays have energies ranging from about 100 eV to about 150 keV.
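
The inverse relationship between photon energy and wavelength mentioned above (E = hc/λ) can be illustrated with a short sketch; the constant hc ≈ 1240 eV·nm and the example energies are illustrative, not values from the text:

```python
# Photon wavelength from energy via E = hc / wavelength.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def wavelength_nm(energy_ev):
    """Wavelength in nanometers for a photon of the given energy in eV."""
    return HC_EV_NM / energy_ev

print(wavelength_nm(3.1))   # violet light (~3.1 eV): about 400 nm
print(wavelength_nm(1e5))   # a 100 keV x-ray: about 0.012 nm
```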

Depending on the energy of the x-ray and the density and thickness of the material that the x-ray is
hitting, an individual ray could either pass through the material or be absorbed in the material. This
is the fundamental basis behind using x-rays for imaging. Imagine a chest x-ray: you see the ribs as
white lines; you see the lungs as largely black, but with subtle gray patches within them. Now pic-
ture the film: it started as a white piece of film, and then x-rays were shot at it, but with a patient in
between the x-ray source and the film. The lungs, which are mostly air and unlikely to stop an x-ray,
have a lot of x-rays pass through them and hit the film, so the film turns black in those areas when
developed. The ribs are much higher density and so the x-rays are more likely to be stopped by the
ribs and so the film stays white in those areas. Finally, within the lungs are vessels and membranes that
have intermediate density and so they stop an intermediate percentage of the incident x-rays and the
film turns gray in those areas. Now, finally, imagine the following three scenarios:
1. The x-rays incident on the patient are of such low energy that they all stop within the patient.
The film remains white and nothing is seen.
2. The x-rays incident on the patient are of such high energy that they all pass right through the
patient regardless of the density of the different areas of the body. The film is completely black
and no definition is seen.
3. The energy of the x-rays incident on the patient is tuned so that you are able to see a range of
grays over the tissues that you are trying to discern.
How are x-rays made? The first mode of production is from the shell of electrons surrounding
an atom’s nucleus. The closer an electron is to the nucleus the lower is its potential energy. So, if an
electron gets excited to an outer shell (one further from the nucleus), it has higher energy. If it then
is able to drop back down into an inner shell (one nearer the nucleus), its energy state is lower. Thus,
the difference in energy must be released and it is released in the form of an x-ray. To generate x-rays,
one needs to only excite electrons to a higher energy state shell and then when they drop back down
to their lower energy state x-rays are emitted. This can be done with an x-ray tube (Figure 2.1). In
an x-ray tube, electrons are accelerated across an electrical potential and are aimed at a metal target
(most often tungsten or a tungsten alloy). Those electrons deposit their energy in the tungsten and, in
doing so, excite some of the inner shell electrons in the tungsten atoms. When they drop back to their


Figure 2.1.  A water-cooled x-ray tube. Because there is a voltage (Ua) across the anode (A) and cathode (C), electrons flow toward the
anode. When they hit, they generate x-rays (X), both characteristic x-rays and bremsstrahlung. The cathode also includes a heating coil
that is under voltage (Uh). Furthermore, behind the anode is a cooling chamber filled with cold water flowing in (Win) and warm water flow-
ing out (Wout). Coolth, Water cooled x-ray tube. October 4, 2010. http://commons.wikimedia.org. Accessed October 29, 2013. Reproduced
with permission.

ground state, x-rays are emitted. Because the energy difference between two given shells is always the
same, these x-rays come out in one of a few known energies based on the shell from which they came
and the shell in which they ended up; these are called characteristic x-rays. In typical x-ray tubes, this
mode of production is relatively minor.
The second (and predominant) source of x-rays from an x-ray tube is called bremsstrahlung, which
is German for “braking radiation.” When the electrons that are accelerated into the metal target hit
it, some of them are deflected off their original course. In this deflection they lose energy (the amount
determined by the angle of deflection) and that energy is released in the form of x-rays. The emitted
x-rays are of different energies (polyenergetic), but the maximum energy is defined by the potential
applied across the x-ray tube. If the x-ray tube potential is 100 kV, the maximum x-ray energy cannot
exceed 100 keV and most resultant x-rays will have lower energy. Most of the x-rays coming from an
x-ray tube are the product of bremsstrahlung and so x-ray tubes emit polyenergetic x-ray beams. It is
important to note that bremsstrahlung is not a phenomenon for x-ray tubes only and it can be emitted
from the nuclear particle radiations discussed later.
Because x-rays can pass through tissues and generally travel a long distance (at the speed of light),
they have quite a low LET. Thus, on average, they deposit very little energy per unit distance.

Nuclear Nomenclature

Without belaboring what is considered by most a less-than-thrilling topic, it is necessary to briefly review some nomenclature about atoms and, specifically, the atomic nucleus. An atom is made up of a nucleus surrounded by a cloud of electrons. The nucleus is made up of protons and neutrons. Each
electron has a charge of −1, whereas each proton has a charge of +1. The neutrons have no charge.
When the number of protons and electrons is equal, the atom has no charge.
The number of protons in the nucleus, which is called the atomic number, defines an element. The
atomic mass is the sum of the protons and neutrons in an atom. The reason the electrons are not
included in this calculation is that their mass is orders of magnitude smaller than that of protons and
neutrons and so does not significantly contribute to the mass of the atom. Different atoms of a given
element can have different atomic masses because the number of neutrons can vary; these are called
isotopes of an element. A specific isotope is designated by giving the name of the element along with
the atomic mass, either as a superscript before the element name or as the element name followed by a
hyphen and the isotope number. For example, the isotope of molybdenum (Mo) with an atomic mass
of 99 is indicated by either 99Mo or Mo-99.
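
Because the atomic mass is simply protons plus neutrons, the neutron count of any isotope follows directly from its notation. A minimal sketch (the element lookup table is a small illustrative subset):

```python
# Neutron count from isotope notation: atomic mass A minus atomic number Z.
ATOMIC_NUMBER = {"Mo": 42, "Tc": 43, "I": 53, "F": 9}  # illustrative subset

def neutron_count(isotope):
    """Neutrons in an isotope written as '<element symbol>-<mass number>'."""
    symbol, mass_number = isotope.split("-")
    return int(mass_number) - ATOMIC_NUMBER[symbol]

print(neutron_count("Mo-99"))   # 99 - 42 = 57 neutrons
print(neutron_count("I-131"))   # 131 - 53 = 78 neutrons
```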

Nuclear Radiation

Aside from x-rays, the other medically relevant forms of radiation all emit from the atomic nucleus
rather than from the electron shell. The nucleus contains protons, which have a positive charge, and
neutrons, which are uncharged. Particles with the same charge repel one another. Indeed, electrons
repel each other, but because the electrons are spread fairly thinly in the electron shells, this is not an
issue. The nucleus, however, is incredibly densely packed and is held together by the strong nuclear force. The attraction between the protons alone is not enough to overcome their mutual repulsion and keep the nucleus together, so the neutrons contribute: they play a critical role in allowing the protons to coexist within the nucleus, helping to stabilize it. Furthermore, the larger the nucleus, the more neutrons
are needed to achieve stability. In small atomic nuclei such as helium (two neutrons, two protons),
there is generally close to a 1:1 neutron/proton ratio. As the number of protons increases, the rela-
tive number of neutrons needed to maintain a stable nucleus increases. The largest stable isotope is
lead-208 (Pb-208), with 82 protons and 126 neutrons, or about 1.5 neutrons per proton.
When a nucleus has too few or too many neutrons to appropriately balance the nuclear forces,
the atom is unstable, meaning that it undergoes some form of rearrangement to become more stable.
Because these nuclear rearrangements allow the nucleus to transition to a more stable, lower energy
state, they result in a net release of energy (which is why fission power plants can produce electricity).
The radioactive atom is called the parent and the atom after decay is called the daughter. The medi-
cally relevant forms of nuclear decay are covered next.

Electron Capture

Nuclei that have relatively too much positive charge can become more stable by converting a proton
into a neutron through the process of electron capture. An electron from an inner shell can be cap-
tured into the nucleus; it combines with a proton and their charges cancel and the proton becomes
a neutron. For the sake of thoroughness it bears mentioning that to balance the equation a massless,
chargeless particle called a neutrino results. Finally, energy is also released, often in the form of a γ-ray
or characteristic x-ray (discussed later). After electron capture, the atom has the same atomic mass,
but the atomic number (i.e., the form of element that the atom is a part of) decreases by one. For
example, when indium-111 (In-111; composed of 49 protons and 62 neutrons) undergoes electron
capture, it becomes cadmium-111 (Cd-111) with 48 protons and 63 neutrons.

Beta Emission

In nuclei that have too many neutrons for stability, a neutron can be converted into a proton. This is
basically the exact opposite of electron capture: an electron is ejected from the nucleus resulting in a
neutron becoming a proton. Because it is the opposite process to electron capture, in the case of beta
decay, an antineutrino balances the equation. Although the ejected electron is physically identical to
any other electron, to denote that it originated in the nucleus, it is called a beta particle: β−. The minus
sign after the β denotes its negative charge. The maximum energy of a β− particle that can result from
a specific isotope’s decay is characteristic. However, the antineutrino can carry variable amounts of
energy and so any individual β− particle is emitted with some energy up to the maximum. The average
energy of a given β− is about one-third of the maximum energy. On undergoing β− decay, the atom
maintains the same atomic mass, but the atomic number increases by one. An example of a β− emitter
used in medicine is iodine-131 (I-131), with 53 protons, which decays into xenon-131 (Xe-131), with 54 protons.
The distance traveled by β− particles depends on their energy and is typically in the millimeter to
centimeter range in tissue. A higher energy results in a longer path length. Because the path length is
so much shorter than with photon radiation, the biologic effects are much more local.

Positron Emission

Positron emission occurs in atoms with relatively too many protons. In a process very similar to β−
decay, a proton can eject a positively charged electron and thus convert into a neutron (a neutrino is,
again, generated as well). The positively charged electron emitted from the nucleus is called a positron
and denoted β+. A positron is physically exactly the same as an electron aside from the opposite charge
but of course positrons are not observed to exist stably in nature. In fact, they are inherently unstable
and persist for a tiny fraction of a second. After being ejected from the nucleus, a positron picks up
an electron. The electron orbits about the positron to form a pseudo-atom called positronium. After
a short time, the electron and positron annihilate one another. Their mass is converted into energy in
the form of photons. If one works out the familiar formula E = mc² using the mass of an electron or a
positron, the result is 511 keV of energy. Thus, two photons are created, one each for the electron and
the positron that are annihilated; the two photons travel directly away from one another. Fluorine-18
(F-18) is a commonly used positron emitter (with nine protons and nine neutrons) that decays into
oxygen-18 (O-18), with 8 protons and 10 neutrons.
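
As a quick check of the 511 keV figure, the rest-mass energy of an electron (or positron) can be worked out directly from E = mc²; the sketch below uses standard physical constants:

```python
# Electron rest-mass energy: E = m * c^2, converted from joules to keV.
ELECTRON_MASS_KG = 9.109e-31
SPEED_OF_LIGHT_M_S = 2.998e8
EV_TO_JOULE = 1.602e-19

rest_energy_j = ELECTRON_MASS_KG * SPEED_OF_LIGHT_M_S ** 2
rest_energy_kev = rest_energy_j / EV_TO_JOULE / 1e3
print(round(rest_energy_kev))  # ~511 keV, the energy of each annihilation photon
```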
Positron emission and electron capture result in the same net nuclear change: to decrease the pro-
ton/neutron ratio in the nucleus. So why do some nuclei undergo positron emission, whereas others
undergo electron capture? Generally speaking (there are always exceptions), atoms with high atomic
number have inner shell electrons that are nearer the nucleus and more likely to interact with the
nucleus, and so electron capture is more likely; smaller nuclei are more likely to undergo positron
emission.
Positrons behave just like β− particles with respect to path length for the β+ emission itself, whereas
the 511-keV photons have biologic effects similar to x-rays. Thus, radiation energy deposition into
tissue from β+ decay is additive between the β+ contribution and the photon contribution.

Alpha Emission

The final nuclear particle emission to be discussed is alpha emission, denoted by the Greek symbol α.
An α particle is the same as a helium nucleus, containing two protons and two neutrons. When com-
pared with a relatively tiny electron as in a β− particle, an α particle has markedly higher mass (about
7,300 times higher). Alpha particles are emitted only from relatively large atoms that are far from
stable and need to shed significant nucleons to become most stable. The path length of α particles is
very short, in the micron range (on the order of a thousand times shorter than β− particles) and the
LET is incredibly high, about 20 times higher than for x-rays. An example of an α emitter with medi-
cal utility is astatine-211 (At-211), which decays via α emission into bismuth-207 (Bi-207), which in
turn decays via electron capture into Pb-207. Because the isotopes that decay via α emission are typi-
cally large and quite unstable, it is common for their daughter isotopes to be also unstable; multiple
radioactive decays are common before eventually achieving a stable isotope.
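
The bookkeeping for how each particle emission changes the atomic number (Z) and mass number (A) can be summarized in a few lines. The sketch below simply encodes the rules described above and reproduces the examples from this chapter:

```python
# Change in (atomic number Z, mass number A) for each decay mode.
DECAY_RULES = {
    "electron capture": (-1, 0),   # proton + electron -> neutron
    "beta-minus":       (+1, 0),   # neutron -> proton, electron ejected
    "positron":         (-1, 0),   # proton -> neutron, positron ejected
    "alpha":            (-2, -4),  # two protons and two neutrons ejected
}

def daughter(z, a, mode):
    """Return (Z, A) of the daughter nucleus for a parent with the given Z and A."""
    dz, da = DECAY_RULES[mode]
    return z + dz, a + da

print(daughter(49, 111, "electron capture"))  # In-111 -> (48, 111), Cd-111
print(daughter(53, 131, "beta-minus"))        # I-131  -> (54, 131), Xe-131
print(daughter(9, 18, "positron"))            # F-18   -> (8, 18),   O-18
print(daughter(85, 211, "alpha"))             # At-211 -> (83, 207), Bi-207
```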

Isomeric Transition

Some atoms of a given isotope can be at a high energy state and because all matter strives to the
lowest energy state possible for maximal stability, such atoms eventually drop to their ground energy
state and, in doing so, release energy. These excited atoms are said to be in a meta-stable state and so
they are designated with an m after the isotope number. For example, the most widely used isotope
in medical imaging is technetium-99m (Tc-99m). Molybdenum-99 (Mo-99) decays by β− emission such that one of its 57 neutrons ejects an electron (losing a negative charge) and becomes a proton, converting Mo-99
into Tc-99m. Initially, the Tc-99m is at a higher energy state and it eventually decays into Tc-99
by isomeric transition. The energy released by isomeric transition is generally emitted by one of the
methods described next.

Gamma Radiation

The form of nuclear radiation most similar to x-rays (and most famous for turning Dr. Bruce Banner
into The Incredible Hulk) is gamma radiation, denoted by the Greek symbol γ. Physically, γ-rays are
identical to x-rays except for three important differences:
1. The photons are emitted from the atomic nucleus rather than from the electron shell.
2. They have an exact characteristic energy (i.e., monoenergetic; although some atoms can emit
multiple different γ-rays, each has its own characteristic energy). Although characteristic x-rays
have an exact energy, the production of x-rays through a tube is polyenergetic.
3. The energy spectrum of γ-rays extends higher than x-rays, with some γ photons reaching MeV
energies.
In the previous example of Tc-99m, when the isomeric transition releases energy, most of it is emit-
ted in the form of a 140 keV γ-ray. Because γ-rays are like x-rays, they can pass through tissue to be
detected externally. Thus, γ-rays are the photons used for imaging in general nuclear medicine scans.
Many isotopes emit γ-rays as part of their decay to a more stable state. For example, the β− emit-
ter I-131 also emits γ-rays, most prominently one with 364-keV energy; about 83% of I-131 decays
result in a γ emission. From an imaging perspective, only the γ emissions result in detectable radia-
tion and so contribute to the image formation. Any other emissions contribute only to radiation
deposition into the patient, so the optimal imaging agent is a pure γ emitter. Isotopes that emit γ-rays
via isomeric transition come closest to this ideal.

Internal Conversion

Although meta-stable isotopes often release γ-rays via isomeric transition, in some percentage of
decays they also undergo what is called internal conversion. In internal conversion, a nuclear rear-
rangement results in release of energy and that energy results in the ejection of a shell electron. Such
an electron is called a conversion electron. As discussed previously, these electrons contribute to the
radiation dose to the patient (described in more detail in Chapter 3) without adding anything to the
image. Therefore, in an ideal imaging isotope, internal conversion is rare.

Auger Electrons

The final emission to be discussed is the Auger electron, which is quite similar to a conversion elec-
tron physically, but arises from a slightly different mechanism. An Auger electron is emitted when the
energy released by, for example, a γ-ray is absorbed in the electron shell causing the ejection of an
electron. Auger electrons have very short path length (often in the nanometer range) and seem to have
high LET with incredibly local biologic effects.

Units of Radioactivity

Radioactive decay is a random event. Specifically, a given atom of a radioactive isotope can decay
at any time; it is not possible to predict when a single atom will decay, nor is it possible to hasten or
delay the process. However, for a group of atoms of a specific isotope, decay happens with a charac-
teristic time course such that half of the atoms decay over a period of time. That time is called the
half-life (denoted T½) and is an inalterable feature of a specific isotope. You may notice that it does
not matter how many atoms are started with; half decay over the half-life. Thus, radioactive decay is
an exponential process. The amount of radioactivity at any given time can be calculated by the fol-
lowing equation:

Activity now = Starting activity × e^(−0.693 × (time elapsed / T½))

Half-life can vary from tiny fractions of a second to millions of years. Most isotopes useful
in medical imaging have a half-life on the order of hours or a few days. To measure how much
radiation is present, the SI unit of becquerel (Bq) is used. One Bq of radioactivity equals one dis-
integration per second. In nuclear medicine, we are typically dealing with activities in the kBq to
GBq range. In the United States, an older (non-SI) unit of curie (Ci) is still widely used. The Ci
was based on the rate of disintegration of 1 g of radium-226 and was measured to be 3.7 × 10¹⁰
disintegrations per second (incidentally, the true decay rate of 1 g of radium-226 is somewhat dif-
ferent from the measured value, but the unit has remained). One Ci of radioactivity, then, is equal
to 3.7 × 10¹⁰ Bq or 37 GBq. Activities typically used are in the range of mCi. Note that because
decay is a constantly ongoing process, the measurement of the amount of radioactivity present is
a dynamic one. Thus, as decay happens, the amount of radioactivity present decreases (and can be
calculated as above).
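
A short worked example of the decay equation and the Ci/Bq conversion may help; this is a sketch only, and the 6-hour half-life (roughly that of Tc-99m) and 10 mCi starting activity are illustrative numbers:

```python
import math

MBQ_PER_MCI = 37.0  # 1 mCi = 37 MBq, since 1 Ci = 3.7e10 Bq

def activity_now(starting_activity, time_elapsed, half_life):
    """Exponential decay: A(t) = A0 * exp(-0.693 * t / T_half)."""
    return starting_activity * math.exp(-0.693 * time_elapsed / half_life)

dose_mbq = 10.0 * MBQ_PER_MCI            # 10 mCi = 370 MBq
print(activity_now(dose_mbq, 3.0, 6.0))  # ~262 MBq remains after 3 hours
```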
Finally, the half-life described previously is the physical half-life and is an unwavering property
of the isotope in question (there is nothing that can be done to change the half-life of an isotope).
However, when an isotope is given to a living organism, it can be excreted. The rate at which the
isotope leaves the body is called the biologic half-life. While it is being excreted, it is also decaying.
The combination of the physical half-life and biologic half-life is called the effective half-life and can
be calculated as follows:

1 / effective half-life = 1 / physical half-life + 1 / biologic half-life

For an isotope with a very long physical half-life, 1/physical half-life is close to zero, so the
effective half-life is essentially the same as the biologic half-life and vice versa. Most isotopes used
in nuclear medicine have relatively short half-life (minutes to a few days) and do undergo signifi-
cant biologic excretion, so the effective half-life is usually significantly shorter than the physical
half-life. A notable exception is the heavy metal Tl-201, which has little biologic excretion and so
the effective half-life is very near the physical half-life, resulting in a relatively high radiation dose
to the patient.
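
A worked example of the effective half-life calculation follows; the 8-day physical half-life (roughly that of I-131) and the 4-day biologic half-life are illustrative assumptions:

```python
def effective_half_life(physical, biologic):
    """1/T_eff = 1/T_phys + 1/T_bio, i.e. T_eff = (T_phys * T_bio) / (T_phys + T_bio)."""
    return (physical * biologic) / (physical + biologic)

print(effective_half_life(8.0, 4.0))   # ~2.67 days: shorter than either alone
print(effective_half_life(1e6, 4.0))   # very long physical T1/2 -> essentially the biologic T1/2
```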
3 RADIOBIOLOGY

It is commonly known that radiation is “dangerous” but it is far less well understood why. As was
discussed in Chapter 2, radioactive decay results in the release of energy, and the energy is in motion,
so it can interact with nearby matter. Any energy deposition into living tissue can cause damage to
that tissue (e.g., putting one’s hand onto a hot stove burner causes a burn). Extraordinarily high expo-
sures to radiation can, indeed, cause burns and damage tissue directly and immediately, causing rapid
death. However, such doses are generally seen only in the setting of nuclear weapons or disasters.
More typically when we discuss the risks of radiation, it is because the forms of radiation under
discussion have sufficient energy to cause ionizations and, therefore, are called ionizing radiation.
Ionization is, at its most basic, disrupting an existing atom or molecule such that it becomes an ion
(i.e., charged). The most common form of ionization is when the radiation knocks an electron out, but
ionization includes any alteration of an atom or molecule including disrupting bonds. Because ion-
ization can directly disrupt bonds/molecules, ionizing radiation passing through a cell could directly
damage the DNA. However, this is relatively unlikely to occur, statistically, and in most cases can be
repaired by the cell.
Far more common is that the ionization results in something called free radical formation. Free
radicals are molecules that are missing an electron, making them amazingly reactive; they indiscrimi-
nately react with any nearby molecule that could provide the missing electron and in the process sig-
nificantly damage that molecule. A common example is the oxygen free radical: when oxygen gas (O2) loses an electron, it can easily disrupt the bonds in DNA to obtain the missing electron.
Most damage to cells from photons or small particles like β− particles is by free radicals.
Very high doses of radiation can cause what are called deterministic effects wherein there is a pre-
defined dose-response relationship and a threshold dose below which the effect is not seen. Far more
common and more relevant to diagnostic radiation are so-called stochastic effects. Stochastic effects
are random events in that if there is a population that is exposed, a certain percentage may undergo
the effect, but it is impossible to predict which specific individuals will be affected.

Units of Radiation Exposure

The most basic level of quantification of radiation exposure to some material is simply the amount
of energy deposited into the material. The SI unit of gray (Gy) corresponds to 1 J of energy deposited
per kilogram; an older unit of rad is still sometimes used in the United States wherein 100 rad equals
1 Gy. However, we are generally concerned with radiation absorption by living tissues.
Different tissues and organs in a living being respond differently to radiation—some are very sensi-
tive and easily damaged or destroyed by radiation, whereas others are more resilient. Thus, the dose
deposited does not inform about the effect of the dose. For example, 10 Gy to the foot has a vastly
different effect from 10 Gy to the gonads. Furthermore, irradiating the entire body has a different
effect from irradiating a small area even though the radiation dose in Joule per kilogram may be the
same. Therefore, an alternate SI unit of dose has been developed called the sievert (Sv); an older unit, the roentgen equivalent man (REM), equals 0.01 Sv.
A uniform dose of 1 Gy to an entire organism results in a dose of 1 Sv. However, when discussing
medical radiation exposures, uniform whole-body doses are uncommon. Typically, different areas of
the body or different organs receive significantly different energy deposition (in Gy). As we discussed
above, a Gy to the gonads has a different effect from a Gy to the extremities, so one cannot simply
average the doses in Gy across the various organs, weighted by the relative size of each organ.
Instead, two conversion factors must be used. The first is called the quality factor, which relates to
the type of radiation. The quality factor of a given form of radiation is often expressed in the context
of relative biologic effectiveness (RBE). As the name implies it is a relative measurement; gamma
radiation has an RBE of one, which is the same for electrons. On the other hand, α-particles have an
RBE of 20, meaning that a given dose of alphas has 20 times the biologic effect (cell-killing ability) as
the same dose of γ-rays or electrons.
The other conversion relates to the sensitivity of the irradiated tissue, called the weighting fac-
tor. The weighting factor takes into consideration the relative sensitivity of a specific tissue type to
the effects of ionizing radiation. Weighting factors are periodically published by the International
Commission on Radiological Protection (ICRP) based on current radiobiologic knowledge.
Because a uniform dose to the whole body is an easily understandable amount of radiation expo-
sure, it is advantageous to be able to convert nonuniform doses into the equivalent uniform dose in
terms of effect on the organism as a whole. This conversion is called effective dose. By multiplying
the actual dose to an organ or tissue by the weighting factor for that organ and by the quality factor
of the radiation, one can come to a dose in Sv that is equivalent to a whole body exposure of that
many Sv.
Obviously, organisms are complex and so it is impossible to exactly convert the effect on the whole
person of a radiation dose to the big toe with a dose to the crown of the scalp; effective dose calcu-
lations are not perfect. However, they represent a reasonable estimate of dose effects and are very
powerful in that they allow meaningful comparisons of exposures from various sources of radiation
even though those sources may have different dose rates and distributions.
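
The effective dose bookkeeping described above can be sketched in a few lines; the tissue weighting factors shown are a small illustrative subset in the spirit of ICRP values, not an authoritative list:

```python
# Effective dose: sum over organs of absorbed dose (Gy) * quality factor * tissue weight.
TISSUE_WEIGHT = {"gonads": 0.08, "lung": 0.12, "bone_marrow": 0.12}  # illustrative subset
QUALITY_FACTOR = {"gamma": 1.0, "electron": 1.0, "alpha": 20.0}

def effective_dose_sv(organ_doses_gy, radiation="gamma"):
    """Convert a mapping of organ doses in Gy to a single effective dose in Sv."""
    q = QUALITY_FACTOR[radiation]
    return sum(dose * q * TISSUE_WEIGHT[organ]
               for organ, dose in organ_doses_gy.items())

print(effective_dose_sv({"lung": 0.01, "bone_marrow": 0.005}))  # 0.0018 Sv
```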

Deterministic Effects

Deterministic effects only occur above a threshold dose, so below the threshold the effect should not
occur and above the threshold the effect will occur. Because we are dealing with biology and living
organisms, the threshold is not a precise dose, but it is fairly narrow. There are four deterministic
effects that are the most clinically relevant: (1) cataracts, (2) hematopoietic syndrome, (3) gastrointes-
tinal syndrome, (4) neurologic syndrome.
Radiation-induced cataract formation is perhaps the most interesting and well-understood deter-
ministic effect for a unique reason: radiation-induced cataracts can be distinguished from sporadic
cataracts by clinical examination. Therefore, it is possible to correlate dose with cataract forma-
tion without confounding information from sporadic cataracts. Although most of the effects dis-
cussed relate primarily to patient or public radiation exposures, dose to the lens of the eye sufficient
for cataract formation is largely limited to radiation workers, primarily those performing fluoros-
copy. However, because those workers can be monitored and because the effect is deterministic,
radiation-induced cataracts should be avoidable.

Hematopoietic syndrome can occur with acute whole-body exposures on the order of 1–10 Gy.
Because the bone marrow cells are the most sensitive to radiation-induced death, hematopoietic syn-
drome is the major deterministic syndrome that occurs at the lowest dose. The progenitor stem cells in
the marrow are killed and a patient’s blood counts drop as circulating cells die through normal senes-
cence and are not replaced. When they drop sufficiently, left untreated, death occurs from infection
or bleeding over the course of weeks after the exposure. With supportive care, death can be avoided
in a high percentage of cases.
At doses greater than 10 Gy, the gastrointestinal syndrome can develop. This is superimposed
on, rather than instead of, the hematopoietic syndrome. The cells lining the gastrointestinal tract
are also quite sensitive to radiation, although not as much so as the bone marrow. With high acute
whole-body doses, many of these cells die and slough off, breaking down the body’s barrier to infec-
tion from gastrointestinal sources. Thus, patients typically die of overwhelming infection within days
of the exposure. Supportive care is insufficient in most cases.
Finally, very high acute doses greater than 50 Gy can induce the neurologic syndrome. In this case,
the radiation dose is high enough to cause swelling in the central nervous system, which causes rapid
deterioration, disorientation, and ultimately death within hours of the exposure.
It is important to note a few critical points regarding these deterministic effects:
• The above-listed doses are for single, acute, whole-body doses. For fractionated doses or chronic
low doses or doses to a portion of the body, the body’s tolerance is remarkably higher because
of repair mechanisms.
• The above doses are estimates only; some individuals are more or less sensitive, so the specific
syndromes are possible at doses below those listed.
• Neurologic syndrome should never be observed outside the setting of a nuclear disaster.
However, hematopoietic syndrome can be induced in the setting of radiotherapy and doses can
be given that may induce the gastrointestinal syndrome.

Stochastic Effects

Stochastic effects are random events that may be caused by a radiation exposure. Typically, the higher
the dose of radiation, the more likely is the effect to occur in an exposed individual, but there is not
known to be a threshold either below which the effect definitely will not happen or above which the
effect definitely will happen. By far, the most common stochastic event discussed in the setting of
radiation exposure is cancer induction.
How does radiation cause cancer? This question is the primary concern of most with regard to
radiation exposure, so it is worth having a thorough understanding of the process. Previously we dis-
cussed that radiation can cause ionizations that can damage the DNA in the cell either directly or,
more commonly, by free radicals. So, is there something magical about the free radicals that makes
them home in on DNA? Indeed there is not—the free radicals can damage other parts of the cell with
equal likelihood. However, DNA damage drives the fate of the cell. To understand why, let us envision
the following scenarios:
• Ionizing radiation causes damage to a protein within the cell. For the most part, cells are con-
tinuously turning over proteins and a new copy can be translated.
• Ionizing radiation damages an mRNA molecule encoding for a critical protein. The cell detects
that the protein is insufficient and simply triggers repeat transcription of the mRNA.
• Ionizing radiation damages an organelle critical for organ function. The cell may die or it may
be able to recover.
• Ionizing radiation damages the DNA. Either the DNA is damaged beyond repair, in which case
the cell dies, or the DNA is altered in such a way that the functions it encodes are no longer
normal. Given several such “hits” the DNA may become sufficiently abnormal that the cell
ceases to function normally and begins to function outside the normal control mechanisms in
the body. That is, the cell becomes cancerous.
So one can see why damage to most parts of the cell can be recovered from or, at worst, the single
cell dies. However, although damage to the DNA can cause cell death, it can, uniquely, cause the
malignant transformation of a cell. The series of events leading up to malignant transformation is
both random and unlikely. There need to be many, many distinct events affecting the DNA and none
of the events can be deadly to the cell. This is why, in the grand scheme of things, cancer is relatively
rare when you look on the cell level. There are trillions of cells in the body and each cell undergoes
many, many DNA-damaging events. Most of these are repaired, but very rarely a cell accumulates
enough damage to undergo malignant transformation.
Estimating the risk of cancer from a given dose of radiation is not straightforward and is the
source of much controversy. There are a few reasons for this. First, unlike with cataracts, it is
impossible to differentiate cancers caused by radiation from cancers arising from other causes.
In addition, cancers are relatively common over a lifetime; as people live longer and nonmalig-
nant causes of death are treated or delayed, the lifetime incidence of cancer is more than 50%.
Furthermore, the formation of a cancer is the accumulation of multiple insults from multiple
sources. For example, a lung cancer can be caused by a combination of air pollution, tobacco
smoke, radon gas, and so forth, and it is not possible to sort out the relative contribution of
each of these insults. To make matters worse, different people have different abilities to repair
DNA damage so an insult that can be easily repaired in one person may lead to cancer in another
(indeed, many hereditary cancer susceptibility syndromes have been traced back to defects in DNA
repair mechanisms). Finally, people are exposed to low levels of radiation throughout their lives
and different areas have differing levels of background radiation. Denver, for example, because
of its altitude, has higher background radiation than a city at sea level. Some areas have much
higher radon levels than others (and this variation can be dramatic not only on a region-by-region
basis, but also on a house-by-house basis). Furthermore, people move around a lot, so just because
someone lives in Denver now, it is impossible to extrapolate their radiation dose because they may
have lived most of their lives elsewhere. In summary, if we wanted to understand the risk of cancer
from a given dose of radiation, we would need to know how much radiation each member of a
population was exposed to, we would need to confirm that all members of that population have
the same cancer susceptibility (i.e., none of the members harbor genetic defects in DNA repair),
and we would need to control for other behaviors that increase cancer risk. Clearly, all of these
things are impossible in most scenarios.

One situation in which we can derive fairly good data is in the survivors of nuclear disasters, such
as atomic bomb survivors in Hiroshima and Nagasaki. The radiation dose to these individuals could
be estimated relatively accurately based on their distance from the explosion, and the radiation dose
was quite high, so the risk from the radiation essentially outweighed all the other confounders. In
these individuals, there was observed a fairly linear increase in risk of cancer with increasing radia-
tion dose.
Thankfully, few people are exposed to radiation doses as high as those mentioned previously, and
hopefully it will stay that way. What we would like, medically, is a better understanding of the risk
conveyed by relatively low doses of radiation. Let us design a hypothetical study to see if there is an
increased risk of cancer in people who have undergone at least 10 computed tomography (CT) scans.
Assume that about 50% of people develop a cancer at some point in their lives and hypothesize that
those 10 CT scans increase their absolute risk by 1% so they would have a 51% lifetime risk of
cancer. The sample size needed for that experiment is more than 50,000 subjects! What’s more, these
people would need to be followed for the rest of their lives. To make matters even worse, some of
the people in the no CT group would at some point need to undergo CTs, confounding the results.
Therefore, it is safe to assume that we will never have observational data on cancer risk from rela-
tively low radiation doses.
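
To see why the numbers get so large, a standard two-proportion sample-size approximation can be sketched as follows; the 5% significance level and 80% power are assumptions for illustration, not figures from the text:

```python
import math

def subjects_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects per group to distinguish lifetime risks p1 and p2
    (two-sided alpha = 0.05, power = 0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Baseline 50% lifetime cancer risk vs. a hypothesized 51% in the exposed group.
n = subjects_per_group(0.50, 0.51)
print(n, "per group;", 2 * n, "in total")  # tens of thousands of subjects
```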
Because we have data at very high doses and we will never have data at low doses, we are stuck try-
ing to extrapolate risk. Even this is not simple because many different possibilities exist (Figure 3.1).
The most straightforward model is called the linear no threshold model. In this model, the linear risk
of cancer at very high doses is extrapolated down linearly to zero risk at zero dose. Because this is a
very conservative assumption, it is what is used for establishing rules on radiation safety and protec-
tion. It is worth noting that the regulatory bodies that recommend the use of the linear no threshold
model specifically warn against trying to use it to calculate exact risks from low doses of radiation
because insufficient data exist for this use.
Some have advocated a linear threshold model to parallel the thresholds observed in determinis-
tic effects. In this model, below a threshold dose there is no increased risk of cancer, but above the

Figure 3.1.  Schematic representation of different possible extrapolations of measured radiation risks down to very low doses, all of
which could, in principle, be consistent with higher-dose epidemiologic data. (a) Linear extrapolation. (b) Downwardly curving (decreas-
ing slope). (c) Upwardly curving (increasing slope). (d) Threshold. (e) Hormetic. Reprinted with permission from Brenner DJ, Doll R,
Goodhead DT, et al. Cancer risks attributable to low doses of ionizing radiation: assessing what we really know. Proc Natl Acad Sci USA.
2003;100(24):13761–6. Copyright 2003 National Academy of Sciences USA.
threshold the risk rises linearly to meet the observed data at high doses. Yet another model could
assume an exponential risk such that low doses have a very low risk, but with increasing dose, the
risk goes up exponentially rather than linearly. One could also postulate that low doses carry a
disproportionately high risk, with a curve that rises steeply at first and then levels off at higher
doses (although there are no data to suggest this).
A final model that is of some interest is called the radiation hormesis model. The theory of hor-
mesis is based on the observation that certain areas of the world with unusually high levels of
background radiation actually have unexpectedly low cancer incidence. Thus, some scientists have
postulated that low levels of radiation may actually be protective (perhaps by upregulating DNA
repair enzymes).
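To make the shapes concrete, the competing extrapolations can be written as toy functions of dose, all pinned to the same observed risk at a single high dose. The sketch below (in Python) is purely illustrative: the functional forms, the threshold value, and the curvature exponent are assumptions chosen only to show the shapes, not fits to any data, and a hormetic curve would simply dip below zero excess risk at very low doses.

HIGH_DOSE, HIGH_RISK = 1.0, 1.0   # arbitrary units; the region where data actually exist

def linear_no_threshold(dose):
    # Risk extrapolated straight down to zero risk at zero dose.
    return HIGH_RISK * dose / HIGH_DOSE

def linear_threshold(dose, threshold=0.2):
    # No excess risk below the threshold, linear above it.
    return 0.0 if dose <= threshold else HIGH_RISK * (dose - threshold) / (HIGH_DOSE - threshold)

def upward_curving(dose, power=2.0):
    # Very low risk at low doses, rising faster than linearly.
    return HIGH_RISK * (dose / HIGH_DOSE) ** power

for d in (0.0, 0.1, 0.5, 1.0):
    print(d, round(linear_no_threshold(d), 2), round(linear_threshold(d), 2),
          round(upward_curving(d), 2))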
The three take-home points from this section are (1) high doses of radiation cause cancer but there
is an almost complete lack of data at lower doses and such data are unlikely to ever become available;
(2) innumerable models have been and can be proposed that are compatible with the existing limited
high-dose data; and (3) the linear no threshold model is currently held to be the most appropriate
and conservative model to use for the development of radiation protection regulations, but it is not
appropriate to use it to try to estimate cancer risk from individual low radiation exposures.

Radiation Safety

If radiation is generally considered “dangerous” and we typically try to avoid things that are hazard-
ous, how can radiation safely be used in medical settings? Is it possible? Furthermore, just being safe
is not enough; there should be benefits that outweigh the risks. Overall, many uses of ionizing
radiation in medicine are, on balance, considered to have a favorable risk/benefit ratio (meaning
that the benefit is greater than the risk). In this section, we rationally assess risks and benefits, which
can be quite complex for several reasons.
To perform a risk/benefit analysis, one must first define the risks and benefits of a course of action.
In the context of radiation exposure from diagnostic imaging, this is very difficult. As discussed previ-
ously, the linear no threshold model has insufficient data at low-dose levels and should not be used
to extrapolate risks from individual relatively low exposures. Even if it were possible to accurately
quantify the exact risk of a future cancer from a given radiation dose, it is still nontrivial to try to
balance that against the risks of not receiving the dose.
For example, imagine a patient with right lower quadrant pain and suspected acute appendicitis.
The basic choices are to take the patient to the operating room to remove the appendix (running the
risk that the patient may not have appendicitis) or doing a CT scan to look for evidence of acute
appendicitis. Imagine that we magically know the exact risk of the patient having something other
than acute appendicitis, the risk of the CT being wrong, the risk of cancer from the CT, and the risk
of injury or even death from the surgery. Other than the (very small) risk of death directly from the
surgery, the other risks are not, in fact, well known. But even if they were, how does one balance a risk
of injury now against a risk of illness later? It is astoundingly difficult under the best of circumstances
and may be impossible in these cases where the risk of the late toxicity (i.e., cancer induction from
radiation) is very small.
Because an exact assessment of risks and benefits is not possible, we want to at least make sure that
there is a reasonable likelihood of benefit from the radiation exposure. We can assume from the data
that are available that even if there is real risk from the radiation exposure, it is very small. Thus, in
most cases we can conclude that the risk/benefit ratio is favorable. Obviously, it is critical that there
is a reasonable likelihood of benefit, so tests should not be done when there is no sound reason
to think they will benefit the patient. Of course, there is a lot of room for interpretation in the word
"reasonable," and what seems reasonable to one person may seem preposterous to another.
Another complicating factor in the evaluation of risk and benefit from radiation exposure is that
the benefit and risks may not be shared equally by all involved parties. For example, in a nuclear med-
icine study, the patient is most likely to benefit from the test and also receives the highest radiation
exposure. However, the technologist performing the study also receives radiation dose in the process.
Furthermore, the patient’s family members or other close contacts may receive some radiation dose.
The benefits of these exposures are myriad (e.g., the technologist has a paying job, the family member
may be aided by the patient’s getting appropriate treatment).
As a society, we cannot accurately estimate the exact total risk and total benefit of a given
radiation exposure. That there are benefits is unquestionable—radiation is used to diagnose and treat
deadly diseases every day. Therefore, radiation protection is focused on a principle called ALARA,
which stands for As Low As Reasonably Achievable. The idea here is that to maximize the risk/benefit
ratio, the risk should be reduced as much as is reasonably possible.
The word “reasonably” is of critical importance. If an interventional radiologist wears a lead apron
to minimize the dose to his or her body, should he or she not then wear an apron that is twice as thick
to decrease the dose a little more? Not if that extra weight prohibits working effectively. There are
always tradeoffs between radiation exposure and output. Lower doses may decrease image quality;
greater shielding may lead to unreasonable costs or physical difficulties. Therefore, such decisions
need to be made in the context of what is considered reasonable.
The average annual background radiation dose is typically reported as 3 mSv, although it can range
quite a lot based on location. In the United States, workers who receive radiation exposure as part
of their jobs can receive up to 50 mSv of dose to the whole body from job activities per year. There
are separate organ limits as well, up to 500 mSv. A pregnant worker is limited to 5 mSv dose to the
fetus. One important nuance to this regulation is that a pregnant worker must formally declare to the
radiation safety officer that she is pregnant for the lower dose limit to be in effect. Furthermore, for
patients treated with radioactive isotopes, there are established limits that no member of the public
should receive more than 1 mSv from the patient (up to 5 mSv in some cases).
The context of these doses and limits helps to establish an idea of what is reasonable in terms of
worker, patient, and member of the public doses. Although there are some strict caps on doses to the
public and to workers, there are no strict limits for patient doses. Furthermore, in most cases the goals
are to achieve doses well below the absolute maximum limits. Therefore, there exists quite a gray area
in which different centers and groups may determine what they individually consider reasonable.
Although a cavalier attitude toward radiation exposure can be dangerous, equally (or possibly more)
dangerous is an overly restrictive interpretation that can unreasonably prevent the potential benefits
of radiation because of unfounded fears or magnification of potential risks.
4 RADIATION DETECTORS—IONIZATION DETECTORS

When considering medical imaging, we typically think of the cameras that result in an image. However,
it is critical to be able to detect and quantify the presence of radioactivity quite apart from image for-
mation. This is generally done by exploiting the ionizations caused by the radiation. As a radioactive
emission travels through matter, it causes ionizations. If an ion in space is subjected to an electrical
potential, the negative ions are drawn to the anode (the positive end of the potential). When that
charge reaches the anode, it flows to the cathode and thus there is electrical current. This phenomenon
is the underpinning of most nonimaging radiation detectors used in clinical practice.
There are three basic needs addressed by ionization detectors. First is the need for very sensitive
detection ability to find even small amounts of radioactivity (e.g., to look for contamination); this is
addressed by Geiger-Mueller (GM) survey meters. Beyond simply detecting low levels of radiation, it
can be necessary to quantify the amount of very low-level radiation present, which can be done with
proportional counters. Finally, for amounts of radiation that are higher and more easily detectable,
it is important to have devices that can accurately quantify the amount of activity present across a
wide range, which can be done by ionization chambers and dose calibrators. Each of these devices is
described next.

Ionization Chambers

Ionization chambers are widely used and fairly simple. As discussed, an ionization chamber is simply
two conductors with a potential (voltage) across them. The “chamber,” which is the space between the
conductors, is often just air at ambient pressure, but alternatives are possible such as a sealed, pressur-
ized gas, or a plastic semiconductor. If ionizing radiation passes through the chamber, any ionizations
that occur produce current. The current produced is proportional to the total number of ionizations
that occur within the chamber. However, the ionization chamber cannot determine the energy of the
incident radiation causing the ionizations; that is, the current produced is not proportional to the energy of
the incident radiation.
A typical ionization chamber survey meter (Figure 4.1) consists of a base unit containing the
electronics and battery. There is a dial with a needle that displays the amount of current
being detected (converted for display into units of millisievert per hour). Additionally, there is a dial with
several sensitivity settings that adjust the amount of current needed to produce a given deflection in
the needle. This allows discrimination of different levels of radioactivity. Some instruments are digital
and automatically vary the sensitivity based on the measured activity.
It is important to note that when the chamber is air at atmospheric pressure, the relative atmo-
spheric pressure can change the sensitivity of the device. For example, because air is thinner at higher
altitudes, the efficiency of detection decreases with increasing altitude. Therefore, a calibration must
be performed to account for the location of the instrument. It is worth noting that the daily variations
in barometric pressure in a given location are insufficient to cause significant changes in calibration.
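Because a vented chamber's response depends on the mass of air between its conductors, the conventional correction scales the reading by temperature and pressure relative to reference conditions. The Python sketch below assumes the commonly used reference values of 22 degrees Celsius and 101.33 kPa; these reference values and the example pressure are assumptions for illustration, and actual calibration follows the instrument's documentation.

def temperature_pressure_correction(temp_c, pressure_kpa,
                                    ref_temp_c=22.0, ref_pressure_kpa=101.33):
    """Correction factor for a vented (open-to-air) ionization chamber:
    thinner (warmer or lower-pressure) air means fewer ionizations per unit
    exposure, so the raw reading is scaled up accordingly."""
    return ((273.15 + temp_c) / (273.15 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

# At roughly 84 kPa of ambient pressure (about a mile above sea level),
# readings must be scaled up by about 20%.
print(round(temperature_pressure_correction(22.0, 84.0), 2))   # ~1.21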
Ionization chambers can accurately quantify ionizations per unit of time to measure a radiation
dose rate in a given location. This requires an exquisitely sensitive current meter because the absolute
currents generated are quite small. Furthermore, they are relatively insensitive to very low levels of
radiation, so are not used to survey for the presence of radiation but rather to quantify dose from
Figure 4.1.  An ionization chamber survey meter. This model is digital and autosensing. Rather than needing dials to adjust the
sensitivity, the unit automatically adjusts the sensitivity and display units to match the amount of radioactivity detected.

a known source of activity. One example of the use of an ionization chamber in a nuclear medicine
clinic is to measure the dose rate of a package containing radioactive materials, both at the surface of
the package and at 1 m.

Dose Calibrators

A specialized form of ionization chamber is the dose calibrator (Figure 4.2). As discussed, an ioniza-
tion chamber can measure only total current. However, if a known source of radioactivity (i.e., a
known isotope) is being measured, the dose calibrator can use knowledge of how much current is pro-
duced per decay for that isotope to convert current into units of radioactivity (Bq). The components
of a dose calibrator include a base unit that contains the electronics, a display of measured value in
Bq, and buttons to set the isotope being measured. There is either an attached or a standalone cham-
ber. This chamber is made up of a shielded cylinder with the conductors within it and an opening in
which to place the source of radioactivity. Think of the chamber as a donut and the source of activity
is put into the hole in the middle of the donut. The chamber itself contains sealed, pressurized gas to
make it insensitive to atmospheric pressure.
After a dose calibrator has been calibrated upon installation, certain quality control procedures are
necessary to ensure that the device is working as it should. First, the accuracy of the dose calibrator
must be tested using standard samples of the various isotopes that will be measured with the device.
Furthermore, linearity must be tested: different amounts of activity, spanning the full range that will
ever be measured with the device, are measured to make sure that accuracy is maintained at low
Figure 4.2.  A dose calibrator. The dose calibrator is simply a specialized ionization chamber consisting of two parts. The ionization
chamber itself (black arrow) is a shielded cylinder with an opening in the top. A radioactive source is placed into the cylinder and a dose
rate is measured. The control unit (red arrow) has buttons to select the isotope being measured and a display that shows the measured
activity. The device is calibrated to convert from units of exposure (e.g., mSv/hr) to units of activity (e.g., MBq) for each isotope. It is
critically important to remember that the dose calibrator requires that the user provide the information on which isotope is being
measured so that the appropriate calibration is used.

and high activities and at every point in between. A linearity test can be done simply by starting with
a known amount of Tc-99m and measuring it in the dose calibrator periodically over several half-lives.
Because decay is exponential, a correct measurement looks like an exponential curve and it is difficult
to visually determine whether the curve matches what would be expected. However, if those same
data points are plotted on semi-log graph paper (Figure 4.3), an exponential process appears linear.
Nonlinearity of the curve indicates a problem with the instrument.
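The check itself is simple arithmetic: each measurement is compared with the activity predicted by exponential decay from the starting assay (the same data that trace a straight line on semi-log paper). A minimal Python sketch follows; the tolerance and the example readings are made up for illustration.

TC99M_HALF_LIFE_H = 6.02   # hours

def expected_activity(a0_mci, hours):
    # Simple exponential decay from the initial assayed activity.
    return a0_mci * 0.5 ** (hours / TC99M_HALF_LIFE_H)

def linearity_check(measurements, a0_mci, tolerance=0.05):
    """measurements: list of (hours elapsed, measured activity in mCi).
    Returns the worst fractional deviation from the decay prediction and
    whether it stays within the (illustrative) tolerance."""
    worst = max(abs(m - expected_activity(a0_mci, t)) / expected_activity(a0_mci, t)
                for t, m in measurements)
    return worst, worst <= tolerance

readings = [(0, 700.0), (12, 176.0), (24, 44.1), (48, 2.78), (60, 0.70)]
print(linearity_check(readings, 700.0))   # small deviation -> passes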
Finally, the constancy of measurements must be verified. This is done with a sealed source of a
long-lived isotope that can be measured every working day. The measured value of radioactivity is
compared with the known activity of the source based on the time that has elapsed since its manu-
facture. If the measurement is outside the expected range, the calibration of the dose calibrator must
be tested and fixed. Two commonly used isotopes for constancy sources are cobalt-57 (Co-57), with
γ energy of 122 keV (similar to the 140 keV of Tc-99m) and T½ 272 d and cesium-137 (Cs-137) with
γ energy of 662 keV (similar to the 511 keV from β+ annihilation) and T½ 30 y.
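The expected reading of a constancy source is simply the assayed activity decay-corrected for the time since manufacture, as in this rough Python sketch (the assay value, the elapsed time, and the tolerance are hypothetical):

HALF_LIFE_DAYS = {"Co-57": 272.0, "Cs-137": 30.0 * 365.25}

def decay_corrected_activity(assay_mci, isotope, days_elapsed):
    # Activity expected today for a sealed reference source.
    return assay_mci * 0.5 ** (days_elapsed / HALF_LIFE_DAYS[isotope])

def constancy_ok(measured_mci, expected_mci, tolerance=0.05):
    return abs(measured_mci - expected_mci) / expected_mci <= tolerance

# A Co-57 source assayed at 5.0 mCi, measured 400 days after manufacture.
expected = decay_corrected_activity(5.0, "Co-57", 400)
print(round(expected, 2), constancy_ok(1.82, expected))   # 1.8 True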
Other issues that may affect ionization chamber and dose calibrator measurements are discussed
in Chapters  7 and 12. Suffice it to say that these are relatively simple devices. However, they are
critically important in practice and seemingly simple mistakes can result in significant errors in dose
measurement.

Figure 4.3.  Linearity testing. A sample of Tc-99m was measured at multiple time points over more than 10 half-lives. When the
measured activity is plotted on a linear scale (A), activity versus time, the decay is exponential and it is impossible to ascertain visually
whether the points on the graph are where they ought to be. When the same data are plotted on a semi-log scale (B), activity on a loga-
rithmic scale (y-axis) versus time on a linear scale (x-axis), the result is a straight line. It is readily apparent if any of the measured points
deviate from the line.

Survey Meters

Because radiation can contaminate a surface but not be visible to the naked eye, it is important to
survey all areas in which radioactive materials are used. Although the instruments discussed previ-
ously are used to quantify the amount of radioactivity present with a fairly high degree of accuracy,
a survey meter is designed to maximize sensitivity even at the expense of accuracy. Survey meters
typically use GM tubes, also known as Geiger counters (Figure 4.4), which are physically quite similar
to ionization chambers with a base unit containing a battery and electronics, although in a GM tube
the tube itself is typically separate, attached by a wire and able to be moved independently of the base
unit. Furthermore, the end of the tube often has a cover that can be removed to optimize detection of
poorly penetrating radiation, such as α or β particles. GM tubes and ionization chambers have two
critical differences: GM tubes are operated at a much higher voltage across the conductors than ion-
ization chambers; and GM tubes are sealed and filled with a specific gas, typically a noble gas doped
with a halogen gas.
These differences compared with ionization chambers combine to permit much greater sensitivity
in the detection of ionizing radiation. We learned that when an ionization occurs within an electri-
cal potential (i.e., between two charged conductors), the ionization electron (because of its negative
charge) moves toward the positively charged conductor (the anode). Because of the very high voltage
across the conductors in a GM tube, these ionization electrons are accelerated very briskly toward
the anode; so briskly, in fact, that they can cause ionizations. Imagine an ionization electron flying
toward the anode. It causes an ionization and continues toward the anode along with the additional
electron just released through ionization. Each of those electrons causes an additional ionization, so
now there are four electrons flying toward the anode, and so on. This exponential increase in the


Figure 4.4.  A Geiger-Mueller survey meter. (A) This survey meter includes two separate Geiger-Mueller tubes, a hand-held unit for
surveying surfaces (black arrow) and a floor unit (red arrow) that makes it easy to survey shoes for potential contamination. The base
unit (black arrowhead) has controls to adjust the sensitivity of the device and a display that allows the quantification of counts per
minute. (B) This is a schematic of how the tube functions. The tube/chamber itself (in yellow) is a sealed chamber. The outer wall and
the inner conductor have a potential across them (Vdc). When incident radiation enters the chamber (red arrow), ionizations cause a
Townsend avalanche that results in current flowing through the circuit. This current is amplified and counted in the base unit. Kieran
Maher, Geiger-Mueller counter basic circuit. June 13 2006. http://commons.wikimedia.org. Accessed October 29, 2013. Reproduced with
permission.
number of ionizations resulting from a single ionization event caused by incident radiation is called
the Townsend avalanche (Figure 4.5).
Because of the Townsend avalanche, the amount of current caused to flow through the meter per
ionization is orders of magnitude higher than with an ionization chamber. Therefore, far fewer events
are needed within the chamber to be detected by the electronics. This accounts for the high sensitivity
of the GM tube. As with an ionization chamber, the energy of the incident radiation cannot be determined.
Although GM tubes are far more sensitive than ionization chambers, there are limits to their uses
because of a phenomenon called dead time. Imagine this massive cascade of electrons that is drawn
to the anode. Once enough electrons (with their negative charges) reach the anode, the potential difference
across the conductors is diminished and the tube temporarily stops functioning. This leads to a brief period
called dead time during which events are not detected. When subjected to high levels of ionizing
radiation, the dead time occurs frequently and the meter is not able to accurately measure the relative
levels of radioactivity. Thus, survey meters are used to detect low levels of radiation, whereas ioniza-
tion chamber meters are used to quantify higher levels of radiation.
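Dead-time losses are often described with the non-paralyzable detector model, in which the true rate is recovered from the measured rate as n = m / (1 - m * tau). The Python sketch below uses a made-up dead time simply to show how quickly a GM tube becomes unreliable at high count rates; it is not a specification of any particular instrument.

def true_count_rate(measured_cps, dead_time_s):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    lost_fraction = measured_cps * dead_time_s
    if lost_fraction >= 1.0:
        raise ValueError("Detector saturated; the true rate cannot be recovered.")
    return measured_cps / (1.0 - lost_fraction)

# With a hypothetical 100-microsecond dead time, a reading of 5,000 counts per
# second has already missed half of the true events; at 100 counts per second
# the loss is negligible.
print(round(true_count_rate(5_000, 100e-6)))   # 10000
print(round(true_count_rate(100, 100e-6)))     # 101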
The final point worth discussing with GM tubes is the presence of a halogen gas within the tube.
These halogen gases are present to provide for quenching. So far, we have talked only about the ion-
ization electron being drawn to the anode. There is also the positively charged ionized atom being
drawn to the cathode where it can receive an electron and return to ground state. The problem is that
many gases emit a photon upon returning to ground state (e.g., a neon sign); this photon could cre-
ate its own ionization and its own avalanche, leading to an endless loop of avalanches in the system;
thus, the system must be quenched. The halogen gas can interact with the ionized gas, absorbing its
energy and charge to prevent the photon emission. These ionized halogens can then form back into
their neutral gas molecules without photon emission.


Figure 4.5.  The Townsend avalanche. There is a high potential across the anode (A) and cathode (C). When radiation traverses the
Geiger-Mueller tube (red arrow) ionization results in the flow of an electron toward the anode. Because of the high potential, the electron
is accelerated, causing additional ionizations and additional liberated electrons being accelerated toward the anode and, in turn, causing
additional ionizations. When the avalanche reaches the anode, there is enough negative charge to decrease the potential and cause the
tube to transiently cease to function (dead time). Björn Nordberg, Elektronlavin. July 11, 2006. http://commons.wikimedia.org. Accessed
October 30, 2013. Adapted with permission.

Proportional Counters

A final type of meter is called the proportional counter. These operate at a voltage between
that of an ionization chamber and that of a GM tube. In proportional counters, the current caused by a
detected ionization event is proportional to the energy of the ionizing radiation (thus the name pro-
portional counter). Proportional counters are not widely used in clinical nuclear medicine and so are
not further discussed.
5 RADIATION DETECTORS—SINGLE PHOTON

In this chapter, we will discuss our first imaging device:  the gamma camera. This area of nuclear
medicine is called single photon imaging and it derives its name from the fact that we are detecting
individual photons (γ-rays or x-rays). The modern gamma camera is derived from the revolutionary
invention of Hal Anger in 1956 and, although significantly improved from those first instruments
designed almost 60 years ago, the basic components are relatively unchanged. The gamma camera,
which is used for essentially all modern single photon imaging, is comprised of three basic compo-
nents: (1) collimator, (2) scintillator, and (3) photomultiplier tubes (PMTs) (Figure 5.1).

Collimators

The outer-most component of the gamma camera is the collimator, and the easiest way to understand
the function of a collimator is to imagine a photographic camera: the collimator would be the lens
and the rest of the gamma camera would be the camera’s detector. Imagine that we are trying to take
a picture of something and that something has emissions radiating out of it in a sphere. We need to
focus those emissions onto our detector to have a sharp image. A properly adjusted camera lens
focuses the in-focus light onto the detector and excludes the light that is out of focus.
The simplest collimator design is called the pinhole collimator and is designed just as you would
expect from the name, with a single tiny hole for photons to pass through. However, a collimator
must not only let photons pass through in a way that focuses them on the detector, but also
block out all of the out-of-focus photons. Thus, a pinhole collimator is typically a cone made out
of lead (thick enough to block out even the most energetic photons typically seen in nuclear medicine
imaging) with an aperture (the pin hole) at the end. Some pinhole collimators have removable aper-
tures so that different size holes can be used for different applications. Examples of pinhole collima-
tors are shown in Figure 5.2.
When a photon passes through the aperture, it proceeds to the detector. If you imagine an object
that you are imaging with a pinhole collimator, a photon coming from the top of the object is only


Figure 5.1.  The components of a gamma camera. There are three primary components. The outermost is the collimator (A); two incident
photons (arrows) hit the collimator. The one that is perpendicular to the collimator is able to pass through, whereas the one hitting at an
angle is blocked. The photon that passes through the collimator then contacts the scintillating crystal (B). Its deposition of energy in the
crystal results in the output of visible light (lightning bolt), which then interacts with one of an array of photomultiplier tubes (C) that are
in a housing (D).

Figure 5.2.  Pinhole collimators. Both pinhole collimators consist of a shielded cone along with an aperture (arrows). (A) Collimator
with a removable aperture (set screw to attach it indicated by the arrowhead), such that a smaller aperture can be used when higher
resolution is favored over sensitivity, whereas a larger aperture provides greater count rate in exchange for lower spatial resolution.
(B) Collimator with a fixed aperture.

able to pass through the aperture if it is going downward and is then detected at the bottom of the
detector. Conversely, a photon from the bottom must be traveling upward; therefore, a pinhole col-
limator generates an image that is inverted compared with the object being imaged (Figure 5.3).
Furthermore, imagine moving the object closer to or further from the aperture. Doing so makes the
object appear smaller or larger, respectively. Thus, another important property of the pinhole col-
limator is magnification. If you are imaging a small object, the magnification can be advantageous.
However, if you are imaging a thick object, the furthest point of the object is magnified more than the
nearest point of the object and so the resulting image is distorted. Therefore, pinhole collimators are
typically used to image only relatively thin objects (those with relatively little depth).
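The geometry can be captured in a one-line formula: the magnification is the ratio of the aperture-to-detector distance to the object-to-aperture distance. The distances in the Python sketch below are invented purely to show how depth produces distortion.

def pinhole_magnification(aperture_to_detector_cm, object_to_aperture_cm):
    # Magnification of an ideal pinhole: image distance over object distance.
    return aperture_to_detector_cm / object_to_aperture_cm

print(pinhole_magnification(20, 5))    # 4.0: object 5 cm from the aperture
print(pinhole_magnification(20, 10))   # 2.0: same object pulled back to 10 cm
# A structure spanning 5-8 cm of depth is therefore stretched unevenly:
print(pinhole_magnification(20, 5) / pinhole_magnification(20, 8))   # 1.6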
Finally, regarding aperture size, the smaller the aperture, the more precisely localized is the image.
In other words, if the aperture is so tiny that a photon can barely fit through, you know for certain
exactly where it came from based on where it hits the detector. If the aperture is larger, photons can
come from anywhere within a somewhat larger area and hit the same area of the detector. Therefore,
the image is less sharp having lower spatial resolution (spatial resolution is the measure of how far
apart two points need to be to be detected as two separate points). One might think, then, that it is
best to have the smallest possible aperture. However, spatial resolution is only one piece of the puzzle.
The other dominant piece of the puzzle is sensitivity, which is the measure of what percent of the
photons hitting the collimator get through to the detector. The smaller the aperture, the lower the
sensitivity, the fewer photons are detected, and the noisier the image. Thus, in collimator design there
is a tradeoff between resolution and sensitivity and it must be tailored to the application.

Figure 5.3.  Magnification from a pinhole collimator. (A) The pinhole collimator (gray cone) is used to image a line that is near the aper-
ture. While the image is inverted, it is also projected onto a large area of the detector (so it is magnified). (B) The object is moved farther
from the aperture and so the magnification is less. (C) When an object with depth is imaged, the resulting image is distorted because
the nearer parts of the object are magnified more than the distant parts. This is why pinhole collimators are typically only used to image
relatively small objects with relatively little thickness.

Although pinhole collimators are the simplest and illustrate very nicely the critical aspects of col-
limators, they are used for only a few special applications in clinical nuclear medicine. Far and away
the most widely used collimator design is the parallel hole collimator (Figure 5.4). This collimator is
typically made out of a slab of lead with an array of holes in it, almost always hexagonal holes in a
honeycomb pattern. The pieces of lead between the holes are called septa. The purpose of a parallel
hole collimator is to allow only those photons that are hitting the collimator perpendicular to the face
of the camera to pass through the hole. All of the other photons are blocked by the lead.
To visualize this, imagine that a single hole in the collimator is a piece of pipe and the photon is a
pencil (Figure 5.5). If the piece of pipe is very short and with a very wide diameter, the pencil can come
in at almost any angle and pass through; conversely, if the pipe is longer but with the same diameter,
the angles from which the pencil can pass through decrease. Similarly, imagine keeping the length of
the pipe the same and varying the diameter: the smaller the diameter, the smaller the possible angles
that pass through. Similar to a pinhole collimator, the more photons are allowed to pass through, the
higher the sensitivity and the lower the resolution.
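The pipe-and-pencil picture translates directly into geometry: the acceptance angle of a hole follows from its diameter and length, and a simple geometric estimate of collimator resolution (ignoring septal penetration) worsens with hole diameter and with distance from the collimator face. The hole dimensions in the Python sketch below are illustrative only, not specifications of any real collimator.

import math

def acceptance_half_angle_deg(hole_diameter_mm, hole_length_mm):
    # Largest angle from perpendicular at which a photon can pass straight through.
    return math.degrees(math.atan(hole_diameter_mm / hole_length_mm))

def geometric_resolution_mm(hole_diameter_mm, hole_length_mm, source_distance_mm):
    # Simple geometric estimate: resolution degrades with distance from the face.
    return hole_diameter_mm * (hole_length_mm + source_distance_mm) / hole_length_mm

print(round(acceptance_half_angle_deg(1.5, 25), 1))     # ~3.4 degrees
print(round(acceptance_half_angle_deg(1.5, 40), 1))     # longer hole: ~2.1 degrees
print(round(geometric_resolution_mm(1.5, 25, 100), 1))  # ~7.5 mm at 10 cm from the face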
Remember that a collimator has two functions. The first is to permit photons to pass through to
the detector when they should (i.e., those that are hitting the aperture at the right point with the right
angle). The other is to block any other photons from making it through to the detector. In a pinhole
collimator, the lead cone is designed to block the highest energy photons that might be used, so the
same collimator can be used regardless of photon energy. With a parallel hole collimator, however, the
thickness of the septa has to be tailored to the expected incident photon energy.

Figure 5.4.  A parallel hole collimator. (A) Multiple hexagonal holes are arranged in a honeycomb pattern. (B) When an incident photon
(arrow) is perpendicular to the collimator face it is able to pass through the hexagonal hole. An incident photon that is not perpendicular
to the collimator face is blocked by the septa.

Figure 5.5.  The effect of hole diameter and length on collimator resolution. The image represents a piece of pipe with pencils passing
through it (arrows), which are entering at the most extreme angle possible while still passing through. By making the hole diameter bigger,
the acceptable angle increases (spatial resolution decreases). By making the piece of pipe longer, the acceptable angle decreases (spatial
resolution increases). When a collimator must be designed with thick septa to block high incident energy, sensitivity goes down because
more of the collimator is lead and less left for holes. To compensate, hole diameter is increased to improve sensitivity, but spatial resolu-
tion decreases.

Why not always use a collimator with septa thick enough to block the highest energy photons
that could be encountered? To block the undesirable photons, the septa need to be thicker, so the
holes must be spaced farther apart, resulting in lower sensitivity. In order for sensitivity to be suf-
ficiently high to form an acceptable image, the hole diameter needs to increase. By increasing hole
diameter, spatial resolution decreases. Therefore, a parallel hole collimator designed for high-energy
photons has lower spatial resolution than one designed for low-energy photons assuming they are
both designed to have acceptable sensitivity.
From the previous discussion, one could see that there are significant tradeoffs in collimator design
primarily related to sensitivity and resolution (assuming that a collimator is tailored to a specific
energy range). Parallel hole collimators are conventionally named in such a way as to first give the
appropriate energy range (low, medium, high, ultrahigh) and then the relative resolution. Keeping in
mind that the names are chosen by companies who hope to sell collimators, it is not to be wondered
at that there are no “low-resolution” collimators. Rather, the collimators that have the lowest resolu-
tion (and therefore highest sensitivity) are often called all-purpose or general purpose. Collimators
with higher resolution and lower sensitivity are called high resolution; ultrahigh-resolution collima-
tors are also available for some applications. Examples of commonly used collimators include low
energy high resolution and medium energy general purpose. One important consideration is that the
spatial resolution permitted by a collimator can be compared only with other collimators designed
for the same energy range. Thus, a low-energy high-resolution collimator has higher resolution than a
low-energy all-purpose collimator. However, a high-energy high-resolution collimator does not allow
for higher resolution than a low-energy all-purpose collimator.
Low-energy collimators are typically designed to appropriately collimate energies up to about
170–200 keV, whereas medium-energy collimators work well up to about 300–400 keV. High- and
ultrahigh-energy collimators are used above this. Remember that many isotopes emit a variety of
γ-rays each with its own characteristic energy; it is important to choose a collimator for the highest
energy photons even if they may not be the most abundant. Of course, if a specific γ-ray has incredibly
low abundance it will not significantly interfere with the imaging.
Finally, many other collimator designs are possible with the most common being converging and
diverging collimators (Figure 5.6). These are similar to parallel hole collimators in that they are lead
sheets with an array of holes. However, the holes are not parallel to one another, rather they are
angled. The name is always based on the behavior of the holes when looking from the inside of the
camera rather than looking at the front of a collimator. Thus, in a diverging collimator, the holes are
angling away from the center of the collimator and in a converging collimator the holes are angling
toward the center of a collimator.
If you imagine an object being imaged and the photons that it emits that are allowed to pass
through the collimator, you can see that in a diverging collimator a large object can fit onto a smaller
camera, so a diverging collimator is used to minify the object. These were more common in the early
days of gamma cameras when the detectors were quite small. However, most modern gamma cam-
eras have a large enough imaging area with a parallel hole collimator that diverging collimators are
rarely necessary. Furthermore, by fitting a larger object onto a smaller detector, the spatial resolution
decreases.
A converging collimator magnifies an object and results in higher spatial resolution. However, just
like a pinhole collimator, the magnification of the object depends on its distance from the detector.

Figure 5.6.  Parallel hole, diverging and converging collimators. The object being imaged is signified by the arrow and the collimator by
the arrowhead. Below the collimator is the rest of the camera and in black is a representation of the appearance of the object being imaged
on the camera. (A) On the parallel hole collimator, all holes are perpendicular to the face of the camera and the object being imaged
appears true to size on the camera, regardless of how far it is from the camera. (B) On a diverging collimator, the holes are angled away
from the camera face. Thus, the field of view is larger than that of the detector face, but objects being imaged are minified; the farther they
are from the camera the more the minification. (C) The converging collimator has holes that angle in the opposite fashion compared with a
diverging collimator. Therefore, an object being imaged is magnified and the field of view is made smaller.

In larger objects, there is distortion because the deeper parts of the object are magnified more than
the shallow parts. The final consideration with converging and diverging collimators is that because
they magnify or minify, respectively, based on distance they cannot be used for quantitative imaging
in most cases because it is not practically possible to correct for the distortion caused by object depth.

Scintillators

The next part of the gamma camera, after the collimator, is the scintillator. Generically, a scintillator is
a material that emits light in response to energy deposition. In the case of a gamma camera, when ion-
izing radiation interacts with a scintillator, the energy deposited by the radiation results in the emis-
sion of visible light from the scintillator. Until very recently, virtually all gamma cameras were made
with thallium-doped sodium iodide crystals. What that means is that when the sodium iodide crystals
are formed, a tiny amount of thallium is added to optimize the scintillation. Recently, some alternative
materials have begun to be used in some commercially released gamma cameras, such as cadmium
zinc telluride, but they are beyond the scope of this book and currently have very limited clinical use.
There are many properties of a scintillator that contribute to its usefulness for gamma camera use.
One of these is the stopping power (determined largely by the material's effective atomic number, or Z number). When ionizing radiation is incident on a
scintillator, the radiation can pass right through the material (and so nothing happens), or the radia-
tion can interact with the material, depositing its energy in the scintillator and causing scintillation.
The stopping power of the material determines the likelihood that a photon of a given energy will be
stopped by the material. Thus, a higher Z number results in a higher likelihood of a photon stopping
in the material and causing scintillation.

The Z number is only one factor in determining the likelihood of stopping a given photon. Another
factor is the thickness of the scintillator (the thicker it is, the less likely a photon could pass through
it without being stopped). As with virtually everything related to system design, there are tradeoffs
with scintillator thickness. A  thicker scintillator results in higher sensitivity (a greater percentage
of the photons that hit it are stopped and result in light output) but is heavier and more expensive.
Furthermore, after the scintillation light comes out, it spreads out as it is leaving the crystal; the
thicker the crystal, the more the light is spread before it is detected and localized and the resolution
is degraded (Figure 5.7). Thus, the thicker the scintillator, the lower is the maximum resolution of
the system. This consideration has practical limits, however, because other aspects of the system also limit the spa-
tial resolution. For example, there is no gain in having a scintillator thin enough to achieve a higher
resolution than would be permitted by the collimator you intend to use. The most commonly used
sodium iodide crystal thickness offered in gamma cameras is about three-eighths of an inch. However,
many vendors have offered crystals in five-eighths or even one-inch thickness; although these systems
have lower spatial resolution compared with a three-eighths of an inch crystal design when used with
low-energy collimators, the resolution is the same when using high-energy collimators (because the
collimator's resolution is the limiting factor) and the sensitivity is significantly higher.
Therefore, these systems are sometimes used when the intended application predominantly involves
high-energy emitters.
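The sensitivity gain from a thicker crystal follows simple exponential attenuation: the fraction of incident photons that interact is 1 - exp(-mu * t). The attenuation coefficient in the Python sketch below is an approximate value for NaI(Tl) at 140 keV and should be checked against tabulated data; the thicknesses correspond to the crystal sizes mentioned above.

import math

MU_NAI_140_KEV = 2.2   # approximate linear attenuation coefficient, per cm

def fraction_stopped(mu_per_cm, thickness_cm):
    # Fraction of incident photons that interact somewhere in the crystal.
    return 1.0 - math.exp(-mu_per_cm * thickness_cm)

for inches in (3 / 8, 5 / 8, 1.0):
    cm = inches * 2.54
    print(f"{inches:.3f} in -> {fraction_stopped(MU_NAI_140_KEV, cm):.1%} stopped")
# Roughly 88%, 97%, and 99.6% at 140 keV: thicker crystals buy sensitivity,
# with the largest relative gains coming at higher photon energies.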
After ionizing energy is deposited into a scintillator, there is a time period over which the light is
output and that length of time is a property of the scintillator material. For the purposes of single
photon imaging, fast light output is not an important consideration, but it is for positron emission
tomography (PET), discussed in Chapter  6. A  much more important property of a scintillator for
single photon imaging is the light output (i.e., the amount of light that is released in response to a
given energy deposition). The more light is output, the higher the signal. With sufficient light output,
it is possible to detect the scintillation caused by a single interaction in the crystal. Furthermore, the
higher the signal, the better the energy can be resolved and the origin of the light can be localized.
Sodium iodide has fairly high light output, accounting for its popularity for gamma cameras.
There are many other properties of scintillators that can affect choice and system performance.
For example, scintillators are sensitive to temperature in that light output can vary with changing


Figure 5.7.  The effect of crystal thickness on spatial resolution. The arrow represents an incident gamma on the crystal (gray box).
Scintillation in the crystal results in visible light that emits spherically from the point of interaction in the crystal (black sphere). In A the
crystal is relatively thin and so the sphere of visible light hitting the PMTs is relatively small, whereas in B with a crystal that is much
thicker the light spreads out more before it reaches the PMTs and so the exact location of the origin of the scintillation is less clear,
resulting in lower spatial resolution.
temperature. In fact, some materials can have fairly dramatic changes in light output across a fairly
narrow temperature swing within the range of typical room temperatures. This can be important if,
for example, one is designing a mobile camera for field use. Other examples include durability, manu-
facturability, cost, and stability across parameters other than temperature, such as humidity.

Photomultiplier Tubes

The final component of the gamma camera is the PMT array. Remember that the purpose of a gamma
camera is to create an image. One could use film to convert the scintillated light into an image, but
there are significant limitations to this approach, chiefly that the amount of light emitted is too low
for good image quality. Gamma cameras are designed to form their images electronically and so the
light output by the scintillator must be converted into an electrical signal. Furthermore, the output
signal must be magnified so that it is sufficiently strong that it can be detected. The PMT accomplishes
both of these goals.
A PMT is a vacuum tube designed to convert light into an amplified electrical signal (Figure 5.8).
For a PMT to function appropriately, it must be optically coupled to the scintillator material. Of the
many components of a PMT, first is a front material that releases electrons when visible light hits it;
such a material is called a photocathode. The released electrons pass through a focusing electrode
that aims them at the first dynode. The series of dynodes is a critical component of a PMT. Each
dynode is at a higher voltage than the one before it and is focused such that an electron that strikes
it is redirected to hit the next dynode. Because of their material and their voltage, when an electron
hits a dynode, not only does it bounce to the next dynode, but also several additional electrons are
released and sent to the next dynode. At the next dynode, the process repeats and so on, resulting in a


Figure 5.8.  A photomultiplier tube. The incident light photon strikes the photocathode, resulting in electrons moving through the
focusing electrode, which directs the electrons to the first dynode. Because each successive dynode is at a higher and higher potential,
the electrons are multiplied as they move from dynode to dynode until ultimately they reach the anode. By the time they reach the anode,
there are sufficient electrons to result in a detectable electrical signal that is proportional to the energy of the initial incident photon. Arpad
Horvath, Photomultiplier tube. April 24, 2006. http://commons.wikimedia.org. Accessed April 17, 2013. Adapted with permission.
dramatic multiplication of the initially released electrons. Finally at the end there is an anode wherein
the electrons result in electrical current. That current is detected as the output signal from the PMT.
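The multiplication is easy to quantify: if each dynode releases several secondary electrons per incident electron, the overall gain is that factor raised to the power of the number of dynodes. The numbers in the Python sketch below are typical orders of magnitude, not specifications of any particular tube.

def pmt_gain(electrons_per_dynode, n_dynodes):
    # Each stage multiplies the electron count, so gain grows exponentially.
    return electrons_per_dynode ** n_dynodes

# Ten dynodes each releasing about five secondary electrons turn a single
# photoelectron into roughly ten million electrons at the anode.
print(f"{pmt_gain(5, 10):,}")   # 9,765,625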
Converting visible light into an amplified electrical signal is excellent, but the PMT has another
critical capability: the total signal output from the PMT is proportional to the energy of the photon
that initially hit the scintillator. Therefore, it is possible to filter output signals based on the initial
photon energy, which is important for image formation, as discussed later in this chapter. The above
statement is literally true in a design where a scintillator is coupled to a single PMT. However, typi-
cally there is an array of PMTs coupled to a scintillator. A single scintillation can be detected by more
than one PMT in the array. In this case, the total signal from all of the PMTs in the array resulting
from a single scintillation is still proportional to the incident photon energy.
Furthermore, the relative amount of signal from the various PMTs surrounding a scintillation can
be used to figure out exactly where on the scintillator the scintillation occurred. This is called Anger
logic (named after Hal Anger, the inventor of the gamma camera, and not after the emotion of anger
induced by the study of nuclear medicine physics). Although the specific mathematics of Anger logic
are beyond the scope of this book, to get an idea of how it works, imagine a very simple array of three
PMTs in a line (Figure 5.9). If a scintillation occurs exactly in the middle of the middle PMT, most
of the signal is seen in the middle PMT with an equal amount of output signal from the two outer
PMTs. If the scintillation happens between two of the PMTs, the signal is the same in those two PMTs
and the remote PMT has a much lower output. It is much more complex to imagine a whole array of


Figure 5.9.  Anger logic. (A) Three PMTs are arrayed below the scintillating crystal (light gray box). A γ-ray interacts with the crystal at
the exact center of the middle PMT. (B) The relative output signal from the PMTs. The middle PMT has the greatest signal. Because the
light spreads through the crystal, there is a weak signal from the other two PMTs and that signal is the same in the two because they are
both the same distance from the scintillation event. (C) The incident γ-ray is now between two of the PMTs. (D) The relative output of the
first two PMTs is equal, whereas the third PMT has a very low output because the scintillation event was rather far away. In this way, Anger
logic uses the relative output signal from the various PMTs to calculate the location of the initial scintillation event.
PMTs and to calculate the exact location of an event based on the light detected in each PMT, but the
above example should convince you that it is mathematically possible.
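A bare-bones version of that calculation is the signal-weighted centroid of the PMT positions, with the summed signal tracking the deposited energy; real cameras layer linearity, energy, and uniformity corrections on top of this, so the Python sketch below is only conceptual and its PMT positions and signal values are invented.

def anger_estimate(pmt_signals, pmt_positions_cm):
    """Estimate event position as the signal-weighted centroid of the PMT
    positions; the total signal is proportional to the deposited energy."""
    total = sum(pmt_signals)
    x = sum(s * p for s, p in zip(pmt_signals, pmt_positions_cm)) / total
    return x, total

positions = [0.0, 5.0, 10.0]   # three PMTs in a line, as in the example above

# Event directly under the middle tube: symmetric weak signal in the outer two.
print(anger_estimate([10, 80, 10], positions))   # (5.0, 100)

# Event midway between the first two tubes: they share the light equally. The
# naive centroid (3.25 cm) is pulled toward the detector center, one reason
# real systems apply spatial linearity corrections.
print(anger_estimate([45, 45, 10], positions))   # (3.25, 100)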

The Gamma Camera

Taking the above components and adding computer controls and recording equipment results in a
functioning gamma camera (Figure 5.10). A modern gamma camera is entirely digitally controlled;
the operator need only specify the isotope being imaged, the collimator being used, and the param-
eters of the image and the control systems do the rest.
One final consideration before discussing image acquisition is scatter. All of the previous discussion
on photon collimation, interaction with scintillators, and subsequent signal output via PMT assumed
that the incident photon came directly and linearly from the emission source. However, it is possible
for the photon to interact with matter along the way and be bent off course, a process called Compton scatter.
A scattered photon contributes spurious information to the image (Figure 5.11), so it is desirable to
reject scattered photons from the image. Luckily, when a photon is scattered it loses some energy,
which can be exploited. By setting the camera for a specific isotope, only detected photons with the
energy expected for that isotope are accepted; scattered photons with their lower energy are rejected
from the image. Typically a window is set about the photon energy of the isotope being imaged; the
wider the window the higher the count rate but the more scattered photons are included in the image.
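Energy windowing is simple arithmetic: a window is defined as a percentage of the photopeak energy, and any detected photon falling outside it is rejected. For example, a 20% window around the 140 keV photopeak of Tc-99m accepts photons between 126 and 154 keV, as the short Python sketch below shows.

def energy_window(photopeak_kev, window_percent):
    # Symmetric window: e.g., 20% about 140 keV spans 126-154 keV.
    half_width = photopeak_kev * window_percent / 200
    return photopeak_kev - half_width, photopeak_kev + half_width

def accepted(photon_kev, window):
    low, high = window
    return low <= photon_kev <= high

window = energy_window(140, 20)
print(window)                  # (126.0, 154.0)
print(accepted(140, window))   # True: unscattered photopeak photon
print(accepted(112, window))   # False: photon that lost energy to scatter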

Figure 5.10.  A dual-headed gamma camera. This camera has two large field-of-view planar sodium iodide crystals that are shown
configured 180 degrees apart (facing each other) and can thus provide opposing views (e.g., anterior and posterior) at the same time.
The detectors can also rotate around the subject and can also be put into a 90-degree configuration.

Figure 5.11.  A point source is being imaged (black dot). The parallel hole collimator is denoted by the arrowheads and the gray box
below it is the remainder of the gamma camera. In A the photon comes out perpendicular to the collimator and so passes through and cre-
ates a count by the camera in the location of the point source. In B the photon comes out in a way that it would not even hit the collimator
(and if it did it would be blocked), but along the way it is scattered (its original course is altered) and it is now traveling perpendicular to
the collimator face and can pass through. This creates an appearance of a point source shown in gray. Thus, the detection of a scattered
photon gives inaccurate information.

The only way to reduce the effect of scatter in single photon imaging is through energy resolution
(in contrast to PET, see Chapter 6). Therefore, the energy resolution of the gamma camera must be
very accurate. The high light output of sodium iodide crystals contributes to good energy resolution,
making it the perennial favored scintillating material.

Static Planar Imaging

The simplest image acquired with a gamma camera is a static planar image. A planar image is simply
a two-dimensional image and a static image is a snapshot. This is in contrast to a dynamic image,
which is a series of snapshots acquired over some period of time. The relevant parameters of a static
planar image include matrix size, zoom, duration, and collimator choice (made according to the pre-
vious discussion).
Matrix size is an important aspect of imaging. If a black and white photograph is viewed under
a microscope, the individual grains of silver halide that were in the photographic emulsion can be
seen; each grain is a pixel of the photograph. Similarly, a planar image is separated into individual
pixels. Each pixel has a location and an intensity and when all the pixels are put together, they
form an image. The matrix size defines the number of pixels in the image (Figure 5.12); the larger
the matrix size the smaller the individual pixel and the higher the spatial resolution. As with all
imaging parameters, there is a tradeoff: the higher the matrix size, the less information that exists
per pixel.
When an event is detected by the camera it is localized by Anger logic; rather than record the precise
location of the event, the location is recorded into one of the pixels in the matrix. Over the course of
an image acquisition, many events are recorded and placed into the pixels. Now it is easy to see that

Figure 5.12.  An illustration of a detector face divided into various matrix sizes. A is a 1 x 1 matrix meaning that it is one pixel across
and one pixel down for a total of one pixel. B depicts a 2 x 2 matrix, so two pixels across and two pixels down for four total pixels.
Similarly C is a 4 x 4 matrix and D is a 64 x 64 matrix (a commonly used size for some studies, particularly dynamic studies). The total
number of pixels is the number across times the number down. The more pixels, the finer the detail in the resulting image, but the less
information in each pixel.

the more pixels that exist in the image, the fewer events are put into each pixel. What is less obvious
is why this is a bad thing.
Whenever a measurement is made, there is some degree of uncertainty. For example, if an event
occurs right near the border between two pixels, there is a chance that the system will localize it
to the wrong pixel. A good rule of thumb is that the uncertainty is approximately the square root
of the number of observations. So if you localize just one event in a pixel, the uncertainty is the
square root of one, which is also one (1 ± 1 is not an optimal measurement!). Now if 100 events
are recorded in a pixel, the measurement is 100 ± 10 (because the square root of 100 is 10), which
translates into 10% error (10/100). With 10,000 events, the error is 100/10,000 or 1%. Figure 5.13
shows an example of an image with high matrix size and high noise compared with an image with
small matrix size and very little noise to show extremes of the tradeoff between resolution and noise
along with an ideal image with high matrix size and low noise. Practically speaking, most planar
nuclear medicine images are in a 128 x 128 matrix, although smaller and larger matrix sizes are used
in some situations.
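The square-root rule is worth making concrete: the relative uncertainty in a pixel (or in any measurement of counts) falls as one over the square root of the number of counts recorded, as this minimal Python sketch shows.

from math import sqrt

def relative_uncertainty(counts):
    # Poisson counting statistics: uncertainty ~ sqrt(N), so relative error ~ 1/sqrt(N).
    return 1.0 / sqrt(counts)

for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9} counts -> {relative_uncertainty(n):.1%} relative uncertainty")
# 100% at 1 count, 10% at 100, 1% at 10,000, 0.1% at 1,000,000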
Although the matrix size is the number of pixels in the resulting image, the zoom determines
how large the total area of the image is. With a zoom of 1, the image is exactly the size of the
detector face (the maximum size that can be imaged at one time). It is possible to use only part
of the detector; because the matrix size (and the number of pixels) is the same but the area is
smaller, the pixel size is smaller. This allows for higher spatial resolution at the cost of a smaller
field of view.
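Pixel size follows directly from the detector size, the matrix, and the zoom. In the Python sketch below, the 400 mm detector face is a hypothetical round number rather than the specification of any particular camera.

def pixel_size_mm(detector_fov_mm, matrix, zoom=1.0):
    # The imaged field of view is the detector face divided by the zoom,
    # split evenly across the acquisition matrix.
    return detector_fov_mm / zoom / matrix

print(pixel_size_mm(400, 128))           # 3.125 mm pixels at zoom 1
print(pixel_size_mm(400, 64))            # 6.25 mm pixels for a 64 x 64 matrix
print(pixel_size_mm(400, 128, zoom=2))   # 1.5625 mm pixels over half the field of view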
Image duration is, of course, a tradeoff. A  very short image is beneficial for patient comfort,
equipment use, and getting an accurate snapshot of a dynamic process. However, the number of
events observed is fewer over a shorter time period, and so the image has a great deal of uncertainty
(noise). With a longer acquisition, the image quality improves. However, this has practical limits
when discussing imaging living human beings. A  person can lie still for only so long (generally
about 20–25 minutes). Furthermore, expensive equipment must be used efficiently,
so a department cannot spend a whole day imaging a single patient. Therefore, image duration is

Figure 5.13.  The effect of matrix size on resolution and noise. (A) An acquisition in a 64 x 64 matrix. Although there is some noise in
the background, it is relatively homogeneous. However, the definition of the edges of the circle is less clear. (B) An acquisition in a 512 x
512 matrix with the same number of total counts as the 64 x 64 image. Here the edges of the circle are more crisply defined, but the over-
all level of noise is much higher (i.e., the intensity of one pixel compared with its neighboring pixels, when there should be uniformity).
(C) A 512 x 512 matrix acquisition with the same counts per pixel as the 64 x 64 acquisition (i.e., equivalent noise level). This is the best
of both worlds at the cost of an acquisition that took 64 times longer.

optimally planned so that the images in the hardest-to-image patient are at least of the minimum
adequate quality while maintaining manageable scan time. With modern equipment, scans can be
planned for a set amount of time or a set number of counts (number of events recorded), whichever
comes sooner.
Of course, the amount of activity in the patient contributes to the count rate of the scan and
the amount of time needed for good image quality. As discussed in Chapter 3, it is important to
always strive for the minimum necessary radiation dose. Finally, the electronics in the camera can
only detect so many events at a time and so there is a limit to how much activity can be in the field
of view at a time. This is rarely a practical limit, but with some tracer/scan combinations it can
become relevant.

Dynamic Imaging

Whereas a static image is a single snapshot that is a summation of all the events recorded during the
imaging time, a dynamic image allows for the addition of a time dimension to the resulting images.
Thus, instead of acquiring a single long static image, it is possible to acquire multiple shorter images
over the same time period. Taken most simplistically, dynamic imaging in this sense is a static
acquisition broken up into multiple sequential images of shorter duration so that changes over
time can be seen. Just like a movie is really just a series of snapshots played at 24 frames per second,
a dynamic single photon image is a series of snapshots taken over time. Depending on the process
being imaged, however, the duration of the snapshots can be very different. For example, when imag-
ing blood flow to the kidneys (Figure 5.14), each image may be 1 to 3 seconds long, whereas when

Figure 5.14.  Dynamic imaging in the form of a renal scan. Typically renal scans involve two separate dynamic series. (A) The flow
phase follows the flow of the radioactive tracer in the bloodstream, which is quite a fast process. Thus, these images are acquired at
1–3 seconds per frame (this image is displayed at 3 seconds per frame). The blood can be seen entering the aorta (arrow) and in the
subsequent frames it is seen perfusing the kidneys (arrowheads). (B) The functional phase follows the extraction of the radiotracer from
the blood and its excretion into the urine. This is a slower process and so these dynamic images are displayed at 1 minute per frame. The
accumulation of radiotracer in the kidney and subsequent clearance can be appreciated.

imaging the transit of urine from the kidney to the bladder, the images may be done at 30 seconds to
2 minutes per image.
Thus, the length of each frame is determined by the speed of the process being imaged. A very quick
process requires shorter frames and a slower process can use longer frames. It is important to note
that a short frame leaves very little time to collect events and so the shorter the frame the less infor-
mation is collected and the greater the noise. So once again there is a tradeoff between the fineness
of the temporal data that can be collected (i.e., how short the frames can be) and the image quality
needed to learn anything from the information collected.
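As a rough numeric illustration of this tradeoff (the count rate is an invented value), shorter frames simply collect fewer counts and are therefore noisier:

```python
# Sketch of the framing tradeoff: fewer counts per frame means more relative noise.
import math

COUNT_RATE_CPS = 20_000   # assumed counts per second reaching the detector (illustrative)

for frame_s in (1, 3, 30, 60):
    counts = COUNT_RATE_CPS * frame_s
    rel_noise = 1 / math.sqrt(counts)     # square-root rule from earlier in the chapter
    print(f"{frame_s:>3} s/frame: {counts:>9,} counts, ~{rel_noise:.1%} relative noise")
```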

Gated Imaging

In the prior section we considered dynamic imaging to sample a process that is changing over time.
Some processes in the body are cyclical; they repeat over and over. The most relevant to this section
is the beating of the heart. To attempt to image the beating of the heart, one could imagine a very
fast series of dynamic images (in someone with a relatively slow heart rate of 60 beats per minute,
the heart completes one beat every second, so to image the heartbeat split into, say, 8 frames, each
frame could be only 125 milliseconds long), but the very short frame length precludes good image
quality. Because each heartbeat is followed by another heartbeat, one could consider repeating that
series of short dynamic frames over and over, once with each heartbeat. All of the first frames could
be summed together and all of the second frames summed together and so forth. This permits imaging
a very quick process (the heartbeat) but gathering enough data to allow for very good signal-to-noise
ratio.
The previous discussion makes gating seem quite simple. However, it assumes that the heart rate
remains perfectly constant, but our heart rates constantly vary. It is possible that the length of time
between two heartbeats is significantly shorter than average and so the next heartbeat begins before
the last frame starts. It is also possible that the time between two heartbeats is significantly longer
than average so that the last frame is filled with data from closer to the middle of the heartbeat cycle.
Therefore, to create such an image, it needs to be triggered by the cycle that we are trying to image.
That trigger is called a gate, and so the imaging is called gated imaging.
The example of imaging the beating heart is simplified in practice because the heart emits electri-
cal signals as each heartbeat occurs that can be easily recorded with an electrocardiogram. Thus, by
signaling the start of each cycle with the electrocardiogram, the time between beats can be assessed.
There are two ways of performing cardiac gating: retrospective and prospective (Figure 5.15). By
far the most common way in practice is prospective gating, where first several heartbeats are recorded
with the electrocardiogram and the average duration between beats is determined. The user also
determines the number of parts (commonly called bins) into which to split the cycle. Finally, the user
determines how much the cycle length can deviate from the average cycle length and still be consid-
ered valid.
From the previous example, let us say that we are gating into 8 bins and the average duration
between beats is 1 second. Thus, each bin is 125 milliseconds long. Once the imaging starts, when a
heartbeat is detected, it triggers the start of the cycle and the first 125 milliseconds of data go into the


Figure 5.15.  Prospective versus retrospective gating in a patient with atrial fibrillation. (A) In prospective gating, first the average R-R
interval is calculated. In this case it is about 740 ms. Upon an R wave, the system starts to collect data into bins of 92.5 ms (740 ms
divided into 8 bins). If the next R wave hits within some prespecified range of the 740 ms average (often 20%), the beat is accepted and
the data are saved in those bins (black bars). The R-R intervals shown in purple above the ECG are too short and are rejected (note that
half the beats are rejected so image acquisition takes twice as long to achieve a specified number of accepted beats). If the R-R interval
is longer than usual (blue bars) there are some data at the end of the interval that are not recorded. Alternatively, if the R-R interval is
shorter (red bars) the last bin is not completely filled before the next R-wave starts a new beat. (B) In retrospective gating, all the events
are recorded along with the electrocardiogram. Then, each R-R interval is measured and divided into some number of bins (8 in this case)
and the data are separated into bins. This allows the bins to be longer or shorter as needed for each interval between beats. Retrospective
gating helps to remove the influence of arrhythmias but can only be done if the data are acquired in list mode.

first bin, then the second bin, and so forth until all the bins are filled or the next heartbeat is detected.
If the next heartbeat occurs too soon or too late, the data from the preceding cycle are rejected. If the
next heartbeat is detected within the predetermined threshold, the data are recorded and added to
the sum. When enough cycles are recorded, the image is completed. The total duration of the image
acquisition is determined by the number of acceptable heartbeats so the image takes much longer to
complete in someone whose heart rate varies a lot over time compared with someone whose heart
rate is very consistent.
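A minimal sketch of the acceptance logic used in prospective gating, with the numbers from the example above (the ±20% tolerance is an assumed, commonly quoted threshold rather than a fixed rule):

```python
# Prospective gating sketch: 8 bins, 1000-ms average R-R interval, +/-20% window.
AVG_RR_MS = 1000.0
N_BINS = 8
TOLERANCE = 0.20
BIN_MS = AVG_RR_MS / N_BINS          # 125 ms per bin, as in the example

def accept_beat(rr_ms: float) -> bool:
    """Accept the preceding cycle only if its length is close enough to the average."""
    return abs(rr_ms - AVG_RR_MS) <= TOLERANCE * AVG_RR_MS

for rr in (980, 1150, 700, 1230):    # invented R-R intervals in milliseconds
    print(f"R-R of {rr} ms -> {'accepted' if accept_beat(rr) else 'rejected'}")
```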
In retrospective gating, every event is recorded as is the electrocardiogram. Then, after all the data
are recorded, the duration of each cycle is determined and the exact correct frame time is determined
for that cycle (so if there is 1 second between beats the frame time is 125 milliseconds, whereas if there
are 2 seconds between beats the frame time is 250 milliseconds). The bins are then filled retrospec-
tively (thus the name). This is a more accurate way of performing gating because every frame is the
exact same percentage of the cardiac cycle; however, retrospective gating requires the ability to record
every single event during the entire acquisition (called list mode acquisition) rather than summing

them up along the way. This is computationally much more intensive and so is rarely performed with
single photon imaging (although it is becoming increasingly used with PET).
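A sketch of retrospective binning from list-mode data follows (all timestamps are invented). The point is that each cycle is divided into the same fraction of its own R-R interval, however long or short that interval happens to be:

```python
# Retrospective gating sketch: assign each list-mode event to a bin based on the
# fraction of its own cardiac cycle at which it occurred.
N_BINS = 8
r_waves = [0.0, 1.0, 3.0]            # R-wave times in seconds; the second cycle is twice as long
events = [0.10, 0.95, 1.4, 2.9]      # invented list-mode event times in seconds

bins = [[] for _ in range(N_BINS)]
for t in events:
    for start, end in zip(r_waves, r_waves[1:]):   # find the cycle containing this event
        if start <= t < end:
            phase = (t - start) / (end - start)    # fraction of that particular cycle
            bins[int(phase * N_BINS)].append(t)
            break

print([len(b) for b in bins])        # events per bin: [1, 1, 0, 0, 0, 0, 0, 2]
```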

Single Photon Emission Computed Tomography

At this point in the chapter, all the words in single photon emission computed tomography (SPECT)
should be familiar except, perhaps, for tomography. Tomography refers to imaging an object in sec-
tions. Whereas planar imaging flattens a three-dimensional object into a two-dimensional image,
tomography allows for slicing the object into multiple sequential images. Because each section is
distinct, depth and overlap of structures can be understood. Figure 5.16 illustrates how overlapping
objects can be better imaged with tomography.
A complete understanding of tomographic reconstruction is sufficient content for an entire book,
let alone a section in a chapter. However, the basic underpinnings of SPECT reconstruction are really
fairly simple. Imagine trying to look at a sculpture of a famous person’s head with one eye closed (to
remove the depth perception that our binocular vision provides). Looking at it from the front it is
hard to know just how far the head goes back, how long the hair is, or what the hairstyle is. But just
by walking around the object and looking at it from multiple angles, the human mind can quickly


Figure 5.16.  Tomography. Imagine a cube (gray) that contains a cylinder (red) and a cone (blue). The dark gray plank shows the plane
through which the object is sliced and below it shows how that tomographic slice would appear. (A) When sliced toward the bottom in a
transaxial plane, both the cylinder and the cone appear to be circles of similar size. (B) Higher up the cylinder appears the same in the
transaxial plane, but the cone now makes a much smaller circle. (C) Cut in a coronal plane, the cylinder forms a rectangle and the cone a
triangle.


Figure 5.17.  Filtered back projection. In this simple example, two point sources are viewed from multiple angles of 0, 30, 60, 90, 120,
150, and 180 degrees. In each view, the apparent finding is spread along the entire field of view in a line. All of these lines intersect at
the points where the actual findings exist. Because there are multiple overlaps at those points, they become darker and stand out over the
nonoverlapping lines. The more projections (views) are sampled about the object, the better the reconstruction and the less prominent the
residual lines (which form the classic starburst artifact of filtered back projection).

construct a virtual three-dimensional image of the object. SPECT does just that: an object is imaged
from multiple projections and then any desired slice of the object can be reconstructed.
The simplest form of reconstruction is called filtered back projection (FBP) and a simple example
is illustrated in Figure 5.17. In the example, two points of activity are being imaged. Their appear-
ance from multiple points of view is shown. To perform back projection, we assume that what we are
seeing is spread uniformly across the field of view in the axis perpendicular to the camera detector.
When this is done from multiple angles, these counts begin to overlap at the true site of the activity.
The more angles are sampled, the more accurate the reconstruction becomes.
With pure back projection, however, there is always significant artifact from that smearing of the
activity along the perpendicular axis. This is where filtering comes in—filters are used to cut down
some of the lowest-intensity signal to get rid of some of the spurious activity. Of course, no filter is
perfect and there is always the risk of filtering out desirable signal along with the undesirable signal.
Just like with everything in imaging, there is a tradeoff.
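The idea is compact enough to demonstrate in a few lines. The sketch below uses the scikit-image library (assumed to be available; its radon and iradon functions perform forward projection and filtered back projection, respectively). The two-point phantom, matrix size, and number of angles are arbitrary choices:

```python
# Sketch of filtered back projection on a simple two-point phantom.
import numpy as np
from skimage.transform import radon, iradon

image = np.zeros((128, 128))         # phantom: two "point" sources in an empty field of view
image[40, 60] = 1.0
image[90, 70] = 1.0

angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # projection angles in degrees
sinogram = radon(image, theta=angles)                   # forward projection (the acquisition)
reconstruction = iradon(sinogram, theta=angles)         # FBP with the default ramp filter

print(reconstruction.shape)   # a 128 x 128 slice; fewer angles -> more starburst artifact
```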
FBP is still frequently used in clinical imaging, such as in x-ray computed tomography (CT). Because
the number of x-rays reaching the detector in CT is very high, the resulting images can have no vis-
ible artifacts from the FBP. Comparatively few γ-rays reach the detector in SPECT and so SPECT
reconstructed with FBP typically results in easily identifiable artifacts (Figure 5.18). With the advent
of modern computers, another reconstruction option became available: iterative reconstruction. As
the name implies, iterative reconstruction involves multiple repeated steps (iterations). Very simply,
an FBP reconstruction is first done and then the algorithm essentially guesses (through the use of
statistical probabilities) what is the true solution. The statistics behind this are called expectation
maximization (EM), essentially trying to find the solution that is statistically most likely to be correct.
From the example in Figure 5.17, the algorithm would compute the probability that the activity seen
in the streak extending from the points of activity is more likely in two points than fanning out the


Figure 5.18.  (A) SPECT image through the abdomen reconstructed with filtered
back projection. The classic starburst streak artifact is easily recognizable extend-
ing beyond the patient’s body. (B) The same data reconstructed with an OSEM
iterative algorithm essentially eliminates this artifact.

way the FBP reconstruction showed. It would, therefore, place more of the activity in the two points
and less peripherally. This process would be repeated (the next iteration). If all goes well, with a few
iterations the algorithm converges closer to the correct solution (the true distribution of radioactiv-
ity). However, it is possible for the algorithm to be incorrect, introducing artifacts.
The most common currently used method for iterative reconstruction is called ordered subset EM
(OSEM). Rather than performing the statistical analyses on the entire image at once, OSEM breaks
up the image into subsets and computes the statistical probabilities on each subset individually.
This allows for converging on a solution in fewer iterations than would be needed for EM.
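A toy version of the EM update is shown below (the 4 × 3 system matrix and the three-pixel 'image' are invented for illustration and bear no relation to a real scanner). Each pass forward-projects the current estimate, compares it with the measured projections, and multiplicatively updates the estimate; OSEM would apply the same update to subsets of the projection rows in turn:

```python
# Toy MLEM (expectation maximization) reconstruction with a made-up system matrix.
import numpy as np

A = np.array([[1.0, 0.5, 0.0],       # rows = projection bins, columns = image pixels
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
true_image = np.array([2.0, 0.0, 4.0])
measured = A @ true_image            # noiseless "projections" for simplicity

x = np.ones(3)                       # initial guess: uniform activity
sensitivity = A.sum(axis=0)          # A^T * 1
for _ in range(50):                  # each pass through the data is one iteration
    expected = A @ x                                  # forward-project the current estimate
    ratio = measured / np.maximum(expected, 1e-12)    # agreement with the measured data
    x *= (A.T @ ratio) / sensitivity                  # multiplicative EM update

print(np.round(x, 2))                # estimate approaches the true values [2, 0, 4]
```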
Because we are reconstructing several images from multiple angles into single slices, SPECT intro-
duces a new property called angular sampling that impacts spatial resolution. The more angles sam-
pled, the higher the spatial resolution of the resulting reconstructed slices. It is important to note
that the spatial resolution cannot exceed the inherent spatial resolution of the system. For example,
a hypothetical system may be able to achieve a 1-cm spatial resolution with SPECT using low-energy
high-resolution collimators and sampling 128 angles. That same system with the same acquisition
parameters but using a high-energy general purpose collimator may achieve only a 2-cm spatial reso-
lution. Thus, the acquisition parameters should match the clinical needs of the resulting image and the
capabilities of the system as configured for the acquisition in question.
Each image in a SPECT acquisition is acquired for a given amount of time. As was discussed previ-
ously, the signal-to-noise ratio improves with increasing number of counts in the image. For a given
total acquisition time, the more angles that are sampled, the fewer the counts per angle and the more noise per
image (and the more noise in the resulting reconstructed image). This is another consideration that is

needed in determining the optimal acquisition parameters. If the object being imaged contains a high
amount of activity and a 128-angle SPECT acquisition taking 30 minutes in total has low enough
noise to be interpretable, this maximizes the spatial resolution of the resulting image. However, if the
amount of activity is lower and the resulting images are very noisy (so that objects of interest cannot
be reliably differentiated from noise) the spatial resolution is not helpful. In these cases, it may be bet-
ter to sample from, for example, 64 angles, doubling the number of counts per angle while keeping
total acquisition time the same. The resulting image has lower spatial resolution but better signal-to-
noise ratio.
The final aspect of SPECT to be discussed here is orbit. In all of the previous examples, it is assumed
that the detectors would sample around the object in a circular orbit so that the center of the field
of view is centered in the orbit. However, the count rate and the spatial resolution are both best
when the detector is as close as possible to the object being imaged. Because people are generally not
spherical, the detectors can get closer together when above and below the patient than when at their
sides (e.g., about the shoulders). Therefore, it is desirable to allow a noncircular orbit. This is pos-
sible, but requires additional processing. By knowing how far the detector is from the center of the
field of view and understanding how quickly the count rate drops off with distance from the center
of the field of view, the counts from each projection can be scaled to account for the distance from
the center for that projection. This allows for maximizing the count rate by keeping the detector as
close to the patient as possible throughout the acquisition. Finally, (and probably counterintuitively)
it is not necessary to sample in a 360-degree arc around the object of interest to do the tomographic
reconstruction. This is most commonly pursued in cardiac imaging. Because the heart is in the front
left quadrant of the chest, commonly cardiac SPECT is done in a 180-degree arc from the front right
to the left back of the chest. This allows for acquisition of more counts from the heart for the same
total acquisition time compared with imaging around the entire chest. The tradeoff is that objects
far from the detectors during the acquisition (e.g., in the back right quadrant of the chest) are not
accurately assessed. Furthermore, the projection data are not redundantly assessed (i.e., viewed from
both sides), which can result in uncertainty and errors. In the case of cardiac imaging most believe
that the benefits of 180-degree acquisition outweigh the negatives. However, there are relatively few
such clinical indications.
When a SPECT image is reconstructed, we expect the standard transaxial, sagittal, and coronal
slices. However, it is possible to reconstruct the projection data in other ways, and the most common
is called a sinogram. In a standard two-dimensional slice, the data are put into Cartesian coordinates
showing intensity at a given location along the X-Y axes. A sinogram, however, displays data based
on the location of the objects in the field of view and the angle of the camera for that view. So the
different projection views, from first to last, are shown from top to bottom and the appearance of the
activity from that perspective is shown. A representative transaxial slice from a SPECT acquisition
and its associated sinogram are shown in Figure 5.19.
In many cases, the data are initially aggregated into sinograms and reconstructed from the sino-
grams into the desired slices. Much of the filtering and processing happens in the sinogram space. One
of the great advantages of a sinogram is that as an object is imaged from the different angles during
a SPECT acquisition, assuming it is stationary during the acquisition, it should form a nice smooth
curve. Any motion during the acquisition shows up as a discontinuity in the sinogram and thus can
be easily identified and, potentially, corrected.
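The following sketch (the source position, number of views, and mid-scan shift are all invented) shows why: the transverse position at which a stationary point appears varies smoothly, sinusoidally, with the projection angle, and an abrupt shift part-way through the acquisition produces an obvious jump in that curve:

```python
# Sketch of a point-source sinogram: its projected position traces a sinusoid,
# and simulated motion half-way through the scan produces a discontinuity.
import math

r, phi = 8.0, 0.3                    # point source in polar coordinates (cm, radians)
N_VIEWS = 12

for view in range(N_VIEWS):
    theta = 2 * math.pi * view / N_VIEWS      # camera angle for this projection
    if view == N_VIEWS // 2:                  # simulate abrupt patient motion mid-scan
        phi += 0.5
    s = r * math.cos(theta - phi)             # transverse position seen in this view
    print(f"view {view:2d}: position {s:+5.1f} cm")
```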


Figure 5.19.  A transaxial SPECT slice (A) with the same data represented as a sinogram (B). The sinogram essentially shows the
projection image from each angle from top to bottom going from 0 degrees at the top to 360 degrees at the bottom. By displaying the data
this way, any patient motion during the acquisition is readily apparent because it causes a discontinuity.

SPECT/CT

One aspect of SPECT reconstruction that has not yet been addressed is attenuation. Whereas some
photons are scattered along their path from emission to camera, others are absorbed within the
body. A  photon coming from the center of a patient is more likely to be absorbed before reach-
ing the camera than one right at the edge of the body. Therefore, the reconstructed images result
in fewer counts from the center of the body than peripherally. Furthermore, areas of dense tissue
have greater attenuation than areas of less dense tissue (e.g., the tissue-filled pelvis compared with
the air-filled chest). Because of this, an uncorrected image not only can make central findings appear
subtler, it also precludes quantification because measurements of intensity depend on the depth of
the structure.
There are many approaches to correcting for the effects of attenuation. The simplest but most limited
is a simple assumption that the density of the structure is fairly uniform and correcting solely based upon
depth. This can be useful in uniform areas like the brain but is inaccurate elsewhere. Another approach
is to create an attenuation map that can determine the amount and density of tissue a photon needs to
traverse to get from a particular point to the detector in a specific view. The reconstruction algorithm
can then take the attenuation map into consideration, correcting the relative number of counts on a
pixel-by-pixel basis depending on the likelihood of a photon from that location being attenuated before
reaching the detector. Because SPECT is derived from detection of individual photons whose depth of
origin is not exactly known, attenuation correction in SPECT is an approximation, whereas for PET
(discussed in the next chapter) attenuation correction is more quantitatively accurate.
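A minimal sketch of the simplest (uniform-density, depth-only) correction mentioned above follows. The attenuation coefficient is an assumed value of roughly 0.15 per centimeter, typical of 140-keV photons in soft tissue; the counts and depths are arbitrary:

```python
# Depth-only attenuation correction sketch: scale measured counts by e^(mu * depth).
import math

MU_PER_CM = 0.15   # assumed linear attenuation coefficient for 140-keV photons in soft tissue

def corrected_counts(measured_counts: float, depth_cm: float) -> float:
    # The deeper the source, the larger the fraction of photons absorbed on the way
    # out, so the measured counts are scaled up by the inverse of the survival fraction.
    return measured_counts * math.exp(MU_PER_CM * depth_cm)

print(corrected_counts(1000, 0))     # superficial source: essentially unchanged
print(corrected_counts(1000, 10))    # 10 cm deep: roughly a 4.5-fold correction
```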
The attenuation map can be created by passing photons through the body from an external source
(thus it is transmission imaging) and reconstructing it into a transmission scan. This can be done using
γ-emitting sources, such as cobalt-57, using the SPECT detectors to detect the transmitted photons.

Although such transmission scans provide accurate attenuation correction, they require a significant
amount of time to acquire and they also provide little tissue contrast.
More recently SPECT/CT devices have been produced (Figure 5.20), incorporating both a CT scan-
ner and a SPECT camera into one system with a single imaging table. Without moving, the patient can
undergo a CT scan as well as a SPECT scan and because the same table passes through both gantries, the
images can be overlaid. The CT scan is used to create an attenuation map for the SPECT reconstruction.
The major advantages of SPECT/CT over other means of attenuation correction are that the CT scan is
relatively fast compared with radioactive source transmission scans and the CT scan provides exquisite
anatomic definition allowing for a more anatomically precise interpretation of the SPECT study.
As has been stated many times, there are always tradeoffs and the same is true of SPECT/CT.
Disadvantages of SPECT/CT do certainly exist. Perhaps the most relevant from a patient perspective is
that the CT scan (even using specially designed low-dose approaches) confers significantly higher radia-
tion dose than a radioactive source transmission scan. In most cases, this higher radiation dose is worth-
while because the resulting anatomic information is so helpful, but it is an important consideration.
Because they are essentially two imaging instruments in one, SPECT/CT cameras are quite complex and
so are more expensive than those that do not incorporate a CT, both in initial cost and in maintenance.

Figure 5.20.  A SPECT/CT instrument. The table and two sodium iodide detectors appear very similar to the gamma camera in
Figure 5.10. However, behind these is a round gantry with a hole through the middle. This is the CT component. Because the imaging
table passes under the single photon detectors and through the CT component, the images can be accurately overlaid.

Gamma Probes and Well Counters

There are devices that use scintillators and PMTs to measure activity, but not to form an image, such
as a gamma probe (Figure 5.21). A  typical gamma probe has a thick metal housing to block out
background activity. Inside the housing is a scintillating crystal coupled to a PMT. Because of the
sensitivity of the scintillating crystal/PMT design, these probes can accurately measure very low levels
of activity while also providing a high degree of energy resolution.
Probes are frequently used to measure radioactivity within the thyroid gland, both to quantify
uptake in patients given radioactive iodine and to survey for unintentional uptake and trapping
in the thyroid gland in staff members working with radioactive iodine.
Another common use for specialized gamma probes is for intraoperative localization of radioactive
lymph nodes in sentinel node biopsies.
Well counters also typically have this design. Whereas a probe is designed to measure activity emit-
ting from a person, a well counter is used to measure samples. The well is highly shielded and any
activity within the sample placed into the well can be detected by the scintillator.

Figure 5.21.  A gamma probe. The entire apparatus is mounted on a mobile cart. The detector itself is housed in a shielded cone with an
opening at the end (black arrow); there is a slider bar with a ruler that allows the distance from the object being measured to the detector
to be controlled. This model also has a separate shielded well (red arrow) that allows for high sensitivity measurements of low levels of
activity.
6 RADIATION DETECTION—PET

PET stands for positron emission tomography. After discussing positrons in Chapter 2 and emission
imaging and tomography in Chapter 5, it is possible to begin to imagine what PET entails. Because
many imaging concepts are the same as for single photon imaging, particularly regarding single pho-
ton emission computed tomography (SPECT) and SPECT/CT reconstruction, it is recommended that
the reader be familiar with the previous chapter before delving into this one.

PET Principles

The unique property of positron (β+) decay that forms the foundation of PET is the generation of
two 511-keV photons that travel along a straight line directly away from one another. PET is based
on detecting these photon pairs. Thus, a modern PET scanner consists of a ring of scintillating
crystals coupled to photomultiplier tubes (PMTs). Additionally, the scanner includes coincidence cir-
cuitry. Basically, when a photon is detected in one area, the circuit looks for another photon in a pre-
defined area away from itself for some period of time on the order of a few nanoseconds (Figure 6.1).
If another photon is detected during that period, the location of both events is recorded and the β+ is
known to have originated somewhere along that line, called a line of response (LOR), from which an
image can be reconstructed. One way of envisioning this is that as multiple events are recorded and
coincidence lines drawn, there is more overlap at sites with more radioactivity and less at sites with


Figure 6.1.  (A) The blocks represent a ring of detectors and the red dot a scintillation event detected in the top center detector block.
The various lines represent the possible LOR; if a scintillation event is detected in one of those blocks within the timing window, that line
of response is recorded. (B) Transaxial CT slice of a patient with malignant pleural mesothelioma with a schematic representation of the
ring of PET detectors with multiple detected lines of response. The purple dot represents an area with little radioactivity and so there are
just two lines of response overlapping at it, whereas the red dot is an area with more activity and so more LOR overlaps. An actual PET
acquisition typically involves recording millions of events.

little activity and so an image is formed (essentially through automatic back projection) as illus-
trated in Figure 6.1. The same principles of iterative reconstruction used in SPECT are applied to PET.
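As a rough illustration of the coincidence logic just described, the sketch below (timestamps, detector indices, and the 6-ns window are invented values; real coincidence processing happens in dedicated electronics) pairs detections that arrive within the timing window and records a line of response for each pair:

```python
# Simplified coincidence sorting: two detections within the timing window form an LOR.
WINDOW_NS = 6.0

detections = [            # (arrival time in ns, detector index around the ring)
    (100.0, 12), (102.5, 147),   # 2.5 ns apart -> accepted as a coincidence
    (500.0, 33),                 # no partner within the window -> a "single"
]

lines_of_response = []
for (t1, d1), (t2, d2) in zip(detections, detections[1:]):
    if (t2 - t1) <= WINDOW_NS:
        lines_of_response.append((d1, d2))   # the annihilation lies somewhere on this line

print(lines_of_response)   # [(12, 147)]
```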
There are some key differences between PET and SPECT scanners beyond the obvious (that PET
uses a ring of crystals, whereas SPECT uses planar detectors that move about the object being imaged;
and the fact that PET includes coincidence circuitry). The first important difference is the choice of
scintillating material being used. All but a few modern specialized gamma cameras use thallium-doped
sodium iodide crystals, which have excellent properties for single photon imaging, particularly suited
to imaging Tc-99m, the most widely used single photon emitting isotope. However, the 511-keV pho-
tons generated from β+ decay have a much higher energy than those commonly used for single photon
imaging (e.g., Tc-99m emits 140-keV γ-rays). These high-energy 511-keV photons are unlikely to be
stopped by the (relatively) thin sodium iodide crystal in a gamma camera. The crystal only scintil-
lates if the photon (and its energy) is absorbed in the crystal. Therefore, crystals used for PET must
be thicker and have a higher stopping power (denoted as Z number) to capture a higher percentage
of the incident photons.
In addition to stopping power, light output is an important consideration in scintillator selection.
Light output is the amount of light emitted from the crystal per photon of a specific energy absorbed
in the crystal. The more light that is output, the stronger is the signal from the PMT and so signal-to-
noise and energy resolution are improved. For PET, scintillation time is equally, if not more, impor-
tant. This is a measure of how long it takes from when the photon is absorbed until all of the
scintillated visible light is emitted. Because PET is based entirely on detecting paired events on a very
small time scale, the shorter the light emission, the more accurately are paired events detected by the
coincidence circuits.
Properties of various scintillator materials that can be used for PET are shown in Table 6.1. Bismuth
germanate was the most commonly used material for quite a while due to its excellent stopping power
as well as stability and cost. However, its decay time is relatively long; as the electronic/circuitry com-
ponents of the scanners have improved, this has become a bigger limitation. Most scanners currently
commercially produced use lutetium oxyorthosilicate (LSO)–based crystals (LSO or yttrium-doped
LSO, denoted LYSO). A review of Table 6.1 illustrates the reason for the popularity of LSO given a
nice mix of relatively high Z number, quite fast decay time, and fairly good light output.

Table 6.1  Common Scintillators for PET Imaging

Scintillator            Density (g/cm3)   Effective atomic number (Z)   Decay time (ns)   Relative light output
NaI                     3.67              51                            230               100
Bi4(GeO4)3 (BGO)        7.13              75                            300               15
Lu2(SiO4)O:Ce (LSO)     7.4               66                            40                75

From: Melcher CL and Schweitzer JS, A promising new scintillator: cerium-doped lutetium oxyorthosilicate. Nuclear
Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
314(1): 212–214. Reprinted with permission.

Whereas gamma cameras typically use single large planar crystals, PET scanners use blocks made
up of small individual crystals. For example, a block may be comprised of 4 X 4 X 20 mm individual
crystals. Individual crystals are bound into, for example, a 15 X 15 block with PMTs attached (thus,
Anger logic is used to directly localize the location of the scintillation). Multiple blocks are combined
to make a full ring of detectors (Figure 6.2). It would be possible to create a ring of almost any size,
but practically speaking, the diameter of the ring is dictated by patient size (modern scanners have
an interior diameter of around 70–85 cm, so the crystal ring is slightly larger than that), whereas the
axial length of the ring (how long an object can be imaged at once) is dictated almost entirely by
cost: the longer it is, the more the system costs. Most devices currently in production have an axial
length of 15–24 cm.
Perhaps the most glaring difference between SPECT and PET cameras is the lack of a collimator in
PET devices. For SPECT with a parallel hole collimator, the camera must block all photons that are
not coming in directly orthogonal to the detector. This results in the exclusion of more than 90% of
the emitted photons. PET, however, can filter out most undesirable events by the coincidence require-
ment rather than by a physical collimator. The end result is that a dramatically higher percentage of
decay events are captured and used in the image (so the sensitivity of the system as a whole is sub-
stantially higher). Early PET devices did have physical collimation to some extent in that they had
septa along the axial axis to split the ring into multiple discrete slices and minimize photon pairs
where one of the pair had scattered. This design blocked photons that were not traveling within that
slice (called two-dimensional acquisition); this was necessary because the scintillators used had
relatively poor energy resolution (limiting their ability to reject scatter) and slow light output
(limiting how many events they could process at a time). More modern instruments have dispensed with that limitation and allow for
three-dimensional acquisition, which provides a dramatically higher sensitivity (because no events are


Figure 6.2.  (A) Numerous individual crystals are packed together into a block with an array of PMTs attached to the block. (B) The
crystal-PMT array blocks are combined to form a ring of crystals. Photos courtesy of Joel Karp, PhD.


Figure 6.3.  Two-dimensional versus three-dimensional acquisition. Imagine viewing a PET crystal ring from the side looking at only the
top and bottom crystals (gray boxes). A two-dimensional scanner (A) has septae that block any photons that are not traveling perpendicu-
lar to the z-axis of the scanner (the axis along which the bed passes through the scanner). Therefore, the only possible lines of response
(arrows) are those that are directly perpendicular to the z-axis. However, a three-dimensional scanner (B) has no septae, so photons
coming from any angle that allows both of the coincidence photons to hit the crystal ring could be detected. One can see that there is an
far greater number of possible lines of response using this setup. Thus, the number of detected events is dramatically higher for a
three-dimensional scanner compared with two-dimensional, assuming the crystals and electronics can handle this increased load.

blocked by physical barriers; Figure 6.3). Another way to think of it is that for a scintillation detected
at some point in the detector ring, the number of possible LOR is much smaller for a two-dimensional
versus a three-dimensional acquisition.

PET Acquisition and Reconstruction

The previous section introduced the principle of coincidence and illustrated, very simply, how a PET
camera can collect these coincidence pairs to form an image. Unfortunately, the real situation is a bit
more complicated. If the β+ decay rate was sufficiently slow that only one β+ at a time was emitted,
we would be certain that every detected coincidence was truly the result of β+ decay along the LOR
(known in PET as a true coincidence or just a true). However, with multiple decays happening at the
same time in the field of view it is possible for two scintillations to be detected within the timing win-
dow that are from unrelated events, called a random coincidence. Additionally, it is possible to detect
a scintillation without detecting a second scintillation within the allotted time window, called a single.
Finally, one or both of the detected scintillations in a pair may have been scattered along their path
to the detectors and so the LOR between the two points of detection will not include the location of
the originating β+ emission. These possibilities are illustrated in Figure 6.4.
One subtle but important point that has not yet been addressed is β+ range. When a β+ is emitted
from the nucleus it has some kinetic energy and travels near the speed of light. The higher its energy,
the farther it travels before the annihilation event occurs. The LOR will be at the point of annihilation,


Figure 6.4.  Various options for detection in a PET system. The green dot represents a true coincidence. Both photons are detected
during the coincidence timing window and the line of response (LOR) matches the actual path of the coincidence photons. The red dots
represent two separate positron annihilation events, each of which has a single photon that reaches the detector during the coincidence
window. These represent a random coincidence and the resultant LOR (dotted line) represents incorrect data. If only one of the photons
is detected, it is considered a single. The orange dot represents an annihilation event. One of the resultant photons reaches the detector,
but the other is scattered and bent off course at the orange cross. Although still detected as a coincidence event, the LOR (dotted line) is
incorrect. Image courtesy of Joel Karp, PhD.

not at the point from which the β+ was emitted. As many β+ have an energy such that they travel only
a few millimeters (within the spatial resolution of human scanners), there is no significant effect of
this on spatial resolution. However, some have very high energy and may travel over a centimeter
on average, which decreases the spatial resolution of the resulting scan (because there is more than a
1-cm uncertainty in the location of the initial emission compared with the detected LOR).
An accurate PET reconstruction needs to take into consideration randoms and scatter. There are
many ways to do this and a full discussion is beyond the scope of this book. To correct for these, the
algorithm must understand the probability that a detected event is a random or has been scattered.
Regarding randoms, there is a time window after a detected scintillation during which a second
detected scintillation is either from a true coincidence or a random and it is impossible to differentiate
these. However, imagine that a second time window is opened sometime later. Because this window
is separated in time from when the first photon was detected, any coincidences detected must be
randoms. Thus, by using a second, delayed time window, an accurate estimate of the likelihood of a
random coincidence can be determined.
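A toy numeric version of the delayed-window estimate (the prompt and delayed counts are invented numbers) makes the bookkeeping explicit:

```python
# Delayed-window randoms estimate: the prompt window contains trues + scatter + randoms,
# while a window opened long after each event can contain only randoms.
prompt_coincidences = 1_000_000    # counts in the normal (prompt) coincidence window
delayed_coincidences = 150_000     # counts in the delayed window (randoms only)

randoms_estimate = delayed_coincidences
trues_plus_scatter = prompt_coincidences - randoms_estimate
print(f"Estimated trues + scatter: {trues_plus_scatter:,}")   # 850,000; scatter still remains
```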
Scatter correction is a bit more complicated but it is based on certain preconditions. One is that
there is no radioactivity in the field of view outside the object being imaged. Another is that the object
being imaged is not moving during the image acquisition. If the algorithm knows the boundaries of
the object being imaged and it detects a coincidence whose LOR does not pass through the object, it
can be assumed that it is the result of scatter. By knowing how much of the field of view is filled by the
object being imaged, it is possible to calculate the probability that a scattered coincidence has an LOR
not passing through the object. Using this in combination with the number of scatter events detected,
it is possible to compute the likelihood of scatter and to remove it from the final image. Unfortunately,
detecting the boundaries of the object being imaged is not always straightforward. Many algorithms

use a crude reconstructed image to detect the boundaries of the object, but this is not always accurate
because distribution of radioactivity in the object or motion during acquisition may make the detec-
tion inaccurate (see Chapter 11).
After having read the preceding chapter, one may wonder why scattered photons cannot be filtered
out using an energy window as is done for gamma cameras. The simplified answer is that most PET
cameras do not have sufficient energy resolution to achieve this with enough accuracy to be useful in
good part because scintillators are chosen for their stopping power and timing resolution rather than
energy resolution. Because small deflections in one of the photons can cause substantial errors in the
LOR (and small deflections only cause slight decreases in photon energy), trying to account for scatter
by energy window is inadequate.
Finally, more so than with SPECT, attenuation correction is critical for interpretable PET images.
Because PET involves the detection of pairs, only one of the two photons needs to be attenuated for
the pair to not be detected. Furthermore, for many of the LOR, at least one of the photons needs to
pass through the bulk of the object being imaged. Thus, an attenuation-uncorrected image in PET is
strikingly more nonuniform than one for SPECT (Figure 6.5).
Although uncorrected images (correcting only for random coincidences, not scatter or attenuation)
are routinely reconstructed when PET images are acquired, their primary (and perhaps sole) use is to
determine whether a finding on the corrected images is a real finding (truly represents the distribution


Figure 6.5.  Uncorrected (A) and attenuation-corrected (B) SPECT images. The deep structures do appear slightly brighter on the
corrected images compared with the uncorrected images relative to the more superficial structures. In contrast, the central structures are
strikingly less intense on the uncorrected PET images (C) compared with the attenuation-corrected PET images (D).

of radioactivity in the object being imaged) or whether it represents an artifact caused by one of the
corrections done in the corrected reconstruction. The necessary attenuation map can be generated
just as for SPECT, using either external radioactive sources or a CT scanner (with magnetic reso-
nance imaging [MRI] a more recent possibility). It is important to note that because photon pairs are
detected, the exact probability of attenuation can be calculated and so the attenuation correction in
PET is mathematically exact, whereas it is an approximation for SPECT.
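The following short sketch illustrates why the correction can be exact for PET. The attenuation coefficient (about 0.096 per centimeter for 511-keV photons in soft tissue) and the 30-cm path are assumed values; the combined survival probability of the photon pair depends only on the total path length along the LOR, not on where along the line the annihilation occurred:

```python
# The pair survival probability is the same wherever the annihilation occurs on the LOR.
import math

MU = 0.096            # assumed attenuation coefficient for 511-keV photons in tissue (per cm)
TOTAL_PATH_CM = 30.0  # assumed total chord length of the LOR through the patient

for depth_cm in (5.0, 15.0, 25.0):                      # possible annihilation positions
    p1 = math.exp(-MU * depth_cm)                       # photon 1 survival probability
    p2 = math.exp(-MU * (TOTAL_PATH_CM - depth_cm))     # photon 2 survival probability
    print(f"annihilation at {depth_cm:4.1f} cm: pair survival = {p1 * p2:.4f}")

# Every line prints the same value, exp(-MU * TOTAL_PATH_CM), so the correction factor
# for the LOR can be computed exactly from the attenuation map alone.
```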
Initial instruments used radioactive sources to acquire a transmission scan and to generate an
attenuation map. Although this method has advantages over CT or MRI-based attenuation map gen-
eration, the striking disadvantages of a long acquisition time and, most importantly, lack of anatomic
information has essentially rendered PET scanners without CT or MRI attenuation correction an
endangered species with new instruments no longer in commercial production.
Regardless of the method of attenuation map generation, the corrected image reconstruction takes
into consideration attenuation to appropriately scale the detected distribution of activity. Scatter and
randoms are subtracted out. Ordered subset expectation maximization (iterative) algorithms are stan-
dard of care with PET reconstruction.

Time of Flight

When a true coincidence is detected and the LOR is determined the iterative reconstruction algorithm
takes into consideration that the β+ emission occurred along the LOR. However, depending on where
the β+ emission occurred relative to the location of the detector ring, each coincidence photon has a
different distance to travel to the detector and so takes a different length of time (that is, its time of
flight is different). If that difference in time of flight could be detected, it would be possible to estimate
where along the LOR the β+ emission occurred (Figure 6.6). With that information, the reconstruction
algorithm can start from a more accurate point (knowing roughly where along the LOR the β+ emis-
sion occurred rather than knowing only that it occurred somewhere along the entire LOR).
Incorporating time of flight into PET reconstruction is not a new idea but it is practically a very dif-
ficult one. Remember that photons travel at the speed of light, which is approximately 300,000,000
m/s. Yes, that is 300 million meters every second. Remember also that the detector ring has a diameter
of less than 1 m. It does not take a mathematician to realize that the time of flight of the coincidence
photons is astoundingly short so detecting a difference between them is, to say the least, challenging.
Nonetheless, depending on the location of the β+ emission, the time of flight is on the order of 1–2 ns.
With reasonably fast scintillators and modern electronics, timing resolution of modern time of flight–
capable instruments is on the order of 0.5 ns. Therefore, the exact location of the β+ emission along
the LOR cannot be determined, but it can be estimated to perhaps a 10- to 15-cm segment along the
line, which is a marked improvement over, say, a 60-cm field of view.
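A back-of-the-envelope sketch of that arithmetic follows (only the speed of light is involved; the 0.5-ns figure is the timing resolution quoted above):

```python
# Time-of-flight localization: position uncertainty along the LOR = c * timing resolution / 2.
C_CM_PER_NS = 30.0   # speed of light is roughly 30 cm per nanosecond

def localization_cm(timing_resolution_ns: float) -> float:
    return C_CM_PER_NS * timing_resolution_ns / 2

print(localization_cm(0.5))   # ~7.5 cm of positional uncertainty; the usable segment along
                              # the LOR (as described above) is on the order of 10-15 cm
```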
Remember that iterative reconstruction works by essentially guessing (through the use of complex
probabilities) the correct location of an event. The more events detected from the same location, the
more robust are the statistics, and the algorithm is better able to accurately localize the event. By
decreasing the number of possible locations (because the LOR is shortened by time of flight informa-
tion), fewer events are needed to reach robust statistical certainty of the correct location of the event.


Figure 6.6.  (A) Three LOR in a PET acquisition. The circle along each LOR represents the site of the annihilation event. Note that the
LOR overlap in multiple places. (B) With time of flight, the difference in arrival time of each of the coincidence photons allows the system
to constrain the LOR to a much shorter segment of the LOR than the entire length between detectors. Seen in this simple example, there is
no longer overlap in the LOR from unrelated events.

More simply, to generate equivalent image quality, a time of flight–informed reconstruction requires
far fewer recorded events than one without time of flight information. This translates into shorter
acquisition times.

PET/CT

The first hybrid PET/CT devices began to become available in the late 1990s; their development
truly revolutionized the impact and use of PET imaging in medicine (Figure 6.7). In a true example
of synergy, the whole is far greater than the sum of the parts. To understand why, consider first the
creation of the attenuation map. For a typical acquisition of a patient from the base of the skull to
the thighs, a transmission scan generated by radioactive sources takes on the order of 15–20 minutes
to acquire. Compare this with less than a minute for a CT scanner. At the time that PET/CT scanners
were first released, the PET portion of the acquisition took a similar amount of time to the transmis-
sion scan, so by replacing a PET device with a PET/CT, the total number of patients imaged in a day
nearly doubled. Note that a CT-derived attenuation map can have errors not seen with radioactive
source–derived attenuation maps; these are discussed in Chapter 11.
Shortening the scan is important for improving patient access, but it also helps with other com-
promises. A human can only lie still for so long. Many PET acquisitions are long enough that by the
end, patient motion has degraded image quality. Shortening the scans reduces this effect. Furthermore,
the total length of acquisition is constrained by the length of time a person can tolerate imaging. The

Figure 6.7.  A modern PET/CT system. The pictured system is a split ring design wherein the first ring (black arrow) contains the CT
system and the second ring (red arrow) contains the PET components. In some systems these two components are surrounded by a single
common housing.

longer the attenuation map generation takes, the less time is available for acquisition of
the PET itself.
Although reducing the time required for attenuation map creation is excellent, the greatest contri-
bution of PET/CT is the addition of anatomic data to the functional data depicted on the PET por-
tion. Because PET is a functional imaging modality, meaning that it images the behavior (distribution,
kinetics of uptake, and so forth) of the radiotracer administered, it provides exquisite data on what
is happening to the radiotracer in different areas of the body. Thus, on the PET images, one can see
areas with more or less radiotracer at some point in time, which could signify the presence or absence
of a disease process. Lacking in the image are anatomic landmarks, making it impossible to tell the
exact location of these findings.
Because PET/CT allows the PET to be overlaid on the extraordinary anatomic detail of the CT,
distribution of radiotracer in very specific locations can be assessed. For example, rather than saying
that there is something abnormal about accumulation of radiotracer in the mid right lung on a PET
scan, the PET/CT interpretation of the same PET images could say that there is a 4-cm mass in the
right middle lobe of the lung and only part of the mass has increased radiotracer accumulation and
that portion of the mass is abutting the lung fissure. This additional information can have a dramatic
impact on prognosis and treatment decisions.
Now to the tradeoffs, because they always exist. The balance of pros and cons of PET/CT is very
lopsided toward the pros, but the one substantial con is that CT uses x-rays and thus delivers ion-
izing radiation dose. Thus, the radiation dose to a patient from a PET/CT scan is substantially higher
compared with a PET (depending on the radiotracer used, the CT settings, and the parts of the body

being imaged, the CT dose can represent anywhere from 10% to more than 50% of the total radiation
dose of the PET and CT parts of the study).

PET/MRI

MRI is a tomographic imaging modality that uses radiofrequency energy within magnetic fields
to form an image. Very simply, different tissues respond differently to perturbations in a magnetic
field and these differences are used to differentiate tissues. MRI permits high-resolution images with
remarkable tissue contrast. That is, tissues with only very slight differences can be differentiated.
From a medical imaging perspective, MRI primarily images the hydrogen nuclei in water. MRI also
has interesting functional imaging capabilities including angiography, spectroscopy, and diffusion
weighting among others. Furthermore and very notably, MRI does not involve the use of ionizing
radiation.
The very first PET/MRI devices have recently been released (Figure 6.8) and second-generation
devices are in development. In contrast to PET/CT wherein the CT ring and the PET ring are separate
and distinct, with PET/MRI it is possible to insert the PET detector ring into the MRI field of view.
This raises the possibility of true concurrent fused imaging.
However, there are substantial challenges to PET/MRI. First and foremost is that MRI requires a
very high magnetic field; standard PMTs cannot operate within such a field and so an integrated PET
ring must use special PMTs. One such option is avalanche photodetectors, which are very thin and

Figure 6.8.  The first commercially released simultaneous whole-body PET/MRI device. In this system, the PET detector ring is within
the 3-T magnetic field allowing both PET and MRI acquisition at the same time. © Copyright Siemens Healthcare 2013. Used with
permission.

operate within a magnetic field. However, these have poor timing resolution and so time of flight is
not possible with an avalanche photodetector–based system. Another option is silicon-based PMTs,
which are available but a very young technology. Furthermore, no ferromagnetic materials can be
used within the high magnetic field, so any parts that are traditionally made from steel alloys need to
be redesigned to be MRI safe. Finally, many patients cannot undergo MRI because they cannot safely
be within a high magnetic field. For example, some patients may have shrapnel in their bodies, which
could be dislodged by the field. Other patients have devices, such as pacemakers or implantable defi-
brillators, which may not function reliably or desirably within a field. This is a relatively small but by
no means insignificant patient population.
Another issue relates to the attenuation map. CT and PET both use high-energy photons to form
their images, so both are attenuated in the same fashion. Although there are some issues related to
using CT to make PET attenuation maps, overall it is really quite straightforward because CT inher-
ently provides a map of tissue density. MRI, however, does not directly image tissue density (or tissue
likelihood to attenuate coincidence photons). Furthermore, bone is the densest tissue in the body
(that most likely to attenuate photons) but because cortical bone has no significant water it cannot be
directly seen on MRI. Thus, forming an accurate attenuation map with MRI is not trivial.
One of the great advantages of the CT component of PET/CT is its speed; most of the imaging time
can be focused on PET (which clinically is often used to image most or all of the body). Although MRI
can be used to characterize tissues in remarkable ways, those characterizations take time. Individual
MRI acquisitions do not take long and so it is possible to acquire a whole-body PET/MRI in about
the same time as a comparable PET/CT, but the MRIs acquired in such a scenario are quite limited. To
take full advantage of MRI characterization capabilities the imaging extent may need to be reduced.
For example, whereas a PET/CT from the base of the skull to the thighs can be acquired in 10–15
minutes, it is very possible to spend well more than 15 minutes acquiring various relevant MRI
sequences of just the brain or the cervical spine.
Finally, MRI scanners are quite expensive as are PET devices. Making the PET portion MRI com-
patible also adds substantial cost. The net effect is a very expensive device. Although cost need only
be balanced by utility, the applications of PET/MRI need to be quite beneficial indeed to offset the
substantial cost.
Certainly, PET/MRI has compelling capabilities particularly for research to more thoroughly char-
acterize body processes and diseases noninvasively, simultaneously interrogating radiopharmaceuti-
cal distribution and tissue characteristics. There is also significant interest in being able to image
concurrently to understand motion in real time. It remains to be seen whether any of these capabilities
will find a niche in the routine practice of medicine.
7 IONIZATION CHAMBER/DOSE
CALIBRATOR ARTIFACTS

Most often the word “artifact” implies some aberration in a picture or sound. This does not intui-
tively apply to ionization chambers and dose calibrators where the only output signal is a display
showing a number in units of radioactivity. Perhaps one could consider a problem with the display
where the wrong unit of measurement lit up (showing mCi even though the device was set to show
Bq) or if, perhaps, an 8 looked like a 3 because part of the display was nonfunctioning. Although
these are possible, they are not likely and not the main topic for this chapter.
Although artifacts are usually thought of as image or sound issues, the word can generically
apply to any error in a signal. That is, any time the output signal has errors, those signal errors
could be considered artifacts. To that end, we discuss several important considerations that may
cause inaccurate dose calibrator measurements and that could be considered artifacts. Pitfalls
related to dose calibrator use (more likely to be caused by user error than system problems) are
discussed in Chapter 12.

Case 1. Altitude

A researcher at a New York City university is offered a position at a university in La Paz, Bolivia.
Having always enjoyed South American cultures and being a fluent speaker of Spanish, she immedi-
ately accepts, packs up her laboratory, and moves. Shortly after arriving, she unpacks her ionization
chamber and her NIST traceable Co-57 standard. First she must calculate how much activity is in
the standard. The label indicates a calibration date of April 21, 2012 and an activity at calibration
of 2 mCi. She knows that the half-life of Co-57 is 272 days. The current date is August 26, 2013, so
492 days have elapsed since calibration. Having read Chapter 2, she remembers that:

Activity now = Starting activity × e^(−0.693 × (time elapsed/T½))

Working out the equation, she finds that the activity in the standard is currently 0.57 mCi. She turns
on her ionization chamber, allows it to start up, and measures the standard at 1 m; she gets a read-
ing corresponding to 0.32 mCi. Thinking maybe the ionization chamber just needs to warm up she
leaves it alone and continues to unpack, planning to recheck the measurement the next day (because
standards should be checked every day). The next day, the measurement is virtually the same.
She calls for service. The engineer diligently tests the chamber and proclaims that everything is working as expected, then asks her whether she corrected for altitude after moving, reminding her that while her prior institution was at sea level, La Paz is more than 2 miles above sea level. When the altitude is taken into account, the measurements are as expected.
An ionization chamber measures the current generated by ionizations in air. Just as different scintillator crystals have different stopping power (related to their density and effective atomic number), the ability of air to interact with ionizing radiation depends on its density. The thinner the air, the fewer ionizations per unit distance. Fewer ionizations translate to less current, which results in a lower measured activity. Therefore, every ionization chamber must be corrected for the altitude of the location where it is used. Of note, the daily variations in atmospheric pressure are insufficient to cause errors in measurement.

Case 2. Geometry

A dose arrives from the radiopharmacy in a volume of 1 mL in a 3-mL syringe. However, this particular radiopharmaceutical must be administered diluted in 60 mL. The technologist measures the dose in the small syringe and then proceeds to carefully transfer the dose to a 60-mL syringe with saline, diligently mixing and flushing. He confirms that the original syringe has no measurable residual radioactivity. Before dispensing the dose, he remeasures it in the dose calibrator and is surprised to find a different activity (Figure 7.1).
In extreme cases, the geometry of the radioactive source being measured can result in differences
in measurements. The calibrator is just an open chamber that measures ionizations within the cham-
ber. For the most part, any radioactivity in the chamber, regardless of its precise location or size of
container, should cause the same number of ionizations, resulting in the same measurement. However, there are limits, and if the geometry of the radioactive source exceeds them, incorrect measurements are possible. Low-energy pure β emitters are particularly affected by geometry effects.
Geometry can be tested by placing known amounts of radioactivity into various containers (e.g., test-
ing every syringe size commonly used within a department), filled with various volumes to make sure
there are no problems. Any container that gives inaccurate results should not be used for the primary
measurement.
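As a rough illustration of how such a geometry check might be tabulated, the sketch below compares measurements in different containers against a known activity, using the values shown in Figure 7.1. The ±10% flag threshold is purely illustrative; departments set their own tolerances.

    # Minimal sketch of a dose calibrator geometry check (illustrative values).
    known_activity_uci = 866.0   # activity placed in each container

    measurements_uci = {
        "3-mL syringe, 1 mL volume": 866.0,   # values from Figure 7.1
        "60-mL syringe, 60 mL volume": 725.0,
    }

    for container, measured in measurements_uci.items():
        percent_error = 100.0 * (measured - known_activity_uci) / known_activity_uci
        flag = "FLAG" if abs(percent_error) > 10.0 else "ok"
        print(f"{container}: {percent_error:+.1f}% ({flag})")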


Figure 7.1.  In a small volume (A) the dose measures 866 μCi (B). When diluted to 60 mL (C), the dose measures 725 μCi (D).

Case 3. Materials

An I-131 capsule is ordered for a thyroid cancer treatment. The radiopharmacy delivers it with a dose
of 100 mCi at a calibration time of noon that day. While logging the capsule into the radiopharmacy
software at 1 p.m., the technologist measures the capsule and finds it to be 110 mCi. She calls the
radiopharmacy and asks them to reverify the dose and their calibration calculations. Every subse-
quent capsule that day is similarly reading about 10% higher than the calibrated dose.
Because this seems to be a consistent problem, a meeting is scheduled with the radiopharmacy to try to get
to the bottom of these discrepancies. It turns out that after the capsule is formed it is in a thick glass
vial. The capsule is measured in this vial. It is then transferred to a thin-walled plastic vial for delivery
to the hospital. The astute technologist requests that they measure the capsules in the plastic vial and
they get results matching what was seen in the hospital, about 10% higher than the reading when the
capsule was in the thick glass vial (Figure 7.2).
Radioactive sources are intentionally placed into lead, tungsten, or other shielding vials to attenuate the emitted radiation and prevent unnecessary exposure. These storage containers can block most of the emitted photons.
Thinner, less dense materials do not block enough to make them useful from a radiation protection
perspective, but they can still block enough to decrease the number of ionizations occurring externally
and, therefore, the measured activity in the dose calibrator. It is important, then, to either minimize or
correct for the attenuation caused by the container of the radioactive source. For thin plastic contain-
ers, such as syringes, this is typically unnecessary, but depending on the isotope being measured and
the material used, it can result in substantial differences.
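If a particular container must be used despite its attenuation, a correction factor can be derived by measuring the same source in a reference container and in the attenuating one. A minimal sketch using the readings from Figure 7.2 follows; the factor is specific to the isotope and the container, so the numbers are illustrative only.

    # Derive a container correction factor from paired measurements (Figure 7.2).
    reading_in_plastic_vial_uci = 23.30   # thin-walled reference container
    reading_in_glass_vial_uci = 19.66     # thick glass vial attenuates some photons

    correction_factor = reading_in_plastic_vial_uci / reading_in_glass_vial_uci
    print(f"Glass-vial correction factor: {correction_factor:.3f}")

    # A later measurement made in the glass vial could then be corrected:
    corrected_uci = 19.66 * correction_factor
    print(f"Corrected activity: {corrected_uci:.2f} uCi")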


Figure 7.2.  When the capsule is measured in its plastic shipping vial (A) it measures 23.3 μCi (B), but when placed into the glass vial
(C), it measures 19.66 μCi (D).
8 GAMMA CAMERA ARTIFACTS

Gamma cameras are fairly complex devices and a malfunction in any component can result in image
artifacts. These range from glaring to subtle, but all can cause significant problems with image quality
and accuracy of image interpretation. Although some artifacts are visible on patient scans, we expect to see variation in radioactivity distribution in patients, so clinical scans cannot be relied upon to reveal artifacts. Therefore, daily quality control acquisitions of uniform sources are critical: they lay bare problems that may go unnoticed in clinical scans even as they degrade them, sometimes in important ways.

Case 1. Cracked Crystal

A technologist is preparing to perform an intrinsic flood, so he has removed the collimator. He leans
over the detector to attach the point source holder and his mobile phone slides out of his shirt pocket
and lands on the detector. Fearing the worst, he immediately acquires a flood image (Figure 8.1),
which reveals a focal discontinuity in the image. Unfortunately, the crystal is cracked!
These crystals are fragile things, so great care must be taken whenever the collimators are removed from the camera. Indeed, considering that the camera costs hundreds of thousands of dollars, great care must be taken when handling any part of it, but particular care is needed with an uncovered crystal.

Sodium iodide crystals are grown in very carefully controlled settings and are single continuous pieces of crystal. The chemical bonds forming the crystalline structure are disrupted by damage and there is no way to repair them. Thus, even the smallest crack renders the crystal unusable, and it must be replaced at a cost of tens of thousands of dollars. Objects much smaller and lighter than a mobile phone, dropped from just a few inches above a gamma camera crystal, have been known to cause irreparable damage; their fragility is not to be underestimated.


Figure 8.1.  This uniformity flood image from a dual-head gamma camera shows good uniformity in head one (A) but a focal defect (arrow) in head two (B), in keeping with damage to the crystal. Image courtesy of Joshua Scheuermann, MMP.

Case 2. Hygroscopic Crystal

One of the gamma cameras in the department is functioning well and passes all daily quality control
extrinsic flood images with flying colors. Every month, however, a much higher count intrinsic flood is
acquired (Figure 8.2). At first glance it looks fine, but on closer inspection, there are a few dark spots
near the top of the image. Looking back over high-count intrinsic floods from the past several months
it appears these have been getting slowly worse.
Service is called. They review the floods and remove the collimators to look under the hood, so to speak. They inform the technologist that the problem can be fixed and present her with an estimate
for $65,000. Knowing what happened to her colleague recently with the cracked crystal, she exclaims,
“I can get a whole new crystal for that amount of money.” The service engineer replies that a new
crystal is exactly what is required to fix the problem. He explains that the seal around the crystal was
damaged causing irreparable changes to the crystal. Unfortunately for the vendor and fortunately for
the technologist, the seals were found to be faulty from a manufacturing defect and so were replaced
by the vendor at no cost!
What happened in this case and why are these seals so important? In addition to being quite fragile,
sodium iodide crystals are hygroscopic. This means that they absorb water. Unfortunately, when they
absorb water they discolor (specifically, they turn yellow). To understand why this matters, we need
only recall chemistry class (although it causes painful flashbacks to do so). One of the very few things I remember from chemistry, because it is so practically applicable, is Beer’s Law, which is
often paraphrased, “The taller the glass, the darker the brew, the less light gets through.” In this case,

Figure 8.2.  High-count intrinsic flood image reveals tiny focal areas of decreased intensity (arrows) that, in retrospect, have been
becoming more noticeable over the past several months. Image courtesy of Joshua Scheuermann, MMP.

because the yellow crystal is darker than the clear crystal, less light passes through it and so dark spots
are seen in the flood image.
Case 1 states that sodium iodide crystals are grown in very carefully controlled settings. Among
the many important parameters, they are formed in a dry environment with no humidity. Prior to
leaving this environment, they are encased in seals that are entirely impervious to water vapor. If the
seal integrity is damaged in any way, water vapor enters the crystal and progressively discolors it. If
minor, the discolored areas can be corrected as part of the flood corrections that account for the many
slight variations across the camera detector. However, because this is a progressive problem, after a
short time, those corrections are no longer sufficient. Eventually the problem progresses to the point
that replacement is required.
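Beer's Law can also be written quantitatively: the transmitted fraction of scintillation light falls off exponentially with the product of the absorption coefficient and the path length. The sketch below uses made-up absorption coefficients purely to illustrate how modest yellowing can noticeably reduce the light reaching the photomultiplier tubes; the numbers are not measured properties of sodium iodide.

    import math

    # Beer-Lambert law: transmitted fraction = exp(-mu * x).
    # The mu values are illustrative assumptions, not measured NaI(Tl) data.
    crystal_thickness_cm = 0.95

    for label, mu_per_cm in [("clear crystal", 0.02), ("yellowed region", 0.50)]:
        transmitted = math.exp(-mu_per_cm * crystal_thickness_cm)
        print(f"{label}: {100 * transmitted:.0f}% of scintillation light transmitted")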

Case 3. PMT Malfunction

A breast cancer patient notes new rib pain for which her oncologist orders a bone scan. During a
spot image of the chest, the technologist sees a glaring defect in the acquisition (Figure 8.3). It is not
subtle at all, indicating that no events from that area are being recorded. As opposed to the two cases
above, this defect is perfectly round and profound (with no counts in the area at all) making it hard to
imagine a form of damage that could cause the defect seen (short of maybe someone taping a circular
sheet of lead to the collimator face).
The morning flood had passed uneventfully and prior patients were scanned that day. Upon noticing
this problem, an extrinsic flood was repeated and confirmed the same artifact (Figure 8.4). Because
of the appearance of the artifact, the technologist immediately suspects that the problem lies after the
collimator and crystal somewhere in the electronic components of the camera. Given that the defect
is in a single area, he expects that there is a problem with a single PMT. The technologist follows the
procedure to shut down and restart the system, but this does not remedy the problem. Service is called
and confirms a malfunctioning PMT. The PMT is replaced and the system calibrations are performed.
Subsequent floods confirm excellent uniformity.
From the first two cases in this chapter, it can be seen that physical damage to the collimator or crys-
tal results in image artifacts that are quite focal and nonregional. On the other hand, when there is an
interruption in the electronic signal that starts with the PMT converting visible light into electricity,
the defects are more often regional and usually appear organized or in a clearly defined location and
extent. It is to be noted that in most cases PMTs are either functioning properly or they are not, so the artifacts are usually absolute. However, it is possible for a PMT to still output some signal but less
than it should, as illustrated in Figure 8.5.

Figure 8.3.  The spot image from this bone scan shows a glaring round defect (arrows) at the bottom of the field of view. Image courtesy of Janet Reddin, PhD.
Figure 8.4.  An extrinsic flood image performed immediately after the acquisition shown in Figure 8.3 confirms the appearance and
uniformity of the defect. This appearance is characteristic of a malfunctioning PMT. Image courtesy of Janet Reddin, PhD.

Figure 8.5.  An extrinsic flood image from a camera with a malfunctioning PMT. In contrast to the PMT malfunction seen in Figures 8.3
and 8.4, this PMT still has some output, just far less than it ought. However, the overall appearance of the defect is similar. Image courtesy
of Janet Reddin, PhD.

Case 4. Flood Nonuniformity

Upon completing a daily flood, the technologist notes a failing result and visually sees a well-defined
grid pattern (Figure 8.6). Looking back on the last several weeks of daily floods, this does appear to
have become progressively worse and the resulting uniformity measurements have been creeping up
to the point of exceeding the passing threshold this morning.
Service is called in and requests a radioactive source with which to perform gain calibrations. After
this procedure, a daily flood is repeated and passes easily. Visually, there is uniformity and the grid is
gone. The camera is determined to be working well and is put back into service.
Whereas the collimator and crystal are essentially static objects that are incredibly uniform and do
not change over time (as long as no one damages them), the electronic components can vary slightly
from part to part and they do drift over time. For example, the voltage applied to successive dynodes in a PMT increases step by step; it is this voltage that produces the signal amplification, with more and more electrons being ejected at each dynode. However, if the voltage to one of the dynodes changes even slightly,
the resulting total number of electrons at the end of the PMT can be dramatically different. Slight
changes to voltage over time are inevitable. Furthermore, two seemingly identical PMTs invariably
have slightly different outputs.
To account for this variability in the electronic components of the system, high-count flood images
are acquired and a calibration file is created to make the flood image appear as uniform as possible.
This calibration is then applied to all images done on the system with that isotope. Daily floods are
done simply to ensure that the calibration used is still appropriate. As the electronic systems drift


Figure 8.6.  Intrinsic uniformity floods for a dual-head gamma camera. (A) Head one reveals substantial nonuniformity with areas of
increased and decreased intensity corresponding to the pattern of PMTs. This is characteristic of a poor calibration and can be corrected
by repeating uniformity calibration. (B) Head two shows expected uniformity. Note that because this is an intrinsic flood with a point
source of activity, there is higher intensity in the middle than at the edges because the distance from the source to the detector is higher
toward the edges than in the middle. This is corrected by the software because the source is a known distance away.

slightly, the calibration file may no longer result in a uniform image. In this case, we are seeing the
PMTs above background, suggesting that signal output (gain) is slightly too high.
By creating a new calibration file that reflects the state of the camera and its electronic components
at this point in time, a uniform flood image (and appropriate image quality) is ensured. Ideally these
calibrations are performed as part of planned maintenance at sufficiently short intervals that such
artifacts do not appear between calibrations. Electronic drift can be accelerated in some cases (e.g.,
if there are changes in the quality of the input electricity) resulting in early appearance of such flood
uniformity artifacts. Changing environmental conditions, especially swings in temperature or humid-
ity, also accelerate the appearance of nonuniformities.
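The degree of nonuniformity in a flood is usually summarized with a simple metric, and the uniformity calibration itself amounts to a per-pixel correction map derived from a high-count flood. Below is a minimal sketch of both ideas, using a simplified integral-uniformity figure in the NEMA style (the formal procedure also smooths the image and restricts the analysis to the useful field of view); the simulated flood is illustrative only.

    import numpy as np

    def integral_uniformity(flood):
        """Simplified integral uniformity: (max - min) / (max + min) * 100."""
        return 100.0 * (flood.max() - flood.min()) / (flood.max() + flood.min())

    # Simulate a high-count flood with a mild, PMT-like intensity ripple.
    rng = np.random.default_rng(0)
    expected = 10000.0 * (1.0 + 0.04 * np.sin(np.linspace(0.0, 8.0 * np.pi, 64)))
    flood = rng.poisson(np.tile(expected, (64, 1))).astype(float)
    print(f"Integral uniformity before correction: {integral_uniformity(flood):.1f}%")

    # The calibration is a per-pixel correction map; applied to later acquisitions,
    # it flattens the drift (applied to the same flood here, so the result is
    # perfectly flat by construction).
    correction_map = flood.mean() / flood
    print(f"After correction: {integral_uniformity(flood * correction_map):.1f}%")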
9 PLANAR ACQUISITION ARTIFACTS

Although the artifacts seen in Chapter 8 all appeared on planar acquisitions, the focus there was on issues discovered during routine quality control. Here we discuss several artifacts that were discovered on clinical patient acquisitions. It is important to note that even expertly maintained and calibrated cameras can produce artifact-laden images. Most are related to the human element, be it patient motion (Case 2) or incorrect acquisition setup (Case 1). However, some fundamental camera
issues can slip past routine quality control procedures (Case 4).

Case 1. Off-peak Acquisition

A child with neuroblastoma is injected with I-123 meta-iodobenzylguanidine (MIBG) and imaged.
The technologist recognizes that the scan does not look quite right (Figure 9.1). Everything just looks
fuzzy, for lack of a better description, with somewhat indistinct edges.
The technologist goes into the reading room and shows the images to the physician who smiles and
asks the technologist to repeat the images, setting up a new acquisition from scratch for I-123 MIBG.
The technologist does that and immediately sees that the images are markedly improved (Figure 9.2).
He goes back to the reading room to ask the physician what was wrong with the initial acquisition.
The physician replies that the camera was configured for a Tc-99m acquisition rather than for I-123.
It was a somewhat lucky, but informed and ultimately correct, guess on the part of the physician. Tc-99m has a photopeak of 140 keV and is the most commonly imaged isotope in almost any nuclear medicine department; the photopeak of I-123 is 159 keV. Seeing the fuzziness of the first image, the physician realized that the photopeak was set lower than it ought to be, and smart money was that the

Figure 9.1.  Multiple anterior and posterior images from an I-123 MIBG study in a child with neuroblastoma. The images look fuzzy with
indistinct edges and low contrast. Notice the marker (arrows) seen near the ankles. By convention, a radioactive marker is used to mark
the right side of the body. This is particularly useful for studies when the orientation may not be obvious because of few anatomic clues on
these physiologic images.

Figure 9.2.  Multiple anterior and posterior images of the patient from Figure 9.1 were repeated. Note the clearly visible uptake in the
thyroid gland (arrow) and bone metastases (arrowheads) that could not be discerned in Figure 9.1.

photopeak would be set for Tc-99m. Most of the photons detected at energies below the photopeak
have lost energy through scatter. Therefore, by setting the photopeak too low, the image collects
more scattered photons. Figure 9.3 illustrates why an image with scattered photons looks fuzzy and
indistinct.
Although this case could easily fit with the pitfalls in Chapter 13, it belongs in this chapter as well. When an isotope is selected in the camera software, it is taken for granted that the resulting
acquisition window is correct. However, it is possible for the detected energy to drift over time. Even
though the camera may be set correctly, it is possible to acquire images that contain more scatter
than expected. In some cases the result could look like the flood seen in Chapter 8, Case 4. Thus, it
may not be severe enough to recognize, but could still obscure important findings. Therefore, it is
important that the camera peak be reconfirmed with all used isotopes as part of the periodic planned
maintenance of the camera.
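The mismatch in this case is easy to see numerically. Gamma cameras typically acquire within a symmetric energy window around the selected photopeak; the 20% width used in the sketch below is a common default and is an assumption here, not a value from the case. With the window set for Tc-99m, the I-123 photopeak falls entirely above the window, so the events that are accepted are disproportionately downscattered photons.

    def energy_window(photopeak_kev, window_percent=20.0):
        """Symmetric acquisition window around a photopeak."""
        half_width = photopeak_kev * window_percent / 200.0
        return photopeak_kev - half_width, photopeak_kev + half_width

    low, high = energy_window(140.0)        # Tc-99m setting used by mistake
    i123_photopeak_kev = 159.0

    print(f"Tc-99m window: {low:.0f}-{high:.0f} keV")
    print(f"I-123 photopeak ({i123_photopeak_kev} keV) inside window: "
          f"{low <= i123_photopeak_kev <= high}")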


Figure 9.3.  The effect of scatter on edge detection. (A) A point source of activity (black dot) is being imaged. Photons are emanating
from the source in all directions (light gray lines) but either do not hit or are blocked by the collimator. Only the photons coming in per-
pendicular to the collimator (black arrow) pass through and the image appears the same size on the detector as it truly is. (B) In this case,
some of the photons coming from the point source initially are not perpendicular to the collimator, but are scattered into a perpendicular
position. These then pass through the collimator and make the point source seem larger. If it were a line being imaged, it would appear
thicker with less distinct edges. This is why scatter makes edges less distinct. Note that the extremely scattered photon (dark gray) can
pass through the collimator but is still rejected because enough energy is lost in the scattering event so that the photon energy is no
longer within the energy window. Thus, the better the energy discrimination of the system, the smaller an energy window can be used and
the more scattered photons can be rejected.

Case 2. Motion Artifact

A patient with prostate cancer is referred for bone scan to assess for osseous metastatic disease.
Whole-body images are acquired and the normal structures of the skull and facial bones are not well
defined, but without clear abnormality (Figure 9.4).
A repeat acquisition of the head was performed and it appears normal (Figure 9.5).
The patient moved during image acquisition, causing an indistinct appearance of the head. In some
cases, it can be fairly subtle. In the above case, there is clearly something wrong with the images, but
it may not be obvious that it was caused by patient motion. In many cases, it can be just enough to
obscure a finding but not prominent enough to be recognized. Luckily, in most cases it is quite strik-
ing, making it obvious that the patient has moved, as for example in Figure 9.6.
In general, motion is most often seen in the head, hands, and feet, which makes sense because
these parts are easiest to move when on a scanner table. Hand and foot motion can be somewhat
mitigated by gentle restraints, like a Velcro strap that helps the patients keep their arms by their
sides or taping the feet together. The goal here is not to make the patient feel immobilized but just
to make it easier for them to remain still. However, immobilization of the head for long periods of
time can feel disquieting for patients. For dedicated images of the brain a head holder can be used

Figure 9.4.  Anterior and posterior whole-body bone scan images. Note that the skull and facial bones are poorly defined.

Figure 9.5.  Anterior and posterior spot images of the head in the same patient as Figure 9.4 reveal no abnormality. Note the way the structures are clearly defined.

to help immobilize (even this will not prevent all motion), but for whole-body acquisitions often the
patient is just instructed to move as little as possible. Unfortunately this is not always possible. To
flag potential subtle motion artifacts, it is important to pay close attention to the patient throughout
the study acquisition.

Figure 9.6.  Anterior and posterior whole-body bone scan images. In this case, the head motion is fairly obvious.

Case 3. Dose Infiltration

A potential living related kidney donor is being worked up and presents for measurement of glomerular filtration rate (GFR). The procedure involves injection of a radiopharmaceutical with measurements of blood concentration of radioactivity at 1 and 3 hours after injection. The rate of decrease of activity over time is used to calculate the rate at which the kidneys filter the tracer from the blood, that is, the GFR. In this case, the procedure is completed and the GFR is measured to be 60 mL/min, which is significantly lower than normal
and discordant with the patient’s 24-hour urine collection, which calculated GFR to be 120 mL/min.
If the low GFR value is correct, the patient will not be allowed to donate a kidney to her brother. The
physician reviews the results and requests a planar image of the injection site (Figure 9.7).
The image shows significant infiltration of the injected dose into the subcutaneous tissues. The
quantification in this study assumes that the entire dose was injected as a bolus into the vein with
subsequent clearance from the bloodstream via the kidneys. However, the part of the dose that was
injected into the subcutaneous tissues acts as a reservoir of radioactivity that is slowly entering the
bloodstream. Because additional activity is entering the bloodstream over time, it makes the measure-
ments appear as though less radioactivity is being excreted by the kidneys. Thus, the calculated GFR
is falsely low.
Different radiopharmaceuticals are given via different routes depending on their chemistry and the
process being imaged. For example, radioactive iodine is given orally, whereas some tracers are deliv-
ered directly into the bladder. Most are given intravenously. When a tracer is given by a route other
than the one intended the distribution and kinetics of the tracer can be dramatically altered. In this
case, even a relatively small infiltration can cause a dramatic error in results.

Figure 9.7.  Anterior image of the forearm (arrowheads) and hand. There is intense focal radiotracer activity in the hand (arrow) consis-
tent with partial infiltration of the injected dose. Notice that there is diffuse low-grade activity in the soft tissues of the forearm indicating
that most of the radiotracer was successfully injected intravenously, but a small portion was infiltrated at the injection site.

Because most radiopharmaceuticals are given intravenously and because it is fairly common for an
intravenous injection to be infiltrated (at least partially), it is important to always consider whether
an infiltration may have occurred. When the injection site is in the field of view, an infiltration is usually easy to recognize. In cases where accurate quantification is important, it is
usually a good idea to specifically image the injection site to assess for infiltration.
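To make the effect of infiltration concrete, a simplified slope-intercept clearance calculation can be sketched as below. All of the numbers are invented, and real protocols apply additional corrections (for example, Brochner-Mortensen), so this is not the clinical method used in the case; the structure simply shows why a depot of tracer slowly leaking into the blood flattens the measured slope and lowers the computed clearance.

    import math

    # Simplified single-injection, two-sample clearance sketch (invented numbers).
    injected_activity = 1.0e7       # counts-equivalent of the administered dose
    t1, t2 = 60.0, 180.0            # sample times in minutes (1 h and 3 h)
    c1, c2 = 495.0, 190.0           # plasma concentrations in counts/mL

    # Fit a single exponential C(t) = C0 * exp(-k * t) through the two samples.
    k = math.log(c1 / c2) / (t2 - t1)
    c0 = c1 * math.exp(k * t1)

    clearance_ml_per_min = injected_activity * k / c0
    print(f"Clearance: {clearance_ml_per_min:.0f} mL/min")   # ~100 mL/min here

    # With an infiltrated depot slowly releasing tracer, c2 stays artificially
    # high, k becomes smaller, and the computed clearance is falsely low.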

Case 4. Collimator Penetration

A newly released pinhole collimator was just installed for a gamma camera intended for thyroid
imaging using I-123. As the first patient images are acquired, it is apparent that the target/background
ratio appears low (Figure 9.8), with the thyroid not seen as distinctly above background as is usually
expected.
The physician’s first question is what the iodine uptake measurement showed, because a thyroid
with low iodine uptake can have this appearance. However, the uptake is at the high end of the nor-
mal range. Because the image appearance and uptake measurement are not concordant, the patient
is rescanned using an alternate camera and the background is markedly lower with much clearer
visualization of the thyroid gland (Figure 9.9).
Thinking that there might be a problem with the collimator, an I-123 source is imaged with both the pinhole collimator and a low-energy, high-resolution parallel-hole collimator. The background is again high with the pinhole and normal with the parallel-hole collimator. This makes the physician suspect
that the collimator is not sufficiently blocking the photons that do not pass through the aperture.
Because it is a newly released product, the manufacturer is contacted and they confirm that the col-
limator was designed to be sufficiently thick for Tc-99m and I-123, to which the physician asks, “All
the photopeaks from I-123?” Silence, then, “What do you mean all the photopeaks?”
I-123 has a photopeak of 159 keV and this is the setting used for imaging. However, it does have
additional, much less abundant emissions of higher energy. Because they are not abundant, they are
not imaged. However, they can pass through an inadequate collimator and contribute to background.
In this case, the collimator designer overlooked these high-energy emissions.
Because collimators are carefully designed for their intended use, as long as the correct collimator
is installed for the isotope being imaged, issues are rare. However, it is possible in rare situations to

Figure 9.8.  Anterior pinhole image of the thyroid gland. Although the uptake is relatively high, the target/background ratio is very low with background noise having an intensity close to that of the thyroid gland.

Figure 9.9.  Repeat anterior pinhole image of the thyroid gland in the patient from Figure 9.8 performed on a different camera. The thyroid/background ratio is markedly higher.

see artifacts caused by unexpected design flaws. In this case, the collimator was specifically designed to be lightweight and so the thinnest possible amount of lead was used. Because of an oversight in
specifications, the lead was not thick enough. The collimator was quickly redesigned and the problem
was solved.
10 SPECT ACQUISITION ARTIFACTS

Because single photon emission computed tomography (SPECT) involves taking images from mul-
tiple projections (often 64 or 128 projections) and reconstructing them into sections, any tiny error
can be propagated and magnified to result in a substantial artifact. Therefore, the quality control and
calibration procedures needed for SPECT are more stringent than those needed for planar imaging.
For example, the calibration flood images for SPECT require many more counts because nonuni-
formities translate into larger errors when reconstructed. Some artifacts result from camera issues,
whereas others are a result of acquisition parameters and even radiotracer distribution.

Case 1. Center of Rotation Error

The technologist is performing weekly quality control acquisitions including a center of rotation
(COR) phantom. In this acquisition, a point source of radioactivity is placed near the center of the
SPECT field of view and a circular SPECT is acquired. The projection images appear as expected with
just the single point seen in every image. However, when a transaxial slice is reconstructed through
the point source, there is a ring of activity with a cold center rather than the expected single point
(Figure 10.1).
Service is called and the engineer suggests that until the camera can be calibrated no SPECT should be performed, but that it is perfectly acceptable to use the camera for planar acquisitions. Later that
day a COR calibration is performed. The COR SPECT acquisition is repeated and results in a single
point of activity on the reconstructed slice.
A SPECT acquisition involves imaging in multiple projections about an object with the detectors
rotating about a defined axis (the center of rotation). Each of those projections contributes to the final
reconstructed images. It is important that the location of the object with respect to the detector face matches the location assumed for it in the image matrix on the acquisition computer. That is, if the camera rotates
about a different axis than expected, a point source will look like a ring on the reconstructed images,
as it did in this case. The calibration coregisters the physical COR (the axis that the detectors actually
orbit) with the image matrix used for the projection images.
COR errors can cause glaring artifacts when severe, but less severe errors still produce subtle artifacts. Even a COR mismatch of less than a full pixel can cause meaningful defects. Unfortunately,
the subtle defects do not have a characteristic appearance and can easily be missed or mistaken for
pathologic findings. That is why routine verification of COR using point source acquisitions is impor-
tant for reliable SPECT imaging.
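Conceptually, the COR calibration determines the offset between the mechanical axis of rotation and the center of the projection matrix, and the correction amounts to shifting each projection by that offset. A minimal sketch of applying such a correction follows; it assumes a constant, angle-independent offset measured from a point-source acquisition, whereas real systems may characterize the offset per angle.

    import numpy as np
    from scipy.ndimage import shift

    def apply_cor_correction(projections, cor_offset_pixels):
        """Shift every projection so the mechanical and matrix centers coincide.

        projections: array of shape (n_angles, n_bins); the offset may be sub-pixel.
        """
        corrected = np.empty_like(projections)
        for i, projection in enumerate(projections):
            # Linear interpolation (order=1) handles sub-pixel offsets.
            corrected[i] = shift(projection, -cor_offset_pixels, order=1, mode="nearest")
        return corrected

    # Example: a 0.7-pixel offset found during the weekly point-source check.
    projections = np.random.poisson(50.0, size=(64, 128)).astype(float)
    corrected = apply_cor_correction(projections, 0.7)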

Figure 10.1.  Center of rotation error. The reconstructed transaxial slice of a SPECT acquisition of a point source should result in a
single point, but in this case it forms a circle, indicating that the center of rotation calibration is incorrect.

Case 2. Filtered Back Projection Streak

One week after receiving I-131 therapy for metastatic thyroid cancer, a 42-year-old woman returns
for a posttherapy scan. Multiple metastatic lesions are seen in the lungs, but it is unclear from the
planar images (Figure 10.2) whether there is disease outside the lungs.
In order to localize the sites of disease SPECT/CT of the lower chest and upper abdomen is requested.
The images are reconstructed using filtered back projection (FBP) and they are truly atrocious (Figure
10.3). There is significant streak artifact and the localization of disease is essentially not possible.
There simply are not enough counts to result in a useful reconstruction.
The reconstruction is repeated with an ordered subset expectation maximization iterative algorithm and the improvement is remarkable (Figure 10.4). Now a metastasis can be clearly seen in
the liver, confirming that there is iodine avid disease outside the lungs.
This case not only illustrates the importance of iterative reconstruction, particularly for studies
with relatively low count rates, but it also shows the tremendous impact of the hybrid functional/

Figure 10.2.  Anterior and posterior whole-body images of a 42-year-old woman with metastatic thyroid cancer treated 1 week previously with sodium I-131. The posterior image shows diffuse uptake in the posterior lung fields (arrowheads) consistent with lung metastases. There is a more focal area of uptake on the right (arrow). It is unclear whether this is at the lung base or in the liver.

Figure 10.3.  The patient from Figure 10.2 undergoes SPECT/CT of the lower chest and upper abdomen. (A) Transaxial slices recon-
structed with filtered back projection show substantial artifact without useful information. (B) The CT image shows a low-density lesion in
the liver (arrow) but the transaxial SPECT and fused images show that the streaks on the SPECT image do not even overlap at the site of
the CT abnormality.

Figure 10.4.  The SPECT images from Figure 10.3 were re-reconstructed using an ordered subset expectation maximization iterative
algorithm and now the abnormality on CT (arrow) is clearly shown to have significant I-131 uptake, confirming that this is a liver
metastasis rather than disease in the lung.

anatomic imaging possible with SPECT/CT. The iteratively reconstructed SPECT image is of excellent
quality showing a solitary focus of intense uptake with little surrounding artifact. This reflects the
true distribution of the radioactive iodine 7 days after the treatment. However, this image does not
help in determining the location of the radioactivity because there are no anatomic landmarks on the
scan. With the CT it is trivial to see exactly where the disease resides.
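The difference between the two reconstructions can be reproduced on a desktop with a simple phantom experiment: forward-project an image, add Poisson noise so that counts are sparse, and reconstruct with a filtered and an iterative method. The sketch below uses scikit-image; its SART routine stands in for the ordered-subset algorithm used clinically, so this illustrates the principle rather than the clinical software.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, iradon_sart, rescale

    phantom = rescale(shepp_logan_phantom(), 0.5)            # 200 x 200 test image
    angles = np.linspace(0.0, 180.0, 64, endpoint=False)     # 64 projections

    # Forward-project, then simulate a low-count acquisition with Poisson noise.
    sinogram = radon(phantom, theta=angles)
    scale = 5.0                                               # few counts per bin
    noisy_sinogram = np.random.poisson(np.clip(sinogram, 0, None) * scale) / scale

    # Filtered back projection versus a few iterations of an iterative method.
    fbp_image = iradon(noisy_sinogram, theta=angles)
    iterative_image = iradon_sart(noisy_sinogram, theta=angles)
    for _ in range(3):
        iterative_image = iradon_sart(noisy_sinogram, theta=angles, image=iterative_image)
    # At low counts the FBP image shows streaks, whereas the iterative image is
    # visibly smoother -- the same qualitative behavior as Figures 10.3 and 10.4.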

Case 3. Noisy Images

A morbidly obese patient is referred for preoperative myocardial perfusion imaging; according to the
usual 1-day protocol she is injected with 10 mCi Tc-99m sestamibi at rest and subsequently under-
goes resting images. The technologist has difficulty processing the images because it is difficult to
clearly see the heart above background (Figure 10.5). The gated images appear even worse.
The patient subsequently undergoes pharmacologic vasodilator stress and is injected with 30 mCi
Tc-99m sestamibi at peak stress. Subsequent images, although not of optimal quality, are certainly
diagnostic (Figure 10.6). They show no significant defects on the stress images, although the gated
images remain of poor quality.
Although large patients may have slightly larger hearts than small patients, the size is not dramatically different, and so the total blood flow to the left ventricular myocardium at rest is fairly similar in a small patient and a large patient. However, the larger patient causes dramatically more scatter and attenuation, resulting in far fewer accepted counts in the image. The filtering in FBP removes some of the resulting noise and, as discussed in Case 2, an image reconstructed with an iterative algorithm generally requires fewer counts to achieve diagnostic quality than one reconstructed with FBP. However, there are
limits and below a certain point there are just not sufficient counts to reconstruct a diagnostic-quality
SPECT slice.
To understand why this is, consider that iterative reconstruction relies on maximizing the statistical
probability of activity being present at any given location. Said another way, iterative reconstruction


Figure 10.5.  A morbidly obese patient undergoes rest-stress myocardial perfusion imaging. (A) On the low-dose rest projection images
it is difficult to clearly delineate the heart (arrow) from the background structures. (B) The reconstructed gated SPECT images in the stan-
dard cardiac orientation are very noisy with significant heterogeneity making it difficult to differentiate artifact from true defects.
Figure 10.6.  The stress images from the patient in Figure 10.5; these images were done with a dose three-fold higher than the rest images (and the blood flow
to the heart at stress is higher than at rest, making the total number of counts from the heart even greater than three-fold higher). (A) The reconstructed static
stress images are much more homogeneous than the resting images and reveal no significant defects. (B) The reconstructed gated images remain noisy, although
less so than the resting images.

uses clues about the apparent distribution to make an educated guess at the actual distribution. Each
acquired count is a clue; the more clues, the more educated (and accurate) a guess can be made. When
there are too few counts, the result is less an educated guess and more a wild conjecture.
To ensure diagnostic image quality, the patient’s body habitus must be taken into consideration in
protocol design. For example, with respect to myocardial perfusion imaging, patients above a given size will not yield adequate-quality low-dose images and so should not be imaged with a 1-day protocol. Rather, such patients should undergo higher-dose imaging for both rest and stress on 2 separate
days. Most often stress imaging is done first; if the myocardium is normal at stress, resting images
are not needed. In this case, had the stress images not been normal, the patient would need to have
returned for high-dose resting images on a separate day because the resting images were not truly of
diagnostic quality.
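The count-statistics argument can be made more concrete: the relative (Poisson) uncertainty in a measurement of N counts scales as 1/sqrt(N), so tripling the injected dose, together with the higher myocardial blood flow at stress, buys a meaningful reduction in noise. The counts below are invented round numbers for illustration only.

    import math

    # Relative Poisson noise falls as 1 / sqrt(counts); values are illustrative.
    scenarios = [
        ("rest, low dose, large patient", 2.0e5),
        ("stress, high dose, large patient", 7.0e5),
    ]

    for label, total_counts in scenarios:
        relative_noise_percent = 100.0 / math.sqrt(total_counts)
        print(f"{label}: ~{relative_noise_percent:.2f}% noise on the total counts")
    # The noise in any single reconstructed voxel is far larger, because each
    # voxel receives only a small fraction of the total counts.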

Case 4. Iterative Reconstruction Errors

A patient with a carcinoid tumor is referred for an In-111 pentetreotide scan. In evaluating the pel-
vis looking for the colonic primary, the resident notes a photopenic area surrounding the bladder
(Figure 10.7).
Concerned that this photopenic area may obscure disease in the colon, the resident seeks to under-
stand the cause of the finding. Her attending first asks whether the finding is real or artifact. The
resident is unsure of the cause, but is confident that it is artifactual. The attending suggests requesting
images reconstructed with FBP; lo and behold the finding is absent (Figure 10.8) and no abnormality
is present in the pelvis.
Iterative reconstruction algorithms have substantial advantages over FBP and in most cases result
in better image quality than FBP, all other things being equal. However, it is important to recognize
that iterative reconstruction does involve a great degree of data manipulation, which can introduce
artifacts. One of the most common is that seen here wherein there appears to be a photopenic “halo”
around a very intense structure.
This artifact is actually quite easy to understand when it is remembered that iterative reconstruction
aims to maximize statistical likelihoods. When a structure like the bladder has excreted activity, it
often contains far more activity than adjacent structures. If the algorithm sees a great deal of activity
in the bladder and a small amount of activity that appears to be just next to the bladder, statistically
it is more likely that the activity actually belongs in the bladder rather than outside of it and so the
apparent activity is moved into the bladder leaving the photopenic halo around it. Unfortunately, in
this case the statistics are incorrect.

Figure 10.7.  Axial and coronal slices from an In-111 pentetreotide scan. Very little is seen in the pelvis except the bladder on the axial
slice and on the coronal slice an apparent halo appears about the bladder (arrow), presumably representing an artifact from the iterative
reconstruction algorithm.

Figure 10.8.  The same SPECT images from Figure 10.7 were re-reconstructed using filtered back projection (FBP). Although there is
now significant streak artifact characteristic of FBP, the apparent halo is resolved.

Another important point worth noting regarding FBP versus iterative reconstruction relates to
quantification. As this case illustrates, iterative algorithms adjust the apparent distribution of activ-
ity and so can improve image quality. In doing so, however, quantitative accuracy can be adversely
affected. Conversely, while FBP can result in difficult to interpret images, the relative amounts of
activity in various structures are really quite accurate. Because of this, some have advocated using
FBP when quantification is important. However, for the most part, modern iterative algorithms, when
provided with sufficient data, result in very accurate quantification.

Case 5. Motion Artifact

Myocardial perfusion SPECT imaging was requested for a 67-year-old man who complained of chest
pain on postoperative day 3 after abdominal surgery. He had neither electrocardiographic changes
during the pain nor biochemical evidence of myocardial infarction afterward. The patient underwent
a 1-day Tc-99m sestamibi rest-stress SPECT study with vasodilator pharmacologic stress without
incident. Images were reconstructed (Figure 10.9) and reviewed.
The interpreting physician noted a predominantly apical defect, which prompted her to review the
raw projection images and resulting sinogram images (Figure 10.10). These revealed discontinuities
consistent with motion during image acquisition. On questioning, the technologist noted that the
patient was in considerable pain from his recent surgery and so found it difficult to avoid moving dur-
ing acquisition. The technologist offered to reprocess the images using motion correction software,
but the attending physician said that it would be futile and requested repeat stress image acquisition.
SPECT involves the acquisition of projection data from multiple angles and subsequent reconstruc-
tion of that data into sections. The reconstruction algorithm assumes that the object being imaged is

Figure 10.9.  Reconstructed myocardial perfusion SPECT images reveal a primarily apical defect (arrow) along with characteristic “tails”
arising from the short-axis images (arrowheads).


Figure 10.10.  The raw projection images (A) from the SPECT displayed in Figure 10.9 show patient motion. It is easiest to appreciate
patient motion by viewing a rotating video clip of the projection images, but because that is not possible in a book, the red line can be
used as a reference mark. Note how in some frames the heart (arrow) is entirely above the line, whereas in others the bottom of the heart
intersects or even goes below the line. The sinogram (B) reveals discontinuities (arrows) caused by patient motion.

Figure 10.11.  The meteorologic symbol for a hurricane. Note how the “tails” appear similar to those seen on the SPECT images from Figure 10.9. The “hurricane sign” on the short axis images is highly suggestive of artifact from patient motion. Nilfanion, Hurricane-north. February 4, 2007. http://commons.wikimedia.org. Accessed August 22, 2013. Reproduced with permission.

stationary (in the exact same position) for all of the acquired projection images. If a patient moves
during the acquisition, artifacts result. In myocardial SPECT, the classic appearance of motion artifact
looks like the meteorologic symbol denoting a hurricane (Figure 10.11) and is typically most notice-
able near the apex.
Motion correction software does exist and is fairly standard on modern equipment. However, it is
important to recognize its limitations because it can adequately correct some forms of motion but is
entirely powerless in the face of others. Keep in mind that the reconstruction algorithm expects the

object to be seen from a specific angle for each projection image. If the object moves up or down in
the field of view but its angle with respect to the detector is unchanged, this can be effectively cor-
rected. However, if the motion changes the angle of the object being imaged (e.g., the patient rolls or
twists on the scanner table), no form of correction can fix it. We simply have not acquired projection
data from the correct angle. Motion correction can take acquired data and correct it; it cannot take
insufficient data and make it complete.
11 PET ACQUISITION ARTIFACTS

Because both positron emission tomography (PET) and single photon emission computed tomog-
raphy (SPECT) are tomographic techniques, they have some artifact causes in common. However,
because the camera designs are so different, many artifacts are unique to PET. Those that have com-
mon causes may have a very different appearance, such as photomultiplier tube (PMT) malfunctions
(Case 1). Some artifacts, such as attenuation correction issues (Case 5), are different from SPECT not
because of differences in technique but because of differences in photon energy. Finally, it is worth
noting that a fair number of the quality control tests done on modern PET/CT systems are quite
automated. The technologist may put a source in the field of view, but most of the acquisition and
analysis is done automatically and only a passing or failing message is displayed. Therefore, in some
cases (e.g., Case 2) there are no images, just a failing result.

Case 1. PMT Malfunction

As part of the daily quality control process, an image is acquired with a test source in place. Commonly
Ge-68 rods are used because Ge-68 decays (with about 271-day half-life) to Ga-68, which decays by
β+ emission. The image is displayed as a sinogram (Figure 11.1) and a diagonal band with no events
is noted.
A PET sinogram is slightly different from a SPECT sinogram because the PET sinogram shows lines of response (LOR), whereas a SPECT sinogram displays individual detected events. Thinking of a PET ring and all the possible LORs at a given angle, a specific location on the ring corresponds to a specific displacement from the center at that angle; as the angle changes, that displacement changes in such a way that a point source traces a curve on the sinogram (Figure 11.2).
It follows that when the PMT at a specific location along the ring goes out, the sinogram defect takes a diagonal shape because every LOR that involves that PMT has no counts. Whereas with
single-photon instruments a PMT defect is seen on a planar flood image, PMT issues in PET are gen-
erally seen as defects in the sinogram. However, they are equally striking defects that can usually be
readily identified.

Figure 11.1.  PET sinogram with a malfunctioning PMT. There is a diagonal white stripe across the image that represents every possible line of response involving the malfunctioning PMT. Image courtesy of Janet Reddin, PhD.


Figure 11.2.  Construction of a PET sinogram. (A) A ring of PET detectors with a point source in the field of view. The different colored
lines represent detected lines of response (LOR) from the point source. The sinogram records the angle of the LOR and the distance
along the X-axis of the detected event with respect to the center of the gantry. (B) The points are plotted on the sinogram as angle versus
displacement. The sinogram for a point source forms half of a sine wave.
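The half-sine shape in Figure 11.2 follows directly from the sinogram parameterization: an LOR at angle theta passing through a point (x0, y0) has radial offset s = x0·cos(theta) + y0·sin(theta). A minimal sketch of this relationship follows; the source position is arbitrary.

    import numpy as np

    # Radial offset of every LOR through a point source, as a function of angle.
    x0, y0 = 5.0, 2.0                                   # source position in cm, arbitrary
    angles = np.linspace(-np.pi / 2.0, np.pi / 2.0, 180)
    offsets = x0 * np.cos(angles) + y0 * np.sin(angles)

    print(f"Maximum radial offset: {offsets.max():.2f} cm")
    # Plotting `offsets` against `angles` reproduces the half sine wave of
    # Figure 11.2B; a dead PMT removes all LORs through one detector position,
    # which appears as a diagonal gap when the full sinogram is displayed.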

Case 2. Crystal Temperature Instability

On Monday morning, the technologists come in to find the PET/CT scanner room very hot, above
90°F. They call maintenance and turn on some fans. Shortly thereafter, the air conditioning is restored
and the moment the room gets below 80°F, they do their weekly camera shutdown and restart and
start on the quality control acquisitions.
By the time the first PET quality control test is ready to start, the room is down to normal tempera-
tures. Normally the system acquires an emission sinogram and uses that to adjust the individual PMT
gain values to create uniformity (because the PMTs have varying small amounts of drift over time).
If the changes in gain exceed preset thresholds, the quality control will fail. Indeed, in this case, the
quality control failed, so service was called.
Two hours later, the service engineer arrived and his first step was to shut the system down, restart,
and reinitiate the quality control process. This time everything passed and the system initialized nor-
mally. While waiting for the system to restart, the engineer noted the fans that had been set up near
the room and was told about the cooling situation. He said that the air conditioning issues explained
the problem perfectly.
All scintillating crystals have temperature-dependent light output, meaning that the amount of light
that is emitted in response to a scintillation event depends on the temperature of the crystal. Most
scintillators used in nuclear medicine have relatively little change over a modest change in tempera-
ture, meaning that the light output versus temperature graph is relatively flat around common room
temperatures. However, the curve for lutetium oxyorthosilicate (LSO) is steeper around typical room temperatures, and the PET/CT system in question uses LSO-based crystals.
Because the room was very hot, the crystals equilibrated at that temperature, which changed their light output. When the initialization was performed, the measured gain values were different enough from the expected values, because of the significant temperature difference, to exceed the preset thresholds. By the time the service engineer arrived, the temperature had stabilized at its usual level.
Because of the design of the scanner and differences in airflow, not all parts of the crystal ring
change temperature at the same rate. In the worst case scenario, as the temperature in the room
changes, light outputs could change at a different rate in different crystal elements over time, leading
to nonuniformities in images that may not be readily apparent. Therefore, stable room temperatures
are important for appropriate scanner function.

Case 3. Table Misregistration

A patient with a recent diagnosis of breast cancer is referred for imaging prior to a planned curative
resection. She is found to have a suspicious but equivocal lesion on her planar bone scan. To better
evaluate this, the patient is referred for F-18 sodium fluoride PET/CT (Figure 11.3), which reveals an
intensely radiotracer avid lesion in the T11 vertebra, along with other foci that may be degenerative
or metastatic.
Based on the PET/CT finding, the patient is referred for image-guided percutaneous biopsy of the
left side of the T11 vertebra. The biopsy is done and shows only normal bone. Because this was a
surprising finding, re-review of the PET/CT is requested. As part of this, fused sagittal and coronal
images are reviewed and reveal significant misregistration between the PET and CT images (Figure
11.4). When it is manually corrected, it shows that the suspect lesion is actually in the T10 vertebra.
The percutaneous biopsy is repeated and reveals metastatic adenocarcinoma compatible with a
breast primary. The initial curative-intent treatment plan is changed to a palliative plan and the
patient is able to start the most appropriate treatment regimen. The bone scan, PET/CT, and biopsy

Figure 11.3.  A F-18 sodium fluoride PET/CT is done in a woman with breast cancer and suspected osseous metastatic disease. A sus-
pect lesion on PET (crosshairs) is shown on the CT images to be in the T11 vertebral body. CT-guided needle biopsy was attempted based
on the PET/CT findings and showed only fragments of normal bone and marrow.


Figure 11.4.  (A) The coronal fused PET/CT images from Figure 11.3 show that there is significant misregistration between PET and CT.
Note that the edge of the skull on the PET is lower than on the CT. Similarly, the misregistration exists in the pelvic bones and bladder.
(B) Coronal fused PET/CT with registration corrected. (C) The PET abnormality is shown on the CT with corrected registration to be in the
T10 vertebra rather than T11 (crosshairs). Biopsy was done to confirm metastatic breast cancer.

results seemed discordant to the experienced clinician, prompting the re-review request. Had the clini-
cian not done so, the patient would have embarked on an inappropriate treatment, which would have
delayed the most appropriate care.
When interpreting PET/CT, it is critically important to remember that the images are not acquired
simultaneously and that correct fusion of the PET and CT components is based on two assump-
tions: that the scanner table is cross-calibrated so that a given spot on the table is in the same place on
both scans; and that the patient did not move between the scans. Normal quality control procedures
should prevent issues with the table calibration, but computer errors can and do happen. Patient
motion, furthermore, is fairly common. It is amazing how a patient can slide herself down a table
during the scan!
When reviewing the PET and the CT images side-by-side, it is easy to overlook misregistration.
The fused images are incredibly helpful in this regard because they really highlight these issues. For
that reason, it is helpful to have the fused images available during interpretation as a cross-reference
between the scans. A caution is in order to not rely too heavily on the fused images for actual inter-
pretation because they can obscure some findings while making others appear overly prominent.
However, for localization and registration, the fused images are invaluable.

Case 4. Scatter Correction Errors

A patient with melanoma is referred for staging F-18 fluorodeoxyglucose (FDG) PET/CT with imag-
ing from head to toe. As is standard practice in the facility, the patient is imaged with her arms down.
After completing the scan, the uncorrected images are reviewed and appear as expected, so the patient
is discharged. When the corrected images are reconstructed, a prominent defect is noted in the lower
abdomen, an entire area with almost no visualized activity (Figure 11.5).
Upon reviewing the images, the physician guesses that the patient was injected in the hand and that
part of the dose infiltrated. The technologist confirms that this is the case, wondering at the physician’s psychic abilities. The physician requests that the study be reconstructed again using an algorithm designed to account for patient motion. On the repeated reconstruction, no artifact is visible (Figure 11.6).
The artifact in this case was caused by scatter correction. As was discussed in Chapter 6, scatter
correction looks for true coincidences whose LORs appear to be outside the body. An LOR outside
the outline of the body is assumed to be caused by one or both of the photons being scattered prior
to detection. The likelihood of a true coincidence from any location being deemed scattered is used to
essentially subtract scatter from the image.
There are two general ways to define what is inside the body versus what is outside it: the CT component can be used, or a simple PET reconstruction can be used along with edge detection to determine the extent of the body. There are pros and cons to both approaches.
The CT approach is generally more accurate in defining the extent of the body. However, it is also
contingent on two assumptions: that the patient has not moved between the CT component and the
PET; and that no part of the patient is outside the CT field of view. Indeed, the field of view of the CT is usually smaller (sometimes considerably so) than the bore size of the scanner. Therefore, any

Figure 11.5.  Axial F-18 FDG PET and CT images from a patient with melanoma. The images through the lower abdomen/pelvis show a
marked defect where there are essentially no counts.

Figure 11.6.  Axial F-18 FDG PET and CT images from Figure 11.5. The PET images were re-reconstructed using an alternative algo-
rithm that accounts for patient motion during image acquisition. The defect is no longer present.

part of the patient that is outside the CT field of view is considered outside the body and LORs from
that area are ascribed to scatter. Because the patient had her arms down and part of the dose was infiltrated in the hand, there was a source of considerable activity outside the CT field of view whose counts were attributed to scatter. Therefore, in some areas, more events were subtracted as part of the scatter correction than were actually present, and the image is left with areas of no apparent counts.
The alternative approach to scatter correction uses the emission data to estimate the location of the
body and to determine what is inside and outside the body. Although this is more robust in the face
of misregistration or patient motion, it can also be less accurate. For example, if the patient moves during the acquisition, the estimated body outline may encompass all the locations the body occupied throughout the acquisition. This results in an inaccurate reconstruction but without the glaring artifact seen in this case. Although this may seem like a better solution, it may actually be better to have
an obvious artifact rather than inaccuracies that are not visibly apparent. The obvious artifact can be
noted and mitigated. The less apparent artifact may give spurious findings that are not questioned.

Case 5. Attenuation Correction Errors

A lung cancer patient undergoes curative resection but develops a postoperative arrhythmia requiring
placement of an implantable defibrillator/pacemaker. He recovers well from surgery but a few months
later develops new cough and shortness of breath, so he is referred for F-18 FDG PET/CT to evaluate
for recurrence (Figure 11.7). The scan is negative except for a new area of increased uptake in the left
anterior chest wall.
Of note, there is no corresponding area of increased uptake on the uncorrected images. Furthermore,
this area corresponds to the location of the pacemaker. This artifact is related to the use of CT for
attenuation correction. Keep in mind that attenuation correction calculates the probability of one
or both photons from an annihilation event at a given location being attenuated (absorbed) and


Figure 11.7.  Transaxial (A) and coronal (B) attenuation corrected PET images show an area of increased activity in the anterior chest
wall (crosshairs). There is no corresponding area of increased activity on the uncorrected transaxial (C) or coronal (D) PET images. In fact,
the area appears relatively photopenic as would be expected. The transaxial (E) and coronal (F) CT images show a pacemaker at the site of
the abnormality on the corrected PET images. The constellation of findings is typical of attenuation correction artifact.

not reaching the detector. This then is used to boost the signal from each location to normalize the
measured data.
The x-rays used for the CT typically have a peak energy of 80–140 keV (depending on the scan-
ner setting), with most of the individual x-rays having a considerably lower energy. The CT image is
a 12-bit image, which translates into having 4,096 possible levels of density ranging from the least
dense (air) to the most dense (bone and metal). The scale of densities measured on CT is tailored to
be biologically relevant, with water density at the middle of the range. With very dense objects (like a
metal pacemaker), the measured density on CT is at the top of the range. Therefore, CT cannot dis-
tinguish between a relatively flimsy piece of porous metal and a huge hunk of lead; they both appear
maximally dense.
The photons produced by the β+ annihilation event have an energy of 511 keV. Although these
highly energetic photons may be attenuated to a great degree by a huge hunk of lead, they pass
through the flimsy piece of porous metal with minimal attenuation. However, we have established


Figure 11.8.  Attenuation correction artifacts caused by intravenous or oral contrast. (A) Transaxial attenuation corrected PET, uncor-
rected PET, and CT slices through the upper chest in a patient who received intravenous contrast. The crosshairs show an area of
relatively increased uptake on the corrected images without abnormality on the uncorrected images in the area of pooled contrast in the
left subclavian vein. (B) Transaxial attenuation corrected PET, uncorrected PET, and CT images through the pelvis in a patient who recently
underwent upper gastrointestinal fluoroscopy with small bowel follow through using dense oral contrast. The retained contrast (cross-
hairs) causes artifact on the CT in addition to a hot spot on the corrected but not the uncorrected images. Note that physiologic uptake in
bowel loops (arrows) is present on both the corrected and uncorrected images.

that these two potential situations cannot be differentiated on the basis of the CT-based attenuation
map. Therefore, objects that are at the dense end of the CT spectrum (meaning they significantly
attenuate the lower-energy x-rays) tend to have less effect on the 511-keV photons, but the exact
effect on said photons cannot be determined. Thus, these objects are often overcorrected, leading to false areas of increased intensity on the corrected images.
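To see why very dense objects end up overcorrected, it helps to look at how CT numbers are converted to 511-keV attenuation coefficients. The sketch below implements a generic bilinear mapping of the kind described in the PET/CT literature; the breakpoints, slopes, and sample values are illustrative assumptions only, as the exact conversion is vendor specific.

import numpy as np

MU_WATER_511 = 0.096  # approximate linear attenuation coefficient of water at 511 keV, cm^-1
MU_BONE_511 = 0.172   # approximate value for dense bone at 511 keV, cm^-1

def hu_to_mu511(hu):
    # Bilinear conversion: one slope from air (-1000 HU) to water (0 HU),
    # a shallower slope above water for bone-like material.
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (hu + 1000.0) / 1000.0
    bone = MU_WATER_511 + (MU_BONE_511 - MU_WATER_511) * hu / 1000.0
    return np.where(hu <= 0.0, np.clip(soft, 0.0, None), bone)

# A pacemaker saturates the 12-bit CT scale at the same value a thick block of lead
# would produce, so both receive the same 511-keV coefficient even though their
# effect on the annihilation photons is very different.
for hu in (-1000, 0, 60, 1000, 3000):
    print(hu, round(float(hu_to_mu511(hu)), 3))

Because the mapping tops out, the correction applied over the pacemaker is the correction appropriate for the densest material the scale can represent, which is more correction than the flimsy metal actually warrants; hence the false hot spot.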
Luckily, these artifacts are almost always quite obvious and rarely have a significant impact on
image interpretation. Indeed, many have raised concerns about using oral or intravenous contrast
material for the CT, assuming that it will alter the attenuation map. Although very dense contrast
(e.g., in the subclavian vein or in a colon diverticulum after a barium enema) can cause visible attenu-
ation correction artifacts, these are, again, generally quite obvious (Figure 11.8). Although it is true that contrast not dense enough to cause a visible artifact can still affect the quantitative accuracy of the corrected images, investigations have shown the effect to be small, no more than a few percent, and well within the expected uncertainty of the measurements.

Case 6. CT Artifacts Affecting PET Reconstruction

A 35-year-old woman with newly diagnosed Hodgkin’s disease is referred for initial staging with F-18
FDG PET/CT (Figure 11.9). The patient weighs 157 kg (346 lb) and is 157 cm (62 in) tall, for a body
mass index of 63. A significant amount of her torso is outside the CT field of view, causing substantial artifact on the CT images, some of which carries over into the PET.
Because the spatial resolution of PET is lower than that of CT, the full resolution of the CT is not
required for the attenuation map. Therefore, in converting the CT to an attenuation map, there is a
significant amount of downsampling resulting in some blurring. This helps to average out the CT data
and so most CT artifacts do not translate into PET artifacts (Figure 11.10). In some cases, however,
the CT artifact is sufficiently severe that the attenuation corrected PET images also have visible arti-
facts. Even when there is no visible artifact, there are often nonuniformities in the quantitative data,
and so measurements on the corrected PET images in such cases should be interpreted cautiously.
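The protective effect of this downsampling can be mimicked with a simple block average; the downsampling factor, image size, and streak value below are illustrative assumptions, and actual systems apply their own filtering when generating the attenuation map.

import numpy as np

def downsample_for_attenuation_map(ct_slice, factor=4):
    # Block-average a CT slice by an integer factor, roughly mimicking the
    # resolution reduction applied when building the PET attenuation map.
    ny, nx = ct_slice.shape
    ny2, nx2 = (ny // factor) * factor, (nx // factor) * factor
    blocks = ct_slice[:ny2, :nx2].reshape(ny2 // factor, factor, nx2 // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 512 x 512 slice of water-equivalent tissue (0 HU) with a bright streak two pixels wide.
ct = np.zeros((512, 512))
ct[:, 255:257] = 3000.0
small = downsample_for_attenuation_map(ct, factor=4)
print(ct.max(), round(small.max(), 1))  # 3000.0 versus 750.0: the streak is diluted fourfold

A narrow streak is heavily diluted by the averaging and contributes little to the attenuation map, whereas a broad artifact, as in this case, survives the downsampling and propagates into the corrected PET.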


Figure 11.9.  Transaxial PET (A) and CT (B) images along with maximum intensity projection (C) and transaxial fused (D) images show
beam hardening and ring artifact on the CT that translates to significant PET artifact seen on both the transaxial and maximum intensity
projection images (arrows).


Figure 11.10.  Transaxial PET (A) and CT (B) images along with maximum intensity projection (C) and transaxial fused (D) images show
fiducial markers causing significant beam hardening artifact on the CT (arrows) without corresponding PET artifact.

The causes and remedies of CT artifacts are beyond the scope of this book. However, in general,
the more tissue the x-rays must traverse, the more likely there is going to be artifact. To some degree
this can be mitigated with higher tube current. Therefore, many centers use higher CT tube currents
for larger patients (and many modern scanners have automatic dose modulation that can increase
the dose for larger patients). It is to be noted that although automatic dose modulation can result in
favorable low doses in very thin patients, it can also deliver unexpectedly high doses in the largest
patients. Therefore, a thorough understanding of the constraints of the modulation system is impor-
tant before its use.
The largest patients push the limits of diagnostic image quality. At some point, the artifacts are too
great and preclude accurate interpretation. Several practical measures can help answer the relevant clinical
questions. For example, even if a patient is unable to raise his or her arms, if the area of interest is
in the lower abdomen, an effort can be made to make sure that the arms do not overlap that area to
minimize artifact through the key location. Furthermore, some scanners offer wider field-of-view CT
capabilities to help account for portions of the body outside the scan. These can be helpful and should
be used with larger patients. There are limits, however, and the image interpreter needs to develop the judgment to recognize when valuable information exists (albeit with limitations) and can be passed on to the referring clinician, and when the artifacts are so great that any interpretation may be incorrect because the input data are simply too flawed.
12 DOSE CALIBRATOR PITFALLS

The dose calibrator, in practice, is a fairly simple piece of equipment. A source of radioactivity of
known isotope is placed into the dose calibrator, the appropriate isotope is selected, and the display
shows the amount of radioactivity in the source. However, even with such a simple interaction with
the device, errors are possible. Indeed, because it is so simple and reliable, interactions with the dose
calibrator can be almost automatic. Scarcely looking, users drop in a dose, glance at the display, write
down the result, pull the dose out, and go on their merry way. Unfortunately, the more automatic and
the less cautious interactions become, the more room there is for error. Because the steps involved in
measuring a dose in a dose calibrator are few, the errors are typically simple, straightforward ones.
However, the impact of these seemingly mindless errors can be quite dramatic. The cases discussed
next may seem a bit contrived, but such errors do occur frequently. They can be easily identified and
remedied through vigilance, but can just as easily reach the patient in the absence of careful attention
to detail.

Case 1. Dose Calibrator Contamination

The on-call technologist is called in late one night to perform a hepatobiliary scan on a patient who
presented to the emergency department with acute right upper quadrant pain. After preparing the
dose, the technologist measures it in the dose calibrator and adjusts it until it has the appropriate
activity, 5 mCi. He injects the dose and begins the acquisition. He immediately notices that the count
rate is very low (Figure 12.1).
He checks the intravenous catheter and notes an excellent blood return. To be thorough, he images
the injection site and sees no evidence of infiltration (Figure 12.2). He has not yet discarded the injec-
tion syringe so he brings it back to the hot lab to measure it again in the dose calibrator. He drops it
into the dose calibrator and sees that the dose is reading 4.5 mCi, only slightly lower than what he had
previously measured. He makes certain that the dose calibrator is set on Tc-99m and removes the
syringe to verify again that it is empty.
Immediately after looking at the obviously empty syringe, the technologist glances back at the dose
calibrator display and to his horror sees that the display still reads 4.5 mCi. He lifts the dipper out
of the dose calibrator and finds that a vial of activity has been left in the dose calibrator. Because he
just quickly drew up his dose and dropped it into the dose calibrator without explicitly making sure
that the calibrator read only background while presumed empty, he did not notice that there was a
problem. Because there was already 4.5 mCi sitting in the dose calibrator, a 0.5 mCi dose was drawn
up instead of a 5 mCi dose.

Figure 12.1.  Anterior image of the abdomen after injection of Tc-99m mebrofenin for a hepatobiliary study. The liver (arrow) and cardiac blood pool (arrowhead) can just be made out, but the image is very noisy because of the low total number of counts acquired.

Figure 12.2.  An image of the forearm shows uniform tracer distribution in the soft tissues with no significant dose infiltration.

As is a common theme in this chapter, this was a mistake that could have been easily avoided by
simply explicitly thinking through every step in the process. Making sure one is starting from a repro-
ducible beginning point is critically important before making any measurement.
Although it should not be a common occurrence that radioactivity is left in a dose calibrator for
an unsuspecting subsequent user, it can happen. What is, perhaps, more common is contamination of
the dose calibrator. For example, a dose is drawn up and put into the dose calibrator. When reaching
to pull the syringe out, the syringe plunger is inadvertently depressed resulting in some of the activity
dripping into the dose calibrator. This activity will now increase the background reading. If a subse-
quent measurement is made without taking this into consideration, the measurement will be falsely
high.
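Both this error and contamination of the chamber reduce to the same failure: not confirming the reading with the chamber empty. A minimal sketch of that habit is shown below, using the activities from this case as example numbers; the tolerance of ten times the expected background is an arbitrary assumption.

def net_activity_mCi(reading_with_dose, empty_chamber_reading, expected_background=0.01):
    # Warn if the "empty" chamber reads well above background (a forgotten vial
    # or contamination), then return the background-subtracted activity.
    if empty_chamber_reading > 10 * expected_background:
        print("WARNING: chamber reads %.2f mCi while empty - investigate before dosing"
              % empty_chamber_reading)
    return reading_with_dose - empty_chamber_reading

# A vial left in the calibrator contributed 4.5 mCi, so a syringe containing
# only 0.5 mCi read as the requested 5 mCi.
print(net_activity_mCi(5.0, 4.5))  # 0.5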

Case 2. Wrong Setting Used on Dose Calibrator

The technologist is preparing for a ventilation-perfusion study using Tc-99m DTPA aerosol for the
ventilation images. Typically 30 mCi Tc-99m DTPA is put into the nebulizer. She grabs one of the
Tc-99m DTPA doses that has been delivered by the radiopharmacy and puts it into the dose calibra-
tor. Although the dose should be around 30 mCi, it is reading 13.4 mCi.
She removes the syringe from the dose calibrator and compares the sticker on the syringe with the
label on the lead pig. All are compatible with a Tc-99m DTPA dose. The date and time of calibration
and the calibrated dose are compatible with a dose of near 30 mCi at the current time. The technolo-
gist then calls the pharmacy to ask if there might be an explanation for this irregularity and is told
they have followed all of their procedures and do not suspect any issues.
The technologist looks at the syringe again to confirm that there is no liquid in the needle cap (it is
possible to accidentally push down on the syringe plunger when putting the top on the pig, squirting part of the dose into the needle cap and leaving residual liquid there). She puts the dose back
in the dose calibrator and it still reads 13.4 mCi. Taking a closer look, she slaps herself on the fore-
head: the dose calibrator is set for I-123 rather than Tc-99m. She pushes the Tc-99m button and is
rewarded with a reading of 29.5 mCi.
Dose calibrator displays are designed to show the isotope being measured at all times and in close
proximity to the readout of the measured dose. However, it is possible to miss this information and so
it is important to always explicitly verify the isotope setting. Many departments use Tc-99m almost
exclusively so it is easy to get into a habit of assuming the dose calibrator would be set to Tc-99m.
Habits can be hazardous and continuous vigilance and attention to detail are necessary.

Case 3. High Background Activity

The same technologist from Case 2 is now preparing to do the perfusion portion of the
ventilation-perfusion scan. She retrieves the Tc-99m MAA dose and puts it into the dose calibrator.
She expects a dose of 4 mCi but sees 11 mCi on the display. Still struggling to recover from her earlier incident, she immediately confirms that the dose calibrator is set to Tc-99m. Having heard what happened to her colleague on call, when another dose had been left in the dose calibrator, she verifies that it is empty but for her dose.
As she is looking into the dose calibrator to confirm that no other dose is in it, she notes that the
dose calibrator continues to read about 6 mCi. She is about to ask the advice of a colleague who is also in the hot lab, but the colleague has just finished what he is doing and has stepped out of the
room. The technologist looks back at the dose calibrator and it is back to showing only background
levels of radiation. She puts her dose back in and finds that it is almost exactly 4 mCi.
After completing the ventilation-perfusion study, she asks her colleague what he was doing in the
hot lab. He responds that he was preparing a high-dose I-131 therapy of about 350 mCi for a thyroid cancer patient (Chapter 15, Case 3). He was inspecting the vial containing the pill, causing a high background level in the hot lab and resulting in a spurious dose calibrator reading.
13 SINGLE PHOTON PITFALLS

Many pitfalls are possible in single photon imaging; some of the most common are highlighted here.
A key element throughout is the idea that nuclear imaging is functional imaging and so patient and
environmental parameters can dramatically affect the final image. Interpretation without knowledge
of these parameters can result in faulty output. Therefore, it is critically important that all variables
are considered. Although positron emission tomography (PET) is also a functional technique, in cur-
rent clinical practice most PET imaging is done at a single, static time point after injection, whereas
many single photon procedures involve dynamic imaging. Furthermore, currently patients are more
likely to have multiple single photon studies in a short period of time, whereas this is a rare situation
in PET.

Case 1. Prostheses

Four days after bilateral mastectomy, a woman with breast cancer develops acute shortness of breath.
She is referred for a ventilation-perfusion scan. Imaging is performed and reveals bilateral defects on
the anterior images (Figure 13.1).
These defects do not appear to be segmental and on the oblique views they look to be outside the
lungs. A review of the operative report confirms that the patient had placement of tissue expanders
to stretch the remaining skin of her anterior chest wall prior to implant reconstruction (Figure 13.2).
The defects caused by the tissue expanders do not appear the same as the defects caused by pulmo-
nary emboli. However, usually a relatively homogeneous attenuation over the lungs is expected and
the interpreter need only look for segmental defects. In this case, because there is an area of higher

Figure 13.1.  Multiple ventilation and perfusion images of the lungs. Large nonsegmental areas of apparently mildly decreased ventila-
tion and perfusion are seen in the anterior and anterior oblique projections (arrowheads). On the oblique images, the defects are shown
to span across more than one lung in keeping with an overlying cause of attenuation. Furthermore, there are small punctate round defects
(arrows) caused by metallic ports on the tissue expanders.

Figure 13.2.  Chest x-ray from the patient in Figure 13.1. The tissue expanders are shown with the arrowheads, which are the cause
of the large, mild nonsegmental defects. The infusion ports that allow for filling (arrows) cause the small punctate round defects on the
ventilation-perfusion images.


Figure 13.3.  Schematic of segmental defects in the lung with and without overlying sources of attenuation. (A) Normal lung with
uniform distribution of radiopharmaceutical. (B) There is a wedge-shaped perfusion defect. (C) There is a round area of attenuation from
the breast prosthesis. (D) Both breast prosthesis and a wedge-shaped perfusion defect are present. The defect appears more severe where
overlapping with the prosthesis.

attenuation overlying a portion of the lung, the interpreter needs to mentally perform attenuation
correction in this area because a segmental defect that is partially under the prosthesis does not have
a uniform appearance, as shown in Figure 13.3.
Not only is it important to know whether a patient has any prostheses or implants, but one must
also mentally visualize what effect those have on the images. With this information, it is possible to
minimize the negative impact on image interpretation. It is to be noted, however, that sometimes
the attenuation caused by prostheses degrades the image to a sufficient degree that interpretation is
greatly limited. That is, sometimes enough information is blocked by the prosthesis that no amount
of mental correction can replace information that simply is not present.

Case 2. Recent Prior Study

Having been recently diagnosed with lung cancer, a patient is referred for bone scan to assess for
possible osseous metastatic disease (Figure 13.4). The bones appear normal for the patient’s age, but
there is diffuse activity in the gut.
The first consideration is that the patient most likely had a recent prior study and there is excreted
activity in the bowel. The patient, however, denied having been to nuclear medicine for a study at this
or any other hospital. The medical record also did not reveal any recent prior studies that caused this
pattern of activity. The interpreting team was, therefore, forced to consider alternate causes of this
uncommon appearance. Because there was no obvious cause, a computed tomography scan of the
abdomen and pelvis was done and showed no abnormality (Figure 13.5).
A few days later, a report of a nuclear myocardial perfusion rest/stress study done the day before the
bone scan comes in from the patient’s local cardiologist. The patient was sent to the cardiologist for
preoperative clearance and the stress test was done for that purpose. Because the stress test was done

Figure 13.4.  Anterior and posterior whole-body bone scan images in a patient undergoing staging for lung cancer. There is an unexpected accumulation of radiotracer in the right mid-abdomen (arrows).

Figure 13.5.  Transaxial CT slices of the abdomen of the patient from Figure 13.4 show no abnormality.

in a cardiology office, the patient did not realize that it was a nuclear test or that it might interfere
with the bone scan.
In this era of multidisciplinary patient care, it is not uncommon for patients to see several physi-
cians and to undergo a myriad of tests. It would be unreasonable to expect patients to have a high-level understanding of the nuances of different tests that may affect one another. Although the
staff did make an effort to ask the patient if he had undergone any other nuclear medicine tests, per-
haps it would have been helpful to include questioning about other common studies, such as nuclear
stress tests, that involve the administration of radiopharmaceuticals.

Case 3. Contamination

A bone scan is performed in a patient with prostate cancer (Figure 13.6). There is an intense focus of
increased uptake overlying the inferior pubic ramus on the whole-body images. The resident supposes
that the focus is most likely urinary contamination but because it overlies a bone would prefer to con-
firm that there is no underlying abnormality in the bone. He therefore requests that the technologist
ask the patient to wipe his skin and then obtain a spot image of the pelvis.
The spot image is done and shows no significant change in the focus. The attending then intervenes
and suggests that the technologist cover the patient with a blanket, ask him to pull his pants and underwear down, and then repeat the image (Figure 13.7), on which the focus has resolved and the underlying bone is normal.
Urinary contamination is very common. Many radiopharmaceuticals are excreted by the kidney
and so fairly high concentrations are possible in the urine. Consider a bone scan where something in
the neighborhood of half the dose goes to the bones and the other half is excreted in the urine (these figures are, of course, not entirely accurate but close enough for the purposes of this example). Imagine spreading half the dose uniformly throughout the entire volume of bone, then putting the other half of the dose into about

Figure 13.6.  Anterior and posterior whole-body bone scan images show a focus of abnormal activity overlying the right inferior pubic
ramus (arrow).

Figure 13.7.  Anterior and posterior spot images of the pelvis after moving the patient’s clothing reveals that there is no abnormality in
the pubic ramus and confirming that the finding was caused by contamination.

a liter of urine. Obviously the concentration of radioactivity in the urine per unit volume is much
higher than that in bone. Therefore, even a tiny drop of urine can appear as intense or more intense
than the surrounding bone.
In most cases, urinary contamination is fairly obvious and can be identified as such without addi-
tional imaging. In some cases, when it overlies bone, additional images are reassuring. However, a
common mistake is to ask the patient to wipe their skin and then re-don the same clothing. Most of
the time, the urinary activity is on the clothing rather than the skin. Futile imaging and patient dis-
comfort and embarrassment can be avoided by a quick image with the clothing removed.

Case 4. Poor Dynamic Timing

A patient with chronic renal obstruction is being worked up for possible intervention and so nuclear
renography is requested. The study is started with a 60-second flow image acquired at one frame per
second and summed into three-second frames (Figure 13.8), followed by imaging at 30–60 seconds
per frame for a further 20–40 minutes; commonly a diuretic is given 20 minutes into a 40-minute
acquisition.
The attending physician quickly glances at the images and deems them “useless.” When asked to
expand on her criticism, she explains that a critical aspect of these scans is the initial flow image,
typically done as 1–3 seconds per frame, with the images carefully evaluated for the timing of radio-
activity reaching the kidneys relative to its appearance in the aorta. In this case, the radioactivity had
already reached the kidneys when the imaging was started and so this important piece of information
was missing.

Figure 13.8.  Posterior dynamic flow images from a renogram at 3 seconds per frame. There is already perfusion of both kidneys on the
first frame so timing of the renal perfusion with respect to tracer arriving in the aorta cannot be assessed.

Once the flow image is started too late, there is no way to recover that data. Most experienced tech-
nologists understand how long their cameras take to start an acquisition once the button is pressed
and carefully time them so that the acquisition starts just as they are starting the injection. However,
sometimes the acquisition takes a little longer or perhaps the technologist presses start just after
instead of just before the injection.
Some technologists try to give some leeway, pressing start and waiting a few seconds before the
injection. This minimizes the risk of missing the initial flow. However, it also effectively shortens the
duration of the flow image. For example, in a three-phase bone scan in Figure 13.9, the technologist
waited a few seconds too long to inject the tracer after the start of the acquisition and so the flow has
barely reached the area of interest by the end of the image.
Dynamic studies require careful timing and a great deal of organization. Indeed, many tasks need
to happen at essentially the same time. It is incredibly impressive to watch an excellent technologist
prepare and execute a dynamic study and incredibly frustrating to see one bungled. For optimal
execution, each step must be thought out and prepared and all necessary equipment laid out. It does

Figure 13.9.  Anterior flow images of the knees at 3 seconds per frame. The radiopharmaceutical injection was done late and so activity
does not begin to arrive until the very end of the flow acquisition, precluding complete evaluation.

no good to start the acquisition and radiopharmaceutical injection at just the right moment only to
realize that there is no saline flush with which to complete the injection or to find that the intravenous
line was clamped, wasting precious seconds getting it unclamped. A great deal of information is
available in dynamic imaging but the quality of that data is completely dependent on the quality of
the acquisition.

Case 5. Background Activity

Complaining of chronic postprandial bloating, abdominal pain, and nausea, a diabetic patient is
referred for a solid gastric emptying study to evaluate for possible gastroparesis. In this study, the
patient eats a standard test meal of eggs, toast, jam, and water in which the eggs have been labeled
with a radiopharmaceutical. Anterior and posterior images of the abdomen are acquired immediately
after completing the meal and every hour until 4 hours after the test meal is completed.
Because the stomach does not run parallel to the anterior or posterior abdominal wall, the depth
of the activity from the detector in the anterior or posterior projection changes as the food moves
through the stomach. Indeed, the activity starts near the back of the abdomen before moving forward
(anteriorly) and then heading back again as it empties into the duodenum. If only anterior images
were acquired, it would appear as though the amount of radioactivity increased over time until it
emptied the stomach and vice versa for posterior images (Figure 13.10).
Because both anterior and posterior images are acquired, it is possible to mathematically remove
the influence of depth of activity with respect to either detector. This is done by calculating the geo-
metric mean, which is the square root of the product of the two conjugate views:

√(Anterior × Posterior)

A simple arithmetic mean (average) does not work because the number of counts acquired falls off exponentially with depth in tissue, not linearly: the anterior view is attenuated over the depth from the activity to the front of the body and the posterior view over the remaining thickness, so the geometric mean of the two depends only on the total body thickness and not on the depth of the activity. Although the geometric mean can be


Figure 13.10.  (A) Schematic of the torso viewed from the side, showing the movement of the radiolabeled solid meal through the
stomach. The food initially enters the stomach from the esophagus near the patient’s back (Post) and moves anteriorly (Ant) through
the stomach. Toward the end of the stomach it moves posteriorly again before emptying into the duodenum. (B) Graph of the measured
intensity of activity in the stomach (y-axis) versus time (x-axis) when measured from the anterior only. For this example, the total activity
in the stomach is assumed to be constant without any emptying. As the activity moves from back to front the measured activity increases,
then begins to decrease as the activity moves posteriorly. (C) The same graph imaged from the posterior, showing the opposite situation.
(D) The geometric mean image is a straight line because the depth of the activity does not matter.


Figure 13.11.  Serial anterior (A) and posterior (B) images from a gastric emptying study. Each frame is a 1-minute acquisition and
images are each done 1 hour apart, so the total imaging time was 4 hours. Frame four (done 3 hours into the study) appears different from
the rest with very high background activity.

applied to any pair of views, most often it is done with anterior and posterior for these studies simply
because that pair of views makes it easiest to separate the gastric from the bowel activity. In this case, the five pairs of anterior and posterior images are acquired (Figure 13.11).
The initial and 4-hour images appear correct and the residual activity at the end of imaging is
calculated and shows a delayed gastric emptying with more than 10% remaining in the stomach at
4 hours (Figure 13.12). However, the 3-hour image is more interesting; it appears different from the
others and looks to have generally more activity everywhere. Indeed, even the lungs appear darker
than the rest of the body.
Overall, it appears that the background is very high at this one time point. A second patient who
was scanned immediately afterward had a similar appearance. In trying to figure out what was going
on at the time, the physician remembered that he was in the process of consenting a patient for I-131
therapy at the time that the image was being acquired. The therapeutic I-131 dose was with the technologist in the hallway, waiting for the completion of the consent process, quite near the room in which the gastric emptying images were being acquired. Even though the dose was shielded, enough photons were able to escape the shielding to be detected by the camera, giving the appearance of a high background. Because the photon energy of I-131 is so high, it is not effectively collimated by the LEHR collimators used for the gastric emptying study. That is why it appears as a diffuse background rather than a focal source of activity. The reason the lungs appear more intense than the rest of the body is that the air in the lungs attenuates these background photons less than the soft tissues in the body.
When doing quantification, it is often important to correct for background activity. A  camera
turned on in a room with no obvious source of radioactivity records some counts, and these add noise to the measurements. In a gastric emptying study, the effect of this noise is usually small enough that background correction is not necessary. In this case, however, by drawing a background region
on each frame and subtracting it from the counts in the stomach region of interest on that frame, it


Figure 13.12.  Geometric mean of anterior and posterior counts at each imaging point from Figure 13.11. Note that the counts at 3
hours are higher than would be expected causing the linear fit to be poor. This is caused by the high background activity in that image.

is possible to correct for the high background on one of the frames (Figure 13.13). Although it does
not change the overall diagnosis in this case, it does certainly change the calculation of the residual
activity at 3 hours.
Subtracting the background is not simply a matter of number of counts in one region minus num-
ber of counts in another because the regions are not always the same size. Therefore, one must first
measure the number of counts in a region and the number of pixels included in the region to calculate
counts per pixel. The background counts per pixel are subtracted from the organ counts per pixel and
the new corrected counts per pixel are multiplied by the number of pixels in the region to determine
the corrected number of counts in the region. When performing background correction in conjunc-
tion with geometric mean, the background correction is first performed independently on the anterior
and posterior images and the geometric mean is calculated using the corrected counts.
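A minimal sketch of the calculation for one time point is shown below; the region sizes and counts are made-up numbers, and decay correction (which clinical software also applies across the 4 hours) is omitted.

import math

def background_corrected_counts(organ_counts, organ_pixels, bkg_counts, bkg_pixels):
    # Subtract the background counts per pixel from the organ counts per pixel,
    # then scale back up by the number of pixels in the organ region.
    corrected_per_pixel = organ_counts / organ_pixels - bkg_counts / bkg_pixels
    return max(corrected_per_pixel, 0.0) * organ_pixels

def geometric_mean(ant_counts, post_counts):
    # Depth-independent estimate of gastric counts from the conjugate views.
    return math.sqrt(ant_counts * post_counts)

# One time point: a 1,500-pixel stomach region and a 500-pixel background region.
ant = background_corrected_counts(90_000, 1_500, 6_000, 500)
post = background_corrected_counts(140_000, 1_500, 9_000, 500)
print(round(geometric_mean(ant, post)))  # corrected gastric counts for this time point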

Figure 13.13.  Serial anterior (A) and posterior (B) gastric emptying images with a region of interest defined for background subtraction. (C) The resulting
graph shows a much better linear fit with less of an outlier for the 3-hour image. However, it is important to note that it is not perfectly corrected.
14 PET PITFALLS

As was stated in the introduction to Chapter 13, both single photon and positron emission tomog-
raphy (PET) imaging are functional techniques. This cannot be overstated and a poor understanding
of the patient’s biology at the time of the scan is the cause of many (probably most) pitfalls for single
photon and PET imaging. Because most clinical PET involves static imaging rather than dynamic
imaging, some of the pitfalls seen in the prior chapter are not commonly encountered. However, tim-
ing of scans in relation to other events in the patient’s life (therapies, eating, exercise) has a dramatic
effect.

Case 1. Infiltration

A 43-year-old man with left-sided lung cancer is referred for restaging with F-18 fluorodeoxyglucose
(FDG) PET/computed tomography (CT) after completion of surgery and adjuvant chemotherapy. The
technologist noted increased resistance toward the end of the radiopharmaceutical injection into the
left antecubital fossa. The scan was performed and there is, indeed, infiltration of a portion of the dose
into the left antecubital fossa (Figure 14.1).
Although the infiltration is unsightly, the remainder of the scan appears as expected. The resident
believes that it can be qualitatively interpreted but suggests that any measurements on the PET por-
tion would be incorrect because of the infiltration. Indeed, he concludes that all measurements will be
falsely low because a less-than-expected amount of activity was injected. On discussing this with his
attending, he is tasked with estimating the effect of the infiltration on quantification.
Absolute quantification is rarely performed on clinical PET/CT studies. Rather, most measurements
are based on standardized uptake value (SUV). The SUV is calculated by the number of counts in

Figure 14.1.  Transaxial PET, CT, and fused images along with maximum intensity projection imaging of an FDG PET/CT acquisition.
The crosshairs show a partial infiltration of the injected dose in the left antecubital fossa.

the region of interest (in units of activity per milliliter) divided by the injected dose (decay corrected
to the time of the scan) per unit of body weight (assuming that the tissues in the body have a density of about 1 g/ml, so that grams and milliliters are interchangeable).
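A minimal sketch of the body-weight SUV calculation just described, including decay correction of the injected dose to scan time, is shown below; the activities, times, and weight are illustrative values rather than data from this case.

import math

F18_HALF_LIFE_MIN = 109.8  # physical half-life of F-18 in minutes

def decay_corrected_dose_MBq(injected_MBq, minutes_since_injection):
    # Injected activity decayed to the time of the scan.
    return injected_MBq * math.exp(-math.log(2) * minutes_since_injection / F18_HALF_LIFE_MIN)

def suv_body_weight(concentration_kBq_per_ml, injected_MBq, minutes_since_injection, weight_kg):
    # SUV = tissue concentration divided by (decayed injected dose per gram of body weight),
    # assuming a tissue density of about 1 g/ml so grams and milliliters are interchangeable.
    dose_kBq = decay_corrected_dose_MBq(injected_MBq, minutes_since_injection) * 1000.0
    return concentration_kBq_per_ml / (dose_kBq / (weight_kg * 1000.0))

# A 10 kBq/ml lesion, 370 MBq injected, imaged 60 minutes later, in a 70 kg patient.
print(round(suv_body_weight(10.0, 370.0, 60.0, 70.0), 2))  # about 2.8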
It is to be noted that it is possible to correct for lean body mass or body surface area instead of body
weight. However, in current clinical practice, correcting for body weight is most common. Moreover,
it has been shown to be a simple, robust, reproducible technique. Although SUV as defined above is
essentially the mean value for the amount of activity in a region of interest, it is also possible to mea-
sure the SUV of the most intense voxel within the region of interest. This is referred to as SUVmax
and is the measurement most often reported in current clinical practice. Many other SUV-based mea-
surements are possible but are beyond the scope of this discussion.
The resident is thinking of ways to make measurements that do not require absolute quantifica-
tion. He tries to draw a three-dimensional region of interest around the activity in the infiltration and
one around the entire imaged portion of the patient. The infiltration has a max SUV of 23.3, a mean
SUV of 7.09, and a volume of 4.9 cm3. The region drawn around the entire body has a mean SUV
of 0.72 and a volume of 53,300 cm3. Therefore, he calculates that 0.09% of the activity in the scan
is within the infiltration. Thus, he concludes that it is unlikely to contribute significantly to image
quantification.
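His reasoning can be reproduced directly from the numbers above: because SUV is proportional to activity concentration, mean SUV multiplied by volume is proportional to the total activity in a region, and the proportionality constant cancels in the ratio.

# Values reported for this case.
infiltration_mean_suv, infiltration_volume_ml = 7.09, 4.9
body_mean_suv, body_volume_ml = 0.72, 53_300.0

infiltration_activity = infiltration_mean_suv * infiltration_volume_ml  # proportional to activity
body_activity = body_mean_suv * body_volume_ml

fraction = infiltration_activity / body_activity
print("%.2f%% of the imaged activity is within the infiltration" % (100 * fraction))  # ~0.09%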
The attending applauds his logic, but goes on to explain that the infiltration represents an even smaller percentage of the injected dose. First of all, the entire body was not imaged in the scan, so the calculation did not compare the amount of activity in absolute units with the injected dose (nor did it account for activity outside the field of view of the scan).
There is also an issue of biology. Typically there is about a 1-hour interval between F-18 FDG injec-
tion and scan initiation. During this time, radiotracer is not only distributed throughout the various
tissue types, it is also excreted into the urine. It is standard practice to have the patient void before
imaging and so some of the tracer has left the body (typically in the neighborhood of 10%). Whereas
the intravenous activity can distribute and may pass through the kidney and be excreted, most of the
infiltrated portion of the dose remains at the infiltration site. Therefore, it is not significantly dimin-
ished by tracer excretion as is the activity elsewhere in the body.
Finally, the relative intensity displayed on the scan is activity per unit volume. Because infiltrations
are usually minuscule in volume, they can have incredibly high concentration and very little absolute
activity. Consider that it is not uncommon for a dose to be less than 1 ml in volume, whereas a
patient is somewhere on the order of 70 L in volume (assuming 70 kg and 1 g/ml, which is the same
as 1 kg/L). Imagine that 1% of a 1-ml dose was infiltrated and that it remained in a volume of 0.01
ml. The concentration of the dose would remain unchanged. Assume that the other 99% of the dose
distributed evenly throughout the 70 L of the body. Even though 99% of the dose reached the circulation,
the infiltration has a concentration thousands of times higher than what is seen elsewhere.
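Worked through with the deliberately simplified volumes above, the numbers look like this:

infiltrated_fraction = 0.01      # 1% of the dose stays at the injection site
infiltration_volume_ml = 0.01    # in a 0.01 ml pocket of tissue
body_volume_ml = 70_000.0        # 70 kg patient at about 1 g/ml

# Treat the whole dose as 1 arbitrary unit of activity.
infiltration_concentration = infiltrated_fraction / infiltration_volume_ml
body_concentration = (1.0 - infiltrated_fraction) / body_volume_ml

print(round(infiltration_concentration / body_concentration))  # about 70,000

The infiltration is tens of thousands of times more concentrated than the rest of the body even though it holds only 1% of the dose, which is why it can look so bright without materially changing whole-body quantification.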
Of course, this is a flawed example because the tracer does not distribute uniformly and very little
goes into the fat and cortical bone. Furthermore, the scanner cannot accurately image something
with a very tiny volume and high concentration; the measured activity is averaged with adjacent tis-
sue. However, the thought experiment illustrates why an infiltration can appear very bright but still
have no significant effect on quantification. An infiltration that contains enough activity to impact

quantification likely also has enough activity to cause significant artifacts in the image (if it is in the
field of view).
One final important point: it is common practice to image the body with the arms up over the head.
It is also typical to perform injections into forearm veins. Therefore, the injection site is frequently
outside the field of view. Because of this, infiltrations may not be recognized on the scan. For this
reason, good intravenous access with confirmation of blood return prior to injection and vigilance
during injection and flushing is critically important. If significant infiltration is suspected, imaging of
the injection site may be helpful.

Case 2. Treatment Effect Mimics New Disease

A 53-year-old man with tonsillar cancer undergoes definitive chemoradiotherapy. Three months after
completion of successful treatment he returns for F-18 FDG PET/CT for evaluation of response to
therapy. The scan is done and shows no significant uptake in the radiation bed, but intense focal
uptake on the contralateral side (Figure 14.2).
The resident suspects that the patient has developed contralateral disease, either a new primary or
local spread. On reviewing the images, the attending declares the patient cancer free. The resident is
a bit flustered and asks for an explanation. The attending opens the next patient (who is referred for
initial imaging of rectal cancer) and scrolls to the tonsils (Figure 14.3). He asks the resident if the
tonsils are normal or abnormal. After a quick glance, she responds that the tonsils are normal.
He asks her to cover the right half of the patient with her hand and to describe how it looks com-
pared with the prior patient. Much to her surprise, it looks pretty much the same. Now she is con-
vinced that the uptake on the prior scan was normal, but does not quite understand why it looks so
strikingly asymmetric. The attending asks why there is uptake in the tonsils in the normal patient, to
which the resident replies that there is often inflammation there as the immune system processes the
many antigens that enter the body through the mouth. The attending agrees and adds that immune
cells are among the most sensitive to radiation. The radiation treatment destroyed the cancer, but it
also wiped out the normal cells on the treated side. The untreated side, which is perfectly normal, now appears abnormal because of the asymmetry.
Although this is not exactly a physics case, it is included to underscore yet again that nuclear medi-
cine imaging is functional imaging. Many pitfalls are related not to any problem with the camera or
to disease processes themselves, but to aberrations in the normal distribution of the radiopharma-
ceuticals due to downstream effects of the primary disease process or, often, to the treatment itself.

Figure 14.2.  Transaxial FDG PET, CT, and fused images in a patient with treated left tonsillar cancer. The crosshairs show intense focal
uptake in the right tonsil.

Figure 14.3.  Transaxial fused, FDG PET, and CT images in a patient with rectal cancer. The normal physiologic uptake in the tonsils (arrows), although intense, is very symmetric.

Case 3. Misregistration and Attenuation Correction

A morbidly obese 42-year-old woman is referred for cardiac clearance before bariatric surgery. She is
hypertensive, has hypercholesterolemia, and had a brother who suffered a myocardial infarction at
the age of 47. Because of these risk factors, she is referred for myocardial perfusion imaging. Given
her morbid obesity, PET/CT using Rb-82 chloride was recommended over single photon emission CT.
The patient tolerated the procedure with only mild symptoms from the vasodilator pharmacologic
stress and had no electrocardiographic changes during stress. The imaging was completed and was
processed (Figure 14.4), showing a reversible lateral wall defect.
As part of routine study quality control and interpretation, the attending physician notes misreg-
istration between PET and CT on the stress images, with the lateral wall on the PET overlying the
lung on the CT (Figure 14.5). She requests that the misregistration be corrected and the study be
reprocessed. When processing is finished, the defect is resolved and the study is entirely normal. The
patient is given a normal report and goes on to successful surgery.
We have already discussed that attenuation correction is so important with PET imaging because
the probability is fairly high that one of the photons in a pair coming from deep within the body will

Figure 14.4.  Stress (rows 1, 3, and 5) and rest (rows 2, 4, and 6) Rb-82 myocardial perfusion PET images show a defect in the lateral
wall on the stress images (arrows) that is normal on the rest images.

Figure 14.5.  Transaxial Rb-82 PET, CT, and fused images along with maximum intensity projection image. On the fused image, it is
apparent that there is misregistration such that the lateral wall of the heart on the PET appears to be outside the heart borders on the CT. This
in turn causes the attenuation correction to decrease the relative activity in the lateral wall causing the reversible defect seen in Figure 14.4.

be scattered or attenuated. On uncorrected images in the abdomen, for example, there is a relatively
smooth slope of the average intensity going from superficial to deep because the tissue density is rela-
tively constant. In the chest, however, the heart and blood are approximately water density, whereas
the lung is close to air density. This dramatic change in density causes substantial changes in the attenua-
tion patterns such that activity in the lungs does not need much correction and activity in the heart
requires much more correction. When part of the myocardium appears to be in the lung, that part
is significantly less corrected compared with the parts that are correctly registered to myocardium, resulting in an apparent defect.
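The size of the effect can be estimated with a back-of-the-envelope calculation. Using approximately 0.096 cm^-1 for water-density tissue and roughly 0.03 cm^-1 for inflated lung at 511 keV, and an assumed 15-cm path (illustrative round numbers, not patient-specific values), the correction factors differ by roughly a factor of three:

import math

MU_SOFT_TISSUE = 0.096  # cm^-1 at 511 keV, approximately water density
MU_LUNG = 0.03          # cm^-1 at 511 keV, inflated lung (illustrative)

def correction_factor(mu_per_cm, path_cm):
    # Multiplicative correction for photons traversing a uniform medium:
    # the inverse of the surviving fraction exp(-mu * d).
    return math.exp(mu_per_cm * path_cm)

path_cm = 15.0
print(round(correction_factor(MU_SOFT_TISSUE, path_cm), 1))  # about 4.2 if mapped to soft tissue
print(round(correction_factor(MU_LUNG, path_cm), 1))         # about 1.6 if mapped to lung

A lateral-wall voxel misregistered over lung therefore receives only about a third of the correction it needs, which is more than enough to create the apparent perfusion defect.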
Therefore, reviewing the fused images to confirm appropriate coregistration is mandatory as part of
the reconstruction process for cardiac PET/CT. Although misregistrations can happen any time, many
practices perform a single CT immediately before the resting PET acquisition and then encourage the
patient not to move during the stress and subsequent stress imaging. This permits reducing the overall
radiation dose from the study because only one CT is needed. However, patient motion during the
stress is not uncommon.
Misregistration can be corrected before processing in most cases. Furthermore, dose reduction is a
commendable goal. However, a low-dose study with a wrong result is worse than a higher-dose study
with the correct result. Therefore, if there is significant patient motion, repeat CT may be the best
course of action.

Case 4. Respiratory Motion Artifact

A 59-year-old man with newly diagnosed colon cancer is referred for staging with F-18 FDG PET/CT
before planned surgery. The clinician notifies the resident that the patient is known to have a primary
malignancy in the colon along with metastases limited to the right liver and that they are planning
resection of the colon and the right lobe of the liver as long as the PET/CT reveals no disease outside
these locations. The resident reviews the images and informs the clinician that unfortunately there is
metastatic disease in the right lower lobe of the lung (Figure 14.6).
The attending sits down to review the case and, after doing so, suggests that the resident call the
clinician back and let her know that there is no disease in the lung and that the surgery can proceed
as planned. Unfortunately, the surgeon now has cold feet and has requested a diagnostic CT of the
chest (Figure 14.7), which confirms a metastasis at the hepatic dome with no abnormality in the lung.

Figure 14.6.  Transaxial FDG PET, CT, and fused images along with maximum intensity projection image. The crosshairs show an
abnormal focus of increased activity that localizes to the right lung base on the CT images. Note that the registration of PET to CT appears
to be accurate (i.e., this finding is not caused by misregistration).


Figure 14.7.  (A) Transaxial diagnostic CT slice from the patient in Figure 14.6 confirms the presence of a metastasis in the posterior
liver dome. (B) Transaxial slices through the lungs show no pulmonary nodule.

With a modern PET/CT scanner, the CT component from the base of the skull to the proximal
thighs requires only tens of seconds, often well less than a minute. Therefore, although the patient is
free to breathe during the CT, the rapidity of the acquisition results in something close to a snapshot.
The fastest available cameras still require about a minute per bed position with varying amounts
of overlap between subsequent bed positions. Therefore, the image of the base of the lungs and the
hepatic dome takes at least 1 minute during which the patient may take more than 10 breaths.
Whereas the CT is essentially a snapshot of one phase of the respiratory cycle, the PET is an aver-
age of the entire cycle. In fact, it is something of a weighted average because we naturally spend more
time at end expiration than at end inspiration (typically we take a breath, breathe it out and then wait
until we need to take another breath). In this case, the CT was performed during inspiration, so the
diaphragm was flattened and the liver dome was pushed down. The PET was done during breathing
and so the liver dome was, on average, higher than it was on the CT. Therefore, the metastasis in the
liver dome appears to be in the lung.
Although respiratory gating of both the PET and CT is possible to correct such artifacts, it requires
a much longer acquisition time and respiratory gated CT results in much higher radiation dose. In
most cases it is not warranted as long as the reader understands the conditions under which the
images were acquired and the potential ramifications of cyclical motion, such as respiration.
15 THERAPY PITFALLS

Radiopharmaceutical therapy is an important part of nuclear medicine. Although the treatment dose
itself may or may not be imaged, this chapter is focused on pitfalls related to the treatments them-
selves. Most current radiopharmaceutical therapies are used for the treatment of various cancers and
are typically given in doses that can cause side effects. Although specific side effects depend on the
treatment in question, most cause some degree of bone marrow suppression and many can result in
gastrointestinal symptoms, such as nausea and vomiting.

Case 1. Empiric Dosing Exceeds Safe Limits

A 63-year-old man presents with a pathologic fracture of the right femur. He is eventually found to
have metastatic thyroid cancer and undergoes thyroidectomy. After surgery and thyroid hormone
withdrawal, he is treated empirically with 200 mCi I-131 (a common dose for patients with meta-
static disease); the posttherapy scan is shown in Figure 15.1. Five weeks after treatment he notices easy bruising. His physician obtains a complete blood count and he is found to have a platelet count of 17,000. He is carefully monitored and his counts recover spontaneously.
One year later, the patient’s thyroglobulin begins to rise consistent with progressive thyroid cancer.
Because of the toxicity he experienced after the first dose, he is referred for dosimetry. In this pro-
cedure, the patient is given a small dose of I-131 and radioactivity in his blood and whole body are
measured every day over 4 days. The clearance times for the blood (a surrogate for the bone marrow)
and the whole body are calculated, and from these the maximum tolerated dose is derived. Most commonly,
the maximum dose is defined as the highest dose that does not exceed 2 Gy delivered to the blood
or 80 mCi remaining in the whole body at 48 hours after treatment. Whereas the blood measurements are done by taking blood samples and counting them in a well counter, the whole-body counts can be
done either with a gamma camera (using a radioactive source to convert camera counts into units of
radioactivity; Figure 15.2) or with a gamma probe.
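The whole-body part of such a calculation can be sketched as follows: fit a monoexponential to the daily whole-body measurements (expressed as a percentage of the administered activity), then find the largest therapy activity whose predicted whole-body retention at 48 hours stays under 80 mCi. The retention values and the single-exponential model below are assumptions for illustration, and the blood-dose (2 Gy) constraint is not shown.

import math
import numpy as np

# Whole-body retention (% of administered activity) at 0, 24, 48, 72, and 96 hours
# (made-up values consistent with relatively slow clearance).
times_h = np.array([0.0, 24.0, 48.0, 72.0, 96.0])
retention_pct = np.array([100.0, 62.0, 41.0, 27.0, 18.0])

# Fit ln(retention) = ln(R0) - lambda * t (monoexponential clearance).
slope, intercept = np.polyfit(times_h, np.log(retention_pct), 1)
clearance_per_h = -slope
effective_half_life_h = math.log(2) / clearance_per_h

retention_48h = math.exp(intercept - clearance_per_h * 48.0) / 100.0
max_activity_mCi = 80.0 / retention_48h  # activity whose 48-hour whole-body burden is 80 mCi

print("effective half-life: %.1f h" % effective_half_life_h)
print("predicted 48-hour retention: %.0f%%" % (100 * retention_48h))
print("maximum activity by the whole-body criterion: %.0f mCi" % max_activity_mCi)

In this patient the blood-dose limit appears to have been the more restrictive criterion: the tabulated 1.43 rad to the blood per mCi administered reaches 2 Gy at about 140 mCi, below what the whole-body retention criterion alone would allow.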
The dosimetry study is completed and the scans show extensive iodine-avid bone metastases. The maximum dose
is calculated to be 140 mCi (Figure 15.3). The patient is treated with this dose. After treatment, blood
counts are monitored. Although they do drop below the normal range, they remain well above levels


Figure 15.1.  Anterior and posterior images of (A) the head and neck, (B) the chest, (C) the abdomen, and (D) the pelvis done 1 week
after administration of 200 mCi I-131 for metastatic thyroid cancer. Multiple intense foci of uptake are seen from metastatic disease
(arrows); physiologic activity is also visible in the bladder (arrowheads).

Figure 15.2.  Anterior and posterior whole-body image done (A) 1 hour, (B) 24 hours, (C) 48 hours, and (D) 72 hours after administra-
tion of a small dosimetric dose of I-131. A source of known quantity of I-131 is imaged in the field of view (arrows) to allow the conver-
sion of scan counts to units of radioactivity.

[Figure 15.3 graph: biological clearance, showing whole-body retention (%) versus time (days).]
(D/A)beta: 1.30 rad/mCi
(D/A)gamma: 0.13 rad/mCi
(D/A)total: 1.43 rad/mCi
MTA total: 140 mCi
Blood retention at 48 h: 33%
Body retention at 48 h: 41%
Activity at 48 h: 57 mCi

Figure 15.3.  Dosimetry measurements from the data in Figure 15.2 along with measurements from blood samples at each of the time-
points show slow clearance of activity from the body. The calculated maximum tolerable therapeutic dose is 140 mCi.

at which he would be at risk for bleeding or infection. The counts recover to within the normal range
and the thyroglobulin slowly decreases, indicating response to treatment.
Standard dosing regimens are common in the treatment of thyroid cancer with I-131. For example,
some centers give 100 mCi to patients with relatively low-risk disease, 150 mCi to patients with nodal
involvement, and 200 mCi to patients with known metastatic disease. It is critically important to
recognize that these “standard” doses may exceed safe limits in a significant subset of patients. Most
common among these are patients of advanced age, patients with renal insufficiency, and those with
extensive metastatic disease.
The opposite situation is also possible. That is, a patient who could safely tolerate, say, 400 mCi in a
single treatment may not benefit as much from a standard 200-mCi dose as from a maximal 400-mCi
dose. Thus, in patients with advanced disease, dosimetry is often warranted to ensure appropriate
dose selection.

Case 2. Gastrointestinal Toxicity

A man with bone-dominant castrate-resistant prostate cancer is referred for therapy with Ra-223
dichloride. He meets all the criteria for treatment and is scheduled. On the day of treatment, an intra-
venous (IV) line is placed and the radiopharmaceutical is given without incident. He is discharged
from the department with radiation safety instructions and prescriptions for follow-up laboratory
tests.
Three days later, the emergency department calls to say that the patient has been admitted with
dizziness and hypotension. He is given IV fluids and almost immediately recovers. He relates that
starting the day after treatment he developed severe nausea and so did not take any food or fluids,
resulting in dehydration. He is discharged from the emergency department with strict instructions on
fluid intake along with a reminder to use antinausea medication if necessary.
Another 2 days pass and the patient presents again to the emergency department with dehydration.
He insists that he is doing all he can to drink, but has had severe diarrhea. He is again given IV fluids
along with antidiarrheal medication and is discharged. He is seen in follow-up 2 weeks later and is
feeling fine. The bone pain that he felt before treatment has resolved.
This patient presented twice with dehydration from two different but related causes. The first, nau-
sea (with or without vomiting), is a generic result of radiation exposure. Even relatively modest doses
of radiation can induce nausea, which is generally self-limited, lasting a few days. In some cases it is
severe and can result in dehydration. Antiemetics are generally successful in minimizing the symptoms
and so with many treatments it is warranted to give the patient a prescription with instructions to fill
it if they experience any symptoms. In very rare cases patients require IV fluids, although most of these episodes can be avoided with early antiemetic intervention along with encouragement to take oral fluids.
The second cause of dehydration, diarrhea, is more specific to the treatment in question, Ra-223
dichloride, which is primarily excreted via the gut and, therefore, can cause diarrhea. A knowledge of
the side effect profile of the treatment being given is important so that the patient can be instructed
what to expect and how to mitigate it. The patient might have been spared two visits to the emergency department and much discomfort had he been given appropriate instructions.

Case 3. Radioactive Vomit

An unfortunate young woman with widely metastatic differentiated thyroid cancer is referred for
I-131 therapy. Based on what was learned in Cases 1 and 2, the treating physician has performed
dosimetry to determine the maximum tolerated dose and has ordered an antiemetic as needed. Because
the dose of 350 mCi is too high for outpatient therapy, the patient is admitted to a shielded room that
is appropriately prepared by radiation safety for the treatment (with plastic coverings for all surfaces
when possible and additional shielding where required to minimize dose to other patients). The dose
is given and the patient is instructed to remain in her room until cleared to leave by radiation safety.
Late in the afternoon, about 6 hours after the treatment, the floor calls to say that the patient has
vomited on the nurse. Radiation safety is called; their first instruction is to continue to provide whatever medical care the patient urgently requires, and their second is to prevent, as much as possible, anyone from entering or leaving the room.
Shortly thereafter, radiation safety arrives with survey meters and they immediately begin to assess
for contamination. They discover that there is contamination on the floor outside the patient’s room
from where the nurse stepped out to call for help. This is covered with an absorbent plastic-lined pad.
Several nurses and the physician are in the patient’s room and one-by-one each staff member is sur-
veyed for contamination. Because of the high background radiation from the patient, they must walk
the staff members several feet outside the room, so they cordon off a walkway knowing that it may
be contaminated. Luckily, all of the caregivers who were in the room were wearing coats or gowns as
well as shoe covers. After these are removed, only the initial nurse is found to have detectable radio-
activity on her pants and hands.
Scrubs are sent for her to change into and in the meantime she is sent to scrub her hands thoroughly
with frequent surveys until no activity above background is found. She is given a change of pants and
is resurveyed. Her pants are taken by radiation safety to be held until they are decayed; a good rule of
thumb is that it requires 10 half-lives (80 days for I-131) to decay to background. On being told this,
the nurse requests that they just be thrown out (of course, they must still be stored for decay, but can
be held with the rest of the radioactive trash). Her repeat surveys show no activity above background.
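As a quick, illustrative check of that rule of thumb (not from the text):

```python
# Arithmetic behind the "10 half-lives" rule of thumb: after 10 half-lives the
# remaining fraction is (1/2)**10, i.e. less than 0.1% of the starting activity.
remaining_fraction = 0.5 ** 10
print(remaining_fraction)   # 0.0009765625, about 1/1024
print(10 * 8.0)             # ~80 days for I-131, whose half-life is about 8 days
```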
The vomit is cleaned up and the plastic coverings on the floor are replaced. The area in the hallway
is cleaned but continues to have detectable levels of contamination, so it is re-covered and the dose rate, date, and time are recorded along with the isotope. The area is periodically recleaned and surveyed
by radiation safety until it is at background.
Based on the dose levels measured on the contaminated nurse, an estimated dose to her whole body
and to her skin is calculated and included in her dose history. Furthermore, because I-131 is volatile
and could have been released into the air, the staff members who were present in the room are asked
to return the next day to undergo measurements of their thyroid glands with a scintillation probe
to look for trapping of radioactive iodine (a procedure called a bioassay). Luckily, all are negative.
Radiation safety commends all involved for ensuring good patient care while minimizing spread of
radioactivity. The staff is reminded that radiation safety continually monitors their exposure to ensure
that it is within regulatory limits. Indeed, warnings are triggered if a staff member reaches more than
a quarter of the permissible dose, at which point job responsibilities and techniques are reviewed to
ensure that all work is adhering to the principle of “as low as is reasonably achievable” (ALARA).

Staff members who are on a trajectory to exceed dose limits may be reassigned temporarily and addi-
tional protective equipment may be considered when necessary.
Radiation safety also reminds the staff of the inverse square law, meaning that dose rate decreases with the square of the distance from a radioactive source. If a technologist who normally stands 30 cm from a patient while setting up a scan increases that distance to 60 cm, the dose rate drops by a factor of four. Of course, it is difficult to provide good patient care from across the room, so the word “reasonably” in ALARA is the key.
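A small numeric sketch of the inverse square law (illustrative only; the starting dose rate is a hypothetical value):

```python
# Inverse square law: for a point source, the dose rate scales with the square
# of (reference distance / new distance).

def dose_rate_at(new_distance_cm, dose_rate_at_reference, reference_distance_cm=30.0):
    """Scale a dose rate measured at a reference distance to a new distance."""
    return dose_rate_at_reference * (reference_distance_cm / new_distance_cm) ** 2

# If the dose rate at 30 cm from the patient were, say, 2.0 mrem/h (hypothetical),
# stepping back to 60 cm would cut it by a factor of four.
print(dose_rate_at(60.0, 2.0))  # 0.5 mrem/h
```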

Case 4. Therapy Infusion via Indwelling Catheter

A 33-year-old man with metastatic paraganglioma is referred for I-131 meta-iodobenzylguanidine therapy. Because he has undergone many cycles of chemotherapy, his peripheral veins are extremely difficult to access and peripheral IV access cannot be achieved. The patient does have a central venous port, which is accessed and used
for the therapy. Treatment is given without adverse effects and the patient returns 1 week later for
posttherapy scan (Figure 15.4).
The posttherapy scan reveals uptake in his sites of disease, but there is also intense curvilinear activity in his anterior chest wall. Suspecting that this may be retained activity in the infusion port, the team accesses the port and flushes it with saline. Repeat images look no different, and the patient is sent home and monitored conservatively.
Most indwelling catheters eventually develop a proteinaceous coating. It is important to note that some radiopharmaceuticals readily stick to such a coating and are very resistant to being flushed away. For this reason, giving therapeutic radiopharmaceuticals via a long-standing central catheter can

Figure 15.4.  Anterior and posterior whole-body images done 1 week after treatment with I-131 meta-iodobenzylguanidine. The patient’s intravenous port was used for the therapeutic infusion and residual radiopharmaceutical is seen at the port reservoir and along the tubing (arrows).

result in a portion of the dose remaining stuck to the tubing despite aggressive flushing. In some cases,
this is difficult to avoid when the patient does not have good options for peripheral access.
Furthermore, for the same reasons discussed in Chapter 14, Case 1, this residual activity appears much more prominent than it truly is. During the week between injection and scan, several effective half-lives have elapsed (whereas the physical half-life of I-131 is 8 days, the effective half-life of I-131 meta-iodobenzylguanidine is 1–2 days). More explicitly, the activity in the body has cleared through 4–5 effective half-lives of combined excretion and decay, whereas the activity stuck in the tubing has undergone only physical decay. Therefore, the activity in the tubing may look on the order of 16 times more intense on the posttherapy scan, relative to the activity elsewhere in the body, than it would have looked at the time of injection. If calculations were made, the total amount of activity in the tubing would be found to be insignificant.
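A back-of-the-envelope sketch of this ratio (illustrative only; the 1.5-day effective half-life is an assumed value within the 1–2 day range quoted above):

```python
# Why residual activity in the tubing looks disproportionately bright one week
# after injection: the body clears activity with a short effective half-life,
# whereas the tubing sees only the physical decay of I-131.

def remaining_fraction(days_elapsed, half_life_days):
    """Fraction of activity remaining after days_elapsed for a given half-life."""
    return 0.5 ** (days_elapsed / half_life_days)

physical_half_life = 8.0    # days, I-131
effective_half_life = 1.5   # days, assumed within the 1-2 day range quoted above
days_to_scan = 7.0          # one week between infusion and the posttherapy scan

in_tubing = remaining_fraction(days_to_scan, physical_half_life)   # physical decay only
in_body = remaining_fraction(days_to_scan, effective_half_life)    # decay plus biologic clearance
print(round(in_tubing / in_body))  # ~14 with these assumptions, on the order of the 16x quoted above
```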
16 PUZZLERS

This chapter provides 20 questions that highlight the topics covered in the book. The answer key fol-
lows along with references to the chapter and case in which an explanation is available.
1) This patient with right-sided lung cancer most likely has a synchronous left-sided vocal cord
cancer.

Figure 16.1.  Transaxial FDG PET, CT, and fused images along with maximum intensity projection image.

A. True
B. False

2) What is the most likely problem with the camera causing the flood image seen here?

Figure 16.2.  Daily extrinsic flood image. Image courtesy of Eleanor Mantel, CNMT.

A. Cracked crystal
B. Hygroscopic crystal
C. Photomultiplier tube down
D. Damaged collimator

3) An isotope has a physical half-life of 4 days and a biologic half-life of 3 days. What is the effec-
tive half-life?
A. 1.2 days
B. 1.7 days
C. 2.4 days
D. 3.5 days

4) What is the most likely diagnosis in this case?

(A) (B)

Figure 16.3.  (A) Anterior and posterior whole-body bone scan images. (B) Anterior and posterior spot images of the pelvis
done after appropriate intervention.

A. Contamination
B. Pelvic fracture
C. Isolated pelvic metastasis
D. Benign pelvic tumor

5) This patient is likely to have had a severe prior stroke.

Figure 16.4.  Transaxial FDG PET and CT slices through the brain.

A. True
B. False

6) What is the artifact seen here?

Figure 16.5.  Anterior and posterior whole-body bone scan images.

A. Misregistration
B. Off-peak acquisition
C. Wrong collimator
D. Motion

7) Which of the following does not arise from the atomic nucleus?
A. Auger electron
B. β-
C. β+
D. γ

8) Which scintillating material has the highest Z number?


A. NaI
B. BGO
C. LSO
D. LaBr3

9) Transferring a radiopharmaceutical from a glass vial to a plastic syringe has no effect on dose
measured in a dose calibrator.
A. True
B. False

10) This sinogram reveals which of the following issues during acquisition?
A. Center of rotation error
B. Poor signal-to-noise ratio
C. Patient motion
D. Attenuation correction artifact

Figure 16.6.  Sinogram from a myocardial perfusion SPECT acquisition.

11) Which of the following is the maximum permissible annual whole-body dose for a radiation
worker?
A. 0.5 mSv
B. 5 mSv
C. 50 mSv
D. 500 mSv

12) Which is the cause of the artifact in this image?


A. Photomultiplier tube error
B. Incorrect calibration
C. Attenuation correction error
D. Table misregistration

Figure 16.7.  Transaxial FDG PET, CT, and fused images along with fused coronal image.

13) When surveying for contamination from a Y-90 therapy (Y-90 is a β- emitter), the end cap
should be removed from the survey meter tube.
A. True
B. False

14) What is the percent uncertainty in a measurement with 1,000,000 counts?


A. 0.01%
B. 0.1%
C. 1%
D. 10%

15) In caring for an acutely ill patient who has recently received radiopharmaceutical therapy,
which of the following is the primary goal of the medical team?
A. Minimizing staff radiation dose
B. Preventing surface contamination
C. Providing urgent medical care
D. Getting radiation safety involved

16) A coworker is overheard talking on the telephone about being pregnant and appears to be
starting to show. The fetal dose limits will apply to her for the remainder of the pregnancy.
A. True
B. False

17) This artifact is most likely caused by which of the following?

Figure 16.8.  Transaxial FDG PET, CT, and fused images along with maximum intensity projection image.

A. Scatter correction error
B. Table misregistration
C. Attenuation correction error
D. Incorrect calibration

18) Which reconstruction algorithm was used to generate these images?

Figure 16.9.  Transaxial, sagittal, and coronal reconstructed SPECT slices from an In-111 octreotide scan.

A. Filtered back projection
B. Iterative

19) This artifact is likely caused by which of the following?

Figure 16.10.  Multiple views from a ventilation-perfusion scan.


A. Radiopharmaceutical impurity
B. Prosthesis
C. Motion
D. Photomultiplier tube error

20) Most cardiac gated nuclear images are done using which of the following?
A. Prospective gating
B. Concurrent gating
C. Retrospective gating
D. Iterative gating

Answers

1) B. This patient has right-sided vocal cord paralysis caused by her cancer; the uptake in the left
vocal cord is normal. Chapter 14, Case 2.
2) C. Chapter 8, Case 3.
3) B. Chapter 2, Units of radioactivity.
4) A. Chapter 13, Case 3.
5) B. There is misregistration between the PET and the CT causing this diffuse appearance of
decreased activity on the side where the attenuation correction algorithm thinks that the activ-
ity is in air. Chapter 14, Case 3.

Figure 16.11.  FDG PET and CT transaxial slices from Figure 16.4 displayed in a fused format showing the misregistration that
caused the defect in the left cerebral hemisphere.

6) D. Chapter 9, Case 2.
7) A. Chapter 2, Auger electrons.
8) B. Chapter 6, Table 6.1.

9) B. Chapter 7, Case 3.
10) C. Chapter 10, Case 4.
11) C. Chapter 3, Radiation safety.
12) D. Chapter 11, Case 3.
13) A. Chapter 4, Survey meters.
14) B. Chapter 5, Static planar imaging.
15) C. Chapter 15, Case 3.
16) B. Chapter 3, Radiation safety.
17) A. Chapter 11, Case 4.
18) A. Chapter 10, Case 2.
19) B. Chapter 13, Case 1.

Figure 16.12.  Portable anteroposterior chest x-ray done immediately before the ventilation-perfusion scan from Figure 16.10.

20) A. Chapter 5, Gated imaging.


INDEX
Index entries followed by t indicate a table; by f indicate a figure.

Activity calculation, 69 Atomic nucleus, 8, 9


ALARA (As Low As Reasonably Achievable), 22, 163–64, 175 Atomic number, 8
Alpha (α) emission, 10–11 Atoms, 8
Altitude artifacts, 69 Atrial fibrillation imaging, 46–48, 47f
Anger, Hal, 2f, 3, 32, 40 Attenuation correction
Anger gamma camera, 2f, 3 attenuation correction errors (PET), 118–19f, 118–20
Anger logic, 40–41, 40f CT artifacts affecting PET reconstruction, 121–22, 121–22f
Annihilation events, 59, 60f, 118–19f, 118–20 misregistration, attenuation correction (PET), 152–53,
Antiemetics, 162 152–53f, 171
Antineutrinos, 9–10 PET, 61–62, 61f
Arrhythmia, 118–19f, 118–20 PET/CT, 64
Artifacts (gamma camera) PET/MRI, 66
cracked crystals, 75, 75f prostheses imaging, 131–32, 131–32f, 177
flood nonuniformity, 80–81, 80f SPECT, 61–62, 61f
hygroscopic crystals, 76–77, 76f SPECT/CT, 52–53
PMT malfunction, 78, 78–79f, 169 Auger electrons, 12, 172
Artifacts (ionization chamber/dose calibrator) Avalanche photodetectors, 65–66
altitude, 69
geometry, 70, 70f Background activity correction, 140–42, 140–43f
materials, 71, 71f, 173 Becquerel (Bq), 12–13
Artifacts (PET) Becquerel, Henri, 2, 2f
attenuation correction errors, 118–19f, 118–20 Beer’s Law, 76–77
crystal temperature instability, 113 Beta (β−) emission, 9–10
CT artifacts affecting PET reconstruction, 121–22, 121–22f Biologic half-life, 13
PMT malfunction, 111, 111–12f Bismuth-207 (Bi-207), 11
respiratory motion, 154–55, 154–55f Bismuth germanate, 57, 57t
scatter correction errors, 116–17, 116–17f, 175 Bone scans, 133–36, 133–36f, 170
table misregistration, 114–15, 114–15f, 174 Breast cancer, 114–15, 114–15f, 131–32, 131–32f, 174, 177
Artifacts (planar acquisitions) Bremsstrahlung, 8
collimator penetration, 93–94, 93–94f
dose infiltration, 91–92, 91f, 116–17, 116–17f, Cancer induction, 18–21, 20f
147–49, 147f, 175 Carcinoid tumors, 104–5, 104–5f, 173
edge detection, scatter effects on, 87f Cardiac gating, 46–48, 47f
motion, 88–89, 88–90f, 172 Cassen, Benedict, 3
off-peak acquisition, 85–87, 85–87f Catheter, therapy infusion via indwelling, 165–66, 165f
Artifacts (SPECT) Center of rotation (COR) error, 97, 97f
center of rotation (COR) error, 97, 97f Cobalt-57, 52, 69
filtered back projection (FBP) streak, 98–99f, 98–100, 176 Collimators
iterative reconstruction errors, 104–5, 104–5f, 173 aperture, 33, 33f
motion, 106–7f, 106–8 converging, 36–37, 37f
noisy images, 100–103, 102f design, 32, 36
“As low as is reasonably achievable” (ALARA), 22, 163–64, 175 diverging, 36–37, 37f
Astatine-211 (At-211), 11 functions, 32, 34
Atomic bomb survivors, 20 image inversion, 33, 34f
Atomic mass, 8 LEHR, 140–42, 140–43f

Collimators (Cont.) Endocrine tumors, 2f


magnification, 33, 34f Exercise myocardial perfusion imaging, 2f
parallel hole, 34–36, 35f, 37f, 42f, 58 Expectation maximization (EM), 49–50, 50f
penetration artifacts, 93–94, 93–94f
pinhole, 32–34, 33–34f, 93–94, 93–94f FDG PET
principles, 32–33, 32f, 36–37, 37f CT artifacts affecting PET reconstruction, 121–22, 121–22f
resolution, 33, 34–35f, 36 dose infiltration, 91–92, 91f, 116–17, 116–17f, 147–49, 147f,
sensitivity, 33, 34f, 36 175
septae, 34–36, 35f historically, 2f
SPECT, 58 respiratory motion artifact, 154–55, 154–55f
types, 36–37, 37f scatter correction, 116–17, 116–17f, 175
Colon cancer, 154–55, 154–55f treatment effect mimics new disease, 150–51, 150–51f
Compton scatter, 41–42, 42f F-18 fluorodeoxyglucose (FDG). see FDG PET
Computed tomography (CT). see CT Filtered back projection (FBP)
Contamination iterative reconstruction vs., 104–5, 104–5f, 173
dose calibrators, 125–26, 125–26f principles, 49–50, 49–50f
radioactive vomit, 163–64, 175 streak artifacts, 98–99f, 98–100, 176
single photon imaging, 135–36, 135–36f, 170 Flood images, 75–81, 75f, 78f, 96, 111
survey meters, 27–29, 28–29f, 174 Flood nonuniformity, 80–81, 80f
urinary, 135–36, 135–36f, 170 Fluorine-18 (F-18), 10
Cracked crystals, 75, 75f Free radical formation, 16
Crystal temperature instability, 113 F-18 sodium fluoride, 114–15, 114–15f, 174
CT
artifacts affecting PET reconstruction, 121–22, 121–22f
Gamma cameras
cancer induction by, 20
Anger logic, 40–41, 40f
filtered back projection (FBP), 49–50, 49–50f
artifacts ( see artifacts (gamma camera))
PET/CT ( see PET/CT)
collimator penetration artifacts, 93–94, 93–94f
SPECT/CT ( see SPECT/CT)
collimators ( see collimators)
Curie (Ci), 12–13
components, 32, 32f, 41, 41f
Curie, Marie, 2, 2f
Compton scatter, 41–42, 42f
Cyclotrons, 2–3, 2f
dynamic imaging, 44–46, 45f, 137–38f, 137–39
dynodes, 39–40, 39f
Dehydration, 162 focusing electrodes, 39–40, 39f
Deterministic effects, 16–18 gated imaging, 46–48, 47f, 100–103, 102f, 178
Diabetes, 140–42, 140–43f historically, 2f, 3
Dose calibrators image duration, 43–44, 46–48, 47f
contamination, 125–26, 125–26f matrix size, 42–43, 43–44f
described, 25–27, 26–27f, 124 noise, 43, 44f
high background activity, 128 output signals, 40
linearity testing, 25–26, 27f patient activity, 44
wrong setting, 127 photocathode, 39, 39f
Dose infiltration, 91–92, 91f, 116–17, 116–17f, 147–49, 147f, photomultiplier tubes ( see photomultiplier tubes)
175 probes, 54, 54f
Doses, whole-body, 16–18 resolution, 43, 44f
DTPA Tc-99m, 127 scintillators, 37–39, 38f
Dynamic imaging sodium iodide crystals in, 37, 38, 41f, 57
poor timing, 137–38f, 137–39 static planar imaging, 42–44, 43–44f, 174
principles, 44–46, 45f uncertainty, 43, 44f
zoom, 43, 44f
Eckelman, William, 2f Gamma (γ) radiation, 11–12
Edge detection, scatter effects on, 87f Gastric emptying/gastroparesis studies, 140–42, 140–43f
Effective dose, 17 Gastrointestinal syndrome, 18
Effective half-life equation, 13, 169 Gastrointestinal toxicity, 162
Electron capture, 9, 10 Gated imaging, 46–48, 47f, 100–103, 102f, 178
Electrons, 8, 9 Geiger-Mueller (GM) survey meters, 24, 27–29, 28–29f, 174
Electron volt, 6 Geometric mean calculation, 140
Empiric dosing exceeds safe limits, 159–60f, 159–61 Geometry artifacts, 70, 70f

Glomerular filtration rate (GFR) measurement, 91–92, 91f crystal temperature instability, 113
Gray (Gy), 16–17 PET, 38, 57
scintillators, 38–39
Half-life (T½), 12–13, 169 sodium iodide crystals, 38, 42
Hematopoietic syndrome, 18 Linearity testing, 25–26, 27f
Hepatobiliary studies, 125–26, 125–26f Linear no threshold model, 20–21, 20f
High background activity, 128 Line of response (LOR), 56–57, 56f, 59–63, 63f, 111, 111–12f
Hiroshima, 20 Livingston, Stanley, 2f
Hodgkin’s disease, 121–22, 121–22f Lung cancer, 118–19f, 118–20, 168
Hygroscopic crystals, 76–77, 76f Lungs, ventilation and perfusion images, 131–32, 131–32f, 177
Lutetium oxyorthosilicate (LSO), 57, 57t, 113
I-131
empiric dosing exceeds safe limits, 159–60f, 159–61 MAA Tc-99m, 128
historically, 2f, 3 Marinelli, Leo, 2f
radioactive vomit, 163–64, 175 Materials artifacts, 71, 71f, 173
therapy infusion via indwelling catheter, 165–66, 165f Mebrofenin, 125–26, 125–26f
γ emissions by, 11 Melanoma, 116–17, 116–17f, 175
Image acquisition Misregistration. see table misregistration
background activity correction, 140–42, 140–43f Molybdenum-99, 3, 11. see also Tc-99m
lower chest/upper abdomen, 98–99f, 98–100, 176 Motion, patient. see patient motion artifacts
myocardial perfusion, 100–103, 102f, 106–7f, 106–8 Motion correction software, 106–7f, 106–8
off-peak, 85–87, 85–87f MRI
patient motion artifacts, 88–89, 88–90f, 106–7f, 106–8, 172 PET/MRI ( see PET/MRI)
PET, 58–62, 59–61f principles, 65
PET/CT, 63–64 Myocardial perfusion imaging
PET/MRI, 66 historically, 2f
poor dynamic timing, 137–38f, 137–39 image acquisition, 100–103, 102f, 106–7f, 106–8
SPECT, 50–51, 97, 106–7f, 106–8 misregistration, attenuation correction (PET), 152–53,
whole-body bone scans, 88–89, 88–90f 152–53f, 171
Image-guided percutaneous biopsy, 114–15, 114–15f, 174 noisy SPECT images, 100–103, 102f
I-123 meta-iodobenzylguanidine (MIBG), 85, 93 overview, 4
Implantable defibrillator, 118–19f, 118–20
Initial flow imaging, 137–38f, 137–39 Nagasaki, 20
Ionization chambers Neuroblastoma, 85, 85f
described, 24–25, 25f Neurologic syndrome, 18
GM tubes vs., 28 Neutrons, 8, 9
proportional counters, 30 Noisy images, 100–103, 102f
Ionization detectors Nuclear medicine generally, 2–4, 2f
benefits, 24 Nuclear nomenclature, 8
dose calibrators ( see dose calibrators)
Geiger-Mueller (GM) survey meters, 24, 27–29, 28–29f, 174
ionization chambers ( see ionization chambers) Obesity
Ionizing radiation, 16, 18–19 CT artifacts affecting PET reconstruction, 121–22, 121–22f
Isomeric transition, 11 misregistration, attenuation correction (PET), 152–53,
Isotopes, 8, 9, 13, 169 152–53f, 171
Iterative reconstruction errors, 104–5, 104–5f, 173 noisy SPECT images, 100–103, 102f
Off-peak acquisition, 85–87, 85–87f
Ordered subset EM (OSEM), 50
Krenning, Eric, 2f
Organ donation, 91–92, 91f
Kuhl, David, 2f, 3
Oshry, Eleanor, 2f
Osseous metastatic disease, 133–34, 133–34f
Lamberts, Steve, 2f
Lawrence, Ernest, 2, 2f
Lawrence, John, 2f P-32, 2f
Lead-208 (Pb-208), 9 Pacemaker, 118–19f, 118–20
LEHR collimators, 140–42, 140–43f Paraganglioma, metastatic, 165–66, 165f
Leukemia, 2f Parallel hole collimators, 34–36, 35f, 37f, 42f, 58
Light output Patient motion artifacts

Patient motion artifacts (Cont.) malfunction artifacts (gamma camera), 78, 78–79f
planar acquisitions, 88–89, 88–90f, 172 malfunction artifacts (PET), 111, 111–12f
scatter correction errors, 116–17, 116–17f, 175 principles, 39–40f, 39–41
PET Pinhole collimators, 32–34, 33–34f, 93–94, 93–94f
annihilation events, 59, 60f, 118–19f, 118–20 Planar bone scan, 114–15, 114–15f, 174
artifacts ( see artifacts (PET)) PMT. see photomultiplier tubes
attenuation, 61–62, 61f Positron (β+) emission, 10
β+ range, 59–60, 62 Positron emission tomography (PET). see PET
coincidence pairs, 58–60, 60f Pregnancy, 22, 175
collimators, 58 Proescher, Frederick, 2f
components, 56 Proportional counters, 30
crystal rings, 58, 58–59f Prospective gating, 46–48, 47f
dose infiltration, 91–92, 91f, 116–17, 116–17f, 147–49, Prostate cancer, 88, 135–36, 135–36f, 162, 170
147f, 175 Prostheses imaging, 131–32, 131–32f, 177
energy window filters, 61 Protons, 8, 9
image acquisition, reconstruction, 58–62, 59–61f
light output, 38, 57 Quality factor, 17
line of response (LOR), 56–57, 56f, 59–63, 63f, 111, 111–12f
misregistration, attenuation correction, 152–53, 152–53f, 171
Radiation
principles, 56–59, 56f, 57t, 58–59f
alpha (α) emission, 10–11
random coincidence, 59, 60, 60f
Auger electrons, 12, 172
respiratory motion artifact, 154–55, 154–55f
beta (β−) emission, 9–10
scatter, 60–61, 60f
definitions, descriptions, 6
scintillating material, 57, 57t
electron capture, 9, 10
sensitivity, 58–59
gamma (γ), 11–12
singles, 59, 60f
internal conversion, 12
sinograms, 111, 111–12f
isomeric transition, 11
SPECT vs., 57–61, 60f
nuclear, 8–9
stopping power, 57
positron (β+) emission, 10
time of flight, 62–63, 63f
radioactivity, units of, 12–13
treatment effect mimics new disease, 150–51, 150–51f
radioactive decay equation, 12
true coincidence, 59, 60f
risk/benefit analysis, 21–22
PET/CT
X-rays, 6–8, 7f
anatomic data, 64
attenuation, 64 Radiation hormesis model, 21
dose infiltration, 91–92, 91f, 116–17, 116–17f, 147–49, 147f, Radiation-induced cataracts, 17
175 Ra-223 dichloride, 162
F-18 fluorodeoxyglucose (FDG) ( see FDG PET) Radioactive vomit, 163–64, 175
F-18 sodium fluoride, 114–15, 114–15f, 174 Radiobiology
image acquisition, 63–64 conversion factors, 17
ionizing radiation, 64–65 deterministic effects, 16–18
misregistration, attenuation correction, 152–53, 152–53f, 171 free radical formation, 16
principles, 63–65, 64f ionizing radiation, 16, 18–19
Rb-82 chloride, 152–53, 152–53f linear no threshold model, 20–21, 20f
scatter correction errors, 116–17, 116–17f, 175 pregnancy, 22, 175
sensitivity, 64 radiation exposure units, 16–17
table misregistration, 114–15, 114–15f, 174 radiation hormesis model, 21
PET/MRI radiation safety, 21–22
attenuation, 66 stochastic effects, 18–21, 20f
avalanche photodetectors, 65–66 whole-body doses, 16–18
historically, 2f, 65 Radiopharmaceutical therapy
image acquisition, 66 empiric dosing exceeds safe limits, 159–60f, 159–61
magnetic field, 65–66 gastrointestinal toxicity, 162
principles, 65–66, 65f medical team primary goals, 163–64, 175
silicon-based PMTs, 66 radioactive vomit, 163–64, 175
Photomultiplier tubes therapy infusion via indwelling catheter, 165–66, 165f
flood nonuniformity, 80–81, 80f Radiotracer studies, 2f

Rb-82 chloride, 152–53, 152–53f collimators, 58


Recent prior study pitfalls, 133–34, 133–34f described, definitions, 48
Rectilinear scanner, 3 expectation maximization (EM), 49–50, 50f
Relative biologic effectiveness (RBE), 17 FBP ( see filtered back projection (FBP))
Renal obstruction imaging, 137–38f, 137–39 image acquisition, 50–51, 97, 106–7f, 106–8
Renal scans, 44–46, 45f iterative reconstruction, 101–5, 102f, 104–5f, 173
Renograms, 137–38f, 137–39 noise, 50–51
Respiratory gating, 154–55, 154–55f orbit, 51
Respiratory motion artifacts, 154–55, ordered subset EM (OSEM), 50
154–55f PET vs., 57–61, 60f
Retrospective gating, 46–48, 47f principles, 48–49, 48–49f
Richards, Powell, 2f resolution, 50, 51
Roentgen equivalent man (REM), 16–17 sinograms, 51, 52f, 111
R-R interval, 47f SPECT/CT, 52–53, 53f
Standardized uptake value (SUV), 147–48
Scatter Static planar imaging, 42–44, 43–44f, 174
Compton, 41–42, 42f Stochastic effects, 18–21, 20f
correction errors (PET), 116–17, 116–17f, 175 Stopping power (Z number), 37, 172
edge detection, scatter effects on, 87f Strauss, H. William, 2f
PET, 60–61, 60f Survey meters
Scintillation camera. see gamma cameras contamination, 27–29, 28–29f, 174
Scintillators dead time, 29
crystal temperature instability, 113 described, 24, 27–29, 28–29f, 174
described, 37, 38f halogen gases, 29
light output, 38–39 Townsend avalanche, 29, 29f
resolution, 38 SUVmax, 148
sensitivity, 38
stopping power (Z number), 37, 172 Table misregistration
temperature, 38–39 artifacts (PET), 114–15, 114–15f, 174
thickness, 38, 38f attenuation correction (PET), 152–53, 152–53f, 171
Seaborg, Glenn, 2f Tc-99m
Segre, Emilio, 2f discovery of, 2f
Seidlin, Samuel, 2f dose calibrator contamination, 125–26, 125–26f
Sestamibi, 101 dose calibrators, wrong setting, 127
Sievert (Sv), 16–17 DTPA, 127
Single photon emission computed tomography (SPECT). see emissions, 57
SPECT “instant kit” radiopharmaceuticals historically, 2f
Single photon imaging, 32, 130 MAA, 128
Sinograms mebrofenin, 125–26, 125–26f
PET, 111, 111–12f photo peak, 85–86
SPECT, 51, 52f, 111 production of, 11
Sodium iodide crystals sestamibi, 101
artifacts, 75, 75f Therapy infusion via indwelling catheter, 165–66,
cracked crystals, 75, 75f 165f
in gamma cameras, 37, 38, 41f, 57 Thyroid cancer, 2f, 98–99f, 98–100, 159–60f,
hygroscopic crystals, 76–77, 76f 159–61, 176
light output, 38, 42 Thyroid imaging, 93–94, 93–94f
PET scanners, 57, 57t Tissue expanders imaging, 131–32, 131–32f, 177
in SPECT/CT scanners, 53f Tl-201, 13
thallium-doped, 37, 57 Tomography, 48, 48f. see also PET, SPECT
Somatostatin receptor-binding radiotracers, 2f Tonsillar cancer, 150–51, 150–51f
SPECT Townsend avalanche, 29, 29f
angular sampling, 50 Treatment effect mimics new disease, 150–51,
artifacts ( see artifacts (SPECT)) 150–51f
attenuation, 61–62, 61f
calibration, 96 Vocal cord cancer, 118–19f, 118–20, 168
cardiac, 51 Vomit, radioactive, 163–64, 175

Weighting factor, 17 X-rays, 6–8, 7f


Well counters, 54 peak energy, 119
Whole-body bone scans production mode, 7–8, 7f
image acquisition, 88–89, 88–90f, 172 wavelengths, 6–7
patient motion artifacts, 88–89,
88–90f, 172 Yttrium-doped LSO (LYSO), 57, 57t
urinary contamination, 135–36, 135–36f, 170
Whole-body doses, 16–18
Z number (stopping power), 37, 172
Wrong setting errors, 127
