
Introduction to Optics and Photonics

Judith F. Donnelly
Three Rivers Community College

Nicholas M. Massa
Springfield Technical Community College

The New England Board of Higher Education


Published by
The New England Board of Higher Education
45 Temple Place
Boston, MA 02111
Telephone (617) 357-9620
Fax (617) 338-1577
www.nebhe.org

Copyright © 2007, The New England Board of Higher Education

All rights reserved. No part of this book may be reproduced or transmitted in any
form or by any means, electronic or mechanical, including photocopying,
recording, or by any information storage and retrieval system without the
permission of the New England Board of Higher Education.

All trademarks and brand names mentioned in this book are the property of their
respective owners and are in no way affiliated with the authors or publishers.

ISBN 978-0-9815318-0-9
INTRODUCTION

This book evolved over ten years of faculty professional development
workshops of the New England Board of Higher Education (NEBHE) funded by
the Advanced Technology Education program of the National Science
Foundation. The first workshops were conducted from the authors' class notes,
developed over a combined five decades of teaching optics/photonics to community
and technical college students. Because there were so few textbooks covering the
subject from an applications point of view at the appropriate educational level,
the high school and community college participants at the NEBHE workshops
requested copies of the instructor notes and handouts. These notes grew over time
to a set of eight instructional modules, then ten chapters, and finally, this fifteen-
chapter textbook.
We have arranged the text into a logical sequence for a one- or two-
semester course. Beginning with an important overview of laser safety, we then
cover the physics of light production, geometric and wave optics, and laser
physics. We have attempted to infuse these chapters with both natural and
technical applications of light. The remainder of the text covers photonics
applications: laser types and applications, fiber optics, imaging, holography,
manufacturing of precision optics and biophotonics. Three of the application
chapters were written by industry experts. The appendix provides additional
mathematical detail on the derivation of some of the text equations, and answers
to odd-numbered end-of-chapter problems are provided.
A complete solutions manual for the numerical problems is available by
submitting a written request, on letterhead stationery, to the New England Board
of Higher Education's PHOTON Projects Director at 45 Temple Place, Boston,
MA 02111.
Acknowledgements
The authors thank the PHOTON and PHOTON2 teacher participants and their
students who provided creative suggestions and thoughtful corrections to this
work. We also thank guest authors James Masi (Biophotonics), Michael Ruane
(Imaging), Flemming Tinker (Manufacturing Precision Optics) and Peter
deGroot (Fizeau Interferometers); student editors Katie Donnelly, Sarah
Donnelly and Sarah Orde; Albert Yee for his photos of optical effects; Michele
Dischino for the cover art and Vanessa Goldstein and Matthew Donnelly for text
formatting assistance. We are most grateful to Fenna Hanes, NEBHE Senior
Director, for providing inspiration and support and for keeping us on task.

The antique microscope graphic on the cover art is © Tom Grill/Corbis. All photos
not otherwise credited were taken by the authors.

Projects PHOTON (DUE #0053284) and PHOTON2 (DUE #0302528) were
supported in part by grants from the National Science Foundation to the New
England Board of Higher Education (NEBHE).

All royalties from the sale of this book go to a fund to support optics education.

March 2008
Table of Contents

Chapter 1: Laser Safety Overview

Physics of Light
Chapter 2: The Nature of Light
Chapter 3: Sources and Detectors of Light
Chapter 4: Geometric Optics
Chapter 5: Lenses and Mirrors
Chapter 6: Wave Optics: Interference and Diffraction
Chapter 7: Polarization
Chapter 8: Optical Instruments
Chapter 9: Introduction to Laser Physics

Photonics Applications
Chapter 10: Laser Types and Applications
Chapter 11: Introduction to Fiber Optics
Chapter 12: Optical Imaging
Chapter 13: Holography
Chapter 14: Manufacturing of Precision Optical Components
Chapter 15: Biophotonics

Appendix 1: Derivations of Some Text Equations
Appendix 2: Answers to Odd-Numbered Problems
Index

People sometimes take misguided approaches to the safe use of
lasers. For example, when bar code scanners were first introduced
in checkout lanes at a local supermarket, several clerks began to
stand as far from the laser scanner as their arms would permit.
When asked about this odd behavior, one cashier replied, "The
manager told us to be careful of the radiation, and it's aimed right at
us!" At the other end of the safe-use spectrum is the video
instructor who flashed a laser pointer toward the camera and stated
"It's just a light." The safe use of lasers does not require fear, but
rather caution, knowledge of the properties of laser light, and some
common sense.

Chapter 1

LASER SAFETY OVERVIEW


1.1 WHY LEARN ABOUT LASER SAFETY?
The availability of a large variety of affordable lasers has made the laser
a common tool in industry, medicine, research and education. You will no doubt
use lasers or see laser demonstrations if you are enrolled in an optics course.
Whether you are working with lasers in a school laboratory, using lasers on the
job, or listening to music on a CD player, you should be aware of how lasers
differ from other light sources and how these differences translate into rules for
safe use of lasers.
The safe handling and use of lasers depends on many factors: the
wavelength (or color) of the light, the power (or power density, called
irradiance), the viewing conditions, and whether the laser is continuously on
(called continuous wave, or cw) or pulsed. We will discuss the basic concepts of
laser safety in this chapter, and throughout one important idea prevails: treat
every laser with respect and care.
Many state, federal and international laser safety standards exist, but the
one most often quoted in the United States is the American National Standards
Institute's (ANSI) Z136 series of laser safety standards. The parent document,
ANSI Z136.1, provides complete information on laser classifications, hazards and
controls, and is designed to be a reference for users of all types of lasers. Other
documents in the series are numbered sequentially (for example, Z136.2, Z136.3)
and cover specific uses of lasers in areas such as health care, education,
telecommunications and outdoor light shows. The documents are available from
the Laser Institute of America at its web site www.laserinstitute.org.
The International Electrotechnical Commission (IEC) has also created a
series of laser safety regulations covering all aspects of laser use and laser
product labeling. Like the ANSI standards used in the United States, these
international regulations are constantly being updated to reflect current research
on laser hazards and new types of lasers. ANSI and IEC work together in an
effort to harmonize regulations worldwide, a necessity in a global economy.
American manufacturers of lasers and laser systems must comply with
regulations of the Center for Devices and Radiological Health (CDRH) of the
Food and Drug Administration (FDA). Among the product safety standards is a
requirement that each laser must bear a label indicating the laser hazard
classification and information on the beam power and wavelength. Since 2001,
the CDRH has allowed American laser manufacturers to conform to the IEC
regulations, which reduces the burden of having to show compliance with two
different sets of rules. It does add confusion, however, since the ANSI and
CDRH classifications are not quite the same, as we will see.

1.2 CHARACTERISTICS OF LASER LIGHT


The video instructor in the introductory paragraph was correct that a
laser produces light. (We are including ultraviolet, visible and infrared radiation
in this broad definition of light.) However, laser light has several unique
characteristics that distinguish it from ordinary light sources. After all, you might
be burned by touching a 60 watt light bulb; a 60 watt laser can slice through
centimeter-thick wood. The light from most lasers is usually described as:
monochromatic Lasers emit a single wavelength (color) or narrow band of
wavelengths.
coherent The light produced by a laser consists of waves that are "in step."
(Coherence will be further explored in Chapter 6.)
highly directional Most laser beams do not spread much as they propagate.
We say they are "collimated" and, as a result, beam energy is concentrated in
a small area.
In Chapter 9, we will further explore the properties of laser light.

1.3 LASER HAZARDS


As a result of the unique characteristics of laser light, laser users need to
be aware of specific hazards associated with lasers. These hazards are often
grouped into three main categories: eye hazards, skin hazards and secondary
hazards. We will concentrate on eye hazards and eye safety, because the loss of
vision is a life-altering occurrence. It should be noted, however, that electrical
hazards can be the most lethal hazards associated with laser operation.

Eye Hazards
The human eye is designed to focus visible light onto the light-sensitive
retina, forming an image that is eventually interpreted by the brain. Since near
infrared (IR) light also passes through the cornea and lens, it focuses on the retina
as well. However, near IR light does not have sufficient energy to stimulate the
retinal sensors to produce a signal. That is, we can't see near IR light, but it is still
being focused on the retina and may damage retinal tissue.
Figure 1.1 illustrates the focusing process in the eye for rays that enter
nearly parallel to the optical axis of the eye. (A more detailed diagram is found in
Chapter 8.) This is the situation when an object is located far from the eye. In the
same way, collimated laser light focuses to a very tiny spot on the retina. A rule
of thumb is that the light entering the eye from a collimated laser beam is
concentrated by a factor of about 100,000 when it strikes the retina, because
the area of the focal spot on the retina is approximately 1/100,000 of the pupil
area. That means a 0.10 W/cm² laser beam would result in a 10,000 W/cm²
exposure to the retina!

Figure 1.1 - Focusing effects of the human eye.

Wavelength Dependence
When light strikes a material, it may be reflected, transmitted, scattered
or absorbed. To some extent, all of these processes occur. You are familiar with
these behaviors of light from everyday experience. Visible light is reflected by a
shiny surface, transmitted by a clear pane of glass, scattered by fog, and absorbed
by a piece of black cloth.
Damage occurs when radiation is absorbed by tissue. Whether radiation
is absorbed or harmlessly passes through depends on the type of material and the
wavelength. It is clear that hazardous effects to various structures of the eye
depend on the wavelength of the laser radiation and the type of tissue exposed.
As shown in Table 1.1:
Mid and far infrared (so-called IR-B and IR-C) and mid and far
ultraviolet (UV-B and C) wavelengths are absorbed by the cornea and may
damage corneal tissue.
Near ultraviolet (UV-A) wavelengths pass through the cornea and are
absorbed by the lens. This can cause lens clouding (cataracts).
Visible and near infrared (IR-A) wavelengths pass through the cornea
and lens and are focused on the retina. This portion of the spectrum is called the
"retinal hazard region."
Certain specific wavelengths in the IR-A and IR-B regions are also
absorbed by the lens, which may cause damage.

Region    Wavelength range     Eye structure affected
UV-C      100 nm - 280 nm      Cornea
UV-B      280 nm - 315 nm      Cornea
UV-A      315 nm - 400 nm      Lens
Visible   400 nm - 700 nm      Retina
IR-A      700 nm - 1400 nm     Retina
IR-B      1400 nm - 3000 nm    Cornea
IR-C      3000 nm - 1 mm       Cornea

Table 1.1 Laser spectral regions (approximate wavelengths) and eye damage.

Viewing conditions
The damage caused to your eye by exposure to laser light depends on the
amount of light energy absorbed. The most hazardous viewing condition is
intrabeam viewing, that is, looking directly into the beam. Note that looking at a
beam from the side is normally not hazardous. Despite what you may have seen
in science fiction movies, a beam of light is not visible at right angles to the
direction of propagation unless there is something to scatter the light out of the
beam and into your eyes. For example, to see the beam of a laser pointer there
must be dust or fog in the room.
Reflected beams may or may not be harmful to look at, depending on the
laser power, the laser wavelength, the curvature of the reflector surface, and
whether the reflection is specular or diffuse. Specular reflections are mirror-like
reflections from shiny objects, and they can return close to 100% of the incident
light. Flat reflective surfaces will not change a fixed beam diameter, only the
direction of propagation. Convex surfaces will cause beam spreading and
concave surfaces will make the beam converge (Figure 1.2).

Figure 1.2 - Specular reflections from flat, convex and concave surfaces.

As Figure 1.3 shows, diffuse reflections result when irregularities in the
surface scatter light in all directions. Whether a reflection is specular or diffuse
depends upon the wavelength of incident radiation as well as on the smoothness
of the surface. Specular reflection requires that the surface roughness be
less than the wavelength of the incident light. Thus, a surface that diffusely
reflects 500 nm visible light might cause specular reflection of 10.6 µm
wavelength radiation from a carbon dioxide (CO2) laser.

Figure 1.3 - Diffuse reflection from a rough surface.

Skin Hazards
Although skin injuries are not as life-altering as eye injuries, skin damage
may occur with high power lasers. Exposure to high levels of optical radiation
may cause skin burns. This thermal damage is the result of extreme heating of
the skin and is a particular danger when medium and high power infrared lasers
are being aligned. Accelerated skin aging and the increased risk of cancer may
result from exposure to ultraviolet wavelengths. This is called photochemical
damage and it is similar to a sunburn. Protective clothing such as gloves and
flame-retardant laboratory coats may be required for some laser applications.

Secondary Hazards
Some of the most life threatening hazards are not due to the laser beam,
but are the result of associated equipment or byproducts of laser processes. These
hazards include:
Electrical Hazards Electric shock is potentially the most lethal hazard
associated with laser use. Electrical hazards most often result from inappropriate
electrical installation, grounding or handling of the high voltage associated with
many lasers. The power supply for a common helium neon laser includes
capacitors that hold an electrical charge long after the laser is shut off. While not
ordinarily lethal, the shock resulting from grabbing the exposed connector is
certainly painful.
Fire and Explosion Hazards High-pressure arc lamps, filament lamps
and associated optics can shatter or explode. High power lasers used for cutting
may also present fire hazards, particularly if used in enclosures or near
flammable materials.
Other Associated Hazards Operation of a laser system may involve the
use of compressed gases, toxic dyes or cryogenic (extremely cold) liquids.
Dangerous fumes may be generated when the laser is used for material
processing, requiring engineered ventilation systems. So-called laser generated
air contaminants result from the interaction of high-energy laser radiation, assist
gases used in material processing, and the material itself. In addition to molten
and evaporated materials liberated from the processed surface, new noxious and
toxic compounds may be formed in some processes including metal oxide fumes,
cyanide and formaldehyde. When lasers are used in a medical setting, particles of
biological origin such as bacteria may be released into the air.


1.4 LASER HAZARD CLASSIFICATIONS


How can a laser user know the level of danger associated with a given
laser? Laser hazard classifications provide a simplified method to make users
aware of the potential hazards associated with radiation produced by a laser. The
classifications are the result of research and experience with sunlight and
manmade sources of light, as well as laser emissions. Until recently, different
laser classification schemes were used in North America and in Europe. To assist
manufacturers operating in both markets, CDRH (USA) agreed to accept the IEC
(European) standards, known as IEC 60825-1. The revised ANSI standards were
published in 2007.
CDRH, IEC and ANSI have in common four major laser hazard
classifications based mainly on the laser emission wavelength and power,
although they differ in the sub-classifications. (In fact, one set of standards uses
Roman numerals instead of Arabic numbers, to add to the confusion!) In this
chapter we present a brief and simplified description of hazard classifications.
For more detail and the most recent information, you should consult the latest
ANSI Z136.1 standard. In what follows, the sub-classifications with an IEC
notation are not part of the ANSI classification scheme for laser users, but may
be seen on laser equipment.
Class 1 lasers are of such low power that they cannot cause injury to the
human eye or skin. Few lasers are Class 1; however, the class also includes more
powerful lasers located in enclosures that limit or prohibit access to the laser
radiation. For example, Class 1 lasers include laser printers, DVD players and
even high-powered laser cutting systems that do not allow access to the beam
while in operation. These so-called embedded laser systems are considered Class
1 as long as the enclosure is intact.
Class 1M is a new (IEC) classification for lasers that are normally safe
for eyes and skin, but may cause injury to the eyes if the output is concentrated
using optics. For example, a highly divergent beam might be considered eye safe
unless it is focused with a lens.
Class 2 lasers must emit visible radiation. They have output power higher
than that of a Class 1 laser but less than 1 mW. This upper limit is important
because the definition of Class 2 assumes that a person will blink or turn away
from a brilliant source of light within one quarter of a second, before the eye is
harmed. This is called the human aversion reaction time, or blink reflex, and it is
based on many years of medical research with human subjects. Class 2 lasers will
not injure the eye when viewed for 0.25 seconds or less. However, like many
conventional light sources, they may cause injury if stared at for a longer time.


Class 2M is a new (IEC) classification for lasers that produce visible
output with power less than one milliwatt. The eye is protected by the aversion
reaction to bright light, unless the beam is concentrated by optics such as a
telescope.
Class 3A lasers normally will not cause injury when briefly viewed with
the unaided eye. Nonetheless, users should use caution and avoid viewing the
beam directly. For visible lasers, the output power levels range from 1 mW to 5
mW. Many laser pointers are Class 3A. The IEC Class 3R classification is
similar to Class 3A.
Class 3B includes laser systems with constant power output (cw lasers)
from 5 mW to 500 mW. Repetitively pulsed laser systems with beam energy
between 30 and 150 millijoules per pulse for visible and infrared light, or greater
than 125 millijoules per pulse for other wavelengths, are also included in Class 3B.
The average power for the pulsed lasers must be less than 500 mW. Class 3B
lasers can produce eye injury when viewed without eye protection and could
have dangerous specular reflections. Eye protection is required when using Class
3B lasers.
All laser systems that exceed Class 3B limits are considered Class 4.
Viewing either the specular or diffuse reflections of the beam can be dangerous.
Class 4 lasers can also present a skin or fire hazard, and both eye and skin
protection are required when operating them. Commercially available Class 4
systems are often completely contained in an enclosure so that the overall system
is rated Class 1. Interlocks and other controls prevent the operation of the laser
when the enclosure is opened.

1.5 IRRADIANCE AND MAXIMUM PERMISSIBLE EXPOSURE (MPE)


How is protective equipment chosen for a particular laser application? In
addition to the laser wavelength, the power density or irradiance must be
considered. Irradiance is a central concept in the discussion of laser hazards and
laser classification. It is defined as the power per unit area, and is often (but not
always) given the symbol E or Ee. The standard (SI) unit of irradiance is W/m²,
but it is often more conveniently expressed in W/cm².

E = P/A     (1.1) IRRADIANCE

In Equation 1.1, A is the area illuminated by light incident with power P.


Note that the letter E is used to represent energy as well as electric field
strength. It is usually clear from the context and always clear from the units
which quantity is being considered.

7
LIGHT: Introduction to Optics and Photonics

EXAMPLE 1.1
Compare the irradiance from a 60 W light bulb at a distance of 1 meter from
the bulb to that of a 5 mW laser pointer which makes a 4 mm diameter spot
one meter from the laser.

Solution:
The light from the bulb spreads out in all directions,
so the total power (60 Watts) passes through the surface of a
sphere 1 meter in radius. (Of course, much of the radiation
from an ordinary light bulb is infrared—heat—and not visible light!)

A = 4πr² = 4π(100 cm)² = 126,000 cm²

E = P/A = (60 W)/(126,000 cm²) = 0.00048 W/cm²

The irradiance of the light bulb is 0.00048 W/cm² one meter away from the
bulb.
The laser beam makes a spot with a 4 mm diameter. The power (5 mW)
is concentrated onto a 2 mm (= 0.2 cm) radius circle.
A = πr² = π(0.2 cm)² = 0.126 cm²

E = P/A = (0.005 W)/(0.126 cm²) = 0.040 W/cm²

The irradiance of the laser is 0.040 W/cm², more than 80 times that of the
light bulb (even though the bulb radiates a total of 60 watts). Also note
that the irradiance of the laser changes slowly as you
move away from the laser because the beam does not spread very much as it
propagates. The irradiance of the bulb drops as 1/r2 where r is the distance
from the bulb. Can you explain why? (The dependence of irradiance on
distance for a point source radiator will be explored in Chapter 2.)
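For readers who like to verify such numbers, the short Python sketch below reproduces the two irradiance calculations of Example 1.1 from Equation 1.1. (The script and its variable names are our own illustration, not part of the original example.)

    import math

    def irradiance(power_w, area_cm2):
        """Equation 1.1: irradiance E = P/A, in W/cm^2."""
        return power_w / area_cm2

    # 60 W bulb: power spreads over a sphere of radius 1 m (100 cm)
    bulb_area = 4 * math.pi * 100**2          # ~126,000 cm^2
    E_bulb = irradiance(60, bulb_area)        # ~0.00048 W/cm^2

    # 5 mW laser: power concentrated in a 4 mm (0.4 cm) diameter spot
    laser_area = math.pi * (0.4 / 2)**2       # ~0.126 cm^2
    E_laser = irradiance(0.005, laser_area)   # ~0.040 W/cm^2

    print(E_bulb, E_laser, E_laser / E_bulb)  # ratio is more than 80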

Maximum permissible exposure (MPE) is defined in ANSI Z136.1 as
"the level of laser radiation to which a person may be exposed without hazardous
effect or adverse biological changes in the eye or skin." The MPE is not a distinct
line between safe and hazardous exposures, but rather an exposure level that
should be safe for repeated exposures. MPE is usually expressed as the
allowable irradiance (in W/cm²) at a particular wavelength for a given exposure
time (in seconds). MPE tables exist for both eye and skin exposure, and it can
also be calculated from formulas provided in the ANSI standard. Table 1.2 gives
the maximum permissible exposure for the eye for a variety of lasers calculated
from the formulas given in ANSI Z136.1.

Laser Type   Wavelength (nm)   MPE (average power density, mW/cm²)
                               for exposure times of:
                               0.25 s     10 s      600 s
HeCd         325               —          100       1.67
Argon        488               2.5        1.0       0.0167
Argon        514               2.5        1.0       0.0167
HeNe         633               2.5        1.0       0.283
Nd:YAG       1064              —          5.1       202
CO2          10600             —          100       100

Table 1.2 MPE calculated for selected cw lasers and exposure times.

Note that for visible radiation (400-700 nm), the MPE is shown for 0.25
seconds, the human aversion response time. For infrared lasers, the blink reflex
does not provide protection since the light is invisible. However, research shows
that normal eye movements will redirect the eye from the beam within 10
seconds, so MPE is calculated for 10 seconds for infrared lasers. The remaining
time period in the chart, 600 seconds (or 10 minutes), is assumed to be an
average amount of time to perform a beam alignment. This is more important for
beams that are not visible, since a technician would presumably get out of the
way in less than 10 minutes if blinded by a visible laser beam.

EXAMPLE 1.2
Does the beam from a 3 mW laser pointer (650 nm) exceed the MPE for
0.25 seconds if it enters a 7 mm diameter fully dilated pupil?

Solution
To calculate irradiance at the pupil, use Equation 1.1 and the area of the
pupil (A = !r2). Note that the pupil radius is 0.35 cm.

Irradiance = (0.003 W) / (π (0.35 cm)²) = 7.8 × 10⁻³ W/cm²

The MPE for a laser operating at 650 nm is 2.5 × 10⁻³ W/cm² (using the
closest wavelength value from Table 1.2). The MPE is exceeded by more
than a factor of three.
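A calculation like Example 1.2 is easy to script. The Python sketch below is again our own illustration, with the MPE value taken from Table 1.2; it checks a visible-laser exposure against the 0.25 second MPE.

    import math

    MPE_VISIBLE_025S = 2.5e-3   # W/cm^2, from Table 1.2 (visible, 0.25 s)

    def pupil_irradiance(power_w, pupil_diameter_cm):
        """Irradiance (W/cm^2) of a beam filling a circular pupil."""
        area = math.pi * (pupil_diameter_cm / 2)**2
        return power_w / area

    E = pupil_irradiance(0.003, 0.7)    # 3 mW pointer, 7 mm pupil
    print(E, E / MPE_VISIBLE_025S)      # ~7.8e-3 W/cm^2, about 3x the MPE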
As the MPE table indicates, the biological effects of laser radiation
depend on both the wavelength of the laser and exposure duration. For example,
the maximum permissible exposure for lasers producing visible light is generally
less than for ultraviolet or infrared for the wavelengths shown. Also, looking at
any one laser reveals that MPE decreases as exposure time increases. Although
all the lasers listed here are assumed to be operating at constant output power
(cw), MPE can also be calculated for pulsed lasers.

1.6 CHOOSING LASER SAFETY EYEWEAR


Protective eyewear in the form of goggles, spectacles, wraps and shields
provides the principal protection for the eyes. Some form of laser safety eyewear
must be worn at all times during operation of Class 3B and Class 4 lasers.
The main considerations when choosing laser safety eyewear are the
operating wavelength and power. Eyewear is available for specific lasers or
wavelengths (such as helium-neon safety goggles) or designed for a broader
spectrum of wavelengths and laser types. Most laser safety eyewear is expensive,
often costing several hundred dollars per pair. Eyewear for carbon dioxide lasers
is the exception, available for under $50 per pair from many suppliers. (It should
be noted that designer sunglasses can also cost well more than $200 per pair!)
Laser eyewear should be treated with care to avoid scratches or other damage
that can change the optical properties and make the eyewear susceptible to laser
damage.
Comfort and the effect on color vision are also important when choosing
laser safety eyewear. If the laser safety eyewear (LSE) is uncomfortable or
prevents the wearer from seeing, for example, color traces on monitoring
equipment, users may be reluctant to wear it. It is also important not to
"overprescribe" LSE. If the eyewear makes it
impossible to see the beam, alignment will be difficult. Accidents occur when a
technician removes the safety eyewear to complete an alignment and is injured
by the beam.

Figure 1.4 - Laser protection eyewear is


available in many styles, including goggles
to be worn over prescription glasses. The
absorbing dye is chosen to block
wavelengths of interest. (Photo courtesy
Kentek Corp. www.kentek.com)


Calculating Optical Density for LSE


The lens of the eyewear is a filter/absorber designed to reduce light
transmission of a specific wavelength or band of wavelengths. The absorption
capability of the lens material is described by the optical density (OD). If Eo is
the irradiance incident on an absorbing material and ET is the irradiance
transmitted through the material, the transmitted irradiance is related to the OD
by an exponential function

ET = Eo · 10^(−OD)     (1.2)

The transmittance (T) of light through an absorber is defined as the ratio
ET/Eo. We can rewrite Equation 1.2 in a form used commonly with optical
filters

T = ET/Eo = 10^(−OD)     (1.3) Transmittance

Thus, an OD of 1 means the filter has reduced the irradiance of the beam
to 1/10¹ = 1/10 of its original irradiance, and an OD of 5 means the filter has
reduced the irradiance of the beam to 1/10⁵ = 1/100,000 of its original irradiance. The
required OD for laser safety eyewear is the minimum OD necessary to reduce the
beam to a non-hazardous level. Optical density for a given wavelength is usually
labeled on the temple of the goggles or on the filter itself. Often, laser safety
eyewear is labeled with the OD for several wavelength ranges.
To calculate the OD required for a particular laser, we need to know the
incident radiation on the front surface of the LSE, E0. The irradiance transmitted
by the LSE cannot exceed the maximum permissible exposure (MPE). If we
replace ET in Equation 1.2 with MPE, we get

10^(OD) = Eo / MPE

or, equivalently

MPE = Eo · 10^(−OD)

Solving the last expression for OD gives a useful equation for calculating
required OD for laser safety eyewear

OD = log10(Eo / MPE)     (1.4) OD for LSE


EXAMPLE 1.3
A 50 Watt Nd:YAG laser (cw at 1.064 µm) is projected onto a fully dilated
pupil of 7 mm diameter. The eye is exposed for 10 seconds. Calculate the
minimum OD of a laser safety goggle needed to protect the eye from
damage.

Solution:
From Table 1.2, the MPE for a Nd:YAG laser for a 10 second exposure
is 5.1 × 10⁻³ W/cm².
The irradiance at the pupil is calculated from Equation 1.1. The power is
50 watts and the pupil is a circle of radius 0.35 cm.

Eo = P/A = (50 W) / (π (0.35 cm)²) = (50 W) / (0.38 cm²) = 132 W/cm²

Use Equation 1.4 to determine the required OD

OD = log10( (132 W/cm²) / (5.1 × 10⁻³ W/cm²) ) = 4.4
The optical density for the LSE must be at least 4.4.

In practice, the OD may be determined from a calculation similar to that
in the example, or by consulting the laser or eyewear manufacturer. Many
suppliers of laser safety equipment have online calculators to assist in the
selection of laser safety eyewear.
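As a rough idea of what such a calculator does, the Python sketch below combines Equations 1.1 and 1.4 to reproduce Example 1.3. It is our illustration only, under the stated assumptions (a cw beam filling a dilated pupil), and is no substitute for the ANSI standard or the eyewear manufacturer's guidance.

    import math

    def required_od(power_w, pupil_diameter_cm, mpe_w_cm2):
        """Minimum optical density, OD = log10(Eo / MPE), Equation 1.4."""
        area = math.pi * (pupil_diameter_cm / 2)**2
        e0 = power_w / area                 # incident irradiance Eo, Eq. 1.1
        return math.log10(e0 / mpe_w_cm2)

    # Example 1.3: 50 W Nd:YAG, 7 mm pupil, MPE = 5.1e-3 W/cm^2 (10 s)
    print(required_od(50, 0.7, 5.1e-3))     # ~4.4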

1.7 LASER SAFETY CONTROLS


To ensure safe use of lasers, administrative controls and engineering
controls are required. Warning signs and labels, standard operating procedures,
personal protective equipment and laser safety training are examples of
administrative controls. Engineering controls are designed into lasers and laser
systems to prevent accidental exposure of eyes or skin. Shutters, interlocks,
delayed emission and remote firing are examples of engineering controls
incorporated into laser system design.
ANSI Z136.1 states that any facility operating Class 3B and Class 4 lasers
must designate a person to serve as Laser Safety Officer (LSO). The job of
the LSO is to ensure that laser safety procedures are in place and followed during
laser operation and maintenance. Courses are available to train LSOs and help
them remain current with ANSI standards.
Among the administrative controls required by the ANSI laser safety
standards are warning signs and labels for lasers and for work areas where lasers
are in use. The most common signs used for lasers and laser systems are the
CAUTION sign (for Class 2 and some Class 3A lasers) and the DANGER sign
(used with higher power Class 3A and all Class 3B and 4 lasers). The sign
dimensions, colors, and lettering size are all specified by ANSI standards. The
IEC also has a specific format for warning signs, which differs from ANSI's, and
may be seen on some laser equipment.
In some applications, laser beams must be in the open. In these cases, the
LSO must define an area of potentially hazardous laser radiation called the
nominal hazard zone (NHZ). The NHZ is defined as the space within which the
level of direct, scattered or reflected laser radiation exceeds the MPE. This area
must be clearly marked and appropriate controls must be in place to exclude
casual visitors.

Figure 1.5 - Laser Warning Signs (from the Laser Institute of America,
www.laserinstitute.org). Class 3 and 4 signs indicate if the beam is visible or
invisible.

1.8 PRACTICAL RULES FOR USING LASERS

Since many school laboratories use only low power Class 2 and 3A
lasers, the following guidelines are sufficient for many classrooms. Labs with

higher power Class 3B lasers and Class 4 lasers need to be evaluated and
monitored by a trained Laser Safety Officer.
Do not look into a laser beam. Do not look at specular reflections from
mirrors or other shiny surfaces and do not stare at diffuse reflections. Remember
that some lasers produce invisible beams, so it is important to be aware of the
beam's location. If you are working with optical fiber, never look into the end of
the fiber unless you know for sure that it is not connected to a laser source.
Only trained and qualified personnel should work with lasers. Lasers
are not toys, and should not be used by casual visitors or friends visiting the laser
lab.
Keep room lights on if possible. Bright room lights will cause the pupil
to close, minimizing the light that enters your eye.
Remove watches, rings and other shiny objects. Before turning on a
laser, remove any jewelry or other items that could cause specular reflection.
Remember that lenses and other components that are primarily designed for
transmitting light can also reflect light.
Use beam blocks. Opaque blocks should be used to stop and absorb the
beam at the end of its useful path.


Do not bend down below beam height. If you sit down in a lab where
lasers are in use, be sure that the chair is high enough that your head is above
beam height. If you drop something, use a beam block to stop the laser beam
before bending down to pick up the object.
Wear laser safety eyewear. If eyewear is provided, wear it. Eyewear is
required for Class 3B and higher lasers.
Report any accidents immediately. If there is exposure to the eye, an
ophthalmologist should be consulted.
For additional information on the safe use of lasers, consult one of the
references at the end of this chapter. In addition, many research universities have
laser safety information on their websites.

REFERENCES
Laser Institute of America, "ANSI Z136.1 (2000) Safe Use of Lasers," Orlando,
Florida.

WEB SITES
1. Laser safety industry society
www.laserinstitute.org (Laser Institute of America)
2. Companies dealing with laser safety
www.kentek.com (Kentek Corporation)
www.rli.com (Rockwell Laser Industries)
Many universities have laser safety web sites. An excellent example is found at
www.safety.vanderbilt.edu/training


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. Explain the importance of maximum permissible exposure and the nominal hazard
zone.

2. What are the two ways that skin might be affected by laser exposure?

3. What is the most lethal hazard associated with the laser?

4. What wavelengths pose the greatest danger for retinal exposure?

5. What is the "human aversion response" time and how long is it?

6. To which classification would each of the following lasers/laser systems belong?


a. A completely enclosed system (e.g., DVD player, laser printer, laser engraver)
b. Most inexpensive red laser pointers
c. Lasers used for cutting, welding or other material processing
d. A 50 mW argon laser
e. Low power lab HeNe laser (1 mW or less)

7. What is the difference between Class 3A and Class 3B lasers? What controls apply to
each?

LEVEL I PROBLEMS
8. Find the power of a laser if the irradiance is 1500 W/cm² and all the laser's output
power is focused on a circular spot with a 16 µm diameter.

9. What OD would be required for laser protective eyewear if the worst case exposure
is 100 times the MPE? 1000 times?

LEVEL II PROBLEMS
10. What is the minimum OD for laser protective eyewear if an Argon laser with power
of 75 W is projected onto a partially dilated pupil with a diameter of 5 mm and is
exposed for 0.25 seconds?

11. Consider a pupil opening 5 mm (0.5 cm) in diameter. What is the maximum HeNe
power allowed for a 0.25 second exposure? (By exposure, we mean looking directly
into the beam, intrabeam viewing, or directly viewing a specular reflection of the
beam.) Use the MPE chart.

12. Two students are playing around with a laser pointer (670 nm wavelength) on a dark
night. Because of the darkness, the students' pupils are dilated to 7 mm (0.7 cm)
diameter. Calculate the highest power diode laser allowed for a 7 mm diameter pupil
to be protected by the blink reflex (0.25 seconds). Compare this power to the usual
laser pointer, which is around 5 mWatts.


INTERNET RESEARCH PROJECTS


13. Your company has been hired to perform its first laser light show in a large sports
stadium. You are using a 5 Watt argon laser, with wavelengths from 450 nm to 530
nm. What control measures should you have in place for your employees? What
control measures should you have in place to protect your audience?

14. The XYZ Company has purchased a 4 kW CO2 laser to use with an existing robot in
the manufacture of precision widgets. The laser will produce pulses of IR to produce
precise holes in the steel widgets. You have been hired to oversee the installation of
the laser. What engineering and administrative control measures would you take to
protect the employees of XYZ?

Imagine walking into a dark room at night. You
reach to the wall and switch on the light. Instantly,
you are able to clearly see the room and its
contents—all types of textured surfaces in a
rainbow of colors. How does this happen? Is the
change from darkness to light really
instantaneous? How can something with no color
of its own reveal all the colors of nature? What,
exactly, is light? This is a question that
philosophers and scientists have been wrestling
with for thousands of years. In this chapter, we
will explore these questions as we begin to learn
about the nature of light.
Sun Rays in Glacier Bay (J. Donnelly)

Chapter 2

THE NATURE OF LIGHT


2.1 A BRIEF HISTORY
Photonics is the application of light to solve technical problems, so it
makes sense to begin with the question "What is light?" The quest to understand
the nature of light dates back to ancient times, when some Greek philosophers
postulated that "visual rays" leave the eye and travel through space until they
strike an object, at which point the object is seen. From the dawn of "modern
science" in the seventeenth century until the early 1800s, whether light is a wave
or a particle was the subject of spirited debate. Unlike sand, which is clearly
composed of particles, or sound, which bends around corners and is easily
understood to be a wave, the nature of light cannot be determined by casual
observation.
How are particles and waves different? What kinds of observations could
be made to determine if a phenomenon is a wave or a particle? To knock a can
off a distant fence with a baseball, the baseball must travel the distance from your
hand to the fence. In contrast, a wave is a disturbance in a medium. The medium
does not travel to a distant location in order to deliver energy; instead it transfers
energy through small displacements that propagate along the medium. For
example, if a rope is tied to the same fence, you can shake the far end of the rope
and an energy pulse will travel along the length of the rope to dislodge the can at
the other end. After the wave disturbance dies out, the rope returns to its
undisturbed position.
Waves can bend around obstacles—you can hear people talking in a
classroom before you reach the open door because sound waves bend around the
edges of the door. Particles follow straight-line paths unless they are deflected
by an external force. Waves can pass through each other undisturbed, but
particles cannot. When two or more waves pass through the same point in a
medium, they interfere constructively to make a larger amplitude wave (when a
wave crest meets a wave crest) or destructively (when a crest meets a trough).
Ordinary particles do not interfere in this way; two ordinary baseballs cannot
combine to produce zero baseballs.
Several observations had been made by the end of the seventeenth
century that might have led scientists to decide that light is a wave. It was well known
that beams of light could travel through each other undisturbed. Diffraction, the
bending of light around the edges of a tiny hole, was observed by Francesco
Grimaldi in the mid-1600s. Double refraction or "birefringence" was discovered
in the late 1600s, when it was observed that certain crystals produce two images
when placed over a printed page.
Despite what now seems like clear evidence of wave behavior, Isaac
Newton argued for the particle theory of light. He used some of the same
reasoning he had applied to mechanical forces to describe the stream of light
particles he called corpuscles. Newton was aware of earlier observations of
diffraction, but he explained these effects as the result of a mechanical interaction
between light corpuscles and the material at the edge of a hole. Newton also
observed what are now called "Newton's rings," the interference pattern produced
in a thin film of air between two glass plates. Ironically, Newton's ring devices
are used today in classrooms to illustrate the wave behavior of light.

Figure 2.1 - Newton's rings formed between two pieces of glass.
Living around the same time as Newton, Christian Huygens, a Dutch
scientist, proposed that light was a form of wave motion propagating through an
all-pervasive medium known as the "luminiferous aether." (We now know that
no medium is required for the propagation of light waves, but that idea was
unheard of in Huygens' time.) Huygens was able to use waves to explain
reflection and refraction as well as double refraction. However, most scientists of
the time were inclined to side with Newton, in part because they were expecting
any wave effects to be of a greater magnitude than what had been observed. The
wavelength of light turned out to be far smaller than many scientists expected.


At the start of the nineteenth century, about one hundred
years after Newton published his famous treatise on optics, Thomas Young
performed the classic "two-slit" interference experiment, the results of which
could only be explained by assuming that light was a wave. Still, the acceptance
of the wave nature of light was not immediate. In 1818, Augustin Fresnel
submitted a paper on the wave theory of light to a competition of the French
Academy. Simeon Poisson, one of the judges, pointed out a ridiculous prediction
of Fresnel's theory: light passing around a solid obstacle, such as a ball bearing,
would cast a shadow with a bright spot at the center. This was clearly ludicrous!
When the experiment was performed and showed the bright spot, Fresnel was
named winner of the competition. Ironically, the bright spot in the center of the
dark shadow is often called "Poisson's spot" after the scientist who doubted its
existence.

Figure 2.2 - Poisson's spot. The photo shows the shadow of a ball bearing
illuminated with laser light.
Throughout the 1800s, the wave theory was strengthened by further
observations. By the end of the nineteenth century, it had been shown beyond
any doubt that light is an electromagnetic wave, vibrating electric and magnetic
fields propagating through space, described by James Clerk Maxwell's set of four
equations. It was assumed that little else would be discovered to change this
picture.
Problems with the wave theory surfaced when it was applied to the
interactions of light and matter. Albert Einstein (1905) found that by treating
light as a particle he was able to explain details of the photoelectric effect, the
ejection of electrons from a metal illuminated with light. The particle nature of
light was later used by Bohr (1913) to explain the emission and absorption of
light by atoms and by Compton (1923) to model the scattering of x-rays from
electrons. By 1930, the particle nature of light was as firmly accepted as the wave
nature had been 30 years earlier.
Is light a wave or a particle? It is in fact neither, but has some of the
properties and behaviors of both. When light is propagating through space it acts
like a wave and we usually describe its behavior using a wave model. When light
interacts with matter, its particle nature is invoked. Still, odd mixed-model
expressions such as the "wavelength of a photon" are often heard and understood.
During the early 20th century, scientists found that this wave-particle duality also
applies to particles such as electrons, which display wavelike behaviors in some
situations. Wave-particle duality is a fact of life for submicroscopic particles and
the scientists who study them.
Wave-particle duality aside, the model of light we will use to solve
practical optics problems depends on the particular situation. Geometric optics
deals with light as if it were a simple ray that travels in straight lines unless
deflected by an obstacle. Wave optics treats light as a wave phenomenon.
Quantum optics deals with the emission and absorption of light at the atomic or
quantum level, where light is considered to be a particle of energy, a photon. As
you progress through this book, you will learn when it is appropriate to use each
of these models for light.

2.2 ELECTROMAGNETIC WAVES

Describing a wave
Let's begin with the wave description of light. Light is a transverse
electromagnetic wave, described by Maxwell's equations. What do these words
mean? A wave is a disturbance that transmits energy from one point to another.
The vibrations of a guitar string, the sound of a train whistle, and the ripples
caused by a breeze blowing across a lake are everyday examples of waves. A
harmonic wave is a special case that is described by a sine or cosine function.
Whether the wave is mechanical, acoustic or electromagnetic, it can be described
by the following quantities, pictured in Figure 2.3:
Amplitude (A) is the maximum displacement of the wave from the level
of the undisturbed medium. The units depend on the type of wave. For example,
the amplitude of an ocean wave is measured in meters and the amplitude of a
sound wave is measured in kPa (kilopascals), which are units of pressure.
Wavelength (λ, the Greek letter "lambda") is the distance over which the
wave repeats itself, for example from one crest to the following crest. One
wavelength is also called one cycle of the wave. Wavelength is measured in units
of length (meters).
Period (T) is the time it takes for the wave to repeat itself, that is, the
time required for one cycle to be completed. Period is measured in units of time
(seconds).

Figure 2.3 - Amplitude, period and wavelength for a wave. In the top graph, the
wave is shown as a function of time, and the time between peaks is the period (T).
In the lower graph, the wave is shown as a function of space, and the distance
between peaks is the wavelength (λ).


Frequency (f) is the number of cycles that take place in a given time,
usually one second. Frequency is the inverse of the period, f = 1/T. The unit of
frequency is hertz (Hz), where 1 Hz = 1 cycle per second. Since "cycle" is not a
unit of measurement, 1 Hz = 1 s⁻¹ or 1/s. The symbol ν (the Greek letter "nu") is
often used to denote frequency. This text will use f to avoid confusion with the
letter v, which we will use to indicate wave speed. Example 2.1 shows how these
quantities are related in a common situation.

EXAMPLE 2.1
A student sits at the end of a dock watching the waves roll in. Always
prepared for physics, she has brought along a meter stick and a stopwatch.
She estimates the distance from the highest to lowest points on the wave is
30 cm and that the distance between wave peaks is 80 cm. In 10 seconds, 5
waves pass her position. What are the amplitude, wavelength, period and
frequency of the ocean waves she observes?

Solution:
The amplitude is the distance from the level of the undisturbed water to
the peak of the wave, that is, one half the peak-to-valley (highest to lowest)
distance. For these waves
A = 1/2 (30 cm) = 15 cm or 0.15 m.
The wavelength is the distance between wave peaks, 80 cm, or
λ = 0.80 m.
The period is the time for one wave to pass, that is, the time from one
peak to the next. If 5 waves pass in 10 seconds, it takes 2 seconds for each
wave to pass.
T = 2 seconds.
If a wave takes 2 seconds to pass, then 1/2 of a wave passes each second.
The frequency is 1/2 wave per second, or
f = 0.5 Hz.
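The bookkeeping in this example can be expressed in a few lines of Python (our own sketch; the inputs are the dock measurements given above):

    peak_to_valley = 0.30            # m, highest to lowest point on the wave
    wavelength = 0.80                # m, distance between wave peaks
    n_waves, total_time = 5, 10.0    # 5 waves pass in 10 seconds

    amplitude = peak_to_valley / 2   # 0.15 m
    period = total_time / n_waves    # 2.0 s
    frequency = 1 / period           # 0.5 Hz
    print(amplitude, wavelength, period, frequency)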

Figure 2.3 illustrates the displacement of a wave as a function of time
and as a function of space (distance). In both cases, the displacement is described
by a sine function. The equation of the top graph in the figure (a wave as a
function of time) can be written

" 2! t %
y = A sin $
# T '&
(2.1)


The factor of 2π "converts" the fraction to radians, since there are 2π
radians in a complete wave cycle. For example, if T = 2 s and t = 1 s, then
2πt/T = π. One half of the wave cycle has been completed, the argument of the
sine function is π, and the displacement at t = 1 second is zero.
If frequency (f) is substituted for 1/T in Equation 2.1 we have

y = A sin(2πft)

We define angular frequency as ω = 2πf and write

y = A sin(ωt)     (2.2)

Like frequency, angular frequency ω (the Greek letter "omega") has units of
s⁻¹ or hertz. Sometimes we say "radians per second" to emphasize the fact that
we are talking about angular frequency.
In a similar fashion, the equation of the wave as a function of distance
(bottom graph in Figure 2.3) can be written
y = A sin(2πx/λ)

Here, x/λ is the fraction of the wave completed and again the factor of 2π
"converts" the fraction to radians. We define the propagation constant k = 2π/λ
and write

y = A sin(kx)     (2.3)

Equation 2.2 describes a harmonic wave in time and Equation 2.3
describes a harmonic wave in space. But traveling waves oscillate in both space
and time, so the general equation for a harmonic wave must combine both of
these elements to give

y = A sin(kx − ωt)     (2.4)

The details of the derivation of Equation 2.4 are given in the Appendix.

EXAMPLE 2.2
What is the equation for the wave in Example 2.1? The amplitude,
wavelength and frequency of the wave were found to be A = 0.15 m,
λ = 0.80 m and f = 0.5 Hz.

Solution
k = 2π/λ = 2π/(0.8 m) = 7.85 m⁻¹
ω = 2πf = 2π(0.5 Hz) = 3.14 Hz
so y = 0.15 sin(7.85x − 3.14t) is the equation of the wave.
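A short Python sketch (ours, not the authors') shows how Equation 2.4 can be evaluated numerically from the values found in this example:

    import math

    A, wavelength, f = 0.15, 0.80, 0.5   # m, m, Hz (from Example 2.1)
    k = 2 * math.pi / wavelength         # propagation constant, ~7.85 m^-1
    w = 2 * math.pi * f                  # angular frequency, ~3.14 s^-1

    def y(x, t):
        """Displacement of the traveling wave, Equation 2.4."""
        return A * math.sin(k * x - w * t)

    print(y(0.2, 1.0))   # displacement 0.2 m along the wave at t = 1 s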


Phase (φ, the Greek letter "phi") is another wave parameter not explicitly
pictured in Figure 2.3. Phase refers to a particular point on a wave; for example, a
sine wave has maximum displacement at 90° and a zero at 180°. The term is used
in this book in two contexts. First, we refer to a "phase shift," by which we mean
"sliding" a wave to the left or right of its original position. Second, we will speak
of waves being "in phase," which means they reach their minimum, maximum
and zeros at the same time or place, or "out of phase," meaning that when one
wave is at a maximum, the other is at a minimum. Because the maximum and
minimum points of a sine wave are 180° apart, we will sometimes refer to waves
that are out of phase as having a 180° phase difference. Using radian measure, the
waves are said to be π radians out of phase.
You might wonder where the phase angle (φ) fits into the wave equation.
If the displacement (x) and the time (t) are both zero in Equation 2.4, then
y = A sin(0) = 0. This is the situation shown in Figure 2.3, where the wave
displacement (y) is zero at the origin. But what if a wave starts at maximum
amplitude, that is, if y = A at t = 0 and x = 0? The wave equation can easily
accommodate this situation if the phase angle is included in the argument of the
sine in Equation 2.4.

y = Asin(kx - ! t + " ) (2.5) The wave equation

For the current situation of a wave that has value A at the origin, we can
set φ = π/2 in Equation 2.5. Then

y = A sin(kx − ωt + π/2), and at t = 0 and x = 0,

y = A sin(π/2) = A, which is the expected result.

Thus, the phase angle is used to indicate the starting point of the wave. Interested
readers will find more detail on harmonic motion and the wave equation in the
Appendix.
Another useful equation relates the wavelength, frequency and speed of a
wave. Since speed is defined as distance divided by time, the speed of a wave (v)
is given by dividing the distance the wave travels in one cycle (wavelength) by
the time to complete one cycle (period), or
v = d/t = λ/T

Using the relationship f = 1/T gives

v = λf     (2.6) Speed, frequency and wavelength
This important equation applies to all types of waves.


EXAMPLE 2.3
What is the speed of the waves in Example 2.1?

Solution
The wavelength of the wave in Example 2.1 was 0.80 m and the
frequency was 0.5 Hz.
Thus, wave speed is given by
v = λf = (0.80 m)(0.5 Hz) = 0.4 m/s
Remember that Hz = s⁻¹, so meters times hertz gives meters per second.

The propagation speed calculated by Equation 2.6 is the speed with
which the wave disturbance moves forward in the medium. For example, when a
small stone is dropped into a puddle, the waves move outward with a speed v, the
propagation speed. For electromagnetic waves in a vacuum, propagation speed is
usually given the symbol "c," so that Equation 2.6 becomes

c = λf     (2.7)

In 1983, after a long history of increasingly accurate measurements, the
speed of light in a vacuum was defined to be c = 299,792,458 m/s. The meter was
then defined as the distance light travels in a vacuum in 1/299,792,458 second. It
is easier to accurately measure time than distance using the natural oscillations of
an atomic "clock," so the meter is defined in terms of time and the constant speed
of light in a vacuum. For the problems in this text, c = 3 × 10⁸ m/s will
be sufficiently precise.
Now that we have a good idea of what a wave is, let us return to the other
two words in the description of light: transverse and electromagnetic. A
mechanical wave is a vibration in a medium, but in the case of light, electric (E)
and magnetic (H) fields are oscillating. No medium is required to support the
electromagnetic field—light can travel in a vacuum! The E and H field
vectors are oriented at right angles to each other and to the direction of
propagation. This is what we mean by transverse—the direction of wave
vibration is perpendicular to the direction of wave propagation. Figure 2.4
represents an electromagnetic wave with the electric field (E) in the
y-direction, the magnetic field (H) in the x-direction, and wave propagation in
the z-direction.

Figure 2.4 - An electromagnetic wave, with the electric field (E) and magnetic
field (H) perpendicular to each other and to the direction of propagation.


The Electromagnetic Spectrum


The rainbow colors of visible light make up only a small portion of the
electromagnetic spectrum. While the electromagnetic spectrum spans
wavelengths from a fraction of a picometer to several hundred meters, the
wavelengths of visible light extend only from about 400 nm for violet to about
700 nm for red. These wavelengths are not distinct boundaries; the wavelengths
at which the visible spectrum fades away at either end depend somewhat on the
individual as well as the viewing conditions. The colors of the visible spectrum,
from long wavelength to short wavelength, can be remembered by the acronym
"ROY G. BV": red, orange, yellow, green, blue, violet.
Figure 2.5 shows the electromagnetic spectrum with approximate
wavelengths from 10⁻¹⁴ meters to more than 1000 meters. Beginning
at violet light and progressing toward shorter wavelengths we have ultraviolet
(UV) rays, then x-rays and finally gamma rays. In the region beyond red are
infrared (IR), microwaves and radio waves. All of these terms describe
electromagnetic radiation, differing only by wavelength.

Figure 2.5 - The electromagnetic spectrum. In order of increasing wavelength:
gamma rays, x-rays, UV, visible, IR, microwave and radio. Visible light is a tiny
portion of the spectrum sandwiched between ultraviolet (UV) and infrared (IR),
running from violet near 400 nm through blue, green, yellow and orange to red
near 700 nm.

What we call "light" usually means what humans are capable of sensing,
that is, the visible spectrum. In some contexts, however, the word "light" includes
portions of the UV and IR regions; this is sometimes called the optical spectrum.
Human vision is limited to a small range of wavelengths because our retinal
sensors cannot be stimulated by low energy infrared light, and the lens of our eye
blocks UV light. However, other animals, including many insects, can sense UV
light, and photographs taken of flowers and butterflies illuminated with UV light
often look very different from the same items illuminated with visible light. In
fact, imaging with radiation from different parts of the spectrum allows scientists
to study such diverse phenomena as crop diseases and star formation. Specialized
film or detectors are used to create images over wavelengths of the
electromagnetic spectrum invisible to human eyes. Chapter 11 will explore the
science and technology of optical imaging.


Figure 2.6 - Students imaged in infrared light. Such photos are usually color coded so that the brightest regions represent the warmest temperatures. In this case, the color temperature scale is beneath the photo. The glass lenses of the students' eyeglasses block infrared radiation from reaching the camera, causing them to appear dark. (Courtesy Dr. Michael Ruane, Boston University, www.bu.edu/photonics)

EXAMPLE 2.4
The broadcast frequency of a radio station is 88.5 MHz. What is the
wavelength of the radio waves from that station?

Solution:
Use Equation 2.7, c = λf. Solving this equation for the wavelength,

λ = c/f = (3×10⁸ m/s)/(88.5×10⁶ Hz)
λ = 3.39 m

Note that the time units (seconds) cancel, leaving meters, since 1 Hz = 1 s⁻¹.
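
If you like to check such calculations by computer, Equation 2.7 translates directly into a few lines of Python. The sketch below is a minimal illustration (the function names are our own, chosen for clarity):

    C = 3.0e8  # speed of light in a vacuum, m/s (the precision used in this text)

    def wavelength(frequency_hz):
        # Equation 2.7 rearranged: lambda = c / f
        return C / frequency_hz

    def frequency(wavelength_m):
        # Equation 2.7 rearranged: f = c / lambda
        return C / wavelength_m

    print(wavelength(88.5e6))  # the station in Example 2.4: about 3.39 m
    print(frequency(633e-9))   # a 633 nm red laser beam: about 4.74e14 Hz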

2.3 INTRODUCTION TO QUANTUM OPTICS

Photons
At the end of the nineteenth century, physicists were fairly certain that
the science was complete in its description of the physical world and only minor
details remained to be explained. One of these details was the photoelectric
effect. The effect was puzzling because the emission of electrons from an
illuminated metal depends upon the wavelength of the light, not the irradiance.
For example, a certain metal target might emit electrons when illuminated with
ultraviolet light but not when illuminated with red light, even if the red light
strikes with a much higher irradiance than the UV.
The explanation of the photoelectric effect proposed by Einstein in 1905 is that the light striking the metal is a stream of particles, later given the name


photons, and the energy of each photon is directly proportional to the frequency
of the associated radiation. Mathematically

E = hf (2.8) Photon Energy

The constant of proportionality, h, is called Planck's constant and is equal to 6.625×10⁻³⁴ joule•seconds. The constant is named for Max Planck, sometimes called the "father of quantum physics," who was the first to correctly describe the wavelength spectrum of radiation produced by a glowing hot object.
A photon can be defined as the smallest division of a light beam that
retains properties of the beam such as frequency, wavelength and energy; it is a
quantum unit of light energy. A photon is sometimes described as a wave packet
that has specific energy content. Although we speak of photons as particles, they
have some pretty bizarre properties. For example, a photon has energy, but no
mass. It does carry momentum, and if it stops moving it ceases to exist!

EXAMPLE 2.5
The frequency of helium neon laser light is 4.74x1014 Hz. Calculate the
energy of the photon.

Solution
Use Equation 2.8,
E = hf = (6.625×10⁻³⁴ J•s)(4.74×10¹⁴ Hz)
E = 3.14×10⁻¹⁹ joules
Since 1 Hz = 1 s⁻¹, multiplying J•s by Hz results in joules.

Because the energy of photons is so small, it is often more convenient to express photon energy in electron•volts (eV). An electron•volt is the energy gained by one electron when it is accelerated through a potential difference of 1 volt, so that 1 eV is equal to 1.6×10⁻¹⁹ joules. Thus, Planck's constant can be written h = 4.14×10⁻¹⁵ eV•s, which allows us to directly solve for photon energy in electron•volt units. The energy of a photon of the helium neon laser light in Example 2.5 can be shown to be 1.96 eV.
Solving Equation 2.7 for frequency and substituting into Equation 2.8
gives a convenient equation for finding the energy of a photon in terms of the
wavelength of the associated wave

E = hc/λ        (2.9)


Notice that photon energy is inversely proportional to wavelength. For example, photons of blue light have more energy than photons of red light, and ultraviolet photons are more energetic than visible photons. This explains why UV light is able to dislodge a photoelectron from a certain metal in the photoelectric experiment while a red photon cannot.
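
A short Python sketch makes the comparison concrete. It simply evaluates Equations 2.8 and 2.9 with the constants given above (a minimal illustration of our own, not production code):

    H_J = 6.625e-34   # Planck's constant in joule-seconds (text value)
    H_EV = 4.14e-15   # Planck's constant in eV-seconds (text value)
    C = 3.0e8         # speed of light, m/s

    def photon_energy_joules(wavelength_m):
        # Equation 2.9: E = hc / lambda
        return H_J * C / wavelength_m

    def photon_energy_ev(wavelength_m):
        # Same calculation with h expressed in eV*s
        return H_EV * C / wavelength_m

    print(photon_energy_ev(650e-9))      # red photon: about 1.9 eV
    print(photon_energy_ev(450e-9))      # blue photon: about 2.8 eV
    print(photon_energy_joules(633e-9))  # HeNe laser photon: about 3.1e-19 J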

Energy Levels In Atoms

In 1900, Planck introduced the central idea of quantum mechanics, that radiation-producing oscillators are allowed to have only discrete energy levels. Only thirteen years later, Niels Bohr proposed the first quantum atom, consisting of a positive nucleus surrounded by negative electrons each existing in a specific energy state. Electrons can gain or lose energy only by jumping from one energy level to another. In fact, an electron changing energy levels is never seen "between" levels; it simply disappears from one level and appears in another.
The situation is like a flight of stairs: you can go from one stair to
another, even jump over some stairs, but you can never land “between” two
stairs. Contrast this situation to a ramp, where your position may be at any height
above the bottom of the ramp (Figure 2.7). We say that energy is quantized; it
can have only certain specific values.

Figure 2.7 - Quantized energy values are like a set of stairs (left), while classical physics allows an object to have any value of energy.

Although the circular electron orbits proposed by Bohr were later shown
to be impossible, the basic ideas of quantization that he proposed are still used to
explain the physics and chemistry of elements. The picture of an atom as a
nucleus surrounded by orbiting electrons is commonly used as a logo and even
appears in some textbooks. In fact, the location of electrons in an atom is
governed by probability. Even though the exact location of an electron may be
unknowable, its energy may still only have discrete, quantized values.
The minimum energy level for an atom is called the ground state. This is
not a state of zero energy, but rather the minimum allowable energy for that
atom. Atoms can absorb energy through a number of mechanisms and enter what
are called excited states. The amount of energy that is absorbed must be equal to
the difference between two energy levels (states). Once in an excited state, an
atom can return to a lower energy state, releasing energy in the process. If the


energy is released as radiation, the radiated photon will have energy equal to the difference in energy between the two states.
Energy levels are often shown in a diagram such as Figure 2.8. Transitions between levels are shown as arrows indicating a gain or loss of energy. In Figure 2.8, the "blue" photon excites the atom from the state E1 to the state E3. This happens because the photon energy is equal to the energy difference between E1 and E3. When the atom makes a transition from the state E3 to the state E2, it gives off the "red" photon, with energy E3 − E2. Note that the atom is still in an excited state (E2) and at some later point may transition to the ground state.
Although no numerical values are indicated in Figure 2.8, it is clear that the energy jump from state E1 to E3 is larger than the jump from E3 to E2. Thus, the photon that is emitted has a smaller energy than the one that is absorbed. Equation 2.9 indicates that a smaller energy means a longer wavelength; thus, the emitted photon is toward the red end of the spectrum and the absorbed photon is toward the blue end.

Figure 2.8 – Absorption (top) and emission (bottom) of radiation by an electron.

Figure 2.9 shows some of the energy levels of the hydrogen atom, which has only one electron. By convention, the ionized atom has E = 0, and the ionization energy is the amount of energy needed to free an electron originally in the ground state. Thus, energy levels of bound electron states have negative values. The lowest energy state of the hydrogen atom is -13.6 eV; that is, it takes 13.6 eV of energy to free a ground state hydrogen electron from the nucleus.

Figure 2.9 – Some energy levels of the hydrogen atom: E1 = -13.6 eV, E2 = -3.40 eV, E3 = -1.50 eV, E4 = -0.85 eV, E5 = -0.54 eV, E6 = -0.38 eV, with E∞ = 0 for the ionized atom (additional levels not shown). Bold arrows mark the transitions that produce the visible red, blue-green and violet lines.


The arrows in Figure 2.9 indicate some of the energy transitions from
higher energy states to lower energy states. The longest arrows indicate large
energy transitions; they produce high-energy photons (ultraviolet). In the same
way, the shortest arrows produce low energy photons (infrared). The arrows
shown in bold produce photons in the visible range: red (656 nm), green-blue
(486 nm), and violet (434 nm and 410 nm). Observation of hot hydrogen gas with
a device called a spectrometer will reveal the presence of these four emission
lines. If the spectrometer is also sensitive to ultraviolet and infrared, many more
emission lines will be visible as well.
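
You can reproduce these wavelengths yourself from the energy levels in Figure 2.9. The Python sketch below (our own illustration) applies Equation 2.9 to the energy difference of each transition; small differences from the quoted values reflect the rounded constants:

    H_EV = 4.14e-15  # Planck's constant in eV*s
    C = 3.0e8        # speed of light, m/s

    # Hydrogen energy levels from Figure 2.9, in eV
    LEVELS = {1: -13.6, 2: -3.40, 3: -1.50, 4: -0.85, 5: -0.54, 6: -0.38}

    def emission_wavelength_nm(upper, lower):
        # The photon carries off the difference between the two levels;
        # Equation 2.9 then gives lambda = hc / E.
        photon_energy_ev = LEVELS[upper] - LEVELS[lower]
        return H_EV * C / photon_energy_ev * 1e9

    print(emission_wavelength_nm(3, 2))  # about 654 nm -- the red line
    print(emission_wavelength_nm(4, 2))  # about 487 nm -- the green-blue line
    print(emission_wavelength_nm(6, 2))  # about 411 nm -- a violet line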
Figure 2.10 - Emission (top) and absorption (bottom) spectra for hydrogen gas. Only the four visible lines indicated in Figure 2.9 are shown. The wavelengths visible in the emission spectrum are the same wavelengths that are missing in the absorption spectrum.

If light containing all visible wavelengths (white light) is passed through a cloud of hydrogen gas, the light that emerges from the gas will exhibit an absorption spectrum. The visible part of the spectrum will contain all wavelengths except the four lines absorbed by the hydrogen gas (Figure 2.10). Thus, the hydrogen atom emits and absorbs photons of specific wavelengths, as determined by its allowed energy levels. These wavelengths can be used as a sort of "fingerprint" to identify hydrogen gas or other substances whose spectral characteristics are known. In fact, the element helium was discovered in the late 1800s when analysis of the absorption spectrum of the sun revealed absorption lines corresponding to no elements known at the time.
To observe the emission spectrum of hydrogen gas, the hydrogen atoms
must be excited into higher energy levels. One way to accomplish this is to apply
a high voltage across the ends of a glass tube containing the gas at low pressure.
A spectrometer will reveal the four visible lines shown in Figure 2.10. If you
look at the spectral lines of hydrogen you will see that lines are not equally
bright. In fact, the violet line at 410 nm may be barely visible to the eye at the far
end of the visible spectrum. Some difference in brightness is due to the color-
dependent sensitivity of the human eye, however, some spectral lines really are
much less bright than others. The relative brightness of the spectral lines can be
readily observed with a more "neutral" sensor than the eye, such as a computer-
based spectrometer (Figure 2.11).
The variation in brightness of spectral lines has its roots in the
probabilistic nature of quantum physics. Not all energy transitions are equally
probable. Most excited atoms give off energy very quickly (on the order of 10
nanoseconds) but some states have a longer lifetime, up to milliseconds or even
longer. If an atom is in a long lifetime state, the transition to a lower level is less
likely to happen, and fewer photons of that energy will be produced. Thus, the
spectral line is not as bright as a line produced by a more probable transition. As
you will see in Chapter 9, long lifetime states are essential to the operation of a
laser.


Figure 2.11 - The hydrogen (top) and helium (bottom) spectra obtained with a computer-based spectrometer. In general, the height of each line indicates the relative irradiance. The 656 nm red line of hydrogen is an exception in this graph: in order to show the 410 nm (violet) line as a tiny line on the left of the spectrum, the full height of the 656 nm line cannot be shown; the line saturates at the top of the graph.

The sources were laboratory gas spectrum tubes. The hydrogen tube has a dark pink glow; the brightest line is 656 nm (red). The two violet lines are very dim by comparison. Helium gas glows yellow-orange; the brightest line is at 587 nm (yellow).

Gas molecules can also absorb and emit light. In this case, the energy
levels correspond to vibrations and rotations of the entire molecule, and these
energy levels are also quantized. In the case of liquids and solids, the discrete
energy levels of single atoms are influenced by neighboring atoms, and the levels
spread out into energy bands. While the emission and absorption spectra of gases
at low pressure feature sharp lines, solids and liquids generate more continuous
spectra featuring wide bands of color.
Fluorescence and Phosphorescence
Fluorescence describes a process in which an atom is excited into one of
its short lifetime excited states and then spontaneously returns to the ground state
by emitting one or more lower energy photons. Inside a fluorescent light tube, for
example, a gas is excited by a high voltage to emit high energy ultraviolet
radiation. The atoms in the coating on the inside of the tube absorb the UV
photons and emit lower energy visible photons by making many smaller energy
downward transitions. The exact choice of coating material determines the tint of
light produced. If you observe fluorescent lights with a spectrometer, the bands
of color that make up the overall illumination are readily visible. Energy saving
fluorescent bulbs have particularly interesting spectra; their apparently white
light is actually composed of several bands of wavelengths from different regions
of the visible spectrum.
Phosphorescent materials have a long lifetime excited state that allows
them to continue to glow for a time after the excitation source is removed. In a


large collection of excited phosphorescent atoms, many will return to the ground
state fairly quickly, but some will remain in the excited state for several seconds
or minutes. Thus, light can be emitted over a fairly long period of time. Glow-in-
the-dark toys and clothing are examples of phosphorescence.
Blackbody Radiation
The photoelectric effect was not the only puzzle that could not be solved
by nineteenth century physics; the explanation of the radiation emitted by a
blackbody also requires quantum physics. A blackbody is an object that absorbs
all incident light at every wavelength, so it appears black to the eye. It is a perfect
absorber and also a perfect emitter of radiation. At any temperature, the spectral
distribution of a blackbody depends only on the temperature and not on the
material the blackbody is made of. A theoretical blackbody is often described as
a cavity in an otherwise solid piece of metal. A very small hole allows a sample
of the trapped radiation to exit and be measured. The pupil of your eye is a good
approximation to a blackbody.
Figure 2.12 illustrates the spectral distribution of radiation given off by a
blackbody at three different temperatures. Note that as the temperature increases
more radiation is emitted at all wavelengths, but the wavelength of maximum
brightness becomes more “blue” as the blackbody grows hotter. Many real
objects act like blackbodies, and the radiation they give off when heated
approximates blackbody radiation. For example, when the heating element of an
electric stove is first turned on it feels warm, but there is no visible change in
color because the radiation is in the infrared portion of the spectrum. As the
element grows hotter, it glows a dull red, visible only in a darkened room, then
brighter red, and finally orange-red. If the element continued to heat, it would
radiate yellow-orange light and then, at temperatures over 2000°C, it would
appear yellowish-white, like the filament of a tungsten light bulb.

Figure 2.12 - The blackbody spectrum at three temperatures (2000°C, 4500°C and 6000°C), plotted as irradiance (W/m²) versus wavelength. The higher the temperature, the greater the irradiance at every wavelength, and the closer the wavelength of maximum irradiance moves toward the visible part of the spectrum.


Classical physics assumed the radiation emitted by a heated object was due to microscopic charged oscillators that acted like tiny antennas. The distribution of oscillator energy was assumed to be continuous; however, calculations using this model predicted much more short wavelength radiation than was observed. In fact, the predictions were so far off that the effect was called the "ultraviolet catastrophe." Max Planck's startling explanation of the observed spectral distribution of blackbody radiation required that the energy of the oscillators be quantized. The solution to the blackbody problem marked the beginning of quantum physics at the start of the twentieth century.
The equations that describe the wavelength dependence of blackbody
radiation on temperature are well known. Thus, the radiation spectrum of an
object that acts like a blackbody can be used to determine its temperature.
Astronomers use this technique to determine the temperatures of stars, whose
colors range from “cool” red to blue (for the hottest stars), and technicians can
use blackbody radiation to measure the temperature of a furnace from the
wavelength distribution of the glowing contents.
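
The full blackbody equations are beyond the scope of this chapter, but the wavelength of peak emission follows the simple relation λmax ≈ (2900 µm•K)/T, known as Wien's displacement law (it also appears in Problem 29). A minimal Python sketch of the temperature estimate just described:

    WIEN = 2.9e-3  # Wien's displacement constant, approximately 2.9e-3 m*K

    def peak_wavelength_m(temperature_k):
        # lambda_max = (2900 um*K) / T, written here in SI units
        return WIEN / temperature_k

    def temperature_from_peak_k(peak_wavelength_m):
        # Invert the relation to estimate a blackbody's temperature
        return WIEN / peak_wavelength_m

    print(peak_wavelength_m(5800))        # the sun: about 5.0e-7 m (visible)
    print(temperature_from_peak_k(1e-6))  # a star peaking at 1 um: about 2900 K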

2.4 RADIOMETRY AND PHOTOMETRY

Most general physics texts begin with an introduction to standard units of measure. It is clear that measurements must be based on some commonly accepted standard. If each laboratory or each nation had its own unique measurement system, exchange of information would be difficult if not impossible.
International standards organizations exist to ensure the uniformity of
measurement standards worldwide. The familiar International System (SI)
includes units to describe light generated by a source or incident on a surface.
You have already seen that the "color" of light (wavelength) is measured in length units: µm (10⁻⁶ m), nm (10⁻⁹ m) and sometimes, especially in older texts, Å (Ångstrom, where 1 Å = 10⁻¹⁰ m).
The "brightness" or "strength" of a light source may be measured in a
number of different ways. In fact, some of the measurements used for this
purpose have confusing if not downright silly names, such as phot, nox, nit and
stilb. Part of the complication lies in the fact that two sets of measurements are
used with light: radiometric units, which apply throughout the entire
electromagnetic spectrum, and photometric units, which apply only to visible
light and are dependent on the color response of the human eye.

Radiometric Measurements
The International System of units has approved seven radiometric terms,
and five of these are listed in Table 2.1. The remaining terms and their uses may


be found in the references to this chapter. Radiometric quantities occasionally have the subscript "e" to distinguish them from photometric quantities; however, they often appear with no subscript.

Term                     Symbol         Description                                  Unit
Radiant Energy           E (or U or Q)  Energy                                       joule (J)
Radiant Power (or Flux)  P (or F)       Energy per unit time                         J/s or watt (W)
Radiant Exitance         M              Power emitted per unit area of a source     W/m²
Irradiance               E (or Ee)      Power falling on a unit area of a target    W/m²
Radiant Intensity        I              Source power radiated per unit solid angle  W/sr

Table 2.1 - Some radiometric units and the symbols used to represent them.

The units of energy and power are familiar from basic physics, although
you may not have known that the letters Q and U are often used to denote energy.
Now we run into the problem of not having enough letters to serve as symbols
for similar quantities! In this text we will use E for both irradiance and energy
but it will be clear from the context and from the units which quantity is being
discussed.
You should know that I is sometimes used as a symbol for irradiance.
Irradiance, or optical power density, is often confused with intensity, and you
will find books that use intensity when irradiance should be used. Making the
situation even more confusing, the term intensity has different meanings in
different areas of physics! To avoid adding to the confusion, we will use E for
irradiance and I for intensity as indicated in Table 2.1.
Radiant Exitance and Irradiance
The radiant exitance is the amount of power that radiates from (exits) a
source per unit area. Its units are power divided by area, or W/m2. This is a term
that is used to characterize a source of radiation. However, as Table 2.1 shows,
the unit combination W/m2 is also used for irradiance, which describes radiation
incident on a surface. Irradiance was introduced in Chapter 1 as an important
measurement in determining laser classifications and laser hazards.
Example 2.6 illustrates the use of exitance to characterize a source of
light. In Example 2.7, the energy flux (power) on a surface of given area is
quantified by calculating irradiance.


EXAMPLE 2.6
A globe lamp rated at 40 watts is 8 cm in diameter. Find its radiant exitance.
(Ignore the metal base and assume the bulb radiates over its entire spherical
surface.)

Solution
The surface area of the bulb is given by the formula for the surface area of a sphere:

A = 4πr² = 4π(0.04 m)² = 0.02 m²

The radiant exitance is

M = 40 W / 0.02 m² = 2000 W/m²

Although W/m² is the SI unit for radiant exitance, for the small bulb in Example 2.6 the answer may also be expressed in W/cm². In that case, M = 0.2 W/cm². Note that 40 watts is the total power emitted, but if this is an incandescent bulb, most of the power is radiated as heat (infrared) rather than visible light.

EXAMPLE 2.7
A Helium Neon laser has an output power of 2 mW and makes a 1 cm
diameter spot on a wall four meters from the laser output aperture. Find the
irradiance at the wall due to the laser.

Solution
The area of the circular spot of light is given by

A = πr² = π(0.005 m)² = 7.85×10⁻⁵ m²

The irradiance is then

E = P/A = (2×10⁻³ W)/(7.85×10⁻⁵ m²) = 25.5 W/m²
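
Both examples are simple power-over-area divisions, so they script easily. The Python sketch below (our own illustration) reproduces Examples 2.6 and 2.7:

    import math

    def radiant_exitance(power_w, bulb_diameter_m):
        # M = P / A for a spherical source (Example 2.6)
        area = 4 * math.pi * (bulb_diameter_m / 2) ** 2
        return power_w / area

    def irradiance(power_w, spot_diameter_m):
        # E = P / A for a circular laser spot (Example 2.7)
        area = math.pi * (spot_diameter_m / 2) ** 2
        return power_w / area

    print(radiant_exitance(40, 0.08))  # about 2000 W/m^2
    print(irradiance(2e-3, 0.01))      # about 25.5 W/m^2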

Intensity
The term intensity has a number of meanings depending on the context in
which it is used. For example, in wave theory it is the square of the wave
amplitude and in acoustics it is the power per unit area. In radiometry, the term
has a very specific meaning: intensity is the power emitted by a point source into
a cone of a given solid angle.


Since a point source emits radiation that moves outward as an expanding sphere, we need to describe the angles measured inside a solid sphere. A solid angle is a three-dimensional analog to the more familiar plane angle. Recall the definition of the radian: an angle drawn at the center of a circle sweeps out (subtends) an arc of length "s" as shown in Figure 2.13. The measure of the angle in radians is the arc length divided by the radius of the circle:

θ = s/r

Figure 2.13 - Definition of the radian. The angle θ subtends the arc s.

Since both the arc length and radius have the same (length) units, the radian is a dimensionless quantity. Also, since 360° represents a full circle with an arc length equal to the circumference (2πr), 360° is equivalent to 2π radians.

Now consider a sphere. A solid angle subtends an area A on the surface, as shown in Figure 2.14. The measure of the solid angle in steradians (sr) is given by the area subtended divided by the radius of the sphere squared:

Ω = A/r²        (2.10)

Figure 2.14 - The steradian. The area A subtends a solid angle Ω.

The upper case Greek letter omega (Ω) is a common symbol for solid angle. The steradian, like the radian, is a dimensionless quantity, since both numerator and denominator have units of length squared. As shown in Example 2.8, a sphere subtends a solid angle of 4π sr.

EXAMPLE 2.8
What solid angle is subtended by a 10 cm2 area circle on the surface of a
balloon 50 cm in diameter? By the entire balloon surface?

Solution
a. The solid angle subtended by the circle is

Ω = A/r² = (10 cm²)/(25 cm)² = 0.016 sr

b. Since the surface area of a sphere is 4πr², the solid angle subtended by the entire surface is

Ω = A/r² = 4πr²/r² = 4π sr
This is a useful fact to remember about solid angle and a sphere.
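
Equation 2.10 also translates directly into code. The short Python sketch below (our own illustration) repeats both parts of Example 2.8:

    import math

    def solid_angle_sr(area, radius):
        # Equation 2.10: Omega = A / r^2 (any consistent length units)
        return area / radius ** 2

    print(solid_angle_sr(10, 25))  # the circle on the balloon: 0.016 sr

    r = 25
    print(solid_angle_sr(4 * math.pi * r ** 2, r))  # whole sphere: 4*pi = 12.6 sr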


Because radiant intensity is the amount of power emitted by a point source into a given solid angle, a point source radiating uniformly into the space around it has radiant intensity given by

I = P/Ω = P/4π        (2.11) Radiant intensity

The units of radiant intensity are watts/sr (watts per steradian). Since
radiant intensity is defined for a point source of light, the light from an extended
source such as a long fluorescent bulb is described by a different unit of measure
(called radiance). A more complete discussion of radiant units may be found in
the references at the end of this chapter.

EXAMPLE 2.9
Derive the inverse-square law of radiation for a point source of light.

Solution
Solving Equation 2.11 for power in terms of intensity, we get P = 4πI.
The surface area of a sphere is given by

A = 4πr²

Substituting these two expressions into the definition of irradiance (E = P/A), we find that the irradiance of the source over a spherical surface surrounding the source is given by

E = P/A = 4πI/(4πr²) = I/r²

Example 2.9 shows that the irradiance from a point source decreases
inversely with the square of the distance. This is sometimes called the "inverse
square law." This result depends on the special geometry of a point source
radiating into a sphere. Remember, the inverse square law is not applicable to
long fluorescent tubes or narrow laser beams!
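
A few lines of Python (again, our own illustration) make the inverse square law concrete: each doubling of the distance from a point source cuts the irradiance by a factor of four.

    import math

    def irradiance_point_source(power_w, distance_m):
        # E = P / (4*pi*r^2), equivalent to E = I / r^2 from Example 2.9
        return power_w / (4 * math.pi * distance_m ** 2)

    for r in (1.0, 2.0, 4.0):
        print(r, irradiance_point_source(100, r))
    # a 100 W point source gives about 7.96, 1.99 and 0.497 W/m^2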

Photometric Units
Radiometry involves physical measurement and applies to all regions of
the electromagnetic spectrum. That is, we assume that radiation properties are
measured by an unbiased detector. Photometry, however, applies only to visible
light. Furthermore, it takes into account the wavelength response of the human
eye. Photometric units, then, measure the properties of visible radiation as
observed by a human viewer. Photometric units are important in lighting design and in marketing and architectural applications.


The spectral response of the human eye is plotted in Figure 2.15, which
shows the CIE (International Commission on Illumination) luminous efficiency
curve. The peak of the curve is at 555 nm (yellow-green); this is the wavelength
of peak sensitivity of the human eye in daylight. If red, green and blue light of
equal radiant power are observed by the human eye, the green will appear much
brighter than either the red or blue. A similar curve for nighttime vision has the
wavelength of peak efficiency shifted about 50 nm toward the short wave end of
the spectrum.
Figure 2.15 - Luminous efficiency curve for daytime viewing. (Copyright Maxim Integrated Products, www.maxim-ic.com. Adapted and used with permission.)

Photometric units are related to radiometric units through the luminous efficiency of the eye. By definition, a radiant power of 1 watt at the peak wavelength of 555 nm corresponds to a luminous power of 683 lumens (lm). (This constant was chosen to make the "new" standard correspond to an older standard.) At other wavelengths, the luminous power is 683 lm times the luminous efficiency from the curve in Figure 2.15. For example, 1 watt of radiant power at 650 nm, where the luminous efficiency is 0.11 (11%), corresponds to a luminous power of (0.11)(683 lm) = 75 lm. Since the eye is less sensitive at 650 nm (red light) than 555 nm (yellow-green), the same radiant power results in lower luminous power for the red light.
In general, luminous units are related to radiometric units by the equation

Luminous unit = Luminous efficiency × 683 × Radiant unit        (2.12) Luminous units

Luminous units corresponding to the radiant units of Table 2.1 are shown
in Table 2.2. Note that luminous units have the subscript v, to indicate the
dependence on human visual response. Here, luminous energy is given the
symbol Qv to distinguish it from illuminance, Ev.


Luminous units are important in applications where the human eye is the
detector. For example, the packaging of ordinary light bulbs indicates the
luminous power (in lumens) typically emitted by the bulb. The electrical power is
also indicated (in watts). Thus, the wall plug efficiency of the bulb may be
calculated in lumens per watt. (This quantity, in lumens/watt, is often called
"source luminous efficacy.") A typical "soft white" 60 watt incandescent bulb is
labeled "840 lumens", or 14 lumens/watt. In contrast, an energy efficient "light
capsule" might be labeled 850 lumens for 15 watts, or nearly 57 lumens/watt.

Term                Symbol      Description                              Unit                     Radiant term/unit
Luminous energy     Qv          Luminous energy                          talbot                   Radiant energy / joule
Luminous power      Pv (or Φv)  Luminous energy per unit time            lumen (lm) = talbot/s    Radiant power / watt
Luminous exitance   Mv          Luminous power emitted per unit          lumen/m²                 Radiant exitance / watt/m²
                                area of a source
Illuminance         Ev          Luminous power on a unit area            lux (lx) = lumen/m²      Irradiance / watt/m²
                                of a target
Luminous intensity  Iv          Luminous source power radiated           candela (cd) = lumen/sr  Radiant intensity / watt/sr
                                per unit solid angle

Table 2.2 - Luminous units and corresponding radiant units.

Examples 2.10 and 2.11 illustrate the relationship between radiant and
luminous units.

EXAMPLE 2.10
An orange 30 watt light bulb is 3 meters from a wall. Calculate the luminous
power. Assume the wavelength is 610 nm and that all the energy is emitted
as orange light.

Solution
1. From the CIE curve, the luminous efficiency at 610 nm is 0.5 (50%). The luminous power is given by:
Luminous power = Radiant power × 683 × 0.5
= 30 watts × 683 × 0.5
≈ 10,200 lumens
How would the answer change if the bulb were blue? or red?


EXAMPLE 2.11
Calculate the irradiance and illuminance of the orange light bulb of Example 2.10 at the distance of the wall. Assume the bulb emits only orange light (even though this is not a good approximation for an incandescent bulb).

Solution
At a distance of 3 m from the bulb it may be considered a point source, emitting light over a sphere of radius 3 m. The irradiance 3 m from the source is found from the definition of irradiance and the formula for the surface area of a sphere:

E = P/(4πr²) = 30 W/(4π(3 m)²) = 0.27 W/m²

The illuminance is calculated from the irradiance using the value of 50% from the CIE curve at 610 nm:

Ev = 0.27 W/m² × 683 × 0.5 = 92 lm/m²

The illuminance is 92 lumen/m², or 92 lux.

It is important to notice that the illuminance unit "lux" is found by expressing the illuminated area in m². Irradiance is often expressed in watts per square centimeter. When converting from radiant to luminous units, be sure area is expressed in square meters.
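
The conversion of Equation 2.12 is easily captured in code. In the Python sketch below (our own illustration), the efficiency values are approximate readings from the CIE curve of Figure 2.15:

    # Luminous efficiencies read (approximately) from Figure 2.15
    EFFICIENCY = {555e-9: 1.00, 610e-9: 0.50, 650e-9: 0.11}

    def luminous_from_radiant(radiant_value, wavelength_m):
        # Equation 2.12: luminous unit = efficiency * 683 * radiant unit.
        # The same factor converts W -> lumens or W/m^2 -> lux.
        return EFFICIENCY[wavelength_m] * 683 * radiant_value

    print(luminous_from_radiant(30, 610e-9))    # Example 2.10: about 10,200 lm
    print(luminous_from_radiant(0.27, 610e-9))  # Example 2.11: about 92 lux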


REFERENCES
1. For a more complete description of the history of optics:
Hecht, E. Optics, 4th Edition. San Francisco: Addison Wesley, 2002.
2. Interesting anecdotes from the lives of scientists important to optics:
Lovell, D. J. Optical Anecdotes. Bellingham, WA: SPIE Press, 1981.
3. For a more detailed history, with an emphasis on vision and vision correction:
Pedrotti, L. and Pedrotti, F., S.J. Optics and Vision. Upper Saddle River, NJ: Prentice-Hall, 1998.
4. Additional detail on quantum physics (at a non-calculus level) may be found in most college-level physics texts, such as:
Wilson, J. D. and Buffa, A. J. College Physics, 5th Edition. Upper Saddle River, NJ: Prentice-Hall, 2003.
5. Five scientists' answers to the question "What is a Photon?" may be found in a special publication of the Optical Society of America:
Roychoudhuri, C. and Roy, R., editors. OPN Trends – The Nature of Light: What is a Photon? Optical Society of America, October 2003.

WEB SITES
1. For information on the lives and work of physicists who developed quantum
mechanics
http://nobelprize.org/physics/
2. Atmospheric optics, beautiful photos and explanations of phenomena
www.atoptics.co.uk/
3. The history of the meter (and other definitive information on units) is on the
web site of the National Institute of Standards and Technology (NIST)
www.nist.gov/
4. For an exhaustive list of units, including those mentioned in this chapter, see
Russ Rowlett, "How Many? A Dictionary of Units of Measure"
www.unc.edu/~rowlett/units/index.html
5. Some history and an explanation of units used to measure light
www.electro-optical.com/whitepapers/candela.htm


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. What is meant by the term "wavelength?" Draw a wave to illustrate.
2. What is meant by the "period" of a wave? Give an example.
3. What is the speed of light in a vacuum?
4. List the electromagnetic spectrum in order of INCREASING wavelength (shortest to
longest).

5. List the colors of the visible spectrum in order of decreasing wavelength (longest to
shortest).

6. What colors correspond to the following wavelengths:


a) 450 nm

b) 550 nm

c) 650 nm

7. What is an instrument that allows scientists to study light spectra? How does it
work?

8. Which of the three regions of the "optical spectrum" (IR, VISIBLE, UV) has the
highest energy photons? The lowest energy photons?

9. What is the difference in appearance between an emission spectrum and an


absorption spectrum?

10. Why do "glow in the dark" toys continue to glow after the source of energy (light) is
removed?

11. All objects not at the absolute zero of temperature radiate blackbody radiation. Since your normal temperature is 98.6°F = 37°C = 310 kelvins, you radiate electromagnetic energy! Explain how firefighters make use of this radiation to find a person in a smoky room.

12. Look at Figure 2.12 which shows the intensity of radiation from three objects at
different (and very high) temperature. Explain why some stars appear red and others
appear blue.

13. The luminous efficiency curve (Figure 2.15) is for daylight vision. In dim light, the
peak wavelength of luminous efficiency is around 510 nm. Why is peak efficiency
different in bright light and in dim light? (These are called scotopic and photopic
vision.)

14. Find an incandescent light bulb and a fluorescent light bulb and note the electric
power (Watts) and the luminous power (lumens). Which is more efficient?

15. If you can find an energy saving fluorescent bulb, compare it to the other two.


LEVEL I PROBLEMS
16. You are sitting on a beach watching the waves roll in. In 12 seconds, 6 waves roll by.
What is the period of the waves (T)? What is their frequency (f)?

17. An AM radio station broadcasts at a frequency of 660 kHz. (note that k means 1000).
Since radio waves move at the same speed as light waves, what is the wavelength of
this radio wave?

18. An FM station broadcasts at 101 MHz (M means 1 000 000). Find the wavelength of
these waves. Compare your answer to the problem above. Which type of broadcast,
AM or FM, has longer wavelength waves?

19. Find the energy of the following photons


a. 1 nm x-ray

b. 3 m radio wave

c. Why is an x-ray more harmful to humans than a radio wave?

20. Calculate the frequencies for the following wavelengths of electromagnetic radiation:
a. 633 nm

b. 1.06 µm

c. 10.6 µm

21. Calculate the wavelengths for the following frequencies of light:


a. 1.34 x 1014 Hz

b. 260.2 THz

22. Calculate the periods of the waves in Problem #21.


23. Find the irradiance of a 3 mW laser pointer that makes a roughly rectangular spot
with dimensions 3 mm wide by 6 mm long.

24. Find the illuminance of the laser pointer in problem #23 if the laser wavelength is
670 nm. What if the wavelength is 532 nm?

25. Find the solid angle subtended by the state of Alaska. Assume that the state is
roughly circular and has an area of 570 000 square miles. The diameter of the Earth
is approximately 8000 miles.

LEVEL II PROBLEMS
26. Glowing hydrogen gas has four visible spectral "lines." These are produced when the
electron in the hydrogen atom makes transitions to the energy level labeled E2 in Figure 2.9. Find the wavelengths associated with the transitions that produce the four
visible wavelengths.


27. Since the four lines produced by the transitions listed in Problem #26 are all in the visible region of the spectrum, in what region of the spectrum would the photons produced by the following transitions be found? (Hint: look at the length of the arrows, which is related to the photon energy. You don't need to do any calculation!)

a. E4 to E1

b. E4 to E3

28. An LED source emits 350 mW of yellow light at a wavelength of 564 nm. How
many photons per second are emerging from the source?

29. The blackbody radiation curves of Figure 2.12 show that the wavelength of maximum radiation intensity decreases as the absolute temperature increases. That is, as temperature increases, the wavelength of peak intensity (λmax) shifts toward the blue end of the spectrum. λmax can be found approximately from the equation

λmax = (2900 µm•K)/T

where T is the temperature in kelvins.

a. The human body, with a temperature of approximately 37°C, emits electromagnetic radiation. Estimate the value of λmax. How can you locate a human in complete darkness?

b. The "color temperature" of the sun is about 5800 K. What is λmax for sunlight?

Walk through the "lighting" aisle of a large
supermarket or home improvement store and you will
see a bewildering array of lamps and lighting products.
Beside the standard incandescent bulbs and cylindrical
fluorescent lamps are energy saving bulbs in spiral,
globe and "u" shapes, tiny halogen lamps, "black"
lights, and flashlights powered by light emitting diodes
(LEDs). Package labeling ranges from descriptive
terms such as "soft white" and "party color" to
scientific quantities like 40 watts, 750 lumens or 2850
kelvins. Yet each lamp has the same purpose: the
production of light energy from electricity. In this
chapter, you will learn about several common
commercial light sources and some of the ways in
which light is detected and measured.

Dome Sunspot (Albert Yee)

Chapter 3

SOURCES AND DETECTORS OF LIGHT
3.1 SOURCES OF LIGHT
Sources of light can be classified as natural, such as sunlight and
firelight, or artificial. There is a bewildering variety of artificial light sources, and
we cannot hope to cover all of them in this chapter. We will instead survey three
broad classes of lighting: incandescent, gas discharge and semiconductor. The
laser is another important light source, so important to photonics technology that
laser physics will have its own chapter later in the text.

Natural Light Sources

Sunlight and skylight are the most important sources of light on earth, and until very recently in Earth history they were the only source of illumination. We are not including starlight because it is generally too weak to affect physical and biological systems. However, at least one species of moth has been shown to be able to "see" by starlight alone! The total amount of solar irradiance reaching

the upper reaches of the atmosphere, called the solar constant, averages around
1370 W/m2. The actual irradiance varies somewhat from day to day, and has been
shown to be a function of sunspot activity. In fact, over the eleven-year sunspot
cycle, the solar constant may change by about 0.1%. Scientists are debating if or
how such a change may affect Earth's climate.
Sunlight and skylight differ by both irradiance and spectral content.
Skylight is mostly blue, and you may know that the color of the sky is due to the
fact that short wavelengths are scattered more than longer wavelengths.
(Rayleigh scattering by the atmosphere will be further discussed in Chapter 7.)
Why then is the sky not violet? The answer is that there is less violet in the solar
spectrum than blue, and the human eye is not as sensitive to violet as it is to blue.
In fact, skylight has a broad spectrum of visible wavelengths, but it is primarily
blue.
The spectrum of sunlight is approximately that of a blackbody at a
temperature of 5800 K. Unlike the blackbody spectrum described in Chapter 2,
the solar spectrum is not a smooth function of wavelength, even above the Earth's
atmosphere. Figure 3.1 illustrates the solar spectrum measured at the top and
bottom of the atmosphere and 10 m below the surface of the ocean. Above the
Earth's atmosphere, absorption caused by the solar atmosphere can be seen in
both visible and UV wavelengths. The effect of Earth's atmospheric absorption is
clearly evident in the infrared, where water vapor and oxygen molecules are
responsible for deep troughs in the spectrum. Ozone (O3) absorbs most of the
ultraviolet that reaches the top of the Earth's atmosphere so that little reaches
ground level. Water absorbs longer wavelengths, allowing only blue light to
penetrate the top 10 m of ocean surface.
The properties of sunlight are becoming increasingly important to the growing field of solar energy, which uses photovoltaic cells to generate electricity. In addition, although sunlight is not usually thought of as a source of laboratory illumination, much effort has been expended to create artificial light sources that mimic the spectrum of sunlight for both proper color rendition and presumed health benefits.

Figure 3.1 - Solar irradiance in watts/cm²-nm as a function of wavelength above the Earth's atmosphere ("Top of atmosphere"), at the surface of the Earth ("At Earth's surface"), and 10 meters below the ocean surface. (Courtesy of the NASA SORCE Mission [Solar Radiation and Climate Experiment], http://lasp.colorado.edu/sorce)


Incandescent Light Sources

Incandescent light sources are essentially electrical resistors heated by electrical current to temperatures in excess of 2000°C. The atoms and molecules of the filament produce radiation as a result of this thermal excitation. The spectral output is similar in distribution to a blackbody, although less radiation is usually produced than would be expected for a theoretical blackbody. Such an emitter of radiation is called a graybody.
If you've ever taken a photograph of a scene illuminated by a common
incandescent bulb, you probably noticed that the photo has a yellow tone.
Regular incandescent bulbs operate at a temperature that produces more light in
the orange-yellow part of the spectrum than in the blue and violet. This is an
important consideration when designing lighting for homes, offices and public
spaces such as museums.
For applications where color rendering is important, lighting designers
need a comparative measure of the color effects of lighting. Color temperature is
used to compare the overall color of light sources. Measured in kelvins, the color
temperature indicates the temperature of a blackbody emitter that would have the
same color as the source. On this scale, low numbers indicate yellow/red light
and higher numbers a bluer source. Incandescent bulbs have color temperatures
ranging from 2500 K to 3000 K depending on the wattage, while sunlight has a color temperature ranging from 5000 K to 6000 K. The spectral output of an
incandescent light may be modified somewhat by changing the characteristics of
the glass bulb. For example, neodymium added to the glass will eliminate some
of the red and yellow wavelengths resulting in a bluer light. Filtering out some
wavelengths decreases the total light output, however.
It is possible for many different combinations of wavelengths to produce
the same color temperature or overall light color. A second measure of light
source color is the color rendering index (CRI) which describes how different
color surfaces look when illuminated by the light. The CRI is used to compare
the visual effect of light sources that have the same color temperature. On the
CRI scale of 1-100, daylight has a rating of 100, as does any source that closely
resembles a blackbody source. Color temperature and CRI are important to
photographers and designers who must be concerned with accurate color
perception.
Most of the radiation produced by incandescent bulbs is in the infrared
region of the electromagnetic spectrum. The bulbs become quite hot, which
means that if the intent is to illuminate a room, much of the input energy is
wasted. The wall plug efficiency of standard household incandescent bulbs is


about 10-15 lumens per watt of electrical power, making them the least energy
efficient of common types of household lighting.
A standard incandescent bulb has a tungsten filament that is either a ribbon, a coil of fine wire, or a coil created of coiled fine wire. The filament is enclosed in a bulb of glass or quartz glass (fused silica), which is used for bulbs that operate at very high temperatures. The first light bulbs operated with the filament in a vacuum, but it was discovered that adding an inert gas such as nitrogen, argon or a mixture of the two would prolong the life of the bulb.

Figure 3.2 - Incandescent light bulb. (Courtesy bulbs.com, www.bulbs.com)

At high operating temperatures, tungsten evaporates from the filament and gradually forms a gray film on the inside of the bulb. You can see this gray spot on a bulb that has been in operation for some time. The inert gas atoms collide with the evaporating tungsten, causing some of the tungsten atoms to return to the filament. Eventually, however, a thin spot will develop in the filament, the filament will break and the lamp will fail.
Halogen bulbs, sometimes called quartz lamps, are a variation on the
standard incandescent bulb. A small amount of a halogen gas, such as bromine, is
included in the gas mixture. The purpose of the halogen gas is to extend the life
of the filament by re-depositing the evaporated tungsten through a process called
the halogen cycle. The halogen combines with oxygen and the tungsten at the
surface of the bulb, producing a tungsten oxyhalide. The compound moves
toward the hot filament where it breaks apart, and the tungsten is deposited,
freeing the halogen to repeat the cycle.
In order for the halogen cycle to work, the bulb surface must be at a very
high temperature, too high for ordinary glass. Halogen bulbs are made of special
heat-resistant glass or fused quartz, and they are smaller than regular
incandescent bulbs. Quartz bulbs require special care, since finger oils can react
with the quartz and weaken the bulb, causing it to shatter.
Halogen bulbs are more efficient than regular incandescent bulbs, with
typical output up to 25 lumens/watt, and they may last up to two to three times as
long. There have been recent concerns about the potential for fire due to
flammable materials coming in contact with the very hot halogen bulbs,


especially in torchiere style lamps. It is important to use halogen bulbs only in fixtures designed for their high temperature operation.

Gas Discharge Lamps

Gas discharge lamps are used in all areas of lighting, including home, office, industrial, medical and automotive. Lamps operated at low pressures produce spectral lines that can be filtered to create monochromatic sources while high-pressure gas produces a broad output spectrum. In general, discharge lamps are more efficient than incandescent sources and some have lifetimes exceeding 15,000 hours. We will describe a representative sample of important gas discharge lamps in this chapter.
Fluorescent lamps
We begin with the ubiquitous fluorescent lamp, which uses a two-step
process to produce visible light. A schematic of a straight-line fluorescent tube is
shown in Figure 3.3. A glass or quartz tube, coated on the inside surface with a phosphor film, contains two electrodes. The tube contains a low-pressure inert gas such as argon and a small amount of mercury. When a high voltage is
applied to the electrodes, current flows through the tube, vaporizing some of the
mercury. Collisions with electrons energize the mercury atoms, which then
produce ultraviolet light through de-excitation. The ultraviolet light energizes the
phosphor, which produces visible light.
The exact spectral content of the lamp's output can be tailored to meet
specific requirements through the choice of appropriate phosphors. It is possible
to purchase bright white, warm white or "full spectrum" fluorescents, as well as
lamps specifically for growing indoor plants or raising fish or reptiles.

Figure 3.3 - Cutaway drawing of a common straight-line fluorescent bulb. An exhaust tube removes air and introduces the gas mix during lamp manufacture. The hot cathode emits electrons that travel the length of the tube; excited mercury atoms emit UV light, and phosphors on the inside bulb surface are excited by the UV and produce visible light.

The ionized vapor in fluorescent lamps does not follow Ohm's law,
which says that current is proportional to applied voltage. In fact, the resistance
of a hot gas of ionized particles, called a plasma, decreases as current increases,


producing more ions. Without some sort of limiting device, the current in the
tube would quickly reach catastrophic levels. Fluorescent lamps require ballasts,
which provide the high starting voltage and then limit the operating current in the
lamp. Early ballasts were magnetic coils that produced an irritating hum and
flickering light output, but recent improvements have resulted in electronic
ballasts that are quieter and quick starting.
Fluorescent lights can produce up to 100 lumens/watt and are therefore
much more energy efficient than incandescent bulbs. Along with rapid start and
"warmer" color, fluorescent bulbs are also available in sizes and bases that allow
them to be substituted into ordinary incandescent light sockets, resulting in
considerable energy savings for lights that are left on for any length of time. So-
called "capsule" bulbs contain a small diameter coiled fluorescent tube
surrounded by a typical "bulb" shaped outer glass lamp.
High Intensity Discharge (HID) lamps
Mercury, metal halide and high-pressure sodium are the most common
types of HID lamps (Figure 3.4). In each of these lamps, vapor atoms are ionized
by current passing between two electrodes, and light is produced by the de-
excitation of vapor ions. Like fluorescent lamps, HID lamps require ballasts to
provide starting voltage and current and to control the operating current. All of
these lamps consist of an inner tube containing a small amount of metal and a gas
mixture, encased in a larger bulb. Because the metal is in its room temperature
state (liquid or solid) when the bulb is not operating, there is a start-up time
required during which the metal is vaporized by the electrical discharge. It may
take several minutes for the lamp to reach its final luminous output and once shut
off, the lamp must cool down before being restarted. In addition, the bulbs are
usually designed to be operated in one position, vertical or horizontal depending
on design.
Mercury lamps contain a small amount of liquid mercury and argon gas
in the inner arc tube, and produce a blue-white output. Coating the bulb with
appropriate phosphors can produce a warmer color. While mercury vapor lamps
produce up to 65 lumens/watt, less than the typical fluorescent, they have
lifetimes that can reach 20,000 hours and more. Metal halide lamps are more
efficient, producing up to 115 lumens/watt. They are constructed similarly to
mercury lamps, but the gas fill includes various metal halides such as sodium
iodide in addition to argon, resulting in better color balance than mercury alone.
The most efficient of the HID bulbs is high-pressure sodium, which can produce
up to 140 lumens per watt. Because sodium attacks both glass and quartz, the
inner tube must be made of a transparent ceramic. The light from high-pressure
sodium lamps is orange-white.


Figure 3.4 - HID lamp diagram. (Courtesy Goodmart Lighting and Electrical Supply, www.goodmart.com/)

While the traditional applications for HID lamps have been large-scale
roadway and industrial lighting, HID lamps are now finding automotive uses.
Before the lamps could be used in automobiles, two problems had to be
overcome: headlights cannot have a warm up time, and, when used as directional
signals, the lamps must be able to cycle on and off rapidly. Sophisticated (and
expensive) electronic controllers coupled with small bulb design have allowed
HID bulbs to be used as extremely bright, white headlights with color
temperatures exceeding 4100 K.
Low Pressure Sodium Lamps
Unusual among the lamps mentioned so far, low-pressure sodium lamps
produce light that is nearly monochromatic (around 590 nm), which would
appear to limit their usefulness for general illumination. They are, however, by
far the light source with the highest wall plug efficiency, producing up to 200
lumens/watt. One reason for the high lumen output is that the wavelength
produced is very near the peak sensitivity of the human eye. (Recall that
luminous power measurement depends on the eye's spectral sensitivity.) This
efficiency comes with a price though; colors are often unrecognizable when
illuminated by low-pressure sodium vapor lamps, as you know if you have ever
searched for your car in a parking lot using such illumination.
Unlike the HID and fluorescent lamps, low-pressure sodium lamps do
not contain mercury, and they are not subject to the stringent disposal
requirements of mercury-containing devices. Since disposal of toxic mercury can


be a significant expense, the more benign nature of sodium lamps is another advantage for their use in high volume applications like street lamps.
Flash Lamps and Arc Lamps
Unlike the lamps of the previous section, flash lamps and arc lamps are
not used for general illumination. Both types are similar in design, consisting of a
sealed tube with electrodes at each end (Figure 3.5). The tube is filled with a gas
and high voltage across the electrodes ionizes gas atoms, resulting in the
production of light. The gas fill depends on the spectral output desired. For
example, xenon is often used in tubes that will be used to excite solid state
(crystal) lasers because the xenon spectral output is matched to the laser medium
absorption spectrum.
Arc lamps are designed for continuous operation and find use in
applications requiring intense broad spectrum light. A variation, the short arc
lamp, has a very small gap between electrodes. The lamp tube is filled with
mercury or xenon under high pressure and provides the high brightness required
for projectors, searchlights and specialized medical and scientific instruments.
Short arc lamps have many safety issues, including very high temperature, high
output in the ultraviolet and danger of explosion to name but three. They are used
only when no other lamp will provide sufficient brightness.
Flash lamps are used when very short, intense bursts of white light are needed. High power flash lamps are used to optically excite lasers and very low power flash lamps are found in disposable film cameras. Electrical energy is stored in a capacitor until the lamp is to be fired. A "trigger" voltage ionizes the gas in the tube, which makes it conductive and allows the capacitor to rapidly discharge, producing the flash.

Figure 3.5 - 15,000 W xenon arc lamp used in IMAX® theater projectors. The tube has been cut away to reveal the electrodes inside. The overall length is about 20 inches. (Photo taken at the Boston, MA Museum of Science, 2005)

Light Emitting Diodes (LEDs)

The first commercially available LEDs appeared in the early 1960s and produced only infrared light. In the thirty years that followed, new materials and new designs saw the introduction of additional colors, appearing in "rainbow order": first red, then orange, yellow, green, blue, violet and, at the end of the twentieth century, near ultraviolet. White light is created either by using a phosphor coated dome and UV LED, or a combination of red, green and blue LEDs.


Lumen output has increased at a dramatic rate as well. Once used mainly
as dimly lit indicators for electronic equipment, LEDs are now being seriously
considered as the successor to the incandescent light for area lighting. White
LEDs are available with higher efficiency ratings than either incandescent or
halogen lamps, and efficiencies are increasing at a rapid rate. Since LEDs operate
on only a few volts and last for tens of thousands of hours, the potential exists for
tremendous energy savings worldwide. LED lighting has already replaced
incandescent lamps in traffic lights, signage, automobiles, flashlights and even
general-purpose illumination.
Introduction to Semiconductor Physics
To understand how light is produced in an LED, you need to know
something about semiconductors. The topic will come up again when you study
quantum light detectors in this chapter and laser diodes in Chapter 9. The term
semiconductor indicates a material that, unlike an electrical conductor, does not
have loosely bound electrons capable of carrying current at room temperature.
However, it is possible to change the conductivity of a semiconductor by doping
the material with small amounts of a different atom.
Consider a silicon crystal, for example, with each silicon atom bound to
its neighbors by its four outermost valence electrons (Figure 3.6). At room
temperature, there are very few electrons able to escape the covalent bonds to
carry current, so the crystal is essentially an insulator. Now suppose that we add
a very small amount of a dopant such as arsenic (As) as shown in Figure 3.7.
Arsenic has five valence electrons, but only four are able to bond with
neighboring silicon atoms. The fifth electron is only loosely attached to the
parent arsenic nucleus and is available to move like the free electrons in a
conductor. We call such material an n-type semiconductor because the moving
charges are negative electrons.
If we dope the silicon crystal with atoms that have only three valence
electrons (such as boron, B), there will be a "hole" where the fourth electron
should be. This vacancy can be filled by a nearby electron, which results in a
new hole appearing in a different location. In this way, holes can move through
the material (although not as easily as free electrons) and are apparent carriers of
positive charge. Material doped in this way is called p-type because the charge
carriers are positive holes. It is important to remember that both n-type and
p-type materials are electrically neutral, that is, there is no net excess electric
charge, just mobile charge carriers.

Figure 3.6 - Intrinsic silicon structure.
Figure 3.7 - n-type dopant (top) and p-type dopant (bottom).
What happens if n-type and p-type materials are put together to form a
pn junction (Figure 3.8)? Near the junction, electrons from the n-type material
cross over to combine with holes in the p-type material producing a depletion
region where there are no free charge carriers. Because electrons move into the
p-type material, there is now a net negative charge on the p side of the junction
and a resulting net positive charge on the side of the n-type material, which has
lost electrons. The region near the junction is depleted of mobile charge carriers
and an electric field forms that prevents further electrons from moving into the p-
type material unless an external potential is applied.

Figure 3.8 - pn junction. Negative ions form on the p side, where electrons have
filled holes; positive ions form on the n side, due to electrons crossing into the
p side.

In order for current to flow, the junction must be forward biased, which
means electrons are injected into the n side of the junction (Figure 3.9). The
excess electrons are swept across the junction and combine with holes, releasing
energy. In silicon, the energy does not normally take the form of photons;
however, many other semiconductor materials have been developed that do
create light when holes and electrons combine. The wavelength produced
depends on the semiconductor materials used.
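The dependence of wavelength on material can be made concrete with the photon relation E = hf = hc/λ: the emitted wavelength corresponds to the energy released when an electron and hole combine, which is set by the semiconductor's bandgap. Below is a minimal Python sketch of this calculation; the bandgap values are rough, assumed numbers for illustration, not data from the text.

h = 6.626e-34   # Planck's constant (J*s)
c = 3.0e8       # speed of light (m/s)
eV = 1.602e-19  # joules per electron-volt

def emission_wavelength_nm(bandgap_eV):
    """Approximate peak emission wavelength for an LED material."""
    return h * c / (bandgap_eV * eV) * 1e9

# Illustrative, approximate room-temperature bandgaps (assumed values)
for material, Eg in [("GaAs (infrared)", 1.42), ("GaP (green)", 2.26)]:
    print(f"{material}: Eg = {Eg} eV -> ~{emission_wavelength_nm(Eg):.0f} nm")

A larger bandgap means a more energetic photon and therefore a shorter wavelength, which is why the progression from infrared toward ultraviolet LEDs described above required new wide-bandgap materials.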

Figure 3.9 - Forward biased LED (left) and symbol (right).
Practical LEDs are far more complicated devices than illustrated in
Figure 3.9. Most LEDs are made of two (or more) types of semiconductor
material. The structure of the device must take into consideration both the
electrical and optical characteristics of the material. For example, the junction is
usually a thin p-type layer sandwiched between n-type "cladding" materials. The
n-type material may be the same as the active region (called a homojunction) or
it may be a different type of semiconductor (heterojunction). The p material
forms an active region with a slightly higher index of refraction to direct light
internally by total internal reflection. (Index of refraction and total internal
reflection will be discussed in Chapter 4.)

3.2 THE DETECTION AND MEASUREMENT OF LIGHT


The detection and measurement of light is fundamental to almost all
photonics applications. Optical detectors are devices used to measure the power
or energy of light, whether in the ultraviolet, visible or infrared portions of the
electromagnetic spectrum. Unlike other measurement instruments such as
oscilloscopes and voltmeters, which measure the amplitude of electromagnetic
signals, optical detectors measure the irradiance of light. Because irradiance is
proportional to amplitude squared, optical detectors are commonly referred to as
“square law” devices. While the number of applications and types of optical
detectors are many and beyond the scope of this textbook, a rudimentary
understanding of some of the basic types of optical detectors and their operating
characteristics is helpful to anyone working in the field of photonics.
The choice of an optical detector for a particular application depends on
several factors including wavelength, sensitivity, power rating and response time.
For example, when using a high-power CO2 laser for cutting or drilling steel,
accurate measurement of laser output power is critical. The optical detector for a
CO2 application must be able to measure light in the infrared portion of the
electromagnetic spectrum, around 10.6 µm in wavelength, and be able to handle
relatively high power levels (>100 watts). If the CO2 laser is pulsed, the optical
detector must be able to respond to the rapid changes in power. On the other
hand, in fiber optic communications applications the optical detector must be
sensitive to near infrared wavelengths (850 nm – 1600 nm), be very fast in order
to track changes at gigahertz rates, and be sensitive to very low light levels (<1
microwatt). When selecting an optical detector for a particular application, you
must have a solid understanding of the nature and specifics of the application in
which the detector will be used.
Most optical detectors fall into one of two categories: thermal
detectors or quantum detectors. Figure 3.10 illustrates the spectral response
characteristics of both thermal detectors and quantum detectors. The vertical axis
of Figure 3.10, responsivity, is defined as the ratio of electrical output current to
optical input power for a photodetector, usually expressed in amps/watt. The


graph indicates that quantum detectors are capable of higher responsivity, but
have limited spectral ranges. When choosing a quantum detector, the type of
detector depends on the wavelength range to be measured.

Figure 3.10 – Relative spectral response for quantum vs. thermal detectors.

Thermal Detectors
Thermal detectors measure optical power by measuring heat energy
generated when light strikes a temperature sensitive device such as a
thermocouple or thermistor. The change in temperature in the device corresponds
to a change in optical power. Thermal detectors are characterized by a relatively
flat spectral response, the ability to handle high power levels, and a relatively
slow response time. They are typically black or opaque so that they will absorb
radiation over a wide spectrum. While generally less sensitive than quantum
detectors, they are also much less wavelength dependent, making them ideal for
broadband optical power measurements. Thermal detectors are commonly used
to measure the output power of high power lasers or to make optical power
measurements over a large range of wavelengths.
One of the most common types of thermal detector is the thermocouple.
In its simplest form, a thermocouple is two wires of dissimilar metal joined
together at both ends. If one junction is heated while the other remains cool,
current will flow in the wire loop due to a potential difference between the two
junctions. In an optical power meter, the "hot" junction is placed in or near an
absorbing material and the "cold" junction is connected to a heat sink. When light
strikes the absorber, it is converted to heat energy, warming the hot junction.
In order to make optical power measurement more efficient and spatially
uniform, many commercially available optical power meters use a device known
as a thermopile, which consists of multiple thermocouples arranged in an array
and attached to either a disc-shaped opaque surface absorber or a semitransparent
volume absorber. These meters, originally developed by the National Institute of
Standards and Technology (formerly the National Bureau of Standards), are
called disc calorimeters (Figure 3.11).
generated depends on the temperature difference between the hot and cold
junctions, not the absolute temperature of the hot junction. For this reason, disc
calorimeters sometimes include heater coils for calibration purposes.

Figure 3.11 – Disc calorimeter.

Other common thermal detectors include pyroelectric detectors,
bolometers and thermistors. Pyroelectric detectors use ferroelectric crystals such
as lithium tantalate to sense changes in temperature resulting from the absorption
of radiation (Figure 3.12). The pyroelectric effect derives its name from the
Greek pyr, fire, plus electricity; thus it refers to electricity generated by heat.
When light energy is incident on the detector, the change in temperature causes a
change in the crystal's electrical polarization (charge distribution), which is
temperature dependent. With electrodes placed across the crystal, this change in
electrical polarization results in a change in voltage across the crystal
corresponding to the level of incident light energy.

Figure 3.12 – Pyroelectric detector.

Bolometers are devices that change resistance as a function of
temperature (Figure 3.13). Bolometers can be made of metal (typically nickel
and platinum), metal film, or, for greater sensitivity, semiconductor materials (in
which case they are called thermistors). Light energy incident on the detector
causes a change in temperature of the device, resulting in a change in resistance.
When placed in one branch of a Wheatstone bridge, as illustrated in Figure 3.13,
this change in resistance causes the bridge to become unbalanced. The optical
power is proportional to the degree to which the bridge is unbalanced.
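As a rough illustration (this circuit and all component values are assumed, not taken from the text), the following Python sketch computes the output voltage of an initially balanced Wheatstone bridge as the bolometer's resistance changes with absorbed light:

V_s = 5.0               # bridge supply voltage (V), assumed
R1 = R2 = R3 = 1000.0   # fixed bridge resistors (ohms), assumed
R_bolo = 1000.0         # bolometer resistance with no light (bridge balanced)

def bridge_output(delta_R):
    """Bridge output voltage for a bolometer resistance change delta_R."""
    Rb = R_bolo + delta_R
    # Difference between the two voltage dividers of the bridge
    return V_s * (Rb / (Rb + R1) - R3 / (R3 + R2))

for dR in [0.0, 1.0, 5.0]:   # ohms of change caused by absorbed light
    print(f"delta_R = {dR:4.1f} ohm -> V_out = {bridge_output(dR)*1000:.2f} mV")

With no light the bridge is balanced and the output is zero; a small resistance change produces a small but easily amplified voltage, which is the unbalance the text describes.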

Figure 3.13 – Bolometer using a Wheatstone bridge circuit.

Quantum Detectors
In contrast to thermal detectors, quantum detectors are devices that rely
on the photoelectric effect to detect light. The photoelectric effect, observed in
the late 1800s by Heinrich Hertz and explained by Einstein in 1905, is the
generation of free electrons, or current, when light strikes a metallic target. These
electrons are collected by external circuitry and converted into a voltage for
measurement. In contrast to thermal detectors, quantum detectors are wavelength
dependent devices, capable of light measurement in ultraviolet, visible and
infrared portions of the spectrum, and are most often used in low power
applications such as fiber optic communications, night vision goggles, CD
players, TV remote controls (IR detectors) and other applications where high
speed and sensitivity to low light levels are critical. In this section, we will
discuss three types of quantum detectors: photoemissive detectors,
photoconductive detectors and photodiodes.
Photoemissive Detectors
Photoemissive detectors are devices characterized by a high current
generated by a relatively small amount of light energy. One photon entering a
photoemissive device can generate hundreds of thousands of electrons. One
example of a photoemissive detector is a photomultiplier tube (PMT), a device
used in spectroscopy, pollution monitoring, high-energy physics, aerospace and
other applications requiring extremely high sensitivity and ultra-fast response. A
photomultiplier tube consists of a vacuum tube containing a photoemissive
cathode and one or more electrodes called dynodes (Figure 3.14).

Figure 3.14 – Cross section of a typical PMT.

Light entering the photomultiplier tube strikes the photocathode, causing
electrons to be emitted from the photocathode surface. Under the influence of a
high voltage (~100 to 3000 volts), electrons are accelerated toward a positively
charged anode. On their path to the anode, the accelerated electrons bounce back
and forth between the dynodes in a zigzag pattern. Upon striking the dynodes, the
accelerated electrons cause additional electrons to be released, effectively
multiplying the number of electrons accelerating through the tube to the anode
through the process of secondary emission. The resulting electrons are then
collected by the anode and serve as the output signal. As a result of secondary-
emission multiplication, photomultiplier tubes can provide gains as high as
1,000,000 for extremely high sensitivity with exceptionally low noise.
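The gain figure of 1,000,000 follows directly from the multiplication process: if each dynode releases δ secondary electrons for every incident electron, n dynodes give a total gain of δⁿ. A minimal Python sketch with assumed values for δ and n:

delta = 4        # secondary electrons per incident electron (assumed)
n_dynodes = 10   # number of dynodes (assumed)

gain = delta ** n_dynodes
print(f"Gain = {delta}^{n_dynodes} = {gain:.2e}")   # about 1e6, as quoted above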
Photoconductive Detectors
In a photoconductive device, the conductivity of the device increases
when exposed to light energy. Photoconductive detectors pick up where
photoemissive devices leave off and are capable of measuring infrared radiation
above 1 µm. In a photoconductive detector, light energy striking a semiconductor
material such as mercury cadmium telluride (HgCdTe), indium antimonide
(InSb) or platinum silicide (PtSi) changes the balance of free electrons and
holes. If the photons have sufficient energy, free electrons will be generated
within the material, effectively increasing the conductivity of the device. If the
device is electrically biased, the change in conductivity will result in a change in
current and hence a change of induced voltage across a load resistor.
In applications requiring extremely accurate measurement of infrared
energy, the photoconductive detector is commonly housed in a cryogenic cooling
device called a Dewar, an insulated thermos-like housing filled with liquid
nitrogen to drop the temperature of the detector to approximately 50-80 K. A
window located on the side of the Dewar allows light to enter. Because infrared
energy is essentially heat, a cooler detector will provide for more accurate
measurements.


Photodiodes
The most common quantum detector is the photodiode. A simple pn
photodiode consists of a pn junction of silicon as illustrated in Figure 3.15. The
material composition of the device determines the wavelength sensitivity. Like
an LED, when the device is reverse biased little or no current will flow. When
light strikes the depletion region with sufficient energy to separate an electron-
hole pair, however, the newly created free electron and hole rapidly move away
from the depletion region towards the respective positive and negative sides of
the bias voltage resulting in a small electric current. The process is similar to the
way in which an LED emits photons, only in reverse.

Figure 3.15 – pn junction photodiode and symbol (right).

The movement of electron-hole pairs away from the depletion region
has the effect of increasing the open circuit voltage across the device. The
more light energy incident on the detector, the greater the open circuit voltage.
This is known as the photovoltaic effect. Because the current generated is
typically very small, photodiodes usually require some amplification. The small
current generated as a result of the photons striking the detector flows into an
external circuit and is amplified for measurement.
While the pn photodiode is simple in design, it is not very practical
because the small volume of the depletion region available to absorb light limits
its sensitivity and speed. To overcome this problem, a piece of intrinsic (that is,
undoped) semiconductor material is placed between the p-type and n-type
materials to increase to depletion region volume area available to absorb
incoming photons. This device is known as a PIN photodiode and is illustrated in
Figure 3.16.
PIN photodiodes made from silicon are used for light detection in the
visible portion of the spectrum. Devices fabricated using indium gallium arsenide
(InGaAs) have their highest responsivity in the near-infrared portion of the
spectrum, with wavelengths between 1000 and 1700 nm, and are used primarily in
fiber optic communication systems where high speed operation in the GHz
frequency range is essential. Devices made from germanium (Ge) are also
available and operate with wavelengths between 800 and 1500 nm. Germanium
photodetectors are generally slower than PIN photodiodes, capable of operating
at frequencies of up to 100 kHz.

Figure 3.16 – PIN photodiode.

In a PIN photodiode, the maximum possible quantum efficiency is one,
which means that for each photon striking the detector, a maximum of one
electron is generated. In fact, the quantum efficiency is always less than one.
Therefore, if the light signal is very weak (it contains few photons), the resulting
PIN photodiode current will also be very small and external amplification will be
necessary.
The avalanche photodiode (APD) is also a quantum detector, but unlike
a PIN photodiode, it is capable of internal gain. In an APD, a single photon
incident on the device can yield a significant number of electrons, resulting in
much higher current levels than possible with a PIN photodiode. APD gain is
accomplished by biasing the device just below its breakdown voltage, resulting
in a large internal electric field across the depletion region. Electrons within this
large electric field (primary charge carriers) are accelerated by the field and
collide with neutral atoms in the region, forcing them from the valence band to
the conduction band to form secondary charge carriers. Each of the secondary
carriers can then repeat this process, resulting in additional charge carriers. This
process leads to a multiplication of charge carriers, resulting in a device with a
very fast response and high-level output.
The gain of the detector increases as the bias voltage increases. In fact, if
an APD were biased above the breakdown voltage, current would be generated
even in the absence of an optical signal! As a result, the current generated in an
APD can be several hundred times greater than that of a PIN photodiode. This
high gain, however, does create a source of noise that may limit the sensitivity of
the device.


Photodiode Parameters
When working with quantum optical detectors, it is important to
understand the performance limitations of the device. While there are many types
of optical detectors and their respective applications are diverse, certain
parameters are normally specified for each detector type and can be of great
value in determining how well the detector is suited for a particular application.
Some of the more common detector parameters that we will look at include
quantum efficiency, responsivity, noise and signal-to-noise ratio. For example,
when measuring the output power of a 50-milliwatt HeNe laser, small variations
in measured power due to thermal noise can be neglected. On the other hand,
when measuring extremely low light levels, say 1 microwatt, parameters such as
signal-to-noise ratio and dark current noise become critical. When selecting a
detector for a particular application, a good understanding of these parameters
can help you make the choice.
Quantum Efficiency, as previously noted, is the ratio of the number of
electrons generated by the detector to the number of photons incident on the
detector. Values for quantum efficiency can range from 0% if no electrons are
generated to 100% if every photon incident on the detector results in the
generation of an electron. A typical value for quantum efficiency is around 70%
for a photodiode, meaning that seven out of every 10 photons of incident light are
converted to electrons. Quantum efficiency is calculated from Equation 3.1

(3.1) Quantum efficiency = (Number of electrons generated / Number of incident photons) x 100%

Responsivity is the ratio of the electrical current generated by the detector
to the optical input power. As you might expect, it depends directly on the
quantum efficiency. Responsivity is important in determining the amount of
current that will be generated for a given amount of optical input power. Once the
amount (or range) of current is known, the proper amount of amplification can
then be determined. Responsivity is calculated from Equation 3.2

(3.2) Responsivity = Electrical current generated (A) / Optical input power (W)
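Responsivity and quantum efficiency are tied together by the photon energy: each photon carries E = hc/λ, so a detector with quantum efficiency η has a responsivity R = ηeλ/(hc), where e is the electron charge. This standard relation is not derived in the text; the Python sketch below simply applies it, with 850 nm chosen as an assumed example wavelength for a silicon photodiode.

h = 6.626e-34   # Planck's constant (J*s)
c = 3.0e8       # speed of light (m/s)
e = 1.602e-19   # electron charge (C)

def responsivity(eta, wavelength_m):
    """Responsivity in A/W from quantum efficiency eta (0 to 1)."""
    return eta * e * wavelength_m / (h * c)

# 70% quantum efficiency at 850 nm -> roughly 0.48 A/W
print(f"R = {responsivity(0.70, 850e-9):.2f} A/W")

Note that for a fixed quantum efficiency the responsivity rises with wavelength, because each watt of longer-wavelength light delivers more photons per second.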

For example, let's say we have a silicon PIN photodiode with a
responsivity of 0.8 A/W. This means that if 100 µW of optical power is
incident upon the detector, the amount of current generated is 100 µW x 0.8
A/W = 80 µA. With an output current of only 80 µA, a PIN photodiode will
require significant amplification to generate a usable signal. In contrast, the
responsivity of an avalanche photodiode is approximately 100 times greater than
that of a PIN photodiode, or 80 A/W. In the case of the avalanche photodiode,
much less signal power is needed to generate the same amount of output current
as a PIN photodiode; thus less amplification is needed.
Responsivity is a wavelength-dependent quantity. While silicon
photodiodes are widely used for light detection in the visible portion of the
spectrum, they are generally not useful for wavelengths greater than 1µm. For
light detection in the infrared portion of the spectrum, detectors made from
InGaAs or Ge are commonly used.
One of the most critical factors affecting a detector’s performance is
noise, which is any unwanted fluctuation in a detector’s output that interferes
with the signal being detected. Noise can seriously degrade a detector’s
performance and it limits the minimum optical power that can be detected
without error. It may originate in the detector or in the amplifier circuit. There are
two primary sources for detector noise: shot noise and thermal noise. Most often,
noise is expressed as a current, called the noise current.
Shot noise is the noise generated by the random variation in the number
and velocity of the electrons generated by a detector. Because current is
composed of particles (electrons) governed by statistical laws, it fluctuates
somewhat like raindrops striking a roof. Shot noise generally includes dark
current noise and quantum noise.
Dark current noise is the amount of current generated by a detector with
no light applied. In a perfect world, when no light is shining on a detector, that
detector should generate no current. This is not the case for a real detector. While
typical dark current values are in the nano-amp range, even this small current can
be a serious problem when trying to detect very small light levels. Furthermore,
dark current is thermal in nature. For example, a commercially available InGaAs
PIN photodiode used for fiber optic communications has a dark current that
increases by a factor of 10 for every 20°C increase in temperature. Such an
increase is much more prominent in infrared detectors based on Ge and InGaAs
than in silicon detectors that operate at shorter wavelengths.
Quantum noise results from the statistical nature of the generation of
electrons. When a photon strikes a detector, there is no guarantee that it will be
absorbed, create an electron-hole pair and produce an electron. Typical shot noise
values are between 20 and 30 nA at 25°C.
Thermal Noise refers to the random fluctuation of current (or electrons)
in both the detector and its amplifying circuitry. Because electrons in a detector
and detector circuitry move constantly in a random fashion, the current flowing
through the detector circuit also fluctuates in a random fashion. Thermal noise is
also known as Johnson noise or Nyquist noise and depends upon the detector

temperature, receiver frequency bandwidth and load resistance. Increasing the
value of load resistance in the amplifying circuit can reduce thermal noise.
Both shot noise and thermal noise are present in a detector. Because they
are intrinsic in nature, they cannot be eliminated. Careful design of receiver
circuitry and/or cooling the detector can minimize their effect. One of the
problems with detector amplifier circuits, however, is that in addition to
amplifying the signal, they also amplify the noise. A common rule of thumb
states that the signal power in a detector should be at least as large as the noise
power, that is, the signal to noise ratio should be greater than one if the signal is
to be detected.
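The standard formulas for these two noise currents, consistent with the dependencies described above but not derived in this chapter, are i_shot = sqrt(2e(I_signal + I_dark)B) for shot noise and i_thermal = sqrt(4kTB/R_load) for thermal (Johnson) noise, where B is the bandwidth. The Python sketch below combines them to estimate a signal-to-noise ratio; every operating value is assumed for illustration.

from math import sqrt

e = 1.602e-19    # electron charge (C)
k = 1.381e-23    # Boltzmann constant (J/K)

I_sig = 0.8e-6   # photocurrent from ~1 uW at 0.8 A/W (assumed)
I_dark = 5e-9    # dark current (assumed)
B = 1e6          # receiver bandwidth (Hz, assumed)
T = 298.0        # detector temperature (K)
R_load = 50e3    # load resistance (ohms, assumed)

i_shot = sqrt(2 * e * (I_sig + I_dark) * B)
i_thermal = sqrt(4 * k * T * B / R_load)
i_noise = sqrt(i_shot**2 + i_thermal**2)  # independent sources add in quadrature

print(f"shot noise    = {i_shot*1e9:.2f} nA")
print(f"thermal noise = {i_thermal*1e9:.2f} nA")
print(f"SNR           = {I_sig / i_noise:.0f}")

Raising R_load or cooling the detector lowers the thermal term, in line with the design remedies mentioned above.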


REFERENCES
1. Kelber A, Balkenius A, Warrant E J. "Scotopic colour vision in nocturnal
hawkmoths," Nature, October 31, 2002: 922-925.
2. Aanegola, S., Petroski, J. and Radkov, E. "Improving the efficiencies of
LEDs leads to energy-saving solid-state lighting," OER Oct 2003: 16-18.
3. Chen, C. Elements of Optoelectronics and Fiber Optics, Irwin Professional
Publishers/McGraw Hill, 1996.
4. Setian, L. Applications in Electro-Optics, Upper Saddle River, NJ: Prentice-
Hall 2002.

WEB SITES
1. Many Internet lighting supply companies have excellent references on types
of lamps, basic operation and characteristics. Two examples are
www.goodmart.com/ and http://www.bulbs.com/
2. SORCE (Solar Radiation and Climate Experiment)
http://lasp.colorado.edu/sorce/
http://earthobservatory.nasa.gov/Library/SORCE/sorce.html
3. The Lighting Research Center at Rensselaer Polytechnic Institute
www.lrc.rpi.edu/
4. OSRAM Sylvania
www.sylvania.com/ConsumerProducts/
5. Philips Lighting
www.nam.lighting.philips.com/us/consumer/
6. The Museum of Historic Discharge Lamps
www.lamptech.co.uk/
7. High Performance Flash and Arc Lamps, Perkin Elmer Optoelectronics
http://optoelectronics.perkinelmer.com/
8. Explanation of terms used in lighting, including efficiency and efficacy
www.ledsmagazine.com/features/2/5/4


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. Explain why color temperature of the sun changes with time of day or atmospheric
conditions.

2. Why do objects look bluer when observed under water?

3. Why does an incandescent bulb often blow out as it is turned on? (Hint: resistance in
a thin wire is higher than in a thicker one.)

4. Why do incandescent bulbs sometimes burn out with a bright flash? Why do bulbs
need fuses?

5. Suppose you have two paint chips of slightly different color. How could you make
them appear to be the same shade? How would this work?

6. A student decides to eat his dinner outside in a parking lot illuminated by low
pressure sodium lights. He has a slice of pepperoni pizza, a small salad and a glass of
milk. What color will each item appear?

7. Why does the color of your car look different under parking lot lights? Why do
shopping malls and other commercial parking lots use lamps that have such poor
color rendition?

LEVEL I PROBLEMS
8. A typical incandescent bulb converts only about 5% of the input electrical energy
into light. How many watts of radiant power are generated by a 3-lamp bathroom
fixture that uses 60 watt bulbs?

9. The package for 60 watt light bulbs states that each bulb has a luminous power of
840 lumens. Use the results of problem #8 to estimate the wall plug efficiency of the
bulb.

10. What current is generated in a PIN photodiode with a responsivity of 0.85 A/W when
used with a HeNe laser with 25 mW output? Would the same current be generated if
a 25 mW laser operating at 442 nm is used instead? Justify your answer.
LEVEL II PROBLEMS

11. A certain detector has a quantum efficiency of 70%. What current is generated when
it is used with 10 mW at 1300 nm? (Hint: what is the definition of current? What is
the charge on an electron?)

12. One of the authors replaced a 6-bulb incandescent bathroom light fixture with a 3-
bulb fluorescent fixture. Find the yearly savings if the lights are left on for 4
hours/day, the incandescent bulbs were 40 watts, the fluorescent bulbs were 15 watts,
and energy costs about 10 cents/kilowatt-hour.

When you look straight into a rectangular fish tank,
the transparent glass and clear water allow you to
see right through to the room beyond the tank. But
if you look toward the corner of the tank, you see
the reflection of what is inside—fish, plants, and
the ceramic pirate ship on the bottom—but you
can't see through the glass. The clear glass side of
the tank has become a mirror! Where do the
reflections come from? How can clear water and
transparent glass sometimes act like a window and
sometimes like a mirror? In this chapter you will
learn the answers to these questions as we explore
geometric optics and the rules governing the
behavior of light rays.

Shadow Railing (Albert Yee)

Chapter 4

GEOMETRIC OPTICS
In Chapter 2, we described light as a phenomenon with both wave-like
and particle-like behaviors, and we said that the model we use depends on the
circumstances. The photon model is usually used to describe the interaction of
light and matter, while the wave model is useful when light is propagating
through space, passing through small apertures and around obstacles. However,
in some cases the wave nature of light may be ignored, and we can use a
simplified model that uses geometry—lines and angles—to predict the path of
light.

4.1 THE MOTION OF RAYS


Geometric optics is the simplest model of light: we consider only the
direction of propagation of light and the phenomena that cause the propagation
direction to change. To represent the propagation direction we use a construction
called a ray, an arrow that points in the direction light is traveling. For this
reason, geometric optics is also called "ray optics."
How do rays relate to electromagnetic waves? Consider a pebble
dropped into a still, shallow puddle of water. The waves produced on the water's
surface travel outward in all directions, creating a set of expanding concentric
circles. Some of the "rays" for this situation are shown in Figure 4.1, where the
circles represent the crests of the water waves. The distance between any two
circles is the wavelength λ. Rays show the direction that the waves are traveling,
so they are drawn at a right angle to the wavefronts, or crests of the waves.

Figure 4.1 - A stone dropped in a puddle makes circular waves. The crests are
separated by one wavelength.

Figure 4.1 is a representation of a point source of water waves. In the
case of a point source of light, the waves move outward in three dimensions in
space, not just two dimensions as on the surface of a puddle. A star as seen from
Earth or a candle flame viewed from several meters away can be considered to be
a point source of light. In fact, many light sources can be modeled as point
sources if the distance from them to the observer is large enough.
What happens to a wavefront as the distance from the point source
increases? Close to the point source, the radius of curvature is quite small, but it
increases as the wave moves away from the source and the wavefront becomes
less curved. Since rays are drawn perpendicular to wavefronts, "really far from
the source" (mathematically an infinite distance away), the rays indicating the
direction of propagation are parallel. Later in this chapter we will talk about
"incoming parallel rays" to a lens or mirror. From Figure 4.2 you can see that if
the source of light is very distant, the rays at your location can be considered to
be approximately parallel in many common situations.

Figure 4.2 - Curvature of expanding wavefronts: near the source, far from the
source, and infinitely far from the source, where the rays are parallel.

Light rays travel in straight-line paths as long as there is nothing in the


way to deflect them. This is called rectilinear propagation of light, and it
explains a number of common phenomena, such as the formation of shadows.
Suppose light rays moving outward from a point source encounter an obstacle
like the ball shown in Figure 4.3. Light striking the ball will be absorbed or
reflected. Around the "edges" of the ball, light rays continue in straight lines so
that a shadow of the ball appears on the ground.
If the light source is extended rather than a point source, the shadow will
consist of very dark regions (umbra) surrounded by partially illuminated regions
(penumbra). These can also be explained by rectilinear propagation. You can see
this effect with typical fluorescent classroom lighting. Hold your hand a few
centimeters from your desktop and you will notice that the shadow of your hand
has a dark center surrounded by a lighter shadow.

Figure 4.3 - Formation of shadows by a point source (left) and an extended
source (right). The lower figures show the appearance of the shadows, with the
dark umbra and the partially illuminated penumbra.

In Chapter 6 you will learn that shadows formed by monochromatic light


have narrow bands of bright and dark fringes at the edges. This effect is called
diffraction. Sometimes geometric optics is described as the model of light used
when diffraction effects can be ignored.
The pinhole camera is another application of rectilinear propagation.
This is the simplest of cameras, consisting of a closed box with a small pinhole
centered on one end. Typical pinhole diameters are in the 400-500 micron range
for a medium size shoebox, and formulas are available on the Internet for
calculating the size of the pinhole for a given length box. (See the References at
the end of the chapter.) The pinhole forms an image on the opposite side of the
camera box as shown in Figure 4.4.

Figure 4.4 - Pinhole camera geometry. A light-tight box has a pinhole in one end
(the pinhole size is greatly exaggerated); an inverted image forms on the side of
the box opposite the pinhole, where the film is placed.

Because of the small size of the pinhole, the rays of light originating at
the top of the object and those from the bottom of the object do not overlap on
the screen at the back of the box. Film is placed at the back of the camera, at the
location of the image. Long exposures are required because the pinhole allows
very little light to enter. In fact, one of the challenges of making a pinhole camera
is to eliminate any light leaks at the corners or edges of the box, which are apt to
be much larger than the pinhole.
The size of the image formed by a pinhole camera is calculated by
similar triangles. If the distance from the object to the pinhole is do and from the
pinhole to the screen is di, Figure 4.4 shows that the height of the image (hi) is
related to the height of the object (ho) by

(4.1) hi / ho = di / do (pinhole camera magnification)

The ratio of image height to object height, hi/ho, is the transverse magnification
of the image.

EXAMPLE 4.1
A lamp 50 cm high is 5 meters in front of a pinhole camera. The length of
the camera is 10 cm. How large is the image?

Solution:
Using Equation 4.1 for similar triangles,
hi / 50 cm = 10 cm / 500 cm
hi = 50 cm x 10 cm / 500 cm = 1 cm
The image will be 1 cm high on the film.
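Equation 4.1 is easy to check numerically; this short Python sketch reproduces Example 4.1:

def pinhole_image_height(h_object, d_object, d_image):
    """Image height from similar triangles: hi = ho * di / do (same units)."""
    return h_object * d_image / d_object

print(pinhole_image_height(50.0, 500.0, 10.0), "cm")   # -> 1.0 cm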

How can you create a larger pinhole image? As you can see from Figure
4.4, you only need to make a larger camera! In fact, an entire room can be turned
into a "camera" by turning off the lights and closing the shades, creating a small
pinhole in one shade to focus light on the opposite wall. (Another name for a
pinhole camera is camera obscura, Latin for "dark room.") You may have
already seen this effect in a darkened room when light leaks in through a crack
between a shade and the wall. Cars driving by, for example, appear as ghostly
inverted images on the wall opposite the window.
Pinhole cameras are quite easy to construct and are often used as an
introduction to photography. Many photographers favor the oddly blurred photos
where nearby and distant objects are equally focused (Figure 4.5). Information on
making pinhole cameras and galleries of pinhole photographs can be found in the
web sites listed at the end of this chapter.

Figure 4.5 – Photo created with a cylindrical oatmeal-box pinhole camera (top)
and digitally processed positive (center). The building is actually flat! (bottom)
(Pinhole photo by Laura, H. H. Ellis Technical High School, CT.)


4.2 REFLECTION
When light encounters a new medium, it may be reflected, refracted
(transmitted), scattered or absorbed. Remember from the discussion of laser
safety eyewear in Chapter 1 that whether a particular material absorbs or
transmits light depends on the wavelength. In this chapter, we will not consider
light that is absorbed or scattered, but will focus our efforts on the reflected and
transmitted light.
Figure 4.6 illustrates light interacting with the surface of a transparent
material. We call the ray that strikes the surface the incident ray. Figure 4.6 also
shows the normal line drawn to the surface. You may remember from geometry
that "normal" means the line is perpendicular to the surface. The incident angle
is always measured from the normal line.
Part of the incident light energy in Figure 4.6 is reflected back away from
the surface and part is transmitted into the material. The angles of incidence,
reflection and refraction, which are used to describe the ray paths, are all
measured from the normal line. The three rays—incident, reflected and
refracted—are in the same plane, in this case, the plane of the page.

Figure 4.6 - Incident, reflected and refracted rays and angles. The angles of
incidence, reflection and refraction are measured from the normal to the surface.

Law of Reflection
When a ray of light is incident on a surface, the angle of reflection (θR) is
equal to the angle of incidence (θI):

(4.2) θR = θI (law of reflection)

So if a ray of light strikes a surface at 30° and is reflected, the reflected
angle will be 30° and the incident and reflected rays and the normal line will lie
in the same plane. This simple law explains the operation of mirrors, both flat
and curved, as well as transparent optics that may operate as reflectors of light.
We will see some of these devices when we discuss total internal reflection later
in the chapter.


We can only see an object if light leaves the object and enters our eyes.
Light is produced by a glowing object such as a candle flame, but how can we
see objects such as people, a landscape or a book? Most of the objects you see are
reflectors of light rather than sources of light. In Chapter 1, you learned about
specular and diffuse reflection and how they are related to laser safety. Specular
reflection is from a smooth, shiny surface. If a group of parallel rays strikes such
a surface, all of the reflected rays will also be parallel. Figure 4.7 shows specular
reflection from a calm water surface. In this case, a transparent substance (water)
acts like a mirror because there is little light coming from beneath the surface,
allowing the small amount of reflected light to be clearly seen. This is similar to
an ordinary window in your home; a small amount of light is always reflected
from the glass, but it is much more noticeable when it is dark outside.

Figure 4.7 - Along the Chena River in Fairbanks, AK. The still water produces
specular reflection.
Rough surfaces produce diffuse reflection. Parallel incident rays reflect
in all directions, with each ray obeying the law of reflection at the particular
location on the surface where it strikes. Whether reflection from a given surface
is specular or diffuse depends on surface roughness and the wavelength of the
light. This means that a surface that might be a diffuse reflector for visible light
can be a specular reflector for infrared light. Since most common surfaces are
neither luminous nor specularly reflecting, much of the visual information about
your world comes to you via diffuse reflection.

4.3 REFRACTION
What happens to light that is not reflected from the surface of a material?
Ignoring absorption and scattering for the moment, the answer is that it is
transmitted through the material. However, whether it travels a straight path or is
deviated from its initial direction depends on the speed of light in the material
and the angle of incidence. Before we introduce the equation that describes the
bending of light when it encounters a new medium, we need to explore what
happens to the wavelength when the speed of light changes.

Index of Refraction
Previously, we said the speed of light is approximately 3 x 10⁸ m/s. This
is only true in a vacuum. When light enters a medium, even one as tenuous as air,
its speed slows. We usually express the speed of light in a material in terms of
the material's index of refraction rather than state the actual speed in meters per
second. The index of refraction, n, is defined as the ratio of the speed of light in a
vacuum, c, to the speed of light in the material, v

(4.3) n = c / v (index of refraction)


Table 4.1 provides values of n for several common materials. The index of
refraction is wavelength dependent, so such tables must indicate the wavelength
at which the indices of refraction were measured. Often, the wavelength chosen
is 587.56 nm, the so-called Fraunhofer "d" spectral line of Helium. Note that the
index of refraction of air at standard temperature and pressure is 1.0003, but we
will use 1.0 in many applications. The table also shows the speed of light in the
listed materials, calculated by rearranging Equation 4.3 to give v=c/n. That is,
light's speed slows by the factor "n" when the light enters a material.

Material            Index of Refraction (n)    Speed of Light (x 10⁸ m/s)
Vacuum              1                          3.00
Air (at STP)        1.0003                     2.9999
Ice                 1.309                      2.29
Water (20°C)        1.333                      2.25
Salt                1.516                      1.98
Glass               1.46-1.96                  2.05-1.51
Quartz (crystal)    1.544                      1.94
Diamond             2.417                      1.24
Cubic Zirconia      2.173                      1.38

Table 4.1 - Index of refraction for selected materials (measured at around 590 nm).

Change of Wavelength in a Medium


The change in the speed of light as it goes from one medium to another
produces some interesting effects. An analogy often used to explain the effects of
wave speed involves a marching band practicing on a paved parking lot. (See for
example Reference 1 at the end of this chapter.) Keeping straight lines, the
musicians step to the (constant) beat of the percussion section, that is, the
frequency of their steps is constant (Figure 4.8). At the edge of the parking lot is
a muddy field, and although the marchers continue to step at the same rate, their
steps become shorter as they slog through the mud. Their forward speed slows
and the space between the rows shrinks.

Figure 4.8 - The circles represent the heads of the marchers. As each row hits
the mud, the row slows and the space between rows shortens.

73
LIGHT: Introduction to Optics and Photonics

Similarly, when light goes from one medium to another, the speed
changes. Frequency does not change, because photon energy does not change.
(Remember that E = hf for a photon.) Since frequency does not change when
light enters a new material, the change in wave speed requires a change in
wavelength, which is given by λ = c/f. Combining the definition of index of
refraction (Equation 4.3) and c = λf, the wavelength in a material is given by

(4.4) λm = λo / n (wavelength in a material)

where λo is the wavelength in a vacuum and λm is the wavelength in the material.


Like the spaces between the rows of marchers, the wavelength of light shrinks
when it goes from a vacuum into a material where it travels more slowly.
Figure 4.9 illustrates a light wave propagating in air. It enters a block of
glass and then returns to air. When the light enters the glass, the wavelength
shrinks by a factor nglass. When light leaves the glass and returns to the
surrounding air, it returns to the original wavelength. Remember, the
frequency does not change when light goes from one medium to another!

Figure 4.9 - Wavelength change in a material. λ1 is the wavelength in air and λ2
is the wavelength in the glass. The wavelength is shorter inside the glass.

EXAMPLE 4.2
A HeNe laser beam (λ = 633 nm) travels through a glass window with index
of refraction 1.65. Determine the speed of light and the wavelength inside
the glass.

Solution
a.) From Equation 4.3,
v = c/n = (3.00 x 10⁸ m/s)/1.65 = 1.82 x 10⁸ m/s
b.) Using Equation 4.4,
λglass = λo/n = (633 x 10⁻⁹ m)/1.65 = 384 x 10⁻⁹ m = 384 nm
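Both parts of Example 4.2 can be checked with a few lines of Python (a quick sketch, not part of the original text):

c = 3.0e8  # speed of light in vacuum (m/s)

def speed_in_material(n):
    return c / n                  # Equation 4.3 rearranged: v = c/n

def wavelength_in_material(wavelength_vacuum, n):
    return wavelength_vacuum / n  # Equation 4.4

print(f"v = {speed_in_material(1.65):.3g} m/s")                       # ~1.82e8
print(f"lambda = {wavelength_in_material(633e-9, 1.65)*1e9:.0f} nm")  # ~384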


Snell's Law for Refraction


What happens if the marching band in Figure 4.8 is moving toward the
edge of the paved lot at an angle, as shown in Figure 4.10? In optical terms, what
happens if the angle of incidence is not zero? The marchers at one end of the row
will continue to move at a rapid pace while those at the other end will slow down
in the mud. The rows will bend along the edge of the pavement. Similarly, light
waves bend when they strike the interface between two different transparent
media and the speed of light changes.

Figure 4.10 – "Bending" of rows of marchers as they hit the boundary between
hard pavement and soft mud at a non-zero angle of incidence. Notice that the
incident angle on the left of the boundary is greater than the "refracted" angle on
the right. (Angles are measured from the normal line.)

The bending of light when it goes from one medium to another is called
refraction, and it leads to many interesting optical effects. (For an example, see
Figure 4.11.) Refraction also explains the operation of many common optical
elements such as lenses and prisms.

Figure 4.11 - The pencil in the photo appears to bend where it enters the water.
The diagram shows two rays leaving the tip of the pencil and bending at the
surface of the water due to refraction. The eye follows the rays back to the
apparent origin (dashed lines) and sees the tip of the pencil above the actual
position. The difference between the actual depth and apparent depth becomes
less near the water's surface.

The refracted angle, which we will call θ2, depends on the angle of incidence, θ1,
and the indices of refraction, n1 and n2, of the materials on each side of the surface
(Figure 4.12). The change in the direction of a light ray traveling from one
medium to another is described by Snell's law, named for the Dutch scientist who
discovered it experimentally in the early 1600s. Snell's law may be derived
analytically using Huygens' principle, which states that each point on a wave acts
as a source of secondary spherical wavelets, or Fermat's principle, that light
traveling from point A to point B follows the path that takes the least time. These
ideas are explored in the Appendix. We will just state Snell's law here:

(4.6) n1 sin θ1 = n2 sin θ2 (Snell's law)

Snell's law predicts that when light travels from a lower index of
refraction to a higher index of refraction, it is bent towards the normal to the
surface. If light travels from a higher index of refraction to a lower index of
refraction, light is bent away from the normal as shown in Figure 4.12. Many
devices that work by refraction can be understood by simply applying these two
rules.

Figure 4.12 - Refraction at the boundary between two media. On the left, light
goes from low index of refraction n1 to high index n2 (high speed to low speed).
On the right, light goes from high index of refraction n2 back to the low index n1
(low speed to high speed). In this particular case, the angle of incidence on the
right side is equal to the angle of refraction on the left side. Do you know why?
The reflected rays from the two surfaces are not shown to simplify the drawing.

EXAMPLE 4.3
Determine the angle of refraction when light traveling in air enters a block
of glass (n = 1.5) at a 45° angle.

Solution
Solving Snell's law, Equation 4.6, for the angle of refraction (θ2):
θ2 = sin⁻¹[(n1/n2) sin θ1] = sin⁻¹[(1/1.5) sin 45°] = 28.125°
What is the second angle of refraction when light leaves the glass and enters
the air?
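Snell's law is straightforward to code. The Python sketch below (not from the text) solves Equation 4.6 for the refracted angle, reproducing Example 4.3 and answering its closing question; it also reports when no refracted ray exists, which anticipates the next section.

from math import asin, sin, radians, degrees

def refraction_angle(n1, n2, theta1_deg):
    """Refracted angle in degrees, or None if totally internally reflected."""
    s = n1 * sin(radians(theta1_deg)) / n2
    if s > 1.0:
        return None   # no refracted ray: total internal reflection
    return degrees(asin(s))

print(refraction_angle(1.0, 1.5, 45.0))     # ~28.1 deg (air into glass)
print(refraction_angle(1.5, 1.0, 28.125))   # ~45 deg (glass back into air)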

The Critical Angle


In Figure 4.12, the light traveling from a medium with a high index of
refraction to a medium with a lower index of refraction is bent away from the
normal line, back toward the surface. According to Snell's law, as the incident
angle is increased, the angle of refraction increases as well. So at some point, the
incident angle will reach a value that results in an angle of refraction of 90°. That
is, the refracted ray is along the surface. What happens if the incident angle is
increased beyond this point? The answer is that the light is totally internally
reflected. The refracting surface acts like a mirror and the light is completely
reflected back into the medium (Figure 4.13).

Figure 4.13 - The critical angle. The dotted blue ray strikes the surface at a
smaller angle than the critical angle and is transmitted into the air. The solid
black ray strikes at the critical angle and is refracted at 90°. The dashed red ray
is incident at an angle larger than the critical angle and is totally internally
reflected.

The incident angle at which the light is refracted at 90° is called the
critical angle. At incident angles less than the critical angle, light is transmitted
into the second medium, and at larger angles it is reflected by total internal
reflection (TIR). We can find the critical angle by setting the angle of refraction
θ2 in Snell's law equal to 90°.

n1 sin θ1 = n2 sin 90°

The sine of 90° is 1, so we can solve for the incident angle θ1 to get

(4.7) θc = sin⁻¹(n2 / n1) (critical angle)

In Equation 4.7, the subscript "c" replaces the subscript "1" to indicate that the
angle of incidence is the critical angle.
Both reflecting prisms and optical fibers use TIR to redirect light. In the
case of optical fiber, the central core has a higher index of refraction than the
surrounding cladding. Light striking the core-cladding interface at an angle
greater than the critical angle is totally internally reflected and thus trapped in the
fiber core. Light striking at an angle less than the critical angle is refracted into
the cladding and not transmitted along the fiber (Figure 4.14). Note that when
light undergoes TIR, the law of reflection applies and the reflected angle is equal
to the incident angle.


Figure 4.14 - TIR in an optical fiber. The ray represented by a dashed line
strikes the core-cladding boundary at an angle smaller than the critical angle
and is refracted into the cladding. The ray drawn as a solid line strikes at an
angle larger than the critical angle and undergoes TIR. This ray is guided by
the fiber.

EXAMPLE 4.4
In the fiber shown in Figure 4.14, the core index of refraction is 1.48 and
the cladding index is 1.45. Find the range of angles over which total
internal reflection occurs.

Solution
The critical angle is given by Equation 4.7:
θc = sin⁻¹(n2/n1) = sin⁻¹(1.45/1.48) = 78.4°
Total internal reflection takes place for angles of incidence from 78.4° to
90°, that is, over a range of 11.6°.
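Equation 4.7 is easy to verify numerically; a quick Python sketch reproducing Example 4.4:

from math import asin, degrees

def critical_angle(n_core, n_cladding):
    """Critical angle in degrees; requires n_core > n_cladding."""
    return degrees(asin(n_cladding / n_core))

theta_c = critical_angle(1.48, 1.45)
print(f"critical angle = {theta_c:.1f} deg; TIR range = {90 - theta_c:.1f} deg")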

TIR explains the reflection described in the fish tank in the opening
paragraph of this chapter and pictured in Figure 4.15. You can see right through
the back of the tank in the photo; the hanging wire on the left is between the back
of the tank and the wall. When you look toward the end of the tank, however,
you observe light that strikes the surface at an angle larger than the critical angle.
This light is totally internally reflected and provides a mirror image of the objects
in the tank. You cannot see the wall or window behind this part of the tank.

Figure 4.15 - TIR in a fish tank. There are really only four fish in the tank! The
fish at the right, as well as the plant, heater, ceramic sign and wires, are mirror
images formed by TIR. The sketch illustrates the geometry of TIR in the tank.
Refraction at the front of the tank is not shown. (That is, the observer is shown
inside the tank, a fish's eye view.)

Have you ever held a frosty glass on a hot summer day and seen your
fingertips clearly visible through the condensation (see Figure 4.16)? This is an
example of frustrated total internal reflection. To understand how this happens,
consider a ray undergoing TIR in a block of glass such as the one shown in
Figure 4.13. If a second block of glass is brought close to the right hand surface
and all the air is squeezed out, the light will no longer be reflected but will
continue into the second piece of glass. In other words, the two pieces of glass
"fuse" into one and the light ray does not experience a change of index of
refraction. Something similar happens when wet fingers are pressed against the
glass of liquid. The light wave in the glass "couples" into the water film, and is
no longer reflected. The fingers of your hand become visible. This principle can
be used to couple light into and out of an optical fiber or another optical device
that uses TIR to guide light in small channels.

Figure 4.16 - The same view as Figure 4.15, with a wet hand pressed against
the end of the fish tank. Notice that total internal reflection is "frustrated," that
is, does not occur, where the hand is in contact with the glass.

Dispersion
We noted earlier that the index of refraction of a material depends on the
wavelength of light, that is, n is a function of λ. We call this property dispersion
or sometimes, chromatic dispersion. Dispersion is responsible for the colors of
the rainbow and is one cause of pulse spreading in fiber optic systems, limiting
data transmission speed. To understand these effects, it is important to remember
that index of refraction is a measure of the speed of light in a material. To say
that different wavelengths have a different index of refraction is the same as
saying that different wavelengths travel at different speeds.

Colors from white light
White light contains visible wavelengths from roughly 400 nm to 700
nm. Since each wavelength has its own index of refraction in a material, each
color will have a different angle of refraction for the same angle of incidence.
Shaping the glass, for example creating a triangular prism, will increase the
angular separation of colors.

Formation of a rainbow
In order for a rainbow to form, certain geometrical requirements must be
met. As Figure 4.17 illustrates, the sun must be behind you when you are facing
the water droplet-laden air. When a ray of light from the sun enters a water
droplet, it is refracted at the first surface, totally internally reflected at the second
surface, then refracted a second time upon exiting the droplet. Light from the
blue end of the spectrum is refracted more than longer wavelength light, thus the
red light entering the observer's eye comes from a point higher in the sky. Red
light appears at the top of the bow, and blue/violet light is at the bottom. Under
some conditions, a double rainbow can be seen. This is caused by light reflected
twice before exiting the droplet (See problem #31). Rainbows seen from the air
may also form complete circles.

Figure 4.17 - Formation of a rainbow by refraction and TIR in water droplets.

Chromatic dispersion in optical fiber


Nearly all data in optical fiber is in the form of on-off pulses, that is, it is
digital data. Even if the fiber's light source is a laser, which is frequently termed
"monochromatic" and considered to produce a single wavelength, a small range
of wavelengths is always present. (You will learn why in Chapter 9.) Suppose a
single pulse is sent into a long fiber by quickly turning the laser on and off. If
each of the wavelengths in the light pulse travels at a slightly different speed, the
wavelengths will "fall out of step." Imagine a race where 10 runners leave the
starting line at the same time. All of the runners are close together at the start, but
as the race progresses, some runners fall behind. By the end of the race, the
“pulse” of runners is stretched out along the road—the pulse has spread.
In an optical fiber, a pulse of light will suffer the same effect. One end of
the spectrum will arrive at the destination before the other end, and the longer the
light travels along the fiber, the more the colors get out of step and the more the
pulse spreads. After a long enough distance, adjacent pulses will spread so much
they run into each other (Figure 4.18). For this reason, long distance fiber optic
links use lasers with as few wavelengths as possible, referred to as a narrow
spectral width. Very long lengths of fiber require some sort of dispersion
compensation to reform the pulse by bringing the wavelengths back in step.
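The amount of spreading can be estimated with the usual engineering rule of thumb: pulse spread equals the fiber's dispersion parameter times length times spectral width. The dispersion parameter is not defined in this chapter, so the value in the sketch below, 17 ps/(nm·km), is an assumed figure, typical of standard single-mode fiber near 1550 nm.

```python
# Rough estimate of chromatic pulse spreading in a fiber link.
# Assumes the standard form: spread = D * L * delta_lambda, with
# D = 17 ps/(nm km), an assumed value typical of single-mode fiber
# near 1550 nm (not a number from this chapter).

D = 17.0                   # dispersion parameter, ps/(nm km)
length_km = 80.0           # fiber length, km
spectral_width_nm = 0.1    # narrow spectral width laser, nm

spread_ps = D * length_km * spectral_width_nm
print(f"Pulse spreading: {spread_ps:.0f} ps over {length_km:.0f} km")
# 136 ps here; once the spread approaches the bit period, adjacent
# pulses begin to run together, limiting the data rate.
```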

Figure 4.18 - Schematic of chromatic dispersion in an optical fiber. The pulse spreads as different (slower and faster) wavelengths arrive at the right end at different times. This plus attenuation in the fiber causes the exiting pulse to have a lower amplitude (pulse height) than the input pulse.


REFERENCES
Introductory explanations of reflection and refraction are available (with varying levels of mathematical exposition) in many general physics books. For example:
1. Giancoli, D. Physics: Principles with Applications, Ed. 5. Upper Saddle River, NJ: Prentice Hall, 1997. (Algebra/trig based.)
2. Serway, R. and Jewett, J. Physics for Scientists and Engineers, Ed. 6. Brooks Cole Publishers, 2003. (Calculus based.)
Optics texts provide a more complete mathematical basis for reflection and refraction:
3. Meyer-Arendt, J. Introduction to Classical and Modern Optics, Ed. 4. Upper Saddle River, NJ: Prentice Hall, 1994.
4. Pedrotti, L. and Pedrotti, F. Optics and Vision, Ed. 2. Upper Saddle River, NJ: Prentice Hall, 1998.
Tables of optical constants (such as index of refraction) are found in:
5. Lide, D., editor. CRC Handbook of Chemistry and Physics, Ed. 86. CRC Press, 2004. (Many earlier editions are available with much of the same information.)
A beginner's guide to creating and using pinhole cameras:
6. Shull, J. The Beginners' Guide to Pinhole Photography. Amherst Media, 1999.

WEB SITES
1. For more information on pinhole photography and resources for making your
own pinhole camera
www.pinholevisions.org/
2. The home page of Worldwide Pinhole Photography Day, including a photo
gallery
www.pinholeday.com/
3. Refractive indices, plus information on wave behavior
http://hypertextbook.com/physics/waves/refraction/
4. A large table of refractive indices is found on the web at
www.robinwood.com/ (in the 3D Tutorials link)


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. Use a diagram to explain how a light ray is related to a light wave.

2. Explain how rectilinear propagation leads to the formation of a shadow.

3. Explain the difference between specular reflection and diffuse reflection. Which
accounts for the image of a mountain formed on the still surface of a lake? What type
of reflection allows you to see the moon? How do you know the moon is not a
specular reflector?

4. Why does the hole in a pinhole camera need to be very small? What happens to the
image if the hole is made large? Use a diagram to explain. What is a disadvantage of
a very small hole in a pinhole camera? (A second disadvantage will be explained in
Chapter 6.) How should the pinhole size change if you use a smaller box? Why?

5. A person tries on a new outfit in a clothing store. To examine the fit, he stands in
front of two plane mirrors that intersect at a 90 degree angle. Why does he see
multiple images of himself? How does the number of images depend on the angle
between the mirrors? Use a ray diagram.

6. The speed of light in a certain crystal is about 2 × 10⁸ m/s. Should you mount this
crystal in a ring or sprinkle it on your cooked peas?

7. A ray of light goes from a medium with a low index of refraction to one with a
higher index of refraction. What happens to the speed of the light? What happens to
the wavelength of the light?

8. A salesman tries to sell you a new type of plastic whose index of refraction is less
than one. What would you tell him?

9. The critical angle for a beam of light passing from water into air is 48.8 degrees.
What happens to light leaving the water and striking the surface at 45 degrees? At 50
degrees?

10. Is there a critical angle for light going from air into glass? Explain.

11. Why can't you see a rainbow at noon?

12. Explain the origin of the rays numbered "1" and "2" in the photograph to the right.
Light is entering the plastic shape from the left, first striking the hypotenuse of the
triangular prism. Can you explain the faint rays leaving the prism? (Hint: trace the
reflected rays from the second surface.)

LEVEL I PROBLEMS
13. A student makes a pinhole camera by making a hole in a window shade and placing
a screen 1.5 meters from the hole. If a neon sign outside is 30 meters away from the
window and the image of the sign is 10 cm high, how tall is the sign?


14. Two people stand in a dark room 3.0 meters from a large mirror covering one wall.
They are standing 5 meters apart. At what angle of incidence should one of them
shine a flashlight on the mirror so that the reflected light directly strikes the other
person? To solve this problem, make a drawing.

15. Two plane mirrors are placed 50 cm apart with their mirrored surfaces facing each
other and parallel. If the mirrors are 25 cm long, at what angle should a beam of light
be incident at one end of one mirror so that it just strikes the far end of the other.
Make a drawing!

16. A ray of green light (550 nm) enters a block of salt. The index of refraction of table
salt is 1.53. What is the wavelength of the light inside the block of salt? What is the
speed of light inside the block of salt?

17. A beam of light is incident on a flat piece of plastic at an incident angle of 45°. The
angle of refraction is measured to be 26.5°. What is the index of refraction of the
material?

18. A beam of light is incident on a polished crystalline material at an incident angle of
45 degrees. The angle of refraction is measured to be 30 degrees. What is the index
of refraction of the material?

19. An oil layer (n=1.45) is spread smoothly and evenly over the surface of a piece of
glass. A ray of light shines from the oil into the material at an incident angle of 60
degrees. The angle of refraction is 53 degrees. What is the index of refraction of the
unknown material?

20. A beam of light containing red (650 nm) and blue (450 nm) passes from a certain glass
into air at an incident angle of 30°. Find the refracted angles for the red and for the
blue light. Use n = 1.576 for red and n = 1.598 for blue.

21. A diver shines a flashlight upward from beneath the surface of a still pond at a 30°
angle to the vertical. At what angle does the light leave the water?

22. A ray of light strikes the interface between air and water at a 20° angle. The ray is
partly reflected by Fresnel reflection (law of reflection) and partly refracted (Snell's
law). (Note that in the other problems, there was also a reflected ray, but you weren't
asked to consider it.) What is the angle of reflection (back into the air)? What is the
angle of refraction in the water? What is the angle between the reflected and
refracted rays?

23. What is the critical angle for light going from diamond into air? The index of
refraction for diamond is 2.42.

LEVEL II PROBLEMS
24. In a pinhole camera, the distance between the object and the screen is 1.5 meters.
How long must the camera be in order that a 10 cm tall object produces a 2 cm tall
image?


25. Two mirrors meet so that the angle between them is 60°. A ray of light is incident on
the first mirror at an angle of 45° as shown in the figure. Find the angle of reflection
at the second mirror. Be sure to make a careful diagram!

26. A circular pool of water is lit from below by a bright light 2.80 meters beneath the
surface. The light is at the center of the pool, which is 2.7 meters in diameter. At
what angle does the light leave the water where it meets the edge of the pool?

27. An aquarium filled with water has flat glass sides. The index of refraction of the
glass is 1.5. A beam of light from outside the aquarium strikes the glass at a 43
degree angle to the perpendicular. What is the angle of this light ray when it enters
the water? What would be the angle if it entered the water directly from the air?

28. A 5 cm thick layer of oil is spread smoothly and evenly over a still pool of water.
What is the angle of refraction in the water for a ray of light that has an angle of
incidence of 45 degrees as it enters the oil from the air above? The index of
refraction for the oil is 1.51 and for water it is 1.33.

29. Light is incident on an equilateral glass prism at a 45.0 degree angle to one face.
Calculate the angle at which light emerges from the opposite face. Assume the index
of refraction of the prism is 1.52.

30. A light ray passing through a slab of glass is shown at left. The ray is shifted to the
left relative to the original path as it exits the slab. The index of refraction of the
glass is 1.4, the incident angle is 50°, and the glass is 10 cm thick. Find the
distance x.

31. Using a careful ray diagram, explain why the secondary rainbow has blue at the top
of the bow and red on the inside (the opposite color order of the primary rainbow).

32. Fermat's theorem states that light takes the path of least time to get from one point to
another. Use Fermat's theorem to graphically prove the law of reflection by drawing
at least 10 paths from point A to point B in the drawing at right. Measure the total
length of each path with a ruler to determine the angle of incidence that produces the
shortest path. Note that if the medium does not change, the path of least time is also
the path of shortest distance. You can also use calculus to solve this problem!

Where can you find lenses and mirrors? It is easy to think
of devices that rely on lenses for their operation—certainly
eyeglasses and contact lenses, but also telescopes,
microscopes, cameras, binoculars and various types of
image projectors. Mirrors, too, are found on the walls of
every household, as well as perched high in the corners of
many convenience stores, where their convex shape allows
the entire store to be viewed from one vantage point. But
did you know that lenses are used to direct the light of
streetlamps, traffic lights and car headlights and taillights?
Miniature lenses focus the laser light in a CD or DVD
player. And even smaller lenses, created by varying the
index of refraction of a disk rather than by shaping a
homogeneous piece of glass, are used with optical fiber
devices. In this chapter, you will learn how reflection and
refraction make lenses and mirrors work.

Underwater Building (Albert Yee)

Chapter 5

LENSES AND MIRRORS


5.1 INTRODUCTION TO LENSES
Lenses are one of the most common optical elements—just consider how
many people you know who wear eyeglasses or contact lenses. Lenses are also
among the simpler elements to understand, at least on an elementary level, since
they work by refraction.

Thin Lens Approximation


The lenses you are most likely to be familiar with are pieces of plastic
or glass with smooth concave or convex surfaces, such as the lens whose cross
section is shown in Figure 5.1. This lens bends light at both surfaces by
refraction because each of the lens surfaces is in contact with air. As shown in
the diagram, light bends at the first surface, travels a distance through the
material of the lens, then bends again at the second surface. That is, light


enters and exits the lens at different distances from the optical axis, the horizontal line through the center of the lens.

A thin lens represents an ideal situation in which light enters and exits the lens at the same level (Figure 5.1). That is, light is assumed to bend at one plane in space, and the fact that light travels through the lens is ignored. Thin lenses are, then, approximations to real lenses. As you might imagine, the thin lens approximation works better with lenses that have relatively flat curvature and are thin at the center. Although the thin lens model is easy to understand, it is usually not good enough for real-world applications.

Figure 5.1 - Thick (top) and thin (bottom) lens. The thin lens is an approximation of a real lens; we model the lens by a plane in space where the light rays appear to bend, shown as a line through the center of the lens.

Converging and Diverging Lenses

Consider a ray of light that is initially parallel to the optical axis in Figure 5.1. As the ray passes through the lens it is refracted by the first surface, travels in a straight-line path to the second surface, and is refracted again. We will assume these are spherical surfaces, that is, each half of the lens may be thought of as a slice cut from a sphere. If the radius of curvature is known for each surface, Snell's law and geometry can predict the path of the ray. As you can see in the top drawing of Figure 5.1, light is bent toward the normal as it enters the lens at the left surface, and away from the normal as it leaves the lens at the right surface.
If we apply Snell's law to several incident rays near the optical axis, the
rays will travel through the lens and then pass through a common point on the
right hand side of the lens. This point is called the focal point. (Later you will
learn why it is not really a "point.") The distance from the lens to the focal
point is the focal length. Remember that a thin lens is treated as a plane in
space, rather than a piece of real glass. When we speak of distances from the
lens we are referring to a distance measured from the plane in space where rays
bend, shown in Figure 5.1 as a dashed line through the center of the lens.
A lens that directs all of the incoming parallel rays to a focal point (as
in Figure 5.1) is called a converging, or positive lens. A lens that causes all the
incoming parallel rays to diverge, or spread out, after leaving the lens is called
a diverging or negative lens. The reason for the terms positive and negative
will become clear when we calculate the focal lengths of these lenses.
Figure 5.2 shows the cross-sectional view of several types of
converging and diverging lenses. Note that the converging lenses are thicker in
the center than at the edges and the diverging lenses are thinner at the center.
Each of these lenses has advantages and disadvantages in optical systems,
depending on the wavelength of light and how the lens is being used. For
example, plano-convex lenses can be used to minimize spherical aberration,


which we will discuss later in this chapter. Lens system design is a complex
topic, best reserved for more advanced treatments of optics. (See the references
at the end of this chapter.)

Figure 5.2 – Cross sections of converging, or positive, lenses (top row: biconvex, plano-convex, positive meniscus) and diverging, or negative, lenses (bottom row: biconcave, plano-concave, negative meniscus).

Figure 5.3 shows the actual path of light rays from a laboratory light source as they pass through plastic shapes that behave like two dimensional converging and diverging lenses. In each case, parallel rays enter from the left hand side of the photo and are refracted by the plastic shape. The top lens converges the rays to a focal point, the point on the optical axis where the rays cross. This is called a real focal point. The bottom lens causes the rays to diverge as they leave the lens. In this case, you can follow the outgoing rays back through the lens to find the point from which the rays seem to emerge. This is called a virtual focal point.

You will meet the terms real and virtual again when we speak of the images formed by lenses. In general, the description "real" indicates that rays actually meet at a point in space. "Virtual" is used to describe the situation where rays appear to diverge from a point.

Figure 5.3 - Refraction of light by converging (top) and diverging (bottom) lens shapes. The rays of light enter from the left.

5.2 CALCULATING FOCAL LENGTH: THE LENS MAKER'S FORMULA
It would be very tedious to apply Snell’s law to each point on both
surfaces of a lens in order to find the focal length. Fortunately, a simple
equation (the Lens Maker's Formula, Equation 5.2) can be used to predict the
focal length of both converging and diverging lenses that have spherical
surfaces. The derivation of this equation makes a simplifying assumption,
called the paraxial approximation. It is important to understand the
implications of this approximation, because it means the use of the Lens
Maker's Formula is restricted to certain types of lenses and situations. Because


the paraxial approximation is based on the small angle approximation, which we will frequently use in this text, we will take a closer look.

The Paraxial Approximation


As you know, Snell's law involves the sines of the angles of incidence and refraction. It would greatly simplify the mathematics if we could replace the sines of the angles with the angles themselves, so that Snell's law becomes a linear equation:

n1θ1 = n2θ2

Replacing the sine by the angle (in radians) is called the small angle approximation and it is used often in optics.

Consider the triangles of Figure 5.4 and recall the definition of the angle in radians: θ = s/r. If θ is very small, as in the triangle at the bottom of the figure, the hypotenuse c and the radius r are nearly the same size. Also, the arc length s is approximately the same length as side b. Therefore,

sin θ = b/c ≈ s/r = θ

That is, if the angle is very small, the sine of the angle can be replaced by the angle itself, expressed in radians.

Figure 5.4 - The small angle approximation. In the triangle below, the arc length s is approximately the same as the side b. Also, the sides a and c are nearly the same length as the radius r.

A useful addition to the small angle approximation involves the tangent. Because side a and the hypotenuse c are nearly equal if θ is very small,

tan θ = b/a ≈ b/c = sin θ

To summarize, the small angle approximation may be written

sin θ ≈ tan θ ≈ θ (in radians)    (5.1) Small Angle Approximation
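The approximation is easy to test numerically. The short sketch below compares θ (in radians) with sin θ and tan θ: the three agree to better than about 1% below roughly 10 degrees and drift apart quickly beyond that.

```python
import math

# Numerical check of the small angle approximation (Equation 5.1):
# sin(theta) ~ tan(theta) ~ theta for theta in radians.
for degrees in (1, 5, 10, 20, 30):
    theta = math.radians(degrees)
    print(f"{degrees:2d} deg: theta = {theta:.4f}  "
          f"sin = {math.sin(theta):.4f}  tan = {math.tan(theta):.4f}")
```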
To see how the small angle approximation applies to a lens, consider the two parallel rays incident on the curved surface shown in Figure 5.5. The ray that strikes closer to the optical axis has a much smaller angle of incidence, so the small angle approximation is more valid near the axis. In fact, paraxial means "near the axis."

Figure 5.5 - The paraxial approximation applies to rays striking the lens near the optical axis. In this diagram the angle θ is much smaller than the angle θ'.

When is the paraxial approximation valid? It depends, of course, on the circumstances and the precision required. It also depends on the curvature of the lens surface and how large an angle the incident rays make with the optical axis. For educational quality lenses in a school lab, the approximation works quite well, especially for relatively long focal length lenses. It is not appropriate, however, in the design and construction of precision optical systems. Fortunately,


computer software is available to assist in the solution of complex lens design problems.

The Lens Maker's Formula


The Lens Maker's Formula is derived in the Appendix by applying
Snell's law to a spherical surface. We will simply state the result here, and show
how it can be used to calculate the focal length of a lens with spherical surfaces.
We assume that light enters the lens from the left, as in the illustrations of Figure
5.1. Let n be the index of refraction of the lens material and R1 and R2 be the radii
of curvature of the left hand side and right hand side of the lens, respectively. If
the lens is surrounded by air (nsurr=1), its focal length, f, can be calculated by

" %
1
f
(
= n!1) $# R1 !
1
'
R2 &
(5.2) Lens Maker's Formula
1

When you work with lenses, it is very important to be consistent with the signs of various quantities (positive or negative). You have probably already encountered this in math and perhaps physics courses. To use the Lens Maker's Formula, we need a sign convention to distinguish between curvature opening toward the left and toward the right. We will say a curvature is positive if the open "c" of the curve points in the positive direction (to the right); otherwise it is negative (see Figure 5.6). This sign convention is based on the familiar Cartesian coordinate system. In general, negative quantities mean "to the left" of the origin and positive quantities are "to the right" of the origin.

Figure 5.6 - Sign convention for spherical surfaces. A surface whose curve opens toward the right has R > 0; one that opens toward the left has R < 0.
EXAMPLE 5.1:
The lens shown below (shaded blue) has a radius of curvature on the left side of +10 cm and on the right side of -40 cm. It is made of glass with an index of refraction of 1.5. What is its focal length? (Hint: Make a drawing of the lens to guide your solution.)

Solution
Using Equation 5.2:

1/f = (1.5 - 1)(1/(+10 cm) - 1/(-40 cm)) = 0.0625 cm⁻¹
f = 1/(0.0625 cm⁻¹) = 16.0 cm

Note that in Example 5.1, the left side of the lens is curved so that it
opens toward the right; this is a positive radius of curvature according to our


sign convention. Using the same sign convention, the right side of the lens has
a negative radius of curvature. Incoming parallel rays on the left side of the
lens will meet at a focal point 16 cm to the right side of the lens. The focal
length is positive; this is a converging lens and the focal point is described as
"real."

EXAMPLE 5.2:
The lens shown at left has a radius of curvature on the left side of +40 cm and on the right side of +10 cm. It is made of glass with an index of refraction of 1.5. What is its focal length if it is used in air?

Solution
Using Equation 5.2:

1/f = (1.5 - 1)(1/(+40 cm) - 1/(+10 cm)) = -0.0375 cm⁻¹
f = 1/(-0.0375 cm⁻¹) = -26.7 cm

The focal length is -26.7 cm; rays appear to diverge from a point this far to the left of the lens.

The diverging lens of Example 5.2 has a negative focal length. Remember that negative quantities mean "to the left" of the origin, in this case, the lens. The negative focal point is the point to the left of the lens from which rays appear to diverge (see Figure 5.3). This is a virtual focal point.
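A few lines of code make it easy to experiment with Equation 5.2 for different glasses and curvatures. This is a minimal sketch (the function name and units are our own choices); it reproduces Examples 5.1 and 5.2.

```python
def lensmakers_focal_length(n, r1_cm, r2_cm):
    """Focal length in cm of a thin lens in air, Equation 5.2.

    Radii follow the sign convention of Figure 5.6: a surface whose
    curve opens toward the right has a positive radius.
    """
    inverse_f = (n - 1.0) * (1.0 / r1_cm - 1.0 / r2_cm)
    return 1.0 / inverse_f

print(lensmakers_focal_length(1.5, +10, -40))   # 16.0 cm, Example 5.1
print(lensmakers_focal_length(1.5, +40, +10))   # -26.7 cm, Example 5.2
```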

Optical Power
Often, lenses are specified by their focal lengths expressed in millimeters
or centimeters. But it is also common to refer to the power of a lens, especially
in ophthalmic applications. The optical power of a lens in air is given by

P = 1/f    (5.3) Lens Power (in air)

When the focal length is in meters, the unit of optical power is diopters (D), where 1 diopter = 1 m⁻¹. An eyeglass prescription that specifies spherical power of -2.25 D is describing a lens that has a focal length of f = 1/(-2.25 D) = -0.44 m = -44 cm. Like focal lengths, powers are positive for lenses that bring light to a focus and negative for lenses that cause light to diverge.
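Because power is simply the reciprocal of the focal length in meters, the conversion takes one line of code. This sketch (function name ours) reproduces the prescription worked out above.

```python
def lens_power(f_meters):
    # Optical power in diopters for a thin lens in air (Equation 5.3).
    # The focal length must be expressed in meters.
    return 1.0 / f_meters

f = 1.0 / -2.25          # the -2.25 D prescription from the text
print(f"f = {f:.2f} m")  # f = -0.44 m, i.e., -44 cm (a diverging lens)
print(lens_power(f))     # recovers -2.25 D
```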
Finally, in some applications we may want to focus light in only one
direction, creating a "line" focus rather than a point. In this case the axis, or
direction of the lens, is an important specification. In Figure 5.7, the axis is


vertical, and incoming parallel rays of light are brought to a focus along a vertical line. Cylindrical lenses are used in eyeglasses to correct astigmatism, irregularities of the cornea that cause vision to be blurred in one direction only. In this case, the cornea has an elongated, rather than spherical, shape. Eyeglasses are made with cylindrical lenses whose axis is 90° to the cornea's axis. Cylindrical lenses are also used to reshape laser beams that have an elliptical cross section, creating a more circular beam profile, and to create "line" sources of light for specialized applications. Cylindrical power is also measured in diopters.

Figure 5.7 - A cylindrical lens focuses in only one direction.

5.3 IMAGE FORMATION WITH THIN LENSES

Ray Tracing
The focusing properties of a lens lead directly to the formation of
images. The location and size of an image can be determined by considering the
rays of light that leave an object located on one side of a lens, and following their
progress through the lens. Determining where an image will form by following
rays through an optical system is called ray tracing. Although we can find the
location of an image using algebra, ray tracing has the advantage of providing a
visual explanation of why an image forms where it does.
Consider an object on the optical axis in front of a converging lens and
well outside the focal point. (Note that in our drawings light will nearly always
go from left to right, so "in front of" means to the left.) The object is an arrow;
that is, it has an orientation in space so that we can determine if the image is in
the same orientation (upright) or upside down (inverted). Figure 5.8 illustrates
such an object and uses a double-headed arrow to represent the (thin) converging
lens. Two focal points are shown, one on each side of the lens, because incoming
parallel light from either direction will be focused the same distance from a thin
lens. In other words, if the thin lenses in Figure 5.2 were turned around they
would focus light at the same distance from the lens.
The object may be illuminated, or it may be diffusely reflecting light
from the sun or artificial source. In either case, rays of light are heading out in all
directions from every point on the arrow. It would be an overwhelming task to
follow each of these rays, instead three principle rays are drawn to determine the
image location. These rays are chosen because we know in advance how they
will behave when they pass through the lens. Of course, they aren't the only rays
that leave the object and end up forming the image; they are just the simplest rays
to draw. In Figure 5.8, the heavy (blue) arrows represent the principal rays and
the remaining arrows indicate a few of the other rays that leave the tip of the
object arrow and meet at the tip of the image arrow.


The principal rays are:


• The parallel ray enters the lens traveling parallel to the optical axis.
It leaves the lens and passes through the focal point on the opposite
side. This is a direct result of the definition of the focal point.
• The focal ray enters the lens after passing through the focal point on
one side of the lens. (Depending on object location, it may leave the
object heading in a direction as if it came from the focal point.) The
focal ray exits the lens traveling parallel to the optical axis. A
moment's thought reveals that this ray is the parallel ray traveling in
the reverse direction (right to left in our drawing).
• The central ray heads from the tip of the object (arrow) toward the
center of the lens. It can be shown to pass through the center of the
lens undeviated.

Figure 5.8 - Ray tracing to locate an image. The lens is represented by the double headed arrow. The parallel, focal and central rays are traced from the object to a point on the right side of the lens, where they form a real image. Note that the rays continue beyond the image. At other distances from the lens, the image is not in focus.

Notice that the three principal rays all meet at a point beyond (to the right
of) the lens before continuing onward. In fact, for every point on the object,
there is a point beyond the lens where rays originating at that object point meet.
The image points form a real image. The light can be projected onto a screen
held at the image plane in space, and a real image will be seen. Note that the
image is inverted (upside down) with respect to the object. The ray diagram also
indicates that, for this case, the image is smaller than the object. Carefully drawn
ray diagrams can be used to find at least an approximate solution to many lens
problems.


EXAMPLE 5.3
An object is placed 10 cm in front of a diverging lens with a focal length of
-5 cm. Where does the image form?

Solution
The object and lens with its focal points are drawn to scale along the optical
axis. Notice that a diverging thin lens is represented by a line with inverted
arrowheads at each end.
The three principal rays are followed from the tip of the object through the
lens as shown in the diagram below.
• The parallel ray exits the lens as if it came from the virtual focal
point on the left hand side (see Figure 5.2).
• The focal ray heads toward the focal point on the opposite (right)
side of the lens and exits the lens parallel to the optical axis.
• The central ray passes through the center of the lens undeviated.
These three rays do not meet at a common image point on the right hand
side of the lens. Instead, if the three rays are followed backward (to the left)
it appears as if they originated at a point on the left side of the lens. That is,
if you look into the lens from the right side you will see a small virtual
image to the left of the lens.


Thin Lens Equation


Although ray tracing can be used to locate the image formed by a lens, it
can be a fairly time-consuming process. It would be handy to have a formula that
would allow us to solve lens problems algebraically rather than with ray tracing.
(Lens designers actually use sophisticated computer programs that perform the
calculations and provide graphical and numeric information to assist in
optimizing optical systems.)


Before we can numerically analyze problems with lenses, we need to


introduce some terminology and agree on a sign convention for distances along
and at right angles to the optical axis. General physics textbooks often use a sign
convention referred to as the "empirical convention.” Unfortunately, its rules
are only easy to follow for the simplest of lens problems. In this book we will use
the Cartesian sign convention, which is the one preferred by optical engineers. It
has the advantage of following mathematical rules you already know from
graphing in a Cartesian coordinate system and it is easily extended to
complicated multiple lens systems.
Figure 5.9 shows the Cartesian convention we will use throughout this
text. We assume that light is traveling from the left toward the right. The arrow
indicating the direction of light’s travel also indicates the positive direction. That
means up and to the right are both positive directions, down and toward the left
are both negative directions. Note that the optical axis marks the zero in the
vertical direction. In the horizontal direction, the lens itself divides the axis
between positive and negative. This is the convention we have already used to
locate positive and negative focal points when using the Lens Maker's Formula.

Figure 5.9 - Cartesian sign convention for solving problems with lenses. Light travels from left to right; up and to the right are the positive directions.

EXAMPLE 5.4
What are the signs (+ or -) of the distances indicated on the drawing below?

Solution
The solid arrow on the left has a height of +5 cm and it is located -10 cm from the lens. The striped arrow on the right has a height of -5 cm and it is located +10 cm from the lens.


Now that we have a consistent sign convention we can use the thin lens
equation to locate the image for a given lens and object. The derivation of this
equation can be found in the Appendix of this text. The thin lens equation
follows from the application of Snell's Law to a spherical surface.
Let us call the distance from the lens to the object do (the object distance)
and the distance from the lens to the image di (the image distance), as shown in
Figure 5.10. The thin lens equation relates the object and image distance to the
focal length of the lens.
1/do + 1/f = 1/di    (5.4) Thin Lens Equation

The thin lens equation is easy to remember by noting the order of the
denominators and comparing to the geometry of a typical imaging problem;
starting on the left, light goes from the object (do), through the lens (f), and forms
(=) an image (di ). The equation can be used for both converging (positive focal
length) and diverging (negative focal length) lenses. This form of the equation is
correct for the Cartesian sign convention; a different sign convention may require
the order of variables be changed.

Figure 5.10 - Geometry of the thin lens equation. In this (and many common situations) the object is drawn to the left of the lens and the object distance is negative.

Equation 5.4 also suggests a practical way to find the focal length of a
lens. Subtracting the quantity 1/do from both sides gives
1/f = 1/di - 1/do    (5.5)

For a converging lens, which forms a real image whose distance from the lens is
easily measured, the focal length can be determined by measuring the object and
image distances and solving Equation 5.5.


EXAMPLE 5.5
An object is located 40 cm from a lens with a focal length of 20 cm. Find
the location of the image.

Solution
Assume the object is to the left of the lens; then the object distance is -40 cm. Using Equation 5.4:

1/(-40 cm) + 1/(20 cm) = 1/di
1/di = 0.025 cm⁻¹
di = 40 cm
The image forms 40 cm to the right of the lens.

In Example 5.5, the object was well beyond the focal point of the lens,
on the left hand side. What happens if we bring the object in closer to the lens?
Table 5.1 shows the calculated image distances for the same 20 cm focal length
lens for object distances that vary from -500 cm to 10 cm. You can use the thin
lens equation to verify the values in the table.

Object distance Image distance


-500 cm 21 cm
-200 cm 22 cm
-50 cm 33 cm
-40 cm 40 cm
-25 cm 100 cm
-22 cm 220 cm
-20 cm no image
-10 cm -20 cm

Table 5.1 Object and image distances for a positive lens with f = 20 cm.
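Table 5.1 is easy to regenerate with a short function. The sketch below implements Equation 5.4 under the Cartesian sign convention; the function name and the None return for an object at the focal point are our own choices.

```python
def image_distance(do_cm, f_cm):
    # Thin lens equation (5.4), Cartesian signs: 1/do + 1/f = 1/di.
    # Objects to the left of the lens have do < 0. Returns None when
    # the object sits at the focal point, where no image forms.
    inverse_di = 1.0 / do_cm + 1.0 / f_cm
    return None if inverse_di == 0 else 1.0 / inverse_di

# Regenerate Table 5.1 for the f = +20 cm lens:
for do in (-500, -200, -50, -40, -25, -22, -20, -10):
    print(do, image_distance(do, 20.0))
```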

Notice that when the object is very far from the lens, the image is near
the focal point. This is to be expected. Wavefronts from very distant objects are
nearly flat when they enter the lens, or, said another way, the rays entering the
lens are nearly parallel. Thus, very distant objects are imaged near the focal point
as shown in Figure 5.3. This gives another way to quickly determine the


approximate focal length of a converging lens. Find a very distant object, such as
a tree or utility pole, and use the lens to form an image of the object on a piece of
paper. The distance from the lens to the paper is the approximate focal length of
the lens.
As the object moves closer to the lens, Table 5.1 shows that the image
moves away from the focal point. If the object is exactly at the focal point, there
is no image formed at all. (The image distance is sometimes said to be "infinite.")
In fact, if a point of light is placed at the lens focus, the rays are made parallel by
the lens.
To interpret the final result in the table, a negative image distance, recall that negative distances are on the left side of the lens—the side that light is coming from. This is a virtual image, like the one formed by the diverging lens in Example 5.3. The eye, placed on the right hand side of the lens, follows the diverging rays back to a virtual image on the left. That is, to see this image you need to look through the lens back toward the direction of the object. This is how a positive lens is used as a magnifying glass.

Magnification
The thin lens equation provides information on the location of an image
produced by a lens. But what is the size of the image? The definition of
transverse magnification (that is, at right angles to the optical axis) is the ratio of
the image height to the object height
M = hi/ho    (5.6)
Using the two similar triangles formed by the object, image and optical
axis in Figure 5.11, the magnification can also be shown to be related to di and do
M = hi/ho = di/do    (5.7) Transverse Magnification

Figure 5.11 - Transverse magnification is the image height divided by object height.
Note that magnification can be greater than one, meaning the image is
larger than the object, or less than one, indicating that the image is smaller than
the object. Also, magnification is a positive quantity if the image is upright, or in
the same orientation as the object, and a negative quantity if the image is inverted


or "upside down" from the object. For Example 5.5, the object distance was
-40 cm and the image formed at +40 cm. The magnification is therefore -1; the image
is the same size as the object, but inverted.
It should be clear that a diverging lens cannot produce a real image, since
it cannot cause rays to come to a focus. In problems with a single lens, the thin
lens equation used with a negative focal length lens will always produce a
negative image distance, that is, a negative lens produces a virtual image.
Examples 5.6 and 5.7 illustrate the method for finding image location
and magnification for a converging lens. Note that a diagram showing the
relative positions of the object and lens helps you to predict the type of image that
will be formed. The procedure for a diverging lens is the same; the problems at
the end of this chapter provide an opportunity to work with both types of lenses.

EXAMPLE 5.6
An object is located 25 cm to the left of a lens with a focal length of 10 cm. The object is 3 cm high. Find the image distance, magnification, height, orientation and type (real or virtual).

Solution
Since the lens is converging and the object is beyond the focal point, we expect the image will be real and inverted.
a. Image distance: Use Equation 5.4, the thin lens equation

1/do + 1/f = 1/di
1/(-25 cm) + 1/(10 cm) = 1/di    (Note that the object distance is negative.)
di = 16.7 cm

The image forms at 16.7 cm to the right of the lens. (The image distance is a positive quantity.) This is a real image, as expected.
b. To find magnification use Equation 5.7

M = di/do = (16.7 cm)/(-25 cm) = -0.67

The image is 0.67 (2/3) the size of the object, so it is 2 cm high. The negative sign means that the image is inverted (upside down) compared to the object, as expected. In summary, the image is real, inverted, 2 cm high and 16.7 cm to the right of the lens.


EXAMPLE 5.7
An object is located 5 cm to the left of a lens with a focal length of 10 cm. The object is 3 cm high. Find the image distance, magnification, height, orientation and type (real or virtual).

Solution
Since the object is between the focal point and the lens, we expect a virtual, upright image.
a. Image distance: Substituting these values into the thin lens equation

1/do + 1/f = 1/di
1/(-5 cm) + 1/(10 cm) = 1/di    (Note that the object distance is negative.)
di = -10 cm

The image forms at 10 cm to the left of the lens. (The image distance is a negative quantity.) This is a virtual image as expected; you need to look through the lens to see the image.
b. Magnification

M = di/do = (-10 cm)/(-5 cm) = 2

The image is twice the size of the object, so it is 6 cm high. The magnification is positive, which means that the image is upright (same orientation) compared to the object, as expected. In summary, the image is virtual, upright, 6 cm high and appears to be 10 cm to the left of the lens when viewed looking back through the lens.

Finally, the thin lens equation also provides a means of calculating image
location and size for multiple lens systems. The image formed by the first lens
becomes the object for the next lens, and so on. In these cases, it is absolutely
necessary to begin with a careful drawing of the lens system!
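The bookkeeping for a multiple lens system is easy to automate. The sketch below reuses Equations 5.4 and 5.7; the first call checks against Example 5.6, and the two-lens layout that follows is hypothetical, chosen only to show how the image of one lens becomes the object of the next.

```python
def thin_lens_image(do_cm, f_cm):
    # One thin lens: Equation 5.4 for the image distance and
    # Equation 5.7 for the transverse magnification (Cartesian signs).
    di = 1.0 / (1.0 / do_cm + 1.0 / f_cm)
    return di, di / do_cm

di1, m1 = thin_lens_image(-25.0, 10.0)
print(di1, m1)              # 16.7 cm and -0.67, as in Example 5.6

# Hypothetical second lens (f = +5 cm) placed 30 cm to the right of
# the first; the image of lens 1 becomes the object for lens 2.
separation = 30.0
do2 = di1 - separation      # -13.3 cm: 13.3 cm to the left of lens 2
di2, m2 = thin_lens_image(do2, 5.0)
print(di2, m1 * m2)         # final image 8.0 cm right of lens 2;
                            # total magnification is the product, +0.4
```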

5.4 REAL LENSES: ABERRATIONS


The thin lens equation works reasonably well for many applications not
requiring precision. But for exacting applications, the thickness of the lens
becomes an issue. Although the discussion of "thick" lenses is beyond the scope


of this text, we will discuss two of the many aberrations that degrade the images
formed by a lens.
Spherical aberration is a result of the spherical geometry of the lens surfaces. Rays passing through the outer portions of a lens with spherical surfaces do not focus at the same point as rays that pass through the center (Figure 5.12). Instead of a sharp focus, the focal region is actually a small spot. This means that any images made with the lens will be blurred to some degree. A lens designed to correct for spherical aberration may include stops to restrict the usable part of the lens to the center. Alternatively, the radius of each surface can be chosen to minimize the problem; for example, the side that faces the object may be considerably more curved than the other side. Another solution is to use "aspherical" surfaces. These are more expensive to produce than spherical lenses.

Figure 5.12 - Spherical aberration. Rays that pass through the outside portions of the lens focus closer to the lens than rays that pass through the center of the lens.

Chromatic aberration is caused by dispersion, the variation of index of refraction with wavelength. Chromatic aberration results in a slightly different focal point for each wavelength, which causes colored fringes to appear around images of black and white objects (Figure 5.13). Lenses that compensate for chromatic aberration are called achromats. Achromatic doublets are compound lenses consisting of two lenses, one converging and one diverging, made from different glasses and cemented together.

Figure 5.13 - Chromatic aberration causes different wavelengths to focus at different distances from the lens.

5.5 MIRRORS
Mirrors work by specular reflection, and as you know, mirrors can form
images. The formation of an image in a plane mirror is shown in Figure 5.14,
using the law of reflection. Two of the rays from the top of the cylindrical object
are traced, showing the reflection from the mirror. The rays are redirected toward
the eye and the eye and brain interpret the diverging rays as originating from
behind the mirror, that is, the image is virtual. Simple geometry can be used to
show that the image distance is equal to the object distance in a plane mirror.
Figure 5.14 - Image formation in a plane mirror.

Common household mirrors are constructed of glass with a metallic


coating on the back. The back coating makes it easy to clean the mirror and


protects the coating from dirt and fingerprints. Precision technology applications
demand front surface mirrors, however, since back surface mirrors produce
double reflections—one from the glass surface and one from the mirror surface
on the back. These mirrors may be thin metal films on a glass substrate, or they
may be constructed of very thin layers of dielectric (non-conducting) materials.
Mirrors may be broadband, reflecting a wide band of wavelengths equally, or
they may be "tuned" to reflect a narrow range of wavelengths while transmitting
all others. Thin film mirrors and filters will be discussed more fully in Chapter 6.

Spherical mirrors
As with a lens, image problems with mirrors can be solved by ray tracing
or algebraically. Again, before introducing the equation for solving spherical
mirror problems, we need to have a sign convention. Although the law of
reflection has a simpler form than the law of refraction, the sign convention for
image formation by a mirror is a bit less intuitive than that for lenses. Recall that
for lenses the direction of light propagation defines the positive direction. In
Figure 5.9, light is pictured going from the left to the right, so that all
measurements to the right of the lens are positive and those to the left of the lens
are negative. Light travels through the transparent lens, so positive and negative
directions remain fixed for the duration of the problem, as light goes from object
to lens to image.
The situation with a mirror is complicated because after striking the
mirror, light reverses direction and returns to the direction from which it came.
This means that in order to be consistent, the sign convention must change when
the direction of light changes. Figure 5.15 illustrates the two sign conventions
for before and after reflection from a mirror. The figure on the left shows light
traveling from left to right toward the mirror. This situation is identical to the
sign convention used in Figure 5.9 with a lens. In the figure on the right, light has
been reflected from the mirror and is traveling from right to left. Now the signs
along the optical axis are reversed and distances measured from the mirror to a
point to the left of the mirror are positive, and from the mirror to points to the
right of the mirror are negative.
Figure 5.15 - Cartesian sign convention for mirrors leads to results consistent with those for lenses: real image distance is positive and virtual image distance is negative (behind the mirror).


Although this sign convention may seem confusing at first, it leads to


results consistent with those for lenses. Real images will have positive image
distances and virtual images will have negative image distances.
Consider a practical example. When you look at an object in a mirror you
see a virtual image "behind" the mirror (Figure 5.16). The object distance is
negative; the object is situated to the left of the mirror and light travels from left
to right. However, the image distance is also negative because, after reflection,
light is traveling away from the mirror from right to left. Image distances
measured from the mirror toward the right (where the virtual image is located)
are negative.

Figure 5.16 - Sign convention for a mirror problem. The object distance is negative because it is to the left of the mirror when light is going from left to right. The image distance is also negative because the light that forms the image goes from right to left. Before reflection: object distance < 0. After reflection: image distance < 0.

The Spherical Mirror Equation


Ray tracing with spherical mirrors is much like the process with lenses. Figure 5.17 shows the cross section of a concave spherical mirror. The mirror itself is a portion of the shell of a sphere; think of a "slice" from a hollow rubber ball, silvered on either the inside (concave) or outside (convex) surface. The radius of curvature of the mirror in Figure 5.17 is negative because the curve of the mirror opens toward the left (following the sign convention of Figure 5.6).

Figure 5.17 - A concave spherical mirror has a real (positive) focus.

Using geometry, it can be shown that parallel rays of light that strike this mirror are brought to a focus at a point halfway to the center of curvature. That is, the focal length is equal to one half the radius of curvature. In order to preserve our Cartesian sign convention and to have a positive focal length for this converging mirror, we write

f = -R/2    (5.8) Mirror focal length

It should be noted that, as with a lens with a spherical surface, the incoming rays shown in Figure 5.17 will not all be focused at precisely the same


point. There will be spherical aberration with rays striking the outer portions of the mirror focused closer to the mirror than rays striking the center. If the mirror is relatively flat and only rays close to the axis are considered (the paraxial approximation), spherical aberration may be ignored in some applications. Using a parabolic surface eliminates spherical aberration.

The concave mirror shown in Figure 5.17 has a positive focal length and is capable of bringing rays to a focal point. A convex mirror (with a positive radius of curvature) will have a negative focal length and a virtual focal point from which incoming parallel rays will appear to diverge. A diverging mirror, like a diverging lens, cannot produce a real image.

Figure 5.18 – A convex mirror has a virtual focus (behind the mirror).
As with lenses, it is possible to use three principal rays to determine
where images are formed by a spherical mirror:
• The parallel ray strikes the mirror after traveling parallel to the optical
axis. This ray reflects so that it goes through the focal point (or so that it
appears to come from the focal point for a convex mirror).
• The focal ray strikes the mirror after passing through the focal point. (For
a convex mirror, the ray cannot pass through the virtual focal point, but
is heading in that direction before striking the mirror.) This ray reflects
parallel to the optical axis.
• The central ray goes through the center of curvature of the mirror (or is
headed in that direction for a convex mirror). Because a ray that passes
through the center of curvature strikes a spherical surface normally, it
reflects back along the line of incidence.

Figure 5.19 - Ray diagrams for a concave mirror. On the left, the object located beyond the focal point produces a real image. On the right, the object is between the focal point and the mirror, creating a virtual image. Note the similarities and differences in ray diagrams for lenses!

There are other similarities between mirrors and lenses. Figure 5.19
shows ray diagrams for a concave mirror with the object outside (to the left of)
the focal point and a concave mirror with the object inside (to the right of) the
focal point. When the object is to the left of the focal point, the three principal
rays meet at a point after reflecting from the mirror. That is, a real image is
formed which can be projected onto a screen. (Of course, the screen must be


placed below the object so that it does not prevent the light from the object from
reaching the mirror.) Note that the real image is inverted; as with a lens, the
magnification depends on the exact object placement.
When the object is between the focal point and the mirror, the reflected
rays diverge as they leave the mirror so no real image is formed. However, if you
place your eye to intercept the diverging rays, a virtual image is formed at the
point from which the diverging rays appear to emanate. This image is upright
and enlarged. This is how a shaving or makeup mirror works to create an
enlarged view of your face.
A ray diagram for a convex mirror is shown in Figure 5.20. In the figure,
the ray that leaves the tip of the object traveling parallel to the optical axis (R1) is
reflected so that it appears to come from the virtual focal point (behind the
mirror). The ray that leaves the object heading toward the virtual focal point (R2)
reflects so that it is parallel to the optical axis. The third principal ray (R3) heads
toward the center of curvature (behind the mirror) and reflects along the same
line. The three reflected rays appear to come from a point behind the mirror,
which locates the tip of the virtual image. Convex mirrors are used for
surveillance in stores, where they can provide wide-angle view of the interior.
The mirror on the passenger side of many cars that reads "Caution: objects in
mirror are closer than they appear" is also a convex wide-angle view mirror.

Figure 5.20 - Image formed by a convex mirror.

We can predict image location if the object distance and mirror focal
length are known. It can be shown that the mirror equation has the same form as
the thin lens equation

1/do + 1/f = 1/di    (5.9)

Magnification is also calculated using the same equation familiar from lens
problems
M = di/do


It may surprise you to learn that the mirror equation also applies to a
plane mirror. Since the radius of curvature of a flat surface is infinite, the focal
length of the mirror is also infinite (see Equation 5.8). This means that the 1/f
term in the mirror equation is zero. The resulting equation says that object
distance equals image distance and magnification is one, as you know from your
experience with common household mirrors.

EXAMPLE 5.8
An object is placed 25 cm in front of a concave spherical mirror with a 15 cm focal length. Find the location and size of the image.

Solution
The object distance is negative, with light going left to right from the object toward the mirror. We expect a real, inverted image since the object is beyond the focal point.
a. Image distance

1/do + 1/f = 1/di
1/(-25 cm) + 1/(15 cm) = 1/di
di = 37.5 cm

The image forms 37.5 cm to the left of the mirror (on the same side as the object). Since the light forming the image goes from right to left, positive image distances locate images to the left of the mirror. This is a real image, as expected, that can be projected onto a screen.
b. Magnification

M = hi/ho = di/do
M = (37.5 cm)/(-25 cm) = -1.5

The image is 1.5 times the size of the object. The negative sign means that the image is inverted (upside down) compared to the object, as expected.
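The mirror relations collapse into a few lines of code. This sketch (function name ours) combines Equations 5.8 and 5.9 and reproduces Example 5.8.

```python
def mirror_image(do_cm, radius_cm):
    # Spherical mirror with the Cartesian convention: f = -R/2
    # (Equation 5.8), then 1/do + 1/f = 1/di (Equation 5.9),
    # and transverse magnification M = di/do.
    f = -radius_cm / 2.0
    di = 1.0 / (1.0 / do_cm + 1.0 / f)
    return di, di / do_cm

# Example 5.8: a concave mirror with f = +15 cm has R = -30 cm;
# the object sits 25 cm in front of it, so do = -25 cm.
di, m = mirror_image(-25.0, -30.0)
print(di, m)   # 37.5 cm and -1.5: a real, inverted, enlarged image
```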


REFERENCES
For derivations of the thin lens equation and detailed information on aberrations
and thick lens equations see
1. Pedrotti, F., Pedrotti, L.M., and Pedrotti, L.S. Introduction to Optics, Ed. 3. Upper Saddle River, NJ: Prentice Hall, 2006.
2. Meyer-Arendt, J. Introduction to Classical and Modern Optics, Ed. 4. Upper Saddle River, NJ: Prentice Hall, 1994.
3. Hecht, E. Optics, Ed. 4. San Francisco: Addison-Wesley, 2001.

WEB SITES
1. For an explanation of the optics of an eyeglass prescription, see
www.sightandhearing.org/news/healthissue/archive/hi_0303.asp
2. For an illustration of ray tracing, an excellent Java applet is found at
www.hazelwood.k12.mo.us/~grichert/optics/intro.html
3. Lens design and optical CAD software are important tools of optical
engineers. You can find information and some demonstrations and/or
tutorials at any of these web sites.
www.photonengr.com/
www.breault.com/
www.opticalres.com/
www.lambdares.com


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
Lenses/Thin lens equation
1. What is the difference between a real and virtual focus? Real and virtual image?
2. How does spherical aberration cause blurring of images?
3. What causes chromatic aberration? How does it affect an image?
4. Is it possible to make a real image with a single diverging lens? Explain.
5. For this problem you will need to draw a ray diagram to scale. If an object is placed
10 cm in front of a lens that has a 5 cm focal length, where does the image form? If
the object is moved closer to the focal point, does the image move in toward the lens
or away from the lens?

6. A child gets her first pair of eyeglasses and tries to set a small pile of leaves on fire
by focusing sunlight. Try as she may, the light won't focus. What do you think the
problem might be?

Mirrors
7. A concave mirror has a focal length of 20 cm. What type of image (real/virtual,
upright/inverted) is formed if the object is a) 30 cm in front of the mirror? b)10 cm in
front of the mirror?

8. Can a convex mirror ever form a real image? Why or why not?
9. Why are objects in the passenger side mirror closer than they appear?

LEVEL I PROBLEMS
Lenses/Thin lens equation
10. The thin lens at right has a radius of curvature on the left side of +20 cm and on the
right side of -30 cm. It is made of glass with an index of refraction of 1.58.

a. What is the focal length of the lens when it is used in air?

b. Suppose the radii are switched—that is, the left side is -30 cm and the
right side is +20 cm. What would the focal length be? What kind of
lens would it be?

11. A biconvex lens made of an unknown material has a 35 mm focal length. The radius
of curvature of the first surface is 30 mm and the radius of curvature of the second
surface is -50 mm. Find the index of refraction of the lens material. Draw a diagram
that represents the lens.

12. An object 1 cm high is placed 30 cm in front of a thin lens with a focal length of 15
cm, as shown at left. Where is the image and what is its magnification? Is the image
real or virtual? Is the image erect or inverted?


13. Using the same lens as in problem #12 (15 cm focal length) calculate what happens
to the image as the object is placed 20 cm, 10 cm and 5 cm in front of the lens. What
happens to the image location and image type (real or virtual) as the object is moved
from 20 cm toward the lens? What happens to the image orientation and size as the
object is moved from 20 cm toward the lens?

14. Repeat Problem #13 for the same object placed in front of a diverging lens of focal
length -15 cm. Find the image distance, magnification and type of image at distances
of 30, 20, 10 and 5 cm from the lens. What happens to the image location and image
type (real or virtual) as the object is moved from 20 cm toward the lens? What
happens to the image orientation and size as the object is moved from 20 cm toward
the lens?

15. A real object 2 cm tall is located 55 cm from a lens of focal length of 7.5 cm. Find
the image distance and determine whether the image is real or virtual. What is the
height of the image? Is the image inverted or upright?

16. A real object 3 cm tall is located 7.5 m from a converging lens of focal length 4.5
cm. Find: image distance, real or virtual image, magnification, height of image,
inverted or upright.

17. A 1.7 m tall person is standing 2.5 m in front of a camera. The camera uses a
converging lens with a focal length of 0.0500 m. Find the image distance. Is the
image real or virtual? What is the magnification and height of the image on the film?
Is the image inverted or upright?

Mirrors
18. A man is shaving with his face 25 cm in front of a concave mirror. The image is 65
cm behind the mirror. Find the focal length of the mirror and its magnification.

19. A dentist's mirror is placed 1.5 cm from a tooth. The enlarged image is located 4.3
cm behind the mirror. What kind of mirror (plane, convex, concave) is being used?
Determine the focal length of the mirror, magnification and image orientation
relative to the object.

20. A grocery store uses convex mirrors to monitor the aisles in a store. The mirrors
have a radius of curvature of 4 m. What is the image distance if a customer is 15 m
in front of the mirror? Is the image real or virtual? If a customer is 1.6 m tall, how
tall is the image?

LEVEL II PROBLEMS

Lenses/Thin lens equation


21. What is the object distance and object height if the magnification is 6 and the real
image is 30 cm from the lens and 1 cm high?

22. Two positive lenses, each of focal length f = 3 cm, are separated by a distance of 12
cm. An object 2 cm high is located 6 cm to the left of the first lens. Find the final
image characteristics, distance, height and orientation. Draw a diagram.


23. A converging lens with f =12 cm is 28 cm to the left of a diverging lens f = -14 cm.
An object is located 6 cm to the left of the converging lens. Find the final image
distance and the overall magnification.

24. Two 20 cm focal length lenses are placed on an optical bench 70 cm apart. A 1 cm
high object is placed 40 cm to the left of the first lens. Find the location of the
images of the first lens and the second lens. Find the overall magnification of the two
lens system.

25. Repeat problem #24 with the distance between the lenses reduced to 50 cm.
26. Repeat problem #24 with the distance between the lenses reduced to 30 cm.
27. A converging lens projects a real image on a screen, which can serve as the object
for a second lens. Show how this method can be used to determine the focal length of
a negative (diverging) lens.

Chapter 6

WAVE OPTICS: INTERFERENCE AND DIFFRACTION

In Chapter 4, we saw that dispersion explained the colors of the rainbow. But what is the source of the colors that form in a clear oil film spread on a puddle or a soap bubble floating in air? How does a rainbow spectrum form in the reflection from the surface of a compact disc? What causes the iridescence of butterfly wings? To explain these colorful effects we need to consider the wave nature of light. In this chapter we examine two wave behaviors—interference and diffraction. Chapter 7 will deal with another important wave behavior of light, polarization.

Pacific Waves (J. Donnelly)
6.1 INTERFERENCE OF WAVES

Introduction
One of the most striking properties of laser light is the ease with which
one can show interference of waves. In fact, the speckle effect—the shifting
sparkling pattern you see when an expanded laser beam shines on a wall—is due
to interference of the beam with itself. In order to understand how light waves
interfere we need to first have an understanding of some basic wave properties
and behaviors. We will begin with a discussion of superposition, the addition of
waves, and coherence, which is a necessary condition for stable interference
patterns.

The Principle of Superposition


What happens when several waves are present in the same medium? For
example, if you drop two stones into a shallow puddle of water, what happens
when the expanding ripples flow past each other? If you try this experiment, you
will see that waves move through each other and emerge unchanged. At each

110
Wave Optics: Interference and Diffraction

instant, the medium responds at each point with displacement equal to the
algebraic sum of the displacements of the two waves.

ytotal = y1 + y2

For example, suppose that at one point where two waves overlap one of
the waves is at its positive peak (y1 = A) and one is at its most negative value (y2
= -A). The resultant displacement is zero at this point and time. If the two waves
are both at their peak values, the medium will have an instantaneous
displacement of ytotal = 2A, and so on. This algebraic addition of wave
displacements is called superposition.
Figure 6.1a shows two waves with the same amplitude approaching each
other on a stretched string. You can reproduce this experiment by stretching a
Slinky® spring between yourself and a friend. At the same time, shake both ends
once or twice, sending short wave pulses toward each other. As the waves
overlap, the amplitude of the resulting wave is the algebraic sum of the
amplitudes of the two waves. Thus in Figure 6.1b, where a crest from the right-going
wave overlaps a crest from the left-going wave, the resulting wave has twice the
amplitude of the individual waves. The waves are said to be in phase, resulting in
constructive interference. In Figure 6.1c, a crest overlaps a trough and the string is
momentarily flat. The waves are completely out of phase, which results in
destructive interference. At positions between completely in phase and
completely out of phase, the string assumes a sinusoidal shape. The amplitude is
less than the maximum produced by constructive interference and more than the
minimum produced by destructive interference; this is partially destructive
interference. In all cases, the waves continue on in the original directions after
their brief moment of overlap.
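Superposition is easy to check numerically: sample two sinusoids and add them point by point. The short Python sketch below (our own construction, not from the text) evaluates ytotal = y1 + y2 for two equal-amplitude waves with an adjustable relative phase.

```python
import math

# Superposition: the displacement of the medium is the algebraic sum
# of the individual wave displacements, y_total = y1 + y2.
A = 1.0                            # common amplitude
wavelength = 1.0                   # arbitrary units
k = 2 * math.pi / wavelength       # wavenumber

def y_total(x, phase):
    """Sum of two equal-amplitude sine waves with a relative phase (radians)."""
    y1 = A * math.sin(k * x)
    y2 = A * math.sin(k * x + phase)
    return y1 + y2

x = 0.25 * wavelength              # a crest of wave 1
print(y_total(x, 0.0))             # in phase: 2A (constructive interference)
print(y_total(x, math.pi))         # half-wave shift: 0 (destructive interference)
```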

Figure 6.1 - Superposition of waves on a string. The propagation directions of the interfering waves are indicated by arrows. The heavy red lines show how the string appears when the two waves overlap. a. Two waves approach. b. Constructive interference. c. Destructive interference. d. Partially destructive interference. Note that interference will still result if waves are traveling in the same direction in a medium, as in the case of the speakers shown in Figure 6.2.


Destructive interference is a unique wave behavior. Classical particles


cannot interfere destructively; throwing two baseballs straight at a wall produces
an impact that is greater than either impact alone. But under the right conditions,
directing two beams of light at a wall might produce more intense light, less
intense light or no light at all at a given location on the wall.
It is important to realize that although the term "destructive" suggests
that light is destroyed or cancelled, this is not the case. If light is weakened at
some point through destructive interference, at some other point the light is
intensified through constructive interference. The light is simply redistributed, as
it must be for energy to be conserved.
Path Length Difference
Solving problems involving the interference of light
requires that we develop an analytical method for dealing with
superposition. One technique uses the concept of path length
difference. Figure 6.2 provides a more intuitive example of
superposition using acoustic waves and it illustrates what we
mean by path length difference.
Two speakers are side-by-side, driven by the same oscillator and producing the same single wavelength tone. The waves are in phase and the resulting sound wave has large amplitude. When the top speaker is moved forward one half wavelength, the crests from the top speaker occur at the same points in space as the troughs of the bottom speaker, and there is destructive interference directly in front of the speakers. When the top speaker moves forward another one half wavelength (for a total of one wavelength moved) the amplitude is large again. The distance between the speakers is the "path length difference." We will use the Greek letter Γ (upper case gamma) to indicate the additional path traveled by one wave (the path length difference).

Figure 6.2 - Path length difference for two audio speakers driven by the same source. Top: Path length difference = 0. Middle: Path length difference = one half wavelength. Bottom: Path length difference = one wavelength.

Of course, it is difficult to replicate this experiment with real speakers, since there are always reflections from the walls, floor and ceiling. However, such a "cancellation" of sound waves can occur in an auditorium, causing a "dead spot" for a particular
pitch. At the Point Judith Lighthouse at the Coast Guard Station
in Narragansett, RI, a pair of small outbuildings is located directly behind a
foghorn. Walking slowly between the buildings, you can hear the sound of the
foghorn growing alternately louder and softer as a result of the superposition of
sound waves reflected from the sides of the buildings.


Path Length Difference in a Medium


In many interesting cases, the path length difference for two light waves
involves more than just the extra physical distance traveled by one of the waves.
Since the wavelength of light changes in a material, the path length difference for
two light waves must take into account the index of refraction when two waves
travel paths in different materials. For example, consider a beam of light split
into two parts. One wave travels in an evacuated tube and one in a tube filled
with water (Figure 6.3).
Figure 6.3 - Path length difference in a medium. The top tube is evacuated (n=1) and the bottom tube is filled with water (n=1.33).

Although the waves enter the tubes in phase and they travel the same
physical distance (the length of the tubes), they are not in phase when they exit.
Since the wavelength shortens in water, an equal distance of water contains more
wave cycles. The number of waves that fit in the evacuated tube is given by
# of wavelengths = length of tube/wavelength = L/λo        (6.1)

where L is the length of the tube and λo is the wavelength of the light in a
vacuum. In a medium of index of refraction n, the wavelength of light is shorter
than in a vacuum by a factor n. That is, in the tube of water,
# of wavelengths = length of tube/wavelength = L/(λo/n) = nL/λo        (6.2)

There are n times as many wavelengths in the water-filled tube as in the evacuated tube.

EXAMPLE 6.1
In Figure 6.3, the vacuum wavelength is 500 nm and the index of refraction
of water is 1.33. How many wavelengths fit in each of the tubes if the tubes
are each 5 mm long?
Solution
The number of wavelengths is given by Equation 6.2
For a vacuum:

# of wavelengths = (1)(0.005 m)/(500 nm) = 10,000 waves

For water:

# of wavelengths = (1.33)(0.005 m)/(500 nm) = 13,300 waves
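The wave counting of Equations 6.1 and 6.2 can be scripted in a few lines; this sketch (variable and function names ours) reproduces Example 6.1.

```python
# Number of wavelengths that fit in a tube of length L (Equation 6.2):
# count = n * L / lambda0, since the wavelength in the medium is lambda0 / n.
def wave_count(L, lambda0, n=1.0):
    return n * L / lambda0

L = 0.005          # 5 mm tube, in meters
lambda0 = 500e-9   # 500 nm vacuum wavelength, in meters
print(wave_count(L, lambda0))          # vacuum: 10,000 waves
print(wave_count(L, lambda0, n=1.33))  # water: 13,300 waves
```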


The concepts of path length difference and how waves are affected by a
changing index of refraction are central to the understanding of optical thin films
and interferometers.

6.2 COHERENCE
Adding two waves by superposition is a fairly straightforward operation
on paper, but to produce high contrast optical interference in the lab requires light
waves that have the property of coherence. Because you will probably want to try
some of the experiments we discuss in this chapter, you should have at least a
basic understanding of coherent light.
Coherence refers to the phase relationship between two or more waves
traveling through space. Most light sources, such as an incandescent lamp,
produce light that consists of many wavelengths emitted at random times. The
light has poor coherence and could not be used, for example, to make a hologram
of a three-dimensional object. Fortunately, light from many common lasers is
coherent, allowing even beginning students to produce beautiful interference
effects.
In this chapter we will give a non-mathematical description of the two
types of coherence. Temporal coherence refers to the correlation in phase
between points along the direction of propagation. That is, how long does the
wave remain sinusoidal? Spatial coherence refers to the phase correlation
between points on a wavefront, perpendicular to the direction of propagation.
Because we are looking across the wave, perpendicular to the direction of
propagation, this type of coherence is sometimes called lateral coherence.

Temporal Coherence
Recall the speaker experiment in Figure 6.2. As long as the speakers are both connected to the same sine wave source, the phase relationship between the waves emitted by the two speakers remains constant. Depending on the path length difference, the waves may be in phase, or out of phase, or somewhere in between, but in any case a stable interference pattern will result.

Now suppose that the speakers are driven by independent sine wave sources that are turned on and off at random as shown in Figure 6.4. There will no longer be a stable phase relationship between the two waves. At some times they will interfere constructively and at some times destructively. If the speakers are on for only a few wave cycles at a time, the momentary interference patterns will "wash out" and a stable interference pattern will not be observed.

Figure 6.4 - Sound sources turned on and off at random times.

The situation with an ordinary light source is similar to that pictured in Figure 6.4. Since the lifetime of an excited atomic state is around 10 nanoseconds, the wave trains emitted by a source are not very long. For an

observer watching the passing light wave, a certain number of regular sinusoidal
oscillations pass by, then there is an abrupt change of phase. The average
interval between the phase changes can be measured in units of time or in terms
of the length of the wave train. For this reason, temporal coherence is sometimes
called longitudinal coherence and we speak of a source's coherence length.
Actually, the situation is even more chaotic with an ordinary light source,
because it contains many wavelengths. More sophisticated analysis reveals that
the coherence length (lt) is related to the wavelength content of the light source
according to the equation

lt = λ²/Δλ        (6.3)    Longitudinal coherence length

where λ is the central wavelength of a source and Δλ, the source spectral width, is
a measure of the range of wavelengths emitted by the source. This means the
more monochromatic the source, the greater the coherence length. Visible white
light, which contains wavelengths from approximately 400 nm to 700 nm, has a
very short coherence length because Δλ is large. Laser light can have an
extremely narrow spectral width, as small as a fraction of a nanometer. Lasers
may have coherence lengths of several centimeters up to many kilometers, which
means that producing interference effects with laser light is relatively easy to do.
Temporal coherence is important for instruments that divide the
amplitude of a light beam and send the resulting beams along two paths to
recombine. The device used to divide one beam of light into two beams of lesser
amplitude is called a beam splitter, and one may be constructed by cementing
two right angle prisms together along their diagonals. A beam splitter may also
be a thin partially reflecting mirror, similar to that used in a one-way window.
Coherence length is of central importance in the construction of interferometers, which are devices that use light interference to measure very small distances, uniformity of optical materials, and optical surface quality. For example, suppose a simple two-beam interferometer is constructed using a beam splitter in an arrangement like that shown in Figure 6.5. If the path length difference between path ABP and path ACP is longer than the laser's coherence length, there will be little or no discernible interference pattern observed on the screen. A rule of thumb for inexpensive helium neon lasers is that the coherence length is approximately the length of the laser tube.

Figure 6.5 - Temporal coherence and path length difference. The beamsplitter divides a single laser beam into two beams that afterward travel different paths to the screen.


EXAMPLE 6.2
Compare the coherence lengths for white light (central wavelength 550 nm and spectral width of approximately 300 nm) and a HeNe laser (central wavelength 633 nm and spectral width of 0.001 nm).

Solution:
Using Equation 6.3

lt = (550 nm)²/(300 nm) ≈ 1 µm for white light

lt = (633 nm)²/(0.001 nm) ≈ 40 cm for the laser

The result of Example 6.2 suggests that a white light interferometer must
have path length differences of micron size, while the HeNe laser of the example
can be used in an interferometer with path length differences of up to 40 cm. In
fact, white light interferometry can be used to study very small surface features,
since interference effects are restricted to a short distance, while laser
interferometers may be large enough to accommodate large optics for testing.
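Equation 6.3 is simple enough to evaluate directly; here is a minimal sketch using the numbers from Example 6.2 (the function name is our own).

```python
# Longitudinal (temporal) coherence length, Equation 6.3: lt = lambda^2 / dlambda.
def coherence_length(center_wavelength, spectral_width):
    return center_wavelength**2 / spectral_width

# All lengths in meters (Example 6.2 values).
print(coherence_length(550e-9, 300e-9))    # white light: about 1e-6 m (1 micron)
print(coherence_length(633e-9, 0.001e-9))  # HeNe laser: about 0.4 m (40 cm)
```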

Spatial Coherence
Imagine spherical light waves emanating from a single atom. The light
waves are only partially temporally coherent because the wave train is not
infinite in length. The expanding spherical wave has a beginning and end,
followed by other expanding spherical waves originating later as the atom is
excited and emits light to return to a lower energy state. That is, the waves have
discontinuities along the direction of propagation (Figure 6.6). However, the
waves are completely spatially coherent. Points along the spherical wave front,
such as those labeled A and B in Figure 6.6, are in phase and continue to stay in
phase as the wave expands outward. There is regularity to the wavefront that
does not change as the wave propagates. We speak of the spatial coherence
length when describing the distance between two points on the wavefront where
the wave is spatially coherent.

Figure 6.6 - Spatial coherence: an expanding set of spherical waves from a single atom light source.


A real light source is made up of a large number of atoms, each sending out waves at random times, as illustrated in Figure 6.7. There is no fixed relationship between two widely separated points in space, such as points A and B. Even if the source were monochromatic and therefore had a high degree of temporal coherence, it would still be spatially incoherent.

Figure 6.7 - Spatial incoherence.

Spatial coherence is related to the size of the source as it appears at the location where coherence is measured. Suppose we have a narrow rectangular source slit of width s that emits light of wavelength λ. We want to know the extent of spatial coherence at a distance d from the source. It can be shown that the spatial coherence length (ls) of this source is given by

ls = dλ/s        (6.4)    Spatial coherence length

Thus, a wavefront from an extended source (many emitting atoms) can be


spatially coherent at two widely separated points if the source is very small
(small s) or very distant (large d). Spatial coherence is important in experiments
where a wavefront is divided along its length and then recombined to produce
interference on a screen. One of the most famous wave optics experiments, the
double slit experiment, requires that the light passing through the two slits have a
high degree of spatial coherence.

EXAMPLE 6.3
The diameter of the sun is 1.39 × 10⁶ km, and the average distance from earth to the sun is 150 × 10⁶ km. What is the lateral spatial coherence length for the sun as viewed from earth? (Use the sun's diameter for the source size (s) in the equation. The correction for a circular source will be explained later in this section.)

Solution
Using 550 nm (the central wavelength of the sun's visible radiation) for wavelength, the lateral spatial coherence length of the sun is found from Equation 6.4

ls = dλ/s = (150 × 10⁶ km)(550 nm)/(1.39 × 10⁶ km) ≈ 59 µm
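A quick script makes it easy to repeat this estimate for other sources; this sketch of Equation 6.4 (function name ours) reproduces the sun calculation.

```python
# Lateral spatial coherence length, Equation 6.4: ls = d * lambda / s.
def spatial_coherence_length(distance, wavelength, source_size):
    return distance * wavelength / source_size

d = 150e9     # earth-sun distance: 150 x 10^6 km, in meters
s = 1.39e9    # solar diameter: 1.39 x 10^6 km, in meters
ls = spatial_coherence_length(d, 550e-9, s)
print(f"{ls*1e6:.0f} microns")   # about 59 microns
```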

Helium neon lasers usually have very good spatial coherence because of
the way the light is produced in the laser cavity. Therefore, it is fairly easy to
direct the laser's beam through two closely spaced slits and produce interference.
If the sun is used in such an experiment, however, interference may not be seen


because, as Example 6.3 shows, the spatial coherence length of the sun is around
60 microns. If the two slits are 50 µm apart, they fall within the spatial coherence
length and an interference pattern results. If they are 100 µm apart, the
experiment will not work. The two-slit interference experiment was developed by
British physicist and physician Thomas Young around 1800. Young ensured that
sunlight would be sufficiently spatially coherent by passing the light first through
a single pinhole and then through double slits, as shown in Figure 6.8. (Actually,
Young may not have performed the experiment exactly as it is described in
modern textbooks, but he did perform many experiments to show that light is a
wave.)

Figure 6.8 - A small pinhole acts as a point source, increasing spatial coherence of light. Light passes first through the single pinhole and then through the double slits.

Finally, a more exact equation for lateral spatial coherence for a circular source such as the sun includes a factor of 1.22, so the actual spatial coherence length is larger than the 59 µm calculated in Example 6.3. For a circular light source of diameter D, the spatial coherence length at a distance d is given by

ls = 1.22dλ/D

6.3 ANALYSIS OF YOUNG'S DOUBLE SLIT EXPERIMENT


If you allow spatially coherent light to pass through two closely
separated holes or slits in an opaque screen you will see something like the
pattern shown on the right hand side of Figure 6.9 form on a distant screen. This
is a remarkable result! At the time Young was experimenting with light
interference, some scientists still thought that light was
composed of Newton's "corpuscles" and this result was
like throwing baseballs straight through two side-by-
side open windows and having them land in several
piles across the lawn, with empty space in between.
The results of this experiment are unusual for
baseballs, but to be expected from waves. (Quantum
physics does have an explanation for why photons,
governed by probability, pass through two slits and
"clump" at specific points on the screen.)
Figure 6.9 - Young's 2-slit experiment.


Suppose the slits in Figure 6.9 are so narrow that we can replace them by
two point sources of coherent light. In that case, the light reaching the center of
the viewing screen has traveled an equal distance from each slit and is in phase.
Constructive interference will occur at the center of the pattern and there will be
a bright spot, often called a fringe. The bright fringe formed at the center is called
the zero order maximum.
Another way of saying that the waves travel an equal distance from the
two slits to the center of the screen is to say the path length difference is zero.
Again, we use the Greek letter Γ (upper case gamma) to indicate path length difference. Therefore, Γ = 0 for the zero order bright fringe.

Figure 6.10 shows the path from each slit to the screen without showing the circular wavefronts. Moving upward along the screen, away from the center, the path length difference Γ increases until the two waves are exactly out of phase. That is, the wave from the lower slit travels 1/2 wavelength farther than the wave from the upper slit. Since the waves are now one half wave (180°) out of phase, there will be a dark fringe at this point on the screen. The path length difference for the first dark fringe is Γ = λ/2. Continuing upward away from the center of the screen, the waves are eventually in phase again, forming the first order bright fringe. The path length difference is one wavelength: Γ = λ.

Figure 6.10 - The geometry of Young's double slit experiment. Note that the figure is not to scale; the distance to the screen is much greater than the slit spacing. Under these conditions, the two paths shown from the slits to the screen are very nearly parallel and the angle marked as a right angle is very close to 90°.

In general, moving away from the center of the screen in either the positive or negative direction will result in a bright fringe whenever the path length difference is equal to a whole number of wavelengths. In other words, bright fringes occur when

Γ = 1λ, 2λ, 3λ, ... = mλ    where m = 0, 1, 2, 3...        (6.5)

In Equation 6.5, m is the order of the fringe. As you can see, we use the order
number to keep track of which of the fringes we are talking about. Sometimes

119
LIGHT: Introduction to Optics and Photonics

you will see positive and negative values of m, indicating fringes above or below the axis.
It would be handy to have an equation relating the variables that are
easily measured—the separation of the slits, the wavelength of light and the
spacing of the bright fringes on the screen. A mathematical relationship can be
easily derived from the geometry of Figure 6.10. The triangle is redrawn as Figure 6.11, where the paths from the slits to the screen are shown as parallel lines. In fact, because the screen is so far away compared to the slit spacing, the lines are nearly parallel.

Figure 6.11 - The angles in the Young's equation derivation.

Referring to the small right triangle in Figure 6.11, the path length difference for the light paths shown is given by

Γ = d sin(θ')        (6.6)
Since the path length difference must satisfy Equation 6.5 to produce a bright fringe, we can equate the right hand sides of Equations 6.5 and 6.6,

mλ = d sin(θ')    m = 0, 1, 2, 3...        (6.7)

The problem with this equation is that the angle θ' is very difficult to measure. However, a little geometry reveals that θ' is the same as θ, the angular position of the mth fringe on the screen. The equation we are seeking to calculate the angular position of the mth order maximum is Equation 6.7, with θ replacing θ'.

mλ = d sin(θ)    m = 0, 1, 2, 3...        (6.8)    Young's double slit equation
When you are performing Young's double slit experiment, it is usually more convenient to measure the vertical (y) position of the bright fringes on a screen located a distance x from the slits. Since θ is a very small angle for the two slit experiment, the small angle approximation may be used, noting that sin θ ≈ tan θ = y/x for small angles. This allows us to replace the sine in Equation 6.8 with variables we can measure in the lab

mλ = d(y/x)    m = 0, 1, 2...        (6.9)

It is left to the reader to show that destructive interference occurs


whenever the path length difference is an odd number of half wavelengths. Note
that for interference minima (dark fringes), the lowest order is 1. There is no zero
order minimum because the center of the pattern is bright.
Young's two-slit experiment may be used to determine the wavelength of
an unknown source although, as you will see in the next section, using many slits


rather than just two creates interference fringes that are more easily viewed and
measured.

EXAMPLE 6.4
Monochromatic light passes through a double slit, producing interference. The distance between the slit centers is 0.50 mm and the distance from the center of the pattern to the first order maximum on a screen 3 m away is 0.3 cm. Find the wavelength of the light.

Solution
Use Equation 6.9, and solve for wavelength:

λ = dy/(mx) = (0.5 mm)(0.3 cm)/((1)(3 m)) = 500 nm
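The same calculation in a short Python sketch (our own helper, using the small-angle form of Equation 6.9):

```python
# Young's double slit, small angle form (Equation 6.9): m * lambda = d * y / x.
def wavelength_from_fringes(d, y, x, m=1):
    """Slit spacing d, fringe position y, screen distance x, all in meters."""
    return d * y / (m * x)

# Example 6.4 values: d = 0.50 mm, y = 0.3 cm, x = 3 m.
lam = wavelength_from_fringes(0.5e-3, 0.3e-2, 3.0)
print(f"{lam*1e9:.0f} nm")   # expect 500 nm
```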

6.4 DIFFRACTION GRATINGS - FROM TWO SLITS TO MANY


What happens if more slits are added to Young's two slit experiment? As
Figure 6.12 shows, increasing the number of effective point sources contributing
to the interference pattern causes the bright fringes to become very sharp and
bright. Why does this happen? If you draw the waves going from three slits to a
distant screen as we did in Figure 6.10 for two slits, you will find that three
waves do not interfere constructively at as many points as two waves do. The
more slits you include in the drawing, the fewer points on the screen where all of
the waves are in phase.
If we use many equally spaced slits, the bright fringes are widely
separated, distinct points of light. This makes the distances between fringes much
easier to measure. A diffraction grating is such a device, consisting of hundreds
or even thousands of closely spaced slits per centimeter.
Figure 6.12 - Increasing number of slits in Young's experiment. The top diagrams show the calculated interference patterns for 2, 4 and 16 slits. Below each graph is a simulation of what would appear on a distant screen. The photographs at the bottom show interference patterns created by 633 nm light directed through two slits and four slits 50 microns apart. (A.Y.)


When Young's experiment is performed with white light, different


wavelengths form bright fringes at different locations on the screen. However, it
is difficult to separate the overlapping colored fringes. Because the diffraction
grating maxima are so widely separated, wavelengths are easily separated. In
fact, diffraction gratings have many applications in the analysis of light. The
most common application is spectroscopy, where gratings are found in both
inexpensive and high quality instruments.
In its simplest form, a diffraction grating consists of multiple parallel,
equidistant slits, often called lines, in an opaque screen. Early gratings were
made by dragging a diamond stylus across a glass plate in a large machine
called a "ruling engine," while holographic gratings are made using the principle of
interference. Which type of grating is used depends on the application and
resolution required for the particular situation. Light is passed through the slits
of a transmission grating, while a reflection grating consists of multiple parallel
grooves etched into a reflecting surface. The grooves on a compact disc act like a
reflection grating and will produce several bright spots when a laser is reflected
from the CD onto a distant wall.

Figure 6.13 - Diffraction grating schematic. A single wavelength is directed through the grating. The angular position θ of each of the bright fringes is measured with respect to the normal to the grating.

The equation governing the angular position of the bright spots for the
interference pattern of a diffraction grating is the same as for Young's two slit
experiment

d sin θ = mλ    m = 0, 1, 2, 3...        (6.10)    Grating Equation

In Equation 6.10, d is the separation between two adjacent slits, θ is the angle through which the mth order is diffracted and λ is the wavelength of the light. The slit spacing is often not stated explicitly, but rather gratings are marked with the number of lines (slits) per millimeter. For example, a grating may be marked "500 lines/mm". In that case, d is calculated from N, the number of lines per mm, by

d = 1/N = 1/(500 mm⁻¹) = 2 µm


The small angle approximation cannot usually be used with a diffraction grating because the fringes are widely separated and the angle θ can vary from 0° to 90°.
From Equation 6.10 it can be seen that a grating with very fine line
spacing (more lines/mm) will have fewer diffracted orders, but the separation of
those orders will be greater than for a coarse grating. This allows for the clearer
separation of closely spaced spectral lines. We say that the fine grating has a
higher resolvance or resolving power. Holographic gratings have only one order
on each side of the central maximum. With only one order there is no confusion
caused by the overlap of wavelengths from different orders.

EXAMPLE 6.5
A diffraction grating has 500 lines/mm. When a screen is placed 100 cm from the grating, two first order maxima are separated by 60 cm. What is the wavelength of the incident light?

Solution
First, find the spacing between lines (d). Note that

d = 1/(500 × 10³ lines/m) = 2 µm

Next, find θ using the distances to the screen (x) and to the first order maximum. Note that the small angle approximation generally is not used with diffraction gratings. Also note that y, the distance from the center of the pattern to the mth maximum, is one half the distance between maxima. Solving Equation 6.10 for wavelength,

tan θ = y/x = 30 cm/100 cm
θ = 16.7°
λ = d sin θ/m = (2 µm)(sin 16.7°)/(1) = 575 nm (green)
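Because grating angles are large, the exact geometry must be kept. Here is a sketch (our own helper, not from the text) that reproduces Example 6.5:

```python
import math

# Diffraction grating (Equation 6.10): d * sin(theta) = m * lambda,
# with theta found exactly from the screen geometry (no small angle approximation).
def grating_wavelength(lines_per_mm, y, x, m=1):
    d = 1e-3 / lines_per_mm       # slit spacing in meters
    theta = math.atan2(y, x)      # tan(theta) = y / x
    return d * math.sin(theta) / m

# Example 6.5: 500 lines/mm; first order maxima 60 cm apart on a screen
# 100 cm away, so y = 30 cm and x = 100 cm.
lam = grating_wavelength(500, y=0.30, x=1.00)
print(f"{lam*1e9:.0f} nm")        # about 575 nm
```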

6.5 THIN FILM INTERFERENCE


How can a thin layer of colorless, transparent liquid like oil or dish soap
create a rainbow when light is reflected from it (Figure 6.14)? In this section we
will explore thin films, which have important applications in optical technology. Thin film coatings are commonly applied to optical components to either enhance or suppress reflection. Wavelength selective mirrors or filters can be created from many layers of very thin film applied to a substrate material. In fact, students are often surprised to find that they can see through a "mirror" designed to operate at a particular laser wavelength. At its operating wavelength it is in fact a near perfect reflector. Although thin film interference is the basis of a useful technology for creating mirrors and filters, it can also be problematic, for example, by producing stray or unwanted reflections in an optical system.

Figure 6.14 - An oil slick on water is a thin film. This photo was taken in a parking lot after a rain storm.
When a beam of light travels from one medium to another, the change of
index of refraction causes part of the beam to be reflected (Figure 6.15). The
remainder of the light is transmitted into the film and a portion of the transmitted
light is then reflected at the lower surface. Depending on the thickness of the
layer and the index of refraction of the materials involved, constructive or
destructive interference between the two reflected beams is possible.
When solving thin film problems, two phenomena must be considered:
phase shift upon reflection and path length difference in the film. The path length
difference may need to include the index of refraction of the film material. We
will consider only the simplest case of light normally incident on the film. At
other incident angles the basic ideas are the same, but, as you might imagine,
trigonometry is involved in determining whether reflected rays are in phase or
out of phase.

Figure 6.15 - Thin film interference. The light transmitted into n3 is not shown. If the layer labeled n2 can be considered a "thin film," the reflected beams will interfere.

Phase Shift on Reflection


What happens when light is reflected from a dielectric surface? To
understand the effect of reflection on the phase of a wave, we begin with an
analogy. Consider the experiment shown in Figure 6.16. At the top of the figure,
a stretched string is tied tightly to a post and a pulse is sent down the string from
left to right. When the pulse strikes the fixed end, it "flips" and returns on the
other side of the string. Since a positive pulse reflects as a negative pulse, this is a
180° or one half wavelength (λ/2) phase shift.

Figure 6.16 - Reflection of a pulse on a string from fixed and free ends.

In the lower part of the figure, the string is tied not to the post but to a loop that is free to slide without friction along the post. In this case, the loop rises with the incident pulse, and as the loop returns to its original position a pulse is


returned on the same side of the string. That is, there is no phase shift since a
positive pulse is reflected as a positive pulse.
When light reflects from an interface between two media there may also be a phase shift. If light is normally incident on a surface of a medium with a higher index of refraction, the reflected light will experience a 180° or λ/2 shift. If the light travels from a higher index material to a lower index material, however, there is no phase change.
Figure 6.17 shows light entering a medium with an index of refraction of
1.5 from air (n = 1). Some of the light is transmitted (top of figure) and some is
reflected from the interface. Since it is reflected from a higher index of
refraction, the wave undergoes a 180° phase shift. The light that is transmitted
through the film enters a material with an index of refraction of 1.3, where a
portion of the light is transmitted and a portion is reflected. This time there is no
change of phase because the reflection is from a lower index of refraction.

Figure 6.17 - Phase shifts in a thin film. The wave is incident from the left (top wave). Reflection from a higher index of refraction produces a change of phase (middle). Reflection from a lower index of refraction does not cause a change of phase in the reflected wave (bottom).

Path length difference


Phase change on reflection is not the only consideration when trying to determine if reflected waves are in phase or out of phase. Consider the diagram shown in Figure 6.18. A substrate material such as glass is coated with a thin film of thickness t. Figure 6.18 shows the beam striking at a slight angle so that the incident and reflected rays can be separately shown in the drawing. For mathematical simplicity, we will assume that the beam strikes the film at normal incidence.

Figure 6.18 - A thin film. The incident light is assumed to be striking head on, but is shown at an angle for clarity.

At the top surface of the film, a portion of the light is reflected, and the remainder enters the film. The transmitted beam strikes the bottom surface of the film and a portion is reflected back to the top surface. The path length difference is the extra distance one beam travels in the film, multiplied by the index of refraction of the film. The index of refraction is needed because the wavelength changes by a factor n when light is in the film. Since the lower beam travels across the thickness of the film two times, the path length difference is

Γ = 2tn2

where t is the thickness of the film and n2 is the film's index of refraction.


Solving Thin Film Problems


Both change of phase upon reflection and path length difference must be
considered when solving problems involving thin film interference. The first step
is to determine if there are any phase changes on reflection at either surface.
Then the path length difference is set to either mλ or (m - 1/2)λ, depending on
phase changes and whether the problem is to produce constructive (brightness) or
destructive (darkness) interference.
For example, if the goal is to create constructive interference of a
particular wavelength on reflection from the film:
• The path length difference must be a whole number of wavelengths (mλ)
if there are no phase shifts on reflection or if there are two phase shifts
(one at each surface).
• The path length difference must be (m - 1/2)λ if there is one phase shift on
reflection from only one surface.
The best way to understand thin film problems is practice by working through as many as you can! Although it is possible (and tempting) to simply write a separate equation for every type of thin film problem, it is far more instructive to consider the physics of the problem and devise your own solution. A diagram of the film, showing index of refraction and where phase changes occur, can be very helpful. Examples 6.6 and 6.7 illustrate the process.
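The two-step recipe also translates directly into a short script. The sketch below is our own construction (function name and all), restricted to simple dielectrics at normal incidence: it counts the half-wave phase shifts from the index ordering and lists the wavelengths reflected constructively, using the film of the first example below as a check.

```python
# Thin film reflection at normal incidence: count lambda/2 phase shifts at the
# two surfaces, then find wavelengths that reflect constructively.
def constructive_wavelengths(t, n1, n2, n3, m_max=3):
    """Film of index n2 and thickness t (meters), between media n1 and n3."""
    shifts = (n2 > n1) + (n3 > n2)   # a reflection from higher index shifts by lambda/2
    gamma = 2 * t * n2               # path length difference in the film
    results = []
    for m in range(1, m_max + 1):
        if shifts == 1:
            results.append(gamma / (m - 0.5))  # one shift: gamma = (m - 1/2) * lambda
        else:
            results.append(gamma / m)          # zero or two shifts: gamma = m * lambda
    return results

# The film of Example 6.6 below: 300 nm thick, n = 1.40, on glass (n = 1.5), in air.
for lam in constructive_wavelengths(300e-9, 1.0, 1.4, 1.5):
    print(f"{lam*1e9:.0f} nm")   # 840 nm, 420 nm, 280 nm
```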

EXAMPLE 6.6
What wavelength will be reflected for normal incidence from a 300 nm film of index of refraction 1.40 formed on a glass substrate of index 1.5?

Solution
Step 1: There will be a λ/2 phase change on reflection from both surfaces of the film. Two phase changes put the two waves "back in step," so the phase changes may be ignored. Constructive interference will occur when the path in the film is a whole number of wavelengths:

Γ = mλ, where m = 1, 2, 3...

Step 2: For light striking the film head on, the path length difference (the additional optical path in the film) is 2tnfilm, where t is the film thickness. Then

mλ = 2tnfilm

For m = 1, λ = 2(300 nm)(1.4) = 840 nm (IR, not visible).
For m = 2, λ = 2(300 nm)(1.4)/2 = 420 nm (the film appears violet).
For m = 3 and up, the wavelength will be in the UV range.


EXAMPLE 6.7
A soap bubble appears green (540 nm) when viewed head on. What is the minimum thickness for the soap film if its index of refraction is 1.40?

Solution
Step 1: The soap film is surrounded by air (n=1) on both sides. There will be a λ/2 phase change on reflection from the top surface of the film but not from the bottom surface. Why? Constructive interference will occur when the path in the film is λ/2, 3λ/2, 5λ/2, ..., or

Γ = (m - 1/2)λ    m = 1, 2, 3...

Step 2: For light striking the film head on, the path length difference (the additional optical path in the film) is 2tnfilm, where t is the film thickness. Then

2tnfilm = (m - 1/2)λ    m = 1, 2, 3...

The minimum thickness occurs when m = 1, so

t = (1 - 1/2)λ/(2nfilm) = λ/(4nfilm) = 540 nm/(4(1.4)) = 96.4 nm

The bubble would also appear green when viewed head on for a thickness of 3t, 5t and so on. However, at some point the film is thick enough to constructively reflect other visible wavelengths as well as green. Problem #37 at the end of this chapter further explores this idea.

Can you figure out how the many colors of a soap bubble or an oil slick are produced? When you look at an oil slick on the surface of a puddle, you are seeing light reflected at many angles. At each angle, the light passes through a different thickness of film; that is, the path length difference is a function of the angle of incidence. Which wavelength interferes constructively depends on the path length difference; therefore, a different color is seen at each viewing angle. This has implications for thin film optical coatings; for example, thin film filters must be inserted into the beam path at the angle specified by the manufacturer or they will not operate correctly.

Figure 6.19 shows the colored fringes reflected from the top and bottom of a thin layer of air trapped between two microscope slides placed beneath an ordinary table lamp. What causes the curvature of the fringes?

Figure 6.19 - Interference fringes due to a thin film of air between two microscope slides. The reflection of the table lamp can be seen on the left. The source of illumination is a compact fluorescent bulb.


6.6 ANTIREFLECTION COATINGS
Thin film anti-reflection (AR) coatings are commonly applied to optical components such as multi-element lenses to reduce the light loss due to reflection. You may have an AR coating applied to your eyeglasses. By introducing a one-half wavelength phase change between the light reflected at the first and second surfaces of the film as in Figure 6.20, a coating of appropriate index of refraction and thickness can eliminate reflection at one specific wavelength. To reduce reflection over a band of wavelengths, multilayer coatings are used, with each layer corresponding to a specific wavelength.

Figure 6.20 - AR coating. The coating thickness is one quarter the wavelength of the light in the film.

MgF2 is a common antireflection coating material with an index of refraction of 1.38. Since the coating index is more than that of air but less than that of the glass, there is a λ/2 change of phase at each surface of the film. Thus, in order to have destructive interference, the path in the film must be λ/2, 3λ/2, 5λ/2, ..., or

2tnfilm = (m - 1/2)λ

The minimum film thickness occurs when m = 1, or

t = λ/(4nfilm)        (6.11)

Equation 6.11 is the same equation that gave the thickness producing constructive interference for the soap bubble in Example 6.7! The difference is the additional phase shift on reflection for the antireflection coating.

EXAMPLE 6.8
A single layer AR coating on a pair of eyeglasses is chosen to "antireflect" 550 nm. What is the minimum thickness of the coating if it is made of MgF2?

Solution
Using Equation 6.11

t = λ/(4nfilm) = 550 nm/((4)(1.38)) = 99.6 nm

Unlike the thin film coating on the eyeglasses of Example 6.8, real
eyeglass lens AR coatings are many layers thick. The coated lenses often reflect
a pink-violet or green tint; the wavelengths that are not reflected are transmitted
through the lens material. The photo on the first page of Chapter 8 shows
multiple reflections from the coated surfaces of a multielement camera lens.


Although optical thin films are quite common, they must be applied with
vacuum techniques in a precisely controlled process (See Chapter 14). The
slightest contamination on the substrate may cause the film to fail. Thin film
coatings also require careful cleaning processes since damage to the coating is
usually permanent.

6.7 DIFFRACTION
As you saw in Chapter 4, when a point source of light casts a shadow of
an object, we expect the edges of the shadow to be sharp and well defined.
However, under some circumstances the edges are not sharp but rather blurred,
with alternating bright and dark fringes. You can see these fringes if you look at
a bright scene through the tiny crack between two pencils held side by side. The
fringes are the result of diffraction, the bending of light around the edges of an
object. Diffraction is a characteristic wave behavior that you are familiar with
even if you have not previously heard the term. When you hear the voices of
people in a room before you reach an open door it is because sound waves
diffract, or bend, around the edges of the door. Diffraction has important
implications for light because it sets an ultimate limit on focal spot size and the
resolution of optical systems.
One of the simplest techniques for analyzing diffraction is based on the
Huygens-Fresnel principle. Huygens was a contemporary of Isaac Newton and
he used the principle that bears his name to explain the behaviors of light, which
he believed to be a wave. Fresnel modified the principle in the early 1800s to
include the effects of interference. According to Huygens-Fresnel, every point on
a wavefront serves as a source of secondary wavelets of the same wavelength as
the primary wave. The optical field at a point beyond an obstruction is the superposition of all such wavelets reaching that point.

In Figure 6.21, a plane wave is propagating through an opening in a barrier. Five Huygens' wavelets are shown, originating on the plane wave centered on the barrier. One cycle later, the wavefront has moved to the right by one wavelength. In the center of the opening, the wavelets add to form a plane wavefront, but at the edges the wave curves around past the edge of the barrier. If wavelets are drawn along this wavefront, the resulting wavefront will bend even further past the opening (dashed line in Figure 6.21). This is the origin of diffraction.

Figure 6.21 - Huygens' Principle for a wave propagating through a slit in a barrier. The wave propagates toward the right. Wavelets propagating to the left are ignored.

As the light propagates beyond the barrier in Figure 6.21, the waves passing through the opening interfere with those that


are bent around the edges of the hole. A complex pattern of dark and bright
fringes develops that changes as the wavefront moves forward. Although the
pattern that forms on a screen placed to the right of the barrier changes
continuously with distance, we usually speak of two types of diffraction—the
pattern close to the barrier and the one at a very large distance from the barrier.
In Figure 6.22, if the screen is placed very close to a barrier or obstacle,
the edges of the shadow formed on the screen will be sharp and distinct. If the
screen is then moved away from the obstacle, a fine fringe pattern will begin to
form on the edges of the shadow. This is known as the Fresnel or near-field
diffraction pattern. The farther the screen is moved, the more the pattern spreads,
and eventually the shadow bears little resemblance to the shape of the obstacle.
This is the Fraunhofer or far-field diffraction pattern. Fresnel diffraction is the
more general case and includes Fraunhofer diffraction as a special case.
Fraunhofer diffraction, however, is much easier to describe mathematically so we
will limit our discussion to the Fraunhofer case.

Figure 6.22 - Fresnel (near field) and Fraunhofer (far field) diffraction of a square hole in an opaque barrier. The pattern evolves continuously as light goes from aperture to far field. Panels, left to right: Aperture, Near Field (Fresnel), Far Field (Fraunhofer).
Fraunhofer Diffraction of a Single Slit
The simplest diffraction situation to analyze is a single narrow
rectangular slit cut in an opaque barrier. The diffraction pattern is shown in
Figure 6.23. Notice that there is a broad central maximum with smaller bright
fringes to either side. To understand the origin of the diffraction pattern, we will
apply Huygen's-Fresnel principle to light passing through the slit. Assume plane
light waves strike a slit of width s as shown in Figure 6.23. Let us look at three of
the sources of Huygens-Fresnel wavelets on the wave crest as it passes through
Figure 6.23 - the slit: the sources at the
Fraunhofer diffraction
of a single slit. The $=3/2 # top and bottom edges of
three dots in the slit Slit the slit and the one in the
%' 1 $=1/2 #
opening are the origins
of Hugyens' wavelets. 2 center of the slit. To
Note that the drawing s 3 simplify the diagram,
is not to scale—the % $=0
distance to the screen Figure 6.23 shows the
is really much, much paths waves take from
larger than the slit $13
width, s, so the angle $=1/2 # the point sources to the
labeled as a right angle
in the figure is very Screen $=3/2 # screen, rather than the
o
close to 90 . wavelets themselves.


We are usually interested in the position of the dark fringes in a diffraction pattern. An interference minimum (dark fringe) occurs if wave 1 (from the top source) and wave 2 (from the center source) are one half wavelength out of phase with one another. Therefore, we can conclude that a first minimum occurs when the path difference, Γ12, between waves 1 and 2 is

Γ12 = λ/2

If waves 1 and 2 are one half wavelength out of phase then any wave originating in the top half of the opening will be out of phase with a wave originating at a point a distance s/2 away in the bottom half of the opening. In particular, wave 3 must be one half wave out of phase with wave 2, because they are a distance s/2 apart. This means that waves 1 and 3 are one whole wavelength out of phase, or

Γ13 = λ        (6.12)

Figure 6.24 - Part of Figure 6.23, redrawn. The wavelets originating at the three dots in the slit opening will form a dark interference fringe on the screen.

From the small triangle in Figure 6.24, we can use the same arguments leading to Equations 6.6 and 6.8 to show that

sin θ' = Γ13/s        (6.13)

By combining Equations 6.12 and 6.13 and using geometric arguments to show that θ' = θ when the slit is very small compared to the distance to the screen,

s sin θ = λ

Therefore, a dark fringe will occur in the diffraction pattern of a single slit whenever the path difference Γ13 is an integer multiple of λ, or

s sin θ = mλ    m = 1, 2, 3...        (6.14)    Single slit diffraction

Notice that if the width of the slit is decreased, the angular distance θ from the zero-order maximum to the 1st order minimum increases. That is, if you squeeze light through a smaller opening, it spreads more! Also, as the wavelength λ increases, so does the size of the central bright fringe.

A common source of confusion is the similarity of this equation and the equation predicting bright spots in a double slit interference pattern. It is important to remember that the interference pattern results from the interference of waves from the two point sources originating at the double slits, while the diffraction pattern originates with countless Huygens-Fresnel sources positioned along the length of an aperture. Equation 6.14 predicts the locations of the dark fringes in the single slit pattern, while Equation 6.8 locates the bright fringes of the double slit pattern.

Figure 6.25 - Photograph of single slit diffraction for light at 633 nm through a slit approximately 100 microns wide. (A.Y.)


In practice, the distance to the viewing screen is much larger than the size of the diffraction pattern. Since the angle θ is very small, the small angle approximation may be used. Substituting sin θ ≈ tan θ = y/x gives a simplified form of the Fraunhofer single slit equation:

mλ = sy/x        (6.15)

EXAMPLE 6.9
A 633 nm light source is incident on a slit of width s. On a screen 1 meter away, two 1st order minima are formed 10 mm apart. Find the slit width.

Solution:
The two first order minima are separated by 10 mm, so the distance from the center of the pattern to the 1st order minimum is 5 mm. Using Equation 6.15 and solving for s, we find

s = mλx/y = (1)(633 × 10⁻⁹ m)(1 m)/(5 × 10⁻³ m) = 126 µm
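A sketch of the same calculation (helper name ours):

```python
# Single slit, small angle form (Equation 6.15): m * lambda = s * y / x.
# Solve for the slit width s, given the position y of the m-th dark fringe.
def slit_width(wavelength, y, x, m=1):
    return m * wavelength * x / y

# Example 6.9: 633 nm light, first minima 10 mm apart on a screen 1 m away,
# so y = 5 mm from the center of the pattern.
s = slit_width(633e-9, 5e-3, 1.0)
print(f"{s*1e6:.1f} microns")   # about 126.6 microns
```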

Example 6.9 shows that diffraction fringes may be used to determine the size of a narrow slit. A similar principle may be used to monitor the size of optical fiber as it is being drawn from a glass rod in a fiber draw tower.

Single Slit Diffraction and Interference

In the beginning of this chapter, we predicted the interference pattern for Young's double slit experiment and for diffraction gratings. In Figure 6.12 it was shown that adding additional slits created narrower fringes. The illustrations in Figure 6.12 show bright fringes of equal irradiance across the width of the interference pattern. If you have had the chance to perform a double slit or diffraction grating experiment you probably noticed that the fringes are not equally bright across the width of the interference pattern. The pattern is brightest in the center and fades on either side. There are also occasional maxima missing, as the photographs of interference patterns show in the lower part of Figure 6.12. We are now in a position to explain why this is so. The actual pattern seen on a distant screen is a combination of the single slit diffraction pattern and the double slit interference pattern.

Figure 6.26 - Computer simulation, showing separate and combined interference and diffraction patterns.

Figure 6.27 - Photographs of single slit and three-slit patterns. The three slits are each the same width as the single slit. No fringe appears where the zeros of the single slit diffraction pattern fall on a maximum of the three-slit interference pattern. (A.Y.)

The combined effects of interference and diffraction are shown in Figure 6.26. The top part of the figure shows a calculated two-slit interference pattern and the diffraction pattern of a single slit. Combining these two patterns results in an interference pattern with overall irradiance determined by the single slit


diffraction pattern. The photographs in Figure 6.27 show the diffraction pattern
of a single slit and the interference/diffraction pattern of an opaque slide with
three slits of the same size as the single slit. Note that the single slit minima
appear in the same location in both photographs.
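The combined pattern is easy to compute. In the Python sketch below (our own
illustration; the slit width, spacing, and wavelength are assumed values), the
double slit interference term cos²(πd·sinθ/λ) is multiplied by the single slit
envelope [sin(πs·sinθ/λ)/(πs·sinθ/λ)]², the standard Fraunhofer result for two
slits of width s separated (center to center) by d.

import numpy as np

lam = 633e-9            # wavelength (m)
s = 50e-6               # slit width (m), assumed
d = 200e-6              # center-to-center slit separation (m), assumed
theta = np.linspace(-0.02, 0.02, 2001)              # screen angles (rad)

envelope = np.sinc(s * np.sin(theta) / lam) ** 2    # np.sinc(x) = sin(pi x)/(pi x)
interference = np.cos(np.pi * d * np.sin(theta) / lam) ** 2
irradiance = envelope * interference                # relative irradiance on the screen

# With d/s = 4, the 4th order interference maximum falls on the first
# diffraction zero (sin(theta) = lam/s), so that fringe is "missing."
theta_missing = np.arcsin(lam / s)
print(f"irradiance at the 4th order maximum: {np.interp(theta_missing, theta, irradiance):.2e}")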

Rayleigh's Criterion
Diffraction has important implications for optical systems. When two
distant objects are close together, their ability to be resolved is dependent on
whether their diffraction patterns are separate and distinct. Consider the case of
two distant stars viewed through a telescope. As the light from each star passes
through the telescope, it spreads by diffraction. The stars can be resolved as
separate points of light only if their diffraction patterns remain separate. You
might argue that large diameter telescope lenses would result in minimal
diffraction. Although light spreads by a small amount, if the angular separation of
the two stars is very small, this tiny amount of diffraction may be significant.
When light of a given wavelength passes through a circular aperture of
diameter D, like a lens or the pupil of the eye, the diffraction pattern consists of a
central bright spot, shading dimmer toward the edges and surrounded by a set of
faint circular fringes (Figure 6.28). The central spot is called the Airy disk and its
angular size is given by

$$D\,\sin\theta = 1.22\,\lambda \qquad (6.16) \quad \text{Diffraction by a circular aperture}$$

Figure 6.28 – Airy disk. (A.Y.)
Equation 6.16 is similar to the single slit diffraction equation, except that
the diameter (D) replaces the slit width (s) and the integer m=1 is replaced by
m=1.22. The constant 1.22 arises from the detailed solution of the diffraction
problem for a circular aperture. You may recall that a factor of 1.22 also
appeared in the equation for the spatial coherence length of a circular light
source. In fact, the equations for the positions of the higher order fringes also
have non-integer values of m, but usually it is the first order fringe that is of
interest since it defines the size of the central disk.
The radius of the Airy disk on a distant screen can be found by using the small
angle approximation, sin θ ≈ tan θ = y/x, where y is the distance from the
center of the disk to the first dark fringe and x is the distance to the screen
(Figure 6.29). Since the distance from the disk to the first dark fringe is the
radius, r, of the Airy disk, we have

$$r = y = \frac{1.22\,\lambda\,x}{D}$$

Figure 6.29 - Diffraction of light by a circular opening. The diameter of the
aperture, D, may be found from a measurement of the radius of the Airy disk.
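For a quick numerical feel, the Python sketch below (the values are our own,
chosen for illustration) evaluates the Airy disk radius for a HeNe beam passing
through a small pinhole.

def airy_radius(wavelength, x, D):
    """Distance from the center of the Airy disk to the first dark ring."""
    return 1.22 * wavelength * x / D

# Assumed example: 633 nm light through a 0.5 mm pinhole, screen 1 m away
r = airy_radius(wavelength=633e-9, x=1.0, D=0.5e-3)
print(f"Airy disk radius = {r * 1e3:.2f} mm")   # about 1.54 mm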


How does diffraction affect the ability to distinguish between closely


spaced objects? Consider light from two stars passing through a lens of diameter
D (see Figure 6.30). The stars are considered to be "just resolved" if the central
maximum of one star coincides with the first order minimum of the other. This
rule of thumb is known as Rayleigh's criterion. If the central disks of the
diffraction patterns overlap by more than the amount stated in Rayleigh's
criterion, the stars will not be resolved as separate points of light.

Figure 6.30 - Diffraction by an optical system. The stars are just resolved when
the central maximum of one diffraction pattern falls on the first order minimum
of the other. The graph on the right shows the intensity of light across the
overlapping diffraction patterns formed by the light from each star. Each pattern
is similar to the photo in Figure 6.28.

From Figures 6.29 and 6.30 you can see that when Rayleigh's criterion is
satisfied, the stars have an angular separation θ_min which is equal to the angular
size of the Airy disk as defined by Equation 6.16. That is,

$$\sin\theta_{min} = \sin\theta = \frac{1.22\,\lambda}{D} \qquad (6.18) \quad \text{Rayleigh's criterion}$$

where D is the diameter of the lens. If the two stars are separated by a distance
y_min and located a distance x from the lens, the small angle approximation can be
used to show

$$y_{min} = \frac{1.22\,\lambda\,x}{D} \qquad (6.19)$$

Equation 6.19 quantifies something you are very familiar with: the
farther away two objects are, the farther apart they must be in order to see them
as separate objects. Diffraction means that it is impossible for an optical system
to be “perfect.” Even if an optical system is corrected for any and all aberrations,
the system is still said to be diffraction-limited, and there is a limiting angular
separation between points that can be imaged as separate points.
Rayleigh's criterion applies not only to the resolution of stellar images
through a telescope, but also to imaging tiny structures in a cell viewed with a
microscope and to more mundane problems such as the minimum size of letters
on highway signs and how close together lights can be on an airport tower. Since


ymin depends on wavelength and lens diameter, resolution can be improved with a
shorter wavelength and larger lens.

EXAMPLE 6.9
Assume the pupil diameter of the human eye is 3 mm. Determine how far
away a car driving at night with its headlights on must be in order for the
headlights to be just resolved. Assume the headlights are 1.5 meters apart
and use 500 nm for the wavelength.

Solution
Use the small angle approximation and solve Equation 6.19 for x. (Note that
since y/x has a minimum value, this sets a lower limit for y or an upper limit
for x.)

$$x = \frac{D\,y}{1.22\,\lambda} = \frac{(3\ \text{mm})(1.5\ \text{m})}{(1.22)(500\ \text{nm})} = 7380\ \text{m}$$

At distances greater than 7380 meters, the headlights will not be resolved as
separate points of light. If this distance (about 4.6 miles) seems a little large
for resolving headlights, you are correct. The eye is subject to several
aberrations and is not considered a diffraction-limited system. Thus,
additional factors need to be included in a realistic calculation.
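The same calculation is easy to repeat for other apertures and wavelengths; the
Python sketch below (the function and names are our own) simply solves
Equation 6.19 for x.

def max_resolving_distance(D, y, wavelength):
    """Largest distance x at which two points separated by y are resolved
    by an aperture of diameter D (Rayleigh criterion, Equation 6.19)."""
    return D * y / (1.22 * wavelength)

# Example 6.9: 3 mm pupil, headlights 1.5 m apart, 500 nm light
x = max_resolving_distance(D=3e-3, y=1.5, wavelength=500e-9)
print(f"headlights resolvable out to about {x:.0f} m")   # about 7380 m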


REFERENCES
Optics texts with additional mathematical detail (calculus based)
1. Hecht, E. Optics, 4th Edition, San Francisco: Addison Wesley, 2002.
2. Pedrotti, L. and Pedrotti, F. Optics and Vision, Upper Saddle River, NJ:
Prentice-Hall, 1998.
A treatment of diffraction based on Fourier transforms
3. Goodman, J. Introduction to Fourier Optics, 2nd Edition, McGraw Hill, 1996.
(advanced math)
4. Metrologic, Inc. Physical Optics Kit; a hands-on approach to optical image
processing, no advanced math used.
Some references on colors in nature produced by interference and diffraction.
Article summaries may be available at no cost on the web.
5. Parker, A.R. "515 million years of structural colour," J. Opt. A: Pure
Appl. Opt. 2, 2000: R15-R28.
6. Phillips, K. "Feathers reveal their true colors," The Journal of Experimental
Biology 205, 2002: i1402.
7. Osorio, D. and Ham, A.D. "Spectral reflectance and directional properties of
structural coloration in bird plumage," The Journal of Experimental Biology
205, 2002: 2017-2027.

WEB SITES
Many optics companies have tutorials in their print or online catalogs. Some
examples are
1. A tutorial on optical coatings
www.edmundoptics.com/
2. Tutorials on thin film, filters and polarization
www.cvilaser.com/


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. Use a sketch to describe the differences between destructive and constructive
interference.

2. Two waves approach each other on a slinky (see sketch at right). One has
an amplitude of 5 cm and the other has an amplitude of 8 cm:

a. As the waves pass through each other, what is the maximum value the
amplitude can have?

b. What is the minimum value the amplitude can have?

c. Will these waves ever cancel (interfere completely destructively)?

3. The two diffraction patterns at left represent light passing through a square aperture
in an opaque screen. Which is a Fresnel (near field) pattern and which is a
Fraunhofer (far field) pattern? How do you know?

4. When white light passes through a diffraction grating, is red light or violet light
diffracted by a greater angle? Which end of the spectrum is refracted by a larger
angle when white light is passed through a glass prism?

5. You may have noticed that an FM radio station alternately fades and grows stronger
as you slowly drive along a city street. What causes this phenomenon?

6. Why is it easier to perform interference and diffraction experiments with a laser than
an ordinary light source?

7. If you place your outstretched fingers close together and look at a light source
through the cracks between them you will see faint fringes appearing between your
fingers. Are these due to interference or diffraction? How do you know?

8. A soap film in a wire loop is held vertical and viewed in white light. Just before the
film breaks, the thinning upper edge appears black. Why?

9. The formula for antireflection from a thin film on glass is the same as for a strong
reflection from a soap bubble in air. How can that be?

10. A very thin soap film will strongly reflect certain colors, depending on thickness and
viewing angle. However, if the soap is in a thick layer, no such phenomenon is seen.
Explain.

LEVEL I PROBLEMS
11. Two filters are used to transmit light centered around a wavelength of 590 nm. One
is a broadband filter with a spectral width of 100 nm and the other is a narrow band
filter with a spectral width of 1.0 nm. Calculate the coherence length of the light
transmitted by each filter. Which would be better for interferometry?


12. Michelson found the cadmium red line at 643.8 nm to be one of the most
monochromatic sources available to him. He was able to see fringes for a path length
difference of 30 cm. Calculate the spectral width of the line.

13. A narrow band filter transmits wavelengths in the range 500 +/- 0.05 nm. If this filter
is used with white light, what is the coherence length of the transmitted light?

14. A pinhole of diameter 0.5 mm is used with a sodium lamp (589 nm) as a source for a
Young's two slit experiment. Over what distance is the light spatially coherent at a
distance of 1 meter from the slit? If the double slits are placed 1 meter from the
single slit, what is the maximum slit spacing to insure that interference fringes are
just visible?

15. Two audio speakers are connected to the same sine wave source, as shown in Figure
6.2. The speakers are 0.8 meters apart. What wavelength are they producing if there
is destructive interference at a point 3 meters directly in front of one of the speakers?
What is the wavelength that produces a 60° phase difference between the waves at
the same point?

16. Monochromatic light passes through double slits, producing an interference pattern
on a screen 5 m away. The distance between the slit centers is 0.9 mm and the
distance from the center of the interference pattern to the first fringe is 0.3 cm. What
are the wavelength and color of the light?

17. Light of wavelength 680 nm falls on two closely spaced slits and produces an
interference pattern in which the fourth-order fringe is 48 mm from the central fringe
on a screen 1.5 m away. What is the separation of the two slits?

18. Monochromatic light illuminates two narrow slits 0.4 mm apart. On a screen
1.2 m away, the distance between the two first-order maxima is 4 mm. What are the
wavelength and color of the light?

19. A 3500 line/cm grating produces a third order fringe at a 22° angle. What wavelength
of light is being used?

20. What is the separation of lines in a diffraction grating that produces a 2nd order
maximum at an angle of 35° when it is used with 589 nm light?

21. White light is directed through a diffraction grating with 300 lines/mm. How many
degrees separate the first order red (700 nm) and violet (400 nm)?

22. In the diffraction grating of problem #21, do the first and second orders overlap?
(Hint: which colors do you need to consider in the first and second orders?)

23. If a soap bubble is 120 nm thick, what color will appear at the center when
illuminated normally by a white light? Assume that n = 1.34.

24. Magnesium fluoride (n=1.38) is used to form a thin film antireflecting coating on a
camera lens. Find the minimum thickness for a coating that is antireflecting at 500
nm. It is on glass with n=1.5.


25. A thin film of oil (n=1.4) floats on water. It is illuminated from above by white light,
and also observed from directly above. What visible wavelengths will be strongly
reflected if the film is 500 nm thick?

26. Two parallel glass plates are in contact and illuminated from above by 600 nm light.
The plates are slowly moved apart, and the reflected light shows darkness at certain
separations. Find the first three values of the separation distance at which this
happens.

27. A slit whose width is 5.2×10⁻⁵ m is located 1.55 m from a flat screen. Light shines
through the slit and then falls on the screen. Find the width of the central fringe of
the diffraction pattern when the wavelength of light is 850 nm.

28. Light shines through a slit of width 4.5×10⁻⁵ m and then falls on a screen. What
wavelength light will produce a central maximum with a total width of 3.2 cm on a
screen located 2 m from the slit?

29. Find the diameter of the Airy disk formed when light of 540 nm passes through a
100 µm circular opening. The disk forms on a screen 2 meters from the opening.

30. Assuming a pupil diameter of 5 mm, how far away can the headlights of a car be
resolved? Assume the headlights are 1.25 meters apart and use 550 nm for the central
wavelength emitted by the lamps.

31. In July, 1976, Viking Orbiter 1 photographed a region of Mars that featured what
some thought was a "face,” thus "proving" the existence of Martian intelligence.
Analysis indicates that this rocky feature is about 1 km across. How large a telescope
would be needed to resolve this feature from an Earth based observatory? Use
225×10⁹ m for an average Earth-Mars distance. Use 600 nm for the wavelength.

LEVEL II PROBLEMS
32. A radio station is broadcasting at 1500 kHz. The radio waves reach a car radio 10 km
from the transmitter by two paths: a direct path and a reflected path off a large
building directly behind the car. How far from the car should the building be in order
that destructive interference occur at the position of the car? (Assume there is no
phase change on reflection from the building.)

33. Determine the spectral width in hertz for a laser light whose coherence length is 20
cm. The mean wavelength is 632.8 nm. (Hint: Δf = c·Δλ/λ²)

34. The 21 cm wavelength of atomic hydrogen is often used in radio astronomy.


Suppose two dish antennas are being used to determine the diameter of a star. How
far apart should they be if the star's diameter is around 0.2° as seen from Earth?

35. Suppose a Young's double slit apparatus is submerged in water. By what factor
would the fringe separation change?


36. Two flat glass plates are in contact on one end and separated at the other end by a
piece of lens tissue. When illuminated from the top by 540 nm light, 145 parallel
dark fringes appear across the top plate from the point of contact to the edge of the
paper (including the band at the contact point). How thick is the piece of paper?

37. A soap bubble strongly reflects light of both 540 nm and 720 nm. Find the thickness
of the soap film if the index of refraction is 1.4.

38. When a double slit experiment is performed, the second minimum of the single slit
diffraction pattern falls on the fifth maximum of the double slit pattern. If the slits
are 100 microns apart, how wide are they?

39. A beam of light contains two wavelengths, one of which is 400 nm. The beam is
passed through a diffraction grating and the fifth order bright fringe for 400 nm light
falls upon the third order for the other wavelength. What is the other wavelength, and
what color is it?

40. No matter what the spacing is for a diffraction grating, 400 nm light in the third order
will overlap 600 nm light in the second order. Explain.

41. Standing directly behind a 1.0 meter wide window, a mother calls her child in to do
his homework. The primary frequency of the sound is 400 Hz. At what angle to the
normal to the window should the child be in order not to hear the sound? Use 340
m/s for the speed of sound in air.

42. How far apart can two spots be on the moon and still be resolved by a 2 m telescope
on earth? Use 250 000 miles as the distance to the moon, and 650 nm for the
wavelength of the light from the spots.

43. The analysis of diffraction by a rectangular slit in this chapter considered only one
dimension, slit width. Predict what the two dimensional diffraction pattern would be
for a rectangular opening 25 µm by 50 µm when illuminated by 700 nm light. (Hint:
see question #3.)

A pilot of private planes once remarked that
he needed Polaroid® sunglasses to block the
glare from the tops of clouds, but the same
glasses made it impossible to see the liquid
crystal display (LCD) screens on his
instrument panel. How is the glare of sunlight
reflected from water droplets related to the
operation of an LCD? In this chapter you will
learn about the polarization of light, which
has countless applications in modern optical
technology. In fact, the natural examples and
technical applications of this wave behavior
are so numerous that polarization deserves a
chapter of its own.

Sand Ripples (Chien-Wei Han)

Chapter 7

POLARIZATION
7.1 WHAT IS POLARIZED LIGHT?
Recall from Chapter 2 that light is a transverse electromagnetic wave.
The electric and magnetic fields oscillate at right angles to the direction of
propagation. Early scientists arguing for the wave nature of light at first assumed
it was a longitudinal wave. How do we now know that light is a transverse wave?
One piece of evidence is that light can be polarized. Look back at Figure 2.4 in
Chapter 2, which represents an electromagnetic wave. If all of the waves in a
beam of light have their electric fields vibrating in the same direction, the light is
described as polarized. In other words, the polarization of a light wave describes
the orientation of its electric field in space. The electric field vector may be
restricted to oscillate in a single direction or it may move in some other orderly
fashion as the wave propagates. We will begin by discussing the various types of
polarized light and then explain how polarized light is produced and used.


7.2 TYPES OF POLARIZATION


Let us begin the discussion by considering light that is not polarized, or
more correctly said, is randomly polarized. This is how we will describe light
that has no specific orientation of the electric field. Occasionally, you will see a
reference to unpolarized light, but randomly polarized is a more precise term
because the direction of vibration of the electric field varies randomly at
approximately the frequency of light. Randomly polarized light is also
sometimes called natural light. Light from the sun or ordinary light bulbs is
randomly polarized.
Plane-polarized light has an electric field that oscillates in a specific
plane perpendicular to the direction of propagation. Unlike randomly polarized
light, the direction of the electric field vibration remains constant for plane-
polarized light. There are two commonly mentioned special cases of polarization:
horizontal polarization, where the electric field vibrates horizontally as the wave
moves forward, and vertical polarization, where the electric field vibrates
vertically as the wave moves forward (Figure 7.1). A common helium neon laser
in a rectangular housing may produce vertically polarized light when the laser is
upright and horizontally polarized light when the laser is placed on its side.

Figure 7.1 - Linearly polarized light. The light in the bottom drawing is
polarized at an angle θ to the x axis and can be resolved into vector components
in the x and y directions.

Plane polarized light is also called linearly-polarized light because the


electric field vector can be pictured vibrating along a line in space. As we will
show, linearly polarized light is produced by several natural processes as well as
optical components designed for the purpose.


For plane polarized light with its electric field vibrating in an arbitrary
direction, the direction of polarization is usually described by the angle θ made
by the electric field vector and the vertical (y) axis. Like any vector you have
studied in physics, the electric field can be split into its components along two
axes that are at right angles and in a plane perpendicular to the direction of
propagation. Figure 7.1 illustrates an electric field vector and its components in
the horizontal (x) and vertical (y) directions.
The electric field vector of circularly polarized light sweeps out a circle
during each cycle of the wave. The magnitude of the electric field vector
remains constant throughout each cycle, but its direction is continuously
changing. The electric field can be imagined as spiraling around an axis as the
wave moves forward, like the threads of an advancing screw.
You might not be surprised to find that circular polarization finds many
applications in optical technology. For example, the constant magnitude of the
rotating electric field vector is useful in laser materials processing where the
absorption of laser energy depends on the uniform polarization of the light.
However, it has also been discovered that certain beetles and butterflies reflect
circularly polarized light. Biologists speculate that the reflected circular
polarization helps these insects recognize each other against a tropical
background of randomly polarized light.
Elliptical polarization is the most general type of polarization. In fact, in
the mathematical treatment of polarization, linear polarization and circular
polarization are simply the two extremes of the elliptical case. As with circular
polarization, the electric field vector rotates, but in this case the magnitude does
not remain constant. That is, the tip of the electric field vector sweeps out an
ellipse rather than a circle as the wave propagates.
In order to explain many applications of polarized light, it is essential to
understand the behavior of the components of the electric field. For linearly
polarized light, the two components of the electric field are in phase: that is, the x
and y components reach their maximum and minimum values at the same time as
shown in the bottom drawing of Figure 7.1. This leads to a total electric field
vector that oscillates between two extremes like a mass vibrating on a spring. For
circularly polarized light, x and y components of the electric field are one quarter
wave, or 90°, out of phase. When the x component is zero, the y component has
its maximum (or minimum) value and when the y component is zero, the x
component is a maximum (or minimum). Figure 7.2 shows the components and
the resultant rotating vector. The electric field vector is never zero and it spins
like a corkscrew as the wave advances.


Figure 7.2 - Components of circularly polarized light. Note that the x and y
components of the electric field are one quarter wave (90°) out of phase,
resulting in an electric field that rotates as it moves forward.
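To verify numerically that two equal components a quarter wave out of phase
trace a circle, the short Python sketch below (our own illustration) samples the
x and y fields over one cycle and confirms that the total field magnitude never
changes.

import numpy as np

t = np.linspace(0.0, 1.0, 9)            # one period, in units of T
Ex = np.cos(2 * np.pi * t)              # x component
Ey = np.cos(2 * np.pi * t - np.pi / 2)  # y component, one quarter wave behind

magnitude = np.sqrt(Ex**2 + Ey**2)      # |E| at each instant
print(np.allclose(magnitude, 1.0))      # True: the vector rotates at constant length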

7.3 WHERE DOES POLARIZED LIGHT COME FROM?


Polarization is used in many applications in which the transmission,
absorption and reflection of light must be controlled, so it is important to
understand how polarized light is produced and modified. It turns out that there
are several mechanisms that produce polarized light. We will consider how
polarized light is created and modified by selective absorption, reflection,
scattering and birefringence.

Polarization by Absorption
You are familiar with filters that remove certain wavelengths from a
beam of light. For example, a piece of red stained glass absorbs most
wavelengths from sunlight and transmits red light. The absorption of light of a
specific wavelength, expressed in terms of optical density, is of central
importance in designing laser safety eyewear. Polarizing filters work by
removing (absorbing) components of the electric field. When randomly polarized
(natural) light is passed through an optical element known as a linear polarizer,
one component of the electric field vector is absorbed, resulting in light that is
strongly polarized in one direction.
The idea of a filter that absorbs a component of the electric field may
seem unusual, but in fact some naturally occurring minerals have polarizing
characteristics. These crystals are used in technical applications and are very
expensive because a natural crystal with few or no imperfections is difficult to
find. Man-made sheet polarizers made of plastic materials are far less expensive
and readily available for less exacting applications. The first sheet polarizer was
invented by Edwin Land in 1928, when he was a 19-year-old student at Harvard
University. The material is created from a plastic sheet heated and stretched so


the hydrocarbon molecules become aligned. The plastic is impregnated with


iodine, which adheres to the hydrocarbon chains. The iodine provides electrons
that are free to move along the hydrocarbon “wires.” The component of the
electric field in the direction of the chains drives the electrons along the chain
and is thus absorbed. Electrons are unable to move in the direction perpendicular
to the chains, thus the electric field component perpendicular to the direction of
the hydrocarbon chains is transmitted. This direction is called the transmission
axis or polarization axis of the material.
Figure 7.3 illustrates randomly polarized light entering a polarizer with
its transmission axis held vertical. The horizontal component of the electric field
is strongly attenuated and only the vertical component of the electric field passes.
The transmitted light in this case is vertically polarized. To create plane-polarized
light at any other angle, you need only rotate the polarizer so the transmission
axis is along the desired direction of polarization. Because one component of the
electric field is removed, the irradiance of the transmitted light will be less than
that of the incident light. In fact, for an ideal polarizer, one half of the incident
irradiance is transmitted. Real polarizers have a gray or green tint, and remove
more than one half of the incident light.

Figure 7.3 - Polarization by absorption. The polarizing filter is similar to the
material used to make polarized sunglasses.

Malus' Law
Suppose the light incident on a polarizer is not randomly polarized as in
Figure 7.3, but rather linearly polarized in a direction different from that of the
polarizer transmission axis. What happens to the irradiance and direction of
polarization when light passes through the filter? We can show that the resulting
direction of polarization will be the direction of the polarizer's transmission axis
and the transmitted irradiance will be less than the incident irradiance.
The relationship between the irradiance of the incident and transmitted
light for a polarizing material is known as Malus' Law, named for the French
scientist who discovered it in the early 1800s. The law is easily derived by
referring to Figure 7.4 and using a little trigonometry. The vibration direction of
the electric field of the incoming light is vertical in Figure 7.4 and the polarizer


makes an angle θ with respect to the vertical. Only the component of the electric
field along the direction of the polarizer axis is transmitted by the polarizer. So,
we need to resolve the electric field into components parallel and perpendicular
to the polarizer axis. The component along the direction of the polarizer axis is

(incident electric field) × cos θ

Figure 7.4 - Malus' Law. The orientation of the electric field vector upon
leaving the linear polarizer is rotated by the angle θ and the resultant irradiance
of the transmitted beam is reduced by a factor of cos²θ. (The figure shows
vertically polarized light incident on a linear polarizer whose transmission axis
makes an angle θ with the vertical; only the electric field component parallel to
the polarization axis is transmitted, leaving light linearly polarized at angle θ.)

Irradiance is proportional to the square of the electric field, so the irradiance that
is transmitted through the filter (Et) is related to the incident irradiance (Eo) by

$$E_t = E_o \cos^2\theta \qquad (7.1) \quad \text{Malus' Law}$$

Because the polarizer passes only electric field vibrations at the angle θ, the
emerging light will be polarized in this direction.
Malus' law is also sometimes written in terms of incident and transmitted
power, since when you perform the Malus' law experiment you use an optical
power meter to determine the incident and transmitted quantities. If the beam
area does not change during the measurement, the power is proportional to the
irradiance.
What happens if a polarizer is rotated so that its transmission axis makes
a 90° angle with the direction of polarization of the incident light? For example,
suppose vertically polarized light is incident on a polarizer with a horizontal
transmission axis. The polarizer transmits only horizontal components of the
incident electric field, but the electric field of vertically polarized light vibrates
only in the vertical direction. In this case, all of the light is blocked. Malus' law
confirms this result, since the cosine of 90° is zero. Figure 7.5 is a photograph of
light transmitted through two polarizers with their transmission axes first held
parallel and then perpendicular. The light is partially dimmed by the parallel
polarizers (why?) and completely blocked when they are "crossed," or held with
transmission axes perpendicular.

Figure 7.5 - Although dimmed, the background scene is visible through the
parallel polarizers (top). Light is completely blocked when the polarizer axes are
at right angles (bottom). The photographer's reflection is faintly visible in the
photos. (A.Y.)


EXAMPLE 7.1
Vertically polarized light strikes an ideal linear polarizer oriented at 45°
from the vertical. What percent of incident light is transmitted?

Solution
Using Equation 7.1,
$$E_t = E_o \cos^2 45° = E_o (0.7071)^2 = 0.50\,E_o$$
50% of the incident light is transmitted; it is linearly polarized at 45°.
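Malus' law can be chained through any number of ideal polarizers, since the
light leaving each polarizer is polarized along that polarizer's axis. The Python
sketch below (function and values are our own) reproduces Example 7.1 and can
be used to explore arrangements like the one in Question 11 at the end of this
chapter.

import math

def transmit(E0, incident_angle, polarizer_angles):
    """Light linearly polarized at incident_angle (degrees from vertical),
    irradiance E0, passes through polarizers with the given axis angles."""
    E, pol = E0, incident_angle
    for axis in polarizer_angles:
        E *= math.cos(math.radians(axis - pol)) ** 2   # Malus' law, Eq. 7.1
        pol = axis              # light leaves polarized along this axis
    return E

# Example 7.1: vertically polarized light through one polarizer at 45 degrees
print(f"{transmit(1.0, 0.0, [45.0]):.2f}")   # 0.50, i.e., 50% transmitted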

Polarization by Reflection
Probably the most familiar application of polarizing material is glare-
reducing sunglasses. These glasses work because the light reflected from non-
conducting (or dielectric) surfaces such as water or snow is at least partially
polarized. By orienting the lens' transmission axes in the correct direction, the
polarized glare can be blocked.
Photographers use polarizing filters to reduce glare and see beneath the
surface. The photograph at the beginning of this chapter was taken through a
vertical polarizing filter. This blocked the polarized glare from the water's
surface, allowing the sand below to be clearly photographed. Figure 7.6 is
another example of using a polarizing filter to block the polarized glare in order
to see below a reflective surface.
In order to distinguish between the polarization components of reflected
light, the descriptions parallel (∥) and perpendicular (⊥) are often used rather
than x and y. These terms refer to the orientation of the electric field components
with respect to the plane of incidence, the plane containing the incident and
reflected rays and the normal to the surface (Figure 7.7). Some texts also refer to
these components as "p" (parallel) and "s" (perpendicular) polarization,
respectively. (The German word for perpendicular is "senkrecht.")

Figure 7.6 - When the polarizer is held with its axis horizontal, the polarized
glare from the surface of the water is transmitted through the polarizer (top).
When the polarizer's axis is vertical, the surface glare is blocked, revealing a log
beneath the surface of the water (bottom). (A.Y.)
Figure 7.7 - Polarization components for polarization by reflection. The plane
of incidence contains the incident, reflected and refracted rays and the normal to
the surface. (Figure labels: P∥, also called p polarization; P⊥, also called
s polarization.)


The degree to which reflected light is polarized depends on both the


angle of incidence and which polarization component is being considered.
Around 1820, Augustin Fresnel, the French scientist who successfully explained
diffraction, deduced the equations governing the reflection of light from a
dielectric interface, based on his belief that light waves were transverse
vibrations. (Remember, this was many years before Maxwell's equations
describing an electromagnetic wave!) The Fresnel reflection equations are used
to calculate how much light is reflected for each of the polarization components.
The Fresnel equations solve for the quantity called reflectivity, which is
calculated by dividing the reflected power by the incident power. That is,
reflectivity is the fraction of the incident light reflected by the surface. Since
there are two reflected components, perpendicular and parallel to the plane of
incidence, there are two Fresnel reflection equations:

$$R_{\parallel} = \frac{(n_2\cos\theta_1 - n_1\cos\theta_2)^2}{(n_2\cos\theta_1 + n_1\cos\theta_2)^2}
\qquad
R_{\perp} = \frac{(n_1\cos\theta_1 - n_2\cos\theta_2)^2}{(n_1\cos\theta_1 + n_2\cos\theta_2)^2}$$
In these equations, n₁ is the index of refraction of the incident material and n₂
is the index of refraction of the transmitting material. The angle θ₁ is the incident
angle at the material surface and θ₂ is the refracted angle in the material, which
can be found by Snell's law. It is difficult to gain an intuitive picture of the
situation by looking at these equations! The graphs of the Fresnel equations,
plotted in Figure 7.8 for the case of light incident in air (n₁ = 1) and reflecting
from glass (n₂ = 1.5), better explain the situation. The graphs show the
percentage of each polarization component reflected from the surface as a
function of incident angle.

Figure 7.8 - Fresnel reflection by glass in air.
The graphs of reflectivity shown in Figure 7.8
explain some everyday observations. First, note that the perpendicular
component is almost always reflected more strongly than the parallel component,
except for the case of normal incidence where they are reflected equally.
Therefore, light reflected from most surfaces tends to be at least partially
polarized in the direction perpendicular to the plane of incidence. This is why
polarizing sunglasses are able to block reflective glare from water or snow. Also
note that at very large angles of incidence, light of both polarizations is reflected
very strongly. You may have seen this effect looking down a long tiled corridor,
where the floor in the distance looks more highly polished than the floor near
your feet.
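The Fresnel reflectivities are straightforward to evaluate. The Python sketch
below (our own illustration) reproduces two features of Figure 7.8 for the
air-to-glass case: the 4% reflection at normal incidence, and the near-zero
parallel reflectivity around 56°, the angle labeled θB in the figure and discussed
in the next paragraphs.

import math

def fresnel_R(theta1_deg, n1=1.0, n2=1.5):
    """Return (R_parallel, R_perpendicular) for light incident from n1 onto n2
    at angle theta1_deg, using the Fresnel reflection equations."""
    t1 = math.radians(theta1_deg)
    t2 = math.asin(n1 * math.sin(t1) / n2)   # Snell's law gives the refracted angle
    Rp = ((n2 * math.cos(t1) - n1 * math.cos(t2)) /
          (n2 * math.cos(t1) + n1 * math.cos(t2))) ** 2
    Rs = ((n1 * math.cos(t1) - n2 * math.cos(t2)) /
          (n1 * math.cos(t1) + n2 * math.cos(t2))) ** 2
    return Rp, Rs

print(fresnel_R(0.0))    # (0.04, 0.04): about 4% at normal incidence
print(fresnel_R(56.3))   # R_parallel nearly zero near Brewster's angle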


Another interesting observation may be made from the graphs in Figure


7.8. Notice that at the angle labeled θB none of the parallel component is
reflected. The reflected light is totally polarized in the perpendicular direction.
This angle is called Brewster's angle, and it occurs when the reflected and
refracted rays are normal to each other. Brewster's angle can be calculated from
the index of refraction of the incident and transmitting materials:

$$\theta_B = \tan^{-1}\frac{n_2}{n_1} \qquad (7.3) \quad \text{Brewster's Angle}$$

Figure 7.9 illustrates natural light incident on glass (n = 1.5) at Brewster's
angle. Although only 15% of the perpendicular component is reflected, none of
the parallel component is reflected. The parallel component light is instead
transmitted into the material. The polarizing effect can be intensified by using
multiple reflections by many layers of reflecting material. This arrangement is
called a pile of plates polarizer.

Figure 7.9 - Light incident at Brewster's angle is fully polarized upon
reflection. (The figure shows the reflected beam carrying 0% of the parallel
component and 15% of the perpendicular component, with 100% of the parallel
and 85% of the perpendicular component transmitted.)

EXAMPLE 7.2
The index of refraction in the fused quartz window of a HeNe tube is 1.45.
The window is adjacent to air. Find Brewster's angle.

Solution
Using Equation 7.3,
$$\theta_B = \tan^{-1}\left(\frac{n_2}{n_1}\right) = \tan^{-1}\left(\frac{1.45}{1.0}\right) = 55.4°$$

Brewster's angle is often used in laser cavities for the generation of
linearly polarized light. Brewster windows placed at the ends of a laser tube, as
shown in Figure 7.10, allow the parallel component to pass with minimal loss.
The perpendicular component, however, experiences a 15% loss per pass. As the
light bounces back and forth between the two laser cavity mirrors, the
perpendicular component is quickly extinguished, resulting in linearly polarized
output.

Figure 7.10 - Brewster window in a laser.


The Fresnel equations also predict the existence of back reflection, a
source of loss in optical systems. Suppose light strikes a surface at normal
incidence, that is, θ₁ = 0°. Then cos θ₁ = cos θ₂ = 1 and both of the Fresnel
equations reduce to

$$R = \frac{(n_2 - n_1)^2}{(n_2 + n_1)^2} \qquad (7.2)$$
For an air-glass interface with n2 = 1.5, the reflectivity at normal incidence is
approximately 4% as shown in Figure 7.8. Although 4% seems like a small
number, for an optical system with many air-glass interfaces, the total loss can be
an important factor. Components may be treated with anti-reflection coatings to
minimize reflection losses. In an optical fiber system, the reflection loss in
connectors can be minimized by the use of a refractive index matching gel.
Making the values of n2 and n1 nearly equal has the effect of reducing the
numerator in Equation 7.2 and thus reducing the amount of back reflected light.
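A two-line calculation with Equation 7.2 shows how effective index matching
is; in the Python sketch below (our own illustration, with an assumed gel index
of 1.46), the back reflection drops by more than two orders of magnitude.

def back_reflection(n1, n2):
    """Normal-incidence reflectivity, Equation 7.2."""
    return ((n2 - n1) / (n2 + n1)) ** 2

print(f"air to glass: {back_reflection(1.0, 1.5):.4f}")    # 0.0400, the familiar 4%
print(f"gel to glass: {back_reflection(1.46, 1.5):.6f}")   # about 0.000183, over 200x smaller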
The 4% reflection from an air-glass interface also explains the behavior
of a window that reveals the outside world during the daytime and reflects the
inside world at night. Light entering from outside during daylight hours is
sufficiently bright such that the Fresnel reflection cannot be seen. At night, with
no external light, the Fresnel reflection gives the window the appearance of a
mirror.
Finally, it should now be clear that polarizing sunglasses only work if the
transmission axis of the lenses is perpendicular to the direction of polarization of
the glare. That is, the transmission axis should be vertical to extinguish glare
from, say, the horizontal surface of a lake. How could the same sunglasses
extinguish the glare from the vertical face of an icy glacier?

Polarization by Scattering
In the natural world, many organisms have evolved that can sense and
make use of polarized light. You might wonder why this is so: what is the source
of polarized light? As you will see in this section, the light from the blue sky is
polarized!
Consider a light wave incident on a small molecule (Figure 7.11). The
electrical charges in the molecule respond to the incident wave by oscillating in
the same direction as the electric field. The oscillating charge acts like a small
antenna and the radiation it produces is called dipole radiation. This type of
radiation is strongest in the plane at right angles to the vibration and decreases to
a value of zero directly above and below the scattering molecule. If you were
standing to the side of the radiating molecule you would see vertical oscillations
of the re-radiated electric field. However, looking down at the molecule from the
top or up at it from below you see no radiation propagating toward you.

Figure 7.11 – Top: Incident wave strikes a small molecule, causing vibration in
the vertical direction. Below: Dipole radiation pattern produced by the vertically
oscillating molecule.
Now consider a huge collection of tiny radiating dipoles—the
atmosphere around you. Figure 7.12 shows the polarization orientation of the
radiation produced by sunlight incident on one of the molecules in the
atmosphere. The actual direction of the electric field can be represented by
vertical and horizontal components so only these components are shown in the
drawing. The vertical component causes vertical oscillations of the atmospheric
dipole, resulting in re-radiated light that is mostly vertically polarized and
spreads outward in a horizontal plane (like the "donut" of Figure 7.11). The
horizontal component results in horizontally polarized light that spreads outward
in a vertical plane, as if the "donut" of Figure 7.11 were turned sideways by 90°.

Figure 7.12 - Polarization of the sky by scattering of sunlight. Left: Vertical
component. Right: Horizontal component.

What will you see if you are on the ground looking up at a group of such
molecules directly overhead? If the sun is on the horizon, the sky above you will
be strongly polarized. Only the horizontally polarized light will be visible to you
because the vertically polarized light is not strongly emitted in the direction
toward the Earth.
The part of the sky that is most strongly polarized varies with the
position of the sun and your viewing angle. Some insects, including honeybees
and ants, can sense the polarization of the sky and use its directional features to
navigate. Since the light from the sky is only polarized when the sky is clear, it
is difficult for these insects to find their way on a cloudy day (see Figure 7.13).
Some people can also sense polarized light; for more information on this effect,
called Haidinger's brush, see the references at the end of this chapter.

Figure 7.13 - The polarization of the sky is evident in these two photos. Light
from this portion of the sky is strongly polarized in the vertical direction and
thus does not pass through the polarizer when it is held horizontal. (A.Y.)

The type of scattering that produces the polarization of the sky is called
Rayleigh scattering, and it is also responsible for the blue color of the sky.
Rayleigh scattering is strongly wavelength dependent, with blue light scattered
much more than red. In fact, the intensity of scattered radiation is proportional to
1/λ⁴. This means that light with a wavelength of 375 nm is scattered 2⁴, or 16,
times more than light at 750 nm. Rayleigh scattering may be observed in a fish
tank to which a drop or two of milk has been added. If you shine a flashlight


along the length of the tank, the bulb will appear more yellow than it actually is
when you look at it through the water in the tank. From the side, the beam will be
lightly tinted blue and it will also be polarized. You can verify this by looking at
the beam through a polarizer or polarizing sunglasses.
If additional milk is added to make the water appear cloudy, the
polarization of the beam will diminish. Scattering from large scattering centers
(such as water droplets in the atmosphere) is not wavelength dependent nor is the
light polarized. Clouds and fog appear white and the light scattered from them is
not polarized. It should not surprise you then that insects that depend on the
polarization of the sky for navigation are much less likely to be out on cloudy
days.
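The 1/λ⁴ dependence is quick to evaluate; the Python sketch below (our own
illustration) reproduces the factor of 16 quoted above and compares typical blue
and red wavelengths.

def rayleigh_ratio(lam_short, lam_long):
    """Relative scattering strength; intensity is proportional to 1/lambda^4."""
    return (lam_long / lam_short) ** 4

print(rayleigh_ratio(375e-9, 750e-9))            # 16.0, the factor quoted above
print(f"{rayleigh_ratio(450e-9, 650e-9):.1f}")   # about 4.4 for blue vs. red light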

Polarization through Birefringence


An optically isotropic material is one in which the index of refraction,
and thus the speed of the light passing through it, is the same in all directions.
However, certain types of crystal such as calcite, quartz, mica and ice are
anisotropic or birefringent ("doubly refracting”). We will examine some of the
interesting properties of birefringent materials and discuss their use in optical
devices.
How does light travel through a birefringent material? Imagine a tiny
flash of light is set off in the center of a birefringent crystal, as shown in Figure
7.14. One polarization component travels outward in all directions at a constant
speed. This is the ordinary situation for the materials you have studied thus far: a
flash of light produces an expanding spherical wave. This polarization
component is called ordinary light and we may speak of the ordinary ray when
we are tracing its path through the crystal.
The polarization component at right angles to the ordinary light behaves
quite differently. In one direction, called the optic axis, this component travels at
the same speed as the ordinary component. In all other directions it travels at a
different speed, and the speed (thus the index of refraction) depends on the
direction of propagation. This odd behavior is the reason we call this
extraordinary light. Since the speed of the wavefront depends on direction, the
extraordinary light does not expand as a sphere but rather as a sort of flattened
ellipsoid.

Figure 7.14 - Expanding light in a birefringent crystal. The speed of the
extraordinary ray depends on its direction of propagation. (Figure labels: optic
axis, ordinary light, extraordinary light.)
We can draw several conclusions from Figure 7.14. If you are looking
into the crystal along the optic axis you will not notice anything unusual. The
ordinary and extraordinary rays emerge at the same time. In every other
direction, the extraordinary light emerges first. (There are some materials where


the extraordinary ray moves slower; in this case, the spherical wave in Figure
7.14 is larger than the flattened shape and the ordinary ray emerges first.)
The fact that the two polarization components travel at different speeds
and have different indices of refraction means that birefringent materials may be
used to separate polarization components. If a ray of randomly polarized light
enters the crystal shown in Figure 7.15, it splits into two rays. This crystal is cut
so that the light is directed at an angle between the optic axis and horizontal axis
shown in Figure 7.14.

Figure 7.15 - Refraction by a birefringent crystal, such as calcite. The light
forming the dots on the screen is polarized.

The ordinary ray goes straight through the crystal, as expected for normal
incidence, but the extraordinary ray bends to produce a second spot on the
screen. Rotating the crystal causes the extraordinary ray to trace a circle around
the ordinary ray. Since the light forming each dot on the screen is oppositely
polarized, if a linear polarizer is rotated in front of the screen, one dot is blocked
while the other is seen (Figure 7.16).
Figure 7.16 - The photo on the left is
of a birefringent calcite crystal placed
over the words "DOUBLE IMAGE,”
which produces two images of the
words. In the photo on the right, two
polarizing filters have been placed
over the calcite. The transmission
axes of the two filters are at right
angles, and you can tell by the black
stripe where they overlap.

Birefringent materials are used to make a variety of devices that polarize


light. By cutting the crystal at the proper angle, it is possible for light of one
polarization to be transmitted while the other polarization undergoes total internal
reflection. A second crystal may then be cut and cemented to the first to
straighten out the transmitted beam. One such device commonly used to separate
two polarized beams is the Glan-Thompson prism shown in Figure 7.17.

Figure 7.17 - Glan-Thompson prism.

Birefringence leads to other interesting (and useful) effects. It can be
induced in substances that are normally isotropic by the application of a
mechanical stress. This phenomenon can be used to study the stresses in


transparent materials. The object to be tested is placed between two crossed


polarizers and mechanical forces are applied. The light transmitted through the
object will show fringes where there are strains in the structure. The amount of
birefringence is dependent on wavelength, so the strain patterns are beautifully
colored. If you have access to two polarizing filters you can easily see patterns in
a clear plastic ruler or protractor inserted between them. Figure 7.18 on the left
shows the stress patterns in the lenses of a pair of eyeglasses. Notice that no light
is transmitted around the lenses, as expected for polarizers with their axes
crossed. On the right, pieces of clear packing tape are placed between crossed
polarizers. The tape rotates the plane of polarization by an angle that depends on
the thickness and direction of the tape. The effect is wavelength dependent,
producing beautiful clear colors.

Figure 7.18 - Eyeglasses (left) and pieces of transparent packing tape (right)
inserted between two crossed polarizers. The bottom polarizer is larger than the
top polarizer in both photos.

7.4 MODIFICATION OF POLARIZATION: WAVE PLATES


Look again at Figure 7.14. At right angles to the optic axis, the
extraordinary ray travels faster than the ordinary ray. Recall that for circularly
polarized light, the horizontal and vertical components are one-quarter
wavelength out of phase (Figure 7.2). If a birefringent crystal is cut to the proper
optical thickness, the faster extraordinary ray can be made to exit the crystal one-
quarter wavelength ahead of the ordinary ray. Since the two rays are polarized at
right angles to each other, it is possible to use a "slice" of birefringent crystal to
create circularly polarized light from linearly polarized light.
Figure 7.18 shows linearly polarized light entering a slab of birefringent
material with its polarization axis at a 45° angle with respect to the "fast" axis of
the crystal (the direction of highest light speed). The crystal is cut to a thickness
that allows one component (vertical in the figure) to emerge from the material
one-quarter cycle ahead of the other. Thus, the emerging light will be circularly
polarized. This device is called a quarter wave plate. If the angle between the
polarization of the incident beam and fast axis is not 45°, the light emerging will
be elliptically polarized or linearly polarized (for the special cases of 0° or 90°).


If Figure 7.18 is run in reverse (right to left), the quarter wave plate changes
circularly polarized light into linearly polarized light. Many applications use
quarter wave plates to change the state of polarization of light in order to control
its passage through an optical system.

Figure 7.18 – A quarter wave plate can create circularly polarized light from
light linearly polarized at 45° to the fast axis of the wave plate. In reverse, it
creates linearly polarized light from circularly polarized light. (Figure labels:
natural light; linear polarizer at 45°; linearly polarized light (solid) and its
horizontal and vertical components (dashed); quarter wave plate with fast and
slow axes; circularly polarized light.)
If the thickness of a quarter wave plate is doubled, the device is called a
half wave plate. The delay between the two polarization components is now one
half wavelength, causing the direction of polarization of the input beam to
change. If the light incident on a half wave plate is linearly polarized at an angle
of θ, the emerging light will be rotated by 2θ, as shown in Figure 7.19. A half
wave plate can therefore be used to change the direction of linear polarization.
This is useful, for example, in the case of a large laser with polarized output.
Rather than moving the laser, a half wave plate can be used to change the plane
of polarization. Like the quarter wave plate, a half wave plate is also used in
devices that control the amount of light passing through an optical system.

Figure 7.19 - A half wave plate rotates the angle of polarization of linearly
polarized light. The incoming electric field is oriented at angle θ; the outgoing
electric field is rotated counterclockwise by 2θ. (Figure labels: natural light;
linear polarizer at θ; linearly polarized light (solid arrow) and its horizontal and
vertical components (dashed); half wave plate with fast and slow axes; linearly
polarized light rotated by 2θ.)
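Wave plate action can be checked numerically with Jones vectors, a standard
formalism the text does not develop. In the Python sketch below (our own
illustration), the wave plate retards the slow-axis component of light polarized at
45°, reproducing the quarter wave and half wave behavior described above.

import numpy as np

def wave_plate(E, delay):
    """E = [E_fast, E_slow]; the plate retards the slow component by 'delay' radians."""
    return np.array([E[0], E[1] * np.exp(-1j * delay)])

# Light linearly polarized at 45 degrees to the fast axis: equal components
E_in = np.array([1.0, 1.0]) / np.sqrt(2)

E_quarter = wave_plate(E_in, np.pi / 2)   # quarter wave plate -> circular
E_half = wave_plate(E_in, np.pi)          # half wave plate -> linear, rotated by 2 x 45 = 90 degrees

print(np.round(E_quarter, 3))   # equal magnitudes, 90 degrees out of phase: circular
print(np.round(E_half, 3))      # [0.707, -0.707]: linear at -45 degrees, rotated by 90 degrees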


7.5 APPLICATION: MONOCHROME LCD DISPLAY


A liquid crystal display (LCD) is a common example of a device that
uses polarization to create a sort of light valve. Liquid crystal is a type of organic
fluid with a regular arrangement of molecules whose orientation responds to an
electric field. As shown in Figure 7.20, the crystal has the ability to rotate the
direction of linear polarization. The crystal thickness is chosen so that incident
light has its direction of polarization rotated by 90o when no voltage is applied.
(Figure 7.20 uses a small circle to indicate horizontally polarized light, that is,
polarization in and out of the page. This represents the "tail" of the electric field
vector arrow.) Thus, vertically polarized light becomes horizontally polarized
after passing through the crystal. However, when an external electric field is
applied to the crystal, the molecules align with the electric field so that the
direction of polarization does not change. When combined in series with linear
polarizers, the crystal becomes a voltage-controlled "light switch.”

Figure 7.20 - Liquid crystal "off" (left) and "on" states. With no applied
voltage, the plane of polarization is rotated by 90°. The dot indicates polarization
in and out of the plane of the page.

In an LCD display, the electrodes are segments of the display characters.


Figure 7.21 shows what happens when segments in the display are turned off.
Room light (which is randomly polarized and represented by horizontal and
vertical components) enters from the left. The first polarizer removes the vertical
component, and horizontally polarized light enters the liquid crystal. The liquid
crystal rotates the plane of polarization by 90°, creating vertically polarized light,
which can pass through the vertical polarizer behind the crystal and reflect off of
the mirror at the back of the display.
The light from the mirror follows a similar path: vertically polarized, it
passes through the rear polarizer, becomes horizontally polarized by passing
through the liquid crystal, and exits the display through the front polarizer. The
segment thus appears bright and is indistinguishable from the background. The
light exiting the display is polarized; you can see this if you look at your
calculator display through a polarizer (see Figure 7.23).
Now consider the same arrangement with voltage applied to the liquid
crystal electrodes that create the number "6” (Figure 7.22). With voltage


applied, the plane of polarization is not rotated in the activated segments, so light
is unable to pass through the second polarizer. With no light incident on the
mirror and reflected back to the observer, the segments are dark. A black number
"6" appears on a bright background of reflected light.
Question 12 at the end of this chapter is an opportunity for you to explore
another important use of polarization to control the flow of light: the laser
Q-switch.

Figure 7.21 - Segment off: light is reflected out of the display, and the display
is uniformly bright. (Figure labels: natural light; horizontal polarizer; liquid
crystal; vertical polarizer; mirror.)

Figure 7.22 - Segments on: light is not reflected by segments that are turned
on, and these segments appear dark. Where there are no "on" segments, light
passes through to the observer, who sees a bright background. (Figure labels:
natural light; horizontal polarizer; liquid crystal; vertical polarizer; mirror.)

Figure 7.23 - Light from an LCD screen is linearly polarized. What is the direction of polarization for this laptop computer screen?


REFERENCES
The mathematics of polarization is explored at basic and advanced levels
in the optics references for Chapters 4 and 6 (E. Hecht, Pedrotti and Pedrotti, and
Meyer-Arendt).
Additional references include:
1. Wehner, R. "Polarized-light navigation by insects," Scientific American,
Vol. 31(1), 1976: 106.
2. Greenler, R. Red Sunsets, Black Clouds, and the Blue Moon: Light Scattering
in the Atmosphere, Science Bag Videos available from Blue Sky Associates,
Inc.
3. Kattawar, G.W. "A Search for Circular Polarization in Nature," Optics and Photonics News, September 1994.

WEB SITES
1. All about polarization in art, in nature and in technology. The web site
includes directions for learning to see Haidinger's brush:
www.polarization.com/
2. One of many web applets that allow the user to manipulate polarization
components. The WebTOP project has a variety of applets in addition to
polarization.
http://webtop.msstate.edu/
3. Amazing colorful collages can be made with birefringent materials and
polarizers. The creator of the Polage® art form has samples on exhibit on her
web site:
www.austine.com/


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. Explain the difference between random polarization and linear polarization.
2. Suppose you look through two polarizing filters while holding one stationary and
rotating the other in front of it. How many times does the transmitted light go from
light to dark in one complete revolution of the rotating filter?

3. Suggest methods that will enable you to determine if your sunglasses are polarized
(without breaking them!).

4. When you have a medical ultrasound procedure, the technician first puts a clear gel
on the skin where the transducer will be placed. Why? (Hint: the speed of sound in
the transducer material is very different from the speed of sound in air.)

5. What happens to natural light when it is reflected from a non-conducting surface at Brewster's angle?

6. A small juice glass containing vegetable oil is placed inside a larger glass tumbler,
and the space between the two is also filled with vegetable oil. If you look through
the outer glass and the oil, the inner glass is no longer visible. (Try it! It works best
with Pyrex® glass, with other types of glass only partially "disappearing.") Explain
this disappearance of the inner glass.

7. What is the advantage of polarized sunglasses over normal tinted sunglasses?


8. Why are the lights lining the taxiways at airports blue? (Hint: Is it a good idea for
pilots to think a taxiway is a good place to land?)

9. Why isn't the sky uniformly blue from zenith to horizon?


10. Explain the pilot's dilemma in the introduction to this chapter. Can you suggest a
solution?

11. Two polarizers are crossed (transmission axes at right angles) and a third polarizer is placed between them so that its transmission axis is 45° to the transmission axes of the other two. What happens? Why?

12. The sketch at right shows a laser "Q-switch": vertically polarized light from a laser passes through a Pockels cell and then a vertical polarizer. The Pockels cell is a device that rotates the plane of polarization by 90° when a voltage V is applied. Explain how the beam's passage through the switch can be controlled by the voltage to the Pockels cell. Does turning the voltage on allow or stop the laser beam's passage?
13. A beam of natural (randomly polarized) light passes through a linear
polarizer and a quarter wave plate, is reflected by a plane mirror, passes again
through the quarter wave plate and strikes the polarizer. What happens?


LEVEL I PROBLEMS
14. Light from a laser is plane polarized in the vertical direction. If a linear polarizer is placed in the beam at an angle of 45° from the vertical axis, what percentage of the light will pass?

15. In problem #14, a second polarizer is placed between the laser and the first polarizer with its polarization axis oriented 35° from the vertical axis. What percentage of the light will pass? Does it matter if the second polarizer is in front of or behind the first polarizer?

16. Vertically polarized light is incident on three polarizers: the first is 30° to the vertical, the second is 45° to the vertical and the third is 60° to the vertical. How much of the original intensity passes through all three?

17. Randomly polarized light passes through two polarizing filters; the axis of one is vertical and that of the other is at 60° to the vertical. What fraction of the incident irradiance is transmitted? What is the orientation of the transmitted light?

18. What is Brewster's angle for light reflected by water? Light in water reflected by air?
19. Reflected light from a piece of glass is completely polarized when the reflected angle is 58°. What is the index of refraction of the glass?

20. Show that the percent of incident light reflected from an air-glass interface is 4%.
What is the significance of this back-reflected light? Use n = 1.5 for glass.

LEVEL II PROBLEMS
21. Occasionally you will see Malus' law written as

$E = K E_o \cos^2\theta$

where K is a constant that accounts for the fact that the polarizers have a gray or green tint and therefore are not ideal. Suppose that when the two polarizers are aligned, E/Eo is 0.85. What percent of the initial irradiance is passed by this polarizer when the angle between the incident polarization and the polarizer axis is 35°? Compare this to the situation of an ideal polarizer, with K = 1.

22. The critical angle for total internal reflection at a boundary between two materials is 52°. What is Brewster's angle at this boundary?

23. A glass plate is held in a laser beam so that the reflected light has maximum linear
polarization. What is the angle of refraction for the transmitted light if the index of
refraction is 1.6?

24. Light reflected from a piece of plastic has its maximum polarization when light is transmitted through the plastic at a 32° angle of refraction. What is the index of refraction of the plastic?

25. How can you rotate the plane of polarization of a vertically polarized beam of light by 90° with only a 10% loss of irradiance? Assume ideal polarizers.


26. How much more light is scattered by the atmosphere at 700 nm compared to
400 nm?

27. A certain birefringent crystal has indices of refraction of 1.508 and 1.510. How thick
must a slice of this crystal be to act as a quarter wave plate for light of 600 nm
wavelength?

You just found out you need eyeglasses. The optometrist hands you a prescription that looks like this:

Rx       Spherical   Cylindrical   Axis
Right    -2.25 D     -0.50 D       90
Left     -3.50 D     -0.25 D       180

Do you know what these numbers mean? In this chapter you will learn the basic principles underlying many optical instruments—including your eyes. In the problems at the end of the chapter, you will have the opportunity to "prescribe" lenses for two common eye problems.

4x13 Light Bulbs (Albert Yee)

Chapter 8

OPTICAL INSTRUMENTS
With an understanding of the basic laws of geometric and wave optics,
we can now describe the principles behind many instruments that rely on optics.
Some of the instruments that we will describe are similar to those you will see in
a school laboratory, not in a university or industrial setting. Research equipment
has evolved into highly sophisticated instrumentation, usually integrated with a
computer, for advanced data processing and display. In addition, where this
chapter specifies a single lens, a high quality instrument will have multiple lenses
designed to minimize various aberrations. Nonetheless, the basic operating
principles are the same as those we present here.

8.1 PRISMS
We begin with instruments that work by refraction. Prisms are the
simplest of these, since they are solid shapes with straight sides. You have no
doubt seen a prism used to spread light into its component wavelengths. A prism
can transform white light containing the range of visible wavelengths into a
rainbow-like spectrum because of the dispersive property of glass. Since each
wavelength travels at a slightly different speed, it "sees" a slightly different index of refraction. As Snell's law indicates, each wavelength will exit the prism at a
slightly different angle.
Figure 8.1 illustrates the operation of a prism spectrometer, an important
instrument in the early days of spectroscopy. The spectrometer superimposes a
wavelength scale over the optical spectrum so that wavelengths may be read
directly. The observer in this case is a person who looks into the viewing
microscope. The prism in a spectrometer is used both for refraction and
reflection: light forming the spectrum is refracted through the prism into the
viewing microscope, and the wavelength scale is reflected from the face of the
prism into the microscope.
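
The size of this wavelength spread is easy to estimate. The Python sketch below is illustrative only: it uses the standard minimum-deviation relation for a prism, δ = 2 sin⁻¹(n sin(A/2)) − A, with two assumed index values typical of a crown glass at the blue and red ends of the visible spectrum.

import math

A = math.radians(60.0)                # apex angle of an assumed 60-degree prism

def min_deviation(n, apex):
    """Angle of minimum deviation (radians) for refractive index n."""
    return 2.0 * math.asin(n * math.sin(apex / 2.0)) - apex

n_blue, n_red = 1.530, 1.515          # assumed indices near 450 nm and 650 nm
spread = min_deviation(n_blue, A) - min_deviation(n_red, A)
print(f"angular spread of the visible spectrum: {math.degrees(spread):.2f} degrees")

A spread of a degree or so is ample for separating spectral lines in the viewing microscope.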

Figure 8.1 - Prism spectrometer. Light from the source to be analyzed enters through an entrance slit and is dispersed by the prism (R = red, V = violet), while white light illuminates a wavelength scale that reflects from the prism face. Looking into the viewing microscope, you see spectral lines against the illuminated wavelength scale.

Prisms made of glass can be used only for visible light because ordinary
glass absorbs most ultraviolet and infrared radiation. However, spectrometers can
be constructed for UV by making the prism out of quartz or fluorite. Prisms made
of salt or sapphire can be used to analyze infrared wavelengths. If the radiation is
invisible to the eye, a photodetector sensitive to UV or IR radiation must be used.
The spectrum may be displayed on a computer screen or it may be recorded on
paper.
Instruments that can be viewed directly by eye are called spectroscopes;
if the spectrum is recorded, the instrument is referred to as a spectrograph or a
spectrometer. Nowadays, high quality spectrometers use reflection gratings
rather than prisms to disperse light into its component spectrum. The advantage
that reflection gratings have is that light does not need to pass through them, so
the spectrum is not altered by absorption. Relatively inexpensive grating
spectrometers use CCD cameras to capture the spectrum and the data is
transferred to a computer via the USB port for processing and display. The
spectra pictured in Figure 2.11 were taken with a USB spectrometer.
Many applications of prisms do not depend on the dispersive property of the glass but make use of total internal reflection (TIR). In these applications the prism is used to change the direction of a beam of light or change the orientation of an image. Plane mirrors could be used instead of a prism, but prisms offer important advantages. The reflecting surface of a prism is easier to keep clean than the thin film of a front surface mirror. Also, the angle between the two faces of a prism is more stable than the angle between two mirrors. Finally, total internal reflection can produce higher reflectivity than mirror reflection. Sometimes when a prism is used as a reflector the reflecting surface is coated to both protect it and enhance reflection.

Figure 8.2 shows a right angle prism used as a reflector. Notice that the orientation of the image changes after passing through the prism. Two right angle prisms may be cemented with their long faces at right angles to each other to form a Porro prism. This arrangement inverts both the horizontal and vertical parts of an image and is often used in binoculars to form an upright image.

Prisms with very small apex angles are used to produce slight changes in the path of light. In this case, light passes through the prism and its path is slightly altered by refraction. These so-called wedge prisms are used in eyeglasses to treat double vision.

Figure 8.2 - A right angle prism changes the orientation of the image. The figure entering the prism has the arrow pointing up (as viewed by the observer) and the circle to the right. When the arrow is pointing up on the figure leaving the prism, the circle is to the left.

8.2 THE SIMPLE MAGNIFIER


To examine the detail in an object, you need to bring it close to your eye to enlarge the image formed on your eye's retina. However, there is a limit to how close to your eye you can hold an object and still have it remain in focus. The limiting distance is mostly a matter of age, but for a young, normal eye it is usually around 25 cm. This distance is called the eye's near point; your eye cannot focus objects any closer than this distance without assistance. A simple magnifier (positive lens) is used to allow the object to be brought closer to the eye and still remain in focus.

Figure 8.3 - Viewing an object with the unaided eye. The object (height h), held at the 25 cm near point, subtends an angle θ.

Magnifiers purchased in department stores are usually labeled with a magnifying power, another name for angular magnification. Although transverse magnification (hi/ho) is often used with real images, when the eye looks through a lens to see a virtual image, angular magnification is preferred. In Figure 8.3, the object subtends an angle θ when it is held at a distance of 25 cm from the eye. If a positive lens is placed directly in front of the eye and the object is placed at the focal point of the lens, the angle subtended by the image is θ', as shown in Figure 8.4. Angular magnification is usually defined as

Angular magnification (8.1)  $M_\theta = \frac{\tan\theta'}{\tan\theta}$

Figure 8.4 - The object is placed at the focal point f of the lens and the relaxed eye sees the image at infinity, subtending angle θ'.

(You will also see angular magnification defined as the ratio of angles; the two
definitions are in agreement for the usual case of small angles.)


We can use this definition to obtain an expression for magnifying power based on the focal length of the lens and the eye's near point. Using the triangles in Figures 8.3 and 8.4,

$M_\theta = \frac{\tan\theta'}{\tan\theta} = \frac{h/f}{h/25} = \frac{25}{f}$

That is, the magnifying power is given by 25/f, where f is in centimeters. For example, a magnifier labeled "5x" has a 5 cm focal length.

Sometimes, the user may accommodate, or focus the lens of the eye to create a virtual image at the eye's near point. To do this, the object is placed at a point inside the lens focal point, as shown in Figure 8.5. Using the thin lens equation, it can be shown that in this case the magnifying power is 25/f + 1. The 5x magnifier with a 5 cm focal length lens would have an angular magnification of 6. Although it is possible to increase magnification by accommodating, viewing in this manner for extended periods of time may cause eyestrain.

Figure 8.5 - The eye accommodates and the image (height h', subtending angle θ') is seen at the point of closest vision, taken to be 25 cm.

In any case, the actual magnification achieved with a magnifying glass depends somewhat on the viewer, since the near point distance differs from person to person and each user views the enlarged image where it is most comfortable.
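
These two results are summarized in the short Python sketch below (a sketch only; the 25 cm near point of the text is assumed, and actual magnification varies from viewer to viewer, as noted above).

def magnifying_power(f_cm, accommodated=False):
    """Magnifying power of a simple magnifier of focal length f_cm (in cm).

    Relaxed eye (image at infinity): 25/f.
    Eye accommodated to the 25 cm near point: 25/f + 1.
    """
    m = 25.0 / f_cm
    return m + 1.0 if accommodated else m

print(magnifying_power(5.0))                      # 5.0 -> the "5x" magnifier
print(magnifying_power(5.0, accommodated=True))   # 6.0 -> viewing at the near point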

8.3 THE SLR CAMERA


The simplest camera consists of a light-tight box with a focusing
mechanism on one side and film on the opposite side at the image plane. In
Chapter 4, the focusing mechanism of a pinhole camera was a tiny hole in the
side of the box. Whether it is used to project an image that will be copied by hand
or expose photographs on film, the problem with a pinhole camera is that very
little light passes through the pinhole. When used as a real-time imaging device,
the images are quite dark; when used with film, very long exposure times are
required.
Even before light sensitive film was invented, it was discovered that
replacing the pinhole with a lens allows much more light to be gathered,
producing a brighter image. The film or viewing screen is placed at the image
plane of the lens, which of course varies with object distance. A fixed-focus
camera locates the film at the focal plane of the lens so that objects at a large
distance from the camera may be clearly imaged. If you have ever attempted
close-up photos with such a camera, you know that they are distorted and out of
focus because the image forms behind the film for short object distances.
A single lens reflex (SLR) camera with semi-automatic adjustments is
shown in Figure 8.6. Single lens reflex refers to the camera design which features a mirror that directs light from the main lens to a prism and then the
viewfinder, allowing the photographer to see exactly what will be exposed on the
film (or sensor, if it is a digital SLR camera). When the shutter is depressed, the
mirror snaps out of the way, allowing light to reach the film. If you have used
such a camera, you may have noticed that the viewfinder does not function
during the time the shutter is open because the mirror is out of position.

Figure 8.6 - Circa 1990 SLR camera showing the shutter speed, f-stop and focusing ring adjustments. The "A" on the f-stop and shutter speed adjustments allows automatic setting of these parameters.

Good quality cameras have three main adjustments, which are often set
automatically. The first adjustment is shutter speed, which determines how long
the shutter remains open to expose the film. The shutter speed must be very fast
to photograph a moving object. For action photos, speeds as fast as 1/1000 of a
second are common.
Of course, opening the shutter for a short time limits the amount of light
reaching the film. The exposure may be further controlled by adjusting the size of
the aperture, the opening in an iris diaphragm placed behind the lens. Aperture
size is usually referred to in terms of f-stop, or f/#, which is defined as

f-stop (8.2)  $f/\# = \frac{f}{D}$

where f is the lens focal length and D is the aperture diameter.


In order to increase the amount of light striking the film, the aperture
must be made larger, which means that D must increase in Equation 8.2. Thus,
small f-stops indicate large light gathering ability (aperture size). Camera f-stops
are normally numbered f/1.4, f/2, f/2.8 for the largest aperture sizes and range to
f/11, f/16 or even f/22 for the smallest aperture sizes. Although there is no
reason the aperture cannot be smoothly opened or closed, the camera is usually
arranged so that a change of one “notch,” such as from f/1.4 to f/2, changes the
area of the aperture opening by a factor of 2. Shutter speeds also change by a
factor of 2, so that increasing the f-stop by one “notch” (decreasing the aperture
area by a factor of 2) results in the same exposure as when the shutter speed is
decreased by one notch (halving the exposure time).
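
The sketch below evaluates Equation 8.2 along the standard f-stop sequence (the 50 mm focal length is an assumed example) and confirms that each one-notch step roughly halves the aperture area, since (√2)² = 2.

import math

f_mm = 50.0                                   # assumed lens focal length
stops = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]  # standard f-stop sequence

for f_number in stops:
    D = f_mm / f_number                       # aperture diameter, from f/# = f/D
    area = math.pi * (D / 2.0) ** 2           # light-gathering area of the aperture
    print(f"f/{f_number:<4}  D = {D:5.1f} mm   area = {area:7.1f} mm^2")
# Each successive area is about half the one before it, so stepping the
# f-stop one notch has the same effect on exposure as halving the shutter time.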


The third adjustment available on more expensive cameras is the focus, which is accomplished by moving the lens toward or away from the film. The
lens shown in Figure 8.6 may be adjusted so that objects from roughly one meter
away to infinity are in focus on the film. To take photos of closer objects, a
different lens must be used.
Many cameras are now available with automatic focusing, called
autofocus, which may be either active or passive. Active focusing works like
radar. A pulse of infrared light is emitted from the camera and reflected off the
object to be photographed. The pulse then returns to a sensor on the camera, and
the distance to the object is calculated from the time delay between sending and
receiving the pulse. The camera's lens is moved so that the image is in focus for
an object at the calculated distance. Passive focusing relies on information in the
image to determine if the image is in focus or out of focus. Both types of
autofocus can be "fooled.” For example, active autofocus looks at the center of
the scene for focusing information so if the object of interest is off center it will
not be in focus in the final photo. Often, autofocus cameras also have a manual
focus setting to give the photographer control in unusual situations.
Film cameras, while fairly simple to understand, are becoming less
common as digital cameras become more affordable. Digital imaging will be
discussed in Chapter 12. Digital photography uses many of the same terms as
film photography even though the actual devices (such as a shutter) are electronic
rather than mechanical.

8.4 THE HUMAN EYE


The human eye is in some ways similar to a camera—it is an enclosure
with a variable diaphragm to control the amount of entering light. The focusing
elements form an image on a continuously recording "film,” the retina (Figure
8.7).
The iris (colored part of the eye) controls the amount of light that enters
by varying the size of the pupil. The pupil appears black because very little light
is reflected back out of it from inside the eye. An exception is during some flash
photography when light reflected out of the pupil may appear as "red eye" on a
photograph.
Most of the focusing power of the eye is due to the cornea rather than the
lens. Recall that the focal length of a lens depends on the index of refraction of
the material of the lens as well as the material surrounding the lens. The interface
at the front of the eye (cornea) is air-corneal tissue while the lens is surrounded
by material whose index of refraction is close to that of the lens itself. The
cornea, therefore, has a higher refractive power than the lens.


Figure 8.7 - Schematic representation of a human eye, showing the cornea, iris, pupil, aqueous humor, lens, vitreous humor, retina, macula, fovea centralis and optic nerve.

When the eye observes light from a distant source (as shown in Figure
8.7) the muscles surrounding the lens are relaxed. To focus on nearby objects, the
muscles contract and the lens changes its shape, becoming thicker in the middle,
allowing a clear image to form on the retina. This ability to change the focus of
the eye is called accommodation.
The light sensitive retina consists of an array of photoreceptors called
rods and cones, which absorb photons and produce electrical signals that travel to
the optic nerve and then to the brain for analysis and interpretation. Cones,
concentrated mainly in the center of the retina in an area called the fovea
centralis, are used for high resolution vision such as reading, and are responsible
for color vision. Three types of cones—red, green and blue— respond to
different photon frequency ranges. To see yellow, for example, both red and
green cones must be activated.
Rods are not color sensitive. Several rods are connected to the same
nerve so that if any rod in a group is stimulated by a photon, the brain receives a
signal. Thus, rods are responsible for low light viewing.
Some common visual experiences can be explained by the color response
and light sensitivity of rods and cones and their distribution across the retina. For
example, you may have noticed that it is very difficult to distinguish the colors of
clothing in a dark closet. In this low-light condition, mainly rods are stimulated,
so there is little color perception. For the same reason, it is difficult to determine
the color of an object seen in peripheral vision. Light entering the eye obliquely
will be focused outside the central portion of the retina where the cones are most
numerous. Even now, as you are reading this page, your eyes are constantly
moving to keep the image located at the point on your retina where the cones are
most concentrated in order to resolve the detail of the print.

Eyeglasses
The "normal" eye has a focusing range from roughly 25 cm (the "near
point") to infinity (the "far point"). As you know, many people do not have eyes

168
Optical Instruments

that meet this ideal. For example, very young children can comfortably hold a
book quite close to their eyes, while older folks need to hold the newspaper at
arm's length.
Myopia, or nearsightedness, occurs when an eye can't focus properly on
very distant objects. The far point is not at infinity, but at some (much closer)
distance. The image formed by the myopic eye's focusing system falls in front of
the retina as a result of the cornea being too curved or the eyeball being too long
(or a combination of the two). This causes rays from "infinity" to converge too
quickly and focus in front of the retina, perhaps by only a millimeter or two. This
condition can be corrected by adding a diverging lens in front of the eye to allow
the rays to properly focus on the retina. Corneal sculpting (for example LASIK),
which flattens the cornea so that it is less strongly focusing, is another, more
expensive treatment.
Hyperopia, or farsightedness, is the condition where the eye can't
properly focus on nearby objects. The image in this case falls behind the retina
because the cornea is too flat or the eyeball too short. The hyperopic eye needs
assistance focusing, so converging lenses are used. Problems 11 and 12 at the end of this chapter illustrate the process of prescribing lenses for myopic and hyperopic eyes.
Presbyopia is similar to hyperopia in that it is difficult to focus on nearby
objects. The cause is different, however. Presbyopia is due to an inability to
accommodate, caused by the aging of the lens and muscles that support it. The
correction for presbyopia is also a converging lens, commonly found in
pharmacies as "reading glasses.” Presbyopia is an unavoidable part of the aging
process.
Astigmatism is a vision defect in which the cornea is not quite round—it may be elongated (something like a football) in any direction. If a person with
astigmatism looks at a target composed of radial lines, the lines in one direction
will focus while lines at right angles will be blurry. That is, the eye has different
focal lengths for different directions. The correction for astigmatism is eyeglasses
(or contact lenses) with cylindrical rather than spherical curvature.
It is not unusual (especially as people age) to need correction for more
than one of these conditions, resulting in complex lenses with superimposed
spherical and cylindrical curvatures, as well as a bifocal section for close
viewing. In some cases, trifocals with close, medium and distant vision
correction are prescribed. As you might imagine, these lenses can be quite
expensive!
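
The numbers on a prescription are lens powers in diopters, and power is the reciprocal of the focal length in meters. A minimal Python sketch, applied to the spherical corrections in the sample prescription at the start of this chapter:

def focal_length_m(power_diopters):
    """Focal length in meters for a lens power given in diopters."""
    return 1.0 / power_diopters

for eye, P in [("Right", -2.25), ("Left", -3.50)]:
    f = focal_length_m(P)
    kind = "diverging (corrects myopia)" if P < 0 else "converging (corrects hyperopia/presbyopia)"
    print(f"{eye}: {P:+.2f} D -> f = {f * 100:.1f} cm, {kind}")

Both spherical corrections are negative, so this is a nearsighted patient; the cylindrical column adds the cylindrical curvature that corrects astigmatism along the listed axis.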


8.5 OPTICAL TELESCOPES


Telescopes may be based on lenses (refracting telescopes), mirrors
(reflecting telescopes) or a combination of both. A telescope is normally used to
view objects at a large distance, so the incoming rays are parallel. Figure 8.8
shows a lens-based Keplerian, or astronomical, telescope. The light enters an
objective lens, which forms a real image at its focal point. The image is examined
by the eye lens, which acts as a magnifier to produce a greatly enlarged virtual
image of the object. Only the most inexpensive of telescopes will have a single
eye lens. In high quality telescopes, the eye lens is replaced by a complex
arrangement of several lenses known as an eyepiece. When the focal points of the
objective and eye lens coincide and the viewing eye is relaxed, the final image
appears "at infinity.” The length of the telescope is the sum of the focal lengths f0
and fe..

Figure 8.8 - Astronomical, or Keplerian, telescope. Light from a very distant object enters the objective lens, which forms an image at its focal point (a distance fo from the objective); the eye lens, a distance fe beyond that image, produces a final image at infinity.

Often, small telescopes are described by magnifying power, or angular magnification. When used to look at stars, for example, the telescope doesn't make the stars look larger; it makes the distance between them larger. In Figure 8.9, two of the rays from Figure 8.8 are drawn, showing the angles θo and θe that light makes as it enters and leaves the telescope. Assuming that fe << fo,

Magnifying power (8.3)  $M_\theta = \frac{\tan\theta_e}{\tan\theta_o} \approx \frac{f_o}{f_e}$

Figure 8.9 - Angular magnification of a telescope. A ray entering the objective (focal length fo) at angle θo leaves the eye lens (focal length fe) at the larger angle θe.


When you purchase a small telescope it often comes with several eyepieces of
different focal length. To increase magnifying power you simply substitute a
shorter focal length eyepiece.
The Keplerian telescope produces an inverted image. This is not a
problem for astronomy, but is inconvenient for bird watching. A Galilean, or
terrestrial, telescope uses a diverging lens for the eye lens, which produces an
upright image. Binoculars, which have design features in common with
telescopes, use prisms to produce an upright image.

EXAMPLE 8.1
The Yerkes Observatory telescope in Wisconsin has an objective focal
length of 19 meters. What is its angular magnification if it is used with an
eyepiece with a 10 cm focal length?

Solution
Using Equation 8.3:
M = fo/fe = 19 m / 0.10 m = 190
This means that objects that are 0.1° apart in the sky will appear to be 19° apart when seen through the telescope. By the way, this telescope has an objective lens more than three feet in diameter and its length is approximately 19.1 meters (= fo + fe).

It may surprise you to learn that magnifying power is the least important
of the three powers that characterize a telescope used for astronomy research.
Light gathering power is far more important since astronomers are looking at
very distant, dim objects. Resolving power is also important because diffraction
causes each point in an object to be imaged as an Airy disk. Both light gathering
power and resolving power are improved by making the telescope objective as
large as possible. There are limits, however, to the size of a glass objective lens.
Large diameter lenses are heavy and difficult to support and move into position.
A larger diameter lens is also thicker, absorbs more light passing through it and
must be corrected for aberrations. At 40 inches in diameter, the refracting
telescope at Yerkes Observatory is the largest in the world.
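
The three powers can be compared side by side in a short sketch. The Python below uses Equation 8.3 for magnifying power, the aperture area for light gathering, and the standard Rayleigh criterion for the angular size of the Airy disk (1.22λ/D); the telescope parameters are assumed for illustration.

import math

ARCSEC_PER_RADIAN = 206265.0

def magnifying_power(fo_m, fe_m):
    return fo_m / fe_m                        # Equation 8.3

def light_gathering_cm2(D_m):
    return math.pi * (D_m / 2.0) ** 2 * 1e4   # aperture area in cm^2

def resolution_arcsec(D_m, wavelength_m=550e-9):
    return 1.22 * wavelength_m / D_m * ARCSEC_PER_RADIAN

fo, fe, D = 1.2, 0.025, 0.15                  # assumed example telescope (meters)
print(f"magnifying power  = {magnifying_power(fo, fe):.0f}")
print(f"light gathering   = {light_gathering_cm2(D):.0f} cm^2")
print(f"resolution limit  = {resolution_arcsec(D):.2f} arcseconds")
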
It is possible, however, to make reflecting telescopes that are much
larger than refracting telescopes. Light does not pass through a mirror, so
chromatic aberration is not a problem; and since only one surface is used, the
mirror may be supported across the entire back side. Mirrors may also be
constructed with parabolic profiles, eliminating spherical aberration. In addition, the problem of light absorption by glass is eliminated. Very large reflecting telescopes have been constructed, including the two 10 meter Keck telescopes in Hawaii and the 10.4 meter Gran Telescopio Canarias in the Canary Islands.
A simple reflecting telescope is shown in Figure 8.10. This
configuration, often called a Newtonian telescope, replaces the objective lens of a
refracting telescope with a concave mirror. The mirror may be paraboloidal or
hyperboloidal in cross section. A secondary mirror directs the converging rays
toward the side of the telescope tube, where it is viewed through an eyepiece.
The secondary mirror is small and does not obstruct a significant portion of the
incoming light.

Figure 8.10 - Newtonian reflector (left) and Cassegrain telescope (right). In each, light from a distant object is collected by the primary mirror and redirected by a secondary mirror; the Newtonian is viewed through an eyepiece at the side of the tube.

Variations of the Newtonian telescope redirect the converging rays back through the primary mirror for viewing. For example, Cassegrain telescopes use a convex secondary mirror to direct the light from the primary mirror back through an opening in the primary mirror (Figure 8.10). The observer (usually a camera or other
digital imaging device) is placed on the axis of the telescope tube, behind the
primary mirror. One of the most famous examples of this type of telescope is the
Hubble Space Telescope. From its vantage point above the Earth's atmosphere, it
has an unprecedented view of the universe (Figure 8.11).

Figure 8.11 - A series of photos taken by the Hubble Space Telescope of an expanding pulse of light around a distant star named V838 Monocerotis (V838 Mon), which is at the outer edge of our Milky Way galaxy. As the pulse grows, it illuminates dust around the star. (Courtesy STScI [Space Telescope Science Institute] www.stsci.edu. Photo from the official Hubble website, www.hubblesite.org)

To further increase resolving power, several telescopes can be operated as a huge interferometer, greatly increasing the effective aperture. One such combination is the Very Large Telescope (VLT) array in Chile, which, when fully operational, will have four 8.2 meter telescopes and four 1.8 meter telescopes that can be operated separately or as a unit. International consortia of
universities have been studying proposed telescope systems with whimsical
names such as ELT (Extremely Large Telescope) and the OWL Telescope
(Overwhelmingly Large Telescope). Most new large telescopes incorporate
adaptive optics, deformable mirrors that compensate for changes in an optical
system or atmospheric conditions that cause blurring of images.

8.6 THE COMPOUND MICROSCOPE


Like the telescope, the compound microscope has an objective lens and
an eye lens (or eyepiece). However, unlike the objects viewed by a telescope,
which are very distant compared to the dimensions of the instrument, the object
for a microscope is positioned close to the objective lens, just beyond the focal
point (see Figure 8.12). The distance between the lenses is fixed and the entire
tube moves up and down to focus on the object.
The objective lens forms a real, inverted and enlarged image. The eye
lens is then used as a magnifier to examine the image. If the real image formed
by the objective lens is positioned at the focal point of the eye lens, a virtual,
inverted and greatly magnified image is seen at infinity by the relaxed eye. The
final magnification is the product of the transverse magnification provided by the
objective lens and angular magnification of the eye lens

$M = M_{objective}\,M_{eyelens}$

Figure 8.12 - Basic components of a compound microscope. The object sits at distance do, just beyond the objective focal point fo; the objective forms a real image a distance L - fe away, at the focal point fe of the eye lens (lens separation L), and the final image is at infinity.

If the image is formed at infinity, the angular magnification of the eye lens is the same as that for a simple magnifier,

(8.4)  $M_{eyelens} = M_\theta = \frac{25}{f_e}$

and the objective magnification is given by

(8.5)  $M_{objective} = \frac{d_i}{d_o} = \frac{L - f_e}{d_o}$


In Equation 8.4, the normal near point of 25 cm is assumed. (Note that fe must also be in centimeters in this equation.) Equation 8.5 makes use of the fact that the objective image forms at the focal point of the eye lens. From Figure 8.12 you can see that the image distance for the objective (di) is equal to L - fe, where L is the distance between the lenses. The total magnification is then

$M = M_{objective}\,M_{eyelens} = \frac{25\,(L - f_e)}{f_e\,d_o}$

We can eliminate the object distance from the magnification equation with an approximation based on typical microscope specifications. Since the distance between the lenses, L, is usually much greater than the eye lens focal length,

L - fe ≈ L

Also, the object is placed very close to the focal point of the objective, so

do ≈ fo

Using these approximations, the total magnification of the microscope is approximately

(8.6)  $M \approx \frac{25\,L}{f_e\,f_o}$

This approximation results in a value of M within roughly 10-20% of the exact value, depending on the microscope's dimensions.

EXAMPLE 8.2
A compound microscope has a 2.0 cm focal length eye lens and a 10 mm
focal length objective lens. Find the approximate total magnification if the
lenses are separated by 20 cm.

Solution
Using Equation 8.6,

M = (25 cm)(20 cm) / [(2.0 cm)(1.0 cm)] = 250

Note that all measurements are in cm when used in this equation.
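
It is easy to check the quality of the approximation. The Python sketch below compares Equation 8.6 against the magnification built from Equations 8.4 and 8.5, with the exact object distance found from the thin lens equation; for the numbers of Example 8.2 the shortcut overestimates by a little under 20 percent.

def microscope_magnification(fo, fe, L):
    """Return (approximate M, exact M) for lengths given in centimeters."""
    approx = 25.0 * L / (fe * fo)             # Equation 8.6
    di = L - fe                               # objective image at the eye lens focal point
    do = di * fo / (di - fo)                  # thin lens equation solved for the object distance
    exact = (di / do) * (25.0 / fe)           # Equations 8.5 and 8.4
    return approx, exact

approx, exact = microscope_magnification(fo=1.0, fe=2.0, L=20.0)
print(f"approximate M = {approx:.0f}, exact M = {exact:.1f}")   # 250 vs 212.5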

For all but the most inexpensive microscopes, both of the focal lengths in
Equation 8.6 are actually the effective focal lengths of multielement lenses,
corrected for aberrations. The specimen is illuminated by reflected light (for
opaque objects) or transmitted light (if the object is translucent). Specialized illumination techniques and advanced optical systems have been developed to improve contrast and resolution. A few of these will be discussed in Chapter 15.

8.7 INTERFEROMETERS
As their name implies, the basic principle of an interferometer is the
interference of light waves. In a wavefront division interferometer, incoming
waves are split along the wavefront, perpendicular to the direction of
propagation. Young's double slit experiment is an example of wavefront division.
Amplitude division uses a device called a beam splitter, which redirects a portion
of a beam of light, allowing the remainder to continue straight through. As we
discussed in Chapter 5, spatial coherence is important in the construction of a
wavefront division interferometer and temporal (also called longitudinal)
coherence is necessary to the operation of an amplitude division interferometer.
Now let's take a look at some common forms of interferometers, along
with their practical uses. Unfortunately, with so many types of interferometers
currently in use, we will not be able to examine them all. However, all are based
on the same principle: the interference of waves.

The Michelson Interferometer


Albert A. Michelson was America's first Nobel Prize winner in science
(1907). He developed the instrument that bears his name in the early 1880s to
measure the speed of light. Michelson’s experiments were notable because of
their extreme precision. In one of the most famous "negative results" of physics,
he found no evidence at all to support the aether theory, that an all-pervasive
medium filled the universe and supported the vibrations of light waves. Over the
next 25 years, this result and the controversy that followed it was resolved by the
special theory of relativity and the idea of the constancy of the speed of light in a
vacuum.
In its multiple variations, Michelson's interferometer is found in many
areas of technology where tiny changes in optical path length must be measured.
A Michelson interferometer can produce both circular and straight-line fringes.
These fringes are used to make very precise measurements that result from small
movements and vibrations, aberrations and/or scratches in optical components,
changes in refractive index, and other phenomena that require measurement
resolution on the order of the wavelength of light.
A Michelson interferometer most often uses a light source with good
coherence properties, such as a laser. It is possible to use a white light source,
which produces rainbow colored fringes, but the single color fringes produced by
a laser are needed for automatic detection and measurement. Figure 8.13 shows a Michelson interferometer that might be constructed in a student laboratory. As illustrated in Figure 8.13, the light from a laser first passes through a short focal length lens, which diverges the beam. If the laser beam is then collimated by a second lens, the interferometer is called a Twyman-Green interferometer. In
either case, the beam then enters a beam splitter where it is divided into two
paths: A-B and C-D. Light in path A-B is reflected by the movable mirror and
retraces its path to the beam splitter where it is reflected onto the index card
screen. Light in path C-D is reflected by the fixed mirror and travels back
through the beam splitter where it is combined with beam A-B.

Figure 8.13 - Michelson interferometer.

If the path lengths A-B and C-D are identical, the two beams interfere
constructively and form a bright spot on the index card screen. (Depending on the
type of beam splitter used and number of changes of phase upon reflection, the
center spot for equal path lengths may actually be a dark fringe, rather than a
bright fringe.) If the movable mirror is moved by 1/4 wavelength, the path
difference between the two beams is 1/2 wavelength and destructive interference
occurs, generating a dark fringe at the center of the pattern. Moving the mirror by
another 1/4 wavelength (for a total of 1/2 wavelength) makes the path difference
between the two beams one wavelength and constructive interference occurs
again. For every additional 1/2 wavelength movement of the mirror, the path
difference between the two beams will be an integer multiple of one wavelength,
generating another bright interference fringe at the center of the pattern.
The interference pattern on the screen resembles a bull's eye, with a new
fringe emanating from the center for each mirror movement of 1/2 wavelength
away from the beam splitter. If the mirror is moved toward the beam splitter, a
fringe collapses into the center for each 1/2 wavelength movement. As you can
see, if you know the distance the mirror has moved, you can determine the
wavelength of the laser light. On the other hand, if you know the wavelength, the
distance the mirror has moved can be calculated by

" !%
distance = (# of fringes) $ '
# 2&
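
This fringe-counting relation takes only a few lines to encode; the Python sketch below is illustrative, and the fringe count and displacement values are assumed.

def mirror_displacement_nm(fringe_count, wavelength_nm):
    """Distance the mirror moved, given the number of fringes counted."""
    return fringe_count * wavelength_nm / 2.0

def wavelength_nm(fringe_count, displacement_nm):
    """Laser wavelength, given a known mirror displacement."""
    return 2.0 * displacement_nm / fringe_count

print(mirror_displacement_nm(100, 632.8))     # 31640 nm = 31.64 micrometers
print(wavelength_nm(100, 31640.0))            # recovers 632.8 nm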


The interferometer can be incorporated into instruments measuring motions on the order of the wavelength of light. For example, the movable mirror
can be mounted on a vibrating part and the amplitude of vibration can be
measured by counting the change in the number of fringes. It should be noted
that measurements with a Michelson interferometer are not easy to perform.
Because of the extreme sensitivity of the interferometer to movements on the
order of the wavelength of light, any vibration of the components—or the table or
building housing it—will cause the pattern on the screen to change. In practice, it
is difficult to translate the movable mirror by hand and observe the changes in the
bull's eye pattern without a highly precise linear translation stage. In fact, simply
placing your warm hand under one of the light paths A-B or C-D will cause the
pattern on the screen to change due to the change in the index of refraction of the
air over your warm hand! When Michelson and his colleague Edward Morley
repeated the speed of light experiment to find evidence of the aether, they used
an interferometer mounted on a large stone floating on a film of mercury.
For precision measurements, it is more convenient to view straight-line fringes rather than an expanding or collapsing bull's eye. By introducing a small amount of tilt in one of the mirrors, the fringe pattern will change to a series of straight lines. The distance between adjacent lines represents approximately 1/2 wavelength of displacement. Such fringes, whose spacing represents a change in height of a surface, are called contours of equal height. For example, in Figure 8.14 the depth of a scratch in one of the mirrors can be determined by measuring the amount of deviation from the straight fringe in terms of λ. Since each fringe represents 1/2 wavelength of displacement, a deviation of about one half of the distance between fringes indicates a displacement of 1/4 of a wavelength. In Figure 8.14 the light producing the deviation in the fringe pattern traveled 1/4 wavelength farther than the rest of the light in the beam.

Figure 8.14 - Deviation in a straight-line fringe pattern produced by a scratch on one mirror. Adjacent fringes correspond to λ/2 of surface height; the kink at the scratch corresponds to λ/4.

Mach-Zehnder Interferometer
The Mach-Zehnder interferometer was described separately by Ludwig Zehnder and Ludwig Mach (a son of the physicist Ernst Mach) in the early 1890s. The Mach name is the better known, since it is used to express the ratio of an object's speed to the speed of sound: the Mach number.

In principle, the Mach-Zehnder interferometer is similar to the Michelson interferometer. However, the beam splitters and mirrors are arranged so that neither beam retraces its path (Figure 8.15). This allows the interferometer to examine transparent objects. For example, if a lens is placed in one of the two light paths, any irregularities in the lens surface will appear as wiggles in the interference fringes.

Figure 8.15 - Mach-Zehnder interferometer. Light from the source is divided at a beam splitter, directed along two separate paths by mirrors, recombined at a second beam splitter, and viewed on a screen.

Mach-Zehnder interferometers may be made quite large
and, in this case, the "transparent object" under test might be a wind tunnel. The
flow of air around an object may be visualized in the fringe pattern.
A miniature version of a Mach-Zehnder interferometer is used as an optical modulator in fiber optic systems (Figure 8.16). The function of the modulator is to impress an electrical signal onto a beam of light. The device is constructed of an electro-optic material, a substance that has a refractive index that changes when an electric field is applied. Light entering the device from an optical fiber is split along the two paths, each of which is a very narrow waveguide formed in the substrate material.

Figure 8.16 - Mach-Zehnder interferometer in an electro-optic modulator. The light is carried into and out of the device by optical fibers.
With no voltage applied, the two beams recombine in phase resulting in
maximum transmission. When the appropriate voltage is applied, the index of
refraction changes in the waveguides, and the wave in one path is slowed relative
to the wave in the other path. If the waves recombine 1/2 wavelength out of
phase, transmission is minimum. Such electro-optic modulators can be switched on and off much faster than a laser can be, in some cases over 40 billion times per second (40 GHz)! High speed fiber optic transmission systems
typically use continuous wave lasers and electro-optic modulators to control the
light intensity.
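
An idealized transfer function for the modulator is a one-line calculation. The cos² form below is the standard result for an ideal, equally split two-arm interferometer; a real device departs from it, so treat this Python sketch as illustrative.

import math

def mz_transmission(phase_delay_rad):
    """Fraction of the input light transmitted by an ideal Mach-Zehnder
    modulator when one arm is delayed by phase_delay_rad radians."""
    return math.cos(phase_delay_rad / 2.0) ** 2

print(mz_transmission(0.0))        # 1.0 -> arms in phase, "on" bit
print(mz_transmission(math.pi))    # 0.0 -> arms 1/2 wavelength out of phase, "off" bit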

Fabry-Perot Interferometer
A very simple interferometer was described in 1899 by C. Fabry and A. Perot. The Fabry-Perot interferometer consists of two highly polished, partially silvered parallel glass plates. The spacing between the two glass plates can be either fixed or variable. If the spacing is fixed, the device is usually called an etalon. Etalons are often installed in laser cavities where they act as highly selective wavelength filters. By tilting the etalon at the appropriate angle, the laser can be "tuned" to operate at a particular wavelength.

Figure 8.17 - Fabry-Perot interferometer. The non-reflective (outside) surfaces are slightly angled and coated with an antireflection layer to minimize reflection.

The interferometer of Figure 8.17 shows two parallel plates separated by a distance d. Light entering the cavity from the left undergoes multiple reflections, with a portion of the light exiting at each reflection. Multiple beam interference produces very sharp fringes and the interferometer has exceptional resolution. If the medium between the two surfaces is air, a 1/2 wavelength phase shift occurs upon reflection from each air-glass interface. Light exiting the cavity will form a maximum each time

(8.7)  $d \cos\theta = m\frac{\lambda}{2}$


Just what do we mean when we say the Fabry-Perot interferometer creates "very sharp interference fringes"? One term we use to quantify the sharpness and separation of fringes is finesse, a measure of the width of fringes compared to their separation. Finesse depends only on the reflectivity (R) of the mirrored surfaces:

(8.8)  $F = \frac{\pi\sqrt{R}}{1 - R}$

If the mirrors are highly reflective, the transmitted fringes are narrow and widely separated.

Chromatic resolving power, also called resolvance, is used to describe the performance of an instrument used to separate light into its spectral components. The definition of resolving power is

Resolvance (8.9)  $\mathcal{R} = \frac{\lambda_o}{\Delta\lambda}$

That is, how close around a central wavelength (λo) can two wavelengths be (Δλ) and still be resolved by the instrument? For a Fabry-Perot interferometer, the resolving power depends on the order of the fringe, since higher order (center) fringes are more widely separated, and on the finesse, or sharpness of the fringes:

(8.10)  $\mathcal{R} = mF$   where m is the fringe order

Combining Equations 8.7, 8.8 and 8.10 gives

(8.11)  $\mathcal{R} = mF = \frac{2d}{\lambda} \cdot \frac{\pi\sqrt{R}}{1 - R}$   assuming θ is small and cos θ ≈ 1

EXAMPLE 8.3
A Fabry-Perot interferometer has 1 cm spacing and R = 0.90. For wavelengths around 500 nm, find the maximum interference order (m) and the minimum wavelength separation that may be resolved. Assume normal incidence (θ = 0°).

Solution
a. From Equation 8.7,
m = 2d cos θ / λ = 2(0.01 m)(1) / 500 nm = 40,000
b. Using Equations 8.11 and 8.9,
ℛ = mF = (40,000) π√0.9 / (1 - 0.9) = 1.19 × 10⁶ and Δλ = λo/ℛ = 500 nm / 1.19 × 10⁶ = 0.00042 nm
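
The chain of Equations 8.7 through 8.11 is convenient to evaluate in code. The Python sketch below simply reproduces Example 8.3 (normal incidence assumed):

import math

d = 0.01              # plate spacing in meters
R = 0.90              # mirror reflectivity
wavelength = 500e-9   # wavelength in meters

m = 2.0 * d / wavelength                    # interference order, Eq. 8.7 with cos(theta) = 1
F = math.pi * math.sqrt(R) / (1.0 - R)      # finesse, Eq. 8.8
resolvance = m * F                          # chromatic resolving power, Eq. 8.10
delta_lambda_nm = 500.0 / resolvance        # smallest resolvable separation, Eq. 8.9

print(f"m = {m:.0f}, F = {F:.1f}, resolvance = {resolvance:.3g}")
print(f"minimum resolvable separation = {delta_lambda_nm:.5f} nm")   # about 0.00042 nm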


The Fizeau Interferometer and Phase Shifting Interferometry


Modern interferometers, coupled with powerful computer programs, can
be used to measure surface details, homogeneity of optical materials and motions
of machine parts with amazing precision. For example, Figure 8.18 shows the
surface of a ground and polished valve seat, imaged by an interferometric system.
Notice that the part is approximately 15 mm in diameter
and the measured surface variations are on the order of a
few microns! The interferometer that routinely produces
this type of surface measurement combines relatively
simple physics with digital image processing and
complex mathematical computation.
One important industrial measurement is to
compare the form of a manufactured optic to a reference
optic of known quality. A particularly simple
interferometer for this purpose involves a plane-parallel
optical flat, certified for flatness on one side, placed in
near contact with a nominally flat test object. This creates a type of amplitude-division interferometer, similar to Newton's rings (Figure 2.1), whose fringes are contours of equal height for the object surface (Figure 8.19). Optical flats are common features of many high-precision machine shops as well as optical shops. A light box with a narrow spectral width filter increases the fringe contrast and the measurement range. This practical interferometer has its origins in the work of Armand Fizeau (1819-1896). The colored interference fringes formed by thin films, for example an oil film on water, are often referred to as Fizeau fringes.

Figure 8.18 - Interferometric measurement of the ground and polished surface of a valve seat. (Photo courtesy of Zygo, Corp. www.zygo.com)

Figure 8.19 - A tilted optical flat reference placed close to an object comprises a practical, elementary version of the Fizeau interferometer. The straight-line fringes correspond to contours of equal object height. (Courtesy of Zygo, Corp. www.zygo.com/)

A laser source with large longitudinal coherence makes it possible to more widely separate the object and reference optics, enabling the extension of
the optical flat test to a highly accurate and versatile metrology tool. In the laser
Fizeau interferometer shown in Figure 8.20, a filtered HeNe laser beam
illuminates the Fizeau cavity. The reflected light images onto a rotating glass diffuser disk, which reduces the spatial coherence of the image to prevent further
unwanted interference effects. An electronic camera records the image after
selective magnification by a zoom lens.

Figure 8.20 - A laser Fizeau interferometer includes an alignment scheme as well as a phase-shifting mechanism for automated data acquisition and analysis. (Courtesy of Zygo, Corp. www.zygo.com)

Interferometers are highly sensitive instruments that would be difficult to set up were it not for the align view capability. In the align view, a secondary
optical path directs the converging light cones returned from the reference and
object surfaces to twin spots on the alignment screen shown in Figure 8.20. The
location of the spots on the alignment screen corresponds to the relative tilt
between the reference and object surfaces. The user brings the interferometer
into alignment by adjusting tip-tilt stages until the two spots overlap at the center
of the alignment screen. Switching back to measurement mode then shows the
expected interference fringe pattern characteristic of closely-parallel surfaces.
One problem with the basic Fizeau interferometer is that the difference between a high point and a depression is not immediately evident from the fringes forming the surface contour map. In order to form an accurate three-dimensional representation of the surface form, phase shifting interferometry
(PSI) is used. The piezo-electric transducer (PZT phase shifter) shown in Figure
8.20 introduces precise motion of the reference surface over a range of roughly
one micron. A computer stores images from the electronic camera during the
motion of the reference optic, resulting in a sequence of interference images with
precise shifts in phase between them. This combination of precise shifting of the
interference phase offset and electronic data acquisition provides a powerful
phase interpolation technique for surface profiling.


Since the surface height at each pixel depends on the interference phase θ, the central task of PSI is to estimate the interference phase at each image point. This is accomplished by inspection of N time-dependent interference values gi captured at each camera pixel for a sequence of i = 1, 2, ... N phase shifts αi. These interference values depend on both the interference phase θ and the phase shifts between measurements, αi, and can be calculated from

(8.12)  $g_i = C\left(1 + V\cos(\theta + \alpha_i)\right)$

In this equation, V is the fringe visibility (which quantifies the difference between dark and bright fringes) and C is a constant.

A computer uses specialized algorithms to solve for the interference phase θ, independent of V and C. A simple PSI algorithm for N = 4 steps and a phase shift of π/2 between camera frames is

(8.13)  $\theta = \tan^{-1}\left(\frac{g_1 - g_3}{g_2 - g_4}\right)$

The surface height for the particular pixel or image point examined is then

(8.14)  $h = \frac{\theta\,\lambda}{4\pi}$

where λ is the wavelength of the source light, for example, 633 nm for a helium neon laser.

EXAMPLE 8.4
A certain pixel of an 8-bit (256 gray level) camera measures the following sequence of intensities as the reference optic is moved in four steps:
g1 = 151, g2 = 201, g3 = 105, g4 = 55
What is the surface height at the point imaged by this pixel? Assume a helium neon laser source.

Solution
Using Equation 8.13, we solve for the interference phase angle θ:

θ = tan⁻¹[(g1 - g3)/(g2 - g4)] = tan⁻¹[(151 - 105)/(201 - 55)] = 0.3 radians

By Equation 8.14, we calculate the surface height corresponding to this camera pixel:

h = θλ/(4π) = (0.3 radians)(633 nm)/(4π) = 15 nm
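
The four-step algorithm is only a few lines of code per pixel. The Python sketch below implements Equations 8.13 and 8.14, using the two-argument arctangent so the phase lands in the correct quadrant; a real instrument applies this to every pixel and must also unwrap 2π phase jumps, a step omitted here.

import math

def psi_height_nm(g1, g2, g3, g4, wavelength_nm=633.0):
    """Surface height at one pixel from four phase-shifted intensity samples."""
    phase = math.atan2(g1 - g3, g2 - g4)            # Equation 8.13
    return phase * wavelength_nm / (4.0 * math.pi)  # Equation 8.14

print(f"{psi_height_nm(151, 201, 105, 55):.1f} nm")  # about 15 nm, as in Example 8.4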


The computer applies the same algorithm to each of the camera pixels,
which may number a million or more, to generate a final 3D image of surface
height over the entire test optic. A typical phase shifting measurement has better
than 1 nm of height resolution!
The large and flexible air gap in a Fizeau interferometer facilitates the
testing of non-flat optics, prisms, diffraction gratings and other components.
Figure 8.21 shows a configuration for measuring spherical surfaces using
converging and diverging wavefronts.

Figure 8.21 - A laser Fizeau interferometer is a flexible tool for testing both flat surfaces (see Figure 14.9) and spherical surfaces, as shown here. In this case, the optical flat is replaced by a transmission sphere. (Courtesy of Zygo, Corp. www.zygo.com)

The laser Fizeau interferometer has been adapted to the measurement of everything from contact lens molds, DVD lenses, small camera lenses and so on, to optical telescope mirrors several meters in diameter (Figure 8.22). Modern instruments perform even more complex measurement tasks, such as measuring multiple surfaces in an assembled optical system and measuring aspherical optical elements for photolithography lenses with an uncertainty of less than 0.05 nm.

Figure 8.22 - The laser Fizeau comes in many sizes, from microscopes for small optics (left) to large-aperture systems (right). (Courtesy of Zygo, Corp. www.zygo.com/)


REFERENCES
1. Jaffe, B. Michelson and the Speed of Light, Anchor Books, Doubleday & Co,
1960.
2. "Optical Metrology," Encyclopedia of Optics, Vol.3 (2004, Wiley-VCH
Publishers, Weinheim) p.2085-2117.
3. Malacara, D. (Editor). Optical Shop Testing, 2nd Edition, John Wiley and
Sons, 1992.

WEB SITES
1. A description of SLR cameras and how they work
www.nikon.co.jp/main/eng/society/index.htm
2. A listing of the world's biggest telescopes and telescope systems
www.seds.org/billa/bigeyes.html
3. A comprehensive tutorial on all types of modern microscopes, plus several
galleries of microscopic "art.”
www.microscopy.fsu.edu/
4. Information on industrial interferometers, plus many photos of
interferograms
www.zygo.com/


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. What is the function of rods and cones? Why is it hard to see colors in the
dark?
2. When you look at a very dim star, you sometimes need to look to the side of the star
and view it with peripheral vision. Why?

3. Explain how your eye is like a camera. What parts of your eye function as aperture,
lens and film?

4. What is an "f stop" or "f/#"? For a given lens, what does it mean to increase the f-number? Why is a low f-number lens sometimes called a "fast" lens?

5. Two telescopes have the specifications listed below. Which would provide higher
magnification? Which gives better resolution?
#1: fo = 100 cm    fe = 0.95 cm    objective diameter = 18 cm
#2: fo = 90 cm     fe = 0.90 cm    objective diameter = 20 cm

6. Why is light gathering power important for a telescope?

7. What is an advantage of a Mach-Zehnder interferometer over a Michelson interferometer?

8. What does chromatic resolving power measure? For what type of instrument is it an
important characteristic?

9. A laser beam collimator is a telescope in reverse. That is, light enters the "eye lens" and exits the "objective lens." Explain the operation of such a device.

LEVEL I PROBLEMS
10. A refracting telescope has an objective diameter of 102 cm. The objective has a focal
length of 19 m and the eyepiece has a focal length of 10 cm. Calculate the total
magnifying power of the telescope and estimate the length of the telescope.

11. The "normal" far point for a human eye is infinity—you can see objects as far away
as you wish. A myopic (near-sighted) person has a far point that is much closer. The
eyeglasses for a myopic person take an object very far away (at "infinity") and form
an image at the person's own far point. Suppose a myopic person has a far point of
1.25 meters (she can't see anything clearly beyond 1.25 meters). Use the thin lens
equation to determine the focal length of her eyeglasses.

Hint #1: the object distance is infinite.


Hint #2: the image will be a virtual image (she looks through the lens of the glasses).
What is the power of these corrective lenses in diopters?


12. The "normal" near point for a (young adult) human eye is roughly 25 cm. (Actually,
very young children are comfortable holding reading material very much closer than
that, while most folks over 40 need to hold print at arm's length or more.) A
hyperopic or presbyopic (far sighted) person has a near point considerably more than
25 cm. The corrective lenses for these conditions form an image at the person's own
near point when the object is held at a comfortable 25 cm distance. Suppose a
presbyopic person has a near point of 1 meter. Use the thin lens equation to
determine the focal length of reading glasses that will allow him to hold a book 25
cm from his eye.

Hint #1: the object is the book, at a distance of (negative) 25 cm.


Hint #2: the image will be virtual (he looks through the lens of the glasses). What is
the power of these corrective lenses in diopters?

13. A polished surface is examined using a Michelson interferometer, with the polished
surface replacing one of the mirrors. The fringe pattern is observed using HeNe light
of 632.8 nm. Fringe distortion (shifts) over the surface are found to be less than 1/4
the fringe separation at any point. What is the maximum depth of polishing defects
on the surface?

LEVEL II PROBLEMS
14. An eyepiece is made of two positive thin lenses, each of focal length f=20 cm,
separated by a distance of 16 mm. Where must a small object viewed by the eyepiece
be placed so that the eye receives parallel light from the eyepiece?

15. A Rayleigh interferometer has two chambers placed side by side. Since the light does not retrace its path, the path length difference is simply L(n2 - n1). Assume that the chambers are 30 cm long and completely evacuated. As one chamber is slowly filled with a gas, 240 fringes are seen to cross the field of view. If the light used is 500 nm, what is the refractive index of the gas?

16. One arm of a Michelson interferometer used with 600 nm light contains a 2.5 cm
long tube that is first evacuated and then slowly filled with a gas. Twenty-five (25)
fringes cross the reference line as the tube is filled. Find the index of refraction of the
gas.

17. The resolving power of a diffraction grating is given by Nm, where N is the number
of lines intercepted by the beam and m is the diffraction order.

a. Compare the resolving power of a Fabry-Perot interferometer with 1 cm spacing and R = 0.85, used at normal incidence and in the maximum order, to a diffraction grating with 500 lines/mm used in the second order. Assume both are being used with wavelengths centered around 550 nm and that the beam diameter is 5 mm.
b. How close can two wavelengths be and still be resolved with each instrument?

Who invented the laser? Many give credit to two Bell
Labs researchers, Arthur Schawlow and Charles
Townes, who described the possibility of laser action
in the infrared and visible portions of the spectrum in
their paper Infrared and Optical Masers, published in
1958. Some say Gordon Gould, an unconventional
graduate student at Columbia University who fought
a bitter patent battle for 30 years, was the inventor of
key components required for successful laser
operation. Others give credit to Theodore Maiman,
who in 1960 constructed the first successful optical
laser. In fact, Albert Einstein first postulated stimulated emission in 1917—over 40 years before Schawlow, Townes, Gould and Maiman! You might say that Albert Einstein is the father of the laser.

Fiber Laser (IPG Photonics, www.ipgphotonics.com)

Chapter 9

LASER PHYSICS AND CHARACTERISTICS

9.1 INTRODUCTION
Although Einstein first articulated the governing principle of laser
operation in 1917, the laser did not become a reality until 1960. The first laser
was a pulsed ruby laser developed by Theodore Maiman at the Hughes Research
Labs. It consisted of a synthetic ruby rod surrounded by a helical xenon flash
lamp. Intense white light energy from the flash lamp was converted into a beam
of red laser light with a wavelength of 694.3 nm.
As you learned in Chapter 1, laser light has unique properties that
distinguish it from any other light source. Unlike the light from an ordinary bulb,
candle or even an LED indicator light, laser light is monochromatic—"one color." That is, the laser emission consists of an extremely narrow band of wavelengths.


Laser light can be highly directional and, unlike a flashlight beam, it can travel
over great distances with minimal divergence or “spreading." Laser light has very
high irradiance—power/area; in fact, many lasers have higher irradiance than the
sun! Finally, the light from a laser is often highly coherent, both spatially and
temporally.
These properties allow laser light to be used in almost every area of
modern life: manufacturing, medicine, communications, consumer products and
environmental sensing. For example, the intense energy contained in a laser
beam can be focused down to a very small spot, much smaller than the diameter
of a human hair, allowing the energy to be concentrated enough to cut, weld, and
drill holes with extreme accuracy in steel and other hard materials. Laser beams
are used in medical applications such as laser scalpels, retinal reattachment, tattoo
removal, photodynamic cancer therapy and LASIK surgery. They are used as
light sources in fiber optic communications, where billions of bits of information
can be transmitted over distances of hundreds of miles through hair-thin glass
fibers. Lasers are used for ultra-precise measurements in manufacturing, for
example, testing optics for telescopes and semiconductor fabrication systems for
manufacturing computer chips. Semiconductor lasers, no larger than a grain of
sand, are found in every CD and DVD player and CD ROM drive.

9.2 EMISSION AND ABSORPTION OF PHOTONS


In Chapter 4 we described light in terms of rays that could be “bent”
through refraction or redirected by reflection. This geometric model of light
allows us to explain how mirrors, lenses and prisms work. In Chapters 6 and 7
we described light as an electromagnetic wave propagating through space. The
wave model of light allows us to examine phenomena such as polarization,
interference and diffraction. In order to explain how laser light is generated
however, we need to think of light in terms of its quantum unit—the photon. The
interaction of light and matter on an atomic or molecular level forms the basis for understanding laser action. In the next section, we will examine the processes and conditions in which atoms absorb and emit photons.

Figure 9.1 – Spontaneous emission of a photon. (An electron makes a downward transition between energy levels; a photon with Ephoton = E3 - E2 is emitted.)

Spontaneous Emission

As you learned in Chapter 2, photons are produced when an atom makes a transition from a higher energy level to a lower energy level. If there is no external stimulus to cause the transition, the process is called spontaneous emission. The average time required for an excited atom to emit a photon without external stimulus is called the lifetime of the transition. The lifetime of an atomic state is analogous to the half-life of a radioactive element. In a large collection of atoms excited to the same energy level, one half will make a

downward transition in a time interval equal to the lifetime of the state, typically
about 10 nanoseconds. The emitted photon travels in a random direction, with
energy equal to the difference between the two energy levels in question. The
frequency of the emitted photon is given by

(9.1)    f = (E2 - E1) / h

where E2 - E1 is the photon energy, h is Planck's constant (h = 4.14 × 10⁻¹⁵ eV•s or 6.625 × 10⁻³⁴ J•s) and f is the frequency of the photon in hertz.

EXAMPLE 9.1
An atom makes a transition from energy level E2 (-4.6 eV) to E1 (-8 eV).
Find the frequency and wavelength of the emitted photon.
Solution:
Use Equation 9.1 to find the frequency of the photon:

f = (E2 - E1) / h = (-4.6 eV - (-8 eV)) / (4.14 × 10⁻¹⁵ eV•s) = 3.4 eV / (4.14 × 10⁻¹⁵ eV•s) = 821 THz

Solving for λ using c = λf (Chapter 2, Equation 2.3),

λ = c / f = (3 × 10⁸ m/s) / (821 THz) = 365 nm
The emitted photon is from the violet end of the visible spectrum.
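The same arithmetic is easily scripted. Below is a minimal Python sketch (the helper names are ours) that reproduces Example 9.1; working in eV units with h = 4.14 × 10⁻¹⁵ eV•s avoids converting energies to joules.

```python
H_EV = 4.14e-15   # Planck's constant in eV*s
C = 3.0e8         # speed of light in m/s

def photon_frequency_hz(e2_ev, e1_ev):
    """Equation 9.1: frequency of the photon emitted in a transition E2 -> E1."""
    return (e2_ev - e1_ev) / H_EV

f = photon_frequency_hz(-4.6, -8.0)   # transition of Example 9.1
wavelength = C / f                    # from c = lambda * f
print(f"f = {f / 1e12:.0f} THz, wavelength = {wavelength * 1e9:.0f} nm")  # 821 THz, 365 nm
```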

Absorption of a Photon by an Atom


Atoms can absorb photons of the same wavelength of light as they emit during spontaneous emission. If an atom in a lower energy state is struck by a photon of energy equal to the difference between the lower state and one of the atom's upper level states, the energy from the photon will be absorbed by the atom, forcing it into the higher energy state. After absorption, the photon will cease to exist (see Figure 9.2). For this type of photon absorption to occur, two conditions must be satisfied:
1. The energy of the incoming photon must be equivalent to the energy difference between the two energy levels in question, and
2. The atom absorbing the photon must be in the lower of the two energy levels.

Figure 9.2 – Absorption of a photon. (An electron makes an upward transition between energy levels; Ephoton = E3 - E2.)
Atoms and molecules may absorb other forms of energy besides light
energy (photons); these will be discussed in Section 9.3. In every case, the atom

(or molecule) can increase its energy only by moving from one discrete energy
level to another.

Radiationless Transitions
Sometimes an atom makes a transition from a higher to a lower energy
state without emitting a photon. In gases, this can occur when the excited atom
collides with an atom in some lower energy state and energy from the excited
atom is transferred to the less energetic atom. As a result, the excited atom moves
to a lower energy level and the atom that was struck moves to a higher energy
level.
In solids, energy in an excited atom is transferred to surrounding atoms in
the form of vibrational energy (called phonons). An increase in vibrational
energy typically causes a temperature increase within the material. Such so-called
radiationless transitions usually involve small energy changes and short
lifetimes. Often, radiationless transitions can be used to control the population
of atoms in a given energy state during the operation of a laser.

Stimulated Emission
Although you might think that stimulated emission is very similar to
spontaneous emission (both result in the production of a photon), the process also
has something in common with absorption. In both stimulated emission and
absorption, an incoming photon is required. The difference is that for stimulated
emission the atom must be in the upper of the two energy states involved, while
for absorption the atom is in the lower of the two states.
When a photon collides with an atom in an upper energy state, the photon
“stimulates” or forces the atom to release a second photon. For this process to
occur, the energy of the incident photon must be equal to the difference in the
energy between the two energy states involved. The photon released in the
stimulated emission process has the same energy, frequency, wavelength,
direction and phase as the incident photon. That is, the process of stimulated emission results in two identical photons traveling in the same direction (see Figure 9.3). These two photons are then capable of colliding with two more atoms, producing four identical photons, and so on. The laser provides light amplification by the stimulated emission of radiation (laser). Under the right conditions, stimulated emission can be repeated many times, causing an exponential increase in the number of photons emitted.

Figure 9.3 – Stimulated emission. (A photon collision triggers a downward transition; two identical photons with Ephoton = E3 - E2 emerge.)


For more than 40 years after Einstein's description of stimulated
emission, it was assumed that the process could never be made to work in a
practical device. Remember that if an incident photon encounters an atom in the
lower energy state, the photon is absorbed. Stimulated emission can only occur if
the incident photon strikes an excited atom. Therefore, for stimulated emission to
be more likely to happen than absorption, more atoms must be in the upper
energy state than lower energy state. This is called a population inversion, and it
does not occur naturally. Without an external energy source to excite the atoms to
a higher energy state, more atoms are always found in the lower energy states and
the material will simply absorb the incident light.
How is a population inversion achieved? We said that an excited atom
spontaneously emits a photon after a short time—the lifetime of the excited state.
To create a large population of atoms in an excited state, it is necessary for the
lasing atom to have an excited energy level with a relatively long lifetime. This
energy level, known as a metastable state, allows atoms to remain in the excited
state long enough for population inversion to be achieved. As we shall see, such
metastable states are necessary for laser action.

9.3 BASIC LASER COMPONENTS


A typical laser consists of three basic elements illustrated in Figure 9.4.
To produce light amplification by stimulated emission of radiation, a laser needs
a suitable gain medium with a metastable state to support a population inversion,
an energy pump to excite the medium and a resonator or optical cavity, which
reflects photons back and forth through the medium.
Lasers are often named after the gain medium, which may be a gas,
liquid or solid. For example, the carbon dioxide (CO2) laser uses energy levels of
CO2 molecules to produce infrared radiation, and the helium neon laser makes
use of the atomic energy levels of neon atoms in a low pressure gas to produce
visible light. The gain medium determines the wavelength of the radiation, from
UV to far IR. The most important requirement of the gain medium is its ability to
support a population inversion between two energy levels of the laser atoms.
For some lasers, the gain medium consists of two parts: a host medium
and the lasing atoms. The Nd:YAG (neodymium: yttrium aluminum garnet)
laser, for example, consists of a YAG crystal host doped with neodymium ions.
Neodymium ions are also found as lasing atoms in the Nd:YLF laser, where the
host crystal is yttrium lithium fluoride.
Once we have found a gain medium with a metastable state to support a
population inversion, we need a method of energizing the atoms or molecules of
the medium into the excited state. In laser technology, this external energy source

is usually called a pump. Pumps can be optical, electrical, chemical or thermal.


For example, in gas lasers such as a helium neon or CO2, the most commonly
used pump is an electrical discharge. In solid-state lasers such as Nd:YAG and
ruby, optical pumping from a flash lamp or another type of laser is used to create
a population inversion. Diode lasers are often used for this purpose.

Figure 9.4 – Schematic of laser operation.

Figure 9.4 illustrates the process of amplification of light energy as
additional photons are created along the length of the gain medium. It is clear
from the figure that if the length of the gain medium is increased, there will be
more excited atoms in the path of the photons and more photons will be
produced. But it should be equally obvious that increasing the length of the laser
has its limits! An optical cavity or resonator causes light to reflect back and forth
through the gain medium, effectively increasing the length of the path of light
through the gain medium.
The resonator is an optical feedback device consisting of a pair of
carefully aligned plane or curved mirrors centered along the optical axis of the
laser system. One of the mirrors is chosen with a high reflectivity close to 100%
(HR mirror). This is accomplished using multilayer high-reflectivity coatings
applied to the mirror that not only enhance reflectivity, but also attenuate
unwanted wavelengths. For example, to make a helium neon laser that produces
green light, the mirrors are highly reflective at the appropriate green wavelength
but transmit other wavelengths, not allowing feedback of the unwanted laser
lines. The other mirror, known as the output coupler, is a partially reflective
mirror with a reflectivity somewhat less than 100% to allow some of the energy
to leave the cavity and form the laser beam. The geometry of the mirrors and
their separation determines the structure of the electromagnetic energy within the
laser cavity. In general, the greater the volume of light contained between the
two mirrors (called the mode volume), the greater the gain of the laser. Several
common cavity configurations are shown in Figure 9.5.


Plane-parallel mirror configurations are often used in pulsed solid state (crystal) lasers because their high mode volume makes efficient use of the gain medium. One of the problems with the plane-parallel cavity, however, is that it is difficult to align. Spherical mirror cavities are the easiest to align, but have the smallest mode volume. You can see in Figure 9.5 that the spherical mirror cavity uses only two cone-shaped portions of the medium, since the light is focused at a point in the center of the cavity. Continuous wave (CW) dye lasers are equipped with this type of cavity because the high energy density of a focused beam is necessary to cause efficient stimulated emission. In some high power lasers, however, spherical mirror cavities are undesirable because of the intense power density that is generated inside the cavity from the focused beam. This problem can be corrected by using mirrors whose radii are longer than L/2, where L is the length of the cavity.
In a hemispherical cavity, one of the mirrors is spherical; the other is a plane mirror. This configuration is used mostly with low-power helium neon lasers because of low diffraction loss, ease of alignment and reduced cost, because the high-reflectivity coating needs to be applied to only a small portion at the center of the HR mirror.
Finally, in the unstable resonator configuration, the beam emerges around the smaller convex mirror rather than through it. The output of the laser emerges in the shape of a doughnut because the smaller mirror blocks the center of the beam. Although the beam has the expected hole in the middle just beyond the output coupler, farther away from the laser the hole fills in and a relatively high quality beam is produced. This resonator configuration is often used in high powered CO2 lasers where the output power level of the laser usually exceeds the maximum power rating of a transmissive output coupler. Further beam-shaping optics can be used if needed to improve the quality of the beam.

Figure 9.5 – Examples of resonator configurations (plane parallel, hemispherical, spherical, and unstable, with its concave and convex mirrors). The mode volume (the illuminated portion of the medium) is shaded.

9.4 ENERGY TRANSITIONS IN LASER ACTION


Quantum physics tells us that photons of specific energy must be created
in the laser cavity and must interact with excited atoms to create additional
photons via stimulated emission. The new photons, identical in wavelength,
direction and phase, bounce back and forth between the mirrors of the resonator
before escaping to form the laser beam. How exactly is this accomplished?
We can break the process into several steps as shown in Figure 9.6. The
laser depicted in the figure is called a four-level laser because four energy levels
(or sets of energy levels) are involved in the production of the population
inversion. Of course, a real atom has many more energy levels than this, but we

are limiting our discussion to the four levels that are involved in the production of laser light. Referring to the numbered steps in Figure 9.6:

Figure 9.6 – Production of laser light. (1: the pump excites atoms from the ground state E1 to the pumping band En; 2: a radiationless transition fills the metastable state E3; 3: a stimulating photon with Ephoton = E3 - E2 collides with an excited atom and two identical photons emerge; 4: decay to the ground state.)

1. Energy from an appropriate source is coupled into the laser medium. The energy is sufficiently high to excite a large number of the atoms from the ground state E1 to several excited states, collectively labeled En, also called the pumping band.
2. Once at these levels, some atoms spontaneously decay back to the ground state E1. (This is not shown on the diagram.) Many, however, decay rapidly via a radiationless transition to state E3, the metastable state. Since E3 has a relatively long lifetime, a sort of "bottleneck" is created and the number of atoms in state E3 increases. As pumping continues, the number of atoms at E3 (we will call this number N3) grows large compared to the number of atoms at E2 (called N2). A population inversion means that N3 > N2, a requirement for light amplification by stimulated emission.
3. Occasionally an atom will spontaneously decay from level E3 to level E2.
Remember that the lifetime of a state is a statistical quantity and some atoms
may spontaneously decay in much less time than the average lifetime. These
so-called seed photons have energy equal to the difference between E3 and
E2. The seed photons collide with excited atoms and produce additional
photons by stimulated emission. If the photons are traveling in a direction
parallel to the optical axis of the cavity, they continue to collide with other
excited atoms, producing additional photons as they are reflected back and
forth between the mirrors forming the cavity.
4. After the atoms transition from E3 to E2, they quickly drop into energy state
E1. (E2 is a short lifetime state.) Quickly removing atoms from the lower
lasing state (E2) helps maintain the population inversion. The optical cavity
confines and directs the growing number of photons back and forth through
the laser medium, continually exploiting the population inversion to create
more and more stimulated emissions. Upon reaching a threshold level, a
certain fraction of the laser light wave is coupled out of the cavity through
the output coupler mirror.
The laser described in Figure 9.6 works because of the relative lengths of
the excited state lifetimes. Excited states at higher energies than the metastable
state have short lifetimes, allowing the population on the upper laser energy level
(E3) to increase. The lower laser energy level (E2) is also a short lifetime state,

quickly emptying the lower laser energy level. Fortunately, there are many atoms
and molecules that have such an arrangement of excited energy states, resulting
in a large variety of useful laser gain media and wavelengths.
It is also possible to have a three-level laser. In this case the level labeled
E1 in Figure 9.6 is missing. After making the laser transition from energy level E3
to E2, the atom will remain in the lower lasing state and the population inversion
soon fails unless intense pumping is continued. A three-level laser is usually
operated in a pulsed mode since the atoms must be rapidly reenergized to form a
population inversion each time the lower level population builds up and stops the
stimulated emission process.

9.5 LASER OUTPUT PARAMETERS


Despite similarities in the way light is produced in different types of
lasers, the beam produced by different lasers varies greatly in wavelength, from
ultraviolet to infrared, and in power, from a few milliwatts to tens of thousands of
watts. The output of a laser can be described in terms of its temporal
characteristics, the wavelength and longitudinal coherence properties, and its
spatial characteristics, the transverse profile of the laser beam. Together these
characteristics determine parameters important to laser users, such as focused
spot size, divergence, focused power density, beam quality and coherence.

Temporal Properties
The temporal properties of a laser are dependent on the laser gain
medium and physical size of the resonator or optical cavity. In Chapter 2, we
described the spectrum produced by a glowing low pressure gas as a series of
bright lines whose wavelengths are determined by the energy levels of the gas
atoms. In fact, even the sharpest spectral line contains a range of wavelengths,
called the fluorescent linewidth (Figure 9.7). This spectral width may be only a
fraction of a nanometer, but it is never zero.
Any gain medium will amplify light over the fluorescent linewidth for the energy level transition involved. The width and height of this gain curve depend on the type of active medium, its temperature and the magnitude of the population inversion. Each type of laser exhibits its own characteristic gain curve. When discussing lasers we usually use frequency units rather than wavelength units to express spectral widths.

Figure 9.7 – Fluorescent linewidth (gain curve). (Gain plotted against frequency; the curve is centered at fo (λo) and has width Δffl.)


Typical fluorescent linewidths (expressed in frequency units) range from roughly 1.5 GHz for neon (HeNe laser) to about 30 GHz for neodymium (Nd:YAG laser).
Longitudinal Modes
To complete the description of the wavelength output characteristics of a
laser we need to take a moment to describe standing waves. You may have
studied standing waves on strings—the stable patterns that a string makes when
you tie one end and shake the other back and forth (Figure 9.8). If you shake
slowly, the string vibrates in one large loop. Shake it faster (at twice the
frequency for one loop) and there are two equal length loops. Increase the
shaking frequency to three times the first value and you will see three loops.
Each loop in the standing wave pattern is one half of a sine wave; that is, the string can only vibrate in whole numbers (integers) of one half of a wave. At frequencies between these values, stable patterns do not exist.
The stable vibrating patterns of a string are called frequency modes. Modes are labeled (m = 1, 2, 3...) according to how many loops are in the pattern. Note that there are always nodes, places where the amplitude of vibration is zero, at each end of the string. (We are assuming that your hand moves very little as you shake the string.) Halfway between any two nodes is an antinode, the name given to the place in the standing wave pattern where the amplitude is a maximum. As you increase the frequency of shaking the end of the string, additional nodes (and antinodes) appear.
Studying the standing wave configurations shown in Figure 9.8 we can write a general equation for the wavelength of the "mth" mode in the series (λm), in terms of the length of the string (L):

λ1 = (2/1) L
λ2 = (2/2) L
λ3 = (2/3) L
...
λm = (2/m) L

Figure 9.8 – (Top) Modes of a vibrating string (m = 1 through 5). (Bottom) Spectrum chart for standing waves on a string, with a "tooth" at each of λ1, λ2, λ3, ... λm.

As shown in Figure 9.8, the wavelengths may be plotted on a "spectrum chart," which has the appearance of a comb with a "tooth" at each of the standing wave wavelengths.
As is the case with any wave, the velocity (v), wavelength (λ) and frequency (f) of a standing wave are related by v = λf. This means that the frequency of the mth standing wave mode is given by

(9.2)    fm = m (v / 2L)    m = 1, 2, 3...


Standing wave frequencies can also be plotted on a spectrum chart and, like the
wavelength spectrum chart, it will also resemble a comb with equally spaced
"teeth."
Like mechanical standing waves on a string, electromagnetic standing
waves are supported by laser resonator geometry. The standing waves that a
particular laser cavity can support depend on the length of the cavity and the
index of refraction of the gain medium. The index of refraction is important
because the relationship between the resonant wavelengths and frequencies
depends on the speed of light in the medium. If we substitute v = c/n into
Equation 9.2 for the speed of light in the gain medium we get

fm = m (c / 2nL)    m = 1, 2, 3...

We are usually interested in the frequency difference between two
adjacent modes, called the mode spacing. The mode spacing can be found by
subtracting the frequency of the mth mode from the frequency of the (m+1)th
mode.

(9.3)    Δf = (m + 1)(c / 2nL) - m(c / 2nL)
         Δf = c / 2nL    (Longitudinal Mode Spacing)

Note that the mode spacing depends only on the length of the laser cavity and on
the index of refraction of the gain medium.

EXAMPLE 9.2
A HeNe laser has a cavity length of 20 cm and an output wavelength of
632.8 nm. Find the mode spacing Δf. Assume the index of refraction of the
medium (low pressure gas) is 1.0.

Solution
Using Equation 9.3,
Δf = c / 2nL = (3 × 10⁸ m/s) / (2(1)(0.20 m)) = 750 MHz
For the HeNe laser, the frequency mode spacing calculated here is
equivalent to wavelength spacing of around 1 picometer!
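A one-line function captures Equation 9.3, and the picometer claim follows from differentiating c = λf, which gives Δλ ≈ λ²Δf/c. A short sketch (the function name is ours):

```python
C = 3.0e8  # speed of light, m/s

def mode_spacing_hz(cavity_length_m, n=1.0):
    """Equation 9.3: longitudinal mode spacing c/(2nL)."""
    return C / (2 * n * cavity_length_m)

df = mode_spacing_hz(0.20)                          # 20 cm HeNe cavity
dlam = (632.8e-9) ** 2 * df / C                     # equivalent wavelength spacing
print(f"{df / 1e6:.0f} MHz, {dlam * 1e12:.1f} pm")  # 750 MHz, ~1.0 pm
```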

What wavelengths are actually emitted by a laser? Both the fluorescent
linewidth (gain curve) and the standing waves resonant in the cavity must be
considered. The wavelengths produced by the laser are those for which the gain

curve and the cavity frequency spectrum overlap, as shown in Figure 9.9. From
Figure 9.9 it is apparent that the output of a laser is not truly "monochromatic," but rather composed of a series of closely spaced wavelength "spikes," or modes.
The modes are centered around the output wavelength of the laser, and the
distance between the modes is given by Equation 9.3.

Figure 9.9 – Laser cavity modes. The wavelengths in the laser output (bottom) depend on both the gain curve (fluorescent linewidth) and the cavity resonance modes (top).

If we could remove all but one of the modes, would the laser then be
monochromatic? (Such so-called single mode lasers do exist and will be
discussed in Chapter 10.) In fact, the individual modes themselves have a non-
zero width. The approximate width of a mode, called the bandwidth of a single
mode, depends on the cavity mode spacing and amount of light lost in the cavity.
The bandwidth of a single mode is approximately equal to

(9.4)    Δfbw = Δf (T + Loss)

where Δf is the mode spacing, T is the fraction of incident light transmitted through the output coupler, and Loss is the cavity loss—the fraction of circulating light lost on each trip around the cavity. Cavity losses are due to factors such as misalignment, diffraction, scattering, reflection or dirty optics. Both T and Loss are expressed as numbers between 0 and 1, corresponding to the range 0-100%.

Figure 9.10 – Spectral characteristics of a multimode (top) and single mode (bottom) laser. The vertical scale is logarithmic.

Figure 9.10 shows the output power versus wavelength for two semiconductor lasers as measured by an optical spectrum analyzer, a grating-based instrument that shows optical power as a function of wavelength. The top graph shows the multimode output of an infrared diode laser, with modes spaced
1.2 nm apart. The total spectral width is around 5 nm. The single mode output of
the laser at the bottom has a measured spectral width of only 0.09 nm! Although

it appears that there are several significant cavity modes for this laser, the vertical
scale is logarithmic so the small side peaks are actually lower than the central
peak by a factor of 10⁵.

EXAMPLE 9.3
A HeNe laser has a mode spacing of 275 MHz. The output coupler
transmission is 1.7% and the cavity loss is 0.5%. Find the bandwidth of a
single mode.

Solution
Using Equation 9.4,
Δfbw = Δf (T + Loss) = (275 × 10⁶ Hz)(0.017 + 0.005) = 6.05 MHz
Like mode spacing, bandwidth is often expressed in frequency rather than
wavelength units. In Example 9.3, the mode width is only 0.000006 nm!

Finally, we would like to estimate the number of modes in the laser output. The fluorescent linewidth (Δffl) defines the total spectral width of the laser output and the individual modes are spaced Δf apart, so the approximate number of modes in the output beam can be determined by

(9.5)    n = Δffl / Δf

EXAMPLE 9.4
A Nd:YAG laser has a mode spacing of 250 MHz. The laser fluorescent
linewidth of Nd:YAG is 30 GHz. Approximately how many modes are
there in the laser output?

Solution
Using Equation 9.5,
n = Δffl / Δf = (30 × 10⁹ Hz) / (250 × 10⁶ Hz) = 120 modes
The laser output has approximately 120 closely spaced wavelength modes.
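Equations 9.3 through 9.5 chain together: the cavity length fixes the mode spacing, the mode spacing and cavity losses fix the single-mode bandwidth, and the fluorescent linewidth divided by the mode spacing estimates the mode count. A short sketch using the values of Examples 9.3 and 9.4 (the function names are ours):

```python
def single_mode_bandwidth_hz(df_hz, t, loss):
    """Equation 9.4: bandwidth of one cavity mode; T and Loss are fractions."""
    return df_hz * (t + loss)

def number_of_modes(linewidth_hz, df_hz):
    """Equation 9.5: approximate number of longitudinal modes."""
    return linewidth_hz / df_hz

print(f"{single_mode_bandwidth_hz(275e6, 0.017, 0.005) / 1e6:.2f} MHz")  # 6.05 MHz (Example 9.3)
print(f"{number_of_modes(30e9, 250e6):.0f} modes")                       # 120 modes (Example 9.4)
```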
Longitudinal Coherence Length
The longitudinal coherence length introduced in Chapter 6 is a measure
of the temporal coherence of a laser beam and is the distance along the beam

over which the laser light has sufficient coherence to produce visible interference
fringes. Longitudinal coherence length is important whenever the amplitude of a
laser beam is divided by a beam splitter and then recombined to form an
interference pattern. In most applications that involve interferometry, the
coherence length of the laser light must be known.
In Chapter 6, the longitudinal coherence length was also referred to as
temporal coherence length, since distance and time are related by the equation
x=vt. Equation 6.3 expressed longitudinal coherence length in terms of the
spectral width given in wavelength units. We can also express the longitudinal
coherence length of a source in terms of frequency.

(9.6)    lt = c / Δf

In Equation 9.6, Δf is the spectral width of the source in frequency units, often
called the laser's total bandwidth.

EXAMPLE 9.5
The total bandwidth of a multi-mode HeNe laser is 1.7 GHz. Find the
coherence length.

Solution
Using Equation 9.6,

lt = c / Δf = (3 × 10⁸ m/s) / (1.7 × 10⁹ Hz) = 0.18 meter
!f 1.7x10 9 Hz

Why is coherence length important to a laser user? If the HeNe laser of
Example 9.5 is used to set up a Michelson interferometer, the 18 cm coherence
length places a limitation on the path length difference of the two arms of the
interferometer. The difference in optical path length must not exceed 18 cm in
order to form high contrast interference fringes.
Figure 9.11 – Increasing coherence length with an etalon. (The etalon sits inside the cavity between the HR mirror and the active medium; unwanted wavelengths are rejected.)

Since the longitudinal coherence length of laser light is inversely proportional to the bandwidth, reducing the bandwidth can extend the coherence length. In some lasers, this is accomplished by allowing only one mode to oscillate in the cavity. This process reduces the bandwidth of the output beam from multi-mode, with a bandwidth that is essentially the fluorescent linewidth, to a single mode of bandwidth Δfbw. Single mode operation can be

achieved in some lasers by installing an etalon in the laser cavity (Figure 9.11).
You will recall that an etalon is a form of Fabry-Perot interferometer that acts as
a very sharp wavelength filter.

EXAMPLE 9.6
An argon laser has a fluorescent linewidth of 11 GHz. The bandwidth of a
single cavity mode is 7.0 MHz. Find the coherence length with all possible
modes oscillating and coherence length if only one mode oscillates.

Solution
For all modes oscillating, the bandwidth is the fluorescent linewidth. Equation 9.6 gives

lt = c / Δffl = (3 × 10⁸ m/s) / (11 × 10⁹ Hz) = 0.027 meters

For one mode oscillating the bandwidth is the width of a single cavity mode
and
lt = c / Δfbw = (3 × 10⁸ m/s) / (7.0 × 10⁶ Hz) = 43 meters
For this laser, single mode operation increases coherence length by more than 1500 times!
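Expressed in code (assuming Equation 9.6 applies directly), the comparison is stark: cutting the bandwidth by three orders of magnitude buys three orders of magnitude in coherence length.

```python
C = 3.0e8  # speed of light, m/s

def coherence_length_m(bandwidth_hz):
    """Equation 9.6: longitudinal coherence length lt = c / delta-f."""
    return C / bandwidth_hz

print(f"{coherence_length_m(11e9):.3f} m")   # all modes oscillating: 0.027 m
print(f"{coherence_length_m(7.0e6):.0f} m")  # single mode: 43 m
```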

Spatial Properties
The spatial distribution of a laser beam is very important in many
applications. For example, laser cutting, drilling and welding require well-
collimated beams of a particular diameter that diverge slowly. Almost all
precision laser applications require a uniform spatial distribution of irradiance
with no "hot spots.”
Transverse Electromagnetic Modes
Unlike the standing waves on a string, the electromagnetic field inside a
laser cavity exists in a three dimensional configuration. Frequency modes
leading to many wavelength spikes in the output are due to standing waves
traveling along the central axis of the laser cavity. However, standing waves can
exist across the cavity as well. The electromagnetic field variations perpendicular
to the direction of travel of the waves are called transverse electromagnetic
modes or TEM modes. The transverse electromagnetic modes of the beam depend

on the construction of the resonator cavity and mirror surfaces. TEM modes
appear as bright spots or "blotches" in the laser output (Figure 9.12).
Transverse electromagnetic modes are specified as TEMmn where m is
the number of dark fringes crossing the horizontal axis and n is the number of
dark fringes crossing the vertical axis. The centers of the dark bands in the
intensity patterns of the TEM modes are actually nodes in the electric field within
the laser cavity. Some lasers will operate on several transverse modes at the
same time, creating a beam that has "hot spots" and "dark spots" in regions of
high and low irradiance (Figure 9.13). Careful alignment of the cavity mirrors
can often minimize or eliminate the hot spots.
Figure 9.12 – Some transverse electromagnetic modes (TEM00, TEM10, TEM03, TEM11, TEM30, TEM*01). In the TEM*01 mode, the TEM10 and TEM01 modes oscillate in the cavity at the same time with a phase difference of 90°.

If you examine Figure 9.12, you will notice that the TEMoo mode is the
only transverse mode that has no nodes or dark spots. It is also smaller in size
than the other, higher order TEM modes. Therefore, it is possible to eliminate all
modes except TEMoo by restricting the cavity diameter or inserting a cavity
aperture. TEMoo is called the uniphase mode because it is the only mode in which
laser light is spatially coherent. This coherence property allows the TEMoo mode
to have lower beam divergence than other modes.

Figure 9.13 – Output profile of a laser operating with several TEM modes (left) and CCD photo of the beam spot produced by such a laser (right, with hot spots labeled). A false color scale is used to represent laser irradiance, revealing this laser's multimode output.

TEM00 Mode: Beam Shape and Focusing


Since the TEMoo mode is the most desirable for many applications, we
will investigate its properties further. A laser operating in the TEMoo mode has a
so-called Gaussian, or "bell shaped", energy distribution. Consider an
experiment where a laser beam is scanned by a very small probe (such as an
optical fiber) that moves across the beam, transverse to the direction of

propagation. At the center of the beam, the irradiance is Eo. As the distance from
the center of the beam (r) increases, the irradiance E falls off exponentially
according to the equation
(9.7)    E = Eo exp[-2(r/ω)²]    (Gaussian Beam Profile)

The graph of this equation is a curve shaped somewhat like a bell (See Figure
9.14).

Figure 9.14 – Gaussian beam profile. (The irradiance peaks at Eo on axis and falls to Eo/e² at the 1/e² points.)

So what is the diameter of the beam? The beam has no sharp edge; it
simply becomes less and less bright as you go out from the center. We will need
to decide on a convention for measuring the radius of the beam. Let us say that
ω is the radius of the beam. Then if r = ω in Equation 9.7,
E = Eo e⁻² = 13.5% Eo.
The beam radius is then found by determining the distance from the center where the irradiance drops to 13.5% of the maximum value, found at the center of the beam. Note that 1/e² = 0.135.
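A two-line check of this convention, assuming Equation 9.7:

```python
import math

def gaussian_irradiance(r, w, e0=1.0):
    """Equation 9.7: irradiance of a TEM00 beam at radius r (beam radius w)."""
    return e0 * math.exp(-2 * (r / w) ** 2)

print(f"{gaussian_irradiance(r=1.0, w=1.0):.3f}")  # at r = w: e^-2 = 0.135 of the peak
```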
Beam Divergence for a Gaussian Beam

Figure 9.15 – Laser beam divergence. (The beam spreads from the laser with full angle θ.)

Because of diffraction, all laser beams spread, or diverge, as they travel through space (Figure 9.15). In order to describe how a laser beam diverges, we must consider the beam at a distance far enough from the laser aperture to be in the far field. We will define the far field as the distance from the laser greater than 100 D²/λ, where D is the diameter of the output aperture. In the far field, the full angle beam divergence of a Gaussian laser beam is given by

(9.8)    θ = 1.27 λ / D    (Gaussian Beam Divergence)

Equation 9.8 describes the diffraction limited beam divergence, the
minimum divergence possible when light of wavelength ! is diffracted as it
passes through an effective aperture of diameter D. The equation is valid only for
beams with Gaussian intensity profiles. Beams with other TEM modes included
will spread more than Gaussian beams.


Focusing Laser Beams


You may remember that when a uniform beam passes through a small
hole, the central (Airy) disk is surrounded by bulls-eye fringes. This is not the
case with a Gaussian beam, where the focused spot also has a Gaussian energy
distribution. The radius of the smallest spot to which a Gaussian laser beam can
be focused by a positive lens is known as the diffraction limited spot size (Figure
9.16). For a beam of divergence θ focused by a lens of focal length f, the spot size (diameter) S is given by

(9.9)    S = f θ

Figure 9.16 – Focusing a laser beam.

Substituting Equation 9.8 into Equation 9.9, we can express the diameter of a focused spot in terms of laser wavelength λ and effective output diameter D:

(9.10)    S = 1.27 λ f / D    (Focused Spot Size)

Equation 9.10 incorporates facts about diffraction you learned in Chapter 5. Diffraction effects are most pronounced with long wavelengths and small
apertures. It is difficult to squeeze long wavelength light to a small focal point.
This means that short wavelength (UV) lasers must be used for micromachining,
for example, drilling micron-sized holes. You can also see from Equation 9.9 that
a long focal length lens will result in a larger spot size. If the light needs to be
focused far from the laser because of the size or position of the work piece, a
long focal length lens may produce too large a spot size for the job.

EXAMPLE 9.7
A 100 watt Nd:YAG laser has a beam divergence of 2.0 milliradians. The
beam is focused by a lens of focal length 7 cm. Find a.) the spot size and b.)
irradiance in watts/cm2.

Solution
Using Equation 9.9,
S = f θ = (7 × 10⁻² m)(2 × 10⁻³ radians) = 140 µm
Recall that irradiance is given by: E = Power/Area. Calculate the beam
area, assuming a circular beam cross section. The beam diameter is 140 µm,
so the radius is 70 µm.
A = πr² = π(0.007 cm)² = 1.54 × 10⁻⁴ cm²
Calculating irradiance we get
E = 100 watts / 1.54 × 10⁻⁴ cm² = 650 kW/cm²
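Because focused irradiance grows as the inverse square of the spot diameter, this calculation is worth automating. A minimal sketch reproducing Example 9.7 (the function name is ours):

```python
import math

def focused_irradiance_w_per_cm2(power_w, divergence_rad, focal_length_m):
    """Spot diameter from Equation 9.9 (S = f*theta), then E = P/A."""
    spot_diameter_cm = focal_length_m * divergence_rad * 100.0  # m -> cm
    area_cm2 = math.pi * (spot_diameter_cm / 2.0) ** 2
    return power_w / area_cm2

print(f"{focused_irradiance_w_per_cm2(100, 2e-3, 0.07):.2e} W/cm^2")  # ~6.50e+05 = 650 kW/cm^2
```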


Mode Quality Factor (M2)


The Gaussian laser beam is an ideal situation. Real laser beams differ
from a perfect Gaussian energy profile by varying amounts. In the late 20th
century, as lasers became more common on the shop floor, it was necessary to
quantify how different a particular laser's output was from a perfect Gaussian
beam. The Mode Quality Factor or M2 ("M-squared") was developed so that
manufacturers could compare the quality of a focused higher order TEMmn beam
to a diffraction limited TEMoo Gaussian beam. Because higher order modes
diverge more than the TEMoo mode, they cannot be focused to as small a spot.
From Equation 9.10, the focused spot size for a Gaussian beam with effective output diameter D is given by

Soo = f (1.27 λ / D)

We use the subscript "oo" as a reminder that this is the spot size of an ideal
TEMoo beam. For a TEMmn higher order mode, the spot size for a beam with the
same output diameter D focused by a lens of focal length f is given by

(9.11)    Smn = M² Soo = M² (1.27 λ f / D)

That is, the focal spot size of a higher order beam is M2 times the diffraction-
limited value. M2 is 1.0 for a Gaussian beam, but it is greater than 1.0 for all other
TEM modes, which means that the focused spot size of any other beam will be
larger than that of a TEMoo beam. Because irradiance at the focal spot varies with
the square of the spot diameter, M2 is an important parameter specification for
laser users.
Note that when laser manufacturers refer to "focused spot size" they are
usually (but not always) referring to the diameter of the spot. It is important to
read carefully to determine if diameter or radius is indicated.
Introduction to Gaussian Beam Optics
Gaussian beam optics can be used to describe how a laser beam diverges
from the point on the beam where the beam radius is a minimum. The minimum
radius may be at the laser aperture, or it may be inside or outside the laser cavity.
This analysis is especially important in applications such as laser materials
processing, where a focused beam must be precisely located on the work piece.
The radius of a Gaussian beam at a distance (z) along the beam depends
only on the wavelength and minimum beam radius (ωo). Figure 9.17 shows a diverging beam focused by a lens. ωo in this case is located at the focal spot of the lens. The radius a distance z away from the minimum radius is ω(z).


Figure 9.17 – Gaussian beam divergence. The radius ω(z) at any distance z depends only on the minimum radius ωo (produced by the laser itself or by a lens as seen here) and the wavelength.
We state without proof that the radius ω(z) for a Gaussian beam is found from the equation

(9.12)    ω(z) = ωo √(1 + (λz / πωo²)²)
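Equation 9.12 is easy to explore numerically. The sketch below (the function name is ours; it assumes the beam waist sits at z = 0) shows a 50 µm CO2 laser waist growing to √2 times its minimum radius roughly 0.74 mm away, consistent with the depth of focus computed in Example 9.9.

```python
import math

def beam_radius_m(z_m, waist_m, wavelength_m):
    """Equation 9.12: Gaussian beam radius a distance z from the waist."""
    spread = wavelength_m * z_m / (math.pi * waist_m ** 2)
    return waist_m * math.sqrt(1.0 + spread ** 2)

# 50 um waist, 10.6 um CO2 laser: radius ~sqrt(2) * 50 um at z ~ 0.74 mm
print(f"{beam_radius_m(0.74e-3, 50e-6, 10.6e-6) * 1e6:.1f} um")  # ~70.7 um
```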

When a laser beam is focused, diffraction prevents the beam from converging to an infinitely small point. The beam gradually narrows to a focal spot, then widens beyond the narrowest spot as shown in Figure 9.17. The depth of focus (DOF) is twice the distance from the focal spot to the location on the beam axis where the cross sectional area of the beam is twice the focal spot area. Since the area of the beam spot is proportional to beam radius squared, this is the same as saying DOF is twice the distance from the minimum spot radius to the location on the beam axis where the radius of the diverging beam reaches √2 ωo (Figure 9.18).
(Figure 9.18).

Figure 9.18 – Depth of focus. (The beam has radius ωo at the waist and reaches √2 ωo a distance zo on either side, so DOF = 2zo.)

The DOF is important when a beam is used for cutting. Outside of this
region, there may be insufficient power density in the widening beam to cut the
material. It can be shown that the DOF for a Gaussian beam is given by

(9.13)    DOF = 2zo = 2πωo² / λ

Like the beam radius itself, the depth of focus depends only on the focal spot size
and wavelength of the laser light.


EXAMPLE 9.9
Find the DOF for a CO2 laser focused to a 100 µm spot size.

Solution
Using Equation 9.13,

DOF = 2zo = 2πωo² / λ = 2π(50 µm)² / (10.6 µm) = 1.48 mm

For a longer depth of focus, you must use a longer focal length lens. Of
course, that gives a larger focal spot size as well.
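The tradeoff is easy to quantify with Equation 9.13: doubling the focal spot size quadruples the depth of focus. A sketch (the function name is ours):

```python
import math

def depth_of_focus_m(spot_diameter_m, wavelength_m):
    """Equation 9.13: DOF = 2*pi*w0^2 / lambda, with w0 = spot radius."""
    w0 = spot_diameter_m / 2.0
    return 2.0 * math.pi * w0 ** 2 / wavelength_m

print(f"{depth_of_focus_m(100e-6, 10.6e-6) * 1e3:.2f} mm")  # 1.48 mm (Example 9.9)
print(f"{depth_of_focus_m(200e-6, 10.6e-6) * 1e3:.2f} mm")  # 5.93 mm: 2x the spot, 4x the DOF
```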

Continuous Wave and Pulsed Lasers


Lasers may be divided into two broad groups: continuous wave (cw)
lasers and pulsed lasers. A cw laser is one whose power output undergoes little
or no fluctuation in time once the laser has warmed up. For example, a HeNe is a
cw laser; once the laser has stabilized, the output power is considered constant.
With a pulsed laser, however, the laser output is emitted in short bursts.
Solid state ruby, Nd:YAG and CO2 gas lasers are commonly operated in pulsed
mode. From physics, you may remember that Power = Energy/Time. When a
laser is pulsed, the energy contained in the beam can be concentrated and
delivered in a shortened time interval, thereby increasing the output power. For
example, if an Nd:YAG laser delivers 1 joule of energy in a 0.1 second pulse, the
peak power of the pulse is 1 J/0.1 s = 10 watts. The same 1 joule of energy delivered in a 1 millisecond pulse results in 1 kilowatt of peak power.
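Peak power is simply pulse energy divided by pulse duration; a two-line check of the figures above:

```python
def peak_power_w(energy_j, duration_s):
    """Peak power = pulse energy / pulse duration."""
    return energy_j / duration_s

print(peak_power_w(1.0, 0.1))   # 10.0 W
print(peak_power_w(1.0, 1e-3))  # 1000.0 W = 1 kW
```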
There are three common methods of pulsing a laser. Perhaps the most
obvious method is to pulse the excitation mechanism or power supply, that is,
turn the power supply on and off. This method produces relatively long duration
pulses of around one tenth of a millisecond to several milliseconds. The pulse
obtained in this way does not rise and fall continuously but is composed of many
small pulses, each lasting about 50 ns.
Large pulses of short duration may be obtained by inserting a Q-switch in
the laser cavity. This is an electro-optic or acousto-optic shutter that prohibits
feedback, for example, by blocking one of the mirrors. Stimulated emission is
suppressed because photons are not circulating in the gain medium, which allows
the population inversion to grow to a very large value. The “shutter” is then
opened for a very short period of time and the laser’s energy is delivered in one
intense burst.
Extremely short pulses may be obtained in a laser that operates with
many longitudinal modes by a process called modelocking. This technique forces

the frequency modes to line up in phase inside the cavity, causing them to
constructively interfere and resulting in an extremely narrow output pulse.
Most pulsed lasers are capable of emitting several hundred to several
thousand pulses per second, making the controlled delivery of high-power laser
energy possible. Some lasers used in research are capable of extremely short
pulses of just a few femtoseconds (1 fs = 10-15 s)!

Q-Switched Lasers
Let's look at the Q-switch method of producing pulsed laser output in
more detail. Q-switched pulses can range from a few nanoseconds to several
hundred nanoseconds. This results in peak powers that may be several thousand
times greater than that of the same laser without a Q-switch. The “Q” in Q-switch
refers to the Quality factor, a measure of optical feedback in the cavity. When the
switch is on, the cavity provides very little to no feedback. When it is turned off,
feedback is restored and lasing is possible. (Note that in some types of Q-
switches, turning the Q-switch on restores the feedback and produces a laser
pulse.)
The most common type of Q-switch is the acousto-optic Q-switch shown
in Figure 9.19. A high frequency acoustic wave is launched across a slab of
piezoelectric material, which creates a standing wave within the device that acts
like a diffraction grating. When an acoustic wave is applied, the standing wave
diffracts light off axis and away from the cavity mirror, preventing lasing. The
population inversion continues to build during this time, since the pumping
continues. When the acoustic wave is turned off, the device allows light to pass
unobstructed and lasing occurs.

Figure 9.19 – Acousto-optic Q-switch. (Top: with the rf drive OFF, light passes through the Q-switch unobstructed and a pulse emerges from the output coupler. Bottom: with the rf drive ON, light is diffracted away from the cavity axis and no pulse is produced.)


An electro-optic Q-switch uses polarization to control the flow of light


through the laser. An electro-optic crystal rotates the plane of polarization of
light in the laser cavity so that it is either blocked or allowed to pass through the
laser cavity. Question 12 at the end of Chapter 7 illustrates the operation of a
quarter-wave electro-optic Q-switch.

REFERENCES
1. A largely non-mathematical treatment of laser technology:
Hitz, C.B., Ewing, J. and Hecht, J. Introduction to Laser Technology, 3rd
Edition, Wiley-IEEE Computer Society Press, 2001
2. Advanced laser physics textbooks.
Svelto, O. Principles of Lasers, Ed. 3, Plenum Press, 1989
Siegman, A. Lasers, University Science Books (1986)
3. The birth of the laser from two different points of view:
Taylor, N. Laser: The Inventor, The Nobel Laureate, and The 30-Year
Patent War, Simon and Schuster, 2000.
Townes, C.H. How the Laser Happened: Adventures of a Scientist,
Oxford University Press, 2001.
4. An overview of methods of determining divergence and M2 and the
importance of doing so in an industrial setting
Roundy, Carlos B. Current Technology of Laser Beam Profile
Measurements, white paper available from Spiricon at
www.spiricon.com

WEB SITES
1. "A Practical Guide to Lasers for Experimenters and Hobbyists." Extremely comprehensive.
www.laserfaq.com/
2. "The Laser Adventure", an online laser tutorial
www.phys.ksu.edu/perg/vqm/laserweb/
3. A manufacturer's web site with detailed technical information
www.coherentinc.com/
4. The following web site, from a laser user, includes a simplified discussion of
M2 and other beam parameters of importance to laser welding.
www.thefabricator.com/LaserWelding/LaserWelding_TechCell.cfm


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. What does the term LASER stand for?

2. What is a metastable state? Why is it important for laser operation?

3. Describe three properties of laser light that distinguish it from light from a flashlight.

4. Describe the three basic elements of a laser. What is the function of each?

5. "Stimulated emission and absorption are competing processes." Discuss this statement.

6. Why is stimulated emission not possible without spontaneous emission?

7. Explain the significance of the fluorescent linewidth and longitudinal mode spacing
to laser output.

8. Describe the three common methods of pulsing a laser. Which can provide the
shortest pulses? Why is pulsing the power supply not practical for a gas laser?

9. Why is the TEM00 mode known as the “uniphase” mode?

10. What is the TEM mode shown at left?

11. Explain the benefits and drawbacks of the four laser cavity configurations (plane-
parallel, spherical, hemispherical and unstable resonator).

12. What is meant by “diffraction-limited spot size?”

13. What is an etalon and why is it used?

LEVEL I PROBLEMS
14. Determine the wavelengths of photons having the following energies: (a) 6.87 × 10⁻²⁰ J and (b) 4.2 eV.

15. An atom makes a transition from 5.3 eV to 2.3 eV. Calculate the wavelength of the
emitted photon.

16. Calculate the frequency of the red light from a HeNe laser (λ = 633 nm).

17. An argon laser has a cavity length of 1 meter. Find the longitudinal mode spacing
(assume n = 1).

18. A HeNe laser has a mode spacing of 500 MHz. Find the cavity length.

19. Find the longitudinal coherence length of an Nd:YAG laser with a 30 GHz
fluorescent linewidth.

20. Calculate the full angle beam divergence for a red HeNe (632.8 nm) that has an
output aperture of 0.85 mm.


21. If the HeNe laser in Problem 20 is focused by an 8 mm focal length lens, what is the
diameter of the focused spot?

22. A Nd:YAG laser (λ = 1.064 µm) has an effective aperture diameter of 2.5 mm.
Determine the divergence.

23. An Nd:YAG laser operating with a TEM11 output has a full angle beam divergence
of 2.0 milliradians. The beam is focused by a 7 cm focal length lens. Find the
spot size.

24. Find the DOF (depth of focus) for a CO2 laser focused to an 80 µm spot size.

25. A Q-switched ruby laser emits 4.5 J in a pulse with 30 ns duration. What is the peak
power?

26. A Q-switched Nd:YAG laser with pulse duration of 20 ns has a peak power of 10
MW. What is the energy of the pulse?

27. A Q-switched ruby laser emits 6 J in a pulse duration time Δt1/2 of 300 ns. What is
the peak power?

LEVEL II PROBLEMS
28. A HeNe laser has a cavity length of 50 cm. The laser fluorescent linewidth is 1.5
GHz. The transmission of the output coupler is 1.5%, and the round trip loss is 0.4%.
Determine the following quantities:

a. Mode spacing

b. Bandwidth of a single mode

c. Approximate number of modes present in the laser output

29. An argon ion laser has a fluorescent linewidth of 9 GHz. The cavity length is 1.25
meters. The transmission of the output coupler is 4% and the cavity loss is 1%.
Determine the coherence length with and without an etalon.

30. A 200-watt CO2 (λ = 10.6 µm) laser with a beam diameter of 1 cm is focused by a 10
cm focal length lens. Determine the irradiance of the focused beam in watts/cm².

31. Show that Equation 9.14 follows from Equation 9.13 by substituting √2 ωo for ω(z)
and solving for z = zo.

You might be surprised how many times you've used a
laser. In fact, most people use lasers without even
realizing it. Lasers store and play back music and
movies on CDs and DVDs. They scan the bar code
labels that encode the prices of products at the
supermarket. Lasers are increasingly used in cosmetic
applications for everything from smoothing wrinkles
to removing hair. They are used by carpenters and
surveyors for alignment, in machine shops for cutting,
drilling and welding, in copiers and printers, and by
dentists and surgeons for medical procedures. From
biomedical research to homeland security, the laser is a
common tool. In this chapter we will look at a variety
of laser types and some of their applications.
Micromachining on Kapton (Photomachining, Inc.,
www.photomachining.com)

Chapter 10
Laser Types and Applications
10.1 OVERVIEW OF LASER TYPES
How can we sort through the tremendous variety of laser types and uses?
Lasers may be grouped or classified by output power or wavelength, whether
they are continuous wave or pulsed, or their most common applications. In this
chapter we will group lasers by the type of active medium. In this scheme, most
lasers fall into one of six categories: gas lasers, solid state lasers, dye lasers,
chemical lasers, semiconductor lasers and fiber lasers. In addition, there are other
more exotic lasers, such as free electron lasers or x-ray lasers, that are used in
specialized research.
For each of the major groups, we will discuss several representative
types and describe some of their applications. Entire textbooks are devoted to
applications of lasers in medicine, materials processing, telecommunications and
spectroscopy, and we will only be able to provide a brief introduction and
overview here. In addition, new lasers are rapidly being developed to replace
older, less efficient models. If you would like to learn more about lasers and the
latest lasers applications, see the trade journals and Internet web sites listed in the
references at the end of this chapter.


10.2 GAS LASERS


Gas lasers all use a gaseous gain medium and may be further grouped
according to the type of gas used: atomic, molecular or ionized gas. Gas lasers
range from low power helium neon lasers to huge carbon dioxide lasers capable
of producing over 20 kilowatts of cw power. The gas may be sealed in a tube or
circulate through the cavity at high speed. Most gas lasers use some form of
electric discharge to energize the medium and create a population inversion.

Neutral Atom Gas Lasers


The first type of gas laser we will consider uses electrically neutral gas
atoms as the gain medium. The helium neon (HeNe) laser, a common item in
educational labs, has a gain medium consisting of a mixture of helium (He) and
neon (Ne) gases. The neon atoms are the active elements and the helium serves to
enhance lasing by increasing pumping efficiency. Both gases are sealed at low
pressure in a glass tube, with a ratio of about 5 to 9 parts helium to one part neon.

Figure 10.1 – Typical HeNe laser tube. The Brewster window provides plane polarized output.

Excitation of the gas in a HeNe laser is achieved by a DC current flow through the gas. Electrons released at the cathode are accelerated toward the
anode by the voltage applied across the tube, resulting in collisions between
electrons and gas atoms. Since a helium atom has about one fifth of the mass of a
neon atom, electron energy is transferred much more readily to helium atoms
than neon atoms. Helium has two energy levels that are very close in energy to
two metastable levels of the neon atom. The more numerous helium atoms
collide with neon atoms, transferring energy to them and producing a population
inversion in the neon atoms. This process of pumping the active laser gas
indirectly by the transfer of energy via collisions with another excited gas is
called resonant excitation.
A few of the many transitions available from the two neon excited states
are shown in Figure 10.2. The primary visible wavelength for a HeNe laser is
632.8 nm but HeNe lasers can also lase at 1.15 µm and 3.39 µm in the infrared


as well as at other visible wavelengths, including 604 nm (orange), 594 nm (yellow) and 543 nm (green—sometimes called a "GreeNe"). All of these
wavelengths are created in the laser tube and the operating wavelength is chosen
by the use of wavelength specific mirrors that provide feedback at only the
desired spectral line. Additional gases can also be added to suppress the
unwanted lines.

Figure 10.2 - Schematic showing some of the transitions in a helium neon laser: resonant excitation of neon atoms by helium atoms (~20 eV), laser transitions at 3390 nm, 633 nm and 543 nm, and non-radiative transfer to the ground state through collisions with the tube walls. The lower helium energy level can also excite the neon atoms to the lower metastable levels, producing additional laser lines not shown here.

Output powers for helium neon lasers range from less than 1 mW to 100
mW or more for specialized types. The HeNe laser is relatively inefficient at
converting electrical energy to optical energy, and the reason for this can be
discovered in the energy level diagram. Approximately 20 eV is needed to raise
the helium atom to the appropriate excited state, but the photons created by the
neon atom have energies of around 2.28 eV (for green light) or less. The extra
energy is wasted as heat. Less than 1% of the electrical energy used to power the
laser emerges as laser light.
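
This energy bookkeeping is easy to check. The short Python sketch below uses standard constants and the 543 nm and ~20 eV figures quoted above to evaluate E = hc/λ; note that the energy mismatch alone would permit roughly 11% efficiency, and discharge and other losses reduce the overall figure to below 1%, as stated above.

# Photon energy of the 543 nm HeNe green line vs. the ~20 eV pump energy.
H = 6.626e-34     # Planck's constant, J*s
C = 2.998e8       # speed of light, m/s
EV = 1.602e-19    # joules per electron volt

wavelength_m = 543e-9
photon_energy_ev = H * C / (wavelength_m * EV)   # E = hc/lambda

print(f"photon energy at 543 nm: {photon_energy_ev:.2f} eV")        # ~2.28 eV
print(f"fraction of 20 eV pump energy: {photon_energy_ev/20:.0%}")  # ~11%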
Despite its inefficiency, the spectral purity and stability of the HeNe laser
output make it very desirable in many applications requiring a coherent TEM00
beam, such as holography and interferometry. However, for less stringent
applications such as supermarket barcode scanning or alignment, cheaper, more
reliable diode lasers have replaced the HeNe laser.

Ion Gas Lasers


An ionized atom has one or more electrons missing from its outer shell
of electrons. The ion, the atom minus electron(s), has a positive electric charge.
The argon (Ar+) and krypton (Kr+) lasers are common examples of ionized gas
lasers. Ar and Kr ion lasers operate with relatively high current levels (20-50 A)
and a discharge voltage of approximately 600 V. The excited gas is contained in
a segmented plasma tube constructed of graphite or beryllium oxide. Typically, a


solenoid is placed around the tube to generate a magnetic field, which concentrates the plasma and increases the current density in the gain medium to
improve pumping efficiency.
As you might imagine, a great deal of energy is required to ionize the gas
and, although ion lasers are the most powerful of the cw lasers in the visible and
ultraviolet regions, there is still a large amount of energy delivered to the gas that
does not appear in the output beam. Either water or air must be used to cool the
system for more stable operation. Most ion lasers operate on 208 Volt 3-phase
power because of the large current requirements.

Figure 10.3 – Argon-Krypton laser. The etalon is used to select the output wavelength.

Argon lasers produce two primary wavelengths in the blue-green portion of the visible spectrum, 488 nm (blue) and 514.5 nm (green). The two primary
colors are normally selected or "tuned" by using a rotating prism placed within
the laser cavity. Krypton lasers produce lines that span nearly the entire visible
spectrum. Laser systems with an active medium composed of a combination of
both gases can produce output spectra with lines from violet to red, with output
powers of up to several watts when operated in a continuous wave mode. The
lasers may also be pulsed at rather low rates of up to 120 pulses per second.
Argon and krypton lasers have low divergence and good coherence
properties that make them suitable for applications such as holography and
spectroscopy. Although they are being replaced by diode and solid state lasers
for some low power applications, argon lasers still find use as "optical pumps" to
excite the gain medium in other types of lasers, for example, dye and solid state
lasers. Because they provide bright output at several wavelengths, argon and
krypton lasers are the most common laser type used in laser light shows. Argon


lasers also have medical uses, for example, in dermatology to treat a variety of
birthmarks and port wine stains. In Chapter 15 we will explore some of the
reasons for choosing a particular laser for a given medical procedure.

Molecular Gas Lasers


As the name indicates, these lasers use gas molecules such as carbon
dioxide (CO2), nitrogen (N2) or carbon monoxide (CO) as the gain medium.
Unlike the lasers presented thus far, molecular lasers generate light energy from
molecular vibrations rather than electronic transitions.
Figure 10.4 illustrates the three ways a CO2 molecule can vibrate. If you
imagine that the two oxygen atoms are attached to the central carbon atom by
electronic "springs," the oxygen atoms can vibrate back and forth symmetrically
(top figure) or asymmetrically (bottom figure) along the axis of the molecule.
They can also vibrate up and down so that the molecule bends back and forth
(center figure). Each of these vibration types is called a vibrational mode.

Figure 10.4 – CO2 molecule vibrational modes.
Like the electronic energy levels of an atom, vibrational energy is also
quantized, that is, only certain values of vibrational energy are allowed.
Transitions between vibrational modes result in the emission of photons in the
infrared region of the spectrum.
CO2 lasers are excited by a high voltage (5-10 kV), low current (5-30 mA) electrical discharge. They can also be excited using a high frequency (~100 MHz), high voltage (5-10 kV) discharge across the laser tube. Coherent
light output is available at a large number of wavelengths in the infrared,
centered around 9.6 and 10.6 µm. CO2 lasers may have sealed tubes or the gas
may be circulated through the tube. When flowing gas is used, the higher the rate
of gas flow through the tube, the greater the laser output.
Small sealed tube lasers of a few tens of watts output can fit on a desktop
(Figure 10.5). Waveguide CO2 lasers producing 30-100 watts have resonant
cavities only a few millimeters wide and are about the size of a shoebox.
Transversely Excited Atmospheric (TEA) CO2 lasers have a broad output beam
that is difficult to focus to a spot. These lasers are usually used with a mask,
for example, for marking applications.

Figure 10.5 – Schematic of a typical sealed tube CO2 laser (HR mirror, output coupler, anode (+), cathode (−), power supply and electron flow).


At the other end of the spectrum are large folded path resonators, about
the size of a dormitory refrigerator. (Figure 10.6) These industrial lasers have
outputs of 5 kW to 25 kW and more. A new type of CO2 laser has a cavity with
an annular cross section. The large cylindrical body of the laser may be placed
directly on a robot arm for industrial material processing applications.

Figure 10.6 - Resonator for a Trumpf flowing gas CO2 laser (www.trumpf.com). The beam path is "folded" by mirrors at the corners of the gas discharge tubes, so that a nearly 5 m long resonator fits in a 65 cm square footprint. Labeled components: gas chillers (CO2 delivery system), fold mirrors, electrodes, gas discharge tubes, rear mirror and output coupler.

Because glass is relatively opaque at CO2 wavelengths, optics must be fabricated out of zinc selenide, germanium or other material transparent to the
10.6 µm radiation. Since CO2 wavelengths cannot be delivered by solid glass
fiber, placement of the beam is accomplished by a combination of moving
mirrors and/or by moving the part being processed. Beginning in 2004, thin
hollow fibers with reflective inner walls have been used to guide CO2 laser
beams for minimally invasive surgery.
CO2 lasers can be operated in both CW and pulsed modes. The CW output
power varies from a few watts to 10 kW. Pulsed CO2 lasers can have powers up
to several kilowatts with pulse repetition rates of up to 100 kHz. If the laser is Q-
switched, it can produce peak powers of up to 100 kW!
The CO2 laser is used in material processing, medical surgery,
manufacturing and LIDAR (Light Detection and Ranging) applications. Unlike a
mechanical tool, a laser does not become dull with use or exert forces on a part
that could cause it to warp or deform. A CO2 laser can be used, for example, to
remove the insulation around wire in the aerospace industry. The alternative, a
mechanical wire stripper, can nick or damage the wire being stripped, which may
result in faulty wiring or weakening of the wire. CO2 lasers are also used to mark
wire or electronic components with a part number, eliminating the need for
punching or painting the material.


In the electronics industry, CO2 lasers are not only used to strip and mark
wire but also to form components. Resistors and capacitors are formed when a
laser vaporizes portions of thin film elements on transparent substrates to form
the circuit geometry. In addition, resistors can be trimmed to very close
tolerances using lasers. A resistor of a lower resistance than the final desired
value is manufactured, and then a cut is made part of the way across the
resistor. The value of the resistor is monitored and when the desired value is
reached, the laser is turned off. Tolerances of better than 0.1% are easily
accomplished.
Most metals can be processed well with lasers. Metal parts formed with
CO2 lasers fit into a large variety of everyday products including aircraft engines
and airplanes, medical equipment, automobiles, farm equipment, ships, fire
trucks, elevators, escalators and appliances.
Carbon dioxide lasers have extensive medical uses because the 10.6 µm
wavelength is strongly absorbed by water in body tissues. When tightly focused,
the beam acts as a scalpel and a defocused beam can be used for skin surface
treatments. CO2 lasers have been used to remove moles and warts, for wrinkle
removal and scar resurfacing, and to vaporize tumors located deep within the
brain or located at the base of the skull.

Excimer Lasers
The word excimer is a contraction of "excited dimer," which describes
an unstable molecule formed from an inert gas atom and a halogen gas atom.
Normally the two atoms are not attracted to each other, but if the inert gas atom
is energized by an electric discharge, the two atoms will form a molecule that
exists only in the excited state. When the molecule transitions to a lower energy
state, it breaks up into its component elements. A population inversion is
achieved because after a photon is emitted, the molecule no longer exists. The
population of the upper energy state always exceeds that of the lower state
because there is no lower state!
Excimer lasers are excited by an electric current flow through a gas
mixture that contains both the inert gas atoms and halogen gas atoms. Lasers
such as krypton fluoride (KrF) and xenon fluoride (XeF) produce laser emission
in the UV portion of the spectrum. Commercially available models can produce
pulses of 10 to 40 ns with average powers up to roughly 100 watts and
pulse repetition rates of around 500 Hz.
Unlike a laser that operates in the infrared portion of the spectrum, an
excimer laser does not remove material through thermal processes. Energetic UV
photons disrupt the bonds that hold molecules together so there is little heat


damage to the material. An excimer laser with output at 157 nm is used in lithography, the process used to pattern integrated circuits. Because of its short
wavelength, the light can be focused to a very small spot size, producing features
as small as one micron. The smaller the focused spot, the more tightly circuits
can be arranged on a computer chip. The small spot size of an excimer laser is
also put to use in micromachining, creating micron-sized features in very thin
sheets of metal or polymer materials. The photograph at the beginning of this
chapter shows an intricate pattern machined in 50 micron thick Kapton, using a
248 nm excimer laser. The double slots on the left and right sides of the piece are
50 microns long, about half the diameter of a human hair.
In the medical field, LASIK eye surgery uses an excimer laser to correct
a too-curved cornea, allowing the eye to focus images sharply on the retina. The
excimer laser is chosen because of the fine detail it can cut. Transmyocardial
Laser Revascularization (TMLR) surgery uses an excimer laser to drill holes in
the walls of the heart in order to allow more oxygen into the muscle. TMLR is
used in instances where the patient is too weak to undergo a major surgery such
as coronary artery bypass. Because the laser light travels through a fiber optic
cable, the surgery is deemed "minimally invasive," allowing the chest to remain
unopened.

10.3 SOLID STATE LASERS


In electronics technology, the term "solid state" is used to indicate a
semiconductor device, but in laser technology the term is usually reserved for a
laser with a gain medium consisting of a crystalline material doped with impurity
ions. The crystal serves as a host for the dopant ions, which support the
population inversion. The energy pump for solid-state lasers is a light source
such as a flash lamp, semiconductor laser or other type of laser. Because light is
used to energize the medium, these lasers are often referred to as optically
pumped.

Ruby Laser
The first laser was a ruby laser, constructed by Theodore Maiman in
1960. The gain medium was a crystal of aluminum oxide (Al2O3), or synthetic
sapphire, doped with triply ionized chromium atoms (Cr3+). Ruby lasers are
typically pumped with a xenon-filled flash lamp. The primary wavelength of the
laser output is 694 nm.
Ruby lasers can be operated in either CW or pulsed mode and can
generate nanosecond pulses with powers ranging from 10³ to 10⁶ watts. Because
the ruby laser is a 3-level laser (that is, the lower lasing level is also the ground
state), it is relatively inefficient and requires intense pumping to maintain a


population inversion, especially if it is being operated in a cw mode. This intense pumping generates a tremendous amount of heat, normally requiring water-cooling.
Ruby lasers are commonly used in applications such as holography,
tattoo removal and spot welding of copper. The first holographic portraits of
people (and pets!) were created with pulsed ruby lasers. The very short pulses
allow a living subject to be captured for the same reason that a very short
exposure is used to eliminate blurring of a photograph of, say, a basketball player
in mid air. If the beam is widely diverged to expose the entire person, the
irradiance is so low that eye safety is not an issue. Several commercial
holographic studios can be found on the Internet that will (for a large price)
create a holographic portrait of you or your pet.

Neodymium:YAG or Nd:YAG
The Nd:YAG laser is a solid state laser consisting of a crystalline host of
yttrium aluminum garnet (YAG) doped with triple-ionized neodymium atoms
(Nd3+). The primary laser wavelength is 1064 nm in the near infrared region of
the spectrum, but the use of nonlinear optical elements allows the output to be
frequency doubled, tripled or quadrupled. If the frequency of light is changed by
a factor of 2 (or three or four), then the wavelength changes by a factor of 1/2 (or
1/3 or 1/4) since the speed of light remains constant. This means that Nd:YAG
lasers can be configured to produce output at 532 nm (green), 355 nm (UV) and
266 nm (UV). Each of these additional wavelengths has industrial uses,
especially the UV wavelengths, which can be focused to a very small spot size
for micromachining. Because of their relatively low cost and ease of use, short
wavelength Nd:YAG lasers are replacing excimer lasers in many applications.
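
Because c = λf is constant, the harmonic wavelengths follow directly from the 1064 nm fundamental. A minimal Python sketch, using only the wavelengths quoted above:

# Harmonics of the 1064 nm Nd:YAG fundamental: multiplying the frequency
# by n divides the wavelength by n, since c = (wavelength)(frequency).
fundamental_nm = 1064.0
for n, band in [(2, "green"), (3, "UV"), (4, "UV")]:
    print(f"{n}x frequency -> {fundamental_nm / n:.0f} nm ({band})")
# prints 532 nm, 355 nm and 266 nm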
The Nd:YAG crystal is usually a long thin rod, and the optical pump
sources are placed alongside the rod in a reflective cavity with an elliptical cross
section (Figure 10.7). As you learned in geometry class, an ellipse is a figure
with two foci. With the rod at one focus of the ellipse and the lamp at the other,
all light emitted by the lamp is reflected into the rod for maximum pump
efficiency. The pump sources are either broad spectrum xenon lamps or,
increasingly, diode lasers. Diode lasers are more efficient pumps because their
output can target the absorption bands of neodymium; therefore, less pump light
is wasted. Like the ruby laser, dissipation of heat is an issue and the rod must
be cooled with either flowing air or water.

Figure 10.7 - Cross-section view of a flash lamp pumped Nd:YAG cavity (flashlamp, reflector and Nd:YAG rod). Rays emitted by the flashlamp at one focus of the ellipse are reflected by the cavity walls onto the Nd:YAG rod, located at the other focus.


Nd:YAG lasers can be operated in cw or pulsed mode, but are commonly Q-switched for use in industrial and medical applications. Peak pulsed powers
can range from 5 kilowatts to 200 megawatts. CW systems with average powers
of up to 5 kilowatts are also available. Figure 10.8 shows the major components
of a typical Nd:YAG laser.

Figure 10.8 – Nd:YAG laser

The Nd:YAG laser is common in industry where it is used for a variety of material processing applications such as cutting, welding, etching and drilling.
The Nd:YAG wavelength chosen depends on the absorption characteristics of the
material and the detail required. In the aerospace industry, Nd:YAG lasers are
used extensively for drilling holes in turbine blades. In the automotive industry,
they can be used in conjunction with robotics to perform complex welding
operations on contoured surfaces. Unlike the 10.6 µm wavelength of a CO2 laser,
the 1.064 µm wavelength of the Nd:YAG laser transmits very well through glass.
This property allows light energy from the Nd:YAG to be delivered to the work
piece using fiber optic cable, which can be conveniently mounted on the end of a
robotic arm. The small cubes of acrylic or glass that are sold at tourist attractions,
featuring internal three dimensional designs (subsurface engraving), are made
by focused Nd:YAG lasers. The light is transmitted through the glass, forming
small bubbles only at the focal point of the beam.
Medical applications of the Nd:YAG laser include laser dentistry, where
it is used to ablate material such as the pulp in root canals, plaque and decay,
which absorb the 1.064 µm radiation without harming the tooth enamel. The
result is quick and painless removal of tooth decay.
The Nd:YAG laser is often referred to as simply a "YAG"
laser, but you should know that many other dopants are used in the YAG crystal
to produce lasers with different operating or output characteristics. One new
(2004) development is the commercial availability of the disk laser, which uses a
very thin disk of ytterbium doped YAG (Yb:YAG) to produce around 1000 watts
of optical power. The output of several disks may be combined to produce multi-
kilowatt lasers. The advantage of a thin disk over a long rod is that the disk may
be mounted directly onto a heat sink, greatly reducing the temperature gradients
across the laser medium that degrade beam quality.


Ti:Sapphire
The Ti:Sapphire laser (Ti3+:Al2O3) uses a crystalline host of aluminum
oxide (synthetic sapphire, the same host used in the ruby laser) doped with triply
ionized titanium atoms. The primary appeal of the Ti:Sapphire laser is that the
laser output wavelength is tunable over a range from 660 nm to 1080 nm. That is,
the laser medium has a very wide fluorescent linewidth and the wavelength of
interest can be extracted by special tuning optics in the laser cavity. Its relatively
short upper laser level lifetime makes flash lamp pumping inefficient, so the Ti:Sapphire
laser is pumped with green light from either an argon laser or a frequency-
doubled Nd:YAG laser.
The Ti:Sapphire laser is used in a variety of applications including
micro-machining of metals, glass and other complex materials, as well as in
medical applications such as dermatology, where the laser's tunable output can be
used to treat different types of pigmented lesions. The wide range of output
wavelengths, which can be increased by the use of nonlinear optics and special
cavity configurations, is especially useful in spectroscopy.
Because of the large spectral width, Ti:Sapphire lasers can be
modelocked to produce femtosecond (10⁻¹⁵ second) pulses for use in scientific
research. Events on the molecular and even atomic level can be imaged using
ultrashort pulses from Ti:Sapphire lasers.

10.4 DYE LASERS


Dye lasers are unique because the lasing medium is an organic dye
dissolved in a liquid solvent. The laser wavelength is determined by the type of
dye, and dyes are available to produce wavelengths that span the spectrum from
350 nm to around 1 µm. Since the dye can be changed from one type to another
in less than an hour, it is easy to quickly produce a new wavelength. Figure 10.9
shows the wide range of wavelengths that may be produced from a single laser
using different dyes.

Figure 10.9 - Dye laser tuning curves. Each curve shows the wavelengths that are produced from a specific dye. The set of 20 dyes spans nearly the entire spectrum from 350 nm to 1000 nm. (Courtesy of Photon Technology International, Inc., www.pti-nj.com)


The lasing dye is contained in a transparent tube and optical pumping—either by flash lamp or another laser—is used to excite the dye. The dye tends to
degrade with use, so it is usually circulated through the laser cavity. Output is
either CW or pulsed and the output power depends on both the dye used and the
method of pumping. The wide spectrum of wavelengths available makes dye
lasers useful for spectroscopy and to remove port wine stains, tattoos and
birthmarks. Like the Ti:Sapphire, the dye laser may be modelocked to produce
ultrashort picosecond pulses.
There are some disadvantages to dye lasers, however. The dyes are
usually toxic and the solvents may be both toxic and flammable. This means that
in addition to the usual optical and electrical hazards, chemical safety is also a
concern. In many applications requiring a tunable wavelength source, tunable
solid state lasers like the Ti:Sapphire are replacing dye lasers.

10.5 CHEMICAL LASERS


The term "chemical laser" refers, not to the state of the lasing medium,
but to the method of creating a population inversion. In the chemical laser, the
excitation is produced by an exothermic (energy producing) chemical reaction.
Chemical reactions can be initiated by flash lamp, electrical discharges, heating
by arc jets or direct chemical reaction.
Most of the chemical reactions used in chemical lasers are of the form

A + BC → AB* + C + energy
In this chemical formula the asterisk (*) indicates that the molecule AB is
in an excited state. The excited states involved are vibrational states, similar to
those of the CO2 laser, producing laser wavelengths in the infrared. Although the
chemicals can be in the solid, liquid or gaseous state, most chemical lasers use
gas as the active medium, for example, hydrogen fluoride (HF) and deuterium
fluoride (DF). Figure 10.10 illustrates the principles of a typical chemical laser.
Gases are fed into a reaction chamber where the excited molecules are produced.
The molecules produce laser light in the cavity and then are removed as waste.

Figure 10.10 - Schematic of the major components of a chemical laser: hydrogen and fluorine inputs, gas mixing and reaction chamber, HR mirror, output coupler, exhaust gases and laser beam.


Chemical lasers offer several attractive features: they may be operated either CW or pulsed, large output powers may be obtained, they are relatively
efficient, and they operate in the 2-4 µm range, which allows the output radiation
to be focused to a smaller spot size than the output of a CO2 laser. However,
chemical lasers are not usually used in industrial or medical applications. Since
the energy source for a chemical laser is easily stored in gas tanks, and the output
powers can be in the megawatt range, most of the applications of chemical lasers
are military. Airborne laser weapons, for example, are likely to be chemical
lasers.

10.6 SEMICONDUCTOR LASERS (LASER DIODES)


A semiconductor laser, or laser diode (LD), is essentially a light emitting
diode (LED) with its light-producing region modified to form an optical cavity.
This can be accomplished by cleaving the ends of the semiconductor crystal,
creating flat mirrors. The optical cavity traps the photons generated in the
electron-hole recombination process and provides the feedback mechanism
necessary to produce stimulated emission of radiation.
You probably own several semiconductor lasers; they are used in CD and
DVD players, computer CD-ROM drives and laser pointers. No larger than a
grain of sand, diode lasers have operating characteristics determined by the
semiconductor material used to construct the diode and the internal structure that
controls the flow of both electrons and photons.
The most common material for diode lasers is gallium arsenide (GaAs),
which cleaves easily along certain crystal planes, leaving flat, parallel surfaces.
Usually the cleaved ends of the laser diode require no further coating to form the
mirrors for feedback and output coupling since the reflectivity at the interface
between gallium arsenide and air is approximately 36%. Coating the end surfaces
of the laser diode with a metal film can increase the reflectivity to the desired
levels, if necessary.
Laser diodes typically have output powers that range from milliwatts to
watts, and depending on the semiconductor material, laser wavelengths range
from the near ultraviolet to the mid infrared. Because they are small, relatively
inexpensive and efficient, diode lasers are replacing other types of lasers in many
applications as well as creating the opportunity for new applications.

Homojunction Lasers
The diode laser illustrated in Figure 10.11 is called a homojunction laser
because only one type of semiconductor material is used for the entire structure.
In this type of laser diode, the pn junction formed between p-type and n-type
semiconductor materials (the depletion region) serves as the laser cavity.


Although the homojunction laser structure provides a simple explanation of laser action, to produce a more efficient laser it is necessary to control both the flow of
current and light within the device, which can be accomplished with doping and
electrode design.

Figure 10.11 - Homojunction laser diode. Labeled in the figure are the drive current I, the p-type and n-type regions, the active region, the highly reflective and partially reflective end surfaces, and the emitted light.

The index of refraction of a doped semiconductor depends upon the
particular dopant used, as well as the doping level. In a homojunction device, the
pn junction region is actually lightly doped p-type material, which creates a
region with a higher index of refraction. The surrounding n-type material and
more heavily doped p-type material have a lower index of refraction than the
junction region. The high index of refraction junction surrounded by the lower
index of refraction material forms an optical waveguide structure that helps to
confine the laser light to the active junction region. (Recall that total internal
reflection can occur when light travels from a high index of refraction material
toward a lower index of refraction material.) The disadvantage of homojunction
lasers is that the threshold current density for laser operation is high and the
efficiency is low.

Heterojunction Lasers
A heterojunction laser is a laser diode whose junction has been designed
to reduce diffraction loss in the optical cavity (Figure 10.12). This is
accomplished by modifying the laser material to control the index of refraction of
the cavity and the width of the pn junction. Much higher lasing efficiency and
much lower threshold current density are obtained by replacing some of the
gallium in both the p-type layer and the n-type layer with aluminum, so that both
the p and n regions are composed of aluminum gallium arsenide (AlGaAs). This
reduces the index of refraction of the outer layers and results in better
confinement of the laser light to the optical cavity. The discontinuity in the
refractive indices causes radiation generated within the junction to be reflected
back into the region, producing a higher lasing efficiency.

Figure 10.12 - Heterojunction laser diode.


Optical gain and output power can be improved by increasing the current
density in the junction region. The striped heterostructure is a modification of
the single heterostructure that concentrates the current in a small portion of the
active region. To create this structure, a layer of silicon dioxide insulator
material is first deposited on the top of the diode, leaving a narrow stripe about 13
µm wide uncovered (Figure 10.13). The metal electrode is then deposited. In the
laser shown in Figure 10.12, the top electrode covers the entire top surface of the
p-type material, allowing current to flow across the full width of the diode. By
restricting the electrode to a narrow stripe across the top of the laser diode,
current flow is confined to a narrow strip of the junction, greatly increasing the
current density and thus the gain and laser power.

Figure 10.13 - Striped heterostructure laser diode.

Distributed Feedback Lasers


Ordinary laser diodes have a fairly broad spectral width. This type of
laser diode is known as a Fabry-Perot laser. On the other hand, single frequency
laser diodes are specifically designed to meet the requirements for high
bandwidth communications. One variety of this type of structure is the
distributed feedback (DFB) laser diode (Figure 10.14). In a DFB laser, a
corrugated structure (similar to a diffraction grating) is formed in the cavity of
the laser. Only light of a very specific wavelength will be reflected back into the
cavity and allowed to oscillate. DFB lasers have output linewidths that are
extremely narrow (around 0.1 nm).

Figure 10.14 - DFB laser schematic.

As you will learn in Chapter 11, very narrow linewidth is a characteristic
required for dense wavelength division multiplexing (DWDM) systems where
many closely spaced wavelengths are transmitted through the same optical fiber.
Distributed feedback lasers usually emit light at fiber optic communication
wavelengths of 1310 nm and 1550 nm (Figure 10.15). These devices generally
have lower threshold currents and lower power requirements compared to other
laser diodes. Since the wavelength of the single mode depends on temperature,
sophisticated monitoring and temperature control circuitry is necessary when this
laser is used for telecommunications.

Figure 10.15 – Commercial DFB laser with a 14-pin butterfly package. (Courtesy of JDS Uniphase Corp., www.jdsu.com)

10.7 FIBER LASERS


Throughout the 1990s, the laser industry developed more compact and
efficient lasers for applications previously requiring gas, solid state and dye
lasers. One of the most exciting laser developments was the fiber laser, capable
of generating high optical power in a hair thin strand of glass fiber.
As you will learn in Chapter 11, glass optical fiber has a central high
index of refraction core surrounded by a lower index cladding. Because of total
internal reflection, light launched into one end of the fiber is guided along the
core to the other end. A fiber laser consists of a glass fiber with a core doped with
erbium, ytterbium, praseodymium or another of the rare earth elements. Solid
state or diode lasers provide the energy to pump the dopant atoms to excited
states.

Figure 10.17 - Elements of a fiber laser: laser diodes (pump source), coupler, fiber Bragg gratings, active fiber (gain medium) and laser output.

The final element required for a laser, the resonator cavity, is provided
by fiber Bragg gratings constructed inside the fiber itself. These index of
refraction variations act like diffraction gratings and reflect only one wavelength
back into the fiber, providing optical feedback, similar to the gratings in the
distributed feedback laser.
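
The text does not state the grating relation, but the standard first-order Bragg condition, λB = 2 neff Λ (where Λ is the grating period and neff is the effective index seen by the guided mode), shows how the built-in index variations select a single reflected wavelength. A minimal Python sketch with illustrative values; the index and period below are assumptions, not values from the text:

# First-order fiber Bragg grating condition: lambda_B = 2 * n_eff * period.
# The index and period are illustrative assumptions, not values from the text.
n_eff = 1.468        # assumed effective index of a silica fiber core
period_nm = 530.0    # assumed grating period
bragg_nm = 2.0 * n_eff * period_nm
print(f"reflected (Bragg) wavelength: {bragg_nm:.0f} nm")   # ~1556 nm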
Fiber laser output is typically between 1 µm and 2 µm. However, non-linear
crystals may be used to convert infrared wavelengths to visible wavelengths in
the red, green and blue. In the first few years of the 21st century, fiber laser
power increased dramatically from a few watts to hundreds of watts from a
single optical fiber. To increase power further, many fibers can be bundled
together. Although some other lasers may use optical fiber to transport beam
energy to the work piece, fiber lasers use the fiber both to produce and transport
light.

Figure 10.18 - Fiber laser subassembly, showing the doped fiber active medium. (Photo courtesy of Nufern, www.nufern.com/)

Figure 10.18 shows a commercially available fiber laser subassembly
containing the active fiber laser medium. The laser pictured at the beginning of


Chapter 9 is also a fiber laser. Because the active medium is an optical fiber,
integration with robots is easily achieved.
Applications for fiber lasers are wide and growing. The military and the
automotive industry are among those taking a second look at this family of
efficient lasers with high power, excellent beam quality, low maintenance and an
operating lifetime of more than 100,000 hours. Fiber lasers have found uses in
medicine as well, including laser dermatology (wrinkle removal). It is expected
that fiber lasers will replace gas lasers in many medical and industrial
applications.

REFERENCES
1. Hitz, C.B., Ewing, J. and Hecht, J. Introduction to Laser Technology, 3rd
Edition, Wiley-IEEE Computer Society Press, 2001 - Excellent introduction
to lasers with minimal mathematics
2. Svelto, O. Principles of Lasers, 3rd Edition, Plenum Press, 1989 - Advanced
introduction to laser physics and applications
3. Steen, W. Laser Material Processing, 3rd ed., Springer-Verlag, 2003.
4. Mann, K., Morris, T. "Disk Lasers Enable Application Advancements,"
Photonics Spectra, January 2004: 106-110.
5. Savage, N. "Fiber lasers power up," OE Reports, SPIE Press, August 2003.

TRADE JOURNALS
The most up-to-date information on lasers and applications can be found in
monthly trade journals
1. Photonics Spectra (Laurin Publishing, Pittsfield, MA)
www.photonics.com/
2. Laser Focus World (PennWell Publishing, Tulsa, OK)
http://lfw.pennnet.com/

WEB SITES
1. "The Laser Adventure" A comprehensive guide to laser physics and
applications
www.phys.ksu.edu/perg/vqm/laserweb/
2. "A Practical Guide to Lasers for Experimenters and Hobbyists"
www.laserfaq.com/
3. This web site provides a complete textbook on laser materials processing
www.columbia.edu/cu/mechanical/mrl/ntm/pgIndex.html


REVIEW QUESTIONS AND PROBLEMS


Some of these questions are answered in the chapter and others will require you to do
some research on the internet.
1. What is the primary wavelength of each of the following lasers, and what
type of laser is it? Find a use for each laser that is not listed in this
chapter:
a. HeNe
b. Nd:YAG
c. CO2
d. KrF
2. Why would temperature gradients across a solid state crystal rod degrade the
beam quality? What else might be affected by uneven heating of the rod?

3. What is the difference between homojunction and heterojunction lasers?
Why is the difference in structure important to laser operation?

4. How does a Distributed Feedback Laser achieve single mode operation? Why
might single mode operation be important in telecommunications?

5. Find one laser application in each of the following industries:


a. aerospace
b. automotive
c. dermatology
d. ophthalmology
e. dentistry

6. Internet project: Research a laser materials processing application. Write a one-page paper
about the application, including:
How this application was done before lasers were used
How it is done using lasers
What type of laser is used
What the advantage is of using a laser
Any disadvantages associated with using a laser

Over the past decade, optical fiber has had a major
impact on the way we communicate, arguably more than
any other technology. Without fiber optics we would not
be able to enjoy the high speed of broadband Internet,
which has reduced download time of data files from
minutes to just a few seconds. Telephone transmission
too has been changed by fiber optics. Just imagine—
when you speak on the telephone your voice is actually
combined with 130,000 others (or more!) in the same
hair-thin glass fiber traveling at the speed of light to its
destination. There is no doubt that the information
superhighway is paved with glass!

Optical Fiber (J. Donnelly)

Chapter 11

INTRODUCTION TO FIBER
OPTICS
In recent years, optical fiber has steadily replaced copper wire as the
preferred medium for transmitting high-speed data. Optical fiber spans the long
distances between local telephone systems and provides the communications
backbone for many network systems. A fiber optic system is similar in many
ways to the copper wire system that it replaces, but optical fiber uses light pulses
to transmit information along hair-thin glass fibers instead of electronic pulses to
transmit information through copper wires. Figure 11.1 illustrates the main
functional parts of a typical communication system: transmitter, information
channel (medium) and receiver. In a fiber optic system, the transmitter converts
electronic signals to optical signals, the receiver performs the reverse operation,
and the information channel is optical fiber.

11.1 THE EVOLUTION OF COMMUNICATION BY LIGHT


If optical communication includes all methods of transmitting messages
using light as a carrier of information, then Claude Chappe's invention of the
optical telegraph in the 1790s might be considered the first modern optical


communication system. He devised a method that used a series of semaphores, or flag wavers, between towers to relay messages. Nearly 100 years later,
Alexander Graham Bell patented the photo-phone, a device that used light
reflected from a vibrating diaphragm to transmit information. His other
communication device—the telephone—proved to be more practical.
At first, optical fibers were simply hollow or transparent glass rods.
Through the 1920s and 1930s, scientists explored the idea of using these glass
rods to transmit images. By the 1950s, the use of bundles of optical fiber to
transport light to form images was firmly established, and by the late 1950s
optical fiber was available for medical imaging applications. However, the fiber
optic bundles could not be very long because the attenuation, or loss of light
in the fiber, was too great. To use optical fiber for telecommunications,
attenuation had to be decreased by several orders of magnitude.
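
Attenuation is normally quoted in decibels per kilometer, where loss(dB) = -10 log10(Pout/Pin). A minimal Python sketch puts the numbers in perspective; the dB formula is standard, the 99% figure is the Corning result described in the next paragraph, and the ~0.2 dB/km modern value is a typical figure that is not from the text:

import math

# Fiber attenuation in decibels: loss_dB = -10 * log10(P_out / P_in).
def loss_db(p_out_over_p_in):
    return -10.0 * math.log10(p_out_over_p_in)

early_fiber = loss_db(0.01)   # only 1% of the light survives 1 km
print(f"early fiber: {early_fiber:.0f} dB/km")         # 20 dB/km
print("typical modern fiber: ~0.2 dB/km at 1550 nm")   # assumed typical value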
In the 1960s, scientists and engineers recognized the usefulness of glass
fiber as a transmission medium for communication. Their optimism was partly
due to the introduction of the helium neon laser as well as efforts to decrease the
attenuation of the glass. Corning Glass Works (now Corning Inc.) produced a
fiber that, when used with a HeNe laser, lost "only" 99% of the incident light
over a distance of 1 km. While this was far too great a loss for
telecommunications systems, the accomplishment was a major breakthrough in
the establishment of fiber optic communications.

The first commercially available fiber optic system was installed in the
late 1970s and used to carry "plain old telephone service" across nationwide
networks. Soon local telephone service providers began using fiber optics to
carry this same service between central office switches at more local levels, as
well as to neighborhoods and individual homes.

Figure 11.1 - Basic communications system. A message source feeds a transmitter; the information channel may be copper wire or cable, glass or plastic fiber, the atmosphere, or space; a receiver delivers the signal to the message destination.

Fiber optic networking began in the early 1980s with systems that were
capable of transmitting 90 million bits of data per second (90 Mb/s). At this data
rate, a single optical fiber could handle approximately 1300 simultaneous voice
channels. By the start of the 21st century, fiber optic communication systems
were routinely being deployed at 10 Gb/s, and next-generation systems under
development will operate at rates as high as 40 Gb/s. At a data rate of 10 Gb/s,
over 130,000 simultaneous voice channels can be transmitted on one hair-thin
fiber using a single wavelength of light. New technologies have been successfully
used to combine many wavelengths and further increase data rates to beyond a
terabit per second (>1000 Gb/s) over distances in excess of 100 km. This is
equivalent to transmitting 13 million simultaneous phone calls through a single
glass fiber. At this speed, 100,000 books can be transmitted coast to coast in 1
second!
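
The channel counts quoted above follow from simple arithmetic if one assumes the standard 64 kb/s digitized telephone channel; this rate is an assumption (the text does not state it), and real systems lose some capacity to framing overhead, which is why the quoted counts are somewhat lower than the raw figures below.

# Approximate voice-channel capacity at several line rates, assuming a
# 64 kb/s digitized telephone channel (assumed; overhead is ignored).
VOICE_CHANNEL_BPS = 64e3
for label, line_rate_bps in [("90 Mb/s", 90e6), ("10 Gb/s", 10e9), ("1 Tb/s", 1e12)]:
    print(f"{label}: ~{line_rate_bps / VOICE_CHANNEL_BPS:,.0f} voice channels")
# prints ~1,406, ~156,250 and ~15,625,000 raw channels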


Modern fiber optic telecommunication systems now carry more digital computer data than telephone calls. Large corporations, cities and universities
own private networks and need secure, reliable systems to transfer computer and
monetary information to the desktop terminal, between buildings and around the
world. The security inherent in optical fiber systems is a major benefit. Cable
television companies, called community antenna television (CATV), also find
fiber useful for video services. The high information-carrying capacity of fiber
makes it the perfect choice for transmitting signals to subscribers.

11.2 WHY OPTICAL FIBER?


You may hear people say that optical fiber is "faster" than copper wire.
The “speed” advantage of fiber does not refer to how fast a signal travels from
one end of the fiber to the other, but rather to bandwidth, the maximum data rate
(number of bits of data sent per second) at which the fiber optic system can be
operated. In copper-based systems, as the data rate increases the attenuation
increases also. At a certain point, copper wire is no longer capable of handling
the high rate of data. This is where fiber optics, with its tremendous data rate
capability, shines.
For long distance communications, the low attenuation of fiber is a
distinct advantage over copper transmission media. Copper voice-grade systems
may require amplification every few kilometers, but signals are routinely
transmitted over optical fiber for more than 100 kilometers without amplification.
This results in large savings in electronics components needed to amplify and
condition a signal, as well as an increase in system reliability due to fewer
components.
There are other advantages to optical fiber as well. Since it is not an
electrical conductor, it is immune to electromagnetic noise. A nearby lightning
strike can cause havoc with electrical conductors, but unless the glass fiber is
physically harmed, the light inside is unaffected. Optical fiber does not radiate
electromagnetic radiation either, so it is difficult to tap into, making it an ideal
medium for secure communications. Compared to metallic conductors, it is much
smaller, lighter and uses less power.
Optical fiber also has non-telecommunications uses. So-called "coherent
fiber optic bundles" are used to transmit images. For example, fiber optic bundles
can be used to transfer images from inside the human body to the eye of a
surgeon. On the other hand, incoherent fiber optic bundles can carry light for
illumination, an application of fiber growing in importance. Optical fibers with
special properties can also be used as sensors, detecting mechanical motion,


temperature change or the presence of biological and chemical agents. We will discuss some of these applications later in the chapter.
Optical fiber, however, does have some disadvantages. Working with
fiber optic cable requires handling skills that necessitate special training and also
raise safety issues that are not a problem with electrical conductors. For example,
individuals working with optical fiber must be aware of the laser hazards from
light exiting the fiber. In addition, given the small size of optical fiber, tiny bits of
glass fiber can become easily embedded in the skin. For long distance
communication links operating at very high data rates, however, fiber optics has
no competition. Most recently, fiber optics has become the communications
medium of choice in metropolitan area networks (MAN), local area networks
(LAN), as well as fiber to the home (FTTH) or fiber to the premises (FTTP).

11.3 INTRODUCTORY PRINCIPLES

Fiber Construction
An optical fiber consists of a central glass (or plastic) core surrounded by
outer cladding with a slightly lower index of refraction as shown in Figure 11.2.
In glass fibers, the core and cladding materials are usually highly purified silica
fused together during the manufacturing process. Fiber optic cable can also be
constructed with plastic core and cladding or, in some highly specialized fiber,
glass core and plastic cladding. Whatever the material, the core and cladding are
not separable. A plastic buffer coating is usually added as the fiber is being
manufactured to protect it from environmental contaminants. The buffer must be
removed when the fiber is spliced or put into a connector.

Buffer
Figure 11.2 - Typical optical
Cladding fiber construction. The core
and cladding cannot be
Core separated; they are two
regions of glass with different
index of refraction.

The outer diameter of a typical telecommunications fiber is about 125 µm, which is a little larger than the diameter of a human hair. The fiber core
and cladding diameters have been standardized both nationally and
internationally for manufacturing and application purposes. Fiber size is
designated by the manufacturer using the notation core/cladding, where the first
number refers to the core diameter in micrometers and the second number refers
to the cladding diameter in micrometers. For example, the fiber shown in Figure
11.3 with a 62.5 µm core and 125 µm cladding would be designated 62.5/125.


(This is said "sixty-two dot five, one twenty-five.") Other typical fiber sizes for
telecommunications are 9/125 and 50/125. A common plastic optical fiber is
designated 980/1000. This fiber has a 980 µm core with a 1000 µm (1 mm)
outside diameter and can be used in very short distance data links.

Figure 11.3 – Core/cladding profile of a 62.5/125 fiber (62.5 µm core, 125 µm cladding).

Many different types of fiber are used for specialized purposes. Fibers
with specially doped cores are used to make optical amplifiers, fiber lasers and
fiber Bragg gratings. Large core fibers are used to transport light energy, for
example, in a laser delivery system. Photonic fibers, with many tiny holes
running the length of the core, are finding many uses in next generation optical
devices.

How Fiber "Works" - Total Internal Reflection
Optical fiber transmission is based on the principle of total internal
reflection (TIR), which was introduced in Chapter 4. Recall that when light
travels from one medium to another, the angle that transmitted light makes with
the normal to the surface depends on the index of refraction of the two media
according to Snell’s law:

n1 sin θ1 = n2 sin θ2

In Chapter 4 you also learned that when the index of refraction of the incident
medium is greater than the index of refraction of the second medium, light is
refracted away from the normal. If the angle of incidence is such that the
refracted angle is 90°, the incident angle is called the critical angle (θc). Any
incident angle greater than the critical angle will cause light to be totally reflected
back into the first medium.
Consider the situation shown in Figure 11.4, where a higher index
material is sandwiched between two lower index materials (n1 > n2). This device
is known as an optical waveguide. Any beam of light striking the interface
between the two materials at angles greater than the critical angle will be totally
internally reflected and guided along the waveguide. Optical fiber works by this
basic principle.
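
Setting the refracted angle to 90° in Snell's law gives the critical angle directly: θc = sin⁻¹(n2/n1) for n1 > n2. A minimal Python check, borrowing the core and cladding indices used in Example 11.1 later in this chapter:

import math

# Critical angle at the core-cladding boundary: theta_c = asin(n2 / n1).
n_core, n_cladding = 1.512, 1.496   # indices from Example 11.1
theta_c_deg = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle: {theta_c_deg:.1f} degrees from the normal")   # ~81.7

Any ray striking the boundary at more than this angle from the normal is trapped in the core by total internal reflection.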

Figure 11.4 - Light guided by TIR in the core of an optical fiber (core index n1, cladding index n2, critical angle θc).


Numerical Aperture
A ray of light traveling through an optical fiber must strike the core-
cladding boundary of the fiber at an angle larger than the critical angle if light is
to undergo total internal reflection. This implies that the light entering the fiber
also must meet a geometric test to be guided by the fiber. The acceptance angle,
2&a, defines the cone of rays that will be "accepted" and propagated by the fiber.
Rays entering the fiber at angles larger than the acceptance angle will be refracted
into the cladding and not guided along the core. Thus, the acceptance angle is a
measure of the fiber's light gathering ability.
Manufacturers usually do not state the acceptance angle, but rather give
the numerical aperture (N.A.) of the fiber. The numerical aperture is defined by

N.A. = sin θa    (11.1)

where θa is the half-acceptance angle of the fiber (see Figure 11.5).

Figure 11.5 - Numerical aperture and acceptance angle. The dashed ray enters the end of the fiber at an angle greater than the acceptance angle θa and is lost in the cladding. The figure also labels the critical angle beam.

For most telecommunications fiber, the N.A. varies between 0.1 and 0.3,
depending on the type of fiber. The numerical aperture is an important quantity
because it is used to determine how a fiber couples to other system components
and, as we will show, how much a pulse of light spreads as it travels along the
fiber. Optical fiber with a larger numerical aperture is easier to couple light into,
but it will cause light pulses to spread more over distance than a small numerical
aperture fiber, thereby limiting its data rate capability.
Sometimes it is necessary to relate numerical aperture to the index of
refraction of the fiber’s core and cladding. Applying Snell's law and some
trigonometry, we can approximate the numerical aperture using the equation
N.A. = √(ncore² - ncladding²)    (11.2)

From Equation 11.2, it is clear that the larger the difference between the
core and cladding indices of refraction, the larger the numerical aperture. Since a
large numerical aperture causes increased pulse spreading, it is often desirable to
have the core and cladding indices quite close in value.


EXAMPLE 11.1
Find the numerical aperture and acceptance angle of an optical fiber with a
core index of refraction of 1.512 and a cladding index of 1.496.

Solution:
Using Equation 11.2,
N.A. = √(1.512² − 1.496²) = 0.219
The half-acceptance angle is found from Equation 11.1:
θa = sin⁻¹(0.219) = 12.7°
Only light striking the end of the fiber in a cone of full angle 2 × 12.7° = 25.4° is
"accepted" and propagated along the length of the fiber.

Operating Wavelengths
Optical fiber communications systems operate in the near-infrared (IR)
portion of the spectrum because of the low attenuation of glass in this region.
Early fiber optic systems used the 800 nm - 900 nm range because sources and
detectors were readily available. This is sometimes called the first transmission
window. The second transmission window is around 1300 nm and the third
transmission window is near 1550 nm, where silica glass has minimum
attenuation. In the not too distant past, manufacturers referred to the water peak,
a region near 1400 nm where OH- ions in the glass caused enough absorption to
make the 1400 nm region unusable for long distance communications. Recently,
however, improvements in manufacturing processes have resulted in the creation
of a "dry" fiber with virtually no water contamination. Now the entire 1300 nm -
1600 nm range may be put to use for telecommunications.
Figure 11.6 - Spectral attenuation (in dB/km) versus wavelength (800-1600 nm) for silica (glass) fiber, showing the "water peak" near 1400 nm. The dashed line shows the improvement in attenuation provided by so-called "full spectrum" fiber, which eliminates the OH- absorption (water) peak.

11.4 FIBER CHARACTERISTICS


The transmission characteristics of optical fiber for telecommunications
depend both on the specific material composition and the physical shape and size.
The simplest type of optical fiber is known as a step-index fiber, in which there is
a “step” change in index of refraction between the core and cladding, as shown in
Figure 11.2. Certain fibers, however, may not have a simple step index profile,
but rather a parabolic or complex index profile designed to enhance the
transmission characteristics of the fiber. Manufacturers closely guard parameters
such as glass composition and index profile, which directly affect the fiber
performance. The most important characteristics of optical telecommunications
fiber are fiber loss (attenuation) and data rate.

Types of Optical Fiber for Communications


One of the most common optical fibers used in short distance, low
bandwidth applications is the step-index multimode (SI-MM) fiber. SI-MM fiber
has a high index of refraction in the core of the fiber that changes abruptly to a
lower index of refraction in the cladding. A typical 62.5/125 SI-MM fiber is
illustrated in Figure 11.3. SI-MM fiber has a relatively large core diameter and
large numerical aperture. The major advantage of this fiber is that it is relatively
easy to couple light into the fiber and, as a result, it may be used with either laser
or LED sources.
The fiber is termed “multimode” because light may follow many paths
(or modes) as it travels along the fiber as shown in Figure 11.7. For example,
suppose a short pulse of light is launched into the left end of the fiber. Some of
the light will travel in each of the specific paths shown as the pulse propagates
toward the right end of the fiber. It is clear that light traveling near the centerline
of the fiber core (low order modes) will travel a much shorter path than light that
follows a highly zigzag path (higher-order modes). This causes light traveling in
the lower-order modes to arrive at the output of the fiber sooner than the light
traveling in the higher-order modes. The result is a broadened output pulse. This
type of pulse broadening is known as modal distortion.

Figure 11.7 - Step Index Multimode fiber has many paths for light to travel, leading to pulse spreading: a short light pulse enters the fiber and a broad pulse exits.

Digital data transmission consists of on-off pulses of light, like those
shown in Figure 11.7. If many closely spaced pulses are sent along the fiber at
too great a rate, modal distortion will cause them to spread into one another until
individual pulses can no longer be distinguished. The farther the pulses travel, the
more the pulses spread, limiting the data rate of the fiber.


Step-index multimode fiber may be used in applications that require
lower bandwidth (< 1 GHz) transmission over relatively short distances (< 3 km),
such as in a local area network.
How many modes can a multimode fiber support? For a multimode step-
index fiber, the number of modes Mn traveling in the fiber can be approximated
by

Mn = V²/2    (11.5) Number of modes

where V is known as the normalized frequency, or the V-number. This
dimensionless quantity is an important parameter used to describe fiber optic
characteristics. It depends on the core radius (a), the numerical aperture (N.A.),
and the operating wavelength.

V = (2πa/λ) N.A.    (11.6) V-number

Equation 11.5 is valid for large values of V, that is, for V-numbers
greater than about 20. For smaller values of V, an exact solution must be used.

EXAMPLE 11.2
Approximately how many paths (modes) are there in a 62.5/125 SI-MM
fiber operated at 1300 nm (1.3 µm)? Assume a numerical aperture of 0.3.

Solution
First, find the V-number using Equation 11.6. The core radius is one half the
62.5 µm diameter, or 31.25 µm.

V = (2π × 31.25 µm / 1.3 µm)(0.3) = 45

Then use 11.5 to find the approximate number of modes

M = V²/2 = 45²/2 ≈ 1012

The fiber will support approximately 1000 modes. The number of modes
will change as the fiber bends and fiber geometry changes.
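
The same arithmetic is easy to script. Here is a minimal Python sketch of Equations 11.5 and 11.6 applied to this example (the helper name is ours):

```python
import math

# V-number (Equation 11.6) and approximate mode count (Equation 11.5).
# The V^2/2 approximation is only valid for V greater than about 20.
def v_number(core_radius, wavelength, na):
    # core_radius and wavelength must be in the same units (here, micrometers)
    return (2 * math.pi * core_radius / wavelength) * na

V = v_number(31.25, 1.3, 0.3)
print(f"V = {V:.1f}, modes = {V**2 / 2:.0f}")
# V = 45.3, modes = 1027 (the text rounds V to 45, giving about 1012)
```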

Singlemode fiber solves the problem of modal distortion by allowing
only one mode to propagate in the fiber (Figure 11.8). If you examine Equation
11.6, you can see that decreasing either the fiber core radius or its numerical
aperture will decrease the V-number, and therefore the number of modes. In
practice, single mode operation is accomplished by making the fiber core
diameter extremely small (5-10 µm is a typical range for core diameter in single
mode fiber) and by minimizing the core/cladding index difference, which results
in a smaller numerical aperture.

Figure 11.8 - Singlemode fiber.

Analysis beyond the scope of this text shows that when the V-number is
less than 2.405, only one mode will exist in the fiber. This fact allows the single
mode core diameter to be calculated using Equation 11.6. Since V depends on
wavelength as well as core diameter, a fiber that is single mode at one
wavelength may support multiple modes at a shorter wavelength. Therefore, it is
important to specify the wavelength for singlemode fiber. The wavelength at
which a singlemode fiber is guaranteed to be singlemode is referred to in
manufacturers' data sheets as the cutoff wavelength.

EXAMPLE 11.3
What is the maximum core diameter for a fiber to be singlemode at a
wavelength of 1310 nm if the N.A. is 0.12?

Solution
Solving Equation 11.6 for the core radius, a,
a = Vλ / (2π × N.A.)
For singlemode operation, V must be 2.405 or less. The maximum core
radius occurs when V = 2.405. Using this value for V,
a = (2.405 × 1310 nm)/(2π × 0.12) = 4.18 µm
The core diameter is d = 2 × a = 8.36 µm. This is a typical diameter for
single mode fiber.
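
A minimal Python sketch of the same calculation, assuming the V = 2.405 singlemode cutoff quoted above:

```python
import math

# Largest core radius that keeps a fiber singlemode at a given wavelength:
# Equation 11.6 solved for the core radius a, with V set to 2.405.
def max_singlemode_radius(wavelength_nm, na, v_cutoff=2.405):
    return v_cutoff * wavelength_nm / (2 * math.pi * na)

a = max_singlemode_radius(1310, 0.12)                  # radius in nm
print(f"max core diameter = {2 * a / 1000:.2f} um")    # 8.36 um
```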

Although singlemode fiber eliminates the problem of modal distortion, at
very high data rates a different pulse spreading mechanism can limit
performance. In Chapter 4, chromatic dispersion was introduced as a
consequence of the wavelength dependence of index of refraction. In an optical
fiber, slight variations in the index of refraction over the spectral width of the
light source can result in pulse spreading over a long length of fiber. Chromatic
dispersion may be reduced or eliminated by operating the fiber with a highly
monochromatic light source such as a distributed feedback laser (DFB laser), by
operating at a wavelength where the glass has minimum dispersion (1300 nm for
silica fiber), or by using specialized dispersion compensating fiber. Note that
chromatic dispersion also exists in multimode fibers, but because it is negligible
compared to modal distortion it is not normally considered for multimode systems.
Singlemode fibers are used in high-bandwidth, long distance applications
such as long-distance telephone trunk lines, cable TV head-ends, and high-speed
WAN backbones. However, given its small core diameter, single mode fiber is
more difficult to work with than multimode fiber. Just imagine connecting two
fiber cores, one-tenth the diameter of a human hair! In addition, because of the
small numerical aperture associated with single mode fiber, it can only be used
with laser sources, which are more expensive and complex than LED sources.
Graded Index Fiber represents a compromise between multimode fiber,
which is easy to work with, and singlemode fiber with its higher data carrying
capacity. Often referred to as GRIN fiber (GRaded INdex), it has a core index of
refraction that varies parabolically from the center of the core to the inner edge of
the cladding (Figure 11.9).
Figure 11.9 - GRIN fiber. The index of refraction profile is parabolic: the index is highest in the center of the core.

Recall that light travels slower in materials where the index of refraction
is higher. In GRIN fiber, the light propagating along the shorter paths near the
axis travels slower than light in the high order modes near the cladding. This
allows the “zigzag” modes to “catch up" to the light traveling straight along the
fiber center. The net effect is that modal distortion is greatly reduced.

11.5 LOSSES IN A FIBER OPTIC COMMUNICATION SYSTEM


In every communication system, power is lost as the signal propagates
through the information channel. While loss in fiber optic cable is small
compared to copper wire, there still is some loss that must be accounted for when
building a fiber optic link. Intrinsic loss is due to the interaction of light with the
material of the fiber. Rayleigh scattering accounts for most of the loss. As you
know, this is scattering from the atoms in the glass. Absorption is due to light
interaction with impurities in the glass and may be controlled to some extent by
improved manufacturing processes. These losses are wavelength dependent and
can be minimized by carefully choosing the operating wavelength.
Extrinsic loss, on the other hand, results from deformation in the fiber
structure. When fiber cable is bent, for example, to go around a corner, we speak
of macrobending loss. Fiber cable specifications indicate a minimum bend radius
below which the cable should not be bent, or excessive signal loss may result. Small localized
distortions of the fiber geometry are called microbends, and these may be due to
local pressure, say, from the cabling process.
Finally, light is also lost when couplers, connectors and other optical
components are inserted into the system. The overall loss incurred is called
insertion loss. For a fiber optic system to function properly, all of the losses must
be accounted for and there must be enough optical power left over at the receiver
to be detected with minimal error.

Measuring Loss in Decibels


Fiber optic system losses are measured in logarithmic units called
decibels (dB) rather than in power units (watts). This may seem like a needless
complication; however, using logarithms allows us to replace multiplication with
addition and subtraction, which can often be performed quickly even without a
calculator. Logarithms also allow us to graphically present data that covers too
many orders of magnitude to be effectively shown on a linear scale.

Figure 11.10 - Loss in a fiber: Pout < Pin (input power Pin, output power Pout).
Consider an optical fiber with input power P1 and output power P2
(Figure 11.10). To calculate the loss in decibels, we first compare the output
power to the input power by calculating the ratio P2/P1, then we take the
logarithm of the ratio. Multiplying by a factor of 10 converts the unit
"bel" (named after Alexander Graham Bell) to the unit "decibel."

Loss in dB = 10 log(P2/P1)    (11.7)

Thus, if the output power is one half the input power, the loss is 3 dB, since the
log of 0.5 is −0.3. Loss in optical fiber is usually expressed in decibels per
kilometer (dB/km).

EXAMPLE 11.4
A fiber has P1 = 2 mW and P2 = 1.6 mW. Find the loss in dB.

Solution
Using Equation 11.7,
loss in dB = 10 log(1.6 mW / 2 mW) = −0.969 dB

The negative sign implies a loss of power.
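
Equation 11.7 is a single line of code. A minimal sketch using the numbers of this example:

```python
import math

# Loss in decibels from input and output power (Equation 11.7).
# A negative result indicates a loss; a positive result would be gain.
def loss_db(p_in_mw, p_out_mw):
    return 10 * math.log10(p_out_mw / p_in_mw)

print(f"{loss_db(2.0, 1.6):.3f} dB")    # -0.969 dB
```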

You are probably most familiar with power stated in watts, milliwatts or
microwatts. It is common, however, to express optical power in fiber optic systems in
units called dBm, or "decibel referenced to one milliwatt." Power in dBm is
calculated from the absolute power by

P(in dBm) = 10 log(P / 1 mW)    (11.8) dBm from mW

That is, the power of a light source is first compared to 1 mW, then the log is
taken and the result multiplied by 10. Thus, a power of 1 mW is equivalent to
0 dBm and 100 mW is equivalent to 20 dBm. The advantage of using dBm is that
loss (and gain from amplification) in dB may be subtracted (or added) by simple
arithmetic.
When power is given in dBm, power in mW may be calculated by
rearranging Equation 11.8 to give
P(in mW) = 10^(P(in dBm)/10)    (11.9) mW from dBm

Notice that we use the notations “in mW” or “in dBm” in our equations so you
know which units we are referring to. Tables 11.1 and 11.2 illustrate some optical
power levels and their associated dB and dBm equivalents. You will notice the
familiar base 10 logarithm at work here!

dB        Pout/Pin
+40 dB    10⁴ = 10,000
+30 dB    10³ = 1000
+20 dB    10² = 100
+10 dB    10¹ = 10
0 dB      10⁰ = 1
−10 dB    10⁻¹ = 0.1
−20 dB    10⁻² = 0.01
−30 dB    10⁻³ = 0.001
−40 dB    10⁻⁴ = 0.0001
Table 11.1 - Decibel to power ratio conversions

dBm       P (in mW) (reference = 1 mW)
+40 dBm   10⁴ = 10,000 mW
+30 dBm   10³ = 1000 mW
+20 dBm   10² = 100 mW
+10 dBm   10¹ = 10 mW
0 dBm     10⁰ = 1 mW
−10 dBm   10⁻¹ = 0.1 mW
−20 dBm   10⁻² = 0.01 mW
−30 dBm   10⁻³ = 0.001 mW
−40 dBm   10⁻⁴ = 0.0001 mW
Table 11.2 - dBm to mW conversions
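
The conversions of Equations 11.8 and 11.9 are just as easy to script. A minimal sketch (the helper names are ours):

```python
import math

# Convert absolute power to dBm (Equation 11.8) and back (Equation 11.9).
def mw_to_dbm(p_mw):
    return 10 * math.log10(p_mw / 1.0)    # referenced to 1 mW

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10)

print(mw_to_dbm(100))    # 20.0 (dBm)
print(dbm_to_mw(-30))    # 0.001 (mW)
```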


EXAMPLE 11.5
A communication system uses 20 km of fiber that has a loss of
0.5 dB/km. Find the output power if the input power is 100 mW.

Solution
For 20 km of fiber with 0.5 dB/km,
Loss (in dB) = (20 km)(−0.5 dB/km) = −10 dB
Solve Equation 11.7 for P2, using the fact that if x = log y, then y = 10^x.
Loss (in dB) = 10 log(P2/P1)
P2/P1 = 10^(loss(in dB)/10)
P2 = P1 × 10^(loss(in dB)/10) = (100 mW)(10^(−10/10)) = 10 mW
The output power is 10 mW. Another way to view this problem is to note
that the input power, 100 mW, is equivalent to 20 dBm. A loss of 10 dB
then gives 20 dBm − 10 dB = 10 dBm, or 10 mW.

EXAMPLE 11.6
A fiber has a 2.5 dB/km loss. What is Pout for a 5 km segment of this fiber if
Pin = 1 dBm?

Solution
The total fiber loss is (2.5 dB/km)(5 km) = 12.5 dB.
Then, Pout = 1 dBm - 12.5 dB = -11.5 dBm.
From Equation 11.9,
P(in mW) = 10^(P(in dBm)/10) = 10^(−11.5/10) = 10^(−1.15) = 0.071 mW
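
Because dB and dBm combine by ordinary addition and subtraction, a link calculation like this one reduces to a single subtraction. A minimal sketch using the numbers of Example 11.6:

```python
# Output power of a fiber span: subtract the total dB loss from the dBm
# input, then convert back to milliwatts with Equation 11.9.
def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10)

p_in_dbm = 1.0           # 1 dBm launched into the fiber
total_loss = 2.5 * 5     # 2.5 dB/km over 5 km = 12.5 dB
p_out_dbm = p_in_dbm - total_loss
print(f"{p_out_dbm} dBm = {dbm_to_mw(p_out_dbm):.3f} mW")
# -11.5 dBm = 0.071 mW
```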

One method that system designers use to “map out” the power lost in a fiber
optic link is a power budget, a graph with power on the vertical axis and distance
on the horizontal axis. As Example 11.7 shows, the power budget is a convenient
graphical representation of the entire fiber optic link illustrating the position and
loss associated with each component in the system. A typical power budget
shows the input light source power (Pin) expressed in dBm; the light lost due to
the light source-to-fiber connection; the loss of each connector or splice; the fiber
loss in dB/km; the fiber-to-receiver connection loss; and the receiver sensitivity
(Rs), which is the minimum power in dBm that must be present at the receiver to
ensure proper detection of the signal. The excess power left over at the receiver
after all of the losses have been taken into account is known as the power margin,
which is usually specified by the designer. A power margin is necessary to protect
against losses that occur during the life of the system due to aging components or
unexpected fiber damage.

EXAMPLE 11.7
A fiber optic link is designed according to the following specifications:
Laser input power (Pin) = 10 mW (10 dBm)
Light source-to-fiber loss = 3 dB
Fiber loss per km = 0.3 dB/km
Fiber length (x) = 60 km
Connector loss = 1 dB (five connectors spaced 10 km apart)
Fiber to detector loss = 3 dB
Receiver sensitivity (Rs) = –40 dBm
Sketch the power budget graph and find the power margin.

Solution
The total loss is determined by adding up all of the individual losses.
Total loss = 3 dB + (60 km × 0.3 dB/km) + (5 × 1 dB) + 3 dB = 29 dB
Power at the receiver = Input Power − Total loss
= 10 dBm − 29 dB = −19 dBm
The power margin is the difference between the receiver sensitivity and the
power at the receiver, or
Power margin = −19 dBm − (−40 dBm) = 21 dB
To summarize, the receiver requires a minimum of −40 dBm (0.1 µW using
Equation 11.9); the output power after all of the losses are included is −19 dBm
(12.6 µW using Equation 11.9), which is 21 dB above the minimum signal level
needed for proper detection. System loss is illustrated by the power budget
diagram: power versus distance along the 60 km fiber link, showing the 10 dBm
input power, the 3 dB source to fiber loss, 1 dB loss for each connector, 3 dB of
fiber loss for each 10 km of fiber, and the 3 dB fiber to detector loss, for a total
loss of 29 dB.
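
A power budget is also easy to tabulate in code. The sketch below simply adds up the losses specified in this example:

```python
# Power budget for the link of Example 11.7 (all values from the example).
source_dbm        = 10          # laser input power, dBm
source_to_fiber   = 3           # dB
fiber_loss        = 60 * 0.3    # 0.3 dB/km over 60 km = 18 dB
connector_loss    = 5 * 1       # five connectors at 1 dB each
fiber_to_detector = 3           # dB
receiver_sens_dbm = -40         # receiver sensitivity, dBm

total_loss = source_to_fiber + fiber_loss + connector_loss + fiber_to_detector
at_receiver = source_dbm - total_loss
print(f"total loss = {total_loss:.0f} dB")                         # 29 dB
print(f"power at receiver = {at_receiver:.0f} dBm")                # -19 dBm
print(f"power margin = {at_receiver - receiver_sens_dbm:.0f} dB")  # 21 dB
```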


Measuring Fiber Loss


One of the most straightforward methods of measuring fiber loss is to
inject a known amount of optical power into a fiber and then measure the output
power. This is easily accomplished using an optical loss test kit, which consists of
an optical source (either LED or laser) and power meter. Most modern fiber optic
loss sets can be configured to operate at multiple wavelengths over a large
dynamic range and can be equipped with a variety of connector types. The
drawback to using this method is that both ends of the fiber need to be accessible,
which is not always the case.

Figure 11.11 - Optical fiber attenuation measurement. Top: measuring the reference patch cord. Bottom: testing the fiber.

Measuring the loss in a fiber optic link generally involves measuring the
attenuation of the cable under test and comparing it to a known reference cable.
This is necessary because the output power of the light source in the test kit is not
known and, in fact, may change over time. In Figure 11.11 light is launched into
the reference cable, or patch cord, from the source and is measured with the
detector. This is the reference value. The cable under test is then connected to the
patch cord and a second measurement is made. This is the measured value. The
attenuation of the cable is simply the difference between the two values.
Figure 11.12 - Fiber optic loss set. (Photo courtesy AFL Telecommunications, www.afltele.com)

Attenuation = Reference Value − Measured Value

For the measurement illustrated in Figure 11.11,
Attenuation = (−10 dBm) − (−12.4 dBm) = 2.4 dB
Most higher quality optical loss sets also have a reference feature that allows the
measured value to automatically be subtracted from the reference value for
convenience.


Another method for determining the loss of a fiber optic cable in dB/km
is known as the cutback method (Figure 11.13). In this measurement, light is first
launched into a certain length of fiber (L1) and the output power (P1) is measured
with the detector. A section of fiber is then cut off and the output power (P2) is
measured at the output of the remaining length of fiber (L2) with the detector.
The loss of the fiber in dB/km can then be determined from
Loss = (P1 − P2) / (L1 − L2)

where the difference L1 – L2 is expressed in kilometers and P1 – P2 is in dB.

Figure 11.13 - Cutback method to determine fiber loss.
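
A minimal Python sketch of the cutback calculation; the power and length values below are hypothetical, chosen only to illustrate the formula:

```python
# Cutback method: fiber loss in dB/km from two measurements, with
# powers in dBm and lengths in km (the long length is measured first).
def cutback_loss_db_per_km(p1_dbm, l1_km, p2_dbm, l2_km):
    return (p1_dbm - p2_dbm) / (l1_km - l2_km)

# Hypothetical readings: -14 dBm through 5 km, -10 dBm through 1 km.
print(f"{cutback_loss_db_per_km(-14, 5, -10, 1):.1f} dB/km")
# -1.0 dB/km, i.e., a loss of 1 dB per kilometer
```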

Insertion Loss Measurements


The total optical power loss caused by the insertion of an optical
component into a fiber optic system is called insertion loss, and it is an important
parameter commonly used to specify performance of a connector, splice or
coupler. It is determined by measuring the optical power before and after an
optical component has been inserted. The two power measurements are then used
to calculate loss using a form of Equation 11.7

Lossil = 10 log(Pafter / Pbefore)    Insertion Loss

where Pafter is the power measured after the component has been installed and
Pbefore is the initial power. Insertion loss is usually stated on manufacturers' data
sheets.

The Optical Time Domain Reflectometer


If both ends of the fiber are not available, loss measurements may be
made using an Optical Time Domain Reflectometer (OTDR). This instrument
sends short pulses of laser light down a fiber and measures the amount of time
needed for the reflected pulses to return. It is easy to understand that pulses are
reflected from splices and connectors. However, light is also reflected from tiny
variations in the core glass by Rayleigh scattering. The OTDR builds up a
graphical picture of power versus distance in the fiber. In addition to fiber loss
(attenuation), an OTDR can be used to measure splice and connector loss and to
locate breaks in the fiber. A commercially available OTDR is shown in Figure
11.14.
In Figure 11.15, reflected events, labeled "back reflections," are a result
of Fresnel reflection due to light striking an air-glass or glass-air interface. Non-
reflective events caused by splice attenuation are indicated as slight drops in
optical power. Finally, loss per kilometer of the fiber itself is determined by
measuring the slope of the OTDR trace. A steeper slope indicates higher dB/km
fiber attenuation.

Figure 11.14 - An OTDR. (Photo courtesy AFL Telecommunications, www.afltele.com)

Figure 11.15 - A typical OTDR trace: power versus distance, showing the initial pulse, the slope due to attenuation in the fiber, a splice, a back reflection at a connector, the back reflection at the end of the fiber, and the noise floor.

11.6 FIBER OPTIC SYSTEM COMPONENTS

Fiber Optic Cable


For most applications, optical fiber is packaged in some type of cabling
to protect it from the environment. For example, fiber for outdoor use needs
protection from water, which would eventually degrade the fiber, and submarine
fiber must have armored protection against biting sharks. Cabling adds the
strength needed to pull the fiber into its final position and provides crush
resistance and protection from excess bending. Cables may be as simple as a single
fiber plus a strength member such as aramid yarn (Kevlar), bundled together in
an outer jacket, or they may be quite complex containing many separate fibers in
gel-filled tubes (to exclude water), several strength members and metal armor.
Specialized tools are often required to remove the cabling in order to splice the
fiber inside.

Connectors and Splices


Fiber optic connectors are used for temporary connections, such as in a
large distribution frame. Unlike telephone lines that have a single standardized
connector, a wide variety of connector types is available for optical fiber.
Connector names are usually two or more letters such as ST (a twist lock bayonet
type), FC (which has a screw on coupling), SC (a square snap on connector) and
FDDI (specifically for Fiber Distributed Data Interface networks).
Newly developed connectors, such as the MT-RJ and the LC, are called
"small form factor" connectors. When hundreds of fibers need to be connected in
distribution frames, the size of the connector becomes an issue and these newer
connectors are smaller than the older varieties of the 1980s and 1990s. Figure
11.16 shows examples of common fiber optic connectors. Overall, depending on
the connector, connector losses can range from as little as 0.10 dB up to 1 dB.

Figure 11.16 - Some of the many types of fiber optic connectors (FC, SC, MTP, ST, FDDI, MT-RJ, LC, MU). (Courtesy Fiber Instrument Sales, www.fiberinstrumentsales.com)

Splicing can provide either a temporary or a permanent connection
between two fibers. Mechanical splices use an adhesive or a clamp to hold the
ends of two fibers together for a quick repair or test procedure (Figure 11.17).
Average splice loss for a mechanical splice is approximately 0.2 dB.

Figure 11.17 - Mechanical Splices (Courtesy of Cables Unlimited, Inc., www.cables-unlimited.com)

When the ends of the fibers must be permanently joined, fusion splicing
is used. The ends of two fibers are heated by an electric arc to soften them, and
then pushed together so that they essentially become a single fiber. Fusion
splicing is done on special machines called fusion splicers that may cost from
several thousand dollars to tens of thousands of dollars. Average splice loss for
fusion splicing can be less than 0.1 dB. Figure 11.18 illustrates the process of
fusion splicing two fibers together. More expensive fusion splicers are capable of
automatic x-y-z position alignment to ensure maximum throughput, as well as
rotational alignment for polarization maintaining fiber. Some fusion splicers are
able to simultaneously splice up to twenty-four fibers in a ribbon cable!

Figure 11.18 - Fusion splicing process. The two ends of the fiber are aligned in a mechanical holder called a v-groove or a vacuum chuck (upper right). The fusion splicer then applies a pre-fusion electrical arc to rid the fibers of any contaminants (lower left). Then a high voltage electrical arc is applied which physically fuses or melts the two fibers together. The result is essentially a single fiber (lower right). The video screens show two different views of the fiber, top and side.


Fiber Optic Couplers


Connectors and splices are used to join two fibers together in either a
temporary or permanent connection. Couplers, on the other hand, join one fiber or
many fibers to many other separate fibers, and may be designed so that each
output fiber receives equal or unequal power. There are two general categories of
couplers, star couplers and tap couplers (Figure 11.19).
Figure 11.19 - Star coupler. A 4 x 4 star coupler distributes input powers P1 through P4 so that each of the n output ports carries (P1 + P2 + P3 + P4)/n.

Star couplers are used to distribute an optical signal from N inputs to N
outputs. Light coming into any port is equally divided among all the output ports.
Star couplers are generally used to connect a large number of terminals to a
network, as in an Ethernet network. An important parameter that describes how
light is distributed is called the power division. For example, in a star coupler
with eight inputs and eight outputs (8 x 8), the power from any of the eight inputs
is equally split among each of the eight outputs.

Pout = Pin/n = Pin/8

Power division among coupler ports is commonly expressed using
decibel notation

Power Division (in dB) = 10 log(1/n)

With power division expressed in decibels, if the input power is in dBm the
output power can also be expressed in dBm by simply subtracting the power
division from the dBm input power.

EXAMPLE 11.8
Find the power division for a 4 x 4 star coupler with 3 dBm input power.

Solution
Power Division (in dB) = 10 log(1/4) = −6 dB

The output power is the input power minus the power division, or
+3 dBm − 6 dB = −3 dBm, which is equivalent to 0.5 mW into each port.


Another important characteristic of star couplers is crosstalk, the extent
to which a signal transmitted on one input of the coupler creates an undesired
effect in another input or channel. Crosstalk is usually expressed in decibels and
is typically greater than 40 dB. Excess power loss is a parameter used to describe
the power that is lost in the star coupler from a variety of other effects including
absorption, scattering and any other mechanism that reduces the power available
at the output of the coupler. For example, if the total input power at one input of
a star coupler is 10 dBm (10 mW) and the total power from all outputs combined
is 9 dBm (7.9 mW), the excess power loss is 1 dB.
A tap coupler, or T-coupler, is used to "tap" off some of the light from a
fiber optic cable for signal distribution or for monitoring purposes (Figure 11.20).
The amount of power split off can vary from as little as 1% to as much as 50%
depending on the application. The percentage of power splitting is known as the
split ratio. For example, a T-coupler in which 10% of the light is split off would
be a 90/10 T-coupler. The split ratio can also be expressed in terms of decibels.
For example, in a 3 dB T-coupler, the output power at the two ports is split 50/50.
In general,

PTap (dB) = 10 log(Pout/Pin)

Figure 11.20 - Tap Coupler, with input P1 and outputs P2 and P3.

11.7 FIBER OPTIC COMMUNICATIONS AND DEVICES

Wavelength Division Multiplexing and Optical Amplification


How does optical fiber transport data at such high rates? Since different
wavelengths can travel along a fiber without interfering, it is possible to combine
many slightly different wavelengths and transmit them on the same fiber. This
transmission technique is called Wavelength Division Multiplexing (WDM) and
it is at the heart of modern high-speed communications.
At the receiving end, the individual wavelengths are separated into
different channels. A schematic of a WDM system is shown in Figure 11.21.
Each WDM data channel may consist of a single data source or a combination of
multiplexed data sources. When multiple wavelengths are transmitted through the
same fiber using very closely spaced wavelengths (< 0.8 nm), the process is
known as dense wavelength division multiplexing (DWDM).
Of course, in order for such a complex system to work, standards must
exist so that equipment from different manufacturers can work together. In
DWDM systems, the International Telecommunications Union (ITU) sets
standard frequency spacing for communication channels in what is known as the
ITU-Grid. The portion of the grid shown in Table 11.3 specifies 100 GHz spacing
(equivalent to about 0.8 nm wavelength spacing) between transmission channels.
50 GHz spacing is also specified for some fiber optic systems.

Figure 11.21 - WDM system with EDFA (optical amplifier). Input wavelengths λ1 through λN are combined by a WDM onto one fiber, amplified by an EDFA (fiber amplifier), monitored for power and wavelength through a tap coupler, and separated back into output wavelengths λ1 through λN by a second WDM.

DWDM systems usually operate with wavelengths between roughly 1530
nm and 1570 nm because of the low attenuation of glass in the 1550 nm region of
the spectrum and the availability of Erbium-Doped Fiber Amplifiers (EDFA)
which can amplify signals over a range of wavelengths near 1550 nm.

Center λ – nm   Optical f – (THz)   Center λ – nm   Optical f – (THz)
1530.33 195.9 1546.92 193.8
1531.12 195.8 1548.51 193.6
1531.90 195.7 1549.32 193.5
1532.68 195.6 1550.12 193.4
1533.47 195.5 1550.92 193.3
1534.25 195.4 1551.72 193.2
1535.04 195.3 1552.52 193.1
1535.82 195.2 1553.33 193.0
1536.61 195.1 1554.13 192.9
1537.40 195.0 1554.93 192.8
1538.19 194.9 1555.75 192.7
1538.98 194.8 1556.55 192.6
1539.77 194.7 1557.36 192.5
1540.56 194.6 1558.17 192.4
1541.35 194.5 1558.98 192.3
1542.14 194.4 1559.79 192.2
1542.94 194.3 1560.61 192.1
1543.73 194.2 1561.42 192.0
1544.53 194.1 1562.23 191.9
1545.32 194.0 1563.05 191.8
1546.12 193.9 1563.86 191.7
Table 11.3 The ITU GRID (100 GHz spacing)
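
Since the grid is specified by frequency, the wavelengths in Table 11.3 follow from λ = c/f. A minimal sketch:

```python
# Convert an ITU grid frequency (THz) to its center wavelength (nm).
C = 299_792_458    # speed of light in vacuum, m/s

def wavelength_nm(f_thz):
    return C / (f_thz * 1e12) * 1e9

print(f"{wavelength_nm(193.1):.2f} nm")    # 1552.52 nm, as in the table
print(f"{wavelength_nm(193.1) - wavelength_nm(193.2):.2f} nm")
# 0.80 nm: the wavelength spacing of two channels 100 GHz apart
```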


Wavelength division multiplexers use several methods to combine and
separate different wavelengths, depending on the spacing between the
wavelengths. The two most common types of WDMs are those that use
multilayer thin film interference filters and those that use fiber Bragg gratings.
A multilayer thin film interference filter consists of several layers of thin
dielectric material of alternating index of refraction. This is similar to the thin
film interference filters described in Chapter 6. At each thin film optical
interface, a portion of the light entering the filter off axis is reflected and a
portion is transmitted. By selecting the appropriate thin-film thickness (or by
positioning the filter at the appropriate angle), the light reflecting at each
interface can be made to produce either constructive or destructive interference.
Depending on the wavelength, selecting the appropriate filter thickness can
produce a filter that passes certain wavelengths and rejects others. Thin-film
filters can be used to separate broadly spaced wavelengths, for example 1310 nm
and 1550 nm, and can also provide pass-bands that are narrow enough to be used
in WDM applications utilizing up to 32 channels with 1-2 nm spacing. A
commercially available thin film interference filter type WDM is shown in
Figure 11.23.

Figure 11.22 - Multilayer thin film interference filter.

Figure 11.23 - Commercially available thin film interference filter for WDM.
For more closely spaced channels, fiber Bragg gratings (FBG) are
commonly used (Figure 11.24). A fiber Bragg grating is essentially a diffraction
grating formed within the core of a small segment (several inches) of specially
doped optical fiber. The process of creating a fiber Bragg grating involves
directing a beam of ultraviolet light through a diffraction grating (or special mask)
and onto a photosensitive optical fiber. The UV light introduces periodic
variations in the index of refraction (high-low-high-low, etc.) of the fiber core. When
light of several wavelengths enters the grating, one specific wavelength will be
heavily reflected through constructive interference at the multiple interfaces. The
remaining transmitted light will contain all of the wavelengths entering the FBG
except for the reflected wavelength. FBGs are available with bandwidths as small
as 0.05 nm.
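
Although this text does not derive it, the reflected wavelength follows from the standard Bragg condition λ = 2nΛ, consistent with the caption of Figure 11.24 (the grating period Λ is one half of the in-fiber wavelength). A minimal sketch; the index value below is an assumed, typical number:

```python
# Bragg condition: reflected wavelength = 2 x (effective core index) x
# (grating period). The effective index 1.468 is an assumed value.
def bragg_wavelength_nm(period_nm, n_eff=1.468):
    return 2 * n_eff * period_nm

print(f"{bragg_wavelength_nm(528.0):.1f} nm")   # about 1550 nm
```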

Figure 11.24 - Fiber Bragg grating. One wavelength is reflected and the remaining light is transmitted. The periodic index of refraction variations are separated by one-half of the reflected wavelength. (Labels: input wavelength(s), transmitted wavelength(s), reflected wavelength.)

The wavelength reflected by the FBG is collected using a device called a
circulator. As illustrated in Figure 11.25, a circulator acts somewhat like a traffic
circle, routing light from port to port in one direction only. In this illustration,
multiple wavelengths enter the device and are routed to the FBG. The one
reflected wavelength is then returned to the circulator, where it continues on to
the output port. To separate multiple wavelengths from a composite signal, all
one needs to do is cascade several FBG/circulator combinations, each “tuned” to
the desired wavelength. When combined, a FBG and a circulator form what is
called an Add/Drop Multiplexer, a device used to both extract and reintroduce
specific wavelengths into a WDM system (Figure 11.26).

Figure 11.25 – FBG and circulator.

Figure 11.26 - Fiber optic add/drop multiplexer.

Fiber Optic Sources and Detectors


The type of light source used in a fiber optic system depends on a variety
of factors such as the type of data (analog or digital), data rate, modulation
technique, output power, type of fiber to be used, wavelength stability, ease of
handling, distance over which data will be sent and spectral width. Most fiber
optic communication systems use either LEDs or laser diodes. In Chapter 3 we
described the basic operation and characteristics of LEDs and in Chapter 9 we
discussed the basic operation and characteristics of laser diodes. While it is
beyond the scope of this chapter to describe in detail the design and operation of
these devices specifically for fiber optic systems, this section will provide you
with a rudimentary understanding of where each type of light source is used in
fiber optics.
LEDs for fiber optic systems usually operate at 850 nm and 1310 nm and
are used primarily in short distance (< 1 km) fiber optic applications where data
rates are relatively low (< 500 Mb/s). Red LEDs at 665 nm are sometimes used
with multimode plastic optical fiber for very short distance links. Since LEDs do
not have highly directional output, they are commonly used with multimode
fibers which have a relatively large numerical aperture. One of the main benefits
of LEDs is their ease of use. Because the output optical power of LEDs is linear
with current, they can be operated in both analog mode for direct voice or video
communications, or in digital mode for data communications. LEDs are also
much less expensive than laser diodes and do not require complex circuitry to
operate. They are available discretely as individual components, pigtailed (the
fiber optic cable attached with a connector by the manufacturer) or circuit board-
edge connectorized, which allows a standard connectorized multimode fiber to be
mated to the LED.
The main drawback of the LED is that it is not very fast, does not provide
enough output power for signal transmission over a long distance and, given its
large numerical aperture and spectral width, can produce severe modal
distortion and chromatic dispersion.
Laser diodes, on the other hand, are available at 850, 1310, and a broad
range of wavelengths around 1550 nm. They can be operated at very high data
rates (up to 40 Gb/s in some cases) and can produce enough output power to send
optical information over 100 km without re-amplification. Diode lasers are
available with relatively broad spectral outputs (~2 nm), as in the case of a Fabry-
Perot laser, or with extremely narrow spectral widths (~0.00001 nm), as in a
distributed feedback laser (DFB). Like LEDs, laser diodes are available in a
variety of pigtailed or connectorized packages including TO-style cans, dual
inline packages (DIP) or butterfly-mount packages that can be plugged directly
into a circuit board or surface-mounted respectively. Most laser diodes also
include a built in photodetector for real-time power monitoring. Compared to
LEDs, however, diode lasers require more complex circuitry to operate and are
difficult to operate in a linear fashion.
One of the biggest challenges in using diode lasers in high-speed
telecommunications applications is their wavelength stability. Semiconductor
diode lasers tend to drift in wavelength with both temperature and current
variations. If multiple channels of data are to be transmitted without interference,
as in DWDM systems, the closely spaced ITU wavelengths cannot be allowed to
drift into one another. DWDM systems therefore require laser sources that are
precisely controlled for current and temperature. This is accomplished by tapping
off a small portion of the transmitted wavelengths with a tap coupler and
monitoring both power and wavelength. The information is then fed back to the
transmitter electronics and, through the use of a device known as a thermoelectric
cooler (TE cooler), the temperature of the laser is either increased or decreased by
the appropriate amount needed to shift the wavelength back to its designated
position.
The detectors used in fiber optic communication systems generally fall
into one of two categories: PIN photodiodes and avalanche photodiodes (APD).
These were discussed in Chapter 3 so we will simply summarize the results here.
In general, APDs have much higher gain than PIN photodiodes and are thus more
suitable for very low light levels. APDs are also more expensive and, because of
higher voltage requirements and temperature sensitivity, require specialized
circuitry to operate. Thus they are used mostly for very high-speed long haul data
links, while PIN photodiodes are the detector of choice for less demanding
applications.

Erbium Doped Fiber Amplifiers


One of the “bottlenecks” of early optical fiber systems was the necessity
of converting optical signals to electronic signals for amplification and
regeneration. Once amplified, the signals were converted back to light and sent
over the next link of optical fiber. Each wavelength required its own amplifier.
The development of Erbium Doped Fiber Amplifiers allowed optical signals to be
amplified and regenerated without having to be converted back to the electrical
domain.
EDFAs are optical amplifiers that operate in the 1530-1570 nm region of
the spectrum. An EDFA consists of a short segment of special fiber doped with
Erbium (Er3+) ions. When excited with a diode pump laser operating at either 980
nm or 1480 nm, the EDFA behaves like a laser without the cavity mirrors; the
pump laser produces a population inversion and amplifies signal wavelengths
between 1530 nm and 1570 nm by up to 30 dB through stimulated emission. Multiple
wavelengths can be amplified simultaneously without any electronic conversion,
allowing for long distance high bandwidth communications with minimal
components.
Newer forms of all-optical amplification include Raman amplifiers,
which transfer energy from a high power pump beam to the weaker signal beam
through interactions between light and atoms in the fiber. Unlike an EDFA, a
Raman amplifier can in principle amplify any wavelength being transmitted and
no special doped fiber is required.

11.8 NON-TELECOMMUNICATIONS USES FOR OPTICAL FIBER


Optical fiber is extremely versatile. Although telecommunications
applications of optical fiber have received a great deal of media attention, fiber
has many other uses. One application that has generated much interest was
mentioned in Chapter 10: fiber lasers. In this chapter we present a handful of
other non-telecommunications applications.

Fiber Sensors
Chemical sensors use an optical fiber whose end is impregnated or doped
with a chemical that changes color (or other optical property) when exposed to an
environmental change. Measurement of the optical change is calibrated to
indicate the degree of environmental change. For example, a fiber may be
constructed with a porous end that is filled with a chemical indicator. This can be
placed in a hostile environment and, from a safe distance, light is sent into the
fiber and the return light is monitored for changes indicating a chemical reaction.
Mechanical sensors use the changes in light properties that occur when a
fiber is bent or stretched. For example, if a fiber is bent, some of the light that
would have been guided because it strikes the core-cladding interface at greater
than the critical angle is now lost because bending changes the angle of incidence.
The amount of power lost is an indication of the amount the fiber has moved.
This effect could be incorporated in a sensor to measure the motion of a structure.
Fiber Bragg gratings may be used to detect changes in temperature or
pressure. Changing the grating spacing by stretching will change the spacing of
the index of refraction variations in the core, and thus return a different
wavelength. Any mechanism that stretches the fiber, such as a change in
temperature or pressure or an applied mechanical strain, can be measured by
sending a broad spectrum of light into the fiber and measuring the reflected
wavelength.

Power Transfer by Fiber


In many laser materials processing applications, the high power laser
light is directed to the work piece by an optical fiber. In this case the fiber may be
made of material more transparent to the given wavelength than silica based
glass. The large, cabinet-sized laser remains in place and the light is delivered
precisely where needed through an optical fiber. In addition to novelty lighting
such as fiber optic holiday decorations, fiber lighting has been used in areas
where it is difficult to place (or replace) light bulbs. For example, dashboard
lighting can be provided by a conveniently placed bulb, with light delivered to
displays by optical fiber. Optical fiber delivery is common in industrial settings,
such as a welding station, and doctors' and dentists' offices, where it is used with
a laser to provide an "optical scalpel.” High power lamps are used with large
core fibers to provide lighting in tight places, such as ships and submarines.


Coherent Bundles
For power transfer, it is only necessary that each fiber go from one end of
the bundle to the other. To make a coherent bundle, the fibers must maintain their
orientation. For example, if the bundle is square, the fiber that begins at one end
in the lower left corner must end up in the same corner at the other end. Such
coherent bundles are made by the tedious process of laying each fiber by hand,
one by one, into a special form, keeping the fibers straight and properly
positioned from end to end. Coherent bundles in flexible form are used as
endoscopes to look into everything from jet engines under test to human knees
during surgery.
The coherent fiber bundle may also be fused together and then put
through the drawing process again, producing a fused bundle about the size of the
original fiber. These bundles are laid out again, fused again and so on. The result
is thousands of fibers lying side by side, fused into a single rod of glass. These
can be used for image transfer (placed on a printed page, the print appears at the
top of the glass piece) and can be tapered to provide magnification, or twisted to
invert the orientation of an image (Figure 11.27).

Figure 11.27 - Fused fiber bundles can be used to produce a magnified image, to rotate an image, or to move an image from one location to another without the use of a lens. (Photo courtesy of CTFiberoptics Inc., www.ctfiberoptics.com/)

REFERENCES
1. Palais, J. Fiber Optic Communications, Ed 5, Prentice Hall, 2004.
2. Hecht, J. City of Light: The Story of Fiber Optics, Oxford University Press,
1999
3. Mynbaev, D. and Scheiner, L. Fiber-Optic Communications Technology,
Prentice Hall, 2000.
4. Sterling, D. Technician's Guide to Fiber Optics, Ed 4, Delmar Learning,
2003.
5. Fiber Optic Technology (formerly Fiberoptic Product News) is a trade
magazine, free to qualified individuals. www.fiberoptictechnology.net

WEB SITES
1. Several excellent tutorials and technical papers (Discovery Center)
www.corningcablesystems.com/
2. History, facts, hardware and links
www.aboutfiberoptics.com/
3. "Lennie Lightwave's Guide to Fiber Optics" tutorials for fiber installers
www.jimhayes.com/lennielw/index.html
4. To learn more about hollow core "photonic" fibers
www.omni-guide.com/


REVIEW QUESTIONS AND PROBLEMS

QUESTIONS
1. You are working in the fiber optics industry. What would you say to a potential
customer when he asks you why he should install a fiber optic network rather than a
copper wire network?

2. Describe the differences between multimode and single mode fiber. For what
applications might each fiber be used?

3. What causes pulse spreading in a multimode fiber? In a singlemode fiber?


4. For what purpose is an OTDR used?
5. Describe the causes of light loss (attenuation) in a fiber.

LEVEL 1 PROBLEMS
6. If most telecommunications fiber has numerical aperture between 0.1 and 0.3, what
is the corresponding range of acceptance angles?

7. Calculate the numerical aperture of a fiber whose core index is 1.56 and whose
cladding index is 1.54.

8. What is the V-number of a fiber whose N.A. is 0.32, core radius is
50 µm and operates at a wavelength of 1.31 µm?

9. What is the maximum core diameter for a fiber to operate in a single mode at a
wavelength of 1310 nm if the N.A. is 0.12?

10. Convert to dBm: 50 µW, 250 µW

11. A signal of 100 µW is injected into a fiber. The signal exiting from the other end is
40 µW. What is the loss in dB?

12. What is the loss in dB if half the input power is output?

13. A fiber of 10 km length has Pin = 1 mW and Pout = 0.125 mW. Find the loss in dB/km.

LEVEL 2 PROBLEMS
14. Approximately how many modes exist in a fiber with n1=1.534, n2=1.5312 and core
diameter = 50 µm when it operates at 1300 nm?

15. A communication system uses 8 km of fiber that has a 0.8 dB/km loss characteristic.
The input power is 20 mW. Find the output power.

16. A system consists of an LED that can couple –13dBm into a fiber. The receiver
requires –42 dBm to detect the signal. The connectors and splices in the link have a
total loss of 2 dB and the designer wants to leave a 6 dB margin for future expansion.
How long can the fiber link be if fiber loss is 3.5 dB/km?


The saying “A picture is worth a thousand words”
takes on new significance as electronic imaging
becomes less expensive and more widespread. You
becomes less expensive and more widespread. You
encounter digital images daily, probably without
thinking about them, as millions of images are scanned
from film or paper, stored on computers or sent around
the world through the Internet. You may have a digital
video camera, digital camera, cell phone with a
camera or all three! Our entertainment, news and work
all depend on digital imaging, which increasingly helps
keep us healthy and safe. In this chapter you will learn
how optics and photonics have joined together with
computers to capture, store and display images.

Hurricane Ivan (NOAA Coastal Services Center, www.csc.noaa.gov)

Chapter 12

IMAGING
12.1 EVOLUTION OF IMAGING
People have recorded their world in artistic images since prehistory, but
such art was slow, laborious and usually a somewhat inaccurate record of what
was seen. The artist examined the scene and used paints, charcoal or other
materials to recreate his or her perceived image. Modern imaging began nearly
200 years ago when early photographers used optics to direct light to initiate a
chemical reaction that preserved an image. In the last few decades electronic
sensors have replaced light-sensitive chemistry for image capture and computers
have advanced the manipulation, enhancement and display of images.
The first successful permanent optical image was captured in 1827 by
Niépce in France, but it took eight hours to produce one image! He used simple
optics—the well-known camera obscura (or pinhole camera) created a real image
in a darkened room or tent where the light sensitive chemicals were positioned.
In 1839 Daguerre announced an improved process with a wet chemical emulsion
on a metal plate that required “only” thirty minutes per image. Daguerreotypes
were an immediate sensation. Improved optics and chemistry soon reduced
exposure times to under a minute, making portraits practical and imaging
relatively inexpensive (only a week’s wages for one picture). However, each
original Daguerreotype had to be independently exposed and the wet chemistry
and portable dark room were awkward and inconsistent. Daguerreotypes were
quickly surpassed by improved methods. Paper prints, first made in the 1840s by
Talbot, were initially of poor quality. As print-making improved, photography
became more accessible and popular.
In 1851 Archer improved the light-sensitive chemistry with the wet
collodion process. Wet coated glass plates provided the light-sensitive emulsion.
When dried, the plates could make detailed prints. Twenty years later Maddox
produced gelatin plates, the first dry plate process, freeing photographers from
their cumbersome equipment. In 1884 Eastman introduced celluloid-based
film—a dry, flexible light sensitive medium that could be manufactured in rolls.
Then in 1888 Eastman introduced the simple box camera and popular
photography was born. The slogan "you press the button, we do the rest”
announced a reliable affordable method for capturing, storing and reproducing
black and white images. The Kodak® camera, the first of the modern roll-film
cameras, appeared in 1898.
In the early 20th century photography advanced quickly. Flammable
cellulose nitrate was replaced with cellulose acetate, and standardization of film
speeds and formats created a worldwide industry. Chemists developed color-
sensitive dyes for red, green and blue light, and developed layered color
emulsions for films and photographic paper. By 1935 amateurs could use
Kodachrome® film for color photography. Most consumers used commercial film
processing to complete the chemistry and produce their pictures. In the 1990s
automated color processing equipment appeared in the corner drugstore, allowing
non-chemists to convert film into prints in less than an hour for pennies a print.
Recognizing the public’s impatience with the delays of developing,
Edwin Land introduced “instant” (or Polaroid®) photography in 1948. For the
next 50 years the Polaroid Corporation was a leader in imaging systems ranging
from instant photography to digital cameras.
Camera technology also improved quickly as Eastman’s box camera
evolved into precision systems with high-quality optics, electronic exposure
controls, automated film handling and integrated flash equipment. In 1963 the
first cartridge-loading cameras appeared and in 1989 one-time-use “disposable”
cameras were marketed. In 1996 the Advanced Photographic System merged
computers with film cameras by recording each photo’s camera settings on the
film, allowing variable formats and compensation in the developing process. But
a major transition had begun—to digital imaging.
Electronic imaging is rooted in the 1887 discovery of the photoelectric
effect by Hertz and later explained by Einstein. Certain materials when exposed
to photons produced free electrons. This effect was weak and the equipment
bulky. Unlike chemical light sensitive emulsions that react to photons with
resolution at the molecular level, electronic photon detectors have resolution
related to their geometry. Clever schemes arose to address only small regions of
a photovoltaic detector, such as sweeping an electron beam across light sensitive
electronic material. Areas with many photon arrivals would draw more beam
current, in effect “reading” the beam where it struck and rebuilding the pattern of
photon intensity. These schemes led to the early video cameras for television,
but their continuous (analog) signals were complicated to manage. Storage was
extremely difficult and subject to noise that corrupted the image. Color imaging
required capture and synchronization of color using colored wheels or filters.
In 1961 the first integrated circuits were marketed. Optical patterning
yielded highly regular and extremely small structures. Engineers quickly
realized that integrated arrays of photovoltaic cells could be made to collect
electronic images. Each picture element (pixel) would yield a voltage
proportional to the photon intensity at that location. Other circuits, essentially a
dedicated computer, would organize these voltages as a series of numbers that
represented the picture. As with photography, the early devices were black and
white, but color versions were soon developed.
Digital imaging cameras were available as early as 1972 for research and
military applications. In 1982 the first consumer digital cameras arrived and
since then the technology has rapidly improved. Picture elements have become
smaller, yielding better resolution; power consumption has fallen and battery life
increased; and storage, camera optics and electronic controls have improved.
While professional imaging units with high quality optics can cost thousands of
dollars, simple cameras are virtually given away in newer cell phones or as
premiums. The Internet, DVD and CD technology, as well as digital television
have facilitated the growth of digital imaging, and new applications and
technologies are announced almost daily. In the near future you will be able to
buy devices to “paint” digital images directly on your retina, read flexible books
and newspapers using electronic ink and have access to immense databases of
images for work, learning or entertainment.
In the rest of this chapter we will consider only electronic imaging, often
referred to as "digital imaging.” Photography with light-sensitive chemistry will
remain an important technology for image capture and certain kinds of archival
storage, but increasingly ours is a digital world. Trends in electronics,
miniaturization and computers will make digital images more competitive with film
photography. Digital imaging offers truly “instant” images, making review easy.
There is no cost for unwanted images—memory can be erased and used again.


Storage and display are easy and cheap and friends around the world can share
images with the click of a mouse. Film companies recognize these trends when
they offer to scan your film photos to a CD or the Internet, or offer technology to
create paper prints from your digital camera files.

12.2 WHEN YOU TAKE A DIGITAL PICTURE…


Let’s quickly see what happens when you take a picture using a digital
camera. Daguerre would recognize much of what you do (see Figure 12.1)!
You would use your camera’s viewfinder—a simple lens—to see the
composition of the picture or you might observe a small color display on the
camera body. The display shows exactly what the imaging element “sees.” The
camera body also holds the electronics, batteries and controls. A lens (plastic or
glass) focuses the light from the scene onto the imaging sensor, which may have
between one and ten million pixels. An aperture allows you to change the
amount of light entering the body and helps you avoid under or overexposure.
Finally, pressing the control button opens a shutter that briefly allows light to
enter to expose the imaging element. The shutter may be a mechanical device,
but more often is an electronic control signal that activates the imaging element
for a fixed amount of time, say 1/500 of a second. Faster shutter speeds “freeze”
action, but also collect less light. Better cameras may compensate for reduced
light at fast shutter speeds by using wider aperture settings.

Figure 12.1 - Simple digital camera systems replace the film with electronic
sensors and displays. (Diagram labels: lens, aperture, shutter, imaging sensor,
display, computer/memory.)

It is the imaging element that would seem most amazing to Daguerre.


Imaging sensors have a grid of pixel elements, each of which changes its
electrical charge in response to the arrival of photons. In a consumer camera
format with 2872x2160 pixels, over 6.20 million pixels record the details of the
image. Each of these pixel locations actually contains one red, two green and
one blue-sensitive subpixels, arranged in a square, giving over 24.8 million
subpixels. All the pixels are exposed while the shutter is open, giving a
simultaneous exposure.


Next the electronics take over. The 24.8 million voltage readings are
each converted to a number in binary, and then read in order into the memory in
the camera body or a removable flash memory card. Perhaps 200-300
milliseconds pass while the conversion and storage are done. The camera then
shows the picture you have just taken on the rear display.
Later, you can transfer the 24.8 million numbers to your computer or
send them over the Internet. In a typical computer, these numbers are interpreted
by the monitor to control the display and rebuild the pixel-by-pixel image. The
details depend on the display technology, but each pixel location generates a mix
of red, green and blue light that recreates the original color for that location. You
see a smooth picture, unaware of the point-by-point underlying representation. If
you print your image, the printer interprets the numbers and controls the
deposition of inks or dyes to make a permanent copy.
What would really surprise Daguerre is your ability to modify your
picture. Say you wanted to remove “red eye” caused by the reflection of light off
your subject’s retinal blood vessels. With inexpensive software you could locate
those numbers representing the pixels in the red eye area of your image. If you
change those numbers, you will change the colors of the pixels at those locations.
Make the numbers at the red eye pixels all zero (equivalent to black), save the
data again and the red eye effect is gone! With more sophisticated manipulation
of the numbers, you could not only remove the red eye from your subjects, you
could remove an entire subject, or move them around the image. You have
complete control! Fantastic and unreal pictures can result and our vision cannot
distinguish the real from the cleverly processed image.

12.3 PRINCIPLES OF IMAGING


Let’s look now at imaging ideas in more detail. We will see how the
properties of light are used to create a representation of your image, how the light
is converted to data, how the data is stored, and finally how we rebuild an image
in order to display it.

Imaging Across the Spectrum


Influenced by photography, we usually think of images as
representations of the visible world (wavelengths between about 400nm and
700nm). But imaging can occur anywhere in the spectrum and can even be done
with other kinds of waves or measurements. For example, X-ray images
(wavelengths below about 10 nm) are critical to medical diagnoses and infrared
images (wavelengths around 10 µm) provide night vision and industrial thermal
imaging.

All optical imaging requires focusing optics appropriate for the
wavelength of interest and an electronic detector material responsive to the
photon energies involved. The focusing optics of our imaging system must be
transparent to our photons, requiring special lenses or mirrors for some
applications. For example, silicon, which looks like a dull metal in visible light,
is transparent to 10 µm infrared light. Water, which absorbs little light at the
blue end of the visible spectrum, is opaque in portions of the infrared.

Figure 12.2 - Absorption spectrum of pure water showing absorption
coefficient α as a function of wavelength.

We describe the wavelength dependence of a material's absorption
through the absorption coefficient, α(λ). It has units of (length)^-1 and, as the
function notation indicates, the absorption depends on the wavelength of light.
(Optical density, introduced in Chapter 1, is related to the absorption coefficient,
but it also includes the thickness of the material and so is unitless.) If Eo is the
irradiance at the surface of a material, the irradiance E after traveling through a
thickness x is given by the Beer-Lambert law,

Beer-Lambert Law (12.1)    E = Eo e^(-αx)

EXAMPLE 12.1
We define the transmittance of a given layer of a material as the fraction of
input irradiance exiting the layer. What is the transmittance of 3 cm of pure
water at λ = 800 nm (near infrared)?

Solution
From the semilog graph in Figure 12.2, at λ = 800 nm, α = 0.02/cm. Then

E/Eo = e^(-αx) = e^(-(0.02 cm^-1)(3 cm)) = e^(-0.06) = 0.942
94.2% of the incident light is transmitted through 3 cm of water. The lost
power is dissipated in the water as heat.
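
The Beer-Lambert calculation is easy to automate. Below is a minimal Python
sketch (the function name is ours; the 0.02/cm coefficient is the value read
from Figure 12.2 in Example 12.1):

    import math

    def transmittance(alpha_per_cm, thickness_cm):
        # Fraction of incident irradiance transmitted: E/Eo = exp(-alpha x)
        return math.exp(-alpha_per_cm * thickness_cm)

    # Example 12.1: pure water at 800 nm, 3 cm path
    print(transmittance(0.02, 3.0))   # about 0.942, i.e., 94.2% transmitted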

The second requirement is that the detector material respond to the
photons efficiently. To increase detector efficiency, we engineer materials so
absorption is high, keeping more photons in the material. Further, we engineer
an appropriate band gap, the energy required to release an electron from a bound
state to a conducting state (see Figure 12.3). This is the reverse of the process
discussed in Chapter 2, where conduction electrons released their energy in the
form of a photon with wavelength λ according to E = hc/λ. Now we want an
absorbed photon of wavelength λ = hc/E to convey its energy E to an electron,
raising the electron to a conduction energy level where it can be measured. More
photons create greater current, yielding a device voltage proportional to the
incident photon flow. Silicon can be a good detector up to wavelengths around
1100 nm. But at longer wavelengths, photons do not have enough energy to
release electrons to conduction in silicon.

Figure 12.3 - Absorption of photon energy frees an electron to the conduction
band. Once conducting, the electron charge can be measured to show light
intensity. (Diagram labels: conduction bands, valence bands, band gap energy.)
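
The detector cutoff follows directly from λ = hc/E. A short Python sketch,
assuming a band gap of about 1.1 eV for silicon (a commonly quoted value,
not given in this text):

    # Photons with wavelength longer than hc/E(gap) cannot free an electron.
    H = 6.626e-34     # Planck's constant, J s
    C = 3.0e8         # speed of light, m/s
    EV = 1.602e-19    # joules per electron volt

    def cutoff_wavelength_nm(band_gap_eV):
        return H * C / (band_gap_eV * EV) * 1e9

    print(cutoff_wavelength_nm(1.1))  # about 1130 nm, consistent with the
                                      # 1100 nm quoted above for silicon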
Images can also be collected from non-optical sources. In LIDAR (Light
Detection and Ranging) or laser radar, pulses of laser light are bounced off
objects and their round-trip time is measured. This gives the distance from the
laser to the objects. Lasers can provide precise measurements of objects or even
large areas, and this information is often presented as an image, although the
pixels were measured one at a time. Figure 12.4 shows an image created from
LIDAR data.
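
The conversion from round-trip time to distance is simple: d = ct/2. A minimal
Python sketch (the 100 ns sample time is invented for illustration):

    C = 3.0e8  # speed of light, m/s

    def lidar_distance_m(round_trip_time_s):
        # The pulse travels out and back, so use half the total path.
        return C * round_trip_time_s / 2

    print(lidar_distance_m(100e-9))  # 100 ns round trip -> 15 m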
Figure 12.4 - LIDAR image taken 9/11/2001 of the Twin Towers site in New
York. The debris pile was measured with 4 inch accuracy across an area of
blocks, from a low flying aircraft. False color can be used to suggest height and
shadows. (Photo courtesy NOAA, www.noaa.gov/)

Image Representation

Images ultimately are represented in a digital imaging system as large
collections of pixel values. Each number in the pixel array has an address
associated with its column and row position. The resolution of an image
describes the columns and rows of pixels (see Figure 12.5). A 640x480 image
has 640 columns and 480 rows of pixels, or a total of 307,200 pixels.

Figure 12.5 - Zooming on an image eventually reveals the pixels that we
perceive as a smooth image. These four images are an original and three
zoomed images. The 'boxy' appearance at bottom right reveals the size of the
pixels.


In a typical gray image with bit depth of 8 bits, pixel voltage values are
scaled so that they may be represented by integers in the range from 0 to 255. (In
binary arithmetic, it requires 8 bits to represent the numbers from 0 to 255.) In
this representation, 0 is black, 255 is white and everything else is a shade of gray.
In a color image, each pixel is described by three distinct integer values for red,
green and blue (RGB), each of which is between 0 and 255. The RGB color
(0,0,0) is black and (255,255,255) is white. All other colors are mixtures of RGB
intensities. This defines a color space shown by a 3-dimensional color cube
(Figure 12.6).

Figure 12.6 – At left is the 24 bit RGB color cube with about 16.8 million
colors. At this scale we cannot distinguish the individual points, which consist
of R, G, and B integers from the range (0 - 255). On the right is a compressed
color scheme, with only 256 allowed colors (8 bits).
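
In software, a gray image is just a two-dimensional array of 8-bit integers, and
a color image adds a third axis for the R, G and B values. A sketch using the
NumPy library (array sizes chosen arbitrarily):

    import numpy as np

    gray = np.zeros((480, 640), dtype=np.uint8)    # 480 rows x 640 columns, all black
    gray[100, 200] = 255                           # one white pixel at row 100, column 200

    rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # three values per pixel
    rgb[100, 200] = (255, 0, 0)                    # pure red at the same location
    print(gray.shape, rgb.shape)                   # (480, 640) (480, 640, 3)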

Video imaging results when we quickly store or view a succession of still
images. Digital video cameras have adjustable frame rates, or number of images
stored per second. Higher frame rates can capture faster events, such as sports or
racing, but that means that more data must be stored. In computer games, the
software generates the images for display, following rules for the characters,
environment and your gaming inputs. Common video formats range from 15 to
60 frames per second (fps). If each frame were a color 8 MB image, at 30 fps we
would need to move data to memory at a rate of 240 MB/s. Bit rate, or bandwidth, is
a concern when we are copying an image or downloading it from the Internet.

EXAMPLE 12.2
Can a cable modem with 1.5 Mb/s bandwidth transmit real-time
uncompressed video from a 640x480 resolution color (24 bits) web cam
operating at 15 fps?

Solution
Bit rate = resolution (pixels/frame) x bit depth (bits/pixel) x frame rate
(frames/sec), or Bit rate = 640 x 480 x 24 x 15 = 110.6 Mb/s. It cannot be done!
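
The same arithmetic generalizes to any camera. A one-function Python sketch
of the formula used in Example 12.2:

    def bit_rate_Mbps(width, height, bits_per_pixel, frames_per_second):
        # bits per second, expressed in megabits (10^6 bits)
        return width * height * bits_per_pixel * frames_per_second / 1e6

    print(bit_rate_Mbps(640, 480, 24, 15))  # about 110.6 Mb/s, far beyond 1.5 Mb/s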


Image Capture
Digital image capture occurs in a specialized electronic integrated circuit
whose sensor elements convert incident photons into a proportional electrical
voltage. Like film, this happens simultaneously across the entire sensor. Each
pixel voltage is then sequentially converted into digital format as a set of bits.
These bits are moved to memory in the camera or to a storage device like a flash
card, tape, CD or DVD. A typical low-resolution digital camera performs all
these steps for nearly 1 MB of image data in a few hundred milliseconds!
Many technologies have been developed to capture digital images.
Charge coupled devices (CCD) and complementary metal oxide semiconductor
(CMOS) sensors are common area detectors. Each contains an array of pixels
(the shapes may be rectangular or square) and the electronics and conductors to
control, measure and transport the image data. The imaging optics focus an
image on the sensor, which responds with a measurement of pixel values at each
location. In a color image we must collect measurements of the R, G, and B
parts of each pixel separately, and most CCD and CMOS sensors use a Bayer
Mask pattern (Figure 12.7). Each subpixel has a thin optical color filter over it,
such that only red photons make it into the red pixel location, green into the
green pixel location and blue into the blue pixel location. Unfortunately this
scheme ignores image photons that happen to fall on the "wrong" color. For
example, red photons on the green sensor are lost. The Bayer mask also means
that each picture element is really four times larger than necessary. If we could
make each pixel as small as the pure red subpixel, our resolution would be four
times better.

Figure 12.7 - Four pixels, each with a Bayer Mask comprising four subpixels.
Each subpixel has a color filter over a broad wavelength detector. (Diagram
labels: pixel, subpixel, dead zone.)
A 2003 innovation known as direct sensing addresses these
inefficiencies. Direct sensing is related to CCD electronics, but it stacks its RGB
color filters and sensors vertically in the space of one subpixel. The resulting
detector, while more complex, is both more sensitive and has higher resolution
than a Bayer Mask system.
Line sensors are close relatives of CCD area sensors. Elements are
arranged in a long thin line, not distributed over an area. Line sensors are well
suited for scanners, copiers or industrial processes where the image subject is
rapidly passing by the sensor. In effect, the line sensor builds up an image by
storing one line image at a time. Rather than having 1024x768 pixels that create
an image all at once, a line sensor would have 1024x1 sensors and take 768
successive line images. Line sensors often scan continuously to detect defects,
color changes or other problems in the materials passing by the sensor.
Video does not require a special sensor element, but it does demand
sophisticated control electronics and software to manage the collection of many
images per second. Even a delay of 100 ms to move an image to memory would
be too slow for video frame rates of 30 fps (a frame each 33 milliseconds).
Frame rates of 10 fps would be too slow for sports and similar fast events.
Many special sensors have been developed to create images at specific
wavelengths or for demanding applications. Pixel shapes and fill factors are
important in scientific imaging. High-speed sensors have been developed, as
well as sensors that work well in outer space or in small areas like endoscopes
and arthroscopic tools for surgery. Outside the visible spectrum the electronic
materials for capturing the photons can be quite different from CMOS and CCD
technology. Most digital imaging has a regular array or line scan sensor, but
some applications image one pixel at a time while moving the detector across the
area of interest.

Image Storage
In all digital systems, data is stored as bits (binary digits, 0 or 1, on or
off). Counting in binary arithmetic, eight bits give us 2^8 (that is, 256) bit
patterns. The base ten numbers 0 through 255 are represented as binary numbers
00000000 through 11111111. With a bit depth of 8 bits per pixel, and using the
fact that one byte (B) is 8 bits, a simple 640x480 gray image with 307,200 pixels
needs 307,200 bytes for storage as a bit map. If it is a color image with 24 bits
per pixel, we would need 921,600 B or nearly 1 million bytes (1 MB, or 1
megabyte). Since each combination of values makes a different pixel color, our
24 bits give us 256^3 = 16,777,216 possible colors, as shown in Figure 12.6.

EXAMPLE 12. 3
Images in your PC are digital images. A typical resolution setting is
1024x768. With 24-bit color, how much data is stored for one full-screen
image?

Solution
1024 x 768 = 786,432 pixels
786,432 pixels x 24 bits/pixel = 18,874,368 bits
18,874,368 bits x 1 byte/8 bits = 2,359,296 B or roughly 2.4 MB
Windows XP ® actually employs 32 bits/pixel to give fuller color. A full
screen therefore needs 3,145,728 B, or roughly 3.1 MB, to store all the numbers
representing the image.
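
The storage calculation is another one-liner worth scripting. A Python sketch
of the rule used in Example 12.3 and the text above:

    def image_size_bytes(width, height, bits_per_pixel):
        return width * height * bits_per_pixel // 8   # 8 bits per byte

    print(image_size_bytes(640, 480, 8))    # 307,200 B for the gray image
    print(image_size_bytes(1024, 768, 24))  # 2,359,296 B, about 2.4 MB
    print(image_size_bytes(1024, 768, 32))  # 3,145,728 B, about 3.1 MB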

Knowing that a typical small digital picture is about 1 MB of data, let's
see how many words a picture is really worth. English words average 6 letters,
and each letter requires one byte (8 bits) of memory. Storing a thousand words
requires approximately 6 kB. In terms of data storage, a 1 MB picture is the
same size as about 166 thousand words!
Earlier in discussing video with bit mapped images, we saw that we had
to move 240 MB through memory every second. Such speeds and data volumes
are a challenge, even to modern computers. Fortunately we can be clever and
compress the data and reduce the transfer rates and times.
Data compression removes redundant data from our images, for example,
the white background in the line drawings of this book. Mathematical rules
allow us to drop excess data and guide the recreation of the full image for later
display or video viewing. Any image storing all the raw data will be relatively
large. Bit maps (.bmp files) and portable network graphics format (.png files)
keep all the details. Compression methods like GIF (Graphics Interchange
Format, .gif files) or JPEG (Joint Photographic Experts Group, .jpg files) can
reduce image size significantly. Many cameras and video units automatically
perform compression as the images are initially stored. JPEG typically reduces
an image to about 20% of its original size, but it is lossy: the recreated image is
not bit-for-bit identical to the original, although the loss of detail is usually
imperceptible.
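
In practice you rarely implement compression yourself; image libraries apply
it when a file is saved. A hedged sketch using the Pillow library for Python
(assuming it is installed; the file names are our own):

    from PIL import Image

    img = Image.open("photo.bmp")      # uncompressed bit map
    img.save("photo.png")              # PNG: compressed, but lossless
    img.save("photo.jpg", quality=75)  # JPEG: much smaller, but lossy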

Image Display
Image display requires reconstruction of the image’s original pixel
pattern with its original colors. Many technologies exist to do this and all
produce a pattern of adjacent red, green and blue (RGB) pixels at the column-row
location of a colored pixel. Our eyes interpret the adjacent proportions of RGB
brightness as a composite color at that location and the picture is recreated.
Displays contain millions of individual pixel display devices that must be
carefully synchronized to display brightness and colors according to the image
data. Each display pixel is a triad, three sub-displays for R,G and B (Figure
12.8).
RGB pixels are so small that you would need a magnifier to distinguish
them on your computer, cell phone or DVD display. They may be dots,
rectangles or squares. Video projectors make a screen image that can have larger
pixels. Looking closely, you can probably see directly the individual RGB colors
on the projected image. Display hardware converts the stream of image numbers
into brightness levels that are refreshed 60-75 or more times per second.
Monitors and displays may, in software, reassign many triads to one pixel for
increased speed or magnification. When one pixel is mapped to one triad, your
monitor is working at maximum resolution. If the underlying image data
changes fast enough, we perceive motion and video action.

Figure 12.8 - Display details. A magnified view of an LCD mask shows the
letter 'k' on a white background (top). Triads are long rectangles making square
pixels (bottom).
Cathode ray tube (CRT) displays have a grid of phosphor pigments (a
triad of phosphor dots) on the front screen of an evacuated glass tube. Control
plates guide a stream of accelerated electrons, generated at a cathode element at
the back of the tube. The ray of electrons precisely hits the phosphor targets in a
row across the screen. When the electrons are delivered to a grid location, the
phosphor glows briefly with its characteristic RGB color, in proportion to the
electron current. Then the next row is illuminated, and so on, usually in a
progressive scan. CRT tubes are bulky, require high voltages, generate heat and
magnetic fields, and present disposal problems. Nonetheless, CRTs are
inexpensive and common on older desktop computers.
Liquid crystal displays are a thin sandwich. (See Figures 6.20, 6.21 and
6.22 in Chapter 6 for an explanation of black and white LCD technology.) They
have a uniform white back light source that is polarized before passing through
the liquid crystal material. An electric field is applied across the liquid crystal
sandwich through transparent electrodes, causing the liquid crystal molecules to
rotate the light's polarization and control the amount of light escaping. The electrodes are
arranged in a grid with phosphor or color filters in front of each location. As
each RGB pixel receives its data, it controls the brightness of the escaping red,
green or blue light. LCD displays are relatively low power, thin, light and soft to
the touch (because they have a liquid layer below the surface). LCD displays are
found on laptops, flat panel desktop screens, and portable devices like cell
phones and DVD players.
Light emitting diodes can be engineered to emit red, green or blue light.
Arrays of these elements can be made to directly display RGB signals and build
images. No backlighting is needed, power is low and the displays are rugged.
They are ideal for cell phones and other portable devices. Similarly, plasma
displays use a control voltage to create a plasma discharge at each pixel location.
In a plasma discharge the hot plasma gases glow, releasing photons like a neon
sign. Stronger discharges make brighter displays. No backlighting is needed and
the display is bright and easy to see from a wide angle. Each location must have
supporting electronics to control the plasma making these displays expensive.
They are most common now in high definition or digital television displays, and
in high-end computer displays.
Video projectors have an intense lamp, a device that controls the RGB
pixels across the image, and optics to focus, shape and enlarge the image. Some
use an LCD panel to control the light, but this becomes more difficult as lamps
get brighter and more heat is absorbed in the LCD panel. More common is a
mirror array that has hundreds of thousands of small tilting mirrors. In one
position, they reflect light into the imaging optics, which projects it. In another
position the mirror sends the light into a block that keeps it from being seen on
the screen. Brightness is controlled by the fraction of time each RGB pixel is

270
Imaging

“on” or “off.” To reduce complexity, some projectors pass the lamp light
through a spinning color wheel with RGB filters, so the illumination is
alternately red, then green, then blue.
One of the newest displays is electronic ink, flexible materials that
produce pixels in black and white or color (RGB). As the image data is
conducted to electrodes in the “paper,” electric fields at each pixel rotate nano-
particles to the surface. These appear as white or black (or color, in advanced
prototypes), creating text or pictures on the page.

12.4 IMAGE PROCESSING


Digital imaging offers one great innovative advantage over photography
– image processing, the editing, enhancement, conversion, measurement and
automated understanding of images. Once your image is in the form of data,
remarkable control is available to you. You can change your pictures in ways not
possible in the darkroom, and extract hidden information automatically. We will
now look at some of these capabilities arising from the marriage of computing
with imaging.

Software Tools
Many image processing tools are available and many of them are free.
Novice users often need not understand the details of the intuitive operations they
are performing. Advanced users can access powerful tools and develop
specialized methods quickly. Standard operating systems for PCs, Macs and
Linux all provide rudimentary image processing tools, and commercial
photography vendors increasingly offer on-line services for processing your
digital images. Let’s consider some general operations common to better
software tools.
Image acquisition is the first processing step. The software should be
able to read stored or Internet images in common formats like bitmap (.bmp),
JPEG (.jpg) and GIF (.gif). It may be able to download directly from cameras
and scanners or receive direct video input. Some commercial tools allow you to
control cameras from the software, creating an automatic image collection and
processing system. Image display should also be included, providing simple
access to your images or video clips. You can usually adjust bit depth, resolution
and color palette and choose the storage compression mode when you save your
images.
Software should perform standard photographic operations on digital
images. Copying is simple in a digital environment, as is printing multiple
copies on a color printer, or transmitting copies for photographic prints over the
Internet. Cropping involves cutting out a section of the image to make a new
image. Image rotation and flipping rearrange the image. Tools may allow you to
change the brightness and contrast of the image, or of parts of it, by rescaling the
pixel values.
Intermediate tools might allow you to manipulate individual pixels, copy
and paste sections of images or execute multi-step procedures like correcting red
eye or compensating colors, brightness and contrast for a balanced image. Some
tools apply rules to change your image to another artistic style, for example,
stained glass or poster or charcoal drawing. You have complete control through
your access to the underlying image data.
More advanced tools perform explicit tests and statistical operations on
the data in your image. These operations usually require professional software
packages, much as heavy text manipulation requires a full word processor. We
will now consider a few of these advanced image processing techniques.

Thresholding
Every image is a collection of thousands of numbers and the statistics of
those numbers tell a story about the image and its subject. We could calculate
single number statistics, like the mean (average) pixel value or the standard
deviation of all the pixels in the image. Or we could form a histogram of the
image’s pixel values. Consider only a gray image, where every pixel value is
between 0 and 255. Assign each pixel to a bin with similar-valued pixels and
count how many are in each bin. The resulting 256 counts are a histogram graph
of the image. Histograms greatly simplify categorizing and adjusting the image.
Suppose you were interested in studying those parts of an image that
were particularly bright, for example, portions of an image with pixels whose
values were above some critical value, the “threshold.” This might occur if you
were looking at an image collected while flying over a forest fire—the brightest
parts would be the open fire. We could instruct the software to set a threshold
(say, 225 on a 0 to 255 scale) and create a new image with pixels according to
this rule:
• If a pixel value is above or equal to 225, make it 255 (white) in the
new image
• If a pixel value is below 225, make it 0 (black) in the new image
What would the new image look like? Only those pixels above the
threshold would be white; the rest would be black. With one command we have
found all the “fire” pixels. We could now count them and get a measure of how
much of the image was on fire. An image has now become a guide to the fire
locations and a quantitative measure of how much fire we are facing. Figures
12.9 and 12.10 illustrate the process.
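
With an image stored as an array, both the histogram and the threshold rule are
single operations. A minimal Python sketch using NumPy (the random array
stands in for a real gray image):

    import numpy as np

    gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in image

    counts, _ = np.histogram(gray, bins=256, range=(0, 256))      # 256-bin histogram

    fire = np.where(gray >= 225, 255, 0).astype(np.uint8)         # the threshold rule
    print("fire pixels:", np.count_nonzero(fire))                 # measure of fire area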


Figure 12.9 - Histograms for two gray-level images. Which one matches which
picture? The white lines are artifacts caused by mismatch in pixel spacing and
display spacing.

Filtering
Filtering generally involves applying a mathematical rule to an image to
make a new image with special characteristics. Thresholding is a kind of filter.
So are blurring and sharpening an image. Blurring takes a local average of pixels
and replaces the old pixels with the new average value. It tends to smooth out
big changes in the image, hence the name blurring. Sharpening takes the
differences among nearby pixels and replaces old pixel values with a number
related to the differences. This accentuates differences in the image and we
perceive this operation as making the picture sharper or more focused.
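
Blurring and sharpening are both small convolutions applied over each pixel's
neighborhood. A sketch using SciPy (the 3x3 kernels are common textbook
choices, not values from this chapter):

    import numpy as np
    from scipy.signal import convolve2d

    blur = np.ones((3, 3)) / 9.0           # local average: smooths big changes
    sharpen = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]])     # accentuates differences from neighbors

    img = np.random.rand(480, 640)         # stand-in gray image with values 0-1
    blurred = convolve2d(img, blur, mode="same", boundary="symm")
    sharpened = convolve2d(img, sharpen, mode="same", boundary="symm")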
We could filter to find only those parts of the images that changed
intensity rapidly, from black towards white. Our operations would locate all the
pixel locations with big changes, that is, where the differences (rather than the
pixel levels themselves) are above a threshold. If we make the high change
areas white, and the rest black, we would see an image that emphasizes edges or
high contrast areas. Figure 12.10 illustrates the use of an edge filter and
application of a threshold value to an image.

Figure 12.10 - The original color image is converted to gray (intensity) at upper
right. An edge filter is shown lower left. Lower right is a threshold at 170/255.
(Panels: color, gray, edge filter, threshold.)

Color Manipulation (Sub-Images, False Color)


Another kind of image processing involves color manipulation. Color
sub-images in red, green and blue each have values from 0 to 255, so they look
like gray images separately. These can be filtered or subjected to a threshold,
making it possible to select items with specific color properties. The subimages
in Figure 12.11 show the relative RGB signals. We could take the red subimage
and seek the flowers and stripes of the flag with thresholding.
Figure 12.11 - Color subimages show the contributions of RGB components to
pixels (panels from left: full color, red, blue, green). Recall that white is made
by having all three RGB present.

False color is an important tool for presenting image data. For instance,
in weather images temperature readings in the long infrared wavelengths do not
have any natural color. However, assigning a color map associates specific
colors with ranges of temperatures. When displayed as an RGB image, cloud and
storm formations can be easily distinguished because they have distinctive
temperature characteristics. Of course, the continents, islands and state
boundaries are artificially generated by the computer to make the image more
understandable to humans. The image at the beginning of this chapter is such a
false color image, in this case, of Hurricane Ivan.
Similar false color images can be generated from any data, even data
generated by the computer and displayed as an image. For example, people,
buildings and industrial processes can be imaged in thermal (IR) spectra and
rendered in false color.
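
Applying false color is simply mapping each scalar measurement to an RGB
triple through a color map. A sketch using the Matplotlib library (the
temperature array is fabricated for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    temps = np.random.uniform(250, 320, (200, 300))  # stand-in IR temperatures, kelvin
    plt.imshow(temps, cmap="inferno")                # color map encodes temperature
    plt.colorbar(label="temperature (K)")
    plt.show()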

Figure 12.12 - False color infrared images, conveying infrared temperature
data as colors. Portrait, showing cooler cheeks, nose and clothing (left). Heat
loss through a window (right).


12.5 SCIENTIFIC IMAGING


Imaging systems provide scientists and engineers with a precise,
quantitative record of objects and events, at high speed or over long periods.
They can obtain images of the physical universe as small as atoms and as big as
galaxies. In many cases the imaging optics are extremely clever, building an
image from measurements taken one pixel at a time, or collecting photons for hours or
days to build a clear image.

Scanning Imagers
The atomic force microscope (AFM) uses optics only indirectly to
measure the surface of materials down to atomic scales (Figure 12.13). A tiny
tip is attracted or repelled by inter-atomic forces as the microscope moves in a
geometrically regular scan pattern across the surface. At each location, which
corresponds to a pixel, a laser interferometer (Chapter 5) measures tip
displacement with nanometer accuracy. These measurements indicate the
distance to the surface and therefore the shape of the surface. False color
accentuates three-dimensional surface topography. Measurements of surface
roughness can be made from the raw data. A variation of this imaging system,
called the magnetic force microscope (MFM), uses a magnetic tip that responds
to local magnetic state of the material. Magnetic forces now dominate in
deflecting the tip, such that images show the up or down magnetic state and the
strength of the fields. These are particularly helpful in studying magnetic hard
drives because the MFM can “see” the magnetic stored data.
Figure 12.13 - AFM images
resolve the shape and surface
features of a Bacillus
atrophaeus spore (left). The
image in the center is a close-up
of the surface, revealing rod-like
structures that fold when the
spore is dehydrated (right).
(Courtesy Lawrence Livermore
National Laboratory,
www.llnl.gov, and U.S. Dept. of
Energy.)

Another scanning imager is the near-field scanning optical microscope
(NSOM). It moves a tapered optical fiber tip, whose diameter is only a few
microns, across the surface. At each pixel the fiber tip admits and measures a
few photons, creating a pixel value. Again, false color is used to create an
intelligible, appealing image that is proportional to the actual image signals. The
NSOM is particularly effective at measuring the surface light emissions of
optical lasers and other tiny light sources. Resolution, in terms of pixel size, is
determined by the size of the tip aperture and accuracy of the scanning
algorithm.


Astronomical Imaging
Light traveling for billions of years from distant galaxies is incredibly
faint. Images are made from these photons in the orbiting Hubble telescope by
carefully accumulating intensity in the telescope’s electronic detectors for
minutes to days. As more photons accumulate, the detector voltages grow and
become distinguishable. During this long exposure the Hubble must constantly
be realigned to point in exactly the same direction or the images will be blurry.
The Hubble uses different detectors and filters to build up its images as gray
images from different spectral regions. False color is then added to the gray
images to recreate the natural color proportions and often beautiful images.
Huge amounts of data are collected with each image, digitized and transmitted to
Earth using radio.
Earth-bound observatories also can collect photons for hours or days to
build up their images. During this time the wavering of the atmosphere can
distort the incoming waves, making stars and planets twinkle. Astronomers now
use adaptive optics to compensate for the twinkling by correcting the arrival of
the photons to restore wave synchronization.

Figure 12.14 - On the left, an Earth-based solar observatory image shows
sunspots and flares. (Courtesy NASA, http://pwg.gsfc.nasa.gov/istp)
On the right, a Hubble telescope photo
of a turbulent gas cloud in the
Omega/Swan Nebula. (Courtesy NASA,
www.hubblesite.org)

Sometimes scientists can learn a great deal from what they don’t see.
Certain atomic elements absorb photons of characteristic wavelengths. A dark
absorption line in the spectrum is a sure clue that an element is present in a star or
in the regions of space where the light has been traveling. Using image processing,
astronomers collect and compare spectral images and locate those regions where
light has been absorbed. Those regions are then known to be home to elements like
carbon or iron. False coloring again helps scientists create informative images to
support their findings.

Environmental Imaging
Some of the earliest satellites, launched in the late 1950s, were primarily
imaging satellites created to monitor the weapons, armies and economies of other
countries. Photographic films were ejected, fell to earth and were plucked from
the sky by planes that would chase the parachutes. Things are much easier with
digital imaging and radio communications! Now satellites are constantly
imaging the earth for military and peaceful reasons. Environmental imaging is
invaluable to farmers, loggers, miners, engineers and biologists. Towns depend
on satellite and airplane imagery to plan services and monitor environmental
problems.
LANDSAT was one of the first imaging satellites for mapping and
environmental monitoring. Now in its seventh version and taking images in
seven different wavelength bands, LANDSAT can distinguish flooded land from
dry, corn from wheat, forest from grassland and many other features on the
surface miles below. Features as small as a meter across can be identified. It has
become difficult to hide anything on the surface of the earth!

Figure 12.15 - LANDSAT imaging from space, created by coloring
gray data in several visible bands.
From left: the Mississippi delta,
Shetland Islands and the
Himalayas. (Courtesy NASA,
http://landsat.gsfc.nasa.gov/)

Commercial services now distribute digital images of the surface of the
earth with one meter or better resolution. Satellite image archives are growing
rapidly as companies design and launch their own space imaging systems. Other
companies collect aircraft high-altitude pictures, scan them to digital format and
integrate them with satellite images. Commercial image databases available on
the Internet can be coordinated with other data, such as land ownership,
population, voting patterns and environmental quality. The resulting
geographical information systems (GIS) are revolutionizing urban planning,
environmental preservation, responses to natural disasters and agriculture. Of
course, you can process the raw images further to extract other understanding or
coordinate with your own data sets and interests.

12.6 MEDICAL IMAGING


Long before Leonardo da Vinci’s anatomical sketches, doctors relied on
external pictures to diagnose and monitor diseases. Now arthroscopy,
colonoscopy and arterial disease treatments send miniature cameras deep into the
body. X-rays and ultrasound can show bones and soft tissues, and tomographic
processing builds complete 3-dimensional images. Indirectly, researchers study
our brains by imaging blood flows and oxygen absorption or identify cancer by
imaging thermal characteristics of tumors.


Acoustic Ultrasound Imaging


Since about 1980, when ultrasound imaging became widely available, the
first image of most babies has been a gray or false color image formed from the
reflection of high frequency (ultrasound) sound waves. Such in utero images
provide information about healthy development, size and weight, and even the
baby’s gender. Among adults, similar imaging tools can make videos of the heart
and lungs, study the constriction of blood vessels and otherwise probe the soft
tissues of the body. Combined with X-ray imaging, which portrays our opaque
skeletal structures, ultrasound can help discover many problems from injury to
aging. Image processing can reveal quantitative measures like blood or air flow,
and measurements of internal organs. Unlike X-rays, which have high ionizing
energy and can damage soft tissue, ultrasound waves are relatively benign.

Figure 12.16 - Ultrasound images. On the left, a 3D image of a 27 week
fetus. The image on the right
shows the carotid artery bifurcation
with a plaque deposit in circle.
(Images courtesy of GE Healthcare,
www.gehealthcare.com.)

X-ray and CT Imaging


Imaging techniques have helped greatly reduce the intensity of X-rays
needed to image the skeleton, teeth and other tissues, reducing patient risk. Digital
X-ray sensors directly produce high resolution images that can be enhanced by
software, stored, sent to specialists over the Internet or carried by the patient with
other records. While computer diagnosis is still under development, radiologists
now regularly use image processing and analysis tools to suggest areas that have
suspicious structures. They can concentrate on these problem areas and reduce
missed diagnoses.
Tomography combines many images (typically hundreds) taken at different
angles through the body into a 3-D map of a person’s complete internal structure.
Since everything in the image is ultimately data, the doctor’s software can rotate
the virtual person, slice sections to reveal internal details and apply false color to
accentuate structures and problems. Computed tomography (CT, formerly called
CAT) is now found in most major hospitals and gives the doctor full insight into
the patient’s systems.
CT imaging data has opened new areas of medicine. CT images can be
used prior to surgery to simulate what the doctor will encounter, presenting
realistic views of the internal structures.

Some systems actually project internal organs in front of the doctor, as if
the patient were invisible, or guide lasers to show exactly where the surgeon
should place an incision. In X-ray therapy for cancer treatment, CT imaging can
ensure that the lethal X-ray treatments stay focused on the tumor. The CT scan
follows the patient’s organs as she breathes, allowing the therapist to track the
tumor with the radiation treatment.

Figure 12.17 - CT scan showing a longitudinal "slice" of a knee (left). Many
such images may be assembled to form
a rendered 3-D image (right). (Images
courtesy of GE Healthcare,
www.gehealthcare.com.)

Magnetic Resonance Imaging


Magnetic resonance imaging measures the minute electrical signals
radiated by atoms in our body when they are subjected to strong magnetic fields.
An image similar to CT images can be constructed, without needing multiple X-
rays. The data can be mathematically assembled to show internal 3-D structures.
It can also be tuned to certain chemical reactions, like our brain’s localized
absorption of oxygen when we are performing certain tasks. Listen to music and
parts of your brain “light up” chemically. Try to remember a string of numbers
and you use a different part of the brain. MRI can follow these chemical
reactions from outside the body and suggest how we use the brain to think.

12.7 MACHINE VISION


Machines that can “see” and act in response to their environment arise
when you combine imaging systems and actuators. Machine vision systems are
now commonplace throughout manufacturing, performing assembly, inspection,
welding, painting and other functions. Several automobiles now show a video
display revealing activity in blind spots, for example, behind your car while
backing up. Some luxury cars use infrared imaging to “see” obstacles in the dark
and in rain and fog. Soon, some cars will even display the image on a heads-up
portion of the windshield in front of the driver.

Part Measurement
Knowing the imaging geometry and the optics of a camera, you can
determine the size of an object in pixels and convert them to real-world
dimensions. Machine vision systems can measure the world by counting pixels
and applying the necessary math. This is very useful where parts of varying sizes
are being made. Incorrect parts can be removed and valid parts can be
segregated by size easily. Once we can measure single dimensions we can
further decide what shape an object has and whether objects are near or far from
each other, obstacles or goals.
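
Once the system is calibrated against a target of known size, the
pixel-to-dimension conversion is a single scale factor. A minimal Python sketch
(the calibration constant is invented for illustration):

    MM_PER_PIXEL = 0.25   # from calibration with a target of known size

    def part_length_mm(length_in_pixels):
        return length_in_pixels * MM_PER_PIXEL

    print(part_length_mm(412))  # a 412 pixel edge measures 103.0 mm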
Continuing this measure and match strategy, it is possible to capture an
image and compare it to an existing library of images that have been previously
“learned.” A recognition score is assigned and we can accept or reject items
based on their familiarity. Even people’s faces and actions can be analyzed by
pattern recognition and measurement.

Inspection
Human inspectors used to check all the buttons on telephones and cell
phones, ensuring that buttons were positioned in the right locations and had the
proper printing on them. With increasing volumes and manufacturing speed,
human operators grow fatigued and make mistakes. A single color camera can
image each phone as it passes the inspection station, investigate the keys and
pattern match them to the correct arrangement and printed images. If there are
mistakes, the software alerts an operator who removes the phone for adjustment,
or it may automatically remove the incorrect phone from the process.

Figure 12.18 - This simple imaging system determines when a bottle is
in place, takes an image, and
measures the distance from the
bottom of the bottle to the top of the
liquid.

Inspection of products prevents contamination and incomplete packaging.
A common problem, filling containers, requires inspection of each item to ensure
that it is filled to the correct level. An imaging system can quickly take an image
and measure the image pixels to find the fill level and compare it to the
specifications. Again, the system is tireless and automatic.
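
A fill-level check can be as simple as thresholding the bottle image and
locating the topmost row containing liquid. A hedged Python sketch, assuming
a dark liquid against a bright background:

    import numpy as np

    def fill_level_row(gray, dark_threshold=80):
        # Return the index of the first image row containing "liquid" pixels.
        dark_rows = np.any(gray < dark_threshold, axis=1)
        rows = np.nonzero(dark_rows)[0]
        return rows[0] if rows.size else None  # None if no liquid is seen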

Robotic Control
Vision systems can learn through their software that certain actions or
locations should be avoided. Systems continuously examine the data contained in
frames from an image stream and raise a warning to the controller. The controller
in turn talks with the motors and pumps that provide mobility to the robot arms.


REFERENCES
1. Russ, J. The Image Processing Handbook, Fourth Edition. Boca Raton, FL:
CRC Press, 2002.
2. Jähne, B. Practical Handbook on Image Processing for Scientific and
Technical Applications. Boca Raton, FL: CRC Press, 2004.
3. Weeks, A. Fundamentals of Electronic Image Processing. John Wiley &
Sons, 1997.
4. Gonzalez, R. and Woods, R. Digital Image Processing, Second Edition.
Addison-Wesley, 2002.
5. Trucco, A. Introductory Techniques for 3-D Computer Vision. Pearson
Education, 1998.
6. Webb, A.R. Introduction to Biomedical Imaging. John Wiley & Sons, 2003.
7. Heller, A., editor. "Life at the Nanoscale," Science and Technology, May
2004. Available online at the web site of Lawrence Livermore Laboratory,
www.llnl.gov/str/May04/DeYoreo.html.
8. Vincze, M. and Hager, G., editors. Robust Vision for Vision-Based Control
of Motion. SPIE/IEEE Series on Imaging Science and Engineering, John
Wiley & Sons, 2000.
9. Bovik, A. Handbook of Image & Video Processing. Academic Press, 2000.

WEB SITES AND TRADE MAGAZINES

1. Machine Vision Online Magazine
www.machinevisiononline.org
2. Biophotonics International Magazine, Laurin Publishing Co., Berkshire
Common, PO Box 4949, Pittsfield, MA 01202-4949.
www.photonics.com/bioPhotonicsHome.aspx
3. Photonics Spectra Magazine, Laurin Publishing Co., Berkshire Common, PO
Box 4949, Pittsfield, MA 01202-4949.
www.photonics.com/
4. Advanced Imaging Magazine, 3030 West Salt Creek Ln, Suite 200, Arlington
Heights, IL 60005.
www.advancedimagingmag.com/

281
LIGHT: Introduction to Optics and Photonics

REVIEW QUESTIONS and PROBLEMS

QUESTIONS
1. What would a photograph of a glass of water look like if the illumination is infrared
light at 1100 nm? (Refer to the graph in Figure 12.2.)

2. In a story on supermarkets "snooping" on customers by putting IR cameras above the
store aisles, a photo purported to illustrate the image taken by such a camera. The
image was said to be color coded to temperature. A woman with a blue-toned face
and orange-colored hands reaches into a freezer to touch a blue-colored carton of food.
What was wrong with this picture?

LEVEL 1 PROBLEMS
3. What fraction of incident light is transmitted through a 20 cm deep fish bowl if the
wavelength is 400 nm? 1000 nm?

A flight of industrial steel stairs edged with white lights leads
to the second floor of a small building in Cambridge,
Massachusetts—the Massachusetts Institute of Technology
Museum. Housed inside is the largest collection of holograms
in the world. Unlit, the holograms are slightly foggy glass
plates that look as if someone forgot to place photos into the
empty frames. But when properly illuminated, the effects are
breathtaking—a "cityscape" that extends outward into the space
in front of the glass, a woman who turns and blows a kiss as
you walk by, an explosion of dots reaching toward you with
colors that change with the viewing angle. The MIT collection
includes holograms used for scientific and technological
purposes, as well as exquisite artistic holograms. In this chapter
we present the basic physics of the construction and viewing of
a hologram, as well as practical ideas for making your own
holograms.

Einstein by Laser (J. Donnelly)

Chapter 13

INTRODUCTION TO HOLOGRAPHY
13.1 BRIEF HISTORY OF HOLOGRAPHY
Holograms are everywhere—providing security on credit cards, adorning
bookmarks and key chains, steering laser beams in supermarket bar code
scanners and hanging on museum walls as art. Hobbyists make them in their
basements and artists create full-sized images of people and their pets that can be
illuminated with room lights. Holograms also have found use in vibration
analysis, non-destructive testing and data storage. Yet only 40 years ago
holograms were scientific curiosities requiring equipment worth tens of
thousands of dollars to produce.
The beginnings of holography date back to the late 1940s when Dennis
Gabor, a Hungarian-born scientist working in Britain, developed the theory of
what was known at the time as "wavefront reconstruction." Gabor created the
word hologram from the Greek words "(h)olos" (whole) and "gramma" (a letter,
or message). That is, a hologram preserves the "whole message." The first
holograms were made with spectral lines of a mercury arc lamp, the most
coherent light source available at the time. The short coherence length limited
the depth that could be recorded; nonetheless, Gabor's work was considered
important enough that he was awarded the Nobel Prize for physics in 1971.
The field of holography grew rapidly after the invention of the laser in
1960. Emmett Leith and Juris Upatnieks, radar researchers at the University of
Michigan, were the first to use a laser to create three-dimensional images and
what is called the "off-axis" method of creating a hologram. Some of their
earliest holograms of a model railroad engine and toy bird are in the MIT
collection. About the same time, Yuri Denisyuk in the Soviet Union pioneered
the type of reflection hologram that now often bears his name.
Holography as a technological tool and an art form grew rapidly in the
1960s and 1970s with the development of new recording media including
dichromated gelatin, which produces extremely clear and detailed holograms, the
type often seen on stickers and jewelry, and thermoplastic "cameras" which
create holograms electronically in seconds without chemical processing. New
techniques were developed as well: white light transmission holograms (called
"rainbow" holograms), holograms of living subjects taken with fast pulsed lasers,
double exposure holograms used to detect minute motions (holographic
interferometry), holographic moving pictures, and the embossed holograms seen
on everything from magazine covers to cereal boxes.
Today holography is a mature technology, with applications including
aircraft displays, holographic optical elements, holographic data storage, optical
pattern recognition and at least one consumer digital camera focusing system.
Holograms are not only created by optical means, but are also computer
generated, allowing the creation of an ideal reference against which to measure,
for example, the surface of a mirror.

13.2 CREATING A HOLOGRAM—INTERFERENCE


Even though holograms are often made on what is essentially black and
white photographic film, the difference between a photograph and a hologram is
striking. A photograph records irradiance, the dark and bright areas of a scene,
and there is a one-for-one correspondence between points on the scene and points
in the photograph. The result is a two dimensional representation of a three
dimensional world. In comparison, a hologram preserves not only variations of
brightness but also includes depth information, which we sometimes refer to as
parallax. Hold a pen at arm's length and move your head from side to side while
you look at the pen and scene behind it. You will see a shifting of the position of
the pen relative to the background—this is parallax.


A hologram is not some sort of optical illusion, nor does it require that
you wear special glasses or train your eyes as with a 3D stereogram. A hologram
accomplishes three-dimensional imaging by reproducing the actual complex
wavefront leaving an object, including phase information (depth) as well as
brightness.
Recording media, such as photographic film, respond to light energy.
Where irradiance is high, the film responds by darkening more than where only a
small amount of energy strikes it. To preserve phase information on film, it is
necessary to convert phase information to irradiance or light energy information.
Recall from your study of waves that constructive interference creates a
maximum brightness when the waves are in phase and destructive interference,
or darkness, occurs when waves are out of phase. As a result, interference
patterns can be used to encode information about the phase relationships of
different parts of a wave.
Holograms are created by the interference of light. In Chapter 6 we
analyzed the interference produced in Young's double slit experiment in terms of
path length difference. To understand the interference that leads to the formation
of a hologram, we will use another, nonmathematical, technique: moiré patterns,
named for the swirling patterns of light and dark on the variety of silk of the
same name. You have probably observed moiré patterns when the folds of sheer
window curtains overlap, producing patterns of light and dark where threads
overlap other threads or the spaces between threads. Many Internet web sites are
devoted to moiré patterns and some have Java applets that allow you to move two
patterns relative to each other and see the resulting "interference.” A few of these
web sites are listed in the references to this chapter.
In the double slit experiment, we assumed that each slit was a point
source of light producing circular wavelets like the expanding ripples on the
surface of a lake disturbed by a dropped pebble. In the case of overlapping
spherical waves, the moiré patterns form a family of hyperbolas. A geometric
treatment of holography based on the hyperboloidal interference curves,
developed by Tung H. Jeong, professor emeritus of Lake Forest College, can be
found in the references to this chapter. Here we will consider the simpler case of
interfering plane waves.
Let us represent a plane wave by drawing a series of lines indicating the
crests of the wave. The separation between the lines in the drawing is the
spacing between wave crests, that is, the wavelength. Figure 13.1 shows two
such plane waves and the region where they overlap. Notice that a moiré pattern
of dark and light bands is created where the waves overlap, representing
constructive and destructive interference over the surface of the page.

Figure 13.1 - Moiré patterns representing plane waves traveling from left to
right as shown by the arrows.


One important point to notice is that, for the case of two plane waves, the
distance between the dark (or bright) interference fringes depends upon the angle
at which the waves meet. (Figure 13.2) The inverse of the distance between the
interference fringes, called the spatial frequency, is the number of fringes
occurring in a given space. The relationship between spatial frequency and angle
between the oncoming waves will be important when we discuss noise in
holograms later in the chapter. As you can see from Figure 13.2, as the angle
between the wave propagation directions increases, the spatial frequency
increases.
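
For two plane waves of wavelength λ whose directions of travel differ by an
angle θ, a standard result (not derived here) gives the fringe spacing Λ as

    Λ = λ / [2 sin(θ/2)]

so the spatial frequency 1/Λ grows with θ. For example, HeNe laser light
(λ = 633 nm) meeting at θ = 30° produces fringes about 1.2 µm apart, roughly
820 fringes per millimeter.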

Figure 13.2 - Interference fringe spacing depends upon the angle
between the direction of travel of
the wavefronts. Low spatial
frequency fringes (left). High
spatial frequency fringes (right).

EXAMPLE 13.1
The drawing below represents a rectangle 2 cm wide and 1 cm high. Find
the spatial frequency of the pattern in the horizontal and vertical directions.

Solution
Horizontal: 6 lines in 2 cm is a spatial frequency of 6/2 = 3 lines/cm
Vertical: 2 lines in 1 cm is a spatial frequency of 2/1 = 2 lines/cm

Suppose that the object to be holographed is a point source of light, such
as the head of a small pin. The waves leaving the object form concentric spheres,
centered on the point source. What happens when the spherical wave from the
object interferes with a plane wave? In the terminology of holography, the
spherical wave originating at the object is called the object beam and the
interfering plane wave is the reference beam. Of course, in order to form
interference fringes the object and reference beams must be coherent, but for now
we will just assume that the coherence condition is met. In Figure 13.3, the
spherical wave is shown expanding in all directions and the plane wave is
traveling from left to right. A moiré pattern results where the spherical and plane
waves overlap. The irradiance at the film plate is shown on the right side of the
diagram as a bright spot in the center surrounded by alternating dark and light
rings.


Figure 13.3 - Interference of expanding waves from a point source and a plane
reference wave. The irradiance pattern on the film is a Gabor zone plate. The
film is shown from the side (left) and front (right). (Labels: plane wave,
spherical wave, film plate.)

The film will be exposed wherever there are interference maxima and
irradiance is high. Where the interference minima occur, the film will remain
unexposed. When it is developed, the film will contain a record of the
interference pattern, which is a hologram of the point source. The developed
film will be dark where it was exposed by the bright fringes and light (or clear)
otherwise; in other words, it is a photographic negative. The pattern on the film,
called a Gabor zone plate, consists of concentric rings, or zones, whose
transmittance (optical density) is a function of distance from the center of the
pattern.
What is a zone plate? In Chapter 5, geometric optics and Snell's law
explained how a lens could focus light to a small point. Zone plates also focus
light, but do so by diffraction rather than by refraction. A Fresnel zone plate is
another type of zone plate, related to the Fresnel lenses that are used in
lighthouses. It has a similar pattern to the Gabor zone plate, but the dark bands
are of equal optical density across the pattern. Although the zone plate formed on
the film in our experiment bears no resemblance to the point source object, it has
the potential to recreate an exact image of that "object," as we will describe in
the next section.
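To get a feel for the scale of these zones, the following is a minimal Python
sketch using the standard zone-plate relation r_m = sqrt(m λ f); the 633 nm
wavelength and 10 cm focal length are assumed example values, not parameters
taken from the text.

import math

def zone_radius_um(m, wavelength_nm, focal_length_mm):
    # Radius of the m-th zone boundary: r_m = sqrt(m * wavelength * f)
    r = math.sqrt(m * (wavelength_nm * 1e-9) * (focal_length_mm * 1e-3))
    return r * 1e6   # meters -> micrometers

for m in range(1, 5):
    print("r_%d = %.1f um" % (m, zone_radius_um(m, 633, 100)))
# r_1 = 251.6 um; the rings crowd closer together as m increases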

13.3 VIEWING A HOLOGRAM—DIFFRACTION


Let us assume the exposed photographic film containing the hologram
has been developed. (We will say more about photographic film and film
developing later in the chapter.) We now can "reconstruct" the original wavefront
of the point source by illuminating the hologram with a replica of the original
reference beam, that is, with a plane wave of the same wavelength.
Remember that the hologram is a zone plate that acts as a lens. The
original object was a "point" source of light, and the zone plate recreates that
point source at its focal point. But because this is a diffractive lens, two images


appear, one real and one virtual, on opposite sides of the film (Figure 13.4).
Looking back into the film, you will see the virtual image of the point source at
the exact location of the original source. The real image may be observed by
placing a screen in front of the illuminated film.

Figure 13.4 - Hologram reconstruction. The duplicate reference beam strikes the
developed film plate from the left. The virtual image, formed by the diverging
light on the right hand side, is difficult to see because of the undiffracted light
(red rays) and the light forming the real image.

There is a practical limitation to placing the object and reference beam
along the same line, as we did to create our example hologram. When you look
into the hologram to view the virtual image, the light forming the real image is
directed forward toward you, along with the light forming the virtual image as
well as the "zero order,” or non-diffracted, light. This makes the virtual image
difficult to see. The solution is the off-axis configuration devised by Leith and
Upatnieks, which separates the real and virtual images (Figure 13.5). Both the
object and reference beams are off-axis when the hologram film is exposed. The
light forming the virtual image is also diffracted off-axis, separating it from the
reference beam and real image light.

Figure 13.5 - When the object and reference beams are off-axis, the virtual
image may be seen without the real image light in the way. The zero order
(non-diffracted) light is not shown. Construction of an off-axis hologram (top).
Viewing the hologram (bottom). (Labels: object, object beam, reference beam,
virtual image, real image.)

The hologram described so far is of a single point source of light. Of
course, it is unlikely that anyone would want to create a hologram of the head of
a pin! But consider the situation where there are two such point objects as shown


in Figure 13.6. Each of the point sources interferes with the reference light to
form its own zone plate pattern on the film. When the plane wave reference beam
illuminates the developed hologram, two virtual images are formed. (Two real
images are formed as well.) The light leaving the hologram and forming the
virtual images is the exact replica of the light that left the two point sources at the
time the hologram was made. As you shift your gaze, the two image points will
appear to move in relation to each other, just as the actual object points did, that
is, they will have the same parallax as the original objects. The hologram creates
a true three-dimensional view of the two point objects.

Figure 13.6 - Hologram construction (top) and reconstruction (bottom) of two
point sources of light. In the bottom figure, the real images are not shown.
(Labels: reference beam, point source objects, film, replica of reference beam,
virtual images.)

Now let us replace the two point source objects by a three dimensional
object. Each point on the object forms its own set of Gabor zones on the film. A
hologram, therefore, is a complex set of overlapping interference fringes from all
points on the object that preserves both the phase and intensity information of the
original object wave. It should be noted that the interference fringes of a
hologram are separated by less than a micron! The microscopic fringe separation
leads to some practical issues in creating a hologram, as we shall see later in the
chapter.

13.4 PROPERTIES OF HOLOGRAMS


You may have heard that a piece of a hologram preserves the entire
scene, unlike a torn piece of a photograph that shows only a part of the picture.
This redundancy is like looking through a large window made up of small panes.


Looking through any one of the panes reveals the same scene, but from a
different perspective. Similarly, each part of a hologram encodes a different view
of the whole scene. Figure 13.7 shows how this is possible. Light from every part
of the object interferes with the reference light across the entire hologram. This
means that even a small piece of film can record a relatively large scene.

Figure 13.7 - The small piece of the hologram contains information from the top
and bottom of the object, as does the entire hologram. (Labels: object, piece of
hologram, reference beam.)

What if a hologram is created with one wavelength and reconstructed
with another? Remember that when a diffraction grating is used with red light the
diffraction pattern spreads more than when it is used with blue light. In the same
way, the size of the image of a reconstructed hologram changes if the wavelength
is not the same as the wavelength used to construct the hologram. Magnification
may be achieved by recording the hologram at a short wavelength and
reconstructing it at a longer wavelength.
We have not said anything to this point about the real image that is
formed at the same time as the virtual image. The real image has the peculiar
property of appearing "inside out," that is, parts of the object that were farther
from the film appear closer in the image; for example, indentations appear as
bumps. This property can be illustrated simply by viewing the hologram on a
credit card with the card held upside down. Usually this real image is not very
useful, but it can be used as an object to create a second hologram, which will
appear "normal" when viewed. If the photographic plate for the second hologram
is placed so that it passes through the real image of the first hologram, the final
image will appear to straddle the hologram plane. The image appears to stretch
out in front of the hologram as well as recede into the distance behind it.
The fact that a hologram produces an exact replica of the light that left a
scene means that optical elements such as lenses, mirrors or complete optical
systems may be included in a hologram and they will "work." For example, a
hologram of a magnifying lens in front of an object will magnify the part of the
object behind it. If you look toward a different part of the object, the portion
being magnified will shift as it would with a real lens and real object. In fact,
computer generated holograms are commonly used in sophisticated optical
systems to compensate for aberrations such as coma, astigmatism and spherical
aberration.
The holograms we have been discussing are photographic negatives. If a
print (positive) is made from a holographic negative, the images that you see will
be the same because the hologram is simply a complex pattern of interference
fringes. Just as the diffraction pattern of a slit is nearly identical to that of a solid
line of the same width, light diffracts around the fringes the same way for both
the positive and negative hologram.


13.5 TYPES OF HOLOGRAMS

Transmission and Reflection Holograms


When a holographic image is viewed by looking at light transmitted
through the film it is called a transmission hologram. The holograms we have
discussed so far are transmission holograms and they are both created and viewed
with monochromatic light.
Reflection holograms are viewed with the light source on the same side
as the viewer and the light reflected from the hologram forms the (virtual) image.
One method of constructing a reflection hologram is to place the object on the
opposite side of the film plate from the reference beam. The object is then
illuminated with monochromatic light that first passes through the film. The
object beam is reflected from the surface of the object and interferes with the
reference beam in the film emulsion (Figure 13.8).
The geometry of the construction of a transmission hologram results in
fringes formed across the face of the film, similar to the lines of a diffraction
grating. The fringes of a reflection hologram, however, are roughly parallel to the
film surface and act like tiny partially reflecting mirrors, directing light back
toward the observer. A typical holographic film emulsion is approximately 10
wavelengths thick, so about 20 fringe layers form.

Figure 13.8 - Construction and viewing of a reflection hologram. (Labels: object
beam, reference beam, object, illuminating light, film, virtual image.)
Transmission holograms must be viewed with a monochromatic source.
If white light is used, the images formed by different wavelengths are slightly
different sizes and appear at different angles. These colorful images overlap and
cause a blurry "rainbow" image to form. A reflection hologram may be viewed
with white light, however, which is why they are sometimes referred to as white
light reflection holograms. (They are also called Denisyuk holograms, after the
inventor.) When illuminated by a white light source, only one wavelength
interferes constructively when reflected from the layered "partial mirrors" in the
emulsion, and the image appears in this color. You might think that the hologram
image is necessarily the color of the laser used to illuminate the object, but this is
not always the case. Variations in development of the film cause the fringe
spacing to change, resulting in a hologram that might appear redder or bluer than
the original scene. The film emulsion can also be treated with chemicals that
cause it to swell, resulting in a shift toward the blue end of the spectrum for the
final image.

White Light Transmission Holograms


Some of the most strikingly beautiful art holograms use transmitted
white light for reconstruction. So-called white light transmission holograms were
invented by Stephen Benton of Polaroid Corporation (later the director of the


MIT Media Labs) in 1968. Also called rainbow holograms, they are constructed
with a two-step process.
Normal transmission holograms have parallax in both the horizontal and
vertical directions. If such a hologram is viewed in white light, each wavelength
will produce an image at a different viewing angle, resulting in a colored blur.
Rainbow holograms remove some of the information of the original hologram
so that the final image has parallax in only one direction.
First a "normal" master hologram is created. The developed hologram is
placed behind a mask so that only a horizontal slit is exposed. This exposed
"slice" of master hologram serves as the object for the final hologram (Figure
13.9). In effect, the mask removes the parallax from the vertical direction so that
when the hologram is viewed with white light, each wavelength reproduces an
image at a specific vertical angle. That is, the image is colored, and the color you
see depends upon the vertical viewing angle.

Figure 13.9 - Schematic of the creation and viewing of a white light
transmission (rainbow) hologram. The final hologram produces an image with
parallax only in the horizontal direction. The color of the image changes with
vertical viewing direction. (Labels: masked hologram master, film, rainbow
hologram.)

Embossed Holograms
Since the famous March 1984 edition of National Geographic Magazine
that featured a hologram of an eagle on its cover, embossed holograms have
become commonplace on postage stamps, trading cards, credit cards and even
soup can labels and wrapping paper. The process for creating an embossed
hologram begins with a rainbow hologram, but in this case the rainbow hologram
is recorded on photoresist, which hardens so that the developed hologram
consists of grooves rather than variations of optical density. A layer of nickel is
deposited on the hologram, creating a mold from which other holograms may be
made by embossing the interference pattern onto plastic film. Finally, a thin layer
of aluminum is applied to each of the stamped holograms so that it may be
viewed by reflected white light. Like the white light transmission hologram that
served as the master, the embossed hologram exhibits parallax in only one
direction.


Holographic Interferometry
If a hologram is made of an object and then the object is moved slightly
and a second (double) exposure is made on the same film, dark fringes will
appear on the final image. This technique allows the measurement of very small
motions through what is essentially the interference of the light forming one
holographic image with another. Since the light leaving a hologram is an exact
replica of the light leaving an object, it is possible to replace one of the double
exposures by the object itself. That is, the developed hologram is replaced in its
original position and the virtual image is superimposed on the object. As you
might imagine, if the hologram must be moved in order to be developed, it is
exceedingly difficult to return it to precisely the original position.
A solution to the problem of developing holograms in place was
developed in the late 1970s. Holographic "cameras" with thermoplastic "film"
allowed a hologram to be exposed and developed without moving it. The object
could then be moved or deformed and the resulting displacement fringes
photographed and studied. Modern holographic interferometry systems create
extremely well defined deformation patterns through the use of video capture and
electronic processing.
Figure 13.10 shows a double exposure hologram of two circuit board
samples, held horizontally and side by side. Mirrors placed above and below the
samples allow both front and back sides of the circuit boards to be viewed
simultaneously. Between the first and second exposures, the boards were heated
by running current through the circuit traces. The fringes show that the board on
the left expanded more than the one on the right. The expansion differences are
the result of different bonding techniques used in the construction of the boards.

Figure 13.10 - Double exposure hologram of circuit board samples. (Courtesy of
Dr. Karl Stetson, Karl Stetson Associates, LLC, www.pcholo.com)

Holographic interferometry may also be used to study the movement of
parts undergoing small amplitude vibrations. In this case, the hologram is
exposed for a relatively long time. What can be thought of as many overlapping
holographic images reveals details of standing wave patterns. Figure 13.11
shows a time-averaged hologram of a vibrating disk. The bright regions are
where the disk was stationary, that is, the vibration nodes. The amplitude of the
vibration can be inferred from the fringe contrast.

Figure 13.11 - Time averaged hologram of a vibrating disk. (Courtesy of Dr.
Karl Stetson, Karl Stetson Associates, LLC, www.pcholo.com)


Computer Generated Holograms


Holograms may also be synthesized by using a computer to calculate the
interference fringe pattern, which is then transferred to film or another medium.
This technique is of growing importance in the area of holographic optical
elements (HOE) used to control light in many applications. HOE are more
common than you might expect. They are used, for example, in supermarket
scanners where a rotating array of holographically produced elements focus and
scan the laser beam across the bar code patterns on products.
Because computers may calculate and produce holograms of objects that
do not really exist, they can be used to create ideal wave fronts of, for example,
an aspheric mirror. The hologram can then be used to test a mirror under
construction. Interference fringes formed between the real and holographic
mirrors may be used to examine and perfect the mirror surface.

Holographic Data Storage


One of the most promising new technologies using holographic
techniques involves the storage of vast amounts of data in very small volumes.
Holography presents a tremendous advantage over current forms of data storage,
such as CDs and DVDs, because data is stored throughout the volume of a device
rather than just on the surface.
In principle, a laser beam is split into two parts: a signal beam and a
reference beam. Digital data to be stored is encoded onto the signal beam by a
spatial light modulator, an LCD panel that represents digital data as an array of
black and white pixels. The signal beam combines with the reference beam inside
a photosensitive medium and a hologram is recorded where the beams intersect.
Many different holograms can be recorded in the same volume by using different
wavelengths and changing the angle of the reference beam.
To read the storage device, the reference beam must illuminate the
medium at exactly the angle used to store the data. The recovered hologram is
recorded by a CCD camera and the results are fed to a computer for
interpretation. Because a whole page of data is recovered at once, holographic
data storage is expected to be much faster than other forms of storage requiring
sequential retrieval of bits of data.

13.6 PRACTICAL CONSIDERATIONS FOR LAB AND HOBBY HOLOGRAMS

Creating holograms no longer has the air of mystery it had in the late
1960s when practitioners were mostly scientists in expensive laboratories. In fact,
hobby holography has a large following, as evidenced by the number of Internet
web sites and discussion groups devoted to the subject. Nonetheless, holography


is not quite the same as photography with a "point and shoot" camera, and there
are some important considerations for a successful outcome.

Laser Source
The laser must have an adequate coherence length. This is important
especially for two beam holograms where the path taken by the reference beam
must not differ from the object beam path by more than the coherence length of
the laser. For the typical helium neon laser found in a school laboratory, the
coherence length is roughly the length of the laser tube. Although it was assumed
for many years that holograms could not be made with inexpensive diode lasers,
that certainly has not turned out to be the case. In fact, diode lasers have the
advantage of being highly divergent, so beam-spreading optics are not needed.
Removing the collimating lens from a laser pointer gives a clean, diverging beam
of laser light that works very well, especially for reflection holograms of small
objects.
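The coherence length can be estimated from the laser's spectral linewidth using
the standard relation Lc ≈ c/Δν. In the minimal Python sketch below, the roughly
1 GHz linewidth is an assumed typical value for a multimode helium neon laser;
the result is consistent with the tube-length rule of thumb above.

c = 3.0e8          # speed of light, m/s
delta_nu = 1.0e9   # assumed ~1 GHz linewidth for a multimode HeNe laser

Lc = c / delta_nu  # coherence length, m
print("coherence length ~ %.0f cm" % (Lc * 100))   # ~30 cm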
Laser output power is not necessarily a limiting factor, except that very
low power lasers will require extremely long exposure times. The longer it takes
to make the exposure, the more likely vibrations will ruin the hologram. Still,
satisfactory white light reflection holograms can be made with a helium neon
laser of around 1 milliwatt output power.

Recording Medium
The easiest medium to work with is glass plates with silver halide
emulsion. Plates are easier to mount and hold steady than flexible holographic
film, which must be sandwiched between two clean pieces of glass to prevent
motion. Film plates for holography are expensive, so planning and care are
required. Although the silver halide emulsions are similar to those used for black
and white photography, cheaper black and white photography film will not work
because it does not have sufficient resolution to record the microscopic
interference fringes of a hologram.
Dichromated gelatin films are used to produce the extremely bright,
slightly green tinged holograms used for hologram stickers and other novelties.
These films are difficult to work with, requiring an exacting development
procedure. Exposures require a deep green or blue laser. Photopolymers are
another class of holographic medium of interest to technical holographers. Like
dichromated gelatin, they require short wavelength lasers of fairly high power.
We will assume silver halide films in the applications that follow, so it
will be useful to have at least an elementary understanding of how the film
works. The light sensitive component of holography film is silver halide grains
(silver iodide, chloride and/or bromide) in a gelatin base. When a photon strikes


one of the silver halide grains it may generate a free electron that can then
combine with a silver ion to produce a neutral atom of silver. If a few silver
atoms are produced in a silver halide grain, we say a latent image has formed.
Since several photons are needed to produce a latent image, the larger the grain
the more quickly the latent image can be formed. Because of the extremely small
grains of holography film, it is considered a "slow" film.
To develop the exposed film, it is first placed in a reducing agent that
turns the silver halide grains into metallic silver. The silver atoms of the latent
image act as a catalyst, so those grains turn to silver first. Eventually all of the
silver halide grains would turn, so it is important to stop the process before the
entire plate turns black. This is accomplished in a stop bath of running water. In
black and white photography, the film is then placed in a fixer solution, which
removes the remaining silver halide and prevents further exposure from turning
the film dark. In creating holograms, we bleach the film rather than fix it. The
bleach converts the dark silver into a transparent material with a high refractive
index. The hologram then works by retarding the phase of the reconstructing
beam, rather than blocking the beam by opaque regions in the film. This results
in a much brighter hologram image.

Vibration
Since the interference fringes to be captured on film are spaced less than
a micron apart, it follows that any motion of the fringes on the order of a micron
will cause the bright fringes to move into the positions of the dark fringes. Rather
than recording a fringe pattern, the film will be more or less evenly exposed.
Unlike a photograph that shows blurring when the subject or camera moves, a
hologram simply fails to work at all if the "subject" moves enough to destroy the
interference fringes.
Although an expensive air-supported optical table is not required to
produce a hologram, great care must be taken to reduce vibrations to a minimum,
especially with a low power laser that requires a long exposure time. A stable
base may be created from a heavy wooden or concrete block (such as a patio
block) placed on neoprene rubber balls, a rubber mat or even plastic foam
packing material. During exposure there should be no moving about, and opening
and closing the shutter to expose the film must not cause the table to vibrate.
Care must be taken as well to block air currents around the object, film and
optical components.

Ambient Light
The room should be darkened, but it does not need to be totally dark. The
usual rule of thumb is that the room must be too dark to read a printed page. A


photographic darkroom is not necessary because the film is a "slow" film, not
very sensitive to light.
Commonly available silver halide holographic film is red sensitive, so a
green "safelight" may be used. A green night-light bulb works well. It must be
placed so that it does not shine directly on the film, for example, under a table or
desk. It is important to remember that most green incandescent bulbs are far from
monochromatic, so it may take a few tries to find one that is adequately "safe" for
holography. So-called "party" or holiday light bulbs should be checked with a
spectrometer or viewed through a diffraction grating to ensure that the bulb is not
producing red wavelengths along with the green.

Optical Noise
Optical noise in a hologram is unwanted light that detracts from the
image. Some noise is caused by light scattered by the grains of the silver halide
emulsion and is therefore unavoidable. Intermodulation noise is a potential
problem when constructing a transmission hologram. When a two-beam
hologram is created, light from every part of the object should interfere only with
the reference beam light. However, as Figure 13.12 illustrates, light reflected from
different parts of the object may interfere. For example, interference of light from
the points labeled A and B in Figure 13.12 will cause extraneous fringes to be
recorded on the film plate. This will result in optical noise in the image.
Intermodulation noise may be minimized by careful attention to the
geometry of hologram construction. Recall that when interfering wavefronts meet
at a large angle, the interference fringes are close together (high spatial
frequency) and when the angle is small, the fringes are far apart (low spatial
frequency) (Figure 13.2). Intermodulation noise may be reduced by making sure
that the angle between the reference beam and object beam is larger than the
angle subtended by the "ends" of the object (points A and B in Figure 13.12).
This ensures that the "noise fringes" are of a lower spatial frequency than the
fringes that will form the image. "Noise light" will be diffracted at smaller angles
than the image light and will not interfere with the viewing of the image.

Figure 13.12 - The angle subtended by the object should be less than the angle
between the object and reference beams to minimize intermodulation noise.
(Labels: A, B, reference beam, film plate.)

Optical noise can also be minimized by using a high intensity reference
beam compared to the object beam. The fringes formed by the interference of
reference and object will thus be stronger than those formed by different parts of
the object interfering with each other. This technique is useful when creating
two-beam transmission holograms.

Developing Chemicals
Developing chemicals (as well as film plates) are readily available from
Internet sources. Less toxic chemicals have been developed, which are more


appropriate in home lab and classroom settings. In fact, since laser diodes and
new methods of vibration isolation have greatly simplified holography, the most
difficult aspect may now be the proper disposal of developing chemicals
according to local regulations. The main environmental issue is the disposal of
heavy metals, including any silver that results from developing silver halide
films.

Subjects for Holography (The Object)


The best objects for beginner holography are small, stable and light
colored, that is, reflective but not necessarily shiny. Coins, keys or small plastic
action figures work well. When making a two-beam hologram, large objects will
produce noisy holograms unless they are placed far from the film. However,
placing the object too far from the film may cause the beam path length
difference to exceed the coherence length of the laser. Very lightweight objects
are more subject to movement because of air currents and should be glued to a
heavy support to ensure that they do not move.
Holograms can be made of living "objects" such as people or pets, but
they require a pulsed laser of sufficient power to illuminate the scene and a pulse
duration that is short compared to any movement made by the subject of the
hologram.

13.7 MAKING YOUR OWN HOLOGRAMS


It is difficult for the authors to imagine someone seeing a hologram and
not wanting to try to make one! The first "hobby" hologram systems, developed
in the 1970s, required the components be placed in a large box of sand to control
vibrations of the laser, optics and film. A helium neon laser was the laser of
choice. More recent developments have shown the suitability of laser pointers for
holography, and messy sandboxes have been replaced by metal plates or concrete
blocks. Chemicals for processing the film are available in convenient kit form.
The references at the end of the chapter include several Internet sites dedicated to
assisting both new and experienced holographers with advice and supplies.

Single Beam Reflection Hologram


So, you want to make a hologram! The simplest hologram is made with a
single laser beam and glass film plate. The laser beam must be expanded by a
short focus mirror or lens. The mirror is simpler to align and easier to handle than
a lens. A low power laser pointer with the lens removed will not require any
additional diverging optics. Turn the laser on and let it warm up and stabilize for
at least 15 minutes before beginning.


If the chemicals are from a kit, mix them as directed using distilled
water. Be sure to wear plastic gloves and keep a pair handy for developing the
film. Before exposing the hologram, place the developing chemicals in trays in
the order in which they will be used: developer, followed by rinse water, bleach
and a second tray of rinse water. The authors usually do both rinses under
running water if it is available. You may wish to put a few drops of a wetting
agent in the final rinse to reduce spotting on the finished hologram. Usually,
running water is specified for the rinse, but a large pail of water is acceptable,
especially if local regulations do not allow the rinse water to be put down the
drain.
Use an old holography plate or similar sized glass plate to check the laser
alignment and object placement. The object should be very close to the plate but
not touching. If the object touches the plate, the hologram will not be ruined, but
the image will be distorted, with one part "stuck" to the plane of the film. Check
the laser beam quality before making an exposure. The object and plate should be
uniformly illuminated. Rings and spots may be caused by dirt on the mirror or
lens; these should be carefully cleaned. (This is not an issue if you use a diode
laser with the collimating lens removed.)
Figure 13.13 schematically illustrates the geometry of constructing a
single beam hologram. If the film plate cannot be securely mounted in a vertical
position, it can be placed horizontally, resting on three screws partially hammered
into a heavy wood block. In this case, the laser beam is then directed downward
toward the film plate by tilting the mirror while the object rests directly on the
base below the film plate. Although this arrangement limits the size of the object
to the space beneath the plate, it is a convenient method of securing the film
plate. If a diode laser is used with the horizontal film support, it can be mounted
above the film plate using a lab clamp and stand, with the beam directed
downward.

Figure 13.13 - Schematic of constructing a single beam hologram with a lab
laser and spherical mirror beam spreader. This view is from the top. (Labels:
laser, shutter, small spherical mirror, object, film held vertical.)

The emulsion must be allowed to settle for at least 15-20 seconds before
exposure. A typical exposure time for a 5 mW diode laser mounted


30 cm above the film plate is 5-10 seconds, depending on the reflectivity of the
object used. Fortunately, the exposure time is not as critical in holography as it is
in photography, and the "best" exposure is found by trial and error. The actual
exposure is made by moving a "shutter" out of the way to allow the laser beam to
strike the plate. Any opaque piece of cardboard will serve as a shutter, but a
certain technique is involved in moving it out of the way of the beam. First, lift
the shutter slightly so that the beam is still blocked but the shutter is not in
contact with any part of the apparatus. Wait at least 10 seconds to allow
vibrations to die out, then move the shutter completely out of the way to expose
the film.
The hologram should be developed until the optical density is about 2.5.
If a 2.5 OD neutral density filter is available, the transmittance of the hologram
can be compared to that of the filter by looking toward the safelight first through
the filter and then through the hologram. Continue developing until the hologram
is as dark as the filter. If a neutral density filter is not available for comparison,
develop the hologram until it is very dark but not opaque (like very dark
sunglasses). Fortunately there is quite a bit more latitude in the development of a
hologram than in photographic development since you are recording the presence
or absence of interference fringes, not the nuances of light and shadow in a scene.
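Optical density and transmittance are related by T = 10^-OD, so the development
targets quoted here and for the two-beam hologram later in the chapter
correspond to the following transmittances (a quick Python check):

for od in (1.5, 2.5):
    T = 10 ** (-od)
    print("OD %.1f -> transmittance %.2f%%" % (od, T * 100))
# OD 2.5 passes only about 0.3% of the light -- "very dark sunglasses"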
When the hologram is sufficiently developed, it must be thoroughly
rinsed to remove all traces of the developer. It is then put into the bleach until no
trace of black remains, and then rinsed once more.
You will not be able to see the image until the film is completely dry. It
is best to allow the hologram to air dry, which admittedly requires a great deal of
patience! Some types of film plate may be gently dried with a hair dryer set to
the lowest setting. Once the emulsion is completely dry, the emulsion side of the
plate can be painted with a thin coat of flat black spray paint. This not only
protects the emulsion from scratches but also improves the visibility of the
image.
To view the hologram, illuminate it with a point source of light, like an
LED flashlight or halogen bulb, turning the plate to view the reflection at
different angles until the image is visible.

Two-Beam Transmission Hologram


A transmission hologram requires a more stable configuration than a
reflection hologram. The laser beam is divided into two parts by a beam splitter,
and relative motion can occur between the reference and object beams and the
film plate unless the entire set-up is mounted on a vibration free table or sand
box.


Arranging the components for a two-beam hologram requires some
planning to ensure that the two beams follow separate paths; that is, the reference
beam must not illuminate any part of the object (Figure 13.15). Care must also be
taken to ensure that the two beam paths are the same within the coherence length
of the laser. A piece of string can be used to compare the beam path lengths
without making actual measurements. Adjust the mirror positions to make these
lengths approximately equal.
Ideally the beam splitter should have variable density to allow control
over the two beam intensities. Once two beams are obtained, each of the beams
must be expanded. A microscope objective works well and is easy to handle.
Experienced holographers also include a spatial filter to remove the dark rings
and mottled appearance of the beam due to dust and other imperfections in the
optics that are a cause of optical noise in the finished hologram.

Figure 13.15 - Schematic of two-beam hologram set-up viewed from above.
Additional mirrors may be used to equalize beam paths or to allow the
components to fit on the table. (Labels: laser, shutter, beam splitter, mirror,
object, film.)

The reference beam is directed to the film plate, and the object beam is
directed to the object, which then reflects light toward the film plate. Before
exposing the actual film plate, the beam paths and beam quality should be
checked using an old holography plate or similar size piece of glass. Block the
object beam only and check that the reference beam does not illuminate the
object. With the object beam still blocked, check that the reference beam fills the
film plate and is not excessively spotted or mottled.
The exposure and development of the two-beam hologram is similar to
that of the single-beam hologram, except that the target optical density of the
developed plate is 1.5 rather than 2.5. The exposure time can be calculated at
least approximately if you have a light meter and data on the film plate
sensitivity. Again, the best exposure time is usually found by trial and error.
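As a rough sketch of that calculation: the required exposure (energy per unit
area) equals irradiance multiplied by time, so dividing the film's energy
requirement by the irradiance measured at the film plane gives a starting
exposure time. Both numbers below are assumed illustrative values, not data for
any particular film.

sensitivity_uJ_per_cm2 = 100.0   # assumed film energy requirement
irradiance_uW_per_cm2 = 20.0     # assumed light meter reading at the film plane

t = sensitivity_uJ_per_cm2 / irradiance_uW_per_cm2
print("starting exposure time ~ %.1f s" % t)   # 5.0 s, then refine by trial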
The transmission hologram must be viewed with a laser that replicates
the original reference beam. The developed film plate should be in the same
position relative to the reference beam that it was during exposure. Place the film
plate back into the holder and block the object beam or remove the object from
the table. To see the image, look into the film plate, toward the original position


of the object. The image should appear to be floating in the same location as the
original object.
Once the basic holography techniques have been mastered there are a
number of additional experiments that can be attempted by even novice
holographers. For example, a two channel hologram shows a different image
depending on the viewing angle. To create a two (or multi) channel hologram,
the first hologram is exposed normally. Then, after the first exposure, the film
plate is covered and a second object replaces the first. The film plate is tilted with
respect to the reference beam before the second exposure, which produces a
second set of fringes. Each hologram reconstructs at its own specific angle. Both
reflection and transmission holograms can be made this way.

REFERENCES
1. Iovine, J. Homemade Holograms, TAB Books (McGraw Hill) 1990.
2. Unterseher, F., Ross, F., Kluepfel, B. Holography Handbook: Making
Holograms the Easy Way, Ross Books, Berkeley, CA, 1996. Step by step
guide to making holograms.
3. Abramson, N. Light in Flight, or, the Holodiagram, SPIE Press, Bellingham,
WA, 1996.
4. Jeong, T.H. Holography: A concise non-mathematical explanation,
(available from Integraf, www.integraf.com).
5. "The Laser: 'A Splendid Light' For Man's Use," National Geographic, Vol.
165, No. 3, March 1984: 370-372.
6. Olsen, K. E. “Holographic multi-stereogram constructed from computer
images: Applied 3-D printer,” (doctoral thesis) Department of Physics,
University of Bergen, May 1996. Available at:
www.fou.uib.no/fd/1996/h/404001/

WEB SITES
1. Comprehensive source of information, including detailed instructions for
creating a variety of holograms
www.holoworld.com/.
2. Provider of holography supplies. Technical articles and support.
www.integraf.com
3. Boston University Project LITE, includes Moire applets
www.bu.edu/smec/lite/
4. MIT holography museum web site
web.mit.edu/museum/


REVIEW QUESTIONS AND PROBLEMS


1. What is the spatial frequency of the lines in the figures below?

2. How could you use holography to create a diffraction grating? What kind of beam
(plane or spherical) would you use? How could you adjust the spacing of the "slits"?

3. Suppose a hologram is constructed with 532 nm light. If it is reconstructed with 633
nm light, how will the image compare to the object?

4. When creating a hologram, the angle between the reference and object beams is 20°
and the wavelength is 633 nm. How far apart are the fringes in the hologram? (Hint:
Think of a diffraction grating and work backwards! Assume the first order - m=1.)

5. Suppose you make a hologram, which is a photographic negative. If you make a
contact print of the hologram so that you now have a photographic positive, will the
reconstructed image look different? Justify your answer.

Many new opticians begin their training in optics
manufacturing with two fundamental principles of
optics fabrication: (1) “You can’t make it if you can’t
measure it,” and (2) “Glass breaks.” While these two
simple rules may seem straightforward, they still
challenge even the most experienced professionals.
Designing and controlling an effective manufacturing
process requires a thorough understanding of
specifications, metrology, and the strengths,
sensitivities and fragile nature of the materials you work
with. In this chapter, you will learn the essential
properties of optical materials, specifications for
common optical components and how optics are
shaped, ground and polished to dimensional tolerances
measured in millionths and billionths of a meter.

Precision Glass Optics (Zygo, Inc. www.zygo.com)

Chapter 14

MANUFACTURING PRECISION
OPTICAL COMPONENTS
14.1 INTRODUCTION
The field of optical manufacturing transcends many disciplines and
draws upon an understanding of mechanics, chemistry, physics, optics and
material science. Therefore, it is not unusual to find people from a variety of
backgrounds working in the field of optics manufacturing. For the last several
hundred years, those learning how to make optics have generally done so through
apprenticing under a master optician. This process could take many years,
producing highly skilled technicians who would guard the special techniques of
making optics. However, the last 30 years have brought about a much clearer
scientific understanding of the process of making optics, leading to a number of
novel technological advances that have revolutionized the field. Such advances
have enabled milestones in semiconductor processing, fiber optical
telecommunications and multiple other industries and applications. These new
innovations have not rendered the old techniques obsolete; in fact, traditional
methods of optics making can still produce optics of equal or better quality.


14.2 OPTICAL MATERIALS


Optical components are made from a variety of materials, including
glass, ceramics, crystals, composites, plastics and metals. The material chosen for
a given application is selected based on its properties, which describe its optical,
thermal, mechanical and chemical qualities. These properties are quantified in
specifications and tolerances describing allowable variations. The amount of
variation is known as the tolerance, and both the mean value and tolerance of a
given property may ultimately limit the end quality or performance of an optical
component. Optics designers will conduct a tolerance analysis to make sure that
the material specifications are appropriate to ensure the optic will perform as
desired. Performance specifications may define how well the component may
focus or collimate light, what wavelengths will transmit or how an image will
distort after passing through a lens or optical system.
Glass is not a simple material to describe and material scientists have
struggled to define it simply and concisely. Some of the benefits of glass are its
transparency in the visible spectrum, hardness, chemical resistance and relatively
high strength. What makes glass different from metals and ceramics is its
amorphous, non-crystalline nature. Glass structure, or order, may only be
detected on a micro-scale while examining a small volume of atoms or
molecules. Unlike metals and ceramics, there is no macro-grain structure or
repetitive long-range order to the arrangement of glass molecules. Instead, glass
resembles metals or ceramics as if they were "frozen" in a molten or liquid state.
For this reason, glass is sometimes referred to as a material in a "supercooled"
state. As we will see later in the chapter, this unique nature of glass makes it
possible to manufacture optics to dimensional accuracies on the order of
nanometers, but also results in a very brittle material that is easily fractured.
Most glasses are based on a silica (SiO2) network. The fundamental sub-
unit in silica-based glasses is the SiO4^4- tetrahedron, in which a silicon (Si^4+)
ion is bonded to four oxygen ions. Figure 14.1 schematically shows the four
oxygen ions arranged to form a tetrahedron. Another common glass type is based
on boron oxide (B2O3). Borosilicate glasses are made with additional alkali and
alkaline earth oxides to form the tetrahedral unit BO4^4-. Additives known as
modifier oxides, such as Na2O and K2O (alkali) and earth oxides such as CaO
and MgO (alkaline), are often added to change the glass's chemical, mechanical,
thermal and optical properties.
Among the most common types of glass is soda-lime glass, made
from SiO2 and Na2O, which is used in industrial lighting, pressed and blown
glassware, containers and other commercial applications. For optical
applications, fundamentally pure SiO2, borosilicate glasses and lead glasses are


most often used. Borosilicate glasses such as Pyrex have very low thermal
expansion properties. Lead glasses are very often specified in lens design for
their high refractive index and are also used for high-energy radiation shielding
applications.

Figure 14.1 - Silica tetrahedron and typical soda-lime glass network. (Labels:
Si^4+, O^2-, Na^+.)

Chemical Properties of Glass


There are hundreds of different types of optical glass made from
numerous mixtures of different compounds and chemicals. While these
chemicals are generally selected to optimize the optical or mechanical properties
of the component for a given application, the chemical make-up may also result
in undesirable characteristics to the optician, such as leaching, staining or etching,
or sensitivity to humidity and common cleaning solvents. Water, water vapor, acids,
gases and various ions in polishing slurries may cause staining or decomposition
of the glass surface. For this reason, chemical resistance is one of the most
important properties. Glass catalogs will cite chemical durability in terms of the
amount of mass dissolved from a sample when exposed to water or another
chemical for a certain period of time. In addition to the amorphous nature of
glass, sensitivity to chemical attack is the second important key that enables the
process of optical polishing.
Perhaps it is surprising that the most important and prevalent chemical
attack comes from water. Water and/or its ionic components H+ (hydrogen ions,
making aqueous solutions acidic) or OH- (hydroxide ions, making aqueous
solutions alkaline) play the decisive role. The pH value indicates whether the
aqueous solution is neutral (pH = 7), acidic (values below 7) or alkaline (values
above 7).

Mechanical Properties of Glass


As many of the processes used to make optics require the cutting,
grinding or abrasion of material, the mechanical properties of glass are important
to understand. One of the most important properties is hardness, which refers to
the resistance of a material to plastic deformation. Hardness is measured using an
indenter device. An indenter is a probe, typically with a spherical, conical or


pyramid shaped diamond tip. When the probe is pressed against the material
under a defined force, the depth of the resulting indentation defines the hardness
of the material. Optical materials are most often characterized by their “Knoop”
hardness, which refers to a particular type of indenter with a pyramid shaped tip.
Table 14.1 shows the Knoop hardness of a number of common optical materials.
Opticians use the knowledge of hardness to select appropriate tools and materials
for making optics. For instance, if one needs to grind a hard material like fused
silica it would be wise to use an abrasive grit like aluminum oxide or diamond,
which are substantially harder. A weaker grit might simply break apart without
doing any work to the fused silica blank. It follows then that material removal
rates may be substantially slower when grinding harder materials. Conversely,
soft materials may work much faster and be more prone to developing scratches
and other surface defects from polishing, cleaning or handling. Designers will
often select hard materials for optics used in harsh or abrasive environments,
such as aircraft windows.
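A minimal sketch of that selection logic in Python, using Knoop hardness values
from Table 14.1 (the diamond value, roughly 7000, is an assumed approximate
figure included only for illustration):

knoop = {"BK7": 610, "SF-11": 450, "F2": 420,
         "Sapphire": 1370, "Diamond": 7000}   # diamond value assumed

workpiece, abrasive = "Sapphire", "Diamond"
if knoop[abrasive] > knoop[workpiece]:
    print(abrasive, "is hard enough to grind", workpiece)
else:
    print(abrasive, "may simply break apart on", workpiece)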
The stiffness, or rigidity, of a material is described by a property called
the modulus of elasticity (E), also known as Young's modulus. The modulus of
elasticity is the ratio of the applied tensile force per unit area (stress) to the
fractional elongation (strain) the material experiences in the direction of the
applied force. This relationship of stress (σ) and strain (ε) is sometimes referred
to as Hooke's Law.

E = σ (stress) / ε (strain)                    (14.1)

Elastic materials require relatively little force to change their dimensions;
such a material would have a very low elastic modulus. In contrast, a stiff
material would have a very high elastic modulus. Glass materials are similar in
stiffness to aluminum. Although it may not be obvious that glass has elastic
properties, when glass optical components are placed even under light loads, the
amount of mechanical deformation can be severe.
As tension is applied to a material in one dimension causing it to stretch,
one would expect the dimensions of the material to contract in the perpendicular
dimension (Figure 14.2). Poisson's ratio (ν) defines the ratio of dimensional
change perpendicular to the applied force versus the change in the direction of
the applied force. Because processes used in forming lenses and other optical
components involve the application of loads and forces, it is important to know
how the material will bend in response. Poisson's ratio and Young's modulus are
key elements of such calculations.

Figure 14.2 - Elongation and contraction of a volume of material. (Axes: x, y, z.)
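As a worked sketch of these two constants in Python: the axial strain produced
by a given stress follows from Equation 14.1, and the lateral strain follows from
Poisson's ratio. The BK7 values come from Table 14.1; the 10 MPa load is an
assumed illustrative value.

E = 80.7e9      # Young's modulus of BK7, Pa (Table 14.1)
nu = 0.21       # Poisson's ratio of BK7 (Table 14.1)
stress = 10e6   # assumed 10 MPa tensile stress

axial_strain = stress / E             # fractional elongation along the load
lateral_strain = -nu * axial_strain   # contraction perpendicular to the load
print("axial strain   = %.2e" % axial_strain)    # ~1.2e-4
print("lateral strain = %.2e" % lateral_strain)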


Thermal Properties of Glass


The coefficient of thermal expansion (CTE), often represented by the
Greek letter α, is probably the most important thermal characteristic to an
optician. The CTE is a measure of how much a material expands or contracts
with changes in temperature. The higher the coefficient, the more the material's
dimensions change with temperature. This is important for numerous reasons;
one of the most important is thermal shock. Materials will vary in their ability to
withstand rapid changes in temperature and can easily fracture when exposed to
very hot or cool fluids. Some materials will fracture as a result of thermal shock
even when undergoing ordinary grinding or polishing. Another critical result of
CTE is thermal deformation during polishing or testing. When a lens or mirror is
exposed to slow uniform temperature changes, it may simply expand or contract
without great change to its optical flatness or irregularity. However, when one
part of the lens or mirror is heated or cooled more rapidly than other portions of
its volume, it is said to have a thermal gradient and the optic will thermally
deform.
The magnitude of the deformation (or sag, δ) may be approximated by
Equation 14.2, which describes the thermally induced bending of a circular plate
of thickness (t) over diameter (L), with coefficient of thermal expansion (α) and
axial thermal gradient (ΔT).

δ = L²αΔT / (8t)                    (14.2)
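A quick Python sketch of Equation 14.2 follows; the plate dimensions and the
1 °C front-to-back gradient are assumed illustrative values, with the CTE of
Pyrex taken from Table 14.1.

L = 0.100         # assumed plate diameter, m
t = 0.015         # assumed plate thickness, m
alpha = 3.25e-6   # CTE of Pyrex, 1/degree C (Table 14.1)
dT = 1.0          # assumed axial thermal gradient, degrees C

sag = (L ** 2) * alpha * dT / (8 * t)
print("thermal sag = %.3f um" % (sag * 1e6))   # ~0.271 um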

For instance, a mirror at thermal equilibrium may be perfectly flat.
However, if one side of the mirror is irradiated and begins to heat up while the
rear surface remains cool, the warm surface will expand while the cool surface
stays the same. The result is that the mirror will bend (Figure 14.3). Even the
heat transferred by an optician's hands as he picks up the lens or mirror is enough
to cause significant thermal deformation!

Figure 14.3 - Thermal bending of a circular plate.

Figure 14.4 illustrates the effect of placing a hand on the surface of a
fused silica reference flat for only 10 seconds. About 30 seconds after the hand is
removed, the area affected has expanded by 0.275 microns in thickness. This
may not seem like much, but the specification for flatness of this surface is only
0.05 microns! Within a matter of minutes, however, the optic will equilibrate
and the deformed shape will return to its original flat condition. This process is
referred to as thermal stabilization. How fast or slow the material stabilizes is
determined in part by its thermal conductivity (k), which describes the rate at
which a material will transmit heat. Also important for understanding


stabilization rates is the specific heat, which is the heat energy necessary to raise
the temperature of a unit mass of material by one degree.

Figure 14.4 - Thermal expansion from a handprint.

Optical Properties of Glass


Refractive index (n) was defined earlier as the ratio of the speed of light
in vacuum to that in a material for a particular wavelength, temperature and
pressure. Refractive index determines the angle at which light rays will refract in
a material and is a critical property defining reflection. It is typically defined in
the middle range of the visible spectrum as nd, where the subscript "d" refers to
the helium d spectral line at a wavelength of 587.56 nm.
Engineers and opticians use refractive index for many purposes,
including lens design, coating design, prism design and testing, beam deviation
or optical wedge testing, and transmission, reflection and absorption calculations. The
homogeneity, or uniformity, of refractive index in optical glass is one of the most
important tolerances for determining the amount of distortion a transmitting
wavefront will experience when passing through the optic. Because most glass
types are complex mixtures of several different compounds, it is very difficult to
have perfect homogeneity in any given volume of glass. Even when the glass
chemistry is very homogeneous, the uniform and controlled melting and
annealing of glass is critical to producing a homogeneous refractive index
throughout the volume of the glass blank.
Annealing is the controlled cooling of a material after it has been brought
to an elevated temperature (such as in melting or softening). By slow uniform
cooling, internal stresses in the material can be minimized. Non-uniformity in the
annealing rate and distribution of temperatures throughout the volume of glass
during annealing results in stress in the material. These stresses cause mechanical
strain, or irregularities, in the glass structure and density, which has the effect of
changing the refractive index over the affected areas. When these index
variations appear locally in the form of streaks or bands, we call it striae.
Striations are often periodic in nature ranging in widths of about 0.1 mm to
several millimeters. A light ray passing through an area of striae in a lens will be
sharply deviated rather than passing straight through. Striae also tend to be


directional, such that they may only be visible when observed in a certain
orientation.
Stress birefringence, or strain birefringence, is a more general term
applied to more global stress induced changes to refractive index. Birefringence
is expressed as the optical path difference (OPD) between a ray propagating
through a region of maximum strain versus a ray propagating through the same
region in the transverse direction. A highly birefringent material will have
differing refractive indices in orthogonal directions. Reheating or re-annealing
the glass may relieve these stresses and thus reduce the birefringence. A
specification sheet for a particular glass will provide a tolerance for allowable
stress birefringence given in units of nanometers of OPD per centimeter
thickness. Another way to think of this is that a light ray of a certain wavelength
will experience a relative phase shift of so many nanometers for each centimeter
of glass traversed versus an orthogonal ray.
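As a simple worked sketch, the worst-case OPD is just the specification
multiplied by the part thickness; both numbers below are assumed for
illustration.

spec_nm_per_cm = 10.0   # assumed allowable stress birefringence, nm of OPD per cm
thickness_cm = 5.0      # assumed glass thickness

print("worst-case OPD = %.0f nm" % (spec_nm_per_cm * thickness_cm))   # 50 nm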
Impurities in the form of seeds or other contaminant particles are called
inclusions. Inclusions will scatter light passing through the optic and give an
undesirable cosmetic appearance.
Table 14.1 shows optical, thermal and mechanical constants for several
different types of glass. Thermal and mechanical data is included for a few
metals for comparison. This table is just a small sample of the vast number of
glass types available to optical designers and manufacturers.

                          Optical                    Thermal                              Mechanical

Material          n      CTE         Thermal        Knoop        Young's       Density   Poisson's
                         (µm/m·°C)   Conductivity   Hardness     Modulus (E)   (g/cm³)   Ratio
                                     (k) (W/m·K)    (H)          (GPa)
---------------------------------------------------------------------------------------------------
Zerodur           1.52   0-0.05      1.64                        91.0          2.53      0.24
Fused Silica      1.46   0.6         1.37                        73.2          2.20
Pyrex             1.47   3.25        1.13                        65.5          2.23
BK7               1.52   7.1         1.12           610          80.7          2.53      0.21
SF-11             1.78   6.1                        450          66.0          4.74      0.235
F2                1.62   8.2                        420          57.0          3.61      0.220
Sapphire          1.43   8.4                        1370                       3.97
Zinc Selenide     2.64   7.6         18.0           100          71.9          5.27
Silicon Carbide   2.68   4.0                                                   3.1       0.14
Silicon           3.49   4.2                        820          131           2.33
Steel                    10-18       50                          190-210       7.85      0.27-0.30
Iron                     11.7        73                          190           7.4       0.3
Aluminum                 23.6        205                         73.1          2.51      0.33

Table 14.1 - Material constants for several types of glass, with thermal and mechanical data for a few metals included for comparison.


14.3 OPTICAL COMPONENTS TERMS AND SPECIFICATIONS


Optical component drawings follow the same guidelines as drawings of
other mechanical components for describing dimensions and tolerances (Figure
14.5). The most common units are similar to those used in other precision
manufacturing applications (Table 14.2). However, optics drawings typically
have several unique elements, which describe attributes such as surface micro-
roughness, wavefront distortion, cosmetic quality and transmission or reflected
energy for specific wavelengths. Several of the most common specifications are
described here.

1 micron (µm):
= 1/1000 of a mm (1 × 10⁻³ mm)
= 1/25,400 of an inch (3.94 × 10⁻⁵ inch)
Applications: surface form, roughness, mechanical tolerances
1 nanometer (nm):
= 1/1,000,000 of a mm (1 × 10⁻⁶ mm)
= 1/25,400,000 of an inch (3.94 × 10⁻⁸ inch)
Applications: wavelength, OPD, roughness
1 angstrom (Å):
= 1/10,000,000 of a mm (1 × 10⁻⁷ mm)
= 1/254,000,000 of an inch (3.94 × 10⁻⁹ inch)
Applications: surface roughness, atomic dimensions
arc-seconds and radians:
3600 arc-seconds in 1 degree of arc
2π radians in 360 degrees
1 arc-sec = 4.85 × 10⁻⁶ radians ≈ 5 µradians
Applications: wedge/parallelism, angle measurement, slope or gradient form error

Table 14.2 - Common units of measure in optics.

Figure 14.5 - Typical optical component design drawing. (Courtesy of Zygo, Inc., www.zygo.com)


Irregularities in surfaces are referred to differently depending on the spatial scale lengths of the surface features. Figure 14.6 shows the relative differences between form error, waviness (slope error) and roughness.

Figure 14.6 - Spatial scale lengths: the form scale (form error), the waviness scale (mid-spatial frequency errors) and the roughness scale (roughness error).

Surface form error is a description of a surface in terms of its deviation from a best-fit plane (flatness error) or, on a spherical surface, from a best-fit sphere (also called irregularity). Surface form error is most often represented as either a peak-to-valley or an RMS statistic. Peak-to-valley (PV) refers to the deviation from the highest peak to the lowest valley on a surface, measured from either its best-fit plane or best-fit sphere. RMS form error is the root-mean-square of the heights of the surface features, which gives greater weight to more repetitive feature heights.

Figure 14.7 - Two-dimensional diagram of best-fit plane and sphere.
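Both statistics are simple to compute once surface heights relative to the best-fit plane or sphere are known. A minimal sketch on made-up sample heights follows; a real interferometer would first fit and subtract the reference surface.

```python
import math

# PV and RMS form error from sampled surface heights (nm), measured relative
# to the best-fit plane or sphere. The heights are made-up sample data.
heights = [-12.0, -5.0, 0.0, 3.0, 8.0, 11.0, -7.0, 2.0]

pv = max(heights) - min(heights)                             # peak-to-valley
rms = math.sqrt(sum(h * h for h in heights) / len(heights))  # root-mean-square
print(f"PV = {pv:.1f} nm, RMS = {rms:.1f} nm")
```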
Surface form error measurements are typically made with an interferometer. Figure 14.8 (top) illustrates a reflected wavefront distortion measurement. The result is computer processed to determine the surface form of the optical component. Wavefront distortion is created when an image is transmitted through or reflected off an imperfect optical element or system. Irregularities in the surfaces of the optics and inhomogeneity in the glass will distort the image, much like your reflection in a fun-house mirror. The degree to which an optical element distorts images is referred to as its transmitted wavefront error or reflected wavefront error, depending on whether it is a transmissive or reflective element.

Figure 14.8 - Reflected (top) and transmitted (bottom) wavefront tests, showing the reflective reference surface, transmission reference surface and test piece. (Courtesy of Zygo, Inc., www.zygo.com)


We measure wavefront error by illuminating a near perfect reference


element to generate a wavefront (in this case, the perfect wavefront is analogous
to the image) that is essentially free from aberration. This wavefront is then
reflected off the test surface or transmitted through the test element in an
interferometer as shown in Figure 14.8. The now aberrated wavefront is
interfered with the reference wavefront and the resulting optical path difference
(OPD) is the wavefront error of the test optic. From this information we may
discern the form error or irregularity present in the test optic's surfaces or
material.
Note that in the reflected wavefront example, the wavefront traverses the form error (h) twice: once as it is incident on the surface and again as it is reflected from the surface. This is more easily observed in Figure 14.9. Therefore, the form error is only one half of the wavefront error measured. Note also that in the transmitted wavefront error measurement, the test beam passes through the test optic twice as well. Thus, again, the wavefront error measured by the interferometer is two times the actual distortion contributed by the test optic. For this reason we call this type of test a double pass test. Figure 14.10 shows an analyzed interferogram of the form error in a test optic derived from a reflected wavefront error test.

Figure 14.9 - Wavefront error at normal incidence is 2x the form error (h).
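The factor of two is worth making concrete. The sketch below converts a measured double-pass wavefront error into surface form error; the measured value and test wavelength are assumptions for illustration.

```python
# In a double pass test the interferometer reports twice the test optic's
# contribution, so surface form error = measured wavefront error / 2.
measured_wavefront_waves = 0.25  # measured wavefront error in waves (assumed)
wavelength_nm = 633.0            # HeNe test wavelength (assumed)

form_error_waves = measured_wavefront_waves / 2
form_error_nm = form_error_waves * wavelength_nm
print(f"Form error ~ {form_error_waves:.3f} waves = {form_error_nm:.0f} nm")
```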

Figure 14.10 - Interferogram resulting from a reflected wavefront test. (Courtesy of Zygo, Inc., www.zygo.com)

Slope/Gradient: Slope error is often referred to as gradient or waviness, or how sharply features rise from the surface. Slope is usually represented by an angle reported in microradians, or as a rise-over-run specification such as waves per centimeter.

Figure 14.11 - Slope error measured on a surface.

Clear Aperture (CA) is the usable area of an optical component, the portion of the optic through which we want to pass light or over which light is to be reflected. It is typical that the optical specifications need only be satisfied over


this usable aperture. The area outside of the clear aperture may be used for
mounting the optic.
Roughness (PV, Ra, Rq): Average roughness (Ra) is the arithmetic average height of surface features measured for spatial scale lengths on the order of 0-0.01 mm. Roughness is typically measured with a stylus profilometer or an interferometric microscope. Figure 14.12 shows a roughness measurement taken from a silicon wafer using an interferometric microscope.

Figure 14.12 - Surface roughness. (Courtesy of Zygo, Inc., www.zygo.com)

The radius of curvature defines the curvature of a lens surface. As you


learned in Chapter 5, radius is measured as the distance of a line drawn normal
from the best-fit sphere until it crosses the optical axis at the center of curvature
(Figure 14.13). The sag is the distance a curved surface deviates from being flat
over its diameter or clear aperture. The word sag is short for sagitta, which is
derived from Sagittarius, the archer, making reference to the shape of the archer's
bow. Sag was introduced earlier in this chapter to describe the thermally induced
bending of a plate, but it is also an important lens specification. Although the sag
of a lens almost never appears on design drawings, it is one of the most important
dimensions for the optician to know for testing and making tools to grind or
polish the lens surface. Sag and radius of curvature are related by the sag
equation

(14.3)    s = R − √(R² − (d/2)²)

Figure 14.13 - Radius of curvature and sag. s = sag; R = radius of curvature; d = diameter of the lens / clear aperture.
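Equation 14.3 is straightforward to evaluate. The sketch below computes the sag for an assumed radius and diameter; both numbers are illustrative, not taken from a drawing.

```python
import math

# Sag of a spherical surface from Equation 14.3: s = R - sqrt(R^2 - (d/2)^2).
R = 100.0  # radius of curvature in mm (assumed)
d = 25.0   # lens diameter or clear aperture in mm (assumed)

s = R - math.sqrt(R**2 - (d / 2)**2)
print(f"sag = {s:.4f} mm")  # ~0.784 mm for these values
```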


Radius of curvature is measured either with a spherometer (Figure 14.14) or interferometrically with a test plate or interferometer. The spherometers shown in Figure 14.14 consist of a dial indicator that measures the linear motion of a depth probe placed in the center of the cup. The dial reads zero when the surface of the probe is located in the plane of the cup. The cup is placed down on a reference glass or test plate and the dial indicator is zeroed out. When the cup of the spherometer is placed on the curved surface of a lens, the probe measures the sag over the diameter of the cup. The sag may then be used to calculate the radius of curvature of the lens. Many spherometers can measure to an accuracy of about 0.01 mm. If greater accuracy is required, an interferometric test using a test plate is often used.
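Solving Equation 14.3 for R gives R = (s² + (d/2)²)/(2s), which is how a spherometer reading is converted to a radius of curvature. A minimal sketch with an assumed cup diameter and sag reading:

```python
# Radius of curvature from a spherometer measurement: invert Equation 14.3.
# With r = d/2 (cup radius) and measured sag s, R = (s**2 + r**2) / (2 * s).
d = 30.0    # spherometer cup diameter in mm (assumed)
s = 1.1300  # measured sag in mm (assumed)

r = d / 2
R = (s**2 + r**2) / (2 * s)
print(f"R = {R:.2f} mm")  # ~100.12 mm for these values
```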

Figure 14.14 - Spherometers. Sag (s) is measured over the diameter (d) of the cup.

The interferometric use of a test plate is very similar to the reflected wavefront test shown in Figure 14.8. However, in this case, the reference element is the test plate, a reference element made to a calibrated radius of curvature and form accuracy. When a test plate is placed in contact with a test surface of similar radius, interference fringes may be observed. The number of rings corresponds to the thickness of the air gap between the surfaces. This air gap is also equal to the difference in sag between the test plate and the test surface. Since the radius of the test plate is known, its sag may be calculated; adding the air gap thickness then gives the sag, and thus the radius, of the test surface. This is still one of the most accurate techniques for determining radius of curvature. Sag of the test surface may be measured to a precision of approximately 0.0005 mm.
Wedge Error/Eccentricity: Wedge error is defined in either a lens or a flat optic as the angle formed between two best-fit planes of the optic's surfaces, which are intended to be parallel. In a flat, we call this parallelism. In a lens or spherical surface, it is typically described as eccentricity (ε), or centering error. When wedge exists in a lens, it necessarily follows that the center of the spherical surface does not exactly correspond to the center of the diameter of the element. Figure 14.15 shows why this is so and how eccentricity is related to wedge (θ). The eccentricity is the amount that the center of the spherical surface is offset from the center of the diameter. Wedge and parallelism are most often measured with dial indicators, autocollimators, and interferometers.

tan θ = ε/R = h/L

Figure 14.15 - Relationship between wedge (θ) and eccentricity (ε). ε = eccentricity; h = maximum height variation over length L.
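The relationship in Figure 14.15 converts a dial-indicator sweep directly into a wedge angle. The sketch below reports the result in arc-seconds using the conversion from Table 14.2; h and L are assumed example readings.

```python
import math

# Wedge (parallelism) from a dial-indicator sweep: tan(theta) = h / L.
h_mm = 0.002  # maximum height variation in mm (assumed)
L_mm = 50.0   # length over which h was measured, in mm (assumed)

theta_rad = math.atan(h_mm / L_mm)
theta_arcsec = theta_rad / 4.85e-6  # 1 arc-second ~ 4.85e-6 rad (Table 14.2)
print(f"wedge = {theta_arcsec:.1f} arc-seconds")  # ~8.2 arc-sec here
```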

Cosmetic Quality: The cosmetic quality of an optic is typically defined


in terms of scratches and digs. Scratches are defined by their width, length and
area, and sometimes by their apparent relative size as compared with a
standardized scratch applied to a calibrated plate. Digs are defects that are typically circular, like a pit in the surface. Dig size is typically characterized by the dig's diameter.

14.4 CONVENTIONAL GLASS OPTICS FABRICATION

Overview
Having now discussed some basics of material properties and
specifications for design and performance, we can explore the topic of how
optics are made. The task of manufacturing is to select and apply techniques to
cut, grind and polish the materials to conform to the specifications of the design.
The better we understand the properties and sensitivities of the material, the more success we will have in applying and controlling manufacturing processes. Numerous techniques, tools and pieces of equipment have been developed for optics manufacturing over the past several hundred years. The equipment and techniques selected are intended to shape glass blanks as accurately as
possible, prevent undesired fractures and cracks in the material, polish to satisfy
specifications, and control the process to converge on the finished condition as
efficiently as possible. A conventional manufacturing sequence is shown in
Figure 14.16.


Figure 14.16 - Conventional manufacturing sequence for glass optics: glass blanks → generating/machining → fine grinding → polishing → cleaning → thin film coating, supported throughout by common metrology tools (verniers and micrometers, spherometers, coordinate measuring machines, dial indicators).

A typical series of major process steps in the making of glass optics


begins with the sawing or rough cutting of preformed glass blanks. Rough
cutting is followed by surface generating and fine grinding or “lapping.” Fine
ground surfaces are then polished, cleaned and often coated with thin films to
enhance their transmissive or reflective properties for certain wavelengths. The
following paragraphs will describe the mechanisms by which opticians cut and
grind away material.
By way of a very crude analogy, glass optics are cut to shape much as a sculptor chisels away stone to form a statue. The sculptor is the optician and the chisels are fine diamond tools. In keeping with this analogy, just as a large chisel struck with great force will introduce a large and deep crack in


the stone, break off a large chip and leave a very rough surface, glass is cut with
a variety of sized tools that yield a correspondingly rough or smooth surface. We
therefore describe surface forming operations in optics by generating, grinding
and polishing. On the micro scale, these three operations are governed by
fundamentally different mechanisms or modes of material removal. These modes
of material removal are brittle fracture, ductile grinding and chemically activated
polishing. To understand how these processes differ, we will return to our earlier
discussion of the mechanical and chemical properties of glass and how glass
materials may be caused to deform.
Materials may be deformed either elastically or plastically or they may
be fractured. Materials with a low modulus of elasticity will stretch when placed
under load, thus deforming dimensionally. However, as the load is released, the
material will return to its original dimension. Plastically deformed materials will
not return to their original dimensions when load is released; they are
permanently altered. All materials experience some degree of elastic and plastic
deformation when loads are applied; however, some materials are much more
inclined to deform plastically. At some point all materials will reach a critical
condition when they will cease to flex and fracturing will take place. The degree
to which a material will deform plastically before fracturing is referred to as
ductility.
Fracture refers to the separation of a solid by applying a force or load.
Fracturing of materials takes place in two different modes – ductile and brittle
mode. Ductile materials are distinguished by fundamentally plastic deformation
and very slow and shallow crack creation. By contrast, brittle materials
experience little plastic deformation before deep, fast propagating fractures
result. The fracturing of glass is generally dominated by this brittle mode.
Therefore, as an optician embarks on the task of grinding glass optics, it is
important to be aware of the cracks and fractures being created on the glass
surface as well as beneath the surface.
As micro cracks are formed in the surface of the glass, fractures penetrate beneath the surface and form sub-surface damage (SSD). This SSD can degrade the quality and stability of an optic. Therefore, in critical applications like vacuum windows, aircraft windows, laser optics or very high accuracy optics, these microcracks must be removed in subsequent grinding and polishing operations.

Figure 14.17 - Sub-surface damage ("SSD").

Generating
Generating almost always refers to the coarse grinding of optical flats
and spheres using a “fixed-abrasive” grinding tool. There are a number of


different types of generators including rotary surface generators, reciprocating


surface generators and curve generators. Each of these machines uses a cutting
tool implanted with fine diamond grains to form the cutting surface. These
“fixed abrasive” tools are typically attached to a spindle, which rotates the tool
over the work surface to grind away material. “Machining” a surface is often
used interchangeably with "generating,” but may also refer to the use of multi-
axis CNC machines to form more complex shapes, edges and structures. CNC
machines are often employed to make lightweight mirrors, stepped or curved
edges or other milled features. Figures 14.18 and 14.20 show some examples of
different types of milling tools, including edge or side mills, cup wheels and face
milling tools.
In generating, material is removed when the abrasive particles in the
cutting tools create micro-cracks in the glass and fracture away chips. Coolants
are showered over the glass during machining to prevent the surfaces from
heating up to the point of uncontrolled fracturing and also to clean away fractured particles and small chips of glass. In some cases, coolants are used as lubricants
and can have beneficial effects on surface roughness.

Figure 14.18 - Generating (machining) tools: side milling, face milling and cup wheel machining. In each case a diamond cutting surface rotates about the tool axis over the workpiece.

Typical machining processes require the use of several progressively


finer tools to obtain finer surfaces. The coarsest of the tools are generally used
for rough shaping and heavy stock removal. Finer tools may be used sequentially
to remove material to a depth exceeding the SSD layer produced from the first
tool. Because SSD is produced also from very fine tools, as a last step before fine
grinding or polishing, workpieces are sometimes chemically etched to relieve the
stresses resulting from the presence of SSD. Etching will smooth out sub-surface


defects and enlarge them while the chemical seeps into the base of residual micro cracks.

Figure 14.19 - Etching to remove SSD (before and after etching).

In addition to simply generating the work piece to the proper shape and dimensions, other key features of controlling a generating process include maximizing the material removal rate and producing a low roughness surface, free of large chips or other surface defects. The CNC machining process allows control and consistency of the stock removal process. By specifying the removal amounts, feed rates and tool speeds within the control program, the surface and subsurface characteristics of the components may be pre-determined and repeatedly produced.

Figure 14.20 - Spherical surface generating with a diamond cup wheel.

Grinding
Grinding is a refining operation following generating and preceding
polishing. Grinding most often describes the process of abrading away material
using very fine, hard abrasive grains suspended in either a fixed abrasive tool or
loosely suspended in water to make a slurry. Hard grinding tools made from cast
iron, ceramics and sometimes even glass are used to form the surface shape
desired. Typical abrasives used in slurries are aluminum oxide or diamond. The
term "lapping" is also frequently used to describe the process of fine grinding
flats on a large planar grinding table called a lap.

Figure 14.21 - Cast iron and glass grinding tools.


Some materials are much easier to grind than others and this relative
difference is referred to as lapping hardness. Materials with high lapping
hardness will grind slower. The lapping hardness of a material is dependent on
the hardness of the glass, mechanical strength and to a lesser extent, its chemical
durability. A material's mechanical strength is often described by its tensile yield
stress (ST), the stress required to fracture the glass.
The slurry is a suspension of abrasive in a fluid, almost always water.
Typical abrasives used in optical grinding are aluminum oxide (Al2 O3) and
diamond. Polishing slurries today commonly include cerium oxide (CeO2) and
zirconium oxide (ZrO2). Some important aspects of slurry include the pH,
specific gravity of the suspension (or concentration), mean particle size, and
uniformity of the distribution of particle sizes in the suspension.

Polishing
Polishing is the process of changing a ground, diffuse surface to one that
is specularly reflective. Like grinding, polishing often uses a loose abrasive
slurry, but unlike grinding and generating, polishing is not a brittle fracturing
process. Instead, chemistry plays a much larger and essential role in the process.
Another fundamental difference is the type of tools used. Instead of hard tools
like cast iron or ceramic, polishing laps are made from relatively soft materials
like polyurethane, felt, cloth and most commonly pitch. These “soft,” highly
viscous and formable tools are used to comply with or “fit” to the surface of the
optic. Then, by applying selective load to the tool, and stroking it over the optic
surface, the shape of the surface may be slowly and carefully polished away until
the final form accuracy is achieved.
Polishing Pitch
Pitch tools have been used successfully by opticians for many years. The
pitch can be made synthetically or from natural ingredients. The basic types are
wood pitch (made from deciduous and coniferous trees), rosin based, petroleum
based and asphalt tar pitch (coal based). Two of the most important properties of
a pitch polishing lap are its ability to take on a desired form (flat, spherical) and
to alter or adjust the form of the lap during polishing. This is referred to as the
compliance of the lap. While the hardness (or viscosity) of the pitch may strongly
influence a lap’s compliance, other factors are involved as well. In order to allow
the pitch tools to flex and flow, channels are typically cut into the surface of the
tools. This also promotes slurry flow over the lap. Other important aspects of
pitch tools include the ability to hold a "charge," that is, to allow slurry particles to embed in the surface.


Synthetic Pads
Synthetic pad properties vary with the materials they are made from but
are distinguished from pitch in that they are dominated by their elastic behavior
rather than their viscosity. Unlike pitch, which may be formed to a particular
shape with pressure, synthetic pads must be shaped with some form of cutting
tool. Synthetic pads are sometimes fly-cut or lapped with a diamond tool and
have a number of advantages over pitch, including maintaining their form for a
very long time and the ability to withstand high pressure. This enables high
speed polishing machines to use greater force during polishing to maximize
removal rates. However, pads cannot be shaped to the same accuracy as pitch
tools. Consequently, pitch is used far more often to achieve the highest surface
accuracies on polished optics.
How Polishing Works
Recall from our discussion of the chemical durability of glass that water
reacts with ions in the glass producing staining, etching or even dissolution of the
upper surface of the glass. Unlike a mechanically dominated process such as
mechanical abrasion, polishing depends heavily on this chemical reaction.
However, water alone will not produce a polished surface. Pressure, velocity and
abrasion also play crucial roles. It is the combination of these elements that
enables the chemo-mechanical process of producing precision polished surfaces
on glass optics. Figure 14.22 illustrates this interactive chemo-mechanical
relationship.
This process depicted in Figure 14.23 repeats itself as fresh slurry is
poured over the lap and glass and the glass is traversed over the lap under applied
load until enough material is removed that the matte finish of microcracks and
fractures has been polished away. What is left is a low roughness polished finish
with no clearly visible remaining pits or surface cracks left over from grinding.
This condition is referred to as "polished-out" or "grayed-out” because the “gray”
frosted look of the glass is now completely gone.
The removal rate for polishing is typically on the order of one micron per
hour for most conventional processes. Therefore, polishing can be a very slow
and painstaking process. The polishing rate is dependent on a number of factors
such as the chemical durability of the glass, pH of the slurry, size of the abrasive,
visco-elastic properties of the lap, applied pressure, shear strength of the hydrated
layer, and relative velocity at which the glass is traversed over the lap.
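Even a back-of-the-envelope estimate shows why polishing is slow: at the nominal rate quoted above, removing a few microns of ground surface takes hours. A minimal sketch with an assumed depth to remove:

```python
# Rough polishing-time estimate from the nominal removal rate quoted above.
depth_to_remove_um = 5.0      # e.g., matte/SSD layer to polish away (assumed)
removal_rate_um_per_hr = 1.0  # typical conventional rate (from the text)

hours = depth_to_remove_um / removal_rate_um_per_hr
print(f"Estimated polishing time: {hours:.1f} hours")
```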
A number of additional observations can be made at this point. One of these is that since the slurry is becoming increasingly alkaline (because of the addition of the alkali modifier ions leached out of the glass), the pH of the slurry increases as more and more glass is polished away (and as the slurry ages).


As the pH goes up, it will be less likely to attract more ions out of the glass and
the polishing rate can decrease. As a result, opticians must constantly monitor
and adjust the slurry pH to stabilize their polishing process.

Figure 14.22 - The material removal mechanism of polishing:

1. The surfaces of the glass and the soft polishing tools are wetted with a slurry and placed in close contact.

2. As compressive load is applied to the tools and glass, the abrasive particles in the slurry are pressed into the polishing tool, creating what we call a "charged" lap.

3. Water-based slurry dissociates into H⁺ and OH⁻ ions and leaches out modifier ions in the upper surface of the glass to a few atomic layers deep. These ions mix into the slurry, leaving behind a weakened and porous layer, often called the "hydrated layer," on the surface of the glass.

4. Because the resulting porous silica or borate hydrated layer has been weakened substantially, it now becomes more ductile. When the glass and tools are stroked across each other at some velocity (v), the abrasive embedded in the pitch tool plows away the hydrated layer by fine ductile scratching, and the resulting surface is polished.

Another observation is that, as in grinding, the velocity, applied pressure and properties of the abrasive are all important. However, if these
mechanical inputs are not applied in a balanced proportion to the chemical action
creating the hydrated layer, the abrasives will begin scratching the glass beneath


the hydrated layer. What results are large brittle fracture-like surface scratches
that will reduce the cosmetic quality of the optic and also add SSD. The same
thing can occur if the pitch is too hard to allow the abrasive to adequately embed
in the pitch, or if the glass is too soft relative to the pitch or abrasive.
Another important consideration is matching the hardness of the glass with its
chemical sensitivity to the slurry being used. In some cases it may be very
desirable to drive the pH very acidic or basic depending on the glass chemical
properties. Therefore, in order to obtain the best results in polishing, one can
optimize the combination of abrasive type, slurry pH, pitch type, pressure and
velocity for a given glass type and specification.
A number of machines have been designed to execute and control this
process. The simplest tool used today for optical polishing is the overarm spindle
(Figure 14.23). The principle of design is to place the optic against the polishing
tool so that slurry may be flowed between the pitch and the glass, pressure can be
applied and the rotation and motion of the tool versus the optic can be controlled
to produce a desired level of smoothing or surface form accuracy.

Figure 14.23 - Overarm spindle.

Figure 14.24 - Spherical pitch polishing laps, showing channels cut into the lap and the "gray" appearance from abrasive embedded in the lap.

Another common type of polishing equipment is the planetary style polisher, which is used to produce flat surfaces. In this case, the tool is considerably larger than the optic (Figure 14.25). In fact, planetary style polishers may be as large as a room, with a rotating base three meters in diameter or larger. The optics are placed in rotating rings on the pitch lap. The rotations of the lap and rings are synchronized to produce a uniform wear ratio on the glass that will ultimately smooth the surfaces to a very flat surface form.


Both of these machine types are capable of producing surface accuracies of 0.010 or even 0.005 microns PV (peak to valley). However, this is not possible on all types of optics. Recall the thermal and mechanical deformation sensitivities of the glass optical materials we discussed earlier. These sensitivities (and others) often limit the surface form the machines are capable of producing. It therefore depends upon the skill of the optician to predict and measure these sensitivities and counteract them with corrective techniques. It is no wonder that skilled opticians are in very high demand!

Figure 14.25 - Planetary polisher, showing the cutter bar, lap, conditioner and three ring stations (with their range of motion).

Because it is often not possible to measure the deformation of an optic


during polishing, and the thermal and mechanical distortion as well as the
accuracy and instability of the lap itself all change interdependently, the practice
of making optics on these traditional machines is not very predictable. As a
result, opticians try to keep these variables as constant as possible and frequently
remove the optic to measure its condition. After observing its surface quality and
form, the machine controls are adjusted and hopefully the optic progresses
toward the final form specification. This iterative process is often very time
consuming and requires years of training to master. Thus, the most successful
manufacturers of optics tend to be those who are able to recognize the cause and


effect relationship between each of the process variables and then maintain a
consistent process. The difficulty associated with this task has led many to
develop more deterministic processes and machines. Examples of this are CNC
optics grinding and polishing machines, and machines that use computer
algorithms to control precise correction of form errors.

14.5 ADVANCED PROCESSES


Computer Controlled Polishing (CCP) processes range from computer
assisted versions of conventional polishing machines to fully automated and
uniquely functional machines. In both spindle and planetary polishing, the optic
is almost always in full contact with the polishing lap. As the optic and tool are
pressed together and rotated relative to each other, the full surface of the optic is
polished until it conforms to the shape of the lap and specified form accuracy.
CCP most often uses a sub-aperture tool ranging in size from about a millimeter
to tens of millimeters. The small tool is designed to selectively polish only
the zones of the surface that rise above the best fit surface.
In most applications of CCP, a digital map of the optic’s surface is fed
into the computer as input. A program called a “dwell map” is then created which
dictates how long the polishing tool will remain over the peaks in the surface.
The dwell time is calculated based on a removal function which describes how
much material is polished away by a certain polishing tool under a set of
controlled machine parameters. Until recently, CCP machines were all custom made. Today, however, commercial machines are readily available (Figure 14.26).
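The dwell-map idea can be sketched numerically. In this toy version, dwell time at each surface point is simply the local height error divided by an assumed peak removal rate; real CCP software performs a proper deconvolution of the tool's removal function over the surface, so treat this only as an illustration of the principle.

```python
# Toy dwell-map calculation for computer controlled polishing (CCP).
# Assumption: dwell time = local error / peak removal rate. Real systems
# deconvolve the tool's full removal function; values here are illustrative.
error_map_nm = [0.0, 12.0, 30.0, 18.0, 5.0]  # height above best-fit surface (nm)
removal_rate_nm_per_s = 2.0                  # peak removal rate of tool (assumed)

dwell_map_s = [e / removal_rate_nm_per_s for e in error_map_nm]
print("dwell times (s):", dwell_map_s)  # tool lingers longest over high zones
```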

Figure 14.26 - Computer controlled polishing machines. Top left: Q22-Y MRF system (Courtesy QED Technologies, www.qedmrf.com). Top right: AII 7-axis robotic polisher. Bottom: Close up of sub-aperture tool. (Both courtesy Satisloh, www.satisloh.com)


Some interesting variations of CCP today include magnetorheological


finishing (MRF) and ion-beam figuring (IBF). Neither of these processes uses a
pitch or pad polishing tool. In MRF, an abrasive is suspended in a magnetically
sensitive fluid. As the fluid is directed onto a wheel, a magnetic field “hardens”
the fluid and forms a small polishing tool approximately 1-2 mm in size. The
embedded abrasive in the fluid then ductilely shears away glass from the optic
under very light load. In IBF, a stream of energized ions is directed toward the
optic’s surface. The ions collide with the glass and abrade away the glass surface
on an atomic scale. This also is done in the ductile regime so that deep fracturing
does not occur. After optics are polished they are then typically cleaned in
preparation for thin film coating.

14.6 DEPOSITION OF THIN FILM COATINGS


Thin film coatings are deposited on optics by several different methods.
Five of these processes along with their most common applications are shown in
Table 14.3.

Process                            Type            Application
--------------------------------------------------------------------------------
Physical Vapor Deposition          resistive       AR, HR and laser mirrors
                                   e-beam          ITO (indium tin oxide)
                                   reactive        decorative
                                   ion-assisted    hot/cold mirrors
                                   ion-plating     dichroics
                                                   ophthalmics
Sputtering                         DC/pulsed DC    architectural
                                   AC              display
                                   RF              microelectronics
                                   reactive        packaging
                                   ion-beam        precision filters
                                   magnetron
Chemical Vapor Deposition (CVD)                    lighting
Physical Liquid Deposition (PLD)   SOLGEL          high energy laser coatings
                                   sprayed         CRT contrast enhancement

Table 14.3 - Thin Film Coating Processes and Applications.

For optical applications, evaporated coatings made by physical vapor


deposition (PVD) are most prevalent. In this process, scrupulously cleaned
optics are placed in a rotating rack in a vacuum chamber (Figure 14.27). The
chamber is also loaded with the chemicals required by the thin film design. The
chamber is then sealed and evacuated to a pressure on the order of 1 × 10⁻⁶ Torr. The chamber is often heated to between 200 and 300 degrees C when
dielectric materials are being applied.

Figure 14.27 - Evaporative coating chamber. (Image courtesy Edmund Industrial Optics, www.edmundoptics.com)

REFERENCES
1. Smith, W. Principles of Materials Science and Engineering, New York: McGraw-Hill, 1986.
2. Izumitani, T.S. Optical Glass, New York: AIP Translation Series, 1986.
3. Schott Glass Catalog, www.us.schott.com/
4. Musikant, S. Optical Materials, New York: Marcel Dekker, Inc., 1985.
5. Brown, N. Precision Optical Fabrication, SPIE short course.
6. Malacara, D. Optical Shop Testing, New York: John Wiley & Sons.
7. Karow, H.H. Fabrication Methods for Precision Optics, New York: John Wiley and Sons, 1993.
8. Jacobs, S. Class notes, University of Rochester.

WEB SITES
1. Zygo Corporation
www.zygo.com/
2. Ohara Corp, Optical Glass
www.oharacorp.com/swf/og.html
3. Loh Optikmaschinen AG, CNC controlled Spheronorm machines: AII 7-axis
Robotic Polisher, PIIA Aspheric Polisher
www.loh-optic.com
4. QED Technologies, QED Q22-Y MRF System
www.qedmrf.com

You just met the love of your life, but there's a problem.
Your former love's name is tattooed in large script on a
flowery heart on your forearm. You decide to have the
tattoo removed and your dermatologist recommends
laser surgery. After a few treatments and several
hundred dollars, only the faintest trace of your former
ties remain. Laser tattoo removal is only one of the
hundreds of applications of optics in biology and
medicine. In this chapter we will describe some of the
research and therapeutic techniques of biophotonics as
well as a few of the commonly used instruments
enabled by optics.

DNA Microarray (Boston University Photonics Center)

Chapter 15
BIOPHOTONICS
15.1 WHAT IS BIOPHOTONICS?
Photonics, nanotechnology and photobiology have begun to
revolutionize the way we think about bioscience, bioengineering and health care.
This revolution has provided photonic, therapeutic, and diagnostic tools for
medicine and biology. Photonic technologies have already greatly improved
health care delivery and telemedicine through the use of fiber optics, optical
switches and high-speed optical networks.
The application of photonics—the science and technology of light—to
bioscience is what we call biophotonics. Biophotonic devices use the properties
of photons to generate unique interactions with living tissue. Sometimes the
interaction is enhanced with selected dyes and chemicals that increase contrast or
fluorescence. Biophotonic techniques and instruments are useful in biological
research and diagnostic and therapeutic medicine. According to Photonics
Research Ontario (PRO):
…biophotonics is an emerging area of scientific research that
uses light and other forms of radiant energy to understand the inner
workings of cells and tissues in living organisms. The approach
allows researchers to see, measure, analyze and manipulate living


tissues in ways that have not been possible before. Biophotonics is


used in biology to study the structure and function of proteins, DNA
and other important molecules. In medicine, biophotonics allows the
more detailed study of tissue and blood, both at the macro and micro
level, for the purpose of diagnosing and treating diseases from cancer
to stroke in less invasive ways. [1]
The use of lasers and other light sources to destroy diseased tissue
without the need for major surgery can significantly reduce healthcare costs,
surgical complications and recovery time for patients. Using light to monitor
tissue function or structure is an important new area of research and application.
Novel fiber optic and fluorescence endoscope instrumentation has improved
surgical guidance, non-invasive monitoring of blood chemistry and early cancer
detection and treatment. In this chapter we will discuss the interaction of light
and biomatter and present some of the research, diagnostic and therapeutic
applications that fall under the umbrella of biophotonics. The field is vast and
growing rapidly, spurred in part by the need for rapid identification of biological
security threats as well as the medical demands of an aging population. We can
only begin the exploration of biophotonics here; for more information, see the
references and web sites at the end of this chapter.

15.2 BIOLOGY
In order to understand biophotonics, it is important to have some
knowledge of the fundamental life science, biology, as well as optical science and
technology. Biophotonic scientists and engineers need to study both disciplines
in depth. If you would like to review the basics of biology, there are many
resources available and several of these are listed in the references at the end of
this chapter.
Biophotonics is used extensively in biological research. For example,
laser microscopes are used to measure single cells and tissues at unprecedented
resolution. The development of tunable, ultra-fast pulsed laser sources has helped
scientists visualize molecular dynamics and structure. Optical coherence
tomography (OCT) is used to image tissue and organs using visible and infrared
lasers and light sources.

Photobiology
Photobiology is the branch of biological science that studies the effects
of light on living organisms. The wide variety of interactions between light and
biological organisms include:
• Photosynthesis: Light is the key to photosynthesis—the process
through which plants nourish themselves. Red and blue photons are absorbed by


chlorophyll in the cells of green plants (as well as certain algae) providing the
energy source for the process that combines carbon dioxide and water to produce
sugar and oxygen. Light is also vital to human growth and well-being; for
example, ultraviolet light is required for the production of vitamin D, necessary
for calcium absorption by bones.
• Photoluminescence: Light from the blue end of the spectrum can
produce photoluminescence in living tissue and biological specimens. As you
learned in Chapter 2, excited atoms may give off photons very quickly
(fluorescence) or photons may be emitted over a longer period of time
(phosphorescence). Photoluminescent emissions can be used to identify specimen
types and abnormalities. In some cases, fluorescence is used to study materials
that are themselves fluorescent, but in many cases a fluorescent dye called a
fluorochrome is used to stain the structures of interest so that they become visible
when illuminated.
• Light and the eye: As you learned in Chapter 8, light interacts with cells
in the retina of the eye, producing electric charges in the light sensitive pigments
of the rods and cones. These charges are transmitted to the optic nerve and then
to the visual cortex of the brain, enabling us to see. In some cases, light can be
dangerous to the eye. In Chapter 1 we discussed how lasers and other high
brightness light sources can injure the eye through thermal or chemical means.
Photokeratitis (a corneal burn), photochemical cataracts and photochemical and
thermal injury to the retina are some injuries that may be caused by light
exposure.
• Photomedicine: The use of light to improve health has opened new
vistas to minimally invasive diagnosis, therapy and surgery. Phototherapy is the
use of light to treat a variety of conditions including cancer, leukemia, acne,
macular degeneration, jaundice in newborn infants and many types of skin
problems. As we will explain later in the chapter, the effects of light on living
tissue can be enhanced by the use of photosensitive drugs.
• Chemiluminescence: Fireflies and luminous bacteria give off light as a
result of chemical reactions due to secretions from certain organs. These telltale
emissions are used to diagnose the presence or absence of certain bacteria,
reactions and chemicals. The production of light in the cells of living organisms
is also called bioluminescence.
• Photopsychological effects: The intensity and hue of light at different
times of the day has profound effects on animals and humans. For example, light
influences the production of certain hormones and is thought to affect some
aspects of behavior. Seasonal affective disorder (SAD) is a form of depression
not uncommon in northern climates during the winter months when daylight


hours are limited and cold temperatures discourage exposing the skin to sunlight.
A common treatment for this disorder is exposure to bright, broad spectrum
artificial light for several hours a day.
• Photoinduced tissue pathologies: Light can have harmful effects on
organisms, including humans. For example, too much exposure to sunlight can
cause skin cancer as well as premature aging of the skin and the eye.
• Photochemical cellular effects: On the cellular level, ultraviolet rays
can cause harmful changes, which may result in genetic mutations as well as alter
crucial chemicals in the cell.

15.3 FUNDAMENTALS OF LIGHT-BIOMATTER INTERACTIONS

Energy Balance in Light-Tissue Interactions


What happens to light incident on biological tissue? The incident energy
may be reflected by the tissue surface (Er), absorbed by the tissue (Ea), lost to
the surroundings as tissue is vaporized (Ev), or conducted through the tissue (Ec).
In some cases of light-tissue interaction, light of a different wavelength may be
produced by fluorescence (Ef). The energy balance for light-tissue interactions
may be described by equation 15.1, which is simply a statement of energy
conservation. Note that fluorescence has the opposite sign from the other energy
terms because light is being produced in this process.

(15.1)    Ei = Er + Ea + Ec + Ev − Ef

Depending on the exact nature of the light-tissue interaction, some or all of the
terms on the right hand side are present. The energy budget is represented
schematically in Figure 15.1 for a laser incident on tissue.

Figure 15.1 - Energy budget for light-tissue interactions: an incident laser beam (Ei) with reflected (Er), absorbed (Ea), conducted (Ec), vaporization (Ev) and fluorescence (Ef) components.
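Equation 15.1 is simple bookkeeping, so any one term can be found from the others. The sketch below solves for the absorbed energy using made-up values:

```python
# Energy balance from Equation 15.1: Ei = Er + Ea + Ec + Ev - Ef,
# so the absorbed energy is Ea = Ei - Er - Ec - Ev + Ef.
Ei, Er, Ec, Ev, Ef = 10.0, 1.5, 2.0, 3.0, 0.5  # joules (illustrative values)

Ea = Ei - Er - Ec - Ev + Ef
print(f"Absorbed energy Ea = {Ea:.1f} J")  # 4.0 J for these values
```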

The amount of energy delivered to a specific tissue site depends on


whether the laser is off-contact (the light is focused onto the tissue from a
distance) or contact (light delivered via a transmitting tip or optical fiber) as
illustrated in Figure 15.2. Off-contact methods are preferable where there is no


direct access to the area of interest. For example, retinal surgery requires that
light be transmitted through the cornea and lens, using external focusing optics.
Contact methods are preferred in cases where access can be accomplished by a
fiber or other direct means, for procedures such as periodontal surgery or
arteriosclerosis angioplasty.

Figure 15.2 - Off-contact vs. contact surgery or therapy. Contact methods deliver more laser power than off-contact; for a 10 W laser, roughly 5 W reaches the tissue off-contact versus about 9 W in contact. Laser light is delivered by an optical fiber or transmitting tip (vaporizing, cutting, coagulating).

Light Absorption by Biological Tissues


At the heart of the interaction of light and biomatter is the way each
biological material behaves when light impinges on it. For biological systems,
the absorption coefficient (α) is the most important of the properties that
describe the interaction between light and tissue. As you know, the absorption
coefficient is also of central importance when light interacts with non-biological
matter as well, for example in imaging (Chapter 12) and optical component
design (Chapter 14). The absorption coefficients of various components of blood
and skin and an average for composite tissue are shown as a function of
wavelength in Figure 15.3.

Figure 15.3 - Spectral absorption of human tissue components (avg. for male).

As you can see from the spectral absorption curves, there are distinct
absorption peaks for each type of biological material. This makes it possible to
identify and target specific pigmented organisms, or chromogens, at the cellular


and sub-cellular levels. Once identified and targeted, interventions ranging from
diagnosis to tissue destruction are possible. For example, yellow and green
wavelengths are absorbed by components of blood, which makes these
wavelengths useful for treating red birthmarks or spider veins. Variations in skin
color, localized abnormalities and pathologies can also be detected and analyzed
by their spectral signatures, that is, by the specific wavelengths of light they
absorb. The relationship between incident light irradiance Eo and the transmitted
irradiance E is the Beer-Lambert law, introduced in Chapter 12 (Equation 12.1).
Recall that the fraction of light that is transmitted through a material depends
exponentially on the thickness of the material (x) and the wavelength-dependent
absorption coefficient.

(15.2)    E = Eo e^(−αx)

The thickness of tissue that is numerically equal to the reciprocal of the


absorption coefficient is called the penetration depth. If x = 1/α is substituted
into Equation 15.2, we find that

E = 0.37 Eo

That is, the penetration depth is the depth of tissue at which the irradiance has fallen to 37% of its incident value, meaning that 63% of the incoming wave energy has been absorbed. The higher the absorption coefficient, the shallower
the penetration depth. Said another way, a high absorption coefficient leads to
more energy absorption in a smaller volume of tissue, which results in increased
localized heating. The penetration depth has implications for laser surgery, for
example, where light energy must be targeted in a specific area but heating of
nearby tissue must be minimized.
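A short calculation makes the penetration-depth idea concrete. The absorption coefficient below is an assumed illustrative value, not a measured property of any particular tissue:

```python
import math

# Beer-Lambert attenuation (Equation 15.2) and penetration depth (1/alpha).
alpha_per_cm = 10.0  # absorption coefficient in 1/cm (assumed)
x_cm = 0.05          # tissue depth in cm

fraction_transmitted = math.exp(-alpha_per_cm * x_cm)
penetration_depth_cm = 1.0 / alpha_per_cm
print(f"Transmitted fraction at {x_cm} cm: {fraction_transmitted:.2f}")
print(f"Penetration depth: {penetration_depth_cm:.2f} cm "
      f"(E/Eo = {math.exp(-1):.2f} there)")
```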

Biophotonic Effects
Light-tissue interactions may be categorized by the effect that light has
on the tissue. All of these interactions are influenced by optical power density
(irradiance), the wavelength of light used, material absorption and exposure time.
The relationship between power density and exposure time for several types of
interactions may be illustrated graphically, as shown in Figure 15.4. While we
present only a summary of a few of the interactions here, much current
information is available from physician and biophotonic device manufacturers'
web sites.
Photomechanical processes occur when a very short, energetic pulse of
light causes tissue to expand with a miniature explosion. A mechanical (acoustic)
wave expands from the target site and disrupts the surrounding tissue. This
technique is used for some tattoo removal where the wavelength is chosen for


best absorption by the tattoo dye. Dye is vaporized with minimal effect on
surrounding tissue.

Figure 15.4 - Light-tissue interaction variation with exposure time, power density and process. Power density (W/cm²) ranges from about 10⁻³ to 10¹³ and exposure time from 10⁻¹² s to 10³ s; from short, intense exposures to long, weak ones, the regimes are photomechanical, photoablative, photothermal, photochemical and photofluorescence/PDT.

If the pulse duration is somewhat longer and of lower irradiance,


photothermal interaction, or heating, occurs. Tissue can be vaporized, or ablated,
in a photoablative process. Careful control of the pulse duration allows control of
the penetration depth of the beam and precise heating of the target tissue. This
effect is used in laser hair removal and skin resurfacing. Photothermal
interactions may also be used to promote coagulation, leading to "bloodless
surgery."
As pulse duration increases and irradiance decreases, photochemical
changes may be induced. For example, ultraviolet lasers may be used to break
chemical bonds in tissues. Photodynamic therapy (PDT) also involves chemical
reactions in tissue, however, a photosensitive drug is first administered which
absorbs the incident light. In this way, a tumor that takes up the drug is targeted,
while the surrounding tissue is unaffected. We will discuss PDT in more detail
later in this chapter.
Finally, photofluorescence is a research and diagnostic tool that uses a
light sensitive drug, for example, to reveal the presence of a tumor. The tumor
selectively takes up the drug and the area is then irradiated with ultraviolet light.
Characteristic fluorescent emissions reveal concentrations of the drug in a
specific area.

15.4 LASER SURGERY/TISSUE MODIFICATION


Laser surgery and tissue modification are part of a broader field called
tissue engineering that directs light (usually a laser) onto a tissue sample. Table


15.1 shows some of the areas of application for laser surgery and tissue
modification. Note that many of the terms are similar to those used in industrial
laser applications!

Generation/Inhibition           Contour & Reconstruction      Welding & Cutting           Ablation & Excision
--------------------------------------------------------------------------------------------------------------
Stimulation                     Plastic Surgery               Tissue Fusion               Angioplasty
Activation of Tissue            Pigment/Tattoo Removal        Tear Repair, Wound          Tumor Removal
                                                              Surface Closure
Inhibition of Growth            Wrinkle Removal               Adhesion Separation         Scar Removal
Necrotizing Tissue              Hair Removal                  Retinal Repair              Periodontal Excision
Photocoagulation                Corneal and Lens Reshaping    Capsulotomy                 Wart and Mole Removal
Dye Enhanced Tumor Inhibition   Skin Resurfacing,             Dye/Solder Enhanced         Laser Removal of Gallstones
                                Cosmetic Surgery              Tissue Welding              and Kidney Stones
Glandular Stimulation           Coagulative Otolaryngology    Neo-Natal, Pre-Natal        Revascularization
                                                              Abnormality Correction

Table 15.1 - Some laser surgery and tissue modification application areas.

In order to produce the desired tissue modification without destroying the


tissue in the process, many variables need to be carefully controlled, including
the total energy available for delivery to the tissue, how the light is delivered (for
example, by optical fiber or off-contact, through the air), focused spot size and
wavelength. To prevent thermal damage, ultra-short duration pulses may be used
along with thermal feedback. Dyes with higher absorption than the tissue being
irradiated may be used to increase local laser-tissue interaction. An experimental
procedure now undergoing animal studies uses nanoparticles coated with a thin
(10 nm) layer of gold. The tiny particles, around 120-130 nm in diameter, are
treated so that they flow easily through the bloodstream but collect in the blood
vessels of a tumor. The tumor is then irradiated with near infrared light, which
passes harmlessly through surrounding tissue and is absorbed by the gold
spheres, heating the tumor and destroying it. [6]
Although certain variables may be controlled, it should not be a surprise
that some parameters differ greatly among individuals and are beyond the control
of the engineer or surgeon. Power, energy, pulse duration and wavelength are all
controllable as is the decision to use a contact or off-contact beam application.
Non-controllable tissue parameters include the absorption coefficient, light


scattering, thermal conductivity, local vascular circulation, tissue density and


pigmentation. Furthermore, depending on power, delivery and tissue type,
different lasers can be used to have the same effect. Conversely, the same laser,
depending upon the distance from the fiber tip to the tissue, can produce
coagulation, cutting and coagulation or just cutting. Specially designed
instruments, such as optically designed fiber tips, may also be used to shape or
focus the light to produce a certain effect.

15.5 DENTAL APPLICATIONS


Figure 15.5 - Tooth cross section showing enamel, dentin and gingiva (gums). (American Dental Association, www.ada.org. Reprinted with permission.)

Lasers are becoming common tools in many dentists' offices. Because of the predictable optical absorption spectra of diseased tissue, many new procedures have emerged over the last ten years (Figure 15.5). Among the most widely used are the stimulation and/or excision of diseased soft gum tissue, removal of small amounts of gum tissue for cosmetic reasons, and use as a light source for tooth whitening or curing of dental composites. The removal
of decayed dentin (caries) by laser is also becoming
more common with the approval of new lasers that effectively remove decayed
matter without harming healthy tissue. The Er, Cr:YSGG (erbium chromium:
yttrium scandium gallium garnet) laser at a wavelength around 2.8 microns is
seeing increased use (Figure 15.6), and diode, Nd:YAG and Er:YAG lasers are used
frequently as well.

Figure 15.6 - Dental laser. The pulsed Er,Cr:YSGG laser is combined with a precision water jet to perform oral surgery and dental procedures. The tooth is illuminated by LEDs surrounding the laser and water jet. (Photo courtesy BIOLASE Technology, Inc., www.biolase.com/)

Lasers give dentists more control than the older mechanical tools and
many patients claim laser procedures are less painful. A 2005 survey of dentists
using lasers reports that many laser procedures may be performed without
anesthesia. [10] The speed of laser procedures also allows more patients to be
seen in a day. This helps compensate for the relatively high cost of dental laser
systems.


In 2005, only about 5% of dentists were using lasers in their practices,


with most U.S. dentists relying on more traditional methods. This number is
expected to increase as lasers become more affordable and dentists become more
comfortable with laser procedures.

15.6 OPHTHALMIC APPLICATIONS


Lasers have been used in ophthalmic (eye) surgery since the early 1970s,
when ruby lasers were first used to repair detached retinas. In the case of eye
surgery, the focusing ability of the eye must be considered part of the "laser
delivery system." The original laser systems used the same optics normally used
by surgeons for eye examinations, making it easier for doctors to learn to use the
new laser-based equipment. Since red light is partially absorbed by the cornea
and lens, it was found to sometimes cause damage during surgery so doctors
replaced ruby lasers with green lasers, such as frequency doubled Nd:YAG or
Argon.
By the end of the twentieth century, many laser types were available
and approved for specialized eye procedures including retinal attachment,
treatment of glaucoma, retinal blood vessel coagulation, treatment of macular
degeneration, corneal sculpting, keratectomy (removal of part of the cornea) and keratotomy (making incisions in the cornea). Figure 15.7 shows a cross section of the eye schematically depicting a laser retinal procedure.

Figure 15.7 - Schematic representation of laser retinal surgery. The eye and external optics together focus light on the structures of interest. (Courtesy Rochester Eye Center, Rochester, NY, www.rochestereyecenter.com)

The idea of surgical vision correction is not new. In the early 1970s Svyatoslav Fyodorov, a Russian ophthalmologist, developed a procedure called
eyeglasses by surgically changing the focal length of the eye's optical system.
Recall from Chapter 8 that myopia (nearsightedness) occurs when the image formed by refraction focuses in front of the retina because the cornea is too curved or the eyeball too long (or both). In RK, the surgeon created incisions in the
cornea to flatten the front of the eye, increasing the eye's focal length and
allowing light to focus onto the retina. RK was only appropriate for patients with
mild myopia or astigmatism and in some cases patients complained of glare
caused by the corneal scars remaining after the procedure. Subsequent surgeries
were sometimes needed to correct vision defects caused by the surgery.
Photorefractive Keratectomy (PRK), approved by the FDA in 1996, uses
an ultraviolet excimer laser to resurface the cornea. Because of the nerves
running through the cornea surface, PRK turned out to be a fairly painful
procedure. Around the same time, LASIK (Laser in-situ keratomileusis) surgery
was developed. In the LASIK procedure, a flap is created in the surface of the
cornea and the underlying layers are reshaped with an excimer laser. The flap is
returned to its original position, where it serves as a sort of protective "bandage" while the cornea heals. Because there are no pain receptors in the under-layers of
the cornea, this procedure is painless and most patients have a very quick
recovery. Figure 15.8 illustrates the rapid healing of the cornea after surgery.

15.7 DERMATOLOGICAL APPLICATIONS


Approximately five percent of people are born with one or more types of skin blemishes commonly referred to as birthmarks, and many older people develop pigmented lesions due to sun exposure. Pulsed dye lasers emitting yellow or green light (which is readily absorbed by the hemoglobin in blood cells) are used to treat port wine stains in children. Vascular lesions in adults, such as spider veins and cherry hemangiomas, are frequently treated with frequency doubled Nd:YAG lasers (532 nm).

Figure 15.8 - Modified PRK surgery at the time of surgery (top) and one week post-operative. (Courtesy American Journal of Ophthalmology, http://authors.elsevier.com)

Lasers can also be used to remove unwanted hair or tattoos (Figure 15.9). Hair removal is usually simplified by the fact that hair is pigmented, localized and can be “burned” away with minimal skin damage. Tattoos are more difficult
to remove because they are located beneath the skin surface; however, lasers make it possible to remove tattoos with only minimal scarring. Because the heating effect of the laser's energy cauterizes, or seals, small blood vessels, there is less pain associated with laser surgery than with traditional tattoo removal. The
choice of laser wavelength depends on the color of ink used to create the tattoo.
Unfortunately, the exact pigment content of tattoo dyes is not regulated and
similar colors may have different absorption characteristics. Often, many
treatments are necessary to completely remove a tattoo.

Figure 15.9 - Laser cosmetic birthmark (a) and hair (b) removal. (Center For Laser Surgery, Washington, DC, www.lasersurgery.com)

Two common cosmetic skin conditions—acne and aging, wrinkled skin—can also be successfully treated with laser surgery. Pulsed CO2 lasers may
be used to ablate, or vaporize, the surface of the skin, resulting in the tightening
of underlying tissues creating a smoother look. A newer procedure, non-ablative
laser resurfacing, uses visible or near infrared light to stimulate sub-surface
collagen tightening and reshaping, resulting in a smoother skin surface. Since no
skin is removed in this procedure, healing is more rapid than in the ablative
resurfacing.

15.8 PHOTODYNAMIC THERAPY


Photodynamic therapy (PDT) uses a light source, often a laser, to
activate photosensitive drugs and initiate chemical reactions that harm cancer
tissue, but do not affect normal tissue. The sensitizer drugs are either injected
into the blood stream or applied topically. Tumors, but not healthy tissue, absorb
some of the sensitizer drug. When laser light of the correct wavelength impinges
on the tumor, the photosensitive drug is activated and creates molecules toxic to
the cancer cells, reducing the size of the tumor. Since the light used to activate the photosensitizer can only penetrate about ten millimeters into the tissue, large tumors may require repeated light exposure to fully destroy the cancerous tissue. A simplified version of the steps in photodynamic therapy is illustrated in Figure 15.10.
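A rough calculation shows why depth matters. Using the Beer-Lambert law introduced earlier in this chapter (the attenuation coefficient below is an assumed value chosen only for illustration), the activating light falls off with depth x as

$$I(x) = I_0\,e^{-\alpha x}$$

For an effective attenuation coefficient of, say, $\alpha = 0.23\ \text{mm}^{-1}$, the irradiance at a depth of 10 mm is $I = I_0 e^{-2.3} \approx 0.10\,I_0$, only a tenth of the surface value, so tissue much deeper than about a centimeter receives too little light to activate the drug in a single exposure.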

Figure 15.10 - Schematic of PDT with photosensitizer. The photosensitizer is injected systemically or locally into the patient; after 18-80 hours, the sensitized tumor is exposed to light from a laser or light source through a fiber or delivery conduit, causing oxygen excitation leading to tumor destruction; within days the cancer cells are destroyed, leaving healthy cells intact.

PDT has several proven advantages, and some researchers have suggested it may be a "magic bullet" for cancer treatment because it has been shown to be effective on many types of cancer. There are no cumulative toxic effects of PDT
as with radiation or chemotherapy, so the procedure can be repeated several
times if needed. PDT is usually an outpatient procedure and patients who are
elderly or too ill for surgery can receive PDT because of its lower risk.
Photosensitizers locate very selectively within the cytoplasm of the cell and DNA
is almost never damaged. There are few drug interactions associated with the
drugs administered for PDT.
In addition, PDT has a high success rate, is safe to use and is a relatively
low cost therapy. It gives hope for the future in effectively treating early-stage, localized cancers with minimally invasive procedures. New research in
nanotechnology is aimed at improving the success rate for PDT by encapsulating
the photoactive drug in tiny nanospheres. This allows the drug to remain in the
body for a longer period of time, so more of the drug can accumulate in the
tumor.
A variation of PDT is called fluorescence photodiagnosis. In this case,
the dye, selectively absorbed by a tumor, fluoresces when it is irradiated by the
light source, showing the location of the tumor. PDT is also used for non-
cancerous conditions such as psoriasis and macular degeneration.

15.9 OPTICAL IMAGING TECHNIQUES


Several medical imaging technologies were discussed in Chapter 12
including ultrasound, magnetic resonance imaging (MRI), x-ray and
computerized tomography (CT). Here we will discuss two forms of advanced
optical imaging: optical coherence tomography (OCT) and laser scanning confocal microscopy (LSCM).

Optical Coherence Tomography


Ultrasound imaging has provided increasingly detailed diagnostic images
over the past several decades. Ultrasound resolution is limited, however, by the
wavelength of acoustic waves, which is orders of magnitude larger than optical
wavelengths. Ultrasound also requires physical contact, including some sort of
index of refraction matching gel between the ultrasound transducer and the
patient, and the sound waves attenuate quickly inside the body. OCT is a new
optical imaging technique that addresses the problems of ultrasound imaging and
performs high-resolution imaging of very small structures in biological systems.
OCT can achieve image resolutions of 1-15 µm, one to two orders of magnitude smaller than standard ultrasound imaging.
Like ultrasound, OCT creates cross-sectional images of tissue structure. OCT is a promising imaging technology because it can provide "optical biopsies": highly detailed images of tissue in real time without the need for biopsy and processing of specimens or the use of ionizing radiation such as x-
rays or gamma rays. OCT is delivered by optical fiber so it can image deep
inside the body. As with ultrasound, an image is formed by energy back reflected
from tissue structures so OCT is useful for imaging anatomical areas with dense
and highly stratified tissue. For example, OCT is used to diagnose and monitor
diseases of the retina of the eye as well as to detect lesions in the arteries leading
to the heart.


Ultrasound images are created by measuring the "echo" time for back-
scattered acoustic waves. Since the speed of sound is relatively slow, acoustic
detectors and electronics perform this task. The speed of light is too fast for
electronics to measure the echo delay time, so interferometric techniques must be
used instead. A schematic representation of an OCT system is shown in Figure
15.11. Light from a near infrared source of low coherence such as an arc lamp,
femtosecond laser or superluminescent LED is coupled into an optical fiber. A
2x2 fiber optic splitter directs some of the light onto the specimen and some onto
a moving reference mirror. Reflected light from the specimen and reference
mirror are directed back through the splitter onto a detector. As you can see from
Figure 15.11, the device is a fiber optic Michelson interferometer!
Since both the reference beam and back-scattered beams have their
origin in the same source, a well-defined interference pattern will result if the
path length difference is of the order of the source’s coherence length. As the
mirror moves, it forms interference patterns with light reflected from each of the
layers of the tissue structure. The interferograms from each reflecting interface
are processed by computer to determine the depth at which they originated, and from that information an image of tissue structure can be created.

Figure 15.11 - Schematic representation of a fiber OCT system. Light from a low coherence source enters a 2x2 fiber coupler, which feeds the reference arm optics (with moving mirror) and the sample arm optics (with specimen); the returned light falls on a detector connected to signal processing and display.

In Chapter 8, light of sufficiently long coherence length was said to be necessary for the operation of a Michelson interferometer. However, in the case
of OCT the shorter the coherence length, the better the axial resolution! A very
short coherence length ensures that back reflections from different depths of
tissue form separate interference patterns and may thus be distinguished.
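The scale involved can be estimated (a rough order-of-magnitude sketch; the numbers below are typical of superluminescent LEDs rather than taken from any particular instrument). The coherence length of a source with center wavelength λ and spectral width Δλ is on the order of

$$l_c \sim \frac{\lambda^2}{\Delta\lambda}$$

so for λ = 1300 nm and Δλ = 60 nm, $l_c \approx (1300\ \text{nm})^2 / (60\ \text{nm}) \approx 28\ \mu\text{m}$. Reflections from tissue layers separated by more than this produce distinct interference patterns, consistent with the micron-scale resolutions quoted above.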

Laser Scanning Confocal Microscopy


As you learned in Chapter 8, a conventional (wide field) microscope
gathers light from an illuminated object and creates an image that is viewed by
eye or photographed. Because light from both in-focus and the surrounding out-
of-focus regions are used to create the image, it will be blurred, especially at high

342
Biophotonics

magnifications. Laser Scanning Confocal Microscopy (LSCM) is a fundamentally different technique that uses spatial filtering (small apertures) to
create high-resolution images. It also allows the depth of field to be controlled,
leading to three-dimensional reconstructions of many kinds of biological
specimens.
Several concepts that have been previously discussed are relevant to the operation of an LSCM. First, the specimen to be examined must fluoresce when illuminated. One or more types of dye are used to stain the sample so that the structures of interest will fluoresce under short wavelength light. Argon lasers have traditionally been used for fluorescence microscopy because their output contains several wavelengths, each of which is capable of exciting a different type of dye to fluorescence. The exciting light (laser) and sample light (fluorescence) are separated by a dichroic mirror, that is, a thin film filter tuned to reflect only one wavelength in a beam of light while transmitting the remaining wavelengths (Figure 15.12).

Figure 15.12 - A dichroic mirror is used to separate laser light from fluorescent emissions of the sample (dashed line). Focusing optics are not shown in this simplified diagram.

Another concept important to LSCM was discussed in Chapter 5, when the thin lens equation was used to determine the relationship between object and image distances for a given lens. To extend the object-image idea a bit further, imagine light emanating from a point on an object, passing through a lens system and converging to an image point on the other side. If the rays are reversed, as we pointed out in Chapter 5, rays leaving the image point end up at the object point. We call these conjugate points, and the confocal microscope is based on the fact that light emerging from one focal point of an optical system is focused at a conjugate point on the other side.

Figure 15.13 - Simplified confocal microscope. Fluorescence from parts of the specimen outside the focal point (dotted line) is blocked by the pinhole, while the fluorescence originating at the focus (dashed line) passes through the pinhole to the detector. Reflected laser light from the specimen is not shown.

Unlike a conventional microscope, the LSCM illuminates only a tiny portion of the specimen with laser light that is focused by the objective lens (Figure 15.13). The reflected and
fluorescent light emitted by the specimen are then focused by the objective lens
and directed through a dichroic mirror, allowing only the fluorescent
wavelengths to pass to the detector. A pinhole aperture in the focal plane in front
of the photodetector eliminates the out-of-focus fluorescent light that originates
at points on the specimen outside the laser focal point. Even though a fairly thick
piece of tissue around the focal spot may emit light, only a sharp, focused part of
the fluorescence is detected.
The focused laser spot is rapidly scanned across the specimen by mirrors
and the detected irradiance from each point is recorded and processed by
computer. The resulting image, built up by collecting the data from each spot in
the objective focal plane, is displayed on a monitor in grayscale or as a false
color image. Because the depth of the focal spot can be controlled, image
"slices,” called optical sections, may be obtained at various depths in the
specimen. A computer can use these sections to generate a three-dimensional
reconstruction of the specimen.

15.10 SURVEY OF RESEARCH AND DIAGNOSTIC PHOTONIC APPLICATIONS

Flow Cytometry
Cytometry means "measurement of cells.” Flow cytometers have been
used since the 1970s by large research centers and laboratories for counting or
sorting cells based on their optical properties. In recent years, the availability of
smaller, more affordable lasers has made flow cytometers more common in
smaller research facilities and clinical diagnostic laboratories.
The flow cytometer processes thousands of cells per second, one cell at a
time. Cells are suspended in a liquid that flows in a focused stream past a laser
beam. Cells may be studied in vivo (alive) or in vitro (fixed, or not alive). Each
cell scatters some of the laser light by diffuse reflection and, excited by the laser,
the cell emits fluorescence. Once again, the fluorescence may be characteristic of
the cell chemistry or it may be due to an added dye. Fluorescence intensities are
typically measured at several different wavelengths simultaneously for each cell.
Several detectors are used to gather the scattered light: one is in line with
the laser and the others at a right angle to the beam (Figure 15.14). Typically,
several parameters are measured simultaneously for each cell, including forward
scattered light (approximately proportional to cell diameter), light scattered at
right angles to the laser beam (approximately proportional to the quantity of
granular structures in the cell) and fluorescence at several wavelengths. Each of
these properties is useful for diagnosis.
Sophisticated data analysis software extracts information from the light
energy gathered by each detector. For example, flow cytometry is commonly
used in clinical labs to distinguish among types of white blood cells in blood
samples and to perform complete blood counts. Fluorescent antibodies are often
used to measure the number of specific cell receptors and thus to distinguish
different types of cells. Fluorescent probes can also detect DNA and important
markers of cell metabolism. The ability to detect changes on the cellular and
subcellular level has made flow cytometry an important tool in the development
of potential new drug therapies.
Like laser confocal microscopy, flow cytometry requires the ability to
handle large amounts of data very quickly. Improvements in one technique have
led to better optics and data processing in the other.

Figure 15.14 - Schematic of flow cytometry. The fluid delivery system is diagrammed on the left: the stream containing the cell samples is hydraulically focused by a fast moving carrier (or sheath) fluid in the flow cell. The optical detection scheme is shown on the right: a detector in line with the laser collects forward scattered light, while dichroic mirrors direct scattered light and specific fluorescent wavelengths (λ1, λ2) to additional detectors at right angles to the beam.

Laser Induced Breakdown Spectroscopy


The idea behind laser induced breakdown spectroscopy (LIBS) is quite
simple: vaporize a small amount of material and subject the glowing gas that
results to spectral analysis. Specifically, a low energy, high power pulse from a
Q-switched Nd:YAG laser is tightly focused on a sample, creating a microplasma, that is, a tiny spark of ionized gas. As the gas quickly cools, it emits light with
wavelengths characteristic of the elements in the target substance. Collecting
optics direct the light to a broadband spectrometer (typically 200 to 900 nm) and
computer for data processing and display.
LIBS can identify almost every chemical element in the parts per billion range. Small portable units have been developed that can be used in the field as chemical and biological sensors. LIBS can also be used remotely to monitor
corrosive or hazardous environments. The technique can be used with solid,
liquid and gas samples and is capable of detecting biomaterials such as molds,
pollen and proteins. As you might imagine, LIBS has many other non-biological
chemical analysis applications also.

Optical Tweezers
A common effect in science fiction is the "tractor beam,” a light ray that
can be used to drag enemy ships through space. In fact, it is possible to use light
to move matter, but only on the microscopic scale. So-called optical tweezers are
tools that use a laser beam or multiple beams to manipulate biological cells and
other microscopic matter. The tweezers are only able to apply piconewton forces,
but these are enough to hold and move the objects of interest.
The origin of the force is the momentum carried by a beam of light.
When a ray of light passes through a tiny particle it is refracted, or bent. The
change in direction means there has been a change in momentum of the beam.
Since momentum is conserved, the particle must also undergo a change in
momentum, that is, the particle experiences a force. The laser has a non-uniform,
Gaussian profile and the radiation force traps the particle at the most intense part
of the beam: at the focus.
Optical tweezers formed by near infrared light can trap living cells for
study without harming them. Researchers have used optical tweezers to stretch
out DNA (by pulling on spheres "glued" to the two ends of the molecule) and to
hold particles in place for study by other tools, such as confocal microscopy. A
frequency doubled Nd:YAG laser (532 nm) may be used to perform "surgery" on
trapped particles. Such optical scissors have been used to optically cut trapped
chromosomes into small pieces for detailed study.

REFERENCES
1. Photonics Research Ontario, www.optic.on.ca/
2. Campbell, N. and Reece, J., Biology, 6th Ed., Benjamin Cummings, 2003.
3. Johnson, M. D., Human Biology: Concepts and Current Issues, Benjamin Cummings, 2001.
4. Fuller, T., Thermal Surgical Lasers, Monograph and private communication, 1993.
5. Trokel, S.L., Srinivasan, R. and Braren, B., "Excimer laser surgery of the cornea," American Journal of Ophthalmology, 1983; 96:710-715.
6. "Photonics in Biotech," featured review in OE Magazine, September 2004.
7. Tearney, G.J. and Bouma, B.E., Handbook of Optical Coherence Tomography, Marcel Dekker, New York, 2001.
8. Huang, D. et al., "Optical Coherence Tomography," Science, 254, 1178-1181 (1991).
9. Prasad, P.N., Introduction to Biophotonics, John Wiley and Sons, NJ (2003), p. 228.
10. "We're Liking Laser," Dental Products Report, April 2005, Advanstar Dental Communications.
11. Goodell, T.T., "Photodynamic Therapy: Story of an 'Orphan' Treatment," Oregon Medical Laser Center Publication, 2002.
12. McGloin, D. et al., "Touchless Tweezers," OE Magazine, January 2003.
13. Biophotonics International Magazine, Laurin Publishing Co., Berkshire Common, PO Box 4949, Pittsfield, MA 01202-4949.

WEB SITES
1. High school biology resource
http://scienceniche.com/science/biology.cfm
2. American Society for Photobiology
www.kumc.edu/POL/ASP_Home/
3. Center For Laser Surgery, Washington, DC (2004)
www.lasersurgery.com/
4. Shore Laser Center, tutorial on medical lasers
www.shorelaser.com/
5. LSCM notes by Kees van der Wulp
www.cs.ubc.ca/spider/ladic/intro.html
6. Web sites for microscopy
www.microscopyu.com/
http://micro.magnet.fsu.edu/
7. Confocal microscopy - a reminiscence by the inventor
www.ai.mit.edu/people/minsky/papers/confocal.microscope.txt
8. Laser Induced Breakdown Spectroscopy
www.whoi.edu/ (Woods Hole Oceanographic Institute)
9. One of many good web sites on optical trapping, at Stanford University
www.stanford.edu/group/blocklab/


REVIEW QUESTIONS AND PROBLEMS

NOTE: Most problems require Internet research!

1. Find three biophotonic companies on the Internet and briefly explain what area of
biophotonics they employ in their business.

2. Find and briefly describe three new areas of photobiology not previously mentioned
in this chapter.

3. Which of the following are controllable and which are non-controllable tissue
parameters: skin color, laser pulse repetition rate, degree of vascularization, type of
laser used, tissue density, tumor size, wavelength, chromophore selection and spot
size? Explain.

4. Why is tattoo removal more of a challenge than hair or birthmark removal? Please
give a few reasons, explaining your answer in light of what you’ve learned about
absorption, laser type, laser power and controllable vs. non-controllable parameters.

5. From what you have learned about microscopes in earlier chapters, what size should
the pinhole (confocal aperture) be to effectively filter out unwanted light and still
provide sufficient light to the photodetector? Explain.

6. Photodynamic therapy has many advantages, as stated in the chapter. What might be some drawbacks to employing this type of treatment with regard to patient, co-pathologies, resident viral strains, etc.?

7. A particular mole primarily comprises melanin. (a) If a dermatologist wishes to treat this mole with an Ar/Kr laser, at what depth will ~37% of the energy be absorbed? (b) At what depth will ~70% of the energy be absorbed?

8. A 10 W Nd:YAG laser with a fiber optic conduit and sapphire tip is used for periodontal (gum line) excision. If the fiber has a core diameter of 62.5 µm and a numerical aperture of 0.3, at what distance would the dentist have to position the end of the fiber to obtain 10^10 W/m2 at the gingival (gum) surface? Assume 3 dB loss in the conduit and estimate a reasonable pulse duty cycle.

Appendix 1: Derivations and Further Explorations

A1. The wave equation


We are considering harmonic waves, for example, the typical wave shape
that comes to mind if you imagine waves on the surface of the ocean. If we could
eliminate the time dependence of the wave (t), that is "freeze" the motion with a
snapshot, the wave would have a sinusoidal form. On the other hand, if you
watch the motion of a buoy at one location (x) as the moving waves pass
underneath, it moves up and down in simple harmonic motion, which is also
described by a sinusoidal function. A harmonic wave, therefore, must be
expressed as a function of both space (x) and time (t) and in both cases the
equation will involve a sine (or cosine).
Let us begin with the time dependence. Simple harmonic motion can be
described as the "shadow" or projection of a point that moves with constant
angular speed around a reference circle. In Figure A1, the rotating point is a yo-
yo on a string of length A. If the sun is shining from the left, the yo-yo casts a
shadow on the y axis. The shadow moves up and down in simple harmonic motion between the limits +A and -A. At any point in time, the position of the shadow is given by

$$y = A\sin\theta$$

Figure A1 - A yo-yo twirled in a circle casts a shadow (y = A sin θ) of the point moving in the circle. The shadow moves up and down between +A and -A in simple harmonic motion.

Let the constant angular velocity of the yo-yo be ω, where by definition

$$\omega = \frac{\theta}{t}$$

Solving for θ (= ωt) and inserting, we have

$$y = A\sin\omega t$$

The relationship between the angular velocity of the yo-yo and the frequency of the up-and-down motion can be seen by noting that in one period of rotation (T), the angle θ passes through 2π radians. Also, frequency (f) equals 1/T. So

$$\omega = \frac{\theta}{t} = \frac{2\pi}{T} = 2\pi f$$


Then the time dependence of the wave's up and down motion is given by

$$y = A\sin(2\pi f t) \qquad \text{(A1)}$$

Now we will consider the wave as a function of space. That is, we will change the time and frequency dependence of Equation A1 into distance (x) and wavelength (λ) dependence. To do this, we note that

$$v = \lambda f \quad \text{(from Chapter 2)} \qquad \text{and} \qquad v = \frac{x}{t} \quad \text{(by definition)}$$

Combining, we have

$$\lambda f = \frac{x}{t}, \qquad f = \frac{x}{\lambda t}$$

Substituting this last equation for f in Equation A1 gives

$$y = A\sin\left(\frac{2\pi}{\lambda}\,x\right) \qquad \text{(A2)}$$

The final step is to make the wave "move." Imagine that we shake a string and send a pulse moving along the x axis with speed v. Since the pulse is moving in both space and time, we'll call the function that represents the pulse f(x,t). (We assume that the pulse does not change shape as it moves.) Now consider a second coordinate system that moves along with the pulse at speed v (Figure A2). In this frame of reference the pulse is at rest, so we can call it f(x'). How are f(x,t) and f(x') related? As you can see from Figure A2, the distance to a point on the moving pulse such as the one labeled x1' is given by x1 - vt. The transformation between the two coordinate systems is then

$$f(x') = f(x - vt)$$

Figure A2 - The x' coordinate system moves to the right at a speed v.

Using this transformation, Equation A2 becomes

$$y = A\sin\left[\frac{2\pi}{\lambda}(x - vt)\right]$$

Substituting v = λf and recalling that k = 2π/λ and ω = 2πf,

$$y = A\sin\left[\frac{2\pi}{\lambda}(x - \lambda f t)\right]$$
$$y = A\sin\left(\frac{2\pi}{\lambda}x - 2\pi f t\right)$$
$$y = A\sin(kx - \omega t)$$

The final equation is Equation 2.3 of Chapter 2.


A2 Snell's Law

Derivation using Huygens' Principle


Although Snell himself discovered the law that bears his name experimentally, the same result may be obtained analytically using Huygens' Principle, which states that every point on a wave front acts as the source of secondary spherical waves that have the same frequency and speed. The position of the wave at a later time may be determined by drawing the surface tangent to these secondary waves (Figure A3). Although this principle has some shortcomings (for example, we need to neglect the wave that propagates backwards) it proves useful in the description of several wave behaviors.

Figure A3 - Huygens' wavelets. Each point on the wavefront at t = 0 acts as a source; the radius of each wavelet is r = vt, and the tangent surface gives the wavefront at time t.

Snell's law may be derived by noting that the radius of the wavelets depends on the speed of the wave in the medium. As shown in Figure A4, when the wave enters a medium where it travels more slowly, the wavelets "shrink" and the wavefront bends.

Figure A4 - Huygens' principle predicts the bending of waves by refraction as the wave passes from medium 1 (higher wave speed) into medium 2 (lower wave speed).

To find Snell's law, let us redraw Figure A4 showing only the positions and directions of two of the waves (Figure A5). The directions are the rays associated with the waves. The angle of the incoming ray in medium 1 is the angle of incidence, θ1, and the angle of the ray in medium 2 is the angle of refraction, θ2; both angles are measured with respect to the normal, or perpendicular line, to the surface. Two parallel rays are shown in Figure A5, striking the surface at points a distance "D" apart.

Figure A5 - Geometry for deriving Snell's law. The time interval between the dashed lines above and below the surface between the two media is the same (t): the wave in medium 1 travels a distance v1t while the wave in medium 2 travels a distance v2t. The wave travels more slowly in the lower medium, so the dashed lines are closer together.

In the upper triangle (containing θ1), sin θ1 = v1t/D. In the lower triangle (containing θ2), sin θ2 = v2t/D. Since t/D is the same for both triangles we have

$$\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}$$

Multiplying both sides by the speed of light, c, and noting that the index of refraction is n = c/v, we have Snell's law:

$$n_1\sin\theta_1 = n_2\sin\theta_2$$

Notes on laws of reflection and refraction using Fermat's Theorem

Pierre de Fermat proposed the theorem that bears his name in the 17th century. According to Fermat, when light travels from point A to point B, it takes the path of least time, that is, it takes the shortest (fastest) path possible. For example, Figure A6 shows light reflected from a surface. Three of the infinite number of possible light paths are shown. You can actually perform this experiment with pen and paper (see Problem 30 in Chapter 2). To find the path of least time analytically requires calculus, but we can at least outline the procedure here.

Figure A6 - Several possible paths from point A to point B. Light takes the path of least time.

If we redraw one of the paths in Figure A6, we can give the path lengths using the Pythagorean theorem (see Figure A7):

$$\text{Path Length} = \sqrt{a^2 + x^2} + \sqrt{b^2 + (d - x)^2}$$

Figure A7 - Geometry for Fermat's Principle. Points A and B are heights a and b above the surface, separated by a horizontal distance d; the ray meets the surface a horizontal distance x from the point below A.

The time to go from point A to point B is the path length divided by the speed of light. You can graph this function and find the value of x that produces a minimum of the function. Or, using the methods of differential calculus, the result may also be found analytically.
Fermat's Principle
Snell's law can be similarly derived by finding the least time to travel
from a point in one medium to a point in the second medium. In this case, one
path length will be divided by v1 and the other will be divided by v2. This
problem is relatively simple to solve using calculus.
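The least-time idea is also easy to check numerically. The sketch below is our own illustration (the endpoint heights a and b and the separation d are arbitrary values chosen for the example): it evaluates the travel time for many candidate reflection points, picks the fastest, and confirms that the angles of incidence and reflection come out equal.

```python
import math

# Geometry (arbitrary illustrative values): point A is a height a above the
# surface, point B a height b above it, separated horizontally by d.
a, b, d = 2.0, 1.0, 4.0
c = 3.0e8  # speed of light, m/s

def travel_time(x):
    """Time for light to go from A down to the surface at x, then up to B."""
    path = math.sqrt(a**2 + x**2) + math.sqrt(b**2 + (d - x)**2)
    return path / c

# Brute-force search for the least-time reflection point.
best_x = min((i * d / 100000 for i in range(100001)), key=travel_time)

# Compare angle of incidence and angle of reflection (measured from the normal).
theta_i = math.degrees(math.atan2(best_x, a))
theta_r = math.degrees(math.atan2(d - best_x, b))
print(f"least-time x = {best_x:.4f}")
print(f"incidence = {theta_i:.2f} deg, reflection = {theta_r:.2f} deg")
```

Replacing the common speed c with two different speeds v1 and v2 for the two path segments turns the same search into a numerical check of Snell's law.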
A3 Decibels
If you have heard of decibels, most likely it was used to describe the
loudness of a sound. The young, normal human ear can hear sounds with
intensities as low as 10^-12 watts/m2 and the loudest sounds that do not cause pain
(called the threshold of pain) have intensities of around 1 watt/m2. (Sound
intensity is measured in watts/m2.) It turns out that human perception of sound is
related to the logarithm of intensity, not the intensity itself. The decibel scale is
logarithmic, that is,

$$\text{dB} = 10\log\left(\frac{P}{1\times10^{-12}}\right)$$

The actual sound intensity, in watts/m2, is divided by the intensity corresponding to the threshold of hearing and the log of the ratio is taken. The dB scale ranges from 0 dB for the softest audible sound to 120 dB at the threshold of
pain. Notice that 120 dB corresponds to an intensity range that spans twelve
powers of 10. Using a logarithmic scale "shrinks" the range of values
considerably!
In optics, particularly in lightwave communications, it is common to
measure optical powers in decibels, rather than in milliwatts or watts. In this
case, the reference level (the denominator in the dB formula) is usually 1
milliwatt, in which case we call the measurement "decibel referenced to one
milliwatt" or, dBm. Occasionally, the reference level is 1 microwatt and the
corresponding term is dBµ.
Decibels are used to express loss and gain by comparing a "start" and
"end" power level and then taking the log of the ratio. Unlike dBm, which is used
to express a specific power level, the dB is a comparison of two arbitrary levels
$$\text{dB} = 10\log\frac{P_2}{P_1}$$

Why go to the trouble of using a logarithmic scale? As with sound, sometimes the range of optical powers is so great that a decibel scale allows more detail to be observed, for example, on an optical spectrum analyzer. But there is another reason, related to the properties of the logarithm. In math class, you learned that

$$\log(AB) = \log(A) + \log(B)$$

This means that multiplication and division can be replaced by addition and subtraction, easier operations to do "in your head" without a calculator. So, for example, if a source emits 3 dBm (equivalent to 2 mW) into a system that has a 6 dB loss, the output power is

3 dBm - 6 dB = -3 dBm (or, 0.5 mW)
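The same bookkeeping is easy to script. The short sketch below is our own illustration (the function names are hypothetical, not from the text):

```python
import math

def mw_to_dbm(p_mw):
    """Convert a power in milliwatts to dBm (decibels referenced to 1 mW)."""
    return 10 * math.log10(p_mw / 1.0)

def dbm_to_mw(p_dbm):
    """Convert a power in dBm back to milliwatts."""
    return 10 ** (p_dbm / 10)

source = mw_to_dbm(2.0)   # 2 mW source is about +3.01 dBm
output = source - 6.0     # a 6 dB system loss is a simple subtraction
print(f"output = {output:.2f} dBm = {dbm_to_mw(output):.3f} mW")  # ~0.5 mW
```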
A4 Lens Maker's formula and thin lens equation
The Lens Maker's formula originates with Gauss' formula for refraction at
a spherical surface. In Figure A8, a ray of light originates at a point P and strikes
a spherical surface at a distance h above the optical axis. The ray is bent
(according to Snell's law) and crosses the axis at the point P'. The index of
refraction on the left of the surface is n1 and on the right it is n2. In this
derivation, we will use the small angle approximation several times, that is, the
derivation is only correct for paraxial rays, those which strike the surface at
points near the axis where the angle of incidence is small. For clarity, large
angles are shown in the diagram but remember the derivation is only valid for
small angles.
Note that angles in the diagram are measured in both the clockwise and
counterclockwise directions. As you know by now, these angles will need to have different signs. As in your math class, we will take angles measured
counterclockwise to be positive and angles measured clockwise to be negative.
The angles of incidence (θ1) and refraction (θ2) are, as always, measured from the normal to the surface. Recall that the radius of curvature, R, is normal to the surface, as shown. The angle α is measured from the normal line to a line drawn parallel to the optical axis and passing through the point of incidence, which is located a distance h above the optical axis. φ is the angle from this parallel line to the incident ray and φ' is the angle from the same parallel line to the refracted ray.

Figure A8 - Geometry for refraction at a curved surface. A ray (red line) originating at P strikes the surface at a height h above the optical axis and is refracted to the point P'. The object and image distances are do and di, R is the radius of curvature, and the index of refraction is n1 on the left of the surface and n2 on the right.

From Figure A8, it is clear that

$$\theta_1 = \alpha + \varphi \qquad \text{(A3)}$$

and

$$\theta_2 = \alpha + \varphi' \qquad \text{(A4)}$$

The second equation results from the fact that φ' is a negative angle (clockwise rotation).
At the point where the incident ray strikes the surface, Snell's law is

$$n_1\sin\theta_1 = n_2\sin\theta_2$$

Using the small angle approximation, sin θ ≈ θ, Snell's law can be written

$$n_1\theta_1 = n_2\theta_2$$

Inserting Equations A3 and A4 into this form of Snell's law we have

$$n_1(\alpha + \varphi) = n_2(\alpha + \varphi') \qquad \text{(A5)}$$

Now we would like to replace the angles (which are difficult to measure) with the displacements do, h, and di. In the paraxial approximation, the distance labeled δ in Figure A8 is very small and may be neglected.


From the geometry of Figure A8 we have

$$\varphi = \frac{h}{-d_o}, \qquad -\varphi' = \frac{h}{d_i} \qquad \text{and} \qquad \alpha = \frac{h}{R}$$

Notice that we have used the sign convention for both displacements and angles in these three equations. do is to the left of the surface and thus negative, and clockwise angles are also negative. We can substitute these equations into Equation A5 and rearrange to find

$$n_1\left(\frac{h}{R} - \frac{h}{d_o}\right) = n_2\left(\frac{h}{R} - \frac{h}{d_i}\right)$$

or,

$$\frac{n_1}{d_o} + \frac{(n_2 - n_1)}{R} = \frac{n_2}{d_i} \qquad \text{(A6)}$$

Equation A6 is Gauss' formula for refraction at a single surface. You will notice similarities to both the Lens Maker's formula and the thin lens equation. In fact, (n2 - n1)/R is the power of the surface measured in diopters. A lens consists of two surfaces of radii R1 and R2, with the medium between the surfaces having index of refraction nlens. If the index of refraction on either side of the lens is no (taken here to be 1, a lens in air), then the power of the lens is the sum of the two surface powers:

$$P = P_1 + P_2$$

$$P = \frac{n_{lens} - 1}{R_1} + \frac{1 - n_{lens}}{R_2} = \frac{n_{lens} - 1}{R_1} - \frac{n_{lens} - 1}{R_2}$$

$$P = (n_{lens} - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)$$

The last equation is the Lens Maker's formula.
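As a quick application (the numbers are chosen only for illustration and follow the Cartesian sign convention of Chapter 5): a symmetric biconvex lens with nlens = 1.5, R1 = +10 cm and R2 = -10 cm has

$$P = (1.5 - 1)\left(\frac{1}{0.10\ \text{m}} - \frac{1}{-0.10\ \text{m}}\right) = 10\ \text{diopters}$$

corresponding to a focal length f = 1/P = 0.10 m.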


How does the thin lens equation come from Gauss' formula? The first lens surface takes light originating from the object distance do1 and forms an image at di1. This image is the object of the second lens surface, which then forms an image at di2. Suppose the two surfaces are separated by a distance d. Then,

$$d = d_{i1} + |d_{o2}| \qquad \text{(see Figure A9)}$$

Since do2 is negative,

$$d_{o2} = d_{i1} - d$$

Figure A9 - A lens is two refracting surfaces separated by distance d. The first surface forms an image at di1, which is the object for the second surface, located a distance do2 from that image.

For surface 1, Gauss' formula is

$$\frac{n_0}{d_{o1}} + P_1 = \frac{n_{lens}}{d_{i1}}$$

Here, we have used P1 for the power of the first surface. For the second surface,

$$\frac{n_{lens}}{d_{i1} - d} + P_2 = \frac{n_o}{d_{i2}}$$

Again, the power of the surface is used and we have also substituted the
expression for do2 derived above.
Adding these two equations and combining the terms with nlens we have

$$\frac{n_0}{d_{o1}} + P_1 + \frac{n_{lens}}{d_{i1} - d} + P_2 = \frac{n_{lens}}{d_{i1}} + \frac{n_o}{d_{i2}}$$

$$\frac{n_0}{d_{o1}} + P_1 + P_2 + \frac{n_{lens}}{d_{i1} - d} - \frac{n_{lens}}{d_{i1}} = \frac{n_o}{d_{i2}}$$

$$\frac{n_0}{d_{o1}} + P_1 + P_2 + n_{lens}\left(\frac{d}{(d_{i1} - d)\,d_{i1}}\right) = \frac{n_o}{d_{i2}}$$

If this is a thin lens, then d is very small and the term containing d is negligible. Finally, we note that the power of the lens is P = P1 + P2:

$$\frac{n_0}{d_{o1}} + P = \frac{n_o}{d_{i2}}$$

If the lens is surrounded on both sides by air (no = 1), and do1 is the object distance from the lens and di2 is the image distance, we have the thin lens equation:

$$\frac{1}{d_o} + \frac{1}{f} = \frac{1}{d_i}$$

Here we have used P = 1/f, which is valid when the lens is in air.
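As a check on the sign conventions (a worked example with illustrative numbers): an object 30 cm to the left of a lens with f = +20 cm has do = -30 cm in the Cartesian convention, so

$$\frac{1}{d_i} = \frac{1}{f} + \frac{1}{d_o} = \frac{1}{20\ \text{cm}} - \frac{1}{30\ \text{cm}} = \frac{1}{60\ \text{cm}}$$

and a real image forms 60 cm to the right of the lens (di = +60 cm).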

ANSWERS TO ODD NUMBERED PROBLEMS

Chapter 1
9. 2, 3
11. 0.49 mW

Chapter 2
17. 455 m
19. 1240 eV; 4 x 10^-7 eV
21. 2.24 µm; 1.15 µm
23. 167 W/m2
25. 0.0356 sr
29. 9.3 µm; 523 nm

Chapter 3
9. 14 lumens/watt
11. 7.3 mA

Chapter 4
13. 2 m
15. 27°
17. 1.58
19. 1.57
21. 41.7°
23. 24.4°
25. 15°
27. 30.8°, 30.8°
29. 54.3°

Chapter 5
11. 1.537
13. image distance = 60 cm, -30 cm, -7.5 cm
15. image distance = 8.7 cm, height = 0.32 cm
17. image distance = 0.051 cm, M = -0.020
19. f = 2.3 cm, M = 2.9
21. object distance = -5 cm, height = 1/6 cm
23. 10.4 cm to the left of lens 2, M = 0.52
25. 20 cm to the left of lens 2, M = -2

Chapter 6
11. 3.5 µm; 0.35 mm
13. 2.5 mm
15. 0.20 m, 0.60 m
17. 85 µm
19. 357 nm
21. 5.2°
23. 643 nm
25. 400 nm, 560 nm
27. 5.1 cm
29. 2.64 cm
31. 165 m
33. 1.5 GHz
35. 75%
37. one solution is 1250 nm
39. 667 nm, red
41. 58.2° from a line normal to the window

Chapter 7
15. 65% (48% if the polarizers are reversed)
17. 12.5%; 60°
19. 1.60
21. 0.57; 0.67
23. 32°
25. 23 or 24 polarizers (for 90° ± 0.02°)
27. 75 µm

Chapter 8
11. -1.25 m; 0.8 diopters
13. 79 nm
15. 1.0004
17. 7 x 10^5 and 5000; 0.0078 m and 0.11 nm

Chapter 9
15. 414 nm
17. 150 MHz
19. 10 mm
21. 7.6 µm
23. 0.14 mm
25. 150 MW
27. 20 MW
29. 3.3 cm; 50 m

Chapter 11
7. 0.249
9. 8.4 µm
11. -3.98 dB
13. 0.903 dB/km
15. 4.6 mW

Chapter 12
3. (approximately) 0.986; 0.835

INDEX

Absorption
  Atmospheric, 46
  Biological tissue, 333
  Loss in fiber, 240
  Optical density, 11
  Photon, 28, 189
  Polarizing filters, 144
  Water absorption spectrum, 264
  Water peak, 236
Absorption coefficient, 264, 333
Absorption spectrum, 30
Acceptance angle, 235
Airy disk, 133
Amplitude, 20
Angstrom, 33
ANSI Z136.1, 1
Anti-reflection (AR) coatings, 128
Atomic force microscope (AFM), 275
Autofocus, 167
Avalanche photodiodes (APD), 61, 255
Bandwidth, 232
Bayer Mask, 267
Beer-Lambert law, 264, 334
Benton, Stephen, 291
Biophotonics (definition), 329
Blackbody radiation, 32
Bohr, Niels, 19, 28
Bolometer, 57
Brewster's angle, 149
Camera
  Digital, 262
  Pinhole, 69
  Single lens reflex (SLR), 165
Candela, 39
Cartesian sign convention, 94
  Lenses, 94
  Mirrors, 101
  Radius of curvature, 89
Cathode ray tube, 269
CDRH, 2
Charge coupled devices (CCD), 267
Chemiluminescence, 331
Chromatic dispersion, 239
Circulator, 252
Clear aperture, 313
Clerk Maxwell, James, 19
CMOS sensors, 267
CO2 lasers, 216
Coherence, 114
  Spatial, 116
  Temporal (longitudinal), 114, 199
Color rendering index (CRI), 47
Color temperature, 47
Computer controlled polishing, 326
Confocal microscope, 343
Constructive interference, 111
Critical angle, 76
Cutback method, 246
Cutoff wavelength, 239
Dark current noise, 63
dBm, 242
Decibels (dB), 241, 352
Denisyuk, Yuri, 284, 291
Depth of focus (DOF), 206
Destructive interference, 111
Detectors
  APD, 61
  Bolometers, 57
  Photoconductive, 59
  Photodiode, 60
  Photomultiplier tube, 58
  PIN, 60
  Pyroelectric, 57
  Quantum, 58
  Thermal, 56
  Thermocouple, 56
Dewar, 59
Dichroic mirror, 343
Diffraction, 129
  and interference, 132
  Circular aperture, 133
  Fraunhofer, 130
  Fresnel, 130
  Single slit, 130
Diffraction grating, 121
Diffraction-limited, 134
Digital image representation, 265
Digital X-ray, 278
Diopter, 90
Disc calorimeter, 57
Dispersion, 79
  Chromatic, optical fiber, 80
Distributed feedback lasers, 226
Einstein, Albert, 19, 58
Electromagnetic waves, 20
Electron volt, 27
Electronic ink, 271
Emission spectrum, 30
Energy levels in atoms, 28
  Excited state, 28
  Ground state, 28
  Ionization energy, 29
  Lifetime, 30
Erbium Doped Fiber Amplifiers (EDFA), 255
Etalon, 178, 201, 210, 211
Exitance, 34, 39
Eye, human, 167
  Vision defects, 169
f/stop (f/#), 166
Fabry-Perot laser, 226
False color, 274
Fermat's Theorem, 352
Fiber Bragg gratings, 227, 234, 252
Fiber bundles, 256
Fiber optic connectors, 247
Fiber optic couplers, 249
Fiber optic sensors, 256
Fiber optic splicing, 248
Flow cytometry, 344
Fluorescence, 31, 344
Fluorescent linewidth, 195
Focal length, 102
Focal point, 86
  Real, 87
  Virtual, 87
Forward bias, 54
Four-level laser, 194
Fracture, 318
Frequency, 21
  Angular, 22
Fresnel equations, 148
Fresnel, Augustin, 19
Fringe order, 119
Gabor zone plate, 287
Gabor, Dennis, 283
Gas discharge lamps, 49
Gaussian beams
  Beam radius, 206
  Divergence, 203
  Focusing, 204
  TEMoo mode, 202
Generating, 319
Glass, 305
  Chemical properties, 306
  Mechanical properties, 306
  Optical properties, 309
  Thermal properties, 308
Graded index fiber (GRIN), 240
Grimaldi, Francesco, 18
Grinding, 320
Halogen cycle, 48
Helium neon laser, 213
Hertz, Heinrich, 58
Heterojunction, 55
Heterojunction laser, 225
High Intensity Discharge (HID) lamps, 50
Hobby holograms (making), 294
  Single beam reflection, 298
  Two-beam transmission, 300
Holograms
  Computer generated, 294
  Embossed, 292
  Reflection, 291
  Transmission, 291
  White light transmission (rainbow), 291
Holographic data storage, 294
Holographic interferometry, 293
Homojunction, 55
Homojunction laser, 224
Huygens' Principle, 351
Huygens, Christian, 18
Huygens-Fresnel principle, 129
Hydrogen atom energy levels, 29
IEC, 2
Image
  Real, 92
  Virtual, 93
Image compression, 269
Image displays, 269
Image filtering, 273
Incandescent light sources, 47
Incident angle, 71
Incident ray, 71
Index of refraction
  Definition, 72
  Table of values, 73
Insertion loss, 246
Intensity, 34, 35
Interferometer
  Fabry-Perot, 178
  Fizeau, 180
  Mach-Zehnder, 177
  Michelson, 175
  Phase shifting, 181
Irradiance, 7, 34, 39
ITU-Grid, 250
Johnson noise, 63
Lamps
  Arc lamps, 52
  Flash lamps, 52
  Fluorescent, 49
  HID, 50
  High-pressure sodium, 50
  Incandescent, 47
  Low-pressure sodium, 51
  Mercury, 50
  Metal halide, 50
LANDSAT, 277
Laser components
  Gain medium, 191
  Pump, 192
  Resonator (mirrors), 192
Laser dental applications, 337
Laser dermatology, 339
Laser diode, 224
Laser energy transitions, 194
Laser eye surgery, 338
Laser generated air contaminants, 5
Laser hazard classifications, 6
  Class 1, 6
  Class 2, 6
  Class 3, 7
  Class 4, 7
Laser hazards, 2
  Eye hazards, 3
  Secondary hazards, 5
  Skin hazards, 5
  Viewing conditions, 4
Laser induced breakdown spectroscopy, 345
Laser output parameters
  Spatial properties, 201
  Temporal properties, 195
Laser safety
  Controls, 12
  Practical rules, 13
Laser safety eyewear (LSE), 10
Laser scanning confocal microscopy, 342
Laser surgery, 335
Laser types
  Chemical, 223
  Dye, 222
  Excimer, 218
  Fiber lasers, 227
  Ion gas, 214
  Molecular gas, 216
  Nd:YAG, 220
  Neutral gas atom, 207, 213
  Semiconductor, 224
  Solid state (crystal), 219
LASIK, 338
Lens
  Converging, 86
  Cylindrical, 91
  Diverging, 86
  Image formation, 91
  Names of, 87
Lens aberrations, 99
Lens Maker's formula, 89
LIDAR, 265
Lifetime, 188, 194, 222
Light Emitting Diodes (LEDs), 52
Light-tissue interactions, 332, 334
Line sensors, 267
Liquid crystal display (LCD), 156, 270
Longitudinal modes, 196
  Bandwidth of a mode, 198
  Mode spacing, 197
Loss
  Decibels (dB), 241
Loss in a fiber system
  Extrinsic, 240
  Insertion, 241
  Intrinsic, 240
  Power budget, 243
Lumen, 38
Luminous efficacy, 39
Luminous efficiency, 38, 42, 66
Luminous power, 38, 42, 66
Lux, 39
Machine vision, 279
Magnetic Resonance Imaging, 279
Magnification
  Angular, 164
  Lens (transverse), 97
  Spherical mirror, 104
  Transverse, 70
Magnifiers, simple, 164
Maiman, Theodore, 187
Malus' Law, 145
Materials for optical components, 305
Maximum permissible exposure (MPE), 8
Metastable state, 191
Micro cracks, 318
Micro mirror array, 270
Microscope, 173
Mirror equation, 104
Mirror images
  Plane mirror, 100
  Spherical mirrors, 102
Mirrors, 100
  Spherical, 101
Modal distortion, 237
Mode Quality Factor (M2), 205
Modelocking, 207
Moiré, 285
Multimode fiber, 237
Newton, Isaac, 18
Noise, 59, 63, 301
Normal line, 71, 76, 354
Normalized frequency. See V-number
Numerical Aperture (N.A.), 235
Nyquist noise, 63
Object beam, 286
Optical axis, 86
Optical coherence tomography, 341
Optical density (OD), 11
Optical detectors, 55
Optical fiber
  Buffer coating, 233
  Cladding, 233
  Core, 233
  Operating wavelengths, 236
  Step-index, 236
Optical loss test kit, 245
Optical noise, 297
Optical Time Domain Reflectometer (OTDR), 246
Optical tweezers, 345
Optical waveguide, 234
Paraxial Approximation, 88
Path length difference, 112, 119, 125, 131
Penetration depth, 334
Period, 20
Phase, 23
Phase shift on reflection, 124
Phosphorescence, 31
Photobiology, 330
Photochemical cellular effects, 332
Photoconductive detectors, 59
Photodiodes, 60
Photodynamic therapy (PDT), 335, 340
Photoelectric effect, 26, 58
Photoemissive detector, 58
Photofluorescence, 335
Photoinduced tissue pathologies, 332
Photoluminescence, 331
Photomechanical processes, 334
Photomedicine, 331
Photometric units, 37
Photomultiplier, 58
Photon, 26
  Absorption, 28, 189
  Energy, 27
  Spontaneous emission, 188
  Stimulated emission, 190
Photopsychological effects, 331
Photosynthesis, 330
Photothermal interaction, 335
Photovoltaic effect, 60
PIN photodiode, 60, 255
Pinhole camera, 69
Planck, Max, 27
Planck's constant, 27
Plasma, 49, 214
Plasma displays, 270
pn junction, 54, 224, 225
Point source, 68
Poisson, Simeon, 19
Poisson's spot, 19
Polarization
  Absorption, 144
  Birefringence, 152
  Reflection, 147
  Scattering, 150
  Sunglasses, 150
Polarized light
  Circular, 143
  Definition, 141
  Elliptical, 143
  Plane polarized, 142
Polishing, 321
  Pitch, 321
  Process, 322
  Synthetic pads, 322
Power
  Light gathering, 171
  Resolving, 171
Power margin, 244
Power, optical, 90
  Diopter, 90
Principal rays, 91, 93, 103
Prisms, 162
Propagation speed, 24
Pulsed lasers, 207
  Modelocking, 207
  Q-switch, 207
Pyroelectric detector, 57
Pyroelectric effect, 57
Q-Switch, 208
Quantum efficiency, 62
Quantum noise, 63
Radian, 36
Radiationless transition, 190
Radiometric units, 33
Rainbow, 79
Ray, 68
Ray tracing
  Lens, 91
  Mirrors, 103
Rayleigh scattering, 151
Rayleigh's Criterion, 133
Rectilinear propagation, 68
Reference beam, 286
Reflection
  Diffuse, 4, 72, 82, 321, 344
  Law of, 71
  Specular, 4, 7, 15, 72, 82
Refraction
  Critical angle, 76
  Snell's law, 75
Resolving power, 123, 171, 179
Resonant excitation, 213
Responsivity, 55, 62
Roughness, 314
Ruby laser, 219
Sag, 314
Scanning tunneling microscope (STM), 275
Scratch/dig, 316
Semiconductor
  n-type, 53
  pn junction, 54
  p-type, 53
Semiconductor physics (introduction), 53
Shadows, 68
  Penumbra, 69
  Umbra, 69
Shot noise, 63
Signal-to-noise, 64
Singlemode fiber, 238
Skylight, 46
Slope error, 313
Small angle approximation, 88
Snell's law, 75, 351
Spatial frequency, 286
Spectrometer, 163
Spectrum
  Absorption, 30
  Electromagnetic, 25
  Emission, 30
  Optical, 25
Speed of light, 24
Spherometer, 315
Steradian, 36
Sunlight
  Solar constant, 46
  Spectral content, 46
Superposition, 110
Surface form error, 312
Talbot, 39
Telescopes
  Cassegrain, 172
  Galilean, 171
  Keplerian, 170
  Newtonian, 172
  Reflecting, 172
  Refracting, 170
TEMoo, 202
Thermal detectors, 56
Thermal noise, 63
Thermistor, 57
Thermocouple, 56
Thermopile, 56
Thin film coating deposition, 327
Thin film interference, 123
  Solving problems, 126
Thin Lens Approximation, 85
Thin Lens Equation, 93
Three-level laser, 195
Thresholding, 272
Ti:Sapphire laser, 222
Tissue modification, 335
Tomography, 278
Total internal reflection (TIR), 77, 234
  Frustrated, 78
Transmittance, 11
Transverse Electromagnetic Modes (TEM), 201
Ultrasound imaging, 278
V-number, 238
Wall plug efficiency, 39
Wave equation, 22, 349
Wave plates, 154
  1/2 wave, 155
  1/4 wave, 154
Wavefront, 68
Wavefront error, 313
Wavelength, 20
  Change in a medium, 74
Wavelength Division Multiplexing, 250
Wave-particle duality, 19
Wedge error/Eccentricity, 315
YAG, 220
Young, Thomas, 19
Young's double slit experiment, 118
