4.1 Introduction - The Purpose of Model Eyes
The human eye is an elegant electro-optical image processing
system with many parallels to modern optical instruments. For
example, the eye could be compared to an advanced video camera
with wide field of view, auto-focus and auto-exposure optics, a
variable-resolution sensor with 3 color channels plus an ultra-
sensitive black and white channel, local and global automatic gain
control mechanisms that produce uniform sensitivity over a large
dynamic range, all packaged in a pair of light-weight spheres
mounted in a lubricated gimbal that allows rapid, servo-controlled,
coordinated stereoscopic positioning and the tracking of moving
objects. At the same time, the eye is a typical biological system with
an inherent complexity and variability which cannot easily be
represented by a simple equation or model. Nevertheless, there are
many features of the eye that are common to virtually all human
eyes, making it feasible and desirable to represent the average
eye by a functional, neuro-optical, schematic model.
[Figure: cross-section of the human eye, with the optic nerve labeled.]
early adulthood the auto-focus capability (called accommodation)
of the human eye provides an impressive dynamic range from
infinity to less than 10 cm from the eye. Throughout adulthood,
however, the capacity to accommodate declines and eventually
disappears at about 55 years of age. In addition to these age
changes that occur in all eyes, there are other changes that occur in
a significant minority of eyes. For example, many people experience
additional ocular growth during childhood or early adulthood that is
not accompanied by a corresponding decrease in optical power of
the eye's dioptric components. These eyes effectively become
optically over-powered, or “near-sighted” (myopic), thus requiring a
spectacle or contact lens with negative power in order to see distant
objects clearly. Given these large differences in the optical
properties of real eyes, the goal of a schematic model eye is to
represent the typical healthy eye, supplemented with tolerances on
individual components which give a sense of the range of values
found in the population.
Numerous model eyes have been developed during the last 150
years to satisfy a variety of needs. In some cases the model eye is
aimed primarily at anatomical accuracy, whereas other models have
made no attempt to be anatomically correct or even have any
anatomical structure at all. There are many reasons why
anatomically correct models are useful. Because the eye is a biological
organ, it would be impossible to understand the physiology, development,
and biological basis of the eye's optical properties without an
accurate anatomical model. In the clinical world of refractive
surgery, an anatomically correct model is an essential tool for
planning operations designed to modify the eye’s optical properties.
A good example of this need is the routine cataract surgery to
remove an eye's internal, opacified lens and replace it with a
synthetic lens designed as a substitute for the removed natural lens
and to correct for any refractive error that the eye may have. Since
the refractive error depends upon the lens, cornea and eye length,
and the effective power of the introduced lens depends upon its
location with respect to the cornea and retina, an anatomically
correct optical model is essential for clinicians to calculate the
desired lens power of the synthetic, implanted lens.
refracting surfaces. The latter has spurred the use of aspheric
refracting surfaces and variable refractive indices to better model
the monochromatic and polychromatic aberrations of the eye.
Until recently, model eyes were concerned only with light passing
into the eye (the path necessary for vision). However, recent
interest in retinal imaging for medical diagnostic purposes has
highlighted the importance of accurate modeling of light exiting the
eye following reflection from the fundus. Although most individuals
only experience this phenomenon as an annoying “red eye” on their
photographs when a flash is used, clinicians for over a hundred
years have been using reflected light to view the blood vessels and
neural retina at the back of the eye (fundus). For these clinical
applications, the retina becomes the object which is imaged by the
eye’s optical system in conjunction with an ophthalmic instrument
(e.g. fundus camera or ophthalmoscope). This is a challenging task
because the eye contains dense visual pigmentation which absorbs
light for vision, plus additional melanin pigment which absorbs
excess light, thereby reducing internal light scatter. Consequently,
only a small fraction of the light entering the eye actually reflects
back out again. Combining this very low reflectance (e.g. 10⁻⁴) with
the susceptibility of the retina to photic injury leads to a practical
problem of dim fundus images. The clinical solution to this problem
is to dilate the pupil, which unfortunately releases the full impact of
the eye's aberrations on the reflected image. Only recently have the
eye's aberrations been measured with sufficient accuracy to allow
correction with adaptive optics, thereby producing a high resolution
image of the fundus.42
called the Indiana Eye which was designed specifically for the
computation of image quality. The optical properties of the Indiana
Eye are examined in detail in section 4.4 and the use of the model to
compute image quality is outlined in section 4.5. A model of the
neuro-sensor features of the human retina is presented in section
4.6. We conclude our chapter with a discussion in section 4.7 of the
problem of validating eye models which are intended to predict the
optical and neuro-sensory effects on visual performance.
FEATURES
• Single, aspheric refracting surface with variable shape parameter
• Adjustable pupil (axial, lateral, diameter)
• Dispersive, homogeneous medium
• Variable focus (adjustable three ways)
APPLICATIONS
• Focal power
• Retinal magnification factor
• Defocus
• Longitudinal chromatic aberration (LCA)
• Chromatic difference of magnification (CDM)
• Transverse chromatic aberration (TCA)
• Spherical aberration
• Oblique astigmatism
• Non-linear projection of visual space
• Wave aberration function
• Polychromatic optical transfer functions (OTFs)
• Image quality
• Visual performance
communication between the fields of engineering optics and visual
optics, we conclude this introduction by briefly mentioning a few
terms and conventions.
Newton (1670) was the first to appreciate that the human eye
suffered from chromatic aberration, and Huygens (1702) actually
built a model eye to demonstrate the inverted image and the
beneficial effects of spectacle lenses. Astigmatism was observed by
Thomas Young (1801) in his own eye and spherical aberration was
measured by Volkman (1846). The first quantitative paraxial model
eye was developed by Listing (1851), which employed a series of
spherical surfaces separating homogeneous media with fixed
refractive indices. Listing's model set a trend that produced
numerous models over the next 100 years, of which the most widely
quoted variants are due to Emsley.28
correcting oblique astigmatism include improved imaging of the
peripheral fundus and improved efficacy and accuracy of laser
photocoagulation in the peripheral retina. Some interesting issues
regarding the ocular source of oblique astigmatism have been raised
in the modeling work of Dunne and colleagues. They observed that
their model, and those of others, failed to capture both off-axis
astigmatism and on-axis spherical aberration even though both are
similarly affected by changes in model asphericity.26 In a later study
they observed that off-axis astigmatism is virtually unchanged in
Gullstrand’s model eye when the lens is removed.27 This conflicted
with experimental reports that aphakic eyes (those with the lens
surgically removed) have less off-axis astigmatism.51 By displacing
the model pupil backwards along the optic axis, they were able to
reduce the off-axis astigmatism to levels observed experimentally in
aphakic eyes, and therefore came to the conclusion that aphakic
eyes have reduced off-axis astigmatism due to axial displacement of
the pupil and not because the lens contributes significantly to the
off-axis astigmatism of the eye. Axial displacement of the pupil
(which is typically located at the anterior surface of the lens in
previous aspheric models) will have a profound effect on the model
eye’s off-axis astigmatism, but very little impact on axial spherical
aberration. Therefore, using a model without a lens in which the
pupil can be placed in any plane, it is possible to manipulate surface
asphericity and pupil position to accurately model both off-axis
astigmatism and on-axis spherical aberration.
The third, and dominant, motivation for recent development of
model eyes has been to account for the effects of monochromatic
and polychromatic aberrations on vision. In general, models
constructed with spherical refracting surfaces have been found to
have far more aberration than is present in real eyes. For example,
the early spherical models of Gullstrand and others, which were not
designed to model spherical aberration, exhibit between 8 D and 12
D of longitudinal spherical aberration (LSA) at the edge of an 8 mm
diameter pupil, whereas human eyes have less than 2.5 D for the
same ray location.43, 81 Discrepancies of this kind have prompted
the adoption of aspheric surfaces and dispersive media to help bring
the models into closer agreement with real eyes. For example, Liou
and Brennan use a four surface, aspheric model plus a GRIN lens to
achieve a realistic value of about 2 D of LSA at a 4 mm ray height.44
Similarly, Navarro et al.54 designed an aspheric multi-surfaced
model with dispersive media in order to compute the effects of
monochromatic and chromatic aberrations on image quality.
The drive to develop models for computing image quality has led
some authors to abandon anatomical correctness in favor of
analytical or physical simplicity. For example, Van Meeteren
(1974) developed a mathematical model of the pupil function and
its variation with wavelength in order to compute image quality in
the human eye for monochromatic and polychromatic light.82 The
drawback of a purely mathematical model, however, is that it does
not readily provide insight into the physics of image formation in
the eye. For this reason we prefer to work with a physical model,
from which we may derive the optical properties which determine
image quality. In the spirit of parsimony, we would like the model
to be as simple as possible, preferably with a single refracting
surface. Much to our surprise, our group at Indiana University has
discovered over the past decade that the classic reduced eye
satisfies these requirements, provided that a few small modifications
are adopted. As shown in the following sections, a modified
reduced eye is capable of accurately describing the chromatic,
spherical, and astigmatic aberrations of the typical human eye.
Because of its simplicity of form the model may be analyzed
geometrically, analytically, and numerically to gain insight into
visual optics problems such as computing the effects of diffraction,
defocus, and aberrations on retinal image quality and visual
performance.
similarity to real eyes, but by its ability to predict visual
performance on optically-limited tasks.
n(λ) = a + b/(λ − c)    (4.1)
to specify how the refractive index n of the ocular medium varies with
wavelength λ. In this equation λ is specified in microns and the other
parameters are: a=1.31848, b=0.006662, c=0.1292.
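Eqn. 4.1 is simple to evaluate; a minimal sketch using the constants quoted above (the function name is ours, not from the text):

```python
# Eqn. 4.1: refractive index of the ocular medium as a function of
# wavelength, with wavelength in microns and the constants quoted in the
# text (a = 1.31848, b = 0.006662, c = 0.1292).

def refractive_index(wavelength_um, a=1.31848, b=0.006662, c=0.1292):
    """Refractive index n(lambda) of the ocular medium (eqn. 4.1)."""
    return a + b / (wavelength_um - c)

# At the emmetropic reference wavelength (589 nm) the index is close to
# that of water (~1.333); shorter wavelengths are refracted more strongly.
n_589 = refractive_index(0.589)
```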
z = (x² + y² + pz²) / (2r).    (4.2)
0 < p < 1, prolate ellipsoid,
p = 0, paraboloid,
p < 0, hyperboloid,
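Eqn. 4.2 is implicit in z but can be solved explicitly; a sketch assuming the Indiana Eye values quoted later in the chapter (r = 5.55 mm, p = 0.6), with the function name ours:

```python
import math

# Surface sag z from eqn. 4.2. For p != 0, eqn. 4.2 rearranges to the
# quadratic p*z**2 - 2*r*z + s = 0 with s = x**2 + y**2, and the root
# nearer the apex is the physical one. Units: millimeters.

def sag(x, y, r=5.55, p=0.6):
    """Axial coordinate z of the aspheric refracting surface at (x, y)."""
    s = x**2 + y**2
    if p == 0.0:                      # paraboloid limit of eqn. 4.2
        return s / (2.0 * r)
    return (r - math.sqrt(r**2 - p * s)) / p

# Near the apex the sag approaches the paraxial value s/(2r) for any p.
```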
[Figure 4.2. Three model eyes: (a) Gullstrand's schematic eye (cardinal
points P, P′, N, N′ and focal points F, F′; overall length 23.89 mm);
(b) the general aspheric reduced eye (optical, visual, achromatic, and
fixation axes; angles φ, ψ, α; fovea; distant fixation point); (c) the
Indiana Eye (dispersion n(λ) = a + b/(λ − c) with nD = 1.333,
a = 1.320535, b = 0.004685, c = 0.214102; dimensions 5.55, 2.75, 11.0,
16.67, and 22.22 mm).]
focal point (F) and posterior focal point (F ′) of the reduced eye. Surface of
the reduced eye is located midway between the principal points P, P′ of the
Gullstrand model. (Redrawn from Thibos, Ye, Zhang, & Bradley, 1997)
chromatic aberration (TCA) will be zero at the fovea. However, if
the pupil is not well centered on the visual axis (i.e. φ ≠ 0) , or
equivalently, if the visual axis does not coincide with the eye's
achromatic axis (i.e. ψ ≠ 0), then the fovea will be subjected to the
deleterious effects of TCA, which can have a dramatic impact on
retinal image quality and on visual performance.5, 76, 83
Figure 4.3. Distribution of pupil displacement from the visual axis
(bottom and left axes) and distribution of angle ψ (angle between visual
axis and achromatic axis of the eye, top and right axes) in a population of
young adult eyes. Ellipse encompasses half of the data points. Redrawn
from Rynders et al., 1995
of the ocular medium (as defined by Mouroulis and Macdonald (p.
190) 53) from 55 for water to 50 for the Indiana Eye. The radius of
curvature r of eqn. 4.2 was set by Emsley to be 5.55 mm in order
for the reduced eye to have the same anterior and posterior focal
lengths as Gullstrand's schematic eye at the emmetropic reference
wavelength (589 nm). These values for r and for the reference
wavelength were adopted also for the Indiana Eye. (The fact that the
radius of curvature of the model eye is significantly less than that of
the cornea alone, which is 7.8 mm in Gullstrand's schematic eye,
reflects our intentional disregard for anatomical accuracy in order
to represent the eye by a single refracting surface.) As for
asphericity, a recent study of the eye's spherical aberration suggests
a shape-parameter value of p = 0.6 is required to match the
spherical aberration of the Indiana Eye with that of typical human
eyes.81 (This p value of the model is significantly greater than that
of the typical human cornea). For convenient reference, the
parameters of the Indiana Eye model are collected in Table 4.2.

[Table 4.2 (fragment): anterior focal length at the emmetropic
wavelength, fD = 16.67 mm (= 1/FD); posterior focal length,
f′D = 22.22 mm (= nD/FD); powers in diopters.]
The paraxial power of the refracting surface in the Indiana Eye is equal to the
change in refractive index across the interface divided by the radius of curvature
F = (n′ − 1)/r    (4.3a)
  = n′/f′    (4.3b)
  = 1/f.    (4.3c)
f′ − f = r,    f′/f = n′    (4.4)
Equations 4.3 and 4.4 are valid for any emmetropic reduced-eye
model. In what follows we use subscript notation to indicate that a
particular parameter assumes the value from Table 4.2 associated
with the Indiana Eye model.
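Eqns. 4.3 and 4.4 can be checked numerically for the Indiana Eye values; a short sketch (variable names ours):

```python
# Eqns. 4.3-4.4 evaluated for the Indiana Eye (n' = 1.333 at 589 nm,
# r = 5.55 mm). Distances in meters, power in diopters.

n_prime = 1.333
r = 5.55e-3                 # radius of curvature of refracting surface (m)

F = (n_prime - 1.0) / r     # eqn. 4.3a: surface power, ~60 D
f = 1.0 / F                 # eqn. 4.3c: anterior focal length, ~16.67 mm
f_prime = n_prime / F       # eqn. 4.3b: posterior focal length, ~22.22 mm

# Consistency checks from eqn. 4.4: f' - f = r and f'/f = n'.
```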
4.4.2 Magnification
[Diagrams: (a) angles ω and β subtended at the surface and at the nodal
point N; (b) blur-circle geometry with pupil diameter d, blur-circle
diameter b, and focal shift ∆f′ beyond the posterior focal length f′.]
Figure 4.4 Retinal magnification (a) and blur circle diameter (b) for the
Indiana Eye model.
4.4.3 Defocus
E = n′/f′ − n′/(f′ + ∆f′)    (4.5)
  ≅ n′∆f′/f′²
b/d = ∆f′/f′.    (4.6)
Dividing both sides of eqn. 4.6 by f ′ and combining the result with
eqn. 4.5 gives
n′b/f′ = dE.    (4.7)
Since the distance ∆f ′ is much smaller than f ′, the ratio b/f ′ may be
taken as an approximate formula for the angle subtended by the
blur circle at the principal plane. As shown previously in
connection with Fig. 4.4(a), the angle subtended at the nodal point
is larger by the factor n′. Thus the left side of eqn. 4.7 is recognized
as the desired visual angle β and so the final approximate formula
for the angular size of the blur circle is the simple linear expression
β = dE    (4.8)
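As a numerical sketch of eqn. 4.8 (function name and sample values ours): with pupil diameter d in meters and defocus E in diopters, the product is the blur-circle angle in radians.

```python
import math

# Eqn. 4.8: angular diameter of the blur circle, beta = d * E, with the
# pupil diameter d in meters and defocus E in diopters, giving beta in
# radians; converted here to minutes of arc for convenience.

def blur_circle_arcmin(pupil_diameter_m, defocus_diopters):
    """Angular diameter of the blur circle, in minutes of arc."""
    beta_rad = pupil_diameter_m * defocus_diopters   # eqn. 4.8
    return math.degrees(beta_rad) * 60.0

# e.g. a 4 mm pupil with 1 D of defocus blurs a point over ~14 arcmin.
blur = blur_circle_arcmin(4e-3, 1.0)
```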
K = L′ − L − F (4.9)
K0 = L0′ − F0 = 0 (4.10)
than eqn 4.9 because L0 = 0 when an image is clearly focused on the
retina of the Indiana Eye. Subtracting eqn. 4.10 from 4.9 gives the
desired expression for refractive error,
K = (L′ − L0′) − L − (F − F0)    (4.11)
Figure 4.5 Four cases of refractive error caused by altering (a) object
distance, (b) axial length, (c) surface curvature, (d) refractive index. In
each diagram, the lower ray trace shows the focus error in image space and
the upper ray trace shows the focus corrected by a correcting lens of
power K in object space.
dioptric distance from refracting surface to the retina. Given
L0′ = nD/f0′ and L′ = nD/(f0′ + ∆f′), we conclude from eqn. 4.11 that
K/L0′ = −∆f′/(f0′ + ∆f′). This result indicates that the ratio of the power
of the correcting lens to the dioptric length of the standard eye is
the same as the ratio of the change in axial length to the overall
length of the eye.
K = (n′ − nD)/f0′ − (n′ − nD)/r.    (4.12)
K = ((nD − n′)/r) · (1/nD).    (4.13)
This result says that the spectacle lens required to correct refractive
error caused by changing refractive index is 1/nD = 0.75 times the
change in power of the refracting surface. For example, an increase
in refractive index sufficient to produce a 4 D increase in surface
power of the Indiana Eye can be corrected by a -3 D spectacle lens,
with the difference being made up by the 1 D increase in dioptric
distance from refracting surface to the retina.
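The arithmetic of this example can be checked directly from eqns. 4.12 and 4.13; a short sketch (variable names ours, values from the text):

```python
# Check of the worked example above: an index change large enough to
# raise the surface power by 4 D needs about a -3 D spectacle lens,
# i.e. K is 1/nD ~ 0.75 times the surface-power change, opposite sign.
# Units: meters and diopters.

N_D = 1.333
R = 5.55e-3                # surface radius of curvature (m)
F_PRIME_0 = N_D / 60.0     # posterior focal length of the standard eye (m)

delta_n = 4.0 * R          # index change giving a 4 D surface-power increase

K_12 = delta_n / F_PRIME_0 - delta_n / R     # eqn. 4.12
K_13 = -delta_n / (R * N_D)                  # eqn. 4.13 (same quantity)
```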
The Chromatic Eye and the Indiana Eye suffer from the same
amount of paraxial chromatic aberration caused by the dispersive
properties of the ocular medium. Refractive index in these models
varies with wavelength according to eqn. 4.1, which in turn
produces a variable refractive error quantified by eqn. 4.13. We
quantify the magnitude of longitudinal chromatic aberration K by
the difference in refractive error between wavelengths λ1 and λ2

K = (n′(λ1) − n′(λ2)) / (r nD).    (4.14)
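Combining eqns. 4.1 and 4.14 gives the chromatic difference of refraction between any two wavelengths; a sketch using the Chromatic Eye dispersion constants shown in Fig. 4.2 (constants from the chapter, function names ours):

```python
# Eqns. 4.1 and 4.14 with the Chromatic Eye constants from Fig. 4.2
# (a = 1.320535, b = 0.004685, c = 0.214102, nD = 1.333, r = 5.55 mm).
# Wavelengths in microns; refractive error in diopters.

A, B, C = 1.320535, 0.004685, 0.214102
N_D = 1.333
R = 5.55e-3        # radius of curvature (m)

def n(wavelength_um):
    """Refractive index of the ocular medium (eqn. 4.1)."""
    return A + B / (wavelength_um - C)

def delta_K(lam1_um, lam2_um):
    """Chromatic difference of refractive error, diopters (eqn. 4.14)."""
    return (n(lam1_um) - n(lam2_um)) / (R * N_D)

# The chromatic difference of focus between 400 and 700 nm is about 2 D,
# in line with the published measurements plotted in Fig. 4.6.
lca_400_700 = delta_K(0.4, 0.7)
```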
[Graph: refractive error (diopters) vs. wavelength (300-800 nm), with
λref = 589 nm. Data: Wald & Griffin, 1947; Bedford & Wyszecki, 1957;
Ivanoff, 1953; Millodot & Sivak, 1973; Millodot, 1976; Charman &
Jennings, 1976; Powell, 1981; Lewis et al., 1982; Ware, 1982; Mordi &
Adrian, 1985; Howarth & Bradley, 1986; Cooper & Pease, 1988; Thibos et
al., 1992. Curves: Water Eye, Chromatic Eye.]
Figure 4.6 Comparison of published measurements of ocular chromatic
aberration with the traditional water-eye model and with the chromatic
eye model. Published results were put on a common basis by translating
data points vertically until the refractive error was zero at the reference
wavelength (589 nm). (Redrawn from Thibos, Ye, Zhang, & Bradley, 1992).
[Graph: luminance spectrum (350-750 nm) with chromatic defocus contours
from +0.5 D to −1.0 D marked.]
Figure 4.7 Chromatic defocus in the context of the source luminance
spectrum. The solid curve shows the luminance spectrum of white light
emitted by the P4 phosphor of cathode-ray tubes and arrows mark the
amount of defocus if the eye accommodates for 550 nm. When the peak of
the luminance spectrum is in focus, most of the light is less than 0.25
diopters out of focus. (Redrawn from Thibos, Bradley & Zhang, 1991)
defocus to be expected for a retinal object as a function of the wavelength of
light.
[Graph: chromatic difference of magnification, CDM (%), vs. pupil
distance z from the apex (−8 to +12 mm), following the relation
CDM ≅ z·K, with an annotation at position −5.7 mm.]

CDM = zK    (4.15)
the value of CDM determined for the natural eye without an
artificial pupil was less than half the value expected from the
Chromatic Eye model, which suggests that a better match between
experiment and theory would be obtained if the pupil were closer to
the nodal point. A need for a slight axial adjustment of the pupil
(0.6 mm) is also indicated by an analysis of oblique astigmatism of
the model, as will be shown further on.
[Diagrams: geometry of transverse chromatic aberration τ, panels (a)
and (b), with field angle ω, pupil distance z from the nodal point N,
displacement h, and points S, L; annotation τ = Kz sin ω = hK.]

τ = Kz sin ω    (4.16)
which subtends 0.2° of visual angle, which is 8 times the diameter of
cone photoreceptors in that part of the retina, 4 times cone spacing,
and about equal to the spacing between neighboring retinal ganglion
cells.
τ = hK    (4.17)

CDM = dτ/dω = Kz cos ω    (4.18)

For small field angles cos ω ≈ 1, which brings eqn. 4.18 into
agreement with eqn. 4.15.
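A numerical sketch of eqn. 4.17 (function name and the sample values are ours, not from the text): with h in meters and the chromatic difference of refraction K in diopters, τ comes out in radians.

```python
import math

# Eqn. 4.17: foveal transverse chromatic aberration tau = h * K, where h
# is the displacement of the pupil from the visual axis (meters) and K is
# the chromatic difference of refractive error (diopters) between the two
# wavelengths of interest; tau is in radians, converted here to arcmin.

def tca_arcmin(pupil_displacement_m, delta_refraction_diopters):
    """Transverse chromatic aberration, in minutes of arc (eqn. 4.17)."""
    tau_rad = pupil_displacement_m * delta_refraction_diopters
    return math.degrees(tau_rad) * 60.0

# e.g. a 0.5 mm pupil decentration with 2 D of chromatic defocus:
tau = tca_arcmin(0.5e-3, 2.0)
```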
TSA as the linear distance in the focal plane from the optical axis to
the point of intersection of the marginal ray, the choice of an
angular measure for TSA is favored in visual optics.) A proportional
relationship between LSA and TSA is easily derived for small angles
a, b for which the approximation tan(x) ≈ x applies (and neglecting
the small distance z = OA)
TSAimage = b − a
        ≅ y (1/S − 1/F)    (4.19)
        = y·LSA / n′
TSAobject = s ≅ y/T    (4.20)
         = y·LSA
[Diagrams: TSA geometry at a surface point Q(y,z) in (a) image space,
where LSA = n′/S − n′/F, and (b) object space, where LSA = 1/T; axial
points O, A, N, S, F, T; angles a, b, σ, s.]
A = 0.937p − 0.419 diopters/mm²  (object space)    (4.22b)
[Graphs: LSA (diopters, left) and TSA (arcmin, right) vs. ray height
(−4 to +4 mm) for shape parameters p from 0.01 to 0.8.]
[Graphs: Seidel coefficient in (a) image space and (b) object space.]
equations to determine tangential and sagittal focal lengths for distant objects;
(v) determine oblique astigmatism in diopters in both image and object spaces.
Detailed equations which implement these five steps are provided elsewhere.89
Figure 4.13(a) shows the computed tangential and sagittal focal powers for the
Chromatic Eye (i.e. p=0.437, zero spherical aberration). For object points on the
optical axis, focal power of the model eye is 60 D. With the increase of field
angle, focal powers in both tangential and sagittal planes increase. The difference
between tangential and sagittal focal powers is Sturm's interval in diopters,
which is shown in Fig 4.13(b) for the Indiana Eye (p=0.6).
[Figure 4.13. (a) Tangential (T) and sagittal (S) focal power (55-85 D)
vs. field angle (−80° to +80°) for the Chromatic Eye (p = 0.437,
z = 1.95 mm). (b) Sturm's interval (0-5 D) vs. field angle for the
Indiana Eye (p = 0.6, z = 2.75 mm), compared with data from Rempt et
al., 1971.]
Several studies of the refractive error of human eyes have found that large
amounts of oblique astigmatism are normally present in the peripheral visual
field,29, 52, 63 although the amount and type of astigmatism varies considerably
between individuals.50, 63 Early attempts to simultaneously model oblique
astigmatism and spherical aberration with the same model were unsuccessful.39,
41, 45, 88 The same is true for the general reduced-eye model in Fig. 4.2(b).
Regardless of whether the model is chosen to have large amounts of spherical
aberration (p=1) or zero spherical aberration (p=0.437), the model over-estimates
the oblique astigmatism measured in the population of human eyes.89
However, the model is easily brought into agreement with the empirical data
simply by adjusting the axial location of the pupil slightly. This maneuver works
because axial location of the entrance pupil controls the obliquity of the chief rays
from off-axis object points and thus the degree of astigmatism. The exact
location of the pupil required to fit the astigmatism data depends slightly on the
p-value of the refracting surface. For the Indiana Eye (p=0.6), the pupil must be
shifted 0.8 mm closer to the nodal point to provide the optimum fit to the human
population data as shown in Fig. 4.13(b). With this justification, the Indiana Eye
has a pupil located 2.75 mm from the apex as shown in Fig. 4.2(c). In this
configuration, the Indiana Eye accounts simultaneously for the chromatic
aberration, spherical aberration, and oblique astigmatism of the typical human
eye. Although we have not specifically attempted to reproduce the various
types of oblique astigmatism described in the literature (e.g. myopic, hyperopic,
mixed),25, 29 we anticipate that these various forms could be modeled by
modifying the retinal contour of the Indiana Eye to have an aspheric shape.
The Indiana Eye is a wide-angle model which may be used to compute the
non-linear projection of object space onto the retina. Following Drasdo and
Fowler 24 we assumed a spherical retina with 11 mm radius of curvature and
used ray tracing to compute the intersection of the chief ray with the retinal
sphere as a function of field angle. The distance from this point of intersection to
the optic axis along the retinal surface of the Indiana Eye model is compared in
Fig. 4.14(a) with the widely quoted results of Drasdo and Fowler for their 3-
surface aspheric model. This comparison indicates that the Indiana Eye model
has less compression of the peripheral field than does the Drasdo & Fowler
model. Both models have less compression than that of the single human eye
investigated by Frisén and Schöldström.30 As far as we are aware, this latter
study using photocoagulation lesions to mark the retina at known perimetric
angles prior to surgical enucleation is the only direct measurement of the visual
field projection in humans. Although the Indiana Eye does not fit these limited
experimental data as well in the far periphery, the projection of visual space in
the reduced-eye model depends upon the shape factor of the refracting surface,
axial location of the pupil, and radius of curvature of the retinal sphere. Various
combinations of these parameters can produce a range of mapping functions
which includes all those shown in Fig. 4.14(a). Thus, in applications for which the
wide-angle projection of the visual field onto the retina is of importance (e.g.
clinical imaging of peripheral retina), the general reduced eye represents a
flexible tool for examining the impact of eye parameters on the non-linear
projection of visual space onto the retina. However, currently there is
insufficient evidence to warrant adjusting the Indiana Eye to more accurately
represent the projection of visual space in the average eye.
[Graphs: (a) retinal distance (0-25 mm) and (b) retinal magnification
(0.15-0.30 mm/deg) vs. field angle (0-80°), for the Indiana Eye, Drasdo
& Fowler (1974), and Frisén & Schöldström (1977).]
Figure 4.14. Retinal projection of the visual field in the Indiana Eye
model. (a) Distance from an eccentric image to the visual axis, measured
along the curved retinal surface, as a function of field angle of the source.
(b) Retinal magnification factor for Indiana Eye across the visual field,
obtained by differentiating the curve in (a). Also shown are predictions of
a wide-angle model eye from Drasdo and Fowler (1974) and experimental
data from a human eye reported by Frisén and Schöldström.
between eye and instrument and their combined performance (see
chapter 5 of this book for further analysis of this issue).
[Diagrams: (a) refracting surface with chief-ray angles ω, φ, θ and
points N, S, L; (b) exit pupil, reference sphere (center at S), myopic
wavefront (center at G), paraxial focus, and image screen (retina);
points A, B, C, D, P, G, S and distances z, f′, ∆f′, f′D.]
Figure 4.15. Effect of chromatic aberration on images of gratings. (a) Transverse chromatic
aberration produces wavelength-dependent phase shifts in the components of a
polychromatic grating. (b) Longitudinal chromatic aberration is a focus error which induces
a wavefront aberration error.
In general, chromatic phase shifts due to TCA will affect the hue, saturation,
and brightness of the target. Consider, for example, the retinal image of a
grating target with two chromatic components, one with red and black bars, and
the other with green and black bars. Assuming the red and green bars have the
same luminance, the appearance of the target will depend dramatically upon
their relative spatial phase. When the components have the same phase, the
target will look like yellow and black bars, i.e. a luminance pattern with no
chromatic variation. When the components have the opposite phase, however,
the target will look like red and green bars, i.e. a chromatic pattern with no
luminance variation. It follows, therefore, that the eye's TCA has the potential to
introduce luminance artifacts into chromatic targets, and chromatic artifacts into
luminance targets, both of which can have important visual consequences. 9, 12, 98,
100, 102
Here we are concerned solely with the effect of phase shift on luminance
contrast so that we may compute the luminance OTF for polychromatic objects.
The situation is complicated somewhat by the fact that chromatic errors of focus
will differentially affect the image contrast of each chromatic component of
polychromatic targets. Nevertheless, we may calculate the polychromatic OTF
for arbitrary field angle ω by taking into account the separate effects of LCA on
contrast, and TCA on phase, as described above for the Chromatic Eye model.
The geometry for determining the wave aberration function W(x,y,λ) for the
reduced eye is shown in Fig. 4.15(b). Our goal is to find the optical path length
W = n′·AC in terms of the longitudinal focal shift ∆f′ = SG and the half-angle θ
subtended by the exit pupil at the image screen. For the case of simple defocus,
Hopkins34 showed that if one accepts the approximation that the distance
AD = AG then for any given wavelength
The axial location z of the pupil affects angle θ because a given pupil will subtend
a larger angle if it shifts towards the retina. To account for this effect it is
convenient to scale the pupil radius a by the factor f′D/(f′D − z) to create an
effective pupil radius ap in the principal plane tangent to the refracting surface at
its vertex. Then from the geometry of Fig. 4.15(b)
cos θ = [1 + tan²θ]^(−1/2)
      = [1 + (ap/f′D)²]^(−1/2)    (4.24)
W(x, y, λ) = (n′ ∆f′ ap² / (2 (f′D)²)) (x² + y²)    (4.25)
The wave aberration function given in eqn. 4.25 may be simplified further if it
is expressed in dioptric terms. Recall from section 4.4.3 that longitudinal
chromatic aberration is a wavelength-dependent focusing error E which is given
by the approximate formula
E(λ) = n′ ∆f′ / (f′D)²    (4.26)
W(x, y, λ) = E(λ) (ap²/2) (x² + y²)    (4.27)
Dimensionally, W will be in meters if E is in diopters and the
projected pupil radius is in meters. Note that, except for a
difference in notation, eqn. 4.27 is the same as eqn. 2.4 in chapter 2
of this book.
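Eqn. 4.27 is straightforward to evaluate numerically; a sketch (function name and sample values ours, not from the text):

```python
# Eqn. 4.27 for defocus alone: W = E * (ap**2 / 2) * (x**2 + y**2), with
# E in diopters, effective pupil radius ap in meters, and (x, y) taken as
# normalized pupil coordinates, so W comes out in meters.

def wave_aberration(x, y, E_diopters, ap_m):
    """Wavefront error (m) at normalized pupil position (x, y)."""
    return E_diopters * (ap_m**2 / 2.0) * (x**2 + y**2)

# At the pupil margin (x**2 + y**2 = 1), 1 D of defocus over a 2 mm
# effective pupil radius gives a wavefront error of 2 microns.
W_edge = wave_aberration(1.0, 0.0, 1.0, 2e-3)
```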
( )= sin . (4.29)
equivalent). Strictly, since magnification varies with wavelength, so
must image frequency. However the change in frequency is less
than 1% over the spectral range of 380-780 nm and therefore is
assumed to be constant locally. If L(x,λ) is the luminance profile (in
a direction x orthogonal to the bars of the grating) of a
monochromatic component of the image of a polychromatic grating
of unit contrast, then the profile of each spectral component of the
image is given by
where the weighting factor S(λ) is the mean luminance of the object
at wavelength λ, which takes into account the photopic spectral
sensitivity curve of the eye and the spectral content of the object,
and M(λ,ν) is the monochromatic modulation transfer function. The
luminance distribution of a polychromatic grating is found by
integrating L(x,λ) over wavelength to yield71
L(x) = L0 [1 + √(A² + B²) cos(2πνx − Φ)]    (4.31)

A(ν) = (1/L0) ∫ S(λ) M(λ,ν) cos(2πν δ(λ)) dλ,
B(ν) = (1/L0) ∫ S(λ) M(λ,ν) sin(2πν δ(λ)) dλ,    (4.32)
Φ(ν) = arctan(B/A)
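The integrals in eqn. 4.32 can be approximated by a sum over wavelength samples. The sketch below is ours: S, M, and the phase function are placeholder examples rather than data from the chapter, chosen so the sanity check is obvious (no phase shift and unit transfer give unit modulation):

```python
import math

# Numerical sketch of eqns. 4.31-4.32: the polychromatic modulation is
# sqrt(A**2 + B**2) and its phase is Phi = arctan(B/A), with each
# monochromatic component weighted by the source luminance S(lambda).
# Uniform wavelength spacing is assumed, so the sample spacing cancels
# in the normalization by L0.

def poly_otf(wavelengths_um, S, M, phase, nu):
    """Return (modulation, phase) of the polychromatic image of a grating
    of spatial frequency nu, summed over the given wavelength samples."""
    L0 = sum(S(w) for w in wavelengths_um)
    A = sum(S(w) * M(w, nu) * math.cos(phase(w, nu))
            for w in wavelengths_um) / L0
    B = sum(S(w) * M(w, nu) * math.sin(phase(w, nu))
            for w in wavelengths_um) / L0
    return math.hypot(A, B), math.atan2(B, A)

# Sanity check: flat spectrum, perfect transfer, no chromatic phase shift.
mod, phi = poly_otf(
    [0.45 + 0.01 * i for i in range(31)],   # 450-750 nm samples
    S=lambda w: 1.0,
    M=lambda w, nu: 1.0,
    phase=lambda w, nu: 0.0,
    nu=10.0,
)
```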
objects which are placed at 4 m or 2 m so as to defocus the images
by 0.25 D and 0.5 D, respectively. These results, which were
computed for a 2.5 mm pupil, show that the effect of LCA on the
polychromatic MTF is very nearly the same as the effect of 0.25 D
defocus on monochromatic light. In visual terms, this is a relatively
minor amount of defocus which is comparable in magnitude to the
depth of field of the human eye15 or to the residual refractive error
expected after routine spectacle correction. Given that optical
attenuation of image contrast described by an MTF should translate
directly into a loss of contrast sensitivity (because the contrast of a
test target would have to be boosted proportionally to compensate
for optical losses), these computational results suggest that the
visual loss of contrast sensitivity due to LCA is slight. These results
agree with experimental data for central vision gathered by
Campbell and Gubisch17 who reported less than 0.2 log unit
difference in contrast sensitivity for white and monochromatic
lights over a 10-40 cyc/deg range of spatial frequencies.
[Figure: Modulation transfer ratio vs. spatial frequency. One panel shows foveal (0°) curves for 0, 0.25, and 0.5 D of defocus; the other shows peripheral curves for 15°, 30°, and 45° field angles. Neural threshold curves are superimposed on both panels.]
replotted as the dotted curve (labeled "foveal neural threshold") in
Fig. 4.16(a). Through graphical analysis we can determine the
spatial frequency for which the image contrast of a target with 100%
object contrast will fall below threshold. This endpoint is given by
the intersection of the optical MTF and the retinal threshold curve.
For the white-light MTF shown the intersection is at 50 cyc/deg and
for the monochromatic MTF it is 55 cyc/deg. These predicted acuity
values agree almost exactly with the experimental values (52 vs. 55
cyc/deg for subject R.W.G.) reported by Campbell and Gubisch.17
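The graphical analysis described above amounts to locating a curve intersection, which is easy to automate. In this sketch both the optical MTF and the neural threshold curve are illustrative analytic stand-ins, not the measured curves of Fig. 4.16.

```python
import numpy as np

# Predicted acuity as the intersection of the optical MTF with the
# neural contrast-threshold curve. Both curves here are toy stand-ins.

nu = np.linspace(1.0, 80.0, 800)            # spatial frequency (cyc/deg)
mtf = np.exp(-nu / 18.0)                    # toy optical MTF
threshold = 0.005 * np.exp(nu / 25.0)       # toy neural threshold curve

# Endpoint: first frequency at which the image contrast of a
# 100%-contrast target falls below the neural threshold.
cutoff = float(nu[np.argmax(mtf < threshold)])
print(cutoff)                               # predicted acuity (cyc/deg)
```

With these illustrative curves the intersection lands in the mid-50s cyc/deg; the actual prediction depends entirely on the measured MTF and threshold data.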
accurate predictions depend upon the precise wavelength which is
in-focus on the retina and upon the degree of other off-axis
aberrations. Nevertheless, we may conclude that ocular chromatic
aberration accounts for most of the acuity loss suffered when
viewing through a displaced pupil.5 Similar predictions apply when
viewing polychromatic interference fringes or conventional targets
produced by a decentered visual stimulator seen in Maxwellian-view
(i.e. illumination source is imaged in the plane of the entrance pupil
of the eye).11, 72 We have experimentally verified these theoretical
predictions by showing that acuity drops threefold when the visual
stimulator is displaced 3 mm from the visual axis.11
[Figure 4.17: "Effect of Wavelength-in-Focus on MTF of Reduced Eye." Panel (a): MTF surface plotted against spatial frequency (0-60 c/d) and wavelength (450-650 nm) for a 3 mm pupil. Panel (b): polychromatic MTF computed for a 2800 K tungsten source.]
Figure 4.17. Use of Chromatic Eye model to evaluate the defocusing effect
of changing wavelength (a) and as a standard for comparison against
experimental measurements of polychromatic MTFs of human eyes (b).
psychophysical determinations of the polychromatic MTF by
Campbell and Green16 and Campbell and Gubisch.17
darker or lighter background, or discriminating a grating target
from a uniform background of the same mean luminance.
Performance on such tasks is usually quantified by the minimum
contrast in the test object required by an observer to perform the
task. However, the determining factor is whether the amount of
contrast on the retina exceeds the neural threshold for detection.
Since contrast in the retinal image is determined by the optical
quality of the imaging system, the optical limitations for contrast
detection tasks are fully described by the system MTF. For example,
if some optical instrument were to double the retinal contrast of a
given target over that obtained by the unaided eye, then the
minimum contrast of the target required for detection would be
halved.
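This reciprocal relationship can be stated in two lines of code; the numbers below are purely illustrative.

```python
# Minimum detectable object contrast scales inversely with the optical
# contrast gain delivered to the retina. Illustrative values only.

def detection_threshold(neural_threshold, mtf_gain):
    """Object contrast needed for retinal contrast
    (object contrast x modulation transfer) to reach neural threshold."""
    return neural_threshold / mtf_gain

unaided = detection_threshold(0.01, 0.5)  # eye alone passes 50% contrast
aided = detection_threshold(0.01, 1.0)    # instrument doubles retinal contrast
print(unaided, aided)                     # doubling the gain halves the threshold
```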
Although filtering models of contrast detection have been used
extensively to describe performance of human foveal vision, these
models have not always been generalized correctly to cover the
peripheral field.73 To adapt foveal models for peripheral vision, the
usual approach has been to broaden the filter's impulse response
function sufficiently to reduce the filter's low-pass cutoff frequency
to match the reduced resolving power of peripheral vision.
Unfortunately, this is an invalid approach which produces too much
filtering and predicts a cutoff frequency for detection much lower
than has been measured experimentally.79 These difficulties reflect
a failure to distinguish clearly between the visual tasks of contrast
detection and spatial resolution. Although the cutoff spatial
frequencies for these two tasks are nearly equal in normal foveal
vision, they can differ by as much as an order of magnitude in
peripheral vision. The underlying reason for this large difference in
performance levels is that a filtering-limited task like contrast
detection is limited by the size of neural apertures, whereas
resolution is sampling-limited and therefore is determined by the
spacing between neurons. Since the relationship between size and
spacing of neural apertures varies across the retina, it is
inappropriate to use information about spacing derived from
measurements of spatial resolution to set aperture size in a filtering
model. Instead, an anatomically correct approach would broaden
the filter's impulse response function in proportion to the diameter
of the entrance aperture of cone photoreceptors, or subsequent
neural stages, for any given location in the peripheral retina. To
illustrate these points further we present below a simplified model
of neural sampling of the retinal image, which is developed in greater
detail elsewhere.73
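The distinction can be made concrete with a back-of-the-envelope calculation. In the sketch below the aperture transfer function is approximated by that of a uniform circular aperture (first zero near 1.22/d), and the mid-peripheral values for aperture diameter and spacing are illustrative assumptions, not measurements from the text.

```python
import math

# Contrast detection is filter-limited by the neural aperture diameter d,
# whereas resolution is sampling-limited by the neural spacing s.
# The mid-peripheral numbers below are illustrative assumptions.

def detection_cutoff(d_arcmin):
    """Cutoff of a uniform circular aperture's transfer function,
    first zero ~1.22/d, converted from cyc/arcmin to cyc/deg."""
    return 1.22 / d_arcmin * 60.0

def nyquist_limit(s_arcmin):
    """Highest frequency sampled without aliasing, 1/(2s), in cyc/deg."""
    return 1.0 / (2.0 * s_arcmin) * 60.0

# Toy mid-peripheral values: aperture ~1.5 arcmin, spacing ~3 arcmin.
print(detection_cutoff(1.5))   # filter-limited detection cutoff (cyc/deg)
print(nyquist_limit(3.0))      # sampling-limited resolution cutoff (cyc/deg)
```

With these numbers detection survives to roughly 49 cyc/deg while resolution fails near 10 cyc/deg, reproducing the order-of-magnitude gap noted above.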
not as sensitive as rod-based vision, the central fovea is blind to dim
lights (e.g. faint stars) that are clearly visible when viewed
indirectly. At a radial distance of about 0.1-0.2 mm along the
retinal surface (≈ 0.35-0.7° field angle) from the foveal center, rods
appear and in the peripheral retina rods are far more numerous
than cones.22, 61 Each photoreceptor integrates the light flux
entering the cell through its own aperture which, for foveal cones, is
about 2.5 µm in diameter on the retina or 0.5 arcmin of visual
angle.22, 49 Where rods and cones are present in equal density
(0.4-0.5 mm from the foveal center, ≈ 1.4-1.8°) cone apertures are about
double their foveal diameter and about three times larger than rods.
Although rod and cone diameters grow slightly with distance from
the fovea, the most dramatic change in neural organization is the
increase in spacing between cones and the filling in of gaps by large
numbers of rods. For example, in the mid-periphery (30° field
angle) cones are about three times larger than rods, which are now
about the same diameter as foveal cones, and the spacing between
cones is about equal to their diameter. Consequently cones occupy
only about 30% of the retinal surface and the density of rods is
about 30 times that of cones. Given this arrangement of the
photoreceptor mosaic, we may characterize the first neural stage of
the visual system as a sampling process by which a continuous
optical image on the retina is transduced by two inter-digitated
arrays of sampling apertures. The cone array supports photopic
(daylight) vision and the rod array supports scotopic (night) vision.
In either case, the result is a discrete array of neural signals which
we call a neural image.
[Figure: Schematic photoreceptor mosaic, showing "On" and "Off" neural pathways converging on the optic nerve.]
Although the entrance apertures of photoreceptors do not
physically overlap on the retinal surface, it is often useful to think
of the cone aperture as being projected back into object space
where it can be compared with the dimensions of visual targets.
This back-projection can be accomplished mathematically by
convolving the optical point-spread function of the eye with the
uniformly-weighted aperture function of the cone. (In taking this
approach we are ignoring the effects of diffraction at the cone
aperture, which would increase the cone aperture still further.) The
result is a spatial weighting function called the receptive field of the
cone. Since foveal cones are tightly packed on the retinal surface,
and since the effect of the eye's optical system is to broaden and
blur the acceptance aperture of cones, the receptive fields of foveal
cones in object space must overlap to some degree. Furthermore,
the convolution result will be dominated by optics since the
minimum width of the optical point spread function (PSF) of the
well-corrected eye is greater than the aperture of foveal cones. Just
the opposite is true in the periphery where cones are widely spaced
and larger in diameter than the optical PSF, provided off-axis
astigmatism and focusing errors are corrected with spectacle lenses.
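The back-projection described above is a convolution, which can be sketched in one dimension as follows. The PSF and aperture widths are illustrative values only, and diffraction at the cone aperture is ignored, as in the text.

```python
import numpy as np

# The receptive field of a foveal cone in object space: the optical
# point-spread function (PSF) convolved with the cone's uniformly
# weighted entrance aperture. 1-D profiles in arcmin; widths are
# illustrative, and diffraction at the cone aperture is ignored.

x = np.linspace(-5.0, 5.0, 1001)        # visual angle (arcmin)
dx = x[1] - x[0]

psf = np.exp(-x**2 / (2 * 0.4**2))      # toy optical PSF (~1 arcmin FWHM)
psf /= psf.sum() * dx                   # normalize to unit area

aperture = (np.abs(x) <= 0.25).astype(float)   # 0.5-arcmin cone aperture
aperture /= aperture.sum() * dx

rf = np.convolve(psf, aperture, mode="same") * dx   # receptive-field profile

def fwhm(profile):
    """Full width at half maximum, in arcmin."""
    return dx * np.count_nonzero(profile > 0.5 * profile.max())

# Optics dominates: the receptive field is broader than either factor alone.
print(fwhm(psf), fwhm(aperture), fwhm(rf))
```

Because the toy PSF is wider than the cone aperture, the convolution is dominated by the optics, mirroring the foveal case described above; in the periphery the relative widths reverse.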
The neural images encoded by the rod and cone mosaics are
transmitted from eye to brain over an optic nerve which, in humans,
contains roughly one million individual fibers per eye. Each fiber is
an outgrowth of a third order retinal neuron called a ganglion cell.
It is a general feature of the vertebrate retina that ganglion cells are
functionally connected to many rods and cones by means of
intermediate, second order neurons called bipolar cells. As a result,
a given ganglion cell typically responds to light falling over a
relatively large receptive field covering numerous rods and cones.
Neighboring ganglion cells may receive input from the same
receptor, which implies that ganglion cell receptive fields may
physically overlap. Thus, in general, the mapping from
photoreceptors to optic nerve fibers is both many-to-one and
one-to-many. The net result, however, is a significant degree of image
compression since the human eye contains about 5 times more
cones, and about 100 times more rods, than optic nerve fibers.21, 22
For this reason the optic nerve is often described as an information
bottleneck through which the neural image must pass before
arriving at visual centers of the brain where vast numbers of
neurons are available for extensive visual processing.
It would be a gross oversimplification to suppose that the array
of retinal ganglion cells forms a homogeneous population of neurons.
In fact, ganglion cells fall into a dozen or more physiological and
anatomical classes, each of which looks at the retinal image through
a unique combination of spatial, temporal, and chromatic filters.
Each class of ganglion cell then delivers that filtered neural image
via the optic nerve to a unique nucleus of cells within the brain
specialized to perform some aspect of either visually-controlled
motor-behavior (e.g. accommodation, pupil constriction, eye
movements, body posture, etc.) or visual perception (e.g. motion,
color, form, etc.). Different functional classes thus represent
distinct sub-populations of ganglion cells which exist in parallel to
extract different kinds of biologically useful information from the
retinal image.
individual cone. Incidentally, the cone population which drives the
P-system consists of two subtypes with slightly different spectral
sensitivities. Since foveal ganglion cells are functionally connected
to a single cone, the ganglion cell will inherit the cone's spectral
selectivity, thereby preserving chromatic signals necessary for color
vision. In peripheral retina, P-ganglion cells may pool signals
indiscriminately from different cone types, thus diminishing our
ability to distinguish colors.
cyc/deg. This is also a very high spatial frequency which rivals the
detection cutoff for normal foveal vision and is an order of
magnitude beyond the resolution limit in the mid-periphery.
Nevertheless, the prediction has been verified using interference
fringes as a visual stimulus.79 A slightly lower cutoff frequency for
contrast detection is obtained under natural viewing conditions with
refractive errors carefully corrected, indicating that optical
attenuation of the eye sets a lower limit to contrast detection than
does neural filtering in peripheral vision, just as it does in central
vision.
that the resolution task performed by observers is sampling-limited,
rather than contrast limited. In other words, it must be
demonstrated that targets beyond the resolution limit remain visible
as aliases of the actual target. Experimental confirmation of
perceptual aliasing was obtained initially for central95, parafoveal97
and peripheral vision79 using interference fringe stimuli, and similar
results have been obtained subsequently using natural viewing of
parafoveal and peripheral targets.2-4, 68, 78
[Figure 4.19: Cutoff spatial frequency (cyc/deg, logarithmic axis, 1-50) vs. visual eccentricity (0-40 deg) for detection and resolution tasks, under natural viewing and interferometric viewing. The region between the detection and resolution limits is the aliasing zone; its lower boundary follows the retinal ganglion cell (RGC) Nyquist limit.]
(contrast detection, pattern resolution), for two different types of
visual targets (interference fringes, sinusoidal gratings displayed on
a computer monitor with the eye's refractive error corrected by
spectacle lenses), at various locations along the horizontal nasal
meridian of the visual field.77, 79 These results show that for the
resolution task, cutoff spatial frequency was the same regardless of
whether the visual stimulus was imaged on the retina by the eye's
optical system (natural view) or produced directly on the retina as
high-contrast, interference fringes. This evidence supports our
conclusion that, for a well-focused eye, pattern resolution is limited
by the ambiguity of aliasing caused by undersampling, rather than
by contrast attenuation due to optical or neural filtering. Aliasing
occurs for frequencies just above the resolution limit, so the
triangles in Fig. 4.19 also mark the lower limit to the aliasing
portion of the spatial frequency spectrum. This lower boundary of
the aliasing zone is accurately predicted by the Nyquist limit
calculated for human P-ganglion cells.21
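The Nyquist limit for a sampling array can be computed from its local density. The sketch below assumes a locally triangular (hexagonal) lattice, for which density D and spacing s are related by D = 2/(√3·s²) and the Nyquist limit is 1/(√3·s); the density value is illustrative, not taken from ref. 21.

```python
import math

# Nyquist limit of a triangular (hexagonal) sampling lattice from its
# density D (samples per square degree). The density used in the example
# is an illustrative mid-peripheral value, not data from ref. 21.

def nyquist_from_density(D):
    """Nyquist limit (cyc/deg) for a triangular lattice of density D."""
    s = math.sqrt(2.0 / (math.sqrt(3.0) * D))   # center-to-center spacing (deg)
    return 1.0 / (math.sqrt(3.0) * s)

# Example: ~800 ganglion cells per square degree (toy value).
print(nyquist_from_density(800.0))   # Nyquist limit in cyc/deg
```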
4.6.3 Optical vs. Neural Limits to Visual Performance
movement. For example, the visual control of posture, locomotion,
and head and eye movements is largely governed by motor
mechanisms sensitive to peripheral stimulation.36, 47 Many of these
functions of peripheral vision are thought of as reflex-like actions
which, although they can be placed under voluntary control, largely
work in an "automatic-pilot" mode with minimal demands for
conscious attention. This suggests that information regarding body
attitude, self-motion through the environment, and moving objects
is ideally suited for peripheral display, since such a strategy
matches the information to be displayed with the natural ability of
the peripheral visual system to extract such information. The
danger, however, is that retinal undersampling in the periphery can
lead to erroneous perception of space, motion, or depth which may
have unintended or undesirable consequences.
produced by pupil decentration has a profound effect on
polychromatic image quality, visual resolution, and contrast
sensitivity.
ACKNOWLEDGMENTS
REFERENCES
1. Akerman, A. and Kinzly, R.E., (1979). Predicting aircraft
detectability. Human Fact., 21, 277-291.
5. Artal, P., Marcos, S., Iglesias, I. and Green, D.G., (1996). Optical
modulation transfer and contrast sensitivity with decentered
small pupils in the human eye. Vision Res., 36, 3575-3586.
6. Banks, M.S., Geisler, W.S. and Bennett, P.J., (1987). The physical
limits of grating visibility. Vision Res., 27, 1915-1924.
8. Bennett, A.G. and Rabbetts, R.B., Clinical Visual Optics, 2nd ed.
(Butterworths, London, 1989).
9. Bradley, A., (1992). Perceptual manifestations of imperfect
optics in the human eye: attempts to correct for ocular
chromatic aberration. Optom. Vis. Sci., 69, 515-521.
11. Bradley, A., Thibos, L.N. and Still, D.L., (1990). Visual acuity
measured with clinical Maxwellian-view systems: effects of
beam entry location. Optom. Vis. Sci., 67, 811-817.
13. Bradley, A., Zhang, X.X. and Thibos, L.N., (1991). Achromatizing
the human eye. Optom. Vis. Sci., 68, 608-616.
14. Burton, G.J. and Haig, N.D., (1984). Effects of the Seidel
aberrations on visual target discrimination. J. Opt. Soc. Am. A,
1, 373-385.
16. Campbell, F.W. and Green, D.G., (1965). Optical and retinal
factors affecting visual resolution. J. Physiol., 181, 576-593.
21. Curcio, C.A. and Allen, K.A., (1990). Topography of ganglion
cells in human retina. J. comp. Neurol., 300, 5-25.
22. Curcio, C.A., Sloan, K.R., Kalina, R.E. and Hendrickson, A.E.,
(1990). Human photoreceptor topography. J. comp. Neurol.,
292, 497-523.
27. Dunne, M.C.M., Barnes, D.A. and Mission, (1991). Effect of iris
displacement on oblique astigmatism in aphakic eyes. Optom.
Vis. Sci., 68, 957-959.
28. Emsley, H.H., Visual Optics, 5th ed. (Hatton Press Ltd, London,
1952).
29. Ferree, C.E., Rand, G. and Hardy, C., (1931). Refraction for the
peripheral field of vision. Arch. Ophthalmol., 5, 717-731.
31. Green, D.G., (1967). Visual resolution when light enters the eye
through different parts of the pupil. J. Physiol., 190, 583-593.
32. Gullstrand, A., Appendix II.3: The optical system of the eye, in
Helmholtz, H. von, Physiological Optics, English translation ed.
J.P.C. Southall (Optical Society of America, Washington, D.C.,
1924; original 1909), 350-358.
33. Hartridge, H., (1947). The visual perception of fine detail.
Philos. Trans. R. Soc. Lond. B, 232, 519-671.
36. Howard, I., The Perception of Posture, Self motion, and the
Visual Vertical, in Handbook of Perception and Human
Performance, ed. K.R. Boff, L. Kaufman and J.P. Thomas (John
Wiley & Sons, New York, 1986), Ch. 18.
37. Howarth, P.A., Zhang, X., Bradley, A., Still, D.L. and Thibos, L.N.,
(1988). Does the chromatic aberration of the eye vary with age?
J. Opt. Soc. Am. A, 5, 2087-2092.
41. Le Grand, Y., Form and Space Vision, rev. ed., ed. G.G. Heath and
M. Millodot (Indiana University Press, Bloomington, 1967).
43. Liou, H.L. and Brennan, N.A., (1996). The prediction of spherical
aberration with schematic eyes. Ophthalmic. Physiol. Opt., 16,
348-354.
46. Lyons, K., Mouroulis, P. and Cheng, X., (1996). The effect of
instrumental spherical aberration on visual image quality. J.
Opt. Soc. Am. A, 13, 193-205.
49. Miller, W.H. and Bernard, G.D., (1983). Averaging over the
foveal receptor aperture curtails aliasing. Vision Res., 23, 1365-
1369.
55. Newton, I., The Optical Papers of Isaac Newton, Vol. 1: The
Optical Lectures 1670-1672, ed. A.E. Shapiro (Cambridge
University Press, Cambridge, 1984).
59. Peli, E., (1990). Contrast in complex images. J. Opt. Soc. Am. A,
7, 2032-2040.
61. Polyak, S.L., The Retina, (Univ. Chicago Press, Chicago, 1941).
65. Rynders, M.C., Lidkea, B.A., Chisholm, W.J. and Thibos, L.N.,
(1995). Statistical distribution of foveal transverse chromatic
aberration, pupil centration, and angle psi in a population of
young adult eyes. J. Opt. Soc. Am. A, 12, 2348-2357.
67. Sliney, D. and Wolbarsht, M., Safety with Lasers and Other
Optical Sources, (Plenum Press, New York, 1980).
68. Smith, R.A. and Cass, P.F., (1987). Aliasing in the parafovea with
incoherent light. J. Opt. Soc. Am. A, 4, 1530-1534.
69. Snyder, A.W., Bossomaier, T.R.J. and Hughes, A., (1986). Optical
image quality and the cone mosaic. Science, 231, 499-501.
73. Thibos, L.N. and Bradley, A., Modeling off-axis vision - II: the
effect of spatial filtering and sampling by retinal neurons, in
Vision Models for Target Detection and Recognition, ed. E. Peli
(World Scientific Press, Singapore, 1995), 338-379.
75. Thibos, L.N., Bradley, A., Still, D.L., Zhang, X. and Howarth, P.A.,
(1990). Theory and measurement of ocular chromatic
aberration. Vision Res., 30, 33-49.
76. Thibos, L.N., Bradley, A. and Zhang, X.X., (1991). Effect of ocular
chromatic aberration on monocular visual performance.
Optom. Vis. Sci., 68, 599-607.
77. Thibos, L.N., Cheney, F.E. and Walsh, D.J., (1987). Retinal limits
to the detection and resolution of gratings. J. Opt. Soc. Am. A,
4, 1524-1529.
78. Thibos, L.N., Still, D.L. and Bradley, A., (1996). Characterization
of spatial aliasing and contrast sensitivity in peripheral vision.
Vision Res., 36, 249-258.
79. Thibos, L.N., Walsh, D.J. and Cheney, F.E., (1987). Vision beyond
the resolution limit: aliasing in the periphery. Vision Res., 27,
2193-2197.
80. Thibos, L.N., Ye, M., Zhang, X. and Bradley, A., (1992). The
chromatic eye: a new reduced-eye model of ocular chromatic
aberration in humans. Appl. Opt., 31, 3594-3600.
81. Thibos, L.N., Ye, M., Zhang, X. and Bradley, A., (1997). Spherical
aberration of the reduced schematic eye with elliptical
refracting surface. Optom. Vis. Sci., (in press).
84. Villegas, E.R., Carretero, L. and Fimia, A., (1996). Le Grand eye
for the study of ocular chromatic aberration. Ophthalmic.
Physiol. Opt., 16, 528-531.
86. Vos, J.J. and van Meeteren, A., (1991). PHIND: an analytical
model to predict acquisition distance with image intensifiers.
Appl. Opt., 30, 958-966.
87. Walsh, G., Charman, W.N. and Howland, H.C., (1984). Objective
technique for the determination of monochromatic aberrations
of the human eye. J. Opt. Soc. Am. A, 1, 987-992.
90. Wang, Y.Z., Thibos, L.N. and Bradley, A., (1996). Undersampling
produces non-veridical motion perception, but not necessarily
motion reversal, in peripheral vision. Vision Res., 36, 1737-
1744.
91. Wang, Y.Z., Thibos, L.N. and Bradley, A., (1997). Effects of
refractive error on detection acuity and resolution acuity in
peripheral vision. Invest. Ophthal. Vis. Sci., (in press).
93. Wässle, H., Grünert, U., Martin, P. and Boycott, B.B., (1994).
Immunocytochemical characterization and spatial distribution
of midget bipolar cells in the macaque monkey retina. Vision
Res., 34, 561-579.
95. Williams, D.R., (1985). Aliasing in human foveal vision. Vision
Res., 25, 195-205.
96. Williams, D.R., Brainard, D.H., McMahon, M.J. and Navarro, R.,
(1994). Double pass and interferometric measures of the
optical quality of the eye. J. Opt. Soc. Am. A, 11, 3123-3135.
97. Williams, D.R. and Coletta, N.J., (1987). Cone spacing and the
visual resolution limit. J. Opt. Soc. Am. A, 4, 1514-1523.
98. Winn, B., Bradley, A., Strang, N.C., McGraw, P.V. and Thibos,
L.N., (1995). Reversals of the colour-depth illusion explained by
ocular chromatic aberration. Vision Res., 35, 2675-2684.
100. Ye, M., Bradley, A., Thibos, L.N. and Zhang, X., (1991).
Interocular differences in transverse chromatic aberration
determine chromosteropsis for small pupils. Vision Res., 31,
1787-1796.
101. Ye, M., Bradley, A., Thibos, L.N. and Zhang, X.X., (1992). The
effect of pupil size on chromostereopsis and chromatic
diplopia: Interaction between the Stiles-Crawford effect and
chromatic aberrations. Vision Res., 32, 2121-2128.
104. Zhang, X., Thibos, L.N. and Bradley, A., (1991). Relation
between the chromatic difference of refraction and the
chromatic difference of magnification for the reduced eye.
Optom. Vis. Sci., 68, 456-458.