
Image and Vision Computing 29 (2011) 64–77

Contents lists available at ScienceDirect

Image and Vision Computing


journal homepage: www.elsevier.com/locate/imavis

A spatial variant approach for vergence control in complex scenes


Xuejie Zhang, Leng Phuan Tay ⁎
School of Computer Engineering, Nanyang Technological University, Blk N4, Nanyang Avenue, Singapore 639798

Article info

Article history:
Received 13 July 2009
Received in revised form 10 May 2010
Accepted 5 August 2010

Keywords:
Vergence control
Log polar transformation
Image pyramid
Disparity estimation

Abstract

The flexibility in primate vision that utilizes active binocular vision is unparalleled even with modern fixed-stereo-based vision systems. However, to follow the path of active binocular vision, the difficulty of attaining fixative capabilities is of primary concern. This paper presents a binocular vergence model that utilizes the retino-cortical log polar mapping in the primate vision system. Individual images of the binocular pair were converted to multi-resolution pyramids bearing a coarse-to-fine architecture (low resolution to high resolution) and disparity estimation on these pyramidal resolutions was performed using normalized cross correlation on the log polar images. The model was deployed on an actual binocular vergence system with independent pan-tilt controls for each camera and the system was able to robustly verge on objects even in cluttered environments with real time performance. The paper also presents experimental results of the system functioning under unbalanced contrast exposures between the two cameras. The results proved favorable for real world robotic vision applications where noise is prevalent. The proposed vergence control model was also compared with a standard window based Cartesian stereo matching method and showed superior performance.

© 2010 Elsevier B.V. All rights reserved.

⁎ Corresponding author. Tel.: +65 6790 4604/6790 6965; fax: +65 6792 6559.
E-mail addresses: zh0002ie@e.ntu.edu.sg (X. Zhang), aslptay@ntu.edu.sg (L.P. Tay).

0262-8856/$ – see front matter © 2010 Elsevier B.V. All rights reserved.
doi:10.1016/j.imavis.2010.08.005

1. Introduction

Active binocular vision systems mimic primate biological vision by using motorized cameras to emulate the extra-ocular muscles attached to the eyeball. While other methods such as fixed-camera stereopsis can retrieve the needed depth information, active binocular systems can potentially be more robust, removing the delicate reliance on lens parameters to determine depth information. Active binocular systems can also provide a wider field of vision with fewer concerns over image distortions at the lens periphery. From the psychological aspect of communication, human body-language interactions are usually complemented with visual fixations that function as cognitive cues to project one's focus of attention. It is from this perspective that we lean our emphasis on vergence control, the pillar of fixations.

The comprehensive primate vision system includes several distinct subsystems that govern processes such as the attention mechanism, binocular fusion, saccade and smooth pursuit. Without vergence, many of these primal functions will be impaired. Vergence is the process of directing both eyes to foveate singularly on a target, providing visual information for 3D perception. The combination of vergence and other eye movements such as saccade allows the visual system to examine the scene at a fast speed through multiple fixation points [1]. It is thus believed that this process of multiple saccadic observations reduces the processing load on the early visual areas and, through organized pathways, the brain is able to multiplex thought processes.

It has been discovered that the retino-cortical mapping in the primate vision system can be modeled by a log polar geometry [2] and this motivated investigations to determine any apparent advantage that can be derived from this geometric formation. A vergence control model is proposed using a coarse-to-fine disparity estimation in the log polar space. This computational architecture utilizes the inherent foveal magnification properties of the log polar transformation to systematically focus towards the foveal information, eventually providing a stable vergence. The proposed method can be used reliably for real time vergence control. This paper discusses the rationale for the system implementation and further illustrates the performance of the proposed vergence control model through the experimental results obtained. The understanding of three essential components is necessary for conceptualization of the robust vergence. These include the significance of the log polar transformation, the log polar based normalized cross correlation and the coarse-to-fine image pyramidal search strategy. Section 2 is a review of some existing vergence control methods. Section 3 presents the proposed log polar correlation model using the image pyramid. Section 4 presents experimental results and Section 5 concludes the paper.

2. Strategic overview of the proposed vergence control
Primate raw binocular images consist of two slightly offset perspectives which reveal the presence of a mismatched double image when overlaid across each other. 3D perception is achieved when these images are processed in the binocular fusion system located higher in the visual cortex [3]. Such fusion of perspectives within the cortex has been actively studied by Grossberg et al. in 3D LAMINART [4].

When a binocular system verges on an object, the spatial reference position of the object can be geometrically derived from the camera baseline and motor angles, providing a spatial referencing for the point of fixation with respect to the camera platform. A continuous observation of the environment, as discovered by Yarbus in his study of human saccades [1], provides means to observe the environment over a quick sequence of short duration micro saccades in the order of milliseconds.

There are four sources of vergence in the human vision system: binocular disparity, accommodation, tonic, and proximal vergence [5]. Most existing vergence control models in the literature, such as those presented in Refs. [6–9], belong in the category of binocular disparity estimation. Disparity refers to the positional difference between corresponding points in the left and right images. Disparity can be used as a motor differential signal to verge the cameras.

There are many disparity estimation models based on biological and physiological studies of the primate vision system. The disparity tuning responses of binocular simple cells and complex cells have been investigated and the disparity energy model was proposed to simulate the responses of binocular visual neurons [7,10–18]. The spatial organization of visual information in the visual cortex was also studied and utilized for the design of disparity estimation mechanisms. In the primary visual cortex, the visual information from the left eye and the right eye is interlaced as sliced image segments, forming a structure called the ocular dominance columns. This gave rise to the cepstrum based disparity estimation model proposed by Yeshurun and Schwartz [8,10,19].

Apart from the biologically inspired methods, a direct way of establishing correspondence is to use explicit matching to find correspondences in the binocular images. One such method uses the correlation based approach to search for matching regions in the binocular pair of images [20–26]. Despite the performance of correlation measures, a major issue of searching based disparity estimation methods is the working range limit. When the disparity existing in the image exceeds the working range, the algorithms return erroneous disparity results. This occurs especially when the fixation point is in the process of switching from a near focus to a significantly distant position. An approach to solve this is to use a multi-resolution image pyramid and process from coarser to finer levels [6–8,26]. The multi-resolution image pyramid increases the working range but is still limited by the pyramid levels and the localized range at each level. Although exhaustive search can be employed to resolve the range issue, it is not efficient and would likely introduce noise to the estimation due to the increase in likelihood of uncorrelated visual information present in the perspective views. Inspired by the visual mapping geometry in the visual cortex, we propose to enlarge the working range through the use of spatial variant transforms of the original image, such as the log polar transformation [2,27–30]. In this transformation, a large portion of the primary visual cortex is mapped to the small, central portion of the visual field. This is sometimes referenced as a cortical magnification. This magnification provides a higher resolution in the fovea of the retina and is considered important for many vision components such as visual attention and vergence control. Compared with Cartesian images, log polar images magnify the image center and diminish the periphery of the original image, creating a natural bias for information residing within the fovea. This creates a natural possibility of estimating the disparity in cluttered environments as there is a foveal focusing priority over the periphery. The spatial variant transformation is expected to show advantages over the alternative Cartesian images for the vergence control task since the latter attempts to resolve the entire image with an equal weight. The log polar transformation has been used widely and successfully for disparity estimation and vergence control [20,21,23,24]. In this paper we combine the pyramidal approach and the log polar transformation to build a robust and efficient model for vergence control which can survive cluttered environments. The left and right images are initially converted into image pyramids. The search for the corresponding position is carried out through the pyramid architecture, beginning from the coarser and working towards the finer levels. At each level, the reference image (the left camera image) is transformed to a log polar image with the origin at the reference position. The backward log polar transformation maps each log polar pixel to the nearest pixel in Cartesian space. At each candidate position in the candidate image (the right image), a log polar transformation is performed with this position as origin. Normalized cross correlation between the left reference log polar image and the right candidate log polar images is used to derive the disparity. The disparity at the image center for each pyramid level was used iteratively for adjustment of the vergence angle until the vergence of the binocular vision system was achieved.

3. The pyramidal log polar correlation model

Gravitating from binocular vergence being the primary building block for primate visual perception, the precise objective of this paper is to provide a robust and reliable vergence control system. The vergence control in this paper refers to the manipulation of two cameras to fixate on a single point of interest within a scene. Since our objective is to address the low level vision faculty, the point of fixation can therefore be any dynamically defined point that is visible in both camera images.

The system consists of a coarse-to-fine model with varying image resolutions, where the images from the different resolution layers resemble stacked pyramidal layers. On each layer, a normalized cross correlation of the log polar transformed images is used to determine disparity. The log polar image inherently empowers the cross correlation disparity estimator with a global abstraction for region matching. At the same time the log polar transformation magnifies the center of the image, creating an emphasis on the central region of fixation for the cross correlation computations. This combined disparity information from both the center and the extreme edges of the binocular image pair is used for vergence control.

As illustrated in Fig. 1, the system maintains two CCD cameras mounted on two independent pan-tilt control units. Each camera captures 1000 × 1000 resolution images with a 46° angle of view. In the proposed vergence control model, the 1000 × 1000 images were converted to 200 × 200 images and used for the disparity estimation. The baseline distance between the two cameras was set at 24 cm. Each pan-tilt control has two stepper motors for panning and tilting, which provides rigid and repeatable positioning. The motors have the capability to rotate at a speed of 300°/s with a resolution of 3.086 arc min (0.0514°).

3.1. Significance of the log polar transformation

Referencing the visual plane in the typical X–Y plane and setting the coordinates (x0, y0) at the middle of the image as the origin, the log polar transformation can be represented by Eq. (1).

\rho = \log_a \sqrt{(x - x_0)^2 + (y - y_0)^2}, \qquad \varphi = \tan^{-1}\!\left(\frac{y - y_0}{x - x_0}\right)    (1)

where ρ is the logarithm of the Euclidean distance between the Cartesian coordinates of a pixel (x, y) and the origin (x0, y0) and φ is the polar angle of the point (x, y).
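As a concrete illustration, a minimal numpy sketch of the forward mapping in Eq. (1) might look as follows; the function name and the handling of the foveal singularity at r = 0 are illustrative and not taken from the paper.

```python
import numpy as np

def log_polar_coords(x, y, x0, y0, base):
    """Forward mapping of Eq. (1): Cartesian pixel (x, y) -> (rho, phi).

    rho is the base-`base` logarithm of the distance to the origin (x0, y0);
    phi is the polar angle, wrapped here to [0, 2*pi).  The foveal pixel at
    r = 0 is undefined and in practice is handled by the sampling step.
    """
    r = max(np.hypot(x - x0, y - y0), 1e-9)
    rho = np.log(r) / np.log(base)            # log_a(r) via change of base
    phi = np.arctan2(y - y0, x - x0) % (2.0 * np.pi)
    return rho, phi
```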

Fig. 1. The binocular cameras and the pan-tilt controls.

Given the size of the original image as (W, H) and the size of the log polar image as (ρmax, φmax), the logarithmic base a can be determined by the original image size and the transformed log polar image size [31] according to Eq. (2).

a = \exp\!\left(\frac{\ln(r_{\max})}{\rho_{\max}}\right) = \exp\!\left(\frac{\ln\!\left(\min\left(W/2,\; H/2\right)\right)}{\rho_{\max}}\right)    (2)

where rmax is the maximum radius (distance from the origin) of the log polar pixels in the spatial domain. A more general treatment of this transformation can be found in Ref. [2].

An ideal log polar image would be one obtained from a log polar image sensor. However, this is not always available and requires a special manufacturing process at potentially much higher cost than the normal Cartesian CCD sensors. Therefore most existing studies into vergence control have relied on transforming normal Cartesian images to log polar images [20,23]. The transformation or mapping of corresponding pixels between Cartesian and log polar space is non-linear. There has been research on how to optimally transform a Cartesian image to a log polar image [32,33]. In this paper, for the purpose of simplicity, a log polar sampling method was adopted. We will show in Section 3.2.1 that the sampling method shows almost equivalent performance to a log polar bilinear interpolation method in disparity estimation. This sampling method ensures sufficient image quality while providing adequate performance with significantly reduced computational cost. The log polar sampling process requires a backward log polar transformation which can be pre-computed as a mapping table. The value of a pixel in the log polar image is set to the value of the nearest corresponding pixel in the Cartesian image. The backward log polar mapping was modeled using Eq. (3) and this provides an inverse transform of Eq. (1).

x = \begin{cases} +\sqrt{\dfrac{a^{2\rho}}{1 + \tan^2\varphi}} & \text{if } \varphi \in [0, \pi/2] \cup [3\pi/2, 2\pi] \\[2mm] -\sqrt{\dfrac{a^{2\rho}}{1 + \tan^2\varphi}} & \text{otherwise} \end{cases}, \qquad y = x \tan\varphi    (3)
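A minimal sketch of the pre-computed backward mapping of Eqs. (2)–(3) and the nearest-neighbour sampling described above is given below (numpy assumed; the names and the offset-table layout are illustrative). The table is computed once relative to the origin and then shifted to whatever fixation point is requested; samples falling outside the image are left at 0.

```python
import numpy as np

def log_polar_base(width, height, rho_max):
    """Eq. (2): logarithmic base a from the Cartesian and log polar sizes."""
    r_max = min(width / 2.0, height / 2.0)
    return np.exp(np.log(r_max) / rho_max)

def backward_table(rho_max, phi_max, base):
    """Pre-computed backward map: (rho, phi) -> Cartesian offset from the origin.

    Equivalent to the sign-split form of Eq. (3): the radius a**rho is simply
    projected onto the x and y axes and rounded to the nearest pixel.
    """
    rho = np.arange(rho_max)[:, None]
    phi = np.arange(phi_max)[None, :] * 2.0 * np.pi / phi_max
    r = base ** rho
    dx = np.rint(r * np.cos(phi)).astype(int)
    dy = np.rint(r * np.sin(phi)).astype(int)
    return dx, dy

def log_polar_sample(image, origin, dx, dy):
    """Nearest-neighbour log polar sampling of `image` around `origin` (x0, y0)."""
    h, w = image.shape
    x0, y0 = origin
    x, y = x0 + dx, y0 + dy
    valid = (x >= 0) & (x < w) & (y >= 0) & (y < h)
    out = np.zeros(dx.shape, dtype=float)    # out-of-image samples stay 0
    out[valid] = image[y[valid], x[valid]]
    return out

# Hypothetical sizes from the paper: a 200 x 200 frame mapped to 50 x 90.
# a = log_polar_base(200, 200, 50); dx, dy = backward_table(50, 90, a)
# lp = log_polar_sample(frame, (100, 100), dx, dy)
```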

Fig. 2. Log polar transformation on the Baboon image [34]: (a) original image of size 512 × 512 and (b) log polar image of size 200 × 360.

Fig. 2 illustrates the effects of the backward log polar transformation, which is similar to a spatial variant circular sampling of the input image. Using an image centered origin and starting from the positive x-axis, the backward log polar transformation samples points in an anticlockwise sweep direction. The resultant log polar transformed image magnifies the center, reducing the contributions from the image periphery. In such a transformation, all points in the original image are in some way represented in the transformed image, though in varying degrees of contribution. In Fig. 2, the red nose of the baboon appearing in the middle of the original image occupies about three quarters of the transformed log polar space, while the eye that is further from the original image center has a reduced representation and appears rather beady in the top right quadrant of the transformed image.

Such properties of the log polar transformation provide ideal conditions for object registration in paired perspective images. Objects that lie in the periphery are subjected to a many-to-one correspondence of the visual-cortical mapping and this compensates the typically adverse disparity experienced at the periphery by a factor proportional to the distance from the point of fixation. In essence, it reduces the contribution of the periphery regions, though not entirely omitting their presence. For pixels at the fovea, a one-to-many correspondence of the visual-cortical space results in a cortical magnification, thus providing a heavier emphasis on the fixation point of the image. Since the log polar transformation magnifies any disparity at the fovea, it provides a means for high resolution matching at the fovea. It is these two properties that this paper leverages to provide an efficient and effective means of binocular vergence through foveal-centric matching.

3.2. Normalized cross correlation in the log polar domain

Normalized cross correlation (NCC) has often been used in disparity estimation [35,36]. The NCC, expressed in Eq. (4), defines the cross correlation between two normalized image patches f(x, y) and g(x, y). Its typical application in the spatial domain has been successful through the use of a sliding window matching process, and this provides an effective stability in area-based stereo matching. The NCC method for disparity estimation was adopted and applied in the log polar domain, utilizing the cortical magnification property of the globally inclusive transformation to provide a better vergence [20,23].

\mathrm{NCC} = \frac{1}{n - 1} \sum_{x,y} \frac{\left(f(x,y) - \bar{f}\right)\left(g(x,y) - \bar{g}\right)}{\sigma_f \sigma_g}    (4)

Unlike the Cartesian domain transforms where matches can be derived through linear shifts of the image, NCC cannot be done by linear shifts and matching in the log polar domain. The correlation based method becomes more complex in the log polar domain because the matching window is non-linearly transformed. However, regardless of the shift, the unique matching positions will preserve the highest correlation [23]. It is thus expected that the transformed image of the sliding window will possess the highest correlation in the log polar domain only when the two perspectives coincide under the log polar transformation.
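A direct sketch of Eq. (4), as it is applied here to two equally sized log polar images (numpy assumed; the patches are flattened and z-scored before the sum):

```python
import numpy as np

def ncc(f, g):
    """Eq. (4): normalized cross correlation of two same-size (log polar) patches."""
    f = np.asarray(f, dtype=float).ravel()
    g = np.asarray(g, dtype=float).ravel()
    fz = (f - f.mean()) / f.std()            # assumes non-constant patches
    gz = (g - g.mean()) / g.std()
    return float(np.dot(fz, gz)) / (f.size - 1)
```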
To illustrate the effectiveness of using log polar images for correlation based disparity estimation, an experiment was conducted to plot the NCC against the ground truth disparity. Fig. 3(a) shows a sample pair of binocular images used in the experiment. The foveas of the left and right image reside on the same point in space, targeting the bottle. To derive a disparity tuning curve using the log polar transformation and normalized cross correlation, the left image was initially transformed into a log polar representation with the image center as the origin. The right camera was panned at 0.5° intervals to the left and right of the original position and at each position, the image was transformed to the corresponding log polar image and the NCC between the left log polar image and the right log polar image was calculated. The resolution of the original image pair was 200 × 200. The resolution of the log polar images was set at 25 × 45, 50 × 90, 100 × 180 and 200 × 360 respectively, to evaluate the effect of the log polar resolution on the NCC measure. The NCC values were plotted against the true disparity, shown in Fig. 3(b). There is an indication of high consistency, indicating that the estimation is stable and a low resolution log polar image can be used for effective vergence control. The center position at zero disparity exhibits the highest NCC, indicating that both image perspectives possess the maximum log polar correlation at this position of the 0.5°-interval sliding window. This illustrates the feasibility and efficacy of the log polar based NCC computations.

Subsequently, the resolution of the log polar images was fixed to 50 × 90 and the original binocular image pair was scaled at three ratios (1, 0.5, and 0.25). The disparity tuning curves were produced using the different resolutions of the Cartesian images and are shown in Fig. 3(c). The results showed that a reduction of the Cartesian image resolution affects the disparity tuning curves very little. This provides the possibility of designing a multi-level pyramidal algorithm for disparity estimation, which provides a basis for the pyramidal log polar correlation model presented in this paper.

As the log polar image is a cortical magnification of the original image, the correlation in log polar space can be classified as a weighted correlation in Cartesian space. Each Cartesian pixel is considered in the correlation measure associated with a weight factor. The weight for each pixel in the correlation measure is determined by the cortical magnification factor produced by the log polar transformation. This cortical magnification factor is defined as the number of pixels in log polar space that correspond to 1 pixel at a certain location in Cartesian space. This factor can be decomposed into two components: the magnification along the ρ-axis and along the φ-axis. In a continuous transformation, 1 pixel in the Cartesian domain at a radial distance r occupies a step from r − 0.5 to r + 0.5 along the direction of the ρ-axis. This interval corresponds to the following distance on the ρ-axis:

R_\rho = \log_a(r + 0.5) - \log_a(r - 0.5) \approx \frac{d\left(\log_a r\right)}{dr} = \frac{1}{r \ln a}    (5)

Along the φ-axis, one ring of pixels at radius r corresponds to φmax pixels in log polar space. Therefore the magnification ratio on the φ-axis is:

R_\varphi = \frac{\varphi_{\max}}{2\pi r}    (6)

The overall magnification is the product of these two components.

R = R_\rho R_\varphi = \frac{\varphi_{\max}}{2\pi \ln a} \cdot \frac{1}{r^2} = \frac{\varphi_{\max}}{2\pi \ln a} \cdot \frac{1}{(x - x_0)^2 + (y - y_0)^2}    (7)

The magnification factor is therefore inversely proportional to the squared distance from the origin of the log polar transformation. Assuming the system is fixating on a foreground object, the correlation measure can be considered as a summation of the correlation from the foreground region and the background region. The weight for the foreground region is significantly larger than that for the background, as the foreground is at the fovea of the input image for vergence control tasks. When the foreground regions of the binocular image pair match and produce a high correlation, the difference in the background contributes little to the correlation measure because of the exponential decrease of the magnification.
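The cortical magnification weighting of Eqs. (5)–(7) can be made concrete with a short sketch that evaluates the weight at every Cartesian pixel (illustrative only; the origin is taken at the image centre):

```python
import numpy as np

def magnification_map(width, height, phi_max, base):
    """Eq. (7): cortical magnification factor at each Cartesian pixel.

    The factor falls off with the squared distance from the transform origin,
    which is taken here as the image centre.
    """
    y, x = np.mgrid[0:height, 0:width]
    r2 = (x - width / 2.0) ** 2 + (y - height / 2.0) ** 2
    r2 = np.maximum(r2, 1.0)        # avoid the singularity at the exact origin
    return phi_max / (2.0 * np.pi * np.log(base) * r2)
```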

Fig. 3. (a) The binocular image pair, (b) disparity tuning curves with log polar images of different resolutions, (c) disparity tuning curves with log polar images of fixed resolution
(50 × 90) transformed from different resolutions of Cartesian images.

However, when the foreground regions do not fuse, both the foreground correlation and background correlation values are small. Thus the disparity tuning curve has a distinct peak only when the correct disparity is achieved. This significantly stabilizes the vergence on the foreground object, especially when the foreground is small and has large disparity changes with respect to the background. In Cartesian space, due to the even contribution from the whole image, vergence is directed to an intermediate depth, depending on the sizes of the foreground and background regions in the matching window.

3.2.1. Comparison with a log polar bilinear interpolation approach

In the proposed model, a log polar sampling method was used. Here we show that the sampling method provides almost equivalent performance to a bilinear interpolation method. Utilizing the binocular image series in Fig. 3(a), the transformed log polar images and disparity tuning curves are shown in Fig. 4. We can see that although the image quality was slightly degraded in the sampling approach, the disparity tuning curves were generally identical to those produced by bilinear interpolation.

Fig. 4. The disparity tuning curves with log polar images by (a) nearest neighbor sampling and (b) bilinear interpolation.

In a real time robot application, the backward log polar sampling reduces computation because it is merely a mapping process with a pre-computed map. Thus the backward log polar sampling approach was adopted.

3.2.2. Comparison with a Gaussian weighted approach

Similar to the log polar fovea magnification, a spatial variant weight such as a Gaussian weight can be applied to the correlation measure in Cartesian space to focus on the center of the image. Fig. 5 shows the comparison of the log polar magnification and the Gaussian magnification. The log polar transformation significantly magnifies the center of the image. The Gaussian weighted approach also diminishes the background, but in a smoother manner, and the magnification of the center is not as sharp as in the log polar approach.

To compare the backward log polar sampling approach with a Gaussian magnification approach, another experiment was conducted using the experimental images in Fig. 6. The two cameras were initially verged on a foreground object at 1.8 m depth in front of the cluttered background, which is at about 5 m depth. The right camera was subsequently panned to the left and right in the range of [−10°, 10°] at a step of 1°. Twenty-one pairs of binocular images were captured and used for generating the disparity tuning curve.

The Gaussian weighted approach is realized by biasing the original binocular image pair with a Gaussian magnification function and then computing the NCC between the weighted binocular pair, shown in Eq. (8).

\mathrm{GNCC} = \frac{1}{n - 1} \sum_{x,y} \frac{\left(f'(x,y) - \bar{f'}\right)\left(g'(x,y) - \bar{g'}\right)}{\sigma_{f'} \sigma_{g'}}    (8)

where f′ = f·G and g′ = g·G are the Gaussian weighted binocular images and G(x, y) is a Gaussian function centered at (x0, y0) with variance σ².

Fig. 7 shows the disparity tuning curves produced by the Gaussian magnification approach and the log polar approach. It can be seen that with a smaller or larger Gaussian width (σ), the disparity tuning curves were incorrect, lacking a peak at position 0. With σ = 40, the disparity tuning curve has a peak at the correct position. When σ was small (σ = 5), the disparity tuning curve experienced many confusing peaks. As σ tended towards 100, the peak of the tuning curve shifted to the background depth, indicated by the dashed vertical line. It is natural that a larger σ will attempt to match the whole image instead of the fovea region, while a small σ tends to follow local variations. As expected, the Gaussian weighted approach resembles the standard window based methods, with the standard deviation σ acting as the matching window size. The Gaussian magnification is therefore a smoother varying weight while the log polar magnification decreases much faster. In Fig. 7, the disparity tuning curves for different radial resolutions of the log polar images are quite consistent, indicating the robust and stable performance of the log polar approach for vergence control tasks in cluttered scenes.
3.2.3. Epipolar geometry

The searching range in a stereo matching method can be limited to a straight line by the epipolar geometry, given both the intrinsic and extrinsic parameters of the binocular system. In this system, the slope of the epipolar line of the center of the left image can be estimated given the camera configuration (baseline 0.24 m, focal length 8 mm, 6.45 μm square pixels). When the two cameras are verging at a distance of 1 m right in front of the platform, the epipolar line of the center of the left camera has a slope of 0.05, which is fairly small. Thus the vertical correspondence does not deviate too much from the horizontal axis, especially in the fovea region. This slope decreases with the increase of the verging distance or target depth. When the two cameras are parallel, the epipolar line is just a horizontal scan line and a horizontal direction search process is sufficient.

Considering the task of vergence control, we adopted a horizontal-then-vertical search approach instead of searching on the epipolar line. This is possible for the following reasons. Firstly, in a fixed-stereo system, the epipolar geometry can limit the searching range to a single line, hence reducing the problem from 2D to 1D and thus reducing computational cost. However, in an active vision system, the epipolar geometry constantly changes and each time the vergence angle is adjusted, the fundamental matrix for the epipolar geometry should be recalculated. In a real time system, the vergence is adjusted at frame rate and this will increase the computational load. A better approach is to constrain the search in an epipolar stripe region considering all the possible epipolar lines due to the verging operation [37]. Secondly, the binocular system may undergo vertical misalignment in the process of vergence control. In this case, the geometry is more complex and the estimation of the epipolar line is more difficult.

Fig. 5. The magnification of log polar and Gaussian weight function. Left column: the magnification (ρmax = 50, φmax = 90) along φ-axis, along ρ-axis, and the total log polar
magnification. Right column: the Gaussian magnification (σ = 10, 40, and 100).

Fig. 6. (a) The left view and (b) the right view. The right camera was panned to the left and right in the range of [−10°, 10°] at a step of 1°. The angle of view of the cameras is about 46°.

Thirdly, the fundamental matrix relies on the intrinsic parameters of the cameras. This is not flexible when the lenses or cameras are changed.

In the horizontal-then-vertical approach, the correspondence search is first applied on the horizontal direction. The vertical searching is started at the column where the horizontal searching gives the highest correlation. This is an approximation of the 2D searching and at the same time saves computation. A small vertical searching range is enough to cover the possible epipolar line constraints. Through iterative vergence control, the disparity at the center of the image can be minimized. We make sure that our vertical searching range covers the epipolar correspondence range.

Fig. 7. Disparity tuning curves by the Gaussian magnification approach (upper row) and the log polar approach (lower row). The pan angle of the right camera corresponding to the
foreground (depth = 1.8 m) is shown as a vertical solid line and the pan angle corresponding to the background (depth = 5 m) is shown as a vertical dashed line.

3.3. Coarse-to-fine disparity estimation using image pyramid

The coarse-to-fine (multi-resolution) search strategy is the third component in this paper that is essential for robust vergence. This strategy utilizes multi-level processing to enlarge the working range of the disparity estimation under a reduced computational complexity [7,38]. The coarse resolution results are used as an initial estimation of the foveal match and this funnels the search space into a smaller search region, directing the finer level search of the subsequent cascading process to a more specific region. In the proposed model, image pyramids are used on a binocular pair of perspective images to estimate disparity in a coarse-to-fine progressive manner. The original binocular image pair is sub-sampled to form a multi-level image pyramid through Eq. (9). The images are sub-sampled in a 2:1 ratio between consecutive levels of the pyramid. The lowest level (highest resolution) is level 0, which is the original image of size H × W, and the resolution for level n in the pyramid is H/2^n × W/2^n.

L_n(x, y) = L_0(x \cdot 2^n,\; y \cdot 2^n), \quad x \in \left[0,\; H/2^n - 1\right],\; y \in \left[0,\; W/2^n - 1\right]    (9)

A pyramid of 3 levels, level 0 to 2, as shown in Fig. 8 was used. The resolution at level 0 was 200 × 200. The resolutions of levels 1 and 2 were 100 × 100 and 50 × 50 respectively. The disparity estimation process proceeds from the coarsest resolution to the finest resolution. As each level is processed, the best NCC position of that level sets the next point of fixation for processing at the next level.

The log base a of the log polar transform is determined by the top level (coarsest) image size (HT, WT) and the transformed log polar image size (ρmax, φmax) using Eq. (10). The transformed log polar image covers a maximum inner circle of the top level image, shown in Fig. 8. In the experiments, the log base a was calculated from the top level image of size 50 × 50 and converted onto a 50 × 90 log polar image using the backward log polar transformation. The value of a is maintained for subsequent resolutions, implying that the area of pixel coverage is constant, but since each subsequent level increases the pixel resolution, the search for the point of fixation focuses only on the best match position obtained from the previous resolution. Thus, the funneling process gives rise to the concept of a pyramidal growth of resolutions as the focus toward the point of fixation proceeds. As shown in Fig. 8, due to the fixed log base a, the log polar transformation covers a region that becomes smaller down the Cartesian domain image pyramid.

a = \exp\!\left(\frac{\ln\!\left(\min\left(H_T/2,\; W_T/2\right)\right)}{\rho_{\max}}\right)    (10)
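A small sketch of the pyramid sub-sampling of Eq. (9) and the top level log base of Eq. (10) (numpy assumed; names illustrative):

```python
import numpy as np

def build_pyramid(image, levels=3):
    """Eq. (9): sub-sample the image in a 2:1 ratio between consecutive levels.

    Level 0 is the original resolution; level `levels - 1` is the coarsest.
    """
    return [image[::2 ** n, ::2 ** n] for n in range(levels)]

def top_level_base(top_height, top_width, rho_max=50):
    """Eq. (10): log base a computed from the coarsest (top level) image size."""
    return np.exp(np.log(min(top_height / 2.0, top_width / 2.0)) / rho_max)

# Example: a 200 x 200 image yields levels of 200, 100 and 50 pixels a side,
# and the base computed from the 50 x 50 top level is reused at every level.
```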
Taking the left image as the reference image (master eye) and the right image as the candidate image, the disparity estimation process is based on the shift of the origins of the log polar transformation in the candidate image. This is biologically synonymous with the excitatory and antagonistic competitions that occur between neighbouring receptive fields. Instead of a parallel implementation of such receptive field processes, sequential implementations resort to the sliding window method as proposed.

For a pixel (xL, yL) in the left image, the objective of disparity estimation is to find the corresponding pixel (xL + dH, yL + dV) in the right image, where the (dH, dV) pair is the horizontal and vertical disparity. The left image is transformed to a log polar image with (xL, yL) as the origin. For the right image, the origin for the log polar transformation is shifted horizontally with respect to the reference position. A series of log polar images with shifted origins is thus obtained and this is illustrated in Fig. 9(b). The right log polar image with the maximum normalized cross correlation with the left log polar image indicates the best match. This results in the centering of the two images at the current resolution, and the next resolution-level processing is performed until the finest resolution is achieved. In the case when the desired point of fixation is near the corner or boundary of the image, the corresponding positions of some of the log polar pixels may fall outside the original image. In this case, the values of these pixels are set to 0.

In the process, the permissible search space at the subsequent lower level is restricted to a narrower range compared to the initial level. Let n be the number of levels in the pyramid and let these be enumerated from 0 to n−1, i.e. from finest to coarsest resolution. The disparity estimation is initially conducted at the coarsest level n−1. An initial search radius of 5 pixels was used for level n−1 and a radius of 2 pixels was used for the subsequent levels. According to Eq. (11), the 3-level pyramid model can cover a maximum disparity of ±26 pixels. The computation time is only (5 + 2 + 2)/26 = 35% of that of a direct searching method covering the same range of disparity.

D = \sum_{l=0}^{n-1} d_l \cdot 2^l, \qquad T = \sum_{l=0}^{n-1} d_l    (11)

where D is the disparity range, T the computational time, and d_l the searching radius for level l.

Vertical disparity estimation at each level follows after the completion of the horizontal disparity estimation. The position with the maximum log polar normalized cross correlation determines the vertical disparity.
Fig. 8. The log polar transform on the image pyramid.



Fig. 9. Disparity estimation using log polar images: black pixels denote the reference position in the left image and the candidate positions in the right image. (a) Transformation of
the left image with the reference position as origin and (b) transformation of the right image with the candidate positions as origins.

The searching range in our implementation is ±2 pixels for the vertical disparity because the binocular system has a horizontal baseline, which produces a smaller disparity in the vertical direction.

3.4. Vergence control

The disparity computations derived at the different levels of the pyramid can directly be used for vergence control in both the horizontal and vertical directions. The controlling criterion used is the weighted sum of the disparities of all levels, shown in Eq. (12). A multiplicative factor of 2 is used to compensate for the sub-sampling of the original image at a scaling ratio of 2 between consecutive levels.

C = \sum_{l=0}^{n-1} 2^l D_l    (12)

where D_l is the estimated disparity at level l.

The left camera was assigned as the master camera and the right camera as the slave camera. To omit the complexities of the cognitive attention selection process, manual target selection was used. The master camera was manually directed towards a fixation point and the slave camera performed an equal angular magnitude movement in the same direction. The vergence control process subsequently shifts the slave camera to move towards the fixation point of the master camera. The controlling of the slave eye is an iterative process through the levels until (CH, CV) < (10, 10) (both less than 10). As shown in Fig. 10, the input signal to the controller is an angle θ corresponding to 2^n D_n pixels at level n. The disparity 2^n D_n was converted to θ assuming a pinhole camera model. This angular signal was sent to a proportional controller to control the motors. The controller gain was set to 0.8 to achieve fast performance and avoid overshooting. The threshold 10 is equivalent to an average disparity of ±1 at each level of the pyramid. Once the threshold is achieved, the process moves to the next level. The control will stop if either (CH, CV) < (10, 10) or oscillation occurs. Oscillation occurs when opposite disparities arise in consecutive vergence adjustments, and three such consecutive occurrences give rise to a termination. The final state of the vergence control is either verged or oscillating. The detailed control process is shown in Fig. 10.

Fig. 10. Iterative vergence control.
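A hedged sketch of the iterative control loop of Fig. 10 and Eq. (12) follows. The pixel-to-angle conversion assumes a pinhole model with the 46° field of view from Section 3; for simplicity the sketch drives on the weighted sums CH and CV rather than level by level, and `pan_by`/`tilt_by` stand in for whatever motor interface the platform exposes — both are illustrative assumptions, not the paper's code.

```python
import math

GAIN = 0.8          # proportional controller gain (Section 3.4)
THRESHOLD = 10      # stop when both weighted disparity sums fall below this
FOV_DEG = 46.0      # horizontal field of view of each camera
IMAGE_WIDTH = 200   # working resolution used for disparity estimation

def pixels_to_degrees(pixels):
    """Convert an image-plane offset into a camera rotation (pinhole model)."""
    focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(FOV_DEG / 2.0))
    return math.degrees(math.atan2(pixels, focal_px))

def verge(estimate_disparity, pan_by, tilt_by, max_iterations=50):
    """Iteratively drive the slave camera until vergence or oscillation.

    estimate_disparity() is assumed to return per-level (dH, dV) pairs,
    finest level first; pan_by/tilt_by rotate the slave camera by degrees.
    """
    previous_sign, reversals = 0, 0
    for _ in range(max_iterations):
        per_level = estimate_disparity()
        ch = sum((2 ** l) * dh for l, (dh, dv) in enumerate(per_level))
        cv = sum((2 ** l) * dv for l, (dh, dv) in enumerate(per_level))
        if abs(ch) < THRESHOLD and abs(cv) < THRESHOLD:
            return "verged"
        # oscillation: three consecutive sign reversals of the drive signal
        sign = 1 if ch > 0 else -1
        reversals = reversals + 1 if sign == -previous_sign else 0
        if reversals >= 3:
            return "oscillation"
        previous_sign = sign
        pan_by(GAIN * pixels_to_degrees(ch))
        tilt_by(GAIN * pixels_to_degrees(cv))
    return "oscillation"
```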

4. Experimental results

One of the greatest difficulties in assessing the success of vergence control lies in the system of measurements. In order to appreciate the effectiveness of this work, a series of qualitative and quantitative experiments were devised and are described below. The first qualitative test of vergence is to judge the accuracy of the fixation point placements attained by the algorithm, which physically located the points of fixation on each pair of corresponding binocular images. A second experiment provides a quantitative depth of a fixated object from the cyclopean eye of the binocular pair and this value is compared with the ground truth. The cyclopean depth can be mathematically calculated from the geometry of the binocular setup given a fixed width baseline and the angular rotations of the individual cameras at fixation. In the experiment conducted, the horizontal searching range in the three level pyramid is ±5, ±2, ±2 for levels 2, 1, 0 respectively. The vertical searching range is ±2 for all three levels. The log polar resolution used was 50 × 90.

The proposed pyramidal log polar correlation method was compared with the Pyramidal Cartesian Correlation (PCC) model [39] to show its superiority. The PCC model performs matching between corresponding Cartesian points using a sliding window based normalized cross correlation.

Fig. 11. Vergence control results: (a–f) captured frames of binocular image pairs after vergence control.

Apart from the different transform spaces, all other experimental parameters were identical to the log polar method. In the PCC method, we used a 3-level image pyramid (resolutions being 200 × 200, 100 × 100, and 50 × 50) and the matching window has a size of 21 × 21. The searching radius is also ±5, ±2, ±2 for levels 2, 1, 0 respectively. The vertical searching range is ±2 for all three levels. The window size is fixed for all three levels, so that the equivalent matching window is actually 84 × 84, 42 × 42 and 21 × 21 at the original resolution.

4.1. Vergence control

Vergence control was tested in a lab environment with a wide range of disparities in the scene. Three objects were put in front of the cameras at a distance of about 1 m and the background environment varied between 2 and 7 m. The two cameras were initially parallel. The master (left) camera was directed manually to a position and the slave camera automatically verged to fixate on the same position. A series of binocular image pairs after vergence control were captured and are presented in Fig. 11. The system successfully verged on the foreground objects, the background objects and the ceiling. It was able to achieve a successful stable final state for all runs of vergence control with no oscillation. The system showed robust performance when the depth changed significantly in the local area. This is illustrated in Fig. 11(c), where the cameras were fixating on the bottle and the background of the two views varied drastically. The plants and other objects seen through the door had significant disparity changes and yet the system was still able to verge on the bottle. This is possible through the cortical magnification of the log polar transformation. The same experiment was repeated in the normal Cartesian space with the PCC model, but this time the PCC model became unstable, swinging into large oscillations which often led to failure in vergence. This is the major advantage of the log polar transformation over the normal Cartesian domain based methods.

In order to simulate resilience to noisy drifts in camera parameters, the right camera was deliberately adjusted to different levels of brightness. The resultant system performance was robust enough to accommodate these changes and showed promising performance. This is very important for real world robotic applications, where the hardware may suffer more disturbances from the environment and should be able to adapt to different conditions. The results are shown in Fig. 12.

4.2. Depth estimation

For lack of direct methods of evaluating vergence performance, the indirect way of depth estimation was harnessed.

Fig. 12. Vergence control results when the two cameras have different brightness: (a–f) captured frames of binocular image pairs after vergence control.

The basic concept of this experiment hinges on the duality of depth estimation and vergence. This stems from the concept that if the depth of the foveated objects can be estimated correctly, derived geometry for that estimate proves that both cameras are fixating on the same point. The log polar model was also compared with the Cartesian space PCC model.

The depth can be estimated from the geometry of the binocular system, shown in Fig. 13. The depth and height of the target can be estimated given the motor parameters. The system can be divided into two planes.

Fig. 13. The geometry of the system: (a) two cameras verging on a target, (b) top view for depth estimation (α and β are the pan angles) and (c) side view for height estimation (θ is
the tilt angle).

The horizontal plane associated with the motor panning is relevant to the depth estimation, and the vertical plane for each camera determines the vertical position or height of the target. Note that no matter what the tilt angle is, the depth is only determined by the pan angles of the base motors. When the two cameras are verging on a target, the depth of the target can be directly estimated using Eq. (13).

D = \frac{B}{\tan\alpha + \tan\beta}, \qquad H = H_1 + \frac{H_2}{\cos\theta} - D \tan\theta    (13)
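A one-line numerical check of Eq. (13) in code form, using the symbols of Fig. 13 (B is the baseline, α and β the pan angles, θ the tilt angle, H1 and H2 the platform heights); the example values are purely illustrative:

```python
import math

def target_depth_and_height(baseline, alpha, beta, theta, h1, h2):
    """Eq. (13): depth from the pan angles, height from the tilt angle.

    Angles are in radians; alpha and beta are the pan angles of the two
    cameras, theta the common tilt, h1/h2 the platform offsets of Fig. 13.
    """
    depth = baseline / (math.tan(alpha) + math.tan(beta))
    height = h1 + h2 / math.cos(theta) - depth * math.tan(theta)
    return depth, height

# Example: a 0.24 m baseline with both cameras panned inward by about 6.84
# degrees places the fixation point at roughly 1 m in front of the platform.
print(target_depth_and_height(0.24, math.radians(6.84), math.radians(6.84),
                              0.0, 0.0, 0.0))
```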
much larger than the local window of the PCC method.
The experiment was carried out in a lab environment where From all the above observations, it is possible to conclude that the
specific objects were selected for the system to foveate. Once fixation proposed pyramidal log polar correlation model possesses a stable
was achieved, the pan-tilt configurations of the two cameras were performance over a wider working range. This is important for real
used to calculate the depth of the target. Fig. 14 shows a synthesized world robot applications because reliability and robustness are
view of the experimental setup and the objects were marked with important in the noisy environment. The working range of the
crosshairs. The Mug and Green Bottle on the DELL Box were also pyramidal log polar correlation method is wider and deeper than the
depth-wise adjusted to attain varying depth settings to the camera PCC method when the two methods have the same settings of searching
frame. Objects in the experiments were scattered over a wide range of range. In the conducted experiments, the computation of one round of
depths ranging from less than 1 m to 8 m. Distances less than 0.5 m disparity estimation takes about 25 milliseconds. This leads to a real
were not considered because the system with a baseline distance of time control frequency above 30 Hz. The computation of the PCC
0.24 m cannot accommodate depths of 0.5 m. method takes a shorter period of 10 milliseconds because the matching
Table 1 shows the performance of the two models. The average window size (21 × 21) is less than the log polar image size (50 × 90).
absolute error and the standard deviation of average absolute error One possible concern is the non-rotation-centralized CCD position
are tabulated. The pyramidal log polar correlation model exhibited a in the camera setup. Intuitively, this would pose a problem for
performance on depth estimation with average accuracy above 90% distance estimations due to a bias from the off-centered rotation.

Table 1
Depth estimation results.

Object           True depth (m)   Log polar pyramidal model                      PCC model
                                  Avg est. depth (m)   Avg error ± std dev       Avg est. depth (m)   Avg error ± std dev
Green bottle     0.80             0.84                 5% ± 2%                   Fail                 Fail
Mug              1.00             1.06                 6% ± 3%                   Fail                 Fail
Green bottle     1.30             1.24                 5% ± 2%                   Fail                 Fail
Dell box         1.76             1.74                 4% ± 2%                   1.78                 1% ± 0%
Mug              1.80             1.82                 3% ± 3%                   1.87                 4% ± 1%
Green bottle     2.06             1.92                 7% ± 3%                   2.51                 23% ± 23%
Speaker R        3.50             3.27                 7% ± 4%                   4.14                 26% ± 47%
Door             4.00             3.68                 8% ± 5%                   3.81                 5% ± 2%
Poster           4.54             4.71                 7% ± 4%                   4.57                 1% ± 0%
Plant            7.40             6.40                 14% ± 4%                  7.17                 3% ± 1%
Poster printer   4.65             4.33                 7% ± 4%                   4.43                 5% ± 3%
Speaker L1       3.20             3.18                 5% ± 4%                   3.51                 10% ± 8%
Speaker L2       2.75             2.90                 12% ± 4%                  2.94                 7% ± 4%

Fig. 15. Error plots of depth estimation. Top row: estimated depth versus true depth. Bottom row: estimation error and standard deviation of error versus true depth. The crosses
indicate the failure cases.

However, this is true only for tilted scenarios as illustrated in Fig. 13(c). If the cameras are not at a tilt, maintaining a horizontal pose, the panned cameras will not experience the off-centered bias error, as the angles used for the computations remain consistent regardless of whether they are taken from the point of rotation or from the optical centers. This immunity is unique because the fixation is achieved by a visual centering of the cameras through iterative disparity minimization, rather than a geometrical fixation from camera panning parameters. Furthermore, when the cameras are at a certain tilt, the estimation of the vertical position of the target under vergence can be compensated by the geometry presented in Fig. 13(c).

5. Conclusions

A spatial variant approach for vergence control was proposed in this paper. The model was inspired by the log polar transformation existing in the retino-cortical pathway of the primate vision system. The foveal magnification property of the log polar transformation was utilized for accurate and reliable vergence control, producing reliable camera fixation. In the proposed model, image pyramids were generated from the binocular images and coarse-to-fine disparity estimation was carried out through disparity searching in the image pyramids. Coupling this with the normalized cross correlation in the log polar domain, the disparity in the image was obtained. The model was deployed in a binocular vision system for real time vergence control in a complex environment, and this included a wide range of depth and disparity settings. Through qualitative and quantitative experiments, the system was shown to be able to verge on objects in a cluttered environment. The log polar transformation based method was shown to be stable and reliable in vergence control and proven to be statistically more reliable than the PCC model in the Cartesian domain.

References

[1] A.L. Yarbus, Eye movements and vision, Plenum Press, New York, USA, 1967.
[2] E.L. Schwartz, Spatial mapping in the primate sensory projection: analytic structure and relevance to perception, Biological Cybernetics 25 (1977) 181–194.
[3] Y. Cao, S. Grossberg, A laminar cortical model of stereopsis and 3D surface perception: closure and da Vinci stereopsis, Spatial Vision 18 (2005) 515–578.
[4] S. Grossberg, P.D.L. Howe, A laminar cortical model of stereopsis and three-dimensional surface perception, Vision Research 43 (2003) 801–829.
[5] L.R. Squire, F.E. Bloom, S.K. McConnell, J.L. Roberts, N.C. Spitzer, M.J. Zigmond, Fundamental neuroscience, 2nd ed., Academic Press, San Diego, California, USA, 2003.
[6] T.J. Olson, Stereopsis for verging systems, IEEE Conference on Computer Vision and Pattern Recognition, New York City, USA, 1993, pp. 55–60.
[7] J.P. Siebert, D.F. Wilson, Foveated vergence and stereo, International Conference on Visual Search, Nottingham, UK, 1992.
[8] J.R. Taylor, T.J. Olson, W.N. Martin, Accurate vergence control in complex scenes, IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 1994, pp. 540–545.
[9] A.X. Zhang, A.L.P. Tay, A. Saxena, Vergence control of 2 DOF pan-tilt binocular cameras using a log-polar representation of the visual cortex, International Joint Conference on Neural Networks, Vancouver, Canada, 2006, pp. 4277–4283.
[10] D.J. Coombs, C.M. Brown, Intelligent gaze control in binocular vision, IEEE International Symposium on Intelligent Control, Philadelphia, PA, USA, 1990, pp. 239–245.
[11] G.C. DeAngelis, I. Ohzawa, R.D. Freeman, Depth is encoded in the visual cortex by a specialized receptive field structure, Nature 352 (1991) 156–159.
[12] J. Diaz, E. Ros, S.P. Sabatini, F. Solari, S. Mota, A phase-based stereo vision system-on-a-chip, Journal of BioSystems 87 (2005) 314–321.

[13] D.J. Fleet, A.D. Jepson, Stability of phase information, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (1993) 1253–1268.
[14] D.J. Fleet, H. Wagner, D.J. Heeger, Neural encoding of binocular disparity: energy models, position shifts and phase shifts, Vision Research 36 (1996) 1839–1857.
[15] M. Hansen, G. Sommer, Active depth estimation with gaze and vergence control using Gabor filters, International Conference on Pattern Recognition, Vienna, Austria, 1996, pp. 287–291.
[16] M.M. Marefat, L. Wu, C.C. Yang, Gaze stabilization in active vision—I. Vergence error extraction, Pattern Recognition 30 (1997) 1829–1842.
[17] I. Ohzawa, G.C. DeAngelis, R.D. Freeman, Stereoscopic depth discrimination in the visual cortex: neurons ideally suited as disparity detectors, Science 249 (1990) 1037–1041.
[18] W.M. Theimer, H.A. Mallot, S. Tolg, Phase method for binocular vergence control and depth reconstruction, Proceedings of SPIE 1826 (1992) 76–87.
[19] Y. Yeshurun, E.L. Schwartz, Cepstral filtering on a columnar image architecture: a fast algorithm for binocular stereo segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (1989) 759–767.
[20] A. Bernardino, J. Santos-Victor, Vergence control for robotic heads using log-polar images, International Conference on Intelligent Robots and Systems, Osaka, Japan, 1996.
[21] C. Capurro, F. Panerai, G. Sandini, Dynamic vergence using log-polar images, International Journal of Computer Vision 24 (1997) 79–94.
[22] W.-S. Ching, P.-S. Toh, K.-L. Chan, M.-H. Er, Robust vergence with concurrent detection of occlusion and specular highlights, International Conference on Computer Vision, Berlin, Germany, 1993.
[23] E. Grosso, R. Manzotti, R. Tiso, G. Sandini, A space-variant approach to oculomotor control, International Symposium on Computer Vision, Coral Gables, Florida, USA, 1995, p. 509.
[24] R. Manzotti, A. Gasteratos, G. Metta, G. Sandini, Disparity estimation on log-polar images and vergence control, Computer Vision and Image Understanding 83 (2001) 97–117.
[25] J.H. Piater, R.A. Grupen, K. Ramamritham, Learning real-time stereo vergence control, International Symposium on Intelligent Control/Intelligent Systems and Semiotics, Cambridge, MA, USA, 1999, pp. 272–277.
[26] C. Yim, A.C. Bovik, Vergence control using a hierarchical image structure, IEEE Southwest Symposium on Image Analysis and Interpretation, Dallas, Texas, USA, 1994.
[27] P.M. Daniel, D. Whitteridge, The representation of the visual field on the cerebral cortex in monkeys, Journal of Physiology 159 (1961) 203–221.
[28] B. Fischer, Overlap of receptive field centers and representation of the visual field in the cat's optic tract, Vision Research 13 (1973) 2113–2120.
[29] R.B. Tootell, M.S. Silverman, E. Switkes, R.L. De Valois, Deoxyglucose analysis of retinotopic organization in primate striate cortex, Science 218 (1982) 902–904.
[30] D.C. Van Essen, W.T. Newsome, J.H. Maunsell, The visual representation in striate cortex of macaque monkey: asymmetries, anisotropies, and individual variability, Vision Research 24 (1984) 429–448.
[31] R.A. Peters II, M. Bishay, T. Rogers, On the computation of the log-polar transform, Technical Report, Intelligent Robotics Laboratory, Center for Intelligent Systems, Vanderbilt University School of Engineering, 1996.
[32] C. Mehanian, S.J. Rak, Bi-directional log-polar mapping for invariant object recognition, Proceedings of SPIE 1471 (1991) 200–209.
[33] V.J. Traver, F. Pla, Log-polar mapping template design: from task-level requirements to geometry parameters, Image and Vision Computing 26 (2008) 1354–1370.
[34] The USC-SIPI Image Database. http://sipi.usc.edu/database/. Accessed on 23 Aug 2010.
[35] W.-S. Ching, P.-S. Toh, M.-H. Er, Robust vergence with concurrent detection of occlusion and specular highlights, Computer Vision and Image Understanding 62 (1995) 298–308.
[36] T. Kanade, M. Okutomi, A stereo matching algorithm with an adaptive window: theory and experiment, IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (1994) 920–932.
[37] J. Monaco, A.C. Bovik, L.K. Cormack, Epipolar spaces for active binocular vision systems, International Conference on Image Processing, San Antonio, Texas, USA, 2007, pp. 549–551.
[38] Y. Chen, N. Qian, A coarse-to-fine disparity energy model with both phase-shift and position-shift receptive field mechanisms, Neural Computation 16 (2004) 1545–1577.
[39] X. Zhang, A.L.P. Tay, A physical system for binocular vision through saccade generation and vergence control, Cybernetics and Systems: an International Journal 40 (2009) 549–568.
