
METHODS FOR MONITORING CELLULAR MOTION AND FUNCTION

Dirk Padfield¹,², Jens Rittscher¹, Badrinath Roysam²
¹GE Global Research, One Research Circle, Niskayuna, NY 12309.
²Rensselaer Polytechnic Institute, 110 8th St., Troy, NY 12180.
ABSTRACT
Automated cell phase analysis of live cells over extended
time periods requires both novel assays and automated image
analysis algorithms. We approach the tracking problem as a
spatio-temporal volume segmentation problem, where the 2D
slices are stacked into a volume with time as the z dimension.
This extended abstract gives an overview of our approach and
outlines how a robust tracking system for high-throughput
screening can be designed.
Index Terms: Cell segmentation, cell tracking
1. INTRODUCTION
The ability to automatically follow the movement of cells and capture events such as mitosis and apoptosis, as addressed by many researchers [1, 2, 5, 3, 4], continues to have important implications for biological research. We expand upon this work by enabling the study of cellular motion in a high-throughput environment. In particular, we address the segmentation of cells in the presence of poor image quality and occasional shifts during image acquisition. This research will lead to a general framework for cell tracking and segmentation that is capable of high-throughput, high-content analysis of cells under various imaging conditions. These algorithms will open new possibilities for biological research in diagnosis and treatment.
Existing algorithms for cell tracking can be roughly divided into two main approaches: independent detection with subsequent data association [1, 2] and model-based tracking [3, 4]. Li et al. [5] combine these tasks by both segmenting each frame separately and using a multi-target tracking system based on model propagation with level sets and a stochastic motion filter. Padfield et al. [6] approach the tracking task as a spatio-temporal segmentation task to estimate cell cycle phase.
The development of a generalized tracking framework requires a variety of algorithms to cope with the realities of high-throughput, high-content image acquisition. Although each biological application addresses a different problem, the image analysis algorithms used in these applications share a number of core challenges. Large-scale experimentation makes optimization of the imaging conditions difficult for individual experiments, so algorithms must be robust to poor image quality and large variability within datasets. For example, microscopes occasionally become defocused during acquisition in a high-throughput environment because of the limited time allocated for focusing and image acquisition before proceeding to the next well. Stage shift, which results in shifted images across acquisitions, is another effect of such rapid movement, and it can bias cell motility measurements. In time-lapse fluorescent cell microscopy, the imaging is limited by the fact that the imaging process itself damages cells. The light exposure necessary to stimulate the fluorescent molecules into fluorescing leads to the photochemical destruction of the fluorophores, a phenomenon called photobleaching. In addition, all fluorescent DNA-binding dyes inhibit DNA replication to a greater or lesser extent because of their toxicity. Thus, the contrast-to-noise ratio is often low because of the low concentrations of fluorescent dyes that can be used; consequently, the signal-to-noise ratio in such datasets can be very low, and standard cell detection algorithms tend to fail. For similar reasons, the frequency of image acquisition must also be reduced to limit the light exposure, which means that cells can move significantly between image acquisitions. Consequently, there is often no spatial overlap of the cells between adjacent images. Since different experiments require different amounts of time between acquisitions, the algorithms need to track cells that do not overlap across time frames. For high-content analysis, it is often necessary to generate high-resolution images of several tiled fields, and stitching algorithms are necessary for generating such higher-resolution images. Model systems also typically contain a large number of cells, requiring the simultaneous monitoring of many targets. Moreover, cell segmentation and tracking algorithms need to handle not only cell movement, but also mitosis, merging (occlusion), and movement into and out of the image field of view. To address these requirements, a combination of sophisticated analysis modules is necessary.
2. CELL SEGMENTATION
Fig. 1. Datasets with low contrast-to-noise. The figure on the left shows a cropped view of a typical low-dose Hoechst-stained image, and the figure on the right shows the segmentation using the wavelet approach. The contrast-to-noise ratio (CNR) is defined as $(\mu_S - \mu_N)/\sigma_N$, where $\mu_S$ is the signal mean, $\mu_N$ is the noise mean, and $\sigma_N$ is the noise standard deviation. For these images, CNR = 0.5, indicating that the average intensity difference between the signal and the noise is only half of the standard deviation of the noise. The intensity range is only 20 gray-scale levels.

To denoise the images and segment the cells, we use an algorithm introduced in [7]. The approach is based on the shift-invariant wavelet frames transformation of the image as well as the filtering of non-salient wavelet coefficients. Wavelet frames are identical to the standard wavelet transform except that the decimation operation at each level is omitted. The wavelet frames approach we use is called the "à trous" (with holes) wavelet transform. Prior research on biomedical data [8, 9] demonstrates that the à trous wavelet transform is robust to local noise variations and discards low-frequency objects in the background. The decomposition is represented as
$$I_i(x, y) = \sum_{m,n} h(m, n)\, I_{i-1}\!\left(x - 2^{i-1} m,\; y - 2^{i-1} n\right) \qquad (1)$$

$$W_i(x, y) = I_{i-1}(x, y) - I_i(x, y) \qquad (2)$$
where $I_i$ and $W_i$ represent the approximation and detail images, respectively, at each scale $i$, and $h(m, n)$ denotes the scaling function. The recursive definition in Equation 1 is initialized by setting $I_0(x, y)$ to the original discrete image. The convolution can be implemented efficiently by multiplying the Fourier transform of the image with that of the scaling function with zero taps inserted at each scale. From this decomposition, the reconstruction of the original image can be computed by summing all detail images $W_i(x, y)$ with the residual image at the final scale $I_s(x, y)$:
$$I_0(x, y) = I_s(x, y) + \sum_{i=1}^{s} W_i(x, y) \qquad (3)$$
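To make the decomposition concrete, the following Python sketch implements Equations 1-3 with numpy and scipy. It is an illustration rather than the authors' implementation: the B3-spline scaling kernel, the number of scales, and the mirror boundary handling are assumptions, since the text specifies only the generic scaling function $h(m, n)$.

```python
import numpy as np
from scipy.ndimage import convolve

def a_trous_decompose(image, num_scales=3):
    """A trous (undecimated) wavelet decomposition of Eqs. (1)-(2).

    Returns (details, residual) with details[i-1] = W_i and
    residual = I_s, so that Eq. (3) reconstructs the input exactly.
    """
    # Separable 2-D scaling kernel h(m, n); the B3-spline choice is an
    # assumption, since the paper specifies only h(m, n) generically.
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    approx = image.astype(float)           # I_0: the original image
    details = []
    for i in range(1, num_scales + 1):
        # Insert 2^(i-1) - 1 zero taps between kernel samples ("holes").
        step = 2 ** (i - 1)
        h_i = np.zeros((len(h) - 1) * step + 1)
        h_i[::step] = h
        smoothed = convolve(approx, np.outer(h_i, h_i), mode='mirror')
        details.append(approx - smoothed)  # W_i = I_{i-1} - I_i, Eq. (2)
        approx = smoothed                  # I_i, Eq. (1)
    return details, approx

# Sanity check of Eq. (3): I_0 = I_s + sum_i W_i.
img = np.random.rand(64, 64)
W, residual = a_trous_decompose(img)
assert np.allclose(img, residual + sum(W))
```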
(a) Original overlay of two frames showing stage shift. (b) Two frames with stage shift removed.

Fig. 2. Stage shift effect. Figure 2(a) shows the original overlap between two successive frames, with the previous frame shown in red and the current in green. Figure 2(b) shows the same figure corrected for stage shift. The only remaining shift between cells on successive frames is the result of actual cell movement.

Assuming that the image noise is additive, the corresponding wavelet transformation results in coefficients generated by the underlying signal, $W^I$, and those that correspond to image noise, $W^N$. To approximate the signal term, we threshold the image stack with an amplitude-scale-invariant Bayes Estimator (ABE) using Jeffreys' non-informative prior [10] as an estimate of the significance of wavelet coefficient $W^I_i(x, y)$ at a given scale $i$ and position $(x, y)$:
$$W^I_i(x, y) \approx \mathrm{ABE}\left(W_i(x, y)\right) = \frac{\left(W_i(x, y)^2 - 3\sigma_i^2\right)_+}{W_i(x, y)} \qquad (4)$$
where $\sigma_i^2$ is the estimated noise variance at a given scale. In order to further reduce noise and enhance objects that extend across multiple resolutions, we compute a correlation stack $C_s(x, y)$, which is the multiplication of a subset of the denoised wavelet coefficients corresponding to the selected scales:
$$C_s(x, y) = \prod_{i=j_l}^{j_s} \left[\, W^I_i(x, y) \,\right]_+ \qquad (5)$$
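The denoising steps of Equations 4 and 5 can be sketched in the same style, reusing the detail images W from the decomposition sketch above. The MAD-based noise estimate and the scale range $j_l$ to $j_s$ used here are assumptions; the paper leaves both to the application.

```python
import numpy as np

def abe_threshold(W_i, sigma_i):
    """Amplitude-scale-invariant Bayes Estimator of Eq. (4):
    (W^2 - 3*sigma^2)_+ / W, applied elementwise."""
    numer = np.maximum(W_i ** 2 - 3.0 * sigma_i ** 2, 0.0)
    denom = np.where(W_i == 0.0, 1.0, W_i)  # numer is 0 wherever W_i is 0
    return numer / denom

def correlation_stack(details, sigmas, j_lo, j_hi):
    """Correlation stack of Eq. (5): product over the selected scales
    of the positive part of the denoised detail coefficients."""
    C = np.ones_like(details[0])
    for i in range(j_lo, j_hi + 1):
        C *= np.maximum(abe_threshold(details[i], sigmas[i]), 0.0)
    return C

# Usage with the detail images W from the decomposition sketch above.
# The MAD rule for sigma_i is an assumed estimator; the paper only
# requires an estimate of the noise variance per scale.
sigmas = [np.median(np.abs(w)) / 0.6745 for w in W]
C = correlation_stack(W, sigmas, j_lo=1, j_hi=2)
foreground_candidates = C > 0  # input to the subsequent segmentation
```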
3. STAGE SHIFT REMOVAL
In time-lapse high-throughput experiments, the microscope may drift over time. In addition, experimenters occasionally need to remove the cell plate from the microscope, for example to add a compound to the experiment, and then return it to continue the time-course. When the stage shift arising from these conditions is ignored in cell tracking experiments, two problems can arise: 1) the algorithms fail to track the cells because different cells overlap, or 2) even if the algorithms are robust enough to track the cells correctly, the global stage shift is added to the cell movement and biases the cell velocity and direction calculations. It is therefore necessary to correct for this stage shift automatically.
An example of the stage shift effect is shown in Figure 2(a). This figure shows the overlap of the previous frame in red and the current frame in green, which color-codes the global misalignment resulting from stage shift. Figure 2(b) shows the same images with the stage shift corrected by the method presented in this section.
(a) Normal image, $R_d = 1.0733$. (b) Defocused image, $R_d = 3.6868$.

Fig. 3. Comparison of normal and defocused images. The $R_d$ number listed for each figure is the defocus ratio measured by our algorithm, which indicates how defocused an image is. This defocus ratio is defined in the text.
We use a registration step to correct for this effect. By the nature of the stage shift, few constraints can be placed on the transformation between images, since significant misalignment can result from users taking the plate out of the microscope and placing it back in. Because of the potentially large transform between images, registration methods based on local search can become stuck in a local minimum, and many such approaches require careful parameter tuning. We therefore employ an algorithm that calculates the translation, rotation, and scale using the Fourier transform; it finds a global minimum, requires no parameters, and is fast. Our approach is related to that of Reddy and Chatterji [11]. The method uses the Fourier translation, rotation, and scale properties to perform the registration. It is founded on the principles that the magnitude of the Fourier transform is translation invariant and that its conversion to log-polar coordinates converts scale and rotation differences to horizontal and vertical offsets that can be measured.
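The translation component can be illustrated with phase correlation, which locates the global peak of the normalized cross-power spectrum without any parameters; the full method of Reddy and Chatterji [11] recovers rotation and scale by applying the same machinery to log-polar resamplings of the Fourier magnitudes. The sketch below covers translation only and is not the authors' implementation.

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the (dy, dx) translation that maps `fixed` onto `moving`
    by locating the global peak of the normalized cross-power spectrum."""
    F_fixed = np.fft.fft2(fixed)
    F_moving = np.fft.fft2(moving)
    cross = np.conj(F_fixed) * F_moving
    cross /= np.abs(cross) + 1e-12   # keep only the phase information
    corr = np.fft.ifft2(cross).real  # impulse at the translation offset
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts.
    return tuple(p if p <= n // 2 else p - n
                 for p, n in zip(peak, corr.shape))

# Usage: estimate and undo the stage shift between successive frames.
prev_frame = np.random.rand(128, 128)
curr_frame = np.roll(prev_frame, shift=(5, -3), axis=(0, 1))
dy, dx = phase_correlation_shift(prev_frame, curr_frame)  # -> (5, -3)
aligned = np.roll(curr_frame, shift=(-dy, -dx), axis=(0, 1))
```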
4. AUTOMATIC DEFOCUS DETECTION
In high-throughput applications, the microscope acquires each image quickly and then moves on to the next, leaving little time for refocusing in cases where the specimen was not fully in focus. This can lead to the failure of segmentation and tracking algorithms. To address this, we have developed an algorithm to detect such defocused images automatically so that they can be removed from analysis.

The algorithm is based on the principle that a defocused image acquires a smoothed appearance, losing its texture. Such effects can be captured effectively using wavelet transforms. Wavelets decompose an image into approximation and detail channels that encode both the frequency and spatial information of the image.
Using the wavelet decomposition, the energy in each channel provides insight into how textured the image is. Smooth images tend to concentrate most of the energy in the low-frequency channels, whereas the wavelet decomposition of textured images shows energy spread throughout the channels [12]. For the application of defocused image detection, the high-frequency noise is still present and thus provides little discriminative information; the middle frequencies, however, are attenuated relative to in-focus images. Using this intuition, we calculate the defocus ratio $R_d$ as the ratio of the total energy of the lower-frequency channel to that of the middle-frequency channels. The energy of a given subband is computed as
$$E(x) = \frac{1}{IJ} \sum_{i=1}^{I} \sum_{j=1}^{J} |x(i, j)| \qquad (6)$$
where $IJ$ is the number of coefficients in channel $x$, and $x(i, j)$ is the value of the coefficient. Using this defocus ratio, it is possible to distinguish between focused and defocused images: the ratio is high for defocused images and low for focused images.
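A minimal sketch of the defocus ratio, reusing a_trous_decompose from the Section 2 sketch: the grouping of subbands into a low-frequency channel and middle-frequency channels, and the exclusion of the finest noise-dominated scale, are our assumptions, since the paper does not enumerate the scales.

```python
import numpy as np

def defocus_ratio(image, num_scales=4):
    """Defocus ratio R_d: energy of the low-frequency (residual) channel
    relative to the middle-frequency detail channels, with channel
    energy computed as in Eq. (6). Reuses a_trous_decompose from the
    Section 2 sketch; the band grouping is an assumption."""
    details, residual = a_trous_decompose(image, num_scales)

    def energy(channel):
        # Eq. (6): mean absolute value of the subband coefficients.
        return np.mean(np.abs(channel))

    # Skip the finest scale: it is dominated by high-frequency noise,
    # which carries little discriminative information (see text).
    middle = sum(energy(w) for w in details[1:])
    return energy(residual) / middle

# Usage: R_d rises for smoothed (defocused-looking) images. The paper
# reports R_d of about 1.07 in focus vs. 3.69 defocused (Fig. 3); the
# decision threshold between them would be chosen empirically.
img_sharp = np.random.rand(128, 128)
print(defocus_ratio(img_sharp))
```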
5. RESULTS
Example tracking results are given in Figure 4. The original images are shown on the left and the tracking results with track tails on the right. The tails indicate the location of each cell in the previous image, and they demonstrate that the stage shifts consistently to the left (so that the cells appear to move to the right). The tails thus indicate the raw distance that the cells travel; the corrected distances are found by subtracting the stage shift from these tracks for each cell. As in dataset 1, blue boxes indicate cells entering the image, red boxes indicate cells leaving, and green boxes indicate mitosis events.
The images in Figure 4 show the performance of the tracking during image defocus. When defocused images are detected, the tracks are propagated through the defocused images, and the tracking resumes on the next focused image. Thus, the track tails in the third row have twice the normal length. Since the cells are tracked accurately across these defocused images, the overall distance measurements across the experiment are comparable to those of the wells where no defocus is present.
The total processing time of 31 seconds is based on an average experiment with 50 cells per image. The numbers are presented in different units depending on the algorithm. The defocus detection and segmentation are carried out on single images, so these are reported in seconds/image. The stage shift correction is carried out on pairs of images, so its units are seconds/(image pair). The tracking depends on the number of cells rather than the size of the image, so it is reported in seconds/cell. The timing was measured on a dual-core 2.6 GHz Dell laptop with 3.5 GB of RAM, with the algorithms implemented in Matlab.
Fig. 4. Tracking results in the presence of defocused frames and large stage shift for dataset 2. The original images are
shown on the left, and the tracking results, with tails showing the previous location of the cell and segmented contours, are
shown on the right. Each row represents the next image in the sequence. The tracking proceeds correctly despite defocused
images. Blue boxes indicate cells entering, red boxes indicate cells leaving, and green boxes indicate cells splitting.
6. CONCLUSIONS AND FUTURE WORK
Automated analysis of high-throughput time-lapse data can provide statistically meaningful measures that are difficult to achieve by manual analysis. We have presented an automatic analysis approach that exploits the spatio-temporal nature of the data to constrain the segmentation and tracking problems. Topics for future work include measuring biologically relevant phenomena on more datasets and expanding the edit-based validation framework to incorporate a learning module that will suggest changes based on past edits.
Acknowledgements: We would like to thank Elizabeth
Roquemore and Angela Williams for working out the assay
conditions and generating the image sets.
7. REFERENCES
[1] O. Al-Kofahi, R.J. Radke, S.K. Goderie, Q. Shen, S. Temple, and B. Roysam, "Automated cell lineage construction: a rapid method to analyze clonal development established with murine neural progenitor cells," Cell Cycle, vol. 5, pp. 327-35, 2006.

[2] F. Yang, M. Mackey, F. Ianzini, G. Gallardo, and M. Sonka, "Cell segmentation, tracking, and mitosis detection using temporal context," in MICCAI '05, James Duncan and Guido Gerig, Eds., 2005, number 3749 in LNCS, pp. 302-309.

[3] O. Debeir, P. Van Ham, R. Kiss, and C. Decaestecker, "Tracking of migrating cells under phase-contrast video microscopy with combined mean-shift processes," IEEE Trans. Med. Imaging, vol. 24, no. 6, pp. 697-711, 2005.

[4] A. Dufour, V. Shinin, S. Tajbakhsh, N. Guillen-Aghion, J.C. Olivo-Marin, and C. Zimmer, "Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces," IEEE Trans. Image Proc., vol. 14, no. 9, 2005.

[5] K. Li, E. Miller, L. Weiss, P. Campbell, and T. Kanade, "Online tracking of migrating and proliferating cells imaged with phase-contrast microscopy," in Proc. CVPRW, 2006, pp. 65-72.

[6] D. Padfield, J. Rittscher, T. Sebastian, N. Thomas, and B. Roysam, "Spatio-temporal cell cycle analysis using 3D level set segmentation of unstained nuclei in line scan confocal fluorescence images," in IEEE ISBI, 2006.

[7] D. Padfield, J. Rittscher, and B. Roysam, "Spatio-temporal cell segmentation and tracking for automated screening," in IEEE ISBI, 2008.

[8] A. Genovesio, T. Liedl, V. Emiliani, W.J. Parak, M. Coppey-Moisan, and J.C. Olivo-Marin, "Multiple particle tracking in 3-D+t microscopy: method and application to the tracking of endocytosed quantum dots," IEEE Trans. Image Proc., vol. 15, no. 5, pp. 1062-1070, 2006.

[9] J.C. Olivo-Marin, "Automatic detection of spots in biological images by a wavelet-based selective filtering technique," in ICIP, 1996, pp. I: 311-314.

[10] M. Figueiredo and R. Nowak, "Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior," IEEE Trans. Image Proc., vol. 10, no. 9, pp. 1322-1331, September 2001.

[11] B.S. Reddy and B.N. Chatterji, "An FFT-based technique for translation, rotation, and scale-invariant image registration," IEEE Trans. Image Proc., vol. 5, no. 8, pp. 1266-1271, 1996.

[12] R. Porter and C.N. Canagarajah, "A robust automatic clustering scheme for image segmentation using wavelets," IEEE Trans. Image Proc., vol. 5, no. 4, pp. 662-665, 1996.