
An Effective Document Image Deblurring Algorithm

Xiaogang Chen1,3, Xiangjian He2 , Jie Yang1,3 and Qiang Wu2


1 Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
2 Centre for Innovation in IT Services and Applications, University of Technology, Sydney, Australia
3 Key Laboratory of System Control and Information Processing, Ministry of Education, China
{cxg,jieyang}@sjtu.edu.cn    {Xiangjian.He,Qiang.Wu}@uts.edu.au

Abstract

Deblurring camera-based document images is an important task in digital document processing, since it can improve both the accuracy of optical character recognition systems and the visual quality of document images. Traditional deblurring algorithms have been designed for natural-scene images. However, the characteristics of natural-scene images are not consistent with those of document images. In this paper, the distinct characteristics of document images are investigated. We propose a content-aware prior for document image deblurring that is based on document image foreground segmentation. In addition, an upper-bound constraint combined with a total-variation-based method is proposed to suppress ringing in the deblurred image. Compared with traditional general-purpose deblurring methods, the proposed algorithm produces more pleasing results on document images. Encouraging experimental results demonstrate the efficacy of the proposed method.

1. Introduction

With the pervasive use of consumer electronics, cameras have been widely integrated into mobile phones and PDAs. They are convenient portable tools for common users to record information. This condition potentially drives the advancement of camera-based document image processing. This paper proposes a new algorithm for document image deblurring. It produces a sharp document image from an observed blurry one.

Unlike natural-scene imaging, where the flash might be forbidden for fear of ruining the picture in a dim-light environment, the flash is easier to use when capturing a document image, and thus blurring might be avoided. However, for some cameras integrated in cheap devices the flash is not available, and this results in blurring. Another motivation for document image deblurring, restoration and enhancement comes from research on Character Recognition and Document Understanding: the preprocessing of document images can greatly improve visual quality and recognition accuracy.

The basic assumption of this paper is that the document image undergoes a spatially invariant blurring, which is commonly modeled by image convolution. It should be noted that spatially variant blurring can also be regarded as spatially invariant locally, and this has been discussed previously [1, 2, 3].

1.1. Related work

Document image restoration is an important problem in digital document analysis. During acquisition, a document image can be degraded by many factors such as geometrical warping [4], shading distortions [5], noise pollution [6] and image blurring [1, 7, 8]. Here we mainly concern ourselves with the problem of document image deblurring.

Usually, the only observed information is the blurred image, which may be out-of-focus blurred [1] or motion blurred [8, 9]. The problem of estimating the sharp image and the blurring kernel, namely the Point Spread Function (PSF), is commonly regarded as blind deconvolution, where the spatially invariant condition is assumed. Although this problem has been studied for several decades, it has not been completely solved due to its inherent ill-conditioned nature.

In some situations, users can prevent blurring with an optical stabilizer, which is usually not equipped in low-end cameras. A cheaper method without such hardware is to use more observed images to estimate the latent sharp image. Yuan et al. [10] employed blurred/noisy image pairs to obtain the sharp one. Zhuo et al. [11] recovered a sharp image from a pair of motion blurred and flash images. Chen et al. [12] used two consecutively captured blurred photos to estimate the sharp one.

The most difficult situation is to estimate a sharp image from a single observed blurry image. Traditional methods tend to explore effective priors to regularize this ill-conditioned problem. The priors include Total Variation (TV) [13, 14], sparse approximation [15], image edge transparency [16], hyper-Laplacian [17, 18], heavy-tailed distributions [9, 19, 20], etc. The commonplace inference tools include Maximum A Posteriori [9, 13], Variational Bayesian [14, 19, 20], MCMC [21], etc.

Adopting prior knowledge of document images can further improve the estimation. In [22] and [7], information such as font size and font type was employed for document image restoration. However, in their work the motion blurring model was not discussed.

1.2. Our contributions

This paper contains the following contributions.
1. We show the characteristic difference between natural-scene images and document images. We also show that heavy-tailed-distribution-based methods may not be appropriate for document image deblurring.
2. A content-aware prior for document image deblurring is proposed. We calculate the desired intensity probability density function (PDF) of the sharp image from the intensity histogram of the observed blurred image. The PDF then functions as a prior for deblurring.
3. The local structure of document images is analyzed. An upper-bound constraint combined with a TV-based method is proposed to effectively suppress ringing artifacts, which are notorious in image deblurring.

We formulate the deblurring problem in a Bayesian manner. The corresponding objective function is efficiently optimized by alternating minimization [9, 23]. Due to the effective priors, the proposed algorithm is able to produce better results than the state-of-the-art methods [9, 19] for document image deblurring.

[Figure 1: a sharp document image, a sharp street image and a blurred street image, together with a plot of log probability density versus gradient for the three images.]
Figure 1: For the three images above, their logarithmic probability densities of gradients are shown in the bottom-right corner of the figure, where the blue and green curves reveal that the statistical characteristics of a sharp document image can be similar to those of a blurred natural-scene image in the gradient domain.

This paper is organized as follows. In Section 2, we investigate the characteristics of document images. The proposed deblurring formulation is presented in Section 3. The detailed numerical computations are discussed in Section 4. In Section 5, the efficacy of the proposed method is demonstrated by experiments. Section 6 gives a brief conclusion of the paper.

2. Properties of document image

Most of the previous deblurring methods are based on natural-scene images, whose characteristics may not be consistent with those of document images. Generally, compared with natural-scene images, document images have distinct properties, which will be discussed in this section.

The target images that we consider here are the widely seen document images. An example is shown in the bottom-left of Figure 1. It is prominent that the foreground dark pixels are related to text or figures. Such a property is also shared by handwritten document images.

2.1. Probability density function

Let f' be the horizontal or vertical gradient of an image f. It is known that log{p(f')} follows a heavy-tailed distribution for a sharp natural-scene image [9]. Such a distribution has been successfully exploited in natural-scene image deblurring [9, 19]. However, directly using these methods for blurry document image restoration is not reliable. The reason is that the gradient distribution of document images may be significantly different from that of natural-scene images. Therefore, the heavy-tailed distribution prior will mislead the PSF estimation and the blind deconvolution process.

Figure 1 shows an awkward case. A sharp document image, a sharp street image and its motion-blurred version are shown. From their corresponding logarithmic PDFs of image gradients, shown in the bottom-right of Figure 1, we observe that the gradient distribution of the sharp document image is similar to that of the blurred street image, but quite different from the heavy-tailed distribution corresponding to the sharp street image. Such a difference can create a barrier for document image deblurring using gradient-distribution-based methods [9, 19].

Furthermore, the gradient distribution of a sharp document image can vary to a great extent. For a sharp document image with rich contents, the logarithmic probability density of the image gradients could be heavy-tailed and similar to that of a sharp natural-scene image. On the other hand, if the background pixels of a document image are white, there may be only a few foreground words, as explained later in this section. There will then be just a few gradient levels in the gradient image.
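As a rough illustration of this comparison (not part of the original paper), the logarithmic gradient PDFs of Figure 1 can be reproduced with a few lines of Python; the bin range and the image variables below are assumptions.

    import numpy as np

    def log_gradient_pdf(img, bins=np.arange(-200, 201, 4)):
        # f' = horizontal gradient of the image
        gx = np.diff(np.asarray(img, dtype=np.float64), axis=1).ravel()
        hist, edges = np.histogram(gx, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, np.log(hist + 1e-12)   # small constant avoids log(0)

    # Usage (doc_img and street_img are assumed 8-bit grayscale arrays):
    #   c, p_doc = log_gradient_pdf(doc_img)
    #   c, p_street = log_gradient_pdf(street_img)
    # A sharp document image typically occupies only a few gradient levels,
    # unlike the smooth heavy-tailed curve of a sharp natural-scene image.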

Therefore, we do not use the image gradient distribution but the image intensity probability density as the prior for document image deblurring. This choice is based on the observation that most of the dark pixels belong to the foreground words or figures, while the white pixels constitute the background of the document image, as shown in the document image of Figure 1. Given a motion blurred document image, we can segment the foreground and background pixels with the widely discussed foreground segmentation tools for document images [24, 25, 26, 27]. Therefore, we can model the desired intensity distribution of the sharp document image to be estimated according to the contents of the blurred image. Such a method is content-aware and thus more flexible than the traditional fixed gradient-distribution-based methods [9, 19].

[Figure 2: two plots of density function versus pixel intensity.]
Figure 2: (a) Histograms of ten sharp document images. (b) Histograms of ten motion blurred document images.

Ideally, the pixels corresponding to the foreground text in a document image should be black and have small intensity values. Due to the lighting conditions, their intensities might be slightly shifted to larger values. To show this phenomenon, histograms of ten sharp document images are shown in Figure 2(a), where the images were collected with a Canon 400D camera. It is observed that most of the foreground pixels, corresponding to text or figures, constitute the low part of the histograms, with intensities lower than 50.

On the other hand, in motion blurred document images, the foreground dark pixels, surrounded by white background pixels, are smeared and their intensities are significantly whitened. Such an effect can be noticed in the histograms shown in Figure 2(b), where the corresponding document images are motion blurred. It is not unexpected that the dark foreground pixels are whitened and the proportion of dark pixels with intensities lower than 50 is greatly reduced. This reveals the relationship between the intensity histograms of a sharp document image and its blurred version.

Given a blurred document image, we can reasonably assume that the foreground pixels are darker in the sharp image, under the same lighting condition. Therefore, the low part of the histogram is strengthened in the counterpart histogram of the sharp document image.

The proportion of foreground dark pixels can be easily estimated by foreground segmentation [24, 25] or adaptive thresholding [27]. Let the proportion be p, and let the observed intensity histogram of the blurred image be H_b, a normalized vector of size 256×1 with ||H_b||_1 = 1. We suggest the desired intensity histogram of the sharp image be H_u = H_b · (1 − p) + d_s. Here 1 − p is a scalar corresponding to the percentage of white background pixels, and d_s is a vector of size 256×1 corresponding to the components of the foreground pixels, which should be dark in the sharp image. To determine d_s, we let the intensities of the foreground dark pixels be uniformly distributed in the region [0, s − 1] with probability density p/s, where s is the lowest intensity with probability density p/s in H_b.

According to this procedure, the desired histogram of the unobserved sharp image, H_u, is calculated and then used in the following deblurring algorithm. It is not difficult to see that ||H_u||_1 = 1 and that the proportion of dark pixels is increased in the histogram. Admittedly, the uniform distribution assumption for the dark pixels may not be absolutely consistent with the real case; it basically encourages the foreground pixels to be darker. Experiments show that our deblurring algorithm based on the content-aware prior H_u is able to produce pleasing results.
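As a concrete sketch (not the authors' code), the construction of H_u can be written as follows. The foreground proportion p should come from a document binarization method [24, 25, 27]; the fixed global threshold and the rule used to pick s below are simplifying assumptions.

    import numpy as np

    def desired_histogram(g, fg_threshold=128):
        g = np.asarray(g, dtype=np.uint8)
        H_b = np.bincount(g.ravel(), minlength=256).astype(np.float64)
        H_b /= H_b.sum()                       # ||H_b||_1 = 1

        p = float((g < fg_threshold).mean())   # proportion of foreground (dark) pixels

        # Pick s: smallest intensity whose density in H_b reaches p/s (one reading
        # of the rule in the text); fall back to 50 if no such intensity exists.
        s = next((i for i in range(1, 256) if H_b[i] >= p / i), 50)

        d_s = np.zeros(256)
        d_s[:s] = p / s                        # dark pixels uniform on [0, s-1]

        H_u = H_b * (1.0 - p) + d_s            # ||H_u||_1 = (1-p) + p = 1
        return H_u

    # H_u is then used to build the piecewise linear prior Psi in Section 3.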

2.2. Local structure of document image

Traditional image deconvolution methods commonly suffer from ringing artifacts, a well-known challenging problem in image deblurring. In this section, the local structure of document images is investigated, and an effective method is proposed to suppress the ringing.

It is a reasonable assumption that the intensities of the dark foreground pixels are lower than those of the surrounding white background pixels in a document image. Besides, the white background regions in a deblurred image are smooth; thus, they can be approximated from the input blurry image, which is actually a low-pass filtered version of the sharp image. Based on these two factors, we propose a new method to suppress the ringing that appears during image deconvolution. Our method is based on an upper-bound constraint and TV regularization [6, 28].

Given a blurry document image g(i, j), we impose an upper bound M(i, j) on the pixel intensities of the sharp image f(i, j) to be estimated. M(i, j) is calculated from the input image g by a max filter [30], which orders (ranks) the pixels contained in the image area encompassed by the filter and then chooses the maximum as the response. Figure 3 shows an example.

After f is computed in each deconvolution iteration, the upper-bound constraint is enforced by f = min(f, M). However, directly imposing the upper bound M on f does not improve the results, since the operation f = min(f, M) could produce many false edges. It is intuitive to adopt TV regularization to suppress these false edges. Note that TV regularization also inherently denoises the image [28]. Pleasingly, experiments show that the upper-bound constraint combined with the TV-based method is very effective at suppressing ringing. The detailed algorithms are presented in the following sections.

[Figure 3: a blurry input image and its max-filtered version.]
Figure 3: The left image is an observed blurry image. The right image is the computed upper bound M.
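For illustration, the upper-bound map M and the per-iteration clipping can be sketched as below; the 7×7 window size is an assumed value, not one reported in the paper.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def upper_bound(g, size=7):
        """M(i, j): maximum of g over the filter window centered at (i, j) [30]."""
        return maximum_filter(np.asarray(g, dtype=np.float64), size=size)

    def enforce_bounds(f, M):
        """Enforce 0 <= f <= M after each deconvolution iteration."""
        return np.clip(f, 0.0, M)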
3. Document image deblurring

From the Bayesian point of view, it is easy to incorporate prior knowledge into estimation problems. Let the input blurred document image be g, and let the sharp image and the PSF to be estimated be f and h. We have p(f, h | g) ∝ p(g | f, h) p(f) p(h). By taking the logarithm, the multiplication can be separated. log p(g | f, h) is commonly regarded as the data fidelity term, while log p(f) and log p(h) correspond to regularization terms. To estimate f and h, we adopt the maximum a posteriori (MAP) method. The fidelity and regularization terms are detailed as follows.

For log p(g | f, h), the probability density involved in this term is usually considered to be a Gaussian function. Assuming that the elements are independent and identically distributed (i.i.d.), log p(g | f, h) results in −w ||f ⊗ h − g||_2^2, where the weighting parameter w is positive and determined by the noise variance [9, 17]. The constant terms are ignored here.

log p(f) regularizes the deconvolution process. It is the major contribution of this paper. We adopt two methods to facilitate the estimation. Firstly, we consider the desired intensity histogram of f. It is known that the density function p(f) corresponds to the intensity histogram of image f. From the previous section, we know that the desired histogram of f, namely H_u, can be calculated from the input image g. Basically, we wish the foreground pixels in the deblurred image f to be darker. Secondly, we consider the local structure of f. The upper bound M on the pixel intensities of f and the non-negativity of the intensities are enforced. TV regularization is combined to reduce the false edges and perform denoising.

Since the desired histogram H_u is a vector of size 256×1, a parametric function for log p(f) is preferred and can be manipulated in the optimization. We adopt a piecewise linear function Ψ(f) to represent log p(f), given log(H_u). The more line segments are assigned, the higher the approximation accuracy. Many methods have been proposed in machine vision to represent an edge by a set of joint line segments, such as the recursive splitting method and the sequential method [29]. The second approach is adopted to calculate the joint line segments of Ψ(f) in our implementation. Figure 4 shows an example: the four red line segments constitute Ψ(f), which approximates the desired log p(f). Assuming that the pixel intensities of f are all i.i.d., we get log p(f) = Σ_l log p(f_l) ≈ −||Ψ(f)||_1, where the negative sign appears because Ψ(f_l) is a negative function, and f_l is the l-th pixel of image f.

[Figure 4: the desired log p(f) curve and its piecewise linear approximation over the intensity range [0, 255].]
Figure 4: The desired logarithmic PDF, denoted by the blue curve, is approximated by a piecewise linear function Ψ(f), shown as red joined line segments.
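A simple sketch of constructing Ψ from log(H_u) follows. The paper fits joint line segments with the sequential method of [29]; here an even partition of the intensity range with independent least-squares line fits is used as a stand-in, so the segments are not guaranteed to join exactly.

    import numpy as np

    def fit_piecewise_linear(H_u, n_segments=4, eps=1e-12):
        x = np.arange(256, dtype=np.float64)
        y = np.log(H_u + eps)                      # desired log p(f)
        bounds = np.linspace(0, 256, n_segments + 1).astype(int)
        a, b = np.zeros(n_segments), np.zeros(n_segments)
        for s in range(n_segments):
            lo, hi = bounds[s], bounds[s + 1]
            a[s], b[s] = np.polyfit(x[lo:hi], y[lo:hi], deg=1)   # y ~ a*x + b
        return a, b, bounds

    def psi(f, a, b, bounds):
        """Evaluate Psi at pixel intensities f (approximation of log p(f))."""
        seg = np.clip(np.searchsorted(bounds, f, side='right') - 1, 0, len(a) - 1)
        return a[seg] * f + b[seg]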

The regularization term log p(h) models the prior knowledge of the PSF. We impose two common constraints on the kernel, as discussed in [10]: the kernel elements should be non-negative, and they should sum to one.

Consequently, the fidelity term and the regularization terms can be explicitly represented. The negative logarithm of the posterior probability, −log p(f, h | g), results in the following objective function,

    J(f, h) = w ||f ⊗ h − g||_2^2 + β Σ_{k=x,y} ||f ⊗ d_k||_1 + λ ||Ψ(f)||_1,
    s.t. (1) 0 ≤ f(i, j) ≤ M(i, j); (2) h(i', j') ≥ 0; (3) ||h||_1 = 1.    (1)

In the above equations, ||·||_p denotes the L_p norm, and d_x and d_y are the first-order derivative filters, d_x = [1, −1], d_y = [1, −1]^T. In Eq. (1), ⊗ represents the convolution operation, and (i, j) and (i', j') are the coordinates of the image and the kernel respectively. Furthermore, w, λ and β are weighting parameters. The term Σ_{k=x,y} ||f ⊗ d_k||_1 corresponds to the TV regularization [23].

Given a blurry image g, the piecewise linear function Ψ and the image M are calculated first. Then, the objective function J(f, h) is minimized by alternating minimization to estimate the sharp image f and the PSF h.

4. Optimization

We adopt an iterative method to estimate f and h. To solve the problem quickly, the variable splitting approach [9, 17, 23] is employed. By introducing auxiliary variables f̂, f̃_x and f̃_y at each pixel, the function J(f, h) is changed to

    J = w ||f ⊗ h − g||_2^2 + β Σ_{k=x,y} ||f̃_k||_1 + λ ||Ψ(f̂)||_1 + η Σ_{k=x,y} ||f ⊗ d_k − f̃_k||_2^2 + η ||f̂ − f||_2^2.    (2)

The three constraint terms in Eq. (1) remain the same here. η is varied during the estimation. As η → ∞, the computed f becomes consistent with the solution of Eq. (1) [9, 17]. The four variables f, f̂, f̃_x, f̃_y are estimated iteratively. Since the sub-problems for solving the four variables all have closed-form solutions, they can be computed efficiently.

4.1. Solving f̂

Holding f constant, from Eq. (2) we see that f̂ can be computed by minimizing

    J(f̂) = η ||f̂ − f||_2^2 + λ ||Ψ(f̂)||_1.    (3)

Because each pixel of f̂ can be separated in Eq. (3) and the piecewise linear function Ψ is represented by joint line segments, the pixel-wise objective for the l-th pixel of f̂ is

    J(f̂_l) = η (f̂_l − f_l)^2 − λ Σ_s (a_s f̂_l + b_s) δ_s(f̂_l),    (4)

where δ_s is an indicator function: if f̂_l lies in the s-th line segment of Ψ, δ_s(f̂_l) equals one, and zero otherwise. a_s and b_s are the parameters of the s-th line segment. Since J(f̂_l) is quadratic on every segment, the global minimum of J(f̂_l) can be determined efficiently.
4.2. Solving f̃_x and f̃_y

Given f, the derivative f ⊗ d_x can be calculated; we denote it by f_x for short. The sub-problem for solving f̃_x can then be solved analytically. From Eq. (2), we obtain the function

    J(f̃_x) = β ||f̃_x||_1 + η ||f_x − f̃_x||_2^2.    (5)

This is a well-known function in TV regularization problems [23, 28], which can be easily solved by the shrinkage function [23]. The sub-problem for solving f̃_y given f_y is identical to the problem defined by Eq. (5).
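For completeness, a minimal sketch of the shrinkage step: the element-wise problem min_t β|t| + η(v − t)^2 is solved by soft-thresholding v with threshold β/(2η).

    import numpy as np

    def shrink(v, beta, eta):
        tau = beta / (2.0 * eta)
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    # Usage: with fx, fy the horizontal and vertical derivatives of f,
    #   fx_tilde = shrink(fx, beta, eta)
    #   fy_tilde = shrink(fy, beta, eta)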
4.3. Solving f

Holding the other variables constant, from Eq. (2) we get the sub-problem for solving f,

    J(f) = w ||f ⊗ h − g||_2^2 + η Σ_{k=x,y} ||f ⊗ d_k − f̃_k||_2^2 + η ||f̂ − f||_2^2.    (6)

The above function is quadratic. With circular boundary conditions, the minimization problem has a closed-form solution in the frequency domain,

    f = F^{-1} { [ w H* G + η Σ_{k=x,y} D_k* F̃_k + η F̂ ] / [ w H* H + η Σ_{k=x,y} D_k* D_k + η ] },    (7)

where F^{-1} denotes the inverse Fourier transform and the superscript * represents the complex conjugate. The above multiplication and division operations are performed component-wise. Furthermore, H, D_k, G, F̃_k and F̂ are the Fourier transforms of h, d_k, g, f̃_k and f̂ respectively. Then, the bound constraints on f are enforced by f = min(f, M) and f = max(f, 0).
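A NumPy sketch of the update in Eq. (7) is given below. The small psf2otf helper, which pads the kernel to the image size and centers it at the origin for circular convolution, is an implementation detail assumed here rather than taken from the paper.

    import numpy as np

    def psf2otf(psf, shape):
        pad = np.zeros(shape)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        for axis, ksize in enumerate(psf.shape):
            pad = np.roll(pad, -(ksize // 2), axis=axis)   # center kernel at (0, 0)
        return np.fft.fft2(pad)

    def update_f(g, h, fx_tilde, fy_tilde, f_hat, w, eta):
        shape = g.shape
        H  = psf2otf(h, shape)
        Dx = psf2otf(np.array([[1.0, -1.0]]), shape)       # d_x
        Dy = psf2otf(np.array([[1.0], [-1.0]]), shape)     # d_y = d_x^T
        num = (w * np.conj(H) * np.fft.fft2(g)
               + eta * np.conj(Dx) * np.fft.fft2(fx_tilde)
               + eta * np.conj(Dy) * np.fft.fft2(fy_tilde)
               + eta * np.fft.fft2(f_hat))
        den = (w * np.abs(H) ** 2
               + eta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
               + eta)
        f = np.real(np.fft.ifft2(num / den))
        return f                                           # then clip to [0, M] as in the text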

4.4. Solving h

Holding the other variables constant, from Eq. (1) we get the sub-problem for solving h,

    J(h) = w ||f ⊗ h − g||_2^2,
    s.t. (1) h(k, l) ≥ 0; (2) ||h||_1 = 1.    (8)

This problem has been widely discussed in blind deconvolution. It can be solved by an interior point method [9] or the Landweber iteration method [10]. We simply adopt a gradient descent scheme, and the constraints are enforced on each iteration result. h is iteratively updated as follows.

1. h^{n+1} = h^n − α dJ(h)/dh;
2. Set h_l^{n+1} = 0 if h_l^{n+1} < 0; then normalize h^{n+1} by h^{n+1} = h^{n+1} / ||h^{n+1}||_1.

The subscript l denotes the l-th element of the kernel. The step parameter α is 0.1 in our experiments.
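A sketch of this projected gradient descent is given below. It assumes zero-padded boundaries for the 'same'-size convolution and leaves the number of inner steps as a parameter; with a large w the gradient may need rescaling in practice, so this is an illustration rather than the authors' implementation.

    import numpy as np
    from scipy.signal import fftconvolve

    def grad_h(f, h, g, w):
        """Gradient of J(h) = w * ||f (x) h - g||^2 with respect to the kernel h."""
        h = np.asarray(h, dtype=np.float64)
        kh, kw = h.shape
        cr, cw = (kh - 1) // 2, (kw - 1) // 2
        r = fftconvolve(f, h, mode='same') - g            # residual f (x) h - g
        fp = np.pad(f, ((kh, kh), (kw, kw)), mode='constant')
        rp = np.pad(r, ((kh, kh), (kw, kw)), mode='constant')
        grad = np.empty_like(h)
        for m in range(kh):                               # kernel is small, loops are fine
            for n in range(kw):
                shifted = np.roll(np.roll(fp, m - cr, axis=0), n - cw, axis=1)
                grad[m, n] = 2.0 * w * np.sum(rp * shifted)
        return grad

    def update_h(f, h, g, w, alpha=0.1, n_iter=15):
        for _ in range(n_iter):                           # number of gradient steps
            h = h - alpha * grad_h(f, h, g, w)
            h = np.maximum(h, 0.0)                        # non-negativity
            h = h / max(h.sum(), 1e-12)                   # ||h||_1 = 1
        return h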
In the previous paragraphs, we have defined the objective function for estimating the sharp image and the PSF, introduced the auxiliary variables that facilitate the numerical computation and form Eq. (2), and detailed the optimization methods for solving all the sub-problems. The complete document image deblurring algorithm is summarized in Algorithm 1.

Algorithm 1. Document Image Deblurring Algorithm
Input: the blurred image g; the kernel size.
Initialization:
    compute the piecewise linear function Ψ;
    compute the image M by the max filter;
    initialize the PSF h as a 2D Gaussian function;
    let the deblurred image f = g.
Repeat:
    1. Update f by the continuation scheme:
       for (η = 1; η < 2^10; η = η × 2√2) {
           obtain f̂ by minimizing (3);
           obtain f̃_x and f̃_y by minimizing (5);
           update f by (7);
           enforce the bound constraints by f = min(f, M) and f = max(f, 0). }
    2. Update h by minimizing (8). Gradient descent is adopted and repeats N_1 times.
Until convergence or until the maximum number of iterations N_2 has been reached.
Return f and h.

[Figure 5: the synthesized blurred image with the ground-truth PSF, and the deblurred results of the compared methods.]
Figure 5: Experiments on a synthesized image. The top-left portion shows the synthesized blurred image and the ground-truth PSF. The others are the estimated sharp images and PSFs produced by Fergus et al. [19], Shan et al. [9] and the proposed method respectively. The small regions in the red boxes are extracted for easier visual inspection.

[Figure 6: a second synthesized blurred image and the corresponding deblurred results.]
Figure 6: The second experiment on a synthesized image.

5. Experiments

For Algorithm 1, various experiments show that the parameter setting N_1 = 15 and N_2 = 6 produces pleasing results. Although N_1 gradient descent steps may not result in complete convergence, such a low tolerance for the inner iteration is effective for the whole deblurring algorithm, as discussed in Chan et al. [13].

The default values for the other parameters are all 10^3, namely w, β, λ = 10^3, which works well for most cases. It should be noted that the proposed algorithm is also able to denoise the image: β controls the weight of the TV term, and for a noisy image a larger β makes the estimated f cleaner.

Usually, four or five outer iterations are enough to produce satisfactory estimation results. Each outer iteration takes less than 9 seconds for a blurred image of size 500 × 500 on a Windows PC with an Intel 2.6 GHz CPU, based on Matlab code. In the following experiments, the estimation results produced by Fergus et al. [19] and Shan et al. [9] are compared with ours. Their parameters are hand-tuned to produce the best results.

Figures 5 and 6 show the experimental results on synthesized blurry images, where the ground-truth PSF is known. Even though the statistical properties of document images may differ from those of natural-scene images, the heavy-tailed-distribution-guided estimation methods [9, 19] may still produce reasonable results, as shown by our experiments. However, since more prior knowledge of the document image is harnessed in our estimation model, better results are produced. It is noticeable that in all of the following figures, the notorious ringing caused by deconvolution is successfully suppressed by the proposed algorithm. Zoom in to inspect the details.

Figures 7 and 8 show experiments on deblurring real optically blurred and motion-blurred images, where only the blurred image is observed. Again, our estimation results are neater and clearer. The white background regions in our estimated images are smooth, and the ringing is unnoticeable. In particular, Figure 8 demonstrates the denoising capability of the proposed method. Compared with the images estimated by Fergus et al., who adopted Lucy-Richardson non-blind deconvolution, our estimated sharp image obviously has less noise. Another observation is that the foreground pixels appear darker in our deblurred image, which benefits from the Ψ(f) term.

[Figure 7: a blurred input and the results of Fergus et al., Shan et al. and the proposed method.]
Figure 7: Experiments on an optically blurred image. Again, the results of Fergus et al. [19], Shan et al. [9] and ours are compared.

[Figure 8: a real motion blurred input and the corresponding deblurred results.]
Figure 8: Experiments on a real motion blurred image.

[Figure 9: a cropped blurry handwritten document image and three deblurred results.]
Figure 9: Handwritten document image deblurring. A cropped blurry image and three deblurred results are shown, produced by Shan et al. [9], Fergus et al. [19] and the proposed method respectively.

Figure 9 shows an example of handwritten document image deblurring. It also shows that our restored image, presented in the last row, is sharp and clear. Besides, the ringing is satisfactorily suppressed and unnoticeable.

6. Conclusions

This paper proposes a new algorithm for document image deblurring. The key contribution of the paper is that the distinct characteristics of document images are investigated.

These characteristics function as priors and are integrated into the proposed document image deblurring algorithm in a Bayesian framework. The two adopted priors, namely the desired intensity probability density function and the local structure constraints of document images, are shown to be effective. They lead the proposed deblurring algorithm to produce satisfactory results and to successfully suppress ringing.

Acknowledgments

We would like to thank Jia Liu and the anonymous reviewers; their valuable comments improved the quality of this paper. This work was supported by the Chinese Ministry of Science and Technology intergovernmental cooperation project under Grant No. 2009DFA12870, and the National Science Foundation of China under Grant No. 61075012.

References

[1] Y. Tian and W. Ming, Adaptive deblurring for camera-based document image processing, Lecture Notes in Computer Science, Vol. 5876, 2009, pp. 767-777.
[2] A. Gupta, N. Joshi, C. Zitnick, M. Cohen, and B. Curless, Single image deblurring using motion density functions, in ECCV 2010.
[3] J. Bardsley, S. Jefferies, J. Nagy, and R. Plemmons, Blind iterative restoration of images with spatially-varying blur, Optics Express, Vol. 14, 2006, pp. 1767-1782.
[4] M. S. Brown and W. B. Seales, Image restoration of arbitrarily warped documents, IEEE Trans. PAMI, Vol. 26, No. 10, 2004, pp. 1295-1306.
[5] L. Zhang, A. Yip, and C. Tan, Removing shading distortions in camera-based document images using inpainting and surface fitting with radial basis functions, in ICDAR 2007.
[6] L. Likforman-Sulem, J. Darbon, and E. Barney Smith, Pre-processing of degraded printed documents by non-local means and total variation, in ICDAR 2009.
[7] L. Kenneth G., Text image deblurring by high-probability word selection, US Patent 6282324, 2001.
[8] X. Qi, L. Zhang, and C. Tan, Motion deblurring for optical character recognition, in ICDAR 2005.
[9] Q. Shan, J. Jia, and A. Agarwala, High-quality motion deblurring from a single image, ACM Transactions on Graphics (SIGGRAPH), Vol. 27, No. 3, pp. 73:1-73:10, 2008.
[10] L. Yuan, J. Sun, L. Quan, and H. Shum, Image deblurring with blurred/noisy image pairs, in SIGGRAPH 2007.
[11] S. Zhuo, D. Guo, and T. Sim, Robust flash deblurring, in CVPR 2010.
[12] J. Chen, L. Yuan, C. Tang, and L. Quan, Robust dual motion deblurring, in CVPR 2008.
[13] T. F. Chan and C. K. Wong, Total variation blind deconvolution, IEEE Transactions on Image Processing, Vol. 7, No. 3, pp. 370-375, March 1998.
[14] S. Babacan, R. Molina, and A. Katsaggelos, Variational Bayesian blind deconvolution using a total variation prior, IEEE Transactions on Image Processing, Vol. 18, No. 1, 2009.
[15] J. Cai, H. Ji, C. Liu, and Z. Shen, Blind motion deblurring from a single image using sparse approximation, in CVPR 2009.
[16] J. Jia, Single image motion deblurring using transparency, in CVPR 2007.
[17] D. Krishnan and R. Fergus, Fast image deconvolution using hyper-Laplacian priors, in NIPS 2009.
[18] A. Levin, R. Fergus, F. Durand, and W. T. Freeman, Image and depth from a conventional camera with a coded aperture, ACM Transactions on Graphics, 26(3):70:1-70:9, July 2007.
[19] R. Fergus, B. Singh, A. Hertzmann, S. Roweis, and W. Freeman, Removing camera shake from a single photograph, ACM Transactions on Graphics (SIGGRAPH), Vol. 25, No. 3, pp. 787-794, 2006.
[20] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, Non-uniform deblurring for shaken images, in CVPR 2010.
[21] T. Bishop, R. Molina, and J. R. Hopgood, Blind restoration of blurred photographs via AR modelling and MCMC, in ICIP 2008.
[22] T. Kanungo and Q. Zheng, Estimating degradation model parameters using neighborhood pattern distributions: an optimization approach, IEEE Trans. PAMI, Vol. 26, No. 4, pp. 520-524, 2004.
[23] Y. Wang, J. Yang, W. Yin, and Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sciences, 1(3): 248-272, 2008.
[24] T. Kasar, J. Kumar, and A. G. Ramakrishnan, Font and background color independent text binarization, in ICDAR 2007.
[25] J. Sánchez Valverde and R. R. Grigat, Optimum binarization of technical document images, in ICIP 2000.
[26] F. Drira, F. LeBourgeois, and H. Emptoz, A coupled mean shift-anisotropic diffusion approach for document image segmentation and restoration, in ICDAR 2007, pp. 814-818.
[27] C. Gopalan and D. Manjula, Sliding window approach based text binarisation from complex textual images, International Journal on Computer Science and Engineering, Vol. 02, No. 02, 2010, pp. 309-313.
[28] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Modeling & Simulation, Vol. 4, No. 2, 2005, pp. 460-489.
[29] R. Jain, et al., Machine Vision, MIT Press and McGraw-Hill, 1995.
[30] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education, 2003.

