by
Rishaad Abdoola
in the
May 2008
DECLARATION BY CANDIDATE
I hereby declare that the dissertation submitted for the degree M Tech: Engineering:
not previously been submitted to any other institution of higher education. I further
declare that all sources cited or quoted are indicated and acknowledged by means of a
Rishaad Abdoola
To my parents, my brother and my sister
ACKNOWLEDGEMENTS
Ben van Wyk and Prof. Anton van Wyk for their guidance, support and advice during
the completion of this project. Thank you for always having time and patience and for
I would also like to thank FSATIE (French South African Technical Institute in
Electronics) for all the opportunities and facilities provided in completing this project.
I thank all the lecturers and students at FSATIE and TUT (Tshwane University of
I would like to thank Dr. Dirk Bezuidenhout and Mark Holloway from the CSIR
(Council for Scientific and Industrial Research) for their significant help in providing
Special thanks go to Tshwane University of Technology, FSATIE and the CSIR for
their financial support. The financial assistance of the DPSS (Defence, Peace, Safety
and Security) towards this research is hereby acknowledged. Opinions expressed and
conclusions arrived at, are those of the author and are not necessarily to be attributed
ABSTRACT
Heat scintillation occurs because the refractive index of air decreases as air temperature
increases, causing objects to appear blurred and to waver slowly in a
over long distances resulting in a loss of detail in the video sequences. Algorithms and
and a GUI (Graphical User Interface) is developed. Focus is placed on the removal of
GLOSSARY
MSE: Mean-Square-Error.
CONTENTS
PAGE
DECLARATION................................................................................................... iii
DEDICATION ...................................................................................................... iv
ACKNOWLEDGEMENTS ................................................................................... v
ABSTRACT. ......................................................................................................... vi
GLOSSARY. ........................................................................................................ vii
CONTENTS. ....................................................................................................... viii
LIST OF FIGURES .............................................................................................. xi
CHAPTER 1
CHAPTER 2
ALGORITHMS ....................................................................................................... 5
2.1 PREVIOUS WORK IN ATMOSPHERIC TURBULENCE ................. 5
2.1.1 Speckle Imaging .................................................................................. 5
2.1.2 Frieden's Method ............................................................... 7
2.1.3 Fusion ................................................................................................. 8
2.1.4 WFS (Wave-Front Sensing) ................................................................. 9
2.2 ALGORITHMS SELECTED FOR COMPARISON ...........................10
2.2.1 Time-Averaged Algorithm..................................................................10
2.2.1.1 Lucas-Kanade Algorithm ....................................................................11
2.2.2 First Register Then Average and Subtract ...........................................14
2.2.2.1 FATR .................................................................................................14
2.2.2.2 FRTAAS ............................................................................................14
2.2.2.3 Elastic Image Registration ..................................................................18
2.2.3 Independent Component Analysis.......................................................22
2.2.4 Restoration of Atmospheric Turbulence Degraded Video
using Kurtosis Minimization and Motion Compensation .....................27
2.2.4.1 Control Grid Interpolation ..................................................................27
2.2.4.2 Compensation .....................................................................................28
2.2.4.3 Median using fixed period enhancement .............................................32
2.2.4.4 Kurtosis minimization ........................................................................32
2.3 CONCLUSION ..................................................................................36
CHAPTER 3
CHAPTER 4
4.3.1 Real turbulence-degraded sequences without motion ..........................58
4.3.2 Real turbulence-degraded sequences with motion ...............................69
4.3.3 Sharpness of real turbulence-degraded sequences ...............................70
4.4 CONCLUSION ..................................................................................76
CHAPTER 5
BIBLIOGRAPHY .................................................................................................79
APPENDIX A: FRTAAS Matlab code ...................................................................86
APPENDIX B: ICA Matlab code ......................................................................... 144
APPENDIX C: CGI Matlab code ......................................................................... 152
APPENDIX D: Wiener Filter and Kurtosis Matlab code ...................................... 156
APPENDIX E: General Algorithm Matlab code................................................... 159
APPENDIX F: Warping Algorithm Matlab code ................................................. 168
APPENDIX G: Graphical User Interface Matlab code ......................................... 170
APPENDIX H: General Algorithm C code .......................................................... 178
APPENDIX I: CGI Algorithm C code ................................................................. 182
LIST OF FIGURES
PAGE
Figure 2.1: (a) Original Image, (b) Simulated image, (c) Flow field, (d)
Time averaged frame and (e) Corrected frame. ...................................12
Figure 2.2: (a) Original Image, (b) Simulated Image, (c) Motion Blurred
Average and (d) Incorrectly Registered Image. ...................................13
Figure 2.3: (a) to (t) Frames from Number plate sequence. (r) selected as
sharpest frame. ...................................................................................16
Figure 2.4: Graph showing sharpness levels of sequence in Fig 2.3. .....................16
Figure 2.5: (a) Distorted frame 1, (b) Distorted frame 2, (c) & (d) Shift
maps, (e) estimated warp between frames and (f) Frame 1
warped to frame 2. ..............................................................................21
Figure 2.6: 10 frames from a simulated Lenna sequence. ......................................21
Figure 2.7: 10 corrected frames corresponding to Fig 2.6 using FRTAAS.............22
Figure 2.8: (a), (b) & (c) 3 turbulent frames, (d) Extracted Source Image,
(e) & (f) Turbulent spatial patterns, (g) Time averaged frame
and (h) Extracted source image ...........................................................25
Figure 2.9: ICA applied to video sequences ..........................................................26
Figure 2.10: ICA applied to Building site sequence ................................................26
Figure 2.11: (a) Simulated Lenna frame 1, (b) CGI motion field between
(a) and (c) and (c) Simulated Lenna frame 2. ......................................28
Figure 2.12: Motion fields used in trajectory estimation with a time
window of 5 frames ............................................................................30
Figure 2.13: 10 frames from a simulated Lenna sequence .......................................31
Figure 2.14: 10 corrected frames corresponding to Fig 2.13 using CGI
algorithm ............................................................................................31
Figure 2.15: Lenna sequence where turbulence blur is increased linearly ................33
Figure 2.16: Kurtosis measurements of sequence in Fig 2.15 ..................................33
Figure 2.17: (a) Original Lenna, (b) Blurred Lenna (λ=0.001) and (c)
Restored Lenna (λ=0.001) ..................................................................34
Figure 2.18: Graph showing normalized kurtosis of sequence in Fig 2.17.
λ=0.001 ..............................................................................
Figure 2.19: Real Turbulence degraded sequence ...................................................35
Figure 2.20: Restored sequence using CGI and kurtosis algorithm ..........................36
Figure 3.1: (a) Original Lenna image, (b) Image blurred with λ=0.00025,
(c) Image blurred with λ=0.001 and (d) Image blurred with
λ=0.0025.............................................................................39
Figure 3.2: (a) Checkerboard Image, (b) Checkerboard image warped in
x-direction, (c) Checkerboard image warped in y-direction
and (d) Final warp ..............................................................................41
Figure 3.3: (a) Real turbulence degraded image, (b) Flow field obtained
from (a), and (c) Simulated geometric distortion using flow
field ....................................................................................................42
Figure 3.4: (a), (b) & (c) 3 frames from simulated Lenna sequence with
blurring and geometric distortion ........................................................42
Figure 3.5: Motion of a pixel in a real turbulence sequence...................................44
Figure 3.6: Pixel wander for simulated Lenna sequence ........................................45
Figure 3.7: GUI for simulating atmospheric turbulence ........................................46
Figure 4.1: MSE of Lenna sequence with λ=0.001 and pixel motion set to
5 .........................................................................................50
Figure 4.2: MSE of Flat sequence with λ=0.001 and pixel motion set to 5 ............50
Figure 4.3: MSE of Satellite sequence with λ=0.001 and pixel motion set
to 5 .....................................................................................51
Figure 4.4: MSE of Girl1 sequence with λ=0.001 and pixel motion set to
5 .........................................................................................51
Figure 4.5: MSE of Lenna sequence with λ=0.0025 and pixel motion set
to 3 .....................................................................................52
Figure 4.6: MSE of Flat sequence with λ=0.0025 and pixel motion set to
3 .........................................................................................53
Figure 4.7: MSE of Military sequence with λ=0.0025 and pixel motion
set to 3 ................................................................................53
Figure 4.8: MSE of Room sequence with λ=0.00025 and pixel motion set
to 5 .....................................................................................54
Figure 4.9: MSE of Lenna sequence with λ=0.00025 and pixel motion set
to 5 .....................................................................................55
Figure 4.10: 3 frames from simulated motion sequence ..........................................55
Figure 4.11: MSE of Fixed Window CGI and Fixed Window CGI using
Median on simulated motion sequence 1 .............................................56
Figure 4.12: MSE of simulated motion sequence 1 with λ=0.001 and pixel
motion set to 5 ....................................................................57
Figure 4.13: MSE of simulated motion sequence 2 with λ=0.001 and pixel
motion set to 5 ....................................................................57
Figure 4.14: MSE between consecutive frames of Armscor sequence .....................59
Figure 4.15: Real turbulence degraded frame of Armscor building .........................59
Figure 4.16: CGI corrected frame of Armscor building ...........................................60
Figure 4.17: FRTAAS corrected frame of Armscor building...................................60
Figure 4.18: Time-averaged corrected frame of Armscor building ..........................61
Figure 4.19: ICA corrected frame of Armscor building ...........................................61
Figure 4.20: Real turbulence degraded frame of building site sequence at a
distance of 5km ..................................................................................62
Figure 4.21: MSE between consecutive frames of Building site sequence ...............63
Figure 4.22: MSE between consecutive frames of Tower sequence ........................64
Figure 4.23: (a) Real turbulence-degraded frame of a tower at a distance
of 11km and (b) same tower at 11km but with negligible
turbulence ...........................................................................................65
Figure 4.24: (a) Frame from CGI algorithm output sequence and (b) frame
from FRTAAS algorithm ....................................................................66
Figure 4.25: (a) Frame from Time-averaged algorithm output sequence
and (b) frame from ICA algorithm ......................................................67
Figure 4.26: Real turbulence-degraded frame of a shack at a distance of
10km ..................................................................................................68
Figure 4.27: MSE between consecutive frames of Shack sequence .........................68
Figure 4.28: MSE between consecutive frames of Building site sequence
with motion ........................................................................................69
Figure 4.29: Sharpness of frames in Shack sequence using CGI algorithm..............70
Figure 4.30: Sharpness of frames in Shack sequence using CGI algorithm
with kurtosis enhancement ..................................................................71
Figure 4.31: Sharpness of frames in Shack sequence using Time-averaged
algorithm ............................................................................................71
Figure 4.32: Sharpness of frames in Shack sequence using ICA algorithm..............72
Figure 4.33: Sharpness of frames in Shack sequence using FRTAAS
algorithm ............................................................................................72
Figure 4.34: Sharpness of frames in Building site sequence using Time-
averaged algorithm .............................................................................73
Figure 4.35: Sharpness of frames in Building site sequence using ICA
algorithm ............................................................................................74
Figure 4.36: Sharpness of frames in Building site sequence using
FRTAAS algorithm ............................................................................74
Figure 4.37: Sharpness of frames in Building site sequence using CGI
algorithm with Kurtosis enhancement .................................................75
Figure 4.38: Sharpness of frames in Building site sequence using CGI
algorithm ............................................................................................75
CHAPTER 1
PROJECT OVERVIEW
1.1 INTRODUCTION
over long distances. The resultant video sequences appear blurred and waver in a quasi-
periodic fashion. This poses a significant problem in certain fields such as astronomy and
Hardware methods proposed for countering the effects of atmospheric turbulence such as
adaptive optics and DWFS (Deconvolution from Wave-Front Sensing) require complex
devices such as wave-front sensors and can be impractical. In many cases image
processing methods have proved more practical, since corrections are made after the video is
acquired [1, 3, 10, 13, 18]. Heat scintillation can therefore be removed from any existing
Numerous image processing methods have been proposed for the compensation of blurring
effects. Fewer algorithms have been proposed for the correction of geometric distortions
induced by atmospheric turbulence. Each method has its own set of advantages and
disadvantages.
In this dissertation the focus is placed on comparing image processing methods proposed
to restore video sequences degraded by heat scintillation. The comparative analysis is
followed by the selection of the method best suited to real-time implementation. An
restore sequences degraded by the effects of atmospheric turbulence with the focus placed
1.3 METHODOLOGY
Results in the dissertation were obtained using datasets divided into two categories: Real
datasets and simulated datasets. The real datasets consist of sequences obtained in the
presence of real atmospheric turbulence. These datasets were obtained from the CSIR
(Council for Scientific and Industrial Research) using their Cyclone camera and vary in
range from 5km-15km. The simulated sequences were generated using ground truth
images/sequences. Both datasets can be further divided into sequences with real-motion
1.4 ASSUMPTIONS
The motion due to atmospheric turbulence is assumed to be small compared to the motion
of objects in the scene or motion caused by the camera such as panning and zooming. The
effect of atmospheric turbulence is taken to be either random at each pixel but highly
This work is limited to the implementation and comparison of different algorithms for the
stationary but the objects within the scene need not be. Panning and zooming will
The algorithms chosen will focus on recent work in which the main consideration
1.6 CONTRIBUTIONS
The CGI (Control Grid Interpolation) algorithm was modified to use the median,
scene.
This chapter is an introduction to the project and contains the problem statement,
Chapter two deals with the literature review on both post-processing techniques as well as
hardware methods for correcting atmospheric turbulence. Most of the literature discussed
applies to the correction of the blurring induced by turbulence. The algorithms chosen for
The algorithm proposed for the simulation of atmospheric turbulence used in this
Chapter four focuses on the datasets, the results and a discussion of the results. The real-
ALGORITHMS
The following chapter presents previous work in reducing the effects of atmospheric
turbulence as well as the algorithms selected for comparison. The previous works
discussed focus mainly on dealing with the blurring caused by atmospheric turbulence, as
the bulk of the research completed has been in this area. The algorithms chosen for
comparison are presented in detail. These algorithms were selected based on post
processing of the turbulence as well as their ability to compensate and/or minimise both the
Sections 2.1.1 to 2.1.4 outline current methods available for reducing the effects of
atmospheric turbulence. They also cover hardware methods of dealing with turbulence, such
as adaptive optics.
Speckle imaging is the process of reconstructing an object through the use of a series of
short exposure images. This is usually achieved by estimating the Fourier transform, phase
and magnitude, of the object. The phase and magnitude are handled separately. Some of
the methods proposed using speckle imaging are Knox-Thompson [15, 25] and Labeyrie
Labeyrie made the observation that speckles in short-exposure images contain more high
spatial-frequency information about an object than long-exposure images. Therefore, by
using the short-exposure images, more detail can be extracted. Using a large
transform, required to obtain the object energy spectrum, the phase of the object is lost.
The phase spectrum is required to create an image. Labeyrie's method can be outlined
as follows:
point.
used. The cross-spectrum is a specialised moment of the image spectra which contains
information about the phase of the object [15]. The cross-spectrum can therefore be used to
The main problem of speckle imaging is the requirement of a reference point or reference
images. The method is well suited to astronomy, where reference images can be obtained.
Another problem is that a large number of short-exposure images is required to achieve
good results.
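The core of Labeyrie's approach, averaging the power spectra of many short exposures to retain high spatial frequencies, can be sketched in a few lines. The following is a minimal NumPy illustration; the function name and array layout are assumptions for this sketch, not drawn from the dissertation's Matlab appendices.

```python
import numpy as np

def mean_power_spectrum(frames):
    """Estimate the object energy spectrum by averaging the power
    spectra of many short-exposure frames (Labeyrie's approach).
    `frames` is an iterable of equally sized 2-D grayscale arrays."""
    acc = None
    n = 0
    for f in frames:
        F = np.fft.fft2(f.astype(float))
        p = np.abs(F) ** 2          # power spectrum of one short exposure
        acc = p if acc is None else acc + p
        n += 1
    return acc / n                  # average of |FFT|^2 over the exposures
```

Because the power spectrum discards phase, a separate step such as the Knox-Thompson cross-spectrum is still required to recover the phase before an image can be formed.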
Frieden's method deals with the problem of imaging through turbulence by making use of
blind deconvolution. Blind deconvolution is used when no information about the distortion
of the image is known. It is capable of restoring the image as well as its point spread
function. The estimation of the PSF (Point Spread Function) is of vital importance, as
choosing the incorrect one will lead to an unsuccessful restoration.
Unlike the other methods discussed, Frieden's approach only requires two short-exposure
images instead of a number of frames from a video sequence. No reference point sources
are required, as is the case in speckle imaging. The two short-exposure frames are used to
estimate the point spread functions. This is achieved by taking the Fourier transforms of
the two images and dividing one spectrum by the other, as shown in (1)
D_n = I_n^(1) / I_n^(2) = (O_n τ_n^(1)) / (O_n τ_n^(2)) = τ_n^(1) / τ_n^(2)    (1)
where I_n represents the observed image spectra, τ_n represents the OTFs (Optical Transfer
Functions) which have to be estimated, and O_n represents the unknown object spectrum.
From (1) it can be seen that for the division of the images to eliminate the unknown
spectrum of the object, there can be no motion present in the scene. Using the division of
the two unknown OTFs, a set of n linear equations can be generated to give a solution to
the unknown OTFs. Once the point spread functions have been estimated the blind
deconvolution algorithm can be used to correct the atmospheric turbulence in the image.
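The spectral division in (1) is straightforward to sketch. The following is an illustrative NumPy fragment; the small `eps` regularisation is an addition to guard against division by near-zero spectral components and is not part of Frieden's derivation.

```python
import numpy as np

def otf_ratio(img1, img2, eps=1e-8):
    """Ratio D_n of the two short-exposure image spectra, as in (1).
    Because the object spectrum O_n is common to both exposures it
    cancels, leaving the ratio of the two unknown OTFs.  `eps` guards
    against (near-)zero spectral components (my addition)."""
    I1 = np.fft.fft2(img1.astype(float))
    I2 = np.fft.fft2(img2.astype(float))
    return I1 / (I2 + eps)
```

Note that the cancellation of O_n holds only if the scene is identical in both exposures, which is exactly the no-motion assumption stated above.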
This method makes a number of assumptions, the first being that no additive noise is
present in the images. The presence of 1% additive Gaussian noise in the images degraded
the quality of the PSFs significantly. The second assumption made is that there is no
2.1.3 Fusion
Image fusion has also been proposed to enhance images blurred by atmospheric
in several domains [23]. However, fusion has also been used to obtain a less degraded
image of a scene by using a single sensor to obtain several images of the scene, making it
suitable for atmospheric turbulence. The degradations could be caused by motion in the
Zhao [18] made use of this by using image fusion to compensate for photometric variations
caused by turbulence. The algorithm proposed by Zhao can be outlined as follows [18]:
3. The image is reconstructed from level N to 0 using the weights to combine all
The energy of the local Laplacian pattern was used as a salience measure. The method by
Zhao also provided a method of compensating for the geometric distortion by using a
reference frame which is selected manually. This is further discussed in section 2.2.1 under
the Time-Averaged Algorithm. The method proposed by Zhao requires all motion in the
A wave-front sensor is capable of measuring the phase perturbation in each short exposure
image. This can be used in two ways to correct for atmospheric turbulence.
The first method would be in an adaptive optics system in which the wave-front sensor is
used to control a deformable mirror. In this system the AOI (Adaptive Optics Imaging)
system will require three main components: WFS, Controller and a deformable mirror. The
WFS is used to measure the phase perturbations. These measurements are then used by the
controller to manipulate the deformable mirror. The deformable mirror is then the means by
which the wave front is corrected [15]. A deformable mirror is any unit capable of
changing the perturbed wave front, such as a tip-tilt mirror. These mirrors are segmented and
controlled by actuators to manipulate the wave front. This method of correcting for
turbulence requires complex devices, i.e. deformable mirrors and wave-front sensors, which
The second method is a hybrid imaging technique that uses a wave-front sensor as well as
from the WFS are used to estimate an OTF for the current short exposure image. This OTF
is then used to correct the short-exposure image through deconvolution [15]. While this
method still requires WFS for the phase measurements, the deformable mirrors are not
Four groups of algorithms were chosen for comparison. The algorithms chosen are based
on post processing of the turbulence as well as their ability to compensate and/or minimise
both the distortion effects and the blurring caused by turbulence. Sections 2.2.1 to 2.2.4
outline the background of the algorithms as well as their respective advantages and
disadvantages.
number of algorithms for correcting atmospheric turbulence [3, 11, 18]. This technique
separates heat scintillation into two effects, i.e. distortion and blurring. Each effect is
then dealt with individually. The first step uses some form of image registration to bring
the images into alignment and compensate for the shimmering effect. The alignment is
usually done against a reference frame. The second step deals with the blurring induced by
Image registration is performed for each frame with respect to a reference frame. This
provides a sequence that is stable and geometrically correct. The reference frame can be
selected by using a frame from the sequence with no geometric distortion and minimal
blurring. This is, however, impractical, since the frame would have to be selected manually,
and the probability of finding an image that has both minimal distortion and is
would be the temporal averaging of a number of frames in the sequence. Since atmospheric
For the purpose of registration, an optical flow algorithm as proposed by Lucas and
I(x, y, t) = I(x + δx, y + δy, t + δt)    (2)
where I(x,y,t) represents the intensity of a pixel, x and y represent the co-ordinates of a
pixel and t represents the frame number. The assumption is made that the intensity of the
I(x + δx, y + δy, t + δt) = I(x, y, t) + (∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂t)δt    (3)
where all higher-order terms can be discarded. Since there are two unknowns (δx, δy) and
only one equation, it is an ill-posed problem. By assuming that the optical flow is locally
constant within a small, n x n patch, additional equations can be obtained. A least squares
[ Ix1  Iy1 ]            [ It1 ]
[ Ix2  Iy2 ] [ Vx ] = − [ It2 ]
[  ⋮    ⋮  ] [ Vy ]     [  ⋮  ]
[ Ixn  Iyn ]            [ Itn ]    (4)

where ∂I/∂x, ∂I/∂y and ∂I/∂t are denoted as Ix, Iy and It respectively.
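The over-determined system above is solved by least squares within each small patch. The following minimal NumPy sketch illustrates the idea (the dissertation's implementation is in Matlab; the window handling and derivative estimates here are simplifying assumptions):

```python
import numpy as np

def lucas_kanade_patch(f0, f1, win=5):
    """Dense Lucas-Kanade flow: solve the over-determined system by
    least squares in every win x win neighbourhood.  f0, f1 are
    consecutive grayscale frames; returns per-pixel (Vx, Vy)."""
    f0 = f0.astype(float)
    f1 = f1.astype(float)
    Iy, Ix = np.gradient(f0)            # spatial derivatives
    It = f1 - f0                        # temporal derivative
    h, w = f0.shape
    r = win // 2
    Vx = np.zeros((h, w))
    Vy = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            # one row [Ix, Iy] per pixel in the patch, as in (4)
            A = np.stack([Ix[y-r:y+r+1, x-r:x+r+1].ravel(),
                          Iy[y-r:y+r+1, x-r:x+r+1].ravel()], axis=1)
            b = -It[y-r:y+r+1, x-r:x+r+1].ravel()
            v, *_ = np.linalg.lstsq(A, b, rcond=None)
            Vx[y, x], Vy[y, x] = v
    return Vx, Vy
```

In practice a coarse-to-fine pyramid is usually added so that displacements larger than the patch can still be recovered; that refinement is omitted here for brevity.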
Figure 2.1 shows the time-averaged algorithm applied to a distorted Lenna sequence.
Figure 2.1: (a) Original Image, (b) Simulated image, (c) Flow field, (d) Time averaged frame and (e) Corrected frame.
The reference frames were generated by averaging the first N frames of the turbulence
degraded sequences. Using time-averaging to generate a reference frame works well when
there is no real motion present in the scene, however if real motion is introduced, the
averaging of the sequence will generate a reference frame that is motion blurred and worse
than the turbulence-degraded frames. Using this reference frame to register the sequence
Figure 2.2: (a) Original Image, (b) Simulated Image, (c) Motion Blurred Average and (d) Incorrectly Registered Image.
The algorithm performs well when there is no real motion present in the scene, i.e. no
moving objects and no panning or zooming of the camera. The video sequence is stabilized, except for a
few discontinuities caused by the optical flow calculations and/or the warping algorithm.
Localized blur will be present due to the averaging of the frames to obtain a reference
frame and this hinders the registration algorithm. When real motion is present in the scene
the reference frame is degraded due to motion blur. Warping any frame towards the
reference frame will cause further distortions and the restored sequence will be degraded to
The FRTAAS (First Register Then Average and Subtract) algorithm is an improvement of
the FATR (First Average Then Register) algorithm, proposed by Fraser, Thorpe and
Lambert [3]. The FATR algorithm is very similar to the Time-averaged algorithm
discussed in section 2.2.1. The FRTAAS algorithm aims to address the problem of
localized blur by minimizing the effect of the averaging process used to create the
reference frame.
2.2.2.1 FATR
The FATR algorithm registers each frame in the image sequence against an averaged
hierarchically shrinking region based on the cross correlation between two windowed
regions. To obtain improved results the dewarped frames are then averaged once again to
obtain a new reference frame and the sequence is put through the algorithm once again. As
discussed in section 3.1 the blur due to the temporal averaging will still be present. The
2.2.2.2 FRTAAS
In FRTAAS the averaging approach used to create the reference frame in FATR is avoided
by allowing any one of the frames in a sequence to be the reference frame. However due to
the time varying nature of atmospheric turbulence, some of the frames in the sequence will
not be as severely degraded as others. This would mean that it would be possible to obtain
a reference frame in which the atmospheric induced blur would be minimal. A sharpness
metric is used to select the least blurred frame in the sequence. This frame can also be
Figure 2.3 shows 20 frames from a real turbulence-degraded sequence. Visually it can be
seen that the 18th frame in the sequence is the sharpest as the number plate is more clearly
visible. The graph in Figure 2.4 shows that the sharpness metric has selected the 18th frame as the sharpest.
Figure 2.3: (a) to (t) Frames from Number plate sequence. (r) selected as sharpest frame.
(CSIR Dataset).
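Reference-frame selection by a sharpness metric can be sketched as follows. The dissertation does not specify the exact metric at this point, so gradient energy is used here purely as an illustrative assumption; any monotone measure of high-frequency content would serve the same role.

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy sharpness measure (one common choice; the
    dissertation's exact metric may differ).  Higher = sharper."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

def sharpest_index(frames):
    """Index of the least-blurred frame, used as the FRTAAS reference."""
    return int(np.argmax([sharpness(f) for f in frames]))
```

Applied to a sequence such as the number-plate example above, `sharpest_index` would return the frame whose metric peaks in Figure 2.4.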
reference frame. All frames in the sequence are then warped to the reference frame. The
shift maps that are used to warp the frames in the sequence to the reference frame are then
used to determine the truth image. In the FATR method the truth image was obtained by
temporal averaging. However by instead averaging the shift maps used to register the
turbulent frames to the warped reference frame, a truth shift map which warps the truth
frame into the reference frame is obtained. The averaging of the shift maps can be used
The warping using the shift maps, xs and ys, can be described as
r(x, y, t) = g(x + xs(x, y, t), y + ys(x, y, t), t)    (6)
representing a backward mapping where r(x,y,t) is the reference frame and g(x,y,t) is a
distorted frame from the sequence. Once the shift maps, xs and ys, have been obtained for
each frame in the sequence, the centroids, Cx and Cy, which are used to calculate the pixel
Cx(x, y) = (1/N) Σ_{t=1..N} xs(x, y, t)

Cy(x, y) = (1/N) Σ_{t=1..N} ys(x, y, t).    (7)
It is important to note that, since the warping represents a backward mapping, the shift
maps obtained do not tell us where each pixel goes from r(x,y,t) to g(x,y,t), but rather
where each pixel in g(x,y,t) comes from in r(x,y,t). Therefore the inverses of Cx and Cy are
then calculated and used to determine the corrected shift map of each warped frame in the
sequence as
Xs(x, y, t) = Cx^-1(x, y) + xs(x + Cx^-1(x, y), y + Cy^-1(x, y), t)

Ys(x, y, t) = Cy^-1(x, y) + ys(x + Cx^-1(x, y), y + Cy^-1(x, y), t)    (8)
where Xs and Ys are the corrected shift maps used to correct the frames in the original
Using Xs and Ys one is therefore able to obtain the geometrically improved sequence using
f(x, y, t) = g(x + Xs(x, y, t), y + Ys(x, y, t), t).    (9)
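The shift-map averaging and correction of (7)-(8) can be sketched as follows. This is a NumPy illustration only: the inverse centroid field is approximated by simple negation (a first-order approximation; an exact inverse requires iterative refinement) and the resampling is nearest-neighbour, both simplifications relative to the dissertation's Matlab implementation in Appendix A.

```python
import numpy as np

def frtaas_correct(xs, ys):
    """Sketch of FRTAAS equations (7)-(8).  xs, ys are stacks of shift
    maps of shape (N, H, W) mapping each frame onto the reference.
    Returns the corrected shift maps Xs, Ys of (8)."""
    Cx = xs.mean(axis=0)            # centroid shift maps, eq. (7)
    Cy = ys.mean(axis=0)
    Cx_inv, Cy_inv = -Cx, -Cy       # approximate inverse mapping (assumption)
    H, W = Cx.shape
    yy, xx = np.mgrid[0:H, 0:W]
    # sample xs/ys at the displaced positions (nearest-neighbour)
    xi = np.clip(np.rint(xx + Cx_inv).astype(int), 0, W - 1)
    yi = np.clip(np.rint(yy + Cy_inv).astype(int), 0, H - 1)
    Xs = Cx_inv + xs[:, yi, xi]
    Ys = Cy_inv + ys[:, yi, xi]
    return Xs, Ys
```

The corrected maps would then drive the backward warp of (9) to produce the geometrically improved sequence.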
The registration was done by using a differential elastic image registration algorithm
The registration algorithm models the mapping between images as a locally affine but
globally smooth warp that explicitly accounts for variations in image intensities [4]. The
                  [ a11  a12  0 ]
[u v 1] = [x y 1] [ a21  a22  0 ].    (10)
                  [ a31  a32  1 ]
From this
f(x, y, t) = f(u, v, t−1)
u = a11 x + a21 y + a31
v = a12 x + a22 y + a32    (11)
where aij are the affine parameters. The aij parameters are estimated locally within a small n x n patch, Q, by
minimizing

Σ_{x,y ∈ Q} [f(x, y, t) - f(a11 x + a21 y + a31, a12 x + a22 y + a32, t-1)]².    (12)
This can be done by expanding (12) using a first-order Taylor series expansion:

Σ_{x,y ∈ Q} [f(x, y, t) - (f(x, y, t) + (a11 x + a21 y + a31 - x) ∂f/∂x + (a12 x + a22 y + a32 - y) ∂f/∂y + (t - 1 - t) ∂f/∂t)]²

which simplifies to

Σ_{x,y ∈ Q} [∂f/∂t - (a11 x + a21 y + a31 - x) ∂f/∂x - (a12 x + a22 y + a32 - y) ∂f/∂y]².    (13)
Equation (13) can be minimized by differentiating with respect to the unknowns and
setting the result equal to zero, which yields

a = [ Σ_{x,y ∈ Q} c c^T ]^{-1} [ Σ_{x,y ∈ Q} c k ]    (14)

where

c = [ x ∂f/∂x   y ∂f/∂x   ∂f/∂x   x ∂f/∂y   y ∂f/∂y   ∂f/∂y ]^T
k = ∂f/∂t + x ∂f/∂x + y ∂f/∂y    (15)
a = [ a11 a21 a31 a12 a22 a32 ]^T.
This will provide us with all the parameters, aij, required to define the affine warp [5].
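As an illustration, the least-squares solution of (14)-(15) for a single patch can be sketched as follows. This is a minimal sketch with hypothetical names, assuming simple central-difference derivatives and a two-frame input; the multi-scale, smoothness-constrained machinery of [4] is omitted.

```python
import numpy as np

def estimate_affine_patch(f_prev, f_cur, patch):
    # Solve Eq. (14) over one patch Q by linear least squares.
    # patch: (row_slice, col_slice) selecting Q; x = column, y = row index.
    fx = np.gradient(f_cur, axis=1)[patch].ravel()   # df/dx
    fy = np.gradient(f_cur, axis=0)[patch].ravel()   # df/dy
    ft = (f_cur - f_prev)[patch].ravel()             # df/dt ~ f(t) - f(t-1)
    h, w = f_cur.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx[patch].ravel().astype(float)
    y = yy[patch].ravel().astype(float)
    # rows of C are the c vectors of Eq. (15)
    C = np.stack([x * fx, y * fx, fx, x * fy, y * fy, fy], axis=1)
    k = ft + x * fx + y * fy
    a, *_ = np.linalg.lstsq(C, k, rcond=None)
    return a  # [a11, a21, a31, a12, a22, a32]
```

With two identical frames the recovered warp is the identity, a11 = a22 = 1 with the remaining parameters zero.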
The registration algorithm also accounts for intensity changes between images in which the
image brightness constancy assumption fails. By modifying (11) to include two additional
parameters, the model can explicitly account for variations in intensity:

ac f(x, y, t) + ab = f(u, v, t-1)    (16)

where ac and ab represent contrast and brightness [5]. This equation is then solved in a
manner similar to (12)-(14).
The final aspect of the algorithm deals with the addition of a smoothness constraint which
replaces the assumption of image brightness constancy. The assumption is made that the
parameters required vary smoothly across space [5]. This allows for a larger spatial
support to be used.
Figure 2.5 shows a selected reference frame (target) and the warping of a frame from a
simulated sequence (source) to the target frame. It also shows the estimated warp and the
shift maps.
Figure 2.5: (a) Distorted frame 1, (b) Distorted frame 2, (c) & (d) Shift maps, (e) estimated warp.
Figure 2.6 shows 10 frames from a simulated turbulent Lenna sequence. Figure 2.7 shows
the restored images, corresponding to the frames in Figure 2.6, using the FRTAAS
algorithm.
The FRTAAS algorithm performed well with no motion present in the scene. The restored
sequences were an improvement over the Time-Averaged Algorithm. This was achieved by
avoiding temporal averaging of the turbulent frames. The algorithm, however, has problems in
the presence of real motion in the scene.

ICA (Independent Component Analysis) is one of a family of methods for separating data
into their underlying informational components [8]. In the
algorithm proposed by Kopriva et al. [10], the underlying data takes the form of images. By
treating the frames of the turbulent sequence as sensors and taking the turbulence spatial
patterns as sources along with the original frame, ICA can be applied to the sequence to
extract the original frame. The mixing model can be written as

Ii = A I0 + v    (17)

where the rows of Ii are the turbulent frames, vectorised according to the
dimensions of the frames, I0 represents the original image, A is the mixing matrix
which distorts the original frames to form the turbulent frames and v represents additive
noise.
ICA aims to find a transformation (un-mixing) matrix W that will separate the turbulent frames:

Io = W Ii.    (18)

For ICA to work it is assumed that the source signals are mutually statistically independent,
since ICA will estimate W so that Io will represent estimated source signals that are as
statistically independent as possible. Io will then contain the scaled original signal and the
turbulent spatial patterns. The JADE (Joint Approximate Diagonalisation of Eigenmatrices) algorithm
was used to estimate the un-mixing matrix W. In JADE statistical independence is achieved
through the minimization of the squares of the fourth-order cross-cumulants among the
components of Io, i.e.
W = arg min_W Σ_{j,k,l,m} off( W^T C(Iij, Iik, Iil, Iim) W )    (19)

where

off(A) = Σ_{1 ≤ i ≠ j ≤ N} aij²    (20)

and C(Iij, Iik, Iil, Iim) are sample estimates of the fourth-order cross-cumulants [10].
If two frames in the sequence are identical, the un-mixing matrix cannot be calculated. To prevent this, the algorithm measures the mutual
information between frames using the symmetric Kullback-Leibler distance
D(p, q) = L(p, q) + L(q, p)    (21)

where L(p, q) = Σ_{k=1}^{T} pk log2(pk / qk), L(q, p) = Σ_{k=1}^{T} qk log2(qk / pk), and pk and qk are pixel intensities
at the spatial coordinate k = (x, y). If two images are the same, the distance D(p, q) will be
zero.
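As a sketch, the symmetric Kullback-Leibler distance of (21) can be computed as follows. Normalising the intensities to sum to one and adding a small epsilon to guard against log(0) are implementation choices here, not part of the source definition.

```python
import numpy as np

def kl_distance(p_img, q_img, eps=1e-12):
    # Eq. (21): D(p, q) = L(p, q) + L(q, p) over pixel intensities.
    p = p_img.ravel().astype(float)
    q = q_img.ravel().astype(float)
    p = p / p.sum() + eps                 # normalise; eps avoids log(0)
    q = q / q.sum() + eps
    L_pq = np.sum(p * np.log2(p / q))
    L_qp = np.sum(q * np.log2(q / p))
    return L_pq + L_qp
```

Identical images give a distance of zero, which is exactly the degenerate case the algorithm checks for before estimating the un-mixing matrix.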
Figure 2.8 shows 3 turbulent frames from a simulated Lenna sequence applied to the ICA
algorithm. The extracted source image and turbulent spatial patterns are also shown.
Figure 2.8: (a), (b) & (c) 3 turbulent frames, (d) Extracted Source Image, (e) & (f)
Turbulent spatial patterns, (g) Time averaged frame and (h) Extracted source image.
The algorithm proposed was modified to function with video sequences by using a frame
window size of three. At each iteration, the frame window was incremented by one to
include a new frame, thereby allowing the preservation of the sequence length as well as
ensuring that the mutual information, in most cases, would not be the same. This is
important because identical frames would prevent the un-mixing matrix from being estimated.
An alternate method used to apply the algorithm to video sequences was to use the
extracted source image from the current window set and apply this frame to the next
window set. This would allow us to always have a frame that is close to the source image
in the window set. This method however would cause the Kullback-Leibler divergence to
The ICA algorithm is advantageous because only a few frames are required to extract the
source frame and it performs better than the simple temporal averaging method since the
loss of details experienced with averaging is avoided. The disadvantages with the
algorithm are the assumptions that all motion has been compensated for and that the turbulence
spatial patterns are statistically independent of the original image.
In this algorithm, both effects of turbulence are addressed i.e. geometric distortion and
blurring. To compensate for the blurring induced by atmospheric turbulence, the kurtosis
of an image is used and for the geometric distortion, CGI (Control Grid Interpolation) is
used.
Control grid interpolation is a method proposed by Sullivan [16] for motion compensation.
The image is segmented into small continuous square regions, the corners of which form control points.
These control points are used as anchors from which the intermediate vectors are calculated
using bilinear interpolation. CGI allows for the representation of complex motion, making
it well suited to modelling turbulent motion.
The pixel relationship between two images within a square region is described as

I1[i, j] = I0[i + d1[i, j], j + d2[i, j]]    (22)

where i, j represent pixel co-ordinates and d1[i, j] and d2[i, j] represent the horizontal
and vertical displacement of the pixels between the two images I1 and I0. d1[i, j] and
d2[i, j] are used to compute the vectors between the four control points enclosing each
region, R using bilinear interpolation. Given four points, (i0,j0), (i1,j1), (i2,j2), (i3,j3) any
intermediate motion vectors can be calculated providing us with a dense motion field of the
turbulence.
Minimizing the displaced frame difference over each region R, with the higher-order terms
of the Taylor expansion discarded, gives

Σ_{[i,j] ∈ R} ( I1[i, j] - I0[i, j] - (∂I/∂i) d1[i, j] - (∂I/∂j) d2[i, j] )².    (25)

Using the least squares method we can obtain the bilinear parameters. Figure 2.11 shows
an example of a motion field generated using CGI.
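The bilinear interpolation step can be sketched for a single square region as follows (hypothetical helper name; the corner displacements would come from the control-point estimation described above):

```python
import numpy as np

def bilinear_field(corner_d, n):
    # Dense displacement field inside an n x n region from the displacements
    # at its four corner control points, by bilinear interpolation.
    # corner_d: 2x2 array, corner_d[r, c] = displacement at corner (r, c).
    t = np.linspace(0.0, 1.0, n)
    ty, tx = np.meshgrid(t, t, indexing='ij')
    return (corner_d[0, 0] * (1 - ty) * (1 - tx)
            + corner_d[0, 1] * (1 - ty) * tx
            + corner_d[1, 0] * ty * (1 - tx)
            + corner_d[1, 1] * ty * tx)
```

Applying this to the horizontal and vertical displacements d1 and d2 in turn, region by region, yields the dense motion field over the whole frame.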
Figure 2.11: (a) Simulated Lenna frame 1, (b) CGI motion field between (a) and (c) and (c)
2.2.4.2 Compensation
To compensate for the distortion the trajectories of each pixel are calculated using the
motion vectors from the CGI algorithm. Two methods were proposed for calculating the
trajectories of the pixels. The first method calculates the trajectory between the current
frame and the previous frame. This is computationally efficient
since at each new frame in the sequence only the trajectory between the new frame and the
previous frame has to be calculated. This, however, also makes the method susceptible to
errors in the presence of noise since an error in the previous trajectory estimate will be
compounded in all succeeding frames. The trajectory of the first method can be calculated
as
T(i, j, t0) = (i, j)
T(i, j, t0-1) = T(i, j, t0) + v_{t0, t0-1}(T(i, j, t0))
⋮
T(i, j, t0-n) = T(i, j, t0-n+1) + v_{t0-n+1, t0-n}(T(i, j, t0-n+1))    (26)

where T represents the trajectory, v represents the motion vectors between two frames and t0
is the current frame.
The second method estimates the trajectory by using the motion vectors between a source
frame, which remains fixed, and a target frame that is changed. This significantly increases
the computational load since the motion vectors will have to be recalculated each time the
algorithm increments to a new frame. Since the previous trajectory estimates are not used,
the problem of compounding errors in the presence of noise is avoided. The trajectory of the
second method can be calculated as

T(i, j, t0-k) = (i, j) + v_{t0, t0-k}(i, j),  k = 0, ..., n.    (27)
Figure 2.12: Motion fields used in trajectory estimation with a time window of 5
frames. (CSIR Dataset).
Using (27) we can compensate for the geometric distortion at frame t0 by calculating the
average of the trajectory as

T̄(i, j) = (1/(n+1)) Σ_{k=t0-n}^{t0} T(i, j, k).    (28)
Since the motion due to atmospheric turbulence is quasi-periodic, the net displacement
over a period will be zero. Therefore, by averaging the motion fields at each current frame,
a motion field can be generated which will allow us to dewarp the current frame. Since
motion due to atmospheric turbulence is small compared to real motion, using this method
real motion in the scene is largely preserved.
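The averaging step, and the median variant described in section 2.2.4.3, can be sketched as follows. As a simplification, the motion-vector fields are assumed to be already sampled on the pixel grid, so the trajectory sampling of (26)-(27) reduces to a temporal reduction over the stacked fields.

```python
import numpy as np

def correction_field(vectors, use_median=False):
    # vectors: (n, 2, H, W) motion vectors from the current frame to each
    # frame in the time window. The temporal mean implements the averaging
    # of Eq. (28); the median variant suppresses the influence of the large
    # vectors caused by real motion.
    if use_median:
        return np.median(vectors, axis=0)
    return vectors.mean(axis=0)
```

With zero-mean turbulent motion, the mean field converges to the correction needed to dewarp the current frame; a single large real-motion field pulls the mean away but leaves the median almost untouched.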
Figure 2.13 shows 10 frames from a simulated turbulent Lenna sequence and Figure 2.14
shows the restored images, corresponding to the frames in Figure 2.13, using the CGI
algorithm.
Figure 2.14: 10 corrected frames corresponding to Fig 2.13 using CGI algorithm
The length of the trajectory smoothing filter can be fixed; however, by adjusting the length of
the smoothing filter, improved results can be obtained. The algorithm proposes a method
based on the characteristic that turbulent motion has zero mean quasi-periodicity. Using
this property the length of the smoothing filter is adjusted. This allows for improvements in
separating real motion and turbulent motion since when no real motion is present in the
scene the window length can be increased for improved results. If real-motion is present
the window length is decreased since the real motion will affect the trajectory estimation.
The method assumes that the camera is stationary and objects in the scene move. In cases of zooming and panning, the
algorithm would break down.
For the purpose of real-time implementation a fixed period enhancement method was used
in this dissertation. The algorithm was however modified in the way the centroid of
trajectory is estimated. It was observed that by using the median of the trajectory instead of
the average, better results could be obtained over fixed-period enhancement without
increasing the computational complexity significantly. Since real motion is larger than
turbulent motion, by using the median the effect of the large motion vectors due to real
motion is suppressed. To enhance the frames, the kurtosis of the
image is used. The kurtosis of a random variable is defined as the normalized fourth central
moment, i.e.
k = E((x - μ)^4) / σ^4    (29)

where μ and σ represent the mean and standard deviation respectively. A distribution with a
kurtosis greater than 3 is leptokurtic and one with a kurtosis less than 3 is platykurtic. The Gaussian distribution has a
kurtosis of 3 and is called mesokurtic. By using the kurtosis of an image it was shown that
in general images which are blurred or smoothed have a higher kurtosis than the original,
although some images behave in the opposite manner, i.e. the kurtosis will decrease as the image is
blurred.
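The kurtosis of (29) is straightforward to compute over the raveled pixel intensities (a minimal sketch):

```python
import numpy as np

def kurtosis(img):
    # Eq. (29): normalised fourth central moment of the pixel intensities.
    x = np.asarray(img, dtype=float).ravel()
    mu = x.mean()
    sigma = x.std()
    return np.mean((x - mu) ** 4) / sigma ** 4
```

A Gaussian-distributed sample gives a value near 3, the mesokurtic reference point.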
Using this property, the algorithm aims to optimize a parameter in a restoration filter. By
finding the parameter which corresponds to the lowest kurtosis the image can be restored.
This was shown to work for a number of different blurs such as Gaussian, out-of-focus and
atmospheric turbulence blur, for which the OTF of (32) was used. By estimating the parameter, λ, within a search space, deconvolution can be
used to restore the estimated original image. The deconvolution can be achieved by any
non-blind restoration algorithm but for our implementation, similar to [12], Wiener
deconvolution was used:

G(u, v) = H*(u, v) X(u, v) / ( |H(u, v)|² + nsr )    (30)

where nsr is the noise-to-signal ratio, H represents the degradation function and X
represents the degraded image in the frequency domain.
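The Wiener deconvolution of (30) can be sketched in the frequency domain. This is a sketch, assuming H is the OTF sampled on the FFT grid and nsr a scalar noise-to-signal ratio.

```python
import numpy as np

def wiener_restore(blurred, H, nsr=1e-3):
    # Eq. (30): G = H* X / (|H|^2 + nsr), then inverse transform.
    X = np.fft.fft2(blurred)
    G = np.conj(H) * X / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(G))
```

In the kurtosis-driven search, this restoration would be evaluated for each candidate λ (each candidate H) and the result with the lowest kurtosis kept.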
Figure 2.17 shows an example of a simulated turbulence sequence with a λ of 0.001. The
graph plots the values of the normalized kurtosis versus the search space. As can be seen,
a value of λ = 0.001 has been selected as the minimum, which corresponds to the value used
to blur the image.

Figure 2.17: (a) Original Lenna, (b) Blurred Lenna (λ = 0.001) and (c) Restored Lenna
(λ = 0.001).
Figure 2.19 shows an example of a real turbulence sequence and Figure 2.20 shows the
restored image, first corrected for distortion and then enhanced using the kurtosis method.
The significant advantage of this algorithm is that it can restore turbulent degraded
sequences while preserving real motion. It also addressed both the geometric distortions
and blurring. Although the algorithm is computationally expensive, computational time can
be reduced by altering certain parameters such as fixing the length of the frames in the
trajectory estimation.
2.3 CONCLUSION
Popular methods for correcting the effects of atmospheric turbulence were discussed. Most
of the methods currently researched deal with the blurring caused by atmospheric
turbulence. Most of the algorithms selected for comparison have been shown to degrade
when real motion is present in the scene. The algorithm proposed by Li [1] has shown
good results in dealing with situations when both turbulent and real motion are present in the
scene. It was also found that the length of the trajectory smoothing filter
could be kept constant and still allow for real motion and turbulent motion to be separated
with good results. Based on the results of a detailed analysis conducted in Chapter 4, a real-time
implementation was selected and is discussed in Chapter 5.

SIMULATION OF ATMOSPHERIC TURBULENCE
Atmospheric turbulence is caused due to the index of refraction of air fluctuating with
changes in temperature. This causes objects in sequences to appear blurred and waver
slowly in a quasi-periodic fashion. The following chapter presents the methods used to
Since atmospheric turbulence degraded images are not easily available, being able to
simulate atmospheric effects is advantageous. This also provides a set
of ground-truth sequences, allowing the original sequences to be compared with the
restored sequences. The degradation model (31) expresses the observed video as a blurred and
geometrically distorted version of the original, where x is the original video [1]. Based on (31) the effects of turbulence can be
simulated in two parts: blurring and geometric distortion.
To simulate the effects of blurring, the OTF (optical transfer function) of atmospheric
turbulence as derived by Hufnagel and Stanley [1] is used. The OTF can be modelled as

H(u, v) = e^{-λ(u² + v²)^{5/6}}    (32)
where u, v represent the co-ordinates in the spatial frequency domain and λ controls the
severity of the blur [1]. Since turbulence blurring is time varying, the value of λ will have
to be varied from frame to frame, within a limit, to correctly simulate the effect of
atmospheric blurring. Typical values of λ range from 0.00025 to 0.0025 to simulate
low to severe turbulence respectively. Figure 3.1 shows the Lenna image blurred from
λ = 0.00025 to λ = 0.0025.

Figure 3.1: (a) Original Lenna image, (b) Image blurred with λ = 0.00025, (c) Image blurred
with λ = 0.0025.
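The OTF of (32) and its application to a frame can be sketched as follows. The frequency grid uses integer FFT sample indices, which is an assumption about the units of u and v, not something fixed by the source.

```python
import numpy as np

def turbulence_otf(shape, lam):
    # Eq. (32): H(u, v) = exp(-lam * (u^2 + v^2)^(5/6)).
    h, w = shape
    u = np.fft.fftfreq(h) * h
    v = np.fft.fftfreq(w) * w
    vv, uu = np.meshgrid(v, u)
    return np.exp(-lam * (uu ** 2 + vv ** 2) ** (5.0 / 6.0))

def blur_frame(frame, lam):
    # Apply the OTF multiplicatively in the frequency domain.
    H = turbulence_otf(frame.shape, lam)
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * H))
```

Varying lam per frame within a small range, as described above, produces the time-varying blur of the simulated sequences.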
Two methods were used to simulate the geometric distortions induced by atmospheric
turbulence. The first method uses a 2-pass mesh warping algorithm [14] to randomly map
pixels from the source image to a destination image within a specified area. This creates a
sequence in which the scene appears to waver in a quasi-periodic fashion. This method
allows for full control of the level of distortion present in the sequences. It also allows for
the simulation of a large number of sequences each with its own distortion levels.
The mesh warping algorithm requires two arrays of control point co-ordinates. The first array contains the co-ordinates of control points in the source image.
Since we are not interested in any particular points of interest in the case of atmospheric
turbulence, the first array is generated directly from the source image as a rectilinear grid.
The second array is used to specify the destination co-ordinates of the control points. This
array is generated using a random shift map that specifies pixel shift values at the control
points. Both the arrays are constrained to be topologically equivalent to prevent folding or
discontinuities in the destination image. This is achieved by limiting the shift values to
make certain that they do not wander so far as to cause self intersection.
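The generation of constrained random control-point shifts can be sketched as follows (a hypothetical helper; bounding the shift by half the grid spacing is one simple way to keep the two arrays topologically equivalent):

```python
import numpy as np

def random_control_shifts(rows, cols, spacing, max_shift, seed=None):
    # Random (dy, dx) shifts at each control point, clipped so neighbouring
    # points cannot cross, preventing folding in the destination mesh.
    rng = np.random.default_rng(seed)
    limit = min(max_shift, (spacing - 1) / 2.0)
    return rng.uniform(-limit, limit, size=(rows, cols, 2))
```

Drawing a new array of shifts for each frame gives the quasi-random waver; the 2-pass warp then resamples the image row-wise and column-wise from these control points.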
Since the algorithm is separable the 1st pass is concerned with warping the source image
into an intermediate image in the horizontal direction. The 2nd pass will then complete the
warp by warping the intermediate image in the vertical direction. Figure 3.2 shows the 1st
pass, 2nd pass and final output of the 2-pass mesh warping algorithm applied to the
checkerboard image.
Figure 3.2: (a) Checkerboard image, (b) Checkerboard image warped in x-direction (1st pass), (c) Intermediate image warped in y-direction (2nd pass) and (d) Final output.
The second method used was to extract the motion fields directly from real turbulence
degraded video clips and then apply them to the frames of a turbulence free video clip [12].
While this gives us only a limited number of distortion levels it provides a more realistic
approach to the distortion effects of atmospheric turbulence. The sequences used for the
extraction of the turbulent motion fields were provided by the CSIR and extracted using
the Lucas-Kanade optical flow algorithm. Figure 3.3 shows an example of the process.
Figure 3.3: (a) Real turbulence degraded image, (b) Flow field obtained from (a), and (c) Flow field applied to a turbulence-free frame.
The final result of the blurring and the distortion applied to the Lenna sequence can be seen
in Figure 3.4.
Figure 3.4: (a), (b) & (c) 3 frames from simulated Lenna sequence with blurring and
geometric distortion.
The simulation procedure can therefore be summarised as follows:

Simulation Algorithm
1. Select a ground-truth image or frame.
2. Generate a random shift map at the control points, constrained to prevent folding.
3. Warp the frame using the 2-pass mesh warping algorithm (or apply a motion field extracted from a real turbulence sequence).
4. Blur the warped frame using the OTF of (32), with λ varied randomly within a limit from frame to frame.
5. Repeat for each frame to build the sequence.
One of the key assumptions made in most of the algorithms discussed is that atmospheric
turbulence is quasi-periodic in nature. This means that the net displacement over a period
of time is approximately zero. To illustrate this, the motion of pixels from real-turbulence
sequences was tracked over time and plotted in Figure 3.5.
As can be seen from Figure 3.5, the pixel motion remains within a specified radius
showing the quasi-periodic nature of turbulence. The + shows the location of the pixel in
the initial frame. The * shows the average of the pixel co-ordinates and would correspond
to the estimated true location of the pixel in the initial frame. Using a simulated turbulence
sequence the value of the average co-ordinate of the pixel motion can be calculated and
this can be compared to the original location of the pixel since the truth image is available.
A GUI was developed in Matlab to manage the simulation of atmospheric turbulence and
simplify the process of creating datasets. The GUI allows the user to select an input image for
processing. The distortion and blurring levels can be set and the processed sequence
viewed. Once a desired set of levels is selected, a sequence is processed according to the
number of frames required. The sequence can then be viewed in the GUI and saved.
3.4 CONCLUSION
The methods used in this dissertation for simulating the effects of atmospheric turbulence
were discussed and the quasi-periodic nature of atmospheric turbulence was shown. A GUI
developed to manage the simulation and the creation of datasets was also presented.

Results in the dissertation were obtained using datasets divided into two categories: real
datasets and simulated datasets. The real datasets consist of sequences obtained in the
datasets and simulated datasets. The real datasets consist of sequences obtained in the
presence of real atmospheric turbulence. These datasets were obtained from the CSIR
(Council for Scientific and Industrial Research) using their Cyclone camera and vary in
range from 5km to 15km. The simulated sequences were generated using ground truth
images/sequences. Both datasets can be further divided into sequences with real motion
present and sequences without real motion.

The registration algorithms used in the CGI, FRTAAS and Time-averaged algorithms were
configured with comparable parameters. A window size of five was chosen for all the algorithms. The pyramid levels were also
chosen to be the same for both the Lucas-Kanade and Elastic Registration algorithms.
For the CGI and Time-averaged algorithms a time window of ten frames was chosen. In
the case of the Time-averaged algorithm this meant averaging ten frames to obtain the
reference frame and in the case of the CGI algorithm a moving filter of ten frames was
used in the trajectory estimation. For the case of the ICA algorithm the best results were
obtained using a time window of three frames from which to estimate a true frame. For the
FRTAAS algorithm the entire sequence was searched to select the sharpest frame.
The simulated sequences were used as they provided us with a ground truth with which to
compare the results of each algorithm. The simulated sequences can be sub-divided into
two further categories, sequences with motion and sequences without motion. For the
sequences without motion a single image was used and warped in different ways to create
a distorted sequence similar to turbulence. The distortions were set to be random within a
certain level of pixel shift that could be controlled. The sequence was then blurred using
Equation (32) with λ being selected randomly within a limit. This limit was chosen to be
small and was determined by multiplying λ with a random value between 0.5 and 1.5. The
final sequence was therefore a time-varying blurred and distorted sequence which
simulates turbulence. The turbulent sequences with motion were simulated by warping and
blurring each frame of a motion sequence using the same method described above.
Figures 4.1-4.4 show the results of the four algorithms on a number of simulated
turbulence sequences with no motion. The sequences were generated using a medium level
of turbulence with an average λ = 0.001. The level of distortion was set to a radius of
approximately five pixels, as used throughout the dissertation.
The MSE (Mean-square-error) was calculated between each frame in the output sequence
of the algorithms and the original ground-truth frame. All of the output sequences showed
a definite reduction in the levels of geometric distortion. The FRTAAS algorithm showed
the best results visually, which is confirmed by the MSE results. The CGI algorithm and the
Time-averaged algorithm performed similarly, with the CGI algorithm showing a slight
improvement. With no motion present in the scene, the Time-averaged
algorithm is able to generate a good reference frame through averaging. The outstanding
results obtained from the FRTAAS algorithm can be attributed to two main factors. The
first being the registration algorithm used. Since the registration algorithm accounts for
variations in intensity and models a locally affine, globally smooth warp, it provides accurate
registration. The second factor is the sharpness metric used to select a sharp initial
reference frame which improves the results in the final output. The ICA algorithm is only
capable of compensating for a small amount of distortion in the turbulence sequence and
while there is a reduction in the output sequence when compared to the turbulence
sequence, it shows the least performance gain compared to the other three algorithms. The
reason for the CGI algorithm starting from frame six is the time required to accumulate
the motion vectors over the time window. It could have been allowed to start from frame one by reducing the
time window initially and increasing it once the target frame is reached, but to get a more
accurate comparison a full time window was used throughout.

Figure 4.2: MSE of Flat sequence with λ = 0.001 and pixel motion set to 5.
Figure 4.4: MSE of Girl1 sequence with λ = 0.001 and pixel motion set to 5.

Figures 4.5-4.7 show the results for sequences generated with a higher level of
turbulence with λ = 0.0025 and the distortion level set to 3. This was done to see how the
algorithms performed with a small amount of distortion but an increased level of blur. It
can be seen that the CGI and FRTAAS algorithms performed similarly in this case. Since
the sequence has a higher level of blur, the sharpest frame selection does not provide a
significant advantage in this case. In different turbulence sequences, due to lucky selection
of a sharp frame, the FRTAAS algorithm might show improved results. The ICA algorithm
again showed the least improvement in performance. The time-averaged algorithm showed
improved results over the turbulence sequence but was still outperformed by the CGI and
FRTAAS algorithms.
Figure 4.5: MSE of Lenna sequence with λ = 0.0025 and pixel motion set to 3.

Figure 4.7: MSE of Military sequence with λ = 0.0025 and pixel motion set to 3.

Figure 4.8 shows the results for a sequence generated with a low level of blur
with λ = 0.00025. The level of distortion was set to a radius of approximately five.

Figure 4.8: MSE of Room sequence with λ = 0.00025 and pixel motion set to 5.
For the simulated sequences with motion, the CGI algorithm was used with a fixed
window using the median method as described in section 2.2.4.3. The difference between
the CGI using fixed window and CGI using fixed window with the median is shown in
Figure 4.11. The results are from a sequence of a car entering a parking lot as shown in
Figure 4.10.
Figure 4.10: 3 frames from simulated motion sequence. (PETS 2000 Dataset [45]).

Figure 4.11: MSE comparison of the fixed window and the fixed window with median for simulated
motion sequence 1.
Figures 4.12-4.13 show the results of the four algorithms. The Time-averaged algorithm
broke down completely with motion present in the scene since the reference frame was
motion blurred. The ICA algorithm also showed signs of motion blur in the output
sequence and the results show that the output sequence was worse than the turbulence-
degraded sequence in the first motion sequence. The FRTAAS algorithm showed similar
performance to the ICA algorithm. The CGI algorithm is the only algorithm which showed
an improvement in the output sequence with geometric distortions reduced and the real
motion preserved. Figure 4.13 shows a comparison of using the fixed window and the
fixed window with median. It can be seen that the fixed window using the median
provides improved results.

Figure 4.13: MSE of simulated motion sequence 2 with λ = 0.001 and pixel motion set to 5.
The real turbulence degraded dataset consists of 11 sequences provided by the CSIR
(Council for Scientific and Industrial Research). The sequences vary from buildings and
structures, which contain a large amount of detail, to open areas in which the contrast can
be low. The ranges of the sequences are from 5km to 15km and were obtained using the
CSIR's Cyclone camera. The turbulence levels vary from light to severe, with most of the
sequences containing medium to severe turbulence.

In the case of the real sequences, since no ground truth is available the sequences cannot be
compared with the original. The MSE (Mean-square-error) was calculated between
consecutive frames in a sequence, which shows the stability of the sequence. An intensity
corrected MSE will measure the differences between frames i.e. if turbulence is present the
geometric changes between frames will be large and this will correspond to a high MSE.
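The stability measure can be sketched as follows. The intensity correction here simply matches frame means, which is an assumption about how the correction was performed rather than a detail taken from the source.

```python
import numpy as np

def consecutive_mse(frames):
    # Intensity-corrected MSE between consecutive frames: a stable, dewarped
    # sequence gives low values even when no ground truth is available.
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        b_adj = b + (a.mean() - b.mean())  # match mean intensity
        out.append(np.mean((a - b_adj) ** 2))
    return np.array(out)
```

Two frames differing only by a global intensity offset score zero, so residual geometric motion is what dominates the measure.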
The sharpness of the frames was then calculated and is shown in section 4.3.3.
Figure 4.14 shows the results of the real turbulence sequence taken of the Armscor
building at a distance of 7km. Examples of the turbulent and corrected frames are shown in
Figures 4.15-4.19. In this sequence there is a fair amount of detail present. The processed
sequences for all the algorithms showed a reduction in the geometric distortions. The CGI,
FRTAAS and the Time-averaged algorithms output a stable sequence with few to no
geometric distortions remaining. The ICA algorithm output showed a reduction in distortion,
with the distortions appearing to move more slowly. The distortions were still, however,
present and the shimmering effect remained. This can be seen in Figure 4.14, where the
MSE of the ICA is always less than that of the turbulence sequence but its variations
follow a similar pattern. The CGI, FRTAAS and Time-averaged algorithms produced low
and stable MSE values for the Armscor sequence. There are still, however, slight differences between the frames.
Figure 4.15: Real turbulence degraded frame of Armscor building. (CSIR Dataset).
Figure 4.21 shows the results of the real turbulence sequence taken of a building site at a
distance of 5km. An example of a turbulent frame of the sequence is shown in Figure 4.20.
In this sequence there is also a fair amount of detail present. The results of the processed
sequences were similar to the Armscor building sequence, with the exception of the
FRTAAS algorithm which showed improved results over the CGI and Time-averaged
methods.
Figure 4.20: Real turbulence degraded frame of building site sequence at a distance of 5km. (CSIR Dataset).
Figure 4.22 shows the results of a real turbulence sequence taken of a tower at a distance of
11km. An example of a turbulent frame of the sequence is shown in Figure 4.23(a). In this
sequence the detail is minimal. The turbulence level is however severe, both in terms of
geometric distortion and blurring. For comparison, Figure 4.23(b) shows a frame from a
different sequence of the same tower. The sequence was taken from the exact same
location as the blurred sequence but at a time in which the turbulence was negligible.
The CGI and Time-averaged algorithms performed well once again. The FRTAAS
algorithm and the ICA algorithm, while showing improvements over the turbulent
sequence, did not perform as well. The ICA processed sequence still possessed geometric
distortions.

Figure 4.23: (a) Real turbulence-degraded frame of a tower at a distance of 11km and (b) the
same tower at 11km but with negligible turbulence. (CSIR Dataset).
Figure 4.24: (a) Frame from CGI algorithm output sequence and (b) frame from FRTAAS
algorithm. (CSIR Dataset).
Figure 4.25: (a) Frame from Time-averaged algorithm output sequence and (b) frame from
ICA algorithm. (CSIR Dataset).
Figure 4.27 shows the results of a real turbulence sequence taken of a shack at a distance of
10km. An example of a turbulent frame of the sequence is shown in Figure 4.26. The CGI
and Time-averaged algorithms performed well once again, with the Time-averaged
algorithm outperforming the CGI algorithm in this case. The ICA processed sequence still
contained residual geometric distortions. The FRTAAS algorithm output, while stable, did contain
warping of certain features.

Figure 4.26: Real turbulence degraded frame of shack sequence at a distance of 10km. (CSIR Dataset).
Figure 4.28 shows the results of the building site sequence with motion present in the scene
in the form of a car. The results were calculated using the MSE between consecutive
frames. Since there is motion present in the scene, the MSE between frames will be higher.
This will apply to all the output sequences though and the results are therefore still
relevant. The results can be compared with Figure 4.21 which is part of the same sequence,
the only difference being the motion of the car. It can be seen that the FRTAAS and Time-
averaged algorithms broke down in the presence of motion. The ICA algorithm performed
similarly to before, i.e. while the geometric distortions were reduced, the
effect of heat shimmer was still present. The CGI algorithm once again showed a stable
output with the real motion preserved.

Figure 4.28: MSE between consecutive frames of Building site sequence with motion.
The sharpness of the outputs of the algorithms was calculated using Equation (5). As
shown in section 2.2.2.2 the lowest value corresponds to the sharpest image. The results of
using the kurtosis of the image, as described by Li [1], are also shown. Since this was the
only algorithm discussed which explicitly dealt with enhancing
the turbulent images, the results of the CGI algorithm (pre-enhancement) are also
shown. Figures 4.29-4.33 show the sharpness of the turbulent shack sequence compared
to the corrected shack sequences. It can be seen that the algorithms did not degrade beyond
the sharpness levels of the turbulence sequences. The ICA algorithm also shows frame 12
being significantly sharper than the other frames, matching the sharpest level obtained with
kurtosis enhancement.
Figure 4.34: Sharpness of frames in Building site sequence using Time-averaged algorithm.
Figure 4.36: Sharpness of frames in Building site sequence using FRTAAS algorithm.
Figure 4.37: Sharpness of frames in Building site sequence using kurtosis enhancement.
Figure 4.38: Sharpness of frames in Building site sequence using CGI algorithm.
Based on the results above it was shown that the CGI algorithm performed the best when
real motion as well as turbulence motion was present in the scene. It was also shown that
under real turbulence conditions, the CGI performed the best along with the Time-
averaged algorithm. Another reason for selecting the CGI algorithm was the way in which
the compensation for geometric distortions was achieved by using a fixed time window.
This also increased the complexity of the algorithm, since the motion vectors would have to
be calculated for each of the frames in the time window, but the accuracy was
increased. Therefore, for real-time implementation, the CGI algorithm [1] was chosen. The
Time-averaged algorithm was also implemented for comparison. OpenCV was used for the
implementation since optimised image processing functions are
available. All results were obtained on an Intel Core Duo with a CPU speed of 2 GHz and
2 GB of RAM. The Lucas-Kanade algorithm was chosen for both implementations since
this algorithm was available in OpenCV and is optimized. The CGI algorithm [1] was able
to run at a frame rate of 9 frames per second whereas the Time-averaged algorithm ran at a
frame rate of 29 frames per second. The time window chosen for the CGI algorithm was 10
frames i.e. to compensate for each frame in the sequence, the optical flow between 10
frames had to be calculated. When the time window was decreased to 6 frames, a frame
rate of approximately 14 frames per second was achieved. The sizes of the images were
256x256.
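The fixed time window described above can be illustrated with a short Python sketch (the helper name is hypothetical; NumPy stands in for the OpenCV optical-flow routines). The flow fields between the current frame and every other frame in the window are averaged, so the per-frame cost grows with the window length, which is why shrinking the window from 10 to 6 frames raised the frame rate.

```python
import numpy as np

def mean_window_flow(flows):
    """Average a stack of optical-flow fields (one per frame in the
    time window). Under quasi-periodic turbulence the mean
    displacement approximates the undistorted geometry."""
    return np.mean(np.asarray(flows, dtype=float), axis=0)
```

With quasi-periodic distortion, opposite displacements within the window largely cancel in the mean.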
5.1 CONCLUSION
Algorithms for the removal of heat scintillation in video sequences were researched and a
comparative analysis performed. Four algorithms were selected for the comparative
analysis. It was shown that all the algorithms were capable of reducing the geometric
distortions present in the sequences, with the FRTAAS [13], CGI [1] and Time-averaged
[3, 11, 18] algorithms outputting sequences that are stable and geometrically improved. It
was observed with the ICA algorithm that, while the geometric distortions were reduced,
they were not removed and the effect of heat shimmer still remained. The property of
quasi-periodicity, used by many algorithms to compensate for geometric distortions, was
examined and demonstrated using results obtained with the ground-truth images, something
not possible with the real sequences. The problem of real motion present in the scene along
with the effects of turbulence was investigated. While most algorithms had difficulty
dealing with real motion in the scene, the CGI algorithm [1] was shown to compensate for
geometric distortions while preserving the real motion. A method capable of providing
improved results over using a fixed period was developed; this method, however, increased
the computational complexity of the algorithm, making it less suitable for real-time
implementation. It was shown that by using the median of the trajectories, a fixed period
could be maintained and an improved output observed, especially in the presence of real
motion.
The sharpness of the output sequences was also investigated, and it was shown that
none of the algorithms served to further degrade the sequences. The enhancement of the
images using kurtosis was also shown to provide a sharper output. The comparative
analysis was performed on both real and simulated turbulence sequences and it was seen
that, while the FRTAAS algorithm performed better than the others in the simulated
sequences, the results based on the real sequences showed the CGI algorithm to have the
best overall performance. Based on the results of the comparative analysis, the CGI
algorithm [1] was chosen and simulated using OpenCV. While a frame rate of only 9
frames per second was achieved using a time window of ten frames, the algorithm is well
suited for parallel implementation.
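The median-of-trajectories idea can be sketched as follows. This is an illustrative Python fragment under assumed data layout, not the thesis implementation: taking the median rather than the mean of a pixel's positions over the window keeps an outlying real-motion displacement from biasing the reference position.

```python
import numpy as np

def reference_position(traj):
    """traj: array of shape (T, 2) holding a pixel's (x, y) position
    in each of T frames. The per-coordinate median is robust to a
    single large real-motion excursion, unlike the mean."""
    return np.median(np.asarray(traj, dtype=float), axis=0)
```

For a pixel that sits at (0, 0) in four frames and jumps to (10, 10) in one, the median reference stays at (0, 0) while the mean would be pulled to (2, 2).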
While fusion has been used for deblurring, methods are also available for fusing images,
such as the MAP (Maximum A Posteriori Probability) algorithm, which uses special blind
deconvolution methods that are capable of identifying and compensating for the between-
channel misregistration [23]. Work in this area might allow for the development of a single
algorithm capable of compensating for the blur and distortion simultaneously. Future work
will also include selecting and parallelising an algorithm to function on a GPU or other
hardware capable of handling parallel applications in order to achieve higher frame rates.
wide-area motion blur restoration," Optical Society of America, pp. 1751-1758, 1999.
[6] S. Periaswamy, H. Farid, Differential Affine Motion Estimation for Medical Image
2004.
[9] J.F. Cardoso, A. Souloumiac, An efficient technique for blind separation of complex
in video using an adaptive control grid interpolation, in Proc. of the IEEE Int. Conf.
using a typical frame as prototype, in TENCON 2005-2005 IEEE Region 10, pp.
1382-1387, 2005.
[14] G. Wolberg, Digital Image Warping, Los Alamitos, CA, USA: IEEE Computer
[16] G. J. Sullivan and R. L. Baker, Motion compensation for video compression using
[17] E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice
Hall, 1998.
in Proc. of the 2001 IEEE International Conference on Multimedia and Expo, pp. 393-
396, 2001.
Turbulence Effects from Images, Council for Scientific and Industrial Research
turbulence, Optics Communication, Elsevier Science B. V., vol. 150, pp. 15-28, 1998.
[21] J. F. Cardoso, Blind Beamforming for Non Gaussian Signals, IEEE Proc. F, vol.
[22] J. F. Cardoso., Blind Signal Separation: Statistical Principles, in Proc. of the IEEE,
[24] P. Campisi and K. Egiazarian., Blind Image Deconvolution, Boca Raton, CRC Press,
2007.
1974.
[28] J. F. Cardoso, Higher Order Contrasts for ICA, in Neural Computation, vol. 11, pp.
157-192, 1999.
Fusion of Thermal and Video Sequences for Terrestrial Long Range Observation
[33] B. Zitova and J. Flusser, Image Registration Methods: a Survey, Image and Vision
Minimization, in Proc. of the IEEE Int. Conference Image Processing, vol. 1, pp.
905-908, 2005.
[35] S. John and M. A. Vorontsov, Multiframe Selective Information Fusion from Robust
Error Estimation Theory, IEEE transactions on Image Processing, vol. 14, pp. 577-
584, 2005.
Images, in Journal of Opt. Soc. of America, vol. 18, No. 6, pp. 1312-1324, 2001.
the IEEE International Conference on Acoustics, Speech and Signal Processing, pp.
2857-2860, 1998.
[40] L. Peng and X. Peng, Turbulent Effort Imaging and Analysis, in Proc. of the IEEE,
Fields: An Evaluation of Eight Optical Flow Algorithms, in Proc. of the Ninth British
Flow Techniques, in Proc. of the IEEE on Computer Vision and Pattern Recognition,
[44] R. C. Gonzalez and R. E. Woods, Digital Image Processing 2/E, Prentice Hall, 2002.
[Accessed: 24/10/2007]
[Accessed: 24/10/2007]
clear,close all,clc;
mov=aviread('armscor',1:120);
N=120;  % number of frames read
sh=0;   % running entropy total
for k=1:N
i1=mov(k).cdata;
i1=im2double(i1);
for x=1:size(i1,1)
for y=1:size(i1,2)
temp=i1(x,y);
if temp~=0
shtemp=-temp*log(temp); % -p*log(p) entropy term
sh=sh+shtemp;
end
end
end
shf(k)=sh;
sh=0;
end
[c,i]=min(shf)
% Begin algorithm
%i1=rgb2gray(mov(1).cdata);
i1=(mov(i).cdata);
i1 = im2double(i1);
for k=2:N
k
% i2=rgb2gray(mov(k).cdata);
i2=(mov(k).cdata);
i2 = im2double(i2);
[i1_warped,flow]=register2d(i1,i2);
cx=(sum(u,3))./N;
cy=(sum(v,3))./N;
for k=2:N
temp=mov(k).cdata;
temp=im2double(temp);
final(:,:,k-1)=mdewarpfinal(temp,-(u(:,:,k-1)-cx),-(v(:,:,k-1)-cy));
end
for k=2:N
map=gray(256);
mov1(k-1) = im2frame(final(:,:,k-1)*255,map);
end
map=gray(256);
for k=1:N-1
temp1=(mat2gray(mov1(k).cdata));
mov1(k)=im2frame(im2uint8(temp1),map);
temp2=(mat2gray(mov(k).cdata));
mov(k)=im2frame(im2uint8(temp2),map);
%mov(k)=mat2gray(mov(k).cdata);
end
mplay(mov);
mplay(mov1);
toc;
%x=mdewarpfinal(i1,u(:,:,1),v(:,:,1));
outvid='armscorout_1to120_frtaas';
%movie2avi(mov1,outvid,'fps',20,'compression','None');
save(outvid,'mov1');
[h,w] = size(i1_in);
i1 = i1_in;
i2 = i2_in;
[h,w] = size(i1);
mind = min(w,h);
levels = floor(log2(mind/params.main.minSize)+1);
i1 = pad2dlr(i1,padWl,padWr,padHl,padHr,0);
i2 = pad2dlr(i2,padWl,padWr,padHl,padHr,0);
pyr1(1).im = i1;
pyr2(1).im = i2;
[ht,wt] = size(pyr1(1).im);
for i = 2:levels
pyr1(i).im = reduce(pyr1(i-1).im);
pyr2(i).im = reduce(pyr2(i-1).im);
[ht,wt] = size(pyr1(i).im);
fprintf('Pyramid level: %g size: %gx%g mse is: %g\n',...
i,ht,wt,mse(pyr1(i).im,pyr2(i).im ));
end
fprintf('-----\n');
%--------------- End construct pyramid ------------------
[hs,ws] = size(pyr1(levels).im);
flowAcc = flow_init(ws,hs);
i1_w = pyr1(levels).im;
for i=levels:-1:1 % (from coarse to fine)
if (params.glob.flag)
fprintf('Finding global affine fit\n');
[flow_glob,M] = aff_find_flow(i1_w,pyr2(i).im, ...
params.glob.model,params.glob.iters);
flowAcc = flow_add(flowAcc,flow_glob);
flowAcc.m7 = flow_glob.m7;
flowAcc.m8 = flow_glob.m8;
fprintf('Done\n');
end
% Notes:
%
% flowfind_smooth warps the image by flowAcc
% before it finds the flow field.
% This takes care of the global warp.
% It also accumulates the previous flowAcc.
%
% The flow accumulation is also done in flowfind_smooth
% since it is this function that displays intermediate
% results.
end
%undo padding
flowAcc = flow_undopad(flowAcc,padWl,padWr,padHl,padHr);
ftime = clock;
return;
[h,w] = size(i1);
[M,b,c] = affbc_iter(i1,i2,iters,model);
flow_glob = flow_aff(M,w,h);
flow_glob.m7 = ones(h,w) .* c;
flow_glob.m8 = ones(h,w) .* b;
M
return;
params.main.boxW = 5;
params.main.boxH = 5;
params.main.model = 4;
params.main.minSize = 128;
params.main.dispFlag = 0;
params.main.padSize = 2;
params.smooth.loops_inner = 1;
params.smooth.loops_outer = 1;
params.smooth.lamda = [1e11 1e11 1e11 1e11 1e11 1e11 1e11 1e11];
params.smooth.deviant = [0 0 0 0 0 0 1 0];
params.smooth.dweight = [0 0 0 0 0 0 0 0];
params.glob.flag = 1;
params.glob.model = 4;
params.glob.iters = 2;%20;
params.glob.numLevels = 100;
return;
% Explanation of parameters
%
% params.main.boxW = width of box used in least squares estimate
% params.main.boxH = height of box used in least squares estimate
% params.main.model = model used in estimation
%
% 4 -> affine + translations + contrast/brightness
% 3 -> affine + translation
% 2 -> translation + contrast/brightness
% 1 -> translation
%
% params.main.minSize = lowest scale of pyramid has this size
% params.main.dispFlag = display intermediate results
% params.main.padSize = padding at coarsest scale
%
% params.smooth.loops_inner = Smoothness iterations
% params.smooth.loops_outer = Taylor series iterations
% params.smooth.lamda = Lamda values on smoothness terms
%
% params.smooth.deviant = internal, ignore
% params.smooth.dweight = internal, ignore
%
% params.glob.flag = perform global registration also
% params.glob.model = model of global registration (same codes as params.main.model)
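For reference, the four model codes map to parameter index sets as follows (a Python rendering of the getindex helper that appears later in this appendix; the dictionary name is an invention for illustration):

```python
# model code -> indices of the estimated parameters (m1..m8)
MODEL_INDEX = {
    1: [5, 6],                    # translation only
    2: [5, 6, 7, 8],              # translation + contrast/brightness
    3: [1, 2, 3, 4, 5, 6],        # affine + translation
    4: [1, 2, 3, 4, 5, 6, 7, 8],  # affine + translation + contrast/brightness
}
```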
function v = nextpow2(v)
l = log2(v);
if (l == floor(l))
return;
end
v = 2^round(log2(v)+0.5);
return;
if (nargin == 3)
val = 0;
end
[h,w] = size(img);
out = ones(h+2*by,w+2*bx) .* val;
out(by+1:by+h,bx+1:bx+w) = img;
return;
if (~exist('val','var'))
val = 0;
end
if (~exist('undoFlag','var'))
undoFlag = 0;
end
[h,w] = size(imgIn);
if (undoFlag == 0)
imgOut= ones(h+padHl+padHr,w+padWl+padWr) .* val;
imgOut(padHl+1:padHl+h,padWl+1:padWl+w) = imgIn;
else
h = h - padHl - padHr;
w = w - padWl - padWr;
imgOut = imgIn(padHl+1:padHl+h,padWl+1:padWl+w);
end
return;
[h,w] = size(img);
filt = [.05 .25 .4 .25 .05];
if (nargin == 1)
cnt = 1;
end
if (cnt == 0)
return;
end
for i = 1:cnt
img = conv2mirr(img,filt,filt);
img = img([1:2:h],[1:2:w]);
h = h/2;
w = w/2;
end
return;
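The reduce step above (5-tap blur followed by 2:1 subsampling) can be sketched in 1-D Python, assuming mirror extension at the borders as in conv2mirr; this is an illustrative equivalent, not the MATLAB routine itself.

```python
import numpy as np

def reduce_1d(signal):
    """One pyramid REDUCE step: blur with the 5-tap kernel
    [.05 .25 .4 .25 .05] (mirror padding), then keep every
    second sample, halving the length."""
    k = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
    s = np.pad(np.asarray(signal, dtype=float), 2, mode='reflect')
    return np.convolve(s, k, mode='valid')[::2]
```

Because the kernel sums to 1, a constant signal passes through unchanged, which is a quick sanity check on the implementation.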
[oh,ow] = size(in);
[mx,my] = meshgrid(linspace(1,oh,h), linspace(1,ow,w));
out = interp2(in,mx,my,'cubic');
return;
function updatedisplay(i1,i2,flowA,i1_warped,curIter,maxIter,level);
figure(1);
[h,w] = size(i1);
fprintf('Displaying...');
set(gcf,'DoubleBuffer','on');
set(gca,'NextPlot','replace','Visible','off')
set(gcf,'renderer','zbuffer');
grid = grid2d(w,h,floor(16*h/256),1);
grid_warped = flow_warp(grid,flowA);
imgC = trunc(flowA.m7,0,1);
imgB = trunc(flowA.m8,0,1);
row1 = rowimg(i1,i2,abs(i1-i2));
row2 = rowimg(i1_warped,i1_warped_bc,abs(i1_warped_bc-i2));
row3 = rowimg(flowA.m7, flowA.m8,1-grid_warped);
myimg = [row1;row2;row3];
myimg = frameimg(myimg,1,0);
imagesc(myimg,[0 1]);
colormap(gray);
axis image;
%truesize;
axis off;
x = 2;
y = h*2+15;
dy = 20;
title({tstr},'HorizontalAlignment','left','fontname','courier','position',[0 0],'Interpreter','none','FontSize',8);
%set(gca,'position',[0 0 1 0.9]);
drawnow;
fprintf('Done!\n');
out = varargin{1};
out = frameimg(out,1,1);
for i=2:nargin
img = varargin{i};
img = frameimg(img,1,1);
out = [out img];
end
return;
function out=trunc(in,minv,maxv)
out = in;
ind = find(out > maxv);
out(ind) = 1;
ind = find(out < 0);
out(ind) = 0;
return;
pat = zeros(h,w);
pat(1:t:h,1:w) = v;
pat(1:h,1:t:w) = v;
pat(h,1:w) = v;
pat(1:h,w) = v;
return;
if (exist('c_diffxyt') ~= 3)
fprintf('Compiling c/c_diffxyt.c\n');
cd c
mex c_diffxyt.c
cd ..
end
if (exist('c_smoothavg') ~= 3)
fprintf('Compiling c/c_smoothavg.c\n');
cd c
mex c_smoothavg.c
cd ..
end
addpath('.:./c:./register:./utils:');
[mx,my] = meshgrid([0:w-1]-w/2+0.5,-([0:h-1]-h/2+0.5));
return;
[h,w] = size(in);
return;
%test
A = [1 2 3 4;5 6 7 8;9 10 11 12]
B = mirror_extend(A,2,2)
function err=mse(i1,i2,b)
if (nargin == 3)
[h,w] = size(i1);
nh = h - 2*b;
nw = w - 2*b;
i1 = cutc(i1,nw,nh);
i2 = cutc(i2,nw,nh);
end
i1 = i1(:);
i2 = i2(:);
d = (i1 - i2);
d = d.*d;
err = sqrt(mean(d));
return
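Note that, despite its name, the mse helper above returns the root of the mean squared difference. A Python equivalent of the core computation (illustrative only):

```python
import math

def rmse(a, b):
    """Root-mean-square difference between two equal-length
    pixel lists (what the appendix 'mse' helper computes)."""
    n = len(a)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n)
```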
if (isfield(flow,'m7'))
flow.m7 = scaleby2(flow.m7,scale);
flow.m8 = scaleby2(flow.m8,scale);
end
return;
if (scale == 0)
out = in;
return;
end
[h,w] = size(in);
scale = 2^scale;
[mx,my] = meshgrid(linspace(1,w,scale*w),linspace(1,h,scale*h));
out = interp2(in,mx,my,'linear');
return;
function flow=flow_smooth(flow,params)
if (isempty(PR))
fprintf('Smoothness: Initializing\n');
lamda = params.smooth.lamda;
deviant = params.smooth.deviant;
dweight = params.smooth.dweight;
model = params.main.model;
affbc_find_init;
[h,w] = size(flow.m1);
[L,d,index,rankE,iFlag] = smooth_setC(lamda,deviant,dweight,model);
[M,b,c,r,pt,kt] = affbc_find_api(flow.f,flow.g,model);
PR = zeros(h*w,8,8);
KR = zeros(h*w,8);
r = [1:rankE];
ir = index(r);
area = h*w;
for i=1:area
p = pt(:,i);
P = p * p';
K = p * kt(i);
C = K +d;
R = inv(P+L);
if (iFlag)
end
m_avg = zeros(8,h*w);
for i=1:8
blurr(i).filt = sfilt .* lamda(i);
end
%r_blurr = conv2mirr(flow.r,sfilt);
r_blurr = c_smoothavg(flow.r,1);
r_zind = find(r_blurr == 0);
r_blurr(r_zind) = 1;
flow.r = ones(h,w);
flow.r(r_zind) = 0;
%m_avg(1,:) = reshape(conv2mirr(flow.m1,blurr(1).filt) ./ r_blurr,1,h*w);
%m_avg(2,:) = reshape(conv2mirr(flow.m2,blurr(2).filt) ./ r_blurr,1,h*w);
%m_avg(3,:) = reshape(conv2mirr(flow.m3,blurr(3).filt) ./ r_blurr,1,h*w);
%m_avg(4,:) = reshape(conv2mirr(flow.m4,blurr(4).filt) ./ r_blurr,1,h*w);
%m_avg(5,:) = reshape(conv2mirr(flow.m5,blurr(5).filt) ./ r_blurr,1,h*w);
%m_avg(6,:) = reshape(conv2mirr(flow.m6,blurr(6).filt) ./ r_blurr,1,h*w);
%m_avg(7,:) = reshape(conv2mirr(flow.m7,blurr(7).filt/20),1,h*w);
%m_avg(8,:) = reshape(conv2mirr(flow.m8,blurr(8).filt/20),1,h*w);
KRA = KR + m_avg';
for r = 1:8
m(:,r) = sum( squeeze(PR(:,r,:)) .* KRA,2);
end
flow.m1(:) = m(:,1);
flow.m2(:) = m(:,2);
flow.m3(:) = m(:,3);
flow.m4(:) = m(:,4);
flow.m5(:) = m(:,5);
flow.m6(:) = m(:,6);
flow.m7(:) = m(:,7);
flow.m8(:) = m(:,8);
flow.m1(r_zind) = 1;
flow.m2(r_zind) = 0;
flow.m3(r_zind) = 0;
flow.m4(r_zind) = 1;
flow.m5(r_zind) = 0;
flow.m6(r_zind) = 0;
flow.m7(r_zind) = 1;
flow.m8(r_zind) = 0;
return;
function [L,d,index,rankE,iFlag] = smooth_setC(lamda,deviant,dweight,model)
[index,afFlag,bcFlag] = getindex(model);
iFlag = ~(afFlag & bcFlag);
rankE = length(index);
L = zeros(rankE,rankE);
for i = 1:rankE
ind = index(i);
L(i,i) = lamda(ind) + dweight(ind);
d(i) = deviant(ind) .* dweight(ind);
end
d = d';
return;
%
% Note:
%
% The unwarped i1 is passed in as a parameter since all
% subsequent warpings are performed using the accumulated
% flow on the original image.
[h,w] = size(i1);
if (~exist('flowA'))
flowA = flow_init(w,h);
i1_warped = i1;
else
i1_warped = flow_warp(i1,flowA);
end
flow = flowA;
flow.m7 = ones(h,w);
flow.m8 = zeros(h,w);
dispFlag = params.main.dispFlag;
loops_outer = params.smooth.loops_outer;
loops_inner = params.smooth.loops_inner;
if (dispFlag)
hw = waitbar(0,'Finding Smooth Flow');
updatedisplay(i1,i2,flowA,i1_warped,0, loops_outer,currLevel);
end
for j=1:loops_outer
flow = flowfind_raw(i1_warped,i2,params);
clear flow_smooth;
for i=1:loops_inner
fprintf('Level: %d/%d Iteration outer: %d/%d inner: %d/%d size: %dx%d\n', totLevels-currLevel+1,totLevels,j,loops_outer,i,loops_inner,w,h);
if (dispFlag)
waitbar(i/loops_inner*j/loops_outer,hw);
end
end
[flowA,i1_warped] = flow_add(flowA,flow,i1);
flowA.m7 = flow.m7;
flowA.m8 = flow.m8;
if (dispFlag)
waitbar(i/loops_inner*j/loops_outer,hw,...
sprintf('iteration %g/%g' ,j,loops_outer));
updatedisplay(i1,i2,flowA,i1_warped,j,loops_outer,currLevel);
end
end
%keyboard;
if (dispFlag)
close(hw);
end
return;
flowOut.m5 = pad2dlr(flowIn.m5,padWl,padWr,padHl,padHr,0,1);
flowOut.m6 = pad2dlr(flowIn.m6,padWl,padWr,padHl,padHr,0,1);
if (isfield(flowIn,'m7'))
flowOut.m7 = pad2dlr(flowIn.m7,padWl,padWr,padHl,padHr,0,1);
flowOut.m8 = pad2dlr(flowIn.m8,padWl,padWr,padHl,padHr,0,1);
end
return;
if (nargin == 2)
mtd = 'cubic';
end
if (nargin < 4)
eflag = 0;
end
[h,w] = size(in);
b = floor(h/4);
if (eflag)
in = mirror_extend(in,b,b);
else
in = pad(in,b,b);
end
flow.m5 = mirror_extend(flow.m5,b,b);
flow.m6 = mirror_extend(flow.m6,b,b);
[h,w] = size(in);
[mx,my] = meshgridimg(w,h);
mx2 = mx + flow.m5;
my2 = my + flow.m6;
out = interp2(mx,my,in,mx2,my2,mtd);
out = out(b+1:h-b,b+1:w-b);
out(find(isnan(out))) = 0;
return;
function [out]=flowfind_raw(im1,im2,params);
if (~exist('params','var'))
params = params_default;
end
barFlag = 0;
t0 = clock;
[h_org,w_org] = size(im1);
boxW = params.main.boxW;
boxH = params.main.boxH;
model = params.main.model;
boxW2 = floor(boxW/2);
boxH2 = floor(boxH/2);
im1 = mirror_extend(im1,boxW,boxH);
im2 = mirror_extend(im2,boxW,boxH);
[h,w] = size(im1);
pcOrg = 0;
if (barFlag)
hw = waitbar(0,'Estimating flow...');
end
clear affbc_find;
global affbc_params;
affbc_params.model = model;
affbc_params.index = index;
affbc_params.frameSize = frameSize;
affbc_params.w = boxW;
q = zeros(5,h,w);
q(1,:,:) = -im1;
q(2,:,:) = fx;
q(3,:,:) = fy;
q(4,:,:) = ft;
mout = zeros(h,w,9);
for y=boxH2+1:h-boxH2
if (barFlag)
pc = (y-boxH2-1)/(h-boxH);
waitbar(pc,hw);
end
y1=y -boxH2;
y2=y1+boxH -1;
yind = y1:y2;
for x=boxW2+1:w-boxW2
x1=x -boxW2;
x2=x1+boxW -1;
xind = x1:x2;
chunk = q(:,yind,xind);
affbc_params.nf = chunk(1,:);
affbc_params.fx = chunk(2,:);
affbc_params.fy = chunk(3,:);
affbc_params.ft = chunk(4,:);
affbc_find;
mout(y,x,:) = affbc_params.mout;
end
end
out.m1 = mout(:,:,1);
out.m2 = mout(:,:,2);
out.m3 = mout(:,:,3);
out.m4 = mout(:,:,4);
out.m5 = mout(:,:,5);
out.m6 = mout(:,:,6);
out.m7 = mout(:,:,7);
out.m8 = mout(:,:,8);
out.r = mout(:,:,9);
if (1)
maxd = boxW/2;
ind = find(abs(out.m5) >= maxd | abs(out.m6) >= maxd);
out.m1(ind) = 0;
out.m2(ind) = 0;
out.m3(ind) = 0;
out.fx = fx;
out.fy = fy;
out.ft = ft;
out.f = im1;
out.g = im2;
out.model = model;
out.index = index;
out = flow_extract(out,h_org,w_org);
%waitbar(1,hw);
if (barFlag)
close(hw);
end
t0=etime(clock,t0);
fprintf('Elapsed time: %g\n',t0);
return;
affFlag = 0;
bcFlag = 0;
switch model
case 1; % Translation
index = [5 6];
case 2; % Translation + BC
index=[5 6 7 8];
bcFlag = 1;
case 3; % Affine+Translation
index=[1 2 3 4 5 6];
affFlag = 1;
case 4; % Affine+Translation + BC
index=[1 2 3 4 5 6 7 8];
affFlag = 1;
bcFlag = 1;
end
return;
[h,w] = size(in);
H1 = framegen(h,w,b);
H2 = 1-H1;
out = in .* H1 + H2 .* val;
return;
H = zeros(h,w);
H(frameSize+1:h-frameSize,frameSize+1:w-frameSize) = 1;
return;
flow_out.m5 = pad2dlr(flow.m5,padWl,padWr,padHl,padHr,0);
flow_out.m6 = pad2dlr(flow.m6,padWl,padWr,padHl,padHr,0);
flow_out.m7 = pad2dlr(flow.m7,padWl,padWr,padHl,padHr,1);
flow_out.m8 = pad2dlr(flow.m8,padWl,padWr,padHl,padHr,0);
return;
flow.m5 = zeros(h,w);
flow.m6 = zeros(h,w);
flow.m7 = ones(h,w);
flow.m8 = zeros(h,w);
return;
[oh,ow] = size(flow.m5);
x1 = (ow-w)/2+1;
y1 = (oh-h)/2+1;
x1 = floor(x1);
y1 = floor(y1);
x2 = x1+w-1;
y2 = y1+h-1;
if (isfield(flow,'m1'))
flow.m1 = flow.m1(y1:y2,x1:x2);
end
if (isfield(flow,'m2'))
flow.m2 = flow.m2(y1:y2,x1:x2);
end
if (isfield(flow,'m3'))
flow.m3 = flow.m3(y1:y2,x1:x2);
end
if (isfield(flow,'m4'))
flow.m4 = flow.m4(y1:y2,x1:x2);
end
if (isfield(flow,'m5'))
flow.m5 = flow.m5(y1:y2,x1:x2);
end
if (isfield(flow,'m6'))
flow.m6 = flow.m6(y1:y2,x1:x2);
end
if (isfield(flow,'m7'))
flow.m7 = flow.m7(y1:y2,x1:x2);
end
if (isfield(flow,'m8'))
flow.m8 = flow.m8(y1:y2,x1:x2);
end
if (isfield(flow,'r'))
flow.r = flow.r(y1:y2,x1:x2);
end
if (isfield(flow,'f'))
flow.f = flow.f(y1:y2,x1:x2);
end
if (isfield(flow,'fx'))
flow.fx = flow.fx(y1:y2,x1:x2);
end
if (isfield(flow,'fy'))
flow.fy = flow.fy(y1:y2,x1:x2);
end
return;
function flow_disp(flow,skip,color)
if (nargin == 1)
skip = 8;
end
if (nargin < 3)
color = 'b';
end
[h,w] = size(flow.m5);
m5 = flow.m5(1:skip:h,1:skip:w);
m6 = flow.m6(1:skip:h,1:skip:w);
[flowx flowy] = meshgrid([0:skip:w-1]-w/2+0.5,-([0:skip:h-1]-h/2+0.5));
quiver(flowx,-flowy,m5,-m6,0,color);
axis image;
grid on;
axis ij;
return;
m = M(1:2,1:2);
dx = M(1,3);
dy = M(2,3);
return;
[h,w] = size(flowAB.m5);
m5 = flow_warp(flowBC.m5,flowAB,'cubic',1);
m6 = flow_warp(flowBC.m6,flowAB,'cubic',1);
n = find(isnan(m5) | isnan(m6));
m5(n) = 0;
m6(n) = 0;
if (nargin == 3)
img_warped = flow_warp(img_org,flowAC,'cubic',0);
end
return;
[fdx,fdy,fdz] = c_diffxyt(frame1,frame2);
return;
if (nargin == 2)
filtsizes = [3 3];
end
persistent px dx py dy pz dz;
if (isempty(px))
[px,dx] = GetDerivPD(filtsizes(1));
[py,dy] = GetDerivPD(filtsizes(2));
[pz,dz] = GetDerivPD(2);
end
%Step 1: prefilter in t;
% dx = prefilter in y, differentiate in x
% dy = prefilter in x, differentiate in y
%frame_pz = frame1 .* pz(1) + frame2 .* pz(2);
frame_pz = (frame1 + frame2)/2;
%fdx = conv2(py,dx',frame_pz,'same');
%fdy = conv2(dy,px',frame_pz,'same');
fdx = conv2mirr(frame_pz,dx,py);
fdy = conv2mirr(frame_pz,px,-dy);
%Step 2: differentiate in t;
% dz = prefilter in x and y
%fdz = conv2(py,px',frame_dz,'same');
fdz = conv2mirr(frame_dz,px,py);
return;
switch(noFrames)
case 2
p = [0.5 0.5];
d = [-1 1];
case 3
p = [0.223755 0.552490 0.223755];
d = [-0.453014 0.0 0.453014];
case 4
p = [0.092645 0.407355 0.407355 0.092645];
d = [-0.236506 -0.267576 0.267576 0.236506];
case 5
p = [0.036420 0.248972 0.429217 0.248972 0.036420];
d = [-0.108415 -0.280353 0.0 0.280353 0.108415];
case 6
p = [0.013846 0.135816 0.350337 0.350337 0.135816 0.013846];
d = [-0.046266 -0.203121 -0.158152 0.158152 0.203121 0.046266];
case 7
p = [0.005165 0.068654 0.244794 0.362775 0.244794 0.068654 0.005165];
d = [-0.018855 -0.123711 -0.195900 0.0 0.195900 0.123711 0.018855];
otherwise
p = 0;
d = 0;
end
return;
if (nargin == 2)
newH = newW;
end
[h,w] = size(img);
y = round((h-newH)/2)+1;
x = round((w-newW)/2)+1;
out = img(y:y+newH-1,x:x+newW-1);
[h,w] = size(in);
nr = nl;
nd = nu;
if (nargin == 3)
out = conv2(fy,fx,out,'valid');
else
out = conv2(out,fx,'valid');
end
out = out([1:h],[1:w]);
return;
if (nargin == 2)
model = 4;
noIters = 50;
minBlkSize = 32;
end
if (nargin < 6)
minBlkSize = 32;
end
[h,w] = size(im1);
steps = floor(log2(w/minBlkSize)+1);
%steps = 3;
pyr1(1).im = im1;
pyr2(1).im = im2;
M = eye(3);
bnew = 0;
cnew = 1;
for i = steps:-1:1
im1S = pyr1(i).im;
im2S = pyr2(i).im;
if (i ~= steps)
M1 = M;
M1(1:2,3) = M1(1:2,3)/scale;
im1S = aff_warp(im1S,M1);
end
%dispimg([im1S,im2S]);
[Mnew,b,c] = affbc_iter(im1S,im2S,noIters,model);
cnew = c * cnew;
bnew = b + bnew;
return;
if (~exist('model','var'))
model = 4;
end
if (~exist('iters','var'))
iters = 20;
end
txtFlag = 0;
affbc_find_init;
[h,w] = size(i1);
W = ones(h,w);
[index] = getindex(model);
c = 1;
b = 0;
M = [1 0 0; 0 1 0; 0 0 1];
i1N = i1;
for i=1:iters
M = M*dM;
b = b+db;
c = c*dc;
if (c < 0.1)
c = 1;
end
i1N = aff_warp(i1,M,0);
if (txtFlag)
fprintf('Iteration: %d\n',i);
M
fprintf('c: %g, b: %g, mse: %g\n', c,b,mse(i1N,i2));
end
i1N = i1N .* c + b;
end
return;
function affbc_init
return;
global affbc_params;
[h,w] = size(f);
[fx,fy,ft] = diffxyt(f,g,[3 3]);
[ind,afFlag,bcFlag] = getindex(model);
affbc_params.h = h;
affbc_params.w = w;
affbc_params.model = model;
affbc_params.index = ind;
affbc_params.afFlag = afFlag;
affbc_params.bcFlag = bcFlag;
affbc_params.nf = -f(:)';
affbc_params.fx = fx(:)';
affbc_params.fy = fy(:)';
affbc_params.ft = ft(:)';
affbc_params.frameSize = 1;
if (~exist('lamdas','var'))
lamdas = ones(length(ind),1);
deviants = zeros(length(ind),1);
end
if (model == 4)
lamdas = [1 1 1 1 1 1 1 1]'*10e11;
deviants = [0 0 0 0 0 0 1 0]';
end
affbc_params.S = diag(lamdas);
affbc_params.D = (lamdas .* deviants);
%affbc_params.init = 1;
affbc_find;
mout = affbc_params.mout;
if (nargout > 4)
pt = affbc_params.pt;
kt = affbc_params.kt;
end
return;
function affbc_find
global affbc_params;
affbc_params.pt = [p1;p2;p3;p4;...
affbc_params.fx;affbc_params.fy;affbc_params.nf;minus_ones];
affbc_params.kt = affbc_params.ft;
if (bcFlag)
affbc_params.kt = affbc_params.kt + affbc_params.nf;
end
if (afFlag)
affbc_params.kt = affbc_params.kt + p1+ p4;
end
affbc_params.mout = [1 0 0 1 0 0 1 0 r];
if (iFlag)
affbc_params.mout(affbc_params.index) = m;
else
affbc_params.mout(1:8) = m;
end
return;
affbc_init(affbc_params)
%Initialize things
%fprintf('affbc_find Initializing...\n');
w = affbc_params.w;
h = affbc_params.h;
[mx,my] = meshgridimg(w,h);
H = framegen(w,h,affbc_params.frameSize); %box filter
noFields = length(affbc_params.index);
rankE = noFields;
afFlag = affbc_params.afFlag;
bcFlag = affbc_params.bcFlag;
%Expected rank
rankE = noFields;
o = -ones(1,h*w);
return;
if (nargin == 2)
contFlag = 0;
end
m = M(1:2,1:2);
dx = M(1,3);
dy = M(2,3);
[h,w] = size(in);
a = h*w;
[mx,my] = meshgridimg(w,h);
mx2 = reshape(mx2,h,w);
my2 = reshape(my2,h,w);
out = interp2(mx,my,in,mx2,my2,'cubic');
out(isnan(out)) = 0;
if (contFlag)
out = out * det(M);
end
return;
if nargin>=2
if ischar(whichbar)
type=2; %we are initializing
name=whichbar;
elseif isnumeric(whichbar)
type=1; %we are updating, given a handle
f=whichbar;
else
error(['Input arguments of type ' class(whichbar) ' not valid.'])
end
elseif nargin==1
f = findobj(allchild(0),'flat','Tag','TMWWaitbar');
if isempty(f)
type=2;
name='Waitbar';
else
type=1;
f=f(1);
x = max(0,min(100*x,100));
switch type
case 1, % waitbar(x) update
p = findobj(f,'Type','patch');
l = findobj(f,'Type','line');
if isempty(f) | isempty(p) | isempty(l),
error('Couldn''t find waitbar handles.');
end
xpatch = get(p,'XData');
xpatch = [0 x x 0];
set(p,'XData',xpatch)
xline = get(l,'XData');
set(l,'XData',xline);
if nargin>2,
% Update waitbar title:
hAxes = findobj(f,'type','axes');
hTitle = get(hAxes,'title');
set(hTitle,'string',varargin{1});
end
oldRootUnits = get(0,'Units');
axFontSize=get(0,'FactoryAxesFontSize');
pointsPerPixel = 72/get(0,'ScreenPixelsPerInch');
f = figure(...
'Units', 'points', ...
'BusyAction', 'queue', ...
'Position', pos, ...
'Resize','off', ...
'CreateFcn','', ...
'NumberTitle','off', ...
'IntegerHandle','off', ...
%%%%%%%%%%%%%%%%%%%%%
% set figure properties as passed to the fcn
% pay special attention to the 'cancel' request
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if nargin > 2,
propList = varargin(1:2:end);
valueList = varargin(2:2:end);
cancelBtnCreated = 0;
for ii = 1:length( propList )
try
if strcmp(lower(propList{ii}), 'createcancelbtn' ) & ~cancelBtnCreated
cancelBtnHeight = 23 * pointsPerPixel;
cancelBtnWidth = 60 * pointsPerPixel;
newPos = pos;
vertMargin = vertMargin + cancelBtnHeight;
newPos(4) = newPos(4)+vertMargin;
callbackFcn = [valueList{ii}];
set( f, 'Position', newPos, 'CloseRequestFcn', callbackFcn );
cancelButt = uicontrol('Parent',f, ...
'Units','points', ...
'Callback',callbackFcn, ...
'ButtonDownFcn', callbackFcn, ...
'Enable','on', ...
'Interruptible','off', ...
'Position', [pos(3)-cancelBtnWidth*1.4, 7, cancelBtnWidth, cancelBtnHeight], ...
'String','Cancel', ...
'Tag','TMWWaitbarCancelButton');
cancelBtnCreated = 1;
else
% simply set the prop/value pair of the figure
set( f, propList{ii}, valueList{ii});
end
catch
disp( ['Warning: could not set property ''' propList{ii} ''' with value ''' num2str(valueList{ii}) '''' ] );
end
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
colormap([]);
axNorm=[.05 .3 .9 .2];
axPos=axNorm.*[pos(3:4),pos(3:4)] + [0 vertMargin 0 0];
h = axes('XLim',[0 100],...
'YLim',[0 1],...
'Box','on', ...
tHandle=title(name);
tHandle=get(h,'title');
oldTitleUnits=get(tHandle,'Units');
set(tHandle,...
'Units', 'points',...
'String', name);
tExtent=get(tHandle,'Extent');
set(tHandle,'Units',oldTitleUnits);
titleHeight=tExtent(4)+axPos(2)+axPos(4)+5;
if titleHeight>pos(4)
pos(4)=titleHeight;
pos(2)=screenSize(4)/2-pos(4)/2;
figPosDirty=logical(1);
else
figPosDirty=logical(0);
end
if tExtent(3)>pos(3)*1.10;
pos(3)=min(tExtent(3)*1.10,screenSize(3));
pos(1)=screenSize(3)/2-pos(3)/2;
axPos([1,3])=axNorm([1,3])*pos(3);
set(h,'Position',axPos);
figPosDirty=logical(1);
end
if figPosDirty
set(f,'Position',pos);
end
xpatch = [0 x x 0];
ypatch = [0 0 1 1];
xline = [100 0 0 100 100];
yline = [0 0 1 1 0];
p = patch(xpatch,ypatch,'r','EdgeColor','r','EraseMode','none');
l = line(xline,yline,'EraseMode','none');
set(l,'Color',get(gca,'XColor'));
set(f,'HandleVisibility','callback','visible','on');
if nargout==1,
fout = f;
end
clear,close all,clc;
tic;
T1 = 256;
T2 = 256;
N=30;
num=1;
%Read in sequence
mov=aviread('building_site',1:N);
for k=1:N
mov(k).cdata=mov(k).cdata(1:T1,1:T2);
end
for k=num+1:N-1
k
for f=1:num
i(:,:,f)=mov(k-f).cdata;
i(:,:,f)=i(1:T1,1:T2,f);
im=reshape(i(:,:,f),1,T1*T2);
imf(f,:)=im;
i(:,:,f+num+1)=mov(k+f).cdata;
i(:,:,f+num+1)=i(1:T1,1:T2,f+num+1);
im=reshape(i(:,:,f+num+1),1,T1*T2);
imf(f+num+1,:)=im;
end
i(:,:,num+1)=mov(k).cdata;
i(:,:,num+1)=i(1:T1,1:T2,num+1);
im=reshape(i(:,:,num+1),1,T1*T2);
imf(num+1,:)=im;
X=double(imf);
newframe=abs(um(1,:));
newframe=uint8((newframe./max(max(newframe)))*255);
final(k-num,:)=newframe;
end
toc;
for k=num+1:N-2
map=gray(256);
mov1(k-num) = im2frame(umf(:,:,k-num),map);
%mov1(k).colormap=gray(236);
end
map=gray(256);
for k=num+1:N-2
temp1=(mat2gray(mov1(k-num).cdata));
mov1(k-num)=im2frame(im2uint8(temp1),map);
temp2=(mat2gray(mov(k-num).cdata));
mov(k-num)=im2frame(im2uint8(temp2),map);
%mov(k)=mat2gray(mov(k).cdata);
end
mplay(mov1);
mplay(mov);
% Self-commenting code
%=====================
if verbose, fprintf('jade -> Removing the mean value\n'); end
X = X - mean(X')' * ones(1,T);
%% Init
if 1, %% Init by diagonalizing a *single* cumulant matrix. It seems to save
      %% some computation time `sometimes'. Not clear if initialization is
      %% a good idea since Jacobi rotations are very efficient.
for p=1:m-1,
for q=p+1:m,
Ip = p:m:m*nbcm ;
Iq = q:m:m*nbcm ;
pair = [p;q] ;
V(:,pair) = V(:,pair)*G ;
CM(pair,:) = G' * CM(pair,:) ;
CM(:,[Ip Iq]) = [ c*CM(:,Ip)+s*CM(:,Iq) -s*CM(:,Ip)+c*CM(:,Iq) ] ;
end%%of the if
end%%of the loop on q
end%%of the loop on p
end%%of the while loop
if verbose, fprintf('jade -> Total of %d Givens rotations\n',updates); end
%%% We permut its rows to get the most energetic components first.
%%% Here the **signals** are normalized to unit variance. Therefore,
%%% the sort is according to the norm of the columns of A = pinv(B)
return ;
% To do.
% - Implement a cheaper/simpler whitening (is it worth it?)
%
% Revision history:
%
%- V1.5, Dec. 24 1997
% - The sign of each row of B is determined by letting the first
% element be positive.
%
%- V1.4, Dec. 23 1997
% - Minor clean up.
% - Added a verbose switch
% - Added the sorting of the rows of B in order to fix in some
% reasonable way the permutation indetermination. See note 2)
% below.
%
%- V1.3, Nov. 2 1997
% - Some clean up. Released in the public domain.
%
%- V1.2, Oct. 5 1997
% - Changed random picking of the cumulant matrix used for
% initialization to a deterministic choice. This is not because
% of a better rationale but to make the output (almost surely)
% deterministic.
% - Rewrote the joint diag. to take more advantage of Matlab's
% tricks.
% - Created more dummy variables to combat Matlab's loose memory
clear,close all,clc;
N=30;
w=5; % window for cgi 5=window of +5/-5, 6=window of +6/-6 , etc
mov=aviread('building_site',580:629);
for iter=1:w
[u v]=cgi_frakes(current,mov(k-iter).cdata);
uf(:,:,iter)=u;
vf(:,:,iter)=v;
[u v]=cgi_frakes(current,mov(k+iter).cdata);
uf(:,:,iter+w)=u;
vf(:,:,iter+w)=v;
end
cu(:,:,1)=(sum(uf,3))./((2*w)+1);
cv(:,:,1)=(sum(vf,3))./((2*w)+1);
for k=1:N-(2*w)
map=gray(256);
mov1(k) = im2frame(result(:,:,k)*255,map);
end
map=gray(256);
for k=1:N-(2*w)
mplay(mov1);
mplay(mov);
%save('buildingsiteseqout_580to629_cgi','mov1');
im1=double(i1);
im2=double(i2);
halfWindow = floor(windowSize/2);
for i = halfWindow+1:cp:size(fx,1)-halfWindow
for j = halfWindow+1:cp:size(fx,2)-halfWindow
curFx = curFx';
curFy = curFy';
curFt = curFt';
inca=halfWindow+1;
for a=1:windowSize:windowSize*windowSize
incb=halfWindow+1;
inca=inca-1;
for b=1:windowSize
incb=incb-1;
curFx2(a+b-1,:)=[1 i-inca j-incb (i-inca)*(j-incb) 1 i-inca j-incb (i-inca)*(j-incb)].*curFxb(a+b-1,:);
end
end
A = curFx2;
U = pinv(A'*A)*A'*curFt;
cntr1=halfWindow+1;
for a=1:cp
cntr2=halfWindow+1;
cntr1=cntr1-1;
end;
end;
u(isnan(u))=0;
v(isnan(v))=0;
clear,close all,clc;
tic;
load buildingsiteout_1to20_cgi;
mov=mov1;
clear mov1;
N=1;
nsr=0.0001; % estimated
for iter=1:N
iter
i2=mov(iter).cdata;
i2=double(i2);
lambda=0.00025;
for v=1:125
iwnr=func_wnr2(i2,lambda,nsr);
lambda=lambda+0.00002;
% Calculate Kurtosis
s=size(iwnr,1)*size(iwnr,2);
iwnrtemp=reshape(iwnr,[1 s]);
k(v,1)=kurtosis(double(iwnrtemp));
k(v,2)=lambda-0.00002;
%k(v,3)=rish_psnr((iwnr),(i2));
end
%normalize kurtosis
x=k-min(k(:,1));
x=x./max(x(:,1));
x(:,2)=k(:,2);
[q p]=min(x(:,1));
toc;
frame_c(:,:,iter)=func_wnr(i2,x(p,2),nsr);
end
map=gray(256);
for iter=1:N
temp1=(mat2gray(frame_c(:,:,iter)));
mov1(iter)=im2frame(im2uint8(temp1),map);
%mov(k)=mat2gray(mov(k).cdata);
end
mplay(mov1);
%save('armscorseqout_1to96_kurtosiscgi','mov1');
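The sweep above restores each frame over a range of regularisation values and keeps the value whose restored frame has minimum kurtosis. A minimal NumPy sketch of that selection rule — the `deblur(frame, lam)` argument is a hypothetical stand-in for `func_wnr2`:

```python
import numpy as np

def kurtosis(x):
    """Non-excess sample kurtosis, as returned by MATLAB's kurtosis()."""
    x = np.asarray(x, dtype=float).ravel()
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / s2 ** 2

def pick_lambda(frame, deblur, lambdas):
    """Restore `frame` for each candidate lambda and return the lambda
    whose restoration has the smallest kurtosis."""
    scores = [kurtosis(deblur(frame, lam)) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]
```

For a Gaussian sample the statistic is close to 3 and for a uniform sample close to 1.8, so the criterion can discriminate between differently restored frames.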
function i2final=func_wnr2(i2,k,nsr)
i2=double(i2);
[r c]=size(i2);
% Pre-processing of image
% Centres the image frequency transform
for x=1:r
for y=1:c
i2(x,y)=i2(x,y)*((-1)^(x+y));
end
end
% Calculate DFT
i2freq=fft2(i2);
% Blur transfer function (assumed long-exposure turbulence OTF, exp(-k*D^(5/6)))
[v,u]=meshgrid(1:c,1:r);
H=exp(-k*(((u-(r/2)).^2+(v-(c/2)).^2).^(5/6)));
% Wiener filter in the frequency domain
filt=(i2freq.*(abs(H).^2))./(H.*((abs(H).^2)+nsr));
% Inverse DFT
i2deblur=real(ifft2(filt));
% Post-processing of image
% Multiply by (-1) to "de-centre" the transform
i2final=zeros(r,c);
for x=1:r
for y=1:c
i2final(x,y)=i2deblur(x,y)*((-1)^(x+y));
end
end
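`func_wnr2` centres the spectrum by multiplying the image by (-1)^(x+y) before the FFT; in NumPy the same centring is `fftshift`. A sketch of the core Wiener step, assuming the degradation OTF `H` is supplied already centred, and using the identity G|H|²/(H(|H|²+nsr)) = G·conj(H)/(|H|²+nsr):

```python
import numpy as np

def wiener_deblur(img, H, nsr):
    """Wiener deconvolution of img by the centred OTF H with
    noise-to-signal ratio nsr."""
    G = np.fft.fftshift(np.fft.fft2(img))        # centred spectrum
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener estimate
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```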
clear,close all,clc;
mov=aviread('shack',1:25);
N=25;
% for k=1:N
% mov(k).cdata=rgb2gray(mov(k).cdata);
% end
for k=1:N
in(:,:,k)=im2double(mov(k).cdata);
end
tic;
% Temporal average of the sequence used as the reference frame
ave_total=sum(in,3);
avelength=N;
average=ave_total/avelength;
for k=1:N
im2=in(:,:,k);
[u,v] = HierarchicalLK(im2, average, 1, 5,1,0);
res(:,:,k)=mdewarpfinal(in(:,:,k),u,v);
end
toc;
for k=1:N
map=gray(256);
mov1(k) = im2frame(res(:,:,k)*255,map);
end
map=gray(256);
for k=1:N
temp1=(mat2gray(mov1(k).cdata));
mov1(k)=im2frame(im2uint8(temp1),map);
%mov(k)=mat2gray(mov(k).cdata);
end
mplay(mov1);
outvid='shackseq_1to25_ta';
%movie2avi(mov1,outvid,'fps',20,'compression','None');
save(outvid,'mov1');
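The script registers each frame to the temporal mean of the sequence: turbulence displacements are quasi-random about zero, so the mean frame approximates the undistorted scene geometry (at the cost of some blur) and serves as the reference. The `ave_total/avelength` computation reduces to a per-pixel mean over the stack:

```python
import numpy as np

def temporal_reference(frames):
    """Mean of an (N, H, W) frame stack, used as the geometric reference."""
    return np.asarray(frames, dtype=float).mean(axis=0)
```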
if (size(im1,1)~=size(im2,1)) | (size(im1,2)~=size(im2,2))
error('images are not same size');
end;
if (size(im1,3) ~= 1) | (size(im2, 3) ~= 1)
error('input should be gray level images');
end;
%Build Pyramids
pyramid1 = im1;
pyramid2 = im2;
for i=2:numLevels
im1 = Reduce(im1);
im2 = Reduce(im2);
pyramid1(1:size(im1,1), 1:size(im1,2), i) = im1;
pyramid2(1:size(im2,1), 1:size(im2,2), i) = im2;
end;
for r = 1:iterations
[u, v] = LucasKanadeRefined(u, v, baseIm1, baseIm2);
end
for r = 1:iterations
[u, v, cert] = LucasKanadeRefined(u, v, curIm1, curIm2);
end
end;
if (display==1)
figure, quiver(Reduce((Reduce(medfilt2(flipud(u),[5 5])))), -Reduce((Reduce(medfilt2(flipud(v),[5 5])))), 0), axis equal
end
uIn = round(uIn);
vIn = round(vIn);
%uIn = uIn(2:size(uIn,1), 2:size(uIn, 2)-1);
%vIn = vIn(2:size(vIn,1), 2:size(vIn, 2)-1);
u = zeros(size(im1));
v = zeros(size(im2));
%to compute derivatives, use a 5x5 block... the resulting derivative will be 5x5...
% take the middle 3x3 block as derivative
for i = 3:size(im1,1)-2
for j = 3:size(im2,2)-2
% if uIn(i,j)~=0
% disp('ha');
% end;
if (lowRindex < 1)
lowRindex = 1;
highRindex = 5;
end;
if (lowCindex < 1)
lowCindex = 1;
highCindex = 5;
end;
if isnan(lowRindex)
lowRindex = i-2;
highRindex = i+2;
end;
if isnan(lowCindex)
lowCindex = j-2;
highCindex = j+2;
end;
curFx = fx(lowRindex:highRindex, lowCindex:highCindex);
curFy = fy(lowRindex:highRindex, lowCindex:highCindex);
curFt = ft(lowRindex:highRindex, lowCindex:highCindex);
curFx = curFx';
curFy = curFy';
curFt = curFt';
curFx = curFx(:);
curFy = curFy(:);
curFt = -curFt(:);
A = [curFx curFy];
U = pinv(A'*A)*A'*curFt;
u(i,j)=U(1);
v(i,j)=U(2);
cert(i,j) = rcond(A'*A);
end;
end;
u = u+uIn;
v = v+vIn;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [fx, fy, ft] = ComputeDerivatives(im1, im2);
%ComputeDerivatives Compute horizontal, vertical and time derivative
% between two gray-level images.
if (size(im1,3)~=1) | (size(im2,3)~=1)
error('method only works for gray-level images');
end;
u = zeros(size(im1));
v = zeros(size(im2));
halfWindow = floor(windowSize/2);
for i = halfWindow+1:size(fx,1)-halfWindow
for j = halfWindow+1:size(fx,2)-halfWindow
curFx = fx(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFy = fy(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFt = ft(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFx = curFx';
curFy = curFy';
curFt = curFt';
curFx = curFx(:);
curFy = curFy(:);
curFt = -curFt(:);
A = [curFx curFy];
U = pinv(A'*A)*A'*curFt;
u(i,j)=U(1);
v(i,j)=U(2);
end;
end;
u(isnan(u))=0;
v(isnan(v))=0;
%u=u(2:size(u,1), 2:size(u,2));
%v=v(2:size(v,1), 2:size(v,2));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [fx, fy, ft] = ComputeDerivatives(im1, im2);
%ComputeDerivatives Compute horizontal, vertical and time derivative
% between two gray-level images.
if (size(im1,3)~=1) | (size(im2,3)~=1)
error('method only works for gray-level images');
end;
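Inside each window, `LucasKanade` solves the over-determined brightness-constancy system [fx fy]·d = -ft in the least-squares sense (the `pinv(A'*A)*A'*curFt` line). A NumPy sketch of that per-window solve, using `lstsq` rather than an explicit pseudo-inverse:

```python
import numpy as np

def lucas_kanade_window(fx, fy, ft):
    """Least-squares flow (u, v) for one window from its spatial and
    temporal derivatives."""
    A = np.stack([fx.ravel(), fy.ravel()], axis=1)  # the matrix [fx fy]
    b = -ft.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (u, v) at the window centre
```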
function res=mdewarpfinal(I,u,v)
%% Read Image
[r c]=size(I);
morph1=u;
morph2=v;
tempx=1:c;
tempy=(1:r)';
indicex=zeros(r,c);
indicey=zeros(r,c);
for k=1:r
indicex(k,:)=tempx;
end
for k=1:c
indicey(:,k)=tempy;
end
%% Intermediate matrix
interx=indicex-morph1;
intery=indicey-morph2;
% remove all values less than 1 and greater than width and height
[r1 c1]=find((interx<1) | (interx>c));
[r2 c2]=find((intery<1) | (intery>r));
for k=1:size(r1)
for j=1:size(c1)
if (interx(r1(k),c1(j))<1)
interx(r1(k),c1(j))=1;
elseif (interx(r1(k),c1(j))>c)
interx(r1(k),c1(j))=c;
end
end
end
for k=1:size(r2)
for j=1:size(c2)
if (intery(r2(k),c2(j))<1)
intery(r2(k),c2(j))=1;
elseif (intery(r2(k),c2(j))>r)
intery(r2(k),c2(j))=r;
end
end
end
x1 = floor(interx);
x2 = ceil(interx);
fx1 = (x2-interx);
fx2 = 1-fx1;
y1 = floor(intery);
y2 = ceil(intery);
fy1 = y2-intery;
fy2 = 1-fy1;
for k=1:r
for j=1:c
res(k,j) = fx1(k,j)*fy1(k,j)*I(y1(k,j),x1(k,j)) + ...
fy1(k,j)*fx2(k,j)*I(y1(k,j),x2(k,j)) + ...
fy2(k,j)*fx1(k,j)*I(y2(k,j),x1(k,j)) + ...
fy2(k,j)*fx2(k,j)*I(y2(k,j),x2(k,j));
end
end
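`mdewarpfinal` is a backward warp: each output pixel samples the input at (x-u, y-v), clamps the sampling coordinates to the image, and blends the four surrounding pixels with bilinear weights. A NumPy sketch of the same scheme, using `floor`/`floor+1` in place of `floor`/`ceil` (equivalent away from integer coordinates) and 0-based indexing:

```python
import numpy as np

def dewarp(I, u, v):
    """Backward-warp image I by the flow field (u, v) with bilinear
    interpolation and border clamping."""
    r, c = I.shape
    yy, xx = np.mgrid[0:r, 0:c].astype(float)
    sx = np.clip(xx - u, 0, c - 1)          # clamped source coordinates
    sy = np.clip(yy - v, 0, r - 1)
    x1 = np.floor(sx).astype(int); x2 = np.clip(x1 + 1, 0, c - 1)
    y1 = np.floor(sy).astype(int); y2 = np.clip(y1 + 1, 0, r - 1)
    fx2 = sx - x1; fx1 = 1.0 - fx2          # bilinear weights
    fy2 = sy - y1; fy1 = 1.0 - fy2
    return (fx1 * fy1 * I[y1, x1] + fx2 * fy1 * I[y1, x2]
            + fx1 * fy2 * I[y2, x1] + fx2 * fy2 * I[y2, x2])
```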
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code
% --- Outputs from this function are returned to the command line.
function varargout = Ver1_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
handles.orig=im1;
handles.filename=strcat(PathName,FileName);
guidata(hObject,handles)
%mov=aviread('lennablur_001_5');
%movie(handles.axes2,mov);
end
I=im2;
n=d.Value;
k=b.Value/10000;
res=simulateblur2(I,n);
res=func_blur(res,k);
handles.im1=res;
guidata(hObject,handles)
imshow(handles.im1,[]);
set(handles.axes1,'nextplot','replacechildren','units','normalized');
newplot;
movie(handles.axes1,handles.movx);
d=get(handles.Distortion);
b=get(handles.Blurring);
[mov movx]=create_seq(handles.filename,uint8(handles.nframes),'test1',d.Value,(b.Value)/10000)
handles.mov=mov;
handles.movx=movx;
guidata(hObject,handles);
function i1final=func_blur(i1,k)
i1=double(i1);
[r c]=size(i1);
% Pre-processing of image
% Centres the image frequency transform
for x=1:r
for y=1:c
i1(x,y)=i1(x,y)*((-1)^(x+y));
end
end
% Calculate DFT
i1freq=fft2(i1);
% Blur transfer function (assumed long-exposure turbulence OTF, exp(-k*D^(5/6)))
[v,u]=meshgrid(1:c,1:r);
H=exp(-k*(((u-(r/2)).^2+(v-(c/2)).^2).^(5/6)));
% Apply the blur in the frequency domain
i1blur=real(ifft2(i1freq.*H));
% Post-processing of image
% Multiply by (-1) to "de-centre" the transform
i1final=zeros(r,c);
for x=1:r
for y=1:c
i1final(x,y)=i1blur(x,y)*((-1)^(x+y));
end
end
function [mov movx]=create_seq(name,nframes,outname,n,lambda)
image=imread(name);
lambda1=lambda;
for k=1:nframes
w(:,:,k)=simulateblur2(image,n);
w(:,:,k)=func_blur(w(:,:,k),lambda);
%lambda=lambda+0.0002;
lambda=(rand+0.5)*lambda1;
map=gray(256);
mov(k) = im2frame(w(:,:,k),map);
movx(k)=mov(k);
movx(k).cdata=flipud(movx(k).cdata);
end
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <time.h>
time_t t1,t2;
double diff=0;
float average2[1024][1024];
IplImage *image = 0, *grey = 0, *average = 0, *img2 = 0;
IplImage *corrected = 0, *vel_x = 0, *vel_y = 0;
CvCapture* capture = 0;
int main( int argc, char** argv )
{
    if( argc == 2 )
capture = cvCaptureFromAVI( argv[1] );
cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,0);
double x=cvGetCaptureProperty(capture,CV_CAP_PROP_FPS);
printf("Frame Rate: %f ",x);
if( !capture )
{
fprintf(stderr,"%d Could not initialize capturing...\n",argc);
return -1;
}
printf( "\n"
"\tESC - quit the program\n");
cvNamedWindow( "distorted", 1 );
cvNamedWindow( "corrected", 1 );
time(&t1);
for(int r=0;r<10;r++)
{
IplImage* frame = cvQueryFrame( capture );
if( !frame )
break;
if( !average )
{
/* allocate all the buffers */
average = cvCreateImage( cvGetSize(frame), 8, 3 );
average->origin = frame->origin;
grey = cvCreateImage( cvGetSize(frame), 8, 1 );
img2 = cvCreateImage( cvGetSize(frame), 8, 1 );
}
cvCvtColor( frame, grey, CV_BGR2GRAY );
int i=0,j=0;
for(i=0;i<grey->height;i++)
{
for(j=0;j<grey->width;j++)
{
average2[i][j]=(unsigned char)(grey->imageData[i*grey->widthStep+j])+average2[i][j];
if(r==9)
{
average2[i][j]=average2[i][j]/10;
img2->imageData[i*grey->widthStep+j]=(unsigned char)(average2[i][j]);
}
}
}
}
cvReleaseImage(&grey);
cvReleaseImage(&average);
cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,0);
for(;;)
{
IplImage* frame = cvQueryFrame( capture );
int i=0, k=0, c=0;
if( !frame )
break;
if( !image )
{
/* allocate all the buffers */
image = cvCreateImage( cvGetSize(frame), 8, 3 );
image->origin = frame->origin;
grey = cvCreateImage( cvGetSize(frame), 8, 1 );
vel_x = cvCreateImage( cvGetSize(grey), 32, 1 );
vel_y = cvCreateImage( cvGetSize(grey), 32, 1 );
corrected = cvCreateImage( cvGetSize(grey), 8, 1 );
}
cvCvtColor( frame, grey, CV_BGR2GRAY );
//==========================================================//
int j=0;
int height,width,step,channels;
float *data_x1,*data_y1;
cvCalcOpticalFlowLK( grey,img2,cvSize(5,5),vel_x,vel_y);
height = grey->height;
width = grey->width;
step = grey->widthStep;
channels = grey->nChannels;
data_x1 = (float *)vel_x->imageData;
data_y1 = (float *)vel_y->imageData;
for(i=0;i<height;i++)
{
for(j=0;j<width;j++)
{
data_x1[i*step+j]=j-data_x1[i*step+j];
data_y1[i*step+j]=i-data_y1[i*step+j];
if (data_x1[i*step+j]<0)
data_x1[i*step+j]=0;
if (data_y1[i*step+j]<0)
data_y1[i*step+j]=0;
if (data_x1[i*step+j]>(width-2))
data_x1[i*step+j]=(float)(width-2);
if (data_y1[i*step+j]>(height-2))
data_y1[i*step+j]=(float)(height-2);
}
}
cvRemap(grey,corrected,vel_x,vel_y);
cvShowImage( "distorted",grey);
cvShowImage( "corrected",corrected);
c = cvWaitKey(1);
if( (char)c == 27 )
break;
}
time(&t2);
diff=difftime(t2,t1);
printf("%f",diff);
cvReleaseCapture( &capture );
cvDestroyWindow("distorted");
cvDestroyWindow("corrected");
return 0;
}
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <ctime>
#include <time.h>
time_t t1,t2;
double diff=0;
IplImage *image , *grey;
IplImage *corrected = 0;IplImage *img2 = 0;IplImage *vel_x=0, *vel_y=0;
IplImage *curr = 0, *current = 0;
float data_xf[256][256],data_yf[256][256];
float average2[256][256];
CvCapture* capture = 0;
IplImage* frame = 0;
int main( int argc, char** argv )
{
    if( argc == 2 )
capture = cvCaptureFromAVI( argv[1] );
cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,0);
double x=cvGetCaptureProperty(capture,CV_CAP_PROP_FPS);
printf("Frame Rate: %f ",x);
if( !capture )
{
fprintf(stderr,"%d Could not initialize capturing...\n",argc);
return -1;
}
cvNamedWindow( "distorted", 1 );
cvNamedWindow( "corrected", 1 );
//===========Main LOOP===================//
time(&t1);
for (int q=0;q<1000;q++)
{
cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,q+5);
for(int p=0;p<1;p++)
{
frame = cvQueryFrame( capture );
if( !frame )
break;
if( !curr )
{
/* allocate all the buffers */
curr = cvCreateImage( cvGetSize(frame), 8, 3 );
curr->origin = frame->origin;
current = cvCreateImage( cvGetSize(frame), 8, 1 );
}
cvCvtColor( frame, current, CV_BGR2GRAY );
}
cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,q);
for(p=0;p<11;p++)
{
IplImage* frame = cvQueryFrame( capture );
if( !frame )
break;
int i=0, k=0, c=0;
if( !image )
{
/* allocate all the buffers */
image = cvCreateImage( cvGetSize(frame), 8, 3 );
image->origin = frame->origin;
grey = cvCreateImage( cvGetSize(frame), 8, 1 );
vel_x = cvCreateImage( cvGetSize(grey),32 , 1 );
vel_y = cvCreateImage( cvGetSize(grey),32 , 1 );
corrected = cvCreateImage( cvGetSize(grey),8 , 1 );
img2 = cvCreateImage( cvGetSize(frame), 8, 1 );
}
cvCvtColor( frame, grey, CV_BGR2GRAY );
int j=0,r=0;
int height,width,step;
float *data_x1,*data_y1;
cvCalcOpticalFlowLK( current,grey,cvSize(5,5),vel_x,vel_y);
// Lk algorithm used since it is optimized
height = grey->height;
width = grey->width;
step = grey->widthStep;
data_x1 = (float *)vel_x->imageData;
data_y1 = (float *)vel_y->imageData;
for(i=0;i<height;i++)
{
for(j=0;j<width;j++)
{
data_xf[i][j]=data_xf[i][j]+data_x1[i*step+j];
data_yf[i][j]=data_yf[i][j]+data_y1[i*step+j];
if (p==10)
{
data_xf[i][j]=data_xf[i][j]/10;
data_yf[i][j]=data_yf[i][j]/10;
data_x1[i*step+j]=j-data_xf[i][j];
data_y1[i*step+j]=i-data_yf[i][j];
if (data_x1[i*step+j]<0)
data_x1[i*step+j]=0;
if (data_y1[i*step+j]<0)
data_y1[i*step+j]=0;
if (data_x1[i*step+j]>(width-2))
data_x1[i*step+j]=(float)(width-2);
if (data_y1[i*step+j]>(height-2))
data_y1[i*step+j]=(float)(height-2);
}
}
}
if (p==10)
{
cvRemap(current,corrected,vel_x,vel_y);
cvShowImage( "distorted",grey);
cvShowImage( "corrected",corrected);
c = cvWaitKey(1);
if( (char)c == 27 )
break;
//=================Run Sequence==========================//
}
}
}
time(&t2);
diff=difftime(t2,t1);
printf("%f",diff);
cvReleaseCapture( &capture );
cvDestroyWindow("distorted");
cvDestroyWindow("corrected");
return 0;
}
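Both OpenCV programs build absolute remap coordinates (j - u, i - v) from the averaged flow, clamp them to the image borders, and resample with `cvRemap`. A NumPy stand-in for that clamped remap step (nearest-neighbour for brevity; `cvRemap` interpolates bilinearly by default):

```python
import numpy as np

def remap_nearest(img, map_x, map_y):
    """Sample img at absolute (map_x, map_y) coordinates with
    nearest-neighbour lookup, clamping to the image borders."""
    h, w = img.shape
    xi = np.clip(np.rint(map_x), 0, w - 1).astype(int)
    yi = np.clip(np.rint(map_y), 0, h - 1).astype(int)
    return img[yi, xi]
```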