
ALGORITHMS FOR THE REMOVAL OF

HEAT SCINTILLATION IN IMAGES

by

Rishaad Abdoola

Submitted in fulfilment of the requirements for the degree

MAGISTER TECHNOLOGIAE: ENGINEERING: ELECTRICAL

in the

Department of Electrical Engineering

FACULTY OF ENGINEERING AND THE BUILT ENVIRONMENT

TSHWANE UNIVERSITY OF TECHNOLOGY

Supervisor: Prof B van Wyk

Co-Supervisor: Prof A van Wyk

May 2008
DECLARATION BY CANDIDATE
I hereby declare that the dissertation submitted for the degree M Tech: Engineering:

Electrical, at Tshwane University of Technology, is my own original work and has

not previously been submitted to any other institution of higher education. I further

declare that all sources cited or quoted are indicated and acknowledged by means of a

comprehensive list of references.

Rishaad Abdoola

Copyright Tshwane University of Technology

To my parents, my brother and my sister

for all your support and patience. Thank you.

ACKNOWLEDGEMENTS

I would like to express my sincere gratitude and appreciation to my supervisors: Prof.

Ben van Wyk and Prof. Anton van Wyk for their guidance, support and advice during

the completion of this project. Thank you for always having time and patience and for

all the effort given in completing this project.

I would also like to thank FSATIE (French South African Technical Institute in

Electronics) for all the opportunities and facilities provided in completing this project.

I thank all the lecturers and students at FSATIE and TUT (Tshwane University of

Technology) for all their assistance.

I would like to thank Dr. Dirk Bezuidenhout and Mark Holloway from the CSIR

(Council for Scientific and Industrial Research) for their significant help in providing

me with the turbulence sequences.

Special thanks go to Tshwane University of Technology, FSATIE and the CSIR for

their financial support. The financial assistance of the DPSS (Defence, Peace, Safety

and Security) towards this research is hereby acknowledged. Opinions expressed and

conclusions arrived at, are those of the author and are not necessarily to be attributed

to the DPSS, TUT or CSIR.

ABSTRACT

Heat scintillation occurs due to the index of refraction of air decreasing with an

increase in air temperature, causing objects to appear blurred and waver slowly in a

quasi-periodic fashion. This imposes limitations on sensors used to record images

over long distances resulting in a loss of detail in the video sequences. Algorithms and

techniques for the restoration of image sequences degraded by atmospheric turbulence

are investigated. A comparative analysis of these algorithms is performed. An

algorithm, developed to simulate the effects of atmospheric turbulence, is presented

and a GUI (Graphical User Interface) is developed. Focus is placed on the removal of

heat scintillation, a special case of atmospheric turbulence, from video sequences.

GLOSSARY

AOI: Adaptive Optics Imaging.

CGI: Control Grid Interpolation.

CSIR: Council for Scientific and Industrial Research.

DWFS: Deconvolution from Wave-Front Sensors.

FATR: First Average Then Register.

FRTAAS: First Register Then Average And Subtract.

FSATIE: French South African Technical Institute in Electronics.

GPU: Graphics Processing Unit.

GUI: Graphical User Interface.

ICA: Independent Component Analysis.

JADE: Joint Approximate Diagonalization of the Eigen-Matrices.

MAP: Maximum A Posteriori Probability.

MSE: Mean-Square-Error.

NSR: Noise to Signal Ratio.

OTF: Optical Transfer Function.

PSF: Point Spread Function.

CONTENTS

PAGE

DECLARATION................................................................................................... iii
DEDICATION ...................................................................................................... iv
ACKNOWLEDGEMENTS ................................................................................... v
ABSTRACT. ......................................................................................................... vi
GLOSSARY. ........................................................................................................ vii
CONTENTS. ....................................................................................................... viii
LIST OF FIGURES .............................................................................................. xi

CHAPTER 1

PROJECT OVERVIEW ........................................................................................... 1


1.1 INTRODUCTION ............................................................................... 1
1.2 Problem Statement............................................................................... 2
1.3 Methodology ....................................................................................... 2
1.4 Assumptions ........................................................................................ 2
1.5 Delimitations ....................................................................................... 3
1.6 Contributions ....................................................................................... 3
1.7 Document Plan .................................................................................... 4
1.7.1 Chapter One ........................................................................................ 4
1.7.2 Chapter Two ........................................................................................ 4
1.7.3 Chapter Three ...................................................................................... 4
1.7.4 Chapter Four........................................................................................ 4
1.7.5 Chapter Five ........................................................................................ 4

CHAPTER 2

ALGORITHMS ....................................................................................................... 5
2.1 PREVIOUS WORK IN ATMOSPHERIC TURBULENCE ................. 5
2.1.1 Speckle Imaging .................................................................................. 5
2.1.2 Frieden's Method ............................................................... 7
2.1.3 Fusion ................................................................................................. 8
2.1.4 WFS (Wave-Front Sensing) ................................................................. 9

2.2 ALGORITHMS SELECTED FOR COMPARISON ...........................10
2.2.1 Time-Averaged Algorithm..................................................................10
2.2.1.1 Lucas-Kanade Algorithm ....................................................................11
2.2.2 First Register Then Average and Subtract ...........................................14
2.2.2.1 FATR .................................................................................................14
2.2.2.2 FRTAAS ............................................................................................14
2.2.2.3 Elastic Image Registration ..................................................................18
2.2.3 Independent Component Analysis.......................................................22
2.2.4 Restoration of Atmospheric Turbulence Degraded Video
using Kurtosis Minimization and Motion Compensation .....................27
2.2.4.1 Control Grid Interpolation ..................................................................27
2.2.4.2 Compensation .....................................................................................28
2.2.4.3 Median using fixed period enhancement .............................................32
2.2.4.4 Kurtosis minimization ........................................................................32
2.3 CONCLUSION ..................................................................................36

CHAPTER 3

ATMOSPHERIC TURBULENCE ..........................................................................38


3.1 SIMULATING ATMOSPHERIC TURBULENCE.............................38
3.1.1 Blurring ..............................................................................................39
3.1.2 Geometric distortion ...........................................................................39
3.2 QUASI-PERIODICITY OF ATMOSPHERIC
TURBULENCE..................................................................................43
3.3 GRAPHICAL USER INTERFACE ....................................................45
3.4 CONCLUSION ..................................................................................46

CHAPTER 4

DATASETS, RESULTS AND COMPARATIVE ANALYSIS ...............................47


4.1 EXPERIMENTAL SETUP .................................................................47
4.2 RESULTS USING SIMULATED SEQUENCES ...............................48
4.2.1 Simulated sequences without motion ..................................................48
4.2.2 Simulated sequences with motion .......................................................55
4.3 RESULTS USING REAL SEQUENCES ...........................................58

4.3.1 Real turbulence-degraded sequences without motion ..........................58
4.3.2 Real turbulence-degraded sequences with motion ...............................69
4.3.3 Sharpness of real turbulence-degraded sequences ...............................70
4.4 CONCLUSION ..................................................................................76

CHAPTER 5

CONCLUSION AND FUTURE WORK.................................................................77


5.1 CONCLUSION ..................................................................................77
5.2 FUTURE WORK ...............................................................................78

BIBLIOGRAPHY .................................................................................................79
APPENDIX A: FRTAAS Matlab code ...................................................................86
APPENDIX B: ICA Matlab code ......................................................................... 144
APPENDIX C: CGI Matlab code ......................................................................... 152
APPENDIX D: Wiener Filter and Kurtosis Matlab code ...................................... 156
APPENDIX E: General Algorithm Matlab code................................................... 159
APPENDIX F: Warping Algorithm Matlab code ................................................. 168
APPENDIX G: Graphical User Interface Matlab code ......................................... 170
APPENDIX H: General Algorithm C code .......................................................... 178
APPENDIX I: CGI Algorithm C code ................................................................. 182

LIST OF FIGURES

PAGE

Figure 2.1: (a) Original Image, (b) Simulated image, (c) Flow field, (d)
Time averaged frame and (e) Corrected frame. ...................................12
Figure 2.2: (a) Original Image, (b) Simulated Image, (c) Motion Blurred
Average and (d) Incorrectly Registered Image. ...................................13
Figure 2.3: (a) to (t) Frames from Number plate sequence. (r) selected as
sharpest frame. ...................................................................................16
Figure 2.4: Graph showing sharpness levels of sequence in Fig 2.3. .....................16
Figure 2.5: (a) Distorted frame 1, (b) Distorted frame 2, (c) & (d) Shift
maps, (e) estimated warp between frames and (f) Frame 1
warped to frame 2. ..............................................................................21
Figure 2.6: 10 frames from a simulated Lenna sequence. ......................................21
Figure 2.7: 10 corrected frames corresponding to Fig 2.6 using FRTAAS.............22
Figure 2.8: (a), (b) & (c) 3 turbulent frames, (d) Extracted Source Image,
(e) & (f) Turbulent spatial patterns, (g) Time averaged frame
and (h) Extracted source image ...........................................................25
Figure 2.9: ICA applied to video sequences ..........................................................26
Figure 2.10: ICA applied to Building site sequence ................................................26
Figure 2.11: (a) Simulated Lenna frame 1, (b) CGI motion field between
(a) and (c) and (c) Simulated Lenna frame 2. ......................................28
Figure 2.12: Motion fields used in trajectory estimation with a time
window of 5 frames ............................................................................30
Figure 2.13: 10 frames from a simulated Lenna sequence .......................................31
Figure 2.14: 10 corrected frames corresponding to Fig 2.13 using CGI
algorithm ............................................................................................31
Figure 2.15: Lenna sequence where turbulence blur is increased linearly ................33
Figure 2.16: Kurtosis measurements of sequence in Fig 2.15 ..................................33
Figure 2.17: (a) Original Lenna, (b) Blurred Lenna (λ=0.001) and (c)
Restored Lenna (λ=0.001) ..................................................................34
Figure 2.18: Graph showing normalized kurtosis of sequence in Fig 2.17.
λ=0.001 ..............................................................................................35
Figure 2.19: Real Turbulence degraded sequence ...................................................35
Figure 2.20: Restored sequence using CGI and kurtosis algorithm ..........................36

Figure 3.1: (a) Original Lenna image, (b) Image blurred with λ=0.00025,
(c) Image blurred with λ=0.001 and (d) Image blurred with
λ=0.0025.............................................................................................39
Figure 3.2: (a) Checkerboard Image, (b) Checkerboard image warped in
x-direction, (c) Checkerboard image warped in y-direction
and (d) Final warp ..............................................................................41
Figure 3.3: (a) Real turbulence degraded image, (b) Flow field obtained
from (a), and (c) Simulated geometric distortion using flow
field ....................................................................................................42
Figure 3.4: (a), (b) & (c) 3 frames from simulated Lenna sequence with
blurring and geometric distortion ........................................................42
Figure 3.5: Motion of a pixel in a real turbulence sequence...................................44
Figure 3.6: Pixel wander for simulated Lenna sequence ........................................45
Figure 3.7: GUI for simulating atmospheric turbulence ........................................46

Figure 4.1: MSE of Lenna sequence with λ=0.001 and pixel motion set to
5 .........................................................................................................50
Figure 4.2: MSE of Flat sequence with λ=0.001 and pixel motion set to 5 ............50
Figure 4.3: MSE of Satellite sequence with λ=0.001 and pixel motion set
to 5 .....................................................................................................51
Figure 4.4: MSE of Girl1 sequence with λ=0.001 and pixel motion set to
5 .........................................................................................................51
Figure 4.5: MSE of Lenna sequence with λ=0.0025 and pixel motion set
to 3 .....................................................................................................52
Figure 4.6: MSE of Flat sequence with λ=0.0025 and pixel motion set to
3 .........................................................................................................53
Figure 4.7: MSE of Military sequence with λ=0.0025 and pixel motion
set to 3 ................................................................................................53
Figure 4.8: MSE of Room sequence with λ=0.00025 and pixel motion set
to 5 .....................................................................................................54
Figure 4.9: MSE of Lenna sequence with λ=0.00025 and pixel motion set
to 5 .....................................................................................................55
Figure 4.10: 3 frames from simulated motion sequence ..........................................55
Figure 4.11: MSE of Fixed Window CGI and Fixed Window CGI using
Median on simulated motion sequence 1 .............................................56
Figure 4.12: MSE of simulated motion sequence 1 with λ=0.001 and pixel
motion set to 5 ....................................................................................57

Figure 4.13: MSE of simulated motion sequence 2 with λ=0.001 and pixel
motion set to 5 ....................................................................................57
Figure 4.14: MSE between consecutive frames of Armscor sequence .....................59
Figure 4.15: Real turbulence degraded frame of Armscor building .........................59
Figure 4.16: CGI corrected frame of Armscor building ...........................................60
Figure 4.17: FRTAAS corrected frame of Armscor building...................................60
Figure 4.18: Time-averaged corrected frame of Armscor building ..........................61
Figure 4.19: ICA corrected frame of Armscor building ...........................................61
Figure 4.20: Real turbulence degraded frame of building site sequence at a
distance of 5km ..................................................................................62
Figure 4.21: MSE between consecutive frames of Building site sequence ...............63
Figure 4.22: MSE between consecutive frames of Tower sequence ........................64
Figure 4.23: (a) Real turbulence-degraded frame of a tower at a distance
of 11km and (b) same tower at 11km but with negligible
turbulence ...........................................................................................65
Figure 4.24: (a) Frame from CGI algorithm output sequence and (b) frame
from FRTAAS algorithm ....................................................................66
Figure 4.25: (a) Frame from Time-averaged algorithm output sequence
and (b) frame from ICA algorithm ......................................................67
Figure 4.26: Real turbulence-degraded frame of a shack at a distance of
10km ..................................................................................................68
Figure 4.27: MSE between consecutive frames of Shack sequence .........................68
Figure 4.28: MSE between consecutive frames of Building site sequence
with motion ........................................................................................69
Figure 4.29: Sharpness of frames in Shack sequence using CGI algorithm..............70
Figure 4.30: Sharpness of frames in Shack sequence using CGI algorithm
with kurtosis enhancement ..................................................................71
Figure 4.31: Sharpness of frames in Shack sequence using Time-averaged
algorithm ............................................................................................71
Figure 4.32: Sharpness of frames in Shack sequence using ICA algorithm..............72
Figure 4.33: Sharpness of frames in Shack sequence using FRTAAS
algorithm ............................................................................................72
Figure 4.34: Sharpness of frames in Building site sequence using Time-
averaged algorithm .............................................................................73
Figure 4.35: Sharpness of frames in Building site sequence using ICA
algorithm ............................................................................................74

Figure 4.36: Sharpness of frames in Building site sequence using
FRTAAS algorithm ............................................................................74
Figure 4.37: Sharpness of frames in Building site sequence using CGI
algorithm with Kurtosis enhancement .................................................75
Figure 4.38: Sharpness of frames in Building site sequence using CGI
algorithm ............................................................................................75

CHAPTER 1

PROJECT OVERVIEW

1.1 INTRODUCTION

Atmospheric turbulence imposes limitations on sensors used to record image sequences

over long distances. The resultant video sequences appear blurred and waver in a quasi-

periodic fashion. This poses a significant problem in certain fields such as astronomy and

defence (surveillance and intelligence gathering) where detail in images is essential.

Hardware methods proposed for countering the effects of atmospheric turbulence such as

adaptive optics and DWFS (Deconvolution from Wave-Front Sensing) require complex

devices such as wave-front sensors and can be impractical. In many cases image processing methods have proved more practical, since corrections are made after the video is acquired [1, 3, 10, 13, 18]. Heat scintillation can therefore be removed from any existing video, independent of the camera used for capture.

Numerous image processing methods have been proposed for the compensation of blurring

effects. Fewer algorithms have been proposed for the correction of geometric distortions

induced by atmospheric turbulence. Each method has its own set of advantages and

disadvantages.

In this dissertation the focus is placed on comparing image processing methods proposed

to restore video sequences degraded by heat scintillation. The comparative analysis is followed by the selection of the method best suited for real-time implementation. An



algorithm, developed to simulate the effects of atmospheric turbulence, is presented and a

GUI (Graphical User Interface) is developed.

1.2 PROBLEM STATEMENT

The purpose of this project is to perform a comparative analysis of algorithms developed to

restore sequences degraded by the effects of atmospheric turbulence with the focus placed

on the removal of heat scintillation.

1.3 METHODOLOGY

Results in the dissertation were obtained using datasets divided into two categories: Real

datasets and simulated datasets. The real datasets consist of sequences obtained in the

presence of real atmospheric turbulence. These datasets were obtained from the CSIR

(Council for Scientific and Industrial Research) using their Cyclone camera and vary in

range from 5km-15km. The simulated sequences were generated using ground truth

images/sequences. Both datasets can be further divided into sequences with real-motion

and sequences without real motion.

1.4 ASSUMPTIONS

The motion due to atmospheric turbulence is assumed to be small compared to the motion

of objects in the scene or motion caused by the camera such as panning and zooming. The

effect of atmospheric turbulence is taken to be either random at each pixel but highly

correlated or spatially local and temporally quasi-periodic depending on the algorithm.



1.5 DELIMITATIONS

This work is limited to the implementation and comparison of different algorithms for the

correction of atmospheric turbulence in images:

- The algorithms selected will only focus on post-processing of the sequences.

- Real motion as well as atmospheric motion will be considered in the comparative analysis. This will, however, be restricted to motion in which the camera is stationary but the objects within the scene need not be. Panning and zooming will therefore not be considered.

- The algorithms chosen will focus on recent work in which the main consideration is the correction of geometric distortions, as well as the ability to minimise and correct for the blurring induced by turbulence.

1.6 CONTRIBUTIONS

- An algorithm is developed to simulate the effects of atmospheric turbulence, and a GUI is created to easily visualise atmospheric turbulence and create datasets.

- The CGI (Control Grid Interpolation) algorithm was modified to use the median, instead of the mean, of the trajectory of a pixel, allowing a fixed period to be maintained while showing improved results in the presence of real motion in a scene.



1.7 DOCUMENT PLAN

The dissertation consists of five chapters organised as follows:

1.7.1 Chapter One

This chapter is an introduction to the project and contains the problem statement,

methodology, assumptions and delimitations.

1.7.2 Chapter Two

Chapter two deals with the literature review on both post-processing techniques as well as

hardware methods for correcting atmospheric turbulence. Most of the literature discussed

applies to the correction of the blurring induced by turbulence. The algorithms chosen for

comparison are presented in detail as well as their advantages and disadvantages.

1.7.3 Chapter Three

The algorithm proposed for the simulation of atmospheric turbulence used in this

dissertation is presented. The quasi-periodicity of atmospheric turbulence is also discussed.

1.7.4 Chapter Four

Chapter four focuses on the datasets, the results and a discussion of the results. The real-

time simulation, in OpenCV, of two of the selected algorithms is discussed.

1.7.5 Chapter Five

The conclusion is presented and future work is outlined.



CHAPTER 2

ALGORITHMS

The following chapter presents previous work in reducing the effects of atmospheric

turbulence as well as the algorithms selected for comparison. The previous works

discussed focus mainly on dealing with the blurring caused by atmospheric turbulence, as

the bulk of the research completed has been in this area. The algorithms chosen for

comparison are presented in detail. These algorithms were selected based on post

processing of the turbulence as well as their ability to compensate and/or minimise both the

distortion effects and the blurring caused by turbulence.

2.1 PREVIOUS WORK IN ATMOSPHERIC TURBULENCE

Sections 2.1.1 to 2.1.4 outline current methods available for reducing the effects of atmospheric turbulence, including hardware methods of dealing with turbulence such as adaptive optics.

2.1.1 Speckle Imaging

Speckle imaging is the process of reconstructing an object through the use of a series of

short exposure images. This is usually achieved by estimating the Fourier transform, phase

and magnitude, of the object. The phase and magnitude are handled separately. Some of

the methods proposed using speckled imaging are Knox-Thompson [15, 25] and Labeyrie

[15, 26] methods.

Labeyrie made the observation that speckles in short exposure images contain more high

spatial frequency information of an object than long exposure images. Therefore by using

the short exposure images more detail can be extracted from the images. Using a large



number of short exposure images, the energy spectrum of an object is estimated. This is

commonly referred to as speckle interferometry. In estimating the modulus of the Fourier

transform, required to obtain the object energy spectrum, the phase of the object is lost.

The phase spectrum will be required to create an image. Labeyrie's method can be outlined

as follows:

Labeyrie Method (Speckle Interferometry)

1. Obtain N short exposure images of the object as well as of a reference

point.

2. Determine the Fourier transform of the image.

3. Determine the modulus squared.

4. Repeat steps 2 and 3 for all N images.

5. Determine average of modulus squared.

6. Deconvolute to obtain the energy spectrum of the object.
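To illustrate steps 2 to 5, a minimal MATLAB sketch is given below. It assumes the short-exposure images are stored in a cell array named frames (a hypothetical variable); the deconvolution against the reference point (step 6) is not shown.

% Minimal sketch of speckle interferometry, steps 2 to 5.
% 'frames' is assumed to be a cell array of N grayscale
% short-exposure images of identical size.
N = numel(frames);
accum = zeros(size(frames{1}));
for n = 1:N
    F = fft2(double(frames{n}));    % Fourier transform of each frame
    accum = accum + abs(F).^2;      % modulus squared
end
energySpectrum = accum / N;         % average of the modulus squared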

To obtain the phase spectrum the Knox-Thompson, or cross-spectrum, method can be

used. The cross-spectrum is a specialised moment of the image spectra which contain

information about the phase of the object [15]. The cross-spectrum can therefore be used to

recover the phase of the object.

The main problem of speckle imaging is the requirement of a reference point or reference

images. The method is well suited to astronomy where reference images can be obtained.

Another problem is that a large number of short-exposure images is required to achieve

good results.



2.1.2 Frieden's Method [20]

Frieden's method deals with the problem of imaging through turbulence by making use of

blind deconvolution. Blind deconvolution is used when no information about the distortion

of the image is known. It is capable of restoring the image as well as its point spread

function. The estimation of the PSF (Point Spread Function) is of vital importance as

choosing the incorrect one will lead to the restoration being unsuccessful.

Unlike the other methods discussed, Frieden's approach only requires two short-exposure

images instead of a number of frames from a video sequence. No reference point sources

are required as is needed in speckle imaging. It uses the two short exposure frames to

estimate the point spread functions. This is achieved by finding the Fourier transforms of

the two images and dividing one by the other, as shown in (1):

D_n = I_n^(1) / I_n^(2) = (O_n τ_n^(1)) / (O_n τ_n^(2)) = τ_n^(1) / τ_n^(2)    (1)

where I represents the observed image spectra, τ represents the OTFs (Optical Transfer Functions) which have to be estimated and O represents the unknown object spectrum.

From (1) it can be seen that for the division of the images to eliminate the unknown

spectrum of the object, there can be no motion present in the scene. Using the division of

the two unknown OTFs, a set of n linear equations can be generated to give a solution to

the unknown OTFs. Once the point spread functions have been estimated the blind

deconvolution algorithm can be used to correct the atmospheric turbulence in the image.
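As an illustration of the spectral division in (1), a minimal MATLAB sketch follows; g1 and g2 are assumed to be two registered short-exposure images of a static scene, and the subsequent solution of the linear system for the OTFs is not shown.

% Ratio of the two image spectra as in (1). The eps offset guards
% against division by zero at spectral nulls.
G1 = fft2(double(g1));
G2 = fft2(double(g2));
D = G1 ./ (G2 + eps);    % equals the ratio of the two unknown OTFs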

This method makes a number of assumptions, the first being that no additive noise is

present in the images. The presence of 1% additive Gaussian noise in the images degraded

the quality of the PSFs significantly. The second assumption made is that there is no



motion present in the scene. The method does not address the problem of distortion caused

by turbulence and only deals with blurring.

2.1.3 Fusion

Image fusion has also been proposed to enhance images blurred by atmospheric

turbulence. Typically, fusion refers to the extraction and combination of information from several sources, a technique adopted in several domains [23]. However, fusion has also been used to obtain a less degraded

image of a scene by using a single sensor to obtain several images of the scene, making it

suitable for atmospheric turbulence. The degradations could be caused by motion in the

scene, noise, incorrect focus as well as atmospheric turbulence.

Zhao [18] made use of this by using image fusion to compensate for photometric variations

caused by turbulence. The algorithm proposed by Zhao can be outlined as follows [18]:

Zhao's Method (Fusion)

1. The source images are decomposed using a Laplacian pyramid.

2. Weight computation based on a salient pattern is computed at each level.

3. The image is reconstructed from level N to 0 using the weights to combine all

source pyramids at the current level.

The energy of the local Laplacian pattern was used as a salience measure. Zhao's method also provides a means of compensating for the geometric distortion by using a

reference frame which is selected manually. This is further discussed in section 2.2.1 under

the Time-Averaged Algorithm. The method proposed by Zhao requires all motion in the

scene, both real and turbulent, to be registered.



2.1.4 WFS (Wave-Front Sensing)

A wave-front sensor is capable of measuring the phase perturbation in each short exposure

image. This can be used in two ways to correct for atmospheric turbulence.

The first method would be in an adaptive optics system in which the wave-front sensor is

used to control a deformable mirror. In this system the AOI (Adaptive Optics Imaging)

system will require three main components: WFS, Controller and a deformable mirror. The

WFS is used to measure the phase perturbations. These measurements are then used by the

controller to manipulate the deformable mirror. The deformable mirror is then the means in

which the wave front is corrected [15]. A deformable mirror is any unit that is capable of

changing the perturbed wave such as a tip-tilt mirror. These mirrors are segmented and

controlled by actuators to manipulate the wave front. This method of correcting for

turbulence requires complex devices, i.e. deformable mirrors and wave-front sensors, which

are difficult to obtain and extremely expensive.

The second method is a hybrid imaging technique that uses a wave-front sensor as well as

post-processing techniques to compensate for turbulence. The method is known as DWFS

(Deconvolution from Wave-Front Sensors). In this method the measurements obtained

from the WFS are used to estimate an OTF for the current short exposure image. This OTF

is then used to correct the short-exposure image through deconvolution [15]. While this

method still requires a WFS for the phase measurements, deformable mirrors are not required, and it is therefore less complex than an adaptive optics system.



2.2 ALGORITHMS SELECTED FOR COMPARISON

Four groups of algorithms were chosen for comparison. The algorithms chosen are based

on post processing of the turbulence as well as their ability to compensate and/or minimise

both the distortion effects and the blurring caused by turbulence. Sections 2.2.1 to 2.2.4

outline the background of the algorithms as well as their respective advantages and

disadvantages.

2.2.1 TIME-AVERAGED ALGORITHM

The Time-averaged algorithm is based on common techniques and components used in a

number of algorithms for correcting atmospheric turbulence [3, 11, 18]. This technique

divides heat scintillation into two separate steps i.e. distortion and blurring. Each step is

then dealt with individually. The first step uses some form of image registration to bring

the images into alignment and compensate for the shimmering effect. The alignment is

usually done against a reference frame. The second step deals with the blurring induced by

atmospheric turbulence. This is usually compensated for by making use of a blind

deconvolution algorithm or by using image fusion.

Image registration is performed for each frame with respect to a reference frame. This

provides a sequence that is stable and geometrically correct. The reference frame can be

selected by using a frame from the sequence with no geometric distortion and minimal

blurring. This is however impractical since the frame would have to be selected manually

and the probability of finding an image that has both minimal distortion and is

geometrically correct is unlikely. A more practical approach to selecting a reference frame

would be the temporal averaging of a number of frames in the sequence. Since atmospheric



turbulence can be viewed as being quasi-periodic, averaging a number of frames would

provide a reference frame that is geometrically improved, but blurred.
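A minimal MATLAB sketch of this averaging step is given below, assuming the sequence is stored as a rows-by-cols-by-N array named seq (a hypothetical variable).

% Temporal average of N frames as a reference frame. The result is
% geometrically improved but blurred.
reference = mean(double(seq), 3);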

For the purpose of registration, an optical flow algorithm as proposed by Lucas and

Kanade [2] was implemented.

2.2.1.1. Lucas-Kanade Algorithm

The motion of a pixel from one frame to another can be described as

I(x, y, t) = I(x + Δx, y + Δy, t + Δt)    (2)

where I(x, y, t) represents the intensity of a pixel, x and y represent the co-ordinates of a pixel and t represents the frame number. The assumption is made that the intensity of the

object remains constant. Expanding (2) using a Taylor series we get

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt    (3)

where all higher order terms can be discarded. Since there are two unknowns (Δx, Δy) and

only one equation, it is an ill-posed problem. By assuming that the optical flow is locally

constant within a small, n x n patch, additional equations can be obtained. A least squares

method can then be used to solve the over-determined system:

[ I_x1  I_y1 ]              [ I_t1 ]
[ I_x2  I_y2 ]  [ V_x ]     [ I_t2 ]
[  ...   ... ]  [ V_y ]  = −[  ... ]
[ I_xn  I_yn ]              [ I_tn ]    (4)

where ∂I/∂x, ∂I/∂y and ∂I/∂t are denoted as I_x, I_y and I_t respectively.



The Lucas-Kanade algorithm was implemented using a coarse-to-fine iterative strategy

achieved through an image pyramid.

Figure 2.1 shows the time-averaged algorithm applied to a distorted Lenna sequence.


Figure 2.1: (a) Original Image, (b) Simulated image, (c) Flow field,

(d) Time averaged frame and (e) Corrected frame.

The reference frames were generated by averaging the first N frames of the turbulence

degraded sequences. Using time-averaging to generate a reference frame works well when

there is no real motion present in the scene, however if real motion is introduced, the

averaging of the sequence will generate a reference frame that is motion blurred and worse

than the turbulence-degraded frames. Using this reference frame to register the sequence

will further degrade the sequence. This is illustrated in Figure 2.2.




Figure 2.2: (a) Original Image, (b) Simulated Image, (c) Motion Blurred Average and (d)

Incorrectly Registered Image. (PETS 2000 Dataset [45]).

The algorithm performs well when there is no real motion present in the scene i.e. objects

moving, panning and zooming of camera. The video sequence is stabilized, except for a

few discontinuities caused by the optical flow calculations and/or the warping algorithm.

Localized blur will be present due to the averaging of the frames to obtain a reference

frame and this hinders the registration algorithm. When real motion is present in the scene

the reference frame is degraded due to motion blur. Warping any frame towards the

reference frame will cause further distortions and the restored sequence will be degraded to

a greater extent than the turbulence sequence.



2.2.2 FIRST REGISTER THEN AVERAGE AND SUBTRACT

The FRTAAS (First Register Then Average and Subtract) algorithm is an improvement of

the FATR (First Average Then Register) algorithm, proposed by Fraser, Thorpe and

Lambert [3]. The FATR algorithm is very similar to the Time-averaged algorithm

discussed in section 2.2.1. The FRTAAS algorithm aims to address the problem of

localized blur by minimizing the effect of the averaging process used to create the

reference frame.

2.2.2.1 FATR

The FATR algorithm registers each frame in the image sequence against an averaged

reference or prototype frame. The registration technique used in [3] employed a

hierarchically shrinking region based on the cross correlation between two windowed

regions. To obtain improved results the dewarped frames are then averaged once again to

obtain a new reference frame and the sequence is put through the algorithm once again. As

discussed in section 3.1 the blur due to the temporal averaging will still be present. The

FRTAAS algorithm aims to address this problem.

2.2.2.2 FRTAAS

In FRTAAS the averaging approach used to create the reference frame in FATR is avoided

by allowing any one of the frames in a sequence to be the reference frame. However due to

the time varying nature of atmospheric turbulence, some of the frames in the sequence will

not be as severely degraded as others. This would mean that it would be possible to obtain

a reference frame in which the atmospheric induced blur would be minimal. A sharpness

metric is used to select the least blurred frame in the sequence. This frame can also be



selected manually. The distortion of the frame is not considered when selecting a reference

frame. The sharpness metric used to select the sharpest frame is

S_h = ∫∫ g(x, y) ln[g(x, y)] dx dy    (5)

where g(x,y) represents the image and x, y represent pixel co-ordinates.
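Evaluated on a discrete image, (5) reduces to a sum over pixels. A minimal MATLAB sketch, with a small offset added to avoid taking the logarithm of zero:

% Sharpness metric of (5) for a grayscale image g.
g = double(g) + eps;              % avoid log(0) at zero-valued pixels
Sh = sum(g(:) .* log(g(:)));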

Figure 2.3 shows 20 frames from a real turbulence-degraded sequence. Visually it can be

seen that the 18th frame in the sequence is the sharpest as the number plate is more clearly

visible. The graph in Figure 2.4 shows that the sharpness metric has selected the 18th frame

as the sharpest which corresponds to what is observed visually.


Figure 2.3: (a) to (t) Frames from Number plate sequence. (r) selected as sharpest frame.

(CSIR Dataset).

Figure 2.4: Graph showing sharpness levels of sequence in Fig 2.3.



Once the sharp but distorted frame is selected from the video sequence it is used as the

reference frame. All frames in the sequence are then warped to the reference frame. The

shift maps that are used to warp the frames in the sequence to the reference frame are then

used to determine the truth image. In the FATR method the truth image was obtained by

temporal averaging. However by instead averaging the shift maps used to register the

turbulent frames to the warped reference frame, a truth shift map which warps the truth

frame into the reference frame is obtained. The averaging of the shift maps can be used

because, as in the case of temporal averaging to obtain a reference frame, atmospheric

turbulence can be viewed as being quasi-periodic.

The warping using the shift maps, xs and ys, can be described as

r(x, y, t) = g(x + x_s(x, y, t), y + y_s(x, y, t), t)    (6)

representing a backward mapping where r(x,y,t) is the reference frame and g(x,y,t) is a

distorted frame from the sequence. Once the shift maps, xs and ys, have been obtained for

each frame in the sequence, the centroids, Cx and Cy, which are used to calculate the pixel

locations of the truth frame, can be determined by averaging:

C_x(x, y) = (1/N) Σ_{t=1}^{N} x_s(x, y, t)

C_y(x, y) = (1/N) Σ_{t=1}^{N} y_s(x, y, t).    (7)

It is important to note that since the warping represents a backward mapping, the shift

maps obtained do not tell us where each pixel goes from r(x,y,t) to g(x,y,t) but rather

where each pixel in g(x,y,t) comes from in r(x,y,t) . Therefore the inverse of Cx and Cy are

then calculated and used to determine the corrected shift map of each warped frame in the

sequence as



C_x^(−1)(x, y) = −C_x(x − C_x(x, y), y − C_y(x, y))
C_y^(−1)(x, y) = −C_y(x − C_x(x, y), y − C_y(x, y))

X_s(x, y, t) = C_x^(−1)(x, y) + x_s(x + C_x^(−1)(x, y), y + C_y^(−1)(x, y), t)
Y_s(x, y, t) = C_y^(−1)(x, y) + y_s(x + C_x^(−1)(x, y), y + C_y^(−1)(x, y), t)    (8)

where Xs and Ys are the corrected shift maps used to correct the frames in the original

warped sequence [13].

Using Xs and Ys one is therefore able to obtain the geometrically improved sequence using

f(x, y, t) = g(x + X_s(x, y, t), y + Y_s(x, y, t), t).    (9)
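A minimal MATLAB sketch of the centroid computation in (7) and the backward warp of (9) is given below; xs and ys are assumed to be rows-by-cols-by-N stacks of the per-frame shift maps, and the inversion step of (8) is omitted.

% Centroids of the shift maps, as in (7).
Cx = mean(xs, 3);
Cy = mean(ys, 3);

% Backward warp of a single frame g with a shift-map pair (Xs, Ys),
% as in (9); interp2 performs the bilinear resampling.
[X, Y] = meshgrid(1:size(g, 2), 1:size(g, 1));
f = interp2(double(g), X + Xs, Y + Ys, 'linear', 0);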

2.2.2.3 Elastic image registration

The registration was done by using a differential elastic image registration algorithm

proposed by Periaswamy and Farid [4].

The registration algorithm models the mapping between images as a locally affine but

globally smooth warp that explicitly accounts for variations in image intensities [4]. The

general representation of an affine transformation can be described as

                        [ a11  a12  0 ]
[u  v  1] = [x  y  1] · [ a21  a22  0 ].    (10)
                        [ a31  a32  1 ]

From this

f(x, y, t) = f(u, v, t − 1)
u = a11 x + a21 y + a31
v = a12 x + a22 y + a32    (11)



where a31 and a32 represent a translation and the remaining coefficients represent the affine

parameters. The aij parameters are estimated locally within a small n x n patch, Q, by

minimizing

Σ_{x,y∈Q} [f(x, y, t) − f(a11 x + a21 y + a31, a12 x + a22 y + a32, t − 1)]².    (12)

This can be done by expanding (12) using a first-order Taylor series,

Σ_{x,y∈Q} [f(x, y, t) − f(x, y, t) − (a11 x + a21 y + a31 − x)(∂f/∂x) − (a12 x + a22 y + a32 − y)(∂f/∂y) − (t − 1 − t)(∂f/∂t)]²

which simplifies to

Σ_{x,y∈Q} [∂f/∂t − (a11 x + a21 y + a31 − x)(∂f/∂x) − (a12 x + a22 y + a32 − y)(∂f/∂y)]².    (13)

Equation (13) can be minimized by differentiating with respect to the unknowns and

setting the result equal to zero. This will result in

a = [ Σ_{x,y∈Q} c cᵀ ]⁻¹ [ Σ_{x,y∈Q} c k ]    (14)

where

c = [ x(∂f/∂x)   y(∂f/∂x)   ∂f/∂x   x(∂f/∂y)   y(∂f/∂y)   ∂f/∂y ]ᵀ
k = ∂f/∂t + x(∂f/∂x) + y(∂f/∂y)    (15)
a = [ a11  a21  a31  a12  a22  a32 ]ᵀ
This will provide us with all the parameters, aij, required to define the affine warp [5].
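A minimal MATLAB sketch of the accumulation in (14) and (15) over a single patch Q follows; fx, fy and ft denote precomputed derivative images and rows and cols the patch indices, all assumed variables. The brightness and contrast terms of (16) and the smoothness constraint are omitted.

% Accumulate the 6 x 6 normal equations of (14) over a patch Q.
% Images are indexed (y, x), i.e. (row, column), in MATLAB.
M = zeros(6);
r = zeros(6, 1);
for x = cols
    for y = rows
        c = [x*fx(y,x); y*fx(y,x); fx(y,x); ...
             x*fy(y,x); y*fy(y,x); fy(y,x)];    % c of (15)
        k = ft(y,x) + x*fx(y,x) + y*fy(y,x);    % k of (15)
        M = M + c*c';
        r = r + c*k;
    end
end
a = M \ r;    % [a11 a21 a31 a12 a22 a32]'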

The registration algorithm also accounts for intensity changes between images in which the

image brightness constancy assumption fails. By modifying (11) to include two additional



parameters for brightness and contrast, the registration algorithm is able to account for

variations in intensity.

a_c f(x, y, t) + a_b = f(a11 x + a21 y + a31, a12 x + a22 y + a32, t − 1)    (16)

where ac and ab represent contrast and brightness [5]. This equation is then solved in a

similar fashion as (11) by minimizing the error function.

The final aspect of the algorithm deals with the addition of a smoothness constraint which

replaces the assumption of image brightness constancy. The assumption is made that the

parameters required vary smoothly across space [5]. This allows for a larger spatial

neighbourhood to be selected since the image brightness constancy assumption is no longer

used.

Figure 2.5 shows a selected reference frame (target) and the warping of a frame from a

simulated sequence (source) to the target frame. It also shows the estimated warp and the

shift maps.




Figure 2.5: (a) Distorted frame 1, (b) Distorted frame 2, (c) & (d) Shift maps, (e) estimated

warp between frames and (f) Frame 1 warped to frame 2.

Figure 2.6 shows 10 frames from a simulated turbulent Lenna sequence. Figure 2.7 shows

the restored images, corresponding to the frames in Figure 2.6, using the FRTAAS

algorithm.

Figure 2.6: 10 frames from a simulated Lenna sequence.



Figure 2.7: 10 corrected frames corresponding to Fig 2.6 using FRTAAS

The FRTAAS algorithm performed well with no motion present in the scene. The restored

sequences were an improvement over the Time-Averaged Algorithm. This was achieved by

avoiding temporal averaging of the turbulent frames. The algorithm also has problems in

dealing with real motion present in the scene.

2.2.3 INDEPENDENT COMPONENT ANALYSIS

ICA (Independent Component Analysis) belongs to a class of blind source separation

methods for separating data into their underlying informational components [8]. In the

algorithm proposed by Kopriva et al [10], the underlying data takes the form of images. By

treating the frames of the turbulent sequence as sensors and taking the turbulence spatial

patterns as sources along with the original frame, ICA can be applied to the sequence to

extract the sources from the turbulent frames.

The turbulent image frames can be represented as

I_i = A I_0 + v    (17)



where I_i is an N x T matrix of the turbulent frames, T = x·y, where x and y are the

dimensions of the frames and I0 represents the original image. Therefore each image is

represented as a single 1 x T vector of intensity values. A is the unknown mixing matrix

which distorts the original frames to form the turbulent frames and v represents additive

noise.

ICA aims to find a transformation matrix W that will separate the turbulent frames or

mixtures into source signals, i.e.


Î_0 = W I_i.    (18)

For ICA to work it is assumed that the source signals are mutually statistically independent

since ICA will estimate W so that Î_0 will represent estimated source signals that are as statistically independent as possible. Î_0 will then contain the scaled original signal and its

spatial turbulent patterns. The scaling is inherent in ICA algorithms.

In the work of Kopriva et al [10], a fourth-order cumulant-based ICA algorithm, JADE

(Joint Approximate Diagonalization of the Eigen-Matrices) as proposed by Cardoso [6],

was used to estimate the un-mixing matrix W. In JADE statistical independence is achieved

through the minimization of the squares of the fourth order cross-cumulants among the

components of Î_0, i.e.



Ŵ = arg min_W Σ_{j,k,l,m} off( Wᵀ C(I_ij, I_ik, I_il, I_im) W )    (19)

where off(A), for A = (a_ij), 1 ≤ i, j ≤ N, is defined as

off(A) = Σ_{1 ≤ i ≠ j ≤ N} a_ij²    (20)

and C(I_ij, I_ik, I_il, I_im) are sample estimates of the fourth-order cross-cumulants [10].



If the mixing matrix, A, is singular, the un-mixing matrix, W, which is the inverse of the

mixing matrix cannot be calculated. To prevent this, the algorithm measures the mutual

information between turbulent frames using the Kullback-Leibler divergence

D(p, q) = L(p, q) + L(q, p)    (21)

where L(p, q) = Σ_{k=1}^{T} p_k log₂(p_k/q_k), L(q, p) = Σ_{k=1}^{T} q_k log₂(q_k/p_k), and p_k and q_k are pixel intensities at the spatial coordinate k(x, y). If two images are the same, the distance D(p, q) will be zero.
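A direct MATLAB sketch of (21) is given below, treating the normalized pixel intensities of two frames p and q (assumed grayscale arrays of equal size) as the p_k and q_k values.

% Symmetric Kullback-Leibler divergence of (21) between two frames.
% Intensities are offset by eps and normalized to sum to one.
p = double(p(:)) + eps;  p = p / sum(p);
q = double(q(:)) + eps;  q = q / sum(q);
D = sum(p .* log2(p ./ q)) + sum(q .* log2(q ./ p));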

Figure 2.8 shows 3 turbulent frames from a simulated Lenna sequence applied to the ICA

algorithm. The extracted source image and turbulent spatial patterns are also shown.




Figure 2.8: (a), (b) & (c) 3 turbulent frames, (d) Extracted Source Image, (e) & (f)

Turbulent spatial patterns, (g) Time averaged frame and (h) Extracted source image.

The algorithm proposed was modified to function with video sequences by using a frame

window size of three. At each iteration, the frame window was incremented by one to

include a new frame, thereby allowing the preservation of the sequence length as well as

ensuring that the mutual information, in most cases, would not be the same. This is

illustrated in Figure 2.9 and Figure 2.10.




Figure 2.9: ICA applied to video sequences.

Figure 2.10: ICA applied to Building site sequence. (CSIR Dataset).

An alternate method used to apply the algorithm to video sequences was to use the

extracted source image from the current window set and apply this frame to the next

window set. This would allow us to always have a frame that is close to the source image

in the window set. This method however would cause the Kullback-Leibler divergence to

become zero frequently.

The ICA algorithm is advantageous because only a few frames are required to extract the

source frame and it performs better than the simple temporal averaging method since the

loss of detail experienced with averaging is avoided. The disadvantages of the algorithm are the assumptions that all motion has been compensated for and that the turbulence is weak.



2.2.4 RESTORATION OF ATMOSPHERIC TURBULENCE DEGRADED VIDEO

USING KURTOSIS MINIMIZATION AND MOTION COMPENSATION

In this algorithm, both effects of turbulence are addressed i.e. geometric distortion and

blurring. To compensate for the blurring induced by atmospheric turbulence, the kurtosis

of an image is used and for the geometric distortion, CGI (Control Grid Interpolation) is

used.

2.2.4.1 Control Grid Interpolation

Control grid interpolation is a method proposed by Sullivan [16] for motion compensation.

It is a technique used for performing spatial image transformations. An image is initially

segmented into small continuous square regions, the corners of which form control points.

These control points are used as anchors in which the intermediate vectors are calculated

using bilinear interpolation. CGI allows for the representation of complex motion making

it suitable for images distorted by atmospheric turbulence.

The pixel relationship between two images within a square region is described as

I_1[i, j] = I_0[i + d_1[i, j], j + d_2[i, j]]
d_1[i, j] = α_1 + α_2 i + α_3 j + α_4 ij    (22)
d_2[i, j] = β_1 + β_2 i + β_3 j + β_4 ij

where i, j represent pixel co-ordinates and d_1[i, j] and d_2[i, j] represent the horizontal and vertical displacements of the pixels between the two images I_1 and I_0. d_1[i, j] and d_2[i, j] are used to compute the vectors between the four control points enclosing each region R using bilinear interpolation. Given four points (i_0, j_0), (i_1, j_1), (i_2, j_2), (i_3, j_3), any intermediate coordinate X[i, j] may be computed using the expression

X[i, j] = θ_1 + θ_2 i + θ_3 j + θ_4 ij.    (23)



Therefore, once the α and β parameters have been determined for each region R, the intermediate motion vectors can be calculated, providing a dense motion field of the turbulence.

The α and β parameters can be estimated within each region R by minimizing

Σ_{[i,j]∈R} ( I_0[i, j] − I_1[i + d_1[i, j], j + d_2[i, j]] )².    (24)

Using the Taylor series, (24) approximates to

Σ_{[i,j]∈R} ( I_0[i, j] − I_1[i, j] − (∂I_1[i, j]/∂i) d_1[i, j] − (∂I_1[i, j]/∂j) d_2[i, j] )²    (25)

with higher order terms discarded. Using the least squares method we can obtain the

bilinear parameters. Figure 2.11 shows an example of a motion field generated using CGI.


Figure 2.11: (a) Simulated Lenna frame 1, (b) CGI motion field between (a) and (c) and (c)

Simulated Lenna frame 2.
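To illustrate (22), a minimal MATLAB sketch that evaluates the dense displacements over one square region is given below; the bilinear parameter vectors alpha and beta (4-by-1) and the region size blockSize are assumed to be given.

% Dense displacement field of (22) over one region from its
% bilinear parameters.
[i, j] = meshgrid(0:blockSize-1, 0:blockSize-1);
d1 = alpha(1) + alpha(2)*i + alpha(3)*j + alpha(4)*i.*j;   % horizontal
d2 = beta(1)  + beta(2)*i  + beta(3)*j  + beta(4)*i.*j;    % vertical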

2.2.4.2 Compensation

To compensate for the distortion the trajectories of each pixel are calculated using the

motion vectors from the CGI algorithm. Two methods were proposed for calculating the

trajectories of the pixels. The first method calculates the trajectory between the current



frame and the previous frame's trajectory estimate. This method is computationally efficient

since at each new frame in the sequence only the trajectory between the new frame and the

previous frame have to be calculated. This however also makes the method susceptible to

errors in the presence of noise since an error in the previous trajectory estimate will be

compounded in all succeeding frames. The trajectory of the first method can be calculated

as

T(i, j, t_0) = (i, j)
T(i, j, t_0 − 1) = T(i, j, t_0) + v_{t0, t0−1}(T(i, j, t_0))
⋮
T(i, j, t_0 − n) = T(i, j, t_0 − n + 1) + v_{t0−n+1, t0−n}(T(i, j, t_0 − n + 1))    (26)

where T represent the trajectory, v represent the motion vectors between two frames and t0

represents the source or initial frame.

The second method estimates the trajectory by using the motion vectors between a source

frame, which remains fixed, and a target frame that is changed. This significantly increases

the computational load since the motion vectors will have to be recalculated each time the

algorithm increments to a new frame. Since the previous trajectory estimates are not used,

the problem of errors in the presence of noise is avoided. The method's trajectory can be

calculated as

T(i, j, t_0 + n) = (i, j) + v_{t0, t0+n}(i, j)
⋮
T(i, j, t_0 + 1) = (i, j) + v_{t0, t0+1}(i, j)
T(i, j, t_0) = (i, j)    (27)
T(i, j, t_0 − 1) = (i, j) + v_{t0, t0−1}(i, j)
⋮
T(i, j, t_0 − n) = (i, j) + v_{t0, t0−n}(i, j).

This is illustrated in Figure 2.12.




Figure 2.12: Motion fields used in trajectory estimation with a time window of 5
frames. (CSIR Dataset).

Using (27) we can compensate for the geometric distortion at frame t0 by calculating the

centroid of the trajectory as

T̄(i, j) = (1/(n + 1)) Σ_{t0−n ≤ k ≤ t0} T(i, j, k).    (28)

Since the motion due to atmospheric turbulence is quasi-periodic, the net displacement

over a period will be zero. Therefore by averaging the motion fields at each current frame,

a motion field can be generated which will allow us to dewarp the current frame. Since

motion due to atmospheric turbulence is small compared to real motion, using this method

will preserve the real motion in the scene.
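A minimal Matlab sketch of the second trajectory method (27) together with the centroid dewarp of (28) follows. The function flowTo(t0,k), returning the h-by-w motion field (u,v) from frame t0 to frame k (zero when k = t0), is a hypothetical placeholder for the optical-flow stage; mdewarpfinal is the dewarping routine used in Appendix A.

% Centroid of the trajectory over a symmetric window of 2n+1 frames, (28).
n = 2;                                   % half-width of the time window
u_sum = zeros(h,w);  v_sum = zeros(h,w);
for k = t0-n : t0+n
    [u,v] = flowTo(t0,k);                % T(i,j,k) = (i,j) + v_{t0,k}(i,j), (27)
    u_sum = u_sum + u;  v_sum = v_sum + v;
end
u_bar = u_sum/(2*n+1);                   % average displacement per pixel
v_bar = v_sum/(2*n+1);
dewarped = mdewarpfinal(frame_t0, -u_bar, -v_bar);   % undo the turbulent wander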

Figure 2.13 shows 10 frames from a simulated turbulent Lenna sequence and Figure 2.14

shows the restored images, corresponding to the frames in Figure 2.13, using the CGI

algorithm.



Figure 2.13: 10 frames from a simulated Lenna sequence.

Figure 2.14: 10 corrected frames corresponding to Figure 2.13 using the CGI algorithm.

The length of the trajectory smoothing filter can be fixed; however, by adjusting the length of the smoothing filter, improved results can be obtained. The algorithm proposes a method based on the characteristic that turbulent motion has zero-mean quasi-periodicity. Using this property the length of the smoothing filter is adjusted. This allows real motion and turbulent motion to be separated more effectively: when no real motion is present in the scene the window length can be increased for improved results, and if real motion is present the window length is decreased, since the real motion will affect the trajectory estimation.



The adaptive period enhancement performs significantly better than fixed-period enhancement when the camera is stationary and objects in the scene move. In cases of zooming and panning, the difference is not significant. The computational complexity of adaptive period enhancement is, however, significantly increased, making it less suitable for real-time implementation. Therefore fixed-period enhancement was used in our implementation.

2.2.4.3 Median using fixed period enhancement

For the purpose of real-time implementation a fixed-period enhancement method was used in this dissertation. The algorithm was, however, modified in the way the centroid of the trajectory is estimated. It was observed that by using the median of the trajectory instead of the average, better results could be obtained over fixed-period enhancement without increasing the computational complexity significantly. Since real motion is larger than turbulent motion, using the median minimises the effect of the large motion vectors due to real motion, as the sketch below illustrates.
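A sketch of this modification, assuming U and V are h-by-w-by-(2n+1) stacks of the displacement fields v_{t0,k} of (27):

% Pixelwise median of the trajectory instead of the mean: the large
% vectors caused by real motion fall outside the median and therefore do
% not bias the centroid estimate.
u_med = median(U, 3);
v_med = median(V, 3);
dewarped = mdewarpfinal(frame_t0, -u_med, -v_med);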

2.2.4.4 Kurtosis Minimization

To compensate for the blurring induced by atmospheric turbulence, the kurtosis of an

image is used. The kurtosis of a random variable is defined as the normalized fourth central

moment, i.e.

$$ k = \frac{E\left((x-\mu)^4\right)}{\sigma^4} \qquad (29) $$

where μ and σ represent the mean and standard deviation respectively. The kurtosis

measures the peakedness of distributions. A distribution with a kurtosis higher than 3 is

leptokurtic and with a kurtosis less than 3 is platykurtic. The Gaussian distribution has a

kurtosis of 3 and is called mesokurtic. By using the kurtosis of an image it was shown that

in general images which are blurred or smoothed have a higher kurtosis than the original



images. This is shown in Figure 2.15 and Figure 2.16. In certain cases the kurtosis of an image will behave in the opposite manner, i.e. the kurtosis will decrease as the image is blurred.
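A minimal Matlab sketch of (29), treating all pixel intensities of an image as samples of a random variable:

function k = imkurtosis(img)
% Normalized fourth central moment of the pixel intensities, (29).
x  = double(img(:));
mu = mean(x);
sd = std(x, 1);                  % population standard deviation
k  = mean((x - mu).^4) / sd^4;   % equals 3 for a Gaussian distribution
end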

Figure 2.15: Lenna sequence where turbulence blur is increased linearly.

Figure 2.16: Kurtosis measurements of sequence in Fig 2.15.

Using this property, the algorithm aims to optimize a parameter in a restoration filter. By

finding the parameter which corresponds to the lowest kurtosis the image can be restored.

This was shown to work for a number of different blurs such as Gaussian, out-of-focus and

linear motion blurs.



For the case of restoring the atmospheric turbulence-blurred images, the turbulence OTF (32) was used. By estimating the parameter λ within a search space, deconvolution can be

used to restore the estimated original image. The deconvolution can be achieved by any

non-blind restoration algorithm, but for our implementation, similar to [12], Wiener

filtering was used. The Wiener filter is defined as

$$ G(u,v) = \frac{H^{*}(u,v)\, X(u,v)}{\left| H(u,v) \right|^{2} + nsr} \qquad (30) $$

where nsr is the noise-to-signal ratio, H represents the degradation function and X

represents the blurred image.

Figure 2.17 shows an example of a simulated turbulence sequence with a λ of 0.001. The graph plots the values of the normalized kurtosis versus the λ search space. As can be seen, a value of λ = 0.001 has been selected as the minimum, which corresponds to the value used to blur the image.
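The search can be sketched in Matlab as follows, reusing the imkurtosis function given earlier. The centred frequency grid (which assumes even image dimensions) and the value of nsr are assumptions of this sketch, not the dissertation's exact implementation.

% Kurtosis-minimization restoration: try each candidate lambda, Wiener
% filter with the corresponding OTF of (32) via (30), keep the lowest kurtosis.
lambdas = linspace(0.00025, 0.0025, 20);     % search space for lambda
nsr = 0.01;                                  % assumed noise-to-signal ratio
[h,w] = size(X);                             % X: blurred image (double)
[u,v] = meshgrid(-w/2:w/2-1, -h/2:h/2-1);    % centred frequency grid
Xf = fftshift(fft2(X));
kmin = inf;
for lam = lambdas
    H = exp(-lam*(u.^2 + v.^2).^(5/6));      % turbulence OTF, (32)
    G = conj(H).*Xf ./ (abs(H).^2 + nsr);    % Wiener filter, (30)
    rest = real(ifft2(ifftshift(G)));
    k = imkurtosis(rest);                    % kurtosis of the restoration, (29)
    if k < kmin, kmin = k; best = rest; best_lam = lam; end
end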

Figure 2.17: (a) Original Lenna, (b) blurred Lenna (λ=0.001) and (c) restored Lenna (λ=0.001).



Figure 2.18: Graph showing normalized kurtosis of sequence in Figure 2.17, λ=0.001.

Figure 2.19 shows an example of a real turbulence sequence and Figure 2.20 shows the

image, first corrected for distortion and then enhanced using the kurtosis method.

Figure 2.19: Real turbulence-degraded sequence. (CSIR Dataset).



Figure 2.20: Restored sequence using CGI and kurtosis algorithm. (CSIR Dataset).

The significant advantage of this algorithm is that it can restore turbulence-degraded sequences while preserving real motion. It also addresses both the geometric distortions and the blurring. Although the algorithm is computationally expensive, computational time can be reduced by altering certain parameters, such as fixing the number of frames used in the trajectory estimation.

2.3 CONCLUSION

Popular methods for correcting the effects of atmospheric turbulence were discussed. Most

of the methods currently researched deal with the blurring caused by atmospheric

turbulence. Most of the algorithms selected for comparison have been shown to degrade

when real motion is present in the scene. The algorithm proposed by Li [1] has shown

good results in dealing with situations when both turbulent and real motion are present in the



scene. By using the median of the temporal window it was shown that the window length

could be kept constant and still allow for real motion and turbulent motion to be separated

with good results. Based on the results of a detailed analysis conducted in Chapter 4, real-

time capabilities will be assessed.



CHAPTER 3

ATMOSPHERIC TURBULENCE

Atmospheric turbulence is caused by the index of refraction of air fluctuating with

changes in temperature. This causes objects in sequences to appear blurred and waver

slowly in a quasi-periodic fashion. This chapter presents the methods used to

simulate atmospheric turbulence. The GUI developed to simplify the visualisation of

atmospheric turbulence and the creation of datasets is also presented.

3.1 SIMULATING ATMOSPHERIC TURBULENCE

Since atmospheric turbulence-degraded images are not easily available, being able to simulate atmospheric effects would be advantageous. This also provides a set of ground-truth sequences, allowing the original sequences to be compared with the sequences recovered by the algorithms.

A turbulence-degraded sequence g can be modelled as

$$ g(i,j,t) = D[\, x(i,j,t) * h(i,j,t),\; t\,] + \eta(i,j,t) \qquad (31) $$

where * denotes two-dimensional convolution, i and j denote pixel co-ordinates at frame t, η denotes time-varying additive noise, D denotes the turbulence-induced time-varying geometric distortion, h is the dispersive distortion component of the atmospheric turbulence and x is the original video [1]. Based on (31) the effects of turbulence can be divided into two categories: blurring and geometric distortion.



3.1.1 Blurring

To simulate the effects of blurring, the OTF (optical transfer function) of atmospheric

turbulence as derived by Hufnagel and Stanley [1] is used. The OTF can be modelled as

$$ H(u,v) = e^{-\lambda (u^{2} + v^{2})^{5/6}} \qquad (32) $$

where u, v represent the co-ordinates in the spatial frequency domain and λ controls the severity of the blur [1]. Since turbulence blurring is time-varying, the value of λ has to be varied from frame to frame, within a limit, to correctly simulate the effect of atmospheric blurring. Typical values of λ range from 0.00025 to 0.0025, simulating low to severe turbulence respectively. Figure 3.1 shows the Lenna image blurred from λ=0.00025 to λ=0.0025.
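A minimal sketch of this per-frame blurring, with λ drawn randomly within 0.5 to 1.5 times a nominal value as done for the simulated datasets of Chapter 4 (the centred frequency grid assumes even image dimensions):

% Blur one frame x (grayscale, double) with the turbulence OTF of (32).
[h,w] = size(x);
[u,v] = meshgrid(-w/2:w/2-1, -h/2:h/2-1);      % centred frequency grid
lambda = 0.001 * (0.5 + rand);                 % nominal lambda, varied per frame
H = exp(-lambda*(u.^2 + v.^2).^(5/6));         % OTF of (32)
y = real(ifft2(ifftshift(H .* fftshift(fft2(x)))));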

Figure 3.1: (a) Original Lenna image, (b) image blurred with λ=0.00025, (c) image blurred with λ=0.001 and (d) image blurred with λ=0.0025.

3.1.2 Geometric Distortion

Two methods were used to simulate the geometric distortions induced by atmospheric

turbulence. The first method uses a 2-pass mesh warping algorithm [14] to randomly map

pixels from the source image to a destination image within a specified area. This creates a

sequence in which the scene appears to waver in a quasi-periodic fashion. This method

allows for full control of the level of distortion present in the sequences. It also allows for

the simulation of a large number of sequences each with its own distortion levels.



The 2-pass mesh warping algorithm accepts a source image and two 2-D arrays of co-

ordinates. The first array contains the co-ordinates of control points in the source image.

Since we are not interested in any particular points of interest in the case of atmospheric

turbulence, the first array is generated directly from the source image as a rectilinear grid.

The second array is used to specify the destination co-ordinates of the control points. This

array is generated using a random shift map that specifies pixel shift values at the control

points. Both the arrays are constrained to be topologically equivalent to prevent folding or

discontinuities in the destination image. This is achieved by limiting the shift values to ensure that they do not wander so far as to cause self-intersection, as the sketch below illustrates.
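A sketch of the shift-map construction follows; the grid spacing and maximum shift are assumed values, and clamping the shifts to less than half the spacing keeps the destination mesh topologically equivalent to the source grid:

% Source control grid plus clamped random shifts -> destination grid.
spacing = 16; max_shift = 5;                       % assumed values
[gx, gy] = meshgrid(1:spacing:w, 1:spacing:h);     % rectilinear source grid
clamp = @(s) max(min(s, spacing/2 - 1), -(spacing/2 - 1));
dx = clamp(max_shift*(2*rand(size(gx)) - 1));      % random shift map
dy = clamp(max_shift*(2*rand(size(gy)) - 1));
dst_x = gx + dx;  dst_y = gy + dy;                 % destination control points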

Since the algorithm is separable, the first pass warps the source image into an intermediate image in the horizontal direction. The second pass then completes the warp by warping the intermediate image in the vertical direction. Figure 3.2 shows the 1st

pass, 2nd pass and final output of the 2-pass mesh warping algorithm applied to the

checkerboard image.



Figure 3.2: (a) Checkerboard image, (b) checkerboard image warped in x-direction, (c) checkerboard image warped in y-direction and (d) final warp.

The second method extracts the motion fields directly from real turbulence-degraded video clips and then applies them to the frames of a turbulence-free video clip [12]. While this provides only a limited number of distortion levels, it is a more realistic approach to the distortion effects of atmospheric turbulence. The sequences used for the extraction of the turbulent motion fields were provided by the CSIR and were extracted using

the Lucas-Kanade optical flow algorithm. Figure 3.3 shows an example of the process.



Figure 3.3: (a) Real turbulence-degraded image, (b) flow field obtained from (a), and (c) simulated geometric distortion using the flow field. (Image (a) courtesy of the CSIR.)

The final result of the blurring and the distortion applied to the Lenna sequence can be seen

in Figure 3.4.

Figure 3.4: (a), (b) & (c) 3 frames from a simulated Lenna sequence with blurring and geometric distortion.



The complete algorithm for simulating atmospheric turbulence can be summarised as

follows:

Simulation Algorithm

1. Use the 2-pass mesh warping algorithm to simulate the distortion effects.

2. Transform the images in the distorted sequence into the frequency domain.

3. Multiply each frame Xt, where t represents the frame number, with the OTF described by (32), varying λ between frames.

3.2 QUASI-PERIODICITY OF ATMOSPHERIC TURBULENCE

One of the key assumptions made in most of the algorithms discussed is that atmospheric

turbulence is quasi-periodic in nature. This means that the net displacement over a period

of time is approximately zero. To illustrate this, the motion of pixels from real turbulence-degraded sequences was plotted.



Figure 3.5: Motion of a pixel in a real turbulence sequence.

As can be seen from Figure 3.5, the pixel motion remains within a specified radius, showing the quasi-periodic nature of turbulence. The '+' shows the location of the pixel in

the initial frame. The * shows the average of the pixel co-ordinates and would correspond

to the estimated true location of the pixel in the initial frame. Using a simulated turbulence sequence, the average co-ordinate of the pixel motion can be calculated and compared to the original location of the pixel, since the ground-truth image is available. This can be seen in Figure 3.6.
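Such a plot can be produced for a simulated sequence seq (h-by-w-by-N) as sketched below, where flow(f1,f2) is a hypothetical dense optical-flow routine returning horizontal (u) and vertical (v) displacement fields:

% Track one pixel (i0,j0) through the sequence and compare the mean of
% its wandering positions with the known ground-truth location.
pos = zeros(N,2);  pos(1,:) = [i0 j0];
for t = 2:N
    [u,v] = flow(seq(:,:,1), seq(:,:,t));   % flow from frame 1 to frame t
    pos(t,:) = [i0 + v(i0,j0), j0 + u(i0,j0)];
end
plot(pos(:,2), pos(:,1), '.'); hold on;
plot(mean(pos(:,2)), mean(pos(:,1)), '*');  % estimated true location
plot(j0, i0, '+');                          % location in the initial frame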



Figure 3.6: Pixel wander for the simulated Lenna sequence.

3.3 GRAPHICAL USER INTERFACE

A GUI was developed in Matlab to manage the simulation of atmospheric turbulence and

simplify the process of creating datasets. The GUI allows the user to select an input image for

processing. The distortion and blurring levels can be set and the processed sequence

viewed. Once a desired set of levels is selected, a sequence is processed according to the

number of frames required. The sequence can then be viewed in the GUI and saved. Figure

3.7 shows an example of the GUI.



Figure 3.7: GUI for simulating atmospheric turbulence.

3.4 CONCLUSION

The methods used in this dissertation for simulating the effects of atmospheric turbulence

were discussed and the quasi-periodic nature of atmospheric turbulence was shown. A GUI

developed for simulating turbulence was presented.



CHAPTER 4

DATASETS, RESULTS AND COMPARATIVE ANALYSIS

Results in this dissertation were obtained using datasets divided into two categories: real datasets and simulated datasets. The real datasets consist of sequences obtained in the

presence of real atmospheric turbulence. These datasets were obtained from the CSIR

(Council for Scientific and Industrial Research) using their Cyclone camera and vary in

range from 5km to 15km. The simulated sequences were generated using ground truth

images/sequences. Both datasets can be further divided into sequences with real motion and sequences without real motion.

4.1 EXPERIMENTAL SETUP

The registration algorithms used in the CGI, FRTAAS and Time-averaged algorithms were

implemented to be as similar as possible to allow for accurate results to be obtained. A

window size of five was chosen for all the algorithms. The pyramid levels were also

chosen to be the same for both the Lucas-Kanade and Elastic Registration algorithms. On

average two pyramid levels were used.

For the CGI and Time-averaged algorithms a time window of ten frames was chosen. In

the case of the Time-averaged algorithm this meant averaging ten frames to obtain the

reference frame and in the case of the CGI algorithm a moving filter of ten frames was

used in the trajectory estimation. For the case of the ICA algorithm the best results were

obtained using a time window of three frames from which to estimate a true frame. For the

FRTAAS algorithm the entire sequence was considered for the selection of the sharpest

frame.



4.2 RESULTS USING SIMULATED SEQUENCES

The simulated sequences were used as they provided us with a ground truth with which to

compare the results of each algorithm. The simulated sequences can be sub-divided into

two further categories: sequences with motion and sequences without motion. For the

sequences without motion a single image was used and warped in different ways to create

a distorted sequence similar to turbulence. The distortions were set to be random within a

certain level of pixel shift that could be controlled. The sequence was then blurred using

Equation (32) with λ selected randomly within a limit. This limit was chosen to be small and was determined by multiplying λ with a random value between 0.5 and 1.5. The

final sequence was therefore a time-varying blurred and distorted sequence which

simulates turbulence. The turbulent sequences with motion were simulated by warping and

blurring each frame of a motion sequence using the same method described above.

4.2.1 Simulated sequences without motion

Figures 4.1-4.4 show the results of the four algorithms on a number of simulated

turbulence sequences with no motion. The sequences were generated using a medium level

of turbulence with an average λ=0.001. The level of distortion was set to a radius of

approximately five pixels, which is considered to be a medium level of distortion in this

dissertation.

The MSE (Mean-square-error) was calculated between each frame in the output sequence

of the algorithms and the original ground-truth frame. All of the output sequences showed

a definite reduction in the levels of geometric distortion. The FRTAAS algorithm showed

the best results visually, which is confirmed by the MSE results. The CGI algorithm and the

Time-averaged algorithm performed similarly, with the CGI algorithm showing a slight



performance advantage. Since no motion is present in the scene the time-averaged

algorithm is able to generate a good reference frame through averaging. The outstanding

results obtained from the FRTAAS algorithm can be attributed to two main factors. The

first is the registration algorithm used. Since the registration algorithm accounts for intensity variations and includes a smoothness constraint, avoiding the assumption of image brightness constancy, better results can be obtained through

registration. The second factor is the sharpness metric used to select a sharp initial

reference frame which improves the results in the final output. The ICA algorithm is only

capable of compensating for a small amount of distortion in the turbulence sequence and

while there is a reduction in the output sequence when compared to the turbulence

sequence, it shows the least performance gain compared to the other three algorithms. The

reason for the CGI algorithm starting from frame six is the time required to accumulate

over the time window. It could have been allowed to start from frame one by reducing the

time window initially and increasing it once the target frame is reached, but to get a more

accurate assessment this was not done.



Figure 4.1: MSE of Lenna sequence with λ=0.001 and pixel motion set to 5.

Figure 4.2: MSE of Flat sequence with λ=0.001 and pixel motion set to 5.



Figure 4.3: MSE of Satellite sequence with λ=0.001 and pixel motion set to 5.

Figure 4.4: MSE of Girl1 sequence with λ=0.001 and pixel motion set to 5.



Figures 4.5-4.7 show the results of the simulated sequences with a higher level of

turbulence with λ=0.0025 and the distortion level set to 3. This was done to see how the

algorithms performed with a small amount of distortion but an increased level of blur. It

can be seen that the CGI and FRTAAS algorithms performed similarly in this case. Since

the sequence has a higher level of blur, the sharpest frame selection does not provide a

significant advantage in this case. In different turbulence sequences, due to lucky selection

of a sharp frame, the FRTAAS algorithm might show improved results. The ICA algorithm

again showed the least improvement in performance. The time-averaged algorithm showed

improved results over the turbulence sequence but was still outperformed by the CGI and

FRTAAS algorithms.

Figure 4.5: MSE of Lenna sequence with λ=0.0025 and pixel motion set to 3.



Figure 4.6: MSE of Flat sequence with λ=0.0025 and pixel motion set to 3.

Figure 4.7: MSE of Military sequence with λ=0.0025 and pixel motion set to 3.



Figures 4.8-4.9 show the results of the four algorithms on simulated turbulence sequences with λ=0.00025. The level of distortion was set to a radius of approximately five pixels.

Figure 4.8: MSE of Room sequence with λ=0.00025 and pixel motion set to 5.



Figure 4.9: MSE of Lenna sequence with λ=0.00025 and pixel motion set to 5.

4.2.2 Simulated sequences with motion

For the simulated sequences with motion, the CGI algorithm was used with a fixed

window using the median method as described in section 2.2.4.3. The difference between the CGI using a fixed window and the CGI using a fixed window with the median is shown in

Figure 4.11. The results are from a sequence of a car entering a parking lot as shown in

Figure 4.10.

Figure 4.10: 3 frames from simulated motion sequence. (PETS 2000 Dataset [45]).



Figure 4.11: MSE of Fixed Window CGI and Fixed Window CGI using Median on simulated

motion sequence 1.

Figures 4.12-4.13 show the results of the four algorithms. The Time-averaged algorithm

broke down completely with motion present in the scene, since the reference frame was motion-blurred. The ICA algorithm also showed signs of motion blur in the output

sequence and the results show that the output sequence was worse than the turbulence-

degraded sequence in the first motion sequence. The FRTAAS algorithm showed similar

performance to the ICA algorithm. The CGI algorithm is the only algorithm which showed

an improvement in the output sequence with geometric distortions reduced and the real

motion preserved. Figure 4.13 shows a comparison of using the fixed window and the

fixed window with median. It can be seen that the fixed window using the median

performed slightly better.



Figure 4.12: MSE of simulated motion sequence 1 with λ=0.001 and pixel motion set to 5.

Figure 4.13: MSE of simulated motion sequence 2 with λ=0.001 and pixel motion set to 5.



4.3 RESULTS USING REAL SEQUENCES

The real turbulence-degraded dataset consists of 11 sequences provided by the CSIR

(Council for Scientific and Industrial Research). The sequences vary from buildings and

structures, which contain a large amount of detail, to open areas in which the contrast can

be low. The ranges of the sequences are from 5km to 15km and they were obtained using the CSIR's Cyclone camera. The turbulence levels vary from light to severe, with most of the

sequences having a medium level of turbulence as determined visually by an expert [47].

In the case of the real sequences, since no ground truth is available the sequences cannot be

compared with the original. The MSE (Mean-square-error) was calculated between

consecutive frames in a sequence, which shows the stability of the sequence. An intensity-corrected MSE measures the differences between frames, i.e. if turbulence is present the geometric changes between frames will be large and this will correspond to a high MSE.

The sharpness of the frames was then calculated and is shown in section 4.3.3.
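A sketch of this stability measure for a sequence seq (h-by-w-by-N, double grayscale) is given below; the mean-matching step stands in for the intensity correction, whose exact form is not reproduced here:

% Intensity-corrected MSE between consecutive frames: low values indicate
% a geometrically stable (well-corrected) sequence.
mse_c = zeros(1,N-1);
for t = 1:N-1
    a = seq(:,:,t);  b = seq(:,:,t+1);
    b = b - mean(b(:)) + mean(a(:));       % simple intensity correction (assumed)
    mse_c(t) = mean((a(:) - b(:)).^2);
end
plot(mse_c); xlabel('frame'); ylabel('MSE');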

4.3.1 Real turbulence-degraded sequences without motion

Figure 4.14 shows the results of the real turbulence sequence taken of the Armscor

building at a distance of 7km. Examples of the turbulent and corrected frames are shown in

Figures 4.15-4.19. In this sequence there is a fair amount of detail present. The processed

sequences for all the algorithms showed a reduction in the geometric distortions. The CGI,

FRTAAS and the Time-averaged algorithms output a stable sequence with few to no

discontinuities. The ICA algorithm output showed a reduction in atmospheric turbulence

with the distortions appearing to move more slowly. The distortions were still however

present and the shimmering effect remained. This can be seen in Figure 4.14, where the

MSE of the ICA is always less than that of the turbulence sequence; however, its variations are much higher. The CGI and Time-averaged algorithms show the best results for the

Armscor sequence. There are, however, still slight differences between the frames.

Figure 4.14: MSE between consecutive frames of Armscor sequence.

Figure 4.15: Real turbulence-degraded frame of the Armscor building. (CSIR Dataset).



Figure 4.16: CGI corrected frame of Armscor building. (CSIR Dataset).

Figure 4.17: FRTAAS corrected frame of Armscor building. (CSIR Dataset).



Figure 4.18: Time-averaged corrected frame of Armscor building. (CSIR Dataset).

Figure 4.19: ICA corrected frame of Armscor building. (CSIR Dataset).

Figure 4.21 shows the results of the real turbulence sequence taken of a building site at a

distance of 5km. An example of a turbulent frame of the sequence is shown in Figure 4.20.

In this sequence there is also a fair amount of detail present. The results of the processed sequences were similar to the Armscor building sequence, with the exception of the

FRTAAS algorithm which showed improved results over the CGI and Time-averaged

methods.

Figure 4.20: Real turbulence-degraded frame of the building site sequence at a distance of

5km. (CSIR Dataset).



Figure 4.21: MSE between consecutive frames of Building site sequence.

Figure 4.22 shows the results of a real turbulence sequence taken of a tower at a distance of

11km. An example of a turbulent frame of the sequence is shown in Figure 4.23(a). In this

sequence the detail is minimal. The turbulence level is, however, severe, both in terms of

geometric distortion and blurring. For comparison, Figure 4.23(b) shows a frame from a

different sequence of the same tower. The sequence was taken from the exact same

location as the blurred sequence but at a time in which the turbulence was negligible.

The CGI and Time-averaged algorithms performed well once again. The FRTAAS

algorithm and the ICA algorithm, while showing improvements over the turbulent

sequence, did not perform as well. The ICA-processed sequence still possessed geometric distortions, although they were fewer than in the turbulent sequence.



Figure 4.22: MSE between consecutive frames of Tower sequence.



Figure 4.23: (a) Real turbulence-degraded frame of a tower at a distance of 11km and (b) the same tower at 11km but with negligible turbulence. (CSIR Dataset).



Figure 4.24: (a) Frame from the CGI algorithm output sequence and (b) frame from the FRTAAS algorithm. (CSIR Dataset).



Figure 4.25: (a) Frame from the Time-averaged algorithm output sequence and (b) frame from the ICA algorithm. (CSIR Dataset).

Figure 4.27 shows the results of a real turbulence sequence taken of a shack at a distance of

10km. An example of a turbulent frame of the sequence is shown in Figure 4.26. The CGI

and Time-averaged algorithms performed well once again, with the Time-averaged

algorithm outperforming the CGI algorithm in this case. The ICA-processed sequence still possessed geometric distortions, although they were fewer than in the turbulent sequence. The FRTAAS algorithm, while stable, did contain warping of certain features.

Figure 4.26: Real turbulence-degraded frame of a shack at a distance of 10km.

(CSIR Dataset).

Figure 4.27: MSE between consecutive frames of Shack sequence.



4.3.2 Real turbulence-degraded sequences with motion

Figure 4.28 shows the results of the building site sequence with motion present in the scene

in the form of a car. The results were calculated using the MSE between consecutive

frames. Since there is motion present in the scene, the MSE between frames will be higher.

This applies to all the output sequences, however, and the results are therefore still

relevant. The results can be compared with Figure 4.21 which is part of the same sequence,

the only difference being the motion of the car. It can be seen that the FRTAAS and Time-

averaged algorithms broke down in the presence of motion. The ICA algorithm performed

similarly to before but as discussed i.e. while the geometric distortions were reduced the

effect of heat shimmer was still present. The CGI algorithm once again showed a stable

output sequence with improvements over the turbulence sequence.

Figure 4.28: MSE between consecutive frames of Building site sequence with motion.



4.3.3 Sharpness of real turbulence-degraded sequences

The sharpness of the outputs of the algorithms was calculated using Equation (5). As

shown in section 2.2.2.2 the lowest value corresponds to the sharpest image. The results of

using the kurtosis of the image, as described by Li [1], are also shown. Since this was the only algorithm discussed which explicitly dealt with and described methods of enhancing the turbulent images, the results of the CGI algorithm (pre-enhancement) are also

shown. Figures 4.29-4.33 show the sharpness of the turbulence shack sequence compared

to the corrected shack sequences. It can be seen that the algorithms did not degrade beyond

the sharpness levels of the turbulence sequences. The ICA algorithm also shows frame 12

being significantly sharper than the other frames and matching the sharpest level obtained

by the kurtosis algorithm. This can also be confirmed visually.

Figure 4.29: Sharpness of frames in Shack sequence using CGI algorithm.



Figure 4.30: Sharpness of frames in Shack sequence using CGI algorithm with kurtosis

enhancement.

Figure 4.31: Sharpness of frames in Shack sequence using Time-averaged algorithm.



Figure 4.32: Sharpness of frames in Shack sequence using ICA algorithm.

Figure 4.33: Sharpness of frames in Shack sequence using FRTAAS algorithm.



Figures 4.34-4.38 show the sharpness of the turbulent building site sequences compared to

the corrected sequences.

Figure 4.34: Sharpness of frames in Building site sequence using Time-averaged algorithm.



Figure 4.35: Sharpness of frames in Building site sequence using ICA algorithm.

Figure 4.36: Sharpness of frames in Building site sequence using FRTAAS algorithm.



Figure 4.37: Sharpness of frames in Building site sequence using CGI algorithm with Kurtosis

enhancement.

Figure 4.38: Sharpness of frames in Building site sequence using CGI algorithm.



4.4 CONCLUSION

Based on the results above it was shown that the CGI algorithm performed the best when real motion as well as turbulent motion was present in the scene. It was also shown that

under real turbulence conditions, the CGI performed the best along with the Time-

averaged algorithm. Another reason for selecting the CGI algorithm was the way in which

the compensation for geometric distortions was achieved by using a fixed time window.

This also increased the complexity of the algorithm, since the motion vectors have to be calculated for each of the frames in the time window, but the accuracy was increased. Therefore, for real-time simulation, the CGI algorithm [1] was chosen. The

Time-averaged algorithm was also implemented for comparison. OpenCV was used for the

implementation in C as many image processing algorithms and functions are readily

available. All results were obtained on an Intel Core Duo with a CPU speed of 2 GHz and

2 GB of RAM. The Lucas-Kanade algorithm was chosen for both implementations since

this algorithm was available in OpenCV and is optimized. The CGI algorithm [1] was able

to run at a frame rate of 9 frames per second whereas the Time-averaged algorithm ran at a

frame rate of 29 frames per second. The time window chosen for the CGI algorithm was 10

frames, i.e. to compensate for each frame in the sequence, the optical flow between 10

frames had to be calculated. When the time window was decreased to 6 frames, a frame

rate of approximately 14 frames per second was achieved. The size of the images was 256x256 pixels.



CHAPTER 5

CONCLUSION AND FUTURE WORK

5.1 CONCLUSION

Algorithms for the removal of heat scintillation in sequences were researched and a

comparative analysis performed. Four algorithms were selected for the comparative

analysis based on their ability to compensate for geometric distortions as well as their ability to enhance images blurred by the effect of atmospheric turbulence. It was

shown that all the algorithms were capable of reducing the geometric distortions present in the sequences, with the FRTAAS [13], CGI [1] and Time-averaged [3, 11, 18] algorithms

outputting sequences that are stable and geometrically improved. It was observed with the

ICA algorithm that while the geometric distortions were reduced, they were not removed

and the effect of heat shimmer still remained. The property of quasi-periodicity, used by

many algorithms to compensate for geometric distortions, was examined and shown using

real-turbulence degraded sequences provided by the CSIR. An algorithm was also

developed to simulate the effects of atmospheric turbulence, allowing us to compare our

results with the ground-truth images, which was not possible with the real-sequences. The

problem of real-motion present in the scene along with the effects of turbulence was

investigated. While most algorithms had difficulty dealing with real motion in the scene,

the CGI algorithm [1] was shown to compensate for geometric distortions while preserving

real motion. By estimating the period of turbulence, an adaptive CGI algorithm was

developed capable of providing improved results over using a fixed period. This method

increased the computational complexity of the algorithm, making it less suitable for real-

time implementation. It was shown that by using the median of the trajectories, a fixed

period could be maintained and an improved output observed especially in the presence of



real motion. The sharpness levels of the various outputs from the algorithms were also

investigated and it was shown that none of the algorithms served to further degrade the

sequences. The enhancement of the images using kurtosis was also shown to provide a

significant improvement in the sharpness of the images. Comparative analysis was

performed on both real and simulated turbulence sequences and it was seen that while the

FRTAAS algorithm performed better than the others in the simulated sequences, the results

based on the real sequences showed the CGI algorithm to have the best overall

performance. Based on the results of the comparative analysis the CGI algorithm [1] was

chosen and simulated using OpenCV. While a frame rate of only 9 frames per second was

achieved using a time window of ten frames, the algorithm is well suited for parallel

processing which should significantly increase the frame rate.

5.2 FUTURE WORK

While fusion has been used for blurring, methods are also available for fusing images in

which there is a slight amount of misregistration present as well as blurring. Algorithms such as the MAP (Maximum A Posteriori Probability) algorithm use special blind deconvolution methods that are capable of identifying and compensating for the between-channel misregistration [23]. Work in this area might allow for the development of a single

algorithm capable of compensating for the blur and distortion simultaneously. Future work

will also include selecting and parallelising an algorithm to function on a GPU or other

hardware capable of handling parallel applications in order to achieve higher frame rates.



BIBLIOGRAPHY

[1] D. Li, R. M. Mersereau, D. H. Frakes and M. J. T. Smith, A New Method For

Suppressing Optical Turbulence In Video, in Proc. European Signal Processing

Conference (EUSIPCO2005), 2005.

[2] B. D. Lucas and T. Kanade, An iterative image registration technique with an

application to stereo vision (darpa)," in Proc. of the 1981 DARPA Image

Understanding Workshop, pp. 121-130, April 1981.

[3] D. Fraser, G. Thorpe, and A. Lambert, "Atmospheric turbulence visualization with

wide-area motion blur restoration," Optical Society of America, pp. 1751-1758, 1999.

[4] S. Periaswamy and H. Farid, "Differential elastic image registration," Dartmouth

College, Computer Science, Tech. Rep. TR2001-413, 2001.

[5] S. Periaswamy, H. Farid, Elastic Registration in the presence of Intensity Variations,

in IEEE transactions on Medical Imaging, vol. 22, pp.865-874, 2003.

[6] S. Periaswamy, H. Farid, Differential Affine Motion Estimation for Medical Image

Registration, in Proc. of SPIE's 45th Annual Meeting, 2000.

[7] S. Periaswamy, General-Purpose Medical Image Registration, Ph.D. thesis,

Computer Science, Dartmouth College, April 2003.



[8] J. V. Stone, Independent Component Analysis: A Tutorial Introduction, MIT Press,

2004.

[9] J.F. Cardoso, A. Souloumiac, An efficient technique for blind separation of complex

sources, in Proc. IEEE SP Workshop on Higher-Order Stat, pp. 275-279, 1993.

[10] I. Kopriva, Q. Du, H. Szu and W. Wasylkiwskyj, Independent Component Analysis

approach to image sharpening in the presence of atmospheric turbulence, Optics

Communications, Elsevier B. V., vol. 233, pp. 7-14, 2004.

[11] D. H. Frakes, J. W. Monaco, M. J. T. Smith, Suppression of atmospheric turbulence

in video using an adaptive control grid interpolation, in Proc. of the IEEE Int. Conf.

Acoustics, Speech, and Signal Processing, pp. 1881-1884, 2001.

[12] D. Li, Restoration of atmospheric turbulence degraded video using kurtosis

minimization and motion compensation, Ph.D. thesis, School of Electrical and

Computer Engineering, Georgia Institute of Technology, May 2007.

[13] M. Tahtali, D. Fraser, A. J. Lambert, Restoration of non-uniformly warped images

using a typical frame as prototype, in TENCON 2005-2005 IEEE Region 10, pp.

1382-1387, 2005.

[14] G. Wolberg, Digital Image Warping, Los Alamitos, CA, USA: IEEE Computer

Society Press, 1994.



[15] M. C. Roggemann and B. Welsh, Imaging Through Turbulence. CRC Press, 1996.

[16] G. J. Sullivan and R. L. Baker, Motion compensation for video compression using

control grid interpolation, in Proc. ICASSP 1991, pp. 2713-2716, 1991.

[17] E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice

Hall, 1998.

[18] W. Zhao, L. Bogoni and M. Hansen, Video Enhancement by Scintillation Removal,

in Proc. of the 2001 IEEE International Conference on Multimedia and Expo, pp. 393-

396, 2001.

[19] W. A. Clarke, An Overview of Techniques for the Removal of Atmospheric

Turbulence Effects from Images, Council for Scientific and Industrial Research

internal document, 2006.

[20] B. R. Frieden, An exact, linear solution to the problem of imaging through

turbulence, Optics Communication, Elsevier Science B. V., vol. 150, pp. 15-28, 1998.

[21] J. F. Cardoso, Blind Beamforming for Non Gaussian Signals, IEEE Proc. F, vol.

140, pp. 362-370, 1993.

[22] J. F. Cardoso, Blind Signal Separation: Statistical Principles, in Proc. of the IEEE,

vol. 9, no 10, pp. 2009-2025, 1998.



[23] R. S. Blum and Z. Liu, Multi-Sensor Image Fusion and Its Applications, Boca Raton,

CRC Press, 2006.

[24] P. Campisi and K. Egiazarian, Blind Image Deconvolution, Boca Raton, CRC Press,

2007.

[25] K. T. Knox and B. J. Thompson, Recovery of images from atmospherically

degraded short-exposure photographs, Astrophysics Journal, vol. 193, pp. L45-L48,

1974.

[26] A. Labeyrie, Attainment of Diffraction Limited Resolution in Large Telescopes by

Fourier Analyzing Speckle Patterns in Star Images, Astronomy and Astrophysics,

vol. 6, pp. 85, 1970.

[27] J. F. Cardoso, Statistical Principles of Source Separation, in Proc. of the SYSID97,

11th IFAC symposium on system identification, pp. 1837-1844, 1997.

[28] J. F. Cardoso, Higher Order Contrasts for ICA, in Neural Computation, vol. 11, pp.

157-192, 1999.

[29] B. K .P. Horn and B. G. Schunck, Determining Optical Flow, Artificial

Intelligence, vol. 17, pp. 185-203, 1981.



[30] D. Fraser, A. Lambert, M. R. S. Jahromi, M. Tahtali and D. Clyde, Anisoplanatic

Image Restoration at ADFA, in Proc. VIIth Digital Image Computing: Techniques

and Applications, pp. 19-28, 2003.

[31] G. Thorpe, A. Lambert and D. Fraser, "Atmospheric turbulence visualization through

Image Time-Sequence Registration," in Proc. of the International Conference on

Pattern Recognition, vol. 2, pp. 1768-1770, 1998.

[32] L. P. Yaroslavsky, B. Fishbain, A. Shteinman and S. Gepshtein, Processing and

Fusion of Thermal and Video Sequences for Terrestrial Long Range Observation

Systems, in Proceedings of the 7th International Conference on Information Fusion

(FUSION '04), vol. 2, pp. 848-855, Stockholm, Sweden, 2004.

[33] B. Zitova and J. Flusser, Image Registration Methods: a Survey, Image and Vision

Computing, vol. 21, pp. 977-1000, 2003.

[34] D. Li, R. M. Mersereau and S. Simske, Blur Identification Based on Kurtosis

Minimization, in Proc. of the IEEE Int. Conference Image Processing, vol. 1, pp.

905-908, 2005.

[35] S. John and M. A. Vorontsov, Multiframe Selective Information Fusion from Robust

Error Estimation Theory, IEEE transactions on Image Processing, vol. 14, pp. 577-

584, 2005.



[36] C. Bondeau and E. Bourennane, Restoration of Images Degraded by the

Atmospheric Turbulence, in Proc. of ICSP, pp. 1056-1059, 1998.

[37] M. A. Vorontsov and G. W. Carhart, Anisoplanatic Imaging Through Turbulent

Media: Image Recovery by Local Information Fusion from a Set of Short-Exposure

Images, in Journal of Opt. Soc. of America, vol. 18, No. 6, pp. 1312-1324, 2001.

[38] D. G. Sheppard, B. R. Hunt and M. W. Marcellin, Iterative MultiFrame Super-

Resolution Algorithms for Atmospheric Turbulence-Degraded Imagery, in Proc. of

the IEEE International Conference on Acoustics, Speech and Signal Processing, pp.

2857-2860, 1998.

[39] D. G. Sheppard, B. R. Hunt and M. W. Marcellin, Super-Resolution of Imagery

Acquired through Turbulent Atmosphere, Conference Record of the Thirtieth

Asilomar Conference on Signals, Systems and Computers, pp. 81-85, 1997.

[40] L. Peng and X. Peng, Turbulent Effort Imaging and Analysis, in Proc. of the IEEE,

Circuits and Systems, pp. 588-591, 1998.

[41] B. Galvin, B. McCane, K. Novins, D. Mason and S. Mills, Recovering Motion

Fields: An Evaluation of Eight Optical Flow Algorithms, in Proc. of the Ninth British

Machine Vision Conference, pp. 195-204, 1998.



[42] J. L. Barron, D. Fleet, S. S. Beauchemin, and T. Burkitt, Performance of Optical

Flow Techniques, in Proc. of the IEEE on Computer Vision and Pattern Recognition,

pp. 236-242, 1992.

[43] S. S. Beauchemin and J. L. Barron, The Computation of Optical Flow, ACM

Computing Surveys, vol. 27, pp. 433-467, 1995.

[44] R. C. Gonzalez and R. E. Woods, Digital Image Processing 2/E, Prentice Hall, 2002.

[45] PETS 2000 Dataset.

Available from: http://www.cvg.rdg.ac.uk/slides/pets.html

[Accessed: 24/10/2007]

[46] The USC-SIPI Image Database.

Available from: http://sipi.usc.edu/database/

[Accessed: 24/10/2007]

[47] M. Holloway, Interview, CSIR, 2008.



APPENDIX A

FRTAAS MATLAB CODE


% Runs the FRTAAS algorithm using the
% elastic image registration algorithm

clear,close all,clc;

% Read in clip and initialise


tic;
init;

mov=aviread('armscor',1:120);

N=28; % frame length

% determine sharpest frame in sequence
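% (entropy-based measure: sh = -sum(I .* log(I)) over all pixels; as noted
% in section 2.2.2.2, the lowest value corresponds to the sharpest frame)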


sh=0;

for k=1:N
i1=mov(k).cdata;
i1=im2double(i1);

for x=1:size(i1,1)
for y=1:size(i1,2)

temp=i1(x,y);
if temp~=0
shtemp=-((log(temp)/log(exp(1)))*temp);
sh=sh+shtemp;
end

end
end
shf(k)=sh;
sh=0;
end
[c,i]=min(shf)

clear i1 sh shtemp shf c k x y temp

% Begin algorithm

%i1=rgb2gray(mov(1).cdata);
i1=(mov(i).cdata);
i1 = im2double(i1);

for k=2:N
k
% i2=rgb2gray(mov(k).cdata);
i2=(mov(k).cdata);
i2 = im2double(i2);
[i1_warped,flow]=register2d(i1,i2);



u(:,:,k-1)=-flow.m5;
v(:,:,k-1)=flow.m6;
end

cx=(sum(u,3))./N;
cy=(sum(v,3))./N;

for k=2:N

temp=mov(k).cdata;
temp=im2double(temp);
final(:,:,k-1)=mdewarpfinal(temp,-(u(:,:,k-1)-cx),-(v(:,:,k-1)-cy));

end

for k=2:N
map=gray(256);
mov1(k-1) = im2frame(final(:,:,k-1)*255,map);
end

map=gray(256);
for k=1:N-1
temp1=(mat2gray(mov1(k).cdata));
mov1(k)=im2frame(im2uint8(temp1),map);
temp2=(mat2gray(mov(k).cdata));
mov(k)=im2frame(im2uint8(temp2),map);
%mov(k)=mat2gray(mov(k).cdata);
end

mplay(mov);
mplay(mov1);

toc;

%x=mdewarpfinal(i1,u(:,:,1),v(:,:,1));

outvid='armscorout_1to120_frtaas';
%movie2avi(mov1,outvid,'fps',20,'compression','None');

save(outvid,'mov1');

FSATIE TSHWANE UNIVERSITY OF TECHNOLOGY Page 87


%
% function [i1_warped,flowAcc] = register2d(i1_in,i2_in,params)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [i1_warped,flowAcc] = register2d(i1_in,i2_in,params)

% First set default parameters


if (~exist('params'))
params = params_default;
end

[h,w] = size(i1_in);

if (~isfield(params.glob,'numLevels')) % Newly added field


params.glob.numLevels = 100;
end

i1 = i1_in;
i2 = i2_in;
[h,w] = size(i1);

%--------------- Padding begin -----------------------------


newH = nextpow2(h);
newW = nextpow2(w);

padHl = floor((newH-h) / 2);


padHr = newH-h - padHl;

padWl = floor((newW-w) / 2);


padWr = newW-w - padWl;

mind = min(w,h);
levels = floor(log2(mind/params.main.minSize)+1);

padWl = padWl + (params.main.padSize * 2^levels);


padHl = padHl + (params.main.padSize * 2^levels);
padWr = padWr + (params.main.padSize * 2^levels);
padHr = padHr + (params.main.padSize * 2^levels);

padW = padWl + padWr;


padH = padHl + padHr;

i1 = pad2dlr(i1,padWl,padWr,padHl,padHr,0);
i2 = pad2dlr(i2,padWl,padWr,padHl,padHr,0);

%--------------- Padding end -----------------------------

pyr1(1).im = i1;
pyr2(1).im = i2;

[ht,wt] = size(pyr1(1).im);



%--------------- Construct pyramid ----------------------
fprintf('Pyramid level: %g size: %gx%g mse is: %g\n',...
1,ht,wt,mse(pyr1(1).im,pyr2(1).im ));

for i = 2:levels

pyr1(i).im = reduce(pyr1(i-1).im);
pyr2(i).im = reduce(pyr2(i-1).im);

[ht,wt] = size(pyr1(i).im);
fprintf('Pyramid level: %g size: %gx%g mse is: %g\n',...
i,ht,wt,mse(pyr1(i).im,pyr2(i).im ));
end
fprintf('-----\n');
%--------------- End construct pyramid ------------------

[hs,ws] = size(pyr1(levels).im);
flowAcc = flow_init(ws,hs);

i1_w = pyr1(levels).im;
for i=levels:-1:1 % (from coarse to fine)

fprintf('Pyramid level: %g h: %g w: %g mse is: %g\n',...


i,size(pyr1(i).im),mse(pyr1(i).im,pyr2(i).im ));

if (params.glob.flag)
fprintf('Finding global affine fit\n');
[flow_glob,M] = aff_find_flow(i1_w,pyr2(i).im, ...
params.glob.model,params.glob.iters);
flowAcc = flow_add(flowAcc,flow_glob);
flowAcc.m7 = flow_glob.m7;
flowAcc.m8 = flow_glob.m8;
fprintf('Done\n');
end

% Notes:
%
% flowfind_smooth warps the image by flowAcc
% before it finds the flow field.
% This takes care of the global warp.
% It also accumulates the previous flowAcc.
%
% The flow accumulation is also done in flowfind_smooth
% since it is this function that displays intermediate
% results.

if (params.glob.numLevels >= levels-i+1) % in case we need to stop at a particular level
flowAcc = flowfind_smooth(pyr1(i).im,pyr2(i).im,params,...
flowAcc,i,levels);
end

if (i ~= 1) %for all but the finest scale


%prepare for the next scale
flowAcc = flow_reduce(flowAcc,-1);



if (params.glob.flag)
i1_w = flow_warp(pyr1(i-1).im,flowAcc);
end
end

end

%undo padding
flowAcc = flow_undopad(flowAcc,padWl,padWr,padHl,padHr);

%warp image in the finest scale


i1_warped = flow_warp(i1_in,flowAcc);

fprintf('Pyramid level: %g mse is: %g\n',...


1,mse(i1_warped,i2_in));

ftime = clock;

return;

function [flow_glob,M] = aff_find_flow(i1,i2,model,iters)

[h,w] = size(i1);
[M,b,c] = affbc_iter(i1,i2,iters,model);
flow_glob = flow_aff(M,w,h);
flow_glob.m7 = ones(h,w) .* c;
flow_glob.m8 = ones(h,w) .* b;
M

return;



%
% function [params] = params_default
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [params] = params_default

params.main.boxW = 5;
params.main.boxH = 5;
params.main.model = 4;
params.main.minSize = 128;
params.main.dispFlag = 0;
params.main.padSize = 2;

params.smooth.loops_inner = 1;
params.smooth.loops_outer = 1;
params.smooth.lamda = [1e11 1e11 1e11 1e11 1e11 1e11 1e11 1e11];
params.smooth.deviant = [0 0 0 0 0 0 1 0];
params.smooth.dweight = [0 0 0 0 0 0 0 0];

params.glob.flag = 1;
params.glob.model = 4;
params.glob.iters = 2;%20;
params.glob.numLevels = 100;

return;

% Explanation of parameters
%
% params.main.boxW = width of box used in least squares estimate
% params.main.boxH = height of box used in least squares estimate
% params.main.model = model used in estimation
%
% 4 -> affine + translations + contrast/brightness
% 3 -> affine + translation
% 2 -> translation + contrast/brightness
% 1 -> translation
%
% params.main.minSize = lowest scale of pyramid has this size
% params.main.dispFlag = display intermediate results
% params.main.padSize = padding at coarsest scale
%
% params.smooth.loops_inner = Smoothness iterations
% params.smooth.loops_outer = Taylor series iterations
% params.smooth.lamda = Lamda values on smoothness terms
%
% params.smooth.deviant = internal, ignore
% params.smooth.dweight = internal, ignore
%
% params.glob.flag = perform global registration also
% params.glob.model = model of global registration,



% similar to local model.
% params.glob.iters = iterations used in global estimation
% params.glob.numLevels = internal, ignore
%



%
% function v = nextpow2(v)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function v = nextpow2(v)

l = log2(v);
if (l == floor(l))
return;
end

v = 2^round(log2(v)+0.5);

return;



%
% function out = pad(img,bx,by,val)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function out = pad(img,bx,by,val)

if (nargin == 3)
val = 0;
end

[h,w] = size(img);
out = ones(h+2*by,w+2*bx) .* val;

out(by+1:by+h,bx+1:bx+w) = img;

return;



%
% function imgOut = pad2dlr(imgIn,padWl,padWr,padHl,padHr,val,undoFlag)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function imgOut = pad2dlr(imgIn,padWl,padWr,padHl,padHr,val,undoFlag)

if (~exist('val','var'))
val = 0;
end

if (~exist('undoFlag','var'))
undoFlag = 0;
end

[h,w] = size(imgIn);
if (undoFlag == 0)
imgOut= ones(h+padHl+padHr,w+padWl+padWr) .* val;
imgOut(padHl+1:padHl+h,padWl+1:padWl+w) = imgIn;
else
h = h - padHl - padHr;
w = w - padWl - padWr;
imgOut = imgIn(padHl+1:padHl+h,padWl+1:padWl+w);
end

return;



%
% function out = reduce(in, cnt)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%
%
%USAGE: out = reduce(in, cnt);
%
% This function blurrs img
% with the seperable filter
% [.05 .25 .4 .25 .05]
% and then subsamples.
% Process repeated cnt times.

function img = reduce(img,cnt)

[h,w] = size(img);
filt = [.05 .25 .4 .25 .05];

if (nargin == 1)
cnt = 1;
end

if (cnt == 0)
return;
end

for i = 1:cnt
img = conv2mirr(img,filt,filt);
img = img([1:2:h],[1:2:w]);
h = h/2;
w = w/2;
end

return;



%
% function out = resize_2d(in,w,h)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function out = resize_2d(in,w,h)

[oh,ow] = size(in);
[mx,my] = meshgrid(linspace(1,oh,h), linspace(1,ow,w));
out = interp2(in,mx,my,'cubic');

return;



%
% function updatedisplay(i1,i2,flowA,i1_warped,curIter,maxIter,level);
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function updatedisplay(i1,i2,flowA,i1_warped,curIter,maxIter,level);

figure(1);

[h,w] = size(i1);

fprintf('Displaying...');

set(gcf,'DoubleBuffer','on');
set(gca,'NextPlot','replace','Visible','off')
set(gcf,'renderer','zbuffer');

grid = grid2d(w,h,floor(16*h/256),1);
grid_warped = flow_warp(grid,flowA);

i1_warped_bc = i1_warped .* flowA.m7 + flowA.m8;

imgC = trunc(flowA.m7,0,1);
imgB = trunc(flowA.m8,0,1);

row1 = rowimg(i1,i2,abs(i1-i2));
row2 = rowimg(i1_warped,i1_warped_bc,abs(i1_warped_bc-i2));
row3 = rowimg(flowA.m7, flowA.m8,1-grid_warped);

myimg = [row1;row2;row3];

myimg = frameimg(myimg,1,0);

imagesc(myimg,[0 1]);
colormap(gray);
axis image;
%truesize;
axis off;

x = 2;
y = h*2+15;
dy = 20;

tstr = sprintf('%s [%d x %d] level: %d iter: %d/%d',...


datestr(now),w,h,level,curIter,maxIter);

title({tstr},'HorizontalAlignment','left','fontname','courier',...
    'position',[0 0],'Interpreter','none','FontSize',8);

%set(gca,'position',[0 0 1 0.9]);
drawnow;
fprintf('Done!\n');



return;

function out = rowimg(varargin)

out = varargin{1};
out = frameimg(out,1,1);

for i=2:nargin
img = varargin{i};
img = frameimg(img,1,1);
out = [out img];

end

return;

function out=trunc(in,minv,maxv)

out = in;
ind = find(out > maxv);
out(ind) = 1;
ind = find(out < 0);
out(ind) = 0;

return;



%
% function pat = grid(w,h,t,v)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function pat = grid(w,h,t,v)

pat = zeros(h,w);

pat(1:t:h,1:w) = v;
pat(1:h,1:t:w) = v;

pat(h,1:w) = v;
pat(1:h,w) = v;

return;



% Compile c code and initialize paths

if (exist('c_diffxyt') ~= 3)
fprintf('Compiling c/c_diffxyt.c\n');
cd c
mex c_diffxyt.c
cd ..
end

if (exist('c_smoothavg') ~= 3)
fprintf('Compiling c/c_smoothavg.c\n');
cd c
mex c_smoothavg.c
cd ..
end

addpath('.:./c:./register:./utils:');



%
% function [mx,my] = meshgridimg(w,h)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [mx,my] = meshgridimg(w,h)

[mx,my] = meshgrid([0:w-1]-w/2+0.5,-([0:h-1]-h/2+0.5));

return;



%
% function out = mirror_extend(in,bx,by)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function out = mirror_extend(in,bx,by)

%Note: works for both even and odd bx/by!

[h,w] = size(in);

%First flip up and down


u = flipud(in([2:1+by],:));
d = flipud(in([h-by:h-1],:));

in2 = [u' in' d']';

%Next flip left and right


l = fliplr(in2(:, [2:1+bx]));
r = fliplr(in2(:,[w-bx:w-1]));

%set the 'mirrored' image to out.


out = [l in2 r];

return;

%test
A = [1 2 3 4;5 6 7 8;9 10 11 12]
B = mirror_extend(A,2,2)

%
% function err=mse(i1,i2,b)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function err=mse(i1,i2,b)
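% Note: despite its name this function returns the root-mean-square
% error; when a border width b is given, a border of b pixels is
% cropped from each image before the comparison.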

if (nargin == 3)
[h,w] = size(i1);
nh = h - 2*b;
nw = w - 2*b;
i1 = cutc(i1,nw,nh);
i2 = cutc(i2,nw,nh);
end

i1 = i1(:);
i2 = i2(:);

d = (i1 - i2);
d = d.*d;

err = sqrt(mean(d));

return

%
% function flow = flow_reduce(flow,scale)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow = flow_reduce(flow,scale)


scale = -scale;
flow.m5 = scaleby2(flow.m5,scale) .* (2^scale);
flow.m6 = scaleby2(flow.m6,scale) .* (2^scale);

if (isfield(flow,'m7'))
flow.m7 = scaleby2(flow.m7,scale);
flow.m8 = scaleby2(flow.m8,scale);
end
return;

function out = scaleby2(in,scale)

if (scale == 0)
out = in;
return;
end

[h,w] = size(in);
scale = 2^scale;

[mx,my] = meshgrid(linspace(1,w,scale*w),linspace(1,h,scale*h));
out = interp2(in,mx,my,'linear');

return;

%
% function flow=flow_smooth(flow,params)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow=flow_smooth(flow,params)

persistent index rankE PR KR h w blurr r_blurr sfile iFlag m_avg model;
global lamda deviant dweight sfilt

if (isempty(PR))

fprintf('Smoothness: Initializing\n');

lamda = params.smooth.lamda;
deviant = params.smooth.deviant;
dweight = params.smooth.dweight;
model = params.main.model;

affbc_find_init;

[h,w] = size(flow.m1);

if (h < 64 | w < 64)


fprintf('flow_smooth: Size < 64 , setting model=2\n');
model= 2;
end

    [L,d,index,rankE,iFlag] = smooth_setC(lamda,deviant,dweight,model);
[M,b,c,r,pt,kt] = affbc_find_api(flow.f,flow.g,model);

PR = zeros(h*w,8,8);
KR = zeros(h*w,8);

r = [1:rankE];
ir = index(r);

area = h*w;

for i=1:area

p = pt(:,i);

P = p * p';
K = p * kt(i);

C = K +d;
R = inv(P+L);

if (iFlag)

PR(i,ir,ir) = R(r,r);
KR(i,ir) = C(r)';
else
PR(i,:,:) = R(:,:);
KR(i,:) = C(:)';
end

end

m_avg = zeros(8,h*w);

sfilt = [1 4 1;4 0 4;1 4 1];

for i=1:8
blurr(i).filt = sfilt .* lamda(i);
end

fprintf('Smoothness: Initializing complete\n');


end

%r_blurr = conv2mirr(flow.r,sfilt);
r_blurr = c_smoothavg(flow.r,1);
r_zind = find(r_blurr == 0);
r_blurr(r_zind) = 1;

flow.r = ones(h,w);
flow.r(r_zind) = 0;

%m_avg(1,:) = reshape(conv2mirr(flow.m1,blurr(1).filt) ./ r_blurr,1,h*w);
%m_avg(2,:) = reshape(conv2mirr(flow.m2,blurr(2).filt) ./ r_blurr,1,h*w);
%m_avg(3,:) = reshape(conv2mirr(flow.m3,blurr(3).filt) ./ r_blurr,1,h*w);
%m_avg(4,:) = reshape(conv2mirr(flow.m4,blurr(4).filt) ./ r_blurr,1,h*w);
%m_avg(5,:) = reshape(conv2mirr(flow.m5,blurr(5).filt) ./ r_blurr,1,h*w);
%m_avg(6,:) = reshape(conv2mirr(flow.m6,blurr(6).filt) ./ r_blurr,1,h*w);
%m_avg(7,:) = reshape(conv2mirr(flow.m7,blurr(7).filt/20),1,h*w);
%m_avg(8,:) = reshape(conv2mirr(flow.m8,blurr(8).filt/20),1,h*w);

m_avg(1,:) = reshape(c_smoothavg(flow.m1, lamda(1)) ./ r_blurr,1,h*w);
m_avg(2,:) = reshape(c_smoothavg(flow.m2, lamda(2)) ./ r_blurr,1,h*w);
m_avg(3,:) = reshape(c_smoothavg(flow.m3, lamda(3)) ./ r_blurr,1,h*w);
m_avg(4,:) = reshape(c_smoothavg(flow.m4, lamda(4)) ./ r_blurr,1,h*w);
m_avg(5,:) = reshape(c_smoothavg(flow.m5, lamda(5)) ./ r_blurr,1,h*w);
m_avg(6,:) = reshape(c_smoothavg(flow.m6, lamda(6)) ./ r_blurr,1,h*w);
m_avg(7,:) = reshape(c_smoothavg(flow.m7, lamda(7)/20),1,h*w) ;
m_avg(8,:) = reshape(c_smoothavg(flow.m8, lamda(8)/20),1,h*w) ;

KRA = KR + m_avg';

m = zeros(h*w,8);

for r = 1:8
m(:,r) = sum( squeeze(PR(:,r,:)) .* KRA,2);
end

flow.m1(:) = m(:,1);
flow.m2(:) = m(:,2);
flow.m3(:) = m(:,3);
flow.m4(:) = m(:,4);
flow.m5(:) = m(:,5);
flow.m6(:) = m(:,6);
flow.m7(:) = m(:,7);
flow.m8(:) = m(:,8);

flow.m1(r_zind) = 1;
flow.m2(r_zind) = 0;
flow.m3(r_zind) = 0;
flow.m4(r_zind) = 1;
flow.m5(r_zind) = 0;
flow.m6(r_zind) = 0;
flow.m7(r_zind) = 1;
flow.m8(r_zind) = 0;

return;

function [L,d,index,rankE,iFlag] = smooth_setC(lamda,deviant,dweight,model)

[index,afFlag,bcFlag] = getindex(model);
iFlag = ~(afFlag & bcFlag);

rankE = length(index);

L = zeros(rankE,rankE);

for i = 1:rankE
ind = index(i);
L(i,i) = lamda(ind) + dweight(ind);
d(i) = deviant(ind) .* dweight(ind);
end

d = d';

return;

%
% function flowA = flowfind_smooth(i1,i2,params,flowA,currLevel,totLevels)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

%
% Note:
%
% The unwarped i1 is passed in as a parameter since all
% subsequent warpings are performed using the accumulated
% flow on the original image.

function flowA = flowfind_smooth(i1,i2,params,flowA,currLevel,totLevels)

[h,w] = size(i1);

if (~exist('flowA'))
flowA = flow_init(w,h);
i1_warped = i1;
else
i1_warped = flow_warp(i1,flowA);
end

flow = flowA;
flow.m7 = ones(h,w);
flow.m8 = zeros(h,w);

dispFlag = params.main.dispFlag;
loops_outer = params.smooth.loops_outer;
loops_inner = params.smooth.loops_inner;

if (dispFlag)
hw = waitbar(0,'Finding Smooth Flow');
updatedisplay(i1,i2,flowA,i1_warped,0, loops_outer,currLevel);
end

for j=1:loops_outer

fprintf('Outer iteration: %d/%d\n',j,loops_outer);

flow = flowfind_raw(i1_warped,i2,params);

clear flow_smooth;
for i=1:loops_inner
        fprintf('Level: %d/%d Iteration outer: %d/%d inner: %d/%d size: %dx%d\n',...
            totLevels-currLevel+1,totLevels,j,loops_outer,i,loops_inner,w,h);
if (dispFlag)
waitbar(i/loops_inner*j/loops_outer,hw);
end

flow = flow_smooth(flow,params);

end

[flowA,i1_warped] = flow_add(flowA,flow,i1);
flowA.m7 = flow.m7;
flowA.m8 = flow.m8;

if (dispFlag)
waitbar(i/loops_inner*j/loops_outer,hw,...
sprintf('iteration %g/%g' ,j,loops_outer));
        updatedisplay(i1,i2,flowA,i1_warped,j,...
            loops_outer,currLevel);
end

end
%keyboard;

if (dispFlag)
close(hw);
end

return;

%
% function flowOut = flow_undopad(flowIn,padWl,padWr,padHl,padHr)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flowOut = flow_undopad(flowIn,padWl,padWr,padHl,padHr)

flowOut.m5 = pad2dlr(flowIn.m5,padWl,padWr,padHl,padHr,0,1);
flowOut.m6 = pad2dlr(flowIn.m6,padWl,padWr,padHl,padHr,0,1);

if (isfield(flowIn,'m7'))
flowOut.m7 = pad2dlr(flowIn.m7,padWl,padWr,padHl,padHr,0,1);
flowOut.m8 = pad2dlr(flowIn.m8,padWl,padWr,padHl,padHr,0,1);
end

return;

%
% function [out] = flow_warp(in,flow,mtd,eflag)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [out] = flow_warp(in,flow,mtd,eflag)

if (nargin == 2)
mtd = 'cubic';
end

if (nargin < 4)
eflag = 0;
end

[h,w] = size(in);
b = floor(h/4);

if (eflag)
in = mirror_extend(in,b,b);
else
in = pad(in,b,b);
end

flow.m5 = mirror_extend(flow.m5,b,b);
flow.m6 = mirror_extend(flow.m6,b,b);

[h,w] = size(in);

[mx,my] = meshgridimg(w,h);

mx2 = mx + flow.m5;
my2 = my + flow.m6;

out = interp2(mx,my,in,mx2,my2,mtd);
out = out(b+1:h-b,b+1:w-b);
out(find(isnan(out))) = 0;

return;

%
% function [out]=flowfind_raw(im1,im2,params);
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [out]=flowfind_raw(im1,im2,params);

if (~exist('params','var'))
params = params_default;
end

barFlag = 0;
t0 = clock;

%%%%%%%%%%%%% Parameters for aff_find


frameSize = 0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[h_org,w_org] = size(im1);

%if (h_org == 32)


% boxW = 3;
% boxH = 3;
% model= 2;
% fprintf('flowfind_raw: Size: 32x32, setting model=2, box=4\n');
%end

boxW = params.main.boxW;
boxH = params.main.boxH;
model = params.main.model;

boxW2 = floor(boxW/2);
boxH2 = floor(boxH/2);
im1 = mirror_extend(im1,boxW,boxH);
im2 = mirror_extend(im2,boxW,boxH);

[h,w] = size(im1);

pcOrg = 0;

if (barFlag)
hw = waitbar(0,'Estimating flow...');
end

[fx,fy,ft] = diffxyt(im1,im2,[3 3]);


[index,afFlag,bcFlag] = getindex(model);

clear affbc_find;
global affbc_params;

affbc_params.model = model;
affbc_params.index = index;
affbc_params.frameSize = frameSize;
affbc_params.w = boxW;

affbc_params.h = boxH;
affbc_params.afFlag = afFlag;
affbc_params.bcFlag = bcFlag;

q = zeros(5,h,w);
q(1,:,:) = -im1;
q(2,:,:) = fx;
q(3,:,:) = fy;
q(4,:,:) = ft;

mout = zeros(h,w,9);

for y=boxH2+1:h-boxH2
if (barFlag)
pc = (y-boxH2-1)/(h-boxH);
waitbar(pc,hw);
end

y1=y -boxH2;
y2=y1+boxH -1;
yind = y1:y2;

for x=boxW2+1:w-boxW2

x1=x -boxW2;
x2=x1+boxW -1;
xind = x1:x2;

chunk = q(:,yind,xind);
affbc_params.nf = chunk(1,:);
affbc_params.fx = chunk(2,:);
affbc_params.fy = chunk(3,:);
affbc_params.ft = chunk(4,:);

affbc_find;

mout(y,x,:) = affbc_params.mout;

end
end

out.m1 = mout(:,:,1);
out.m2 = mout(:,:,2);
out.m3 = mout(:,:,3);
out.m4 = mout(:,:,4);
out.m5 = mout(:,:,5);
out.m6 = mout(:,:,6);
out.m7 = mout(:,:,7);
out.m8 = mout(:,:,8);
out.r = mout(:,:,9);

if (1)
maxd = boxW/2;
ind = find(abs(out.m5) >= maxd | abs(out.m6) >= maxd);
out.m1(ind) = 0;
out.m2(ind) = 0;
out.m3(ind) = 0;

out.m4(ind) = 0;
out.m5(ind) = 0;
out.m6(ind) = 0;
out.m7(ind) = 1;
out.m8(ind) = 0;
out.r(ind) = 0;
cnt = length(ind);
fprintf('out of bounds: %.2f%%\n',cnt/h/w*100);
end

out.fx = fx;
out.fy = fy;
out.ft = ft;
out.f = im1;
out.g = im2;
out.model = model;
out.index = index;

out = flow_extract(out,h_org,w_org);

%waitbar(1,hw);
if (barFlag)
close(hw);
end

t0=etime(clock,t0);
fprintf('Elapsed time: %g\n',t0);

return;

%
% function [index,affFlag,bcFlag] = getindex(model)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [index,affFlag,bcFlag] = getindex(model)

affFlag = 0;
bcFlag = 0;

switch model

case 1; % Translation
index = [5 6];

case 2; % Translation + BC
index=[5 6 7 8];
bcFlag = 1;

case 3; % Affine+Translation
index=[1 2 3 4 5 6];
affFlag = 1;

case 4; % Affine+Translation + BC
index=[1 2 3 4 5 6 7 8];
affFlag = 1;
bcFlag = 1;
end

return;

%
% function [out] = frameimg(in,b,val)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [out] = frameimg(in,b,val)

[h,w] = size(in);
H1 = framegen(h,w,b);
H2 = 1-H1;

out = in .* H1 + H2 .* val;

return;

%
% function [H] = framegen(h,w,frameSize)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [H] = framegen(h,w,frameSize)

H = zeros(h,w);
H(frameSize+1:h-frameSize,frameSize+1:w-frameSize) = 1;

return;

%
% function flow_out = flow_padlr(flow,padWl,padWr,padHl,padHr)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow_out = flow_padlr(flow,padWl,padWr,padHl,padHr)

flow_out.m5 = pad2dlr(flow.m5,padWl,padWr,padHl,padHr,0);
flow_out.m6 = pad2dlr(flow.m6,padWl,padWr,padHl,padHr,0);
flow_out.m7 = pad2dlr(flow.m7,padWl,padWr,padHl,padHr,1);
flow_out.m8 = pad2dlr(flow.m8,padWl,padWr,padHl,padHr,0);

return;

%
% function flow = flow_init(w,h)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow = flow_init(w,h)

flow.m5 = zeros(h,w);
flow.m6 = zeros(h,w);
flow.m7 = ones(h,w);
flow.m8 = zeros(h,w);

return;

%
% function flow = flow_extract(flow,h,w)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow = flow_extract(flow,h,w)

[oh,ow] = size(flow.m5);

x1 = (ow-w)/2+1;
y1 = (oh-h)/2+1;

x1 = floor(x1);
y1 = floor(y1);

x2 = x1+w-1;
y2 = y1+h-1;

if (isfield(flow,'m1'))
flow.m1 = flow.m1(y1:y2,x1:x2);
end

if (isfield(flow,'m2'))
flow.m2 = flow.m2(y1:y2,x1:x2);
end

if (isfield(flow,'m3'))
flow.m3 = flow.m3(y1:y2,x1:x2);
end

if (isfield(flow,'m4'))
flow.m4 = flow.m4(y1:y2,x1:x2);
end

if (isfield(flow,'m5'))
flow.m5 = flow.m5(y1:y2,x1:x2);
end

if (isfield(flow,'m6'))
flow.m6 = flow.m6(y1:y2,x1:x2);
end

if (isfield(flow,'m7'))
flow.m7 = flow.m7(y1:y2,x1:x2);
end

if (isfield(flow,'m8'))
flow.m8 = flow.m8(y1:y2,x1:x2);
end

if (isfield(flow,'r'))
flow.r = flow.r(y1:y2,x1:x2);
end

if (isfield(flow,'g'))
flow.g = flow.g(y1:y2,x1:x2);
end

if (isfield(flow,'f'))
flow.f = flow.f(y1:y2,x1:x2);
end

if (isfield(flow,'fx'))
flow.fx = flow.fx(y1:y2,x1:x2);
end

if (isfield(flow,'fy'))
flow.fy = flow.fy(y1:y2,x1:x2);
end

return;

%
% function flow_disp(flow,skip,color)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow_disp(flow,skip,color)

if (nargin == 1)
skip = 8;
end

if (nargin < 3)
color = 'b';
end

[h,w] = size(flow.m5);
m5 = flow.m5(1:skip:h,1:skip:w);
m6 = flow.m6(1:skip:h,1:skip:w);
[flowx flowy] = meshgrid([0:skip:w-1]-w/2+0.5,...
    -([0:skip:h-1]-h/2+0.5));

%This little bit helps avoid displaying zero vectors


ind = find(m5 | m6);
m5 = m5(ind);
m6 = m6(ind);
flowx = flowx(ind);
flowy = flowy(ind);

quiver(flowx,-flowy,m5,-m6,0,color);
axis image;
grid on;
axis ij;

return;

%
% function flow = flow_aff(M,w,h)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function flow = flow_aff(M,w,h)

m = M(1:2,1:2);
dx = M(1,3);
dy = M(2,3);

[mx my] = meshgridimg(w,h);


pnts = m * [mx(:)';my(:)'] ;
vx = pnts(1,:) + dx;
vy = pnts(2,:) + dy;
vx = reshape(vx,h,w);
vy = reshape(vy,h,w);
vx = vx - mx;
vy = vy - my;
flow.m5 = vx;
flow.m6 = vy;

return;

%
% function [flowAC,img_warped] = flow_add(flowBC,flowAB,img_org)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [flowAC,img_warped] = flow_add(flowBC,flowAB,img_org)

[h,w] = size(flowAB.m5);

m5 = flow_warp(flowBC.m5,flowAB,'cubic',1);
m6 = flow_warp(flowBC.m6,flowAB,'cubic',1);

n = find(isnan(m5) | isnan(m6));
m5(n) = 0;
m6(n) = 0;

flowAC.m5 = flowAB.m5 + m5;


flowAC.m6 = flowAB.m6 + m6;

if (nargin == 3)
img_warped = flow_warp(img_org,flowAC,'cubic',0);
end

return;

%
% function [fdx,fdy,fdz] = diffxyt(frame1,frame2,filtsizes)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Filter details:
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%
% Reference:
% Optimally Rotation-Equivariant Directional Derivative Kernels
% H. Farid and E.P. Simoncelli
% Computer Analysis of Images and Patterns (CAIP), Kiel, Germany, 1997
% See also: http://www.cs.dartmouth.edu/~farid/research/derivative.html
%

function [fdx,fdy,fdz] = diffxyt(frame1,frame2,filtsizes)

[fdx,fdy,fdz] = c_diffxyt(frame1,frame2);
return;
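% The early return above means that the pure-MATLAB code below is only
% reached if the c_diffxyt MEX call is removed; it appears to be kept
% as a reference implementation of the same derivative filters.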

if (nargin == 2)
filtsizes = [3 3];
end

persistent px dx py dy pz dz;

if (isempty(px))
[px,dx] = GetDerivPD(filtsizes(1));
[py,dy] = GetDerivPD(filtsizes(2));
[pz,dz] = GetDerivPD(2);
end

%Step 1: prefilter in t;
% dx = prefilter in y, differentiate in x
% dy = prefilter in x, differentiate in y
%frame_pz = frame1 .* pz(1) + frame2 .* pz(2);
frame_pz = (frame1 + frame2)/2;
%fdx = conv2(py,dx',frame_pz,'same');
%fdy = conv2(dy,px',frame_pz,'same');
fdx = conv2mirr(frame_pz,dx,py);
fdy = conv2mirr(frame_pz,px,-dy);

%Step 2: differentiate in t;
% dz = prefilter in x and y

%frame_dz = frame1 .* dz(1) + frame2 .* dz(2);


frame_dz = frame1 - frame2;

%fdz = conv2(py,px',frame_dz,'same');
fdz = conv2mirr(frame_dz,px,py);

return;

function [p,d] = GetDerivPD(noFrames)

% Coefficients for smoothing filter p and
% derivative filter d provided by Hany Farid
% farid@cs.dartmouth.edu

switch(noFrames)

case 2
p = [0.5 0.5];
d = [-1 1];
case 3
p = [0.223755 0.552490 0.223755];
d = [-0.453014 0.0 0.453014];
case 4
p = [0.092645 0.407355 0.407355 0.092645];
d = [-0.236506 -0.267576 0.267576 0.236506];
case 5
p = [0.036420 0.248972 0.429217 0.248972 0.036420];
d = [-0.108415 -0.280353 0.0 0.280353 0.108415];
    case 6
        p = [0.013846 0.135816 0.350337 0.350337 0.135816 0.013846];
        d = [-0.046266 -0.203121 -0.158152 0.158152 0.203121 0.046266];
    case 7
        p = [0.005165 0.068654 0.244794 0.362775 0.244794 0.068654 0.005165];
        d = [-0.018855 -0.123711 -0.195900 0.0 0.195900 0.123711 0.018855];
otherwise
p = 0;
d = 0;
end

return;

%
% function out = cutc(img,newW,newH)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function out = cutc(img,newW,newH)

if (nargin == 2)
newH = newW;
end

[h,w] = size(img);
y = round((h-newH)/2)+1;
x = round((w-newW)/2)+1;

out = img(y:y+newH-1,x:x+newW-1);

%USAGE: out = conv2mirr(in,fy,fx)
%
% This function performs a 2D convolution
% after mirroring the boundaries.
% Since matlab's conv2 function processes
% the columns first, we first mirror the rows.
%
% If the kernel is non-separable, pass it
% in parameter fx and ignore fy.
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function out = conv2mirr(in,fx,fy)

[h,w] = size(in);

%Size of the borders to flip


if (nargin == 3)
nl = round((length(fx)-1)/2);
nu = round((length(fy)-1)/2);
else
%non-separable mask passed
[lfy lfx] = size(fx);
nl = round((lfx-1)/2);
nu = round((lfy-1)/2);
end

nr = nl;
nd = nu;

%First flip up and down


u = flipud(in([2:1+nu],:));
d = flipud(in([h-nd:h-1],:));

in2 = [u' in' d']';

%Next flip left and right


l = fliplr(in2(:, [2:1+nl]));
r = fliplr(in2(:,[w-nr:w-1]));

%set the 'mirrored' image to out.


out = [l in2 r];

if (nargin == 3)
out = conv2(fy,fx,out,'valid');
else
out = conv2(out,fx,'valid');
end

out = out([1:h],[1:w]);

return;

%test
A = [1 2 3 4;5 6 7 8;9 10 11 12]
B = conv2mirr(A,[1 1 1],[1 1 1])
C = conv2mirr(A,[1 1 1;1 1 1;1 1 1])

%
% function [M,bnew,cnew,W] = affbc_mult(im1,im2,noIters,model,minBlkSize)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [M,bnew,cnew] = affbc_mult(im1,im2,noIters,model,minBlkSize);

if (nargin == 2)
model = 4;
noIters = 50;
minBlkSize = 32;
end

if (nargin < 5)   % default the block size only when it was not supplied
    minBlkSize = 32;
end

[h,w] = size(im1);
steps = floor(log2(w/minBlkSize)+1);

%steps = 3;
pyr1(1).im = im1;
pyr2(1).im = im2;

%First build pyramids


scale = 1;
for i = 2:steps
pyr1(i).im = reduce(pyr1(i-1).im);
pyr2(i).im = reduce(pyr2(i-1).im);
scale = scale * 2;
end

M = eye(3);
bnew = 0;
cnew = 1;
for i = steps:-1:1

im1S = pyr1(i).im;
im2S = pyr2(i).im;

fprintf('affbcmult: scale= %g h=%g w=%g\n',scale,size(im1S));

if (i ~= steps)
M1 = M;
M1(1:2,3) = M1(1:2,3)/scale;
im1S = aff_warp(im1S,M1);
end

%dispimg([im1S,im2S]);
[Mnew,b,c] = affbc_iter(im1S,im2S,noIters,model);
cnew = c * cnew;
bnew = b + bnew;

Mnew(1:2,3) = Mnew(1:2,3) * scale;

M = Mnew * M
%Mnew
%M
scale = scale/2;
end

return;

%
% function affbc_iter(i1,i2,iters,model)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [M,b,c,W] = affbc_iter(i1,i2,iters,model)

if (~exist('model','var'))
model = 4;
end
if (~exist('iters','var'))
iters = 20;
end

txtFlag = 0;

affbc_find_init;
[h,w] = size(i1);
W = ones(h,w);
[index] = getindex(model);

c = 1;
b = 0;
M = [1 0 0; 0 1 0; 0 0 1];
i1N = i1;

for i=1:iters

[dM db dc,r] = affbc_find_api(i1N,i2,model);

M = M*dM;
b = b+db;
c = c*dc;

if (c < 0.1)
c = 1;
end

i1N = aff_warp(i1,M,0);

if (txtFlag)
fprintf('Iteration: %d\n',i);
M
fprintf('c: %g, b: %g, mse: %g\n', c,b,mse(i1N,i2));
end

i1N = i1N .* c + b;

end

return;

%
% function affbc_init
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu), Dartmouth College.
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function affbc_init

clear diffxyt affbc_find

return;

% function [M,b,c,r,pt,kt] = affbc_find_api(f,g,model);
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu), Dartmouth College.
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [M,b,c,r,pt,kt] = affbc_find_api(f,g,model);

global affbc_params;

[h,w] = size(f);
[fx,fy,ft] = diffxyt(f,g,[3 3]);
[ind,afFlag,bcFlag] = getindex(model);

affbc_params.h = h;
affbc_params.w = w;
affbc_params.model = model;
affbc_params.index = ind;
affbc_params.afFlag = afFlag;
affbc_params.bcFlag = bcFlag;
affbc_params.nf = -f(:)';
affbc_params.fx = fx(:)';
affbc_params.fy = fy(:)';
affbc_params.ft = ft(:)';
affbc_params.frameSize = 1;

if (~exist('lamdas','var'))
lamdas = ones(length(ind),1);
deviants = zeros(length(ind),1);
end

if (model == 4)
lamdas = [1 1 1 1 1 1 1 1]'*10e11;
deviants = [0 0 0 0 0 0 1 0]';
end

affbc_params.S = diag(lamdas);
affbc_params.D = (lamdas .* deviants);

%affbc_params.init = 1;
affbc_find;
mout = affbc_params.mout;

M = [mout(1) mout(2) mout(5); mout(3) mout(4) mout(6); 0 0 1];


c = mout(7);
b = mout(8);
r = mout(9);

if (nargout > 4)
pt = affbc_params.pt;
kt = affbc_params.kt;
end

return;

%
% function affbc_find
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function affbc_find

global affbc_params;

persistent h w mx my H pt m minus_ones afFlag bcFlag iFlag rankE mout_def

%if (isempty(mx) | affbc_params.init == 1)


if (isempty(mx))
[w,h,mx,my,H,rankE,afFlag,bcFlag,iFlag,minus_ones,mout_def]=...
affbc_init(affbc_params);
affbc_params.init = 0;
affbc_params.mout = [mout_def 0];
end

% premultiply derivatives with box filter


if (affbc_params.frameSize > 0)
affbc_params.fx = affbc_params.fx .* H;
affbc_params.fy = affbc_params.fy .* H;
affbc_params.ft = affbc_params.ft .* H;
affbc_params.nf = affbc_params.nf .* H;
minus_ones = minus_ones .* H;
end

%set entire p transpose


p1 = affbc_params.fx .* mx;
p2 = affbc_params.fx .* my;
p3 = affbc_params.fy .* mx;
p4 = affbc_params.fy .* my;

affbc_params.pt = [p1;p2;p3;p4;...
affbc_params.fx;affbc_params.fy;affbc_params.nf;
minus_ones];

affbc_params.kt = affbc_params.ft;

if (bcFlag)
affbc_params.kt = affbc_params.kt + affbc_params.nf;
end

if (afFlag)
affbc_params.kt = affbc_params.kt + p1+ p4;
end

%choose only parameters needed


if (iFlag)
affbc_params.pt = affbc_params.pt(affbc_params.index,:);
end

%Calculate P = p' * p
P = (affbc_params.pt) * affbc_params.pt';
K = (affbc_params.pt) * affbc_params.kt';

if (rcond(P) > 10e-8)


%if (rank(P) == rankE)
m = inv(P) * K;
%m = inv(P+affbc_params.S) * (K+affbc_params.D);
r = 1;
else
m = mout_def;
r = 0;
end

affbc_params.mout = [1 0 0 1 0 0 1 0 r];

if (iFlag)
affbc_params.mout(affbc_params.index) = m;
else
affbc_params.mout(1:8) = m;
end

return;

function [w,h,mx,my,H,rankE,afFlag,bcFlag,iFlag,o,mout_def] = affbc_init(affbc_params)
%Initialize things
%fprintf('affbc_find Initializing...\n');
w = affbc_params.w;
h = affbc_params.h;
[mx,my] = meshgridimg(w,h);
H = framegen(w,h,affbc_params.frameSize); %box filter
noFields = length(affbc_params.index);
rankE = noFields;
afFlag = affbc_params.afFlag;
bcFlag = affbc_params.bcFlag;

iFlag = ~(afFlag & bcFlag);

%Expected rank
rankE = noFields;
o = -ones(1,h*w);

%linearize into rows


mx = mx(:)';
my = my(:)';
H = H(:)';

%default values for all parameters


mout_def = [1 0 0 1 0 0 1 0];
%choose only parameters we need
mout_def = mout_def(affbc_params.index);
%fprintf('affbc_find Done\n');

return;

%
% function [out] = aff_warp(in,M,contFlag)
%
% Date: April 19, 2003
% By : Senthil Periaswamy (sp@cs.dartmouth.edu)
%
% Copyright (c), 2000, Trustees of Dartmouth College. All rights reserved.
%

function [out] = aff_warp(in,M,contFlag)

if (nargin == 2)
contFlag = 0;
end

m = M(1:2,1:2);
dx = M(1,3);
dy = M(2,3);
[h,w] = size(in);
a = h*w;
[mx,my] = meshgridimg(w,h);

pnts = m * [mx(:)';my(:)']; % order of pnts does not matter

mx2 = pnts(1,:) + dx;


my2 = pnts(2,:) + dy;

mx2 = reshape(mx2,h,w);
my2 = reshape(my2,h,w);

out = interp2(mx,my,in,mx2,my2,'cubic');
out(isnan(out)) = 0;

if (contFlag)
out = out * det(M);
end

return;

%
% function fout = waitbar(x,whichbar, varargin)
%
% Modified By: Senthil Periaswamy (sp@cs.dartmouth.edu)
%
function fout = waitbar(x,whichbar, varargin)
%WAITBAR Display wait bar.
% H = WAITBAR(X,'title', property, value, property, value, ...)
% creates and displays a waitbar of fractional length X. The
% handle to the waitbar figure is returned in H.
% X should be between 0 and 1. Optional arguments property and
% value allow to set corresponding waitbar figure properties.
% Property can also be an action keyword 'CreateCancelBtn', in
% which case a cancel button will be added to the figure, and
% the passed value string will be executed upon clicking on the
% cancel button or the close figure button.
%
% WAITBAR(X) will set the length of the bar in the most recently
% created waitbar window to the fractional length X.
%
% WAITBAR(X,H) will set the length of the bar in waitbar H
% to the fractional length X.
%
% WAITBAR(X,H,'updated title') will update the title text in
% the waitbar figure, in addition to setting the fractional
% length to X.
%
% WAITBAR is typically used inside a FOR loop that performs a
% lengthy computation. A sample usage is shown below:
%
% h = waitbar(0,'Please wait...');
% for i=1:100,
% % computation here %
% waitbar(i/100,h)
% end
% close(h)

% Clay M. Thompson 11-9-92


% Vlad Kolesnikov 06-7-99
% Copyright 1984-2000 The MathWorks, Inc.
% $Revision: 1.21 $ $Date: 2000/08/04 15:36:26 $

if nargin>=2
if ischar(whichbar)
type=2; %we are initializing
name=whichbar;
elseif isnumeric(whichbar)
type=1; %we are updating, given a handle
f=whichbar;
else
error(['Input arguments of type ' class(whichbar) ' not valid.'])
end
elseif nargin==1
f = findobj(allchild(0),'flat','Tag','TMWWaitbar');

if isempty(f)
type=2;
name='Waitbar';
else
type=1;
f=f(1);

end
else
error('Input arguments not valid.');
end

x = max(0,min(100*x,100));

switch type
case 1, % waitbar(x) update
p = findobj(f,'Type','patch');
l = findobj(f,'Type','line');
if isempty(f) | isempty(p) | isempty(l),
error('Couldn''t find waitbar handles.');
end
xpatch = get(p,'XData');
xpatch = [0 x x 0];
set(p,'XData',xpatch)
xline = get(l,'XData');
set(l,'XData',xline);

if nargin>2,
% Update waitbar title:
hAxes = findobj(f,'type','axes');
hTitle = get(hAxes,'title');
set(hTitle,'string',varargin{1});
end

case 2, % waitbar(x,name) initialize


vertMargin = 0;
if nargin > 2,
% we have optional arguments: property-value pairs
if rem (nargin, 2 ) ~= 0
error('Optional initialization arguments must be passed in pairs');
end
end

oldRootUnits = get(0,'Units');

set(0, 'Units', 'points');


screenSize = get(0,'ScreenSize');

axFontSize=get(0,'FactoryAxesFontSize');

pointsPerPixel = 72/get(0,'ScreenPixelsPerInch');

width = 360 * pointsPerPixel;


height = 75 * pointsPerPixel;
%pos = [screenSize(3)/2-width/2 screenSize(4)/2-height/2 width height];
pos = [0 0 width height];

f = figure(...
'Units', 'points', ...
'BusyAction', 'queue', ...
'Position', pos, ...
'Resize','off', ...
'CreateFcn','', ...
'NumberTitle','off', ...
'IntegerHandle','off', ...
'MenuBar', 'none', ...
'Tag','TMWWaitbar',...
'Interruptible', 'off', ...
'Visible','off');

%%%%%%%%%%%%%%%%%%%%%
% set figure properties as passed to the fcn
% pay special attention to the 'cancel' request
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if nargin > 2,
propList = varargin(1:2:end);
valueList = varargin(2:2:end);
cancelBtnCreated = 0;
for ii = 1:length( propList )
try
if strcmp(lower(propList{ii}), 'createcancelbtn') & ~cancelBtnCreated
cancelBtnHeight = 23 * pointsPerPixel;
cancelBtnWidth = 60 * pointsPerPixel;
newPos = pos;
vertMargin = vertMargin + cancelBtnHeight;
newPos(4) = newPos(4)+vertMargin;
callbackFcn = [valueList{ii}];
set( f, 'Position', newPos, 'CloseRequestFcn', callbackFcn );
cancelButt = uicontrol('Parent',f, ...
    'Units','points', ...
    'Callback',callbackFcn, ...
    'ButtonDownFcn', callbackFcn, ...
    'Enable','on', ...
    'Interruptible','off', ...
    'Position', [pos(3)-cancelBtnWidth*1.4, 7, ...
        cancelBtnWidth, cancelBtnHeight], ...
    'String','Cancel', ...
    'Tag','TMWWaitbarCancelButton');
cancelBtnCreated = 1;
else
% simply set the prop/value pair of the figure
set( f, propList{ii}, valueList{ii});
end
catch
disp(['Warning: could not set property ''' propList{ii} ...
    ''' with value ''' num2str(valueList{ii}) '''']);
end
end
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

colormap([]);

axNorm=[.05 .3 .9 .2];
axPos=axNorm.*[pos(3:4),pos(3:4)] + [0 vertMargin 0 0];

h = axes('XLim',[0 100],...
'YLim',[0 1],...
'Box','on', ...
'Units','Points',...
'FontSize', axFontSize,...
'Position',axPos,...
'XTickMode','manual',...
'YTickMode','manual',...
'XTick',[],...
'YTick',[],...
'XTickLabelMode','manual',...
'XTickLabel',[],...
'YTickLabelMode','manual',...
'YTickLabel',[]);

tHandle=title(name);
tHandle=get(h,'title');
oldTitleUnits=get(tHandle,'Units');
set(tHandle,...
'Units', 'points',...
'String', name);

tExtent=get(tHandle,'Extent');
set(tHandle,'Units',oldTitleUnits);

titleHeight=tExtent(4)+axPos(2)+axPos(4)+5;
if titleHeight>pos(4)
pos(4)=titleHeight;
pos(2)=screenSize(4)/2-pos(4)/2;
figPosDirty=logical(1);
else
figPosDirty=logical(0);
end

if tExtent(3)>pos(3)*1.10;
pos(3)=min(tExtent(3)*1.10,screenSize(3));
pos(1)=screenSize(3)/2-pos(3)/2;

axPos([1,3])=axNorm([1,3])*pos(3);
set(h,'Position',axPos);

figPosDirty=logical(1);
end

if figPosDirty
set(f,'Position',pos);
end

xpatch = [0 x x 0];
ypatch = [0 0 1 1];
xline = [100 0 0 100 100];
yline = [0 0 1 1 0];

p = patch(xpatch,ypatch,'r','EdgeColor','r','EraseMode','none');
l = line(xline,yline,'EraseMode','none');
set(l,'Color',get(gca,'XColor'));

set(f,'HandleVisibility','callback','visible','on');

set(0, 'Units', oldRootUnits);


end % case

drawnow;

if nargout==1,
fout = f;
end

APPENDIX B

ICA MATLAB CODE


% ICA algorithm
% Uses JADE (Joint Approximation Diagonalization of Eigenmatrices)
% algorithm for performing ICA
% Frame window of 3
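% Each observed frame is treated as a linear mixture of independent
% source images. For every frame, its immediate neighbours are stacked
% as rows of a data matrix, JADE estimates a separating matrix, and the
% most energetic unmixed component (the first row, since jadeR orders
% components by decreasing energy) is kept as the restored frame.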

clear,close all,clc;
tic;

T1 = 256;
T2 = 256;
N=30;
num=1;

%Read in sequence
mov=aviread('building_site',1:N);

for k=1:N
mov(k).cdata=mov(k).cdata(1:T1,1:T2);
end

% Reshape matrix to fit image in one row

for k=num+1:N-1
k
for f=1:num
i(:,:,f)=mov(k-f).cdata;
i(:,:,f)=i(1:T1,1:T2,f);
im=reshape(i(:,:,f),1,T1*T2);
imf(f,:)=im;

i(:,:,f+num+1)=mov(k+f).cdata;
i(:,:,f+num+1)=i(1:T1,1:T2,f+num+1);
im=reshape(i(:,:,f+num+1),1,T1*T2);
imf(f+num+1,:)=im;
end

i(:,:,num+1)=mov(k).cdata;
i(:,:,num+1)=i(1:T1,1:T2,num+1);
im=reshape(i(:,:,num+1),1,T1*T2);
imf(num+1,:)=im;

X=double(imf);

% Bm is the unmixing matrix and Xs is the mixed matrix


% The MATLAB estimate
Xs = X;
Bm = jadeR(Xs); % jadeR as listed below
% um is the unmixed set of images

um=Bm*Xs;

newframe=abs(um(1,:));
newframe=uint8((newframe./max(max(newframe)))*255);
final(k-num,:)=newframe;
end

% Reshape images back to two dimensions


for k=num+1:N-2
umtemp(:,:)=reshape(final(k-num,:),T1,T2);
umf(:,:,k-num)=umtemp;
end

toc;

% Convert to movie frames and play sequence

for k=num+1:N-2
map=gray(256);
mov1(k-num) = im2frame(umf(:,:,k-num),map);
%mov1(k).colormap=gray(236);
end

map=gray(256);
for k=num+1:N-2
temp1=(mat2gray(mov1(k-num).cdata));
mov1(k-num)=im2frame(im2uint8(temp1),map);
temp2=(mat2gray(mov(k-num).cdata));
mov(k-num)=im2frame(im2uint8(temp2),map);

%mov(k)=mat2gray(mov(k).cdata);
end

mplay(mov1);
mplay(mov);

function B = jadeR(X,m)
% Blind separation of real signals with JADE. Version 1.5 Dec. 1997.
%
% Usage:
% * If X is an nxT data matrix (n sensors, T samples) then
% B=jadeR(X) is a nxn separating matrix such that S=B*X is an nxT
% matrix of estimated source signals.
% * If B=jadeR(X,m), then B has size mxn so that only m sources are
% extracted. This is done by restricting the operation of jadeR
% to the m first principal components.
% * Also, the rows of B are ordered such that the columns of pinv(B)
% are in order of decreasing norm; this has the effect that the
% `most energetically significant' components appear first in the
% rows of S=B*X.
%
% Quick notes (more at the end of this file)
%
% o this code is for REAL-valued signals. An implementation of JADE
% for both real and complex signals is also available from
% http://sig.enst.fr/~cardoso/stuff.html
%
% o This algorithm differs from the first released implementations of
% JADE in that it has been optimized to deal more efficiently
% 1) with real signals (as opposed to complex)
% 2) with the case when the ICA model does not necessarily hold.
%
% o There is a practical limit to the number of independent
% components that can be extracted with this implementation. Note
% that the first step of JADE amounts to a PCA with dimensionality
% reduction from n to m (which defaults to n). In practice m
% cannot be `very large' (more than 40, 50, 60... depending on
% available memory)
%
% o See more notes, references and revision history at the end of
% this file and more stuff on the WEB
% http://sig.enst.fr/~cardoso/stuff.html
%
% o This code is supposed to do a good job! Please report any
% problem to cardoso@sig.enst.fr

% Copyright : Jean-Francois Cardoso. cardoso@sig.enst.fr

verbose = 0 ; % Set to 0 for quiet operation

% Finding the number of sources


[n,T] = size(X);
if nargin==1, m=n ; end; % Number of sources defaults to # of sensors
if m>n , fprintf('jade -> Do not ask more sources than sensors here!!!\n'), return, end
if verbose, fprintf('jade -> Looking for %d sources\n',m); end ;

% Self-commenting code
%=====================
if verbose, fprintf('jade -> Removing the mean value\n'); end
X = X - mean(X')' * ones(1,T);

%%% whitening & projection onto signal subspace
% ===========================================
if verbose, fprintf('jade -> Whitening the data\n'); end
[U,D] = eig((X*X')/T) ;
[puiss,k] = sort(diag(D)) ;
rangeW = n-m+1:n ; % indices to the m most significant directions
scales = sqrt(puiss(rangeW)) ; % scales
W = diag(1./scales) * U(1:n,k(rangeW))' ; % whitener
iW = U(1:n,k(rangeW)) * diag(scales) ; % its pseudo-inverse
X = W*X;

%%% Estimation of the cumulant matrices.


% ====================================
if verbose, fprintf('jade -> Estimating cumulant matrices\n'); end

dimsymm = (m*(m+1))/2; % Dim. of the space of real symm matrices


nbcm = dimsymm ; % number of cumulant matrices
CM = zeros(m,m*nbcm); % Storage for cumulant matrices
R = eye(m); %%
Qij = zeros(m); % Temp for a cum. matrix
Xim = zeros(1,m); % Temp
Xjm = zeros(1,m); % Temp
scale = ones(m,1)/T ; % for convenience

%% I am using a symmetry trick to save storage. I should write a


%% short note one of these days explaining what is going on here.
%%
Range = 1:m ; % will index the columns of CM where to store the cum. mats.
for im = 1:m
Xim = X(im,:) ;
Qij = ((scale* (Xim.*Xim)) .* X ) * X' - R - 2 * R(:,im)*R(:,im)' ;
CM(:,Range) = Qij ;
Range = Range + m ;
for jm = 1:im-1
Xjm = X(jm,:) ;
Qij = ((scale * (Xim.*Xjm) ) .*X ) * X' - R(:,im)*R(:,jm)' - ...
    R(:,jm)*R(:,im)' ;
CM(:,Range) = sqrt(2)*Qij ;
Range = Range + m ;
end ;
end;

%%% joint diagonalization of the cumulant matrices


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Init
if 1, %% Init by diagonalizing a *single* cumulant matrix. It seems to save
%% some computation time `sometimes'. Not clear if initialization is
%% a good idea since Jacobi rotations are very efficient.

if verbose, fprintf('jade -> Initialization of the diagonalization\n'); end

[V,D] = eig(CM(:,1:m)); % For instance, this one
for u=1:m:m*nbcm, % updating accordingly the cumulant set given the init
CM(:,u:u+m-1) = CM(:,u:u+m-1)*V ;
end;
CM = V'*CM;

else, %% The dont-try-to-be-smart init


V = eye(m) ; % the initial rotation
end;

seuil = 1/sqrt(T)/100; % A statistically significant threshold


encore = 1;
sweep = 0;
updates = 0;
g = zeros(2,nbcm);
gg = zeros(2,2);
G = zeros(2,2);
c = 0 ;
s = 0 ;
ton = 0 ;
toff = 0 ;
theta = 0 ;

%% Joint diagonalization proper


if verbose, fprintf('jade -> Contrast optimization by joint diagonalization\n'); end

while encore, encore=0;

if verbose, fprintf('jade -> Sweep #%d\n',sweep); end


sweep=sweep+1;

for p=1:m-1,
for q=p+1:m,

Ip = p:m:m*nbcm ;
Iq = q:m:m*nbcm ;

%%% computation of Givens angle


g = [ CM(p,Ip)-CM(q,Iq) ; CM(p,Iq)+CM(q,Ip) ];
gg = g*g';
ton = gg(1,1)-gg(2,2);
toff = gg(1,2)+gg(2,1);
theta = 0.5*atan2( toff , ton+sqrt(ton*ton+toff*toff) );

%%% Givens update


if abs(theta) > seuil, encore = 1 ;
updates = updates + 1;
c = cos(theta);
s = sin(theta);
G = [ c -s ; s c ] ;

pair = [p;q] ;
V(:,pair) = V(:,pair)*G ;
CM(pair,:) = G' * CM(pair,:) ;
CM(:,[Ip Iq]) = [ c*CM(:,Ip)+s*CM(:,Iq) -s*CM(:,Ip)+c*CM(:,Iq) ] ;

%% fprintf('jade -> %3d %3d %12.8f\n',p,q,s);

end%%of the if
end%%of the loop on q
end%%of the loop on p
end%%of the while loop
if verbose, fprintf('jade -> Total of %d Givens rotations\n',updates);
end

%%% A separating matrix


% ===================
B = V'*W ;

%%% We permut its rows to get the most energetic components first.
%%% Here the **signals** are normalized to unit variance. Therefore,
%%% the sort is according to the norm of the columns of A = pinv(B)

if verbose, fprintf('jade -> Sorting the components\n',updates); end


A = iW*V ;
[vars,keys] = sort(sum(A.*A)) ;
B = B(keys,:);
B = B(m:-1:1,:) ; % Is this smart ?

% Signs are fixed by forcing the first column of B to have


% non-negative entries.
if verbose, fprintf('jade -> Fixing the signs\n',updates); end
b = B(:,1) ;
signs = sign(sign(b)+0.1) ; % just a trick to deal with sign=0
B = diag(signs)*B ;

return ;

% To do.
% - Implement a cheaper/simpler whitening (is it worth it?)
%
% Revision history:
%
%- V1.5, Dec. 24 1997
% - The sign of each row of B is determined by letting the first
% element be positive.
%
%- V1.4, Dec. 23 1997
% - Minor clean up.
% - Added a verbose switch
% - Added the sorting of the rows of B in order to fix in some
% reasonable way the permutation indetermination. See note 2)
% below.
%
%- V1.3, Nov. 2 1997
% - Some clean up. Released in the public domain.
%
%- V1.2, Oct. 5 1997
% - Changed random picking of the cumulant matrix used for
% initialization to a deterministic choice. This is not because
% of a better rationale but to make the output (almost surely)
% deterministic.
% - Rewrote the joint diag. to take more advantage of Matlab's
% tricks.
% - Created more dummy variables to combat Matlab's loose memory

FSATIE TSHWANE UNIVERSITY OF TECHNOLOGY Page 149


% management.
%
%- V1.1, Oct. 29 1997.
% Made the estimation of the cumulant matrices more regular. This
% also corrects a buglet...
%
%- V1.0, Sept. 9 1997. Created.
%
% Main reference:
% @article{CS-iee-94,
% title = "Blind beamforming for non {G}aussian signals",
% author = "Jean-Fran\c{c}ois Cardoso and Antoine Souloumiac",
% HTML = "ftp://sig.enst.fr/pub/jfc/Papers/iee.ps.gz",
% journal = "IEE Proceedings-F",
% month = dec, number = 6, pages = {362-370}, volume = 140, year = 1993}
%
% Notes:
% ======
%
% Note 1)
%
% The original Jade algorithm/code deals with complex signals in
% white Gaussian noise and exploits an underlying assumption that the
% model of independent components actually holds. This is a
% reasonable assumption when dealing with some narrowband signals.
% In this context, one may i) seriously consider dealing precisely
% with the noise in the whitening process and ii) expect to use the
% small number of significant eigenmatrices to efficiently summarize
% all the 4th-order information. All this is done in the JADE
% algorithm.
%
% In this implementation, we deal with real-valued signals and we do
% NOT expect the ICA model to hold exactly. Therefore, it is
% pointless to try to deal precisely with the additive noise and it
% is very unlikely that the cumulant tensor can be accurately
% summarized by its first n eigen-matrices. Therefore, we consider
% the joint diagonalization of the whole set of eigen-matrices.
% However, in such a case, it is not necessary to compute the
% eigenmatrices at all because one may equivalently use `parallel
% slices' of the cumulant tensor. This part (computing the
% eigen-matrices) of the computation can be saved: it suffices to
% jointly diagonalize a set of cumulant matrices. Also, since we are
% dealing with real signals, it becomes easier to exploit the
% symmetries of the cumulants to further reduce the number of
% matrices to be diagonalized. These considerations, together with
% other cheap tricks lead to this version of JADE which is optimized
% (again) to deal with real mixtures and to work `outside the model'.
% As the original JADE algorithm, it works by minimizing a `good set'
% of cumulants.
%
% Note 2)
%
% The rows of the separating matrix B are resorted in such a way that
% the columns of the corresponding mixing matrix A=pinv(B) are in
% decreasing order of (Euclidean) norm. This is a simple, `almost
% canonical' way of fixing the indetermination of permutation. It
% has the effect that the first rows of the recovered signals (ie the
% first rows of B*X) correspond to the most energetic *components*.
% Recall however that the source signals in S=B*X have unit variance.
% Therefore, when we say that the observations are unmixed in order
% of decreasing energy, the energetic signature is found directly as

% the norm of the columns of A=pinv(B).
%
% Note 3)
%
% In experiments where JADE is run as B=jadeR(X,m) with m varying in
% range of values, it is nice to be able to test the stability of the
% decomposition. In order to help in such a test, the rows of B can
% be sorted as described above. We have also decided to fix the sign
% of each row in some arbitrary but fixed way. The convention is
% that the first element of each row of B is positive.
%
%
% Note 4)
%
% Contrary to many other ICA algorithms, JADE (or at least this version)
% does not operate on the data themselves but on a statistic (the
% full set of 4th order cumulant). This is represented by the matrix
% CM below, whose size grows as m^2 x m^2 where m is the number of
% sources to be extracted (m could be much smaller than n). As a
% consequence, (this version of) JADE will probably choke on a
% `large' number of sources. Here `large' depends mainly on the
% available memory and could be something like 40 or so. One of
% these days, I will prepare a version of JADE taking the `data'
% option rather than the `statistic' option.
%
%

% JadeR.m ends here.
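% A minimal synthetic check of jadeR (illustrative variable names only):
% two independent sources are mixed by a random matrix and recovered up
% to permutation, sign and scale.
%
% t = 0:0.001:1;
% S = [sin(2*pi*5*t); sign(sin(2*pi*13*t))]; % two independent sources
% A = randn(2,2);                            % unknown mixing matrix
% X = A*S;                                   % observed mixtures
% B = jadeR(X);                              % estimated separating matrix
% Shat = B*X;                                % recovered sources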

APPENDIX C

CGI MATLAB CODE

% Algorithm using CGI registration and trajectory estimation


% Using a frame window of size 10
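% The turbulence-induced displacement of a pixel is assumed to be
% quasi-periodic with near-zero mean, so averaging the motion fields
% from the current frame to its 2w neighbours estimates each pixel's
% displacement from its true position; dewarping by this mean motion
% gives the geometrically improved frame.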

clear,close all,clc;

N=30;
w=5; % window for cgi 5=window of +5/-5, 6=window of +6/-6 , etc

mov=aviread('building_site',580:629);

% Calculate motion between current frame and other frames


tic;
for k=w+1:N-w
k
current=mov(k).cdata;

for iter=1:w
[u v]=cgi_frakes(current,mov(k-iter).cdata);
uf(:,:,iter)=u;
vf(:,:,iter)=v;

[u v]=cgi_frakes(current,mov(k+iter).cdata);
uf(:,:,iter+w)=u;
vf(:,:,iter+w)=v;
end

% Compute the mean value of the trajectory

cu(:,:,1)=(sum(uf,3))./((2*w)+1);
cv(:,:,1)=(sum(vf,3))./((2*w)+1);

% Dewarp the image to the geometrically improved estimate.


x=(double(mov(k).cdata))/255;
result(:,:,k-w)=mdewarpfinal(x,cu,cv);
end
toc;

% Create a movie sequence and play

for k=1:N-(2*w)
map=gray(256);
mov1(k) = im2frame(result(:,:,k)*255,map);

end

map=gray(256);
for k=1:N-(2*w)

temp1=(mat2gray(mov1(k).cdata));
mov1(k)=im2frame(im2uint8(temp1),map);
temp2=(mat2gray(mov(k).cdata));
mov(k)=im2frame(im2uint8(temp2),map);
%mov(k)=mat2gray(mov(k).cdata);
end

mplay(mov1);
mplay(mov);

%save('buildingsiteseqout_580to629_cgi','mov1');

function [u v]=cgi_frakes(i1,i2)

im1=double(i1);
im2=double(i2);

windowSize=5; % Select odd windows greater than 3


cp=5;
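% cp is the control-point spacing: flow is estimated on a coarse grid
% and interpolated over each cp-by-cp block. Within a window the motion
% is modelled bilinearly, u(x,y) = a1 + a2*x + a3*y + a4*x*y (and
% similarly for v); the eight coefficients are found by least squares
% from the optical-flow constraint.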

[fx, fy, ft] = ComputeDerivatives(im1, im2);


%[fx fy ft]=deriv_farid(im1,im2);
u = zeros(size(im1));
v = zeros(size(im2));

halfWindow = floor(windowSize/2);

for i = halfWindow+1:cp:size(fx,1)-halfWindow
for j = halfWindow+1:cp:size(fx,2)-halfWindow

% calculates the partial derivatives within the image windowsize


curFx = fx(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFy = fy(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFt = ft(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);

curFx = curFx';
curFy = curFy';
curFt = curFt';

% Takes the matrices and arranges them into vectors


curFx = curFx(:);
curFy = curFy(:);
curFt = -curFt(:);

curFxb=[curFx curFx curFx curFx curFy curFy curFy curFy];

inca=halfWindow+1;
for a=1:windowSize:windowSize*windowSize
incb=halfWindow+1;
inca=inca-1;
for b=1:windowSize
incb=incb-1;
curFx2(a+b-1,:)=[1 i-inca j-incb (i-inca)*(j-incb) ...
    1 i-inca j-incb (i-inca)*(j-incb)].*curFxb(a+b-1,:);
end
end

A = curFx2;

% Least squares to estimate parameters for interpolation

U = pinv(A'*A)*A'*curFt;

% Perform interpolation within the window

cntr1=halfWindow+1;
for a=1:cp
cntr2=halfWindow+1;
cntr1=cntr1-1;

for b=1:cp
cntr2=cntr2-1;
u(i-cntr1,j-cntr2)=[1 i-cntr1 j-cntr2 (i-cntr1)*(j-cntr2)]*U(1:4);
v(i-cntr1,j-cntr2)=[1 i-cntr1 j-cntr2 (i-cntr1)*(j-cntr2)]*U(5:8);
end
end

end;
end;

u(isnan(u))=0;
v(isnan(v))=0;

APPENDIX D

WIENER FILTER AND KURTOSIS MATLAB CODE

% Implements the kurtosis minimization algorithm over a search space of
% 0.00025 to 0.0025
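% For each candidate lambda the frame is Wiener-filtered with the
% corresponding turbulence OTF and the kurtosis of the restored image
% is computed; the lambda that minimizes the (normalized) kurtosis is
% taken as the blur-severity estimate and used for the final
% restoration.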

clear,close all,clc;
tic;

load buildingsiteout_1to20_cgi;

mov=mov1;
clear mov1;
N=1;
nsr=0.0001; % estimated

for iter=1:N
iter
i2=mov(iter).cdata;
i2=double(i2);

% taper image for preprocessing


htaper=fspecial('gaussian',5,5);
i2=edgetaper(i2,htaper);

lambda=0.00025;
for v=1:125

iwnr=func_wnr2(i2,lambda,nsr);
lambda=lambda+0.00002;

% Calculate Kurtosis
s=size(iwnr,1)*size(iwnr,2);
iwnrtemp=reshape(iwnr,[1 s]);
k(v,1)=kurtosis(double(iwnrtemp));
k(v,2)=lambda-0.00002;
%k(v,3)=rish_psnr((iwnr),(i2));

end

%normalize kurtosis

x=k-min(k(:,1));
x=x./max(x(:,1));
x(:,2)=k(:,2);
[q p]=min(x(:,1));
toc;

frame_c(:,:,iter)=func_wnr2(i2,x(p,2),nsr);

end

for iter=1:N
map=gray(256);
mov1(iter) = im2frame(uint8(frame_c(:,:,iter)),map);
end

map=gray(256);
for iter=1:N
temp1=(mat2gray(mov1(iter).cdata));
mov1(iter)=im2frame(im2uint8(temp1),map);
%mov(k)=mat2gray(mov(k).cdata);
end

mplay(mov1);
%save('armscorseqout_1to96_kurtosiscgi','mov1');

% Wiener filter using the turbulence degradation function

function i2final=func_wnr2(i2,k,nsr)

i2=double(i2);
[r c]=size(i2);

% Pre-processing of image
% Centres the image frequency transform
for x=1:r
for y=1:c
i2(x,y)=i2(x,y)*((-1)^(x+y));
end
end

% Calculate DFT
i2freq=fft2(i2);

% Calculate the OTF for the blur


% "u-r/2" and "v-c/2" are used to centre the transform
H=zeros(r,c);
for u=1:r
for v=1:c
H(u,v)=exp(-k*((u-r/2)^2+(v-c/2)^2)^(5/6));
end
end

% Calculate filtered image
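% The loop below applies the parametric Wiener filter
%   Fhat(u,v) = [ |H(u,v)|^2 / ( H(u,v)*(|H(u,v)|^2 + NSR) ) ] * G(u,v)
% where G is the degraded spectrum and NSR the estimated noise-to-signal
% ratio; the simpler pseudo-inverse form is kept commented out inside
% the loop.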


filt=zeros(r,c);
for u=1:r
for v=1:c
%filt(u,v)=i2freq(u,v)./(H(u,v)+nsr);

filt(u,v)=((i2freq(u,v))*(abs(H(u,v))^2))/((H(u,v)*((abs(H(u,v))^2)+nsr)));
end
end

% Calculate inverse DFT


i2deblur=ifft2(filt);
i2deblur=real(i2deblur);

% Post-processing of image
% Multiply by (-1) to "de-centre" the transform
i2final=zeros(r,c);
for x=1:r
for y=1:c
i2final(x,y)=i2deblur(x,y)*((-1)^(x+y));
end
end

APPENDIX E

GENERAL ALGORITHM MATLAB CODE

% Time-averaged algorithm using the Lucas-Kanade algorithm
% for registration
% Averaging window is 10
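% Because the turbulence-induced motion has near-zero temporal mean,
% averaging the first avelength frames suppresses the random
% displacements and yields an approximately undistorted reference;
% every frame is then registered to this reference with hierarchical
% Lucas-Kanade and dewarped with the estimated motion field.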

clear,close all,clc;

N=25; % frame length (matches the 25 frames read below)


avelength=10; % averaging length
ave_total=0.00;

mov=aviread('shack',1:25);

% for k=1:N
% mov(k).cdata=rgb2gray(mov(k).cdata);
% end

for k=1:N
in(:,:,k)=im2double(mov(k).cdata);
end

tic;

% Generate reference frame through averaging

for k=1:avelength % averaging length


ave_total=in(:,:,k)+ave_total;
end

average=ave_total/avelength;

% Calculate motion vectors and warp frames to reference frame

for k=1:N
im2=in(:,:,k);
[u,v] = HierarchicalLK(im2, average, 1, 5,1,0);
res(:,:,k)=mdewarpfinal(in(:,:,k),u,v);
end
toc;

% Create movie frames and play sequence

for k=1:N
map=gray(256);
mov1(k) = im2frame(res(:,:,k)*255,map);
end

map=gray(256);
for k=1:N
temp1=(mat2gray(mov1(k).cdata));
mov1(k)=im2frame(im2uint8(temp1),map);

temp2=(mat2gray(mov(k).cdata));
mov(k)=im2frame(im2uint8(temp2),map);

%mov(k)=mat2gray(mov(k).cdata);
end

mplay(mov1);
outvid='shackseq_1to25_ta';
%movie2avi(mov1,outvid,'fps',20,'compression','None');

save(outvid,'mov1');

function [u,v,cert] = HierarchicalLK(im1, im2, numLevels, windowSize, ...
    iterations, display)
%HIERARCHICALLK Hierarchical Lucas Kanade (using pyramids)
% [u,v]=HierarchicalLK(im1, im2, numLevels, windowSize, iterations, display)
% Tested for pyramids of height 1, 2, 3 only... operation with
% pyramids of height 4 might be unreliable
%
% Use quiver(u, -v, 0) to view the results
%
% NUMLEVELS Pyramid Levels (typical value 3)
% WINDOWSIZE Size of smoothing window (typical value 1-4)
% ITERATIONS number of iterations (typical value 1-5)
% DISPLAY 1 to display flow fields (1 or 0)
%
%Uses: Reduce, Expand
%
% Sohaib Khan
% edited 05-15-03 (Yaser)
% yaser@cs.ucf.edu
%
% [1] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique
% with an Application to Stereo Vision," Int'l Joint Conference on
% Artificial Intelligence, pp. 121-130, 1981.

if (size(im1,1)~=size(im2,1)) | (size(im1,2)~=size(im2,2))
error('images are not same size');
end;

if (size(im1,3) ~= 1) | (size(im2, 3) ~= 1)
error('input should be gray level images');
end;

% check image sizes and crop if not divisible


if (rem(size(im1,1), 2^(numLevels - 1)) ~= 0)
    warning('image will be cropped in height, size of output will be smaller than input!');
im1 = im1(1:(size(im1,1) - rem(size(im1,1), 2^(numLevels - 1))), :);
im2 = im2(1:(size(im1,1) - rem(size(im1,1), 2^(numLevels - 1))), :);
end;

if (rem(size(im1,2), 2^(numLevels - 1)) ~= 0)
    warning('image will be cropped in width, size of output will be smaller than input!');
im1 = im1(:, 1:(size(im1,2) - rem(size(im1,2), 2^(numLevels - 1))));
im2 = im2(:, 1:(size(im1,2) - rem(size(im1,2), 2^(numLevels - 1))));
end;

%Build Pyramids
pyramid1 = im1;
pyramid2 = im2;

for i=2:numLevels
im1 = Reduce(im1);
im2 = Reduce(im2);

pyramid1(1:size(im1,1), 1:size(im1,2), i) = im1;
pyramid2(1:size(im2,1), 1:size(im2,2), i) = im2;
end;

% base level computation


disp('Computing Level 1');
baseIm1 = pyramid1(1:(size(pyramid1,1)/(2^(numLevels-1))), ...
    1:(size(pyramid1,2)/(2^(numLevels-1))), numLevels);
baseIm2 = pyramid2(1:(size(pyramid2,1)/(2^(numLevels-1))), ...
    1:(size(pyramid2,2)/(2^(numLevels-1))), numLevels);
[u,v] = LucasKanade(baseIm1, baseIm2, windowSize);

for r = 1:iterations
[u, v] = LucasKanadeRefined(u, v, baseIm1, baseIm2);
end

%propagating flow to higher levels


for i = 2:numLevels
disp(['Computing Level ', num2str(i)]);
uEx = 2 * imresize(u,size(u)*2); % use appropriate expand function (gaussian, bilinear, cubic, etc).
vEx = 2 * imresize(v,size(v)*2);

curIm1 = pyramid1(1:(size(pyramid1,1)/(2^(numLevels - i))), ...
    1:(size(pyramid1,2)/(2^(numLevels - i))), (numLevels - i)+1);
curIm2 = pyramid2(1:(size(pyramid2,1)/(2^(numLevels - i))), ...
    1:(size(pyramid2,2)/(2^(numLevels - i))), (numLevels - i)+1);

[u, v] = LucasKanadeRefined(uEx, vEx, curIm1, curIm2);

for r = 1:iterations
[u, v, cert] = LucasKanadeRefined(u, v, curIm1, curIm2);
end
end;

if (display==1)
figure, quiver(Reduce((Reduce(medfilt2(flipud(u),[5 5])))), ...
    -Reduce((Reduce(medfilt2(flipud(v),[5 5])))), 0), axis equal
end

function [u,v,cert] = LucasKanadeRefined(uIn, vIn, im1, im2);
% Lucas Kanade Refined computes lucas kanade flow at the current level given previous estimates!
%current implementation is only for a 3x3 window

%[fx, fy, ft] = ComputeDerivatives(im1, im2);

uIn = round(uIn);
vIn = round(vIn);
%uIn = uIn(2:size(uIn,1), 2:size(uIn, 2)-1);
%vIn = vIn(2:size(vIn,1), 2:size(vIn, 2)-1);

u = zeros(size(im1));
v = zeros(size(im2));

%to compute derivatives, use a 5x5 block... the resulting derivative will be 5x5...
% take the middle 3x3 block as derivative
for i = 3:size(im1,1)-2
for j = 3:size(im2,2)-2
% if uIn(i,j)~=0
% disp('ha');
% end;

curIm1 = im1(i-2:i+2, j-2:j+2);


lowRindex = i-2+vIn(i,j);
highRindex = i+2+vIn(i,j);
lowCindex = j-2+uIn(i,j);
highCindex = j+2+uIn(i,j);

if (lowRindex < 1)
lowRindex = 1;
highRindex = 5;
end;

if (highRindex > size(im1,1))


lowRindex = size(im1,1)-4;
highRindex = size(im1,1);
end;

if (lowCindex < 1)
lowCindex = 1;
highCindex = 5;
end;

if (highCindex > size(im1,2))


lowCindex = size(im1,2)-4;
highCindex = size(im1,2);
end;

if isnan(lowRindex)
lowRindex = i-2;
highRindex = i+2;
end;

if isnan(lowCindex)
lowCindex = j-2;

highCindex = j+2;
end;

curIm2 = im2(lowRindex:highRindex, lowCindex:highCindex);

[curFx, curFy, curFt]=ComputeDerivatives(curIm1, curIm2);


curFx = curFx(2:4, 2:4);
curFy = curFy(2:4, 2:4);
curFt = curFt(2:4, 2:4);

curFx = curFx';
curFy = curFy';
curFt = curFt';

curFx = curFx(:);
curFy = curFy(:);
curFt = -curFt(:);

A = [curFx curFy];

U = pinv(A'*A)*A'*curFt;

u(i,j)=U(1);
v(i,j)=U(2);

cert(i,j) = rcond(A'*A);

end;
end;

u = u+uIn;
v = v+vIn;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [fx, fy, ft] = ComputeDerivatives(im1, im2)
%ComputeDerivatives Compute horizontal, vertical and time derivative
% between two gray-level images.

if (size(im1,1) ~= size(im2,1)) | (size(im1,2) ~= size(im2,2))


error('input images are not the same size');
end;

if (size(im1,3)~=1) | (size(im2,3)~=1)
error('method only works for gray-level images');
end;

fx = conv2(im1,0.25* [-1 1; -1 1]) + conv2(im2, 0.25*[-1 1; -1 1]);


fy = conv2(im1, 0.25*[-1 -1; 1 1]) + conv2(im2, 0.25*[-1 -1; 1 1]);
ft = conv2(im1, 0.25*ones(2)) + conv2(im2, -0.25*ones(2));

% make same size as input


fx=fx(1:size(fx,1)-1, 1:size(fx,2)-1);
fy=fy(1:size(fy,1)-1, 1:size(fy,2)-1);
ft=ft(1:size(ft,1)-1, 1:size(ft,2)-1);

function [u, v] = LucasKanade(im1, im2, windowSize)
% LucasKanade: Lucas-Kanade algorithm, without pyramids (only 1 level).

%REVISION: NaN vals are replaced by zeros

[fx, fy, ft] = ComputeDerivatives(im1, im2);

u = zeros(size(im1));
v = zeros(size(im2));

halfWindow = floor(windowSize/2);
for i = halfWindow+1:size(fx,1)-halfWindow
for j = halfWindow+1:size(fx,2)-halfWindow
curFx = fx(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFy = fy(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);
curFt = ft(i-halfWindow:i+halfWindow, j-halfWindow:j+halfWindow);

curFx = curFx';
curFy = curFy';
curFt = curFt';

curFx = curFx(:);
curFy = curFy(:);
curFt = -curFt(:);

A = [curFx curFy];

U = pinv(A'*A)*A'*curFt;

u(i,j)=U(1);
v(i,j)=U(2);
end;
end;

u(isnan(u))=0;
v(isnan(v))=0;

%u=u(2:size(u,1), 2:size(u,2));
%v=v(2:size(v,1), 2:size(v,2));
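The per-pixel solve in both functions above (U = pinv(A'*A)*A'*curFt) is the standard Lucas-Kanade least-squares system. With the derivative samples of the window stacked row-wise, it reads

$$\begin{bmatrix}u\\v\end{bmatrix}=(A^{\top}A)^{-1}A^{\top}\mathbf{b},\qquad A=\begin{bmatrix}f_x(\mathbf{x}_1)&f_y(\mathbf{x}_1)\\\vdots&\vdots\\f_x(\mathbf{x}_n)&f_y(\mathbf{x}_n)\end{bmatrix},\qquad \mathbf{b}=-\begin{bmatrix}f_t(\mathbf{x}_1)\\\vdots\\f_t(\mathbf{x}_n)\end{bmatrix}$$

where the pseudo-inverse (pinv) stands in for the plain inverse so that rank-deficient, textureless windows do not cause the solve to fail.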

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [fx, fy, ft] = ComputeDerivatives(im1, im2)
%ComputeDerivatives Compute horizontal, vertical and time derivative
% between two gray-level images.

if (size(im1,1) ~= size(im2,1)) | (size(im1,2) ~= size(im2,2))


error('input images are not the same size');
end;

if (size(im1,3)~=1) | (size(im2,3)~=1)
error('method only works for gray-level images');
end;

%fx = conv2(im1,(1/6)*[-0.453014 0 0.453014;-0.453014 0 0.453014]) + conv2(im2,(1/6)*[-0.453014 0 0.453014;-0.453014 0 0.453014]);
%fy = conv2(im1,(1/6)*[-0.453014 0 0.453014;-0.453014 0 0.453014]') + conv2(im2,(1/6)*[-0.453014 0 0.453014;-0.453014 0 0.453014]');
fx = conv2(im1,0.25* [-1 1; -1 1]) + conv2(im2, 0.25*[-1 1; -1 1]);
fy = conv2(im1, 0.25*[-1 -1; 1 1]) + conv2(im2, 0.25*[-1 -1; 1 1]);
ft = conv2(im1, 0.25*ones(2)) + conv2(im2, -0.25*ones(2));

% make same size as input


fx=fx(1:size(fx,1)-1, 1:size(fx,2)-1);
fy=fy(1:size(fy,1)-1, 1:size(fy,2)-1);
ft=ft(1:size(ft,1)-1, 1:size(ft,2)-1);

APPENDIX F

WARPING ALGORITHM MATLAB CODE

function res=mdewarpfinal(I,u,v)

% Warping algorithm that dewarps images

%% Image dimensions

[r c]=size(I);

%% Create shift matrix and indices matrix

%morph has the same dimensions as I to map each pixel

morph1=u;
morph2=v;

tempx=1:c;
tempy=(1:r)';

% Generate matrix of indices

indicex=zeros(r,c);
indicey=zeros(r,c);
for k=1:r
indicex(k,:)=tempx;
end
for k=1:c
indicey(:,k)=tempy;
end

%% Intermediate matrix

interx=indicex-morph1;
intery=indicey-morph2;

% clamp all indices to the valid ranges [1, c] and [1, r]

[r1 c1]=find((interx<1) | (interx>c));
[r2 c2]=find((intery<1) | (intery>r));

for k=1:length(r1)
for j=1:length(c1)
if (interx(r1(k),c1(j))<1)
interx(r1(k),c1(j))=1;
elseif (interx(r1(k),c1(j))>c)
interx(r1(k),c1(j))=c;
end
end

end

for k=1:length(r2)
for j=1:length(c2)
if (intery(r2(k),c2(j))<1)
intery(r2(k),c2(j))=1;
elseif (intery(r2(k),c2(j))>r)
intery(r2(k),c2(j))=r;
end
end
end

%% Calculate weighting parameters

x1 = floor(interx);
x2 = ceil(interx);
fx1 = (x2-interx);
fx2 = 1-fx1;
y1 = floor(intery);
y2 = ceil(intery);
fy1 = y2-intery;
fy2 = 1-fy1;

% Compute the new grayvalue of the current pixel


I=double(I);
res=zeros(r,c); % preallocate output image

for k=1:r
for j=1:c

res(k,j) = fx1(k,j)*fy1(k,j)*I(y1(k,j),x1(k,j)) + ...
fy1(k,j)*fx2(k,j)*I(y1(k,j),x2(k,j)) + ...
fy2(k,j)*fx1(k,j)*I(y2(k,j),x1(k,j)) + ...
fy2(k,j)*fx2(k,j)*I(y2(k,j),x2(k,j));
end
end
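The weights computed above implement standard bilinear interpolation at the non-integer source coordinate (x, y) = (interx(k,j), intery(k,j)). Writing x1 = floor(x), x2 = ceil(x), y1 = floor(y), y2 = ceil(y), the dewarped intensity for non-integer coordinates (where x2 = x1 + 1 and y2 = y1 + 1) is

$$\mathrm{res}(k,j)=(x_2-x)(y_2-y)\,I(y_1,x_1)+(x-x_1)(y_2-y)\,I(y_1,x_2)+(x_2-x)(y-y_1)\,I(y_2,x_1)+(x-x_1)(y-y_1)\,I(y_2,x_2).$$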

APPENDIX G

GRAPHICAL USER INTERFACE MATLAB CODE

function varargout = Ver1(varargin)


% VER1 M-file for Ver1.fig

% Begin initialization code


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @Ver1_OpeningFcn, ...
'gui_OutputFcn', @Ver1_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code

% --- Executes just before Ver1 is made visible.


function Ver1_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to Ver1 (see VARARGIN)

% Choose default command line output for Ver1


handles.output = hObject;

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes Ver1 wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = Ver1_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

FSATIE TSHWANE UNIVERSITY OF TECHNOLOGY Page 170


% --- Executes on button press in Open.
function Open_Callback(hObject, eventdata, handles)
% hObject handle to Open (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
[FileName,PathName] = uigetfile({'*.tiff;*.jpeg;*.jpg;*.tif;*.png;*.gif','All Image Files';...
'*.*','All Files' });
if (FileName~=0)
im1=imread(strcat(PathName,FileName));
imshow(strcat(PathName,FileName));

handles.orig=im1;
handles.filename=strcat(PathName,FileName);
guidata(hObject,handles)

%mov=aviread('lennablur_001_5');
%movie(handles.axes2,mov);
end

% --- Executes on button press in Process.


function Process_Callback(hObject, eventdata, handles)
% hObject handle to Process (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
d=get(handles.Distortion);
b=get(handles.Blurring);
d.Value % echo distortion level (debug output)
b.Value/10000 % echo blurring level (debug output)
%x=get(handles.handleim);
im2=(handles.orig);
if size(im2,3)>1
im2=rgb2gray(im2);
end

I=im2;
n=d.Value;
k=b.Value/10000;
res=simulateblur2(I,n);
res=func_blur(res,k);

handles.im1=res;
guidata(hObject,handles)
imshow(handles.im1,[]);

% --- Executes on button press in Save.


function Save_Callback(hObject, eventdata, handles)
% hObject handle to Save (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

[file,path] = uiputfile('*.avi','Save Sequence As');
movie2avi(handles.mov,strcat(path,file),'compression','none');

% --- Executes on button press in Play.


function Play_Callback(hObject, eventdata, handles)
% hObject handle to Play (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

set(handles.axes1,'nextplot','replacechildren','units','normalized');
newplot;
movie(handles.axes1,handles.movx);

% --- Executes on slider movement.


function Distortion_Callback(hObject, eventdata, handles)
% hObject handle to Distortion (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'Value') returns position of slider


% get(hObject,'Min') and get(hObject,'Max') to determine range of slider

% --- Executes during object creation, after setting all properties.


function Distortion_CreateFcn(hObject, eventdata, handles)
% hObject handle to Distortion (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: slider controls usually have a light gray background.


if isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor',[.9 .9 .9]);
end

% --- Executes on slider movement.


function Blurring_Callback(hObject, eventdata, handles)
% hObject handle to Blurring (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'Value') returns position of slider


% get(hObject,'Min') and get(hObject,'Max') to determine range of slider

% --- Executes during object creation, after setting all properties.


function Blurring_CreateFcn(hObject, eventdata, handles)
% hObject handle to Blurring (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: slider controls usually have a light gray background.


if isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor',[.9 .9 .9]);
end

% --- Executes on button press in Sequence.


function Sequence_Callback(hObject, eventdata, handles)
% hObject handle to Sequence (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

d=get(handles.Distortion);
b=get(handles.Blurring);

[mov movx]=create_seq(handles.filename,uint8(handles.nframes),'test1',d.Value,(b.Value)/10000);

handles.mov=mov;
handles.movx=movx;
guidata(hObject,handles);

%% Function for simulating Distortion%%


%%===================================%%
function res=simulateblur2(I,n)

% Warping algorithm that simulates heat shimmer

[r c]=size(I);

%% Create shift matrix and indices matrix

%morph has same dimensions as I to map each pixel


morph=n*imresize((rand(5,5,2)-0.5),size(I),'bilinear');
morph1=morph(:,:,1);
morph2=morph(:,:,2);

tempx=1:c;
tempy=(1:r)';

% Generate matrix of indices

indicex=zeros(r,c);
indicey=zeros(r,c);
for k=1:r
indicex(k,:)=tempx;
end
for k=1:c
indicey(:,k)=tempy;
end

%% Intermediate matrix

interx=indicex-morph1;
intery=indicey-morph2;

% clamp all values less than 1 or greater than the width and height
[r1 c1]=find((interx<1) | (interx>c));
[r2 c2]=find((intery<1) | (intery>r));

for k=1:length(r1)
for j=1:length(c1)
if (interx(r1(k),c1(j))<1)
interx(r1(k),c1(j))=1;
elseif (interx(r1(k),c1(j))>c)
interx(r1(k),c1(j))=c;
end
end
end

for k=1:length(r2)
for j=1:length(c2)
if (intery(r2(k),c2(j))<1)
intery(r2(k),c2(j))=1;
elseif (intery(r2(k),c2(j))>r)
intery(r2(k),c2(j))=r;
end
end
end

%% Weights

x1 = floor(interx);
x2 = ceil(interx);
fx1 = (x2-interx);
fx2 = 1-fx1;
y1 = floor(intery);
y2 = ceil(intery);
fy1 = y2-intery;
fy2 = 1-fy1;

%compute the new grayvalue of the current pixel


I=double(I);
res=zeros(r,c); % preallocate output image

for k=1:r
for j=1:c

res(k,j) = fx1(k,j)*fy1(k,j)*I(y1(k,j),x1(k,j)) + ...
fy1(k,j)*fx2(k,j)*I(y1(k,j),x2(k,j)) + ...
fy2(k,j)*fx1(k,j)*I(y2(k,j),x1(k,j)) + ...
fy2(k,j)*fx2(k,j)*I(y2(k,j),x2(k,j));
end
end

%% Function for simulating Blurring%%


%%=================================%%

function i1final=func_blur(i1,k)

i1=double(i1);
[r c]=size(i1);

% Pre-processing of image
% Centres the image frequency transform
for x=1:r
for y=1:c
i1(x,y)=i1(x,y)*((-1)^(x+y));
end
end

% Calculate DFT
i1freq=fft2(i1);

% Calculate the OTF for the blur


% "u-r/2" and "v-c/2" are used to centre the transform
H=zeros(r,c);
for u=1:r
for v=1:c
H(u,v)=exp(-k*((u-r/2)^2+(v-c/2)^2)^(5/6));
end
end

% Calculate filtered image


filt=zeros(r,c);
for u=1:r
for v=1:c
filt(u,v)=i1freq(u,v).*H(u,v);
end
end

% Calculate inverse DFT


i1blur=ifft2(filt);
i1blur=real(i1blur);

% Post-processing of image
% Multiply by (-1)^(x+y) to "de-centre" the transform
i1final=zeros(r,c);
for x=1:r
for y=1:c
i1final(x,y)=i1blur(x,y)*((-1)^(x+y));
end
end
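The filter H above is the long-exposure atmospheric-turbulence OTF applied about the centred frequency origin (r/2, c/2):

$$H(u,v)=\exp\!\left(-k\left[(u-r/2)^{2}+(v-c/2)^{2}\right]^{5/6}\right)$$

where k (the blurring level, lambda, elsewhere in this appendix) controls the severity of the blur and the 5/6 exponent is characteristic of the turbulence degradation model.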

%% Function for creating a sequence from a single image%%


%%====================================================%%

function [mov movx]=create_seq(name,nframes,outname,n,lambda)
%
% function create_seq(name,nframes,outname,n,lambda)
%
% Creates a test sequence using a single image
% name    - name of the test image
% nframes - number of frames in output sequence
% outname - name of output sequence
% n       - distortion level
% lambda  - blurring level

image=imread(name);
lambda1=lambda;
for k=1:nframes
w(:,:,k)=simulateblur2(image,n);
w(:,:,k)=func_blur(w(:,:,k),lambda);
%lambda=lambda+0.0002;
lambda=(rand+0.5)*lambda1;
map=gray(256);
mov(k) = im2frame(w(:,:,k),map);
movx(k)=mov(k);
movx(k).cdata=flipud(movx(k).cdata);
end
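A minimal usage sketch of create_seq (the image file name and parameter values are assumptions chosen for illustration):

% Hypothetical example: 30 frames, distortion level 2, blurring level 0.001
[mov, movx] = create_seq('lenna.tif', 30, 'test1', 2, 0.001);
movie(movx); % preview the simulated sequence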

function Frames_Callback(hObject, eventdata, handles)


% hObject handle to Frames (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of Frames as text


% str2double(get(hObject,'String')) returns contents of Frames as a double
nframes=str2double(get(hObject,'String'));
handles.nframes=nframes;
guidata(hObject,handles);

% --- Executes during object creation, after setting all properties.


function Frames_CreateFcn(hObject, eventdata, handles)
% hObject handle to Frames (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

function edit6_Callback(hObject, eventdata, handles)
% hObject handle to edit6 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit6 as text


% str2double(get(hObject,'String')) returns contents of edit6 as a double

% --- Executes during object creation, after setting all properties.


function edit6_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit6 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

APPENDIX H

GENERAL ALGORITHM C CODE

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <time.h>

/* Time-averaged general algorithm using the Lucas-Kanade optical flow algorithm. */

time_t t1,t2;
double diff=0;

IplImage *image, *grey, *prev_grey, *swap_temp, *average;
IplImage *corrected = 0, *img2 = 0, *vel_x = 0, *vel_y = 0;

float average2[1024][1024];

int main( int argc, char** argv )


{
CvCapture* capture = 0;

if( argc == 2 )
capture = cvCaptureFromAVI( argv[1] );

if( !capture )
{
fprintf(stderr,"%d Could not initialize capturing...\n",argc);
return -1;
}

cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,0);
double x=cvGetCaptureProperty(capture,CV_CAP_PROP_FPS);
printf("Frame Rate: %f ",x);

printf( "\n"
"\tESC - quit the program\n");

cvNamedWindow( "distorted", 1 );
cvNamedWindow( "corrected", 1 );

//==================Calculate the Reference Image===============//

time(&t1);

for(int r=0;r<10;r++)
{

IplImage* frame = 0;

frame = cvQueryFrame( capture );


if( !frame )
break;

if( !average )
{
/* allocate all the buffers */
average = cvCreateImage( cvGetSize(frame), 8, 3 );
average->origin = frame->origin;
grey = cvCreateImage( cvGetSize(frame), 8, 1 );
img2 = cvCreateImage( cvGetSize(frame), 8, 1 );
}

cvCopy( frame, average, 0 );


cvConvertImage( average, average, CV_CVTIMG_FLIP );
cvCvtColor( average, grey, CV_BGR2GRAY );

int i=0,j=0;
for(i=0;i<grey->height;i++)
{
for(j=0;j<grey->width;j++)
{
average2[i][j] = (unsigned char)(grey->imageData[i*grey->widthStep+j]) + average2[i][j];

if(r==9)
{
average2[i][j]=average2[i][j]/10;
img2->imageData[i*grey->widthStep+j] = (unsigned char)(average2[i][j]);
}

}
}
} // close the 10-frame reference-averaging loop

cvReleaseImage(&grey);
cvReleaseImage(&average);

cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,0);

//======Calculate the optical flow and remap pixels==================//

for(;;)
{
IplImage* frame = 0;
int i=0, k=0, c=0;

frame = cvQueryFrame( capture );


if( !frame )
break;

if( !image )

{
/* allocate all the buffers */
image = cvCreateImage( cvGetSize(frame), 8, 3 );
image->origin = frame->origin;
grey = cvCreateImage( cvGetSize(frame), 8, 1 );
prev_grey = cvCreateImage( cvGetSize(frame), 8, 1 );
vel_x = cvCreateImage( cvGetSize(grey),32 , 1 );
vel_y = cvCreateImage( cvGetSize(grey),32 , 1 );
corrected = cvCreateImage( cvGetSize(grey),8 , 1 );
}

cvCopy( frame, image, 0 );


cvConvertImage( image, image, CV_CVTIMG_FLIP );
cvCvtColor( image, grey, CV_BGR2GRAY );

//==========================================================//

int j=0;
int height,width,step,channels;
float *data_x1,*data_y1;

cvCalcOpticalFlowLK( grey,img2,cvSize(5,5),vel_x,vel_y);

height = grey->height;
width = grey->width;
step = grey->widthStep;
channels = grey->nChannels;
data_x1 = (float *)vel_x->imageData;
data_y1 = (float *)vel_y->imageData;

for(i=0;i<height;i++)
{
for(j=0;j<width;j++)
{
data_x1[i*step+j]=j-data_x1[i*step+j];
data_y1[i*step+j]=i-data_y1[i*step+j];
if (data_x1[i*step+j]<0)
data_x1[i*step+j]=0;
if (data_y1[i*step+j]<0)
data_y1[i*step+j]=0;
if (data_x1[i*step+j]>(width-2))
data_x1[i*step+j]=(float)(width-2);
if (data_y1[i*step+j]>(height-2))
data_y1[i*step+j]=(float)(height-2);
}
} // close the per-pixel coordinate-map loops

cvRemap(grey,corrected,vel_x,vel_y);

//================Play Sequences====================//

cvShowImage( "distorted",grey);
cvShowImage( "corrected",corrected);

c = cvWaitKey(1);
if( (char)c == 27 )
break;

}
time(&t2);
diff=difftime(t2,t1);
printf("Elapsed time: %f s\n",diff);

cvReleaseCapture( &capture );
cvDestroyWindow("distorted");
cvDestroyWindow("corrected");
return 0;
}
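For reference, the OpenCV 1.x listings in Appendices H and I can typically be compiled with a command along the following lines (the source file name and the use of pkg-config are assumptions; adjust to the local OpenCV installation):

g++ general.c -o general `pkg-config --cflags --libs opencv`
./general sequence.avi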

APPENDIX I

CGI ALGORITHM C CODE

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <ctime>
#include <time.h>

time_t t1,t2;
double diff=0;
IplImage *image , *grey;
IplImage *corrected = 0, *img2 = 0, *vel_x = 0, *vel_y = 0;
IplImage *curr = 0, *current = 0;

float data_xf[256][256],data_yf[256][256];
float average2[256][256];

int main( int argc, char** argv )


{

CvCapture* capture = 0;

if( argc == 2 )
capture = cvCaptureFromAVI( argv[1] );

if( !capture )
{
fprintf(stderr,"%d Could not initialize capturing...\n",argc);
return -1;
}

cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,0);
double x=cvGetCaptureProperty(capture,CV_CAP_PROP_FPS);
printf("Frame Rate: %f ",x);

printf( "Hot keys: \n"


"\tESC - quit the program\n");

cvNamedWindow( "distorted", 1 );
cvNamedWindow( "corrected", 1 );

//===========Main LOOP===================//

time(&t1);
for (int q=0;q<1000;q++)
{

cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,q+5);

IplImage* frame = 0;

for(int p=0;p<1;p++)
{
frame = cvQueryFrame( capture );
if( !frame )
break;

if( !curr )
{
/* allocate all the buffers */
curr = cvCreateImage( cvGetSize(frame), 8, 3 );
curr->origin = frame->origin;
current = cvCreateImage( cvGetSize(frame), 8, 1 );
}

cvCopy( frame, curr, 0 );


cvConvertImage( curr, curr, CV_CVTIMG_FLIP );
cvCvtColor( curr, current, CV_BGR2GRAY );

cvSetCaptureProperty(capture,CV_CAP_PROP_POS_FRAMES,q);

//=================Create Time window=====================//

for(p=0;p<11;p++)
{
IplImage* frame = 0;
int i=0, k=0, c=0;

frame = cvQueryFrame( capture );


if( !frame )
break;

if( !image )
{
/* allocate all the buffers */
image = cvCreateImage( cvGetSize(frame), 8, 3 );
image->origin = frame->origin;
grey = cvCreateImage( cvGetSize(frame), 8, 1 );
vel_x = cvCreateImage( cvGetSize(grey),32 , 1 );
vel_y = cvCreateImage( cvGetSize(grey),32 , 1 );
corrected = cvCreateImage( cvGetSize(grey),8 , 1 );
img2 = cvCreateImage( cvGetSize(frame), 8, 1 );
}

cvCopy( frame, image, 0 );


cvConvertImage( image, image, CV_CVTIMG_FLIP );
cvCvtColor( image, grey, CV_BGR2GRAY );

//==================Calc Optical Flow===============//

int j=0,r=0;
int height,width,step;

float *data_x1,*data_y1;

cvCalcOpticalFlowLK( current,grey,cvSize(5,5),vel_x,vel_y);
// LK algorithm used since it is optimized

height = grey->height;
width = grey->width;
step = grey->widthStep;
data_x1 = (float *)vel_x->imageData;
data_y1 = (float *)vel_y->imageData;

for(i=0;i<height;i++)
{
for(j=0;j<width;j++)
{

data_xf[i][j]=data_xf[i][j]+data_x1[i*step+j];

data_yf[i][j]=data_yf[i][j]+data_y1[i*step+j];

if (p==10)
{
data_xf[i][j]=data_xf[i][j]/10;
data_yf[i][j]=data_yf[i][j]/10;
data_x1[i*step+j]=j-data_xf[i][j];
data_y1[i*step+j]=i-data_yf[i][j];

if (data_x1[i*step+j]<0)
data_x1[i*step+j]=0;
if (data_y1[i*step+j]<0)
data_y1[i*step+j]=0;
if (data_x1[i*step+j]>(width-2))
data_x1[i*step+j]=(float)(width-2);
if (data_y1[i*step+j]>(height-2))
data_y1[i*step+j]=(float)(height-2);

}
}

if (p==10)
{

cvRemap(current,corrected,vel_x,vel_y);
cvShowImage( "distorted",grey);
cvShowImage( "corrected",corrected);
c = cvWaitKey(1);
if( (char)c == 27 )
break;

//=================Run Sequence==========================//

}

} // close the per-row loop
} // close the 11-frame time-window loop
} // close the single-iteration grab loop
} // close the main loop over q

time(&t2);
diff=difftime(t2,t1);
printf("%f",diff);

cvReleaseCapture( &capture );
cvDestroyWindow("distorted");
cvDestroyWindow("corrected");
return 0;
}
