
A
PROJECT REPORT ON
IMAGE ENHANCEMENT

SUBMITTED BY: NEHA SAHU

ABSTRACT

A robust and efficient image enhancement technique has been developed to improve the visual quality of digital images that exhibit dark shadows due to the limited dynamic range of imaging and display devices, which are incapable of handling high dynamic range scenes. The proposed technique processes images in two separate steps: dynamic range compression and local contrast enhancement.

Dynamic range compression is a neighborhood-dependent intensity transformation that enhances the luminance in dark shadows while keeping the overall tonality consistent with that of the input image. In this manner the image visibility can be largely and properly improved without creating an unnatural rendition. A neighborhood-dependent local contrast enhancement method is then used to enhance the image contrast following the dynamic range compression. Experimental results demonstrate that the proposed technique has a strong capability to improve the performance of a convolution face finder compared to histogram equalization and multiscale Retinex with color restoration, without compromising the false alarm rate.

CONTENTS

1. INTRODUCTION
   Introduction
   Image processing

2. IMAGE PROCESSING
   Introduction
   Image resolution
   How to improve your image
   Pre-processing of the remotely sensed image

3. AERIAL IMAGERY
   Uses of imagery
   Types of aerial photography
   Non-linear image enhancement techniques

4. RESULT ANALYSIS
   Results
   Conclusion
   Future work

REFERENCES

CHAPTER 1

INTRODUCTION

INTRODUCTION


Aerial images captured from aircraft, spacecraft, or satellites usually suffer from a lack of clarity, since the atmosphere enclosing the Earth affects the images through turbidity caused by haze, fog, clouds or heavy rain. The visibility of such aerial images may decrease drastically, and sometimes the conditions under which the images are taken may lead to near-zero visibility even for the human eye. Even though human observers may not see much more than haze or smoke, useful information may still exist in images taken under such poor conditions. Captured images are usually not the same as what we see in a real-world scene, and are generally a poor rendition of it.

The high dynamic range of real-life scenes and the limited dynamic range of imaging devices result in images with locally poor contrast. The Human Visual System (HVS) deals with high dynamic range scenes by compressing the dynamic range and adapting locally to each part of the scene. There are some exceptions, such as turbid (e.g. fog, heavy rain or snow) imaging conditions, under which the acquired images and the direct observation are in close parity. The extremely narrow dynamic range of such scenes leads to extremely low contrast in the acquired images.

To deal with the problems caused by the limited dynamic range of imaging devices, many image processing algorithms have been developed. These algorithms also provide contrast enhancement to some extent. Recently, we have developed a wavelet-based dynamic range compression (WDRC) algorithm to improve the visual quality of digital images of high dynamic range scenes with non-uniform lighting conditions. The WDRC algorithm is modified by introducing a histogram adjustment and a non-linear color restoration process, so that it provides color constancy and deals with pathological scenes having very strong spectral characteristics in a single band. This fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is a very good candidate for aerial imagery applications such as image interpretation for defense and security tasks. The algorithm can further be applied to video streaming for aviation safety. In this project, the application of the WDRC algorithm to aerial imagery is presented. The results obtained from a large variety of aerial images show strong robustness and high image quality, indicating promise for aerial imagery during poor-visibility flight conditions.
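As a rough illustration of this kind of pipeline, the hedged sketch below performs a wavelet-based dynamic range compression followed by a simple local contrast step. It is not the exact WDRC algorithm described above; the compression curve, the wavelet choice ("db2"), the decomposition level and the Gaussian-based local contrast step are all illustrative assumptions.

```python
# Hedged sketch of a WDRC-style enhancement pipeline (illustrative only).
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def wdrc_sketch(gray, wavelet="db2", level=3, sigma=15.0):
    """gray: 2-D float array scaled to [0, 1]. Returns an enhanced image in [0, 1]."""
    # 1) Orthogonal wavelet decomposition.
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]

    # 2) Dynamic range compression of the approximation band: a log-like
    #    curve (illustrative) that boosts dark regions while keeping the
    #    coefficient range unchanged.
    peak = np.abs(approx).max() + 1e-6
    approx = peak * np.sign(approx) * np.log1p(20.0 * np.abs(approx) / peak) / np.log1p(20.0)

    # 3) Reconstruct the spatial-domain image.
    out = pywt.waverec2([approx] + details, wavelet)
    out = np.clip(out[: gray.shape[0], : gray.shape[1]], 0.0, None)

    # 4) Neighborhood-dependent local contrast enhancement: stretch each
    #    pixel relative to its local (Gaussian-weighted) mean.
    local_mean = gaussian_filter(out, sigma)
    out = out * np.sqrt(out / (local_mean + 1e-6))

    # Normalize to [0, 1] for display.
    out -= out.min()
    return out / (out.max() + 1e-6)
```

For a color image, the same mapping would typically be applied to a luminance channel, with the chrominance restored afterwards.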

IMAGE PROCESSING

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. The general techniques discussed here apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging. Image processing is the process used to convert an image signal into a physical image. The image signal can be either digital or analog. The actual output can be a physical image or the characteristics of an image. The most common type of image processing is photography. In this process, an image is captured using a camera to create a digital or analog image. In order to produce a physical picture, the image is processed using the appropriate technology based on the input source type.

In digital photography, the image is stored as a computer file. This file is translated using photographic software to generate an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken; the software then translates this information into an image.


Typical image processing operations include:

Euclidean geometry transformations such as enlargement, reduction, and rotation.

Color corrections such as brightness and contrast adjustments, color mapping, color balancing, quantization, or color translation to a different color space (a sketch follows this list).

Digital compositing or optical compositing (the combination of two or more images), which is used in film-making to make a "matte".

Interpolation, demosaicing, and recovery of a full image from a raw image format using a Bayer filter pattern.

Image registration, the alignment of two or more images.

Image differencing and morphing.

Image recognition, for example extracting text from an image using optical character recognition, or checkbox and bubble values using optical mark recognition.

Image segmentation.

High dynamic range imaging by combining multiple images.

Geometric hashing for 2-D object recognition with affine invariance.
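As a small illustration of the color-correction item above, the following hedged sketch applies a simple linear brightness/contrast adjustment to an 8-bit image; the gain and offset values are arbitrary assumptions.

```python
# Minimal brightness/contrast adjustment sketch for an 8-bit image.
import numpy as np

def adjust_brightness_contrast(img, gain=1.2, offset=10.0):
    """img: uint8 array. gain scales contrast about mid-gray; offset shifts brightness."""
    out = gain * (img.astype(np.float32) - 128.0) + 128.0 + offset
    return np.clip(out, 0, 255).astype(np.uint8)
```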


CHAPTER 2

IMAGE PROCESSING

INTRODUCTION

Image processing and analysis can be defined as the "act of examining images for the purpose of identifying objects and judging their significance". Image analysts study remotely sensed data and attempt, through logical processes, to detect, identify, classify, measure and evaluate the significance of physical and cultural objects, their patterns and spatial relationships.

IMAGE RESOLUTION

Resolution can be defined as "the ability of an imaging system to record fine details in a distinguishable manner". A working knowledge of resolution is essential for understanding both the practical and conceptual details of remote sensing. Along with the actual positioning of the spectral bands, the various resolutions are of paramount importance in determining the suitability of remotely sensed data for a given application. The major characteristics of imaging remote sensing instruments operating in the visible and infrared spectral regions are described in terms of the following:

Spectral resolution

Radiometric resolution

Spatial resolution

Temporal resolution


Spectral resolution refers to the width of the spectral bands. Different materials on the Earth's surface exhibit different spectral reflectances and emissivities, and these spectral characteristics define the spectral positions and sensitivities needed to distinguish materials. There is a tradeoff between spectral resolution and signal-to-noise ratio. The use of well-chosen and sufficiently numerous spectral bands is therefore a necessity if different targets are to be successfully identified on remotely sensed images.

Radiometric resolution, or radiometric sensitivity, refers to the number of digital levels used to express the data collected by the sensor. It is commonly expressed as the number of bits (binary digits) needed to store the maximum level. For example, Landsat TM data are quantised to 256 levels (equivalent to 8 bits). Here also there is a tradeoff between radiometric resolution and signal-to-noise ratio: there is no point in having a step size smaller than the noise level in the data. A low-quality instrument with a high noise level would therefore necessarily have a lower radiometric resolution than a high-quality, high signal-to-noise-ratio instrument.
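As an illustrative calculation, 8 bits give 2^8 = 256 levels; if the usable radiance range of a band corresponds to about 256 counts and the sensor noise is roughly one count, quantising the same range to 10 bits (1024 levels) would produce step sizes well below the noise and therefore add little real information.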

The spatial resolution of an imaging system is defined through various criteria: the geometric properties of the imaging system, the ability to distinguish between point targets, the ability to measure the periodicity of repetitive targets, and the ability to measure the spectral properties of small targets.

The most commonly quoted quantity is the instantaneous field of view (IFOV), which is the angle subtended by the geometrical projection of a single detector element onto the Earth's surface. It may also be given as the distance D measured along the ground, in which case it clearly depends on the sensor height through the relation D = h * b, where h is the height and b is the angular IFOV in radians. An alternative measure of the IFOV is based on the point spread function (PSF), e.g. the width of the PSF at half its maximum value.
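As an illustrative example, a sensor with an angular IFOV of roughly 0.0425 mrad at an altitude of about 705 km (approximately the Landsat TM case) gives D = 705,000 m * 0.0000425 ≈ 30 m on the ground, which matches the commonly quoted 30 m pixel size.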


A problem with the IFOV definition, however, is that it is a purely geometric definition and does not take into account the spectral properties of the target. The effective resolution element (ERE) has been defined as "the size of an area for which a single radiance value can be assigned with reasonable assurance that the response is within 5% of the value representing the actual relative radiance". Being based on actual image data, this quantity may be more useful in some situations than the IFOV.

Other methods of defining the spatial resolving power of a sensor are based on the ability of the device to distinguish between specified targets. One of these concerns the ratio of the modulation of the image to that of the real target. Modulation, M, is defined as:

M = (Emax - Emin) / (Emax + Emin)

where Emax and Emin are the maximum and minimum radiance values recorded over the image.
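For instance, if the recorded radiances across a bar target range from Emin = 60 to Emax = 100 (in arbitrary counts), the image modulation is M = (100 - 60) / (100 + 60) = 40 / 160 = 0.25; the closer this value is to the modulation of the real target, the better the sensor resolves it.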

Temporal resolution refers to the frequency with which images of a given geographic location can be acquired. Satellites offer not only the best chance of frequent data coverage but also of regular coverage. The temporal resolution is determined by the orbital characteristics and the swath width, the width of the imaged area. The swath width is given by 2h tan(FOV/2), where h is the altitude of the sensor and FOV is the angular field of view of the sensor.
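As an illustrative example, a sensor at an altitude of about 705 km with a field of view of roughly 15 degrees sweeps a swath of about 2 * 705 km * tan(7.5 degrees) ≈ 185 km, which is close to the commonly quoted Landsat TM swath width.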

How to Improve Your Image?


Analysis of remotely sensed data is done using various image processing techniques and methods, which include:

Analog image processing

Digital image processing


Visual or analog processing techniques are applied to hard-copy data such as photographs or printouts. Image analysis in visual techniques adopts certain elements of interpretation, which are listed in the table below. The use of these fundamental elements depends not only on the area being studied, but also on the knowledge the analyst has of the study area. For example, the texture of an object is very useful in distinguishing objects that may appear the same if judged solely on tone (water and tree canopy, for instance, may have the same mean brightness values, but their textures are much different). Association is a very powerful image analysis tool when coupled with general knowledge of the site. Humans are thus adept at applying collateral data and personal knowledge to the task of image processing. Examining remotely sensed data with a multi-concept approach (multispectral, multitemporal, multiscale and multidisciplinary) allows us to decide not only what an object is but also its importance. Apart from this, analog image processing techniques also include optical photogrammetric techniques allowing precise measurement of the height, width, location, etc. of an object.
Digital image processing is a collection of techniques for the manipulation of digital images by computers. The raw data received from the imaging sensors on satellite platforms contain flaws and deficiencies. To overcome these flaws and recover the original information, the data need to undergo several steps of processing. The processing required varies from image to image, depending on the image format, the initial condition of the image, the information of interest and the composition of the image scene.
TABLE: ELEMENTS OF IMAGE INTERPRETATION

Primary Elements: Black and White Tone; Color; Stereoscopic Parallax
Spatial Arrangement of Tone & Color: Size; Shape; Texture; Pattern
Based on Analysis of Primary Elements: Height; Shadow
Contextual Elements: Site; Association

Pre-processing consists of those operations that prepare the data for subsequent analysis by attempting to correct or compensate for systematic errors. The digital imagery is subjected to several corrections, such as geometric, radiometric and atmospheric corrections, though all of these might not necessarily be applied in every case. These errors are systematic and can be removed before the data reach the user. The investigator should decide which pre-processing techniques are relevant on the basis of the nature of the information to be extracted from the remotely sensed data. After pre-processing is complete, the analyst may use feature extraction to reduce the dimensionality of the data. Feature extraction is thus the process of isolating the most useful components of the data for further study while discarding the less useful aspects (errors, noise, etc.). Feature extraction reduces the number of variables that must be examined, thereby saving time and resources.

Image enhancement operations are carried out to improve the interpretability of the image by increasing the apparent contrast among various features in the scene. The enhancement techniques depend mainly upon two factors:

The digital data (i.e. the spectral bands and resolution)

The objectives of interpretation


As an image enhancement technique often drastically alters the original numeric data, it is normally used only for visual (manual) interpretation and not for further numeric analysis. Common enhancements include image reduction, image rectification, image magnification, transect extraction, contrast adjustments, band ratioing, spatial filtering, Fourier transformations, principal component analysis and texture transformation.
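As a hedged illustration of band ratioing, the sketch below computes the widely used normalized difference vegetation index (NDVI) from near-infrared and red bands; the band names and the small epsilon used to avoid division by zero are assumptions for the example.

```python
# Band ratioing sketch: normalized difference vegetation index (NDVI).
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """nir, red: float arrays of reflectance or digital counts. Returns values in [-1, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)
```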

Information extraction is the last step toward the final output of the image analysis. After pre-processing and image enhancement, the remotely sensed data are subjected to quantitative analysis to assign individual pixels to specific classes. Classification of the image uses pixels of known identity to classify the remaining pixels whose identity is unknown. After classification is complete, it is necessary to evaluate its accuracy by comparing the categories on the classified images with areas of known identity on the ground. The final result of the analysis consists of maps (or images), data and a report. Together these three components provide the user with full information concerning the source data, the method of analysis, and the outcome and its reliability.

Pre-Processing of the Remotely Sensed Images

When remotely sensed data are received from the imaging sensors on the satellite platforms, they contain flaws and deficiencies. Pre-processing refers to those operations that are preliminary to the main analysis. Pre-processing includes a wide range of operations, from the very simple to extremes of abstractness and complexity. These can be categorized as follows:

1. Feature Extraction

2. Radiometric Corrections

3. Geometric Corrections

4. Atmospheric Correction


Image restoration covers the techniques used to remove unwanted and distracting elements, such as image or system noise, atmospheric interference and sensor-motion effects, that arise from limitations in signal sensing and digitization or in the data recording and transmission process. Data from which these effects have been removed are said to be "restored" to their correct or original condition, although we can of course never know what the correct values might be, and must always remember that the attempts to correct the data may themselves introduce errors. Image restoration thus includes the efforts to correct for both radiometric and geometric errors.

Feature Extraction
Feature extraction does not refer to the geographical features visible in the image, but rather to "statistical" characteristics of the image data: individual bands or combinations of band values that carry information about systematic variation within the scene. In multispectral data it helps portray the essential elements of the image, and it also reduces the number of spectral bands that have to be analyzed. After feature extraction is complete, the analyst can work with the desired channels or bands, with the retained components carrying most of the useful information. Such pre-processing increases the speed and reduces the cost of analysis. A common approach is sketched below.
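The hedged sketch below shows one common feature-extraction approach, a principal component transform of a multispectral image; the choice of PCA and the number of components kept are illustrative assumptions rather than a method prescribed in this report.

```python
# Principal component transform of a multispectral image (illustrative sketch).
import numpy as np

def principal_components(bands, n_keep=3):
    """bands: array of shape (n_bands, rows, cols). Returns (n_keep, rows, cols)."""
    n_bands, rows, cols = bands.shape
    # Arrange pixels as observations, bands as variables.
    data = bands.reshape(n_bands, -1).astype(np.float64)
    data -= data.mean(axis=1, keepdims=True)

    # Eigendecomposition of the band covariance matrix.
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    components = eigvecs[:, order[:n_keep]]    # shape (n_bands, n_keep)

    # Project the pixels onto the leading components.
    projected = components.T @ data            # shape (n_keep, n_pixels)
    return projected.reshape(n_keep, rows, cols)
```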

Radiometric Corrections
When image data are recorded by the sensors, they contain errors in the measured brightness values of the pixels. These errors are referred to as radiometric errors and can result from:

1. The instruments used to record the data

2. The effect of the atmosphere


Radiometric processing influences the brightness values of an image to correct for sensor malfunctions or to adjust the values to compensate for atmospheric degradation. Radiometric distortion can be of two types:

1. The relative distribution of brightness over an image in a given band can be different from that in the ground scene.

2. The relative brightness of a single pixel from band to band can be distorted compared with the spectral reflectance character of the corresponding region on the ground.


CHAPTER 3
AERIAL IMAGERY

Aerial imagery can reveal a great deal about soil and crop conditions. The bird's-eye view an aerial image provides, combined with field knowledge, allows growers to observe issues that affect yield. Imagery technology enhances the ability to be proactive and recognize a problematic area, thus minimizing yield loss and limiting exposure of other areas of the field. Hemisphere GPS Imagery uses infrared technology to help see the big picture and identify these everyday issues. Digital infrared sensors are very sensitive to subtle differences in plant health and growth rate. Anything that changes the appearance of leaves (such as curling, wilting, and defoliation) has an effect on the image. Computer enhancement makes these variations within the canopy stand out, often indicating disease, water, weed, or fertility problems.

Because of Hemisphere GPS technology, aerial imagery is over 30 times more detailed than any commercially available satellite imagery and is available in selected areas for the 2010 growing season. Images can be taken on a scheduled or as-needed basis. Aerial images provide a snapshot of the crop condition; in the example imagery, healthy crop conditions appear in red and less healthy conditions in green. These snapshots of crop variations can then be turned into variable rate prescription maps (PMaps). Imagery can also be used to identify crop stress over a period of time, with problem areas (marked with yellow arrows in the example images) showing potential plant damage (e.g. disease, insects, etc.).
Aerial images, however, store information about the electromagnetic radiance of the complete scene in almost continuous form; therefore they support the localization of break lines and of linear or spatial objects. The Map Mart Aerial Image Library covers all of the continental United States as well as a growing number of international locations. The aerial imagery ranges in date from 1926 to the present day, depending upon the location. Imagery can be requested and ordered by selecting an area on an interactive map, or larger areas, such as cities or counties, can be purchased in bundles. Many of the current digital datasets are available for download within a few minutes of purchase.

In a different sense of the term, aerial image measurement in lithography includes non-linear, three-dimensional, and materials effects on imaging, but excludes the processing effects of printing and etching on the wafer. Aerial image emulation has been applied successfully to CD-uniformity (CDU) measurement, and aerial image metrology systems are traditionally used to evaluate defect printability and repair success.

Figure: Aerial image of the test building


Digital aerial imagery should remain in the public domain and be archived to secure its availability for future scientific, legal, and historical purposes.

Aerial photography is the taking of photographs of the ground from an elevated position. The term usually refers to images in which the camera is not supported by a ground-based structure. Cameras may be hand-held or mounted, and photographs may be taken by a photographer, triggered remotely or triggered automatically. Platforms for aerial photography include fixed-wing aircraft, helicopters, balloons, blimps and dirigibles, rockets, kites, poles, parachutes, and vehicle-mounted poles. Aerial photography should not be confused with air-to-air photography, in which aircraft serve both as the photo platform and the subject.

USES OF IMAGERY:

Aerial photography is used in cartography (particularly in photogrammetric


surveys, which are often the basis for topographic maps), land-use planning,
archaeology, movie production, environmental studies, surveillance, commercial
advertising, conveyancing, and artistic projects. In the United States, aerial
photographs are used in many Phase I Environmental Site Assessments for property
analysis. Aerial photos are often processed using GIS software.

Radio-controlled aircraft

Advances in radio controlled models have made it possible for model


aircraft to conduct low-altitude aerial photography. This has benefited real-estate
advertising, where commercial and residential properties are the photographic
subject. Full-size, manned aircraft are prohibited from low flights above populated
locations.[3] Small scale model aircraft offer increased photographic access to these
previously restricted areas. Miniature vehicles do not replace full size aircraft, as full
size aircraft are capable of longer flight times, higher altitudes, and greater equipment
payloads. They are, however, useful in any situation in which a full-scale aircraft
would be dangerous to operate. Examples would include the inspection of
transformers atop power transmission lines and slow, low-level flight over
agricultural fields, both of which can be accomplished by a large-scale radio
controlled helicopter. Professional-grade, gyroscopically stabilized camera platforms
are available for use under such a model; a large model helicopter with a 26cc
gasoline engine can hoist a payload of approximately seven kilograms (15 lbs).


Recent (2006) FAA regulations grounding all commercial RC model flights have been updated to require formal FAA certification before permission is granted to fly at any altitude in the USA. Because anything capable of being viewed from a public space is considered outside the realm of privacy in the United States, aerial photography may legally document features and occurrences on private property.

TYPES OF AERIAL PHOTOGRAPHS

Oblique photographs

Photographs taken at an angle are called oblique photographs. Those taken from nearly straight down are sometimes called low oblique, and photographs taken from a shallow angle are called high oblique.

Vertical photographs

Vertical photographs are taken straight down. They are mainly used in photogrammetry and image interpretation. Pictures intended for photogrammetry were traditionally taken with special large-format cameras with calibrated and documented geometric properties.

Combinations

Aerial photographs are often combined. Depending on their purpose, this can be done in several ways. A few are listed below.

Several photographs can be taken with one handheld camera and later stitched together into a panorama.


In pictometry five rigidly mounted cameras provide one vertical and four low
oblique pictures that can be used together.

In some digital cameras for aerial photogrammetry, photographs from several imaging elements, sometimes with separate lenses, are geometrically corrected and combined into one photograph in the camera.

Orthophotos

Vertical photographs are often used to create orthophotos, photographs which


have been geometrically "corrected" so as to be usable as a map. In other words, an
orthophoto is a simulation of a photograph taken from an infinite distance, looking
straight down from nadir. Perspective must obviously be removed, but variations in
terrain should also be corrected for. Multiple geometric transformations are applied to
the image, depending on the perspective and terrain corrections required on a
particular part of the image. Orthophotos are commonly used in geographic
information systems, such as are used by mapping agencies (e.g. Ordnance Survey) to
create maps. Once the images have been aligned, or 'registered', with known real-
world coordinates, they can be widely deployed.

Large sets of orthophotos, typically derived from multiple sources and divided into "tiles" (each typically 256 x 256 pixels in size), are widely used in online map systems such as Google Maps. OpenStreetMap offers the use of similar orthophotos for deriving new map data. Google Earth overlays orthophotos or satellite imagery onto a digital elevation model to simulate 3D landscapes.
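As a hedged aside on how such tile sets are indexed, the sketch below converts a latitude/longitude pair into the x/y tile numbers of the Web Mercator ("slippy map") scheme commonly used by online map systems; the zoom level and coordinates in the example are arbitrary.

```python
# Web Mercator ("slippy map") tile indexing sketch.
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Return (xtile, ytile) for the 256x256 tile containing the given point."""
    n = 2 ** zoom
    lat_rad = math.radians(lat_deg)
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

# Example call with an arbitrary point and zoom level:
print(latlon_to_tile(51.5, -0.12, 12))
```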

Aerial video


With advancements in video technology, aerial video is becoming more popular. Orthogonal video is shot from aircraft mapping pipelines, crop fields, and other points of interest. Using GPS, the video may be embedded with metadata and later synced with a video-mapping program.

This spatial multimedia is the timely union of digital media, including still photography, motion video, stereo, panoramic imagery sets, immersive media constructs, audio, and other data, with location and date-time information from GPS and other location designs. Aerial videos are an emerging form of spatial multimedia which can be used for scene understanding and object tracking. The input video is captured by low-flying aerial platforms and typically contains strong parallax from non-ground-plane structures. The integration of digital video, global positioning systems (GPS) and automated image processing will improve the accuracy and cost-effectiveness of data collection and reduction. Several different aerial platforms are under investigation for the data collection.

NON-LINEAR IMAGE ENHANCEMENT TECHNIQUE

We propose a non-linear image enhancement method which allows selective enhancement based on the contrast sensitivity function of the human visual system. We also propose an evaluation method for measuring the performance of the algorithm and for comparing it with existing approaches. The selective enhancement of the proposed approach is especially suitable for digital television applications, to improve the perceived visual quality of images when the source image contains a less than satisfactory amount of high frequencies for various reasons, including the interpolation used to convert standard-definition sources into high-definition images. Non-linear processing can in principle generate new frequency components, and it is thus attractive in such applications.


PROPOSED ENHANCEMENT METHOD

Basic Strategy

The basic strategy of the proposed approach shares the same principle as existing methods: assuming that the input image is denoted by I, the enhanced image O is obtained by the following processing:

O = I + NL(HP(I))

where HP() stands for high-pass filtering and NL() is a non-linear operator. As will become clear in subsequent sections, the non-linear processing includes a scaling step and a clipping step. The HP() step is based on a set of Gabor filters.
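The hedged sketch below illustrates this O = I + NL(HP(I)) structure. For brevity it uses a Gaussian unsharp-mask style high-pass filter in place of the Gabor filter bank described above, and the gain and clipping limits are arbitrary assumptions.

```python
# Sketch of the O = I + NL(HP(I)) enhancement structure (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigma=2.0, gain=1.5, clip=0.1):
    """image: 2-D float array in [0, 1]. Returns the enhanced image in [0, 1]."""
    # HP(): high-pass filtering (here a simple Gaussian unsharp mask,
    # standing in for the Gabor filter bank of the proposed method).
    highpass = image - gaussian_filter(image, sigma)

    # NL(): non-linear operator consisting of a scaling step and a clipping step.
    boosted = np.clip(gain * highpass, -clip, clip)

    # O = I + NL(HP(I))
    return np.clip(image + boosted, 0.0, 1.0)
```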
The performance of a perceptual image enhancement algorithm is typically judged through a subjective test. In most current work in the literature, such a subjective test is simplified to showing an enhanced image along with the original to a viewer. While a viewer may report that a blurry image is indeed enhanced, this approach does not allow systematic comparison between two competing methods.

Furthermore, since the ideal goal of enhancement is to recover the high-frequency components that are lost in the imaging or other processes, it would be desirable to show whether an enhancement algorithm indeed generates the desired high-frequency components. Such tests do not answer this question. (Note that, although showing the Fourier transform of the enhanced image may illustrate whether high-frequency components are added, this is not an accurate evaluation of a method, because the Fourier transform provides only a global measure of the signal spectrum. For example, disturbing ringing artifacts may appear as false high-frequency components in the Fourier transform.)

CHAPTER 4
RESULTS ANALYSIS

RESULTS

Figure: Histogram-adjusted image

Figure: Orthogonal wavelet decomposition

Figure: WDRC approximation

Figure: WDRC reconstructed spatial-domain image

Figure: Final result

CONCLUSION
In this project, the application of the WDRC algorithm to aerial imagery has been presented. The results obtained from a large variety of aerial images show strong robustness, high image quality, and improved visibility, indicating promise for aerial imagery during poor-visibility flight conditions. This algorithm can further be applied to real-time video streaming, and the enhanced video can be projected onto the pilot's head-up display for aviation safety.


FUTURE SCOPE
As future work:
This work can be extended to increase the accuracy by increasing the level of the wavelet transformation.
The method can also be used as a building block in full-fledged applications; using the same DWT framework, applications such as compression and watermarking can be developed.


REFERENCES
[1] D. J. Jobson, Z. Rahman, G. A. Woodell, and G. D. Hines, "A Comparison of Visual Statistics for the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes," Visual Information Processing XV, Proc. SPIE 6246 (2006).
[2] S. M. Pizer, J. B. Zimmerman, and E. Staab, "Adaptive grey level assignment in CT scan display," Journal of Computer Assisted Tomography, vol. 8, pp. 300-305 (1984).
[3] J. B. Zimmerman, S. B. Cousins, K. M. Hartzell, M. E. Frisse, and M. G. Kahn, "A psychophysical comparison of two methods for adaptive histogram equalization," Journal of Digital Imaging, vol. 2, pp. 82-91 (1989).
[4] S. M. Pizer and E. P. Amburn, "Adaptive histogram equalization and its variations," Computer Vision, Graphics, and Image Processing, vol. 39, pp. 355-368 (1987).
[5] K. Rehm and W. J. Dallas, "Artifact suppression in digital chest radiographs enhanced with adaptive histogram equalization," SPIE: Medical Imaging III (1989).
