
EC 624 Digital Image Processing (3 0 2 8)
Class I: Introduction
Instructor: P. K. Bora

Digital Image Processing


Digital image processing means the processing of digital images on digital hardware, usually a computer.

What is an analog image?

An electrical signal, for example the output of a video camera, that gives an electric voltage for each location in an image.

What is a digital image?

A 2D array of numbers representing a sampled version of an image. The image is defined over a grid, each grid location being called a pixel. The grid is finite, and each intensity value is represented by a finite number of bits. A binary image is represented by one bit per pixel; a gray-level image is typically represented by 8 bits per pixel.

Pixels and Intensities: [grid of sample pixel intensity values, e.g. 16, 18, 19, ..., 255]

Mathematically

We can think of an image as a function f from R² to R: f(x, y) gives the intensity at position (x, y). Realistically, we expect the image to be defined only over a rectangle, with a finite range: f: [a, b] × [c, d] → [0, 1].

What is a Colour Image?


Three components: R, G, B, each usually represented by 8 bits (hence 24-bit colour). These three primaries are mixed in different proportions to get different colours. For different processing applications other formats (YIQ, YCbCr, HSI, etc.) are used.

A color image is just a three component function. We can write this as a vector-valued function:

$$f(x, y) = \begin{bmatrix} r(x, y) \\ g(x, y) \\ b(x, y) \end{bmatrix}$$

Types of Digital Image


Digital images include:
- Digital photos
- Image sequences used for video broadcasting and playback
- Multi-sensor data like satellite images in the visible, infrared and microwave bands
- Medical images like ultrasound, gamma-ray, and X-ray images, and radio-band images like MRI
- Astronomical images
- Electron-microscope images used to study material structure

Examples: photographic image, ultrasound image, mammogram.

Image Processing

Digital image processing deals with the manipulation and analysis of digital images by digital hardware, usually a computer:
- Emphasizing certain pictorial information for better clarity (human interpretation)
- Automatic machine processing of scene data
- Compressing the image data for efficient utilization of storage space and transmission bandwidth

Image Processing
An image processing operation typically defines a new image g in terms of an existing image f. We can also transform the domain of f.

Image Processing

Image filtering: change the range of the image,

g(x) = h(f(x))

Image warping: change the domain of the image,

g(x) = f(h(x))

Example

Image Restoration: a degraded image is processed to produce a restored image.

Image Processing steps


- Acquisition, sampling/quantization, compression
- Image enhancement and restoration
- Feature extraction
- Image segmentation
- Object recognition
- Image interpretation

Image Acquisition

An analog image is obtained by scanning the sensor output. A modern scanning device such as a CCD camera contains an array of photodetectors, a set of electronic switches and control circuitry, all on a single chip.

Image Acquisition
Image Sensor → Sample and Hold → Analog to Digital → Digital Image

The sample-and-hold circuit takes a measurement and holds it for conversion to digital; the analog-to-digital converter converts the measurement to a digital value.

Sampling/ Quantization/ Compression


A digital image is obtained by sampling and quantizing an analog image. The analog image signal is sampled at a rate determined by the application:
- Still images: 512×512, 256×256
- Video: 720×480, 360×240, 1024×768 (HDTV)
The intensity is quantized into a fixed number of levels determined by human perceptual limitations:
- 8 bits is sufficient for all but the most demanding applications
- 10 bits: television production, printing
- 12–16 bits: medical imagery

Sampling/ Quantization/ Compression (Contd.)


Raw video is very bulky. Example: the transmission of high-definition uncompressed digital video at 1024×768 pixels, 24 bits/pixel, 25 frames/s requires approximately 472 Mbps. We have to compress the raw data to store and transmit it.
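The 472 Mbps figure is just the product of the numbers quoted above; a one-line Python sanity check:

```python
# Raw bit-rate of uncompressed video: width x height x bits_per_pixel x frames_per_second
def raw_bitrate_mbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e6

print(raw_bitrate_mbps(1024, 768, 24, 25))  # ~471.9 Mbps, the 472 Mbps quoted above
```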

Image Enhancement
Improves the quality of an image by enhancing the contrast, sharpening the edges, removing noise, etc. As an example, consider the image filtering operation used to remove noise.

Example: Image Filtering

Original Image

Filtered Image

Histogram Equalization
Enhances the contrast of an image by transforming the intensity values so that the histogram of the output image is approximately uniform; the contrast is thereby improved.

Feature Extraction

Extracting features like edges. Very important for detecting the boundaries of objects. Done through a digital differentiation operation.

Example: Edge Detection


Original Saturn image and the corresponding edge image.

Segmentation

Partitioning of an image into connected homogeneous regions. Homogeneity may be defined in terms of: gray value, colour, texture, shape, motion.

Segmented Image

Object Recognition
An object recognition system finds objects in the real world from an image of the world, using object models which are known a priori. It is a labelling problem based on models of known objects.

Object Recognition (Contd.)

- Object or model representation
- Feature extraction
- Feature-model matching
- Hypotheses formation
- Object verification

Image Understanding
Inferring about the scene on the basis of the recognized objects. Supervision is required. Normally considered part of artificial intelligence.

Books
1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education, 2001. (Main Text)
2. A. K. Jain, Fundamentals of Digital Image Processing, Pearson Education, 1989.
3. R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Education, 2004. (Lab Reference)

Evaluation Scheme
End Sem: 50
Mid Sem: 25
Quiz: 5
Matlab Assignment: 10
Mini Project: 10
Total: 100

1. MINI PROJECT

MATLAB implementation, report and demonstration of an advanced topic such as:
- Video compression
- Video mosaicing
- Video-based tracking
- Medical image compression
- Video watermarking
- Medical image segmentation
- Image and video restoration
- Biometric recognition

2D Discrete Time (Space) Fourier Transform


Recall the DTFT of a 1D sequence $\{x[n],\ n = -\infty, \ldots, \infty\}$:

$$X(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}$$

and

$$x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega)\, e^{j\omega n}\, d\omega$$

Note that $X(\omega + 2\pi) = X(\omega)$.

$X(\omega)$ exists if and only if $x[n]$ is absolutely summable, i.e.,

$$\sum_{n=-\infty}^{\infty} |x[n]| < \infty$$

Relationship between CTFT and DTFT


Consider a discrete sequence $\{x[n]\}$ obtained by sampling an analog signal $x_a(t)$ at a uniform sampling rate $F_s = 1/T$, where $T$ is the sampling period: $x[n] = x_a(nT)$, $n = 0, \pm 1, \ldots$. We can represent the sampling process by means of the Dirac delta function; the sampled signal can then be written in the continuous domain as

$$x_s(t) = x_a(t) \sum_{n=-\infty}^{\infty} \delta(t - nT) = \sum_{n=-\infty}^{\infty} x_a(nT)\, \delta(t - nT)$$

Thus the analog frequency $\Omega$ and the discrete frequency $\omega$ are related as $\omega = \Omega T$.

2D DSFT
Consider the signal $\{f[m,n],\ m = -\infty,\ldots,\infty,\ n = -\infty,\ldots,\infty\}$ defined over the two-dimensional space. Also assume

$$\sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} |f[m,n]| < \infty$$

Then the two-dimensional discrete-space Fourier transform (2D DSFT) and its inverse are defined by the following relations:

$$F(u,v) = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} f[m,n]\, e^{-j(um+vn)}$$

and

$$f[m,n] = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} F(u,v)\, e^{+j(um+vn)}\, du\, dv$$

Note that $F(u,v)$ is doubly periodic in $u$ and $v$. The following properties of $F(u,v)$ are easily verified:
- Linearity
- Separability
- Shifting theorem: if $f[m,n] \leftrightarrow F(u,v)$, then $f[m-m_0, n-n_0] \leftrightarrow e^{-j(um_0+vn_0)} F(u,v)$
- Convolution theorem: if $f_1[m,n] \leftrightarrow F_1(u,v)$ and $f_2[m,n] \leftrightarrow F_2(u,v)$, then $f_1[m,n] * f_2[m,n] \leftrightarrow F_1(u,v)\, F_2(u,v)$
- Eigenfunction, modulation, correlation, inner product, Parseval's theorem

2D DFT
Motivation
Consider the 1D DTFT $X(\omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}$, which is uniquely defined for each $\omega \in [0, 2\pi]$.

Numerical evaluation of $X(\omega)$ involves a very large (infinite) amount of data and has to be done for each $\omega$. An easier way is the discrete Fourier transform (DFT), obtained by sampling $X(\omega)$ at a regular interval. Sampling periodically in the frequency domain at a rate $\frac{1}{N}$ means that the data sequence will be periodic with period $N$. The relation between the Fourier transform of an analog signal $x_a(t)$ and the DFT of the sampled version is illustrated in the figure below.

2D DFT
The 2D DFT of an $M \times N$ 2D sequence is defined as

$$F[k_1, k_2] = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} f[m,n]\, e^{-j2\pi\left(\frac{mk_1}{M} + \frac{nk_2}{N}\right)}, \quad k_1 = 0,1,\ldots,M-1,\ k_2 = 0,1,\ldots,N-1$$

and the inverse 2D DFT is given by

$$f[m,n] = \frac{1}{MN} \sum_{k_2=0}^{N-1} \sum_{k_1=0}^{M-1} F[k_1, k_2]\, e^{j2\pi\left(\frac{mk_1}{M} + \frac{nk_2}{N}\right)}, \quad m = 0,1,\ldots,M-1,\ n = 0,1,\ldots,N-1$$

The 2D DFT is periodic in both $k_1$ and $k_2$. Thus

$$F[k_1, k_2] = F[k_1 + M, k_2 + N]$$

Properties of 2D DFT
Shifting property:

If $f[m,n] \leftrightarrow F[k_1, k_2]$, then

$$f[m-m_0, n-n_0] \leftrightarrow e^{-j2\pi\left(\frac{m_0 k_1}{M} + \frac{n_0 k_2}{N}\right)} F[k_1, k_2]$$

Separability property:

Since

$$e^{-j2\pi\left(\frac{mk_1}{M} + \frac{nk_2}{N}\right)} = e^{-j\frac{2\pi}{M}mk_1}\, e^{-j\frac{2\pi}{N}nk_2}$$

we can write

$$F[k_1, k_2] = \sum_{n=0}^{N-1} \left( \sum_{m=0}^{M-1} f[m,n]\, e^{-j\frac{2\pi}{M}mk_1} \right) e^{-j\frac{2\pi}{N}nk_2} = \sum_{n=0}^{N-1} F_1[k_1, n]\, e^{-j\frac{2\pi}{N}nk_2}$$

where

$$F_1[k_1, n] = \sum_{m=0}^{M-1} f[m,n]\, e^{-j\frac{2\pi}{M}mk_1}$$

Thus the 2D DFT can be computed from a 1D FFT routine: first a 1D DFT along one index, then along the other.
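To illustrate the separability property, here is a minimal NumPy sketch (my illustration, not from the slides) that computes the 2D DFT by applying a 1D FFT along each axis in turn and checks the result against NumPy's built-in 2D FFT:

```python
import numpy as np

# Separable computation of the 2D DFT of an M x N block.
f = np.random.rand(8, 16)

F1 = np.fft.fft(f, axis=0)             # inner sum over index m, for every n
F = np.fft.fft(F1, axis=1)             # outer sum over index n

assert np.allclose(F, np.fft.fft2(f))  # matches the direct 2D DFT
```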

2D Fourier Transform
Frequency domain representation of a 2D signal: consider a two-dimensional signal $f(x,y)$. The signal $f(x,y)$ and its two-dimensional Fourier transform $F(u,v)$ are related by $f(x,y) \leftrightarrow F(u,v)$:

$$F(u,v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\, e^{-j(xu+yv)}\, dx\, dy$$

$$f(x,y) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u,v)\, e^{j(xu+yv)}\, du\, dv$$

$u$ and $v$ represent the spatial frequency in the horizontal and vertical directions, in radians/length. $F(u,v)$ represents the component of $f(x,y)$ with frequencies $u$ and $v$. A sufficient condition for the existence of $F(u,v)$ is that $f(x,y)$ is absolutely integrable:

$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |f(x,y)|\, dx\, dy < \infty$$

Illustration of 2D Fourier transform


Properties of 2D Fourier Transform


1. The 2D Fourier transform is in general a complex function of the real variables $u$ and $v$. As such, it can be expressed in terms of the magnitude $|F(u,v)|$ and the phase $\angle F(u,v)$.

2. Linearity property: if $f_1(x,y) \leftrightarrow F_1(u,v)$ and $f_2(x,y) \leftrightarrow F_2(u,v)$, then

$$a f_1(x,y) + b f_2(x,y) \leftrightarrow a F_1(u,v) + b F_2(u,v)$$

3. Shifting property:

$$f(x-x_0, y-y_0) \leftrightarrow e^{-j(x_0 u + y_0 v)} F(u,v)$$

The phase information changes; there is no change in magnitude.

4. Modulation property:

$$f(x,y)\, e^{+j(u_0 x + v_0 y)} \leftrightarrow F(u-u_0, v-v_0)$$

5. Eigenfunction property: complex exponentials are the eigenfunctions of linear shift-invariant systems; the Fourier bases are the eigenfunctions of linear systems. For an imaging system, $h(x,y)$ is called the point spread function and $H(u,v)$ is called the optical transfer function.

6. Separability property:

$$F(u,v) = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(x,y)\, e^{-jux}\, dx \right) e^{-jvy}\, dy = \int_{-\infty}^{\infty} F_1(u, y)\, e^{-jvy}\, dy$$

where $F_1(u, y)$ is the 1D Fourier transform of $f(x,y)$ with respect to $x$.

In particular, if $f(x,y) = f_1(x)\, f_2(y)$, then

$$F(u,v) = F_1(u)\, F_2(v)$$

Suppose $f(x,y) = \mathrm{rect}\left(\frac{x}{a}\right) \mathrm{rect}\left(\frac{y}{a}\right)$; then

$$F(u,v) = (a\, \mathrm{sinc}\, au)(a\, \mathrm{sinc}\, av) = a^2\, \mathrm{sinc}\, au\, \mathrm{sinc}\, av$$

7. 2D convolution theorem:

If $g(x,y) = f(x,y) * h(x,y)$, then $G(u,v) = F(u,v)\, H(u,v)$.

Similarly, if $g(x,y) = f(x,y)\, h(x,y)$, then $G(u,v) = \frac{1}{4\pi^2}\, F(u,v) * H(u,v)$.

Thus the convolution of two functions is equivalent to the product of the corresponding Fourier transforms.

8. Preservation of the inner product: recall that the inner product of two functions $f(x,y)$ and $h(x,y)$ is defined by

$$\langle f(x,y), h(x,y) \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\, h^*(x,y)\, dx\, dy$$

The inner product is preserved (up to a constant) through the Fourier transform:

$$\langle f(x,y), h(x,y) \rangle = \frac{1}{4\pi^2} \langle F(u,v), H(u,v) \rangle$$

where $\langle F(u,v), H(u,v) \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u,v)\, H^*(u,v)\, du\, dv$.

In particular, $\langle f, f \rangle = \frac{1}{4\pi^2} \langle F, F \rangle$:

$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |f(x,y)|^2\, dx\, dy = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |F(u,v)|^2\, du\, dv$$

Hence the norm is preserved through the 2D Fourier transform (Parseval's theorem).


Colour Image Processing


Colour plays an important role in image processing. Colour image processing can be divided into two major areas:
- Full-colour processing: colour sensors such as colour cameras and colour scanners are used to capture colour images. Processing involves enhancement and other image processing tasks.
- Pseudo-colour processing: assigning a colour to a particular monochrome intensity or range of intensities to enhance visual discrimination.

Colour Fundamentals
Visible spectrum: approximately 400–700 nm. The frequency, or mix of frequencies, of the light determines the colour. Visible colours: VIBGYOR, with UV and IR just beyond the two extremes (excluded).

HVS review
Cones are the sensors in the eye responsible for colour vision; humans perceive colour using three types of cones. The primary colours are RGB because the cones of our eyes basically absorb these three colours. The sensation of a certain colour is produced by the mixed response of these three types of cones in a certain proportion. Experiments show that the 6–7 million cones in the human eye can be divided into red, green and blue vision: 65% of cones are sensitive to red, 33% to green and only 2% to blue (although the blue cones are the most sensitive).

Experimental curves for colour Sensitivity

Absorption of light by red, green and blue cones in the human eye as a function of wavelength

Colour representations: Primary colours


According to the CIE (Commission Internationale de l'Eclairage, the International Commission on Illumination), the wavelength of each primary colour is set as follows: blue = 435.8 nm, green = 546.1 nm, and red = 700 nm. However, this standard is only an approximation; it has been found experimentally that no single wavelength may be called red, green, or blue: there is no pure red, green or blue colour. The primary colours can be added in certain proportions to produce different colours of light.

Natural and Artificial Colour


The colour produced by mixing RGB is not a natural colour. A natural colour has a single wavelength, say λ. The same colour can be produced artificially by combining weighted R, G and B components, each having a different wavelength.

The idea is that these three colours together produce the same response as the wavelength λ alone would have produced (the proportions of R, G and B are chosen accordingly), thereby giving, to some extent, the sensation of the colour with wavelength λ.

Colour representations: Secondary colours


Mixing two primary colours of light in equal proportion produces a secondary colour of light: magenta (R+B), cyan (G+B) and yellow (R+G). Mixing R, G and B in equal proportion produces white light. The second figure shows the primary/secondary colours of pigments.

Colour representations: Secondary colours


There is a difference between the primary colours of light and the primary colours of pigments. A primary colour of a pigment is defined as one that subtracts or absorbs a primary colour of light and reflects or transmits the other two. Hence the primary colours of pigments are magenta, cyan, and yellow, and the corresponding secondary colours are red, green, and blue.

Brightness, Hue, and Saturation


Brightness as perceived (subjective brightness) is a logarithmic function of light intensity; it embodies the achromatic notion of intensity. Hue is an attribute associated with the dominant wavelength in a mixture of light waves; it represents the dominant colour as perceived by an observer. Thus, when we call an object red, orange, or yellow, we are specifying its hue. Saturation refers to the relative purity, or the amount of white light mixed with a hue. The pure spectrum colours are fully saturated; a colour such as pink (red plus white) is less saturated. The degree of saturation is inversely proportional to the amount of white light added.

Brightness, Hue, and Saturation (contd..)


Red, green, blue, yellow, orange, etc. are different hues. Red and pink have the same hue but different saturation. A faint red and a piercingly intense red have different brightness. Hue and saturation taken together are called chromaticity. So brightness + chromaticity defines any colour.

XYZ Colour System


CIE (Commission Internationale de l'Eclairage), 1931: spectral RGB primaries, scaled such that X = Y = Z matches spectrally flat white. The entire colour gamut can be produced by the three primaries used in the CIE 3-colour system. A particular colour (of wavelength λ) is represented by three components X, Y, and Z, called tri-stimulus values, obtained from the corresponding spectral components R, G, B by

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.490 & 0.310 & 0.210 \\ 0.177 & 0.813 & 0.011 \\ 0.000 & 0.010 & 0.990 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

Colour composition using XYZ


XYZ Colour System

A colour is then specified by its tri-chromatic coefficients, defined as x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z), so that x + y + z = 1. For any wavelength of light in the visible spectrum, these values can be obtained directly from curves or tables compiled from experimental results.

Chromaticity Diagram
Shows colour composition as a function of x and y. Only two of x, y and z are independent, since z = 1 − (x + y). The triangle in the diagram shows the colour gamut for a typical RGB system plotted in the xy plane. The axes extend from 0 to 1. The origin corresponds to BLUE; the extreme points on the axes correspond to RED and GREEN. The point x = y = 1/3 (marked by the white spot) corresponds to WHITE.

Actual Chromaticity Diagram


The positions of the various spectrum colours, from violet (380 nm) to red (700 nm), are indicated around the boundary (100% saturation); these are pure colours. Any interior point represents a mixture of spectrum colours. A straight line joining a spectrum colour point to the equal-energy point shows all the different shades of that spectrum colour.

Any colour in the interior of the "horseshoe" can be achieved through the linear combination of two pure spectral colours. A straight line joining any two points shows all the colours that may be produced by mixing the colours corresponding to those two points. The straight line connecting red and blue is referred to as the line of purples.

The RGB primaries form a triangular colour gamut. White falls in the centre of the diagram.

Colour vision model: RGB colour Model


Colour models are normally invented for practical reasons, so a wide variety exist. The RGB colour space (model) is a linear colour space that formally uses single-wavelength primaries; informally, RGB uses whatever phosphors a monitor has as primaries. Available colours are usually represented as a unit cube, the RGB cube, whose edges represent the R, G, and B weights.

Schematic of the RGB colour cube

RGB 24-bit colour cube

CMY and CMYK colour models


Cyan, magenta and yellow are the primary pigment colours, forming a subtractive colour space related to RGB by

$$\begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

Equal amounts of C, M and Y should produce black, but in practical printing devices an additional black pigment is needed; this gives the CMYK colour space.

Decoupling the colour components from intensity

Decoupling the intensity from the colour components has several advantages:
- Human eyes are more sensitive to intensity than to hue.
- We can distribute the bits for encoding in a more effective way.
- We can drop the colour part altogether if we want gray-scale images; this is how black-and-white TVs pick up the same signal as colour ones.
- We can process the intensity and colour parts separately. Example: histogram equalization on the intensity part to enhance contrast while leaving the relative colours the same.

HSI Colour system


Hue is the colour corresponding to the dominant wavelength measured in angle with reference to the red axis Saturation measures the purity of the colour. In this sense impurity means how much white is present. Saturation is 1 for a pure colour and less than 1 for an impure colour. Intensity is the chromatic equivalent of brightness also means the grey level component.

I S

I H

HSI Colour Model

The HSI model can be obtained from the RGB model. The diagonal of the RGB cube joining black and white is the intensity axis.

HSI Model

HSI Colour Model


HSI colour model based on a triangle and a circle are shown The circle and the triangle are perpendicular to the intensity axis.

Conversion from RGB space to HSI


The following formulae show how to convert from RGB space to HSI:

$$H = \begin{cases} \theta & \text{if } B \le G \\ 360^\circ - \theta & \text{if } B > G \end{cases}, \qquad \theta = \cos^{-1}\left\{ \frac{\tfrac{1}{2}[(R-G) + (R-B)]}{\left[(R-G)^2 + (R-B)(G-B)\right]^{1/2}} \right\}$$

$$S = 1 - \frac{3}{R+G+B}\min(R, G, B), \qquad I = \frac{1}{3}(R+G+B)$$

Conversion from RGB space to HSI


To convert from HSI to RGB, the process depends on which colour sector H lies in.

For the RG sector ($0^\circ \le H < 120^\circ$):
$$B = I(1-S), \quad R = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right], \quad G = 3I - (R + B)$$

For the GB sector ($120^\circ \le H < 240^\circ$), let $H' = H - 120^\circ$:
$$R = I(1-S), \quad G = I\left[1 + \frac{S\cos H'}{\cos(60^\circ - H')}\right], \quad B = 3I - (R + G)$$

For the BR sector ($240^\circ \le H \le 360^\circ$), let $H' = H - 240^\circ$:
$$G = I(1-S), \quad B = I\left[1 + \frac{S\cos H'}{\cos(60^\circ - H')}\right], \quad R = 3I - (G + B)$$

YIQ model

The YIQ colour model is the NTSC standard for analog video transmission. Y stands for intensity (luminance); I is the in-phase component (orange–cyan axis); Q is the quadrature component (magenta–green axis). The Y component is decoupled because the signal has to be compatible with both monochrome and colour television. The relationship between the YIQ and RGB models is

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

Y-Cb-Cr colour model


An international standard for studio-quality video. This colour model is chosen such that it achieves the maximum amount of decorrelation; it was obtained from extensive experiments on human observers.

$$Y = 0.299R + 0.587G + 0.114B, \qquad C_b = B - Y, \qquad C_r = R - Y$$

Colour balancing
Refers to the adjustment of the relative amounts of the red, green, and blue primary colours in an image such that neutral colours are reproduced correctly. Colour imbalance is a serious problem in colour image processing. A simple balancing procedure:
- Select a gray level, say white, where the RGB components should be equal.
- Examine the RGB values. Keep one component fixed and match the other components to it, thereby defining a transformation for each of the variable components.
- Apply these transformations to the whole image.
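A minimal NumPy sketch of this procedure (my illustration): the reference white patch coordinates are assumed inputs, and one simple choice of the matching transformation is a per-channel gain:

```python
import numpy as np

def white_patch_balance(img, y, x):
    """Scale G and B so the pixel at (y, x), assumed neutral, gets equal RGB.

    img: float array of shape (H, W, 3) in [0, 1]; the R channel is kept fixed.
    """
    r, g, b = img[y, x]
    out = img.copy()
    out[..., 1] *= r / g          # match green to red at the reference pixel
    out[..., 2] *= r / b          # match blue to red at the reference pixel
    return np.clip(out, 0.0, 1.0)
```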

Example: Colour Balanced Image

Histograms of a colour image


- Histograms of the luminance and chrominance components, computed separately
- Colour histograms (H–S components, or normalized R–G components)
- A useful way to segment objects, e.g., skin vs. non-skin
- Colour-based indexing of images

[Figure: example 2D histogram over hue and saturation bins.]

Contrast enhancement by histogram equalisation


Histogram equalisation cannot be applied separately to each colour channel (this distorts the colours). Instead:
- Convert to HSI space
- Apply histogram equalisation to the I component
- Correct the saturation if needed
- Convert back to RGB values

Colour image smoothing


Vector processing is used. Example: the averaging low-pass filter. Averaging a vector pixel is equivalent to averaging each component (channel) separately, so linear smoothing can be done per channel.

Colour image sharpening

Vector median filter


We cannot apply median filtering to the component images separately, because that results in colour distortion: if each channel is separately median filtered, the net "median" may be completely different from the value of any pixel in the window. The vector median filter instead minimizes the sum of the distances of a vector pixel from the other vector pixels in the window; the pixel with the minimum distance gives the vector median. The set of all vector pixels inside the window is

$$X_W = \{x_1, x_2, \ldots, x_N\}$$

Computation of vector median filter


(1) Find the sum of distances $\delta_i$ of the $i$th $(1 \le i \le N)$ vector pixel from all other neighbouring vector pixels in the window:

$$\delta_i = \sum_{j=1}^{N} d(x_i, x_j)$$

where $d(x_i, x_j)$ represents an appropriate distance measure between the $i$th and $j$th neighbouring vector pixels.

(2) Arrange the $\delta_i$ in ascending order and assign the vector pixel $x_i$ a rank equal to that of $\delta_i$. Thus an ordering $\delta_{(1)} \le \delta_{(2)} \le \ldots \le \delta_{(N)}$ implies the same ordering of the corresponding vectors, $x_{(1)}, x_{(2)}, \ldots, x_{(N)}$, where the number in parentheses denotes the rank.

Computation of vector median filter (contd..)


The set of rank-ordered vector pixels is

$$X_R = \{x_{(1)}, x_{(2)}, \ldots, x_{(N)}\}$$

(3) Take the vector median as $x_{VMF} = x_{(1)}$. The vector median is the vector that has the minimum sum of distances (SOD) to all other vector pixels in the window.
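The three steps above amount to an argmin over pairwise distances. A minimal NumPy sketch for a single window (my illustration; the Euclidean distance is used as the distance measure d):

```python
import numpy as np

def vector_median(window):
    """window: (N, 3) array of RGB vector pixels in the filter window.

    Returns the pixel minimizing the sum of Euclidean distances to the others.
    """
    # dists[i, j] = ||x_i - x_j||: pairwise distances between vector pixels
    diff = window[:, None, :] - window[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))
    sod = dists.sum(axis=1)           # delta_i: sum of distances for each pixel
    return window[np.argmin(sod)]     # x_(1), the vector median

pixels = np.array([[10, 10, 10], [12, 11, 9], [200, 0, 0], [11, 12, 10]], float)
print(vector_median(pixels))          # one of the original pixels; the outlier is rejected
```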

Edge Detection and Colour image segmentation

Considering the vector pixels as feature vectors, we can apply clustering techniques to segment the colour image.

EDGE DETECTION
Edge detection is one of the important and difficult operations in image processing. It is an important step in image segmentation, the process of partitioning an image into constituent objects. An edge indicates a boundary between object(s) and background.

Edge

When pixel intensity is plotted along a particular spatial dimension, an edge appears as a sudden jump or step at some point $x_0$:
- The magnitude of the first derivative $\frac{df}{dx}$ is maximum at $x_0$.
- The second derivative $\frac{d^2 f}{dx^2}$ crosses zero at the edge point.

All edge detection methods are based on the above two principles. In two-dimensional spatial coordinates the intensity function is a two-dimensional surface, and we have to consider the maximum of the magnitude of the gradient.

The gradient magnitude gives the edge location:

$$|\nabla f| = \sqrt{f_x^2 + f_y^2}$$

For simplicity of implementation, the gradient magnitude is often approximated by $|f_x| + |f_y|$. The direction of the normal to the edge is obtained from $\theta = \tan^{-1}(f_y / f_x)$. The second derivative is implemented as the Laplacian:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$

Differentiation is highly prone to high-frequency noise: an ideal differentiator corresponds, in the frequency domain, to adding a zero at the origin, i.e., a gain increase of 20 dB per decade, so high-frequency noise is amplified. To circumvent this problem, low-pass filtering has to be performed first. Differentiation itself is implemented as a finite-difference operation.

Three types of differences are generally used:

forward difference = f(x+1) − f(x)
backward difference = f(x) − f(x−1)
centre difference = ( f(x+1) − f(x−1) ) / 2

The most common kernels used for the gradient edge detector are the Roberts, Sobel and Prewitt edge operators.

Roberts Edge Operator

Uses 2×2 diagonal-difference kernels, $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ and $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$. Disadvantage: high sensitivity to noise.

Prewitt Edge Operator

Does some averaging to reduce the effect of noise. May be considered as forward-difference operations over all 2-pixel blocks in a 3×3 window.

Sobel Edge Operator

Does some averaging to reduce the effect of noise, like the Prewitt operator. May be considered as forward-difference operations over all 2×2 blocks in a 3×3 window.

Gradient Based Edge detection

Find $f_x$ and $f_y$ using a suitable operator. Compute the gradient magnitude $|\nabla f|$. Edge pixels are those for which $|\nabla f| > T$, where $T$ is a suitable threshold.
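A minimal NumPy sketch of gradient-based edge detection using the standard Sobel kernels (my illustration; the threshold T is a free parameter):

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels for the two derivative directions.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
KY = KX.T

def sobel_edges(img, T):
    fx = convolve(img.astype(float), KX)   # f_x
    fy = convolve(img.astype(float), KY)   # f_y
    grad = np.hypot(fx, fy)                # |grad f| = sqrt(fx^2 + fy^2)
    return grad > T                        # edge map: pixels above the threshold
```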

Example

Second derivative Based


For a two-dimensional image, we can use the orientation-free Laplacian operator as the second derivative. The Laplacian of the image f is given by $\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$.

Laplacian Operator

Advantages: no thresholding; a symmetric operation. Disadvantages: noise is amplified more; it gives no information about edge orientation.

Model based edge detection


Marr studied the literature on mammalian visual systems and summarized it in five major points:
- In natural images, features of interest occur at a variety of scales. No single operator can function at all of these scales, so the results of operators at each of many scales should be combined.
- A natural scene does not appear to consist of diffraction patterns or other wave-like effects, so some form of local averaging (smoothing) must take place.
- The optimal smoothing filter that matches the observed requirements of biological vision (smooth and localized in the spatial domain, smooth and band-limited in the frequency domain) is the Gaussian.
- When a change in intensity (an edge) occurs, there is an extreme value in the first derivative of intensity, which corresponds to a zero crossing in the second derivative.
- The orientation-independent differential operator of lowest order is the Laplacian.

Based on these five observations, an edge detection algorithm is proposed as follows:
- Convolve the image with a two-dimensional Gaussian function.
- Compute the Laplacian of the convolved image.
- Edge pixels are those where there is a zero crossing in the second derivative.

LOG Operation
Convolving the image with the Gaussian and then applying the Laplacian can be combined into a single convolution with the Laplacian of Gaussian (LoG) operator, the inverted "Mexican hat". [Figure: the continuous LoG function and its discrete approximation.]
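A compact sketch of this Marr–Hildreth procedure (assuming SciPy is available; sigma, the Gaussian scale, is a free parameter):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth(img, sigma=2.0):
    log = gaussian_laplace(img.astype(float), sigma)  # Gaussian smoothing + Laplacian in one step
    # A zero crossing: the LoG changes sign between horizontal or vertical neighbours.
    zc = np.zeros_like(log, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return zc
```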

Canny Edge detector


Canny's criteria:
- Minimizing the probability of detection error.
- Good localization: the edge should be detected where it actually is in the image.
- A single response corresponding to one edge.

Canny Algorithm
1. Smooth the image with a Gaussian filter. For more edge detail, the variance of the filter is made small; if less detail is required, the variance is made large. Noise is smoothed out.

2. Gradient operation.

3. Non-maximal suppression: consider the pixels in the neighbourhood of the current pixel along the gradient direction; if the gradient magnitude at either of those pixels is greater than that of the current pixel, mark the current pixel as a non-edge.

4. Thresholding with hysteresis:
- Mark all pixels with $|\nabla f| > T_H$ as edges.
- Mark all pixels with $|\nabla f| < T_L$ as non-edges.
- A pixel with $T_L \le |\nabla f| \le T_H$ is marked as an edge only if it is connected to a strong edge.

Example

Canny

Edge linking
After labelling the edges, we have to link similar edge points to get the object boundary. Two neighbouring points $(x_1, y_1)$ and $(x_2, y_2)$ are linked if their gradient magnitudes and gradient directions are sufficiently close.

Line Detection and Hough transform


Many edges can be approximated by straight lines. For $n$ edge pixels, there are $\frac{n(n-1)}{2}$ possible lines. To find whether a point lies close to a line we would have to perform $\frac{n(n-1)}{2}$ comparisons per point, a total of $O(n^3)$ comparisons. The Hough transform instead uses a parametric representation of a straight line for line detection.

In the image plane a straight line is $y = mx + c$. A point $(x, y)$ in the image maps to the straight line $c = -xm + y$ in the $(m, c)$ parameter space. The points $(x, y)$ and $(x_1, y_1)$ map to lines $l_1$ and $l_2$ respectively in the $m$–$c$ space, and $l_1$ and $l_2$ intersect at a point $P$ representing the $(m, c)$ values of the line joining $(x, y)$ and $(x_1, y_1)$.

The straight-line map of another point collinear with these two points also passes through $P$. The intersections of multiple lines in the $m$–$c$ plane give the $(m, c)$ values of the lines in the edge-image plane.

The transformation is implemented by an accumulator array $A$, each accumulator corresponding to a quantized value of $(m, c)$. The array $A$ is initialized to zero. For each edge point $(x, y)$ and for each $m_i$ in the range $[m_{\min}, m_{\max}]$, find

$$c_j = -m_i x + y$$

and increment $A(i, j)$ by 1.

Hough transform algorithm


- Initialize a 2D array $A$ of accumulators to zero.
- For each edge point $(x, y)$ and each quantized slope $m_i$, find $c = -m_i x + y$ and increment $A(i, j)$ by 1.
- Threshold the accumulators: the indices of the accumulators with entries greater than a threshold give the $(m, c)$ values of the lines.
- Group the edges that belong to each line by traversing each line.

Hough transform variation

$m$ and $c$ are, in principle, unbounded, so the $(m, c)$ parameterization cannot handle all situations (e.g., vertical lines). Rewrite the line equation as

$$x \cos\theta + y \sin\theta = \rho$$

Instead of $(m, c)$, we can consider $(\rho, \theta)$ as the parameters, with $\theta$ varying between $-90^\circ$ and $90^\circ$ and $\rho$ varying from $0$ to $\sqrt{M^2 + N^2}$ for an $M \times N$ image.

Example

Circle detection
Other parametric curves like circles, ellipses, etc. can be detected by the Hough transform technique. A circle is parameterized as

$$(x - x_0)^2 + (y - y_0)^2 = r^2 = \text{constant}$$

For circles of undetermined radius, use a 3D Hough transform for the parameters $(x_0, y_0, r)$.

Example

Compression Basics
Today's world depends upon a lot of data, either stored in a computer or transmitted through a communication system. Compression involves reducing the number of bits needed to represent the data for storage and transmission. Image compression, in particular, is the application of compression to digital images.

Storage requirement examples: one second of digital video without compression requires 720 × 480 × 24 × 25 bits ≈ 24.8 MB. One 4-minute song: 44100 samples/s × 16 bits/sample × 4 × 60 s ≈ 20 MB. How do we store these data efficiently?

Bandwidth requirement
The large data rate also means a large bandwidth requirement for transmission. For an available bandwidth of B, the maximum allowable signalling rate is 2B symbols/s, which can be resolved without ambiguity; with binary signalling this is 2B bits/s. How do we send a large amount of data in real time through a limited-bandwidth channel, say a telephone channel? We have to compress the raw data to store and transmit it.

Lossless Vs Lossy Compression


Lossless: the compressed image can be restored without any loss of information. Applications: medical images, document images, GIS.

Lossy: perfect reconstruction is not possible, but visually useful information is retained; provides large compression. Examples: video broadcasting, video conferencing, progressive transmission of images, digital libraries and image databases.

Encoder and Decoder


A digital compression system requires two algorithms: compression of the data at the source (encoding) and decompression at the destination (decoding).

What are the principles behind compression?


Compression is possible if the signal data contain redundancy; statistically speaking, the data contain highly correlated sample values. Examples: speech data, image data, temperature and rainfall data.

Types of Redundancy

- Coding redundancy
- Spatial redundancy
- Temporal redundancy
- Perceptual redundancy

Coding Redundancy

Some symbols are used more often than others; in English text, for example, the letter E is far more common than the letter Z. More common symbols are given shorter code lengths, and less common symbols longer code lengths. Coding redundancy is exploited in lossless coding such as Huffman coding.

Example of Coding Redundancy


Morse code: Morse noticed that certain letters occur more frequently than others. In order to reduce the average time required to send a telegraph message, frequent letters were given shorter symbols, e.g., e (.), a (.-), q (--.-) and j (.---).

Spatial Redundancy
Neighbouring data samples are correlated. Given samples $x(n-1), x(n-2), \ldots$, a part of $x(n)$ can be predicted if the data are correlated.

Spatial Correlation: Example

Temporal Redundancy
In video, the same objects may be present in consecutive frames, so that objects in one frame may be predicted from another.

Frame k

Frame k+1

Perceptual Redundancy
Humans are sensitive to only limited changes in the amplitude of the signal. While choosing the levels of quantization, this fact may be exploited. "Visually lossless" means that the degradation is not visible to the human eye.

Example: Humans are less sensitive to variation of colour

64 levels

32 levels

Principle Behind lossless Compression


Lossless compression methods work by identifying frequently occurring symbols in the data and representing these symbols in an efficient way. Examples: run-length encoding (RLE), Huffman coding, arithmetic coding.

Elements of Information Theory


Information is a measure of uncertainty: a more uncertain (less probable) symbol carries more information. The information of a symbol is therefore related to its probability.

Information Theory (Contd.)


A source X is a random variable that takes the symbols $x_1, x_2, \ldots, x_n$ with probabilities $p_1, p_2, \ldots, p_n$. The self-information of symbol $x_i$ is defined as

$$I(x_i) = \log_2 \frac{1}{p_i}$$

Information Theory (Contd.)


Suppose a symbol x always occurs: then p(x) = 1, so I(x) = 0 (no information). If the base of the logarithm is 2, the unit of information is the bit. If p(x) = 1/2, then I(x) = −log₂(1/2) = 1 bit. Example: the outcome of tossing a fair coin requires one bit to convey.

Information Theory (Contd.)

$$H(X) = \sum_{i=1}^{n} p_i \log_2 \frac{1}{p_i} \ \text{bits/symbol}$$

This is the average information content of the source. It measures the uncertainty associated with the source and is called the entropy.

Entropy
The notion of entropy was introduced by Ludwig Boltzmann; it is his only epitaph:

$$S = k \ln W$$

Properties of Entropy
1. $0 \le H(X) \le \log_2 n$
2. $H(X) = \log_2 n$ when all $n$ symbols are equally likely.

If X is a binary source with symbols 0 and 1 emitted with probabilities $p$ and $(1-p)$ respectively, then

$$H(X) = p \log_2 \frac{1}{p} + (1-p) \log_2 \frac{1}{1-p}$$

Properties of a Code
Codes should be uniquely decodable, and preferably instantaneous (we can decode by reading from left to right as soon as each code word is received). Instantaneous codes satisfy the prefix property: no code word is a prefix of any other. The average codeword length is

$$L_{avg} = \sum_{i=1}^{n} l_i\, p_i$$

Kraft's Inequality
There is an instantaneous binary code with codewords having lengths $l_1, l_2, \ldots, l_I$ if and only if

$$\sum_{i=1}^{I} 2^{-l_i} \le 1$$

For example, there is an instantaneous binary code with lengths 1, 2, 3, 3, since

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} = 1$$

An example of such a code is 0, 10, 110, 111. There is no instantaneous binary code with lengths 1, 2, 2, 3, since

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} = 1.125 > 1$$

Shannon's Noiseless Coding Theorem


Given a discrete memoryless source X with symbols $x_1, x_2, \ldots, x_n$, the average codeword length of any instantaneous code satisfies $L_{avg} \ge H(X)$. Moreover, there exists at least one code such that $L_{avg} \le H(X) + 1$.

Shannon's Noiseless Coding Theorem


(Contd.)

Given a discrete memoryless source X with symbols $x_1, x_2, \ldots, x_n$, if we code strings of $m$ symbols at a time, the average codeword length per source symbol for the best instantaneous code satisfies $H(X) \le L_{avg} < H(X) + \frac{1}{m}$, approaching the entropy as the block length grows.

Example
Symbol | Probability
x1 | 0.125
x2 | 0.125
x3 | 0.250
x4 | 0.500

$$H(X) = 0.125\log_2(1/0.125) + 0.125\log_2(1/0.125) + 0.25\log_2(1/0.25) + 0.5\log_2(1/0.5) = 1.75 \text{ bits/symbol}$$

Example (Contd..)
Symbol | Probability | Code
x1 | 0.125 | 000
x2 | 0.125 | 001
x3 | 0.250 | 01
x4 | 0.500 | 1

$$L_{avg} = 0.125 \times 3 + 0.125 \times 3 + 0.25 \times 2 + 0.5 \times 1 = 1.75 \text{ bits/symbol}$$

which equals the entropy, so this code is optimal.

Prefix code and binary tree


A prefix code can be represented by a binary tree with each branch labelled 0 or 1, emanating from a root node and having n leaf nodes. A prefix code word is obtained by tracing the branches from the root node to a leaf node.

Huffman coding
Based on a lossless statistical method of the 1950s. Creates a probability tree, repeatedly combining the two lowest probabilities to obtain the code. [Example tree for probabilities 0.125, 0.125, 0.25, 0.5, with branches labelled 0 and 1.]

Huffman Coding ( Contd..)

The most common data value (the one with the highest frequency) gets the shortest code. A Huffman table of data values versus codes must be sent along with the data. Coding and decoding times can be long. Typical compression ratios: 2:1 to 3:1.

Steps in Huffman Coding


- Arrange the symbol probabilities $p_i$ in decreasing order.
- While there is more than one node, merge the two nodes with the smallest probabilities to form a new node with probability equal to their sum, and arbitrarily assign 1 and 0 to each pair of branches merging into a node.
- Read each code sequentially from the root node to the leaf node where the symbol is located.
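A compact Python sketch of these steps using a heap (my illustration; ties are broken arbitrarily, as the algorithm allows):

```python
import heapq

def huffman_code(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> code string."""
    # Heap of (probability, tiebreak, {symbol: code-so-far}) entries.
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # the two smallest probabilities
        p1, _, c1 = heapq.heappop(heap)
        # Prepend 0 to one subtree's codes and 1 to the other's, then merge.
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

print(huffman_code({"x1": 0.125, "x2": 0.125, "x3": 0.25, "x4": 0.5}))
# e.g. {'x4': '1', 'x3': '01', 'x1': '000', 'x2': '001'} -- lengths 1, 2, 3, 3
```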

Run-length coding
Looks for runs of identical pixel values and codes each run as a (value, run-length) pair. Example: an 18-pixel row such as

40 40 10 10 10 10 10 10 10 10 10 0 0 0 0 0 0 0

becomes (40, 2), (10, 9), (0, 7), reducing the size from 18 bytes to 6. Compression is higher when the data contain predominantly low-frequency (slowly varying) information. Typical compression ratios: 4:1 to 10:1. Used in fax machines, and for coding the quantized transform coefficients in a lossy coder.
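A minimal Python sketch of run-length encoding and decoding:

```python
from itertools import groupby

def rle_encode(row):
    """Encode a sequence as (value, run_length) pairs."""
    return [(v, len(list(g))) for v, g in groupby(row)]

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

row = [40, 40] + [10] * 9 + [0] * 7
pairs = rle_encode(row)
print(pairs)                        # [(40, 2), (10, 9), (0, 7)]: 18 bytes -> 6
assert rle_decode(pairs) == row
```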

Arithmetic coding
Codes a sequence of symbols rather than a single symbol at a time.

$$A = \{a_1, a_2, a_3\}; \quad p(a_1) = 0.7,\ p(a_2) = 0.1,\ p(a_3) = 0.2$$

Suppose the sequence $a_1 a_2 a_2 a_3$ has to be coded. A single number lying between 0 and 1 is generated corresponding to all the symbols in the sequence; this number is called the tag. The cumulative probabilities are

$$F(a_1) = 0.7, \quad F(a_2) = 0.8, \quad F(a_3) = 1$$

- Choose the interval corresponding to the first symbol; the tag will lie in this interval.
- Go on subdividing the subintervals according to the symbol probabilities, choosing at each step the subinterval of the next symbol. For the sequence above, the interval shrinks from [0, 0.7] to [0.49, 0.56] to [0.539, 0.546] to [0.5446, 0.546].
- The code (tag) is the arithmetic mean of the final subinterval.
- The tag is sent to the decoder, which must know the symbol probabilities; the decoder repeats the same procedure to decode the symbols.

Decoding algorithm for Arithmetic coding


Initialize $k = 0$, $l^{(0)} = 0$, $u^{(0)} = 1$.
Repeat:
  $k = k + 1$
  $t^* = \dfrac{TAG - l^{(k-1)}}{u^{(k-1)} - l^{(k-1)}}$
  find $x_k$ such that $F_X(x_k - 1) \le t^* < F_X(x_k)$
  update $u^{(k)}, l^{(k)}$
until $k$ = size of the sequence.

Arithmetic coding is used to code the symbols in JPEG2000.

Disadvantage: assumes the data to be stationary; does not consider the dynamics of the data.

LZW (Lempel, Ziv and Welch) coding


Similar to run-length coding but with some statistical methods similar to Huffman Dynamically builds up a dictionary by both encoders and decoders Examples:
Unix command compression Image Compression- Graphics Interchange Format (GIF) Portable document format (PDF)

LZW Coding (contd..)


initialize table with single-character strings
STRING = first input character
WHILE not end of input stream
    CHARACTER = next input character
    IF STRING + CHARACTER is in the string table
        STRING = STRING + CHARACTER
    ELSE
        output the code for STRING
        add STRING + CHARACTER to the string table
        STRING = CHARACTER
    END IF
END WHILE
output the code for STRING

Example
Let aabbbaa be the sequence to be encoded. With the dictionary initialized to 1 = a, 2 = b, the encoder adds the strings 3 = aa, 4 = ab, 5 = bb, 6 = bba as it scans the input. The output for the given sequence is 1 1 2 5 3, which decodes back to aabbbaa according to the dictionary.
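A runnable Python version of the pseudocode above, reproducing the aabbbaa example (dictionary codes numbered as in the slide):

```python
def lzw_encode(data, alphabet):
    table = {ch: i + 1 for i, ch in enumerate(alphabet)}     # 1 = 'a', 2 = 'b', ...
    string, out = data[0], []
    for ch in data[1:]:
        if string + ch in table:
            string += ch                                     # grow the current match
        else:
            out.append(table[string])                        # emit code for the longest match
            table[string + ch] = len(table) + 1              # new dictionary entry
            string = ch
    out.append(table[string])
    return out

print(lzw_encode("aabbbaa", "ab"))  # [1, 1, 2, 5, 3] -- the "1 1 2 5 3" of the example
```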

Lossy Compression
Throws away both non-relevant information and a part of the relevant information to achieve the required compression. Usually involves a series of algorithm-specific transformations of the data, possibly from one domain to another (e.g., to the frequency domain via a Fourier-type transform), without keeping all of the resulting transform terms, thus losing some of the information contained.

Lossy Compression (contd..)


Perceptually unimportant information is discarded, and the remaining information is represented efficiently to achieve compression. The reconstructed data contain degradations with respect to the original data.

Example
Differential encoding: stores the difference between consecutive data samples using a limited number of bits. Discrete cosine transform (DCT): applied to image data. Vector quantization. JPEG (Joint Photographic Experts Group).

Fig.: Original Lena image and the image reconstructed after lossy compression.

Rate distortion theory


Rate distortion theory deals with the problem of representing information while allowing a distortion: a less exact representation requires fewer bits.

[Lossy coder: input $X$, output $Y$.]

Average distortion:

$$E(X-Y)^2 = \sum_x \sum_y (x-y)^2\, p(x)\, p(y \mid x)$$

Rate Distortion theory


Minimize the bit rate subject to the constraint that the average distortion between $X$ and $Y$ is at most $D$. The rate is measured by the mutual information

$$I(X, Y) = H(X) - H(X \mid Y)$$

Hence: minimize $I(X, Y)$ under the distortion constraint $D$.

Rate Distortion function for a Gaussian source


If the source X is a Gaussian random variable with variance $\sigma^2$, the rate distortion function is given by

$$R(D) = \begin{cases} \frac{1}{2} \log_2 \frac{\sigma^2}{D}, & 0 \le D < \sigma^2 \\ 0, & \text{otherwise} \end{cases}$$

[Figure: R(D) decreasing to zero at $D = \sigma^2$.]

Rate Distortion

The Gaussian case presents the worst case for coding: for a non-Gaussian source, the achievable rate is lower than for the Gaussian. If we know nothing about the distribution of X, the Gaussian case gives us the pessimistic bound. An increase of 1 bit improves the SNR by about 6 dB.

Lossy Encoder
Fig. A typical lossy signal/image encoder:

Input Data → Prediction/Transformation → Quantization → Entropy Coding → Compressed Data

(with a quantization table feeding the quantizer and an entropy-coding table feeding the entropy coder)

Differential Encoding
Given samples $x[n-1], x[n-2], \ldots, x[n-p]$, a part of $x[n]$ can be predicted if the data are correlated. A simple prediction scheme expresses the predicted value as a linear combination of the past $p$ samples:

$$\hat{x}[n] = \sum_{i=1}^{p} a_i\, x[n-i]$$

Linear Prediction Coding (LPC)


The prediction parameters $a_1, a_2, \ldots, a_p$ are estimated using the correlation among the data. The prediction parameters and the prediction error $x[n] - \hat{x}[n]$ are transmitted.

LPC (contd..)
Variants of LPC-10 are used for coding speech in mobile communication. Speech is sampled at 8000 samples per second, and frames of 240 samples (30 ms of data) are considered for LPC. For each frame, quantized versions of the 10 prediction parameters and the approximate prediction error are transmitted.

Transform coding
Transform coding applies an invertible linear coordinate transformation to the image: correlated data → transform → less correlated data. Most of the energy is then concentrated in a few transform coefficients. Examples: discrete cosine transform (DCT), discrete wavelet transform (DWT).

Transform selection
Transform | Merits | Demerits
KLT | Theoretically optimal | Data dependent; not fast
DFT | Very fast | Assumes periodicity of data; high-frequency distortion due to the Gibbs phenomenon
DCT | Less high-frequency distortion; high energy compaction | Blocking artifacts
DWT | High energy compaction; scalability | Computationally complex

Also, the DCT is theoretically closer to the KLT and implementation-wise closer to the DFT.

Discrete Cosine Transform (DCT)


A reversible transform, like the Fourier transform. For an $N \times N$ block $f[m,n]$, $m = 0,1,\ldots,N-1$, $n = 0,1,\ldots,N-1$, the 2D DCT is given by

$$F_c(u,v) = \alpha(u)\,\alpha(v) \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} f[m,n] \cos\frac{(2m+1)u\pi}{2N} \cos\frac{(2n+1)v\pi}{2N}, \quad u, v = 0,1,\ldots,N-1$$

with

$$\alpha(u) = \begin{cases} \sqrt{1/N} & u = 0 \\ \sqrt{2/N} & u = 1,2,\ldots,N-1 \end{cases}$$

and similarly for $\alpha(v)$.

DCT (contd..)
Pipeline: x → DCT → Round → Threshold → IDCT.

x = [50, 54, 49, 55, 52, 53, 54, 49]

DCT(x) ≈ [147.07, -0.32, -2.54, 1.54, -1.41, 1.13, -4.30, -3.04]

After rounding and thresholding, only the largest coefficients (the DC value 147 and one or two others) are kept; the IDCT of the thresholded coefficients gives approximately [51, 54, 50, ...], close to the original signal.

We see that only two DCT coefficients contain most of the information about the original signal. The DCT can be easily extended to 2D.
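This experiment is easy to reproduce; a sketch using SciPy's orthonormal DCT (the threshold value of 2 is my choice):

```python
import numpy as np
from scipy.fft import dct, idct

x = np.array([50, 54, 49, 55, 52, 53, 54, 49], float)
X = dct(x, norm='ortho')          # DC term is ~147.07, as in the slide
Xq = np.round(X)
Xq[np.abs(Xq) < 2] = 0            # threshold: drop the small coefficients
x_rec = idct(Xq, norm='ortho')
print(Xq)
print(np.round(x_rec))            # close to the original 8 samples
```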

Block DCT
The DCT can be efficiently implemented in blocks using the FFT and other fast methods. An FFT-based transform is more computationally efficient when applied in blocks rather than on the entire data. For data of length $N$, an $N$-point FFT has computational complexity of order $N \log_2 N$. If the data are divided into sub-blocks of length $n$, the number of sub-blocks is $\frac{N}{n}$ and the computational complexity is

$$\frac{N}{n}\, n \log_2 n = N \log_2 n$$

How to choose block size?


A smaller block size gives more computational efficiency, but neighbouring blocks are correlated, causing inter-block redundancy; if blocks are coded independently, blocking artifacts appear. Plotting reconstruction error against block size (2×2, 4×4, 8×8, ...) shows that beyond an 8×8 block size the reduction in error is not significant.

4 DCT co-efficients per 8X8 block

8 DCT co-efficients per 8X8 block

16 DCT co-efficients per 8X8 block

Quantization
Replaces the transform coefficients with lower-precision approximations, which can be coded in a more compact form. Quantization is a many-to-one function; precision is limited by the number of bits available. Example:

X = [147.07, -0.32, -2.54, 1.54, -1.41] → Quant(X) = [147, 0, -3, 2, -1]

Quantization (contd..)
Information-theoretic significance: the greater the variance of a coefficient, the more information it carries. Estimate the variance of each transform coefficient from the given image, or determine it from an assumed model. In the DCT, the DC coefficient follows a Rayleigh distribution and the AC coefficients follow a generalized Gaussian distribution model. Two methods for quantization are zonal coding and threshold coding.

Zonal coding
The coefficients with more information content (more variance) are retained.

Threshold coding

The coefficients with higher energy are retained; the rest are set to zero. More adaptive, but computationally more expensive.

Zonal Coding mask and the number of bits allotted for each coefficient

Original image and its DCT

Reconstructed image from truncated DCT

JPEG
Joint Photographic Experts Group: a widely used lossy image coding format. Allows a trade-off between compression ratio and image quality; can achieve high compression ratios (20+) with almost invisible differences.

JPEG (contd..)
Image → 8×8 DCT → quantization (using a quantization table) → Huffman coding (using a Huffman table) → coded image.

Baseline JPEG
- Divide the image into blocks of size 8×8.
- Level-shift all 64 pixel values in each block by subtracting $2^{n-1}$, where $2^n$ is the maximum number of gray levels.
- Compute the 2D DCT of each block.
- Quantize the DCT coefficients using a quantization table.
- Zig-zag scan the quantized DCT coefficients to form a 1D sequence.
- Code the 1D sequence (AC and DC) using JPEG Huffman variable-length codes.

An image block and DCT


An 8X8 intensity block

Quantization (quantization table shown)

Zig-zag scanning

Zigzag scanning

AC Coefficients 18 9 3 8 1 3 1 2 4 2 4 0 3 1 0 1 1 0 1 1 0 0 0 1 (39 zeros)

Wavelet Based Compression


Recall that the DWT is implemented through row-wise and column-wise filtering, with down-sampling by 2 after each filtering. The approximation image is decomposed further. In the first stage the image splits into the subbands LL1, HL1, LH1, HH1; in the second stage LL1 splits into LL2, HL2, LH2, HH2, while HL1, LH1, HH1 remain unchanged.

Embedded Tree Image Coding


Embedded bit stream: a bit stream at a lower rate is contained in the higher-rate bit stream (good for progressive transmission). Examples: the Embedded Zerotree Wavelet (EZW) coding algorithm, Shapiro [1993]; the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, Said and Pearlman [1996]; and EBCOT (Embedded Block Coding with Optimized Truncation), proposed by Taubman in 2000.

Tree representation of Wavelet Decomposition

EZW

EZW scans wavelet coefficients subband by subband. Parents are scanned before any of their children, but only after all neighboring parents have been scanned.

EZW coding
Each coefficient is compared against the current threshold T. A coefficient is significant if its amplitude is greater than T; such a coefficient is encoded as positive significant (PS) or negative significant (NS). A zerotree root (ZTR) signifies a coefficient below T with all its children also below T. An isolated zero (IZ) signifies a coefficient below T with at least one child not below T. Two bits are needed to code these four symbols.

Successive Approximation quantization

Sequentially applies a sequence of thresholds T₀, …, T_{N−1} to determine significance: a three-level mid-tread quantizer, refined using a 2-level quantizer.

Example

Initial threshold:

$$T = 2^{\lfloor \log_2 C_{\max} \rfloor} = 2^{\lfloor \log_2 52 \rfloor} = 32$$

Quantization
[Figure: the significant-coefficient interval is refined by ±8 in the 2nd pass.]

JPEG 2000
Not only better efficiency but also more functionality: superior low bit-rate performance, lossless and lossy compression, multiple resolutions, and region-of-interest (ROI) coding.

JPEG2000 v.s. JPEG


JPEG: discrete cosine transform (DCT) → quantization with an 8×8 quantization table → Huffman entropy coding.

JPEG2000 (J2K): discrete wavelet transform (DWT) → quantization for each sub-band → arithmetic entropy coding.

JPEG2000 v.s. JPEG

low bit-rate performance

Video Compression
A video sequence consists of a number of pictures containing a lot of time-domain redundancy; this is exploited to reduce the data rate, leading to video compression.

Motion-compensated frame differencing can be used very effectively to reduce redundant information between frames. Finding corresponding points between frames (i.e., motion estimation) can be difficult because of occlusion, noise, illumination changes, etc. Motion vectors (x, y displacements) are sent to the decoder.

Motion-compensated Prediction

Reference frame

Current frame

Predicted frame

Error frame

Search procedure
Reference frame Current frame

Best match

Search region

Current block

Search Algorithms
Exhaustive Search Three-step search Hierarchical Block Matching

Three step search algorithm


- A search window of ±(2^N − 1) pixels is selected (here N = 3).
- Search at location (0, 0).
- Set S = 2^{N−1} (the step size).
- Search at the eight locations ±S pixels around the current origin.
- From the nine locations searched so far, pick the one with the smallest mean absolute difference (MAD) and make it the new search origin.
- Set S = S/2.
- Repeat the last three stages until S = 1.
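A minimal NumPy sketch of this three-step search for one block (my illustration; block handling is simplified and there is no bounds checking at frame edges):

```python
import numpy as np

def mad(block, ref, y, x):
    """Mean absolute difference between a block and the reference frame at (y, x)."""
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).mean()

def three_step_search(block, ref, y0, x0, N=3):
    best = (y0, x0)
    step = 2 ** (N - 1)                       # S = 4 for N = 3
    while step >= 1:
        cy, cx = best
        candidates = [(cy + dy, cx + dx)      # nine locations: centre + 8 neighbours
                      for dy in (-step, 0, step) for dx in (-step, 0, step)]
        best = min(candidates, key=lambda p: mad(block, ref, *p))
        step //= 2                            # S = S / 2
    return best[0] - y0, best[1] - x0         # motion vector (dy, dx)
```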

Three step search algorithm

First iteration

Minimum at first iteration Second iteration Minimum at second iteration Third iteration

Minimum at third iteration

Video Compression Standards


Two formal organizations: the International Standardization Organization / International Electrotechnical Commission (ISO/IEC) and the International Telecommunication Union (ITU-T). ITU-T standards: H.261 (1990), H.263 (1995). ISO/IEC standards: MPEG-1 (1993), MPEG-4 (1998). Joint ITU-T and ISO/IEC: MPEG-2 (1995), H.264/AVC (2003).

Applications And The Bit-rates Supported


Standard | Application | Target bit-rate
H.261 | Video conferencing and video telephony | p × 64 kbps
MPEG-1 | CD-ROM video applications | 1–1.5 Mbps
MPEG-2 / H.262 | HDTV and DVD | 15–30 Mbps
H.263 | Transmission over PSTN networks | Up to 64 kbps
MPEG-4 | Multimedia applications | 5 kbps – 50 Mbps
H.264 | Broadcast over cable, video on demand, multimedia streaming services | 64 kbps – 240 Mbps

MPEG Video Standards


Motion Pictures Expert Group: standards for coding video and the associated audio, with compression ratios above 100. Standards: MPEG-1, MPEG-2, MPEG-4, MPEG-7, MPEG-21.

MPEG 2 Coder Decoder

MPEG 2 Frame Types


The MPEG system specifies three types of frames within a sequence:
- Intra-coded picture (I-frame): coded independently of all other frames.
- Predictive-coded picture (P-frame): coded based on a prediction from a past I- or P-frame.
- Bidirectionally predictive-coded picture (B-frame): coded based on predictions from past and/or future I- or P-frame(s); uses the fewest bits.

MPEG 2 GOP structure

Image Enhancement
Aimed at improving the quality of an image for better human perception or better interpretation by machines.

Includes both spatial- and frequency-domain techniques:
- Basic gray-level transformations
- Histogram modification
- Average and median filtering
- Frequency-domain operations
- Edge enhancement

Image enhancement
Input image → enhancement technique → better image.

Enhancement is application specific; there is no general theory. It can be done in:
- the spatial domain: manipulate pixel intensities directly
- the frequency domain: modify the Fourier transform

Spatial Domain technique


$$g[x, y] = T(f[x, y]) \quad \text{or} \quad s = T(r)$$

In the simplest case, $g$ depends only on the value of $f$ at $[x, y]$, not on the position of the pixel in the image; this is called a brightness transform or point processing.

Contrast stretching
$$s = T(r)$$

Some useful transformations: the image negative, $s = 255 - r$; and piecewise-linear stretching, where $s = T(r)$ is steepened over chosen input ranges (e.g., enhancing the ranges 100–150 and 150–255).

Thresholding

If I[m, n] > Th, set I[m, n] = 255; else I[m, n] = 0. (Example: Th = 120.)

Log transformation
Compresses the dynamic range:

$$s = c \log(1 + r)$$

where c is a scaling factor. Example: used to display the 2D Fourier spectrum.
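A one-function NumPy sketch of this transformation (choosing c to map the maximum input to 255, a common choice):

```python
import numpy as np

def log_transform(img):
    """s = c * log(1 + r), with c scaling the output to the full [0, 255] range."""
    img = img.astype(float)
    c = 255.0 / np.log(1.0 + img.max())
    return (c * np.log1p(img)).astype(np.uint8)
```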

Log transformation (Contd..)

Power law transformation


Expands the dynamic range:

$$s = c\, r^{\gamma}$$

where $c$ and $\gamma$ are positive constants. Often referred to as gamma correction. Used for image scaling; has the same effect as adjusting the camera exposure time. Example: $c = 1$, varying $\gamma$.

Example: Image Display in the monitor

Sample Input to Monitor

Monitor output

Gamma Correction
Sample Input

Gamma Corrected Input

Monitor Output

Gamma corrected image

Original Image

Corrected by gamma = 1.5

Gamma correction (Contd..)

Gray level slicing

Results of slicing in the black and white regions

Bit-plane slicing
Highlights the contribution of specific bits to the image intensity. Analysing the relative importance of each bit aids in determining the number of quantization levels needed for each pixel.
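A one-line NumPy sketch for extracting bit-plane b (0 = LSB, 7 = MSB) of an 8-bit image; scaling the result by 255 is an assumption made for visualization.

```python
import numpy as np

def bit_plane(img, b):
    # Extract bit b of every pixel and scale it to {0, 255} for display
    return (((img >> b) & 1) * 255).astype(np.uint8)
```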

MSB plane

Original

MSB plane obtained by thresholding at 128

Original Image and Eight bit-planes

Histogram Processing

Includes Histogram Histogram Equalization Histogram specification

Histogram
For r_k in {0, 1, ..., L-1},

p(r_k) = n_k / n

where n_k is the number of pixels with gray level r_k and n is the total number of pixels.

Computing the histogram of a B-bit image:
initialize 2^B bins with 0
for each pixel (x, y):
    if f(x, y) = i, increment the bin with index i
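In NumPy the same computation is a one-liner; shown here as a sketch, with img assumed to be an 8-bit array.

```python
import numpy as np

hist = np.bincount(img.ravel(), minlength=256)  # n_k for k = 0..255
p = hist / img.size                             # normalized histogram p(r_k)
```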

(Figures: a low-contrast image with its histogram, and an improved-contrast image with its histogram.)

Histogram Equalisation

Suppose r represents continuous gray levels, 0 <= r <= 1. Consider a transformation of the form s = T(r) that satisfies the following conditions:
(1) s = T(r) is single-valued and monotonically increasing in r;
(2) 0 <= T(r) <= 1 for 0 <= r <= 1.

Thus T : [0,1] -> [0,1].

The inverse transformation is r = T^{-1}(s), 0 <= s <= 1.

Histogram Equalisation (contd.)

r can be treated as a random variable in [0, 1] with pdf p_r(r). The pdf of s = T(r) is

p_s(s) = p_r(r) |dr/ds|  evaluated at r = T^{-1}(s).

Suppose s = T(r) = integral from 0 to r of p_r(u) du, 0 <= r <= 1 (the CDF of r). Then

ds/dr = p_r(r),

so

p_s(s) = p_r(r) / p_r(r) = 1, 0 <= s <= 1,

i.e. the transformed gray levels are uniformly distributed.

Histogram Equalisation
For r_k in {0, 1, ..., L-1} with p(r_k) = n_k / n, the discrete mapping is

g_k = sum_{i=0}^{k} p(r_i).

The resulting g_k needs to be scaled by (L - 1) and rounded.

(Figures: histogram-equalized image and its histogram.)

Example
The following table shows the process of histogram equalization for a 128x128-pixel, 3-bit (8-level) image.

Gray level r_k   n_k    n_k/n    s_k = round(7 x sum_{i=0}^{k} n_i/n)
0                1116   0.0681   0
1                4513   0.2755   2
2                5420   0.3308   5
3                2149   0.1312   6
4                1389   0.0848   6
5                 917   0.0560   7
6                 654   0.0399   7
7                 226   0.0138   7
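A compact NumPy sketch of this procedure for an 8-bit image; treating the scaled cumulative histogram as a lookup table is the standard trick, though the variable names here are illustrative.

```python
import numpy as np

def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist) / img.size          # cumulative sum of p(r_k)
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]                           # map every pixel through g_k
```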

Histogram specification
Given an image with a particular histogram, another image which has a specified histogram can be generated and this process is called histogram specification or histogram matching.
p_r(r): original histogram; p_z(z): desired histogram.

s = T(r) = integral from 0 to r of p_r(u) du

G(z) = integral from 0 to z of p_z(w) dw

Setting G(z) = s gives

z = G^{-1}(s) = G^{-1}(T(r)).

Image filtering

Image filtering is a neighbourhood operation: a filter mask is moved from point to point in an image and an operation is performed on the pixels inside the mask.

Linear Filtering

In the case of linear filtering, the mask is placed over the pixel, the gray values of the image are multiplied by the corresponding mask weights and then added up to give the new value of the pixel. Thus the filtered image g[m, n] is given by

g[m, n] = sum over (m', n') of w_{m',n'} f[m - m', n - n']

where the summations are performed over the window. The filtering window is usually symmetric about the origin, so that we can also write

g[m, n] = sum over (m', n') of w_{m',n'} f[m + m', n + n'].
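A direct NumPy sketch of the second (correlation) form with a symmetric window; zero padding at the borders is an assumption, since the slides do not specify border handling.

```python
import numpy as np

def linear_filter(f, w):
    """Correlate image f with mask w (odd mask dimensions assumed)."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f.astype(float), ((a, a), (b, b)))   # zero padding
    g = np.zeros(f.shape, dtype=float)
    for i in range(-a, a + 1):
        for j in range(-b, b + 1):
            # accumulate w[i,j] * f[m+i, n+j] for all (m, n) at once
            g += w[i + a, j + b] * fp[a + i : a + i + f.shape[0],
                                      b + j : b + j + f.shape[1]]
    return g
```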

Linear Filtering Illustrated

Averaging Low-pass filter

An example of a linear filter is the averaging low-pass filter. The output of an averaging filter at any pixel is the average of the neighbouring pixels inside the filter mask:

f_avg[m, n] = sum over (i, j) of w_{i,j} f[m + i, n + j]

where the filter mask is of size m x n and f, w_{i,j} are the image pixel values and the filter weights respectively. The averaging filter can be used for blurring and noise reduction.

Exercise: show that the averaging low-pass filter reduces noise. A large filtering window means more blurring.

Averaging filter

Original Image

Noisy Image
3x3 averaging mask:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Filtered Image

Low-pass filter example

Filtered with 7X7 averaging mask

High-pass filter
A highpass filtered image can be computed as the difference between the original and a lowpass filtered version:

Highpass = Original - Lowpass

0 0 0     1/9 1/9 1/9     -1/9 -1/9 -1/9
0 1 0  -  1/9 1/9 1/9  =  -1/9  8/9 -1/9
0 0 0     1/9 1/9 1/9     -1/9 -1/9 -1/9

High-pass filtering (figure): result of filtering with the mask above.

Unsharp Masking
f_s[m, n] = A f[m, n] - f_av[m, n],  A > 1
          = (A - 1) f[m, n] + (f[m, n] - f_av[m, n])
          = (A - 1) f[m, n] + f_high[m, n]

Median filtering
The median filter is a nonlinear filter that outputs the median of the data inside a moving window of pre-determined length. This filter is easily implemented and has some attractive properties:
Useful in eliminating intensity spikes (salt & pepper noise)
Better at preserving edges
Works with up to 50% noise corruption
Exercise: verify that the median filter is a nonlinear filter.

Example: 3x3 window

18  20  20
15 255  17
20  20  20

Sorted data: 15 17 18 20 20 20 20 20 255. Median = 20, so the centre value 255 is replaced by 20.
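A minimal NumPy sketch of a 3x3 median filter; leaving the edge pixels unchanged is one of several reasonable border conventions, assumed here.

```python
import numpy as np

def median_filter3(img):
    out = img.copy()
    for m in range(1, img.shape[0] - 1):
        for n in range(1, img.shape[1] - 1):
            out[m, n] = np.median(img[m-1:m+2, n-1:n+2])
    return out
```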

Median Filtering

IMAGE TRANSFORMS

Image transform

Signal data are represented as vectors. The transform changes the basis of the signal space. The transform is usually linear but not shift-invariant. Useful for:
compact representation of data
separation of noise and salient image features
efficient compression

A transform may be:
orthonormal/unitary or non-orthonormal
complete, overcomplete, or undercomplete
applied to image blocks or to the whole image

1D transform (block diagram): data -> unitary transform -> transform coefficients.

Unitary transform and basis

Let the transform matrix be

T = [ t_{0,0}    t_{0,1}    ...  t_{0,N-1}
      t_{1,0}    t_{1,1}    ...  t_{1,N-1}
      ...
      t_{N-1,0}  t_{N-1,1}  ...  t_{N-1,N-1} ]

For a unitary transform F = T f, the inverse is the conjugate transpose:

T^{-1} = T^{*T}.

Therefore

f = T^{*T} F = F[0] t*_0 + F[1] t*_1 + ... + F[N-1] t*_{N-1},

where t*_k = [t*_{k,0}, t*_{k,1}, ..., t*_{k,N-1}]^T is the conjugate of the k-th row of T. Since the columns of T^{*T} are independent, they form a basis for the N-dimensional space.

Examples of Unitary Transforms

2D DFT

Other examples: DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), DHT (Discrete Hadamard Transform), KLT (Karhunen-Loeve Transform).

Properties of Unitary Transform


1. The rows of T form an orthogonal basis for the N-dimensional complex space.
2. All the eigenvalues of T have unit magnitude, so ||Tf|| = ||f||.
3. Parseval's theorem: T is an energy-preserving transformation.
   F^{*T} F = energy in the transform domain; f^{*T} f = energy in the data domain;
   F^{*T} F = (Tf)^{*T} (Tf) = f^{*T} T^{*T} T f = f^{*T} I f = f^{*T} f.
4. A unitary transform is a length- and distance-preserving transform.
5. Energy is conserved, but is often unevenly distributed among the coefficients.

Decorrelating property

Makes the data sequence uncorrelated; useful for compression.

Let f = [f[0], ..., f[N-1]]^T be the data vector with covariance matrix C_f. The covariance matrix of the transform coefficients is

C_F = T C_f T^{*T}.

The diagonal elements of C_F are variances and the off-diagonal elements are covariances. Perfect decorrelation: all off-diagonal elements are zero.

2D CASE

Separability property: if the transform is separable, unitary and symmetric, we can write

F[k_1, k_2] = sum_{n_2=0}^{N-1} sum_{n_1=0}^{N-1} f[n_1, n_2] t[k_1, n_1] t[n_2, k_2],

i.e. F = T f T in matrix form.

2D separable, unitary and symmetric transforms have the following properties:
i. Energy preserving
ii. Distance preserving
iii. Energy compaction
iv. Other properties specific to particular transforms

Karhunen-Loeve (KL) transform

Let f = [f[0], ..., f[N-1]]^T be the data vector with covariance matrix C_f. C_f can be diagonalised as

U^{*T} C_f U = Lambda

where Lambda is the diagonal matrix formed by the eigenvalues and U is the matrix formed by the eigenvectors of C_f as its columns. The transform matrix of the KL transform is

T = U^{*T}.

KL Transform

The KL transform is

F_KLT = T_KLT (f - mu_f), so that E(F_KLT) = 0.

The covariance matrix of F_KLT is Lambda, which is diagonal.

For the KLT the eigenvalues are arranged in descending order and the transformation matrix is formed by taking the eigenvectors in the order of their eigenvalues. The reconstructed value is f_hat = T^{*T} F_KLT + mu_f. If we want to retain only k transform coefficients, we retain the transformation matrix formed by the k largest eigenvectors.

Principal Component Analysis (PCA): linear combination of the largest principal eigenvectors.

KLT illustrated (figure): the transform axes F1, F2 align with the principal directions of the data scatter.

KL transform

If only the first J coefficients are retained, the mean square error of the approximation is

E ||f - f_hat||^2 = energy of f - energy of f_hat = sum_{i=J}^{N-1} lambda_i,

the sum of the discarded eigenvalues.

This mean square error is minimum over all affine transforms. The transform matrix is data dependent, so the KLT is computationally expensive.
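A small NumPy sketch of the KLT/PCA computed from sample vectors; estimating C_f from data supplied as columns is an assumption about how the vectors are given.

```python
import numpy as np

def klt(X, k):
    """X: N x M array of M sample vectors (columns). Keep k coefficients."""
    mu = X.mean(axis=1, keepdims=True)
    C = np.cov(X)                            # N x N covariance matrix
    lam, U = np.linalg.eigh(C)               # eigenvalues in ascending order
    U = U[:, ::-1][:, :k]                    # k largest eigenvectors
    F = U.T @ (X - mu)                       # forward KLT (k x M)
    X_hat = U @ F + mu                       # reconstruction
    return F, X_hat
```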

1D-DCT

Let f = [f[0], ..., f[N-1]]^T be the data vector. The 1D DCT of f and its inverse (IDCT) are

F_C(k) = alpha(k) sum_{n=0}^{N-1} f(n) cos( pi k (2n+1) / 2N ),  k = 0, ..., N-1
f(n)   = sum_{k=0}^{N-1} alpha(k) F_C(k) cos( pi k (2n+1) / 2N )

where alpha(0) = sqrt(1/N) and alpha(k) = sqrt(2/N) for k = 1, ..., N-1.

The transformation can be written as F = T_C f, where T_C is the transformation matrix with elements t(k, n) = alpha(k) cos( pi k (2n+1) / 2N ).

The DCT is a real transform. T_C is orthogonal: T_C T_C^T = I, so T_C^{-1} = T_C^T.

Relation with the DFT

Let f'(n) be the zero-padded sequence

f'(n) = f(n) for n = 0, 1, ..., N-1
      = 0    for N <= n <= 2N-1.

Then the DCT can be computed through a 2N-point DFT:

F_C(k) = alpha(k) Real{ e^{-j pi k / 2N} sum_{n=0}^{2N-1} f'(n) e^{-j 2 pi n k / 2N} },

i.e. the DCT of f(n) is given by the real part of a phase-shifted 2N-point DFT of f'(n).

Another interpretation

Extend the data symmetrically so that

f'(n) = f(n)            for n = 0, 1, ..., N-1
      = f(2N - 1 - n)   for n = N, N+1, ..., 2N-1.

The 2N-point DFT of f'(n) is

F'(k) = sum_{n=0}^{N-1} f(n) e^{-j 2 pi n k / 2N} + sum_{n=N}^{2N-1} f(2N - 1 - n) e^{-j 2 pi n k / 2N}
      = e^{j pi k / 2N} sum_{n=0}^{N-1} 2 f(n) cos( pi k (2n+1) / 2N ).

Comparing with

F_C(k) = alpha(k) sum_{n=0}^{N-1} f(n) cos( pi k (2n+1) / 2N )

gives

F_C(k) = (1/2) alpha(k) e^{-j pi k / 2N} F'(k),

so the DCT corresponds to the DFT of the symmetrically extended sequence.

(Figure: interpretation of the DCT via the DFT of the symmetrically extended sequence.)
2D-DCT
z

The 2D DCT of

f (nn , ) 1 2

is

2D DCT can be implemented from 1D DCT by


z z

Performing 1D DCT column wise Then performing 1D DCT row wise

DCT is close to KLT


z

Firstly, DCT of basis vectors will be eigen vectors of triangular matrix

Secondly, a first order markov process with correlation coefficient has a covariance matrix

If is close to 1 then

Therefore for a Markov first order process with close to 1 DCT is close to KLT. Because of closeness to KLT, energy compaction, data decorrelation and ease of computation DCT is used in many applications.

Matrices

Many image processing operations are efficiently implemented in terms of matrices; in particular, many linear transforms are used.

Simple example: colour transformation

[ Y ]   [ 0.299  0.587  0.114 ] [ R ]
[ I ] = [ 0.596 -0.275 -0.321 ] [ G ]
[ Q ]   [ 0.212 -0.523  0.311 ] [ B ]
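As a sketch, this matrix can be applied to an RGB image with NumPy; the einsum indexing is one of several equivalent ways to do the per-pixel multiply.

```python
import numpy as np

M = np.array([[0.299,  0.587,  0.114],
              [0.596, -0.275, -0.321],
              [0.212, -0.523,  0.311]])

def rgb_to_yiq(rgb):                      # rgb: H x W x 3 array
    return np.einsum('ij,hwj->hwi', M, rgb.astype(float))
```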

Matrix representation of linear operations


Let x[0], x[1], ..., x[N-1] be N data samples. These data points can be represented by a vector

x = [x[0], x[1], ..., x[N-1]]^T.

Consider a transformation

y[n] = sum_{j=0}^{N-1} a_{n,j} x[j],  n = 0, 1, ..., N-1.

Denoting y = [y[0], y[1], ..., y[N-1]]^T, we get

y = A x

where A is the N x N matrix

A = [ a_{0,0}    a_{0,1}    ...  a_{0,N-1}
      a_{1,0}    a_{1,1}    ...  a_{1,N-1}
      ...
      a_{N-1,0}  a_{N-1,1}  ...  a_{N-1,N-1} ]

Example: Discrete Fourier transform

X[k] = sum_{n=0}^{N-1} x[n] e^{-j 2 pi n k / N},  k = 0, 1, ..., N-1.

In matrix form, with W = e^{-j 2 pi / N},

[ X[0]   ]   [ 1  1        ...  1               ] [ x[0]   ]
[ X[1]   ]   [ 1  W        ...  W^{N-1}         ] [ x[1]   ]
[ ...    ] = [ ...                              ] [ ...    ]
[ X[N-1] ]   [ 1  W^{N-1}  ...  W^{(N-1)(N-1)}  ] [ x[N-1] ]

Example: Rotation operation

A point (x, y) at radius r is rotated by an angle theta to (x', y'):

x' = x cos(theta) - y sin(theta)
y' = x sin(theta) + y cos(theta)

[ x' ]   [ cos(theta)  -sin(theta) ] [ x ]
[ y' ] = [ sin(theta)   cos(theta) ] [ y ]

Matrices - Basic Definitions

Transpose of a matrix: A^T
Symmetric matrix: A^T = A
Hermitian matrix: A^{*T} = A

Example:

A = [  1  i ]      A^T = [ 1  -i ]
    [ -i  1 ]            [ i   1 ]

Inverse of a matrix: A^{-1} A = I, for a non-singular matrix A.

Unitary Matrix

A matrix A of complex elements is called unitary if

A^{-1} = A^{*T}.

Example: with w = -1/2 + j sqrt(3)/2 (a complex cube root of unity),

A = (1/sqrt(3)) [ 1  1   1
                  1  w   w*
                  1  w*  w  ]

Orthogonal Matrix

For an orthogonal matrix A,

A^{-1} = A^T.

Real-valued unitary matrices are orthogonal.

Example: the rotation operation

x' = x cos(theta) - y sin(theta)
y' = x sin(theta) + y cos(theta)

A = [ cos(theta)  -sin(theta) ]      A^{-1} = [  cos(theta)  sin(theta) ] = A^T
    [ sin(theta)   cos(theta) ]               [ -sin(theta)  cos(theta) ]

Example: Is the following matrix orthogonal?

[ 1/sqrt(2)  1/sqrt(2) ]
[ 1/sqrt(2)  1/sqrt(2) ]

Toeplitz Matrix

A matrix is called Toeplitz if the main diagonal contains the same element, each sub-diagonal contains the same element, and each super-diagonal contains the same element; i.e. the entry a_{i,j} depends only on i - j.

Example:

A = [ a_0     a_1     a_2  ...  a_{N-1}
      a_{-1}  a_0     a_1  ...  a_{N-2}
      a_{-2}  a_{-1}  a_0  ...  a_{N-3}
      ...
      a_{-(N-1)}      ...       a_0     ]

The system matrix corresponding to the linear convolution of two sequences is Toeplitz. The autocorrelation matrix of a wide-sense stationary process is Toeplitz.

Circulant Matrix

A matrix is called circulant if each row is obtained by a circular shift of the previous row.

Example:

A = [ a_0      a_1      a_2  ...  a_{N-1}
      a_{N-1}  a_0      a_1  ...  a_{N-2}
      a_{N-2}  a_{N-1}  a_0  ...  a_{N-3}
      ...
      a_1      a_2      a_3  ...  a_0     ]

The system matrix corresponding to the circular convolution of two sequences is a circulant matrix.

Eigenvalues and Eigenvectors

If A is a square matrix and lambda is a scalar such that Ax = lambda x, then lambda is an eigenvalue and x is the corresponding eigenvector. Thus the eigenvectors are invariant in direction under the operation by the matrix.

Example: rotation by theta = 90 degrees:

[ x' ]   [ 0  -1 ] [ x ]           A = [ 0  -1 ]
[ y' ] = [ 1   0 ] [ y ]               [ 1   0 ]

No real eigenvalues and eigenvectors exist.

Now consider rotation by theta = 180 degrees:

A = [ -1   0 ]
    [  0  -1 ]

so that every vector is an eigenvector (with eigenvalue -1): each vector is invariant in direction under rotation by 180 degrees.

Morphological Image Processing


Basically a filtering operation: set-theoretic operations are used for image processing applications. Such operations are simple and fast. In normal image filtering the filter is specified through a convolution mask; in morphological image filtering the same role is played by the structuring element.

Binary Morphology Basics


Image plane is represented by a set
2 ( x , y ) | ( x , y ) Z { }

A binary object is represented by in Binary Image Morphology


Background

A = {(x, y) | I (x, y) = 1} Ac = {(x, y) | I (x, y) = 0}

In Gray-scale Morphology,

A = {( x, y , z ) | ( x, y ) Z 2 , I ( x, y ) = z}

The value of z gives the gray value and (x, y) gives the gray point.

Structuring element
It is similar to the mask in convolution and is used to operate on the object image A.

Structuring element

A
Entire image

Reflection and Translation operations


(1) Reflection: the reflection of A is given by

A_hat = {(-x, -y) | (x, y) in A}.

(2) Translation: the translation of A by an amount x is given by

(A)_x = {a + x | a in A}.

Binary image morphology operations


a. Dilation
b. Erosion
c. Closing
d. Opening
e. Hit-or-miss transform

Dilation operation
Given a set A and the structuring element B, the dilation of A with B is defined as

A (+) B = {x | (B_hat)_x intersect A is non-empty},

where B_hat is the reflection of B.
Why dilation?
If there is a very small hole inside the object A, dilation fills it up. Small disconnected regions near the boundary may be connected by dilation, and an irregular boundary may be smoothed out.

Properties of Dilation
1. Dilation is commutative: A (+) B = B (+) A
2. Dilation is associative: A (+) (B (+) C) = (A (+) B) (+) C
3. Dilation is translation invariant: (A (+) B)_x = A_x (+) B

Erosion operation
The erosion of A with B is given by

A (-) B = {x | (B)_x is contained in A}.

Here the structuring element B must fit completely inside the object A.

Why Erosion?
1. Two nearly connected regions will be separated by erosion.
2. Shrinks the size of objects.
3. Removes peninsulas and small objects.
4. The boundary may be smoothed.
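A minimal NumPy sketch of binary dilation and erosion, with the structuring element B given as a list of (dy, dx) offsets; using np.roll (which wraps around at the borders, an assumption that is harmless away from the edges) keeps the code short.

```python
import numpy as np

def dilate(A, offsets):
    # A: boolean image; offsets: the points of B as (dy, dx) pairs
    out = np.zeros_like(A)
    for dy, dx in offsets:
        out |= np.roll(A, (dy, dx), axis=(0, 1))   # union of shifted copies
    return out

def erode(A, offsets):
    out = np.ones_like(A)
    for dy, dx in offsets:
        out &= np.roll(A, (-dy, -dx), axis=(0, 1)) # B must fit inside A
    return out
```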

Properties of Erosion
1. Erosion is translation invariant: (A (-) B)_x = A_x (-) B
2. Erosion is not commutative: A (-) B != B (-) A
3. Erosion is not associative; instead, (A (-) B) (-) C = A (-) (B (+) C)

Dilation and erosion are dual operations in the sense that

(A (+) B)^c = A^c (-) B_hat
(A (-) B)^c = A^c (+) B_hat

Opening and Closing operation


Dilation and erosion change the size of the object. By the opening and closing operations, irregular boundaries may be smoothed without changing the overall size of the object.

Original

Dilated

Closing operation
Dilation followed by erosion:

A . B = (A (+) B) (-) B

After the dilation the size of the object is increased; it is brought back to the original size by the erosion.
Closing

Object A

Structuring element B

By the closing operation, irregular boundaries may be smoothed, depending upon the structuring element.

Example

Original

Dilated

Closed

Opening operation
Erosion followed by dilation:

A o B = (A (-) B) (+) B

Opens up weak links between nearby objects and smooths the irregular boundary.

In all operations the performance depends upon the structuring element; these operations apply to binary images. Example application: edge detection.

Example
A = {(1,0), (1,1), (1,2), (1,3), (0,3)},  B = {(0,0), (1,0)}

A (+) B = {(1,0), (1,1), (1,2), (1,3), (0,3), (2,0), (2,1), (2,2), (2,3)}

A (-) B = {(0,3)}

We can similarly do the other morphological operations.

Hit or Miss transform


The transform is given by

A (*) B = (A (-) B_1) intersect (A^c (-) B_2)

where B = (B_1, B_2), B_2 = W - B_1, and W is a window around the structuring element B_1.

The main aim of this transform is pattern matching.

Procedure for the Hit-or-Miss Transform

(Figure: object A, structuring element B_1 inside a window W, and B_2 = W - B_1.)

After B_2 is obtained, find

A (*) B = (A (-) B_1) intersect (A^c (-) B_2).

Example
In a text, how many F's are present?

A structuring element shaped like F alone will match both E and F; in such cases the background must also be considered for pattern matching. The hit-or-miss transform will match F only.

Example

Applications of binary morphology operations


1. Boundary extraction

beta(A) = A - (A (-) B)

Applications of binary morphology operations


2. Region filling

(a) Start with a pixel X_0 inside the unfilled region.
(b) Select a proper structuring element B and compute

X_n = (X_{n-1} (+) B) intersect A^c.

Repeat this operation until X_n = X_{n-1}; then A union X_n gives the filled region.

3. Thinning operation

This reduces the object to a single-pixel-wide line.

(Figures: object -> thinning; object -> skeletonizing.)

Thinning and skeletonizing are important operations.

The thinning operator is

A (x) B = A - (A (*) B) = A intersect (A (*) B)^c.

(Figure: original image A with a structuring element B_1 of 1s and 0s; go on matching until exact matching is achieved.)

Now consider another structuring element B_2. The thinning operation with a sequence of structuring elements is

A (x) {B_1, B_2} = ( A (x) B_1 ) (x) B_2,

and in this manner we can perform hit-or-miss with a sequence of structuring elements to get the ultimately thinned object.

Skeletonizing

The skeleton is given by the operation

S(A) = union over k = 0..K of S_k(A)

where

S_k(A) = (A (-) kB) - (A (-) kB) o B

and (A (-) kB) denotes k successive erosions of A by B.

Gray-scale morphology

Generalization of binary morphology to gray-level images.
Max and min operations are used in place of OR and AND.
It is a nonlinear operation.
This generalization applies to flat structuring elements.

Gray-scale morphological operations

f(x, y): the object (an intensity surface), with domain D_f.
b(x, y): the structuring element (which also has an intensity surface), with domain D_b.

Dilation of f with b at a point (s, t):

(f (+) b)(s, t) = max{ f(s - x, t - y) + b(x, y) | (s - x, t - y) in D_f, (x, y) in D_b }.

It can also be written, as in convolution, in terms of f(x, y) + b(s - x, t - y): the mask is rotated by 180 degrees and placed over the object, and the maximum over the overlapping pixels is taken. Since it is a maximum operation, darker regions become brighter.

Applications
1. Pepper noise can be removed 2. Size of the image is also changed.

Dilation Illustrated

Dilation Result

Erosion operation

(f (-) b)(s, t) = min{ f(s + x, t + y) - b(x, y) | (s + x, t + y) in D_f, (x, y) in D_b }

It is a minimum operation, and hence bright details are reduced. Application: salt noise is removed.

We can also erode and dilate with a flat structuring element.

Erosion Result

Closing operation

f . b = (f (+) b) (-) b

Removes pepper noise; keeps the intensity approximately constant; preserves bright features.

Opening operation

f o b = (f (-) b) (+) b

Removes salt noise; the brightness level is maintained; dark features are preserved.

Opening and Closing illustration

Closing and Opening results

Original image

closing

opening

Duality
Gray-scale dilation and erosion are duals with respect to function complementation and reflection:

(f (-) b)^c (s, t) = (f^c (+) b_hat)(s, t).

Gray-scale opening and closing are duals with respect to function complementation and reflection:

(f . b)^c = f^c o b_hat.

Smoothing

Opening followed by closing removes bright and dark artifacts and noise.

Morphological gradient

g = (f (+) b) - (f (-) b)

Subtract the eroded image from the dilated image. This is similar to boundary detection in the binary case, and is direction independent.

Probability and Random processes


Probability and random processes are widely used in image processing, for example in
image enhancement
image coding
texture image processing
pattern matching

Two ways of application: (a) the intensity levels can be considered as the values of a discrete random variable with a probability mass function; (b) the image intensity can be modeled as a two-dimensional random process.

The following figure shows an image and its histogram. We can use this distribution of gray levels to extract meaningful information about the image; examples are coding and segmentation applications.

We may also model the analog image as a continuous random variable

Probability concepts

1. Random experiment: an experiment is a random experiment if its outcome cannot be predicted precisely. One out of a number of outcomes is possible in a random experiment. A single performance of the random experiment is called a trial.
2. Sample space: the sample space S is the collection of all possible outcomes of a random experiment. The elements of S are called sample points.
3. Event: an event A is a subset of the sample space such that a probability can be assigned to it.

Probability Definitions
(Classical definition of probability) Consider a random experiment with a finite number of outcomes. If all the outcomes are equally likely, the probability of an event A is defined by

P(A) = N_A / N

where N_A is the number of outcomes favourable to A and N is the total number of outcomes.

Example: a fair die is rolled once. What is the probability of getting a '6'? Here S = {'1', '2', '3', '4', '5', '6'} and A = {'6'}, so P(A) = 1/6.

Relative Frequency Definition


If an experiment is repeated n times under similar conditions and the event A occurs n_A times, then

P(A) = lim_{n -> infinity} n_A / n.

Example: suppose a die is rolled 500 times. The following table shows the frequency of each face.

Face       1   2   3   4   5   6
Frequency  82  81  88  81  90  78

Then, for A = {'6'},

P(A) ~ 78/500 ~ 1/6.

Image Intensity as a Random Variable

x        0     1     ...   L-1
P_X(x)   p_0   p_1   ...   p_{L-1}

For each gray level i,

h[i] = f_i / N

where f_i is the number of pixels with intensity i and N is the total number of pixels. The probability is estimated from the histogram of the image.

Some Important Properties

The probability that either A or B or both occur is given by

P(A union B) = P(A) + P(B) - P(A intersect B).

P(S) = 1.

If A and B are mutually exclusive (disjoint), then P(A intersect B) = 0.

Conditional Probability

The conditional probability of B given A, denoted P(B|A), is defined by

P(B|A) = N_{A intersect B} / N_A = (N_{A intersect B} / N) / (N_A / N) = P(A intersect B) / P(A).

Similarly,

P(A|B) = P(A intersect B) / P(B).

Independent events
Two events are called independent if the probability of occurrence of one event does not affect the probability of occurrence of the other. Thus the events A and B are independent if and only if

P(B|A) = P(B) or P(A|B) = P(A),

and hence

P(A intersect B) = P(A) P(B).

Random variable

A random variable associates the points in the sample space with real numbers: consider the probability space and a function X mapping the sample space into the real line.

(Figure: the random variable X maps its domain, the sample space, to its range, a subset of the real line.)

Probability Distribution Function

The probability

F_X(x) = P({X <= x}) = P({s | X(s) <= x, s in S})

is called the probability distribution function (also called the cumulative distribution function, abbreviated CDF) of X. It is a function of x, the value of the random variable.

Properties of the Distribution Function

0 <= F_X(x) <= 1.

F_X(x) is a non-decreasing function of x: if x_1 < x_2, then F_X(x_1) <= F_X(x_2).

F_X(-infinity) = 0, F_X(infinity) = 1.

P({x_1 < X <= x_2}) = F_X(x_2) - F_X(x_1).

Example

Suppose S = {H, T} and X : S -> R is defined by X(H) = 1 and X(T) = -1. Then X is a random variable that takes the value 1 with probability 1/2 and -1 with probability 1/2.

(Figure: F_X(x) is a staircase with a step of 1/2 at x = -1 and a step of 1/2 at x = 1.)

Probability Density Function

If F_X(x) is differentiable, the probability density function (pdf) of X, denoted by f_X(x), is defined as

f_X(x) = d F_X(x) / dx.

Properties of the Probability Density Function

f_X(x) >= 0. This follows from the fact that F_X(x) is a non-decreasing function.

F_X(x) = integral from -infinity to x of f_X(u) du.

integral from -infinity to infinity of f_X(x) dx = 1.

P(x_1 < X <= x_2) = integral from x_1 to x_2 of f_X(x) dx.

P({x_0 < X <= x_0 + Delta x_0}) ~ f_X(x_0) Delta x_0.

Example

Uniform random variable:

f_X(x) = 1/(b - a) for a <= x <= b
       = 0         otherwise.

Gaussian or normal random variable:

f_X(x) = (1 / (sqrt(2 pi) sigma_X)) exp( -(x - mu_X)^2 / (2 sigma_X^2) ),  -infinity < x < infinity.

Functions of a Random Variable

Let X be a random variable and g(.) a function of X. Then Y = g(X) is a random variable, and we are interested in finding the pdf of Y. For example, suppose X represents the random voltage input to a full-wave rectifier; then the rectifier output is Y = |X|, and we have to find the probability description of Y.

Consider the case when g(X) is monotonically increasing or monotonically decreasing. Here

f_Y(y) = f_X(x) / |dy/dx|  evaluated at x = g^{-1}(y).

Example: pdf of a linear function of a random variable. Suppose Y = aX + b, a > 0. Then x = (y - b)/a and dy/dx = a, so

f_Y(y) = f_X((y - b)/a) / a.

EXPECTATION AND MOMENTS OF A RANDOM VARIABLE

The expectation operation extracts a few parameters of a random variable and provides a summary description in terms of these parameters. The expected value or mean of a random variable X is defined by

mu_X = EX = integral of x f_X(x) dx      for a continuous RV
     = sum_{i=1}^{N} x_i p_X(x_i)        for a discrete RV.

Generally, for any function g(X) of the RV X,

EY = E g(X) = integral of g(x) f_X(x) dx.

In particular:

Mean-square value: EX^2 = integral of x^2 f_X(x) dx.

Variance: sigma_X^2 = E(X - mu_X)^2 = integral of (x - mu_X)^2 f_X(x) dx.

Multiple Random Variables

In many applications we have to deal with many random variables; for example, the noise affecting the R, G, B channels of colour video may be represented by three random variables. In such situations it is convenient to define a vector-valued random variable, where each component of the vector is a random variable.

Joint CDF of n random variables: consider n random variables X_1, X_2, ..., X_n defined on the same sample space. We define the random vector X = [X_1, X_2, ..., X_n]^T; a particular value of X is denoted by x = [x_1, x_2, ..., x_n]^T.

The CDF of the random vector X is defined as

F_X(x) = F_{X_1, X_2, ..., X_n}(x_1, x_2, ..., x_n) = P({X_1 <= x_1, X_2 <= x_2, ..., X_n <= x_n}).

Multiple random variables

If X is a continuous random vector, that is, F_{X_1, ..., X_n}(x_1, ..., x_n) is continuous in each of its arguments, then X can be specified by the joint probability density function

f_X(x) = f_{X_1, ..., X_n}(x_1, ..., x_n) = d^n F_{X_1, ..., X_n}(x_1, ..., x_n) / (dx_1 dx_2 ... dx_n).

We also define the following important parameters. The mean vector of X, denoted mu_X, is defined as

mu_X = E(X) = [E(X_1), E(X_2), ..., E(X_n)]^T = [mu_{X_1}, mu_{X_2}, ..., mu_{X_n}]^T.

Multiple Random Variables

Similarly, for each (i, j), i = 1, ..., n, j = 1, ..., n, we can define the covariance

Cov(X_i, X_j) = E(X_i - mu_{X_i})(X_j - mu_{X_j}) = E X_i X_j - mu_{X_i} mu_{X_j}

and the correlation coefficient

rho_{X_i, X_j} = Cov(X_i, X_j) / (sigma_{X_i} sigma_{X_j}).

All the possible variances and covariances can be collected in the covariance matrix

C_X = E (X - mu_X)(X - mu_X)^T
    = [ var(X_1)       cov(X_1, X_2)  ...  cov(X_1, X_n)
        cov(X_2, X_1)  var(X_2)       ...  cov(X_2, X_n)
        ...
        cov(X_n, X_1)  cov(X_n, X_2)  ...  var(X_n)      ]

Multi-dimensional Gaussian

For any positive integer n, let X_1, X_2, ..., X_n represent n jointly distributed random variables. These random variables are called jointly Gaussian if their joint probability density function is

f_{X_1, ..., X_n}(x_1, x_2, ..., x_n) = (1 / ( (2 pi)^{n/2} sqrt(det(C_X)) )) exp( -(1/2) (x - mu_X)^T C_X^{-1} (x - mu_X) )

where C_X = E(X - mu_X)(X - mu_X)^T is the covariance matrix and mu_X = E(X) = [E(X_1), E(X_2), ..., E(X_n)]^T is the vector formed by the means of the random variables.

Example: 2D Gaussian

Two random variables X and Y are called jointly Gaussian if their joint density function is

f_{X,Y}(x, y) = (1 / (2 pi sigma_X sigma_Y sqrt(1 - rho^2)))
                exp( -(1 / (2 (1 - rho^2))) [ (x - mu_X)^2 / sigma_X^2
                                              - 2 rho (x - mu_X)(y - mu_Y) / (sigma_X sigma_Y)
                                              + (y - mu_Y)^2 / sigma_Y^2 ] ),

-infinity < x < infinity, -infinity < y < infinity,

where rho = rho_{X,Y}. The joint pdf is determined by 5 parameters:
means mu_X and mu_Y
variances sigma_X^2 and sigma_Y^2
correlation coefficient rho_{X,Y}

Karhunen-Loeve transform (KLT)

Let a random vector X = [X[1], X[2], ..., X[N]]^T be characterized by its mean mu_x and autocovariance matrix C_X = E(X - mu_x)(X - mu_x)^T. C_X is a positive definite matrix, and can be diagonalised as

Phi^T C_X Phi = Lambda = diag(lambda_1, lambda_2, ..., lambda_N)

where lambda_1, ..., lambda_N are the eigenvalues of C_X and Phi is the matrix formed with the eigenvectors as its columns. Consider the transformation

Y = Phi^T (X - mu_x).

Then C_Y is a diagonal matrix, and the transformation is called the Karhunen-Loeve transform (KLT).

Random Process

(Figure: sample functions X(t, s_1), X(t, s_2), X(t, s_3) corresponding to sample points s_1, s_2, s_3.)

Recall that a random variable maps each sample point in the sample space to a point on the real line. A random process maps each sample point to a waveform; it is thus a function of t and s. The random process X(t, s) is usually denoted by X(t); a discrete-time random process is denoted X[n].

The discrete random process {X[n]} is a function of the one-dimensional variable n. Some important parameters of a random process are:

Mean: mu[n] = E X[n]
Variance: sigma^2[n] = E (X[n] - mu[n])^2
Autocorrelation: R_X[n, m] = E (X[n] X[m])
Autocovariance: C_X[n, m] = E (X[n] - mu[n])(X[m] - mu[m])

Wide-sense stationary (WSS) random process

For a WSS process {X[n]}, the mean

E X[n] = mu_X = constant

and the autocorrelation R_X[m, n] = E X[m] X[n] is a function of the lag m - n only. We denote the autocorrelation of a WSS process X[n] at lag k by R_X[k]. The autocorrelation function R_X[k] is even symmetric, with a maximum at k = 0.

Matrix Representation

We can represent N samples by an N-dimensional random vector X = [X[1], X[2], ..., X[N]]^T.

Mean vector: mu_X = E X = [E X[1], E X[2], ..., E X[N]]^T.

Autocorrelation matrix: R_X = E X X^T.

Autocovariance matrix: C_X = E (X - mu_X)(X - mu_X)^T.

Exercise: check that for a WSS process R_X and C_X are symmetric Toeplitz matrices.

Frequency-domain Representation

A host of tools is available to study a WSS process. In particular, we have the frequency-domain representation of a WSS process in terms of the power spectral density (PSD)

S_X(omega) = sum_{k=-infinity}^{infinity} R_X[k] e^{-j omega k}.

The autocorrelation function is obtained by the inverse transform of S_X(omega):

R_X[k] = (1 / 2 pi) integral from -pi to pi of S_X(omega) e^{j omega k} d omega.

Gaussian random process

A random process {X[n]} is called a Gaussian process if for any N the joint density function is given by

f_{X[1], ..., X[N]}(x_1, x_2, ..., x_N) = (1 / ((2 pi)^{N/2} sqrt(det(C_X)))) exp( -(1/2) (x - mu_X)^T C_X^{-1} (x - mu_X) )

where the vectors and the matrices are as interpreted earlier.

Markov process

Suppose {X[n]} is a random process with a discrete state space, i.e. X[n] can take one of L discrete values x_0, x_1, ..., x_{L-1} with probabilities p_0, p_1, ..., p_{L-1}.

{X[n]} is called first-order Markov if

P({X[n] = x_n | X[n-1] = x_{n-1}, X[n-2] = x_{n-2}, ...}) = P({X[n] = x_n | X[n-1] = x_{n-1}}).

Thus, for a first-order Markov process, the current state depends only on the immediate past. Similarly, {X[n]} is called p-th order Markov if

P({X[n] = x_n | X[n-1] = x_{n-1}, X[n-2] = x_{n-2}, ...}) = P({X[n] = x_n | X[n-1] = x_{n-1}, ..., X[n-p] = x_{n-p}}).

A first-order Markov process is generally simply called a Markov process.

Random field

A two-dimensional random sequence {X[m, n]} is called a random field. For a random field we can define the mean and the autocorrelation functions as follows:

Mean: E X[m, n] = mu[m, n]
Autocorrelation: R_X[m, n, m', n'] = E X[m, n] X[m', n']

A random field {X[m, n]} is called wide-sense stationary (WSS) or homogeneous if R_X[m, n, m', n'] is a function of the lags (m - m', n - n'). Thus, for a WSS random field, the autocorrelation function can be written as

R_X[k, l] = E X[m, n] X[m + k, n + l].

A random field {X[m, n]} is called isotropic if R_X[m, n, m', n'] is a function of the distance sqrt((m - m')^2 + (n - n')^2).

A random field {X[m, n]} is called separable if R_X[k, l] can be separated as

R_X[k, l] = R_{X_1}[k] R_{X_2}[l].

Two-dimensional power spectral density

We have the frequency-domain representation of a WSS random field in terms of the two-dimensional power spectral density

S_X(u, v) = sum_{k=-infinity}^{infinity} sum_{l=-infinity}^{infinity} R_X[k, l] e^{-j(uk + vl)}.

The autocorrelation function is given by the inverse transform

R_X[k, l] = (1 / (4 pi^2)) double integral over (-pi, pi)^2 of S_X(u, v) e^{j(uk + vl)} du dv.

A random field {X[m, n]} is called a Markov random field if the state at a location depends only on the states of the neighbouring locations.

Segmentation
Divide the image into homogeneous segments. Homogeneity may be in terms of:
(1) Gray values (within a region the gray values don't vary much). Example: the gray level of characters is less than the gray level of the background.
(2) Texture: some type of repetitive statistical uniformity.
(3) Shape.
(4) Motion (used in video segmentation).

Example

Segmentation based on texture

Segmentation based on Colour

Application
Optical character recognition
Industrial inspection
Robotics
Determining the microstructure of biological and metallurgical specimens
Remote sensing
Astronomical applications
Medical image segmentation
Object-based compression techniques (MPEG-4)
A related area is object representation

Main approaches
Histogram-based segmentation

Region-based segmentation:
edge detection
region growing
region splitting and merging

Clustering:
K-means
mean shift

Motion segmentation

Example: Text and Background


Problem: we are given an image of a paper, and we would like to extract the text from the image.

Thresholding: define a threshold Th such that
if I(x, y) < Th then object (character) else background (paper).

How do we determine the threshold?
Just choosing 128 as the threshold is problematic for dark images.
Using the median or mean is also not good, as most of the paper is white.

Example

Histogram-based Threshold
Assumption: the regions are distinct in terms of their gray-level ranges.

(Figure: histogram with two separated modes; frequency vs. gray level.)

Histogram-based threshold
Compute the gray-level histogram of the image and find two clusters, black and white, minimizing the L2 error:
1. Select an initial estimate T.
2. Segment the image using T.
3. Compute the average gray level of each segment, m_b and m_w.
4. Compute a new threshold value: T = (m_b + m_w) / 2.
5. Continue until convergence.

We are already familiar with this algorithm !
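A sketch of this iterative threshold selection in NumPy; the initial estimate (the global mean) and the convergence tolerance are assumptions.

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    T = img.mean()                        # initial estimate
    while True:
        m_b = img[img < T].mean()         # mean of the dark segment
        m_w = img[img >= T].mean()        # mean of the bright segment
        T_new = 0.5 * (m_b + m_w)
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
```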

Problems with this approach


Noise: many holes and discontinuities in the segmentation.
Changes in the illumination.
We do not use spatial information.

Some of these problems can be solved using image processing techniques; for example, we can enhance the result using morphological operations. Yet how can we overcome changes in the illumination?

Adaptive Thresholding
Divide the image into sub-images and assume that the illumination in each sub-image is constant; use a different threshold for each sub-image. Alternatively, use a running window (and use the window's threshold only for the central pixel).

Problems:
Rapid illumination changes.
Regions without text: we can try to recognize that these regions are unimodal.

Optimal Thresholding
We may use a probability-based approach: the intensity histogram is modelled as a mixture of Gaussian distributions.

Normalized histogram = sum of two Gaussians with different means and variances.

Identify the Gaussians, then decide the threshold.

Region-based segmentation

We would like to use spatial information: we assume that neighbouring pixels tend to belong to the same segment (not always true). Edge detection looks for the boundaries of the segments. Problem: edges usually do not form closed contours; we can try to close them with edge linking.

Region-based segmentation

Basic formulation: let R represent the entire image region. Segmentation partitions R into n subregions R_i such that:
a) union over i of R_i = R
b) each R_i is a connected region
c) R_i intersect R_j = empty set for i != j
d) P(R_i union R_j) = False for adjacent regions R_i, R_j
e) P(R_i) = True
where P is the partition predicate.

The partition should be such that each region is homogeneous as well as connected.

Example of a Predicate
A predicate P has values TRUE or FALSE.
'The intensity variation within the region is not much' - not a valid predicate (not precise).
'The intensity difference between any two pixels is less than 5' - a valid predicate.
'The distance between any two (R, G, B) vectors is less than 10' - a valid predicate.

Region growing
Choose a group of points as initial regions. Expand the regions to neighbouring pixels using a predicate, for example:
colour distance from the neighbours, or
the total error in the region (up to a certain threshold):
variance
sum of the differences between neighbours
maximal difference from a central pixel

In some cases, we can also use structural information: the region size and shape.

In this way we can handle regions with a smoothly varying gray level or colour. Question: how do we choose the starting points? This is less important if we can also merge regions.

Selection of seed points


One solution is to choose the modes of the histogram and select pixels corresponding to the modes as seed points.

(Figure: histogram with three modes S1, S2, S3 along the intensity axis.)

Region merging and splitting


In region merging, we start with small regions (possibly single pixels) and iteratively merge regions that are similar. In region splitting, we start with the whole image and split regions that are not uniform. The two methods can be combined. Formally:
1. Choose a predicate P.
2. Split into disjoint regions any region R_i for which P(R_i) = False.
3. Merge any adjacent regions R_i and R_j for which P(R_i union R_j) = True.
4. Stop when no further merging or splitting is possible.

QuadTree

(Figure: a region R is split into quadrants R1, R2, R3, R4; R2 is further split into R21, R22, R23, R24, giving a quadtree with root R, children R1-R4, and grandchildren R21-R24.)

With a quadtree, one can use a variation of the split & merge scheme: start with splitting regions, and only at the final stage merge regions.

Segmentation as clustering
Address the image as a set of points in an n-dimensional space:
gray-level images: p = (x, y, I(x, y)) in R^3
colour images: p = (x, y, R(x,y), G(x,y), B(x,y)) in R^5
texture: p = (x, y, feature vector)
colour histograms: p = (R(x,y), G(x,y), B(x,y)) in R^3 (here we ignore the spatial information)

From this stage we forget the meaning of each coordinate and deal with an arbitrary set of points. Therefore we first need to normalize the features (for example, convert a colour image to an appropriate linear colour-space representation).

Similarity Measure
Given two vectors X_i and X_j, we can use measures like:
Euclidean distance
weighted Euclidean distance
normalized correlation

Again we can use splitting & merging; here, we merge the closest neighbours each time.

K-means
Idea:
Determine the number of clusters k.
Find the cluster centers and point-cluster assignments that minimize the error.
Problem: exhaustive search is too expensive. Solution: use an iterative search instead (recall the optimal quantizer design procedure).

Algorithm:
1. Fix the cluster centers mu_1, mu_2, ..., mu_k.
2. Allocate each point to the closest cluster.
3. Fix the allocation; compute the best cluster centers (the means of the allocated points).
4. Repeat steps 2-3 until convergence.

Error function: sum over clusters j of sum over points x_i in cluster j of ||x_i - mu_j||^2.

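A compact NumPy sketch of these iterations; the convergence test on center movement is an assumption.

```python
import numpy as np

def kmeans(X, centers, iters=100):
    """X: M x D points; centers: k x D initial cluster centers."""
    for _ in range(iters):
        # allocate each point to the closest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its allocated points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```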
Illustration of K-means
Data set: (72,180) (65,120) (59,119) (64,150) (65,162) (57,88) (72,175) (44,41) (62,114) (60,110) (56,91) (70,172)

Initial cluster centres: (45,50) (75,117) (45,117) (80,180)

Iteration 1:
centre (45,50):  {(44,41)}                                          -> new mean (44,41)
centre (75,117): {(62,114), (65,120)}                               -> new mean (63.5,117)
centre (45,117): {(57,88), (59,119), (56,91), (60,110)}             -> new mean (58,102)
centre (80,180): {(72,180), (64,150), (65,162), (72,175), (70,172)} -> new mean (68.6,167.8)

Example clustering with K-means using gray-level and color histograms

Mean Shift
K-means is a powerful and popular method for clustering. However:
it assumes a pre-determined number of clusters;
it likes compact clusters, while sometimes we are looking for long but continuous clusters.

Mean shift:
Determine a window size (usually small).
For each point p, compute a weighted mean of the shift within the window:

m = sum_{i in window} w_i (p_i - p) / sum_{i in window} w_i

where the weight w_i depends on the distance d(p, p_i). Set p := p + m, and continue until convergence.

At the end, use a more standard clustering method.

Mean Shift (contd..)

This method is based on the assumption that points are more and more dense as we are getting near the cluster central mass.

Motion segmentation
Background subtraction: assumes the existence of a dominant background.
Optical flow: use the motion vectors as features.
Multi-model motion: divide the image into layers such that in each layer there exists a parametric motion model.

Texture
Texture may be informally defined as a structure composed of a large number of more or less ordered similar patterns or structures. Textures provide an idea about the perceived smoothness, coarseness or regularity of a surface. Texture plays an increasingly important role in diverse applications of image processing:
computer vision
pattern recognition
remote sensing
industrial inspection
medical diagnosis

Texture Image Processing

Texture analysis: how to represent and model texture.
Texture synthesis: construct large regions of texture from small example images.
Shape from texture: recover surface orientation or surface shape from image texture.

In image processing, texture analysis is aimed at two main issues:
segmentation of the scene in an image into different homogeneously textured regions without a priori knowledge of the textures, and
classification of the textures present in an image into a finite number of known texture classes.
A closely related field is image retrieval on the basis of texture; speedy classification can help in browsing images in a database. Texture classification methods can be broadly grouped into two approaches: the non-filtering approach and the filtering approach.

Co-occurrence Matrix
Objective: capture spatial relations.

A co-occurrence matrix is a 2D array C_d in which both the rows and the columns represent a set of possible image values: C_d(i, j) indicates how many times gray value i co-occurs with gray value j in a particular spatial relationship d. The spatial relationship is specified by a vector d = (dr, dc). From C_d we can compute P_d, the normalized gray-level co-occurrence matrix, where each value is divided by the sum of all the values.
Example
For a binary image whose rows follow the pattern 1 1 0 0 1 1 0 0 (figure), with d = 1 pixel to the right:

C_d = [ 16 12 ]        P_d = (1/56) [ 16 12 ]
      [ 12 16 ]                     [ 12 16 ]
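A direct NumPy sketch of building C_d for a displacement d = (dr, dc) on an image with G gray levels; indexing the pixel pairs directly avoids looping over all G x G value combinations.

```python
import numpy as np

def cooccurrence(img, d=(0, 1), G=256):
    dr, dc = d
    H, W = img.shape
    # pixel pairs (p, q) with q displaced by d from p
    p = img[max(0, -dr):H - max(0, dr), max(0, -dc):W - max(0, dc)]
    q = img[max(0, dr):H + min(0, dr), max(0, dc):W + min(0, dc)]
    C = np.zeros((G, G), dtype=np.int64)
    np.add.at(C, (p.ravel(), q.ravel()), 1)   # count each (i, j) pair
    P = C / C.sum()                           # normalized GLCM
    return C, P
```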

Measures Extracted from GLCC Matrix


From the co-occurrence matrices we can extract quantitative features:

1. The maximum element: max_{i,j} C_d(i, j)

2. The element difference moment of order k: sum_{i,j} (i - j)^k C_d(i, j)

3. The inverse element difference moment of order k: sum_{i,j, i != j} C_d(i, j) / (i - j)^k

4. Entropy: - sum_{i,j} C_d(i, j) log C_d(i, j)

5. Uniformity (energy): sum_{i,j} C_d^2(i, j)

Disadvantages

Computationally expensive.
Sensitive to gray-scale distortion (co-occurrence matrices depend on gray values).
May be useful for fine-grained texture, but not suitable for spatially large textures.

Non-filtering approach

Structural methods consider texture as a composition of primitive elements (texels) arranged according to some placement rule. Extracting the texels from a natural image is a difficult task, so these methods have limited applications.

Statistical methods are based on the various joint probabilities of gray values. Co-occurrence matrices estimate the second-order statistics by counting the frequencies of all pairs of gray values at all displacements in the input image. Several texture features can be extracted from the co-occurrence matrices, such as energy (uniformity), entropy, maximum probability, contrast, inverse difference moments, correlation and probability run-lengths.

Model-based methods fit models such as Markov random fields, autoregressive models, fractals and others; the estimated model parameters are used to segment and classify textures.

Filtering approach
In the filtering approach, the input image is passed through a linear filter bank followed by some energy measure. Feature vectors are extracted from these energy outputs, and texture classification is based on these feature vectors. (Figure: the basic filtering pipeline for texture classification.)

The filtering approach includes Laws' masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, quadrature mirror filters, the DCT, eigenfilters, etc.

Gabor filters

Fourier coefficients depend on the entire image (they are global), so we lose spatial information. Objective: local spatial-frequency analysis.

Gabor kernels: a Fourier basis function multiplied by a Gaussian, i.e. the product of a symmetric Gaussian with an oriented sinusoid:

g(x) = (1 / (2 pi sigma^2)) exp( -||x - x_0||^2 / (2 sigma^2) ) exp( j omega . (x - x_0) )

Gabor filters come in pairs, symmetric (cosine) and antisymmetric (sine); each pair recovers the symmetric and antisymmetric components in a particular direction.
omega: the spatial frequency to which the filter responds strongly.
sigma: the scale of the filter; when sigma -> infinity the filter behaves like the Fourier transform.

We need to apply a number of Gabor filters at different scales, orientations, and spatial frequencies.
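A NumPy sketch of a 2D Gabor kernel following the formula above (centred at x_0 = 0); the grid half-size of 3*sigma is a common truncation choice, assumed here.

```python
import numpy as np

def gabor_kernel(sigma, freq, theta):
    """Complex Gabor kernel: Gaussian times an oriented sinusoid.
    freq: spatial frequency (cycles/pixel); theta: orientation."""
    half = int(3 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)    # coordinate along omega
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    # real part: symmetric filter; imaginary part: antisymmetric filter
    return gauss * np.exp(2j * np.pi * freq * rot)
```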

Two Dimensional Signals and Systems


An analog image is modeled as a two-dimensional (2D) signal. Consider an image plane where a point is denoted by the coordinates (x, y). The intensity at (x, y) is a two-dimensional function denoted by f(x, y). Video is modeled as a three-dimensional function f(x, y, t). A digital image is defined over a grid, each grid location being called a pixel; we denote this 2D discrete-space signal by f[m, n].

Some useful one-dimensional (1D) functions


1. Dirac delta or impulse function:

delta(x) = 0 for x != 0,   integral of delta(x) dx = 1.

(Figure: delta as the limit of a pulse of width 2a and height 1/(2a).)

Sifting property: integral of f(x') delta(x - x') dx' = f(x).

Scaling property: delta(ax) = delta(x) / |a|.

The Dirac delta function is the derivative of the unit step function: delta(x) = u'(x).

2. Kronecker delta or discrete-time impulse function:

delta[n] = 1 for n = 0
         = 0 for n != 0.

Sifting property: sum over m of f[m] delta[n - m] = f[n].

3. Rectangle function:

rect(x) = 1 for |x| <= 1/2
        = 0 otherwise.

4. Sinc function:

sinc(x) = sin(x) / x.

5. Complex exponential function: e^{j omega x}.

These functions are extended to two or more dimensions by the separability property:

f(x, y) is separable if f(x, y) = f_1(x) f_2(y).

For example, the complex exponential function is separable:

e^{j(omega_1 x + omega_2 y)} = e^{j omega_1 x} e^{j omega_2 y}.

The two-dimensional delta functions

2D Dirac delta function: delta(x, y) = delta(x) delta(y).

2D Kronecker delta function: delta[m, n] = delta[m] delta[n].
Linear Systems and Shift Invariance

A system T is called linear if

T[a f_1[m, n] + b f_2[m, n]] = a T f_1[m, n] + b T f_2[m, n].

Any input can be written in terms of shifted impulses,

f[m, n] = sum_{m', n'} f[m', n'] delta[m - m', n - n'],

so the output is

g[m, n] = T f[m, n] = sum_{m', n'} f[m', n'] T delta[m - m', n - n'].

Shift invariance: if f[n] -> g[n], then f[n - n_0] -> g[n - n_0].

In general g[m, n] = sum_{m', n'} f[m', n'] h[m, n, m', n']; for a 2D linear shift-invariant (LSI) system this becomes

g[m, n] = sum_{m', n'} f[m', n'] h[m - m', n - n'],

i.e. for an LSI system with input f[m, n] and impulse response h[m, n], the output is the 2D convolution

g[m, n] = h[m, n] * f[m, n].

Suppose h[m, n] is defined for m = 0, ..., M_1 - 1 and n = 0, ..., N_1 - 1, and f[m, n] is defined for m = 0, ..., M_2 - 1 and n = 0, ..., N_2 - 1. Then g[m, n] is defined for m = 0, ..., M_1 + M_2 - 2 and n = 0, ..., N_1 + N_2 - 2.

2D convolution involves:
1. Rotate h[m', n'] by 180 degrees to get h[-m', -n'].
2. Shift the origin of the rotated mask to [m, n].
3. Multiply the overlapping elements and sum them up.

2D convolution is defined similarly in the continuous domain:

f(x, y) * h(x, y) = double integral of f(x', y') h(x - x', y - y') dx' dy'.

Illustration of the convolution of x[m, n] and h[m, n] (figures).

Example: g[0,0] = 0, g[1,0] = 2, g[2,1] = 12, and so on.

Causality

For a causal system, the present output depends only on present and past inputs; otherwise the system is non-causal.

The concept of causality can be extended to two dimensions; particularly important is the non-symmetric half-plane (NSHP) model (figure).

Lectures on WAVELET TRANSFORM

OUTLINE
FT, STFT, wavelet series & DWT
Multi-Resolution Analysis (MRA)
Perfect reconstruction filter banks
Filter bank implementation of the DWT
Extension to the 2D case (images)
Applications in denoising, compression, etc.

Fourier Transform
F(omega) = integral of f(t) e^{-j omega t} dt

Fourier analysis breaks a signal down into constituent sinusoids of different frequencies. A serious drawback: in transforming to the frequency domain, time information is lost. When looking at the Fourier transform of a signal, it is impossible to tell when a particular event took place.

FT of a sine wave with two frequencies (stationary):

f_1(t) = 0.25 sin(100 pi t) + sin(200 pi t)

(Figure: spectrum with peaks at 50 Hz and 100 Hz.)

FT of a sine wave with two frequencies (non-stationary):

f_2(t) = sin(200 pi t)                       for t < 50
       = 0.25 sin(100 pi t) + sin(200 pi t)  otherwise

(Figure: the spectrum again shows peaks at 50 Hz and 100 Hz; the FT cannot indicate when each frequency is present.)

The Short-Time Fourier Transform (Gabor)

F_STFT(tau, omega) = integral over t of f(t) w(t - tau) e^{-j omega t} dt

where w(t - tau) is a window function centred at tau; tau is the time parameter and omega the frequency parameter.

(Figure: STFT of the non-stationary two-frequency sine wave.)

Short Time Fourier Transform (STFT)

Take the FT of consecutive segments of a signal; each FT then provides the spectral content of that time segment only. The difficulty is in selecting the time window.
NOTE: low-frequency signals are better resolved in the frequency domain; high-frequency signals are better resolved in the time domain.

Uncertainty Theorem
Uncertainty theorem: we cannot determine the frequency and the time location of a signal with absolute certainty (similar to Heisenberg's uncertainty principle involving the position and momentum of a particle). In the FT we use a basis which has infinite support and infinite energy. In the wavelet transform we localize both in the time domain (through translation of the basis function) and in the frequency domain (through scaling).

Example

The Wavelet Transform


Analysis windows of different lengths are used for different frequencies: Analysis of high frequencies Use narrower windows for better time resolution Analysis of low frequencies Use wider windows for better frequency resolution Heisenberg principle still holds!!! The function used to window the signal is called the wavelet

Mother Wavelet
In the wavelet transform the mother wavelet is the basic unit. Examples: the Daubechies wavelet, the Haar wavelet, and the Shannon wavelet

psi(x) = (sin(2 pi x) - sin(pi x)) / (pi x).

Translation and Scaling

psi_{a,b}(x) = (1 / sqrt(a)) psi( (x - b) / a )

is the mother wavelet translated to b and scaled by a.

f(x) -> f(x - b): translation.
f(x) -> f(ax): scaling; the factor 1/sqrt(|a|) preserves the norm of the scaled function.

Continuous Wavelet Transform

W_{a,b} = integral of f(x) psi*_{a,b}(x) dx = <f(x), psi_{a,b}(x)>

f(x) = (1 / C_psi) double integral of W_{a,b} psi_{a,b}(x) (da db) / a^2

where the admissibility constant is

C_psi = integral of |Psi(omega)|^2 / |omega| d omega

and Psi(omega) is the Fourier transform of psi(x).

Admissibility Criterion

The requirement C_psi < infinity implies:
1) Psi(0) = 0: the DC term of the mother wavelet must be zero.
2) psi(x) should be of finite energy, i.e. psi(x) should have finite support or be an asymptotically decaying signal.
3) integral of |Psi(omega)|^2 d omega < infinity: the spectrum should be concentrated in a narrow band.

Wavelet series expansion

psi_{s,tau}(t) = (1 / sqrt(s)) psi( (t - tau) / s )

Discrete wavelet transform: the scale and translation take discrete steps, s = s_0^m and tau = n tau_0 s_0^m:

psi_{m,n}(t) = s_0^{m/2} psi( s_0^m t - n tau_0 ).

Dyadic wavelet: if s_0 = 2 and tau_0 = 1,

psi_{m,n}(t) = 2^{m/2} psi( 2^m t - n ).

f(t) can be represented as a wavelet series iff

A ||f||^2 <= sum_{m,n} |<f(t), psi_{m,n}(t)>|^2 <= B ||f||^2

where A and B are positive constants independent of f(t). Such a family of discrete wavelet functions is called a frame. When A = B, the wavelets form a family of orthogonal basis functions.

Discrete Wavelet Transform

f(t) can be represented as a series combination of wavelets:

f(t) = sum_{m,n} w_{m,n} psi_{m,n}(t)

where

w_{m,n} = <f(t), psi_{m,n}(t)> = s_0^{m/2} integral over t of f(t) psi(s_0^m t - n tau_0) dt

and, for the dyadic case,

psi_{m,n}(t) = 2^{m/2} psi(2^m t - n).

Multi-resolution analysis (MRA)

A scaling function phi(t) is introduced. For example, the Haar scaling function is

phi(t) = 1 for 0 <= t <= 1
       = 0 elsewhere.

This scaling function is also scaled and translated to generate a family of scaling functions

phi_{j,k}(t) = 2^{j/2} phi(2^j t - k).

A function f(t) can be generated using the basis set of translated scaling functions:

f(t) = sum_k a_k phi_{j,k}(t).

In the case of the Haar basis, f(t) comprises piecewise constant functions. The set of such functions is the span of {phi_{j,k}(t), k in Z}; let this space be denoted by V_j.

Requirements of MRA

Requirement 1: the scaling functions should be orthogonal with respect to their integer translates:

<phi_{j,k}(t), phi_{j,l}(t)> = integral over t of phi_{j,k}(t) phi_{j,l}(t) dt = 0 for l != k.

The Haar basis function satisfies this orthogonality.

Requirement 2: the subspaces are nested,

... V_{-1} subset of V_0 subset of V_1 subset of V_2 ...

For the Haar scaling function at resolution 1,

phi_{0,0}(t) = (1/sqrt(2)) phi_{1,0}(t) + (1/sqrt(2)) phi_{1,1}(t),

i.e. phi(t) = phi(2t) + phi(2t - 1),

so scaling functions at a lower scale are nested within the subspace spanned by the higher scale. In general we can write

phi(t) = sum_k h_k sqrt(2) phi(2t - k)

(the basis function at the lower scale expressed through basis functions at the higher scale), and similarly for the wavelet

psi(t) = sum_k b_k sqrt(2) phi(2t - k).

Requirement 2 (contd.)

For example, for the Haar wavelet,

h_0 = h_1 = 1/sqrt(2),  h_k = 0 for k > 1.

For the triangular (hat) scaling function,

h_0 = h_2 = 1/(2 sqrt(2)),  h_1 = 1/sqrt(2),  h_k = 0 otherwise.

Requirement 3:

V_{-infinity} = {0} (the zero function), and all square-integrable functions can be represented with arbitrary precision as the resolution increases:

V_{infinity} = L^2(R).

Requirement 4:

V_1 = V_0 (+) W_0
V_2 = V_1 (+) W_1 = V_0 (+) W_0 (+) W_1

and so on, where (+) denotes the direct sum. Thus, at any scale, a function can be represented by a scaling (approximation) part and a number of wavelet (detail) parts.

MRA equation or dilation equation

For the scaling function:

phi(t) = sum_k h_k sqrt(2) phi(2t - k).

For the wavelet bases:

psi(t) = sum_k b_k sqrt(2) phi(2t - k).

MRA and DWT

For the Haar system (figures),

psi_{0,0}(t) = (1/sqrt(2)) ( phi_{1,0}(t) - phi_{1,1}(t) ),

and any f_1(t) in V_1 can be written as

f_1(t) = sum_k c_k phi_{0,k}(t) + sum_k d_k psi_{0,k}(t)

= scaling part + wavelet part = low-pass part + high-pass part.
MRA and DWT ( Contd.)


Similarly,
W1
V2

W0
V1 V0

1 1 f 2 (t ) = Ck 1, k (t ) + d k 1, k (t ), k k

f 2 (t ) V2
k

0 0 1 = Ck 1, k (t ) + d k 1, k (t ) + d k 1, k (t ) k k

and so on How to find those c and d coefficients? We have to learn a bit of filterbank theory to have an answer.

Perfect Reconstruction Filter Bank

Analysis filter bank: the input f[n] is filtered by a low-pass filter h_0[n] and a high-pass filter h_1[n], and each branch is downsampled by 2. Synthesis filter bank: each branch is upsampled by 2, filtered by g_0[n] and g_1[n] respectively, and the branches are added to give the reconstruction f_hat[n].

Downsampling by 2 followed by upsampling by 2 gives

Y(z) = (1/2) ( X(z) + X(-z) ),

where the X(-z) term is the aliasing component. On the synthesis side, aliasing can be cancelled by choosing g_0[n] and g_1[n] through a simple relationship with h_0[n] and h_1[n]:

G_0(z) = H_1(-z),  G_1(z) = -H_0(-z).

Orthonormal filters

A class of perfect reconstruction filters is needed for the filter bank implementation of the discrete wavelet transform (DWT). These filters satisfy the relation

h_1[n] = (-1)^n h_0[N - 1 - n]

where the tap length N is required to be even. The synthesis filters are given by

g_i[n] = h_i[-n],  i in {0, 1}.
Orthonormal filter banks


2

h0[n]

h0 [ n]

f [ n]

h1[n]

h1 [ n]

[ n] f

Analysis filter bank

Synthesis filter bank

f [ n] is the original signal


[ n] is the reconstructed signal f

Filter bank implementation of the DWT

The MRA (dilation) equations, with the notation h_phi[k] and h_psi[k] for the scaling and wavelet filter coefficients, are

phi(t) = sum_k h_phi[k] sqrt(2) phi(2t - k)
psi(t) = sum_k h_psi[k] sqrt(2) phi(2t - k)

and a function is expanded as

f(t) = sum_k c_{j,k} phi_{j,k}(t) + sum_k d_{j,k} psi_{j,k}(t),

where phi_{j,k}(t) and psi_{j,k}(t) are orthogonal individually and to each other; further, orthonormality is assumed.

Contd.

Using the orthogonality of the scaling and wavelet bases, the coefficients at scale j can be obtained from those at scale j+1: c_{j,k} is the convolution of c_{j+1,l} with h_phi[-l], evaluated at alternate points:

c_{j,k} = sum_l h_phi[l - 2k] c_{j+1,l}

and similarly

d_{j,k} = sum_l h_psi[l - 2k] c_{j+1,l}.

h_phi[-k] is an anti-causal filter, but this is not a serious issue and can be addressed by a proper translation of the function.

How to get the filter coefficients?

Integrating the dilation equation on both sides, with the substitution 2t - k = u,

integral of phi(t) dt = sqrt(2) sum_k h_phi[k] integral of phi(2t - k) dt = (1/sqrt(2)) sum_k h_phi[k] integral of phi(u) du,

which gives

sum_k h_phi[k] = sqrt(2).   --- (1)

Similarly, using integral of phi^2(x) dx = 1,

sum_k h_phi^2[k] = 1.   --- (2)

Due to the orthogonality of the scaling function and its integer translates,

integral of phi(x) phi(x - m) dx = delta[m],   --- (3)

and applying the dilation equation to (3) gives

sum_k h_phi[k] h_phi[k - 2m] = delta[m].

Considering the orthogonality of the scaling and wavelet functions at a particular scale,

h_psi[k] = (-1)^k h_phi[N - k].

Hence h_phi[k] and h_psi[k] form a perfect reconstruction orthonormal filter bank.

Filter bank representation for a 2-tap filter

The equations for the filter coefficients are

h_phi[0] + h_phi[1] = sqrt(2) and h_phi^2[0] + h_phi^2[1] = 1.

The unique solution is

h_phi[0] = h_phi[1] = 1/sqrt(2),

and the wavelet filter is

h_psi[0] = (-1)^0 h_phi[1] = 1/sqrt(2),  h_psi[1] = (-1)^1 h_phi[0] = -1/sqrt(2).

There is no filter with an odd tap length.

4-tap wavelet (Daubechies wavelet)

For a 4-tap filter,

sum_{k=0}^{3} h_phi[k] = sqrt(2),  sum_{k=0}^{3} h_phi^2[k] = 1,  h_phi[0] h_phi[2] + h_phi[1] h_phi[3] = 0.

With these three equations, the Daubechies wavelet can be generated:

h_phi[0] = (1 + sqrt(3)) / (4 sqrt(2))
h_phi[1] = (3 + sqrt(3)) / (4 sqrt(2))
h_phi[2] = (3 - sqrt(3)) / (4 sqrt(2))
h_phi[3] = (1 - sqrt(3)) / (4 sqrt(2))
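A one-level DWT analysis stage in NumPy using these Daubechies D4 coefficients: filter with h_phi and h_psi and keep alternate samples. Circular (periodic) extension of the signal is assumed for simplicity.

```python
import numpy as np

s3 = np.sqrt(3.0)
h_phi = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
h_psi = np.array([(-1) ** k * h_phi[3 - k] for k in range(4)])  # (-1)^n h_phi[N-1-n]

def analysis_step(c):
    """One DWT level: return approximation and detail coefficients."""
    out_c, out_d = [], []
    N = len(c)
    for k in range(N // 2):
        # c_{j,k} = sum_l h[l - 2k] c_{j+1,l}, periodic extension
        idx = [(2 * k + i) % N for i in range(4)]
        out_c.append(np.dot(h_phi, c[idx]))
        out_d.append(np.dot(h_psi, c[idx]))
    return np.array(out_c), np.array(out_d)
```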

Where to start and where to stop?

The raw data are taken as the scaling coefficients at the highest available resolution (a scaled version of the data at infinite resolution).

(Figure: analysis cascade. At each stage the signal is filtered by h_phi[-k] and downsampled by 2 to give the approximation at the next lower resolution, and filtered by h_psi[-k] and downsampled by 2 to give the detail: f[k] -> {approximation, detail 2, detail 1, ...} from the highest resolution down to the lowest.)

Reconstruction

Synthesis filter banks are applied to get back the original signal.

(Figure: the (possibly processed) approximation and detail coefficients at each level are upsampled by 2, filtered by g_phi[k] and g_psi[k] respectively, and added, level by level, to give the reconstructed signal.)

2D Case
For the 2D case, the separability property enables the use of 1D filters:

phi(t_1, t_2) = phi_1(t_1) phi_2(t_2).

The corresponding 1D filters are applied first along one dimension and then along the other: the LP and HP filtering operations are done row-wise and then column-wise. A code sketch follows, and the figure below illustrates the filter bank.
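A sketch of one 2D decomposition level built from the 1D analysis_step defined above (separability): filter and downsample the rows, then the columns, to get the four subbands.

```python
import numpy as np

def analysis_2d(img):
    """One 2D DWT level: returns the LL, LH, HL, HH subbands."""
    H, W = img.shape
    # row-wise filtering and downsampling
    L = np.zeros((H, W // 2)); Hi = np.zeros((H, W // 2))
    for r in range(H):
        L[r], Hi[r] = analysis_step(img[r])
    # column-wise filtering and downsampling of each half
    LL = np.zeros((H // 2, W // 2)); LH = np.zeros((H // 2, W // 2))
    HL = np.zeros((H // 2, W // 2)); HH = np.zeros((H // 2, W // 2))
    for c in range(W // 2):
        LL[:, c], LH[:, c] = analysis_step(L[:, c])
        HL[:, c], HH[:, c] = analysis_step(Hi[:, c])
    return LL, LH, HL, HH
```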

(Figure: 2D analysis filter bank. f[m, n] is filtered row-wise by LP and HP and downsampled by 2 along the rows; each branch is then filtered column-wise by LP and HP and downsampled by 2 along the columns, producing four subbands: LL (coefficients of approximation at level 1), LH (horizontal detail), HL (vertical detail) and HH (diagonal detail). The original image is thus replaced by the four quarter-size subband images LL, HL, LH, HH.)

Original image

Decomposition at level 1

Decomposition at level 2
