
Q1: Explain the process of formation of image in human eye.

Image Formation in the Eye The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. As illustrated in Fig. 2.1, the radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened. Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye. The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm, as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power. When the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object.

Figure 2.2: Graphical representation of the eye looking at a palm tree. Point c is the optical center of the lens.

In Fig. 2.2, for example, the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, the geometry of Fig. 2.2 yields 15/100 = h/17, or h = 2.55 mm. The retinal image is focused primarily in the area of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
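The similar-triangle calculation above can be checked with a few lines of Python. The 17 mm focal length is the maximum-refraction value quoted in the text; the function name is illustrative.

```python
# Retinal image size by similar triangles (Fig. 2.2):
# h / focal_length = object_height / distance.
def retinal_image_height_mm(object_height_m, distance_m, focal_length_mm=17.0):
    """Return the retinal image height in mm for an object of the
    given height (m) viewed at the given distance (m)."""
    return object_height_m / distance_m * focal_length_mm

# The 15 m tree viewed from 100 m:
h = retinal_image_height_mm(15.0, 100.0)
print(round(h, 2))  # 2.55
```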

Brightness Adaptation and Discrimination Since digital images are displayed as a discrete set of intensities, the eye's ability to discriminate between different intensity levels is an important consideration in presenting image-processing results. The range of light intensity levels to which the human visual system can adapt is of the order of 10^10 from the scotopic threshold to the glare limit. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.
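A consequence of the logarithmic relation is that equal ratios of intensity map to equal differences in perceived brightness. The constant k and the use of base-10 logarithms below are illustrative assumptions, not values from the text.

```python
import math

# Illustrative logarithmic brightness model: B = k * log10(I).
def perceived_brightness(intensity, k=1.0):
    return k * math.log10(intensity)

# Doubling the intensity adds the same perceived increment at any level:
low  = perceived_brightness(2.0)   - perceived_brightness(1.0)
high = perceived_brightness(200.0) - perceived_brightness(100.0)
print(abs(low - high) < 1e-12)  # True
```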

Q2: Explain different linear methods for noise cleaning.


Linear Noise Cleaning Noise added to an image generally has a higher spatial-frequency spectrum than the normal image components because it is spatially decorrelated. Hence, simple low-pass filtering can be effective for noise cleaning. We now discuss the convolution method of noise cleaning. A spatially filtered output image G(j, k) can be formed by discrete convolution of an input image F(m, n) with an L x L impulse response array H(j, k) according to the relation

G(j, k) = Σ(m) Σ(n) F(m, n) H(m + j + C, n + k + C)   (Eq. 4.8)

where C = (L + 1)/2. For noise cleaning, H should be of low-pass form, with all positive elements. Several common pixel impulse response arrays of low-pass form are used, and two such forms are given below.
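A minimal sketch of this convolution in pure Python, using the common 3 x 3 unit-weight uniform averaging mask as H. The zero padding at the borders and the single-spike test image are illustrative assumptions; the centre offset is written for 0-indexed arrays rather than the 1-indexed C of Eq. 4.8.

```python
# Discrete convolution of image F with an L x L low-pass mask H (Eq. 4.8 sketch).
def convolve(F, H):
    rows, cols = len(F), len(F[0])
    L = len(H)
    c = L // 2  # 0-indexed centre offset of the mask
    G = [[0.0] * cols for _ in range(rows)]
    for j in range(rows):
        for k in range(cols):
            s = 0.0
            for m in range(L):
                for n in range(L):
                    r, q = j + m - c, k + n - c
                    if 0 <= r < rows and 0 <= q < cols:  # zero padding outside
                        s += F[r][q] * H[m][n]
            G[j][k] = s
    return G

# Unit-weight 3 x 3 uniform mask: elements sum to 1, so no amplitude bias.
H_uniform = [[1 / 9] * 3 for _ in range(3)]
F = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # a single noisy spike
G = convolve(F, H_uniform)
print(round(G[1][1], 6))  # 1.0 -- the spike is spread over its neighbourhood
```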

These arrays, called noise cleaning masks, are normalized to unit weighting so that the noise-cleaning process does not introduce an amplitude bias in the processed image. Another linear noise-cleaning technique is homomorphic filtering (16), which is useful for image enhancement when an image is subject to multiplicative noise or interference. Fig. 4.9 describes the process.

Figure 4.9: Homomorphic filtering.
The input image F(j, k) is assumed to be modeled as the product of a noise-free image S(j, k) and an illumination interference array I(j, k). Thus,

F(j, k) = S(j, k) I(j, k)

Taking the logarithm yields the additive linear result

log{F(j, k)} = log{I(j, k)} + log{S(j, k)}

Conventional linear filtering techniques can now be applied to reduce the log interference component. Exponentiation after filtering completes the enhancement process.
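The log, filter, exponentiate pipeline can be sketched in 1-D. The moving-average low-pass filter used to estimate the slowly varying log-illumination component, the window size, and the signal values are all illustrative assumptions.

```python
import math

# Homomorphic filtering sketch: F = S * I  ->  log F = log S + log I,
# estimate the low-frequency log I with a moving average, subtract it,
# then exponentiate back.
def homomorphic(F, window=5):
    logF = [math.log(v) for v in F]
    half = window // 2
    low = []  # moving-average estimate of the log-interference component
    for i in range(len(logF)):
        seg = logF[max(0, i - half):i + half + 1]
        low.append(sum(seg) / len(seg))
    return [math.exp(lf - lo) for lf, lo in zip(logF, low)]

S = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0]        # "scene" detail
I = [10.0] * 6                            # flat illumination interference
F = [s * i for s, i in zip(S, I)]         # observed multiplicative image
print([round(v, 3) for v in homomorphic(F)])
```

Because a constant illumination term becomes a constant offset in the log domain, it cancels when the local average is subtracted: `homomorphic(F)` and `homomorphic(S)` give the same result up to floating-point error.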

Q3: Describe various texture features of historical and practical importance.


Fourier Spectra Methods The degree of texture coarseness is proportional to its spatial period; a region of coarse texture should have its Fourier spectral energy concentrated at low spatial frequencies. Conversely, regions of fine texture should exhibit a concentration of spectral energy at high spatial frequencies. Although this correspondence exists to some degree, difficulties often arise because of spatial changes in the period and phase of texture pattern repetitions. Experiments (10) have shown that there is considerable spectral overlap of regions of distinctly different natural texture, such as urban, rural, and woodland regions extracted from aerial photographs. On the other hand, Fourier spectral analysis has proved successful (26, 27) in the detection and classification of coal miners' black lung disease, which appears as diffuse textural deviations from the norm. Edge Detection Methods Rosenfeld and Troy (28) have proposed a measure of the number of edges in a neighborhood as a textural measure. As a first step in their process, an edge map array E(j, k) is produced by some edge detector such that E(j, k) = 1 for a detected edge and E(j, k) = 0 otherwise. Usually, the detection threshold is set lower than the normal setting for the isolation of boundary points. This texture measure is defined as

T(j, k) = (1/W^2) Σ(m = j-w to j+w) Σ(n = k-w to k+w) E(m, n)   (Eq. 16.6-1)

where W = 2w + 1 is the dimension of the observation window. A variation of this approach is to substitute the edge gradient G(j, k) for the edge map array in Eq. 16.6-1. A generalization of this concept is presented in Section 16.6.4. Autocorrelation Methods The autocorrelation function has been suggested as the basis of a texture measure (28). Although it has been demonstrated in the preceding section that it is possible to generate visually different stochastic fields with the same autocorrelation function, this does not necessarily rule out the utility of an autocorrelation feature set for natural images. The autocorrelation function is defined as

A_F(m, n) = Σ(j) Σ(k) F(j, k) F(j - m, k - n)

for computation over a W x W window with -T ≤ m, n ≤ T pixel lags. Presumably, a region of coarse texture will exhibit a higher correlation for a fixed shift (m, n) than will a region of fine texture. Thus, texture coarseness should be proportional to the spread of the autocorrelation function. Decorrelation Methods Stochastic texture fields generated by the model of Figure 16.5-1 can be described quite compactly by specification of the spatial operator O{·} and the stationary first-order

probability density p(W) of the independent, identically distributed generating process W(j, k). This observation has led to a texture feature extraction procedure, developed by Faugeras and Pratt (5), in which an attempt has been made to invert the model and estimate its parameters. Figure 16.6-2 is a block diagram of their decorrelation method of texture feature extraction. In the first step of the method, the spatial autocorrelation function A_F(m, n) is measured over the texture field to be analyzed. The autocorrelation function is then used to develop a whitening filter, with an impulse response H_W(j, k), using techniques described in Section 19.2. The whitening filter is a special type of decorrelation operator. It is used to generate the whitened field by convolving the texture field with the whitening filter impulse response.

This whitened field, which is spatially uncorrelated, can be utilized as an estimate of the independent generating process W(j, k); its first-order histogram provides an estimate of the probability density p(W).
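Two of the texture measures above, the edge-density measure and the autocorrelation measure, can be sketched in pure Python. The toy 4 x 4 binary patches (a checkerboard as "fine" texture, 2 x 2 blocks as "coarse"), the normalization of the autocorrelation by the patch energy, and the reuse of the fine patch as a stand-in edge map are all illustrative assumptions.

```python
# Edge density over a W x W window, W = 2w + 1 (Eq. 16.6-1 sketch).
def edge_density(E, j, k, w):
    W = 2 * w + 1
    total = sum(E[m][n]
                for m in range(j - w, j + w + 1)
                for n in range(k - w, k + w + 1))
    return total / (W * W)

# Autocorrelation at lag (m, n), normalized by the patch energy.
def autocorrelation(F, m, n):
    rows, cols = len(F), len(F[0])
    s = sum(F[j][k] * F[j - m][k - n]
            for j in range(m, rows)
            for k in range(n, cols))
    norm = sum(F[j][k] ** 2 for j in range(rows) for k in range(cols))
    return s / norm

fine   = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]  # checkerboard
coarse = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]  # 2x2 blocks

# Coarse texture keeps a higher correlation at a one-pixel shift:
print(autocorrelation(coarse, 0, 1), autocorrelation(fine, 0, 1))  # 0.5 0.0

# Edge density in a 3x3 window centred at (1, 1), treating `fine`
# as if it were a binary edge map:
print(round(edge_density(fine, 1, 1, 1), 3))  # 0.556
```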

Q5: Explain the digital image restoration model with a diagram.

In order to effectively design a digital image restoration system, it is necessary to quantitatively characterize the image degradation effects of the physical imaging system, the image digitizer and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer and display to determine their response to an arbitrary image field. In some instances, it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a stochastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored.

Figure 5.1: Digital image restoration model.
Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation. Fig. 5.1 shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution C(x, y, t, λ), dependent on spatial coordinates (x, y), time (t) and spectral wavelength (λ), is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields F_O^(i)(x, y, t_j) at time instant t_j described by the general relation

F_O^(i)(x, y, t_j) = O_P{C(x, y, t, λ)}

where O_P{·} represents a general operator that is dependent on the space coordinates (x, y), the time history (t), the wavelength (λ) and the amplitude of the light distribution (C). For a monochrome imaging system, there will only be a single output field, while for a natural color imaging system, F_O^(i)(x, y, t_j) may denote the red, green and blue tristimulus bands for i = 1, 2, 3, respectively.
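The degradation side of this model can be sketched for a monochrome 1-D field. The 3-tap blur kernel standing in for the spatial degradation O_P, the Gaussian sensor noise level, the border replication, and the step-edge test signal are all illustrative assumptions.

```python
import random

# Degradation model sketch: observed field = O_P{C} + noise,
# where O_P is approximated by a 3-tap blur.
def degrade(C, noise_sigma=0.1, seed=0):
    rng = random.Random(seed)          # seeded for reproducibility
    kernel = (0.25, 0.5, 0.25)         # illustrative blur standing in for O_P
    padded = [C[0]] + list(C) + [C[-1]]  # replicate border samples
    blurred = [kernel[0] * padded[i] + kernel[1] * padded[i + 1]
               + kernel[2] * padded[i + 2]
               for i in range(len(C))]
    # additive stochastic disturbance (e.g. sensor noise)
    return [b + rng.gauss(0.0, noise_sigma) for b in blurred]

C = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]  # ideal step-edge scene
F_O = degrade(C)                          # observed, degraded field
print([round(v, 2) for v in F_O])
```

With the noise turned off (`noise_sigma=0.0`), only the blur remains and the step edge is visibly smeared; restoration would attempt to undo exactly this model.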

Multispectral imagery will also involve several output bands of data. In the general model of Fig. 5.1, each observed image field F_O^(i)(x, y, t_j) is digitized to produce an array of image samples F_S^(i)(m_1, m_2, t_j) at each time instant t_j. The output samples of the digitizer are related to the input observed field by