
1. Explain the formation of an image in the human eye?

Ans: The human eye is a fluid-filled organ that focuses images of objects onto the retina. Most of the refraction occurs at the cornea, which has an index of refraction of about 1.38. Some of this light then passes through the pupil opening into the lens, which has an index of refraction of about 1.44. The lens is flexible, and the ciliary muscles contract or relax to change its shape and focal length. When the muscles relax, the lens flattens and the focal length becomes longer, so that distant objects can be focused on the retina. When the muscles contract, the lens is pushed into a more convex shape and the focal length is shortened, so that close objects can be focused on the retina. The retina contains rods and cones that detect the intensity and frequency of the light and send impulses to the brain along the optic nerve. A ray diagram can be used to show how light passes from a point on a real object (located somewhere in space outside the body) to the corresponding position on the image of the object on the retina at the back of the eye. Ray Diagram:

i. Representation of the object: The object is represented by a simple red arrow pointing upwards (left-hand side of the diagram).
ii. Light leaves the object propagating in all directions: Light leaving the object in all directions is represented by the small arrows pointing up, up-left, up-right, down, down-left and down-right.
iii. Some of the light leaving the object reaches the eye: Although the object scatters light in all directions, only a small proportion of the light scattered from it reaches the eye.
iv. Light changes direction when it passes from the air into the eye: When light travelling away from the object, towards the eye, arrives at the eye, the first surface it reaches is the cornea. The ray diagram shows the rays changing direction when they pass through the cornea.

The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power; when the eye focuses on a nearby object, the lens is most strongly refractive.
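As a rough numerical illustration (not part of the original answer), the thin-lens equation 1/f = 1/do + 1/di shows how the eye's focal length must change with object distance; the sketch below assumes, for simplicity, a fixed lens-to-retina image distance of about 17 mm.

def required_focal_length(object_distance_m, image_distance_m=0.017):
    # Thin-lens equation: 1/f = 1/d_o + 1/d_i (all distances in metres).
    # Assumption: the lens-to-retina distance d_i stays fixed at ~17 mm.
    return 1.0 / (1.0 / object_distance_m + 1.0 / image_distance_m)

for d_o in (100.0, 3.0, 0.25):  # far away, about 3 m, and reading distance
    f_mm = required_focal_length(d_o) * 1000.0
    print(f"object at {d_o:>6.2f} m -> required focal length ~ {f_mm:.1f} mm")

The printed focal lengths shrink from about 17 mm toward roughly 16 mm as the object moves closer, which matches the accommodation behaviour described above.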

2. Explain different linear methods for noise cleaning? Ans: Noise reduction is the process of removing noise from a signal. Noise reduction techniques are conceptually very similar regardless of the signal being processed; however, a priori knowledge of the characteristics of an expected signal means that the implementations of these techniques vary greatly depending on the type of signal. All recording devices, whether analogue or digital, have traits that make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device's mechanism or processing algorithms. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains determines the film's sensitivity, more sensitive film having larger grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level.

Linear smoothing filters: One method to remove noise is to convolve the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood are smeared across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters.

Anisotropic diffusion: Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation, called anisotropic diffusion. With a spatially constant diffusion coefficient this is equivalent to the heat equation, i.e. linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image.
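As an illustrative sketch (not taken from the original answer), the following Python/NumPy code builds a small Gaussian mask and convolves it with a grayscale image; the image array, kernel size and sigma are assumed example inputs.

import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Build a normalized 2-D Gaussian mask of shape (size, size).
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def smooth(image, kernel):
    # Convolve a 2-D grayscale image with the mask (edge-replicated borders).
    k = kernel.shape[0] // 2
    padded = np.pad(image, k, mode="edge")
    out = np.zeros_like(image, dtype=float)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            region = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = np.sum(region * kernel)
    return out

# Example: smooth a noisy synthetic image with a 5x5 Gaussian mask.
noisy = np.random.rand(64, 64)
smoothed = smooth(noisy, gaussian_kernel(5, sigma=1.0))

Each output pixel is a Gaussian-weighted average of its neighborhood, which is exactly the blurring trade-off described above.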

3. Which are the two quantitative approaches used for the evaluation of image features? Explain. Ans. There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, the one that results in the least classification error. The prototype performance approach for image segmentation is similar in nature. A prototype image with independently identified regions is segmented by a segmentation procedure using a test set of features. Then the detected segments are compared to the known segments, and the segmentation error is evaluated. The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure-of-merit approach to feature evaluation, and the method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes, defined as

B(S1, S2) = -ln{ ∫ [p(x|S1) p(x|S2)]^(1/2) dx }

where x denotes a vector containing individual image feature measurements with conditional density p(x|S1).

Amplitude Features: The most basic of all image features is some measure of image amplitude in terms of luminance, tristimulus value, spectral value or other units. There are many degrees of freedom in establishing image amplitude features. Image variables such as luminance or tristimulus values may be utilized directly, or alternatively some linear, nonlinear or perhaps non-invertible transformation can be performed to generate variables in a new amplitude space.

Transform Coefficient Features: The coefficients of a two-dimensional transform of a luminance image specify the amplitudes of the luminance patterns (two-dimensional basis functions) of the transform such that the weighted sum of the luminance patterns is identical to the image. By this characterization of a transform, the coefficients may be considered to indicate the degree of correspondence of a particular luminance pattern with an image field. If a basis pattern is of the same spatial form as a feature to be detected within the image, feature detection can be performed simply by monitoring the value of the transform coefficient.
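As a hedged numerical sketch (not part of the original answer), the B-distance between two hypothetical one-dimensional Gaussian feature densities can be evaluated directly from the integral above; the class means and standard deviations below are made-up illustration values.

import numpy as np

def gaussian_pdf(x, mean, std):
    # Univariate Gaussian density, standing in for p(x|S).
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def bhattacharyya_distance(pdf1, pdf2, x):
    # Numerically evaluate B = -ln( integral of sqrt(p(x|S1) p(x|S2)) dx ).
    integrand = np.sqrt(pdf1(x) * pdf2(x))
    return -np.log(np.trapz(integrand, x))

# Hypothetical feature densities for two classes S1 and S2.
x = np.linspace(-10.0, 10.0, 2001)
b = bhattacharyya_distance(lambda t: gaussian_pdf(t, 0.0, 1.0),
                           lambda t: gaussian_pdf(t, 2.0, 1.0),
                           x)
print(f"B-distance = {b:.3f}")  # a larger B implies a lower classification error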

4. Explain with diagram Digital image restoration model. Ans. In order to effectively design a digital image restoration system, it is necessary to characterize quantitatively the image degradation effects of the physical imaging system, the image digitizer and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer and display to determine their response to an arbitrary image field. In some instances it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a stochastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored.

Figure: Digital image restoration model

Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation. The figure above shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution C(x, y, t, λ), dependent on the spatial coordinates (x, y), time (t) and spectral wavelength (λ), is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields FO(i)(x, y, tj) at time instant tj described by the general relation

FO(i)(x, y, tj) = OP{ C(x, y, t, λ) }

where OP{ . } represents a general operator that is dependent on the space coordinates (x, y), the time history (t), the wavelength (λ) and the amplitude of the light distribution C. For a monochrome imaging system there will only be a single output field, while for a natural color imaging system, FO(i)(x, y, tj) may denote the red, green and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery will also involve several output bands of data.
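As an illustrative sketch (an assumption, not the exact model of the original figure), a common concrete instance of this degradation model is a spatially invariant blur plus additive noise, g = h * f + n, which can be simulated and then approximately inverted with a simple Wiener-style filter:

import numpy as np

def degrade(f, h, noise_sigma=0.01):
    # Blur image f with point-spread function h (circular convolution via FFT)
    # and add Gaussian noise, i.e. g = h * f + n.
    F, H = np.fft.fft2(f), np.fft.fft2(h, s=f.shape)
    g = np.real(np.fft.ifft2(F * H))
    return g + np.random.normal(0.0, noise_sigma, f.shape)

def wiener_restore(g, h, k=0.01):
    # Simple Wiener-style inverse: F_hat = conj(H) G / (|H|^2 + k).
    G, H = np.fft.fft2(g), np.fft.fft2(h, s=g.shape)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Hypothetical test: a random "scene" degraded by a 5x5 uniform blur.
f = np.random.rand(64, 64)
h = np.ones((5, 5)) / 25.0
g = degrade(f, h)
f_hat = wiener_restore(g, h)

Undoing the modeled degradation, as wiener_restore does here, is the "perform operations to undo the model" step described in the answer.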

5. Discuss Orthogonal Gradient Generation for first order derivative edge detection. Ans. There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives. Orthogonal Gradient Generation: An edge in a continuous domain edge segment F(x, y) can be detected by forming the continuous one-dimensional gradient G(x, y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes according to the following:

G(x, y) = (∂F/∂x) cos θ + (∂F/∂y) sin θ

The spatial gradient amplitude is given by:

G(x, y) = [GR(x, y)^2 + GC(x, y)^2]^(1/2)

where GR and GC denote the gradients along the row and column axes, respectively.

Figure: Orthogonal gradient generation

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(x, y) ≈ |GR(x, y)| + |GC(x, y)|

The orientation of the spatial gradient with respect to the row axis is

θ(x, y) = arctan{ GC(x, y) / GR(x, y) }

The remaining issue for discrete domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials given above. The simplest method of discrete gradient generation is to form the running difference of pixels along the rows and columns of the image. The row gradient is defined as

GR(j, k) = F(j, k) - F(j, k - 1)

and the column gradient is

GC(j, k) = F(j, k) - F(j + 1, k)

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts cross-difference operator, which is defined in magnitude form as

G(j, k) = |F(j, k) - F(j + 1, k + 1)| + |F(j, k + 1) - F(j + 1, k)|
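As a hedged sketch (not from the original answer), the discrete row and column gradients, the gradient magnitude and orientation, and the Roberts cross-difference magnitude described above can be computed as follows; the input is assumed to be a 2-D grayscale NumPy array.

import numpy as np

def edge_gradients(image):
    # Running-difference gradients plus the Roberts cross-difference magnitude.
    f = image.astype(float)
    gr = np.zeros_like(f)                 # row gradient:    F(j,k) - F(j,k-1)
    gc = np.zeros_like(f)                 # column gradient: F(j,k) - F(j+1,k)
    gr[:, 1:] = f[:, 1:] - f[:, :-1]
    gc[:-1, :] = f[:-1, :] - f[1:, :]

    magnitude = np.sqrt(gr**2 + gc**2)    # exact spatial gradient amplitude
    approx = np.abs(gr) + np.abs(gc)      # magnitude-combination approximation
    orientation = np.arctan2(gc, gr)      # angle with respect to the row axis

    roberts = np.zeros_like(f)            # Roberts cross differences
    roberts[:-1, :-1] = (np.abs(f[:-1, :-1] - f[1:, 1:]) +
                         np.abs(f[:-1, 1:] - f[1:, :-1]))
    return gr, gc, magnitude, approx, orientation, roberts

# Example: declare an edge wherever the gradient magnitude exceeds a threshold.
img = np.random.rand(64, 64)
_, _, mag, _, _, _ = edge_gradients(img)
edges = mag > 0.5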

6. Explain about the region splitting and merging with example. Ans. Subdivide an image into a set of disjoint regions and then merge and/or split the regions in an attempt to satisfy the conditions that the segmentation must be complete, that is, every pixel must be in a region, and that the points in a region must be connected. Let R represent the entire image and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region Ri, P(Ri) = TRUE. We start with the entire region. If P(R) = FALSE, the image is divided into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quadtree (that is, a tree in which nodes have exactly four descendants). The root of the tree corresponds to the entire image, and each node corresponds to a subdivision. If only splitting were used, the final partition would likely contain adjacent regions with identical properties. This drawback may be remedied by allowing merging as well as splitting. Satisfying the constraints stated above requires merging only adjacent regions whose combined pixels satisfy the predicate P. That is, two adjacent regions Rj and Rk are merged only if P(Rj U Rk) = TRUE. The procedure can be summarized as:
1. Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE.
2. Merge any adjacent regions Rj and Rk for which P(Rj U Rk) = TRUE.
3. Stop when no further merging or splitting is possible.
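As a simplified sketch (an assumption, not code from the original answer), the split step of this procedure can be implemented with a recursive quadtree whose predicate P accepts a region only if its intensity range is small; the merge step is omitted for brevity.

import numpy as np

def predicate(region, max_range=0.1):
    # P(R): TRUE if the region is homogeneous enough (small intensity range).
    return region.size > 0 and (region.max() - region.min()) <= max_range

def split(image, r0, c0, r1, c1, regions, min_size=2):
    # Recursively split image[r0:r1, c0:c1] into quadrants until P holds
    # or the region becomes too small; accepted regions are quadtree leaves.
    region = image[r0:r1, c0:c1]
    if predicate(region) or (r1 - r0) <= min_size or (c1 - c0) <= min_size:
        regions.append((r0, c0, r1, c1))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rs, cs, re, ce in [(r0, c0, rm, cm), (r0, cm, rm, c1),
                           (rm, c0, r1, cm), (rm, cm, r1, c1)]:
        split(image, rs, cs, re, ce, regions)

# Example: split a synthetic image with a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
leaves = []
split(img, 0, 0, 64, 64, leaves)
print(f"{len(leaves)} leaf regions after splitting")

A full implementation would then apply step 2, merging adjacent leaf regions whose union still satisfies P.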
