
MC0086 - Digital Image Processing

(Book ID: B1007)

Q1. Explain the process of formation of an image in the human eye.

Ans: The retina is the back surface of the eye and consists of cells that transmit image information to the brain. Other parts that do not take an active part in the formation of the image on the retina have many other important functions, such as providing mechanical support to the structures of the eye and supplying the tissues with fluids, nutrients, etc. A ray diagram can be used to show how light passes from a point on a real object (located somewhere in space outside the body) to the corresponding position on the image of the object on the retina at the back of the eye. It is explained as given under:

1. Object representation: Let us consider the object first, which is represented by a simple red arrow pointing upwards. Most real objects have complicated shapes, textures, and so on. This arrow is used to represent a very simple object for which just two clearly defined points on the object are traced through the eye to the retina.

2. Light leaves the object, propagating in all directions: Let us consider this object as a scattering object. This means that after the light in the area (which may be called "ambient light") reaches the object, it leaves the surface of the object travelling in a wide range of directions. Light leaving the object in all directions is represented by the small arrows pointing upwards, up-left, up-right and downwards, down-left and down-right (small green arrows). A very similar but slightly simpler case would be to consider a light source instead of a (solid, light-scattering) object, and to say that the light source radiates light in all directions. That would result in the same diagram but would be less realistic, because most of the light received by the eye is reflected or scattered from solid objects rather than coming from a source of illumination directly. Further, one should never stare directly at bright light sources such as the sun, lasers and other bright sources of light; doing so can cause permanent eye damage.

3. Some light from the object reaches the eye: Though the object is scattering light in all directions, only a small proportion of the scattered light reaches the eye. Rays represent the direction of travel of light. The pink rays indicate paths taken by light leaving the top point of the object (that eventually reaches the retina), while the green rays indicate paths taken by light leaving the lower point of the object (that eventually reaches the retina).


Only two rays are shown leaving each point on the object. This simplification is to keep the diagram clear. The two rays drawn in each case are the extreme rays, i.e., those that only just get through the optical system called the eye. These generally represent a cone of light that propagates all the way through the system from the object to the image. The idea of this cone of light is represented on the diagram by the area between the pink (upper) rays being shaded pale orange. This shaded area is a reminder that light leaving the top of the object along any ray that could be drawn between the two (extreme) pink rays should reach exactly the same position in the image at the back of the eye. The same applies to the area between the two green (lower) rays, but this is not shaded to avoid over-complicating the diagram.

4. Light changes direction when it passes from the air into the eye: The first surface that light hits in the eye is the cornea. The ray diagram shows the rays changing direction when they pass through the cornea. This change in direction is called refraction, i.e., the re-direction of light as it passes from one medium into another. To describe this ray diagram it is sufficient to say that several structures in the eye contribute to image formation by re-directing the light passing through them in such a way as to improve the quality of the image formed on the retina. The cornea and the lens are the parts responsible for the refraction of light as it passes through the eye. Most of the refraction (bending, or "re-directing", of the light) occurs at the interface between the air outside the eye and the cornea.
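The text gives no numbers, but as a rough illustration of this bending, assuming textbook values of n ≈ 1.00 for air and n ≈ 1.376 for the cornea and an arbitrary incidence angle (none of these figures come from the text), Snell's law relates the ray directions on either side of the air-cornea interface:

```latex
% Snell's law: n_1 \sin\theta_1 = n_2 \sin\theta_2   (illustrative values only)
% With n_1 = 1.00 (air), n_2 \approx 1.376 (cornea) and incidence angle \theta_1 = 30^\circ:
\sin\theta_2 = \frac{1.00 \cdot \sin 30^\circ}{1.376} \approx 0.363
\quad\Longrightarrow\quad \theta_2 \approx 21^\circ
```

The ray is therefore bent toward the normal as it enters the optically denser cornea, which is the re-direction the ray diagram depicts.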

Q2. Explain different linear methods for noise cleaning.

Ans: The process of removing noise from a signal is called noise reduction. Analogue and digital recording devices have traits which make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device's mechanism or processing algorithms. In electronic recording devices, a major form of noise is hiss caused by random electrons that, heavily influenced by heat, stray from their designated path. These stray electrons influence the voltage of the output signal and thus create detectable noise. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains determines the film's sensitivity, more sensitive film having larger grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level.

Linear smoothing filters:-


One method to remove noise is to convolve the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood "smear" across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for nonlinear noise reduction filters.

Anisotropic diffusion:- Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation, which is called anisotropic diffusion. With a spatially constant diffusion coefficient, this is equivalent to the heat equation or linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image.
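A minimal sketch of the convolution-based smoothing described above, assuming NumPy and SciPy are available; the 5x5 mask size and the sigma value are illustrative choices, not taken from the text:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_mask(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian mask (weights sum to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    mask = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return mask / mask.sum()

def smooth(image, size=5, sigma=1.0):
    """Low-pass filter: each pixel becomes a Gaussian-weighted average of its neighbors."""
    return convolve(image.astype(float), gaussian_mask(size, sigma), mode="nearest")

# Example: smooth a noisy synthetic image
noisy = np.ones((64, 64)) + 0.2 * np.random.randn(64, 64)
clean = smooth(noisy, size=5, sigma=1.0)
```

As the text notes, the same call with a larger sigma blurs edges more strongly, which is why such linear filters mainly serve as building blocks for nonlinear methods.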

Q3. Which are the two quantitative approaches used for the evaluation of image features? Explain.

Ans: The two quantitative approaches are prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, that which results in the least classification error. The prototype performance approach for image segmentation is similar in nature. A prototype image with independently identified regions is segmented by a segmentation procedure using a test set of features. Then, the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication depends not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter.

The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure-of-merit for texture feature evaluation; the method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes, defined as

B(S1, S2) = -ln { ∫ [ p(x|S1) p(x|S2) ]^(1/2) dx }

where x denotes a vector containing individual image feature measurements with conditional density p(x|S1).
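A small sketch of this figure-of-merit idea, assuming (purely for illustration) that each class has a one-dimensional Gaussian feature density, in which case the B-distance has a well-known closed form; the feature samples below are made-up placeholders, not data from the text:

```python
import numpy as np

def bhattacharyya_gaussian(f1, f2):
    """B-distance between two classes, assuming 1-D Gaussian feature densities."""
    m1, v1 = np.mean(f1), np.var(f1)
    m2, v2 = np.mean(f2), np.var(f2)
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

# Hypothetical texture-feature measurements for two classes S1 and S2:
class1 = np.random.normal(5.0, 1.0, 200)
class2 = np.random.normal(7.0, 1.5, 200)
print(bhattacharyya_gaussian(class1, class2))  # larger distance -> lower expected classification error
```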


Q4. Explain, with a diagram, the digital image restoration model.


Ans: The process of restoring a blurred image on the basis of a mathematical model is known as digital image restoration. In the standard linear model, the observed image g is the true image f acted on by a blurring operator H plus additive noise n, i.e., g = Hf + n. Causes of blur:- Atmospheric turbulence blur, e.g., in remote sensing and astronomical imaging due to long-term exposure through the atmosphere, where the turbulence in the atmosphere gives rise to random variations in the refractive index. For many practical purposes, this blurring can be modelled by a Gaussian point spread function, and the discretized problem is a linear system of equations whose coefficient matrix is a block Toeplitz matrix with Toeplitz blocks.
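A brief sketch of this forward (degradation) model g = Hf + n with a Gaussian point spread function, assuming NumPy/SciPy; the image size, sigma and noise level are arbitrary illustrative values:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=2.0):
    """Discrete Gaussian point spread function, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# g = H f + n : blur the true image f with the PSF and add noise n
f = np.zeros((64, 64)); f[20:40, 20:40] = 1.0       # simple test scene
g = fftconvolve(f, gaussian_psf(), mode="same")     # atmospheric-turbulence-like blur
g += 0.01 * np.random.randn(*g.shape)               # additive noise
```

Restoration then amounts to estimating f from g, which is the ill-posed inverse problem discussed next.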

Discretizations of deconvolution problems are solved by regularization methods - such as those implemented in the Matlab package Regularization Tools - that seek to balance noise suppression against the loss of detail in the restored image. Unfortunately, classical regularization algorithms tend to produce smooth solutions, and as a consequence it is difficult to recover sharp edges in the image. A 2-D version [2] of a new algorithm [3] has been developed that is much better able to reconstruct the sharp edges that are typical in digital images. The algorithm, called PP-TSVD, is a modification of the truncated-SVD method; it incorporates the solution of a linear l1-problem and includes a parameter k that controls the amount of noise reduction. The algorithm is implemented in Matlab and is available as the Matlab function pptvsd. The PP-TSVD algorithm can compute various fundamental solutions whose underlying basis functions are delta functions, piecewise constant functions, piecewise linear functions, and piecewise second-degree polynomials. Its use is being investigated in areas such as astronomy and geophysics.
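The PP-TSVD code itself is not reproduced here; the following is only a generic sketch of the truncated-SVD idea it builds on, applied to a small 1-D deconvolution problem (the matrix A, data b and truncation level k are all illustrative assumptions):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularization: keep only the k largest singular values,
    discarding the noise-dominated small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Tiny 1-D example: A is a Toeplitz-like Gaussian blurring matrix
n = 50
x = np.arange(n)
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
f_true = (np.abs(x - 25) < 8).astype(float)          # piecewise-constant signal
b = A @ f_true + 1e-3 * np.random.randn(n)           # blurred, noisy data
f_tsvd = tsvd_solve(A, b, k=15)                      # smooth regularized solution
```

As the text explains, plain TSVD solutions of this kind are smooth; PP-TSVD replaces the least-squares step with an l1-problem so that piecewise (sharp-edged) solutions can be recovered.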

Q5. Discuss orthogonal gradient generation for first-order derivative edge detection.
Ans: Orthogonal Gradient Generation: An edge in a continuous-domain image F(x, y) can be detected by forming the continuous one-dimensional gradient G(x, y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes according to the following:

G(x, y) = (∂F(x, y)/∂x) cos θ + (∂F(x, y)/∂y) sin θ


Fig. 8.5 describes the generation of an edge gradient in the discrete domain in terms of a row gradient and a column gradient. The spatial gradient amplitude is given by

G(j, k) = [G_R(j, k)^2 + G_C(j, k)^2]^(1/2)

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(j, k) = |G_R(j, k)| + |G_C(j, k)|

The orientation of the spatial gradient with respect to the row axis is

θ(j, k) = arctan{ G_C(j, k) / G_R(j, k) }
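A small sketch computing these three quantities from given row and column gradient arrays, assuming NumPy (the array names G_R and G_C are just placeholders):

```python
import numpy as np

def gradient_measures(G_R, G_C):
    """Amplitude, magnitude approximation, and orientation of the spatial gradient."""
    amplitude = np.sqrt(G_R**2 + G_C**2)      # G = [G_R^2 + G_C^2]^(1/2)
    magnitude = np.abs(G_R) + np.abs(G_C)     # cheaper |G_R| + |G_C| approximation
    orientation = np.arctan2(G_C, G_R)        # angle w.r.t. the row axis (full quadrant form of arctan)
    return amplitude, magnitude, orientation
```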

The remaining issue for discrete domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of Eq. 8.3a. The simplest method of discrete gradient generation is to form the running difference of pixels along rows and columns of the image. The row gradient is defined as

G_R(j, k) = F(j, k) - F(j, k - 1)

and the column gradient is

G_C(j, k) = F(j, k) - F(j + 1, k)
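A sketch of these running-difference gradients, assuming NumPy and following the index convention of the equations above:

```python
import numpy as np

def running_difference_gradients(F):
    """Simplest discrete gradients: pixel differences along rows and columns."""
    G_R = np.zeros_like(F, dtype=float)
    G_C = np.zeros_like(F, dtype=float)
    G_R[:, 1:] = F[:, 1:] - F[:, :-1]      # G_R(j,k) = F(j,k) - F(j,k-1)
    G_C[:-1, :] = F[:-1, :] - F[1:, :]     # G_C(j,k) = F(j,k) - F(j+1,k)
    return G_R, G_C
```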


Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts cross-difference operator, which is defined in magnitude form as

G(j, k) = |F(j, k) - F(j + 1, k + 1)| + |F(j, k + 1) - F(j + 1, k)|

and in square-root form as

G(j, k) = { [F(j, k) - F(j + 1, k + 1)]^2 + [F(j, k + 1) - F(j + 1, k)]^2 }^(1/2)

Prewitt has introduced a pixel edge gradient operator described by the pixel numbering convention of Fig. 8.6, in which A0 through A7 denote the eight neighbors of the center pixel, numbered clockwise starting from the upper-left neighbor. The Prewitt operator square-root edge gradient is defined as

G(j, k) = [G_R(j, k)^2 + G_C(j, k)^2]^(1/2)

with

G_R(j, k) = (1/(K + 2)) [(A2 + K*A3 + A4) - (A0 + K*A7 + A6)]
G_C(j, k) = (1/(K + 2)) [(A0 + K*A1 + A2) - (A6 + K*A5 + A4)]

where K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position.


The Sobel operator edge detector differs from the Prewitt edge detector in that the values of the north, south, east and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient. Fig. 8.7 shows examples of the Prewitt and Sobel gradients of the peppers image. The row and column gradients for all the edge detectors mentioned previously in this subsection involve a linear combination of pixels within a small neighborhood. Consequently, the row and column gradients can be computed by the convolution relationships

G_R(j, k) = F(j, k) ⊛ H_R(j, k)
G_C(j, k) = F(j, k) ⊛ H_C(j, k)

where ⊛ denotes two-dimensional convolution and H_R, H_C are the row and column impulse response arrays.
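A sketch of this convolution formulation, assuming SciPy; the 3x3 masks below are the standard Prewitt (K = 1) and Sobel (K = 2) impulse response arrays written in one common sign/orientation convention, which may differ from the convention of Figs. 8.5-8.6:

```python
import numpy as np
from scipy.ndimage import convolve

def prewitt_sobel_gradient(F, K=2):
    """Row/column gradients by convolution; K = 1 gives Prewitt, K = 2 gives Sobel."""
    # Column-direction mask (responds to horizontal edges); the row mask is its transpose.
    H_C = np.array([[ 1,  K,  1],
                    [ 0,  0,  0],
                    [-1, -K, -1]], dtype=float) / (K + 2)
    H_R = H_C.T                                   # responds to vertical edges
    G_R = convolve(F.astype(float), H_R, mode="nearest")
    G_C = convolve(F.astype(float), H_C, mode="nearest")
    return np.sqrt(G_R**2 + G_C**2)               # square-root edge gradient
```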


Q6. Explain region splitting and merging with an example.
Ans: Region splitting and merging:- Subdivide an image into a set of disjoint regions and then merge and/or split the regions in an attempt to satisfy the segmentation conditions. Let R represent the entire image and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any resulting region R_i, P(R_i) = TRUE. We start with the entire region R. If P(R) = FALSE, the image is divided into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quadtree (that is, a tree in which each node has exactly four descendants). The root of the tree corresponds to the entire image and each node corresponds to a subdivision; in the example, only one of the quadrants needed to be subdivided further. If only splitting were used, the final partition would likely contain adjacent regions with identical properties. This drawback may be remedied by merging as well as splitting: any adjacent regions R_j and R_k for which P(R_j ∪ R_k) = TRUE are merged, and the procedure stops when no further splitting or merging is possible.
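A minimal sketch of the splitting half of this procedure, assuming NumPy; the predicate, its threshold, the minimum block size and the test image are illustrative choices, and the merging of adjacent leaves is omitted for brevity:

```python
import numpy as np

def P(region, threshold=10.0):
    """Homogeneity predicate: TRUE if the region's intensity spread is small."""
    return region.max() - region.min() <= threshold

def split(region, top=0, left=0, min_size=2):
    """Recursive quadtree splitting: return a list of (top, left, h, w) homogeneous blocks."""
    h, w = region.shape
    if P(region) or h <= min_size or w <= min_size:
        return [(top, left, h, w)]
    h2, w2 = h // 2, w // 2
    return (split(region[:h2, :w2], top,      left,      min_size) +
            split(region[:h2, w2:], top,      left + w2, min_size) +
            split(region[h2:, :w2], top + h2, left,      min_size) +
            split(region[h2:, w2:], top + h2, left + w2, min_size))

# Example: an image with a bright square on a dark background
img = np.zeros((64, 64)); img[16:48, 16:48] = 255
leaves = split(img)   # quadrants failing P are subdivided further, as in the quadtree description
```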

