
Master of Computer Application (MCA) Semester VI
MC0086 Digital Image Processing - 4 Credits
Assignment Set 1 (60 Marks)

Each Question carries 10 Marks (6 x 10 = 60 Marks)

1. Explain the fundamental steps in digital image processing. What are the components of an image processing system?

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.

Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, Bell Laboratories, the University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.[1] The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computation-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method but also the cheapest. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.

Digital camera images

Digital cameras generally include dedicated digital image processing chips to convert the raw data from the image sensor into a colour-corrected image in a standard image file format. Images from digital cameras often receive further processing to improve their quality, a distinct advantage that digital cameras have over film cameras. The digital image processing is typically executed by special software programs that can manipulate the images in many ways. Many digital cameras also enable viewing of histograms of images, as an aid for the photographer to understand the rendered brightness range of each shot more readily.

2. What is the difference between spatial resolution and gray-level resolution?

Sampling is the principal factor determining the spatial resolution of an image. Basically, spatial resolution is the smallest discernible detail in an image. As an example, suppose we construct a chart with vertical lines of width W, and with the space between the lines also having width W. A line pair consists of one such line and its adjacent space.

Thus, the width of a line pair is 2W, and there are 1/(2W) line pairs per unit distance. A widely used definition of resolution is simply the smallest number of discernible line pairs per unit distance; for example, 100 line pairs/mm.

Gray-level resolution: This refers to the smallest discernible change in gray level. The measurement of discernible changes in gray level is a highly subjective process. We have considerable discretion regarding the number of samples used to generate a digital image, but this is not true for the number of gray levels. Due to hardware constraints, the number of gray levels is usually an integer power of two. The most common value is 8 bits, but it can vary depending on the application. When an actual measure of physical resolution relating pixels to the level of detail they resolve in the original scene is not necessary, it is not uncommon to refer to an L-level digital image of size M x N as having a spatial resolution of M x N pixels and a gray-level resolution of L levels.
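As an illustration of gray-level resolution, the short sketch below (Python with NumPy; the requantize function name is ours, not from any standard library) reduces an 8-bit image to L = 2^k gray levels, which is exactly the kind of reduction described above.

```python
import numpy as np

def requantize(image, bits):
    """Reduce an 8-bit grayscale image to 2**bits gray levels."""
    levels = 2 ** bits                # L = 2^k gray levels
    step = 256 // levels              # width of each quantization bin
    return (image // step) * step     # map every pixel to its bin's base value

# Example: a synthetic 4x4 8-bit image reduced to 4 gray levels (2 bits).
img = np.array([[  0,  60, 120, 180],
                [ 30,  90, 150, 210],
                [ 45, 105, 165, 225],
                [ 15,  75, 135, 255]], dtype=np.uint8)
print(requantize(img, 2))
```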
Types of neighborhoods

Neighborhood operations play a key role in modern digital image processing. It is therefore important to understand how images can be sampled and how that relates to the various neighborhoods that can be used to process an image.

Rectangular sampling - In most cases, images are sampled by laying a rectangular grid over an image, as illustrated in Figure (1.1). This results in the type of sampling shown in Figures (1.3a) and (1.3b).

Hexagonal sampling - An alternative sampling scheme is shown in Figure (1.3c) and is termed hexagonal sampling.

Both sampling schemes have been studied extensively and both represent a possible periodic tiling of the continuous image space. However, rectangular sampling remains the method of choice due to hardware and software considerations.

Local operations produce an output pixel value based upon the pixel values in the neighborhood. Some of the most common neighborhoods are the 4-connected neighborhood and the 8-connected neighborhood in the case of rectangular sampling, and the 6-connected neighborhood in the case of hexagonal sampling, illustrated in Figure (1.3).

Fig (1.3a), Fig (1.3b), Fig (1.3c): the 4-connected, 8-connected, and 6-connected neighborhoods (figures not reproduced here).
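To make the neighborhood definitions concrete, here is a minimal sketch (Python; the helper names are illustrative, and rectangular sampling is assumed) that lists the coordinates of the 4-connected and 8-connected neighbors of a pixel.

```python
def neighbors_4(r, c):
    """4-connected neighborhood (E, N, W, S) of the pixel at row r, column c."""
    return [(r, c + 1), (r - 1, c), (r, c - 1), (r + 1, c)]

def neighbors_8(r, c):
    """8-connected neighborhood: the 4-neighbors plus the four diagonals."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if not (dr == 0 and dc == 0)]

# A local (neighborhood) operation then combines the values found at these
# coordinates; e.g. a 3x3 mean filter averages a pixel with its 8-neighbors.
# In practice, coordinates that fall outside the image must be clipped or ignored.
```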

3. Explain the histogram modification technique. Explain different linear methods for noise cleaning.

Histogram equalization usually increases the global contrast of many images, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram. This allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.

The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in x-ray images, and to better detail in photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique and an invertible operator, so in theory, if the histogram equalization function is known, the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal. In scientific imaging where spatial correlation is more important than intensity of signal (such as separating DNA fragments of quantized length), the small signal-to-noise ratio usually hampers visual detection.

Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for scientific images like thermal, satellite or x-ray images, often the same class of images that a user would apply false-color to. Histogram equalization can also produce undesirable effects (like a visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color depth (number of unique shades of gray) of the image. Histogram equalization works best when applied to images with much higher color depth than the palette size, like continuous data or 16-bit gray-scale images.

There are two ways to think about and implement histogram equalization, either as an image change or as a palette change. The operation can be expressed as P(M(I)), where I is the original image, M is the histogram equalization mapping operation and P is a palette. If we define a new palette as P' = P(M) and leave image I unchanged, then histogram equalization is implemented as a palette change. On the other hand, if palette P remains unchanged and the image is modified to I' = M(I), then the implementation is by image change. In most cases a palette change is better, as it preserves the original data.

Generalizations of this method use multiple histograms to emphasize local contrast, rather than overall contrast. Examples of such methods include adaptive histogram equalization and contrast limited adaptive histogram equalization (CLAHE). Histogram equalization also seems to be used in biological neural networks so as to maximize the output firing rate of the neuron as a function of the input statistics. This has been shown in particular in the fly retina.[1] Histogram equalization is a specific case of the more general class of histogram remapping methods. These methods seek to adjust the image to make it easier to analyze or to improve visual quality (e.g., retinex).
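The mapping M described above can be sketched in a few lines. The following is an illustrative implementation (Python with NumPy; an 8-bit single-channel image with at least two distinct gray levels is assumed) of histogram equalization as an image change, I' = M(I).

```python
import numpy as np

def equalize(image):
    """Global histogram equalization of an 8-bit grayscale image: I' = M(I)."""
    hist = np.bincount(image.ravel(), minlength=256)   # histogram of the 256 gray levels
    cdf = hist.cumsum()                                 # cumulative distribution function
    cdf_min = cdf[cdf > 0][0]                           # CDF value of the first occupied level
    denom = max(cdf[-1] - cdf_min, 1)                   # guard against a constant image
    # Classic mapping M: stretch the CDF so the occupied levels span 0..255.
    mapping = np.round((cdf - cdf_min) / denom * 255.0).astype(np.uint8)
    return mapping[image]                               # apply M as a look-up table
```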

Back projection
The back projection (or "back project") of a histogrammed image is the re-application of the modified histogram to the original image, functioning as a look-up table for pixel brightness values. For each group of pixels taken from the same position in all input single-channel images, the function puts the histogram bin value into the destination image, where the coordinates of the bin are determined by the values of the pixels in this input group. In terms of statistics, the value of each output image pixel characterizes the probability that the corresponding input pixel group belongs to the object whose histogram is used.[2]
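A minimal back-projection sketch for a single 8-bit channel is shown below (Python with NumPy; the function name and the bin count are illustrative choices, not a fixed API). The model histogram would typically be built from a region known to contain the object of interest.

```python
import numpy as np

def back_project(image, model_hist, bins=32):
    """Replace each pixel by its bin's value in a model histogram (8-bit, single channel)."""
    bin_width = 256 // bins
    bin_idx = image // bin_width        # which histogram bin each pixel falls into
    return model_hist[bin_idx]          # look up the (e.g. normalized) bin value

# Usage sketch: histogram a patch that contains the object, normalize it so the
# values can be read as probabilities, then back-project the whole image.
# patch = image[r0:r1, c0:c1]
# hist, _ = np.histogram(patch, bins=32, range=(0, 256))
# prob_map = back_project(image, hist / hist.max())
```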

4. Which are the two quantitative approaches used for the evaluation of image features? Explain. Describe various texture features of historical and practical importance.

The word "metamorphism" comes from the Greek meta (change) and morph (form), so metamorphism means to change form. In geology this refers to the changes in mineral assemblage and texture that result from subjecting a rock to conditions such as pressures, temperatures, and chemical environments different from those under which the rock originally formed.

Note that diagenesis is also a change in form that occurs in sedimentary rocks. In geology, however, we restrict diagenetic processes to those which occur at temperatures below 200 °C and pressures below about 300 MPa (MPa stands for megapascals), which is equivalent to about 3 kilobars of pressure (1 kb = 100 MPa). Metamorphism, therefore, occurs at temperatures and pressures higher than 200 °C and 300 MPa. Rocks can be subjected to these higher temperatures and pressures as they are buried deeper in the Earth. Such burial usually takes place as a result of tectonic processes such as continental collisions or subduction. The upper limit of metamorphism occurs at the pressure and temperature where melting of the rock in question begins. Once melting begins, the process changes to an igneous process rather than a metamorphic process.

Grade of Metamorphism: As the temperature and/or pressure increases on a body of rock, we say the rock undergoes prograde metamorphism or that the grade of metamorphism increases. Metamorphic grade is a general term for describing the relative temperature and pressure conditions under which metamorphic rocks form. Low-grade metamorphism takes place at temperatures between about 200 and 320 °C, and at relatively low pressure. Low-grade metamorphic rocks are generally characterized by an abundance of hydrous minerals. With increasing grade of metamorphism, the hydrous minerals begin to react with other minerals and/or break down to less hydrous minerals. High-grade metamorphism takes place at temperatures greater than 320 °C and relatively high pressure. As the grade of metamorphism increases, hydrous minerals become less hydrous by losing H2O, and non-hydrous minerals become more common.

5. Write a note on luminance edge detector performance. Explain with a method.

The Luminance Edge Blur image effect blurs edges based on generated luminance values. No depth or normals textures are required; instead, normals are generated from the image colour values. Like all image effects, Edge Detection is available in Unity Pro only. Make sure to have the Pro Standard Assets installed.

Show Generated Normals - Preview normals as generated from luminance differences.
Offset Scale - The offset used when sampling the image for luminance values.
Offset Scale - The blur radius used when applying a blur based on a detected edge.

Hardware support
Edge Detect works on graphics cards with support for pixel shaders (2.0) or on devices with OpenGL ES 2.0 support, e.g. GeForce FX 5200 or Radeon 9500 and up. All image effects automatically disable themselves when they cannot run on an end-user's graphics card.

6. What is thresholding? Explain various types of thresholding.

During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value (assuming an object to be brighter than the background) and as "background" pixels otherwise. This convention is known as threshold above. Variants include threshold below, which is the opposite of threshold above; threshold inside, where a pixel is labeled "object" if its value is between two thresholds; and threshold outside, which is the opposite of threshold inside (Shapiro, et al. 2001:83). Typically, an object pixel is given a value of 1 while a background pixel is given a value of 0. Finally, a binary image is created by coloring each pixel white or black, depending on the pixel's label.
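The four conventions above translate directly into simple comparisons. Below is an illustrative sketch (Python with NumPy; the function names are ours) producing binary images in which object pixels are 1 and background pixels are 0.

```python
import numpy as np

def threshold_above(image, T):
    """'Object' where the pixel is brighter than T, 'background' otherwise."""
    return (image > T).astype(np.uint8)          # 1 = object, 0 = background

def threshold_below(image, T):
    """Opposite of threshold above."""
    return (image < T).astype(np.uint8)

def threshold_inside(image, T_low, T_high):
    """'Object' where the pixel value lies between the two thresholds."""
    return ((image >= T_low) & (image <= T_high)).astype(np.uint8)

def threshold_outside(image, T_low, T_high):
    """Opposite of threshold inside."""
    return 1 - threshold_inside(image, T_low, T_high)
```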

Threshold selection
The key parameter in the thresholding process is the choice of the threshold value (or values, as mentioned earlier). Several different methods for choosing a threshold exist; users can manually choose a threshold value, or a thresholding algorithm can compute a value automatically, which is known as automatic thresholding (Shapiro, et al. 2001:83). A simple method would be to choose the mean or median value, the rationale being that if the object pixels are brighter than the background, they should also be brighter than the average. In a noiseless image with uniform background and object values, the mean or median will work well as the threshold; however, this will generally not be the case. A more sophisticated approach might be to create a histogram of the image pixel intensities and use the valley point as the threshold. The histogram approach assumes that there are average values for both the background and object pixels, but that the actual pixel values have some variation around these average values. However, this may be computationally expensive, and image histograms may not have clearly defined valley points, often making the selection of an accurate threshold difficult. In such cases a unimodal threshold selection algorithm may be more appropriate. One method that is relatively simple, does not require much specific knowledge of the image, and is robust against image noise, is the following iterative method:

1. An initial threshold T is chosen; this can be done randomly or according to any other method desired.
2. The image is segmented into object and background pixels as described above, creating two sets:
   G1 = {f(m,n) : f(m,n) > T} (object pixels)
   G2 = {f(m,n) : f(m,n) <= T} (background pixels)
   (note that f(m,n) is the value of the pixel located in the m-th column and n-th row)
3. The average of each set is computed:
   m1 = average value of G1
   m2 = average value of G2
4. A new threshold is created that is the average of m1 and m2:
   T = (m1 + m2) / 2
5. Go back to step two, now using the new threshold computed in step four; keep repeating until the new threshold matches the one before it (i.e. until convergence has been reached).

This iterative algorithm is a special one-dimensional case of the k-means clustering algorithm, which has been proven to converge at a local minimum, meaning that a different initial threshold may give a different final result.
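A direct transcription of this iterative method might look as follows (Python with NumPy; the initial guess here is the global mean, which is one common but not mandatory choice).

```python
import numpy as np

def iterative_threshold(image, T=None, eps=0.5):
    """Iterative (two-class, k-means style) threshold selection."""
    f = image.astype(np.float64)
    if T is None:
        T = f.mean()                         # step 1: initial guess
    while True:
        obj = f[f > T]                       # step 2: object set G1
        bkg = f[f <= T]                      #         background set G2
        m1 = obj.mean() if obj.size else T   # step 3: mean of each set
        m2 = bkg.mean() if bkg.size else T
        T_new = (m1 + m2) / 2.0              # step 4: new threshold
        if abs(T_new - T) < eps:             # step 5: stop at convergence
            return T_new
        T = T_new
```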

Adaptive thresholding
Thresholding is called adaptive thresholding when a different threshold is used for different regions in the image. This may also be known as local or dynamic thresholding (Shapiro, et al. 2001:89).
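One common way to realize adaptive thresholding is to compare each pixel with the mean of a surrounding window. The sketch below assumes SciPy is available and uses a local-mean rule with a small offset; other local statistics (median, Gaussian-weighted mean) are equally valid choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter   # SciPy is assumed to be available

def adaptive_threshold(image, block=15, offset=5):
    """Local-mean adaptive thresholding: each pixel is compared with the mean of its
    block x block neighborhood, minus a small offset (one common choice among many)."""
    local_mean = uniform_filter(image.astype(np.float64), size=block)
    return (image > local_mean - offset).astype(np.uint8)
```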

Categorizing thresholding methods


Sezgin and Sankur (2004) categorize thresholding methods into the following six groups, based on the information the algorithm manipulates:

"histogram shape-based methods, where, for example, the peaks, valleys and curvatures of the smoothed histogram are analyzed clustering-based methods, where the gray-level samples are clustered in two parts as background and foreground (object), or alternately are modeled as a mixture of two Gaussians entropy-based methods result in algorithms that use the entropy of the foreground and background regions, the cross-entropy between the original and binarized image, etc. object attribute-based methods search a measure of similarity between the gray-level and the binarized images, such as fuzzy shape similarity, edge coincidence, etc. spatial methods [that] use higher-order probability distribution and/or correlation between pixels local methods adapt the threshold value on each pixel to the local image characteristics."

Multiband thresholding
Colour images can also be thresholded. One approach is to designate a separate threshold for each of the RGB components of the image and then combine them with an AND operation. This reflects the way the camera works and how the data is stored in the computer, but it does not correspond to the way that people recognize colour. Therefore, the HSL and HSV colour models are more often used. It is also possible to use the CMYK colour model (Pham et al., 2007).
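A per-channel AND combination of the kind described above can be sketched as follows (Python with NumPy; an (H, W, 3) array in R, G, B channel order and per-channel "object is brighter" thresholds are assumed).

```python
import numpy as np

def threshold_rgb(image, t_r, t_g, t_b):
    """Threshold each RGB channel separately, then AND the three binary masks."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return ((r > t_r) & (g > t_g) & (b > t_b)).astype(np.uint8)
```

The same structure applies after converting to HSL or HSV; only the channels and thresholds change.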

Spring 2012

Master of Computer Application (MCA) Semester VI
MC0086 Digital Image Processing - 4 Credits
Assignment Set 2 (60 Marks)
Each Question carries 10 Marks (6 x 10 = 60 Marks)

1. Explain the process of formation of image in the human eye. Discuss how the human eye adapts to brightness.

The evolution of the eye has been a subject of significant study, as a distinctive example of a homologous organ present in a wide variety of taxa. Certain components of the eye, such as the visual pigments, appear to have a common ancestry; that is, they evolved once, before the animals radiated. However, complex, image-forming eyes evolved some 50 to 100 times,[1] using many of the same proteins and genetic toolkits in their construction.[2][3]

Complex eyes appear to have first evolved within a few million years, in the rapid burst of evolution known as the Cambrian explosion. There is no evidence of eyes before the Cambrian, but a wide range of diversity is evident in the Middle Cambrian Burgess Shale, and the slightly older Emu Bay Shale.[4] Eyes show a wide range of adaptations to meet the requirements of the organisms which bear them. Eyes vary in their acuity, the range of wavelengths they can detect, their sensitivity in low light levels, their ability to detect motion or resolve objects, and whether they can discriminate colours.

The complex structure of the eye has been used as evidence to support the theory that eyes were designed by a creator, as it has been said to be unlikely that they evolved via natural selection. In 1802, philosopher William Paley called it a miracle of "design". Charles Darwin himself wrote in his Origin of Species that the evolution of the eye by natural selection at first glance seemed "absurd in the highest possible degree". However, he went on to explain that despite the difficulty in imagining it, this was perfectly feasible:

...if numerous gradations from a perfect and complex eye to one very imperfect and simple, each grade being useful to its possessor, can be shown to exist; if further, the eye does vary ever so slightly, and the variations be inherited, which is certainly the case; and if any variation or modification in the organ be ever useful to an animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, can hardly be considered real.[5]

He suggested a gradation from "an optic nerve merely coated with pigment, and without any other mechanism" to "a moderately high stage of perfection", giving examples of extant intermediate grades of evolution.[5] Darwin's suggestions were soon shown to be correct, and current research is investigating the genetic mechanisms responsible for eye development and evolution.

Rate of evolution
The first fossils of eyes that have been found to date are from the lower Cambrian period (about 540 million years ago).[7] This period saw a burst of apparently rapid evolution, dubbed the "Cambrian explosion". One of the many hypotheses for "causes" of this diversification, the "Light Switch" theory of Andrew Parker, holds that the evolution of eyes initiated an arms race that led to a rapid spate of evolution.[8] Earlier than this, organisms may have had use for light sensitivity, but not for fast locomotion and navigation by vision.

Since the fossil record, particularly of the Early Cambrian, is so poor, it is difficult to estimate the rate of eye evolution. Simple modelling, invoking small mutations exposed to natural selection, demonstrates that a primitive optical sense organ based upon efficient photopigments could evolve into a complex human-like eye in approximately 400,000 years.[9][note 1]

One origin or many?


Whether one considers the eye to have evolved once or multiple times depends somewhat on the definition of an eye. Much of the genetic machinery employed in eye development is common to all eyed organisms, which may suggest that their ancestor utilized some form of light-sensitive machinery even if it lacked a dedicated optical organ. However, even photoreceptor cells may have evolved more than once from molecularly similar chemoreceptors, and photosensitive cells probably existed long before the Cambrian explosion.[10] Higher-level similarities, such as the use of the protein crystallin in the independently derived cephalopod and vertebrate lenses,[11] reflect the co-option of a protein from a more fundamental role to a new function within the eye.[12]

2. Describe digital image representation. Explain the following terms: i) Adjacency ii) Connectivity iii) Regions and Boundaries.

A digital image is a numeric representation (normally binary) of a two-dimensional image. Depending on whether the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images, also called bitmap images. Raster images have a finite set of digital values, called picture elements or pixels. The digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point. Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form.

Raster images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more. They can also be synthesized from arbitrary non-image data, such as mathematical functions or three-dimensional geometric models, the latter being a major sub-area of computer graphics. The field of digital image processing is the study of algorithms for their transformation.
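As a small illustration of the raster representation, the snippet below (Python with NumPy) builds a tiny grayscale image as a two-dimensional array of 8-bit pixel values and reads back one quantized brightness value.

```python
import numpy as np

# A tiny 3x4 grayscale raster image: 3 rows and 4 columns of 8-bit pixels.
img = np.array([[  0,  64, 128, 255],
                [ 32,  96, 160, 224],
                [ 16,  80, 144, 208]], dtype=np.uint8)

print(img.shape)   # (3, 4): fixed number of rows and columns
print(img[1, 2])   # 160: the quantized brightness stored at row 1, column 2
```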

Raster file formats


Most users come into contact with raster images through digital cameras, which use any of several image file formats. Some digital cameras give access to almost all the data captured by the camera, using a raw image format. The Universal Photographic Digital Imaging Guidelines (UPDIG) suggest these formats be used when possible, since raw files produce the best quality images. These file formats allow the photographer and the processing agent the greatest level of control and accuracy for output. Their use is inhibited by the prevalence of proprietary information (trade secrets) for some camera makers, but there have been initiatives such as OpenRAW to influence manufacturers to release these records publicly. An alternative may be Digital Negative (DNG), a proprietary Adobe product described as "the public, archival format for digital camera raw data".[1] Although this format is not yet universally accepted, support for the product is growing, and increasingly professional archivists and conservationists, working for respectable organizations, variously suggest or recommend DNG for archival purposes.[2][3][4][5][6][7][8][9][10]

Vector
Vector images result from mathematical geometry (vectors). In mathematical terms, a vector consists of a point that has both a direction and a length. Often, both raster and vector elements will be combined in one image; for example, in the case of a billboard with text (vector) and photographs (raster).

Image viewing
Image viewer software displays images. Web browsers can display standard internet image formats including GIF, JPEG, and PNG. Some can show the SVG format, which is a standard W3C format.

3. Explain with a diagram the digital image restoration model.

You are preserving an important part of your family history to carry the memory on for the next generation. If left unrestored, the negative or picture could end up unrestorable, and future generations won't have that vital visual link to their past. Don't leave it; act now. Make sure you get a second print and send a copy to a relative as a gift and a keepsake. If you are a genealogist or historian, or just researching your family tree, drop us an email or give us a call for fast, friendly repairs. Order your photo restoration today.

Digital imaging or digital image acquisition is the creation of digital images, typically from a physical scene. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images. The most usual method is by digital photography with a digital camera, but other methods are also employed. Digital imaging was developed in the 1960s and 1970s, largely to avoid the operational weaknesses of film cameras, for scientific and military missions including the KH-11 program. As digital technology became cheaper in later decades, it replaced the old film methods for many purposes.

Methods
A digital photograph may be created directly from a physical scene by a camera or similar device. Alternatively, a digital image may be obtained from another image in an analog medium, such as photographs, photographic film, or printed paper, by an image scanner or similar device. Many technical images, such as those acquired with tomographic equipment, side-scan sonar, or radio telescopes, are actually obtained by complex processing of non-image data. Weather radar maps as seen on television news are a commonplace example. The digitalization of analog real-world data is known as digitizing, and involves sampling (discretization) and quantization. Finally, a digital image can also be computed from a geometric model or mathematical formula; in this case the name image synthesis is more appropriate, and the process is more often known as rendering. Digital image authentication is an issue [1] for the providers and producers of digital images, such as health care organizations, law enforcement agencies and insurance companies. There are methods emerging in forensic photography to analyze a digital image and determine if it has been altered.
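Sampling and quantization can be sketched together. The toy example below (Python with NumPy; the digitize function and the synthetic "scene" are purely illustrative) samples a continuous function on a grid and then quantizes the samples to a fixed number of gray levels.

```python
import numpy as np

def digitize(scene, rows, cols, levels):
    """Digitize a continuous scene f(x, y): sample on a rows x cols grid, then
    quantize each sample to one of `levels` gray values. `scene` is any function
    of two coordinates in [0, 1], standing in for the analog input."""
    ys, xs = np.linspace(0, 1, rows), np.linspace(0, 1, cols)
    samples = np.array([[scene(x, y) for x in xs] for y in ys])  # sampling (discretization)
    rng = np.ptp(samples) or 1.0                                 # guard against a flat scene
    samples = (samples - samples.min()) / rng                    # normalize to [0, 1]
    return np.round(samples * (levels - 1)).astype(np.uint8)     # quantization

# Example: a smooth ramp-plus-wave "scene" digitized to a 64x64 image with 16 gray levels.
img = digitize(lambda x, y: x + 0.25 * np.sin(2 * np.pi * y), rows=64, cols=64, levels=16)
```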

4. Explain basic morphological operations on gray scale images.

While point and neighborhood operations are generally designed to alter the look or appearance of an image for visual considerations, morphological operations are used to understand the structure or form of an image. This usually means identifying objects or boundaries within an image. Morphological operations play a key role in applications such as machine vision and automatic object detection. There are three primary morphological functions: erosion, dilation, and hit-or-miss. Others are special cases of these primary operations or are cascaded applications of them. Morphological operations are usually performed on binary images where the pixel values are either 0 or 1. For simplicity, we will refer to pixels as 0 or 1, and will show a value of 0 as black and a value of 1 as white. While most morphological operations focus on binary images, some can also be applied to grayscale images.

It is important to introduce the concepts of segmentation and connectivity. Consider a binary image where the predominant field of white pixels is divided (or segmented) into two parts by a black line. In this image there are three segments: the top group of white pixels, the bottom group of white pixels, and the group of black pixels that form the dividing line. Another three-segment image would be one with an outer border of white pixels, the black pixels that form a square, and a group of white pixels within the square. We see that all pixels of a segment are directly adjacent to at least one other pixel of the same classification; they are all connected.

Most morphological functions operate on 3 x 3 pixel neighborhoods. The pixels in a neighborhood are identified in one of two ways, sometimes interchangeably. The pixel of interest lies at the center of the neighborhood and is labeled X. The surrounding pixels are referred to as either X0 through X7, or by their compass coordinates E, NE, N, NW, W, SW, S, and SE. A pixel is four-connected if at least one of its neighbors in positions X0, X2, X4, or X6 (E, N, W, or S) is the same value. A pixel is eight-connected if at least one of its eight neighbors is the same value. Under eight-connectivity, a set of pixels is said to be minimally connected if the loss of a single pixel causes the remaining pixels to lose connectivity.

Binary Erosion and Dilation

Erosion and dilation are related to convolution but are more for logical decision-making than numeric calculation. Like convolution, binary morphological operators such as erosion and dilation combine a local neighborhood of pixels with a pixel mask to achieve the result. Figure 6.3 shows this relationship. The output pixel, O, is set to either a hit (1) or a miss (0) based on the logical AND relationship. Binary erosion uses the following 3 x 3 mask:

1 1 1
1 1 1
1 1 1

This means that every pixel in the neighborhood must be 1 for the output pixel to be 1; otherwise, the pixel will become 0. No matter what values the neighboring pixels have, if the central pixel is 0, the output pixel is 0. A single 0 pixel anywhere within the neighborhood will cause the output pixel to become 0. Erosion can be used to eliminate unwanted white noise pixels from an otherwise black area. The only condition in which a white pixel will remain white in the output image is if all of its neighbors are white. The effect on a binary image is to diminish, or erode, the edges of a white area of pixels.
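A minimal erosion with the all-ones 3 x 3 mask, and dilation obtained from it by duality, might be sketched as follows (Python with NumPy; the image is assumed to be a 0/1 integer array, and borders are treated as background).

```python
import numpy as np

def erode3x3(binary):
    """Binary erosion with the all-ones 3x3 mask: an output pixel is 1 only if
    every pixel in its 3x3 neighborhood is 1 (borders are padded with 0)."""
    padded = np.pad(binary, 1, constant_values=0)
    out = np.ones_like(binary)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            # Logical AND across all nine shifted copies of the image.
            out &= padded[1 + dr: 1 + dr + binary.shape[0],
                          1 + dc: 1 + dc + binary.shape[1]]
    return out

def dilate3x3(binary):
    """Binary dilation: the output pixel is 1 if any pixel in the 3x3 neighborhood is 1."""
    return 1 - erode3x3(1 - binary)
```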

5. Describe several texture features of historical and practical importance.

Race is a classification system used to categorize humans into large and distinct populations or groups by heritable phenotypic characteristics, geographic ancestry, physical appearance, ethnicity and social status. In the early twentieth century the term was often used, in a taxonomic sense, to denote genetically differentiated human populations defined by phenotype.[1] This sense of "race" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine.[2] In addition, law enforcement utilizes race in profiling suspects. These uses of racial categories are frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting stereotypes.

Because in many societies racial groupings correspond closely with patterns of social stratification, race can be a significant variable for social scientists studying social inequality. As sociological factors, racial categories may in part reflect subjective attributions, self-identities, and social institutions.[3][4] Accordingly, the racial paradigms employed in different disciplines vary in their emphasis on biological reduction as contrasted with societal construction. While biologists sometimes use the concept of race to make distinctions among fuzzy sets of traits, others in the scientific community suggest that the idea of race is often used in a naive or simplistic way.[5][6] Among humans, race has no taxonomic significance; all living humans belong to the same hominid subspecies, Homo sapiens sapiens.[7][8]

Social conceptions and groupings of races vary over time, involving folk taxonomies[9] that define essential types of individuals based on perceived traits. Scientists consider biological essentialism obsolete,[10] and generally discourage racial explanations for collective differentiation in both physical and behavioral traits.[6][11] When people define and talk about a particular conception of race, they create a social reality through which social categorization is achieved.[12] In this sense, races are said to be social constructs.[13] These constructs develop within various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major social situations.[14] While race is understood to be a social construct by many, most scholars agree that race has real material effects in the lives of people through institutionalized practices of preference and discrimination.

6. Explain the Haar transform. Explain different methods for wavelet transform in two dimensions.

The Haar transform is the simplest of the wavelet transforms. This transform cross-multiplies a function against the Haar wavelet with various shifts and stretches, like the Fourier transform cross-multiplies a function against a sine wave with two phases and many stretches.[5] The Haar transform is derived from the Haar matrix. An example of a 4x4 Haar transformation matrix is shown below.
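For reference, one standard normalized 4 x 4 Haar transformation matrix, and its use on a length-4 signal, is sketched below (Python with NumPy; other texts use an unnormalized convention, so the constant factors may differ).

```python
import numpy as np

s = np.sqrt(2.0)
# Normalized 4x4 Haar transformation matrix: rows are orthonormal basis functions,
# from the coarsest average down to finer and finer localized differences.
H4 = 0.5 * np.array([[1,  1,  1,  1],
                     [1,  1, -1, -1],
                     [s, -s,  0,  0],
                     [0,  0,  s, -s]])

x = np.array([4.0, 6.0, 10.0, 12.0])
coeffs = H4 @ x            # forward Haar transform of a length-4 signal
restored = H4.T @ coeffs   # H4 is orthogonal, so its transpose inverts the transform
```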

The Haar transform can be thought of as a sampling process in which rows of the transformation matrix act as samples of finer and finer resolution.

Compare with the Walsh transform, which also takes only the values +1 and -1, but is non-localized.

In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal function basis. The Haar sequence is now recognised as the first known wavelet basis and is extensively used as a teaching example. The Haar sequence was proposed in 1909 by Alfréd Haar.[1] Haar used these functions to give an example of a countable orthonormal system for the space of square-integrable functions on the real line. The study of wavelets, and even the term "wavelet", did not come until much later. As a special case of the Daubechies wavelet, the Haar wavelet is also known as D2. The Haar wavelet is also the simplest possible wavelet. The technical disadvantage of the Haar wavelet is that it is not continuous, and therefore not differentiable. This property can, however, be an advantage for the analysis of signals with sudden transitions, such as monitoring of tool failure in machines.
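For two dimensions, one common method is the separable scheme: apply the 1-D Haar step along the rows and then along the columns, which yields an approximation subband and three detail subbands per level. The sketch below (Python with NumPy; even image dimensions assumed, and the function name is ours) performs one such level.

```python
import numpy as np

def haar_2d_level(image):
    """One level of the separable 2-D Haar transform: filter the rows, then the columns.
    Returns the approximation subband (ll) and three detail subbands (naming conventions vary).
    Assumes the image has even height and width."""
    f = image.astype(np.float64)
    s = np.sqrt(2.0)
    # 1-D Haar step along the rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (f[:, 0::2] + f[:, 1::2]) / s
    hi = (f[:, 0::2] - f[:, 1::2]) / s
    # Same step along the columns of each row output -> four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / s   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / s   # detail
    hl = (hi[0::2, :] + hi[1::2, :]) / s   # detail
    hh = (hi[0::2, :] - hi[1::2, :]) / s   # detail
    return ll, lh, hl, hh
```

Repeating the same step on the approximation subband ll gives the multi-level decomposition used in practice.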
