
ABSTRACT

The increasing use of biometrics in different environments presents new challenges. Most importantly, biometric data are irreplaceable. Therefore, storing biometric templates, which are unique to each user, entails significant security risks. We propose a geometric transformation for securing minutiae-based fingerprint templates. The proposed scheme employs a robust one-way transformation that maps the geometrical configuration of the minutiae points into a fixed-length code vector. This representation enables efficient alignment and reliable matching. Experiments are conducted by applying the proposed method to synthetically generated minutiae point sets. Preliminary results show that the proposed scheme provides a simple and effective solution to the template security problem of minutiae-based fingerprint systems.

BIOMETRICS:
Biometrics (or biometric authentication) refers to the identification of humans by their characteristics or traits. In computer science, biometrics is used as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. The two categories of biometric identifiers are physiological and behavioral characteristics. A biometric system may identify a person by voice, DNA, hand print, or behavior. Behavioral characteristics are related to the behavior of a person, including but not limited to typing rhythm, gait, and voice. Some researchers have coined the term behaviometrics to describe the latter class of biometrics. More traditional means of access control include token-based identification systems, such as a driver's license or passport, and knowledge-based identification systems, such as a password or personal identification number. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token- and knowledge-based methods. However, the collection of biometric identifiers raises privacy concerns about the ultimate use of this information.

WHY BIOMETRICS:
Securing personal privacy and deterring identity theft are national priorities. These goals are essential to our democracy and our economy, and inherently important to our citizens. Moreover, failure to achieve these goals is substantially inhibiting the growth of our most advanced, leading-edge industries, notably including e-commerce, that depend upon the integrity of network transactions. Establishing end-to-end trust among all parties to network transactions is the indispensable basis for success. A large percentage of the public is reluctant to engage in e-commerce or conduct other network transactions owing to a well-founded lack of confidence that the system will protect their privacy and prevent their identity from being stolen and misused. The misgivings of the public are reinforced by recent publicized cases of loss of personal privacy, fraudulent funds transfers, and outright theft and abuse of identity in network transactions. Biometrics, an emerging set of technologies, promises an effective solution. Biometrics accurately identifies or verifies individuals based upon each person's unique physical or behavioral characteristics. Biometrics works by unobtrusively matching patterns of live individuals in real time against enrolled records. Leading examples are biometric technologies that recognize and authenticate faces, hands, fingers, signatures, irises, voices, and fingerprints. Biometric data are separate and distinct from personal information. Biometric templates cannot be reverse-engineered to recreate personal information, and they cannot be stolen and used to access personal information. Precisely because of these inherent attributes, biometrics is an effective means to secure privacy and deter identity theft.

FINGERPRINT:
A fingerprint in its narrow sense is an impression left by the friction ridges of a human finger. In a wider use of the term, fingerprints are the traces of an impression from the friction ridges of any part of a human or other primate hand. A print from the foot can also leave an impression of friction ridges. A friction ridge is a raised portion of the epidermis on the fingers and toes (digits), the palm of the hand or the sole of the foot, consisting of one or more connected ridge units of friction ridge skin. These are sometimes known as "epidermal ridges", which are caused by the underlying interface between the dermal papillae of the dermis and the interpapillary pegs of the epidermis. These epidermal ridges serve to amplify vibrations triggered, for example, when fingertips brush across an uneven surface, better transmitting the signals to sensory nerves involved in fine texture perception. These ridges also assist in gripping rough surfaces, as well as smooth wet surfaces. Impressions of fingerprints may be left behind on a surface by the natural secretions of sweat from the eccrine glands that are present in friction ridge skin, or they may be made by ink or other substances transferred from the peaks of friction ridges on the skin to a relatively smooth surface such as a fingerprint card. Fingerprint records normally contain impressions from the pad on the last joint of fingers and thumbs, although fingerprint cards also typically record portions of lower joint areas of the fingers.

WHY FINGERPRINT:
With increasingly urgent need for reliable security, biometrics is being spotlighted as the authentication method for the next generation. Among numerous biometric technologies, fingerprint authentication has been in use for the longest time and bears more advantages than other biometric technologies do.

Fingerprint authentication is possibly the most sophisticated method of all biometric technologies and has been thoroughly verified through various applications. Fingerprint authentication has particularly proved its high efficiency and further enhanced the technology in criminal investigation for more than a century.

Even features such as a person's gait, face, or signature may change with the passage of time and may be fabricated or imitated. A fingerprint, however, is completely unique to an individual and stays unchanged over a lifetime. This exclusivity is a major reason why fingerprint authentication is considered more accurate and efficient than many other methods of authentication.

Also, a fingerprint may be captured and digitized by relatively compact and inexpensive devices, and only a small amount of storage is needed to hold a large database of templates. With these strengths, fingerprint authentication has long been a major part of the security market and continues to be more competitive than other methods in today's world.

WHY CANCELABLE BIOMETRICS:


Cancelable biometrics refers to the intentional and systematically repeatable distortion of biometric features in order to protect sensitive user-specific data. If a cancelable feature is compromised, the distortion characteristics are changed, and the same biometric is mapped to a new template, which is used subsequently. Cancelable biometrics is one of the major categories of biometric template protection, alongside biometric cryptosystems. The four objectives in designing a cancelable biometric scheme are as follows:

Diversity: The same cancelable template should not be usable across different applications; therefore, a large number of protected templates must be derivable from the same biometric feature.

Reusability/Revocability: Straightforward revocation and reissue in the event of compromise.

Non-invertibility: Non-invertibility of template computation to prevent recovery of original biometric data.

Performance: The formulation should not deteriorate the recognition performance.
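To make these objectives concrete, the sketch below illustrates one common style of cancelable transform: a key-dependent random projection applied to a fixed-length fingerprint feature vector. This is only an illustrative example under assumed inputs (a fixed-length feature vector, a user-specific integer key used as a random seed, and an assumed output length of 256); it is not the specific geometric transformation proposed in this work.

```matlab
% Illustrative cancelable transform: key-dependent random projection.
% featVec : fixed-length feature vector extracted from the fingerprint (e.g., 640 x 1)
% userKey : user/application-specific integer key; changing it revokes the template.
function protected = cancelableTransform(featVec, userKey)
    n = numel(featVec);
    m = 256;                       % length of the protected template (assumed)
    rng(userKey);                  % seed the generator with the user-specific key
    P = randn(m, n) / sqrt(m);     % random projection matrix derived from the key
    protected = P * featVec(:);    % many-to-one mapping: hard to invert without P
end
```

Revocation then amounts to issuing a new key: the same finger yields a different protected template, while matching is performed directly in the projected domain.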

BIRD'S EYE VIEW OF PROJECT:


If we talk about various forensic tests, the first thing that comes to mind is fingerprint impression testing. The term fingerprint impression testing is closely associated with forensic as well as investigation departments. Even in movies, whenever we come across a crime scene, we are usually shown a few techniques of fingerprint impression testing. The fingerprint impression test is carried out by the forensic department; it is a distinctive and confidential type of test used to find a clue about a crime and the criminal. Forensic science, or simply forensics, is a branch of science that attempts to answer various questions of interest to the legal system of a city or state. The forensic department deals with crimes as well as civil actions.

There are several tests and techniques that help to find the criminal who has committed a crime in a city or state. This becomes easier if the criminal's data are available in the record files of the forensic department. Fingerprint impression testing is one of the foremost forensic tests and is very helpful in identifying or tracing a criminal. The fingerprint impression test is also known as an identification test. The most interesting fact about fingerprint impression testing is that the fingerprints collected from the palm impression of the culprit are unique and have no resemblance to anyone else's fingerprints on earth. There are various devices and methods used for fingerprint impression testing.

2 TYPES OF BIOMETRICS:
Based on the above guidelines, several biometrics have been developed and are in use. This paper describes popularly used biometrics in terms of the characteristic measured, the devices used to collect the biometric, the features extracted, the algorithms used, and the areas of applicability.

2.1 Fingerprint biometric: Fingerprint identification is popular because of the ease of acquisition, the number of sources (ten fingers), and its long-standing acceptance by law enforcement agencies. Fingerprints form part of an individual's phenotype and are not determined by genetics, and hence qualify as a good biometric. A fingerprint appears as a series of ridges, with pores (sweat glands) and valleys between these ridges, as shown in Fig. 1. In a fingerprint, a minutia is a point where a ridge ends or splits. A typical finger has 30 to 60 minutiae points. Minutiae are the features extracted for the fingerprint biometric.

A typical minutia map

Four technologies are in use to capture fingerprint images. These are listed below: a) Optical Sensors: These sensors capture a visual image of the finger surface. The finger touches the surface of a prism, and LEDs provide a light source. The image is captured after total internal reflection in the prism by a Charge Coupled Device IC (CCD-IC) or a CMOS camera. Optical sensors are reliable and inexpensive. However, they are bulky and prone to surface dirt and dust, which affects the quality of the fingerprint collected.

b) Capacitive Sensors: These sensors scan the surface of the finger using dielectric measurements to distinguish ridges and valleys. The higher dielectric constant of the ridges results in higher capacitance than that of the valleys, which contain air. Capacitive sensors produce better image quality over wider operating conditions. However, they are expensive, consume more power, and do not work well with dry fingers.

c) Thermal Sensors: These sensors consist of a contiguous arrangement of heating elements and thermal sensors, and capture images based on differences in heat emission between the ridges and valleys. The heat map is converted into an optical image of the ridges, which are cooler due to the presence of sweat pores, and the valleys, which are warmer. Thermal sensors are compact and inexpensive, but they consume more power and are ineffective on warm days.

d) Radio Frequency Sensors: These sensors scan the sub-surface to get a true image of the finger. They use a reflected RF beam to create an image of the layer. RF sensors are not affected by dirt or other impurities, and have improved accuracy and reliability. They are also robust and small in size. In addition, it is very difficult to fake a finger with this sensor because it captures a subsurface image.

Sensor used             | Measures            | Advantages             | Drawbacks                                                | Special Feature
Optical sensor          | Visual image        | Reliable, inexpensive  | Affected by dust, dirt                                   | -
Capacitive sensor       | Dielectric constant | Better image quality   | Expensive, consumes more power, not good on dry fingers  | -
Thermal sensor          | Heat emission       | Compact, inexpensive   | Consumes more power                                      | Not effective on warm days
Radio frequency sensor  | Reflected RF beam   | High accuracy, compact | -                                                        | Scans subsurface, difficult to fake

Table 1: Comparison of various fingerprint sensors

2.1.2 Fingerprint identification is done using minutiae-based matching or pattern (image-based) matching. In minutiae-based matching, the location and orientation of minutiae points are used for matching. The advantage of this method is the small template size, the typical space required being less than 400 bytes per finger. Due to the small template size, matching is fast; however, the minutiae extraction process itself takes a long time. In image-based matching, the location, orientation, as well as a portion of the image around each minutia point are stored. Patches of the reference image are placed on the test image; each patch is shifted and rotated over the test image to find the best fit. Once all patches are aligned to their best spots, the locations are used to verify the relative distances between patches. Enough patches at the right places indicate a match.
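As a rough illustration of minutiae-based matching (not the specific matcher used in this project), the sketch below pairs minutiae from two already-aligned prints when both their positions and orientations agree within fixed tolerances; the variable names and thresholds are assumptions.

```matlab
% Illustrative minutiae pairing for two pre-aligned minutiae sets.
% A, B : N x 3 and M x 3 matrices, rows are [x, y, theta] (theta in degrees).
function score = matchMinutiae(A, B)
    distTol = 10;                   % spatial tolerance in pixels (assumed)
    angTol  = 15;                   % orientation tolerance in degrees (assumed)
    matched = 0;
    usedB   = false(size(B, 1), 1);
    for i = 1:size(A, 1)
        d  = sqrt(sum((B(:, 1:2) - A(i, 1:2)).^2, 2));     % distances to all B minutiae
        da = abs(mod(B(:, 3) - A(i, 3) + 180, 360) - 180);  % angular differences
        ok = find(d < distTol & da < angTol & ~usedB, 1);   % first unused compatible minutia
        if ~isempty(ok)
            matched   = matched + 1;
            usedB(ok) = true;
        end
    end
    score = matched / max(size(A, 1), size(B, 1));           % normalized match score
end
```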

Notable fingerprint biometric systems are: i) IAFIS (Integrated Automated Fingerprint Identification System), maintained by the FBI, and ii) US-VISIT, which confirms whether a person applying for entry to or exit from the USA is the same person as the one granted a visa by the Department of State.

2.2 Face Recognition: Face appearance is a biometric which is used every day by everyone as a primary means of recognizing other humans. Because of this naturalness it is more acceptable than other biometrics. Face image acquisition is done in the following ways -

a) Single image: This consists of digital photographs obtained using cameras or scanners.

b) Video Sequence: This is obtained from surveillance cameras. However, due to low spatial resolution, it is not very useful for face recognition.

c) 3D Images: This approach is based on skin/skull geometry and requires 3D images of the face instead of 2D images. Newer face recognition techniques such as stereo, structured light, and phase-based ranging are used for capturing 3D images.

Face recognition approaches in turn are divided into two categories -

a) Face appearance based: Here, a face image is transformed into what are known as eigenfaces. To generate a set of eigenfaces, a large set of digitized images of human faces, taken under the same lighting conditions, is normalized to line up the eyes and mouths. They are then resampled at the same pixel resolution. Eigenfaces can be extracted from the images by means of a mathematical tool called PCA (Principal Component Analysis).
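A minimal sketch of the eigenface idea, assuming the normalized, resampled face images are already available as columns of a matrix (all names here are illustrative):

```matlab
% Illustrative eigenface extraction via PCA.
% faces : (numPixels x numImages) matrix; each column is a vectorized, normalized face image.
function [eigenfaces, meanFace] = computeEigenfaces(faces, k)
    meanFace = mean(faces, 2);       % average face
    X = faces - meanFace;            % mean-centred data
    [U, ~, ~] = svd(X, 'econ');      % principal directions of the face space
    eigenfaces = U(:, 1:k);          % keep the k leading eigenfaces
end

% A new face is then described by its projection onto the eigenfaces:
% w = eigenfaces' * (newFace - meanFace);
```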

b) Face geometry based: This is based on face features. Features like the rim of the nose and cheeks of the subject are detected and their geometric relationships are used for recognition of the face as in fig 2.

Fig 2: Geometry based face recognition

The face biometric has been implemented by Queensland Transport, Australia, for driver's licences, with technical support from Unisys Corporation, Sydney.

2.3 Iris Recognition: The colored part of the eye between the pupil and the sclera is called the iris. Since the iris is a protected internal organ with a complex texture, and because it is unique from person to person and stable throughout life, it forms a very good biometric.

Fig 3: Iris image

Iris image acquisition is done in two ways:

a) Daugman system: In this system, an LED-based point light source is used along with a standard video camera. The system captures images with an iris diameter of between 100 and 200 pixels from a distance of 15 to 46 cm using a 330 mm lens. John Daugman at the University of Cambridge Computer Laboratory developed the Gabor-wavelet-based iris recognition algorithm which is the basis for almost all commercially available iris recognition systems.

b) Wildes system: This system images the iris with approximately 256 pixels across the diameter from 20 cm using an 80 mm lens, and is area based, i.e. it captures the iris as part of a larger image which also contains data derived from the immediately surrounding eye region.

Iris recognition based on Daugman's algorithms is used by the United Arab Emirates (UAE) Ministry of Interior for recognizing foreigners entering the UAE at 35 air, land, and sea ports. Each traveler is compared against about a million iris codes on a watch-list through internet links; the time required for an exhaustive search through the database is about 1 second. On an average day, about 12,000 arriving passengers are compared against the entire watch-list, i.e. about 12 billion comparisons per day. So far, about 7,500,000 exhaustive searches against that database have been done, making about 7 trillion iris comparisons altogether. A total of 73,180 matches have so far been found between persons seeking re-entry into the UAE and persona non grata on the watch list.

2.4 Hand Recognition: In hand recognition, the geometric features of the hand, such as the lengths of the fingers and the width of the hand, are measured using a charge-coupled device (CCD) camera and various reflectors and mirrors. Black-and-white pictures of i) the top of the hand and ii) the side of the hand are captured. Unique features in the structure of the hand, such as finger thickness, length and width, the distances between finger joints, and the hand's overall bone structure, are also recorded. To enroll, the user places his or her hand onto a platen three different times; three images are captured and averaged. The resulting image forms the basis for the enrolment template, which is then stored in the database of the hand geometry scanner. The enrolment phase can be completed within five seconds. In the verification phase, the user is prompted to place his or her hand only once on the platen. An image is captured and forms the basis for the verification template. The verification template is compared against the enrolment template, in the same fashion as in fingerprint recognition. The verification phase can be accomplished in just under one second. This technology is mostly used in physical access entry applications.

Fig 4: Hand geometry acquisition device sensing the top and side of hand

2.5 Additional Biometrics: These are less commonly used and hence are explained in brief.

a) Retina scan: The retina biometric analyzes the layer of blood vessels located at the back of the eye. This technique uses a low-intensity light source through an optical coupler and scans the unique patterns of the retina's blood vessels. Retina scanning is quite accurate and, like the iris scan, very distinctive to each individual; but unlike the iris scan, it requires the user to look into a receptacle and focus on a given point for the retina to be scanned. This is inconvenient for people who wear glasses and those concerned about close contact with the scanning device. This technique is more intrusive than other biometric techniques, although the technology itself is very accurate for use in identification, verification and authentication. Additionally, diseases such as cataracts can cause the retina to change, making this technique unreliable over a period of time.

b) Vein scan biometric: Vein scan biometric technology identifies a person from the patterns of the blood vessels in the back of the hand. The technology uses near-infrared light to detect vein vessel patterns. Vein patterns are distinctive even between twins and between a person's left and right hand. They are developed before birth, are highly stable, and change through one's life only in overall size. The technology is not intrusive, and works even if the hand is not clean. It is commercially available and has been implemented by Fujitsu of Japan.

c) Facial thermograph: Facial thermography detects heat patterns created by the branching of blood vessels and emitted from the skin. An infrared camera is used to capture the resulting images. The advantages of the facial thermograph over other biometric technologies are that it is not intrusive, no physical contact is required, every living person presents a usable image, and the image can be collected on the fly. Also, unlike visible-light systems, infrared systems work accurately even in dim light or total darkness. Although identification systems using facial thermograms were developed in 1997, the effort was suspended because of the cost of manufacturing the system.

d) Skin pattern: The exact composition of all the skin elements, such as skin layer thickness, undulations between layers, pigmentation, and collagen fibers, is distinctive to each person. Skin pattern recognition technology measures the characteristic spectrum of an individual's skin. A light sensor illuminates a small patch of skin with a beam of visible and near-infrared light. The light is measured with a spectroscope after being scattered by the skin. The measurements are analyzed, and a distinct optical pattern is extracted.

f) Gait recognition: Recognizing individuals by their distinctive walk involves capturing a sequence of images to derive and analyze motion characteristics. A person's gait can be hard to disguise because a person's musculature essentially limits the variation of motion, and measuring it requires no contact with the person. However, gait can be disguised if the individual, for example, is wearing loose-fitting clothes. Preliminary results have confirmed its potential, but further development is necessary before its performance, limitations, and advantages can be fully assessed.

g) Ear shape recognition: Ear shape recognition is still a research topic. It is based on the distinctive shape of each person's ears and the structure of the largely cartilaginous, projecting portion of the outer ear. Although ear biometrics appears to be promising, no commercial systems are available.

3. Performance Metrics: For the purpose of performance measurement, biometric systems are classified into verification systems, in which a biometric matcher makes a 1:1 match decision based on a score s, and identification systems, which make a 1:m match decision.
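As a hedged illustration of how such systems are evaluated (assuming arrays of genuine and impostor matching scores are available, with higher scores meaning better matches), the snippet below computes the Genuine Accept Rate and False Accept Rate at a given threshold:

```matlab
% Illustrative computation of verification error rates at a score threshold.
% genuineScores  : scores from comparisons of the same finger/person
% impostorScores : scores from comparisons of different fingers/persons
function [GAR, FAR] = errorRates(genuineScores, impostorScores, threshold)
    GAR = mean(genuineScores >= threshold);    % Genuine Accept Rate
    FAR = mean(impostorScores >= threshold);   % False Accept Rate
end

% Sweeping the threshold and plotting GAR against FAR yields the ROC curve
% used later to compare the hybrid and minutiae-based matchers.
```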

FINGERPRINT PATTERN MATCHING, FEATURE MATCHING:


A robust acoustic fingerprint algorithm must take into account the perceptual characteristics of the audio. If two files sound alike to the human ear, their acoustic fingerprints should match, even if their binary representations are quite different. Note that acoustic fingerprint matching may be a distance measure between feature vectors, and not a straight binary match. Therefore, acoustic fingerprints are not bitwise fingerprints which must be sensitive to any small changes in the data. Acoustic fingerprints are more analogous to human fingerprints where small variations that are insignificant to the features the fingerprint uses are tolerated. One can imagine the case of a smeared human fingerprint impression which can accurately be matched to another fingerprint sample in a reference database; acoustic fingerprints work in a similar way.

Traditionally, passwords (knowledge-based security) and badges (token-based security) have been used to restrict access to secure systems. However, security can be easily breached in these systems when a password is divulged to an unauthorized user or a badge is stolen by an impostor. The emergence of biometrics has addressed the problems that plague traditional verification methods. Biometrics refers to the automatic identification (or verification) of an individual (or a claimed identity) by using certain physiological or behavioral traits associated with the person (e.g., fingerprints, hand geometry, iris, retina, face, hand vein, facial thermograms, signature, voiceprint). Biometric indicators have an edge over traditional security methods in that these attributes cannot be easily stolen or shared. Among all the biometric indicators, fingerprints have one of the highest levels of reliability [2, 3] and have been extensively used by forensic experts in criminal investigations. Traditionally, fingerprint patterns have been extracted by creating an inked impression of the fingertip on paper. Now compact solid-state sensors provide digital images of these patterns. These sensors can be easily incorporated into a mouse, keyboard or cellular phone, making this a very attractive mode of identification. Fingerprint systems are being increasingly incorporated in a wide range of civilian and commercial applications for user-authentication purposes.

Fig. 1. Fingerprint images acquired using the solid-state Veridicom sensor (a, b) and the optical Digital Biometrics sensor (c). The detected minutiae points have been marked in the fingerprint images (17 in (a), 21 in (b), 39 in (c)).

The solid-state sensors provide only a small contact area (about 0.6" × 0.6") for the fingertip and, therefore, sample only a limited portion of the fingerprint pattern (300 × 300 pixels at 500 dpi). An optical sensor, on the other hand, has a contact area of 1" × 1", resulting in images of size 480 × 508 pixels at 500 dpi. Hence, the number of minutiae points that can be extracted from a fingerprint sample acquired using a solid-state sensor is smaller than that acquired using an optical sensor (see Figure 1). Further, multiple impressions of the same finger, acquired at different instances using a solid-state sensor, may overlap only over a small region due to the rotation and translation of subsequent fingerprints (Figures 1(a) and (b)). Minutiae-based matching schemes will not perform well in such situations due to the lack of a sufficient number of common minutiae points between the two impressions.

Fig. 2. (a) Circular tessellation (80 sectors) about a core point. (b) Rectangular tessellation (81 cells) of a Veridicom image.

Since the core point in Figure 2(b) is located at the lower right corner of the image, we propose to use a rectangular tessellation. We describe a hybrid approach to fingerprint matching that combines a minutiae-based representation of the fingerprint with a Gabor-filter (texture-based) representation for matching purposes. The texture-based representation of the fingerprint is a modification of the method described in [5]. The proposed algorithm first aligns the two fingerprints using the minutiae points extracted from both images, and then uses texture information to perform detailed matching. As a result, more information than just the minutiae points is used to match the fingerprints. The resulting matching score is combined with that obtained using the minutiae-based matching algorithm. Verification results suggest that the proposed hybrid approach is better suited for images acquired using compact solid-state sensors.

2. BACKGROUND
A fingerprint can be viewed as an oriented texture pattern. Jain et al. [5] show that, for sufficiently complex oriented textures such as fingerprints, invariant texture representations can be extracted by combining both global and local discriminatory information in the texture. Given a fingerprint image, they demonstrate that a compact and reliable translation- and rotation-invariant representation can be built based entirely on the inherent properties of the underlying fingerprint texture. They further illustrate that the representation thus derived is useful for robust discrimination of fingerprints. The above scheme for generic representation of oriented texture relies on extracting a core point in the fingerprint. A circular region around the core point is located and tessellated into sectors (or cells) as shown in Figure 2(a). The pixel intensities in each sector are normalized to a constant mean and variance, and filtered using a bank of Gabor filters to produce a set of filtered images. The grayscale variance within a sector quantifies the underlying ridge structure and is used as a feature. A feature vector (640 bytes in length), termed a FingerCode, is the collection of all the features computed from all the sectors in every filtered image. The FingerCode captures the local information, and the ordered enumeration of the tessellation captures the invariant global relationships among the local patterns. The fingerprint matching algorithm is based simply on the Euclidean distance between two corresponding FingerCodes and hence is extremely fast and scalable.
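To make the circular tessellation concrete, the sketch below (an assumption-laden illustration, not the authors' code) assigns a pixel to one of the sectors defined by concentric bands and angular slices around a detected core point:

```matlab
% Illustrative sector indexing for a circular tessellation around a core point.
% (xc, yc)  : core point coordinates
% nBands    : number of concentric bands (e.g., 5)
% nAngles   : number of angular slices per band (e.g., 16)  -> 80 sectors in total
% bandWidth : radial width of each band in pixels (assumed, e.g., 20)
function s = sectorIndex(x, y, xc, yc, nBands, nAngles, bandWidth)
    r     = sqrt((x - xc)^2 + (y - yc)^2);
    theta = mod(atan2(y - yc, x - xc), 2*pi);
    band  = floor(r / bandWidth);                  % which concentric band
    if band >= nBands
        s = -1;                                    % outside the tessellated region
        return;
    end
    slice = floor(theta / (2*pi / nAngles));       % which angular slice
    s = band * nAngles + slice + 1;                % 1-based sector index
end
```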

Fig. 3. Aligning the input image with the template image. The red thinned image is the template, and the blue thinned image is the input. (a) Alignment of two impressions of the same finger (the non-overlapping regions appear as white on black); (b) Alignment of two impressions of different fingers.

We propose the following improvements in order to adapt this technique for matching images captured by solid-state sensors:

(i) Estimate the translation and rotation parameters needed to align the input image with the template using their minutiae points.

(ii) Use the foreground segmentation algorithm described in [6] to segment the input and template images.

(iii) Define a rectangular tessellation on the two aligned images. Extract texture features from each rectangular cell (of both images) using Gabor filters.

(iv) Match the features extracted from all the overlapping foreground cells and weight the matching distance by the amount of overlap.

(v) Combine the confidence scores of the filter-based matcher with the minutiae-based matcher to obtain improved matching performance.

The purpose of using rectangular cells (as opposed to circular sectors) is two-fold: (a) Due to the reduced contact area of the sensor, it may not be possible to detect a core point about which the image can be circularly tessellated. Moreover, even if a core point is detected, it may lie at the image boundary, thereby providing very few valid tessellated sectors. (b) The reduced size of the image limits the amount of non-linear deformation of the image. Thus, every region in the image is given equal importance while extracting features. This is achieved by having equal-sized cells in a rectangular mesh (Figure 2(b)).

3. THE HYBRID MATCHING APPROACH

The matching technique described here is referred to as the hybrid technique because it combines the minutiae information available in a fingerprint with the underlying texture information in local regions to perform the matching.

3.1. Image Alignment


Minutiae points from both the input and template images are extracted using the algorithm described in [6]. The algorithm provides the following two outputs: (a) a set of minutiae points, each characterized by its spatial position and orientation in the fingerprint image; (b) local ridge information in the vicinity of each minutiae point. The two sets of minutiae points are then matched using a point matching algorithm. The algorithm first selects a reference minutiae pair (one from each image) and then determines the number of corresponding minutiae pairs using the remaining set of points. The reference pair that results in the maximum number of corresponding pairs determines the best alignment. An exhaustive evaluation of all point correspondences is avoided due to the availability of local ridge information at every minutiae point. Once the minutiae points are aligned by this method, the rotation and translation parameters are computed. The estimated rotation parameter is the average of the individual rotation values of all corresponding minutiae pairs. The translation parameters are computed using the spatial coordinates of the reference minutiae pair that resulted in the best alignment. The results of aligning two impressions of the same finger and two impressions of different fingers are shown in Figure 3. For the purpose of visualization, the thinned ridge map of each impression has been shown.

Fig. 4. Masking out background regions: (a) Template image, (b) Input image, (c) Input image after translation and rotation, (d) Masked input image.
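A minimal sketch of this alignment step, assuming the corresponding minutiae pairs and the chosen reference pair are already known, and that rotation is taken about the image origin (all variable names are illustrative):

```matlab
% Illustrative estimation of rotation and translation from corresponding minutiae.
% inputPairs, templPairs : K x 3 matrices of corresponding minutiae [x, y, theta]
% refIdx                 : index of the reference pair that gave the best alignment
function [dTheta, tx, ty] = estimateAlignment(inputPairs, templPairs, refIdx)
    % Rotation: average orientation difference over all corresponding pairs
    diffs  = templPairs(:, 3) - inputPairs(:, 3);
    diffs  = mod(diffs + 180, 360) - 180;         % wrap to (-180, 180]
    dTheta = mean(diffs);
    % Translation: computed from the reference pair after rotating the input minutia
    c = cosd(dTheta);  s = sind(dTheta);
    p  = [c -s; s c] * inputPairs(refIdx, 1:2)';  % rotated reference minutia
    tx = templPairs(refIdx, 1) - p(1);
    ty = templPairs(refIdx, 2) - p(2);
end
```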

3.2. Image Tessellation


Background regions of the input fingerprint image are not used in the feature extraction and matching stages of the algorithm (Figure 4). The input and template images are normalized by constructing equal-sized non-overlapping windows over them and normalizing the pixel intensities within each window to a constant mean and variance. Each normalized image is tessellated into equal-sized non-overlapping rectangular cells of predefined dimensions (30 × 30 pixels). The cell dimensions were chosen after observing that two neighboring ridges span approximately 30 pixels. For a 300 × 300 image, this results in 81 tessellated cells.
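The per-window normalization can be sketched as follows (a hedged illustration; the target mean and variance values in the usage example are assumed, not taken from the paper):

```matlab
% Illustrative normalization of a window to a constant mean and variance.
% w : a grayscale image window (e.g., a 30 x 30 block of pixel intensities)
function wn = normalizeWindow(w, targetMean, targetVar)
    w = double(w);
    m = mean(w(:));
    v = var(w(:));
    if v == 0
        wn = targetMean * ones(size(w));               % flat window: map to the target mean
    else
        wn = targetMean + (w - m) * sqrt(targetVar / v);
    end
end

% Example with assumed target values: wn = normalizeWindow(w, 100, 100);
```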

3.3. Feature Extraction

Fig. 5. Result of applying Gabor filters to Fig. 4(d). Filtered images for orientations 0°, 22.5°, 45°, and 67.5° are shown.

A bank of 8 Gabor filters is applied to each tessellated cell. All 8 Gabor filters used for feature extraction have the same frequency, 0.1 pixel⁻¹, but different orientations (0° to 157.5° in steps of 22.5°). This frequency is chosen based on the average inter-ridge distance in the fingerprints (which is approximately 10 pixels). This filtering results in a set of 8 filtered images for each cell. Four of the filtered images are shown in Figure 5. The absolute average deviation of intensity in each filtered cell is treated as a feature value (Figure 6). Thus there are 8 feature values for each cell in the tessellation. The feature values from all the cells are concatenated to form a 648-dimensional (81 × 8) feature vector. Feature values that reside in the masked regions of the input image are not used in the matching stage of the process, and are marked as missing values in the feature vector.
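As a hedged sketch of this feature extraction step (assuming MATLAB's Image Processing Toolbox is available; the gabor/imgaborfilt calls, the use of the magnitude response, and the parameter choices are illustrative rather than the exact filters used in the paper):

```matlab
% Illustrative Gabor-based feature extraction for one aligned, normalized image.
% Requires the Image Processing Toolbox (gabor / imgaborfilt); parameters are assumed.
% Assumes the image dimensions are multiples of cellSize.
function features = gaborCellFeatures(img, cellSize)
    orientations = 0:22.5:157.5;                 % 8 orientations
    bank = gabor(10, orientations);              % wavelength 10 px ~ frequency 0.1 px^-1
    resp = imgaborfilt(double(img), bank);       % one filtered image per orientation
    nCells = (size(img, 1) / cellSize) * (size(img, 2) / cellSize);
    features = zeros(nCells * numel(bank), 1);
    k = 1;
    for f = 1:numel(bank)
        for r = 1:cellSize:size(img, 1)
            for c = 1:cellSize:size(img, 2)
                cellResp = resp(r:r+cellSize-1, c:c+cellSize-1, f);
                % absolute average deviation of intensities within the cell
                features(k) = mean(abs(cellResp(:) - mean(cellResp(:))));
                k = k + 1;
            end
        end
    end
end
```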

3.4. Matching
Matching an input image with a stored template involves computing the sum of the squared differences between the two feature vectors after discarding missing values. This distance is normalized by the number of valid feature values used to compute the distance. The matching score is combined with that obtained from the minutiae-based method, using the sum rule of combination. If the matching score is less than a predefined threshold, the input image is said to have successfully matched with the template.
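A minimal sketch of this matching step, assuming missing (masked) feature values are encoded as NaN and that the minutiae matcher's score has already been normalized to a comparable range (both the encoding and the distance-to-similarity mapping are assumptions):

```matlab
% Illustrative hybrid matching score.
% fIn, fTpl     : feature vectors of the input and template (NaN marks missing values)
% minutiaeScore : score from the minutiae-based matcher, assumed normalized to [0, 1]
function combined = hybridScore(fIn, fTpl, minutiaeScore)
    valid = ~isnan(fIn) & ~isnan(fTpl);                    % keep only overlapping cells
    d = sum((fIn(valid) - fTpl(valid)).^2) / nnz(valid);   % normalized squared distance
    textureScore = 1 / (1 + d);                            % map distance to a similarity (assumed)
    combined = (textureScore + minutiaeScore) / 2;         % sum rule of combination
end
```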

4. EXPERIMENTAL RESULTS

Our database consists of fingerprint impressions obtained from 160 users using the Veridicom sensor. Each user was asked to provide 4 different impressions of each of 4 different fingers: the left index finger, the left middle finger, the right index finger and the right middle finger. A set of 2,560 (160 × 4 × 4) images was collected. An automatic quality checker was used to reject poor quality images.

Fig. 6. Feature values derived from the filtered images of Fig. 5. For purposes of visualization, the feature values have been scaled to the 0-255 range.

The performance of a biometric system can be shown as a Receiver Operating Characteristic (ROC) curve that plots the Genuine Accept Rate against the False Accept Rate (FAR) at different thresholds on the matching score. Figure 7 shows the performance of the hybrid approach presented here. We compare this performance with a minutiae-based approach [6] that does not utilize texture information for representing the fingerprint. As can be seen in the graph, the hybrid approach outperforms the minutiae-based approach over a wide range of FAR values. For example, at 1% FAR, the hybrid matcher gives a Genuine Accept Rate of 92% while the minutiae-based matcher gives a Genuine Accept Rate of 72%. The computational requirement of the hybrid matcher is dictated by the convolution operations associated with the eight Gabor filters. The entire matching algorithm, which includes feature extraction from the input image and the subsequent matching process, takes around 8 seconds of CPU time on an Ultra 10 SPARC machine. However, it is possible to enhance the speed of this algorithm by implementing the convolution operation on a dedicated DSP chip.

FINGERPRINT RIDGES AND FURROWS:


A fingerprint is made up of a number of ridges and valleys on the surface of the finger. Ridges are the upper skin layer segments of the finger and valleys are the lower segments. The ridges form so-called minutiae points: ridge endings (where a ridge ends) and ridge bifurcations (where a ridge splits in two). Many types of minutiae exist, including dots (very small ridges), islands (ridges slightly longer than dots, occupying a middle space between two temporarily divergent ridges), ponds or lakes (empty spaces between two temporarily divergent ridges), spurs (a notch protruding from a ridge), bridges (small ridges joining two longer adjacent ridges), and crossovers (two ridges which cross each other). The uniqueness of a fingerprint can be determined by the pattern of ridges and furrows as well as by the minutiae points. There are five basic fingerprint patterns: arch, tented arch, left loop, right loop and whorl. Loops make up 60% of all fingerprints, whorls account for 30%, and arches for 10%. Fingerprints are usually considered to be unique, with no two fingers having the exact same dermal ridge characteristics. There are two main algorithm families used to recognize fingerprints:

Minutia matching compares specific details within the fingerprint ridges. At registration (also called enrollment), the minutia points are located, together with their relative positions to each other and their directions. At the matching stage, the fingerprint image is processed to extract its minutia points, which are then compared with the registered template.

Pattern matching compares the overall characteristics of the fingerprints, not only individual points. Fingerprint characteristics can include sub-areas of certain interest including ridge thickness, curvature, or density. During enrollment, small sections of the fingerprint and their relative distances are extracted from the fingerprint. Areas of interest are the area around a minutia point, areas with low curvature radius, and areas with unusual combinations of ridges.

Track tracing
Getting the locations of the pixels that build each track is very simple; they can be extracted with a single "find" instruction. The real problem lies in rearranging these pixels to form a sequential, continuous track and representing it with two vectors of x and y coordinates measured from the estimated center. Many algorithms can be used to solve this rearrangement problem. The algorithm used here depends on the following steps:

- Using the MATLAB function "find", the track pixels are extracted and their locations are arranged column by column from left to right, and within each column from top to bottom.

- A starting point for the track is chosen; the easiest method is to start with the first point in the previously extracted vectors.

- The next point in the track is the nearest point to the present one. A distance vector is computed whose values indicate the distance between the current point and all other points in the track; the index of the minimum gives the index, within the track vector, of the next point. This pixel may have its minimum distance with a previously visited point; to overcome this problem, the indices of all points already taken are stacked in a vector, which is used to exclude those pixels. Note that more than one point may share the same minimum distance (a bifurcation); this is resolved by choosing the point with the minimum row index, and when this branch ends, the algorithm continues until the whole track is finished.

- The last step in track tracing is to reference these x and y coordinates to an estimated center point. The center point (x, y) is the mean of x and y over all tracks; all x and y values are then expressed relative to this center by subtracting the center coordinates.
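A hedged MATLAB sketch of this rearrangement step is given below, assuming the pixels of the binary image `bw` belong to a single track; the nearest-neighbour search, the exclusion of visited points, and the bifurcation rule follow the description above, but the exact implementation details are assumptions:

```matlab
% Illustrative track tracing: order the pixels of one track by nearest-neighbour search
% and reference them to the track centre.
function [xs, ys] = traceTrack(bw)
    [rows, cols] = find(bw);            % pixel locations, column by column, top to bottom
    n     = numel(rows);
    order = zeros(n, 1);
    taken = false(n, 1);
    order(1) = 1;  taken(1) = true;     % start from the first extracted point
    for k = 2:n
        cur = order(k - 1);
        d   = (rows - rows(cur)).^2 + (cols - cols(cur)).^2;
        d(taken) = inf;                  % exclude points already visited
        dmin  = min(d);                  % distance to the nearest unvisited pixel
        cands = find(d == dmin);         % bifurcation: several pixels at the same distance
        [~, j] = min(rows(cands));       % choose the one with the minimum row index
        idx = cands(j);
        order(k) = idx;  taken(idx) = true;
    end
    xs = cols(order);  ys = rows(order);
    xs = xs - mean(xs);                  % reference coordinates to the estimated centre
    ys = ys - mean(ys);
end
```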

To compare two images based on their track locations, some features must be extracted from the track locations. These features should make it possible to identify corresponding tracks and to measure the difference between each pair. The features used concern the radius of the track, which is first computed by the ordinary equation r = √(x² + y²). The features are then taken as the mean and standard deviation of each node of a two-level wavelet packet decomposition using the db2 wavelet. The mean of the approximation coefficients is the estimated radius of the contour, which can be used to find corresponding tracks; the other features express the variation of the track about that radius, which is a measure of similarity within the track itself.
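A sketch of this feature computation, assuming the Wavelet Toolbox is available (wpdec, wpcoef); the arrangement of the features in the output vector is an assumption:

```matlab
% Illustrative track features: mean and standard deviation of each node of a
% two-level db2 wavelet packet decomposition of the track radius signal.
function feats = trackFeatures(xs, ys)
    r = sqrt(xs.^2 + ys.^2);            % radius of every track point from the centre
    T = wpdec(r, 2, 'db2');             % two-level wavelet packet decomposition
    feats = [];
    for node = 0:3                      % the four terminal nodes at depth 2
        c = wpcoef(T, [2 node]);
        feats = [feats, mean(c), std(c)];   %#ok<AGROW>
    end
    % feats(1) (mean of the approximation node) estimates the track radius and is
    % used to pair corresponding tracks; the remaining values measure variation.
end
```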

COMPARING IMAGES:
Comparing an input image with a database of images and finding the corresponding image involves several steps. Comparing the unknown image with an image in the database includes finding the best corresponding tracks and then summarizing the differences between each pair into one numeric value that determines the closeness of the two images; the closest image is the one having the minimum of that value. When comparing the unknown image with an image in the database, the feature responsible for finding the best matching track is the average mean, i.e. the mean of the approximation coefficients of the decomposition. So the first step is to compare the mean of each track with the means of the reference image's tracks, thus assigning to each track of the unknown image the number of the track it will be compared with. Next, the difference between each track pair is computed and the results over all track pairs are summarized; this can be considered a measure of the closeness of the image pair. This value is recomputed between the unknown image and each image in the database, and the image that gives the minimum closeness value is taken as the result.
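A hedged sketch of this database comparison, assuming every image is represented by a matrix of per-track feature vectors produced as above, with the mean radius in the first column (the pairing rule and the use of a simple absolute-difference sum are assumptions):

```matlab
% Illustrative comparison of an unknown image against a database of feature sets.
% unknown  : (nTracks x nFeatures) matrix of track features of the unknown image
% database : cell array; database{i} is the feature matrix of the i-th enrolled image
function bestIdx = findClosestImage(unknown, database)
    closeness = zeros(numel(database), 1);
    for i = 1:numel(database)
        ref   = database{i};
        total = 0;
        for t = 1:size(unknown, 1)
            % pair each unknown track with the reference track of closest mean radius
            [~, j] = min(abs(ref(:, 1) - unknown(t, 1)));
            total  = total + sum(abs(unknown(t, :) - ref(j, :)));
        end
        closeness(i) = total;            % smaller value means a closer image pair
    end
    [~, bestIdx] = min(closeness);       % the database image with minimum closeness
end
```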

MATLAB: MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems. Variables are defined using the assignment operator, =. MATLAB is a weakly, dynamically typed programming language. It is weakly typed because types are implicitly converted, and dynamically typed because variables can be assigned without declaring their type (except if they are to be treated as symbolic objects) and their type can change. Values can come from constants, from computations involving the values of other variables, or from the output of a function. As suggested by its name (a contraction of "Matrix Laboratory"), MATLAB can create and manipulate arrays of 1 (vectors), 2 (matrices), or more dimensions. In the MATLAB vernacular, a vector refers to a one-dimensional (1×N or N×1) matrix, commonly referred to as an array in other programming languages. A matrix generally refers to a two-dimensional array, i.e. an m×n array where m and n are greater than 1. Arrays with more than two dimensions are referred to as multidimensional arrays. Arrays are a fundamental type, and many standard functions natively support array operations, allowing work on arrays without explicit loops. Therefore the MATLAB language is also an example of an array programming language.
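For example, a short illustrative snippet of such vectorized array operations:

```matlab
% Vectorized array operations: no explicit loops are needed.
v = 1:5;                 % 1x5 row vector
A = [1 2; 3 4];          % 2x2 matrix
r = sqrt(v.^2 + 1);      % element-wise operations on the whole vector at once
B = A * A';              % matrix product with the transpose
```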
