
ARTICLE IN PRESS

Neurocomputing 70 (2007) 1582–1586


www.elsevier.com/locate/neucom

Letters

Face and palmprint feature level fusion for single sample biometrics recognition

Yong-Fang Yao a,*, Xiao-Yuan Jing b, Hau-San Wong c

a Nanjing University of Posts & Telecommunications, Nanjing 210003, China
b Shenzhen Graduate School, Harbin Institute of Technology, Xili Town, Shenzhen 518055, China
c Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong

Received 16 May 2006; received in revised form 13 July 2006; accepted 25 August 2006
Communicated by D. Wang
Available online 16 November 2006

Abstract

In the application of biometrics authentication (BA) technologies, biometric data usually show three characteristics: large numbers of individuals, small sample size, and high dimensionality. One of the major research difficulties of BA is the single sample biometrics recognition problem, which is frequently encountered in real-world applications and may lead to poor recognition results. To address this problem, we present a novel approach based on feature level biometrics fusion. We combine two kinds of biometrics: the face feature, a representative contactless biometric, and the palmprint feature, a typical contact biometric. We extract the discriminant feature using Gabor-based image preprocessing and principal component analysis (PCA) techniques, and then design a distance-based separability weighting strategy to conduct feature level fusion. Using a large face database and a large palmprint database as the test data, the experimental results show that the presented approach significantly improves recognition performance on the single sample biometrics problem, and that face and palmprint biometrics are strongly complementary.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Single sample biometrics recognition; Face and palmprint biometrics; Feature level fusion; Gabor-based image preprocessing; PCA; Feature weighting strategy; Biometrics supplement

1. Introduction

In the application of biometrics authentication (BA) technologies, biometric data usually show three characteristics [5]: large numbers of individuals, small sample size, and high dimensionality. One of the major research difficulties of BA is the single sample biometrics recognition problem, an extreme case of small sample size that is often encountered in real-world applications and can lead to poor recognition results. Recently, a few researchers have begun to address this issue in the field of face recognition. So far, most of the proposed methods are based on the principal component analysis (PCA) technique [4]. However, it may be difficult to achieve good recognition performance on large-scale biometric databases using these methods.

How can this problem be solved? We consider the biometrics fusion technique a possible solution; it generally includes two steps. The first step is to select appropriate biometrics whose complementary properties benefit recognition. In this paper, we combine two kinds of biometrics: the face feature, a representative of contactless biometrics (which include face, gait, ear, etc.), and the palmprint feature, a typical contact biometric (contact biometrics include fingerprint, iris, palmprint, etc.). We will demonstrate that these two biometrics are strongly complementary.

The second step is to design an appropriate fusion method. Roughly, there are three fusion levels: pixel level, feature level, and decision (or classifier) level. Much work in biometrics fusion has been done at the highest of these, the decision level [1]. For the single sample biometrics

*Corresponding author. Tel.: +86 25 68838397; fax: +86 25 68838397.
E-mail address: yongfang_yao@yahoo.com (Y.-F. Yao).

0925-2312/$ - see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2006.08.009

recognition problem, we need to acquire as much helpful image classification information as possible. In this paper, we therefore focus on a lower level of fusion, the feature level. We extract the discriminant feature using Gabor-based image preprocessing and PCA techniques, and then design a distance-based separability weighting strategy to conduct the feature level fusion.

2. Discriminant Gabor feature extraction

For the single sample problem, where there is only one training sample per class, the within-class scatter matrix Sw of the sample set is a zero matrix. Hence, the linear discriminant analysis (LDA) transform [5] degenerates to the PCA transform, and the discriminant information can be completely acquired from the PCA transform.

As an image analysis tool, the Gabor transform is suitable for analyzing gradually changing data such as face, iris and palmprint images. We use the circular Gabor filter, which has the following general form [2]:

G(x, y; θ, u, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) · exp{2πi(ux cos θ + uy sin θ)},   (1)

where σ is set as {2, 4, 8, 16}, u = 1/σ, and θ is set as {0, 1, 2, 3, 4, 5, 6, 7} × (π/8). So the Gabor transform used in this paper contains four scales and eight orientations. Fig. 1 displays the face demo images of the Gabor transform: (a) an original face image; (b) the Gabor filtering results (expressed by the magnitude values).

We combine the Gabor and PCA transforms to extract the discriminant feature as follows: (i) Let Xface and Xpalm represent the face and palmprint image sample sets, respectively. We preprocess each sample in Xface and Xpalm using the Gabor transform and obtain 32 (= 4 × 8) transformed images (as displayed in Fig. 1). To reduce the computational cost, we downsample each transformed image by a ratio of 4 and represent the Gabor features in feature vector form. We thus obtain the transformed sample sets Xgaborface and Xgaborpalm. (ii) Use the PCA transform to extract discriminant features from Xgaborface and Xgaborpalm. We get the corresponding face and palmprint discriminant feature sets Yface and Ypalm.

Fig. 1. Face demo images of the Gabor transform.

3. Feature level fusion

We first perform feature vector normalization. Assume that yface represents one sample of Yface; we normalize it as ynorface = (yface − μface)/σface, where μface and σface denote the mean and variance of the training sample set of Yface, and ynorface is a normalized face feature vector. Similarly, we get ynorpalm, a normalized palmprint feature vector of Ypalm. After normalization, ynorface and ynorpalm have the same mean values (equal to 0) and the same variances (equal to 1), so we do not compute their weights from the feature vector variances. Instead, we connect the design of the weighting strategy with the classification method that follows it, the nearest neighbor (NN) classifier. The NN classifier selects the candidate class with the minimum distance as the class that a testing sample belongs to. Assume there are M testing samples in Yface and Ypalm. For the jth (j = 1, 2, …, M) testing sample of Yface, we obtain a distance vector d = [d1, d2, …, dc] using the NN classifier, where c is the class number and di (i = 1, 2, …, c) is ranked in ascending order (that is, d1 < d2 < … < dc). Let d̄ denote the mean value of d. We use the ratio of d̄ and d1 to express the distance-based separability value sface of the face features, that is,

sface = d̄ / d1.   (2)

Similarly, for the corresponding testing sample of Ypalm, we get the separability value spalm of the palmprint features.

Next, let wj denote the ratio of the separability values of the palmprint and face feature vectors, that is, wj = spalm/sface. We compute the weighting values [w1, w2, …, wM] for all testing samples and get the average weighting value w̄ = Σ_{j=1}^{M} wj / M. We assign the weight w̄ to all palmprint feature vectors and the weight 1 to all face feature vectors; w̄ reflects the ratio of the total separability of the palmprint and face features. The variance of the weights, w_var, is expressed by

w_var = var(w1, w2, …, wM),   (3)

where var(·) represents the vector variance. Then, we serially combine a face feature vector ynorface and its corresponding palmprint feature vector ynorpalm as

yfuse = [ynorface, w̄ · ynorpalm].   (4)

We thus get a fused sample set Yfuse, which we again classify with the NN classifier. The single sample biometrics recognition algorithm is carried out as follows:

(i) Perform the Gabor transform on each sample in Xface and Xpalm. Get Xgaborface and Xgaborpalm.
(ii) Perform the PCA transform to extract discriminant features from Xgaborface and Xgaborpalm. Get Yface and Ypalm.
(iii) Normalize Yface and Ypalm, and compute the weight value w̄ using the proposed weighting strategy for a test. Get the fused sample set Yfuse.
(iv) Use the NN classifier to classify Yfuse. The distance d(·) between a training sample y1 and a test sample y2 is defined by

d(y1, y2) = ||y1 − y2||2,   (5)

where ||·||2 denotes the Euclidean distance.

Fig. 2 displays the entire single sample biometrics recognition procedure.

Fig. 2. The single sample biometrics recognition procedure (Gabor transform → PCA transform → face and palmprint features → feature level weighting fusion → nearest neighbor classifier → biometrics recognition result).

Fig. 3. Demo images of one subject from the AR database.

Fig. 4. Demo images of one subject from the palmprint database.

Fig. 5. Recognition results using AR and palmprint databases.

Fig. 6. Weighting values using AR and palmprint databases.
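As a concrete illustration, the Gabor preprocessing of Section 2 (Eq. (1), step (i)) can be sketched in NumPy as follows. This is a minimal sketch, not the authors' code: the kernel support (3σ, capped by the image size), the FFT-based circular convolution, and all function names are our own assumptions.

```python
import numpy as np

def circular_gabor(sigma, theta, max_half):
    """Circular Gabor filter of Eq. (1) with u = 1/sigma.
    The spatial support is our own choice; the paper does not specify one."""
    u = 1.0 / sigma
    half = min(int(3 * sigma), max_half)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    carrier = np.exp(2j * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier

def gabor_features(image, sigmas=(2, 4, 8, 16), n_orient=8, down=4):
    """Filter `image` with the 4 x 8 = 32 filters, keep the magnitude
    responses, downsample each by `down`, and concatenate the results
    into one Gabor feature vector (Section 2, step (i))."""
    h, w = image.shape
    F = np.fft.fft2(image)
    max_half = (min(h, w) - 1) // 2
    feats = []
    for sigma in sigmas:
        for k in range(n_orient):
            g = circular_gabor(sigma, k * np.pi / n_orient, max_half)
            G = np.fft.fft2(g, s=(h, w))          # zero-padded kernel spectrum
            resp = np.abs(np.fft.ifft2(F * G))    # magnitude of the filtered image
            feats.append(resp[::down, ::down].ravel())
    return np.concatenate(feats)
```

For a 60 × 60 AR image this yields 32 magnitude maps downsampled to 15 × 15 each, i.e. a 7200-dimensional Gabor feature vector before PCA.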


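The weighting and fusion steps (Eqs. (2)–(5)) can likewise be sketched. Helper names, array shapes and the toy data below are our own; the sketch assumes d1 > 0, i.e. a test sample never coincides exactly with a training sample.

```python
import numpy as np

def normalize(Y, mean, std):
    """Feature normalization of Section 3: zero mean, unit variance,
    with statistics taken from the training set."""
    return (Y - mean) / std

def separability(test_vec, train_set):
    """Distance-based separability of Eq. (2): mean of the NN distance
    vector d over its smallest entry d1 (assumed nonzero)."""
    d = np.sort(np.linalg.norm(train_set - test_vec, axis=1))
    return d.mean() / d[0]

def average_weight(face_test, palm_test, face_train, palm_train):
    """w_j = s_palm / s_face per test sample; returns the average weight
    w_bar and the weight variance w_var of Eq. (3)."""
    w = np.array([separability(p, palm_train) / separability(f, face_train)
                  for f, p in zip(face_test, palm_test)])
    return w.mean(), w.var()

def fuse(y_face, y_palm, w_bar):
    """Serial feature level fusion of Eq. (4)."""
    return np.concatenate([y_face, w_bar * y_palm])

def nn_classify(y, train_fused, labels):
    """Eq. (5): Euclidean nearest neighbor classification."""
    return labels[int(np.argmin(np.linalg.norm(train_fused - y, axis=1)))]

# Toy usage with one training sample per class (3 classes, 4-D features):
rng = np.random.default_rng(0)
face_train = rng.normal(size=(3, 4))
palm_train = rng.normal(size=(3, 4))
face_test = face_train + 0.05 * rng.normal(size=(3, 4))
palm_test = palm_train + 0.05 * rng.normal(size=(3, 4))
w_bar, w_var = average_weight(face_test, palm_test, face_train, palm_train)
train_fused = np.stack([fuse(f, p, w_bar) for f, p in zip(face_train, palm_train)])
```

The weight w̄ is applied to the palmprint half of every fused vector, so the modality with the better class separability dominates the NN distance.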

Table 1
Average recognition rates using AR and palmprint databases

Compared methods                                                              Average recognition rate (%)

Without Gabor transform
  Single modal recognition        AR-PCA                                      43.25
                                  Palm-PCA                                    56.40
  Multimodal fusion               ARPalm-PCA-directfusion                     77.07
  (weighted or not)               ARPalm-PCA-weightfusion                     80.49

With Gabor transform
  Single modal recognition        AR-GaborPCA                                 52.57
                                  Palm-GaborPCA                               62.72
  Multimodal fusion               ARPalm-GaborPCA-directfusion                87.84
  (weighted or not)               ARPalm-GaborPCA-weightfusion                90.73

4. Experimental results and conclusions

We use a public and large face image database, the AR database [3]. The AR face database used in the experiment contains 119 individuals; each individual has 26 images of size 60 × 60. All image samples of one subject are shown in Fig. 3.

We use a palmprint database provided by the Hong Kong Polytechnic University [2]. The palmprint database used in the experiment contains 189 individuals; each individual has 20 images of size 64 × 64. All image samples of one subject are shown in Fig. 4.

For the single sample recognition, we draw sample subsets of the same size from these two databases: all 119 face classes, with each class containing its first 20 samples, and the first 119 palmprint classes, with each class containing all 20 samples. Therefore, the numbers of training and testing samples are 119 and 2261 (= 119 × 19), respectively. In turn, we choose one sample bearing the same label from every class of these two subsets as the training sample. For example, we select the sample labeled (a) for the first test experiment and the sample labeled (t) for the twentieth test; the remainder are taken as testing samples. Fig. 5 displays the recognition results for all compared methods, where the selected single training sample per class is varied from 1 to 20. Note that all methods employ the same classifier, i.e. the NN classifier, and "directfusion" means conducting feature fusion by assigning the face and palmprint features the same weights. The presented ARPalm-GaborPCA-weightfusion approach achieves the best recognition rates in all cases. Fig. 6 shows the weighting values of ARPalm-PCA-weightfusion and ARPalm-GaborPCA-weightfusion. According to Fig. 6, the average weights of these two methods are 1.3186 and 1.2635, respectively, and, computing by formula (3), the average variances of the weights of the two methods are 0.3793 and 0.2676, respectively.

Table 1 shows the average recognition rates of all methods. AR-PCA and Palm-PCA are two basic single-modal biometrics recognition methods; they acquire 43.25% and 56.4% recognition rates, respectively. Firstly, performing directfusion with the discriminant features extracted from AR-PCA and Palm-PCA, ARPalm-PCA-directfusion obtains a 77.07% recognition rate. Secondly, with Gabor-based image preprocessing, the directfusion recognition rate is raised to 87.84% using ARPalm-GaborPCA-directfusion. Thirdly, by implementing the proposed feature weighting strategy, the recognition rate is further raised to 90.73% using ARPalm-GaborPCA-weightfusion. The experimental results demonstrate that the presented approach has the following three advantages: (i) normalized face and palmprint biometric features are suitable for fusion, because they are strongly complementary; (ii) discriminant feature extraction based on the Gabor and PCA transforms is effective for processing face and palmprint images; (iii) the distance-based separability weighting strategy can further improve the classification performance of biometrics fusion, and this strategy is well connected with the classification method. Consequently, the presented feature level fusion approach is an effective solution to the single sample biometrics problem.

Acknowledgments

The work described in this paper was fully supported by the National Natural Science Foundation of China (NSFC) under Project No. 60402018, and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 1160/04E].

References

[1] V. Chatzis, A.G. Bors, I. Pitas, Multimodal decision-level fusion for person authentication, IEEE Trans. Syst. Man Cybern. A 29 (6) (1999) 674–680.
[2] W.K. Kong, D. Zhang, W.X. Li, Palmprint feature extraction using 2-D Gabor filters, Pattern Recogn. 36 (10) (2003) 2339–2347.
[3] A.M. Martinez, R. Benavente, The AR face database, CVC Technical Report No. 24, June 1998.
[4] J. Wu, Z.H. Zhou, Face recognition with one training image per person, Pattern Recogn. Lett. 23 (14) (2002) 1711–1719.
[5] D. Zhang, X. Jing, J. Yang, Biometric Images Discrimination (BID) Technologies, IGP/INFOSCI/IRM Press, USA, ISBN 1-59140-831-8, 2006.

Yong-Fang Yao received her B.Sc. degree in Automatic Control from the Nanjing University of Science and Technology in 1999. She has also worked as a computer engineer at two IT corporations. She is a graduate student in the Department of Computer Science, Beijing Institute of Technology, and is now a teacher at the Nanjing University of Posts & Telecommunications. Her research interests include biometrics recognition, image processing and artificial intelligence.

Xiao-Yuan Jing received his M.Sc. and Ph.D. in Pattern Recognition from the Nanjing University of Science and Technology, China (1995 and 1998, respectively). He is now a doctoral supervisor and in charge of a Bio-Computing Research Center at the Shenzhen Graduate School, Harbin Institute of Technology, China. His research interests include biometrics, pattern recognition, image processing, neural networks, machine learning and artificial intelligence.

Hau-San Wong is currently an Assistant Professor in the Department of Computer Science, City University of Hong Kong. He received the B.Sc. and M.Phil. degrees in Electronic Engineering from the Chinese University of Hong Kong, and the Ph.D. degree in Electrical and Information Engineering from the University of Sydney. He has also held research positions at the University of Sydney and Hong Kong Baptist University. His research interests include multimedia information processing, multimodal human-computer interaction and machine learning. He is the co-author of the book Adaptive Image Processing: A Computational Intelligence Perspective, a joint publication of CRC Press and SPIE Press, and a guest co-editor of the special issue "Information Mining from Multimedia Databases" for the EURASIP Journal on Applied Signal Processing. In addition, he was an organizing committee member of the 2000 IEEE Pacific-Rim Conference on Multimedia and the 2000 IEEE Workshop on Neural Networks for Signal Processing, both held in Sydney, Australia, and has co-organized a number of conference special sessions, including "Image Content Extraction and Description for Multimedia" at the 2000 IEEE International Conference on Image Processing, Vancouver, Canada, and "Machine Learning Techniques for Visual Information Retrieval" at the 2003 International Conference on Visual Information Retrieval, Miami, Florida.
