Abstract
Face images are physiological biometric traits that can be acquired with simple cameras to recognize a person for security applications. In this paper, we propose face recognition based on the Stationary Wavelet Transform (SWT) and Procrustes analysis. A face database is created by considering a number of persons with many images per person, and the face images are resized to 128×128. One frontal image from the samples of each person is considered as the reference image. The SWT is applied to the frontal reference image as well as to the non-frontal images of a person. Procrustes analysis is performed on the non-frontal images with respect to the frontal reference image to convert them into frontal form. The SWT matrices are converted into row vectors, and the SWT row vector of the frontal reference image is concatenated with the SWT row vectors of the non-frontal images to obtain the final features. The Euclidean Distance (ED) is used to compare the final features of database and test images to compute the performance parameters. The performance of the proposed method is observed to be better than that of existing methods.
Keywords: Biometrics, Procrustes Analysis, SWT, Face Recognition.
1. INTRODUCTION
Biometrics measures and analyzes the unique characteristics of human beings to establish an identity. Biometric traits are broadly classified into two categories, viz., physiological and behavioral. Physiological biometrics are related to the body parts of a person, such as the face, palmprint, fingerprint, iris, etc., and their characteristics remain almost constant over a long period. Behavioral biometrics are related to the behavior of human beings; the corresponding traits are signature, keystroke, gait, etc., which are not constant over time but depend on circumstances. Nowadays, automatic face recognition systems are used to identify a person effectively in several applications. A face recognition system has three stages: the enrollment stage, the testing stage, and the matching stage. In the enrollment stage, the face images of several persons are stored and preprocessed to enhance image quality. The features of the preprocessed face images are extracted using either spatial-domain techniques, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Singular Value Decomposition (SVD), etc., or transform-domain techniques, such as the Fast Fourier Transform (FFT), Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), Stationary Wavelet Transform (SWT), etc. In the testing stage, the feature extraction procedure is the same as in the enrollment stage, and the query images are stored. In the matching stage, the features of the enrollment and testing stages are compared using distance measures such as the Euclidean Distance (ED), Chi-Square Distance, Hausdorff Distance (HD), etc., or using classifiers such as Neural Networks (NN), Support Vector Machines (SVM), Self-Organizing Maps (SOM), etc.
Contribution: In this paper, face recognition based on SWT and Procrustes analysis is proposed. The frontal image of a person is considered as the reference image, the SWT is applied to the frontal and non-frontal images of the person, and Procrustes analysis is performed to convert the non-frontal SWT matrix into a frontal one. The final features are obtained by concatenating the frontal and non-frontal SWT coefficients.
Organization: This paper is organized as follows: Section 2 gives an overview of the literature survey, Section 3 focuses on the proposed approach, Section 4 describes the proposed algorithm, and Section 5 presents the results of our experiments on various face recognition datasets. Finally, Section 6 concludes and outlines our future work.
2. LITERATURE SURVEY
In this section, the reviews of existing techniques of face recognition presented by various researchers are discussed.
Smriti Tikoo and Nitin Malik [1] presented facial detection using the Viola-Jones algorithm, which consists of i) Haar feature selection, ii) creating an integral image, iii) AdaBoost training, and iv) cascading classifiers; recognition of the face is then done using a Back Propagation Neural Network (BPNN). Anagha S Dhavalikar and Kulkarni [2] proposed an Automatic Facial Expression Recognition system comprising three stages: i) face detection, involving skin colour detection using the YCbCr colour model, lighting compensation and morphological operations; ii) feature extraction, which extracts facial features of the eyes, nose and mouth using the Active Appearance Model; and iii) facial recognition using the Euclidean Distance. The system achieved a good recognition rate. Kavitha and Manjeet Kaur [3] analyzed different face recognition algorithms and methods. Navaneeth Bodla et al. [4] proposed a heterogeneous feature-fusion technique that jointly learns a non-linear high-dimensional projection of deep features from different face networks and generates template feature information; the experimental results were shown to be effective. Ying Tai et al. [5] introduced the Orthogonal Procrustes Problem (OPP) to handle pose variations in 2D face images and developed a regression model that performs face alignment, pose correction and face representation. A progressive strategy is adopted, and in each regression an orthogonal matrix computes the linear transformation between nearby viewpoints; pose variations are handled only in the horizontal direction. The experimental results were better on misaligned test images with pose variations, regardless of whether the training images were frontal or non-frontal. X. Chai et al. [6] proposed Locally Linear Regression for pose-invariant face recognition, which efficiently generates a virtual frontal view from a given non-frontal face image and improves prediction accuracy in the case of coarse alignment. Ravi et al. [7] proposed a face recognition technique using DT-CWT and LBP features, in which a five-level DT-CWT is applied to the face image to obtain real and imaginary bands and generate DT-CWT coefficients; the LBP algorithm is then applied to each 3x3 block of DT-CWT coefficients to obtain the final features. Hengliang Tang et al. [8] proposed a novel face representation known as the Haar Local Binary Pattern histogram, in which the LBP operator is applied to each Haar-transformed sub-image to extract the facial features. Ramesha and Raja [9] presented a performance evaluation of face recognition based on DWT and DT-CWT using multi-matching classifiers: the face images are resized to the size required for DT-CWT, a two-level DWT is applied to the face images to generate four sub-bands, and Euclidean Distance, Random Forest and Support Vector Machine matching algorithms are used for matching.
3. PROPOSED MODEL
In this section, a new approach to face recognition based on SWT and Procrustes analysis is proposed, as shown in Figure 1. The model is tested using various available face databases, such as the ORL, Indian Female, JAFFE and combined databases.
The ORL database stands for the Olivetti and Oracle Research Laboratory database. It consists of a set of face images taken at the laboratory between April 1992 and April 1994. Images of 40 persons were taken, with 10 different images of each person. All the images were taken at different times, with varying facial expressions and varying lighting, against a dark homogeneous background.
The faces are in an upright, frontal position, with a slight left-right rotation. There are a total of 400 images of size 112×92 with 256 gray-level pixel values; the image files are in JPG format. The face image samples of a person are shown in Figure 2.
The images are organized in two main directories, males and females. In each directory, there are 11 different images of each person. The face orientations included are: looking front, looking left, looking right, looking up, looking up towards the left, looking up towards the right, and looking down. The available emotions are neutral, smile, laughter, and sad/disgust. The face image samples of a person are shown in Figure 3.
The image files are in BMP (bitmap) format. The size of each image is 320×280 pixels, with 256 grey levels per pixel. There are 18 images per person, each with a different facial expression or configuration: center-light, with glasses, happy, left-light, without glasses, normal, surprised, and wink. The face image samples of a person are shown in Figure 5.
3.2 RESIZE:
Resizing changes an image to the desired size. All the face database images are resized to 128×128 to convert the different sizes into a uniform size.
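As a minimal sketch, this resizing step can be done by nearest-neighbor sampling in NumPy; the function name is illustrative, and a practical system would typically use a library resampler (e.g., OpenCV's `cv2.resize`) with smoother interpolation:

```python
import numpy as np

def resize_nearest(img, out_h=128, out_w=128):
    """Resize a 2-D grayscale image to (out_h, out_w) by nearest-neighbor sampling."""
    in_h, in_w = img.shape
    # Map each output pixel back to a source pixel index.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[np.ix_(rows, cols)]

face = np.random.randint(0, 256, size=(112, 92), dtype=np.uint8)  # an ORL-sized image
resized = resize_nearest(face)
print(resized.shape)  # (128, 128)
```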
The dimensions of all four subbands are the same as those of the input image. The SWT is applied to a face image, and the corresponding four subband images are shown in Figure 7. The LL band image is almost the same as the original image and carries its significant information. The LH band image contains the horizontal details, the HL band image the vertical details, and the HH band image the diagonal details of the original image.
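To make the subband structure concrete, here is a one-level stationary (undecimated) Haar wavelet transform in NumPy. It is a sketch under assumed periodic boundary handling and Haar filters, not necessarily the exact filter bank used in the experiments; a library such as PyWavelets (`pywt.swt2`) offers other wavelets:

```python
import numpy as np

def haar_swt2(img):
    """One-level stationary (undecimated) Haar wavelet transform.
    Returns LL, LH, HL, HH, each the same size as the input (no downsampling)."""
    a = img.astype(float)
    lo = lambda x, ax: (x + np.roll(x, -1, axis=ax)) / 2.0   # Haar low-pass
    hi = lambda x, ax: (x - np.roll(x, -1, axis=ax)) / 2.0   # Haar high-pass
    L, H = lo(a, 0), hi(a, 0)       # filter along one axis
    LL, LH = lo(L, 1), hi(L, 1)     # then along the other
    HL, HH = lo(H, 1), hi(H, 1)
    return LL, LH, HL, HH
```

On a flat (constant) image, the three detail bands come out as all zeros and LL reproduces the constant, which is a quick sanity check that the subbands keep the input dimensions.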
It is a shape analysis technique that compares the shapes of two or more face images. It is performed by optimally translating, uniformly scaling, rotating, and then comparing the shapes.
(i) Translation: The translational component of an image is removed by translating the image so that the mean of all the image points lies at the origin. Consider $k$ points in an image, say $(x_1,y_1), (x_2,y_2), \ldots, (x_k,y_k)$. The mean of these points is $(\bar{x}, \bar{y})$, where
$\bar{x} = \frac{1}{k}\sum_{i=1}^{k} x_i, \qquad \bar{y} = \frac{1}{k}\sum_{i=1}^{k} y_i$
The points are translated so that their mean lies at the origin, i.e., $(x_1-\bar{x},\, y_1-\bar{y}), (x_2-\bar{x},\, y_2-\bar{y}), \ldots, (x_k-\bar{x},\, y_k-\bar{y})$.
(ii) Uniform scaling: The image is scaled so that the Root Mean Square Distance (RMSD) from the points to the origin is 1. The RMSD is a statistical measure of the object's scale or size $s$, given in equation (1):
$s = \sqrt{\frac{1}{k}\sum_{i=1}^{k}\left[(x_i-\bar{x})^2 + (y_i-\bar{y})^2\right]} \qquad (1)$
When the point coordinates are divided by the object's initial scale $s$, the scale becomes 1, i.e., $\left(\frac{x_1-\bar{x}}{s}, \frac{y_1-\bar{y}}{s}\right), \ldots, \left(\frac{x_k-\bar{x}}{s}, \frac{y_k-\bar{y}}{s}\right)$.
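Steps (i) and (ii) can be sketched together in a few lines of NumPy (the function name `normalize_shape` is an illustrative choice):

```python
import numpy as np

def normalize_shape(points):
    """Remove translation (mean to origin) and scale (RMSD to 1)
    from a k x 2 array of landmark points."""
    centered = points - points.mean(axis=0)            # step (i): mean to the origin
    s = np.sqrt((centered ** 2).sum() / len(points))   # step (ii): RMSD scale, eq. (1)
    return centered / s, s
```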
(iii) Rotation: A standard reference orientation is not always available; hence removing the rotational component is more complex. Consider two objects consisting of the same number of points, with scale and translation removed. Let the points be $(x_1,y_1),\ldots,(x_k,y_k)$ on one object and $(w_1,z_1),\ldots,(w_k,z_k)$ on the reference object. Fixing the reference object, the other object is rotated around the origin until an optimum angle of rotation $\theta$ is obtained. Here $(u_i, v_i) = (x_i\cos\theta - y_i\sin\theta,\; x_i\sin\theta + y_i\cos\theta)$ are the coordinates of a rotated point; minimizing $\sum_{i=1}^{k}\left[(u_i-w_i)^2 + (v_i-z_i)^2\right]$ with respect to $\theta$ and solving where the derivative is zero gives
$\theta = \tan^{-1}\left(\frac{\sum_{i=1}^{k}(z_i x_i - w_i y_i)}{\sum_{i=1}^{k}(w_i x_i + z_i y_i)}\right)$
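The closed-form optimal angle can be sketched directly (helper names are illustrative):

```python
import numpy as np

def optimal_rotation_angle(p, q):
    """Angle theta that best rotates points p onto reference points q
    (both k x 2, already translation- and scale-normalized)."""
    x, y = p[:, 0], p[:, 1]
    w, z = q[:, 0], q[:, 1]
    # Setting the derivative of the summed squared distance to zero gives:
    return np.arctan2(np.sum(z * x - w * y), np.sum(w * x + z * y))

def rotate(p, theta):
    """Rotate k x 2 points counterclockwise by theta about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return p @ np.array([[c, -s], [s, c]]).T
```

As a quick check, rotating a point set by a known angle and feeding the pair back into `optimal_rotation_angle` recovers that angle.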
(iv) Shape comparison: The difference between the shapes of two objects can be evaluated only after superimposing the two objects by translating, scaling and optimally rotating them. The square root of the sum of the squared distances between corresponding points is used as a statistical measure of the difference in shape:
$D = \sqrt{\sum_{i=1}^{k}\left[(u_i - w_i)^2 + (v_i - z_i)^2\right]}$
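The full pipeline of steps (i)-(iv) is also what SciPy implements as `scipy.spatial.procrustes`, which can serve as a cross-check; note that SciPy returns the summed squared disparity, so the measure D above is its square root:

```python
import numpy as np
from scipy.spatial import procrustes

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
# The same square after an arbitrary rotation, scaling and translation.
moved = 2.5 * square @ np.array([[0., 1.], [-1., 0.]]) + 3.0
_, _, disparity = procrustes(square, moved)
D = np.sqrt(disparity)   # near zero: the shapes are identical up to similarity
print(D < 1e-8)          # True
```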
The output of the Procrustes analysis is an orthogonal matrix whose size is the same as that of the input, i.e., the 128×128 SWT output image. The output of the Procrustes analysis is then fed to the horizontal concatenation. The non-frontal images in the database are converted into frontal images using the Procrustes method, as shown in Figure 8.
If $p = (p_1, p_2, \ldots, p_M)$ and $q = (q_1, q_2, \ldots, q_M)$ are two feature vectors, the ED is computed using equation (2):
$ED = \sqrt{\sum_{i=1}^{M}(p_i - q_i)^2} \qquad (2)$
where $M$ is the number of coefficients in a vector, $p_i$ are the coefficient values of the database vector, and $q_i$ are the coefficient values of the test-image vector.
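Equation (2) is a one-liner in NumPy:

```python
import numpy as np

def euclidean_distance(p, q):
    """Equation (2): ED between a database feature vector p and a test vector q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((p - q) ** 2))

print(euclidean_distance([1, 2], [4, 6]))  # 5.0
```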
4. PROPOSED ALGORITHM
Problem Definition: Face images are identified to authenticate a person for security applications. The proposed algorithm is based on SWT and Procrustes analysis to recognize human beings effectively.
Objectives: The face recognition algorithm is developed with the following objectives:
(i) to increase the maximum and optimum TSR;
(ii) to reduce the values of FRR, FAR and EER.
The algorithm applies the Procrustes analysis technique to the SWT matrix to obtain better performance parameters; it is given in Table 1.
7. The final features of the database images are obtained by concatenating the row vector of the reference image with each Procrustes-analyzed non-frontal row vector, i.e., [row vector of reference image : row vector of Procrustes-analyzed image].
8. In the test section, the image to be tested and the frontal reference image are considered, and the SWT is applied.
9. The SWT matrix of the non-frontal image is converted into a frontal SWT matrix using Procrustes analysis, and the matrices are converted into row vectors.
10. The final test features are obtained by concatenating the row vectors of the reference and pose-corrected non-frontal SWT matrices.
11. The ED is used to compare the final features of the database and test images to compute the performance parameters.
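The feature-building and matching steps above can be sketched end to end as follows. The SVD-based pose correction and the helper names are illustrative assumptions, not the authors' exact code, and small 16×16 matrices stand in for the 128×128 SWT matrices:

```python
import numpy as np

def final_features(ref_swt, nonfrontal_swt):
    """Steps 7-10: pose-correct the non-frontal SWT matrix with the orthogonal
    Procrustes solution, then concatenate the two row vectors."""
    U, _, Vt = np.linalg.svd(nonfrontal_swt.T @ ref_swt)
    corrected = nonfrontal_swt @ (U @ Vt)    # non-frontal -> (approximately) frontal
    return np.concatenate([ref_swt.ravel(), corrected.ravel()])

def match(db_feat, test_feat):
    """Step 11: Euclidean distance between database and test feature vectors."""
    return np.sqrt(np.sum((db_feat - test_feat) ** 2))
```

With this sketch, a non-frontal matrix built as the reference times an orthogonal "pose" matrix yields features nearly identical to the reference's own, so the matching distance is close to zero.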
5. PERFORMANCE ANALYSIS
In this section, the definitions of the performance parameters, the performance evaluation of the proposed approach on widely used face databases, and a comparison of the results of the proposed method with existing methods are given.
$FRR = \frac{\text{Number of genuine persons rejected}}{\text{Total number of genuine persons}} \times 100 \qquad (3)$
$FAR = \frac{\text{Number of impostors accepted}}{\text{Total number of impostors}} \times 100 \qquad (4)$
EER is the error rate at the threshold where FAR and FRR are equal, i.e., FAR - FRR = 0. (5)
$TSR = \frac{\text{Number of persons correctly matched}}{\text{Total number of persons in the database}} \times 100 \qquad (6)$
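Given lists of genuine and impostor matching distances, the threshold-dependent rates in equations (3), (4) and (6) can be computed as below. This is a simplified sketch: the paper's TSR is an identification rate over PID/POD splits, which this toy version approximates with a threshold test.

```python
import numpy as np

def rates(genuine, impostor, threshold):
    """FRR, FAR and TSR (in %) at a given distance threshold.
    `genuine` holds distances for enrolled persons, `impostor` for outsiders."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    frr = 100.0 * np.mean(genuine > threshold)    # enrolled persons wrongly rejected
    far = 100.0 * np.mean(impostor <= threshold)  # outsiders wrongly accepted
    tsr = 100.0 * np.mean(genuine <= threshold)   # enrolled persons correctly accepted
    return frr, far, tsr
```

Sweeping the threshold traces out the FRR/FAR curves; the EER of equation (5) is read off where the two curves cross.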
The performance parameters FAR, FRR, EER, opt TSR and maximum TSR are computed by varying Persons Inside
Database (PID) and Persons Outside Database (POD).
The variations of FRR, FAR and TSR for the PID and POD combinations of 40:80, 60:60 and 80:40 are shown in Figures 9, 10 and 11.
It is observed that the FRR values decrease with the threshold, whereas the values of FAR and TSR increase with the threshold. The percentage EER values for PID and POD combinations of 40:80, 60:60 and 80:40 are 3%, 2% and 4%
respectively. The percentage optimum TSR for PID and POD combinations of 40:80, 60:60 and 80:40 are 97%, 98%,
96% respectively.
5.2.2 Results using Indian female face database: The performance parameters FAR, FRR, EER, opt TSR and TSR
are computed by varying PID and POD.
The variations of FRR, FAR and TSR for PID and POD combinations of 8:14, 11:11 and 14:8 are shown in Figures 12,
13, 14.
It is observed that the FRR values decrease with the threshold, whereas the values of FAR and TSR increase with the threshold. The percentage EER values for PID and POD combinations of 8:14, 11:11 and 14:8 are 0%, 7% and 11%
respectively. The percentage optimum TSR for PID and POD combinations of 8:14, 11:11 and 14:8 are 100%, 93%,
87% respectively.
5.2.3 Results using ORL face database: The performance parameters FAR, FRR, EER, opt TSR and TSR are
computed by varying PID and POD. The variations of FRR, FAR and TSR for PID and POD combinations of 10:30,
20:20 and 30:10 are shown in Figures 15, 16, 17.
It is observed that the FRR values decrease with the threshold, whereas the values of FAR and TSR increase with the threshold. The percentage EER values for PID and POD combinations of 10:30, 20:20 and 30:10 are 0%, 10% and 10%
respectively. The percentage optimum TSR for PID and POD combinations of 10:30, 20:20 and 30:10 are 100%, 80%,
83.33% respectively.
5.2.4 Results using JAFFE face database: The performance parameters FAR, FRR, EER, opt TSR and TSR are
computed by varying persons inside the database(PID) and Persons outside database(POD).
The variations of FRR, FAR and TSR for PID and POD combinations of 3:7, 5:5 and 7:3 are shown in Figures 18, 19,
20.
It is observed that the FRR values decrease with the threshold, whereas the values of FAR and TSR increase with the threshold. The percentage EER values for PID and POD combinations of 3:7, 5:5 and 7:3 are 15%, 13% and 10%
respectively. The percentage optimum TSR for PID and POD combinations of 3:7, 5:5 and 7:3 are 85%, 87%, 90%
respectively.
The performance of the proposed method is compared with the existing methods presented by Li Yuelong et al. [17], Sujatha B M et al. [18], Sateesh Kumar H C et al. [19] and Mehmet Korkmaz et al. [20]. It is observed that the percentage TSR is higher for the proposed method than for the existing methods.
Table 2 Comparison of TSR values of the proposed method with existing methods.
6. Conclusion
Face recognition biometric algorithms are used in security applications. In this paper, we proposed face recognition based on SWT and Procrustes analysis. The face images of various databases, which have different sizes, are converted into a uniform size of 128×128. One frontal face image from the samples of each person is considered as the reference image. The SWT is applied to the reference frontal image and also to the non-frontal images of a person. The Procrustes analysis technique is applied between the SWTs of the frontal and non-frontal matrices to convert the non-frontal matrix into a frontal one. The frontal and non-frontal SWT matrices are concatenated to obtain the final SWT features of the database face images. In the test section, the non-frontal face image to be tested is considered, the SWT is applied, and Procrustes analysis is carried out between the SWTs of the frontal and non-frontal images to convert the non-frontal image into a frontal one. The SWT row vectors of the reference image and the output of the Procrustes analysis are concatenated to obtain the final features. The features of the database and test images are compared using the ED to compute the performance parameters. It is observed that the performance of the proposed method is better than that of the existing methods. In the future, an appropriate compression technique can be applied to the final features of the database images to reduce the number of features and improve the speed of computation.
REFERENCES
[1] Smriti Tikoo and Nitin Malik, Detection of Face using Viola Jones Algorithm and Recognition using Back Propagation Neural Network, International Journal of Computer Science and Mobile Computing, Vol. 5, pp. 288-295, 2016.
[2] Anagha S Dhavalikar and Kulkarni, Face Detection and Facial Expression Recognition System, Proceedings of the International Conference on Communication Systems, pp. 1-7, 2014.
[3] Kavitha and Manjeet Kaur, A Survey Paper for Face Recognition Technologies, Proceedings in International
Journal of Scientific and Research publication, Issue 7, pp. 441-445, 2016.
[4] Navaneeth Bodla, Jingxiao Zheng, Hongyu Xu, Jun-Cheng Chen, Carlos Castillo and Rama Chellappa, Deep Heterogeneous Feature Fusion for Template-Based Face Recognition, Proceedings of the IEEE Winter Conference on Applications of Computer Vision, pp. 586-595, 2017.
[5] Ying Tai, Jian Yang, Yigong Zhang, Lei Luo, Jianjun Qian and Yu Chen, Face Recognition with Pose Variations and Misalignment via Orthogonal Procrustes Regression, IEEE Transactions on Image Processing, Vol. 25, No. 6, June 2016.
[6] X. Chai, S. Shan, X. Chen and W. Gao, Locally Linear Regression for Pose-Invariant Face Recognition, IEEE Transactions on Image Processing, Vol. 16, No. 7, pp. 1716-1725, July 2007.
[7] Ravi J, Saleem S Teramani and K.B.Raja, Face Recognition using DT-CWT and LBP Features.
[8] H. Tang, S. Yanfeng, B. Yin and G. Yun, Face Recognition Based on Haar LBP Histogram, IEEE International Conference on Advanced Computer Theory and Engineering, pp. 235-239, 2010.
[9] Ramesha K and K B Raja, Dual Transform based Feature Extraction for Face Recognition, International Journal of Computer Science Issues, Vol. 8, pp. 115-124, September 2011.
[10] ORL Database, http://www.camrol.co.uk
[11] https://indiafemaledatabase.com
[12] https://combined database.com.
[13] http://www.kasrl.org/jaffe_download.html
[14] M. Holschneider, R. Kronland-Martinet, J. Morlet and P. Tchamitchian, A Real-Time Algorithm for Signal Analysis with the Help of the Wavelet Transform, in Wavelets, Time-Frequency Methods and Phase Space, pp. 289-297, Springer-Verlag, 1989.
[15] Ying Tai, Jian Yang, Yigong Zhang, Lei Luo, Jianjun Qian and Yu Chen, Face Recognition with Pose Variations and Misalignment via Orthogonal Procrustes Regression, IEEE Transactions on Image Processing, Vol. 25, No. 6, June 2016.
[16] S. Ren, X. Cao, Y. Wei and J. Sun, Face Alignment via Regressing Local Binary Features, IEEE Transactions on Image Processing, Vol. 25, No. 3, pp. 1233-1245, 2016.
[17] Li Yuelong, Meng Li, Feng Jufu and Jigang, Downsampling Sparse Representation and Discriminant Information Aided Occluded Face Recognition, Science China Information Sciences, Vol. 57, No. 3, pp. 1-8, 2014.
[18] Sujatha B M, Chetan Tippanna Madiwalar, Suresh Babu K, Raja K B and Venugopal K R, Compression based Face Recognition using DWT and SVM, Signal and Image Processing: An International Journal (SIPIJ), Vol. 7, No. 3, pp. 45-62, June 2016.
[19] Sateesh Kumar H C, C Chowda Reddy, Raja K B and Venugopal K R, Face Recognition based on STWT and DTCWT using Two-Dimensional Q-SHIFT Filters, International Journal of Engineering and Application, Vol. 7, Issue 1, pp. 64-79, January 2017.
[20] Mehmet Korkmaz and Nihat Yilmaz, Face Recognition by Using Back Propagation Artificial Neural Network
and Windowing Method, Journal of Image and Graphics, Vol.4, No 1, June 2016.
AUTHOR
Ms. Sahitya Reddy M V obtained her B.Tech degree in Electronics and Communication Engineering from Vemana Institute of Technology and is currently pursuing an M.E degree in Electronics and Communication from University Visvesvaraya College of Engineering, Bangalore University, Bangalore. Her research interests include Image Processing, Biometrics, and VLSI.