
IPASJ International Journal of Computer Science (IIJCS)
Volume 5, Issue 9, September 2017, ISSN 2321-5992
Web Site: http://www.ipasj.org/IIJCS/IIJCS.htm    Email: editoriijcs@ipasj.org

FACE RECOGNITION BASED ON SWT AND PROCRUSTES ANALYSIS
Ganapathi V Sagar1, Sahitya Reddy M V2, K Suresh Babu2, K B Raja2 and Venugopal K R3
1 Dr. Ambedkar Institute of Technology, Bangalore-560056, India.
2 University of Visvesvaraya College of Engineering, Bangalore University, Bangalore-560 001, India.

Abstract
Face images are physiological biometric traits that can be acquired with simple cameras to recognize a person for security applications. In this paper, we propose face recognition based on the Stationary Wavelet Transform (SWT) and Procrustes analysis. A face database is created by considering a number of persons with several images per person, and the face images are resized to 128*128. One frontal image from the image samples of a person is taken as the reference image. The SWT is applied to the frontal reference image and to the non-frontal images of the person. Procrustes analysis is performed on the non-frontal images with respect to the frontal reference image to convert them into frontal form. The SWT matrices are converted into row vectors, and the SWT row vector of the frontal reference image is concatenated with the SWT row vectors of the non-frontal images to obtain the final features. The Euclidean Distance (ED) is used to compare the final features of the database and test images to compute the performance parameters. It is observed that the performance of the proposed method is better than that of existing methods.
Keywords: Biometrics, Procrustes Analysis, SWT, Face Recognition.

1. INTRODUCTION
Biometrics measure and analyze the unique characteristics of human beings to establish identity. Biometric traits are broadly classified into two categories, viz., physiological and behavioral. Physiological biometrics are related to body parts of a person, such as the face, palmprint, fingerprint, and iris, and their characteristics remain almost constant over a long period. Behavioral biometrics are related to the behavior of human beings; the corresponding traits, such as signature, keystroke dynamics, and gait, are not constant over time and depend on circumstances. Nowadays, automatic face recognition systems are used to identify a person effectively in several applications. A face recognition system has three stages: enrollment, testing, and matching. In the enrollment stage, the face images of several persons are stored and preprocessed to enhance image quality. The features of the preprocessed face images are extracted using either spatial-domain techniques such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Singular Value Decomposition (SVD), or transform-domain techniques such as the Fast Fourier Transform (FFT), Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Stationary Wavelet Transform (SWT). In the testing stage, the feature extraction procedure is the same as in the enrollment stage, and the query images are stored. In the matching stage, the features of the enrollment and testing stages are compared using distance measures such as the Euclidean Distance (ED), Chi-Square Distance, and Hausdorff Distance (HD), or using classifiers such as Neural Networks (NN), Support Vector Machines (SVM), and Self-Organizing Maps (SOM).
Contribution: In this paper, face recognition based on SWT and Procrustes analysis is proposed. The frontal image of a person is considered as the reference image, SWT is applied to the frontal and non-frontal images of the person, and Procrustes analysis is performed to convert each non-frontal SWT matrix into a frontal one. The final features are obtained by concatenating the frontal and non-frontal SWT coefficients.

Organization: This paper is organized as follows: Section 2 gives an overview of the literature survey, Section 3 describes the proposed approach, Section 4 presents the proposed algorithm, Section 5 shows the results of our experiments on various face databases, and finally Section 6 concludes and outlines future work.


2. LITERATURE SURVEY
In this section, the reviews of existing techniques of face recognition presented by various researchers are discussed.

Smriti Tikoo and Nitin Malik [1] presented face detection using the Viola-Jones algorithm, which consists of (i) Haar feature selection, (ii) creation of an integral image, (iii) AdaBoost training and (iv) cascading of classifiers; recognition of the detected face is then performed using a Back Propagation Neural Network (BPNN). Anagha S Dhavalikar and Kulkarni [2] proposed an automatic facial expression recognition system comprising three stages: (i) face detection, which involves skin colour detection using the YCbCr colour model, lighting compensation and morphological operations, (ii) feature extraction, which extracts facial features of the eyes, nose and mouth using an Active Appearance Model, and (iii) expression recognition using the Euclidean Distance; the reported recognition rate was good. Kavitha and Manjeet Kaur [3] analyzed different face recognition algorithms and methods. Navaneeth Bodla et al. [4] proposed a heterogeneous feature-fusion technique that jointly learns a non-linear high-dimensional projection of deep features from different face networks and generates a template feature representation; the experimental results show the approach to be effective. Ying Tai et al. [5] introduced the Orthogonal Procrustes Problem (OPP) to handle pose variations existing in 2D face images and developed a regression model that performs face alignment, pose correction and face representation. A progressive strategy is adopted, and in each regression step the orthogonal matrix computes the linear transformation between nearby viewpoints; pose variations are handled only in the horizontal direction. The experimental results were good on misaligned test images with pose variations, regardless of whether the training images are frontal or non-frontal. X. Chai et al. [6] proposed Locally Linear Regression for pose-invariant face recognition, which efficiently generates a virtual frontal view from a given non-frontal face image and improves prediction accuracy in the case of coarse alignment. Ravi et al. [7] proposed a face recognition technique using DT-CWT and LBP features, in which a five-level DT-CWT is applied to the face image to obtain real and imaginary bands that form the DT-CWT coefficients; the LBP operator is then applied to each 3x3 block of DT-CWT coefficients to obtain the final features. Hengliang Tang et al. [8] proposed a novel face representation approach known as the Haar Local Binary Pattern histogram, in which the LBP operator is applied to each sub-image to extract the facial features. Ramesha and Raja [9] proposed a performance evaluation of face recognition based on DWT and DT-CWT using multi-matching classifiers: the face images are resized to the size required for DT-CWT, a two-level DWT is applied to the face images to generate four sub-bands, and Euclidean Distance, Random Forest and Support Vector Machine matching algorithms are used for matching.

3. PROPOSED MODEL
In this section, a new approach to face recognition based on SWT and Procrustes analysis is proposed; the model is shown in Figure 1. The model is tested using various available face databases such as ORL, Indian Female, JAFFE and a combined database.

3.1 Face Databases:


The various databases used to test the performance of the proposed method are explained in this section.

3.1.1 ORL Database [10]:

The ORL database stands for the Olivetti and Oracle Research Laboratory database. It consists of a set of face images taken at the laboratory between April 1992 and April 1994. Images of 40 persons were captured, with 10 different images per person. All the images were taken at different times, with varying facial expressions and varying lighting, against a dark homogeneous background.

The faces are in an upright frontal view, with a slight left-right rotation. There are a total of 400 images of size 112x92 with 256 gray-level pixel values; the image files are in JPG format. The face image samples of a person are shown in Figure 2.

3.1.2 INDIAN FEMALE Database [11]:


The database consists of a set of face images taken at the IIT Kanpur campus in February 2002. There are 11 different images of each of 22 distinct subjects. All the images were taken in an upright, frontal position against a bright homogeneous background. The image files are in JPEG format. The size of each image is 640x480 pixels; the images are colour (RGB) images with 256 grey levels per channel.


Figure 1 Proposed model

Figure 2 Samples of ORL face database

The images are organized in two main directories, males and females, and there are 11 different images of each person. The face orientations included are: looking front, looking left, looking right, looking up, looking up towards the left, looking up towards the right, and looking down. The available emotions are: neutral, smile, laughter, and sad/disgust. The face image samples of a person are shown in Figure 3.

Figure 3 Sample of INDIAN FEMALE face database


3.1.3 JAFFE Database [12]:


The database consists of a set of JAFFE face images. There are 23 different images of each of 10 distinct subjects. All the images were taken in an upright, frontal position against a white homogeneous background. The image files are in TIFF format, and the size of each image is 256x256 pixels, with 256 grey levels per pixel; the database therefore contains a total of 230 grayscale images. The 23 images per person cover different facial expressions or configurations: center-light, with glasses, happy, left-light, without glasses, normal, right-light, sad, sleepy, surprised, and wink. The face image samples of a person are shown in Figure 4.

Figure 4 Sample of JAFFE face database

3.1.4 COMBINED Databases [13]:


There are 18 different images of each of 120 distinct subjects. All the images were taken in an upright, frontal position against a white homogeneous background.

Figure 5 Sample of COMBINED Face Database

The image files are in BMP (bitmap) format. The size of each image is 320x280 pixels, with 256 grey levels per pixel. There are 18 images per person, each with a different facial expression or configuration: center-light, with glasses, happy, left-light, without glasses, normal, surprised, and wink. The face image samples of a person are shown in Figure 5.

3.2 RESIZE:
Resizing changes an image to a desired size. All face images in the databases used are resized to 128*128 so that images of different sizes are brought to a uniform size.
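As an illustration only, a minimal preprocessing sketch is given below; it assumes the OpenCV (cv2) Python bindings are available and uses a hypothetical file path, neither of which is specified in the paper.

```python
import cv2

def preprocess(path, size=(128, 128)):
    """Read a face image as grayscale and resize it to the uniform 128*128 size."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # 8-bit grayscale load
    if img is None:
        raise FileNotFoundError(path)
    return cv2.resize(img, size, interpolation=cv2.INTER_AREA)

# Example (hypothetical path): face = preprocess("faces/person01/sample1.jpg")
```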

3.3 Stationary Wavelet Transform [SWT] [14]:


The shift-variant nature of the Discrete Wavelet Transform, caused by its decimation operation, is eliminated in the Stationary Wavelet Transform (SWT). The input signal is passed through low-pass and high-pass filters to obtain approximation and detail information without decimation, so the number of coefficients in each sub-band is the same as in the input signal. An input image of dimension M*N is passed through the low-pass and high-pass filters, and the corresponding outputs also have dimensions M*N. The outputs of the low-pass and high-pass filters are further passed through low-pass and high-pass filters to obtain four sub-bands, viz., LL, LH, HL, and HH, as shown in Figure 6.

The dimensions of all four sub-bands are the same as those of the input image. The SWT is applied to a face image, and the corresponding four sub-band images are shown in Figure 7. The LL band image is almost the same as the original image and carries its significant information. The LH band image has the horizontal details, the HL band image the vertical details, and the HH band image the diagonal details of the original image.
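As an illustration, a minimal one-level 2-D SWT sketch in Python is given below; it assumes the PyWavelets (pywt) package and the Haar wavelet, since the paper does not state which mother wavelet is used.

```python
import numpy as np
import pywt

def swt_subbands(face, wavelet="haar"):
    """One-level 2-D stationary wavelet transform of a face image.

    Returns the LL, LH, HL and HH sub-bands; each has the same shape as
    the input because no decimation is performed."""
    (ll, (lh, hl, hh)), = pywt.swt2(face.astype(float), wavelet, level=1)
    return ll, lh, hl, hh

# Example with a random 128*128 array standing in for a resized face image
face = np.random.rand(128, 128)
ll, lh, hl, hh = swt_subbands(face)
assert ll.shape == face.shape   # undecimated: sub-band size equals input size
```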


Figure 6 SWT Decomposition

Figure 7 SWT four sub-band images: (a) original image, (b) LL band, (c) LH band, (d) HL band, (e) HH band

3.4 PROCRUSTES ANALYSIS [5][15]

Procrustes analysis compares the shapes of two or more face images. It is performed by optimally translating, uniformly scaling and rotating one shape onto the other, and then comparing the superimposed shapes.


(i) Translation: The translational component of an image is removed by translating the image points so that their mean lies at the origin. Consider k points in an image, say (x1, y1), (x2, y2), ..., (xk, yk). The mean of these points is (x̄, ȳ),

where x̄ = (x1 + x2 + ... + xk)/k and ȳ = (y1 + y2 + ... + yk)/k.

The points are translated so that their mean moves to the origin, i.e., (x1 - x̄, y1 - ȳ), (x2 - x̄, y2 - ȳ), ..., (xk - x̄, yk - ȳ).

(ii) Uniform scaling: The image is scaled so that the Root Mean Square Distance (RMSD) from the translated points to the origin is 1. The RMSD is a statistical measure of the object's scale or size s and is given in equation (1):

s = sqrt( [ (x1 - x̄)^2 + (y1 - ȳ)^2 + ... + (xk - x̄)^2 + (yk - ȳ)^2 ] / k )        (1)

Dividing the point coordinates by the object's initial scale s makes the scale 1, giving the points ((x1 - x̄)/s, (y1 - ȳ)/s), and so on.

(iii) Rotation: A standard reference orientation is not always available, hence removing the rotational component is more complex. Consider two objects consisting of the same number of points, with scale and translation removed. Let the points be ((x1, y1), ..., (xk, yk)) on one object and ((w1, z1), ..., (wk, zk)) on the other; one of them is fixed to provide the reference orientation and the other is rotated around the origin until an optimum angle of rotation is obtained.

Rotation by an angle θ gives

(u1, v1) = (cos θ · w1 - sin θ · z1, sin θ · w1 + cos θ · z1),

where (u, v) are the coordinates of a rotated point. Taking the derivative of (u1 - x1)^2 + (v1 - y1)^2 + ... + (uk - xk)^2 + (vk - yk)^2 with respect to θ and solving for the angle at which the derivative is zero gives

θ = tan^-1 [ Σ(wi·yi - zi·xi) / Σ(wi·xi + zi·yi) ].

(iv) Shape comparison: The difference between the shapes of two objects can be evaluated only after superimposing the two objects by translating, scaling and optimally rotating them. The square root of the sum of the squared distances between the corresponding points is used as a statistical measure of their difference in shape:

D = sqrt( (u1 - x1)^2 + (v1 - y1)^2 + ... + (uk - xk)^2 + (vk - yk)^2 )

where D is the Procrustes distance.
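A minimal numerical sketch of these four steps for two 2-D point sets is given below, written with NumPy; the use of landmark point sets (rather than the SWT matrices of the proposed model) and all variable names are illustrative assumptions.

```python
import numpy as np

def procrustes_align(ref, mov):
    """Align point set `mov` (k x 2) to `ref` (k x 2) by translation, uniform
    scaling and rotation, and return the aligned points and Procrustes distance."""
    # (i) translation: move both centroids to the origin
    ref_c = ref - ref.mean(axis=0)
    mov_c = mov - mov.mean(axis=0)
    # (ii) uniform scaling: make the RMS distance to the origin equal to 1
    ref_s = ref_c / np.sqrt((ref_c ** 2).sum() / len(ref_c))
    mov_s = mov_c / np.sqrt((mov_c ** 2).sum() / len(mov_c))
    # (iii) rotation: closed-form optimum angle
    x, y = ref_s[:, 0], ref_s[:, 1]
    w, z = mov_s[:, 0], mov_s[:, 1]
    theta = np.arctan2((w * y - z * x).sum(), (w * x + z * y).sum())
    u = w * np.cos(theta) - z * np.sin(theta)
    v = w * np.sin(theta) + z * np.cos(theta)
    aligned = np.column_stack([u, v])
    # (iv) shape comparison: Procrustes distance D
    d = np.sqrt(((aligned - ref_s) ** 2).sum())
    return aligned, d

# Example: a shape and a rotated copy of it should align with D close to zero
pts = np.random.rand(10, 2)
angle = 0.3
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
aligned, d = procrustes_align(pts, pts @ rot.T)
print(d)
```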

The output of the Procrustes analysis is the orthogonal matrix; the size of this matrix is the same as that of the input to the Procrustes analysis, i.e., the size of the SWT output, 128*128. The output of the Procrustes analysis is then fed to the horizontal concatenation stage.

The non-frontal images in the database are converted into frontal images using the Procrustes method, as shown in Figure 8.

Figure 8 Transformed faces obtained after applying Procrustes analysis: (a) frontal images, (b) non-frontal images, (c) corresponding transformed images


3.5 HORIZONTAL CONCATENATION


The Stationary Wavelet Transform (SWT) is applied to the resized 128*128 frontal reference image to extract its initial features. The remaining non-frontal images of the person in the database are then considered, and the SWT is applied to extract their initial features. Procrustes analysis is applied to the SWT matrices of these non-frontal images to rotate them to the frontal position. The SWT matrix of the reference image is converted into a row vector of size 1*16384, and the SWT matrices of the Procrustes-analyzed images are likewise converted into row vectors of size 1*16384. In the horizontal concatenation, the reference row vector is concatenated with each of the remaining SWT row vectors, which results in concatenated row vectors of size 1*32768. The final feature size of each image in the database is therefore 1*32768.
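A sketch of this enrollment-side feature construction is given below; it assumes PyWavelets with the Haar wavelet, flattens the LL sub-band to obtain the 1*16384 row vector (the paper does not state which sub-band is used, so this is an assumption), and uses a simple placeholder where the Procrustes pose correction would be applied.

```python
import numpy as np
import pywt

def swt_row_vector(image_128, wavelet="haar"):
    """Illustrative 1*16384 feature: flattened LL sub-band of a one-level SWT."""
    (ll, _details), = pywt.swt2(image_128.astype(float), wavelet, level=1)
    return ll.reshape(1, -1)

def enrollment_features(frontal_ref, non_frontal_images):
    """Concatenate the reference row vector with each pose-corrected row vector."""
    ref_vec = swt_row_vector(frontal_ref)                   # 1 x 16384
    rows = []
    for img in non_frontal_images:
        corrected = img                                     # stand-in for Procrustes pose correction
        rows.append(np.hstack([ref_vec, swt_row_vector(corrected)]))  # 1 x 32768
    return np.vstack(rows)                                  # one final feature row per database image

# Example with random 128*128 arrays standing in for face images
db = enrollment_features(np.random.rand(128, 128), [np.random.rand(128, 128)] * 3)
print(db.shape)   # (3, 32768)
```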
In the test section, the reference image of the database is again used as the reference image. The SWT is applied to the reference image and to the test image. Procrustes analysis is used on the SWT matrix of the test image to rotate the test image towards the frontal position. The SWT matrices of the reference image and of the Procrustes-analyzed test image are converted into row vectors of size 1*16384 each. The final test feature is obtained by concatenating the two SWT row vectors, giving a vector of size 1*32768.

3.6 EUCLIDEAN DISTANCE(ED):


The Euclidean Distance is used to measure the similarity between the database features and the test features; if the ED is smaller than a threshold, the test image is declared to match the database image and the recognition performance is computed.

For feature vectors p = (p1, p2, ..., pM) and q = (q1, q2, ..., qM), the ED is computed using equation (2):

ED = sqrt( (p1 - q1)^2 + (p2 - q2)^2 + ... + (pM - qM)^2 )        (2)

where M is the number of coefficients in a vector, pi are the coefficient values of the database feature vector, and qi are the coefficient values of the test feature vector.
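A small matching sketch under these definitions follows; the threshold value and the random feature vectors are purely illustrative.

```python
import numpy as np

def euclidean_distance(p, q):
    """Equation (2): square root of the sum of squared coefficient differences."""
    return np.sqrt(np.sum((p - q) ** 2))

def match(db_features, test_feature, threshold):
    """Return the index of the closest database feature, or None if the smallest
    distance exceeds the decision threshold."""
    dists = np.array([euclidean_distance(row, test_feature) for row in db_features])
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

# Example: 3 database features and a test feature close to the second one
db = np.random.rand(3, 32768)
test = db[1] + 0.01 * np.random.rand(32768)
print(match(db, test, threshold=50.0))   # expected to print 1
```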

4. PROPOSED ALGORITHM
Problem Definition: Face images are used to authenticate a person for security applications. The proposed algorithm is based on SWT and Procrustes analysis and recognizes human beings effectively.

Objectives: The face recognition algorithm is developed with the following objectives:
(i) To increase the maximum and optimum TSR.
(ii) To reduce the values of FRR, FAR and EER.

The algorithm applies the Procrustes analysis technique to the SWT matrix to obtain better performance parameters and is given in Table 1.

Table 1: Proposed Algorithm


Input: Face images from Standard databases
Output: Computation of performance parameters
1. The face images are taken from various available standard face databases.
2. Face images of different sizes are converted into a uniform size of 128*128.
3. The frontal face image of a person is considered as the reference image, and SWT is applied.
4. The non-frontal images of the same person are considered, and SWT is applied.
5. The Procrustes analysis technique is used on the SWT matrices of the non-frontal and frontal images to convert the non-frontal matrices into frontal ones.
6. The SWT matrix of the reference frontal image and the Procrustes-analyzed SWT matrices of the non-frontal images are converted into row vectors.


7. The final features of the database images are obtained by concatenating the row vector of the reference image with each Procrustes-analyzed non-frontal row vector, i.e., [row vector of reference image : row vector of Procrustes-analyzed image].
8. In the test section, the image to be tested and the frontal reference image are considered, and SWT is applied.
9. The SWT matrix of the non-frontal test image is converted into a frontal SWT matrix using Procrustes analysis, and the matrices are converted into row vectors.
10. The final test features are obtained by concatenating the row vectors of the reference image and the pose-corrected test image.
11. The ED is used to compare the final features of the database and test images to compute the performance parameters.

5. PERFORMANCE ANALYSIS
In this section, the definitions of the performance parameters, the performance evaluation of the proposed approach on widely used face databases, and the comparison of the results of the proposed method with existing methods are given.

5.1 Definitions of performance parameters

5.1.1 False Acceptance Rate (FAR):


It is the probability that the system incorrectly accepts an input image of a person who is not enrolled in the database. It is computed as the ratio of the number of unauthorized persons accepted to the total number of persons outside the database, as given in equation (3):

FAR = (Number of unauthorized persons accepted) / (Total number of persons outside the database)        (3)

5.1.2 False Rejection Rate (FRR):


It is the ratio of the number of authorized persons rejected to the total number of persons in the database, and is calculated using equation (4):

FRR = (Number of authorized persons rejected) / (Total number of persons in the database)        (4)

5.1.3 Equal Error Rate (EER):


It is the error rate at the operating threshold where the FRR and FAR are equal, as expressed in equation (5):

EER = FAR = FRR (at the threshold where the FAR and FRR curves intersect)        (5)

5.1.4 Total Success Rate (TSR):


It is the ratio of the total number of persons correctly matched to the total number of persons in the database, and is given by equation (6):

TSR = (Number of persons correctly matched) / (Total number of persons in the database)        (6)
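A compact sketch of how these rates can be computed from genuine and impostor distance scores is given below; the score distributions and the threshold sweep are illustrative assumptions, not data from the paper.

```python
import numpy as np

def rates(genuine, impostor, threshold):
    """FRR, FAR and TSR at one distance threshold (a smaller distance means a better match)."""
    frr = np.mean(genuine >= threshold)   # authorized persons rejected / total authorized
    far = np.mean(impostor < threshold)   # unauthorized persons accepted / total unauthorized
    tsr = 1.0 - frr                       # persons correctly matched / persons in the database
    return frr, far, tsr

def equal_error_rate(genuine, impostor, thresholds):
    """EER: the error rate at the threshold where FAR and FRR are (approximately) equal."""
    best_t = min(thresholds, key=lambda t: abs(rates(genuine, impostor, t)[0]
                                               - rates(genuine, impostor, t)[1]))
    frr, far, _ = rates(genuine, impostor, best_t)
    return (frr + far) / 2.0, best_t

# Illustrative distance score distributions for enrolled (PID) and outside (POD) persons
genuine = np.random.normal(1.0, 0.3, 100)
impostor = np.random.normal(2.5, 0.5, 100)
eer, t = equal_error_rate(genuine, impostor, np.linspace(0.0, 4.0, 200))
print(eer, t)
```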

5.2 Performance evaluation of proposed approach:


The face recognition experiments carried out to evaluate the proposed approach on the ORL, Indian Female, JAFFE and combined face databases are described in the following sections.

5.2.1 Results using combined face database:

The performance parameters FAR, FRR, EER, optimum TSR and maximum TSR are computed by varying the number of Persons Inside the Database (PID) and Persons Outside the Database (POD).

The variations of FRR, FAR and TSR with threshold for PID:POD combinations of 40:80, 60:60 and 80:40 are shown in Figures 9, 10 and 11.


Figure 9 Variation of performance parameter with PID and POD of 40:80

Figure 10 Variation of performance parameter with PID and POD of 60:60


Figure 11 Variation of performance parameter with PID and POD of 80:40

It is observed that the FRR values decrease as the threshold increases, whereas the FAR and TSR values increase with the threshold. The percentage EER values for PID:POD combinations of 40:80, 60:60 and 80:40 are 3%, 2% and 4% respectively. The percentage optimum TSR values for PID:POD combinations of 40:80, 60:60 and 80:40 are 97%, 98% and 96% respectively.

5.2.2 Results using Indian Female face database: The performance parameters FAR, FRR, EER, optimum TSR and maximum TSR are computed by varying PID and POD. The variations of FRR, FAR and TSR for PID:POD combinations of 8:14, 11:11 and 14:8 are shown in Figures 12, 13 and 14.

Figure 12 Variation of performance parameter with PID and POD 8:14


Figure 13 Variation of performance parameter with PID and POD of 11:11

Figure 14 Variation of performance parameter with PID and POD of 14:8

It is observed that the FRR values decrease as the threshold increases, whereas the FAR and TSR values increase with the threshold. The percentage EER values for PID:POD combinations of 8:14, 11:11 and 14:8 are 0%, 7% and 11% respectively. The percentage optimum TSR values for PID:POD combinations of 8:14, 11:11 and 14:8 are 100%, 93% and 87% respectively.

5.2.3 Results using ORL face database: The performance parameters FAR, FRR, EER, optimum TSR and maximum TSR are computed by varying PID and POD. The variations of FRR, FAR and TSR for PID:POD combinations of 10:30, 20:20 and 30:10 are shown in Figures 15, 16 and 17.


Figure 15 Variation of performance parameter with PID and POD of 10:30

Figure 16 Variation of performance parameter with PID and POD of 20:20


Figure 17 Variation of performance parameter with PID and POD of 30:10

It is observed that the FRR values decrease as the threshold increases, whereas the FAR and TSR values increase with the threshold. The percentage EER values for PID:POD combinations of 10:30, 20:20 and 30:10 are 0%, 10% and 10% respectively. The percentage optimum TSR values for PID:POD combinations of 10:30, 20:20 and 30:10 are 100%, 80% and 83.33% respectively.

5.2.4 Results using JAFFE face database: The performance parameters FAR, FRR, EER, optimum TSR and maximum TSR are computed by varying the Persons Inside the Database (PID) and Persons Outside the Database (POD). The variations of FRR, FAR and TSR for PID:POD combinations of 3:7, 5:5 and 7:3 are shown in Figures 18, 19 and 20.

Figure 18 Variation of performance parameter with PID and POD of 3:7


Figure 19 Variation of performance parameter with PID and POD of 5:5

Figure 20 Variation of performance parameter with PID and POD of 7:3

It is observed that the FRR values decrease as the threshold increases, whereas the FAR and TSR values increase with the threshold. The percentage EER values for PID:POD combinations of 3:7, 5:5 and 7:3 are 15%, 13% and 10% respectively. The percentage optimum TSR values for PID:POD combinations of 3:7, 5:5 and 7:3 are 85%, 87% and 90% respectively.


5.3 Comparison of the proposed method with existing methods:

The performance of the proposed method is compared with the existing methods presented by Li Vuelong et al. [17], Sujatha B M et al. [18], Sateesh Kumar H C et al. [19] and Mehmet Korkmaz et al. [20]. It is observed that the percentage TSR of the proposed method is higher than that of the existing methods.

Table 2 Comparison of TSR values of the proposed method with existing methods.

Sl. No.   Technique                            Database       %TSR
1         DSRC-LDA [17]                        ORL database   88
2         DWT+SVM [18]                         ORL database   93.7
3         STWT+DTCWT [19]                      ORL database   77.5
4         BPANN+WM [20]                        ORL database   92
5         Proposed method (Procrustes+SWT)     ORL database   100

The proposed method performs better for the following reasons:

1. SWT is a shift-invariant transformation, hence variations in face angles do not affect the performance of the algorithm.
2. The non-frontal images are converted into frontal images using Procrustes analysis, which takes care of angle variations in the images.
3. The features of the frontal images and the modified non-frontal images are concatenated to obtain the final features for effective recognition.

6. Conclusion
Face recognition algorithms are used in security applications. In this paper, we proposed face recognition based on SWT and Procrustes analysis. The face images of various databases, which have different sizes, are converted into a uniform size of 128*128. One frontal face image is selected as the reference image from the samples of each person. The SWT is applied to the reference frontal image and to the non-frontal images of the person. The Procrustes analysis technique is applied between the SWT matrices of the frontal and non-frontal images to convert the non-frontal matrices into frontal ones. The frontal and non-frontal SWT row vectors are concatenated to obtain the final SWT features of the database face images. In the test section, the non-frontal face image to be tested is considered, SWT is applied, and Procrustes analysis is carried out between the SWT matrices of the frontal and non-frontal images to convert the non-frontal image into a frontal one. The SWT row vector of the reference image and the output of the Procrustes analysis are concatenated to obtain the final test features. The features of the database and test images are compared using the ED to compute the performance parameters. It is observed that the performance of the proposed method is better than that of existing methods. In future work, an appropriate compression technique can be applied to the final features of the database images to reduce the number of features and improve the speed of computation.

REFERENCES
[1] Smriti Tikoo and Nitin Malik, Detection of Face using Viola Jones Algorithm, International Journal of Computer Science and Mobile Computing, Vol. 5, Issue 6, pp. 288-295, 2016.
[2] Anagha S Dhavalikar and Kulkarni, Face Detection and Facial Expression Recognition System, Proceedings of the International Conference on Communication Systems, pp. 1-7, 2014.
[3] Kavitha and Manjeet Kaur, A Survey Paper for Face Recognition Technologies, International Journal of Scientific and Research Publications, Issue 7, pp. 441-445, 2016.

[4] Navaneeth Bodla, Jingxiao Zheng, Hongyu Xu, Jun-Cheng Chen, Carlos Castillo and Rama Chellappa, Deep Heterogeneous Feature Fusion for Template-Based Face Recognition, Proceedings of the IEEE Winter Conference on Applications of Computer Vision, pp. 586-595, 2017.
[5] Ying Tai, Jian Yang, Yigong Zhang, Lei Luo, Jianjun Qian and Yu Chen, Face Recognition With Pose Variations and Misalignment via Orthogonal Procrustes Regression, IEEE Transactions on Image Processing, Vol. 25, No. 6, June 2016.
[6] X. Chai, S. Shan, X. Chen and W. Gao, Locally Linear Regression for Pose-Invariant Face Recognition, IEEE Transactions on Image Processing, Vol. 16, No. 7, pp. 1716-1725, July 2007.
[7] Ravi J, Saleem S Teramani and K B Raja, Face Recognition using DT-CWT and LBP Features.
[8] H. Tang, S. Yanfeng, B. Yin and G. Yun, Face Recognition Based on Haar LBP Histogram, IEEE International Conference on Advanced Computer Theory and Engineering, pp. 235-239, 2010.
[9] Ramesha K and K B Raja, Dual Transform based Feature Extraction for Face Recognition, International Journal of Computer Science Issues, Vol. 8, pp. 115-124, September 2011.
[10] ORL Database, http://www.camrol.co.uk
[11] Indian Female Database, https://indiafemaledatabase.com
[12] JAFFE Database, http://www.kasrl.org/jaffe_download.html
[13] Combined Database, https://combineddatabase.com
[14] M. Holschneider, R. Kronland-Martinet, J. Morlet and P. Tchamitchian, Real-time Algorithms for Signal Analysis with the Help of the Wavelet Transform, in Wavelets, Time-Frequency Methods and Phase Space, pp. 289-297, Springer-Verlag, 1989.
[15] Ying Tai, Jian Yang, Yigong Zhang, Lei Luo, Jianjun Qian and Yu Chen, Face Recognition With Pose Variations and Misalignment via Orthogonal Procrustes Regression, IEEE Transactions on Image Processing, Vol. 25, No. 6, June 2016.
[16] S. Ren, X. Cao, Y. Wei and J. Sun, Face Alignment via Regressing Local Binary Features, IEEE Transactions on Image Processing, Vol. 25, No. 3, pp. 1233-1245, 2016.
[17] Li Vuelong, Meng Li, Feng Jufu and Jigang, Downsampling Sparse Representation and Discriminant Information Aided Occluded Face Recognition, Science China Information Sciences, Vol. 57, No. 03, pp. 1-8, 2014.
[18] Sujatha B M, Chetan Tippanna Madiwalar, Suresh Babu K, Raja K B and Venugopal K R, Compression based Face Recognition using DWT and SVM, Signal and Image Processing: An International Journal (SIPIJ), Vol. 7, No. 3, pp. 45-62, June 2016.
[19] Sateesh Kumar H C, C Chowda Reddy, Raja K B and Venugopal K R, Face Recognition based on STWT and DTCWT using Two-dimensional Q-shift Filters, International Journal of Engineering and Application, Vol. 7, Issue 1, pp. 64-79, January 2017.
[20] Mehmet Korkmaz and Nihat Yilmaz, Face Recognition by Using Back Propagation Artificial Neural Network and Windowing Method, Journal of Image and Graphics, Vol. 4, No. 1, June 2016.

AUTHOR

Ganapathi V Sagar is with the Dept. of Electronics and Instrumentation Engineering at Dr. Ambedkar Institute of Technology, Bangalore. He obtained his B.E. degree in Instrumentation Technology from BDT College of Engineering, Davangere. His Master's degree specialization was Bio-Medical Instrumentation from VTU, Belgaum, and he is currently pursuing a Ph.D. in the area of Image Processing under the guidance of Dr. K Suresh Babu, Professor, Dept. of Electronics and Communication Engineering, University Visvesvaraya College of Engineering, Bangalore. He has over 30 research publications in refereed International Journals and Conference Proceedings. His areas of interest are Signal Processing and Communication Engineering.

Ms. Sahitya Reddy M V obtained her B.Tech degree in Electronics and Communication Engineering from Vemana Institute of Technology and is currently pursuing an M.E. degree in Electronics and Communication at University Visvesvaraya College of Engineering, Bangalore University, Bangalore. Her research interests include Image Processing, Biometrics, and VLSI.


K Suresh Babu is an Associate Professor in the Dept. of Electronics and Communication Engineering, University Visvesvaraya College of Engineering, Bangalore University, Bangalore. He obtained his BE and ME in Electronics and Communication Engineering from University Visvesvaraya College of Engineering, Bangalore. He was awarded a Ph.D. in Computer Science and Engineering from Bangalore University. He has over 30 research publications in refereed International Journals and Conference Proceedings. His research interests include Image Processing, Biometrics, and Signal Processing.

K B Raja is an Associate Professor, Dept. of Electronics and Communication Engineering, University


Visvesvaraya College of Engineering, Bangalore University, Bangalore. He obtained his BE and ME in
Electronics and Communication Engineering from University Visvesvaraya College of Engineering,
Bangalore. He was awarded Ph.D. in Computer Science and Engineering from Bangalore University. He
has over 160 research publications in refereed International Journals and Conference Proceedings. His
research interests include Image Processing, Biometrics, VLSI Signal Processing and Computer
Networks.

K R Venugopal is currently the Principal, University Visvesvaraya College of Engineering, Bangalore


University, Bangalore. He obtained his Bachelor of Engineering from University Visvesvaraya College of
Engineering. He received his Master's degree in Computer Science and Automation from the Indian Institute of Science, Bangalore. He was awarded a Ph.D. in Economics from Bangalore University and a Ph.D. in Computer Science from the Indian Institute of Technology, Madras. He has a distinguished
academic career and has degrees in Electronics, Economics, Law, Business Finance, Public Relations,
Communications, Industrial Relations, Computer Science, and Journalism. He has authored 27 books on Computer
Science and Economics, which include Petrodollar and the World Economy, C Aptitude, Mastering C, Microprocessor
Programming, Mastering C++ etc. He has been serving as the Professor and Chairman, Department of Computer
Science and Engineering, University Visvesvaraya College of Engineering, Bangalore University, Bangalore. During
his three decades of service at UVCE, he has over 375 research papers to his credit. His research interests include
computer networks, parallel and distributed systems, digital signal processing and data mining.

