
School of Information Technology & Engineering

A PROJECT REPORT
Submitted in partial fulfilment for the award of the course
Digital Image Processing (SWE1010)

Title: Face Detection Using Fisher Face and Viola Jones algorithm in MATLAB

By:

K SAIKUMAR REDDY 16MIS0307

Under the guidance of


Prof. SRINIVASA PERUMAL R
Assistant Professor (Selection Grade)
Abstract

Interest in computer vision has grown rapidly over the past decade. Fueled by the steady doubling of computing
power roughly every 13 months, face detection and recognition have moved from an esoteric topic to a popular area
of research in computer vision and one of the more successful applications of image analysis and
algorithm-based understanding. Because of the intrinsic nature of the problem, computer vision is not only
an area of computer science research, but also the subject of neuroscientific and psychological studies, mainly
because of the general opinion that advances in computer image processing and understanding will
provide insight into how our brain works, and vice versa. Motivated by this general curiosity and interest, the
author proposes an application that grants a user access to a particular machine
based on an in-depth analysis of the person's facial features.

Introduction
With the aid of a regular web camera, a machine is able to detect and recognize a person's face; a custom
login screen with the ability to filter user access based on the user's facial
features will be developed. The objective of this work is to provide a set of detection
algorithms that can later be packaged in a framework that is easily portable across the different
processor architectures found in today's machines (computers). These algorithms must achieve at least a
95% successful recognition rate, with fewer than 3% of the detected faces being
false positives.

Literature Review
A thorough survey has revealed that various methods, and combinations of these methods, can be applied
to the development of a new face recognition system. Among the many possible approaches, we have decided
to use a combination of knowledge-based methods for the face detection part and a neural-network approach for
the face recognition part. The main reasons for this selection are their straightforward applicability and reliability.
Our face recognition system approach is outlined in the sections that follow.
Existing face recognition algorithm

The input part is a prerequisite for the face recognition system; image acquisition is performed here.
Live captured images are converted to digital data so that image-processing computations can be performed,
and these captured images are then passed to the face detection algorithm.
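
A minimal acquisition sketch in MATLAB is given below. It assumes the MATLAB Support Package for USB Webcams is installed; the output file name is a placeholder.

% Illustrative image acquisition from a regular web camera
cam = webcam;                    % requires the MATLAB Support Package for USB Webcams
img = snapshot(cam);             % capture one frame as a numeric (digital) image
imwrite(img, 'capture.jpg');     % placeholder file name; later fed to the detector
clear cam;                       % release the camera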
Methodologies (Algorithms)

A) Viola Jones algorithm

The Viola-Jones algorithm is a widely used mechanism for object detection. The main property of this
algorithm is that training is slow, but detection is fast. This algorithm uses Haar basis feature filters, so it
does not use multiplications.

The efficiency of the Viola-Jones algorithm can be significantly increased by first generating the integral
image.

The integral image allows the integrals needed by the Haar extractors to be calculated by
adding only four numbers. For example, the image integral of area ABCD (Fig.1) is calculated as
II(y_A, x_A) − II(y_B, x_B) − II(y_C, x_C) + II(y_D, x_D).

Fig.1 Image area integration using integral image
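
This four-number property is easy to verify in MATLAB. The sketch below is illustrative only: it builds the integral image with cumsum and checks one rectangle sum; the file name and corner coordinates are placeholders (the Computer Vision Toolbox also provides an integralImage function for this purpose).

% Minimal sketch: integral image and a four-lookup rectangle sum (illustrative)
I  = im2double(rgb2gray(imread('sai.jpg')));   % placeholder image name
II = cumsum(cumsum(I, 1), 2);                  % II(y,x) = sum of I(1:y, 1:x)
% Sum over the rectangle with top-left (y1,x1) and bottom-right (y2,x2); assumes y1,x1 > 1
y1 = 10; x1 = 20; y2 = 40; x2 = 60;            % placeholder corners
boxSum = II(y2,x2) - II(y1-1,x2) - II(y2,x1-1) + II(y1-1,x1-1);
check  = sum(sum(I(y1:y2, x1:x2)));            % should match boxSum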


Detection happens inside a detection window. A minimum and maximum window size are chosen, and for
each size a sliding step is chosen. The detection window is then moved across the image as follows (a sketch of this scan loop is given after the list):

1. Set the minimum window size and the sliding step corresponding to that size.
2. For the chosen window size, slide the window vertically and horizontally with the same step. At each
step, a set of N face recognition filters is applied. If one filter gives a positive answer, a face is
detected in the current window.
3. If the window size is the maximum size, stop the procedure. Otherwise, increase the window size and the
corresponding sliding step to the next chosen size and go to step 2.
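
The scan loop referred to above can be sketched roughly as follows. This is only an illustration of the procedure, not the toolbox implementation: detectAtWindow is a hypothetical stand-in for the set of N cascade filters, and the window sizes and step fraction are assumed values.

% Illustrative multi-scale sliding-window scan (window sizes and step are assumptions)
I = imread('sai.jpg');                       % placeholder image
[h, w, ~] = size(I);
for win = 24:24:min(h, w)                    % assumed minimum size, growth step and maximum
    step = max(1, round(0.1*win));           % sliding step tied to the window size
    for y = 1:step:(h - win + 1)
        for x = 1:step:(w - win + 1)
            patch = I(y:y+win-1, x:x+win-1, :);
            if detectAtWindow(patch)         % hypothetical cascade decision function
                fprintf('Face candidate at (%d,%d), size %d\n', x, y, win);
            end
        end
    end
end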
Each face recognition filter (from the set of N filters) contains a set of cascade-connected classifiers. Each
classifier looks at a rectangular subset of the detection window and determines if it looks like a face. If it
does, the next classifier is applied. If all classifiers give a positive answer, the filter gives a positive answer
and the face is recognized. Otherwise the next filter in the set of N filters is run.
Each classifier is composed of Haar feature extractors (weak classifiers). Each Haar feature is the weighted
sum of 2-D integrals of small rectangular areas attached to each other. The weights may take values ±1.
Fig.2 shows examples of Haar features relative to the enclosing detection window. Gray areas have a
positive weight and white areas have a negative weight. Haar feature extractors are scaled with respect to the
detection window size.

Fig.2 Example rectangle features shown relative to the enclosing detection window
The classifier decision is defined as:

C_m = 1 if Σ_i c_{m,i} ≥ θ_m (and 0 otherwise), where c_{m,i} = α_{m,i} if f_{m,i} ≥ t_{m,i}, and c_{m,i} = β_{m,i} otherwise.

Here f_{m,i} is the weighted sum of the 2-D integrals, t_{m,i} is the decision threshold for the i-th feature extractor, α_{m,i} and
β_{m,i} are constant values associated with the i-th feature extractor, and θ_m is the decision threshold for the m-th
classifier.
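
A small numerical sketch of this stage decision is given below; all feature values, thresholds and weights are made-up numbers used only to show how the rule combines them.

% Illustrative decision for one classifier m (all numbers are made up)
f     = [0.8  0.2  0.5];      % f_{m,i}: weighted sums of the 2-D integrals
t     = [0.5  0.4  0.3];      % t_{m,i}: per-feature decision thresholds
alpha = [1.0  0.7  0.9];      % alpha_{m,i}: contribution when the feature fires
beta  = [-0.2 -0.1 -0.3];     % beta_{m,i}: contribution when it does not
theta = 1.2;                  % theta_m: decision threshold of the classifier
votes  = alpha .* (f >= t) + beta .* (f < t);
isFace = sum(votes) >= theta; % true -> positive answer, apply the next classifier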

The Viola-Jones object detection filter (cascade of classifiers)


B) Fisher Faces
We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial
expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a
high-dimensional space. We take advantage of the observation that the images of a particular face, under
varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space—if the
face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and
do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly
modeling this deviation, we linearly project the image into a subspace in a manner which discounts those
regions of the face with large deviation. Our projection method is based on Fisher’s Linear Discriminant and
produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and
facial expressions.

To compute the Fisherfaces, we assume the data in each class are Normally distributed. We denote the
multivariate Normal distribution as N_i(μ_i, Σ_i), with mean μ_i and covariance matrix Σ_i, and its probability
density function is f_i(x | μ_i, Σ_i).
In the C-class problem, we have N_i(μ_i, Σ_i), with i = 1, …, C. Given these Normal distributions and their class
prior probabilities P_i, the classification of a test sample x is given by comparing the log-likelihoods
of f_i(x | μ_i, Σ_i) P_i for all i. That is,

argmin_{1 ≤ i ≤ C} d_i(x),

where d_i(x) = (x − μ_i)^T Σ_i^{-1} (x − μ_i) + ln|Σ_i| − 2 ln P_i are known as the discriminant scores of each class. The
discriminant scores thus defined yield the Bayes optimal solution.
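
For concreteness, the sketch below evaluates these discriminant scores for a small synthetic example; the means, covariances and priors are assumed values, not taken from any face data.

% Illustrative discriminant scores d_i(x) for C Gaussian classes (synthetic values)
C     = 3;
mu    = {[0;0], [3;1], [1;4]};                  % assumed class means
Sigma = {eye(2), [2 0.5; 0.5 1], 0.5*eye(2)};   % assumed class covariances
P     = [1/3 1/3 1/3];                          % assumed class priors
x     = [2; 1];                                 % test sample
d     = zeros(C,1);
for i = 1:C
    v    = x - mu{i};
    d(i) = v' / Sigma{i} * v + log(det(Sigma{i})) - 2*log(P(i));
end
[~, predictedClass] = min(d);                   % argmin of the discriminant scores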
The discriminant scores generally result in quadratic classification boundaries between classes. However, in
the case where all the covariance matrices are the same, Σ_i = Σ for all i, the quadratic parts of d_i cancel out,
yielding linear classifiers. These classifiers are called linear discriminants; hence the name Linear
Discriminant Analysis. The case where all the covariances are identical is known as the homoscedastic Normal
case.
Assume that C = 2 and that the classes are homoscedastic Normals. Project the sample feature vectors onto
the one-dimensional subspace orthogonal to the classification hyperplane given by the discriminant score. It
follows that the number of misclassified samples in the original space of p dimensions and in this subspace
of just one dimension is the same. This is easily verified: since the classification boundary is linear, all
the samples that were on one side of the space remain on the same side in the one-dimensional subspace.
This important point was first noted by R. A. Fisher and has allowed us to define the LDA algorithm and
Fisherfaces.
Computing the Fisherfaces
The theoretical argument given in the preceding section shows how to obtain the Bayes optimal solution for
the 2-class homoscedastic case. In general, we will have more than 2-classes. In such a case, we reformulate
the above stated problem as that of minimizing within-class differences and maximizing between-class
distances.

Within-class differences can be estimated using the within-class scatter matrix, given by

S_w = Σ_{j=1}^{C} Σ_{i=1}^{n_j} (x_{ij} − μ_j)(x_{ij} − μ_j)^T,

where x_{ij} is the i-th sample of class j, μ_j is the mean of class j, and n_j is the number of samples in class j.

Likewise, the between-class differences are computed using the between-class scatter matrix,

S_b = Σ_{j=1}^{C} (μ_j − μ)(μ_j − μ)^T,

where μ represents the mean of all classes.
We now want to find the basis vectors V for which the within-class scatter is minimized and the between-class scatter
is maximized, where V is a matrix whose columns v_i are the basis vectors defining the subspace. These are given by
maximizing the ratio

|V^T S_b V| / |V^T S_w V|.

The solution to this problem is given by the generalized eigenvalue decomposition

S_b V = S_w V Λ,
where V is (as above) the matrix of eigenvectors and Λ is a diagonal matrix of corresponding eigenvalues.
The eigenvectors of V associated with non-zero eigenvalues are the Fisherfaces. There is a maximum
of C − 1 Fisherfaces. This can be readily seen from the definition of S_b: in our definition, S_b is a
combination of C feature vectors, and any C vectors define a subspace of at most C − 1 dimensions. The equality
holds when these vectors are linearly independent of one another.
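
A compact MATLAB sketch of this computation is given below. It assumes X is a p-by-n matrix whose columns are vectorized training images and labels is a 1-by-n vector of class indices; in practice the images are first projected onto a PCA (eigenface) subspace so that S_w is invertible, as the full program later in this report does.

% Illustrative Fisherfaces basis from X (p x n) and labels (1 x n); assumes Sw is invertible
classes = unique(labels);
C  = numel(classes);
p  = size(X, 1);
mu = mean(X, 2);                                % overall mean image
Sw = zeros(p); Sb = zeros(p);                   % within- and between-class scatter
for j = 1:C
    Xj  = X(:, labels == classes(j));
    muj = mean(Xj, 2);
    Dj  = bsxfun(@minus, Xj, muj);              % within-class deviations
    Sw  = Sw + Dj * Dj';
    Sb  = Sb + (muj - mu) * (muj - mu)';
end
[V, Lambda] = eig(Sb, Sw);                      % generalized eigenproblem Sb*V = Sw*V*Lambda
[~, order]  = sort(diag(Lambda), 'descend');
W = V(:, order(1:C-1));                         % at most C-1 Fisherfaces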

For face recognition:

The variance among faces in the database may come from distortions such as illumination, facial expression,
and pose variation, and sometimes these variations are larger than the variations between different faces.

The images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the
high-dimensional image space (in the absence of shadowing).

Idea:
Find a basis for projection that minimizes the intra-class variation but preserves the inter-class variation.
Rather than explicitly modeling the deviation from this subspace, we linearly project the image into a subspace in a manner
which discounts those regions of the face with large deviation.
Fisher Face – Image Recognition

Step-1: Face Acquisition


Step-2: Face Recognition
Description: A laser passes over the face, and the software captures the exact outline of the face and stores it in the dataset.
Step-3: Face Processing
Description: When a face arrives in front of the laser, the laser scans the face and matches it against the dataset.
Step-4: Outcome
Description: If the scanned face matches a face stored in the dataset, access is granted.
RESULTS AND DISCUSSION
Code:
EYES DETECTION:

% To detect eyes
EyeDetect = vision.CascadeObjectDetector('EyePairBig');
% Read the input image
I = imread('sai.jpg');

BB = step(EyeDetect, I);
figure, imshow(I);
rectangle('Position', BB, 'LineWidth', 4, 'LineStyle', '-', 'EdgeColor', 'b');
title('Eyes Detection');
Eyes = imcrop(I, BB);
figure, imshow(Eyes);
hold off;

FACE DETECTION
clear all
clc
%Detect faces using the Viola-Jones cascade object detector

%To detect Face


FDetect = vision.CascadeObjectDetector;

%Read the input image


I = imread('harry.jpg');

%Returns Bounding Box values based on number of objects


BB = step(FDetect,I);

figure,
imshow(I); hold on
for i = 1:size(BB,1)
rectangle('Position',BB(i,:),'LineWidth',5,'LineStyle','-','EdgeColor','r');
end
title('Face Detection');
hold off;

A= imread('2.jpg');
Facedetector = vision.CascadeObjectDetector();
BBOX=step(Facedetector,A);
B=insertObjectAnnotation(A,'rectangle', BBOX, 'Face');
imshow(B), title('Detected Faces' );
n=size(BBOX,1);
str_n=num2str(n);
str = ['Number of detected faces are ', str_n];
disp(str);
Fisher face algorithm – Including Database

clear;
clc;
close all;
chos=0;
possibility=9;
% The input image must not be too big (to avoid memory errors)
% Given the sizes of the image dimx and dimy
% we impose dimx*dimy <= prodmax
prodmax = 300;

messaggio='Insert the number of the set: each set determines a class. This set should include a number of images for each person, with some variations in expression and in the lighting. ';

while chos~=possibility,
chos=menu('Fisherfaces for Face Recognition','Select image','Add selected image to database',...
'Database Info','Face Recognition','Delete Database','Info',...
'Visualization tool','Source code for FisherFaces for Face Recognition','Exit');
%----------------
if chos==1,
clc;
[namefile,pathname]=uigetfile('*.*','Select image');
if namefile~=0
[img,map]=imread(strcat(pathname,namefile));
imshow(img);
dimensioni = size(img);
disp('Input image has been selected.');
disp('Now press on "Add selected image to database" button to add this image to database or,');
disp('press on "Face Recognition" button to start face matching.');
else
warndlg('Input image must be selected.',' Warning ')
end
end
%----------------
if chos==2,
clc;
if exist('img')
if (exist('face_database.dat')==2)
load('face_database.dat','-mat');
%----------------------------------------------------------
% if image is too big it has to be resized
[dimx dimy] = size(img);
if dimx*dimy>prodmax
fattore = sqrt(prodmax*dimx/dimy)/dimx;
img = imresize(img,fattore);
end
%----------------------------------------------------------
face_number=face_number+1;
data{face_number,1}=img(:);
prompt={strcat(messaggio,'Class number must be a positive integer <= ',num2str(max_class))};
title='Class number';
lines=1;
def={'1'};
answer=inputdlg(prompt,title,lines,def);
zparameter=double(str2num(char(answer)));
if size(zparameter,1)~=0
class_number=zparameter(1);
if (class_number<=0)||(class_number>max_class)||(floor(class_number)~=class_number)||(~isa(class_number,'double'))||(any(any(imag(class_number))))
warndlg(strcat('Class number must be a positive integer <= ',num2str(max_class)),' Warning ')
else
if class_number==max_class;
max_class=class_number+1;
end
data{face_number,2}=class_number;
save('face_database.dat','data','face_number','max_class','-append');
msgbox(strcat('Database already exists: image successfully added to class number ',num2str(class_number)),'Database result','help');
close all;
clear('img')
end
else
warndlg(strcat('Class number must be a positive integer <= ',num2str(max_class)),' Warning ')
end
else
%----------------------------------------------------------
% if image is too big it has to be resized
[dimx dimy] = size(img);
if dimx*dimy>prodmax
fattore = sqrt(prodmax*dimx/dimy)/dimx;
img = imresize(img,fattore);
end
%----------------------------------------------------------
face_number=1;
max_class=1;
data{face_number,1}=img(:);
prompt={strcat(messaggio,'Class number must be a positive integer <= ',num2str(max_class))};
title='Class number';
lines=1;
def={'1'};
answer=inputdlg(prompt,title,lines,def);
zparameter=double(str2num(char(answer)));
if size(zparameter,1)~=0
class_number=zparameter(1);
if (class_number<=0)||(class_number>max_class)||(floor(class_number)~=class_number)||(~isa(class_number,'double'))||(any(any(imag(class_number))))
warndlg(strcat('Class number must be a positive integer <= ',num2str(max_class)),' Warning ')
else
max_class=2;
data{face_number,2}=class_number;
save('face_database.dat','data','face_number','max_class','dimensioni');
msgbox(strcat('Database was empty. Database has just been created. Image successfully added to class number ',num2str(class_number)),'Database result','help');
close all;
clear('img')
end
else
warndlg(strcat('Class number must be a positive integer <= ',num2str(max_class)),' Warning ')
end

end
else
errordlg('No image has been selected.','File Error');
end
end
%----------------
if chos==3,
clc;
close all;
clear('img');
if (exist('face_database.dat')==2)
load('face_database.dat','-mat');
msgbox(strcat('Database has ',num2str(face_number),' image(s). There are',num2str(max_class-1),' class(es). Input images must have the same size.'),'Database result','help');
else
msgbox('Database is empty.','Database result','help');
end
end
%----------------
if chos==4,
clc;
close all;
if exist('img')
%----------------------------------------------------------
% if image is too big it has to be resized
[dimx dimy] = size(img);
if dimx*dimy>prodmax
fattore = sqrt(prodmax*dimx/dimy)/dimx;
img = imresize(img,fattore);
end
%----------------------------------------------------------
ingresso=double(img(:));
if (exist('face_database.dat')==2)
load('face_database.dat','-mat');
%----------------------------------------------------------
% EIGENFACES REDUCTION
%
% face_number is equal to "M" of Turk's paper
% i.e. the number of faces present in the database.
% These images are grouped into classes. Every class (or set) should include
% a number of images for each person, with some variations in expression and in the
% lighting.
matrice=zeros(size(data{1,1},1),face_number);
for ii=1:face_number
matrice(:,ii)=double(data{ii,1});
end
somma=sum(matrice,2);
media=somma/face_number;
for ii=1:face_number
matrice(:,ii)=matrice(:,ii)-media;
end
matrice=matrice/sqrt(face_number);
% up to now matrix "matrice" is matrix "A" of Turk's paper
elle=matrice'*matrice;
% matrix "elle" is matrix "L" of Turk's paper

% eigenvalues and eigenvectors of the "reduced" matrix A'*A


[V,D] = eig(elle);
% the following multiplication is performed to obtain the
% eigenvectors of the original matrix A*A' (see Turk's paper)
% See also Karhunen-Loeve algorithm, for face recognition
if det(D)~=0
% This modification to the original algorithm improves
% the recognition rate (if applicable!)
Vtrue=matrice*V*(abs(D))^-0.5;
else
Vtrue=matrice*V;
end
%Vtrue=matrice*V;
Dtrue=diag(D);

% the eigenvalues are sorted by order and only M' of them


% are taken. We impose M' equal to the number of classes
% (max_class-1)
[Dtrue,ordine]=sort(Dtrue);
Dtrue=flipud(Dtrue);
ordine=flipud(ordine);
Vtrue(:,1:face_number)=Vtrue(:,ordine);

Vtrue=Vtrue(:,1:max_class-1);
Dtrue=Dtrue(1:max_class-1);

% Eigenvectors and eigenvalues of EigenFaces method


Vtrue0 = Vtrue;
Dtrue0 = Dtrue;
% Vtrue0 will be used for "space - reduction"
%----------------------------------------------------------
% FISHERFACES ALGORITHM
L = length(media);
Sb = zeros(L,L); % Between-class scatter matrix
Sw = zeros(L,L); % Within-class scatter matrix

mean_classes = zeros(L,max_class-1);
person = zeros(max_class-1,1); % number of images for each class (=person)
for ii=1:face_number
ID = data{ii,2};
mean_classes(:,ID) = mean_classes(:,ID)+double(data{ii,1});
person(ID) = person(ID)+1;
end
for ii=1:(max_class-1)
mean_classes(:,ii) = mean_classes(:,ii)/person(ii);
end
% Sb computation
for ii=1:(max_class-1)
v = mean_classes(:,ii)-media;
Sb = Sb + v*v';
end
% Sw computation
for ii=1:face_number
ID = data{ii,2};
v = double(data{ii,1})-mean_classes(:,ID);
Sw = Sw + v*v';
end

% I now pass into eigenfaces space (reduction)


Sbr = Vtrue0'*Sb*Vtrue0;
Swr = Vtrue0'*Sw*Vtrue0;

[V,D] = eig(Sbr,Swr);

% now I return to original space


Vtrue = Vtrue0*V;
Dtrue = diag(D);

[Dtrue,ordine] = sort(Dtrue);
Dtrue = flipud(Dtrue);
ordine = flipud(ordine);
nordine = length(ordine);
Vtrue(:,1:nordine) = Vtrue(:,ordine);

Vtrue=Vtrue(:,1:max_class-1);
Dtrue=Dtrue(1:max_class-1);
% Normalization to 1
% The recognition rate improves with such normalization
lengthV = size(Vtrue,2);
for ii=1:lengthV
if norm(Vtrue(:,ii))~=0 && norm(Vtrue(:,ii))~=Inf
Vtrue(:,ii)=Vtrue(:,ii)/norm(Vtrue(:,ii));
end
end
%----------------------------------------------------------
% we calculate the eigenface components of
% the normalized input (mean-adjusted). I.e. the input
% image is projected into "face-space"
pesi=Vtrue'*(ingresso-media);

pesi_database = zeros(max_class-1,max_class-1);
pesi_database_mediati = zeros(max_class-1,max_class-1);

numero_elementi_classe=zeros(max_class-1,1);
for ii=1:face_number
ingresso_database=double(data{ii,1});
classe_database=data{ii,2};
pesi_correnti=Vtrue'*(ingresso_database-media);
pesi_database(:,classe_database)=pesi_database(:,classe_database)+pesi_correnti;
numero_elementi_classe(classe_database)=numero_elementi_classe(classe_database)+1;
end
for ii=1:(max_class-1)
pesi_database_mediati(:,ii)=pesi_database(:,ii)/numero_elementi_classe(ii);
end
% pesi_database_mediati is a matrix with the averaged eigenface components of the images
% present in database. Each class has its averaged eigenface.
% We want to find the nearest (in norm) vector to the input
% eigenface components.

distanze_pesi=zeros(max_class-1,1);
for ii=1:(max_class-1)
distanze_pesi(ii)=norm(pesi-pesi_database_mediati(:,ii));
%distanze_pesi(ii) = sum((abs(pesi-pesi_database_mediati(:,ii))));
end

[minimo_pesi,posizione_minimo_pesi]=min(distanze_pesi);

% % now we are evaluating the distance of the mean-normalized


% % input face from the "space-face" in order to determine if
% % the input image is a face or not.
% proiezione=zeros(size(data{1,1},1),1);
% for ii=1:(max_class-1)
% proiezione=proiezione+pesi(ii)*Vtrue(:,ii);
% end
% distanza_spazio_facce=norm((ingresso-media)-proiezione);

messaggio1='See Matlab Command Window to see matching result.';


messaggio2='';
messaggio3='';
messaggio4='';

msgbox(strcat(messaggio1,messaggio2,messaggio3,messaggio4),'Matching result','help');

disp('The nearest class is number ');


disp(posizione_minimo_pesi);
disp('with a distance equal to ');
disp(minimo_pesi);
% disp('The distance from Face Space is ');
% disp(distanza_spazio_facce);

else
warndlg('No image processing is possible. Database is empty.',' Warning ')
end
else
warndlg('Input image must be selected.',' Warning ')
end
end
%----------------
if chos==5,
clc;
close all;
if (exist('face_database.dat')==2)
button = questdlg('Do you really want to remove the Database?');
if strcmp(button,'Yes')
delete('face_database.dat');
msgbox('Database was successfully removed from the current directory.','Database removed','help');
end
else
warndlg('Database is empty.',' Warning ')
end
end
%----------------
if chos==6,
clc;
close all;
helpwin facerecexplanation;
end
%----------------
if chos==7,
clc;
close all;
if (exist('face_database.dat')==2)
load('face_database.dat','-mat');
disp('Insert 0 to visualize total mean face');
disp('Insert 1 to visualize class mean face');
disp('Insert 2 to visualize the projection of input image onto face-space');
scelta = input('Insert your choice: ');
if scelta == 0
clc;
matrice=zeros(size(data{1,1},1),face_number);
for ii=1:face_number
matrice(:,ii)=double(data{ii,1});
end
somma=sum(matrice,2);
media=somma/face_number;
figure('Name','Total mean face');
imshow(uint8(reshape(media,dimensioni)));
end
if scelta == 1
clc;
classescelta = input('Insert class number:');
if classescelta <= max_class-1
somma = zeros(size(data{1,1},1),1);
contatore = 0;
for ii=1:face_number
if data{ii,2}==classescelta
somma = somma + double(data{ii,1});
contatore = contatore+1;
end
end
somma = somma/contatore;
figure('Name','Class mean face');
imshow(uint8(reshape(somma,dimensioni)));
else
warndlg('Class number is incorrect',' Warning ');
end
end
if scelta == 2
clc;
[namefile,pathname]=uigetfile('*.*','Select image');
if namefile~=0
[img,map]=imread(strcat(pathname,namefile));
imshow(img);
dimensioni = size(img);
else
warndlg('Input image must be selected.',' Warning ')
end
ingresso=double(img(:));
% face_number is equal to "M" of Turk's paper
% i.e. the number of faces present in the database.
% These images are grouped into classes. Every class (or set) should include
% a number of images for each person, with some variations in expression and in the
% lighting.
matrice=zeros(size(data{1,1},1),face_number);
for ii=1:face_number
matrice(:,ii)=double(data{ii,1});
end
somma=sum(matrice,2);
media=somma/face_number;
for ii=1:face_number
matrice(:,ii)=matrice(:,ii)-media;
end
matrice=matrice/sqrt(face_number);
% up to now matrix "matrice" is matrix "A" of Turk's paper
elle=matrice'*matrice;
% matrix "elle" is matrix "L" of Turk's paper

% eigenvalues and eigenvectors of the "reduced" matrix A'*A


[V,D] = eig(elle);
% the following multiplication is performed to obtain the
% eigenvectors of the original matrix A*A' (see Turk's paper)
% See also Karhunen-Loeve algorithm, for face recognition
Vtrue=matrice*V*(abs(D))^-0.5;
%Vtrue=matrice*V;
Dtrue=diag(D);

% the eigenvalues are sorted by order and only M' of them


% are taken. We impose M' equal to the number of classes
% (max_class-1)
[Dtrue,ordine]=sort(Dtrue);
Dtrue=flipud(Dtrue);
ordine=flipud(ordine);
Vtrue(:,1:face_number)=Vtrue(:,ordine);

Vtrue=Vtrue(:,1:max_class-1);
Dtrue=Dtrue(1:max_class-1);

% we calculate the eigenface components of


% the normalized input (mean-adjusted). I.e. the input
% image is projected into "face-space"
pesi=Vtrue'*(ingresso-media);

figure('Name','Projection of input image onto face-space');


imshow(uint8(reshape(Vtrue*pesi+media,dimensioni)))
end
else
warndlg('Database is empty.',' Warning ');
end
end
%----------------
if chos==8,
clc;
close all;
helpwin sourcecode;
end
end
Conclusion
The extensive experimental results reported in the literature demonstrate that the proposed "Fisherface" method has error rates that are
lower than those of the Eigenface technique in tests on the Harvard and Yale Face Databases.

Face detection is important in many fields for increasing security, and the laser-based scanning method suggested above
can serve as an additional layer of protection. We have presented the methods, methodologies and algorithms used,
compared existing features with our own ideas, and carried out this project to deepen our understanding of face recognition.

References
https://homepages.cae.wisc.edu/~ece533/project/f06/orts_rpt.pdf

https://www.researchgate.net/profile/Cahit_Guerel/publication/262875649_Design_of_a_Face_Recognition_System/links/00b7d5390cd5560a1a000000/Design-of-a-Face-Recognition-System.pdf
https://www.researchgate.net/publication/262875649_Design_of_a_Face_Recognition_System
http://www.scholarpedia.org/article/Fisherfaces
https://www.ics.uci.edu/~welling/teaching/273ASpring09/Fisher-LDA.pdf
https://www.pantechsolutions.net/image-processing-projects/matlab-code-for-face-recognition-using-fisher-faces
http://biomisa.org/uploads/2017/04/Viola-Jones-face-detection.pdf

http://www.face-rec.org/algorithms/LDA/belhumeur96eigenfaces.pdf

http://idosi.org/mejsr/mejsr23(9)15/19.pdf

http://www.ijcsi.org/papers/IJCSI-9-6-1-169-172.pdf

http://disp.ee.ntu.edu.tw/~pujols/Eigenfaces%20and%20Fisherfaces.pdf
