
Index

1. ABSTRACT
2. INTRODUCTION
3. FEASIBILITY STUDY
4. SYSTEM PROFILE
5. System Analysis
6. TOOLS AND TECHNOLOGY
7. Architecture Diagram
8. Data Flow Diagram and Use Case Diagram
9. DEVELOPMENT PROCESS AND DOCUMENTATION
10. SCREENSHOTS
11. CODING
12. CONCLUSION AND FUTURE WORK
13. Future Enhancements
14. SCREENSHOTS
15. REFERENCES

1 ABSTRACT

Physically challenged people find it difficult to use a computer because information is presented to them in an inaccessible form. Though many forms of computer access are available for disabled people, these systems are expensive and require sophisticated hardware support. In this context, this system focuses on helping quadriplegic and non-verbal users. The challenge is to develop a human computer interface for such users which is inexpensive and easy to implement. Human Computer Interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of the major phenomena surrounding them. We propose an interface for people with severe disabilities based on face tracking. Body features such as the eyes and the lips may also be used for implementing a human computer interface, but with some limitations. In eye tracking, the motion of the pupil is hard to track with a web camera, which would be the primary mode of input in the proposed system. For a physically challenged user, moving the face itself demands greater effort, and hence finer movements such as those of the eyeball and lips cannot be considered. The system depends on a web camera for input and hence is affordable for the target users. User friendliness is enhanced as the system is devoid of any sophisticated hardware requirement.

CHAPTER 1

2 INTRODUCTION
2.1 A Brief Description
Virtual learning is increasing day by day, and Human Computer Interaction is a necessity to make virtual learning a better experience. The emotions of a person play a major role in the learning process. Hence the proposed work detects the emotions of a person from his or her facial expressions.

For a facial expression to be detected, the face location and area must be known; therefore, in most cases, emotion detection algorithms start with face detection, taking into account the fact that facial emotions are mostly depicted using the mouth. Eventually, algorithms for eye and mouth detection and tracking are necessary in order to provide the features for subsequent emotion recognition. In this project we propose a detection system for natural emotion recognition.
2.2 Need for Face Detection

Human activity is a major concern in a wide variety of applications such as

video surveillance, human computer interface, face recognition and face database
management. Most face recognition algorithms assume that face location is known.
Similarly, face-tracking algorithms often assume that initial face location is known. In
order to improve the efficiency of the face recognition systems, an efficient face
detection algorithm is needed.

2.3 Need for Emotion Detection


Human beings communicate through facial emotions in day-to-day interactions with others. Perceiving the emotions of a fellow human is natural and inherently accurate. Humans can express their inner state of mind through emotions, and many times an emotion indicates that a person needs help. Enabling a computer to recognise emotions is an important research area in Human Computer Interaction (HCI). Such an interface can be a welcome aid for the physically disabled, for those who are unable to express their requirements by voice or other means, and especially for those who are confined to bed.

Human emotion can be detected through facial actions or through biosensors. Facial actions are imaged through still or video cameras. From still images, taken at discrete times, the changes in the eye and mouth areas can be observed. Measuring and analysing such changes leads to the determination of human emotions.

3 FEASIBILITY STUDY
The feasibility study examined several aspects of the project: technical feasibility, i.e. whether the existing equipment and software were sufficient for completing the project, and economic feasibility, i.e. whether carrying out the project is economically beneficial. The project appears beneficial because the company need not spend any significant amount on it; trainees work at a lower cost, and only machine time is a burden. The outcome of the first phase was that the request and the various studies were approved, and it was decided that the project taken up would serve the end user. On development and implementation, this software saves a considerable amount of money as well as valuable company time.

The key considerations involved in the feasibility analysis are:

Economic feasibility
Technical feasibility
Social feasibility

3.1 Economic feasibility

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into research and development of the system is limited, so the expenditure must be justified.

3.2 Technical feasibility


This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands on the client. The developed system must therefore have modest requirements, so that only minimal or readily available resources are needed to implement it.

3.3 Social feasibility


This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system; his or her level of confidence must be raised so that he or she is able to offer constructive criticism, which is welcomed [9].

4 SYSTEM PROFILE
4.1 Overview of the Algorithm
Our project proposes an emotion detection system wherein the facial emotions happy, sad and surprised are detected. First the face is detected from an image using the skin colour model. This is followed by the extraction of features such as the eyes and mouth, which are used for further processing to detect the emotion. For detecting the emotion we take into account the fact that emotions are basically represented using mouth expressions; this is done using the shape and colour properties of the lips.

4.2 Video Fragmentation

The input video of an e-learning student is acquired using an image acquisition device and stored in a database. This video is extracted and fragmented into several frames to detect the emotions of the e-learning student and thereby improve the virtual learning environment. Through the video acquisition feature, which is used to record and register the ongoing emotional changes in the e-learning student, the resulting emotions are detected by mapping the changes in the eye and lip regions. The videos are recorded into a database before processing, making it possible to analyse the changes of emotion for a particular subject or during a particular time of the day.

Frame rate and motion blur are important aspects of video quality. The following discussion illustrates the visual differences between various frame rates and motion blur.

4.3 Frame Rate and Motion Blur

Motion blur is a natural effect when you film the world in discrete time intervals. When a film is recorded at 25 frames per second, each frame has an exposure time of up to 40 milliseconds (1/25 of a second). All the changes in the scene over that entire 40 milliseconds will blend into the final frame. Without motion blur, animation will appear to jump and will not look fluid.

When the frame rate of a movie is too low, your mind will no longer be convinced that the contents of the movie are continuous, and the movie will appear to jump (also called strobing).

The human eye and its brain interface, the human visual system, can process 10 to 12 separate images per second, perceiving them individually, but the threshold of perception is more complex, with different stimuli having different thresholds: the average shortest noticeable dark period, such as the flicker of a cathode ray tube monitor or fluorescent lamp, is 16 milliseconds, while a single-millisecond visual stimulus may have a perceived duration between 100 ms and 400 ms due to persistence of vision in the visual cortex. This may cause images perceived in this duration to appear as one stimulus, such as a 10 ms green flash of light immediately followed by a 10 ms red flash of light being perceived as a single yellow flash of light.

4.4 Face Detection


The first step for face detection is to make a skin colour model. After
the skin colour model is produced, the test image is skin segmented (binary image)
and the face is detected. The result of Face Detection is processed by a decision
function based on the chroma components (CrCb from YCbCr and Hue from HSV).
Before the result is passed to the next module, it is cropped according to the skin
mask. Small background areas which could lead to errors during the next stages will
be deleted.
A model image of face detection with the bounding box is
illustrated below in Figure 4.1.

Figure 4.1 Face Detection
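As a concrete illustration of this skin-segmentation step, the sketch below classifies each pixel with simple bounds on the CrCb chroma components and the hue, and builds a binary skin mask. It is a minimal sketch only: the numeric bounds are assumptions that would normally be derived from the collected skin-colour samples, not the exact decision function used in the project.

using System.Drawing;

static class SkinSegmenter
{
    // Classify a pixel as skin using its CrCb chroma (from YCbCr) and hue (from HSV).
    // The numeric bounds below are illustrative assumptions, not the trained model.
    public static bool IsSkin(Color c)
    {
        double r = c.R, g = c.G, b = c.B;

        // RGB -> YCbCr chroma (ITU-R BT.601 approximation).
        double cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b;
        double cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b;

        double hue = c.GetHue(); // 0..360 degrees, provided by System.Drawing.

        bool chromaOk = cr > 135 && cr < 180 && cb > 85 && cb < 135;
        bool hueOk = hue < 50 || hue > 340;   // reddish/orange hues typical of skin
        return chromaOk && hueOk;
    }

    // Produce a binary skin mask: white for skin pixels, black otherwise.
    public static Bitmap SkinMask(Bitmap input)
    {
        var mask = new Bitmap(input.Width, input.Height);
        for (int y = 0; y < input.Height; y++)
            for (int x = 0; x < input.Width; x++)
                mask.SetPixel(x, y, IsSkin(input.GetPixel(x, y)) ? Color.White : Color.Black);
        return mask;
    }
}

The face bounding box can then be taken as the largest connected region of the mask, which corresponds to the cropping and background-removal step described above.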
FEATURE EXTRACTION

After the face has been detected, the next step is feature extraction, where the eyes and mouth are extracted from the detected face. For eye extraction, two eye maps are created, a chrominance eye map and a luminance eye map. The two maps are then combined to locate the eyes in a face image, as shown in Figure 4.2.

Figure 4.2 Feature Detection


To locate the mouth region, we use the fact that it contains stronger red components and weaker blue components than other facial regions (Cr > Cb), and thus the mouth map is constructed. Based on this map the mouth region is extracted. Finally, the eyes and mouth extracted from the face image according to the maps are passed on to the next module of our algorithm.
EMOTION DETECTION

The last module is emotion detection. This module makes use of the fact that emotions are expressed mainly with the help of eye and mouth expressions, as shown in Figure 4.3. Emotion detection from lip images is based on the colour and shape properties of human lips. Having a binary lip image, shape detection can be performed. Thus, depending on the shape of the lips and other morphological properties, the emotions are detected. A computer is being taught to interpret human emotions based on lip patterns, according to research published in the International Journal of Artificial Intelligence and Soft Computing. The system could improve the way we interact with computers and perhaps allow disabled people to use computer-based communication devices, such as voice synthesizers, more effectively and more efficiently.

Figure 4.3 Emotion Detection


4.5 Architectural Design
The architectural diagram shows the overall working of the system, where a captured colour image sample is taken as the input; it is processed using image processing tools and analysed to locate facial features such as the eyes and mouth, which are further processed to recognize the emotion of the person. After the localization of the facial features, the next step is to localize the characteristic points on the face. This is followed by the feature extraction process, where features such as the eyes and mouth are extracted.
Based on the variations of the eyes and mouth, the emotion of a person is detected and recognized. For a person who is happy, the eyes will be open and the closed lips will curve upwards, whereas for a person who is sad, the eyes will be open and the closed lips will curve downwards. Similarly, for a person who is surprised, the eyes will be wide open, there will be a considerable displacement of the eyebrows from the eyes, and the mouth will be wide open. Based on the above measures, the mood exhibited by a person is detected and recognized.

Figure 4.4 shows the overall working of the system, where the input is the image and the output is the recognized emotion: happy, sad or surprised.

Figure 4.4 Architectural Diagram

5 System analysis
5.1 Existing Face Detection Approaches
Feature Invariant Methods

These methods aim to find structural features that exist even when the pose,
viewpoint, or lighting conditions vary, and then use these to locate faces. These
methods are designed mainly for face localization.
TEXTURE
Human faces have a distinct texture that can be used to separate them
from different objects. The textures are computed using second-order statistical
features on sub images of 16X16 pixels. Three types of features are considered: skin,
hair, and others. To infer the presence of a face from the texture labels, the votes of
occurrence of hair and skin textures are used. Here the colour information is also
incorporated with face-texture model. Using the face texture model, a scanning
scheme for face detection in colour scenes in which the orange like parts including the
face areas are enhanced. One advantage of this approach is that it can detect faces
which are not upright or have features such as beards and glasses.
SKIN COLOUR

Human skin colour has been used and proven to be an effective feature in many applications, from face detection to hand tracking. Although different people have different skin colours, several studies have shown that the major difference lies largely in intensity rather than chrominance. Several colour spaces have been utilized to label pixels as skin, including RGB, normalized RGB, HSV, YCbCr, YIQ, YES, CIE XYZ and CIE LUV.
TEMPLATE MATCHING METHODS
In template matching, a standard face pattern is manually predefined or
parameterized by a function. Given an input image, the correlation values with the
standard patterns are computed for the four contours, eyes, nose, and mouth
independently. The existence of a face is determined based on the correlation values.
This approach has the advantage of being simple to implement. However, it has
proven to be inadequate for face detection since it cannot effectively deal with
variation in scale, pose, and shape. Multiresolution, multiscale, sub templates, and
deformable templates have subsequently been proposed to achieve scale and shape
invariance.
PREDEFINED FACE TEMPLATE
In this approach several sub templates for nose, eyes, mouth and face contour

are used to model a face. Each sub template is defined in terms of line segments.
Lines in the input image are extracted based on greatest gradient change and then
matched against the sub templates. The correlations between sub images and contour
templates are computed first to detect candidate location of faces. Then, matching
with the other sub templates is performed at the candidate positions. In other words,
the first phase determines focus of attention or region of interest and second phase
examines the details to determine the existence of a face.
APPEARANCE BASED METHODS
In the appearance based methods the templates are learned from examples in
images. In general, appearance based methods rely on techniques from statistical
analysis and machine learning to find the relevant characteristics of face and non face
images. The learned characteristics are in the form of distribution models that are
consequently used for face detection.

5.2 Existing Emotion Detection Approaches

GENETIC ALGORITHM

The eye feature plays a vital role in classifying the facial emotion using a Genetic Algorithm. The acquired images must go through a few pre-processing steps such as grayscale conversion, histogram equalization and filtering. A Genetic Algorithm methodology estimates the emotions from the eye feature alone. Observation of various emotions leads to a unique characteristic of the eye: it exhibits ellipses of different parameters in each emotion. The Genetic Algorithm is adopted to optimize the ellipse characteristics of the eye features. The processing time of the Genetic Algorithm varies for each emotion.
NEURAL NETWORK

Neural networks have found profound success in the area of pattern recognition. By repeatedly showing a neural network inputs that have been classified into groups, the network can be trained to discern the criteria used to classify them, and it can do so in a generalized manner, allowing successful classification of new inputs not used during training. With the explosion of research on emotions in recent years, the application of pattern recognition technology to emotion detection has become increasingly interesting. Since emotion has become an important interface for the communication between human and machine, it plays a basic role in rational decision-making, learning, perception and various cognitive tasks.


Human emotion can be detected from physiological measurements or from facial expressions. Since humans engage the same facial muscles when expressing a particular emotion, the emotion can be quantified. Primary emotions such as anger, disgust, fear, happiness, sadness and surprise can be classified using a Neural Network.

5.3 Feature Point Extraction


TEMPLATE MATCHING
An interesting approach to the problem of automatic facial feature extraction is a technique based on the use of template prototypes, which are portrayed in the 2-D space in grayscale format. This is a technique that is, to some extent, easy to use, but also effective. It uses correlation as a basic tool for comparing the template with the part of the image that we wish to recognize. An interesting question that arises is the behaviour of recognition with template matching at different resolutions. This involves multi-resolution representations through the use of Gaussian pyramids. The experiments proved that very high resolutions are not needed for template matching recognition; for example, the use of templates of 36x36 pixels proved sufficient. This fact shows us that template matching is not as computationally complex as we originally imagined.

This class implements the face detection algorithm, which starts by scanning the given image with the SSR filter and locating the face candidates. It then assembles candidates that are close to each other using connected components (so that fewer candidates have to be treated, which means less processing time; remember this is a real-time application). We then take the centre of each cluster and extract a template based on this centre; the template is passed to a Support Vector Machine, which tells us whether it is a face or not. If it is, we locate the eyes and then the nose.
FACE DETECTION TECHNIQUES ARE OF TWO CATEGORIES:

1. Feature-based approach
2. Image-based approach

Template matching provides the basis for the human face detection system.

1. FEATURE-BASED TECHNIQUE: The techniques in the first category make use of apparent properties of the face such as face geometry, skin colour and motion. Although a feature-based technique can achieve high speed in face detection, it suffers from poor reliability under varying lighting conditions.

2. IMAGE-BASED TECHNIQUE: The image-based approach takes advantage of recent advances in pattern recognition theory. Most image-based approaches apply a window-scanning technique for detecting faces, which requires large amounts of computation.

To achieve a high-speed and reliable face detection system, we propose a method which combines both the feature-based and image-based approaches using the SSR filter.

TEMPLATE MATCHING
Template matching is a technique in digital image processing for finding small
parts of an image which match a template image or as a way to detect edges in
images.
The basic method of template matching uses a convolution mask (template),
tailored to a specific feature of the search image, which we want to detect.
This technique can easily be performed on grey images or edge images. The convolution output will be highest at places where the image structure matches the mask structure, that is, where large image values get multiplied by large mask values.
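To make the correlation step concrete, the sketch below slides a grey-scale template over a grey-scale image and keeps the position with the highest normalised correlation score. It is an assumed, simplified formulation for illustration rather than the exact measure used in any of the cited approaches.

using System;

static class TemplateMatcher
{
    // Find the top-left corner of the window that best matches the template.
    // Both images are grey-scale arrays with values in the range 0..255.
    public static (int X, int Y) Match(double[,] image, double[,] template)
    {
        int ih = image.GetLength(0), iw = image.GetLength(1);
        int th = template.GetLength(0), tw = template.GetLength(1);
        double best = double.NegativeInfinity;
        var bestPos = (X: 0, Y: 0);

        for (int y = 0; y <= ih - th; y++)
            for (int x = 0; x <= iw - tw; x++)
            {
                // Normalised cross-correlation between the window and the template.
                double sumIT = 0, sumII = 0, sumTT = 0;
                for (int j = 0; j < th; j++)
                    for (int i = 0; i < tw; i++)
                    {
                        double iv = image[y + j, x + i];
                        double tv = template[j, i];
                        sumIT += iv * tv;
                        sumII += iv * iv;
                        sumTT += tv * tv;
                    }
                double score = sumIT / (Math.Sqrt(sumII * sumTT) + 1e-9);
                if (score > best) { best = score; bestPos = (x, y); }
            }
        return bestPos;
    }
}

A 36x36 template, as mentioned above, keeps the inner loops small enough for this brute-force search to remain practical.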

5.4 Eyes and Nose Detection Using the SSR Filter

A real-time face detection algorithm uses a Six-Segmented Rectangular (SSR) filter for eye and nose detection.


The SSR filter is a six-segment rectangle, as illustrated in Figure 1.1.

Figure 1.1 SSR Filter


At the beginning, a rectangle is scanned throughout the input image. This
rectangle is segmented into six segments as shown below.
The SSR filter is used to detect the Between-the-Eyes based on two
characteristics of face geometry.
BTE - BETWEEN THE EYES
The detection of BTE is based on the property of the image characteristics of
the area on face. The intensity of the BTE image closely resembles a hyperbolic
surface as shown in Figure 1.2. The BTE is the saddle point on the hyperbolic surface.
A rotationally invariant filter could thus be devised for detecting the BTE area.

FIGURE 1.2 DETERMINATION OF BTE

The nose area is usually calculated to be two-thirds of the value of L, as shown in Figure 1.3. L is calculated as the approximate distance between the two eyes and as the distance from eye to nose.

Figure1.3 Nose Tip Search Area Relative to Eyes

The common BTE area on human face resembles a hyperbolic surface. The
proposed work uses this hyperbolic model to describe the BTE region, the centre of
the BTE is thus the saddle point on the surface.
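The SSR filter can be evaluated cheaply at every position by using an integral image, so that the sum over any of the six segments is obtained in constant time. The sketch below illustrates this; the specific brightness comparisons used to flag a between-the-eyes candidate are an assumption made for illustration and may differ from the original formulation.

static class SsrFilter
{
    // Build an integral image so that any rectangular sum costs O(1).
    public static long[,] Integral(int[,] gray)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var ii = new long[h + 1, w + 1];
        for (int y = 1; y <= h; y++)
            for (int x = 1; x <= w; x++)
                ii[y, x] = gray[y - 1, x - 1] + ii[y - 1, x] + ii[y, x - 1] - ii[y - 1, x - 1];
        return ii;
    }

    // Sum of the rectangle with top-left corner (x, y), width w and height h.
    static long Sum(long[,] ii, int x, int y, int w, int h)
    {
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x];
    }

    // Evaluate the 2x3 SSR rectangle whose top-left corner is (x, y).
    // Segments: S1 S2 S3 on the top row, S4 S5 S6 on the bottom row.
    public static bool IsBteCandidate(long[,] ii, int x, int y, int segW, int segH)
    {
        long s1 = Sum(ii, x, y, segW, segH);
        long s2 = Sum(ii, x + segW, y, segW, segH);
        long s3 = Sum(ii, x + 2 * segW, y, segW, segH);
        long s4 = Sum(ii, x, y + segH, segW, segH);
        long s6 = Sum(ii, x + 2 * segW, y + segH, segW, segH);

        // Assumed conditions: the bright nose-bridge/forehead centre (S2) is
        // brighter than the darker eye/eyebrow segments that surround it.
        return s2 > s1 && s2 > s3 && s2 > s4 && s2 > s6;
    }
}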

BLOBS
Blobs provide a complementary description of image structures in terms of
regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors
often contain a preferred point (a local maximum of an operator response or a centre
of gravity) which means that many blob detectors may also be regarded as interest
point operators. Blob detectors can detect areas in an image which are too smooth to
be detected by a corner detector.

5.5 Gabor Filtering


It is possible for Gabor filtering to be used in a facial recognition system. The
neighbouring region of a pixel may be described by the response of a group of Gabor
filters in different frequencies and directions, which have a reference to the specific
pixel. In that way, a feature vector may be formed, containing the responses of those
filters.
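A minimal sketch of such a Gabor feature vector is shown below: the real part of a small bank of Gabor kernels at several orientations and wavelengths is applied around one pixel, and the responses are collected into a vector. The parameter values (sigma, gamma, kernel half-size) are assumptions chosen only for illustration.

using System;
using System.Collections.Generic;

static class GaborFeatures
{
    // Real part of a Gabor kernel of size (2*half+1) x (2*half+1).
    public static double[,] Kernel(int half, double theta, double lambda,
                                   double sigma = 2.0, double gamma = 0.5)
    {
        int size = 2 * half + 1;
        var k = new double[size, size];
        for (int y = -half; y <= half; y++)
            for (int x = -half; x <= half; x++)
            {
                double xr = x * Math.Cos(theta) + y * Math.Sin(theta);
                double yr = -x * Math.Sin(theta) + y * Math.Cos(theta);
                double envelope = Math.Exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma));
                k[y + half, x + half] = envelope * Math.Cos(2 * Math.PI * xr / lambda);
            }
        return k;
    }

    // Feature vector for the pixel (cx, cy): one filter response per (theta, lambda) pair.
    public static double[] FeatureVector(int[,] gray, int cx, int cy,
                                         double[] thetas, double[] lambdas, int half = 7)
    {
        var features = new List<double>();
        foreach (double lambda in lambdas)
            foreach (double theta in thetas)
            {
                double[,] k = Kernel(half, theta, lambda);
                double response = 0;
                for (int dy = -half; dy <= half; dy++)
                    for (int dx = -half; dx <= half; dx++)
                    {
                        int y = cy + dy, x = cx + dx;
                        if (y < 0 || x < 0 || y >= gray.GetLength(0) || x >= gray.GetLength(1)) continue;
                        response += gray[y, x] * k[dy + half, dx + half];
                    }
                features.Add(response);
            }
        return features.ToArray();
    }
}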

5.6 Automated Facial Feature Extraction


In this approach, as far as the frontal images are concerned, the fundamental
concept upon which the automated localization of the predetermined points is based

consists of two steps: the hierarchic and reliable selection of specific blocks of the
image and subsequently the use of a standardized procedure for the detection of the
required benchmark points. In order for the former of the two processes to be
successful, the need of a secure method of approach has emerged. The detection of a
block describing a facial feature relies on a previously, effectively detected feature.
By adopting this reasoning, the choice of the most significant characteristic, the foundation of the cascade routine, has to be made. The importance of each of the commonly used facial features with regard to face recognition has already been studied by other researchers. The outcome of those surveys proved the eyes to be the most dependable and most easily located of all facial features, and as such they were used. The techniques that were developed and tried separately utilize a combination of template matching and Gabor filtering.

6 TOOLS AND TECHNOLOGY


6.1 Hardware Requirements
TABLE 5.1 HARDWARE REQUIREMENTS

S.No   Feature        Configuration
1      CPU            Intel Core 2 Duo processor
2      Main memory    1 GB RAM
3      Hard disk      60 GB disk size

The configuration in Table 5.1 gives the minimum hardware requirements for the proposed system.

6.2 Software Requirements


Table 5.2 Software Requirements

Sr No   Software         Version
1       Windows          7
2       Picasa           3
3       Visual Studio    2010

The proposed system is executed using Windows 7, MATLAB R2012a, Visual Studio 2010 and Picasa 3, as shown in Table 5.2.

7 Architecture Diagram

Figure 5.5: Architecture diagram. The main page of the application offers a File menu (Input Image, Restore Image, Save, Exit), pre-processing (skin colour and binary image), connected-component segmentation of the face, a segmentation process whose result covers the mouth, eyes and eyebrows, image options (Contrast, Sharpen Image, Brightness, Preview), a Next button and a Help entry.

8 Data Flow Diagram and Use case Diagram


Figure 5.6: DFD of the login module. The main application launches a login page, where the username and password are checked; a failed login returns the user to the login page, while a successful login opens the main (home) page. From the home page the user selects an image (.jpeg, .png or .tiff), which is converted to a binary image and returned.

Figure 5.7: DFD of segmentation and emotion result.

Figure 5.8: Use case diagram. The user interacts with the system through registration, login, image selection, pre-processing and segmentation.

5.5 Class Diagram and ER Diagram

Figure 5.9: Class diagram. The main classes are User (details; Login(), Registration()), Registration (details; registration()), UserLogin (UserName, Password; Login()), Select Image (open-file dialog; Upload Image()), Pre Process (result; color_segmentation), Segmentation (Image; eyelocal(), range(), black white()) and Emotion Result (database image, connection; emotion(), Calculate Distance(), compare Image(), bezier_position, displayResult()).

Figure 5.10: ER diagram. The Position entity stores the feature-point coordinates for a record: lip1_x/lip1_y through lip6_x/lip6_y, left_eye1_x/left_eye1_y through left_eye6_x/left_eye6_y, right_eye1_x/right_eye1_y through right_eye6_x/right_eye6_y, and the measurements lip_h1 to lip_h4, left_eye_h1 to left_eye_h4 and right_eye_h1 to right_eye_h4. The TB_SourceUser entity stores RecId, UserName and Pwd, and the Person entity stores Name together with the Smile, Normal, Surprise and Sad reference values.

9 DEVELOPMENT PROCESS AND DOCUMENTATION


9.1 Face Detection
Face detection is used in biometrics, often as a part of or together with
a facial recognition system. It is also used in video surveillance, human computer
interface and image database management. Some recent digital cameras use face
detection for autofocus. Face detection is also useful for selecting regions of interest
in photo slideshows that use a pan-and-scale Ken Burns effect.
Face detection can be regarded as a specific case of object-class detection. In
object-class detection, the task is to find the locations and sizes of all objects in an
image that belong to a given class. Examples include upper torsos, pedestrians, and
cars.
Face detection can be regarded as a more general case of face localization. In
face localization, the task is to find the locations and sizes of a known number of
faces. In face detection, one does not have this additional information.

SAMPLE COLLECTION

Sample skin-coloured pixels are collected from images of people belonging to different races. Each pixel is carefully chosen from the images so that regions which do not belong to skin are not included.
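One common way to turn such sample pixels into a reusable skin-colour model is to fit a simple Gaussian over the Cb and Cr components; a pixel is later accepted as skin when it lies within a few standard deviations of the model mean. The sketch below illustrates this idea with a diagonal covariance; it is an assumption for illustration and not necessarily the exact model used in this project.

using System.Collections.Generic;
using System.Drawing;

class SkinColourModel
{
    double meanCb, meanCr, varCb, varCr;

    static void ToCbCr(Color c, out double cb, out double cr)
    {
        cb = 128 - 0.168736 * c.R - 0.331264 * c.G + 0.5 * c.B;
        cr = 128 + 0.5 * c.R - 0.418688 * c.G - 0.081312 * c.B;
    }

    // Estimate the mean and variance of Cb and Cr from the collected samples.
    public void Fit(IEnumerable<Color> skinSamples)
    {
        var cbs = new List<double>();
        var crs = new List<double>();
        foreach (var c in skinSamples)
        {
            ToCbCr(c, out double cb, out double cr);
            cbs.Add(cb);
            crs.Add(cr);
        }
        meanCb = Mean(cbs); meanCr = Mean(crs);
        varCb = Variance(cbs, meanCb); varCr = Variance(crs, meanCr);
    }

    // A pixel is skin if its chroma lies within maxStdDev standard deviations of the mean.
    public bool IsSkin(Color c, double maxStdDev = 3.0)
    {
        ToCbCr(c, out double cb, out double cr);
        double dCb = (cb - meanCb) * (cb - meanCb) / (varCb + 1e-9);
        double dCr = (cr - meanCr) * (cr - meanCr) / (varCr + 1e-9);
        return System.Math.Sqrt(dCb + dCr) < maxStdDev;
    }

    static double Mean(List<double> v)
    {
        double s = 0; foreach (var x in v) s += x; return s / v.Count;
    }

    static double Variance(List<double> v, double m)
    {
        double s = 0; foreach (var x in v) s += (x - m) * (x - m); return s / v.Count;
    }
}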
9.2 Feature Extraction
Feature extraction is the process of detecting the required features from
the face and extracting it by cropping or other such technique.
EYE DETECTION
Two separate eye maps are built, one from the chrominance component
and the other from the luminance component. These two maps are then combined into
a single eye map. The eye map from the chrominance is based on the fact that high Cb and low Cr values can be found around the eyes. The following formula is used to construct the map:

EyeMapC = (1/3) * (Cb^2 + (255 - Cr)^2 + Cb/Cr)


Eyes usually contain both dark and bright pixels in the luminance component, so grayscale operators can be designed to emphasize brighter and darker pixels in the luminance component around the eye regions. Such operators are dilation and erosion. We use grayscale dilation and erosion with a spherical structuring element to construct the luminance eye map.

The eye map from the chrominance is then combined with the eye map from the luminance by an AND (pixel-wise multiplication) operation: EyeMap = EyeMapChr AND EyeMapLum. The resulting eye map is then dilated and normalized to brighten the eyes and suppress other facial areas. Then, with an appropriate choice of threshold, we can track the location of the eye region.
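The following sketch puts the eye-map construction above into code: the chrominance map follows the stated formula, the luminance map (built elsewhere by grayscale dilation and erosion) is passed in as an argument, and the two are combined by pixel-wise multiplication. The final normalisation to the 0..255 range is an assumption made for display and thresholding.

static class EyeMapBuilder
{
    // cb, cr and eyeMapLum are per-pixel planes of the face region (values 0..255).
    public static double[,] Build(double[,] cb, double[,] cr, double[,] eyeMapLum)
    {
        int h = cb.GetLength(0), w = cb.GetLength(1);
        var map = new double[h, w];
        double max = 1e-9;

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                double c = cb[y, x], r = cr[y, x];
                // EyeMapC = (1/3) * (Cb^2 + (255 - Cr)^2 + Cb/Cr)
                double chroma = (c * c + (255 - r) * (255 - r) + c / (r + 1e-9)) / 3.0;
                map[y, x] = chroma * eyeMapLum[y, x];   // "AND" realised as multiplication
                if (map[y, x] > max) max = map[y, x];
            }

        // Normalise so the strongest response is 255; thresholding then locates the eyes.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                map[y, x] = map[y, x] / max * 255.0;
        return map;
    }
}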
MOUTH DETECTION

To locate the mouth region, we use the fact that it contains stronger red components and weaker blue components than other facial regions (Cr > Cb), so the mouth map is constructed as follows:

eta = 0.95 * ((1/k) * sum(Cr(x,y)^2)) / ((1/k) * sum(Cr(x,y)/Cb(x,y)))

MouthMap = Cr^2 * (Cr^2 - eta * Cr/Cb)^2

where k is the number of pixels in the face region.
The mouth detection diagram is shown in Figure 6.2

Figure 6.2: Mouth detection for happy and surprised expressions.
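A direct translation of the mouth-map formula into code is sketched below; cr and cb are the chroma planes of the detected face region with values in the 0..255 range.

static class MouthMapBuilder
{
    public static double[,] Build(double[,] cr, double[,] cb)
    {
        int h = cr.GetLength(0), w = cr.GetLength(1);
        int k = h * w;   // number of pixels in the face region
        double sumCr2 = 0, sumCrOverCb = 0;

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                sumCr2 += cr[y, x] * cr[y, x];
                sumCrOverCb += cr[y, x] / (cb[y, x] + 1e-9);
            }

        // eta = 0.95 * mean(Cr^2) / mean(Cr/Cb)
        double eta = 0.95 * (sumCr2 / k) / (sumCrOverCb / k);

        var map = new double[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                double c2 = cr[y, x] * cr[y, x];
                double d = c2 - eta * cr[y, x] / (cb[y, x] + 1e-9);
                map[y, x] = c2 * d * d;   // MouthMap = Cr^2 * (Cr^2 - eta*Cr/Cb)^2
            }
        return map;
    }
}

Thresholding the strongest responses of this map yields the mouth region shown in Figure 6.2.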

9.3 Emotion Detection


Emotion detection from lip images is based on colour and shape properties of
human lips. For this task we considered already having a rectangular colour image
containing lips and surrounding skin (with as small amount of skin as possible).
Given this we can start extracting a binary image of lips, which would give us the
necessary information about the shape.
To extract a binary image of lips, a double threshold approach was used.
First, a binary image (mask) containing objects similar to lips is extracted. The mask
image is extracted in the way that it contains a subset of pixels which is equal or
greater of the exact subset of lip pixels. Then, another image (marker) is generated by
extracting pixels which contain lips with highest probability. Later, the mask image is
reconstructed using the marker image to make results more accurate.
Having a binary lip image, shape detection can be performed. Some lip features of faces expressing certain emotions are obvious: the side corners of happy lips are higher relative to the lip centre than they are for serious or sad lips. One way to express this mathematically is to find the leftmost and rightmost pixels (the lip corners), draw a line between them and calculate the position of the lip centre with respect to that line. The lower the centre lies below the line, the happier the lips are. Another morphological lip property that can be extracted is mouth openness. Open lips imply certain emotions, usually happiness and surprise.


For example (surprised and happy):
1. Based on the original binary image, the first step is to remove small areas, which is done with the 'sizethre(x,y,'z')' function.

2. In the second step a morphological closing (imclose(bw,se)) with a 'disk' structuring element is performed.

3. In the third step some properties of the image regions are measured (blob analysis). More precisely:

A 'BoundingBox' is calculated, which is the smallest rectangle containing the region (in our case the green box). In digital image processing, the bounding box is merely the coordinates of the rectangular border that fully encloses a digital image when it is placed over a page, a canvas, a screen or another similar two-dimensional background.

'Extrema' are calculated, an 8-by-2 matrix that specifies the extrema points in the region. Each row of the matrix contains the x- and y-coordinates of one of the points. The format of the vector is [top-left, top-right, right-top, right-bottom, bottom-right, bottom-left, left-bottom, left-top] (in our case the cyan dots).

A 'Centroid' is calculated, a 1-by-ndims(L) vector that specifies the centre of mass of the region (in our case the blue star).

The decision is then based on three measures:

i. p_poly_dist: the distance (shown as a red line) between the centroid and the line joining the left-top and right-top corners.

ii. lipratio: the ratio between the width and height of the bounding box.

iii. lip_sign: a positive or negative number, calculated to detect whether the corner-to-corner line runs over or under the centroid.

From these measures the decision is made whether the mood is 'happy', 'sad' or 'surprised'.
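The sketch below ties these steps together on a binary lip mask: it finds the lip corners, the bounding box and the centroid, and applies simple rules for the three moods. The threshold values are illustrative assumptions only and would need tuning on real data.

static class LipShapeClassifier
{
    // lip[y, x] is true for lip pixels of the binary lip image.
    public static string Classify(bool[,] lip)
    {
        int h = lip.GetLength(0), w = lip.GetLength(1);
        int minX = w, maxX = -1, minY = h, maxY = -1, count = 0;
        double sumX = 0, sumY = 0;

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (lip[y, x])
                {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                    sumX += x; sumY += y; count++;
                }
        if (count == 0) return "unknown";

        double cx = sumX / count, cy = sumY / count;      // centroid (blue star)
        double width = maxX - minX + 1, height = maxY - minY + 1;

        // Corner points: the first lip row in the leftmost and rightmost columns.
        int leftY = FirstLipRow(lip, minX), rightY = FirstLipRow(lip, maxX);

        // Signed distance of the centroid below the corner-to-corner line
        // (image y grows downwards, so a positive value means "centre below line").
        double lineYAtCx = leftY + (rightY - leftY) * (cx - minX) / System.Math.Max(1.0, maxX - minX);
        double lipSign = cy - lineYAtCx;

        double openness = height / width;                 // tall mouth => open mouth

        if (openness > 0.6) return "surprised";           // wide-open mouth
        if (lipSign > 0.05 * height) return "happy";      // corners higher than the centre
        return "sad";
    }

    static int FirstLipRow(bool[,] lip, int column)
    {
        for (int y = 0; y < lip.GetLength(0); y++)
            if (lip[y, column]) return y;
        return 0;
    }
}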
After reviewing some illumination correction (colour constancy) algorithms, we decided to use the "Max-RGB" (also known as "white patch") algorithm. This algorithm assumes that in every image there is a white patch, which is then used as a reference for the present illumination. A more accurate "Colour by Correlation" algorithm was also considered, but it required building a precise colour-illumination correlation table in controlled conditions, which would be beyond the scope of this task. As face detection is always the first step in these recognition or transmission systems, its performance puts a strict limit on the achievable performance of the whole system. Ideally, a good face detector should accurately extract all faces in images regardless of their positions, scales, orientations, colours, shapes, poses, expressions and lighting conditions. However, for the current state of the art in image processing technologies, this goal is a big challenge. For this reason, many face detectors deal only with upright and frontal faces in well-constrained environments.
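A minimal sketch of the Max-RGB (white patch) correction is shown below: each channel is rescaled so that its maximum value in the image maps to 255, on the assumption that the brightest patch in the scene is white.

using System;
using System.Drawing;

static class WhitePatch
{
    public static Bitmap Correct(Bitmap input)
    {
        // Find the maximum of each channel over the whole image.
        int maxR = 1, maxG = 1, maxB = 1;
        for (int y = 0; y < input.Height; y++)
            for (int x = 0; x < input.Width; x++)
            {
                Color c = input.GetPixel(x, y);
                if (c.R > maxR) maxR = c.R;
                if (c.G > maxG) maxG = c.G;
                if (c.B > maxB) maxB = c.B;
            }

        // Rescale every pixel so the brightest patch becomes white.
        var output = new Bitmap(input.Width, input.Height);
        for (int y = 0; y < input.Height; y++)
            for (int x = 0; x < input.Width; x++)
            {
                Color c = input.GetPixel(x, y);
                int r = Math.Min(255, c.R * 255 / maxR);
                int g = Math.Min(255, c.G * 255 / maxG);
                int b = Math.Min(255, c.B * 255 / maxB);
                output.SetPixel(x, y, Color.FromArgb(r, g, b));
            }
        return output;
    }
}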
This lip emotion detection algorithm has one restriction - the face
cannot be rotated more than 90 degrees, since then the corner detection would
obviously fail.

10 SCREENSHOTS


11 CODING
11.1 Code for the Sobel Filter Application (Form1 Designer)
namespace sobel_filtering
{
partial class Form1
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be
disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.components = new System.ComponentModel.Container();
this.button1 = new System.Windows.Forms.Button();
this.openFileDialog1 = new System.Windows.Forms.OpenFileDialog();
this.contextMenuStrip1 = new
System.Windows.Forms.ContextMenuStrip(this.components);
this.sharpenImageToolStripMenuItem = new
System.Windows.Forms.ToolStripMenuItem();
this.toolStripMenuItem2 = new
System.Windows.Forms.ToolStripSeparator();
this.contrastToolStripMenuItem = new
System.Windows.Forms.ToolStripMenuItem();
this.toolStripMenuItem5 = new
System.Windows.Forms.ToolStripSeparator();
this.brightnessToolStripMenuItem = new
System.Windows.Forms.ToolStripMenuItem();
this.toolStripMenuItem4 = new
System.Windows.Forms.ToolStripSeparator();
this.restoreImageToolStripMenuItem = new
System.Windows.Forms.ToolStripMenuItem();
this.toolStripMenuItem1 = new
System.Windows.Forms.ToolStripSeparator();
this.previewToolStripMenuItem = new
System.Windows.Forms.ToolStripMenuItem();


this.toolStripSeparator1 = new
System.Windows.Forms.ToolStripSeparator();
this.SaveImagetoolStripMenuItem = new
System.Windows.Forms.ToolStripMenuItem();
this.button_connected = new System.Windows.Forms.Button();
this.button_skincolor = new System.Windows.Forms.Button();
this.saveFileDialog1 = new System.Windows.Forms.SaveFileDialog();
this.pictureBox2 = new System.Windows.Forms.PictureBox();
this.pictureBox1 = new System.Windows.Forms.PictureBox();
this.button2 = new System.Windows.Forms.Button();
this.richTextBox1 = new System.Windows.Forms.RichTextBox();
this.richTextBox2 = new System.Windows.Forms.RichTextBox();
this.richTextBox3 = new System.Windows.Forms.RichTextBox();
this.richTextBox4 = new System.Windows.Forms.RichTextBox();
this.label1 = new System.Windows.Forms.Label();
this.label2 = new System.Windows.Forms.Label();
this.label3 = new System.Windows.Forms.Label();
this.label4 = new System.Windows.Forms.Label();
this.label5 = new System.Windows.Forms.Label();
this.contextMenuStrip1.SuspendLayout();
((System.ComponentModel.ISupportInitialize)(this.pictureBox2)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox1)).BeginInit();
this.SuspendLayout();
//
// button1
//
this.button1.Font = new System.Drawing.Font("Microsoft Sans Serif",
12F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.button1.Location = new System.Drawing.Point(255, 346);
this.button1.Name = "button1";
this.button1.Size = new System.Drawing.Size(124, 43);
this.button1.TabIndex = 1; this.button1.Text =
"Select Image";
this.button1.UseVisualStyleBackColor = true;
this.button1.Click += new System.EventHandler(this.button1_Click);
//
// openFileDialog1
//
this.openFileDialog1.FileName = "openFileDialog1";
this.openFileDialog1.FileOk += new
System.ComponentModel.CancelEventHandler(this.openFileDialog1_FileOk);
//
// contextMenuStrip1
//
this.contextMenuStrip1.Items.AddRange(new
System.Windows.Forms.ToolStripItem[] {
this.sharpenImageToolStripMenuItem,
this.toolStripMenuItem2,
this.contrastToolStripMenuItem,
this.toolStripMenuItem5,
this.brightnessToolStripMenuItem,
this.toolStripMenuItem4,
this.restoreImageToolStripMenuItem,
this.toolStripMenuItem1,
this.previewToolStripMenuItem,
this.toolStripSeparator1,
this.SaveImagetoolStripMenuItem});
this.contextMenuStrip1.Name = "contextMenuStrip1";
this.contextMenuStrip1.Size = new System.Drawing.Size(154, 166);
//

// sharpenImageToolStripMenuItem
//
this.sharpenImageToolStripMenuItem.Name
= "sharpenImageToolStripMenuItem";
this.sharpenImageToolStripMenuItem.Size = new
System.Drawing.Size(153, 22);
this.sharpenImageToolStripMenuItem.Text = "Sharpen Image";
this.sharpenImageToolStripMenuItem.Click += new
System.EventHandler(this.sharpenImageToolStripMenuItem_Click);
//
// toolStripMenuItem2
//
this.toolStripMenuItem2.Name = "toolStripMenuItem2";
this.toolStripMenuItem2.Size = new System.Drawing.Size(150, 6);
//
// contrastToolStripMenuItem
//
this.contrastToolStripMenuItem.Name = "contrastToolStripMenuItem";
this.contrastToolStripMenuItem.Size = new System.Drawing.Size(153,
22);
this.contrastToolStripMenuItem.Text = "Contrast";
this.contrastToolStripMenuItem.Click += new
System.EventHandler(this.contrastToolStripMenuItem_Click);
//
// toolStripMenuItem5
//
this.toolStripMenuItem5.Name = "toolStripMenuItem5";
this.toolStripMenuItem5.Size = new System.Drawing.Size(150, 6);
//
// brightnessToolStripMenuItem
//
this.brightnessToolStripMenuItem.Name = "brightnessToolStripMenuItem";
this.brightnessToolStripMenuItem.Size = new System.Drawing.Size(153, 22);
this.brightnessToolStripMenuItem.Text = "Brightness";
this.brightnessToolStripMenuItem.Click += new System.EventHandler(this.brightnessToolStripMenuItem_Click);
//
// toolStripMenuItem4
//
this.toolStripMenuItem4.Name = "toolStripMenuItem4";
this.toolStripMenuItem4.Size = new System.Drawing.Size(150, 6);
//
// restoreImageToolStripMenuItem
//
this.restoreImageToolStripMenuItem.Name
= "restoreImageToolStripMenuItem";
this.restoreImageToolStripMenuItem.Size = new
System.Drawing.Size(153, 22);
this.restoreImageToolStripMenuItem.Text = "Restore Image";
this.restoreImageToolStripMenuItem.Click += new
System.EventHandler(this.restoreImageToolStripMenuItem_Click);
//
// toolStripMenuItem1
//
this.toolStripMenuItem1.Name = "toolStripMenuItem1";
this.toolStripMenuItem1.Size = new System.Drawing.Size(150, 6);
//
// previewToolStripMenuItem
//
this.previewToolStripMenuItem.DisplayStyle = System.Windows.Forms.ToolStripItemDisplayStyle.Text;
this.previewToolStripMenuItem.Name = "previewToolStripMenuItem";
this.previewToolStripMenuItem.Size = new System.Drawing.Size(153, 22);
this.previewToolStripMenuItem.Text = "Preview";
this.previewToolStripMenuItem.Click += new System.EventHandler(this.previewToolStripMenuItem_Click);
//
// toolStripSeparator1
//
this.toolStripSeparator1.Name = "toolStripSeparator1";
this.toolStripSeparator1.Size = new System.Drawing.Size(150, 6);
//
// SaveImagetoolStripMenuItem
//
this.SaveImagetoolStripMenuItem.Name =
"SaveImagetoolStripMenuItem";
this.SaveImagetoolStripMenuItem.Size = new System.Drawing.Size(153,
22);
this.SaveImagetoolStripMenuItem.Text = "Save";
this.SaveImagetoolStripMenuItem.Click += new
System.EventHandler(this.SaveImagetoolStripMenuItem_Click);
//
// button_connected
//
this.button_connected.Font = new System.Drawing.Font("Microsoft
Sans Serif", 12F, System.Drawing.FontStyle.Regular,
System.Drawing.GraphicsUnit.Point, ((byte)(0)));
this.button_connected.Location = new System.Drawing.Point(594,
346);
this.button_connected.Name = "button_connected";
this.button_connected.Size = new System.Drawing.Size(122, 44);
this.button_connected.TabIndex = 16;
this.button_connected.Text = "Connected";
this.button_connected.UseVisualStyleBackColor = true;
this.button_connected.Click += new
System.EventHandler(this.button_connected_Click);
//
// button_skincolor
//
this.button_skincolor.Font = new System.Drawing.Font("Microsoft
Sans Serif", 12F, System.Drawing.FontStyle.Regular,
System.Drawing.GraphicsUnit.Point, ((byte)(0)));
this.button_skincolor.Location = new System.Drawing.Point(419,
346);
this.button_skincolor.Name = "button_skincolor";
this.button_skincolor.Size = new System.Drawing.Size(98, 43);
this.button_skincolor.TabIndex = 22; this.button_skincolor.Text =
"Skin Color"; this.button_skincolor.UseVisualStyleBackColor = true;
this.button_skincolor.Click += new
System.EventHandler(this.button_skincolor_Click);
//
// saveFileDialog1
//
this.saveFileDialog1.Filter = "Bitmap files (*.bmp)|*.bmp|Jpeg files
(*.jpg)|*.jpg|Png files (*.png)|*.png";
this.saveFileDialog1.FileOk += new
System.ComponentModel.CancelEventHandler(this.saveFileDialog1_FileOk);
//

// pictureBox2
//
this.pictureBox2.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;

this.pictureBox2.ContextMenuStrip = this.contextMenuStrip1;
this.pictureBox2.Location = new System.Drawing.Point(627, 129);
this.pictureBox2.Name = "pictureBox2";
this.pictureBox2.Size = new System.Drawing.Size(155, 175);
this.pictureBox2.SizeMode = System.Windows.Forms.PictureBoxSizeMode.StretchImage;
this.pictureBox2.TabIndex = 5;
this.pictureBox2.TabStop = false;
this.pictureBox2.MouseEnter += new
System.EventHandler(this.pictureBox_MouseClick);
//
// pictureBox1
//
this.pictureBox1.BorderStyle =
System.Windows.Forms.BorderStyle.FixedSingle;
this.pictureBox1.ContextMenuStrip = this.contextMenuStrip1;
this.pictureBox1.Cursor = System.Windows.Forms.Cursors.Cross;
this.pictureBox1.Location = new System.Drawing.Point(291, 129);
this.pictureBox1.Name = "pictureBox1";
this.pictureBox1.Size = new System.Drawing.Size(155, 175);
this.pictureBox1.SizeMode = System.Windows.Forms.PictureBoxSizeMode.StretchImage;
this.pictureBox1.TabIndex = 0;
this.pictureBox1.TabStop = false;
this.pictureBox1.Click += new
System.EventHandler(this.pictureBox1_Click);
this.pictureBox1.MouseDown += new
System.Windows.Forms.MouseEventHandler(this.pictureBox1_MouseDown);
this.pictureBox1.MouseEnter += new
System.EventHandler(this.pictureBox_MouseClick);
this.pictureBox1.MouseMove += new
System.Windows.Forms.MouseEventHandler(this.pictureBox1_MouseMove);
this.pictureBox1.MouseUp += new
System.Windows.Forms.MouseEventHandler(this.pictureBox1_MouseUp);
//
// button2
//
this.button2.Font = new System.Drawing.Font("Microsoft Sans Serif",
12F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.button2.Location = new System.Drawing.Point(741, 345);
this.button2.Name = "button2";
this.button2.Size = new System.Drawing.Size(125, 44);
this.button2.TabIndex = 29;
this.button2.Text = "Next";
this.button2.UseVisualStyleBackColor = true;
this.button2.Click += new
System.EventHandler(this.button2_Click_1);
//
// richTextBox1
//
this.richTextBox1.Location = new System.Drawing.Point(255, 397);
this.richTextBox1.Name = "richTextBox1";
this.richTextBox1.ReadOnly = true;
this.richTextBox1.Size = new System.Drawing.Size(124, 75);
this.richTextBox1.TabIndex = 30;
this.richTextBox1.Text = "";
//
// richTextBox2
//
this.richTextBox2.Location = new System.Drawing.Point(419, 396);
this.richTextBox2.Name = "richTextBox2";
this.richTextBox2.ReadOnly = true;

this.richTextBox2.Size = new System.Drawing.Size(98, 75);


this.richTextBox2.TabIndex = 31;
this.richTextBox2.Text = "";
//
// richTextBox3
//
this.richTextBox3.Location = new System.Drawing.Point(594, 397);
this.richTextBox3.Name = "richTextBox3";
this.richTextBox3.ReadOnly = true;
this.richTextBox3.Size = new System.Drawing.Size(122, 75);
this.richTextBox3.TabIndex = 32;
this.richTextBox3.Text = "";
//
// richTextBox4
//
this.richTextBox4.Location = new System.Drawing.Point(741, 396);
this.richTextBox4.Name = "richTextBox4";
this.richTextBox4.ReadOnly = true;
this.richTextBox4.Size = new System.Drawing.Size(125, 76);
this.richTextBox4.TabIndex = 33;
this.richTextBox4.Text = "";
//
// label1
//
this.label1.AutoSize = true;
this.label1.Font = new System.Drawing.Font("Microsoft Sans Serif",
12F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.label1.Location = new System.Drawing.Point(304, 321);
this.label1.Name = "label1";
this.label1.Size = new System.Drawing.Size(19, 20);
this.label1.TabIndex = 34;
this.label1.Text = "1";
//
// label2
//
this.label2.AutoSize = true;
this.label2.Font = new System.Drawing.Font("Microsoft Sans Serif",
12F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.label2.Location = new System.Drawing.Point(462, 321);
this.label2.Name = "label2";
this.label2.Size = new System.Drawing.Size(19, 20);
this.label2.TabIndex = 35;
this.label2.Text = "2";
//
// label3
//
this.label3.AutoSize = true;
this.label3.Font = new System.Drawing.Font("Microsoft Sans Serif",
12F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.label3.Location = new System.Drawing.Point(638, 321);
this.label3.Name = "label3";
this.label3.Size = new System.Drawing.Size(19, 20);
this.label3.TabIndex = 36;
this.label3.Text = "3";
//
// label4
//
this.label4.AutoSize = true;

this.label4.Font = new System.Drawing.Font("Microsoft Sans Serif",


12F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.label4.Location = new System.Drawing.Point(803, 320);
this.label4.Name = "label4";
this.label4.Size = new System.Drawing.Size(19, 20);
this.label4.TabIndex = 37;
this.label4.Text = "4";
//
// label5
//
this.label5.AutoSize = true;
this.label5.Font = new System.Drawing.Font("Microsoft Sans Serif",
12F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point,
((byte)(0)));
this.label5.ForeColor = System.Drawing.Color.FromArgb(((int)(((byte)
(255)))), ((int)(((byte)(128)))), ((int)(((byte)(0)))));
this.label5.Location = new System.Drawing.Point(342, 29);
this.label5.Name = "label5";
this.label5.Size = new System.Drawing.Size(452, 20);
this.label5.TabIndex = 38;
this.label5.Text = "Perform Each of the given step one by one to
get result"; //
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.BackColor = System.Drawing.SystemColors.Control;
this.ClientSize = new System.Drawing.Size(1152, 574);
this.Controls.Add(this.label5);
this.Controls.Add(this.label4);
this.Controls.Add(this.label3);
this.Controls.Add(this.label2);
this.Controls.Add(this.label1);
this.Controls.Add(this.richTextBox4);
this.Controls.Add(this.richTextBox3);
this.Controls.Add(this.richTextBox2);
this.Controls.Add(this.richTextBox1);
this.Controls.Add(this.button2);
this.Controls.Add(this.button_skincolor);
this.Controls.Add(this.pictureBox2);
this.Controls.Add(this.button_connected);
this.Controls.Add(this.button1);
this.Controls.Add(this.pictureBox1);
this.Name = "Form1";
this.Text = "Human Emotion Detection";
this.WindowState =
System.Windows.Forms.FormWindowState.Maximized; this.Load +=
new System.EventHandler(this.Form1_Load);
this.contextMenuStrip1.ResumeLayout(false);
((System.ComponentModel.ISupportInitialize)(this.pictureBox2)).EndInit();
((System.ComponentModel.ISupportInitialize)(this.pictureBox1)).EndInit();
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.PictureBox pictureBox1;

private System.Windows.Forms.Button button1;


private System.Windows.Forms.OpenFileDialog openFileDialog1;
private System.Windows.Forms.PictureBox pictureBox2;
private System.Windows.Forms.Button button_connected;
private System.Windows.Forms.Button button_skincolor;
private System.Windows.Forms.SaveFileDialog saveFileDialog1;
// private System.Windows.Forms.Button button_eye_extract4;
private System.Windows.Forms.ContextMenuStrip
contextMenuStrip1; private
System.Windows.Forms.ToolStripMenuItem
sharpenImageToolStripMenuItem;
private System.Windows.Forms.ToolStripSeparator toolStripMenuItem2;
private
System.Windows.Forms.ToolStripMenuItem
contrastToolStripMenuItem;
private
System.Windows.Forms.ToolStripMenuItem
previewToolStripMenuItem;
private System.Windows.Forms.ToolStripSeparator toolStripMenuItem4;
private
System.Windows.Forms.ToolStripMenuItem
restoreImageToolStripMenuItem;
private System.Windows.Forms.ToolStripSeparator toolStripMenuItem5;
private System.Windows.Forms.ToolStripSeparator toolStripMenuItem1;
private System.Windows.Forms.ToolStripMenuItem
brightnessToolStripMenuItem;
private System.Windows.Forms.ToolStripSeparator toolStripSeparator1;
private
System.Windows.Forms.ToolStripMenuItem
SaveImagetoolStripMenuItem;
private System.Windows.Forms.Button button2;
private System.Windows.Forms.RichTextBox richTextBox1;
private System.Windows.Forms.RichTextBox richTextBox2;
private System.Windows.Forms.RichTextBox richTextBox3;
private System.Windows.Forms.RichTextBox richTextBox4;
private System.Windows.Forms.Label label1;
private System.Windows.Forms.Label label2;
private System.Windows.Forms.Label label3;
private System.Windows.Forms.Label label4;
private System.Windows.Forms.Label label5;
}
}
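The listing above is the Windows Forms designer code that builds the application window; the Sobel operation itself is a pair of 3x3 convolutions over a grayscale image. A minimal sketch of the gradient-magnitude computation is given below; it is illustrative and not taken verbatim from the project sources.

using System;
using System.Drawing;

static class SobelFilterSketch
{
    // 3x3 Sobel kernels for the horizontal and vertical gradients.
    static readonly int[,] Gx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    static readonly int[,] Gy = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    public static Bitmap Apply(Bitmap input)
    {
        var output = new Bitmap(input.Width, input.Height);
        for (int y = 1; y < input.Height - 1; y++)
            for (int x = 1; x < input.Width - 1; x++)
            {
                int gx = 0, gy = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                    {
                        Color c = input.GetPixel(x + i, y + j);
                        int gray = (c.R + c.G + c.B) / 3;   // simple grayscale conversion
                        gx += gray * Gx[j + 1, i + 1];
                        gy += gray * Gy[j + 1, i + 1];
                    }
                int magnitude = Math.Min(255, (int)Math.Sqrt(gx * gx + gy * gy));
                output.SetPixel(x, y, Color.FromArgb(magnitude, magnitude, magnitude));
            }
        return output;
    }
}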

12 CONCLUSION AND FUTURE WORK


12.1 Conclusion
The proposed system utilizes feature extraction techniques and determines the emotion of a person based on the facial features, namely the eyes and lips. The emotion exhibited by a person is determined with good accuracy, and the system is user friendly.

12.2 Face-Detection and Segmentation


In this project we have proposed an emotion detection and recognition system for colour images. Although our application is only constructed for full frontal pictures with one person per picture, face detection is necessary for reducing the area of interest for further processing in order to achieve the best results. Detecting the skin of a face in an image is a hard task due to the variance of illumination; the success of correct detection depends greatly on the light sources and the illumination properties of the environment in which the picture is taken.

12.3 Emotion Detection


The major difficulty of the approach used is determining the right hue threshold range for lip extraction. Lip colours vary mostly according to the person's race, the presence of make-up and the illumination under which the photo was taken. The latter is the least of these problems, since illumination correction algorithms exist.

13 Future Enhancements
The future work includes enhancement of the system so that it is able to detect
emotions of the person even in complex backgrounds having different illumination
conditions and to eliminate the lip colour constraint in the coloured images. The other
criterion that can be worked upon is to project more emotions other than happy, sad
and surprised.

14 SCREENSHOTS

SCREEN 8: The result screen, which displays the end result of the system: the emotion portrayed by the person in the image.

15 REFERENCES
[1] J. Jarkiewicz, R. Kocielnik and K. Marasek, "Anthropometric Facial Emotion Recognition," Novel Interaction Methods and Techniques, Lecture Notes in Computer Science, vol. 5611, 2009.
[2] L. Zhao, X. LinSun, J. Liu and X. Hexu, "Face Detection Based on Skin Colour," Proceedings of the Third International Conference on Machine Learning and Cybernetics, Shanghai, 2004.
[3] I. Maglogiannis, D. Vouyioukas and C. Aggelopoulos, "Face Detection and Recognition of Natural Human Emotion Using Markov Random Fields," Personal and Ubiquitous Computing, 2009.
[4] M. H. Yang, D. J. Kriegman and N. Ahuja, "Detecting Faces in Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, 2002.
[5] P. J. Muñoz-Merino, C. Delgado Kloos and M. Muñoz-Organero, "Enhancement of Student Learning Through the Use of a Hinting Computer e-Learning System and Comparison With Human Teachers," IEEE journal, vol. 52, 2011.
[6] E. Mower, M. J. Mataric and S. Narayanan, "A Framework for Automatic Human Emotion Classification Using Emotion Profiles," IEEE journal, vol. 23, 2011.
[7] X. Wang and X. Tang, "Face Photo-Sketch Synthesis and Recognition," IEEE transactions, 2009.
[8] Y. Tong, J. Chen and Q. Ji, "A Unified Probabilistic Framework for Spontaneous Facial Action Modelling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, 2010.
[9] L. S. Chen and T. S. Huang, "Emotional Expressions in Audiovisual Human Computer Interaction," IEEE International Conference, vol. 1, 2000.
[10] L. C. De Silva and P. C. Ng, "Bimodal Emotion Recognition," Fourth IEEE International Conference, 2000.
