
European Journal of Scientific Research

ISSN 1450-216X Vol.44 No.4 (2010), pp.640-659


© EuroJournals Publishing, Inc. 2010
http://www.eurojournals.com/ejsr.htm

Human Emotions Detection using Brain Wave Signals: A Challenging
Ali S. AlMejrad
Biomedical Technology Department, College of Applied Medical Sciences
King Saud University, P.O.Box 10219, Riyadh 11433, Kingdom Saudi Arabia
E-mail: amejrad@ksu.edu.sa

Abstract

This paper discusses the issues and challenges of a research project designed to assess
different human emotions through the electroencephalogram (EEG). The work led to the
development of a real-time system for human emotion detection through EEG and has served
as a benchmark for continuing international study. EEG measurement is non-invasive and
inexpensive, has a very high sensitivity to information about internal (endogenous) changes
of brain state, and offers very high temporal resolution, in the millisecond range. Because of
the latter property, these data are particularly suited for studies of the brain mechanisms of
cognitive-emotional information processing, which occur on a millisecond scale. It is well
known that specific cortical and sub-cortical brain systems are engaged, and can be
differentiated by regional electrical activity, according to the associated emotional states.
Important challenges have to be faced in developing efficient emotion recognition from EEG
signals: (i) designing a protocol that stimulates a single emotion rather than multiple
emotions; (ii) developing an efficient algorithm for removing noise and artifacts from the
EEG signal; and (iii) applying a suitable and efficient artificial intelligence technique to
classify the emotions. In addition, because emotional activity of the brain produces distinct
characteristic EEG waves, we attempt to investigate the brain activity related to emotion by
analyzing the EEG.

Keywords: Electroencephalogram (EEG), Wavelet Transform (WT), Brain Computer Interface (BCI)

1. Introduction
The EEG signal represents the superimposition of diverse processes in the brain. Very little
research has been done to study the effects of these individual processes separately. Evidence from
imaging research suggests that processing of facial affect relies on the interplay of several distinct brain
areas. The inferior occipito-temporal cortex, especially the fusiform gyrus, plays a key role in the
detection of facial configurations [1]. Further analysis of facial affect has been shown to be related to
activation of the superior temporal sulcus, the amygdala, the orbito-frontal cortex and the insular cortex
[2]. In medical and basic research, the correlation of particular brain waves with sleep phases,
emotional states, physiological profiles, and types of mental activity is an ongoing effort.
Nonverbal information appearing in human facial expressions, gestures, and voice plays an
important role in human communication. In particular, by using information about emotion and/or
affect, people can communicate with each other more smoothly; in this sense, nonverbal
communication is a basis of human communication. In addition to human-human communication,
human-human communication via computer and communication between humans and machines are
more and more common in recent research. In order to improve man-machine communication, we
consider emotion, read from EEG signals, as an interface between human and machine. Several
emotions such as joy, happiness, fear, anger, disgust, teasing and surprise can serve as such an
interface. Human emotions can be estimated in several ways, for example from the electrocardiogram
(ECG), skin conductance (SC), electromyogram (EMG) and blood volume pressure (BVP). Estimating
emotions from brain waves, however, is a comparatively new and effective approach for present-day
applications, because brain waves are generated by the limbic system, which is deeply involved in the
cognitive process.
Much work has been done on physiological signal analysis, relating changes in physiological
signals to changes in emotion. Little has been done to classify emotion from physiological features,
and the studies that exist have used simple pattern recognition such as linear discriminants [3].
Traditional electroencephalography produces a large-volume display of brain electrical activity, which
creates problems, particularly in the assessment of long recording periods. The question arises how
dynamical descriptors can be applied to detect changes in the chaoticity of the brain processes
measured in the EEG. The number and variety of methods used in dynamical analysis has increased
dramatically during the last fifteen years, and the limitations of these methods, especially when applied
to noisy biological data, are now becoming apparent; their misapplication can easily produce fallacious
results (Rapp, 1994). Clearly, very little research has been performed in this domain, and much more
remains to be done. Though this work is in its infancy, we have included the sources we are aware of,
with the hope of assisting other researchers on the topic. We apologize to researchers whose work on
this area has been excluded.

2. Neuro-Imaging Methods
Cognitive neuroscience methods are broadly classified into two types: (i) single-cell recording
and (ii) brain imaging. The main limitations of single-cell recording are that (i) it is invasive, requiring
brain surgery on the patient; (ii) it is stressful and often involves medication; (iii) there are time
constraints during the experimental procedure; and (iv) retesting is not possible. Brain imaging
methods, in turn, comprise Positron Emission Tomography (PET), functional Magnetic Resonance
Imaging (fMRI), Electroencephalography (EEG), Magnetoencephalography (MEG), Magnetic
Resonance Imaging (MRI), and Transcranial Magnetic Stimulation (TMS). The basic advantages of
these methods are non-invasiveness (no brain surgery is required), high speed and good accuracy. PET
and fMRI provide an indirect measure of blood flow; fMRI is BOLD-based (Blood Oxygen Level
Dependent) and provides a measure of hemodynamic adjustments. The limitations of this method are
sensitivity to artifacts (e.g., movement, cavities, and tissue impedance differences), imaging
characteristics that change with blood flow level in the Central Nervous System (CNS), and limited
temporal resolution.
The basic idea behind MEG is the measurement of magnetic fields occurring outside the head
as a result of naturally occurring electrical activity in the brain. It gives better spatial resolution than
EEG, but it is rarely used in clinical applications. The concept of TMS is completely different from the
previous methods; it uses electromagnetic induction to temporarily disrupt brain function. Although
this procedure can be very focal in nature and gives high temporal resolution, there is a chance of
inducing seizures in the human brain [4]. Unfortunately, the above methods require sophisticated
devices that can be operated only in special facilities. Moreover, techniques for measuring blood flow
have long latencies and are thus less appropriate for interaction [5]. Their main drawbacks are bulky
scanners, the slowness of the vascular response to local activity and, most importantly, the limited
mobility of the user and the dependence of the imaging on oxygen circulation in the brain. In this
research, we investigate the basic and fundamental issues and challenges of assessing emotions
through EEG signals.

3. Emotions
Emotions and their expression are key elements in social interaction, serving as mechanisms for
signaling, directing attention, motivating and controlling interactions, situation assessment,
construction of self-image and the image of others, expectation formation, intersubjectivity, and so on.
Emotion is not only tightly interwoven neurologically with the mechanisms responsible for cognition,
but also plays a central role in decision making, problem solving, communicating, negotiating, and
adapting to unpredictable environments. Emotion consists of more than its outward physical
expression: it also consists of internal feelings and thoughts, as well as other internal processes of
which the person experiencing the emotion may not be aware. An individual's emotional state may be
influenced by many kinds of situations, and different people have different subjective emotional
experiences even in response to the same stimulus.
Recently, a constellation of findings from neuroscience, psychology, and cognitive science
suggests that emotion plays a surprisingly critical role in rational and intelligent behavior. When we are
happy, our perception is biased towards selecting happy events, and likewise for negative emotions.
Similarly, while making decisions, people are often influenced by their affective states. Reading a text
while experiencing a negatively valenced emotional state often leads to a very different interpretation
than reading the same text while in a positive state [63]. These authors classified the different types of
emotions elicited from subjects through physiological signals, and also described the kinds of
emotions, the feature extraction techniques, the methods of eliciting emotions and the physiological
signals used for classifying the emotions.
After a century of research there is little agreement on a definition of emotion, and many
theories have been proposed. A number of these could not be verified until recently, when improved
measurement of specific physiological signals became available. In general, emotions are short-term,
existing for a few microseconds to milliseconds [6]. One of the hallmark questions in emotion theory is
whether distinct physiological patterns accompany each emotion [7]. Ekman et al. [8] and Winton et al.
[9] provided some of the first findings showing significant differences in autonomic nervous system
signals according to a small number of emotional categories or dimensions, but there was no
exploration of automated classification.
It is also apparent that we humans, while extremely good at feeling and expressing emotions,
still cannot agree on how they should best be defined [10]. On top of this, emotion recognition is itself
a technically challenging field. Emotion being inherently multi-modal, there are a number of ways in
which it can be recognized, and how to evaluate an individual's emotional state objectively and locate
the functional areas of emotion processing becomes the central issue. The ability to recognize emotion
is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued
to be even more important than mathematical and verbal abilities. Emotion may be conveyed via
speech, facial expression, gesture and a variety of other physical and physiological cues. This spread of
modalities across which emotion is expressed leaves the field open to many different potential
recognition methods.

3.1. Emotion Categories
Human emotions are basically classified into three types: motivational (thirst, hunger, pain,
mood), basic (happiness, sadness, fear, disgust, anger, surprise) and self-conscious or social (shame,
embarrassment, pride, guilt). Early pattern recognition efforts aimed at finding physiological correlates
focused on t-tests or Analysis of Variance (ANOVA) comparisons, combining data over many
subjects, each measured for a relatively small amount of time. Picard [11] classified physiological
patterns for a set of six emotions (happiness, sadness, disgust, fear, joy and anger) induced by showing
video clips to the subjects; the features used for classification were skin conductance, heart rate, and
temperature. In [12], subject-independent emotion recognition was proposed by gathering data from
multiple subjects (multi-modal), and simple emotions such as pleasure and displeasure were classified
using neural networks and Support Vector Machines.

Figure 1 (a): Discussion Model for Emotion

Figure 1 (b): Discussion Model for Emotion

The researcher P.J. Lang claimed that emotions can be characterized in terms of judged valence
(pleasant or unpleasant) and arousal (calm or aroused). Fig. 1(a) and Fig. 1(b) above show the relation
between arousal and valence. The relation between physiological signals and arousal/valence arises
from the activation of the autonomic nervous system when emotions are elicited.
After perceiving a stimulating event, an individual instantly and automatically experiences
physiological changes; the responses to these changes are called emotion (William James).
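The valence-arousal characterization above can be sketched as a simple quadrant mapping of the circumplex. The thresholds and the emotion labels chosen for each quadrant below are illustrative assumptions, not values taken from this paper:

```python
def quadrant_emotion(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1] x [-1, 1] to an
    illustrative emotion label for its quadrant of the circumplex."""
    if valence >= 0 and arousal >= 0:
        return "joy"          # pleasant, aroused
    if valence < 0 and arousal >= 0:
        return "anger"        # unpleasant, aroused
    if valence < 0:
        return "sadness"      # unpleasant, calm
    return "relaxation"       # pleasant, calm

# Example: a highly pleasant, highly arousing stimulus
print(quadrant_emotion(0.8, 0.7))  # joy
```

In a real system the two coordinates would come from subjective ratings or from classifier outputs rather than being given directly.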
Figure 2: Brain Model for Emotion Recognition

Estimating emotions from brain waves, an index of the central nervous system, thus seems
general and effective, since emotions are excited in the limbic system and are deeply related to the
cognitive process. Among the several regions of the brain, the amygdala plays a major role in
recognizing the emotion of fear; Fig. 2 above shows the brain region in which the amygdala is located.
Earlier experimental studies on animals showed that the amygdala plays a major role in fear
recognition in the animal brain, and that its bilateral removal reduces the level of aggression and fear
in rats and monkeys.
Bilateral amygdala damage likewise reduces recognition of fear-inducing stimuli and of fear in
others, and impairs recognition of negative emotions from facial expressions. It does not, in general,
impair recognition of emotions from complex static visual stimuli, provided those stimuli contain cues
in addition to facial expressions. This finding is especially notable with regard to fear, whose
recognition is often impaired following bilateral amygdala damage: whereas the inclusion of facial
expressions improved recognition of negative emotions for all other subject groups, subjects with
bilateral amygdala damage derived much less benefit from their inclusion. A variety of brain regions
are involved in the processing of facial expressions of emotion; they are active at different times, and
some structures are active at more than one time. The amygdala is particularly implicated in the
processing of fear stimuli, receiving early (<120 ms) subcortical as well as late (~170 ms) cortical
input from the temporal lobes.

3.2. Recording of Emotional State Changes through EEG
In psychological work, signals measured from the central nervous system give a relationship
between psychological changes and emotions. Most studies have relied on t-tests or analysis-of-
variance comparisons; very little work has been done to classify emotions from physiological signals
with simple pattern recognition systems such as linear discriminants [3].
The majority of existing emotion-understanding techniques are based on a single modality such
as PET, fMRI, EEG, or static face images or videos. In the rapidly evolving brain-computer interface
(BCI) area, fNIRS (functional Near-Infrared Spectroscopy) represents a low-cost, user-friendly,
practical device for monitoring the cognitive and emotional states of the brain, especially over the
prefrontal cortex. fNIRS detects light that travels through the cortical tissue and is used to monitor
hemodynamic changes during cognitive and emotional activity. The second modality, used in this
work, estimates neural activity through EEG, which has been used to study individual emotional states
for more than two decades. Useful information about the emotional state can be obtained as long as
stable EEG patterns are produced on the scalp. EEG recordings capture neural activity on a
millisecond scale from the entire cortical surface, while fNIRS records hemodynamic activity on a
scale of seconds [13].

Figure 3: Cerebral hemispheres showing the motor areas (towards the front) and the sensory areas (towards the back)

Regarding the EEG related to emotion, researchers often focus on the reduction of activity in
the alpha band (8 Hz - 13 Hz); much research suggests an inverse relationship between alpha activity
and brain activation in adults. The EEG records the electrical difference between resting and stimulus
conditions in the two hemispheres of the human brain, as well as other physiological activity in
response to stimuli, and its use has been pivotal in studies of brain asymmetry and emotion.
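The alpha-band activity discussed above can be quantified by integrating the signal's power spectrum over 8-13 Hz. A minimal sketch using a plain discrete Fourier transform on a synthetic 10 Hz "alpha" oscillation; the sampling rate and signal are illustrative assumptions, not recordings from this study:

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x (sampled at fs Hz) in the band [f_lo, f_hi] Hz,
    computed from a direct DFT (adequate for short illustrative signals)."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):                 # positive frequencies only
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128                                        # assumed sampling rate (Hz)
t = [i / fs for i in range(fs)]                 # one second of data
x = [math.sin(2 * math.pi * 10 * ti) for ti in t]   # 10 Hz "alpha" rhythm

alpha = band_power(x, fs, 8, 13)
beta = band_power(x, fs, 13, 30)
print(alpha > beta)   # True: the energy is concentrated in the alpha band
```

A reduction in this alpha measure between resting and stimulus conditions is the effect the cited literature looks for.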
According to (Lee M, 2000), positive and negative emotions may or may not be estimable from
the EEG signal using Skinner's Point-Wise Correlation Dimension (PD2) analysis, but PD2 does
represent some of the mental activity in the brain areas. Fig. 4 shows the brain regions activated in
emotion recognition, where green indicates neutral, red indicates anger, purple indicates happiness and
blue indicates sadness.

Figure 4: Localization of Brain Region for Emotion Recognition



The study concluded that arithmetic tasks play a major role compared to the other methods of
eliciting emotion from subjects, and that greater concentration increases the dimensional complexity of
EEG dynamics. Emotion is the result of the interplay between an individual's cognitive appraisal of an
event and his or her physical response to it.
Ambulatory electroencephalograph-based systems for monitoring brain waves have become
available recently, and free the patient from sitting in front of the EEG monitoring device for the entire
duration of recording. With this method the patient can be anywhere: the electrode cap fitted on the
patient collects the brain waves and sends them to a receiver using a transmitter, and the received EEG
signals indicate the patient's condition.

3.3. Gathering of Affective Data
The difficulty in gathering accurate physiological data lies in whether or not the subject has
washed his or her hands, how much gel is applied to each electrode, motion artifacts, and precisely
where the sensor is placed. All of these facts affect the quality of the raw data for EEG signal
processing. Normally, emotions are elicited in the subject in one of three ways: (i) by showing pictures
of different emotions to the subjects at predefined intervals; (ii) by showing video clips of different
emotions to the subjects at frequent intervals; (iii) by asking the subjects to imagine different kinds of
emotions that have appeared in their past life.
The EEG is thought to be the summed, synchronized sub-threshold dendritic potentials
produced by the synaptic activity of many neurons [14]. Not all types of brain activity have identical
impact on its formation: the depth, orientation and intrinsic symmetry of connections in the cortex are
significant, and as shown in previous work, pyramidal cells are thought to produce the strongest part of
the EEG signal [15]. The aforementioned literature was concerned primarily with the investigation of
hemispheric specialization (Fig. 4).

Figure 5: EEG signal analysis on human brain with various stimuli.

For instance, it was found that alpha band power (8 Hz - 13 Hz) was lower in the left
hemisphere than in the right for verbal tasks, and lower in the right hemisphere than in the left for
spatial tasks. This phenomenon is referred to in the literature as alpha band asymmetry. The same work
investigated both motor and non-motor tasks and found that tasks requiring motor output engage the
hemispheres more asymmetrically [16].
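The alpha asymmetry described above is commonly summarized as the difference of log alpha power between homologous right and left electrodes (for example F4 and F3). This index convention is a standard one from the asymmetry literature, not a formula given in this paper:

```python
import math

def alpha_asymmetry(alpha_right, alpha_left):
    """ln(right) - ln(left) alpha power for a homologous electrode pair.
    Because alpha activity is inversely related to activation, a positive
    index suggests relatively greater LEFT-hemisphere activation."""
    return math.log(alpha_right) - math.log(alpha_left)

# Right alpha higher than left -> positive index (left more activated)
print(alpha_asymmetry(12.0, 8.0) > 0)   # True
```

The log transform makes the index symmetric around zero and less sensitive to overall power differences between subjects.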
Other papers [17, 18] investigated alpha asymmetry using only non-motor tasks, and found that
it also exists for such tasks. One paper [19] describes a subject who can voluntarily suppress alpha
waves in the left or right hemisphere. These authors concluded that there are measurable differences in
the EEG that correlate with different types of mental processes. With this in mind, one can see the
possibility of training a subject to produce and control mental processes that can be distinguished from
one another by an external device using the measured EEG data as input.
The same researchers studied pattern recognition techniques to search for differences in the
EEG during performance of tasks previously used to elicit hemispheric responses. Their findings
showed little reliable difference among the different tasks, and very little asymmetry associated with
the non-motor tasks. This experiment was done on a group of subjects as a whole, and no attempt was
made to distinguish tasks at the individual level. It was not the goal of this research to prove or
disprove the theory of hemispheric specialization; however, we chose emotional tasks based on
research in this area in the hope of producing measurably different responses in the EEG that could be
used to distinguish between the various tasks.
The first elicitation method can use the universal database for facial emotion recognition
prepared by Ekman and Friesen [20]. With the above methods, however, we cannot be sure that the
intended emotion is actually elicited in the subject, and this affects the gathering of good data. Yuankui
et al. [21] proposed collecting data from children while they are playing games, learning, and so on, for
analyzing the EEG signal for emotion recognition; no researcher so far has collected emotion-
recognition data from children.
After long discussion with neurologists, we came to the conclusion that an ordinary person is
able to control the outward facial expression of emotion under the different elicitation methods,
whereas people who suffer from conditions such as paralysis, brain stroke or a short temper cannot
control their emotions, so from them we can obtain data that is as authentic as possible.
With the development of computer science and electronics, event-related potentials (ERP) have
come to be used to research cognition and emotion. Using this technique, special emotion-related
components during emotional processing have been found that help interpret the relationship between
psychological activities and changes of brain potentials.
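ERP components such as the P300 are usually extracted by averaging many stimulus-locked epochs, so that the random background EEG cancels while the time-locked component survives. A minimal sketch on synthetic data; the epoch length, noise level and component shape are illustrative assumptions:

```python
import math
import random

random.seed(0)
n_samples = 100                        # samples per stimulus-locked epoch
# Idealized "P300-like" component peaking mid-epoch
erp = [math.exp(-((i - 50) ** 2) / 100.0) for i in range(n_samples)]

def epoch():
    """One trial: the time-locked component plus random background EEG."""
    return [erp[i] + random.gauss(0.0, 1.0) for i in range(n_samples)]

def average(epochs):
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(n_samples)]

avg = average([epoch() for _ in range(200)])

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

# Averaging 200 trials shrinks the background-noise variance by ~1/200,
# so the average is far closer to the true component than any single trial
print(mse(avg, erp) < mse(epoch(), erp))   # True
```

Fig. 6 in the paper shows exactly such an averaged wave, on electrode Cz, for target versus non-target stimuli.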

Figure 6: Characteristics of the P300 component in EEG signal analysis. (a) Averaged wave on Cz: amplitude (µV) versus time (0-600 ms) for target and non-target stimuli.



According to the proposal in [22], research on brainwaves should consider mixed-sex samples
rather than single-sex samples, with both clinical and normal populations, and hence include a broader
age range. Most research on human emotion recognition has concentrated on recognizing happiness
and disgust, because most people do not readily recognize disgust.

Figure 7: Brain imaging for different stimulus components

The limitations of the above methods are thus obvious. The inclusion of the negative emotions
sadness, anger and fear among those experimentally manipulated would substantially advance the
study of cerebral activity.
Under long-term recording of the EEG signal, the activities of the patient always cause
disturbance during observation. The status of the brain under various stimulus conditions is shown in
Fig. 5, Fig. 6 and Fig. 7, in which the P300 stimulus response plays a major role in present-day
analysis of the functional state of the human brain.

4. Research Methodology
The research methodology of this work is shown in Fig. 8. The electrical activity of the human
brain is recorded through electrodes placed on the scalp. The recorded brain waves then undergo
preprocessing, which mainly consists of the removal of noise, artifacts, and other external interference.
Noise can be removed using the Wavelet Transform, and artifacts using Independent Component
Analysis or rejection filters. After preprocessing, the wavelet transform is applied to the preprocessed
signal to extract features from the EEG signals. EEG signals are generally non-stationary in nature, so
statistical properties of the signals such as the mean, median, variance, average power, average energy,
power spectral density, skewness and other parameters are used as features. Finally, the wavelet
transform coefficients are dimensionally reduced to simplify the classification process.
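The wavelet-based noise removal step can be sketched with a single-level Haar transform and soft thresholding of the detail coefficients. The threshold rule and the synthetic signal are illustrative assumptions; a practical system would use a deeper decomposition and a data-driven threshold:

```python
import math
import random

def haar_forward(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft(v, t):
    """Soft thresholding: shrink toward zero, zero out small coefficients."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(x, sigma):
    a, d = haar_forward(x)
    t = sigma * math.sqrt(2 * math.log(len(x)))    # universal threshold
    return haar_inverse(a, [soft(di, t) for di in d])

random.seed(1)
n, sigma = 256, 0.5
clean = [math.sin(2 * math.pi * i / 64) for i in range(n)]   # slow rhythm
noisy = [c + random.gauss(0.0, sigma) for c in clean]
den = denoise(noisy, sigma)

mse = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
print(mse(den, clean) < mse(noisy, clean))   # True: error is reduced
```

The detail subband of a smooth signal carries mostly noise, so thresholding it removes noise energy while barely distorting the underlying rhythm.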

Figure 8: Basic diagram for human emotion recognition using EEG signals. The pipeline runs: EEG Acquisition → Pre-Processing → Feature Extraction → Data Reduction → Feature Classification → Application.

Dimension reduction can be done by several methods, such as Principal Component Analysis
(PCA) and Independent Component Analysis (ICA). The extracted features are classified into seven
different kinds of emotion (happiness, anger, joy, disgust, fear, relaxation, and neutral) through
artificial intelligence techniques such as neural networks, genetic algorithms, Support Vector
Machines, fuzzy logic, and hybrid structures of these methods.
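PCA-based dimension reduction keeps the directions of greatest variance in the feature space. A minimal two-feature sketch using power iteration on the covariance matrix; the toy "EEG features" are illustrative assumptions:

```python
import random

random.seed(2)
# Toy feature vectors: two strongly correlated features per trial
data = []
for _ in range(500):
    s = random.gauss(0.0, 3.0)                  # shared source (large variance)
    data.append([s + random.gauss(0, 0.3), s + random.gauss(0, 0.3)])

# Covariance matrix of the mean-centered features
m0 = sum(p[0] for p in data) / len(data)
m1 = sum(p[1] for p in data) / len(data)
c00 = sum((p[0] - m0) ** 2 for p in data) / len(data)
c11 = sum((p[1] - m1) ** 2 for p in data) / len(data)
c01 = sum((p[0] - m0) * (p[1] - m1) for p in data) / len(data)

# Power iteration: repeatedly apply the covariance matrix to find the
# dominant eigenvector, i.e. the first principal component
v = [1.0, 0.0]
for _ in range(50):
    w = [c00 * v[0] + c01 * v[1], c01 * v[0] + c11 * v[1]]
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = [w[0] / norm, w[1] / norm]

# The correlated features load almost equally on the first component
print(abs(abs(v[0]) - abs(v[1])) < 0.1)   # True

# Reduce each 2-D feature vector to its 1-D projection on the component
reduced = [p[0] * v[0] + p[1] * v[1] for p in data]
```

The reduced one-dimensional features would then be fed to the classifier in place of the original correlated pair.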

4.1. Electroencephalogram (EEG)
The electroencephalogram is a medical imaging technique that reads scalp electrical activity
generated by brain structures. The electrical nature of the human nervous system has been recognized
for more than a century. It is well known that the variation of the surface potential distribution on the
scalp reflects functional and physiological activities emerging from the underlying brain. This surface
potential variation can be recorded by affixing an array of electrodes to the scalp and measuring the
voltage between pairs of these electrodes; the measurements are then filtered, amplified, and recorded.
The resulting data is called the EEG [23]. Electroencephalogram signals are sometimes called
brainwaves, though this usage is discouraged. Over the past three decades a great many modern
techniques such as Computed Tomography, MRI and fMRI have come into use, but the EEG, as a non-
destructive method, still plays a key role in the analysis and diagnosis of the brain and the investigation
of brain states in different functional conditions [24]. Fig. 9 shows the different stages of recording the
EEG signal from the human brain.

4.2. Analysis of EEG Signals
Methods of EEG signal analysis are broadly classified into two types: parametric and non-
parametric. The latter treats the signal as stationary during short intervals; here most efforts to quantify
the EEG have been spent on its rhythmic and spectral properties. Many methods have been used for
spectral estimation, such as the Fourier Transform, the Fast Fourier Transform and the Short-Time
Fourier Transform. The main limitations of these methods lie in poor time-frequency resolution and
technical problems. Besides this approach, there is another method called interval analysis; though
widely accepted for its simplicity and usefulness, it is sensitive to noise and other artifacts [25].
Parametric methods are more specialized and yet more general: they assume that the EEG signal can
be represented by a stochastic model involving specific parameters. The generating model may be
linear or non-linear; linear models are mostly used. In this way the non-stationarity of the signal can be
taken into account.
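The parametric approach can be sketched with a second-order autoregressive (AR) model fitted by the Yule-Walker equations. The model order and the coefficient values below are illustrative assumptions, not parameters used in this paper:

```python
import random

random.seed(3)
a1_true, a2_true = 1.2, -0.5            # stable AR(2) coefficients
n = 20000

# Simulate x[t] = a1*x[t-1] + a2*x[t-2] + white noise
x = [0.0, 0.0]
for _ in range(n):
    x.append(a1_true * x[-1] + a2_true * x[-2] + random.gauss(0.0, 1.0))
x = x[2:]

def autocorr(x, k):
    """Unbiased sample autocorrelation at lag k."""
    n = len(x)
    return sum(x[t] * x[t + k] for t in range(n - k)) / (n - k)

r0, r1, r2 = autocorr(x, 0), autocorr(x, 1), autocorr(x, 2)

# Yule-Walker for AR(2):  r1 = a1*r0 + a2*r1,  r2 = a1*r1 + a2*r0
det = r0 * r0 - r1 * r1
a1 = (r1 * r0 - r2 * r1) / det
a2 = (r2 * r0 - r1 * r1) / det

print(abs(a1 - a1_true) < 0.1 and abs(a2 - a2_true) < 0.1)   # True
```

Once the AR coefficients are estimated, the model spectrum follows in closed form, avoiding the long observation windows that non-parametric spectral estimates require.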
Figure 9: EEG recording system. The signal path runs: Electrodes → Amplifier → HPF & LPF → ADC → Display.

A disadvantage of spectral analysis as described is the requirement of a fairly long observation
time, about 30 seconds or more, to achieve good spectral estimates. This can easily conflict with the
non-stationary behavior of EEGs and makes it difficult to follow changes in the spectral properties.
Another disadvantage is that the power spectrum is seldom in itself the desired end result, since certain
characteristic values are often needed, such as peak frequencies, bandwidths, and fractional power
quantities. When these are calculated from the power spectrum, there is no guarantee that the estimates
will be efficient, nor are the statistical uncertainties known. These disadvantages are largely overcome
by using parametric models.

4.2.1. EEG Wave Groups
According to Berger [26], the frequency content of the EEG signal is of crucial importance for
its assessment. The most obvious activity is a rhythmic activity with a frequency around 10 Hz. A
change in the subject's state significantly affects the frequency band distribution; in particular, the
frequency content of the EEG is shifted to lower values when brain injury or functional disturbance
occurs, with more general injuries causing more low-frequency activity. Berger also pointed out that
the appearance of EEG signals depends on the location of the electrode on the skull and on the
subject's state of vigilance. The important factors influencing the EEG signal are:
• age of the subject
• mental state of the subject
• region of the brain
• hereditary factors
• influences on the brain
• artifacts
The first four factors are present under normal circumstances. The EEG signals of children are
quite different from those of adults: in general, the lower the age, the larger the amount of low-
frequency activity, and the EEG develops gradually during childhood. In addition to these frequency
properties, Berger also described the transient characteristics of EEG signals, which may be
superimposed on the more stationary EEG background activity; these are called spikes, sharp waves,
and spike-and-wave activity according to their character.
The different frequency bands of the EEG signal are shown in Fig. 10. There is no good reason
why the entire EEG should be more representative of brain dynamics than the individual frequency
sub-bands. In fact, the sub-bands may yield more accurate information about the constituent neuronal
activities underlying the EEG; consequently, certain changes in the EEG that are not evident in the
original full-spectrum signal may be amplified when each sub-band is analyzed separately. The
decomposition of the signal is achieved by means of filters with different frequency response
characteristics, and the resulting rhythms are investigated to understand the brain in its different
functional states [24].

Figure 10: Frequency Spectrum of EEG signal

Although EEG signals span frequencies from roughly 0 Hz to 80 Hz, the 0 Hz - 40 Hz range suffices for analyzing human brain activity. EEG activity is generally classified into five particularly important frequency rhythms: Delta, Theta, Alpha, Beta, and Gamma.
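The division into rhythms can be illustrated with a short sketch (a minimal example, not from the paper; the band limits are the conventional approximate values, and the signal and sampling rate are invented):

```python
import numpy as np

fs = 128                         # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)      # 4 s of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t)   # dominant 10 Hz (alpha-range) oscillation

# Conventional rhythm boundaries in Hz (approximate; exact limits vary by author)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 40)}

freqs = np.fft.rfftfreq(len(x), 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2          # power spectrum

band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}
dominant = max(band_power, key=band_power.get)
print(dominant)  # alpha
```

Because the synthetic signal oscillates at 10 Hz, virtually all of its spectral power falls in the alpha band; real EEG distributes power across all five rhythms.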

4.3. Artifacts
Signals in the EEG that are of non-cerebral origin are called artifacts. Eye movements and muscular activity contaminate the EEG; eye blinks in particular cause large artifacts, since the corresponding muscles are very close to the EEG electrodes. Because artifacts spread across multiple channels, removing one or more such components would also remove useful EEG information. This is one of the reasons why it takes considerable experience to interpret EEGs clinically. Artifacts are impulsive in nature, larger in amplitude, and different in shape from the signal sequences; because of their large amplitude, they dominate characterizations of the signal based on second-order statistics such as correlation and spectral analysis. Artifacts are classified as (i) patient-related and (ii) technical. Patient-related artifacts disturb the EEG the most; technical artifacts can be reduced by decreasing the electrode impedance and by using shorter electrode wires. The most common types of artifacts are:
• Eye artifacts (including eyeball, ocular muscles and eyelid)
• EKG artifacts
• EMG artifacts
• Glossokinetic artifacts
Eye artifacts such as blinks are common and strong in EEG recordings and can be reduced by using a cross-hair fixation point; EMG artifacts such as jaw clenching can be reduced by recording the EEG away from the EMG sources. All of these artifacts can be attenuated using rejection filters and Independent Component Analysis (ICA). Rejection filters use a robust transform to remove artifacts effectively and are robust to the choice of user-specified parameters [27]; they can be applied to biomedical signals corrupted by occasional, short-duration artifacts. ICA has been shown to be very efficient for artifact removal and for extracting useful information from the EEG signal [28].
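A full ICA requires a numerical library, but the underlying idea of removing an artifact that is linearly mixed into a channel can be sketched with a simple regression against a reference (EOG) channel. Everything below is invented for illustration: the signals, the mixing coefficient, and the blink shape are not from the paper.

```python
import numpy as np

n = 1024
t = np.arange(n) / 128
eeg_clean = np.sin(2 * np.pi * 10 * t)       # the rhythm we want to keep
eog = np.zeros(n)
eog[200:260] = 5.0                            # invented square "blink" on the EOG channel
contaminated = eeg_clean + 0.8 * eog          # blink leaks into the EEG electrode

# Least-squares estimate of how strongly the reference leaks into the channel
b = eog @ contaminated / (eog @ eog)
cleaned = contaminated - b * eog
print(round(float(np.max(np.abs(cleaned - eeg_clean))), 3))
# residual is small compared with the 4.0 blink amplitude
```

ICA generalizes this idea: instead of needing a clean reference channel, it estimates the artifact components blindly from the multi-channel statistics.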

4.4. Feature Extraction


In the early days, the complex patterns of brain waves were analyzed visually. Computer-assisted EEG analysis increased the desire for effective quantitative interpretation of EEG data and for describing properties of the EEG that often cannot be perceived by the human eye. Visual analysis alone is insufficient and loses much of the valuable information in the EEG signal. To preserve this information for emotion recognition and for analyzing the condition of the human brain, time-domain, frequency-domain, and time-frequency analyses may be used.
In this stage, the preprocessed signals are converted into vectors of extracted features that can be used by the intelligent emotion recognition module to determine the subject's emotions. The selected features combine simple statistics with more complex characteristics related to the nature of the physiological signals and the underlying classification problem. Generally there are two approaches to extracting features from the EEG time series: frequency analysis and time analysis. Time-domain methods based on parametric models are very useful for feature extraction. Anderson et al. [29] modeled the EEG signal using autoregressive (AR) models with sixth-order coefficients. AR-based estimation reduces spectral loss and gives better frequency resolution; compared to the FFT, an AR model requires only short data records [30]. Among parametric approaches, the AR model with time-varying coefficients is the common method for parametric spectral estimation of non-stationary signals. Its significant drawback is the difficulty of establishing an appropriate model for different EEG signals.
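As a hedged sketch of AR-based feature extraction (not the authors' exact procedure), the Yule-Walker equations below recover the coefficients of a known AR(2) process from simulated data; the sixth-order model of [29] would simply use a higher order:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients from the autocorrelation (Yule-Walker) equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # r[0], r[1], ...
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])  # x[n] ~ sum_k a[k] * x[n-1-k]

# Recover the coefficients of a known AR(2) process from simulated data
rng = np.random.default_rng(0)
n, a1, a2 = 8000, 0.6, -0.3
x = np.zeros(n)
for k in range(2, n):
    x[k] = a1 * x[k - 1] + a2 * x[k - 2] + rng.standard_normal()
a_hat = yule_walker(x, order=2)
print(np.round(a_hat, 2))  # close to [0.6, -0.3]
```

The estimated coefficients themselves, or the spectrum they imply, can then serve as feature vectors for classification.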
Frequency-domain approaches are quite popular in recent applications. The aim of signal analysis by this method is to extract relevant information from a signal by transforming it to the frequency domain, where the features are extracted. The most common form of frequency analysis is power spectral analysis via the Fourier transform, which is widely used for standard quantitative analysis of the spectral decomposition of EEG signals [31, 32, 33]. However, this well-known method is valid only for stationary signals and linear random processes, and it cannot measure time and frequency simultaneously. Some methods make a priori assumptions about the signal to be analyzed; this may yield sharp results when the assumptions hold, but it is obviously not of general applicability. With the development of modern signal processing techniques, there have been attempts to automate the recognition of transients in EEG signals. Generally, there are three main methods for analyzing the time-dependent spectrum of non-stationary signals: (i) the short-time Fourier transform (STFT), (ii) the Wigner-Ville distribution (WVD), and (iii) time-varying parametric models.
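The fixed-window behavior of the STFT can be illustrated with a plain sliding FFT (an illustrative toy, not a full spectrogram implementation; signal and window length are invented):

```python
import numpy as np

fs = 128
t = np.arange(0, 2, 1 / fs)
x = np.concatenate([np.sin(2 * np.pi * 5 * t),    # 5 Hz for the first 2 s
                    np.sin(2 * np.pi * 20 * t)])  # 20 Hz for the next 2 s

win = 256  # fixed window length: the fixed-resolution limitation discussed above
peaks = []
for start in range(0, len(x) - win + 1, win):
    seg = x[start:start + win] * np.hanning(win)       # taper each segment
    freqs = np.fft.rfftfreq(win, 1 / fs)
    peaks.append(freqs[np.argmax(np.abs(np.fft.rfft(seg)))])
print(peaks)  # [5.0, 20.0]: the dominant frequency in each window
```

The single window length `win` sets both the time resolution (2 s) and the frequency resolution (0.5 Hz) at once, which is exactly the trade-off the wavelet transform relaxes.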
The STFT assumes stationarity of the signal within a temporal window matched to the time-frequency resolution chosen for the spectral estimate; its main limitation is that this resolution is fixed. The WVD offers good time-frequency concentration and edge characteristics, but for multi-component signals it introduces cross-term interference, which can lead to misinterpretation of the signal's time-frequency features.
A powerful method for time-scale analysis of signals was proposed in the late 1980s: the wavelet transform. Acting as a mathematical microscope for analyzing neural rhythms at different scales, it has proven to be a powerful tool for investigating small-scale oscillations of brain signals. The transform is well suited to time-series analysis and overcomes some of the above drawbacks through its variable window size and time-frequency filtering properties [34, 35]. Through wavelet decomposition of EEG records, transient features are accurately captured and localized in both time and frequency, which results in excellent feature extraction from non-stationary signals such as EEGs [36].

4.4.1. Wavelet Transform (WT)


The wavelet transform is a spectral estimation technique in which any general function can be expressed as an infinite series of wavelets. The basic idea is to express the signal as a linear combination of a particular set of functions, obtained by shifting and dilating one single function called the mother wavelet. This decomposition yields a set of coefficients called wavelet coefficients, so the signal can be reconstructed as a linear combination of the wavelet functions weighted by those coefficients. For a perfect reconstruction of the signal, an adequate number of coefficients must be computed [37]. Several methods are available for analyzing the information in EEG signals. Since the early days of automatic EEG processing, representations based

on the Fourier transform (FT) have been most commonly applied. This approach relies on the fact that the EEG spectrum contains characteristic waveforms that fall primarily within the bands Alpha, Beta, Gamma, Delta, and Theta. However, the Fourier transform and its discrete version suffer from large noise sensitivity [29].
The wavelet transform, by contrast, is a two-dimensional time-scale method for analyzing non-stationary signals with adequate scale values and shifting in time [38, 39]. It can therefore be used as a powerful tool for characterizing both the frequency and time components of EEG signals [40]. Its importance lies in multi-resolution analysis (MRA), in its ability to analyze signals containing discontinuities through a variable window size, and in its localization of information in the time-frequency plane, features not offered together by the other transforms [41].
Mathematically speaking, the wavelet transform is a convolution of the wavelet function ψ(t) with the signal x(t). Orthonormal dyadic discrete wavelets are associated with scaling functions φ(t); the scaling function can be convolved with the signal to produce the approximation coefficients S. The scaling function φ(t) is defined as
φ(t) = Σ_k h(k) φ(2t − k)                                (1)
and the wavelet function ψ(t) as
ψ(t) = Σ_k g(k) φ(2t − k)                                (2)
where h(k) and g(k) are the low-pass and high-pass filter coefficients, respectively, and the admissibility condition must be satisfied.
The Mallat algorithm, based on orthogonal wavelets, has been widely used in many areas of non-stationary EEG signal processing [38, 39, 42, 43]. However, the orthogonal decomposition of EEG signals cannot isolate the exact basic rhythms of the spontaneous EEG: the algorithm resolves the low-frequency bands but not the high-frequency content.
The discrete wavelet transform decomposes a signal onto a set of basis functions called wavelets, created by expanding, contracting, and shifting a single prototype function (the mother wavelet) specifically selected for the signal under consideration. A wavelet is a rapidly decaying oscillatory function given by [43, 44, 45]
ψ_{a,b}(t) = (1/√|a|) ψ((t − b)/a),    a, b ∈ R, a ≠ 0            (3)
where a is the scale parameter, R is the set of real numbers, and the analyzing wavelet function is centered at time b. The wavelet transform of a signal f(t) is defined as
W_f(a, b) = ∫_{−∞}^{+∞} f(t) ψ*_{a,b}(t) dt = (1/√|a|) ∫_{−∞}^{+∞} f(t) ψ*((t − b)/a) dt            (4)
At a given scale a, W_f(a, b) can be interpreted as a filtered version of the signal, band-passed by the filter ψ_{a,b}(t).
The large number of known wavelet families and functions provides a rich space in which to search for a wavelet that efficiently represents the signal of interest in a wide variety of applications. Wavelet families include the biorthogonal, Coiflet, Haar, Symlet, and Daubechies wavelets, among others. There is no absolute way to choose a particular wavelet; the choice depends on the application. The only requirement is that the wavelet satisfies an admissibility condition; in particular, it must have zero mean. The Haar wavelet algorithm has the advantage of being simple to compute and easy to understand. The Daubechies algorithm is conceptually more complex and has a slightly higher computational overhead, but it picks up detail that the Haar wavelet misses, and even if a signal is not well represented by one member of the Daubechies family, it may still be efficiently represented by another. Selecting a wavelet function that closely matches the signal to be processed is of utmost importance in wavelet applications [37]; the Daubechies wavelets, for instance, are similar in shape to the QRS complex and concentrate their energy around low frequencies.
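A one-level Haar decomposition, the simplest instance of the filter-bank scheme in Eqs. (1)-(2), can be written directly; this is a minimal sketch (library implementations such as PyWavelets handle boundary effects more carefully):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: low-pass (approximation) and high-pass (detail)."""
    x = np.asarray(x, dtype=float)
    h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (scaling) filter h(k)
    g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (wavelet) filter g(k)
    approx = np.convolve(x, h[::-1])[1::2]  # filter, then downsample by 2
    detail = np.convolve(x, g[::-1])[1::2]
    return approx, detail

a, d = haar_dwt([4, 6, 10, 12])
print(np.round(a, 3))  # [ 7.071 15.556] -> scaled pairwise averages
# Orthonormality preserves energy: sum(a^2) + sum(d^2) == sum(x^2)
```

Applying the same split recursively to the approximation coefficients yields the multi-resolution pyramid used to separate the EEG rhythms.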
The joint time-frequency resolution of the wavelet transform makes it a good candidate for extracting both the details and the approximations of a signal, which cannot be obtained by methods such as the fast Fourier transform (FFT) or the short-time Fourier transform (STFT): its window size varies over the length of the signal, allowing the wavelet to be stretched or compressed depending on the signal's frequency [46, 47]. The time-frequency resolution of the STFT and the WT is shown in Fig 14. Typically, the amplitude variance is larger for 'Joy' and 'Anger' than for 'Sorrow' and 'Relaxation'.
In [48], the researchers used the concept of the Band Relative Intensity Ratio (BRIR) with multi-resolution time-frequency analysis (MRTFA) based on the wavelet packet transform to extract the various frequency bands in the low-frequency region. Besides time-domain and frequency-domain methods, there is another approach: fractal analysis. Fractal analysis is a scientific paradigm that has been used successfully in many domains, including the biological and physical sciences. Fractals are objects that possess a form of self-scaling: parts of the whole can be made to fit the whole by shifting and stretching. A fractal description of an EEG signal can be a useful tool for feature extraction [49]. Fractal features represent the morphology of the signals, and these morphological differences can be picked up and used by several applications. Several features based on fractal theory and morphological analysis can be extracted from a signal [50]. The fractal dimension, a measure of the self-similarity of a signal, has proven useful for quantifying the complexity of dynamical signals in biology and medicine.
The researchers in [51] used a time-series approach for analyzing EEG signals on both 1D and 2D Cartesian grids of electrodes. In that work, spline-based wavelet transforms were used to tune the window size in order to separate spikes into their component frequencies with high computational speed. One problem in applying the WT to neural signals, identified by W. Przybyszewski, is the lack of a consistent methodology for handling the pervasive noise recorded from the brain, whether from the summated field potential or from neuron spike-train data. Extracting the various frequency bands from the central nervous system was first tested in animals by Dixon et al. [52], who used the wavelet transform in addition to the fast Fourier transform to analyze the data; the features extracted from the EEG signals were the average power and its relative differences. They concluded that the wavelet transform is a powerful tool for studying the activity of the brain before and after critical exposure to a drug.
The basic features derived from EEG signals typically rely on separating the various frequency bands [53]. After the bands are separated, basic parameters such as energy, power, average power, standard deviation, mean, variance, and skewness are determined from the power spectra of the signals and used to distinguish the functional state of the brain. The authors observed that 'Joy' and 'Anger' have a large variance of amplitude while 'Sorrow' and 'Relaxation' have a small variance; they also considered the variance and average amplitude as separate features for classifying the emotions. The wavelet packet transform has also been used to extract features such as energy [54], distances [55], or clusters [56]. To derive features from the various bio-signals, a common set of feature values is processed and used as an additional input, or as a substitute for the raw signal, in classification.
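Such a feature set can be sketched as follows (an illustrative helper, not the authors' exact feature definitions; the test segment is a toy sinusoid):

```python
import numpy as np

def eeg_features(x):
    """Basic statistical features of a (band-limited) EEG segment."""
    x = np.asarray(x, dtype=float)
    mean, std = x.mean(), x.std()
    energy = np.sum(x ** 2)
    return {"mean": mean,
            "variance": x.var(),
            "std": std,
            "energy": energy,
            "power": energy / len(x),
            "skewness": np.mean(((x - mean) / std) ** 3) if std > 0 else 0.0}

seg = np.sin(2 * np.pi * 10 * np.arange(256) / 128)  # toy 10 Hz segment
feats = eeg_features(seg)
print(round(feats["power"], 3))  # 0.5 for a unit-amplitude sinusoid
```

Computing this dictionary once per frequency band yields the kind of feature vector that is fed to the classifiers discussed below.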

4.5. Feature Reduction


The main objective of data reduction is to reduce the amount of data generated by the wavelet transform without losing the original information in the signal features. In practice, all data reduction methods lose some useful features in the process. One way to reduce the number of wavelet coefficients used for feature extraction is to prescribe a 'stopping criterion' in the form of a thresholding operation [57]. This form of data reduction also removes the noise present in the original EEG signals via wavelet denoising, which uses the threshold value σ(2 log L)^{1/2}, where σ is the standard deviation of the noise and L is the length of the data vector [30]. Principal Component Analysis (PCA) is also used to reduce the dimensionality of the wavelet coefficients [58]. Some researchers have used Linear Discriminant Bases (LDB) and Joint Best Bases (JBB) to reduce the feature space; the LDB uses 'distance' measures among the energy distributions of the signal classes as the criterion for finding optimal bases.
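The threshold σ(2 log L)^{1/2} above can be sketched with soft shrinkage (a minimal illustration on invented coefficients; real pipelines apply it per decomposition level):

```python
import numpy as np

def soft_threshold(coeffs, sigma):
    """Apply the universal threshold sigma * sqrt(2 log L) with soft shrinkage."""
    coeffs = np.asarray(coeffs, dtype=float)
    thr = sigma * np.sqrt(2 * np.log(len(coeffs)))
    # shrink toward zero; anything below the threshold becomes exactly zero
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

rng = np.random.default_rng(0)
# Two large "signal" coefficients buried in 1022 small noise coefficients
c = np.concatenate([[10.0, -8.0], 0.5 * rng.standard_normal(1022)])
kept = soft_threshold(c, sigma=0.5)
print(np.count_nonzero(kept))  # only a handful of coefficients survive
```

The large coefficients carry the signal and survive, while almost all noise coefficients are zeroed, which is simultaneously denoising and data reduction.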

4.6. Emotion Classification


Many methods are available for classifying the features extracted from the EEG signal, including Support Vector Machines (SVM), Neural Networks (NN), Linear Discriminant Analysis (LDA), and Genetic Algorithms (GA). The Multi-Layer Perceptron (MLP) network is also used to classify EEG features: because of its ability to learn and generalize, its small training-set requirements, its fast operation, and its ease of implementation, it is the most commonly used neural network, and it has been able to describe the alertness level of an arbitrary subject. SVMs have a solid foundation in statistical learning theory and are guaranteed to find the optimal decision function for a set of training data, given a set of parameters determining the operation of the SVM; hence they classify emotional features derived from the EEG better than LDA and conventional NNs [58]. Genetic algorithms also perform well in classifying the features. Recent work uses Radial Basis Function (RBF) networks for EEG classification, because these networks train rapidly, usually orders of magnitude faster than an MLP, while exhibiting none of the MLP's training pathologies such as paralysis or local minima.
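Of the classifiers above, LDA is simple enough to write out in full; the sketch below fits a two-class Fisher discriminant on invented, well-separated feature clouds (the class labels are purely illustrative):

```python
import numpy as np

def lda_fit(X0, X1):
    """Two-class Fisher discriminant: projection direction w and threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    thr = w @ (m0 + m1) / 2          # midpoint between projected class means
    return w, thr

rng = np.random.default_rng(0)
X0 = rng.standard_normal((50, 2))            # e.g. "relaxation" feature vectors
X1 = rng.standard_normal((50, 2)) + [5, 5]   # e.g. "anger" feature vectors
w, thr = lda_fit(X0, X1)
acc = ((X1 @ w > thr).mean() + (X0 @ w <= thr).mean()) / 2
print(acc)  # near 1.0 on this well-separated toy data
```

An SVM replaces the midpoint threshold with a maximum-margin boundary (and, with an RBF kernel, a nonlinear one), which is why it tends to outperform LDA on overlapping emotion classes.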

5. Applications
The use of emotional understanding in computers is a field of increasing importance. In many ways emotions are one of the last and least explored frontiers of intuitive human-computer interaction. Human emotions are considered a powerful tool for enhancing communication between humans and computer applications. In recent years, a growing interest has developed in recording, detecting, and analyzing brain signals to investigate, explore, and understand human motor control systems, with the aim of building interfaces that use real-time brain signals to generate commands to control and/or communicate with the environment [59].
Analysis of brain waves is mainly used to enable people who are physically disabled, for example by paralysis, stroke, or other brain disorders, to interact with real-world applications such as controlling household equipment, playing games, making or receiving phone calls, and using the computer. The standard keyboard/mouse model of computer use is not only unsuitable for many people with disabilities but also somewhat clumsy for many tasks regardless of the capabilities of the user. EEG signals provide one possible means of human-computer interaction that requires very little in terms of physical ability.
Brainwaves are also used for lie detection in police inquiries and for detecting mental fatigue in pilots and drivers. In the medical field, they are useful for analyzing human diseases at the psychological, physiological, and psychophysiological levels. By taking users' emotional states as inputs, real-time systems may be able to adapt their behavior, allowing users to experience the interaction in a sensible way. The slow adoption of emotion in computing can perhaps be explained by the fact that computers are traditionally viewed as logical and rational tools, something incompatible with the often irrational and seemingly illogical nature of emotions [60].

Figure 11: Basic Brain Computer Interface Model

The detection of emotion is becoming an increasingly important field for human-computer interaction as the advantages that emotion recognition offers become more apparent and realizable. The basic system for a Brain-Computer Interface is shown in Fig 11. Emotion recognition can be achieved by a number of methods.

6. Our Study to Elicit Emotions and Capture Physiological Signals Data


After reviewing the related literature, we designed our own experiment to find a mapping between physiological signals and experienced emotions. In our experiment we used movie clips and standard picture databases to elicit the targeted emotions: sadness, anger, happiness, and fear. The physiological signals (brain waves) were collected from five healthy subjects aged between 23 and 27. The international 10-20 system was followed in placing 64 electrodes, via an electrode cap, on the subject's scalp. The collected recordings underwent preprocessing to remove artifacts, noise, and other external interference. The preprocessed signal was subjected to the wavelet transform to extract features such as the mean, median, analysis of variance, standard deviation, average energy, and average power of the EEG signals. The feature vectors derived from the wavelet transform are given as input to an artificial-intelligence network to classify the four emotions. Although classifying emotions from brain waves is a very new area, we have decided to pursue it toward real-time application.

7. Conclusion and Future Research


The contributions of this paper include not only a discussion of emotion recognition from the EEG signal but also findings on effective recording of physiological signals, feature extraction through the wavelet transform, data reduction, feature classification through various methods, real-time applications, and the scope for future research. In future work we plan to use children as subjects, deriving the physiological signal through wireless EEG monitoring in order to classify the basic emotions.
During the last few years, studies on EEG machines have shown that EEG recorders based on personal computers (PCs) must communicate with the medical instrument through the computer's I/O interface. Such systems usually adopt a wired serial interface, such as the RS-232C standard, to transmit the measured EEG signal, and they are inconvenient in general use because of the transmission lines between the instrument and the brain-activity measurement device. When conventional EEG acquisition equipment is transferred to a portable device, such as a personal digital assistant (PDA), wired transmission causes further inconvenience in mobile use. If the technical advantages of wireless communications, such as Bluetooth technology, are exploited, the application field of the EEG machine can be extended much more widely. In addition, computers usually lack an effective program to read, analyze, and display the EEG signals stored in conventional EEG machines. If the recorded EEG data could be treated more completely, the serviceability of the EEG acquisition system would be enhanced significantly.

References
[1] Kanwisher N, McDermott J, and Chun M.M, 1997. “The fusiform face area: a module in
human extrastriate cortex specialized for face perception”, Journal of Neuroscience, 17, pp,
4302- 4311.
[2] Haxby J.V, Hoffman E.A, and Gobbini M.I, 2000. “The distributed neural system for face
perception”, Trends Cognitive Neuroscience, 4, pp, 223-233.
[3] Picard R.W, Vyzas E, and Healey J, 2001. “Towards Machine Emotional Intelligence: Analysis
of Affective Physiological State”, IEEE Transactions on Pattern Analysis and Machine
Intelligence, 23(10), pp, 1175- 1191.
[4] Dr. R.Newport, Human Social Interaction perspectives from neuroscience, www.psychology.
nottingham.ac.uk/staff/rwn.
[5] Marcel S, Jose del and R.Millan, 2006. “Person Authentication Using Brainwaves (EEG) and
Maximum A Posteriori Model Adaptation”, IEEE Trans on Pattern Analysis and Machine
Intelligence, Special issue on Biometrics, pp, 1-7.
[6] Jenkins J.M, Oatley K, and Stein NL, 1998. “Human Emotions: A Reader”, Blackwell Publishers.
[7] Cacioppo C.J, Tassinary LG, 1990. “Inferring Physiological Significance from Physiological
Signals”, American Psychologist.
[8] Ekman P, Levenson R.W, and Freison W.V, 1983. “Autonomic Nervous System Activity
Distinguishes Among Emotions”, Journal of Experimental Social Psychology, pp, 195-216.
[9] Winton WM, Putnam L, and Krauss R, 1984. “Facial and Autonomic Manifestations of the
dimensional structure of Emotion”, Journal of Experimental Social Psychology, pp, 195-216.
[10] Richins, Marsha L, 1997. "Measuring Emotions in the Consumption Experience," Journal of
Consumer Research, 24 (September), 127-146.
[11] Picard R.W, 2000. “Affective Computing”, MIT Press.
[12] Takahashi K, Tsukaguchi A, 2003. “Remarks on Emotion Recognition from Multi-Modal Bio-
Potential Signals”, IEEE Trans on Industrial Technology, 3, pp, 1654-1659.
[13] Savran A, ciftci K, Chanel G, Javier Cruz Mota, Luong Hong Viet, Sankur B, Akarun L,
Caplier A, and Rombaut M, 2006. “Emotion Detection in the Loop from Brain Signals and
Facial Images” eNTERFACE’06.
[14] Orrison Jr. W. W, Lewine J. D, Sanders J. A and Hartshorne M. F, 1995. “Functions Brain
Imaging”, St Louis: Mosby-Year Book, Inc.
[15] Lopes da Silva F. H, Van Rotterdam A, 1982. “Biophysical aspects of EEG and MEG
generation”. In E. Niedermeyer and F. H. Lopes da Silva, editors, Electroencephalography,
pages 15-26. Urban & Schwarzenberg, München-Wien- Baltimore, 1982.
[16] Doyle J. C, Omstein R, and Galin D, 1974. “Lateral specialization of cognitive mode: I1 EEG
frequency analysis,” Psychophysiology, 11, pp. 567-578.
[17] Ehrlichman H, Wiener M. S, 1980. “EEG asymmetry during covert mental activity,”
Psychophysiology, 17, pp. 228-235.

[18] Kraft R. H, Mitchell O. R, Languis M. L, and Wheatley G. H, 1984. “Hemispheric asymmetry during six- to eight-year old performance of piagetian conservation and reading tasks,” Neuropsychological, 22, pp. 637-643.
[19] Gott P. S, Hughes E. C, and Whipple, 1984. “Voluntary control of two lateralized conscious
states: Validation by electrical and behavioral studies,” Neuropsychological, vol. 22, pp. 65-72.
[20] Ekman P, Friesen W, 1978. “Pictures of Facial Affect”, consulting physiologists press, Palo
Alto, CA.
[21] Yuankui Y, Jianzhong Z, 2005. “Recognition and Analyses of EEG & ERP Signals Related to
Emotion: From the perspective of Psychology”, IEEE Transactions on Signal Processing, pp,
96-99.
[22] Niemic C.P, 2002. “The theoretical and empirical review of psychophysiological studies of
emotion”, 1(1), Journal on Clinical and Social Psychology, pp, 15-18.
[23] Kandel E. R, Schwartz J. H, and Jessell T. M, 1991. “Principles of Neural Science”, 3rd ed.
New York: Elsevier/North-Holland.
[24] Lisha Sun, Minfen Shen, 2002. “Analysis of Non Stationary Electroencephalogram using the
Wavelet Transformation”, ICSP’02 Proceedings.
[25] Saltzberg B, 1957. “A New Approach to Signal Analysis in Electroencephalography”, IRE
Trans, on Medical Electronics, 8, pp, 24-30.
[26] Shen M. et. Al, 2001. “Method for extracting time-varying rhythms of electroencephalography
via wavelet packet analysis”, IEEE Proc on Science Measurement Tech, 148, (1).
[27] Mc Names J, Thong T, and Aboy M, 2004. “Impulse rejection filters for artifact removal in
spectral analysis of biomedical signals” IEEE Annual International Conference on EBMC, 1,
Sep 1-5, pp, 145-148.
[28] Jung T, 2000. “Removing Electroencephalographic Artifacts by Blind Source Separation”,
Psychophysiology, 37(2), pp, 163-178, 2000.
[29] Thakar N.V, 1993. “Multiresolution Wavelet Analysis of evoked potential”, IEEE Transactions
on Bio. Medical. Engg, 40(11), pp, 1085- 1093.
[30] P.Jahankhani, V.Kodogiannis, K.Revett, 2006. “EEG Signal Classification Using Wavelet
Feature Extraction and Neural Networks”, IEEE Trans on Modern Computing, JVA’06, pp,
120-124.
[31] Paraday J, Robert S, Taranssenka L, 1996. “A Review of Parametric Modeling Techniques for
EEG Analysis”, Medical Engineering Physics, 18(1), pp, 2-11.
[32] Jung T.P, 1997. “Estimating alertness from the EEG power spectrum”, IEEE Trans, Biomedical
Engg, 44(1), Vol 44, pp, 60-69.
[33] Adeli H, Zhou Z, Dadmehr N, 2003. “Analysis of EEG records in an epileptic patient using
wavelet transform,” , J. Neuroscience Methods, 123, 1, pp. 69–87.
[34] Burns C.S, Gopinath R.A, Guo H, 1998. Introduction to wavelets and Wavelet Transforms,
Prentice-Hall.
[35] Blinswska K.J, Durka P.J, 1994. “Application of wavelet transform and matching pursuit to the
time- varying EEG signals” Proce, of conference on artificial neural networks in engineering,
USA, pp, 535-540.
[36] Tseng S, Chen R, Chong F, 1995. “Evaluation of Parametric Methods in EEG signal Analysis”,
Medical Engg Physics, 17(1), pp, 71-78.
[37] Anderson C.W, Sijiercic Z, 1997. “Classification of EEG signals from Four Subjects during
Five Mental Tasks”.
[38] Clark L, 1995. “Multiresolution decomposition of non- stationary EEG signals: a preliminary
study”, Comput. Bio. Med, 24, (4), pp, 372-382.
[39] Burns C.S, Gopinath R.A Guo H, 1998. “Introduction to wavelets and Wavelet Transforms”,
Prentice-Hall.
[40] Rioul O, Martin V, 1991. “Wavelets and Signal processing”, IEEE Signal processing
Magazine, pp, 14-38.

[41] DA-Zeng, Ming-Hu Ha, 2004. “Applications of Wavelet Transform in Medical Image
Processing”, IEEE Proce on Machine Learning and Cybernetics, 3, pp, 1816-1821.
[42] Unser.M , Aldroubl.A, 1996. “A review of wavelets in biomedical application”, Proce IEEE,
84, (4), pp, 626-638.
[43] Vetterli M, Kovacevic J, 1995. “Wavelets and Sub-band Coding”, Englewood Cliffs, NJ:
Prentices Hall.
[44] Blanko.S, 1996. “Time –Frequency Analysis of electroencephalograms series”, Phys Rev E. 54,
pp, 6661-6672.
[45] Grap A, 1995. “An Introduction to Wavelets”, IEEE Computer Science and Engg., 2(2).
[46] Mallat S. G, 1989. “A theory for multiresolution signal decomposition: the wavelet
representation,” , IEEE Trans on Pattern Anal. & Mach. Intelligence, 11(7), pp, 674–693.
[47] Qin S, Ji Z, 2004. “Multi-Resolution Time-Frequency Analysis for Detection of Rhythms of
EEG Signals”, IEEE Proce on Signal Processing, 11’th International DSP Workshop, Vol 4,
pp, 338-341.
[48] Adlakha A, 2002. “Single trial EEG classification,” Tech. Rep., Swiss Federal Institute Of
Technology.
[49] P. Smreka, 2002. “Fractal and Multifractal analysis of Heart Rate variability in Extremal States
of the Human Organism”, Ph.D. Thesis, Czech Technical University in Prague.
[50] Schiff S.J, 1994. “Wavelet Transforms For Epileptic Spike and Seizure Detection”, IEEE
Proce, pp, 1214-1215.
[51] Dexon T.L, Livezey G.T, 1996. “Wavelet- Based Feature Extraction for EEG Classification”,
IEEE Proce on EMBS, 3, pp, 1003-1004.
[52] Ishino K, hagiwara M, 2003. “A Feeling Estimation System Using a Simple
Electroencephalograph”, IEEE Proce, pp, 4204-4209.
[53] Learned R.E, Willsky A.S, 1995. “A Wavelet Packet Approach to transient signal
classification”, Applied Computer Harmonic Analysis, 2, pp, 265-278.
[54] Cocchi M, Seeber R, Ulrici A, 2001. “WPER: Wavelet Packet Transform for Pattern
Recognition of Signals”, Chemometrics Intel Lab System, 57, pp, 97-119.
[55] Pittner S, Kamarthi S.V, 1999. “Feature Extraction from Wavelet Co efficient for Pattern
Recognition Task”, IEEE Transaction on Pattern Recognition Analysis and Machine
Intelligence, 21, pp, 83-88.
[56] Juang B.H, Soong F.K, 2001. “Hands-free Telecommunications” HSC 2001, pp.5-10.
[57] Herrera R.E, Sclabassi R.J, Sun M, Dahl R.E, Ryan N, 1999, “Single Trial Visual Event-
Related Potential EEG Analysis Using The Wavelet Transform”, IEEE Proce on BMES/EMBS,
2, pp, 947.
[58] Wolpaw J.R, Birbaumer N, McFarland D.J, Pfurtscheller G, Vaughan T.M, 2002. “Brain-Computer Interfaces for Communication and Control”, Clinical Neurophysiology, 113, pp, 767-791.
[59] Zheng P, Li X.P, Soh W J, Shen KQ, Ong C J, Wilder-Smith E.P.V, 2006. “Lie Detection
Using EEG and Support Vector Machine”, IEEE Trans on Biomedical Engineering.
[60] Takahashi K, Tsukaguchi A, 2003. “Remarks on Emotion Recognition from Multi-Modal Bio-
Potential Signals”, IEEE Trans on Industrial Technology, 3, pp, 1654-1659.