
B.E.

Electronic Engineering Project

Development of a Driver Alert System for Road Safety

Martin Gallagher

Supervisor: Dr. Edward Jones

Co-Supervisor: Dr. Martin Glavin

2007

Acknowledgements

I wish to thank all the staff of the Electronic Engineering Department, and my supervisor, Dr. Edward Jones, for his guidance and advice.

I thank my family and friends for their constant support and encouragement
throughout my studies.


Statement of Originality

I declare that this thesis is my original work, except where stated.

Signature……………………… Date………………………


Abstract
In this report a multi-sensor driver fatigue detection system is proposed. This is achieved using data acquired from a camera facing the driver, together with a number of auxiliary sensors, to give a more robust analysis through sensor fusion, thus creating a better picture of the real-world situation.

Images are captured, pre-processed and analysed using image processing techniques. This results in the detection of the driver's pupils or, in the case of a sleepy driver, detection of their absence. Sensors in the steering wheel are analysed to determine the amount of grip the driver is applying to the wheel.

Combining these data feeds, a classification of the results is made to determine the level of fatigue the driver is experiencing. Should the detected fatigue exceed a safe level, the system takes remedial action through an alarm procedure.

This report explains the major technical and physiological areas relevant to the
project. An analysis of the hardware and software component selection and the
implementation of these components into the system are discussed. Issues that arose
during the project are presented and the possibilities to develop, expand and integrate
the system into real world solutions are explored.



Table of Contents
Section 1: Introduction ................................................................................................2

1.1 Project Specification ......................................................................................3


1.2 Area of application.........................................................................................4
1.3 Functionality ..................................................................................................5

Section 2: Concepts and Background Information...................................................7

2.1 Eyes......................................................................................................................8
2.2 Reactions............................................................................................................10
2.3 Sleep and Road Accidents .................................................................................11
2.4 Vehicles and microprocessors............................................................................14

Section 3: Processes....................................................................................................16

3.1 Hough Transform...............................................................................................17


3.2 Eye Detection Algorithms..................................................................................21
3.3 Pressure Sensor Analysis ...................................................................................24

Section 4: Design and Implementation ....................................................................26

4.1 Camera ...............................................................................................................27


4.2 Sensors ...............................................................................................................32
4.3 ADuC 8031 ........................................................................................................35
4.4 MATLAB...........................................................................................................39

Section 5: Analysis .....................................................................................................43

5.1 Results Analysis – Software ..............................................................................43


5.2 Results Analysis – Hardware .............................................................................44
5.3 Alarm Procedures...............................................................................................45
5.4 Testing................................................................................................................46

Section 6: Conclusion.................................................................................................48

6.1 Achievements.....................................................................................................49
6.2 Further Development .........................................................................................50
6.3 In Conclusion .....................................................................................................51
References ...................................................................................................................52

Bibliography ...............................................................................................................53

Section 1: Introduction


1.1 Project Specification

The purpose of this project is to investigate the development of a system for


detecting the likelihood that a driver is about to fall asleep in control of the vehicle,
and to sound an alarm or carry out some other function if this occurs. The system will
be primarily based on the use of a small camera mounted on the vehicle dashboard
which will locate and "track" the driver's eyes, and on this basis attempt to detect if
the driver is about to fall asleep.

For example, if the eyes close and remain closed for a certain period of time,
this may indicate that the driver has fallen asleep and that a crash is imminent.
Alternatively, if the eyes start to fall towards the bottom of the image (in a video
sequence), this might suggest that the driver's head is starting to droop. At the same
time, the system should not react inappropriately to "natural" movement of the
driver's eyes, e.g. if the driver turns his/her head to look out the side window.

The reliability and robustness of the system may be enhanced by making use
of additional sensing devices and combining all of the information in a "sensor
fusion" environment. For example, pressure sensors could be located in the steering
wheel to indicate how "tightly" the wheel is being gripped; if the pressure suddenly
drops, this may indicate that the driver is relaxing his/her hands because of fatigue.
On the other hand, the pressure may drop simply because the driver is relaxing, so
visual information may be required to "confirm" the hypothesis that the driver is
falling asleep. Initial algorithm development will be carried out in Matlab, with the
intention of porting some of the functionality to a suitable embedded system.


1.2 Area of application

Road users have long been known to fall asleep whilst driving. Driving long hours can induce fatigue, causing lack of concentration and, occasionally, road accidents.

This is even more critical on motorways, where traffic travels at higher speeds and drivers can succumb to “motorway hypnosis”: the repetitive and somewhat passive experience of driving on long, wide, straight roads can cause the driver to relax and lose concentration on the road and traffic around them.

This allows drivers who are legally fit to drive, but physically are not, onto the roads. Many of these drivers are unaware of their fatigue and of the danger they pose to themselves and other road users. The main application of this system is to protect such people by alerting them to their fatigue.


1.3 Functionality

The functionality of the system is illustrated in Figure 1.1.

A standard CMOS (Complementary Metal Oxide Semiconductor) camera is mounted in the front of the car to view the driver's face without obstructing their view of the road ahead.

The frame grabber receives the camera feed from which it takes in the current
image data in digital form and allows access to the data for processing. This
procedure repeats itself at the start of every cycle once the previous frame is
processed.

The image is captured and taken into the system where it is pre-processed before the Hough transform is applied; the image is reduced in size, using cropping tools, to the region surrounding the eyes.

The cropped image is then converted from RGB (Red Green Blue) colour to a greyscale image with 256 levels of intensity. This reduces the amount of data in the image while retaining much of the critical information needed.

As the colour image is captured, data from the pressure sensors located on the steering wheel is sampled at a rate 10 times greater than that of the camera. The sensors provide a voltage reading corresponding to the level of pressure exerted by the user; this sampled data has 2^12 (4096) different levels. This is reduced to 25 levels to lower the complexity of the data, control unnecessary sensitivity in the sensors and acquire a reliable reading of the pressure being applied to the wheel. The voltage levels are graded against set threshold levels - ok, marginal or too light. These levels can be defined by the user to suit their style of driving.
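As an illustration, the grading step can be sketched in MATLAB. This is a minimal sketch only: it assumes the 2.5V full scale and 100mV level steps described later in Sections 4.3 and 4.4, and the grade boundaries shown are illustrative assumptions rather than the project's actual settings.

voltageSample = 1.2;                               % example averaged sensor reading in volts (assumed)
levelStep     = 0.1;                               % 100 mV per level gives 25 coarse levels over 0-2.5 V
level         = floor(voltageSample / levelStep);  % quantise the reading to one of 25 levels
if level >= 10                                     % assumed boundary for a firm grip
    grip = 'ok';
elseif level >= 4                                  % assumed boundary for a weakening grip
    grip = 'marginal';
else
    grip = 'too light';
end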

Once the pressure data is processed and the image data pre-processed, the sensor fusion aspect of this project uses this information to vary the sensitivity of the analysis of the image data. Should the steering wheel pressure be marginal, the eye detection function will increase its scrutiny for signs of fatigue.

Once sensor data has been considered the Hough Transform is applied to the
cropped greyscale image and circular object data is referenced to a minimum
threshold to remove spurious detections and interferences.

The data is analysed further for the detection of eyes in the image. This
involves the application of the geometric features of the eye to the data, resulting in a
more accurate detection. Resulting data is then further analysed to check if the points
selected resemble eyes, using the colour data from the original image.

Combining the resulting image information and the sensor data, the fatigue detection system decides whether the alarm procedure is activated or whether it returns to the start, re-initialises and repeats the process. The functionality of this system should be unobservable to the driver during normal conditions; the driver should only become aware of the system should the alarm procedure be activated.

Figure 1.1: Overall Functional Schematic


Section 2: Concepts and Background Information

In this section some of the relevant anatomical aspects of the eye are discussed
to give a good grounding of its role and functions. This includes a brief description of
visual fatigue symptoms that a driver can experience over long car journeys that can
affect their perceptiveness and concentration levels. In addition, the relevance of reaction times and the undeniable link between sleep and road accidents are presented. The use of microprocessors in vehicles is also discussed.


2.1 Eyes

The eye is a sensory organ which is used to detect contrast in objects


illuminated by light. The human eye has many components, but for the purpose of
this report we will look at the functions of the iris, pupil, sclera, cornea, retina, lens
and their interactions with each other.

The iris makes up the coloured part of the eye. Its function is to regulate the
amount of light that enters the eye. This is achieved by regulation of the pupil, the
dark centre spot in the iris which allows light to enter the eye.

Figure 2.1 Elements of the eye.

The pupillary sphincter muscle controls the size of the pupil, automatically adjusting to control the amount of light entering the eye. This action is known as the pupillary reflex. In low light the pupillary sphincter muscle relaxes, causing the pupil to dilate and allowing more light to enter the eye and hit the retinal layer. The retina is the nerve layer that lines the back of the eye; it senses light and creates impulses that travel through the optic nerve to the brain.

In dim conditions the diameter of a healthy pupil can enlarge to 8mm. In bright light the pupillary sphincter muscle contracts, making the pupil smaller, in the region of 1.5mm, which allows the eye to function more efficiently in this condition. In ambient light conditions a normal pupil will range from 3 to 4mm in diameter.

The sclera is the outer protective layer of the eye. This layer is opaque and
usually white in healthy eyes. The optic nerve connects to the eye through this layer
and muscles attached to the sclera allow the eye to move.

The cornea is the transparent window of the eye. It sits over the pupil and iris,
transmitting and focusing light into the eye. Beneath the cornea is the lens, the
transparent structure inside the eye that focuses light rays onto the retina. The optic
nerve connects the eye to the brain. It transmits the electrical impulses from the retina
to the visual cortex in the brain.

After excessive stress on any of the functions of the eye, especially over long
car journeys, visual fatigue arises. The symptoms include painful irritation (burning)
of the eye accompanied by watering of the eye, reddening of the eye and
conjunctivitis, double vision and headaches. Visual fatigue also reduces the powers
of accommodation and convergence of the eye, sensitivity to contrast and speed of
perception. All types of visual work can contribute to visual fatigue. Driving calls
for more rapid and precise eye movements which make heavier demands on
perception, concentration and motor control [1].


2.2 Reactions

To create a system to inform and alert a driver that they are suffering from fatigue and are not in full control of their vehicle, we must look at driver reactions to determine the most appropriate method of doing this.

According to Marc Green of the University of West Virginia medical school, human perception and brake reaction time studies have reported a wide variety of results. The most important variable is the driver's expectation, which affects reaction time by a factor of two. A fully alert driver, aware of when and where an event will occur, can react in approximately 0.75 seconds. This can roughly double for unanticipated events, giving a reaction time of around 1.55 seconds [2].

These figures show that even when alert and awake, drivers can take some time to react to events, especially unexpected ones. Much of the data in the studies presented was collected for driver reaction to visual stimuli; this is of limited relevance for the purposes of this project, as a sleeping driver will have their eyes closed and cannot respond to a visual marker.

Auditory signals do, however, improve a driver's reaction time, but this relies on the driver knowing how to respond to the signal. As the driver would be fatigued, this could seriously lengthen the reaction time and compound the problem.

A solution to this is to use an audiovisual signal, using an audio signal to alert the driver and a visual signal to trigger a learned response. One such response is to a tail brake light in front of the driver. Many drivers in the studies presented reported that they found themselves depressing the brake before they were cognitively aware of the brake light in front of them [2]. This is an example of a learned response which is automatic upon a trigger.


2.3 Sleep and Road Accidents

Road safety is a topic discussed almost every day in the papers and on our airwaves, in this country and worldwide; unfortunately this is usually because of the lack of safety and precaution shown by road users.

While dangerous and drunken driving may be highly publicised, a major contributing factor in many accidents on our roads is driver fatigue. According to the Road Safety Authority, driving tired is “as lethal as driving drunk” [3].

“Up to 20% of fatal crashes may be linked to driver fatigue; the latest research indicates that driver fatigue could be a contributory factor in up to a fifth of driver deaths in Ireland. It also means that this silent killer could have been a contributory factor in almost 200 driver deaths in a five year period.” [3]

The critical points at which driver fatigue related collisions happen are between 2am and 6am, and mid-afternoon between 2pm and 4pm, when our "circadian rhythm" or body clock is at its lowest point. Males aged 18 to 30 are in the high risk category; they tend to be over-confident about their driving ability and believe they can handle the situation. Women are less likely to be involved in sleep related crashes [3].

If a driver persists in fighting sleep while driving, the impairment level is the
same as driving while over the drink drive limit. Eventually a driver will drift in and
out of consciousness and experience "micro sleeps" which can last for up to 10
seconds. In this time a driver has no control of the vehicle. Drivers can experience
such a micro sleep with their eyes wide open.

Driver fatigue not only impairs driving in a similar way to alcohol, it also magnifies the damage alcohol does. It is estimated that alcohol is twice as potent in mid-afternoon and in the early hours of the morning because we are more likely to be tired at these times. Consequently, people who think they are driving under the legal limit should be aware that even small amounts of alcohol consumed at these key times, when we are tired, combine to render a driver “totally unfit for driving” [3].


In a survey of drivers involved in accidents in Norway [4], respondents were asked whether sleep or fatigue was a contributing factor in their accident. While only 3.9% of at-fault respondents admitted that fatigue was a contributing factor overall, the figure is much higher for night-time accidents, where fatigue contributed to approximately one fifth of accidents (18.9%).

Although not all instances of driver fatigue resulted in accidents, 10% of men and 4% of women who responded to the survey admitted to falling asleep at the wheel in the 12 months prior to the survey, and 27% said they had fallen asleep whilst driving at some point in their driving lives.

Only 4% of these occurrences of driver fatigue resulted in accidents but 40%


reported crossing the boundary lines on the side of the road before regaining control
and 16% reported crossing the centre line of the road before regaining consciousness.

These figures show the increased safety risk of driving while fatigued. Drivers' unawareness of the onset of fatigue, coupled with a reluctance to rest, are pointed out as likely contributors to sleep-related accidents. This project aims to highlight to the driver that they may be suffering from fatigue, thus removing these contributory factors.

Figure 2.2 shows a graphical breakdown of the reported occurrences of falling asleep
whilst driving.


[Figure 2.2: a pie chart of the consequences of falling asleep whilst driving (sample size = 1061), with categories: crossed the outer boundary line, crossed the opposite boundary line, crossed the centre line, ran off the road, collided with another vehicle, and unspecified.]

Figure 2.2 Breakdown of results occurring from falling asleep while driving. Data Source [4]


2.4 Vehicles and microprocessors

The integration of vehicles and microprocessors began on a very basic level in 1978 with the use of a modified 6802 chip in a Cadillac Seville to drive the car's 'trip computer', a system to display mileage and other information on the dashboard [5]. From humble beginnings their integration has come a long way, and today almost every vehicle on the road contains between 30 and 50 microprocessors, improving the safety, efficiency and control of the vehicle.

Microprocessors control everything from electric windows and door locks to anti-lock braking systems and airbags. Take ABS as an example: this is a system which prevents the wheels of a vehicle locking when the driver brakes, giving greater control, and it would not be possible without the use of microprocessors. This system undoubtedly saves lives.

Microprocessors in vehicles improve efficiency in two ways: time efficiency, as finding out what is wrong with a vehicle can be done far more quickly, and fuel efficiency, which is monitored by the Engine Control Unit (ECU) [5] so that subsequent adjustments can be made. The use of microprocessors has drastically changed and simplified the control of cars and their operation; bulky electronic cables are now replaced with tiny chips with far higher capabilities.

Nowadays microprocessors and vehicles truly do go hand in hand, and the statistics speak for themselves. On average, a new car carries almost 200 pounds of electronics and more than a mile of wiring. Microprocessors are integrated into almost every aspect of vehicles; they can be found in wing mirrors, headrests, and even wheel rims.

Aside from in-car entertainment, which is developing rapidly, the future for computers and cars looks very promising. Currently in development are innovations such as wireless controls, i.e. wireless steering, and a system to warn drivers if the air pressure in their tyres drops. The development and integration of computer systems which improve driving safety is a growth area, with many new systems currently in design; this project gives testament to that.

Another of these new safety systems in development is the ‘fly by wire’


system. This system imitates one used in modern aircraft, in which a fail-safe
computer network will allow the car in which it is implemented to steer and brake ‘by
wire’. This, if implemented, would mean changing the transmission technology of a
vehicle from its mechanical and hydraulic basis to electronic. The system would have
speed and precision far greater than previous technologies and could assist the driver
in all situations in a matter of milliseconds.


Section 3: Processes

Processes used in this project include the use of the Hough transform, eye
detection algorithms and pressure sensor analysis. This section discusses how they
were implemented in the project and how they were used to best effect.


3.1 Hough Transform

The Hough transform is a technique for finding shapes in a digital image. It is usually used to find lines and curves, or other shapes that can be described by a set of parameters.

The Hough transform was initially patented as US Patent 3,069,654 in 1962 under the name “Methods and Means for Recognising Complex Patterns”. The circular Hough transform, and in particular the rho-theta parameterisation, was described by R.O. Duda and P.E. Hart in “The Use of the Hough Transform to Detect Lines and Curves”, in 1972 [6].

The simplest form of the transform is the line transform, where lines are the elements sought by the transform. Representing a line in polar form, equation 3.1 relates any point (x, y) on the line to the line's normal from the origin, which has length ρ and angle θ [7]. This normal is the dashed line in figure 3.1.

x cos θ + y sin θ = ρ (3.1)

For every point (x, y) lying on a given line, the values of ρ and θ are constant. Conversely, for a given point in the (x, y) plane we can calculate the lines passing through that point in terms of ρ and θ. By considering lines at varying angles through the point, sweeping θ over the range [0, 2π], it is then possible to calculate the corresponding value of ρ for each angle [8].

By taking a set of lines through a point and calculating the ρ and θ values for the lines at that point, a Hough space can be created; this is illustrated in figure 3.1. Distributing the results of these calculations into “bins”, and incrementing their value or “vote” for every result placed in them, an accumulation array can be created. The greater the vote value of a bin once all point and line sets have been calculated, the higher the probability that the corresponding parameters describe a line in the image.


Figure 3.1 Three points, each with six possible angle groupings. Image Source [8]

The coincidence of the pink points in the distribution graph of the results in figure 3.2 highlights the action of the bins, as the three points selected lie on the pink line. Translated to an accumulation array, this would give a normalised binary image based on the vote values of each point in the image.

Figure 3.2 The resulting distribution of results. Image Source [8]


The transform works using the fact that an infinite number of lines at unique
angles can pass through a point in space. By applying a special version of the Hough
transform, the circular Hough transform, to images, circular objects within an image
can be detected. The circle can be represented in parameterised form by:

(x - a)^2 + (y - b)^2 = r^2 (3.2)

Where (a, b) are the co-ordinates of the centre of the circle and r is the radius. This additional parameter, r, causes the parameter space, characterised by the ρ and θ parameters in the line detection, to increase from a 2D to a 3D array. Points which lie at the centres of the desired circles will accumulate votes from the intersecting loci of the searches, shown in figure 3.3(b). The centre of the desired circle is common to all the points on its circumference in the search and will appear as a peak in the accumulation array; this is shown in figure 3.3(b) as the white point.

Figure 3.3 (a) A binary image of two circles, of radius 10 and 30. (b) The circular Hough transform accumulation array of (a) at radius 30. Image Source [9]


Should the radius of the circle be unknown, or the desired circles vary in size, the transform loci will form a right cone for each point in the image, reflecting the 3D nature of the array. This is illustrated in figure 3.4. The centre of the circle is identified in the accumulation array in the same manner as for circles of known radius.

Figure 3.4 The conic nature of the multi radii search. Image Source [10]
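The accumulation process for a single, known radius can be summarised in a short MATLAB sketch. This is an illustrative outline only, not the CircularHough_Grd function used later in the project; it assumes a binary edge image as input.

function acc = circhough_sketch(edges, r)
% Vote for candidate circle centres of radius r, given a binary edge image.
[rows, cols] = size(edges);
acc = zeros(rows, cols);                     % accumulation array for this radius
[ey, ex] = find(edges);                      % co-ordinates of the edge pixels
for k = 1:numel(ex)
    for theta = 0:pi/90:2*pi                 % sweep the locus of possible centres
        a = round(ex(k) - r*cos(theta));     % candidate centre column
        b = round(ey(k) - r*sin(theta));     % candidate centre row
        if a >= 1 && a <= cols && b >= 1 && b <= rows
            acc(b, a) = acc(b, a) + 1;       % cast one vote for this candidate centre
        end
    end
end
end

The peak of the returned array then marks the most likely circle centre, for example [~, idx] = max(acc(:)); [cy, cx] = ind2sub(size(acc), idx);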


3.2 Eye Detection Algorithms

This section describes the algorithm used to refine the data returned from the Hough transform; the basic principles on which the algorithm is based are also discussed.

Data returned from the Hough transform function takes the form of two arrays: a 2xN matrix containing the co-ordinate information, and a column vector with the corresponding radii of the circles. The purpose of this screening of points is to improve the reliability of the eye detection. From results obtained in testing the system, discussed further in section 5.4, it became apparent that data returned from the function contained more than the detected eye information.

To reduce the level of error in the returned data, it was decided that the geometric properties of the face and eyes would be used to differentiate between the points the Hough transform identifies as circular objects.

Initially the points are screened based on colour information; setting a threshold to ignore points that have a greyscale value towards the white end of the spectrum removes shadows on the face and light-source reflections from the returned data.

Figure 3.5 The results from the Hough transform and their greyscale level.


Pairing data points of the same radius allows for the identification of probable matching pairs of eyes. Healthy pupils react together to any changes in light, so it is reasonable to assume they will be the same size. At this stage a point may have multiple pairings with other points.

Figure 3.6 Results of pairing points of similar radius

The angle condition is applied next; this involves removing from the selection any pairings that fail to make a relatively small angle with the horizontal. This reflects the normal orientation of the driver's eyes during normal driving conditions. A tolerance is given to allow for leaning.

Figure 3.7 Removal of Pairs that fail angle condition.

The distance condition is then applied to the pairings. This is used because the distance between the eyes is fixed and can be said to lie within a maximum and minimum range for the majority of people, i.e. most people's eyes are more than 2cm apart but less than 20cm. This is a broad range but it eliminates any outlying points.

Figure 3.8 Removal of pairs that do not fit distance criteria.

A final check on the colour data is made: the points in a pair should have similar colour information. The selection of the pair best matching the defined colours of the pupil is made at this point.

Figure 3.9 Best match selection is made.


The result is successful eye detection: the pupils are encircled in blue and their centre points are marked with a red cross.

Figure 3.10 Original image with eyes highlighted.
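The screening steps above can be summarised in a short MATLAB sketch. This is illustrative only and is not the project's pickeyes() function; it assumes the circle centres are supplied one per row as [x y] pixel co-ordinates, an 8-bit greyscale image, and the numeric thresholds shown are assumed values.

function best = pickeyes_sketch(circen, cirrad, grayimg)
maxGray   = 100;                  % assumed greyscale threshold: ignore bright detections
maxAngle  = 15;                   % assumed maximum tilt from the horizontal (degrees)
distRange = [40 200];             % assumed plausible eye separation (pixels)
% Step 1: greyscale screening removes shadows and light-source reflections.
lvl  = double(grayimg(sub2ind(size(grayimg), round(circen(:,2)), round(circen(:,1)))));
keep = find(lvl < maxGray);
best = []; bestScore = Inf;
% Steps 2-4: pair points of similar radius, then apply the angle and distance conditions.
for i = 1:numel(keep)
    for j = i+1:numel(keep)
        p = keep(i); q = keep(j);
        if abs(cirrad(p) - cirrad(q)) > 2, continue; end           % pupils should match in size
        d   = circen(q,:) - circen(p,:);
        ang = abs(atan2(d(2), d(1)) * 180/pi);
        if min(ang, 180 - ang) > maxAngle, continue; end           % pair must be roughly horizontal
        sep = norm(d);
        if sep < distRange(1) || sep > distRange(2), continue; end % plausible eye spacing
        % Step 5: prefer the darkest, most pupil-like pair.
        score = lvl(p) + lvl(q);
        if score < bestScore, bestScore = score; best = [p q]; end
    end
end
end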


3.3 Pressure Sensor Analysis

Taking sampling theory into account, the sampling rates are fixed by the hardware, but the data is processed at a dynamic rate, so the system always uses current data from the sensors.

The camera has a capture rate of 15 fps (frames per second). This physical
setting limits the number of frames that are processed per second. Any processing
rate above this would lead to repetitions of the frame but with perhaps different
pressure sensor data.

The pressure sensors are sampled at 10 times the rate of the cameras. This is
set arbitrarily to give 10 pressure samples per video frame. The purpose of this is to
minimise the risk of spikes in the voltage reading. This is achieved by taking an
average of the samples.

For example, a typical voltage reading from the sensors while there is no pressure on them, i.e. the steering wheel has been let go, would be in the range of 0.0007 to 170mV. Over 10 samples this would give an average reading of around 50mV, indicating to the system that there was no pressure on the pads. Consider a random spike in the voltage occurring on samples seven and eight, as in Table 3.1:

Sample      0    1    2    3    4    5    6    7    8    9
Value (mV) 15   25    1   15    5    3    5  800  500    4

Table 3.1 Example sample set containing a voltage spike

The average is 137.3mV, which is well within the threshold level and indicates to the system that there is no pressure on the pads. Had the averaging system not been implemented, the reading may have been taken on the seventh or eighth sample of the above example. This would have caused the system to read the pressure pads as being depressed, falsely indicating pressure on the steering wheel.


The opposite situation is also true for voltage drops on the sensors. The
averaging window can be extended to further filter the effects of ripple in the sensor
voltage, i.e. taking the average of a larger sample of levels. For the purposes of this
project and implementation it was not deemed necessary to extend averaging over a
sample larger than 10.
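A minimal MATLAB sketch of this averaging, using the sample values from Table 3.1, is shown below; the 170mV "no pressure" boundary follows the example given in the text.

samples_mV = [15 25 1 15 5 3 5 800 500 4];   % ten sensor samples taken over one video frame
avg_mV     = mean(samples_mV);               % 137.3 mV despite the spike on samples 7 and 8
if avg_mV < 170                              % below the no-pressure boundary used in the example
    disp('No pressure detected on the steering wheel pads');
end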


Section 4: Design and Implementation

This section describes the components used in the implementation of the


Driver Fatigue Detection System. In addition the process of choosing some of the
components is discussed with analysis and reasoning behind omitting other
components from the design.

Also discussed are the implementation decisions that were made and
difficulties encountered.


4.1 Camera

A number of cameras were used in the development of this system; each had different advantages.

The first camera used was a basic CMOS web cam with a picture resolution of 352x288 pixels. This was used to create test video, on which initial versions of the Hough transform were tested. The camera images were quite small, so the processing power required was considerably less than for larger still images. The small image size was also a cause of difficulty, however, as the Hough transform results proved difficult to analyse from the small amount of data that could be extracted from the image about the eye; only a small number of pixels contained the information describing the circular iris.

Figure 4.1 Image of face using standard low resolution CMOS camera


The second camera used was an OmniVision CMOS camera fitted with a fish eye lens. This camera had a higher resolution of 640x480 pixels per frame, which resolved the issue of the previous camera's low resolution.

However, test data recorded using this camera highlighted the importance of adequate lighting on the subject's face, as shadows and varying light conditions proved problematic in the detection of the eyes.

The fish eye lens gave the image curvature towards its edges; this is apparent from the skewed wall and ceiling edges in the background of figure 4.2. The curvature was at its least in the centre of the image, where the subject's face was being tracked, but despite this it was hazardous to introduce such variances into the system, given that the aim of the image capture was to detect circles. Allowing straight lines to gain curvature in the image was deemed unwise, and for this reason the camera was not used in the project.

Figure 4.2 Image taken using the OmniVision CMOS camera with fish eye lens


The last camera to be tested with the system was a high-resolution CMOS web cam with a manually variable focus lens. This was similar to the first camera used but had the higher resolution of 1028 x 840.

This camera also implemented a face tracking system which could be run in parallel with the fatigue detection system. This involved a digital zoom to the region of the face, with this frame output to the fatigue detection system, which made much of the pre-cropping of the image redundant. It also solved the issue of initial face detection, as the camera software would always deliver the facial image to the system provided the subject stayed within the range of the high-resolution image.

Figure 4.3 Image taken using standard CMOS high resolution camera


Because of the difficulties encountered with shadows hiding the eyes in the eye sockets under ambient light conditions, an IR (Infrared) camera was considered as an image source for low light conditions.

These cameras would perform under ambient light conditions in a similar manner to the cameras previously discussed, but in low light conditions they would provide a workable solution to this difficulty. Applying an IR light source to illuminate the subject's face would allow the IR camera to capture the necessary facial data to apply eye detection to the image. As IR light is outside the wavelength range of visible light, the subject would be unaware of this illumination. In addition, it would not cause the discomfort to the driver, whilst driving at night, that illumination by light in the visible spectrum would cause.

Another advantage of the IR system is the reflection of IR light from the eyes. This is a result of the eye's inability to absorb or diffuse all of the IR light, as it can do with visible light; that absorption is the reason why the pupil normally has a “black” appearance. The reflection of the IR light causes the pupils to be visible as bright, white, luminescent points in the eye. Figure 4.4 shows this effect of IR light. This would allow for easier processing of the image, as the eye detection process would give more consistent results in many lighting conditions.

It was also noted that the IR light reflection from the eye is dependent on the camera and the IR light source being on the same axis as the eye, i.e. the camera and IR light source would have to be directly in front of the subject's eye. Considering the area of application for this project, this was a major difficulty, as the camera would obstruct the driver's vision of the road ahead. This is discussed further in “Real Time Visual Cues Extraction for Monitoring Driver Vigilance” by Qiang Ji and Xiaojie Yang [11].

It was decided to retain the standard CMOS camera in an effort to develop the
algorithm for use with this type of technology as much as possible despite its
limitations.


Figure 4.4 Image showing difference between (A) un-illuminated pupil captured
with IR camera and (B) illuminated pupil showing reflected IR light.
Image Source [12]


4.2 Sensors

The auxiliary sensor data was taken from two FSRs (Force Sensitive Resistors). The FSR is a pressure sensor consisting of two flexible substrates, with printed electrodes and semiconductor material, sandwiching a spacer substrate. This is shown in figure 4.5.

Figure 4.5 Construction of the FSR. Image Source [13]

These are used to monitor the driver's grip on the steering wheel. They were chosen for their ease of use and availability; numerous other sensor types, such as strain gauges, could have been used in their place.

For test purposes, a pair of FSRs was mounted on a device representing a steering wheel, allowing the grip to be measured. The graph shown in figure 4.6 is typical of the results that were obtained by carrying out tests on the devices. The figure shows conductance (the inverse of resistance, 1/R) versus force; this format allows interpretation on a linear scale. For reference, the corresponding resistance values are also included on the right vertical axis [13].


Figure 4.6 Graph of Conductance vs. Force (g), with corresponding Resistance (Ω) on the right axis. Image Source [13]

By setting the FSRs in parallel and measuring them in series with a known resistance, it was possible to create a linear relationship between the voltage dropped across the sensors and the pressure applied to them. This is shown in figure 4.6. Initially the known resistance was fixed at 330KΩ to give a voltage range of 0 to 9V (the supply voltage). This was subsequently changed to 100Ω for implementation with the ADuC 8031 (discussed further in section 4.3) to give a voltage range of 0 to 2.5V.
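A small sketch of the divider relationship is given below. It assumes the measured voltage is taken across the fixed resistor, as implied by the calibration described in Section 4.3, and the FSR resistance values shown are illustrative assumptions rather than measured figures.

Vsupply = 9;                      % initial supply voltage (V)
Rfixed  = 330e3;                  % fixed resistance in series with the sensor pair (ohms)
Rfsr    = [10e6 200e3 30e3];      % assumed FSR resistance for no grip, light grip and firm grip
Vmeas   = Vsupply .* Rfixed ./ (Rfixed + Rfsr);   % measured voltage rises as the grip tightens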

In development, the NIDAQ 6009-USB device was used to interface the sensor circuit with the PC (Personal Computer). The NIDAQ 6009-USB is a data acquisition device and could be configured through software to be used as a digital voltmeter which logged readings to memory on the PC. LabVIEW is the software package that allowed the sensor circuit, via the NIDAQ USB device, to send data to the fatigue detection system; this software allows the user to control and manipulate the input and output of data on many National Instruments devices.


Figure 4.7 Circuit Diagram of Voltage Divider Circuit


4.3 ADuC 8031

The Analog Devices ADuC831 was chosen for this project as it provided the embedded system functionality stipulated in the initial specification. Its core consists of an 8052 microcontroller, which provides the necessary processing power to meet the demands made on it by the requirements of this project.

Specifically the board is used to replace the NIDAQ USB-6009 device that
was used during development of the sensor system as it provides a Data Acquisition
System with the following functionalities [14]:

• 8-Channel, 5µs, Self-Calibrating, 12-Bit ADC
• Two 12-Bit Rail-to-Rail Voltage-Output DACs
• Industry Standard 8052 Microcontroller
• 62K-Byte In-Circuit Re-Programmable Flash Program Memory
• 4K-Byte Read/Write Accessible Non-Volatile Flash Data Memory
• 2K-Byte SRAM (In Addition to the 256 Bytes in the 8052 Core)

For the purposes of this project the ADC (Analogue/Digital Converter)


capabilities of this embedded system are explored. The ADC conversion block
incorporates a fast, 8-channel, 12-bit, single supply ADC.

The 8 analogue input channels give a number of different control options to the user, which can be configured through three SFRs (Special Function Registers). The ADC0, ADC1 and ADC2 inputs are buffered using an OP491 op-amp. A resistor divider consisting of a 20kΩ resistor and a light-dependent resistor is connected to the ADC3 input, which is also buffered by the OP491. The ADC4, ADC5, ADC6 and ADC7 inputs are not buffered.

The 8051 instruction-set assembly language is used in the implementation of


the ADC as it is compatible with the 8052 core. The 8052 core is an enhanced version of the original Intel 8051. Example 8051 code is available for basic ADuC831 operations on the Analog Devices website [15]. A detailed functional block diagram of the ADuC831 is shown in figure 4.8.

Figure 4.8 Functional block diagram of the ADuC831. Image Source [14]

The analogue voltage input range for the ADC is 0V to Vref, where Vref can be specified from 1V to AVDD. It was found, however, that an external voltage reference was not necessary, as the thresholds of the sensor circuits could be calibrated to the on-chip reference voltage of 2.5V, a low-drift, factory-calibrated value for the internal Vref. The calibration involved reducing the maximum voltage drop across the fixed resistor in the sensor circuit from 9V to 2.5V.


Converting the incoming voltage level to digital is done by using 2^N levels, where N is the number of bits, to describe the analogue signal between 0 and Vref. This is achieved by calculating the voltage step that represents the least significant bit (LSB), i.e. 1/2^N of the full scale. This calculation is captured by equation 4.1.

LSB = Vref / 2^N (4.1)

Where Vref = 2.5V and 2^N = 2^12 = 4096.

This gives an LSB of 0.61mV. Translating this to binary gives us “000000000001”. Figure 4.9 shows the ideal I/O transfer characteristic for 0 Volts to Vref.

Figure 4.9 ideal I/O transfer characteristic for 0 Volts to Vref. Image Source [14]


“The ADC uses two SFRs to store the converted 12-bit digital value, ADCDATAL and ADCDATAH.” [14]. The ADC converts the analogue signal into a 12-bit representation using the formula described above. The upper four bits of the result are stored in the lower half of the ADCDATAH SFR, and the remaining lower eight bits are stored in the ADCDATAL SFR. The top four bits of the ADCDATAH SFR are channel selection bits, used to identify the channel the result came from; as only one channel is used, this data is not relevant and is ignored. Figure 4.10 shows the format of the ADC 12-bit result word:

Figure 4.10 format of the ADC 12-bit result word. Image Source [14]
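On the PC side, the 12-bit result can be reconstructed from the two SFR bytes as in the following MATLAB sketch; the example byte values are illustrative.

vref     = 2.5;                                   % on-chip reference voltage (V)
adcdatah = uint16(hex2dec('17'));                 % example ADCDATAH: channel ID in bits 7-4, result bits 11-8 in bits 3-0
adcdatal = uint16(hex2dec('FF'));                 % example ADCDATAL: result bits 7-0
code     = bitor(bitshift(bitand(adcdatah, uint16(15)), 8), adcdatal);   % mask the channel bits and merge the bytes
voltage  = double(code) * vref / 4096;            % 2047 counts gives approximately 1.25 V (one LSB = 0.61 mV)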


4.4 MATLAB

MATLAB is described by Mathworks [16], the software creator, as a “high-level computing language” with technical applications and an “environment for algorithm development, data visualization, data analysis, and numeric computation”. By using MATLAB for these areas of programming, the product as a whole, language and environment, can be used to great effect, as extensive specialised libraries of predefined functions are available to the user. These are invoked by simply naming the function of choice and passing parameters to it.

MATLAB can be used in a large spectrum of applications, including but not


limited to signal and image processing, communications, control design, test and
measurement, financial modelling and analysis, and computational biology.

The functions which allow users to build their algorithms with such ease are stored in “toolboxes”. These are collections of MATLAB functions which relate to a particular application area; for example, the image processing toolbox is used in this project. The large range of toolboxes demonstrates the extent and range of situations to which the MATLAB environment can be applied to solve particular problems.

MATLAB also provides a number of features for documenting and sharing


work. MATLAB code is compatible with other languages and applications and
contains specific functions for integrating MATLAB based algorithms with external
applications and languages, such as C, C++, FORTRAN, Java, COM, and Microsoft
Excel [16]. This allows developers to use solutions developed in MATLAB on existing
legacy systems without difficulty.

As mentioned above, the Image Processing Toolbox was used extensively in this project. It allowed for the conversion of the images from RGB to greyscale; this was achieved by calling the function:

grayimg = rgb2gray(rawimg)


Here rawimg is the RGB image passed to the function and grayimg is the returned greyscale image. This ease of use, and the removal of much of the low-level programming, allows the developer to work quickly through a project without wasting time on minor low-level functionality. This example of the function's use shows only one input argument; many functions also allow control arguments to be passed to the function. This gives the user the low-level control over its operation that they may desire for their application. An example of such a function is:

[accum,circen,cirrad]=CircularHough_Grd(img,radrange,grdthres)

This calls the CircularHough_Grd function, which is a user-created function and exists outside of the toolboxes. The function is used in the project to find the Hough transform of the image passed to it. This is done by passing the matrix representation of the image to the function under the identifier “img”. The user can define parameters of the function by passing the remaining arguments, such as “radrange”, the range of radii the Hough transform should span, and “grdthres”, the definable threshold level the transform should use.


Software Implementation

The implementation of the software in the MATLAB environment took the form of a main program calling functions. This was done to minimise the size and improve the efficiency of the code. The Driver Fatigue Detection program can be divided into a number of sections: image preparation, Hough transform and sensor coding.

The image preparation block of the program can be seen as two separate parts: pre-Hough transform and post-Hough transform.

The pre-Hough transform section initialises the video input adaptor in MATLAB and applies default settings to the image when initialising. These settings are updated by feeding forward the results of the previous frame for successive iterations through the images.

The video input is initialised using the "winvideo" adaptor:

vid = videoinput('winvideo');

The image thresholds are also set on the first iteration and remain constant for the duration of the detection period. These are set by T1 and T2, horizontal thresholds between which the eyes are expected to stay if the driver is alert and awake. If the eyes are detected below T2, the upper threshold, a warning is given to the driver; should T1 be crossed, the alarm procedure is activated.

T1 = 250 % Lower Frame Threshold
T2 = 200 % Upper Frame Threshold

The pre-Hough section of the code also initialises the cropping co-ordinates in the system to an area in the centre of the frame. These are updated from the locations of the subsequent detections of the eyes.

The image from the vid variable is captured using the getsnapshot(vid) function, and the result is stored in the variable frame. This is updated every cycle and is the full colour version of the captured image. For the Hough transform this colour information is unnecessary, and the image is converted to greyscale by the function discussed earlier, rgb2gray(). The image is subsequently cropped and passed into the Hough transform function.

Section 3.1 discusses the workings of the Hough transform in greater detail. These are translated into a MATLAB function which returns the following results: the accumulation array of the image, the circle locations and their radii.

The post-Hough preparation of the image applies the pickeyes() function to the results of the Hough transform to improve their reliability; this function applies the process described in section 3.2. The locations of the eyes are then used to calculate the size of the frame to which the new image should be cropped.

For display purposes the code also contains an amount of plotting code to graphically display the results of the transform through the MATLAB environment. The new and current cropping co-ordinates are plotted on the display to allow the user to see the changes in size from one frame to the next, along with the threshold levels, to allow the user to see their proximity to the thresholds. Once the main program has run its course, the camera initialisation is closed and removed from the workspace.
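The structure described above can be condensed into the following MATLAB sketch of the main loop. CircularHough_Grd() and pickeyes() are the project functions referred to in the text, but their exact signatures, the parameter values and the crop-update helper updateCrop() are illustrative assumptions rather than the code as implemented.

vid = videoinput('winvideo');                      % initialise the camera adaptor
T1 = 250;  T2 = 200;                               % lower and upper frame thresholds (pixel rows)
cropRect = [80 60 160 120];                        % assumed initial crop region [x y width height]
while true
    frame = getsnapshot(vid);                      % capture the current colour frame
    gray  = rgb2gray(imcrop(frame, cropRect));     % crop and convert to greyscale
    [accum, circen, cirrad] = CircularHough_Grd(double(gray), [4 12], 10);
    eyes = pickeyes(circen, cirrad, gray);         % geometric screening from Section 3.2
    if isempty(eyes)
        break                                      % no eyes found - treat as possible eye closure
    end
    eyeRow = cropRect(2) + mean(circen(eyes, 2));  % eye height in full-frame co-ordinates
    if eyeRow > T1
        break                                      % lower threshold crossed: start the alarm procedure
    elseif eyeRow > T2
        disp('Warning: eyes drooping');            % upper threshold crossed: warn the driver
    end
    cropRect = updateCrop(circen(eyes, :), cropRect);  % hypothetical helper: re-centre the crop on the eyes
end
delete(vid); clear vid;                            % close and remove the camera initialisation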

The sensor coding is run mainly on the ADuC8031 board. Once the ADC conversion has taken place, the 12-bit binary result is manipulated and converted to a two-digit decimal level, giving 25 levels for the main MATLAB program to take in; the 25 levels come from taking 100mV steps across the digitised sensor range. The ADuC8031 core is coded in 8051 assembly. For MATLAB to read the values correctly, a terminator character is also sent with the two bytes of data to the PC through the UART. This functionality was coded in Aspire using assembly language; examples of this are available on the Analog Devices website [15].
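Reading the two-character level in MATLAB can be sketched as below; the COM port, baud rate and terminator character are assumptions rather than the project's actual serial settings.

s = serial('COM1', 'BaudRate', 9600, 'Terminator', 'CR');   % assumed port and settings
fopen(s);
levelStr = fgetl(s);                 % read the two data characters up to the terminator
pressure = str2double(levelStr);     % pressure level in the range 0-24
fclose(s); delete(s); clear s;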


Section 5: Analysis

In this section the results obtained from the software and hardware
developments are discussed. In addition the alarm and testing procedures are
presented.

5.1 Results Analysis – Software

In the progression of the project through software development, there were positive results in a number of areas. Taking an input video into the system, we are able to extract the relevant information we desire through electronic means, without ever having to look at the image. In this instance the relevant information is centred around the eyes.

The main development of software has taken place in the MATLAB development environment. Using this to create algorithms and a working structure for the driver fatigue detection system, a modular function structure was developed, with the code organised to gain maximum benefit from code reuse.

The results of the software development are a method of finding the pupils in the eyes and determining their position within a defined range.


5.2 Results Analysis – Hardware

The hardware aspects of this project encompass the pressure sensor circuit, the ADuC8031, the PC and the camera. In development the NIDAQ USB DAQ was used; this did not interface easily with the MATLAB environment. This caused difficulties while developing the system, as the data had to be stored in an intermediary location to allow the device and program to send data to one another.

Fortunately all the current hardware elements now interface through the MATLAB environment. The ADuC8031 is accessible through MATLAB, allowing the sampled data to pass into the environment.

The ADuC8031 is connected to the PC via a serial cable and transmits the two-digit pressure level to the MATLAB workspace, where it is used with the camera data.


5.3 Alarm Procedures

The alarm procedure for alerting a driver is discussed briefly in section 2.2.
An auditory signal was decided upon as the best medium to alert the driver, as the subject may have their eyes closed, making any visual signals useless.

The alarm procedure should be active for the duration that the rules are broken.
This would force the driver to remedy the situation by opening their eyes or placing
their hands back on the sensor.

Care must also be taken so as not to startle the driver, as this could have a greater detrimental effect, with the possibility of the driver losing control of the car or forcing it off the road accidentally.

For the purpose of the project it was not necessary for an auditory alarm to be
assembled. Instead a warning message appears to indicate the state of the system.
These are generated from the conditions on which the system makes its decisions.


5.4 Testing

Testing was carried out on each module of the project as it was being
developed. This was done on a continuous basis as each part of the project was
developed and again once all the parts had been integrated.

Matlab

Testing of the MATLAB code was carried out as it was written. Snippets of code
were written in isolation and given defined stimuli to confirm their validity before they
were inserted into the main code. Any errors that arose could be easily identified by
single-stepping through the code and checking the variables against their expected
values. The pickeyes() function was tested to determine the effect it had on the system.
This involved running a segment of video and counting the number of correct detections
versus the number of incorrect detections. This was done twice, once without the
function implemented and once with it, to compare the effectiveness in each case.
Without pickeyes() the detection ratio was 50%; with it implemented, this rose to 60%.
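
The detection-rate test can be summarised by the following minimal MATLAB sketch;
detectEyes(), groundTruth and the video file name are assumptions for illustration,
with only pickeyes() taken from the project itself.

    % Count correct detections over a labelled test video.
    mov = aviread('test_drive.avi');         % assumed recorded test clip
    nFrames = numel(groundTruth);            % assumed manual frame-by-frame labels
    correct = 0;
    for k = 1:nFrames
        pupils = detectEyes(mov(k).cdata);   % eye-detection step (with or
                                             % without pickeyes() applied)
        detected = ~isempty(pupils);
        if detected == groundTruth(k)
            correct = correct + 1;
        end
    end
    detectionRatio = correct / nFrames;      % approx. 0.5 without pickeyes(),
                                             % 0.6 with it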

Pressure Sensors

The FSRs were tested to check their sensitivity, as this was crucial to the
success of the pressure sensor unit; this is discussed in section 4.2 in more detail.
The unit's test results mentioned in section 4.2 were verified through the digital half
of the project. This was done by supplying the ADC with the same voltage levels
measured from the pressure sensor unit, but generated by a stable and quantifiable
power supply.


ADuC831

The ADuC831's serial communication was tested to verify its operation.
This involved reading from the board into HyperTerminal, and later reading from and
writing to the microcontroller from MATLAB.

Overall System

Testing the overall system highlighted a major difficulty resulting from the
combination of the different sections of code, each introducing a delay into the
system. Running the code using live video and pressure sensor feeds, the system takes
in the region of 8 seconds to process a frame from capture to processed image output.
This makes the system currently unworkable in real time: a car travelling at 100 km/h
(approximately 27.8 m/s) would have travelled about 222 m in the time taken to process
a single frame. Adding reaction delay to the scenario, the driver would have driven for
approximately 10 seconds and could easily have gone off the road by this stage.

A solution to this problem may be to code the system in a lower-level language,
which would allow more control over the time taken; embedding the system on a
microprocessor would also allow better allocation of resources to improve the
processing time.


Section 6: Conclusion

In this final section the achievements are detailed and further areas of
development for this project are discussed.


6.1 Achievements

All goals outlined in the project specification have been achieved. These
involved the detection and tracking of the eyes, which was accomplished by
implementing the Hough transform in MATLAB. A webcam was interfaced to a PC and test
videos were recorded, and using the eye-detection algorithms based on the Hough
transform, the robustness of the system was tested under different lighting
conditions.
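
A minimal sketch of circular Hough voting for pupil detection is given below; the
radius, angle step and variable names are assumptions for illustration, not the exact
values used in the project.

    % Vote for circle centres of an assumed pupil radius in an edge image.
    I = rgb2gray(frame);                      % 'frame' is a captured RGB image
    edges = edge(I, 'canny');
    radius = 10;                              % assumed pupil radius in pixels
    [rows, cols] = size(edges);
    acc = zeros(rows, cols);                  % accumulator array
    [yEdge, xEdge] = find(edges);
    theta = 0:pi/30:2*pi;
    for n = 1:numel(xEdge)
        a = round(xEdge(n) - radius*cos(theta));   % candidate centre columns
        b = round(yEdge(n) - radius*sin(theta));   % candidate centre rows
        valid = a >= 1 & a <= cols & b >= 1 & b <= rows;
        idx = sub2ind([rows cols], b(valid), a(valid));
        acc(idx) = acc(idx) + 1;
    end
    [maxVotes, peak] = max(acc(:));
    [pupilY, pupilX] = ind2sub([rows cols], peak); % strongest circle centre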

A set of conditions and rules has been developed to govern the situations in
which there is a danger of the driver falling asleep. These conditions use the outputs
of the eye-tracking step to decide whether or not the driver is falling asleep. The
system has been tested using a range of scenarios.

The system has been extended by adding pressure sensors, constructing simple
signal-conditioning circuitry around an embedded microprocessor and interfacing it to
a PC to enable capture of the pressure information in digital form.

The pressure sensors were integrated with the camera-based eye detection and
tracking in order to produce a system in which “sensor fusion” can be carried out.
To achieve this, it was necessary to develop appropriate software to combine the two
information feeds from the camera and the pressure sensors in order to make a reliable
decision as to whether the driver was falling asleep. This has been achieved, and the
combined system is more robust than the camera-based system alone.


6.2 Further Development

There are a number of areas in this project that could be developed further.

Primarily, the project could be developed further to include “Night Vision” or
low-light cameras; this would improve the detection of the eyes in low-light
conditions. This type of camera is discussed in section 4.1.

A real-time clock could also be integrated into the system. This would allow
the system to increase its sensitivity during peak times for fatigue, as discussed in
section 2.3.
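
As a minimal sketch, such a clock-based adjustment might tighten the decision
thresholds during peak fatigue periods; the time windows and threshold values below
are assumptions for illustration, to be set from the discussion in section 2.3.

    % Tighten the alarm thresholds during assumed peak fatigue windows.
    c = clock;                                % [yr month day hour min sec]
    currentHour = c(4);
    if (currentHour >= 2 && currentHour < 6) || (currentHour >= 14 && currentHour < 16)
        GRIP_THRESHOLD      = 30;             % stricter during peak periods
        CLOSED_FRAMES_LIMIT = 2;              % assumed frame-count rule
    else
        GRIP_THRESHOLD      = 20;
        CLOSED_FRAMES_LIMIT = 3;
    end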

The specification of the project could be broadened to include a tracking and
learning element. This would track and monitor the body movements of the driver and
build up a database of their normal behaviour, against which fatigued behaviour could
be identified as abnormal. The system could then warn the driver that they may be
suffering from fatigue should they show any of the tell-tale signs not covered by the
specification of this project.

Development of the system's communications capabilities would prove useful for
professional drivers of buses, taxis and haulage vehicles. Integration of the system
with a mobile data network would allow a fleet's central office to monitor their
workers' attentiveness and vigilance. Perhaps this could be further improved by
developing it in tandem with the system described by US patent 5642093, “Warning
system for vehicle”. That system monitors the outside environment of the car, the
relative proximity of other vehicles and the road positioning of the subject's car in
order to predict the onset of fatigue and warn the driver of a lapse in concentration.
In combination, most aspects of the driver's experience would be captured and the
driver would be warned of fatigue under the vast majority of circumstances.


6.3 In Conclusion

I have greatly enjoyed the challenges this project has had to offer. I have
learned a multitude of new image processing techniques and have developed confidence
in my own ability to develop systems.

The development of a commercial unit based on a strategy similar to this
project's methods of eye detection is inevitable, as world culture dictates higher
safety standards in all aspects of life.

In today's world there are more vehicles on the roads than ever before, at all
hours of the day and night. This increase in road usage is a factor in the rate of
traffic accidents. In addition, especially in Ireland, hundreds of thousands of people
commute over long distances to work, day in and day out. This long daily commute, very
early in the morning and late in the evening, can fatigue the driver, inhibiting their
ability to react to unexpected events and, worse, causing them to fall asleep at the
wheel. This is why a system such as the one outlined in this project would be of
benefit to drivers and their communities.


References
1. “Fitting the Task to the Human”, Taylor & Francis, 1997.
2. “Reaction Time”, Institute of Transportation Engineers Expert Witness
   Council Newsletter, Summer 2005, pp. 2–6.
3. “Driver Fatigue”, Road Safety Authority,
   http://www.rsa.ie/NEWS/News/Driver_Fatigue.html
4. Fridulv Sagberg, “Road accidents caused by drivers falling asleep”,
   Accident Analysis and Prevention 31 (1999), pp. 639–649.
5. Jim Tundy, Embedded Systems Design,
   http://www.embeddedsystems.com
6. R.O. Duda and P.E. Hart, “Use of the Hough Transformation to Detect Lines
   and Curves in Pictures”, Comm. ACM, Vol. 15, pp. 11–15, January 1972.
7. The Hough Transform:
   http://planetmath.org/encyclopedia/HoughTransform.html
8. The Hough Transform: http://en.wikipedia.org/wiki/Hough_transform
9. G.R.J. Cooper and D.R. Cowan, “The detection of circular features in
   irregularly spaced data”, Computers & Geosciences 30 (2004), pp. 101–105.
10. Circular Hough image:
    http://www.cis.rit.edu/class/simg782.old/talkHough/HoughLecCircles.html
11. Qiang Ji and Xiaojie Yang, “Real Time Visual Cues Extraction for
    Monitoring Driver Vigilance”, in B. Schiele and G. Sagerer (Eds.): ICVS
    2001, LNCS 2095, pp. 107–124, 2001.
12. Image of eyes illuminated by IR light:
    http://www.archimuse.com/mw2003/papera/milekic/milekic.html
13. The FSR Guide:
    http://www.interlinkelectronics.com/library/media/papera/pdf/fsrguide.pdf
14. ADuC831 Datasheet:
    http://www.analog.com/UploadedFiles/Data_Sheets/ADUC831.pdf
15. Analog Devices product site:
    http://www.analog.com/en/prod/0,2877,ADUC832,00.html
16. MathWorks website:
    http://www.mathworks.com


Bibliography
1. B. Schiele and G. Sagerer (Eds.): ICVS 2001, LNCS 2095, pp. 107–124, 2001.

2. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image
   Processing Using MATLAB.
