
TRAFFIC RECOGNITION USING SMARTPHONE

CHAPTER-1
INTRODUCTION

1.1 GENERAL BACKGROUND

Typical traffic scenes contain a large amount of traffic information, such as road signs, road markings, and traffic lights. It is usually not easy for drivers to pay attention to all of the traffic information presented to them. Distraction, visual fatigue, and misunderstanding on the part of the driver can lead to severe traffic accidents. In particular, since traffic lights are used to direct pedestrians and vehicles through intersections in an orderly and safe manner, it is of great importance to recognize and understand them accurately. Therefore, many research institutions are striving to recognize traffic lights using in-car cameras to help the driver understand driving conditions. This function is critical to driving assistance and even autonomous driving. Traffic signal information can be obtained through wireless communication technology, but installing wireless communication equipment is costly and constructing the related hardware facilities is difficult. To simplify the hardware, image processing techniques can be used to recognize traffic lights instead. This approach has several advantages, such as low cost, high performance, and easy upgrading. Some traffic information (e.g., signs) can still be missed for several reasons, such as the complexity of the road scene, the large amount of visual information, or the driver's stress or visual fatigue. With the increase in computational power, the application of smartphones to driving assistance has gradually become an active research field. Despite significant research, detecting and recognizing road signs remains challenging due to varying lighting conditions, complex backgrounds, and different viewing angles. The method presented here consists of real-time traffic light localization, circular region detection, and traffic light recognition.


1.2 PROPOSED SYSTEM

Traffic light recognition is of great significance for driver assistance and autonomous driving. To address the problems described above, we present a traffic light recognition system based on smartphone platforms. Different from previous approaches, the smartphone is fixed on the front windshield of the ego vehicle with a bracket. The system recognizes the traffic light, including its phase (red or green) and type (round or straight arrow), and reminds the driver to follow the indications of traffic lights.
The system consists of three stages:
1) Candidate region extraction
2) Recognition
3) Spatial-temporal analysis.
In the stage of candidate region extraction, an ellipsoid geometry threshold model
in Hue Saturation Lightness (HSL) color space is built to extract interesting color
regions, which can resolve the incorrect segmentation problem in the existing
linear color threshold method and avoid the problem of color cast to a certain
extent. In the stage of recognition, a new nonlinear kernel function is proposed to
effectively combine two heterogeneous features [histograms of oriented gradients
(HOG) and local binary pattern (LBP)], and a kernel extreme learning machine
(K-ELM) is designed to verify if a candidate region is a traffic light or not and to
simultaneously recognize the phase and type of traffic lights.

1.3 OBJECTIVE AND SCOPE

Test results on real scenes show that, compared with existing methods, the proposed system can accurately recognize both the phase and the type of traffic lights simultaneously. In addition, the response of the system is rapid and feedback can


be given in less than a second. It is also worth pointing out that the recognition of
traffic lights (especially the recognition of arrow lights) is not useful unless the
results are associated
with the lane information. There may be multiple lights at an intersection that has many lanes in the same direction. In such a scenario, each traffic light indicates
the traffic situation of its corresponding lane. Therefore, the recognition results of
traffic lights must be associated with the lanes. In the future, we will try to fuse
the recognition results with GPS navigation information. By considering both the
trajectory planning and the current location information, the recognition results
will be reasonably interpreted and utilized.
A traffic light recognition system based on Smartphone platforms is proposed and
several contributions have been made.
1) To avoid the influence of camera color cast, complex backgrounds, weather, and illumination conditions, an ellipsoid geometry threshold model in HSL color space is built to extract interesting color regions. Meanwhile, a post-processing step is applied to obtain candidate regions of traffic lights.
2) A new kernel function is proposed to effectively combine two heterogeneous
features (HOG and LBP), and a K-ELM is designed to verify if a candidate region
is
a traffic light or not.
3) To further increase the reliability of recognition over a period of time, a spatial-
temporal analysis framework based on finite-state machine is introduced to
recognize the phase and type of traffic lights.


CHAPTER-2
LITERATURE SURVEY

2.1 INTRODUCTION
Several driver assistance systems have been suggested in past years using either database information (e.g., learned Geographic Information Systems) or on-vehicle sensors (e.g., laser, camera) to provide various kinds of environment information such as traffic signs, speed limits, traffic lights, and crosswalks.

2.2 TECHNOLOGIES
To overcome these problems, many solutions have been presented, first in identification techniques and then in processing techniques. For identification, visible-spectrum cameras (CCD or CMOS) or vision sensors can be used. For processing, MATLAB code can be used, and C++ can also process the image outputs.
2.2.1 TRAFFIC LIGHT DETECTION DURING DAY AND NIGHT CONDITIONS BY A CAMERA
With the development of Intelligent Vehicle Systems (IVS), many achievements have been made in the field of vehicle driver assistance, such as car anti-collision technology, road curb detection, and pedestrian detection. The aim here is to reduce accidents at traffic intersections during both day and night. The system includes three parts: a CCD camera, an image acquisition card, and a PC. The RGB color space is adopted, and the algorithm extracts the red, green, and yellow objects in the scene. The algorithm can detect round traffic lights as well as arrow-shaped traffic lights and can report the traffic signal accurately. Detection under different conditions is needed because:
 Day and night conditions differ in lighting.
 It helps to reduce visual perception problems.


 Using a prior map, a perception system can anticipate and predict the locations of traffic lights.
 The prior map encodes the control semantics of the individual lights.
 The 3D positions of traffic lights are mapped.

ADVANTAGES
o Stable
o Reliable
o Low price
o High performance
o Easy upgrade
o Additional function can be implemented

DISADVANTAGES/DIFFICULTIES
o Cameras dissipate more energy.
o Better verification is needed.
o If the camera resolution is low, problems arise in determining the signal direction at night.
o The shape of the traffic signal differs.
2.2.2 TRAFFIC LIGHT RECOGNITION USING IMAGE PROCESSING COMPARED TO LEARNING PROCESSES
The method proposed is fully based on image processing. The detection step is performed in grayscale with spot light detection, and recognition is done using generic "adaptive templates". The whole process was kept modular, which makes the TLR capable of recognizing different traffic lights from various countries. To compare our image processing algorithm with standard object recognition methods, we also developed several traffic light recognition systems based on learning processes, such as cascade classifiers with AdaBoost. The goal is to design a modular algorithm for Traffic Light Recognition (TLR) that is able to detect supported and suspended traffic lights in a dynamic urban environment. Based on previous works, we decided to base our algorithm on the light emission property of traffic lights. Therefore, the layout of our TLR algorithm consists mainly of three steps following the camera:


 Spot Light Detection


 Adaptive Template Matcher
 Validation

ADVANTAGES

 TLR is capable of recognizing different traffic lights from various countries
 Reduces the complexity and stress for drivers with visual fatigue
 Provides good knowledge of the traffic lights

DISADVANTAGES

 The learning process could not be evaluated in foreign countries
 Computation time is higher
 Not a real-time system (RTOS)

TRAFFIC LIGHT MAPPING AND DETECTION

The outdoor perception problem is a major challenge for driver-assistance


and autonomous vehicle systems. While these systems can often employ active
sensors such as sonar, radar, and lidar to perceive their surroundings, the state of
standard traffic lights can only be perceived visually. By using a prior map, a
perception system can anticipate and predict the locations of traffic lights and
improve detection of the light state. The prior map also encodes the control
semantics of the individual lights. This paper presents methods for automatically
mapping the three dimensional positions of traffic lights and robustly detecting
traffic light state onboard cars with cameras. We have used these methods to map
more than four thousand traffic lights, and to perform onboard traffic light
detection for thousands of drives through intersections.

ADVANTAGES

 Reduces the visual perception problem
 Using a prior map, the perception system can anticipate and predict the locations of traffic lights


 The prior map encodes the control semantics of the individual lights
 Maps the 3D positions of traffic lights
 Robust
 Reliable
 Deployed on multiple cars
 Timely information

DISADVANTAGES

 Does not provide an estimate of traffic light altitude
 Maps are registered only to within a few meters
 Slow in processing

RECOGNITION OF TRAFFIC LIGHTS IN LIVE VIDEO STREAMS ON MOBILE DEVICES

A mobile computer vision system is presented that helps visually impaired


pedestrians cross roads. The system detects pedestrian lights in the environment
and gives feedback about the current phase of the crucial light. For this purpose
the live video stream of a mobile phone is analyzed in four steps: localization,
classification, video analysis, and time-based verification. In particular, the
temporal analysis allows us to alleviate the inherent problems such as occlusions
(by vehicles), falsified colors, and others, and to further increase the decision
certainty over a period of time. Due to the limited resources of mobile devices
very efficient and precise algorithms have to be developed to ensure the reliability
and the interactivity of the system. A prototype system was implemented on a
Nokia N95 mobile phone and tested in a real environment. It was trained to detect
German traffic lights. For the prototype training and testing, we generated image
and video databases including manually specified ground truth meta-data. These
databases described in this paper are publicly available for the research
community. Quantitative performance analysis is provided to demonstrate the
reliability and interactivity of the prototype system. The implementation is done in four steps:

* Localization


* Classification

* Video analysis

* Time based verification

ADVANTAGES

 Fast
 Reliable
 High resolution
 High image quality
 Detects pedestrian signals in the environment
 Gives feedback about the current phase of the relevant light
DISADVANTAGES
 Verification problems
 Increased false green light reports
 Not extended to other countries
 Video processing is a bit slow
 Hence, not a real-time system (RTOS)

EXISTING SYSTEMS PROBLEMS/DISADVANTAGES

 Less recognition accuracy

 Increased false positive rate (FPR)

 Low robustness

 Strict demands of the accuracy of the candidate regions’ boundaries

 High computational cost

Detecting Driving Events Using a Smartphone

In today's fast-paced society, issues related to driving are often treated as a second priority compared to travelling from one place to another in the shortest possible time. This often leads to accidents. In order to reduce road traffic accidents, one domain which requires attention is


driving behavior. This paper proposes two algorithms which detect driving events
using a smartphone since it is easily accessible and widely available in the market.
More importantly, the proposed algorithms classify whether or not these events
are aggressive based on raw data from various onboard sensors on a smartphone.
Initial experimental results reveal that the pattern matching algorithm outperforms
the rule-based algorithm for driving events in both lateral and longitudinal
movements.

CONCLUSION

In the older systems there are many constraints that need to be solved. One of the main problems is the uniqueness of mounting systems. From a commercial perspective, the systems we design should be usable all over the globe, yet in different regions the notations and symbols differ, road laws differ, and even the dimensions of the traffic symbols may affect our system. That needs to be avoided. The next problem is day and night conditions. Day and night conditions differ in lighting, so detection will be comparatively difficult if there is no learning process alongside the image processing. The system should identify the lighting conditions with 100% accuracy. While converting to grayscale there may be a loss of information due to over-lighting or low-light conditions, so the system should adjust itself according to the surrounding conditions. The learning process should also be adaptive enough that changes in dimension or position do not affect the system's stability under any condition. To accomplish that goal we use the databases of past years, which were used by previous driving assistance systems.
Traffic light mapping is the next important thing to be considered in system design. After the learning stage, the processed output is mapped to reduce visual perception problems. Mapping is provided with general conditions in the system's database. The next problem is pedestrians and other unpredictable conditions that arise around the system, for which still images alone are not enough. That is why live video streams help to overcome this limitation.


CHAPTER-3
ARCHITECTURE

3.1 INTRODUCTION

In the proposed system for future Driver Assistance Systems, we exploit new and feasible technologies such as the mobile phone data stream and mapping technologies.

RECOGNITION OF TRAFFIC LIGHTS

Traffic signal information can be obtained through wireless communication technology, but installing wireless communication equipment is costly and constructing the related hardware facilities is difficult. To simplify the hardware, image processing can be used to recognize traffic lights, with advantages such as low cost, high performance, and easy upgrading. Applying image processing, a previous system studied round traffic light detection by judging the shape and size of an object, in which the arrow-shaped traffic light


was not discussed. A later work adopted a model to recognize traffic lights in which the road scene was simple, and interference sources such as vehicle lamps and street lamps were not considered. In this work, the detection of traffic lights, including round and arrow-shaped signal lights, is discussed in a complex environment.

METHOD BASED ON IMAGE PROCESSING

System Overview

Our goal was to design a modular algorithm for Traffic Lights Recognition
(TLR) that is able to detect supported and suspended traffic lights in a dynamic
urban environment. Based on previous works, we decided to base our algorithm on the light emission property of traffic lights. Therefore, the layout of our TLR algorithm consists mainly of three steps, as illustrated in Fig. 1.

Fig. 1. Layout of the TLR algorithm: Camera → Spot Light Detection → Adaptive Template Matching → Validation.

Spot/Street Light Detection (SLD)

Despite the fact that traffic lights can be very different, they all share the common property of emitting light. Therefore a robust SLD appears to be the best basis for TLR. The aim of the SLD step is to miss as few of the sought lights as possible. This is not an easy task due to the complexity of an urban scene, especially when trying to detect small light spots in order to detect far traffic lights.

To detect light spots in the source image, we use an algorithm we have already described in previous work. This algorithm uses the top-hat morphological operator [9], [10] to detect the bright areas and then applies shape filtering and a region growing algorithm to keep only the consistent areas. Only grayscale information is used in


the SLD step. Therefore, it is not very sensitive to illumination variation and color distortion. The SLD we use detects up to 90% of all the visible lights emitted from the traffic lights and is able to find any spot light as soon as it is 4 pixels wide or more. Fig. 2 illustrates the result of the SLD algorithm.

Fig. 2. Result of the SLD algorithm.

Adaptive Template Matcher (ATM)

As mentioned above, one of our constraints was to have a fully generic TLR.
Therefore, and in order to be able to adapt our algorithm to different types of
traffic lights, we designed Adaptive Templates. Those templates are evaluated
with our Adaptive Template Matcher (ATM).

Template matching has been used previously in different recognition processes. This technique is usually slow when applied to the whole image. However, we use the previously detected spots as hypotheses for template matching; hence, we create candidates only where spots were detected. Those candidates are then evaluated and either accepted or rejected according to their matching confidence value. Using template matching only where spots were previously found makes it a lot faster (as detailed in the Results section). An
Adaptive Template can be defined as a combination of the 2D visual shape
representations of the 3D elements which form the real object. In addition,
templates also define algorithmic operators linked to one or more elements and
which will be evaluated at run time. Both elements and operators can have
different weights and matching confidence thresholds, or even be set to non


discriminatory. Therefore, if a non-discriminatory element or operator fails, it does not by itself cause the candidate to be rejected.

The matching process is the recursive evaluation of a template and its hierarchy, until all its elements and linked operators have been evaluated. The confidence value of an element is computed according to the weight and confidence value of each child element or linked operator. Adaptive templates contain elements which describe the visual appearance of the real object corresponding to the template. Those elements can be instances of various types, such as Circle, Square, Rectangle, Spot, Container, etc., and each type can have specific behavior. Fig. 6 illustrates the relations between the element types.

Fig. 6. Illustration of the element type class relations. All element types derive from the base class Element. The Container type allows an element to contain child elements. Spot is a special type which derives from the Circle type and can be linked with a spot object previously returned by the Spot Light Detector.
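As a rough illustration of the idea of matching only around previously detected spots, the sketch below uses plain normalized cross-correlation in small regions centred on each spot. It is a simplified stand-in for the hierarchical Adaptive Template Matcher described above; the function name, ROI size, and confidence threshold are assumptions, not the authors' implementation.

# Hypothetical sketch: evaluate a template only near detected spots.
import cv2
import numpy as np

def match_around_spots(gray_image, spots, template, roi_half=40, threshold=0.6):
    """Evaluate a template only in small ROIs centred on detected spots."""
    th, tw = template.shape[:2]
    accepted = []
    for (x, y) in spots:  # spot centres from the Spot Light Detector
        x0, y0 = max(x - roi_half, 0), max(y - roi_half, 0)
        x1 = min(x + roi_half, gray_image.shape[1])
        y1 = min(y + roi_half, gray_image.shape[0])
        roi = gray_image[y0:y1, x0:x1]
        if roi.shape[0] < th or roi.shape[1] < tw:
            continue
        # Normalised cross-correlation as the matching confidence value
        result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= threshold:  # accept candidates above the confidence threshold
            accepted.append((x0 + max_loc[0], y0 + max_loc[1], max_val))
    return accepted

Because matching is restricted to a few small windows instead of the whole frame, the cost stays low even with several templates, which is the speed benefit the text points out.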

LEARNING PROCESSES


A task such as object recognition can usually be achieved with two very different approaches: image processing or learning processes. We proposed a TLR based only on image processing, but the question arises whether learning processes could give better results for such a task. In order to evaluate the previously described TLR, we also developed traffic light recognition based on the use of learning processes. Using object-marked sequences acquired with an on-vehicle camera, we extracted traffic light and non-traffic-light samples. Table I describes the different datasets used for training.

TRAFFIC LIGHT MAPPING

Traffic light mapping is the process of estimating the 3D position and


orientation, or pose, of traffic lights. Precise data on the pose of traffic lights is not
generally available, and while it is possible to make estimates of traffic light
locations from aerial or satellite imagery, such maps are typically only registered
to within a few meters and do not provide estimates of the traffic light altitude
(although it might be possible to use the image capture time and ephemeris data to
roughly estimate the altitude from the length of shadows, if they are present).
However, it is easy to drive a mapping car instrumented with cameras, GPS, IMU,
lasers, etc., through intersections and collect precisely timestamped camera
images. The traffic light positions can then be estimated by triangulating from
multiple views. All that is needed is a large set of well labeled images of the
traffic lights. The accuracy of the traffic light map is coupled to the accuracy of
the position estimates of the mapping car. In our case online position estimates of
the mapping car can be refined by offline optimization methods [ Thrun and
Montemerlo, 2005] to yield position accuracy below 0.15 m, or with a similar


accuracy onboard the car by localizing with a map constructed from the offline
optimization.

CONCLUSION

This section discussed how our problem is solved technically. For recognition, image processing is done in several steps, and finally mapping is done to achieve accurate identification.

CHAPTER-4

EXPERIMENTAL EVALUATION

INTRODUCTION

BASIC WORKING OF PROPOSED SYSTEM

Figure 1 presents the block diagram of our system. Our proposed system is
designed with a multi-stage architecture. In order to achieve real-time
performance on a mobile device, whose camera typically captures images at a rate
of 30 frames/s or lower, the processing time for each frame needs to be less than
1/30 second. Therefore, we designed the algorithm of each block to avoid


complicated computation. As shown in Figure 1, video frames read from the built-in camera are processed in the RGB domain and a few simple processing steps are applied to each frame.

Figure 1

Figure 2

Figure 2 shows the proposed traffic light recognition system based on smartphone platforms, which consists of:

o Candidate region extraction based on color and brightness information
o K-ELM based single-frame traffic light recognition
o Multiframe traffic light recognition with spatial-temporal analysis


EXTRACTION OF TRAFFIC LIGHT CANDIDATE REGIONS

The approaches for extracting candidate regions can be categorized as either model-based or learning-based. The model-based approaches essentially rely on heuristically determined models using color, shape, and intensity. Based on a color model, the image can be thresholded into a binary image, and connected components can be clustered for further analysis. The heuristically determined color thresholds may suffer under various illumination conditions. To
mitigate this, Jang et al. proposed to use multiexposure images to improve the
robustness against different illuminations, and Shen et al. used Gaussian models
to learn the thresholds. Followed by color filtering, the shape model methods
either analyze the shape of the connected components or actively search for a
particular shape, such as circles. The shape analyses on the connected components
or extracted blobs mainly focus on circularity ratio, color filling rate, convexity,
aspect ratio, holed regions, etc.
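As a minimal sketch of the color-shape model idea just described, the snippet below thresholds an image with heuristic color ranges, clusters connected components, and keeps blobs with a plausible aspect ratio and area. The HSV ranges and geometric limits are illustrative assumptions, not values from the paper.

# Sketch: color thresholding followed by connected-component shape analysis.
import cv2
import numpy as np

def color_shape_candidates(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Example red mask (red wraps around hue 0 in OpenCV's 0-179 hue range)
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 100), (179, 255, 255))
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = []
    for i in range(1, num):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        aspect = h / float(w)
        if 20 <= area <= 2000 and 0.7 <= aspect <= 1.5:  # rough lamp-like blobs
            candidates.append((x, y, w, h))
    return candidates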

The filled circle transform and fast radial symmetry transform are
proposed to search for the circular shapes. We name the shape analysis after the
color thresholding as the color-shape model method, which is efficient since only
the color-relevant regions are considered. However, it is prone to color
disturbances. The spotlight detection method finds the bright area based on
intensity, which is robust against color disturbances. It relies on the fact that
traffic lights emit light and are brighter than their surrounding areas. Other
sophisticated candidate region extraction methods, such as Adaptive Background
Suppression Filter (AdaBSF), are comparatively computationally inefficient.

The learning-based methods commonly require a lot of training data and computation. The color-shape model method and the spotlight detection method each have their own strengths and weaknesses. It is appealing to combine the extracted candidate regions from both methods in order to obtain a higher recall rate than either of the two. In this paper, we propose a hybrid approach to combine the candidate regions and put forward a strong classifier to recognize traffic lights from all the extracted regions.

Feature Extraction

Features are often extracted to determine whether the


candidate region is a true traffic light. Shape features, such as aspect ratio, size, and area, are used in some works, while in others a mix of blob width, height, center coordinate, area, extent, sum of pixels, brightness moment, and geometric moment is adopted as features for an SVM classifier. Other features, such as Haar features, Histograms of Oriented Gradients (HOG), 2D Gabor wavelets, Local Binary Pattern (LBP) histograms, and Aggregated Channel Features (ACF), have been proposed as descriptors for traffic lights. Even though numerous feature descriptors have been proposed, it is hard to define a small, relevant set of features which is generative or discriminative for a wide variety of datasets. In this paper, we propose to use the raw RGB pixel values as the features, which are generative. In order to account for different sizes of the candidate regions, we scale each candidate to a 10-by-10 image and obtain a 10×10×3 = 300 (3 for the RGB channels) dimensional feature vector. It is shown that by using the raw pixel values, we can achieve high classification accuracy and generalizability to other datasets.

Currently, we use the pixel values as the input for a traffic light classifier. In earlier work, empirically determined thresholds were used for traffic light validation, and the matching confidence from adaptive template matching (ATM) was proposed for classification. A further improvement was to use an SVM to learn the thresholds for the ATM. Using an SVM on handcrafted features has also been proposed, while AdaBoost has been used for both feature selection and classification. In this paper, we utilize an SVM as our classifier.

TL DETECTION AND CLASSIFICATION

In this section, we present our algorithm for circular Traffic Light (TL) detection and classification.


Fig 1

The framework of circular TL detection and classification is shown in Fig. 1. The processing can be divided into two stages: a candidate extraction stage and a recognition stage. Candidates from both the spotlight detection and the color-shape model method are considered for recognition. All the extracted candidates are then passed to a weak classifier, i.e., a color filling ratio filter, to remove the apparently false ones. The candidates that survive the filter are then fed into a strong classifier, i.e., an SVM, to recognize the true Red (R)/Yellow (Y)/Green (G) traffic lights. The detailed procedures are shown in Fig. 2 and explained in the following.

SPOTLIGHT DETECTION

The first row of Fig. 2 shows the procedure of the spotlight detection. A morphological top-hat operation is performed on the raw RGB image, followed by conversion to a gray image and a global binary thresholding based on intensity values. An example of the spotlight detection is shown in Fig. 3. Filled circle detection is then performed on the binary image, together with an R/Y/G color mask, to extract the candidate circles, which is explained in Section III-D.
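A hedged sketch of this spotlight detection step is shown below: a morphological top-hat on the grayscale image followed by a global binary threshold. The structuring element size and threshold value are assumptions, not the paper's tuned parameters.

# Sketch: top-hat filtering plus global thresholding to find bright spots.
import cv2

def spotlight_mask(bgr_image, kernel_size=15, intensity_threshold=60):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    # Top-hat keeps small bright structures that are brighter than their surroundings
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    _, binary = cv2.threshold(tophat, intensity_threshold, 255, cv2.THRESH_BINARY)
    return binary  # white pixels mark candidate spotlight areas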

Fig 2


The filled circle detection is performed on the binary image from the spotlight detection,
which aims to fit the boundary of a spotlight with a circle. It is also performed on the raw
input image, which aims to fit the shape of an object with a circle. The detected circles
from both spotlight detection and color-shape model method are all considered as
candidates for recognition. It should be highlighted that the parameters for circle detection should be permissive so that there is a low chance of missing any circular objects, even though this may introduce many false positives. The false positive circles can be further removed with the following classifiers.
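The snippet below sketches such a deliberately permissive circle search on the binary spotlight mask. It uses OpenCV's Hough circle transform as a stand-in for the filled circle detection described above; all parameter values are illustrative and chosen loose so that few circular objects are missed.

# Sketch: permissive circle detection on the binary mask.
import cv2
import numpy as np

def detect_candidate_circles(binary_mask):
    blurred = cv2.GaussianBlur(binary_mask, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=8,
                               param1=50, param2=10,   # low param2 = permissive
                               minRadius=2, maxRadius=40)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in np.round(circles[0])]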

Fig. 3. An example of spotlight detection. (a) shows the raw input image, (b) shows the converted
gray image after the top hat operation, (c) shows the binary thresholding of (b), and (d) visualizes
the pixels of the raw image corresponding to the white pixels of (c).

Fig. 4. An example of color thresholding. (a) shows the raw input image, (b) shows masked image
using red mask, (c) shows masked image using green mask.


Fig. 5. A comparison of extracted edges from image Fig. 4(a). The mask is applied on the edge
image in (a), while the mask is applied on the raw input image and then edge detection is
performed in (b).

Fig. 6. A comparison of circle detection results on the masked edge image (a) and on the
original edge image (b).

TL Recognition

1. Color filling ratio filter: In order to quickly filter out the false positives,
we use a color filling ratio filter before the SVM classifier, i.e., in a
cascade manner.
2. SVM classifier: With a bounding box, the corresponding image patch can
be extracted. The bounding boxes can be of different sizes due to the
different distances to the traffic lights. All the image patches are scaled to
a fixed size of 10-by-10. Instead of handcrafting a set of features, we use
the raw RGB pixel values of each scaled image patch as the input for the
SVM classifier. The dimension of a feature vector is 10 × 10 × 3 = 300 (3
for the number of channels). A 2-class SVM classifier is trained for traffic
lights of each color. The advantage of our featureless classification
method is that the trained classifiers are dependent on neither a fixed
image resolution nor a fixed camera model, and thus it has a high
generalizability.
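A minimal sketch of this featureless classification is given below: each candidate patch is resized to 10×10, flattened into a 300-dimensional raw RGB vector, and fed to a 2-class SVM. The use of scikit-learn and the training array names are assumptions made for illustration only.

# Sketch: raw-pixel feature extraction and a per-color 2-class SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def patch_to_feature(bgr_patch):
    resized = cv2.resize(bgr_patch, (10, 10))          # 10 x 10 x 3 = 300 values
    return resized.reshape(-1).astype(np.float32) / 255.0

# Hypothetical training arrays: patches of red lights vs. non-lights
# X_train = np.stack([patch_to_feature(p) for p in training_patches])
# y_train = np.array(training_labels)                  # 1 = red light, 0 = background
# red_light_svm = SVC(kernel='rbf').fit(X_train, y_train)
# is_red = red_light_svm.predict(patch_to_feature(candidate_patch)[None, :])[0]

One such classifier is trained per light color, as the text above notes, so the same small feature works regardless of camera model or image resolution.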


The precision and recall of the three different methods, which are applied on the one-
north dataset.

The traffic light recognition examples on three different scenes. The red and pink circles represent the
detected red and green lights, respectively, by spotlight detection (SD) method. The yellow and green
rectangles represent the detected red and green lights, respectively, by color-shape model (CSM) method. In
Scene 1, the first two red lights are detected by SD in (a), detected by CSM in (b), and detected by both in (c).
In Scene 2, the second red light is detected by SD in (d), detected by CSM in (e), and detected by both in (f).
In Scene 3, the left green light is detected by SD in (g), detected by CSM in (h), and detected by both in (i).

Generation of Traffic Light Candidate Regions

According to the CIE standard, each color of traffic lights is usually defined in a specific area of the CIE chromaticity diagram. Color features are therefore widely used to detect traffic lights. Commonly used color spaces include RGB, Hue Saturation Intensity (HSI), and HSV. The RGB color space is not robust against changes in illumination. Compared with the RGB color space, the HSI and HSV color spaces are insensitive to luminance fluctuation and are similar to human color perception. Thus, many researchers use these color spaces for distinguishing the predefined colors of traffic lights. Besides the color spaces mentioned above, other color spaces such as YCbCr and CIELab are also used to extract traffic light candidate regions. Although the color spaces used differ, most of the existing methods determine whether a pixel is an interesting color pixel by the linear color threshold method.


That is to say, given the predefined range thresholds Tmin and Tmax in a
certain color channel, a pixel is an interesting color pixel if its corresponding
component value Tv satisfies “Tmin < Tv < Tmax”. The color-threshold-based
segmentation method has a common tradeoff problem: an increased false positive
rate (FPR) due to the wide color threshold range or a decreased true positive rate
(TPR) due to a narrow color threshold range. Especially, this situation could get
worse if color cast appears due to the white balance problem or the severe
exposure of camera to external illumination. Consequently, a robust color
segmentation method is required that considers various illumination conditions.
Color features alone are inadequate to generate traffic light candidate regions because
of the existence of some interference such as billboards and trees. In addition,
complex scenes and cluttered backgrounds may cause many false positives. It is
necessary to use additional features to distinguish traffic lights and overcome
interference effects. Thus, many researchers integrate some specific features to
eliminate the interference regions, such as the geometrical feature and the
brightness feature. Yu et al. and Shen et al. use geometrical features (aspect ratio, area, and pixel density) to obtain precise candidate regions of lamps. In other work, color, brightness, and structural features are employed individually to obtain a set of traffic light candidate locations. Lindner et al. propose to eliminate some interference regions by detecting the circular edge of the lamp candidate region. In addition, several interesting works have been reported for detection on the candidate regions. For instance, the use of inter-component difference information for effective color edge detection has been proposed, as has a novel framework for saliency detection which first models the background with a deep learning architecture and then separates salient objects from the background. In
recent years, in order to shorten the computation cost and reduce the risk of getting
incorrect candidate regions, some techniques adopt additional sensors such as the
global positioning system (GPS) and pre-existing maps containing traffic light
locations. For example, Fairfield and Urmson [8] propose a method to predict the positions of traffic lights with a prior map. The predicted positions are then projected into the image frame using a camera model, and serve as traffic light candidate regions. To improve recognition accuracy, an on-board GPS sensor has also been employed to identify the traffic light candidate regions.


TRAFFIC LIGHT RECOGNITION SYSTEM

A traffic light recognition system based on smartphone platforms is proposed to recognize vertical traffic lights in an urban environment. We are only interested in recognizing red and green traffic lights, since the yellow traffic light represents a transient phase between the green light and the red light and serves as a warning signal. It is even absent for traffic lights with a countdown timer.
Fig. 2 describes the proposed recognition system. The recognition system in this
paper includes three stages: 1) candidate region extraction based on color and
brightness information; 2) K-ELM-based single-frame traffic light recognition;
and 3) multiframe traffic light recognition with spatial-temporal analysis. The
details of the proposed system are presented in the following sections.

IV. EXTRACTION OF TRAFFIC LIGHT CANDIDATE REGIONS


Since the traffic light is brighter than most of the background and has a distinctive color, the recognition system proposed in this paper combines a color
segmentation with a bright region extraction algorithm to generate the traffic light
candidate regions. The system has three stages: 1) the extraction of color
candidate region; 2) the extraction of lamp candidate region; and 3) the generation
of the traffic light candidate region. The extraction process is shown in Fig. 3.

A. Color Candidate Region Extraction Based on the Ellipsoid Geometry Threshold Model

In this paper, the HSL color space is adopted to extract the traffic light candidate regions. This space is better matched to visual perception, with less correlated color channels [34]. Shen et al. [24] have pointed out that the color appearances of traffic lights are concentrated around several predetermined specific colors, so that they can be well described by Gaussian distributions. Therefore, similar to [24], we model the color features of the traffic lights as 1D Gaussian distributions. First, we model the hue, saturation, and lightness according to 1D Gaussian distributions. The value of color channel k at each pixel is defined as C_k, k = 1, 2, 3, with C_k ~ N(μ_k, σ_k²), where μ_k and σ_k² are the mean and variance of color channel k, respectively.

Fig. 4. Statistical distributions of the pixels in manually labeled traffic light regions. (a) Statistical distribution of red pixels. (b) Statistical distribution of green pixels.

Then, the interesting pixels of red and green traffic light candidate regions are generated by the following equations:


Here, the value range in color channel k is (μ_k − λ·σ_k, μ_k + λ·σ_k), with λ = 3.

In order to learn the parameters μ_k and σ_k, training images with red and green traffic lights are collected, and the traffic light regions are labeled manually. With all the pixels in the manually labeled traffic light regions, the parameters μ_k and σ_k can be estimated. As the samples are collected under different weather and illumination conditions, the parameters can adapt to different environments. With the ranges determined above, three cubes (two for red and one for green) can be determined, and the centers of the cubes are denoted by (H_ri, S_ri, L_ri), i = 1, 2, and (H_g0, S_g0, L_g0), respectively. In Fig. 4(a) and (b), the statistical distributions of the pixels in manually labeled traffic light regions are shown for the red and the green pixels, respectively. From the results in Fig. 4, one can see that the pixels gather in compact regions around the centers of the cubes instead of filling them. It is also clear from Fig. 4 that the colors of the pixels in the corners of the cubes are very different from the colors of those at the center. Since a cube without corners resembles an ellipsoid, an ellipsoid geometry threshold model is proposed in this paper. The models of red pixels (H_r, S_r, L_r) and green pixels (H_g, S_g, L_g) are expressed as


where H_r ∈ (H^l_ri, H^h_ri), S_r ∈ (S^l_ri, S^h_ri), L_r ∈ (L^l_ri, L^h_ri), H_g ∈ (H^l_g, H^h_g), S_g ∈ (S^l_g, S^h_g), and L_g ∈ (L^l_g, L^h_g). The centers of the ellipsoids coincide with the centers of the cubes expressed in (3) and (4). The other parameters of the ellipsoids can be calculated as follows:

Fig. 5 shows the visualization of the ellipsoid geometry threshold models and the
statistical distributions of the pixels in manually labeled traffic light regions.
Compared with the cubes built by the traditional linear color threshold method,
the proposed ellipsoid geometry model can eliminate a large amount of the color
pixels that are uninteresting to our algorithm. In addition, more than 99% of the
interesting color pixels from the training images are contained within the
ellipsoids defined by the above parameters. More test images with traffic lights have also been collected, and the statistics of these images lead to a similar conclusion.
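The sketch below illustrates the membership test implied by the ellipsoid model: a pixel is accepted only if its (H, S, L) value lies inside an ellipsoid whose center and semi-axes come from the per-channel Gaussian fits (μ_k, σ_k) with λ = 3. The numeric parameter values shown are placeholders, not the learned ones.

# Sketch: ellipsoid geometry threshold test in HSL space.
import numpy as np

def inside_ellipsoid(hsl_pixel, center, sigma, lam=3.0):
    """Return True if the pixel falls inside the (mu +/- lam*sigma) ellipsoid."""
    h, s, l = hsl_pixel
    ch, cs, cl = center                    # ellipsoid center (mu_H, mu_S, mu_L)
    ah, as_, al = lam * np.asarray(sigma)  # semi-axes lam*sigma per channel
    value = ((h - ch) / ah) ** 2 + ((s - cs) / as_) ** 2 + ((l - cl) / al) ** 2
    return value <= 1.0

# Example with placeholder learned parameters for one red cluster:
# red_center = (2.0, 0.80, 0.55)     # hypothetical (H, S, L) means
# red_sigma = (3.0, 0.08, 0.10)      # hypothetical per-channel std deviations
# accept = inside_ellipsoid((4.0, 0.75, 0.50), red_center, red_sigma)

Compared with the cube test of the linear threshold method, this discards the corner colors that the distributions in Fig. 4 show to be uninteresting.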

Fig. 6. Sketch image of regions. (a) Color candidate region. (b) Expanded
region.


Compared with the traditional methods, the proposed ellipsoid geometry threshold model has the following advantages. 1) It resolves the
incorrect segmentation problem in traditional methods and improves the accuracy
of the candidate color pixel extraction. 2) It avoids the problem of traffic light
color cast to a certain extent. 3) It saves the processing time for the subsequent
color candidate region extraction and verification process, as it filters out some
uninteresting color regions. After extracting the interesting color pixels, we
conduct an eight-connected-component labeling to generate lamp candidate
regions. Similar to [1], several geometrical features of each candidate region are
computed: the pixel density, the aspect ratio, and the area. Using these
geometrical features, some color regions that are unlikely to belong to lamp candidate
regions are eliminated. As shown in Fig. 7(d), the vehicle’s right tail light is
eliminated as it does not satisfy the aspect ratio constraint.

B. Extraction of Lamp Candidate Regions


Due to the influence of complex background, weather, illumination
conditions and other light sources, some interference regions still exist in the
obtained color candidate regions. Considering that an active lamp is brighter than
the surrounding local region, this characteristic can be used as a post processing
step to extract the lamp candidate region while removing the interference regions.
The extraction process is as follows. First, an expanded region RE i is built for the
color candidate regions Ri,{ i = 1,2,...,N}. As shown in Fig. 6, the height and
width of the minimum enclosing rectangle of the color candidate region are
denoted by H and W, respectively. The region between the boundaries of RE i and
Ri is denoted by R i. Then the original color image of the expanded region RE i is
converted into a grayscale image. The top-hat transform is applied to eliminate
the influence of uneven illumination .Here, a square structuring element whose
width is 11pixels is chosen for the top-hat filter. Finally, a color candidate region
Ri is labeled a lamp candidate region if the region satisfies the following
conditions:
Ni < M
σRi >σRi
where Ni = N RE i −NRi, NRi represents the numbers of pixels in the region Ri,
andN RE i represents the number of pixels in the expanded region RE whose


color is the same as that of the pixels in the region R_i. M is a threshold, with M = W·H/4. σ_{R_i} and σ_{R'_i} represent the average gray values of region R_i and region R'_i, respectively. They are calculated as follows:

Fig. 7. Results of extraction of lamp candidate regions. (a) Original color image. (b) Color candidate regions
(marked in yellow). (c) Image after top-hat transform. (d) Lamp candidate regions (marked in yellow).

C. Generation of Traffic Light Candidate Regions


As we know, a typical traffic light consists of three lamps of equal size arranged vertically, and the order of the three lamps is fixed as red, yellow (or count-down timer), and green from top to bottom, as shown in Fig. 1. Considering that a backboard is often present around a traffic light in most traffic scenes, this structural information can also be used to select real red and green traffic lights from the candidates. Here, according to the relationship (the relative positions and size ratios) between active lamps and the backboard, we can generate the traffic light candidate regions. It should be noted that for a traffic light with a count-down timer, since the active lamp and the count-down timer have


the same color and are extremely close to each other, these two regions might be extracted as a single region. This region would easily be treated as one single lamp, leading to the generation of a wrong traffic light candidate region. To solve this problem, we generate the traffic light candidate region R^TL_i according to the aspect ratio A_i (A_i = H/W) of the obtained lamp candidate region, as shown in the following.

where (X_L, Y_T) and (X_R, Y_B) denote the top-left and bottom-right vertices of the traffic light candidate region R^TL_i, (x_l, y_t) and (x_r, y_b) represent the top-left and bottom-right vertices of the minimum enclosing rectangle of the lamp candidate region R_i, and r and c represent the height and width of the whole image, respectively. Here, K is a scale factor determined from the aspect ratio A_i.

V. RECOGNITION OF THE TRAFFIC LIGHT IN A SINGLE IMAGE


After the above procedure of traffic light candidate region extraction, there still exists the influence of interfering light sources, such as car tail lights. In order to verify whether a candidate region is a traffic light or background and
simultaneously recognize the type of the traffic light (round, straight arrow, left-
turn arrow, right-turn arrow, etc.), in this section, a new nonlinear kernel function
is proposed to effectively combine two heterogeneous features, HOG and LBP,
which is used to describe the traffic lights. In addition, a K-ELM is designed to
recognize the candidate region.
A. Feature Extraction of Traffic Light Candidate Regions


HOG and LBP are two heterogeneous features with complementary information. The combination of the two features can extract contour and texture information simultaneously and has obtained effective results in applications such as pedestrian detection and face recognition [35]. Thus, we use the HOG–LBP feature to describe the traffic light candidate region. In [36], HOG and LBP are directly concatenated to form a feature vector, so the contributions of each feature are not considered and the descriptive ability of the features is not fully exploited. Inspired by [37], a new nonlinear kernel function is proposed to combine the two heterogeneous features

where K(x_i, x_j) represents the proposed kernel function, x_i is the feature vector of sample i, x_i = [x_i^HOG, x_i^LBP], and x_i^HOG, x_i^LBP represent the HOG and LBP feature vectors, respectively. β is a combination coefficient which determines the contribution of each feature, with β ∈ [0, 1]. Through this kernel, the HOG feature and the LBP feature can be combined with different β. It is worth noting that the method of combining HOG–LBP features in [35] is only a special case of the proposed method, equal to β = 0.5. More details can be seen from the experimental
results in Section VIII. In this paper, a traffic light candidate region is first
converted into grayscale and is then scaled to a size of 20×40 pixels, which is
used to extract the HOG and LBP features. For the HOG feature, the block size is 10, the cell size is 5, and the number of orientation bins is 9; for the LBP feature, we
extract 58D uniform patterns and 1D nonuniform pattern per block, and the
feature vectors of all the blocks are concatenated as the LBP feature of the
candidate region. For each traffic light candidate region, the dimension of the
feature vector is 1995. In order to reduce the computation burden, the between-
category to within-category sums of squares (BW) method is adopted to reduce
feature dimensions. The strategy of BW is to select the features with large
between-category distances and small within-category distances. Considering the
algorithmic acceleration based on OpenCL (to be described in Section VII-B), the


dimension of the feature vector is reduced to 256, and then it is input to the K-
ELM with the proposed kernel function.
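As a rough illustration of combining the two heterogeneous features through a kernel, the sketch below weights one sub-kernel on the HOG part and one on the LBP part of the feature vector by β and 1 − β. The RBF form of the sub-kernels and the split point are assumptions for illustration; the paper's exact kernel formula is not reproduced here.

# Sketch: a beta-weighted combination of HOG and LBP sub-kernels.
import numpy as np

def rbf(a, b, gamma=0.01):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def combined_kernel(x_i, x_j, hog_dim, beta=0.5):
    """K(x_i, x_j) combining the HOG and LBP parts, with x = [x_HOG, x_LBP]."""
    k_hog = rbf(x_i[:hog_dim], x_j[:hog_dim])
    k_lbp = rbf(x_i[hog_dim:], x_j[hog_dim:])
    return beta * k_hog + (1.0 - beta) * k_lbp

With β = 0.5 both feature blocks contribute equally, while other values of β shift the emphasis toward the contour (HOG) or texture (LBP) information.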
B. Recognition of Traffic Light Candidate Regions
Extreme learning machine (ELM) is a machine learning method with fast training speed that is suitable for multicategory classification tasks [39]. Many research results show that ELM produces comparable or better classification accuracies with lower implementation complexity compared with artificial neural networks and support vector machines. Furthermore, it has been pointed out that K-ELM achieves good generalization performance; meanwhile, there is no randomness in assigning the connection weights between the input and hidden layers, and the number of hidden nodes does not need to be specified.
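The sketch below shows a kernel ELM in the standard closed form that the later implementation section also uses: the offline part A = (I/λ + K)⁻¹ T is computed from the training data once, and the online prediction is f(x) = [K(x, x_1), ..., K(x, x_N)] A. The kernel argument is any two-argument callable (for example, a wrapper around the combined_kernel sketch above); λ and the data are placeholders.

# Sketch: kernel extreme learning machine (K-ELM) training and prediction.
import numpy as np

def kelm_train(X, T, kernel, lam=1.0):
    """X: (N, d) training features, T: (N, m) one-hot class targets."""
    n = X.shape[0]
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    A = np.linalg.solve(np.eye(n) / lam + K, T)   # (I/lambda + K)^(-1) T
    return A

def kelm_predict(x, X, A, kernel):
    B = np.array([kernel(x, X[j]) for j in range(X.shape[0])])  # 1 x N kernel row
    scores = B @ A                                # one score per class
    return int(np.argmax(scores))                 # predicted class index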
VI. RECOGNITION OF THE TRAFFIC LIGHT BASED ON SPATIAL-
TEMPORAL ANALYSIS

A. Traffic Light Phase Recognition Using a Finite-State Machine


The recognition of the traffic light in single images was described in Sections IV and V. The traffic lights in the image were extracted and recognized, and the phase and type of each recognized traffic light can also be given. However, some false positive recognition results might exist. In order to increase the reliability of recognition over a period of time, in this section the traffic light recognition is extended from single images to multiframe images. This extension relies on two observations. 1) The number of phases of the traffic light is limited (red or green) and each phase lasts for a certain period of time. In fact, there exists an optional yellow or count-down timer phase, but our system ignores this phase. 2) Each traffic light can only be in one particular phase at a time, namely, either the red or the green light is switched ON. These rules can be applied to improve the recognition performance obtained from a single frame. In this section, we introduce an information queue S_i, which allows verification by multiframe spatial-temporal analysis. The information queue S_i records the recognition results of the most recent Q_size observations of the ith recognized traffic light, and the recognition results consist of phase, type, and location. Q_size denotes the size of the information queue. The maintenance process of the information queue is shown in Fig. 8. First, the existing information queues S = {S_1, S_2, ..., S_i, ..., S_K} and the


recognized traffic lights L = {L_1, L_2, ..., L_j, ..., L_N} at time t+1 are associated by the nearest neighbor (NN) rule. Then, according to the association results, the information queues are updated: if S_i is associated with L_j, a new piece of information corresponding to L_j is added; otherwise, a void flag is pushed. Here, in order to perform the association, the traffic lights represented by S need to be tracked. Then the updated tracked

locations are associated with L by the NN rule. In this paper, the Lucas-Kanade
algorithm [43] is adopted for tracking the traffic lights. All tracking points are
chosen at the positions where obvious features could be extracted, such as the
corners of the traffic light backboard. It should be noted that for each of the
unassociated recognized traffic lights, a new queue is established. After
establishing the information queues, a spatial-temporal analysis framework based
on a finite-state machine is introduced to enhance the reliability of the recognition
of traffic lights. For each recognized traffic light, four states exist in its life cycle:
1) initial state; 2) candidate state; 3) validation state; and 4) end state. The finite-
state machine is able to clearly describe the transitions between these states
(shown in Fig. 9) and the required conditions of these transitions. In this paper,
only the traffic lights at validation state have phase recognition results. Thus,
some occasional single-frame false positive recognition results can be reduced.
This finite-state machine can be interpreted as follows. For a new recognized light


Lj, which has not been associated with any existing queues, a new queue is
established and its state information is initialized. At this moment, the light Lj is
at the initial state. This state is a temporary state. Once the initialization is
complete, it transits to the candidate state. This process corresponds to the state
transition process T1 in Fig. 10. For a traffic light Lj at candidate state or
validation state, it will enter the end state when there are NV consecutive void
flags in the queue, which means the traffic light is not successfully associated
continuously. The queue will be deleted afterward. This process corresponds to
T2. For a traffic light Lj at candidate state, it will turn into the validation state if
the validation condition is met. This process corresponds to T3. Otherwise, it will
maintain the current candidate state, which corresponds to T4. The validation condition is as follows: the number of occurrences N_s of the most frequently appearing phase in the recent Q_size entries of the information queue should not be less than the preset threshold Q_min. For a traffic light L_j at validation state, the output phase after the multiframe spatial-temporal analysis is

Green, if N_s = N_g
Red, if N_s = N_r
Unknown, otherwise

where N_s = max(N_r, N_g), and N_r and N_g represent the number of red and green phase results in the recent Q_size entries, respectively. When the information queue of a validated traffic light no longer meets the validation condition, its state turns from the validation state back into the candidate state. This process corresponds to T5.


B. Type Recognition of the Traffic Light


After the recognition of the phase of the traffic light, a simple voting
approach is adopted to determine the type of the traffic light

where k ∈ {1, 2, 3, 4, 5} represents the type of the traffic light (round, straight arrow, left-turn arrow, right-turn arrow, and unknown type, respectively), and C_t represents the type of the traffic light at time t.
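A small sketch of this voting step is given below: the reported type C_t is simply the most frequent single-frame type among the recent queue entries. The type labels follow the enumeration in the text; the queue contents are assumed.

# Sketch: majority vote over recent per-frame type labels.
from collections import Counter

ROUND, STRAIGHT, LEFT, RIGHT, UNKNOWN = 1, 2, 3, 4, 5

def vote_type(recent_types):
    """recent_types: list of per-frame type labels for one tracked light."""
    votes = [t for t in recent_types if t != UNKNOWN]
    if not votes:
        return UNKNOWN
    return Counter(votes).most_common(1)[0][0]

# Example: vote_type([ROUND, ROUND, STRAIGHT, ROUND]) returns ROUND (1).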

VII. SYSTEM IMPLEMENTATION ON SMARTPHONE PLATFORMS


In our research, this recognition system is implemented on a Samsung
Note 3 smartphone. The Note 3 is equipped with a quad-core Krait 400
architecture CPU at up to 2.3 GHz per core and an Adreno 330 GPU with a
frequency of 450 MHz, and 3 GB of RAM. With the limited computing resources of smartphone platforms, more efficient solutions need to be explored to achieve real-time performance.
A. Quick Extraction of Color Candidate Regions
As the image sequence captured by the Samsung Note 3 smartphone is in
the YUV (YUV4:2:0) color space, it needs to be converted into HSL space. Then
one can apply the proposed ellipsoid geometry threshold model to judge if a pixel
is an interesting color pixel or not. However, the process of color space
conversion and judgment will cause a sharp increase in computation cost. To
reduce the computation load on the device, we combine both the color space conversion and the interesting-color-pixel judgment in a lookup table (LUT). The storage structure of the LUT is C[Y][U][V] = CV, where CV ∈ {0, 1, 2} indicates that the pixel is green, red, or an uninteresting color pixel, respectively. The size of the LUT is 256 × 256 × 256. Therefore, by simply looking up the table, it can be quickly judged whether a given pixel is an interesting color pixel or not.
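The sketch below illustrates the lookup-table idea: every (Y, U, V) triple is converted to HSL once, offline, and classified with the ellipsoid threshold, so that at run time a single table read labels a pixel as green (0), red (1), or uninteresting (2). The conversion and classification functions are assumed helpers (e.g., built from the inside_ellipsoid sketch earlier); the table is precomputed once, not per frame.

# Sketch: precompute the YUV -> color-label lookup table.
import numpy as np

def build_color_lut(yuv_to_hsl, classify_hsl):
    """Precompute C[Y][U][V] in {0: green, 1: red, 2: uninteresting}."""
    lut = np.full((256, 256, 256), 2, dtype=np.uint8)
    for y in range(256):
        for u in range(256):
            for v in range(256):
                h, s, l = yuv_to_hsl(y, u, v)
                lut[y, u, v] = classify_hsl(h, s, l)
    return lut

# At run time, a whole frame is labeled with one vectorized indexed lookup:
# labels = lut[frame_y, frame_u, frame_v]   # arrays of Y, U, V components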
B. Acceleration of K-ELM Algorithm Using OpenCL


From the formula, it can be seen that the output f(x) of the K-ELM consists of two parts. One part is A_{N×m} = ((I/λ) + Ω_ELM)^{-1} T, which is only related to the training samples and can be calculated offline, and thus it does not consume any online computation resources. The other part is B_{1×N} = [K(x, x_1), ..., K(x, x_N)], which is related to the feature vector of the candidate region and the feature vectors of the training samples.
This part needs to be calculated online with the proposed kernel function, and thus it is time consuming. Considering that the computation of each dimension of B_{1×N} is independent, it is suitable for parallel optimization. In order to reduce the computation time, a CPU–GPU fusion-based approach is adopted to accelerate the proposed K-ELM algorithm using OpenCL. For the recognition of a given candidate region, the acceleration process is shown in Fig. 10. First, the calculation and dimension reduction of the candidate region's feature vector x are performed on the CPU. To facilitate the computation on the GPU, and considering the number of the GPU's processing elements, the dimension of the feature vector is reduced to 256. Then, the data needed for calculating f(x) are copied from the CPU memory to the global memory of the GPU. The data include the feature vector x, the offline calculated feature vectors X_{256×N} = [x^T_1, x^T_2, ..., x^T_N] of the training samples, and A_{N×m}. Here, X_{256×N} and A_{N×m} are treated as constant matrices and copied only once at initialization. Next, f(x) is calculated on the GPU. To calculate B_{1×N} = [K(x, x_1), ..., K(x, x_N)], the global execution space of the GPU is divided into n work groups, n = ⌈N/M⌉.
Here, ⌈·⌉ indicates the ceiling operation, M indicates the number of work items in each work group, and in total M×n work items are generated. In this paper, M = 256, which corresponds to the dimension of the feature vector. For each work item, one K(x, x_i) operation is executed. In the process of calculating matrix B, since the feature vector x of the candidate region is used by all the work items, it is copied from global memory to the shared local memory of each work group. In addition, all the work items are programmed to perform coalesced access to the global memory so that memory access is accelerated. Finally, the output f(x) is copied to the CPU memory.


Within the framework of OpenCL, a pipeline scheme is also designed to process multiple candidate regions in parallel, as shown in Fig. 10(b). It can be seen that the calculation and copying of the feature vectors of these candidate regions are pipelined, as are the calculation and copying of the corresponding outputs. These operations hide the time used for copying data between the CPU and GPU memories. The proposed acceleration approach has been implemented and evaluated on the Samsung Note 3 smartphone. From the test results, it can be seen that the proposed acceleration approach achieves 0.75 ms per candidate region, which is five times faster than the CPU-only implementation. This meets the requirement of the system's real-time behavior.
