
The Journal of China Universities of Posts and Telecommunications
July 2010, 17(Suppl.): 79−83
www.sciencedirect.com/science/journal/10058885
http://www.jcupt.com

Research on technology of traffic video incidents detection under highway condition

LIU Xiao-ming, ZHANG Zhong-hui, LI Guan-yu, LV Ting-jie

Information Management and Economic Information Key Laboratory of MII, Beijing University of Posts and Telecommunications, Beijing 100876, China

Abstract

Related algorithms for traffic video incident detection are discussed systematically in this paper, and the technologies used at each stage of the video incident detection process are analyzed and investigated. Based on a broad survey of video processing algorithms at home and abroad, this paper proposes a new traffic video incident detection algorithm with good applicability to highway traffic conditions, together with a complete solution for traffic video incident detection in the highway environment. Experimental results show that the correct incident detection rate can exceed 95% during the day and 90% at night.

Keywords

intelligent traffic, video incident detection, background extraction, moving object recognition

1 Introduction

Traditional traffic video surveillance in the highway environment lacks intelligent video analysis and processing, so the video must be supervised manually and continuously. However, the video display system in the highway management center cannot show the videos of all roads at the same time, and manual supervision can only monitor the videos group by group. A large amount of road traffic video therefore goes unmonitored, making video incident detection time-consuming, labor-intensive and inefficient.
Based on studies of a variety of video processing algorithms at home and abroad, this paper proposes an improved video detection algorithm suited to highway applications, together with a complete traffic video incident detection solution for the highway environment. This solution can perform video incident detection, traffic inspection and event processing on every traffic surveillance video simultaneously. It effectively avoids the unmonitored state caused by rotating the display among cameras, and it automatically monitors and processes the video of each road, saving considerable manpower and management cost.

Received date: 19-03-2010
Corresponding author: LIU Xiao-ming, E-mail: lxm@bjedu.gov.cn
DOI: 10.1016/S1005-8885(09)60602-6
Highway surveillance video is composed of continuous moving images. By analyzing the image sequence, the surveillance system extracts the relevant features of the moving video objects, such as color, texture and contour. After a series of machine learning, recognition and other operations, we can build the object contour, region information and other relevant feature vectors of the video objects. Through machine learning we can then perform pattern discovery and the segmentation and recognition of highway video objects such as vehicles, pedestrians, roads, trees and highway constructions. For example, detecting roads and vehicles requires computing their feature vectors and matching them under a suitable distance measure; the distance orders the similarity between the image features and the samples, and the image objects (such as vehicles, pedestrians, roads, trees and highway constructions) are then classified and recognized according to the semantic patterns summarized by the machine learning approach.
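The distance-based matching described above can be sketched as a nearest-neighbor classifier. The feature vectors, class labels and the Euclidean metric below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def classify_by_distance(feature, samples, labels):
    """Assign the label of the nearest sample under Euclidean distance."""
    dists = np.linalg.norm(samples - feature, axis=1)  # distance to each sample
    order = np.argsort(dists)                          # rank samples by similarity
    return labels[order[0]]

# Hypothetical 3-D feature vectors for two object classes.
samples = np.array([[0.9, 0.1, 0.2],   # vehicle
                    [0.8, 0.2, 0.1],   # vehicle
                    [0.1, 0.9, 0.8]])  # pedestrian
labels = np.array(["vehicle", "vehicle", "pedestrian"])

print(classify_by_distance(np.array([0.85, 0.15, 0.15]), samples, labels))
```

In practice the feature vectors would come from the learned contour and region descriptors, and the distance measure itself is a design choice.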
After identifying and segmenting the video objects, we analyze their motion characteristics to determine whether the traffic condition is normal. For
example, most vehicles on a highway move at similar velocities; when one or more vehicles change their state of motion sharply, an abnormal traffic event has often occurred, such as parking, fire, rollover, a car accident, speeding or driving in the wrong direction.
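As an illustration of detecting such abnormal changes of motion, the sketch below flags vehicles whose speed deviates strongly from the surrounding flow using a robust median rule. The speeds, the threshold k and the rule itself are assumptions for illustration, not the paper's detector:

```python
import numpy as np

def flag_abnormal(speeds, k=3.0):
    """Flag vehicles whose speed deviates strongly from the traffic flow.

    speeds: per-vehicle speed estimates (e.g. from tracking); k: how many
    robust standard deviations count as abnormal. Both are illustrative
    assumptions, not values from the paper.
    """
    speeds = np.asarray(speeds, dtype=float)
    med = np.median(speeds)
    mad = np.median(np.abs(speeds - med)) + 1e-9   # robust spread estimate
    return np.abs(speeds - med) / (1.4826 * mad) > k

# Most vehicles move at ~110 km/h; one is stopped (possible incident).
print(flag_abnormal([108, 112, 110, 109, 0]))
```

A real system would feed tracked trajectories into such a rule and combine it with direction and position checks to distinguish parking, wrong-way driving and speeding.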

2 System implementation
2.1 Pre-processing images
In order to reduce image noise, we filter the images to improve their quality, which aids subsequent processing. Most of the image energy is concentrated in the low- and medium-frequency bands of the amplitude spectrum, while in the high-frequency band the interesting information is often overwhelmed by noise. A low-pass filter can therefore weaken the impact of high-frequency noise. Common low-pass filters include the Gaussian low-pass filter and the mean filter; this paper uses a mean filter to reduce high-frequency image noise [1].
For each pixel (m, n) in a given image f(i, j), take its neighborhood S. Given that S contains M pixels, take their average as the gray scale of the processed pixel (m, n); that is, the mean filter replaces the gray scale of the original pixel with the average gray scale of its neighborhood. The shape and size of S are determined by the characteristics of the image; common shapes are square, rectangular and cross-shaped, with the point (m, n) usually located at the center of S. If S is a 3×3 neighborhood with (m, n) at its center, then

f̄(m, n) = (1/9) ∑_{i=−1}^{1} ∑_{j=−1}^{1} f(m + i, n + j)        (1)
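Eq. (1) can be sketched directly in NumPy. The border handling (edge replication) is an assumption, and the demo at the end illustrates the noise-variance reduction (to roughly σ²/M) that neighborhood averaging provides:

```python
import numpy as np

def mean_filter(img):
    """3x3 neighborhood average of Eq. (1), replicating the border pixels."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return out / 9.0

# Additive zero-mean noise: filtering should shrink its variance toward 1/9.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 10.0, (200, 200))
print(noise.var(), mean_filter(noise).var())
```

The filtered variance drops by close to a factor of M = 9, matching the σ²/M argument of Eq. (2) below (neighboring outputs are correlated, so the reduction is approximate rather than exact).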

We denote f̄(m, n) as the result of processing the image f with the neighborhood-averaging algorithm. Given that the noise n is additive, spatially uncorrelated, with expectation 0 and variance σ², and that g is the unpolluted image, the noisy image f after neighborhood averaging becomes

f̄(m, n) = (1/M) ∑_{(i,j)∈S} f(i, j) = (1/M) ∑_{(i,j)∈S} g(i, j) + (1/M) ∑_{(i,j)∈S} n(i, j)        (2)

From Eq. (2) we can see that after neighborhood averaging the mean of the noise does not change, while its variance becomes σ²/M. The variance of the noise is smaller, indicating that the intensity of the noise has decreased, that is, the noise is suppressed.

2.2 Background extraction and adaptive update

Refs. [2−3] point out that, among background extraction methods, the Gaussian mixture model is the most effective, the approximate median filter comes next, and the inter-frame difference method is the worst. This paper presents a method of background extraction and adaptive update based on a dynamic threshold, whose procedure is shown in Fig. 1. We denote by I_r the image whose video frame number is r.

Fig. 1  Procedure of background extraction and dynamic update

Firstly, we initialize the number of frames required for background learning, with the default that the background becomes stable after 200 frames of the continuous image sequence; if the highway environment changes greatly, this value should be larger. The background is then updated at a certain interval, so we only need to cache one frame of

the image I_{n−r}. The binary image used to update the background is

I_bin = 1 if |I_n − I_{n−r}| > Th; 0 otherwise        (3)

where I_bin is the binary image, in which 1 represents a moving pixel and 0 a static one; I_n is the current frame and Th is the threshold deciding whether a pixel belongs to a moving region. The interval between background updates defaults to ten frames; if the background changes greatly, the value should be smaller.

When the program reaches the update interval, the inter-frame difference is computed, and we take the mean and maximum of its absolute value. A fixed threshold easily makes the segmentation fail as the lighting changes, so the threshold must be updated dynamically. The difference equation of the background is

I_diff = |I_n − I_b|

where I_b denotes the background image. The histogram of I_diff is shown in Fig. 2: after the difference operation, most pixels fall in the low-value area, so the threshold is best set within 20. The following equation is used in the actual calculation:

Th = a·I_med + (1 − a)·I_max        (4)

where I_med and I_max are the mean and the highest pixel values after frame differencing, respectively. With a = 0.8, moving regions are detected stably.

Fig. 2  Histogram after difference method of background

After the threshold is updated dynamically, the image is binarized against it. To prevent noisy regions and holes in the binary image from corrupting the background, the binary image is dilated. The template size and the number of dilations were determined experimentally; a 16×16 template with one dilation gives a good result.

During the adaptive update of the background, take I_bpre as the background to be updated and I_b as the fully updated background; the current background is

I_b = a·I_n + (1 − a)·I_bpre if I_bin = 0; I_bpre otherwise        (5)

After the binary image I_bin is obtained, points whose value is 0 in I_bin are stationary points, corresponding to the background region of the current image I_n, and need to be updated into the background, denoted by I_B:

I_B = I_n · (~I_bin)        (6)

where the sign ~ inverts each pixel of the binary image: points whose original value is 0 become 1, and points whose original value is 1 become 0. Points whose value is 1 in I_bin are moving points, corresponding to the foreground region of the current image I_n, and must not be updated into the background; the points corresponding to the original background are denoted by P_B:

P_B = I_bpre · I_bin        (7)

The background is then updated as

I_b = a·I_B + (1 − a)·P_B        (8)

After obtaining the background image (shown in Fig. 3), subtracting it from the original image (Fig. 4) yields the foreground image based on the updated background, shown in Fig. 5.

Fig. 3  Background image based on adaptive update
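A minimal sketch of the dynamic-threshold update cycle of Eqs. (3)−(5), assuming grayscale float images; the 16×16 dilation of the binary mask and the frame-caching policy are omitted for brevity:

```python
import numpy as np

def update_background(i_n, i_prev, i_bpre, a=0.8):
    """One cycle of the dynamic-threshold background update, Eqs. (3)-(5).

    i_n: current frame; i_prev: frame cached r frames earlier;
    i_bpre: background awaiting update. All grayscale float arrays.
    (The 16x16 dilation of the binary mask is omitted in this sketch.)
    """
    diff = np.abs(i_n - i_prev)
    th = a * diff.mean() + (1.0 - a) * diff.max()   # Eq. (4): dynamic threshold
    i_bin = diff > th                               # Eq. (3): True = moving pixel
    # Eq. (5): blend the frame into the background only at static pixels.
    return np.where(i_bin, i_bpre, a * i_n + (1.0 - a) * i_bpre)

# A static 8x8 scene with one moving 2x2 block.
bg = np.full((8, 8), 50.0)
prev = bg.copy()
cur = bg.copy()
cur[2:4, 2:4] = 200.0                 # moving object appears in the frame
new_bg = update_background(cur, prev, bg)
print(new_bg[0, 0], new_bg[2, 2])     # static pixels blend; moving pixels keep old bg
```

The moving block is masked out of the update, so the background never absorbs the vehicle, while static pixels drift toward the current frame at rate a.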


Fig. 4  Original image

Fig. 5  Foreground image based on the adaptively updated background image

1) Vehicle target detection

The purpose of vehicle target detection is to extract the moving region from the background through the image sequence. Effective segmentation of the moving region is very important for the follow-up processing, because target classification, tracking, parameter estimation and other processes are all based on it [1]. Correct target detection greatly increases the accuracy of the subsequent tracking, identification and parameter estimation. However, due to dynamic changes in the background image, such as weather, light, shadow and clutter interference, accurate detection of moving targets is rather difficult. At present the main methods for moving target detection are the difference method and the optical flow method.
The inter-frame difference method extracts two or three adjacent frames from a continuous image sequence and uses a pixel-based temporal difference to obtain the motion information from the images. It adapts well to dynamic environments, processes quickly and runs in real time; it also handles the shadow problem reasonably well because it is not sensitive to regions of uniform color. The difference method is the one used most frequently in moving target detection: it is simple to implement, fast, and gives good results in most cases. Its basic principle is to subtract the gray scales of corresponding pixels. If the difference is small, the region can be considered static; if the difference is large, it is taken to be caused by moving objects and is marked. Using these marked areas, we can find the position of the moving target in the image. The drawbacks of the difference method are its sensitivity to lighting conditions and the fact that the difference image contains little information, making it difficult to recover the actual number of objects, their shapes, occlusions, and so on. However, because the method is simple and fast, it is widely used.
In summary, the main advantages of the two-frame image difference method are that the algorithm is simple, the program design has low complexity, and real-time monitoring is easy to implement, whereas the optical flow method, contour matching and motion vector calculation cannot meet the real-time requirement of this paper because of their low speed. Therefore, the inter-frame difference method is adopted.
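The inter-frame difference principle just described can be sketched as follows; the threshold of 20 follows the text's observation about the difference histogram, while the synthetic frames are illustrative:

```python
import numpy as np

def frame_difference(f1, f2, th=20.0):
    """Mark pixels whose gray-scale change between two frames exceeds th."""
    return np.abs(f2.astype(float) - f1.astype(float)) > th

# Object (gray value 200) moves from columns 1-2 to columns 3-4.
f1 = np.zeros((5, 6))
f1[2, 1:3] = 200
f2 = np.zeros((5, 6))
f2[2, 3:5] = 200
mask = frame_difference(f1, f2)
ys, xs = np.nonzero(mask)
print(sorted(xs))   # both the vacated and the newly occupied columns are marked
```

Note the characteristic drawback mentioned above: the mask marks both where the object was and where it now is, so the difference image alone does not give the object's exact shape or count.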
2) Vehicle tracking

The Kanade-Lucas-Tomasi (KLT) tracker is a relatively mature tracker [4−6]. Developed from corner extraction, it uses nonlinear optimization to converge from the feature points in one image to the corresponding points in the next, and it provides criteria for supplementing feature points when too few remain, keeping their number stable. Vehicle tracking mainly uses the Lucas-Kanade motion estimation theory combined with corner detection and feature matching. Stanley Birchfield et al. [7−9] applied these methods to low-angle detection and tracking in the highway environment. This paper applies the KLT tracker's corner detection to the image after the foreground region is obtained. Each frame detects 3 000−4 000 corner points, all lying on the moving targets; as shown in Fig. 6, the black points on the vehicle are the detected corner points.
After detecting the moving targets, we obtain the initial location and size of each target and extract its characteristics for matching and tracking. In this paper a Kalman model is established to predict the location of the target; it not only handles temporary occlusion effectively, but also narrows the search scope and improves the tracking speed. Fig. 7 shows the results of tracking a moving target in a certain area.
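The KLT idea can be illustrated with a single Lucas-Kanade step in plain NumPy. This sketch solves the normal equations over one whole window, whereas the real tracker does so per corner point, with pyramids and iteration, and the paper additionally uses a Kalman model for prediction:

```python
import numpy as np

def lk_translation(img1, img2):
    """Estimate a global translation (dx, dy) between two frames by solving
    the Lucas-Kanade normal equations over the whole window."""
    ix = np.gradient(img1, axis=1)      # spatial gradient in x
    iy = np.gradient(img1, axis=0)      # spatial gradient in y
    it = img2 - img1                    # temporal gradient
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)        # least-squares flow vector

# Smooth synthetic image whose content shifts one pixel to the right.
x, y = np.meshgrid(np.arange(64), np.arange(64))
img1 = np.sin(x / 6.0) + np.cos(y / 7.0)
img2 = np.sin((x - 1) / 6.0) + np.cos(y / 7.0)
dx, dy = lk_translation(img1, img2)
print(dx, dy)   # close to (1, 0)
```

The 2×2 system is well conditioned only where the image has gradient energy in both directions, which is exactly why KLT tracks corner points rather than arbitrary pixels.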

Fig. 6  Results of corner point detection

Fig. 7  Results of tracking a moving target in a certain area

3 Experimental results and analysis

The algorithm detects the background information of the highway environment in which the moving targets appear. By effectively modeling the difference between the moving targets and the background, we can fundamentally guarantee the capacity of the model to distinguish the foreground region from the background. Experimental results show that, compared with current common tracking algorithms, the proposed method obtains the best tracking results on the open test sequences. At the same time, experiments show that by dynamically updating the relevant threshold during video processing, the inter-frame difference method can also achieve good background extraction.

A number of experiments under various highway environments show that the correct detection rate can exceed 95% during the day and 90% at night. Practice has proved that the proposed method is a good solution to the highway incident detection problem, and the algorithm is simple and effective, with high robustness.

4 Conclusions

A traffic video incident detection technique has been presented in this paper, together with a complete solution for traffic video incident detection in the highway environment. A simple approach has been adopted to obtain the video incident detection results. Some aspects of the proposed algorithm need further analysis and improvement; in addition, future research should address video incident detection where the traffic environment on highways is more complicated.

Acknowledgements

This work was supported by the Hi-Tech Research and Development Program of China (2006AA04A106).

References
1. Chen Weijie. Detection technology of moving objects under complex background. Master thesis. Beijing, China: Beijing University of Aeronautics and Astronautics, 2007: 16−27 (in Chinese).
2. Clemens A, Christian L, Horst B. An embedded platform for remote traffic surveillance. Proceedings of the 2nd Workshop on Embedded Computer Vision (ECVW'06), Mar 7, 2006, New York, NY, USA. Los Alamitos, CA, USA: IEEE Computer Society, 2006: 125−132.
3. Cheung S C S, Kamath C. Robust techniques for background subtraction in urban traffic video. Proceedings of the SPIE: Video Communications and Image Processing, 2004, 5308: 881−892.
4. Daily D, Cathy F W, Pumrin S. An algorithm to estimate mean traffic speed using uncalibrated cameras. Proceedings of the IEEE Conference on Intelligent Transportation Systems, 2000: 98−107.
5. Schlosser C, Reitberger J, Hinz S. Automatic car detection in high resolution urban scenes based on an adaptive 3D-model. Proceedings of the IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin, Germany, 2003: 98−107.
6. Kanhere N K, Pundlik S J, Birchfield S T. Vehicle segmentation and tracking from a low-angle off-axis camera. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005: 1152−1157.
7. Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision. Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI'81), Aug 24−28, 1981, Vancouver, Canada. San Francisco, CA, USA: Morgan Kaufmann Publishers, 1981: 674−679.
8. Tomasi C, Kanade T. Detection and tracking of point features. Technical Report CMU-CS-91-132. Pittsburgh, PA, USA: Carnegie Mellon University, 1991.
9. Shi Jianbo, Tomasi C. Good features to track. Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'94), Jun 21−23, 1994, Washington, DC, USA. Los Alamitos, CA, USA: IEEE Computer Society, 1994: 593−600.
