BY:
Rakhi Gupta (0009510035)
DEPARTMENT OF CS
This is to certify that the project titled “Traffic Monitoring System (TMS)” has been
completed successfully in MGM’s College of Engineering & Technology by:
GUIDE                HOD CS
Acknowledgements
Being the only member of my team was not easy, and I take this opportunity to thank my
teachers and friends who helped me throughout the project.
First and foremost, I would like to thank my project guide, Mr. Sanjeev Pippal,
H.O.D. (CS/IT Dept., MGMCOET), for his invaluable advice and time during the
development of the project.
I would also like to thank Mr. Abhishek Garg (Lecturer, CS & IT Department), Mrs. Archana Saar
(Lecturer, CS & IT Department), Mr. Nitin Bajpai (Lecturer, CS & IT Department), Ms.
Reena Gupta (Lecturer, CS & IT Department), Mr. Md. Haider (HOD, Physics
Department), and other faculty members for their constant support during the
development of the project. I am also thankful to Mr. Neeraj Kaushik, our Lab In-charge,
for his help in installing many software packages.
I express gratitude towards my friends Akshat Gupta and Shreyansh Jain for helping me
with video clips, Mayur Hemani for helping in the development of algorithms, and Amit
Khanna, Bhawana Bist, Debasish Das, Neha Puri, Prabhmeet Kaur, Shalini Singhi and
Sunder Singh for always being there. Last but not least, I express gratitude towards my
friend Anuj Gupta for providing me with constant support and motivation.
Abstract
Traffic surveillance is an area of active research, and many products catering to this need
are being developed. The exact problem of tracking vehicles in spite of
weather changes and other real-time disturbances, however, still remains a challenge. This
project is a novel attempt to analyse the problem, provide an algorithmic insight towards
a possible solution, and implement the proposed solution. Analysis of the problem
involves an in-depth investigation of various issues, mostly pertaining to image
processing operations. Alternate approaches to these issues are then sought and
developed. These approaches may be based on classical image processing or on more
intuitive techniques. The classical image processing techniques have the advantage of
being time-tested strategies; however, they suffer from the drawback of a steep
learning curve. Intuitive algorithms need rigorous testing but provide tailor-made
solutions to the problem. The approach chosen considers performance issues and these
trade-offs. After the major problems and solutions are identified, implementation issues
are addressed. These pertain to the choice of programming language as well as the
architecture of the solution.
Table of Contents
1 INTRODUCTION TO TRAFFIC SURVEILLANCE
1.1 PURPOSE
1.2 BACKGROUND
1.3 THE PROBLEM
1.4 OBJECTIVE
1.5 APPLICATIONS
1.6 ASSUMPTIONS
2 FEASIBILITY STUDY
2.1 PURPOSE
2.2 REFERENCES
2.3 INPUT APPROACHES
2.3.1 Loop Detectors
2.3.2 Infrared Imaging
2.3.3 Video
2.4 PROCESSING APPROACHES
2.4.1 Classical Image Processing
2.4.2 Use of certain heuristics
2.4.3 Conclusion
2.5 SUMMARY
3 ELEMENTARY CONCEPTS OF IMAGE PROCESSING
3.1 PURPOSE
3.2 REFERENCES
3.3 OVERVIEW
3.3.1 Low Level Image Processing
3.3.2 Intermediate-Level Image Processing
3.3.3 High Level Image Processing
3.3.4 Some Common Tools
4 TOWARDS THE SOLUTION
4.1 PURPOSE
4.2 REFERENCE
4.3 ISSUE #1 (EXTRACTION OF VEHICLE INFORMATION)
4.3.1 Problem
4.3.2 Possible Approaches
4.3.3 Solution
4.4 ISSUE #2 (CHANGES IN BACKGROUND)
4.4.1 Problem
4.4.2 Possible Approaches
4.4.3 Solution
4.5 ISSUE #3 (REMOVAL OF NOISE AND SMOOTHING OF IMAGE)
4.5.1 Problem
4.5.2 Possible Approaches
4.5.3 Solution
4.6 ISSUE #4 (CONVERSION FROM IMAGE DATA TO COORDINATES)
4.6.1 Problem
4.6.2 Possible Approaches
4.6.3 Solution
4.7 ISSUE #5 (CLASSIFICATION OF PATTERNS)
4.7.1 Problem
4.7.2 Possible Approaches
4.7.3 Solution
4.8 ISSUE #6 (INTERRELATING OF VEHICLES IN CONSEQUENT FRAMES)
4.8.1 Problem
4.8.2 Possible Approaches
4.8.3 Solution
4.9 SUMMARY
5 REQUIREMENT ANALYSIS
5.1 INTRODUCTION
5.2 REFERENCES
5.3 TERMINOLOGY
5.3.1 Actors
5.3.2 Use Cases
5.4 USE CASE DIAGRAMS
5.4.1 Primary Level Use Case diagram
5.4.2 Secondary Level Use Case Diagram 1
5.4.3 Secondary Level Use Case Diagram 2
5.4.4 Tertiary Level Use Case Diagram 1
5.4.5 Tertiary Level Use Case Diagram 2
5.4.6 Tertiary Level Use Case Diagram 3
5.4.7 Tertiary Level Use Case Diagram 4
5.4.8 Secondary Level Use Case Diagram 3
5.4.9 Tertiary Level Use Case Diagram 5
5.4.10 Tertiary Level Use Case Diagram 6
5.4.11 Integrated Use Case Diagram
6 DESIGN DOCUMENT
6.1 PURPOSE
6.2 REFERENCE
6.3 ACTIVITY DIAGRAMS
6.3.1 Activity Diagram for give_video_path
6.3.2 Activity Diagram for select_operation
6.3.3 Activity Diagram for set_background_frame
6.3.4 Activity Diagram for set_sample_frame
6.3.5 Activity Diagram for set_threshold_subtraction
6.3.6 Activity Diagram for set_pattern_noise_removal
6.3.7 Activity Diagram for set_gateway_boundary
6.3.8 Activity Diagram for set_area_classification
6.3.9 Activity Diagram for set_reduction_area_threshold
6.3.10 Activity Diagram for set_template_increment
6.3.11 Activity Diagram for set_horizon_limit
6.3.12 Activity Diagram for set_noise_area
6.3.13 Activity diagram for tracking
6.3.14 Activity Diagram for classification
6.4 CLASS/OBJECT DIAGRAMS
6.4.1 Image classes hierarchy
6.4.2 Image to object modelling
6.4.3 Object containment in object_list
6.4.4 Bw_image class
6.4.5 CBW_IMAGE class
6.4.6 CCIMAGE class
6.4.7 Colored_image class
6.4.8 I_pattern class
6.4.9 Image Class
6.4.10 Object class
6.4.11 Object_list class
6.4.12 Object_node class
6.4.13 Parameters class
6.4.14 Part_image class
6.4.15 Pattern Class
6.4.16 Self_tracking_image class
6.5 SEQUENCE DIAGRAMS
6.5.1 Make Subtracted Image
6.5.2 Classification Sequence Diagram
6.5.3 Tracking Sequence Diagram
6.6 USER INTERFACE DIALOGS
6.6.1 Select Video Dialog
6.6.2 Select Operation Dialog
6.6.3 Set Image Processing Parameters Dialog
6.6.4 Set Classification Parameters Dialog
6.6.5 Set Tracking Parameters Dialog
7 IMPLEMENTATION
7.1 CODE SNIPPETS
7.1.1 Erosion
7.1.2 Finding objects in the bw_image
7.1.3 Incrementing the template
7.2 DELIVERABLES
8 CONCLUSION AND EXTENSIONS
8.1 CONCLUSION
8.2 EXTENSIONS
9 APPENDIX
9.1 GLOSSARY
List of Figures
Figure 1 Image Processing an Overview
Figure 2 Quad Tree
Figure 3 Noise Removal Through Morphology
Figure 4 Connecting Regions Through Morphology
Figure 5 Direction Codes
Figure 6 Chain Coded Figure
Figure 7 Subtraction of consequent frames
Figure 8 Background subtraction
Figure 9 Moving Objects
Figure 10 A Labeled Image
Figure 11 Bottom right corner
Figure 12 Bottom left corner
Figure 13 Upper right corner
Figure 14 Upper left corner
Figure 15 A vehicle entering into the frame
Figure 16 Left and Right overlap
Figure 17 Inter Frame Overlapping Region
Figure 18 Frame no 1
Figure 19 Frame no 2
Figure 20 Templates 1 and 2
Figure 21 Primary Level Use Case Diagram
Figure 22 Secondary Level Use Case Diagram 1
Figure 23 Secondary Level Use Case Diagram 2
Figure 24 Tertiary Level Use Case Diagram 1
Figure 25 Tertiary Level Use Case Diagram 2
Figure 26 Tertiary Level Use Case Diagram 3
Figure 27 Tertiary Level Use Case Diagram 4
Figure 28 Secondary Level Use Case Diagram 3
Figure 29 Tertiary Level Use Case Diagram 5
Figure 30 Tertiary Level Use Case Diagram 6
Figure 31 Integrated Use Case Diagram
Figure 32 Give_video_path
Figure 33 Select_operation
Figure 34 Set_background_frame activity
Figure 35 Set_sample_frame
Figure 36 Set_threshold_subtraction
Figure 37 Set_pattern_noise_removal
Figure 38 Set_gateway_boundary
Figure 39 Set_area_classification
Figure 40 Set_reduction_area_threshold
Figure 41 Set_template_increment
Figure 42 Set_horizon_limit
Figure 43 Set_noise_area
Figure 44 Tracking
Figure 45 Classification
Figure 46 Image Class Hierarchy
Figure 47 Image to object modeling
Figure 48 Object containment in object_list
Figure 49 Bw_image class
Figure 50 CBW_IMAGE class
Figure 51 CCIMAGE class
Figure 52 Colored_image class
Figure 53 I_pattern class
Figure 54 Image Class
Figure 55 Object class
Figure 56 Object_list class
Figure 57 Object_node class
Figure 58 Parameters class
Figure 59 Part_image class
Figure 60 Pattern Class
Figure 61 Self_tracking_image class
Figure 62 Make Subtracted Image Sequence Diagram
Figure 63 Classification Sequence Diagram
Figure 64 Tracking Sequence Diagram
Figure 65 Select Video Dialog
Figure 66 Select Operation Dialog
Figure 67 Set Parameters (Image Processing) Dialog
Figure 68 Set Parameters (Classification)
Figure 69 Set Parameters (Tracking)
1 Introduction to Traffic Surveillance
1.1 Purpose
1.2 Background
The information gathered is useful in the long run for compiling statistics on
the basis of which roads may be planned. This is particularly useful in the context
of metropolitan cities, where the space available for the construction of
new roads is limited.
1.4 Objective
To design and implement a software tool that assists the traffic controller.
This tool analyses video and raises alarms when an anomaly is
detected. The traffic controller's job is thus eased, as he has to react only to
the alarms rather than watch the entire video.
1.5 Applications
Government buildings at low-risk times – at high-risk or red-alert times most
surveillance at critical public places is manual; however, the software comes
to the rescue when the risk factor is low. Generally this is the time when
surveillance is necessary but the number of alarms generated is small, so an
automated solution is feasible.
Highways – the software may be used to identify vehicles that have broken
down and thus need help. Since the distance to be covered is very large, an
automated solution is needed.
1.6 Assumptions
The project basically aims at a prototype of the solution; thus the following
assumptions pertaining to the video are reasonable.
2 Feasibility Study
2.1 Purpose
This document gives an insight into feasibility issues; the issues relate to
technology, finance and resources.
2.2 References
1. http://www.indigosystems.com/CServices/irprimer.html
2. http://www.path.berkeley.edu/PATH/Publications/Media/FactSheet/TrafficSurveillance.pdf
3. “Multimedia: Making It Work”, Tay Vaughan, fifth edition, TMH.
4. “Computer Networks”, A.S. Tanenbaum, third edition, PHI.
5. http://electronics.howstuffworks.com/digital-camera2.htm
6. “Digital Image Processing”, Gonzalez and Woods, Addison-Wesley, first edition.
2.3 Input Approaches
The following approaches to acquiring traffic information were considered:
1. Loop detectors.
2. Infrared imaging.
3. Video.
3.1 Analog video.
3.2 Digital video.
2.3.2 Infrared Imaging
Feasibility
As our eyes are capable of seeing only a very narrow region of the
electromagnetic spectrum, we need special instruments to extend
our vision beyond the limitations of the unaided eye. As the energy
of light changes, so too does its interaction with matter. Materials
that are opaque at one wavelength may be transparent at another. A
familiar example of this phenomenon is the penetration of soft
tissue by X-rays: what is opaque to visible light becomes
transparent, revealing the bones within.
Feasibility of technology
Cost
The price of a single camera is about Rs 10,000.
2.3.3 Video
We may store the information in analog or digital form, leading to
analog or digital video.
Analog Video
Digital Video
Feasibility
In terms of cost, small digital cameras are cheaper than analog
ones. Also, the operation of a digital camera is easier, as the video
obtained is directly in a computer-readable format.
Conclusion
2.4 Processing Approaches
2.4.1 Classical Image Processing
It involves processing images in a conventional manner. Though
the technique is a time-tested one, understanding image processing
requires an insight into mathematics. Thus the learning time is
appreciable.
2.4.3 Conclusion
The second approach, though it looks risky, could be used, as it caters
to the need as well as the first approach does, and the learning time is
less.
2.5 Summary
Information will be supplied to the system in the form of digital camera clips.
A heuristic-based approach should be used wherever possible.
3 Elementary Concepts of Image Processing
3.1 Purpose
This document provides introductory concepts in image processing.
3.2 References
3.3 Overview
Figure 1 Image Processing an Overview
3.3.1 Low Level Image Processing
1. Noise Reduction – almost every image has some noise when acquired.
This may be due to known reasons, like a noisy carrier (the medium from
which the image is obtained), or due to unknown reasons. Tools like low-pass
filtering and morphological operations (explained later) might be used for this
purpose.
2. Contrast Improvement – the image obtained may not use the spectrum
of available colors evenly. This may lead to low-contrast images. Simple
multiplication by a constant or techniques involving histogram
equalization may do the job; a sketch of one such operation follows this list.
3. Image Restoration – such techniques are used when the reason behind
the poor quality of an image is known and can be modeled mathematically.
The procedure consists of reversing the process that caused the poor quality.
4. Image Enhancement – here we try to ‘enhance’ portions of interest in the
image. This may include point detection, line detection, edge detection
and enhancement. Tools used may be high-pass filtering, the Sobel
operator, etc.
3.3.2 Intermediate-Level Image Processing
1. Image Segmentation
2. Representation and description
3.3.3 High Level Image Processing
1. Template matching
2. Statistical approach
3. Syntactical approach
4. Neural Networks
1. Template Matching – this technique may be used if the region to be
matched is known to possess certain characteristics accurately. If we
are provided with accurate information about how the ‘real world
object’ looks, this could be saved; all regions found in the image are then
matched against this stored data. A typical application is the reading of
printed documents. Here the character set is predefined, and recognition
of characters may be done by simple comparison of the obtained
characters with this set.
2. Statistical Approach – here, each pattern is represented in terms of d
features or measurements and is viewed as a point in a d-dimensional
space. The aim is to choose those features that allow pattern vectors
belonging to different categories to occupy compact and disjoint
regions in a d-dimensional feature space. The effectiveness of the
representation space (feature set) is determined by how well patterns
from different classes can be separated. Classification of vehicles, for
example, might use the simple heuristic that very small moving
regions correspond to two-wheelers, very large ones to lorries and the
rest to cars.
3. Syntactic Approach – this approach may be used to identify complex
patterns. The patterns comprise simple sub-patterns. The elementary
sub-patterns are referred to as primitives, and all patterns consist of
these primitives in a hierarchical order. For example, a complex pattern
like a face consists of two eyes, two ears, a nose and a mouth. By
identifying these features in an image, together with their relative
placement, it is possible to say that a face has been detected. On a
larger scale, as in satellite images, one may classify patterns as
residential regions, mountains, plains, forests etc.
4. Neural Networks – this approach is modeled after the natural
interpretation and recognition system, the neural network found in the
brain. These systems consist of layers of neurons interconnected with
each other, such that each layer provides input to the next one in the
hierarchy. The first layer is given the raw input, that is, the image, and
the last layer gives the output. Neural systems are learning systems, and
the interconnections are based on training patterns given to the network.
This technique is usually applied only after all the above techniques
fail.
3.3.4 Some Common Tools
Basic Information
Image – An image is a two-dimensional data structure made up
of pixels. An image may be represented by the function f(x, y),
where x and y give the location of the pixel and f(x, y) gives
information about the color or intensity at that pixel.
Thresholding
Method – visit each pixel of the image. If the intensity lies within
the threshold limits, select that pixel (if a monochrome image is
desired, color the pixel black; for a color image, leave the
intensity information intact). If the pixel intensity does not lie
within the threshold limits, color that pixel white.
Mathematically –
F(x, y) = { color – if F(x, y) lies within the threshold limits
          { white – if F(x, y) does not lie within the threshold limits
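A minimal sketch of this rule in C++, assuming an 8-bit grayscale buffer and producing
a binary (black-and-white) image; the identifiers are illustrative and not taken from the
TMS classes:

#include <cstdint>

// Mark in-threshold pixels black (0, foreground) and all others white
// (255), following the thresholding rule stated above. 'in' and 'out' are
// assumed to be row-major 8-bit buffers of n pixels.
void threshold(const uint8_t *in, uint8_t *out, int n,
               uint8_t lo, uint8_t hi)
{
    for (int i = 0; i < n; i++)
        out[i] = (in[i] >= lo && in[i] <= hi) ? 0 : 255;
}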
Filtering
Region splitting and merging
Basic formulation –
(a) ∪ Ri = R (the union of all the regions gives the image back)
(b) Ri is a connected region.
(c) Ri ∩ Rj = Φ for all i ≠ j
(d) P(Ri) = TRUE for all i
(e) P(Ri ∪ Rj) = FALSE for all i ≠ j
Input – an image
Output – segmented image
Method –
1. Select a suitable value for s.
2. Split the image into disjoint quadrants of size ‘s’ if the
entire image does not form a region.
3. Merge any adjacent regions Ri and Rj for which P(Ri ∪ Rj)
= TRUE.
4. Set ‘s’ = ‘s’/4.
5. If ‘s’ is not smaller than a threshold, go to step 2 for each Ri;
else go to step 6.
6. Stop.
Example
R1    R2
R3    R41  R42
      R43  R44
(The fourth quadrant has been split further into R41–R44, as in a quad tree.)
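A minimal sketch of the split phase of this algorithm, using recursive quartering as in
the quad tree of Figure 2. The homogeneity predicate P (intensity range below a
tolerance), the buffer layout and all identifiers are assumptions for illustration; the
merge phase is omitted.

#include <cstdint>

// Homogeneity predicate P: a region is uniform if its intensity range is
// at most 'tol'. 'img' is a row-major 8-bit buffer of row stride 'stride'.
static bool uniform(const uint8_t *img, int stride,
                    int x, int y, int w, int h, int tol)
{
    int lo = 255, hi = 0;
    for (int j = y; j < y + h; j++)
        for (int i = x; i < x + w; i++) {
            int v = img[j * stride + i];
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
    return hi - lo <= tol;
}

// Split phase only: recursively quarter any non-uniform region until it
// is uniform or reaches 'min_size'; each accepted region is reported via
// 'emit'.
void split(const uint8_t *img, int stride, int x, int y, int w, int h,
           int tol, int min_size, void (*emit)(int, int, int, int))
{
    if (w <= 0 || h <= 0) return;
    if (w <= min_size || h <= min_size ||
        uniform(img, stride, x, y, w, h, tol)) {
        emit(x, y, w, h);
        return;
    }
    int w2 = w / 2, h2 = h / 2;
    split(img, stride, x,      y,      w2,     h2,     tol, min_size, emit);
    split(img, stride, x + w2, y,      w - w2, h2,     tol, min_size, emit);
    split(img, stride, x,      y + h2, w2,     h - h2, tol, min_size, emit);
    split(img, stride, x + w2, y + h2, w - w2, h - h2, tol, min_size, emit);
}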
Morphology
This tool is used for extracting image components that are useful in the
representation and description of region shape. The tool may also
be used for pre- or post-processing of images.
Noise Removal
Motivation – we may selectively ‘lose’ information from an
image by eroding it at the boundaries. If, after erosion, all the
information is not lost, reversing the erosion process gives back
the same information as was present before. However, if all the
information regarding a region is lost, that region will not be
recovered when the erosion is reversed.
Method – form a pattern. Now superimpose the pattern over
each pixel. If, for any black pixel p, any pixel within the
superimposition limit is white, erode pixel p, i.e. make pixel p
white. Now dilate the image by performing the operation
opposite to erosion: superimpose the pattern at each pixel of
the image again, and if, for any white pixel p, any pixel within
the superimposition limit is black, color pixel p black.
Connecting regions works the other way round,
i.e. first dilation and then erosion. During dilation all hole
information is lost and boundaries are flooded, as
illustrated. To compensate for this flooding at the boundaries we
erode the image. This process is illustrated in the diagram
provided next.
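A minimal sketch of binary erosion and dilation with a 3×3 square pattern, using the
convention adopted above (black = 0 is the foreground). The identifiers are illustrative;
a fragment of the actual bw_image erosion code appears in Section 7.

#include <cstdint>
#include <cstring>

// True if any pixel under a 3x3 pattern centred at (x, y) has value 'v'.
static bool any_neighbour(const uint8_t *img, int w, int x, int y, uint8_t v)
{
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            if (img[(y + dy) * w + (x + dx)] == v) return true;
    return false;
}

// Erosion: a black pixel is whitened if any pixel under the pattern is
// white, exactly as in the Method above. Border pixels are left untouched
// for brevity.
void erode3x3(const uint8_t *in, uint8_t *out, int w, int h)
{
    memcpy(out, in, (size_t)w * h);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
            if (in[y * w + x] == 0 && any_neighbour(in, w, x, y, 255))
                out[y * w + x] = 255;
}

// Dilation: a white pixel is blackened if any pixel under the pattern is
// black. Erosion followed by dilation (opening) removes small noise;
// dilation followed by erosion (closing) connects nearby regions.
void dilate3x3(const uint8_t *in, uint8_t *out, int w, int h)
{
    memcpy(out, in, (size_t)w * h);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
            if (in[y * w + x] == 255 && any_neighbour(in, w, x, y, 0))
                out[y * w + x] = 0;
}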
Chain codes
Figure 5 Direction Codes
We sample the image on a grid larger than pixel by pixel. The
size of the grid is limited by the amount of information loss acceptable.
To make the chain code independent of the starting point, we view it as
a circular chain and arrange it so that it forms the minimum integer
number. Further, we make it rotation independent by using the first
difference of the chain code rather than the chain code itself.
Perimeter – in the 8-connected approach we see that all even
direction values contribute one unit to the perimeter and all odd
values contribute √2 units.
Thus perimeter = even count + √2 × odd count.
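A minimal sketch of these two operations on an 8-directional chain code (directions
0–7 as in Figure 5); the vector-based interface is an assumption for illustration.

#include <cmath>
#include <vector>

// First difference of a circular chain code: the number of counterclockwise
// steps between consecutive directions. This makes the code rotation
// independent, as described above.
std::vector<int> first_difference(const std::vector<int> &code)
{
    std::vector<int> diff(code.size());
    for (size_t i = 0; i < code.size(); i++) {
        int next = code[(i + 1) % code.size()];
        diff[i] = (next - code[i] + 8) % 8;
    }
    return diff;
}

// Perimeter of a chain-coded boundary: even directions (axis moves)
// contribute 1 unit, odd directions (diagonal moves) contribute sqrt(2).
double perimeter(const std::vector<int> &code)
{
    int even = 0, odd = 0;
    for (int d : code)
        (d % 2 == 0 ? even : odd)++;
    return even + std::sqrt(2.0) * odd;
}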
4 Towards the solution
4.1 Purpose
This document aims at identifying issues, possible approaches and solutions.
These issues arise because the image-processing techniques given in Section 3
are generic, and we need to address application-specific issues.
4.2 Reference
1. “Digital Image Processing”, Gonzalez and Woods, Addison-Wesley, first edition.
4.3.1 Problem
The frame information about the background needs to be suppressed, while
that of the vehicle needs to be retained.
Figure 7 Subtraction of consequent frames
As can be seen, the major problem with this approach is that the vehicle
obtained is scattered. Though we can use operations like filtering, the time
taken is large because the scattering is comparable to noise in the image.
Thus it would be better if we could extract the complete information about
the vehicle in one go. This is possible if we subtract the
background instead of consecutive frames.
For this image the results are definitely better; in fact, the only other
processing required would be to smooth out the edges.
4.3.3 Solution
After the above discussion it is evident that background subtraction does
the trick, and it is hence used in the software.
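A minimal sketch of background subtraction with a fixed threshold, producing the
binary image discussed above. The buffer layout, threshold handling and names are
assumptions for illustration rather than the TMS implementation.

#include <cstdint>
#include <cstdlib>

// Pixels differing from the background frame by more than 'threshold' are
// marked black (0, vehicle); the rest are white (255, background).
// Grayscale buffers of n pixels are assumed.
void subtract_background(const uint8_t *frame, const uint8_t *background,
                         uint8_t *out, int n, int threshold)
{
    for (int i = 0; i < n; i++)
        out[i] = (std::abs((int)frame[i] - (int)background[i]) > threshold)
                     ? 0 : 255;
}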
4.4.1 Problem
This problem basically arises from the solution to the first issue. Because
we depend upon the background frame for processing, a change in the
background frame implies failure of the system. This is a serious
limitation, as changes in background are inevitable.
Updating of background
4.4.3 Solution
4.5.1 Problem
The image obtained after background subtraction still contains noise, and
the edges are not prominent. We need a strategy to solve these
problems.
4.5.2 Possible Approaches
Filtering of image.
We may use image filtering techniques, such that low-pass filters
eliminate noise and edges are enhanced by high-pass filters.
Here, though noise is eliminated, the edges obtained are not sharp. Also,
filtering may make the thresholding operation difficult, as the number of
colors in the image might increase.
4.5.3 Solution
The issue is resolved by using the second approach (morphological
erosion and dilation, as described in Section 3) for the removal of noise
and smoothing of the image.
4.6.1 Problem
The data that we have as of now is in the form of an image; this data must
be converted to numbers representing the coordinates of the vehicle. This is
necessary because image data is large and difficult to process, whereas
numbers are much easier to handle.
Figure 9 Moving Objects
This approach is optimistic and tries to avoid the problem. Here we
question the need for numerical data instead of image data. For
the problem of comparison we can simply compare the subtracted frames.
This can be illustrated by the following diagram.
This may be explained by the fact that at a boundary the lines are not
continuous, i.e. any boundary is bound to bend. However, before finding
the boundary points it is necessary to identify or tag the different patterns
in an image. The next algorithm does exactly this.
Algorithm
1. Initialize the counter to 0.
2. Row-wise processing – scan the image row by row, i.e. traverse the
entire image (left to right, top to bottom) such that each complete row
is processed before the next one begins.
At the end of this step we get the pixels arranged in a row-wise pattern;
the next step is, intuitively, processing with respect to columns.
b) Let label(j) denote the label of the jth pixel. Further, assign(x, y)
assigns label ‘y’ to the set pointed to by ‘x’; find_label(x) finds the
representative element for x, returning 0 if no such element exists;
and makeset(y) makes a set with representative element y.
c) Now start processing the image as indicated. For each pixel do the
following:
I. Check if the current pixel is not zero, i.e. it was black in the
input image; if yes then proceed, else go to the next pixel.
II. If the pixel being considered is the first of its column, call
find_label(label(j)); if the result is zero, call
makeset(counter, label(j)), i.e. make a new set with
representative element label(j) and index counter. If
find_label(label(j)) returns a non-zero value, i.e. j already
belongs to some set, do nothing.
III. If the pixel being considered is not the first pixel of its
column, let j-1 denote the pixel above it. If that pixel
has a label, call assign(find_label(label(j-1)), label(j)).
This means that we make both j-1 and j belong to the same
set. If the j-1 pixel is not black, do the same as in step II.
d) Process the column array bottom up now, as it was processed top
down in step c.
A word about the algorithm: it may look confusing, but its major advantage
is that it correctly labels regions of irregular, bending shapes.
Input image
0 0 0 0 0 0 0 0
0 1 0 0 1 0 1 1
0 1 0 0 1 0 1 0
0 1 1 1 1 0 1 1
Sets after processing the later columns:
5th col. {1,2,4,5,7}
6th col. {1,2,4,5,7}
7th col. {1,2,4,5,7} {3,6,8}
8th col. {1,2,4,5,7} {3,6,8}
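The assign/find_label/makeset bookkeeping used above is a disjoint-set (union-find)
structure. A minimal array-based sketch, with names chosen to mirror the algorithm's
terminology (illustrative, not the TMS code):

#include <vector>

// Array-based disjoint sets over labels 1..n; parent[x] == 0 means label x
// is not yet in any set, mirroring find_label returning 0 above.
struct DisjointSets {
    std::vector<int> parent;
    explicit DisjointSets(int n) : parent(n + 1, 0) {}

    // makeset(y): create a set whose representative element is y.
    void makeset(int y) { parent[y] = y; }

    // find_label(x): representative of x's set, or 0 if x is in no set.
    int find_label(int x) {
        if (parent[x] == 0) return 0;
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];   // path halving
            x = parent[x];
        }
        return x;
    }

    // assign(x, y): put label y into the set represented by x, merging
    // y's existing set into it if y already has one.
    void assign(int x, int y) {
        int ry = find_label(y);
        parent[ry == 0 ? y : ry] = x;
    }
};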
Now that the patterns have been labeled, we need to extract information
about the boundary points. The algorithm presented next does this.
For example, in this image we need to compute the coordinates of the red
points.
For this purpose the prototype uses the following algorithm.
1. For each pixel do the following steps.
2. Check if the pixel is black; if yes go to step 3, else process the next pixel.
3. Compute the indices of the neighbouring pixels into variables as shown
below; P5 is the pixel under consideration.
P1 P2 P3
P4 P5 P6
P7 P8 P9
• Now check if P5 is a corner by checking whether any of the following
conditions holds (each condition tests, for one corner type, which
neighbours share P5's label and which do not).
4. If any of the above tests is passed, the pixel is recorded as a corner.
5. If P5 is the last pixel, stop; else go to step 2 with P5 as the next pixel.
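The corner conditions themselves were not reproduced in this section, but the code
fragment in Section 7.1.2 suggests they compare the labels of P5's neighbours. A
hedged sketch of one such test (bottom-right corner), reconstructed from that fragment;
it is an assumption rather than the exact TMS condition set.

// One corner test: 'label' holds the pattern label of each pixel, and
// k2/k4/k5/k8 index the pixels above, to the left of, at, and below the
// pixel under consideration (P2/P4/P5/P8 above). A bottom-right corner
// has the same pattern above and to the left, but not below.
// NOTE: reconstructed illustration; the TMS code uses one such condition
// per corner type.
bool bottom_right_corner(const int *label, int k2, int k4, int k5, int k8)
{
    return label[k5] == label[k2] &&
           label[k5] == label[k4] &&
           label[k5] != label[k8];
}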
4.6.3 Solution
4.7.1 Problem
After identification of the pattern we must know to what category
of vehicle the pattern belongs. This problem is aggravated when we
consider the scenario of a vehicle entering the frame. Thus
the problem is twofold: when to classify a vehicle and how to classify it.
However, there is one glitch. When a lorry is entering the frame its
area is small, and we cannot be sure about the number of frames after
which it is bound to be completely in the frame. Thus the solution
proposed does not tell when to start classification.
Figure 15 A vehicle entering into the frame.
We see that the area of the vehicle first increases and then decreases;
thus we can conclude that area may safely be used for classification, as
long as this heuristic is first used to make sure that the vehicle is properly
in the frame.
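A minimal sketch of this two-part heuristic: wait until the pattern's area has started to
decrease (the vehicle is then fully in the frame), and only then classify by area, as in the
statistical approach of Section 3. The names correspond to the classification parameters
of Section 5, but their handling here is an illustrative assumption.

enum VehicleType { TWO_WHEELER, CAR, LORRY };

// Classify by pixel area once the vehicle is fully inside the gateway.
// The area limits play the role of the set_area_typeOfVehicle parameters;
// the calling code must supply calibrated values.
VehicleType classify_by_area(int area, int two_wheeler_max, int car_max)
{
    if (area <= two_wheeler_max) return TWO_WHEELER;
    if (area <= car_max)         return CAR;
    return LORRY;
}

// Decide when to classify: the area grows while the vehicle enters and
// must fall by at least 'reduction_threshold' (the
// set_reduction_area_threshold parameter) before classification is trusted.
bool ready_to_classify(int peak_area, int current_area, int reduction_threshold)
{
    return peak_area - current_area >= reduction_threshold;
}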
4.7.3 Solution
The issue has been resolved using the second approach.
4.8.1 Problem
Vehicles obtained from different frames have to be correlated with each
other; that is, a vehicle has to be tracked.
Processing step – for each item in the above list do the following:
Figure 16 Left and Right overlap
b) Find the object in the template above that has a left, right or both
overlaps with the object being considered, as shown above.
c) However, the object pattern found now may be smaller than the actual
size of the object. This is because, if the object has moved and we scan
the area where it used to be, we get only a partial image of the object.
As shown in the figure below, by the above steps we only get the red
region, while the area of interest is the red + green area. Thus we
perform step d).
Merits of this approach
1. The whole image need not be scanned during tracking.
2. The vehicle being scanned automatically ‘finds’ its occurrence in the
next frame, and no matching is required.
Demerits
1. The object must overlap with its previous position.
2. Noise and other disturbances must still be removed from the
template.
A sketch of the underlying overlap test follows.
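The sketch below shows the overlap test and the template growth used when a vehicle
is not found where it previously was. The axis-aligned rectangle representation and the
grow step are illustrative assumptions based on the description above and on the
set_increment_template parameter.

#include <algorithm>

// Axis-aligned bounding box of a tracked object or template region.
struct Rect { int left, top, right, bottom; };

// Do two regions intersect? Tracking relies on a vehicle overlapping its
// own position in the previous frame.
bool overlaps(const Rect &a, const Rect &b)
{
    return a.left <= b.right && b.left <= a.right &&
           a.top <= b.bottom && b.top <= a.bottom;
}

// Grow the template by 'increment' pixels on every side, clamped to the
// frame (compare the boundary checks in Section 7.1.3): if the vehicle is
// not found in its previous region, the search region is enlarged.
Rect grow(const Rect &r, int increment, int frame_w, int frame_h)
{
    return { std::max(r.left - increment, 0),
             std::max(r.top - increment, 0),
             std::min(r.right + increment, frame_w - 1),
             std::min(r.bottom + increment, frame_h - 1) };
}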
Some experimental images
Figure 18 Frame no 1
Figure 19 Frame no 2
These pictures correspond to the same car and show its position after one
frame time, i.e. 1/10th of a second. The green region in the second frame
corresponds to the position of the car in frame 1.
Template 1 (in green) depicts the car taken from frame 2, and template 2 (in
red) depicts the car after the size of the template increases.
4.8.3 Solution
The issue of tracking has been successfully resolved.
4.9 Summary
Many problems and their tentative solutions have been identified. The
problems and solutions suggested are by no means exhaustive, but they
certainly carry the essence of the software.
5 Requirement analysis
5.1 Introduction
This document gives the requirement analysis for the project. Use case
diagrams have been used for requirements modeling.
5.2 References
1. dotnetcoders.com
2. UML: A Beginner’s Guide, Jason T Roff
5.3 Terminology
5.3.1 Actors
Video Provider
He is responsible for setting the path of the video, i.e. providing the
location of the video to TMS.
TMS Expert
The TMS Expert configures a set of parameters that are used further for
the analysis of video. His job is complex, as the parameters must be set
with the camera position rather than the particular video in mind.
Alarm Analyser
The Alarm Analyser analyses the information presented to him when an
alarm is given. He is the basic targeted user of the system.
5.3.2 Use Cases
set_video
Models the situation when the Video Provider has to set the video position
in TMS.
This use case is broken down into the following use cases:
give_video_path
Models the case when the Video Provider actually sets the path of
the video information.
select_operation
After the path to the video is set, the video must either be analysed
or used for setting up a camera position. This use case models the
selection operation.
tune_parameters
Models the situation when the TMS Expert has to configure or fine-tune
parameters according to a particular camera position.
This use case can be broken down into the following use cases:
• tune_image_processing_parameters
Models the tuning of operations required for basic image
processing, or intra-frame processing.
The use case is further broken down into the following use cases
on the basis of the study in Section 4:
set_background_frame
Models the setting up of the background frame.
set_threshold_subtraction
Models the setting up of the threshold required for
image subtraction.
set_pattern_noise_removal
Sets the patterns of erosion and dilation required for noise
removal and smoothing of the image.
• tune_classification_parameters
Models the tuning of parameters required for the classification of
new vehicles.
This use case can be further broken down into the following use
cases on the basis of Section 4:
set_gateway_boundary
Models the setting up of the gateway coordinates. This
is the region where complete processing is done.
set_area_noise
Models the setting of the noise area, that is, the area
likely to be rejected as noise.
set_area_typeOfVehicle
Models the setting up of the area limits for different vehicle
types, typically car, two-wheeler and lorry.
set_reduction_area_threshold
Models the setting of the threshold area by which the
area in pixels should decrease before it can safely be
concluded that the vehicle is completely in the frame.
• tune_real_time_parameters
Models the setting of some real-world parameters and their relation
with images.
This use case can be broken down into the following use cases:
set_pixel/distance_parameters
Models the setting of parameters that define the
relationship between pixels and actual distance.
set_safe_speed_limit
Models the setting of the region-specific safe speed limit.
The TMS Expert must provide both maximum and
minimum limits. (A sketch of how these parameters
combine into a speed estimate follows the
tune_tracking_parameters use cases below.)
• tune_tracking_parameters
Models the parameters required for the tracking of vehicles.
This use case can be further split into the following use cases:
set_increment_template
Models the setting of the limit by which the template
should increase if the vehicle is not found in the region
where it was previously.
set_horizon_limit
Sets the limit beyond which vehicles are not tracked, as
they may become too small for analysis.
set_noise_area
Sets the noise area, that is, the area that must never be
tracked.
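A minimal sketch of how the pixel/distance parameters and the camera frame rate
combine into a speed estimate that can be checked against the safe speed limits; the
functions and their arguments are illustrative assumptions.

// Estimate speed from per-frame displacement: pixels moved between two
// consecutive frames, converted via the pixel-to-distance ratio and the
// frame rate, then to km/h (factor 3.6).
double speed_kmph(double pixels_moved, double metres_per_pixel,
                  double frames_per_second)
{
    return pixels_moved * metres_per_pixel * frames_per_second * 3.6;
}

// An anomaly alarm would be raised when the estimate leaves the band set
// through set_safe_speed_limit.
bool speed_anomaly(double v, double min_limit, double max_limit)
{
    return v < min_limit || v > max_limit;
}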
analyse_video
Models the situation when the Alarm Analyser analyses a particular video.
• manual_analysis
This use case models the situation where the Alarm Analyser does
a manual analysis of the clip. The condition usually arises because
automatic analysis fails.
This use case may be broken into the following use case:
view_clip
Models the viewing of the clip or video by the Alarm Analyser.
• auto_analysis
This use case models the situation where automatic analysis is
initiated by the Alarm Analyser.
This use case may be broken down into the following use cases:
select_camera_position
Here a camera position is selected from the many camera
positions available.
start_auto_analysis
This use case models the actual starting of automated analysis
after a camera position has been selected.
give_alarm
This use case is included by the start_auto_analysis use
case: if auto analysis finds any anomaly, an alarm is
generated.
5.4 Use Case Diagrams
5.4.1 Primary Level Use Case diagram
Goal in context : Set the video path and operation to be performed
on video.
Scope : TMS
Level : Primary
Failure end condition : Video data source is not in the proper format and
cannot be decompressed by the system.
Note : NA
Scope : TMS
Level : Primary
Pre condition : Successful execution of get_video use case
Note : NA
Scope : TMS
Level : Primary
Post condition : Video analysed for anomaly
Note : none
Secondary actor : none
Level : Secondary
Note : NA
Level : Secondary
Trigger : Video Provider wants to use the particular video.
5.4.3 Secondary Level Use Case Diagram 2
Note : NA
Level : Secondary
Pre condition : All frames from video data source have been
extracted.
Failure end condition : The values of parameters are invalid and are hence
not stored.
Note : NA
Level : Secondary
Trigger : Image_processing_parameters have been
successfully created.
Failure end condition : The values of parameters are invalid and are hence
not stored.
Note : NA
Goal in context : Set the pixel to distance ratio and safe speed limit.
Level : Secondary
Main success scenario : Real time parameters are saved in the system.
Failure end condition : The values of parameters are invalid and are hence
not stored.
Note : NA
Level : Secondary
Main success scenario : All classified vehicles are tracked till the horizon
limit and alarms are given for anomalies
(particularly stopped or slow vehicles).
false negative alarms (i.e. alarm not generated in the
event of an anomaly).
5.4.4 Tertiary Level Use Case Diagram 1
Note : NA
Level : Tertiary
Pre condition : TMS Expert has seen all frames from the video
and knows the path of the background frame.
Main success scenario : Background path provided points to an image file
that has no vehicular traffic.
Failure end condition : Background path does not point to a valid image file.
Note : NA
Level : Tertiary
subtraction is set.
Note : NA
Level : Tertiary
Trigger : Threshold for background subtraction is set.
Main success scenario : Image obtained after erosion and dilation with the
patterns set is free of noise.
Use case name : set_gateway_boundary
Note : NA
Level : Tertiary
UCD : Tertiary Level Use Case Diagram 2
Note : NA
Level : Tertiary
Parent use case : tune_classification_parameters
Note : NA
Level : Tertiary
Parent use case : tune_classification_parameters
Note : NA
Goal in context : Set the limit by which the area (of the vehicle)
must reduce before it can be classified. This works
because when the vehicle has fully entered the
frame, its area will start to decrease, and we can
classify the vehicle.
Level : Tertiary
Main success scenario : All vehicles that are fully present in the gateway
region are classified.
Failure end condition : Area of the vehicle does not change by the limit
set in the gateway region.
5.4.6 Tertiary Level Use Case Diagram 3
Note : NA
Level : Tertiary
Main success scenario : none
Note : NA
Level : Tertiary
5.4.7 Tertiary Level Use Case Diagram 4
Note : NA
Goal in context : Set the increment in the size of the template. If the
increment is too small or too large, a lot of redundant
processing needs to be done, and the track of the
object may be lost.
Level : Tertiary
Note : NA
Goal in context : Set the horizon limit such that tracking stops when
a vehicle crosses this limit. This parameter avoids
tracking vehicles that have a very small pixel area
and whose proper capture by the camera is
uncertain.
Level : Tertiary
Main success scenario : All vehicles with a large pixel area are tracked.
Use case number : Twenty-one
Note : NA
Goal in context : Set the noise area, such that if the vehicle’s pixel
area becomes too small, even before it reaches the
horizon, tracking stops. This parameter also avoids
the bad tracking condition where a vehicle is
matched with a noise pattern.
Level : Tertiary
Failure end condition : Noise is also tracked; the noise limit set corresponds
to only the particular video and not the camera
position.
image from the previous step are sent to the tracker
object.
5. Vehicles successfully tracked in the whole image
are highlighted, and this image is shown to the
TMS Expert.
6. The TMS Expert can now choose to process the
next frame or stop processing.
7. In case processing is stopped, the noise limit is
saved.
5.4.8 Secondary Level Use Case Diagram 3
Note : NA
Scope : Alarm_analysis
Level : Secondary
Trigger : Alarm Analyser chooses to manually analyse the
video, or the auto_analysis use case fails.
Note : NA
Scope : Alarm_analysis
Level : Secondary
Main success scenario : All anomalies are identified and an alarm is raised
for each.
Failure end condition : Loss of track of many vehicles and many false
positive and negative alarms, leading to the
necessity of manual analysis.
Description : 1. Alarm Analyser provides the set of parameters
to be used for analysis.
2. TMS analyses video and gives alarms.
Note : NA
Scope : Alarm_analysis
Level : Tertiary
Post condition : Viewing of video.
Note : NA
Primary actor : Alarm Analyser
Goal in context : Identify the camera position that best suits the
given video.
Level : Tertiary
Failure end condition : Camera position set does not correspond to video.
Level : Tertiary
Main success scenario : All anomalies present within the scope of the video
are identified.
Note : NA
Level : Tertiary
Post condition : Alarm Analyser is presented with information
about the anomaly.
5.4.11 Integrated Use Case Diagram
6 Design Document
6.1 Purpose
6.2 Reference
1. dotnetcoders.com
2. UML: A Beginner’s Guide, Jason T Roff
6.3 Activity Diagrams
6.3.1 Activity Diagram for give_video_path
Figure 32 Give_video_path
6.3.2 Activity Diagram for select_operation
Figure 33 Select_operation
6.3.3 Activity Diagram for set_background_frame
Figure 34 Set_background_frame activity
6.3.4 Activity Diagram for set_sample_frame
Figure 35 Set_sample_frame
6.3.5 Activity Diagram for set_threshold_subtraction
Figure 36 Set_threshold_subtraction
6.3.6 Activity Diagram for set_pattern_noise_removal
Figure 37 Set_pattern_noise_removal
6.3.7 Activity Diagram for set_gateway_boundary
Figure 38 Set_gateway_boundary
6.3.8 Activity Diagram for set_area_classification
Figure 39 Set_area_classification
6.3.9 Activity Diagram for set_reduction_area_threshold
Figure 40 Set_reduction_area_threshold
6.3.10 Activity Diagram for set_template_increment
Figure 41 Set_template_increment
6.3.11 Activity Diagram for set_horizon_limit
Figure 42 Set_horizon_limit
6.3.12 Activity Diagram for set_noise_area
Figure 43 Set_noise_area
6.3.13 Activity diagram for tracking
Figure 44 Tracking
6.3.14 Activity Diagram for classification
Figure 45 Classification
6.4 Class/Object Diagrams
6.4.1 Image classes hierarchy
6.4.2 Image to object modelling
6.4.3 Object containment in object_list
6.4.4 Bw_image class
6.4.5 CBW_IMAGE class
6.4.6 CCIMAGE class
6.4.7 Colored_image class
6.4.8 I_pattern class
6.4.9 Image Class
6.4.10 Object class
6.4.11 Object_list class
6.4.12 Object_node class
6.4.13 Parameters class
6.4.14 Part_image class
6.4.15 Pattern Class
6.4.16 Self_tracking_image class
6.5 Sequence Diagrams
6.5.1 Make Subtracted Image
6.5.2 Classification Sequence Diagram
6.5.3 Tracking Sequence Diagram
6.6 User Interface Dialogs
6.6.1 Select Video Dialog
6.6.2 Select Operation Dialog
6.6.3 Set Image Processing Parameters Dialog
6.6.4 Set Classification Parameters Dialog
6.6.5 Set Tracking Parameters Dialog
7 Implementation
7.1 Code Snippets
7.1.1 Erosion
void bw_image::erode(pattern * p)
{
    if(color == NULL) return;   // nothing to erode
    int i, j, k;
    ...
    // kk_i1 indexes the pattern cell (ii, jj) when the pattern is
    // superimposed with its centre on pixel k of this image
    kk_i1 = k - width*(p->height/2) - (p->width/2) + ii*width + jj;
    ...
}
// make the labels array
label = makelabels();
for(i = 0; i < NOPATTERNS; i++)
{
    cur_patt[i].free = 0;
    cur_patt[i].area = 0;
}
k = -1;
// process the entire label array and the image; k1..k9 index the 3x3
// neighbourhood around pixel k (= k5), as in the corner algorithm of
// Section 4.6
k1 = k - width - 1;
k2 = k - width;
k3 = k - width + 1;
k4 = k - 1;
k5 = k;
k6 = k + 1;
k7 = k + width - 1;
k8 = k + width;
k9 = k + width + 1;
...
}
...
// corner tests: a pixel is a corner of its pattern when its neighbours
// share its label in an L-shaped configuration, e.g.
|| // bottom right corner
(label[k5] == label[k2] && label[k5] == label[k4]
 && label[k5] != label[k8]))
{
    // record this corner point for the pattern
    cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
    continue;
}
// upper left corner
(label[k5] == label[k6] && label[k5] == label[k8]
 && label[k5] != label[k1] && label[k5] != label[k2]
 && label[k5] != label[k4])
|| // upper right corner
(label[k5] == label[k4] && label[k5] == label[k8]
 && label[k5] != label[k2] && label[k5] != label[k3]
 && label[k5] != label[k6])
|| // bottom left corner
(label[k5] == label[k2] && label[k5] == label[k6]
 && label[k5] != label[k4] && label[k5] != label[k7]
 && label[k5] != label[k8])
}
}
if(label != NULL) free(label);
// of all the patterns found, make into objects those having
// area > noise_area
for(i = 0; i < NOPATTERNS; i++)
{
    if(cur_patt[i].area > noise_area)
    {
        cur_object->add(cur_patt[i], height, width, left, top);
    }
}
return cur_object;
}
// check boundaries: clamp the enlarged template to the frame
bottom = (bottom >= parent->height-1) ? parent->height-1 : bottom;
right = (right >= parent->width-1) ? parent->width-1 : right;
top = (top < 0) ? 0 : top;
left = (left < 0) ? 0 : left;
}
7.2 Deliverables
8 Conclusion and Extensions
8.1 Conclusion
The prototype developed successfully tracks vehicles within acceptable limits of error.
The vehicles are classified into three categories: lorry, car and two-wheeler. An alarm
is raised in situations of anomalous behaviour.
8.2 Extensions
9 Appendix
9.1 Glossary
Area of pattern – The number of connected pixels that collectively form the
pattern.
Background Frame – The frame that does not have any vehicle on the road.
Frame Rate – The number of frames taken per second. This parameter is a
characteristic of the camera.
Gateway – A portion of the image. This region is characterised by the fact that all
vehicles pass through this portion before coming into the main frame. The region is
chosen such that almost all vehicles are fully in this region for 3–5 consecutive
frames.
Gateway Boundary – The limits left, right, top, bottom that define the gateway
region.
Horizon Limit – A limit from the top of the frame beyond which we can start to drop
references to the vehicles.
Part_image – Defines a portion of the image. Usually used to define the gateway
region or the specific region where the presence of a vehicle is predicted.
Labeled Image – A black and white image with each continuous region marked
with a different label.
Object – Abstraction of vehicle in TMS. Some major attributes are area, type of
vehicle and current location information.
Overlap – The phenomenon of intersection of two regions corresponding to the same
object but in different (often consecutive) frames.
Parameters – A class used for defining all the constants needed during processing.
Pixel – Unit picture element that is uniquely visible and controllable on the
screen.
Sample Frame – The frame on the basis of which the image processing parameters
are set.
Traffic Controller – A person responsible for manual analysis of the video and for
taking appropriate action in the event of an anomaly.
Traffic Surveillance – The process of keeping vigilance on roads, so as to identify
any anomalous behaviour.
Video Provider – The person responsible for shooting the video clip and the
consequent extraction of frames from the clip.