
Traffic Monitoring System (TMS)

Submitted for the partial fulfillment of the award of the


Degree of B.Tech
(Computer Science & Engineering)

BY:
Rakhi Gupta ( 0009510035)

Under the guidance of


Prof. Parmanand Astya
(HOD, CS Department)

DEPARTMENT OF CS

Mahatma Gandhi Mission's


College of Engineering & Technology
Sector – 62, NOIDA
CERTIFICATE

This is to certify that the project titled “Traffic Monitoring System (TMS)” has been
completed successfully in MGM's College of Engineering & Technology by:

Rakhi Gupta (BE CP, Roll No 0009510035)

GUIDE HOD CS

(Mr. Sanjeev Pippal) (Mr. Sanjeev Pippal)

Acknowledgements

“From the universe it comes and to it, it must disseminate”

Being the only member of my team was not easy and I take this opportunity to thank my
teachers and friends who helped me throughout the project.

First and foremost, I would like to thank my project guide, Mr. Sanjeev Pippal,
H.O.D. (CS/IT Dept., MGMCOET), for his invaluable advice and time during the
development of the project.

I would also like to thank Mr. Abhishek Garg (Lecturer, CS & IT Department), Mrs.
Archana Saar (Lecturer, CS & IT Department), Mr. Nitin Bajpai (Lecturer, CS & IT
Department), Ms. Reena Gupta (Lecturer, CS & IT Department), Mr. Md. Haider (HOD,
Physics Department), and the other faculty members for their constant support during
the development of the project. I am also thankful to Mr. Neeraj Kaushik, our Lab
In-charge, for his help in the installation of many software packages.

I express gratitude towards my friends Akshat Gupta and Shreyansh Jain for helping me
with video clips, Mayur Hemani for helping in the development of algorithms, and Amit
Khanna, Bhawana Bist, Debasish Das, Neha Puri, Prabhmeet Kaur, Shalini Singhi and
Sunder Singh for always being there. Last but not least, I express gratitude towards
my friend Anuj Gupta for providing me with constant support and motivation.

Abstract

Traffic surveillance is an area of active research, and many products catering to this
need are being developed. A solution to the exact problem of tracking vehicles in spite
of weather changes and other real-time complications, however, still remains a challenge.
This project is a novel attempt to analyse the problem, provide an algorithmic insight
towards a possible solution, and implement the proposed solution. Analysis of the problem
involves an in-depth investigation of various issues, mostly pertaining to image
processing operations. Alternate approaches to these issues are then sought and
developed. These approaches may be based on classical image processing or on intuitive
techniques. The classical image processing techniques have the advantage of being
time-tested strategies; however, they suffer from the drawback of large learning curves.
Intuitive algorithms need rigorous testing but provide tailor-made solutions to the
problem. The approach chosen considers performance issues and these trade-offs. After
the major problems and solutions are identified, implementation issues are addressed;
these pertain to the choice of programming language as well as the architecture of the
solution.

Table of Contents
1 INTRODUCTION TO TRAFFIC SURVEILLANCE ................................................................ 10
1.1 PURPOSE ................................................................................................................................... 10
1.2 BACKGROUND ........................................................................................................................... 10
1.3 THE PROBLEM ........................................................................................................................... 10
1.4 OBJECTIVE ................................................................................................................................ 11
1.5 APPLICATIONS ........................................................................................................................... 11
1.6 ASSUMPTIONS ........................................................................................................................... 11
2 FEASIBILITY STUDY ............................................................................................................... 12
2.1 PURPOSE ................................................................................................................................... 12
2.2 REFERENCES ............................................................................................................................. 12
2.3 INPUT APPROACHES ................................................................................................................... 12
2.3.1 Loop Detectors ................................................................................................................ 12
2.3.2 Infrared Imaging.............................................................................................................. 13
2.3.3 Video ............................................................................................................................... 14
2.4 PROCESSING APPROACHES ......................................................................................................... 15
2.4.1 Classical Image Processing ............................................................................................. 16
2.4.2 Use of certain heuristics................................................................................................... 16
2.4.3 Conclusion ...................................................................................................................... 16
2.5 SUMMARY ................................................................................................................................. 16
3 ELEMENTARY CONCEPTS OF IMAGE PROCESSING ....................................................... 16
3.1 PURPOSE ................................................................................................................................... 16
3.2 REFERENCES ............................................................................................................................. 16
3.3 OVERVIEW ................................................................................................................................ 16
3.3.1 Low Level Image Processing ............................................................................................ 17
3.3.2 Intermediate-Level Image Processing............................................................................... 18
3.3.3 High Level Image Processing ........................................................................................... 18
3.3.4 Some Common Tools........................................................................................................ 19
4 TOWARDS THE SOLUTION .................................................................................................... 28
4.1 PURPOSE ................................................................................................................................... 28
4.2 REFERENCE ............................................................................................................................... 28
4.3 ISSUE #1 (EXTRACTION OF VEHICLE INFORMATION) ................................................................... 28
4.3.1 Problem........................................................................................................................... 28
4.3.2 Possible Approaches ........................................................................................................ 28
4.3.3 Solution ........................................................................................................................... 30
4.4 ISSUE #2 (CHANGES IN BACKGROUND) ....................................................................................... 30
4.4.1 Problem........................................................................................................................... 30
4.4.2 Possible Approaches ........................................................................................................ 30
4.4.3 Solution ........................................................................................................................... 30
4.5 ISSUE #3 (REMOVAL OF NOISE AND SMOOTHING OF IMAGE) ......................................................... 30
4.5.1 Problem........................................................................................................................... 30
4.5.2 Possible Approaches ........................................................................................................ 31
4.5.3 Solution ........................................................................................................................... 31
4.6 ISSUE #4 (CONVERSION FROM IMAGE DATA TO COORDINATES) .................................................... 31
4.6.1 Problem........................................................................................................................... 31
4.6.2 Possible Approaches ........................................................................................................ 31
4.6.3 Solution ........................................................................................................................... 37
4.7 ISSUE #5 (CLASSIFICATION OF PATTERNS) ................................................................................... 37
4.7.1 Problem........................................................................................................................... 37
4.7.2 Possible Approaches ........................................................................................................ 37

4.7.3 Solution ........................................................................................................................... 38
4.8 ISSUE #6 (INTERRELATING OF VEHICLES IN CONSEQUENT FRAMES) .............................................. 38
4.8.1 Problem........................................................................................................................... 38
4.8.2 Possible Approaches ........................................................................................................ 38
4.8.3 Solution ........................................................................................................................... 40
4.9 SUMMARY ................................................................................................................................. 41
5 REQUIREMENT ANALYSIS .................................................................................................... 42
5.1 INTRODUCTION .......................................................................................................................... 42
5.2 REFERENCES ............................................................................................................................. 42
5.3 TERMINOLOGY .......................................................................................................................... 42
5.3.1 Actors .............................................................................................................................. 42
5.3.2 Use Cases ........................................................................................................................ 42
5.4 USE CASE DIAGRAMS ................................................................................................................ 46
5.4.1 Primary Level Use Case diagram ..................................................................................... 46
5.4.2 Secondary Level Use Case Diagram 1.............................................................................. 49
5.4.3 Secondary Level Use Case Diagram 2.............................................................................. 52
5.4.4 Tertiary Level Use Case Diagram 1 ................................................................................. 57
5.4.5 Tertiary Level Use Case Diagram 2 ................................................................................. 60
5.4.6 Tertiary Level Use Case Diagram 3 ................................................................................. 65
5.4.7 Tertiary Level Use Case Diagram 4 ................................................................................. 67
5.4.8 Secondary Level Use Case Diagram 3.............................................................................. 72
5.4.9 Tertiary Level Use Case Diagram 5 ................................................................................. 74
5.4.10 Tertiary Level Use Case Diagram 6 ................................................................................. 75
5.4.11 Integrated Use Case Diagram .......................................................................................... 79
6 DESIGN DOCUMENT ............................................................................................................... 80
6.1 PURPOSE ................................................................................................................................... 80
6.2 REFERENCE ............................................................................................................................... 80
6.3 ACTIVITY DIAGRAMS................................................................................................................. 80
6.3.1 Activity Diagram for give_video_path .............................................................................. 80
6.3.2 Activity Diagram for select_operation .............................................................................. 81
6.3.3 Activity Diagram for set_background_frame .................................................................... 81
6.3.4 Activity Diagram for set_sample_frame............................................................................ 82
6.3.5 Activity Diagram for set_threshold_subtraction................................................................ 82
6.3.6 Activity Diagram for set_pattern_noise_removal .............................................................. 83
6.3.7 Activity Diagram for set_gateway_boundary .................................................................... 83
6.3.8 Activity Diagram for set_area_classification .................................................................... 84
6.3.9 Activity Diagram for set_reduction_area_threshold.......................................................... 84
6.3.10 Activity Diagram for set_template_increment ................................................................... 85
6.3.11 Activity Diagram for set_horizon_limit ............................................................................. 85
6.3.12 Activity Diagram for set_noise_area ................................................................................ 86
6.3.13 Activity diagram for tracking ........................................................................................... 87
6.3.14 Activity Diagram for classification ................................................................................... 88
6.4 CLASS/OBJECT DIAGRAMS ......................................................................................................... 89
6.4.1 Image classes hierarchy ................................................................................................... 89
6.4.2 Image to object modelling ................................................................................................ 89
6.4.3 Object containment in object_list ..................................................................................... 90
6.4.4 Bw_image class ............................................................................................................... 90
6.4.5 CBW_IMAGE class.......................................................................................................... 90
6.4.6 CCIMAGE class .............................................................................................................. 91
6.4.7 Colored_image class........................................................................................................ 91
6.4.8 I_pattern class ................................................................................................................. 92
6.4.9 Image Class ..................................................................................................................... 92
6.4.10 Object class ..................................................................................................................... 93
6.4.11 Object_list class............................................................................................................... 93

6.4.12 Object_node class ............................................................................................................ 94
6.4.13 Parameters class.............................................................................................................. 94
6.4.14 Part_image class ............................................................................................................. 95
6.4.15 Pattern Class ................................................................................................................... 95
6.4.16 Self_tracking_image class ................................................................................................ 95
6.5 SEQUENCE DIAGRAMS ............................................................................................................... 96
6.5.1 Make Subtracted Image.................................................................................................... 96
6.5.2 Classification Sequence Diagram ..................................................................................... 97
6.5.3 Tracking Sequence Diagram ............................................................................................ 98
6.6 USER INTERFACE DIALOGS ........................................................................................................ 99
6.6.1 Select Video Dialog ......................................................................................................... 99
6.6.2 Select Operation Dialog................................................................................................... 99
6.6.3 Set Image Processing Parameters Dialog ....................................................................... 100
6.6.4 Set Classification Parameters Dialog ............................................................................. 101
6.6.5 Set Tracking Parameters Dialog .................................................................................... 102
7 IMPLEMENTATION ............................................................................................................... 103
7.1 CODE SNIPPETS ........................................................................................................................ 103
7.1.1 Erosion.......................................................................................................................... 103
7.1.2 Finding objects in the bw_image .................................................................................... 104
7.1.3 Incrementing the template .............................................................................................. 108
7.2 DELIVERABLES ........................................................................................................................ 109
8 CONCLUSION AND EXTENSIONS ....................................................................................... 110
8.1 CONCLUSION ........................................................................................................................... 110
8.2 EXTENSIONS ............................................................................................................................ 110
9 APPENDIX ................................................................................................................................ 111
9.1 GLOSSARY .............................................................................................................................. 111

List of Figures
Figure 1 Image Processing an Overview ................................................................................................. 17
Figure 2 Quad Tree ................................................................................................................................ 23
Figure 3 Noise Removal Through Morphology ........................................................................................ 24
Figure 4 Connecting Regions Through Morphology ................................................................................ 25
Figure 5 Direction Codes ....................................................................................................................... 26
Figure 6 Chain Coded Figure ................................................................................................................. 26
Figure 7 Subtraction of consequent frames.............................................................................................. 29
Figure 8 Background subtraction............................................................................................................ 29
Figure 9 Moving Objects ........................................................................................................................ 32
Figure 10 A Labeled Image.................................................................................................................... 35
Figure 11 Bottom right corner ................................................................................................................ 36
Figure 12 Bottom left corner................................................................................................................... 36
Figure 13 Upper right corner ................................................................................................................. 36
Figure 14 Upper left corner .................................................................................................................... 36
Figure 15 A vehicle entering into the frame. ............................................................................................ 38
Figure 16 Left and Right overlap ............................................................................................................ 39
Figure 17 Inter Frame Overlapping Region ............................................................................................ 39
Figure 18 Frame no 1 ............................................................................................................................. 40
Figure 19 Frame no 2 ............................................................................................................................. 40
Figure 20 Templates 1 and 2................................................................................................................... 40
Figure 21 Primary Level Use Case Diagram........................................................................................... 46
Figure 22 Secondary Level Use Case Diagram 1..................................................................................... 49
Figure 23 Secondary Level Use Case Diagram 2..................................................................................... 52
Figure 24 Tertiary Level Use Case Diagram 1 ........................................................................................ 57
Figure 25 Tertiary Level Use Case Diagram 2 ........................................................................................ 60
Figure 26 Tertiary Level Use Case Diagram 3 ........................................................................................ 65
Figure 27 Tertiary Level Use Case Diagram 4 ........................................................................................ 67
Figure 28 Secondary Level Use Case Diagram 3..................................................................................... 72
Figure 29 Tertiary Level Use Case Diagram 5 ........................................................................................ 74
Figure 30 Tertiary Level Use Case Diagram 6 ........................................................................................ 75
Figure 31 Integrated Use Case Diagram................................................................................................. 79
Figure 32 Give_video_path..................................................................................................................... 80
Figure 33 Select_operation ..................................................................................................................... 81
Figure 34 Set_background_frame activity ............................................................................................... 81
Figure 35 Set_sample_frame .................................................................................................................. 82
Figure 36 Set_threshold_subtraction ...................................................................................................... 82
Figure 37 Set_pattern_noise_removal ..................................................................................................... 83
Figure 38 Set_gateway_boundary ........................................................................................................... 83
Figure 39 Set_area_classification ........................................................................................................... 84
Figure 40 Set_reduction_area_threshold ................................................................................................ 84
Figure 41 Set_template_increment .......................................................................................................... 85
Figure 42 Set_horizon_limit.................................................................................................................... 85
Figure 43 Set_noise_area ....................................................................................................................... 86
Figure 44 Tracking ................................................................................................................................. 87
Figure 45 Classification ......................................................................................................................... 88
Figure 46 Image Class Hierarchy ........................................................................................................... 89
Figure 47 Image to object modeling ........................................................................................................ 89
Figure 48 Object containment in object_list ............................................................................................ 90
Figure 49 Bw_image class ...................................................................................................................... 90
Figure 50 CBW_IMAGE class ................................................................................................................ 90
Figure 51 CCIMAGE class ..................................................................................................................... 91
Figure 52 Colored_image class .............................................................................................................. 91

Figure 53 I_pattern class ........................................................................................................................ 92
Figure 54 Image Class............................................................................................................................ 92
Figure 55 Object class ............................................................................................................................ 93
Figure 56 Object_list class ..................................................................................................................... 93
Figure 57 Object_node class................................................................................................................... 94
Figure 58 Parameters class .................................................................................................................... 94
Figure 59 Part_image class .................................................................................................................... 95
Figure 60 Pattern Class .......................................................................................................................... 95
Figure 61 Self_tracking_image class....................................................................................................... 95
Figure 62 Make Subtracted Image Sequence Diagram ............................................................................ 96
Figure 63 Classification Sequence Diagram............................................................................................ 97
Figure 64 Tracking Sequence Diagram ................................................................................................... 98
Figure 65 Select Video Dialog ................................................................................................................ 99
Figure 66 Select Operation Dialog ......................................................................................................... 99
Figure 67 Set Parameters (Image Processing) Dialog ........................................................................... 100
Figure 68 Set Parameters (Classification)............................................................................................. 101
Figure 69 Set Parameters (Tracking) .................................................................................................... 102

1 Introduction to Traffic Surveillance

1.1 Purpose

This document aims at providing an insight into Traffic Surveillance. It explains
the problem faced, how the project aims to solve it, and the possible applications
of the solution.

1.2 Background

Traffic Surveillance is an upcoming area of interest, particularly in the public
sector. Much effort is spent on the engineering and building of vehicles, but
surveillance of roads is still in its infancy even in some developed countries.

Traffic Surveillance involves constant inspection of roads. The major focus here is
the identification of vehicles in the following possible states:

• Vehicle stopped in the middle of the road, indicating a possible breakdown,
• Vehicle crossing the safe speed limits,
• Vehicle parked in a non-parking zone.

The information gathered is useful in the long run for building statistics on the
basis of which roads may be planned. This is particularly useful in the context of
metropolitan cities, where the space available for the construction of new roads is
limited.

1.3 The Problem

Currently most solutions to Traffic Surveillance are manual and involve scanning of
video at Traffic Monitoring Stations. However, the number of people available for
analysis of video is much smaller than the number of video streams. This means that
the workload on a single traffic controller is high; in addition, the nature of the
job is monotonous. Errors and delays in analysis are therefore likely.

1.4 Objective
To design and implement a software tool that assists the traffic controller. This
tool analyses video and raises alarms when an anomaly is detected. The job of the
traffic controller is thus eased, as he has to react only to the alarms rather than
to the entire video.

1.5 Applications

The software may be installed at Traffic Monitoring Stations situated at critical
locations like:

Government buildings at low-risk times – at high-risk or red-alert times most
surveillance of critical public places is manual; the software, however, comes to the
rescue when the risk factor is low. This is generally the time when surveillance is
still necessary but few alarms are generated, so an automated solution is feasible.

Highways – the software may be used to identify vehicles that have broken down and
thus need help. Since the distance to be covered is very large, an automated solution
is needed.

1.6 Assumptions

The project aims at a prototype of the solution, so the following assumptions
pertaining to the video are reasonable:

• Frame rate is greater than 10 frames/second.
• Pixel area of a vehicle in the image is greater than 100 pixels.
• Video is free from distortion.
• Video obtained depicts free-flowing traffic.
• Video can be easily decompressed to extract frames.
• Video is free from rain, snow, fog or any other disturbing climatic factor.

2 Feasibility Study

2.1 Purpose

This document gives an insight into feasibility issues related to technology, finance
and resources.

2.2 References

1. http://www.indigosystems.com/CServices/irprimer.html
2. http://www.path.berkeley.edu/PATH/Publications/Media/FactSheet/TrafficSurveillance.pdf
3. “Multimedia: Making It Work” – Tay Vaughan, fifth edition, TMH.
4. “Computer Networks” – A. S. Tanenbaum, third edition, PHI.
5. http://electronics.howstuffworks.com/digital-camera2.htm
6. Digital Image Processing – Gonzalez, Woods, Addison-Wesley, I Edition

2.3 Input Approaches

Input to the system may be obtained by the following techniques:

1. Loop detectors
2. Infrared imaging
3. Video
   3.1 Analog video
   3.2 Digital video

2.3.1 Loop Detectors

About the technology

Loop detectors are widely used for Traffic Surveillance. The basic technique here is
to bury a loop of wire beneath the road surface with a continuous current running
through it. When a vehicle passes over, it induces a surge of current in the loop.
These surges can be measured and counted, yielding information about traffic flow and
density. Loops may be installed a pair of metres apart to enable the gathering of
speed information.

Feasibility

Cost – the technique is not cost-effective when a vehicle must be tracked over long
distances: a lot of wire must be laid down, which increases the cost in comparison to
video-based techniques.

Resources Required – deployment of the technique requires permission from the road
authorities, as even a small-scale installation would cause disturbance to traffic.

2.3.2 Infrared Imaging

About the technology

Visible light is actually only a very small portion of the electromagnetic spectrum.
Radio waves, infrared, ultraviolet, X-rays and gamma rays are other forms of
electromagnetic radiation of varying energy.

As our eyes are capable of seeing only a very narrow region of the electromagnetic
spectrum, we need special instruments to extend our vision beyond the limitations of
the unaided eye. As the energy of light changes, so does its interaction with matter.
Materials that are opaque at one wavelength may be transparent at another. A familiar
example of this phenomenon is the penetration of soft tissue by X-rays: what is opaque
to visible light becomes transparent to reveal the bones within.

Extending human vision with electronic imaging is one of the most powerful techniques
available to science and industry, particularly when it enables us to see light in the
infrared, or IR, portion of the spectrum. Infrared means "below red", as infrared
light has less energy than red light. We typically describe light energy in terms of
wavelength, and as the energy of light decreases, its wavelength gets longer. Infrared
light, having less energy than visible light, has a correspondingly longer wavelength.
The infrared portion of the spectrum ranges in wavelength from 1 to 15 microns, or
about 2 to 30 times the wavelength (and 2 to 30 times less energy) of visible light.

Infrared imaging can be used to capture vehicle information by installing infrared
cameras along roads and analysing the video they provide. The major advantage is that
infrared imaging works in conditions of dense fog, rain or airborne particles, and it
is precisely under these conditions that surveillance is most required.

Feasibility of technology

Cost – the price of a single camera is about Rs 10,000.

2.3.3 Video

About the technology

When an image is flashed on the retina of the human eye, it is retained for some
milliseconds before decaying. If a sequence of images is flashed at 50 or more images
per second, a perception of motion is obtained. All video systems exploit this
principle to produce moving pictures.

A camera basically consists of:

• Lens – to focus the image on the imaging device.
• Imaging device – to convert the optical signal to an electrical signal.
• Storage media – to store the information gathered.

The information may be stored in analog or digital form, leading to analog or digital
video.

Analog Video

Here two-dimensional image data is converted into one-dimensional data representing
voltage as a function of time. The camera scans an electron beam rapidly across the
image and slowly down it, recording the light intensity as it goes. At the end of the
scan a frame has been produced and the beam retraces. This intensity as a function of
time is broadcast, and analog receivers repeat the scanning process to reconstruct the
image. To construct video, many frames are sent in succession; frame rates of about
60 frames/second are common.

Digital Video

The two-dimensional image is focused onto a charge-coupled device, or CCD. A CCD is a
collection of tiny light-sensitive diodes that convert photons (light) to electrons
(electrical charge). The electrical charge information is stored in an array present
in the CCD itself. An analog-to-digital converter turns each pixel's value into a
digital value. Each pixel may be represented using 1, 8 or 24 bits, depending upon
the colour information desired.

Feasibility

In terms of cost, small digital cameras are cheaper than analog ones. The operation
of a digital camera is also easier, as the video obtained is directly in a
computer-readable format.

Conclusion

On the basis of the above discussion it can be concluded that the digital camera is
the best choice, because it is cost-effective and the signal obtained can be processed
directly. However, real-time information retrieval and processing seems difficult at
this stage, so the initial focus would be the development of techniques for recorded
video clips.

2.4 Processing Approaches


Once the input information is obtained, it needs to be processed. We may process the
information by following either of the two approaches below:

2.4.1 Classical Image Processing
This involves processing images in the conventional manner. Though the technique is
a time-tested one, understanding image processing requires an insight into
mathematics, so the learning time taken is appreciable.

2.4.2 Use of certain heuristics


This involves processing images with algorithms that are tailor-made for the problem.
This solution requires only a brief knowledge of image processing (presented in
section 3).

2.4.3 Conclusion
The second approach, though it looks risky, can be used: it caters to the need as
well as the first approach does, and the learning time is less.

2.5 Summary
Information will be supplied to the system in the form of digital camera clips. A
heuristic-based approach should be used where possible.

3 Elementary concepts of image processing

3.1 Purpose
This document provides introductory concepts in image processing.

3.2 References

1. Digital Image Processing – Gonzalez, Woods, Addison-Wesley, I Edition

3.3 Overview

Image processing is hugely application-dependent: the technique applied depends upon
the context in which the processing is done and the information of interest. However,
nearly every image processing system has the architectural framework shown below.

Figure 1 Image Processing an Overview

3.3.1 Low Level Image Processing

These activities focus primarily on image enhancement. The objective is usually to
increase the overall quality of the image obtained, with an eye towards its future
use. The activities include:

1. Noise Reduction – almost every image, when acquired, has some noise. This may be
due to known reasons, like a noisy carrier (the medium from which the image is
obtained), or due to unknown reasons. Tools like low-pass filtering and morphological
operations (explained later) might be used for this purpose.
2. Contrast Improvement – the image obtained may not use the spectrum of available
colors evenly, which may lead to a low-contrast image. Simple multiplication by a
constant or techniques involving histogram equalization may do the job.
3. Image Restoration – such techniques are used when the reason behind the
poor-quality image is known and can be modeled mathematically. The procedure consists
of reversing the process that caused the poor-quality image.
4. Image Enhancement – here we try to ‘enhance’ portions of interest in the image.
This may include point detection, line detection, edge detection and enhancement.
Tools used may be high-pass filtering, the Sobel operator etc.

3.3.2 Intermediate-Level Image Processing

This consists of isolating the features (regions or portions) of interest from the
whole image. The procedure involves removal of the background, or ‘clutter’, in order
to extract the information for which the system is made. The techniques of importance
here are:

1. Image Segmentation
2. Representation and description

Image Segmentation – as the term suggests, image segmentation divides the image into
regions. These regions pertain to areas that are of interest: for the meteorological
department a cloud in an image is a region of interest; for this project the moving
regions (the vehicles) present the areas of interest.
The tools used are thresholding and region-based segmentation techniques. Thresholding
may be used if the region to be segmented has some known color characteristic, e.g. it
will be the black region, or the region lying within a specified color range.
Region-based segmentation is helpful if the region of interest is connected in the
4-connected or 8-connected sense. This technique is explained later.

Representation and description – after segmentation it is desirable to ‘code’ the
regions of interest. This means that we need to find an efficient way to represent
each region that has been found. The description may be used for area computation,
shape recognition, or the computation of any other desired attribute. For this purpose
chain codes, signatures and a variety of descriptors may be used.

3.3.3 High Level Image Processing

This consists of the most ‘intelligent’ part of image processing: recognition and
interpretation. Given the description of the image, we need to identify what the image
represents. The methods for achieving this task are:

1. Template matching
2. Statistical approach
3. Syntactical approach
4. Neural Networks

1. Template Matching – this technique may be used if the region to be matched is
known to possess certain characteristics accurately. If we are provided with accurate
information about what the ‘real-world object’ looks like, this can be stored; every
region found in an image is then matched against the stored data. A typical
application is the reading of printed documents: the character set is predefined, and
recognition of characters may be done by simple comparison of the obtained characters
with this set.
2. Statistical Approach – here each pattern is represented in terms of d features or
measurements and is viewed as a point in a d-dimensional space. The aim is to choose
those features that allow pattern vectors belonging to different categories to occupy
compact and disjoint regions in the d-dimensional feature space. The effectiveness of
the representation space (feature set) is determined by how well patterns from
different classes can be separated. Classification of vehicles, for example, might use
the simple heuristic that very small moving regions correspond to two-wheelers, very
large ones to lorries, and the rest to cars.
3. Syntactic Approach – this approach may be used to identify complex patterns. The
patterns comprise simple sub-patterns; the elementary sub-patterns are referred to as
primitives, and all patterns consist of these primitives in a hierarchical order. For
example, a complex pattern like a face consists of two eyes, two ears, a nose and a
mouth; by identifying these features in an image, with their relative placement, it is
possible to say that a face has been detected. On a larger scale, in satellite images
one may classify patterns as residential regions, mountains, plains, forests etc.
4. Neural Networks – this approach is modeled after the natural interpretation and
recognition system: the neural network found in the brain. These systems consist of
layers of neurons interconnected with each other, such that each layer provides input
to the next one in the hierarchy. The first layer is given the raw input, that is the
image, and the last layer gives the output. Neural systems are learning systems, and
the interconnections are adjusted on the basis of training patterns given to the
network. This technique is usually applied only after all the above techniques fail.

3.3.4 Some Common Tools

Basic Information

Pixel (picture element) – the smallest independently addressable unit of the image.

Image – a two-dimensional data structure made up of pixels. An image may be
represented by a function f(x, y), where x and y give the location of a pixel and
f(x, y) gives the color or intensity at that pixel.

Histogram – a bar graph representing the frequency of the various intensities (or
colors) in an image. The x-axis usually plots the intensities in increasing numerical
value, and the y-axis the number of times each intensity (or color) appears in the
entire image.

Frequency – for images, frequency is defined in the spatial domain rather than the
time domain, as is otherwise the convention. The colors in an image may change
gradually (for example the texture of the sky) or abruptly (for example around edges,
such as the boundary of a hill against blue sky). If a plot of intensity against space
were drawn, it would capture these changes: abrupt changes give sharp peaks, i.e. high
frequency, and gradual changes give broad, flattened peaks, i.e. low frequency.

Thresholding

Thresholding is a very simple but effective tool for image processing.

Motivation – images might have some range of colors (predefined or dynamically
computable) that is of importance, and thresholding extracts this range of colors from
the image. For example, to get information about leaves in an image, one would
threshold the image to extract all regions that are ‘green’; here we know that the
color of the object of interest (a leaf) is ‘green’. Dynamic calculation of thresholds
involves the computation of intensity histograms.

Method – visit each pixel of the image. If the intensity lies within the threshold
limits, select that pixel (if a monochrome image is desired, color the pixel black;
for a color image, leave the intensity information intact). If the pixel intensity
does not lie within the threshold limits, color that pixel white.

Mathematically:
F(x, y) = color, if F(x, y) lies within the threshold limits
F(x, y) = white, otherwise
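
As an illustration, a minimal C++ sketch of this operation is given below. The
GrayImage structure and the [lo, hi] limits are hypothetical helpers introduced here
for clarity; they are not the project's actual image classes (those appear in
section 6.4).

#include <cstddef>
#include <vector>

// A grayscale image stored as a flat, row-major array of 8-bit intensities.
struct GrayImage {
    int width, height;
    std::vector<unsigned char> pixels;   // size = width * height
};

// Keep pixels whose intensity lies within [lo, hi] by painting them black (0),
// producing a monochrome mask; all other pixels are painted white (255).
void threshold(GrayImage& img, unsigned char lo, unsigned char hi) {
    for (std::size_t i = 0; i < img.pixels.size(); ++i) {
        unsigned char p = img.pixels[i];
        img.pixels[i] = (p >= lo && p <= hi) ? 0 : 255;
    }
}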

Filtering

Filtering of an image is done with respect to its frequency characteristics. The
filter may be a low (frequency) pass filter or a high (frequency) pass filter.

Motivation – if we want to smooth an image (blurring, or reduction of noise) we need
to pass the low frequencies and block the high-frequency components. If we want to
enhance the edges of an image, low frequencies must be blocked and high frequencies
passed.

Neighborhood operations – here, to decide the fate of a pixel, information from the
pixels surrounding it is used. For a low-pass filter we set the value of a pixel to
the average of the values of the pixels surrounding it. The net effect is that if the
pixel corresponds to high frequency, its value is smoothed or reduced (imagine
dropping an ink drop on a sponge!), while low-frequency components do not suffer much
change. For a high-pass filter we take a sum of products in which the centre pixel is
multiplied by a large positive coefficient and the surrounding pixels by smaller
negative coefficients (or vice versa). If the pixel corresponds to low frequency, its
intensity is approximately equal to that of its neighborhood, so the positive and
negative contributions largely cancel and the sum has a small absolute value. On
similar lines, it is easy to show that the sum of products at a high-frequency pixel
(with the above pattern of coefficients) has a large absolute value. Thresholding the
result then achieves high-pass filtering.
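
The following sketch illustrates this neighborhood operation, reusing the hypothetical
GrayImage structure from the thresholding sketch. Passing a kernel of all 1/9
coefficients gives the averaging low-pass filter; a centre coefficient of 8 with
neighbor coefficients of -1 (a Laplacian-like kernel) gives a high-pass response whose
absolute value is large at rapid intensity changes. The kernel values are illustrative
choices, not fixed by the text.

// 3x3 neighborhood operation (convolution); border pixels are left untouched.
GrayImage convolve3x3(const GrayImage& in, const float k[3][3]) {
    GrayImage out = in;
    for (int y = 1; y < in.height - 1; ++y)
        for (int x = 1; x < in.width - 1; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += k[dy + 1][dx + 1] *
                           in.pixels[(y + dy) * in.width + (x + dx)];
            if (sum < 0.0f)   sum = 0.0f;     // clamp to the valid
            if (sum > 255.0f) sum = 255.0f;   // intensity range
            out.pixels[y * in.width + x] = (unsigned char)sum;
        }
    return out;
}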

Region Based Segmentation

Segmentation, as pointed out previously, is the separation of regions of interest from
the image. This differentiation may be hierarchical: the whole image at level 1; road
and background at level 2; vehicle on road, road and background at level 3; and so on.
For segmentation we may use thresholding (discussed earlier) or a region-based
approach (discussed next).

Basic formulation:
(a) ∪i Ri = R (the union of all the regions gives back the entire image)
(b) Ri is a connected region.
(c) Ri ∩ Rj = Φ for all i ≠ j (regions are disjoint)
(d) P(Ri) = TRUE for all i
(e) P(Ri ∪ Rj) = FALSE for adjacent regions Ri and Rj, i ≠ j

Here Ri denotes a region and P(Ri) is a logical predicate over the points defining a
region.

Region growing by pixel aggregation


Motivation – the region of interest in the image has some common
property and is linked or connected.

Method – select some seed pixels. From these, select the neighboring pixels that have
similar properties in terms of gray level, texture or color; all such pixels are given
the same label. The selection of seed pixels, of suitable properties for the inclusion
of points in the various regions, and of a stopping rule are the major points on which
the success of the algorithm depends.
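
A sketch of region growing with a single seed is given below, again assuming the
hypothetical GrayImage structure. The inclusion property used here is a simple
gray-level tolerance tol around the seed value, and labels is a zero-initialized map
of width * height integers; both are illustrative choices, not the project's own.

#include <cstdlib>
#include <queue>
#include <utility>
#include <vector>

// Grow one region from (seedX, seedY): aggregate 4-connected neighbors whose
// gray level is within tol of the seed value, writing label into the map.
void growRegion(const GrayImage& img, std::vector<int>& labels,
                int seedX, int seedY, int label, int tol) {
    const int seedVal = img.pixels[seedY * img.width + seedX];
    std::queue<std::pair<int, int> > frontier;
    frontier.push(std::make_pair(seedX, seedY));
    labels[seedY * img.width + seedX] = label;
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        int x = frontier.front().first, y = frontier.front().second;
        frontier.pop();
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || ny < 0 || nx >= img.width || ny >= img.height)
                continue;
            int idx = ny * img.width + nx;
            if (labels[idx] == 0 && std::abs(img.pixels[idx] - seedVal) <= tol) {
                labels[idx] = label;              // similar enough: same region
                frontier.push(std::make_pair(nx, ny));
            }
        }
    }
}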

Region splitting and merging

Motivation – the same as above, with the speciality that the region to be identified
may be large in terms of area. Visiting each neighborhood pixel (an exhaustive
approach) is then costly, and optimization is possible if we try a bottom-up approach.

Method – divide the image into regions of arbitrary size, then merge or split these
regions as required: split a region if all its pixels do not satisfy the
common-property criterion, and merge adjacent regions that together satisfy the
criterion. Splitting is done for regions that are larger than a threshold.

The algorithm may be stated as:

Input – an image
Output – segmented image
Method –
1. Select a suitable initial value for the quadrant size ‘s’.
2. Split into disjoint quadrants of size ‘s’ any region that does not satisfy the
predicate P.
3. Merge any adjacent regions Ri and Rj for which P (Ri ∪ Rj) = TRUE.
4. Set ‘s’ = ‘s’/4.
5. If ‘s’ is still greater than a minimum threshold size, go to step 2 for each Ri,
else go to step 6.
6. Stop.

Example – an image split into quadrants R1, R2, R3 and R4, with R4 further split into
sub-quadrants R41, R42, R43 and R44:

+---------+---------+
|         |         |
|   R1    |   R2    |
|         |         |
+---------+----+----+
|         | R41| R42|
|   R3    +----+----+
|         | R43| R44|
+---------+----+----+

This information may be stored using a quad tree.

Figure 2 Quad Tree

Morphology

This tool is used for extracting image components that are useful in the
representation and description of region shape. The tool may also be used for pre- or
post-processing of images.

Motivation – comparison of an image part with another image (or pattern). This means
that we compare an image portion with a pattern at each pixel to decide that pixel's
fate. The concept is somewhat similar to filtering, except that no averaging or sum of
products is used here.

Noise Removal
Motivation – we may selectively ‘lose’ information from an image by eroding it at its
boundaries. If, after erosion, not all information about a region is lost, reversing
the erosion gives back the same information as was present before; however, if all
information regarding a region is lost, that region will not be recovered when the
erosion is reversed.

Method – form a pattern and superimpose it over each pixel. If, for any black pixel p,
any pixel within the superimposition limit is white, erode pixel p, i.e. make pixel p
white. Now dilate the image by performing the opposite operation: superimpose the
pattern at each pixel again, and if, for any white pixel p, any pixel within the
superimposition limit is black, color pixel p black.

For example, in the picture below the brown portions correspond to pixels having value
1 and the white region to pixels having value 0. Intuitively, the larger brown area is
the region of interest, while the smaller brown areas correspond to noise. If we
superimpose the circular pattern shown over all pixels in the image, the information
about the noise is lost, and the information about the region of interest may be
regained by the process of dilation explained above. Thus we are able to remove noise
from the image.

Figure 3 Noise Removal Through Morphology
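
The sketch below shows erosion with a 3x3 square pattern on a binary image (0 = black,
i.e. object; 255 = white), reusing the hypothetical GrayImage structure. The project's
own erosion routine appears in section 7.1.1; this is only an illustration. Dilation
is the same loop with the roles of black and white exchanged, so erosion followed by
dilation removes small noise regions, and dilation followed by erosion fills small
holes.

// A pixel survives erosion only if the whole 3x3 pattern fits inside the
// object; otherwise it is turned white. Border pixels are left untouched.
GrayImage erode3x3(const GrayImage& in) {
    GrayImage out = in;
    for (int y = 1; y < in.height - 1; ++y)
        for (int x = 1; x < in.width - 1; ++x) {
            bool keep = true;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (in.pixels[(y + dy) * in.width + (x + dx)] != 0)
                        keep = false;
            out.pixels[y * in.width + x] = keep ? 0 : 255;
        }
    return out;
}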

Connecting regions or filling of holes

Motivation – dilation followed by erosion.

Method – this operation proceeds on lines similar to noise removal, except that the
order of erosion and dilation is reversed, i.e. first dilation and then erosion.
During dilation, information about small holes is lost and the boundaries are flooded
outward; to compensate for this flooding at the boundaries, the image is then eroded.
This process is illustrated in the diagram provided next.

Figure 4 Connecting Regions Through Morphology

Chain codes

Chain codes are a representation of a pattern in the image. They may be thought of as
a chain of connected steps representing the boundary of an object. The boundary
information may be coded using a four-connected or an eight-connected approach.

Figure 5 Direction Codes

We sample the boundary on a grid coarser than the pixel grid; the grid size is limited
by the amount of information loss acceptable. To make the chain code independent of
the starting point, we view it as a circular chain and rotate it so that it forms the
minimum integer number. We further make it rotation-independent by using the first
difference of the chain code rather than the chain code itself. For example:

Figure 6 Chain Coded Figure

if the chain code is 10103322 then its first difference is 33133030, where the leading
3 is the difference between the last and first elements. Size normalization is
achieved by changing the grid size.
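
A short sketch of the first-difference computation, treating the code as circular
exactly as in the worked example (the function name and the use of std::vector are
illustrative):

#include <cstddef>
#include <vector>

// First difference of a chain code with dirs directions (4 or 8): element i is
// (code[i] - code[i-1]) mod dirs, the first element being computed from the
// last and first codes. For 10103322 with dirs = 4 this yields 33133030.
std::vector<int> firstDifference(const std::vector<int>& code, int dirs) {
    std::vector<int> diff(code.size());
    for (std::size_t i = 0; i < code.size(); ++i) {
        int prev = code[i == 0 ? code.size() - 1 : i - 1];
        diff[i] = ((code[i] - prev) % dirs + dirs) % dirs;
    }
    return diff;
}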

Finding perimeter, area, shape factor

Perimeter – in the 8-connected approach, all even direction values contribute one unit
each to the perimeter and all odd values contribute √2 units each.
Thus perimeter = even count + √2 × odd count.

Area – we need both the direction and the ordinate (y value) information. Vectors 2
and 6 are vertical and hence do not contribute to the area; vectors 0 and 4 contribute
in whole square units, and all others contribute in ±0.5-unit steps. We may say that:

Vector 0 – area = area + Y
Vector 1 – area = area + Y + 0.5
Vector 2 – area = area
Vector 3 – area = area – Y – 0.5
Vector 4 – area = area – Y
Vector 5 – area = area – Y + 0.5
Vector 6 – area = area
Vector 7 – area = area + Y – 0.5

Thus the calculation of area reduces to finding the chain code and applying the above
equations for each code.

The shape factor is defined as perimeter × perimeter / area. It may be found easily
once the area and perimeter are known, and is a measure of the elongation of a shape.
The reciprocal of the shape factor is called circularity.
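
These rules translate directly into code. The sketch below assumes direction codes as
in Figure 5 and a starting ordinate startY for the boundary point where the chain
begins; whether Y is updated before or after a vector's area contribution is a
convention not fixed by the rules above, so this is only one plausible reading.

#include <cmath>
#include <cstddef>
#include <vector>

// Perimeter, area and shape factor of a region from its 8-connected chain
// code, applying the contribution rules listed above for each direction.
void chainMeasures(const std::vector<int>& code, double startY,
                   double& perimeter, double& area, double& shapeFactor) {
    perimeter = 0.0;
    area = 0.0;
    double y = startY;
    for (std::size_t i = 0; i < code.size(); ++i) {
        int c = code[i];
        perimeter += (c % 2 == 0) ? 1.0 : std::sqrt(2.0); // odd codes: sqrt(2)
        switch (c) {
            case 0: area += y;       break;          // horizontal, rightward
            case 1: area += y + 0.5; y += 1; break;
            case 2:                  y += 1; break;  // vertical: no area change
            case 3: area -= y + 0.5; y += 1; break;
            case 4: area -= y;       break;          // horizontal, leftward
            case 5: area -= y - 0.5; y -= 1; break;
            case 6:                  y -= 1; break;  // vertical: no area change
            case 7: area += y - 0.5; y -= 1; break;
        }
    }
    area = std::fabs(area);                // orientation-independent area
    shapeFactor = perimeter * perimeter / area;
}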

4 Towards the solution

4.1 Purpose
This document aims at identifying issues, possible approaches and solutions. These
issues arise because the image processing techniques given in section 3 are generic,
and we need to address application-specific concerns.

4.2 Reference
1. Digital Image Processing – Gonzalez, Woods, Addison-Wesley, I Edition

4.3 Issue #1 (Extraction of Vehicle Information)

4.3.1 Problem
The frame information about the background needs to be suppressed while that of the
vehicles needs to be retained.

4.3.2 Possible Approaches

Matching of frame with templates of vehicles

Here we store information regarding the shape of various vehicles in templates. Each frame is then compared with these templates for identification of vehicles.
The exact procedure involves comparison of a large image area (at least the road area) with the templates. Intuitively, the time taken by this technique is large, making it unsuitable for video.

Use attribute of motion

The basic technique involved here is subtraction of frames. First we consider subtraction of consecutive frames. This approach gives us information about vehicles because as a vehicle moves it changes from its previous position, whereas the background remains the same. Thus subtraction effectively leads to identification of the vehicle.

However, the problem encountered is that of scattering of the vehicle.

Figure 7 Subtraction of consequent frames

As can be seen, the major problem with this approach is that the vehicle obtained is scattered. Though we can use operations like filtering, the time taken is large because the scattering is comparable to noise in the image. Thus it would be better if we could extract the complete information about the vehicle in one go. This is possible if we subtract the background instead of consecutive frames.

The result for background subtraction is

Figure 8 Background subtraction

For this image the results are definitely better; in fact, the only other processing required would be to smooth out the edges.
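
The subtraction itself is simple per-pixel thresholding of differences; a minimal sketch (a hypothetical helper, assuming frames are stored as arrays of grey values as in the bw_image class of section 7, and with assumed BLACK/WHITE values):

#include <cstdlib>

#define BLACK 1   // assumed values; the TMS sources define their own
#define WHITE 0

// Background subtraction sketch: pixels whose absolute difference from the
// background exceeds the threshold are marked BLACK (moving), others WHITE.
void subtract_background(const short * frame, const short * background,
                         short * out, int npixels, int threshold)
{
    for(int k = 0; k < npixels; k++)
        out[k] = (std::abs(frame[k] - background[k]) > threshold) ? BLACK : WHITE;
}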

4.3.3 Solution
After the above discussion it is evident that background subtraction does the trick, and it is hence used in the software.

4.4 Issue #2 (Changes in background)

4.4.1 Problem
This problem arises from the solution to the first issue. Because we depend upon the background frame for processing, a change in the background frame implies failure of the system. This is a serious limitation, as changes in the background are inevitable.

4.4.2 Possible Approaches

Updating of background

The background is updated after a fixed number of frames have been processed. The algorithm compares the current background with the latest frame: if the difference in pixel value is slight, that pixel is adjusted to the new value; a pixel retains its previous value if the difference is too high. Note that the threshold used here should be smaller than the threshold used for subtraction.
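
A sketch of this update rule (a hypothetical helper; update_threshold is the smaller threshold mentioned above, and the caller invokes it after every fixed number of frames):

#include <cstdlib>

// Background update sketch: pixels that differ only slightly from the stored
// background are refreshed with the current frame's value; strongly differing
// pixels (probably vehicles) keep their previous background value.
void update_background(short * background, const short * frame,
                       int npixels, int update_threshold)
{
    for(int k = 0; k < npixels; k++)
        if(std::abs(frame[k] - background[k]) < update_threshold)
            background[k] = frame[k];
}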

4.4.3 Solution

The solution suggested above solves the issue efficiently and is hence used.

4.5 Issue #3 (Removal of noise and smoothing of image)

4.5.1 Problem
The image obtained after background subtraction still contains noise, and the edges are not prominent. We need a strategy to solve both of these problems.

4.5.2 Possible Approaches

Filtering of image.

We may use image filtering techniques, such that low pass filters eliminate noise and edges are enhanced by high pass filters.
Here, though noise is eliminated, the edges obtained are not sharp. Filtering may also make the thresholding operation difficult, as the number of colors in the image might increase.

Use of morphological operations.

A detailed study of morphological operations is provided in section 3.3.4. The advantage of this approach is that noise is removed and the vehicle's image is smoothed.

4.5.3 Solution
The issue is resolved by using the second approach for removal of noise and smoothing of the image.

4.6 Issue #4 (Conversion from image data to coordinates)

4.6.1 Problem

The data that we have as of now is in the form of an image; it must be converted to numbers representing the coordinates of vehicles. This is necessary because image data is large and difficult to process, whereas numbers are much easier to handle.

4.6.2 Possible Approaches

Save the image data

Figure 9 Moving Objects

This approach is optimistic and tries to avoid the problem. Here we try to reason about the need for numerical data instead of image data. For the problem of comparison we could simply compare the subtracted frames, as illustrated in the diagram above.

In the diagram we have a single object in motion. Suppose we say that once we find an overlap we have tracked the object, so there is no need to find the coordinates of the pattern. However, this is not the case. In the figure we can see that, apart from the overlapped regions, other regions also exist. These regions must also "know" that they have been tracked, i.e. that a match has been found for them too. This can be done only if we consider the image as being composed of patterns rather than pixels. The granularity of analysis has thus increased, and if we have to identify a pattern then identifying its boundary is easy, which is exactly what the issue indicates. Thus we can safely conclude that the problem is a real one and a different solution must be found.

Use non-regularity information to extract boundary.

This is explained by the fact that at a boundary the lines are not continuous, or any boundary is bound to bend. However, before finding the boundary points it is necessary to identify or tag the different patterns in an image. The next algorithm does exactly this.

Input – a binary image where 0 corresponds to white regions and 1 corresponds to black regions.
Output – a labeled image where a label exists for each connected component, i.e. all connected pixels with value 1 form a single region.

Algorithm
1. Initialize the counter to 0

2. Row-wise processing – scan the image row wise, i.e. scan the entire image such that each complete row is traversed before the next begins.

For each pixel do the following:

a) Check if the pixel is black.
b) If yes, check if it is the first pixel of that row; if yes, increment the counter by 1. Go to step d.
c) If no, check if the left pixel was white; if yes, increment the counter by 1. Go to step d.
d) Assign the value of the counter to the current location in the output label image.

At the end of this step we get the pixels labeled in row-wise runs; the next step is, intuitively, processing with respect to columns.

3. Column-wise processing – scan the output image formed so far column wise, i.e.:

a) Initialize the counter to 1.

b) Let label(j) denote the label of the jth pixel. Further, assign(x, y) assigns label 'y' to the set pointed to by 'x'; find_label(x) finds the representative element for x, returning 0 if no such element exists; makeset(x, y) makes a set with index x and representative element y.

c) Now start processing the image as indicated. For each pixel do the following:
I. Check that the current pixel is not zero, i.e. it was black in the input image; if yes then proceed, else go to the next pixel.
II. If the pixel being considered is the first of its column, call find_label(label(j)); if the result is zero call makeset(counter, label(j)), i.e. make a new set with representative element label(j) and index counter. If find_label(label(j)) returns a non-zero value, i.e. j already belongs to some set, do nothing.
III. If the pixel being considered is not the first pixel of its column, let j-1 denote the pixel above it. If that pixel has a label, call assign(find_label(label(j-1)), label(j)); this makes both j-1 and j belong to the same set. If the j-1 pixel is not black, do the same as in step II.

d) Process the column array now bottom up, as it was top down in step c.

4. Now we process the output array again. All values corresponding to the same set are given a label equal to the index of that set.

A word about the algorithm: it looks confusing, but its major advantage is that it can identify regions of awkward shapes, such as the U-shaped region in the example below; such regions cannot be identified directly by the 4-connected approach.
An example of this algorithm

Input image
0 0 0 0 0 0 0 0
0 1 0 0 1 0 1 1
0 1 0 0 1 0 1 0
0 1 1 1 1 0 1 1

Image after row wise processing


0 0 0 0 0 0 0 0
0 1 0 0 2 0 3 3
0 4 0 0 5 0 6 0
0 7 7 7 7 0 8 8

Column sets after processing of each column:

col 1: {}
col 2: {1,4,7}
col 3: {1,4,7}
col 4: {1,4,7}
col 5: {1,2,4,5,7}
col 6: {1,2,4,5,7}
col 7: {1,2,4,5,7} {3,6,8}
col 8: {1,2,4,5,7} {3,6,8}

Image after step 4


0 0 0 0 0 0 0 0
0 1 0 0 1 0 2 2
0 1 0 0 1 0 2 0
0 1 1 1 1 0 2 2
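
The set primitives named in step 3 can be realised with a simple array-based disjoint-set structure. A minimal sketch (hypothetical; for brevity the set index and the representative element coincide here, so makeset takes a single argument, and NOPATTERNS is the label bound used in section 7.1.2):

#define NOPATTERNS 256   // assumed bound on the number of row-wise labels

// parent[x] == 0 means label x belongs to no set yet; otherwise following
// the parent links leads to the representative element of x's set.
static int parent[NOPATTERNS];

// find_label(x): representative element of x's set, or 0 if none exists.
int find_label(int x)
{
    if(parent[x] == 0) return 0;
    while(parent[x] != x) x = parent[x];
    return x;
}

// makeset(y): make a new set with representative element y.
void makeset(int y) { parent[y] = y; }

// assign(x, y): put label y into the set whose representative element is x
// (x is assumed to be a valid representative, as produced by find_label).
void assign(int x, int y)
{
    int ry = find_label(y);
    if(ry == 0) parent[y] = x;   // y belonged to no set yet
    else parent[ry] = x;         // merge y's whole set into x's set
}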

Now that the pattern has been labeled we need to extract information about the boundary points. The algorithm presented next does this.

Figure 10 A Labeled Image

For example, in this image we need to compute the coordinates of the red points.
For this purpose the prototype uses the following algorithm.
1. For each pixel do the following steps.
2. Check if the pixel is black; if yes go to step 3, else process the next pixel.
3. Compute the indices of the neighbouring pixels into variables as shown below; P5 is the pixel under consideration.

P1 P2 P3
P4 P5 P6
P7 P8 P9

• Now check if P5 is a corner by checking whether any of the following conditions holds:

• (P2 + P5 + P4) are on and (P6 + P8 + P9) are Not on.

Figure 11 Bottom right corner

• (P2 + P5 + P6) are on and (P4 + P7 + P8) are Not on.

Figure 12 Bottom left corner


• (P4 + P5 + P8) are on and (P2 + P3 + P6) are Not on.

Figure 13 Upper right corner

• (P5 + P6 + P8) are on and (P1 + P2 + P4) are Not on.

Figure 14 Upper left corner


4. If any of the above tests passes, the pixel is included as a corner.

5. If P5 is the last pixel, stop; else go to step 2 with P5 as the next pixel.
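
The four tests translate into a straightforward predicate; a sketch (a hypothetical helper over the 0/1 pixel grid, ignoring the one-pixel image border for brevity):

// Corner test sketch: p[0..8] hold the 3x3 neighbourhood P1..P9 (1 = on).
// P5 (p[4]) is a corner if one of the four L-shaped configurations of
// Figures 11-14 matches.
bool is_corner(const int p[9])
{
    int P1=p[0],P2=p[1],P3=p[2],P4=p[3],P5=p[4],
        P6=p[5],P7=p[6],P8=p[7],P9=p[8];
    if(!P5) return false;
    if( P2 && P4 && !P6 && !P8 && !P9) return true; // bottom right corner
    if( P2 && P6 && !P4 && !P7 && !P8) return true; // bottom left corner
    if( P4 && P8 && !P2 && !P3 && !P6) return true; // upper right corner
    if( P6 && P8 && !P1 && !P2 && !P4) return true; // upper left corner
    return false;
}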

4.6.3 Solution

Issue is resolved using the second approach.

4.7 Issue #5 (classification of patterns)

4.7.1 Problem
After identification of a pattern we must know to what category of vehicle the pattern belongs. The problem is aggravated when we consider the scenario of a vehicle entering the frame. Thus the problem is two-fold: when to classify a vehicle and how to classify it.

4.7.2 Possible Approaches

Classify using templates


We can classify vehicles by comparing the patterns obtained against some templates. This approach is time consuming because the number of patterns may be large; also, since the chances of a large amount of noise surviving the earlier stages are low, the extra robustness of template matching wastes processing time and resources.

Classify using area


Under this technique we classify vehicles by the simple heuristic of area: if the area is large we are seeing a lorry; if it is small, a two-wheeler; otherwise a car. Some experiments have put these limits as
Area > 2500 pixels -> lorry
700 < Area <= 2500 pixels -> car
Area <= 700 pixels -> two-wheeler

However, there is one glitch. When a lorry is entering the frame its area is small, and we cannot be sure about the number of frames after which it will be completely within the frame. Thus the solution proposed does not tell us when to start classification.

For the solution we look at the following images

Figure 15 A vehicle entering into the frame.
We see that the area of the vehicle first increases and then decreases; thus area may safely be used, as long as this heuristic is first applied to be sure that the vehicle is properly in the frame.
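
Putting the two ideas together, classification might look like the following sketch (hypothetical helper; the thresholds are the experimental values quoted above, and prev_area/cur_area come from the gateway region in consecutive frames):

enum vehicle_type { TWO_WHEELER, CAR, LORRY, NOT_READY };

// Area-based classification sketch: a vehicle is classified only once its
// pixel area has started to decrease, i.e. it is fully inside the frame.
vehicle_type classify(int prev_area, int cur_area)
{
    if(cur_area >= prev_area) return NOT_READY;   // still entering the frame
    if(cur_area > 2500) return LORRY;             // experimental limits from
    if(cur_area > 700)  return CAR;               // the text above
    return TWO_WHEELER;
}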

4.7.3 Solution
The issue has been resolved using the second approach.

4.8 Issue #6 (interrelating of vehicles in consequent frames)

4.8.1 Problem
Vehicles obtained from different frames have to be correlated with each other; that is, a vehicle has to be tracked.

4.8.2 Possible Approaches

Use of Overlap criteria

This technique works on the principle of auto-tracking of a vehicle, i.e. a vehicle tracks itself on the basis of previously seen position and velocity information. The algorithm presented next illustrates this approach.

Input – a list of vehicles. This contains two types of vehicles:

• Those added to the tracker recently by the classification stage. The speed of such objects is zero.
• Those that were identified before by the tracker itself.

Processing step – for each item in the above list do the following:

a) Based on the current position + speed characteristic, form a template. Scan that template area from the main image, not the gateway image.

Figure 16 Left and Right overlap
b) Find the object in the template above that has a left, right or both overlaps with the object being considered, as shown above.
c) However, the object pattern found now may be smaller than the actual size of the object. This is because if the object has moved and we scan the area where it used to be, we get only a partial image of the object. As shown in the figure below, by the above steps we get only the red region, while the area of interest is the red + green area. Thus we perform step d).

Figure 17 Inter Frame Overlapping Region


d) Here we increase the size of the template formed. This can be done by checking the object found against the boundaries of the template: if any boundary of the object coincides with a boundary of the template, we must increase the size of the template and scan the image again. We keep scanning until the object boundaries no longer touch the boundary of the template.

e) After successfully matching an object with its pattern, the object is updated with respect to its area, position and speed (velocity).

Merits of this approach
1. The whole image need not be scanned during tracking.
2. The vehicle being scanned automatically 'finds' its occurrence in the next frame and no matching is required.

Demerits
1. The object must overlap with its previous position.
2. Noise and/or other disturbances must still be removed from the template.
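
In outline, one tracking iteration for a single vehicle might read as follows. This is a sketch only: predict_template, find_overlapping_object, touches_template_boundary and object::update are placeholder names, while incr_temp corresponds to the method listed in section 7.1.3:

// Tracking sketch: one iteration of the overlap-based tracker for one
// vehicle; all names other than incr_temp are hypothetical placeholders.
void track_one(object & veh, bw_image & frame, int increment)
{
    // a) form a template predicted from the current position + speed
    part_image templ = predict_template(veh, frame);

    // b) find the object in the template with a left/right (or both) overlap
    object found = find_overlapping_object(templ, veh);

    // c) + d) grow the template until the object found no longer touches
    // its boundary, rescanning after each growth step
    int sides;
    while((sides = touches_template_boundary(found, templ)) != 0)
    {
        templ.incr_temp(increment, sides);
        found = find_overlapping_object(templ, veh);
    }

    // e) update the vehicle's area, position and speed from the match
    veh.update(found);
}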
Some experimental images

Figure 18 Frame no 1

Figure 19 Frame no 2

These pictures correspond to the same car and show its position after one frame time, i.e. 1/10 of a second. The green region in the second frame corresponds to the position of the car in frame 1.

Figure 20 Templates 1 and 2

Template 1 (in green) depicts the car taken from frame 2 and template 2 (in red) depicts the car after the size of the template has increased.

4.8.3 Solution
The issue of tracking has been successfully resolved.

4.9 Summary
Many problems and their tentative solutions have been identified. The problems and solutions suggested are by no means exhaustive, but they certainly carry the essence of the software.

5 Requirement analysis

5.1 Introduction
This document gives the requirement analysis for the project. Use case
diagrams have been used for requirements modeling.

5.2 References
1. dotnetcoders.com
2. UML: A Beginner’s Guide, Jason T Roff

5.3 Terminology

5.3.1 Actors

Video Provider
The Video Provider is responsible for setting the path of the video, i.e. providing the location of the video to TMS.

TMS Expert
The TMS Expert configures a set of parameters that are used subsequently for analysis of video. His job is complex, as the parameters must be set with the camera position rather than the particular video in mind.

Alarm Analyser
The Alarm Analyser analyses the information presented to him when an alarm is given. He is the basic targeted user of the system.

5.3.2 Use Cases

set_video
Models the situation when Video Provider has to set video position in
TMS.
This use case is broken down into following use cases

give_video_path
Models the case when Video Provider actually sets the path of
video information.

select_operation
After the path to video is set, it must be either analysed or used for
setting up of camera position. The use case models this selection
operation.

tune_parameters
Models the situation when TMS Expert has to configure or fine tune
parameters according to a particular camera position.

This use case can be broken down into the following use cases

• tune_image_processing_parameters
Models the tuning of operations required for, basic image
processing, or intra frame processing.

The use case is further broken down into the following use cases
on basis of study in section 4

 set_background_frame
Models setting up of background frame.

 set_threshold_subtraction
Models setting up of threshold required for subtraction for
image subtraction.

 set_pattern_noise_removal
Sets the patterns of erosion and dilation required for noise
removal and smoothing of image.

• tune_classification_parameters
Models the tuning of parameters required of classification of new
vehicles.

This use case can be further broken down into the following use
cases on basis of section 4.

 set_gateway_boundary
Models the setting up of the gateway coordinates. This
is the region where complete processing is done.

 set_area_noise
Models the setting of area of noise that is the area,
likely to be rejected as noise.

 set_area_typeOfVehicle
Models setting up of the area limits for different vehicle
types, typically car, two-wheeler and lorry.

 set_reduction_area_threshold
Models the setting of threshold area, through which the
area in pixels should decrease before it can be safely
concluded that vehicle is completely in the frame.

• tune_real_time_parameters
Models the setting of some real-world parameters and their relation
with images.

This use case can be broken down into the following use cases

 set_pixel/distance_parameters
Models the setting of parameters that define the
relationship between pixels and actual distance.

 set_safe_speed_limit
Models the setting of region specific, safe speed limit.
TMS Expert must provide both maximum and
minimum limits.

• tune_tracking_parameters
Models the parameters required for tracking of vehicles

This use case can be further split into following use cases

 set_template_increment
Models the setting of the limit by which the template
should increase if vehicle is not found in the region
where it was previously.

 set_horizon_limit
Sets the limit beyond which vehicles are not tracked, as
they may become too small for analysis.

 set_noise_area
Set the noise area, that is the area that must never be
tracked.

analyse_video

Models the situation when Alarm Analyser analyses a particular video.

This is further broken down into following use cases

• manual_analysis
This use case models the situation where the Alarm Analyser does
a manual analysis of the clip. The condition arises usually because
automatic analysis fails.

This use case may be broken into the following use case

 view_clip
Models the viewing of clip or video by the Alarm Analyser.

• auto_analysis
This use case models the situation where automatic analysis is
initiated by the alarm analyser.

This use case may be broken down into the following use cases

 select_camera_position
Here a camera position is selected from the many camera
positions available.

 start_auto_analysis
This use case models actual starting of automated analysis
after a camera position has been selected.

 give_alarm
This use case is included by the start_auto_analysis use
case wherein if auto analysis finds any anomaly, alarm is
generated.

5.4 Use Case Diagrams

5.4.1 Primary Level Use Case diagram

Figure 21 Primary Level Use Case Diagram

Use case number : One

Use case name : set_video

UCD : Primary Level Use Case Diagram

Parent use case : none

Note : Video Provider generalises TMS Expert and Alarm Analyser

Primary actor : Video Provider

Secondary actor : none

Goal in context : Set the video path and operation to be performed
on video.

Scope : TMS

Level : Primary

Pre condition : TMS is running.

Post condition : Video source and operation to be performed on it are set.

Trigger : New video available for analysis

Main success scenario : Video data source is successfully created.

Failure end condition : Video data source is not in proper format and can
not be decompressed by the system.

Description : 1. A video stream is provided to TMS.
2. A data source is created by the TMS.
3. TMS gives options for subsequent use of the data source.

Use case number : Two

Use case name : tune_parameters

UCD : Primary Level Use Case Diagram

Parent use case : none

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : To set a set of parameters for the camera position given in the video.

Scope : TMS

Level : Primary

Pre condition : Successful execution of set_video use case

Post condition : A new set of parameters is made available for analysis of other videos.

Trigger : Video Provider chooses to use the video for creation of a new camera position parameter set.

Main success scenario : An efficient set of parameters is created.

Failure end condition : none.

Description : 1. A parameter object is created.
2. The video source is decompressed to give bitmap files.
3. Various parameters are selected, essentially by trial and error, and are saved in the parameter object, which is then stored in a file.

Use case number : Three

Use case name : analyse_video

UCD : Primary Level Use Case Diagram

Parent use case : none

Note : NA

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : To analyse video for anomalies like stopped vehicles and overspeeding vehicles, by tracking of speed.

Scope : TMS

Level : Primary

Pre condition : Successful execution of set_video use case

Post condition : Video analysed for anomaly

Trigger : Video Provider chooses to use the video for anomaly detection.

Main success scenario : All alarms are successfully raised.

Failure end condition : Many alarms are not raised.

Description : 1. User is given the option of manual or automatic analysis.

5.4.2 Secondary Level Use Case Diagram 1

Figure 22 Secondary Level Use Case Diagram 1

Use case number : Four

Use case name : give_video_path

UCD : Secondary Level Use Case Diagram 1

Parent use case : set_video

Note : none

Primary actor : Video Provider

Secondary actor : none

Goal in context : Gives location of video source to TMS.

Scope : set_video interface

Level : Secondary

Pre condition : none

Post condition : The video file is read if it exists.

Trigger : Video Provider has new set of digital video.

Main success scenario : Video data source is successfully created.

Failure end condition : Video format can not be decompressed

Description : 1. Video Provider keys in the path of the video.
2. TMS tries to read and decompress this file.
3. If successful, a data source object is created.

Use case number : Five

Use case name : select_operation

UCD : Secondary Level Use Case Diagram 1

Parent use case : set_video

Note : NA

Primary actor : Video Provider

Secondary actor : none

Goal in context : Select operation to be performed on video.

Scope : set_video interface

Level : Secondary

Pre condition : Successful execution of set_video use case.

Post condition : The ultimate use of video is known.

Trigger : Video Provider wants to use the particular video.

Main success scenario : none

Failure end condition : none

Description : 1. TMS gives the option of using the video for creation of a new parameter set or for alarm detection.

5.4.3 Secondary Level Use Case Diagram 2

Figure 23 Secondary Level Use Case Diagram 2

Use case number : Six

Use case name : tune_image_processing_parameters

UCD : Secondary Level Use Case Diagram 2

Parent use case : tune_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : To set parameters enabling the creation of subtracted images.

Scope : tune_parameters interface

Level : Secondary

Pre condition : All frames from video data source have been
extracted.

Post condition : Image-processing parameters are set.

Trigger : TMS Expert starts tuning of parameters.

Main success scenario : The subtracted image, formed using the image-processing parameters, contains all vehicles for all frames and is free of noise.

Failure end condition : The values of parameters are invalid and are hence
not stored.

Description : Various parameters required in the early stages of processing are set up.

Use case number : Seven

Use case name : tune_classification_parameters

UCD : Secondary Level Use Case Diagram 2

Parent use case : tune_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set parameters required by classifier.

Scope : tune_parameters interface

Level : Secondary

Pre condition : Successful execution of tune_image_processing_parameters.

Post condition : A set of classification parameters is saved.

Trigger : Image_processing_parameters have been
successfully created.

Main success scenario : The set parameters allow the maximum number of vehicles to be classified correctly.

Failure end condition : The values of parameters are invalid and are hence
not stored.

Description : 1. Various parameters are given values.
2. A classifier object is created with the classification parameters.
3. Subtracted images are made using the image processing parameters.
4. classify() is called for each subtracted image.
5. If the result is satisfactory the parameters are saved, else go to 1.

Use case number : Eight

Use case name : tune_real_time_parameters

UCD : Secondary Level Use Case Diagram 2

Parent use case : tune_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set the pixel to distance ratio and safe speed limit.

Scope : Tune_parameters interface

Level : Secondary

Pre condition : Successful execution of tune_classification_parameters.

Post condition : Real time information available to the system.

Trigger : Real time information available to TMS Expert.

Main success scenario : Real time parameters are saved in the system.

Failure end condition : The values of parameters are invalid and are hence
not stored.

Description : 1. The TMS Expert enters the real time parameters.
2. TMS software validates the parameters.
3. If the parameters are valid they are saved; else an error message is given.

Use case number : Nine

Use case name : tune_tracking_parameters

UCD : Secondary Level Use Case Diagram 2

Parent use case : tune_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set parameters enabling fast and efficient tracking of classified vehicles.

Scope : tune_parameters interface

Level : Secondary

Pre condition : Successful execution of tune_real_time_parameters

Post condition : A set of tracking parameters is saved.

Trigger : Tracker’s parameters are available to the system.

Main success scenario : All classified vehicles are tracked till the horizon
limit and alarms are given for anomalies
(particularly stopped or slow vehicles).

Failure end condition : Track of in-frame vehicles is lost, false positive alarms (i.e. alarm generated without anomaly) and false negative alarms (i.e. alarm not generated in the event of an anomaly).

Description : 1. Various parameters are given values.
2. Classifier and tracker objects are created, with the classification and tracking parameters.
3. Subtracted images are made using the image processing parameters.
4. classify() is called for each subtracted image.
5. track() is called for each subtracted image; the new vehicles found at step 4 are appended to the list of vehicles for tracking in subsequent frames.
6. If the result is satisfactory the parameters are saved, else go to 1.

5.4.4 Tertiary Level Use Case Diagram 1

Figure 24 Tertiary Level Use Case Diagram 1

Use case number : Ten

Use case name : set_background_frame

UCD : Tertiary Level Use Case Diagram 1

Parent use case : tune_image_processing_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Provide the path of a background frame for subtraction.

Scope : Tune_parameters interface

Level : Tertiary

Pre condition : TMS Expert has seen all frames from the video
and knows the path of the background frame.

Post condition : Background frame read by TMS.

Trigger : TMS Expert chooses to set image-processing parameters.

Main success scenario : The background path provided points to an image file that has no vehicular traffic.

Failure end condition : Background path does not point to a valid image file.

Description : 1. TMS Expert enters the path of the background frame.
2. The image file corresponding to this path is read.

Use case number : Eleven

Use case name : set_threshold_subtraction

UCD : Tertiary Level Use Case Diagram 1

Parent use case : tune_image_processing_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : To set an upper limit for the permissible change in intensity of the background. If the change is more than this limit, motion is assumed.

Scope : Tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of set_background_frame.

Post condition : Value of the threshold used for background subtraction is set.

Trigger : Background frame is set.

Main success scenario : A trade-off is achieved, and the subtracted image is neither too black nor too white.

Failure end condition : The subtracted image contains substantially more region than the vehicles, or not all of the vehicle is captured by the black/white image.

Description : 1. The path to a sample frame is set.
2. The sample frame is read.
3. The threshold value is set.
4. The sample frame is subtracted against the background frame and a black and white image is obtained.
5. If the image is satisfactory, save the value of the threshold, else go to step 1.

Use case number : Twelve

Use case name : set_pattern_noise_removal

UCD : Tertiary Level Use Case Diagram 1

Parent use case : tune_image_processing_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set erosion and dilation patterns for the morphing operations whose aim is to eliminate noise.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of set_threshold_subtraction use case.

Post condition : Patterns for erosion and dilation are set.

Trigger : Threshold for background subtraction is set.

Main success scenario : Image obtained after erosion and dilation with the
patterns set is free of noise.

Failure end condition : A substantial amount of noise is left after noise removal.

Description : 1. The patterns for erosion and dilation are set.
2. According to the patterns set so far, the black and white image is filtered to remove noise.
3. If the results are satisfactory the pattern is saved, else go back to step 1.

5.4.5 Tertiary Level Use Case Diagram 2

Figure 25 Tertiary Level Use Case Diagram 2

Use case number : Thirteen

Use case name : set_gateway_boundary

UCD : Tertiary Level Use Case Diagram 2

Parent use case : tune_classification_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Identify the optimal gateway region

Scope : tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of tune_image_processing_parameters.

Post condition : Gateway coordinates are set.

Trigger : TMS Expert chooses to tune classification parameters.

Main success scenario : Gateway region is optimal for classification.

Failure end condition : none

Description : 1. Coordinates for the gateway region are set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module.
4. The vehicles classified are highlighted in the image presented to the TMS Expert.
5. TMS Expert can now choose to classify the next frame or stop processing.
6. In case processing is stopped, the current set of gateway parameters is saved.

Use case number : Fourteen

Use case name : set_area_noise

UCD : Tertiary Level Use Case Diagram 2

Parent use case : tune_classification_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set up upper limit for area of noise.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of set_gateway_boundary use case.

Post condition : Noise area for classification region is set.

Trigger : Gateway boundary coordinates have been set.

Main success scenario : All noise is filtered.

Failure end condition : none

Description : 1. The upper limit for noise is set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module.
4. The vehicles classified are highlighted in the image presented to the TMS Expert.
5. TMS Expert can now choose to classify the next frame or stop processing.
6. In case processing is stopped, the limit for noise is saved.

Use case number : Fifteen

Use case name : set_area_typeOfVehicle

UCD : Tertiary Level Use Case Diagram 2

Parent use case : tune_classification_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set the parameter for classifying the type of vehicle, i.e. area.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of set_area_noise use case

Post condition : Areas for the various types of vehicles in the classification region are set.

Trigger : Area of noise has been set.

Main success scenario : All vehicles are correctly identified

Failure end condition : none

Description : 1. Upper limits for the various types of vehicles are set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module.
4. The vehicles classified are highlighted in the image presented to the TMS Expert.
5. TMS Expert can now choose to classify the next frame or stop processing.
6. In case processing is stopped, the areas for the different types of vehicles are saved.

Use case number : Sixteen

Use case name : set_reduction_area_threshold

UCD : Tertiary Level Use Case Diagram 2

Parent use case : tune_classification_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set the limit by which the area of a vehicle must reduce before it can be classified. This works because once the vehicle has fully entered the frame, its area will start to decrease, and we can classify the vehicle.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of set_area_typeOfVehicle use case.

Post condition : The limit for reduction of the area of a vehicle in the gateway region is set.

Trigger : Areas of the different types of vehicles have been set.

Main success scenario : All vehicles that are fully present in gateway
region are classified.

Failure end condition : Area of vehicle does not change by the limit set in
gateway region.

Description : 1. The limit for reduction of the area of vehicles is set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module.
4. The vehicles classified are highlighted in the image presented to the TMS Expert.
5. TMS Expert can now choose to classify the next frame or stop processing.
6. In case processing is stopped, the reduction limit is saved.

5.4.6 Tertiary Level Use Case Diagram 3

Figure 26 Tertiary Level Use Case Diagram 3

Use case number : Seventeen

Use case name : set_pixel/distance_parameters

UCD : Tertiary Level Use Case Diagram 3

Parent use case : tune_real_time_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Identify relationship of pixels with actual distance.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : TMS Expert knows the pixel/distance information

Post condition : Pixel/distance information is saved in the system.

Trigger : Tuning of real time parameters.

Main success scenario : none

Failure end condition : Incorrect or invalid input.

Description : 1. TMS Expert enters the pixel to actual distance information.

Use case number : Eighteen

Use case name : set_safe_speed_limit

UCD : Tertiary Level Use Case Diagram 3

Parent use case : tune_real_time_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Identify the safe speed limit.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : Successful execution of set_pixel/distance_parameters use case

Post condition : Safe speed limit information is saved in the system.

Trigger : TMS Expert knows the safe speed limit information

Main success scenario : none

Failure end condition : Incorrect or invalid input.

Description : 1. TMS Expert enters the safe speed limit information.

5.4.7 Tertiary Level Use Case Diagram 4

Figure 27 Tertiary Level Use Case Diagram 4

Use case number : Nineteen

Use case name : set_template_increment

UCD : Tertiary Level Use Case Diagram 4

Parent use case : tune_tracking_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set the increment in the size of the template. If the increment is too small or too large, a lot of redundant processing needs to be done and track of the object may be lost.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : TMS Expert has an idea of the increment in the template's size.

Post condition : The increment in template size is set.

Trigger : Successful execution of the tune_real_time_parameters use case.

Main success scenario : Time taken for tracking is less.

Failure end condition : Track of object is lost.

Description : 1. The increment in template size is set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module; the whole image is not deleted.
4. The vehicles classified, along with the whole image from the previous step, are sent to the tracker object.
5. Vehicles successfully tracked in the whole image are highlighted and this image is shown to the TMS Expert.
6. TMS Expert can now choose to process the next frame or stop processing.
7. In case processing is stopped, the increment in template size is saved.

Use case number : Twenty

Use case name : set_horizon_limit

UCD : Tertiary Level Use Case Diagram 4

Parent use case : tune_tracking_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set the horizon limit such that tracking stops when a vehicle crosses this limit. This parameter avoids tracking vehicles that have a very small pixel area, for which proper capture by the camera is uncertain.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : TMS Expert has an idea of horizon limit.

Post condition : The horizon limit is set.

Trigger : Successful execution of set_template_increment use case.

Main success scenario : All vehicles with large pixel area are tracked.

Failure end condition : Bad tracking (tracking of noise instead of the vehicle, or unexplained loss of track of a vehicle), or loss of track of a vehicle with a reasonable pixel area.

Description : 1. The horizon limit is set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module; the whole image is not deleted.
4. The vehicles classified, along with the whole image from the previous step, are sent to the tracker object.
5. Vehicles successfully tracked in the whole image are highlighted and this image is shown to the TMS Expert.
6. TMS Expert can now choose to process the next frame or stop processing.
7. In case processing is stopped, the horizon limit is saved.

Use case number : Twenty-one

Use case name : set_noise_area

UCD : Tertiary Level Use Case Diagram 4

Parent use case : tune_tracking_parameters

Note : NA

Primary actor : TMS Expert

Secondary actor : none

Goal in context : Set the area of noise, such that tracking stops if a vehicle's pixel area becomes too small even before it reaches the horizon. This parameter also avoids the bad tracking condition in which a vehicle is matched with a noise pattern.

Scope : tune_parameters interface

Level : Tertiary

Pre condition : TMS Expert has an idea of the noise size.

Post condition : The noise size is set.

Trigger : Successful execution of set_horizon_limit use case.

Main success scenario : Bad tracking is avoided.

Failure end condition : Noise is also tracked, or the noise limit set corresponds only to the particular video and not to the camera position.

Description : 1. The noise limit is set.
2. Image processing operations yield a black and white image.
3. The image corresponding to the gateway region is passed to the classifier module; the whole image is not deleted.
4. The vehicles classified, along with the whole image from the previous step, are sent to the tracker object.
5. Vehicles successfully tracked in the whole image are highlighted and this image is shown to the TMS Expert.
6. TMS Expert can now choose to process the next frame or stop processing.
7. In case processing is stopped, the noise limit is saved.

5.4.8 Secondary Level Use Case Diagram 3

Figure 28 Secondary Level Use Case Diagram 3

Use case number : Twenty-two

Use case name : manual_analysis

UCD : Secondary Level Use Case Diagram 3

Parent use case : analyse_video

Note : NA

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : Manual anomaly detection

Scope : Alarm_analysis

Level : Secondary

Pre condition : Successful execution of set_video use case.

Post condition : Video is manually analysed

Trigger : Alarm Analyser chooses to manually analyse
video or failure of auto_analysis use case.

Main success scenario : none

Failure end condition : none

Description : 1. TMS plays the video for the Alarm Analyser.

Use case number : Twenty-three

Use case name : auto_analysis

UCD : Secondary Level Use Case Diagram 3

Parent use case : analyse_video

Note : NA

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : TMS raises alarms in case of anomalies, which are analysed by the Alarm Analyser for action.

Scope : Alarm_analysis

Level : Secondary

Pre condition : Successful execution of set_video use case and availability of a set of parameters for the particular video.

Post condition : Video is analysed and alarms if any are raised.

Trigger : Alarm Analyser opts for analysis of the video by TMS.

Main success scenario : All anomalies are identified and alarm raised for
all.

Failure end condition : Loss of track of many vehicles and many false positive and negative alarms, leading to the necessity of manual analysis.

Description : 1. Alarm Analyser provides the set of parameters
to be used for analysis.
2. TMS analyses video and gives alarms.

5.4.9 Tertiary Level Use Case Diagram 5

Figure 29 Tertiary Level Use Case Diagram 5

Use case number : Twenty-four

Use case name : view_clip

UCD : Tertiary Level Use Case Diagram 5

Parent use case : manual_analysis

Note : NA

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : Present the video clip

Scope : Alarm_analysis

Level : Tertiary

Pre condition : none

Post condition : Viewing of video.

Trigger : Successful execution of manual_analysis use case.

Main success scenario : none

Failure end condition : none

Description : 1. TMS plays the video.

5.4.10 Tertiary Level Use Case Diagram 6

Figure 30 Tertiary Level Use Case Diagram 6

Use case number : Twenty-five

Use case name : select_camera_position

UCD : Tertiary Level Use Case Diagram 6

Parent use case : auto_analysis

Note : NA

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : Identify the camera position that best suits the
given video.

Scope : Alarm Analysis

Level : Tertiary

Pre condition : Appropriate camera position parameters have been set by the TMS Expert.

Post condition : Selection of camera position parameters

Trigger : Successful execution of auto_analysis use case.

Main success scenario : Camera position set corresponds to video.

Failure end condition : Camera position set does not correspond to video.

Description : 1. Alarm Analyser is presented with a list of the camera positions available.
2. He/she selects the camera position.

Use case number : Twenty-six

Use case name : start_auto_analysis

UCD : Tertiary Level Use Case Diagram 6

Parent use case : auto_analysis

Note : Includes give_alarm use case

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : Analyse video on the basis of the camera position selected.

Scope : Alarm Analysis

Level : Tertiary

Pre condition : Successful execution of select_camera_position use case.

Post condition : Video is analysed and various alarms are known.

Trigger : Alarm Analyser starts the analysis.

Main success scenario : All anomalies present within scope of video are
identified.

Failure end condition : A large percentage (>40%) of anomalies go undetected.

Description : 1. The video source is decompressed to give coloured images.
2. A black and white image is created by image processing operations.
3. Classification and tracker modules are run; alarms are given if an anomaly is detected.

Use case number : Twenty-seven

Use case name : give_alarm

UCD : Tertiary Level Use Case Diagram 6

Parent use case : auto_analysis

Note : NA

Primary actor : Alarm Analyser

Secondary actor : none

Goal in context : Give description about anomaly detected by TMS

Scope : Alarm Analysis

Level : Tertiary

Pre condition : start_auto_analysis use case is running.

Post condition : Alarm Analyser is presented with information
about the anomaly.

Trigger : Anomaly found by the tracker or classification module.

Main success scenario : The anomaly information presented is comprehensible.

Failure end condition : none

Description : 1. The frame concerned with the anomaly is presented with the defaulter vehicle highlighted.
2. This information is viewed by the Alarm Analyser.

5.4.11 Integrated Use Case Diagram

Figure 31 Integrated Use Case Diagram

6 Design Document

6.1 Purpose

This document explains the design of TMS.

6.2 Reference
1. dotnetcoders.com
2. UML: A Beginner’s Guide, Jason T Roff

6.3 Activity Diagrams

6.3.1 Activity Diagram for give_video_path

Figure 32 Give_video_path

6.3.2 Activity Diagram for select_operation

Figure 33 Select_operation

6.3.3 Activity Diagram for set_background_frame

Figure 34 Set_background_frame activity

6.3.4 Activity Diagram for set_sample_frame

Figure 35 Set_sample_frame

6.3.5 Activity Diagram for set_threshold_subtraction

Figure 36 Set_threshold_subtraction

6.3.6 Activity Diagram for set_pattern_noise_removal

Figure 37 Set_pattern_noise_removal

6.3.7 Activity Diagram for set_gateway_boundary

Figure 38 Set_gateway_boundary

6.3.8 Activity Diagram for set_area_classification

Figure 39 Set_area_classification

6.3.9 Activity Diagram for set_reduction_area_threshold

Figure 40 Set_reduction_area_threshold

6.3.10 Activity Diagram for set_template_increment

Figure 41 Set_template_increment

6.3.11 Activity Diagram for set_horizon_limit

Figure 42 Set_horizon_limit

6.3.12 Activity Diagram for set_noise_area

Figure 43 Set_noise_area

6.3.13 Activity diagram for tracking

Figure 44 Tracking

6.3.14 Activity Diagram for classification

Figure 45 Classification

6.4 Class/Object Diagrams

6.4.1 Image classes hierarchy

Figure 46 Image Class Hierarchy

6.4.2 Image to object modelling

Figure 47 Image to object modeling

6.4.3 Object containment in object_list

Figure 48 Object containment in object_list

6.4.4 Bw_image class

Figure 49 Bw_image class

6.4.5 CBW_IMAGE class

Figure 50 CBW_IMAGE class

6.4.6 CCIMAGE class

Figure 51 CCIMAGE class

6.4.7 Colored_image class

Figure 52 Colored_image class

6.4.8 I_pattern class

Figure 53 I_pattern class

6.4.9 Image Class

Figure 54 Image Class

6.4.10 Object class

Figure 55 Object class

6.4.11 Object_list class

Figure 56 Object_list class

6.4.12 Object_node class

Figure 57 Object_node class

6.4.13 Parameters class

Figure 58 Parameters class

6.4.14 Part_image class

Figure 59 Part_image class

6.4.15 Pattern Class

Figure 60 Pattern Class

6.4.16 Self_tracking_image class

Figure 61 Self_tracking_image class

6.5 Sequence Diagrams

6.5.1 Make Subtracted Image

Figure 62 Make Subtracted Image Sequence Diagram

6.5.2 Classification Sequence Diagram

Figure 63 Classification Sequence Diagram

6.5.3 Tracking Sequence Diagram

Figure 64 Tracking Sequence Diagram

6.6 User Interface Dialogs

6.6.1 Select Video Dialog

Figure 65 Select Video Dialog

6.6.2 Select Operation Dialog

Figure 66 Select Operation Dialog

6.6.3 Set Image Processing Parameters Dialog

Figure 67 Set Parameters (Image Processing) Dialog

6.6.4 Set Classification Parameters Dialog

Figure 68 Set Parameters (Classification)

6.6.5 Set Tracking Parameters Dialog

Figure 69 Set Parameters (Tracking)

7 Implementation

7.1 Code snippets

7.1.1 Erosion
void bw_image::erode(pattern * p)
{
    if(color == NULL) return;
    int i,j,k;

    short * new_color = (short *)malloc(sizeof(short)*(height*width));

    int ii,jj,kk_i1,kk_p;
    bool erode_flag = false;

    // visit each pixel
    for(i=0,k=0;i<height;i++)
        for(j=0;j<width;j++,k++)
        {
            // if the pixel is black try to erode
            if(color[k] == BLACK)
            {
                // try to place the pattern above the pixel
                for(ii = 0;ii < p->height;ii++)
                {
                    // upper boundary pixels
                    if( (i + (ii - p->height/2)) < 0) continue;
                    // bottom boundary pixels
                    if( (i + (ii - p->height/2)) >= height ) continue;
                    for(jj = 0;jj < p->width;jj++)
                    {
                        // left boundary pixels
                        if( (j + (jj - p->width/2)) < 0 ) continue;
                        // right boundary pixels
                        if( (j + (jj - p->width/2)) >= width ) continue;

                        // kk_i1 and kk_p are the corresponding positions in
                        // the image and pattern arrays
                        kk_i1 = k - width*(p->height/2) - (p->width/2) + ii*width + jj;
                        kk_p = ii*p->width + jj;

                        // if kk_p is set in the pattern and kk_i1 is white, erode k
                        if(p->values[kk_p] == BLACK && (color[kk_i1] == WHITE))
                        {
                            erode_flag = true;
                            goto abc;
                        }
                    } // for jj ends
                } // for ii ends

abc:            if(erode_flag == true)
                {
                    new_color[k] = WHITE;
                    erode_flag = false;
                }
                else
                    new_color[k] = BLACK;
            } // end checking for BLACK
            else
                new_color[k] = WHITE;

        } // end visiting pixel

    // swap color arrays
    free(color);
    color = new_color;
}

7.1.2 Finding objects in the bw_image


// find objects from bw_image.
object_list * bw_image::find_objects(int noise_area,int left,int top)
{
    int i,j,k;
    int k1,k2,k3,k4,k5,k6,k7,k8,k9;
    int * label;

    // make the labels array
    label = makelabels();

    // initialize a pattern array
    i_pattern cur_patt[NOPATTERNS];

    for(i = 0;i < NOPATTERNS;i++)
    {
        cur_patt[i].free = 0;
        cur_patt[i].area = 0;
    }
    k = -1;

    // process the entire label array and the image
    for(i = 0;i < height;i++)
        for(j = 0;j < width;j++)
        {
            k++;
            if(label[k] != 0)
            {
                // increment the area count for that object by 1
                cur_patt[label[k]].area++;

                // compute the indices of the 8 neighbouring positions;
                // k5 remains as k
                k1 = k - width - 1;
                k2 = k - width;
                k3 = k - width + 1;
                k4 = k - 1;
                k5 = k;
                k6 = k + 1;
                k7 = k + width - 1;
                k8 = k + width;
                k9 = k + width + 1;

                // this if condition checks for the image corners being on
                if((i == 0 && j == 0) ||                  // upper left
                   (i == 0 && j == width-1) ||            // upper right
                   (i == height-1 && j == 0) ||           // bottom left
                   (i == height-1 && j == width-1))       // bottom right
                {
                    cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
                    continue;
                }

                // case for the upper boundary
                if((i == 0) &&
                   ((label[k5] == label[k6] && label[k5] == label[k8]
                     && label[k5] != label[k4])           // upper left corner
                    ||
                    (label[k5] == label[k4] && label[k5] == label[k8]
                     && label[k5] != label[k6])))         // upper right corner
                {
                    cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
                    continue;
                }

                // case for the lower boundary
                if((i == height-1) &&
                   ((label[k5] == label[k2] && label[k5] == label[k6]
                     && label[k5] != label[k4])           // bottom left corner
                    ||
                    (label[k5] == label[k2] && label[k5] == label[k4]
                     && label[k5] != label[k6])))         // bottom right corner
                {
                    cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
                    continue;
                }

                // case for the left boundary
                if((j == 0) &&
                   ((label[k5] == label[k6] && label[k5] == label[k8]
                     && label[k5] != label[k2])           // upper left corner
                    ||
                    (label[k5] == label[k2] && label[k5] == label[k6]
                     && label[k5] != label[k8])))         // bottom left corner
                {
                    cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
                    continue;
                }

                // case for the right boundary
                if((j == width-1) &&
                   ((label[k5] == label[k4] && label[k5] == label[k8]
                     && label[k5] != label[k2])           // upper right corner
                    ||
                    (label[k5] == label[k2] && label[k5] == label[k4]
                     && label[k5] != label[k8])))         // bottom right corner
                {
                    cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
                    continue;
                }

                // case for the other pixels not lying on the boundary
                if(i != 0 && j != 0 && i != height-1 && j != width-1)
                    if((// upper left
                        label[k5] == label[k6] && label[k5] == label[k8]
                        && label[k5] != label[k1] && label[k5] != label[k2]
                        && label[k5] != label[k4])
                       || // upper right
                       (label[k5] == label[k4] && label[k5] == label[k8]
                        && label[k5] != label[k2] && label[k5] != label[k3]
                        && label[k5] != label[k6])
                       || // bottom left
                       (label[k5] == label[k2] && label[k5] == label[k6]
                        && label[k5] != label[k4] && label[k5] != label[k7]
                        && label[k5] != label[k8])
                       || // bottom right
                       (label[k5] == label[k2] && label[k5] == label[k4]
                        && label[k5] != label[k6] && label[k5] != label[k8]
                        && label[k5] != label[k9]))
                    {
                        cur_patt[label[k]].kpoints[cur_patt[label[k]].free++] = k;
                    }
            }
        }
    if(label != NULL) free(label);

    object_list * cur_object = new object_list();

    // of all the patterns found, make into objects those having
    // area > noise_area
    for(i = 0;i < NOPATTERNS;i++)
    {
        if(cur_patt[i].area > noise_area)
        {
            cur_object->add(cur_patt[i],height,width,left,top);
        }
    }
    return cur_object;
}

7.1.3 Incrementing the template


// increase the template size by increment according to the sides information.
void part_image::incr_temp(int increment, int sides)
{
    if(sides == 0) return;

    // get new coordinates (each mask test is parenthesised so that the
    // bitwise & is evaluated before the comparison)
    bottom += (((sides&BOTTOM) == BOTTOM) ? increment : 0);
    right  += (((sides&RIGHT)  == RIGHT)  ? increment : 0);
    top    -= (((sides&TOP)    == TOP)    ? increment : 0);
    left   -= (((sides&LEFT)   == LEFT)   ? increment : 0);

    // check boundaries
    bottom = (bottom >= parent->height-1) ? parent->height-1 : bottom;
    right  = (right  >= parent->width-1)  ? parent->width-1  : right;
    top    = (top < 0) ? 0 : top;
    left   = (left < 0) ? 0 : left;

    // set other parameters
    height = bottom - top;
    width  = right - left;
    if(color != NULL) free(color);
    color = (short *) malloc(sizeof(short)*height*width);
    int i,j,k,k_new = 0;

    // copy color information from the parent image.
    for(i = top; i < bottom; i++)
        for(j = left; j < right; j++,k_new++)
        {
            k = i*parent->width + j;
            color[k_new] = parent->color[k];
        }
}

7.2 Deliverables

1. The TMS software.
2. AVI splitter tool (trial version) for extraction of frames.
3. At least 2 video files successfully processed using TMS.
4. User's manual.

8 Conclusion and Extensions

8.1 Conclusion

The prototype developed successfully tracks vehicles within acceptable limits of error. Vehicles are classified into three categories: lorry, car and two-wheeler. An alarm is raised in situations of anomalous behaviour.

8.2 Extensions

The software may be extended to provide the following features:

1. Processing of on-line video.
2. Consideration of the shadows of vehicles on roads.
3. Consideration of situations where two vehicles may cross.
4. Processing in spite of climatic changes.
5. Consideration of lighting effects due to day and night.
6. Consideration of vehicles in different lanes.

9 Appendix

9.1 Glossary

Alarm – Generated by TMS in case some anomaly is found. It helps to attract the attention of the Alarm Analyser.

Alarm Analyser – The person responsible for viewing alarms and taking appropriate action, like sending help in case a vehicle breakdown is detected.

Anomaly – The occurrence of an undesired or abnormal pattern in traffic flow, like a slow vehicle.

Area of pattern – The number of connected pixels that collectively form the
pattern.

Background Frame – The frame that does not have any vehicle on the road.

Bw_image – A Black and White image.

Camera position – Refers to the settings for a particular camera location. It includes all parameters that need to be fixed in order to analyse input from that position.

Classification – The process of identifying new vehicles entering the frame.

Classification Parameters – A set of parameters that define the various constants for classification.

Colored_image – A 256 color image.

Filling of Region Pattern – Pattern used for filling of holes. Refer to morphological operations in section 3.3.4, figure 3.

Frame – An image extracted from video.

Frame Rate – The number of frames taken per second. This parameter is a
characteristic of the camera.

Gateway – A portion of the image, characterised by the fact that all vehicles pass through this portion before coming into the main frame. The region is chosen such that almost all vehicles are fully in this region for 3-5 consecutive frames.

Gateway Boundary – The limits left, right, top, bottom that define the gateway
region.

Horizon Limit – A limit from the top of frame after which we can start to drop
references of the vehicles.

I_pattern – A region of a black and white image corresponding to a connected black region.

Image – A collection of pixels, giving some visual information.

Image Processing – The techniques that help to enhance the quality of images or
extract information from them.

Image Processing Parameters – A set of parameters that define constants required
for preparing an image for further processing. During this processing a colored
image is converted into a black and white one having only the vehicle information.

Part_image – Defines a portion of the image. Usually used to define the gateway
region or the specific region where the presence of a vehicle is predicted.

InfraRed Imaging – The technique of capturing visual information lying beyond the
visible spectrum by use of infrared rays.

Kpoints – The points in a pattern (found in a black and white image) corresponding
to the intersection of edges.

Labeled Image – A black and white image with each continuous region marked with
a different label.

Loop Detector – A loop of current-carrying wire buried underneath the surface of
the ground. It detects the presence of a vehicle above it.

Morphing Operations – Image processing operations; the basic operations are
erosion and dilation.

Noise – Black areas in the image that do not correspond to vehicles.

Noise Removal Pattern – Pattern used for removal of noise. Refer to morphological
operations in section 3.3.4, figure 2.

Object – Abstraction of a vehicle in TMS. Its major attributes are area, type of
vehicle and current location information.

Overlap – The phenomenon of intersection of two regions corresponding to the same
object but in different (often consecutive) frames.
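
During tracking, an overlap test amounts to checking whether two virtual
boundaries intersect. A minimal sketch follows, using the (top, left, bottom,
right) convention of this report; the struct and function names are
illustrative, not TMS identifiers.

// Two axis-aligned rectangles overlap exactly when their spans
// intersect on both the horizontal and the vertical axis.
struct rect { int top, left, bottom, right; };

bool rects_overlap(const rect &a, const rect &b)
{
    return a.left <= b.right && b.left <= a.right &&
           a.top <= b.bottom && b.top <= a.bottom;
}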

Parameters – A class used for defining all constants needed during processing.

Pattern – A square matrix representation of an image, used in morphing operations.
“1” indicates that the color at that pixel is black and “0” indicates white.
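
For illustration only (this is not one of the actual TMS patterns), a 3x3
cross-shaped pattern could be written as:

// Hypothetical 3x3 pattern: 1 = black pixel, 0 = white pixel.
short cross_pattern[3][3] = {
    { 0, 1, 0 },
    { 1, 1, 1 },
    { 0, 1, 0 }
};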

Pixel – Unit picture element that is uniquely visible and controllable on the
screen.

Sample Frame – The frame on basis of which image processing parameters are
set.

Segmentation – The process of isolating regions of interest from an image.

Speed – Ratio of the distance covered by a vehicle over time. The distance covered
may be in pixels or meters, and the granularity of time is defined by the frame rate.
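
As a sketch of this definition (the function name and pixel units are
assumptions, not TMS code):

#include <math.h>

// Speed in pixels per second: the displacement (dx, dy) between two
// consecutive frames, in pixels, multiplied by the frame rate.
double speed_pixels_per_sec(int dx, int dy, double frame_rate)
{
    return sqrt((double)(dx * dx + dy * dy)) * frame_rate;
}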

Subtraction of Frames – Given two frames x and y of the same dimensions, we
calculate the difference of corresponding pixels, take the absolute value of each
difference, and compare it to some threshold. If the result is greater, the
corresponding pixel is black, else white. In this manner the entire frame is
processed.
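
A minimal sketch of this operation on 8-bit grayscale buffers is given below;
the signature is an assumption, since the actual TMS code operates on its own
image classes.

#include <stdlib.h>

// Pixels whose absolute difference exceeds the threshold become
// black (0); all others become white (255).
void subtract_frames(const unsigned char *x, const unsigned char *y,
                     unsigned char *out, int n_pixels, int threshold)
{
    for(int k = 0; k < n_pixels; k++)
    {
        int diff = abs((int)x[k] - (int)y[k]);
        out[k] = (diff > threshold) ? 0 : 255;
    }
}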

Subtraction Threshold – The constant used for comparison during subtraction of
two frames.

Template – A region in the subtracted image with a high probability of the presence
of a vehicle. A template increases its dimensions along the required axis as the
region of probability grows. This process stops when the template contains the
complete vehicle.

TMS Expert – The person responsible for configuring the various parameters
corresponding to a camera position.

Tracking – The process of identifying two objects in consecutive frames as the same
vehicle.

Tracking Parameters – A set of parameters that define various constants required
for tracking.

Traffic Controller – A person responsible for manual analysis of video and taking
appropriate action in the event of an anomaly.

Traffic Surveillance – The process of keeping vigilance on roads, so as to identify
any anomalous behaviour.

Tune Parameters – The process of setting the camera position parameters.

Video – In the context of this project, a video is a set of consecutive images.

Video Provider – The person responsible for shooting the video clip and
subsequently extracting frames from it.

Virtual Boundary – The minimal rectangular boundary containing the complete
vehicle. This boundary is parallel to the edges of the frame.

