CHAPTER 1
INTRODUCTION
Traffic lights play an important role in urban traffic management. In order to control
traffic, it is necessary to obtain information about traffic parameters.
Real-time traffic counting and monitoring systems use traffic detectors. Several traffic
detection technologies are available. Traffic detectors can be:
Traffic detectors can output: classified traffic count, traffic volume, speed, density, and weight.
Commercially available in-roadway detectors are:
However, these sensors are expensive and need replacement if damaged. To overcome this
problem, a video image processing system is used to obtain traffic parameters.
Most commercially available Video Image Processing Systems (VIPS) are trip-wire systems,
which mimic the operation of loop detectors but do not track vehicles; i.e., they do not
identify individual vehicles as unique targets and follow their movements in time distinct
from other vehicles [1]. Such a system allows the user to specify several detection regions
in the video image; the system then looks for image intensity changes in those regions to
indicate vehicle presence.
Some commercial systems do track vehicles, the so-called ``third generation'' VIPS, e.g.
CMS Mobilizer, Eliop EVA, PEEK VideoTrak, Nestor TracVision, and Sumitomo IDET
(Chatziioanou et al., 1994; Klein and Kelley, 1996; MNDOT, 1997; Nihan et al., 1995).
Generally, these systems use region-based tracking, i.e., vehicles are segmented based on
movement. Unfortunately, if one moving target, including its shadow, occludes another, the
two targets may become merged together by the tracking software [2].
Recent evaluations of commercial VIPS found that the systems had problems with congestion,
high flow, occlusion, camera vibration due to wind, lighting transitions between night/day
and day/night, and long shadows linking vehicles together [3]. Chao et al. (1996) have
developed an algorithm to differentiate vehicles from shadows. On a larger scale, the FHWA
has sponsored a major research effort administered by the Jet Propulsion Laboratory (JPL) to
advance wide-area traffic detector technology [4]. For some years, Koller et al. and Beymer
et al. have been developing a vision-based vehicle tracking system. The system uses video
from wayside cameras and processes it curbside; the data is then transmitted in summary form
to a central location such as a Traffic Management Centre (TMC) for collation and
computation of multi-site parameters (e.g., link travel time).
For intelligent traffic light systems, the most common technique is the use of a fuzzy logic
controller. Fuzzy logic traffic light control is an alternative to conventional traffic
light control that can be used for a wider array of traffic patterns at an intersection [5].
Traffic flow is usually characterized by randomness and uncertainty, and fuzzy logic is
known to be well suited for modeling and controlling such problems. Applications of fuzzy
logic in traffic signal control have been made since the 1970s [10]. The first attempt to
design a fuzzy traffic controller was made in the 1970s by Pappis and Mamdani [2]. After
that, Niittymaki, Kikuchi, Chui and other researchers [4, 5] developed different algorithms
and logic controllers to normalize traffic flow.
Kelsey and Bisset [3] also designed a simulator for signal control of an isolated
intersection with one lane. Similar work was done by Niittymaki and Pursula [6], who
observed that the fuzzy controller reduces vehicle delay when traffic volume is heavy.
Niittymaki and Kikuchi [4] developed a fuzzy-based algorithm for pedestrians crossing the
road. Nakatsuyama, Nagahashi, and Nishizuka applied fuzzy logic to control two adjacent
intersections on an arterial with one-way movements. Fuzzy control rules were developed to
determine whether to extend or terminate the green signal for the downstream intersection
based on the upstream traffic [7].
Chui was the first to use fuzzy logic to control traffic at multiple intersections [5]. In
this attempt, only two-way streets were evaluated, without considering any turnings. In
recent years, Lin Zhang and Honglong Li [8] also worked on designing a fuzzy traffic
controller for oversaturated intersections.
Jee-Hyong Lee and Hyung Lee-Kwang [9] presented direction-varying traffic signal control,
but assumed that right-turn traffic flow does not disturb any other traffic flows at an
intersection. Many studies have presented algorithms for vision-based detection and
classification of vehicles and human activities in consecutive image frames of intelligent
traffic systems [3][4]. Others have developed the scoreboard algorithm and cubical model for
estimating stationary backgrounds and segmenting vehicle occlusion in monocular image
sequences for automated visual traffic surveillance [5–8].
The running model and running average algorithms are at the heart of the scoreboard
algorithm, and temporal difference methods overcome inter-frame and reference differencing,
which are problems for traffic detection [9–12]. Other algorithms have been proposed to
display traffic congestion and detect accidents at intersections using background extraction
[13][14], while a probabilistic line feature grouping algorithm has also been proposed to
analyze traffic flow and accumulate traffic lane lines from traffic surveillance videos for
intelligent traffic systems [15][16]. Generally, temporal differencing and background
subtraction are the two main approaches to segmenting moving foreground objects in
intelligent traffic applications.
1.2. Preamble
Traffic and vehicles, like many other human activities, become a part of their environment,
which they influence and transform to a degree and within a range that vary from project to
project. Frequently seeming to be in opposition, but not necessarily opposing, traffic and
its environment interrelate with a degree of complexity that makes the task of traffic
management difficult. The solution must be to find the golden mean by striking a balance
between divergent and sometimes contradictory goals.
Our day-to-day traffic light controller has minimal intelligence: it cannot judge vehicle
density. This causes many problems, such as vehicles waiting unnecessarily and increased
pollution. To prevent this, we have introduced an intelligent traffic controller that avoids
unnecessary waiting and manages traffic density.
The current traffic control system does not consider traffic density when making decisions.
Our intelligent traffic control system, however, takes the decision and provides the proper
delay depending on the situation. The main goals are improving traffic safety at the
intersection and minimizing delays.
In the present-day system, the traffic control system has minimal intelligence. It does not
consider traffic density when making decisions, and it does not adapt to the situation. To
avoid these problems, we use an intelligent traffic control system.
Several sensors and detectors can be used to obtain information about traffic parameters,
but these sensors are expensive and need replacement if damaged. Therefore, to overcome this
problem, we use image processing techniques to obtain information about vehicular traffic.
The fuzzy logic controller dynamically controls the traffic light timings to ensure smooth
flow of traffic and decrease traffic delays. The vehicle count obtained from the image
processing technique is given as input to the fuzzy logic controller, which determines
whether to extend or terminate the current green phase.
The current traffic control system has minimal intelligence: it does not consider traffic
density and does not take decisions depending on the situation. Even if there is no vehicle
in a particular direction, vehicles in other directions have to wait for their turn.
The proposed intelligent traffic control system avoids these problems. Depending on vehicle
density, it takes the decision and provides the proper delay to the traffic so that it can
move in an optimum time.
To optimize the traffic flow mechanism by using fuzzy logic and image processing
techniques.
1.3.6. Methodology
The methodology that has been followed for the design is:
The count is then given as input to the fuzzy logic controller; depending on the count, it
assigns a time to the corresponding input pattern.
CHAPTER 2
This chapter gives a brief introduction to machine vision and intelligent traffic control
systems, and provides an overview of image processing techniques and fuzzy logic.
A machine vision system is a device that collects data and forms an image, which is
interpreted by a computer to determine an appropriate position or to see an object.
Vision-based methods may face problems of serious illumination variation, shadows, and
moving clouds or trees. Colour vision provides the maximum amount of information about the
subject, which proves to be quite beneficial most of the time.
As we can see in the day-to-day system, the current traffic control system has minimal
intelligence. It does not consider traffic density and does not take decisions depending on
the situation. Even if there is no vehicle in a particular direction, vehicles still have to
wait for their turn. The intelligent traffic control system avoids these problems: depending
on vehicle density, it takes the decision and provides the proper delay so that traffic can
move in an optimum time.
An image is a two-dimensional function, f(x, y), where x and y are spatial coordinates and
the amplitude of f at any pair of coordinates indicates the intensity of the image at that point.
Digital image processing refers to the processing of digital images using digital computers.
Digital image processing offers two main advantages, the first being precision. In each
generation of a photographic process there is a loss of image quality, and electrical
signals are degraded by the physical limitations of the electrical components, whereas
digital image processing can essentially maintain exact precision.
The second advantage is its extreme flexibility: it enables one part of the image to be
magnified, another reduced, yet another rotated, and so on. The contrast and brightness may
be adjusted, and the adjustments may be irregular, discontinuous, or limited to some part of
the image. These options are not available with photographic and analog electrical image
processing techniques.
2.3.1 Techniques Involved in Image Processing
There are several techniques in image processing: gray conversion, binary conversion,
morphological operations, pixel connectivity, and counting of objects in an image.
Gray Conversion: This converts an RGB image into gray scale. It is used to eliminate
the hue and saturation information while retaining the luminance.
Binary Conversion: The gray difference image thus obtained is converted into a binary
image: if a gray difference value is greater than or equal to the threshold, the pixel
is set to white. The video is converted to binary because light intensities in videos
keep altering; without this, background subtraction and vehicle counting cannot be
done properly.
Morphological Operations: Morphology is the study of the shape and form of objects.
The Video and Image Processing Blockset software contains blocks that perform
morphological operations such as erosion, dilation, opening, and closing.
Morphological operations apply a structuring element to an input image, creating an
output image of the same size. In a morphological operation, the value of each pixel
in the output image is based on a comparison of the corresponding pixel in the input
image with its neighbours. By choosing the size and shape of the neighbourhood, we
can construct a morphological operation that is sensitive to specific shapes in the
input image. In dilation the value of the output pixel is the maximum value of all the
pixels in the input pixel's neighbourhood. In a binary image, if any of the pixels is set
to the value 1, the output pixel is set to 1. In erosion the value of the output pixel is
the minimum value of all the pixels in the input pixel's neighbourhood. In a binary
image, if any of the pixels is set to 0, the output pixel is set to 0.
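The dilation and erosion rules described above can be sketched for a binary image with a 3×3 structuring element. This is a Python sketch; treating out-of-image neighbours as 0 is one common border convention, assumed here, not necessarily the one the blockset uses.

```python
def _window(img, y, x, pad):
    """3x3 neighbourhood values around (y, x); out-of-image pixels read as pad."""
    h, w = len(img), len(img[0])
    return [img[j][i] if 0 <= j < h and 0 <= i < w else pad
            for j in range(y - 1, y + 2)
            for i in range(x - 1, x + 2)]

def dilate(img):
    """Output pixel = maximum over the 3x3 neighbourhood."""
    return [[max(_window(img, y, x, 0)) for x in range(len(img[0]))]
            for y in range(len(img))]

def erode(img):
    """Output pixel = minimum over the 3x3 neighbourhood."""
    return [[min(_window(img, y, x, 0)) for x in range(len(img[0]))]
            for y in range(len(img))]

blob = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
```

Opening is then erosion followed by dilation, `dilate(erode(img))`, and closing is dilation followed by erosion, `erode(dilate(img))`.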
Pixel Connectivity: Connectivity defines which pixels are connected to other pixels.
A set of pixels in a binary image that form a connected group is called an object or a
connected component.
Counting of Objects in an Image: In counting, the accuracy of the results depends on
the size of the objects.
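A minimal sketch of object counting via connected components, using 8-connectivity and a minimum-area filter; since, as noted above, accuracy depends on object size, small blobs are discarded as noise. The `min_area` value here is a hypothetical choice, not the project's calibration.

```python
def count_objects(binary, min_area=3):
    """Count 8-connected components whose area is at least min_area pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0      # flood-fill one component
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for j in range(cy - 1, cy + 2):
                        for i in range(cx - 1, cx + 2):
                            if 0 <= j < h and 0 <= i < w and binary[j][i] and not seen[j][i]:
                                seen[j][i] = True
                                stack.append((j, i))
                if area >= min_area:           # small blobs are treated as noise
                    count += 1
    return count

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 0]]
```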
Fuzzy logic offers soft computing the important concept of computing with words, and
provides a technique to deal with imprecision and information granularity. In general,
fuzzy logic provides an inference structure that enables appropriate human reasoning
capabilities. Fuzzy logic first translates the crisp input from the sensor into a
linguistic description, then evaluates the control strategy contained in the fuzzy
logic rules and translates the result back into a crisp value.
Fuzzification: Fuzzification translates a crisp value into a fuzzy value. Crisp means
TRUE or FALSE, whereas fuzzy means an intermediate value between TRUE and
FALSE.
Membership Function: A membership function is a curve that defines how each point in the
input space is mapped to a membership value (or degree of membership) between 0 and 1. The
Membership Function Editor is the tool that lets you display and edit all of the membership
functions associated with all of the input and output variables for the entire fuzzy
inference system. In our project we use the triangular membership function because it gives
a good starting point and proportional output.
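A triangular membership function can be sketched as below; the breakpoints are hypothetical, not the project's calibrated values.

```python
def trimf(x, a, b, c):
    """Membership of x in a triangle rising from a, peaking at b, falling to c (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A hypothetical "medium" traffic set peaking at 30 vehicles:
degree = trimf(20, 10, 30, 50)   # halfway up the rising edge
```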
Defuzzification: The interface for the output variables contains the defuzzification
procedures. Defuzzification is the conversion of a fuzzy quantity into a precise
quantity so that the computer can take controlling action. Since we used standard
membership functions in fuzzification, we used the centroid method to obtain a
proportional output.
Rule Editor: It is used to modify the rules of a FIS structure stored in a file and to
inspect the rules being used by a fuzzy inference system.
Rule Viewer: The Rule Viewer allows you to interpret the entire fuzzy inference
process at once. It also shows how the shape of certain membership functions
influences the overall result, and it is used to verify the practical results.
The fuzzy inference process comprises five parts: fuzzification of the input variables,
application of the fuzzy operator (AND or OR) in the antecedent, implication from the
antecedent to the consequent, aggregation of the consequents across the rules, and
defuzzification. These sometimes cryptic and odd names have very specific meanings that are
defined in the following steps.
The first step is to take the inputs and determine the degree to which they belong to each of
the appropriate fuzzy sets via membership functions. In Fuzzy Logic Toolbox software, the
input is always a crisp numerical value limited to the universe of discourse of the input
variable (in this case the interval between 0 and 10) and the output is a fuzzy degree of
membership in the qualifying linguistic set (always the interval between 0 and 1).
Fuzzification of the input amounts to either a table lookup or a function evaluation.
After the inputs are fuzzified, you know the degree to which each part of the antecedent is
satisfied for each rule. If the antecedent of a given rule has more than one part, the fuzzy
operator is applied to obtain one number that represents the result of the antecedent for that
rule. This number is then applied to the output function. The input to the fuzzy operator is
two or more membership values from fuzzified input variables. The output is a single truth
value.
Before applying the implication method, we must determine the rule's weight. Every rule has
a weight (a number between 0 and 1), which is applied to the number given by the
antecedent. Generally, this weight is 1 (as it is for this example) and thus has no effect at all
on the implication process. From time to time you may want to weight one rule relative to the
others by changing its weight value to something other than 1.
After proper weighting has been assigned to each rule, the implication method is
implemented. A consequent is a fuzzy set represented by a membership function, which
weights appropriately the linguistic characteristics that are attributed to it. The consequent is
reshaped using a function associated with the antecedent (a single number). The input for the
implication process is a single number given by the antecedent, and the output is a fuzzy set.
Implication is implemented for each rule. Two built-in methods are supported, and they are
the same functions that are used by the AND method: min (minimum), which truncates the
output fuzzy set, and prod (product), which scales the output fuzzy set.
Because decisions are based on the testing of all of the rules in a FIS, the rules must be
combined in some manner in order to make a decision. Aggregation is the process by which
the fuzzy sets that represent the outputs of each rule are combined into a single fuzzy set.
Aggregation only occurs once for each output variable, just prior to the fifth and final step,
defuzzification. The input of the aggregation process is the list of truncated output functions
returned by the implication process for each rule. The output of the aggregation process is
one fuzzy set for each output variable.
As long as the aggregation method is commutative (which it always should be), then the
order in which the rules are executed is unimportant. Three built-in methods are supported:
max (maximum)
probor (probabilistic OR)
sum (simply the sum of each rule's output set)
Step 5. Defuzzify
The input for the defuzzification process is a fuzzy set (the aggregate output fuzzy set) and
the output is a single number. As much as fuzziness helps the rule evaluation during the
intermediate steps, the final desired output for each variable is generally a single number.
However, the aggregate of a fuzzy set encompasses a range of output values, and so must be
defuzzified in order to resolve a single output value from the set.
Perhaps the most popular defuzzification method is the centroid calculation, which returns
the centre of area under the curve. There are five built-in methods supported: centroid,
bisector, middle of maximum (the average of the maximum value of the output set), largest of
maximum, and smallest of maximum.
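The five steps can be tied together in a miniature Mamdani controller for one input (vehicle count, 0–100) and one output (green time, 0–100). The rule base and membership sets below are illustrative assumptions, not the project's actual FIS; the sketch uses min implication, max aggregation, and centroid defuzzification over a sampled universe, as described above.

```python
def tri(x, a, b, c):
    """Triangular membership, allowing shoulder sets where a == b or b == c."""
    if x < a or x > c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x > b:
        return (c - x) / (c - b)
    return 1.0

# Hypothetical linguistic sets over a 0-100 universe for both variables.
IN_SETS = {"few": (0, 0, 50), "medium": (20, 50, 80), "many": (50, 100, 100)}
OUT_SETS = {"short": (0, 0, 50), "medium": (20, 50, 80), "long": (50, 100, 100)}
RULES = [("few", "short"), ("medium", "medium"), ("many", "long")]

def infer(count):
    """Fuzzify, min-implication, max-aggregation, then sampled centroid defuzzification.
    (The antecedents here have a single part, so no AND/OR operator step is needed.)"""
    xs = range(101)
    agg = [0.0] * 101
    for antecedent, consequent in RULES:
        strength = tri(count, *IN_SETS[antecedent])                 # step 1: fuzzify
        for k, x in enumerate(xs):
            clipped = min(strength, tri(x, *OUT_SETS[consequent]))  # step 3: implication
            agg[k] = max(agg[k], clipped)                           # step 4: aggregation
    den = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / den if den else 0.0  # step 5: centroid
```

For a count of 50 the "medium" rule fires fully and the symmetric output set defuzzifies to a green time of 50; larger counts yield longer times.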
The three plots across the top of the figure represent the antecedent and consequent of the
first rule. Each rule is a row of plots, and each column is a variable. The rule numbers are
displayed on the left of each row.
The first two columns of plots (the six yellow plots) show the membership functions
referenced by the antecedent, or the if-part of each rule.
The third column of plots (the three blue plots) shows the membership functions
referenced by the consequent, or the then-part of each rule.
The fourth plot in the third column of plots represents the aggregate weighted
decision for the given inference system.
CHAPTER 3
3.1. Background Subtraction
Background subtraction is the difference between the original image and a reference image.
It is a widely used approach for detecting moving objects in videos from static cameras.
There are several approaches to background subtraction, depending on the type of background
model used and the procedure used to update the model. In our approach, each pixel is
modelled as a mixture of Gaussians, and the Gaussian distributions of the adaptive mixture
model are then evaluated to determine which are background and which are foreground. Each
pixel is classified based on whether the Gaussian distribution that represents it most
effectively is considered part of the background model.
This results in a stable, real-time outdoor tracker which reliably deals with lighting
changes, repetitive motions from clutter, and long-term scene changes. Rather than
explicitly modelling the values of all the pixels as one particular type of distribution, we
simply model the values of a particular pixel as a mixture of Gaussians. Based on the
persistence and the variance of each of the Gaussians of the mixture, we determine which
Gaussians may correspond to background colours. Pixel values that do not fit the background
distributions are considered foreground. Our system adapts to deal robustly with lighting
changes, repetitive motions of scene elements, tracking through cluttered regions,
slow-moving objects, and introducing or removing objects from the scene. Slowly moving
objects take longer to be incorporated into the background, because their colour has a
larger variance than the background. Also, repetitive variations are learned, and a model
for the background distribution is generally maintained even if it is temporarily replaced
by another distribution, which leads to faster recovery when objects are removed. Our
backgrounding method contains two significant parameters: α, the learning constant, and T,
the proportion of the data that should be accounted for by the background.
If each pixel resulted from a particular surface under particular lighting, a single
Gaussian would be sufficient to model the pixel value while accounting for acquisition
noise. If only lighting changed over time, a single, adaptive Gaussian per pixel would be
sufficient. In practice, multiple surfaces often appear in the view frustum of a particular
pixel and the lighting conditions change. Thus, multiple, adaptive Gaussians are necessary.
We use a mixture of adaptive Gaussians to approximate this process. Each time the parameters
of the Gaussians are updated, the Gaussians are evaluated using a simple heuristic to
hypothesize which are most likely to be part of the “background process.” Pixel values that
do not match
one of the pixel’s “background” Gaussians are grouped using connected components. Finally,
the connected components are tracked from frame to frame using a multiple hypothesis
tracker.
We consider the values of a particular pixel over time as a “pixel process”. The “pixel
process” is a time series of pixel values, e.g. scalars for gray values or vectors for
colour images. At any time t, the known history of a particular pixel {x₀, y₀} is:

{X₁, …, X_t} = {I(x₀, y₀, i) : 1 ≤ i ≤ t}        (1)

where I is the image sequence.
If lighting changes occurred in a static scene, it would be necessary for the Gaussian to track
those changes. If a static object was added to the scene and was not incorporated into the
background until it had been there longer than the previous object, the corresponding pixels
could be considered foreground for arbitrarily long periods. This would lead to accumulated
errors in the foreground estimation, resulting in poor tracking behaviour. These factors
suggest that more recent observations may be more important in determining the Gaussian
parameter estimates. An important factor here is that a moving object has more variance than
a static one.
P(X_t) = ∑_{i=1}^{K} w_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})        (2)
where K is the number of distributions, w_{i,t} is an estimate of the weight of the i-th
Gaussian in the mixture at time t, μ_{i,t} is the mean value of the i-th Gaussian in the
mixture at time t, Σ_{i,t} is the covariance matrix of the i-th Gaussian in the mixture at
time t, and η is a Gaussian probability density function:
η(X_t, μ, Σ) = 1 / ((2π)^{n/2} |Σ|^{1/2}) · exp( −(1/2) (X_t − μ_t)^T Σ^{−1} (X_t − μ_t) )        (3)
Thus, the distribution of recently observed values of each pixel in the scene is characterized
by a mixture of Gaussians. A new pixel value will, in general, be represented by one of the
major components of the mixture model and used to update the model.
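For the scalar (gray-value) case, Eqs. (2) and (3) reduce to a weighted sum of one-dimensional Gaussian densities, which can be evaluated as in this sketch. The component weights and parameters below are illustrative, not measured values.

```python
import math

def gaussian_pdf(x, mu, var):
    """One-dimensional form of the density in Eq. (3)."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def mixture_density(x, components):
    """Eq. (2): weighted sum over the K Gaussians; components = [(w, mu, var), ...]."""
    return sum(w * gaussian_pdf(x, mu, var) for (w, mu, var) in components)

# Hypothetical pixel model: two stable background modes (road surface and
# shadow) plus one broad, low-weight mode left by passing vehicles.
components = [(0.6, 120.0, 25.0), (0.3, 90.0, 25.0), (0.1, 200.0, 400.0)]
```

A gray value near a heavy, low-variance mode (e.g. 120) has a much higher mixture density than one explained only by the broad foreground mode (e.g. 200).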
Each pixel process varies over time as the state of the world changes, so we use an
approximate method which essentially treats each new observation as a sample set of size 1
and uses standard learning rules to integrate the new data. Because there is a mixture model
for every pixel in the image, implementing an exact EM algorithm on a window of recent
data would be costly. Instead, we implement an on-line K-means approximation. Every new
pixel value, X_t, is checked against the existing K Gaussian distributions until a match is
found. A match is defined as a pixel value within 2.5 standard deviations of a distribution.
This threshold can be perturbed with little effect on performance. This is effectively a per
pixel or per distribution threshold. This is extremely useful when different regions have
different lighting, because objects which appear in shaded regions do not generally exhibit as
much noise as objects in lighted regions. A uniform threshold often results in objects
disappearing when they enter shaded regions.
If none of the K distributions match the current pixel value, the least probable distribution is
replaced with a distribution with the current value as its mean value, an initially high
variance, and low prior weight. The prior weights of the K distributions at time t, w_{k,t},
are adjusted as follows:

w_{k,t} = (1 − α) w_{k,t−1} + α M_{k,t}        (5)

where α is the learning rate and M_{k,t} is 1 for the model which matched and 0 for the
remaining models. After this approximation, the weights are re-normalized. 1/α defines the
time constant which determines the speed at which the distribution’s parameters change.
w_{k,t} is effectively a causal low-pass filtered average of the (thresholded) posterior
probability that pixel values have matched model k given observations from time 1 through t.
This is equivalent to the expectation of this value with an exponential window on the past
values.
The μ and σ parameters for unmatched distributions remain the same. The parameters of the
distribution which matches the new observation are updated as follows:

μ_t = (1 − ρ) μ_{t−1} + ρ X_t        (6)

σ_t² = (1 − ρ) σ²_{t−1} + ρ (X_t − μ_t)^T (X_t − μ_t)        (7)

ρ = α · η(X_t | μ_k, σ_k)        (8)
which is effectively the same type of causal low-pass filter as mentioned above, except that
only the data which matches the model is included in the estimation.
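The per-pixel update loop described above can be sketched for the scalar case as follows. This is a simplified reading of the method: it uses a fixed second learning rate ρ rather than the density-scaled ρ of Eq. (8), a common practical shortcut, and the learning-rate, initial-variance, and prior-weight values are assumptions.

```python
ALPHA = 0.01          # learning rate (assumed value)
RHO = 0.1             # fixed second learning rate (simplification of Eq. (8))
INIT_VAR = 900.0      # high initial variance for a new distribution (assumed)
LOW_WEIGHT = 0.05     # low prior weight for a new distribution (assumed)

def update_pixel(models, x):
    """models: list of [weight, mean, variance] Gaussians for one gray pixel."""
    matched = None
    for m in models:
        if abs(x - m[1]) <= 2.5 * m[2] ** 0.5:   # within 2.5 standard deviations
            matched = m
            break
    if matched is None:
        # No match: replace the least probable (lowest-weight) distribution.
        models.sort(key=lambda m: m[0])
        models[0] = [LOW_WEIGHT, float(x), INIT_VAR]
    else:
        for m in models:   # weight update: w <- (1 - alpha) * w + alpha * M
            m[0] = (1 - ALPHA) * m[0] + ALPHA * (1.0 if m is matched else 0.0)
        matched[1] = (1 - RHO) * matched[1] + RHO * x                      # mean update
        matched[2] = (1 - RHO) * matched[2] + RHO * (x - matched[1]) ** 2  # Eq. (7)
    total = sum(m[0] for m in models)
    for m in models:                              # re-normalize the weights
        m[0] /= total
    return models

models = [[0.7, 100.0, 25.0], [0.3, 180.0, 25.0]]
update_pixel(models, 102)   # near the first mode: matched and adapted
update_pixel(models, 250)   # matches neither mode: spawns a new low-weight mode
```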
One of the significant advantages of this method is that when something is allowed to become
part of the background, it doesn’t destroy the existing model of the background. The original
background colour remains in the mixture until it becomes the Kth most probable and a new
colour is observed. Therefore, if an object is stationary just long enough to become part of
the background and then it moves, the distribution describing the previous background still
exists with the same μ and σ², but a lower ω, and will be quickly re-incorporated into the
background.
As the parameters of the mixture model of each pixel change, we must determine which of the
Gaussians of the mixture are most likely produced by background processes. Here we consider
the accumulation of supporting evidence and the relatively low variance of the “background”
distributions when a static, persistent object is visible. In contrast, when a new object
occludes the background object, it will not, in general, match one of the existing
distributions, which will result in either the creation of a new distribution or the
increase in the variance of an existing distribution. Also, the variance of the moving
object is expected to remain larger than that of a background pixel until the moving object
stops. To do this, we need a method for deciding what portion of the mixture model best
represents background processes.
First, the Gaussians are ordered by the value of ω/σ. This value increases both as a
distribution gains more evidence and as the variance decreases. After re-estimating the
parameters of the mixture, it is sufficient to sort from the matched distribution towards the
most probable background distribution, because only the matched models relative value will
have changed. The above method allows us to identify foreground pixels in each new frame
while updating the description of each pixel’s process. These labelled foreground pixels can
then be segmented into regions by a two-pass, connected components algorithm. Because this
procedure is effective in determining the whole moving object, moving regions can be
characterized not only by their position, but size, moments, and other shape information.

Dept of Instrumentation Technology, MCE, Hassan
Machine Vision Based Intelligent Traffic Control System Using Fuzzy Logic
3.2. VEHICLE COUNTING
[Fig. 3.1. Vehicle counting flowchart: start → input video → for each pixel, check for a
match between the new value and the background model; if there is no match, the pixel is
considered foreground → check the foreground pixel’s neighbours → if the neighbouring pixels
are not white (count = 255), the pixel is considered noise and filtered out → repeat for
each frame until t = 0 → stop.]
CHAPTER 4
This chapter gives a brief explanation of the fuzzy inference system.
4.1. FUZZIFICATION
4.1.1. Input and Output Variables: The input variable, named route, is used to assign the
vehicle count to the fuzzy instruction set, whereas the output variable, named signal, is
used to give the corresponding time.
The figures above show the different membership functions. Fig. 4.1 shows the triangular
membership function: here the number of vehicles arriving is 30 and the corresponding time
is 50 sec. Fig. 4.2 shows the trapezoidal membership function: the number of vehicles
arriving is 30 and the corresponding time is 51.7 sec. Fig. 4.3 shows the Gaussian
membership function: the number of vehicles arriving is 30 and the corresponding time is
50.2 sec.
Comparing the above membership functions, the triangular membership function is the best: it
gives a good starting point and accurate results, so we have used the triangular membership
function in our project. It is shown in the figure below.
ROUTE SIGNAL
Zero None
Fewer Very low
Few Low
Medium Medium
High Long
Very high Very long
Table 4.1. Route indicates the number of vehicles arriving at the route side, and signal
indicates the corresponding time. If there are no arriving vehicles, the route is zero
and the corresponding time is none; the same follows for the others.
Fig. 4.1 shows the input triangular membership functions. We have taken six input triangular
membership functions: zero, fewer, few, medium, high, and very high. The figure indicates
how the input membership functions are taken.
Table 4.2 shows the output membership functions. We have taken 6 output membership
functions: none, very low, low, medium, long, and very long. All of them are triangular
membership functions, their range is from 0 to 100, and they indicate the corresponding time.
Figure 4.2 shows the output triangular membership functions (none, very low, low, medium,
long, and very long) and indicates how the output membership functions are taken.
Figure 4.3 shows the rule editor window, which is used to modify the rules of a FIS
structure stored in a file; it is user friendly. Users can define their own rules as per
their requirements and select the options. In our project we have used 6 input and output
membership functions, so there are 6 rules, and we have matched the respective inputs and
outputs.
Figure 4.4 shows the rule viewer window: the left side indicates the arrival of vehicles in
triangular form, and the right side indicates the time taken in triangular form.
4.2. DEFUZZIFICATION
Defuzzification is the conversion of a fuzzy quantity into a precise quantity for the computer
to take controlling actions.
Centroid method
Bisector method
Middle, Smallest and Largest of maximum method (MOM,SOM,LOM)
Picking a method: the centroid output is computed as follows.
1. Multiply the weighted strength of each output member function by the respective member
function centre point.
2. Add these values.
3. Divide this sum by the sum of the weighted member function strengths.
Formula:

Output = [ ∑_{i=1}^{N} (center_i · strength_i) ] / [ ∑_{i=1}^{N} strength_i ]

where N is the number of output members.
Figure 4.5 shows the overall function of the fuzzy logic: the number of vehicles arriving is
taken from the image processing, and the corresponding time is calculated using the centroid
method, which gives the time based on the corresponding vehicle count.
CHAPTER 5
In our project, we first convert the RGB image into a grayscale image, which removes the
hue and saturation information while retaining the luminance. After grayscale conversion, we
perform background subtraction.
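The grayscale conversion step can be sketched as below, using the standard luminance weights (the same weighting used by MATLAB's rgb2gray); this is a minimal pure-Python illustration rather than the project's actual toolbox call:

```python
def rgb_to_gray(pixel):
    """Convert one (R, G, B) pixel to a luminance value using the
    standard weights 0.2989 R + 0.5870 G + 0.1140 B."""
    r, g, b = pixel
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

# A tiny 2x2 RGB frame: red, green, blue, and white pixels.
frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]

gray = [[rgb_to_gray(p) for p in row] for row in frame]
```

Green contributes most to luminance and blue least, so a pure-green pixel maps to a brighter gray value than a pure-blue one.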
Fig. 5.1 shows the result of background subtraction. The system deals with lighting changes,
repetitive motion of scene elements, tracking through cluttered regions, slow-moving
objects, and objects being introduced into or removed from the scene. If there is a slow-
moving vehicle in the video, the system takes more time to absorb it into the background,
because the learning parameter of the system is very low. If a vehicle enters a frame and
stays there long enough, after a certain period the system considers it background instead of
foreground, because moving vehicles have a larger variance than the background.
If a stationary vehicle that has been absorbed into the background moves after some time, it
is again considered foreground instead of background; this is due to the updating of the
mean and variance values after the foreground is detected in each frame. After
differentiating between foreground and background, any dust or cluttered particles are
removed by the Gaussian filter, which checks the foreground neighbours of each pixel: only
if a pixel has a minimum of eight foreground neighbours is it considered part of an object. In
this way we get an accurate result.
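The per-pixel background model and the eight-neighbour cleanup described above can be sketched as follows. The learning rate ALPHA, the threshold K, and the exact update order are assumptions based on the description, not the project's actual code:

```python
import math

ALPHA = 0.01   # low learning rate: slow vehicles are absorbed slowly
K = 2.5        # foreground if |x - mean| > K * standard deviation

def update_pixel(x, mean, var):
    """Classify one pixel against its running mean/variance, then
    update the model so the background adapts over time."""
    fg = abs(x - mean) > K * math.sqrt(var)
    mean = (1 - ALPHA) * mean + ALPHA * x
    var = (1 - ALPHA) * var + ALPHA * (x - mean) ** 2
    return fg, mean, max(var, 1e-6)

def clean_mask(mask):
    """Keep a foreground pixel only if all eight of its neighbours
    are foreground; isolated specks are dropped as noise."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if mask[i][j]:
                nbrs = sum(mask[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                out[i][j] = nbrs >= 8
    return out
```

Because the model keeps updating after classification, a pixel that stays constant for many frames drifts into the background, and a background pixel that suddenly changes is flagged as foreground again, matching the behaviour described above.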
One significant advantage of this method is that when something is allowed to become part
of the background, it does not destroy the existing model of the background. The background
subtraction result can be clearly observed in the figure below.
The proposed traffic control system gives an approximate count of the vehicles in each frame
by considering the area of each object using the bounding box method. Fig. 5.2 shows the
result of the vehicle count.
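The bounding-box counting step can be sketched with a simple connected-component pass over the cleaned foreground mask; the minimum-area threshold MIN_AREA is a hypothetical parameter standing in for the project's vehicle-size filter:

```python
MIN_AREA = 4  # hypothetical minimum bounding-box area for a vehicle

def count_vehicles(mask):
    """Count foreground components whose bounding box is large enough."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one component, tracking its bounding box.
                stack = [(i, j)]
                seen[i][j] = True
                top, bot, left, right = i, i, j, j
                while stack:
                    y, x = stack.pop()
                    top, bot = min(top, y), max(bot, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if (bot - top + 1) * (right - left + 1) >= MIN_AREA:
                    count += 1
    return count
```

Filtering by bounding-box area is what keeps residual noise specks from being counted as vehicles, at the cost of missing objects smaller than the threshold.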
Fig.5.1. Number of vehicles arrived from the route direction is 1: The corresponding time for vehicle passage is 9.54 sec
Fig.5.2. Number of vehicles arrived from the route direction is 2: The corresponding time for vehicle passage is 11.9 sec
Fig.5.3. Number of vehicles arrived from the route direction is 3: The corresponding time for vehicle passage is 13.8 sec
Fig.5.4. Number of vehicles arrived from the route direction is 4: The corresponding time for vehicle passage is 15.2 sec
Fig.5.5. Number of vehicles arrived from the route direction is 5: The corresponding time for vehicle passage is 16.4 sec
Fig.5.6. Number of vehicles arrived from the route direction is 6: The corresponding time for vehicle passage is 17.4 sec
Fig.5.7. Number of vehicles arrived from the route direction is 7: The corresponding time for vehicle passage is 18.1 sec
Fig.5.8. Number of vehicles arrived from the route direction is 8: The corresponding time for vehicle passage is 18.8 sec
Fig.5.9. Number of vehicles arrived from the route direction is 9: The corresponding time for vehicle passage is 19.3 sec
Fig.5.10. Number of vehicles arrived from the route direction is 10: The corresponding time for vehicle passage is 19.6 sec
Fig.5.11. Number of vehicles arrived from the route direction is 11: The corresponding time for vehicle passage is 19.9 sec
Fig.5.12. Number of vehicles arrived from the route direction is 12: The corresponding time for vehicle passage is 20 sec
Fig.5.13. Number of vehicles arrived from the route direction is 13: The corresponding time for vehicle passage is 22.3 sec
Fig.5.14. Number of vehicles arrived from the route direction is 14: The corresponding time for vehicle passage is 24.1 sec
Fig.5.15. Number of vehicles arrived from the route direction is 15: The corresponding time for vehicle passage is 25.6 sec
Fig.5.16. Number of vehicles arrived from the route direction is 16: The corresponding time for vehicle passage is 27.3 sec
Fig.5.17. Number of vehicles arrived from the route direction is 17: The corresponding time for vehicle passage is 28.7 sec
Fig.5.18. Number of vehicles arrived from the route direction is 18: The corresponding time for vehicle passage is 30 sec
Fig.5.19. Number of vehicles arrived from the route direction is 19: The corresponding time for vehicle passage is 31.3 sec
Fig.5.20. Number of vehicles arrived from the route direction is 20: The corresponding time for vehicle passage is 32.7 sec
Fig.5.21. Number of vehicles arrived from the route direction is 21: The corresponding time for vehicle passage is 34.2 sec
Fig.5.22. Number of vehicles arrived from the route direction is 22: The corresponding time for vehicle passage is 35.9 sec
Fig.5.23. Number of vehicles arrived from the route direction is 23: The corresponding time for vehicle passage is 37.7 sec
Fig.5.24. Number of vehicles arrived from the route direction is 24: The corresponding time for vehicle passage is 40 sec
Fig.5.25. Number of vehicles arrived from the route direction is 25: The corresponding time for vehicle passage is 42.3 sec
Fig.5.26. Number of vehicles arrived from the route direction is 26: The corresponding time for vehicle passage is 44.1 sec
Fig.5.27. Number of vehicles arrived from the route direction is 27: The corresponding time for vehicle passage is 45.8 sec
Fig.5.28. Number of vehicles arrived from the route direction is 28: The corresponding time for vehicle passage is 47.3 sec
Fig.5.29. Number of vehicles arrived from the route direction is 29: The corresponding time for vehicle passage is 48.7 sec
Fig.5.30. Number of vehicles arrived from the route direction is 30: The corresponding time for vehicle passage is 50 sec
Fig.5.31. Number of vehicles arrived from the route direction is 31: The corresponding time for vehicle passage is 51.3 sec
Fig.5.32. Number of vehicles arrived from the route direction is 32: The corresponding time for vehicle passage is 52.7 sec
Fig.5.33. Number of vehicles arrived from the route direction is 33: The corresponding time for vehicle passage is 54.2 sec
Fig.5.34. Number of vehicles arrived from the route direction is 34: The corresponding time for vehicle passage is 55.9 sec
Fig.5.35. Number of vehicles arrived from the route direction is 35: The corresponding time for vehicle passage is 57.7 sec
Fig.5.36. Number of vehicles arrived from the route direction is 36: The corresponding time for vehicle passage is 60 sec
Fig.5.37. Number of vehicles arrived from the route direction is 37: The corresponding time for vehicle passage is 62.3 sec
Fig.5.38. Number of vehicles arrived from the route direction is 38: The corresponding time for vehicle passage is 64.1 sec
Fig.5.39. Number of vehicles arrived from the route direction is 40: The corresponding time for vehicle passage is 67.3 sec
The two plots across the top of the figure represent the antecedent and consequent of the first
rule. Each rule is a row of plots, and each column is a variable. The rule numbers are
displayed on the left of each row. The first column of plots (the six yellow plots) shows the
membership functions referenced by the antecedent, or if-part, of each rule. The second
column of plots (the six blue plots) shows the membership functions referenced by the
consequent, or then-part, of each rule. The seventh plot in the second column represents the
aggregate weighted decision for the given inference system. From the above analysis we
conclude that as the number of vehicles increases, the corresponding time for which the
green signal is ON also increases.
Table 5.1.2: Corresponding time at different values of the input variable (route side)
CONCLUSION
FUTURE SCOPE
REFERENCES
A.BOOKS
B.RESEARCH PAPERS
[1] Niittymäki, J., Mäenpää, M. (2001). The role of fuzzy logic public transport priority in
traffic signal control. Traffic Engineering and Control, International Journal of Traffic
Management and Transportation Planning, January 2001, Hemming Group Ltd. By
permission.
[2] Pappis, C. P., and E. H. Mamdani. A Fuzzy Logic Controller for a Traffic Junction. IEEE
Transactions on Systems, Man, and Cybernetics, Vol. SMC-7, No. 10, October 1977, pp.
707-717.
[3] Kelsey, R. L., and K. R. Bisset. Simulation of Traffic Flow and Control Using Fuzzy and
Conventional Methods. Fuzzy Logic and Control: Software and Hardware Applications,
Prentice Hall, Englewood Cliffs, New Jersey, 1993, pp. 262-278.
[4] Niittymaki, J., and S. Kikuchi. Application of Fuzzy Logic to the Control of a Pedestrian
Crossing Signal. In Transportation Research Record: Journal of the Transportation Research
Board, No. 1651, TRB, National Research Council, Washington, D.C., 1998, pp. 30-38.
[5] Chiu, S. Adaptive Traffic Signal Control Using Fuzzy Logic. Proceedings of the IEEE
Intelligent Vehicles Symposium, 1992, pp. 98-107.
[6] Niittymaki, J., and M. Pursula. Signal Control Using Fuzzy Logic. Fuzzy Sets and
Systems, Vol. 116, 2000, pp. 11-22.
[7] Nakatsuyama, M., H. Nagahashi, and N. Nishizuka. Fuzzy Logic Phase Controller for
Traffic Junctions in the One-Way Arterial Road. Proceedings of the IFAC Ninth Triennial
World Congress, 1984, pp. 2865-2870.
[8] Li, H., P. D. Prevedouros, and L. Zhang. Signal Control for Oversaturated Intersections
Using Fuzzy Logic. Submitted for presentation at the 2005 Annual Meeting of the TRB and
publication in the Transportation Research Record.
[9] Hong Wei, Wang Yong, Mu Xuanqin, and Wu Yan. A Cooperative Fuzzy Control
Method for Traffic Lights. 2001 IEEE Intelligent Transportation Systems Conference
Proceedings, Oakland (CA), USA, August 25-29, 2001.
[10] Chiu, C. C., Wang, C. Y., Ku, M.Y., & Lu, Y. B. (2006). Real-time Recognition and
Tracking System of Multiple Vehicles. IEEE International Conference on Intelligent Vehicles
Symposium, 478-483.
[11] Ku, M. Y., Chiu, C. C., Chen, H. T., & Hong, S. H. (2008). Visual Motorcycle Detection
and Tracking Algorithm. WSEAS Trans. on Electronics, 5(4), 121-131.
[12] Haritaoglu, I., Harwood, D., & Davis, L. S. (2000). W4: real-time surveillance of people
and their activities. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8), 809-830.
[13] Gupte, S., Masoud, O., Martin, R. F. K., & Papanikolopoulos, N. P. (2002). Detection
and classification of vehicles. IEEE Trans. Intelligent Transportation Systems, 3(1), 37-47.
[14] Lai, A. H. S., & Yung, N. H. C. (1998). A fast and accurate scoreboard algorithm for
estimating stationary backgrounds in an image sequence. In Proc. IEEE Symp. Circuits and
Systems, 4(31), 241-244.
[15] Pang, C. C. C., Lam, W. W. L., & Yung, N. H. C. (2003). A novel method for handling
vehicle occlusion in visual traffic surveillance. In Proc. SPIE Conf. SPIE-IS&T Electronic
Imaging, 5014, 437-447.
[16] Pang, C. C. C., Lam, W. W. L., & Yung, N. H. C. (2004). A novel method for resolving
vehicle occlusion in a monocular traffic-image sequence. IEEE Trans. Intelligent
Transportation Systems, 5(3), 129–141.
[17] Pang, C. C. C., Lam, W. W. L., & Yung, N. H. C. (2007). A method for vehicle count in
the presence of multiple-vehicle occlusions in traffic images. IEEE Trans. on Intelligent
Transportation Systems, 8(3), 441-459.
[18] Anderson, C., Burt, P., & Wal, G. V. D. (1985). Change detection and tracking using
pyramid transformation techniques. In Proc. SPIE Conf. Intelligent Robots and Computer
Vision, 579, 72-78.
[19] Ali, A. T., & Dagless, E. L. (1992). Alternative practical methods for moving object
detection. In Proc. IEEE Conf. Image Processing and its Applications, April, 77-80.
[20] Rosin, P. L., & Ellis, T. (1995). Image difference threshold strategies and shadow
detection. In Proc. British Machine Vision Conference, 347-356.
[21] Li, G., Zeng, R., & Lin, L. (2006). Moving target detection in video monitoring
system. In Proc. WCICA Conf. Intelligent Control and Automation, 2, 9778-9781.
[22] Arth, C., Bischof, H., & Leistner, C. (2006). TRICam—an embedded platform for
remote traffic surveillance. In Proc. Computer Vision and Pattern Recognition Workshop
Conf. Digital Object Identifier, June, 125.
[23] Wu, B.-F., Juang, J.-H., Tsai, P.-T., Chang, M.-W., Fan, C.-J., Lin, S.-P., Wu, J. Y.-J.,
& Lee, H. (2007). A new vehicle detection approach in traffic jam conditions. In Proc. IEEE
Symp. Computational Intelligence in Image and Signal Processing, April, 1-6.
[24] Kim, Z., & Malik, J. (2003). Fast vehicle detection with probabilistic feature grouping
and its application to vehicle tracking. In Proc. IEEE Conf. Computer Vision, 1, 524-531.
[25] Hsieh, J.-W., Yu, S.-H., Chen, Y.-S., & Hu, W.-F. (2006). Automatic traffic surveillance
system for vehicle tracking and classification. IEEE
[26] Stephen Chiu and Sujeet Chand. "Self-Organizing Traffic Control via Fuzzy Logic",
Proc. 32nd IEEE Conf. on Decision & Control, San Antonio, TX.
[27] Klein, L., Kelley, M. (Hughes Aircraft Company), 1996. Detection Technology for
IVHS: Final Report, FHWA Report No. FHWA-RD-95-100.
URL: www.mathworks.com