1. Introduction
Over the last decade, TCP (Transmission Control Protocol)
congestion control [1] has been used to adaptively control the
rates of individual connections sharing IP (Internet Protocol)
network links. However, the TCP congestion control algorithm
over current drop-tail networks has a serious drawback: a
TCP source reduces its transmission rate only after losing
packets. Therefore, even with techniques such as congestion
avoidance, slow start, fast retransmit, and fast recovery [9],
the performance of TCP congestion control over current
drop-tail networks is inadequate in a heavily loaded network.
Active queue management has been
proposed as a solution for preventing packet loss due to buffer
overflow. RED (Random Early Detection) [2], an active queue
management algorithm, was recommended by the IETF for
deployment in IP routers/networks [3]. The basic idea behind
an active queue management algorithm is to convey
congestion notification early enough to the senders, so that
senders are able to reduce the transmission rates before the
queue overflows and any sustained packet loss occurs. It is
now widely accepted that a RED-controlled queue performs
better than a drop-tail queue. However, the inherent design of
RED makes it difficult to parameterize RED queues to give
good performance under different network scenarios. Several
algorithms, like SRED [4] and BLUE [5], discard packets with
a load-dependent probability whenever the queue buffer in a
router appears to be congested.
In this paper we describe the Dynamic-RED (DRED)
active queue management algorithm that is proposed in [7].
[Block diagram: the DRED control loop. The control target T and the feedback signal q (the queue size) form the actuating (error) signal e = T - q; the controller produces the control signal (manipulated variable) p_d, the packet-drop probability; disturbances (perturbations) d act on the process, whose controlled output q is fed back through a unit delay z^{-1}. B denotes the buffer size.]
The drop probability p_d is updated at each sampling instant as

p_d(n) = min{ max[ p_d(n-1) - (alpha/B) e^(n), 0 ], 1 },   (2)

where e^(n) is the filtered error signal, alpha is a control gain, and B is the buffer size.
The DRED controller operates at discrete sampling instants, as follows:
1. Initialize: set the timer to the sampling interval, n = 0, p_d(n) = 0, e^(n) = 0.
2. When the timer expires, reset the timer, set n = n + 1, sample the queue size q(n), and compute the error signal e(n) = T - q(n).
3. Filter the error: e^(n) = (1 - beta) e^(n-1) + beta e(n), where beta is the filter gain; for the first sample, e^(n) = e(n).
4. Update the drop probability p_d(n) according to equation (2).
5. Drop arriving packets with probability p_d(n), but only while the queue size is at or above the no-drop threshold L = 0.9T.
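The steps above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the parameter values (alpha, beta, the 586-packet buffer, and the target T) are assumed here purely for the example.

```python
# Minimal sketch of one DRED sampling step (not the authors' implementation).
# alpha, beta, B and T below are assumed example values, not the paper's defaults.

def dred_update(p_d, e_hat_prev, q, T, B, alpha, beta):
    e = T - q                                   # error signal e(n) = T - q(n)
    e_hat = (1 - beta) * e_hat_prev + beta * e  # EWMA-filtered error
    p = p_d - (alpha / B) * e_hat               # queue above target (e < 0) pushes p_d up
    return min(max(p, 0.0), 1.0), e_hat         # clamp to [0, 1] as in equation (2)

# A queue held above the target steadily raises the drop probability:
p, e_hat = 0.0, 0.0
for _ in range(200):
    p, e_hat = dred_update(p, e_hat, q=400, T=293, B=586, alpha=0.005, beta=0.002)
```

Because the error is filtered before it adjusts p_d, short bursts move the drop probability only slightly, while a sustained mismatch between q and T accumulates.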
For RED, the packet-drop probability grows with the average queue size avg and with count, the number of packets accepted since the last drop:

P_a = P_b / (1 - count * P_b),   (4)

where P_b = max_p (avg - min_th)/(max_th - min_th), and max_p is the maximum drop probability.
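Equation (4) can be sketched as follows; the threshold and max_p values used here are illustrative assumptions, not recommended settings.

```python
# Sketch of RED's drop decision per equation (4) (no "gentle" variant).
# min_th, max_th and max_p defaults are illustrative, not recommended values.

def red_drop_prob(avg, count, min_th=118, max_th=352, max_p=0.1):
    if avg < min_th:
        return 0.0                                        # never drop below min_th
    if avg >= max_th:
        return 1.0                                        # drop everything above max_th
    p_b = max_p * (avg - min_th) / (max_th - min_th)      # base probability P_b
    return p_b / (1.0 - count * p_b)                      # P_a grows with count since last drop

print(red_drop_prob(235, count=0))  # -> 0.05 (halfway between the thresholds)
```

Dividing by (1 - count * P_b) raises the probability as more packets get through undropped, which spreads drops more evenly over time.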
picked up from the list for each subsequent packet arrival, and
its content is compared with the source and destination of the
new packet. If there is a match, Hit is set to one. Otherwise,
Hit is set to zero, and with a certain probability p the content of
this zombie may be replaced by the source and destination of
the new packet. The hit frequency P(t) is updated after
each packet arrival, and can be estimated using the
exponentially weighted moving average filter of equation (8):
P(t) = (1 - alpha) P(t-1) + alpha Hit(t),   (8)

where alpha = p/M, p is the probability of updating the zombie
list when Hit is zero, and M is the number of zombies in the
list. It was shown that P(t)^{-1} is a good estimate of the number
of active connections.
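The zombie-list estimator described above can be sketched as follows; the list size M, update probability p, and the synthetic 10-flow traffic are assumptions made for the example.

```python
# Sketch of SRED's zombie list and hit-frequency filter (equation (8)).
# M, p and the synthetic 10-flow traffic below are assumptions for illustration.
import random

class ZombieList:
    def __init__(self, M=1000, p=0.25, seed=42):
        self.M, self.p = M, p
        self.alpha = p / M               # EWMA gain alpha = p/M (equation (8))
        self.zombies = [None] * M        # each zombie holds a (source, destination) pair
        self.P = 1.0                     # hit-frequency estimate P(t)
        self.rng = random.Random(seed)

    def on_packet(self, flow):
        i = self.rng.randrange(self.M)             # pick a zombie at random
        hit = 1 if self.zombies[i] == flow else 0  # compare with the packet's flow
        if not hit and self.rng.random() < self.p:
            self.zombies[i] = flow                 # on a miss, overwrite with probability p
        self.P = (1 - self.alpha) * self.P + self.alpha * hit   # equation (8)
        return hit

zl = ZombieList()
flows = [("10.0.0.%d" % i, "sink") for i in range(10)]
for _ in range(200000):
    zl.on_packet(flows[zl.rng.randrange(10)])
```

Feeding 200,000 synthetic arrivals from 10 flows drives 1/P(t) near 10, illustrating why P(t)^{-1} estimates the number of active connections.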
Second, the SRED algorithm can stabilize the queue size at
a level independent of the number of active connections. The
basic drop probability P_sred of SRED is related to the queue
size as given in equation (9).
The drop probability of BLUE can be modeled as

p(n) = p(n-1) + d * sgn(q(n) - T),   (7)

where T is the control target and d equals d1 when q(n) > T
and d2 when q(n) < T. If d is the drop-probability adjustment
parameter d1, and the control target is set to B/2, the drop
probability of BLUE can be expressed as equation (7). This
type of on/off (relay) control leads to a system where the
process output oscillates [10]. Thus, the queue size in a router
can theoretically be unstable with the BLUE algorithm.
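The oscillation can be reproduced with a toy closed-loop model. The fluid-queue dynamics, arrival/service rates, and step size d below are assumptions made for illustration; they are not BLUE's actual event-driven update.

```python
# Toy relay control loop illustrating the limit cycle of equation (7).
# The fluid-queue model and all rates here are illustrative assumptions.

def blue_step(p, q, T, d):
    s = 1.0 if q > T else -1.0              # sgn(q(n) - T)
    return min(max(p + d * s, 0.0), 1.0)    # p(n) = p(n-1) + d * sgn(q(n) - T)

q, p, B, T = 0.0, 0.0, 586.0, 293.0
crossings, prev_side = 0, -1
for n in range(5000):
    q = min(max(q + (1.0 - p) * 12.0 - 10.0, 0.0), B)   # arrivals thinned by p, fixed service
    p = blue_step(p, q, T, d=0.01)
    side = 1 if q > T else -1
    if n >= 3000 and side != prev_side:
        crossings += 1                       # queue repeatedly crosses the target
    prev_side = side
```

Even after the transient, the queue keeps crossing the target T: the relay term d * sgn(q - T) never settles at an intermediate value, which is the sustained oscillation predicted in [10].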
[Block diagram: the BLUE control loop modeled as a relay controller. The reference input T and the feedback signal q form the actuating (error) signal e = T - q; a relay element outputs +1 or -1 (the sgn of q(n) - T), giving the control signal (manipulated variable) p, the drop probability; disturbances (perturbations) D act on the process, whose controlled output q is fed back.]
The basic SRED drop probability is the staircase function

P_sred = P_max       if B/3 <= q < B,
         P_max / 4   if B/6 <= q < B/3,   (9)
         0           if 0 <= q < B/6.
The overall SRED drop probability also incorporates the hit information:

P_zap = P_sred * min(1, 1/(256 P(t))^2) * (1 + Hit(t)/P(t)).   (10)
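Equations (9) and (10) can be sketched together; p_max = 0.15 is an assumed example value, and the final cap at 1 is added here so the result remains a valid probability.

```python
# Sketch of SRED's staircase (equation (9)) and overall (equation (10)) drop probabilities.
# p_max = 0.15 is an assumed example value; the min(..., 1.0) cap is our addition.

def p_sred(q, B, p_max=0.15):
    if q >= B / 3.0:
        return p_max            # heavily loaded: full drop probability
    if q >= B / 6.0:
        return p_max / 4.0      # moderately loaded
    return 0.0                  # lightly loaded: no drops

def p_zap(q, B, P, hit, p_max=0.15):
    scale = min(1.0, 1.0 / (256.0 * P) ** 2)   # damps drops when few flows are active
    return min(1.0, p_sred(q, B, p_max) * scale * (1.0 + hit / P))

print(p_sred(400, 586), p_sred(150, 586), p_sred(50, 586))  # -> 0.15 0.0375 0.0
```

Note that the staircase depends only on which third or sixth of the buffer the queue occupies, which is what makes the control level independent of the number of connections.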
[Table 2. Default parameter values for RED, DRED, BLUE, and SRED, including filter gains, control targets (T), thresholds, drop-probability adjustment parameters, and buffer sizes.]
The queue size, drop probability, and loss rate are plotted
at 10-msec intervals. We run the simulations on a PIII-800
NT workstation; it takes about 15 minutes to complete a
simulation of 100 seconds of simulated time.
Figure 4 shows a network configuration with a total of
1000 TCP sources grouped into 10 subnets with 100 sources
per subnet. In this symmetric system, the interconnection
between the routers is a T3 (45 Mbps) link, and the rest of the
links have an equal data rate so that there is no bottleneck. All
the links have a propagation delay of 10 msec, and the round-trip
time (RTT) of the network is 60 msec. We use this model to
run the simulation with different numbers of sources and to
compare the performance of all four active queue management
algorithms. Due to lack of space, we only present simulation
results for 1000 sources. The default values for each algorithm
are given in Table 2.
Figure 7 shows the queue size of each algorithm with
1000 TCP connections. It can be seen that the queue sizes
are unstable in both the RED and BLUE algorithms, which
follows from the design principles of the two algorithms. For
RED, the queue size oscillates violently around the thresholds.
For RED and BLUE, the queue size shows periods of buffer
underflow and overflow, and the queue size increases/decreases
as the number of connections increases/decreases (i.e., it is
load-dependent). Like SRED, the queue size of DRED is stable
around the target buffer occupancy, but SRED needs a larger
buffer than DRED to achieve the same performance. Both
DRED and SRED are load-independent.
Figure 8 shows the drop probability of each algorithm
with 1000 TCP connections. The drop probability of RED is
the highest of all four algorithms. The drop probability of
DRED adapts faster than those of BLUE and SRED. BLUE's
drop probability does not react fast enough, leading to the
periods of buffer overflow and underflow.
Figure 9 shows the packet loss rate of each algorithm with
1000 TCP connections. Among the four, the packet loss rate
of RED is the highest. It is interesting to note that the drop
probability of DRED seems to be a good indicator of the
actual packet loss rate.
Figure 7. Queue size for 1000 TCP connections: (a) RED, (b) DRED, (c) BLUE, (d) SRED. [Panel annotations: RED has a hard time maintaining the queue size between the two thresholds min_th and max_th; DRED stabilizes the queue size around the target T; like RED, BLUE suffers from queue-size oscillations (in this scenario); SRED effectively controls the queue size at the expense of a larger buffer.]
Figure 8. Drop probability for 1000 TCP connections: (a) RED, (b) DRED, (c) BLUE, (d) SRED.
Figure 9. Packet loss rate for 1000 TCP connections: (a) RED, (b) DRED, (c) BLUE, (d) SRED.
Figure 10. Histogram of queue size: (a) RED, (b) DRED, (c) BLUE, (d) SRED.