
International Journal of Emerging Technology and Advanced Engineering

Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 3, Issue 2, February 2013)

Preventing Congestion Collapse in Networks using Network
Border Patrol (NBP)
A. Yugandhara Rao¹, V. Sandeep Kumar², M. Harish Kumar³
¹Asst. Professor, ²,³Student, CSE, Lendi Institute of Engineering & Technology, Vizianagaram
Abstract—The fundamental philosophy behind the Internet is expressed by the scalability argument: no protocol, mechanism, or service should be introduced into the Internet if it does not scale well. A key corollary to the scalability argument is the end-to-end argument: to maintain scalability, algorithmic complexity should be pushed to the edges of the network whenever possible. The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP).
Keywords—Border control, border monitor, congestion control, congestion collapse, core-stateless mechanisms, end-to-end argument, Internet.
I. INTRODUCTION
Network Border Patrol is a core-stateless congestion
avoidance mechanism. That is, it is aligned with the core-
stateless approach which allows routers on the borders (or
edges) of a network to perform flow classification and
maintain per-flow state but does not allow routers at the
core of the network to do so. Figure 1 illustrates this
architecture. In this paper, we draw a further distinction
between two types of edge routers. Depending on which
flow it is operating on, an edge router may be viewed as an
ingress or an egress router. An edge router operating on a
flow passing into a network is called an ingress router,
whereas an edge router operating on a flow passing out of a
network is called an egress router. Note that a flow may
pass through more than one egress (or ingress) router if the
end-to-end path crosses multiple networks.

Fig-1: The core-stateless Internet architecture assumed by NBP
NBP prevents congestion collapse through a
combination of per-flow rate monitoring at egress routers
and per-flow rate control at ingress routers. Rate
monitoring allows an egress router to determine how
rapidly each flow's packets are leaving the network,
whereas rate control allows an ingress router to police the
rate at which each flow's packets enter the network.
Linking these two functions together are the feedback
packets exchanged between ingress and egress routers;
ingress routers send egress routers forward feedback
packets to inform them about the flows that are being rate
controlled, and egress routers send ingress routers
backward feedback packets to inform them about the rates
at which each flow's packets are leaving the network.
This section describes three important aspects of the
NBP mechanism:
1. The architectural components, namely the modified
edge routers, which must be present in the network,
2. The feedback control algorithm, which determines
how and when information is exchanged between
edge routers,
3. The rate control algorithm, which uses the
information carried in feedback packets to regulate
flow transmission rates and thereby prevent
congestion collapse in the network.









Flaws or Drawbacks of the Existing System:
Packets are buffered in the routers present in the network, which causes congestion collapse from undelivered packets: bandwidth is continuously consumed by packets that are dropped before reaching their ultimate destination.
Retransmission of undelivered packets is required to ensure no loss of data.
Unfair bandwidth allocation arises in the Internet due to the presence of undelivered packets.
Advantages of the Proposed System:
To prevent congestion collapse in communication over the network, we introduce the Network Border Patrol concept. Its advantages are listed below:
Buffering of packets is carried out in the edge routers rather than in the core routers.
Packets are sent into the network based on the capacity of the network, so there is no possibility of undelivered packets remaining in the network.
The absence of undelivered packets avoids overload due to retransmission.
Fair allocation of bandwidth is ensured.
II. SYSTEM DESIGN
System architectural components:
As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from two maladies: congestion collapse from undelivered packets, and unfair allocation of bandwidth between competing traffic flows.
The first malady, congestion collapse from undelivered packets, arises when packets that are dropped before reaching their ultimate destinations continually consume bandwidth. The second malady, unfair bandwidth allocation to competing network flows, arises in the Internet for a variety of reasons, one of which is the existence of applications that do not respond properly to congestion. Adaptive applications (e.g., TCP-based applications) that respond to congestion by rapidly reducing their transmission rates are likely to receive unfairly small bandwidth allocations when competing with unresponsive applications. The Internet protocols themselves can also introduce unfairness. The TCP algorithm, for instance, inherently causes each TCP flow to receive bandwidth that is inversely proportional to its round-trip time.
Hence, TCP connections with short round-trip times
may receive unfairly large allocations of network
bandwidth when compared to connections with longer
round-trip times.
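The inverse dependence on round-trip time can be illustrated with a small numeric sketch. The simple 1/RTT proportionality used here is the idealized model implied by the text; the flow RTTs and bottleneck capacity are hypothetical values chosen for illustration:

```python
# Idealized model: competing TCP flows sharing a bottleneck receive
# bandwidth roughly proportional to 1/RTT (shorter RTT -> larger share).
def tcp_shares(rtts, capacity):
    """Split `capacity` among flows in proportion to 1/RTT."""
    weights = [1.0 / rtt for rtt in rtts]
    total = sum(weights)
    return [capacity * w / total for w in weights]

# Two hypothetical flows: 20 ms RTT vs 100 ms RTT, 12 Mb/s bottleneck.
shares = tcp_shares([0.020, 0.100], capacity=12.0)
# The short-RTT flow receives five times the bandwidth of the long-RTT flow.
```

Under this model the 20 ms connection captures 10 Mb/s of the 12 Mb/s bottleneck, leaving only 2 Mb/s for the 100 ms connection, which is the unfairness the text describes.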
To address the maladies of congestion collapse, we introduce and investigate a novel Internet traffic control protocol called network border patrol (NBP). The basic principle of NBP is to compare, at the borders of a network, the rates at which packets from each application flow are entering and leaving the network. If a flow's packets are entering the network faster than they are leaving it, then the network is likely buffering or, worse yet, discarding the flow's packets. In other words, the network is receiving more packets than it is capable of handling. NBP prevents this scenario by controlling the network's borders, ensuring that each flow's packets do not enter the network at a rate greater than they are able to leave it. This prevents congestion collapse from undelivered packets, because an unresponsive flow's otherwise undeliverable packets never enter the network in the first place.
Although NBP is capable of preventing congestion
collapse and improving the fairness of bandwidth
allocations, these improvements do not come for free. NBP
also introduces added communication overhead, since in order for an edge router to know the rate at which its packets are leaving the network, it must exchange feedback with other edge routers. Unlike some existing approaches trying to solve congestion collapse, however, NBP's added complexity is isolated to edge routers; routers within the
core of the network do not participate in the prevention of
congestion collapse. Moreover, end systems operate in total
ignorance of the fact that NBP is implemented in the
network, so no changes to transport protocols are necessary
at end systems.

Figure 2: The overall architecture of network border control



Figure 3 illustrates the architecture of an NBP egress
router's input port. Packets sent by ingress routers arrive at
the input port of the egress router and are first classified by
flow. In the case of IPv6, this is done by examining the
packet header's flow label, whereas in the case of IPv4, it is
done by examining the packet's source and destination
addresses and port numbers. Each flow's bit rate is then monitored using a rate estimation algorithm such as the Time Sliding Window (TSW). These rates are collected by
a feedback controller, which returns them in backward
feedback packets to an ingress router whenever a forward
feedback packet arrives from that ingress router. In some
cases, to be described later in this section, backward
feedback packets are also generated asynchronously; that
is, an egress router sends them to an ingress router without
first waiting for a forward feedback packet.
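The rate monitoring step can be sketched with a minimal Time Sliding Window estimator. This follows the general TSW scheme of smoothing the measured rate over a sliding window; the window length, class layout, and packet sizes below are illustrative assumptions, not the paper's exact parameters:

```python
class TSWRateEstimator:
    """Time Sliding Window (TSW) rate estimator: smooths the measured
    per-flow rate over a sliding window of `win_len` seconds."""
    def __init__(self, win_len=1.0):
        self.win_len = win_len      # averaging window (seconds)
        self.avg_rate = 0.0         # estimated rate (bytes/second)
        self.t_front = 0.0          # arrival time of the last packet seen

    def update(self, pkt_size, now):
        # Bytes assumed still "in the window", plus the new arrival.
        bytes_in_win = self.avg_rate * self.win_len + pkt_size
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win_len)
        self.t_front = now
        return self.avg_rate

# A flow sending 1000-byte packets every 10 ms carries 100,000 B/s;
# the estimate converges toward that value as packets arrive.
est = TSWRateEstimator(win_len=1.0)
for i in range(1, 2001):
    rate = est.update(1000, now=i * 0.01)
```

The estimator needs only two state variables per flow, which is what makes it attractive for per-flow monitoring at an egress router's input port.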


Figure 3: An input port of an NBP egress router
The output ports of NBP ingress routers are also
enhanced. Each contains a flow classifier, per-flow traffic
shapers (e.g., leaky buckets), a feedback controller, and a
rate controller. See Figure 4. The flow classifier
classifies packets into flows, and the traffic shapers limit
the rates at which packets from individual flows enter the
network. The feedback controller receives backward
feedback packets returning from egress routers and passes
their contents to the rate controller. It also generates
forward feedback packets, which it periodically transmits
to the network's egress routers. The rate controller adjusts
traffic shaper parameters according to a TCP-like rate
control algorithm, which is described later in this
section.


Figure 4: An output port of an NBP ingress router
III. MODULE DESCRIPTION
The various modules in this project are as follows
Module 1:- SOURCE MODULE.
Module 2:- INGRESS ROUTER MODULE.
Module 3:- ROUTER MODULE.
Module 4:- EGRESS ROUTER MODULE.
Module 5:- DESTINATION MODULE.
1. SOURCE MODULE:
The task of this Module is to send the packet to the
Ingress router.
Input data entities: Message to be transmitted from the
source to the destination node in the form of packet with IP
address for its identification.


Algorithm: Triple DES (Data Encryption Standard).
Output: Formatted packet with the required information for
communicating between the source & the destination node.
2. INGRESS ROUTER MODULE:
An edge router operating on a flow passing into a
network is called an ingress router. Rate control allows an ingress router to police the rate at which each flow's packets enter the network. Rate control and the leaky bucket algorithm are used to rank the nodes in the network.
Input data entities: parameters that determine the rate of the packets.
Algorithm: Per-flow Traffic Shapers (e.g., Leaky Bucket Algorithm), A Feedback Controller, Rate Controller.
Output: All the nodes in the network are assigned a unique rank.
3. ROUTER MODULE:
The task of this Module is to accept the packet from the
Ingress router and send it to the Egress router.
Input data entities: data received from neighboring nodes, to be transferred to other neighboring nodes.
Output: packets transferred to neighboring nodes.
4. EGRESS ROUTER MODULE:
An edge router operating on a flow passing out of a network is called an egress router. Rate monitoring allows an egress router to determine how rapidly each flow's packets are leaving the network. The time sliding window and rate monitoring algorithms are used to rank the nodes in the network.
Input data entities: parameters that determine the rate of the packet flows in the network.
Algorithm: Time Sliding Window (TSW) Algorithm, A Feedback Controller, Rate Monitor.
Output: packets are sent to the destination.
5. DESTINATION MODULE:
The task of this Module is to accept the packet from the Egress router and store it in a file on the Destination machine.
Input data entities: message received from the Egress router at the destination node, in the form of packets with an IP address.
Algorithm: Triple DES (Decryption).
Output: formatted packets with the required information for communication between source and destination nodes.

IV. ALGORITHM DESCRIPTION
Feedback control algorithm:
The NBP feedback control algorithm determines how and when feedback packets are exchanged between edge routers.

Figure 5: Forward and Backward feedback packets exchanged by
edge routers.
Feedback packets, carried as ICMP packets, are necessary for three reasons:
They allow egress routers to discover which ingress routers are acting as sources for each of the flows they are monitoring.
They allow egress routers to communicate per-flow bit rates to ingress routers.
They allow ingress routers to detect network congestion and control their feedback generation by estimating edge-to-edge round-trip times.
A forward feedback packet contains a timestamp and a list of flow specifications for flows originating at the ingress router. The timestamp is used to calculate the round-trip time between two edge routers, and the flow specifications indicate to an egress router the identities of active flows. When an egress router receives a forward feedback packet, it immediately generates a backward feedback packet and returns it to the ingress router. A backward feedback packet contains the forward feedback packet's original timestamp, a router hop count, and a list of observed egress rates. The router hop count indicates how many routers are in the path between the ingress and the egress router; it is calculated at the egress router by examining the time to live (TTL) field of arriving forward feedback packets. The contents of the backward feedback packet are passed to the rate controller, which adjusts the parameters of the traffic shapers.
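The feedback packet contents and the TTL-based hop count computation described above can be sketched as follows. The field names and the initial TTL value of 255 are illustrative assumptions, not values specified by the paper:

```python
from dataclasses import dataclass, field

INITIAL_TTL = 255  # assumed TTL set on forward feedback packets at the ingress

@dataclass
class ForwardFeedback:
    timestamp: float                                   # used to compute edge-to-edge RTT
    flow_specs: list = field(default_factory=list)     # identities of active flows
    ttl: int = INITIAL_TTL                             # decremented once per router hop

@dataclass
class BackwardFeedback:
    timestamp: float                                   # echoed from the forward feedback
    hop_count: int                                     # routers between ingress and egress
    egress_rates: dict = field(default_factory=dict)   # flow id -> observed egress rate

def make_backward_feedback(ff, observed_rates):
    """Egress router: echo the timestamp and derive the hop count
    from the TTL of the arriving forward feedback packet."""
    return BackwardFeedback(
        timestamp=ff.timestamp,
        hop_count=INITIAL_TTL - ff.ttl,
        egress_rates=dict(observed_rates),
    )

# A forward feedback packet that crossed 5 routers arrives with TTL 250.
ff = ForwardFeedback(timestamp=12.5, flow_specs=["f1"], ttl=250)
bf = make_backward_feedback(ff, {"f1": 80_000.0})
```

Echoing the original timestamp is what lets the ingress router compute the round-trip time without clock synchronization between the two edge routers.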




To generate forward feedback packets, an ingress router keeps a timer for each egress router. The time interval between feedback updates is determined by the base round-trip time observed for egress router e, denoted e.baseRTT, which is the shortest round-trip time observed between the ingress router and egress router e; it reflects whether the network is generally congested or not. e.baseRTT is maintained by estimating the current round-trip time from each arriving backward feedback packet and updating e.baseRTT accordingly. The reason for asynchronous backward feedback packet generation is to guard against feedback packets that are delayed or dropped by the network. It also ensures that ingress routers receive frequent rate feedback and are able to respond to congestion even when the distance between edge routers is large.
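The e.baseRTT bookkeeping described above amounts to tracking the minimum observed round-trip time per egress router. A minimal sketch, with class and method names that are assumptions for illustration:

```python
class EgressState:
    """Per-egress-router state kept at an ingress router."""
    def __init__(self):
        self.base_rtt = float("inf")  # shortest round-trip time seen so far

    def on_backward_feedback(self, sent_timestamp, now):
        """Estimate the current RTT from the echoed timestamp and
        update the base RTT if a shorter sample is observed."""
        current_rtt = now - sent_timestamp
        self.base_rtt = min(self.base_rtt, current_rtt)
        return current_rtt

e = EgressState()
e.on_backward_feedback(sent_timestamp=10.0, now=10.3)   # RTT sample: 0.3 s
e.on_backward_feedback(sent_timestamp=11.0, now=11.2)   # RTT sample: 0.2 s
e.on_backward_feedback(sent_timestamp=12.0, now=12.5)   # RTT sample: 0.5 s (queueing)
```

The base RTT only ever decreases, so a later, longer sample (such as the 0.5 s one) signals queueing delay rather than resetting the baseline.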
Rate control algorithm:
The NBP rate control algorithm regulates the rate at which each flow is allowed to enter the network. It operates in two phases:
1. Slow Start Phase
2. Congestion Avoidance Phase
New flows entering the network start in the slow start phase and proceed to the congestion avoidance phase only after the flow has experienced incipient congestion. The rate control algorithm is invoked when a backward feedback packet arrives at an ingress router. Backward feedback packets contain a timestamp and a list of flows arriving at the egress router from the ingress router, as well as the monitored egress rate for each flow. Upon arrival of a backward feedback packet, the algorithm calculates the current round-trip time (currentRTT) between the edge routers and updates the base round-trip time (e.baseRTT), which reflects the best observed round-trip time between the two edge routers. The algorithm then calculates deltaRTT, the difference between the current round-trip time (currentRTT) and the base round-trip time (e.baseRTT). A deltaRTT value greater than zero indicates that packets are requiring a longer time to traverse the network, likely due to buffering of packets within the network.










This algorithm decides that a flow is experiencing incipient congestion whenever it estimates that the network has buffered the equivalent of more than one of the flow's packets at each router hop. The algorithm first calculates the product of the flow's ingress rate (f.ingressRate) and deltaRTT (i.e., f.ingressRate × deltaRTT). This value provides an estimate of the amount of the flow's data that is buffered somewhere in the network. If f.ingressRate × deltaRTT is greater than the number of router hops between the ingress and egress routers (e.hopCount) multiplied by the size of the largest possible packet (MSS) (i.e., MSS × e.hopCount), then the flow is experiencing congestion.
Ensuring that there is always at least one packet buffered for transmission is the simplest way to achieve full utilization of a link, so congestion is declared only when the network has buffered more than one of the flow's packets at each router hop. When congestion occurs, flows with higher ingress rates detect it first, because the condition f.ingressRate × deltaRTT > MSS × e.hopCount is satisfied sooner for them, thereby detecting that the path is congested for ingress flow f. If a flow is in the slow start phase, its ingress rate is doubled for each round-trip time that has elapsed since the last backward feedback packet arrived (f.ingressRate × 2^RTTsElapsed), where RTTsElapsed denotes the estimated number of round-trip times since the last feedback packet arrived.
Doubling the ingress rate allows a new flow to rapidly
capture available bandwidth if the network is underutilized.
If the flow is in the congestion avoidance phase, its ingress
rate is conservatively incremented by a rate quantum value
in order to avoid the creation of congestion. The rate
quantum is computed as the maximum segment size
divided by the current round trip time between the edge
routers. This results in rate growth behavior that is similar
to TCP in its congestion avoidance phase. Furthermore, the
rate quantum is not allowed to exceed the flow's current
egress rate divided by a constant quantum factor (QF). This
guarantees that rate increments are not excessively large
when the round trip time is small.



Pseudo-code for ingress router rate-control algorithm:
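Since the original pseudo-code figure is not reproduced here, the following is a hedged reconstruction of the ingress rate-control step from the description above. The variable names, the MSS and quantum factor values, and in particular the reaction on congestion (falling back to the observed egress rate) are assumptions, not the paper's exact pseudo-code:

```python
MSS = 1500  # maximum segment size in bytes (assumed value)
QF = 4      # quantum factor limiting rate increments (assumed value)

def rate_control(f, e, current_rtt, rtts_elapsed):
    """Adjust flow f's ingress rate on backward-feedback arrival.
    f: dict with ingress_rate, egress_rate, phase;
    e: dict with base_rtt, hop_count."""
    e["base_rtt"] = min(e["base_rtt"], current_rtt)
    delta_rtt = current_rtt - e["base_rtt"]
    # Congestion test: more than one packet buffered per router hop.
    congested = f["ingress_rate"] * delta_rtt > MSS * e["hop_count"]
    if congested:
        # Assumed conservative reaction: fall back to the observed
        # egress rate and leave slow start.
        f["ingress_rate"] = f["egress_rate"]
        f["phase"] = "congestion_avoidance"
    elif f["phase"] == "slow_start":
        # Double once per elapsed round-trip time.
        f["ingress_rate"] *= 2 ** rtts_elapsed
    else:
        # Linear growth, capped so increments stay modest at small RTTs.
        quantum = min(MSS / current_rtt, f["egress_rate"] / QF)
        f["ingress_rate"] += quantum * rtts_elapsed

f = {"ingress_rate": 10_000.0, "egress_rate": 9_000.0, "phase": "slow_start"}
e = {"base_rtt": 0.1, "hop_count": 4}
rate_control(f, e, current_rtt=0.1, rtts_elapsed=1)  # no queueing: rate doubles
rate_control(f, e, current_rtt=0.5, rtts_elapsed=1)  # deltaRTT signals congestion
```

In the second call, deltaRTT = 0.4 s and 20,000 × 0.4 = 8,000 bytes exceeds MSS × hopCount = 6,000 bytes, so the flow is judged congested and its rate is pulled back.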

Leaky bucket algorithm:
Each host is connected to the network by an interface containing a leaky bucket, i.e., a finite internal queue. If a packet arrives at the queue when it is full, the packet is discarded. In other words, if one or more processes within the host try to send a packet when the maximum number is already queued, the new packet is unceremoniously discarded. The host is allowed to put one packet per clock tick onto the network. This mechanism turns an uneven flow of packets from the user processes inside the host into an even flow of packets onto the network, smoothing out bursts and greatly reducing the chances of congestion.

Figure 6: Flow chart representation of Leaky Bucket Algorithm
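The host-interface behaviour above can be sketched as a simple leaky bucket; the queue capacity and packet identifiers here are illustrative:

```python
from collections import deque

class LeakyBucket:
    """Finite queue that drops arrivals when full and drains at
    one packet per clock tick, smoothing bursty input."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def arrive(self, pkt):
        if len(self.queue) >= self.capacity:
            self.dropped += 1       # bucket full: discard the new packet
        else:
            self.queue.append(pkt)

    def tick(self):
        """One clock tick: release at most one packet onto the network."""
        return self.queue.popleft() if self.queue else None

bucket = LeakyBucket(capacity=3)
for i in range(5):                        # a burst of 5 packets arrives at once
    bucket.arrive(i)
sent = [bucket.tick() for _ in range(5)]  # drained one packet per tick
```

The burst of five is clipped to the three-packet capacity, and the survivors leave at exactly one per tick, which is the smoothing effect the text describes.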


Data encryption standard (DES):
The DES (Data Encryption Standard) algorithm is the most widely used encryption algorithm in the world. DES works by encrypting groups of 64 message bits, which is the same as 16 hexadecimal digits. To do the encryption, DES uses keys that are also apparently 16 hexadecimal digits (64 bits) long. However, every 8th key bit is ignored in the DES algorithm, so the effective key size is 56 bits. In any case, 64 bits (16 hexadecimal digits) is the round number upon which DES is organized.
DES is a block cipher, meaning it operates on plaintext blocks of a given size (64 bits) and returns ciphertext blocks of the same size. Thus DES results in a permutation among the 2^64 (read this as: "2 to the 64th power") possible arrangements of 64 bits, each of which may be either 0 or 1. Each block of 64 bits is divided into two blocks of 32 bits each, a left half-block L and a right half-block R. (This division is only used in certain operations.)
Triple-DES:
Triple-DES is just DES with two 56-bit keys applied. Given a plaintext message, the first key is used to DES-encrypt the message. The second key is used to DES-decrypt the encrypted message. (Since the second key is not the right key, this decryption just scrambles the data further.) The twice-scrambled message is then encrypted again with the first key to yield the final ciphertext. This three-step procedure is called triple-DES.

Figure 7: Triple DES process
Triple-DES is just DES done three times with two keys
used in a particular order. (Triple-DES can also be done
with three separate keys instead of only two. In either case
the resultant key space is about 2^112).
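The encrypt-decrypt-encrypt (EDE) key ordering can be illustrated with a toy stand-in cipher: a single-byte XOR takes the place of DES purely to show the structure. This toy has none of DES's security and is not the project's actual Triple DES implementation:

```python
def toy_encrypt(key, data):
    """Stand-in for DES encryption: XOR each byte with a key byte."""
    return bytes(b ^ key for b in data)

def toy_decrypt(key, data):
    """XOR is its own inverse, so 'decryption' is the same operation."""
    return bytes(b ^ key for b in data)

def triple_ede_encrypt(k1, k2, plaintext):
    """Triple-DES key ordering: encrypt with k1, decrypt with k2,
    then encrypt with k1 again."""
    return toy_encrypt(k1, toy_decrypt(k2, toy_encrypt(k1, plaintext)))

def triple_ede_decrypt(k1, k2, ciphertext):
    """Inverse order: decrypt with k1, encrypt with k2, decrypt with k1."""
    return toy_decrypt(k1, toy_encrypt(k2, toy_decrypt(k1, ciphertext)))

msg = b"NBP feedback"
ct = triple_ede_encrypt(0x5A, 0x3C, msg)
pt = triple_ede_decrypt(0x5A, 0x3C, ct)   # round-trip recovers the plaintext
```

Note also that with k1 = k2 the middle decryption undoes the first encryption, leaving single encryption; this is why EDE mode stays backward-compatible with single DES.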
V. RESULTS
In Figure 8 and Figure 9, data is sent either by entering text or by browsing for a text file, with an encryption process applied for security. The encryption key entered must be of length less than or equal to 8.



Figure 8: At Source side


Figure 9: Encryption Process
In Figure 10, the ingress router receives the data from the source in the form of packets. Before sending the packets to the core router, it exchanges feedback between the ingress and egress routers. After getting backward feedback from the egress router, it sends the packets to the core router.

Figure 10: At Ingress Router
In Figure 11, the core router simply receives the packets from the ingress router and forwards them directly to the egress router.


Figure 11: At Router Module
In Figure 12, the egress router monitors the rate of the packets in the network. It reports the rate of each flow to the ingress router as backward feedback, and it also receives packets from the core router and forwards them to the destination.



Figure 12: At Egress Router



In Figure 13 and Figure 14, the destination receives the packets from the egress router and decrypts the message using the decryption process; the data sent by the source is then shown in the text area at the destination, as in Figure 14.

Figure13: Decryption Process

Figure 14: At Destination Side
VI. CONCLUSION
Here we have presented a novel congestion avoidance
mechanism for the Internet, called Network Border
Control. Unlike existing Internet congestion control
approaches, which rely solely on end-to-end control, NBC
is able to prevent congestion collapse from undelivered
packets. It does this by ensuring at the border of the
network that each flow's packets do not enter the network
faster than they are able to leave it. NBC requires no
modifications to core routers nor to end systems. Only edge
routers are enhanced so that they can perform the requisite
per-flow monitoring, per-flow rate control and feedback
exchange operations.









Extensive simulation results show that NBP successfully prevents congestion collapse from undelivered packets. As in any feedback-based traffic control mechanism, stability is an important performance concern in NBP. Preliminary results already suggest that NBP benefits greatly from its use of explicit rate feedback, which prevents rate over-corrections in response to indications of network congestion. When combined with fair queuing, NBP effectively eliminates congestion collapse.
REFERENCES
[1] A. Demers, S. Keshav, and S. Shenker (Sept. 1989), "Analysis and simulation of a fair queueing algorithm," in Proc. ACM SIGCOMM, pp. 1-12.
[2] Federal Register 38, No. 93 (May 15, 1973), "Cryptographic Algorithms for Protection of Computer Data During Transmission and Dormant Storage."
[3] Federal Information Processing Standard (FIPS) Publication 46, National Bureau of Standards, U.S. Department of Commerce, Washington, D.C. (January 1977), Data Encryption Standard.
[4] J. Nagle (Jan. 1984), "Congestion control in IP/TCP internetworks," Internet Engineering Task Force, RFC 896.
[5] A. Rangarajan and A. Acharya (Oct. 1999), "ERUF: Early regulation of unresponsive best-effort traffic," presented at the Int. Conf. Network Protocols.
BOOKS REFERRED
[1] Andrew S. Tanenbaum (2002), Computer Networks, Prentice Hall of India.
[2] Bruce Schneier (1996), Applied Cryptography, John Wiley & Sons.
[3] Douglas R. Stinson (1995), Cryptography: Theory and Practice, CRC Press.
[4] Elliotte Rusty Harold (1997), Java Network Programming, O'Reilly Publications.
[5] Herbert Schildt (2005), The Complete Reference: Java, Tata McGraw-Hill.
[6] William Stallings (2001), Data and Computer Communications, Prentice Hall of India.
[7] William Stallings (1999), Cryptography and Network Security, Prentice Hall of India.
