Priority Based Multiple Buffer Management for Weighted Packets
Abstract

Motivated by providing differentiated services on the Internet, we consider efficient online algorithms for buffer management in network switches. The online buffer management problem formulates the problem of queuing policies of network switches supporting QoS (Quality of Service) guarantees. We study a FIFO buffering model, in which unit-length packets arrive in an online manner and each packet is associated with a value (weight) representing its priority. The order in which packets are sent must comply with the order of their arrival times. The buffer size is finite. At most one packet can be sent in each time step. This paper proposes the concept of multi-buffer management for efficient packet delivery: it maintains multiple buffers and allows the prioritized delivery of packets. Our objective is to maximize weighted throughput, defined as the total value of the packets sent.
1. Introduction

Network switches inside routers of the IP-based network infrastructure are critical in implementing the Internet's functionalities. Packets arrive at the network switches in an online manner. The buffer management policy in the network switches is in charge of two tasks: packet queuing and packet delivery. When new packets arrive, buffer management decides which ones to accept and queue for potential delivery. Since the buffer has finitely many slots, it may not be able to accommodate all arriving packets; therefore, some packets already in the buffer might have to be dropped. Without loss of generality, time is assumed to be discrete. In each time step, the buffer management selects a pending packet in the buffer to send. Buffer management can thus be regarded as an online scheduling algorithm processing prioritized packets. Fig. 1 illustrates the buffer structure inside a network switch.

Figure 1. The buffer management inside a network switch.

The buffer size is finite, and a packet is dropped if overflow occurs. This paper proposes the concept of multiple buffers for managing a large number of packets. Using multiple buffers, the switch is able to handle a large number of packets in the network, and each packet is delivered based on its priority.

2. Competitive FIFO Buffer Management for Weighted Packets

They [2] consider efficient online algorithms for buffer management in network switches, studying a FIFO buffering model in which unit-length packets arrive in an online manner and each packet is associated with a value (weight) representing its priority. The order of the packets being sent should comply with the order of their arrival times. The buffer size is finite. At most one packet can be sent in each time step. They [2] design competitive online FIFO buffering algorithms, where competitive ratios are used to measure various online algorithms' performance against the worst-case scenarios. They first provide an online algorithm with a constant competitive ratio of 2. Then they study the experimental performance of their algorithm ON on real Internet packet traces and compare it with all other known FIFO online competitive buffer management algorithms.

2.1. A Competitive Online Algorithm ON

Fei Li [2] describes an online algorithm called ROS (Relaxed Online Optimal Solution). ROS works in a relaxed model in which the FIFO-order constraint over sending packets is not demanded. A useful property of ROS is that its packet-sending sequence can be permuted to obtain an optimal offline algorithm OPT for the FIFO buffering model. ROS is used to calculate ON's competitive ratio.

2.2. An Optimal Offline Algorithm OPT

In this optimal offline algorithm, in each time step ROS greedily accepts packets and drops the minimum-value one if overflow happens. Then ROS sends a pending packet with the maximum value. Since the sent packets do not need to obey the FIFO order in the delivery sequence, at the end of each time step all unsent packets (if any) are kept in the buffer.

3. Competitive Queuing Policies for QoS Switches

Packet scheduling in a network providing differentiated services, where each packet is assigned a value, is studied through queueing models for supporting QoS (Quality of Service) [3]. In the non-preemptive model, packets accepted to the queue will be transmitted eventually and cannot be dropped. The FIFO preemptive model allows packets accepted to the queue to be preempted (dropped) prior to their departure, while ensuring that transmitted packets are sent in the order of arrival. In the bounded-delay model, a packet must be transmitted before a certain deadline, otherwise it is lost (while the transmission ordering is allowed to be arbitrary). In all models the goal of the buffer policy is to maximize the total value of the accepted packets. Let a be the ratio between the maximal and minimal value. For the non-preemptive model they derive an O(log a) competitive ratio, both exhibiting a buffer policy and a general lower bound.

4. Competitive Queue Policies for Differentiated Services

In Competitive Queue Policies for Differentiated Services, packets are tagged as either high-priority or low-priority packets.
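The greedy accept/drop/send behavior of ROS described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `ros_schedule`, the argument shapes, and the per-step loop are assumptions made for the sketch.

```python
import heapq

def ros_schedule(arrivals, capacity):
    """Relaxed-model scheduler in the spirit of ROS: in each time step,
    greedily accept arriving packet values, drop the minimum-value pending
    packet on overflow, then send one maximum-value pending packet.
    `arrivals[t]` is the list of packet values arriving at step t."""
    buffer = []   # min-heap of pending packet values
    sent = []     # values of packets actually sent, in sending order
    for step_packets in arrivals:
        for value in step_packets:
            heapq.heappush(buffer, value)
            if len(buffer) > capacity:
                heapq.heappop(buffer)   # overflow: drop current minimum value
        if buffer:
            best = max(buffer)          # send a maximum-value pending packet
            buffer.remove(best)
            heapq.heapify(buffer)       # restore heap order after removal
            sent.append(best)
    return sent
```

Because the relaxed model drops the FIFO constraint, the sending order here may differ from arrival order; the weighted throughput is simply the sum of the returned values.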
Outgoing links in the network are serviced by a single FIFO queue. This model [4] gives a benefit of α ≥ 1 to each high-priority packet and a benefit of 1 to each low-priority packet. A queue policy controls which of the arriving packets are dropped and which enter the queue. Once a packet enters the queue, it is eventually sent. The aim of a queue policy is to maximize the sum of the benefits of all the packets it delivers. W. Aiello, Y. Mansour, S. Rajagopolan, and A. Rosen analyze and compare different queue policies for this problem using the competitive analysis approach, where the benefit of the online policy is compared to the benefit of an optimal offline policy. They derive both upper and lower bounds for the policies, and in most cases the bounds are tight.

5. Random Early Detection Gateways for Congestion Avoidance

Random Early Detection (RED) gateways provide congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway can notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their windows at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways. RED gateways are an effective mechanism for congestion avoidance at the gateway, in cooperation with network transport protocols. By dropping packets when the average queue size exceeds the maximum threshold, rather than simply setting a bit in packet headers, RED gateways control the calculated average queue size. This provides an upper bound on the average delay at the gateway. This approach avoids a bias against bursty traffic at the gateway.

6. Proposed Work

In this paper we propose the concept of implementing multiple buffers for managing a large number of packets. With a single buffer it is not possible to manage a large number of packets.

6.1 Packet Analysis

Packet analysis, often referred to as packet sniffing or protocol analysis, describes the process of capturing and interpreting live data as it flows across a network in order to better understand what is happening on that network. Packet analysis can help us understand network characteristics, learn who is on a network, determine who or what is utilizing available bandwidth, identify peak network usage times, identify possible attacks or malicious activity, and find unsecured and bloated applications.

6.1.1 Viewing Endpoints

An endpoint is the place where communication ends on a particular protocol. For instance, there are two endpoints in TCP/IP communication: the IP addresses of the systems sending and receiving data, 192.168.1.25 and 192.168.1.30. An example on Layer 2 would be the communication taking place between two physical NICs and their MAC addresses. The NICs sending and receiving data have addresses of 00:ff:ac:ce:0b:de and 00:ff:ac:e0:dc:0f, making those addresses the endpoints of communication.
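As a concrete illustration of viewing endpoints, the sketch below tallies per-endpoint packet and byte counts from a captured trace. The tuple layout, the helper name `endpoint_stats`, and the sample capture are illustrative assumptions (the addresses are taken from the example above).

```python
from collections import Counter

def endpoint_stats(packets):
    """Tally traffic per IP endpoint from captured packets.
    Each packet is a (source_ip, destination_ip, length) tuple;
    both ends of a conversation count as endpoints."""
    pkts, octets = Counter(), Counter()
    for src, dst, length in packets:
        for endpoint in (src, dst):
            pkts[endpoint] += 1        # packets seen at this endpoint
            octets[endpoint] += length # bytes seen at this endpoint
    return pkts, octets

# A tiny two-packet capture between the endpoints named above.
capture = [
    ("192.168.1.25", "192.168.1.30", 209),
    ("192.168.1.30", "192.168.1.25", 52),
]
pkts, octets = endpoint_stats(capture)
```

A real analyzer would derive the same statistics from live frames rather than pre-parsed tuples, but the aggregation step is the same.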
Fig 6.1 Viewing end points

No | Source MAC        | Destination MAC   | Source IP    | Destination IP | Capture Time                 | Capture Length
0  | 00:26:18:51:3b:2f | ff:ff:ff:ff:ff:ff | 192.168.0.11 | 192.168.0.255  | Thu Nov 19 09:47:02 IST 2009 | 209
1  | 00:26:18:51:3b:2f | 00:08:5c:8c:e7:06 | 192.168.0.11 | 203.145.184.32 | Thu Nov 19 09:47:02 IST 2009 | 52

Table 6. Packet Analysis

6.2 Multiple Buffer Management

Multiple buffers are managed between clients in order to handle the large number of incoming packets. For each request a separate buffer is maintained. Each buffer has a fixed number of slots. All arriving packets are fed into the buffers and delivered to the client requests.

Fig 6.2 Multiple Buffer Management

6.3 Packet Priority

When a network segment becomes congested, the hub-and-switch workload results in the delay or dropping of packets. On a network using packet-priority values, a packet with a higher priority receives preferential treatment and is serviced before a packet with a lower priority. Priority is given to application packets based on the type of data packet, so each data packet has a different delivery priority. Priority is set based on the delay value of the arriving packets: each data packet has a delay value, and while packets are in the buffer, the delay value of each packet is analyzed; based on the delay constraint, the packet is delivered to the clients. This allows the prioritized delivery of packets.

Conclusion

We conclude that for practical usage of these priority algorithms, thorough experiments need to be performed and analyzed to pick the best algorithm in real applications. The performance of the multi-buffer model is measured experimentally using Internet traces.
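The per-request buffering and delay-based priority delivery proposed in Section 6 can be sketched as below. The class name, the `slots` parameter, and the convention that a smaller delay value means higher delivery priority are assumptions made for this illustration, not a definitive implementation of the scheme.

```python
from collections import deque

class MultiBufferManager:
    """Sketch of the proposed scheme: one fixed-size FIFO buffer per
    client request; delivery picks the most urgent head-of-line packet
    (smallest delay value) across all buffers."""

    def __init__(self, slots):
        self.slots = slots
        self.buffers = {}  # client id -> deque of (delay, payload)

    def enqueue(self, client, delay, payload):
        q = self.buffers.setdefault(client, deque())
        if len(q) >= self.slots:   # buffer full: drop the arriving packet
            return False
        q.append((delay, payload))
        return True

    def deliver(self):
        # Choose, over all non-empty buffers, the head packet with the
        # smallest delay value (i.e. the highest delivery priority).
        candidates = [(q[0], c) for c, q in self.buffers.items() if q]
        if not candidates:
            return None
        (delay, payload), client = min(candidates)
        self.buffers[client].popleft()
        return client, payload
```

Within each client's buffer the FIFO order is preserved; priority only decides which buffer is served next, which matches the prioritized-delivery idea of Section 6.3.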
References
[1] The Internet traffic archive. http://ita.ee.lbl.gov.
[2] Fei Li. Competitive FIFO buffer management for weighted packets. In Proceedings of the 7th Annual Communication Networks and Services Research Conference (CNSR), 2009.
[3] N. Andelman, Y. Mansour, and A. Zhu. Competitive queueing policies for QoS switches. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 761–770, 2003.
[4] W. Aiello, Y. Mansour, S. Rajagopolan, and A. Rosen. Competitive queue policies for differentiated services. In Proceedings of the 19th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), pages 431–440, 2000.
[5] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, pages 397–413, 1993.