
A Technical Analysis of QoS

Intro to QoS

Quality of Service (QoS) for networks is an industry-wide set of standards and
mechanisms for ensuring high-quality performance for critical applications. By using QoS
mechanisms, network administrators can use existing resources efficiently and ensure the
required level of service without reactively expanding or over-provisioning their networks.

One of the key concerns in maintaining QoS is ensuring that critical or time-sensitive
applications receive priority over other traffic. For instance, Voice-over-IP (VoIP) packets
should receive higher priority than typical data transfers so that users do not experience dropped
phrases or delays. The goal of QoS is to provide preferential delivery service for the applications
that need it by ensuring sufficient bandwidth, controlling latency and jitter, and reducing data
loss. These QoS parameters are increasing in importance as networks become more
interconnected.

The two most common QoS tools used to handle traffic are classification and queuing.
Classification identifies and marks traffic to ensure network devices know how to identify and
prioritize data as it traverses a network. Queues are buffers in devices that hold data to be
processed. Queues provide bandwidth reservation and prioritization of traffic as it enters or
leaves a network device. If the queues are not emptied, they overflow and drop traffic.
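The classification step described above can be sketched as a simple marking function. This is only an illustrative sketch: the port ranges and the mapping of ports to DSCP values below are assumptions chosen for the example, not a standard configuration.

```python
# Minimal sketch of traffic classification: mark each packet with a DSCP
# value based on its destination port. The port-to-class mappings here
# are illustrative assumptions, not a recommended policy.
DSCP_EF = 46            # Expedited Forwarding, typically used for VoIP
DSCP_AF21 = 18          # Assured Forwarding, e.g. transactional data
DSCP_BEST_EFFORT = 0

def classify(packet):
    """Return the packet with a 'dscp' mark derived from its destination port."""
    port = packet["dst_port"]
    if 16384 <= port <= 32767:       # common RTP/VoIP port range (assumed)
        packet["dscp"] = DSCP_EF
    elif port in (80, 443):          # web traffic (assumed class)
        packet["dscp"] = DSCP_AF21
    else:
        packet["dscp"] = DSCP_BEST_EFFORT
    return packet

voip = classify({"dst_port": 16500})
web = classify({"dst_port": 443})
```

Downstream devices then only need to read the mark, not re-inspect the packet, to decide which queue it belongs in.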

Benefits of QoS

All networks can take advantage of aspects of QoS for optimum efficiency, whether the
network is for a small corporation or an enterprise. Network administrators can use QoS to
guarantee throughput for mission-critical applications so that their transactions can be processed
in an acceptable amount of time. Network administrators can also use QoS to manage UDP
traffic.
Unlike TCP, UDP is an inherently unreliable protocol that does not receive feedback
from the network and, therefore, cannot detect network congestion. Network
administrators can use QoS to manage the priority of applications that rely on UDP, such as
multimedia applications, so that they have the required bandwidth even in times of network
congestion, but do not overwhelm the network.

The following are some benefits of QoS:


 Take control of traffic and network resources
 Dynamically allocate bandwidth by prioritization level
 Improve the user experience
 Reduce unnecessary network upgrade costs

Analysis and Comparison of Different Queuing Services

FIFO
The default scheduling algorithm used on most interfaces of Cisco routers and switches is
FIFO (first in, first out). This form of queuing requires no configuration, and simply processes
and forwards packets in the order that they arrive. If the queue becomes saturated, new packets
will be dropped (tail drop).

FIFO queuing is problematic because there is no service differentiation. This form of
queuing may be insufficient for real-time applications, especially during times of congestion.
The last packet in the queue may be the most important packet in the queue, but it’s still the last
packet to be serviced out of the queue.
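The FIFO behavior above, including tail drop, can be sketched in a few lines. This is a toy model (queue depth and packet names are arbitrary), not router code:

```python
from collections import deque

class FIFOQueue:
    """Single FIFO queue with tail drop, as described above."""
    def __init__(self, max_depth):
        self.buf = deque()
        self.max_depth = max_depth
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buf) >= self.max_depth:
            self.dropped += 1        # tail drop: the queue is saturated
            return False
        self.buf.append(packet)
        return True

    def dequeue(self):
        # Packets leave strictly in arrival order; no differentiation.
        return self.buf.popleft() if self.buf else None

q = FIFOQueue(max_depth=2)
q.enqueue("p1"); q.enqueue("p2"); q.enqueue("p3")   # p3 is tail-dropped
```

Note that nothing in the model looks at what `p3` is: a dropped packet may well be the most important one.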

Priority Queuing
Priority Queuing (PQ) provides four queues: High, Medium, Normal, and Low. Traffic
must be assigned to these queues, usually using access-lists. Packets from the
High queue are always processed before packets from the Medium queue. Likewise, packets
from the Medium queue are always processed before packets in the Normal queue, etc. As long
as there are packets in the High queue, no packets from any other queues are processed. Once the
High queue is empty, then packets in the Medium queue are processed… but only if no new
packets arrive in the High queue. This is referred to as a strict form of queuing.
The obvious advantage of PQ is that higher-priority traffic is always processed first. The
nasty disadvantage to PQ is that the lower-priority queues can often receive no service at all. A
constant stream of High priority traffic can starve out the lower-priority queues.
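The strict-priority scheduler, and the starvation it can cause, can be sketched as follows (queue names and packet labels are illustrative):

```python
from collections import deque

class PriorityQueuing:
    """Strict priority queuing: High always drains before Medium, and so on."""
    LEVELS = ("high", "medium", "normal", "low")

    def __init__(self):
        self.queues = {lvl: deque() for lvl in self.LEVELS}

    def enqueue(self, packet, level):
        self.queues[level].append(packet)

    def dequeue(self):
        # Always service the highest non-empty queue. A constant stream
        # of High traffic therefore starves every lower queue.
        for lvl in self.LEVELS:
            if self.queues[lvl]:
                return self.queues[lvl].popleft()
        return None

pq = PriorityQueuing()
pq.enqueue("bulk", "low")
pq.enqueue("voice", "high")
first = pq.dequeue()   # "voice" wins even though "bulk" arrived first
```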

Custom Queuing
A less strict form of queuing is Custom Queuing (CQ), which employs a weighted round-
robin queuing methodology. Each queue is processed in order, but each queue can have a
different weight or size (measured either in bytes, or the number of packets). During its turn,
each queue is serviced until its configured byte or packet count is exhausted. CQ supports a
maximum of 16 queues.
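A weighted round-robin pass can be sketched like this. The byte weights and packet sizes are made up for the example; real CQ sends whole packets, which this simplification approximates by letting the last packet overdraw the budget:

```python
from collections import deque

class CustomQueuing:
    """Weighted round-robin sketch: each queue drains up to its byte
    weight per turn, then the scheduler moves to the next queue."""
    def __init__(self, weights):
        self.weights = weights                      # queue id -> bytes per turn
        self.queues = {qid: deque() for qid in weights}

    def enqueue(self, qid, packet_bytes):
        self.queues[qid].append(packet_bytes)

    def service_round(self):
        """Run one full round; return the list of (queue, bytes) sent."""
        sent = []
        for qid, weight in self.weights.items():
            budget = weight
            while self.queues[qid] and budget > 0:
                pkt = self.queues[qid].popleft()    # whole packets only
                budget -= pkt                       # may overdraw slightly
                sent.append((qid, pkt))
        return sent

cq = CustomQueuing({"A": 1500, "B": 500})   # A gets ~3x B's share
cq.enqueue("A", 1000); cq.enqueue("A", 1000)
cq.enqueue("B", 400)
round1 = cq.service_round()
```

Because every queue gets a turn, no class is starved outright, but no class gets strict priority either.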

Weighted Fair Queuing


WFQ uses an automatic classification mechanism based on source IP address,
destination IP address, source port, destination port, type of service, and protocol. WFQ uses a
mathematical algorithm that prioritizes interactive traffic with high IP Precedence values.
Packets with a higher priority are scheduled before lower-priority packets arriving at the same
time. This is accomplished by assigning a sequence number to each arriving packet. WFQ is the
default on slow serial links and is well suited to low-volume flows.
WFQ suffers from several key disadvantages: traffic cannot be queued based on user-
defined classes, it cannot provide specific bandwidth guarantees to a traffic flow, and it is only
supported on slower links (2.048 Mbps or less).
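The sequence-number idea can be sketched with a simplified scheduling rule. The weight formula below (packet length divided by precedence plus one) is a deliberate simplification assumed for illustration, not Cisco's actual constants; it only shows the ordering effect:

```python
# Simplified WFQ sketch: each arriving packet gets a sequence number,
# and packets with lower sequence numbers are scheduled first. The
# weight formula is an illustrative assumption, not the real algorithm.
def sequence_number(last_seq, packet_len, ip_precedence):
    weight = packet_len / (ip_precedence + 1)   # higher precedence -> smaller step
    return last_seq + weight

# Two equal-size packets from different flows arriving at the same time:
voice_seq = sequence_number(0, 200, ip_precedence=5)   # interactive, high precedence
bulk_seq = sequence_number(0, 200, ip_precedence=0)    # best-effort flow
# voice_seq < bulk_seq, so the voice packet is scheduled first
```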

Class-Based WFQ
Class-Based WFQ (CBWFQ) allows traffic to be assigned to user-defined classes, and
traffic within each queue is processed using FIFO. Each queue is provided with a configurable
minimum bandwidth guarantee: either a fixed amount, a percentage of the total interface
bandwidth, or a percentage of the remaining unallocated bandwidth. CBWFQ queues are only
held to their minimum bandwidth guarantee during periods of congestion.
The key disadvantage with CBWFQ is that no mechanism exists to provide a strict-
priority queue for real-time traffic, such as VoIP, to alleviate latency.

Low-Latency Queuing
LLQ is an improved version of CBWFQ that includes one or more strict-priority queues,
to alleviate latency issues for real-time applications. Strict-priority queues are always serviced
before standard class-based queues. The key difference between LLQ and PQ (which also has a
strict-priority queue) is that the LLQ strict-priority queue will not starve all other queues. The
LLQ strict-priority queue is policed, either to a bandwidth or to a percentage of the bandwidth.
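The contrast with plain PQ can be sketched as a priority queue with a policed byte budget per service interval. The budget value and packet sizes are arbitrary, and for simplicity non-conforming priority traffic stays queued here, where a real policer would drop it under congestion:

```python
from collections import deque

class LLQ:
    """LLQ sketch: one policed strict-priority queue plus a default
    class queue. Priority traffic is serviced first, but only up to its
    policed budget per interval, so it cannot starve other queues."""
    def __init__(self, priority_budget):
        self.priority_budget = priority_budget   # bytes per interval
        self.priority = deque()
        self.default = deque()

    def service_interval(self):
        sent, used = [], 0
        # Drain priority traffic first, within the police rate.
        while self.priority and used + self.priority[0] <= self.priority_budget:
            pkt = self.priority.popleft()
            used += pkt
            sent.append(("priority", pkt))
        # Remaining capacity goes to the class-based queues.
        while self.default:
            sent.append(("default", self.default.popleft()))
        return sent

llq = LLQ(priority_budget=1000)
llq.priority.extend([600, 600])      # second packet exceeds the police rate
llq.default.append(1500)
sent = llq.service_interval()
```

Unlike strict PQ, the default queue is still serviced even while priority traffic is waiting, because the policer caps how much the priority queue may send.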

Explanation of Traffic Shaping and Committed Access Rate

Traffic shaping allows you to control the traffic going out an interface in order to match its flow
to the speed of the remote, target interface and to ensure that the traffic conforms to policies
contracted for it. Traffic shaping controls the bandwidth available and sets the priority of traffic
processed by the policy, controlling either the volume of traffic for a specific period (bandwidth
throttling) or the rate at which the traffic is sent (rate limiting).

CAR embodies a rate-limiting feature for policing traffic. The rate-limiting feature of CAR
manages the access bandwidth policy for a network by ensuring that traffic falling within
specified rate parameters is sent, while dropping packets that exceed the acceptable amount of
traffic or sending them with a different priority. The exceed action for CAR is to drop or mark
down packets.

CAR is often configured on interfaces at the edge of a network to limit traffic into or out of the
network. Policing can be implemented for both inbound and outbound traffic on an interface,
while shaping can only occur on outbound traffic. For instance, an ISP may delay P2P packets,
such as those transmitted by BitTorrent networks, to enforce a specific policy of service. This
policy is referred to as the Service Level Agreement (SLA) between the customer and provider.
Shaping is usually implemented on the customer side, and will buffer traffic that exceeds the
provider’s committed rate; thus, shaping can slow the traffic rate into compliance with the
provider’s SLA. Policing is usually implemented on the provider side, and will either drop or
re-mark traffic that exceeds the provider’s committed rate.
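Both shaping and policing are commonly built on a token bucket, which the following sketch illustrates. The rate and burst values are arbitrary examples; the only difference between a shaper and a policer here is what happens to a non-conforming packet:

```python
class TokenBucket:
    """Token-bucket sketch underlying both shaping and policing.
    Tokens accrue at the committed rate up to a burst size; a packet
    conforms if enough tokens are available. A policer drops or
    re-marks exceeding packets; a shaper buffers them instead."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # committed rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0

    def conforms(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                 # send the packet
        return False                    # police: drop/mark down; shape: queue

police = TokenBucket(rate_bps=8000, burst_bytes=1500)   # 1000 bytes/sec
ok = police.conforms(1000, now=0.0)      # within the initial burst -> conforms
over = police.conforms(1000, now=0.1)    # bucket nearly empty -> exceeds
```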

Conclusion

The IP protocol was originally designed to get a packet to its destination with little
consideration for how long delivery takes. Today, however, IP networks must support many different
types of applications, and many of these applications require low latency. The best-effort IP network
introduces a variable and unpredictable amount of delay, and the converged network mixes different
types of traffic, each with very different requirements. QoS technologies therefore play a crucial role
in a multiservice IP network.

QoS is at the forefront of networking technology. The future brings the notion of user-based QoS, in
which QoS policies are based on the user as well as the application. The recent debate about ending net
neutrality also relates to prioritizing traffic over the network with QoS.
