
PACKETEER TECHNICAL FOCUS PAPER

Controlling Bandwidth and Performance

With PacketShaper Products

Packeteer, Inc.
10201 N. De Anza Blvd., Cupertino, CA 95014
Tel: (408) 873-4400
info@packeteer.com | www.packeteer.com

Packeteer, the Packeteer logo, combinations of Packeteer and the Packeteer logo, as well as AppCelera, PacketSeeker, PacketShaper, PacketShaper Xpress, PacketWise,
and PolicyCenter are trademarks or registered trademarks of Packeteer, Inc. in the United States and other countries. Other product and company names used in this
document are used for identification purposes only and may be trademarks of other companies and are the property of their respective owners. Copyright 2001-2004
Packeteer, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, transmitted, or translated into another language
without the express written consent of Packeteer, Inc. Packeteer software is licensed, not sold, and its use is subject to the license terms set forth in the end user license
agreement.

Table of Contents

Controlling Bandwidth and Performance
    The Performance Problem
        Why a Deluge?
        The Nature of Network Traffic
    Solution Alternatives
        Management Decrees
        Additional Bandwidth and Compression
        Queuing-Only Schemes on Routers or other Networking Equipment
        Packet Marking
        Packeteer's Application Traffic Management
    Controlled Passage
        Per-Class Limits and/or Reservations
        Per-Session Rate Policies
        Per-Session Priority Policies
        Other Per-Session Policies
        TCP Rate Control
        UDP Rate Control and Queuing
        Packet Marking for MPLS and ToS
        Scheduling
    Putting Control Features to Use
        Characterizing Traffic
        Suggestions and Examples
    For More Information
    Appendix A: Preparing for VoIP - A Detailed Example
    Appendix B: How TCP Rate Control Works

Controlling Bandwidth and Performance
In the battle for bandwidth on congested WAN and Internet
access links, demanding applications such as music downloads or
large email attachments can flood capacity and undermine critical applications. Abundant data, protocols that
swell to use any available bandwidth, network bottlenecks, and new, popular, and bandwidth-hungry
applications all conspire against network and application performance.
Identifying performance problems is a good first step, but it's not enough. Packeteer controls bandwidth
allocation with flexible policies to protect critical applications, pace greedy traffic, limit recreational usage,
and block malicious activity. Bandwidth minimums and/or maximums apply to each application, session,
user, and/or location. Each type of traffic maps to a specific allocation policy, ensuring that each receives an
appropriate slice of bandwidth.
This paper describes today's application performance problems, proposes a few alternative solutions, and
then delves into detail about Packeteer's control features. The paper does not cover visibility or compression
capabilities, which are each covered in their own papers on the Packeteer website. If you are interested in a
shorter summary of Packeteer's control capabilities, see the product overview paper called "Strategies for
Managing Application Traffic," and then look for the section on control.

The Performance Problem


At best, application performance over the network is inconsistent and unpredictable; at worst, it's
consistently slow and frustrating. Today's performance problems are due to the combination of a deluge of
traffic, its variety of urgency and performance requirements, and the nature of TCP/IP protocols. In addition,
the capacity mismatch between local- and wide-area networks creates unmanaged congestion at LAN/WAN
speed-conversion bottlenecks.

Why a Deluge?
The increase in traffic stems from numerous environmental changes in applications, networks, and our own
habits. They include:
More application traffic: An explosion of application size, user demand, and richness of media
Recreational traffic: Abundant traffic resulting from recent trends in Internet radio, MP3 music downloads,
instant messaging, web browsing, interactive gaming, and more
Webification: Web-based application interfaces that typically consume 5 to 10 times their former bandwidth
Distributed applications: Enterprise applications that run over the WAN or Internet instead of being
confined to a single machine
Server consolidation: A trend to combine data centers and reduce the number of application servers, forcing
previously local traffic (high bandwidth, low latency, and low cost) to travel the WAN or Internet (low
bandwidth, high latency, and expensive)
Voice/video/data network convergence: One network that supports voice, video, and data, each with
different performance demands and requirements
SNA/IP convergence: An IP network that also supports SNA applications using TN3270 or TN5250 but
with a corresponding drop in performance
Disaster readiness: Redundant data centers, mirroring of large amounts of data

Security: Worms, viruses, and denial-of-service (DoS) attacks, ranked as the number one source of
network congestion in a recent Network World survey
New habits: Users doing more types of tasks online: shopping, research, news, collaboration, finances,
socializing, medical diagnostics, and more

The Nature of Network Traffic


The de-facto network standard is the TCP/IP protocol suite, and over 80 percent of TCP/IP traffic is TCP.
Although TCP offers us many advantages and strengths, management and enforcement of QoS (quality of
service) are not among them.
Many of TCP's control or reliability features contribute to performance problems.

TCP retransmits when the network cloud drops packets or delays acknowledgments
When packets drop or acknowledgements are delayed due to congested conditions and overflowing
router queues, retransmissions simply contribute more traffic and worsen the original problem.

TCP increases bandwidth demands exponentially


With TCP's slow-start algorithm, senders can iteratively double the transmission size until packets drop
and problems occur. The algorithm introduces an exponential growth rate and can rapidly dominate
capacity. Without regard for traffic's urgency, concurrent users, or competing applications, TCP simply
expands each flow's usage until it causes problems. This turns each sizeable traffic flow into a
bandwidth-hungry, potentially destructive consumer that could undermine equitable or appropriate
allocation of network resources.
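The exponential growth described above can be illustrated with a short sketch. This is a toy model only: real TCP stacks grow the window per acknowledgment and switch to a gentler congestion-avoidance phase.

```python
# Toy model of TCP slow start: the effective window doubles each round
# trip until it overshoots the link's capacity and packets are dropped.
def slow_start_rounds(capacity_pkts, initial_window=1):
    """Per-round-trip window sizes up to and including the overshoot."""
    history, window = [], initial_window
    while window <= capacity_pkts:
        history.append(window)
        window *= 2  # exponential growth
    history.append(window)  # this round exceeds capacity and causes loss
    return history

# A 64-packet pipe is saturated, and then overrun, in just a few round trips.
print(slow_start_rounds(64))  # [1, 2, 4, 8, 16, 32, 64, 128]
```

Note how quickly a single flow reaches the drop point; with many concurrent flows doing the same, the congestion spiral described next follows.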

TCP imposes network overload


TCP expands allocation until packets are dropped or responses are delayed. It floods routers by design!

As large amounts of data are forwarded to routers, more congestion forms, bigger queues form, more delay is
introduced, more packets are discarded, more timeouts occur, more retransmissions are sent, more congestion
forms, and the cyclical spiral continues.
When demand rises and large packet bursts prompt this domino effect, all traffic experiences delays: large
or small, interactive or batch, urgent or frivolous. But critical or urgent applications (SAP or web
conferencing, for example) suffer the most. Users turn cranky. Productivity deteriorates. Business declines.

Solution Alternatives
When faced with bandwidth constraints and performance that is unpredictable, inequitable, inconsistent, or
just too slow, a number of solutions come to mind. This section addresses the following potential solutions,
focusing on their advantages and limitations:

Management decrees

Additional bandwidth and compression

Queuing-only schemes on routers or other networking equipment

Packet marking and MPLS

Packeteer's application traffic management


Management Decrees
A university says, "Don't download MP3 music files." Or a corporation says, "Don't use Internet radio; put
a radio on your desk instead." Managerial edicts are only as effective as an organization's ability to enforce
them.
In addition, this approach only impacts the network load due to unsanctioned traffic. It does nothing to
manage the concurrence of file transfers, web-based applications, large email attachments, Citrix-based
applications, print traffic, and all the other traffic that is both necessary and important. Real-world traffic has
an incredible variety of requirements that complicate the task of enforcing appropriate performance for all.

Additional Bandwidth and Compression


When performance problems occur, a common response to too much traffic is "buy more bandwidth." But it is
not an effective solution. Network managers spend large portions of their budgets on bandwidth upgrades in
attempts to solve performance problems, only to find that the problems persist.
Critical applications that suffer poor performance aren't necessarily the applications that get access to extra
capacity. Usually, it's the less urgent, bandwidth-hungry applications that monopolize increased bandwidth.
In this illustration, more bandwidth is added, but the beneficiaries are top bandwidth consumers (web
browsing, email, music downloads) instead of the most critical applications (Oracle, Citrix, TN3270). If
usage patterns perpetuate after a purchase of more bandwidth (as they usually do), then critical applications
still lose out to the more aggressive and less important traffic. Not the best bandwidth bargain.

Application bandwidth shares, before and after adding bandwidth:

Application        Before    After
Web Browsing       28%       40%
Email              20%       22%
Music Downloads    12%       18%
File Transfers      9%        6%
Real Audio          8%        5%
Oracle              7%        3%
Citrix              5%        1%
Gaming              5%        1%
Other               4%        1%
TN3270              2%        1%

Increased bandwidth always imposes a set-up cost. In some places, larger pipes are not available or are
prohibitively expensive. Even if bandwidth costs drop, they are a recurring monthly cost. In September
2003, Gartner stated, "The WAN represents the single largest recurring cost, other than people, in IS
organizations."
The same phenomenon occurs when organizations turn to stand-alone compression solutions, those without
application-aware control features. Although compression does create extra bandwidth, the gains go to the
wrong applications.

Queuing-Only Schemes on Routers or other Networking Equipment


Routers provide queuing technology that buffers packets to wait for a chance to pass on a congested network.
A variety of queuing schemes, including weighted fair queuing, priority output queuing, and custom queuing,
attempt to prioritize and distribute bandwidth to individual data flows so that low-volume applications dont
get overtaken by large transfers.
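A minimal sketch of one such scheme, priority output queuing, shows the basic mechanic. This is illustrative only; router implementations add per-queue limits, weights, and drop policies.

```python
import heapq

# Minimal priority output queuing: packets are dequeued strictly by class
# priority, so a small interactive packet is never stuck behind bulk data.
class PriorityQueuer:
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, priority, packet):
        # Lower number = higher priority; _seq keeps FIFO order within a class.
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriorityQueuer()
q.enqueue(5, "bulk-ftp-1")
q.enqueue(5, "bulk-ftp-2")
q.enqueue(1, "telnet-keystroke")
print(q.dequeue())  # telnet-keystroke
```

Even this simple scheme makes the limitation clear: the queue can only reorder packets that have already arrived at a congested interface; it cannot slow the senders down.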


Router-based, queuing-only solutions have improved in the recent past. For example, they can now enforce
per-traffic-type aggregate bandwidth rates for any traffic type they can differentiate. But a variety of router
and queuing limitations remain:

Routers manage bandwidth passively, discarding packets and providing no direct feedback to end
systems. Routers use queuing (buffering and waiting) or packet tossing to try to control traffic sources
and their rates.

Queues, by their definition, oblige traffic to wait in lines and add delay to transaction time. Dropping
packets is even worse for TCP applications since it forces the application to wait for a timeout and then
retransmit.

Queues do not proactively control the rate at which traffic enters the wide-area network at the other edge
of a connection.

Queuing-based solutions are not bi-directional and do not control the rate at which traffic travels into a
LAN from a WAN, where there is no queue.

Routers can't enforce per-flow minimum or maximum bandwidth rates.

Routers don't allow traffic to expand beyond its bandwidth limit when congestion and competing traffic
are not issues.

Routers don't enable distinct strategies for high-speed and low-speed connections.

Routers don't allow specification of the maximum number of allowed flows for a given type of traffic or
a given sender.

Queuing addresses a problem only after congestion occurs. It's an after-the-fact approach to a real-time
problem.

Queuing schemes can be very difficult to configure.

Routers don't have the ability to assess the performance their queuing delivers.

Traffic classification is too coarse and overly dependent on port matching and IP addresses. Routers
can't automatically detect and identify many applications as they pass. They can't identify non-IP traffic,
much VoIP traffic, peer-to-peer traffic, games, HTTP on non-standard ports, non-HTTP traffic on port
80, and other types of traffic. Their inability to distinctly identify traffic severely limits their ability to
distinctly control it.

Queuing is a good tactic, and one that should be incorporated into any reasonable performance solution. But
it doesn't stand alone as an effective solution. Although routers don't identify large numbers of traffic types
or enforce a variety of flexible allocation strategies, a strong case could be made that they shouldn't. The first
and primary function of a router is to route. Similarly, although a router has some traffic-blocking features, it
doesn't function as a complete firewall. And it shouldn't. It needs to focus its processing power on prompt,
efficient routing responsibilities.

Packet Marking
Packet marking is a growing trend to ensure speedy treatment across the WAN and across heterogeneous
network devices. A variety of standards have evolved over time. First, CoS/ToS (class and type of service
bits) were incorporated into IP. Then, Diffserv became the newer marking protocol for uniform quality of
service, essentially the same as ToS bits, just more of them. And more recently, MPLS emerged as the
newest standard, integrating the ability to specify a network path with class of service for consistent QoS.


The advantages of packet marking are clear. It's proactive and doesn't wait until a problem occurs before
taking action. It is an industry-standard system that different equipment from different vendors all
incorporate, ensuring consistent treatment. But, as before with queuing, it doesn't stand alone as an effective
solution, as it:

Requires assistance to differentiate different types of traffic and applications so that the proper
distinguishing markers can be applied

Lacks control over the rate at which packets enter the WAN

Cannot apply explicit bandwidth minimums and maximums

Doesn't control the number of allowed flows for a given type of traffic or a given sender

Needs another solution to detect low- and high-speed connections, although it can then implement
appropriate treatment for each

With assistance, packet marking can contribute to excellent control of performance.
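As a concrete illustration, an application (or a shaping device acting on its behalf) can request Expedited Forwarding by setting the DiffServ codepoint in the IP header's TOS byte. A minimal sketch using the standard sockets API; `IP_TOS` is honored on Linux, while other platforms may ignore or restrict it:

```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the upper six bits of the
# TOS byte, so the byte value is 46 << 2 = 0xB8.
EF_DSCP = 46
TOS_BYTE = EF_DSCP << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
# Every datagram sent on this socket now carries the EF marking.
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
sock.close()
```

The mark itself does nothing, which is exactly the limitation listed above: every device along the path must be configured to give marked packets preferential treatment.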

Packeteers Application Traffic Management


Packeteer offers a broad spectrum of tools and technologies to control performance. They include explicit
bits-per-second minimum and maximum bandwidth rates, relative priorities, the ability to precisely target the
right traffic, both inbound and outbound control, and features that address the deficits listed in the sections on
queuing and packet marking, making Packeteer's products complete performance solutions. Together, these
capabilities and more form Packeteer's application traffic management system.
With Packeteer, you can:

Protect the performance of important applications such as SAP and Oracle

Contain unsanctioned and recreational traffic such as KaZaA and AudioGalaxy

Provision steady streams for voice or video traffic to ensure smooth performance

Stop applications or users from monopolizing the link

Reserve or cap bandwidth using an explicit rate, percentage of capacity, or priority

Detect virus, worm, or denial-of-service types of attacks and limit their impact

Strike a balance between consistent access and a bandwidth limit for applications such as Microsoft
Exchange that are both bandwidth-hungry and critically important

Allow immediate passage for small, delay-sensitive traffic such as Telnet

Provision bandwidth equitably between multiple locations, groups, or users


Graphs comparing usage and efficiency before and after using Packeteer's control features create an
impressive picture.

Packeteer's capabilities to control network traffic, and therefore to control application and network
performance, are available in the PacketShaper and PacketShaper Xpress product lines.
PacketShaper Xpress also offers compression, freeing bandwidth that can then be carefully allocated to the
critical applications that need it most by using Packeteer's control capabilities. Compression is covered in a
separate paper. The remainder of this paper delves into details about control features.

Controlled Passage
PacketShaper includes features to specify bandwidth
minimums and/or maximums to one or more applications,
sessions, users, locations, streams, and other traffic subsets.
PacketShaper divides your network traffic into classes. By default, it categorizes passing traffic into a
separate class for each application, service, or protocol, but you can specify a lot of other criteria to separate
traffic by whatever scheme you have in mind. (To see more about traffic classification, see Packeteer's
"Gaining Visibility" paper.)
Traffic classes are extremely important, because whatever PacketShaper can do, it does on a class-by-class
basis. PacketShaper can also apply its features to other subsets of traffic besides classes, such as each user's
traffic or each session's traffic. But the traffic class is your most powerful tool to target your control
strategies to the precise traffic you want without influencing the traffic you don't want. For example, you can
control the subset of traffic that matches: PeopleSoft running on Citrix MetaFrame with an MPLS path label
of 5, heading to the London office.
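Conceptually, a traffic class is a conjunction of matching criteria, and a flow belongs to a class only when every criterion matches. A toy sketch of the PeopleSoft example; the field names are illustrative, not PacketShaper's actual matching rules:

```python
# A class is a set of criteria; a flow matches only if every criterion does.
def matches(flow, criteria):
    return all(flow.get(key) == value for key, value in criteria.items())

peoplesoft_london = {"app": "PeopleSoft", "carrier": "Citrix",
                     "mpls_label": 5, "dst_site": "London"}

flow = {"app": "PeopleSoft", "carrier": "Citrix", "mpls_label": 5,
        "dst_site": "London", "src": "10.1.2.3"}

print(matches(flow, peoplesoft_london))  # True
```

A flow that differs in any one attribute (a different MPLS label, a different destination office) falls through to a broader class, which is what lets control strategies target traffic precisely.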

Per-Class Limits and/or Reservations


Your network probably supports several applications that might be important, but are not urgently
time-sensitive. As explained earlier in "The Nature of Network Traffic," when these applications have
bursty, bandwidth-greedy behavior, trouble starts. Bandwidth-starved urgent applications suffer sluggish
performance, the losers in the fight for bandwidth in bottlenecks at WAN or Internet links.
A PacketShaper partition creates a virtual separate pipe for a traffic class. A partition is appropriate when
you want to limit a greedy application or when you want to protect a vulnerable urgent application. It
contains or protects (or both) all traffic in one class as a whole.
You specify the size of a partitions private link, designate whether it can expand or burst, and optionally cap
its growth. You can define partitions using explicit bandwidth rates or percentages of capacity. Partitions do
not waste bandwidth, as they always share their unused excess bandwidth with other traffic.
PacketShaper allocates bandwidth for partitions' minimum sizes and other bandwidth guarantees first. After
that, remaining bandwidth is divvied up. If allowed to burst, a partition gets a pro-rated share of this
remaining bandwidth, subject to the partition limit and the traffic's priority (indicated in policies, coming
later).
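The allocation order described above (minimums first, then a pro-rated share of the leftover, capped by each partition's limit) can be sketched as a simplified single-pass model. Names and numbers are illustrative, and PacketShaper's actual allocator also weighs policy priorities:

```python
# Toy partition allocator: fund guaranteed minimums first, then share the
# remaining bandwidth among burstable partitions pro rata by unmet demand,
# never exceeding a partition's limit. Unused minimums are loaned out.
def allocate(link_kbps, partitions):
    alloc = {name: min(p["min"], p["demand"]) for name, p in partitions.items()}
    remaining = link_kbps - sum(alloc.values())
    unmet = {name: min(p["demand"], p["limit"]) - alloc[name]
             for name, p in partitions.items()
             if p["burstable"] and min(p["demand"], p["limit"]) > alloc[name]}
    total_unmet = sum(unmet.values())
    for name, want in unmet.items():
        share = remaining * want / total_unmet if total_unmet else 0
        alloc[name] += min(want, share)
    return alloc

t1 = 1544  # T1 link, in Kbps
parts = {
    "sap": {"min": 250, "limit": t1, "demand": 400,  "burstable": True},
    "ftp": {"min": 0,   "limit": t1, "demand": 2000, "burstable": True},
}
print({n: round(v) for n, v in allocate(t1, parts).items()})
# {'sap': 365, 'ftp': 1179}
```

SAP's 250-Kbps minimum is funded first; both partitions then burst into the remainder, and the link is fully used with nothing wasted.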


Partition Usage Examples

Example 1
Problem: Music downloads sometimes swamp a company's network. Although wanting to avoid an outright
ban, management doesn't want employees depending on the company network for abundant, speedy
downloads.
Solution: Partition size=0; burstable; limit=5%. A partition on music traffic reserves no bandwidth but
allows music downloads to take up to 5 percent of capacity.
Behavior: When more important traffic needs bandwidth, music gets none. Even when there are no other
takers, music can access only 5 percent of capacity.

Example 2
Problem: SAP performance at a T1 branch office is terrible.
Solution: Partition size=250 Kbps; burstable; limit=none. A partition on SAP traffic reserves about a sixth
of the link for SAP and allows SAP to use the whole link if it is available.
Behavior: If SAP is active, it gets bandwidth, period. No matter how much other traffic is also active, SAP
gets all the bandwidth it needs up to 250 Kbps. If SAP needs more than 250 Kbps, it gets a pro-rated share
of other available bandwidth. If SAP needs less than 250 Kbps, it loans the unused portion to other
applications.

Example 3
Problem: Microsoft Exchange is vitally important to an organization and needs definite bandwidth to work
effectively. However, the organization's other applications are suffering as Exchange can tend toward
bandwidth-greedy habits.
Solution: Partition size=25%; burstable; limit=65%. A partition on Exchange traffic both contains and
protects.
Behavior: Exchange always performs adequately because it always has access to 25 percent of capacity no
matter what other traffic is present. If Exchange needs more, it gets a pro-rated share of remaining
bandwidth up to 65 percent of capacity. Exchange never takes over the network. If Exchange needs less
than 25 percent, it loans the unused portion to other applications.

Variations on the Partition Theme

Two variations on the partition theme are of particular interest: hierarchical partitions and dynamic
partitions.
Hierarchical partitions are embedded in larger, parent partitions. They enable you to carve a large bandwidth
allotment into managed subsets. For example, you could reserve 40 percent of your link capacity for
applications running over Citrix, and then reserve portions of that 40 percent for each application running
over Citrix: perhaps half for PeopleSoft and a quarter each for Great Plains and Sales Logix.
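Resolving such nested reservations is straightforward percentage arithmetic. A sketch of the Citrix example above, against an assumed 1544-Kbps T1:

```python
# Resolve nested percentage reservations into absolute Kbps figures.
# Each child's percentage applies to its parent's share, not to the link.
def resolve(capacity_kbps, tree):
    out = {}
    for name, spec in tree.items():
        share = capacity_kbps * spec["pct"]
        out[name] = round(share)
        out.update(resolve(share, spec.get("children", {})))
    return out

plan = {"citrix": {"pct": 0.40, "children": {
    "peoplesoft":   {"pct": 0.50},
    "great_plains": {"pct": 0.25},
    "sales_logix":  {"pct": 0.25},
}}}
print(resolve(1544, plan))
# {'citrix': 618, 'peoplesoft': 309, 'great_plains': 154, 'sales_logix': 154}
```

The child partitions always sum to no more than the parent's allotment, so Citrix traffic as a whole can never exceed its 40 percent reservation.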
Dynamic partitions are per-user partitions that manage each user's bandwidth allocation across one or more
applications. In addition, dynamic partitions can be created for a group of users within an IP address range.
Dynamic partitions are useful for situations when you care more about equitable bandwidth allocation than
about how it's put to use.


Dynamic partitions are created as users initiate traffic of a given class. When the maximum number of
dynamic partitions is reached, an inactive slot (if there is one) is released for each new active user.
Otherwise, you choose whether latecomers are refused or squeezed into an overflow area. Dynamic
partitions greatly simplify administrative overhead and allow over-subscription.
For example, a university can give each dormitory student a minimum of 20 Kbps and a maximum of
60 Kbps to use in any way the student wishes. Or a business can protect and/or cap bandwidth for distinct
departments (accounting, human resources, marketing, and so on).
As always, PacketShaper lends any unused bandwidth to others in need.
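The slot-management behavior described above can be sketched as a small state machine. This is illustrative only; PacketShaper also sizes each slot's minimum and maximum rates:

```python
# Toy dynamic-partition table: one slot per active user, reclaiming idle
# slots when full, with a choice of refusing or overflowing latecomers.
class DynamicPartitions:
    def __init__(self, max_slots, overflow=False):
        self.max_slots, self.overflow = max_slots, overflow
        self.active, self.idle, self.overflowed = {}, set(), set()

    def admit(self, user, kbps_min=20):
        if user in self.active:
            return "active"
        if len(self.active) >= self.max_slots:
            if self.idle:                        # reclaim an inactive slot
                self.active.pop(self.idle.pop(), None)
            elif self.overflow:                  # squeeze into overflow area
                self.overflowed.add(user)
                return "overflow"
            else:                                # or refuse latecomers
                return "refused"
        self.active[user] = kbps_min
        return "admitted"

    def mark_idle(self, user):
        self.idle.add(user)

pool = DynamicPartitions(max_slots=2)
print(pool.admit("alice"), pool.admit("bob"), pool.admit("carol"))
pool.mark_idle("alice")
print(pool.admit("carol"))  # alice's idle slot is reclaimed: admitted
```

Because slots are created and reclaimed automatically, an administrator configures one rule (for example, 20-60 Kbps per dormitory student) instead of one partition per user.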

Per-Session Rate Policies


Many applications need to be managed on a flow-by-flow basis rather than as a combined whole. Per-session
control offers many benefits. Per-session policies can:

Time connections' exchanges to minimize time-outs and retransmissions and maximize throughput

Prevent a single session or user from monopolizing the link

Allocate precisely the rate that streaming traffic needs to avoid jitter and ensure good reception

PacketShaper's rate policies can deliver a minimum rate (perhaps zero) for each individual session of a
traffic class, allow that session prioritized access to excess bandwidth, and set a limit on the total bandwidth
it can use. A policy can keep greedy traffic in line or can protect latency-sensitive sessions. As with
partitions, any unused bandwidth is automatically lent to other applications.
For example, VoIP (Voice over IP) can be a convenient and cost-saving option, but only if it delivers good
service consistently. When delay-sensitive voice traffic traverses congested WAN links on a shared network,
the result can be delay, jitter, packet loss, and poor reception. Each flow requires a guaranteed minimum rate
or the service is unusable. After all, a voice stream that randomly speeds up and slows down as packets
arrive in clumps is not likely to attain wide commercial acceptance. Voice traffic needs a per-session
guarantee to prevent annoying jitter.
All types of streaming media (such as distance learning, NetMeeting, QuickTime, Real Audio, Streamworks,
SHOUTcast, Windows Media, and WebEx) can benefit from rate policies with per-session minimums to
secure good performance. Many thin-client or server-based applications also benefit from per-session
minimums to ensure smooth performance.
Print traffic, emails with large attachments, and file transfers are all examples of bandwidth-greedy traffic
that would benefit from rate policies with no guaranteed minimum, a lower priority than that for critical
traffic, and optionally a bandwidth limit.
To see how a per-session bandwidth limit might be useful, consider an organization with abundant file
transfers. Although necessary and important, the file transfers aren't urgent and do tend to overtake all
capacity. Now suppose someone who is equipped with a T3 initiates a file transfer. Assume a partition is in
place and keeps the aggregate total of all transfer traffic in line. The one high-capacity user could dominate
the entire FTP partition, leaving other potential FTP users without resources. Because a partition applies
only to the aggregate total of a traffic class, individual users would still be in a free-for-all situation. A rate
policy that caps each FTP session at 100 Kbps, or any appropriate amount, would keep downloads equitable.
(Figure: before-and-after effects on recreational traffic's bandwidth usage after applying PacketShaper's
rate policies and partitions to select applications.)
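The per-session arithmetic is simple: each flow receives at least its guaranteed minimum and at most its per-session limit. A sketch using the 100-Kbps FTP cap from the example:

```python
# Per-session rate policy: clamp each flow's allocation between its
# guaranteed minimum and its per-session limit (both in Kbps).
def session_rate(demand_kbps, minimum=0, limit=100):
    return max(minimum, min(demand_kbps, limit))

# Three concurrent FTP sessions on very different access links: the
# T3 user's session is held to 100 Kbps, keeping downloads equitable.
print([session_rate(d) for d in (30, 100, 45000)])  # [30, 100, 100]
```

The partition still bounds the class as a whole; the rate policy bounds each individual session within it.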
Admission Control
What happens when so many users swamp a service that it can't accommodate the number and maintain
good performance? Without PacketShaper, performance would degrade for everyone. What other options are
there? You could:

Start denying access to the service once existing users consume available resources

Keep latecomers waiting for the next available slot with just enough bandwidth to string them along

For web services, redirect latecomers to an alternate web page

Another handy feature of rate policies, admission control, offers precisely these three options for
services that need a guaranteed rate for good performance. You can decide how to handle additional sessions
during bandwidth shortages: deny access, squeeze in another user, or, for web requests, redirect the request.
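The decision logic reduces to a small dispatch on the configured mode. A sketch; the mode names are illustrative, not PacketShaper's actual configuration keywords:

```python
# Admission control for a new session: admit if its guaranteed rate fits,
# otherwise apply the configured shortage behavior.
def admit_session(guaranteed_kbps, free_kbps, mode, redirect_url=None):
    if guaranteed_kbps <= free_kbps:
        return ("admit", guaranteed_kbps)
    if mode == "deny":
        return ("deny", 0)
    if mode == "squeeze":       # string latecomers along with what's left
        return ("admit", free_kbps)
    if mode == "redirect":      # web services only
        return ("redirect", redirect_url)

print(admit_session(64, 40, "deny"))     # ('deny', 0)
print(admit_session(64, 40, "squeeze"))  # ('admit', 40)
```

The key property is that admitted sessions keep their full guarantee: latecomers are refused, squeezed, or redirected instead of degrading everyone at once.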

Per-Session Priority Policies


Priority policies allocate bandwidth based on a priority, 0 to 7. Small, non-bursty, latency-sensitive
applications such as telnet are good candidates for priority policies with a high priority. In contrast, you
might give games such as Doom and Quake a priority of 0 on a business network so that people can play
only if the network is not otherwise busy.
The following table of priorities offers guidelines only. Of course, different applications are of varying
urgencies in different environments, so tailor these suggestions to match your own requirements.


Priority    Description

7           Mission-critical, urgent, important, time-sensitive, interactive, transaction-based.
            Examples might include SAP, Oracle, and a sales website.

6           Important, needed, less time-sensitive.
            Examples might include collaboration and messaging systems, such as Microsoft Exchange.

5           Standard service, default, not unusually important or unimportant.
            Examples might include web browsing.

4           Needed, but low-urgency or large file size.
            Examples might include FTP downloads and email.

1, 0        Marginal traffic with little or no business importance.
            Examples might include MP3 music downloads, Internet radio, and games.

Other Per-Session Policies

PacketShaper offers several other policies in addition to rate and priority policies. They include:

Discard policies: Discard policies intentionally block traffic. The packets are simply tossed and no
feedback is sent back to the sender. Usage examples: discard traffic from websites with questionable
content; block attempts to Telnet into your site; block external FTP requests to your internal FTP server.

Never-admit policies: Never-admit policies are similar to discard policies except that the policy informs
the sender of the block. Usage example: redirect music enthusiasts to a web page explaining that streaming
audio is allowed only between 10:00 p.m. and 6:00 a.m.

Ignore policies: Ignore policies simply pass traffic on, not applying any bandwidth management at all.
Usage example: let any traffic going to a destination that is not on the other side of the managed WAN
access link pass unmanaged.

TCP Rate Control


PacketShapers patented TCP Rate Control operates behind the scenes for all traffic with rate policies,
optimizing a limited-capacity link. It overcomes TCPs shortcomings, proactively preventing congestion on
both inbound and outbound traffic. TCP Rate Control paces traffic, telling the end stations to slow down or
speed up. Its no use sending packets any faster if they will be accepted only at a particular rate once they
arrive. Rather than discarding packets from a congested queue, TCP Rate Control paces packets to prevent
congestion. It forces a smooth, even flow rate that maximizes throughput.
TCP Rate Control detects real-time flow speed, forecasts packet-arrival times, meters acknowledgments
going back to the sender, and modifies the advertised window sizes sent to the sender. Just as a router
manipulates a packet's header information to influence the packet's direction, PacketShaper manipulates a
packet's header information to influence the packet's rate.
Imagine putting fine sand through a straw or small pipe. Sand passes through the straw evenly and quickly.
Now imagine putting chunky gravel through the same straw. The gravel gets stuck and arrives in clumps.
PacketShaper conditions traffic so that it becomes more like sand than gravel. These smoothly controlled
connections are much less likely to incur packet loss, and, more importantly, the end user experiences
consistent service.
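The pacing idea can be sketched in a few lines. This is an illustrative model, not Packeteer's implementation: the advertised window is capped at the bandwidth-delay product for a target rate, and ACKs are spaced so that data leaves the sender evenly. The function names and the 1460-byte segment size are assumptions made for the example.

```python
# Illustrative sketch of the pacing math (not Packeteer's implementation).

def paced_window(target_bps, rtt_s, mss=1460):
    """Advertised window (bytes) that caps a sender at target_bps.

    A TCP sender can have at most one advertised window of data in
    flight per round trip, so window = rate * RTT (the bandwidth-delay
    product), rounded down to whole segments, never below one segment.
    """
    window = int(target_bps / 8 * rtt_s)
    return max(mss, (window // mss) * mss)

def ack_interval(target_bps, mss=1460):
    """Seconds to space ACKs so each segment departs on an even schedule."""
    return mss * 8 / target_bps

# A 256 kbps flow over a 100 ms round trip:
win = paced_window(256_000, 0.100)   # bytes the sender may keep in flight
gap = ack_interval(256_000)          # spacing between released ACKs
```

At 256 kbps over a 100 ms round trip, the window works out to two 1460-byte segments and ACKs are released roughly every 46 ms — the sand-like, evenly spaced flow described above.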

Controlling Bandwidth and Performance

Packeteer, Inc.

12

For more details about TCP Rate Control, see Appendix B: How TCP Rate Control Works at the end of
this paper.

UDP Rate Control and Queuing


Unlike TCP, UDP sends data to a recipient without establishing a connection and does not attempt to verify
that the data arrived intact. Therefore, UDP is referred to as an unreliable, connectionless protocol. The
services UDP provides are minimal: port-number multiplexing and an optional checksum for error checking. As a result, UDP uses less time, bandwidth, and processing overhead than TCP.
While UDP doesn't offer a high level of error recovery, it still has appeal for certain types of operations.
UDP is used mostly by applications that require fast delivery and are not concerned with reliability; DNS,
for example. Some UDP applications, such as RealAudio and VoIP, generate persistent, session-oriented
traffic. Whenever an application uses UDP for transport, the application must take responsibility for
managing the end-to-end connection, handling packet retransmission and other flow-control services native
to TCP.
Because UDP doesn't manage the end-to-end connection, it doesn't get feedback regarding real-time
conditions, and it can't prevent or adapt to congestion. Therefore, UDP can end up contributing significantly
to an overabundance of traffic, impacting all protocols: UDP, TCP, and non-IP included. In addition, latency-sensitive flows, such as VoIP, can be so delayed as to be useless.
UDP Control Mechanisms
PacketShaper combines techniques in rate control and queuing to deliver control over performance to UDP
traffic.
PacketShaper is very effective in controlling outbound UDP traffic. When a client requests data from a
server, PacketShaper intervenes and paces the flow of outbound data, regulating the flow of UDP packets
before they traverse the congested access link. It can speed urgent UDP traffic or give streams steady access.
Management of inbound traffic presents a bigger challenge. By the time inbound UDP traffic reaches
PacketShaper, it already has crossed the expensive, congested access link, and PacketShaper cannot directly
control the link rate. However, PacketShaper can control inbound UDP traffic's rate to the destination host.
PacketShaper queues incoming UDP packets on a flow-by-flow basis when they are not scheduled for
immediate transfer, based on priority and competing traffic. PacketShaper's UDP queues implement an
important and helpful addition: a UDP delay bound. The delay bound defines how long packets can remain
buffered before they become too old to be useful. For example, a delay bound of 200 ms is appropriate for a
streaming audio flow. The delay bound prevents PacketShaper from holding traffic so long that it becomes useless or triggers retransmissions.
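The delay-bound behavior can be sketched as a simple per-flow queue. This is an assumed, simplified model for illustration, not PacketShaper's internal data structure:

```python
# Simplified sketch of a delay-bounded, per-flow UDP queue.

from collections import deque

class DelayBoundedQueue:
    def __init__(self, delay_bound_s=0.200):
        self.delay_bound = delay_bound_s
        self.packets = deque()                 # (enqueue_time, packet)

    def enqueue(self, now, packet):
        self.packets.append((now, packet))

    def dequeue(self, now):
        """Return the next packet still within its delay bound, or None."""
        while self.packets:
            queued_at, packet = self.packets.popleft()
            if now - queued_at <= self.delay_bound:
                return packet
            # Too old to be useful (e.g., stale audio): drop and keep looking.
        return None

q = DelayBoundedQueue(0.200)
q.enqueue(0.00, "frame-1")
q.enqueue(0.15, "frame-2")
next_packet = q.dequeue(0.30)   # frame-1 is 300 ms old and is discarded;
                                # frame-2 (150 ms old) is delivered
```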
Either priority or rate policies are appropriate for UDP traffic classes, depending on the UDP traffic and its
goals:

- A priority policy is best for UDP traffic that is transaction-oriented.

- A rate policy is best for persistent UDP traffic because its guaranteed bits-per-second option can ensure a minimum rate for each UDP flow.

Many of PacketShaper's other control mechanisms are also appropriate for UDP traffic. UDP traffic
management is one part of a comprehensive strategy for managing the bandwidth and performance of many types of
traffic and applications using the variety of PacketShaper's control features.


Packet Marking for MPLS and ToS


As discussed earlier, packet marking is a growing trend to ensure speedy treatment across the WAN and
across heterogeneous network devices. CoS, ToS, and Diffserv technologies evolved to boost QoS.
More recently, MPLS emerged as the newest standard, integrating the ability to specify a network path with
class of service for consistent QoS. Network convergence of voice, video, and data has spurred interest in
MPLS, with the goal of having one network that can support appropriate paths for each service. MPLS is a
standards-based technology to improve network performance for select traffic. Traffic normally takes a
variety of paths from point A to point B, depending upon each router's decisions on the appropriate next hop.
With MPLS, you define specific paths for specific traffic, identified by a label put in each packet.
PacketShaper can classify, mark, and remark traffic based on IP CoS/ToS bits, Diffserv settings, and MPLS
labels, allowing traffic types to have uniform end-to-end treatment by multi-vendor devices. By attending to
marking and remarking, PacketShaper can act as a type of universal translator, detecting intentions in one
protocol and perpetuating those intentions with a different protocol as it forwards the packets.
For MPLS, PacketShaper adds to the performance gains possible with MPLS alone. It attends to MPLS
administrative overhead automatically, classifies traffic based on MPLS labels and/or experimental bits, tags
an application's unlabelled traffic (more granularity than with an ingress router), and/or swaps or removes
labels (instead of burdening an egress router). When PacketShaper adds layer-7 application awareness to
MPLS installations, you can use the most appropriate paths for precisely the right traffic.
Incidentally, PacketShaper offers all the same features for VLANs that it does for MPLS: classifying
traffic by VLAN; pushing, popping, and swapping VLAN identifiers and priorities; and putting each
VLAN's traffic on the right path to its destination.
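For reference, end hosts can apply the same kind of ToS/DSCP marking themselves through the standard sockets API. The sketch below is illustrative; the DSCP value chosen (Expedited Forwarding, common for voice) and the helper names are assumptions for the example, not PacketShaper behavior:

```python
# Illustrative: marking the IP ToS/DSCP byte from an end host with the
# standard sockets API.

import socket

DSCP_EF = 46          # Expedited Forwarding, a common choice for voice

def tos_byte(dscp, ecn=0):
    """DSCP occupies the top 6 bits of the old ToS byte; ECN the low 2."""
    return (dscp << 2) | ecn

def open_marked_udp_socket(dscp):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(dscp))
    return s

sock = open_marked_udp_socket(DSCP_EF)   # datagrams now carry ToS 0xB8
sock.close()
```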

Scheduling
Sometimes organizations need different control strategies at different times or for different days. For
example:

- A middle school prohibits instant messaging during class hours but allows it during lunch or after school.

- A company's network administrator blocks games and MP3 music downloads on weekdays, but allows them on weekends.

- A sales-ordering application gets twice its usual bandwidth in the last two days of the month because the sales personnel typically deliver the most orders right before each monthly deadline.

With PacketShaper's scheduling features, you can control performance differently at different times. You can
vary your configuration details based on the day or the time of day. The choice of day can be daily,
weekends, weekdays, specific dates, specific days of the week, and/or specific dates of the month.
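Conceptually, a scheduled configuration is just a lookup from day and time to a policy set, as in this illustrative sketch (the policy names are invented for the example):

```python
# Illustrative sketch: choose a control configuration by day and hour,
# mirroring the school and business examples above.

from datetime import datetime

def active_policy(now: datetime) -> str:
    is_weekday = now.weekday() < 5            # Monday=0 .. Friday=4
    if is_weekday and 8 <= now.hour < 15:     # class hours
        return "block-instant-messaging"
    if is_weekday:
        return "block-games-and-mp3"
    return "permissive-weekend"

policy = active_policy(datetime(2004, 3, 10, 10, 30))   # a Wednesday morning
```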

Putting Control Features to Use


You've seen a variety of mechanisms to control traffic and its performance. But discussions of tools, no
matter how powerful, aren't really interesting until you put them to use and see their value. That's what we'll
do in this section.

Characterizing Traffic
Managing bandwidth allocation for today's traffic diversity is a definite challenge. Network traffic and
applications do not share the same characteristics or requirements. We don't have the same performance


expectations for all traffic. Therefore, before choosing a control strategy and features to implement that
strategy, you must first characterize your goals and traffic.
First, consider whether your primary concern is application performance or traffic load. Typically, if you are
concerned with keeping customers or employees productive, then you are concerned about application
performance. But if you supply bandwidth to users or organizations, and you are not involved with the
applications that run over that bandwidth, then you are concerned about capacity and traffic volume.
Examples where Performance Is Foremost:
- An enterprise providing applications to staff
- A service provider offering managed application services to subscribers
- A business using B2B or B2C applications to conduct commerce

Examples where Load Is Foremost:
- A service provider that offers contracted amounts of bandwidth to businesses or individuals
- A university that supplies each dormitory room with an equitable portion of bandwidth

If your primary concern is load rather than performance, then you can skip ahead to the section titled
Suggestions and Examples. In addition, you might want to consult the PacketShaper ISP paper.
A good initial approach to managing performance is to manage two traffic categories proactively: traffic
that needs to have its performance protected, and traffic that tends to swell to take an unwarranted amount of
capacity.
For each type of traffic you want to manage, consider its behavior with respect to four characteristics:
importance, time sensitivity, size, and jitter. Each characteristic below has an associated explanation and
question to ask yourself, as well as several examples.
Importance
Sometimes the same application can be crucial to one organization's function and just irritating noise on
another's network.
Ask yourself: Is the traffic critical to organizational success?
Important:
- SAP to a manufacturing business
- Quake to a provider of gaming services
- PeopleSoft to a support organization
- Email to a business

Not Important:
- Real Audio to a non-related business
- Games in a business context
- Instant messaging in a classroom

Time Sensitivity
Some traffic, although important, is not particularly time sensitive. For example, for most organizations, print
traffic is an important part of business. But employees and productivity will probably not be impacted if a
print job takes another few seconds to make its way to the printer. In contrast, any critical application that
leaves a user poised on top of the Enter key waiting for a response is definitely time sensitive.
Ask yourself: Is the traffic interactive or particularly latency sensitive?
Time Sensitive:
- Telnet
- Citrix-based, interactive applications
- Oracle

Not Time Sensitive:
- Print
- Email
- File transfers


For important and time-sensitive traffic, consider using a high priority in a priority policy (for small flows)
or in a rate policy (for other flows). Consider a partition with a minimum size.
Size
A traffic session that tends to swell to use increasing amounts of bandwidth and produce large surges of
packets is said to be bursty. TCP's slow-start algorithm creates or exacerbates traffic's tendency to burst.
As TCP attempts to address the sudden demand of a bursting connection, congestion and retransmissions
occur.
Applications such as FTP, multimedia components of HTTP traffic, print, and music downloads are
considered bursty since they generate large amounts of download data.
Users' expectations for this traffic depend on the context. For example, if a large multimedia file is being
downloaded for later use, the user may not require high speed as much as steady progress and the assurance
that the download won't have to be restarted.
Ask yourself: Are flows large and bandwidth hungry, expanding to consume all available bandwidth?
Large and Bursty:
- Music downloads
- Email with large attachments
- Print

Small and Not Bursty:
- Telnet
- ICMP
- TN3270

For large and bursty traffic, consider a partition with a limit. If the bursty traffic is important, consider a
partition with both a minimum and a maximum size. Consider a rate policy with a per-session limit if you are
concerned that one high-capacity user might impact others using the same application. Consider a policy
with a low or medium priority, depending on importance.
For small, non-bursty flows, consider a priority policy. Use a high priority if the small flow is important.
Jitter
An application that is played (video or audio) as it arrives at its network destination is said to stream. A
streaming application needs a minimum bits-per-second rate to deliver smooth and satisfactory performance.
Streaming media that arrives with stutter and static is not likely to gain many fans. On the other hand, too
many fans can undermine performance for everyone, including users of other types of applications.
Ask yourself: Does the traffic require smooth, consistent delivery to retain its value?
Prone to Jitter:
- VoIP
- Windows Media
- Real Audio
- Distance-learning applications

Not Prone to Jitter:
- Email
- Print
- MS SQL
- AppleTalk


For jitter-prone traffic, especially if it is also important, consider a rate policy with a per-session guarantee.
Use a partition with a limit and admission control features if too many users might usurp all bandwidth.
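The guidance in this section can be condensed into a small decision helper. This paraphrases the suggestions above; it is not product logic, and the returned strings are shorthand for the fuller recommendations:

```python
# Condensed paraphrase of this section's policy suggestions.

def suggest_policy(important, time_sensitive, bursty, jitter_prone):
    if jitter_prone:
        return "rate policy with per-session guarantee; partition with limit"
    if bursty and important:
        return "partition with min and max; rate policy, medium priority"
    if bursty:
        return "partition with a limit; rate policy, low priority"
    if important and time_sensitive:
        return "priority policy, high priority"
    return "priority policy, default priority"

voip = suggest_policy(True, True, False, True)     # per-session guarantee
telnet = suggest_policy(True, True, False, False)  # high-priority policy
mp3 = suggest_policy(False, False, True, False)    # contained with a limit
```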

Suggestions and Examples


The following table lists a few common traffic types, their usual undesired behavior, the desired behavior,
and control configuration suggestions.

FTP
Usual undesired behavior: Stuck progress indicators; peaks clog WAN access, slowing more time-sensitive applications.
Desired behavior: No stalled sessions; sustained download progress; paced bursts.
Suggestion: Rate policy with 0 guaranteed, burstable, medium priority; partition to contain the aggregate of all traffic.

Web Browsing
Usual undesired behavior: Unpredictable display and delay times.
Desired behavior: Prompt, consistent display.
Suggestion: Rate policy with 0 guaranteed, burstable, medium priority, optional per-session cap.

Web-Based Applications
Usual undesired behavior: Insufficient and/or inconsistent response times.
Desired behavior: Prompt, consistent response.
Suggestion: Rate policy with 0 guaranteed, burstable, high priority.

Telnet
Usual undesired behavior: Slow, inconsistent performance.
Desired behavior: Immediate transfer for prompt response times; the small flow size won't impact others.
Suggestion: Priority policy with a high priority.

Music downloads in a business environment
Usual undesired behavior: Bursts and abundant downloads clog WAN access and undermine time-sensitive applications.
Desired behavior: Contained downloads using a small portion (or none) of network resources.
Suggestion: Rate policy with 0 guaranteed, burstable, priority 0; partition to contain the aggregate of all users to less than 5 percent of capacity (or as desired).

SAP
Usual undesired behavior: Slow and unpredictable response times.
Desired behavior: Swift, consistent performance.
Suggestion: Rate policy with 0 guaranteed, burstable, high priority; partition with a minimum size to protect all SAP traffic.

A contracted amount of bandwidth (for example, ISP subscribers)
Usual undesired behavior: Some users claim more than their fair share and others are shorted.
Desired behavior: Users get what they pay for.
Suggestion: Dynamic partition to allocate each user's or each subnet's bandwidth equitably.

Voice over IP
Usual undesired behavior: Delayed, jittery conversations.
Desired behavior: Smooth, jitter-free reception.
Suggestion: Rate policy with optional per-session guarantee, burstable, high priority, optional per-session cap.

The first appendix is a detailed example of combining PacketShaper's control features, and it employs VoIP
as its subject.


For More Information


In summary, PacketShaper's control capabilities offer policy-based bandwidth allocation to boost or curb
application performance over the WAN and Internet. If you'd like more information, consult Packeteer's web
site (www.packeteer.com) or call 408-873-4400 or 800-697-2253.

[Figure: PacketShaper's control features make a dramatic impact on recreational traffic's bandwidth usage, with control off versus control on.]

[Figure: PacketShaper's control features have an equally dramatic impact on a critical application's time on the network and response time.]


Appendix A: Preparing for VoIP A Detailed Example


Suppose you are planning your company's migration from a conventional phone system to VoIP. You want
to avoid the performance downfalls that frequently plague a move from a circuit-switched PBX to an IP-PBX
system. Your carrier offers an MPLS-based IP VPN service for your WAN backbone that looks promising,
as MPLS can offer different classes of service for different portions of your WAN traffic. Initial usage
estimates indicate that you need to be able to support about 30 concurrent conversations at your main site,
which has a 3 Mbps WAN link. Smaller branch offices have smaller links and will have fewer concurrent
conversations. You want good voice quality without compromising performance for critical applications, the
original inhabitants of your IP network.
You made a list of some of the questions you need answered before you proceed:

- How much bandwidth will voice traffic need?

- How will existing applications perform once voice traffic is active? Do you need to upgrade capacity?

- What will take care of QoS at the gaps between the LAN and your carrier's MPLS backbone?

- Will your carrier's MPLS system be able to spot all the different types of voice traffic and give each the correct MPLS paths?

- Specifically, how will you control voice traffic for optimum performance? What will happen if every employee suddenly decides to use the phone?

We'll examine the answers one by one.


How much bandwidth will your voice traffic need?
Different codecs (analog-to-digital voice coders) package voice streams into different-sized packets. Many
customers select the G.729 codec for delivering voice across the WAN; it states an 8 Kbps rate requirement.
Manufacturers typically omit the overhead for control, headers, and forward error correction from their
bandwidth quotes. An 8 Kbps codec will actually need 17 to 21 Kbps due to this overhead. In addition, it is
best to overstate the needs of UDP traffic by 15 to 20 percent. Putting this all together, we get about 24 or 25
Kbps per session.
Check with your vendor to find out your codec's rate requirements. Or, if you already have voice deployed
on your network, PacketShaper can measure how much bandwidth one session actually uses, and you'll
know for sure.
Multiplying your main site's requirement of 30 concurrent sessions by 25 Kbps gives about 750 Kbps as the
capacity needed during peak call times.
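The arithmetic above can be written out as a quick sketch; the overhead and headroom figures are the rules of thumb stated in the text, not exact values for every codec, and the function names are invented:

```python
# Per-call and peak voice bandwidth, using the paper's rules of thumb.

def per_call_kbps(codec_kbps=8, overhead_kbps=13, udp_headroom=0.20):
    """Codec rate plus header/control overhead, padded for UDP estimation."""
    return (codec_kbps + overhead_kbps) * (1 + udp_headroom)

def peak_voice_kbps(concurrent_calls, kbps_per_call=25):
    """Capacity needed at peak call volume."""
    return concurrent_calls * kbps_per_call

per_call = per_call_kbps()     # about 25 kbps for a G.729 call
peak = peak_voice_kbps(30)     # 750 kbps for 30 concurrent calls
```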
How will existing applications perform once voice traffic is active? Do you need to upgrade
capacity?
1. Get a before picture.
First, you need to find out more about the application traffic you already have, whether you make good
use of your existing bandwidth, and if you could reclaim any capacity with more stringent management.
Using PacketShaper's visibility features (described in another paper, Gaining Visibility), find out
precisely which applications are already running on your network, the bandwidth they consume, top
applications, unmanaged response times, the amount of bandwidth wasted due to inefficiency, and other
key information. Usage and response-time figures give you a "before" picture: before traffic management
and before voice traffic.


2. Implement a basic performance strategy.


Using PacketShaper's control features, rein in a couple of applications that take more than their fair share
of bandwidth and protect a couple of applications that don't have sufficient performance. Use the
policies and partitions suggested for different types of traffic in the previous section. For partition sizes
and limits, use a percentage of capacity instead of explicit rates, so that your definitions will still work if
you change capacity.
3. Get an intermediate picture.
Let your applications run for a while and check their usage and response times with PacketShaper's
visibility features. You now have an intermediate picture: after some traffic management but before
voice traffic.
4. Simulate peak call volumes.
You can use PacketShaper to see how performance would be if you were experiencing peak call volumes
(even without really having any voice traffic). Set PacketShaper's managed link size to your real
capacity minus the peak voice capacity you calculated in the first question. For our example, that
would yield 2.25 Mbps (3 Mbps - 750 Kbps).
5. Get an after picture.
Let your applications run for a while and check their usage and response times with PacketShaper's
visibility features. You now have an "after" picture: after traffic management and with only the
bandwidth that would remain once voice traffic takes its share.
6. What did you find out?
Are critical applications suffering poor response times after you reduced your supposed capacity? Or did
your protective partitions and policies keep them intact? How do the results compare with the original
response times, before you added any control features? Could you scale back any other unimportant
traffic to free up more capacity?
Another suggestion to stretch bandwidth is to use compression on most of your normal traffic to free up
more capacity for voice or critical applications. PacketShaper Xpress offers control features, discussed in
this paper, as well as compression, covered in another paper.
7. Make a decision.
If performance was adequate after controlling bandwidth allocation, then you need not upgrade capacity
before deploying voice. If it was not, then you are better off with more capacity. You can use these same
techniques to figure out how much more capacity you need.
For example, suppose your critical applications were too slow even at the full 3 Mbps when unmanaged.
Then they were fine at 3 Mbps after a couple of partitions and policies. If they were still fine with the
2.25 Mbps link size, then you dont need to upgrade. But if they were a bit too slow at the 2.25 Mbps
link size, even with management in place, you could try out 2.5 Mbps and see if that made the difference.
If so, you would only need to upgrade 250 Kbps instead of the whole 750 Kbps needed for voice traffic.
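The capacity arithmetic in steps 4 through 7 reduces to a few lines; the numbers are this example's, and the helper name is invented:

```python
# Capacity decision arithmetic for the VoIP planning example.

LINK_KBPS = 3000     # 3 Mbps main-site link
VOICE_KBPS = 750     # estimated peak voice load

simulated_link_kbps = LINK_KBPS - VOICE_KBPS   # the 2.25 Mbps what-if size

def upgrade_kbps(acceptable_link_kbps):
    """Extra capacity to buy: the shortfall between the smallest link at
    which managed applications still performed well and the simulated size."""
    return max(0, acceptable_link_kbps - simulated_link_kbps)

no_upgrade = upgrade_kbps(2250)   # apps fine at the simulated size
partial = upgrade_kbps(2500)      # buy 250 kbps rather than the full 750
```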
What will take care of QoS at the gaps between the LAN and your carrier's MPLS backbone?
Voice traffic that travels only within your LANs is probably okay. It's the congested speed-conversion
bottlenecks at the LAN-WAN transitions that are the most problematic for introducing delays and latency.
That is precisely where PacketShaper sits, and precisely where PacketShaper controls access and
performance.


Will your carrier's MPLS system be able to spot all the different types of voice traffic and
give each the correct MPLS paths?
Not necessarily. Getting the right MPLS label attached to a flow depends on being able to identify and tag
each voice protocol appropriately. Ingress MPLS routers are responsible for assigning labels and paths to
flows. But their classification abilities are usually weak.
Voice clients typically use UDP streams. They have different flows and protocols for initiation, control, and
data flows. For example, the H.323 protocol starts a conversation on one port (for H.323), jumps to another
port (for Q.931), and eventually splits up into a data flow (of RTP) and control flow (of RTCP). Voice traffic
has many associated protocols, including: Clarent, CU-SeeMe, Dialpad, H.323, I-Phone, Media Gateway
Control Protocol, H.248/Megaco, MCK Communications, Micom, Net2Phone, T.120, Skinny Client Control
Protocol (Skinny), SIP, VDOPhone, RTP, and RTCP.
That's a lot of protocols and applications to expect the MPLS routers to detect and tag appropriately while
they attend to routing responsibilities. In addition, the MPLS routers must appropriately identify and tag the
other applications that share your IP network so that they, too, follow their correct MPLS paths. More often
than not, it doesn't happen. The result? Rudimentary traffic classification, much traffic following
inappropriate MPLS paths, and less than optimal performance.
When PacketShaper sits before an ingress MPLS router, it is already employing advanced, application-aware
classification before invoking appropriate bandwidth-allocation controls to ease the bottleneck and expedite
voice traffic. It might as well use its granular classification to assign whatever MPLS labels your MPLS
backbone needs. It's no extra trouble, and it offloads the routers. Precise classification is one of
PacketShaper's strongest differentiators. PacketShaper automatically detects and identifies all the voice
protocols listed above as well as your other IP applications. Use its strengths to get the most from your voice
and MPLS infrastructure.
Specifically, how will you control voice traffic for optimum performance?
1. Protect bandwidth for VoIP as a whole.
Define a partition for all VoIP traffic to reserve a portion of your WAN access link for VoIP. When there
is insufficient VoIP traffic to use the whole partition, the unused portion will go to other applications.
(Covered in Per-Class Limits and/or Reservations)
2. Clear easy passage for VoIP setup and control traffic.
Both setup and control traffic types for VoIP (H.245, Q.931, RTCP, SIP, Megaco, MGCP, Skinny,
MCK-Signaling, and so on) are characterized by small and somewhat urgent flows. Setup traffic occurs
at the start of sessions, and control traffic is intermittent throughout. PacketShaper identifies each setup
and control protocol and creates their traffic classes automatically. You can assign priority policies at a
high priority (5, 6, or 7) to these VoIP classes. (Covered in Per-Session Priority Policies)
3. Allocate steady streams for voice data.
The voice streams themselves (RTP, for example) are large and data laden. PacketShaper creates their
traffic classes automatically. A rate policy with a per-flow bandwidth guarantee paves the way for
smooth, jitter-free reception. The rate policy serves to:

- Reserve the appropriate per-session bandwidth to ensure smooth streaming performance
- Gain the benefits of TCP Rate Control and reduce retransmissions that waste bandwidth
- Indicate the relative importance of your voice traffic
- Insulate voice users from impacting each other


For your rate policy, use a guaranteed rate of the minimum bits-per-second that are required for
acceptable session quality (you determined the rate in the first question of this example). Make your
policy burstable with a relatively high priority. If you wish, you can apply a limit to the rate policy to
prevent one high-capacity VoIP user from claiming an unnecessarily large portion of bandwidth. For our
earlier example, you might create a rate policy with 25 Kbps guaranteed, burstable at priority 7, with a
limit of 50 Kbps. (Covered in Per-Session Rate Policies)
The rate policy's default delay bound of 200 milliseconds is appropriate. (Covered in UDP Control
Mechanisms)
4. Handle over-subscription.
Choose an Admissions Control strategy to handle too many concurrent conversations. You can refuse
new connections once the number of VoIP streams climbs sufficiently to use the entire VoIP partition (or
allow the partition to burst to the link size, if desired). If too many conversationalists request the phone at
the same time (more than 30 at the main site, in our example), then latecomers will have to wait, but early
birds will not see any degradation in quality. Other strategy choices are available as well; the choice is
up to you.
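The refuse-new-connections strategy amounts to a simple admission counter sized by the VoIP partition. The sketch below is an assumed model for illustration, not PacketShaper's implementation:

```python
# Assumed model of admission control: admit a new call only while the
# VoIP partition has room for another guaranteed stream.

class VoipAdmission:
    def __init__(self, partition_kbps=750, per_call_kbps=25):
        self.max_calls = partition_kbps // per_call_kbps   # 30 calls here
        self.active = 0

    def try_admit(self):
        """True if the call proceeds at full quality; False if refused."""
        if self.active < self.max_calls:
            self.active += 1
            return True
        return False          # latecomer waits; early calls are protected

    def release(self):
        self.active = max(0, self.active - 1)

ctrl = VoipAdmission()
results = [ctrl.try_admit() for _ in range(31)]   # the 31st call is refused
```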
5. Assign MPLS labels.
You already have separate classes for the various types of voice traffic: setup, control, and data. Have
PacketShaper push the appropriate MPLS label onto each packet for each type of traffic so that it takes the
correct path through your MPLS backbone. (Covered in Packet Marking for MPLS and ToS and
earlier in this example)
6. Control non-VoIP traffic appropriately.
Keep the bandwidth demands of bandwidth-greedy applications (such as FTP, print, and recreational
traffic) reasonable. Assign partitions with limits and policies with lower priorities to these types of traffic
so that they leave room for VoIP. (Covered in Per-Class Limits and/or Reservations, Per-Session
Rate Policies, Per-Session Priority Policies, as well as many other places in this paper.)


Appendix B: How TCP Rate Control Works


TCP Review
The Transmission Control Protocol (TCP) provides connection-oriented services for the protocol suite's
application layer; that is, a client and a server must establish a connection to exchange data. TCP transmits
data in segments encased in IP datagrams, along with checksums, used to detect data corruption, and
sequence numbers to ensure an ordered byte stream. TCP is considered to be a reliable transport mechanism
because it requires the receiving computer to acknowledge not only the receipt of data but also its
completeness and sequence. If the sending computer doesn't receive notification from the receiving
computer within an expected time frame, the sender times out and retransmits the segment.
TCP uses a sliding-window flow-control mechanism to control throughput over wide-area networks. As the
receiver acknowledges initial receipt of data, it advertises how much data it can handle, called its window
size. The sender can transmit multiple packets, up to the recipients window size, before it stops and waits for
an acknowledgment. The sender fills the pipe, waits for an acknowledgment, and fills the pipe again.
While the receiver typically handles TCP flow control, TCP's slow-start algorithm is a flow-control
mechanism managed by the sender that is designed to take full advantage of network capacity. When a
connection opens, only one packet is sent until an ACK is received. For each received ACK, the sender can
double the transmission size, within bounds of the recipient's window.
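In miniature, slow start looks like the following sketch: the congestion window starts at one segment and doubles as acknowledgments return, capped by the receiver's advertised window. Measuring windows in segments and doubling once per round trip are simplifications made for the example:

```python
# Slow start in miniature (a per-round simplification of per-ACK doubling).

def slow_start_windows(rwnd_segments, rounds):
    """Segments the sender may have in flight in each early round trip."""
    windows, cwnd = [], 1
    for _ in range(rounds):
        windows.append(min(cwnd, rwnd_segments))
        cwnd *= 2                  # doubles as each window is acknowledged
    return windows

growth = slow_start_windows(rwnd_segments=32, rounds=7)
# -> [1, 2, 4, 8, 16, 32, 32]: exponential growth until the advertised
#    window caps it
```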
TCP's congestion-avoidance mechanisms attempt to alleviate the problem of abundant packets filling up
router queues. TCP increases a connections transmission rate using the slow-start algorithm until it senses a
problem and then it backs off. It interprets dropped packets and/or timeouts as signs of congestion. The goal
of TCP is for individual connections to burst on demand to use all available bandwidth, while at the same
time reacting conservatively to inferred problems in order to alleviate congestion.
TCP Rate Control
Traffic consists of chunks of data that accumulate at access links where speed conversion occurs. To
eliminate the chunks, TCP Rate Control paces or smoothes the flow by detecting a remote user's access
speed, factoring in network latency, and correlating this data with other traffic flow information. Rather than
queuing data that passes through the box and metering it out at the appropriate rate, PacketShaper induces the
sender to send just-in-time data. By changing the traffic chunks, or bursts, to optimally sized and timed
packets, PacketShaper improves network efficiency, increases throughput, and delivers more consistent,
predictable, and prompt response times.
TCP Rate Control uses three methods to control the rate of transmissions:

- Detects real-time flow speed
- Meters acknowledgments going back to the sender
- Modifies the advertised window sizes sent to the sender

Just as a router manipulates a packet's header information to influence the packet's direction, PacketShaper
manipulates a packet's header information to influence the packet's rate.
TCP autobaud is Packeteer's technology that allows PacketShaper to automatically detect the connection
speed of the client or server at the other end of the connection or on the other side of the Internet. This
speed-detection mechanism allows PacketShaper to adapt bandwidth-management strategies even as
conditions vary.


PacketShaper is a predictive scheduler that anticipates bandwidth needs and meters the ACKs and window
sizes accordingly. It uses autobaud, known TCP behaviors, and bandwidth-allocation policies as predictive
criteria.
PacketShaper changes the end-to-end TCP semantics from its position in the middle of a connection. First,
using autobaud, it determines a connection's transfer rate to use as a basis on which to time transmissions.
PacketShaper intercepts a transaction's acknowledgment and holds onto it for the amount of time that is
required to smooth the traffic flow and increase throughput without incurring retransmission timeout. It also
supplies a window size that helps the sender determine when to send the next packet and how much to send
in order to optimize the real-time connection rate.
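The hold time for an intercepted ACK can be derived from the bytes that ACK releases and the flow's target rate. The sketch below is a hypothetical simplification of that idea, not PacketShaper's actual scheduler: if forwarding the ACK now would let the sender exceed the target rate, the ACK is delayed just long enough to restore even spacing.

```python
def ack_hold_time(bytes_released: int, target_bps: float,
                  elapsed_since_last_ack: float) -> float:
    """How long to hold an ACK so the sender's transmissions average
    target_bps.  If the flow is already at or below the target rate,
    the ACK is forwarded immediately (hold time 0)."""
    ideal_spacing = bytes_released * 8 / target_bps
    return max(0.0, ideal_spacing - elapsed_since_last_ack)
```

For example, an ACK releasing 1500 bytes on a 128 kbps target should arrive every 93.75 ms; if only 50 ms have passed since the last ACK, the shaper holds it roughly 43.75 ms.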
Evenly spaced packet transmissions yield significant multiplexing gains in the network. As packet bursts are
eliminated, network utilization can increase up to 80 percent. Packet spacing also avoids the queuing bias
imposed by weighted fair queuing schemes, which force packet bursts to the end of a queue, giving
preference to low-volume traffic streams. Thus, sand-like packet transmissions yield increased network
utilization and proceed cleanly through weighted fair queues.
In this packet diagram, PacketShaper intervenes and paces the data transmission to deliver predictable
service. The sequence described by the packet diagram includes:

• A data segment is sent to the receiver.
• The receiver acknowledges receipt and advertises an 8000-byte window size.
• PacketShaper intercepts the ACK and determines that the data must be transmitted more evenly.
Otherwise, subsequent data segments will queue up and packets will be tossed because insufficient
bandwidth is available. In addition, more urgent and smaller packets from interactive applications
would be held behind the flood of this more bulky data.
• PacketShaper revises the ACK to the sender; the sender immediately emits data according to the
revised window size.
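The sequence above can be quantified with the bandwidth-delay product: the window a sender needs to sustain a given rate is simply that rate multiplied by the round-trip time. The 8000-byte window comes from the diagram; the 100 ms RTT and 256 kbps target rate are assumed values chosen for illustration.

```python
def paced_window(target_bps: float, rtt_seconds: float) -> int:
    """Bandwidth-delay product: the largest window (in bytes) a
    sender needs to sustain target_bps over a path with this RTT."""
    return int(target_bps * rtt_seconds / 8)

# With the diagram's 8000-byte window and a 100 ms RTT, the sender may
# burst 8000 bytes per round trip (640 kbps).  Clamping the window to
# the BDP of an assumed 256 kbps share caps each burst at 3200 bytes.
```

Any window larger than the bandwidth-delay product only builds queues downstream; any window smaller than it starves the connection below its target rate.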
The figure on the left in the following illustration with two packet diagrams provides an example of the
traffic patterns when natural TCP algorithms are used. Note that the second packet must be transmitted twice
because the sender did not get word in time that it was received, an unnecessary waste. Near the bottom,
observe the packet burst that occurs. This is quite typical of TCP's slow-start (and later much larger) growth and is
what causes congestion and buffer overflow. The figure on the right provides an example of the evenly
spaced data transmissions that occur when TCP Rate Control is used. This even spacing not only reduces
router queues but also helps increase the average bits per second since it uses more of the bandwidth more of
the time. The pros and cons of the two mechanisms are listed below each diagram.
Without PacketShaper: Chunky traffic flow, less throughput, bursty sporadic transfer, more
retransmissions.
With PacketShaper: Smooth traffic flow, more throughput, consistent transfer rate, fewer retransmissions.
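The contrast can be demonstrated with a toy FIFO model. The queue capacity, drain rate, and arrival patterns below are arbitrary illustrative values, not measurements: the same sixteen packets pass through without loss when evenly spaced, but overflow the queue when sent in two bursts.

```python
def simulate_drops(arrivals, capacity, drain_per_tick):
    """Feed per-tick packet arrivals into a FIFO that holds at most
    `capacity` packets and drains `drain_per_tick` packets per tick;
    return how many packets are dropped to overflow."""
    queue = 0
    dropped = 0
    for n in arrivals:
        queue += n
        if queue > capacity:
            dropped += queue - capacity
            queue = capacity
        queue = max(0, queue - drain_per_tick)
    return dropped

bursty = [8, 0, 0, 0, 8, 0, 0, 0]   # two 8-packet bursts
paced  = [2, 2, 2, 2, 2, 2, 2, 2]   # same 16 packets, evenly spaced
```

With a 5-packet queue draining 2 packets per tick, the bursty pattern drops 6 of its 16 packets while the paced pattern drops none, which is the intuition behind the throughput and retransmission differences listed above.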