MPLS Overview
Multiprotocol Label Switching (MPLS) has evolved from being a buzzword in the
networking industry to a widely deployed technology in service provider (SP)
networks. MPLS is a contemporary solution to address a multitude of problems faced
by present-day networks: speed, scalability, quality of service (QoS) management,
and traffic engineering. Service providers are realizing larger revenues by the
implementation of service models based on the flexibility and value added services
provided by MPLS solutions. MPLS also provides an elegant solution to satisfy the
bandwidth management and service requirements for next-generation IP–based
backbone networks.
As shown in Figure 1-1, in the data forwarding path, the following process takes
place:
BRBRAITT March-2007 2
“DATA NETWORK” FOR JTOs PH-II : MPLS_Overview
Figure 1-2 illustrates the same network as depicted in Figure 1-1 with MPLS
forwarding where route table lookups are performed only by MPLS edge border
routers, R1 and R4. The routers in the MPLS network, R1, R2, and R3, propagate updates
for the 172.16.10.0/24 network via an IGP routing protocol just as in traditional IP
networks, assuming no filters or summarization are configured. This leads to the
creation of an IP forwarding table. Also, because the links connecting the routers are
MPLS enabled, they assign local labels for destination 172.16.10.0 and propagate
them upstream to their directly connected peers using a Label Distribution Protocol
(LDP); for example, R1 assigns a local label L1 and propagates it to the upstream
neighbor R2. R2 and R3 similarly assign labels and propagate the same to upstream
neighbors R3 and R4, respectively. Consequently, as illustrated in Figure 1-2, the
routers now maintain a label forwarding table to enable labeled packet forwarding in
addition to the IP routing table. The concept of upstream and downstream is explained
in greater detail in the section "MPLS Terminology."
As shown in Figure 1-2, the following process takes place in the data forwarding path
from R4 to R1:
1. R4 receives a data packet for network 172.16.10.0 and identifies that the path to
the destination is MPLS enabled. R4 therefore applies label L3 (learned from
downstream Router R3) to the packet and forwards the labeled packet to next-hop
Router R3.
2. R3 receives the labeled packet with label L3 and swaps the label L3 with L2 and
forwards the packet to R2.
3. R2 receives the labeled packet with label L2 and swaps the label L2 with L1 and
forwards the packet to R1.
4. R1 is the border router between the IP and MPLS domains; therefore, R1 removes
the labels on the data packet and forwards the IP packet to destination network
172.16.10.0.
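The hop-by-hop label operations in steps 1 through 4 can be sketched as a small Python model. The label names (L1, L2, L3) are just the placeholders used in the figure, and the per-router tables below are illustrative, not actual router state:

```python
# Per-router label actions for destination 172.16.10.0 (hypothetical labels).
# Each LSR maps an incoming label to an outgoing label (swap); the ingress
# Edge LSR pushes a label, and the egress Edge LSR pops it (None = popped).
R4 = {"push": "L3"}   # ingress Edge LSR: impose label L3
R3 = {"L3": "L2"}     # LSR: swap L3 -> L2
R2 = {"L2": "L1"}     # LSR: swap L2 -> L1
R1 = {"L1": None}     # egress Edge LSR: remove label, forward as IP

def forward(packet_label, swap_table):
    """Return the outgoing label after one LSR hop (None means popped)."""
    return swap_table[packet_label]

# Walk the LSP R4 -> R3 -> R2 -> R1
label = R4["push"]          # step 1: R4 imposes L3
label = forward(label, R3)  # step 2: R3 swaps L3 for L2
label = forward(label, R2)  # step 3: R2 swaps L2 for L1
label = forward(label, R1)  # step 4: R1 pops; plain IP packet continues
```

Note that only the ingress and egress Edge LSRs consult the IP routing table; R2 and R3 forward purely on the label.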
MPLS functionality on Cisco devices is divided into two main architectural blocks:
Figure 1-3 depicts the control plane and data plane functions.
MPLS Terminology
This section provides an overview of the common MPLS-related terminology used for
the rest of this book:
MPLS-enabled domain. The egress Edge LSR performs the functions of label
disposition or removal (pop) and forwarding an IP packet to the destination.
Note that the imposition and disposition processes on an Edge LSR might
involve label stacks rather than single labels.
Figure 1-4 depicts the network in Figure 1-2 with all routers identified as
LSRs or Edge LSRs based on their location and operation in the MPLS
domain.
• MPLS Label Switched Path (LSP)— The path from source to destination for
a data packet through an MPLS-enabled network. LSPs are unidirectional in
nature. The LSP is usually derived from IGP routing information but can
diverge from the IGP's preferred path to the destination (as in MPLS traffic
engineering, which is discussed in Chapter 9, "MPLS Traffic Engineering"). In
Figure 1-4, the LSP for network 172.16.10.0/24 from R4 is R4-R3-R2-R1.
• Upstream and downstream— The concepts of downstream and upstream are
pivotal in understanding the operation of label distribution (control plane) and
data forwarding in an MPLS domain. Both downstream and upstream are
defined with reference to the destination network: prefix or FEC. Data
intended for a particular destination network always flows downstream.
Updates (routing protocol or label distribution, LDP/TDP) pertaining to a
specific prefix are always propagated upstream. This is depicted in Figure 1-5
where downstream with reference to the destination prefix 172.16.20.0/24 is in
the path R1-R2-R3, and downstream with reference to 172.16.10.0/24 is the
path R3-R2-R1. Therefore, in Figure 1-5, R2 is downstream to R1 for
destination 172.16.20.0/24, and R1 is downstream to R2 for destination
172.16.10.0/24.
• MPLS labels and label stacks— An MPLS label is a 20-bit number that is
assigned to a destination prefix on a router and that defines the properties of the
prefix as well as the forwarding actions performed on a packet destined for the
prefix.
The 20-bit label value is a number assigned by the router that identifies the prefix in
question. Labels can be assigned either per interface or per chassis. The 3-bit
experimental field defines the QoS treatment for the FEC that has been assigned a
label. For example, the 3 experimental bits can map to the eight IP precedence
values to carry the IP QoS assigned to packets as they traverse an MPLS domain.
A label stack is an ordered set of labels, each with a specific function. If the
router (Edge LSR) imposes more than one label on a single IP packet, the result is
called a label stack. The bottom-of-stack indicator identifies whether the label that
has been encountered is the bottom label of the label stack.
The TTL field performs the same function as the IP TTL: the packet is discarded
when its TTL reaches 0, which prevents packets from looping endlessly in the
network. Whenever a labeled packet traverses an LSR, the label TTL value is
decremented by 1.
The label is inserted between the Frame Header and the Layer 3 Header in the packet.
Figure 1-7 depicts the label imposition between the Layer 2 and Layer 3 headers in an
IP packet.
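The 32-bit shim entry described above (20-bit label, 3-bit experimental field, 1-bit bottom-of-stack indicator, 8-bit TTL) can be packed and unpacked with a few bit shifts. This is a minimal Python sketch of the field layout, not router code:

```python
import struct

def pack_label(label, exp, s, ttl):
    """Encode one 4-byte MPLS shim entry: 20-bit label, 3-bit EXP,
    1-bit bottom-of-stack (S), 8-bit TTL."""
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)   # network byte order

def unpack_label(data):
    """Decode a 4-byte shim entry back into (label, exp, s, ttl)."""
    (word,) = struct.unpack("!I", data)
    return ((word >> 12) & 0xFFFFF,  # 20-bit label value
            (word >> 9) & 0x7,       # 3-bit experimental field
            (word >> 8) & 0x1,       # bottom-of-stack indicator
            word & 0xFF)             # TTL

entry = pack_label(label=17, exp=5, s=1, ttl=64)
assert unpack_label(entry) == (17, 5, 1, 64)
```

Each such 4-byte entry sits between the frame header and the Layer 3 header; a label stack is simply several of these entries back to back.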
If the value of the S bit (bottom-of-stack indicator) in the label is 0, the router
understands that a label stack implementation is in use. As previously mentioned, an
LSR swaps only the top label in a label stack. An egress Edge LSR, however,
continues label disposition in the label stack until it finds that the value of the S bit is
set to 1, which denotes the bottom of the label stack. After the router encounters the
bottom of the stack, it performs a route lookup based on the information in the IP
Layer 3 Header and appropriately forwards the packet toward the destination. In the
case of an ingress Edge LSR, the Edge LSR might impose (push) more than one label
to implement a label stack where each label in the label stack has a specific function.
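The disposition loop just described, in which an egress Edge LSR pops labels until it sees S = 1, can be sketched as follows (the label values and two-level stack are illustrative):

```python
# A label stack is modeled as a list of (label, S-bit) pairs, top label first.

def pop_to_bottom(stack):
    """Pop labels until the bottom-of-stack (S) bit is set;
    return the labels popped, in order."""
    popped = []
    for label, s_bit in stack:
        popped.append(label)
        if s_bit == 1:       # bottom of stack reached; IP lookup follows
            break
    return popped

# Two-level stack, as in MPLS VPN: top = IGP label, bottom = VPN label
stack = [(18, 0), (24, 1)]
assert pop_to_bottom(stack) == [18, 24]
```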
Label stacks are implemented when offering MPLS-based services such as MPLS
VPN or MPLS traffic engineering. In MPLS VPN (see Chapter 3, "Basic MPLS VPN
Overview and Configuration"), the second label in the label stack identifies the VPN.
In traffic engineering (see Chapter 9), the top label identifies the endpoint of the TE
tunnel, and the second label identifies the destination. In Layer 2 VPN
implementations over MPLS, such as AToM (see Chapter 11, "Any Transport over
MPLS [AToM]") and VPLS (see Chapter 12, "Virtual Private LAN Service [VPLS]"),
the top label identifies the tunnel header or endpoint, and the second label identifies
the VC. All generic iterations of the label stack implementation are shown in Figure
1-8.
CEF avoids the overhead of cache rewrites in the IP Core environment by using a
Forwarding Information Base (FIB) for the destination switching decision, which
mirrors the entire contents of the IP routing table. There is a one-to-one mapping
between FIB table and routing table entries.
When CEF is used on a router, the router maintains, at a minimum, an FIB, which
contains a mapping of destination networks in the routing table to appropriate next-
hop adjacencies. Adjacencies are network nodes that can reach one another with a
single hop across the link layer. This FIB resides in the data plane, which is the
forwarding engine for packets processed by the router.
In addition to the FIB, the router maintains two other structures: the Label
Information Base (LIB) and the Label Forwarding Information Base (LFIB). The
label distribution protocol in use between adjacent MPLS neighbors is responsible for
the creation of entries in the LIB and LFIB.
The LIB functions in the control plane and is used by the label distribution protocol
where IP destination prefixes in the routing table are mapped to next-hop labels that
are received from downstream neighbors, as well as local labels generated by the label
distribution protocol.
The LFIB resides in the data plane and contains a local label to next-hop label
mapping along with the outgoing interface, which is used to forward labeled packets.
Figure 1-9 shows the interoperation of the various tables maintained on a router.
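As a rough illustration of how the three tables relate, the following sketch shows hypothetical entries for one prefix; the label values, neighbor names, and interface name are invented for the example:

```python
# Illustrative contents of the three tables on one LSR for prefix
# 172.16.10.0/24: local label 22, next-hop label 17 learned from R3 via LDP.

# FIB (data plane): IP prefix -> next hop + outgoing label; used when an
# *IP* packet enters the MPLS domain at this router.
fib = {"172.16.10.0/24": {"next_hop": "R3", "out_label": 17}}

# LIB (control plane): prefix -> the local label plus every label received
# from neighbors, populated by the label distribution protocol.
lib = {"172.16.10.0/24": {"local": 22, "R3": 17, "R5": 19}}

# LFIB (data plane): local (incoming) label -> outgoing label + interface;
# used when a *labeled* packet arrives.
lfib = {22: {"out_label": 17, "out_interface": "Gig0/1"}}

def forward_labeled(in_label):
    """LFIB lookup: swap the incoming label for the outgoing label."""
    entry = lfib[in_label]
    return entry["out_label"], entry["out_interface"]

assert forward_labeled(22) == (17, "Gig0/1")
```

The LIB may hold several labels per prefix (one per neighbor), but only the label from the actual next hop is installed in the LFIB for forwarding.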
MPLS Operation
The implementation of MPLS for data forwarding involves the following four steps:
MPLS operation typically involves adjacent LSRs forming an LDP session, assigning
local labels to destination prefixes and exchanging these labels over established LDP
sessions. Upon completion of label exchange between adjacent LSRs, the control and
data structures of MPLS, namely FIB, LIB, and LFIB, are populated, and the router is
ready to forward data plane information based on label values.
Following label assignment on a router, these labels are distributed among directly
connected LSRs if the interfaces between them are enabled for MPLS forwarding.
This is done either by using LDP or Tag Distribution Protocol (TDP). TDP is
deprecated and, by default, LDP is the label distribution protocol. The command mpls
label protocol {ldp | tdp} is configured only if LDP is not the default label
distribution protocol or if you are reverting from LDP to TDP. The command can be
configured in global and interface configuration mode. The interface configuration
command will, however, override the global configuration.
TDP and LDP function the same way but are not interoperable. It is important to note
that when Cisco routers are in use, the default protocol that is running on an MPLS-
enabled interface is dependent on the version of IOS running on the device; care must
be taken when configuring Cisco routers in a multivendor environment. TDP uses
TCP port 711 and LDP uses TCP port 646. A router might use both TDP and LDP on
the same interface to enable dynamic formation of LDP or TDP peers depending on
the protocol running on the interface of the peering MPLS neighbor. LDP is defined in
RFC 3036.
All LDP messages follow the type, length, value (TLV) format. LDP uses TCP port
646, and the LSR with the higher LDP router ID opens a connection to port 646 of
another LSR:
1. LDP sessions are initiated when an LSR sends periodic hellos (using UDP
multicast on 224.0.0.2) on interfaces enabled for MPLS forwarding. If another
LSR is connected to that interface (and the interface is enabled for MPLS), the
directly connected LSR attempts to establish a session with the source of the
LDP hello messages. The LSR with the higher LDP router ID is the active
LSR. The active LSR attempts to open a TCP connection with the passive LSR
(LSR with a lower router ID) on TCP port 646 (LDP).
2. The active LSR then sends an initialization message to the passive LSR,
which contains information such as the session keepalive time, label
distribution method, max PDU length, and receiver's LDP ID, and whether loop
detection is enabled.
3. The passive LDP LSR responds with an initialization message if the
parameters are acceptable. If parameters are not acceptable, the passive LDP
LSR sends an error notification message.
4. The passive LSR sends a keepalive message to the active LSR after sending its
initialization message.
5. The active LSR sends a keepalive to the passive LDP LSR, and the LDP session
comes up. At this juncture, label-FEC mappings can be exchanged between
the LSRs.
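The active/passive determination in step 1 is simply a comparison of LDP router IDs; a small sketch (the router IDs are illustrative):

```python
LDP_PORT = 646   # the active LSR opens the TCP connection to this port

def active_peer(router_id_a, router_id_b):
    """Return the router ID that plays the active role in the LDP session:
    the numerically higher of the two LDP router IDs."""
    as_tuple = lambda rid: tuple(int(octet) for octet in rid.split("."))
    if as_tuple(router_id_a) > as_tuple(router_id_b):
        return router_id_a
    return router_id_b

# R2 (10.0.0.2) vs R1 (10.0.0.1): R2 is active and connects to R1 on TCP 646
assert active_peer("10.0.0.2", "10.0.0.1") == "10.0.0.2"
```

Comparing the dotted-quad IDs octet by octet (rather than as strings) avoids surprises such as "10.0.0.9" comparing higher than "10.0.0.10".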
Figure 1-11 depicts the two modes of label distribution between R1 (Edge LSR) and
R2 (LSR). In the downstream-on-demand distribution process, LSR R2 requests a
label for the destination 172.16.10.0. R1 replies with a label mapping of label 17 for
172.16.10.0. In the unsolicited downstream distribution process, R1 does not wait for
a request for a label mapping for prefix 172.16.10.0 but sends the label mapping
information to the upstream LSR R2.
If an LSR supports liberal label retention mode, it maintains the bindings between a
label and a destination prefix, which are received from downstream LSRs that might
not be the next hop for that destination. If an LSR supports conservative label
retention mode, it discards bindings received from downstream LSRs that are not
next-hop routers for a destination prefix. Therefore, with liberal retention mode, an
LSR can start forwarding labeled packets almost immediately after IGP convergence,
at the cost of maintaining a large number of labels for a particular destination, thus
consuming memory. With conservative label retention, the only labels maintained are
those from the confirmed LDP or TDP next-hop neighbors, thus consuming minimal
memory.
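The two retention behaviors can be sketched for the bindings of a single prefix; the labels and neighbor names below are illustrative:

```python
# Label bindings received from downstream neighbors for 172.16.10.0/24
received = {"R3": 17, "R5": 19, "R6": 21}
next_hop = "R3"   # the current IGP next hop for the prefix

def retained(bindings, next_hop, mode):
    """Return the bindings kept under 'liberal' or 'conservative' retention."""
    if mode == "conservative":
        # keep only the binding from the actual next-hop neighbor
        return {n: lbl for n, lbl in bindings.items() if n == next_hop}
    return dict(bindings)   # liberal: keep every binding

assert retained(received, next_hop, "conservative") == {"R3": 17}
assert retained(received, next_hop, "liberal") == received
# If the IGP later converges to next hop R5, liberal mode already holds
# R5's label (19) and can forward immediately; conservative mode must
# wait to relearn it.
```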
• Implicit-null Label— This label is advertised with a value of 3 (20-
bit label field) by a downstream router to instruct its upstream neighbor to
remove the top label rather than swap it. This label is used in MPLS
networks that implement penultimate hop popping, discussed in the next section.
• Explicit-null Label— This label is assigned to preserve the EXP value of the
top label of an incoming packet. The top label is swapped with a label value of
0 (20 bit label field) and forwarded as an MPLS packet to the next-hop
downstream router. This label is used in the implementation of QoS with
MPLS.
• Aggregate— With this label, the incoming MPLS packet is converted to an IP
packet (by removing all labels if a label stack is present on the incoming packet), and
a FIB (CEF) lookup is performed to identify the outgoing interface to the
destination (used in MPLS VPN implementations, which are discussed in
Chapter 3).
Introduction
The goal of this paper is to put all of this into perspective and provide a holistic
view of the need for and the value of creating a QoS-enabled network. This
paper will describe the relationship between the numerous technologies and
acronyms used to describe QoS.
This paper assumes that the reader is familiar with different networking
technologies and architectures. This paper will provide answers to the
following:
Today, network traffic is highly diverse, and each traffic type has unique
requirements in terms of bandwidth, delay, jitter, and availability. With the
explosive growth of the Internet, most network traffic today is IP-based.
Having a single end-to-end transport protocol is beneficial because
networking equipment becomes less complex to maintain, resulting in lower
operational costs. This benefit, however, is countered by the fact that packets
do not take a specific path as they traverse the network. This results in
unpredictable QoS in a best-effort network.
Take this “TDM voice” application and now transport it over a best-effort IP
network. The best-effort IP network introduces a variable and unpredictable
amount of delay to the voice packets and also drops voice packets when the
network is congested. As you can see, the best-effort IP network does not
provide the behavior that the voice application requires. QoS techniques can
be applied to the best-effort IP network to make it capable of supporting VoIP
with acceptable, consistent and predictable voice quality.
Since the early 1990s there has been a movement towards network
convergence, i.e., transporting all services over the same network infrastructure.
Traditionally, there were separate, dedicated networks for different types of
applications. However, many of these networks are being consolidated to
reduce operational costs or improve profit margins.
Not too long ago, an enterprise may have had a private TDM-based voice
network, an IP network to the Internet, an ISDN video conferencing network,
an SNA network and a multi-protocol (IPX, AppleTalk, etc.) LAN. Similarly, a
service provider may have had a TDM-based voice network, an ATM or
SONET backbone network and a Frame Relay or ISDN access network.
The converged network mixes different types of traffic, each with very different
requirements. These different traffic types often react unfavourably together.
For example, a voice application expects to experience no packet loss and a
minimal but fixed amount of packet delay. The voice application operates in
steady-state fashion with voice channels (or packets) being transmitted in
fixed time intervals. The voice application receives this performance level
when it operates over a TDM network. Now take the voice application and run
it over a best-effort IP network as VoIP (Voice over IP). The best-effort IP
network has varying amounts of packet loss and potentially large amounts of
variable delay (typically caused by network congestion points). The IP network
provides almost exactly the opposite performance required by the voice
application.
Some believe that QoS is not needed and that increasing bandwidth will
suffice and provide good QoS for all applications. They argue that
implementing QoS is complicated and adding bandwidth is simple. While
there is some truth to these statements, one must first look more closely at the QoS
problems being solved and ask whether adding bandwidth will solve them.
If all network connections had infinite bandwidth such that networks never
became congested, then one would not need to apply QoS technologies.
Indeed, there are paths in some carrier networks that have tremendous
amounts of bandwidth and have been carefully engineered to minimize
congestion. However, high-bandwidth connections are not available
throughout the network, from the traffic’s source to the traffic’s destination.
This is especially true for access networks where the most commonly
available bandwidth is typically only hundreds of kbps. Furthermore,
bandwidth discontinuities in the network are potential congestion points
resulting in variable and unpredictable QoS that a user or application
experiences.
For example, a network operator has dark fiber in their backbone network and
DWDM interfaces on their backbone switches. The operator has determined
that sufficient bandwidth is no longer available to provide the network users
with the required performance objectives. In this scenario, it is simpler and
more cost-effective to provide QoS by adding wavelengths to increase
bandwidth than to add complicated QoS traffic management mechanisms.
Those who lease their network connections are more constrained with their bandwidth choices.
• Network Availability
• Bandwidth
• Delay
• Jitter
• Loss
• Emission priority
• Discard priority
Network Availability
Network availability can have a significant effect on QoS. Simply put, if the
network is unavailable, even during brief periods of time, the user or
application may experience unpredictable or undesirable performance (QoS).
Network availability is the summation of the availability of the many items that are
used to create a network. These include networking device redundancy (e.g.,
redundant interfaces, processor cards, or power supplies in routers and
switches), resilient networking protocols, multiple physical connections (e.g.,
fiber or copper), backup power sources, and so on. Network operators can increase
their network’s availability by implementing varying degrees of each of these
items.
The greatest challenge for network operators today is to provide highly available IP networks.
Bandwidth
Bandwidth is probably the second most significant parameter that affects QoS.
Bandwidth allocation can be subdivided into two types:
• Available Bandwidth
• Guaranteed Bandwidth
Available Bandwidth
Guaranteed Bandwidth
Delay
Network delay is the transit time an application experiences from the ingress
point to the egress point of the network. Delay can cause significant QoS
issues with applications such as voice and video, and with applications such as
SNA and fax transmission that simply time out and fail under excessive delay
conditions. Some applications can compensate for finite amounts of delay, but
once a certain amount is exceeded, the QoS becomes compromised.
Finally, delay can be both fixed and variable. Examples of fixed delay are:
Jitter
Jitter is the measure of delay variation between consecutive packets for a
given traffic flow. Jitter has a pronounced effect on real-time, delay-sensitive
applications such as voice and video. These real-time applications expect to
receive packets at a fairly constant rate with fixed delay between consecutive
packets. As the arrival rate varies, the jitter impacts the application’s
performance. A minimal amount of jitter may be acceptable but as jitter
increases, the application may become unusable.
Some applications, such as voice gateways and IP phones, can compensate for a finite
amount of jitter. Because a voice application requires the audio to play out at a constant
rate, the application will replay the previous voice packet until the next voice packet
arrives. However, if the next packet is delayed too long, it is simply discarded when
it arrives, resulting in a small amount of distorted audio.
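As a simple illustration of measuring jitter, the sketch below computes the variation in inter-arrival gaps for a packet stream. The timestamps are invented, and this is one simple metric (mean absolute deviation of the gaps), not the smoothed estimator real voice gateways use:

```python
def interarrival_jitter(arrivals_ms):
    """Mean absolute deviation of consecutive inter-arrival gaps (ms)."""
    gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

# Packets sent every 20 ms and arriving perfectly paced: no jitter
assert interarrival_jitter([0, 20, 40, 60]) == 0

# One delayed packet (arriving at 50 ms instead of 40 ms) introduces jitter
assert interarrival_jitter([0, 20, 50, 60]) > 0
```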
Loss
Loss can occur due to errors introduced by the physical transmission medium.
For example, most landline connections have very low loss as measured in
the Bit Error Rate (BER). However, wireless connections such as satellite,
mobile or fixed wireless networks, have a high BER that varies due to
environment or geographical conditions such as fog, rain, RF interference, cell
handoff during roaming and physical obstacles such as trees, buildings and
mountains. Wireless technologies often transmit redundant information since
packets will inherently get dropped some of the time due to the nature of the
transmission medium.
Loss can also occur when congested network nodes drop packets. Some
networking protocols such as TCP (Transmission Control Protocol) offer
packet loss protection by retransmitting packets that may have been dropped
or corrupted by the network. When a network becomes increasingly
congested, more packets are dropped and hence more TCP retransmissions occur. If
congestion continues, the network performance will significantly decrease
because much of the bandwidth is being used to retransmit dropped packets.
TCP will eventually reduce its transmission window size, resulting in less and
less data being sent at a time. This eventually reduces congestion,
resulting in fewer packets being dropped.
Emission Priorities
Discard Priorities
Discard priorities are used to determine the order in which traffic gets
discarded. The traffic may get dropped due to network node congestion or
when the traffic is out-of-profile, i.e., the traffic exceeds its prescribed amount
of bandwidth for some period of time.
Under congestion, traffic with a higher discard priority gets dropped before
traffic with a lower discard priority. Traffic with similar QoS performance
requirements can be subdivided using discard priorities. This allows the traffic
to receive the same performance when the network node is not congested.
However, when the network node is congested, the discard priority is used to
drop the “more eligible” traffic first.
Discard priorities also allow traffic with the same emission priority to be
discarded when the traffic is out-of-profile. Without discard priorities, traffic
would need to be separated into different queues in a network node to provide
service differentiation. This can be expensive since only a limited number of
hardware queues (typically eight or less) are available on networking devices.
Some devices may have software-based queues, but as these are increasingly
used, network node performance is typically reduced.
With discard priorities, traffic can be placed in the same queue but in effect,
the queue is subdivided into virtual queues, each with different discard priority.
For example, if a product supports three discard priorities, then one hardware
queue in effect provides three QoS levels.
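The virtual-queue idea above can be sketched as one queue that, under congestion, drops the most-discard-eligible packets first. The priorities, packet IDs, and queue depth are invented for the example:

```python
def enqueue_with_discard(packets, capacity):
    """Keep at most `capacity` packets in one queue, dropping the packets
    with the highest discard priority value first; arrival order is
    preserved for the packets that remain."""
    kept = list(packets)
    while len(kept) > capacity:
        # find the packet most eligible for discard and drop it
        victim = max(kept, key=lambda p: p["discard_priority"])
        kept.remove(victim)
    return kept

arrivals = [
    {"id": 1, "discard_priority": 0},   # least eligible for discard
    {"id": 2, "discard_priority": 2},   # most eligible for discard
    {"id": 3, "discard_priority": 1},
    {"id": 4, "discard_priority": 2},
]

# A congested queue with room for two packets drops both priority-2 packets
kept = enqueue_with_discard(arrivals, capacity=2)
assert [p["id"] for p in kept] == [1, 3]
```

With no congestion (capacity of four or more here), all four packets receive identical treatment, which is exactly the behavior the text describes.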
Application Requirements
Performance Dimensions

Application                 Bandwidth   Delay   Jitter   Loss
VoIP                        Low         High    High     Med
Video Conferencing          High        High    High     Med
Streaming Video on Demand   High        Med     Med      Med
eBusiness                   Low         Med     Med      Med
Web browsing                Med         Med     Low      High
Email                       Low         Low     Low      High
File transfer               Med         Low     Low      High

Table 1: Application Performance Dimensions (the Delay, Jitter, and Loss
columns indicate sensitivity)
Categorizing Applications
Traffic Category   Example Application
Network Control    Critical Alarms, Routing, Billing, OAM
Interactive        VoIP, Interactive Gaming, Video Conferencing
Responsive         Streaming audio/video, eCommerce
Interactive Applications
Responsive Applications
Streaming media applications include Internet radio (talk radio and music) and
audio/video broadcasts (news, training, education and motion pictures).
Streaming applications require the network to be responsive when they are
initiated so the user doesn’t wait too long before the media begins playing.
These applications also require the network to be responsive for certain types
of signaling. For example, with movies on demand, when one changes
channels or “forwards”, “rewinds” or “pauses” the media, one expects the
application to react with roughly the response time of VCR controls.
Timely Applications
These applications require that the packets arrive with a bounded amount of
delay. For example, if an email takes a few minutes to arrive at its destination,
this is acceptable. However, in a business environment, if an email took 10
minutes to arrive at its destination, this is often unacceptable. The same
bounded delay applies to file transfers. Once a file transfer is initiated, delay
and jitter are inconsequential because file transfers often take minutes to complete.
Note that timely applications use TCP-based transport; packet loss is therefore
managed by TCP, which retransmits any lost packets, so the application itself
sees no packet loss.
Some applications are used to control the operation and administration of the
network. Such applications include network routing protocols, billing
applications and QoS monitoring and measuring for SLAs. These applications
can be subdivided into those required for critical and standard network
operating conditions. To create high-availability networks, network control
applications require “priority” over end-user applications because if the
network is not operating properly, end-user application performance will suffer.
“Timely applications expect the network QoS to provide packets with a bounded amount of delay”.
IP QoS                     IP Differentiated Services (DiffServ)
Network-Signaled Paths     ATM PNNI, MPLS RSVP-TE or MPLS CR-LDP
Traffic-Engineered Paths   ATM PVCs, MPLS Label Switched Paths (LSPs)
Link Layer QoS             Ethernet 802.1Q VLANs, 802.1p, ATM, MPLS, PPP,
                           UMTS, DOCSIS, Frame Relay
Physical Layer QoS         Wavelengths, Virtual Circuits (VCs), Ports,
                           Frequencies
These technologies allow for the separation of traffic. The separation and
prioritisation may take the form of wavelengths (lambdas), Virtual Circuits
(VCs), ports on a device or frequencies over the air. This is the simplest form
of QoS whereby different levels of QoS are provided through traffic separation
at the physical layer. For example, the blue wavelength may provide a priority
service and the red wavelength may provide a best-effort service. In some
cases, this type of QoS can be inexpensive, e.g., adding additional
wavelengths over an existing fiber optic cable. However, this approach can
also be very expensive if the resources are leased or limited, e.g., frequency
spectrum.
Each link layer has a different type of QoS technology that can be applied.
The most common link layers are Ethernet, ATM, PPP, MPLS, DOCSIS (HFC
cable), Frame Relay and Mobile wireless technologies (only UMTS/3GPP will
be discussed here). The following sections will provide a brief overview of the
different link layer QoS technologies. For additional details, references are
provided at the end of this document.
Ethernet
ATM
The ATM Forum created ATM service categories, each with different QoS
traffic management parameters and performance levels. The most widely
available ATM service categories are CBR (Constant Bit Rate), rt-VBR (real-
time Variable Bit Rate), nrt-VBR (non real-time VBR) and UBR (Unspecified
Bit Rate). In general, CBR is used for circuit emulation services (including
voice or video transport), rt-VBR is used for real-time packet-based voice or
video services, nrt-VBR is used for priority data services, and UBR is used for
best-effort data services.
Other ATM service categories that are defined but less widely available are ABR
(Available Bit Rate) and GFR (Guaranteed Frame Rate), both of which are
enhancements over the UBR service and provide additional service
guarantees that are not provided by UBR.
PPP
Numbers are used to determine the service class to which the particular
packet fragments belong.
MPLS
MPLS provides for two different proposed forms of QoS via the EXP
(Experimental) bits in the MPLS shim header. When using E-LSPs (EXP-
inferred Label Switched Paths), the EXP bits provide 8 service classes that
support both emission and discard priorities and Differentiated Services
(DiffServ) traffic class behaviors. When using L-LSPs (Label-inferred LSPs),
the EXP bits provide 8 discard priorities.
DOCSIS
The DOCSIS protocol is used over HFC cable networks and has been
proposed for use over fixed wireless networks. The DOCSIS protocol
supports QoS via traffic separation through the use of Service IDs (SIDs).
There are five service types: rtPS (real-time Polling Service), nrtPS (non
real-time Polling Service), UGS (Unsolicited Grant Service), UGS-AD
(Unsolicited Grant Service with Activity Detection) and BE (Best Effort Service). rtPS
is used for bursty real-time services such as video and VoIP with silence
suppression. nrtPS is used for bursty, non real-time flows such as web
browsing. UGS is used for periodic real-time flows such as VoIP. UGS-AD is
used for intermittent yet periodic real-time flows such as VoIP with Voice
Activity Detection (VAD).
Frame Relay
Frame Relay has a QoS mechanism called Discard Eligibility (DE) that can be
set by the networking device for traffic that can be discarded under network
congestion. Frame Relay also provides a Committed Information Rate (CIR)
and Excess Information Rate (EIR) that define the minimum and burst
bandwidth guarantees, respectively.
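The CIR/EIR/DE relationship can be sketched as a simple classification of offered traffic per measurement interval. The rates below are invented, and this is an illustrative model of the idea rather than an actual Frame Relay policer:

```python
CIR = 64_000   # committed bits per measurement interval (illustrative)
EIR = 32_000   # excess bits per measurement interval (illustrative)

def classify(offered_bits):
    """Split offered traffic into (committed, discard_eligible, dropped):
    traffic up to CIR is committed, traffic between CIR and CIR+EIR is
    marked Discard Eligible (DE), and anything beyond that is dropped."""
    committed = min(offered_bits, CIR)
    discard_eligible = min(max(offered_bits - CIR, 0), EIR)
    dropped = max(offered_bits - CIR - EIR, 0)
    return committed, discard_eligible, dropped

assert classify(50_000) == (50_000, 0, 0)            # within CIR
assert classify(80_000) == (64_000, 16_000, 0)       # burst: DE-marked
assert classify(120_000) == (64_000, 32_000, 24_000)  # beyond CIR+EIR
```

Under congestion, a network node drops the DE-marked traffic before touching the committed traffic, mirroring the discard-priority behavior described earlier.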
Service providers need to ensure that their networks are properly engineered
to support new services such as IP telephony, video on demand (movies),
interactive learning and video conferencing services. These services cannot
be offered without certain QoS-related network capabilities.
The SLA specifies the terms and conditions of the service being offered. Once
a service provider can accurately measure the network capabilities and
provide a guaranteed performance level, it can confidently offer a billable
service to its subscribers.
After reading through this document, you will quickly conclude that QoS can
be quite a complicated topic and there are many different technologies,
standards and network architectures to consider. Furthermore, there is no
single QoS technology or standard that can be used across your network.
How can QoS be simplified?
Traffic Category   Example Application                      Service Class
Interactive        VoIP                                     Premium
                   Video Conferencing, Interactive Gaming   Platinum
Responsive         Streaming audio/video                    Gold
                   eCommerce                                Silver
Timely             Email, non-critical OAM                  Bronze
                   File transfer                            Standard
NNSCs are being built into Nortel Networks products but can also be created in
other vendors’ products through QoS policy management systems. The
default behaviors would be created in the policy management system and
the device-specific configuration rules (commands) would then be transmitted
to the device.