
Understanding and Managing

IP/MPLS Mobile Backbone and

Backhaul Networks

Table of Contents

Introduction  3

Evolved Packet System (EPS)  4

Transport Requirements  5

IP/MPLS Transport  6

VPWS Backhaul and L3VPN Backbone Transport  7

L3VPN Backhaul and Backbone Transport  8

Route Analytics  9

Visualizing Layer 3 VPNs  10

Visualizing Layer 2 VPNs (VPWS)  12

Visualizing RSVP-TE Tunnels  13

Visualizing EPS Traffic  15

Concluding Remarks  17

Copyright 2015, Packet Design

Page 2 of 18
IP/MPLS Backbone and Backhaul Transport Networks

With the advent of smartphones, tablets, and other connected devices, traffic has grown
exponentially and created congestion in mobile networks. Circuit-switched 2G and 3G mobile networks
allocate bandwidth statically (as they require TDM circuits) to each cell site and do not take advantage of
the statistical multiplexing found in packet-switched networks. Wireline service providers, however, have
adopted packet-switched IP/MPLS-based network architectures to take advantage of their bandwidth
efficiency and higher resiliency, while choosing Ethernet for framing due to its significantly lower cost per port.

To meet ever-increasing user traffic demands, mobile operators have embraced Long Term Evolution (LTE)
radio access. LTE Advanced can provide up to 1 Gbps bandwidth to each user. If unchecked, this LTE traffic
would further burden the mobile network. To address these challenges, the Third Generation Partnership
Project (3GPP) has defined System Architecture Evolution (SAE), the core network architecture for the
non-radio access part of the network. The main component of the SAE is a new IP-based Evolved Packet
Core (EPC), shown in Figure 1. Together, SAE and LTE form the Evolved Packet System (EPS). Mobile
operators have been deploying EPS networks, often marketed simply as LTE networks.

Figure 1. All IP-based Evolved Packet Core

Because the EPS is all IP based, IP/MPLS is used for the backhaul and backbone transport networks that
connect various end-points. With EPS, 3G mobile operators, who are already running an IP/MPLS-based
backbone network, are extending IP/MPLS to their backhaul networks.

An IP/MPLS-based network presents new and unique challenges. With statically configured circuits,
mobile operators enjoy predictable performance. For example, the propagation delay of a circuit is known
at provisioning time. When that circuit fails, a protection circuit takes over, which is also pre-provisioned
and has equally predictable performance. (Note that this approach leads to over-provisioned bandwidth.)
With IP/MPLS, the paths between EPS end-points are dynamic and extremely resilient to failures; IP/MPLS
will find a path as long as one exists, regardless of the number and locations of failures in the network.
One of the many consequences, however, is that the delay of an IP path can vary significantly, especially
under failure conditions. Many LTE applications, such as voice, video, and real-time gaming, require strict
quality of service, with delay and packet loss budgets across the EPS. Not meeting these requirements can
be detrimental to the user experience and may lead to subscriber churn.

Statistical multiplexing of IP packets, even though much more efficient, also makes capacity planning
more challenging as it can introduce congestion during peak use periods or under link or router failures.

The dynamic IP/MPLS control plane needs to be managed carefully to ensure EPS end-point reachability
is not compromised and a path always exists between end-points that need to communicate with each
other. As we will see later, several IP/MPLS control plane protocols are in use here. These protocols interact
with each other in complex ways. When reachability between two end-points becomes compromised,
understanding the protocol interactions will help in finding the root cause of the failure. For that, it is
necessary to collect, analyze, and monitor the protocol messages and behaviors.

In this white paper, we first give a brief overview of the EPS. We then describe how mobile operators
deploy IP/MPLS in their backhaul as well as backbone networks. Finally, we show how route analytics
technology can address the challenges of running IP/MPLS backbone and backhaul transport networks.

Evolved Packet System (EPS)

An EPS typically encompasses the following logical elements:

eNodeB: This is the LTE evolved base station. eNodeBs are the radio towers to which user
equipment (UE), such as cell phones and tablets, connect.

Serving Gateway (S-GW): The S-GW is a data plane element. S-GWs are typically placed at the
demarcation point between the radio access network (RAN) and the core network. The S-GW's main
purpose is to track the user's mobility and to send its traffic to the appropriate eNodeB as the
user moves. All user packets are carried inside bearers, logical pipelines connecting two or
more points. The S-GW redirects these bearers as a UE moves from one eNodeB to the next. It also
maintains state for the bearers when the UE enters low power mode and un-associates its bearers.

Packet Data Network (PDN) Gateway (P-GW): The P-GW is also a data plane element. P-GWs are
placed at the demarcation of the PDN. The P-GW assigns IP addresses to UEs, enforces QoS, filters
packets, collects charging information, and forwards UE IP packets to/from the PDN, including the Internet.

Mobility Management Entity (MME): The MME is a control plane element. It manages the UE,
including access to the network, assignment of resources, and management of mobility (i.e.,
tracking, paging, roaming, and handover). For example, when new packets arrive at an S-GW for a
UE that is in low power mode, the MME pages that UE so that it can reestablish its bearers and receive
the packets the S-GW had been buffering.

Policy and Charging Rules Function (PCRF): The PCRF is a control plane element. As the name
implies, it is the policy and charging brain for the network. However, enforcement is done at the
P-GW. The PCRF tells the P-GW how to handle packets. For example, for a user who has exceeded
their quota, the PCRF may instruct the P-GW to rate limit the user's packets.

Copyright 2015, Packet Design

Page 4 of 18
IP/MPLS Backbone and Backhaul Transport Networks

Home Subscriber Server (HSS): The HSS is a control plane element. It contains a user's subscribed
services, such as whether or not the user is allowed to roam and the QoS treatments to which they
have subscribed.

The above description is simplified, but it is sufficient for the purposes of this white paper. Since this white
paper is about providing IP/MPLS transport to the EPS, the most relevant elements are eNodeBs, S-GWs, P-GWs,
and MMEs. Also, we purposefully described them as logical elements. They may be standalone appliances,
may run on a blade in some other appliance such as an IP/MPLS router, or may be part of a
combined device, for example, one that also provides IP/MPLS routing and forwarding functionality.

Transport Requirements

There are two main kinds of packets the network carries: data packets (including voice) and control packets.
UE data packets are IP packets. At this IP layer, the UE is one hop away from the P-GW; that is, the next IP hop
from the UE is the P-GW. This is because these packets are relayed through the eNodeB and S-GW using the
GPRS Tunneling Protocol (GTP). GTP itself rides on UDP, which rides on the network's true IP layer. The
control packets between the MME and UE are also relayed over IP (but use a different set of protocols).
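As a sketch of the layering described above, the following builds and unwraps a minimal GTPv1-U frame. It covers only the mandatory 8-byte header (no optional fields or sequence numbers), and the TEID and payload bytes are invented for the example:

```python
import struct

GTPU_PORT = 2152  # registered UDP destination port for GTP-U

def gtpu_encapsulate(teid, ue_packet):
    """Wrap a UE IP packet in a minimal GTPv1-U header.

    Flags 0x30 = version 1, protocol type GTP; message type 0xFF = G-PDU.
    The length field counts the bytes after the mandatory 8-byte header.
    The transport network then carries this frame inside UDP and its own
    IP header, which is why the UE stays one IP hop away from the P-GW.
    """
    return struct.pack("!BBHI", 0x30, 0xFF, len(ue_packet), teid) + ue_packet

def gtpu_decapsulate(frame):
    """Recover the TEID and the inner UE packet at the tunnel end-point."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:8])
    assert flags == 0x30 and msg_type == 0xFF, "not a plain G-PDU"
    return teid, frame[8:8 + length]

ue_ip_packet = b"\x45\x00\x00\x28hello"  # illustrative inner UE IP packet bytes
frame = gtpu_encapsulate(teid=0x1234, ue_packet=ue_ip_packet)
teid, inner = gtpu_decapsulate(frame)    # teid == 0x1234, inner == ue_ip_packet
```

The same UE packet travels unmodified inside the tunnel; only the outer UDP/IP headers are what the backhaul and backbone networks actually route on.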

eNodeBs communicate with S-GWs and MMEs. S-GWs also communicate with P-GWs. In addition, as a UE
moves between neighboring eNodeBs, the handover is done directly between the involved eNodeBs.
Hence, eNodeBs also communicate with neighboring eNodeBs. This is best illustrated by Figure 2. The
connections between eNodeBs are referred to as X2 connections, and the connection between
an eNodeB and an S-GW or an MME is referred to as an S1 connection. The eNodeBs, S-GWs, MMEs, and the S1
and X2 connections between them form the RAN (E-UTRAN). The network that transports these S1 and
X2 connections is often called the backhaul network. The network that connects S-GWs to P-GWs,
other EPS elements, and the Internet is called the backbone network.




Figure 2. eNodeB Communication Patterns


IP/MPLS Transport
A typical mobile operator has a national backbone network and, for each of its regions, a
backhaul network. Already with 3G, backbone networks transported mobile traffic using the IP/MPLS control
plane, more specifically using IP/MPLS BGP VPNs (often referred to as L3VPNs). Because the payload
on the backhaul network is now also IP, IP/MPLS can be used in the backhaul network as well.

There are many possible IP/MPLS transport architectures for backhaul and backbone networks (see, for
example, Cisco's Unified MPLS for Mobile Transport [UMMT] architecture). In this paper, we focus on the two
scenarios that are most widely deployed by mobile operators. In both scenarios, L3VPNs are used in the
backbone network, just as in 3G networks. In the first scenario, L3VPNs are extended to the backhaul networks.
In the second scenario, L2VPNs, specifically the Virtual Private Wire Service (VPWS), are used in the backhaul
network. Virtual Private LAN Service (VPLS) can also be used in the backhaul network; however, it is
difficult to manage and often avoided.

In both scenarios, at each cell site, a router referred to as the cell site gateway (CSG) is paired with the eNodeB.
The CSG has one interface connected to the eNodeB and other interfaces connecting the cell site to the
rest of the IP/MPLS backhaul infrastructure.

Similarly, S-GWs, P-GWs, MMEs, PCRFs, and HSSs reside in mobility sites. Each of these sites also has an
IP/MPLS router, referred to as a Mobility Site Gateway (MSG) or Mobile Transport Gateway (MTG). The MTG
has interfaces connected to the EPS elements, as well as interfaces connected to the backhaul and/or the
backbone infrastructure. This is depicted in Figure 3.



Figure 3. CSG, MTG, Backhaul and Backbone Networks


VPWS Backhaul and L3VPN Backbone Transport

S-GWs are the demarcation points between the backbone and backhaul networks. P-GWs are typically
in the backbone network. For redundancy, each CSG establishes two VPWS pseudo-wires to two MTGs.
In each MTG's mobility site there is an S-GW that serves that cell. This MTG belongs to the backhaul
network. The S-GW also has a core-facing interface to forward packets towards the P-GW. This interface
is connected to another MTG in the backbone and is associated with an L3VPN Virtual Routing and
Forwarding (VRF) instance. In other words, the backbone MTG is an L3VPN PE router. Similarly, the backhaul
MTG is an L2VPN PE router because it terminates the pseudo-wire at one of its attachment circuits. Note
that the backhaul-facing MTGs terminate hundreds or even thousands of pseudo-wires; as a result,
their configuration can be challenging.

Bidirectional Forwarding Detection (BFD) is run over these pseudo-wires so that when a link or a node in
the network fails, or an MTG or S-GW crashes, the CSG can quickly switch to the alternate pseudo-wire.
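The redundancy behavior can be sketched as follows. The pseudo-wire names and the simple up/down state model are hypothetical, standing in for a real CSG's forwarding logic:

```python
class CSG:
    """Toy model of a cell site gateway's redundant pseudo-wire selection.

    The CSG holds two pseudo-wires to different MTGs and forwards on
    whichever one BFD currently reports as up, preferring the primary.
    """
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby
        self.pw_state = {primary: "up", standby: "up"}

    def bfd_event(self, pw, state):
        """Record a BFD session state change for one pseudo-wire."""
        self.pw_state[pw] = state

    def active_pw(self):
        """Pseudo-wire currently used for forwarding (None if both are down)."""
        for pw in (self.primary, self.standby):  # prefer the primary
            if self.pw_state[pw] == "up":
                return pw
        return None

csg = CSG(primary="pw-to-mtg1", standby="pw-to-mtg2")
csg.bfd_event("pw-to-mtg1", "down")  # BFD detects the MTG/S-GW failure
```

After the BFD down event, `csg.active_pw()` returns the standby pseudo-wire; the fast switchover is possible precisely because the alternate was provisioned in advance.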

Other mobility sites in the backbone, containing MMEs, P-GWs, PCRFs, and HSSs, are similarly
connected to a VRF at an MTG. Each VRF is assigned import and export route-targets, and the Border Gateway
Protocol (BGP) distributes routes among the VRFs based on these route-target policies. A VPWS backhaul and
L3VPN backbone transport architecture is illustrated in Figure 4.

Figure 4. VPWS Backhaul and L3VPN Backbone Transport

Note that an X2 connection between two eNodeBs in this architecture has to be routed from the CSG
of the first eNodeB to the MTG, and then to the CSG of the second eNodeB. This may not be very delay-
efficient for large backhaul networks. Both VPLS and L3VPNs can set up more direct X2 transport paths.
The attractiveness of this approach is that field engineers are used to provisioning and configuring
physical circuits, and provisioning pseudo-wires is very similar and much simpler than configuring BGP VPNs.


Another use of the pseudo-wires is transporting 2G and 3G traffic. For example, the TDM circuit
between a BTS and an SGSN can now be carried inside a pseudo-wire between the CSG of the BTS and the MTG
of the SGSN. This eliminates the need to replace BTS and SGSN equipment as the backhaul network moves
to all IP/MPLS. However, it does require TDM interface ports on the CSG and MTG.

L3VPN Backhaul and Backbone Transport

Since L3VPNs are already deployed in 3G backbone networks, a natural transport architecture is to extend
them to the backhaul network. In this case, the eNodeB interface goes into a VRF in the CSG. This VRF also
has import and export route-targets, and it contains a statically configured IP prefix for the eNodeB. The export
route-target (or one of them) identifies the cell. The import route-targets include the neighboring cell sites'
route-targets as well as the relevant mobility sites' route-targets. Once BGP populates the VRFs with routes
matching the import route-targets, the CSG can establish transport paths to neighboring cell sites and
mobility sites. All the S1 and X2 connections would be transported directly (i.e., without going through an
MTG). Similarly, for the S-GW, the backbone- and backhaul-facing interfaces go into VRFs in MTGs in
the backbone and backhaul networks, respectively. An L3VPN backhaul and backbone transport architecture
is depicted in Figure 5.

Figure 5. L3VPN Backhaul and Backbone Transport
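The route-target mechanics behind this distribution can be sketched in a few lines. The route-target names and prefixes below are invented for illustration; the core rule is simply that a VRF installs a VPN route when the route's export route-targets intersect the VRF's import set:

```python
def import_routes(vrf_import_rts, advertised_routes):
    """Return the prefixes a VRF installs: those whose export route-targets
    intersect the VRF's import set (the core of L3VPN route distribution)."""
    return [r["prefix"] for r in advertised_routes
            if vrf_import_rts & r["export_rts"]]

# Hypothetical route-targets: one per cell, plus one for a mobility site.
advertised = [
    {"prefix": "10.0.1.1/32", "export_rts": {"rt:cell-101"}},    # neighboring eNodeB
    {"prefix": "10.0.2.1/32", "export_rts": {"rt:cell-202"}},    # distant cell, not a neighbor
    {"prefix": "10.9.0.1/32", "export_rts": {"rt:mobility-1"}},  # S-GW loopback
]

# The CSG's VRF imports its neighbors' and its mobility site's route-targets:
csg_vrf_imports = {"rt:cell-101", "rt:mobility-1"}
installed = import_routes(csg_vrf_imports, advertised)
```

Here the CSG ends up with routes to the neighboring eNodeB (for X2) and the S-GW (for S1), but not to the distant cell: the import/export policy, not any explicit tunnel configuration, determines which transport paths can be established.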

Voice calls do not tolerate packet loss very well, and loss degrades the Quality of Experience (QoE). Because
of this, many operators deploy the RSVP-TE Fast Re-Route (FRR) local-repair mechanism for link and router
failures. When a link fails, as soon as the routers at the ends of the link detect the failure, packets are
directed to a pre-established FRR tunnel that bypasses the failed link (link protection), and sometimes even
the next router (node protection), and rejoins the original tunnel one or two hops later, respectively.

If we put all of this together, traffic from a user who is reading the latest headline news on the CNN web
site on his cell phone is first transmitted over radio signals to an eNodeB, and from there over an S1
connection to an S-GW. The eNodeB is connected to the CSG over an interface that is cross-connected
to a pseudo-wire, which is routed over an RSVP-TE tunnel (with FRR protection) to an interface of the MSG
connected to this S-GW. From the S-GW, traffic goes over an interface connected to a VRF in the MSG.
L3VPN MP-BGP routes are then used to find a path to the P-GW. Once the P-GW route is found, an RSVP-
TE tunnel (with FRR protection) takes the traffic there. Of course, the IGP is crucial in computing these
RSVP-TE tunnel paths. At the P-GW, the user's packets become native IP packets and are routed to CNN
over the Internet.

For very large backhaul and backbone networks, it may be challenging to run a common IGP across
the entire network. A common IGP is required because BGP VPNs use the underlying IGP to establish the
transport LSPs between the Provider Edge (PE) routers connecting cell and mobility sites together. The use
of IGP areas for backhaul networks definitely helps, as does partitioning the backhaul network into access and
aggregation halves. Some operators even divide the network into multiple ASes and use inter-AS VPN
techniques to connect the EPS end-points. For simplicity, we focus on a single AS with a common IGP
in this white paper.

Route Analytics
Route analytics taps into the routing protocols, the source of intelligence that determines how
IP/MPLS networks deliver traffic, and uses these protocols to provide very accurate visibility into the
network topology and routing, including the IGP protocols, MP-BGP, Layer 2 and Layer 3 VPNs, RSVP-TE
tunnels, and the interactions among them.

Route analytics technology uses a collector that acts like a passive router. The collector peers with
selected routers across the network and, using the routing protocols (OSPF, IS-IS, and MP-BGP), it
collects and records the control messages that routers exchange in order to calculate how traffic will be
sent across the network (see Figure 6). By processing this information just the way routers do, albeit in
a more comprehensive fashion, every Layer 3 routed path in the network can be calculated, from every
EPS element to every other element. Thus, a routing topology of the entire network can be created and
maintained for operational and engineering analysis. Since routing protocols report changes to the
topology in real time, this topology map is continuously updated and always reflects exactly how the
network is operating.
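Conceptually, the collector's topology map is just a graph kept in lock-step with the link-state advertisements it hears. A toy sketch (router names hypothetical):

```python
class TopologyMap:
    """Sketch of a collector keeping a live topology from IGP link-state
    events: each advertisement adds or withdraws an adjacency, so routed
    paths are always computed against the network's current state."""
    def __init__(self):
        self.links = set()  # set of frozenset({a, b}) undirected adjacencies

    def lsa(self, a, b, up=True):
        """Apply a link-state advertisement: adjacency a-b came up or went down."""
        edge = frozenset((a, b))
        (self.links.add if up else self.links.discard)(edge)

    def neighbors(self, node):
        """Current adjacencies of a router, per the latest advertisements."""
        return {next(iter(e - {node})) for e in self.links if node in e}

topo = TopologyMap()
topo.lsa("PE1", "P1"); topo.lsa("P1", "PE2"); topo.lsa("PE1", "P2")
topo.lsa("PE1", "P2", up=False)  # link failure reported in real time
```

Because updates are event-driven rather than polled, the map reflects a failure as soon as the IGP floods it, not at the next polling interval.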

Copyright 2015, Packet Design

Page 9 of 18
IP/MPLS Backbone and Backhaul Transport Networks


Figure 6. Route analytics technology passively peers with, listens to and analyzes routing protocols to provide a
real-time, network-wide understanding of all IP/MPLS VPN topology changes

For information that cannot be collected directly from the routing protocols (e.g., RSVP-TE tunnels, static
routes, and pseudo-wires), route analytics technology queries the routers using a combination of SNMP,
NETCONF, and CLI. To catch changes in real time, it taps into SNMP traps, syslog messages, and changes in
the IGP. For example, when a link fails, the IGP will communicate this to the collector, and the collector will
query the head-end routers of the tunnels that were going over this link. This is both lighter weight than a
polling system (only relevant routers are queried, and only when needed) and real time.
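The targeted querying described above amounts to keeping a reverse index from links to the tunnels routed over them. A sketch, with invented tunnel names and paths:

```python
# Hypothetical tunnel database: tunnel name -> hop-by-hop path (head-end first).
tunnels = {
    "T1": ["PE1", "P1", "P2", "PE2"],
    "T2": ["PE1", "P3", "PE3"],
    "T3": ["PE3", "P2", "P1", "PE1"],
}

# Build a reverse index from each (undirected) link to the tunnels crossing
# it, so a link-failure event maps directly to the head-ends worth querying.
link_to_tunnels = {}
for name, path in tunnels.items():
    for a, b in zip(path, path[1:]):
        link_to_tunnels.setdefault(frozenset((a, b)), set()).add(name)

def affected_head_ends(failed_link):
    """Head-end routers of the tunnels that were routed over the failed link."""
    names = link_to_tunnels.get(frozenset(failed_link), set())
    return {tunnels[n][0] for n in names}

heads = affected_head_ends(("P1", "P2"))  # only these head-ends need re-querying
```

Only the head-ends of tunnels that actually crossed the failed link are queried, which is what keeps this approach lighter than periodic full polling.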

Visualizing Layer 3 VPNs

Figure 7. Layer 3 VPN services and topology for one of the services

Copyright 2015, Packet Design

Page 10 of 18
IP/MPLS Backbone and Backhaul Transport Networks

Figure 7 shows the Layer 3 VPN services configured in the network. Using the MP-BGP collector, route
analytics technology discovers all route-targets and their associated VPN routes. The collector (using
SNMP/NETCONF/CLI) also collects the VRF definitions and the import and export route-targets at each PE
router. The user can then name these route-targets. In the figure, the PDI-VPN1-ALL service is active on three
PE routers and has 90 active BGP routes. This service connects 45 eNodeBs to each other over three PE
routers (there are two routes per eNodeB for redundancy).

Figure 7 also shows the relevant IGP topology of the PDI-VPN1-ALL service, where yellow routers are the
PE routers and purple routers are provider (P) routers. Only the P routers that are on the paths between
these PE routers are shown. These PE routers implement the PDI-VPN1-ALL service because they have
VRFs that either import or export the service's route-target(s). All the X2 connections between these
eNodeBs will ride only on these links. However, there may be other services riding on these links as well.
The orange link in the figure is congested.

Figure 8. eNodeB Routes for PDI-VPN1-ALL

Figure 8 shows all the eNodeB VPN routes, with their MPLS labels, for PDI-VPN1-ALL. The "Up/B" in the
state column indicates that these routes are currently available (otherwise it would read "Down") and
expected ("B" stands for in-baseline). When a certain number of these routes go away, or new non-baseline
routes appear, the user can be alerted.

Copyright 2015, Packet Design

Page 11 of 18
IP/MPLS Backbone and Backhaul Transport Networks

Figure 9. Path of an X2 connection and its expected delay

Figure 9 shows the path of an X2 connection between two eNodeBs in this service. The path contains four
hops and is determined by first looking at the BGP routes in the ingress VRF, and then computing an IS-IS
path for the matching BGP route. As indicated, the path is expected to have a delay of 3.5 milliseconds.
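The expected delay of such a path is simply the sum of the per-link propagation delays along the computed route. A sketch, with made-up router names and link delays chosen to total 3.5 ms like the example above:

```python
# Hypothetical per-link propagation delays, in milliseconds.
link_delay_ms = {
    ("CSG1", "AGN1"): 0.9,
    ("AGN1", "P1"):   1.1,
    ("P1",   "AGN2"): 0.8,
    ("AGN2", "CSG2"): 0.7,
}

def path_delay(path):
    """Expected one-way delay of a routed path: the sum of its links' delays."""
    return round(sum(link_delay_ms[(a, b)] for a, b in zip(path, path[1:])), 3)

delay = path_delay(["CSG1", "AGN1", "P1", "AGN2", "CSG2"])  # 3.5 ms over 4 hops
```

When the IGP reroutes after a failure, re-running the same sum over the new path immediately shows whether the X2 connection still meets its delay budget.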

Visualizing Layer 2 VPNs (VPWS)

Figure 10. Virtual Leased Lines in a VPWS

Virtual Leased Line (VLL) parameters collected by route analytics technology include the VLL name,
virtual circuit ID, MPLS labels, connected attachment circuits, and the payload type. Figure 10 shows
the VLLs collected in a lab network. Some VLLs can be multi-segment, where two or more pseudo-wires
are stitched together by intermediate routers. The path of the highlighted VLL is shown in Figure
11. Yellow routers are the PE routers where EPC elements connect. The dark green router in the middle
stitches two pseudo-wires together so that the VLL path between the EPS elements is complete. Stitching is
rare; however, it can be employed when crossing administrative boundaries. VLLs, similar to Layer 3 VPNs,
can be grouped under service names. The path in this case is computed using the IGP.

Copyright 2015, Packet Design

Page 12 of 18
IP/MPLS Backbone and Backhaul Transport Networks

Figure 11. Path of a Multi-Segment VLL

Figure 12. Asymmetric Path for a VLL

Figure 12 shows the paths for another VLL. In this case the two directions of the VLL use different paths;
that is, the paths are asymmetric. Asymmetric paths often imply different delay, jitter, loss, and congestion
characteristics in each direction and can be problematic for some applications. Notice the purple shoe-horn
icons on the path from R4 to R2. These icons indicate that the path in this direction takes an
RSVP-TE tunnel and is protected against node and link failures. However, the traffic in the other direction
is not protected.
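Detecting such asymmetry is straightforward once both directions' routed paths are known: the reverse-direction path should be exactly the forward path walked backwards. A sketch with hypothetical router names:

```python
def is_symmetric(forward, reverse):
    """A VLL's two directions are symmetric when the reverse-direction path
    is the forward path in reverse order."""
    return forward == list(reversed(reverse))

# Hypothetical paths for the two directions of one VLL:
fwd = ["R2", "R5", "R4"]        # R2 -> R4, plain IGP path
rev = ["R4", "R6", "R7", "R2"]  # R4 -> R2, via an RSVP-TE tunnel

asymmetric = not is_symmetric(fwd, rev)
```

An audit that runs this check over every VLL can flag, in one pass, the services whose two directions will see different delay and protection characteristics.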

Visualizing RSVP-TE Tunnels

Because of their FRR capabilities, RSVP-TE tunnels are often used in mobile backbone and backhaul
networks. These tunnels often do not request traffic reservations, as reservations are not needed for the
mobile use case.

Copyright 2015, Packet Design

Page 13 of 18
IP/MPLS Backbone and Backhaul Transport Networks

Figure 13. RSVP-TE Tunnels and missing node and link protection for the tunnel

Figure 13 shows the tunnels in the lab network. Some of these tunnels have pre-signalled secondary
alternates, as shown in the paths column. Most tunnels in this table have requested node and link
protection. However, as can be seen in the FRR column, most of the tunnels have only partial protection.
As new links and routers are added to the network, it is easy to forget to create these FRR tunnels. The
path of the highlighted tunnel is shown underneath the table. Routers and links with red crosses show where
protection is missing. It is also possible to see the protection path by clicking on protected elements.
The FRR tunnel protecting against R3's failure is shown in Figure 14. Notice that this is a really long path
(and hence it can violate some EPS performance requirements). Also notice that the FRR tunnel goes over
the tunnel tail, then backwards to R1, then forwards to the tunnel tail again. This is not unusual, though not
efficient. This FRR tunnel may be protecting many other tunnels, and it may be efficient for those.

Figure 14. FRR for protecting R3

Copyright 2015, Packet Design

Page 14 of 18
IP/MPLS Backbone and Backhaul Transport Networks

Visualizing EPS Traffic

Route analytics technology integrates traffic information by collecting flow data (statistical information on
unidirectional IP traffic streams generated by routers, such as IPFIX, NetFlow, J-Flow, sFlow, and NetStream)
from key traffic ingress points. For a mobile network, these would include the interfaces of MTGs, CSGs, and
Internet gateways. Using knowledge of the precise path that every flow takes through the network at any
time, route analytics projects the traffic data onto the component links of that path (see Figure 15), as well
as onto Layer 2 and Layer 3 VPN services, S1 and X2 connections (configured as application groups), and
ingress and egress routers, for each class of service. The result is a highly accurate, integrated routing
and traffic map that shows the volume of class-of-service (CoS) traffic on every link in the network. Figure
16 and Figure 17 show the links and traffic levels for the VPN-2-NASSA service.
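The projection step can be sketched as follows: each measured flow's rate is added to every link on its routed path, turning ingress-only measurements into network-wide link loads (the flow rates and paths are invented):

```python
# Hypothetical flows with their routed paths and measured ingress rates (Mbps).
flows = [
    {"path": ["CSG1", "AGN1", "MTG1"], "mbps": 40.0},
    {"path": ["CSG2", "AGN1", "MTG1"], "mbps": 25.0},
    {"path": ["CSG1", "AGN2", "MTG2"], "mbps": 10.0},
]

def project_onto_links(flows):
    """Add each flow's rate to every link on its routed path, yielding
    per-link traffic levels from ingress-only flow measurements."""
    load = {}
    for f in flows:
        for a, b in zip(f["path"], f["path"][1:]):
            load[(a, b)] = load.get((a, b), 0.0) + f["mbps"]
    return load

link_load = project_onto_links(flows)  # e.g. the AGN1->MTG1 link carries both flows
```

Because the paths come from the live routing state, re-running the projection after a reroute immediately shows which links picked up the shifted traffic.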


Figure 15. Route analytics integrates NetFlow traffic statistics into the IP/MPLS VPN routing topology by
mapping traffic flows onto their routed paths. The result is a real-time, integrated routing and traffic topology.


Figure 16. Links carrying VPN-2-NASSA service traffic

Figure 17. Links and traffic levels of VPN-2-NASSA service


Concluding Remarks
In order to handle the explosion of mobile traffic, mobile operators are deploying IP/MPLS in their
backhaul networks. The dynamic nature of the IP/MPLS control plane requires a new network
management paradigm, as SNMP polling-based OSS systems cannot keep up with the scale and rate of
change in these environments.

Route analytics technology addresses this challenge very well, as we have demonstrated above with a few
(certainly not exhaustive) examples. Additional benefits of route analytics technology that are applicable to
mobile operators include:

Rewindable Routing and Traffic History for Troubleshooting: By continuously recording
the state of routing and traffic, route analytics technology accurately portrays the network-
wide state of all links, peerings, paths, and prefixes, as well as all traffic flows, at any point in time.
Engineers can rewind the network topology to pinpoint the precise MPLS VPN route, RSVP-TE
tunnel, and routed path that service traffic took through the network, as well as the utilization of
the component links, at the time of a problem. Many hard-to-diagnose problems are intermittent
but recurring. Having this rewindable history is very effective in finding the root cause of these
hard problems.

Monitoring and Alerting: In this white paper, we focused on visualization. However, anomalous
conditions can be monitored automatically and the user alerted. For example, a critical path may
change and start to have unacceptable performance. It is possible to detect path changes with a
significant delay increase and alert on them.

Network Modeling: Model router, link, Layer 2 and Layer 3 VPN, RSVP-TE tunnel, and traffic
changes, and accurately predict the effect on the network's behavior and the impact on services.

Internet Peering Analysis: Monitor and ensure acceptable external peering utilization levels. This
data can be used to optimize transit and peering arrangements and significantly reduce peering costs.

Network-Wide Routing Health Audits: Perform network-wide audits of routing health by
systematically examining the network for problems and vulnerabilities, such as unacceptable S1
and X2 connection delays, asymmetric routes, routing black holes, potential loss of path diversity
between EPS elements under failures, and underutilized assets.

Network-Wide Capacity Planning: Measure traffic levels on links, Layer 2 and 3 VPN services,
ingress and egress routers, and traffic groups (S1 and X2 connections) over time. Trend this data
to identify when and where to add capacity, including links, S-GWs, P-GWs, and MMEs.
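As a sketch of the trending step, a least-squares line fitted to weekly link utilization gives a rough time-to-exhaustion estimate (the sample figures and the 1 Gbps capacity are invented):

```python
def weeks_until_exhaustion(samples_mbps, capacity_mbps):
    """Fit a least-squares line to weekly utilization samples and estimate
    how many weeks remain until the link reaches capacity."""
    n = len(samples_mbps)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples_mbps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mbps))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # flat or shrinking traffic: no exhaustion forecast
    latest = mean_y + slope * ((n - 1) - mean_x)  # fitted value at the last week
    return max(0.0, (capacity_mbps - latest) / slope)

# Hypothetical weekly averages on a 1 Gbps link, growing about 20 Mbps/week:
weeks = weeks_until_exhaustion([700, 720, 740, 760, 780], capacity_mbps=1000)
```

Applied per link, per VPN service, or per S1/X2 traffic group, the same fit answers the "when and where to add capacity" question with a concrete lead time.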


To learn more about Packet Design and Route Explorer, please:

Email us at
Visit Packet Design's web site at
Call us at +1.408.490.1000
