
Integrating Routing with Content Delivery Networks

Brian Field Ph.D.


Comcast Corporation
Centennial, Colorado
Brian_Field@cable.comcast.com
Jan van Doorn
Jan_VanDoorn@cable.comcast.com
Jim Hall
jim_hall2@cable.comcast.com


Abstract: Several research efforts are underway to associate
content names with the network space. There is a general belief that
video and other types of cacheable content will continue to
demand ever larger portions of a network's bandwidth.
Technologies that enable network link bandwidth increases are
struggling to keep up with network operators' bandwidth
demand at appealing cost points. Network operators are
considering content delivery network (CDN) technologies as
a means to cache content in the network and thus slow the need for
ever larger network link bandwidths. In this paper we present an
approach that extends the Border Gateway Protocol (BGP) with a new
address family geared to carry the uniform resource identifier
(URI) namespace. Besides the benefits of using a proven, scalable
protocol, having both the CDN service and the network
infrastructure operate across the same domain of information
provides compelling benefits for cross CDN and network traffic
modeling, as well as in the planning and operational
domains.
I. INTRODUCTION
Video traffic continues to be the primary content that uses
an ever larger portion of the Internet and Internet Service
Provider (ISP) network capacity. One report [1] indicates that
in 2015, upwards of 90% of Internet traffic will be video
content. New video technologies, such as fragmented video,
have a number of benefits in an Internet environment, including
their ability to be cached by a content delivery network (CDN)
infrastructure. The evolution of optical and router high-speed
link technology struggles to keep up with network operator
bandwidth needs at cost-effective price points. CDN
technology offers the possibility that running Caches and
Origins on Common Off The Shelf (COTS) hardware is a
cost-effective way to reduce network bandwidth needs at the
cost of a CDN caching infrastructure. Deploying a CDN
infrastructure within an operator's network has another key
benefit, namely reduced Transmission Control Protocol round-
trip times (TCP RTT). A smaller TCP RTT enables better
download performance across a cross-section of end-client
TCP stack implementations, and thus enables these end-clients
to pull the larger and, hence, higher-quality video fragments. A
CDN infrastructure can thus directly improve video quality for
customers.
An open CDN protocol that enables multiple vendors'
Caching and Origin platforms to co-exist and interact in a
single, unified CDN does not currently exist. Instead, each
CDN component vendor has created their own set of
proprietary protocols for how their caching components
interact collectively to form a single, cohesive, vendor-
proprietary CDN. A number of working groups are working,
in part or in their entirety, to standardize key aspects of
the CDN interoperability space and enable an open CDN protocol. From
our perspective, there are three key criteria for the open
CDN protocol problem that need to be considered: 1) Caches
and Origins must use a single, consistent, open protocol to
enable cross-vendor and cross-platform interoperability; 2) Caches must be
aware of the network; and 3) CDNs must be able to change
the CDN performance or treatment provided to one or more
assets over time.
By meeting these criteria, a CDN will be capable of
changing its behavior based on its knowledge of current
network topology and resources. This paper proposes
extending an existing routing protocol to satisfy these three
criteria.
The CDN Interconnection (CDNI) working group at the
IETF has defined four interfaces for CDNI (control, request
routing, logging and metadata) [2]. This paper addresses issues
of control and routing of content, but does not tackle metadata
or logging.
II. CURRENT STATE OF CDN TECHNOLOGY
While standard protocols such as HTTP and DNS play a
vital role in today's CDNs, many, if not most, aspects of CDN
operation depend on vendor-proprietary protocols. There is no
agreed upon CDN protocol that enables a CDN operator to
select Caching and Origin components from different vendors
and have them function individually, but cooperatively, as a
single, unified CDN entity. In effect, there is currently no CDN
protocol to solve the intra-CDN problem. We include Origin
functionality into the intra-CDN problem space, as Origins tend
to be treated as separate from the CDN components, yet are
foundational to the overall function of the CDN. In this paper,
we treat Origin functionality as an equal partner and participant
in the CDN, and thus it needs to be included in the overall
solution.
Much like how classic ISPs "peer" with each other to
enable the exchange of routes, but still focus resources on a
subset of a geography or business dimension, there is
agreement in the CDN community that CDNs will also want to
peer with each other. Again, there is no common agreed upon
protocol for how CDNs will peer with each other. Recent
activity in the IETF has resulted in a number of proposals,
including an Internet draft that proposes creating a new
Address Family in the Border Gateway Protocol (BGP) to carry
IP prefix information to address request routing across CDNs
[3]. We believe this approach is a step in the right direction and
is a subset of the overall solution defined in this paper.
A complementary problem, and one not addressed in this
paper, is how CDNs signal to which Content Service Providers
(CSPs) they have access. The flow of signaling messages from
CSP (or Origins or upstream CDNs) to downstream CDNs
needs to be defined and open, and is a key component to any
intra- or inter-CDN solution. In this paper, we propose a
solution to address the signaling of content information from
the CSP to downstream CDNs.
As mentioned previously, there is an expectation that a
majority of traffic on the Internet in 2015 will be video, likely
cacheable, and a key network operator problem will be
optimizing the network to cost effectively carry cacheable
video. An alternative way to think about this optimization
problem is to make the application itself, namely the CDN
components (i.e. Caches and Origins), network aware. A
number of approaches have been proposed to make CDN and
peer-to-peer technologies more efficient in their use of network
resources, including Application-Layer Traffic Optimization
(ALTO) [4] and Decoupled Application Data Enroute
(DECADE) [5]. While these are both useful technologies,
neither will adequately address the CDN space. One important
requirement on the CDN is the ability to make CDN service
and traffic engineering decisions based on as much baseline
network information as possible. The manner in which
mechanisms such as ALTO present information to a client
results in a number of inefficiencies. A CDN operator will want
to refine the set of network dimensions it optimizes and these
optimizations might be considered operator proprietary. The
implication here is that each time a CDN operator opts to
change its business or service model, it may very well need a
third-party ALTO server to create a completely new type of
map that provides the network information now needed to
satisfy the new CDN business or service rules. We do not
believe this model is scalable; it neither provides the network
insights needed by a CDN operator nor aligns with many
operators' vision of emerging application-empowering
technologies like OpenFlow [6] or Software Defined
Networking [7].
One aspect within the CDN service space that is often
overlooked is how to address the mapping of a content asset to
the set of CDN components used to serve this content. It is
clearly understood that for a given library some assets will be
accessed more often than others. Out of a library of assets, a
subset will be "hot" for some period of time, while a "cold"
asset is used by few or no clients. Assets will likely
transition from being cold to hot and then eventually back to
cold. The open CDN protocol must provide a way for a CDN to
dynamically change how an asset is treated by the set of CDN
components in order to get the most overall effectiveness out of
the entire CDN caching infrastructure.
An efficient solution in the CDN space should be able to
enable the CDN system to dynamically treat an asset
differently from a CDN delivery perspective. Namely, a CDN
should have the ability to treat cold assets differently from hot
assets. A CDN operator might reap the best network bandwidth
savings by mapping hot assets to a larger portion of their CDN
infrastructure, while cold assets would be mapped to a smaller
set of the CDN infrastructure. While it is certainly possible to
treat both hot and cold assets the same in the CDN (i.e. map all
assets to the same set of CDN components), better CDN
caching efficiencies are possible if hot and cold assets are
treated differently within the CDN. Specifically, allocating
more resources to hot content will result in better caching hit
rates and hence caching resource efficiencies. Allocating fewer
resources or less storage to cold content likely also helps
overall caching efficiency, as cold content does not cause hot
content to be removed from the cache. Better CDN caching
efficiencies translate into more cost savings at both the CDN
level and at the network level. The solution proposed in this
paper is not to replace caching logic within an individual cache
component, but instead empower the entire CDN infrastructure
to operate holistically and more efficiently.
We use the discussion of treating cold and hot assets
differently in a CDN as a means to develop a compelling
solution to the problems listed here.
III. ROUTING AND FOCUSED USE OF NETWORK RESOURCES
The network (layer 3 routing) domain gains benefits from
being able to associate specific network link resources to
specific IP end-points (e.g. map different IP prefixes to
different links to maximize link efficiencies). An ISP that peers
with multiple ISPs uses business and bandwidth constraints to
dynamically move network traffic (IP prefixes) across different
peering links. This can be done via both prefix size
manipulation and by using BGP policy capabilities. For
instance, an ISP could demote the use of a peering link for
some IP prefixes by AS (Autonomous System) padding the
respective BGP announcement. Another approach to adjust
how traffic arrives over a set of peering links is to send more
specific advertisements over different peering links; this is a
common way for an ISP to "traffic engineer" how traffic
arrives over its peering links. This traffic engineering
approach of announcing more specifics is similar in nature to
the engineering we would like to perform at the CDN level to
treat hot or cold assets differently.
We propose that network traffic engineering using more
specifics is a way to solve the hot and cold asset treatment
problem within the CDN space.
A. Routing and More Specifics as a Traffic Engineering
Tool
Consider the two ISP networks in Fig. 1. ISP 1 has the
24.24.0.0/16 address space that it is announcing from router 1
(R1) to ISP 2 over a single link to associated ISP 2 router 2
(R2). While ISP 2 might require connectivity to all of
the 24.24.0.0/16 space, assume that most of the traffic from
ISP 2 to ISP 1 is to a subset of that space, namely
that most of ISP 2's traffic is to 24.24.24.0/24. Further assume
that the existing link between the two ISPs is often overloaded
and both ISPs agree that they should augment capacity
between the two networks with a second network link.
In general, there are two common ways these ISPs could
add capacity between their networks (see Fig. 2). The Option 1
approach would be to add a single link between existing
routers R1 and R2 and announce 24.24.0.0/16 over both these
links. The Option 2 approach would be for ISP 1 to set up a new
router (R3) that is connected to, or network-wise near, the
24.24.24.0/24 subnet. ISP 1 could deploy R3 and then build the
second link from R3 in ISP 1's network to R2 in ISP 2's
network. ISP 1 could then announce just the more specific
route 24.24.24.0/24 to R2 from the R3 router and continue to
announce the aggregate of 24.24.0.0/16 from R1 to R2. Using
this approach where ISP 1 announces a more specific route,
ISP 1 can make more efficient use of its network resources by
causing traffic to 24.24.24.0/24 to be sent directly to R3, which
is close or directly connected to the 24.24.24.0/24 network.
This concept of more specifics is a common way for an ISP to
traffic engineer how traffic arrives into or across its network
infrastructure.
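
To make this concrete, here is a minimal sketch (Python, purely illustrative; the router names and prefixes are taken from the Option 2 example above) of the longest-match selection that produces this behavior:

    import ipaddress

    # Routes as seen by ISP 2's router R2 under Option 2: (prefix, next hop).
    routes = [
        (ipaddress.ip_network("24.24.0.0/16"), "R1"),
        (ipaddress.ip_network("24.24.24.0/24"), "R3"),
    ]

    def lookup(dst):
        """Return the next hop of the most specific prefix covering dst."""
        addr = ipaddress.ip_address(dst)
        matches = [(p, nh) for p, nh in routes if addr in p]
        # Longest match: the covering prefix with the largest length wins.
        return max(matches, key=lambda r: r[0].prefixlen)[1]

    print(lookup("24.24.24.7"))   # -> R3, the /24 more specific wins
    print(lookup("24.24.100.9"))  # -> R1, only the /16 aggregate matches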
The network model outlined here provides the functionality
we are looking for in the CDN space: namely the ability to treat
hot and cold assets differently within a CDN. In the above
example, we are logically treating the 24.24.24.0/24 subnet as
being hot and the rest of the 24.24.0.0/16 subnet as being cold.
We now apply the routing paradigm to the CDN domain.
B. More Specifics Applied to the CDN Space
Imagine that we treat Caches as "routers of content" that
forward requests for content, much like IP routers forward
packets, and where URI prefixes are advertised by Origins that
store certain sets of content and by Caches that have access to
certain sets of content. In this scenario, the network prefix
24.24.0.0/16 is analogous to an aggregate URI prefix
http://nbc.com. Assume that the more specific IP prefix
24.24.24.0/24 is akin to a sub-set of the nbc.com domain that
has hot URI assets, namely NBC Olympic content that might
be created during the upcoming Olympics in 2012 (i.e.
http://nbc.com/olympics). Replace the router designations in
the previous examples with Caches, as shown in Fig. 3. Here,
R2 becomes a child Cache (CC-2) and routers R1 and R3
become parent Caches (PC-1 and PC-3 respectively). The
child Cache CC-2 can use either PC-1 or PC-3 to reach NBC
content, but we want to make access to the Olympic content,
which is hot, as effective as possible. By applying the routing
paradigm to the announcement of the URI namespace, in
Option 1, PC-1 could announce nbc.com to CC-2. In Option
2, we define a new parent Cache (PC-3) that announces the
Olympic more specific prefixes to CC-2 (i.e.
nbc.com/olympics). In Option 2, CC-2 would see two
announcements, an aggregate (i.e. nbc.com from PC-1) and a
more specific (i.e. nbc.com/olympics from PC-3), and build a
forwarding information base (FIB) like structure. Much like in
the classic routing paradigm, the FIB would associate URI
prefixes with an associated next-hop address to parent Caches
that have announced access to a set of the URI namespace. On
a cache miss, CC-2 would do a longest match lookup on the
miss request URI against its locally created FIB and determine
which parent Cache should be used to support the content
request. For a URI request to CC-2 for the content
nbc.com/olympics/swimming-results, CC-2 would perform
the FIB longest-match lookup and send the HTTP GET request
for this URI to PC-3. If CC-2 received a URI request for the
URI http://nbc.com/30rock/episode-42, CC-2 would send the
HTTP GET request to PC-1.
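
The FIB behavior described above can be sketched in a few lines (Python, purely illustrative; a production implementation would match on full URI path segments rather than raw character prefixes, but the selection logic is the same):

    # CC-2's URI FIB, built from the announcements it received:
    # an aggregate from PC-1 and a more specific from PC-3.
    uri_fib = {
        "nbc.com": "PC-1",
        "nbc.com/olympics": "PC-3",
    }

    def next_hop(uri):
        """Longest-match lookup of a request URI against the URI FIB."""
        best = max((p for p in uri_fib if uri.startswith(p)), key=len)
        return uri_fib[best]

    # On a cache miss, CC-2 forwards the HTTP GET to the selected parent.
    print(next_hop("nbc.com/olympics/swimming-results"))  # -> PC-3
    print(next_hop("nbc.com/30rock/episode-42"))          # -> PC-1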
Fig. 1. Base ISP connectivity

Fig. 2. Network augment and associated routing options

Fig. 3. Applying routing options to CDN paradigm

There are clear parallels between the traffic engineering in
the network space and corresponding needs in the CDN space.
If routing has the right mechanisms to do the types of traffic
engineering we would like to do in the CDN space, namely
associating prefixes and next-hops together, then we should
consider the existing routing protocols as the signaling
mechanism in the CDN space. This leads us to BGP, or
specifically, Multi-protocol BGP (M-BGP) [8].
IV. MULTI-PROTOCOL BGP ADDRESS FAMILY EXTENSIONS
FOR CARRYING URIS
We propose the creation of a new BGP address family (AF)
that is specialized for carrying content URI prefixes and
associating these prefixes to IPv4/IPv6 next-hop addresses. At
a high level, this URI AF defines new BGP Network Layer
Reachability Information (NLRI), where the prefix is a
portion (aggregate) or entire (more specific) asset URI string
and the next-hop is an IPv4 or IPv6 address. BGP using this
new AF is targeted at carrying URI announcements within a
CDN domain, mimicking how Internal BGP (iBGP) works
within routers in an ISP network (with some subtle changes).
A CDN domain will consist of all the Caches and Origins
that operate as a single cohesive CDN; all CDN components
within a CDN domain would use the same BGP autonomous
system number (ASN). This CDN ASN need not be tied to the
underlying network ASN(s); the CDN operates as an overlay to
the network and the CDN BGP information also operates as an
overlay to the network BGP information. What this means is
that the underlying network routing infrastructure does not see
or participate in the CDN BGP URI information; this CDN
BGP information is only carried and processed by the CDN
elements.
Once we make the leap to carry URI prefixes in BGP in the
intra-CDN domain, this URI AF could also be used to carry
URI prefix announcements between CDN administrative
domains (inter-CDN). This parallels how External BGP
(eBGP) works between multiple ISPs in the existing IPv4/IPv6
namespace. Note that this URI AF is complementary to the
IETF ID that addresses request routing [3]. Our proposal
addresses URI announcements from the CSP through CDNs
toward intermediate (transit) CDNs and towards edge
(regional) CDNs, while the IETF ID proposal addresses the
announcements of client address space from edge CDNs, up
through transit CDNs to CSPs.
Our expectation is that CDNs will operate much like ISPs,
namely that CDNs will peer based on business relationships
that make strategic and economic sense between CDN
operators.
A. Internal CDN Tiering
A large ISP might serve linear and video-on-demand
content to millions of subscribers. Building a CDN to deliver
video to this number of end-points will require a large number
of Edge Caches. To reduce network bandwidth between these
Edge Caches and the Origins, one or more CDN mid-tier caching
layers might exist between the Origins and Edge Caches. The
number of mid-tier layers required to scale is a function of a
CDN operator's specific business and network requirements, as
well as the CDN components' performance profiles. Without
loss of generality, we discuss a CDN domain that consists of
three caching tiers and an Origin layer. End-clients connect
to an Edge Cache. The Edge Cache might be configured to
have access to one or more Mid-Tier Caches. These Mid-Tier
Caches might have access to one or more Caches operating at a
Root Caching tier. The Root Cache tier then has access to the
Origin(s).
We envision Caches being associated with a specific tier.
A "tier" is defined simply as a numeric value between 0 and
255 that represents the layer at which this Cache operates with
respect to other Caches within the CDN. A Cache's tier only
has significance within its CDN domain. We assume an Origin
has a tier value of 0 and an Edge Cache has a tier value of 250.
End-clients (or downstream CDNs, dCDNs) only access Edge Caches (those with
a tier value of 250). Edge Caches and all Intermediate Caches
can pull content from any Cache that is at a lower tier numeric
value. Since a single CDN domain operates as a single BGP
AS, it is this logical CDN tiering structure that enforces
directionality of how content is pulled, and thus prevents loops
from forming inside the CDN.
B. NLRI Format
In the BGP URI AF, the NLRI consists of three routing
constructs and one operational construct. The
three routing constructs are: 1) the complete or partial URI
prefix; 2) the tier of the Cache making the announcement; and
3) the IP address of this Cache. These are all self-explanatory.
In the NLRI, we also embed information that is operationally
useful to a CDN operator: specifically, the Cache reports the
set of parent Cache IPs that it has installed into its FIB for this
URI prefix. This piece of information is not necessary for the
CDN to operate properly, but it greatly simplifies a key
operational requirement, namely understanding what the CDN
distribution tree looks like for a given URI prefix at any
point in time.
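
Since the paper does not prescribe a wire encoding, the following sketch (Python; the field names and addresses are our own illustrative choices) simply captures the constructs the NLRI is described as carrying:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UriNlri:
        # Routing constructs:
        uri_prefix: str   # complete or partial (aggregate) URI prefix
        tier: int         # tier of the announcing Cache, 0..255
        cache_ip: str     # IPv4/IPv6 address of the announcing Cache
        # Operational construct: the parent Cache IPs this Cache has
        # installed into its FIB for this URI prefix. Not required for
        # correct operation, but it exposes the distribution tree for
        # the prefix to the CDN operator.
        fib_parents: List[str] = field(default_factory=list)

    announcement = UriNlri(
        uri_prefix="nbc.com/olympics",
        tier=100,
        cache_ip="192.0.2.10",
        fib_parents=["192.0.2.1"],
    )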
C. URI Prefix Selection
URI prefix selection in this new AF is nearly identical to
how BGP network path selection works. The one refinement is
that a single CDN domain might have multiple caching tiers
and we need to force the CDN delivery structure to be loop free
across these tiers. This is done by augmenting the BGP prefix
selection process with a "tier validation" step. Basically, a
Cache operating at tier X should only consider BGP URI
announcements from a CDN component that has a tier value
less than X. This step ensures that the CDN operates loop free. If a
Cache (C-100) is operating at tier 100 and receives an
announcement for some URI from a Cache (C-50) operating at
tier 50, a Cache (C-20) operating at tier 20, and a cache (C-
200) operating at tier 200, the C-100 Cache will ignore the C-
200 announcement. C-100 will consider both the C-50 and C-
20 announcements, but will prefer the C-50 over C-20, as it is
the closest to its own tier value. If the C-50 Cache
announcement disappears from BGP or becomes invalid
because the announcement next-hop is not present in the
Cache's routing table, C-100 will then select and install the C-
20 announcement into its Cache FIB.
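
The tier validation step layered onto path selection can be illustrated with a short sketch (Python, purely illustrative; the Cache names and tiers follow the example above, and the remaining BGP selection steps are elided):

    # Announcements for one URI prefix as seen by a Cache at tier 100:
    # (announcing Cache, its tier).
    announcements = [("C-50", 50), ("C-20", 20), ("C-200", 200)]

    def select_parent(my_tier, anns):
        """Drop announcements from tiers >= our own, then prefer the
        announcer whose tier value is closest to (just below) ours."""
        valid = [(name, t) for name, t in anns if t < my_tier]
        if not valid:
            return None  # no usable parent for this prefix
        return max(valid, key=lambda a: a[1])[0]

    print(select_parent(100, announcements))  # -> C-50 (C-200 is ignored)

    # If the C-50 announcement is withdrawn or its next-hop becomes
    # unresolvable, C-20 is selected and installed instead.
    remaining = [a for a in announcements if a[0] != "C-50"]
    print(select_parent(100, remaining))      # -> C-20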
Note that since we use the base BGP prefix selection
algorithm, the network distance to a next-hop Cache IP is
considered. It is this step in the prefix selection process that
makes the Cache network aware. Much like in the classic
BGP case, a network topology event would trigger the CDN to
re-perform the URI path selection, thus enabling the Cache to
always pick the nearest upstream Cache for pulling content.
Further, given the use of the tiering mechanism to enforce a
loop free CDN, multiple same tier parent Caches can be
installed into a Cache FIB for some URI prefix, even if these
parent Caches are at different network cost or metrics from the
child Cache. Since we are using BGP, we can leverage all the
existing BGP constructs and policies as part of making the
CDN network aware. CDN network awareness for some
aspects of the CDN operation might be based on the nearness
of an upstream Cache (i.e. per Interior Gateway Protocol (IGP)
cost or metrics). Alternatively, the CDN operator might want
to use a policy to pick a specific upstream Cache and only use
another if the primary Cache becomes unavailable (e.g. using
the BGP LOCAL_PREF mechanism). All other normal BGP
steps remain intact and are processed in the standard BGP
prefix selection order.
D. Logical CDNs and Using BGP Communities
One of the key criteria in the CDN space we listed earlier
was the need to treat a hot asset differently from a cold asset in
terms of which CDN components might be used to service hot
or cold content and how caching on these components might be
configured (caching versus cut-through).
We use the concept of BGP communities as both a tagging
mechanism on a URI announcement and as a filtering
mechanism on a Cache. Specifically, a CDN operator will
select one BGP community value to indicate when an asset is
hot and another community value when an asset is cold. Caches
are configured with the BGP community values they are to
support. A Cache configured to support hot content will accept
URI announcements that have the hot BGP community tag.
This Cache will then drop all inbound URI announcements that
do not carry the hot BGP community tag. Likewise, a Cache
configured to support only cold assets will accept URI
announcements with the cold BGP community tag and will
drop all other inbound URI announcements. A CDN provider
may have a more detailed business policy that can be enforced
by applying additional BGP community values to the
announcements and the Caches' filtering logic.
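
As a sketch of this inbound filtering (Python, purely illustrative; the community values are hypothetical, since the paper leaves them to operator policy):

    # Hypothetical community values chosen by the CDN operator.
    HOT, COLD = "65000:100", "65000:200"

    def accept(cache_communities, announcement):
        """Inbound policy on a Cache: install a URI announcement only if
        it carries a community this Cache is configured to serve."""
        return bool(cache_communities & set(announcement["communities"]))

    hot_cache = {HOT}    # a Cache configured for hot content
    cold_cache = {COLD}  # a Cache configured for cold content

    ann = {"uri_prefix": "nbc.com/olympics", "communities": [HOT]}
    print(accept(hot_cache, ann))   # True: installed
    print(accept(cold_cache, ann))  # False: dropped inbound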
BGP communities become the mechanism to signal to the
CDN the delivery treatment that should be applied for any URI
at the current time. As assets transition from cold to hot and
then back to cold, the CDN will have some internal analytics
and associated business rules to determine when to treat (and
BGP signal) a URI as hot or cold. The mechanism for
determining hot and/or cold assets is outside the scope of this
paper, but the real-time means to change the CDN treatment for
a URI prefix is driven by applying the right BGP community to
the URI announcement.
E. Benefits of BGP in the CDN Space
Before we talk about the benefits of using BGP for carrying
content URI prefixes, we enumerate some base assumptions for
the CDN BGP URI solution.
- A CDN domain need not use the same ASN as the underlying network ASN. In fact, a single CDN domain could span multiple network ASNs.
- To prevent URI namespace hijacking, all URI announcements should be secured using the mechanisms defined by the Secure Inter-Domain Routing working group.
- This URI AF only exists on Caches and Origins; it is not run on the routing infrastructure.
The following points detail the benefits of using BGP:
- BGP is well known. Using BGP in the URI space means instant expertise and vetted implementations.
- The continuous refinements and capabilities added to the BGP protocol in the network domain instantly become available and applicable to the CDN domain.
- All existing operational tools available for network BGP analysis and capacity planning would instantly be usable in the CDN space.
F. Additional Related BGP Address Families
This paper has focused on a mechanism for how
announcements from a CSP (or Origin) can percolate through a
single CDN, or across multiple CDNs. The CDNI Footprint
Advertisement Internet draft [3] details a similar approach that
addresses how client prefix information is used to enable CDNs
to announce or learn which IP prefixes are served by a CDN
(or reached through transit CDNs). The approach put forth in
this ID and the BGP URI AF approach proposed in this paper
are very much aligned and complementary. The AF detailed in
the ID addresses the request routing problem, while the BGP
URI AF presented here addresses CDN delivery and associated
URI treatment. In addition to these two AFs, we also propose
an additional AF that is applicable in the CDN space. This
additional AF is one where the prefix is a URI and the next-hop
could be a URI or related control constructs (e.g. BGP
community value(s)). The initial benefit of this is to allow an
Edge Cache to rewrite a client URI to the target Origin URI
namespace, but it can also be used to promote or demote assets
from cold to hot or vice versa.
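
A minimal sketch of the rewrite use of this additional AF (Python; the client-facing and Origin prefixes are hypothetical examples, not drawn from the paper):

    # Rewrite table an Edge Cache could build from URI-to-URI
    # announcements: client-facing prefix -> Origin URI namespace.
    rewrite_fib = {
        "cdn.example.net/nbc": "origin.nbc.com",
    }

    def rewrite(client_uri):
        """Rewrite the longest matching client prefix to its Origin form."""
        for prefix in sorted(rewrite_fib, key=len, reverse=True):
            if client_uri.startswith(prefix):
                return rewrite_fib[prefix] + client_uri[len(prefix):]
        return client_uri  # no entry: pass the URI through unchanged

    print(rewrite("cdn.example.net/nbc/olympics/opening"))
    # -> origin.nbc.com/olympics/opening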
G. Scaling of BGP for CDNs
BGP is the foundation for announcing both IPv4 and IPv6
connectivity Internet wide. The URI namespace is much larger
than the IPv4 or IPv6 namespace. One might deduce that, in the
CDN namespace, there must be one or more CDNs that carry a
default-free view of the URI namespace, and thus conclude
that the number of URI prefixes to be carried is on the order
of the number of DNS domain names, which far exceeds what is
likely possible to carry in
BGP in the near future [9]. This thinking, however, is applying
the Internet connectivity model to the CDN, which is both
unnecessary and counter-productive to caching efficiency.
Recall that the purpose of a CDN is to cache content. If one
considers Internet content to be the library, then one can easily
deduce that a vast majority of the URI namespace does not
yield value in being cached in a CDN. For instance, there's
likely little value in caching content associated with the
multitude of niche domain names. A CDN should only carry
the URI namespace that results in caching efficiencies,
meaning a CDN need only carry a very small portion of the
entire Internet URI namespace in the BGP URI AF.
A further constraint to the number of URIs that should be
carried in a CDNs BGP URI AF is the CDN storage footprint.
The CDN storage footprint places a very restrictive bound on
how many assets can be cached and yield positive caching
efficiencies.
V. RELATED WORK
The concepts presented within this paper clearly parallel the
Content Centric Networking (CCN) paradigm [10] suggested
by the researchers at PARC. The CCN approach uses Interest
packets to signal interest in some piece of content directly
on the routing infrastructure. It is suggested that these
Interest packets are routed, implying there is some FIB-like
mechanism used to determine which outgoing interface is
closest to the desired URI. The CCN researchers, as do we,
suggest announcing the URI information in some routing
protocol; the CCN team suggests an IGP, while we suggest
announcing this content namespace in BGP. In the CCN space,
it would be natural to extend carrying the content names
between CCN domains using BGP, much like we are
proposing.
The CCN approach suggests that one can use a router's line-
card packet buffers (the Content Store, or CS) to effectively
cache content directly on the forwarding path. Our approach
uses standalone Caches that connect into the routing
infrastructure. In our proposed solution, content hairpins from
the Router to the Cache, and back again to the Router, which is
somewhat inefficient. The CCN approach attempts to cache the
content in the CS. If the CS uses just the line card packet
buffers, it seems highly unlikely that much caching efficiency will
be gained in the cores of large ISP networks. Given current and
suggested future routing architectures, and associated
technology limitations, the CS size will be many orders of
magnitude smaller (and more costly) than the storage one can
expect in the stand-alone caching platform. What this means is
that for the CS to be caching efficient, the number of assets it
can cache will be very small relative to the stand-alone caching
model, maybe to the point where little network bandwidth will
be saved.
The stand-alone caching model (dedicated Caches)
leverages a "Content Router" to determine which requests use
the CDN and which do not. What this means is that the CDN
operator can choose which assets (or the number of assets) to
run through the CDN and default everything else to direct
client-Origin connectivity. We see no similar filtering
functionality in the CCN architecture, which suggests that the
CCN approach will struggle to provide useful caching
efficiencies in the core of large ISP networks compared to the
stand-alone caching approach.
VI. CONCLUSION
In this paper, we have proposed that BGP be augmented
with a new AF to carry URI prefixes as a means to: 1) have
both Caches and Origins use a single consistent open protocol
to enable cross-vendor and platform inter-operability, 2) make
the Caches aware of the network, and 3) enable CDNs to
change the CDN performance or treatment provided to one or
more assets over time. We have provided details on what the
NLRI structure would look like, and detailed the URI prefix
selection process. We have provided insight into why BGP is a
sound and scalable approach to carry URI prefixes. We
concluded with how this approach both aligns and differs from
the CCN technology approach.
ACKNOWLEDGEMENTS
The authors would like to thank Bruce Davie for his
comments and thoughts on earlier revisions of this paper.
REFERENCES
[1] Cisco, "VNI Forecast and Methodology, 2010-2015," http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360_ns827_Networking_Solutions_White_Paper.html, June 2011.
[2] K. Leung, Y. Lee, "Content Distribution Network Interconnection (CDNI) Requirements," http://tools.ietf.org/html/draft-ietf-cdni-requirements-02.
[3] S. Previdi, F. Le Faucheur, A. Guillou, J. Medved, "CDNI Footprint Advertisement," http://www.ietf.org/internet-drafts/draft-previdi-cdni-footprint-advertisement-00.txt.
[4] B. Niven-Jenkins, N. Bitar, J. Medved, S. Previdi, "Use Cases for ALTO within CDNs," draft-jenkins-alto-cdn-use-cases-01, June 2011.
[5] H. Song, Y. Yang, R. Alimi, "DECoupled Application Data Enroute (DECADE) Problem Statement," draft-ietf-decade-problem-statement-04, October 2011.
[6] http://www.openflow.org/
[7] N. McKeown, "Software Defined Networking," http://tiny-tera.stanford.edu/~nickm/talks/infocom_brazil_2009_v1-1.pdf.
[8] T. Bates, R. Chandra, D. Katz, and Y. Rekhter, "Multiprotocol Extensions for BGP-4," RFC 4760, January 2007.
[9] A. Narayanan, D. Oran, "NDN and Routing: Can It Scale?," http://trac.tools.ietf.org/group/irtf/trac/attachment/wiki/icnrg/IRTF%20-%20CCN%20And%20IP%20Routing%20-%202.pdf.
[10] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, R. L. Braynard (PARC), "Networking Named Content," CoNEXT 2009, Rome, December 2009.
