
Cisco Application Centric Infrastructure Release 1.2(1)
Design Guide Using the Basic GUI

© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Page 1 of 84

Contents
Introduction .............................................................................................................................................................. 5
Components and Versions .................................................................................................................................... 6
Hardware Choices ................................................................................................................................................ 6
Expansion Modules .......................................................................................................................................... 6
Leaf Switches ................................................................................................................................................... 7
Spine Switches ................................................................................................................................................. 7
Cisco APIC ....................................................................................................................................................... 7
What's New in Cisco ACI Release 1.2? ................................................................................................ 8
Management Tools ............................................................................................................................................... 8
What Is the Basic GUI? ......................................................................................................................................... 8
Physical Topology ................................................................................................................................................... 9
Leaf-and-Spine Design ......................................................................................................................................... 9
Mapping Database ................................................................................................................................................ 9
Main Concepts of Cisco ACI ................................................................................................................................. 10
Tenants ............................................................................................................................................................... 10
Endpoint Groups ................................................................................................................................................. 12
EPG Classification.......................................................................................................................................... 12
vzAny ............................................................................................................................................................. 12
Application Network Profile ................................................................................................................................. 12
Contract .......................................................................................................................................................... 12
Filters ............................................................................................................................................................. 13
Subjects ......................................................................................................................................................... 13
Routing and Switching in the Policy Model ......................................................................................................... 13
Bridge Domains .............................................................................................................................................. 14
VXLAN Forwarding......................................................................................................................................... 15
VXLAN Tunnel Endpoints ............................................................................................................................... 15
VXLAN Headers Used in the Cisco ACI Fabric .............................................................................................. 16
Inside Versus Outside Routing ....................................................................................................................... 17
Forwarding Tables: Global Station Table and Local Station Table ................................................................. 18
External EPGs ................................................................................................................................................ 18
Virtual Machine Migration ............................................................................................................................... 21
Controller Design Considerations ........................................................................................................................ 21
Cisco APIC Functions ......................................................................................................................................... 22
Getting Started .................................................................................................................................................... 22
Infrastructure VLAN ............................................................................................................................................ 23
Fabric Discovery ................................................................................................................................................. 23
VTEP IP Address Pool ........................................................................................................................................ 24
Verifying Cisco APIC Connectivity ...................................................................................................................... 24
In-Band and Out-of-Band Management of Cisco APIC ....................................................................................... 25
Cluster Sizing and Redundancy .......................................................................................................................... 25
Preparing the Fabric Infrastructure ...................................................................................................................... 26
Configuring MP-BGP .......................................................................................................................................... 26
Spanning-Tree Considerations ........................................................................................................................... 28
Flooding Within the EPG ................................................................................................................................ 28
Configuring External Switches Connected to the Same Bridge Domain but Different EPGs .......................... 28
Working with Multiple Spanning Tree ............................................................................................................. 29
Configuring VLAN and VXLAN Domains ............................................................................................................ 29
Configuring the Infrastructure VLAN............................................................................................................... 31
Adding a VLAN to a Port ................................................................................................................................ 31
Configuring Trunk and Access Ports .............................................................................................................. 32
Creating Port Channels ....................................................................................................................................... 33
Creating Local Port Channels ........................................................................................................................ 33
Creating Virtual Port Channels ....................................................................................................................... 34
Assigning VMM Domains to Ports ....................................................................................................................... 34
Creating Tenants ................................................................................................................................................ 37

Creating L3Out Connections ............................................................................................................................... 37


Preparing the Virtual Infrastructure ..................................................................................................................... 39
Integrating Cisco APIC and VMware vCenter ..................................................................................................... 40
Configuring Cisco ACI vDS ............................................................................................................................ 41
Configuring vDS Uplinks ................................................................................................................................ 41
Creating an Attachable Entity Profile for Virtualized Server Connectivity ............................................................ 42
Designing the Tenant Network ............................................................................................................................. 43
Implementing a Simple Layer 2 Topology ........................................................................................................... 44
Configuring Bridge Domains ............................................................................................................................... 47
Tuning the Bridge Domain Configuration ....................................................................................................... 47
Using Common Bridge Domain Configurations .............................................................................................. 48
Placing the Default Gateway ............................................................................................................................... 49
Virtual IP, Virtual MAC Address ..................................................................................................................... 50
Configuring EPG and Server Connectivity .......................................................................................................... 50
Creating EPGs ............................................................................................................................................... 51
Configuring EPG Deployment Immediacy Options ......................................................................................... 54
Using Endpoint Learning: Virtual Machine Port Group ................................................................................... 55
Using Endpoint Learning: Bare-Metal Host .................................................................................................... 55
Using Endpoint Aging ..................................................................................................................................... 55
Configuring Contracts ......................................................................................................................................... 55
Setting the Contract Scope ............................................................................................................................ 56
Setting the Contract Direction ........................................................................................................................ 56
Configuring an Unenforced VRF Instance ...................................................................................................... 57
Using vzAny ................................................................................................................................................... 57
Configuring VRF Route Leaking with Contracts ............................................................................................. 58
Configuring External Layer 3 Connectivity ......................................................................................................... 58
Private Versus External Routed Networks .......................................................................................................... 59
Relationships in the Object Tree ......................................................................................................................... 59
Border Leaf Switches .......................................................................................................................................... 59
Performing Route Distribution Within the Cisco ACI Fabric ................................................................................ 60
Assigning the BGP Autonomous System Number .............................................................................................. 61
Configuring Infrastructure L3Out Connectivity .................................................................................................... 62
Importing and Exporting Routes ..................................................................................................................... 63
Configuring Tenant L3Out Connectivity .............................................................................................................. 65
Creating Subnets............................................................................................................................................ 65
Creating an External EPG .............................................................................................................................. 66
Announcing Subnets to the Outside ............................................................................................................... 66
Designing for Shared Services ............................................................................................................................. 67
Configuring VRF Route Leaking Across Tenants................................................................................................ 69
Configuring Shared Subnets and Contract Interfaces .................................................................................... 69
Announcing Shared Subnets to an L3Out Connection ................................................................................... 70
Configuring a Shared L3Out Connection ............................................................................................................ 72
Configuring the VRF Instance and Bridge Domains in the Common Tenant .................................................. 72
Configuring the VRF Instance in the Common Tenant and Bridge Domains in Each Specific Tenant ........... 73
Sharing the L3Out Connection Across Multiple VRF Instances ..................................................................... 73
Creating the Shared Services Tenant Using External Peering............................................................................ 75
Scalability Considerations .................................................................................................................................... 75
VLAN Consumption on the Leaf Switch .............................................................................................................. 76
Scalability with Hardware Proxy .......................................................................................................................... 77
Scalability with Layer 2 Forwarding Bridge Domain ............................................................................................ 77
Policy CAM Consumption ................................................................................................................................... 77
Policy CAM Consumption with Contracts Between Layer 3 External EPGs and Internal EPGs ..................... 78
Policy CAM Scalability Improvements Starting from Cisco ACI Release 1.2 .................................................. 78
vzAny ............................................................................................................................................................. 80
Capacity Dashboard ........................................................................................................................................... 80
Configuring In-Band and Out-of-Band Management .......................................................................................... 80
Configuring an Out-of-Band Management Network ............................................................................................ 81
Configuring an In-Band Management Network ................................................................................................... 81

Best Practices Summary ....................................................................................................................................... 83


For More Information ............................................................................................................................................. 84


Introduction

Cisco Application Centric Infrastructure (Cisco ACI) technology enables you to integrate virtual and physical
workloads in a programmable, multihypervisor fabric to build a multiservice or cloud data center. The Cisco ACI
fabric consists of discrete components that operate as routers and switches but is provisioned and monitored as a
single entity. The solution operates like a single switch and router that provides traffic optimization, security, and
telemetry functions, stitching together virtual and physical workloads.
This document describes how to implement a fabric like the one depicted in Figure 1.
This document describes these main building blocks of the design:

● Two spine switches interconnected to several leaf switches
● Top-of-rack (ToR) leaf switches, with a mix of 10GBASE-T leaf switches and Enhanced Small Form-Factor Pluggable (SFP+) leaf switches, connected to the servers
● Physical and virtualized servers dual-connected to the leaf switches
● A pair of border leaf switches connected to the rest of the network with a configuration that Cisco ACI calls a Layer 3 outside (L3Out) connection
● A cluster of three Cisco Application Policy Infrastructure Controllers (APICs) dual-attached to a pair of leaf switches in the fabric

Figure 1. Cisco ACI Fabric

The network fabric in this design provides the following main services:

● Connectivity for physical and virtual workloads
● Partitioning of the fabric into multiple tenants, which can be departments or hosted customers
● A shared-services partition (tenant) to host servers or virtual machines whose computing workloads provide infrastructure services such as Network File System (NFS) and Microsoft Active Directory to the other tenants
● Capability to provide dedicated and shared L3Out connections to the tenants present in the fabric


Components and Versions

● Cisco ACI with APIC Release 1.2(1)
● Cisco Nexus 9000 Series ACI-Mode Switches with Cisco NX-OS Software Release 11.2(1)
● VMware ESX with VMware vSphere 6

Hardware Choices
Cisco ACI offers a variety of hardware platforms. Choose a platform according to the type of physical layer
connectivity you need, the amount of ternary content-addressable memory (TCAM) space and buffer space you
need, and whether you want to use IP-based classification of workloads into endpoint groups (EPGs). Table 1
provides a summary of the hardware options currently available. You should refer to the Cisco product page for the
most up-to-date information.
Table 1. Cisco ACI Fabric Hardware Options as of This Writing

Expansion Modules
You can choose among three expansion modules according to the switches you are using and your needs:

Cisco M12PQ: Twelve 40-Gbps ports with an additional 40 MB of buffer space and a smaller TCAM
compared to the other models; can be used with the Cisco Nexus 9396PX, 9396TX, and 93128TX Switches

Cisco M6PQ: Six 40-Gbps ports with additional policy TCAM space; can be used with the Cisco Nexus
9396PX, 9396TX, and 93128TX Switches

Cisco M6PQ-E: Six 40-Gbps ports with additional policy TCAM space; can be used with the Cisco Nexus
9396PX, 9396TX, and 93128TX Switches and allows you to classify workloads into EPGs based on the IP
address of the originating workload


Leaf Switches
In Cisco ACI, all workloads connect to leaf switches. The leaf switches used in a Cisco ACI fabric are ToR
switches. They are divided into two main categories based on their hardware:

● Modular leaf switches: These are ToR leaf switches with an expansion module. The expansion module determines the amount of additional buffer space and the capacity of the TCAM.
● Fixed form-factor leaf switches: These switches are based on the same hardware as the M6PQ expansion module. The -E version of these switches is based on the same hardware as the M6PQ-E expansion module.

Spine Switches
The Cisco ACI fabric forwards traffic mainly based on host lookups, and a mapping database stores the information
about the ToR switch on which each IP address resides. This information is stored in the fabric cards of the spine
switches.
The spine switches have several form factors. The models also differ in the number of endpoints that they can hold
in the mapping database, which depends on the number of fabric modules installed. Modular switches equipped
with six fabric modules can hold the following numbers of endpoints:

● Fixed form-factor Cisco Nexus 9336PQ: Up to 200,000 endpoints
● Modular 4-slot switch: Up to 300,000 endpoints
● Modular 8-slot switch: Up to 600,000 endpoints
● Modular 16-slot switch: Up to 1.2 million endpoints

Note: You can mix spine switches of different types, but the total number of endpoints that the fabric supports is the minimum common denominator. You should stay within the maximum tested limits for the software, which are shown in the Capacity Dashboard in the APIC GUI. At the time of this writing, the maximum number of endpoints that can be used in the fabric is 180,000.
Cisco APIC
The APIC is the point of configuration for policies and the place where statistics are archived and processed to
provide visibility, telemetry, and application health information and enable overall management of the fabric. The controller is a physical server appliance, such as a Cisco UCS C220 M3 or M4 Rack Server, with two 10 Gigabit
Ethernet interfaces that are designed to be connected to the leaf switches and with 1 Gigabit Ethernet interfaces
for out-of-band management. Two controller models are available: Cisco APIC-M (for medium-size configurations)
and APIC-L (for large configurations). You can also choose between APIC-CLUSTER-M1 and APIC-CLUSTER-M2
and between APIC-CLUSTER-L1 and APIC-CLUSTER-L2, depending on the server hardware that you are using.
At the time of this writing, the recommendation is to use APIC-M for fabrics with fewer than 1000 edge ports, and
APIC-L for fabrics with more than 1000 edge ports.
Note: You can mix clusters of different sizes (M and L), but the scalability will be that of the less powerful cluster member.


What's New in Cisco ACI Release 1.2?


Cisco ACI Release 1.2(1) offers many new features. The main new features relevant for this design guide include
the following:

● Redesigned GUI
● Cisco NX-OS Software command-line interface (CLI), as in Cisco Nexus switches
● Capability to share L3Out connections across multiple tenants with intertenant route leaking
● Increased scalability support achieved by distributing L3Out classification and contracts in the data plane across all the leaf switches in the fabric

Management Tools
The tools that the networking team can use to configure Cisco ACI include the following:

● CLI for monitoring each device in the fabric, accessible through Secure Shell (SSH) or the console and using the traditional NX-OS show commands
● A new NX-OS CLI to manage the entire fabric from the APIC
● Two GUI modes (Advanced and Basic) that guide the user through the tasks of managing fabrics of various sizes
● Representational state transfer (REST) calls with XML or JavaScript Object Notation (JSON), which can be sent with various tools, such as POSTMAN
● Python scripts using the libraries provided by Cisco, or scripts that originate in REST calls
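As an illustration of the REST approach, the following standard-library Python sketch builds the `aaaLogin` request and a class-level query URL against a placeholder controller address. The hostname and credentials are assumptions for this example; verify the endpoints against the APIC REST API documentation for your release before use.

```python
import json
import urllib.request

# Placeholder APIC address and credentials -- replace with your own.
APIC = "https://apic.example.com"
USER, PASSWORD = "admin", "password"

def login_payload(user, password):
    """Body for POST /api/aaaLogin.json, the APIC REST login call."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": password}}}

def class_query_url(apic, cls):
    """Class-level query URL, e.g. every fvTenant object in the fabric."""
    return f"{apic}/api/class/{cls}.json"

# Build the login request; a successful POST returns an APIC-cookie token
# that must accompany subsequent calls.
body = json.dumps(login_payload(USER, PASSWORD)).encode()
request = urllib.request.Request(f"{APIC}/api/aaaLogin.json",
                                 data=body, method="POST")
print(request.full_url)                   # https://apic.example.com/api/aaaLogin.json
print(class_query_url(APIC, "fvTenant"))  # https://apic.example.com/api/class/fvTenant.json
# To actually send the request (lab use):
#   with urllib.request.urlopen(request) as resp:
#       token = json.load(resp)
```

The same payloads can of course be sent from POSTMAN or any other REST client; only the body and URL shapes matter.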

This design guide assumes that the administrator is using the redesigned GUI (called the Basic GUI).
Note: This document doesn't include examples of the use of the NX-OS CLI, but you can also perform all the configurations described in this document by using the new NX-OS CLI.

What Is the Basic GUI?


The Basic GUI enables network administrators to configure leaf ports and tenants without the need to configure switch profiles, interface profiles, policy groups, attachable entity profiles (AEPs), and so on. The Advanced GUI, by contrast, has a 1:1 mapping with the object model.
The main differences between the Advanced GUI and the Basic GUI lie in the workflows required to achieve the same configuration. For instance, with the Basic GUI the user configures one port at a time, as was the case with switches prior to Cisco ACI; hence, the GUI creates one object for each port.
If you want to configure many ports simultaneously and identically, the preferred tool is the Advanced GUI. If you
want to create configurations using interface profiles, selectors, policy groups, etc., or if you plan to automate the
fabric, you also should use the Advanced GUI.
Changes made through the Basic GUI can be seen, but cannot be modified, in the Advanced GUI, and changes
made in the Advanced GUI cannot be rendered in the Basic GUI. The GUI also prevents you from changing
objects that are created in one GUI from the other GUI.
The Basic GUI is kept synchronized with the NX-OS CLI: if you make a change from the NX-OS CLI, the change is rendered in the Basic GUI, and changes made in the Basic GUI are rendered in the NX-OS CLI.


Physical Topology
Cisco ACI uses a leaf-and-spine topology, in which each leaf switch is connected to every spine switch in the
network, with no interconnection between leaf switches or spine switches:

● Each leaf switch is connected to each spine switch with one or more 40 Gigabit Ethernet links.
● Each APIC appliance connects to two leaf switches for resiliency.

Leaf-and-Spine Design
The fabric is based on a leaf-and-spine architecture in which leaf and spine switches provide the following functions:

● Leaf devices: These devices have ports connected to Classic Ethernet devices (servers, firewalls, router ports, etc.) and 40 Gigabit Ethernet uplink ports connected to the fabric cloud. Leaf switches are at the edge of the fabric and provide the Virtual Extensible LAN (VXLAN) tunnel endpoint (VTEP) function. They are also responsible for routing or bridging tenant packets and for applying network policies. Leaf devices can map an IP or MAC address to the destination VTEP. The same switches can be used as regular NX-OS devices or in Cisco ACI mode as leaf devices.
● Spine devices: These devices exclusively interconnect leaf devices. Spine devices also provide the mapping database function, and the hardware used for the spine device is designed for this function. The hardware includes specific line cards for the Cisco Nexus 9500 platform switches (such as the Cisco Nexus 6736PQ line card) and a ToR switch with 40 Gigabit Ethernet ports (the Cisco Nexus 9336PQ).

Besides forwarding traffic, the leaf discovers the endpoints and informs the spine switch. As a result, the spine
switch creates the mapping between the endpoint and the VTEP.
The leaf is also the place where policies are applied to traffic.
All leaf devices connect to all spine devices, and all spine devices connect to all leaf devices, but no direct
connectivity is allowed between spine devices or between leaf devices. If you incorrectly cable spine switches to
each other or leaf switches to each other, the interfaces will be disabled. You may have topologies in which certain
leaf devices are not connected to all spine devices, but traffic forwarding will not be as effective as when each leaf
is attached to each spine.
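These cabling rules lend themselves to a mechanical check. The following is a hypothetical Python sketch (the switch names are invented) that flags leaf-to-leaf or spine-to-spine links, which Cisco ACI would disable, and reports leaf switches that lack an uplink to every spine:

```python
def check_fabric(links, leaves, spines):
    """Validate leaf-and-spine cabling: reject leaf-leaf and spine-spine
    links, and report any leaf not attached to every spine."""
    errors = []
    for a, b in links:
        if a in leaves and b in leaves:
            errors.append(f"leaf-to-leaf link {a}-{b} not allowed")
        if a in spines and b in spines:
            errors.append(f"spine-to-spine link {a}-{b} not allowed")
    for leaf in sorted(leaves):
        # Collect every device this leaf is cabled to, in either direction.
        attached = {b for a, b in links if a == leaf} | {a for a, b in links if b == leaf}
        missing = spines - attached
        if missing:
            errors.append(f"{leaf} missing uplinks to {sorted(missing)}")
    return errors

leaves = {"leaf101", "leaf102"}
spines = {"spine201", "spine202"}
links = [("leaf101", "spine201"), ("leaf101", "spine202"),
         ("leaf102", "spine201")]   # leaf102 is missing one uplink
print(check_fabric(links, leaves, spines))
```

A partially connected leaf, as in the example, is reported rather than rejected, matching the text: such topologies work, but forwarding is less effective.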

Mapping Database
For Cisco ACI to forward traffic through the fabric, the fabric must know the identity and the location of the
endpoint. The fabric can learn the location of the endpoint in the following ways:

● The administrator can statically program the identity-to-location mapping.
● Upon creation of a new virtual machine, the virtual machine manager (VMM) can update the APIC with the identity and location information. The location (that is, the port to which the virtual machine is connected) is known through a combination of what the virtual machine manager tells the APIC (the ESXi host on which the virtual machine is located) and the information that the APIC retrieves from the leaf (the Link Layer Discovery Protocol [LLDP] or Cisco Discovery Protocol neighbor, used to identify the interface to which the ESXi host is connected).
● Dynamic Host Configuration Protocol (DHCP) packets can be used to learn the identity-to-location mapping.


● Learning can occur through Address Resolution Protocol (ARP), Gratuitous ARP (GARP), and Reverse ARP (RARP) traffic. The CPU on the leaf switch, upon receiving a copy of the ARP, GARP, or RARP packet, updates its local mapping cache with a static entry for this host and informs the centralized mapping database of the update for the host address through the Council of Oracles Protocol (COOP).
● Learning can be based on the arrival of the first packet.

Upon learning the endpoint information, the leaf switch to which the endpoint is connected updates the mapping
database.
The mapping database is a database maintained by the fabric and contains the mapping for each endpoint
attached to the network (the identifier) and the address of the tunnel endpoint (TEP) that the endpoint sits behind
(the locator). The endpoint address is both the MAC address and the IP address of the endpoint plus the logical
network in which the endpoint resides (the Virtual Routing and Forwarding [VRF] instance). The mapping database
in the spine is replicated for redundancy, and it is synchronized across all spine switches.
The spine proxy database is updated using COOP. The leaf switch selects one of the spine switches at random to
which to send the update. That spine switch then updates all the other spine switches to help ensure consistency
of the database across the nodes.
When an ingress leaf switch forwards a packet, it checks the local cache of the mapping database. If it does not
find the endpoint address it is looking for, it encapsulates the packet with the destination address of the spine proxy
anycast and forwards it as a unicast packet. The spine switch, upon receiving the packet, looks up the destination
identifier address in its forwarding tables, which contain the entire mapping database. The spine then
reencapsulates the packet using the destination locator while retaining the original ingress source locator address
in the VXLAN encapsulation. This packet is then forwarded as a unicast packet to the intended destination. This
process eliminates unknown unicast flooding and ARP flooding.
The mapping database consists of entries on the spine switches and on the leaf switches (in the local station
table). The entries in the mapping database can expire. The default timer for the table that holds the host
information on the leaf switches is 900 seconds. After 75 percent of this value is reached, the leaf sends three ARP
requests as unicast packets in a staggered fashion (with a time delta between the requests) to check for the
endpoint's existence. If there is no ARP response, then the endpoint is removed from the local table and from the
mapping database in the spine.
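As a rough illustration, the aging logic just described can be sketched in Python. The names and structure here are illustrative only, not the actual switch implementation:

```python
# Simplified model of Cisco ACI endpoint aging (illustrative only).
ENDPOINT_TIMEOUT = 900          # default retention timer, in seconds
PROBE_THRESHOLD = 0.75          # probe after 75 percent of the timer has elapsed
ARP_PROBES = 3                  # number of staggered unicast ARP requests

def should_probe(idle_seconds: float) -> bool:
    """True once 75 percent of the retention timer has elapsed without traffic."""
    return idle_seconds >= ENDPOINT_TIMEOUT * PROBE_THRESHOLD

def age_endpoint(idle_seconds: float, arp_replied: bool) -> str:
    """Decide the fate of a cached endpoint entry."""
    if not should_probe(idle_seconds):
        return "keep"                    # entry still fresh
    if arp_replied:
        return "refresh"                 # host answered one of the ARP probes
    return "remove"                      # delete from the LST and the spine mapping database
```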

Main Concepts of Cisco ACI


This section provides a summary of some of the main Cisco ACI concepts that are used in the rest of this design
guide.

Tenants
The Cisco ACI fabric uses VXLAN-based overlays to provide the abstraction necessary to share the same
infrastructure across multiple independent forwarding and management domains, called tenants. Figure 2
illustrates the concept.


Figure 2. Tenants Are Logical Divisions of the Fabric

Tenants provide two functions: a management function and a data-plane function. A tenant is a collection of
configurations that belong to a particular entity, such as the development environment in Figure 2, and keeps the
management of those configurations separate from that of other tenants. The tenant also provides a data-plane
isolation function using VRF instances (private networks) and bridge domains. Figure 3 illustrates the relationship
among the building blocks of a tenant.
Figure 3. Hierarchy of Tenants, Private Networks (VRF Instances), Bridge Domains, and EPGs


Endpoint Groups
Cisco ACI introduces an important security design concept: it groups endpoints into EPGs when they have the
same access-control requirements.
EPG Classification
Traffic from endpoints is classified and grouped into EPGs based on various configurable criteria, some of which
are hardware dependent, and some of which are software dependent:

● Port and VLAN, port and VXLAN, network and mask, or explicit virtual network interface card (vNIC) assignment

● IP and MAC address, or virtual machine attributes for virtual machines attached to a Cisco Application Virtual Switch (AVS; since Cisco ACI Release 1.1(1j)) and for virtual machines running in Microsoft Hyper-V (starting from Release 1.2(1))

● IP host address (starting with Cisco ACI Release 1.2(1)) with -E version switches and generic expansion modules (GEMs)

The first, and simplest, classification methodology divides workloads based on the port of the incoming traffic and
the VLAN ID or assigns particular vNICs to particular port groups.
The second option assigns virtual machines to a particular EPG based on attributes of the virtual machine itself.
The third option classifies traffic originated by servers (physical or virtualized) based on the full IP address of the
traffic that the servers send.
vzAny
vzAny, also called EPG collection for context, is a special EPG that represents all the EPGs associated with the
same VRF instance. This concept is useful when a configuration has contract rules that are common across all the
EPGs in the same VRF instance. In this case, you can place the rules that are common in a contract associated
with this EPG.

Application Network Profile


An application network profile is a collection of EPGs, contracts, and connectivity policy.
In general, you should think of the application profile as a hierarchical container of EPGs and their connectivity
requirements.
Note: To deploy Cisco ACI, you don't need to know the mappings of applications. Several documented methods are available for mapping existing network constructs in the Cisco ACI policy model.
Contract
A contract is a policy construct used to define the communication between EPGs. Without a contract between
EPGs, no communication is possible between those EPGs (unless the VRF instance is configured as "unenforced"). Within an EPG, a contract is not required to allow communication because this communication is
always allowed. Figure 4 shows the relationship between EPGs and contracts.


Figure 4. EPGs and Contracts

An EPG provides or consumes a contract (or provides and consumes different contracts). For example, the Web
EPG in the example in Figure 4 provides a contract that the App EPG consumes. Similarly, the App EPG provides
separate contracts, which are consumed by the Web and DB EPGs.
Filters
A filter is a rule specifying fields such as the TCP port and protocol type, and it is referenced within a contract to
define the communication allowed between EPGs in the fabric.
A filter contains one or more filter entries that specify the rule. The example in Figure 5 shows how filters and filter
entries are configured in the APIC GUI.
Figure 5. Filters and Filter Entries

Subjects
A subject is a construct contained within a contract that typically references a filter. For example, the contract Web might contain a subject named Web-Subj that references a filter named Web-Filter.
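To make the contract-subject-filter hierarchy concrete, here is a minimal Python sketch using the Web, Web-Subj, and Web-Filter names from the example above. The dictionary layout is illustrative only, not an APIC data format:

```python
# Minimal sketch of the contract model: a contract holds subjects,
# each subject references a filter, and a filter holds match entries.
contract_web = {
    "name": "Web",
    "subjects": [
        {
            "name": "Web-Subj",
            "filter": {
                "name": "Web-Filter",
                "entries": [{"protocol": "tcp", "dst_port": 80}],
            },
        }
    ],
}

def allowed(contract: dict, protocol: str, dst_port: int) -> bool:
    """True if any filter entry in any subject of the contract permits this traffic."""
    return any(
        e["protocol"] == protocol and e["dst_port"] == dst_port
        for subject in contract["subjects"]
        for e in subject["filter"]["entries"]
    )
```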

Routing and Switching in the Policy Model


All configurations in Cisco ACI are part of a tenant.
The application abstraction demands that EPGs always be part of an application network profile, and the
relationship between EPGs through contracts can span application profiles and even tenants.
Bridging domains and routing instances provide the transport infrastructure for the workloads defined in the EPGs.
Within a tenant, you define one or more Layer 3 networks (VRF instances), one or more bridge domains per
network, and EPGs to divide the bridge domains.
The relationships among the various objects are as follows: the EPG points to a bridge domain, and the bridge
domain points to a Layer 3 network (Figure 6).


Figure 6. Relationships Among Networks (VRF Instances), Bridge Domains, and EPGs

Bridge Domains
The concept of the bridge domain is similar to the concept of the VLAN in a traditional network.
A bridge domain is a container for subnets. Subnets are the equivalent of a switch virtual interface (SVI) IP
address, and they are pervasive gateway addresses. A pervasive gateway address is a gateway address that is
automatically available on all leaf switches instead of being localized on just a specific leaf. Therefore, a server can
be placed in the bridge domain on any leaf, and its default gateway is always local on the leaf to which the server is
connected.
The bridge domain can act as a broadcast or flooding domain if broadcast or flooding is enabled (these are rarely needed). The bridge domain is not a VLAN, although it can act similarly to a VLAN. You should instead think of it as a distributed Layer 2 broadcast domain, which, on a leaf, can be translated locally as a VLAN with local significance.
The default implementation of the Cisco ACI bridge domain does not operate in flooding mode. By using direct
ARP forwarding and VXLAN, the fabric can provide Layer 2 adjacency within the same bridge domain without any
flooding.
The bridge domain also can be set to operate in flooding mode if the fabric needs to interoperate with traditional
networks that require flooding to function.
The bridge domain must have a reference to a VRF instance called a Layer 3 network or private network.
If you don't configure this relationship in Cisco ACI, the bridge domain is never instantiated. Even if you plan to use
the bridge domain as a pure Layer 2 configuration, the bridge domain must have a reference to a VRF instance.
The VRF instance itself is not allocated in hardware unless the option to use unicast routing is selected in the
bridge domain. With this approach, the relationship between the VRF instance and the bridge domain that is
mandated by the object model does not consume hardware resources unnecessarily.
Whenever you create an EPG, you need to reference a bridge domain.


VXLAN Forwarding
VXLAN is a critical technology used in the Cisco ACI fabric. VXLAN is designed to address the shortcomings
associated with regular VLANs:

● VXLAN provides greater scalability in the number of Layer 2 segments supported. Whereas VLANs are limited to just over 4000, VXLAN can scale (through the use of a 24-bit ID) to up to 16 million individual segments.

● VXLAN allows extension of Layer 2 across Layer 3 boundaries through the use of MAC Address in User Datagram Protocol (MAC-in-UDP) encapsulation.

VXLAN uses an 8-byte header consisting of a 24-bit virtual network identifier (VNID) and a number of reserved bits,
as shown in Figure 7.
Figure 7. VXLAN Header
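The 8-byte header layout can be illustrated with a small Python sketch that packs and unpacks the 24-bit VNID. This is a simplified model of the standard VXLAN header, not Cisco ACI code:

```python
import struct

VXLAN_FLAG_I = 0x08          # 'I' bit: indicates that the VNID field is valid

def pack_vxlan_header(vnid: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte, 24 reserved bits,
    24-bit VNID, and a final reserved byte."""
    if not 0 <= vnid < 2 ** 24:
        raise ValueError("VNID must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_I << 24, vnid << 8)

def unpack_vnid(header: bytes) -> int:
    """Recover the 24-bit VNID from a VXLAN header."""
    _, second_word = struct.unpack("!II", header)
    return second_word >> 8
```

The 24-bit field is what yields the roughly 16 million (2^24) segment IDs mentioned above.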

VXLAN Tunnel Endpoints


The VTEP is the device that terminates a VXLAN tunnel. A VTEP is a virtual or physical device that maps end devices to VXLAN segments and performs encapsulation and de-encapsulation. A VTEP has two interfaces: one on the local LAN segment, used to connect directly to end devices, and the other on the IP transport network, used to encapsulate Layer 2 frames into UDP packets and send them over the transport network.
Figure 8 illustrates the concept of a VTEP.
Figure 8. VTEP


In a Cisco ACI environment, VXLAN is used to encapsulate traffic inside the fabric: in other words, each leaf switch
acts as a hardware VTEP, as shown in Figure 9.
Figure 9. VTEPs in a Cisco ACI Fabric

In addition to its scalability, VXLAN allows the separation of location from identity. In a traditional IP-based environment, the IP address is used to provide information about an endpoint's identity, as well as information about where that endpoint resides in the network. An overlay technology such as VXLAN separates these functions and creates two namespaces: one for the identity, and another to signify where that endpoint resides.
In the case of Cisco ACI, the endpoint's IP address is the identifier, and a VTEP address designates the location of an endpoint in the network.
VXLAN Headers Used in the Cisco ACI Fabric
In the Cisco ACI fabric, some extensions have been added to the VXLAN header to allow the segmentation of
EPGs and the management of filtering rules, as well as to support the enhanced load balancing techniques used in
the fabric.
The enhanced VXLAN header used in the Cisco ACI fabric is shown in Figure 10.
Figure 10. Cisco ACI Enhanced VXLAN Header


Notice that Cisco ACI uses the reserved fields of the regular VXLAN header for other purposes. The Source Group
field is used to represent the EPG to which the endpoint that is the source of the packet belongs. This information
allows the filtering policy to be consistently applied regardless of the location of an endpoint.
Inside Versus Outside Routing
When a leaf switch receives a frame from the host, it needs to determine whether the destination IP address is
inside the fabric or outside of the fabric.
The forwarding space used to forward a packet is determined by the IP network in which it is located and where it
is going:

● Inside networks are those associated with tenants and their bridge domains.

● Outside networks are those associated with the outside routes for each of those tenants.

If the destination IP address matches a /32 host route entry in the global station table, then the destination is an endpoint inside the fabric, and the leaf switch has already learned it. If the destination IP address doesn't match any /32 host route entry, the leaf switch checks whether the destination IP address is within the IP address range of the tenant. If the address is within the range, then the destination IP address is inside the fabric, but if the leaf switch hasn't yet learned the destination IP address, the leaf switch then encapsulates the frame in VXLAN format with the destination address of the spine proxy IP.
The spine proxy checks the inner destination IP address against its proxy database. If the destination of the packet is outside the fabric, the lookup instead matches one of the routes in the external routing table, and the packet is then sent to the VTEP address of the border leaf switch (Figure 11).
Figure 11. Forwarding to Known Endpoints Inside the Fabric or to the Outside
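The lookup order described above can be sketched as a simple decision function. This is a conceptual model with illustrative names; real leaf switches implement these lookups in hardware:

```python
import ipaddress

def forward_decision(dst_ip, host_routes, tenant_subnets):
    """Decide where an ingress leaf sends a packet, per the lookup order above.

    host_routes:    set of /32 endpoint IPs already learned on this leaf
    tenant_subnets: bridge-domain subnets configured for the tenant
    """
    ip = ipaddress.ip_address(dst_ip)
    if str(ip) in host_routes:
        return "forward-to-known-leaf"       # exact /32 match: endpoint already learned
    if any(ip in ipaddress.ip_network(s) for s in tenant_subnets):
        return "send-to-spine-proxy"         # inside the fabric but not yet learned
    return "route-to-border-leaf"            # destination is outside the fabric
```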


Forwarding Tables: Global Station Table and Local Station Table


Figure 12 illustrates the forwarding tables that are present in each Cisco ACI leaf.
Figure 12. Forwarding Tables in a Cisco ACI Leaf Switch

The local station table (LST) learns the hosts directly connected to the leaf switch. The global station table (GST) keeps the information about hosts in the fabric. If a Cisco ACI bridge domain has routing enabled, the LST learns both IP addresses and MAC addresses. If the Cisco ACI bridge domain is not configured for routing, the LST learns only the MAC addresses.
Traffic flows use the tables as follows:

● Traffic arriving from the fabric and directed to a node attached to a leaf switch goes first through the GST and then through the LST. The source address is checked against the GST, and the destination address is checked against the LST.

● Traffic sourced by a locally attached endpoint and directed to the fabric goes first through the LST and then through the GST. The source address is checked against the LST, and the destination address is checked against the GST.

● Traffic that is locally switched goes first through the LST, then to the GST, and then back to the LST and to the destination endpoint. The GST also contains the locally connected endpoints because the destination IP address is looked up in the GST.

External EPGs
The external endpoints are assigned to an external EPG (which the GUI sometimes calls external networks). For
the L3Out connections, the external endpoints are mapped to an external EPG based on IP prefixes.
For each L3Out connection, the user has the option to create one or multiple external EPGs based on whether
different policy treatments are needed for different groups of external endpoints.
Under the Layer 3 external EPG configurations, the user can map external endpoints to this EPG by adding IP
prefixes and network masks. The network prefix and mask don't need to be the same as the ones in the routing
table. When only one external EPG is required, simply use 0.0.0.0/0 to assign all external endpoints to this external
EPG.


Note: When mapping IP prefixes to an external EPG, be sure the IP address ranges specified by the IP prefixes for the external EPG don't include the tenant IP address space. Otherwise, the fabric endpoints that are attached to the border leaf may be assigned to the external EPG. For example, if the tenant is assigned subnet 100.1.1.0/24 and the user has the IP prefix 100.1.0.0/16 in the external EPG configuration, and an endpoint with IP address 100.1.1.10 is attached to the border leaf, this endpoint may be classified as belonging to the external EPG. As a result, the wrong policy will be applied for the traffic related to this endpoint. The user can use 0.0.0.0/0 to define the external EPG. In such a case, the border leaf will derive the external EPG based on the incoming interface. As a result, no policy problem will occur even though 0.0.0.0/0 overlaps with the tenant IP address space.
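The prefix-to-EPG classification amounts to a longest-prefix match, which can be sketched with Python's ipaddress module. The function and EPG names are illustrative; the prefixes follow the Figure 13 example, with a hypothetical 0.0.0.0/0 catch-all added as the note recommends:

```python
import ipaddress

def classify_external(src_ip: str, ext_epgs: dict) -> str:
    """Map an external source IP to an external EPG by longest prefix match."""
    ip = ipaddress.ip_address(src_ip)
    best_epg, best_len = None, -1
    for epg, prefixes in ext_epgs.items():
        for prefix in prefixes:
            net = ipaddress.ip_network(prefix)
            if ip in net and net.prefixlen > best_len:
                best_epg, best_len = epg, net.prefixlen
    return best_epg

# Prefixes from the Figure 13 example, plus an illustrative catch-all EPG.
ext_epgs = {
    "ExtEPG1": ["100.1.1.0/24", "100.1.2.0/24"],
    "ExtEPG2": ["200.1.1.0/24", "200.1.2.0/24"],
    "Default": ["0.0.0.0/0"],
}
```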
After the external EPG has been created, the proper contract can be applied between the external EPG and other
EPGs.
The example in Figure 13 shows how data forwarding and policy enforcement work for the traffic flow between the
internal EPG and external EPG. In this example there are two external EPGs: ExtEPG1 and ExtEPG2. Hosts with
the IP addresses of 100.1.1.0/24 and 100.1.2.0/24 are mapped to ExtEPG1. Those with the IP addresses of
200.1.1.0/24 and 200.1.2.0/24 are mapped to ExtEPG2.
Figure 13. Traffic Filtering for Traffic Flowing from an EPG to the Outside

The following actions are taken for the traffic from the internal EPG to the external EPG:

● When the ingress leaf switch receives the frame, it learns the source MAC address and the source IP address and programs them into the LST. The leaf switch derives the source EPG based on the VLAN ID or VXLAN VNID. The MAC and IP addresses in the LST also contain the EPG information, and they can be used to derive EPG information for the subsequent packets.

● The ingress leaf switch checks the destination IP address against the external longest-prefix-match (LPM) table. The external LPM table stores the external summary routes learned from the border leaf through Multiprotocol Border Gateway Protocol (MP-BGP; see the section "External Layer 3 Connectivity" later in this document). The matched entry provides the border leaf TEP IP address. The ingress leaf encapsulates the original frame in the VXLAN frame with the border leaf TEP IP address as the destination IP address of the outer header. It also includes the source EPG information in the VXLAN header.


● The VXLAN frame is forwarded by the spine node to the border leaf switch. On the border leaf, the destination IP address of the original frame is checked against the external EPG mapping table. This table provides the IP prefix and mask to the external EPG mapping information. The lookup result provides the destination EPG of this frame.

● With both the source (carried in the VXLAN header) and destination EPG identified, the border leaf then applies the proper policy between the two EPGs.

Figure 14 illustrates the traffic flow for traffic entering the fabric from the outside.
Figure 14. Traffic Filtering for External Traffic Entering the Cisco ACI Fabric

● The border leaf receives a frame destined for one of the internal endpoints. The border leaf checks the source IP address against the external EPG mapping table. The lookup result provides the source EPG of the frame.

● The border leaf performs the lookup in the GST with the destination IP address. The GST provides cache entries for the remote endpoints (the endpoints attached to other leaf switches). If the lookup finds a match, the entry in the table provides the egress leaf TEP IP address as well as the destination EPG information. If the lookup doesn't find a match for any entry in the table, the border leaf sends the frame to the spine by using the spine anycast VTEP IP proxy address as the destination IP address for the outer header. The spine switch performs the lookup with the inner IP address and forwards the frame to the proper egress leaf switch. In this process, the source EPG identified in the first step is carried in the VXLAN header.

● If the IP lookup finds a match in the GST in step 2 in Figure 14, the border leaf has both the source EPG and destination EPG, and it applies the proper contract configured for these two EPGs. The border leaf then encapsulates the frame in VXLAN format with the remote leaf TEP IP address as the destination address for the outer header. Note that this behavior is configurable in the release used in this design guide, so it is possible to move the policy application to the destination leaf (also referred to as the computing leaf) instead of the border leaf. This process is described in the section "Policy CAM Scalability Improvements Starting from Cisco ACI Release 1.2" later in this document.


● If the IP lookup in step 2 in Figure 14 doesn't find a match, the border leaf sends the frame to the spine switch, and the spine switch then finds the proper egress leaf by checking its hardware mapping database. Because no destination EPG information is available, the border leaf can't apply the policy. When the frame is received on the egress leaf, the egress leaf checks the destination IP address in the inner header against its LST and identifies the egress interface as well as the destination EPG. With both the source EPG (carried in the VXLAN header) and the destination EPG available, the egress leaf then applies the policy configured for these two EPGs. With the return traffic from the inside EPG to the outside EPG, the border leaf learns the endpoints in the GST, and all the subsequent frames traveling to this endpoint have the policy enforced at the border leaf. This behavior changes in Cisco ACI Release 1.2 to optimize the consumption of the policy content-addressable memory (CAM), and it is described in the section "Policy CAM Scalability Improvements Starting from Cisco ACI Release 1.2" later in this document.

Virtual Machine Migration


Cisco ACI handles virtual machine migration from one leaf (VTEP) to a different leaf (VTEP) as follows (Figure 15):

● When the virtual machine migrates, it sends out a GARP message.

● The arriving leaf forwards that GARP message to the leaf on which the virtual machine was originally located.

● The destination leaf originates a COOP update to update the mapping database.

● The original leaf marks the IP address of the virtual machine as a bounce entry.

● All traffic received by the original leaf is sent to the destination leaf.

● Meanwhile, all the leaf switches in the fabric update their forwarding tables.

Figure 15. Virtual Machine Migration in the Cisco ACI Fabric
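The bounce-entry mechanism can be modeled in a few lines of Python. All names here are illustrative; the real mechanism operates on TEP addresses in hardware:

```python
# Identity-to-locator mapping maintained by the fabric (illustrative model).
mapping_db = {"vm1": "leaf1"}

def migrate(vm: str, old_leaf: str, new_leaf: str, bounce: dict) -> None:
    """On the GARP from the moved VM: update the mapping database and
    leave a bounce entry on the original leaf pointing at the new location."""
    mapping_db[vm] = new_leaf
    bounce[(old_leaf, vm)] = new_leaf

def deliver(vm: str, receiving_leaf: str, bounce: dict) -> str:
    """Return the leaf that finally delivers traffic addressed to the VM.
    Stale traffic hitting the original leaf is redirected by the bounce entry."""
    return bounce.get((receiving_leaf, vm), mapping_db[vm])
```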

Controller Design Considerations


The Cisco ACI solution is built on the following main components:

● Cisco Application Policy Infrastructure Controller: The APIC is a clustered network control and policy system that provides image management, bootstrapping, and policy configuration.

● Physical switching fabric built on a leaf-and-spine topology: Every leaf switch is connected to every spine switch (technically referred to as a bipartite graph) using 40 Gigabit Ethernet connections.


Cisco APIC Functions


The APIC provides the following control functions:

● Policy manager: Manages the distributed policy repository responsible for the definition and deployment of the policy-based configuration of Cisco ACI

● Topology manager: Maintains up-to-date Cisco ACI topology and inventory information

● Observer: The monitoring subsystem of the APIC; serves as a data repository for Cisco ACI operational state, health, and performance information

● Boot director: Controls the booting and firmware updates of the spine and leaf switches as well as the APIC elements

● Appliance director: Manages the formation and control of the APIC appliance cluster

● Virtual machine manager (or VMM): Acts as an agent between the policy repository and a hypervisor and is responsible for interacting with hypervisor management systems such as VMware vCenter

● Event manager: Manages the repository for all the events and faults initiated from the APIC and the fabric nodes

● Appliance element: Manages the inventory and state of the local APIC appliance

Getting Started
Cisco ACI is designed to reduce the amount of time needed to bring up the fabric. The procedure for bringing up
the fabric is as follows:
1. Dual-connect the 10 Gigabit Ethernet adapters to Cisco ACI leaf switches.

2. Power on the APIC.

3. Provide the information requested in the initial setup dialog box: enter the fabric name, controller name, TEP IP address pool, infrastructure VLAN, management IP address, and default gateway, and select the option to enforce password strength.

4. Make sure that the infrastructure VLAN doesn't overlap existing VLANs that may be in the existing network infrastructure to which you are connecting. Make sure that the TEP IP address pool doesn't overlap existing IP address pools that may be in use by the servers (in particular, by virtualized servers).

5. Repeat the preceding steps for each controller (most deployments will have three controllers).

6. Open a browser to https://<ip_you_assigned>.


Perform the fabric discovery by giving a name to each leaf and spine switch.
The three APICs automatically join a cluster.
Data is replicated and split into shards for better efficiency in lookups.


Figure 16 shows a typical example of the connection of the APIC to the Cisco ACI fabric.
Figure 16. Cisco APIC Connection to the Cisco ACI Fabric

Infrastructure VLAN
Cisco Nexus 9000 Series Switches running in standalone mode reserve a range of VLAN IDs: 3968 to 4095. The
default infrastructure VLAN in Cisco ACI is VLAN 4093. This VLAN is used for internal connectivity between the
APIC and spine nodes and leaf switches, and it needs to be extended if an external device is fully managed by
Cisco ACI: for instance, if you need to add an AVS to the fabric. Therefore, if you are connecting to standalone
Cisco Nexus 9000, 7000, or 5000 Series Switches you may want to change this VLAN to a number that is not in
the range of reserved VLANs, using, for instance, VLAN 3967. Then should you need to add an AVS later, the
VLAN used by AVS to communicate with the rest of the Cisco ACI fabric can traverse the network of Cisco Nexus
9000, 7000, or 5000 Series Switches without conflicting with the VLANs used by the transit network. AVS uses the
infrastructure VLAN for DHCP purposes and for the OpFlex protocol.
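A quick sanity check for the chosen infrastructure VLAN can be expressed as follows. This is a trivial planning helper, not part of any Cisco tooling:

```python
# VLAN IDs reserved by Cisco Nexus 9000 Series Switches in standalone mode.
RESERVED_VLANS = range(3968, 4096)

def infra_vlan_ok(vlan_id: int) -> bool:
    """True if the infrastructure VLAN is a valid VLAN ID and avoids the reserved range."""
    return 1 <= vlan_id <= 4094 and vlan_id not in RESERVED_VLANS
```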

Fabric Discovery
APICs discover the IP addresses of other APICs in the cluster using an LLDP-based discovery process. This
process maintains an appliance vector, which provides a mapping from an APIC ID to an APIC IP address and a
universally unique identifier (UUID) of the APIC. Initially, each APIC has an appliance vector filled with its local IP
address, and all other APIC slots are marked as unknown.
Upon switch reboot, the policy element on the leaf switch gets its appliance vector from the APIC. The switch then advertises this appliance vector to all its neighbors and reports any discrepancies between its local appliance vector and the neighbors' appliance vectors to all the APICs in the local appliance vector.
Using this process, APICs learn about the other APICs connected to the Cisco ACI fabric through leaf switches.
After the APIC validates these newly discovered APICs in the cluster, the APICs update their local appliance vector
and program the switches with the new appliance vector. Switches then start advertising this new appliance vector.
This process continues until all the switches have the identical appliance vector, and all APICs know the IP
addresses of all the other APICs.
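The core of this exchange can be approximated as a vector merge, where unknown slots are filled in from a peer's vector. This is a conceptual sketch only, with illustrative names; the real discovery process also validates peers before accepting them:

```python
def merge_vectors(local: dict, peer: dict) -> dict:
    """Fill unknown (None) APIC slots in the local appliance vector
    with entries learned from a peer's vector."""
    merged = dict(local)
    for apic_id, address in peer.items():
        if merged.get(apic_id) is None:
            merged[apic_id] = address
    return merged
```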

VTEP IP Address Pool


The Cisco ACI fabric is brought up in a cascading manner, starting with the leaf nodes that are directly attached to
the APIC. LLDP and control-plane Intermediate System-to-Intermediate System (IS-IS) Protocol convergence
occurs in parallel to this boot process. The Cisco ACI fabric uses LLDP-based and DHCP-based fabric discovery to
automatically discover the fabric switch nodes, assign the infrastructure VTEP addresses, and install the firmware
on the switches. Prior to this automated process, you must perform a short bootstrap configuration on the APIC.
The TEP address pool is a critical part of the configuration. You need to choose a nonoverlapping address space
for the inside of the fabric. Although TEPs are located inside the fabric, in some situations the internal network is
extended: for instance, to virtualized servers. Therefore, you don't want to overlap the internal TEP network with an
external network in your data center.
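The overlap check can be performed up front with Python's ipaddress module. This is an offline planning aid under assumed inputs, not an APIC feature:

```python
import ipaddress

def tep_pool_conflicts(tep_pool: str, existing_subnets: list) -> list:
    """Return the existing data center subnets that overlap the proposed TEP pool."""
    pool = ipaddress.ip_network(tep_pool)
    return [s for s in existing_subnets
            if pool.overlaps(ipaddress.ip_network(s))]
```

An empty result means the proposed pool is safe with respect to the subnets checked.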

Verifying Cisco APIC Connectivity


APIC software creates bond0 and bond0.<infrastructure VLAN> for in-band connectivity to the Cisco ACI leaf
switches. It also creates bond1 as an out-of-band (OOB) management port.
Assuming that the infrastructure VLAN ID is 4093, the network interfaces are as follows:

● bond0: This is the network interface card (NIC) bonding interface for in-band connection to the leaf switch. No IP address is assigned for this interface.

● bond0.4093: This subinterface connects to the leaf switch. The VLAN ID 4093 is specified during the initial APIC software configuration. This interface obtains a dynamic IP address from the pool of TEP addresses specified in the setup configuration.

● bond1: This is the NIC bonding interface for OOB management. No IP address is assigned. This interface is used to bring up another interface called oobmgmt.

● oobmgmt: This OOB management interface allows users to access the APIC. The IP address is assigned to this interface during the APIC initial configuration process in the dialog box.

You can also see the interfaces in the GUI, as shown in Figure 17.
Figure 17. Cisco APIC Interfaces


From the APIC CLI, you can verify connectivity by using the following commands:

● ip link

● show lldp neighbor

● acidiag fnvread (to check the status of the controllers and of the fabric)

In-Band and Out-of-Band Management of Cisco APIC


When bringing up the APIC, you enter the management IP address for OOB management as well as the default
gateway. If later you add an in-band management network, the APIC will give preference to the in-band
management network connectivity. The APIC is automatically configured to use both the OOB and the in-band
management networks. For more information about in-band and OOB management, refer to the section "In-Band and Out-of-Band Management Configuration and Design" later in this document.

Cluster Sizing and Redundancy


To support greater scale and resilience, a concept known as data sharding is supported both for data stored in the
APIC and for data stored in the endpoint mapping database located at the spine layer. The basic idea behind
sharding is that the data repository is split into several database units, known as shards. Data is placed in a shard,
and that shard is then replicated three times, with each replica assigned to an APIC appliance, as shown in
Figure 18.
Figure 18. Cisco APIC Data Sharding

For each replica, a shard leader is elected, with write operations occurring only on the elected leader. Therefore,
requests arriving at an APIC can be redirected to the APIC that carries the shard leader. After recovery from a
split-brain condition (in which APICs are no longer connected to each other), automatic reconciliation is
performed based on timestamps.
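The replication and reconciliation behavior can be sketched in a few lines of Python. This is a conceptual model with hypothetical names, not APIC code: each shard keeps three replicas, writes are propagated from the elected leader to all copies, and after a split-brain condition heals, the copy with the newest timestamp wins.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    apic_id: int          # APIC appliance holding this copy of the shard
    value: object = None  # shard contents, simplified to a single value
    timestamp: float = 0.0

@dataclass
class Shard:
    replicas: list        # the three Replica objects
    leader: int = 0       # index of the elected shard leader

    def write(self, value, timestamp):
        # Writes occur only on the elected leader and are then replicated
        # to the other copies.
        for r in self.replicas:
            r.value, r.timestamp = value, timestamp

    def reconcile(self):
        # After a split-brain condition heals, automatic reconciliation
        # is timestamp-based: the newest copy wins.
        newest = max(self.replicas, key=lambda r: r.timestamp)
        for r in self.replicas:
            r.value, r.timestamp = newest.value, newest.timestamp

shard = Shard([Replica(1), Replica(2), Replica(3)])
shard.write("config-v1", timestamp=100.0)
# Simulate divergence during a partition: the replica on APIC 3
# received a later write while isolated.
shard.replicas[2].value, shard.replicas[2].timestamp = "config-v2", 200.0
shard.reconcile()
```

After `reconcile()`, all three replicas converge on the newest value.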


The APIC can expand and shrink a cluster by defining a target cluster size.
The target size and operational size may not always match. They will not match when:

Someone expands the target cluster size on purpose

Someone shrinks the target cluster size on purpose

A controller node has failed

When an APIC cluster is expanded, some shard replicas shut down on the old APICs and start on the new one to
help ensure that replicas continue to be evenly distributed across all APICs in the cluster.
When you add a node to the cluster, you must enter the new cluster size on an existing node.
If you need to remove an APIC node from the cluster, you must remove the appliance at the end. For example, you
must remove node number 4 from a 4-node cluster; you cannot remove node number 2 from a 4-node cluster.
If a shard replica residing on an APIC loses connectivity to other replicas in the cluster, that shard replica is said to
be in the minority state. A replica in the minority state cannot be written to (for example, no configuration changes
can be made). A replica in the minority state can, however, continue to serve read requests. If a cluster had only
two APIC nodes, a single failure would lead to a minority situation. However, because the minimum number of nodes
in an APIC cluster is three, the risk that this situation will occur is extremely low.
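The minority rule is essentially a majority-quorum check. A minimal sketch (the helper name is hypothetical, not an APIC API):

```python
def shard_writable(reachable_replicas: int, total_replicas: int = 3) -> bool:
    """A replica accepts writes only while it can see a majority of copies."""
    return reachable_replicas > total_replicas // 2

# All three APICs connected: reads and writes are allowed.
assert shard_writable(3)
# An APIC isolated from the other two holds a minority replica:
# read-only, no configuration changes.
assert not shard_writable(1)
```

This is also why a two-node cluster is fragile: a single failure leaves one replica, which cannot form a majority of two.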

Preparing the Fabric Infrastructure


The Cisco ACI fabric is an IP-based fabric that implements an integrated overlay, allowing any subnet to be placed
anywhere in the fabric and support a fabricwide mobility domain for virtualized workloads. Spanning Tree Protocol
is not required in the Cisco ACI fabric and leaf switches.
The Cisco ACI fabric has been designed with a cloud provisioning model. This model defines two main
administrator roles:

Infrastructure administrator: This administrator has a global view of the system, like a superuser. This
administrator configures the resources shared by multiple tenants and also creates the tenants.

Tenant administrator: This administrator configures the resources dedicated to a particular tenant.

You do not need to create two separate roles to administer Cisco ACI. The same person can perform both roles.
This section describes the network connectivity preparation steps normally performed by the infrastructure
administrator prior to handing over tenant administration to individual tenant administrators.

Configuring MP-BGP
Cisco ACI uses MP-BGP to distribute external routing information across the leaf switches in the fabric. Therefore,
the infrastructure administrator needs to define the spine switches that are used as route reflectors and the
autonomous system number (ASN) that is used in the fabric (Figure 19).


Figure 19. BGP Peering Relationship in Cisco ACI

The Cisco ACI fabric supports one ASN. The same ASN is used for internal MP-BGP and for the internal BGP
(iBGP) session between the border leaf switches and external routers. Because the same ASN is used in both
cases, when using iBGP you need to find the ASN configured on the router to which the Cisco ACI border leaf
connects and use it as the BGP ASN for the Cisco ACI fabric.
You can make the spine switches the route reflectors by configuring them as such under the System > System
Settings configuration, as shown in Figure 20.
Figure 20. Setting the Spine Switches as Route Reflectors


Spanning-Tree Considerations
The Cisco ACI fabric does not run Spanning Tree Protocol natively, but it does forward spanning-tree Bridge
Protocol Data Units (BPDUs) between ports within an EPG.
Flooding Within the EPG
Cisco ACI floods BPDU frames within an EPG by using the VNID assigned to the EPG when it encapsulates the
BPDUs in VXLAN format. The flooding scope for BPDUs is different from the flooding scope for data traffic:
unknown unicast traffic and broadcast traffic are flooded within the bridge domain, whereas spanning-tree BPDUs
are flooded within the EPG.
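The two flooding scopes can be sketched as a small lookup (a conceptual model; the VNID values are made up):

```python
def flood_scope(frame_type: str, epg_vnid: int, bd_vnid: int) -> int:
    """Return the VNID used to scope flooding for a frame (simplified model)."""
    if frame_type == "bpdu":
        return epg_vnid   # spanning-tree BPDUs flood only within the EPG
    if frame_type in ("unknown-unicast", "broadcast"):
        return bd_vnid    # data-plane flooding uses the bridge domain
    raise ValueError(frame_type)

assert flood_scope("bpdu", epg_vnid=11001, bd_vnid=15001) == 11001
assert flood_scope("broadcast", epg_vnid=11001, bd_vnid=15001) == 15001
```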
Figure 21 shows an example in which external switches connect to the fabric.
Figure 21. Fabric BPDU Flooding Behavior

In the example in Figure 21, two switches running Spanning Tree Protocol are connected to the fabric, with a third
switch connected to the other two. Switches 1 and 2 are connected to ports on two different Cisco ACI leaf nodes,
but those ports reside in the same EPG. The Cisco ACI fabric floods BPDU frames within the same EPG;
therefore, Switches 1 and 2 act as if they are connected directly to each other. As a result, the segment between
Switch 2 and the Cisco ACI fabric is blocked.
Configuring External Switches Connected to the Same Bridge Domain but Different EPGs
If two external switches are connected to two different EPGs within the fabric, you must ensure that those external
switches are not directly connected outside the fabric. It is strongly recommended in this case that you enable
BPDU guard on the access ports of the external switches to help ensure that any accidental direct physical
connections are blocked immediately.


Consider Figure 22 as an example.
Figure 22. Switches Connected to Different EPGs in the Same Bridge Domain

In this example, VLANs 10 and 20 from the outside network are stitched together by the Cisco ACI fabric. The
Cisco ACI fabric provides Layer 2 bridging for traffic between these two VLANs. These VLANs are in the same
flooding domain. From the perspective of Spanning Tree Protocol, the Cisco ACI fabric floods the BPDUs within
the EPG. When the Cisco ACI leaf receives the BPDUs on EPG 1, it floods them to all leaf ports in EPG 1, and it
does not send the BPDU frames to ports in other EPGs. As a result, this flooding behavior can break a potential
loop within the EPG, but not a loop that crosses the two EPGs (VLAN 10 and VLAN 20). You must help ensure
that VLANs 10 and 20 do not have any physical connections other than the one provided by the Cisco ACI fabric.
Be sure to turn on the BPDU guard feature on the access ports of the outside switches. This way, if someone
mistakenly connects the outside switches to each other, BPDU guard can disable the port and break the loop.
Working with Multiple Spanning Tree
Additional configuration is required to help ensure that Multiple Spanning Tree (MST) BPDUs flood properly. BPDU
frames for Per-VLAN Spanning Tree (PVST) and Rapid Per-VLAN Spanning Tree (RPVST) have a VLAN tag. The
Cisco ACI leaf can identify the EPG on which the BPDUs need to be flooded based on the VLAN tag in the frame.
However, for MST (IEEE 802.1s), BPDU frames don't carry a VLAN tag, and they are sent over the native VLAN.
Typically, the native VLAN is not used to carry data traffic, and the native VLAN may not be configured for data
traffic on the Cisco ACI fabric. As a result, to help ensure that MST BPDUs are flooded to the desired ports, the
user needs to create an EPG for VLAN 1 as the native VLAN in order to carry the BPDUs. In addition, the
administrator has to configure the mapping of MST instances to VLANs in order to define which MAC address
table entries must be flushed when a topology change notification occurs. This configuration is outside the scope
of this document.

Configuring VLAN and VXLAN Domains


In spanning-tree networks, the user must specify which VLANs belong to which ports by using the switchport
trunk allowed vlan command. In Cisco ACI, you specify a domain (physical or virtual), and you associate this
domain with a range of ports. Unlike with traditional NX-OS operations, in Cisco ACI the VLANs used for port
groups on virtualized servers are dynamically negotiated between the APIC and the VMM, and the VLAN pools
can be static or dynamic.


You can configure VLAN pools as shown in Figure 23.
Figure 23. Defining a VLAN Domain

As part of the initial setup, you should create pools of VLANs for:

Virtualized servers

Physical servers

L3Out connections

These pools are then associated with ports and EPGs later in the deployment. Screen views also provide useful
information about which tenant and EPG are using a particular pool of VLANs and on which ports these VLANs are
configured (Figure 24 and Figure 25).
Note: As of this writing, Cisco ACI doesn't let you configure EPG static binding to a port and VXLAN. This
binding is possible only through orchestrated VXLAN negotiation, as in the case of VMware vShield and Cisco AVS.
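When laying out these pools, it helps to verify that the ranges do not overlap. A small sketch (the pool ranges are hypothetical; choose ranges that fit your environment):

```python
def vlan_range(start: int, end: int) -> set:
    return set(range(start, end + 1))

# Hypothetical pool layout for the three categories described above.
pools = {
    "virtualized-servers": vlan_range(1000, 1499),  # dynamic pool for VMM domains
    "physical-servers":    vlan_range(100, 199),    # static EPG bindings
    "l3out":               vlan_range(3000, 3009),  # routed subinterfaces/SVIs
}

def overlapping(pools: dict) -> list:
    """Return every pair of pools that share at least one VLAN ID."""
    names = list(pools)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if pools[a] & pools[b]]

assert overlapping(pools) == []  # the three pools must not collide
```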
Figure 24. Association of VLAN Pools with EPGs


Figure 25. Association of VLAN Pools with Ports and Nodes

Configuring the Infrastructure VLAN


In Cisco ACI, the traffic forwarding in the overlay is modeled as a tenant called the infrastructure tenant. This
tenant is not supposed to be configured by the user directly, but when troubleshooting you can see the VRF
instance of this tenant as the overlay-1 part of the infrastructure tenant. This tenant extends outside Cisco ACI
through a particular VLAN called the infrastructure VLAN. This VLAN is chosen at the time of APIC initialization,
and it is VLAN 4093 by default. If you want to extend Cisco ACI to an existing network with AVS (if AVS is
deployed behind a Cisco Nexus 7000 or 5000 Series Switch), you should choose a different VLAN for the
infrastructure VLAN to avoid overlapping the reserved VLAN range used by the Cisco Nexus 7000 and 5000 Series
Switches.
Adding a VLAN to a Port
Figure 26 illustrates how you can add a VLAN to a port in Cisco ACI. You simply select the port and then choose
the domain association. You can then directly associate the port with an EPG of a particular tenant, as explained
later.
Figure 26. Adding VLANs to a Port


If you associate the port directly with a tenant EPG, you can also select whether you want this port to be a trunk or
an access port (Figure 27).
Figure 27. Adding a Port to a Tenant EPG

Configuring Trunk and Access Ports


In Cisco ACI, you can configure ports that are used by EPGs in one of these ways:

Tagged (classic IEEE 802.1q trunk): Traffic for the EPG is sourced by the leaf with the specified VLAN tag.
The leaf also expects to receive traffic tagged with that VLAN to be able to associate it with the EPG. Traffic
received untagged is discarded.

Untagged: Traffic for the EPG is sourced by the leaf as untagged. Traffic received by the leaf as untagged
or with the tag specified during the static binding configuration is associated with the EPG.

IEEE 802.1p: If only one EPG is bound to that interface, the behavior is identical to the untagged case. If
other EPGs are associated with the same interface, then traffic for the EPG is sourced with an IEEE 802.1q
tag using VLAN 0 (IEEE 802.1p tag).

You cannot have different interfaces on the same leaf bound to a given EPG in both the tagged and untagged
modes at the same time. Therefore, you should always select the IEEE 802.1p option to connect an EPG to a
bare-metal host.
You should use the untagged option only for hosts that cannot handle traffic tagged with VLAN 0.
In summary, when associating a port with a tenant and EPG, you can select whether the port is configured as a
trunk or an access port. An access port has two submodes:

Access (IEEE 802.1p): If an EPG has a mix of trunk ports (tagged) and access ports (IEEE 802.1p), when a
packet leaves the switch through the access port (IEEE 802.1p), it is tagged with VLAN 0 in the IEEE 802.1q
header. This option should work with most servers. If it doesn't, you should select the access (untagged)
option. For Preboot Execution Environment (PXE) booting services to work on certain OSs, the access
(untagged) option is preferable.

Access (untagged): An EPG cannot have a mix of trunk ports and untagged ports, so you can use this
option only if the EPG connects to untagged ports only.


The preferred option for configuring ports as access ports is the access (IEEE 802.1p) option.
Note: When an EPG is deployed with the access (untagged) option, you cannot deploy that EPG as a trunk port
(tagged) on other ports of the same switch. You can have one EPG with both tagged and access (IEEE 802.1p)
interfaces. The tagged interface allows trunked devices to attach to the EPG, and the access (IEEE 802.1p)
interface allows devices that do not support IEEE 802.1q to be attached to the fabric.
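The binding rules above can be sketched as a simple classification function (a conceptual model of the leaf behavior, not actual switch code):

```python
def classify(mode: str, bound_vlan: int, frame_vlan) -> str:
    """Map an inbound frame to the EPG bound on a port. frame_vlan=None
    means the frame arrived untagged. Simplified model of the rules above."""
    if mode == "tagged":
        # Classic IEEE 802.1q trunk: only the bound VLAN matches;
        # untagged traffic is discarded.
        return "epg" if frame_vlan == bound_vlan else "drop"
    if mode in ("untagged", "802.1p"):
        # Untagged frames and frames carrying the statically bound tag match.
        return "epg" if frame_vlan in (None, bound_vlan) else "drop"
    raise ValueError(mode)

assert classify("tagged", 10, None) == "drop"   # untagged traffic is discarded
assert classify("untagged", 10, None) == "epg"
assert classify("802.1p", 10, 10) == "epg"
```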

Creating Port Channels


With Cisco ACI, you can create port channels by selecting multiple interfaces of the switch from the fabric inventory
configuration view.
Creating Local Port Channels
To create a port channel, you just select the ports, as shown in Figure 28, and select Configure PC.
Figure 28. Selecting Multiple Ports to Create a Port Channel

You then give the configuration a policy-group name, as shown in Figure 29.
Figure 29. Configuring a Port Channel

You can then configure the port channel just as in the previous example of the configuration of a single port. In this
case, you would select the VLAN domain and assign ports to EPGs as needed.


Creating Virtual Port Channels


A virtual port channel (vPC) is a technology that allows the configuration of a single port channel connected to two
different physical devices, rather than to a single device. The Cisco ACI implementation of vPC differs slightly from
the implementation on the Cisco Nexus 7000, 6000, and 5000 Series Switches.
One difference is that in Cisco ACI, no vPC peer link is required. In a traditional vPC environment, a physical link
must be deployed between the vPC pair and designated as the peer link through which synchronization must
occur. In the Cisco ACI implementation of vPC, all peer communication occurs through the fabric itself.
To help ensure that traffic can reach a vPC-connected endpoint regardless of the leaf switch to which it is sent, a
special anycast VTEP is used on both leaf switches participating in the vPC. Traffic from other locations in the
fabric is directed to this anycast VTEP.
The spine node performs a symmetrical hash function for unicast traffic to determine the leaf nodes to which to
send the traffic. After the traffic reaches the leaf node, it is forwarded if the vPC is available.
When traffic is sent to another leaf in the fabric, the anycast VTEP is used as the source in the overlay header.
For multicast traffic, the spine sends traffic to both leaf switches (to both anycast VTEPs), and the hash function
determines which of the leaf switches should forward to the endpoint.
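The key property is that the hash is symmetric, so both directions of a flow resolve to a consistent choice across the vPC pair. The sketch below only illustrates that property; the actual hash inputs and algorithm used by the fabric hardware are not documented here.

```python
import hashlib

def symmetric_hash(src_ip: str, dst_ip: str, n: int) -> int:
    """Order-independent flow hash: both traffic directions of the same
    flow pick the same index."""
    key = "|".join(sorted([src_ip, dst_ip]))
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % n

# Both leaf switches of the pair advertise the same anycast VTEP,
# so either can deliver traffic to the vPC-attached endpoint.
vpc_pair = ["leaf-101", "leaf-102"]
chosen = vpc_pair[symmetric_hash("10.0.0.1", "10.0.1.1", len(vpc_pair))]
assert chosen in vpc_pair
# Symmetry: reversing source and destination yields the same choice.
assert symmetric_hash("10.0.0.1", "10.0.1.1", 2) == symmetric_hash("10.0.1.1", "10.0.0.1", 2)
```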
You can configure vPC by selecting two leaf switches in the Fabric view and selecting the Configure tab, as Figure
30 illustrates.
If you select an interface on the first leaf and another interface on the second leaf, you can see that the GUI
displays the option to configure a vPC.
Figure 30. Creating vPCs

The rest of the configuration is identical to what was already described for port channels.

Assigning VMM Domains to Ports


Cisco APIC closely integrates with the server virtualization layer. When you create an EPG in Cisco ACI, the
equivalent construct is automatically created at the virtualization layer (port group) and mapped to Cisco ACI
policy.


Integration with the server virtualization layer is defined through the creation of a configuration known as a VMM
domain. The VMM domain configuration provides networking for virtualized hosts. It provides uplink port groups to
attach to the physical NICs and port groups for the vNIC of the virtual machines running on the virtualized hosts.
Figure 31 shows the virtualized hosts in this example before their vNICs (vmnics) are associated with port groups
created by Cisco ACI.
Figure 31. VMware ESX Host Configuration Prior to Creation of the VMM Domain

Each VMM domain consists of two elements:

A virtual distributed switch (vDS), such as the VMware vSphere Distributed Switch

A pool of VLANs (or VXLANs) that can be used by port groups (a name space)

To create a VMM domain, you first have to create a VLAN (or VXLAN) pool, as previously described.
Then you can create the VMM domain, as shown in Figure 32. The configuration includes the reference to a
dynamic pool of VLANs, the name that you want to assign to the virtual switch that Cisco ACI provisions in
vCenter, the name of the vCenter data center where you want the vDS to be placed, and the credentials of the
vCenters that you want Cisco ACI to use.
Figure 32. Creating the VMM Domain Using VMware vCenter

The result of this configuration is the creation of a vDS in vCenter, as in Figure 33.


Figure 33. vDS Created by Cisco ACI in VMware vCenter

You then add the host uplinks to this vDS and assign vmnics to the Cisco ACI vDS uplink port group, as in
Figure 34.
Figure 34. VMware ESX Host vmnics After Creation of the Cisco ACI vDS

You then complete the configuration by assigning the VMM domain to the Cisco ACI ports that connect to the ESX
hosts ports, as in Figure 35.
Figure 35. Assigning the VMM Domain to Cisco ACI Ports


After the integration with vCenter is complete, the fabric or tenant administrator creates EPGs, contracts, and
application profiles. When the EPG has been associated with the VMM domain, the corresponding port group is
created at the virtualization level. The server administrator then connects virtual machines to these port groups.

Creating Tenants
The infrastructure administrator creates tenants and allocates resources to tenants such as VRF instances and
L3Out connections. The tenant concept is mostly an administrative construct that provides data-plane separation
for the traffic. Traffic on the data plane is tagged with a VXLAN VNID that corresponds with the VRF instance or the
bridge domain, depending on whether the traffic is routed or bridged.
In Cisco ACI, the workflow is as follows:

The infrastructure administrator can create tenants, VRF instances, bridge domains, and an L3Out
connection and assign these to tenants.

The tenant administrator can create EPGs, VRF instances, and bridge domains and map EPGs to VMM
domains or to specific ports and connect them to each other or to an L3Out connection.

With these two roles, the infrastructure administrator should not only create a tenant, but also the VRF instances
that this tenant can use. The administrator should also complete the configuration by defining the L3Out
connections for each tenant.
The GUI lets you create the tenant and the VRF instance in the same configuration step.

Creating L3Out Connections


The L3Out connection is the building block that defines the way that each tenant can connect outside the Cisco
ACI fabric. This section presents the basic L3Out configuration steps that the infrastructure administrator must
perform to complete the configuration before handing it over to the tenant administrator. The section External
Layer 3 Connectivity later in this document discusses in more detail the design considerations for defining an
L3Out connection.
The infrastructure administrator performs the following steps (Figures 36 through 39):
1. Select one or more leaf switches and assign a VRF instance to them.
2. Configure the routing protocol on the leaf (BGP, Open Shortest Path First [OSPF], or Enhanced Interior Gateway Routing Protocol [EIGRP]).
3. Select the ports on the leaf switches that are used to connect to the external Layer 3 devices.
4. Change the port to Layer 3 mode to configure a Layer 3 port, subinterface, or SVI.
5. Assign the port to a tenant and VRF instance.
6. Specify an IP address for the port.
7. Set up the route map to control which subnets can be announced to the outside and can match the bridge domain of the tenant (if this setting has been configured).
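The route-map behavior in step 7 can be sketched as a simple prefix match (illustrative only; real route-map semantics are richer, and the subnet values here are hypothetical):

```python
import ipaddress

def advertise(prefix: str, exported_bd_subnets: list) -> bool:
    """Announce a prefix externally only when it matches one of the
    bridge-domain subnets marked for export (much-simplified route-map)."""
    nets = [ipaddress.ip_network(s) for s in exported_bd_subnets]
    return ipaddress.ip_network(prefix) in nets

assert advertise("10.10.10.0/24", ["10.10.10.0/24", "10.10.20.0/24"])
assert not advertise("192.168.1.0/24", ["10.10.10.0/24"])
```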


Figure 36. Assigning a VRF Instance to a Leaf Switch
Figure 37. Configuring the VRF Instance
Figure 38. Configuring a Port as a Layer 3 Port


Figure 39. Adding the Protocol-Specific Configuration to the Layer 3 Port

Preparing the Virtual Infrastructure


The VMM provides Cisco ACI connectivity to data center hypervisors. The hypervisors can be Microsoft Hyper-V,
VMware ESX, or OpenStack with Kernel-based Virtual Machine (KVM).
A VMM domain is defined as a VMM and the pool of VLANs and VXLANs that this VMM uses to send traffic to the
leaf switches. The VLAN pool should be dynamic, to allow the VMM and APIC to allocate VLANs as needed for the
port groups being used.
The VMM domain is associated with an AEP, a policy group, and the interfaces to which it is attached, which
together define where virtual machines can move. With the Basic GUI, you don't need to know about the AEP and
the policy group, because all you need to do is create a VMM domain and map it to the ports using the
Configuration tab in the Fabric view.
This section discusses additional configurations and design considerations for deployment of Cisco ACI with a
virtualized environment and, in particular, with VMware vSphere.
Cisco ACI can be integrated with vSphere in several ways:

By manually allocating VLANs to port groups and matching them using static EPG mapping: The traffic
encapsulation is based on VLANs in this case, and virtual machine discovery is based on the transmission
of traffic by the virtual machines. In this case, the virtual machines are equivalent to physical machines.
Therefore, the Cisco ACI fabric configuration is based on the use of a physical domain.

By integrating the APIC and vCenter and using the standard VMware vDS: The traffic encapsulation is
based on VLANs in this case, and virtual machine discovery is based on a combination of communication
between vCenter and the APIC and the use of LLDP or Cisco Discovery Protocol between the ESX host
and the Cisco ACI leaf.


By integrating the APIC and vCenter and using the standard VMware vDS with vShield Manager: The traffic
encapsulation is based on VXLANs in this case, and virtual machine discovery is based on a combination of
communication between vCenter and the APIC and LLDP or Cisco Discovery Protocol between the ESX
host and the Cisco ACI leaf.

By integrating the APIC and vCenter and using Cisco AVS: The traffic encapsulation is based on VLAN or
VXLANs in this case, and virtual machine discovery is based on OpFlex.

Note: OpFlex is an extensible policy resolution protocol that can be used to distribute policies to physical or
virtual network devices. The OpFlex protocol is used between the Cisco ACI fabric and AVS to provide the
configurations to AVS. It is used by AVS to give Cisco ACI information about which endpoints are attached to
particular servers and to AVS.

Integrating Cisco APIC and VMware vCenter


The steps that the APIC uses to integrate with vSphere are summarized here:
1. The administrator creates a VMM domain in the APIC with the IP address and credentials for connecting to vCenter.
2. The APIC and vCenter perform an initial handshake.
3. The APIC automatically connects to vCenter and creates a vDS under vCenter.
4. The vCenter administrator adds the ESX host or hypervisor to the APIC vDS and assigns the ESX host hypervisor ports as uplinks on the APIC vDS. These uplinks must connect to the Cisco ACI leaf switches.
5. The APIC learns to which leaf port the hypervisor host is connected by using LLDP or Cisco Discovery Protocol.
6. The tenant administrator creates and associates application EPGs with the VMM domains.
7. The APIC automatically creates port groups in vCenter under the vDS. The EPG is automatically mapped to port groups. This process provisions the network policy in vCenter.
8. The vCenter administrator creates virtual machines and assigns them to port groups. The APIC learns about the virtual machine placements based on the vCenter events.
9. The policy (for example, contracts and filters) is pushed.

For these configurations to work, the APIC must be able to reach the vCenter IP address. If both in-band and OOB
paths are available to external servers, APIC chooses the in-band path.
The endpoint discovery is the result of both data-plane and control-plane operations.
Control-plane learning is based on the use of vCenter APIs and, for hosts that support it, on the use of OpFlex (for
AVS and Hyper-V). For hosts that do not support OpFlex, LLDP is used to resolve the virtual host ID to the
attached port on the leaf node.
Data-path learning is based on the traffic forwarding at the leaf.
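The choice of control-plane discovery mechanism can be sketched as a simplified decision helper (names are illustrative):

```python
def discovery_method(host_type: str) -> str:
    """Simplified: hosts that speak OpFlex (AVS, Hyper-V) report endpoint
    attachment themselves; other hosts are located through LLDP or Cisco
    Discovery Protocol combined with the vCenter APIs."""
    if host_type in ("avs", "hyper-v"):
        return "OpFlex"
    return "LLDP/CDP + vCenter API"

assert discovery_method("avs") == "OpFlex"
assert discovery_method("esx-vds") == "LLDP/CDP + vCenter API"
```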


Configuring Cisco ACI vDS


For each VMM domain, the APIC creates a vDS in the hypervisor. In Figure 40, the user configured two VMM
domains with the same vCenter but with different data centers, and as a result the APIC creates two vDSs.
Figure 40. For Each VMM Domain, Cisco APIC Creates a vDS in the Hypervisor

Configuring vDS Uplinks


The vCenter administrator assigns the host and its uplinks to a particular Cisco ACI vDS. At the same time, the
administrator also assigns the uplinks to a vDS uplink port group. For instance, in the example of Figure 40, the
administrator might assign vCenter2-DVUplinks-96.
From the Topology view of the fabric, the administrator can configure the load balancing of the uplinks and whether
they run Cisco Discovery Protocol. To do this, the administrator selects the Configuration tab and the ports on the
leaf switch that are connected to virtualized servers.
The configuration options for LLDP and Cisco Discovery Protocol are as follows:

vDS supports only one discovery protocol (either Cisco Discovery Protocol or LLDP), not both at the same
time.

LLDP takes precedence if both LLDP and Cisco Discovery Protocol are defined.

To use Cisco Discovery Protocol, disable LLDP and enable Cisco Discovery Protocol.

The configuration options for uplink load balancing are as follows:

LACP: You can enable or disable LACP for the uplink port group. You can use this option when the
corresponding ports on the leaf are configured for vPC.

MAC address pinning: This option is approximately equivalent to virtual port ID load balancing. This option
doesn't require configuration of vPC or port channels on the leaf to which the ports are connected.
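These selection rules can be sketched as follows (hypothetical helper functions, not an APIC API):

```python
def effective_discovery(lldp_enabled: bool, cdp_enabled: bool):
    """The vDS runs a single discovery protocol at a time; LLDP takes
    precedence when both are defined."""
    if lldp_enabled:
        return "LLDP"
    if cdp_enabled:
        return "CDP"
    return None

def uplink_mode(leaf_side_has_channel: bool) -> str:
    """LACP requires a matching vPC/port channel on the leaf side; MAC
    pinning (similar to virtual port ID load balancing) needs no channel."""
    return "LACP" if leaf_side_has_channel else "MAC pinning"

# To actually use Cisco Discovery Protocol, LLDP must be disabled first.
assert effective_discovery(lldp_enabled=True, cdp_enabled=True) == "LLDP"
assert effective_discovery(lldp_enabled=False, cdp_enabled=True) == "CDP"
assert uplink_mode(leaf_side_has_channel=False) == "MAC pinning"
```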


The GUI displays the options for a virtual switch (vswitch), as in Figures 41 and 42.
Figure 41. Configuring Uplinks for the vDS After Selecting the Ports to Which They Connect
Figure 42. Configuration Options for vDS Uplinks

Creating an Attachable Entity Profile for Virtualized Server Connectivity


For practical reasons, you may want to bundle multiple VLAN domains and VMM domains on the same port. Cisco
ACI provides a configuration construct called an attachable entity profile, or AEP, that is designed to bundle these
different VLAN pools on the same interface.


With the Basic GUI, you don't need to configure the AEP. You can just add a VLAN pool and multiple VMM
domains to an interface from the Fabric view, as shown earlier in Figure 41.

Designing the Tenant Network


This section describes how to configure a tenant and the design choices associated with this process.
In a traditional network infrastructure, the configuration would consist of the following steps:
1. Define a number of VLANs at the access and aggregation layers.
2. Configure the access ports to assign server ports to VLANs.
3. Define a VRF instance at the aggregation-layer switches.
4. Define an SVI for each VLAN and map these to each VRF instance.
5. Define Hot Standby Router Protocol (HSRP) parameters for each SVI.
6. Create and apply access control lists (ACLs) to control traffic between server VLANs and from server VLANs to the core.

A similar configuration in Cisco ACI requires the following steps:
1. Create a tenant and a VRF instance.
2. Define one or more bridge domains if you want broadcast and flooding (although you can have a bridge domain without flooding or broadcast).
3. Create EPGs for each server security zone (these can map one to one with the VLANs in the previous configuration steps).
4. Configure the default gateway (called a subnet in Cisco ACI) as part of the bridge domain or the EPG.
5. Create contracts.
6. Configure the connectivity between EPGs and contracts.

Cisco ACI defines multiple roles for the configuration and management of the fabric. The roles of interest for this
design guide are the infrastructure administrator and the tenant administrator.
This document already described the preparatory steps performed by the infrastructure administrator:

VLAN domain definition

VMM domain definition

Port configuration

Tenant creation

Creation of at least one VRF instance per tenant

Configuration of an L3Out connection and association of the L3Out connection with the tenant

The tenant administrator receives from the infrastructure administrator a tenant and one or more L3Out
connections to use. The tenant administrator then creates bridge domains, EPGs, and contracts and connects
them together to create the desired connectivity.


Implementing a Simple Layer 2 Topology


If you need to implement a simple Layer 2 topology without any routing, you can create one or more bridge
domains and EPGs. You can then configure the bridge domains for hardware-proxy mode or for flood-and-learn
mode.
Note that when creating any configuration or design in Cisco ACI, for objects to be instantiated and, as a result,
programmed into the hardware, they must meet the requirements of the object model. If a reference is missing, the
object will not be instantiated.
In the Cisco ACI object model, the bridge domain is a child of the VRF instance, so even if you need a purely
Layer 2 network, you still need to create a VRF instance and associate the bridge domain with that VRF instance.
This approach doesn't mean that the configuration consumes VRF hardware resources. In fact, if you don't enable
routing, no VRF resources will be allocated in hardware.
In summary, to create a simple Layer 2 network in Cisco ACI, you need to perform the following steps:

1. Create a VRF instance.
2. Create a bridge domain and associate the bridge domain with the VRF instance.
3. Do not enable unicast routing in the bridge domain.
4. Do not give the bridge domain a subnet IP address (SVI IP address).
5. Configure the bridge domain for either optimized switching (also called hardware-proxy mode: that is, using the mapping database) or the traditional flood-and-learn behavior (if there are silent hosts).
6. Create EPGs and make sure that they have a relationship with a bridge domain (EPGs and bridge domains do not have a one-to-one relationship; you can have multiple EPGs in the same bridge domain).
7. Create contracts between EPGs as necessary, or, if you want all EPGs to be able to talk to each other without any filtering, set the VRF instance as unenforced.
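The steps above can also be expressed as an APIC REST payload. The following is a minimal sketch: the class names (fvTenant, fvCtx, fvBD, fvRsCtx) come from the Cisco ACI object model, but the tenant, VRF, and bridge domain names are hypothetical examples, and the attribute values should be verified against your APIC release.

```python
# Sketch: APIC REST payload for a Layer 2-only network (steps 1-5 above).
# Class names follow the Cisco ACI object model; names and values are examples.

def l2_network_payload(vrf_name, bd_name):
    """Tenant payload: a VRF plus a bridge domain with routing disabled."""
    return {
        "fvTenant": {
            "attributes": {"name": "Development"},
            "children": [
                # Step 1: the VRF instance. No hardware VRF resources are
                # consumed because routing stays disabled on the bridge domain.
                {"fvCtx": {"attributes": {"name": vrf_name}}},
                # Steps 2-5: bridge domain tied to the VRF, no unicast
                # routing, no subnet, hardware proxy for unknown unicast.
                {"fvBD": {
                    "attributes": {
                        "name": bd_name,
                        "unicastRoute": "no",       # step 3: no routing
                        "unkMacUcastAct": "proxy",  # step 5: hardware proxy
                    },
                    "children": [
                        {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf_name}}},
                    ],
                }},
            ],
        }
    }

payload = l2_network_payload("VRF1", "BD1")
```

An automation script would typically POST such a JSON document to the APIC; in the Basic GUI, the same result is achieved with the drag-and-drop steps described in this section.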

In the example in Figure 43, the infrastructure administrator had already provisioned the tenant named
Development. The figure shows the main view of the tenant.
Figure 43. Main View of Development Tenant


In the Networking view, you can see that the infrastructure administrator has already provided a VRF instance and
an L3Out connection for this tenant (Figure 44).
Figure 44. Networking View of Development Tenant

To create a bridge domain, you can just drag and drop the bridge domain icon from the toolbar and associate it
with the VRF instance (Figure 45).
Figure 45. Associating a Bridge Domain with the VRF Instance


You can then choose the forwarding behavior for this bridge domain (Figure 46).
Figure 46. Creating a Bridge Domain

On the L3 Configurations tab, you disable unicast routing (Figure 47).


Figure 47. Configuring the Bridge Domain for Layer 2 Forwarding Only

The VRF configuration gives you the option to allow all EPGs to talk to each other without the need to enter any
contracts. Figure 48 shows the VRF options. You can make EPGs talk without contracts by selecting the option
Unenforced.


Figure 48. Filtering Options for the VRF Instance

Now the tenant administrator can create EPGs and associate them with physical or virtual workloads.

Configuring Bridge Domains


Traffic forwarding in Cisco ACI operates as follows:

● Cisco ACI routes traffic destined for the router MAC address.
● Cisco ACI bridges traffic that is not destined for the router MAC address.
In both cases, the traffic traverses the fabric encapsulated in VXLAN to the VTEP destination IP address where the
endpoint has been discovered.
Notice that the bridge domain must be associated with a router instance for the subnets to be instantiated. The
other fields control the way that unknown unicast traffic and multicast traffic is forwarded.
Tuning the Bridge Domain Configuration
Cisco ACI doesn't use flooding by default, but you can configure this behavior.
Figure 49 shows the configurable options for the bridge domain.
Figure 49. Configuration Options for the Bridge Domain


You can configure the bridge domain forwarding characteristics as optimized or as custom, as follows:

● ARP Flooding: If the ARP Flooding checkbox is not selected, optimized ARP handling allows the fabric to forward ARP requests as unicast traffic.
● L2 Unknown Unicast: Hardware Proxy for unknown unicast traffic is the default option. This forwarding behavior uses the mapping database to send the traffic to the destination without relying on flood-and-learn behavior.

These are the options for Layer 2 unknown unicast frames:

● Flood: If the Flood option is enabled in a bridge domain, the packet is flooded in the bridge domain by using a multicast tree rooted in the spine that is scoped to the bridge domain.
● No-flood (default): The packet is looked up in the spine, and if it is not found in the spine, it is dropped.

These are the options for Layer 2 multicast frames:

● Flood (default): Flood in the bridge domain.
● Drop: Drop the packet.

The L3 Configurations tab allows the administrator to configure the following parameters:

● Unicast Routing: If this setting is enabled, the fabric routes traffic and provides the default gateway function (through the subnet configuration).
● Gateway and Subnet Address: These settings identify the SVI IP addresses and default gateway addresses for the bridge domain.
● Enforce subnet check for IP learning: This option is similar to a unicast Reverse Path Forwarding (uRPF) check. It verifies that the IP addresses entering the bridge domain indeed belong to subnets defined in the bridge domain.
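As a rough illustration, the L3 parameters above map onto attributes of the fvBD object in the APIC REST API. The sketch below is an assumption-laden example: the bridge domain name and gateway address are invented, and the attribute names (unicastRoute, limitIpLearnToSubnets) should be checked against your APIC version.

```python
# Sketch: fvBD attributes matching the L3 Configurations tab described above.
# Attribute names follow the ACI object model; the BD name and gateway
# address are examples only.

routed_bd = {
    "fvBD": {
        "attributes": {
            "name": "BD-Web",
            "unicastRoute": "yes",           # Unicast Routing: fabric is the gateway
            "limitIpLearnToSubnets": "yes",  # Enforce subnet check for IP learning
        },
        "children": [
            # Gateway and Subnet Address: the default gateway (SVI IP) for the BD
            {"fvSubnet": {"attributes": {"ip": "10.10.10.1/24"}}},
        ],
    }
}
```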

Using Common Bridge Domain Configurations


If you need a bridge domain for regular server endpoints and you don't intend to introduce firewalls or load balancers, you can use the default bridge domain configuration. In this case, the standard configuration consists of one single bridge domain with the following settings:

● Traffic forwarding: Optimized
● Routing: Enabled
● Subnets: List of default gateways for the bridge domain (subnets can also be configured under the EPG)
● Subnet check for IP learning: Enabled

If you need to connect firewalls or load balancers to a bridge domain, you should select the following options:

● ARP Flooding: Enabled. If ARP flooding is disabled and the firewall is silent (that is, if it does not send out GARP or similar packets), the Cisco ACI fabric will not learn the MAC information for the firewall. In this situation, ARP requests from the router to the firewall will fail. To avoid this scenario, regular ARP flooding should be enabled in the bridge domains connected to the firewall interfaces. In other words, you should not enable the Cisco ACI optimized ARP handling for this bridge domain.


● L2 Unknown Unicast: Flood. By default, hardware-proxy mode is used to handle unknown unicast traffic. For the bridge domain connected to a firewall or a load balancer, you should enable regular Layer 2 handling (flooding).
● Unicast Routing: Disabled. If the Layer 3 gateway services are running on a firewall or a load balancer and not on the Cisco ACI fabric, the bridge domain should not be routing.
● Gateway and Subnet Address: Not configured. If the Layer 3 gateway services are running on a firewall or a load balancer and not on the Cisco ACI fabric, the subnet should not be configured on the bridge domain connected to the firewall or load balancer.
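The recommendations above can be summarized as a small set of fvBD attribute values. This is a hedged sketch: the attribute names come from the ACI object model, but you should verify them on your release, and no subnet child is shown because the gateway lives on the firewall.

```python
# Sketch: fvBD attribute values for a bridge domain connected to a
# firewall or load balancer, per the recommendations above.
# Attribute names from the ACI object model; verify on your APIC release.

firewall_bd_attrs = {
    "arpFlood": "yes",          # ARP Flooding: Enabled (silent devices)
    "unkMacUcastAct": "flood",  # L2 Unknown Unicast: Flood (not hardware proxy)
    "unicastRoute": "no",       # Unicast Routing: Disabled (gateway is the firewall)
}
# Gateway and Subnet Address: not configured, so no fvSubnet child is added.
```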

Placing the Default Gateway


The default gateway in Cisco ACI can be placed either under the bridge domain or under the EPG:

● Subnet under the bridge domain: If you do not plan any route leaking among VRF instances and tenants, the subnets can be placed under the bridge domain. Figure 50 shows where to configure subnets under the bridge domain.
● Subnet under the EPG: If you plan to make servers on a given EPG accessible from other tenants (such as in the case of shared services), you need to place the subnet under the EPG, because a contract will then also place a route into the respective VRF instances to which that EPG belongs. Figure 51 shows where to configure subnets under the EPG.

Figure 50. Configuring Subnets Under the Bridge Domain


Figure 51. Configuring Subnets Under the EPG

Subnets can have these properties:

● Advertised Externally: This option, also referred to as the public setting, indicates that this subnet can be advertised to the external router by the border leaf.
● Private to VRF: This option indicates that this subnet is contained within the Cisco ACI fabric and is not advertised to external routers by the border leaf.
● Shared Between VRF Instances: This option is for shared services. It is used to indicate that this subnet needs to be leaked to one or more private networks. The shared subnet attribute is applicable to both public and private subnets.

Note: Prior to Cisco ACI Release 1.2, subnets that were placed under the EPG could not be marked as public; hence, they could not be advertised to the outside. With Release 1.2, the subnets configured under the EPG offer the same configuration options as the subnets placed under the bridge domain: private, public, and shared.
When configuring the default gateway for servers, follow these best practices:

● Configure the subnet under the EPG (so that you have the flexibility to create leaking across VRF instances).
● If the subnet must be announced outside the fabric, mark it as advertised externally.
● If the subnet must be announced in other VRF instances, mark it as shared.
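These best practices can be sketched as a helper that derives the scope of an fvSubnet object placed under an EPG. The helper itself is hypothetical, but the scope keywords (private, public, shared) match the subnet properties described above; verify the exact attribute syntax against your APIC release.

```python
# Illustrative helper: build an fvSubnet for placement under an EPG.
# The scope keywords mirror the subnet properties described above.

def epg_subnet(ip, announce_outside=False, share_across_vrfs=False):
    """Return an fvSubnet object; ip is the gateway address, e.g. '10.0.0.1/24'."""
    flags = ["public" if announce_outside else "private"]
    if share_across_vrfs:
        flags.append("shared")  # leak the route into other VRF instances
    return {"fvSubnet": {"attributes": {"ip": ip, "scope": ",".join(flags)}}}

# A shared-services subnet, announced outside and leaked to other VRFs:
subnet = epg_subnet("10.0.0.1/24", announce_outside=True, share_across_vrfs=True)
```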

Virtual IP, Virtual MAC Address


The subnet configuration under the bridge domain lets you define the equivalent of a pervasive gateway for the case of Layer 2 extension.
When extending a bridge domain outside of a single fabric, such as in the case of multiple Cisco ACI fabrics interconnected at Layer 2, you need to ensure that the default gateway IP address is identical between sites in order to allow virtual machine migration. This IP address in Cisco ACI is called a virtual IP address. You can also define a virtual MAC address that is identical between the two fabrics. This virtual MAC address is associated with the virtual IP address.

Configuring EPG and Server Connectivity


The tenant administrator is the operator responsible for associating EPGs with the ports to which physical servers
are connected or with the VMM to which the EPGs must be extended through port groups.
The infrastructure administrator configures the VMM domains and the port channels and vPCs. The tenant
administrator can create EPGs and associate the EPGs with the existing ports, port channels, and vPCs,
specifying which VLAN to match or which VMM domain to use.
2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Page 50 of 84

A dedicated EPG is created for each application tier (directory services, Microsoft Exchange services, Microsoft
SQL services, etc.). Each EPG contains the hosts and servers for the tier being configured.
Each EPG is associated with a VLAN taken from one of the VLAN domains. For virtual machines, EPGs are
associated with the VMM domain. For physical hosts, a VLAN must be statically allocated from a VLAN domain.
Creating EPGs
The first step in creating EPGs is to configure the EPGs in an application network profile (Figure 52). This step
enables you to create EPGs inline.
Figure 52. Configuring an EPG in the Application Network Profile

If you then select the application profile, the GUI displays a toolbar with the configuration options (Figure 53).
Figure 53. Configuring the Application Network Profile

Here you can create EPGs in multiple ways. For instance, you can use the drag-and-drop menu.
When you bring an EPG to the canvas, the GUI asks you for a name and a bridge domain, as shown in Figure 54.
Figure 54. Associating the EPG with the Bridge Domain


When you hover the mouse cursor over the EPG, you can see what is configured with the EPG, as shown in
Figure 55.
Figure 55. Hovering the Mouse Cursor over the EPG Displays Information About the Configurations in the EPG

You can then associate the EPGs with VMM domains or with physical ports (Figures 56 through 59).
Figure 56. Associating an EPG with a VMM Domain (1)

In the case in Figure 56, you would then enter the VMM domain information based on the VMM domain that the
infrastructure administrator has created, as shown in Figure 57.
Figure 57. Associating an EPG with a VMM Domain (2)


Figure 58. Associating an EPG with a Physical Port (1)

Figure 59. Associating an EPG with a Physical Port (2)

You then add contracts to define which EPGs can talk to which other EPGs, as shown in Figure 60.
Figure 60. Creating a Contract Between EPG 1 and EPG 2


You then enter the contract details, as in Figure 61.


Figure 61. Adding Entries to the Contract

Configuring EPG Deployment Immediacy Options


When you associate an EPG with a port, the hardware of the leaf switch is programmed immediately.
When associating an EPG with a VMM domain, Cisco ACI can optimize the use of hardware resources based on
the Deploy Immediacy and Resolution Immediacy configurations, as displayed in Figure 62.
Figure 62. Deployment Immediacy Options

These configuration options control whether the configuration is pushed immediately to the policy engine software
on the leaf switches (resolution immediacy), and whether the settings are configured immediately on the hardware
of the leaf switch (deployment immediacy).

● Resolution Immediacy Pre-provision: The policy is immediately resolved and prepopulated on all the policy engines of the leaf switches to which a VMM domain is attached. (This setting is the infrastructure administrator configuration in which each port is associated with a VMM domain.)
● Resolution Immediacy Immediate: The configuration of VRF instances, VLANs, etc. is resolved immediately according to which ports have virtualized hosts attached that belong to the VMM domain.
● Resolution Immediacy On Demand: The configuration of VRF instances, VLANs, etc. is resolved upon notification from a VMM that a new endpoint has been discovered or upon discovery of a new endpoint through data-plane learning.
● Deployment Immediacy Immediate: The APIC pushes all configurations to the hardware of the leaf switch as soon as the policy is entered into the policy engine of the leaf switch.
● Deployment Immediacy On Demand: The configuration is applied in hardware on the leaf switch when the VMM reports a newly attached endpoint or when a new endpoint is detected on the data path.

Using Endpoint Learning: Virtual Machine Port Group


Assuming that a VMM domain has been configured correctly and a connection has been made to the virtual
machine management system, the creation of an EPG results in the creation of a corresponding port group on the
virtualized hosts.
The virtual machine administrator places the vNIC in the appropriate port group, which triggers a notification to the
APIC that the virtual machine has changed. Following the detection of the endpoint and depending on the
configuration of the resolution and deployment immediacy settings, EPG policies are downloaded to the specific
leaf switch to which the endpoint is attached.
The leaf switch then informs the spine switch mapping database of the new endpoint details (identity and location).
Using Endpoint Learning: Bare-Metal Host
If a bare-metal host joins the fabric, the leaf switch to which the endpoint is attached uses information in DHCP or
ARP requests (depending on whether or not the host is statically addressed). The identity and location information
gleaned from this process is sent to the spine switch mapping database using COOP.
Using Endpoint Aging
If no activity occurs on an endpoint, the endpoint information is aged out dynamically based on the setting of an idle timer. If no activity is detected from a local host after 75 percent of the idle timer value has elapsed, the fabric checks whether the endpoint is still alive by sending a probe to it.
If the endpoint does not actively send or receive traffic for the configured idle time interval, a notification is sent to
the mapping database, using COOP, to indicate that the endpoint should be deleted from the database.
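The aging behavior described above can be modeled as two simple checks. This is purely illustrative (the function names and time units are inventions); only the 75 percent probe threshold and the full-interval deletion follow the text.

```python
# Illustrative model of the endpoint idle timer described above.
# Names and time units are examples; the 75% probe threshold and the
# full-interval deletion follow the behavior described in the text.

def endpoint_probe_due(last_seen, now, idle_timeout):
    """True when 75% of the idle interval has elapsed without activity,
    which is when the fabric probes the endpoint to see if it is alive."""
    return (now - last_seen) >= 0.75 * idle_timeout

def endpoint_expired(last_seen, now, idle_timeout):
    """True when the full idle interval has elapsed; the mapping database
    is then told (via COOP) to delete the endpoint."""
    return (now - last_seen) >= idle_timeout
```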

Configuring Contracts
The Cisco ACI fabric does not permit communication between EPGs by default, so the administrator must
configure contracts between the application EPGs.
Figure 63 shows how contracts are configured between EPGs: for instance, between internal EPGs and external
EPGs.
Figure 63. Contracts Between Internal and External EPGs


Setting the Contract Scope


The scope of the contract defines the EPGs to which the contract can be applied:

● Private network: EPGs associated with the same VRF instance can use this contract.
● Application profile: EPGs in the same application profile can use this contract.
● Tenant: EPGs in the same tenant can use this contract.
● Global: EPGs throughout the fabric can use this contract.

Setting the Contract Direction


In Cisco ACI, by default a contract is defined as bidirectional. Figure 64 illustrates the concept of a bidirectional
contract. The contract in diagram (a) is bidirectional, and the contract in diagram (b) is not.
In the figure, (a) shows EPG1 consuming contract C and EPG2 providing contract C. If this contract is using a filter
that specifies allow traffic to port 80, the programming in the TCAM allows EPG1 to send traffic destined for port
80, and it allows EPG2 to send traffic from port 80. This is a bidirectional contract.
If you instead also want to be able to send traffic to port 80 from EPG2, you can use the approach shown in (b). In
this case, EPG1 and EPG2 both provide and consume contract C. In this case, the contract is not bidirectional.
Figure 64. Contracts and Filters

The ACL programmed in Figure 64 changes if, instead of the default options, you select the following:

● If you disable Apply Both Directions, the only entry in the TCAM is:
permit EPG1 to EPG2 srcPort=any dstPort=80
● If you enable Apply Both Directions and disable Reverse Filter Ports, the two entries in the TCAM are:
permit EPG1 to EPG2 srcPort=any dstPort=80
permit EPG2 to EPG1 srcPort=any dstPort=80
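The effect of the Apply Both Directions and Reverse Filter Ports options can be sketched as a rule expansion. This illustrative helper (not APIC code) reproduces the TCAM entries listed above; the EPG names and port value are examples.

```python
# Illustrative expansion of one contract filter entry (e.g. allow dst
# port 80) into TCAM-style rules; defaults mirror the GUI defaults.

def tcam_rules(consumer, provider, dst_port,
               apply_both_directions=True, reverse_filter_ports=True):
    # The consumer-to-provider direction is always programmed.
    rules = [(consumer, provider, "any", dst_port)]
    if apply_both_directions:
        if reverse_filter_ports:
            # Return traffic: the provider answers *from* the service port.
            rules.append((provider, consumer, dst_port, "any"))
        else:
            # The same (unreversed) filter is applied in the other direction.
            rules.append((provider, consumer, "any", dst_port))
    return rules

# Default settings reproduce the bidirectional contract of Figure 64 (a):
# [('EPG1', 'EPG2', 'any', '80'), ('EPG2', 'EPG1', '80', 'any')]
rules = tcam_rules("EPG1", "EPG2", "80")
```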

Another important consideration is that in Cisco ACI, in principle you don't need to provide and consume a contract to achieve the configuration illustrated in (b). You can also simply edit the filter that contract C uses. For instance, if you want to turn (a) into (b) without adding a contract, all you need to do is to edit the filter as follows:

● permit any to port 80
● permit port 80 to any


Figure 65 shows another example of a contract with HTTP and HTTPS traffic and the effect of changing the
Reverse Filter Ports settings.
Figure 65. Contracts with HTTP and HTTPS Filters and the Effect of Reverse Filter Ports

This list summarizes the best practices for configuring contracts:

● Use only one contract between any pair of EPGs.
● Do not change the default configurations of Apply Both Directions and Reverse Filter Ports.
● If you need to add rules, edit the subject and filters under the contract itself.
● When possible, group the rules that are common across all the EPGs under the same VRF instance and associate them with the vzAny EPG to reduce the consumption of TCAM entries.

Configuring an Unenforced VRF Instance


In certain deployments, all EPGs in a given VRF instance may need to be able to talk to each other. In this case,
simply set the VRF instance with which they are associated as unenforced.
Using vzAny
vzAny, also referred to as EPG collection for context, is a special EPG that represents all the EPGs associated with
the same VRF instance. This concept is useful when a configuration has contract rules that are common across all
the EPGs under the same VRF instance. In this case, you can place the rules that are common in a contract
associated with this EPG.
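A sketch of how a common contract might be attached to vzAny through the REST API follows. The vzAny object lives under the VRF instance (fvCtx), and the relation class names (vzRsAnyToProv, vzRsAnyToCons) follow the ACI object model, but the contract name "common-services" is hypothetical; verify the classes against your APIC release.

```python
# Sketch: attaching a shared contract to vzAny (the collection of all
# EPGs in a VRF instance). Class names follow the ACI object model;
# the contract name "common-services" is a hypothetical example.

vz_any = {
    "vzAny": {
        "attributes": {},
        "children": [
            # vzAny both provides and consumes the shared contract, so the
            # common rules apply once to every EPG in the VRF instance,
            # reducing TCAM consumption compared with per-EPG contracts.
            {"vzRsAnyToProv": {"attributes": {"tnVzBrCPName": "common-services"}}},
            {"vzRsAnyToCons": {"attributes": {"tnVzBrCPName": "common-services"}}},
        ],
    }
}
```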


Figure 66 shows where to find the vzAny configuration in the GUI.


Figure 66. EPG Collection for Context

Configuring VRF Route Leaking with Contracts


To configure route leaking between any two tenants or VRF instances, Cisco ACI requires the configuration of a
contract interface and the definition of subnets under the EPG instead of under the bridge domain.
Here, note that the direction of the contract has no particular influence on the leaking of the routes. If a contract is
provided by an EPG in one VRF instance to an EPG in the other VRF instance, as long as the subnets are listed
under each EPG and configured as shared, both subnets are installed in the VRF instances to enable the
communication between the two EPGs.

Configuring External Layer 3 Connectivity


This section explains how Cisco ACI can connect to outside networks using Layer 3 technology. It explains the
route exchange between Cisco ACI and the external routers, and how to use dynamic routing protocols between
the Cisco ACI border leaf switch and external routers. It also explores the forwarding behavior between internal
and external endpoints and the way that policy is enforced for the traffic flow between them.
As mentioned earlier, Cisco ACI refers to the connectivity of a tenant to the outside Layer 3 network as an L3Out
connection. The configuration of the L3Out connection is simplified in the Basic GUI.
When using the Basic GUI, the infrastructure administrator configures the L3Out connection and assigns it to the tenant. The tenant administrator uses L3Out connections by:

● Connecting them to various EPGs
● Establishing contracts between the Layer 3 external EPG and the tenant EPGs
● Defining which subnets can be announced outside and which subnets cannot

The section Creating L3Out Connections earlier in this document described the basic steps the infrastructure
administrator needs to follow to create an L3Out connection. This section provides additional details about the
implementation and use of the L3Out connection by the tenant administrator.


Private Versus External Routed Networks


Private networks are also called VRF instances or contexts. A private network is a VRF instance that provides IP address space isolation for different tenants. Each tenant can belong to one or more private networks, or tenants can share one default private network with other tenants if the Cisco ACI fabric doesn't use overlapping IP addresses.
External routed networks are also called L3Out connections in this and other Cisco ACI documents. In the L3Out connection, administrators configure the interfaces, protocols, and protocol parameters that are used to provide IP connectivity to external routers. Using the Network menu on the external routed network, users can configure external EPGs. After external EPGs are defined, users can define contracts for the communication between internal EPGs (EPGs defined under the application profile) and external EPGs.

Relationships in the Object Tree


L3Out connections, or external routed networks, provide IP connectivity between a private network of a tenant and
an external IP network. Each L3Out connection is associated with only one private network (VRF instance). A
private network may not have an L3Out connection if IP connectivity to the outside is not required.
For subnets to be announced to the outside router, they need to be defined as advertised externally, and the bridge
domain must have a relationship with the L3Out connection (in addition to its association with the VRF instance).
Starting with Cisco ACI Release 1.2(1), the subnet under the EPG can also be defined as advertised externally.
The bridge domain with which the EPG is associated must have an association with the L3Out connection and the
VRF instance.

Border Leaf Switches


The border leaf switches are Cisco ACI leaf switches that provide Layer 3 connections to outside networks. Any
Cisco ACI leaf switch can be a border leaf. There is no limitation on the number of leaf switches that can be used
as border leaf switches. The border leaf can also be used to connect to computing, IP storage, and service
appliances. In large-scale design scenarios, for easier scalability, you may want to separate border leaf switches
from the leaf switches that connect to computing and service appliances.
Border leaf switches support three types of interfaces to connect to an external router:

● Layer 3 interface
● Subinterface with IEEE 802.1Q tagging: With a subinterface, the same physical interface can be used to provide a Layer 3 outside connection for multiple private networks.
● Switched virtual interface: With an SVI, the same physical interface that supports Layer 2 and Layer 3 can be used for a Layer 2 outside connection as well as an L3Out connection.

In addition to supporting routing protocols to exchange routes with external routers, the border leaf also applies and
enforces policy for traffic between internal and external endpoints.
As of this writing, Cisco ACI supports the following routing mechanisms:

● Static routing (supported for IPv4 and IPv6)
● OSPFv2 for normal and not-so-stubby-area (NSSA) areas (IPv4)
● OSPFv3 for normal and NSSA areas (IPv6)
● EIGRP (IPv4 only)
● iBGP (IPv4 and IPv6)
● eBGP (IPv4 and IPv6)

By using subinterfaces or SVIs, border leaf switches can provide L3Out connectivity for multiple tenants with one
physical interface.

Performing Route Distribution Within the Cisco ACI Fabric


MP-BGP is implemented between leaf and spine switches to propagate external routes within the Cisco ACI fabric. BGP route reflector technology is deployed to support a large number of leaf switches within a single fabric. All the leaf and spine switches are in one single BGP autonomous system. After the border leaf learns the external routes, it can then redistribute the external routes of a given VRF instance into the MP-BGP VPNv4 and VPNv6 address families. With the VPNv4 address family, MP-BGP maintains a separate BGP routing table for each VRF instance. Within MP-BGP, the border leaf advertises routes to a spine switch, which is a BGP route reflector. The routes are then propagated to all the leaf switches on which the VRF instances (or private networks, in the terminology of the APIC GUI) are instantiated. Figure 67 illustrates the routing protocol within the Cisco ACI fabric and the routing protocol between the border leaf and external router using VRF-lite.
Figure 67. Routing Protocols in Cisco ACI Fabric


Border leaf switches are the place at which tenant subnets are injected into the protocol running between the border leaf switches and external routers. Users determine which tenant subnets they want to advertise to the external routers. When specifying subnets under a bridge domain or an EPG for a given tenant, the user can specify the scope of the subnet:

● Advertised Externally: This subnet is advertised to the external router by the border leaf.
● Private to VRF: This subnet is contained within the Cisco ACI fabric and is not advertised to external routers by the border leaf.
● Shared Between VRF Instances: This option is for shared services. It indicates that this subnet needs to be leaked to one or more private networks. The shared-subnet attribute applies to both public and private subnets.

In addition to specifying a tenant subnet as advertised externally, the user needs to associate an L3Out connection with a bridge domain for the border leaf to advertise the tenant subnet to an external router.

Assigning the BGP Autonomous System Number


The Cisco ACI fabric supports one ASN. The same ASN is used for internal MP-BGP and for the iBGP session
between the border leaf switches and external routers. Without MP-BGP, the external routes (static, OSPF,
EIGRP, or BGP) for the L3Out connections would not be propagated within the Cisco ACI fabric, and the Cisco ACI
leaf switches that are not part of the border leaf would not have IP connectivity to an outside network. Therefore, if
the administrator needs to use iBGP, the administrator needs to identify the ASN on the router to which the Cisco
ACI border leaf connects and use it as the BGP ASN for the Cisco ACI fabric.
You can make the spine switches the route reflectors by configuring them as such under System > System Settings, as shown in Figure 68.
Figure 68. Setting the Spine Switches as Route Reflectors


If you need to change the BGP ASN in the Basic GUI, you can find the configuration under System > System Settings (Figure 69).
Figure 69. Assigning the BGP ASN to the Spine Switches

If you use iBGP to exchange routes between the fabric and the outside network, each VRF instance (L3Out connection) of a given tenant has its own iBGP session with all the BGP speakers of the autonomous system to which it is connected (or with the route reflector of the autonomous system).
Border leaf switches do not need to have iBGP sessions among themselves because border leaf switches learn routes from each other through MP-BGP.
If you use eBGP to exchange routes between the fabric and the outside network, the ASN of the eBGP session configured on the L3Out connection must be different from the ASN of the fabric and from the ASN of the external router with which it peers.

Configuring Infrastructure L3Out Connectivity


As previously described, the infrastructure administrator creates a tenant and a VRF instance.
These are the configuration steps:

1. Select the leaf switches in the Fabric view and create or assign the VRF instance to the specific leaf. This step includes the configuration of the routing protocol and, potentially, static routes and the assignment of the router ID.
2. Select the port or ports on the leaf on which to configure the L3Out connection. The administrator defines the IP address of the port and the protocol configuration specific to that port.

The following settings may be useful in completing the configuration:

● Ignore the maximum transmission unit (MTU) for the OSPF interface policy.
● Set a secondary IP address (a floating IP address such as the address for HSRP) for the SVI in the case of a static routing configuration.


Importing and Exporting Routes


The infrastructure administrator can configure a route map to define the routes from the tenant that are announced
outside and the routes that can be imported.
In the route map, the infrastructure administrator associates the L3Out connection with the bridge domain whose
subnets need to be announced.
Figure 70 shows that the route-map configuration is part of the VRF configuration on the leaf.
Figure 70. Configuring VRF on the Border Leaf

The route map also specifies the subnets that are imported into a given tenant.
For a given tenant VRF instance, the route map can be defined for one leaf switch (local scope) or for all leaf
switches (global scope) to which the tenant VRF instance has external Layer 3 connections (Figure 71).
With global scope, the user does not have to repeat the same route-map configuration on other leaf switches that
need the same route control.


Figure 71. Creating the Route Map to Import Routes into the Tenant Routing Table

In the prefix list, the administrator adds the external subnets to import into the tenant routing table (Figure 72).
These must be exact matches (prefix/prefix length) or all prefixes (0.0.0.0/0 aggregate).
Figure 72. Defining the Prefixes to Import into the Tenant Routing Table


Configuring Tenant L3Out Connectivity


The tenant administrator can view and modify the configurations performed by the infrastructure administrator,
which are found in the directory tree under VRFs, as in Figure 73.
Note that in the past, only an infrastructure administrator could navigate the fabric inventory. With Cisco ACI Release 1.2, the tenant administrator can verify the configuration directly from the Tenants view.
The configuration of the L3Out connection performed by the tenant administrator consists of the following steps:
1. Create a Layer 3 external EPG by using the Prefix EPG option.
2. Create contracts between EPGs and the Layer 3 external EPG.

Figure 73. Creating an L3Out Configuration Specific to the Tenant

Creating Subnets
In addition, the tenant must create subnets for the bridge domain and specify which subnets can be advertised and which cannot. This configuration can be performed from the L3 Configurations tab of the bridge domain, from the subnet folder under the bridge domain (as shown in Figures 74 and 75), or from the EPG. You should normally enable the Enforce Subnet Check for IP Learning option.
Figure 74. Configuring a Subnet for the Bridge Domain


Figure 75. Configuring Subnets and Specifying Whether the Routing Protocol Announces Them
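The GUI setting shown in Figures 74 and 75 maps to the subnet object in the APIC object model. As a hedged sketch, the same choice could be expressed through the REST API roughly as follows; fvSubnet and the scope values public/private are standard APIC classes and attributes, while the helper function itself is hypothetical:

```python
def bridge_domain_subnet(gateway_ip, advertise_externally):
    """Build the JSON body an APIC REST client could POST to create a
    subnet under a bridge domain. The 'scope' attribute controls
    whether the routing protocol announces the subnet: 'public' is
    advertised externally, 'private' is not."""
    return {
        "fvSubnet": {
            "attributes": {
                "ip": gateway_ip,  # gateway address in prefix/length form
                "scope": "public" if advertise_externally else "private",
            }
        }
    }

print(bridge_domain_subnet("192.168.10.1/24", advertise_externally=True))
```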

For subnets to be announced outside, the infrastructure administrator needs to perform additional steps to create a
route map from the fabric configuration view.
Creating an External EPG
The tenant administrator creates an EPG to classify the traffic entering the tenant from the outside. Figure 76
illustrates the creation of an external EPG that classifies remote traffic coming from a particular leaf and matching a
particular subnet.
To configure connectivity between the outside and the tenant EPG, the tenant administrator then creates contracts.
For instance, the internal EPG may provide the contract, and the external EPG may consume it.
Figure 76. Creating a Layer 3 External (Prefix) EPG

Announcing Subnets to the Outside


For a subnet in a bridge domain to be announced to the outside, the following configurations are required:

● The subnet must be configured under the bridge domain and the EPG and marked as advertised externally (or public).
● The bridge domain must have a relationship to the L3Out connection that is defined from the route map for the VRF instance on the leaf switch.
● A contract must exist between the Layer 3 external EPG and the EPG associated with the bridge domain. If this contract is not in place, the announcement of the subnets cannot occur.

Note: In some scenarios, a contract may not appear to be necessary, because the subnets appear to be announced without a contract in place. If the bridge domain is on the same leaf on which the L3Out connection is located, the subnet may simply be announced. In general, if the bridge domain is not physically present on the same leaf because no endpoint is associated with it, the subnets are announced only if the Layer 3 external EPG has a contract with the client EPG.
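The three conditions above can be summarized as a simple predicate. This is only a mnemonic sketch; the function and its parameters are invented for illustration and are not part of any APIC API:

```python
def can_announce_subnet(subnet_scope, bd_has_l3out_relation, contract_in_place):
    """Mnemonic check for the three requirements listed above: the
    subnet is marked public, the bridge domain has a relationship to
    the L3Out connection, and a contract links the Layer 3 external
    EPG to the internal EPG."""
    return ("public" in subnet_scope
            and bd_has_l3out_relation
            and contract_in_place)

print(can_announce_subnet("public", True, False))  # False: no contract
```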

Designing for Shared Services


A common requirement of multitenant cloud infrastructures is the capability to provide shared services to hosted
tenants. Such services include Active Directory, DNS, and filers. Figure 77 illustrates this requirement.
Figure 77. Shared Services Tenant


In Figure 77, Tenants 1, 2, and 3 have locally connected servers, respectively part of EPGs A, B, and C. Each
tenant has an L3Out connection connecting remote branch offices to this data center partition. Remote clients for
Tenant 1 need to establish communication with servers connected to EPG A. Servers hosted in EPG A need
access to shared services hosted in EPG D in a different tenant. EPG D provides shared services to the servers
hosted in EPGs A and B and to the remote users of Tenant 3.
In this design, each tenant has a dedicated L3Out connection to connect to the remote offices. The subnets of EPG
A are announced to the remote offices for Tenant 1, the subnets in EPG B are announced to the remote offices of
Tenant 2, and so on. In addition, some of the shared services may be used from the remote offices, as in the case
of Tenant 3. In this case, the subnets of EPG D are announced to the remote offices of Tenant 3.
Another common requirement is shared access to the Internet, as shown in Figure 78. In the figure, the L3Out
connection of the Shared Services tenant (L3Out 4) is shared across Tenants 1, 2, and 3. Remote users may also
need to use this L3Out connection, as in the case of Tenant 3. In this case, remote users can access L3Out 4
through Tenant 3.
Figure 78. Shared L3Out Connection


These requirements can be implemented in several ways:

● Use the equivalent of VRF leaking (which in Cisco ACI means configuring the subnet as shared).
● Use the VRF instance from the Common tenant and the bridge domains from each specific tenant.
● Provide shared services with outside routers connected to all tenants.
● Provide shared services from the Shared Services tenant by connecting it with external cables to other tenants in the fabric.

The first two options don't require any additional hardware beyond the Cisco ACI fabric itself. The third option requires external routing devices, such as additional Cisco Nexus 9000 Series Switches, that are not part of the Cisco ACI fabric. The fourth option, which is logically equivalent to the third, uses a tenant as if it were an external router and connects it to the other tenants through loopback cables.
If you are using the software version recommended in this design document (Cisco ACI Release 1.2), you most likely will want to use the first option. If it is acceptable for different tenants to use a shared address space, you can use the second option. If you need to put shared services in a physically separate device, you are likely to use the third option. And if you have specific constraints that make the first two options not viable but you don't want an additional router to manage, you will most likely want to use the fourth option.

Configuring VRF Route Leaking Across Tenants


To configure route leaking between any two tenants or VRF instances, Cisco ACI requires you to configure a contract interface and to define subnets under the EPG and under the bridge domain.
Configuring Shared Subnets and Contract Interfaces
The configuration of shared subnets and contract interfaces consists of the following steps:
1. Configure subnets under the EPGs for which you need to establish a contract (interface).
2. Mark the subnets as shared. If they need to be announced to an L3Out connection, also mark them as advertised externally.
3. Create a contract with global scope.
4. Export the contract to a tenant (for example, if the contract is defined in the Shared Services tenant, export the contract to Tenants 1, 2, and 3).
5. From each tenant, consume or provide the contract interface.

For this configuration, the direction of the contract is not important, as can be seen in Figure 79. This example
shows two VRF instances: VRF 1 and VRF 2. EPG 1 in VRF 1 is associated with subnet 10.10.10.x, and EPG 2 in
VRF 2 is associated with subnet 20.20.20.x.
The routing table of VRF 1 shows 10.10.10.0/24 directly connected, and it shows 20.20.20.0/24 connected through
an overlay. Similarly, the routing table of VRF 2 shows 20.20.20.0/24 directly connected, and it shows
10.10.10.0/24 connected through an overlay.


The routes in the respective tenants are installed as a result of the contracts being provided by EPG 2 and
consumed by EPG 1. You do not need to configure the reverse contract. The presence of one contract between
EPG 2 and EPG 1 is enough to inject the route for 10.10.10.0/24 into VRF 2 and the route for 20.20.20.0/24 into
VRF 1.
Figure 79. VRF Route Leaking Through a Contract Interface
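In REST terms, the subnet and contract steps listed earlier in this section reduce to two small payloads. The class names fvSubnet and vzBrCP come from the standard APIC object model; the helper functions are illustrative only:

```python
def shared_epg_subnet(gateway_ip):
    # Subnet defined under the EPG; "shared" leaks it to the other VRF,
    # and "public" would additionally announce it through an L3Out.
    return {"fvSubnet": {"attributes": {"ip": gateway_ip,
                                        "scope": "public,shared"}}}

def global_contract(name):
    # A contract must have global scope before it can be exported
    # to another tenant.
    return {"vzBrCP": {"attributes": {"name": name, "scope": "global"}}}

print(shared_epg_subnet("10.10.10.1/24"))
print(global_contract("shared-services"))
```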

Announcing Shared Subnets to an L3Out Connection


With the configurations defined in the previous section, the subnet under the EPG is available to other tenants so
that traffic can flow between the EPGs (EPGs A, B, C, and D in Figure 77).
Starting from Cisco APIC Release 1.2(1), you can also make the subnets of EPG D reachable from a remote client connected to a different tenant, such as the remote clients of Tenant 3 in Figure 77.
If you need EPG D to be reachable from a remote client, you need to mark the subnet as both advertised externally and shared, and you need to create a contract interface between the L3Out connection of the tenant (for example, Tenant 3) and EPG D.
The subnets of the remote clients also need to be announced to the Shared Services tenant where EPG D is
located.
To do this, you need to modify the route-map configuration of the VRF instance on the border leaf as follows:
1. Create a route map (Figure 80).
2. Define the external prefix as shared (Figure 81).


Figure 80. Defining a Route Map

Figure 81. Defining the External Prefix as Shared
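In the object model, the shared external prefix of Figure 81 corresponds to an l3extSubnet whose scope includes the shared flags. The exact combination of scope values below is an assumption based on the standard APIC object model, not a statement from this guide, and the helper function is hypothetical:

```python
def shared_external_prefix(prefix):
    """External prefix under the Layer 3 external EPG, marked as
    shared so that the route is leaked to the consuming tenant's VRF
    instance in addition to being used for traffic classification."""
    return {
        "l3extSubnet": {
            "attributes": {
                "ip": prefix,
                # Assumed flag combination: classify at ingress, leak the
                # route, and apply the security policy across VRFs.
                "scope": "import-security,shared-rtctrl,shared-security",
            }
        }
    }

print(shared_external_prefix("0.0.0.0/0"))
```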


Configuring a Shared L3Out Connection


You can configure a variety of options for an L3Out connection that is shared across multiple tenants. You can define the VRF instance and bridge domains in the Common tenant and use them from the other tenants. Starting from Cisco APIC Release 1.2(1), you can also configure an L3Out connection with a VRF instance and a bridge domain in any tenant and share the L3Out connection with other tenants, each with its own VRF instance and bridge domains.
Configuring the VRF Instance and Bridge Domains in the Common Tenant
In this configuration, you create the VRF instance and bridge domains in the Common tenant and create EPGs in
the individual tenants. Then you associate the EPGs with the bridge domains of the Common tenant. This
configuration can use static or dynamic routing.
The configuration in the Common tenant is as follows:
1. Configure a VRF instance (private network) under the Common tenant.
2. Configure an L3Out connection under the Common tenant and associate it with the VRF instance.
3. Configure the bridge domains and subnets under the Common tenant.
4. Associate the bridge domains with the VRF instance and L3Out connection.

The configuration in each tenant is as follows:


1. Under each tenant, configure EPGs and associate the EPGs with the bridge domain in the Common tenant.
2. Configure a contract and application profile under each tenant.

Figure 82 illustrates this concept.


Figure 82. Shared L3Out Connection Through the Common Tenant with a VRF Instance and Bridge Domains in the Common Tenant
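The tenant-side step of this design, associating a tenant EPG with a bridge domain defined in the Common tenant, can be sketched as the following REST payload. fvAEPg and fvRsBd are standard APIC classes; the helper function is hypothetical. The key point is that a named relation that does not resolve in the local tenant falls back to the Common tenant:

```python
def epg_bound_to_common_bd(epg_name, bd_name):
    """EPG created under an individual tenant, with its bridge-domain
    relation pointing at a name defined in the Common tenant. Named
    relations that do not resolve locally fall back to Common."""
    return {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": bd_name}}}
            ],
        }
    }

print(epg_bound_to_common_bd("EPG-A", "BD-Common"))
```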


This approach has the following advantages:

● The L3Out connection can be configured as dynamic or static.
● Each tenant has its own EPGs and contracts.

This approach has the following disadvantages:

● Each bridge domain and subnet is visible to all tenants.
● All tenants are under the same VRF instance.

Configuring the VRF Instance in the Common Tenant and Bridge Domains in Each Specific Tenant
In this configuration, you create a VRF instance in the Common tenant and create bridge domains and EPGs in the
individual tenants. Then you associate the bridge domain of each tenant with the VRF instance in the Common
tenant (Figure 83). This configuration can use static or dynamic routing.
Configure the Common tenant as follows:
1. Configure a VRF instance (private network) under the Common tenant.
2. Configure an L3Out connection under the Common tenant and associate it with the VRF instance.

Configure the individual tenants as follows:


1. Configure a bridge domain and subnet under each customer tenant.
2. Associate the bridge domain with the VRF instance in the Common tenant and the L3Out connection.
3. Under each tenant, configure EPGs and associate the EPGs with the bridge domain in the tenant itself.
4. Configure contracts and application profiles under each tenant.

Figure 83. Shared L3Out Connection with the VRF Instance in the Common Tenant

The advantage of this approach is that each tenant can see only its own bridge domain and subnet.
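A hedged sketch of the tenant-side bridge domain for this design follows; fvBD, fvRsCtx, and fvRsBDToOut are standard APIC classes, the helper function is hypothetical, and name resolution falls back to the Common tenant as in the previous design:

```python
def tenant_bd_with_common_vrf(bd_name, vrf_name, l3out_name):
    """Bridge domain created in the individual tenant whose VRF
    relation and L3Out relation both resolve (by name) to objects
    defined in the Common tenant."""
    return {
        "fvBD": {
            "attributes": {"name": bd_name},
            "children": [
                {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf_name}}},
                {"fvRsBDToOut": {"attributes": {"tnL3extOutName": l3out_name}}},
            ],
        }
    }

print(tenant_bd_with_common_vrf("BD-1", "VRF-Common", "L3Out-Common"))
```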
Sharing the L3Out Connection Across Multiple VRF Instances
In this configuration, you create a VRF instance and a bridge domain in a tenant (which could be the Common
tenant), and you create a VRF instance, a bridge domain, and EPGs in individual tenants (Figure 84). The tenant
that contains the L3Out connection is called, for instance, the Shared Services tenant.


You share the L3Out connection by creating a contract from the Shared Services tenant to individual tenants.
Routes are shared as follows:

● The border leaf learns external routes. The border leaf leaks routes to the individual tenant VRF instances through MP-BGP.
● The tenant subnets that are marked as advertised externally and shared are leaked to the Shared Services tenant VRF instance and advertised to external routers.

Configure the Shared Services tenant as follows:


1. Configure a VRF instance (private network) and a bridge domain under the tenant.
2. Configure an L3Out connection under the tenant and associate it with the VRF instance you configured in the preceding step.
3. Configure a global contract and export it to the tenants.
4. From the Layer 3 external EPG, provide the contract.
5. From the fabric inventory view, configure the route map in the VRF instance on the border leaf. Define the prefix list with the subnets (for instance, 0.0.0.0/0 aggregate to import all the routes) and define the subnets as shared.

Configure the individual tenants as follows:


1. Configure a VRF instance and a bridge domain under the customer tenant.
2. Under each tenant, configure EPGs and associate the EPGs with the bridge domain in the tenant itself.
3. Configure the subnets under the EPGs as shared and advertised externally.
4. Configure contracts and application profiles under each tenant.
5. Consume the global contract from the Shared Services tenant.

Figure 84. Sharing an L3Out Connection from a Tenant with Other Tenants

The main advantage of this approach is that each tenant has its own VRF instance and bridge domain for better
isolation.
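Step 5 for the individual tenants, consuming the contract exported from the Shared Services tenant, could look roughly like this in REST terms. Exporting a contract creates a contract interface (vzCPIf) in the destination tenant, which the EPG then consumes through fvRsConsIf; the helper function is illustrative only:

```python
def epg_consuming_exported_contract(epg_name, contract_interface):
    """EPG in the consuming tenant attached to the contract interface
    that was created when the Shared Services tenant exported its
    global contract."""
    return {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [
                {"fvRsConsIf": {"attributes": {"tnVzCPIfName": contract_interface}}}
            ],
        }
    }

print(epg_consuming_exported_contract("EPG-1", "shared-l3out"))
```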


Creating the Shared Services Tenant Using External Peering


To meet some design requirements, instead of using VRF leaking, you may want to configure shared services
through external connectivity to a router, or by using external firewalls, or by using 10- and 40-Gbps cables to
interconnect tenants.
In this case, the design consists simply of configuring multiple L3Out connections per tenant in which one of the
L3Out connections is used to route to the Shared Services tenant, as in Figure 85.
Figure 85. Using External Connectivity to Create a Shared Services Tenant

In Figure 85, each tenant has two L3Out connections, one of which connects to the Shared Services tenant of the same Cisco ACI fabric, either through an external router or through direct connectivity using 10- and 40-Gbps cables to interconnect tenants.
If you are using direct connectivity or a transparent firewall, for example, be sure to do the following:

● Create a Layer 3 domain.
● Map the Layer 3 domain to the interfaces.
● In the interface configuration, be sure that Miscabling Protocol (MCP), LLDP, and Cisco Discovery Protocol are disabled; otherwise, Cisco ACI will detect a loop and disable the ports.
● Change the MAC addresses of the Layer 3 interfaces on each tenant so that the two tenants have different MAC addresses.

Scalability Considerations
The scalability factors of a Cisco ACI fabric include the following:

● The number of leaf switches and ports supported (that is, tested); this value is mainly a control-plane factor
● The number of EPGs supported
● The number of contracts supported
● The number of VRF instances supported
● The number of L3Out connections supported


For each release, Cisco publishes the values of these factors that have been validated by quality assurance (QA) testing on the page that contains the configuration guide for that release.
The Cisco ACI versions released as of this writing are as follows:

● Release 1.0(1)
● Release 1.0(2)
● Release 1.0(3)
● Release 1.0(4)
● Release 1.1(1)
● Release 1.1(2)
● Release 1.1(3)

See http://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html#ReleaseNotes.
At the time of this writing, the verified scalability limits for Release 1.1(3f) are as follows:

● Number of leaf switches for a Layer 3 fabric: 200
● Number of endpoints: 180,000
● Number of EPGs: 15,000
● Number of contracts tested: 32,000 per leaf with ALEv2
● Number of VRF instances per leaf: 100
● Number of tenants: 3000 (which means 3000 VRF instances across the fabric)
● Number of L3Out connections per VRF instance: 3
● Number of Layer 3 external EPGs per L3Out connection: 16

For more information, refer to http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/verified-scalability/b_Verified_Scalability_Release_1_1_3f.html.

VLAN Consumption on the Leaf Switch


EPGs and bridge domains consume Layer 2 segment table resources on the forwarding engines of the leaf switches. The current-generation Cisco ACI leaf switches support up to 3500 Layer 2 segments per leaf.
Each EPG and each bridge domain requires one Layer 2 segment on the leaf. Therefore, the sum of the EPGs and bridge domains present on each leaf can't exceed 3500. This requirement applies to all types of EPGs: EPGs identified by VLAN ID, EPGs identified by VNID, and EPGs identified by IP address.
You can make design choices to increase the number of EPGs supported at the leaf level and at the fabric level.
For example, by associating multiple EPGs with the same bridge domain, you can increase the EPG scale at the
leaf and fabric levels. In addition, with VMM integration, EPG and bridge domain constructs are instantiated on a
given leaf only when a workload attached to the leaf is in that EPG or bridge domain.
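The per-leaf arithmetic above is simple enough to sanity-check in a design script. The helper below is only an illustration of the rule, not an APIC API, and the 3500-segment limit is the per-leaf value stated in this section:

```python
def leaf_segment_headroom(epgs_on_leaf, bds_on_leaf, limit=3500):
    """Each EPG and each bridge domain present on a leaf consumes one
    Layer 2 segment; a negative result means the design exceeds the
    per-leaf limit."""
    return limit - (epgs_on_leaf + bds_on_leaf)

print(leaf_segment_headroom(3000, 400))  # 100 segments of headroom left
```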


Scalability with Hardware Proxy


Cisco ACI can support a much greater number of endpoints at the fabric level than the number supported at the
leaf level. Cisco ACI leaf switches update one spine switch with the information about local endpoints. Upon
receiving the updates for endpoints, spine switches synchronize the endpoints with those of other spine switches.
With this approach, the tables on each leaf need to contain the following information:

● The local station table (LST) needs to contain only the IP and MAC addresses of locally attached hosts.
● The global station table (GST) needs to contain only the IP and MAC addresses of remote hosts with which the local hosts have an active conversation.

For new flows and conversations, and for entries that have aged out, the spine proxy is used as an inline mapping database to provide location services for the remote leaf. With this approach, even if entries in the GST age out, the fabric never needs to be flooded.

Scalability with Layer 2 Forwarding Bridge Domain


When the fabric operates in hardware-proxy mode, use of the forwarding tables is optimized through the mapping
database on the spine switches. This is the recommended configuration.
If the fabric has silent hosts and if it operates at Layer 2, there is no way to probe the entries that have expired,
because the only address that is stored is the MAC address, and the MAC address is removed from the forwarding
tables when the idle timer reaches the age-out time specified for the bridge domain. As a result, to prevent traffic
from being lost in the presence of silent hosts, you should not use the hardware-proxy feature.
The scalability calculations in this case must follow logic similar to that used for traditional Layer 2 switch
infrastructure.
If the fabric has every bridge domain enabled on every leaf, the fabric-level endpoint scale will equal the size of the GST. You can increase scalability by localizing bridge domains to a subset of leaf switches; in this case, the fabric-level endpoint scale can be much greater than the size of the GST on a single leaf switch.

Policy CAM Consumption


The use of contracts and filters consumes space in the policy CAM. The size of the policy CAM depends on the
Cisco Nexus 9000 Series Switch you are using and the model of the expansion module, but you can optimize the
consumption of the entries in this table.
You need to distinguish the use of contracts in two different scenarios:

● Contracts between an L3Out connection (Layer 3 external EPG) and an internal EPG: Use Cisco ACI Release 1.2 to optimize bottlenecks on the border-leaf TCAM by distributing the processing on the computing leaf switches.
● Contracts between EPGs: Consider using the vzAny feature when applicable.


Policy CAM Consumption with Contracts Between Layer 3 External EPGs and Internal EPGs
The forwarding and policy filtering described at the beginning of this document is based on Cisco ACI prior to
Release 1.2.
The main limitation in the way that the software programs the fabric is that the border leaf becomes the scalability
bottleneck.
This bottleneck doesn't exist for EPG-to-EPG contracts, because the computing leaf switches install entries in the TCAM only if there are locally connected endpoints. Therefore, the scalability for EPG-to-EPG contracts is well optimized, because not all leaf switches are connected to the endpoints of all EPGs.
In the case of the border leaf switch, almost all EPGs present in the fabric may have a contract with the Layer 3 external EPG. Because the filtering occurs on the border leaf, the border leaf has to store the filtering entries for almost all EPGs present in the fabric. As a result, when using a release prior to 1.2, the contracts on the border leaf become a bottleneck limiting the scalability of the policy CAM.
Policy CAM Scalability Improvements Starting from Cisco ACI Release 1.2
If you are experiencing policy CAM scalability problems with a release prior to Release 1.2, the policy CAM on the
border leaf switch is likely the bottleneck, and upgrading to Release 1.2 may resolve the problem.
Cisco ACI Release 1.2 introduces a different approach, in which the border leaf does not apply filtering. Instead, the filtering always occurs on the computing leaf switch: that is, on a leaf to which servers are connected. Figures 86 and 87 show the change in TCAM programming introduced by Cisco ACI Release 1.2.
You can specify the point at which the policy is applied in the fabric as part of the L3Out configuration by choosing
Ingress for the Policy Enforcement Direction option, as shown in Figure 88.
Figure 86. Contract Between Internal EPG and Layer 3 External EPG: Policy CAM Filters Traffic at Ingress


Figure 87. Contract Between Layer 3 External EPG and Internal EPG: Policy CAM Filters Traffic at Egress on the Computing Leaf

Figure 88. Configuring Policy Enforcement Direction
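The GUI exposes this setting as part of the L3Out configuration; in the object model it is commonly represented by the pcEnfDir attribute of the VRF (fvCtx). Treat that attribute mapping as an assumption to verify against your release; the helper function is illustrative:

```python
def vrf_with_ingress_enforcement(vrf_name):
    """VRF instance configured so that policy is enforced at ingress
    on the computing leaf switches rather than on the border leaf."""
    return {
        "fvCtx": {
            # pcEnfDir is the assumed attribute name for the
            # Policy Enforcement Direction option shown in Figure 88.
            "attributes": {"name": vrf_name, "pcEnfDir": "ingress"}
        }
    }

print(vrf_with_ingress_enforcement("VRF-1"))
```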


vzAny
As described in other sections, Cisco ACI provides an abstraction that represents all the EPGs associated with the
same VRF. This group of EPGs is called vzAny. By using this construct, you can group together all the filter entries
that are common across multiple contracts and build a single contract that is associated with vzAny.
The filter entries that are specific to each EPG should be kept in the individual contracts for the EPGs. Whether two
EPGs can talk is the result of a lookup in the contract associated with the vzAny group and the contract associated
with a specific EPG.
This approach reduces the use of the policy CAM because common entries across multiple EPGs need to be
programmed only once.
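A sketch of the vzAny configuration in REST terms follows. vzAny and vzRsAnyToProv are standard APIC classes (vzAny lives under the VRF instance); the helper function is illustrative only:

```python
def vzany_provided_contract(contract_name):
    """vzAny object under a VRF instance, providing one contract that
    carries the filter entries common to all EPGs in the VRF. Entries
    specific to an EPG stay in that EPG's own contracts."""
    return {
        "vzAny": {
            "attributes": {},
            "children": [
                {"vzRsAnyToProv": {"attributes": {"tnVzBrCPName": contract_name}}}
            ],
        }
    }

print(vzany_provided_contract("common-filters"))
```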

Capacity Dashboard
The Cisco ACI GUI includes the Capacity Dashboard, a tool for monitoring the current use of fabric resources (Figure 89). With this tool, you can see the total number of endpoints, bridge domains, VRF instances, and EPGs in the fabric compared to the maximums, as well as the consumption of resources per leaf.
Figure 89. Capacity Dashboard

Configuring In-Band and Out-of-Band Management


Two tenants are created by default in Cisco ACI for management purposes:

● Infrastructure: This tenant is used for TEP-to-TEP (or leaf-to-leaf) traffic within the fabric and for bootstrap protocols within the fabric.
● Management: This tenant is used for management connectivity between the APICs and switch nodes, as well as for connectivity to other management systems (authentication, authorization, and accounting [AAA] servers, vCenter, etc.).

The infrastructure tenant is preconfigured for the fabric infrastructure, including the VRF instance and bridge
domain used for the fabric VXLAN overlay. The infrastructure tenant can be used to extend the fabric infrastructure
to outside systems that support overlay protocols such as VXLAN. Administrators are strongly advised against
modifying the infrastructure tenant. This tenant is not used for general management functions.


The management tenant is used for general in-band and OOB management of Cisco ACI. The management tenant has a default VRF instance and private network named inb (for in-band). A single bridge domain, also named inb, is created by default.
Cisco ACI switch nodes have:

● In-band access to the management tenant
● An onboard management port connected to the OOB management network

When using the Basic GUI, you don't need to configure the management tenant. You can configure management access through the System > In Band & Out Of Band configuration view.

Configuring an Out-of-Band Management Network


APIC controllers and spine and leaf nodes have dedicated OOB management and console ports.
To configure OOB management for the leaf devices, choose System > In Band & Out Of Band and add the
addresses to each node together with the default gateway information, as in Figure 90.
Figure 90. Creating an OOB Management Address for mgmt0 on Leaf 1 (Node 101)
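The address assignment in Figure 90 maps to a static node-management address object. The class and attribute names below (mgmtRsOoBStNode, tDn, addr, gw) follow the standard APIC object model but should be verified against your release; the helper function and the addresses are illustrative:

```python
def oob_static_node_address(pod, node_id, addr, gateway):
    """Static out-of-band address assignment for a switch node's
    mgmt0 port, under the default out-of-band EPG of the mgmt
    tenant."""
    return {
        "mgmtRsOoBStNode": {
            "attributes": {
                "tDn": f"topology/pod-{pod}/node-{node_id}",
                "addr": addr,  # e.g. "192.168.1.101/24"
                "gw": gateway,
            }
        }
    }

print(oob_static_node_address(1, 101, "192.168.1.101/24", "192.168.1.1"))
```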

Configuring an In-Band Management Network


In Cisco ACI, you can configure in-band management to connect to multiple leaf switches and spine switches from
any port in the fabric. For this purpose, choose System > In Band & Out Of Band and configure the IP addresses of
leaf switches and spine switches on the in-band management network, as shown in Figure 91.


Figure 91. IP Addressing for In-Band Management

You then need to specify which ports of the fabric connect to the outside network and with which VLAN, as shown
in Figure 92.
You can also define the ACL to filter access to the in-band management network by specifying which subnets are
allowed (which management stations can connect) and which ports are enabled.
Figure 92. Defining Connectivity of the In-Band Management Network to the Outside


Best Practices Summary


This section summarizes some of the best practices presented in this document and provides a checklist you can
use to verify configuration settings before deploying a Cisco ACI fabric:

● Choose an infrastructure VLAN that does not overlap with the range of reserved VLANs used by other network devices connected to the Cisco ACI fabric.
● Make sure that the TEP address space used by the fabric doesn't overlap with the external IP address space.
● Configure MP-BGP on the spine switches and assign the ASN that you want to use.
● If you need to configure objects to be used by multiple tenants, configure them in the Common tenant.
● Configure the external EPGs with IP ranges that do not overlap with the tenants' address space.
● When creating a bridge domain, be sure to associate the bridge domain with a VRF instance even if you intend to use the bridge domain only for Layer 2 switching.
● For IP traffic, use bridge domains configured for optimized forwarding: that is, enable hardware-proxy mode and do not use ARP flooding if at all possible. This provides the best scalability for the fabric. Select the Enforce Subnet Check for IP Learning option.
● For Layer 2 forwarding in the presence of silent hosts, configure the bridge domain to flood unknown unicast packets and to flood ARP packets.
● When creating an EPG, be sure to associate the EPG with the bridge domain.
● If you plan to configure shared services using route leaking between tenants and VRF instances, configure the subnet IP address under the EPG.
● When associating EPGs with bare-metal servers, use access (802.1p) as the access-port option.
● Map the VLAN domains or the VMM domains to the fabric physical interfaces to which you then associate the EPGs for bare-metal or virtualized servers, respectively.
● If you are using Cisco ACI Release 1.2, be sure to enable the distribution of policy for EPG-to-outside traffic at the computing leaf switches instead of at the border leaf switch by choosing Ingress for the Policy Enforcement Direction option on the L3Out connection.
● Use only one contract between any two EPGs and add filter entries as needed.
● Do not change the default subject configurations of Apply Both Directions and Reverse Filter Ports.
● Use vzAny when possible to reduce the number of contracts.
● To announce the tenant networks to the outside, configure the subnet under the bridge domain and the EPG and mark it as advertised externally (or as public). Also:
  ◦ Make sure that the bridge domains have a relationship with their respective L3Out connections.
  ◦ Make sure that the Layer 3 external EPG has a contract with the EPGs that need to communicate with the outside.
  ◦ Make sure that the route map is configured on the VRF instance.


For More Information


For more information, please refer to http://www.cisco.com/go/aci.

Printed in USA


C07-736077-00

12/15
