Cisco Unified
Communications
Nutanix Best Practices
Copyright
Copyright 2016 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
Virtualizing Cisco Unified Communications
Contents
1. Executive Summary
2. Introduction
    2.1. Audience
    2.2. Purpose
9. Conclusion
Appendix
    Best Practices Checklist
    References
    Meet the Author
    About Nutanix
List of Figures
List of Tables
1. Executive Summary
Nutanix delivers a highly resilient converged compute and storage platform that brings the
benefits of web-scale infrastructure to organizations of all sizes. Designed to support
virtualized environments including VMware vSphere, Microsoft Hyper-V, and AHV, Nutanix is the
ideal infrastructure for running all types of virtual workloads, including real-time technologies such
as Cisco Unified Communications (UC).
UC administrators can easily provision the full Unified Communications suite on Nutanix,
providing call control, voice messaging, E911, and Cisco Jabber with minimal datacenter
footprint.
Nutanix provides the functionality business and IT administrators need to deliver fast
performance, easy scalability, simple management, fast provisioning, and high availability for
Cisco UC environments, including:
• Convergence of compute and storage into a single appliance.
• A highly distributed software architecture that eliminates the bottlenecks and complexities found in traditional SAN and NAS storage platforms when scaling to a large number of Cisco UC workloads.
• Data and I/O localization: recently accessed VM data is stored locally on flash for fast access.
• Full support for VAAI, allowing you to leverage all the latest advances from VMware and take your solution to the next level.
• Nondisruptive scaling: autodiscovery and simple addition of new nodes to the Nutanix cluster enable a cost-effective pay-as-you-grow model.
• Reduced storage requirements due to capacity-optimization technologies such as compression and deduplication.
• Data protection with Nutanix replication, VM-caliber snapshots, and disaster recovery, all managed transparently through the Nutanix Prism web interface or API.
• No-downtime, one-click upgrades of the Acropolis Operating System (AOS) storage software, firmware, and hypervisor of choice.
This document shows the Nutanix recommended configuration for successfully implementing
Cisco Unified Communications on Nutanix. The document also provides sizing guidance and
scalability options.
2. Introduction
2.1. Audience
This best practice guide is intended for Nutanix and Cisco Unified Communications
administrators and solutions architects who are responsible for planning, designing, and
deploying UC infrastructures in either on-premises or UC-as-a-Service environments running on
the Nutanix platform.
Some portions of this document reference tools and websites that are necessary for the
successful deployment of Cisco UC but that are only available to Cisco partners. As such, we
assume a Cisco partner's involvement in UC design and deployment.
2.2. Purpose
This best practice guide acts as a reference when making design decisions for a Cisco Unified
Communications deployment on the Nutanix infrastructure. The primary focus is on core Cisco
Unified Communications applications.
This document contains:
• An overview of the Nutanix platform and the benefits of using web-scale infrastructure for Unified Communications.
• An overview of the core Cisco Unified Communications components.
• Sizing and placement guidelines for deploying UC VMs on Nutanix.
• Best practices for Nutanix configuration for UC environments.
3.4. AHV
Nutanix ships with a hardened, enterprise-ready hypervisor based on proven open source
technology. AHV is managed with the Prism interface, a robust REST API, and an interactive
command-line interface called aCLI (Acropolis CLI). These tools combine to eliminate the
management complexity typically associated with open source environments and allow out-of-
the-box virtualization on Nutanix, all without the licensing fees associated with other hypervisors.
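As a hedged illustration of the REST interface mentioned above, the sketch below builds (but does not send) a request against the Prism VM list endpoint. The v2.0 path and port 9440 are assumptions based on common Prism deployments, not taken from this document; verify both against your cluster's REST API Explorer before use.

```python
import base64
import urllib.request

def build_vm_list_request(cluster_ip: str, user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a GET against the Prism VM list endpoint.
    The URL path and port are assumptions; check your Prism REST API Explorer."""
    url = f"https://{cluster_ip}:9440/PrismGateway/services/rest/v2.0/vms"
    # Prism uses HTTP basic authentication for API access.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

req = build_vm_list_request("10.0.0.10", "admin", "secret")
print(req.full_url)
```

The same pattern works for any other Prism endpoint; only the path changes.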
The Distributed Storage Fabric (DSF) combines flash for high performance and low latency with HDDs for affordable capacity. The file system automatically
tiers data across different types of storage devices using intelligent data placement algorithms.
These algorithms make sure that the most frequently used data is available in memory or in flash
for the fastest possible performance.
With the DSF, a CVM writes data to local flash memory for fast acknowledgment; the CVM also
handles read operations locally for reduced latency and fast data delivery.
The figure below shows an overview of the Nutanix architecture, including the hypervisor of your
choice (AHV, ESXi, or Hyper-V), user VMs, the Nutanix storage CVM, and its local disk devices.
Each CVM connects directly to the local storage controller and its associated disks. Using local
storage controllers on each host localizes access to data through the DSF, thereby reducing
storage I/O latency. The DSF replicates writes synchronously to at least one other Nutanix node
in the system, distributing data throughout the cluster for resiliency and availability. Replication
factor 2 (RF2) creates two identical data copies in the cluster, and replication factor 3 (RF3)
creates three identical data copies. Having a local storage controller on each node ensures that
storage performance as well as storage capacity increase linearly with node addition.
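As a back-of-the-envelope illustration of what RF2 and RF3 mean for capacity (a sketch only; real usable capacity also depends on CVM overhead, metadata, and capacity-optimization savings, which are ignored here):

```python
def logical_capacity_tb(raw_tb: float, replication_factor: int) -> float:
    """Every write is stored replication_factor times across the cluster,
    so logical capacity is roughly raw capacity divided by RF."""
    return raw_tb / replication_factor

# Hypothetical 4-node cluster with 8 TB of raw capacity per node:
raw = 4 * 8.0
print(logical_capacity_tb(raw, 2))            # RF2 -> 16.0 TB logical
print(round(logical_capacity_tb(raw, 3), 1))  # RF3 -> 10.7 TB logical
```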
Local storage for each Nutanix node in the architecture appears to the hypervisor as one large
pool of shared storage. This allows the DSF to support all key virtualization features. Data
localization maintains performance and quality of service (QoS) on each host, minimizing the
effect noisy VMs have on their neighbors' performance. This functionality allows for large,
mixed-workload clusters that are more efficient and more resilient to failure when compared to
traditional architectures with standalone, shared, and dual-controller storage arrays.
When VMs move from one hypervisor host to another, such as during live migration or a high
availability event, the now-local CVM serves the newly migrated VM's data. When the VM reads old data
(stored on the now-remote CVM), the local CVM forwards the I/O request to the remote CVM. All
write I/O occurs locally. The DSF detects that I/O is occurring from a different node and migrates
the data to the local node in the background, allowing all read I/O to be served locally as well.
The data only migrates when there have been enough reads and writes from the remote node to
minimize network utilization.
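The migrate-only-when-worthwhile behavior described above can be expressed as purely illustrative pseudologic. This is not Nutanix's actual algorithm, and the threshold value is invented for the example; it only captures the idea that data moves once remote access is frequent enough to justify the network cost.

```python
REMOTE_READ_THRESHOLD = 100  # hypothetical tuning value, not a real Nutanix setting

def should_localize(remote_reads: int, threshold: int = REMOTE_READ_THRESHOLD) -> bool:
    """Writes always land locally; reads are forwarded to the remote CVM
    until the access count justifies moving the data to the local node."""
    return remote_reads >= threshold

print(should_localize(5))    # cold data stays remote
print(should_localize(250))  # hot data migrates
```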
The next figure shows how data follows the VM as it moves between hypervisor nodes.
The figure above shows the basic components of a UC deployment. This overview highlights
virtual applications central to providing service to end users and clients. We've included required
physical items such as the Cisco IOS voice gateway, laptops for Jabber clients, and IP phones
to illustrate how the virtual server infrastructure fits into the overall UC system. Each UC virtual
server application is implemented using a number of VMs.
Additional UC components, such as Cisco Unified Contact Center for call center and help desk
agents, Video Communications Server for telepresence, and WebEx, can be virtualized following
similar methods, but we do not discuss them directly in this guide.
The figure above shows a geographically distributed cluster, with servers residing in New York
and Boston forming a single CUCM cluster. This distributed CUCM cluster provides call control
service to phones in New York, New Jersey, and Boston.
CUCM, like many Cisco UC applications, is deployed using an Open Virtualization Archive (OVA)
template and can be configured with different CPU, RAM, and disk sizes, as needed. The OVA
file can be imported into vSphere and contains the definition of all attributes required to create
the VM. CUCM servers that make up a cluster vary in size, depending on the number of phones
they support. The user selects the server size in the VMware OVA deployment dropdown during
vSphere import. Consult the Cisco DocWiki for a breakdown of the available CUCM OVA options.
Performance of the CUCM server is heavily focused on CPU and RAM, as all call processing is
performed in memory. During normal operation, storage is only utilized for writing out logs and
call records.
In the figure above, an active/active CUC cluster in New York provides voicemail service to all
phones in New York and New Jersey. A separate active/active CUC cluster provides service to
phones in Boston.
CUC is heavily focused on processing power to perform voice transcoding and speech
recognition. In addition, there are large storage requirements to read and write voice messages
in real time on both servers in a pair. More information about the CUC OVA sizes can be found in
the Cisco DocWiki.
The figure above depicts a single CUCM cluster with two IM&P subclusters. The New York
subcluster provides service to Jabber clients in New York and New Jersey, while the Boston
subcluster is dedicated to Boston clients.
The IM&P server primarily utilizes CPU and RAM for real-time routing of instant messaging.
Storage utilization during normal operation focuses on writing logs.
calls. UC endpoints can exist on laptops, desktops, IP phones, and wireless handsets. As
devices move from one location to another, the CER server is responsible for tracking the current
location. During an emergency call the system invokes the CER server to send the calling
device's current location to the PSAP operator.
Two CER servers form a CER Server Group for high availability; multiple Server Groups form
a CER Cluster. Information on the UC application OVA sizes for Emergency Responder can be
found in the Cisco DocWiki.
The figure above shows a sample CER Server Group split between New York and Boston. The
CER Publisher server provides service to all phones, and the Subscriber only takes over if the
Publisher is offline.
The figure above shows an example of an E911 call made from New York Phone C. CER
identifies the location of the call and passes this information to the PSAP operator.
CER primarily uses CPU and RAM to perform call routing and location lookups in real time,
as well as scheduled phone location discovery. Storage utilization consists of emergency call
logging and system log files.
IM&P
  VMs per IM&P subcluster: 1–2
  Subclusters per CUCM cluster: 1–3
  Users per subcluster: 1–15,000
  Users per cluster: 45,000
  Maximum clusters: unlimited (1 per CUCM cluster)
  Resource profile: heavy CPU and RAM use; light storage use
PLM
  1 PLM VM per license domain
  Resource profile: light CPU, RAM, and storage use
entire Nutanix cluster. VM-centric to the core, Nutanix is the ideal compute and storage platform
for highly critical Cisco Unified Communications VMs.
General guidelines for Cisco UC sizing can be found in the Solution Reference Design Guide for
Collaboration. For a more exact calculation that takes into account expected call volume, use the
Cisco Collaboration Sizing Tool available to Cisco partners.
Each UC application is listed with the number of VMs required and the OVA template size.
For convenience, the table lists the actual size of the OVA in vCPUs, RAM, and storage. This
information may change in future versions of Cisco UC products. To find the exact specifications
for each product and each OVA size, refer to the DocWiki Virtualization page.
The OVAs delivered by Cisco for the purposes of Unified Communications are fixed and cannot
be modified, or UC application installation may fail. The purpose of the OVA is to ensure the best
possible end user experience by enforcing rules regarding resource reservation, disk sizes, and
oversubscription (these rules can be found in the Cisco Virtualization Sizing Guidelines page). An
added benefit is that the OVA greatly simplifies creating VMs for Cisco UC.
The following section shows two sample layouts of Unified Communications design on Nutanix:
one for 1,000 users and the other for 30,000.
Application (OVA Size)    VMs   vCPU/VM   RAM (GB)/VM   Disk (GB)/VM   Total vCPU   Total RAM (GB)   Total Disk (GB)
CUCM (2,500 users)        2     1         4             80             2            8                160
IM&P (1,000 users)        2     1         2             80             2            4                160
CUC (1,000 users)         2     2         4             160            4            8                320
CER (1,000 users)         2     1         4             80             2            8                160
PLM                       1     1         4             50             1            4                50
Total                                                                  11           32               850
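The totals can be sanity-checked directly from the per-VM figures above (a quick sketch; the per-VM numbers are taken from the table, not from Cisco's sizing tools):

```python
# app: (vm_count, vcpu_per_vm, ram_gb_per_vm, disk_gb_per_vm)
apps = {
    "CUCM": (2, 1, 4, 80),
    "IM&P": (2, 1, 2, 80),
    "CUC":  (2, 2, 4, 160),
    "CER":  (2, 1, 4, 80),
    "PLM":  (1, 1, 4, 50),
}

total_vcpu = sum(n * c for n, c, _, _ in apps.values())
total_ram  = sum(n * r for n, _, r, _ in apps.values())
total_disk = sum(n * d for n, _, _, d in apps.values())
print(total_vcpu, total_ram, total_disk)  # 11 32 850
```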
Using the sizing information in the table above, we can lay out the VMs on individual nodes in the
Nutanix system.
We've selected the 16-core Nutanix NX-3350 as a deployment example. This model allows ample
room for the UC applications, using an N+1 hypervisor strategy in case of Nutanix node
hardware failure. We only show three nodes below, but generally recommend four nodes in any
configuration for higher availability. The fourth node also allows more non-UC workloads to be
deployed in the configuration.
It is possible to use the four unreserved vCPUs for non-Cisco UC workloads. When access to the
DSF is not IOPS intensive, you can use these spare CPU cores. All free, unused resources are
shaded.
Figure 11: 1,000 User VM Placement with Cores Reserved and VMs Not Pinned
Application (OVA Size)    VMs   vCPU/VM   RAM (GB)/VM   Disk (GB)/VM   Total vCPU   Total RAM (GB)   Total Disk (GB)
CUCM (7,500 users)        11    2         6             110            22           66               1,210
IM&P (15,000 users)       4     4         8             2x 80          16           32               640
CUC (20,000 users)        4     8         8             2x 300         32           32               2,400
CER (30,000 users)        2     2         6             2x 80          4            12               320
PLM                       1     1         4             50             1            4                50
Total                                                                  75           146              4,620
In this example we have chosen two blocks containing a total of seven 24-core nodes for maximum VM
density. Other hardware models and core densities are available. N+1 hardware redundancy is
achieved with an additional node to tolerate hardware failure. Critical UC application roles are
separated manually between Nutanix nodes as required.
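A first-fit-decreasing sketch shows that the 75 vCPUs of demand fit on the 24-core nodes with one node held empty for N+1. The VM counts and per-VM vCPU figures come from the table above; the 4-vCPU CVM reservation is an assumed value that varies by model and configuration. Real placement must also separate application cluster peers (for example, the two servers of a CER Server Group) onto different nodes, which this sketch ignores.

```python
CORES_PER_NODE = 24
CVM_VCPUS = 4             # assumption; check your platform's CVM sizing
NODES = 7
USABLE_NODES = NODES - 1  # reserve one node for N+1 failover

vms = ["CUCM"] * 11 + ["IM&P"] * 4 + ["CUC"] * 4 + ["CER"] * 2 + ["PLM"]
vcpus = {"CUCM": 2, "IM&P": 4, "CUC": 8, "CER": 2, "PLM": 1}

nodes = [CORES_PER_NODE - CVM_VCPUS] * USABLE_NODES  # free cores per node
placement = []
for vm in sorted(vms, key=lambda v: -vcpus[v]):      # first-fit decreasing
    for i, free in enumerate(nodes):
        if vcpus[vm] <= free:
            nodes[i] -= vcpus[vm]
            placement.append((vm, i))
            break
    else:
        raise RuntimeError(f"no room for {vm}")

print(len(placement), "VMs placed; free cores per node:", nodes)
```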
The NX-3000 can be mixed with other node types to form a single Nutanix cluster and thereby
maximize performance and operational efficiency. This allows users to add and balance storage
and compute capacity in a Nutanix cluster to suit their particular needs.
Cluster Size     3–32 nodes per cluster, one or more clusters per site
Storage Pool     Single storage pool with all SSD and HDD devices
Container        Single container with redundancy factor of 2; used to host UC virtual disk files (VMDKs)
8.4. Networking
Nutanix recommends a leaf-spine network architecture, which is designed for true linear scaling.
A leaf-spine architecture consists of two network tiers: an L2 leaf and an L3 spine based on 10 GbE or
40 GbE nonblocking switches. This architecture maintains consistent performance without
any throughput reduction because there is a static maximum of three hops from any node in the network.
The figure below shows a scale-out leaf-spine network architecture design. This architecture
provides 20 Gb active throughput from each node to its L2 leaf, and scalable 80 Gb active
throughput from each leaf to spine switch, providing scale from one Nutanix block to thousands
without any impact to available bandwidth.
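The bandwidth figures above lend themselves to a quick oversubscription check: each node presents 20 Gb of active throughput to its leaf, and each leaf has 80 Gb of active throughput toward the spine. The node counts in the example calls below are hypothetical.

```python
def leaf_oversubscription(nodes_per_leaf: int,
                          node_gbps: float = 20.0,
                          uplink_gbps: float = 80.0) -> float:
    """Ratio of host-facing bandwidth to spine-facing bandwidth on one leaf;
    1.0 or less means the leaf is nonblocking for leaf-to-spine traffic."""
    return (nodes_per_leaf * node_gbps) / uplink_gbps

print(leaf_oversubscription(4))   # 1.0 -> nonblocking
print(leaf_oversubscription(16))  # 4.0 -> 4:1 oversubscribed
```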
Each node's 10 GbE NICs are attached to separate top-of-rack switches; the type of vSphere
switch determines the load balancing and NIC teaming options. Nutanix recommends using
Route based on originating virtual port with the vSphere Standard Switch and Route based on
physical NIC load with the vSphere Distributed Switch. No advanced switching configuration,
such as Link Aggregation Control Protocol (LACP), Cisco EtherChannel, or HP teaming, is
required for the Nutanix node.
Additional Nutanix networking best practices for VMware vSphere can be found in the Tech Note
VMware vSphere Networking on Nutanix.
Nutanix recommends always using the 10 Gbps network interfaces for the Nutanix Controller
VM, hypervisor, and guest VM network connectivity. However, the 1 Gbps network infrastructure
is supported for a maximum of eight Nutanix nodes and can be used if no 10 Gbps network
infrastructure is available.
geographic or local redundancy. Administrators can perform manual recovery to start failed VMs
on new hardware if necessary.
The DSF provides the ability to survive disk failure without impact to running VMs, as all data is
written redundantly to peer nodes. Complete node failure would only interrupt service to VMs
running on that node; these VMs can be restarted on another node or on the original node once it
is running again.
CPU Oversubscription
CPU oversubscription is not allowed when implementing virtualized Cisco UC. Plan for a 1:1
mapping of vCPUs to physical cores and deploy UC VMs using the Cisco supplied OVA. CPU
reservations are created when the OVA is deployed. Hyperthreading should be enabled, but
mapping of vCPUs must be performed based on the number of physical cores.
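The 1:1 rule can be sketched as a simple placement check, deliberately counting physical cores rather than hyperthreaded logical processors. The 4-vCPU CVM reservation used here is an assumed value, not a figure from this document.

```python
def fits_without_oversubscription(vm_vcpus, physical_cores, cvm_vcpus=4):
    """True if every vCPU can map to a dedicated physical core after the
    Nutanix CVM reservation (cvm_vcpus is an assumed value)."""
    return sum(vm_vcpus) + cvm_vcpus <= physical_cores

# Two 2-vCPU CUCM VMs and one 8-vCPU CUC VM on a 16-core node:
print(fits_without_oversubscription([2, 2, 8], physical_cores=16))     # fits
print(fits_without_oversubscription([2, 2, 8, 8], physical_cores=16))  # does not fit
```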
Memory Oversubscription
Memory must be assigned on a dedicated basis to Cisco UC VMs and cannot be oversubscribed.
This is easy to ensure on Nutanix by deploying the Cisco provided OVA and planning VM layout
accordingly. Memory reservation is performed when the OVA is deployed.
Disk Oversubscription
Thin provisioning can be used as long as disk space is available when a VM requires it. By
design, Unified Communications servers eventually use all available disk space for logging and
other functions. The Cisco recommended best practice is to use thick provisioning lazy zeroed to
ensure smooth operation of critical UC VMs. This precaution is taken to avoid running out of disk
space when a critical UC application requires it. Thin provisioning is an acceptable method when
UC VMs are guaranteed to never run out of disk space during operation.
Use these numbers as a guideline for placing UC applications on Nutanix nodes to balance
the workload over multiple Nutanix nodes. In general, most Cisco UC applications are not disk
intensive.
Note: CPU core requirements primarily drive UC VM placement, rather than storage
or memory requirements.
9. Conclusion
Cisco Unified Communications can be successfully deployed on the Nutanix platform, delivering
high availability and true linear scalability. The Nutanix platform eliminates traditional SAN and
NAS complexity while providing a highly resilient storage and compute infrastructure with a small
datacenter footprint and full Cisco UC support.
Cisco's adoption of virtualization for Unified Communications provides an amazing opportunity
for UC administrators. The synergy between virtualized UC and Nutanix, an infrastructure
platform purpose-built for virtualization, can be leveraged to build the best enterprise-class
communications tools with the lowest administrative overhead.
Nutanix is the optimal compute and storage platform for critical real-time UC applications,
allowing Unified Communications cluster scaling as needed during a deployment, instead of
building the entire infrastructure up front for only a small number of users. This flexibility makes
Nutanix a perfect fit for on-premises, private-cloud, or UC-as-a-Service deployment models
where the ability to scale over time is crucial.
For Nutanix or UC on Nutanix questions please use our Nutanix Next online community:
next.nutanix.com.
Appendix
References
1. CUCM OVA Sizes
2. CUC OVA Sizes
3. IM&P OVA Sizes
4. CER OVA Sizes
5. All OVA Sizes
6. Cisco Virtualization Sizing Guidelines
7. UC Virtualization Storage System Guidelines
8. Supported VMware Features for Cisco UC
9. Cisco Coresidency Guidelines
About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix enterprise cloud platform leverages web-scale engineering
and consumer-grade design to natively converge compute, virtualization, and storage into
a resilient, software-defined solution with rich machine intelligence. The result is predictable
performance, cloud-like infrastructure consumption, robust security, and seamless application
mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us
on Twitter @nutanix.
List of Figures
Figure 1: Nutanix Enterprise Cloud Platform
Figure 11: 1,000 User VM Placement with Cores Reserved and VMs Not Pinned
List of Tables
Table 1: Document Version History