
AHV Administration Guide

Acropolis 5.1
10-Nov-2017
Contents

1: Virtualization Management
    Storage Overview
    Virtualization Management Web Console Interface

2: Node Management
    Controller VM Access
        Admin Access to Controller VM
    Shutting Down a Node in a Cluster (AHV)
    Starting a Node in a Cluster (AHV)
    Changing CVM Memory Configuration (AHV)
    Changing the Acropolis Host Name
    Changing the Acropolis Host Password
    Nonconfigurable AHV Components

3: Controller VM Memory Configurations
    CVM Memory and vCPU Configurations (G5/Broadwell)
        Platform Workload Translation (G5/Broadwell)
    CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge)
    CVM Memory Configurations for Features

4: Host Network Management
    Prerequisites for Configuring Networking
    AHV Networking Recommendations
    Layer 2 Network Management with Open vSwitch
        About Open vSwitch
        Default Factory Configuration
        Viewing the Network Configuration
        Creating an Open vSwitch Bridge
        Configuring an Open vSwitch Bond with Desired Interfaces
        Virtual Network Segmentation with VLANs
    Changing the IP Address of an Acropolis Host

5: Virtual Machine Management
    VM Management
    Supported Guest VM Types for AHV
    Virtual Machine Network Management
        Configuring 1 GbE Connectivity for Guest VMs
        Configuring a Virtual NIC to Operate in Access or Trunk Mode
    Virtual Machine Memory and CPU Configurations
        Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)
    GPU Pass-Through for Guest VMs
    Windows VM Provisioning
        Nutanix VirtIO for Windows
        Installing Windows on a VM
        Configuring a Windows VM to use Unified Extensible Firmware Interface (UEFI)
    VM Import
    Uploading Files to DSF for Microsoft Windows Users

6: Event Notifications
    Generated Events
    Creating a Webhook
    Listing Webhooks
    Updating a Webhook
    Deleting a Webhook
    Notification Format

7: Integration with Network Functions
    Network Function Architecture
    Types of Network Function VMs
    Network Function VM Chains
    Network Function Limitations
    Enabling and Disabling Support for Network Function VMs
    Creating a Network Function VM
    Creating a Network Function VM Chain
    Clearing a Network Function Configuration
    Example: Configuring a Sample Network Function Chain

Copyright
    License
    Conventions
    Default Cluster Credentials
    Version

1: Virtualization Management
Nutanix nodes with AHV include a distributed VM management service responsible for storing VM
configuration, making scheduling decisions, and exposing a management interface.

Snapshots

Snapshots are crash-consistent. They do not include the VM's current memory image, only the VM
configuration and its disk contents. The snapshot is taken atomically across the VM configuration and disks
to ensure consistency.
If multiple VMs are specified when creating a snapshot, all of their configurations and disks are placed into
the same consistency group. Do not specify more than 8 VMs at a time.
If no snapshot name is provided, the snapshot is referred to as "vm_name-timestamp", where the
timestamp is in ISO-8601 format ( YYYY-MM-DDTHH:MM:SS.mmmmmm ).
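For example, the following sketch takes a crash-consistent snapshot of two VMs with the snapshot.create
operation referenced in the limits later in this chapter. The VM names and snapshot name are placeholders,
and the exact argument names are an assumption to verify against your acli version:
nutanix@cvm$ acli snapshot.create app-snap vm_list=app-vm1,app-vm2
Because both VMs are named in the same call, their configurations and disks are placed in one consistency
group.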

VM Disks

A disk drive may either be a regular disk drive or a CD-ROM drive.


By default, regular disk drives are configured on the SCSI bus, and CD-ROM drives are configured on the
IDE bus. The IDE bus supports CD-ROM drives only; regular disk drives are not supported on the IDE bus.
You can also configure CD-ROM drives to use the SCSI bus. By default, a disk drive is placed on the first
available bus slot.
Disks on the SCSI bus may optionally be configured for passthrough on platforms that support iSCSI.
When in passthrough mode, SCSI commands are passed directly to DSF over iSCSI. When SCSI
passthrough is disabled, the hypervisor provides a SCSI emulation layer and treats the underlying
iSCSI target as a block device. By default, SCSI passthrough is enabled for SCSI devices on supported
platforms.
If you do not specify a storage container when creating a virtual disk, it is placed in the storage container
named "default". You do not need to create the default storage container.

Virtual Networks (Layer 2)

Each VM network interface is bound to a virtual network. Each virtual network is bound to a single VLAN;
trunking VLANs to a virtual network is not supported. Networks are designated by the L2 type ( vlan ) and
the VLAN number. For example, a network bound to VLAN 66 would be named vlan.66 .
Each virtual network maps to virtual switch br0 . The user is responsible for ensuring that the specified
virtual switch exists on all hosts, and that the physical switch ports for the virtual switch uplinks are properly
configured to receive VLAN-tagged traffic.
A VM NIC must be associated with a virtual network. It is not possible to change this association. To
connect a VM to a different virtual network, it is necessary to create a new NIC. While a virtual network is in
use by a VM, it cannot be modified or deleted.
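For example, to create a virtual network bound to VLAN 66 and attach a new VM NIC to it (the VM name is a
placeholder):
nutanix@cvm$ acli net.create vlan.66 vlan=66
nutanix@cvm$ acli vm.nic_create myvm network=vlan.66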



Managed Networks (Layer 3)

A virtual network can have an IPv4 configuration, but it is not required. A virtual network with an IPv4
configuration is a managed network; one without an IPv4 configuration is an unmanaged network. A VLAN
can have at most one managed network defined. If a virtual network is managed, every NIC must be
assigned an IPv4 address at creation time.
A managed network can optionally have one or more non-overlapping DHCP pools. Each pool must be
entirely contained within the network's managed subnet.
If the managed network has a DHCP pool, the NIC automatically gets assigned an IPv4 address from one
of the pools at creation time, provided at least one address is available. Addresses in the DHCP pool are
not reserved. That is, you can manually specify an address belonging to the pool when creating a virtual
adapter. If the network has no DHCP pool, you must specify the IPv4 address manually.
All DHCP traffic on the network is rerouted to an internal DHCP server, which allocates IPv4 addresses.
DHCP traffic on the virtual network (that is, between the guest VMs and the Controller VM) does not reach
the physical network, and vice versa.
A network must be configured as managed or unmanaged when it is created. It is not possible to convert
one to the other.
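For example, a sketch that creates a managed network on VLAN 10 with an IPv4 configuration and a DHCP
pool. The subnet, gateway, and pool range are illustrative, and the parameter names are assumptions to
verify against your acli version:
nutanix@cvm$ acli net.create vlan.10 vlan=10 ip_config=10.10.10.1/24
nutanix@cvm$ acli net.add_dhcp_pool vlan.10 start=10.10.10.100 end=10.10.10.200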

Figure: Acropolis Networking Architecture

Host Maintenance

When a host is in maintenance mode, it is marked as unschedulable so that no new VM instances are
created on it. Subsequently, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails (for example, because there are insufficient resources available elsewhere in
the cluster), the host remains in the "entering maintenance mode" state, where it is marked unschedulable,
waiting for user remediation. You can shut down VMs on the host or move them to other nodes. Once the
host has no more running VMs it is in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to a temporary host. After exiting
maintenance mode, those VMs are automatically returned to the original host, eliminating the need to
manually move them.
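Maintenance mode is entered and exited with the acli host commands shown in the node shutdown and
startup procedures later in this guide, for example (the host IP address is a placeholder):
nutanix@cvm$ acli host.enter_maintenance_mode 10.1.64.22 wait=true
nutanix@cvm$ acli host.exit_maintenance_mode 10.1.64.22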



Limitations

Number of online VMs per host: 128
Number of online VM virtual disks per host: 256
Number of VMs per consistency group (with snapshot.create): 8
Number of VMs to edit concurrently (for example, with vm.create/delete and power operations): 64

Storage Overview
Acropolis uses iSCSI and NFS for storing VM files.

Figure: Acropolis Storage Example

iSCSI for VMs

Each disk that maps to a VM is defined as a separate iSCSI target. The Nutanix scripts work with
libvirt to create the necessary iSCSI structures in Acropolis. These structures map to
vDisks created in the Nutanix storage container specified by the administrator. If no storage container is
specified, the script uses the default storage container name.

Storage High Availability with I/O Path Optimization

Unlike with Microsoft Hyper-V and VMware ESXi clusters, in which the entire traffic on a node is rerouted
to a randomly selected healthy Controller VM when the local Controller VM becomes unavailable, in
an Acropolis cluster, a rerouting decision is taken on a per-vDisk basis. When the local Controller VM
becomes unavailable, iSCSI connections are individually redirected to a randomly selected healthy
Controller VM, resulting in distribution of load across the cluster.
Instead of maintaining live, redundant connections to other Controller VMs, as is the case with the Device
Mapper Multipath feature, AHV initiates an iSCSI connection to a healthy Controller VM only when the



connection is required. When the local Controller VM becomes available, connections to other Controller
VMs are terminated and the guest VMs reconnect to the local Controller VM.

NFS Datastores for Images

Nutanix storage containers can be accessed by the Acropolis host as NFS datastores. NFS datastores are
used to manage images which may be used by multiple VMs, such as ISO files. When mapped to a VM,
the script maps the file in the NFS datastore to the VM as an iSCSI device, just as it does for virtual disk
files.
Images must be specified by an absolute path relative to the NFS server root. For example, if a datastore
named ImageStore exists with a subdirectory called linux, the path required to access this set of files would
be /ImageStore/linux. Use the nfs_ls script to browse the datastore from the Controller VM:
nutanix@cvm$ nfs_ls --long --human_readable /ImageStore/linux
-rw-rw-r-- 1 1000 1000 Dec 7 2012 1.6G CentOS-6.3-x86_64-LiveDVD.iso
-rw-r--r-- 1 1000 1000 Jun 19 08:56 523.0M archlinux-2013.06.01-dual.iso
-rw-rw-r-- 1 1000 1000 Jun 3 19:22 373.0M grml64-full_2013.02.iso
-rw-rw-r-- 1 1000 1000 Nov 29 2012 694.3M ubuntu-12.04.1-amd64.iso

Virtualization Management Web Console Interface


Many of the virtualization management features can be managed from the Prism GUI.
In virtualization management-enabled clusters, you can do the following through the web console:
Configure network connections
Create virtual machines
Manage virtual machines (launch console, start/shut down, take snapshots, migrate, clone, update, and
delete)
Monitor virtual machines
Enable VM high availability
For more information about these features, see the Web Console Guide.



2: Node Management

Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console or nCLI.
Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access
with password or key authentication. Some functions, however, require logging on to a Controller VM
with SSH. Exercise caution whenever connecting directly to a Controller VM as the risk of causing cluster
issues is increased.

Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.

Admin Access to Controller VM


You can access the Controller VM as the admin user (admin user name and password) with SSH. For
security reasons, the password of the admin user must meet complexity requirements. When you log on to
the Controller VM as the admin user for the first time, you are prompted to change the default password.
Following are the default credentials of the admin user:
User name: admin and Password: Nutanix/4u
The password must meet the following complexity requirements:
At least 8 characters long
At least 1 lowercase letter
At least 1 uppercase letter
At least 1 number
At least 1 special character
At least 4 characters difference from the old password
Should not be among the last 10 passwords
After you have successfully changed the password, the new password is synchronized across all Controller
VMs and interfaces (Prism web console, nCLI, and SSH).
Note:
As an admin user, you cannot access nCLI by using the default credentials. If you are logging
in as the admin user for the first time, you must SSH to the Controller VM or log on through the
Prism web console. Also, you cannot change the default password of the admin user through
nCLI. To change the default password of the admin user, you must SSH to the Controller VM or
log on through the Prism web console.



When you make an attempt to log in to the Prism web console for the first time after you
upgrade to AOS 5.1 from an earlier AOS version, you can use your existing admin user
password to log in and then change the existing password (you are prompted) to adhere to the
password complexity requirements. However, if you are logging in to the Controller VM with
SSH for the first time after the upgrade as the admin user, you must use the default admin user
password (Nutanix/4u) and then change the default password (you are prompted) to adhere to
the password complexity requirements.

By default, the admin user password does not have an expiry date, but you can change the password at
any time.
When you change the admin user password, you must update any applications and scripts using the
admin user credentials for authentication. Nutanix recommends that you create a user assigned with the
admin role instead of using the admin user for authentication. The Prism Web Console Guide describes
authentication and roles.
Following are the default credentials to access a Controller VM.

Controller VM Credentials

Interface             Target                   User Name    Password

SSH client            Nutanix Controller VM    admin        Nutanix/4u
SSH client            Nutanix Controller VM    nutanix      nutanix/4u
Prism web console     Nutanix Controller VM    admin        Nutanix/4u

Accessing the Controller VM Using the Admin Account

Perform the following procedure to log on to the Controller VM by using the admin user with SSH for the
first time.

1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and
the following credentials.
User name: admin and Password: Nutanix/4u
You are now prompted to change the default password.

2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
Retype new password:
Password changed.

The password must meet the following complexity requirements:


At least 8 characters long
At least 1 lowercase letter
At least 1 uppercase letter
At least 1 number
At least 1 special character
At least 4 characters difference from the old password
Should not be among the last 10 passwords
For information about logging on to a Controller VM by using the admin user account through the Prism
web console, see Logging Into The Web Console in the Prism Web Console guide.



Shutting Down a Node in a Cluster (AHV)
Before you begin: Shut down guest VMs that are running on the node, or move them to other nodes in
the cluster.

Caution: Verify the data resiliency status of your cluster. If the cluster only has replication factor 2
(RF2), you can only shut down one node for each cluster. If an RF2 cluster would have more than
one node shut down, shut down the entire cluster.

1. If the Controller VM is running, shut down the Controller VM.

a. Log on to the Controller VM with SSH.

b. List all the hosts in the cluster.


nutanix@cvm$ acli host.list

Note the value of Hypervisor address for the node you want to shut down.

c. Put the node into maintenance mode.


nutanix@cvm$ acli host.enter_maintenance_mode Hypervisor address [wait="{ true |
false }" ]

Replace Hypervisor address with the value of Hypervisor address for the node you want to shut
down. Value of Hypervisor address is either the IP address of the AHV host or the host name.
Specify wait=true to wait for the host evacuation attempt to finish.

d. Shut down the Controller VM.


nutanix@cvm$ cvm_shutdown -P now

2. Log on to the AHV host with SSH.

3. Shut down the host.


root@ahv# shutdown -h now

Starting a Node in a Cluster (AHV)

1. Log on to the AHV host with SSH.

2. Find the name of the Controller VM.


root@ahv# virsh list --all | grep CVM

Make a note of the Controller VM name in the second column.

3. Determine if the Controller VM is running.


If the Controller VM is off, a line similar to the following should be returned:
- NTNX-12AM2K470031-D-CVM shut off


If the Controller VM is on, a line similar to the following should be returned:


- NTNX-12AM2K470031-D-CVM running



4. If the Controller VM is shut off, start it.
root@ahv# virsh start cvm_name

Replace cvm_name with the name of the Controller VM that you found from the preceding command.

5. If the node is in maintenance mode, log on to the Controller VM and take the node out of maintenance
mode.
nutanix@cvm$ acli
<acropolis> host.exit_maintenance_mode AHV-hypervisor-IP-address

Replace AHV-hypervisor-IP-address with the IP address of the AHV hypervisor.


<acropolis> exit

6. Log on to another Controller VM in the cluster with SSH.

7. Verify that all services are up on all Controller VMs.


nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in the cluster:

CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886, 8888, 8889,
8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627, 4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947, 9976, 9977,
10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]
MinervaCVM UP [10174, 10200, 10201, 10202, 10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
APLOS UP [10343, 10368, 10369, 10370, 10502,
10503]
Lazan UP [10377, 10402, 10403, 10404]
Orion UP [10409, 10449, 10450, 10474]



Delphi UP [10418, 10466, 10467, 10468]

Changing CVM Memory Configuration (AHV)


You can increase memory reserved for each Controller VM in your cluster by using the 1-click Controller
VM Memory Upgrade available from the Prism web console. Increase memory size depending on the
workload type or to enable certain AOS features. See the Controller VM Memory Configurations topic in
the Acropolis Advanced Administration Guide.

1. Run the Nutanix Cluster Checks (NCC).


From the Prism web console Health page, select Actions > Run Checks. Select All checks and
click Run.
Log in to a Controller VM and use the ncc CLI.
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than PASS, resolve the reported issues before proceeding. If you are
unable to resolve the issues, contact Nutanix support for assistance.

2. Log on to the web console for any node in the cluster.

3. Open Configure CVM from the gear icon in the web console.
The Configure CVM dialog box is displayed.

4. Select the Target CVM Memory Allocation memory size and click Apply.
The values available from the drop-down menu can range from 16 GB to the maximum available
memory in GB.
AOS applies the new memory size to each Controller VM that currently has less memory than the amount
you choose. If a Controller VM is already allocated more memory than your choice, its allocation is left
unchanged. For example, selecting 28 GB upgrades any Controller VM currently at 20 GB, while a Controller
VM with a 48 GB memory allocation remains unmodified.

Changing the Acropolis Host Name


To change the name of an Acropolis host, do the following:

1. Log on to the AHV host with SSH.

2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/sysconfig/
network file.
HOSTNAME=my_hostname

Replace my_hostname with the name that you want to assign to the host.

3. Use the text editor to replace the host name in the /etc/hostname file.

4. Restart the Acropolis host.
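For example, a minimal sketch of steps 2 through 4 run on the host (the host name ahv-node-01 is a
placeholder):
root@ahv# sed -i 's/^HOSTNAME=.*/HOSTNAME=ahv-node-01/' /etc/sysconfig/network
root@ahv# echo ahv-node-01 > /etc/hostname
root@ahv# reboot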



Changing the Acropolis Host Password
Tip: Although it is not required for the root user to have the same password on all hosts, doing so
makes cluster management and support much easier. If you do select a different password for one
or more hosts, make sure to note the password for each host.
Perform these steps on every Acropolis host in the cluster.

1. Log on to the AHV host with SSH.

2. Change the root password.


root@ahv# passwd root

3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.

The password you choose must meet the following complexity requirements:
In configurations with high-security requirements, the password must contain:
At least 15 characters.
At least one upper case letter (A-Z).
At least one lower case letter (a-z).
At least one digit (0-9).
At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
At least eight characters different from the previous password.
At most three consecutive occurrences of any given character.
The password cannot be the same as the last 24 passwords.

In configurations without high-security requirements, the password must contain:


At least eight characters.
At least one upper case letter (A-Z).
At least one lower case letter (a-z).
At least one digit (0-9).
At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
At least three characters different from the previous password.
At most three consecutive occurrences of any given character.
The password cannot be the same as the last 10 passwords.

In both types of configuration, if a password for an account is entered three times unsuccessfully within
a 15-minute period, the account is locked for 15 minutes.



Nonconfigurable AHV Components
The components listed here are configured by the Nutanix manufacturing and installation processes. Do
not modify any of these components except under the direction of Nutanix Support.

Warning: Modifying any of the settings listed here may render your cluster inoperable.

Warning: You must not run any commands on a Controller VM that are not covered in the Nutanix
documentation.

Nutanix Software

Settings and contents of any Controller VM, including the name and the virtual hardware configuration
(except memory when required to enable certain features)

AHV Settings

Hypervisor configuration, including installed packages


iSCSI settings
Open vSwitch settings
Taking snapshots of the Controller VM



3: Controller VM Memory Configurations
Controller VM memory allocation requirements differ depending on the models and the features that are
being used.

CVM Memory and vCPU Configurations (G5/Broadwell)


This topic lists the recommended Controller VM memory allocations for workload categories.

Note: If the AOS upgrade process detects that a node's hypervisor host has 64 GB or more of total
physical memory, it automatically increases the memory of any Controller VM on that node that has less
than 32 GB by 4 GB, up to a maximum of 32 GB.
If the AOS upgrade process detects that a node has less than 64 GB of memory, no memory changes
occur.
For nodes with ESXi hypervisor hosts that have exactly 64 GB of total physical memory, the Controller VM
is upgraded to a maximum of 28 GB. With total physical memory greater than 64 GB, the existing
Controller VM memory is increased by 4 GB.

To calculate the number of vCPUs for your model, use the number of physical cores per socket in your
model. The minimum number of vCPUs your Controller VM can have is eight and the maximum number
is 12, unless otherwise noted.
If your CPU has eight or fewer logical cores, allocate a maximum of 75 percent of the cores of a single
CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.

Controller VM Memory Configurations for Base Models

Platform                                     Recommended/Default Memory (GB)    vCPUs

Default configuration for all platforms      20                                 8

Nutanix Broadwell Models

The following table shows the minimum amount of memory required for the Controller VM on each node
for platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (G5/Broadwell) on page 16.

Platform                                      Default Memory (GB)

VDI, server virtualization                    20
Storage Heavy                                 28
Storage Only                                  28
Large server, high-performance, all-flash     32



Platform Workload Translation (G5/Broadwell)
The following table maps workload types to the corresponding Nutanix and Lenovo models.

Workload Exceptions

Note: Upgrading to AOS 5.1 requires a 4 GB memory increase, unless the Controller VM already has
32 GB of memory.
If all the data disks in a platform are SSDs, the node is assigned the High Performance workload, with the
following exceptions:
Klas Voyager 2 uses SSDs, but for workload balance the default workload for this platform is VDI.
Cisco B-series is expected to have large remote storage and two SSDs as a local cache for the hot tier,
so this platform's workload is VDI.

VDI
    Nutanix NX models: NX-1065S-G5, NX-1065-G5, NX-3060-G5, NX-3155G-G5, NX-3175-G5
    Nutanix SX models: SX-1065-G5
    Lenovo HX models: HX1310, HX2310-E, HX2710-E, HX3310, HX3310-F, HX3510-G, HX3510-FG, HX3710, HX3710-F
    Cisco UCS models: B200-M4, C220-M4L, C220-M4S, C240-M4L, C240-M4S, C240-M4S2, Hyperflex HX220C-M4S
    Dell XC models: XC430 Xpress
    Additional platforms: Klas Telecom VOYAGER2, Crystal RS2616PS18

Storage Heavy
    Nutanix NX models: NX-6035-G5, NX-6155-G5, NX-8035-G5
    Lenovo HX models: HX5510, HX5510-C

Storage Node
    Nutanix NX models: NX-6035C-G5
    Lenovo HX models: HX5510-C
    Dell XC models: XC730xd-12C

High Performance and All-Flash
    Nutanix NX models: NX-1155-G5, NX-6155-G5, NX-8150-G5
    Lenovo HX models: HX7510, HX7510-F
    Cisco UCS models: C240-M4SX, Hyperflex HX240C-M4SX
    Dell XC models: XC630-10P, XC730xd-12R

CVM Memory and vCPU Configurations (G4/Haswell/Ivy Bridge)


This topic lists the recommended Controller VM memory allocations for models and features.

Controller VM Memory Configurations for Base Models

Platform                                                           Recommended/Default Memory (GB)    vCPUs

Default configuration for all platforms unless otherwise noted     20                                 8

The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.

Nutanix Platforms

Platform        Recommended Memory (GB)    Default Memory (GB)    vCPUs

NX-1020         16                         16                     4
NX-6035C        28                         28                     8
NX-6035-G4      28                         20                     8
NX-8150         32                         32                     8
NX-8150-G4      32                         32                     8
NX-9040         32                         20                     8
NX-9060-G4      32                         32                     8



Dell Platforms

Platform        Recommended Memory (GB)    Default Memory (GB)    vCPUs

XC730xd-24      32                         20                     8
XC6320-6AF      32                         20                     8
XC630-10AF      32                         20                     8

Lenovo Platforms

Platform        Recommended/Default Memory (GB)    vCPUs

HX-3500         28                                 8
HX-5500         28                                 8
HX-7500         28                                 8

CVM Memory Configurations for Features


The following table lists the minimum amount of memory required when enabling features.
The memory size requirements are in addition to the default or recommended memory available for your
platform.
The Controller VM is upgraded to a maximum 32 GB.
For ESXi hosts: The Controller VM is upgradeable to a maximum 28 GB for nodes with ESXi hypervisor
hosts with total physical memory of 64 GB.
Note: Total CVM memory required = recommended platform memory + memory required for each
enabled feature

Features Memory (GB)


Capacity tier deduplication (includes performance tier deduplication) 12
Redundancy factor 3 8
Performance tier deduplication 8
Cold-tier nodes + capacity tier deduplication 4
Capacity tier deduplication + redundancy factor 3 12
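For example, a Storage Heavy node with the recommended 28 GB of Controller VM memory that enables
capacity tier deduplication together with redundancy factor 3 (12 GB) requires 28 GB + 12 GB = 40 GB of
Controller VM memory.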



4: Host Network Management
Network management in an Acropolis cluster consists of the following tasks:
Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you configure
bridges, bonds, and VLANs.
Optionally changing the IP address, netmask, and default gateway that were specified for the hosts
during the imaging process.

Prerequisites for Configuring Networking


Change the configuration from the factory default to the recommended configuration. See Default Factory
Configuration on page 22 and AHV Networking Recommendations on page 19.

AHV Networking Recommendations


Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as
described in this documentation:
Viewing the network configuration
Configuring an Open vSwitch bond with desired interfaces
Assigning the Controller VM to a VLAN
For performing other OVS configuration tasks, such as adding an interface to a bridge and configuring
LACP for the interfaces in an OVS bond, log on to the AHV host, and then follow the procedures described
in the OVS documentation at http://openvswitch.org/.
Nutanix recommends that you configure the network as follows:

Recommended Network Configuration

Open vSwitch: Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

VLANs: Add the Controller VM and the AHV host to the same VLAN. By default, the Controller VM and the
hypervisor are assigned to VLAN 0, which effectively places them on the native VLAN configured on the
upstream physical switch. Do not add any other device, including guest VMs, to the VLAN to which the
Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Virtual bridges: Do not delete or rename OVS bridge br0. Do not modify the native Linux bridge virbr0.

OVS bonded port (bond0): Aggregate the 10 GbE interfaces on the physical host to an OVS bond on the
default OVS bridge br0 and trunk these interfaces on the physical switch. By default, the 10 GbE interfaces
in the OVS bond operate in the recommended active-backup mode.
Note: Mixing bond modes across AHV hosts in the same cluster is not recommended and not supported.
LACP configurations are known to work, but support might be limited.

1 GbE and 10 GbE interfaces (physical host): If you want to use the 10 GbE interfaces for guest VM traffic,
make sure that the guest VMs do not use the VLAN over which the Controller VM and hypervisor
communicate. If you want to use the 1 GbE interfaces for guest VM connectivity, follow the hypervisor
manufacturer's switch port and networking configuration guidelines. Do not include the 1 GbE interfaces
in the same bond as the 10 GbE interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge
br0, either individually or in a second bond. Use them on other bridges.

IPMI port on the hypervisor host: Do not trunk switch ports that connect to the IPMI interface. Configure the
switch ports as access ports for management simplicity.

Upstream physical switch: Nutanix does not recommend the use of Fabric Extenders (FEX) or similar
technologies for production use cases. While initial, low-load implementations might run smoothly with
such technologies, poor performance, VM lockups, and other issues might occur as implementations scale
upward (see Knowledge Base article KB1612). Nutanix recommends the use of 10 Gbps, line-rate,
non-blocking switches with larger buffers for production workloads.
Use an 802.3-2012 standards-compliant switch that has a low-latency, cut-through design and provides
predictable, consistent traffic latency regardless of packet size, traffic pattern, or the features enabled on
the 10 GbE interfaces. Port-to-port latency should be no higher than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected to the
hypervisor host.
Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Physical network layout: Use redundant top-of-rack switches in a traditional leaf-spine architecture. This
simple, flat network design is well suited for a highly distributed, shared-nothing compute and storage
architecture. Add all the nodes that belong to a given cluster to the same Layer-2 network segment. Other
network layouts are supported as long as all other Nutanix recommendations are followed.

Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge
virbr0.

This diagram shows the recommended network configuration for an Acropolis cluster. The interfaces in the
diagram are connected with colored lines to indicate membership in different VLANs:

Figure: Recommended network configuration for an Acropolis cluster

Layer 2 Network Management with Open vSwitch


AHV uses Open vSwitch to connect the Controller VM, the hypervisor, and the guest VMs to each other
and to the physical network. The OVS package is installed by default on each Acropolis node and the OVS
services start automatically when you start a node.
To configure virtual networking in an Acropolis cluster, you need to be familiar with OVS. This
documentation gives you a brief overview of OVS and the networking components that you need to
configure to enable the hypervisor, Controller VM, and guest VMs to connect to each other and to the
physical network.

About Open vSwitch


Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to
work in a multiserver virtualization environment. By default, OVS behaves like a Layer 2 learning switch



that maintains a MAC address learning table. The hypervisor host and VMs connect to virtual ports on the
switch. Nutanix uses the OpenFlow protocol to configure and communicate with Open vSwitch.
Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch. As an
example, the following diagram shows OVS instances running on two hypervisor hosts.

Figure: Open vSwitch

Default Factory Configuration


The factory configuration of an Acropolis host includes a default OVS bridge named br0 and a native Linux
bridge called virbr0.
Bridge br0 includes the following ports by default:
An internal port with the same name as the default bridge; that is, an internal port named br0. This is the
access port for the hypervisor host.
A bonded port named bond0. The bonded port aggregates all the physical interfaces available on the
node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces
are aggregated on bond0. This configuration is necessary for Foundation to successfully image the
node regardless of which interfaces are connected to the network.
Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with Desired
Interfaces on page 25.

The following diagram illustrates the default factory configuration of OVS on an Acropolis node:



Figure: Default factory configuration of Open vSwitch in AHV

The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to
bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.

Viewing the Network Configuration


Use the following commands to view the configuration of the network elements.
Before you begin: Log on to the Acropolis host with SSH.

To show interface properties such as link speed and status, log on to the Controller VM, and then list the
physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces

Output similar to the following is displayed:

name mode link speed


eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000

To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then
list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks

Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --
bridge_name parameter if you want to view uplink information for the default OVS bridge br0.



Output similar to the following is displayed:

Uplink ports: bond0


Uplink ifaces: eth1 eth0

To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the
configuration of Open vSwitch.
root@ahv# ovs-vsctl show

Output similar to the following is displayed:

59ce3252-f3c1-4444-91d1-b5281b30cdba
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "vnet0"
Interface "vnet0"
Port "br0-arp"
Interface "br0-arp"
type: vxlan
options: {key="1", remote_ip="192.168.5.2"}
Port "bond0"
Interface "eth3"
Interface "eth2"
Port "bond1"
Interface "eth1"
Interface "eth0"
Port "br0-dhcp"
Interface "br0-dhcp"
type: vxlan
options: {key="1", remote_ip="192.0.2.131"}
ovs_version: "2.3.1"

To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the
configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name

For example, show the configuration of bond0.


root@ahv# ovs-appctl bond/show bond0

Output similar to the following is displayed:

---- bond0 ----


bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 0c:c4:7a:48:b2:68(eth0)

slave eth0: enabled


active slave
may_enable: true

slave eth1: disabled


may_enable: false



Creating an Open vSwitch Bridge
To create an OVS bridge, do the following:

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Create an OVS bridge on each host in the cluster.


nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br bridge'

Replace bridge with a name for the bridge. Bridge names must not exceed 10 characters.
The output does not indicate success explicitly, so you can append && echo success to the command. If
the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'

Output similar to the following is displayed:

nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
Executing ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success on the
cluster
================== 192.0.2.203 =================
FIPS mode initialized
Nutanix KVM
success
...

Configuring an Open vSwitch Bond with Desired Interfaces


When creating an OVS bond, you can specify the interfaces that you want to include in the bond.
Use this procedure to create a bond that includes a desired set of interfaces or to specify a new set of
interfaces for an existing bond. If you are modifying an existing bond, AHV removes the bond and then re-
creates the bond with the specified interfaces.

Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond, so
the disassociation is necessary to help prevent any unpredictable performance issues that might
result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that you
aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate OVS
bridge.

To create an OVS bond with the desired interfaces, do the following:

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.



3. Create a bond with the desired set of interfaces.
nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces --bond_name bond_name
update_uplinks

Replace bridge with the name of the bridge on which you want to create the bond. Omit the --
bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
A comma-separated list of the interfaces that you want to include in the bond. For example,
eth0,eth1 .
A keyword that indicates which interfaces you want to include. Possible keywords:
10g. Include all available 10 GbE interfaces
1g. Include all available 1 GbE interfaces
all. Include all available interfaces
For example, create a bond with interfaces eth0 and eth1 on a bridge named br1. Using allssh enables
you to use a single command to effect the change on every host in the cluster.

Note: If the bridge on which you want to create the bond does not exist, you must first create
the bridge. For information about creating an OVS bridge, see Creating an Open vSwitch
Bridge on page 25. The following example assumes that a bridge named br1 exists on every
host in the cluster.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces eth0,eth1 --bond_name bond1
update_uplinks'

Example output similar to the following is displayed:

2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link state
2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond1
2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1
2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for 192.0.2.21

Virtual Network Segmentation with VLANs


You can set up a segmented virtual network on an Acropolis node by assigning the ports on Open vSwitch
bridges to different VLANs. VLAN port assignments are configured from the Controller VM that runs on
each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations on
page 19. For information about assigning guest VMs to a VLAN, see the Web Console Guide.

Assigning an Acropolis Host to a VLAN

To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:

1. Log on to the AHV host with SSH.

2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to
be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag

Replace host_vlan_tag with the VLAN tag for hosts.



3. Confirm VLAN tagging on port br0.
root@ahv# ovs-vsctl list port br0

4. Check the value of the tag parameter that is shown.

5. Verify connectivity to the IP address of the AHV host by performing a ping test.

Assigning the Controller VM to a VLAN

By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are
logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the
internal interface that has IP address 192.168.5.254.

Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Assign the public interface of the Controller VM to a VLAN.


nutanix@cvm$ change_cvm_vlan vlan_id

Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10

Output similar to the following is displayed:

Replacing external NIC in CVM, old XML:


<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<source bridge="br0" />
<vlan>
<tag id="10" />
</vlan>
<virtualport type="openvswitch">
<parameters interfaceid="95ce24f9-fb89-4760-98c5-01217305060d" />
</virtualport>
<target dev="vnet0" />
<model type="virtio" />
<alias name="net2" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
</interface>

new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
<source bridge="br0" />



<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.

Configuring a Virtual NIC to Operate in Access or Trunk Mode

By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can
send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is
connected. If restricted to using access mode interfaces, a VM running an application on multiple VLANs
(such as a firewall application) must use multiple virtual NICs, one for each VLAN. Instead of configuring
multiple virtual NICs in access mode, you can configure a single virtual NIC on the VM to operate in trunk
mode. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its
own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the
trunk mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic only
over its own VLAN.
To configure a virtual NIC as an access port or trunk port, do the following:

1. Log on to the Controller VM with SSH.

2. Do one of the following:

a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]

Specify appropriate values for the following parameters:


vm. Name of the VM.
network. Name of the virtual network to which you want to connect the virtual NIC.
trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set
to kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To
trunk all VLANs, set vlan_mode to kTrunked and skip this parameter.
vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode. Default: kAccess .

b. Configure an existing virtual NIC to operate in the required mode.


nutanix@cvm$ acli vm.nic_update vm mac_addr [update_vlan_trunk_info={true | false}]
[vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]

Specify appropriate values for the following parameters:


vm. Name of the VM.
mac_addr. MAC address of the virtual NIC to update (the MAC address is used to identify the
virtual NIC). Required to update a virtual NIC.
trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set
to kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To
trunk all VLANs, set vlan_mode to kTrunked and skip this parameter.
update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. If not specified, the
parameter defaults to false and the vlan_mode and trunked_networks parameters are ignored.
vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode.

Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see the



"VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App Mobility
Fabric Guide.
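For example, a sketch that creates a trunked virtual NIC for a firewall VM and later converts an existing NIC
back to access mode; the VM name, network name, MAC address, and VLAN IDs are placeholders:
nutanix@cvm$ acli vm.nic_create fw-vm network=vlan.10 vlan_mode=kTrunked trunked_networks=20,30,40
nutanix@cvm$ acli vm.nic_update fw-vm 52:54:00:ab:cd:ef update_vlan_trunk_info=true vlan_mode=kAccess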

Changing the IP Address of an Acropolis Host

Perform the following procedure to change the IP address of an Acropolis host.

Caution: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor
can be multihomed provided that one interface is on the same subnet as the Controller VM.

1. Edit the settings of port br0, which is the internal port on the default bridge br0.

a. Log on to the host console as root.


You can access the hypervisor host console either through IPMI or by attaching a keyboard and
monitor to the node.

b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0

c. Update entries for host IP address, netmask, and gateway.


The block of configuration information that includes these entries is similar to the following:
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"

Replace host_ip_addr with the IP address for the hypervisor host.


Replace subnet_mask with the subnet mask for host_ip_addr.
Replace gateway_ip_addr with the gateway address for host_ip_addr.

d. Save your changes.

e. Restart network services.


root@ahv# /etc/init.d/network restart

2. Log on to the Controller VM and restart genesis.


nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:

Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]


Genesis started on pids [30378, 30379, 30380, 30381, 30403]

For information about how to log on to a Controller VM, see Controller VM Access on page 8.

3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
Acropolis Host to a VLAN on page 26.



5
Virtual Machine Management
The following topics describe various aspects of virtual machine management in an AHV cluster.

VM Management

Supported Guest VM Types for AHV


The following shows the supported guest OS types for VMs on AHV.

Maximum vDisks per bus type

SCSI: 256
PCI: 6
IDE: 4

OS types with SCSI (recommended) and IDE bus types

Windows 2016 Standard, 2016 Datacenter


Windows 7, 8, 8.1, 10
Windows Server 2008 R2, 2012, 2012 R2, 2016
RHEL 6.4, 6.5, 6.6, 6.7, 6.8, 7.0, 7.1, 7.2, 7.3
CentOS 6.4, 6.5, 6.6, 6.7, 6.8, 7.0, 7.1, 7.2, 7.3
Ubuntu 12.04.5, 14.04.x, 16.04.x, 16.10, Server, Desktop (32-bit and 64-bit)
FreeBSD 9.3, 10.0, 10.1,10.2, 10.3, 11.0
SUSE Linux Enterprise Server 11 SP3 / SP4
SUSE Linux Enterprise Server 12
Oracle Linux 6.x, 7.x

OS types with PCI (recommended) and IDE bus types

RHEL 5.10, 5.11, 6.3


CentOS 5.10, 5.11, 6.3
Ubuntu 12.04
SUSE Linux Enterprise Server 12

Virtual Machine Network Management


Virtual machine network management involves configuring connectivity for guest VMs through Open
vSwitch bridges.



Configuring 1 GbE Connectivity for Guest VMs
If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0
and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign
guest VM interfaces to the network.
To configure 1 GbE connectivity for guest VMs, do the following:

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

3. Determine the uplinks configured on the host.


nutanix@cvm$ allssh manage_ovs show_uplinks

Output similar to the following is displayed:

Executing manage_ovs show_uplinks on the cluster


================== 192.0.2.49 =================
Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

================== 192.0.2.50 =================


Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

================== 192.0.2.51 =================


Bridge br0:
Uplink ports: br0-up
Uplink ifaces: eth3 eth2 eth1 eth0

4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the
previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond
name are br0 and br0-up, respectively.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up
update_uplinks'

The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.

5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1
(bridge names must not exceed 10 characters).
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'

6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to
a bond named br1-up.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up
update_uplinks'

7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the guest VMs, and
associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN 10.
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1



8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on
the guest VMs to the network.
For information about assigning guest VM interfaces to a network, see "Creating a VM" in the Prism
Web Console Guide.

Configuring a Virtual NIC to Operate in Access or Trunk Mode


By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can
send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is
connected. If restricted to using access mode interfaces, a VM running an application on multiple VLANs
(such as a firewall application) must use multiple virtual NICs, one for each VLAN. Instead of configuring
multiple virtual NICs in access mode, you can configure a single virtual NIC on the VM to operate in trunk
mode. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its
own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the
trunk mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic only
over its own VLAN.
To configure a virtual NIC as an access port or trunk port, do the following:

1. Log on to the Controller VM with SSH.

2. Do one of the following:

a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]

Specify appropriate values for the following parameters:


vm. Name of the VM.
network. Name of the virtual network to which you want to connect the virtual NIC.
trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set
to kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To
trunk all VLANs, set vlan_mode to kTrunked and skip this parameter.
vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode. Default: kAccess .

b. Configure an existing virtual NIC to operate in the required mode.


nutanix@cvm$ acli vm.nic_update vm mac_addr [update_vlan_trunk_info={true | false}]
[vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]

Specify appropriate values for the following parameters:


vm. Name of the VM.
mac_addr. MAC address of the virtual NIC to update (the MAC address is used to identify the
virtual NIC). Required to update a virtual NIC.
trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set
to kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To
trunk all VLANs, set vlan_mode to kTrunked and skip this parameter.
update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. If not specified, the
parameter defaults to false and the vlan_mode and trunked_networks parameters are ignored.
vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode.



Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see the
"VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App Mobility
Fabric Guide.

Virtual Machine Memory and CPU Configurations


Memory and CPUs are hot-pluggable on guest VMs running on AHV. You can increase the memory
allocation and the number of CPUs on your VMs while the VMs are powered on. You can change the
number of vCPUs (sockets) while the VMs are powered on. However, you cannot change the number of
cores per socket while the VMs are powered on.
You can change the memory and CPU configuration of your VMs only through the Acropolis CLI (aCLI).
Following is the supportability matrix of operating systems on which the memory and CPUs are hot-
pluggable.

Operating Systems                       Edition      Bits     Hot-pluggable Memory   Hot-pluggable CPU

Windows Server 2008                     Datacenter   x86      Yes                    No
Windows Server 2008 R2                  Standard     x86_64   No                     No
Windows Server 2008 R2                  Datacenter   x86_64   Yes                    Yes
Windows Server 2012 R2                  Standard     x86_64   Yes                    No
Windows Server 2012 R2                  Datacenter   x86_64   Yes                    No
CentOS 6.3+                                          x86      No                     Yes
CentOS 6.3+                                          x86_64   Yes                    Yes
CentOS 6.8                                                    No                     Yes
CentOS 6.8                                           x86_64   Yes                    Yes
CentOS 7.2                                           x86_64   Yes                    Yes
SUSE Linux Enterprise Edition 11-SP3+                x86_64   No                     Yes
SUSE Linux Enterprise Edition 12                     x86_64   Yes                    Yes

Memory OS Limitations

1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the
memory is not online, you cannot use the new memory. Perform the following procedure to bring the
memory online.



a. Identify the memory block that is offline.
Display the status of all of the memory blocks.
$ grep line /sys/devices/system/memory/*/state

Display the state of a specific memory block.
$ cat /sys/devices/system/memory/memoryXXX/state

b. Make the memory online.
$ echo online > /sys/devices/system/memory/memoryXXX/state
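If several memory blocks are offline, a short loop such as the following sketch, run as root inside the guest, can bring them all online. The sysfs paths are standard Linux paths; verify them on your distribution before use.
$ for block in /sys/devices/system/memory/memory*; do
      grep -q offline "$block/state" && echo online > "$block/state"
  done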

2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot plugging more memory
to that VM so that the final memory size is greater than 3 GB results in a memory-overflow condition. To
resolve the issue, restart the guest OS (CentOS 7.2) with the following setting:
swiotlb=force

CPU OS Limitations

1. On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might
have to bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU
online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online

Replace <n> with the number of the hot plugged CPU.
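Similarly, the following sketch, run as root inside the guest, brings every offline hot-plugged CPU online. The sysfs paths are standard Linux paths and should be verified on your distribution before use.
$ for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      [ -f "$cpu/online" ] && [ "$(cat "$cpu/online")" = "0" ] && echo 1 > "$cpu/online"
  done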


2. Device Manager on some versions of Windows such as Windows Server 2012 R2 displays the hot-
plugged CPUs as new hardware, but the hot-plugged CPUs are not displayed under Task Manager.

Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)


Perform the following procedure to hot plug the memory and CPUs on the AHV VMs.

1. Log on to the Controller VM with SSH.

2. Update the memory allocation for the VM.


nutanix@cvm$ acli vm.update vm-name memory=new_memory_size

Replace vm-name with the name of the VM and new_memory_size with the memory size.

3. Update the number of CPUs on the VM.


nutanix@cvm$ acli vm.update vm-name num_vcpus=n

Replace vm-name with the name of the VM and n with the number of CPUs.
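For example, to hot plug resources on a hypothetical VM named testvm so that it has 8 GiB of memory and 4 vCPUs (the VM name and sizes are placeholders, and the exact memory-size format accepted by aCLI should be confirmed in the command reference):
nutanix@cvm$ acli vm.update testvm memory=8G
nutanix@cvm$ acli vm.update testvm num_vcpus=4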

Note: After you upgrade from a hot-plug unsupported version to the hot-plug supported
version, you must power cycle the VM that was instantiated and powered on before the
upgrade, so that it is compatible with the memory and CPU hot-plug feature. This power-cycle
has to be done only once after the upgrade. New VMs created on the supported version have
hot-plug compatibility by default.



GPU Pass-Through for Guest VMs
AHV hosts support GPU pass-through for guest VMs, allowing applications on VMs direct access to GPU
resources. The Nutanix user interfaces provide a cluster-wide view of GPUs, allowing you to allocate
any available GPU to a VM. You can also allocate multiple GPUs to a VM. However, in a pass-through
configuration, only one VM can use a GPU at any given time.

Host Selection Criteria for VMs with GPU Pass-Through

When you power on a VM with GPU pass-through, the VM is started on the host that has the specified
GPU, provided that the Acropolis Dynamic Scheduler determines that the host has sufficient resources
to run the VM. If the specified GPU is available on more than one host, the Acropolis Dynamic Scheduler
ensures that a host with sufficient resources is selected. If sufficient resources are not available on any
host with the specified GPU, the VM is not powered on.
If you allocate multiple GPUs to a VM, the VM is started on a host if, in addition to satisfying Acropolis
Dynamic Scheduler requirements, the host has all of the GPUs that are specified for the VM.
If you want a VM to always use a GPU on a specific host, configure host affinity for the VM.

Support for Graphics and Compute Modes

AHV supports running GPU cards in either graphics mode or compute mode. If a GPU is running in
compute mode, Nutanix user interfaces indicate the mode by appending the string compute to the model
name. No string is appended if a GPU is running in the default graphics mode.

Switching Between Graphics and Compute Modes

If you want to change the mode of the firmware on a GPU, put the host in maintenance mode, and
then flash the GPU manually by logging on to the AHV host and performing standard procedures as
documented for Linux VMs by the vendor of the GPU card.
Typically, you restart the host immediately after you flash the GPU. After restarting the host, redo the GPU
configuration on the affected VM, and then start the VM. For example, consider that you want to re-flash an
NVIDIA Tesla M60 GPU that is running in graphics mode. The Prism web console identifies the card as
an NVIDIA Tesla M60 GPU. After you re-flash the GPU to run in compute mode and restart the host, redo
the GPU configuration on the affected VMs by adding back the GPU, which is now identified as an NVIDIA
Tesla M60.compute GPU, and then start the VM.

Supported GPU Cards

The following GPUs are supported:


NVIDIA Tesla M10 GPU
NVIDIA Tesla M60 GPU

Limitations

GPU pass-through support has the following limitations:


HA is not supported for VMs with GPU pass-through. If the host fails, VMs that have a GPU
configuration are powered off and then powered on automatically when the node is back up.
Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is necessary
when the BIOS, BMC, and the hypervisor on the host are being upgraded. During these upgrades, VMs
that have a GPU configuration are powered off and then powered on automatically when the node is
back up.



VM pause and resume are not supported.
You cannot hot add VM memory if the VM is using a GPU.
Hot add and hot remove support is not available for GPUs.
You can change the GPU configuration of a VM only when the VM is turned off.

Configuring GPU Pass-Through

For information about configuring GPU pass-through for guest VMs, see Creating a VM (AHV) in the
"Virtual Machine Management" chapter of the Prism Web Console Guide.

Windows VM Provisioning

Nutanix VirtIO for Windows


Nutanix VirtIO is a collection of drivers for paravirtual devices that enhance stability and performance of
VMs on AHV.
Nutanix VirtIO is available in two formats:
An ISO used when installing Windows in a VM on AHV.
An installer used to update VirtIO for Windows.

VirtIO Requirements

The following are requirements for Nutanix VirtIO for Windows.


Operating system:
Microsoft Windows Server Version: Windows 2008 R2 or later versions.
Microsoft Windows Client Version: Windows 7 or later versions.
AHV version 20160925.30 (at minimum)

Installing Nutanix VirtIO for Windows

This topic describes how to download the Nutanix VirtIO and Nutanix VirtIO Microsoft Installer (MSI). The
MSI installs and upgrades the Nutanix VirtIO drivers.
Before you begin: Make sure that you meet the VirtIO requirements. See VirtIO Requirements on page 36.
To download the Nutanix VirtIO, perform the following.

1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.

2. Use the filter search to find the latest Nutanix VirtIO package.

3. Click and download the Nutanix VirtIO package.


Choose the ISO if you are creating a new Windows VM. The installer is also available on the
ISO if your VM does not have internet access.
Choose the MSI if you are updating drivers in a Windows VM.



Figure: Search filter and VirtIO options

4. Upload the ISO to the cluster.


This task is described in Configuring Images in the Web Console Guide.

5. Run the Nutanix VirtIO MSI by opening the download.

6. Read and accept the Nutanix VirtIO License Agreement. Click Install.

Figure: Nutanix VirtIO Windows Setup Wizard

The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.

Manually Installing Nutanix VirtIO

Manual installation of Nutanix VirtIO is required for 32-bit operating systems.


Before you begin: Make sure that you meet the VirtIO requirements. See VirtIO Requirements on page 36.
Note: To automatically install Nutanix VirtIO, see Installing Nutanix VirtIO for Windows on
page 36.

1. Go to the Nutanix Support Portal and navigate to Downloads > Tools & Firmware.

2. Use the filter search to find the latest Nutanix VirtIO ISO.

3. Download the latest VirtIO for Windows ISO to your local machine.

Note: Nutanix recommends extracting the VirtIO ISO into the same VM where you will load
Nutanix VirtIO for easier installation.

4. Upload the Nutanix VirtIO ISO to your cluster.


This procedure is described in Configuring Images in the Web Console Guide.



5. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.

6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.

a. TYPE: CD-ROM

b. OPERATION: CLONE FROM IMAGE SERVICE

c. BUS TYPE: IDE

d. IMAGE: Select the Nutanix VirtIO ISO

e. Click Add.

7. Log into the VM and navigate to Control Panel > Device Manager.

8. Open the devices and select the specific Nutanix drivers. For each device, right-click, select Update
Driver Software, and browse to the drive containing the VirtIO ISO. For each device, follow the wizard
instructions until you receive installation confirmation.
Note: Ensure that you select the x86 subdirectory for 32-bit Windows or the amd64 subdirectory for
64-bit Windows.

a. System Devices > Nutanix VirtIO Balloon Drivers

b. Network Adapter > Nutanix VirtIO Ethernet Adapter.

c. Processors > Storage Controllers > Nutanix VirtIO SCSI pass through Controller
The Nutanix VirtIO SCSI pass through Controller prompts you to restart your system. You may
restart at any time to successfully install the controller.



Figure: List of Nutanix VirtIO downloads



Upgrading Nutanix VirtIO for Windows

This topic describes how to upload and upgrade Nutanix VirtIO and Nutanix VirtIO Microsoft Installer (MSI).
The MSI installs and upgrades the Nutanix VirtIO drivers.
Before you begin: Make sure that you meet the VirtIO requirements. See VirtIO Requirements on page 36.

1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.

2. Select the ISO if you are creating a new Windows VM.

Note: The installer is available on the ISO if your VM does not have internet access.

a. Upload the ISO to the cluster as described in Configuring Images in the Web Console Guide.

b. Mount the ISO image in the CD-ROM drive of each VM in the cluster that you want to upgrade.

3. If you are updating drivers in a Windows VM, select the appropriate 32-bit or 64-bit MSI.

4. Upgrade drivers.

Note: The following options might prompt a system restart.

For SCSI drivers for SCSI boot disks, manually upgrade the drivers with the vendor's instructions.
For all other drivers, run the Nutanix VirtIO MSI installer (the preferred installation method) and
follow the wizard instructions.
Note: Running the Nutanix VirtIO MSI installer upgrades all drivers.

Upgrading drivers may cause VMs to restart automatically.

5. Read and accept the Nutanix VirtIO License Agreement. Click Install.

Figure: Nutanix VirtIO Windows Setup Wizard

The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.



Creating a Windows VM on AHV with Nutanix VirtIO (New and Migrated VMs)

Before you begin:


Upload your Windows Installer ISO to your cluster as described in Configuring Images in the Web
Console Guide.
Upload the Nutanix VirtIO ISO to your cluster as described in Configuring Images in the Web Console
Guide.
The following task describes how to create a new Windows VM in AHV or migrate a Windows VM from a
non-Nutanix source to AHV with the Nutanix VirtIO drivers. To install a new or migrated Windows VM with
Nutanix VirtIO, complete the following.

1. Log in to the Prism web console using your Nutanix credentials.

2. At the top left corner, click Home > VM.


The VM page appears.

3. Click + Create VM in the corner of the page.


The Create VM dialog box appears.

Figure: The Create VM dialog box

4. Complete the indicated fields.



a. NAME: Enter a name for the VM.

b. vCPU(s): Enter the number of vCPUs.

c. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.

d. MEMORY: Enter the amount of memory for the VM (in GiBs).

5. If you are creating a new Windows VM, add a Windows CD-ROM to the VM.

a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated fields.
The current CD-ROM opens in a new window.

b. OPERATION: CLONE FROM IMAGE SERVICE

c. BUS TYPE: IDE

d. IMAGE: Select the Windows OS Install ISO.

e. Click Update.

6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.

a. TYPE: CD-ROM

b. OPERATION:CLONE FROM IMAGE SERVICE

c. BUS TYPE: IDE

d. IMAGE: Select the Nutanix VirtIO ISO.

e. Click Add.

7. Add a new disk for the hard drive.

a. TYPE: DISK

b. OPERATION: ALLOCATE ON STORAGE CONTAINER

c. BUS TYPE: SCSI

d. STORAGE CONTAINER: Select the appropriate storage container.

e. SIZE: Enter the number for the size of the hard drive (in GiB).

f. Click Add to add the disk driver.

8. If you are migrating a VM, create a disk from the disk image. Click Add New Disk and complete the
indicated fields.

a. TYPE: DISK

b. OPERATION: CLONE FROM IMAGE

c. BUS TYPE: SCSI

d. CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you created
previously.



e. Click Add to add the disk driver.

9. (Optional) After you have migrated or created a VM, add a network interface card (NIC). Click Add New
NIC and complete the indicated fields.

a. VLAN ID: Choose the VLAN ID according to network requirements and enter the IP address if
required.

b. Click Add.

10. Once you complete the indicated fields, click Save.

What to do next: Install Windows by following Installing Windows on a VM on page 43.

Installing Windows on a VM
Before you begin: Create a Windows VM. See "Creating a Windows VM on AHV after Migration" in the
Migration Guide.
To install a Windows VM, do the following.
Note: Nutanix VirtIO cannot be used to install Windows 7 or Windows Server 2008 R2.

1. Log in to the Prism UI using your Nutanix credentials.

2. At the top left corner, click Home > VM.


The VM page appears.

3. Select the Windows VM.

4. In the center of the VM page, click the Power On button.

5. Click the Launch Console button.


The Windows console opens in a new window.

6. Select the desired language, time and currency format, and keyboard information.

7. Click Next > Install Now.


The Windows Setup window displays the operating systems to install.

8. Select the Windows OS you want to install.

9. Click Next and accept the license terms.

10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.
The browse folder opens.

11. Choose the Nutanix VirtIO CD drive and then the folder for your Windows OS version. Click OK.



Figure: Select the Nutanix VirtIO drivers for your OS

The Select the driver to install window appears.

12. Select all listed drivers and click Next.

13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress which can take several minutes.

14. Fill in your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows Setup completes installation.

Configuring a Windows VM to use Unified Extensible Firmware Interface (UEFI)


To enable a VM running a UEFI-enabled operating system to start up, you must configure the VM to use
UEFI.
To configure a VM to use UEFI, do the following:

1. Log on to the AHV host with SSH.

2. Log on to the Controller VM with SSH.

3. Configure the VM to use UEFI.


nutanix@cvm$ acli vm.update vm uefi_boot=True

Replace vm with the name of the VM.

VM Import
If you have legacy KVM VMs from a Nutanix solution that did not offer virtualization management, you must
import the VMs using the import_vm utility from the Controller VM.



import_vm

Usage
nutanix@cvm$ import_vm vm [vm2 vm3 .. vmN]

Required Arguments
A space-separated list of VMs to import
Examples
Import two VMs
nutanix@cvm$ import_vm vm24 vm25

Import VMs from another host


nutanix@cvm$ import_vm --host 10.1.231.134 vm24 vm25

Use default network vlan.231


nutanix@cvm$ import_vm --default_network vlan.231 vm24 vm25

Optional Arguments
--convert_virtio_disks
Convert Virtio disks attached to the VM to SCSI disks. (default: skip Virtio disks)
--default_network
Add NICs to the network specified with this parameter if the utility cannot determine the
appropriate network. (default: do not attach indeterminate NICs to any network)
--host
Connect to a host other than the host where the Controller VM is running. (default: host of the
Controller VM where the utility is run)
--ignore_multiple_tags
Add NICs to the network specified with --default_network if they have multiple VLAN tags.
(default: do not attach NICs with multiple VLAN tags to any network)

Uploading Files to DSF for Microsoft Windows Users


If you are a Microsoft Windows user, you can securely upload files to DSF by using the following
procedure.

1. Authenticate by using Prism username and password or, for advanced users, use the public key that is
managed through the Prism cluster lockdown user interface.

2. Use WinSCP, with SFTP selected, to connect to the Controller VM through port 2222 and start browsing
the DSF data store.

Note: The root directory displays storage containers and you cannot change it. You can only
upload files to one of the storage containers and not directly to the root directory. To create or
delete storage containers, you can use the Prism user interface.
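As an illustration, a command-line SFTP client can be used in place of WinSCP. The following sketch assumes a standard OpenSSH sftp client; the Controller VM IP address, storage container name, and file name shown here are placeholders:
user@workstation$ sftp -P 2222 admin@192.0.2.20
sftp> cd default-container
sftp> put vm-disk-image.qcow2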



6
Event Notifications
You can register webhook listeners with the Nutanix event notification system by creating webhooks on the
Nutanix cluster. For each webhook listener, you can specify the events for which you want notifications to
be generated. Multiple webhook listeners can be notified for any given event. The webhook listeners can
use the notifications to configure services such as load balancers, firewalls, TOR switches, and routers.
Notifications are sent in the form of a JSON payload in an HTTP POST request, enabling you to send them
to any endpoint device that can accept an HTTP POST payload at a URL. Notifications can also be sent
over an SSL connection.
For example, if you register a webhook listener and include VM migration as an event of interest, the
Nutanix cluster sends the specified URL a notification whenever a VM migrates to another host.
You register webhook listeners by using the Nutanix REST API, version 3.0. In the API request, you specify
the events for which you want the webhook listener to receive notifications, the listener URL, and other
information such as a name and description for the webhook.

Generated Events
The following events are generated by an AHV cluster.

Virtual Machine Events

Event Description
VM.CREATE A VM is created.
VM.DELETE A VM is deleted.
When a VM that is powered on is deleted, in
addition to the VM.DELETE notification, a VM.OFF
event is generated.

VM.UPDATE A VM is updated.
VM.MIGRATE A VM is migrated from one host to another.
When a VM is migrated, in addition to the
VM.MIGRATE notification, a VM.UPDATE event is
generated.

VM.ON A VM is powered on.


When a VM is powered on, in addition to the
VM.ON notification, a VM.UPDATE event is
generated.



VM.OFF A VM is powered off.
When a VM is powered off, in addition to the
VM.OFF notification, a VM.UPDATE event is
generated.

VM.NIC_PLUG A virtual NIC is plugged into a network.


When a virtual NIC is plugged in, in addition to the
VM.NIC_PLUG notification, a VM.UPDATE event is
generated.

VM.NIC_UNPLUG A virtual NIC is unplugged from a network.


When a virtual NIC is unplugged, in addition to
the VM.NIC_UNPLUG notification, a VM.UPDATE
event is generated.

Virtual Network Events

Event Description
SUBNET.CREATE A virtual network is created.
SUBNET.DELETE A virtual network is deleted.
SUBNET.UPDATE A virtual network is updated.

Creating a Webhook
Send the Nutanix cluster an HTTP POST request whose body contains the information essential to
creating a webhook (the events for which you want the listener to receive notifications, the listener URL,
and other information such as a name and description of the listener).
Note: Each POST request creates a separate webhook with a unique UUID, even if the data
in the body is identical. Each webhook generates a notification when an event occurs, and that
results in multiple notifications for the same event. If you want to change a webhook, do not
create another one by sending a new POST request with the changes. Instead, update the
existing webhook. See Updating a Webhook on page 49.



To create a webhook, send the Nutanix cluster an API request of the following form:

POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}

Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
name. Name for the webhook.
post_url. URL at which the webhook listener receives notifications.
username and password. User name and password to use for authenticating to the listener. Include
these parameters if the listener requires them.
events_filter_list. Comma-separated list of events for which notifications must be generated.
description. Description of the webhook.
api_version. Version of Nutanix REST API in use.
The following sample API request creates a webhook that generates notifications when VMs are
powered on and powered off:
POST https://192.0.2.3:9440/api/nutanix/v3/webhooks
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "vm_notifications_webhook",
"resources": {
"post_url": "http://192.0.2.10:8080/",
"credentials": {
"username":"admin",
"password":"Nutanix/4u"
},
"events_filter_list": [
"VM.ON", "VM.OFF", "VM.UPDATE", "VM.CREATE", "NETWORK.CREATE"
]
},
"description": "Notifications for VM events."
},
"api_version": "3.0"
}
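You can send such a request with any HTTP client. For example, with curl, assuming that the request body shown above is saved in a file named webhook.json, that admin:password is replaced with a valid Prism user name and password, and that -k is acceptable because the cluster presents a self-signed certificate:
user@workstation$ curl -k -u admin:password -X POST \
  -H "Content-Type: application/json" \
  -d @webhook.json \
  https://192.0.2.3:9440/api/nutanix/v3/webhooks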



The Nutanix cluster responds to the API request with a 200 OK HTTP response that contains the UUID
of the webhook that is created. The following response is an example:
{
"status": {
"state": "PENDING"
},
"spec": {
. . .
"uuid": "003f8c42-748d-4c0b-b23d-ab594c087399"
}
}

The notification contains metadata about the entity along with information about the type of event that
occurred. The event type is specified by the event_type parameter.

Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created successfully.
To list webhooks, do the following:

To show a single webhook, send the Nutanix cluster an API request of the following form:
GET https://cluster_IP_address/api/nutanix/v3/webhooks/webhook_uuid

Replace cluster_IP_address with the IP address of the Nutanix cluster. Replace webhook_uuid with the
UUID of the webhook that you want to show.

To list all the webhooks configured on the Nutanix cluster, send the Nutanix cluster an API request of
the following form:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks/list
{
"filter": "string",
"kind": "webhook",
"sort_order": "ASCENDING",
"offset": 0,
"total_matches": 0,
"sort_column": "string",
"length": 0
}

Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
filter. Filter to apply to the list of webhooks.
sort_order. Order in which to sort the list of webhooks. Ordering is performed on webhook names.
offset.
total_matches. Number of matches to list.
sort_column. Parameter on which to sort the list.
length.
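For example, assuming the same placeholder cluster address and credentials as in the earlier curl sketch, the following request lists up to 20 webhooks:
user@workstation$ curl -k -u admin:password -X POST \
  -H "Content-Type: application/json" \
  -d '{"kind": "webhook", "length": 20}' \
  https://192.0.2.3:9440/api/nutanix/v3/webhooks/list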

Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update the name,
listener URL, event list, and description.



To update a webhook, send the Nutanix cluster an API request of the following form:

PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid

{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}

Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of
the webhook you want to update, respectively. For a description of the parameters, see Creating a
Webhook on page 47.

Deleting a Webhook

To delete a webhook, send the Nutanix cluster an API request of the following form:

DELETE https://cluster_IP_address/api/nutanix/v3/webhooks/webhook_uuid

Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of the
webhook you want to delete, respectively.

Notification Format

An event notification has the same content and format as the response to the version 3.0 REST API
call associated with that event. For example, the notification generated when a VM is powered on has



the same format and content as the response to a REST API call that powers on a VM. However, the
notification also contains a notification version, an event type, and an entity reference, as shown:
{
"version":"1.0",
"data":{
"metadata":{

"status": {
"name": "string",
"providers": {},
.
.
.
"event_type":"VM.ON",
"entity_reference":{
"kind":"vm",
"uuid":"63a942ac-d0ee-4dc8-b92e-8e009b703d84"
}
}

For VM.DELETE and SUBNET.DELETE, the UUID of the entity is included but not the metadata.



7
Integration with Network Functions
On AHV clusters, you can run network functions such as security devices and firewalls to process guest
VM traffic as the traffic flows through the Open vSwitch (OVS) network. These network functions are in the
form of VMs placed along the path of network traffic and can perform various tasks that are not available
natively in AHV, such as packet inspection and filtering.
Support for network function VMs is available only on AHV clusters.

Network Function Architecture


AHV clusters include a separate OVS bridge named network function bridge (in the OVS configuration, the
name is abbreviated to br.nf). This bridge is dedicated to redirecting traffic to network function VMs, as shown:

Figure: Network Function Architecture

Traffic originating from guest VMs connected to a network is received on the local OVS bridge. Thereafter, the
traffic is passed on to the network function VMs through the network function bridge. The network function
VMs are deployed in a chain called a network function chain. After the packets are processed by all the
network function VMs in the chain, the traffic is directed toward the uplink interfaces or bond through an
uplink bridge. The flow is reversed for traffic directed at the guest VMs.

Types of Network Function VMs


In the context of a Nutanix cluster, network function VMs are categorized based on whether they only
inspect packets or also process them. If a network function VM must only inspect traffic, the VM is
deployed in tap mode. If a network function VM must process packets, the VM is deployed in inline mode.
Network function VMs use specialized virtual NICs referred to as network function NICs to receive packets
from guest VMs and inject modified packets back into the network. In addition to the network function NICs,
each network function VM can have virtual NICs meant for management connectivity, referred to as normal
NICs.



A normal NIC belongs to a network and has an IP address, but a network function NIC does not belong to
any network and does not have an IP address.

Network Function VMs in Tap Mode

In the tap mode, a network function VM implements the port mirroring concept through a single network
interface referred to as a tap interface, which is also an interface of type network function NIC. Through the
tap interface, the network function VM only receives a copy of each packet flowing through the network.
The following image depicts the traffic flow through a network function VM running in tap mode. The figure
shows only the tap interface:

Figure: Network Function VM in Tap Mode

A network function VM running in tap mode can be used only to monitor traffic. Any packets that such a
network function VM attempts to inject back into the network are dropped.

Network Function VMs in Inline Mode

A network function VM running in inline mode has two network function NICs. The network function
VM receives guest VM traffic through a network function NIC referred to as the ingress interface and
injects processed packets back into the network through a network function NIC referred to as the egress
interface.
The following image depicts the traffic flow through a network function VM running in inline mode. The
figure shows only the network function NICs:



Figure: Network Function VM in Inline Mode

Network Function VM Chains


A network function VM chain is an ordered list of network function VMs that process traffic as it passes
through them. By design, you need a network function VM chain even if you want to deploy only one
network function VM.
You can create a network function VM chain that includes both network function VMs running in inline
mode and network function VMs running in tap mode. You can also deploy a VM in both modes in a chain
as long as the VM has an ingress interface, an egress interface, and a tap interface. To deploy a VM in
both modes, you must add the VM to the chain twice, once in the tap mode and once in the inline mode.

Traffic Sources

Network function VM chains can receive traffic from the following entities:
A network
A vNIC on a guest VM

Processing Order

Network function VMs in a chain receive and process traffic in the order in which they are placed in the
chain. Network function VMs running in inline mode enforce the processing order implied by their position
in the chain while network function VMs running in tap mode do not. For example, in the following figure,
network function VMs B and C can process traffic originating from the guest VM only after network function
VM A has processed the traffic and injected it back into the network. However, network function VMs B and
C receive network function VM A's egress traffic simultaneously because network function VM B, running in
tap mode, only receives a copy of the packets on the network.



Figure: Network Function VMs in a Chain

Network Function Limitations


This feature has the following limitations:
Network function VMs must reside within the Nutanix cluster. Devices outside the cluster are not
supported.
You must create an instance of a network function VM on each host in a Nutanix cluster.

Enabling and Disabling Support for Network Function VMs


Support for network function VMs is disabled by default.
To enable or disable support for network function VMs, do the following:

1. Log on to the Controller VM.

2. Enable or disable support for network function VMs.


nutanix@cvm$ allssh \
'manage_ovs (enable_bridge_chain | disable_bridge_chain)'
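For example, to enable bridge chaining on all hosts in the cluster:
nutanix@cvm$ allssh 'manage_ovs enable_bridge_chain'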

Creating a Network Function VM


You can create a network function VM from the aCLI. You need the disk image of the network function VM.
You must create a network function VM on each host in the cluster, so perform the following procedure on
each host.
To create a network function VM, do the following:

1. Log on to the Controller VM with SSH.

2. Access the Acropolis command line.


nutanix@cvm$ acli
<acropolis>

3. Create a VM.
<acropolis> vm.create vm_name



Replace vm_name with a name for the network function VM. This is the name that identifies a specific
instance of the network function VM. For example, if you are deploying a firewall on each host, you can
replace vm_name for the instances with firewall_1 , firewall_2 , and so on.

4. Specify that the VM is an agent VM (agent VMs are powered on before the guest VMs are powered
on, are not powered off until all the guest VMs are powered off, and are not migrated if the host goes
down). In the context of a Nutanix cluster, network function VMs are also a type of system VM, so
specify that the VM is a system VM. Also assign it a name.
<acropolis> vm.update vm_name agent_vm=true \
extra_flags=is_system_vm=true;system_vm_base_name="network_function_name"

Replace vm_name with the name of the VM. Replace network_function_name with a label that
describes the function of the VM. Note the difference between vm_name and network_function_name.
The name vm_name identifies a specific VM and therefore must be unique across the cluster. The label
network_function_name identifies its function and can be used for all instances of the network function
on the cluster. If you are deploying an instance of a firewall on each node and you replace vm_name for
the instances with firewall_1 , firewall_2 , and so on, you can replace network_function_name for all
the instances with firewall . Each node can have at most one VM with a given label.

5. Attach a clone of the network function VM's disk image to the VM.
<acropolis> vm.disk_create vm_name clone_from_image="image"

Replace vm_name with the name of the network function VM and replace image with the name of the
network function VM's disk image. For information about how to download an image from a URL or
upload an image from your workstation, see "Configuring Images" in the "Virtual Machine Management"
chapter of the Prism Web Console Guide.

6. If you want the network function VM to have management connectivity, attach a virtual NIC to the VM
and plug it into the management network.
<acropolis> vm.nic_create vm_name type=kNormalNic network="network"

Replace vm_name with the name of the network function VM and network with the network into which
to plug the VM.

7. Attach a network function NIC to the VM.


<acropolis> vm.nic_create vm_name type=kNetworkFunctionNic \
network_function_nic_type=(kIngress | kEgress | kTap)

Replace vm_name with the name of the network function VM. Repeat the command to attach a second
network function NIC to the VM if you want to deploy the VM in inline mode.

Note: Network function VMs that are in inline mode must have both ingress and egress NICs.
Network function VMs in tap mode must have a tap NIC. A single network function VM can be
deployed in both inline and tap modes, in which case it is expected to have all three NICs.

8. Configure a VM-host affinity rule to ensure that the network function VM always runs on the same AHV
host.
<acropolis> vm.affinity_set vm_name host_list=host

Replace vm_name with the name of the network function VM. Replace host with the name of the host
on which the VM must run.

9. Start the network function VM.


<acropolis> vm.on vm_name

Replace vm_name with the name of the network function VM.
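Putting the steps together, the following sketch creates a single inline network function VM on one host. The VM name (firewall_1), label (firewall), image name (firewall_image), management network (mgmt_network), and host name (host-1) are placeholders for illustration only:
<acropolis> vm.create firewall_1
<acropolis> vm.update firewall_1 agent_vm=true \
extra_flags=is_system_vm=true;system_vm_base_name="firewall"
<acropolis> vm.disk_create firewall_1 clone_from_image="firewall_image"
<acropolis> vm.nic_create firewall_1 type=kNormalNic network="mgmt_network"
<acropolis> vm.nic_create firewall_1 type=kNetworkFunctionNic network_function_nic_type=kIngress
<acropolis> vm.nic_create firewall_1 type=kNetworkFunctionNic network_function_nic_type=kEgress
<acropolis> vm.affinity_set firewall_1 host_list=host-1
<acropolis> vm.on firewall_1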



Creating a Network Function VM Chain
You chain network function VMs by creating a chain and then adding network function VMs to it. Add VMs
in the order in which you want them to receive traffic, and then specify the source of traffic for the chain.
Do the following to create a network function VM chain:

1. Log on to the Controller VM with SSH.

2. Access the Acropolis command line.


nutanix@cvm$ acli
<acropolis>

3. Create a chain.
<acropolis> nf.chain_create chain_name

Replace chain_name with a name for the chain.

4. Run the following command to add a network function VM to the chain:


<acropolis> nf.chain_add_function chain_name network_function_name="nf_name" \
type= (kInline | kTap)

Replace chain_name with the name of the chain.


Replace nf_name with the network function name of the VM. The network function name is the label
that you assigned to the system_vm_base_name parameter when creating the network function VM.
The type parameter specifies the mode in which you want to deploy the network function VM (inline
mode or tap mode). If a VM has all three types of network function NICs (an egress interface, an
ingress interface, and a tap interface), you can add the VM in both modes by running this command
once with type set to kInline and then running the command a second time with type set to kTap .

5. Do one of the following to specify the source of traffic for the chain:
To receive traffic from a particular network, update the network to enable the network function VM
chain.
<acropolis> net.update_network_function_chain network chain_name

Replace network with the name of the network from which the chain must receive traffic.
Replace chain_name with the name of the chain.
Repeat the command for each network from which the network function VM chain must receive
traffic.

To receive traffic from a particular virtual NIC on a guest VM, update the virtual NIC.
<acropolis> vm.nic_update_network_function_chain vm mac_addr chain_name

Replace vm with the name of the guest VM whose virtual NIC you want to configure.
Replace mac_addr with the MAC address of the virtual NIC from which you want the chain to receive
traffic.
Replace chain_name with the name of the chain that must receive traffic from the virtual NIC.



Clearing a Network Function Configuration
You can remove all the network function VMs from a chain or you can clear the network function
configuration associated with a virtual NIC from which a chain receives traffic.
To clear a network function configuration, do one of the following:

Remove the network function VMs from the chain.


<acropolis> nf.chain_clear_functions chain_name

Replace chain_name with the name of the chain from which you want to remove network function VMs.

Remove the network function configuration from a virtual NIC on a guest VM.
<acropolis> vm.nic_clear_network_function_chain vm mac_addr

Replace vm with the name of the guest VM.


Replace mac_addr with the MAC address of the virtual NIC whose network function VM configuration
you want to clear.

Example: Configuring a Sample Network Function Chain


Consider that you have created the following network function VMs and you want to add them to a chain in
the listed order:
VM1, a network function VM running in inline mode with network function name nf_vm_inline_1 and
network function NICs eth0 and eth1 .
VM2, a network function VM running in tap mode with network function name nf_vm_tap and network
function NIC eth0 .
VM3, a network function VM running in inline mode with network function name nf_vm_inline_2 and
network function NICs eth0 and eth1 .
Further consider that you want the network function chain to receive traffic from the guest VMs on a
network named network1 .
The following figure depicts the network function VM chain:

Figure: Sample Network Function Chain



To configure the network function VM chain, do the following:
1. Create a chain.
<acropolis> nf.chain_create chain1
2. Add VM1 to the chain.
<acropolis> nf.chain_add_function chain1 type=kInline \
port1=local:nf_vm_inline_1:eth0 port2=local:nf_vm_inline_1:eth1
3. Add VM2 to the chain.
<acropolis> nf.chain_add_function chain1 type=kTap port1=local:nf_vm_tap:tap
4. Add VM3 to the chain.
<acropolis> nf.chain_add_function chain1 type=kInline \
port1=local:nf_vm_inline_2:eth0 port2=local:nf_vm_inline_2:eth1
5. Update the network to enable the network function VM chain to receive traffic from network network1.
<acropolis> net.update_network_function_chain network1 chain1



Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface                 Target                         Username        Password

Nutanix web console       Nutanix Controller VM          admin           Nutanix/4u
vSphere Web Client        ESXi host                      root            nutanix/4u
vSphere client            ESXi host                      root            nutanix/4u
SSH client or console     ESXi host                      root            nutanix/4u
SSH client or console     AHV host                       root            nutanix/4u
SSH client or console     Hyper-V host                   Administrator   nutanix/4u
SSH client                Nutanix Controller VM          nutanix         nutanix/4u
SSH client                Nutanix Controller VM          admin           Nutanix/4u
SSH client or console     Acropolis OpenStack            root            admin
                          Services VM (Nutanix OVM)

Version
Last modified: November 10, 2017 (2017-11-10 15:27:42 GMT+5.5)

