Acropolis 5.1
10-Nov-2017
Contents
1: Virtualization Management..................................................................... 4
Storage Overview.................................................................................................................................6
Virtualization Management Web Console Interface............................................................................ 7
2: Node Management...................................................................................8
Controller VM Access.......................................................................................................................... 8
Admin Access to Controller VM................................................................................................8
Shutting Down a Node in a Cluster (AHV)....................................................................................... 10
Starting a Node in a Cluster (AHV)...................................................................................................10
Changing CVM Memory Configuration (AHV)...................................................................................12
Changing the Acropolis Host Name.................................................................................................. 12
Changing the Acropolis Host Password............................................................................................13
Nonconfigurable AHV Components...................................................................................................14
VM Import...........................................................................................................................................44
Uploading Files to DSF for Microsoft Windows Users...................................................................... 45
6: Event Notifications................................................................................ 46
Generated Events.............................................................................................................................. 46
Creating a Webhook.......................................................................................................................... 47
Listing Webhooks...............................................................................................................................49
Updating a Webhook......................................................................................................................... 49
Deleting a Webhook.......................................................................................................................... 50
Notification Format............................................................................................................................. 50
1: Virtualization Management
Nutanix nodes with AHV include a distributed VM management service responsible for storing VM
configuration, making scheduling decisions, and exposing a management interface.
Snapshots
Snapshots are crash-consistent. They do not include the VM's current memory image, only the VM
configuration and its disk contents. The snapshot is taken atomically across the VM configuration and disks
to ensure consistency.
If multiple VMs are specified when creating a snapshot, all of their configurations and disks are placed into
the same consistency group. Do not specify more than 8 VMs at a time.
If no snapshot name is provided, the snapshot is referred to as "vm_name-timestamp", where the
timestamp is in ISO-8601 format ( YYYY-MM-DDTHH:MM:SS.mmmmmm ).
Virtual Networks
Each VM network interface is bound to a virtual network. Each virtual network is bound to a single VLAN;
trunking VLANs to a virtual network is not supported. Networks are designated by the L2 type ( vlan ) and
the VLAN number. For example, a network bound to VLAN 66 would be named vlan.66 .
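For example, such a network can be created from any Controller VM with acli, using the same net.create syntax that appears later in this guide:
nutanix@cvm$ acli net.create vlan.66 vlan=66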
Each virtual network maps to virtual switch br0 . The user is responsible for ensuring that the specified
virtual switch exists on all hosts, and that the physical switch ports for the virtual switch uplinks are properly
configured to receive VLAN-tagged traffic.
A VM NIC must be associated with a virtual network. It is not possible to change this association. To
connect a VM to a different virtual network, it is necessary to create a new NIC. While a virtual network is in
use by a VM, it cannot be modified or deleted.
A virtual network can have an IPv4 configuration, but it is not required. A virtual network with an IPv4
configuration is a managed network; one without an IPv4 configuration is an unmanaged network. A VLAN
can have at most one managed network defined. If a virtual network is managed, every NIC must be
assigned an IPv4 address at creation time.
A managed network can optionally have one or more non-overlapping DHCP pools. Each pool must be
entirely contained within the network's managed subnet.
If the managed network has a DHCP pool, the NIC automatically gets assigned an IPv4 address from one
of the pools at creation time, provided at least one address is available. Addresses in the DHCP pool are
not reserved. That is, you can manually specify an address belonging to the pool when creating a virtual
adapter. If the network has no DHCP pool, you must specify the IPv4 address manually.
All DHCP traffic on the network is rerouted to an internal DHCP server, which allocates IPv4 addresses.
DHCP traffic on the virtual network (that is, between the guest VMs and the Controller VM) does not reach
the physical network, and vice versa.
A network must be configured as managed or unmanaged when it is created. It is not possible to convert
one to the other.
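For example, the following sketch creates a managed network on VLAN 66 with a gateway address and a DHCP pool. The ip_config and net.add_dhcp_pool parameters shown here are assumptions based on common acli usage and may differ in your AOS version; the addresses are placeholders.
nutanix@cvm$ acli net.create vlan.66 vlan=66 ip_config=10.10.66.1/24
nutanix@cvm$ acli net.add_dhcp_pool vlan.66 start=10.10.66.100 end=10.10.66.200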
Host Maintenance
When a host is in maintenance mode, it is marked as unschedulable so that no new VM instances are
created on it. Subsequently, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails (for example, because there are insufficient resources available elsewhere in
the cluster), the host remains in the "entering maintenance mode" state, where it is marked unschedulable,
waiting for user remediation. You can shut down VMs on the host or move them to other nodes. Once the
host has no more running VMs it is in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to a temporary host. After exiting
maintenance mode, those VMs are automatically returned to the original host, eliminating the need to
manually move them.
Storage Overview
Acropolis uses iSCSI and NFS for storing VM files.
Each disk that maps to a VM is defined as a separate iSCSI target. The Nutanix scripts work with
libvirt to create the necessary iSCSI structures in Acropolis. These structures map to
vDisks created in the Nutanix storage container specified by the administrator. If no storage container is
specified, the script uses the default storage container name.
Unlike with Microsoft Hyper-V and VMware ESXi clusters, in which the entire traffic on a node is rerouted
to a randomly selected healthy Controller VM when the local Controller VM becomes unavailable, in
an Acropolis cluster, a rerouting decision is taken on a per-vDisk basis. When the local Controller VM
becomes unavailable, iSCSI connections are individually redirected to a randomly selected healthy
Controller VM, resulting in distribution of load across the cluster.
Instead of maintaining live, redundant connections to other Controller VMs, as is the case with the Device
Mapper Multipath feature, AHV initiates an iSCSI connection to a healthy Controller VM only when the
local Controller VM becomes unavailable.
Nutanix storage containers can be accessed by the Acropolis host as NFS datastores. NFS datastores are
used to manage images that may be used by multiple VMs, such as ISO files. When an image is mapped to a VM,
the script maps the file in the NFS datastore to the VM as an iSCSI device, just as it does for virtual disk
files.
Images must be specified by an absolute path relative to the NFS server root. For example, if a datastore
named ImageStore contains a subdirectory called linux, the path required to access this set of files is
/ImageStore/linux. Use the nfs_ls script to browse the datastore from the Controller VM:
nutanix@cvm$ nfs_ls --long --human_readable /ImageStore/linux
-rw-rw-r-- 1 1000 1000 Dec 7 2012 1.6G CentOS-6.3-x86_64-LiveDVD.iso
-rw-r--r-- 1 1000 1000 Jun 19 08:56 523.0M archlinux-2013.06.01-dual.iso
-rw-rw-r-- 1 1000 1000 Jun 3 19:22 373.0M grml64-full_2013.02.iso
-rw-rw-r-- 1 1000 1000 Nov 29 2012 694.3M ubuntu-12.04.1-amd64.iso
Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console or nCLI.
Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access
with password or key authentication. Some functions, however, require logging on to a Controller VM
with SSH. Exercise caution whenever connecting directly to a Controller VM as the risk of causing cluster
issues is increased.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
By default, the admin user password does not have an expiry date, but you can change the password at
any time.
When you change the admin user password, you must update any applications and scripts using the
admin user credentials for authentication. Nutanix recommends that you create a user assigned with the
admin role instead of using the admin user for authentication. The Prism Web Console Guide describes
authentication and roles.
Following are the default credentials to access a Controller VM.
Controller VM Credentials
Perform the following procedure to log on to the Controller VM by using the admin user with SSH for the
first time.
1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and
the following credentials.
User name: admin and Password: Nutanix/4u
You are now prompted to change the default password.
2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
Retype new password:
Password changed.
Caution: Verify the data resiliency status of your cluster. If the cluster has only replication factor 2
(RF2), you can shut down only one node per cluster. If more than one node in an RF2 cluster must be
shut down, shut down the entire cluster instead.
Note the value of Hypervisor address for the node you want to shut down.
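The command that places the node in maintenance mode is issued from a Controller VM. A typical invocation is shown below; the host.enter_maintenance_mode subcommand is assumed to be the counterpart of the host.exit_maintenance_mode command shown later in this procedure.
nutanix@cvm$ acli host.enter_maintenance_mode Hypervisor-address wait=true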
Replace Hypervisor address with the value of Hypervisor address for the node you want to shut
down. The value of Hypervisor address is either the IP address of the AHV host or the host name.
Specify wait=true to wait for the host evacuation attempt to finish.
Replace cvm_name with the name of the Controller VM that you found from the preceding command.
5. If the node is in maintenance mode, log on to the Controller VM and take the node out of maintenance
mode.
nutanix@cvm$ acli
<acropolis> host.exit_maintenance_mode AHV-hypervisor-IP-address
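To verify that cluster services are up, log on to any Controller VM in the cluster with SSH and check the service status with the standard cluster command:
nutanix@cvm$ cluster status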
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886, 8888, 8889,
8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627, 4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947, 9976, 9977,
10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]
MinervaCVM UP [10174, 10200, 10201, 10202, 10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
APLOS UP [10343, 10368, 10369, 10370, 10502,
10503]
Lazan UP [10377, 10402, 10403, 10404]
Orion UP [10409, 10449, 10450, 10474]
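The check referred to below is a cluster health check. A typical invocation, assuming NCC is installed on the Controller VM, is:
nutanix@cvm$ ncc health_checks run_all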
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you are
unable to resolve the issues, contact Nutanix support for assistance.
3. Open Configure CVM from the gear icon in the web console.
The Configure CVM dialog box is displayed.
4. Select the Target CVM Memory Allocation memory size and click Apply.
The values available from the drop-down menu can range from 16 GB to the maximum available
memory in GB.
AOS applies the selected memory size to each Controller VM that currently has less memory than the amount you choose.
If a Controller VM was already allocated more memory than your choice, it remains at its current memory
amount. For example, selecting 28 GB upgrades any Controller VM currently at 20 GB. A Controller VM
with a 48 GB memory allocation remains unmodified.
2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/sysconfig/
network file.
HOSTNAME=my_hostname
Replace my_hostname with the name that you want to assign to the host.
3. Use the text editor to replace the host name in the /etc/hostname file.
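Both edits can also be made non-interactively. The following sketch assumes the same my_hostname value and the standard file locations described above:
root@ahv# sed -i 's/^HOSTNAME=.*/HOSTNAME=my_hostname/' /etc/sysconfig/network
root@ahv# echo my_hostname > /etc/hostname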
3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
The password you choose must meet the following complexity requirements:
In configurations with high-security requirements, the password must contain:
At least 15 characters.
At least one uppercase letter (A-Z).
At least one lowercase letter (a-z).
At least one digit (0-9).
At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
At least eight characters different from the previous password.
At most three consecutive occurrences of any given character.
The password cannot be the same as the last 24 passwords.
In both types of configuration, if a password for an account is entered three times unsuccessfully within
a 15-minute period, the account is locked for 15 minutes.
Warning: Modifying any of the settings listed here may render your cluster inoperable.
Warning: You must not run any commands on a Controller VM that are not covered in the Nutanix
documentation.
Nutanix Software
Settings and contents of any Controller VM, including the name and the virtual hardware configuration
(except memory when required to enable certain features)
AHV Settings
Note: If the AOS upgrade process detects that any node hypervisor host has total physical
memory of 64 GB or greater, it automatically increases the memory of any Controller VM in that node
that has less than 32 GB of memory by 4 GB, up to a maximum of 32 GB.
If the AOS upgrade process detects any node with less than 64 GB memory size, no memory
changes occur.
For nodes with ESXi hypervisor hosts with total physical memory of 64 GB, the Controller VM
is upgraded to a maximum 28 GB. With total physical memory greater than 64 GB, the existing
Controller VM memory is increased by 4 GB.
To calculate the number of vCPUs for your model, use the number of physical cores per socket in your
model. The minimum number of vCPUs your Controller VM can have is eight and the maximum number
is 12, unless otherwise noted.
If your CPU has eight or fewer logical cores, allocate a maximum of 75 percent of the cores of a single
CPU to the Controller VM. For example, if your CPU has 6 cores, allocate 4 vCPUs.
The following table shows the minimum amount of memory required for the Controller VM on each node
for platforms that do not follow the default. For the workload translation into models, see Platform Workload
Translation (G5/Broadwell) on page 16.
Workload Exceptions
Note: Upgrading to 5.1 requires a 4 GB memory increase, unless the CVM already has
32 GB of memory.
If all the data disks in a platform are SSDs, the node is assigned the High Performance workload, with the
following exceptions.
Klas Voyager 2 uses SSDs, but to balance the workload, this platform's default workload is VDI.
Cisco B-series is expected to have large remote storage and two SSDs as a local cache for the hot tier,
so this platform workload is VDI.
[Platform workload table: the column headings were not preserved in this copy. The rows list the models HX1310, C220-M4L, HX2710-E, Hyperflex HX220C-M4S, HX3510-FG, HX3710-F, NX-8035-G5, HX5510-C, NX-6035-G5, NX-6155-G5, and NX-8150-G5.]
Platform Default
The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.
Nutanix Platforms
XC730xd-24 32 20 8
XC6320-6AF
XC630-10AF
Lenovo Platforms
HX-3500 28 8
HX-5500
HX-7500
Open vSwitch Do not modify the OpenFlow tables that are associated with the default
OVS bridge br0.
VLANs Add the Controller VM and the AHV host to the same VLAN. By default,
the Controller VM and the hypervisor are assigned to VLAN 0, which
effectively places them on the native VLAN configured on the upstream
physical switch.
Do not add any other device, including guest VMs, to the VLAN to which
the Controller VM and hypervisor host are assigned. Isolate guest VMs on
one or more separate VLANs.
OVS bonded port (bond0) Aggregate the 10 GbE interfaces on the physical host to an OVS bond
on the default OVS bridge br0 and trunk these interfaces on the physical
switch.
By default, the 10 GbE interfaces in the OVS bond operate in the
recommended active-backup mode.
Note: The mixing of bond modes across AHV hosts in the same
cluster is not recommended and not supported.
LACP configurations are known to work, but support might be limited.
1 GbE and 10 GbE interfaces (physical host) If you want to use the 10 GbE interfaces for guest VM traffic,
make sure that the guest VMs do not use the VLAN over which the Controller VM and
hypervisor communicate.
If you want to use the 1 GbE interfaces for guest VM connectivity, follow
the hypervisor manufacturer's switch port and networking configuration
guidelines.
Do not include the 1 GbE interfaces in the same bond as the 10 GbE
interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge
br0, either individually or in a second bond. Use them on other bridges.
IPMI port on the hypervisor host Do not trunk switch ports that connect to the IPMI interface. Configure the
switch ports as access ports for management simplicity.
Upstream physical switch Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-
load implementations might run smoothly with such technologies,
poor performance, VM lockups, and other issues might occur as
implementations scale upward (see Knowledge Base article KB1612).
Nutanix recommends the use of 10Gbps, line-rate, non-blocking switches
with larger buffers for production workloads.
Use an 802.3-2012 standards-compliant switch that has a low-latency,
cut-through design and provides predictable, consistent traffic latency
regardless of packet size, traffic pattern, or the features enabled on
the 10 GbE interfaces. Port-to-port latency should be no higher than 2
microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch
ports that are connected to the hypervisor host.
Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for
each port.
Physical Network Layout Use redundant top-of-rack switches in a traditional leaf-spine architecture.
This simple, flat network design is well suited for a highly distributed,
shared-nothing compute and storage architecture.
Add all the nodes that belong to a given cluster to the same Layer-2
network segment.
Other network layouts are supported as long as all other Nutanix
recommendations are followed.
Controller VM Do not remove the Controller VM from either the OVS bridge br0 or the
native Linux bridge virbr0.
This diagram shows the recommended network configuration for an Acropolis cluster. The interfaces in the
diagram are connected with colored lines to indicate membership to different VLANs:
The following diagram illustrates the default factory configuration of OVS on an Acropolis node:
The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to
bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.
To show interface properties such as link speed and status, log on to the Controller VM, and then list the
physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces
To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then
list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks
Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --
bridge_name parameter if you want to view uplink information for the default OVS bridge br0.
To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the
configuration of Open vSwitch.
root@ahv# ovs-vsctl show
59ce3252-f3c1-4444-91d1-b5281b30cdba
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "vnet0"
Interface "vnet0"
Port "br0-arp"
Interface "br0-arp"
type: vxlan
options: {key="1", remote_ip="192.168.5.2"}
Port "bond0"
Interface "eth3"
Interface "eth2"
Port "bond1"
Interface "eth1"
Interface "eth0"
Port "br0-dhcp"
Interface "br0-dhcp"
type: vxlan
options: {key="1", remote_ip="192.0.2.131"}
ovs_version: "2.3.1"
To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the
configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
Replace bridge with a name for the bridge. Bridge names must not exceed 10 characters.
The output does not indicate success explicitly, so you can append && echo success to the command. If
the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
Executing ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success on the
cluster
================== 192.0.2.203 =================
FIPS mode initialized
Nutanix KVM
success
...
Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond, so
the disassociation is necessary to help prevent any unpredictable performance issues that might
result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that you
aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate OVS
bridge.
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
Replace bridge with the name of the bridge on which you want to create the bond. Omit the --
bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
A comma-separated list of the interfaces that you want to include in the bond. For example,
eth0,eth1 .
A keyword that indicates which interfaces you want to include. Possible keywords:
10g. Include all available 10 GbE interfaces
1g. Include all available 1 GbE interfaces
all. Include all available interfaces
For example, create a bond with interfaces eth0 and eth1 on a bridge named br1. Using allssh enables
you to use a single command to effect the change on every host in the cluster.
Note: If the bridge on which you want to create the bond does not exist, you must first create
the bridge. For information about creating an OVS bridge, see Creating an Open vSwitch
Bridge on page 25. The following example assumes that a bridge named br1 exists on every
host in the cluster.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces eth0,eth1 --bond_name bond1
update_uplinks'
2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link state
2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond1
2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1
2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for 192.0.2.21
To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:
2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to be
on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
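To confirm that the tag was applied, you can inspect the port with a standard Open vSwitch command (not specific to this guide):
root@ahv# ovs-vsctl list port br0 | grep tag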
5. Verify connectivity to the IP address of the AHV host by performing a ping test.
By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are
logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the
internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10
new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
<source bridge="br0" />
By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can
send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is
connected. If restricted to using access mode interfaces, a VM running an application on multiple VLANs
(such as a firewall application) must use multiple virtual NICs, one for each VLAN. Instead of configuring
multiple virtual NICs in access mode, you can configure a single virtual NIC on the VM to operate in trunk
mode. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its
own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the
trunk mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic only
over its own VLAN.
To configure a virtual NIC as an access port or trunk port, do the following:
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]
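For example, the following sketch creates a trunked NIC. The VM name, network, and trunked VLAN list are placeholders, and the format of the trunked_networks value is an assumption; see the command reference for the exact syntax.
nutanix@cvm$ acli vm.nic_create vm1 network=vlan.10 vlan_mode=kTrunked trunked_networks=20,30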
Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete syntax, see the command reference.
Caution: All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor
can be multihomed provided that one interface is on the same subnet as the Controller VM.
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0
For information about how to log on to a Controller VM, see Controller VM Access on page 8.
3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
Acropolis Host to a VLAN on page 26.
VM Management
The maximum number of disks that can be attached to a VM depends on the bus type:
SCSI: 256
PCI: 6
IDE: 4
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the
previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond
name are br0 and br0-up , respectively.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up
update_uplinks'
The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.
5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1
(bridge names must not exceed 10 characters.).
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'
6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to
a bond named br1-up .
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up
update_uplinks'
7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the guest VMs, and
associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN
10 .
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]
Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the
memory is not online, you cannot use the new memory. Perform the following procedure to bring the
memory online.
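On most Linux guest operating systems, hot-plugged memory blocks can be brought online through sysfs, similar to the CPU hot-plug example later in this section. For each offline memory block, run a command of the following form (this is generic Linux behavior, not specific to this guide):
$ echo online > /sys/devices/system/memory/memory<n>/state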
2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot-plugging memory
into that VM so that the total memory exceeds 3 GB results in a memory-overflow condition. To
resolve the issue, restart the guest OS (CentOS 7.2) with the following setting:
swiotlb=force
CPU OS Limitations
1. On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might
have to bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU
online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Replace vm-name with the name of the VM and new_memory_size with the memory size.
Replace vm-name with the name of the VM and n with the number of CPUs.
Note: After you upgrade from a version that does not support hot-plug to a version that does,
you must power cycle any VM that was instantiated and powered on before the
upgrade so that it becomes compatible with the memory and CPU hot-plug feature. This power cycle
has to be done only once after the upgrade. New VMs created on the supported version
have hot-plug compatibility by default.
When you power on a VM with GPU pass-through, the VM is started on the host that has the specified
GPU, provided that the Acropolis Dynamic Scheduler determines that the host has sufficient resources
to run the VM. If the specified GPU is available on more than one host, the Acropolis Dynamic Scheduler
ensures that a host with sufficient resources is selected. If sufficient resources are not available on any
host with the specified GPU, the VM is not powered on.
If you allocate multiple GPUs to a VM, the VM is started on a host if, in addition to satisfying Acropolis
Dynamic Scheduler requirements, the host has all of the GPUs that are specified for the VM.
If you want a VM to always use a GPU on a specific host, configure host affinity for the VM.
AHV supports running GPU cards in either graphics mode or compute mode. If a GPU is running in
compute mode, Nutanix user interfaces indicate the mode by appending the string compute to the model
name. No string is appended if a GPU is running in the default graphics mode.
If you want to change the mode of the firmware on a GPU, put the host in maintenance mode, and
then flash the GPU manually by logging on to the AHV host and performing standard procedures as
documented for Linux VMs by the vendor of the GPU card.
Typically, you restart the host immediately after you flash the GPU. After restarting the host, redo the GPU
configuration on the affected VM, and then start the VM. For example, consider that you want to re-flash an
NVIDIA Tesla M60 GPU that is running in graphics mode. The Prism web console identifies the card as
an NVIDIA Tesla M60 GPU. After you re-flash the GPU to run in compute mode and restart the host, redo
the GPU configuration on the affected VMs by adding back the GPU, which is now identified as an NVIDIA
Tesla M60.compute GPU, and then start the VM.
Limitations
For information about configuring GPU pass-through for guest VMs, see Creating a VM (AHV) in the
"Virtual Machine Management" chapter of the Prism Web Console Guide.
Windows VM Provisioning
VirtIO Requirements
This topic describes how to download the Nutanix VirtIO and Nutanix VirtIO Microsoft Installer (MSI). The
MSI installs and upgrades the Nutanix VirtIO drivers.
Before you begin: Make sure that you meet the VirtIO requirements; see VirtIO Requirements on page 36.
To download the Nutanix VirtIO, perform the following.
1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.
2. Use the filter search to find the latest Nutanix VirtIO package.
6. Read and accept the Nutanix VirtIO License Agreement. Click Install.
The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.
1. Go to the Nutanix Support Portal and navigate to Downloads > Tools & Firmware.
2. Use the filter search to find the latest Nutanix VirtIO ISO.
3. Download the latest VirtIO for Windows ISO to your local machine.
Note: Nutanix recommends extracting the VirtIO ISO into the same VM where you will load
Nutanix VirtIO for easier installation.
6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.
a. TYPE: CD-ROM
e. Click Add.
7. Log into the VM and navigate to Control Panel > Device Manager.
8. Note: Ensure that you select the x86 subdirectory for 32-bit Windows or the amd64 subdirectory for
64-bit Windows.
Open the devices and select the specific Nutanix drivers to install. For each device, right-click, select
Update Driver Software, and point the wizard to the drive containing the VirtIO ISO. Follow the wizard
instructions until you receive installation confirmation.
c. Processors > Storage Controllers > Nutanix VirtIO SCSI pass through Controller
The Nutanix VirtIO SCSI pass through Controller prompts you to restart your system. You may
restart at any time to successfully install the controller.
This topic describes how to upload and upgrade Nutanix VirtIO and Nutanix VirtIO Microsoft Installer (MSI).
The MSI installs and upgrades the Nutanix VirtIO drivers.
Before you begin: Make sure that you meet the VirtIO requirements; see VirtIO Requirements on page 36.
1. Go to the Nutanix Support Portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.
Note: The installer is available on the ISO if your VM does not have internet access.
a. Upload the ISO to the cluster as described in Configuring Images in the Web Console Guide.
b. Mount ISO image into CD-ROM of each VM where you want to upgrade in the cluster.
3. If you are updating drivers in a Windows VM, select the appropriate 32-bit or 64-bit MSI.
4. Upgrade drivers.
For SCSI drivers for SCSI boot disks, manually upgrade the drivers with the vendor's instructions.
For all other drivers, run the Nutanix VirtIO MSI installer (the preferred installation method) and
follow the wizard instructions.
Note: Running the Nutanix VirtIO MSI installer upgrades all drivers.
5. Read and accept the Nutanix VirtIO License Agreement. Click Install.
The Nutanix VirtIO Setup Wizard shows a status bar and completes installation.
c. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
5. If you are creating a new Windows VM, add a Windows CD-ROM to the VM.
a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated fields.
The current CD-ROM opens in a new window.
e. Click Update.
6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.
a. TYPE: CD-ROM
e. Click Add.
a. TYPE: DISK
e. SIZE: Enter the number for the size of the hard drive (in GiB).
8. If you are migrating a VM, create a disk from the disk image. Click Add New Disk and complete the
indicated fields.
a. TYPE: DISK
d. CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you created
previously.
9. (Optional) After you have migrated or created a VM, add a network interface card (NIC). Click Add New
NIC and complete the indicated fields.
a. VLAN ID: Choose the VLAN ID according to network requirements and enter the IP address if
required.
b. Click Add.
Installing Windows on a VM
Before you begin: Create a Windows VM. See "Creating a Windows VM on AHV after Migration" in the
Migration Guide.
To install a Windows VM, do the following.
Note: Nutanix VirtIO cannot be used to install Windows 7 or Windows Server 2008 R2.
6. Select the desired language, time and currency format, and keyboard information.
10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.
The browse folder opens.
11. Choose the Nutanix VirtIO CD drive and the Windows version by choosing the Nutanix VirtIO drive and
then the Windows OS folder. Click OK.
13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress which can take several minutes.
14. Fill in your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows Setup completes installation.
VM Import
If you have legacy KVM VMs from a Nutanix solution that did not offer virtualization management, you must
import the VMs using the import_vm utility from the Controller VM.
Usage
nutanix@cvm$ import_vm vm [vm2 vm3 .. vmN]
Required Arguments
A space-separated list of VMs to import
Examples
Import two VMs
nutanix@cvm$ import_vm vm24 vm25
Optional Arguments
--convert_virtio_disks
Convert Virtio disks attached to the VM to SCSI disks. (default: skip Virtio disks)
--default_network
Add NICs to the network specified with this parameter if the utility cannot determine the
appropriate network. (default: do not attach indeterminate NICs to any network)
--host
Connect to a host other than the host where the Controller VM is running. (default: host of the
Controller VM where the utility is run)
--ignore_multiple_tags
Add NICs to the network specified with --default_network if they have multiple VLAN tags.
(default: do not attach NICs with multiple VLAN tags to any network)
1. Authenticate by using Prism username and password or, for advanced users, use the public key that is
managed through the Prism cluster lockdown user interface.
2. Use WinSCP, with SFTP selected, to connect to Controller VM through port 2222 and start browsing the
DSF data store.
Note: The root directory displays storage containers and cannot be changed. You can
upload files only to one of the storage containers, not directly to the root directory. To create or
delete storage containers, use the Prism user interface.
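Any SFTP client can be used in place of WinSCP. For example, with the command-line sftp client, connecting on port 2222 with the Prism admin user (the Controller VM address, file name, and storage container name below are placeholders):
$ sftp -P 2222 admin@192.0.2.50
sftp> put windows_server_2016.iso /container_name/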
Generated Events
The following events are generated by an AHV cluster.
Event Description
VM.CREATE A VM is created.
VM.DELETE A VM is deleted.
When a VM that is powered on is deleted, in
addition to the VM.DELETE notification, a VM.OFF
event is generated.
VM.UPDATE A VM is updated.
VM.MIGRATE A VM is migrated from one host to another.
When a VM is migrated, in addition to the
VM.MIGRATE notification, a VM.UPDATE event is
generated.
SUBNET.CREATE A virtual network is created.
SUBNET.DELETE A virtual network is deleted.
SUBNET.UPDATE A virtual network is updated.
Creating a Webhook
Send the Nutanix cluster an HTTP POST request whose body contains the information essential to
creating a webhook (the events for which you want the listener to receive notifications, the listener URL,
and other information such as a name and description of the listener).
Note: Each POST request creates a separate webhook with a unique UUID, even if the data
in the body is identical. Each webhook generates a notification when an event occurs, which
results in multiple notifications for the same event. If you want to change a webhook, do not
send another POST request with the changes. Instead, update the existing webhook. See Updating a Webhook on
page 49.
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
name. Name for the webhook.
post_url. URL at which the webhook listener receives notifications.
username and password. User name and password to use for authenticating to the listener. Include
these parameters if the listener requires them.
events_filter_list. Comma-separated list of events for which notifications must be generated.
description. Description of the webhook.
api_version. Version of Nutanix REST API in use.
The following sample API request creates a webhook that generates notifications when VMs are
powered on, powered off, created, or updated, and when networks are created:
POST https://192.0.2.3:9440/api/nutanix/v3/webhooks
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "vm_notifications_webhook",
"resources": {
"post_url": "http://192.0.2.10:8080/",
"credentials": {
"username":"admin",
"password":"Nutanix/4u"
},
"events_filter_list": [
"VM.ON", "VM.OFF", "VM.UPDATE", "VM.CREATE", "NETWORK.CREATE"
]
},
"description": "Notifications for VM events."
},
"api_version": "3.0"
}
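The request can be sent with any HTTP client. For example, with curl, assuming the request body above is saved in a file named webhook_body.json and that the cluster accepts HTTP basic authentication with Prism credentials (the -k option skips certificate verification for a self-signed certificate):
$ curl -k -u admin:Nutanix/4u -X POST -H "Content-Type: application/json" \
    -d @webhook_body.json https://192.0.2.3:9440/api/nutanix/v3/webhooks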
The notification contains metadata about the entity along with information about the type of event that
occurred. The event type is specified by the event_type parameter.
Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created successfully.
To list webhooks, do the following:
To show a single webhook, send the Nutanix cluster an API request of the following form:
GET https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address with the IP address of the Nutanix cluster. Replace webhook_uuid with the
UUID of the webhook that you want to show.
To list all the webhooks configured on the Nutanix cluster, send the Nutanix cluster an API request of
the following form:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks/list
{
"filter": "string",
"kind": "webhook",
"sort_order": "ASCENDING",
"offset": 0,
"total_matches": 0,
"sort_column": "string",
"length": 0
}
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate values
for the following parameters:
filter. Filter to apply to the list of webhooks.
sort_order. Order in which to sort the list of webhooks. Ordering is performed on webhook names.
offset. Offset at which to start listing webhooks.
total_matches. Number of matches to list.
sort_column. Parameter on which to sort the list.
length. Number of webhooks to list.
Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update the name,
listener URL, event list, and description.
PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
"password":"string"
},
"events_filter_list": [
string
]
},
"description": "string"
},
"api_version": "string"
}
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of
the webhook you want to update, respectively. For a description of the parameters, see Creating a
Webhook on page 47.
Deleting a Webhook
To delete a webhook, send the Nutanix cluster an API request of the following form:
DELETE https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of the
webhook you want to delete, respectively.
Notification Format
An event notification has the same content and format as the response to the version 3.0 REST API
call associated with that event. For example, the notification generated when a VM is powered on includes content such as the following:
"status": {
"name": "string",
"providers": {},
.
.
.
"event_type":"VM.ON",
"entity_reference":{
"kind":"vm",
"uuid":"63a942ac-d0ee-4dc8-b92e-8e009b703d84"
}
}
For VM.DELETE and SUBNET.DELETE, the UUID of the entity is included but not the metadata.
Traffic originating from guest VMs connected to a network is received on the local OVS bridge. Thereafter, the
traffic is passed on to the network function VMs through the network function bridge. The network function
VMs are deployed in a chain called a network function chain. After the packets are processed by all the
network function VMs in the chain, the traffic is directed toward the uplink interfaces or bond through an
uplink bridge. The flow is reversed for traffic directed at the guest VMs.
In the tap mode, a network function VM implements the port mirroring concept through a single network
interface referred to as a tap interface, which is also an interface of type network function NIC. Through the
tap interface, the network function VM only receives a copy of each packet flowing through the network.
The following image depicts the traffic flow through a network function VM running in tap mode. The figure
shows only the tap interface:
A network function VM running in tap mode can be used only to monitor traffic. Any packets that such a
network function VM attempts to inject back into the network are dropped.
A network function VM running in inline mode has two network function NICs. The network function
VM receives guest VM traffic through a network function NIC referred to as the ingress interface and
injects processed packets back into the network through a network function NIC referred to as the egress
interface.
The following image depicts the traffic flow through a network function VM running in inline mode. The
figure shows only the network function NICs:
Traffic Sources
Network function VM chains can receive traffic from the following entities:
A network
A vNIC on a guest VM
Processing Order
Network function VMs in a chain receive and process traffic in the order in which they are placed in the
chain. Network function VMs running in inline mode enforce the processing order implied by their position
in the chain while network function VMs running in tap mode do not. For example, in the following figure,
network function VMs B and C can process traffic originating from the guest VM only after network function
VM A has processed the traffic and injected it back into the network. However, network function VMs B and
C receive network function VM A's egress traffic simultaneously because network function VM B, running in
tap mode, only receives a copy of the packets on the network.
3. Create a VM.
<acropolis> vm.create vm_name
4. Specify that the VM is an agent VM (agent VMs are powered on before the guest VMs are powered
on, are not powered off until all the guest VMs are powered off, and are not migrated if the host goes
down). In the context of a Nutanix cluster, network function VMs are also a type of system VM, so
specify that the VM is a system VM. Also assign it a name.
<acropolis> vm.update vm_name agent_vm=true \
extra_flags=is_system_vm=true;system_vm_base_name="network_function_name"
Replace vm_name with the name of the VM. Replace network_function_name with a label that
describes the function of the VM. Note the difference between vm_name and network_function_name.
The name vm_name identifies a specific VM and therefore must be unique across the cluster. The label
network_function_name identifies its function and can be used for all instances of the network function
on the cluster. If you are deploying an instance of a firewall on each node and you replace vm_name for
the instances with firewall_1 , firewall_2 , and so on, you can replace network_function_name for all
the instances with firewall . Each node can have at most one VM with a given label.
5. Attach a clone of the network function VM's disk image to the VM.
<acropolis> vm.disk_create vm_name clone_from_image="image"
Replace vm_name with the name of the network function VM and replace image with the name of the
network function VM's disk image. For information about how to download an image from a URL or
upload an image from your workstation, see "Configuring Images" in the "Virtual Machine Management"
chapter of the Prism Web Console Guide.
6. If you want the network function VM to have management connectivity, attach a virtual NIC to the VM
and plug it into the management network.
<acropolis> vm.nic_create vm_name type=kNormalNic network="network"
Replace vm_name with the name of the network function VM and network with the network into which
to plug the VM.
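The command for attaching the network function NICs is not included in this excerpt. Based on the kNormalNic syntax shown in the preceding step, it likely takes a form similar to the following; the type and network_function_nic_type values are assumptions.
<acropolis> vm.nic_create vm_name type=kNetworkFunctionNic network_function_nic_type=kIngress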
Replace vm_name with the name of the network function VM. Repeat the command to attach a second
network function NIC to the VM if you want to deploy the VM in inline mode.
Note: Network function VMs that are in inline mode must have both ingress and egress NICs.
Network function VMs in tap mode must have a tap NIC. A single network function VM can be
deployed in both inline and tap modes, in which case it is expected to have all three NICs.
8. Configure a VM-host affinity rule to ensure that the network function VM always runs on the same AHV
host.
<acropolis> vm.affinity_set vm_name host_list=host
Replace vm_name with the name of the network function VM. Replace host with the name of the host
on which the VM must run.
3. Create a chain.
<acropolis> nf.chain_create chain_name
5. Do one of the following to specify the source of traffic for the chain:
To receive traffic from a particular network, update the network to enable the network function VM
chain.
<acropolis> net.update_network_function_chain network chain_name
Replace network with the name of the network from which the chain must receive traffic.
Replace chain_name with the name of the chain.
Repeat the command for each network from which the network function VM chain must receive
traffic.
To receive traffic from a particular virtual NIC on a guest VM, update the virtual NIC.
<acropolis> vm.nic_update_network_function_chain vm mac_addr chain_name
Replace vm with the name of the guest VM whose virtual NIC you want to configure.
Replace mac_addr with the MAC address of the virtual NIC from which you want the chain to receive
traffic.
Replace chain_name with the name of the chain that must receive traffic from the virtual NIC.
Replace chain_name with the name of the chain from which you want to remove network function VMs.
Remove the network function configuration from a virtual NIC on a guest VM.
<acropolis> vm.nic_clear_network_function_chain vm mac_addr
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention Description
user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.
root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.
> command The commands are executed in the Hyper-V host shell.
Version
Last modified: November 10, 2017 (2017-11-10 15:27:42 GMT+5.5)