VMware
Technical Solutions Professional
Student Study Guide - VTSP 5.5
Course 1
Module 1: vSphere Overview
Course Objectives
At the end of this course you should be able to:
vSphere Overview
This is module 1, vSphere Overview. These are the topics that will be covered in this
module.
Module 1 Objectives
At the end of this module you should be able to:
VMware Vision
Before we discuss VMware Architecture, let's familiarize ourselves with the VMware
vision.
Our vision is to be efficient, automate quality of service, and have independent choices.
We aim to reduce capital and operational costs by over 50% for all applications,
automate quality of service, and remain independent of hardware, operating systems,
application stacks, and service providers.
VMware Vision
At VMware, our goal is to help businesses and governments move beyond IT as a Cost
Center to a more business-centric IT as a Service model. This new model of IT
creates improved approaches at each critical layer of a modern IT architecture:
Infrastructure, Applications, and End-User Access.
The Management layer of vSphere 5.5 consists of the vCenter Server, which acts as a
central point for configuring, provisioning, and managing virtualized IT environments.
The Interface layer of vSphere 5.5 comprises the clients that allow a user to access the
vSphere Data Center, for example, the vSphere Client and the vSphere Web Client.
Computing and memory resources called hosts, clusters, and resource pools.
Storage resources called datastores and datastore clusters.
Networking resources called standard virtual switches and distributed virtual
switches.
vSphere Distributed Services such as vSphere vMotion, vSphere Storage
vMotion, vSphere DRS, vSphere Storage DRS, Storage I/O Control, VMware HA,
and FT that enable efficient and automated resource management and high
availability for virtual machines.
And virtual machines.
These features are discussed in Module 2.
Compute Servers: The computing servers are industry standard x86 servers that
run ESXi 5.5 on the bare metal. The ESXi 5.5 software provides resources for
and runs the virtual machines.
Storage Networks and Arrays: Fibre Channel Storage Area Network (FC SAN)
arrays, iSCSI (Internet Small Computer System Interface) SAN arrays, and
Network Attached Storage (NAS) arrays are widely used storage technologies
supported by vSphere 5.5 to meet different datacenter storage needs.
IP Networks: Each compute server can have multiple physical network adapters
to provide high bandwidth and reliable networking to the entire vSphere
datacenter.
vCenter Server: vCenter Server provides a single point of control to the
datacenter. It provides essential datacenter services, such as access control,
performance monitoring, and configuration. It unifies the resources from the
individual computing servers to be shared among virtual machines in the entire
datacenter.
Management Clients: vSphere 5.5 provides several interfaces such as vSphere
Client and vSphere Web Client for datacenter management and virtual machine
access.
Introduction to vSOM
The vSphere Management market is extremely large. More than 50 percent of physical
servers have been virtualized and more than 80% of virtualized environments are using
vSphere.
That 80 percent adds up to about 25 million unmanaged hosts.
This is a massive opportunity for vSphere with Operations Management.
This system combines the benefits of the world's best virtualization platform, vSphere,
with the functionality of vCenter Operations Manager Standard Edition. vCenter
Operations Manager delivers more value to customers through operational insight into
the virtual environment for monitoring and performance, as well as optimized capacity
management.
vCenter Operations Manager is an integral part of vSphere with Operations
Management. It is available as part of the Standard, Enterprise, and Enterprise Plus
versions of vSphere with Operations Management.
Now let's explore vCenter Operations Manager in more detail.
vCenter Operations Manager 5.7 includes the new Metrics Profile feature that allows a
subset of the metrics collected from vCenter Server to be chosen.
By default, this feature is set to Full Profile, meaning that all metrics from all registered
vCenter Servers are collected. The Balanced Profile setting ensures that only the most
vital metrics from vCenter Server are collected.
The Full Profile allows for 5 million metrics to be collected and the Balanced Profile
allows for 2.2 million metrics.
For larger deployments, you may need to add additional disks to the vApp.
vCenter Operations Manager is only compatible with certain web browsers and vCenter
Server versions.
For vApp compatibility and requirements you should consult the vApp Deployment and
Configuration Guide.
Module Summary
This concludes module 1, vSphere Overview.
Now that you have completed this module, you should be able to:
Module 2 Objectives
At the end of this module you will be able to:
Identify the key features of vSphere 5.5, describing the key capabilities and
identifying the key value propositions of each one.
Identify any license level restrictions for each feature.
vSphere Replication
vSphere Replication replicates powered-on virtual machines over the network from one
vSphere host to another without needing storage array-based native replication.
vSphere Replication reduces bandwidth needs, eliminates storage lock-in, and allows
you to build flexible disaster recovery configurations.
This proprietary replication engine copies only changed blocks to the recovery site,
ensuring both lower bandwidth utilization and more aggressive recovery point objectives
compared with manual full system copies of virtual machines.
Replication is discussed in Course 3.
Virtual Disks
When you create a virtual machine, a certain amount of storage space on a datastore is
provisioned, or allocated, to the virtual disk files. Each of the three vSphere hosts has
two virtual machines running on it.
The lines connecting them to the disk icons of the virtual machine disks (VMDKs) are
logical representations of their allocation from the larger VMFS volume, which is made
up of one large logical unit number (LUN).
A virtual machine detects the VMDK as a local SCSI target.
The virtual disks are really just files on the VMFS volume, shown in the illustration as a
dashed oval.
Virtual Disks are discussed in Course 3 and Course 5.
Module Summary
This concludes module 2, vSphere Infrastructure and Hypervisor Components.
Now that you have completed this module, you should be able to:
Identify the key features of vSphere 5.5 describing the key capabilities and
identifying the key value propositions of each one.
Identify any license level restrictions for each feature.
Module Overview
By the time you have completed this module, you will be able to select vSphere
components to meet solution requirements by identifying the capabilities and benefits of
each solution component in order to present its value proposition.
The module presents a series of customer scenarios that define specific requirements
and constraints.
You will be asked to select vSphere components to meet solution requirements by
identifying the capabilities and benefits of each solution component in order to present
its value proposition.
Course Review
This concludes the course vSphere Overview.
Now that you have finished this course, you should be able to:
Provide an overview of vSphere as part of VMware's Vision and Cloud Infrastructure
Solution,
Describe the physical and virtual topologies of a vSphere 5.5 Data Center and explain
the relationship between the physical components and the vSphere Virtual
Infrastructure,
Describe the features and capabilities of vSphere and explain their key benefits for a
customer,
Describe the vSphere Hypervisor Architecture and explain its key features, capabilities
and benefits, and
Map vSphere Components to solution benefits and identify value propositions.
Course 2
Course Objectives
After you complete this course, you should be able to:
Explain the components and features of vCenter
Communicate design choices to facilitate the selection of the correct vCenter solution
configuration
Explore the customer's requirements to define any dependencies that those
requirements will create.
Explain the key features and benefits of the distributed services to illustrate the impact
those features will have on a final design.
Module Objectives
After you complete this module, you should be able to:
8. The vSphere Web Client provides a rich application experience delivered through a
supported Web browser on multiple platforms.
This surpasses the functionality of the trusted VMware vSphere Client (the VI or
Desktop Client) running on Windows.
The vSphere Web Client can be installed on the vCenter server along with other
vCenter Server components, or it can be installed as a standalone server.
9. The Single Sign-On Server must be able to communicate with your identity sources
such as Active Directory, Open LDAP and a Local Operating System.
10. The Inventory service must be able to communicate with the Single Sign-On Server,
the vCenter Server and the client.
11. The vCenter Server must be able to communicate with the ESXi hosts in order to
manage them.
12. vCenter Server must also be accessible to any systems that will require access to
the API.
13. The Web Client is accessed via a Web browser that connects to the Web Client
Server. All of these services rely heavily on DNS.
Resource Maps
vSphere administrators can use resource maps to monitor proper connectivity which is
vital for migration operations, such as VMware vSphere vMotion or vSphere Storage
vMotion.
Resource maps are also useful to verify VMware vSphere High Availability and VMware
vSphere Distributed Resource Scheduler (DRS) cluster memberships, and to confirm
that host and virtual machine connectivity is valid.
A resource map is a graphical representation of the data center's topology. It visually
represents the relationships between the virtual and physical resources available in a
data center.
The preconfigured map views that are available are: Virtual Machine Resources, which
displays virtual machine-centric relationships; Host Resources, which displays
host-centric physical relationships; and vMotion Resources, which displays potential
hosts for vMotion migration.
Maps help vSphere administrators find information such as which clusters or hosts are
most densely populated, which networks are most critical, and which storage devices
are being utilized.
Resource Maps are only available using the vSphere Desktop Client.
Orchestrator
Orchestrator or vCO is an automation and orchestration platform that provides a library
of extensible workflows.
It enables vSphere administrators to create and execute automated, configurable
processes to manage their VMware virtual environment.
Orchestrator provides drag-and-drop automation and orchestration for the VMware
virtual environment. Orchestrator is included with vCenter.
As an example, when you create a virtual machine in your environment, you make
decisions about how that virtual machine is configured: how many network cards,
processors, memory, and so on you want it to have. However, once the machine is
created, like many organizations you may have additional IT processes that need to be
applied.
Do you need to add the VM to Active Directory? Do you need to update the change
management database, customize the guest OS, or notify the VM owner or other teams
that the virtual machine is ready?
vCenter Orchestrator lets you create workflows that automate activities such as
provisioning a virtual machine, performing scheduled maintenance, initiating backups,
and many others. You can design custom automations based on vCenter Orchestrator
out-of-the-box workflows and run your automations from the workflow engine.
You can also use plugins and workflows published on VMware Solution Exchange, a
community of extensible solutions plug-ins, to connect to multiple VMware and 3rd party
applications.
Through an open and flexible plug-in architecture, VMware vCenter Orchestrator allows
you to automate server provisioning and operational tasks across both VMware and
third-party applications, such as service desks, change management and asset
management systems.
These plug-ins provide hundreds of out-of-the-box workflows to help you both
accelerate and dramatically reduce the cost of delivering IT services across your
organization.
In addition to plug-ins included with the vCenter Orchestrator, the latest plug-ins can be
found on the VMware Solution Exchange.
You need to understand the client's current IT workflow automation capabilities, and if
they are already using any other products for this, you will have to be prepared to
research how Orchestrator integrates with them.
To understand how Orchestrator works, it is important to understand the difference
between automation and orchestration.
Automation provides a way to perform frequently repeated processes without manual
intervention. For example, a shell, Perl, or PowerShell script that adds ESXi hosts to
vCenter Server.
On the other hand, orchestration provides a way to manage multiple automated
processes across heterogeneous systems.
An example of this would be to add ESXi hosts from a list to vCenter Server, update a
CMDB with the newly added ESXi hosts, and then send email notification.
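As a concrete illustration of the automation half of that example, the sketch below uses the open-source pyVmomi Python bindings to add ESXi hosts to an existing cluster. It is a minimal sketch, not part of the course material: it assumes you already hold a connected ServiceInstance and have retrieved the target cluster object, and the function and variable names are placeholders.

    from pyVmomi import vim

    def add_hosts_to_cluster(cluster, hosts):
        """Automation only: add each ESXi host in 'hosts' to an existing cluster.

        'cluster' is a vim.ClusterComputeResource retrieved from a connected
        ServiceInstance; 'hosts' is a list of (hostname, user, password) tuples.
        """
        tasks = []
        for hostname, user, password in hosts:
            spec = vim.host.ConnectSpec(
                hostName=hostname,
                userName=user,
                password=password,
                force=True)   # take over the host even if another vCenter manages it
            # Note: without an sslThumbprint this task may fail certificate
            # verification in hardened environments.
            tasks.append(cluster.AddHost_Task(spec, asConnected=True))
        return tasks

    # Orchestration would wrap this step with the others described above:
    # update the CMDB with the new hosts, then send an email notification.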
Orchestrator exposes every operation in the vCenter Server API, enabling the vSphere
administrator to integrate all these operations into the automated processes.
Orchestrator also enables the administrator to integrate with other management and
administration solutions through its open plug-in architecture. This enables the vSphere
administrator to capture manual and repetitive tasks for the vSphere environment and
automate them through workflows.
Orchestrator provides several benefits.
It helps vSphere administrators ensure consistency and standardization and achieve
overall compliance with existing IT policies. It also shortens the time for deployment of a
complex environment (for example, SAP) to hours instead of days. Orchestrator also
enables vSphere administrators to react faster to unplanned issues in the VMware data
center.
For example, when a virtual machine is powered off unexpectedly, the vSphere
administrator can configure options to trigger the Power-On workflow to bring the
virtual machine back online.
Alarms
The vSphere alarm infrastructure supports automating actions and sending different
types of notifications in response to certain server conditions. Many alarms exist by
default on vCenter Server systems and you can also create your own alarms. For
example, an alarm can send an alert email message when CPU usage on a specific
virtual machine exceeds 99% for more than 30 minutes.
The alarm infrastructure integrates with other server components, such as events and
performance counters.
You can set alarms for objects such as virtual machines, hosts, clusters, data centers,
datastores, networks, vNetwork Distributed Switches, distributed virtual port groups, and
vCenter Server.
Alarms have two types of triggers.
They can be triggered by either the condition or state of an object or by events occurring
to an object.
You can monitor inventory objects by setting alarms on them. Setting an alarm involves
selecting the type of inventory object to monitor, defining when and for how long the
alarm will trigger, and defining actions that will be performed as a result of the alarm
being triggered. You define alarms in the Alarm Settings dialog box.
Alarms should be configured to detect and report. Avoid overly aggressive vCenter
Server alarm settings. Each time an alarm condition is met, vCenter Server must take
an appropriate action. Too many alarms place extra load on vCenter Server, which
affects system performance. Therefore, identify the alarms that you need to leverage.
You can use the SMTP agent included with vCenter Server to send email notifications
to the appropriate personnel when alarms are triggered. You can also trap event
information by configuring a centralized SNMP server, or run a script when the alarm
triggers.
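As an illustration of how the alarm infrastructure can be inspected programmatically, the following is a minimal sketch using the pyVmomi Python bindings. It assumes an existing connection to vCenter Server; the function name and arguments are illustrative only.

    from pyVmomi import vim

    def report_alarms(content, entity):
        """List the alarm definitions attached to an inventory object and any
        alarms currently triggered on it.

        'content' is ServiceInstance.RetrieveContent(); 'entity' can be a
        vim.VirtualMachine, vim.HostSystem, vim.Datastore, and so on.
        """
        for alarm in content.alarmManager.GetAlarm(entity):
            info = alarm.info
            print("Defined: %s (enabled=%s)" % (info.name, info.enabled))

        for state in entity.triggeredAlarmState:
            print("Triggered: %s status=%s at %s"
                  % (state.alarm.info.name, state.overallStatus, state.time))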
Advanced Search allows you to search for managed objects that meet multiple criteria.
For example, you can search for virtual machines matching a search string or the virtual
machines that reside on hosts whose names match a second search string.
If the vSphere Web Client is connected to a vCenter Server system that is part of a
Linked Mode group, you can search the inventories of all vCenter Server systems in
that group.
You can only view and search for inventory objects that you have permission to view. In
Linked Mode the search service queries Active Directory for information about user
permissions so you must be logged in to a domain account to search all vCenter Server
systems in a Linked Mode group. If you log in using a local account, searches return
results only for the local vCenter Server system, even if it is joined to other servers in
Linked Mode.
Which Client?
Your customer wants to be able to utilize the following features.
Which feature belongs to each client?
Module Summary
In summary:
vCenter Server is the primary management tool for vSphere administrators providing a
convenient single point of control for all the components in the data center. vCenter
Server provides the core management functionality and services for large environments,
which are required by the vSphere administrator to perform basic infrastructure
operations.
Module Objectives
By the time you have completed this module you should be able to describe:
Installation Options
Databases
Each vCenter Server instance must have its own database. Multiple vCenter Server
instances cannot share the same database schema.
Multiple vCenter Server databases can reside on the same database server, or they can
be separated across multiple database servers. Oracle databases can run multiple
vCenter Server instances in a single database server provided you have a different
schema owner for each vCenter Server instance.
vCenter Server supports Oracle and Microsoft SQL Server databases.
After you choose a supported database type, make sure you understand any special
configuration requirements such as the service patch or service pack level.
Also ensure that the machine has a valid ODBC data source name (DSN) and that you
install any native client appropriate for your database.
Ensure that you check the interoperability matrixes for supported database and service
pack information by clicking the link shown on screen.
Performance is affected by the number of hosts and the number of powered-on virtual
machines in your environment.
Correctly sizing the database will ensure that you avoid performance issues. Where
possible, try to minimize the number of network hops between vCenter Server and its
database.
Directory Services
vSphere 5.1 introduced Single Sign On. Single Sign On has been completely
redesigned in vSphere 5.5.
Single Sign On in vSphere 5.1 used Active Directory as an LDAP Server as an identity
source.
vSphere 5.5 introduces Native Active Directory support using Kerberos as an Identity
Source.
vCenter Single Sign-On creates an authentication domain in which users are
authenticated before they can access available resources (vCenter Server, and so on).
The System Domain Identity Source is the default Identity Data Store that ships as part
of vSphere. The System Domain has a name which is a FQDN: the default is
vsphere.local.
The login name for the administrator is always: administrator@vsphere.local.
You should not set the vCenter Administrator to be a Local OS account, as this doesn't
federate.
There are four identity sources that can be configured for Single Sign-On: Active
Directory (Integrated Windows Authentication), which uses Kerberos; Active Directory
as an LDAP Server; OpenLDAP; and the Local Operating System.
If using the VMware Update Manager Download service, which can be used if the
Update Manager server cannot be given access to the Internet, you will require another
physical or virtual machine and database to host this component.
With the overall solution in mind, it is also important to know that there is no Update
Manager plug-in for the vSphere Web Client.
As you can see, the decision to implement this plug-in not only requires compatibility
checks, but also there are design choices to be made concerning the allocation and
selection of server resources, databases and network connectivity.
The vCenter Server database should also be backed up on a regular basis in the event
that the database becomes corrupt, so that it can easily be restored.
You may also choose to protect VMware vCenter Server using third-party clustering
solutions including, but not limited to, MSCS (Microsoft Cluster Services) and VCS
(Veritas Cluster Services).
Finally, when virtualizing vCenter Server, consider the services and servers that
vCenter Server depends on. For example, you might want to start up virtual machines
running Active Directory, DNS, SQL, and SSO first, in that order, and ensure they power
up with a high priority.
You should document the shutdown and start-up procedure for the cluster as a whole.
You should also consider whether or not you wish the vCenter Server to only reside on
a fixed host, where you can guarantee resources. If so, ensure that you change your
DRS automation level accordingly.
If using HA, ensure you change the start-up priority for vCenter Server to High.
To provide comprehensive protection for the vCenter server and guarantee high
availability, consider using vCenter Server Heartbeat which is discussed next.
VMware recommends having vCenter as a virtual machine in most instances.
Module Summary
Now that you have completed this module, you should be able to identify sizing and
dependency issues, installation options, network connectivity requirements, plug-ins
and add-ons, and service and server resilience considerations for vCenter Server.
Module Objectives
In this module we are going to take a closer look at the distributed services that vCenter
manages and enables. These services provide the cluster wide features and advanced
functionality that are the key to vSphere scalability.
Many aspects of this module focus on not only explaining the distributed services but
also showing you sample whiteboards of how to present them to customers.
If a particular service, feature, or functionality is not part of the core vSphere Standard
license, we will mention the license tier in which the feature is available.
Presenting vMotion
The traditional challenge for IT is how to execute operational and maintenance tasks
without disruption to business service delivery. In a non-virtualized environment this
essentially means downtime whenever maintenance is required on the infrastructure.
In a virtualized environment we have virtual machines running on the hypervisor which
is installed on the physical host.
Using vMotion, we can migrate (move) a virtual machine from one host to another host
without incurring any downtime. To do this, we have to copy the memory footprint and
the running state of the virtual machine progressively across to a separate physical
host, and then switch the running instance of the virtual machine over to the new host,
all without any loss of data or downtime.
This may require a significant amount of dedicated network bandwidth as we may be
moving a lot of data between each host. Sufficient network bandwidth should be
considered during the planning, implementation and configuration stages (On-going
monitoring should also be carefully planned).
It also requires that the CPUs on each physical host are from the same manufacturer
and family; for example, you can't vMotion from an Intel host to an AMD host.
Best practice is for Virtual Machine files to be stored on shared storage systems (i.e. a
storage system where multiple hosts can access the same storage), such as a Fibre
Channel or iSCSI storage area network (SAN), or on an NFS NAS volume.
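For illustration, a live migration can also be requested through the vSphere API. The following is a minimal pyVmomi sketch, assuming a connected session, a powered-on virtual machine object, and a destination host that meets the CPU, network, and shared-storage requirements described above; the names used are placeholders.

    from pyVmomi import vim

    def vmotion(vm, target_host):
        """Live-migrate a powered-on VM to another host that can see the same
        shared storage. 'vm' is a vim.VirtualMachine and 'target_host' is a
        vim.HostSystem, both obtained from a connected ServiceInstance.
        """
        task = vm.MigrateVM_Task(
            pool=None,                    # keep the current resource pool
            host=target_host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority)
        return task                       # monitor task.info.state for progress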
Presenting HA
We've talked about planned outages and how to manage them using vMotion. What
happens if you have an unplanned outage, and how do we deal with it?
In the case of multiple ESXi hosts with virtual machines running on them, where the
virtual machine files reside on shared storage, we can restart virtual machines on other
available hosts if one of our hosts fails.
This technology is called vSphere High Availability or vSphere HA.
vSphere HA provides high availability for virtual machines and the applications running
within them, by pooling the ESXi hosts they reside on into a cluster.
Hosts in the cluster are continuously monitored. In the event of a host failure, the virtual
machines on the failed host attempt to restart on alternate hosts.
vSphere App HA is new in vSphere 5.5. It is a virtual appliance that you can deploy on
the vCenter Server.
Using the components of vSphere App HA, you can define high availability policies for
critical middleware applications running on your virtual machines in the Data Center,
and configure remediation actions to increase their availability.
This means that you can virtualize Tier 1 applications and create a platform to ensure
that vSphere hosts the most critical part of the business.
In designing the system, you must ensure that you have enough capacity for HA
recovery from a host failure. HA performs failover and restarts virtual machines on
different hosts. Its first priority is the immediate availability of all virtual machines.
If you have hosts with too many virtual machines and you don't have enough capacity,
some of the virtual machines might not start even if HA is enabled.
However, if you do not have sufficient resources, you can prioritize your restart order for
the most important virtual machines.
Virtual machines are restarted even if insufficient resources exist, but you now have a
performance issue because virtual machines contend for the limited resources.
If virtual machines have reservations and those reservations cannot be guaranteed,
then some virtual machines might not be restarted.
You should also implement redundant heartbeat network addresses and isolation
addresses, and address the possible issue of a host isolation response (in the case
where a master heartbeat is lost).
As a minimum, you require an Essentials Plus License to use HA. vSphere App HA is
only available with an Enterprise Plus License.
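As a rough illustration of how HA is switched on programmatically, the sketch below uses pyVmomi to reconfigure a cluster. It is an assumption-laden example, not a prescribed procedure: the cluster object comes from an existing connection, and the admission control and restart priority values should be replaced with values from your own capacity planning.

    from pyVmomi import vim

    def enable_ha(cluster):
        """Turn on vSphere HA for a cluster and set a high default restart
        priority. 'cluster' is a vim.ClusterComputeResource.
        """
        das = vim.cluster.DasConfigInfo(
            enabled=True,
            hostMonitoring='enabled',
            admissionControlEnabled=True,
            defaultVmSettings=vim.cluster.DasVmSettings(restartPriority='high'))
        spec = vim.cluster.ConfigSpecEx(dasConfig=das)
        return cluster.ReconfigureComputeResource_Task(spec, modify=True)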
Presenting DRS
A major concern in IT environments is to ensure that the load is distributed effectively
across the available resources. In this example we have two hosts handling all of the
load and one host with no load. Ideally, you want your infrastructure to sense this and to
move the loads so that the overall utilization is balanced.
VMware DRS, or distributed load balancing, monitors host CPU and memory utilization
and can automatically respond to changes in load by rebalancing virtual machines
across the cluster when necessary. These virtual machines are moved using vMotion.
As new virtual machines are created or started, DRS can decide the optimal placement
for the virtual machines so that CPU and Memory resources are evenly consumed
across the cluster.
When you add a new physical server to a cluster, DRS enables virtual machines to
immediately take advantage of the new resources because it re-distributes the running
virtual machines across the newly expanded pool of resources. You can also define
rules (affinity rules) that allow you to control which virtual machines must be kept on the
same host and which virtual machines run on separate hosts.
In order for DRS to work, the pre-requisites of vMotion apply to the hosts and virtual
machines in a DRS cluster.
A combination of HA and DRS can be used to enhance the cluster's response to host
failures by improving the load distribution of the restarted VMs: HA powers on the VMs,
then DRS load-balances the VMs across hosts.
Using DRS you can choose different levels of automation: Fully Automated, Partially
Automated, and Manual. Fully Automated automatically places virtual machines on
hosts when they are powered on, and also automatically migrates running virtual
machines. Partially Automated means that virtual machines are automatically placed on
hosts when powered on, and vCenter suggests migrations. Manual only provides
migration recommendations for virtual machines.
When designing DRS it is important to ensure that you have shared storage. As DRS
uses vMotion you should ensure that you understand the design requirements for
vMotion setup.
As a minimum you require an Enterprise License to use DRS.
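A similar cluster reconfiguration enables DRS and sets the automation level. The following pyVmomi sketch assumes a connected session and an existing cluster object; the chosen behavior string is only an example.

    from pyVmomi import vim

    def enable_drs(cluster, behavior='fullyAutomated'):
        """Enable DRS on a cluster with the chosen automation level.
        Valid behaviors are 'manual', 'partiallyAutomated', and 'fullyAutomated'.
        """
        drs = vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=behavior)
        spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
        return cluster.ReconfigureComputeResource_Task(spec, modify=True)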
Presenting DPM
Distributed Power Management (DPM) is an enhancement to DRS that monitors overall
utilization across a cluster. If it finds that the required protection levels can be met by
running all VMs on a reduced number of hosts, it will evacuate all virtual machines from
one or more hosts and then put those hosts into standby mode in order to save overall
power consumption.
When the virtual machine load increases and the host can no longer provide the
required level of protection, DPM will automatically restart hosts and migrate virtual
machines back onto them once they have come back online. When configuring DPM,
you must ensure that the cluster can startup and shutdown each host using IMPI or
Wake on LAN.
You should configure the vSphere DPM automation level for automatic operation and
use the default vSphere DPM power threshold. This decreases power and cooling costs
as well as decreasing administrative management overhead.
As a minimum you require an Enterprise License to use this feature.
Presenting FT
In some cases it is desirable to have the absolute minimum risk of downtime for some
virtual machines. vSphere Fault Tolerance, or FT, maintains an identical copy of a
running virtual machine in lockstep on a separate host. All inputs and events performed
on the primary virtual machine are recorded and replayed on the secondary virtual
machine, ensuring that the two remain in an identical state.
FT ensures that in the case of a host failure, the lockstep copy instantly takes over with
zero downtime.
vSphere Fault Tolerance is currently limited to Virtual Machines with a single vCPU and,
like vMotion, dedicated network uplinks on all hosts are recommended in order to
ensure that there is sufficient bandwidth available. All other standard vMotion
constraints also apply to VMs protected with Fault Tolerance. Consider that you are
using twice the amount of resources, so factor this into your design when using FT.
As a minimum you require a Standard License in order to use FT.
Auto Deploy
vSphere Auto Deploy facilitates the rapid provisioning of vSphere hosts by leveraging
the network boot capabilities of x86 servers together with the small footprint of the ESXi
hypervisor. Once installed, a vCenter host profile is used to configure the host. After
configuration, the host is connected to vCenter, where it is available to host virtual
machines. The entire process is fully automated, allowing new hosts to be quickly
provisioned with no manual intervention.
Stateless or diskless caching host deployments let you continue operation if the Auto
Deploy server becomes unreachable. This was the only mode of install available in
vSphere 5.0.
Stateless Caching caches the image when you apply the host profile. When you later
reboot, the host continues to use the Auto Deploy infrastructure to retrieve its image. If
the Auto Deploy server is not available, the host uses the cached image.
You should ensure that you have the resources needed to install a VSA cluster. You will
require a physical or virtual machine that runs vCenter Server, however you can run
vCenter Server on one of the ESXi hosts in the cluster.
You require two or three physical hosts with ESXi installed. The hosts must all be the
same type of ESXi installation. VSA does not support combining freshly installed ESXi
and modified ESXi hosts in a single cluster. You require at least one Gb Ethernet or
10Gb Ethernet Switch.
As a minimum you require an Essentials Plus License to use one instance of VSA. A
VSA License is included in the Essentials Plus and all advanced Kits.
VSA is covered in more depth in Course 5.
Planned Maintenance
Your customer wants to be able to carry out planned maintenance tasks on ESXi hosts without
service interruption.
Which of the following technologies would be a best fit for them?
There are two correct answers.
Module Summary
We have now explored the features, benefits and configuration requirements for the
advanced scalability and cluster wide features of vSphere that are managed and
controlled via vCenter.
Now that you have completed this module, feel free to review it until you are ready to
start the next module.
Course 3
Course 3 Objectives
At the end of this course you should be able to:
Module 1 Objectives
At the end of this module the delegate will be able to:
Because most modern servers are now equipped with multi-core processors, it is
easy to build a system with tens of cores running hundreds of virtual machines. In such
a large system, allocating CPU resources efficiently and fairly is critical.
Fairness is one of the major design goals of the CPU scheduler.
Allocation of CPU time to virtual machines has to be faithful to the resource
specifications like CPU shares, reservations, and limits. The CPU scheduler works
according to the proportional share algorithm. This aims to maximize CPU utilization
and world execution efficiency, which are critical to system throughput.
With this in mind, you must ensure that you choose the appropriate amount of vCPUs
for your virtual machine depending on the type of workload that it will execute. This is
discussed in more detail later.
ESXi supports virtual machines with up to 64 virtual CPUs, which allows you to run
larger CPU-intensive workloads on the VMware ESXi platform. ESXi also supports 1TB
of virtual RAM (vRAM). This means you can assign up to 1TB of RAM to ESXi 5.5
virtual machines.
In turn, this means you can run even the largest applications in vSphere including very
large databases, and you can virtualize even more resource-intensive Tier 1 and 2
applications.
With vSphere 5.1, VMware partnered with NVIDIA to provide hardware-based vGPU
support inside the virtual machine.
vGPUs improve the graphics capabilities of a virtual machine by off-loading
graphic-intensive workloads to a physical GPU installed on the vSphere host. vSphere
5.1 was the first vSphere release to provide support for hardware-accelerated 3D
graphics, the virtual graphics processing unit (vGPU), inside a virtual machine.
That support was limited to only NVIDIA-based GPUs. With vSphere 5.5, vGPU support
has been expanded to include both Intel- and AMD-based GPUs. Virtual machines with
graphic-intensive workloads or applications that typically have required hardware-based
GPUs can now take advantage of additional vGPU vendors, makes and models.
Virtual machines still can leverage VMware vSphere vMotion technology, even across a
heterogeneous mix of vGPU vendors, without any downtime or interruptions to the
virtual machine.
vGPU support can be enabled using both the vSphere Web Client and VMware Horizon
View for Microsoft Windows 7 OS and Windows 8 OS. The following Linux OSs also are
supported: Fedora 17 or later, Ubuntu 12 or later and Red Hat Enterprise Linux (RHEL)
7. Controlling vGPU use in Linux OSs is supported using the vSphere Web Client.
Configuration Maximums
Before deploying a virtual machine, you must plan your environment. You should
understand the requirements and configuration maximums for virtual machines
supported by vSphere 5.5.
The maximum CPU configuration is 64 vCPUs per virtual machine. You must have
adequate licensing in place if you want to use this many vCPUs.
The maximum amount of RAM per virtual machine is 1TB. Before scaling this much,
take into account whether the guest operating system can support these amounts and
whether the client can use these resources for the workload required.
With vSphere 5.5, the maximum size of a virtual disk is 62TB - an increase from almost
2TB in vSphere 5.1. 62TB Virtual Mode RDMs can also be created.
vSphere 5.5 adds AHCI SATA controllers. You can configure a maximum of four
controllers with support for 30 devices per controller, making a total of 120 devices. This
increases the number of virtual disks available to a virtual machine from 60 to 180.
The maximum amount of virtual SCSI targets per virtual machine is 60, and is
unchanged.
Currently, the maximum number of Virtual NICs that a virtual machine can have is 10.
Be sure you choose the network adapters appropriate for the virtual machine you are
creating.
Previously, in vSphere 5.1, the server was limited to using 32GB of physical RAM. In
vSphere 5.5 this restriction has been removed.
VMware Tools
VMware Tools is a suite of utilities that enhances the performance of the virtual
machine's guest OS and improves the management of the virtual machine.
The VMware Tools installer files for Windows, Linux, FreeBSD, NetWare, and Solaris
guest OS are built into ESXi as ISO image files.
After installing and configuring the guest OS, you must install VMware Tools.
VMware Tools provides two very visible benefits: better video performance and the
ability to move the mouse pointer freely into and out of the console window.
VMware Tools also installs other important components, such as device drivers.
The VMware Tools service performs various tasks such as passing messages from the
host to the guest OS, running scripts that help automate the operations of the OS,
synchronizing the time in the guest OS with the time in the host OS, and sending a
heartbeat to the host so that it knows the guest OS is running.
On Windows guests, VMware Tools controls grabbing and releasing of the mouse
pointer.
VMware Tools also enables you to copy and paste text between the desktop of the local
host and the desktop of the virtual machine.
VMware Tools includes a set of VMware device drivers for improved graphical, network,
and mouse performance, as well as efficient memory allocation between virtual
machines.
From the VMware Tools control panel, you can modify settings and connect and
disconnect virtual devices.
There is also a set of VMware Tools scripts that help automate the guest OS tasks.
An icon in the notification area of the Windows taskbar indicates when VMware Tools is
running and provides ready access to the VMware Tools control panel and help utility.
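For example, the running state of VMware Tools can be checked through the vSphere API. The short pyVmomi sketch below assumes a connected session and a virtual machine object; it simply prints the status values that VMware Tools reports back to the host.

    def tools_report(vm):
        """Report whether VMware Tools is running and up to date in a VM.
        'vm' is a vim.VirtualMachine from a connected ServiceInstance; the
        values come from the guest info that VMware Tools reports back.
        """
        guest = vm.guest
        print("%s: tools %s, version status %s"
              % (vm.name, guest.toolsRunningStatus, guest.toolsVersionStatus2))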
NICs
When you configure a virtual machine, you can add virtual network interface cards
(NICs) and specify the adapter type. The types of network adapters that are available
depend on the following factors:
The virtual machine version, which in turn depends on what host created it or
most recently updated it.
Whether the virtual machine has been updated to the latest version for the
current host.
The guest OS.
Six main NIC types are supported: E1000, Flexible, Vlance, VMXNET, VMXNET 2
(Enhanced) and VMXNET3.
The default virtual NIC emulated in a virtual machine is either an AMD PCnet32 device
(vlance), an Intel E1000 device (E1000), or an Intel E1000e device (E1000e).
VMware also offers the VMXNET family of paravirtualized network adapters. These
provide better performance than default adapters and should be used for optimal
performance within any guest OS for which they are available. The VMXNET virtual
NICs (particularly VMXNET3) also offer performance features not found in the other
virtual NICs.
The VMXNET3 paravirtualized NIC requires that the virtual machine use virtual
hardware version 7 or later and, in some cases, requires that VMware Tools be installed
in the guest operating system.
vRAM
Carefully select the amount of memory you allocate to your virtual machines. You
should allocate enough memory to hold the working set of applications you will run in
the virtual machine, thus minimizing thrashing. You should also avoid over-allocating
memory, as this consumes memory that could be used to support more virtual
machines.
ESXi uses five memory management mechanisms (page sharing, ballooning, memory
compression, swap to host cache, and regular swapping) to dynamically reduce the
amount of physical memory required for each virtual machine.
When Page Sharing is enabled, ESXi uses a proprietary technique to transparently and
securely share memory pages between virtual machines, thus eliminating redundant
copies of memory pages. If the virtual machine's memory usage approaches its memory
target, ESXi will use ballooning to reduce that virtual machine's memory demands.
If the virtual machine's memory usage approaches the level at which host-level
swapping will be required, ESXi will use memory compression to reduce the number of
memory pages it will need to swap out. If memory compression doesn't keep the virtual
machine's memory usage low enough, ESXi will next forcibly reclaim memory using
host-level swapping to a host cache (if one has been configured). Swap to host cache is
a feature that allows users to configure a special swap cache on SSD storage. In most
cases this host cache (being on SSD) will be much faster than the regular swap files
(typically on hard disk storage), significantly reducing access latency.
CPUs
When choosing the number of vCPUs consider whether the virtual machine needs more
than one. As a general rule, always try to use as few vCPUs as possible. If the
operating system supports symmetric multiprocessing (SMP), consider whether the
application is multithreaded and whether it would benefit from multiple vCPUs. This
could provide improvements for the virtual machine and the host.
Configuring a virtual machine with more vCPUs than its workload can use might cause
slightly increased resource usage, potentially impacting performance on very heavily
loaded systems. Common examples of this include a single-threaded workload running
in a multiple-vCPU virtual machine or a multi-threaded workload in a virtual machine
with more vCPUs than the workload can effectively use.
Unused vCPUs still consume timer interrupts in some guest operating systems, though
not with tickless timer kernels such as 2.6 Linux kernels.
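To tie the vCPU and memory sizing guidance together, the following is a minimal pyVmomi sketch that right-sizes an existing virtual machine. It assumes a connected session; the CPU count, memory size, and reservation values are placeholders to be replaced by figures from your own workload analysis.

    from pyVmomi import vim

    def resize_vm(vm, cpus=2, memory_mb=4096, reservation_mb=1024):
        """Right-size a virtual machine: set the vCPU count, memory, and a
        memory reservation that covers the application's working set.
        The VM normally needs to be powered off unless CPU/memory hot-add
        is enabled in the guest.
        """
        spec = vim.vm.ConfigSpec(
            numCPUs=cpus,
            memoryMB=memory_mb,
            memoryAllocation=vim.ResourceAllocationInfo(
                reservation=reservation_mb,
                shares=vim.SharesInfo(level='normal', shares=0)))  # shares value
                # is ignored unless level is 'custom'
        return vm.ReconfigVM_Task(spec)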
SCSI
ESXi supports multiple virtual disk types.
Thick provisioned - Thick virtual disks, which have all their space allocated at creation
time, are further divided into eager zeroed and lazy zeroed disks. An eager-zeroed thick
disk has all space allocated and zeroed out at the time of creation. This increases the
time it takes to create the disk, but results in the best performance, even on the first
write to each block.
A lazy-zeroed thick disk has all space allocated at the time of creation, but each block is
zeroed only on first write. This results in a shorter creation time, but reduced
performance the first time a block is written to. Subsequent writes, however, have the
same performance as eager-zeroed thick disks.
The use of VAAI*-capable SAN storage can speed up disk creation and zeroing by
offloading operations to the storage array.
*VMware vSphere Storage APIs-Array Integration
Thin-provisioned - Space required for a thin-provisioned virtual disk is allocated and
zeroed upon first write, as opposed to upon creation. There is a higher I/O cost (similar
to that of lazy-zeroed thick disks) during the first write to an unwritten file block, but on
subsequent writes, thin-provisioned disks have the same performance as eager-zeroed
thick disks.
The use of VAAI-capable SAN storage can improve thin-provisioned disk first-time-write
performance by improving file locking capability and offloading zeroing operations to the
storage array.
Thin provisioning of storage addresses a major inefficiency issue by allocating blocks of
storage to a guest operating system (OS), file system, or database only as they are
needed, rather than at the time of creation.
However, traditional thin provisioning does not address reclaiming stale or deleted data
within a guest OS, leading to a gradual growth of storage allocation to a guest OS over
time. With vSphere 5.1, VMware introduces a new virtual disk type, the space-efficient
sparse virtual disk (SE sparse disk), with the ability to reclaim previously-used space
within the guest OS. Currently, SE sparse disk is restricted to VMware Horizon View.
As a guide, use one partition per virtual disk, and deploy a system disk and a separate
application data disk. This simplifies backup, and separate disks help distribute I/O load.
Place a virtual machine's system and data disks on the same datastore, unless they
have widely varying I/O characteristics. Do not place all system disks on one datastore
and all data disks on another.
Store swap files on shared storage with the virtual machine files as this option is the
default and the simplest configuration for administration.
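The disk types described above map directly onto flags in the virtual disk backing when a disk is created through the API. The following pyVmomi sketch is illustrative only: it assumes a connected session, and the controller key and unit number are assumptions that should be looked up from the virtual machine's existing hardware.

    from pyVmomi import vim

    def add_disk(vm, size_gb, disk_type='thin', controller_key=1000, unit_number=1):
        """Add a new virtual disk to a VM, choosing the provisioning type.

        disk_type: 'thin', 'lazy' (lazy-zeroed thick), or 'eager' (eager-zeroed
        thick). controller_key/unit_number must identify an existing SCSI
        controller and a free slot on it; 1000 is commonly the key of the first
        SCSI controller, but it should be confirmed in vm.config.hardware.device.
        """
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode='persistent',
            thinProvisioned=(disk_type == 'thin'),
            eagerlyScrub=(disk_type == 'eager'))
        disk = vim.vm.device.VirtualDisk(
            backing=backing,
            capacityInKB=size_gb * 1024 * 1024,
            controllerKey=controller_key,
            unitNumber=unit_number,
            key=-101)                     # temporary negative key for new devices
        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)
        return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))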
Hot-adding Hardware
Examples of hot-addable devices are USB controllers, Ethernet adapters, and hard disk
devices.
Hot-adding Hardware
USB Controllers
USB controllers are available to add to virtual machines to support USB passthrough
from an ESXi host or client computer to the virtual machine.
You can add multiple USB devices to a virtual machine when the physical devices are
connected to an ESXi host. USB passthrough technology supports adding USB devices
such as security dongles and mass storage devices to virtual machines that reside on
the host to which the devices are connected.
Devices can connect to only one virtual machine at a time.
For a list of USB device models supported for passthrough, refer to the knowledge base
article at kb.vmware.com/kb/1021345
Hot-adding Hardware
Network Interface Cards
You can add a network interface card (NIC) to a virtual machine to bridge a network, to
enhance communications, or to replace an older adapter.
When you add a NIC to a virtual machine, you select the adapter type, network
connection, and indicate whether the device should connect when the virtual machine is
turned on.
Ensure that your operating system supports the type of NIC that you wish to use and
remember to use the VMXNET3 paravirtualized network adapter for operating systems
where it is supported.
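As an illustration, a VMXNET3 adapter can be hot-added through the API. The pyVmomi sketch below assumes a connected session, a running virtual machine whose guest OS supports hot-add, and a standard port group object; all names are placeholders.

    from pyVmomi import vim

    def hot_add_vmxnet3(vm, network):
        """Hot-add a VMXNET3 adapter to a running VM and connect it to the
        given standard port group ('network' is a vim.Network object).
        The guest OS must support the VMXNET3 device and NIC hot-add.
        """
        nic = vim.vm.device.VirtualVmxnet3(
            backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
                network=network,
                deviceName=network.name),
            connectable=vim.vm.device.VirtualDevice.ConnectInfo(
                startConnected=True,
                connected=True,
                allowGuestControl=True))
        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=nic)
        return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))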
Hot-adding Hardware
Hard Disks
To add a hard disk to a virtual machine, you can create a virtual disk, add an existing
virtual disk, or add a mapped SAN LUN. You may have to refresh or rescan the
hardware in an operating system such as Windows 2003.
You cannot hot-add IDE disks.
Module Summary
Now that you have completed this module, you should be able to:
Module 2 Objectives
At the end of this module, you will be able to:
Describe the options in vSphere 5.5 for copying or moving Virtual Machines
within and between Virtual Infrastructures.
Explain how and why a customer should make use of these features.
Templates
VMware provides several methods to provision vSphere virtual machines.
The optimal method for your environment depends on factors such as the size and type
of your infrastructure and the goals that you want to achieve.
A template is a master copy of a virtual machine that can be used to create and
provision new virtual machines, minimizing the time needed for provisioning.
The template image usually includes a specific OS, one or more applications, and a
configuration that provides virtual counterparts to hardware components.
Templates coexist with virtual machines in the inventory and cannot be powered on or
edited.
You can create a template by converting a powered-off virtual machine to a template,
cloning a virtual machine to a template, or by cloning another template.
Converting a virtual machine to a template is extremely fast as no copy tasks are
needed. The files are just renamed.
Cloning can be relatively slow as a full copy of the disk files needs to be made.
Templates can be stored in a VMFS datastore or an NFS datastore.
You can deploy from a template in one data center to a virtual machine in a different
data center.
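For illustration, both conversion to a template and deployment from a template can be driven through the API. The pyVmomi sketch below assumes a connected session and that the folder, resource pool, and datastore objects have already been retrieved; the function names are illustrative.

    from pyVmomi import vim

    def convert_to_template(vm):
        """Convert a powered-off virtual machine to a template (the files are
        simply re-registered; no copy is made)."""
        vm.MarkAsTemplate()

    def deploy_from_template(template, new_name, folder, resource_pool, datastore):
        """Deploy a new virtual machine from a template. All arguments except
        'new_name' are pyVmomi managed objects; the clone performs a full
        copy of the template's disks onto the chosen datastore.
        """
        relocate = vim.vm.RelocateSpec(pool=resource_pool, datastore=datastore)
        spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
        return template.CloneVM_Task(folder=folder, name=new_name, spec=spec)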
Template Contents
Templates are master images from which virtual machines are deployed. A
well-designed template provides the best starting point for most virtual machine
deployments.
When creating a template, you should consider the workload it will be used for, how it
can be optimized to run within ESXi server, and how the guest OS can be optimized.
The type of workload that the virtual machine will process will affect the amount of
vCPU and memory it needs. Size and type of base disk, data disk or disks and default
SCSI controller all must be considered.
You should disable any devices and ports that the virtual machine will not use. Disable
any serial or parallel ports in the virtual machine BIOS that are not required.
Ensure that VMware Tools is installed into the guest OS.
Only use templates for master image deployment ensuring that they are fit for purpose.
This minimizes the administration overhead that needs to be done on the guest
operating system.
Snapshots: An Overview
Snapshots capture the state and data of a virtual machine at a point in time.
Snapshots are useful when you must revert repeatedly to the same virtual machine
state, but you do not want to create multiple virtual machines.
These short-term solutions for capturing point-in-time virtual machine states are not
appropriate for long-term virtual machine backups. Do not run production virtual
machines from snapshots on a long-term basis.
Snapshots do not support some disk types or virtual machines configured with bus
sharing.
VMware does not support snapshots of raw disks, RDM physical-mode disks, or guest
operating systems that use an iSCSI initiator in the guest.
Snapshots are not supported with PCI vSphere DirectPath I/O devices.
Snapshots can negatively affect the performance of a virtual machine. Performance
degradation depends on how long the snapshot or snapshot tree is in place, the depth
of the tree, and how much the virtual machine and its guest operating system have
changed from the time you took the snapshot.
This degradation might include a delay in the time it takes the virtual machine to power on.
When preparing your VMFS datastore, factor snapshots into the size if you are going to
use them. Increase the datastore usage on disk alarm to a value above 30% to avoid
running out of space unexpectedly on a VMFS datastore which is undesirable in a
production environment.
Snapshots provide a point-in-time image of the disk that backup solutions can use, but
Snapshots are not meant to be a robust method of backup and recovery.
If the files containing a virtual machine are lost, its snapshot files are also lost. Also,
large numbers of snapshots are difficult to manage, consume large amounts of disk
space, and are not protected in the case of hardware failure.
Short-lived snapshots play a significant role in virtual machine data protection solutions
where they are used to provide a consistent copy of the VM while the backup operation
is carried out.
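As a short illustration of how such point-in-time copies are taken and cleaned up through the API, the following pyVmomi sketch assumes a connected session and a virtual machine object; quiescing requires VMware Tools in the guest.

    def take_snapshot(vm, name, description=""):
        """Take a snapshot of a VM without capturing its memory image, asking
        VMware Tools to quiesce the guest file system first."""
        return vm.CreateSnapshot_Task(
            name=name,
            description=description,
            memory=False,
            quiesce=True)

    def revert_to_latest(vm):
        """Roll the VM back to its most recent snapshot."""
        return vm.RevertToCurrentSnapshot_Task()

    def delete_all_snapshots(vm):
        """Consolidate and remove every snapshot on the VM, reclaiming the
        delta disk space (important before the datastore fills up)."""
        return vm.RemoveAllSnapshots_Task()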
Configure automated vCenter Server alarms to trigger when a virtual machine is running
from snapshots.
VMware KB 1025279 details a full list of best practices.
Migration Overview
Apart from importing and exporting the virtual machines, you can migrate virtual
machines from one host to another or from one datastore to another.
Choice of migration method will depend on the environment and whether the priority is
avoiding downtime, maximizing virtual machine performance, or using new storage.
There are five migration techniques, each one serving a distinct purpose.
If a virtual machine is powered off or suspended during migration, we refer to the
process as cold migration.
With a cold migration, the source and target host do not require shared storage.
If the virtual machine is powered off, it can be moved and powered on using a
completely different host with different CPU family characteristics.
If your virtual machine needs to stay running for any reason, then you can use vMotion
to migrate the virtual machines. vMotion is required if you are using VMware Distributed
Resource Scheduler or DRS, as it allows DRS to balance virtual machines across hosts
in the DRS cluster. This will be discussed in detail later.
If you are migrating a virtual machine's files to a different datastore to balance the disk
load better or to transition to a different storage array, use Storage vMotion.
With enhanced vMotion, you can change the location of a VM's datastore and host
simultaneously, even if the two hosts have no shared storage in common.
Cold Migration
A cold migration moves the virtual machine configuration files and optionally relocates
the disk files, in three basic steps.
First, the vCenter Server moves the configuration files, including the NVRAM and the
log files.
A cold migration also moves the suspend file for suspended virtual machines and,
optionally, the disks of the virtual machine from the source host to the destination
host's associated storage area.
Then, the vCenter Server registers the virtual machine with the new host.
After the migration is complete, the vCenter Server deletes the old version of the virtual
machine from the source host.
If any errors occur during the migration, the virtual machine reverts to the original state
and location.
If the virtual machine is turned off and configured with a 64-bit guest operating system,
vCenter Server generates a warning if you try to migrate it to a host that does not
support 64-bit operating systems.
Otherwise, CPU compatibility checks do not apply when you migrate turned off virtual
machines with cold migration.
vMotion Migration
There are three types of vMotion migration: vMotion, Storage vMotion and enhanced
vMotion.
vMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing data center.
With vSphere vMotion, you can migrate virtual machines from one physical server to
another with zero downtime, providing continuous service availability and complete
transaction integrity.
If you need to take a host offline for maintenance, you can move the virtual machine to
another host. With vMotion, virtual machine working processes can continue throughout
a migration.
The entire state of the virtual machine is moved to the new host, while the associated
virtual disk remains in the same location on storage that is shared between the two
hosts. After the virtual machine state is migrated, the virtual machine runs on the new
host. Migrations with vMotion are completely transparent to the running virtual machine.
Thanks to vMotion, vSphere Distributed Resource Scheduler (DRS) can migrate running
virtual machines from one host to another to balance the load.
Migration with vMotion requires a vMotion license and a specific configuration. vMotion
is available in Standard, Enterprise and Enterprise Plus editions of vSphere.
Page 227
Page 228
VTSP 5.5
To meet vMotion compatibility requirements, ensure that a virtual machine's swap file is
accessible to the destination host.
Configure a VMkernel port group on each host for vMotion. Use of Jumbo Frames is
recommended for best vMotion performance.
Ensure that virtual machines have access to the same subnets on source and
destination hosts.
Concurrent vMotion and Storage vMotion are possible but may require additional
network resources.
If you need to support Storage vMotion or more than four concurrent vMotion migrations,
check the product documentation for the limits on simultaneous migrations.
Note that a vMotion migration will fail if the virtual machine uses raw disks for clustering
purposes.
Page 229
Storage vMotion
With Storage vMotion, you can migrate a virtual machine and its disk files from one
datastore to another while the virtual machine is running.
You can move virtual machines off arrays for maintenance or to upgrade.
You also have the flexibility to optimize disks for performance, or to transform disk
types, which you can use to reclaim space.
During a migration with Storage vMotion, you can transform virtual disks from Thick-Provisioned Lazy Zeroed or Thick-Provisioned Eager Zeroed to Thin-Provisioned, or the
reverse.
You can choose to place the virtual machine and all its disks in a single location, or
select separate locations for the virtual machine configuration file and each virtual disk.
The virtual machine does not change execution host during a migration with Storage
vMotion.
The Storage vMotion migration process does not disturb the virtual machine. There is
no downtime and the migration is transparent to the guest operating system and the
application running on the virtual machine.
You can migrate a virtual machine from one physical storage type to another. Storage
vMotion supports FC, iSCSI, and NAS network storage.
Storage vMotion was enhanced in vSphere 5.x to support migration of virtual machine
disks with snapshots.
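For readers who script against vCenter, a Storage vMotion is a relocate operation in the vSphere API. The following is a minimal sketch using the open-source pyVmomi bindings; the vm and target_ds objects are placeholders that you would look up from your own vCenter Server inventory, and error handling is omitted.

# Minimal Storage vMotion sketch (pyVmomi), assuming you already hold
# references to a virtual machine and a target datastore from your own
# vCenter Server connection.
from pyVmomi import vim
from pyVim.task import WaitForTask

def storage_vmotion(vm, target_ds):
    """Move the VM's configuration file and virtual disks to target_ds."""
    spec = vim.vm.RelocateSpec()
    spec.datastore = target_ds        # host is left unset, so the VM stays on its current host
    task = vm.RelocateVM_Task(spec=spec)
    WaitForTask(task)                 # block until the migration completes or fails

Selecting separate locations per virtual disk, as described above, is expressed by populating the spec's per-disk locator list rather than the single datastore property.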
Page 230
VTSP 5.5
Page 231
Page 232
VTSP 5.5
Enhanced vMotion
vSphere 5.1 enabled a virtual machine to change its datastore and host simultaneously,
even if the two hosts don't have any shared storage in common.
It allows virtual machine migration between clusters in a larger data center, which may
not have a common set of datastores between them, but it also allows virtual machine
migration in small environments without access to expensive shared storage equipment.
Another way of looking at this functionality is vMotion without shared storage.
To use enhanced vMotion, the hosts must be connected to the same VMware vCenter
and be part of the same data center.
In addition, the hosts must be on the same layer-2 network.
vSphere 5.1 and later allows the combination of vMotion and Storage vMotion into a
single operation.
This combined migration copies both the virtual machine memory and its disk over the
network to the destination host.
After all the memory and disk data are sent over, the destination virtual machine
resumes and the source virtual machine is powered off.
This vMotion enhancement ensures
Page 233
Page 234
VTSP 5.5
Enhanced vMotion
Cross-host storage vMotion is subject to the following requirements and limitations:
The hosts must be licensed for vMotion and running ESXi 5.1 or later.
The hosts must meet the networking requirements for vMotion mentioned
previously.
The virtual machines must be configured for vMotion, and virtual machine disks
must be in persistent mode or be raw device mappings.
The destination host must have access to the destination storage.
When you move a virtual machine with RDMs and do not convert those RDMs to
VMDKs, the destination host must have access to the RDM LUNs.
Finally, consider the limits for simultaneous migrations when you perform a cross-host
storage vMotion.
See the vCenter Server and Host Management product documentation for further
information available at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html
Page 235
Page 236
VTSP 5.5
Page 237
Page 238
VTSP 5.5
Page 239
Module Summary
Now that you have completed this module, you should be able to:
Describe the options in vSphere 5.5 for copying or moving Virtual Machines
within and between Virtual Infrastructures
Explain how and why a customer should make use of these features
Having completed this module, feel free to review it until you are ready to start the next
module.
Page 240
VTSP 5.5
Page 241
Module 3 Objectives
At the end of this module you will be able to:
Explain the benefits of, and prerequisites for, vSphere Replication and vSphere
Update Manager
Page 242
VTSP 5.5
Page 243
Page 244
VTSP 5.5
vSphere Replication
vSphere Replication is the only true hypervisor-level replication engine available
today.
It is integrated with a vSphere Essentials Plus license or higher.
Changed blocks in the virtual machine disk or disks for a running virtual machine at a
primary site are sent to a secondary site, where they are applied to the virtual machine
disks for the offline or protection copy of the virtual machine.
This is cost-efficient because it reduces both storage costs and replication costs.
At the storage layer, vSphere Replication eliminates the need to have higher-end
storage arrays at both sites.
Customers can use lower-end arrays, and different storage across sites, including
Direct-Attached Storage. For example, a popular option is to have Tier 1 storage at the
production site, but lower-end storage such as less expensive arrays at the failover site.
All this leads to overall lower costs per replication.
vSphere Replication is also inherently less complex than storage-based replication.
Replication is managed directly from vCenter, eliminating dependencies on storage
teams. As it is managed at the level of individual virtual machines, the setup of SRM is
less complicated.
SRM can use vSphere Replication to replicate data to servers at the recovery site.
Despite its simplicity and cost-efficiency, vSphere Replication is still a robust, powerful
replication solution.
Page 245
Page 246
VTSP 5.5
Replication Appliance
vSphere Replication is distributed as a 64-bit virtual appliance packaged in the .ova
format.
A previous iteration of vSphere Replication was included with SRM 5.0.
vSphere Replication Management Server and vSphere Replication Server are now
included in the single VR appliance.
This allows a single appliance to act in both a VR management capacity and as the
recipient of changed blocks.
This makes scaling sites an easy task.
Version 5.5 of the replication appliance also ships with an "add-on" VR appliance.
Deploying these additional appliances allows you to configure replication to a maximum
of 10 other target locations.
Page 247
Page 248
VTSP 5.5
The VR Agents at the central data center track changed blocks and distribute them via
the vSphere host's management network to the VR Server defined as the destination for
each individual VM.
Consider the following, however. A virtual machine can only be replicated to a single
destination. A virtual machine cannot be replicated to multiple remote locations at one
time. Any target destination must have a vSphere Replication Appliance to act as a VR
management component as well as a target, or a vSphere Replication Server to act
strictly as a target for replication.
Page 249
VTSP 5.5
You can safely use vSphere Replication in combination with certain vSphere features,
such as vSphere vMotion. Some other vSphere features, for example vSphere
Distributed Power Management, require special configuration for use with vSphere
Replication.
You must check that vSphere Replication is compatible with the versions of ESXi
Server, vCenter Server, and Site Recovery Manager at the site where it will be used.
Page 251
Page 252
VTSP 5.5
VDP Advanced, which is sold separately and protects approximately 200 VMs
per VDP Advanced virtual appliance. VDP Advanced is licensed per-CPU and is
available either as a stand-alone license or included with the vSphere with
Operations Management (vSOM) Enterprise and Enterprise Plus Acceleration
Kits.
VDP, which is included with vSphere 5.1 Essentials Plus and higher. VDP
provides basic backup and recovery for approximately 50 VMs per VDP virtual
appliance.
Page 253
Page 254
VTSP 5.5
Page 255
Page 256
VTSP 5.5
Page 257
Page 258
VTSP 5.5
Page 259
Page 260
VTSP 5.5
Page 261
Page 262
VTSP 5.5
Page 263
Page 264
VTSP 5.5
Page 265
Page 266
VTSP 5.5
Page 267
Module Summary
Now that you have completed this module, you should be able to:
Explain the benefits of, and prerequisites for, vSphere Replication and vSphere
Update Manager
Page 268
VTSP 5.5
Course 4
Page 269
Course Objectives
At the end of this course you should be able to:
Explain the role and function of the components of vSphere virtual networking
Describe the advanced networking features of vSphere 5.5 and carry out an activity to
determine the proper networking architecture for a customer scenario
This course does not include information on VMware NSX - Network Virtualization.
Information on NSX will be covered in future eLearning training.
Page 270
VTSP 5.5
Page 271
Module 1 Objectives
At the end of this module you will be able to:
Page 272
VTSP 5.5
Page 273
Page 274
VTSP 5.5
port for this ESXi host has resilient connectivity via pNIC0 and pNIC1. This type of
virtual switch is used to provide networking for this and only this ESXi host. It is called a
standard vSwitch.
If we have more than one host, we encounter logistical problems similar to those in a
physical environment, such as managing the configuration on multiple systems. If we
are using technologies such as vMotion, Fault Tolerance, HA or DRS, or even if we just
want to move machines between hosts, we must ensure that virtual machine port group
names are consistent and that they physically map to the same networks. This becomes
increasingly difficult with each host that we add.
The impact of misconfiguration can be minimized by using host profiles to deploy ESXi
hosts, which ensures that all virtual switch configurations are consistent.
Distributed vSwitches provide the same unified configuration but with additional
functionality.
Page 275
Page 276
VTSP 5.5
Page 277
Page 278
VTSP 5.5
Page 279
vSwitch
A standard virtual switch can connect its uplink ports to more than one physical Ethernet
adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can
be used for load balancing or to provide failover capabilities in the event of a physical
adapter hardware failure or a network outage.
The virtual ports on a virtual standard switch provide logical connection points among
and between virtual and physical devices. You can think of the virtual ports as virtual
RJ-45 ports. Each virtual switch can have up to 4,088 virtual ports, with a limit of 4,096
ports across all virtual switches on a host. This host-wide limit includes eight reserved
ports per standard virtual switch.
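A quick way to see how these numbers interact is to compute the remaining port budget on a host. The sketch below is purely illustrative arithmetic based on the limits quoted above (4,088 usable ports per standard vSwitch, and 4,096 ports per host including eight reserved ports per switch).

HOST_PORT_LIMIT = 4096        # total virtual ports per host, across all standard vSwitches
RESERVED_PER_VSWITCH = 8      # ports reserved by each standard vSwitch
MAX_PORTS_PER_VSWITCH = 4088  # usable ports on a single vSwitch (4096 - 8)

def usable_ports(num_vswitches: int) -> int:
    """Usable virtual ports left on a host after per-switch reservations."""
    return HOST_PORT_LIMIT - RESERVED_PER_VSWITCH * num_vswitches

# A single vSwitch: 4096 - 8 = 4088 usable ports, matching the per-switch maximum.
print(usable_ports(1))   # 4088
# Ten vSwitches still share the same host-wide budget: 4096 - 80 = 4016.
print(usable_ports(10))  # 4016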
Virtual Ethernet adapters (vNICs) connect to virtual ports when you power on the virtual
machine on which the adapters are configured, when you take an explicit action to
connect the device, or when you migrate a virtual machine using vSphere vMotion.
A vNIC updates the virtual switch port with the MAC filtering information when it is
initialized and whenever it changes. A virtual port may ignore any requests from the
virtual Ethernet adapter that would violate the Layer 2 security policy in effect for the
port. For example, if MAC spoofing is blocked, the port drops any packets that violate
this rule.
When designing your environment, you should consider how many networks are
needed. The number of networks or VLANs required depends on the types of traffic
required for VMware vSphere operation and in support of the organization's services
and applications.
Page 280
VTSP 5.5
Page 281
Page 282
VTSP 5.5
Distributed Switch
VMware vSphere 5.1 enhanced the networking capabilities of the distributed switch.
Some of these features, like Network Health Check, help detect misconfigurations
across physical and virtual switches.
Configuration Backup and Restore allows vSphere administrators to store the VDS
configuration as well as recover the network from an earlier configuration.
Using rollback and recovery, you can address the challenges that arise when a
management network failure causes hosts to disconnect from vCenter Server.
Importantly, this allows you to recover from lost connectivity or incorrect configurations.
vSphere 5.5 introduces some key networking enhancements and capabilities to further
simplify operations, improve performance and provide security in virtual networks. LACP
has been enhanced in vSphere 5.5.
Traffic filtering has been introduced as well as Differentiated Service Code Point
Marking support.
Distributed virtual switches require an Enterprise Plus license.
Distributed virtual switches are not manageable when vCenter Server is unavailable, so
vCenter Server becomes a tier-one application.
Page 283
Page 284
VTSP 5.5
Page 285
Page 286
VTSP 5.5
Page 287
Page 288
VTSP 5.5
Page 289
MTU - Checks whether the physical access switch port's per-VLAN MTU (jumbo frame)
setting matches the vSphere distributed switch MTU setting.
Network adapter teaming - Checks whether the physical access switch port's
EtherChannel setting matches the distributed switch's distributed port group IP
Hash teaming policy settings.
The default interval for performing the configuration check is one minute.
For the VLAN and MTU check, there must be at least two physical uplinks connected to
the VDS.
For the Teaming policy check, there must be at least two active uplinks in the teaming
and at least two hosts in the VDS.
Page 290
VTSP 5.5
Page 291
Page 292
VTSP 5.5
Automatic Rollback
The management network is configured on every host and is used to communicate with
VMware vCenter as well as to interact with other hosts during vSphere HA
configuration. This is critical when it comes to centrally managing hosts through vCenter
Server.
If the management network on the host goes down or there is a misconfiguration,
VMware vCenter can't connect to the host and thus can't centrally manage resources.
The automatic rollback and recovery feature of vSphere 5.5 addresses the concerns
that customers have regarding use of the management network on a VDS.
This feature automatically detects configuration changes on the management network;
if the host can no longer reach vCenter Server, the changes are not permitted to take
effect and the host rolls back to the previous valid configuration.
There are two types of Rollbacks.
Host networking rollbacks: These occur when an invalid change is made to the
host networking configuration. Every network change that disconnects a host
also triggers a rollback.
Distributed switch rollbacks: These occur when invalid updates are made to
distributed switch-related objects, such as distributed switches, distributed port
groups, or distributed ports.
Page 293
Page 294
VTSP 5.5
Page 295
VTSP 5.5
In this release, new workflows to configure LACP across a large number of hosts are
made available through templates.
In this example, a vSphere host is deployed with four uplinks, and those uplinks are
connected to the two physical switches. By combining two uplinks on the physical and
virtual switch, LAGs are created.
The LACP configuration on the vSphere host is performed on the VDS and the port
groups.
First, the LAGs and the associated uplinks are configured on the VDS. Then, the port
groups are configured to use those LAGs.
In this example, the green port group is configured with LAG1; the yellow port group is
configured with LAG2.
All the traffic from virtual machines connected to the green port group follows the LAG1
path.
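The ordering described above (define the LAGs and their uplinks on the VDS first, then point each port group at a LAG) can be captured in a small data sketch. The names lag1, lag2 and the port group labels below are placeholders for illustration, not real inventory objects or API names.

# Illustrative data model of the LACP example: two LAGs, each combining two
# VDS uplinks, and port groups that reference a LAG as their active uplink.
lags = {
    "lag1": ["uplink1", "uplink2"],   # connects to physical switch 1
    "lag2": ["uplink3", "uplink4"],   # connects to physical switch 2
}

port_groups = {
    "green-pg":  {"active_uplink": "lag1"},   # green traffic follows the LAG1 path
    "yellow-pg": {"active_uplink": "lag2"},   # yellow traffic follows the LAG2 path
}

def uplinks_for(pg_name: str) -> list[str]:
    """Resolve which physical uplinks carry traffic for a port group."""
    return lags[port_groups[pg_name]["active_uplink"]]

print(uplinks_for("green-pg"))   # ['uplink1', 'uplink2']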
Page 297
Page 298
VTSP 5.5
Page 299
Page 300
VTSP 5.5
Page 301
Page 302
VTSP 5.5
Page 303
Module Summary
Now that you have completed this module, you should be able to:
Page 304
VTSP 5.5
Page 305
Module 2 Objectives
At the end of this module you should be able to:
Page 306
VTSP 5.5
Page 307
Page 308
VTSP 5.5
reachable by and can reach any node in the same promiscuous PVLAN, as well as any
node in the primary PVLAN.
Ports on secondary PVLANs can be configured as either isolated or community.
Virtual machines on isolated ports communicate only with virtual machines on
promiscuous ports, whereas virtual machines on community ports communicate with
both promiscuous ports and other ports on the same secondary PVLAN.
PVLANs do not increase the total number of VLANs available; all PVLAN IDs are VLAN
IDs. Their use, however, means that you do not have to dedicate a VLAN to each
isolated segment.
Page 309
Page 310
VTSP 5.5
Virtual machines in an isolated private VLAN cannot communicate with other virtual
machines except those in the promiscuous private VLAN.
In this example, virtual machines C and D are in isolated private VLAN 155, so they
cannot communicate with each other. However, virtual machines C and D can
communicate with virtual machines E and F.
Page 311
Virtual machines in a community private VLAN can communicate with each other and
with the virtual machines in the promiscuous private VLAN, but not with any other virtual
machine.
In this example, virtual machines A and B can communicate with each other and with E
and F, which are in the promiscuous private VLAN. However, A and B cannot
communicate with C or D, because C and D are not in the community private VLAN.
Network packets originating from a community PVLAN are tagged with the secondary
PVLAN ID as they traverse the network.
There are a couple of things to note about how vNetwork implements private VLANs.
First, vNetwork does not encapsulate traffic inside private VLANs. In other words, there
is no secondary private VLAN encapsulated inside a primary private VLAN packet.
Also, traffic between virtual machines on the same private VLAN, but on different ESXi
hosts, moves through the physical switch.
Therefore, the physical switch must be private VLAN-aware and configured
appropriately so that traffic in the secondary private VLAN can reach its destination.
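The reachability rules described in the last few slides can be summarized in a few lines of code. The sketch below is a simplified model of PVLAN behavior for reasoning about designs (and for activities like the one later in this course); it is not how a switch is actually configured, and the promiscuous PVLAN ID used is a placeholder.

# Simplified PVLAN reachability model: promiscuous talks to everyone,
# community talks to promiscuous and its own community, isolated talks
# only to promiscuous.
def can_communicate(vm_a: tuple[str, int], vm_b: tuple[str, int]) -> bool:
    """Each VM is (pvlan_type, secondary_pvlan_id)."""
    type_a, id_a = vm_a
    type_b, id_b = vm_b
    if "promiscuous" in (type_a, type_b):
        return True
    if type_a == type_b == "community" and id_a == id_b:
        return True
    return False  # isolated-isolated, isolated-community, different communities

c = ("isolated", 155)
d = ("isolated", 155)
e = ("promiscuous", 5)     # placeholder primary/promiscuous PVLAN ID
print(can_communicate(c, d))  # False - isolated VMs cannot reach each other
print(can_communicate(c, e))  # True  - but they can reach the promiscuous PVLAN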
Page 312
VTSP 5.5
VLAN limitations
Traditional VLAN-based switching models suffer from challenges such as operationally
inefficient fault tolerance.
High-availability technologies such as VMware Fault Tolerance work best with flat
Layer 2 networks, but creating and managing this architecture can be operationally
difficult, especially at scale.
Page 313
VLAN limitations
IP address maintenance and VLAN limits become challenges as the data center scales,
particularly when strong isolation is required or in service provider environments.
In large cloud deployments, applications within virtual networks may need to be logically
isolated.
For example, a three-tier application can have multiple virtual machines requiring
logically isolated networks between the virtual machines.
Traditional network isolation techniques such as VLANs (4,096 LAN segments through a
12-bit VLAN identifier) may not provide enough segments for such deployments, even
with the use of PVLANs.
In addition, VLAN-based networks are bound to the physical fabric and their mobility is
restricted.
Page 314
VTSP 5.5
Page 315
Page 316
VTSP 5.5
Page 317
Page 318
VTSP 5.5
IP Storage networks
Page 319
Page 320
VTSP 5.5
Page 321
Page 322
VTSP 5.5
Page 323
Page 324
VTSP 5.5
Page 325
Traffic Filtering
In a vSphere distributed switch (version 5.5 and later), the traffic filtering and marking
policy allows you to protect the virtual network from unwanted traffic and security
attacks.
It also allows you to apply a QoS (Quality of Service) tag to a certain type of traffic.
Traffic filtering is the ability to filter packets based on the various parameters of the
packet header.
This capability is also referred to as access control lists (ACLs), and is used to provide
port-level security.
The vSphere Distributed Switch supports packet classification based on the following
three types of qualifier:
MAC qualifiers, such as source and destination MAC address (SA and DA);
System traffic qualifiers, such as vSphere vMotion, vSphere management, and vSphere FT; and
IP qualifiers, such as protocol type, IP SA, IP DA, and port number.
Page 326
VTSP 5.5
Traffic Filtering
After the qualifier has been selected and packets have been classified, users have the
option either to filter or tag those packets.
When the classified packets have been selected for filtering, users have the option to
filter ingress, egress, or traffic in both directions.
Traffic-filtering configuration is at the port group level.
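To make the classification idea concrete, the sketch below models a filtering rule as plain data: a qualifier (MAC, system traffic, or IP), a direction, and an action of either filter (drop) or tag. This is an illustrative model only; the field names are assumptions and do not correspond to the distributed switch API.

from dataclasses import dataclass

@dataclass
class TrafficRule:
    qualifier: dict            # e.g. {"type": "ip", "protocol": "tcp", "dst_port": 80}
    direction: str             # "ingress", "egress", or "both"
    action: str                # "drop" (filter) or "tag"
    dscp: int | None = None    # only meaningful when action == "tag"

# Drop inbound traffic to TCP port 23 (telnet) at the port group level.
drop_telnet = TrafficRule(
    qualifier={"type": "ip", "protocol": "tcp", "dst_port": 23},
    direction="ingress",
    action="drop",
)

# Mark vMotion system traffic in both directions with a DSCP value for QoS.
tag_vmotion = TrafficRule(
    qualifier={"type": "system", "traffic": "vMotion"},
    direction="both",
    action="tag",
    dscp=46,
)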
Page 327
Page 328
VTSP 5.5
Page 329
Failover Policies
Virtual network uplink resilience is provided by the failover policies within the properties
of the NIC team, at the virtual switch or port group level.
The failover policy specifies whether the team has standby physical NIC capacity, and
how that standby capacity is used.
Failover policies determine the method to be used for failover detection and how traffic
is re-routed in the event of a physical adapter failure on the host.
The failover policies that can be set are network failure detection, notify switches,
failback, and failover order.
It is important to remember that physical uplinks can be mapped to only one vSwitch at
a time while all port groups within a vSwitch can potentially share access to its physical
uplinks.
This allows design choices where standby NICs can be shared amongst multiple virtual
networks that are otherwise fully isolated, or all uplinks are active but some are also
defined as standby for alternative networks.
This type of design minimizes the total number of physical uplinks while maintaining
reasonable performance during failures without requiring dedicated standby NICs that
would otherwise be idle.
Page 330
VTSP 5.5
This capability becomes less useful as the number of vSwitches on each host
increases; hence best practice is to minimize the number of vSwitches.
During switch and port configuration, you can define which physical NICs are reserved
for failover and which are excluded.
Designs should ensure that under degraded conditions, such as when single network
link failures occur, not only is continuity ensured via failover, but acceptable bandwidth
is delivered under those conditions.
Page 331
Failover Policies
Network Failover Detection
Network failover detection specifies the method to use for failover detection. The policy
can be set to either the Link Status only option or the Beacon Probing option within the
vSphere Client.
When the policy is set to Link Status only, failover detection will rely solely on the link
status that the network adapter provides. This option detects failures, such as cable
pulls and physical switch power failures. However, it does not detect configuration
errors, such as a physical switch port being blocked by Spanning Tree Protocol,
misconfigured to the wrong VLAN, or cable pulls on the other side of a physical switch.
The Beacon Probing option sends out and listens for beacon probes on all NICs in the
team and uses this information, along with link status, to determine link failure. This
option detects many failures that are not detected by link status alone.
LACP works with Link Status Network failover detection.
Page 332
VTSP 5.5
Failover Policies
Notify Switches
With the notify switches policy, you specify whether the VMkernel notifies the physical
switches in the event of a failover.
The notify switches option can be set to either Yes or No.
If you select Yes, a notification is sent out over the network to update the lookup tables
on physical switches whenever a virtual Ethernet adapter is connected to the vSwitch or
dvSwitch, or whenever that virtual Ethernet adapter's traffic is routed over a different
physical Ethernet adapter in the team due to a failover event.
In almost all cases, this is desirable for the lowest latency when a failover occurs.
Page 333
Failover Policies
Failback
By default, NIC teaming applies a failback policy.
This means that if a physical Ethernet adapter that had failed comes back online, the
adapter is returned to active duty immediately, displacing the standby adapter that took
over its slot. This policy is in effect when the Rolling Failover setting is set to No. If the
primary physical adapter experiences intermittent failures, this setting can lead to
frequent changes in the adapter in use.
Another approach is to set Rolling Failover to Yes.
With this setting, a failed adapter is left inactive even after recovery until another
currently active adapter fails, requiring replacement. Please note that the Failover Order
policy can be set in the vSphere Client.
Page 334
VTSP 5.5
Failover Policies
Failover Order
You can use the Failover Order policy setting to specify how to distribute the work load
for the physical Ethernet adapters on the host.
You can place some adapters in active use, designate a second group as standby
adapters for use in failover situations, and designate other adapters as unused,
excluding them from NIC Teaming.
Please note that the Failover Order policy can be set in the vSphere Client.
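Taken together, the four failover policies form a small configuration surface. The sketch below captures them as a plain data structure to show how a teaming policy for a port group might be expressed; the field names and uplink names are illustrative and do not correspond to vSphere Client property names.

from dataclasses import dataclass, field

@dataclass
class TeamingPolicy:
    failure_detection: str = "link_status"   # or "beacon_probing"
    notify_switches: bool = True             # update physical switch lookup tables on failover
    failback: bool = True                    # return a recovered adapter to active duty
    active: list[str] = field(default_factory=list)
    standby: list[str] = field(default_factory=list)
    unused: list[str] = field(default_factory=list)

# Example: a management port group with one active uplink, one standby,
# and the remaining adapters excluded from the team.
mgmt_policy = TeamingPolicy(
    active=["vmnic0"],
    standby=["vmnic1"],
    unused=["vmnic2", "vmnic3"],
)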
Page 335
Page 336
VTSP 5.5
management traffic, vSphere Replication (VR) traffic, NFS traffic, and virtual machine
traffic. As a best practice for networks that support different types of traffic flow, take
advantage of Network I/O Control to allocate and control network bandwidth. You can
also create custom network resource pools for virtual machine traffic. The iSCSI traffic
resource pool shares do not apply to iSCSI traffic on a dependent hardware iSCSI
adapter.
Without network I/O control you will have to dedicate physical uplinks (pNICs)
specifically and solely for software iSCSI traffic if you are using the software iSCSI
adapter.
Network I/O Control is only available on distributed switches. It must be enabled and
requires, at a minimum, an Enterprise Plus license.
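Shares only matter when an uplink is saturated: each active traffic type receives a fraction of the bandwidth proportional to its share value. The arithmetic below is a simplified illustration of that proportional split on a single 10 Gbps uplink; the share values are examples, not vSphere defaults.

# Simplified Network I/O Control arithmetic: under contention, bandwidth on
# an uplink is divided in proportion to the shares of the active traffic types.
UPLINK_GBPS = 10.0

shares = {                 # example share values, not vSphere defaults
    "management": 25,
    "vmotion": 50,
    "nfs": 50,
    "virtual_machine": 100,
}

total = sum(shares.values())
for traffic, share in shares.items():
    gbps = UPLINK_GBPS * share / total
    print(f"{traffic:16s} {gbps:4.2f} Gbps")   # e.g. virtual_machine gets ~4.44 Gbps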
Page 337
Page 338
VTSP 5.5
Page 339
Page 340
VTSP 5.5
Page 341
Page 342
VTSP 5.5
vShield Edge can have multiple interfaces, but you must connect at least one internal
interface to a portgroup or VXLAN virtual wire before you can deploy the vShield Edge.
Before you install vShield in your vCenter Server environment, consider your network
configuration and resources.
You can install one vShield Manager per vCenter Server, one vShield App or one
vShield Endpoint per ESX host, and multiple vShield Edge instances per data center.
Page 343
Page 344
VTSP 5.5
Network Activity 1
Your customer has approached you for help with the design of their environment. On a
single ESXi host they have eight physical 1Gbps Network cards.
They have already decided to use one standard virtual switch with five port groups, as
they want to keep the design uncomplicated.
They want to connect Production and Test & Development virtual machines to the
Standard vSwitch.
They propose the connections as shown for the vNICs to the relevant Port Groups.
The customer predicts that several virtual machines will be added to the production port
group in the near future and wants to ensure that the physical connections to this port
group are as resilient as possible.
The customer cannot tolerate any loss of management due to the loss of a physical
switch or physical network adapter.
The customer wants to use IP Storage, avail of vMotion with all other ESXi hosts, and
have a separate port group for Management.
Have a look at the options A, B, C and D to see how the Physical network adapters
might be connected, then select the best configuration for this customer and click
Submit.
Option A is displayed here.
Page 345
Network Activity 1
Option C is displayed here.
Page 346
VTSP 5.5
Network Activity 1
Option D is displayed here.
Page 347
Network Activity 1
The correct solution was Option C.
This configuration will allow several virtual machines to be added to the production port
group in the future and ensure that the physical connections to this port group are as
resilient as possible.
There is no risk of loss of management due to the loss of a physical switch or physical
network adapter.
The customer can use IP Storage, avail of vMotion with all other ESXi hosts, and have a
separate port group for Management.
Page 348
VTSP 5.5
Network Activity 2
Your customer has 4 virtual machines that they wish to place in controlled
networks to restrict communication to and from the machines. These virtual machines
will all need to be able to communicate with the default gateway device. They have
approached you with the following requirement:
Virtual machine A must be able to communicate with any node in the Primary PVLAN.
Virtual machine B must be able to communicate with virtual machine A but not with
virtual machines C or D.
Virtual machine C must be able to communicate with virtual machine A and D. It must
not be allowed to communicate with virtual machine B.
Virtual machine D must be able to communicate with virtual machine A and C. It must
not be allowed to communicate with virtual machine B.
Place each VM into the correct PVLAN.
Page 349
Network Activity 2
The correct solution is shown.
Virtual Machine A can communicate with any node in the Primary PVLAN.
Virtual Machine B can communicate with Virtual Machine A but not with Virtual Machine
C or D.
Virtual Machine C can communicate with virtual machine A and D. It cannot
communicate with Virtual Machine B.
Virtual Machine D can communicate with virtual machine A and C. It cannot
communicate with Virtual Machine B.
Page 350
VTSP 5.5
Network Activity 3
A customer has approached you for help in scaling out their network environment. They
have recently purchased several new ESXi hosts, as well as some 10 Gbps network
adapters.
The customer has requested a solution that can be deployed across the ESXi Servers,
simplifying data center setup and administration.
They want a solution that enables the convergence of diverse workloads on each
10 Gbps networking connection for optimum utilization of a 10 GigE link, as well as
optimizing uplink capacity.
Finally they want to know which level of vSphere License they will require in order to
achieve this.
From the solutions shown on screen, choose the most appropriate solution for each
area of deployment.
Page 351
Module 2 Summary
This concludes Module 2, vSphere Networks - Advanced Features.
Now that you have completed this module, you should be able to:
Page 352
VTSP 5.5
Course 5
Page 353
Course Objectives
At the end of this course you should be able to:
Explain the vStorage architecture, virtual machine storage requirements and the
function of the types of storage available to vSphere solutions.
Describe the vSphere PSA, SIOC, VAAI, VASA and Storage DRS and explain
the benefits and requirements of each.
Determine the proper storage architecture by making capacity, performance and
feature capability decisions.
Information & Training on VSAN or virtualization of storage are not included in this
overview.
Page 354
VTSP 5.5
Page 355
Module 1 Objectives
At the end of this module, you will be able to:
Page 356
VTSP 5.5
Page 357
Page 358
VTSP 5.5
Page 359
VTSP 5.5
When creating VMFS formatted datastores, the vSphere Administrator must first choose
the LUN that will provide the physical storage capacity for the datastore and then select
how much of the LUN's capacity will be allocated to that datastore.
This allocation is called a volume. VMFS volumes can span multiple LUNs, in which
case each part is called a VMFS volume extent.
For best performance, a LUN should not be configured with multiple VMFS datastores.
Each LUN should only be used for a single VMFS datastore.
In contrast, NFS shares are created by the storage administrator and are presented and
mounted to ESXi hosts as NFS Datastores by the vSphere Administrator.
Whether they are based on VMFS formatted storage or NFS mounts, all datastores are
logical containers that provide a uniform model for storing virtual machine files.
Page 361
Page 362
VTSP 5.5
Types of Datastores
The type of datastore to be used for storage depends upon the type of physical storage
devices in the data center. The physical storage devices include local SCSI disks and
networked storage, such as FC SAN disk arrays, iSCSI SAN disk arrays, and NAS
arrays.
Local SCSI disks store virtual machine files on internal or external storage devices
attached to the ESXi host through a direct bus connection.
On networked storage, virtual machine files are stored on external shared storage
devices or arrays. The ESXi host communicates with the networked devices through a
high-speed network.
Let's add the HBAs and Switches in the diagram. Notice that there are front-end
connections on the SAN and NAS arrays.
As we mentioned earlier, block storage from local disks, FC SANs and iSCSI SANs is
formatted as VMFS volumes. NAS storage must use NFS v3 shares for an ESXi host to
be able to use it for NFS datastores.
The performance and capacity of the storage subsystem depend on the storage
controllers (the capability and quantity of local RAID controllers, SAN HBAs, and
network adapters used for IP storage) and, for remote storage, on the SAN or NAS
array controllers.
Page 363
Page 364
VTSP 5.5
VMFS Volume
A VMFS volume is a clustered file system that allows multiple hosts read and write
access to the same storage device simultaneously.
The cluster file system enables key vSphere features, such as fast live migration of
running virtual machines from one host to another, to be supported when virtual
machines are stored on SAN storage that is shared between multiple vSphere hosts.
It also enables vSphere HA to automatically restart virtual machines on separate hosts if
needed, and it enables clustering of virtual machines across different hosts.
VMFS provides an on-disk distributed locking system to ensure that the same virtual
machine is not powered-on by multiple hosts at the same time.
If an ESXi host fails, the on-disk lock for each virtual machine can be released so that
virtual machines can be restarted on other ESXi hosts.
Besides locking functionality, VMFS allows virtual machines to operate safely in a SAN
environment with multiple ESXi hosts sharing the same VMFS datastore. Up to 64 hosts
can be connected concurrently to a single VMFS-5 volume.
VMFS can be deployed on a variety of SCSI-based storage devices, such as FC and
iSCSI SAN arrays. A virtual disk stored on a VMFS always appears as a mounted SCSI
device to a virtual machine.
Page 365
Page 366
VTSP 5.5
NFS Volumes
NFS is a file-sharing protocol that is used by NAS systems to allow multiple remote
systems to connect to a shared file system. It is used to establish a client-server
relationship between the ESXi hosts and NAS devices. In contrast to block storage
provided by local SCSI disks or SAN arrays, the NAS system itself is responsible for
controlling access and managing the layout and the structure of the files and directories
of the underlying physical storage.
ESXi hosts mount NFS volumes as NFS Datastores. Once these are created they
provide the same logical structure for storage that VMFS datastores provide for block
storage.
NFS allows volumes to be accessed simultaneously by multiple ESXi hosts that run
multiple virtual machines. With vSphere 5.5, NFS datastores provide the same advanced
shared-storage-dependent features as VMFS datastores.
The strengths of NFS are similar to those of VMFS datastores. After storage is
provisioned to the ESXi hosts, the vCenter administrator is free to use the storage as
needed.
One major difference between VMFS and NFS datastores is that NFS shares can be
mounted on other systems even while they are mounted on ESXi hosts. This can make
it simpler to move data in or out of an NFS datastore, for example if you want to copy
ISO files into an ISO library stored on an NFS datastore or simply wish to copy virtual
machine files.
Page 367
Page 368
VTSP 5.5
Page 369
Page 370
VTSP 5.5
vSphere hosts can be configured to consume some of the vSphere Flash Resource as
vSphere Flash Swap Cache, which replaces the Swap to SSD feature previously
introduced with vSphere 5.0.
The cache reservation is allocated from the Flash Read Cache resource.
The Flash Read Cache can be reserved for any individual VMDK in a Flash Read
Cache pool.
A Flash Read Cache is created only when a virtual machine is powered on, and it is
discarded when a virtual machine is suspended or powered off.
Page 371
VTSP 5.5
Read-intensive workloads with working sets that fit into the cache might benefit from a
Flash Read Cache configuration.
Page 373
Storage Approaches
When considering storage options for a design it is important to fully understand the
benefits and limitations of each type of storage solution.
Shared and Local Storage
Shared storage is more expensive than local storage, but it supports a larger number of
vSphere features.
However, local storage might be more practical in a small environment with only a few
ESXi hosts.
Shared VMFS or NFS datastores offer a number of benefits over local storage.
Shared storage is a pre-requisite for vSphere HA & FT and significantly enhances the
speed of vMotion, DRS and DPM machine migrations. Shared storage is ideal for
repositories of virtual machine templates or ISOs. As environments scale, shared
storage initially simplifies management and eventually becomes a prerequisite for
further growth.
To deliver high capacity and high performance with recoverability, a shared storage
solution is required.
Page 374
VTSP 5.5
Isolated Storage
Isolated storage is a design choice where each virtual machine is mapped to a single
LUN, as is generally the case with physical servers.
When using RDMs, such isolation is implicit because each RDM volume is mapped to a
single virtual machine.
The primary advantage of this approach is that it limits the performance impact that
virtual machines can have on each other at the vSphere storage level, although there
are also security situations where storage isolation is desirable.
Given the advances in current storage systems, the performance gain from RDMs is
minimal, and they should be used sparingly or only when required by an application vendor.
The disadvantage of this approach is that when you scale the virtual environment, you
will reach the upper limit of 256 LUNs and VMFS volumes that can be configured on
each ESXi host. You may also need to provide an additional disk or LUN each time you
want to increase the storage capacity for a virtual machine.
This situation can lead to a significant management overhead. In some environments,
the storage administration team may need several days' notice to provide a new disk or
a LUN. You can also use vMotion to migrate virtual machines with RDMs as long as
both the source and target hosts have access to the raw LUN.
Another consideration is that every time you need to grow the capacity for a virtual
machine, the minimum commit size is that of an allocation of a LUN.
Page 375
Page 376
VTSP 5.5
Consolidated Storage
When using consolidated storage, you gain additional management productivity and
resource utilization by pooling the storage resource and sharing it with many virtual
machines running on several ESXi hosts.
Dividing this shared resource between many virtual machines allows better flexibility,
easier provisioning, and simplifies ongoing management of the storage resources for
the virtual environment.
Keeping all your storage consolidated also enables Storage DRS to be used to ensure
that storage resources are dynamically allocated in response to utilization needs.
Compared to strict isolation, consolidation normally offers better utilization of storage
resources.
The main disadvantage is additional resource contention that, under some
circumstances, can lead to a reduction in virtual machine I/O performance.
By including consolidated storage in your original design, you can save money in your
hardware budget in the long run.
Think about investing early in a consolidated storage plan for your environment.
Page 377
Page 378
VTSP 5.5
After considering all these points, the best practice is to have a mix of consolidated and
isolated storage.
Page 379
Page 380
VTSP 5.5
Page 381
Page 382
VTSP 5.5
Page 383
Thick Provisioning
When you create a virtual machine, a certain amount of storage space on a datastore is
provisioned, or allocated, to the virtual disk files.
By default, ESXi offers a traditional storage provisioning method. In this method, the
amount of storage the virtual machine will need for its entire lifecycle is estimated, a
fixed amount of storage space is provisioned to its virtual disk, and the entire
provisioned space is committed to the virtual disk during its creation. This type of virtual
disk that occupies the entire provisioned space is called a thick disk format.
A virtual disk in the thick format does not change its size unless it is modified by a
vSphere administrator. From the beginning, it occupies its entire space on the datastore
to which it is assigned. However, creating thick-format virtual disks leads to
underutilization of datastore capacity, because large amounts of storage space
pre-allocated to individual virtual machines might remain unused and stranded, since
that space cannot be used by any other virtual machine.
Thick virtual disks, which have all their space allocated at creation time, are further
divided into two types: eager zeroed and lazy zeroed.
The default allocation type for thick disks is lazy-zeroed. While all of the space is
allocated at the time of creation, each block is zeroed only on first write. This results in a
shorter creation time, but reduced performance the first time a block is written to.
Subsequent writes, however, have the same performance as on eager-zeroed thick
disks. The Eager-Zeroed Thick type of disk has all space allocated and zeroed out at
Page 384
VTSP 5.5
the time of creation. This increases the time it takes to create the disk, but it results in
the best and most consistent performance, even on the first write to each block.
Page 385
Thin Provisioning
To avoid over-allocating storage space and thus minimize stranded storage, vSphere
supports storage over-commitment in the form of thin provisioning.
When a disk is thin provisioned, the virtual machine thinks it has access to a large
amount of storage. However, the actual physical footprint is much smaller.
Disks in thin format look just like disks in thick format in terms of logical size. However,
the VMFS drivers manage the disks differently in terms of physical size. The VMFS
drivers allocate physical space for the thin-provisioned disks on first write and expand
the disk on demand, if and when the guest operating system needs it. This capability
enables the vCenter Server administrator to allocate the total provisioned space for
disks on a datastore at a greater amount than the actual capacity of the datastore.
It is important to note that thin-provisioned disks add overhead to virtual disk operations
whenever the virtual disk needs to be extended, for example when data is written for the
first time to a new area of the disk. This can lead to more variable performance with
thin-provisioned disks than with thick-provisioned disks, especially when compared to
eager-zeroed thick-provisioned disks, and they may not be suitable for VMs with demanding disk
performance requirements.
If the VMFS volume is full and a thin disk needs to allocate more space for itself, the
virtual machine will be paused and vSphere prompts the vCenter Server administrator
to provide more space on the underlying VMFS datastore. While the virtual machine is
paused, the integrity of the VM is maintained; however, this is still a very undesirable
scenario, and care must be taken to avoid it if possible.
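The over-commitment this describes is easy to quantify: compare the sum of the provisioned (logical) disk sizes against the datastore's physical capacity. The sketch below is illustrative arithmetic with made-up sizes.

# Illustrative over-commitment check for a thin-provisioned datastore.
datastore_capacity_gb = 2000

# Provisioned (logical) size of each thin virtual disk, in GB.
provisioned_gb = [500, 500, 400, 400, 600]

total_provisioned = sum(provisioned_gb)
overcommit_ratio = total_provisioned / datastore_capacity_gb

print(f"Provisioned {total_provisioned} GB on a {datastore_capacity_gb} GB datastore")
print(f"Over-commitment ratio: {overcommit_ratio:.2f}x")   # 1.20x in this example
# A ratio above 1.0 means the datastore can fill up as the thin disks grow,
# which is why the alarms described later in this module matter.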
Page 386
VTSP 5.5
vSphere provides alarms and reports that specifically track allocation versus current
usage of storage capacity so that the vCenter Server administrator can optimize
allocation of storage for the virtual environment and be alerted when there is any risk of
datastores running out of capacity to meet the dynamic space requirements of running
machines with thin-provisioned virtual disks.
Page 387
Page 388
VTSP 5.5
Page 389
VTSP 5.5
and other VMs on the same datastore may be impacted (e.g. it may be impossible to
start VMs due to the inability of the VMkernel to create a swap file on the datastore).
The guideline is to always leave between 10 and 20% of capacity free on datastores so
that these typical overheads can be accommodated without impacting production
systems. When snapshots are in active use, care must be taken to monitor the
consumption of free space to ensure they do not negatively impact running VMs.
Page 391
Storage Considerations
You must keep a few points in mind while configuring datastores, selecting virtual disks
and choosing the type of storage solution for a vSphere environment.
For VMFS volumes the best practice is to have just one VMFS volume per LUN. Each
VMFS can be used for multiple virtual machines, as we have noted earlier, as
consolidation on shared datastores simplifies administration and management but there
will be scenarios where virtual machines, or even individual virtual disks, are best
served by dedicating a specific datastore to them. These are usually driven by
performance considerations.
When virtual machines running on a datastore require more space, you can dynamically
increase the capacity of a VMFS datastore by extending the volume or by adding an
extent. An extent is a partition on a storage device, or LUN. You can add up to 32 new
extents of the same storage type to an existing VMFS datastore.
Test and production environments should be kept on separate VMFS volumes.
RDMs can be used for virtual machines that are part of physical-to-virtual clusters or
clusters that span ESXi hosts (cluster-across-boxes) or virtual machines that need to
work directly with array features, such as array level snapshots.
You must keep iSCSI and NAS on separate and isolated IP networks for best
performance.
Page 392
VTSP 5.5
The default limit for NFS mounts on ESXi is eight, but this can be extended up to 64
if necessary.
When deploying ESXi 5.5 in environments that have older VMFS-2 file systems, you
must first upgrade VMFS-2 to VMFS-3 and then upgrade to VMFS-5, as ESXi 5.5
does not support VMFS-2.
Page 393
Page 394
VTSP 5.5
Page 395
Alarms
Alarms are notifications that are set on events or conditions for an object. For example,
the vCenter Server administrator can configure an alarm on disk usage percentage, to
be notified when the amount of disk space used by a datastore reaches a certain level.
The vSphere administrator can set alarms on all managed objects in the inventory.
When an alarm is set on a parent entity, such as a cluster, all child entities inherit the
alarm. Alarms cannot be changed or overridden at the child level.
Alarms should be used to generate notifications when specific disk utilization thresholds
are reached. By default, vSphere generates a warning when a datastore exceeds 75%
of allocated capacity, and raises an alert when it exceeds 85%. These defaults can be
changed and should be chosen to generate effective notifications for the administrators.
For example, if a datastore has thick-provisioned VMDKs and snapshots will not be used,
it may be safe to pre-allocate over 90% of the datastore and change the warning
and alert levels accordingly. In contrast, if a datastore contains very dynamic thin-provisioned
VMDKs, or VMs that will have a number of active snapshots running, then the
default alarm levels might be more appropriate.
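As a simple illustration of how those default thresholds behave, the function below classifies a datastore's used capacity against the 75% warning and 85% alert levels. The datastore figures are made up; in practice the values would come from the datastore summary in vCenter.

def classify_datastore(capacity_gb: float, free_gb: float,
                       warn_pct: float = 75.0, alert_pct: float = 85.0) -> str:
    """Return 'ok', 'warning' or 'alert' using vSphere's default alarm levels."""
    used_pct = 100.0 * (capacity_gb - free_gb) / capacity_gb
    if used_pct >= alert_pct:
        return "alert"
    if used_pct >= warn_pct:
        return "warning"
    return "ok"

print(classify_datastore(capacity_gb=2000, free_gb=600))  # 70% used -> 'ok'
print(classify_datastore(capacity_gb=2000, free_gb=400))  # 80% used -> 'warning'
print(classify_datastore(capacity_gb=2000, free_gb=200))  # 90% used -> 'alert'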
Page 396
VTSP 5.5
Page 397
Page 398
VTSP 5.5
Page 399
Page 400
VTSP 5.5
VMDirectPath I/O
VMDirectPath I/O is a vSphere feature that allows an administrator to reserve specific
hardware devices, such as Fibre Channel HBAs or network adapters, for use by a
specific virtual machine. With VMDirectPath I/O, the physical device is presented directly
to the VM guest.
The Guest must fully support the hardware and will require drivers to be installed and
full configuration of the services associated with the device must be performed within
the Guest OS.
This may be required for situations where the VM has to have complete control of the
storage hardware at the HBA level. In some cases the performance needs of the VM
workload may require this level of control and this can only be provided by using
VMDirectPath IO to reserve the required devices and present them directly to the VM
Guest.
The main drawback is that any VM configured to use VMDirectPath IO is effectively
locked into the host it is running on and cannot avail of vMotion, HA, FT, DRS or other
cluster techniques where vSphere may need to move the virtual machine to another
host. VM Snapshots are also not supported as vSphere has no visibility to the directly
managed storage.
Page 401
VTSP 5.5
Generally, VMware tests ESXi with supported storage systems for basic connectivity,
HBA failover, and so on. Not all storage devices are certified for all features and
capabilities of ESXi, and vendors might have specific positions of support with regard to
ESXi.
You should observe these tips for preventing problems with your SAN configuration:
Page 403
Page 404
VTSP 5.5
Place only one VMFS datastore on each LUN. Multiple VMFS datastores on one
LUN are not recommended.
Do not change the path policy the system sets for you unless you understand the
implications of making such a change.
Document everything; include information about configuration, access control,
storage, switch, server and iSCSI HBA configuration, software and firmware
versions, and storage cable plan.
Plan for failure.
Page 405
Page 406
VTSP 5.5
Saving money - Less than half the cost of storage hardware alternatives
Easy installation - with just a few mouse clicks, even on existing virtualized
environments (brown-field installation)
Add more storage anytime - Add more disks without disruption as your storage
needs expand
Get High Availability in a few clicks
Minimize application downtime - Migrate virtual machines from host to host, with
no service disruption
Eliminate any single point of failure - Provide resilient data protection that
eliminates any single point of failure within your IT environment
Manage multiple VSA clusters centrally - Run one or more VSA clusters from
one vCenter Server for centralized management of distributed
environments
Page 407
VSA is not intended to be a High End, High Capacity Enterprise Storage Cluster
Solution. It is targeted towards the Small to Medium business market.
Across 3 hosts, the maximum amount of usable storage that a VSA 5.5 cluster
can support is 36 TB.
Decide on a 2-member or 3-member VSA cluster. You cannot add another VSA
cluster member to a running VSA cluster. For example, you cannot extend a 2-member VSA cluster with another member.
vCenter Server must be installed and running before you create the VSA cluster.
Consider the vSphere HA admission control reservations when determining the
number of virtual machines and the amount of resources that your cluster
supports.
A VSA cluster that includes ESXi 5.0 hosts does not support memory
overcommitment for virtual machines. Because of this, you should reserve the
configured memory of all non-VSA virtual machines that use VSA datastores so
that you do not overcommit memory.
Page 408
VTSP 5.5
Page 409
Page 410
VTSP 5.5
Drive Types
All vSphere storage solutions ultimately require physical storage hardware that is either
installed locally in vSphere hosts or is managed by a remote SAN or NAS array.
These have traditionally been hard disk drives, precision mechanical devices that store
data on extremely rigid spinning metal or ceramic disks coated in magnetic materials.
Hard disks use a variety of interfaces, but most enterprise disks today use either Serial
Attached SCSI (SAS), Near-Line Serial Attached SCSI (NL-SAS), or Serial ATA (SATA).
Page 411
Drive Types
SAS currently provides 6 Gigabit per second interface speeds, which allows an
individual disk to transfer data at up to 600 Megabytes per second, provided the disk can
support that sort of transfer rate. SAS also supports a number of advanced queuing,
error recovery and error reporting technologies that make it ideal for enterprise storage.
SAS drives are available in 7.2k, 10k and 15k rpm speeds. These speeds determine
the effective performance limit that drives can sustain in continuous operation, with 7.2k
drives delivering about 75-100 I/O operations per second (IOPS) and 15k drives
generally delivering at most around 210 IOPS.
SAS drives are specifically manufactured and designed to deliver high reliability. A key
metric for hard drive reliability is what is termed the Bit Error Rate, which indicates the
volume of data that can be read from the drive before a single bit error is experienced.
For SAS drives this is typically 1 in 10^16. This number is sufficiently high to guarantee
that most drives will not experience an error during their supported lifetime. SAS drives
are also rated for a relatively long Mean Time Between Failures of somewhere in the
region of 1.6 million hours, or about 180 years.
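The IOPS figures quoted above follow directly from drive mechanics: a random I/O costs roughly one average seek plus half a platter rotation. The arithmetic below reproduces ballpark numbers for 7.2k and 15k drives and converts the quoted MTBF into years; the seek times are assumed typical values, not vendor specifications.

# Rough IOPS estimate for a spinning disk: one random I/O costs about
# one average seek plus half a platter rotation.
def estimated_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = 0.5 * (60_000 / rpm)      # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(estimated_iops(rpm=7200, avg_seek_ms=8.5)))   # ~79  IOPS (guide: 75-100)
print(round(estimated_iops(rpm=15000, avg_seek_ms=3.0)))  # ~200 IOPS (guide: ~210)

# Converting the quoted MTBF of 1.6 million hours into years.
mtbf_hours = 1_600_000
print(round(mtbf_hours / (24 * 365)))                     # ~183 years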
Page 412
VTSP 5.5
Page 413
Page 414
VTSP 5.5
Drive Types-SATA
SATA is an older interface that is primarily used in non-server environments. It has few of the
high-end reliability and queuing features that are required for server storage, and hard drives that
use SATA have been phased out of SAN and NAS solutions over the past few years. Performance
is somewhat slower than that of NL-SAS drives that are otherwise identical.
Page 415
Page 416
VTSP 5.5
You can use PSA SATP claim rules to manually tag SSD devices that are not
detected automatically.
vSphere 5.5 introduces support for Hot-Pluggable PCIe SSD devices.
SSD drives can be added to or removed from a running ESXi host. This reduces the
amount of downtime for virtual machines in the same way that traditional hot-plug
SAS/SATA disks have done.
The hardware and BIOS of the server must support Hot-Plug PCIe.
Page 417
Storage Tradeoffs
There are many trade-offs involved in storage design. At a fundamental level there are
broad choices to be made in terms of the cost, features and complexity when selecting
which storage solutions to include.
Direct attached storage and the VSA are low cost storage solutions that are relatively
simple to configure and maintain.
The principal drawbacks with them are the lack of support for most advanced features
with DAS and the limited scale and performance capability of the VSA. As a result, both
of these are primarily suited to small environments where cost is a principal driver and
the advanced cluster features are less compelling.
For the SAN and NAS solutions, FC SANs are typically high cost, with iSCSI and NAS
solutions tending to be more prevalent at the lower end.
All three can support all advanced cluster-based vSphere functionality and deliver very
high capacity and high performance with exceptional resilience. There are relatively few
truly entry level FC solutions but both iSCSI and NFS solutions scale from the
consumer price level up, and there are a number of entry level iSCSI and NAS solutions
that are on the HCL and supported.
FC tends to be dominant at the high end and in demanding environments where
performance is critical but there are iSCSI and NAS solutions at the very high end too.
Page 418
VTSP 5.5
The choice of which is most suitable will often come down to customer preference for
an Ethernet/IP storage network over a dedicated FC network, or to whether specific
OEM capabilities are better suited to the customer's needs than the
alternatives.
Page 419
Page 420
VTSP 5.5
Page 421
Page 422
VTSP 5.5
Page 423
Module Summary
This concludes module 1, vSphere vStorage Architecture.
Now that you have completed this module, you should be able to:
Page 424
VTSP 5.5
Page 425
Module 2 Objectives
At the end of this module, you will be able to:
Page 426
VTSP 5.5
Page 427
VTSP 5.5
Extending PSA
PSA is an open, modular framework that coordinates the simultaneous operation of
multiple multipathing plug-ins or MPPs.
The PSA framework also supports the installation of third-party plug-ins that can replace
or supplement the native vStorage components that we have just seen.
These plug-ins are developed by software or storage hardware vendors and integrate
with the PSA.
They improve critical aspects of path management and add support for new path
selection policies and new arrays that are currently unsupported by ESXi.
Third-party plug-ins are of three types: third-party SATPs, third-party PSPs, and third-party MPPs.
Third-party SATPs are generally developed by third-party hardware manufacturers, who
have expert knowledge of their storage devices.
These plug-ins are optimized to accommodate specific characteristics of the storage
arrays and to support new array lines.
You need to install third-party SATPs when the behavior of your array does not match
the behavior of any existing PSA SATP.
When installed, the third-party SATPs are coordinated by the NMP. They can be
simultaneously used with the VMware SATPs.
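To see which SATPs and PSPs are present on a host, and how a claim rule ties an array family to a particular SATP, a short esxcli-based sketch follows. The vendor and model strings are invented placeholders; a real third-party SATP ships with its own installer and documented rule set, so this only illustrates the mechanics.

```python
# Sketch: inspect the PSA plug-ins on an ESXi 5.x host and add a vendor/model
# based SATP claim rule. The "ACME"/"FastArray" values are placeholders.
import subprocess

def run(cmd):
    print("Running: " + " ".join(cmd))
    subprocess.check_call(cmd)

# List the SATPs and PSPs currently registered with the NMP.
run(["esxcli", "storage", "nmp", "satp", "list"])
run(["esxcli", "storage", "nmp", "psp", "list"])

# Example claim rule: have a (hypothetical) array family claimed by a
# specific SATP with round-robin path selection as its default PSP.
run(["esxcli", "storage", "nmp", "satp", "rule", "add",
     "--satp=VMW_SATP_ALUA",
     "--vendor=ACME", "--model=FastArray",
     "--psp=VMW_PSP_RR"])
```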
Student Study Guide - VTSP 5.5
Page 429
Page 430
VTSP 5.5
Page 431
Multipathing
To maintain a constant connection between an ESXi host and its storage, ESXi
supports multipathing.
Multipathing is the technique of using more than one physical path for transferring data
between an ESXi host and an external storage device.
In the case of a failure of any element in the SAN network, such as an HBA, switch, or cable,
ESXi can fail over to another physical path.
In addition to path failover, multipathing offers load balancing for redistributing I/O loads
between multiple paths, thus reducing or removing potential bottlenecks.
Page 432
VTSP 5.5
FC Multipathing
To support path switching with Fibre Channel or FC SAN, an ESXi host typically has
two or more HBAs available, from which the storage array can be reached using one or
more switches.
Alternatively, the setup can include one HBA and two storage processors (SPs) so
that the HBA can use a different path to reach the disk array.
In the graphic shown, multiple paths connect each ESXi host with the storage device for
a FC storage type.
In FC multipathing, if HBA1 or the link between HBA1 and the FC switch fails, HBA2
takes over and provides the connection between the server and the switch. The process
of one HBA taking over for another is called HBA failover.
Similarly, if SP1 fails or the links between SP1 and the switches break, SP2 takes over
and provides the connection between the switch and the storage device.
This process is called SP failover.
The multipathing capability of ESXi supports both HBA and SP failover.
Page 433
iSCSI Multipathing
Multipathing between a server and storage array provides the ability to load-balance
between paths when all paths are present and to handle failures of a path at any point
between the server and the storage.
Multipathing is a de facto standard for most Fibre Channel SAN environments.
In most software iSCSI environments, multipathing is possible at the VMkernel network
adapter level, but it is not the default configuration.
In a VMware vSphere environment, the default iSCSI configuration for VMware ESXi
servers creates only one path from the software iSCSI adapter (vmhba) to each iSCSI
target.
To enable failover at the path level and to load-balance I/O traffic between paths, the
administrator must configure port binding to create multiple paths between the software
iSCSI adapters on ESXi servers and the storage array. Without port binding, all iSCSI
LUNs will be detected using a single path per target.
By default, ESXi will use only one VMkernel NIC as the egress port to connect to each target,
and you will be unable to use path failover or to load-balance I/O between different
paths to the iSCSI LUNs.
This is true even if you have configured network adapter teaming using more than one
uplink for the VMkernel port group used for iSCSI.
Page 434
VTSP 5.5
In the case of simple network adapter teaming, traffic will be redirected at the network
layer to the second network adapter during connectivity failure through the first network
card, but failover at the path level will not be possible, nor will load balancing between
multiple paths.
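To make the port-binding step concrete, here is a sketch of the esxcli sequence that binds two VMkernel adapters to the software iSCSI adapter on vSphere 5.x. The adapter name vmhba33 and the vmk1/vmk2 interfaces are placeholders; each bound VMkernel port must also be backed by exactly one active uplink in its port group, which is configured separately in the networking settings.

```python
# Sketch: bind two VMkernel NICs to the software iSCSI adapter so that each
# becomes a separate path to every iSCSI target (vSphere 5.x).
# vmhba33, vmk1 and vmk2 are placeholders for your environment.
import subprocess

ISCSI_ADAPTER = "vmhba33"
VMK_NICS = ["vmk1", "vmk2"]

for vmk in VMK_NICS:
    cmd = ["esxcli", "iscsi", "networkportal", "add",
           "--adapter=" + ISCSI_ADAPTER, "--nic=" + vmk]
    print("Running: " + " ".join(cmd))
    subprocess.check_call(cmd)

# Verify the bindings, then rescan so the new paths are discovered.
subprocess.check_call(["esxcli", "iscsi", "networkportal", "list",
                       "--adapter=" + ISCSI_ADAPTER])
subprocess.check_call(["esxcli", "storage", "core", "adapter", "rescan",
                       "--adapter=" + ISCSI_ADAPTER])
```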
Page 435
Page 436
VTSP 5.5
Storage I/O Control enables share values to be honored across all ESXi hosts
accessing the same datastore.
Storage I/O Control is best used for avoiding contention and thus poor performance on
shared storage.
It gives you a way of prioritizing which VMs are critical and which are not so critical from
an I/O perspective.
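The effect of share values is easiest to see with a small worked example. The sketch below is not an API call; it simply models how, during contention, a datastore's available throughput is divided in proportion to the shares assigned to each virtual machine's disks. The VM names, share values, and IOPS figure are illustrative.

```python
# Illustrative model of proportional-share allocation under contention.
# Shares only matter when the datastore is congested; otherwise each VM
# simply gets what it asks for.

vm_shares = {
    "critical-db": 2000,   # high shares
    "app-server":  1000,   # normal shares
    "test-vm":      500,   # low shares
}

datastore_iops_under_contention = 7000  # whatever the array can deliver right now

total_shares = sum(vm_shares.values())
for vm, shares in vm_shares.items():
    entitlement = datastore_iops_under_contention * shares / total_shares
    print(f"{vm}: {shares} shares -> ~{entitlement:.0f} IOPS entitlement")
# critical-db gets 4000, app-server 2000, test-vm 1000 of the 7000 available.
```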
Page 437
Datastore Clusters
A datastore cluster is a collection of datastores with shared resources and a shared
management interface. Datastore clusters are to datastores what clusters are to hosts.
When you create a datastore cluster, you can use vSphere Storage DRS to manage
storage resources.
A datastore cluster enabled for Storage DRS is a collection of datastores working
together to balance capacity and I/O latency.
When you add a datastore to a datastore cluster, the datastore's resources become part
of the datastore cluster's resources.
As with clusters of hosts, you use datastore clusters to aggregate storage resources,
which enables you to support resource allocation policies at the datastore cluster level.
The following resource management capabilities are also available per datastore
cluster:
Page 438
VTSP 5.5
Page 439
Page 440
VTSP 5.5
Storage DRS
Storage Distributed Resource Scheduler (SDRS) allows you to manage the aggregated resources
of a datastore cluster.
When Storage DRS is enabled, it provides recommendations for virtual machine disk placement
and migration to balance space and I/O resources across the datastores in the datastore cluster.
Storage DRS provides the following functions:
Initial placement of virtual machines based on storage capacity and, optionally, on I/O latency.
Storage DRS provides initial placement and ongoing balancing recommendations to datastores in
a Storage DRS-enabled datastore cluster.
Use of Storage vMotion to migrate virtual machines based on storage capacity. The default
setting is to balance usage when a datastore becomes eighty percent full (illustrated in the
sketch that follows this list). Consider leaving more available space if snapshots are used
often or multiple snapshots are kept.
Storage DRS provides for the use of Storage vMotion to migrate virtual machines based on I/O
latency.
When I/O latency on a datastore exceeds the threshold, Storage DRS generates recommendations
or performs Storage vMotion migrations to help alleviate high I/O load.
Use of fully automated storage maintenance mode to clear a LUN of virtual machine files.
Storage DRS maintenance mode allows you to take a datastore out of use in order to service it.
Storage DRS also provides configuration in either manual or fully automated mode, and the use
of affinity and anti-affinity rules to govern virtual disk location.
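As a rough illustration of the space-balancing trigger mentioned above, the sketch below checks a set of datastores against the default eighty percent utilized-space threshold and flags the ones that would generate migration recommendations. The datastore figures are made up, and real Storage DRS also weighs I/O latency, projected growth, and affinity rules before recommending a move.

```python
# Toy illustration of the Storage DRS utilized-space trigger (default 80%).
# Capacities and usage figures below are invented for illustration only.

SPACE_THRESHOLD = 0.80

datastores = {
    "ds-gold-01":   {"capacity_gb": 2048, "used_gb": 1780},
    "ds-gold-02":   {"capacity_gb": 2048, "used_gb": 1100},
    "ds-silver-01": {"capacity_gb": 4096, "used_gb": 3500},
}

for name, ds in datastores.items():
    utilization = ds["used_gb"] / ds["capacity_gb"]
    status = ("over threshold - candidate for rebalancing"
              if utilization > SPACE_THRESHOLD else "ok")
    print(f"{name}: {utilization:.0%} used ({status})")
```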
Page 441
Page 442
VTSP 5.5
VAAI Overview
vStorage APIs for Array Integration (VAAI) provide hardware acceleration functionality.
VAAI enables your host to offload specific virtual machine and storage management
operations to compliant storage hardware.
With the storage hardware assistance, your host performs these operations faster and
consumes less CPU, memory, and storage fabric bandwidth.
VAAI uses the following Block Primitives:
Atomic Test & Set (ATS), which is used during creation and locking of files on the
VMFS volume, replacing SCSI reservations for metadata updates. NFS uses its
own locking mechanism and so does not use SCSI reservations.
Clone Blocks/Full Copy/XCOPY is used to copy or migrate data within the same
physical array.
Zero Blocks/Write Same is used to zero-out disk regions.
In ESXi 5.x, support for NAS hardware acceleration is included with support for
the following primitives.
Full File Clone - Like the Full Copy VAAI primitive provided for block arrays, this
Full File Clone primitive enables virtual disks to be cloned by the NAS device.
Native Snapshot Support - Allows creation of virtual machine snapshots to be
offloaded to the array.
Extended Statistics - Enables visibility into space usage on NAS datastores and is
useful for Thin Provisioning.
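One practical way to see whether a given device is benefiting from these primitives is the per-device hardware-acceleration status that esxcli reports. The sketch below wraps that query; the device identifier is a placeholder, and the exact output fields can vary with the array and its VAAI plug-in.

```python
# Sketch: query VAAI (hardware acceleration) support for one device on an
# ESXi 5.x host. "naa.xxxxxxxxxxxxxxxx" is a placeholder device identifier.
import subprocess

DEVICE = "naa.xxxxxxxxxxxxxxxx"

# Per-primitive VAAI status (ATS, Clone, Zero, Delete) for the device.
subprocess.check_call(["esxcli", "storage", "core", "device", "vaai",
                       "status", "get", "--device=" + DEVICE])

# The general device listing also reports an overall VAAI status of
# supported / unsupported / unknown for the device.
subprocess.check_call(["esxcli", "storage", "core", "device", "list",
                       "--device=" + DEVICE])
```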
Page 443
Page 444
VTSP 5.5
Page 445
Page 446
VTSP 5.5
Page 447
Profile-Driven Storage
Managing datastores and matching the SLA requirements of virtual machines with the
appropriate datastore can be challenging and cumbersome tasks.
Profile-driven storage enables the creation of datastores that provide varying levels of
service. Profile-driven storage can be used to categorize datastores
based on system-defined or user-defined levels of service.
The capabilities of the storage subsystem can be identified by using VMware Storage
APIs for Storage Awareness (VASA). Storage vendors can publish the capabilities of
their storage to VMware vCenter Server, which can display the capabilities in the user
interface.
User-defined means that storage capabilities can be identified by the user (for non-VASA
systems). For example, user-defined levels might be gold, silver, and bronze.
Profile-Driven Storage will reduce the amount of manual administration required for
virtual machine placement while improving virtual machine SLA storage compliance.
Virtual machine storage profiles can be associated to virtual machines and periodically
checked for compliance to ensure that the virtual machine is running on storage with the
correct performance and availability characteristics.
Use compliance checks to ensure a virtual machine is always on appropriate storage.
Find non-compliant virtual machines and correct the error via Storage vMotion.
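The compliance idea can be sketched without any of the VASA or storage-profile machinery: associate each virtual machine with a required service level, record the level each datastore offers, and flag mismatches as candidates for Storage vMotion. The tiers, VMs, and datastores below are entirely invented, and this is a conceptual model only, not the vCenter Server API.

```python
# Conceptual model of profile-driven storage compliance (not the vCenter API).
# Tiers, datastores, and VM placements are invented for illustration.

datastore_tier = {
    "ds-gold-01":   "gold",
    "ds-silver-01": "silver",
    "ds-bronze-01": "bronze",
}

vm_profile = {            # storage profile associated with each VM
    "payroll-db":   "gold",
    "web-frontend": "silver",
    "build-box":    "bronze",
}

vm_placement = {          # datastore each VM currently lives on
    "payroll-db":   "ds-silver-01",   # non-compliant: gold VM on silver storage
    "web-frontend": "ds-silver-01",
    "build-box":    "ds-bronze-01",
}

for vm, required_tier in vm_profile.items():
    actual_tier = datastore_tier[vm_placement[vm]]
    if actual_tier == required_tier:
        print(f"{vm}: compliant ({required_tier})")
    else:
        print(f"{vm}: NON-COMPLIANT - needs {required_tier}, on {actual_tier}; "
              "candidate for Storage vMotion")
```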
Page 448
VTSP 5.5
Page 449
Page 450
VTSP 5.5
Page 451
Module Summary
This concludes module 2, vSphere vStorage Advanced Features.
Now that you have completed this module, you should be able to:
Page 452
VTSP 5.5
Page 453
Module Objectives
Using a sample customer, this module will help you to make design choices with regard
to base VM capacity and performance needs; define requirements for snapshots,
SDRS, template storage and planned growth rates; and explore utilization and
performance trade-offs with shared storage consolidation decisions.
Page 454
VTSP 5.5
Page 455
Page 456
VTSP 5.5
Page 457
Page 458
VTSP 5.5
Page 459
Page 460
VTSP 5.5
Page 461
Page 462
VTSP 5.5
Page 463
Module Summary
By now, you should be able to:
Page 464
VTSP 5.5