
Virtualizing Exchange 2013 the right way

Customers have attempted to virtualize Microsoft Exchange Server since the earliest hypervisors appeared. At first, Microsoft resisted these attempts and would not provide support if problems appeared; the attitude was that any problem had to be replicated on a physical server before support was possible.
The situation changed with Exchange 2007. Customer demand, the growing maturity of virtualization technology and the appearance of Microsoft's own hypervisor (Hyper-V) created a new imperative for Exchange to support virtualization. Since then, Microsoft has steadily improved the ability of Exchange to use different virtualization technologies, and Exchange has become an application that is commonly run on Microsoft Hyper-V, VMware vSphere and the other hypervisors approved by Microsoft.
Virtualization creates its own particular technical demands that system
administrators have to take into account as they plan its use with applications.
Some applications, like Exchange, have relatively strict guidelines about the
virtualization technologies that can be used and those that cannot. Sometimes
this is because a technology is unproven with Exchange; sometimes it is
because the way that the technology operates conflicts with the way that
Exchange behaves. This document lays out the most important issues that
system administrators should know about as they approach the deployment of
Exchange 2013 on a virtualized platform.
Given the rapid cadence of Microsoft's updates for Exchange 2013, the ongoing development of hypervisors, new capabilities in Windows and other improvements in hardware and software, the advice outlined here is prone to revision over time. It is correct as of April 2014 and covers Exchange 2013 SP1, Windows Server 2012 and Windows Server 2012 R2, and the virtualization technology available at this time.


The case for virtualizing Exchange


Advocates of virtualization usually advance their case on the basis that
virtualization allows greater utilization of available hardware. The idea is
that one or more large virtual hosts are capable of providing the necessary
resources to support the required number of Exchange servers. Tuning the
virtual host allows precisely the right level of resources to be dedicated to each
Exchange server, whether it is a dedicated mailbox server, a multi-role server, or
a CAS or transport server. The advantages claimed include:
Virtual servers make more efficient use of available hardware and
therefore reduce the overall cost of the solution. It is possible that
a well-configured and managed virtual host is capable of supporting a
number of virtual Exchange servers and that the overall solution will make
better use of the total hardware resources than if Exchange is deployed on
a set of physical computers. This is particularly true in deployments where
Exchange serves only a couple of hundred mailboxes and the load only
takes a portion of the total resources available in a physical server. Another
example of virtualization adding value is the deployment of several virtual
Exchange servers (ideally on multiple host machines) instead of one large
physical server to support thousands of mailboxes. In this case, the virtual
Exchange servers can be arranged in a Database Availability Group (DAG) to
take advantage of the application's native high availability features, whereas
the single large physical server is not protected by a DAG and therefore
represents a single potential point of failure.
Efficient use of resources must be examined in the context of designs
created for specific circumstances. It is possible to dedicate far too many
resources to handle a particular workload and consequently the virtual
servers will not be particularly efficient; likewise, it is possible to dedicate too
few resources in an attempt to maximize utilization.
Virtual servers are more flexible and easier to deploy. Well-managed virtual environments are configured to allow new Exchange servers to be spun up very quickly indeed, far faster than it takes to procure new physical hardware and then install Windows, Exchange and whatever other software is required for the production environment. This capability allows a solution to be more flexible than a physical equivalent, a factor that might be important when migrating from one Exchange version to another or if extra capacity is required in situations like corporate mergers.
Virtual servers are easier to restore. If a virtual server fails, it can be easier to create a new virtual server and then restore it using the Exchange Setup /m:RecoverServer option (a sketch follows this list) than it would be to fix the myriad hardware problems that can afflict a physical server.

Virtual Exchange servers allow small companies to deploy technologies such as DAGs without having to invest in multiple physical servers. This is correct, as it is possible to run many virtual Exchange 2013 servers arranged in a DAG on a single physical server. However, as explained above, the concept of not wanting all of one's eggs to be in a single basket holds true here too, as a failure of the single physical server necessarily renders the complete DAG inoperative.
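As a minimal sketch of the recovery path mentioned above (the server is rebuilt from the configuration held in Active Directory; the switch shown is the documented Exchange 2013 setup mode):

    # After resetting the failed server's computer account in AD and installing
    # Windows on a replacement VM with the same name, rebuild Exchange:
    Setup.exe /m:RecoverServer /IAcceptExchangeServerLicenseTerms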
None of these advantages can be gained without careful planning and
preparation of the virtual environment that will support Exchange. A badly
configured and managed virtual environment will be even more fraught with
problems than its physical counterpart. It is therefore critical to emphasize that
it requires substantial effort to support virtualized Exchange. In the IT world,
nothing good comes free of charge.

The case against virtualizing Exchange
Some experienced Exchange administrators will always prefer to run Exchange
on physical computers on the basis that these servers deliver a more
predictable level of performance. This position is often taken for mailbox
servers but can be extended to CAS and transport servers too. The arguments
usually advanced against virtualizing Exchange include:
Servers are easier to configure and support. It cannot be denied that it is
simpler to load Exchange on a physical computer, if only because no work
is necessary to prepare the virtual host machines to support Exchange. It is
also true that it is easier to support Exchange running on physical computers because the hypervisor layer does not have to be considered when debugging problems that occur during deployment or ongoing operations.
Removing virtualization removes complexity. Out-of-the-box, Exchange
is a reasonably complex server application that depends on Windows and a
number of other components (like PowerShell). Keeping the deployment of
Exchange as simple as possible has many virtues in terms of supportability
and cost. Adding a hypervisor layer to the mix increases the complexity of
the overall solution and makes it harder to deploy and manage servers. The
fact that Microsoft does not use virtualized servers in its Exchange Online
service within Office 365 is often cited to support the claim.


Virtualization is expensive. The cost of hypervisor licenses is an obvious additional expense that is incurred on top of Windows and Exchange server licenses. Removing the hypervisor reduces cost, as does removing the time necessary for administrators to configure the virtual hosts to support Exchange. A further cost is incurred in the resources required to operate
the extra layer of the hypervisor and the processing that is performed to
keep virtual servers running. This layer typically imposes a CPU penalty
in the order of 10% on the host computer. The Microsoft Exchange Role
Requirements calculator helps you to understand the impact of virtualization
by factoring in this overhead when it calculates the number of Exchange
2013 servers required to handle a particular workload.
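As a rough illustration of that overhead factor (the figures are hypothetical): if a workload sized on physical hardware needs 36 cores' worth of CPU, then under a hypervisor imposing a 10% penalty you should provision approximately 36 / (1 - 0.10) = 40 cores to deliver the same effective capacity.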
All of these issues can be mitigated in different ways. Experience and
knowledge of the virtualized platform eases the workload in configuration
and support. The same is true of complexity: people who are unaccustomed to working with virtual servers will find the effort required to deploy Exchange on this platform much greater than those who are used to virtualization will.
Cost is often the hardest problem to mitigate because the direct and easily
measurable cost of software licenses can only be offset by efficiencies in
hardware utilization and ease of server deployment and management, both
of which are more difficult to measure in terms of direct savings. It is also true
that Exchange 2013 requires more CPU and memory resources than previous
versions and that the preferred approach to scaling Exchange is to scale out rather than scale up. When these factors are put together with the need
to ensure resilience by separating components so that no one failure can
compromise a large number of servers, the result is often a number of virtual
Exchange servers distributed across multiple physical hosts, which in turn
increases the overall cost for both hardware and software.
What is true is that any decision to use virtualization for any application
should be well-founded and based on real data. Deciding to virtualize on a
whim is seldom a good idea. Deciding to virtualize Exchange because your
company has substantial knowledge in virtualization technology and has
made a strategic decision to virtualize applications whenever possible is quite
a different matter. Make sure that you understand the risks involved and the
advantage that you want to gain before you embark on your virtualization
project.


The attitude of the Exchange development group towards virtualization
The Microsoft Exchange development group supports virtualized Exchange
because they believe that running Exchange on a virtual platform makes
perfect sense for a certain subset of their 300 million-plus mailbox installed
base. This message has been given at many conferences, most recently by Jeff Mealiffe during the "Exchange Server 2013: Virtualization Best Practices" session at the Microsoft Exchange Conference in April 2014.
Support for virtualized Exchange does not mean that Microsoft endorses
virtualization for every customer. Instead, they say that any decision to use
virtualization should result in a definable benefit for the customer. In other
words, if you decide to deploy Exchange on a virtual platform, make sure that
you get something measurable out of that decision.
Microsoft's preferred architecture for Exchange 2013
On April 21, 2014, Microsoft published a blog post describing "The Preferred Architecture" for Exchange 2013 and stated that the architecture avoids virtualization. The relevant text is:
In the preferred architecture, all servers are physical, multi-role servers. Physical
hardware is deployed rather than virtualized hardware for two reasons:
1. The servers are scaled to utilize eighty percent of resources during the worst-failure mode.
2. Virtualization adds an additional layer of management and complexity, which
introduces additional recovery modes that do not add value, as Exchange
provides equivalent functionality out of the box.
Proponents of virtualization might be disappointed by this statement as it
appears to cast a cold eye over virtual servers. However, the statement should
be considered in the light of the logic driving its content.
Microsoft is heavily influenced by their experience of running Exchange 2013
at massive scale in Office 365. This deployment does not use virtualization
because of the undoubted complexity that the additional layer would bring
to the management of 100,000+ servers.
Support is easier and cheaper for Microsoft when customers follow very
simple principles for the deployment of any software. Support is harder
when hypervisors are involved because it requires additional experience and
expertise on the part of support personnel.

Virtualizing Exchange 2013 the right way

Any preferred architecture from a vendor can only be expressed as a set of guiding principles that should be taken into account when laying out the precise details for the deployment of that vendor's product in specific
circumstances. Those circumstances include business requirements, existing
operational and technology environment, and the skills and expertise of the
architects and IT staff. In other words, you have to put the principles into
context and adjust as necessary for a particular situation.
For example, The Preferred Architecture states: "To achieve a highly available and site resilient architecture, you must have two or more datacenters that are well-connected." This statement is accurate, but it is also unreasonable
for companies who do not have the resources to run two data centers (a
fact acknowledged by Microsoft), and it does not work where (for different
reasons) a company has decided to concentrate IT into a single data center. In
these scenarios you should ignore site resilience and instead work out how to
leverage the IT assets of the company to achieve the highest availability that is
possible for Exchange 2013 within the known constraints.
Using the same logic of interpreting architectural principles in context, it is
possible that virtualization is the best platform for Exchange within a company
because the physical and people assets are in place to make virtualization a
good choice.

Basic recommendations for Exchange 2013 deployment
The general advice for deploying Exchange 2013 servers applies whether you use physical or virtual servers and includes:
Use multi-role servers whenever possible to ensure that you make maximum
use of available resources and to increase the overall resilience of the
design.
Configure servers in DAGs to achieve high availability. Each DAG is a
Windows Failover cluster, but Exchange takes care of all of the configuration
and management of the underlying cluster.
All of the members of a DAG must run the same version of the Windows operating system. All DAG members should run the same version of Exchange, except for the time when a cumulative update is being applied across the DAG. Remember to put DAG members into maintenance mode before you apply a cumulative update (see the sketch after this list).


Ensure that every database is replicated so that at least three copies exist within a DAG; the sketch after this list shows how a copy is added and checked. This level of redundancy ensures that the loss of one or two servers will not remove service from users. For this reason, scale out by distributing mailboxes across multiple servers (that can support the multiple database copies) rather than scale up with large servers that support thousands of users. Remember, a database can never be replicated on the same server. Exchange high availability depends on the ability to maintain replicated databases distributed across multiple servers.
Ensure that Exchange servers are configured with the appropriate level
of CPU, storage and memory to support the predicted workload. Use
the Microsoft Exchange Role Requirements calculator to come up with
basic configurations and then adjust to reflect the details of your specific
environment.
Follow best practice for the management and monitoring of Exchange 2013.
Remember that Managed Availability is active on all Exchange 2013 servers
to provide an element of automatic management. However, Managed
Availability does not monitor Windows or the hypervisor.
Never attempt to compensate or replace a feature built into Exchange with
what appears to be an overlapping hypervisor feature. Hypervisors are
not created specifically to support Exchange. Instead, they are designed to
support general-purpose computing and should be used in that fashion.
Exchange servers typically support third-party software products that are
integrated with Exchange to provide additional functionality to end users.
These products have to be validated against the selected hypervisor.
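As a brief sketch of the maintenance-mode and database-copy points above (server, database and DAG names are hypothetical; the cmdlets and scripts shown ship with Exchange 2013):

    # Put a DAG member into maintenance mode before applying a cumulative update
    # (run from the Exchange scripts directory, typically
    # C:\Program Files\Microsoft\Exchange Server\V15\Scripts):
    .\StartDagServerMaintenance.ps1 -ServerName EX1

    # Add a third copy of a database to another DAG member:
    Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX3 -ActivationPreference 3

    # Verify that all copies are healthy and replication is keeping up:
    Get-MailboxDatabaseCopyStatus -Identity DB01 |
        Format-Table Name, Status, CopyQueueLength, ReplayQueueLength

    # Return the server to service when the update is complete:
    .\StopDagServerMaintenance.ps1 -ServerName EX1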
Best practice evolves over time as experience with software develops and
improvements emerge in hardware and software. Microsoft updates Exchange
2013 on a quarterly basis and this update cadence has to be factored into your
deployment plans.
Familiarity with a hypervisor and Windows is not sufficient knowledge to
deploy Exchange 2013. You need to understand all aspects of the equation
before you begin and you need to be able to manage the resulting
infrastructure.


Understanding Exchange product support for virtualization technologies
The Exchange product group has gradually improved its support for hypervisors and virtualized servers over the years. This support comes about through comprehensive testing of different versions of Exchange against different hypervisors. The testing is done to ensure that
different combinations work, that data integrity is maintained through different
circumstances and that it is possible to support solutions during customer
deployment. The experience gained through this activity results in a clear set of
recommendations that you should take seriously as you plan your deployment
of Exchange virtual servers.
Unsurprisingly, Exchange supports all versions of Hyper-V. Third-party hypervisors such as vSphere are supported if they are validated through the Windows Server Virtualization Validation Program (SVVP). The usual advice
is to use the latest version of a hypervisor as the platform for virtualized
Exchange. All Exchange 2013 and Exchange 2010 server roles are supported for
virtualization although you are advised to run multi-role servers to achieve the
best blend of availability and resource utilization.
The remainder of this section focuses on different aspects of virtualization
technology and its application to Exchange 2013.
Storage
It is important to note that all versions of Exchange only support block-mode
storage. The NFS solutions that are often used in virtualized environments are
unsupported by Exchange, largely because of performance and reliability issues
that have been encountered in the past. The performance issues are largely
resolved but reliability (data integrity) remains a concern. Microsoft does not
test NFS solutions with Exchange and therefore cannot guarantee that any
of these solutions can provide the necessary storage qualities required by
Exchange. See this blog post for more information about the three basic areas
of concern that the Exchange product group has with NFS.
A great deal of technical debate has flowed around this topic and some NFS
vendors offer support to their customers if they elect to use NFS with Exchange.
Advocates of NFS argue that the storage works with Exchange and can be
presented to Exchange in an appropriate, reliable and robust manner, and
they point to successful implementations to back up their case. However, the
problem is that although it is possible to create a working solution based on a
certain version of a hypervisor running specific drivers connected to a particular
set of NFS storage, a general guarantee cannot be extended that every NFS
solution will work in exactly the same way.

Microsoft's stance remains that they do not support NFS with Exchange, whether the NFS storage is presented to physical Exchange servers or virtual Exchange servers. Other shared storage alternatives such as iSCSI, SMB 3.0 or Fibre Channel exist and are capable of delivering reliable storage to Exchange. It is therefore Microsoft's absolutely clear recommendation that any storage presented to Exchange must be block-mode.
Differencing or delta disks are also unsupported with Exchange. One difference
between Exchange 2010 and Exchange 2013 is that the newer version supports
SMB 3.0 for VHD files. Microsoft recommends the use of the Jetstress utility to validate that the storage provided to virtual Exchange servers is capable of handling the predicted load generated by mailboxes and other activities.
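Along the same lines, creating fixed (fully allocated) virtual disks on Hyper-V avoids the unsupported differencing-disk configuration. A minimal sketch, with an illustrative path and size:

    # Create a fixed-size VHDX for an Exchange database volume;
    # differencing/delta disks are unsupported for Exchange:
    New-VHD -Path D:\VHDs\EX1-DB01.vhdx -Fixed -SizeBytes 500GB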
Each version of Exchange has different storage characteristics as Microsoft
continues to drive down physical I/O demands in favor of using more cached
in-memory data. At the same time, new features demand more resources.
Collectively, these facts make it imperative that sufficient hardware resources
are dedicated to ensure the success of a virtualized Exchange project and that
those resources are validated through testing.
In general, any storage provided to Exchange should be optimized for low I/O
latency and its ability to be used by the Exchange high availability features.
Given that Exchange 2013 is able to use many different kinds of storage from
enterprise-class to JBOD disks, providing the right kind of block-mode storage
should not be an issue.
Host-based clustering
Host-based clustering is supported by Exchange. This is a method to bring
machines back online after a hardware failure occurs on one host in a cluster.
Such an outage results in the transfer of the virtual machines running on that
host to another node in the cluster. However, host-based clustering does not
deliver true high availability for Exchange as the host has no knowledge of the
way that various Exchange features combine to contribute to high availability
for its databases and transport system. On the other hand, if you run standalone Exchange servers that are not part of a DAG, host-based clustering is an effective way to restore these servers to health quickly.
Although it handles hardware failures, host-based clustering does not resolve
other situations that are catered to by Exchange, such as the single page
patching facility used to resolve corrupt pages in replicated databases or
the automatic switchover of databases to another DAG member if Managed
Availability determines that a protocol has failed on a server. As long as you understand these limitations, it is a good idea to use host-based clustering in combination with Exchange high availability to achieve maximum protection against potential failure.

Migration
Exchange 2013 supports migration technology with some limitations. For example, you can use Hyper-V's Live Migration or VMware's vMotion functions to move virtual Exchange servers between hosts, but you cannot use Hyper-V's Quick Migration facility. The essential requirement is that a virtual machine running Exchange must remain online during the migration.
Performing a point-in-time save-to-disk and move is unsupported. The reason
is simple: to maintain the best possible performance, Exchange manipulates
a lot of data in memory. If the Exchange server is a member of a DAG, that
memory includes a view of the current state of the underlying Windows
Failover Cluster.
Save-to-disk and move might bring an Exchange server back online in a state
where the in-memory data causes inconsistency for the moved Exchange server
or for another server within the organization. For example, when you bring a
DAG member back online, that server might believe that it is a fully functioning
member of the Windows Failover Cluster and therefore will attempt to function
as such. But during the time that the migration was happening, the other
members of the DAG might have discovered that the server had gone offline
and will therefore have adjusted cluster membership by removing the offline
server. The result is a synchronization clash where one server has a certain view
of the cluster that is not shared by the other members. Restoring the DAG and
cluster to full operational health will require manual administrator intervention.
Keeping the virtual machine (and thus Exchange) online during migrations avoids the issue because the other Exchange servers never register the server as failed and therefore never need to take action (such as activating database copies on other servers within a DAG or initiating the replay of in-transit messages from Safety Net).
The biggest issue that you are likely to face with migration is ensuring that DAG member nodes continue to communicate during the move. Failure to achieve this will cause the cluster heartbeat to time out and the node being moved will be evicted from the Windows Failover Cluster that underpins the DAG. When
a migration happens, a point-in-time copy of the virtual machines memory
is taken from the source to the target host. At the same time, pages that are
being changed are tracked and these pages are also copied to the target as
the migration progresses. Eventually no more pages are being changed and
the brownout period occurs, during which the virtual machine is unavailable
because it is being transferred from the source host to the target.
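As an illustration, a Hyper-V live migration that keeps the virtual machine online can be started as follows (host and VM names are hypothetical, and live migration must already be enabled between the hosts):

    # Live-migrate a running Exchange VM; it stays online apart from
    # the brief brownout at the end of the memory copy:
    Move-VM -Name EX1 -DestinationHost HV-HOST2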



If the brownout period is less than the cluster heartbeat timeout (typically five
seconds), the Exchange server can continue working from the point that the
brownout started and normal operations will continue. But if the brownout
lasts longer than the cluster timeout, Windows Failover clustering will consider
that the node has gone offline and will evict the node from the cluster. In turn,
this will cause the Active Manager process running within the DAG to initiate
a server failover for the now-evicted node and will activate its databases on
other DAG members. In effect, the migration failed because service was not
maintained and normal operations did not continue when the virtual machine
moved to the new host. The now-moved server will eventually come back
online and rejoin the cluster, but a separate, manual administrative intervention
will be necessary to reactivate the database copies on the server to rebalance
workload across the DAG.
Two steps can be used to mitigate the problem. The first is to ensure that
sufficient network bandwidth is available to transfer virtual machines without
running the risk that the brownout period exceeds the cluster heartbeat
timeout. The exact amount of bandwidth required depends on the size of the
virtual machine, the workload that it is under at the time and the version of the
hypervisor that is used, so some testing will be necessary to establish exactly
how quickly virtual machines can be moved. The second step is to adjust the
cluster heartbeat timeout to reflect the expected brownout period. Adjusting the
cluster heartbeat timeout is not usually recommended but it can be an effective
solution to the problem. If you do decide to adjust the timeout, the highest
value recommended by the Exchange development group is ten seconds.
See http://blogs.msdn.com/b/clustering/archive/2012/11/21/10370765.aspx
for more information about how to tune the heartbeat interval for Windows
Failover clusters.
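A minimal sketch of inspecting and adjusting those settings with the Windows failover clustering PowerShell module (the default is a one-second heartbeat with a threshold of five missed heartbeats; the value shown follows the ten-second ceiling mentioned above):

    # Inspect the current same-subnet heartbeat settings:
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold

    # Allow up to ten missed one-second heartbeats (ten seconds in total)
    # before a node is considered to be down:
    (Get-Cluster).SameSubnetThreshold = 10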
Conflict with hypervisor disaster recovery features
Exchange 2013 is designed to be a highly available application that can
continue to provide an excellent service to clients even when common outage
scenarios such as disk or server failures affect databases. Together with many
other features, including Managed Availability, the Database Availability
Group and replicated databases are the fundamental building blocks used by
Exchange to provide high availability.
It is important to note that many hypervisor functions that are considered to
be high availability features are in fact technology designed to be used for
disaster recovery (DR). It's true that these features can contribute to the delivery of a highly available service, but this is not the intent. DR features like Hyper-V Replica are intended to keep servers online following hardware outages rather
than allowing them to function at the application level when transactional
context and preservation are required to deliver robust, highly available services.

Whether deployed on physical or virtual servers, Exchange depends on its own high availability features to ensure that service is maintained to end users. This approach allows Exchange to function in the same predictable manner on both physical and virtual servers. DR features enabled by a hypervisor, such as Hyper-V Replica, are not supported with Exchange. These features are best used by applications that, unlike Exchange, do not have high availability designed into the core of the product. When you virtualize Exchange, you should continue to use its native high availability features and deploy Exchange multi-role servers configured in a DAG to achieve the desired goal.
Memory
When running Exchange 2013 on virtual machines, you must configure static
memory for those machines. Exchange does not support dynamic memory,
ballooning, memory overcommit, or memory reclamation for any production
server. Any third-party products that manipulate memory on a virtual platform
should be avoided when Exchange is deployed. The reason is simple: Exchange
servers cache a lot of data in-memory to improve performance. If memory is
arbitrarily removed from a virtual Exchange server, it can have unpredictable
consequences for different Exchange components up to and including severe
performance degradation where clients will experience obvious multi-second
responses to common server requests such as opening a new message. In most
cases, performance will slow by forcing Exchange to use expensive I/O to fetch
information from storage rather than being able to manipulate pages held in its
cache.
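A minimal sketch of fixing memory for an Exchange guest on Hyper-V (the VM name and allocation are illustrative; vSphere offers an equivalent reservation setting):

    # Disable dynamic memory and give the Exchange VM a fixed allocation:
    Set-VMMemory -VMName EX1 -DynamicMemoryEnabled $false -StartupBytes 96GB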
Another factor to take into consideration is that Exchange 2013 includes a
workload management system that is designed to ensure that no Exchange
component is allowed to stress a server by taking too many resources and
that important work is processed before less important activities. Workload
management does this by monitoring the activities running on an Exchange
server to ensure that available resources are consumed efficiently. It is
obvious that workload management can only succeed if the server is stable
and resources do not appear and disappear without warning. If resources are
suddenly removed, the decisions made by workload management are unlikely
to be as good as they should be and overall system performance will suffer
because important protocol components (such as those that handle interactive
client requests) will not receive the resources they need.
Dynamically resizing memory can offer some advantages on lab systems if you
are attempting to determine an optimum configuration for Exchange and in
these circumstances, when real user data is not at risk, it is acceptable to use
dynamic memory.


Snapshots
Hypervisor snapshots are not supported as a way to restore an Exchange server back to a point in time. This is a perfectly acceptable technique to use with test Exchange servers but should never be done in production, for much the same reason that virtual Exchange servers must be kept online during migrations.
Consider the situation that would occur if you took a hypervisor snapshot of an
Exchange 2013 multi-role server that is part of a DAG and then attempted to
restore that snapshot at a later date. At the point when the snapshot is taken,
Exchange is probably manipulating a great deal of data in memory, including
some that has not yet been committed to databases and some that is being
replicated to other DAG members through block-mode transfer. It might also
be using classic file copy to transfer transaction logs to other DAG members.
All of this happens under the control of the Microsoft Exchange Replication
service, which understands what data needs to be replicated to different target
servers and what is the current state of replication.
When you restore the snapshot, you bring an Exchange server back into the DAG as it was at the time when the snapshot was taken. This could be hours or even days in the past. The hypervisor has no information
about what processing Exchange was doing across the DAG when it took
the snapshot. Likewise, the hypervisor has no knowledge of what steps are
necessary to bring the server back into the DAG in a predictable and known
manner. And when the Exchange server wakes up, the Exchange Replication
service has no idea of what has happened between the time when the snapshot
was taken and the current state of replication across the other members of the
DAG. No code is included in Exchange to fix a DAG member by advancing it to
the current time in a controlled and consistent manner.
The result is that the restored snapshot will probably leave replication in an inconsistent state for one or more databases in the DAG. It is even possible that some database corruption will occur, especially if some single page
patching had been performed to fix corrupt database pages when the server
was offline. Equally, the server might work smoothly after the snapshot is restored, but it is impossible to guarantee that normal service will resume after a restore, and Microsoft will not support you if problems occur.
Snapshots do have value in lab environments because they allow you to
return a server to a well-known situation, for instance before you applied a
new cumulative update to the server. Given the workloads that lab systems
are generally exposed to, you run less risk of encountering the problematic
conditions referred to above than you do with production servers in full flow.
Even so, if you elect to use snapshots in lab environments, you might have to take snapshots of all members of a DAG so that you can roll back to a consistent point if necessary. Even introducing an aged snapshot to a test DAG is unlikely to go well because Exchange 2013 is not built to handle time travel.

An exception often proves the rule. In this instance, taking a snapshot or a Hyper-V shadow copy that is subsequently used for backup purposes is the exception where it is acceptable to use these techniques against a production Exchange 2013 server. The snapshot or copy is used as an intermediate step in the process of creating a valid Exchange backup set that can be used to recover data in the future. Instructing the hypervisor to take the snapshot of the running server is only part of the overall process that is necessary to preserve the integrity of Exchange. It must be done using the Windows Volume Shadow Copy Service (VSS) APIs provided by Microsoft to first quiesce the server, inform Exchange that a backup is being taken, and then ensure that the backup is complete so that Exchange can truncate its transaction log stream.
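Before relying on this kind of host-based backup, it is worth confirming that the Exchange VSS writer inside the guest is present and stable; one quick check using standard Windows tooling (shown purely as an illustration):

    # List VSS writers and show the state of the Exchange writer:
    vssadmin list writers | Select-String -Context 0,4 "Microsoft Exchange Writer"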
Processor oversubscription
It is not recommended that you oversubscribe processors when allocating
them to virtual Exchange servers. Exchange is an application that will use
whatever resources are made available to it and this is especially true of
Exchange 2013. If you assign processors to a virtual machine (vCPU), Exchange
will expect to be able to use those processors, and if the host cannot make
the processors available because the physical CPU capacity does not exist or
the CPUs are in use by other applications, the performance of Exchange will
suffer. The effects of oversubscription (CPU starvation) will be seen in areas
such as growth of message queues, a slowing in content indexing (leading
to inaccurate searches) and growth in RPC latency that results in lack of
responsiveness to client requests.
Best practice is to only assign processors that can be absolutely guaranteed
to virtual Exchange servers (a one-to-one ratio between vCPU and physical
processors). In other words, do not oversubscribe.
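On Hyper-V, one way to express that guarantee is a full processor reservation (a sketch; the VM name and vCPU count are illustrative, and vSphere offers an equivalent CPU reservation):

    # Assign 8 vCPUs and reserve 100% of their capacity so the host
    # scheduler cannot starve the Exchange VM:
    Set-VMProcessor -VMName EX1 -Count 8 -Reserve 100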
Hyperthreading
Microsoft does not recommend enabling hyperthreading on physical servers that run Exchange 2013. This is because they do not want the logical processors exposed by hyperthreading to be visible to the server, as this changes the way that the .NET Framework calculates its memory allocation. Exchange makes heavy use of the .NET Framework and it's important that its memory calculations are accurate.
The guidance is slightly different for virtual servers. In this case it is acceptable
to enable hyperthreading as long as servers are sized based on the available physical processor cores rather than logical CPUs. For example, a server equipped with four physical CPUs, each of which has four cores, has 16 physical cores available to it. Allocate these physical cores to virtual servers instead of the logical processors exposed by hyperthreading. In other words, if you have four virtual Exchange servers, allocate each server four processors.
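A quick way to see the physical-core versus logical-processor split on a host (standard WMI, shown as an illustration):

    # Compare physical cores with the logical processors exposed by hyperthreading:
    Get-WmiObject Win32_Processor |
        Select-Object Name, NumberOfCores, NumberOfLogicalProcessors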

In general, it is best to size the hardware requirements for Exchange as if the application were to be deployed on physical computers and then make whatever adjustments are required to accommodate the virtual platform, including reserving the necessary resources so that they are dedicated to Exchange and cannot be seized by other applications.
See http://technet.microsoft.com/en-us/library/jj619301(v=exchg.150).aspx for
more information about hardware requirements for virtualized Exchange.

NUMA
NUMA is non-uniform memory access, a design used in multiprocessing
computer systems to increase processor speed without imposing a penalty on
the processor bus. The non-uniform part of the name arises from the fact
that some memory is closer to a processor than is other memory, so a uniform
access time is not experienced. Hypervisors use NUMA to make more efficient
use of memory across guest machines. For example, Hyper-V provides a feature
called virtual NUMA that groups virtual processors and the memory assigned
to guest machines into virtual NUMA nodes, and guest machines see a
topology that is based on the physical topology of the host system. When
a virtual machine is created, Hyper-V creates a configuration based on the
physical topology, taking into account the number of logical processors and
the amount of memory per NUMA node.
Unlike SQL Server, Exchange knows nothing about NUMA and will not experience any scalability benefits from running on servers that have NUMA enabled. However, Exchange will take advantage of the operating system's optimizations for NUMA.
This is consistent with the recommendation to assign static CPU and memory
resources to virtual machines that run Exchange 2013 so that Exchange can
then manage the allocated resources across its different components.
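For reference, the virtual NUMA topology that Hyper-V presents to a guest can be inspected as follows (a sketch; the VM name is illustrative and the parameters are those of the Hyper-V module on Windows Server 2012 R2):

    # View the virtual NUMA limits Hyper-V has derived for the guest:
    Get-VMProcessor -VMName EX1 |
        Format-List MaximumCountPerNumaNode, MaximumNumaNodesPerSocket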

Operational considerations
Once deployed, a virtualized Exchange environment must be maintained. Here
are some recommendations to help keep your virtual Exchange 2013 servers in
good health.
Deployment of virtual Exchange servers across hosts
In general it is better to scale out Exchange by creating many virtual servers than it is to scale up and concentrate everything in a small number of very large servers. This approach allows you to take maximum advantage of Exchange high availability by distributing mailboxes across databases managed in a DAG where each database has at least three copies. Along the same lines,
you should distribute the virtual Exchange servers across multiple physical
hosts so that a failure of a single host will impact as few Exchange servers as
possible. Even if the hardware can accommodate the load, it does not make
sense to run 10 or 12 virtual Exchange servers on one large host as a failure can
result in total loss of service to all clients.
Ideally, it should be possible for the Exchange servers that remain online
following the failure of a host to restore full service through the activation of
database copies on the remaining servers. Achieving this goal calls for careful
planning of mailbox and database placement and attention to detail for other
dependencies, such as witness servers, Active Directory domain controllers,
load balancers, and so on. It also requires ongoing monitoring to ensure that
databases are not being activated in an unplanned manner due to automated
failovers invoked by Managed Availability, server maintenance or other
operational reasons.
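As an illustration of that monitoring, the following commands, which ship with Exchange 2013, check replication health and rebalance active copies (server and DAG names are hypothetical):

    # Check replication and underlying cluster health for a DAG member:
    Test-ReplicationHealth -Identity EX1

    # Rebalance active database copies across the DAG by activation preference
    # (run from the Exchange scripts directory):
    .\RedistributeActiveDatabases.ps1 -DagName DAG1 -BalanceDbsByActivationPreference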
Although Exchange is designed to be a highly available application and incorporates a large number of features to realize this goal, its successful operation requires all layers of the solution to function properly from the base hardware upward. Hypervisors are not magic and do not automatically confer high availability on virtualized applications. The same is true of the high availability features built into Exchange 2013. The two sets of features can complement each other, but neither will be effective unless high availability is designed into a deployment. This means that:
Exchange 2013 high availability features are used as the prime protection for
Exchange data
Hypervisor high availability features such as migration are used to fill in gaps
that might exist
Attention is given to other elements of the solution such as Active Directory,
load balancers and witness servers
The resulting configuration is tested to ensure that it provides sufficient
resilience against common failure conditions such as the loss of a host
machine, storage controller, network links and so on.
It is also true that those running the environment need to be able to cope with
failure because it is certain that some failures will occur over time. It makes
sense to prepare for failure by implementing adequate redundancy in both
hardware and software and multiple paths to essential infrastructure.


Upgrades
Microsoft issues a new cumulative update for Exchange 2013 quarterly, and
the latest available update must be installed before Microsoft will support a
system. Therefore, you must be prepared to deploy cumulative updates on a
regular basis. At the same time, you must be prepared to apply regular updates
to Windows, the hypervisor, hardware drivers, and other applications and third-party products that are used alongside Exchange 2013.
Updating an Exchange 2013 server is not difficult and the application of the
latest cumulative update brings the server completely up to date with the latest
features and bug fixes. However, the history of Exchange 2013 cumulative
updates is dotted with small but irritating errors that have caused grief
to customers. For this reason it is essential that a new cumulative update is
carefully tested in an environment that accurately mimics the production
environment before it is introduced into production.
Backups
It is possible to take a host-based backup that backs up a complete server,
including Exchange. In order to produce a backup that can be successfully
used to restore a viable Exchange server, it is necessary that the backup
utility is aware of Exchange and interacts with the product to capture the
server configuration together with databases at the time the backup is taken.
The most important thing is that the backup utility uses the Windows Volume Shadow Copy Service (VSS) to interact properly with the Information Store and that log truncation occurs correctly. Note that the log truncation process differs
for standalone Exchange 2013 mailbox servers and those that are members
of a DAG. It is therefore critical that the backup software uses the proper VSS
API calls to prepare Exchange for backup and then informs Exchange after a
successful backup is achieved. It is also important that backed up databases
are in a clean state so that they can be used immediately if they need to be
restored.
The only real way to know whether backups are successful is whether they can actually be used as the recovery source for data. For this reason it is important to verify that backups taken from virtual Exchange 2013 servers can be quickly and
reliably restored. Make sure that this test is done regularly so that operations
staff is familiar with the steps necessary to restore a server from backup and
to validate that the restored databases contain what is expected and do not
need to be put into a clean state before they are used. Particular care should be
taken when recovering databases that belong to a DAG.
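One simple operational check is to confirm that backups are registering with Exchange and truncating logs by watching the backup timestamps that Exchange records (a sketch using standard Exchange 2013 cmdlets):

    # Show when each database last completed a full or incremental backup:
    Get-MailboxDatabase -Status |
        Format-Table Name, LastFullBackup, LastIncrementalBackup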


Summary
The decision to virtualize Exchange 2013 should never be made in a vacuum. It
is a complex decision that requires many factors to be weighed and assessed
in the context of the IT environment into which Exchange is to be deployed,
the business need to be satisfied and the long-term technical strategy of
the company. Microsoft has made Exchange 2013 an excellent candidate for virtualization, provided that the caveats explained in this paper are respected.
The question, therefore, is whether physical or virtual Exchange 2013 systems
best serve the needs of your company. And only you can answer that question.


About the author

Tony Redmond is the owner of Tony Redmond & Associates, an Irish consulting company focused on Microsoft technologies. With experience at Vice President level at HP and Compaq plus recognition as a Microsoft MVP, Tony is considered by many around the world an expert in Microsoft collaboration technology. Tony has authored 13 books and filed a patent, among other accomplishments. He is a senior contributing editor at WindowsITPro.com, where he writes the Exchange Unwashed blog.

About Veeam Software

Veeam enables the Always-On Business by providing solutions that deliver
Availability for the Modern Data Center, which provides recovery time and
point objectives (RTPO) of less than 15 minutes for all applications and data.
Veeam recognizes the challenges in keeping a business up and running at all
times and addresses them with solutions that provide high-speed recovery,
data loss avoidance, verified protection, leveraged data and complete visibility.
Veeam Backup & Replication leverages technologies that enable the modern
data center, including VMware vSphere, Microsoft Hyper-V, NetApp storage,
and HP 3PAR StoreServ and StoreVirtual Storage, to help organizations meet
RTPO, save time, mitigate risks, and dramatically reduce capital and operational
costs. Veeam Availability Suite provides all of the benefits and features of
Veeam Backup & Replication along with advanced monitoring, reporting and
capacity planning for the backup infrastructure. Veeam Management Pack for
System Center is the most comprehensive, intuitive and intelligent extension
for app-to-metal management of Hyper-V and vSphere infrastructures, and
includes monitoring and reporting for Veeam Backup & Replication. The Veeam
Cloud Provider (VCP) program offers flexible monthly and perpetual licensing
to meet the needs of hosting, managed service and cloud service providers.
The VCP program currently includes more than 5,000 service provider partners
worldwide.
Founded in 2006, Veeam currently has 25,000 ProPartners and more than 111,500 customers worldwide. Veeam's global headquarters are located in Baar, Switzerland, and the company has offices throughout the world.
To learn more, visit http://www.veeam.com


To learn more, visit http://www.veeam.com/backup

