
NICE Perform®
Releases 3.1, 3.2, & 3.5

Virtualization Configuration Guide

November 2010
385A0809-01 Rev. A3

Insight from Interactions™
Information in this document is subject to change without notice and does not represent a
commitment on the part of NICE Systems Ltd. The systems described in this document are
furnished under a license agreement or nondisclosure agreement.
All information included in this document, such as text, graphics, photos, logos and images, is the
exclusive property of NICE Systems Ltd. and protected by United States and international
copyright laws.
Permission is granted to view and photocopy (or print) materials from this document for personal,
non-commercial use only. Any other copying, distribution, retransmission or modification of the
information in this document, whether in electronic or hard copy form, without the express prior
written permission of NICE Systems Ltd., is strictly prohibited. In the event of any permitted
copying, redistribution or publication of copyrighted material, no changes in, or deletion of, author
attribution, trademark legend or copyright notice shall be made.

All contents of this document are: Copyright © 2010 NICE Systems Ltd. All rights reserved.
This product is covered by one or more of the following US patents:
5,185,780 5,216,744 5,274,738 5,289,368 5,325,292 5,339,203 5,396,371
5,446,603 5,457,782 5,911,134 5,937,029 6,044,355 6,115,746 6,122,665
6,192,346 6,246,752 6,249,570 6,252,946 6,252,947 6,311,194 6,330,025
6,542,602 6,615,193 6,694,374 6,728,345 6,775,372 6,785,369 6,785,370
6,856,343 6,865,604 6,871,229 6,880,004 6,937,706 6,959,079 6,965,886
6,970,829 7,010,106 7,010,109 7,058,589 7,085,728 7,152,018 7,203,655
7,240,328 7,305,082 7,333,445 7,346,186 7,383,199 7,386,105 7,392,160
7,436,887 7,474,633 7,532,744 7,545,803 7,546,173 7,573,421 7,577,246
7,581,001 7,587,454 7,599,475 7,631,046 7,660,297 7,664,794 7,665,114
7,683,929 7,705,880 7,714,878 7,716,048 7,720,706 7,725,318 7,728,870
7,738,459 7,751,590 7,761,544 7,770,221 7,788,095 7,801,288 RE41,292

360o View, ACTIMIZE, Actimize logo, Alpha, Customer Feedback, Dispatcher Assessment, Encorder, eNiceLink,
Executive Connect, Executive Insight, FAST, FAST alpha Blue, FAST alpha Silver, FAST Video Security, Freedom,
Freedom Connect, IEX, Interaction Capture Unit, Insight from Interactions, Investigator, Last Message Replay,
Mirra, My Universe, NICE, NICE logo, NICE Analyzer, NiceCall, NiceCall Focus, NiceCLS, NICE Inform, NICE
Learning, NiceLog, NICE Perform, NiceScreen, NICE SmartCenter, NICE Storage Center, NiceTrack,
NiceUniverse, NiceUniverse Compact, NiceVision, NiceVision Alto, NiceVision Analytics, NiceVision ControlCenter,
NiceVision Digital, NiceVision Harmony, NiceVision Mobile, NiceVision Net, NiceVision NVSAT, NiceVision Pro,
Performix, Playback Organizer, Renaissance, Scenario Replay, ScreenSense, Tienna, TotalNet, TotalView, Universe,
Wordnet are trademarks and registered trademarks of NICE Systems Ltd. All other registered and unregistered
trademarks are the property of their respective owners.
Applications to register certain of these marks have been filed in certain countries, including Australia, Brazil, the
European Union, Israel, Japan, Mexico, Argentina and the United States. Some of these applications have matured
into registrations.
For assistance contact your local supplier or nearest NICE Systems Customer Service Center:
EMEA Region: (Europe, Middle East, Africa)
Tel: +972-9-775-3800
Fax: +972-9-775-3000
email: support@nice.com

APAC Region: (Asia/Pacific)


Tel: +852-8338-9818
Fax: +852-2802-1800
email: support.apac@nice.com

The Americas Region: (North, Central, South America)


Tel: 1-800-NICE-611
Fax: +720-264-4012
email: support.americas@nice.com

Israel:
Tel: 09-775-3333
Fax: 09-775-3000
email: support@nice.com

NICE invites you to join the NICE User Group (NUG).


Visit the NUG Website at www.niceusergroup.org, and follow the online instructions.

International Headquarters - Israel
Tel: +972-9-775-3100
Fax: +972-9-775-3070
email: info@nice.com

North America
Tel: 1-800-663-5601
Fax: +201-356-2197
email: na_sales@nice.com

United Kingdom
Tel: +44-8707-22-4000
Fax: +44-8707-22-4500

Germany
Tel: +49-(0)-69-97177-0
Fax: +49-(0)-69-97177-200

France
Tel: +33-(0)1-41-38-5000
Fax: +33-(0)1-41-38-5001

Hong-Kong
Tel: +852-2598-3838
Fax: +852-2802-1800

All queries, comments, and suggestions are welcome! Please email: nicebooks@nice.com
For more information about NICE, visit www.nice.com
Revision History
Virtualization Configuration Guide

Revision  Modification Date  Description

A1  September 2009
• Guide name changed from VMware Configuration Guide to Virtualization Configuration Guide.
• Added procedure for configuring memory reservation for the 32-bit operating system for the virtual machine running SQL Server.

A2  August 2010
• Terms and Abbreviations.
• Snapshot management section for VMware configuration.
• Network adapter configuration for VMware configuration.
• Microsoft Hyper-V platform.
• Procedure on how to convert a physical machine to a virtual machine.
• Guidelines to virtual machine time synchronization.
• Enabling NTP on ESX Servers.

A3  November 2010
• New section: VMware High Availability.
• New chapter: Virtual Desktop Infrastructure (VDI) for NICE Client-Side Applications.
Contents
1  Virtualization in the NICE Perform Environment 11
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
Terms and Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13

2  VMware Configuration 17
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Logging in to the VMware vCenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
VMware vCenter Server Summary View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
Allocating Resources for the Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . .25
Virtual Machine Time Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
Enabling NTP on ESX Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Virtual Machine Snapshot Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
Virtual Machine Network Interface Card (NIC) Configuration . . . . . . . . . . . . . . .35
Changing the NIC Type to Enhanced VMXNET / VMXNET2 (Enhanced) . . . . .37
Changing the VoIP Logger’s Capture NIC Type to E1000 . . . . . . . . . . . . . . . . .40
Virtual Machine with the SQL Database Installation . . . . . . . . . . . . . . . . . . . . . .46
Configuring Memory Reservation for the 64-bit Operating System . . . . . . . . . .46
Configuring Memory Reservation for the 32-bit Operating System . . . . . . . . . .48
Configuring the Disk used for the Database Server as Mapped Raw LUN . . . . .51
Virtual Machine with the VoIP Logger Installation . . . . . . . . . . . . . . . . . . . . . . .52
Configuring Memory Reservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
Disabling the Balloon Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
Setting the Priority for the VoIP Logger’s Capture NIC . . . . . . . . . . . . . . . . . . .57
Configuring the VoIP Logger’s Sniffing NIC for Promiscuous Mode . . . . . . . . . .59
Increasing the Rx Buffer for the VoIP Logger’s Capture NIC . . . . . . . . . . . . . . .62
Increasing the Rx Buffer for the VMXNET2 (Enhanced) Card for ESX V4.0 .62

NICE Perform ® Release 3.1, 3.2, & 3.5: Virtualization Configuration Guide (Rev. A3) 7

Increasing the Rx Buffer for the E1000 Card for ESX V3.5 . . . . . . . . . . . . .66
VMware High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69
Implications of High Availability in the Nice Perform Environment . . . . . . . . . .69
Configuration Guidelines to VMware High Availability . . . . . . . . . . . . . . . . . . . .70
Sample Cluster with High Availability Settings . . . . . . . . . . . . . . . . . . . . . . .71
Configuring High Availability Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72

3  Microsoft Hyper-V Configuration 75
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Configuring Hyper-V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Network Adapter Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Virtual Network settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .77
Hyper-V Host Server Disk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78
Virtual Machine File Location for a New Virtual Machine . . . . . . . . . . . . . . . . . .79
Virtual Machine Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81
Examples of Virtual Machine Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82
Virtual Machine Time Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87

4  Virtual Desktop Infrastructure (VDI) for NICE Client-Side Applications 89
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Supported Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
VMware View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91
Overview of View Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91
PCoIP Display Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Configuring the PCoIP Display Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Guidelines for NICE Client-Side Applications Running on a VMware Virtual Desktop . . . . .93
Client Login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Citrix XenDesktop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Overview of XenDesktop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96


Desktop Delivery Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98


Guidelines for NICE Client-Side Applications Running on a Citrix XenDesktop .98
Client Login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99
Logging in via the Online Plug-In . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99
Logging in via Web . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Playback Issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101

A  Converting a Physical Machine into a VMware Virtual Machine 103
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
Installing VMware Converter 4.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105
Converting the Physical Machine into a Virtual Machine . . . . . . . . . . . . . . . . .109
Defining the New Virtual Machine’s IP Address . . . . . . . . . . . . . . . . . . . . . . . .119

1  Virtualization in the NICE Perform Environment

IN THIS CHAPTER
This chapter provides a basic overview of NICE Perform in the virtualization environment.
Overview ................................................................... 12
Terms and Abbreviations........................................... 13

Overview
Virtualization enables multiple operating systems to run simultaneously on one machine in a safe
and efficient manner. Its benefits include lower hardware expenditures, a reduction in Total Cost
of Ownership of up to 75%, increased agility, and further automation of IT operations. This
document provides general guidelines on how to configure NICE Perform servers in a virtual
infrastructure.
The following topics are described:
• VMware Configuration on page 17
  • Logging in to the VMware vCenter on page 18
  • VMware vCenter Server Summary View on page 19
  • Allocating Resources for the Virtual Machines on page 25
  • Virtual Machine Time Synchronization on page 29
  • Virtual Machine Snapshot Management on page 33
  • Virtual Machine Network Interface Card (NIC) Configuration on page 35
  • Virtual Machine with the SQL Database Installation on page 46
  • Virtual Machine with the VoIP Logger Installation on page 52
• Microsoft Hyper-V Configuration on page 75
  • Network Adapter Configuration on page 76
  • Hyper-V Host Server Disk Management on page 78
  • Virtual Machine File Location for a New Virtual Machine on page 79
  • Virtual Machine Settings on page 81
  • Virtual Machine Time Synchronization on page 87
• Converting a Physical Machine into a VMware Virtual Machine on page 103
  • Installing VMware Converter 4.0.1 on page 105
  • Converting the Physical Machine into a Virtual Machine on page 109


Terms and Abbreviations

Term Description
Balloon Driver The mechanism (the vmmemctl driver, installed with VMware Tools) through
which memory is reclaimed from virtual machines in cooperation with the
guest operating system.
This is the preferred method for reclaiming memory from virtual
machines, since it reclaims the memory that is considered least
valuable by the guest operating system. The system "inflates" the
balloon driver to increase memory pressure within the virtual machine,
causing the guest operating system to invoke its own native memory
management algorithms. When memory is tight, the guest operating
system decides which particular pages of memory to reclaim and, if
necessary, swaps them to its own virtual disk. This proprietary
technique provides predictable performance that closely matches the
behavior of a native system under similar memory constraints.
ESX Server VMware's enterprise server virtualization platform. Two versions exist
- ESX Server and ESXi Server. The ESXi Server has no service
console and is the thinnest version available. ESX Server has many
optional features like VMotion and VMHA and some built-in features
like the VMFS file system.
Hypervisor Also called virtual machine monitor (VMM), allows multiple
operating systems to run concurrently on a host computer— a feature
called hardware virtualization. It is so named because it is
conceptually one level higher than a supervisor. The hypervisor
presents the guest operating systems with a virtual platform and
monitors the execution of the guest operating systems. In that way,
multiple operating systems, including multiple instances of the same
operating system, can share hardware resources.
LUN Logical Unit Number. Storage LUNs are commonly shared between
virtual machines; each virtual machine uses a configured section of
a LUN.
Memory Reservation Reserving memory for specific purposes. Operating systems and
applications generally reserve fixed amounts of memory at startup and
allocate more when the processing requires it. If there is not enough
free memory to load the core kernel of an application, it cannot be
launched. Although a virtual memory function will simulate an almost
unlimited amount of memory, there is always a certain amount of
"real" memory that is needed.

NIC Teaming NIC teaming is the process of grouping several physical NICs into a
single logical NIC, which can be used for network fault tolerance
and transmit load balancing. The process of grouping NICs is
called teaming. Teaming has two purposes:
• Fault Tolerance: Teaming more than one physical NIC into a
logical NIC maximizes high availability. Even if one NIC fails,
the network connection does not cease and continues to operate on
the other NICs.
• Load Balancing: Balancing the network traffic load on a server
can enhance the functionality of the server and the network. Load
balancing within NIC teams distributes traffic among the members
of a team so that traffic is routed across all available paths.
Promiscuous Mode In a network, promiscuous mode allows a network device to intercept
and read each network packet that arrives in its entirety. This mode of
operation is sometimes given to a network snoop server that captures
and saves all packets for analysis (for example, for monitoring
network usage).
In an Ethernet local area network (LAN), promiscuous mode is a mode
of operation in which every data packet transmitted can be received
and read by a network adapter. Promiscuous mode must be supported
by each network adapter as well as by the input/output driver in the
host operating system. Promiscuous mode is often used to monitor
network activity.
Resource Pools Use resource pools to hierarchically partition available CPU and
memory resources. Each standalone host and each DRS cluster has an
(invisible) root resource pool that groups the resources of that host or
cluster. The root resource pool is not displayed because the resources
of the host (or cluster) and the root resource pool are always the same.
Rx Buffer Used to reduce Logger packet loss and improve performance.
If the virtual machine's network driver runs out of receive (Rx)
buffers, incoming packets can be dropped at the virtual switch. The
number of dropped packets can be reduced by increasing the virtual
network driver's Rx buffers.

VMware vCenter/Virtual Center The ESX Server is managed by the VMware Infrastructure Client. Its
centralized management platform is called Virtual Center or VMware
vCenter.
VMware DRS VMware DRS (Distributed Resource Scheduler) is a utility that
balances computing workloads with available resources in a
virtualized environment. The utility is part of a virtualization suite
called VMware Infrastructure 3.
With VMware DRS, users define the rules for allocation of physical
resources among virtual machines. The utility can be configured for
manual or automatic control. Resource pools can be easily added,
removed or reorganized. If desired, resource pools can be isolated
between different business units. If the workload on one or more
virtual machines drastically changes, VMware DRS redistributes the
virtual machines among the physical servers. If the overall workload
decreases, some of the physical servers can be temporarily
powered-down and the workload consolidated.

2  VMware Configuration

IN THIS CHAPTER
This chapter describes the recommended settings for NICE Perform installations on the VMware platform. For NICE Perform configuration requirements and specifications, see the Design Considerations and Certified Servers guides.
Overview ................................................................... 18
Logging in to the VMware vCenter ............................ 18
VMware vCenter Server Summary View................... 19
Allocating Resources for the Virtual Machines.......... 25
Virtual Machine Time Synchronization ...................... 29
Virtual Machine Snapshot Management ................... 33
Virtual Machine Network Interface Card (NIC) Configuration ......................... 35
Virtual Machine with the SQL Database Installation . 46
Virtual Machine with the VoIP Logger Installation ..... 52
VMware High Availability........................................... 69

NOTE
The examples shown in this chapter only represent a typical VMware solution and may not be seen at your site.
The guidelines described here refer to both VMware Infrastructure 3.0 (ESX 3.5) and VMware vSphere (ESX 4.0, 4.1) deployments.


Overview
The VMware platform, based on the ESX server, is designed to run multiple virtual machines that
facilitate hosting multiple operating systems running NICE Perform components. Infrastructure
specifications, the virtual machines’ resources (disk, CPU, memory, network) can be allocated and
configured either directly within the ESX server or via the VMware vCenter.

Logging in to the VMware vCenter

To log in to the VMware vCenter:

1. Double-click the VMware vCenter icon.


The VMware vSphere Client Login window appears.
Figure 2-1 VMware vSphere Client Login Window

2. Enter the IP address or hostname of your VMware vCenter Server, and the VMware vCenter
Server username and password with administrative rights.
If you do not have administrative rights, you can still log in to the system with read-only rights
and review configuration settings. If a setting needs to be changed, you can alert an
administrator, who will then make the change.
3. Click Login.
The VMware vCenter Server appears.


VMware vCenter Server Summary View


This section describes how to display a summary of your VMware vCenter Server configuration.

To display the VMware vCenter Server summary:


1. Log in to the VMware vCenter Server as described in Logging in to the VMware vCenter
on page 18.
2. In the VMware vCenter Server, from the Inventory drop-down list, select Hosts and
Clusters. Then click the Summary tab.
The Data Centers and ESX Servers related to each Data Center are displayed. Each ESX
server contains Virtual Machines that can be grouped under Resource Pools.

TIP

Resource pools are used to hierarchically partition available CPU and memory resources.
Each standalone host and DRS cluster has an invisible root resource pool that groups the
resources of that host or cluster. The root resource pool is not displayed because the
resources of the host (or cluster) and the root resource pool are always the same.
A resource pool can contain child resource pools, virtual machines, or both. You can
therefore create a hierarchy of shared resources. The resource pools at a higher level are
called parent resource pools. Resource pools and virtual machines that are at the same
level are called siblings. The cluster itself represents the root resource pool.
Note: VMware DRS facilitates balancing resources across virtual machines.

The NICE Perform integration with VMware does not require special customization to the
Resource Pools.


Figure 2-2 VMware vCenter Server Summary View - Hosts and Clusters

3. Select the ESX server.


The ESX server resources are displayed.
Figure 2-3 VMware vCenter - ESX Server Summary


The Summary tab displays the following information:

• General area - information about the machine hosting the ESX server: its manufacturer,
model, processors, processor type, memory (RAM), the number of installed network
interface cards (NICs), and the number of virtual machines running on this ESX server.
• Datastore area - displays the defined storage locations.
4. Click the Resource Allocation tab.
5. From the View options, click CPU.
Figure 2-4 VMware vCenter - CPU Resource Allocation

The Reservation and Limit columns indicate the CPU allocation per virtual machine. The Shares
column indicates the CPU resource priority, which applies when virtual machines compete for
CPU resources.
6. From the View options, click Memory.


Figure 2-5 VMware vCenter - Memory Resource Allocation

The Reservation and Limit columns provide information about the memory allocated per
virtual machine. The Shares column indicates memory priority, which applies when virtual
machines compete for memory resources.
Configure resources as follows:
• Memory Reservation (MB): Define zero (0) for all virtual machines except the virtual
machines running the SQL Server and the NICE VoIP Logger. See:
  • Virtual Machine with the SQL Database Installation on page 46
  • Virtual Machine with the VoIP Logger Installation on page 52
• Memory Limit (MB): Unlimited
7. Keep all other defaults.
8. Click the Configuration tab.


Figure 2-6 VMware vCenter - Configuration Tab (one NIC connected to many virtual machines)

Additional configuration information about the ESX server is displayed. For example, you can
see which virtual machines are connected to the different NICs (Network Interface Cards). In
Figure 2-6, one NIC is connected to several virtual machines.
If several NICs are configured as teaming NICs, this is also displayed: the additional NICs
(vmnic0\1\2) appear under Physical Adapters. See Figure 2-7.


Figure 2-7 VMware vCenter - Configuration Tab - Teaming NICs (several virtual machines connected through teamed NICs)

TIP

NIC Teaming is a feature of VMware Infrastructure 3 and above, that enables you to
connect a single virtual switch to multiple physical Ethernet adapters. A team can share
traffic loads between physical and virtual networks and provide passive failover in case of
an outage. NIC teaming policies are set at the port group level.


Allocating Resources for the Virtual Machines

To allocate resources for the virtual machines:


1. Log in to the VMware vCenter as described in Logging in to the VMware vCenter
on page 18.
2. In the VMware vCenter, from the Inventory drop-down list, select Hosts and Clusters.
3. Click the Summary tab.
The Data Centers and ESX Servers related to each Data Center are displayed. Each ESX
server contains Virtual Machines that can be grouped under Resource Pools.
Figure 2-8 VMware vCenter Summary View - Hosts and Clusters

4. Select a virtual machine and click Edit Settings.


The Virtual Machine Properties window appears.


Figure 2-9 Virtual Machine Properties

The CPU, Memory, Disk, and Network Adapter resource types for the selected virtual machine
are displayed. The following resource types are configurable:
• Memory (MB): The amount of allocated memory.
• CPU: The number of CPUs (vCPUs) allocated, as specified in the Certified Servers Guide.
• Disk: The disk used for the virtual machine is usually the disk used for the shared storage
to which the ESX server is connected. Several disks can be assigned to one virtual machine.
The size of each disk is determined when it is assigned to the virtual machine; partitioning
can be done later from the operating system using the Disk Management tool.
5. Click the Resources tab.
The CPU, Memory, Disk, and Advanced CPU resource types are listed.


Figure 2-10 Virtual Machine Properties - Resources

6. Select Memory.
The Resource Allocation area displays the configurable parameters for the Memory resource.
Figure 2-11 Virtual Machine Properties - Memory Resources


In the Resource Allocation area, you can configure the following memory parameters:
• Shares: Default = Normal. Memory Shares determine memory priority: when virtual
machines compete for memory, the virtual machine with the most shares receives the
highest memory priority.
• Reservation (MB): Default = 0. The amount of memory that is always available to the
virtual machine.
• Limit (MB): Default = 0. Limits the amount of memory the virtual machine can consume.
7. In the Resources tab, select Disk.
The Resource Allocation area displays the configurable parameters for the Disk resource.
Figure 2-12 Virtual Machine Properties - Disk Resources

In the Resource Allocation area, you can configure the Shares disk parameter:
• Shares: Default = Normal. Disk Shares determine disk priority: when virtual machines
compete for disk resources, the virtual machine with the most shares receives the highest
disk priority.


Virtual Machine Time Synchronization


Virtual machine time synchronization must be set as follows:
• All virtual machines must be time-synchronized to the host ESX server.
• All host ESX servers must be time-synchronized to your organization's NTP server. This
eliminates the need to configure NTP for each virtual machine.
• The Windows Time Service must be disabled.
For instructions on setting NTP on the ESX Server, see Enabling NTP on ESX Servers
on page 30.

To time-synchronize your virtual machines to the host ESX server:

• In each virtual machine, open the VMware Tools Properties window and select Time
synchronization between the virtual machine and the ESX Server.
Figure 2-13 VMware Tools Properties Window

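The checkbox above is persisted in the virtual machine's .vmx file as the tools.syncTime option, so the setting can also be audited or applied outside the GUI. The sketch below demonstrates the edit on a scratch copy of a .vmx file with a made-up VM name; on a real ESX host the file lives under /vmfs/volumes/, and the virtual machine should be powered off before the file is edited by hand:

```shell
# Sketch only: demonstrate flipping tools.syncTime on a scratch .vmx copy.
# The VM name and contents are illustrative, not from the guide.
cat > demo.vmx <<'EOF'
displayName = "MyNiceServer"
tools.syncTime = "FALSE"
EOF

# Set the flag to TRUE; append the option if it is missing entirely
sed -i 's/^tools.syncTime.*/tools.syncTime = "TRUE"/' demo.vmx
grep -q '^tools.syncTime = "TRUE"' demo.vmx || echo 'tools.syncTime = "TRUE"' >> demo.vmx

grep tools.syncTime demo.vmx   # -> tools.syncTime = "TRUE"
```

On a real host, point the commands at the VM's .vmx path and power-cycle or re-register the virtual machine so the change takes effect.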


Enabling NTP on ESX Servers


In addition to enabling NTP on the ESX Server, the NTP client on each ESX host must be set to a
physical time source. If none are available within the organization, round robin DNS records can
be configured to gather time from the Internet via UDP port 123.
The ESX built-in firewall is automatically modified to allow outgoing connections on port
123. Shortly afterwards, the server clock starts synchronizing with the NTP server you selected.
You can check the current time by clicking the Refresh link in the VI Client.
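As background, the NTP exchange on UDP port 123 can be sketched in a few lines of Python. The snippet below is an illustrative sketch, not part of the NICE or ESX configuration: it builds a minimal SNTP client request and decodes the 64-bit transmit timestamp of a response. A crafted response packet is used so the example stays offline.

```python
import struct

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def build_client_request():
    """Build a minimal 48-byte SNTP client packet (LI=0, version 3, mode 3)."""
    return b"\x1b" + 47 * b"\0"

def decode_transmit_time(packet):
    """Decode the server's 64-bit transmit timestamp (bytes 40-47) to Unix time."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_DELTA + fraction / 2**32

# Example with a crafted response packet (no network access needed):
fake = bytearray(48)
fake[40:48] = struct.pack("!II", NTP_DELTA + 1_000_000_000, 0)
print(decode_transmit_time(bytes(fake)))  # 1000000000.0
```

A real client would send the request to a time source with socket.sendto on UDP port 123 and decode the reply the same way.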

To enable NTP on ESX Servers:


1. Connect via VMware vSphere Client to ESX or vCenter Server.
2. Select the relevant ESX server and click the Configuration tab.
3. In the Software area, select Time Configuration.
Figure 2-14 VMware ESX - Configuration Tab > Time Configuration

4. Click Properties.


Figure 2-15 VMware ESX - Configuration Tab > Properties

The Time Configuration Properties window appears.


Figure 2-16 Time Configuration Properties Window

5. In the NTP Configuration area, select NTP Client Enabled.


6. Click Options.
The NTP Daemon Options window appears.


Figure 2-17 NTP Daemon Options ‐ General

7. Click General. In the Startup Policy area, select Start and Stop with Host.
8. Click NTP Settings.
Figure 2-18 NTP Daemon Options ‐ NTP Settings

9. To add a new NTP server (or multiple NTP servers), click Add. Then select Restart NTP
service to apply changes.
10. Click OK.
Changes are applied and NTP service is started.


Virtual Machine Snapshot Management


For performance reasons, virtual machine snapshots are prohibited.

To confirm that no snapshots have been saved on your virtual machine:


1. In the VMware vCenter, select your virtual machine.
Figure 2-19 VMware vCenter - Virtual Machine Selected

If no snapshots are defined, the Snapshot Manager button appears disabled.

2. Click Snapshot Manager.


The Snapshots window appears.


Figure 2-20 Snapshots Window - No Snapshots Defined


Virtual Machine Network Interface Card (NIC) Configuration


All virtual machines running NICE components must be configured with the recommended NICs.
If a virtual machine’s current NIC type is Flexible, you must remove this NIC and install one of
the recommended NIC types instead.
Recommended NIC types for virtual machines running NICE components:
• ESX Version 3.5:
• All NICE components except for the VoIP Logger's capture NIC: Enhanced VMXNET
or E1000
• NICE VoIP Logger's capture NIC: E1000
• ESX Version 4.0:
• VoIP Logger's capture NIC: VMXNET2 (Enhanced)
• All virtual machines running NICE components: VMXNET2 (Enhanced) (recommended),
E1000, or VMXNET 3

NOTE

Make sure to install VMware Tools on each virtual machine. VMware Tools enables the
enhanced network capabilities.

Changing the network interface card (NIC) type involves the following steps:
1. Changing the NIC type:
• Changing the NIC Type to Enhanced VMXNET / VMXNET2 (Enhanced)
on page 37.
-or-
• VoIP Logger’s capture NIC only: Changing the VoIP Logger’s Capture NIC Type to
E1000 on page 40.
2. Virtual machine running the VoIP Logger:
a. VoIP Logger’s capture NIC only: Setting the Priority for the VoIP Logger’s Capture
NIC on page 57


b. Passive VoIP Recording only: Configuring the VoIP Logger’s Sniffing NIC for
Promiscuous Mode on page 59.
c. Increasing the Rx Buffer for the VoIP Logger’s Capture NIC on page 62.


Changing the NIC Type to Enhanced VMXNET / VMXNET2 (Enhanced)


You configure the network adapter for each virtual machine via the Edit Settings option.

To change a virtual machine’s network adapter type:


1. Shut down the virtual machine.
2. Select the virtual machine and click Edit Settings.
The Virtual Machine Properties window appears.
Figure 2-21 Virtual Machine Properties

3. Select the network adapter that needs to be removed and click Remove.
4. Click Add.
The Select Device Type window appears.


Figure 2-22 Select Device Type

5. Select Ethernet Adapter and click Next.


The Network Type window appears.
Figure 2-23 Network Type Window

6. In the Network Type window, open the Type menu.


Figure 2-24 Network Type - Enhanced VMXNET 2

7. From the Type menu, select one of the following:


• For ESX3.5, select Enhanced vmxnet.
• For ESX4.0, select VMXNET2 (Enhanced).
8. Click Next.
The Ready to Complete window appears.
Figure 2-25 Ready to Complete Hardware Wizard


NOTE

If the IP address of the network card was already configured, you may need to reconfigure it
after the virtual machine is started.

9. Confirm settings and click Finish.


10. If you changed the NIC type for the VoIP Logger’s capture NIC, you may need to reset its
priority. See Setting the Priority for the VoIP Logger’s Capture NIC on page 57.

Changing the VoIP Logger’s Capture NIC Type to E1000


If ESX Version 4.0 is installed at your site, you must use VMXNET2 (Enhanced) for your VoIP
Logger’s capture NIC. If ESX Version 3.5 is installed at your site, you must use E1000 for your
VoIP Logger’s capture NIC.
This section describes how to change the NIC type to E1000. If you need to change the NIC type
to VMXNET2 (Enhanced), see Changing the NIC Type to Enhanced VMXNET / VMXNET2
(Enhanced) on page 37.

To change a virtual machine’s network adapter type to E1000:


1. Shut down the virtual machine running the VoIP Logger.
2. Select the VoIP Logger’s virtual machine and click Edit Settings.
3. Click the Options tab and change the Guest Operating System to Microsoft Windows
Server 2003, DataCenter Edition.


Figure 2-26 Virtual Machine Properties Window - Options Tab

4. Click OK.
5. Reopen the Logger’s Virtual Machine Properties window.
6. Click the Hardware tab.


Figure 2-27 Virtual Machine Properties Window - Hardware Tab

7. Click Add.
The Add Hardware Wizard starts.
Figure 2-28 Add Hardware Wizard - Select Device Type Window

8. Select Ethernet Adapter and click Next.


The Network Type window appears.


Figure 2-29 Add Hardware Wizard - Network Type Window

Example only: select the NIC that will be used for capturing audio.

Configure the Network Type as described below:


• Adapter Type Select E1000
• Network Connection From the Named network with specified label menu,
select the NIC that will be used for capturing audio (RTP).
9. Click Next.
The Ready to Complete window appears.


Figure 2-30 Add Hardware Wizard - Ready to Complete Window

10. Click Finish.


The network adapter appears in the Virtual Machine Properties window.
Figure 2-31 Virtual Machine Properties Window - Network Adapter Configuration

The network adapter designated for capturing audio (RTP) is now configured.

11. Click OK.


12. Reopen the Logger’s Virtual Machine Properties window.


13. Click the Options tab.
14. In the Guest Operating System area, change the Version to Microsoft Windows Server
2003, Standard Edition.
Figure 2-32 Virtual Machine Properties Window - Guest Operating System

15. Click OK.


16. Restart the virtual machine.
The operating system recognizes this network adapter as a newly connected device.
17. Reconfigure the NIC’s IP address.
18. Proceed to Setting the Priority for the VoIP Logger’s Capture NIC on page 57.


Virtual Machine with the SQL Database Installation


This section describes how to configure a virtual machine running the SQL database.

NOTE

These procedures are relevant to the virtual machine running the Data Mart.

Configuring a virtual machine running the SQL database involves:


• Configuring Memory Reservation for the 64-bit Operating System on page 46.
• Configuring Memory Reservation for the 32-bit Operating System on page 48.
• Configuring the Disk used for the Database Server as Mapped Raw LUN on page 51.

Configuring Memory Reservation for the 64-bit Operating System


When your SQL database is installed on a virtual machine running on a 64-bit operating system,
the Memory Reservation must equal the amount of memory allocated to the virtual machine. This
will ensure that this virtual machine will have sufficient memory for performing all operations at
all times. For NICE requirements and specifications, see the Certified Servers Guide.
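For reference, a reservation set through the UI is stored in the virtual machine's .vmx file. The fragment below is a hedged example: the key names memsize and sched.mem.min follow common .vmx conventions, and the 4096 value is illustrative only. Always set the actual value per the Certified Servers Guide.

```
memsize = "4096"
sched.mem.min = "4096"
```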

To configure Memory Reservation for the 64-bit operating system:


1. Log in to the VMware vCenter as described in Logging in to the VMware vCenter
on page 18.
2. In the VMware vCenter, from the Inventory drop-down list, select Hosts and Clusters.
3. Select the virtual machine running the SQL database and click Edit Settings. The Virtual
Machine Properties window appears.


Figure 2-33 Virtual Machine Properties

The CPU, Memory, Disk, and Network Adapter resource types for the selected virtual machine
are displayed.
4. Click the Resources tab. The CPU, Memory, Disk, and Advanced CPU resource types are
listed.
Figure 2-34 Virtual Machine Properties - Resources


5. Select Memory. The Resource Allocation area displays the configurable parameters for the
Memory resource.
Figure 2-35 Memory Reservation for the Virtual Machine Running SQL

Example only

In the Resource Allocation area, configure parameters as described below:


• Shares Normal
• Reservation (MB) Set according to specifications in the Certified Servers Guide.
6. Click OK.

Configuring Memory Reservation for the 32-bit Operating System


When your SQL database is installed on a virtual machine running on a 32-bit operating system,
you must configure the Memory Reservation to 4096 MB. This will ensure that this virtual
machine will have sufficient memory for performing all operations at all times.

To configure Memory Reservation for the 32-bit operating system:


1. Log in to the VMware vCenter as described in Logging in to the VMware vCenter
on page 18.
2. In the VMware vCenter, from the Inventory drop-down list, select Hosts and Clusters.
3. Select the virtual machine running the SQL database and click Edit Settings. The Virtual
Machine Properties window appears.


Figure 2-36 Virtual Machine Properties

The CPU, Memory, Disk, and Network Adapter resource types for the selected virtual machine
are displayed.
4. Click the Resources tab. The CPU, Memory, Disk, and Advanced CPU resource types are
listed.
Figure 2-37 Virtual Machine Properties - Resources


5. Select Memory. The Resource Allocation area displays the configurable parameters for the
Memory resource.
Figure 2-38 Memory Reservation for the Virtual Machine Running SQL

In the Resource Allocation area, configure parameters as described below:


• Shares Normal
• Reservation (MB) 4096 MB
6. Click OK.


Configuring the Disk used for the Database Server as Mapped Raw LUN
Storage LUNs (Logical Unit Numbers) are commonly shared between virtual machines, with each
virtual machine using a configured section of a LUN.
The best practice for virtual machines running the SQL server is to map disk drives used for SQL
data and logs to Raw LUNs. In this way, the LUN is dedicated to a specific virtual machine. You
can verify whether a virtual machine is configured as Mapped Raw LUN in the Properties window
of a selected virtual machine.
Figure 2-39 Virtual Machine Properties - Mapped Raw LUN


Virtual Machine with the VoIP Logger Installation


Configuring a virtual machine running the VoIP Logger installation involves:
• Configuring Memory Reservation (below).
• Disabling the Balloon Driver on page 54.
• Capture NIC only: Setting the Priority for the VoIP Logger’s Capture NIC on page 57.
• Passive VoIP Logger only: Configuring the VoIP Logger’s Sniffing NIC for Promiscuous
Mode on page 59.
• Increasing the Rx Buffer for the VoIP Logger’s Capture NIC on page 62.

Configuring Memory Reservation


The Memory Reservation for the virtual machine running the VoIP Logger must be configured
with the same value that was allocated to the virtual machine.

To configure memory reservation for a virtual machine running the NICE VoIP Logger:
1. Log in to the VMware vCenter as described in Logging in to the VMware vCenter
on page 18.
2. In the VMware vCenter, from the Inventory drop-down list, select Hosts and Clusters.
3. Select the virtual machine running the VoIP Logger and click Edit Settings. The Virtual
Machine Properties window appears.
Figure 2-40 Virtual Machine Properties


The CPU, Memory, Disk, and Network Adapter resource types for the selected virtual machine
are displayed.
4. Click the Resources tab. The CPU, Memory, Disk, and Advanced CPU resource types are
listed.
Figure 2-41 Virtual Machine Properties - Resources

5. Select Memory. The Resource Allocation area displays the configurable parameters for the
Memory resource.


Figure 2-42 Memory Reservation for the Virtual Machine Running NICE VoIP Logger

In the Resource Allocation area, configure parameters as described below:


• Shares Normal
• Reservation (MB) At least 2048 MB
• Limit Unlimited
6. Click OK.

Disabling the Balloon Driver


In the VMware configuration, the Balloon Driver reclaims memory from the virtual machines in
cooperation with the VMware Tools (vmmemctl driver) and the guest operating systems.
This is the preferred method for reclaiming memory from virtual machines, since it reclaims the
memory that is considered least valuable by the guest operating system. The system inflates the
balloon driver to increase memory pressure within the virtual machine, causing the guest operating
system to invoke its own native memory management algorithms. When memory is tight, the guest
operating system decides which particular pages of memory to reclaim, and if necessary, swaps
them to its own virtual disk. This proprietary technique provides predictable performance that
closely matches the behavior of a native system under similar memory constraints.
To enable the NICE VoIP Logger to run properly on the virtual machine, the Balloon Driver must
be disabled. This section describes how to disable the Balloon Driver.


To disable the Balloon Driver:


1. Using vSphere Client, connect directly to the ESX Server host on which the virtual machine
resides.
2. Log in to the ESX Server host as a user with administrative rights.
3. Shut down the virtual machine.
4. On the Inventory panel, right-click the virtual machine and click Edit Settings.
The Virtual Machine Properties window appears.
5. Click the Options tab and select General.
Figure 2-43 Virtual Machine Properties Window - General Parameters

6. Click Configuration Parameters. The Configuration Parameters window appears.


Figure 2-44 Configuration Parameters Window


7. Click Add Row.


8. In the text field, type sched.mem.maxmemctl.
9. In the Value area, type 0.
10. Click OK.
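Behind the scenes, this procedure adds a line to the virtual machine's .vmx configuration file. The fragment below is a hedged example using the quoted key = "value" form commonly found in .vmx files:

```
sched.mem.maxmemctl = "0"
```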


Setting the Priority for the VoIP Logger’s Capture NIC


The capture NIC must be set to second in priority.
If you changed your virtual machine’s NIC type as described in Changing the NIC Type to
Enhanced VMXNET / VMXNET2 (Enhanced) on page 37 or Changing the VoIP Logger’s
Capture NIC Type to E1000 on page 40, you may need to reset the NIC’s priority. This section
describes how to set NIC priority.
You set the NIC priority for the VoIP Logger’s capture NIC in the virtual machine’s operating
system.

To set the priority for the VoIP Logger’s capture NIC:


1. From the virtual machine’s operating system, open the Network Connections window.
2. From the Advanced menu, select Advanced Settings.
Figure 2-45 Network Connections Window - Advanced > Advanced Settings

The Advanced Settings window appears.


Figure 2-46 Network Connections - Advanced Settings

The network adapter designated for capturing audio must appear second in the list.

3. In the Connections area, set the network adapter that is designated for capturing your audio
(RTP) to appear second in the list.
4. Click OK.
5. Restart the virtual machine.


Configuring the VoIP Logger’s Sniffing NIC for Promiscuous Mode


If you are running a Passive VoIP Logger on a virtual machine, you must perform this
procedure.
The Passive VoIP Logger uses a NIC (Network Interface Card) for sniffing audio on the network.
This VoIP Logger configuration requires at least one dedicated NIC for sniffing the audio. In the
VMware environment, the ESX server must be configured with this dedicated NIC, and to enable
sniffing, the NIC must be configured for Promiscuous mode.

TIP

What is Promiscuous mode?


In a network, promiscuous mode allows a network device to intercept and read each arriving
network packet in its entirety. This mode of operation is sometimes given to a network
snoop server that captures and saves all packets for analysis (for example, for monitoring
network usage).
In an Ethernet local area network (LAN), promiscuous mode is a mode of operation in which
every data packet transmitted can be received and read by a network adapter. Promiscuous
mode must be supported by each network adapter as well as by the input/output driver in the
host operating system. Promiscuous mode is often used to monitor network activity.

To configure the sniffing NIC for promiscuous mode:


1. Log in to the VMware vCenter as described in Logging in to the VMware vCenter
on page 18.
2. Select the ESX Server and click the Configuration tab.
3. Select Networking. The installed NICs are listed.
4. Locate the virtual switch whose NIC will be used for sniffing. Click this virtual switch’s
Properties button.


Figure 2-47 VMware vCenter - Configuration Tab

The Properties window for the sniffing NIC appears.


Figure 2-48 NIC Properties Window

5. Click Edit. The Edit Properties window appears.


Figure 2-49 Edit Properties Window

6. Click the Security tab and from the Promiscuous Mode drop-down list, select Accept.
7. Click OK. The ESX server’s dedicated NIC is now configured for Promiscuous mode.


Increasing the Rx Buffer for the VoIP Logger’s Capture NIC


This section describes:
• ESX V4.0: Increasing the Rx Buffer for the VMXNET2 (Enhanced) Card for ESX V4.0
on page 62
• ESX V3.5: Increasing the Rx Buffer for the E1000 Card for ESX V3.5 on page 66

IMPORTANT

Make sure that the network interface card designated for capturing audio is approved by
NICE. See Virtual Machine Network Interface Card (NIC) Configuration on page 35.

Increasing the Rx Buffer for the VMXNET2 (Enhanced) Card for ESX V4.0
The Rx buffer must be increased for every network interface card that is configured to capture
voice via sniffing. While the default Rx buffer is 150, the maximum Rx buffer supported by
VMware for adapter type VMXNET2 (Enhanced) for ESX Version 4 is 512. Increasing the network
adapter’s Rx buffer improves the VoIP Logger’s performance and reduces packet loss.
You can increase the Rx buffer for the VMXNET2 (Enhanced) card in one of the two following
ways:
• Increasing the Rx Buffer in the .vmx Configuration File
-or-
• Increasing the Rx Buffer in the Virtual Machine’s Properties Window

TIP

From VMware KB1010071:


Receive packets might be dropped at the virtual switch if the virtual machine's network driver
runs out of receive (Rx) buffers, that is, a buffer overflow. The dropped packets may be
reduced by increasing the Rx buffers for the virtual network driver.


Increasing the Rx Buffer in the .vmx Configuration File

To increase the Rx buffer in the .vmx configuration file:


1. Shut down the virtual machine that is running the VoIP Logger.
2. Open the .vmx configuration file and add the following line:
Ethernet<x>.numRecvBuffers=<value>
where <x> refers to your virtual NIC and <value> refers to the new value for the Rx buffer
size.
EXAMPLE:
Ethernet1.numRecvBuffers=512

TIP

You can also change the buffer by using the Edit option. However, you must shut down the
virtual machine before changing the buffer.

3. Save and close the file.
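If you prefer to script the edit in step 2, the following Python sketch (illustrative only; back up the .vmx file and keep the virtual machine powered off before editing) inserts or updates a numRecvBuffers entry. It assumes the quoted key = "value" line format commonly found in .vmx files.

```python
def set_vmx_option(vmx_text, key, value):
    """Return .vmx content with key set to value, replacing any existing entry.

    A hedged sketch: it assumes the common `key = "value"` .vmx line format
    and compares keys case-insensitively.
    """
    lines, found = [], False
    for line in vmx_text.splitlines():
        name = line.split("=", 1)[0].strip().lower()
        if name == key.lower():
            lines.append(f'{key} = "{value}"')  # replace the existing entry
            found = True
        else:
            lines.append(line)
    if not found:
        lines.append(f'{key} = "{value}"')  # append a new entry at the end
    return "\n".join(lines) + "\n"

before = 'ethernet1.present = "TRUE"\nethernet1.numRecvBuffers = "150"\n'
after = set_vmx_option(before, "ethernet1.numRecvBuffers", "512")
print(after)
```

The same helper could set any other single-value .vmx option, such as sched.mem.maxmemctl.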

Increasing the Rx Buffer in the Virtual Machine’s Properties Window

To increase the Rx buffer in the virtual machine’s Properties window:


1. Shut down the virtual machine that is running the VoIP Logger.
2. In the virtual machine’s Properties window, select the Options tab.


Figure 2-50 Virtual Machine Properties Window

3. Select General. Then click Configuration Parameters.


Verify the number of the Ethernet adapter that is used for sniffing.
4. Click Add Row.
5. In the Parameter field, enter Ethernet1.numRecvBuffers.
6. In the Value field, enter 512.
See Figure 2-51.


Figure 2-51 Configuration Parameters Window - ethernet1.numRecvBuffers Value

7. Click OK.


Increasing the Rx Buffer for the E1000 Card for ESX V3.5
You increase the Rx buffer for the E1000 card in the virtual machine’s operating system.

To increase the Rx buffer for the E1000 network interface card:


1. From the operating system, open the Network Connections window.
2. Right-click the network adapter that was designated for capturing audio (RTP), and select
Properties.
The Properties window appears.
Figure 2-52 Network Connections - Capture NIC Properties Window

3. In the General tab, click Configure. Then click Advanced.


4. Define values as described below:
• Number of Receive Buffers Enter 4096.
• Number of Coalesce Buffers Enter 512.
• Transmit Descriptors Enter 512.
See Figure 2-53, Figure 2-54, and Figure 2-55.


Figure 2-53 Network Connection Properties - Number of Receive Buffers

Set Number of Receive Buffers to 4096

Figure 2-54 Network Connection Properties - Number of Coalesce Buffers

Set Number of Coalesce Buffers to 512


Figure 2-55 Network Connection Properties - Number of Transmit Descriptors

Set Number of Transmit Descriptors to 512

5. Restart the virtual machine.


VMware High Availability


VMware High Availability offers the following:
• Provides automatic restart of virtual machines in case of physical host failures.
• Provides high availability while reducing the need for passive standby hardware and dedicated
administration
• Is configured, managed, and monitored using vCenter Server
VMware High Availability provides high availability for applications running on virtual machines.
In the event of a physical server failure, affected virtual machines are automatically restarted on
other production servers with spare capacity.
VMware High Availability offers the following advantages over a traditional failover
solution:
• Reduced hardware cost and setup: When you use VMware High Availability, you must have
sufficient resources to cover the number of host failures you want to guarantee against in
the event of a failover. The vCenter Server system automatically manages cluster resources
and configuration.
• Increased application availability: Because virtual machines can recover from hardware
failure, all applications that start automatically at restart regain availability without
additional computing resources, even if the application itself is not a clustered application.
• VMware Distributed Resource Scheduler (DRS) integration: If a host fails and its virtual
machines are restarted on other hosts, DRS can provide migration recommendations or
migrate virtual machines for balanced resource allocation. If one or both of the source and
destination hosts of a migration fail, VMware High Availability can help recover from that
failure.

Implications of High Availability in the NICE Perform Environment


When an ESX server fails and High Availability is configured:
• Virtual Machines with NICE Perform components running on this host are shut down
unexpectedly and the same virtual machines are restarted on other ESX servers.
• In the event of a failover, NICE Perform resumes functioning within a short period of time.
• In the event of a failover, NICE Perform downtime behaves as follows:
• NICE Perform system downtime varies according to the specific components that
failed.
• Virtual machines are restarted. Downtime is determined by the time the virtual machines
take to restart on the other ESX servers. This may take several minutes.
• NICE Perform system recovery time is determined by the time the components take to
resume functioning after restart. This may vary between 2 and 30 minutes.


Configuration Guidelines for VMware High Availability


The VMware administrator is responsible for the High Availability configuration.
• General recommendations for High Availability configurations are as follows:
• ESX Servers are configured in a Cluster.
• The Cluster is configured with High Availability enabled.
• Virtual Machines with NICE Perform servers must reside on shared storage.
• All ESX servers in the Cluster must have access to the shared storage.
• ESX server networking should be configured with identical network names and VLAN
access, so that virtual machines can access the same VLAN after restarting on another
ESX server.
• Virtual Machine restart priority options are as follows:
• Three (3) priority levels can be set for each Virtual Machine.
• High
• Medium
• Low
• The default priority setting is Use Cluster Settings, which is set to Medium.
• NICE Perform virtual machine recommendations:
• Database server – High
• Other servers – leave default settings
See Sample Cluster with High Availability Settings on page 71.


Sample Cluster with High Availability Settings


This section describes the recommended settings for running NICE Perform in a VMware cluster
configured for High Availability.
Referring to Figure 2-56, a cluster called Consolidation has been defined. ESX servers and virtual
machines have been configured on this cluster. DRS (Distributed Resource Scheduler) and HA
(High Availability) have been enabled.
Figure 2-56 vCenter - High Availability Configuration

In the screenshot, a cluster called Consolidation has been created, with DRS and HA
enabled. The virtual machines run on the ESX servers configured in the cluster. To access
cluster features, select the cluster and click Edit Settings.


Configuring High Availability Settings

To configure High Availability:


1. Select the cluster and click Edit Settings.
The Cluster Settings window appears.
2. Select the Cluster Features tab.
In the Features area, select Turn On VMware HA and Turn On VMware DRS.
See Figure 2-57.
Figure 2-57 Cluster Settings Window - Cluster Features Tab

3. Select the VMware HA tab.


a. In the Host Monitoring Status area, select Enable Host Monitoring.
b. In the Admission Control area, select Enable: Do not power on VMs that violate
availability constraints.
c. In the Admission Control Policy area, select Host failures cluster tolerates and
define a policy type.
See Figure 2-58.
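As a rough planning aid for the admission control policy in step c, tolerating k host failures in an N-host cluster reserves on the order of k/N of total cluster capacity for failover. The sketch below uses that simple ratio; it is not VMware's exact slot-size algorithm:

```python
# Back-of-the-envelope admission-control arithmetic: tolerating k host
# failures in an N-host cluster reserves roughly k/N of total capacity.
# This is a simplification, not the vSphere slot-based calculation.

def reserved_fraction(hosts: int, failures_tolerated: int) -> float:
    if not 0 < failures_tolerated < hosts:
        raise ValueError("failures tolerated must be between 1 and hosts - 1")
    return failures_tolerated / hosts

# A 4-host cluster tolerating 1 failure keeps about 25% of capacity free.
print(f"{reserved_fraction(4, 1):.0%}")
```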


Figure 2-58 Cluster Settings Window - VMware HA Tab

4. Click the Virtual Machine Options tab.


5. Set the VM Restart Priority for the virtual machine running the Database Server components
to High. All other NICE Perform virtual machines can remain with the default setting,
Medium.
See Figure 2-59.


Figure 2-59 Cluster Settings Window - Virtual Machine Options Tab

3
Microsoft Hyper-V Configuration

This chapter describes how to configure the Microsoft Hyper-V Server 2008 R2 for NICE
Perform.

Overview ................................................................... 76
Configuring Hyper-V ................................................. 76
Virtual Network Settings ................................... 77
Hyper-V Host Server Disk Management .......... 78
Virtual Machine File Location for a New Virtual
Machine ............................................................ 79
Virtual Machine Settings ................................... 81
Virtual Machine Time Synchronization ............. 87


Overview
Microsoft Hyper-V Server 2008 R2 is designed to run multiple virtual machines, making it
possible to host multiple operating systems running NICE Perform components. The virtual
machines' resources (disk, CPU, memory, network) can be allocated and configured either with
the Hyper-V Manager on the host server, or via the management application SCVMM (System
Center Virtual Machine Manager 2008 R2), which is installed on a Windows 2008 server in the
same domain as the Hyper-V server.
For supported NICE Perform components, configuration requirements, and specifications, see the
Design Considerations and Certified Servers guides.

Configuring Hyper-V
This section describes the following topics:
• Network Adapter Configuration on page 76
• Hyper-V Host Server Disk Management on page 78
• Virtual Machine File Location for a New Virtual Machine on page 79
• Virtual Machine Settings on page 81

Network Adapter Configuration


Figure 3-1 shows the Network Interface Card (NIC) configurations and settings for the
Microsoft Hyper-V Manager running on Windows 2008 R2 with the Hyper-V installation.
One of the network cards is set for management and is used for the connection between the host
server and the LAN. This network connection is assigned the IP of the host server (Windows
2008 R2). At least one NIC is designated to connect the virtual machines. This NIC is configured
in the Virtual Network Settings. Other NICs can be used as additional connections for the
virtual machines or for an iSCSI connection to shared storage.
Figure 3-1 Network Interface Card Settings


Virtual Network Settings


The virtual network that is used by all virtual machines is connected to a physical NIC that is
connected to the LAN. Several NICs may be configured in order to split the network load between
the virtual machines. Each of the NICs must be configured for a specific virtual machine. See
Figure 3-2.
Figure 3-2 Virtual Network Settings


Hyper-V Host Server Disk Management


In the Disk Management view, you can see all available disks, both local disks and storage
LUNs. Format only the storage disks that are for general use, that is, disks that contain the
virtual machines' hard disk files (.VHD). The LUNs that will be mapped as dedicated physical
disks for the SQL server Data and Log files (a total of four LUNs for the DB and DM servers)
should remain in Offline status, so that they are available as dedicated disks for the SQL
virtual machines.
See Figure 3-3.
Figure 3-3 Disk Management


Virtual Machine File Location for a New Virtual Machine


When you create a new virtual machine or add a virtual disk, the configuration files and the VHD
(virtual hard disk file) must be configured to reside on the Shared Storage and not on the local
disks of the host server. The default location can be set in the Hyper-V Settings window.
See Figure 3-4 and Figure 3-5.
Figure 3-4 Location of the Virtual Hard Disk Files


Figure 3-5 Default Location for Configuration Files


Virtual Machine Settings


The settings of each virtual machine contain the following parameters:
• Number of virtual CPU processors.
• Memory allocation – RAM assigned to the virtual machine.
• Network – the network adapter (one or more) assigned to the virtual machine.
• Virtual disks – the disk that contains the operating system must be defined as IDE under the
IDE Controller. All other disks are defined as SCSI under a SCSI controller.
• One SCSI controller can be assigned more than one hard drive. More SCSI controllers can
be added as well.
• For SQL servers running on virtual machines, the recommendation is to map two physical
disks (pass-through) for SQL data and SQL log.
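The disk rules above can be expressed as a small validation sketch. The dictionary layout is hypothetical and only illustrates the guidelines (OS disk on IDE, other disks on SCSI, fixed-size virtual disks):

```python
# Sketch validating the Hyper-V disk layout rules: the OS disk on the
# IDE controller, all other virtual disks on SCSI, and fixed-size VHDs.
# The disk dictionaries are a hypothetical representation for illustration.

def validate_disks(disks):
    """disks: list of dicts with 'name', 'controller', 'is_os', 'fixed' keys."""
    errors = []
    for d in disks:
        if d["is_os"] and d["controller"] != "IDE":
            errors.append(f"{d['name']}: OS disk must be on the IDE controller")
        if not d["is_os"] and d["controller"] != "SCSI":
            errors.append(f"{d['name']}: data disks belong on a SCSI controller")
        if not d["fixed"]:
            errors.append(f"{d['name']}: create virtual disks with a fixed size")
    return errors

layout = [
    {"name": "C:", "controller": "IDE", "is_os": True, "fixed": True},
    {"name": "D:", "controller": "SCSI", "is_os": False, "fixed": True},
]
print(validate_disks(layout))  # an empty list means the layout follows the rules
```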

NOTE

• To optimize performance, all virtual disks must be created with a fixed size. This is
selected when initially configuring a new virtual machine or when adding a new virtual
disk.
• Make sure that Hyper-V Integration Services have been installed on each virtual machine.
• Do not save virtual machine snapshots. Snapshots can compromise the performance of
your virtual machine.
• For supported NICE Perform components and specifications of resources to assign to each
virtual machine, see the NICE Perform Certified Servers Guide.
• After you complete setting up the operating system, it is recommended to install the
Integration Services. To verify whether they are already installed, see the virtual machine's
Settings > Management.

See Examples of Virtual Machine Settings on page 82.


Examples of Virtual Machine Settings


Figure 3-6 Virtual Machine Processor – No Reservation or Limit


Figure 3-7 Virtual Machine Memory – RAM

Figure 3-8 Virtual Machine - Network


Virtual Machine Hard Drive Settings - IDE for the Operating System

Figure 3-9 Virtual Machine Hard drive – IDE for OS


Virtual Machine Hard Drive Settings – SCSI for Partitions Without the Operating System

For partitions other than C: (operating system), the Hard Drive must be added under the SCSI
Controller. The location of this Hard Drive file (VHD) should be on the Shared Storage.
Figure 3-10 Virtual Machine Hard Drive – SCSI for Partitions Without the Operating System


Virtual Machine Database Server Settings

For a virtual machine used as a database server (DB or DM) running SQL, the disks for the
SQL Data files and SQL Log files must be added as a Physical Hard Drive. Two dedicated LUNs
are needed per database server (one for Data and one for Log).
The list of physical drives contains the LUNs that appear in Disk Management in Offline status.
Figure 3-11 Virtual Machine Hard Drive for Database Server


Virtual Machine Time Synchronization


Virtual machine time synchronization must be set as follows:
• All virtual machines must be time-synchronized to the host Hyper-V server.
• All host Hyper-V servers must be time-synchronized to your organization's NTP server.
This eliminates the need to configure NTP for each virtual machine.
• The Windows Time service must be disabled.
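The synchronization chain above (guests track their Hyper-V host, hosts track the organization's NTP server) can be checked conceptually as follows. The one-second threshold and the sample offsets are illustrative values, not a NICE requirement:

```python
# Conceptual drift check for the time-sync chain: hosts follow NTP,
# guests follow their host via Integration Services Time Synchronization.
# The 1-second budget and the sample offsets are illustrative only.

MAX_DRIFT_SECONDS = 1.0

def drift_ok(guest_offsets, host_offset):
    """True when the host and all of its guests are within the drift budget."""
    return (abs(host_offset) <= MAX_DRIFT_SECONDS
            and all(abs(o) <= MAX_DRIFT_SECONDS for o in guest_offsets))

print(drift_ok([0.2, -0.4], host_offset=0.1))  # True: everything in sync
```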

To time-synchronize your virtual machines:

• In each virtual machine, in the Integration Services window, select Time Synchronization.
Figure 3-12 Virtual Machine Settings - Time Synchronization

4
Virtual Desktop Infrastructure (VDI) for
NICE Client-Side Applications

IN THIS CHAPTER
This chapter describes guidelines for installing and accessing NICE applications in the VDI
(Virtual Desktop Infrastructure) environment.

Overview ................................................................... 90
Supported Solutions ......................................... 90
VMware View ............................................................ 91
Overview of View Manager .............................. 91
PCoIP Display Protocol .................................... 92
Guidelines for NICE Client-Side Applications
Running on a VMware Virtual Desktop ............ 93
Client Login ...................................................... 94
Citrix XenDesktop...................................................... 96
Overview of XenDesktop .................................. 96
Desktop Delivery Protocol ................................ 98
Guidelines for NICE Client-Side Applications
Running on a Citrix XenDesktop ...................... 98
Client Login....................................................... 99
Troubleshooting.............................................. 101


Overview
Desktop virtualization involves encapsulating and delivering either access to an entire information
system environment or to the environment itself via a remote client device. The client device may
use an entirely different hardware architecture than that used by the projected desktop
environment, and may also be based upon an entirely different operating system.
The desktop virtualization model allows the use of virtual machines to let multiple network
subscribers maintain individualized desktops on a single, centrally located computer or server. The
central machine may operate at a residence, business, or data center. Users may be geographically
scattered, but all must be connected to the central machine by a local area network, a wide area
network, or the public Internet.
Virtual Desktop Infrastructure (VDI) is a method of hosting a desktop operating system within a
virtual machine (VM) running on a centralized server. VDI is a variation on the client/server
computing model, sometimes referred to as server-based computing (SBC). The term was coined
by VMware Inc.

Supported Solutions
Supported VDI solutions for NICE client-side applications:
• VMware:
• VDM
• VMware View
• Citrix:
• Citrix XenDesktop
• Citrix XenApp (also known as Citrix Presentation Server)

For a complete list of components supported by NICE Perform, see the Certified Servers Guide.


VMware View
VMware View is the VMware VDI solution for delivering virtualized desktops (for example,
desktops running Windows XP, Vista, or 7) to the client.
Former versions of VMware View were called VDM (Virtual Desktop Manager).

Overview of View Manager


View Manager integrates with VMware vCenter Server, allowing administrators to create desktops
on virtual machines running on VMware ESX server, and then deploy these virtual desktops to end
users. In addition, View Manager utilizes your existing Active Directory infrastructure for user
authentication and management.
Once a desktop has been created, Web-based or locally installed client software enables authorized
end users to securely connect to centralized virtual desktops, back-end physical systems, or
terminal servers.
The virtual desktops are virtual machines (with Windows XP, Windows Vista or Windows 7
operating systems) running on the VMware ESX servers, controlled by the VMware vCenter. The
desktops are managed and provisioned to the client by the VMware View Manager.
Figure 4-1 provides a high-level view of the View Manager environment and its main
components.
Figure 4-1 Schematic Diagram of the View Manager Environment
The diagram shows thin clients connecting through the network to the View Connection Server,
which brokers them to the virtual desktops running on an ESX host managed by the vCenter
Server. The NICE Perform servers are connected to the same LAN.


PCoIP Display Protocol


The display protocol is the method used for delivering the desktop remotely to the client. Former
versions of VMware VDI (VDM and View 3.1) used the Microsoft RDP protocol to connect the
client to the desktop.
PCoIP is a new, improved, and enhanced protocol provided with VMware View 4. Microsoft RDP
is also available, as before.
PCoIP provides an optimized PC experience for the delivery of images, audio, and video content
for a wide range of users on the LAN or across the WAN. PCoIP can compensate for an increase in
latency or a reduction in bandwidth, to ensure that end users can remain productive regardless of
network conditions. PCoIP is supported as the display protocol for VMware View desktops with
virtual machines and with physical machines that contain Teradici host cards.
PCoIP should be used for VMware View 4 (and higher).

Configuring the PCoIP Display Protocol


It is recommended to set PCoIP as the default display protocol in the View Manager 4.0 (or
higher) Administrator console, per desktop or pool of desktops.

To configure the PCoIP display protocol:

• Open the VMware View Manager and set the Default display protocol to PCoIP.
Figure 4-2 VMware View Manager - Web Interface

NOTE

For information about View Manager administration, see the VMware documentation guide -
http://www.vmware.com/pdf/view40_admin_guide.pdf


Guidelines for NICE Client-Side Applications Running on a VMware Virtual Desktop
• For a list of components supported by NICE Perform, see the Certified Servers Guide.
• Installation via template
NICE client-side applications for the agent desktop (such as NICE Applications’ Set Security,
NICE Player Codec pack, standalone Player, Reporter Viewer, ScreenAgent, ROD Desktop,
Desktop Analytics Agent) can be installed on the template before using it for deploying new
virtual machines.
• Component installation – NICE Perform components are installed on the virtual machine the
same as they would be installed on physical desktops.
• Specific configurations
• ScreenAgent installation
• Installation mode: WorkStation mode
• Use the default capture component installation mode - Hooking & Scraper methods
• For installation via template, select the Unique Agent User Name registration method


Client Login
You log in to the View Connection Server and connect to a virtual desktop via the VMware
View Client (this requires installing the VMware View Client software on the client machine).

NOTE

Web Login is not supported for NICE client-side components.

To log in to a virtual desktop via the VMware View Client:


1. From the Start menu, select Programs > VMware > VMware View Client.
The Connection Server login window appears.
Figure 4-3 VMware View Client Login Window

2. Enter the IP address or hostname of the View Connection Server at your site and click
Connect.
The VMware View Client login window appears.


Figure 4-4 VMware View Client Login Window

3. Enter your username and password and click Login.


Available desktops on the VMware View Connection Server are displayed.

TIP

PCoIP can also be selected during client login to the virtual desktop.
To select PCoIP, click the arrow next to the VMware View Connection Server name and select
Display Protocol > PCoIP.
Figure 4-5 Selecting PCoIP in the Login Window

4. Click Connect.


Citrix XenDesktop
Citrix XenDesktop is the Citrix VDI solution for delivering virtualized desktops (for example,
desktops running Windows XP, Windows Vista, or 7) to the client.

NOTE

The descriptions in this section were taken from the Citrix documentation.

Overview of XenDesktop
Citrix XenDesktop is a desktop virtualization system that centralizes and delivers Microsoft
Windows XP, Vista or 7 virtual desktops as a service to users anywhere. Virtual desktops are
dynamically assembled on demand, providing users with pristine, yet personalized, desktops each
time they log on. This ensures that performance never degrades, while the high speed delivery
protocol provides unparalleled responsiveness over any network. XenDesktop delivers a high
definition user experience over any connection including high latency wide area networks. The
open architecture of XenDesktop offers choice and flexibility of virtualization platform and
endpoints. Unlike other desktop virtualization alternatives, XenDesktop simplifies desktop
management by using a single image to deliver personalized desktops to users and enables
administrators to manage service levels with built-in desktop performance monitoring.
Citrix XenDesktop provides a complete virtual desktop delivery system by integrating several
distributed components with advanced configuration tools that simplify the creation and real-time
management of the virtual desktop infrastructure.
Desktop Delivery Controller - Installed on servers in the data center, the controller authenticates
users, manages the assembly of users' virtual desktop environments, and brokers connections
between users and their virtual desktops. It controls the state of the desktops, starting and stopping
them based on demand and administrative configuration. Desktop Delivery Controller also
includes Profile management, in some editions, to manage user personalization settings in
virtualized or physical Windows environments.
Virtual Desktop Agent - Installed on virtual desktops, the agent enables direct ICA (Independent
Computing Architecture) connections between the virtual desktop and user devices.
Citrix online plug-in - Installed on user devices, the Citrix online plug-in (formerly "Citrix
Desktop Receiver") enables direct ICA connections from user devices to virtual desktops.
Figure 4-6 on page 97 provides a high-level view of the XenDesktop environment.


Figure 4-6 Schematic Diagram of the XenDesktop Environment

The diagram shows thin clients connecting through the network to the DDC (Desktop Delivery
Controller), which brokers them to the virtual desktops running on a XenServer. The NICE
Perform servers are connected to the same LAN.


Desktop Delivery Protocol


Independent Computing Architecture (ICA) is the Citrix proprietary protocol used to deliver the
virtual machine desktop to the client. It provides the user with a high-definition experience.

Guidelines for NICE Client-Side Applications Running on a Citrix XenDesktop
• For a list of supported components, see the Certified Servers Guide.
• Installation via template
NICE client-side applications for the agent desktop (such as NICE Applications’ Set Security,
NICE Player Codec pack, standalone Player, Reporter Viewer, ScreenAgent, ROD Desktop,
Desktop Analytics Agent) can be installed on the template before using it for deploying new
virtual machines.
• Component installation – NICE Perform components are installed on the virtual machine the
same as they would be installed on physical desktops.
• Specific configurations
• NICE ScreenAgent installation
• Installation mode: WorkStation mode
• Use the default capture component installation mode - Hooking & Scraper methods
• For installation via template, select the Unique Agent User Name registration method
• Limitations
• NICE ScreenAgent support
• NICE Perform Release 3.1 – UP 3.1.19 or later
• NICE Perform Release 3.2 – UP 3.2.11 or later


Client Login
You can log in to the Desktop Delivery Controller and connect to a virtual desktop in two ways:
• Run the Online Plug-in that was installed on the client machine
• Web login

Logging in via the Online Plug-In

To log in to the Desktop Delivery Controller via the Online Plug-in:


1. From the Start menu, select Programs > Citrix > Online plug-in.
The Citrix online plug-in login window appears.
Figure 4-7 Citrix Online Plug-In

2. Enter your username and password and click OK.


Logging in via Web

To log in to the Desktop Delivery Controller via Web login:


1. In a Web browser, enter the following:

http://<Desktop Delivery Controller IP address/hostname>/Citrix/DesktopWeb/auth/login.aspx

The Citrix XenDesktop Logon window appears.


Figure 4-8 Citrix XenDesktop Logon Window

2. Enter your username and password and click OK.
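For scripted access, the login URL from step 1 can be assembled from the Desktop Delivery Controller address. Only the path documented above is assumed; the host name below is a placeholder:

```python
# Build the XenDesktop Web login URL documented in step 1.
# The host argument is a placeholder for your DDC IP address or hostname.

def ddc_login_url(host: str) -> str:
    return f"http://{host}/Citrix/DesktopWeb/auth/login.aspx"

print(ddc_login_url("ddc.example.local"))
```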


Troubleshooting

Playback Issue
If the virtual desktop OS does not recognize the client's local speaker audio device, do the
following on the Citrix XenDesktop Delivery Controller server:
1. Open the Citrix Presentation Server console.
2. Navigate to Farm > Policies.
3. Select a policy and open its properties.
Figure 4-9 Citrix Presentation Server Console - Select Properties

4. Navigate to Client Devices > Resources > Audio > Turn off speakers and select Not
Configured.


Figure 4-10 Citrix Presentation Server Console - Turn off speakers

A
Converting a Physical Machine into a
VMware Virtual Machine

This appendix describes how to convert a physical machine into a VMware virtual machine.

Overview ................................................................. 104
Installing VMware Converter 4.0.1 .......................... 105
Converting the Physical Machine into a Virtual
Machine .................................................................. 109
Defining the New Virtual Machine’s IP Address ..... 119

NOTE

The process of converting a NICE physical machine into a NICE virtual machine is
supported for a VMware vSphere version only. Converting a NICE physical machine
into a lower VMware version has not been certified.


Overview
This appendix describes the process of converting a physical machine into a virtual machine
running on ESX Version 4.0, using VMware vCenter Converter Version 4.0.1. For server
specifications, see the Certified Servers Guide.
Before performing this procedure, ensure that:
• The network speed between all servers and the ESX server is 100 Mbps or higher.
• You have administrative privileges on the physical machine.
• You have administrative privileges on the ESX machine.
• The maximum disk size on the physical server is smaller than the maximum disk size that you
will create on the ESX server.
• All NICE services running on the physical machine have been stopped.
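The pre-conversion checklist above can be summarized as a preflight sketch. The parameter names and sample values are illustrative, not part of the Converter tool:

```python
# Preflight sketch of the pre-conversion checks listed above.

def preflight_ok(net_mbps, phys_disk_gb, esx_max_disk_gb, nice_services_stopped):
    """True only when every documented precondition is met."""
    return (net_mbps >= 100                     # network speed 100 Mbps or higher
            and phys_disk_gb < esx_max_disk_gb  # ESX disk must be the larger one
            and nice_services_stopped)          # all NICE services stopped

print(preflight_ok(1000, 300, 500, True))  # True: safe to start the conversion
```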
Converting a physical machine into a virtual machine involves the following steps:
1. Installing VMware Converter 4.0.1
2. Converting the Physical Machine into a Virtual Machine
3. Defining the New Virtual Machine’s IP Address

IMPORTANT

The conversion is performed at a rate of 25 GB per hour.
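Using the quoted 25 GB per hour rate, the expected conversion time for a maintenance window can be estimated as follows (the 100 GB disk size is an example value):

```python
# Estimate conversion duration from the documented rate of 25 GB/hour.

RATE_GB_PER_HOUR = 25

def conversion_hours(disk_gb: float) -> float:
    return disk_gb / RATE_GB_PER_HOUR

print(f"{conversion_hours(100):.1f} hours")  # a 100 GB disk takes about 4.0 hours
```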


Installing VMware Converter 4.0.1


VMware Converter 4.0.1 must be installed on a dedicated machine. For a list of supported
operating systems, see http://www.vmware.com.

To install VMware Converter 4.0.1:


1. Run VMware-converter-4.0.1-161434.exe.
The VMware vCenter Converter Welcome Window appears.
Figure A-1 VMware vCenter Converter Welcome Window

2. Click Next.
The VMware vCenter License Agreement window appears.


Figure A-2 VMware vCenter License Agreement Window

3. Select I accept the terms in the License Agreement and click Next.
The Destination Folder window appears.
Figure A-3 VMware vCenter Destination Folder Window

4. To change the Destination Folder, click Change.


The Change Current Destination Folder window appears.


Figure A-4 VMware vCenter Change Current Destination Folder Window

5. Enter the new installation path and click OK.


The new path appears in the Destination Folder window.
6. Click Next.
The Setup Type window appears.
Figure A-5 VMware vCenter Setup Type Window

7. Select Local Installation and click Next.


The Ready to Install window appears.


Figure A-6 VMware vCenter Ready to Install Window

8. Click Install.
The VMware vCenter Converter is installed.
9. Click Finish.
10. Proceed to Converting the Physical Machine into a Virtual Machine on page 109.


Converting the Physical Machine into a Virtual Machine

To convert your physical machine into a virtual machine:


1. On the physical machine, shut down all NICE processes.
2. Open the VMware vCenter Converter Standalone.
Figure A-7 VMware vCenter Converter Standalone

3. Click Convert Machine.


The Specify Source tab appears.


Figure A-8 Specify Source Tab

4. Enter the connection details for the physical server and click Next.
The Standalone Agent Deployment window appears.


Figure A-9 Standalone Agent Deployment Window

5. Select Automatically uninstall the files when import succeeds. Then click Yes.
A Deploying agent message appears.
6. Click Yes again.
The Specify Destination tab appears.


Figure A-10 Specify Destination Tab (1)

7. Enter the ESX connection details and click Next.


The ESX connection details that you entered are displayed.
Figure A-11 Specify Destination - Specify Destination Tab (2)


8. Verify all details are correct and click Next.


The View/Edit Options tab appears.
Figure A-12 View/Edit Options Tab

9. In the Data to Copy area, click Edit.


The Data to Copy View/Edit options appear.


Figure A-13 Data to Copy View/Edit Options Tab

10. Select all devices and click Next.
Information about the devices appears.
Figure A-14 Devices Tab


11. Configure the following:


• Number of virtual CPUs
• RAM size
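The CPU and memory values chosen here are written to the destination virtual machine's .vmx configuration file. As an illustration only (the values below are examples, not sizing recommendations for NICE Perform servers), a virtual machine configured with 2 virtual CPUs and 4 GB of RAM would contain entries similar to:

```ini
numvcpus = "2"
memsize = "4096"
```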
12. Click Advanced options.
Figure A-15 Advanced Options

13. Select the following:


• Install VMware tools on the imported virtual machine

• Remove system restore checkpoints on destination

• Reconfigure destination virtual machine


14. Click Next.
The Ready to Complete tab appears.


Figure A-16 Ready to Complete Tab

15. Verify the installation details and click Finish.
The conversion process starts and its status is displayed.


Figure A-17 Conversion Status

16. Wait until the conversion process completes.


Figure A-18 Conversion Completed

All components have been converted successfully.


17. Restart all NICE services.
18. To complete the virtual machine definition, proceed to Defining the New Virtual Machine’s IP
Address on page 119.
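The service restart in step 17 can be done from the Services console (services.msc) or from a command prompt. The commands below are a sketch only; the placeholder must be replaced with the actual name of each NICE service installed on the server, which varies by site and installed components:

```bat
rem Repeat for each installed NICE service; replace the placeholder
rem with the real service name shown in the Services console.
net stop "<NICE service name>"
net start "<NICE service name>"
```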


Defining the New Virtual Machine’s IP Address


To complete the new virtual machine’s definition, you must define its IP address.

To define the new virtual machine’s IP address:


1. Shut down the physical machine.
2. Log on to the guest operating system.
3. From the Network Connections window, open the Properties window for the virtual machine’s
NIC and define its IP address.
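Alternatively, the IP address can be defined from a command prompt with netsh. The connection name and addresses below are examples only; substitute the NIC's actual connection name and the values appropriate for your network:

```bat
rem Example values - replace the connection name, IP address,
rem subnet mask, gateway, and DNS server with your own.
netsh interface ip set address name="Local Area Connection" static 192.168.10.50 255.255.255.0 192.168.10.1
netsh interface ip set dns name="Local Area Connection" static 192.168.10.2
```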

NOTE

Domain environment: If the physical machine was already defined as part of the
domain, the new virtual machine will already be a member of the domain.
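To confirm that the new virtual machine is a domain member, the domain name can be checked from a command prompt (the exact output format varies by Windows version):

```bat
systeminfo | findstr /B /C:"Domain"
```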

