Guillermo Corti
Sylvain Delabarre
Ho Jin Kim
Ondrej Plachy
Marcos Quezada
Gustavo Santos
ibm.com/redbooks
International Technical Support Organization
May 2014
SG24-8198-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page xiii.
This edition applies to Version 2.2 of IBM PowerVM Express Edition (5765-PVX), IBM PowerVM
Standard Edition (5765-PVS), IBM PowerVM Enterprise Edition (5765-PVE), IBM PowerVM EE
Edition for Small Server (5765-PVD), IBM PowerVM for Linux Edition (5765-PVL), Hardware
Management Console (HMC) V7 Release 7.8.0, IBM Power 740 Firmware Level AL740-110, and
IBM Power 750 Firmware Level AL730-122.
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 IBM PowerVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 IBM PowerVP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Power Integrated Facility for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 VIOS 2.2.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 VIOS Performance Advisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 PowerVM Live Partition Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 HMC feature updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7.1 Considerations and prerequisites for HMC . . . . . . . . . . . . . . . . . . . . . 5
1.7.2 HMC and Power Enterprise Pool interaction. . . . . . . . . . . . . . . . . . . . 5
1.7.3 HMC and IBM PowerVC interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 6
IBM® Power Systems™ servers coupled with IBM PowerVM® technology are
designed to help clients build a dynamic infrastructure, helping to reduce costs,
manage risk, and improve service levels.
IBM PowerVM delivers industrial-strength virtualization for IBM AIX®, IBM i, and
Linux environments on IBM POWER® processor-based systems. IBM
PowerVM V2.2.3 is enhanced to continue its leadership in cloud computing
environments. Throughout the chapters of this IBM Redbooks® publication, you
will learn about the following topics:
New management and performance tuning software products for PowerVM
solutions. Virtual I/O Server (VIOS) Performance Advisor has been enhanced
to provide support for N_Port Identifier Virtualization (NPIV) and Fibre
Channel, Virtual Networking and Shared Ethernet Adapter, and Shared
Storage Pool configurations. IBM Power Virtualization Performance
(PowerVP™) is introduced as a new visual performance monitoring tool for
Power Systems servers.
The scalability, reliability, and performance enhancements introduced with the
latest versions of the VIOS, IBM PowerVM Live Partition Mobility, and the
Hardware Management Console (HMC). As an example, this book goes
through the Shared Storage Pool improvements that include mirroring of the
storage pool, dynamic contraction of the storage pool, dynamic disk growth
within the storage pool, and scaling improvements.
This book is intended for experienced IBM PowerVM users who want to enable
2013 IBM PowerVM virtualization enhancements for Power Systems. It is
intended to be used as a companion to the following publications:
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
Thanks to the following people for their contributions to this project:
Syed R Ahmed, Suman Batchu, Carl Bender, David Bennin, Bill Casey,
Ping Chen, Shaival Chokshi, Rich Conway, Michael Cyr, Robert K Gardner,
Yiwei Li, Nicolas Guérin, Stephanie Jensen, Manoj Kumar, P Scott McCord,
Nidugala Muralikrishna, Paul Olsen, Steven E Royer, Josiah Sathiadass,
Vasu Vallabhaneni, Bradley Vette.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
This publication provides a summary of all the major IBM PowerVM and
Hardware Management Console (HMC) enterprise enhancements introduced in
the October 2013 announcement.
Before you continue, you need to be familiar with and have practical experience
with the contents in the following IBM Redbooks publications: IBM PowerVM
Virtualization Introduction and Configuration, SG24-7940, and IBM PowerVM
Virtualization Managing and Monitoring, SG24-7590.
This book was written so that you can go through the pages starting here or jump
to whatever subject interests you. The following chapters and sections of this
book are briefly introduced in this chapter:
IBM Power Virtualization Center (IBM PowerVC)
IBM Power Virtualization Performance (IBM PowerVP) for Power Systems
Power Integrated Facility for Linux (IFL)
Virtual I/O Server (VIOS) 2.2.3
VIOS Performance Advisor
PowerVM Live Partition Mobility
Hardware Management Console (HMC) feature updates
After the product code is loaded, IBM PowerVC’s no-menus interface will guide
you through three simple configuration steps to register physical hosts, storage
providers, and network resources. Then, it starts capturing and intelligently
deploying your virtual machines (VMs), among other tasks shown in the following
list:
Create VMs and then resize and attach volumes to them.
Import existing VMs and volumes so they can be managed by IBM PowerVC.
Monitor the utilization of the resources that are in your environment.
Migrate VMs while they are running (hot migration).
Deploy images quickly to create new VMs that meet the demands of your
ever-changing business needs.
Table 1-1 shows an overview of the key features included with IBM PowerVC
Editions.
IBM PowerVC Express Edition:
Supports IBM Power Systems hosts that are managed by the Integrated Virtualization Manager (IVM).
Supports storage area networks, local storage, and a combination in the same environment.
Supports a single VIOS VM on each host.
IBM PowerVC Standard Edition:
Supports IBM Power Systems hosts that are managed by a Hardware Management Console (HMC).
Supports storage area networks.
Supports multiple VIOS VMs on each host.
For more information about IBM PowerVC, see the IBM PowerVC Introduction
and Configuration, SG24-8199.
IBM PowerVP helps reduce the time and complexity to find and display
performance bottlenecks through a simple dashboard that shows the
performance health of the system. It can help simplify both prevention and
troubleshooting and therefore reduce the cost of performance management.
Power IFL takes advantage of the following aspects for clients willing to
consolidate Linux workloads on the POWER Architecture:
Competitive pricing to add Linux to an enterprise Power System
Scalable to 32 sockets with seamless growth
Enterprise-class reliability and serviceability
1.5 VIOS Performance Advisor
The VIOS Performance Advisor tool provides advisory reports that are based on
key performance metrics from various partition resources collected from the
VIOS environment. This tool provides health reports with proposals for making
configuration changes to the VIOS environment and identifies areas to
investigate further.
The VIOS Performance Advisor has been enhanced to provide support for
N_Port Identifier Virtualization (NPIV) and Fibre Channel, Virtual Networking and
Shared Ethernet Adapter, and Shared Storage Pool configurations.
Server evacuation is a new feature that helps systems administrators move all
the capable logical partitions (LPARs) from one system to another when
performing maintenance tasks, without disrupting business operations. This
enhancement supports Linux, AIX, and IBM i VMs.
Both enhancements are part of Hardware Management Console V7.7.8 and are
available at no additional charge.
The following HMC models cannot be upgraded to support this functionality and
HMC V7.7.8 is their last supported firmware level:
7042-CR4
7310-CR4
7310-C05
7310-C06
7042-C06
7042-C07
7315-CR3
7310-CR3
New resources can be added to the pool or existing resources can be
removed from the pool.
Pool information can be viewed, including pool resource assignments,
compliance, and history logs.
It is a solution that helps reduce the time and complexity needed to find and
display performance bottlenecks. It presents the administrator with a simple
dashboard showing the performance health of the system. It can help simplify
both prevention and troubleshooting and therefore reduce the cost of
performance management.
IBM PowerVP even allows an administrator to drill down to view specific adapter,
bus, or CPU usage. An administrator can see the hardware adapters and how
much workload is placed on them. IBM PowerVP provides both an overall and
detailed view of IBM Power System server hardware so that it is easy to see how
VMs are consuming resources.
Note: At the time of writing this book, the firmware was supported on the
following servers:
8231-E1D (IBM Power 710 Express)
8202-E4D (IBM Power 720 Express)
8231-E2D (IBM Power 730 Express)
8205-E6D (IBM Power 740 Express)
8408-E8D (IBM Power 750)
9109-RMD (IBM Power 760)
9117-MMC (IBM Power 770)
9179-MHC (IBM Power 780)
8246-L1D (IBM PowerLinux 7R1)
8246-L2D (IBM PowerLinux 7R2)
8246-L1T (IBM PowerLinux 7R1)
8246-L2T (IBM PowerLinux 7R2)
8248-L4T (IBM PowerLinux 7R4)
The system-level agent also acts as a partition-level agent for the partition on
which it is running. The other partitions are then configured to point the
partition-level agent to the system-level agent using the TCP/IP host name of the
system-level agent partition. The partition-level agents need to connect to the
system-level agent, so the system-level agent needs to be running before the
partition-level agents can collect and provide partition-specific information. The
system-level agent also needs to be running for the GUI to display information
about the system and its partitions (Figure 2-1 on page 12).
The IBM PowerVP product installer is a graphical installer with a dialog. The
PowerVP GUI is installed only on the system where you run the installation.
Partition Partition
agent agent
PowerVP
GUI
Workstation/LPAR
System agent
Follow these steps to get the system agent up and running:
1. Copy the installation package powervp.x.x.x.x.bff to a directory on the AIX
and VIOS VM. From that directory, run the following commands as root:
installp -agXd . powervp.rte
cd /tmp/gsk8
installp -acgqw -d /tmp/gsk8 GSKit*
/opt/ibm/powervp/iconfig Listen="* 13000" SystemLevelAgent=
The agent’s configuration file is configured by the previous iconfig command,
and it is in the following location:
/etc/opt/ibm/powervp/powervp.conf
2. To start the IBM PowerVP partition agent for the first time without rebooting,
run this command:
nohup /opt/ibm/powervp/PowerVP.sh &
After the next reboot, the agent is started automatically by the init script:
/etc/rc.d/rc2.d/SPowerVP
The following prerequisite packages need to be installed on the Linux system
before the agent installation:
sysstat procps net-tools ethtool perf coreutils ksh
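On distributions that use yum, a minimal installation sketch (an assumption; use zypper or apt-get on other distributions, provided the packages are available in the configured repositories):
yum install -y sysstat procps net-tools ethtool perf coreutils ksh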
The following IBM PowerVP RPM files are needed on Linux systems:
gskcrypt64-8.0.50.11.linux.ppc.rpm
gskssl64-8.0.50.11.linux.ppc.rpm
powervp-driver-*.ppc64.rpm (select the correct file that matches the Linux
distribution installed)
powervp-x.x.x.x.ppc64.rpm
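A minimal installation sketch, assuming the RPM files listed above were copied to the current directory and that the GSKit packages are installed first:
rpm -ivh gskcrypt64-8.0.50.11.linux.ppc.rpm gskssl64-8.0.50.11.linux.ppc.rpm
rpm -ivh powervp-driver-*.ppc64.rpm
rpm -ivh powervp-x.x.x.x.ppc64.rpm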
Note: If there is no powervp-driver RPM that matches the version of Linux that
is used, the source package powervp-driver-source-1.1.0-1.ppc64.rpm can
be installed. This package installs the necessary source package to build a
powervp-driver RPM on the current system. The files are unpacked in the
/opt/ibm/powervp/driver-source directory. From that directory, issue the
make command to build a powervp-driver RPM file for the current Linux
system. There are many necessary prerequisite packages when you build the
kernel modules. Consult the online documentation for the Linux prerequisites.
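A minimal sketch of that build procedure, assuming the source package was copied to the system and the build prerequisites are installed (the name of the resulting RPM depends on the kernel level):
rpm -ivh powervp-driver-source-1.1.0-1.ppc64.rpm
cd /opt/ibm/powervp/driver-source
make
rpm -ivh powervp-driver-*.ppc64.rpm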
Use these commands to check whether the IBM i agent is successfully installed
(Example 2-1).
If you need to install the IBM i agent manually, use the following instructions in
Example 2-2.
System-wide statistics
The partition list at the top contains a line for every LPAR on the Power Systems
server. The first column indicates the partitions that can be “drilled down” to see
partition-specific performance information. The second column is the LPAR ID,
which matches the configuration in the HMC for the system. The third column
indicates whether the processors for the partition are Dedicated or Shared. The
fourth and fifth columns provide the Cores Entitled and Cores Assigned
(currently using) for the partition. The sixth column is a moving bar that indicates
the CPU utilization for the partition. An example of the IBM PowerVP main
window is in Figure 2-4.
A new tab is displayed after you select a node, showing the in-depth (or a
drill-down view) hardware of the selected node. The larger boxes are the
processor modules within the node. Columns are in each processor module box
that indicate each of the CPU cores on the module. The utilization depicted in the
cores will change over time as performance statistics change, and possibly the
color will also change. The lines between the processor modules represent the
buses between the modules. The lines that run off the page represent the buses
to other nodes.
The boxes above and below the processor modules represent the I/O controllers
(also known as the GX controllers) with the lines to them representing the buses
from the processor modules to the controllers. Similarly, the boxes to both sides
represent the memory controllers (also known as the MC controllers) with the
lines to them representing the buses from the processor modules to the
controller. The colors of the lines can change based on the utilization of the
buses. The bus utilization is also shown as a percentage in the controller box.
For partitions with dedicated cores, you can click the LPAR line to show the cores
that are assigned to the partition. You can also click any of the cores to show
which LPAR is assigned to the core. If an LPAR or core is assigned to a shared
partition pool, these are all grouped together with the same color (usually blue)
because they cannot be differentiated. If you have active cores that are not
assigned to a dedicated partition or the shared pool, these cores can have CPU
utilization because they might be borrowed by partitions that need additional
processing power.
If you drill down to a specific LPAR, the following information is displayed. A new
tab is created that shows the partition detailed information. The bars represent
different performance metrics for that specific partition:
The CPU column shows the CPU utilization for that partition as a percentage
of the entitled processor resources.
The Disk Transfer Rate shows the rate of bytes read and written to disk. After
you select a disk column, statistics for the individual disks of the LPAR appear
in the bottom half of the display.
The Total Ethernet column represents the rate of bytes sent and received on
the Ethernet. After you select the network column, statistics for each Ethernet
adapter of the LPAR appear in the bottom half of the display.
Note: The IBM PowerVP system agent also behaves like a partition agent.
There is no need to run both system and partition agents on a single LPAR.
Only LPARs that do not run system agents need partition agents.
With the Power Integrated Facility for Linux (Power IFL) offering, IBM brings the
industry-leading class Power platform closer to the Linux ecosystem. Power IFL
helps clients to consolidate operations and reduce overhead by using their
existing production systems and infrastructures.
Power IFL is available for Power 770, 780, and 795 servers with available
capacity on demand (CoD) memory and cores.
3.1.1 Requirements
Power IFL has the following requirements:
Firmware level 780
HMC Level Version 7, Release 7.8
The Power IFL contract (form Z126-6230) must be signed by the client before the
order. This contract needs to be signed one time for each client enterprise per
country. The client agrees to run the system in a manner that isolates the Power
IFL cores in a separate virtual shared processor pool from the rest of the other
operating system cores.
The Power IFL offering includes 32 GB of memory activations (priced per GB) and
4 x PowerVM for PowerLinux (PowerVM EE) license entitlements.
Note: If PowerVM Standard Edition is running on other cores, all cores will be
upgraded to PowerVM Enterprise Edition (5765-PVE) at the client’s expense.
The number of general-purpose cores, and therefore the capability available for
AIX and IBM i partitions, is the total number of licensed activations minus any IFL
and VIOS activations.
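For example, on a hypothetical system with 48 licensed core activations, of which 8 are Power IFL activations and 2 are used by VIOS partitions, 48 - 8 - 2 = 38 cores remain available for AIX and IBM i partitions.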
Figure (scenario configuration): seven LPARs (LPAR1 through LPAR7) and VIOS partitions run on the PowerVM Hypervisor; some LPARs use dedicated cores, and the others are uncapped shared LPARs with varying virtual processor, processing unit, and memory assignments in the default shared pool. A second diagram shows the same environment with the shared processor pool SharedPool01 capped at 4 maximum processing units alongside the default shared pool.
To solve the compliance issues from scenario 1, the following actions are in
place:
The VIOS and dedicated AIX partition remain the same.
The Maximum Processing Units for SharedPool01 is set to 4 by way of the
Virtual Resources section on the HMC to prevent AIX and IBM i from
obtaining more than four physical processor cores of resources.
The AIX and IBM i Shared LPARS are dynamically moved into the shared
processor pool SharedPool01 by way of the Partitions tab on the Shared
Processor Pool Management panel on the HMC.
AIX, IBM i, and VIOS LPARs can only obtain a maximum of 16 cores.
It will be the system owner’s responsibility to bring the configuration back into
compliance.
Note: At the time of writing this book, the firmware performs a soft compliance
validation.
From now on, the VIOS rootvg requires at least 30 GB of disk space. It is advised
that you protect the VIOS rootvg by a Logical Volume Manager (LVM) mirror or
hardware RAID. In correctly configured redundant VIOS environments, it is
possible to update VIOSs in sequence without interrupting client virtual machines
(VMs). Extra work might be required if client VMs are configured to use an LVM
mirror between logical unit numbers (LUNs) that are provided by dual VIOSs.
The PowerVM implementation of virtual networking takes place in both the Power
Hypervisor and VIOSs.
In this section, we describe the new method used for SEA failover configuration.
This enhancement is achieved by removing the requirement of a dedicated
control-channel adapter for each SEA configuration pair.
4.1.1 Requirements
The new simplified SEA failover configuration is dependent on the following
requirements:
VIOS Version 2.2.3
Hardware Management Console (HMC) 7.7.8
Firmware Level 780 or higher
Note: At the time of writing this book, this feature is not supported on
hardware models MMB and MHB.
For more details about the supported machine model types, go to this website:
https://www-304.ibm.com/webapp/set2/sas/f/power5cm/power7.html
The SEA failover still supports the traditional provisioning of the dedicated
control-channel adapter in SEA failover VIOSs. Existing SEA and SEA failover
functionality continues to work, which allows the existing SEA failover
configuration to migrate to the new VIOS. The new mechanism is supported
without making any configuration changes.
Multiple SEA pairs are allowed to share the VLAN ID 4095 within the same virtual
switch. We still can have only two VIOSs for each SEA failover configuration.
The new simplified SEA failover configuration relies on the following dependencies:
VLAN ID 4095 is a reserved VLAN for internal management traffic. POWER
Hypervisor 7.8 and higher have support for management VLAN ID 4095.
The HMC ensures that the management VLAN ID 4095 is not user
configurable.
The HMC also needs to ensure that the SEA priority value is either 1 or 2 so
that users do not configure more than two SEAs in a failover configuration.
Because the existing SEA failover configuration is still available, the following
method is used to identify a simplified configuration:
The method to discover an SEA failover partner is decided based on user
input for the control channel (ctl_chan) attribute of the SEA device on the
mkvdev command.
If the control-channel adapter is specified on the mkvdev command and the
specified adapter is not one of the trunk adapters of the SEA, a dedicated
control-channel adapter is specified.
If no control-channel adapter is specified on the mkvdev command, the default
trunk adapter is the Port Virtual LAN Identifier (PVID) adapter of the SEA.
Partners are discovered using the new discovery protocol implementation
over the management VLAN ID 4095.
4.1.4 Migration
Migration from the current SEA to the simplified SEA configuration without a
dedicated control channel requires a network outage. It is not possible to remove
the dedicated control-channel adapter dynamically at run time.
The SEA must be in a defined state before you can remove the dedicated
control-channel adapter. This is necessary to avoid any condition that leads to an
SEA flip-flop or both SEAs bridging.
4.1.5 Examples
The new syntax for the mkvdev command allows you to use the ha_mode
parameter without specifying any control-channel adapter.
Figure (simplified SEA failover): the two SEAs each bridge a trunk adapter with PVID 1 (VLAN 1) to the external network through an Ethernet switch, and they exchange control traffic over the VLAN 4095 control channel through the hypervisor.
2. We must identify the new virtual adapter from VIOS1 with the lsdev
command. We create the SEA adapter with the mkvdev command using the
physical Ethernet adapter ent14. The SEA adapter ent9 is created:
$ lsdev -dev ent* -vpd|grep C144|grep ent
ent8 U9119.FHB.5102806-V1-C144-T1 Virtual I/O
Ethernet Adapter (l-lan)
$ mkvdev -sea ent14 -vadapter ent8 -default ent8 -defaultid 144
-attr ha_mode=auto
ent9 Available
en9
et9
3. We verify that ent9 is now the primary adapter with the errlog command:
$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
E48A73A4 1104235213 I H ent9 BECOME PRIMARY
5. We must identify the new virtual adapter from VIOS2 with the lsdev
command. We create the SEA adapter with the mkvdev command using the
physical Ethernet adapter ent8. The SEA adapter ent14 is created:
$ lsdev -dev ent* -vpd|grep C144|grep ent
ent13 U9119.FHB.5102806-V2-C144-T1 Virtual I/O
Ethernet Adapter (l-lan)
$ mkvdev -sea ent8 -vadapter ent13 -default ent13 -defaultid 144
-attr ha_mode=auto
ent14 Available
en14
et14
6. We check that ent14 is now the backup adapter with the errlog command:
$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
1FE2DD91 1104165413 I H ent14 BECOME BACKUP
7. We verify that VIOS2 can fail over by putting the VIOS1’s SEA adapter in
standby mode with the chdev command and check the errlog entry:
chdev -dev ent9 -attr ha_mode=standby
$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
1FE2DD91 1104235513 I H ent9 BECOME BACKUP
In the latest release of VIOS Version 2.2.3, the SSP functionality has been
further enhanced in the following areas:
Pool resiliency is enhanced by mirroring the storage pool (two failover groups)
Pool shrink is enhanced by allowing the dynamic contraction of the storage
pool by removing a physical volume
Dynamic disk growth within the storage pool
Scaling improvements with more client VMs supported and larger physical
volumes in the pool
New lu and pv commands
New failgrp command
Cluster-wide operations performed concurrently
An SSP usually spans multiple VIOSs. The VIOSs constitute a cluster that is
based on Cluster Aware AIX (CAA) technology in the background. A cluster
manages a single SSP. After the physical volumes are allocated to the SSP
environment, the physical volume management tasks, such as capacity
management, are performed by the cluster. Physical storage that becomes part
of the SSP in these VIOSs is no longer managed locally.
Figure 4-5 shows how data is accessed from client VM through all layers to the
physical storage.
Figure 4-5 Data access path: logical units (LU2, LU3) are divided into individual chunks, which can be mirrored; the chunks are placed on physical LUNs that are PVs in the VIOS, backed by storage arrays that can be redundant
The virtual SCSI disk devices exported from the SSP support SCSI persistent
reservations. These SCSI persistent reservations persist across (hard) resets.
The persistent reservations supported by a virtual SCSI disk from the SSP
support all the required features for the SCSI-3 Persistent Reserves standard.
Inside the storage pool, there might be two sets of shared LUNs (physical
volumes (PVs)). These two named sets of LUNs are referred to as failure groups
or mirrors. The preferred practice is to define those two failure groups on
different physical storage arrays for best availability.
The whole pool is either a single copy pool (one failure group) or double copy
(two failure groups). If two failure groups are defined, the whole pool is mirrored,
not just individual logical units (LUs) of PVs. Data space that belongs to an LU is
divided into 64 MB chunks each and they are placed into individual physical
volumes (LUNs) in the pool. The exact data placement is decided in the
background; therefore, it is not exact physical one-to-one mirroring (like RAID1,
for example).
By default, a single copy pool is created by the cluster -create command with
the first failure group named Default. It is possible to rename the first failure
group to an arbitrary name and add a second failure group.
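A minimal sketch of adding a second failure group to turn the pool into a mirrored (double copy) pool, assuming the failure group and disk names used later in this chapter (verify the failgrp -create syntax on your VIOS level):
$ failgrp -create -fg MIRRB: sspmirrbhdisk0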
Figure 4-6 shows the placement and flow of data when you use SSP mirroring.
The LU copies in both failure groups are synchronized automatically.
The storage in an SSP is managed by the cluster and a distributed data object
repository with a global namespace. The distributed data object repository uses
a cluster file system that has been enhanced specifically for the purpose of
storage virtualization using the VIOS. The distributed object repository is the
foundation for advanced storage virtualization features, such as shared access,
thin provisioning, and mirroring.
The VIOS clustering model is based on Cluster Aware AIX (CAA) and Reliable
Scalable Cluster Technology (RSCT). CAA is a toolkit for creating clusters. A
reliable network connection is needed between all the VIOSs that are in the
cluster. On the VIOS, the poold daemon handles group services. The vio_daemon
is responsible for monitoring the health of the cluster nodes and the pool, as well
as the pool capacity.
CAA provides a set of tools and APIs to enable clustering on the AIX operating
system (which is the base of the VIOS appliance). CAA does not provide the
application monitoring and resource failover capabilities that IBM PowerHA®
System Mirror provides. Other software products can use the APIs and
command-line interfaces (CLIs) that CAA provides to cluster their applications
and services.
Each cluster based on CAA requires at least one physical volume for the
metadata repository. All cluster nodes in a cluster must see all the shared disks -
both repository disk and storage disks. Therefore, the disks need to be zoned
and correctly masked on the storage array to all the cluster nodes that are part of
the SSP. All nodes can read and write to the SSP. The cluster uses a distributed
lock manager to manage access to the storage.
Nodes that belong to a CAA cluster use the common AIX HA File System
(AHAFS) for event notification. AHAFS is a pseudo file system used for
synchronized information exchange; it is implemented in the AIX kernel
extension.
SSP version   VIOS level   Maximum VIOS nodes in cluster
2             2.2.1.3      4
3             2.2.2.0      16
4             2.2.3.0      16
Table 4-2 on page 55 lists the differences in various SSP parameters compared
with previous versions of SSP.
Maximum capacity of a virtual disk (LU) in the pool: 4 TB (unchanged from the previous version).
Table 4-3 on page 56 lists the requirements for installation and the use of the
latest version of SSP functionality.
If all previous setup steps are completed and all the planning requirements that
are described in 4.2.2, “Planning for SSPs” on page 54 are met, it is possible to
create a cluster using the cluster command. The initial cluster setup can take
time. There is not much feedback on the screen while the cluster is being
created.
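A minimal sketch of such a cluster creation, assuming the logical disk and host names used in this chapter (your repository disk, pool disks, and host name will differ):
$ cluster -create -clustername SSP -repopvs ssprepohdisk0 -spname SSPpool -sppvs sspmirrahdisk0 -hostname vioa1.pwrvc.ibm.com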
Note: In the previous example and all the following examples in this chapter,
we use custom logical names of physical disks. It is for the convenience of the
administrator to have both consistent and meaningful logical device names
across the entire cluster. The renaming is done by running the rendev
command (as root). This step is optional and not necessary in cluster
configuration:
vioa1.pwrvc.ibm.com:/# rendev -l hdisk6 -n ssprepohdisk0
vioa1.pwrvc.ibm.com:/# rendev -l hdisk7 -n sspmirrahdisk0
We can check whether the cluster is defined successfully by using the cluster
-status and lscluster -d commands as shown in Example 4-2.
If everything works well (all defined disks are visible, the cluster is in an OK state,
and all disks are UP), we can continue adding nodes (Example 4-3).
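A minimal sketch of adding another VIOS node to the cluster, assuming the host names used in this chapter:
$ cluster -addnode -clustername SSP -hostname viob1.pwrvc.ibm.com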
New lu command
The new lu command is introduced to simplify the management of the logical
units within an SSP. By using the lu command, various operations, such as
create, map, unmap, remove, and list, can be performed on logical units in an
SSP.
The following list shows various flags of the lu command and examples of its
usage:
-create creates a new logical unit. By default, a thin-provisioned logical unit is
created. Use the -thick option to create a thick-provisioned logical unit. Use
the -map flag to map an existing logical unit to a virtual SCSI adapter. An
example of the usage follows:
$ lu -create -clustername SSP -sp SSPpool -lu vmaix10_hd0 -size 20G
Lu Name:vmaix10_hd0
Lu Udid:461b48367543c261817e3c2cfc326d12
-list displays information about the logical units in the SSP. Use the
-verbose option to display the detailed information about logical units. An
example of the usage follows:
$ lu -list
POOL_NAME: SSPpool
TIER_NAME: SYSTEM
LU_NAME SIZE(MB) UNUSED(MB) UDID
vmaix10_hd0 20480 20481
461b48367543c261817e3c2cfc326d12
-map maps an existing LU to the virtual target adapter. An example of the
usage follows:
$ lu -map -clustername SSP -sp SSPpool -lu vmaix10_hd0 -vadapter
vhost8 -vtd vtd_vmaix10_hd0
Assigning logical unit 'vmaix10_hd0' as a backing device.
VTD:vtd_vmaix10_hd0
-unmap unmaps an existing LU but does not delete it. An example of the
usage follows:
$ lu -unmap -clustername SSP -sp SSPpool -lu vmaix10_hd0
vtd_vmaix10_hd0 deleted
Important warning: Using the -remove flag with the -all option will immediately
delete all LUs in the pool, even if they are mapped to a VM.
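A related flag, -remove, deletes a single logical unit when it is no longer needed; a minimal sketch, assuming the names used in the earlier examples:
$ lu -remove -clustername SSP -sp SSPpool -lu vmaix10_hd0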
New pv command
The new pv command is introduced to manage the physical volumes (shared
SAN LUNs) within an SSP. By using the pv command, various operations, such
as add, add to a failure group, replace, remove, and list, can be performed on
physical volumes in an SSP:
-list lists physical volumes in an SSP and their Universal Disk Identification
(UDIDs), for example:
$ pv -list
POOL_NAME: SSPpool
TIER_NAME: SYSTEM
FG_NAME: Default
PV_NAME SIZE(MB) STATE UDID
sspmirrahdisk0 51200 ONLINE
33213600507680191026C4000000000000~
-list -capable lists the physical volumes that can be added to an SSP. The
physical volumes that are accessible on all VIOSs across the entire cluster
that are not part of the SSP will be listed. Also, UDIDs of those physical
volumes will be listed:
$ pv -list -capable
PV_NAME SIZE(MB) UDID
sspmirrahdisk1 51200
33213600507680191026C400000000000002504214503IBMfcp
sspmirrbhdisk0 51200
33213600507680191026C400000000000002604214503IBMfcp
sspmirrbhdisk1 51200
33213600507680191026C400000000000002704214503IBMfcp
-add adds physical volumes to one or more failure groups in an SSP. When a
disk is added to a storage pool, chunks that belong to already existing LUs in
the pool are automatically redistributed in the background:
pv -add -fg MIRRA: sspmirrahdisk1 MIRRB: sspmirrbhdisk1
Given physical volume(s) have been added successfully.
POOL_NAME:SSPpool
TIER_NAME:SYSTEM
FG_NAME:MIRRB
FG_SIZE(MB):51136
FG_STATE:ONLINE
New failgrp command
The new failgrp command is introduced to manage the failure groups within an SSP:
-modify, used together with the -attr flag, modifies the specified attribute. The
following example shows how to rename the default failure group to a new
name:
$ failgrp -modify -fg Default -attr fg_name=MIRRA
Given attribute(s) modified successfully.
lscluster -m
This command lists the cluster node configuration information together with a
listing of contact IP addresses for individual nodes in the cluster. See
Example 4-7 on page 64.
-------------------------------------------------------------------
-------------------------------------------------------------------
Interface State Protocol Status SRC_IP->DST_IP
-------------------------------------------------------------------
tcpsock->02 UP IPv4 none 172.16.21.110->172.16.21.111
-------------------------------------------------------------------
-------------------------------------------------------------------
Interface State Protocol Status SRC_IP->DST_IP
-------------------------------------------------------------------
tcpsock->03 UP IPv4 none 172.16.21.110->172.16.21.112
-------------------------------------------------------------------
-------------------------------------------------------------------
Interface State Protocol Status SRC_IP->DST_IP
-------------------------------------------------------------------
tcpsock->04 UP IPv4 none 172.16.21.110->172.16.21.113
lscluster -d
This command shows a list of the disks that are currently configured in the cluster and their
status. See Example 4-8 on page 66.
Node vioa1.pwrvc.ibm.com
Node UUID = a806fb6c-3c0e-11e3-9cb8-e41f13fdcf7c
Number of disks discovered = 2
sspmirrahdisk1:
State : UP
uDid :
33213600507680191026C400000000000002504214503IBMfcp
uUid : b17cf1df-5ba1-38b6-9fbf-f7b1618a9010
Site uUid : a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
Type : CLUSDISK
ssprepohdisk0:
State : UP
uDid :
33213600507680191026C400000000000002304214503IBMfcp
uUid : 7fbcc0ec-e0ec-9127-9d51-96384a17c9d7
Site uUid : a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
Type : REPDISK
Node viob2.pwrvc.ibm.com
Node UUID = adf248a8-3c11-11e3-8b0a-e41f13fdcf7c
Number of disks discovered = 2
sspmirrahdisk1:
State : UP
uDid :
33213600507680191026C400000000000002504214503IBMfcp
uUid : b17cf1df-5ba1-38b6-9fbf-f7b1618a9010
Site uUid : a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
Type : CLUSDISK
ssprepohdisk0:
State : UP
uDid :
33213600507680191026C400000000000002304214503IBMfcp
uUid : 7fbcc0ec-e0ec-9127-9d51-96384a17c9d7
Site uUid : a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
Type : REPDISK
Node viob1.pwrvc.ibm.com
Node UUID = 2eddfec2-3c11-11e3-8be2-e41f13fdcf7c
Number of disks discovered = 2
sspmirrahdisk1:
State : UP
uDid :
33213600507680191026C400000000000002504214503IBMfcp
uUid : b17cf1df-5ba1-38b6-9fbf-f7b1618a9010
Site uUid : a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
Type : CLUSDISK
ssprepohdisk0:
State : UP
uDid :
33213600507680191026C400000000000002304214503IBMfcp
uUid : 7fbcc0ec-e0ec-9127-9d51-96384a17c9d7
Site uUid : a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
Type : REPDISK
lscluster -s
This command lists the cluster network statistics on the local node and errors
in the network (if they occur). See Example 4-9 on page 68.
chrepos
This command replaces a disk, which is used as the repository disk by the
SSP cluster, with another disk. Example 4-15 shows how to use this
command to recover from a lost repository disk.
snap caa
Use this command to collect all information about the underlying CAA cluster
component when sending information to IBM support.
SHARED DISKS
Name Uuid
Udid
sspmirrahdisk1
b17cf1df-5ba1-38b6-9fbf-f7b1618a9010
33213600507680191026C400000000000002504214503IBMfcp
NODES
Name Uuid
N_gw Site_uuid
vioa1.pwrvc.ibm.com
a806fb6c-3c0e-11e3-9cb8-e41f13fdcf7c 1
a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c
gw_flag : 0
SITES
Name Shid Uuid
Prio
LOCAL 1
a8028ac8-3c0e-11e3-9cb8-e41f13fdcf7c 1
REPOS DISKS
Name Uuid
Udid
ssprepohdisk0
7fbcc0ec-e0ec-9127-9d51-96384a17c9d7
33213600507680191026C400000000000002304214503IBMfcp
MCAST ADDRS
IPv4 IPv6 Uuid
228.16.21.110 ff05::e410:156e
aa5566ec-3c0e-11e3-9cb8-e41f13fdcf7c
The VIOS Performance Advisor tool provides advisory reports based on key
performance metrics for various partition resources collected from the VIOS
environment.
The VIOS Performance Advisor tool provides advisory reports that are based on
key performance metrics of various partition resources collected from the VIOS
environment. Use this tool to obtain health reports with proposals for making
configuration changes to the VIOS environment and to identify areas for further
investigation.
VIOS Version 2.2, Fix Pack (FP) 24, Service Package (SP) 1 includes the
following enhancements for the VIOS Performance Advisor tool. However, the
development of new functions for virtualization is an ongoing process. Therefore,
it is best to visit the following website, where you can find more information about
the new and existing features:
http://bit.ly/1nctYzk
The primary focus of the VIOS Performance Advisor is to cover the following
VIOS technologies:
SEA Shared Ethernet Adapter
NPIV N_Port ID Virtualization
SSP Shared Storage Pool
The VIOS Performance Advisor has been enhanced to provide support for NPIV
and Fibre Channel, Virtual Networking, Shared Ethernet Adapter, and Shared
Storage Pool configurations.
VIOS Performance Advisor can be downloaded at no charge from the VIOS
Performance Advisor tool website for use with VIOS Version 2.1.0.10 or later. By
using the VIOS CLI, run the vios_advisor command.
For example, to monitor the system for 10 minutes and generate a report, enter
the following command:
vioa1:/home/padmin [padmin]$ part -i 10
part: Reports are successfully generated in vioa1_131031_11_34_12.tar
Note: If you use the part command, you must log in with the padmin role, not
as the root user.
Reports for the on-demand monitoring mode are successfully generated in the
vioa1_131031_11_34_12.tar file.
The output generated by the part command is saved in a .tar file, which is
created in the current working directory. The naming convention for files in the
on-demand monitoring mode is hostname_yymmdd_hhmmss.tar. In the
postprocessing mode, the file name is that of the input file with the file name
extension changed from a .nmon file to a .tar file.
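A minimal sketch of the postprocessing mode, assuming an existing nmon recording such as the file shown in the next example (the -f flag processes a previously collected file instead of starting a new collection):
$ part -f vioa1_131031_1134.nmon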
The following example shows the tar file extracted from an output of the part
command:
# ls
images vioa1_131031_1134.nmon vios_advisorv2.xsl
popup.js vios_advisor.xsl
style.css vios_advisor_report.xml
The data is gathered in the vioa1_131031_1134.nmon file. If you want an .xls file
for Excel, you can run the nmon_analyser.
The vios_advisor_report.xml report is part of the output .tar file with the other
supporting files. To view the generated report, complete the steps in the following example.
vioa1:/home/padmin [padmin]$ ls -al vioa1_131031_11_34_12.tar
-rw-r--r-- 1 padmin staff 337920 Oct 31 11:44 vioa1_131031_11_34_12.tar
vioa1:/home/padmin [padmin]$ oem_setup_env
# cd /home/padmin
# tar -xf vioa1_131031_11_34_12.tar
# ls
images vioa1_131031_1134.nmon vios_advisorv2.xsl
popup.js vios_advisor.xsl
style.css vios_advisor_report.xml
Note: The Suggested value column (highlighted in Figure 5-1) shows changes
that are advised to decrease performance risks and impacts.
Note: In the VIOS - Processor table (Figure 5-4 on page 85) of the CPU
(central processing unit) advisory report, the status of the Variable Capacity
Weight is marked with a warning icon (exclamation point in a triangle). The
preferred practice is for the VIOS to have an increased priority of 129 - 255
when in uncapped shared processor mode. For the definitions for the warning
icons, see Figure 5-2 on page 84.
In Figure 5-5 on page 86, CPU capacity status indicates investigation required.
For the VIOSs in our lab environment, the preferred practice capacity settings are
not used, due to low performance requirements:
0.1 Processing units for desired entitled capacity
1 Desired virtual processor
255 Weight for uncapped processing mode
To enable the feature, access the partition properties for a specific VIOS on the
Hardware Management Console (HMC). On the General tab (Figure 5-8), select
Allow performance information collection.
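A hedged command-line alternative from the HMC, assuming the chsyscfg partition attribute allow_perf_collection and hypothetical system and partition names:
chsyscfg -r lpar -m managed_system -i "name=vios1,allow_perf_collection=1"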
Figure 5-10 VIOS I/O Activity (disk and network) advisory report
Also, if you have NPIV clients, you can expand the NPIV items. The following
traffic-related statistics are shown:
Average I/Os per second
I/Os blocked
Traffic by individual worldwide port name (WWPN)
If you expand the SEA column, the following detail is shown (Figure 5-15 on
page 92):
SEA utilization based on the physical interfaces
Arbitrate (baudrate) and traffic information
Per-client SEA traffic information
The number of SEA adapters configured
The utilization metrics, such as average send and receive
The SEA peak send and receive
The following commands enable the LargeReceive function for physical device
ent1:
chdev -l en1 -a state=down
chdev -l ent1 -a large_receive=yes (if hardware supports)
chdev -l en1 -a state=up
You can migrate all the migration-capable AIX, Linux, and IBM i partitions from
the source server to the destination server by running the following command
from the HMC command line:
migrlpar -o m -m source_server -t target_server --all
The command finishes silently, and the virtual machines (VMs) are then on the target
server. If the target server has partitions that were configured before the
evacuation started, the moved LPARs will coexist with those partitions. To roll
back the LPARs to the source server, move them individually by using the HMC.
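A minimal sketch of moving one LPAR back, assuming a hypothetical partition name and reversing the source and target of the original evacuation:
migrlpar -o m -m target_server -t source_server -p lpar_name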
To stop the migration of all the migration-capable AIX, Linux, and IBM i partitions,
run the following command from the HMC command line:
migrlpar -o s -m source_server --all
At the time of writing this book, the latest available levels are HMC 7.7.8.0 and
VIOS 2.2.3.0. Firmware levels relate to each server and can vary depending on the
model.
Link: For more information about software and firmware levels or to download
the latest codes, go to this website:
http://www.ibm.com/support/fixcentral
The following tables describe the VIOS Processing Unit resources that are
suggested to achieve maximum throughput. These resources are in addition to
the resources already assigned to the VIOS to handle the existing virtual I/O
resource requirements, using a 10 Gb network adapter for partition mobility.
Table 6-1 lists the suggested resources in addition to the configured resources
for the VIOSs for a single migration to achieve maximum throughput. These extra
resources are only needed on the Mover Service Partitions (MSPs).
Table 6-2 VIOS requirement for up to 16 concurrent migrations (8 for each MSP)
Up to 16 concurrent migrations (eight for each Mover Service Partition):
Virtual processors: POWER7: 4, POWER7+: 3
In addition to these suggested settings for memory and processing units, there
are other settings for dedicated network adapters. These other settings do not
apply to virtual network adapters in the VMs, but they can be applied to the
VIOSs. Follow these steps:
1. Enable the Large Send Offload and Large Receive Offload options on all
network devices that are involved in partition mobility. This setting is enabled,
by default, on all network adapters that support this feature. If you need to
enable this setting, manually run these commands on the VIOS
(Example 6-1).
Example 6-1 Enabling Large Send Offload and Large Receive Offload
lsdev -dev ent0 -attr | grep large
large_receive no Enable receive TCP segment aggregation True
large_send no Enable transmit TCP segmentation offload True
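Example 6-1 checks the current attribute values; a hedged sketch of the corresponding enable commands on the VIOS, assuming the adapter supports these attributes (the -perm flag might be needed if the device is busy):
chdev -dev ent0 -attr large_send=yes large_receive=yes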
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “Help from IBM” on
page 108. Note that some of the documents referenced here might be available
in softcopy only.
IBM Power Systems HMC Implementation and Usage Guide, SG24-7491
Integrated Virtualization Manager for IBM Power Systems Servers,
REDP-4061
IBM PowerVM Best Practices, SG24-8062
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
IBM Systems Director VMControl Implementation Guide on IBM Power
Systems, SG24-7829
Power Systems Memory Deduplication, REDP-4827
PowerVM Migration from Physical to Virtual Storage, SG24-7825
IBM PowerVM Virtualization Active Memory Sharing, REDP-4470
A Practical Guide for Resource Monitoring and Control (RMC), SG24-6615
Other publications
These publications are also relevant as further information sources:
The following types of documentation are located on the Internet at this
website:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp
– User guides
– System management guides
– Application programmer guides
Online resources
These websites are also relevant as further information sources:
For further details of supported machine model types, visit this website:
https://www-304.ibm.com/webapp/set2/sas/f/power5cm/power7.html
The latest configuration file for a Power enterprise pool is available on the IBM
Capacity on Demand website:
http://www-03.ibm.com/systems/power/hardware/cod/offerings.html
IBM Business Partners
http://www.ibm.com/services/econfig/announce/index.html
IBM internal website
http://w3-03.ibm.com/transform/worksmart/docs/e-config.html
Further configuration examples and details are provided in the Power
Systems Information Center under the POWER7 Systems section:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5
Upgrade from previous version (updateios command) and update packages
can be downloaded from IBM Fix Central:
http://www-933.ibm.com/support/fixcentral
For storage hardware that is supported in VIOS, see this website:
http://bit.ly/1oIPmLS
VIOS information:
http://bit.ly/1nctYzk
Back cover
PowerVP and mobile CoD activations explained
Shared Storage Pool enhancements explained
Power Integrated Facility for Linux described