Jon Tate
Pall Beck
Angelo Bernasconi
Werner Eggli
ibm.com/redbooks
International Technical Support Organization
March 2010
SG24-6423-07
Note: Before using this information and the product it supports, read the information in “Notices” on
page xvii.
This edition applies to Version 5 Release 1 Modification 0 of the IBM System Storage SAN Volume Controller
and is based on pre-GA versions of code.
Note: This book is based on a pre-GA version of a product and might not apply when the product becomes
generally available. We recommend that you consult the product documentation or follow-on versions of
this book for more current information.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Contents v
5.10.2 System requirements for the IBM System Storage hardware provider . . . . . . . 216
5.10.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . . . . . . . . . 216
5.10.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.10.5 Creating the free and reserved pools of volumes . . . . . . . . . . . . . . . . . . . . . . . 221
5.10.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.11 Specific Linux (on Intel) information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.3 Disabling automatic Linux system updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.11.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.11.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.11.6 Creating and preparing the SDD volumes for use . . . . . . . . . . . . . . . . . . . . . . 231
5.11.7 Using the operating system MPIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5.11.8 Creating and preparing MPIO volumes for use. . . . . . . . . . . . . . . . . . . . . . . . . 233
5.12 VMware configuration information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.12.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 238
5.12.3 Guest operating systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12.4 HBAs for hosts running VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12.5 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.12.6 VMware storage and zoning recommendations . . . . . . . . . . . . . . . . . . . . . . . . 240
5.12.7 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . . . . . . . . . . . . . 241
5.12.8 Multipathing in ESX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12.9 Attaching VMware to VDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.12.10 VDisk naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.12.11 Setting the Microsoft guest operating system timeout . . . . . . . . . . . . . . . . . . 246
5.12.12 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.12.13 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.13 Sun Solaris support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.13.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 249
5.13.2 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.14 Hewlett-Packard UNIX configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 250
5.14.2 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.3 Co-existence of SDD and PV Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.14.4 Using an SVC VDisk as a cluster lock disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.14.5 Support for HP-UX with greater than eight LUNs . . . . . . . . . . . . . . . . . . . . . . . 251
5.15 Using SDDDSM, SDDPCM, and SDD Web interface . . . . . . . . . . . . . . . . . . . . . . . . 251
5.16 Calculating the queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.17 Further sources of information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.17.1 Publications containing SVC storage subsystem attachment guidelines . . . . . 253
6.7.1 Intracluster Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7.2 Intercluster Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.8 Remote copy techniques. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.8.1 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.8.2 SVC Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
6.9 Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.9.1 Global Mirror relationship between primary and secondary VDisks . . . . . . . . . . 313
6.9.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.9.3 Dependent writes that span multiple VDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.9.4 Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
6.10 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.1 Intercluster communication and zoning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.2 SVC cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.3 Maintenance of the intercluster link. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.10.4 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.10.5 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.10.6 Space-efficient background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.11 Global Mirror process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.11.1 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.11.2 State overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.11.3 Detailed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6.11.4 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions. 329
6.11.6 Global Mirror configuration limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6.12 Global Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6.12.1 Listing the available SVC cluster partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.12.2 Creating an SVC cluster partnership. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
6.12.3 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.12.4 Creating a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.12.5 Changing a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.12.6 Changing a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.12.7 Starting a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.12.8 Stopping a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.12.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.12.10 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.12.11 Deleting a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.12.12 Deleting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.12.13 Reversing a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.12.14 Reversing a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 337
Chapter 7. SAN Volume Controller operations using the command-line interface. . 339
7.1 Normal operations using CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.1.1 Command syntax and online help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.2 Working with managed disks and disk controller systems . . . . . . . . . . . . . . . . . . . . . 340
7.2.1 Viewing disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.2.2 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
7.2.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7.2.4 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7.2.5 Viewing MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.2.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
7.2.7 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
7.2.8 Adding MDisks to a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.9 Showing the Managed Disk Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
viii Implementing the IBM System Storage SAN Volume Controller V5.1
7.2.10 Showing MDisks in a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.11 Working with Managed Disk Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.2.12 Creating a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7.2.13 Viewing Managed Disk Group information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.2.14 Renaming a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.2.15 Deleting a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.2.16 Removing MDisks from a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.3.1 Creating a Fibre Channel-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.3.2 Creating an iSCSI-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.3.3 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7.3.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.3.5 Adding ports to a defined host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.3.6 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.4 Working with VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.4.1 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.4.2 VDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.4.3 Creating a Space-Efficient VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.4.4 Creating a VDisk in image mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
7.4.5 Adding a mirrored VDisk copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
7.4.6 Splitting a VDisk copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
7.4.7 Modifying a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
7.4.8 I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
7.4.9 Deleting a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
7.4.10 Expanding a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
7.4.11 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
7.4.12 Showing VDisk-to-host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
7.4.13 Deleting a VDisk-to-host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.4.14 Migrating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.4.15 Migrating a VDisk to an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
7.4.16 Shrinking a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.4.17 Showing a VDisk on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.4.18 Showing VDisks using a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.4.19 Showing which MDisks are used by a specific VDisk . . . . . . . . . . . . . . . . . . . . 374
7.4.20 Showing from which Managed Disk Group a VDisk has its extents . . . . . . . . . 374
7.4.21 Showing the host to which the VDisk is mapped . . . . . . . . . . . . . . . . . . . . . . . 375
7.4.22 Showing the VDisk to which the host is mapped . . . . . . . . . . . . . . . . . . . . . . . 376
7.4.23 Tracing a VDisk from a host back to its physical disk . . . . . . . . . . . . . . . . . . . . 376
7.5 Scripting under the CLI for SVC task automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.6 SVC advanced operations using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.6.1 Command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.6.2 Organizing the window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
7.7 Managing the cluster using the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
7.7.1 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
7.7.2 Changing cluster settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.7.3 Cluster authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.7.4 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.7.5 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.7.6 Supported IP address formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.7.7 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
7.7.8 Starting statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
7.7.9 Stopping statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.7.10 Status of copy operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.7.11 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.8 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
7.8.1 Viewing node details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.8.2 Adding a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.8.3 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.8.4 Deleting a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.8.5 Shutting down a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.9 I/O Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
7.9.1 Viewing I/O Group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
7.9.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
7.9.3 Adding and removing hostiogrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
7.9.4 Listing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
7.10 Managing authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.10.1 Managing users using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.10.2 Managing user roles and groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
7.10.3 Changing a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.10.4 Audit log command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.11 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
7.11.1 FlashCopy operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
7.11.2 Setting up FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7.11.3 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7.11.4 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
7.11.5 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . 401
7.11.6 Preparing (pre-triggering) the FlashCopy consistency group . . . . . . . . . . . . . . 402
7.11.7 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
7.11.8 Starting (triggering) FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . 404
7.11.9 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
7.11.10 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
7.11.11 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 406
7.11.12 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
7.11.13 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . 407
7.11.14 Migrating a VDisk to a Space-Efficient VDisk . . . . . . . . . . . . . . . . . . . . . . . . . 407
7.11.15 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
7.11.16 Split-stopping of FlashCopy maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
7.12 Metro Mirror operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
7.12.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . . . . . . 415
7.12.3 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
7.12.4 Creating the Metro Mirror relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 418
7.12.6 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
7.12.7 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
7.12.8 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
7.12.9 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
7.12.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 422
7.12.11 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 423
7.12.12 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 424
7.12.13 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . . 424
7.12.14 Changing copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
7.12.15 Switching copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . . . . 425
7.12.16 Switching copy direction for a Metro Mirror consistency group. . . . . . . . . . . . 426
7.12.17 Creating an SVC partnership among many clusters . . . . . . . . . . . . . . . . . . . . 427
7.12.18 Star configuration partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8.3.2 Creating MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
8.3.3 Renaming a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
8.3.4 Deleting a managed disk group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
8.3.5 Adding MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
8.3.6 Removing MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
8.3.7 Displaying MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
8.3.8 Showing MDisks in this group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
8.3.9 Showing the VDisks that are associated with an MDisk group . . . . . . . . . . . . . . 492
8.4 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
8.4.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
8.4.2 Creating a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
8.4.3 Fibre Channel-attached hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
8.4.4 iSCSI-attached hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
8.4.5 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
8.4.6 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
8.4.7 Adding ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
8.4.8 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
8.5 Working with VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
8.5.1 Using the “Viewing VDisks using MDisk” window . . . . . . . . . . . . . . . . . . . . . . . 504
8.5.2 VDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
8.5.3 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
8.5.4 Creating a Space-Efficient VDisk with autoexpand. . . . . . . . . . . . . . . . . . . . . . . 509
8.5.5 Deleting a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
8.5.6 Deleting a VDisk-to-host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
8.5.7 Expanding a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
8.5.8 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
8.5.9 Modifying a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
8.5.10 Migrating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
8.5.11 Migrating a VDisk to an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
8.5.12 Creating a VDisk Mirror from an existing VDisk . . . . . . . . . . . . . . . . . . . . . . . . 521
8.5.13 Creating a mirrored VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
8.5.14 Creating a VDisk in image mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
8.5.15 Creating an image mode mirrored VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
8.5.16 Migrating to a Space-Efficient VDisk using VDisk Mirroring . . . . . . . . . . . . . . . 532
8.5.17 Deleting a VDisk copy from a VDisk mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
8.5.18 Splitting a VDisk copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
8.5.19 Shrinking a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
8.5.20 Showing the MDisks that are used by a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . 537
8.5.21 Showing the MDG to which a VDisk belongs . . . . . . . . . . . . . . . . . . . . . . . . . . 538
8.5.22 Showing the host to which the VDisk is mapped . . . . . . . . . . . . . . . . . . . . . . . 538
8.5.23 Showing capacity information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
8.5.24 Showing VDisks mapped to a particular host . . . . . . . . . . . . . . . . . . . . . . . . . . 539
8.5.25 Deleting VDisks from a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
8.6 Working with solid-state drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
8.6.1 Solid-state drive introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
8.7 SVC advanced operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
8.7.1 Organizing the window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
8.8 Managing the cluster using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
8.8.1 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
8.8.2 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
8.8.3 Starting the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
8.8.4 Stopping the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
8.8.5 Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
8.8.6 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
8.8.7 Setting the cluster time and configuring the Network Time Protocol server . . . . 549
8.8.8 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
8.9 Managing authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
8.9.1 Modifying the current user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
8.9.2 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
8.9.3 Modifying a user role. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
8.9.4 Deleting a user role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
8.9.5 User groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
8.9.6 Cluster password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
8.9.7 Remote authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
8.10 Working with nodes using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
8.10.1 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
8.10.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
8.10.3 Adding nodes to the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
8.10.4 Configuring iSCSI ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
8.11 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
8.12 FlashCopy operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
8.13 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
8.13.1 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
8.13.2 Preparing (pre-triggering) the FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
8.13.3 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
8.13.4 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . 574
8.13.5 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
8.13.6 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
8.13.7 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
8.13.8 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
8.13.9 Migrating between a fully allocated VDisk and a Space-Efficient VDisk . . . . . . 580
8.13.10 Reversing and splitting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . 580
8.14 Metro Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
8.14.1 Cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
8.14.2 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . 585
8.14.4 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri . . . . 590
8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 594
8.14.7 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
8.14.8 Starting a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 597
8.14.9 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
8.14.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
8.14.11 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
8.14.12 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 600
8.14.13 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 600
8.14.14 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 602
8.14.15 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . . 603
8.14.16 Changing copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
8.14.17 Switching copy direction for a Metro Mirror consistency group. . . . . . . . . . . . 605
8.14.18 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . 606
8.15 Global Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
8.15.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . . 609
8.15.3 Global Mirror link tolerance and delay simulations . . . . . . . . . . . . . . . . . . . . . . 612
8.15.4 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri . . . . 617
8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . . 620
8.15.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
8.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 624
8.15.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
8.15.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
8.15.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
8.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 627
8.15.13 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 628
8.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . . 630
8.15.15 Restarting a Global Mirror consistency group in the Idling state. . . . . . . . . . . 631
8.15.16 Changing copy direction for Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
8.15.17 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . . 634
8.16 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
8.17 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
8.17.1 Package numbering and version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
8.17.2 Upgrade status utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
8.17.3 Precautions before upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
8.17.4 SVC software upgrade test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
8.17.5 Upgrade procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
8.17.6 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
8.17.7 Setting up error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
8.17.8 Setting syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
8.17.9 Set e-mail features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
8.17.10 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
8.17.11 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
8.17.12 Viewing the license settings log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
8.17.13 Dumping the cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
8.17.14 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
8.17.15 Setting up a quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
8.18 Backing up the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
8.18.1 Backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
8.18.2 Saving the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
8.18.3 Restoring the SVC configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
8.18.4 Deleting the configuration backup files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
8.18.5 Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
8.18.6 Common Information Model object manager log configuration. . . . . . . . . . . . . 673
9.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
9.5 Data migration for Windows using the SVC GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
9.5.1 Windows Server 2008 host system connected directly to the DS4700. . . . . . . . 688
9.5.2 Adding the SVC between the host system and the DS4700. . . . . . . . . . . . . . . . 690
9.5.3 Putting the migrated disks onto an online Windows Server 2008 host . . . . . . . . 698
9.5.4 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . . 700
9.5.5 Migrating the VDisk from managed mode to image mode . . . . . . . . . . . . . . . . . 702
9.5.6 Migrating the VDisk from image mode to image mode . . . . . . . . . . . . . . . . . . . . 705
9.5.7 Free the data from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
9.5.8 Put the free disks online on Windows Server 2008. . . . . . . . . . . . . . . . . . . . . . . 711
9.6 Migrating Linux SAN disks to SVC disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
9.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
9.6.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
9.6.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
9.6.4 Migrate the image mode VDisks to managed MDisks . . . . . . . . . . . . . . . . . . . . 722
9.6.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
9.6.6 Migrate the VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
9.6.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
9.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
9.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
9.7.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
9.7.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
9.7.4 Migrating the image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
9.7.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
9.7.6 Migrating the managed VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . 747
9.7.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
9.8 Migrating AIX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
9.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
9.8.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
9.8.3 Moving the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
9.8.4 Migrating image mode VDisks to VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
9.8.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
9.8.6 Migrating the managed VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
9.8.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
9.9 Using SVC for storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
9.10 Using VDisk Mirroring and Space-Efficient VDisks together . . . . . . . . . . . . . . . . . . . 771
9.10.1 Zero detect feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
9.10.2 VDisk Mirroring With Space-Efficient VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . 773
9.10.3 Metro Mirror and Space-Efficient VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
Performance data collection and TotalStorage Productivity Center for Disk . . . . . . . . 812
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™, AIX®, developerWorks®, DS4000®, DS6000™, DS8000®, Enterprise Storage Server®, FlashCopy®, GPFS™, IBM Systems Director Active Energy Manager™, IBM®, Power Systems™, Redbooks®, Redbooks (logo)®, Solid®, System i®, System p®, System Storage™, System Storage DS®, System x®, System z®, Tivoli®, TotalStorage®, WebSphere®, XIV®, z/OS®
Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation.
Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States
and other countries.
QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered
trademark in the United States.
ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S.
and other countries.
VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware,
Inc. in the United States and/or other jurisdictions.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel Xeon, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-6423-07
for Implementing the IBM System Storage SAN Volume Controller V5.1
as created or updated on March 30, 2010.
New information
Added iSCSI information
Added Solid® State Drive information
Changed information
Removed duplicate information
Consolidated chapters
Removed dated material
This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage™
SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized
volumes visible to hosts and applications to physical volumes on storage devices. Each
server within the storage area network (SAN) has its own set of virtual storage addresses,
which are mapped to physical addresses. If the physical addresses change, the server
continues running using the same virtual addresses that it had before. Therefore, volumes or
storage can be added or moved while the server is still running. The IBM virtualization
technology improves management of information at the “block” level in a network, enabling
applications and servers to share storage devices on a network. This book is intended to
allow you to implement the SVC at a 5.1.0 release level with a minimum of effort.
Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International
Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level 2 and 3 support for IBM storage
products. Jon has 24 years of experience in storage software and management, services,
and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist.
He is also the UK Chairman of the Storage Networking Industry Association.
Pall Beck is a SAN Technical Team Lead in IBM Nordic. He has 12 years of experience
working with storage and joined the IBM ITD DK in 2005. Prior to working for IBM in Denmark,
he worked as an IBM service representative performing hardware installations and repairs for
IBM System i®, System p®, and System z® in Iceland. As a SAN Technical Team Lead for
ITD DK, he led a team of administrators running several of the largest SAN installations in
Europe. His current position involves the creation and implementation of operational
standards and aligning best practices throughout the Nordics. Pall has a diploma as an
Electronic Technician from Odense Tekniske Skole in Denmark and IR in Reykjavik, Iceland.
Angelo Bernasconi is a Certified ITS Senior Storage and SAN Software Specialist in IBM
Italy. He has 24 years of experience in the delivery of maintenance and professional services
for IBM Enterprise clients in z/OS® and open systems. He holds a degree in Electronics and
his areas of expertise include storage hardware, SAN, storage virtualization, de-duplication,
and disaster recovery solutions. He has written extensively about SAN and virtualization
products in three IBM Redbooks publications, and he is the Technical Leader of the Italian
Open System Storage Professional Services Community.
Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 25 years of
experience in Software Development, Project Management, and Consulting concentrating in
the Networking and Telecommunication Segment. Werner joined IBM in 2001 and works in
pre-sales as a Storage Systems Engineer for Open Systems. His expertise is the design and
implementation of IBM Storage Solutions. He holds a Dipl. Informatiker (FH) degree from
Fachhochschule Konstanz, Germany.
We extend our thanks to the following people for their contributions to this project.
We also want to thank the following people for their contributions to previous editions and to
those people who contributed to this edition:
John Agombar
Alex Ainscow
Trevor Boardman
Chris Canto
Peter Eccles
Carlos Fuente
Alex Howell
Colin Jewell
Paul Mason
Paul Merrison
Jon Parkes
Steve Randle
Lucy Raw
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
IBM Hursley
Bill Wiegand
IBM Advanced Technical Support
Dorothy Faurot
IBM Raleigh
Sharon Wang
IBM Chicago
Chris Saul
IBM San Jose
Sangam Racherla
IBM ITSO
A special mention must go to Brocade for their unparalleled support of this residency, in terms
of equipment and assistance in many areas throughout. Namely:
Jim Baldyga
Yong Choi
Silviano Gaona
Brian Steffler
Steven Tong
Brocade Communications Systems
Comments welcome
Your comments are important to us.
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review IBM Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/pages/IBM-Redbooks/178023492563?ref=ts
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
So, what is storage virtualization? The IBM explanation of storage virtualization is clear:
Storage virtualization is a technology that makes one set of resources look and feel like
another set of resources, preferably with more desirable characteristics.
It is a logical representation of resources not constrained by physical limitations:
– Hides part of the complexity
– Adds or integrates new function with existing services
– Can be nested or applied to multiple layers of a system
The focus of this book is block-level virtualization, that is, the block aggregation layer. File
system virtualization is out of the intended scope of this book.
If you are interested in file system virtualization, refer to IBM General Parallel File System
(GPFS™) or IBM scale out file services, which is based on GPFS. For more information and
an overview of the IBM General Parallel File System (GPFS) Version 3, Release 2 for AIX®,
Linux®, and Windows®, go to this Web site:
http://www-03.ibm.com/systems/clusters/software/whitepapers/gpfs_intro.html
For the IBM scale out file services, go to this Web site:
http://www-935.ibm.com/services/us/its/html/sofs-landing.html
The Storage Networking Industry Association’s (SNIA) block aggregation model (Figure 1-1
on page 3) provides a good overview of the storage domain and its layers.
Figure 1-1 on page 3 shows the three layers of a storage domain: the file, the block
aggregation, and the block subsystem layers. The model splits the block aggregation layer
into three sublayers. Block aggregation can be realized within hosts (servers), in the storage
network (storage routers and storage controllers), or in storage devices (intelligent disk
arrays).
The IBM implementation of a block aggregation solution is the IBM System Storage SAN
Volume Controller (SVC). The SVC is implemented as a clustered appliance in the storage
network layer. Chapter 2, “IBM System Storage SAN Volume Controller” on page 7 provides a
more in-depth discussion of why IBM has chosen to implement its IBM System Storage SAN
Volume Controller in the storage network layer.
The key concept of virtualization is to decouple the storage (which is delivered by commodity
two-way Redundant Array of Independent Disks (RAID) controllers attaching physical disk
drives) from the storage functions that are expected from servers in today’s storage area
network (SAN) environment.
Decoupling is abstracting the physical location of data from the logical representation that an
application on a server uses to access data. The virtualization engine presents logical
entities, which are called volumes, to the user and internally manages the process of mapping
the volume to the actual physical location. The realization of this mapping depends on the
specific implementation. Another implementation-specific issue is the granularity of the
mapping, which can range from a small fraction of a physical disk, up to the full capacity of a
single physical disk. A single block of information in this environment is identified by its logical
unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is
known as the logical block address (LBA).
Be aware that the term physical disk that is used in this context describes a piece of storage
that might be carved out of a RAID array in the underlying disk subsystem.
The address space is mapped between the logical entity, which is usually referred to as a
virtual disk (VDisk), and the physical disks, which are identified by their LUNs. We refer to
these LUNs, which are provided by the storage controllers to the virtualization layer, as
managed disks (MDisks) throughout this book.
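The mapping just described can be illustrated with a simplified model. The following Python sketch is purely illustrative and is not the actual SVC implementation; the class, the extent size, and the MDisk names are assumptions made for the example. It shows a virtual disk as a table of extents, each pointing at a managed disk LUN and a starting LBA on that LUN:

```python
# Simplified model of block-level virtualization: a VDisk is a table of
# extents, and each extent points at (MDisk LUN, starting LBA on that MDisk).
EXTENT_BLOCKS = 2048  # illustrative extent size, in 512-byte blocks

class VDisk:
    def __init__(self, extent_map):
        # extent_map[i] = (mdisk_lun, mdisk_start_lba) for virtual extent i
        self.extent_map = extent_map

    def resolve(self, virtual_lba):
        """Translate a virtual LBA into a (LUN, physical LBA) pair."""
        extent = virtual_lba // EXTENT_BLOCKS
        offset = virtual_lba % EXTENT_BLOCKS
        lun, start = self.extent_map[extent]
        return lun, start + offset

# Two virtual extents placed on different managed disks:
vdisk = VDisk([("mdisk0", 0), ("mdisk3", 4096)])
print(vdisk.resolve(10))    # a block in extent 0 resolves to mdisk0
print(vdisk.resolve(2048))  # the first block of extent 1 resolves to mdisk3
```

The server sees only the contiguous virtual address space; which physical disk actually holds each extent is an internal detail of the virtualization layer.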
The server and the application only know about logical entities and access these logical
entities via a consistent interface that is provided by the virtualization layer. Each logical entity
owns a common and well defined set of functionality that is independent of where the physical
representation is located.
The functionality of a VDisk that is presented to a server, such as expanding or reducing the
size of a VDisk, mirroring a VDisk to a secondary site, creating a FlashCopy/Snapshot, thin
provisioning/over-allocating, and so on, is implemented in the virtualization layer and does not
rely in any way on the functionality that is provided by the disk subsystems that deliver the
MDisks. Data that is stored in a virtualized environment is stored in a location-independent
way, which allows a user to move or migrate its data, or parts of it, to another place or storage
pool, that is, the place where the data really belongs.
The logical entity can be resized, moved, replaced, replicated, over-allocated, mirrored,
migrated, and so on without any disruption to the server and the application. After you have
an abstraction layer in the SAN, you can perform almost any task.
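Because the server addresses only the logical entity, the virtualization layer can change the physical placement underneath it without disruption. The following sketch is a hypothetical illustration of that idea, not SVC code; the function, the backing store, and the disk names are invented for the example. It copies one extent's data to a new managed disk and then repoints the mapping entry, while the virtual address the host uses stays the same:

```python
# Illustrative sketch (not SVC code): migrating an extent to a new managed
# disk while the virtual address used by the host stays unchanged.
def migrate_extent(extent_map, extent_index, new_lun, new_start,
                   read_blocks, write_blocks):
    """Copy one extent's data, then repoint the map entry to the new place."""
    data = read_blocks(*extent_map[extent_index])    # read old location
    write_blocks(new_lun, new_start, data)           # copy to new location
    extent_map[extent_index] = (new_lun, new_start)  # switch the mapping

# A fake backing store keyed by (lun, start) stands in for real disks:
store = {("old_mdisk", 0): b"application data"}
extent_map = [("old_mdisk", 0)]

migrate_extent(
    extent_map, 0, "new_mdisk", 8192,
    read_blocks=lambda lun, start: store[(lun, start)],
    write_blocks=lambda lun, start, data: store.__setitem__((lun, start), data),
)
print(extent_map[0])               # the extent now lives on new_mdisk
print(store[("new_mdisk", 8192)])  # the data arrived intact
```

In a real implementation the copy runs in the background while host I/O continues; the essential point is that only the internal map changes, never the address the server uses.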
The ability to deliver these functions in a homogeneous way on a scalable and highly
available platform, over any attached storage and to every attached server, is the key
challenge for every block-level virtualization solution.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs are related to managing the storage system.
How much simpler would storage administration be if multiple systems with separate
interfaces could be managed as a single entity? In a non-virtualized storage environment,
every system is an island. Even a large system that claims to virtualize is an island that you
will need to replace in the future.
With the SVC, you can ideally reduce the number of separate environments that you need to
manage to one. And however many tens or thousands of systems you have, even reducing
that number is a step in the right direction.
The SVC provides a single interface for storage management. Of course, there is an initial
effort for the setup of the disk subsystems; however, all of the day-to-day storage
management can be performed on the SVC. For example, you can use the data migration
functionality of the SVC as disk subsystems are phased out; the SVC moves the data online,
without any impact on your servers.
Also, the virtualization layer offers advanced functions, such as data mirroring or FlashCopy®,
so there is no need to purchase them again for each new disk subsystem.
With the SVC, you do not need to keep and manage free space in each disk subsystem. You
do not need to worry whether there is sufficient free space on the right storage tier, or in a
single system.
Even if there is enough free space in one system, it might not be accessible in a
non-virtualized environment for a specific server or application due to multipath driver issues.
The SVC is able to handle the storage resources that it manages as a single storage pool.
Disk space allocation from this pool is a matter of minutes for every server connected to the
SVC, because you provision the capacity as needed, without disrupting applications.
1.3 Conclusion
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. Making use of storage virtualization as
the foundation for a flexible and reliable storage solution helps a company better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.
The IBM System Storage SAN Volume Controller is a mature, fifth generation virtualization
solution, which uses open standards and is consistent with the Storage Networking Industry
Association (SNIA) storage model. The SVC is an appliance-based in-band block
virtualization process, in which intelligence, including advanced storage functions, is migrated
from individual storage devices to the storage network.
We expect that the use of SVC will improve the utilization of your storage resources, simplify storage management, and improve the availability of your applications.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first publications covering this project were released to the public in 2003 in the IBM Systems Journal, Vol. 42, No. 2, 2003: “The architecture of a SAN storage control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this Web site:
http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/b97a551f7e510eff85256d660078a12e?OpenDocument
The results of the COMPASS project defined the fundamentals for the product architecture.
The announcement of the first release of the IBM System Storage SAN Volume Controller
took place in July 2003.
The following releases brought new, more powerful hardware nodes, which approximately
doubled the I/O performance and throughput of its predecessors, provided new functionality,
and offered additional interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
In 2008, the 15,000th SVC engine was shipped by IBM. More than 5,000 SVC systems
worldwide are in operation.
With the new release of SVC that is introduced in this book, we will get a new generation of
hardware nodes. This hardware, which will approximately double the performance of its
predecessors, also provides solid-state drive (SSD) support. New software features are iSCSI
support (which will be available on all hardware nodes that support the new firmware) and
Three major approaches are in use today for implementing block-level aggregation:
Network-based: Appliance
The device is a SAN appliance that sits in the data path, and all I/O flows through the
device. This kind of implementation is also referred to as symmetric virtualization or
in-band. The device is both target and initiator. It is the target of I/O requests from the host
perspective and the initiator of I/O requests from the storage perspective. The redirection
is performed by issuing new I/O requests to the storage.
Switch-based: Split-path
The device is usually an intelligent SAN switch that intercepts I/O requests on the fabric
and redirects the frames to the correct storage location. The actual I/O requests are
themselves redirected. This kind of implementation is also referred to as asymmetric
virtualization or out-of-band. Data and the control data path are separated, and a specific
(preferably highly available and disaster tolerant) controller outside of the switch holds the
metainformation and the configuration to manage the split data paths.
Controller-based
The device is a storage controller that provides an internal switch for external storage
attachment. In this approach, the storage controller intercepts and redirects I/O requests
to the external storage as it does for internal storage.
While all of these approaches provide in essence the same cornerstones of virtualization,
several have interesting side effects.
All three approaches can provide the required functionality, although the implementation (especially the switch-based split I/O architecture) can make parts of the required functionality more difficult to implement.
This challenge is especially true for FlashCopy services. Taking a point-in-time clone of a
device in a split I/O architecture means that all of the data has to be copied from the source to
the target first.
The drawback is that the target copy cannot be brought online until the entire copy has
completed, that is, minutes or hours later. Think of using this approach for implementing a
sparse flash, which is a flash copy without a background copy where the target disk is only
populated with the blocks or extents that are modified after the point in time when the flash
copy was taken (or an incremental series of cascaded copies).
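The sparse FlashCopy behavior described above can be sketched as a copy-on-write mechanism. The following is an illustrative model only, not SVC code; the class and method names are invented for the example:

```python
# Hypothetical sketch of a "sparse" FlashCopy: the target holds only the
# grains changed on the source after the point-in-time snapshot was taken.

GRAIN = 256 * 1024  # copy granularity in bytes (SVC uses 256 KB grains)

class SparseFlashCopy:
    def __init__(self, source):
        self.source = source          # dict: grain index -> grain contents
        self.target = {}              # only copied (changed) grains live here

    def write_source(self, grain, data):
        # Copy-on-write: preserve the point-in-time image before the
        # source grain is overwritten for the first time.
        if grain not in self.target:
            self.target[grain] = self.source.get(grain, b"\x00")
        self.source[grain] = data

    def read_target(self, grain):
        # Grains never modified since the snapshot are read from the source.
        return self.target.get(grain, self.source.get(grain, b"\x00"))
```

Because the target is populated only on first modification of a source grain, the point-in-time copy is usable immediately, without waiting for a full background copy.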
Scalability is another issue, because it is difficult to scale out to n-way clusters of intelligent line cards. A multiway switch design is also difficult to code and implement, because of the issues in maintaining fast updates to metadata to keep the metadata synchronized across all processing blades; the updates must occur at wire speed, or the wire-speed claim is lost.
For the same reason, space-efficient copies and replication are also difficult to implement.
Both synchronous and asynchronous replication require a level of buffering of I/O requests; while switches have buffering built in, the number of additional buffers required is huge and grows as the link distance increases. Most of today’s intelligent line cards do not provide anywhere near
this level of local storage. The most common solution is to use an external system to provide
the replication services, which means another system to manage and maintain, which
conflicts with the concept of virtualization.
The controller-based approach has high functionality, but it falls short in terms of scalability and upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue for the life cycle of this solution, such as when the controller must be replaced. You will be challenged with data migration issues and questions, such as how to reconnect the servers to the new controller, and how to do so online without any impact to your applications.
Be aware that you not only replace a controller in this scenario, but also, implicitly, replace
your entire virtualization solution. You not only have to replace your hardware, but you also
must update or repurchase the licenses for the virtualization feature, advanced copy
functions, and so on.
With a network-based appliance solution that is based on a scale-out cluster architecture, life
cycle management tasks, such as adding or replacing new disk subsystems or migrating data
between them, are extremely simple. Servers and applications remain online, data migration
takes place transparently on the virtualization platform, and licenses for virtualization and
copy services require no update, that is, cause no additional costs when disk subsystems
have to be replaced. Only the network-based appliance solution provides you with an
independent and scalable virtualization platform that can provide enterprise-class copy
services, is open for future interfaces and protocols, lets you choose the disk subsystems that
best fit your requirements, and does not lock you into specific SAN hardware.
For these reasons, IBM has chosen the network-based appliance approach for the
implementation of the IBM System Storage SAN Volume Controller.
On the SAN storage that is provided by the disk subsystems, the SVC can offer the following
services:
The ability to create and manage a single pool of storage attached to the SAN
Block-level virtualization (logical unit virtualization)
Advanced functions to the entire SAN, such as:
– Large scalable cache
– Advanced Copy Services:
• FlashCopy (point-in-time copy)
• Metro Mirror and Global Mirror (remote copy, synchronous/asynchronous)
• Data migration
You can configure SAN-based storage infrastructures using SVC with two or more SVC
nodes, which are arranged in a cluster. These nodes are attached to the SAN fabric, along
with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to “see”
the RAID controllers, and for the hosts to “see” the SVC. The hosts are not usually able to
directly “see” or operate on the RAID controllers unless a “split controller” configuration is in
use. You can use the zoning capabilities of the SAN switch to create these distinct zones. The
assumptions that are made about the SAN fabric are limited, to make it possible to support a number of separate SAN fabrics with minimum development effort. Anticipated SAN fabrics include FC and iSCSI over Gigabit Ethernet; other types might follow in the future.
Figure 2-2 shows a conceptual diagram of a storage system utilizing the SVC. It shows a
number of hosts that are connected to a SAN fabric or LAN. In practical implementations that
have high availability requirements (the majority of the target clients for SVC), the SAN fabric
“cloud” represents a redundant SAN. A redundant SAN is composed of a fault-tolerant
arrangement of two or more counterpart SANs, therefore providing alternate paths for each
SAN-attached device.
Both scenarios (using a single network and using two physically separate networks) are
supported for iSCSI-based/LAN-based access networks to the SVC. Redundant paths to
VDisks can be provided for both scenarios.
A cluster of SVC nodes is connected to the same fabric and presents VDisks to the hosts. These VDisks are created from MDisks that are presented by the RAID controllers. There are two distinct zones shown in the fabric: a host zone, in which the hosts can see and address the SVC nodes, and a storage zone, in which the SVC nodes can see and address the MDisks presented by the RAID controllers.
For simplicity, Figure 2-3 shows only one SAN fabric and two types of zones. In an actual
environment, we recommend using two redundant SAN fabrics. The SVC can be connected
to up to four fabrics. You set up zoning for each host, disk subsystem, and fabric. Learn about
zoning details in 3.3.2, “SAN zoning and SAN connections” on page 76.
For iSCSI-based access, using two networks and separating iSCSI traffic within the networks
by using a dedicated virtual local area network (VLAN) path for storage traffic will prevent any
IP interface, switch, or target port failure from compromising the host server’s access to the
VDisk LUNs.
The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SVC and the application servers both trying
to manage the back-end storage. The SVC is based on the following virtualization concepts,
which are discussed more throughout this chapter.
A node is an SVC hardware unit, which provides virtualization, cache, and copy services to the SAN. SVC nodes are deployed in pairs, which make up a cluster. A cluster can have between one and four SVC node pairs in it, which is a product limit, not an architectural limit.
When a host server performs I/O to one of its VDisks, all the I/Os for a specific VDisk are
directed to one specific I/O Group in the cluster. During normal operating conditions, the I/Os
for a specific VDisk are always processed by the same node of the I/O Group. This node is
referred to as the preferred node for this specific VDisk.
Each node of an I/O Group acts as the preferred node for its own subset of the total number of VDisks that the I/O Group presents to the host servers. But both nodes also act as failover nodes for their partner node in the I/O Group: a node will take over the I/O handling from its partner node, if required.
In an SVC-based environment, the I/O handling for a VDisk can switch between the two
nodes of an I/O Group. Therefore, it is mandatory for servers that are connected through FC
to use multipath drivers to be able to handle these failover situations.
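The preferred-node and failover behavior described above can be sketched as follows. This is an illustrative model, not SVC code; the class and method names are invented for the example:

```python
# Illustrative sketch: each VDisk has a preferred node in its I/O Group;
# the partner node takes over the VDisk's I/O handling if that node fails.

class IOGroup:
    def __init__(self, node_a, node_b):
        self.online = {node_a: True, node_b: True}   # node -> online?
        self.partner = {node_a: node_b, node_b: node_a}

    def serving_node(self, preferred):
        # Normal operation: the preferred node handles all I/O for the VDisk.
        if self.online[preferred]:
            return preferred
        # Failover: the partner node takes over the VDisk's I/O handling.
        partner = self.partner[preferred]
        if self.online[partner]:
            return partner
        raise RuntimeError("I/O Group offline: both nodes are down")

    def set_online(self, node, online):
        self.online[node] = online
```

On the host side, a multipath driver performs the equivalent of this selection by switching paths when the preferred node stops responding.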
SVC 5.1 introduces iSCSI as an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC clusters, is still through FC. For iSCSI, node failover can be handled without a multipath driver installed on the server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. To protect the server against link failures in the network or host bus adapter (HBA) failures, however, a multipath driver is mandatory.
The SVC I/O Groups are connected to the SAN so that all application servers accessing
VDisks from this I/O Group have access to this group. Up to 256 host server objects can be
defined per I/O Group; these host server objects can consume VDisks that are provided by
this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group of an SVC cluster;
therefore, they can access VDisks from separate I/O Groups. You can move VDisks between
I/O Groups to redistribute the load between the I/O Groups. With the current release of SVC,
I/Os to the VDisk that is being moved have to be quiesced for a short time for the duration of
the move.
The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the
back-end controllers as a number of disks, known as managed disks or MDisks. Because the
SVC does not attempt to provide recovery from physical disk failures within the back-end
controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The
application servers however do not see the MDisks at all. Instead, they see a number of
logical disks, which are known as virtual disks or VDisks, which are presented by the SVC I/O
Groups through the SAN (FC) or LAN (iSCSI) to the servers. A VDisk is storage that is
provisioned out of one Managed Disk Group (MDG), or if it is a mirrored VDisk, out of two
MDGs.
An MDG is a collection of up to 128 MDisks, which creates the storage pools out of which
VDisks are provisioned. A single cluster can manage up to 128 MDGs. The size of these
pools can be changed (expanded or shrunk) at run time without taking the MDG or the VDisks
that are provided by it offline. At any point in time, an MDisk can only be a member in one
MDG with one exception (image mode VDisk), which will be explained later in this chapter.
MDisks that are used in a specific MDG must have the following characteristics:
They must have the same hardware characteristics, for example, the same RAID type,
RAID array size, disk type, and disk revolutions per minute (RPMs). Be aware that it is
For further details, refer to SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521, at this Web site:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
VDisks can be mapped to a host to allow access for a specific server to a set of VDisks. A
host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified
names (IQNs), defined on the specific server. Note that iSCSI names are internally identified by “fake” WWPNs, that is, WWPNs that are generated by the SVC. VDisks might be mapped to multiple hosts, for example, a VDisk that is accessed by multiple hosts of a server cluster.
An MDisk can be provided by a SAN disk subsystem or by the solid state drives that are
provided by the SVC nodes themselves. Each MDisk is divided into a number of extents. The
size of the extent will be selected by the user at the creation time of an MDG. The size of the
extent ranges from 16 MB (default) up to 2 GB.
We recommend that you use the same extent size for all MDGs in a cluster, which is a prerequisite for supporting VDisk migration between two MDGs. If the extent sizes do not match, you must use VDisk Mirroring (see 2.2.7, “Mirrored VDisk” on page 21) as a workaround.
Figure 2-5 shows the two most popular ways to provision VDisks out of an MDG. Striped
mode is the recommended method for most cases. Sequential extent allocation mode might
slightly increase the sequential performance for certain workloads.
You can allocate the extents for a VDisk in many ways. The process is under full user control
at VDisk creation time and can be changed at any time by migrating single extents of a VDisk
to another MDisk within the MDG. You can obtain details of how to create VDisks and migrate
extents via GUI or CLI in Chapter 7, “SAN Volume Controller operations using the
command-line interface” on page 339, Chapter 8, “SAN Volume Controller operations using
the GUI” on page 469, and Chapter 9, “Data migration” on page 675.
SVC limits the number of extents in a cluster. The number is currently 2^22 (approximately 4 million)
extents, and this number might change in future releases. Because the number of
addressable extents is limited, the total capacity of an SVC cluster depends on the extent size
that is chosen by the user. The capacity numbers that are specified in Table 2-1 for an SVC
cluster assume that all defined MDGs have been created with the same extent size.
Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
For most clusters, a capacity of 1 - 2 PB is sufficient. We therefore recommend that you use
256 MB or, for larger clusters, 512 MB as the standard extent size.
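The arithmetic behind Table 2-1 is simply the cluster-wide extent limit multiplied by the extent size. A quick check (illustrative helper, with an invented function name):

```python
# Table 2-1 arithmetic: the cluster-wide limit of 2**22 addressable extents
# caps the total capacity at (number of extents) * (extent size).

MAX_EXTENTS = 2 ** 22  # about 4 million extents per cluster

def max_cluster_capacity_tb(extent_size_mb):
    """Maximum SVC cluster capacity in TB for a given extent size in MB."""
    return MAX_EXTENTS * extent_size_mb / (1024 * 1024)  # MB -> TB
```

For example, 16 MB extents yield 64 TB, 256 MB extents yield 1 PB, and 512 MB extents yield 2 PB, matching the table.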
It is a best practice if you work with image mode MDisks to put them in a dedicated MDG
and use a special name for it (Example: MDG_IMG_xxx). And, remember that the extent
size chosen for this specific MDG has to be the same as the extent size in which you plan
to migrate the data. All of SVC copy services can be applied to image mode disks.
VDisks have two modes: image mode and managed mode. The following state diagram in
Figure 2-7 on page 19 shows the state transitions.
[Figure 2-7 shows the VDisk states (“doesn’t exist”, “image mode”, and “managed mode”, plus a transitional “image mode to managed mode migrating” state) and the transitions between them: create, delete VDisk, migrate, complete, and migrate to image mode.]
Managed mode VDisks have two policies: the sequential policy and the striped policy. Policies
define how the extents of a VDisk are carved out of an MDG.
Figure 2-8 on page 20 represents this diagrammatically. It shows VDisk V, which is made up
of a number of extents. Each of these extents is mapped to an extent on one of the MDisks: A,
B, or C. The mapping table stores the details of this indirection. Note that several of the MDisk extents are unused: no VDisk extent maps to them. These unused extents are available for use in creating new VDisks, migration, expansion, and so on.
A managed mode VDisk can have a size of zero blocks, in which case, it occupies zero
extents. This type of a VDisk cannot be mapped to a host or take part in any Advanced Copy
Services functions.
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: If the set of MDisks from which to allocate extents contains more than
one disk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no free
extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk
in the set that has a free extent.
Beginning with SVC 5.1, when creating a new VDisk, the first MDisk from which to allocate an
extent is chosen in a pseudo random way rather than simply choosing the next disk in a
round-robin fashion. The pseudo random algorithm avoids the situation whereby the “striping
effect” inherent in a round-robin algorithm places the first extent for a large number of VDisks
on the same MDisk. Placing the first extent of a number of VDisks on the same MDisk might
lead to poor performance for workloads that place a large I/O load on the first extent of each
VDisk or that create multiple sequential streams.
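The allocation behavior just described can be sketched as follows. This is an illustrative model, not SVC code; the function and parameter names are invented for the example:

```python
import random

# Sketch of the extent-allocation behavior described above: extents are
# taken round-robin across the MDisks of the MDG, skipping any MDisk with
# no free extents, and the first MDisk is chosen pseudo-randomly (SVC 5.1)
# so the first extents of many VDisks do not all land on the same MDisk.

def allocate_extents(free, count, rng=random):
    """free: dict of mdisk name -> free extent count (mutated in place).
    Returns the list of MDisks from which each extent was taken."""
    mdisks = list(free)
    allocation = []
    i = rng.randrange(len(mdisks))       # pseudo-random starting MDisk
    remaining = count
    while remaining:
        if all(f == 0 for f in free.values()):
            raise RuntimeError("MDG out of free extents")
        m = mdisks[i % len(mdisks)]
        if free[m] > 0:                  # skip MDisks with no free extents
            free[m] -= 1
            allocation.append(m)
            remaining -= 1
        i += 1                           # round-robin to the next MDisk
    return allocation
```

Because the starting MDisk varies per VDisk, the first extents of a large number of VDisks are spread across the MDG instead of stacking on one MDisk.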
Wherever possible, we recommend using SVC copy services in preference to the underlying
controller copy services.
This function is achieved using two copies of the VDisk, which are typically allocated from
separate MDGs or using image-mode copies. The VDisk is the entity that participates in
FlashCopy and a Remote Copy relationship, is served by an I/O Group, and has a preferred
node. The copy now has the virtualization attributes, such as MDG and policy (striped,
sequential, or image).
A copy is not a separate object and cannot be created or manipulated except in the context of
the VDisk. Copies are identified via the configuration interface with a copy ID of their parent
VDisk. This copy ID can be either 0 or 1. Depending on the configuration history, a single
copy can have an ID of either 0 or 1.
The feature does provide a “point-in-time” copy functionality that is achieved by “splitting” a
copy from the VDisk. The feature does not address other forms of mirroring based on Remote
Copy (sometimes called “Hyperswap”), which mirrors VDisks across I/O Groups or clusters,
nor is it intended to manage mirroring or remote copy functions in back-end controllers.
A copy can be added to a VDisk with only one copy, or removed from a VDisk with two copies. Checks will prevent the accidental removal of the sole copy of a VDisk. A newly created, unformatted VDisk with two copies will initially have the copies out-of-synchronization. The
primary copy will be defined as “fresh” and the secondary copy as “stale”. The
synchronization process will update the secondary copy until it is synchronized, which will be
done at the default “synchronization rate” or one defined when creating the VDisk or
subsequently modifying it.
If mirrored VDisks get expanded or shrunk, all of their copies also get expanded or shrunk.
If it is known that MDisk space, which will be used for creating copies, is already formatted, or
if the user does not require read stability, a “no synchronization” option can be selected which
declares the copies as “synchronized” (even when they are not).
The time for a copy, which has become unsynchronized, to resynchronize is minimized by
copying only those 256 KB grains that have been written to since synchronization was lost.
This approach is known as an “incremental synchronization”. Only those changed grains
need be copied to restore synchronization.
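The incremental synchronization mechanism can be sketched with a set standing in for the dirty-grain bitmap. This is an illustrative model, not SVC code; the class and method names are invented for the example:

```python
# Illustrative sketch of incremental resynchronization: a record tracks
# which 256 KB grains were written while the copies were out of sync,
# and only those grains are recopied to restore synchronization.

class MirroredVDisk:
    def __init__(self, grains):
        self.primary = [0] * grains
        self.secondary = [0] * grains
        self.dirty = set()            # grains written since sync was lost

    def write(self, grain, value, secondary_online=True):
        self.primary[grain] = value
        if secondary_online:
            self.secondary[grain] = value   # normal mirrored write
        else:
            self.dirty.add(grain)           # remember for later resync

    def resync(self):
        copied = len(self.dirty)
        for grain in self.dirty:            # copy only the changed grains
            self.secondary[grain] = self.primary[grain]
        self.dirty.clear()
        return copied
```

Only the grains marked dirty are copied, which is why resynchronization time depends on how much data changed rather than on the VDisk size.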
Where there are two copies of a VDisk, one copy is known as the primary copy. If the primary
is available and synchronized, reads from the VDisk are directed to it. The user can select the
primary when creating the VDisk or can change it later. Selecting the copy allocated on the
higher-performance controller will maximize the read performance of the VDisk. The write
performance will be constrained by the lower-performance controller, because writes must
complete to both copies before the VDisk is considered to have been successfully written.
Remember that writes to both copies must complete to be considered successfully written
when VDisk Mirroring creates one copy in a solid-state drive MDG and the second copy in an
MDG populated with resources from a disk subsystem.
Note: SVC does not prevent you from creating the two copies in one or more solid-state drive MDGs of the same node, although doing so means that you lose redundancy and might therefore lose access to your VDisk if the node fails or restarts.
A VDisk with copies can be checked to see whether all of the copies are identical. If a medium
error is encountered while reading from any copy, it will be repaired using data from another
fresh copy. This process can be asynchronous but will give up if the copy with the error goes
offline.
Mirrored VDisks consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to
1 MB of bitmap space supporting 2 TB worth of mirrored VDisks. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored VDisks. If all 512 MB of variable
bitmap space is allocated to mirrored VDisks, 1 PB of mirrored VDisks can be supported.
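The bitmap sizing above follows directly from the 1-bit-per-256-KB-grain rule. A quick check (illustrative helper, with an invented function name):

```python
# Bitmap arithmetic for VDisk Mirroring: 1 bit per 256 KB grain means
# 1 MB of bitmap (8 * 1024 * 1024 bits) covers 2 TB of mirrored VDisks.

GRAIN_KB = 256

def mirrored_tb_per_bitmap_mb(bitmap_mb):
    """Mirrored VDisk capacity in TB covered by the given bitmap size."""
    bits = bitmap_mb * 1024 * 1024 * 8          # bitmap size in bits
    return bits * GRAIN_KB / (1024 * 1024 * 1024)  # grains * 256 KB -> TB
```

This confirms the figures in the text: 1 MB covers 2 TB, the 20 MB default covers 40 TB, and 512 MB covers 1 PB of mirrored VDisks.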
The advent of the mirrored VDisk feature will inevitably lead clients to think about two-site
solutions for cluster and VDisk availability.
Generally, the advice is not to split a cluster, that is, its individual I/O Groups, across sites. But there are certain configurations that will be effective. Be careful that you prevent a situation
that is referred to as a “split brain” scenario (caused, for example, by a power outage on the
SAN switches; the SVC nodes are protected by their own uninterruptible power supply unit).
In this scenario, the connectivity between components will be lost and a contest for the SVC
cluster quorum disk occurs. Which set of nodes wins is effectively arbitrary. If the set of nodes
which won the quorum disk then experiences a permanent power loss, the cluster is lost. The
way to prevent this split brain scenario is to use a configuration that will provide effective
The real capacity will determine the quantity of MDisk extents that will be allocated for the
VDisk. The virtual capacity will be the capacity of the VDisk reported to other SVC
components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers.
The real capacity will be used to store both the user data and the metadata for the SE VDisk.
The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
The Space-Efficient VDisk feature can be used on its own to create over-allocated or
late-allocation VDisks, or it can be used in conjunction with FlashCopy to implement
Space-Efficient FlashCopy. SE VDisk can be used in conjunction with the mirrored VDisks
feature, as well, which we refer to as Space-Efficient Copies of VDisks.
When an SE VDisk is initially created, a small amount of the real capacity will be used for
initial metadata. Write I/Os to grains of the SE VDisk that have not previously been written to
will cause grains of the real capacity to be used to store metadata and user data. Write I/Os to
grains that have previously been written to will update the grain where data was previously
written. The grain is defined when the VDisk is created and can be 32 KB, 64 KB, 128 KB, or
256 KB.
SE VDisks store both user data and metadata. Each grain requires metadata. The overhead
will never be greater than 0.1% of the user data. The overhead is independent of the virtual
capacity of the SE VDisk. If you are using SE VDisks in a FlashCopy map, use the same grain
size as the map grain size for the best performance. If you are using the Space-Efficient
VDisk directly with a host system, use a small grain size.
SE VDisk format: SE VDisks do not need formatting. A read I/O, which requests data from
unallocated data space, will return zeroes. When a write I/O causes space to be allocated,
the grain will be zeroed prior to use. Consequently, an SE VDisk will always be formatted
regardless of whether the format flag is specified when the VDisk is created. The
formatting flag will be ignored when an SE VDisk is created or when the real capacity is
expanded; the virtualization component will never format the real capacity for an SE VDisk.
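The thin-provisioning behavior of an SE VDisk can be sketched as follows. This is an illustrative model, not SVC code; the class and parameter names are invented for the example:

```python
# Illustrative thin-provisioning sketch: real capacity is consumed only
# when a grain is first written; reads of unallocated grains return zeros,
# so the SE VDisk behaves as if it were always formatted.

class SpaceEfficientVDisk:
    def __init__(self, virtual_grains, real_grains, grain_bytes=32 * 1024):
        self.virtual_grains = virtual_grains  # capacity reported to hosts
        self.real_grains = real_grains        # real capacity actually backed
        self.grain_bytes = grain_bytes        # e.g. 32, 64, 128, or 256 KB
        self.store = {}                       # grain index -> grain data

    def write(self, grain, data):
        # First write to a grain allocates it from the real capacity.
        if grain not in self.store and len(self.store) >= self.real_grains:
            raise RuntimeError("real capacity exhausted")
        self.store[grain] = data

    def read(self, grain):
        # Unallocated grains read back as zeros -- no formatting needed.
        return self.store.get(grain, bytes(self.grain_bytes))
```

A real SE VDisk also stores per-grain metadata in the real capacity and can autoexpand; both are omitted here for brevity.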
The real capacity of an SE VDisk can be changed provided that the VDisk is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored
on the VDisk. SE VDisks use the real capacity of a VDisk in ascending order as new data is
written to the VDisk. Consequently, if the user initially assigns too much real capacity to an SE
VDisk, the real capacity can be reduced to free up storage for other uses. It is not possible to
reduce the real capacity of an SE VDisk to be less than the capacity that is currently in use
other than by deleting the VDisk.
A VDisk that is created with a zero contingency capacity will go offline as soon as it needs to expand, whereas a VDisk with a non-zero contingency capacity will stay online until the contingency capacity has been used up.
Autoexpand will not cause space to be assigned to the VDisk that can never be used.
Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity will be recalculated.
To support the autoexpansion of SE VDisks, the MDGs from which they are allocated have a configurable warning capacity. When the used capacity of the group exceeds the warning capacity, a warning is logged. To allow for capacity used by quorum disks and partial extents of image mode VDisks, the calculation uses the free capacity: for example, if a warning of 80% has been specified, the warning will be logged when only 20% of the free capacity remains.
SE VDisks: SE VDisks require additional I/O operations to read and write metadata to back-end storage and generate additional load on the SVC nodes. We therefore do not recommend the use of SE VDisks for high-performance applications.
SVC 5.1.0 introduces the ability to convert a fully allocated VDisk to an SE VDisk, by using
the following procedure:
1. Start with a VDisk that has one fully allocated copy.
2. Add a Space-Efficient copy to the VDisk.
3. Allow VDisk Mirroring to synchronize the copies.
4. Remove the fully allocated copy.
This procedure uses a zero-detection algorithm. Note that as of 5.1.0, this algorithm is used
only for I/O that is generated by the synchronization of mirrored VDisks; I/O from other
components (for example, FlashCopy) is written using normal procedures.
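The zero-detection step in this conversion can be sketched as follows. This is an illustrative model of the idea, not SVC code; the grain size and function name are invented for the example:

```python
# Illustrative sketch of zero detection during conversion: while VDisk
# Mirroring synchronizes the new space-efficient copy, grains containing
# only zeros are skipped, so they never consume real capacity on that copy.

GRAIN = 4  # toy grain size in bytes, purely for the example

def sync_to_space_efficient(fully_allocated):
    """fully_allocated: raw bytes of the source copy.
    Returns only the non-zero grains, keyed by grain index."""
    se_copy = {}
    for i in range(0, len(fully_allocated), GRAIN):
        grain = fully_allocated[i:i + GRAIN]
        if any(grain):                    # zero-detect: skip all-zero grains
            se_copy[i // GRAIN] = grain
    return se_copy
```

After synchronization completes and the fully allocated copy is removed, the remaining SE copy holds real capacity only for the grains that actually contained data.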
Note: Consider SE VDisks as targets in FlashCopy relationships. Using them as targets in Metro Mirror or Global Mirror relationships makes no sense, because during the initial synchronization, the target will become fully allocated.
I/O governing: I/O governing is applied to remote copy secondaries, as well as primaries.
If an I/O governing rate has been set on a VDisk, which is a remote copy secondary, this
governing rate will also be applied to the primary. If governing is in use on both the primary
and the secondary VDisks, each governed quantity will be limited to the lower of the two
specified values. Governing has no effect on FlashCopy or data migration I/O.
An I/O budget is expressed as a number of I/Os, or a number of MBs, over a minute. The
budget is evenly divided between all SVC nodes that service that VDisk, that is, between the
nodes that form the I/O Group of which that VDisk is a member.
The algorithm operates two levels of policing. As long as a VDisk on each SVC node has been receiving I/O at a rate lower than the governed level, no governing is performed.
made every minute that the VDisk on each node is continuing to receive I/O at a rate lower
than the threshold level. Where this check shows that the host has exceeded its limit on one
or more nodes, policing begins for new I/Os.
This algorithm might cause I/O to backlog in the front end, which might eventually cause
“Queue Full Condition” to be reported to hosts that continue to flood the system with I/O. If a
host stays within its 1 second budget on all nodes in the I/O Group for a period of 1 minute,
the policing is relaxed, and monitoring takes place over the 1 minute period as before.
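The two-level policing described above can be sketched as follows. This is an illustrative model, not SVC code; the class and parameter names are invented for the example:

```python
# Illustrative sketch of I/O governing: a per-VDisk I/O budget is split
# evenly across the nodes of its I/O Group, and policing of new I/Os
# starts once a node's share of the budget is exceeded; after a full
# period within budget on every node, policing is relaxed again.

class IOGovernor:
    def __init__(self, iops_per_minute, nodes=2):
        self.node_budget = iops_per_minute / nodes  # even split per node
        self.counts = [0] * nodes                   # I/Os seen this minute
        self.policing = False

    def record_io(self, node):
        self.counts[node] += 1

    def end_of_minute_check(self):
        # Policing begins if any node exceeded its share of the budget,
        # and is relaxed after a minute in which every node stayed within it.
        self.policing = any(c > self.node_budget for c in self.counts)
        self.counts = [0] * len(self.counts)
        return self.policing
```

While policing is active, new I/Os beyond the budget would be queued, which is what can eventually surface as a “Queue Full Condition” to a host that keeps flooding the system.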
New iSCSI feature: The new iSCSI feature is a software feature that is provided by the
new SVC 5.1 code. This feature will be available on any SVC hardware node that supports
SVC 5.1 code. It is not restricted to the new 2145-CF8 nodes.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that
encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP
network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure.
A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) initiates read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
contained in Command Descriptor Blocks (CDBs).
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.
The concepts of names and addresses have been carefully separated in iSCSI:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
“initiator name” and “target name” also refer to an iSCSI name.
An iSCSI Address specifies not only the iSCSI name of an iSCSI node, but also a location
of that node. The address consists of a host name or IP address, a TCP port number (for
the target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN),
which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted
for Internet nodes.
The iSCSI qualified name format is defined in RFC3720 and contains (in order) these
elements:
The string “iqn”.
A date code specifying the year and month in which the organization registered the
domain or sub-domain name used as the naming authority string.
The organizational naming authority string, which consists of a valid, reversed domain or a
subdomain name.
Optionally, a colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique.
For SVC, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows server, the IQN, that is, the name for the iSCSI Initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>
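As an illustration, the IQN templates above can be expressed in a few lines. The cluster and node names are hypothetical, and the validation is a simplified structural check of the iqn format, not a full RFC 3720 validator:

```python
import re

def make_svc_target_iqn(clustername, nodename):
    # SVC target IQN template from the text:
    # iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
    return "iqn.1986-03.com.ibm:2145.{}.{}".format(clustername, nodename)

def is_plausible_iqn(name):
    # Loose structural check: "iqn.", a yyyy-mm date code, a dot, and a
    # non-empty remainder, within the 255-byte limit. Simplified; this
    # is not a complete RFC 3720 name validator.
    return bool(re.fullmatch(r"iqn\.\d{4}-\d{2}\.\S+", name)) \
        and len(name.encode()) <= 255

iqn = make_svc_target_iqn("ITSO_CL1", "node1")  # hypothetical names
print(iqn)                    # iqn.1986-03.com.ibm:2145.ITSO_CL1.node1
print(is_plausible_iqn(iqn))  # True
```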
You can abbreviate IQNs by a descriptive name, known as an alias. An alias can be assigned
to an initiator or a target. The alias is independent of the name and does not have to be
unique. Because it is not unique, the alias must be used in a purely informational way. It
cannot be used to specify a target at login or used during authentication. Both targets and
initiators can have aliases.
An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. Remember, the IQN is an identifier, not an address.
Be careful: Before changing the cluster or node names for an SVC cluster that has servers
connected to it by way of iSCSI, be aware that because the cluster and node names are part
of the SVC’s IQN, you can lose access to your data by changing these names. The SVC
GUI displays a specific warning; the CLI does not.
The iSCSI login phase is analogous to the FC port login process (PLOGI). It is used to
negotiate various parameters between two network entities and to confirm the access rights
of an initiator.
If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.
As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more
than one TCP connection was established, iSCSI requires that each command/response pair
goes through a single TCP connection. Thus, each individual read or write command is
carried out over one connection, without the need to track requests across separate flows.
However, separate transactions can be delivered through separate TCP connections within
one session.
Figure 2-11 shows an overview of the various block-level storage protocols and where the
iSCSI layer is positioned.
The existing SVC node hardware has two Ethernet ports. Until now, only one Ethernet port
has been used for cluster configuration. With the introduction of iSCSI, you can now use a
second port. The configuration details of the two Ethernet ports can be displayed by the GUI
or CLI, but they will also be displayed on the node’s panel.
In the case of an upgrade to the SVC 5.1 code, the original cluster IP address will be retained
and will always be found on the eth0 interface on the configuration node. A second, new
cluster IP address can be optionally configured in SVC 5.1. This second cluster IP address
will always be on the eth1 interface on the configuration node. When the configuration node
fails, both configuration IP addresses will move to the new configuration node.
Figure 2-12 shows an overview of the new IP addresses on an SVC node port and the rules
regarding how these IP addresses are moved between the nodes of an I/O Group.
The management IP addresses and the iSCSI target IP addresses will fail over to the partner
node N2 if node N1 restarts (and vice versa). The iSCSI target IP addresses will fail back to
their corresponding ports on node N1 when node N1 is up and running again.
In an SVC cluster running 5.1 code, an eight-node cluster with full iSCSI coverage (maximum
configuration) therefore has the following number of IP addresses:
Two IPv4 configuration addresses (one configuration address is always associated with
the eth0:0 alias for the eth0 interface of the configuration node, and the other configuration
address goes with eth1:0).
One IPv4 service mode fixed address (although many DHCP addresses can also be
used). This address is always associated with the eth0:0 alias for the eth0 interface of the
configuration node.
Two IPv6 configuration addresses (one address is always associated with the eth0:0 alias
for the eth0 interface of the configuration node, and the other address goes with eth1:0).
One IPv6 service mode fixed address (although many DHCP addresses can also be
used). This address is always associated with the eth0:0 alias for the eth0 interface of the
configuration node.
We show the configuration of the SVC ports in great detail in Chapter 7, “SAN Volume
Controller operations using the command-line interface” on page 339 and in Chapter 8, “SAN
Volume Controller operations using the GUI” on page 469.
Hosts can discover VDisks through one of the following three mechanisms:
Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server;
you set the IP address of this server by using the svctask chcluster command. A host
can then query the iSNS server for available iSCSI targets.
Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to
host requests. This daemon reports the available services on the node, such as the
CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be
reported.
iSCSI Send Target request: The host can also send a Send Target request using the iSCSI
protocol to the iSCSI TCP/IP port (port 3260).
The user can choose to enable Challenge Handshake Authentication Protocol (CHAP)
authentication, which involves sharing a CHAP secret between the SVC cluster and the host.
After the successful completion of the link establishment phase, the SVC as authenticator
sends a challenge message to the specific server (peer). The server responds with a value
that is calculated by using a one-way hash function over the identifier, the shared secret, and
the challenge, such as an MD5 hash.
The response is checked by the SVC against its own calculation of the expected hash value.
If there is a match, the SVC acknowledges the authentication. If not, the SVC will terminate
the connection and will not allow any I/O to VDisks. At random intervals, the SVC might send
new challenges to the peer to recheck the authentication.
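The challenge-response exchange described above can be sketched with the MD5 variant of CHAP defined in RFC 1994, where the peer hashes the one-octet identifier, the shared secret, and the challenge, in that order. The secret value here is a made-up placeholder, and this is a conceptual illustration, not SVC code:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    # CHAP with MD5 (RFC 1994): hash the one-octet identifier, the
    # shared secret, and the challenge, concatenated in that order.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-chap-secret"   # placeholder value, set on the host object
challenge = os.urandom(16)       # random challenge sent by the authenticator
ident = 1

host_answer = chap_response(ident, secret, challenge)  # computed by the host
expected = chap_response(ident, secret, challenge)     # recomputed by the SVC
print(host_answer == expected)   # True -> authentication acknowledged
```

Because only the hash travels over the network, the shared secret itself is never transmitted.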
You can assign a CHAP secret to each SVC host object. The host must then use CHAP
authentication in order to begin a communications session with a node in the cluster. You can
also assign a CHAP secret to the cluster if two-way authentication is required. When creating
an iSCSI host within an SVC cluster, you provide the initiator’s IQN, for example, for a
Windows server:
iqn.1991-05.com.microsoft:ITSO_W2008
Because iSCSI can be used in networks where data can be accessed illegally, the
specification allows for separate security methods. You can, for example, set up security
through a method such as IPSec, which, because it is implemented at the IP level, is
transparent to higher levels such as iSCSI. You can obtain details about securing iSCSI in RFC3723,
Securing Block Storage Protocols over IP, which is available at this Web site:
http://tools.ietf.org/html/rfc3723
FC-attached hosts see their FC target, and thus their VDisks, go offline if there is a problem
in the target node, its ports, or the network; the host then has to use a separate SAN path to
continue I/O. A multipathing driver is therefore always required on the host.
iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (and this is the
key difference) the host reconnects to the same IP target, which reappears after a short
period of time, and its VDisks continue to be available for I/O.
A host multipathing driver for iSCSI is required if you want these capabilities:
To protect a server from network link failures
To protect a server from network failures, if the server is connected via two HBAs to two
separate networks
To protect a server from a server HBA failure (if two HBAs are in use)
To provide load balancing on the server’s HBA and the network links
Copy services are implemented between VDisks within a single SVC or multiple SVC
clusters. They are therefore independent of the functionalities of the underlying disk
subsystems that are used to provide storage resources to an SVC cluster.
With the SVC, Metro Mirror and Global Mirror are the IBM branded terms for synchronous
remote copy and asynchronous remote copy, respectively.
Synchronous remote copy ensures that updates are committed at both the primary and the
secondary before the application considers the updates complete; therefore, the secondary is
fully up-to-date if it is needed in a failover. However, the application is fully exposed to the
latency and bandwidth limitations of the communication link to the secondary. In a truly
remote situation, this extra latency can have a significant adverse effect on application
performance.
SVC assumes that the FC fabric to which it is attached contains hardware that achieves the
long distance requirement for the application. This hardware makes distant storage
accessible as though it were local storage. Specifically, it enables a group of up to four SVC
clusters to connect (FC login) to each other and establish communications in the same way
as though they were located nearby on the same fabric. The only differences are in the
expected latency of that communication, the bandwidth capability of the links, and the
availability of the links as compared with the local fabric. Special configuration guidelines exist
for SAN fabrics that are used for data replication. Issues to consider are the distance and the
bandwidth of the site interconnections.
In asynchronous remote copy, the application considers an update complete before that
update has necessarily been committed at the secondary. Hence, on a failover, certain
updates might be missing at the secondary. The application must have an external
mechanism for recovering the missing updates and reapplying them. This mechanism can
involve user intervention. Asynchronous remote copy provides comparable functionality to a
continuous backup process that is missing the last few updates. Recovery on the secondary
site involves bringing up the application on this recent “backup” and, then, reapplying the
most recent updates to bring the secondary up-to-date.
The asynchronous remote copy must present at the secondary a view to the application that
might not contain the latest updates, but is always consistent. If consistency has to be
guaranteed at the secondary, applying updates in an arbitrary order is not an option. At the
primary side, the application is enforcing an ordering implicitly by not scheduling an I/O until a
previous dependent I/O has completed. We do not know the actual ordering constraints of the
application; the best approach is to choose an ordering that the application might see if I/O at
the primary were stopped at a suitable point. One example is to apply I/Os at the secondary in
the order in which they were completed at the primary. Thus, the secondary always reflects a
state that could have been seen at the primary if we froze I/O there.
The SVC Global Mirror protocol operates to identify small groups of I/Os, which are known to
be active concurrently in the primary cluster. The process to identify these groups of I/Os
does not significantly contribute to the latency of these I/Os when they execute at the primary.
These groups are applied at the secondary in the order in which they were executed at the
primary. By identifying groups of I/Os that can be applied concurrently at the secondary, the
protocol maintains good throughput as the system size grows.
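The ordering guarantee can be modeled in a few lines. This is a conceptual illustration, not the actual Global Mirror implementation: writes within one group may be applied in any order because they were concurrently active at the primary, but the groups themselves are applied strictly in sequence, so the secondary always lands on a state the primary could have exhibited:

```python
# Conceptual model of replaying Global Mirror write groups at the
# secondary. Each write is a (logical block address, data) pair.

def apply_groups(secondary, groups):
    for group in groups:          # strict inter-group ordering
        for lba, data in group:   # intra-group order may be arbitrary
            secondary[lba] = data

secondary = {}
groups = [
    [(0, b"A1"), (8, b"B1")],  # concurrently active writes at the primary
    [(0, b"A2")],              # dependent write, issued only after A1 completed
]
apply_groups(secondary, groups)
print(secondary[0])  # b'A2' -- a state the primary could have exhibited
```

If the groups were applied out of order, the secondary could expose A1 after A2, a state the primary never had.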
The relationship between the two copies is not symmetrical. One copy of the data set is
considered the primary copy, which is sometimes also known as the source. This copy
provides the reference for normal runtime operation. Updates to this copy are shadowed to a
secondary copy, which is sometimes known as the destination or even the target. The
secondary copy is not normally referenced for performing I/O. If the primary copy fails, the
secondary copy can be enabled to take over its role.
The secondary copy is not accessible for application I/O other than the I/Os that are
performed for the remote copy process. The SVC allows read-only access to the secondary
storage when it contains a consistent image. This capability is only intended to allow boot
time operating system discovery to complete without error so that any hosts at the secondary
site can be ready to start up the applications with minimum delay, if required. For instance,
many operating systems need to read logical block address (LBA) 0 to configure a logical
unit.
“Enabling” the secondary copy for active operation will require SVC, operating system, and
possibly application-specific work, which needs to be performed as part of the entire failover
process. The SVC software at the secondary must be instructed to stop the relationship,
which makes the secondary logical unit accessible for normal I/O access. The operating
system might need to mount file systems or perform similar work, which can typically only happen
when the logical unit is accessible for writes. The application might have a log of work to
recover.
Note that this property of remote copy, the requirement to enable the secondary copy,
differentiates it from RAID-1 mirroring. The latter aims to emulate a single, reliable disk,
regardless of what system accesses it. Remote copy retains the property that there are two
volumes in existence, but it suppresses one volume while the copy is being maintained.
The underlying storage at the primary or secondary of a remote copy will normally be RAID
storage, but it can be any storage that can be managed by the SVC.
Making use of a secondary copy involves a conscious policy decision by a user that a failover
is required. The application work involved in establishing operation on the secondary copy is
substantial. The goal is to make this process rapid, but not seamless; rapid is still much faster
than recovering from a backup copy.
Most clients will aim to automate this remote copy through failover management software.
SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable
this automation. IBM Support for automation is provided by IBM Tivoli® Storage Productivity
Center for Replication.
Or, you can access the documentation online at the IBM Tivoli Storage Productivity Center
information center:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
2.2.16 FlashCopy
FlashCopy makes a copy of a source VDisk to a target VDisk. The original content of the
target VDisk is lost. After the copy operation has started, the target VDisk has the contents of
the source VDisk as it existed at a single point in time. Although the copy operation takes
time, the resulting data at the target appears as though the copy was made instantaneously.
You can run FlashCopy on multiple source and target VDisks. FlashCopy permits the
management operations to be coordinated so that a common single point in time is chosen
for copying target VDisks from their respective source VDisks. This capability allows a
consistent copy of data, which spans multiple VDisks.
SVC also permits multiple Target VDisks to be FlashCopied from each Source VDisk. You can
use this capability to create images from separate points in time for each Source VDisk.
Starting with SVC 5.1, Reverse FlashCopy is supported. It enables target VDisks to become
restore points for the source without breaking the FlashCopy relationship and without having
to wait for the original copy operation to complete. SVC supports multiple targets and thus
multiple rollback points.
Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick recovery
of their applications and databases. IBM Support is provided by Tivoli Storage FlashCopy
Manager:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
You can read a detailed description of Data Mirroring and FlashCopy copy services in
Chapter 7, “SAN Volume Controller operations using the command-line interface” on
page 339. We discuss data migration in Chapter 6, “Advanced Copy Services” on page 255.
For example, a client can request the use of an application without being concerned about
either where the application resides or which physical server is processing the request. The
user simply gains access to the application in a timely and reliable manner. Another benefit is
scalability. If you need to add users or applications to your system and want performance to
be maintained at existing levels, additional systems can be incorporated into the cluster.
The SVC is a collection of up to eight cluster nodes, which are added in pairs. In future
releases, the cluster size might be increased to permit further performance scalability. These
nodes are managed as a set (cluster) and present a single point of control to the
administrator for configuration and service activity.
The current eight-node limit within an SVC cluster is a limitation of the product, not of the
architecture. Larger clusters are possible without changing the underlying architecture.
Based on a 14-node cluster, coupled with solid-state drive controllers, the project achieved a
data rate of over one million IOPS with a response time of under 1 millisecond (ms).
It is key for all active nodes of a cluster to know that they are members of the cluster.
Especially in situations such as a split-brain scenario, where individual nodes lose contact
with other nodes and cannot determine whether those nodes are still reachable, a solid
mechanism is needed to decide which nodes form the active cluster. A worst case scenario
is a cluster that splits into two separate clusters.
Within an SVC cluster, the voting set and an optional quorum disk are responsible for the
integrity of the cluster. If nodes are added to a cluster, they get added to the voting set; if
nodes are removed, they will also quickly be removed from the voting set. Over time, the
voting set, and hence the nodes in the cluster, can completely change so that the cluster has
migrated onto a completely separate set of nodes from the set on which it started.
These rules guarantee that there is only ever at most one group of nodes able to operate as
the cluster, so the cluster never splits into two. The SVC cluster implements a dynamic
quorum. Following a loss of nodes, if the cluster can continue operation, the cluster will adjust
the quorum requirement, so that further node failure can be tolerated.
The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of
nodes and proceeds to determine (from the quorum rules) whether the nodes can operate as
the cluster.
This node also presents the maximum two cluster IP addresses on one or both of its node’s
Ethernet ports to allow access for cluster management.
If a tiebreaker condition occurs, the half of the cluster nodes that is able to reserve the
quorum disk after the split has occurred locks the disk and continues to operate. The other
half stops its operation. This design prevents both sides from becoming inconsistent with
each other.
When MDisks are added to the SVC cluster, the SVC cluster checks each MDisk to see if it can
be used as a quorum disk. If the MDisk fulfills the requirements, the SVC will assign three
quorum disk candidates.
Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
It must be presented by a disk subsystem that is supported to provide SVC quorum
disks.
It cannot be allocated on one of the node’s internal flash disks.
It has been manually allowed to be a quorum disk candidate using the svctask
chcontroller -allow_quorum yes command.
It must be in managed mode (no image mode disks).
It must have sufficient free extents to hold the cluster state information, plus the stored
configuration metadata.
It must be visible to all of the nodes in the cluster.
If possible, the SVC will place the quorum candidates on separate disk subsystems. After the
quorum disk has been selected, however, no attempt is made to ensure that the other quorum
candidates are presented through separate disk subsystems.
With SVC 5.1, quorum disk candidates and the active quorum disk in a cluster can be listed
by the svcinfo lsquorum command. When the set of quorum disk candidates has been
chosen, it is fixed.
A new quorum disk candidate will only be chosen in one of these conditions:
The administrator requests that a specific MDisk becomes a quorum disk by using the
svctask setquorum command.
An MDisk that is a quorum disk is deleted from an MDG.
An MDisk that is a quorum disk changes to image mode.
A cluster needs to be regarded as a single entity for disaster recovery purposes. The cluster
and the quorum disk need to be colocated.
There are special considerations concerning the placement of the active quorum disk for a
stretched cluster and stretched I/O Group configurations. Details are available at this Web
site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
Important: Running an SVC cluster without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata will prevent any migration
operation (including a forced MDisk delete). Mirrored VDisks might be taken offline if there
is no quorum disk available. This behavior occurs, because synchronization status for
mirrored VDisks is recorded on the quorum disk.
During the normal operation of the cluster, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. If a
node fails for any reason, the workload that is intended for it is taken over by another node
until the failed node has been restarted and readmitted to the cluster (which happens
automatically). In the event that the microcode on a node becomes corrupted, resulting in a
failure, the workload is transferred to another node. The code on the failed node is repaired,
and the node is readmitted to the cluster (again, all automatically).
Preferred node: The preferred node does not signify absolute ownership. The data can
still be accessed by the partner node in the I/O Group in the event of a failure.
2.3.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive suffer from both seek and rotational latency at the drive level, which can
result in response times of 1 ms to 10 ms (for an enterprise-class disk).
The new 2145-CF8 nodes combined with SVC 5.1 provide 24 GB memory per node, or 48
GB per I/O Group, or 192 GB per SVC cluster. The SVC provides a flexible cache model, and
the node’s memory can be used as read or write cache. The size of the write cache is limited
to a maximum of 12 GB of the node’s memory. Depending on the current I/O situation on a
node, the free part of the memory (a maximum of 24 GB) can be fully used as read cache.
Cache is allocated in 4 KB pages. A page belongs to one track. A track is the unit of locking
and destage granularity in the cache. It is 32 KB in size (eight pages). A track might only be
partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if
the writes reside in the same track prior to destage; for example, 4 KB that is written into a
track is destaged together with another 4 KB that was written to another location in the same
track. Therefore, the blocks written from the SVC to the disk subsystem can be any size from
512 bytes up to 32 KB.
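The page and track arithmetic above can be illustrated as follows, a sketch based only on the sizes given in the text (4 KB pages, 32 KB tracks of eight pages):

```python
PAGE = 4 * 1024    # cache allocation unit
TRACK = 32 * 1024  # locking and destage unit (eight pages)

def track_of(byte_offset):
    # Track index that a byte offset on the VDisk falls into.
    return byte_offset // TRACK

def page_in_track(byte_offset):
    # Page index (0-7) within that track.
    return (byte_offset % TRACK) // PAGE

# Two 4 KB writes landing in the same 32 KB track can be coalesced
# and destaged together.
a, b = 4096, 20480
print(track_of(a) == track_of(b))          # True: same track
print(page_in_track(a), page_in_track(b))  # 1 5
```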
When data is written by the host, the preferred node within the I/O Group saves the data in its
cache. Before the cache returns completion to the host, the write must be mirrored to, that is,
copied into the cache of, its partner node for availability reasons. After a copy of the written
data exists on both nodes, the cache returns completion to the host.
Write data that is held in cache has not yet been destaged to disk; therefore, if only one copy
of the data were kept, you would risk losing data. Write cache entries without updates during
the last two minutes are automatically destaged to disk.
If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining
node empties all of its write cache and proceeds in an operation mode that is referred to as
write-through mode. A node operating in write-through mode writes data directly to the disk
subsystem before sending an “I/O complete” status message back to the host. Running in
this mode can degrade the performance of the specific I/O Group.
Starting with SVC Version 4.2.1, write cache partitioning was introduced to the SVC. This
feature restricts the maximum amount of write cache that a single MDG can allocate in a
cluster. Table 2-2 shows the upper limit of write cache data that a single MDG in a cluster can
occupy.
An SVC node can treat part or all of its physical memory as non-volatile. Non-volatile means
that its contents are preserved across power losses and resets. Besides the bitmaps for
FlashCopy and Remote Mirroring relationships, the Virtualization Table and the Write Cache are
the most important items in the non-volatile memory. The actual amount that can be treated
as non-volatile is dependent on the hardware.
In the event of a disruption or external power loss, the physical memory is copied to a file in
the file system on the node’s internal disk drive, so that the contents can be recovered when
external power is restored. The uninterruptible power supply units, which are delivered with
each node’s hardware, ensure that there is sufficient internal power to keep a node
operational to perform this dump when external power is removed. After dumping the content
of the non-volatile part of the memory to disk, the SVC node shuts down.
Starting with SVC release 4.3.1, the SVC Console (ICAT) can use the CIM Agent that is
embedded in the SVC cluster. With release 5.1 of the code, using the embedded CIMOM is
mandatory. This CIMOM will support the Storage Management Initiative Specification (SMI-S)
Version 1.3 standard.
Earlier SVC releases authenticated all users locally. SVC 5.1 has two authentication
methods:
Local authentication: Local authentication is similar to the existing method and will be
described next.
Remote authentication: Remote authentication supports the use of a remote
authentication server, which for SVC is the Tivoli Embedded Security Services, to validate
the passwords. The Tivoli Embedded Security Services is part of the Tivoli Integrated
Portal, which is one of the three components that come with Tivoli Productivity Center 4.1
(Tivoli Productivity Center, Tivoli Productivity Center for Replication, and Tivoli Integrated
Portal) and that are pre-installed on the IBM System Storage Productivity Center 1.4. The
IBM System Storage Productivity Center 1.4 is the management console for SVC 5.1 clusters.
Each SVC cluster can have multiple users defined. The cluster maintains an audit log of
successfully executed commands, indicating which users performed which actions at what
times. Passwords for local users have no forbidden characters, but a password cannot begin
or end with a blank.
To register an SSH key for the superuser to provide command-line access, you use the GUI,
usually at the end of the cluster initialization process. But, you can also add it later.
The superuser is always a member of user group 0, which has the most privileged role within
the SVC.
User groups are used for local and remote authentication. Because SVC knows of five roles,
there are, by default, five user groups defined in an SVC cluster (see Table 2-3).
ID   Default user group name   Role
0    SecurityAdmin             SecurityAdmin
1    Administrator             Administrator
2    CopyOperator              CopyOperator
3    Service                   Service
4    Monitor                   Monitor
The access rights for a user belonging to a specific user group are defined by the role that is
assigned to the user group. It is the role that defines what a user can do (or cannot do) on an
SVC cluster.
Table 2-4 on page 42 shows the roles in order, starting with the least privileged Monitor role
at the top, down to the most privileged SecurityAdmin role.
Local users: Be aware that local users are created per SVC cluster. Each user has a
name, which must be unique across all users in one cluster. If you want to allow access for
a user on multiple clusters, you have to define the user in each cluster with the same name
and the same privileges.
Figure 2-14 on page 43 shows an overview of local authentication within the SVC.
Remote users only have to be defined in the SVC if command-line access is required. In that
case, the remote authentication flag has to be set, and an SSH key and its password have to
be defined for this user. Remember that for users requiring CLI access with remote
authentication, defining the password locally for this user is mandatory.
Remote users cannot belong to any user group, because the remote authentication service,
for example, a Lightweight Directory Access Protocol (LDAP) directory server, such as IBM
Tivoli Directory Server or Microsoft® Active Directory, will deliver the user group information.
The upgrade from SVC 4.3.1 is seamless. Existing users and roles are migrated without
interruption. Remote authentication can be enabled after the upgrade is complete.
The authentication service supported by SVC is the Tivoli Embedded Security Services server
component level 6.2.
The Tivoli Embedded Security Services server provides the following two key features:
Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in
use, which means that the SVC communicates only with Tivoli Embedded Security
Services to get its authentication information. The type of protocol that is used to access
the central directory or the kind of the directory system that is used is transparent to SVC.
Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
using what appears to them to be a single system. It is used within Tivoli Productivity
Center. When the SVC Console is launched from within Tivoli Productivity Center, the user
will not have to log on to the SVC Console, because the user has already logged in to
Tivoli Productivity Center.
With reference to Figure 2-16 on page 45, the user starts application A with a user name and
password (1), which are authenticated using the Tivoli Embedded Security Services server
(2). The server returns a token (3), which is an opaque string that can only be interpreted by
the Tivoli Embedded Security Services server. The server also supplies the user’s groups and
an expiry time stamp for the token. The client device (SVC in our case) is responsible for
mapping a Tivoli Embedded Security Services user group to roles.
Application A needs to launch application B. Instead of getting the user to enter a new
password to authenticate to application B, A passes B the Tivoli Embedded Security Services
token (4). Application B passes the Tivoli Embedded Security Services token to the Tivoli
Embedded Security Services server (5), which decodes the token and returns the user’s ID
and groups to application B (6) along with an expiry time stamp.
Figure 2-16 illustrates this flow, showing applications A and B, the Tivoli Embedded Security
Services server backed by an LDAP server, the launch of application B with the token
(4: launch(tk)), and the token validation (5: auth(tk)).
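The token flow described above can be sketched in a few lines of Python. This is a toy illustration of the pattern only; the class and method names are hypothetical and do not reflect the actual Tivoli Embedded Security Services API:

```python
import time
import uuid

class TokenServer:
    """Toy stand-in for the Tivoli Embedded Security Services server."""
    def __init__(self, directory, ttl_seconds=300):
        self.directory = directory    # user -> (password, groups); stands in for LDAP
        self.tokens = {}              # opaque token -> (user, expiry)
        self.ttl = ttl_seconds

    def authenticate(self, user, password):
        # Steps 1-3: check the credentials, then issue an opaque token
        stored_password, _groups = self.directory[user]
        if password != stored_password:
            raise PermissionError("bad credentials")
        token = uuid.uuid4().hex
        self.tokens[token] = (user, time.time() + self.ttl)
        return token

    def validate(self, token):
        # Steps 5-6: decode the token; return user ID, groups, and expiry
        user, expiry = self.tokens[token]
        if time.time() >= expiry:
            raise PermissionError("token expired")
        return user, self.directory[user][1], expiry

# Application A authenticates the user and receives a token (1-3) ...
server = TokenServer({"alice": ("secret", ("SVC_Admins",))})
token = server.authenticate("alice", "secret")
# ... and launches application B, passing the token instead of a password (4).
# Application B then validates the token with the server (5-6):
user, groups, _expiry = server.validate(token)
print(user, groups)
```

The token itself stays opaque to both applications; only the server can map it back to a user ID and groups.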
The token expiry time stamp is advice to the Tivoli Embedded Security Services client
applications A and B about credential caching. The applications are permitted to cache and
use a token or user name-password combination until the time stamp that is returned by the
server expires.
So, in our example, application B can cache the fact that a particular token maps to a
particular user ID and groups, which is a performance boost, because it saves the latency of
querying the Tivoli Embedded Security Services server on each interaction between A and B.
After the lifetime of the token has expired, application A must query the server again and
obtain a new time stamp to rejuvenate the token (or alternatively discover that the credentials
are now invalid).
The Tivoli Embedded Security Services server administrator can configure the length of time
that is used to set expiry timestamps. This system is only effective if the Tivoli Embedded
Security Services server and the applications have synchronized clocks.
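Client-side caching against such an expiry time stamp might look like the following sketch (a hypothetical helper class, not an SVC or Tivoli interface):

```python
import time

class CredentialCache:
    """Client-side cache of token -> (user, groups), honoring the server's expiry."""
    def __init__(self):
        self._cache = {}

    def put(self, token, user, groups, expiry):
        self._cache[token] = (user, groups, expiry)

    def get(self, token, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(token)
        if entry is None or now >= entry[2]:
            self._cache.pop(token, None)    # expired: caller must revalidate
            return None
        return entry[0], entry[1]

cache = CredentialCache()
cache.put("tk1", "alice", ("SVC_Admins",), expiry=1000.0)
fresh = cache.get("tk1", now=999.0)     # within lifetime: served from the cache
stale = cache.get("tk1", now=1001.0)    # expired: None, query the server again
print(fresh, stale)
```

As the text notes, this scheme only works when the server and the clients agree on the current time.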
SVC supports either an HTTP or HTTPS connection to the Tivoli Embedded Security
Services server. If the HTTP option is used, the user and password information is
transmitted in clear text over the IP network.
2. Configure user groups on the cluster matching those user groups that are used by the
authentication service. For each group of interest that is known to the authentication
service, create an SVC user group with the same name and with the remote setting enabled.
Also, Tivoli Productivity Center 4.1 leverages the Tivoli Integrated Portal infrastructure and its
underlying WebSphere® Application Server capabilities to make use of an LDAP registry and
enable single sign-on (SSO).
You can obtain more information about implementing SSO within Tivoli Productivity Center
4.1 in Chapter 6 (LDAP authentication support and single sign-on) of the IBM Tivoli Storage
Productivity Center V4.1 Release Guide, SG247725, at this Web site:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
The new SVC 2145-CF8 Storage Engine has the following key hardware features:
New SVC engine based on Intel Core i7 2.4 GHz quad-core processor
24 GB memory, with future growth possibilities
Four 8 Gbps FC ports
Up to four solid-state drives, enabling scale-out high performance solid-state drive support
with SVC
Two power supplies
Double bandwidth compared to its predecessor node (2145-8G4)
The new nodes can be smoothly integrated into existing SVC clusters. New nodes can be
intermixed in pairs within existing SVC clusters. When engine types are mixed in a cluster,
each VDisk takes on the throughput characteristics of the engine type in its I/O Group. The
cluster nondisruptive upgrade capability can be used to replace older engines with new
2145-CF8 engines.
They are 1U high, fit into 19 inch racks, and use the same uninterruptible power supply unit
models as previous models. Integration into existing clusters requires that the cluster runs
SVC 5.1 code. The only node that does not support SVC 5.1 code is the 2145-4F2-type node.
An upgrade scenario for SVC clusters based on, or containing, these first-generation nodes
will be available later this year. Figure 2-17 shows the front-side view of the new SVC
2145-CF8 node.
Remember that several of the new features in the new SVC 5.1 release, such as iSCSI, are
software features and are therefore available on all nodes supporting this release.
The nodes come with a 4-port HBA. The FC ports on these node types autonegotiate the link
speed that is used with the FC switch. The ports normally operate at the maximum speed that
is supported by both the SVC port and the switch. However, if a large number of link errors
occur, the ports might operate at a lower speed than what is supported.
The actual port speed for each of the four ports can be displayed via the GUI, the CLI, the
node’s front panel, and also by light-emitting diodes (LEDs) that are placed at the rear of the
node. For details, consult the node-specific SVC hardware installation guides:
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation
Guide, GC52-1356
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation
Guide, GC27-2219
The SVC imposes no limit on the FC optical distance between SVC nodes and host servers.
FC standards, along with small form-factor pluggable optics (SFP) capabilities and cable type,
dictate the maximum FC distances that are supported.
If you use longwave SFPs in the SVC node itself, the longest supported FC link between the
SVC and switch is 10 km (6.21 miles).
Table 2-5 shows the actual cable length that is supported with shortwave SFPs.
Table 2-6 shows the rules that apply with respect to the number of inter-switch link (ISL) hops
allowed in a SAN fabric between SVC nodes or the cluster.
0 (connect to the same switch) | 1 (recommended: 0, connect to the same switch) | 1 (recommended: 0, connect to the same switch) | Maximum 3
In releases before SVC 5.1, if the configuration node failed, a separate node in the cluster
took over the duties of the configuration node, and the IP address for the cluster was then
presented at the eth0 port of that new configuration node. From SVC 4.3 onward, the
configuration node supported concurrent access on the IPv4 and IPv6 configuration
addresses on the eth0 port.
Starting with SVC 5.1, the cluster configuration node can now be accessed on either eth0 or
eth1. The cluster can have two IPv4 and two IPv6 addresses that are used for configuration
purposes (CLI or CIMOM access). The cluster can therefore be managed by SSH clients or
GUIs on System Storage Productivity Centers on separate physical IP networks. This
capability provides redundancy in the event of a failure of one of these IP networks.
While CPUs and cache/memory devices continually improve their performance, the same is
not generally true for the mechanical disks that are used as external storage.
The individual times that are shown are not that important; instead, look at the difference
between accessing data that is located in cache and accessing data that is located on
external disk.
We have added a second scale to Figure 2-18, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale illustrates
the importance of future storage technologies closing, or at least reducing, the gap between
access times for data stored in cache/memory and access times for data stored on an
external medium.
However, the number of I/Os that a disk can handle and the response time in which it
processes a single I/O have not increased at the same rate, although they have certainly
increased. In actual environments, we can expect from today’s enterprise-class FC or
serial-attached SCSI (SAS) disk up to 200 IOPS per disk with an average response time (a
latency) of approximately 7 ms per I/O.
Put simply, rotating disks are becoming, and will continue to become, bigger in capacity
(several TB), smaller in form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and less
expensive ($/GB), but not necessarily faster.
The limiting factor is the number of revolutions per minute (rpm) that a disk can perform
(currently 15,000). This factor defines the time that is required to access a specific data block
on a rotating device. There might be small improvements in the future, but a big step, such
as doubling the rotational speed, even if technically possible, would inevitably cause a
massive increase in power consumption and price.
Enterprise-class solid-state drives typically deliver 50,000 read IOPS and 20,000 write IOPS,
with latencies of typically 50 µs for reads and 800 µs for writes. Their form factors (2.5
inches/3.5 inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment
(SATA)) make them easy to integrate into existing disk shelves.
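To put the quoted figures in perspective, a little illustrative arithmetic using only the numbers above:

```python
# Rough comparison of the figures quoted in the text (illustrative only)
hdd_iops = 200               # IOPS per enterprise-class rotating disk
hdd_latency_s = 7e-3         # ~7 ms average per I/O
ssd_read_iops = 50_000       # enterprise solid-state drive, reads
ssd_read_latency_s = 50e-6   # ~50 microseconds per read

disks_to_match = ssd_read_iops / hdd_iops     # rotating disks per one SSD
latency_ratio = hdd_latency_s / ssd_read_latency_s
print(f"{disks_to_match:.0f} rotating disks to match one SSD's read IOPS")
print(f"read latency improves by a factor of about {latency_ratio:.0f}")
```

That is, by these figures, one solid-state drive delivers the read IOPS of roughly 250 enterprise rotating disks.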
Today’s solid-state drive technology is only a first step into the world of high-performance
persistent semiconductor storage. A group of the approximately 10 most promising
technologies is collectively referred to as Storage Class Memory (SCM).
You can obtain details of Storage Class Memory at these Web sites:
http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/
http://tinyurl.com/plk7as
You can read a comprehensive and worthwhile overview of the solid-state drive technology in
a subset of the well known Spring 2009 SNIA Technical Tutorials, which are available on the
SNIA Web site:
http://www.snia.org/education/tutorials/2009/spring/solid
When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.
The next topic describes integrating the first releases of this new technology into the SVC.
Up to four solid-state drives are supported per node, which will provide up to 560 GB of
usable solid-state drive capacity per node. Always install the same amount of solid-state drive
capacity in both nodes of an I/O Group.
In a cluster running 5.1 code, node pairs with solid-state drives can be mixed with older node
pairs, either with or without local solid-state drives installed.
This scalable architecture enables clients to take advantage of the throughput capabilities of
the solid-state drives. The following performance is achievable per I/O Group (from
solid-state drives only):
IOPS: 200 K reads, 80 K writes, and 56 K 70/30 mix
MBps: 800 MBps reads and 400 MBps writes
SSDs are local drives in an SVC node and are presented as MDisks to the SVC cluster. They
belong to an SVC internal controller. These controller objects will have the worldwide node
name (WWNN) of the node in question, but they will be reported as standard controller
objects that can be renamed by the user. SVC reserves eight of these controller objects for
the internal SSD controllers.
You must follow the SVC solid-state drive configuration rules for MDisks and MDisk groups:
Each solid-state drive is recognized by the cluster as a single MDisk.
For each node that contains solid-state drives, create a single MDisk group that includes
only the solid-state drives that are installed in that node.
Terminology: An MDG using solid-state drives contained within an SVC node will be
referenced as SVC solid-state drive storage throughout this book. The configuration rules
given in this book apply to SVC solid-state drive storage. Do not confuse this term with
solid-state drive storage that is contained in SAN-attached storage controllers, such as the
IBM DS8000 or DS5000.
When you add a new solid-state drive to an MDisk group (move it from unmanaged to
managed mode), the solid-state drive is automatically formatted and set to a block size of 512
bytes.
You must follow these configuration rules for VDisks using storage from solid-state drives
within SVC nodes:
VDisks using SVC solid-state drive storage must be created in the I/O Group where the
solid-state drives physically reside.
VDisks using SVC solid-state drive storage must be mirrored to another MDG to provide
fault tolerance. There are two supported mirroring configurations:
– For the highest performance, the two VDisk copies must be created in the two MDGs
that correspond to the SVC solid-state drive storage in two nodes in the same I/O
Group. The recommended solid-state drive configuration for highest performance is
shown in Figure 2-19 on page 54.
– For the best utilization of the solid-state drive capacity, the primary VDisk copy must be
placed on SVC solid-state drive storage and the secondary copy can be placed on Tier
1 storage, such as an IBM DS8000. Under certain failure scenarios, the performance
of the VDisk will degrade to the performance of the non-solid-state drive storage. All
read I/Os are sent to the primary copy of a mirrored VDisk; therefore, reads will
experience solid-state drive performance. Write I/Os are mirrored to both locations, so
performance will match the speed of the slowest copy. The recommended solid-state
Important: For VDisks that are provisioned out of SVC solid-state drive storage, VDisk
Mirroring is mandatory to maintain access to the data that is stored on solid-state drives if
one of the nodes in the I/O Group is being serviced or fails.
Remember that VDisks that are based on SVC solid-state drive storage must always be
presented by the I/O Group and, during normal operation, by the node to which the solid-state
drive belongs. These rules are designed to direct all host I/O to the node containing the
relevant solid-state drives.
Existing VDisks can be migrated while online to SVC solid-state drive storage. It might be
necessary to move the VDisk into the correct I/O Group first, which requires quiescing I/O to
this VDisk during the move.
Figure 2-19 on page 54 shows the recommended solid-state drive configuration for the
highest performance.
For a read-intensive application, mirrored VDisks can keep their secondary copy on a
SAN-based MDG, such as an IBM DS8000 providing Tier 1 storage resources to an SVC
cluster.
Because all read I/Os are sent to the primary copy (which is set as the solid-state drive),
reasonable performance occurs as long as the Tier 1 storage can sustain the write I/O rate.
Performance will decrease if the primary copy fails. Ensure that the node on which the
primary VDisk copy resides is also the preferred node for the VDisk. Figure 2-20 on page 55
shows the recommended solid-state drive configuration for the best capacity utilization.
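The read/write behavior that is described above can be sketched as a simple latency model. This is an illustration of the stated rules only (reads served by the primary copy, writes paced by the slowest copy), not SVC code, and the latency values are placeholders:

```python
# Simplified latency model for a mirrored VDisk: reads go to the primary copy
# only; a write completes only when both copies have completed it
def mirrored_latency_ms(primary_ms, secondary_ms):
    read = primary_ms                       # served by the primary copy
    write = max(primary_ms, secondary_ms)   # paced by the slowest copy
    return read, write

# Primary copy on SVC solid-state drive storage, secondary on Tier 1 disk:
read_ms, write_ms = mirrored_latency_ms(0.05, 7.0)
print(read_ms, write_ms)   # reads at solid-state speed, writes at disk speed
```

The model makes the trade-off visible: this configuration buys solid-state read latency while write latency stays at Tier 1 disk speed.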
SVC 5.1 provides the functionality to upgrade the solid-state drive’s firmware and pre-GA
code.
For details, see IBM System Storage SAN Volume Controller Software Installation and
Configuration Guide, SC23-6628.
2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels
With the SVC 5.1 release, as in every release, IBM offers functional enhancements and new
hardware that can be integrated into existing or new SVC clusters and also interoperability
Note: With SVC 5.1, the usage of the embedded CIMOM is mandatory. We therefore
recommend, when upgrading, that you switch the existing configurations from the
Master Console/IBM System Storage Productivity Center-based CIMOM to the
embedded CIMOM (remember to update the Tivoli Productivity Center configuration if it
is in use). Then, upgrade the Master Console/IBM System Storage Productivity Center,
and finally, upgrade the SVC cluster.
Windows Server 2008 support for the SVC GUI and Master Console
IBM System Storage Productivity Center 1.3 support
NTP synchronization
The SVC cluster time operates in one of two exclusive modes:
– Default mode in which the cluster uses the configuration node’s system clock
– NTP mode in which the cluster uses an NTP time server as its time source and adjusts
the configuration node’s system clock according to time values obtained from the NTP
server. When operating in NTP mode, the SVC cluster will log an error if an NTP server
is unavailable.
Performance enhancement for overlapped Global Mirror writes
Several limits have been removed with SVC 5.1, but not all of them. The following list gives an
overview of the most important limits. For details, always consult the SVC support site:
iSCSI support
All host iSCSI names are converted to an internally generated WWPN (one per iSCSI
name per I/O Group). Each iSCSI name in an I/O Group consumes one WWPN that
otherwise is available for a “real” FC WWPN.
So, the limits for ports per I/O Group/cluster/host object remain the same, but these limits
are now shared between FC WWPNs and iSCSI names.
The limit on the number of cluster partnerships has been lifted from one to a maximum of
three, which means that a single SVC cluster can have partnerships with up to three other
clusters at the same time.
Remote Copy (RC):
– The number of RC relationships has increased from 1,024 to 8,192. Remember that a
single VDisk at a single point of time can be a member of exactly one RC relationship.
– The number of RC relationships per RC consistency group has also increased to
8,192.
VDisk
A VDisk can contain a maximum of 2^17 (131,072) extents. With an extent size of 2 GB,
the maximum VDisk size is 256 TB.
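The arithmetic behind this limit:

```python
# Maximum VDisk size = maximum extents per VDisk x extent size
max_extents = 2 ** 17        # 131,072 extents per VDisk
extent_size_mb = 2048        # largest supported extent size (2 GB)
max_vdisk_tb = max_extents * extent_size_mb // (1024 * 1024)
print(max_vdisk_tb)          # maximum VDisk size, in TB
```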
You can see the IBM Redbooks publications about SVC at this Web site:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Cluster
A cluster is a group of SVC (machine type 2145) nodes that presents a single configuration
and service interface to the user.
Consistency group
A consistency group is a group of VDisks that has copy relationships that need to be
managed as a single entity.
Copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy
relationship was created. The copy process is complete, and the target disk has no further
dependence on the source disk. The time of the last trigger event is normally displayed with
this status.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages a cache of the configuration
information that describes the cluster configuration and provides a focal point for configuration
commands. If the configuration node fails, another node in the cluster will assume the role.
Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy. An
SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A
counterpart SAN is often called a SAN fabric.
Error code
An error code is a value used to identify an error condition to a user. This value might map to
one or more error IDs or to values that are presented on the service panel. This value is used
to report error conditions to IBM and to provide an entry point into the service guide.
Excluded
Excluded is a status condition that describes an MDisk that the 2145 cluster has decided is
no longer sufficiently reliable to be managed by the cluster. The user must issue a command
to include the MDisk in the cluster-managed storage.
Extent
A fixed size unit of data that is used to manage the mapping of data between MDisks and
VDisks.
FC port logins
FC port logins is the number of hosts that can see any one SVC node port. Certain disk
subsystems, such as the IBM DS8000, recommend limiting the number of hosts that use
each port, to prevent excessive queuing at that port. Clearly, if the port fails or the path to that
port fails, the host might fail over to another port and the fan-in criteria might be exceeded in
this degraded mode.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB or
256 KB) in the SVC. It is also the unit by which the real size of a Space-Efficient VDisk is
extended (32 KB, 64 KB, 128 KB, or 256 KB).
Host ID
A numeric identifier assigned to a group of host FC ports or iSCSI host names for the
purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to
VDisks. The intent is to have a one-to-one relationship between hosts and host IDs, although
this relationship cannot be policed.
Image mode
Image mode is a configuration mode similar to the router mode but with the addition of cache
and copy functions. SCSI commands are not forwarded directly to the MDisk.
I/O Group
An I/O Group is a collection of VDisk and node relationships, that is, an SVC node pair that
presents a common interface to host systems. Each SVC node is associated with exactly one
I/O Group. The two nodes in the I/O Group provide access to the VDisks in the I/O Group.
ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as an “ISL
hop.” The number of “hops” is always counted on the shortest route between two N-ports
(device connections). In an SVC environment, the number of ISL hops is counted on the
shortest route between the pair of nodes farthest apart. It measures distance only in terms of
ISLs in the fabric.
Local fabric
Because the SVC supports remote copy, there might be significant distances between the
components in the local cluster and those components in the remote cluster. The local fabric
is composed of those SAN components (switches, cables, and so on) that connect the
components (nodes, hosts, and switches) of the local cluster together.
LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an
abbreviation for an entity, which exhibits disk-like behavior, for example, a VDisk or an MDisk.
Node
A node is a single processing unit, which provides virtualization, cache, and copy services for
the SAN. SVC nodes are deployed in pairs called I/O Groups. One node in the cluster is
designated the configuration node.
Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connection, or
connections, to the traffic on the most heavily loaded ISL, where more than one connection is
used between switches. Oversubscription assumes a symmetrical network and a specific
workload that is applied evenly from all initiators and directed evenly to all targets. A
symmetrical network means that all of the initiators are connected at the same level and all of
the controllers are connected at the same level.
Prepare
Prepare is a configuration command that is used to cause cached data to be flushed in
preparation for a copy trigger operation.
RAS
RAS stands for reliability, availability, and serviceability.
RAID
RAID stands for a redundant array of independent disks.
Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so
no matter what component fails, data traffic will continue. Connectivity between the devices
within the SAN is maintained, although possibly with degraded performance, when an error
has occurred. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics), so that if one counterpart SAN is
destroyed, the other counterpart SAN keeps functioning.
Remote fabric
Because the SVC supports remote copy, there might be significant distances between the
components in the local cluster and those components in the remote cluster. The remote
fabric is composed of those SAN components (switches, cables, and so on) that connect the
components (nodes, hosts, and switches) of the remote cluster together.
SAN
SAN stands for storage area network.
SCSI
SCSI stands for Small Computer Systems Interface.
Tip: The IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551,
contains comprehensive information that goes into greater depth regarding the topics that
we discuss here.
We also go into much more depth about these topics in SAN Volume Controller Best
Practices and Performance Guidelines, SG24-7521, which is available at this Web site:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is
shipped with, and can operate only with, the following node types:
SAN Volume Controller 2145-CF8
SAN Volume Controller 2145-8A4
SAN Volume Controller 2145-8G4
SAN Volume Controller 2145-8F2
SAN Volume Controller 2145-8F4
It was also shipped and will operate with the SVC 2145-4F2.
When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 – 240 V,
single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
Figure 3-4 on page 71 shows a power cabling example for the 2145-CF8.
There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SVC hardware model brings internal changes; one example is the
worldwide port name (WWPN) to physical port mapping. The 2145-8G4 and 2145-CF8 have
the same mapping.
We suggest that you place the racks in separate rooms, if possible, in order to gain protection
against critical events (fire, water, power loss, and so on) that might affect one room only.
Remember the maximum distance that is supported between the nodes in one I/O Group:
100 m (328 ft.). You can extend this distance by submitting a formal SCORE request to
increase the limit and by following the rules that will be specified in any SCORE approval.
Each node in an SVC cluster needs to have at least one Ethernet connection.
IBM supports the option of having multiple console access, using the traditional SVC
hardware management console (HMC) or the IBM System Storage Productivity Center
console. Multiple Master Consoles or IBM System Storage Productivity Center consoles
can access a single cluster, but when multiple Master Consoles access one cluster, you
cannot concurrently perform configuration and service tasks.
The Master Console can be supplied either pre-installed on hardware or as software only,
which is supplied to, and subsequently installed by, the user.
With SVC 5.1, the cluster configuration node can now be accessed on both Ethernet ports,
and this capability means that the cluster can have two IPv4 addresses and two IPv6
addresses that are used for configuration purposes.
The cluster can therefore be managed by IBM System Storage Productivity Centers on
separate networks, which provides redundancy in the event of a failure of one of these
networks.
Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each
Ethernet port on every node; these IP addresses are independent of the cluster configuration
IP addresses. The command-line interface (CLI) commands for managing the cluster IP
addresses have therefore been moved from svctask chcluster to svctask chclusterip in
SVC 5.1, and new commands have been introduced to manage the iSCSI IP addresses.
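As an illustration, assuming the documented parameter names for svctask chclusterip, changing the management IP that is presented on the second Ethernet port might look like this (all addresses are placeholders):

```
svctask chclusterip -port 2 -clusterip 10.1.2.10 -gw 10.1.2.1 -mask 255.255.255.0
```

Consult the CLI reference for the full and authoritative parameter list.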
When connecting to the SVC with Secure Shell (SSH), choose one of the available IP
addresses to connect to. There is no automatic failover capability, so if one network is down,
use the other IP address.
Clients might be able to use intelligence in domain name servers (DNS) to provide partial
failover.
When using the GUI, clients can add the cluster to the SVC Console multiple times (one time
per IP address). Failover is achieved by using the functional IP address when launching the
SVC Console interface.
The zoning capabilities of the SAN switch are used to create these distinct zones. SVC 5.1
supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, which depends on the hardware platform and
on the switch where the SVC is connected.
In an environment where you have a fabric with multiple-speed switches, we recommend
connecting the SVC and the disk subsystems to the switches operating at the highest speed.
All SVC nodes in the SVC cluster are connected to the same SANs, and they present VDisks
to the hosts. These VDisks are created from MDGs that are composed of MDisks presented
by the disk subsystems. There must be three distinct zones in the fabric:
SVC cluster zone: Create one zone per fabric with all of the SVC ports cabled to this fabric
to allow SVC intracluster node communication.
Host zones: Create an SVC host zone for each server that receives storage from the SVC
cluster.
Storage zone: Create one SVC storage zone for each storage subsystem that is
virtualized by the SVC.
SAN configurations that use intracluster Metro Mirror and Global Mirror relationships do not
require additional switch zones.
SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require
the following additional switch zoning considerations:
A cluster can be configured so that it can detect all of the nodes in all of the remote
clusters. Alternatively, a cluster can be configured so that it detects only a subset of the
nodes in the remote clusters.
Use of inter-switch link (ISL) trunking in a switched fabric.
Use of redundant fabrics.
For intercluster Metro Mirror and Global Mirror relationships, you must perform the following
steps to create the additional required zones:
1. Configure your SAN so that FC traffic can be passed between the two clusters. To
configure the SAN this way, you can connect the clusters to the same SAN, merge the
SANs, or use routing technologies.
2. (Optional) Configure zoning to allow all of the nodes in the local fabric to communicate
with all of the nodes in the remote fabric.
McData Eclipse routers: If you use McData Eclipse routers, Model 1620, only 64 port
pairs are supported, regardless of the number of iFCP links that is used.
Figure 3-9 on page 78 shows an example of SVC, host, and storage subsystem connections.
Note: In SVC Version 3.1 and later, the command svcinfo lsfabric generates a
report that displays the connectivity between nodes and other controllers and hosts.
This report is particularly helpful in diagnosing SAN problems.
Zoning examples
Figure 3-10 shows an SVC cluster zoning example.
Figure 3-13 shows the use of IPv4 management and iSCSI addresses in the same subnet.
You can set up the equivalent configuration with only IPv6 addresses.
Figure 3-16 on page 83 shows the use of a redundant network and a third subnet for
management.
Figure 3-17 shows the use of a redundant network for both iSCSI data and management.
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, disk subsystems that are used by the SVC cluster are always connected to
SAN switches and nothing else.
Other disk subsystem connections out of the SAN are possible.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is
recommended. Therefore, controller A in the DS4000 can be connected to SAN A only, or
to SAN A and SAN B, and controller B in the DS4000 can be connected to SAN B only, or
to SAN B and SAN A.
Split controller configurations are supported with certain rules and configuration
guidelines. See IBM System Storage SAN Volume Controller Planning Guide, GA32-0551,
for more information.
All SVC nodes in an SVC cluster must be able to see the same set of disk subsystem
ports on each disk subsystem controller. Operation in a mode where two nodes see a
separate set of ports on the same controller becomes degraded. This degradation can
occur if inappropriate zoning was applied to the fabric. It can also occur if inappropriate
LUN masking is used. This guideline has important implications for a disk subsystem,
such as DS3000, DS4000, or DS5000, which imposes exclusivity rules on which HBA
worldwide names (WWNs) a storage partition can be mapped to.
MDisks in the SVC are LUNs assigned from the underlying disk subsystems to the SVC and
can be either managed or unmanaged. A managed MDisk is an MDisk that is assigned to an
MDG:
MDGs are collections of MDisks. An MDisk is contained within exactly one MDG.
An SVC supports up to 128 MDGs.
There is no limit to the number of VDisks that can be in an MDG other than the limit per
cluster.
MDGs also act as collections of VDisks, because each VDisk is created from the extents of
one MDG. Under normal circumstances, a VDisk is associated with exactly one MDG. The
exception to this rule is when a VDisk is migrated, or mirrored, between MDGs.
The SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1,024, and 2,048 MB. The extent size
is a property of the MDG that is set when the MDG is created. It cannot be changed, and
all MDisks that are contained in the MDG are managed with the same extent size, so all VDisks
that are associated with the MDG also use that extent size.
Table 3-1 on page 89 shows all of the extent sizes that are available in an SVC.
Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB
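The table scales linearly because an SVC cluster addresses a fixed number of extents (2^22, or 4,194,304); multiplying that count by the extent size gives the maximum cluster capacity. A minimal sketch of the relationship (the helper name is ours):

```python
# Maximum SVC cluster capacity as a function of MDG extent size.
# The cluster addresses at most 2**22 (4,194,304) extents, so
# capacity = extent_size * 2**22. Helper name is illustrative.

EXTENTS_PER_CLUSTER = 2 ** 22  # 4,194,304

def max_cluster_capacity_tib(extent_size_mib: int) -> int:
    """Return the maximum cluster capacity in TiB for a given extent size in MiB."""
    total_mib = extent_size_mib * EXTENTS_PER_CLUSTER
    return total_mib // (1024 * 1024)  # MiB -> TiB

for size in (16, 32, 64, 128, 256, 512, 1024, 2048):
    print(f"{size:>5} MB extents -> {max_cluster_capacity_tib(size)} TB")
```

For example, 16 MB extents yield 64 TB, and 2,048 MB extents yield 8 PB, matching the table.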
Number of MDGs    Upper limit of write-cache data
1                 100%
2                 66%
3                 40%
4                 30%
5 or more         25%
Think of the rule as follows: no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SVC cache starts to limit incoming I/O rates for VDisks created from the MDG. If a
particular partition reaches this upper limit, the net result is the same as a global cache
resource that is full. That is, host writes are serviced on a one-out-one-in basis as the
cache destages writes to the back-end disks. However, only writes targeted at the full
partition are limited; all I/O destined for other (non-limited) MDGs continues as normal.
Read I/O requests for the limited partition also continue as normal. However, because the
SVC is trying to destage write data at a rate that is greater than the controller can actually
sustain (otherwise, the partition would not have reached its upper limit), reads are likely to
be serviced equally slowly.
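The partition limits above can be expressed as a simple lookup. This sketch (the function name is ours) returns the write-cache ceiling for a given number of MDGs:

```python
# Upper limit of SVC write-cache capacity that a single cache partition
# (one per MDG) can occupy, per the table above. Function name is
# illustrative, not an SVC API.

def partition_upper_limit_pct(num_mdgs: int) -> int:
    """Return the per-partition write-cache upper limit, in percent."""
    if num_mdgs < 1:
        raise ValueError("a cluster needs at least one MDG")
    limits = {1: 100, 2: 66, 3: 40, 4: 30}
    return limits.get(num_mdgs, 25)  # 5 or more MDGs: 25%

print(partition_upper_limit_pct(3))  # -> 40
```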
Therefore, define the VDisks with the following considerations in mind:
Optimize the performance between the hosts and the SVC by distributing the VDisks
between the various nodes of the SVC cluster, that is, by spreading the load equally
across the nodes in the SVC cluster.
Get the level of performance, reliability, and capacity that you require by using the MDG that
corresponds to your needs (you can access any MDG from any node). That is, choose the
MDG that fulfils the demands of your VDisk with respect to performance, reliability, and
capacity.
Recommendations:
We highly recommend that you set a warning level on the used capacity so that it
provides adequate time to provision more physical storage.
Administrators must not ignore these warnings.
Use the autoexpand feature of the Space-Efficient VDisks.
If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports
to maximize high availability and performance.
In order to configure more than 256 hosts, you need to configure the host to I/O Group
(iogrp) mappings on the SVC. Each iogrp can contain a maximum of 256 hosts, so it is possible to
create 1,024 host objects on an eight-node SVC cluster. VDisks can only be mapped to a
host that is associated with the I/O Group to which the VDisk belongs.
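The 1,024-host figure follows from the per-I/O Group limit: an eight-node cluster has four I/O Groups of two nodes each, and each I/O Group can hold 256 host objects. A quick check of that arithmetic (the names are ours):

```python
# Maximum host objects on an SVC cluster: 256 hosts per I/O Group,
# two nodes per I/O Group. Names are illustrative.

HOSTS_PER_IOGRP = 256
NODES_PER_IOGRP = 2

def max_host_objects(num_nodes: int) -> int:
    """Return the maximum host objects for a cluster of the given node count."""
    return (num_nodes // NODES_PER_IOGRP) * HOSTS_PER_IOGRP

print(max_host_objects(8))  # eight-node cluster -> 1024
```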
FlashCopy guidelines
Consider these FlashCopy guidelines:
Identify each application that must have a FlashCopy function implemented for its VDisk.
FlashCopy is a relationship between VDisks. Those VDisks can belong to separate MDGs
and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.
Define which FlashCopy type best fits your requirements: No copy, Full copy, Space-Efficient,
or Incremental.
Define which FlashCopy rate best fits your requirement in terms of performance and time
to get the FlashCopy completed. The relationship of the background copy rate value to the
attempted number of grains to be split per second is shown in Table 3-3 on page 94.
Define the grain size that you want to use. Larger grain sizes can cause a longer
FlashCopy elapsed time and higher space usage in the FlashCopy target VDisk; smaller
grain sizes can have the opposite effect. Remember that the data structure and the source
data location can modify those effects. In an actual environment, check the results of your
FlashCopy procedure at every run, in terms of the data copied and the elapsed time,
comparing them to the expected SVC FlashCopy results, and, if necessary, adapt the
grains-per-second and copy rate parameters to fit your environment’s requirements.
Value      Data copied/sec    256 KB grains/sec    64 KB grains/sec
1 - 10     128 KB             0.5                  2
11 - 20    256 KB             1                    4
21 - 30    512 KB             2                    8
31 - 40    1 MB               4                    16
41 - 50    2 MB               8                    32
51 - 60    4 MB               16                   64
61 - 70    8 MB               32                   128
71 - 80    16 MB              64                   256
81 - 90    32 MB              128                  512
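The table follows a doubling rule: each band of ten in the background copy rate value doubles the attempted copy bandwidth, starting at 128 KBps for values 1 to 10, and the grains split per second are that bandwidth divided by the grain size. A sketch of the mapping (helper names are ours):

```python
# SVC FlashCopy background copy rate -> attempted copy bandwidth and
# grains split per second, reproducing the doubling rule in the table
# above. Helper names are illustrative.

def copy_rate_kbps(rate: int) -> int:
    """Attempted background copy bandwidth in KBps for a rate of 1-100."""
    if not 1 <= rate <= 100:
        raise ValueError("copy rate must be 1-100")
    band = (rate - 1) // 10          # 0 for 1-10, 1 for 11-20, ...
    return 128 * (2 ** band)

def grains_per_second(rate: int, grain_kb: int) -> float:
    """Grains of the given size (64 or 256 KB) split per second."""
    return copy_rate_kbps(rate) / grain_kb

# rate 35 falls in the 31-40 band: 1 MB/s, 4 and 16 grains per second
print(copy_rate_kbps(35), grains_per_second(35, 256), grains_per_second(35, 64))
```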
Figure 3-19 contains two redundant fabrics. Part of each fabric exists at the local cluster and
at the remote cluster. There is no direct connection between the two fabrics.
Due to the more complex interactions involved, IBM explicitly tests products of this class for
interoperability with the SVC. The current list of supported SAN routers can be found in the
supported hardware list on the SVC support Web site:
http://www.ibm.com/storage/support/2145
IBM has tested a number of FC extenders and SAN router technologies with the SVC. These
technologies must be planned, installed, and tested so that the following requirements are met:
For SVC 4.1.0.x, the round-trip latency between sites must not exceed 68 ms (34 ms one
way) for FC extenders, or 20 ms (10 ms one-way) for SAN routers.
For SVC 4.1.1.x and later, the round-trip latency between sites must not exceed 80 ms
(40 ms one-way). For Global Mirror, this limit allows a distance between the primary and
secondary sites of up to 8,000 km (4,970.96 miles) using a planning assumption of 100
km (62.13 miles) per 1 ms of round-trip link latency.
The latency of long-distance links depends upon the technology that is used to implement
them. A point-to-point dark fiber-based link typically provides a round-trip latency of
1 ms per 100 km (62.13 miles) or better. Other technologies provide longer round-trip
latencies, which affects the maximum supported distance.
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth will be required
for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes
are in each of the two clusters.
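Using the planning assumption stated above of 1 ms of round-trip latency per 100 km, the supported distance falls directly out of the round-trip latency budget. A minimal sketch (names are ours):

```python
# Maximum supported Metro/Global Mirror distance from the round-trip
# latency budget, using the planning assumption of 1 ms round trip per
# 100 km of dark fiber. Names are illustrative.

KM_PER_MS_ROUND_TRIP = 100.0

def max_distance_km(round_trip_budget_ms: float) -> float:
    """Return the planning distance for a given round-trip latency budget."""
    return round_trip_budget_ms * KM_PER_MS_ROUND_TRIP

print(max_distance_km(80))  # SVC 4.1.1 and later: 80 ms -> 8000.0 km
```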
Figure 3-20 shows the amount of heartbeat traffic, in megabits per second, that is
generated by various sizes of clusters.
These numbers represent the total traffic between the two clusters, when no I/O is taking
place to mirrored VDisks. Half of the data is sent by one cluster, and half of the data is sent
by the other cluster. The traffic is divided evenly over all available intercluster links;
therefore, if you have two redundant links, half of this traffic is sent over each link
during fault-free operation.
The bandwidth between sites must be, at the least, sized to meet the peak workload
requirements while maintaining the maximum latency specified previously. The peak
workload requirement must be evaluated by considering the average write workload over a
period of one minute or less, plus the required synchronization copy bandwidth. With no
synchronization copies active and no write I/O for disks in Metro Mirror or Global Mirror
relationships, the SVC protocols operate with the bandwidth indicated in Figure 3-20.
Figure 3-21 shows the correct relationship between VDisks in a Metro Mirror or Global Mirror
solution.
The capabilities of the storage controllers at the secondary cluster must be provisioned to
allow for the peak application workload to the Global Mirror VDisks, plus the client-defined
level of background copy, plus any other I/O being performed at the secondary site.
Otherwise, the performance of applications at the primary cluster can be limited by the
performance of the back-end storage controllers at the secondary cluster, reducing the
amount of I/O that applications can perform to Global Mirror VDisks.
We do not recommend using SATA for Metro Mirror or Global Mirror secondary VDisks
without complete review. Be careful using a slower disk subsystem for the secondary
VDisks for high performance primary VDisks, because SVC cache might not be able to
buffer all the writes, and flushing cache writes to SATA might slow I/O at the production
site.
Global Mirror VDisks at the secondary cluster must be in dedicated MDisk groups (which
contain no non-Global Mirror VDisks).
Because there are multiple data migration methods, we suggest that you choose the data
migration method that best fits your environment, your operating system platform, your kind of
data, and your application’s service level agreement.
With data migration, we recommend that you apply the following guidelines:
Choose which data migration method best fits your operating system platform, your kind of
data, and your service level agreement.
Check the interoperability matrix for the storage subsystem to which your data is being
migrated:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Choose where you want to place your data after migration in terms of the MDG related to
a specific storage subsystem tier.
Check if a sufficient amount of free space or extents are available in the target MDG.
Decide if your data is critical and must be protected by a VDisk Mirroring option or if it has
to be replicated in a remote site for disaster recovery.
Prepare in advance all of the zoning and LUN masking/host mappings that you might need,
in order to minimize downtime during the migration.
Prepare a detailed operation plan so that you do not overlook anything at data migration
time.
Tip: Technically, almost all storage controllers provide both striping (RAID-5 or RAID-10)
and a form of caching. The real advantage is the degree to which you can stripe the data,
that is, across all MDisks in a group and therefore have the maximum number of spindles
active at one time. The caching is secondary. The SVC provides additional caching to what
midrange controllers provide (usually a couple of GB), whereas enterprise systems have
much larger caches.
When discussing performance for a system, it always comes down to identifying the
bottleneck, and thereby the limiting factor of a given system. At the same time, you must
consider the workload for which you identify a limiting factor, because the component that
limits one workload might not be the component that limits other workloads.
100 Implementing the IBM System Storage SAN Volume Controller V5.1
3.4.1 SAN
The SVC now has many models: 2145-4F2, 2145-8F2, 2145-8F4, 2145-8G4, 2145-8A4, and
2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a
performance point of view, it is better to connect the SVC to 8 Gbps switches.
Correct zoning on the SAN switch will bring security and performance together. We
recommend that you implement a dual HBA approach at the host to access the SVC.
In most cases, the SVC is able to improve performance, especially on midrange to low-end
disk subsystems, older disk subsystems with slow controllers, and uncached disk systems,
for these reasons:
The SVC has the capability to stripe across disk arrays, and it can do so across the entire
set of supported physical disk resources.
The SVC has a 4 GB or 8 GB cache (24 GB in the latest 2145-CF8 model) and an
advanced caching mechanism.
The SVC’s large cache and advanced cache management algorithms also allow it to improve
upon the performance of many types of underlying disk technologies. The SVC’s capability to
manage, in the background, the destaging operations incurred by writes (while still supporting
full data integrity) has the potential to be particularly important in achieving good database
performance.
Depending upon the size, age, and technology level of the disk storage system, the total
cache available in the SVC can be larger, smaller, or about the same as that associated with
the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower
(disk controller) level of the overall system, the system as a whole can take advantage of the
larger amount of cache wherever it is located. Thus, if the storage control level of cache has
the greater capacity, expect hits to this cache to occur, in addition to hits in the SVC cache.
Also, regardless of their relative capacities, both levels of cache will tend to play an important
role in allowing sequentially organized data to flow smoothly through the system. The SVC
cannot increase the throughput potential of the underlying disks in all cases. Its ability to do
so depends upon both the underlying storage technology, as well as the degree to which the
workload exhibits “hot spots” or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, shows the SVC’s cache
partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
Assuming that there are no bottlenecks in the SAN or on the disk subsystem, remember that
specific guidelines must be followed when you are performing these tasks:
Creating an MDG
Creating VDisks
Connecting or configuring hosts that must receive disk space from an SVC cluster
You can obtain more detailed information about performance and best practices for the SVC
in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
You can obtain more information about using the TotalStorage Productivity Center to monitor
your storage subsystem in Monitoring Your Storage Subsystems with TotalStorage
Productivity Center, SG24-7364:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
See Chapter 8, “SAN Volume Controller operations using the GUI” on page 469 for detailed
information about collecting performance statistics.
You still have full management control of the SVC no matter which method you choose. IBM
System Storage Productivity Center is supplied by default when you purchase your SVC
cluster.
If you already have a previously installed SVC cluster in your environment, it is possible that
you are using the SVC Console (Hardware Management Console (HMC)). You can still use it
together with IBM System Storage Productivity Center, but you can only log in to your SVC
from one of them at a time.
If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you
are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI
is located on the cluster and is accessed through Secure Shell (SSH); the SSH client can be
installed anywhere.
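Because the CLI lives on the cluster, running a command is just an SSH invocation from any workstation whose public key is registered on the cluster. A minimal sketch of building that invocation (the cluster address, key file name, and helper are hypothetical placeholders):

```python
# Build the ssh invocation used to run one SVC CLI command on the cluster
# configuration node. The cluster IP, key file, and function name are
# hypothetical placeholders, not part of the SVC product.

def svc_cli_invocation(cluster_ip: str, private_key: str, command: str) -> list:
    """Return the argv list for running one SVC CLI command over SSH."""
    return ["ssh", "-i", private_key, f"admin@{cluster_ip}", command]

argv = svc_cli_invocation("svcclusterip", "privatekey", "svcinfo lsmdiskgrp")
print(" ".join(argv))
```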
Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.
Figure 4-2 TCP/IP ports
For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 65 and also the IBM System Storage Productivity Center: Introduction
and Planning Guide, SC23-8824.
In order to start an SVC initial configuration, Figure 4-3 shows a common flowchart that
covers all of the types of management.
In the next sections, we describe each of the steps shown in Figure 4-3.
4.2 System Storage Productivity Center overview
The System Storage Productivity Center (SSPC) is an integrated hardware and software
solution that provides a single management console for managing IBM SVC, IBM DS8000,
and other components of your data storage infrastructure.
The current release of System Storage Productivity Center consists of the following
components:
IBM Tivoli Storage Productivity Center Basic Edition 4.1.1
IBM Tivoli Storage Productivity Center Basic Edition 4.1.1 is preinstalled on the System
Storage Productivity Center server.
Tivoli Storage Productivity Center for Replication is preinstalled. An additional license
is required.
IBM SAN Volume Controller Console 5.1.0
IBM SAN Volume Controller Console 5.1.0 is preinstalled on the System Storage
Productivity Center server. Because this level of the console no longer requires a
Common Information Model (CIM) agent to communicate with the SVC, a CIM Agent is
not installed with the console. Instead, you can use the CIM Agent that is embedded in the
SVC hardware. To manage prior levels of the SVC, install the corresponding CIM Agent on
the IBM System Storage Productivity Center server. PuTTY remains installed on the
System Storage Productivity Center and is available for key generation.
IBM System Storage DS® Storage Manager 10.60 is available for you to optionally
install on the System Storage Productivity Center server, or on a remote server. The DS
Storage Manager 10.60 can manage the IBM DS3000, IBM DS4000, and IBM DS5000.
With DS Storage Manager 10.60, when you use Tivoli Storage Productivity Center to add
and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology
viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity
Center.
IBM Java™ 1.5 is preinstalled and supports DS Storage Manager 10.60. You do not need
to download Java from Sun Microsystems.
DS CIM Agent management commands. The DS CIM Agent management commands
(DSCIMCLI) for 5.4.3 are preinstalled on the System Storage Productivity Center.
Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console
1.4.
The IBM System Storage Productivity Center Console replaces the functionality of the SVC
Master Console (MC), which was a dedicated management console for the SVC. The Master
Console is still supported and will run the latest code levels of the SVC Console software
components.
IBM System Storage Productivity Center has all of the software components preinstalled and
tested on an IBM System x™ machine (the System Storage Productivity Center 2805-MC4)
with Windows installed on it.
All the software components installed on the IBM System Storage Productivity Center can be
ordered and installed on hardware that meets or exceeds minimum requirements. The SVC
Console software components are also available on the Web.
When using the IBM System Storage Productivity Center with the SVC, you have to install it
and configure it before configuring the SVC. For a detailed guide to the IBM System Storage
Productivity Center, we recommend that you refer to the IBM System Storage Productivity
Center Software Installation and User’s Guide, SC23-8823.
For information pertaining to physical connectivity to the SVC, see Chapter 3, “Planning and
configuration” on page 65.
8 GB of RAM (eight 1-inch dual inline memory modules of double-data-rate 3 (DDR3)
memory, with a data rate of 1,333 MHz)
Two 146 GB hard disk drives, each with a speed of 15,000 RPM
One Broadcom 6708 Ethernet card
One CD/DVD bay with read and write capability
Microsoft Windows 2008 Enterprise Edition
It is designed to perform System Storage Productivity Center functions. If you plan to upgrade
System Storage Productivity Center for more functions, you can purchase the Performance
Upgrade Kit to add more capacity to your hardware.
For detailed installation guidance, see the IBM System Storage Productivity Center:
Introduction and Planning Guide, SC23-8824:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448
Also, see the IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity
Center for Replication Installation and Configuration Guide, SC27-2337:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597
Figure 4-5 shows the front view of the System Storage Productivity Center Console based on
the 2805-MC4 hardware.
Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the
2805-MC4 hardware.
For detailed installation guidance, see the IBM System Storage SAN Volume Controller:
Master Console Guide, SC27-2223:
http://www-01.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=DA400&q1=english&q2=-Japanese&uid=ssg1S7002609&loc=en_US&cs=utf-8&lang=en
4.3 Setting up the SVC cluster
This section provides step-by-step instructions for building the SVC cluster initially.
4.3.1 Creating the cluster (first time) using the service panel
This section provides the step-by-step instructions that are needed to create the cluster for
the first time using the service panel.
Use Figure 4-7 as a reference for the SVC 2145-8F2 and 2145-8F4 node model buttons to be
pushed in the steps that follow. Use Figure 4-8 for the SVC Node 2145-8G4 and 2145-8A4
models. And, use Figure 4-9 as a reference for the SVC Node 2145-CF8 model.
Figure 4-8 SVC 8G4 node front and operator panel
4.3.2 Prerequisites
Ensure that the SVC nodes are physically installed. Prior to configuring the cluster, ensure
that the following information is available:
License: The license indicates whether the client is permitted to use FlashCopy,
Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.
For IPv4 addressing:
– Cluster IPv4 addresses: These addresses include one address for the cluster and
another address for the service address.
– IPv4 subnet mask.
– Gateway IPv4 address.
For IPv6 addressing:
– Cluster IPv6 addresses: These addresses include one address for the cluster and
another address for the service address.
– IPv6 prefix.
– Gateway IPv6 address.
4.3.3 Initial configuration using the service panel
After the hardware is physically installed into racks, complete the following steps to initially
configure the cluster through the service panel:
1. Choose any node that is to become a member of the cluster being created.
2. At the service panel of that node, press and release the up or down navigation button
continuously until Node: is displayed.
Important: If a time-out occurs when entering the input for the fields during these
steps, you must begin again from step 2. All of the changes are lost, so be sure to have
all of the information available before beginning again.
3. Press and release the left or right navigation button continuously until Create Cluster? is
displayed. Press the select button.
4. If IPv4 Address: is displayed on line 1 of the service display, go to step 5. If Delete
Cluster? is displayed on line 1 of the service display, this node is already a member of a
cluster. Either the wrong node was selected, or this node was already used in a previous
cluster. The ID of this existing cluster is displayed on line 2 of the service display:
a. If the wrong node was selected, this procedure can be exited by pressing the left, right,
up, or down button (it cancels automatically after 60 seconds).
b. If you are certain that the existing cluster is not required, follow these steps:
i. Press and hold the up button.
ii. Press and release the select button. Then, release the up button, which deletes the
cluster information from the node. Go back to step 1 and start again.
Important: When a cluster is deleted, all of the client data that is contained in that
cluster is lost.
5. If you are creating the cluster with IPv4, then, press the select button; otherwise for IPv6,
press the down arrow to display IPv6 Address:, and press the select button.
6. Use the up or down navigation buttons to change the value of the first field of the IP
address to the value that has been chosen.
Note: For IPv4, pressing and holding the up or down buttons increases or decreases
the IP address field by units of 10. The field value rotates from 0 to 255 with
the down button, and from 255 to 0 with the up button.
For IPv6, you perform the same steps, except that each field is a 4-digit hexadecimal
value, and the individual characters increment.
7. Use the right navigation button to move to the next field. Use the up or down navigation
buttons to change the value of this field.
8. Repeat step 7 for each of the remaining fields of the IP address.
9. When the last field of the IP address has been changed, press the select button.
10.Press the right arrow button:
a. For IPv4, IPv4 Subnet: is displayed.
b. For IPv6, IPv6 Prefix: is displayed.
11.Press the select button.
Important: Make a note of this password now. It is case sensitive. The password is
displayed only for approximately 60 seconds. If the password is not recorded, the
cluster configuration procedure must be started again from the beginning.
20.When Cluster: is displayed on line 1 of the service display and the Password: display has
timed out, the cluster was created successfully. Also, the cluster IP address is displayed
on line 2 when the initial creation of the cluster is completed.
If the cluster is not created, Create Failed: is displayed on line 1 of the service display.
Line 2 contains an error code. Refer to the error codes that are documented in IBM
System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the
reason why the cluster creation failed and the corrective action to take.
Important: At this time, do not repeat this procedure to add other nodes to the cluster.
Adding nodes to the cluster is accomplished in 7.8.2, “Adding a node” on page 388 and in
8.10.3, “Adding nodes to the cluster” on page 560.
Important: Make sure that the SVC cluster IP address (svcclusterip) can be reached
successfully with a ping command from the SVC Console.
4.4.1 Configuring the GUI
If this is the first time that the SVC administration GUI is being used, you must configure it:
1. Open the GUI using one of the following methods:
– Double-click the icon marked SAN Volume Controller Console on the SVC Console’s
desktop.
– Open a Web browser on the SVC Console and point to this address:
http://localhost:9080/ica (We accessed the SVC Console using this method.)
– Open a Web browser on a separate workstation and point to this address:
http://svcconsoleipaddress:9080/ica
2. Click Add SAN Volume Controller Cluster, and you will be presented with the window
that is shown in Figure 4-11.
3. Click OK and a pop-up window opens and prompts for the user ID and the password of the
SVC cluster, as shown in Figure 4-13. Enter the user ID admin and the cluster admin
password that was set earlier in 4.3.1, “Creating the cluster (first time) using the service
panel” on page 111, and click OK.
4. The browser accesses the SVC and displays the Create New Cluster wizard window, as
shown in Figure 4-14. Click Continue.
5. At the Create New Cluster window (Figure 4-15), fill in the following details:
– A new superuser password to replace the random one that the cluster generated: The
password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore.
It cannot start with a number and has a minimum of one character and a maximum of
15 characters.
Users: The Admin user that was previously used is no longer needed. It is replaced by
the superuser account that is created at cluster initialization time.
Starting with SVC 5.1, the CIM Agent has been moved inside the SVC cluster.
– A service password to access the cluster for service operation: The password is case
sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start
with a number and has a minimum of one character and a maximum of 15 characters.
– A cluster name: The cluster name is case sensitive and can consist of A to Z, a to z, 0
to 9, and the underscore. It cannot start with a number and has a minimum of one
character and a maximum of 15 characters.
– A service IP address to access the cluster for service operations. Choose between an
automatically assigned IP address from Dynamic Host Configuration Protocol (DHCP)
or a static IP address.
Tip: The service IP address differs from the cluster IP address. However, because
the service IP address is configured for the cluster, it must be on the same IP
subnet.
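The naming rules above (A to Z, a to z, 0 to 9, and the underscore; no leading digit; 1 to 15 characters) map to a one-line regular expression. A sketch of a validator (the function name is ours):

```python
import re

# Validate an SVC cluster name or password against the stated rules:
# A-Z, a-z, 0-9, and underscore; must not start with a digit;
# 1 to 15 characters. Function name is illustrative.

_NAME_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]{0,14}\Z")

def is_valid_svc_name(name: str) -> bool:
    """Return True if the name or password satisfies the SVC rules."""
    return bool(_NAME_RE.match(name))

print(is_valid_svc_name("ITSO_CLS3"))   # True
print(is_valid_svc_name("3cluster"))    # False: starts with a digit
print(is_valid_svc_name("a" * 16))      # False: longer than 15 characters
```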
Important: The SVC must be in a secure room if this function is enabled, because
anyone who knows the correct key sequence can reset the admin password:
Use this key sequence:
a. From the Cluster: menu item displayed on the service panel, press the left or
right button until Recover Cluster? is displayed.
b. Press the select button. Service Access? is displayed.
c. Press and hold the up button, and then press and release the select button.
This step generates a new random password. Write it down.
Important: Be careful, because pressing and holding the down button, and
pressing and releasing the select button, places the node in service mode.
6. After you have filled in the details, click Create New Cluster (Figure 4-15).
Important: Make sure that you confirm the Administrator and Service passwords and
retain them in a safe place for future use.
7. A Creating New Cluster window opens, as shown in Figure 4-16. Click Continue each
time when prompted.
Figure 4-16 Creating New Cluster
8. A Created New Cluster window opens, as shown in Figure 4-17. Click Continue.
9. A Password Changed window will confirm that the password has been modified, as shown
in Figure 4-18. Click Continue.
Note: By this time, the service panel display on the front of the configured node
displays the cluster name that was entered previously (for example, ITSO-CLS3).
10.Then, you are redirected to the License setting window, as shown in Figure 4-19. Choose
the type of license that is appropriate for your purchase, and click GO to continue.
11.Next, the Capacity Licensing Settings window is displayed, as shown in Figure 4-20. To
continue, fill out the fields for Virtualization Limit, FlashCopy Limit, and Global and Metro
Mirror Limit for the number of Terabytes that are licensed. If you do not have a license for
any of these features, leave the value at 0. Click Set License Settings.
12.A confirmation window will confirm the settings for the features, as shown in Figure 4-21.
Click Continue.
Figure 4-21 Capacity Licensing Settings confirmation
13.A window confirming that you have successfully created the initial settings for the cluster
opens, as shown in Figure 4-22.
We describe all of these steps in Chapter 7, “SAN Volume Controller operations using the
command-line interface” on page 339, and in Chapter 8, “SAN Volume Controller operations
using the GUI” on page 469.
4.5 Secure Shell overview and CIM Agent
Prior to SVC Version 5.1, Secure Shell (SSH) was used to secure data flow between the SVC
cluster configuration node (SSH server) and a client, either a command-line client through the
command-line interface (CLI) or the Common Information Model object manager (CIMOM).
The connection is secured by means of a private and public key pair:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server.
3. A private key identifies the client and is checked against the public key during the
connection. The private key must be protected.
4. The SSH server must also identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.
Secure Shell is the communication vehicle between the management system (usually the
System Storage Productivity Center) and the SVC cluster.
SSH is a client/server network application. The SVC cluster acts as the SSH server in this
relationship. The SSH client provides a secure environment from which to connect to a
remote machine. It uses the principles of public and private keys for authentication.
The communication interfaces prior to SVC version 5.1 are shown in Figure 4-24.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administration and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI (or, prior to SVC 5.1, the SVC graphical user interface (GUI)), an SSH client
must be installed on the client system, the SSH key pair must be generated on the client
system, and the client’s SSH public key must be stored on the SVC cluster or clusters.
The System Storage Productivity Center and the HMC must have the freeware
implementation of SSH-2 for Windows called PuTTY preinstalled. This software provides the
SSH client function.
Starting with SVC 5.1, the management design has been changed, and the CIM Agent has
been moved into the SVC cluster.
With SVC 5.1, SSH key authentication is no longer needed for the GUI; it is needed only for
the SVC command-line interface.
4.5.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system:
Note: These keys will be used in the step documented in 4.6, “Using IPv6” on page 136.
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-26), generate the keys:
a. Select SSH2 RSA.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.
Figure 4-26 PuTTY key generator GUI
3. Move the cursor over the blank area in order to generate the keys.
To generate keys: The blank area indicated by the message is the large blank
rectangle on the GUI inside the section of the GUI labelled Key. Continue to move the
mouse pointer over the blank area until the progress bar reaches the far right. This
action generates random characters to create a unique key pair.
4. After the keys are generated, save them for later use:
a. Click Save public key, as shown in Figure 4-27.
b. You are prompted for a name (for example, pubkey) and a location for the public key
(for example, C:\Support Utils\PuTTY). Click Save.
If another name or location is chosen, ensure that a record of the name or location is
kept, because the name and location of this SSH public key must be specified in the
steps that are documented in 4.5.2, “Uploading the SSH public key to the SVC cluster”
on page 129.
Tip: The PuTTY Key Generator saves the public key with no extension, by default.
We recommend that you use the string “pub” in naming the public key, for example,
“pubkey”, to easily differentiate the SSH public key from the SSH private key.
e. When prompted, enter a name (for example, icat) and location for the private key (for
example, C:\Support Utils\PuTTY). Click Save.
If you choose another name or location, ensure that you keep a record of it, because
the name and location of the SSH private key must be specified when the PuTTY
session is configured in the steps that are documented in 4.6, “Using IPv6” on
page 136.
We suggest that you use the default name icat.ppk, because, in SVC clusters running
on versions prior to SVC 5.1, this key has been used for icat application authentication
and must have this default name.
Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.
Important: If the private key was named something other than icat.ppk, make sure that
you rename it to the icat.ppk file in the C:\Program Files\IBM\svcconsole\cimom
folder. The GUI (which will be used later) expects the file to be called icat.ppk and for it
to be in this location. This key is no longer used in SVC 5.1, but it is still valid for the
previous version.
2. From the Create a User window, enter the user ID that you want to create and the
password. At the bottom of the window, select the access level that you want to assign to
your user (remember that Security Administrator is the maximum level) and choose the
location of the SSH public key file that you created for this user, as shown in Figure 4-30.
Click OK.
3. You have completed your user creation process and uploaded the user’s SSH public key
that will be paired later with the user’s private .ppk key, as described in 4.5.3, “Configuring
the PuTTY session for the CLI” on page 130. Figure 4-31 shows the successful upload of
the SSH admin key.
4. You have now completed the basic setup requirements for the SVC cluster using the SVC
cluster Web interface.
Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start →
Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-32), from the Category pane on the left,
click Session, if it is not selected.
Tip: The items selected in the Category pane affect the content that appears in the right
pane.
3. In the right pane, under the “Specify the destination you want to connect to” section, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures
that if there are any connection errors, they will be displayed on the user’s window.
4. From the Category pane on the left side of the PuTTY Configuration window, click
Connection → SSH to display the PuTTY SSH Configuration window, as shown in
Figure 4-33.
5. In the right pane, in the “Preferred SSH protocol version” section, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.
7. In Figure 4-34, in the right pane, in the “Private key file for authentication:” field under the
Authentication Parameters section, either browse to or type the fully qualified directory
path and file name of the SSH client private key file created earlier (for example,
C:\Support Utils\PuTTY\icat.PPK).
Figure 4-34 PuTTY Configuration: Private key location
8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, follow these steps, as shown in Figure 4-35:
a. Under the “Load, save, or delete a stored session” section, select Default Settings,
and click Save.
b. For the Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this
session.
d. Click Save.
You can now either close the PuTTY Configuration window or leave it open to continue.
Tip: Normally, output that comes from the SVC is wider than the default PuTTY window
size. We recommend that you change your PuTTY window appearance to use a font with a
character size of 8. To change, click the Appearance item in the Category tree, as shown
in Figure 4-35, and then, click Font. Choose a font with a character size of 8.
Figure 4-36 Open PuTTY command-line session
4. If this is the first time that the PuTTY application has been used since generating and
uploading the SSH key pair, a PuTTY Security Alert window opens stating that the
server’s host key is not cached, as shown in Figure 4-37. Click Yes, which invokes the CLI.
5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As
shown in Example 4-1, the private key used in this PuTTY session is now authenticated
against the public key that was uploaded to the SVC cluster.
Using a passphrase: If you are generating an SSH keypair so that you can
interactively use the CLI, we recommend that you use a passphrase so you will need to
authenticate every time that you connect to the cluster. It is possible to have a
passphrase-protected key for scripted usage, but you will have to use the expect
command or a similar command to have the passphrase passed into the ssh command.
Using IPv6: To remotely access the SVC Console and clusters running IPv6, you are
required to run Internet Explorer 7 and have IPv6 configured on your local workstation.
4. A confirmation window displays (Figure 4-40). Click X in the upper-right corner to close
this tab.
5. Before you remove the cluster from the SVC Console, test the IPv6 connectivity using the
ping command from a cmd.exe session on the System Storage Productivity Center (as
shown in Example 4-3 on page 139).
Example 4-3 Testing IPv6 connectivity to the SVC cluster
C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119
6. In the Viewing Clusters pane, in the GUI Welcome window, select the cluster that you want
to remove. Select Remove a Cluster from the list, and click Go.
7. The Viewing Clusters window reopens, without the cluster that you have removed. Select
Add a Cluster from the list, and click OK (Figure 4-41).
8. The Adding a Cluster window opens. Enter your IPv6 address, as shown in Figure 4-42,
and click OK.
9. You will be asked to enter your CIM user ID (superuser) and your password
(default=passw0rd), as shown in Figure 4-43.
10.The Viewing Clusters window reopens with the cluster displaying an IPv6 address, as
shown in Figure 4-44. Click Launch the SAN Volume Controller Console for the cluster,
and go back to modifying IP addresses, as you did in step 1.
Figure 4-44 Viewing Clusters window: Displaying the new cluster using the IPv6 address
11.In the Modify IP Addresses window, select the IPv4 address port, select Clear Port
Settings, and click Go, as shown in Figure 4-45.
Figure 4-45 Clear Port Settings
13.A second window (Figure 4-47) opens, confirming that the IPv4 stack has been disabled
and the associated addresses have been removed. Click Return.
3. Execute the setup.exe file from the location where you have saved and unzipped the
latest SVC Console file.
Figure 4-48 shows the location of the setup.exe file on our system.
Figure 4-48 Location of the setup.exe file
4. The installation wizard starts. The first window (as shown in Figure 4-49) asks you to
shut down any running Windows programs, stop all SVC services, and review the readme
file.
5. Figure 4-49 shows how to stop the SVC services.
After you have reviewed the installation instructions and the readme file, click Next.
7. The installation wizard asks you to read and accept the terms of the license agreement,
as shown in Figure 4-51. Click Next.
Figure 4-51 License agreement window
8. The installation detects your existing SVC Console installation (if you are upgrading). If it
does detect your existing SVC Console installation, it will ask you to perform these steps:
– Select Preserve Configuration if you want to keep your existing configuration. (You
must make sure that this option is checked.)
– Manually shut down the SVC Console services:
• IBM System Storage SAN Volume Controller Pegasus Server
• Service Location Protocol
• IBM WebSphere Application Server V6 - SVC
There might be differences in the existing services, depending on which version you
are upgrading from. Follow the instructions on the dialog wizard for which services to
shut down, as shown in Figure 4-52. Click Next.
Important: If you want to keep your SVC configuration, make sure that you select
Preserve Configuration. If you omit this selection, you will lose your entire SVC
Console setup, and you will have to reconfigure your console as though it were a new
installation.
9. The installation wizard then checks that the appropriate services are shut down, removes
the previous version, and shows the Installation Confirmation window, as shown in
Figure 4-53. If the wizard detects any problems, it first shows you a page detailing the
possible problems, giving you time to fix them before proceeding.
Figure 4-53 Installation Confirmation
10.Figure 4-54 shows the progress of the installation. For our environment, it took
approximately 10 minutes to complete.
11.The installation process now starts the migration for the cluster user accounts. Starting
with SVC 5.1, the CIMOM has been moved into the cluster, and it is no longer present in
the SVC Console or System Storage Productivity Center. The CIMOM authentication login
process will be performed in the ICA application when we launch the SVC management
application.
12.At the end of the user accounts migration process, you might get the error that is shown in
Figure 4-56.
Figure 4-56 SVC cluster user account migration error
15. Finally, to see the new interface, launch the SVC Console by using the icon on the
desktop. Log in and confirm that the upgrade was successful by noting the Console
Version number on the right side of the window under the graphic. See Figure 4-59.
To access the SVC, you must click Clusters on the left pane. You will be redirected to the
Viewing Clusters window, as shown in Figure 4-60.
Finally, you can manage your SVC cluster, as shown in Figure 4-62.
Starting with SVC 5.1, iSCSI is introduced as an alternative protocol for attaching hosts to
the SVC via a LAN. However, within the SVC, all communications with back-end storage
subsystems, and with other SVC clusters, take place via Fibre Channel (FC).
For iSCSI/LAN-based access to the SVC, using a single network, or two physically
separated networks, is supported. The iSCSI feature is a software feature that is
provided by the SVC 5.1 code. It is available on the new CF8 nodes and also on the
existing nodes that support the SVC 5.1 release. The existing SVC node hardware has
multiple 1 Gbps Ethernet ports. Until now, only one 1 Gbps Ethernet port has been used, and
it has been used for cluster configuration. With the introduction of iSCSI, both ports can now
be used.
Redundant paths to VDisks can be provided for the SAN, as well as for the iSCSI
environment.
Figure 5-1 shows the attachments that are supported with the SVC 5.1 release.
SVC imposes no special limit on the FC optical distance between the SVC nodes and the
host servers. A server can therefore be attached to an edge switch in a core-edge
configuration while the SVC cluster is at the core. SVC supports up to three inter-switch link
(ISL) hops in the fabric. Therefore, the server and the SVC node can be separated by up to
five actual FC links, four of which can be 10 km (6.2 miles) long if longwave small form-factor
pluggables (SFPs) are used. For high performance servers, the rule is to avoid ISL hops, that
is, connect the servers to the same switch to which the SVC is connected, if possible.
The access from a server to an SVC cluster via the SAN fabrics is defined by the use of
zoning. Consider these rules for host zoning with the SVC:
For configurations of fewer than 64 hosts per cluster, the SVC supports a simple set of
zoning rules that enables the creation of a small set of host zones for various
environments. Switch zones containing HBAs must contain fewer than 40 initiators in total,
including the SVC ports that act as initiators. Thus, a valid zone is 32 host ports, plus eight
SVC ports. This restriction exists because the order N² scaling of the number of Registered
State Change Notification (RSCN) messages with the number of initiators per zone [N]
can cause problems. We recommend that you zone using single HBA port zoning, as
described in the next paragraph.
For configurations of more than 64 hosts per cluster, the SVC supports a more restrictive
set of host zoning rules. Each HBA port must be placed in a separate zone. Also included
in this zone is exactly one port from each SVC node in the I/O Groups that are associated
with this host. We recommend that hosts are zoned this way in smaller configurations, too,
but it is not mandatory.
Switch zones containing HBAs must contain HBAs from similar hosts or similar HBAs in
the same host. For example, AIX and Windows NT® hosts must be in separate zones, and
QLogic and Emulex adapters must be in separate zones.
To obtain the best performance from a host with multiple FC ports, ensure that each FC
port of a host is zoned with a separate group of SVC ports.
To obtain the best overall performance of the subsystem and to prevent overloading, the
workload to each SVC port must be equal, typically by zoning approximately the same
number of host FC ports to each SVC FC port.
For any given VDisk, the number of paths through the SAN from the SVC nodes to a host
must not exceed eight. For most configurations, four paths to an I/O Group (four paths to
each VDisk that is provided by this I/O Group) are sufficient.
Figure 5-2 on page 156 shows an overview for a setup with servers that have two single port
HBAs each. Follow this method to connect them:
Try to distribute the actual hosts equally between two logical sets per I/O Group. Connect
hosts from each set always to the same group of SVC ports. This “port group” includes
exactly one port from each SVC node in the I/O Group. The zoning defines the correct
connections.
Using this schema provides four paths to one I/O Group for each host. It helps to maintain an
equal distribution of host connections on the SVC ports. Figure 5-2 shows an overview of this
host zoning schema.
We recommend, whenever possible, using the minimum number of paths that are necessary
to achieve sufficient redundancy in the SAN environment; for SVC environments, use no
more than four paths per I/O Group or VDisk.
Remember that all paths have to be managed by the multipath driver on the host side. If we
assume a server is connected via four ports to the SVC, each VDisk is seen via eight paths.
With 125 VDisks mapped to this server, the multipath driver has to support handling up to
1,000 active paths (8 x 125). You can obtain details and current limitations for the IBM
Subsystem Device Driver (SDD) in Storage Multipath Subsystem Device Driver User’s Guide,
GC52-1309-01, at this Web site:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1
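The path arithmetic above can be reproduced directly; this is simply the worked example from the text:

```python
# A server connected via four ports sees each VDisk via eight paths;
# 125 VDisks are mapped to this server.
paths_per_vdisk = 8
mapped_vdisks = 125
active_paths = paths_per_vdisk * mapped_vdisks
print(active_paths)  # 1000
```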
For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-3. You can combine this schema with the previous four path
zoning schema.
The port mask is associated with a host object. The port mask controls which SVC (target)
ports any particular host can access. The port mask applies to logins from any of the host
(initiator) ports associated with the host object in the configuration model. The port mask
consists of four binary bits, represented in the command-line interface (CLI) as 0 or 1. The
rightmost bit is associated with FC port 1 on each node. The leftmost bit is associated with
port 4. A 1 in any particular bit position allows access to that port and a zero denies access.
The default port mask is 1111, preserving the behavior of the product prior to the introduction
of this feature.
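The bit positions of the port mask can be illustrated with a short Python sketch (the helper name is ours, not an SVC command):

```python
def allowed_ports(port_mask: str) -> set:
    # The rightmost bit is associated with FC port 1 and the leftmost
    # with port 4; a 1 allows access to that port, a 0 denies it.
    return {pos + 1 for pos, bit in enumerate(reversed(port_mask)) if bit == "1"}

print(allowed_ports("1111"))  # {1, 2, 3, 4}: the default mask, all ports allowed
print(allowed_ports("0001"))  # {1}: only FC port 1 is accessible
```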
5.2.2 Nodes
There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible via
one or more network portals. A network portal is a component of a network entity that has a
TCP/IP network address and that can be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name and is referred to as an IQN. Remember
that this name serves only for the identification of the node; it is not the node’s address, and in
iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI
nodes to use the same addresses, or, as implemented in the SVC, the same iSCSI node to
use multiple addresses.
5.2.3 IQN
An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its
own IQN, which by default will be in this form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in SVC is defined by specifying its iSCSI initiator names. The following is an
example of an IQN of a Windows server:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator
IQNs. You can read about host creation in detail in Chapter 7, “SAN Volume Controller
operations using the command-line interface” on page 339, and in Chapter 8, “SAN Volume
Controller operations using the GUI” on page 469.
An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a user friendly string with the iSCSI name. However, the alias string is not a
substitute for the iSCSI name.
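Based on the template above, the default target IQN for a node can be sketched as follows. The helper is hypothetical, and a real cluster may normalize the names differently:

```python
def default_svc_iqn(cluster_name: str, node_name: str) -> str:
    # Fills in the default template from the text:
    # iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
    return f"iqn.1986-03.com.ibm:2145.{cluster_name}.{node_name}"

print(default_svc_iqn("ITSO-CLS1", "node1"))
# iqn.1986-03.com.ibm:2145.ITSO-CLS1.node1
```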
Figure 5-4 on page 159 shows an overview of iSCSI implementation in the SVC.
Figure 5-4 SVC iSCSI overview
A host that is using iSCSI as the communication protocol to access its VDisks on an SVC
cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of
the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node.
For iSCSI, both ports can be used.
Note that Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’
Ethernet ports is not supported for the 1 Gbps ports in this release. The support for Jumbo
Frames, that is, support for MTU sizes greater than 1,500 bytes, is planned for future SVC
releases.
For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two
IPv4 and two IPv6 addresses or iSCSI network portals can be defined. Figure 2-12 on
page 29 shows one IPv4 and one IPv6 address per Ethernet port.
5.4 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable
Challenge Handshake Authentication Protocol (CHAP) authentication, which involves
sharing a CHAP secret between the cluster and the host. If the correct secret is not
provided by the host, the SVC will not allow it to perform I/O to VDisks. The cluster can also
be assigned a CHAP secret.
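The text does not detail the SVC's CHAP exchange, but CHAP itself (RFC 1994) computes the response as an MD5 hash over the message identifier, the shared secret, and the challenge. The sketch below illustrates only that calculation, not the SVC implementation:

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier octet || shared secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The host proves knowledge of the secret by returning the same hash
# that the cluster computes for its own challenge.
host_answer = chap_response(7, b"chap-secret", b"challenge-bytes")
cluster_check = chap_response(7, b"chap-secret", b"challenge-bytes")
print(host_answer == cluster_check)  # True
```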
A new feature with iSCSI is that the IP addresses that are used to address an iSCSI target
on the SVC node can move between the nodes of an I/O Group. IP addresses will only be
moved from one node to its partner node if a node goes through a planned or unplanned
restart. If the Ethernet link to the SVC cluster fails due to a cause outside of the SVC (such
as the cable being disconnected or the Ethernet router failing), the SVC makes no attempt
to fail over an IP address to restore IP access to the cluster. To enable validation of the
Ethernet access to the nodes, each node responds to ping at the standard one-per-second
rate without frame loss.
The SVC 5.1 release introduced a new concept, which is used for handling the iSCSI IP
address failover, that is called a “clustered Ethernet port”. A clustered Ethernet port consists
of one physical Ethernet port on each node in the cluster and contains configuration settings
that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port
2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for
iSCSI or management ports.
Figure 5-5 on page 161 shows an example of an iSCSI target node failover. It gives a
simplified overview of what happens during a planned or unplanned node restart in an SVC
I/O Group:
1. During normal operation, one iSCSI target node instance is running on each SVC
node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the
management addresses if the node acts as the configuration node, are presented on the
two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target, including all of its network portal
(IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP
addresses (if N1 acted as the configuration node), will fail over to Port1/Port2 of the
partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server
will execute a reconnect to its iSCSI target, that is, the same IP addresses presented now
by a new node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP
addresses) running on N2 will fail back to N1. Again, the iSCSI initiator running on a server
will execute a reconnect to its iSCSI target. The management addresses will not fail back.
N2 will remain in the role of the configuration node for this cluster.
Figure 5-5 iSCSI node failover scenario
From the server’s point of view, it is not required to have a multipathing driver (MPIO) in place
to be able to handle an SVC node failover. In the case of a node restart, the server simply
reconnects to the IP addresses of the iSCSI target node that will reappear after several
seconds on the ports of the partner node.
The commands for the configuration of the iSCSI IP addresses have been separated from the
configuration of the cluster IP addresses.
The following commands are new commands for managing iSCSI IP addresses:
The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on
each node in the cluster.
The svctask cfgportip command assigns an IP address to each node’s Ethernet port for
iSCSI I/O.
The following commands are new commands for managing the cluster IP addresses:
The svcinfo lsclusterip command returns a list of the cluster management IP
addresses configured for each port.
The svctask chclusterip command modifies the IP configuration parameters for the
cluster.
You can obtain a detailed description of how to use these commands in Chapter 7, “SAN
Volume Controller operations using the command-line interface” on page 339.
For iSCSI-based access, using two separate networks and separating iSCSI traffic within the
networks by using a dedicated VLAN path for storage traffic will prevent any IP interface,
switch, or target port failure from compromising the host server’s access to the VDisk LUNs.
AIX-specific information: In this section, the IBM System p information applies to all AIX
hosts that are listed on the SVC interoperability support site, including IBM System i
partitions and IBM JS blades.
The following sections detail the current support information. It is vital that you check the Web
sites that are listed regularly for any updates.
For the latest information, and device driver support, always refer to this site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX
The following IBM Web site provides current interoperability information about supported
HBAs and firmware:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_pSeries
Note: The maximum number of FC ports that are supported in a single host (or logical
partition) is four. These ports can be four single-port adapters or two dual-port adapters or
a combination, as long as the maximum number of ports that are attached to the SAN
Volume Controller does not exceed four.
Perform the following steps to configure your host system to use the fast fail and dynamic
tracking attributes:
1. Issue the following command to set the FC SCSI I/O Controller Protocol Device for each
adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
The previous command was for adapter fscsi0. Example 5-1 shows the command for both
adapters on our test system running AIX 5L V5.3.
2. Issue the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
The previous example command was for adapter fscsi0. Example 5-2 on page 164 shows
the command for both adapters on our test system running AIX 5L V5.3.
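The two steps above can be scripted for every adapter. The following sketch only builds the command strings; the adapter names fscsi0 and fscsi1 come from this example system, and actually running chdev requires an AIX host:

```python
adapters = ["fscsi0", "fscsi1"]

commands = []
for adapter in adapters:
    # Step 1: enable fast fail for the FC SCSI I/O Controller Protocol Device.
    commands.append(f"chdev -l {adapter} -a fc_err_recov=fast_fail")
    # Step 2: enable dynamic tracking for the same device.
    commands.append(f"chdev -l {adapter} -a dyntrk=yes")

for cmd in commands:
    print(cmd)
```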
You can find the worldwide port name (WWPN) of your FC host adapter and check the
firmware level, as shown in Example 5-4. The network address shown in the output is the
WWPN for the FC adapter.
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location:
U0.1-P2-I4/Q1
SDD works by grouping each physical path to an SVC logical unit number (LUN), represented
by individual hdisk devices within AIX, into a vpath device. For example, if you have four
physical paths to an SVC LUN, this design produces four new hdisk devices within AIX. From
this point forward, AIX uses this vpath device to route I/O to the SVC LUN. Therefore, when
making a Logical Volume Manager (LVM) Volume Group using mkvg, we specify the vpath
device as the destination and not the hdisk device.
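The grouping that SDD performs can be sketched as follows. This is a simplified model; the serial numbers and device names are invented for illustration:

```python
from collections import defaultdict

def group_into_vpaths(hdisk_serials: dict) -> dict:
    # SDD groups every hdisk that leads to the same SVC LUN (that is,
    # the same serial number) under a single vpath device.
    by_serial = defaultdict(list)
    for hdisk, serial in sorted(hdisk_serials.items()):
        by_serial[serial].append(hdisk)
    return {f"vpath{i}": disks for i, disks in enumerate(by_serial.values())}

# Four physical paths to one SVC LUN appear as four hdisks, all
# collapsed into a single vpath that AIX then uses for I/O.
paths = {"hdisk3": "LUN-A", "hdisk4": "LUN-A", "hdisk5": "LUN-A", "hdisk6": "LUN-A"}
print(group_into_vpaths(paths))
# {'vpath0': ['hdisk3', 'hdisk4', 'hdisk5', 'hdisk6']}
```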
The SDD support matrix for AIX is available at this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX
SDD/SDDPCM installation
After downloading the appropriate version of SDD, install it using the standard AIX installation
procedure. The currently supported SDD levels are available at:
http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329528&taskind=2
Check the driver readme file and make sure that your AIX system fulfills all of the prerequisites.
SDD installation
In Example 5-5, we show the appropriate version of SDD downloaded into the /tmp/sdd
directory. From here, we extract it and initiate the inutoc command, which generates a
dot.toc (.toc) file that is needed by the installp command prior to installing SDD. Finally,
we initiate the installp command, which installs SDD onto this AIX host.
The 2145 devices.fcp file: A specific “2145” devices.fcp file no longer exists. The
standard devices.fcp file now has combined support for SVC/Enterprise Storage
Server/DS8000/DS6000.
We can also check that the SDD server is operational, as shown in Example 5-7.
Enabling the SDD or SDDPCM Web interface is shown in 5.15, “Using SDDDSM, SDDPCM,
and SDD Web interface” on page 251.
SDDPCM installation
In Example 5-8, we show the appropriate version of SDDPCM downloaded into the
/tmp/sddpcm directory. From here, we extract it and initiate the inutoc command, which
generates a dot.toc (.toc) file that is needed by the installp command prior to installing
SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX
host.
Example 5-9 checks the installation of SDDPCM.
Enabling the SDD or SDDPCM Web interface is shown in 5.15, “Using SDDDSM, SDDPCM,
and SDD Web interface” on page 251.
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3
Before adding a new volume from the SVC, the AIX host system Kanaga had a simple, typical
configuration, as shown in Example 5-10.
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
#lsvg
rootvg
In Example 5-11, we show SVC configuration information relating to our AIX host, specifically,
the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this
configuration.
Using the SVC CLI, we can check that the host WWPNs, which are listed in Example 5-4 on
page 164, are logged into the SVC for the host definition “aix_test”, by entering:
svcinfo lshost aix_test
We can also find the serial numbers of the VDisks using the following command:
svcinfo lshostvdiskmap
copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap Kanaga0001
id name SCSI_id host_id host_name wwpn vdisk_UID
13 Kanaga0001 0 2 Kanaga 10000000C932A7FB 60050768018301BF2800000000000015
13 Kanaga0001 0 2 Kanaga 10000000C932A800 60050768018301BF2800000000000015
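After SDD has later built the vpaths, the vdisk_UID shown by svcinfo lsvdiskhostmap can be matched against the SERIAL field of the datapath query device output to identify the owning vpath. A hedged sketch over illustrative sample text (the field layout below is approximate, not captured from a live host):

```shell
# Match a vdisk_UID against the SERIAL field of `datapath query device`
# output. The sample lines below are illustrative only.
uid=60050768018301BF2800000000000015

datapath_sample='DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  SERIAL: 60050768018301BF2800000000000014
DEV#:   1  DEVICE NAME: vpath1  TYPE: 2145  SERIAL: 60050768018301BF2800000000000015'

# Print the device name of the line whose SERIAL matches the UID.
printf '%s\n' "$datapath_sample" | awk -v uid="$uid" '$0 ~ uid { print $5 }'   # prints: vpath1
```

On a host, you would pipe the real datapath query device output instead of the sample text.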
We need to run cfgmgr on the AIX host to discover the new disks and to enable us to start the
vpath configuration. Running the configuration manager (cfgmgr) on each FC adapter alone
does not create the vpaths; it only creates the new hdisks. To configure the vpaths, we must
also run the cfallvpath command after issuing the cfgmgr command on each FC adapter:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
# cfallvpath
Alternatively, use the cfgmgr -vS command to check the complete system. This command
will probe the devices sequentially across all FC adapters and attached disks; however, it is
extremely time intensive:
# cfgmgr -vS
The raw SVC disk configuration of the AIX host system now appears, as shown in
Example 5-12. We can see the multiple hdisk devices, representing the multiple routes to the
same SVC LUN, and we can see the vpath devices available for configuration.
Example 5-12 VDisks from SVC added with multiple separate paths for each VDisk
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 SAN Volume Controller Device
hdisk4 Available 1Z-08-02 SAN Volume Controller Device
hdisk5 Available 1Z-08-02 SAN Volume Controller Device
hdisk6 Available 1Z-08-02 SAN Volume Controller Device
hdisk7 Available 1D-08-02 SAN Volume Controller Device
hdisk8 Available 1D-08-02 SAN Volume Controller Device
hdisk9 Available 1D-08-02 SAN Volume Controller Device
hdisk10 Available 1D-08-02 SAN Volume Controller Device
hdisk11 Available 1Z-08-02 SAN Volume Controller Device
hdisk12 Available 1Z-08-02 SAN Volume Controller Device
hdisk13 Available 1Z-08-02 SAN Volume Controller Device
hdisk14 Available 1Z-08-02 SAN Volume Controller Device
hdisk15 Available 1D-08-02 SAN Volume Controller Device
hdisk16 Available 1D-08-02 SAN Volume Controller Device
hdisk17 Available 1D-08-02 SAN Volume Controller Device
hdisk18 Available 1D-08-02 SAN Volume Controller Device
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver
To make a Volume Group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg
command passing the vpath device as a parameter instead of the hdisk device, which is
shown in Example 5-13 on page 170.
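A hedged sketch of that invocation follows; the Volume Group and vpath names are the ones used in this chapter, and -y names the new Volume Group.

```shell
# Sketch: the Volume Group is created on the SDD pseudo device (vpath1),
# not on one of the underlying hdisks. AIX-only command, wrapped in a
# function so the sketch stays self-contained.
make_vg_on_vpath() {
    mkvg -y itsoaixvg vpath1
}
```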
Now, by running the lspv command, we can see that vpath1 has been assigned into the
itsoaixvg Volume Group, as shown in Example 5-14.
Example 5-14 Showing the vpath assignment into the Volume Group
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
vpath1 0009cddabce27ba5 itsoaixvg active
The lsvpcfg command also displays the new relationship between vpath1 and the itsoaixvg
Volume Group, but it also shows each hdisk that is associated with vpath1, as shown in
Example 5-15.
In Example 5-16, running the lspv vpath1 command shows a more verbose output for
vpath1.
Example 5-17 SDD commands used to check the availability of the adapters
#datapath query adapter
Active Adapters :2
In Example 5-18, we see detailed information about each vpath device. Initially, we see that
vpath1 is the only vpath device in an open status. It is open, because it is the only vpath that
is currently assigned to a Volume Group. Additionally, for vpath1, we see that only path 1 and
path 3 have been selected (used) by SDD. These paths are the two physical paths that
connect to the preferred node of the I/O Group of this SVC cluster. The remaining two paths
within this vpath device are only accessed in a failover scenario.
Example 5-18 SDD commands that are used to check the availability of the devices
Total Devices : 4
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD
The itsoaixvg Volume Group is created using vpath1. A logical volume is created using the
Volume Group. Then, the testlv1 file system is created and mounted on the /testlv1 mount
point, as shown in Example 5-19.
Example 5-19 Host system new Volume Group and file system configuration
#lsvg -o
itsoaixvg
rootvg
#lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv01 jfs2log 1 1 1 open/syncd N/A
fslv00 jfs2 128 128 1 open/syncd /teslv1
fslv01 jfs2 128 128 1 open/syncd /teslv2
#df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 0.03 0.01 62% 1357 31% /
/dev/hd2 9.06 4.32 53% 17341 2% /usr
/dev/hd9var 0.03 0.03 10% 137 3% /var
/dev/hd3 0.12 0.12 7% 31 1% /tmp
/dev/hd1 0.03 0.03 2% 11 1% /home
/proc - - - - - /proc
/dev/hd10opt 0.09 0.01 86% 1947 38% /opt
/dev/lv00 0.41 0.39 4% 19 1% /usr/sys/inst.images
/dev/fslv00 2.00 2.00 1% 4 1% /teslv1
/dev/fslv01 2.00 2.00 1% 4 1% /teslv2
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM
Before adding a new volume from the SVC, the AIX host system Atlantic had a simple, typical
configuration, as shown in Example 5-20.
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg
In Example 5-22 on page 174, we show the SVC configuration information relating to our AIX
host, specifically the host definition, the VDisks that were created for this host, and the
VDisk-to-host mappings for this configuration.
Our example host is named Atlantic. Example 5-21 shows the HBA information for our
example host.
# lscfg -vpl fcs1
fcs1 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A644
Manufacturer................001E
Customer Card ID Number.....2765
FRU Number.................. 00P4495
Network Address.............10000000C932A865
ROS Level and ID............02C039D0
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401411
Device Specific.(Z5)........02C039D0
Device Specific.(Z6)........064339D0
Device Specific.(Z7)........074339D0
Device Specific.(Z8)........20000000C932A865
Device Specific.(Z9)........CS3.93A0
Device Specific.(ZA)........C1D3.93A0
Device Specific.(ZB)........C2D3.93A0
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
# lscfg -vpl fcs2
fcs2 U0.1-P2-I5/Q1 FC Adapter
Part Number.................80P4383
EC Level....................A
Serial Number...............1F5350CD42
Manufacturer................001F
Customer Card ID Number.....2765
FRU Number.................. 80P4384
Network Address.............10000000C94C8C1C
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#
Using the SVC CLI, we can check that the host WWPNs, as listed in Example 5-22, are
logged into the SVC for the host definition Atlantic, by entering this command:
svcinfo lshost Atlantic
We can also discover the serial numbers of the VDisks by using the following command:
svcinfo lshostvdiskmap Atlantic
We need to run the cfgmgr command on the AIX host to discover the new disks and to enable
us to use the disks:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
Alternatively, use the cfgmgr -vS command to check the complete system. This command
will probe the devices sequentially across all FC adapters and attached disks; however, it is
extremely time intensive:
# cfgmgr -vS
The raw SVC disk configuration of the AIX host system now appears, as shown in
Example 5-23. We can see the multiple MPIO FC 2145 devices, representing the SVC LUNs.
Example 5-23 VDisks from SVC added with multiple paths for each VDisk
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 MPIO FC 2145
hdisk4 Available 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
To make a Volume Group (for example, itsoaixvg) to host the LUNs, we use the mkvg
command passing the device as a parameter. This action is shown in Example 5-24.
Example 5-24 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2
Now, by running the lspv command, we can see the disks and the assigned Volume Groups,
as shown in Example 5-25.
Example 5-25 Showing the vpath assignment into the Volume Group
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
hdisk3 0009cdca28b589f5 itsoaixvg active
hdisk4 0009cdca28b87866 itsoaixvg1 active
hdisk5 0009cdca28b8ad5b itsoaixvg2 active
In Example 5-26 on page 176, running the lspv hdisk3 command shows a more verbose
output for one of the SVC LUNs.
Example 5-27 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter
Active Adapters :2
From Example 5-28, we see detailed information about each MPIO device. The asterisk (*)
next to the path numbers shows which paths have been selected (used) by SDDPCM. These
paths are the two physical paths that connect to the preferred node of the I/O Group of this
SVC cluster. The remaining two paths within this MPIO device are only accessed in a failover
scenario.
Example 5-28 SDDPCM commands that are used to check the availability of the devices
Total Devices : 3
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi1/path0 OPEN NORMAL 37 0
1 fscsi1/path1 OPEN NORMAL 66 0
2 fscsi2/path2 OPEN NORMAL 71 0
3* fscsi2/path3 OPEN NORMAL 38 0
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3. A logical volume is created using the
Volume Group. Then, the testlv1 file system is created and mounted on the /testlv1 mount
point, as shown in Example 5-29.
Example 5-29 Host system new Volume Group and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfs2log 1 1 1 closed/syncd N/A
fslv00 jfs2 384 384 1 closed/syncd /itsoaixvg
#
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
2. To identify to which vpath this VDisk is associated on the AIX host, we use the datapath
query device SDD command, as shown in Example 5-19 on page 172. Here, we can see
that the VDisk with vdisk_UID 60050768018301BF2800000000000016 is associated with
vpath1, because the vdisk_UID matches the SERIAL field on the AIX host.
3. To see the size of the volume on the AIX host, we use the lspv command, as shown in
Example 5-31. This command shows that the volume size is 5,112 MB, equal to 5 GB, as
shown in Example 5-30 on page 178.
4. To expand the volume on the SVC, we use the svctask expandvdisksize command to
increase the capacity on the VDisk. In Example 5-32, we expand the VDisk by 1 GB.
5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here,
we can see that the Kanaga0002 VDisk has been expanded to a capacity of 6 GB
(Example 5-33).
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 6.00GB
real_capacity 6.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
6. AIX has not yet recognized a change in the capacity of the vpath1 volume, because no
dynamic mechanism exists within the operating system to provide a configuration update
communication. Therefore, to encourage AIX to recognize the extra capacity on the
volume without stopping any applications, we use the chvg -g fc_source_vg command,
where fc_source_vg is the name of the Volume Group to which vpath1 belongs.
If AIX does not return any messages, the command was successful, and the volume
changes in this Volume Group have been saved. If AIX cannot see any changes in the
volumes, it will return an explanatory message.
7. To verify that the size of vpath1 has changed, we use the lspv command again, as shown
in Example 5-34.
Example 5-34 Verify that AIX can see the newly expanded VDisk
#lspv vpath1
PHYSICAL VOLUME: vpath1 VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 0009cddabce27ba5 VG IDENTIFIER
0009cdda00004c000000011abce27c89
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 767 (6136 megabytes) VG DESCRIPTORS: 2
FREE PPs: 128 (1024 megabytes) HOT SPARE: no
USED PPs: 639 (5112 megabytes) MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 00..00..00..00..128
USED DISTRIBUTION: 154..153..153..153..26
Here, we can see that the volume now has a size of 6,136 MB, equal to 6 GB. Now, we can
expand the file systems in this Volume Group to use the new capacity.
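The lspv figures can be cross-checked by hand: AIX reports capacity as physical partitions (PPs) multiplied by the PP size, which is 8 MB here. A quick arithmetic check of the numbers in Example 5-34:

```shell
# Cross-check of the lspv output above: capacity = PP count x PP size (8 MB).
pp_size_mb=8
total_pps=767   # TOTAL PPs after the expansion
used_pps=639
free_pps=128

echo "total: $(( total_pps * pp_size_mb )) MB"   # 767 x 8 = 6136 MB, about 6 GB
echo "used:  $(( used_pps * pp_size_mb )) MB"    # 639 x 8 = 5112 MB
echo "free:  $(( free_pps * pp_size_mb )) MB"    # 128 x 8 = 1024 MB
```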
The AIX installation images from IBM developerWorks® are available at this Web site:
http://sourceforge.net/projects/openssh-aix
5.6.1 Configuring Windows Server 2000, Windows 2003 Server, and Windows
Server 2008 hosts
This section provides an overview of the requirements for attaching the SVC to a host running
Windows Server 2000, Windows 2003 Server, or Windows Server 2008.
Before you attach the SVC to your host, make sure that all of the following requirements are
fulfilled:
For the Windows Server 2003 x64 Edition operating system, you must install the Hotfix from
KB 908980. If you do not install it before operation, preferred pathing is not available. You
can find the Hotfix at this Web site:
http://support.microsoft.com/kb/908980
Check LUN limitations for your host system. Ensure that there are enough FC adapters
installed in the server to handle the total LUNs that you want to attach.
5.6.3 Hardware lists, device driver, HBAs, and firmware levels
The latest information about supported hardware, device driver, and firmware is available at
this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_Windows
At this Web site, you will also find the hardware list for supported HBAs and the driver levels
for Windows. Check the supported firmware and driver level for your HBA and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA. In
most manufacturers’ driver readme files, you will find instructions for the Windows registry
parameters that have to be set for the HBA driver:
For the Emulex HBA driver, SDD requires the port driver, not the miniport port driver.
For the QLogic HBA driver, SDDDSM requires the storport version of the miniport driver.
For the QLogic HBA driver, SDD requires the scsiport version of the miniport driver.
In IBM System x servers, the HBA must always be installed in the first slots. If you install, for
example, two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2,
and the network cards can be installed in the remaining slots.
For the Emulex HBA StorPort driver, accept the default settings and set the topology to 1 (1 =
F Port Fabric). For the Emulex HBA FC Port driver, use the default settings and change the
parameters to the parameters that are provided in Table 5-1.
Maximum number of LUNs (MaximumLun): equal to or greater than the number of SVC
LUNs that are available to the HBA
Note: The parameters that are shown in Table 5-1 correspond to the parameters in
HBAnywhere.
On your Windows server hosts, change the disk I/O timeout value to 60 in the Windows
registry:
1. In Windows, click Start, and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-6.
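The same registry change can be scripted rather than made through regedit. A hedged sketch of the equivalent Windows command line follows, held in a string here so the sketch stays self-contained; you would run the command itself in an elevated prompt on the Windows host.

```shell
# Equivalent Windows command line for steps 1-4 above (runs on the Windows
# host, not here). TimeOutValue is a decimal number of seconds.
reg_cmd='reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f'
echo "$reg_cmd"
```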
For example, for Windows NT 4, the supported SDD level is 1.5.1.1.
See the following Web site for the latest information about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en
Before installing the SDD driver, the HBA driver has to be installed on your system. SDD
requires the HBA SCSI port driver.
After downloading the appropriate version of SDD from the Web site, extract the file and run
setup.exe to install SDD. A command line will appear. Answer Y (Figure 5-7) to install the
driver.
After the setup has completed, answer Y again to reboot your system (Figure 5-8).
To check if your SDD installation is complete, open the Windows Device Manager, expand
SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click
Properties (see Figure 5-9 on page 187).
Figure 5-9 Subsystem Device Driver Management
MPIO is not shipped with the Windows operating system; storage vendors must package the
MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM
multipath I/O solution that is based on Microsoft MPIO technology; it is a device-specific
module specifically designed to support IBM storage devices on Windows 2003 Server and
Windows Server 2008 servers.
The intention of MPIO is to provide tighter integration of multipath storage solutions with the
operating system, and it allows the use of multiple paths in the SAN infrastructure during the
boot process for SAN boot hosts.
SDDDSM is the IBM multipath I/O solution that is based on Microsoft MPIO technology, and it
is a device-specific module that is specifically designed to support IBM storage devices.
Together with MPIO, it is designed to support the multipath configuration environments in the
IBM System Storage SAN Volume Controller. It resides in a host system with the native disk
device driver and provides the following functions:
Enhanced data availability
Dynamic I/O load-balancing across multiple paths
Automatic path failover protection
Concurrent download of licensed internal code
Path-selection policies for the host system
No SDDDSM support for Windows Server 2000
For the HBA driver, SDDDSM requires the StorPort version of the HBA miniport driver
Table 5-3 shows, at the time of writing, the supported SDDDSM driver levels.
The installation procedures for SDDDSM and SDD are the same, but remember that you have
to use the StorPort HBA driver instead of the SCSI driver. We describe the SDD installation in
5.6.6, “Installing the SDD driver on Windows” on page 185. After completing the installation,
you will see the Microsoft MPIO in Device Manager (Figure 5-11 on page 190).
We describe the SDDDSM installation for Windows Server 2008 in 5.8, “Example
configuration of attaching an SVC to a Windows Server 2008 host” on page 200.
Before adding a new volume from the SVC, the Windows 2003 Server host system had the
configuration that is shown in Figure 5-12 on page 191, with only local disks.
Figure 5-12 Windows 2003 Server host system before adding a new volume from SVC
We can check that the WWPN is logged into the SVC for the host named Senegal by entering
the following command (Example 5-35):
svcinfo lshost Senegal
The configuration of the Senegal host, the Senegal_bas0001 VDisk, and the mapping
between the host and the VDisk are defined in the SVC, as described in Example 5-36. In our
example, the Senegal_bas0002 and Senegal_bas0003 VDisks have the same configuration as
the Senegal_bas0001 VDisk.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
We can also obtain the serial number of the VDisks by entering the following command
(Example 5-37):
svcinfo lsvdiskhostmap Senegal_bas0001
id name SCSI_id host_id host_name wwpn vdisk_UID
7 Senegal_bas0001 0 1 Senegal 210000E08B89B9C0
6005076801A180E9080000000000000F
7 Senegal_bas0001 0 1 Senegal 210000E08B89CCC2
6005076801A180E9080000000000000F
After installing the necessary drivers and the rescan disks operation completes, the new disks
are found in the Computer Management window, as shown in Figure 5-13.
Figure 5-13 Windows 2003 Server host system with three new volumes from SVC
In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device
(Figure 5-14 on page 194). The number of IBM 2145 SCSI Disk Devices that you see is equal
to:
(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs)
The IBM 2145 Multi-Path Disk Devices are the devices that are created by the multipath driver
(Figure 5-14 on page 194). The number of these devices is equal to the number of VDisks
that are presented to the host.
When following the SAN zoning recommendation, this calculation gives us, for one VDisk and
a host with two HBAs:
(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x
2 = 4 paths
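The path arithmetic above can be captured in a small helper, which is handy when sanity-checking larger configurations. This is a sketch; the function name is ours, not part of SDD.

```shell
# Path-count formula from the text:
#   paths = (number of VDisks) x (paths per I/O Group per HBA) x (number of HBAs)
expected_paths() {
    vdisks=$1; paths_per_iogrp_per_hba=$2; hbas=$3
    echo $(( vdisks * paths_per_iogrp_per_hba * hbas ))
}

expected_paths 1 2 2    # the example above: prints 4
```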
You can check if all of the paths are available if you select Start → All Programs →
Subsystem Device Driver (DSM) → Subsystem Device Driver (DSM). The SDD (DSM)
command-line interface will appear. Enter the following command to see which paths are
available to your system (Example 5-38).
Total Devices : 3
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 162 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 155 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
C:\Program Files\IBM\SDDDSM>
Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If one
path state is CLOSE, it means that the system is missing a path that it saw during startup.
If you restart your system, the CLOSE paths are removed from this view.
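The check that the note describes can be scripted by counting CLOSE entries in the datapath query device output. A sketch over illustrative sample lines; on a host, you would pipe the real command output instead.

```shell
# Sketch: count CLOSE paths in `datapath query device` output.
# The sample lines below are illustrative, not captured from a live host.
sample='0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 160 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 162 0
2 Scsi Port3 Bus0/Disk2 Part0 CLOSE NORMAL 155 0'

closed=$(printf '%s\n' "$sample" | grep -c ' CLOSE ')
echo "closed paths: $closed"    # prints: closed paths: 1
```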
Important:
For VDisk expansion to work on Windows Server 2000, apply Windows Server 2000
Hotfix Q327020, which is available from the Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/327020
If you want to expand a logical drive in an extended partition in Windows 2003 Server,
apply the Hotfix from KB 841650, which is available from the Microsoft Knowledge Base
at this Web site:
http://support.microsoft.com/kb/841650/en-us
Use the updated Diskpart version for Windows 2003 Server, which is available from the
Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/923076/en-us
If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all nodes except one node, and that applications in the resource that use the volume
that is going to be expanded are stopped before expanding the volume. Applications running
in other resources can continue. After expanding the volume, start the application and the
resource, and then restart the other nodes in the MSCS.
An example of how to expand a volume on a Windows 2003 Server host, where the volume is
a VDisk from the SVC, is shown in the following discussion.
To list a VDisk size, use the svcinfo lsvdisk <VDisk_name> command. This command gives
this information for the Senegal_bas0001 before expanding the VDisk (Example 5-36 on
page 191).
Here, we can see that the capacity is 10 GB, and also what the vdisk_UID is. To find which
vpath this VDisk is on the Windows 2003 Server host, we use the datapath query device
SDD command on the Windows host (Figure 5-15).
To see the size of the volume on the Windows host, we use Disk Manager, as shown in
Figure 5-15.
This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use
the svctask expandvdisksize command to increase the capacity on the VDisk. In this
example, we expand the VDisk by 1 GB (Example 5-39).
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
To check that the VDisk has been expanded, we use the svcinfo lsvdisk command. In
Example 5-39, we can see that the Senegal_bas0001 VDisk has been expanded to 11 GB in
capacity.
This window shows that Disk1 now has 1 GB unallocated new capacity. To make this capacity
available for the file system, use the following commands, as shown in Example 5-40:
diskpart Starts DiskPart in a DOS prompt
list volume Shows you all available volumes
select volume Selects the volume to expand
detail volume Displays details for the selected volume, including the unallocated capacity
extend Extends the volume to the available unallocated space
Disk ### Status Size Free Dyn Gpt
-------- ---------- ------- ------- --- ---
* Disk 1 Online 11 GB 1020 MB
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
DISKPART> extend
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
After extending the volume, the detail volume command shows that there is no free capacity
on the volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size, as shown in Figure 5-17.
This example uses a Windows Basic Disk. Dynamic disks can be expanded
by expanding the underlying SVC VDisk. The new space will appear as unallocated space at
the end of the disk.
Important: Never try to upgrade your Basic Disk to Dynamic Disk or vice versa without
backing up your data, because this operation is disruptive for the data, due to a change in
the position of the logical block address (LBA) on the disks.
As a prerequisite for this example, we have already performed steps 1 to 5 for the hardware
installation, SAN configuration is done, and the hotfixes are applied. The Disk timeout value is
set to 60 seconds (see 5.6.5, “Changing the disk timeout on Microsoft Windows Server” on
page 185), and we will start with the driver installation.
5. Right-click the HBA, and select Update Driver Software (Figure 5-18).
7. Enter the path to the extracted QLogic driver, and click Next (Figure 5-20 on page 202).
9. When the driver update is complete, click Close to exit the wizard (Figure 5-22).
10.Repeat steps 1 to 8 for all of the HBAs that are installed in the system.
5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system.
After the reboot, the SDDDSM installation is complete. You can verify the installation
completion in Device Manager, because the SDDDSM device will appear (Figure 5-24 on
page 204), and the SDDDSM tools will have been installed (Figure 5-25 on page 204).
5.8.3 Attaching SVC VDisks to Windows Server 2008
Create the VDisks on the SVC and map them to the Windows Server 2008 host.
In this example, we have mapped three SVC disks to the Windows Server 2008 host named
Diomede, as shown in Example 5-41.
Perform the following steps to use the devices on your Windows Server 2008 host:
1. Click Start, and click Run.
2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.
3. Select Action, and click Rescan Disks (Figure 5-26).
4. The SVC disks will now appear in the Disk Management window (Figure 5-27 on
page 206).
After you have assigned the SVC disks, they are also available in Device Manager. The three
assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in
the Device Manager (Figure 5-28).
5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and click Subsystem Device Driver DSM (Figure 5-29). The SDDDSM
Command Line Utility will appear.
Figure 5-29 Windows Server 2008 Subsystem Device Driver DSM utility
6. Enter the datapath query device command and press Enter (Example 5-42). This
command will display all of the disks and the available paths, including their states.
Total Devices : 3
C:\Program Files\IBM\SDDDSM>
SAN zoning recommendation: When following the SAN zoning recommendation, with one
VDisk and a host with two HBAs, we get (number of VDisks) x (number of paths per I/O
Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.
7. Right-click the disk in Disk Management, and select Online to place the disk online
(Figure 5-30).
10.Mark all of the disks that you want to initialize, and click OK (Figure 5-32).
11.Right-click the unallocated disk space, and select New Simple Volume (Figure 5-33).
14.Assign a drive letter, and click Next (Figure 5-35 on page 210).
16.Click Finish, and repeat this step for every SVC disk on your host system (Figure 5-37).
Windows Server 2008 also uses the DiskPart utility to extend volumes. To start it, select
Start → Run, and enter DiskPart. The DiskPart utility will appear. The procedure is exactly
the same as in Windows 2003 Server; follow the Windows 2003 Server description to
extend your volume.
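As a sketch of that procedure, the DiskPart steps can be collected into a script file and run non-interactively with diskpart /s; the volume number below is a placeholder assumption that you must replace with the number reported by list volume:

```shell
# Write a DiskPart script for extending a volume (volume number 1 is a
# placeholder). On the Windows host you would run it with:
#   diskpart /s extend_vol.txt
cat > extend_vol.txt << 'EOF'
list volume
select volume 1
extend
EOF
cat extend_vol.txt
```

Running extend without a size parameter grows the selected volume into all of the contiguous unallocated space that follows it.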
When the VDisk mapping is removed and we rescan the disks, Disk Management on the
server removes the disk, and the vpath enters the CLOSE state on the server. We can verify
these actions by using the SDD datapath query device command, but the closed vpath is
not removed until the server is rebooted.
In the following sequence of examples, we show how to remove an SVC VDisk from a
Windows server. We show it on a Windows 2003 Server operating system, but the steps also
apply to Windows 2000 Server and Windows Server 2008.
We will remove Disk 1. To find the correct VDisk information, we find the Serial/UID number
using SDD (Example 5-43).
Total Devices : 3
Knowing the Serial/UID of the VDisk and the host name Senegal, we find the VDisk mapping
to remove by using the lshostvdiskmap command on the SVC, and then we remove the
actual VDisk mapping (Example 5-44).
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
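To avoid removing the wrong mapping, you can match the Serial/UID reported by SDD against captured lshostvdiskmap output. This is a minimal sketch that works on sample output modeled on Example 5-44; the UID value and the rmvdiskhostmap command shown in the comment follow the chapter's example, and the saved-output file name is an assumption.

```shell
# Find the VDisk name for a given UID in captured lshostvdiskmap output.
# The UID and the captured lines below are sample data from this chapter.
uid=6005076801A180E90800000000000010
cat > lshostvdiskmap.out << 'EOF'
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
EOF
# Column 7 is the vdisk_UID; column 5 is the vdisk_name.
vdisk_name=$(awk -v uid="$uid" '$7 == uid {print $5}' lshostvdiskmap.out)
echo "Remove mapping for: $vdisk_name"
# On the SVC you would then run:
#   svctask rmvdiskhostmap -host Senegal $vdisk_name
```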
Here, we can see that the VDisk is removed from the server. On the server, we then perform
a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been
removed, as shown in Figure 5-38.
SDD also shows us that the status for all paths to Disk1 has changed to CLOSE, because the
disk is not available (Example 5-45 on page 214).
Total Devices : 3
The disk (Disk1) is now removed from the server. However, to remove the SDD information
for the disk, we need to reboot the server; this reboot can wait until a more suitable time.
We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. This program is in the SSHClient\PuTTY directory of the SAN Volume
Controller Console CD-ROM, or you can download PuTTY from the following Web site:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
The following Web site offers SSH client alternatives for Windows:
http://www.openssh.com/windows.html
Cygwin software has an option to install an OpenSSH client. You can download Cygwin from
the following Web site:
http://www.cygwin.com/
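Once an SSH client is installed, the SVC CLI is reached over SSH as the admin user with key-based authentication. The sketch below only assembles the command lines for review; the key file names and cluster IP address are placeholder assumptions.

```shell
# Build the command lines used to reach the SVC CLI (values are placeholders).
keyfile=icat.ppk            # PuTTY-format private key (assumption)
cluster_ip=9.43.86.117      # SVC cluster IP address (assumption)

# Using PuTTY's command-line client plink:
plink_cmd="plink -i $keyfile admin@$cluster_ip svcinfo lscluster"
# Using OpenSSH with an OpenSSH-format private key:
ssh_cmd="ssh -i privatekey admin@$cluster_ip svcinfo lscluster"

echo "$plink_cmd"
echo "$ssh_cmd"
```

Note that PuTTY and OpenSSH use separate private key formats; a key generated with puttygen must be exported to the OpenSSH format before it can be used with ssh.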
We discuss more information about the CLI in Chapter 7, “SAN Volume Controller operations
using the command-line interface” on page 339.
In this section, we discuss how to install the Microsoft Volume Copy Shadow Service.
The following components are used to provide support for the service:
SAN Volume Controller
SAN Volume Controller Master Console
IBM System Storage hardware provider, known as the IBM System Storage Support for
Microsoft Volume Shadow Copy Service
Microsoft Volume Shadow Copy Service
To provide the point-in-time shadow copy, the components complete the following process:
1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider
that a copy is needed.
3. The SAN Volume Controller prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing
data on the host and flushes file system buffers to prepare for a copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can
resume and notifies the backup application that the backup was successful.
The Volume Shadow Copy Service maintains a free pool of VDisks for use as a FlashCopy
target and a reserved pool of VDisks. These pools are implemented as virtual host systems
on the SAN Volume Controller.
Before you begin, you must have experience with, or knowledge of, administering a Windows
operating system. And you must also have experience with, or knowledge of, administering a
SAN Volume Controller.
5.10.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBM
System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service
software on the Windows operating system:
SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy
enabled. You must install the SAN Volume Controller Console before you install the IBM
System Storage Hardware provider.
IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk
Service software Version 3.1 or later.
During the installation, you will be prompted to enter information about the SAN Volume
Controller Master Console, including the location of the truststore file. The truststore file is
generated during the installation of the Master Console. You must copy this file to a location
that is accessible to the IBM System Storage hardware provider on the Windows server.
When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation program files from the IBM Web site, and place a copy on the
Windows server where you will install the IBM System Storage hardware provider:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH
&dc=D400&uid=ssg1S4000663&loc=en_US&cs=utf-8&lang=en
2. Log on to the Windows server as an administrator, and navigate to the directory where the
installation program is located.
3. Run the installation program by double-clicking IBMVSS.exe.
4. The Welcome window opens, as shown in Figure 5-39. Click Next to continue with the
installation. You can click Cancel at any time to exit the installation. To move back to
previous windows while using the wizard, click Back.
Figure 5-39 IBM System Storage Support for Microsoft Volume Shadow Copy installation
5. The License Agreement window opens (Figure 5-40). Read the license agreement
information, select whether you accept the terms of the license agreement, and click
Next. If you do not accept the terms, you cannot continue with the installation.
Figure 5-40 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation
8. From the next window, select the required CIM server, or select “Enter the CIM Server
address manually”, and click Next (Figure 5-43).
Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation
9. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-44):
a. In the CIM Server Address field, type the name of the server where the SAN Volume
Controller Console is installed.
b. In the CIM User field, type the user name that the IBM System Storage Support for
Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to
gain access to the server where the SAN Volume Controller Console is installed.
c. In the CIM Password field, type the password for the user name that the IBM System
Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service
software will use to gain access to the SAN Volume Controller Console.
d. Click Next.
Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation
10.In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system (Figure 5-45 on page 220).
Additional information:
If these settings change after installation, you can use the ibmvcfg.exe tool to update
the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new
settings.
If you do not have the CIM Agent server, port, or user information, contact your CIM
Agent administrator.
This command ensures that the service named IBM System Storage Support for Microsoft
Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider
(Example 5-46).
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
If you are able to successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.
When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.
To successfully perform a Volume Shadow Copy Service operation, there must be enough
VDisks mapped to the free pool. The VDisks must be the same size as the source VDisks.
Use the SAN Volume Controller Console or the SAN Volume Controller command-line
interface (CLI) to perform the following steps:
1. Create a host for the free pool of VDisks. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeroes) (Example 5-47).
2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeroes followed by a 1) (Example 5-48 on page 222).
3. Map the logical units (VDisks) to the free pool of volumes. The VDisks cannot be mapped
to any other hosts. If you already have VDisks created for the free pool of volumes, you
must assign the VDisks to the free pool.
4. Create VDisk-to-host mappings between the VDisks selected in step 3 and the
VSS_FREE host to add the VDisks to the free pool. Alternatively, you can use the ibmvcfg
add command to add VDisks to the free pool (Example 5-49).
5. Verify that the VDisks have been mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs (Example 5-50).
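The steps above can be sketched as a short sequence of SVC CLI commands. This is a dry-run sketch that only prints the commands for review: the VDisk names are placeholder assumptions, the WWPNs are the defaults described in the text, and the -force flag (to allow creating a host whose WWPN is not logged in to the fabric) is an assumption to verify against your SVC level.

```shell
# Dry-run sketch: print the SVC CLI commands for creating the VSS pools.
# VDisk names are placeholders; WWPNs are the defaults from the text.
cmds=$(cat << 'EOF'
svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
svctask mkvdiskhostmap -host VSS_FREE vss_vdisk1
svctask mkvdiskhostmap -host VSS_FREE vss_vdisk2
EOF
)
echo "$cmds"
```

Reviewing the commands before running them on the cluster is prudent here, because mapping a VDisk to VSS_FREE makes it eligible to be overwritten as a FlashCopy target.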
Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)
Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>
set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>
ibmvcfg set username <username>
    Sets the user name to access the SAN Volume Controller Console.
    Example: ibmvcfg set username Dan

ibmvcfg set password <password>
    Sets the password of the user name that will access the SAN Volume Controller
    Console.
    Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
    Specifies the IP address of the SAN Volume Controller on which the VDisks are
    located when VDisks are moved to and from the free pool with the ibmvcfg add and
    ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the
    ibmvcfg add and ibmvcfg rem commands.
    Example: set targetSVC 9.43.86.120

ibmvcfg set usingSSL
    Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN
    Volume Controller Console.
    Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
    Specifies the SAN Volume Controller Console port number. The default value is 5999.
    Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
    Sets the name of the server where the SAN Volume Controller Console is installed.
    Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
    Specifies the namespace value that the Master Console is using. The default value is
    \root\ibm.
    Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
    Specifies the WWPN of the host. The default value is 5000000000000000. Modify this
    value only if there is a host already in your environment with a WWPN of
    5000000000000000.
    Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg listvols all
    Lists all VDisks, including information about the size, location, and VDisk-to-host
    mappings.
    Example: ibmvcfg listvols all

ibmvcfg listvols free
    Lists the volumes that are currently in the free pool.
    Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
    Lists the volumes that are currently not mapped to any hosts.
    Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
    Adds one or more volumes to the free pool of volumes. Use the -s parameter to
    specify the IP address of the SAN Volume Controller where the VDisks are located.
    The -s parameter overrides the default IP address that is set with the ibmvcfg set
    targetSVC command.
    Examples: ibmvcfg add vdisk12
              ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
    Removes one or more volumes from the free pool of volumes. Use the -s parameter to
    specify the IP address of the SAN Volume Controller where the VDisks are located.
    The -s parameter overrides the default IP address that is set with the ibmvcfg set
    targetSVC command.
    Examples: ibmvcfg rem vdisk12
              ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141
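A typical first-time configuration with these commands might look like the following dry-run sketch, which only prints the command sequence; the server name and IP address are placeholder assumptions.

```shell
# Print a typical ibmvcfg setup sequence for review (values are placeholders).
cmds='ibmvcfg set cimomHost cimomserver
ibmvcfg set cimomPort 5999
ibmvcfg set usingSSL yes
ibmvcfg set targetSVC 9.43.86.120
ibmvcfg listvols free'
echo "$cmds"
```

Ending the sequence with listvols free is a quick sanity check that the provider can reach the CIM server and see the free pool.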
5.11 Specific Linux (on Intel) information
The following sections describe specific information pertaining to the connection of Linux on
Intel-based hosts to the SVC environment.
For SVC Version 4.3, the following support information was available at the time of writing:
Software supported levels:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278
Hardware supported levels:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277
At this Web site, you will find the hardware list of supported HBAs and device driver levels for
Linux. Check the supported firmware and driver level for your HBA, and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
Installing SDD
This section describes how to install SDD for older distributions. Before performing these
steps, always check for the currently supported levels, as described in 5.11.2, “Configuration
information” on page 225.
The cat /proc/scsi/scsi command in Example 5-52 shows the devices that the SCSI driver
has probed. In our configuration, we have two HBAs installed in our server, and we configured
the zoning to access our VDisk from four paths.
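With two HBAs and four-path zoning, the same VDisk appears several times in /proc/scsi/scsi, once per path. A quick way to count the SVC path instances is to grep for the 2145 model string; the sketch below runs against captured sample output whose exact layout is an assumption modeled on typical /proc/scsi/scsi content.

```shell
# Count SAN Volume Controller path instances in captured /proc/scsi/scsi
# output (the sample content below is illustrative).
cat > scsi.out << 'EOF'
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
Host: scsi1 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
EOF
paths=$(grep -c "Model: 2145" scsi.out)
echo "SVC path instances seen: $paths"
```

On a live system you would run grep -c "2145" /proc/scsi/scsi instead; the count must match the expected (number of VDisks) x (number of paths) product.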
To manually load and configure SDD on Linux, use the service sdd start command (on
SUSE Linux, you can use sdd start). If you are not running a supported kernel, you will get
an error message.
If your kernel is supported, you see an OK success message, as shown in Example 5-54.
Issue the cfgvpath query command to view the name and serial number of the VDisk that is
configured in the SAN Volume Controller, as shown in Example 5-55.
The cfgvpath command configures the SDD vpath devices, as shown in Example 5-56.
The configuration information is saved by default in the /etc/vpath.conf file. You can save
the configuration information to a specified file name by entering the following command:
cfgvpath -f file_name.cfg
If necessary, you can disable the startup option by entering this command:
chkconfig sdd off
Run the datapath query commands to display the online adapters and the paths to the
adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and
path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup
paths for high availability, as shown in Example 5-58.
Active Adapters :2
Total Devices : 1
You can dynamically change the SDD path-selection policy algorithm by using the datapath
set device policy SDD command.
You can see which SDD path-selection policy algorithm is active on the device when you
use the datapath query device command. Example 5-58 shows that the active policy is
optimized, which means that the active SDD path-selection policy algorithm is Optimized
Sequential.
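The policy argument to datapath set device ... policy is a short code rather than the full policy name. The mapping below reflects the SDD documentation as we understand it, so treat it as an assumption and verify the codes against your installed SDD level:

```shell
# Map SDD path-selection policy codes to names.
# The code-to-name mapping is an assumption to verify against your SDD level.
policy_name() {
  case "$1" in
    df)  echo "default (load balancing)";;
    fo)  echo "failover only";;
    lb)  echo "load balancing (optimized)";;
    lbs) echo "load balancing sequential (Optimized Sequential)";;
    rr)  echo "round robin";;
    rrs) echo "round robin sequential";;
    *)   echo "unknown";;
  esac
}
# For example, to switch device 0 to round robin you would run:
#   datapath set device 0 policy rr
policy_name rr
```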
5.11.6 Creating and preparing the SDD volumes for use
Follow these steps to create and prepare the volumes:
1. Create a partition on the vpath device, as shown in Example 5-60.
3. Create the mount point, and mount the vpath drive, as shown in Example 5-62.
4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and
the datapath query command shows that four paths are available (Example 5-63).
Total Devices : 1
Path# Adapter/Hard Disk State Mode Select Errors
0 Host0Channel0/sda OPEN NORMAL 1 0
1 Host0Channel0/sdb OPEN NORMAL 6296 0
2 Host1Channel0/sdc OPEN NORMAL 6178 0
3 Host1Channel0/sdd OPEN NORMAL 0 0
[root@Palau ~]#
You will find this information in the links that are provided in 5.11.2, “Configuration
information” on page 225. In SLES10, the multipath drivers and tools are installed by default,
but in RHEL5, you must explicitly select the multipath components during the OS installation
to install them.
Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev
directory.
Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC
allows. The following Web site provides the most current information about the maximum
configuration for the SAN Volume Controller:
http://www.ibm.com/storage/support/2145
Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver
and multipathd daemon during startup.
Example 5-64 on page 234 shows the commands issued on a Red Hat Enterprise Linux 5.1
operating system.
3. Open the multipath.conf file, and follow the instructions to enable multipathing for IBM
devices. The file is located in the /etc directory. Example 5-65 shows editing using vi.
6. Type the multipath -dl command to see the multipath configuration. You will see two
groups with two paths each. All paths must have the state [active][ready], and one group
will be [enabled].
7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-67.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now
8. Create a file system using the mkfs command (Example 5-68).
9. Create a mount point, and mount the drive, as shown in Example 5-69.
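For reference, the device section that multipath.conf typically uses for SVC resembles the stanza below. Treat this as a sketch only: attribute names and recommended values vary by distribution and release (for example, newer releases replace prio_callout with prio alua), so confirm the settings against the SVC host attachment documentation for your level.

```text
# Sketch of an SVC (IBM 2145) device entry for /etc/multipath.conf.
# Attribute names and values are assumptions that vary by distribution;
# verify against the SVC host attachment documentation before use.
device {
    vendor                "IBM"
    product               "2145"
    path_grouping_policy  group_by_prio
    prio_callout          "/sbin/mpath_prio_alua /dev/%n"
    path_checker          tur
    failback              immediate
    no_path_retry         5
}
```

Grouping paths by priority keeps I/O on the preferred SVC node, with the non-preferred node's paths held in a standby group for failover.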
Important: If you are running the VMware V3.01 build, you are required to move to a
minimum VMware level of V3.02 for continued support.
Install the host adapters in your system. Refer to the manufacturer’s instructions for
installation and configuration of the HBAs.
In IBM System x servers, the HBA must always be installed in the first slots. Therefore, if you
install, for example, two HBAs and two network cards, the HBAs must be installed in slot 1
and slot 2 and the network cards can be installed in the remaining slots.
For older ESX versions, you will find the supported HBAs at the IBM Web Site:
http://www.ibm.com/storage/support/2145
The interoperability matrixes for ESX V3.02, V3.5, and V3.51 are available at the VMware
Web site (clicking this link opens or downloads the PDF):
V3.02
http://www.vmware.com/pdf/vi3_io_guide.pdf
V3.5
http://www.vmware.com/pdf/vi35_io_guide.pdf
The supported HBA device drivers are already included in the ESX server build.
After installation, load the default configuration of your FC HBAs. We recommend using the
same model of HBA with the same firmware in one server. Having Emulex and QLogic HBAs
access the same target from one server is not supported.
If you are unfamiliar with VMware environments and the advantages of storing virtual
machines and application data on a SAN, we recommend that you get an overview of the
VMware products before continuing.
Theoretically, you can run all of your virtual machines on one LUN, but for performance
reasons, in more complex scenarios, it can be better to load balance virtual machines over
separate HBAs, storages, or arrays.
For example, if you run an ESX host, with several virtual machines, it makes sense to use one
“slow” array, for example, for Print and Active Directory Services guest operating systems
without high I/O, and another fast array for database guest operating systems.
More flexibility (the multipathing policy and disk shares are set per VDisk)
Microsoft Cluster Service requires its own VDisk for each cluster disk resource
More documentation about designing your VMware infrastructure is provided at one of these
Web sites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059
Guidelines:
ESX Server hosts that use shared storage for virtual machine failover or load balancing
must be in the same zone.
You can have only one VMFS volume per VDisk.
To make these changes on your system, perform the following steps (Example 5-70):
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
3. The file includes a section for every installed SCSI device.
4. Locate your SCSI adapters, and edit the previously described parameters.
5. Repeat this process for every installed HBA.
Example 5-71 shows that the host Nile is logged into the SVC with two HBAs.
Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables
the SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS
file at the same time (Figure 5-47 on page 243).
But in many configurations, such as those configurations for high availability, the virtual
machines have to share the same VMFS file to share a disk.
Figure 5-47 Changing SCSI bus settings
3. Create your VDisks on the SVC, and map them to the ESX hosts.
Tips:
If you want to use features, such as VMotion, the VDisks that own the VMFS file have to
be visible to every ESX host that will be able to host the virtual machine. In SVC, select
Allow the virtual disks to be mapped even if they are already mapped to a host.
The VDisk has to have the same SCSI ID on each ESX host.
For this example configuration, we have created one VDisk and have mapped it to our ESX
host, as shown in Example 5-72.
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX
server). If you have made any changes to your SVC or SAN configuration, perform the
following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
Now, the created VMFS datastore appears in the Storage window (Figure 5-49). You will see
the details for the highlighted datastore. Check whether all of the paths are available and that
the Path Selection is set to Most Recently Used.
If not all of the paths are available, check your SAN and storage configuration. After fixing the
problem, select Refresh to perform a path rescan. The view will be updated to the new
configuration.
The recommended Multipath Policy for SVC is Most Recently Used. If you have to edit this
policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50).
5. Select Most Recently Used.
6. Click OK.
7. Click Close.
Now, your VMFS datastore has been created, and you can start using it for your guest
operating systems.
where:
SCSI HBA: The number of the SCSI HBA (can change).
SCSI target: The number of the SCSI target (can change).
SCSI VDisk: The number of the VDisk (never changes).
disk partition: The number of the disk partition (never changes). If the last number is not
displayed, the name stands for the entire VDisk.
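The four numbers can be split out of a device name mechanically; this sketch parses a hypothetical name of the vmhbaA:T:V:P form (the example name itself is an assumption):

```shell
# Split an ESX disk name of the form vmhbaA:T:V:P into its components.
# The example name is hypothetical.
name="vmhba1:0:12:1"
IFS=: read -r hba target vdisk part << EOF
$name
EOF
echo "HBA=$hba target=$target VDisk=$vdisk partition=$part"
```

Because only the VDisk and partition numbers are stable, scripts that track ESX storage should key on those two fields rather than the HBA or target numbers.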
We provide the instructions to perform this task in 5.6.5, “Changing the disk timeout on
Microsoft Windows Server” on page 185.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
The VMFS volume has now been extended, and the new space is ready for use.
5.13 SUN Solaris support information
For the latest information about supported software and driver levels, always refer to this site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
SDD will use a round-robin algorithm when failing over paths, that is, it will try the next known
preferred path. If this method fails and all preferred paths have been tried, it will use a
round-robin algorithm on the non-preferred paths until it finds a path that is available. If all
paths are unavailable, the VDisk will go offline. Therefore, it can take time to perform path
failover when multiple paths go offline.
SDD under Solaris performs load balancing across the preferred paths where appropriate.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with
Sun Cluster V3.1/3.2 are supported at the time of writing.
SDD is aware of the preferred paths that SVC sets per VDisk. SDD will use a round-robin
algorithm when failing over paths, that is, it will try the next known preferred path. If this
method fails and all preferred paths have been tried, it will use a round-robin algorithm on the
non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk
will go offline. It can take time, therefore, to perform path failover when multiple paths go
offline.
SDD under HP-UX performs load balancing across the preferred paths where appropriate.
When creating a Volume Group, specify the primary path that you want HP-UX to use when
accessing the Physical Volume that is presented by SVC. This path, and only this path, will be
used to access the PV as long as it is available, no matter what the SVC’s preferred path to
that VDisk is. Therefore, be careful when creating Volume Groups so that the primary links to
the PVs (and load) are balanced over both HBAs, FC switches, SVC nodes, and so on.
When extending a Volume Group to add alternate paths to the PVs, the order in which you
add these paths is HP-UX’s order of preference if the primary path becomes unavailable.
Therefore, when extending a Volume Group, the first alternate path that you add must be from
the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA,
FC link, or FC switch failure.
SAN boot support
SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot
device. You can use PVLinks or SDD to provide the multipathing support for the other devices
that are attached to the system.
To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the
variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node
in your cluster. For example, when configuring a two-node HP cluster, make sure that
FIRST_CLUSTER_LOCK_PV on HP server A is on a separate SVC node and through a
separate FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.
To accommodate this behavior, SVC supports a “type” that is associated with a host. This
type can be set by using the svctask mkhost command and modified by using the svctask
chhost command. The type can be set to generic, which is the default, or to hpux.
When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the
SVC will behave in the following way:
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
When an inquiry command for any page is sent to LUN 0 using Peripheral Device
Addressing, it is reported as Peripheral Device Type 0Ch (controller).
When any command other than an inquiry is sent to LUN 0 using Peripheral Device
Addressing, SVC will respond as an unmapped LUN 0 normally responds.
When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral
Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as Peripheral
Device Type 1Fh (Unknown Device Type) otherwise.
When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device
Addressing, the Peripheral qualifier returned is 001b and the Peripheral Device type is 1Fh
(unknown or no device type). This response is in contrast to the behavior for generic hosts,
where peripheral Device Type 00h is returned.
The command documentation for the various operating systems is available in the Multipath
Subsystem Device Driver User Guides:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7
000303&loc=en_US&cs=utf-8&lang=en
For all platforms except Linux, the multipath driver package ships an sddsrv.conf template
file named sample_sddsrv.conf. On all UNIX platforms except Linux, the
sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, it is in the
directory where SDD is installed.
To create the sddsrv.conf file, copy the sample_sddsrv.conf file in the same directory and
name the copy sddsrv.conf. You can then dynamically change the port binding by modifying
the parameters in the sddsrv.conf file, changing the values of Enableport and Loopbackbind
to True.
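The copy-and-edit procedure can be scripted. The sketch below creates a sample template in the current directory purely for illustration (on a real host the shipped file already exists in /etc on UNIX or in the SDD install directory on Windows); the exact parameter spellings in your shipped template are an assumption to verify.

```shell
# Create sddsrv.conf from the shipped template and enable port binding.
# The template content here is illustrative; parameter spellings may
# differ in your shipped sample_sddsrv.conf.
cat > sample_sddsrv.conf << 'EOF'
Enableport = False
Loopbackbind = False
EOF
cp sample_sddsrv.conf sddsrv.conf
sed -i 's/Enableport = False/Enableport = True/; s/Loopbackbind = False/Loopbackbind = True/' sddsrv.conf
cat sddsrv.conf
```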
Figure 5-51 shows the start window of the multipath driver Web interface.
You might have a number of servers in the configuration that are idle, or do not initiate the
calculated quantity of I/O operations. If so, you might not need to limit the queue depth.
5.17 Further sources of information
For more information about host attachment and configuration to the SVC, refer to the IBM
System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563.
For more information about SDDDSM or SDD configuration, refer to the IBM TotalStorage
Multipath Subsystem Device Driver User’s Guide, SC30-4096.
When you are looking for information about specific storage subsystems, the following link is usually helpful:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
In Chapter 8, “SAN Volume Controller operations using the GUI” on page 469, we describe
how to use the GUI and Advanced Copy Services.
In the topics that follow, we describe how FlashCopy works on the SVC, and we present
examples of configuring and utilizing FlashCopy.
FlashCopy is also known as point-in-time copy. You can use the FlashCopy technique to help
solve the challenge of making a consistent copy of a data set that is constantly being
updated. The FlashCopy source is frozen for a few seconds or less during the point-in-time
copy process. It will be able to accept I/O when the point-in-time copy bitmap is set up and the
FlashCopy function is ready to intercept read/write requests in the I/O path. Although the
background copy operation takes time, the resulting data at the target appears as though the
copy were made instantaneously.
SVC’s FlashCopy service provides the capability to perform a point-in-time copy of one or
more VDisks. Because the copy is performed at the block level, it operates underneath the
operating system and application caches. The image that is presented is “crash-consistent”:
that is to say, it is similar to an image that is seen in a crash event, such as an unexpected
power failure.
Various tasks can benefit from the use of FlashCopy. In the following sections, we describe
the most common situations.
It might be beneficial to quiesce the application on the host and flush the application and OS
buffers so that the new VDisk contains data that is “clean” to the application. Without this
step, the data on the newly created VDisk is still usable by the application, but recovery
procedures (such as log replay) are required before it can be used. Quiescing the application
ensures that the startup time against the mirrored copy is minimized.
The cache on the SVC is also flushed, by the FlashCopy prestartfcmap command, prior to the
FlashCopy being performed; see “Preparing” on page 275.
Both the data set that has been created on the FlashCopy target and the source VDisk are
immediately available.
6.1.3 Backup
FlashCopy does not affect your backup time, but it allows you to create a point-in-time
consistent data set (across VDisks), with a minimum of downtime for your source host. The
FlashCopy target can then be mounted on another host (or the backup server) and backed
up. Using this procedure, the backup speed becomes less important, because the backup
time does not require downtime for the host that is dependent on the source VDisks.
6.1.4 Restore
You can keep periodically created FlashCopy targets online to provide extremely fast restore
of specific files from the point-in-time consistent data set presented on the FlashCopy
targets. If a restore is needed, you simply copy the specific files back to the source VDisk.
Data mining is a good example of an area where FlashCopy can help you. Data mining can
now extract data without affecting your application.
A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse
FlashCopy does not destroy the original target. Thus, any process using the target, such as a
tape backup process, will not be disrupted. Multiple recovery points can be tested.
SVC is also unique in that an optional copy of the source VDisk can be made before starting
the reverse copy operation in order to diagnose problems.
When a user suffers a disaster and needs to restore from an on-disk backup, the user follows
this procedure:
1. (Optional) Create a new target VDisk (VDisk Z) and FlashCopy the production VDisk
(VDisk X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (VDisk Y) or (VDisk W) as
the source VDisk and VDisk X as the target VDisk, if this map does not already exist.
3. Start the FlashCopy map (VDisk Y → VDisk X) with the new -restore option to copy the
backup data onto the production disk.
4. The production disk is instantly available with the backup data.
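The flow of the restore procedure above can be modeled in a few lines. The following Python sketch is purely illustrative (dicts standing in for VDisks); it is not SVC code:

```python
def flashcopy(source, target):
    """Point-in-time copy: the target instantly presents the source's
    contents as they were at the moment the map was started."""
    target.clear()
    target.update(source)

# VDisk X is production; VDisk Y holds an earlier on-disk backup.
vdisk_x = {"blk0": "good", "blk1": "good"}
vdisk_y = {}
flashcopy(vdisk_x, vdisk_y)        # take the backup (X -> Y)

vdisk_x["blk1"] = "corrupt"        # the disaster

# Step 1 (optional): preserve the broken state on VDisk Z for analysis.
vdisk_z = {}
flashcopy(vdisk_x, vdisk_z)

# Steps 2-4: map Y -> X and start it with -restore; the production disk
# is instantly available with the backup data.
flashcopy(vdisk_y, vdisk_x)
```

After the reverse copy, VDisk X again holds the backup contents, while VDisk Z retains the corrupted image for diagnosis.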
Figure 6-1 Reverse FlashCopy
Regardless of whether the initial FlashCopy map (VDisk X → VDisk Y) is incremental, the
reverse operation only copies the modified data.
Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and
adding them to a new “reverse” consistency group. A consistency group cannot contain more
than one FlashCopy map with the same target VDisk.
IBM Tivoli FlashCopy Manager V2.1 is a new product that also improves the interlock between
SVC and Tivoli Storage Manager for Advanced Copy Services.
Figure 6-2 on page 260 shows the Tivoli Storage Manager for Advanced Copy Services
features.
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before issuing
FlashCopy start commands to ensure that a consistent backup of the application is made.
For example, you can put databases into hot backup mode and flush the file system cache
before starting FlashCopy.
FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy
and provides a simple interface to the “reverse” operation.
Figure 6-3 Tivoli Storage Manager FlashCopy Manager features
It is beyond the intended scope of this book to describe Tivoli Storage Manager FlashCopy
Manager.
When FlashCopy is started, it makes a copy of a source VDisk to a target VDisk, and the
original contents of the target VDisk are overwritten. When the FlashCopy operation is
started, the target VDisk presents the contents of the source VDisk as they existed at the
single point-in-time of FlashCopy starting. This operation is also referred to as a time-zero
copy (T0).
When a FlashCopy is started, the source and target VDisks are instantaneously available.
When FlashCopy starts, bitmaps are created to govern and redirect I/O to the source or target
VDisk, depending on where the requested block is located, while the blocks are copied in the
background from the source VDisk to the target VDisk.
For more details about background copy, see 6.4.5, “Grains and the FlashCopy bitmap” on
page 266.
Figure 6-4 on page 262 illustrates the redirection of the host I/O toward the source VDisk and
the target VDisk.
The source and target VDisks must both belong to the same SVC cluster, but they can be in
separate I/O Groups within that cluster. SVC FlashCopy associates a source VDisk to a target
VDisk in a FlashCopy mapping.
VDisks, which are members of a FlashCopy mapping, cannot have their size increased or
decreased while they are members of the FlashCopy mapping. The SVC supports the
creation of enough FlashCopy mappings to allow every VDisk to be a member of a FlashCopy
mapping.
A FlashCopy mapping is the act of creating a relationship between a source VDisk and a
target VDisk. FlashCopy mappings can be either stand-alone or a member of a consistency
group. You can perform the act of preparing, starting, or stopping on either the stand-alone
mapping or the consistency group.
Rule: After a mapping is in a consistency group, you can only operate on the group, and
you can no longer prepare, start, or stop the individual mapping.
Figure 6-5 FlashCopy mapping
Figure 6-6 shows four targets and mappings taken from a single source. It also shows that
there is an ordering to the targets: Target 1 is the oldest (as measured from the time it was
started) through to Target 4, which is the newest. The ordering is important because of the
way in which data is copied when multiple target VDisks are defined and because of the
dependency chain that results. A write to the source VDisk does not cause its data to be
copied to all of the targets; instead, it is copied to the newest target VDisk only (Target 4 in
Figure 6-6). The older targets refer to newer targets first before referring to the source.
From the point of view of an intermediate target disk (neither the oldest nor the newest), it
treats the set of newer target VDisks and the true source VDisk as a type of composite
source.
It treats all older VDisks as a kind of target (and behaves like a source to them). If the
mapping for an intermediate target VDisk shows 100% progress, its target VDisk contains a
complete set of data. In this case, mappings treat the set of newer target VDisks, up to and
including the 100% progress target, as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the same source until all data has been copied
to this target and all older targets.
You can read more information about Multiple Target FlashCopy in 6.4.6, “Interaction and
dependency between Multiple Target FlashCopy mappings” on page 267.
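The read-resolution order just described can be sketched as follows. This Python model is illustrative only (not the SVC implementation): targets are held oldest-first, and each dict contains only the grains that have already been copied to that target.

```python
def read_grain(grain, idx, targets, source):
    """Resolve a read of targets[idx] (targets ordered oldest-first)."""
    if grain in targets[idx]:
        return targets[idx][grain]          # grain already copied here
    for newer in targets[idx + 1:]:         # oldest of the newer targets first
        if grain in newer:
            return newer[grain]
    return source[grain]                    # fall back to the true source
```

For example, a read of the oldest target for a grain that was split only to a middle target returns the middle target's copy, not the (since modified) source data.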
FlashCopy commands can be issued to a FlashCopy consistency group, which affects all
FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not
part of a defined FlashCopy consistency group.
Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple VDisks,
consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step. However, if the database log (updates 1 and 3) and the
database itself (update 2) are on separate VDisks and a FlashCopy mapping is started during
this update, you must exclude the possibility that the database itself is copied slightly
before the database log. Otherwise, the target VDisks can see writes (1) and (3) but not (2),
because the database was copied before write (2) was completed.
In this case, if the database was restarted using the backup that was made from the
FlashCopy target disks, the database log indicates that the transaction had completed
successfully when, in fact, that is not the case, because the FlashCopy of the VDisk with the
database file was started (bitmap was created) before the write was on the disk. Therefore,
the transaction is lost, and the integrity of the database is in question.
To overcome the issue of dependent writes across VDisks and to create a consistent image of
the client data, it is necessary to perform a FlashCopy operation on multiple VDisks as an
atomic operation. To achieve this condition, the SVC supports the concept of consistency
groups.
A FlashCopy consistency group can contain up to 512 FlashCopy mappings (up to the
maximum number of FlashCopy mappings supported by the SVC cluster). FlashCopy
commands can then be issued to the FlashCopy consistency group and thereby
simultaneously for all of the FlashCopy mappings that are defined in the consistency group.
For example, when issuing a FlashCopy start command to the consistency group, all of the
FlashCopy mappings in the consistency group are started at the same time, resulting in a
point-in-time copy that is consistent across all of the FlashCopy mappings that are contained
in the consistency group.
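A toy timeline makes the hazard concrete. The following Python sketch (not SVC code) contrasts snapshotting the log and database volumes at different instants with snapshotting them atomically, as a consistency group does:

```python
def snapshot(volume):
    """Point-in-time copy of one volume's contents."""
    return list(volume)

# Snapshots taken at different instants: the database volume is copied
# slightly before the database log.
log, db = [], []
log.append("(1) update pending")
snap_db = snapshot(db)                 # db copied too early
db.append("(2) update data")
log.append("(3) update complete")
snap_log = snapshot(log)

broken = "(3) update complete" in snap_log and "(2) update data" not in snap_db

# With a consistency group, both snapshots happen at one atomic instant.
log, db = [], []
log.append("(1) update pending")
db.append("(2) update data")
log.append("(3) update complete")
snap_log, snap_db = snapshot(log), snapshot(db)   # atomic pair

consistent = "(2) update data" in snap_db
```

In the first case the copied log claims the transaction completed while the copied database lacks the data; the atomic pair never exhibits that state.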
Maximum configurations
Table 6-1 shows the FlashCopy properties and maximum configurations.
FlashCopy targets per source: 256
This maximum is the maximum number of FlashCopy mappings that can exist with the same
source VDisk.

FlashCopy mappings per cluster: 4,096
The number of mappings is no longer limited by the number of VDisks in the cluster, and so,
the FlashCopy component limit applies.

FlashCopy consistency groups per cluster: 127
This maximum is an arbitrary limit that is policed by the software.

FlashCopy VDisk capacity per I/O Group: 1,024 TB
This maximum is a limit on the quantity of FlashCopy mappings using bitmap space from
this I/O Group. This maximum configuration will consume all 512 MB of bitmap space for the
I/O Group and allow no Metro and Global Mirror bitmap space. The default is 40 TB.

FlashCopy mappings per consistency group: 512
This limit is due to the time that is taken to prepare a consistency group with a large
number of mappings.
To illustrate how the FlashCopy indirection layer works, we look at what happens when a
FlashCopy mapping is prepared and subsequently started.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write data in the cache onto the source VDisk or VDisks that are part of a
consistency group.
2. Put cache into write-through on the source VDisks.
3. Discard cache for the target VDisks.
4. Establish a sync point on all of the source VDisks in the consistency group (creating the
FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source VDisks and target
VDisks.
6. Enable cache on both the source VDisks and target VDisks.
FlashCopy provides the semantics of a point-in-time copy, using the indirection layer, which
intercepts the I/Os that are targeted at either the source VDisks or target VDisks. The act of
starting a FlashCopy mapping causes this indirection layer to become active in the I/O path,
which occurs as an atomic command across all FlashCopy mappings in the consistency
group. The indirection layer makes a decision about each I/O. This decision is based upon
these factors:
- The VDisk and the logical block address (LBA) to which the I/O is addressed
- Its direction (read or write)
- The state of an internal data structure, the FlashCopy bitmap
The indirection layer either allows the I/O to go through the underlying storage, redirects the
I/O from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be
copied from the source VDisk to the target VDisk. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.
Source reads
Reads of the source are always passed through to the underlying source disk.
Target reads
In order for FlashCopy to process a read from the target disk, FlashCopy must consult its
bitmap. If the data being read has already been copied to the target, the read is sent to the
target disk. If it has not, the read is sent to the source VDisk or possibly to another target
VDisk if multiple FlashCopy mappings exist for the source VDisk. Clearly, this algorithm
requires that while this read is outstanding, no writes are allowed to execute that change the
data being read. The SVC satisfies this requirement by using a cluster-wide locking scheme.
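The decision logic for a single mapping can be sketched with a per-grain bitmap (True meaning the grain has already been copied to the target). This is an illustrative Python model, not the SVC implementation; the write path is a simplified copy-on-write grain split:

```python
def handle_io(op, vdisk, grain, bitmap, source, target, value=None):
    """Route one I/O through the indirection layer (single mapping)."""
    if op == "read":
        if vdisk == "source":
            return source[grain]            # source reads pass straight through
        # target reads consult the bitmap: uncopied grains are redirected
        return target[grain] if bitmap[grain] else source[grain]
    # writes: split the grain first if it has not been copied yet
    if not bitmap[grain]:
        target[grain] = source[grain]       # copy-on-write grain split
        bitmap[grain] = True
    (source if vdisk == "source" else target)[grain] = value
```

A write to the source therefore preserves the time-zero value on the target before the source is overwritten, so later target reads still see the point-in-time image.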
The rate at which the grains are copied across from the source VDisk to the target VDisk is
called the copy rate. By default, the copy rate is 50, although you can alter this rate. For more
information about copy rates, see 6.4.13, “Space-efficient FlashCopy” on page 276.
In Figure 6-8, we illustrate how the background copy runs while I/Os are handled according to
the indirection layer algorithm.
Target 0 is not dependent on a source, because it has completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).
Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been
copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of
Target 1 has been copied, it can then move to the idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target
2 has been copied. No target is dependent on Target 2, so when all of the data has been
copied to Target 2, it can move to the Idle_copied state.
Target 3 has actually completed copying, so it is not dependent on any other maps.
Stopping the copy process
An important scenario arises when a stop command is delivered to a mapping for a target that
has dependent mappings.
After a mapping is in the Stopped state, it can be deleted or restarted, which must not be
allowed if there are still grains that hold data upon which other mappings depend. To avoid
this situation, when a mapping receives a stopfcmap or stopfcconsistgrp command, rather
than immediately moving to the Stopped state, it enters the Stopping state. An automatic copy
process is driven that will find and copy all of the data that is uniquely held on the target VDisk
of the mapping that is being stopped, to the next oldest mapping that is in the Copying state.
Stopping the copy process: The stopping copy process can be ongoing for several
mappings sharing the same source at the same time. At the completion of this process, the
mapping will automatically make an asynchronous state transition to the Stopped state or
the idle_copied state if the mapping was in the Copying state with progress = 100%.
For example, if the mapping associated with Target 0 was issued a stopfcmap or
stopfcconsistgrp command, Target 0 enters the Stopping state while a process copies the
data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped
state, and Target 1 is no longer dependent upon Target 0, but Target 1 remains dependent on
Target 2.
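The cleanup performed in the Stopping state can be modeled as follows. This Python sketch is illustrative only: targets are ordered oldest-first, and stopping a target pushes its uniquely held grains down to the next oldest dependent mapping before the mapping reaches the Stopped state.

```python
def stop_mapping(idx, targets):
    """Stop targets[idx]: grains held only here are copied to the next
    oldest target before the mapping can reach the Stopped state."""
    if idx > 0:
        next_oldest = targets[idx - 1]
        for grain, value in targets[idx].items():
            # move only data that the older target does not already hold
            next_oldest.setdefault(grain, value)
    targets[idx] = None                     # mapping is now Stopped
```

After the cleanup, the older target no longer depends on the stopped mapping because every grain it previously read through that target is now held locally.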
Target VDisk, grain not yet copied:
- Read: If any newer targets exist for this source in which this grain has already been
copied, read from the oldest of these targets. Otherwise, read from the source.
- Write: Hold the write. Check the dependency target VDisks to see if the grain is split. If
the grain is not already copied to the next oldest target for this source, copy the grain to
the next oldest target. Then, write to the target.
In Figure 6-10, we illustrate the logical placement of the FlashCopy indirection layer.
In Example 6-1 on page 271, we list the size of the Image_VDisk_A VDisk. Subsequently, the
VDisk_A_copy VDisk is created, specifying the same size.
Example 6-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_VDisk_A
id 8
name Image_VDisk_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize VDisk
commands to modify the size of the VDisk. See 7.4.10, “Expanding a VDisk” on page 367
and 7.4.16, “Shrinking a VDisk” on page 372 for more information.
You can use an image mode VDisk as either a FlashCopy source VDisk or target VDisk.
Mapping event Description
Flush done The FlashCopy mapping automatically moves from the Preparing state
to the Prepared state after all cached data for the source is flushed and
all cached data for the target is no longer valid.
Start When all of the FlashCopy mappings in a consistency group are in the
Prepared state, the FlashCopy mappings can be started.
To preserve the cross volume consistency group, the start of all of the
FlashCopy mappings in the consistency group must be synchronized
correctly with respect to I/Os that are directed at the VDisks by using
the startfcmap or startfcconsistgrp command.
The following actions occur during the startfcmap or
startfcconsistgrp command’s run:
- New reads and writes to all source VDisks in the consistency group are paused in the
cache layer until all ongoing reads and writes beneath the cache layer are completed.
- After all FlashCopy mappings in the consistency group are paused, the internal cluster
state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the consistency group, read
and write operations continue on the source VDisks.
- The target VDisks are brought online.
As part of the startfcmap or startfcconsistgrp command, read and
write caching is enabled for both the source and target VDisks.
Flush failed If the flush of data from the cache cannot be completed, the FlashCopy
mapping enters the Stopped state.
Copy complete After all of the source data has been copied to the target and there are
no dependent mappings, the state is set to Copied. If the option to
automatically delete the mapping after the background copy completes
is specified, the FlashCopy mapping is automatically deleted. If this
option is not specified, the FlashCopy mapping is not automatically
deleted and can be reactivated by preparing and starting again.
Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping
exists between the source and target, but the source and the target behave as independent
VDisks in this state.
Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the
background copy is running.
Reads and writes are executed on the target as though the contents of the source were
instantaneously copied to the target during the startfcmap or startfcconsistgrp command.
The source and target can be independently updated. Internally, the target depends on the
source for certain tracks.
Read and write caching is enabled on the source and the target.
Stopped
The FlashCopy was stopped either by a user command or by an I/O error.
When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Therefore,
while the FlashCopy mapping is in this state, the target VDisk is in the Offline state. To regain
access to the target, the mapping must be started again (the previous point-in-time will be
lost) or the FlashCopy mapping must be deleted. The source VDisk is accessible, and
read/write caching is enabled for the source. In the Stopped state, a mapping can be
prepared again or it can be deleted.
Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior
of the target VDisk depends on whether the background copy process had completed while
the mapping was in the Copying state. If the copy process had completed, the target VDisk
remains online while the stopping copy process completes. If the copy process had not
completed, data in the cache is discarded for the target VDisk. The target VDisk is taken
offline, and the stopping copy process runs. After the data has been copied, a stop complete
asynchronous event notification is issued. The mapping will move to the Idle/Copied state if
the background copy has completed or to the Stopped state if the background copy has not
completed.
Suspended
The target has been “flashed” from the source and was in the Copying or Stopping state.
Access to the metadata has been lost, and as a consequence, both the source and target
VDisks are offline. The background copy process has been halted.
When the metadata becomes available again, the FlashCopy mapping will return to the
Copying or Stopping state, the access to the source and target VDisks will be restored, and
the background copy or stopping process will be resumed. Unflushed data that was written to
the source or target before the FlashCopy was suspended is pinned in the cache, consuming
resources, until the FlashCopy mapping leaves the Suspended state.
Preparing
Because the FlashCopy function is placed logically beneath the cache to anticipate any write
latency problem, it requires that no read or write data for the target, and no write data for
the source, is in the cache at the time that the FlashCopy operation is started. This design
ensures that the resulting copy is consistent.
In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source VDisk from the cache. Read
data for the source will be left in the cache.
2. Placing the cache for the source VDisk into write-through mode, so that subsequent writes
wait until data has been written to disk before completing the write command that is
received from the host.
3. Discarding any read or write data that is associated with the target VDisk from the cache.
While in this state, writes to the source VDisk will experience additional latency, because the
cache is operating in write-through mode.
While the FlashCopy mapping is in this state, the target VDisk is reported as online, but it will
not perform reads or writes. These reads and writes are failed by the SCSI front end.
Before starting the FlashCopy mapping, it is important that any cache at the host level, for
example, the buffers in the host OSs or applications, are also instructed to flush any
outstanding writes to the source VDisk.
Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the
FlashCopy mapping is in this state, the target VDisk is in the Offline state. In the Prepared
state, writes to the source VDisk experience additional latency because the cache is
operating in write-through mode.
For the best performance, the grain size of the Space-Efficient VDisk must match the grain
size of the FlashCopy mapping. However, if the grain sizes differ, the mapping still proceeds.
Consider the following information when you create your FlashCopy mappings:
If you are using a fully allocated source with a space-efficient target, disable the
background copy and cleaning mode on the FlashCopy map by setting both the
background copy rate and cleaning rate to zero. Otherwise, if these features are enabled,
all of the source is copied onto the target VDisk, which causes the Space-Efficient VDisk
to either go offline or to grow as large as the source.
If you are using only a space-efficient source, only the space that is used on the source
VDisk is copied to the target VDisk. For example, if the source VDisk has a virtual size of
800 GB and a real size of 100 GB, of which 50 GB has been used, only the used 50 GB is
copied.
Two more interesting combinations of incremental FlashCopy and Space-Efficient VDisks are:
A space-efficient source VDisk can be incrementally copied using FlashCopy to a
space-efficient target VDisk. Whenever the FlashCopy is retriggered, only data that has
been modified is recopied to the target. Note that if space is allocated on the target
because of I/O to the target VDisk, this space is not reclaimed when the FlashCopy is
retriggered.
A fully allocated source VDisk can be incrementally copied using FlashCopy to another
fully allocated VDisk at the same time as being copied to multiple space-efficient targets
(taken at separate points in time). This combination allows a single full backup to be kept
for recovery purposes and separates the backup workload from the production workload,
while at the same time allowing older space-efficient backups to be retained.
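Incremental retriggering can be pictured with a difference bitmap that tracks the grains modified since the last trigger; only those grains are recopied. A minimal Python sketch (illustrative, not SVC code):

```python
def retrigger(source, target, changed_grains):
    """Recopy only the grains recorded as modified since the last trigger."""
    copied = 0
    for grain in sorted(changed_grains):
        target[grain] = source[grain]       # recopy just the modified grain
        copied += 1
    changed_grains.clear()                  # difference bitmap is reset
    return copied
```

If only one grain changed since the previous trigger, only that grain is transferred, regardless of the total VDisk size.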
The background copy rate is a property of a FlashCopy mapping that is expressed as a value
between 0 and 100. It can be changed in any FlashCopy mapping state and can differ in the
mappings of one consistency group. A value of 0 disables background copy.
The relationship of the background copy rate value to the attempted number of grains to be
split (copied) per second is shown in Table 6-5.
Value      Data copied/sec   Grains/sec
1 - 10     128 KB            0.5
11 - 20    256 KB            1
21 - 30    512 KB            2
31 - 40    1 MB              4
41 - 50    2 MB              8
51 - 60    4 MB              16
61 - 70    8 MB              32
71 - 80    16 MB             64
81 - 90    32 MB             128
91 - 100   64 MB             256
The grains per second numbers represent the maximum number of grains that the SVC will
copy per second, assuming that the bandwidth to the managed disks (MDisks) can
accommodate this rate.
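The table follows a simple rule: each band of ten copy rate values doubles the data rate, starting at 128 KBps for values 1 through 10, and the grains per second column assumes 256 KB grains. A small Python helper reproduces it (a sketch for illustration, not SVC code):

```python
def background_copy_rate(rate):
    """Return (KBps, grains_per_second) for a copy rate value 0-100."""
    if rate == 0:
        return 0, 0.0                       # background copy disabled
    band = (rate - 1) // 10                 # 0 for 1-10, 1 for 11-20, ...
    kbps = 128 * (2 ** band)                # doubles per band, 128 KBps base
    return kbps, kbps / 256.0               # assumes 256 KB grains
```

The default copy rate of 50 therefore corresponds to 2 MBps and 8 grains per second.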
6.4.15 Synthesis
The FlashCopy functionality in SVC simply creates copies of VDisks. All of the data in the source
VDisk is copied to the destination VDisk, including operating system control information, as
well as application data and metadata.
Certain operating systems are unable to use FlashCopy without an additional step, which is
termed synthesis. In summary, synthesis performs a type of transformation on the operating
system metadata in the target VDisk so that the operating system can use the disk.
However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared by all of the mappings that are derived from a
particular source VDisk. The lock is used in the following modes under the following conditions:
The lock is held in shared mode for the duration of a read from the target VDisk, which
touches a grain that is not split.
The lock is held in exclusive mode during a grain split, which happens prior to FlashCopy
starting any destage (or write-through) from the cache to a grain that is going to be split
(the destage waits for the grain to be split). The lock is held during the grain split and
released before the destage is processed.
If the lock is held in shared mode, and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either
shared or exclusive mode must wait for it to be freed.
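The rules above can be captured in a small non-blocking model. This Python sketch (illustrative only, not the SVC implementation) grants shared requests unless an exclusive requester is active or waiting, and makes exclusive requesters wait for the shared holders to drain:

```python
class GrainLock:
    """Toy shared/exclusive lock with waiting-exclusive priority."""

    def __init__(self):
        self.shared_holders = 0
        self.exclusive_held = False
        self.exclusive_waiting = 0

    def try_acquire_shared(self):
        # denied if an exclusive holder exists or an exclusive requester waits
        if self.exclusive_held or self.exclusive_waiting:
            return False
        self.shared_holders += 1
        return True

    def try_acquire_exclusive(self):
        # must wait until all shared holders free the lock
        if self.exclusive_held or self.shared_holders:
            self.exclusive_waiting += 1
            return False
        if self.exclusive_waiting:
            self.exclusive_waiting -= 1     # this waiter is now served
        self.exclusive_held = True
        return True

    def release_shared(self):
        self.shared_holders -= 1
```

Once an exclusive request is queued, new shared requests (target reads of an unsplit grain) are refused until the grain split completes, which matches the priority described above.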
Node failure
Normally, two copies of the FlashCopy bitmaps are maintained; one copy of the FlashCopy
bitmaps is on each of the two nodes making up the I/O Group of the source VDisk. When a
node fails, one copy of the bitmaps, for all FlashCopy mappings whose source VDisk is a
member of the failing node’s I/O Group, will become inaccessible. FlashCopy will continue
with a single copy of the FlashCopy bitmap being stored as non-volatile in the remaining node
in the source I/O Group. The cluster metadata is updated to indicate that the missing node no
longer holds up-to-date bitmap information.
When the failing node recovers, or a replacement node is added to the I/O Group, up-to-date
bitmaps will be reestablished on the new node, and it will again provide a redundant location
for the bitmaps:
When the FlashCopy bitmap becomes available again (at least one of the SVC nodes in
the I/O Group is accessible), the FlashCopy mapping will return to the Copying state,
access to the source and target VDisks will be restored, and the background copy process
will be resumed. Unflushed data that was written to the source or target before the
FlashCopy was suspended is pinned in the cache until the FlashCopy mapping leaves the
Suspended state.
Normally, two copies of the FlashCopy bitmaps are maintained (in non-volatile memory),
one copy on each of the two SVC nodes making up the I/O Group of the source VDisk. If
only one of the SVC nodes in the I/O Group to which the source VDisk belongs goes
offline, the FlashCopy mapping will continue in the Copying state, with a single copy of the
FlashCopy bitmap. When the failed SVC node recovers, or a replacement SVC node is
added to the I/O Group, up-to-date FlashCopy bitmaps will be reestablished on the
resuming SVC node and again provide a redundant location for the FlashCopy bitmaps.
If both nodes in the I/O Group become unavailable: If both nodes in the I/O Group to
which the target VDisk belongs become unavailable, the host cannot access the target
VDisk.
Because the storage area network (SAN) that links the SVC nodes to each other and to the
MDisks is made up of many independent links, it is possible for a subset of the nodes to be
temporarily isolated from several of the MDisks. When this situation happens, the managed
disks are said to be Path Offline on certain nodes.
Other nodes: Other nodes might see the managed disks as Online, because their
connection to the managed disks is still functioning.
When an MDisk enters the Path Offline state on an SVC node, all of the VDisks that have any
extents on that MDisk also become Path Offline. Again, this situation happens only on the
affected nodes. When a VDisk is Path Offline on a particular SVC node, host access to that
VDisk through that node fails with SCSI sense data indicating Offline.
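The propagation rule above can be expressed as a one-line check. This is a hedged sketch; the function name and data structures are invented for illustration:

```python
def vdisk_state_on_node(vdisk_mdisks, mdisks_path_offline_on_node):
    """A VDisk is Path Offline on a node if any MDisk holding one of its
    extents is Path Offline on that node; otherwise, it is Online."""
    if any(m in mdisks_path_offline_on_node for m in vdisk_mdisks):
        return "Path Offline"
    return "Online"
```

A VDisk striped over mdisk1 and mdisk2 is Path Offline on a node that has lost its paths to mdisk2, while remaining Online on a node with all paths intact.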
Table 6-6 lists which combinations of FlashCopy and Remote Copy are supported. In the
table, remote copy refers to Metro Mirror and Global Mirror.
6.4.20 Recovering data from FlashCopy
You can use FlashCopy to recover data if corruption has occurred. For example, if a user
deletes data by mistake, you can map the FlashCopy target VDisks to the application server,
import the logical volume-level configuration, start the application, and restore the data back
to a given point in time.
Tip: It is better to map a FlashCopy target VDisk to a backup server with the same
application installed. We do not recommend mapping a FlashCopy target VDisk to the
same application server to which the FlashCopy source VDisk is mapped, because the
FlashCopy target and source VDisks carry the same signature (PVID, VGDA, and so on).
Special steps are necessary to handle this conflict at the operating system level. For
example, in AIX, you can use the recreatevg command to generate separate volume
group, logical volume, and file system names and avoid the naming conflict.
FlashCopy backup is a disk-based backup copy that can be used to restore service more
quickly than other backup techniques. This capability is further enhanced by the ability to
maintain multiple backup targets, spread over a range of times, allowing the user to choose a
backup taken before the corruption occurred.
SVC provides a single point of control when enabling Metro Mirror in your SAN, regardless of
the disk subsystems that are used.
The general application of Metro Mirror is to maintain two real-time synchronized copies of a
disk. Often, two copies are geographically dispersed to two SVC clusters, although it is
possible to use Metro Mirror in a single cluster (within an I/O Group). If the primary copy fails,
you can enable a secondary copy for I/O operation.
Tip: Intracluster Metro Mirror consumes more resources within a cluster than an
intercluster Metro Mirror relationship does. We recommend using intercluster Metro Mirror
when possible.
A typical application of this function is to set up a dual-site solution using two SVC clusters.
The first site is considered the primary or production site, and the second site is considered
the backup site or failover site, which is activated when a failure at the first site is detected.
Using standard single mode connections, the supported distance between two SVC clusters
in a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved
by using extenders. For extended distance solutions, contact your IBM representative.
Limit: When a local and a remote fabric are connected together for Metro Mirror purposes,
the inter-switch link (ISL) hop count between a local node and a remote node cannot
exceed seven.
Errors, such as a loss of connectivity between the two clusters, can mean that it is not
possible to replicate data from the primary VDisk to the secondary VDisk. In this case, Metro
Mirror operates to ensure that a consistent image is left at the secondary VDisk, and then
continues to allow I/O to the primary VDisk, so as not to affect the operations at the
production site.
Figure 6-12 on page 283 illustrates how a write to the master VDisk is mirrored to the cache
of the auxiliary VDisk before an acknowledgement of the write is sent back to the host that
issued the write. This process ensures that the secondary is synchronized in real time, in
case it is needed in a failover situation.
However, this process also means that the application is fully exposed to the latency and
bandwidth limitations (if any) of the communication link to the secondary site. This process
might lead to unacceptable application performance, particularly when placed under peak
load. Therefore, using Metro Mirror has distance limitations.
Figure 6-12 Write on VDisk in Metro Mirror relationship
Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist
between a maximum of four SVC clusters.
The SVC clusters can take advantage of the maximum number of remote mirror relationships
because Multiple Cluster Mirroring enables clients to copy from several remote sites to a
single SVC cluster at a disaster recovery (DR) site. It supports implementation of
consolidated DR strategies and helps clients that are moving or consolidating data centers.
With Multiple Cluster Mirroring, there is a wider range of possible topologies. You can connect
a maximum of four clusters, directly or indirectly. Therefore, a cluster can never have more
than three partners.
Figure 6-14 shows four clusters in a star topology, with cluster A at the center. Cluster A can
be a central DR site for the three other locations.
Using a star topology, you can migrate separate applications at separate times by using a
process, such as this example:
1. Suspend the application at A.
2. Remove the A↔B relationship.
3. Create the A↔C relationship (or alternatively, the B↔C relationship).
4. Synchronize to cluster C, and ensure that A↔C is established:
– A↔B, A↔C, A↔D, B↔C, B↔D, and C↔D
– A↔B, A↔C, and B↔C
Figure 6-16 is a fully connected mesh where every cluster has a partnership to each of the
three other clusters. Therefore, VDisks can be replicated between any pair of clusters, but
note that this topology is not required, unless relationships are needed between every pair of
clusters:
A↔B, B↔C, and C↔D
The other option is a daisy-chain topology between four clusters, where we have a cascading
solution; however, a VDisk must be in only one relationship (A↔B, for example). At the
time of writing, a three-site solution, such as DS8000 Metro Global Mirror, is not
supported.
Figure 6-17 SVC daisy-chain topology
Unsupported topology
As an illustration of what is not supported, we show this example:
A↔B, B↔C, C↔D, and D↔E
This topology is unsupported, because five clusters are indirectly connected. If the cluster can
detect this topology at the time of the fourth mkpartnership command, the command is
rejected.
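The four-cluster limit can be sketched as a connectivity check. This is an illustrative model of the rule, not the actual mkpartnership validation logic; the function name and representation are assumptions:

```python
def partnership_allowed(partnerships, new_pair):
    """Reject a new partnership if it would connect more than four
    clusters (directly or indirectly) or give any cluster more than
    three partners. Partnerships are modeled as frozensets of two
    cluster names."""
    edges = set(partnerships) | {frozenset(new_pair)}
    # A cluster can have at most three partners.
    for cluster in set().union(*edges):
        if sum(cluster in e for e in edges) > 3:
            return False
    # Breadth-first walk of the component containing the new pair.
    seen, frontier = set(), {next(iter(new_pair))}
    while frontier:
        node = frontier.pop()
        seen.add(node)
        for e in edges:
            if node in e:
                frontier |= set(e) - seen
    return len(seen) <= 4
```

Adding D↔E to an existing A↔B, B↔C, C↔D chain fails the check, because five clusters would become indirectly connected.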
Rules:
A VDisk can only be part of one Metro Mirror relationship at a time.
A VDisk that is a FlashCopy target cannot be part of a Metro Mirror relationship.
In the most common applications of Metro Mirror, the master VDisk contains the production
copy of the data and is used by the host application, while the auxiliary VDisk contains a
mirrored copy of the data and is used for failover in DR scenarios. The terms master and
auxiliary describe this use. However, if Metro Mirror is applied differently, the terms master
VDisk and auxiliary VDisk need to be interpreted appropriately.
An application that performs a high volume of database updates is usually designed with the
concept of dependent writes. With dependent writes, it is important to ensure that an earlier
write has completed before a later write is started. Reversing the order of dependent writes
can undermine an application’s algorithms and can lead to problems, such as detected, or
undetected, data corruption.
Consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update will be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that a database update has
completed successfully.
Figure 6-20 shows the write sequence.
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step.
Database logs: All databases have logs associated with them. These logs keep records of
database changes. If a database needs to be restored to a point beyond the last full, offline
backup, logs are required to roll the data forward to the point of failure.
But imagine if the database log and the database itself are on separate VDisks and a Metro
Mirror relationship is stopped during this update. In this case, you need to consider the
possibility that the Metro Mirror relationship for the VDisk with the database file is stopped
slightly before the VDisk containing the database log. If this situation occurs, it is possible that
the secondary VDisks see writes (1) and (3), but not (2).
Then, if the database was restarted using data available from secondary disks, the database
log will indicate that the transaction had completed successfully, when it did not. In this
scenario, the integrity of the database is in question.
Because MM_Relationship 1 and MM_Relationship 2 are part of the consistency group, they
can be handled as one entity, while the stand-alone MM_Relationship 3 is handled separately.
Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror
consistency groups can provide the ability to group relationships, so that they are
manipulated in unison. Metro Mirror relationships within a consistency group can be in any
form:
Metro Mirror relationships can be part of a consistency group, or they can be stand-alone
and therefore handled as single instances.
A consistency group can contain zero or more relationships. An empty consistency group,
with zero relationships in it, has little purpose until it is assigned its first relationship, except
that it has a name.
All of the relationships in a consistency group must have matching master and auxiliary
SVC clusters.
Although it is possible to use consistency groups to manipulate sets of relationships that do
not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a consistency group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
consistency group.
For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error, there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Metro Mirror rejects attempts to enable access to secondary
VDisks of either application.
If one application finishes its background copy much more quickly than the other application,
Metro Mirror still refuses to grant access to its secondary VDisks even though it is safe in this
case, because Metro Mirror policy is to refuse access to the entire consistency group if any
part of it is inconsistent.
Stand-alone relationships and consistency groups share a common configuration and state
model. All of the relationships in a non-empty consistency group have the same state as the
consistency group.
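The shared-state rule can be modeled minimally. The class, the Empty initial state, and the state chosen on first add are illustrative assumptions, not SVC internals:

```python
class ConsistencyGroup:
    """A member relationship has no state of its own; it reports the
    state of the group."""

    def __init__(self, name):
        self.name = name
        self.relationships = []
        self.state = "Empty"  # an empty group has no relationship state

    def add(self, relationship_name):
        self.relationships.append(relationship_name)
        if self.state == "Empty":
            # Assumption for the sketch: the group takes on the state of
            # its first relationship when that relationship is added.
            self.state = "ConsistentStopped"

    def state_of(self, relationship_name):
        return self.state
```

Changing the group state is immediately visible through every member, which is what lets the group be manipulated in unison.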
SVC node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, you must define a zone in each fabric for intercluster
communication (see Chapter 3, “Planning and configuration” on page 65).
These channels are maintained and updated as nodes appear and disappear and as links
fail, and they are repaired to maintain operation where possible. If communication between
SVC clusters is interrupted or lost, an error is logged (and consequently, Metro Mirror
relationships will stop).
To handle error conditions, you can configure SVC to raise Simple Network Management
Protocol (SNMP) traps to the enterprise monitoring system.
Nodes that are in separate clusters do not exchange messages after initial discovery is
complete, unless they have been configured together to perform Metro Mirror.
The intercluster link carries control traffic to coordinate activity between two clusters. It is
formed between one node in each cluster. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote cluster fail), a new node is chosen
to carry control traffic. This node change causes the I/O to pause, but it does not put the
relationships in a Consistent Stopped state.
Synchronized before creation
In this method, the administrator must ensure that the master and auxiliary VDisks contain
identical data before creating the relationship. There are two ways to ensure that the master
and auxiliary VDisks contain identical data:
Both disks are created with the security delete feature so as to make all data zero.
A complete tape image (or other method of moving data) is copied from one disk to the
other disk.
In either technique, no write I/O must take place to either the master or the auxiliary before
the relationship is established.
If these steps are performed incorrectly, Metro Mirror will report the relationship as being
consistent when it is not, which is likely to make the secondary disk useless. This method
has an advantage over full synchronization, because it does not require all of the data to be
copied over a constrained link. However, if data needs to be copied, the master and auxiliary
disks cannot be used until the copy is complete, which might be unacceptable.
After the copy is complete, the administrator must ensure that a startrcrelationship
command is issued with the -clean flag.
With this technique, only data that has changed since the relationship was created, including
all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As
with “Synchronized before creation” on page 293, the copy step must be performed correctly
or the auxiliary will be useless, although the copy operation will report it as being
synchronized.
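The change-tracking idea behind the -clean flag can be sketched as a dirty-grain bitmap: writes since the relationship was created mark regions dirty, and only those regions are copied. The 256 KB grain size and all names here are assumptions for illustration, not the SVC implementation:

```python
GRAIN = 256 * 1024  # bytes per tracked region (assumed grain size)

class ChangeTracker:
    """Records which grains of the master have changed since the
    relationship was created."""

    def __init__(self):
        self.dirty = set()

    def record_write(self, offset, length):
        first = offset // GRAIN
        last = (offset + length - 1) // GRAIN
        for grain in range(first, last + 1):
            self.dirty.add(grain)

    def grains_to_copy(self):
        # Only these grains need to be sent to the auxiliary.
        return sorted(self.dirty)
```

A small write at offset 0 and a grain-sized write at grain 3 leave only two grains to copy, rather than the whole disk.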
In Figure 6-22 on page 294, the Metro Mirror relationship state diagram shows an overview of
states that can apply to a Metro Mirror relationship in a connected state.
When creating the Metro Mirror relationship, you can specify if the auxiliary VDisk is already
in sync with the master VDisk, and the background copy process is then skipped. This
capability is especially useful when creating Metro Mirror relationships for VDisks that have
been created with the format option.
The numbers in Figure 6-22 correspond to the following steps. To create the relationship:
Step 1:
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror
relationship enters the Consistent stopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary
VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent stopped
state.
Step 2:
a. When starting a Metro Mirror relationship in the Consistent stopped state, the Metro
Mirror relationship enters the Consistent synchronized state, provided that no updates
(write I/O) have been performed on the primary VDisk while in the Consistent stopped
state. Otherwise, the -force option must be specified, and the Metro Mirror relationship
then enters the Inconsistent copying state, while the background copy is started.
b. When starting a Metro Mirror relationship in the Inconsistent stopped state, the Metro
Mirror relationship enters the Inconsistent copying state, while the background copy is
started.
Step 3:
When the background copy completes, the Metro Mirror relationship transits from the
Inconsistent copying state to the Consistent synchronized state.
Step 4:
a. When stopping a Metro Mirror relationship in the Consistent synchronized state,
specifying the -access option, which enables write I/O on the secondary VDisk, the
Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the secondary VDisk, when the Metro Mirror relationship is in
the Consistent stopped state, issue the command svctask stoprcrelationship
specifying the -access option, and the Metro Mirror relationship enters the Idling state.
Step 5:
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the
-primary argument to set the copy direction. Given that no write I/O has been
performed (to either the master or auxiliary VDisk) while in the Idling state, the Metro
Mirror relationship enters the Consistent synchronized state.
b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force
option must be specified, and the Metro Mirror relationship then enters the Inconsistent
copying state, while the background copy is started.
Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an
error), a state transition is applied:
For example, the Metro Mirror relationships in the Consistent synchronized state enter the
Consistent stopped state, and the Metro Mirror relationships in the Inconsistent copying
state enter the Inconsistent stopped state.
In case the connection is broken between the SVC clusters in a partnership, then all
(intercluster) Metro Mirror relationships enter a Disconnected state. For further
information, refer to “Connected versus disconnected” on page 295.
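The state transitions in steps 1 through 5, plus the stop and error cases, can be summarized as a transition table. This is a simplified sketch of Figure 6-22; the event names are informal stand-ins for the CLI actions, not actual command syntax:

```python
# (current state, event) -> next state
TRANSITIONS = {
    ("Start", "create -sync"):                  "ConsistentStopped",
    ("Start", "create"):                        "InconsistentStopped",
    ("ConsistentStopped", "start"):             "ConsistentSynchronized",
    ("ConsistentStopped", "start -force"):      "InconsistentCopying",
    ("ConsistentStopped", "stop -access"):      "Idling",
    ("InconsistentStopped", "start"):           "InconsistentCopying",
    ("InconsistentCopying", "copy complete"):   "ConsistentSynchronized",
    ("InconsistentCopying", "stop"):            "InconsistentStopped",
    ("ConsistentSynchronized", "stop"):         "ConsistentStopped",
    ("ConsistentSynchronized", "stop -access"): "Idling",
    ("Idling", "start -primary"):               "ConsistentSynchronized",
    ("Idling", "start -primary -force"):        "InconsistentCopying",
}

def next_state(state, event):
    """Look up the resulting relationship state for an event."""
    return TRANSITIONS[(state, event)]
```

For example, a relationship created without -sync starts Inconsistent stopped, moves to Inconsistent copying on start, and reaches Consistent synchronized when the background copy completes.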
Under certain error scenarios, communications between the two clusters might be lost. For
example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric
connection between the two clusters might fail, leaving the two clusters running but unable to
communicate with each other.
When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.
The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship and what configuration commands are
permitted.
When the clusters can communicate again, the relationships become connected again. Metro
Mirror automatically reconciles the two state fragments, taking into account any configuration
or other event that took place while the relationship was disconnected. As a result, the
relationship can either return to the state that it was in when it became disconnected or it can
enter another connected state.
Relationships that are configured between VDisks in the same SVC cluster (intracluster) will
never be described as being in a disconnected state.
A secondary is described as consistent if it contains data that might have been read by a host
system from the primary if power had failed at an imaginary point in time while I/O was in
progress, and power was later restored. This imaginary point in time is defined as the
recovery point. The requirements for consistency are expressed with respect to activity at the
primary up to the recovery point:
The secondary VDisk contains the data from all of the writes to the primary for which the
host received successful completion and that data had not been overwritten by a
subsequent write (before the recovery point).
For writes for which the host did not receive a successful completion (that is, it received
bad completion or no completion at all), and the host subsequently performed a read from
the primary of that data and that read returned successful completion and no later write
was sent (before the recovery point), the secondary contains the same data as that
returned by the read from the primary.
From the point of view of an application, consistency means that a secondary VDisk contains
the same data as the primary VDisk at the recovery point (the time at which the imaginary
power failure occurred).
The application might work without a problem.
Because of the risk of data corruption, and in particular undetected data corruption, Metro
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all of the interdependent systems that communicate
and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, take one of the following approaches:
All of the data accessed by the group of systems must be placed into a single consistency
group.
The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in time in the past. Write I/O might have continued to a
primary and not have been copied to the secondary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.
When communication is lost for an extended period of time, Metro Mirror tracks the changes
that happen at the primary, but not the order of such changes, or the details of such changes
(write data). When communication is restored, it is impossible to synchronize the secondary
without sending write data to the secondary out-of-order and, therefore, losing consistency.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is not accessible for either read or write I/O. A copy process
needs to be started to make the secondary consistent.
This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a stop command that has caused the
copy process to stop.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transits to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is not accessible for either read or write I/O.
In this state, a background copy process runs that copies data from the primary to the
secondary VDisk.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or consistency group into an
InconsistentStopped state. A start command is accepted, but it has no effect.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the secondary contains a consistent
image, but it might be out-of-date with respect to the primary.
This state can arise when a relationship was in a Consistent Synchronized state and suffers
an error that forces a Consistency Freeze. It can also arise when a relationship is created with
a CreateConsistentFlag set to TRUE.
Normally, following an I/O error, subsequent write activity causes updates to the primary, and
the secondary is no longer synchronized (the synchronized attribute is set to false). In this
case, to re-establish synchronization, consistency must be given up for a period. You must use a start command
with the -force option to acknowledge this situation, and the relationship or consistency group
transits to InconsistentCopying. Enter this command only after all of the outstanding errors
are repaired.
In the unusual case where the primary and the secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual
case, you can enter a switch command that moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and the secondary.
An informational status log is generated every time that a relationship or consistency group
enters the ConsistentStopped state with a status of Online. You can configure this situation to
enable an SNMP trap and provide a trigger to automation software to consider issuing a
start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the primary VDisk is accessible for
read and write I/O, and the secondary VDisk is accessible for read-only I/O.
Writes that are sent to the primary VDisk are sent to both the primary and secondary VDisks.
Before a write is completed to the host, either successful completion must be received for
both writes, the write must be failed to the host, or the relationship must transit out of the
ConsistentSynchronized state.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary disks operate in the primary role.
Consequently, both master and auxiliary are accessible for write I/O.
In this state, the relationship or consistency group accepts a start command. Metro Mirror
maintains a record of regions on each disk that received write I/O while idling. This record is
used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either VDisk in any relationship has received write I/O, which is indicated
by the Synchronized status. If the start command leads to loss of consistency, you must
specify the -force parameter.
Also, while in this state, the relationship or consistency group accepts a -clean option on the
start command. If the relationship or consistency group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship
or consistency group are all in the primary role and accept read or write I/O.
The major priority in this state is to recover the link and make the relationship or consistency
group connected again.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transits to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or consistency group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship
or consistency group are all in the secondary role and do not accept read or write I/O.
No configuration activity, except for deletes, is permitted until the relationship becomes
connected again.
When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either condition is true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or
consistency group are all in the secondary role and accept read I/O but not write I/O.
In this state, the relationship or consistency group displays an attribute of FreezeTime, which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
cluster.
A stop command with the -access flag set to true transits the relationship or consistency
group to the IdlingDisconnected state. This state allows write I/O to be performed to the
secondary VDisk and is used as part of a DR scenario.
When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this action does not lead to a loss
of consistency. These conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the primary while disconnected.
Empty
This state only applies to consistency groups. It is the state of a consistency group that has
no relationships and no other state information to show.
It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group, at which point, the state of the relationship becomes the
state of the consistency group.
Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships that are in the
InconsistentCopying state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all of the nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
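The allocation rule above can be expressed as simple arithmetic. This is an illustrative sketch under the stated even-split assumption; the function name and units are invented:

```python
def per_relationship_rate(quota_mbps, relationships_per_node):
    """Split the background copy quota evenly across the nodes doing
    background copy, then split each node's share evenly across its
    copying relationships.

    relationships_per_node: for each copying node, how many eligible
    relationships it is processing.
    """
    nodes = len(relationships_per_node)
    per_node = quota_mbps / nodes  # even split, regardless of disk count
    return [per_node / n for n in relationships_per_node]
```

With a 100 MBps quota and two copying nodes, each node gets 50 MBps; a node copying two relationships gives each 25 MBps, while a node copying five gives each 10 MBps.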
Switching copy direction: The copy direction for a Metro Mirror relationship can be
switched so the auxiliary VDisk becomes the primary, and the master VDisk becomes the
secondary.
While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This time period is only intended to allow boot
time operating system discovery to complete without error, so that any hosts at the secondary
site can be ready to start up the applications with minimum delay, if required.
This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads that are performed at the secondary
and later write I/Os that are performed at the primary.
To enable access to the secondary VDisk for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter.
While access to the secondary VDisk for host operations is enabled, the host must be
instructed to mount the VDisk and related tasks before the application can be started, or
instructed to perform a recovery process.
The Metro Mirror requirement to enable the secondary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of which system is accessing it. Metro Mirror retains the
property that there are two volumes in existence, but it suppresses one volume while the copy
is being maintained.
Using the secondary copy requires a conscious policy decision by the administrator that a
failover is needed, and the tasks to be performed on the host to establish operation on the
secondary copy are substantial. The goal is to make this process rapid (much faster than
recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The SVC
provides Simple Network Management Protocol (SNMP) traps and programming (or
scripting) for the command-line interface (CLI) to enable this automation.
302 Implementing the IBM System Storage SAN Volume Controller V5.1
Parameter: Total VDisk size per I/O Group
Value: There is a per I/O Group limit of 1,024 TB on the quantity of primary and secondary
VDisk address space that can participate in Metro Mirror and Global Mirror relationships. This
maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow
no FlashCopy bitmap space.
The command set for Metro Mirror contains two broad groups:
Commands to create, delete, and manipulate relationships and consistency groups
Commands to cause state changes
Where a configuration command affects more than one cluster, Metro Mirror performs the
work to coordinate configuration activity between the clusters. Certain configuration
commands can only be performed when the clusters are connected and fail with no effect
when they are disconnected.
Other configuration commands are permitted even though the clusters are disconnected. The
state is reconciled automatically by Metro Mirror when the clusters become connected again.
For any given command, with one exception, a single cluster actually receives the command
from the administrator. This design is significant for defining the context for a
CreateRelationship (svctask mkrcrelationship) or CreateConsistencyGroup (svctask
mkrcconsistgrp) command: the cluster receiving the command is called the local cluster.
The exception mentioned previously is the command that sets clusters into a Metro Mirror
partnership. The mkpartnership command must be issued to both the local and remote
clusters.
The commands here are described as an abstract command set and are implemented as
either of these methods:
A command-line interface (CLI), which can be used for scripting and automation
A graphical user interface (GUI), which can be used for one-off tasks
svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for
setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror
relationships.
svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror
partnership between the local cluster and a remote cluster.
To establish a fully functional Metro Mirror partnership, you must issue this command to both
clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on
the SVC clusters.
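Establishing a fully functional partnership might look like the following sketch (cluster names are hypothetical; the -bandwidth value is in MBps):

```shell
# Issued on the local cluster (ITSO_CL1), naming the remote cluster:
svctask mkpartnership -bandwidth 50 ITSO_CL2
# Issued on the remote cluster (ITSO_CL2), naming the local cluster:
svctask mkpartnership -bandwidth 50 ITSO_CL1
```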
When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster, and if it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
In order to set the background copy bandwidth optimally, make sure that you consider all
three resources (the primary storage, the intercluster link bandwidth, and the secondary
storage). Provision the most restrictive of these three resources between the background
copy bandwidth and the peak foreground I/O workload. This provisioning can be done by a
calculation (as previously described) or alternatively by determining experimentally how much
background copy can be allowed before the foreground I/O latency becomes unacceptable,
and then backing off to allow for peaks in workload and a safety margin.
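A minimal sketch of that sizing rule, using hypothetical throughput figures (all in MBps):

```shell
# Hypothetical figures for the three resources and the foreground workload.
primary_storage=400     # sustainable throughput of primary back-end storage
link=200                # intercluster link bandwidth
secondary_storage=300   # sustainable throughput of secondary back-end storage
peak_foreground=120     # peak foreground I/O workload

# Find the most restrictive of the three resources:
min=$primary_storage
if [ "$link" -lt "$min" ]; then min=$link; fi
if [ "$secondary_storage" -lt "$min" ]; then min=$secondary_storage; fi

# Leave headroom for the peak foreground workload (plus any safety margin):
background_copy=$(( min - peak_foreground ))
echo "suggested background copy bandwidth: ${background_copy} MBps"
```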
svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC
cluster partnership, you can use the svctask chpartnership command to specify the new
bandwidth.
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror
consistency group.
The Metro Mirror consistency group name must be unique across all of the consistency
groups that are known to the clusters owning this consistency group. If the consistency group
involves two clusters, the clusters must be in communication throughout the creation process.
The new consistency group does not contain any relationships and will be in the Empty state.
Metro Mirror relationships can be added to the group either upon creation or afterward by
using the svctask chrcrelationship command.
svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship.
This relationship persists until it is deleted.
The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if
both VDisks are in the same cluster, they must both be in the same I/O Group. The master
and auxiliary VDisk cannot be in an existing relationship and cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When creating the Metro Mirror relationship, it can be added to an already existing
consistency group, or it can be a stand-alone Metro Mirror relationship if no consistency
group is specified.
To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available VDisks that are
eligible for a Metro Mirror relationship.
When issuing the command, you can specify the master VDisk name and auxiliary cluster to
list candidates that comply with prerequisites to create a Metro Mirror relationship. If the
command is issued with no flags, all VDisks that are not disallowed by another configuration
state, such as being a FlashCopy target, are listed.
svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a
Metro Mirror relationship:
Change the name of a Metro Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.
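A typical creation flow, sketched with hypothetical object names, might be:

```shell
# Create an empty consistency group spanning the local cluster and ITSO_CL2:
svctask mkrcconsistgrp -cluster ITSO_CL2 -name CG_W2K3_MM
# List VDisks on ITSO_CL2 that are eligible to pair with the master VDisk:
svcinfo lsrcrelationshipcandidate -master MM_DB_Pri -aux ITSO_CL2
# Create the relationship directly in the consistency group:
svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec \
  -cluster ITSO_CL2 -consistgrp CG_W2K3_MM
# Or add an existing stand-alone relationship to the group afterward:
svctask chrcrelationship -consistgrp CG_W2K3_MM MMREL1
```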
svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror
consistency group.
svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro
Mirror relationship.
When issuing the command, you can set the copy direction if it is undefined, and optionally
mark the secondary VDisk of the relationship as clean. The command fails if it is used to
attempt to start a relationship that is already part of a consistency group.
This command can only be issued to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (primary and secondary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
either by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force flag when restarting the relationship. This situation can arise if, for
example, the relationship was stopped, and then, further writes were performed on the
original primary of the relationship. The use of the -force flag here is a reminder that the data
on the secondary will become inconsistent while resynchronization (background copying)
occurs, and therefore, the data is not usable for DR purposes before the background copy has
completed.
In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
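As an illustration (relationship name hypothetical):

```shell
# Start the initial copy of a relationship in the InconsistentStopped state:
svctask startrcrelationship MMREL1
# Restart an idling relationship whose secondary will briefly lose
# consistency during resynchronization; the copy direction is set with
# -primary, and -force acknowledges the temporary inconsistency:
svctask startrcrelationship -primary master -force MMREL1
```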
svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a
relationship. It can also be used to enable write access to a consistent secondary VDisk by
specifying the -access flag.
If the relationship is in an Inconsistent state, any copy operation stops and does not resume
until you issue a svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized
state, this command causes a consistency freeze.
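For example (relationship name hypothetical):

```shell
# Stop the copy process; a ConsistentSynchronized relationship takes a
# consistency freeze:
svctask stoprcrelationship MMREL1
# Stop and also enable host write access to a consistent secondary VDisk:
svctask stoprcrelationship -access MMREL1
```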
svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group.
This command can only be issued to a consistency group that is connected.
For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped either by a stop command or by an I/O error.
svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro
Mirror consistency group. It can also be used to enable write access to the secondary VDisks
in the group if the group is in a Consistent state.
If the consistency group is in an Inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary VDisks belonging to the relationships in the group.
For a consistency group in the ConsistentSynchronized state, this command causes a
consistency freeze.
svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two VDisks. It does
not affect the VDisks themselves.
If the relationship is disconnected at the time that the command is issued, the relationship is
only deleted on the cluster on which the command is being run. When the clusters reconnect,
then the relationship is automatically deleted on the other cluster.
If you delete an inconsistent relationship, the secondary VDisk becomes accessible even
though it is still inconsistent. This situation is the one case in which Metro Mirror does not
inhibit access to inconsistent data.
svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror consistency group.
This command deletes the specified consistency group. You can issue this command for any
existing consistency group.
If the consistency group is disconnected at the time that the command is issued, the
consistency group is only deleted on the cluster on which the command is being run. When
the clusters reconnect, the consistency group is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the primary and
secondary VDisks when a stand-alone relationship is in a Consistent state. When issuing the
command, the desired primary is specified.
svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the primary and
secondary VDisks when a consistency group is in a Consistent state. This change is applied
to all of the relationships in the consistency group, and when issuing the command, the
desired primary is specified.
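For example (object names hypothetical; both commands require a Consistent state):

```shell
# Make the auxiliary VDisk the primary of a stand-alone relationship:
svctask switchrcrelationship -primary aux MMREL1
# Reverse the roles for every relationship in a consistency group:
svctask switchrcconsistgrp -primary aux CG_W2K3_MM
```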
6.6.15 Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships that are in the
InconsistentCopying state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
the nodes that are performing background copy for one of the eligible relationships. This
allocation is made without regard for the number of disks for which the node is responsible.
Each node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
Global Mirror works by defining a Global Mirror relationship between two VDisks of equal size
and maintains the data consistency in an asynchronous manner. Therefore, when a host
writes to a source VDisk, the data is copied from the source VDisk cache to the target VDisk
cache. At the initiation of that data copy, the confirmation of I/O completion is transmitted back
to the host.
Minimum firmware requirement: The minimum firmware requirement for Global Mirror
functionality is V4.1.1. Any cluster or partner cluster that is not running this minimum level
will not have Global Mirror functionality available. Even if you have a Global Mirror
relationship running on a down-level partner cluster and you only want to use intracluster
Global Mirror, the functionality will not be available to you.
Limit: When a local and a remote fabric are connected together for Global Mirror
purposes, the ISL hop count between a local node and a remote node must not exceed
seven hops.
The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over
long distance links with higher latency, without requiring the hosts to wait for the full round-trip
delay of the long distance link.
Figure 6-23 shows that a write operation to the master VDisk is acknowledged back to the
host issuing the write before the write operation is mirrored to the cache for the auxiliary
VDisk.
The Global Mirror algorithms maintain a consistent image at the secondary at all times. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
primary, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary. As a result, Global Mirror maintains the features of Write Ordering
and Read Stability that are described in this chapter.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary cluster and, so, is not subject to the latency
of the long distance link. These two elements of the protocol ensure that the throughput of the
total cluster can be grown by increasing cluster size, while maintaining consistency across a
growing data set.
In a failover scenario, where the secondary site needs to become the primary source of data,
certain updates might be missing at the secondary site. Therefore, any applications that will
use this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, a transaction log replay.
Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write is active on any
given 512 byte LBA of a VDisk. If a further write is received from a host while the secondary
write is still active, even though the primary write might have completed, the new host write
will be delayed until the secondary write is complete. This restriction is needed in case a
series of writes to the secondary have to be retried (called “reconstruction”). Conceptually,
the data for reconstruction comes from the primary VDisk.
If multiple writes are allowed to be applied to the primary for a given sector, only the most
recent write will get the correct data during reconstruction, and if reconstruction is interrupted
for any reason, the intermediate state of the secondary is Inconsistent.
Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to secondary VDisks. This feature allows testing to be performed that detects
colliding writes, and therefore, this feature can be used to test an application before the full
deployment of the feature. The feature can be enabled separately for each of the intracluster
or intercluster Global Mirrors. You specify the delay setting by using the chcluster command
and viewed by using the lscluster command. The gm_intra_delay_simulation field
expresses the amount of time that intracluster secondary I/Os are delayed. The
gm_inter_delay_simulation field expresses the amount of time that intercluster secondary
I/Os are delayed. A value of zero disables the feature.
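A sketch of enabling and checking the feature; the chcluster flag names are assumed from the field names that lscluster reports, and the cluster name is hypothetical. Delay values are in milliseconds:

```shell
# Delay intercluster secondary I/Os by 20 ms and intracluster ones by 40 ms:
svctask chcluster -gminterdelaysimulation 20
svctask chcluster -gmintradelaysimulation 40
# Verify the settings; a value of zero disables the simulation:
svcinfo lscluster ITSO_CL1
```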
Multiple Cluster Mirroring
SVC 5.1 introduces Multiple Cluster Mirroring. The rules for a Global Mirror Multiple Cluster
Mirroring environment are the same as the rules in a Metro Mirror environment. For more
detailed information, see 6.5.4, “Multiple Cluster Mirroring” on page 284.
A Global Mirror relationship is composed of two VDisks that are equal in size. The master
VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster
(intracluster Global Mirror), or can be on separate SVC clusters that are defined as SVC
partners (intercluster Global Mirror).
Rules:
A VDisk can only be part of one Global Mirror relationship at a time.
A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.
In the most common applications of Global Mirror, the master VDisk contains the production
copy of the data and is used by the host application, while the auxiliary VDisk contains the
mirrored copy of the data and is used for failover in DR scenarios. The terms master and
auxiliary help explain this use. If Global Mirror is applied differently, the terms master and
auxiliary VDisks need to be interpreted appropriately.
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step.
Database logs: All databases have logs associated with them. These logs keep records of
database changes. If a database needs to be restored to a point beyond the last full, offline
backup, logs are required to roll the data forward to the point of failure.
But imagine if the database log and the database are on separate VDisks and a Global Mirror
relationship is stopped during this update. In this case, you must consider the possibility that
the Global Mirror relationship for the VDisk with the database file is stopped slightly before the
VDisk containing the database log.
If this happens, it is possible that the secondary VDisks see writes (1) and (3) but not write
(2). Then, if the database was restarted using the data available from the secondary disks,
the database log indicates that the transaction had completed successfully, when it did not. In
this scenario, the integrity of the database is in question.
A Global Mirror consistency group can contain an arbitrary number of relationships up to the
maximum number of Global Mirror relationships that is supported by the SVC cluster. Global
Mirror commands can be issued to a Global Mirror consistency group, and thereby
simultaneously for all Global Mirror relationships that are defined within that consistency
group, or to a single Global Mirror relationship that is not part of a consistency group.
For example, when issuing a Global Mirror start command to the consistency group, all of
the Global Mirror relationships in the consistency group are started at the same time.
Figure 6-27 on page 316 illustrates the concept of Global Mirror consistency groups. Because
GM_Relationship 1 and GM_Relationship 2 are part of the consistency group, they can be
handled as one entity, while the stand-alone GM_Relationship 3 is handled separately.
Certain uses of Global Mirror require the manipulation of more than one relationship. Global
Mirror consistency groups can provide the ability to group relationships so that they are
manipulated in unison. Global Mirror relationships within a consistency group can be in any
form:
Global Mirror relationships can be part of a consistency group, or be stand-alone and
therefore handled as single instances.
A consistency group can contain zero or more relationships. An empty consistency group,
with zero relationships in it, has little purpose until it is assigned its first relationship, except
that it has a name.
All of the relationships in a consistency group must have matching master and auxiliary
SVC clusters.
For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error, there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Global Mirror rejects attempts to enable access to the
secondary VDisks of either application.
If one application finishes its background copy much more quickly than the other application,
Global Mirror still refuses to grant access to its secondary VDisk. Even though it is safe in this
case, Global Mirror policy refuses access to the entire consistency group if any part of it is
inconsistent.
Stand-alone relationships and consistency groups share a common configuration and state
model. All of the relationships in a consistency group that is not empty have the same state as
the consistency group.
SVC node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, you must define a zone in each fabric for intercluster
communication; see Chapter 3, “Planning and configuration” on page 65 for more
information.
These channels are maintained and updated as nodes appear and disappear and as links
fail, and are repaired to maintain operation where possible. If communication between the
SVC clusters is interrupted or lost, an error is logged (and, consequently, Global Mirror
relationships will stop).
To handle error conditions, you can configure the SVC to raise SNMP traps or send e-mail.
Alternatively, if Tivoli Storage Productivity Center for Replication is in place, it can monitor the
link status and issue an alert by using SNMP traps or e-mail.
Devices that advertise themselves as SVC nodes are categorized according to the SVC
cluster to which they belong. SVC nodes that belong to the same cluster establish
communication channels between themselves and begin to exchange messages to
implement the clustering and functional protocols of SVC.
Nodes that are in separate clusters do not exchange messages after the initial discovery is
complete unless they have been configured together to perform Global Mirror.
If the designated node fails (or if all of its logins to the remote cluster fail), a new node is
chosen to carry control traffic. This event causes I/O to pause, but it does not cause
relationships to become Consistent Stopped.
Figure 6-28 shows the optimal mapping between VDisks and their preferred nodes to obtain
the best performance.
Background copy I/O will be scheduled to avoid bursts of activity that might have an adverse
effect on system behavior. An entire grain of tracks on one VDisk will be processed at around
the same time but not as a single I/O. Double buffering is used to try to take advantage of
sequential performance within a grain. However, the next grain within the VDisk might not be
scheduled for a while. Multiple grains might be copied simultaneously and might be enough to
satisfy the requested rate, unless the available resources cannot sustain the requested rate.
Background copy proceeds from the low LBA to the high LBA in sequence to avoid convoying
conflicts with FlashCopy, which operates in the opposite direction. It is expected that
background copy will not convoy conflict with sequential applications, because it tends to vary
disks more often.
6.10.6 Space-efficient background copy
Prior to SVC 4.3.1, if a primary VDisk was space-efficient, the background copy process
caused the secondary to become fully allocated. When both primary and secondary clusters
are running SVC 4.3.1 or higher, Metro Mirror and Global Mirror relationships can preserve
the space-efficiency of the primary.
Conceptually, the background copy process detects an unallocated region of the primary and
sends a special “zero buffer” to the secondary. If the secondary VDisk is space-efficient, and
the region is unallocated, the special buffer prevents a write (and, therefore, an allocation). If
the secondary VDisk is not space-efficient, or the region in question is an allocated region of
a Space-Efficient VDisk, a buffer of “real” zeros is synthesized on the secondary and written
as normal.
If the secondary cluster is running code prior to SVC 4.3.1, this version of the code is
detected by the primary cluster and a buffer of “real” zeros is transmitted and written on the
secondary. The background copy rate controls the rate at which the virtual capacity is being
copied.
In either technique, no write I/O must take place on either the master or the auxiliary before
the relationship is established.
If these steps are not performed correctly, the relationship is reported as being consistent,
when it is not. This situation most likely makes any secondary disk useless. This method has
an advantage over full synchronization: It does not require all of the data to be copied over a
constrained link. However, if the data must be copied, the master and auxiliary disks cannot
be used until the copy is complete, which might be unacceptable.
After the copy is complete, the administrator must ensure that a new relationship is started
(startrcrelationship is issued) with the -clean flag.
With this technique, only the data that has changed since the relationship was created,
including all regions that were incorrect in the tape image, is copied from the master to the auxiliary.
As with “Synchronized before creation” on page 320, the copy step must be performed
correctly, or else the auxiliary is useless, although the copy reports it as being synchronized.
Figure 6-29 on page 321 shows an overview of the states that apply to a Global Mirror
relationship in the connected state.
Figure 6-29 Global Mirror state diagram
When creating the Global Mirror relationship, you can specify whether the auxiliary VDisk is
already in sync with the master VDisk, and the background copy process is then skipped.
This capability is especially useful when creating Global Mirror relationships for VDisks that
have been created with the format option. The following steps explain the Global Mirror state
diagram (these numbers correspond to the numbers in Figure 6-29):
Step 1:
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror
relationship enters the Consistent stopped state.
b. The Global Mirror relationship is created without specifying that the master and
auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent
stopped state.
Step 2:
a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the
Consistent synchronized state. This state implies that no updates (write I/O) have been
performed on the primary VDisk while in the Consistent stopped state. Otherwise, you
must specify the -force option, and the Global Mirror relationship then enters the
Inconsistent copying state, while the background copy is started.
b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters
the Inconsistent copying state, while the background copy is started.
Step 3:
a. When the background copy completes, the Global Mirror relationship transitions from
the Inconsistent copying state to the Consistent synchronized state.
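The two creation paths in step 1 might look like this sketch (object names hypothetical; the -global flag makes the relationship Global Mirror rather than Metro Mirror):

```shell
# Created with -sync: the VDisks are asserted to be identical already, so the
# relationship enters the Consistent stopped state and background copy is
# skipped:
svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec \
  -cluster ITSO_CL2 -global -sync
# Created without -sync: the relationship enters the Inconsistent stopped
# state, and a full background copy runs when it is started:
svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec \
  -cluster ITSO_CL2 -global
```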
In a case where the connection is broken between the SVC clusters in a partnership, all of the
(intercluster) Global Mirror relationships enter a Disconnected state. For further information,
refer to “Connected versus disconnected” on page 322.
Under certain error scenarios, communications between the two clusters might be lost. For
example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric
connection between the two clusters might fail, leaving the two clusters running but unable to
communicate with each other.
When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.
In this scenario, each cluster is left with half of the relationship, and each cluster has only a
portion of the information that was available to it before. Only a subset of the normal
configuration activity is available.
The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship and which configuration commands are
permitted.
When the clusters can communicate again, the relationships become connected again.
Global Mirror automatically reconciles the two state fragments, taking into account any
configuration activity or other event that took place while the relationship was disconnected.
As a result, the relationship can either return to the state that it was in when it became
disconnected or it can enter another connected state.
Relationships that are configured between VDisks in the same SVC cluster (intracluster) will
never be described as being in a disconnected state.
A secondary is described as consistent if it contains data that might have been read by a host
system from the primary if power had failed at an imaginary point in time while I/O was in
progress, and power was later restored. This imaginary point in time is defined as the
recovery point. The requirements for consistency are expressed with respect to activity at the
primary up to the recovery point:
The secondary VDisk contains the data from all writes to the primary for which the host
had received successful completion and that data has not been overwritten by a
subsequent write (before the recovery point).
The writes are on the secondary and the host did not receive successful completion for
these writes (that is, the host received bad completion or no completion at all), and the
host subsequently performed a read from the primary of that data. If that read returned
successful completion and no later write was sent (before the recovery point), the
secondary contains the same data as the data that was returned by the read from the
primary.
From the point of view of an application, consistency means that a secondary VDisk contains
the same data as the primary VDisk at the recovery point (the time at which the imaginary
power failure occurred).
Because of the risk of data corruption, and, in particular, undetected data corruption, Global
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all of the interdependent systems that communicate
and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, one of the following approaches is required:
All of the data that is accessed by the group of systems must be placed into a single
consistency group.
The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at an earlier point in time. Write I/O might have continued to a
primary and not have been copied to the secondary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.
When communication is lost for an extended period of time, Global Mirror tracks the changes
that happen at the primary, but not the order of these changes, or the details of these
changes (write data). When communication is restored, it is impossible to make the
secondary synchronized without sending write data to the secondary out-of-order and,
therefore, losing consistency.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is inaccessible for either read or write I/O. A copy process needs
to be started to make the secondary consistent.
This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a stop command that has caused the
copy process to stop.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transits to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the primary is accessible for read and
write I/O, but the secondary is inaccessible for either read or write I/O.
In this state, a background copy process runs, which copies data from the primary to the
secondary VDisk.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or consistency group into the
InconsistentStopped state. A start command is accepted, but it has no effect.
If the relationship or consistency group becomes disconnected, the secondary side transits to
InconsistentDisconnected. The primary side transits to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the secondary contains a consistent
image, but it might be out-of-date with respect to the primary.
This state can arise when a relationship is in the ConsistentSynchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created with the CreateConsistentFlag set to true.
Normally, following an I/O error, subsequent write activity causes updates to the primary, and
the secondary is no longer synchronized (the synchronized attribute is set to false). In this
case, to re-establish synchronization, consistency must be given up for a period. A start
command with the -force option must be used to acknowledge this situation, and the
relationship or consistency group transits to InconsistentCopying. Issue this command only
after all of the outstanding errors are repaired.
In the unusual case where the primary and secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual
case, a switch command is permitted that moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and the secondary.
An informational status log is generated every time a relationship or consistency group enters
the ConsistentStopped state with a status of Online. This log can be configured to enable an
SNMP trap, providing a trigger for automation software to consider issuing a start
command following a loss of synchronization.
ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O.
The secondary VDisk is accessible for read-only I/O.
Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks.
Either successful completion must be received for both writes, or the write must be failed to
the host, or the relationship must transit out of the ConsistentSynchronized state before the
write is completed to the host.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary disks are operating in the primary role.
Consequently, both master and auxiliary disks are accessible for write I/O.
In this state, the relationship or consistency group accepts a start command. Global Mirror
maintains a record of regions on each disk that received write I/O while Idling. This record is
used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either VDisk in any relationship has received write I/O, which is indicated
by the synchronized status. If the start command leads to loss of consistency, you must
specify a -force parameter.
Also, while in this state, the relationship or consistency group accepts a -clean option on the
start command. If the relationship or consistency group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship
or consistency group are all in the primary role and accept read or write I/O.
The major priority in this state is to recover the link and reconnect the relationship or
consistency group.
No configuration activity is possible (except for deletes or stops) until the relationship is
reconnected. At that point, the relationship transits to a connected state. The exact connected
state that is entered depends on the state of the other half of the relationship or consistency
group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship
or consistency group are all in the secondary role and do not accept read or write I/O.
No configuration activity, except for deletes, is permitted until the relationship reconnects.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or
consistency group are all in the secondary role and accept read I/O but not write I/O.
In this state, the relationship or consistency group displays an attribute of FreezeTime, which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
cluster.
A stop command with the -access flag set to true transits the relationship or consistency
group to the IdlingDisconnected state. This state allows write I/O to be performed to the
secondary VDisk and is used as part of a DR scenario.
When the relationship or consistency group reconnects, the relationship or consistency group
becomes ConsistentSynchronized only if this state does not lead to a loss of consistency.
This is the case provided that these conditions are true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the primary while disconnected.
Empty
Empty is a state that applies only to consistency groups. It is entered when a consistency
group is first created. It is exited when the first relationship is added to the consistency group,
at which point, the state of the relationship becomes the state of the consistency group.
When creating the Global Mirror relationship, one VDisk is defined as the master, and the
other VDisk is defined as the auxiliary. The relationship between the two copies is
asymmetric. When the Global Mirror relationship is created, the master VDisk is initially
considered the primary copy (often referred to as the source), and the auxiliary VDisk is
considered the secondary copy (often referred to as the target).
The master VDisk is the production VDisk, and updates to this copy are real-time mirrored to
the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was
created are destroyed.
Switching the copy direction: The copy direction for a Global Mirror relationship can be
switched so the auxiliary VDisk becomes the primary and the master VDisk becomes the
secondary.
While the Global Mirror relationship is active, the secondary copy (VDisk) is inaccessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This read-only access is only intended to allow
boot time operating system discovery to complete without error, so that any hosts at the
secondary site can be ready to start up the applications with minimal delay, if required.
For example, many operating systems need to read logical block address (LBA) 0 (zero) to
configure a logical unit. Although read access is allowed at the secondary, in practice the data
on the secondary volumes cannot be read by a host, because most operating systems write a
“dirty bit” to the file system when it is mounted. Because this write operation is not allowed on
the secondary volume, the volume cannot be mounted.
This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads that are performed at the secondary
and later write I/Os that are performed at the primary.
To enable access to the secondary VDisk for host operations, you must stop the Global Mirror
relationship by specifying the -access parameter.
While access to the secondary VDisk for host operations is enabled, you must instruct the
host to mount the VDisk and other related tasks, before the application can be started or
instructed to perform a recovery process.
Using a secondary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host that is involved in establishing
operation on the secondary copy are substantial. The goal is to make this failover rapid (much
faster than recovering from a backup copy), but it is not seamless.
You can automate the failover process by using failover management software. The SVC
provides Simple Network Management Protocol (SNMP) traps and programming (or
scripting) for the command-line interface (CLI) to enable this automation.
Total VDisk size per I/O Group: A per I/O Group limit of 1,024 TB exists on the quantity of
primary and secondary VDisk address space that can participate in Metro Mirror and Global
Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for
the I/O Group and leaves no FlashCopy bitmap space.
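The arithmetic behind this limit can be checked with a short sketch, assuming that one bit of bitmap space tracks one copy grain of VDisk address space (the grain size is inferred here from the two published figures, not taken from the table):

```shell
# Divide the 1,024 TB per-I/O Group remote copy limit by the
# 512 MB of available bitmap space to infer the copy grain size.
capacity=$(( 1024 * 1024 * 1024 * 1024 * 1024 ))   # 1,024 TB in bytes
bitmap_bits=$(( 512 * 1024 * 1024 * 8 ))           # 512 MB of bitmap, in bits
grain=$(( capacity / bitmap_bits ))
echo "VDisk bytes tracked per bitmap bit: $grain"  # prints 262144 (256 KB)
```

The result, 256 KB of address space per bitmap bit, is simply what falls out of dividing the two published figures.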
The command set for Global Mirror contains two broad groups:
Commands to create, delete, and manipulate relationships and consistency groups
Commands that cause state changes
Where a configuration command affects more than one cluster, Global Mirror performs the
work to coordinate configuration activity between the clusters. Certain configuration
commands can only be performed when the clusters are connected, and those commands
fail with no effect when the clusters are disconnected.
For any given command, with one exception, a single cluster actually receives the command
from the administrator. This action is significant for defining the context for a
CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup
(mkrcconsistgrp) command, in which case, the cluster receiving the command is called the
local cluster.
The exception is the command that sets clusters into a Global Mirror partnership. The
administrator must issue the mkpartnership command to both the local and to the remote
cluster.
The commands are described here as an abstract command set. You can implement these
commands in one of two ways:
A command-line interface (CLI), which can be used for scripting and automation
A graphical user interface (GUI), which can be used for one-off tasks
svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the clusters that are available for
setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror
relationships.
To display the characteristics of the cluster, use the svcinfo lscluster command, specifying
the name of the cluster.
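As a sketch (the cluster name is illustrative, taken from this book's lab naming convention), the sequence might be:

```
svcinfo lsclustercandidate
svcinfo lscluster ITSO_SVC_1
```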
svctask chcluster
The svctask chcluster command has three parameters that apply to Global Mirror:
-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system will tolerate delay
before stopping Global Mirror relationships. Specify values between 60 and 86400
seconds in increments of 10 seconds. The default value is 300. Do not change this value
except under the direction of IBM Support.
-gminterdelaysimulation inter_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to a secondary VDisk) is delayed. This parameter permits you to test performance
implications before deploying Global Mirror and obtaining a long distance link. Specify a
value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use
this argument to test each intercluster Global Mirror relationship separately.
-gmintradelaysimulation intra_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to a secondary VDisk) is delayed. This parameter permits you to test performance
implications before deploying Global Mirror and obtaining a long distance link. Specify a
value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use
this argument to test each intracluster Global Mirror relationship separately.
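For example, to simulate a 20 millisecond intercluster delay during testing and then verify the setting (the delay value is illustrative):

```
svctask chcluster -gminterdelaysimulation 20
svcinfo lscluster <clustername>
```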
You can view all of these parameter values with the svcinfo lscluster <clustername>
command.
gmlinktolerance
The gmlinktolerance parameter warrants particular attention.
If poor response extends past the specified tolerance, a 1920 error is logged and one or more
Global Mirror relationships are automatically stopped, which protects the application hosts at
the primary site. During normal operation, application hosts experience a minimal effect from
the response times, because the Global Mirror feature uses asynchronous replication.
However, if Global Mirror operations experience degraded response times from the
secondary cluster for an extended period of time, I/O operations begin to queue at the
primary cluster. This queue results in an extended response time to application hosts. In this
situation, the gmlinktolerance feature stops Global Mirror relationships and the application
host’s response time returns to normal. After a 1920 error has occurred, the Global Mirror
auxiliary VDisks are no longer in the consistent_synchronized state until you fix the cause of
the error and restart your Global Mirror relationships. For this reason, ensure that you monitor
the cluster to track when this 1920 error occurs.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero).
However, the gmlinktolerance feature cannot protect applications from extended response
times if it is disabled. It might be appropriate to disable the gmlinktolerance feature in the
following circumstances:
During SAN maintenance windows where degraded performance is expected from SAN
components and application hosts can withstand extended response times from Global
Mirror VDisks.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the Global Mirror relationships. For
example, if you test using an I/O generator, which is configured to stress the back-end
storage, the gmlinktolerance feature might detect the high latency and stop the Global
Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of
exposing the test host to extended response times.
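During such a maintenance window, the sequence might look like the following sketch, which disables the feature and then restores the default value of 300 seconds:

```
svctask chcluster -gmlinktolerance 0
(perform the SAN maintenance)
svctask chcluster -gmlinktolerance 300
```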
Example 6-2 shows an example of a script in ksh to check the Global Mirror status.
#!/bin/ksh
# Check the Global Mirror consistency group status and restart the group
# if it has stopped. Site-specific settings:
HOSTsvcNAME="svccluster"                 # SVC cluster name or IP address
IDCONS="0"                               # ID of the consistency group to monitor
FLOG="/var/log/gm_check.log"             # log file
PARA_TEST="consistent_synchronized"      # healthy state
PARA_TESTSTOP="consistent_stopped"       # stopped states that trigger a restart
PARA_TESTSTOPIN="inconsistent_stopped"
VAR=0
CICLI="false"
if [[ $1 == "" ]]
then
   CICLI="true"
fi
while $CICLI
do
   GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
   echo "`date` Global Mirror STATUS <$GM_STATUS> " >> $FLOG
   if [[ $GM_STATUS = $PARA_TEST ]]
   then
      sleep 600
   else
      sleep 600
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
      then
         ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
         TESTEX=$?
         echo "`date` Global Mirror RESTARTED with RC=$TESTEX " >> $FLOG
         GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
         if [[ $GM_STATUS != $PARA_TESTSTOP && $GM_STATUS != $PARA_TESTSTOPIN ]]
         then
            echo "`date` Global Mirror restarted <$GM_STATUS>" >> $FLOG
         else
            echo "`date` ERROR Global Mirror restart failed <$GM_STATUS>" >> $FLOG
         fi
      fi
      sleep 600
   fi
   ((VAR+=1))
done
Sample script: The script that is described in Example 6-2 on page 331 is supplied as-is.
A 1920 error indicates that one or more of the SAN components are unable to provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If you experience 1920 errors, we suggest that you install a SAN performance analysis tool,
such as the IBM Tivoli Storage Productivity Center, and make sure that the tool is correctly
configured and monitoring statistics, so that you can identify problems and try to prevent them.
svctask mkpartnership
Use the svctask mkpartnership command to establish a one-way Global Mirror partnership
between the local cluster and a remote cluster.
To establish a fully functional Global Mirror partnership, you must issue this command on both
clusters. This step is a prerequisite for creating Global Mirror relationships between VDisks on
the SVC clusters.
When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster, and if it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
In order to set the background copy bandwidth optimally, make sure that you consider all
three resources (the primary storage, the intercluster link bandwidth, and the secondary
storage). Provision the most restrictive of these three resources between the background
copy bandwidth and the peak foreground I/O workload. Perform this provisioning by
calculation or, alternatively, by determining experimentally how much background copy can
be allowed before the foreground I/O latency becomes unacceptable and then reducing the
background copy to accommodate peaks in workload and an additional safety margin.
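The provisioning rule above can be expressed as a small calculation. All of the throughput figures below are illustrative assumptions, not measured values:

```shell
# Choose a background copy bandwidth from the most restrictive of the
# three resources, leaving headroom for the peak foreground workload.
primary_storage=200     # MBps sustainable by the primary back-end storage (assumed)
link=100                # MBps sustainable by the intercluster link (assumed)
secondary_storage=150   # MBps sustainable by the secondary back-end storage (assumed)
peak_foreground=60      # MBps of peak foreground I/O workload (assumed)

min=$primary_storage
if [ "$link" -lt "$min" ]; then min=$link; fi
if [ "$secondary_storage" -lt "$min" ]; then min=$secondary_storage; fi

background=$(( min - peak_foreground ))
echo "Candidate -bandwidth value for svctask mkpartnership: $background MBps"
```

With these figures, the intercluster link (100 MBps) is the most restrictive resource, leaving 40 MBps for background copy, which would then be supplied on both clusters, for example, svctask mkpartnership -bandwidth 40 ITSO_SVC_2 (the cluster name is illustrative).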
svctask chpartnership
To change the bandwidth that is available for background copy in an SVC cluster partnership,
use the svctask chpartnership command to specify the new bandwidth.
svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror consistency
group.
The Global Mirror consistency group name must be unique across all consistency groups that
are known to the clusters owning this consistency group. If the consistency group involves two
clusters, the clusters must be in communication throughout the creation process.
The new consistency group does not contain any relationships and will be in the Empty state.
You can add Global Mirror relationships to the group, either upon creation or afterward, by
using the svctask chrelationship command.
Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship will be created instead of a Global Mirror relationship.
svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This
relationship persists until it is deleted.
The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if
both VDisks are in the same cluster, they must both be in the same I/O Group. The master
and auxiliary VDisk cannot be in an existing relationship, and they cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When creating the Global Mirror relationship, you can add it to a consistency group that
already exists, or it can be a stand-alone Global Mirror relationship if no consistency group is
specified.
To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as
shown in “svcinfo lsrcrelationshipcandidate” on page 334.
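As a sketch (all object names are illustrative), creating a consistency group and then a Global Mirror relationship within it might look like the following; note that omitting -global creates a Metro Mirror relationship instead:

```
svctask mkrcconsistgrp -name CG_W2K3_GM -cluster ITSO_SVC_2
svctask mkrcrelationship -master GM_Master_VD -aux GM_Aux_VD \
        -cluster ITSO_SVC_2 -consistgrp CG_W2K3_GM -global -name GMREL1
```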
svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available VDisks that are
eligible to form a Global Mirror relationship.
When issuing the command, you can specify the master VDisk name and auxiliary cluster to
list candidates that comply with the prerequisites to create a Global Mirror relationship. If the
command is issued with no parameters, all VDisks that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.
svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global
Mirror relationship:
Change the name of a Global Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.
svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror
consistency group.
svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror
relationship.
When issuing the command, you can set the copy direction if it is undefined, and, optionally,
you can mark the secondary VDisk of the relationship as clean. The command fails if it is
used as an attempt to start a relationship that is already a part of a consistency group.
You can only issue this command to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (primary and secondary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
either by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when restarting the relationship. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original primary of the relationship. The use of the -force parameter here is a reminder
that the data on the secondary will become inconsistent while resynchronization (background
copying) takes place and, therefore, is unusable for DR purposes before the background copy
has completed.
In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the primary argument, but it must match the existing
setting.
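For example, restarting an idling stand-alone relationship, accepting the temporary loss of consistency, might look like this (the relationship name is illustrative):

```
svctask startrcrelationship -primary master -force GMREL1
```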
svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a Global Mirror
relationship. You can also use this command to enable write access to a consistent
secondary VDisk.
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue an svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized
state, this command causes a Consistency Freeze.
svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror consistency group.
You can only issue this command to a consistency group that is connected.
For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped either by a stop command or by an I/O error.
svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror
consistency group. You can also use this command to enable write access to the secondary
VDisks in the group if the group is in a consistent state.
If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary VDisks, which belong to the relationships in the
group. For a consistency group in the ConsistentSynchronized state, this command causes a
Consistency Freeze.
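A sketch of stopping a consistency group and enabling write access to its secondary VDisks (the group name is illustrative):

```
svctask stoprcconsistgrp -access CG_W2K3_GM
```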
svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two VDisks. It does
not affect the VDisks themselves.
If the relationship is disconnected at the time that the command is issued, the relationship is
only deleted on the cluster on which the command is being run. When the clusters reconnect,
the relationship is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the relationship on
both clusters, you can issue the rmrcrelationship command independently on both of the
clusters.
A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.
If you delete an inconsistent relationship, the secondary VDisk becomes accessible even
though it is still inconsistent. This situation is the one case in which Global Mirror does not
inhibit access to inconsistent data.
svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror consistency group. This
command deletes the specified consistency group. You can issue this command for any
existing consistency group.
If the consistency group is disconnected at the time that the command is issued, the
consistency group is only deleted on the cluster on which the command is being run. When
the clusters reconnect, the consistency group is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the primary VDisk
and the secondary VDisk when a stand-alone relationship is in a consistent state; when
issuing the command, the desired primary needs to be specified.
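For example, to make the auxiliary VDisk the primary of a consistent stand-alone relationship (the relationship name is illustrative):

```
svctask switchrcrelationship -primary aux GMREL1
```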
Chapter 7. SAN Volume Controller operations using the command-line interface
You can use either the CLI or GUI to manage IBM System Storage SAN Volume Controller
(SVC) operations. We prefer to use the CLI in this chapter. You might want to script these
operations, and we think it is easier to create the documentation for the scripts using the CLI.
When the command syntax is shown, you will see certain parameters in square brackets, for
example, [parameter], indicating that the parameter is optional in most, if not all, instances.
Any information that is not in square brackets is required information. You can view the syntax
of a command by entering one of the following commands:
svcinfo -?: Shows a complete list of information commands.
svctask -?: Shows a complete list of task commands.
svcinfo commandname -?: Shows the syntax of information commands.
svctask commandname -?: Shows the syntax of task commands.
svcinfo commandname -filtervalue?: Shows the filters that you can use to reduce the
output of the information commands.
Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.
If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
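For example, to list the filters that the lsvdisk command accepts and then apply one (the filter value is illustrative):

```
svcinfo lsvdisk -filtervalue?
svcinfo lsvdisk -filtervalue "name=GM*"
```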
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, backspace, and delete keys to
edit commands before you resubmit them.
To display more detailed information about a specific controller, run the command again and
append the controller name parameter, for example, controller id 0, as shown in Example 7-1
on page 341.
Example 7-1 svcinfo lscontroller command
IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0
id 0
controller_name ITSO_XIV_01
WWNN 50017380022C0000
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 2810XIV-
product_id_high LUN-0
product_revision 10.1
ctrl_s/n
allow_quorum yes
WWPN 50017380022C0170
path_count 2
max_path_count 4
WWPN 50017380022C0180
path_count 2
max_path_count 2
WWPN 50017380022C0190
path_count 4
max_path_count 6
WWPN 50017380022C0182
path_count 4
max_path_count 12
WWPN 50017380022C0192
path_count 4
max_path_count 6
WWPN 50017380022C0172
path_count 4
max_path_count 6
Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “controller” (because this prefix is reserved for SVC
assignment only).
Chapter 7. SAN Volume Controller operations using the command-line interface 341
7.2.3 Discovery status
Use the svcinfo lsdiscoverystatus command, as shown in Example 7-3, to determine
whether a discovery operation is in progress. The output of this command shows a status of
either active or inactive.
If new storage has been attached and the cluster has not detected it, it might be necessary to
run this command before the cluster can detect the new MDisks.
Use the svctask detectmdisk command to scan for newly added MDisks (Example 7-4).
To check whether any newly added MDisks were successfully detected, run the svcinfo
lsmdisk command and look for new unmanaged MDisks.
If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk
subsystem, and that the zones are set up properly.
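The rescan sequence described above can be sketched as follows (a sketch; the output of the second command depends on your configuration):

```
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mode=unmanaged
```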
Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the
discovery process can take time. Check several times with the svcinfo lsmdisk command
to confirm that all of the MDisks that you expect are present.
When all of the disks allocated to the SVC are seen from the SVC cluster, the following
procedure is a good way to verify which MDisks are unmanaged and ready to be added to the
Managed Disk Group (MDG).
Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo
lsmdisk command, as shown in Example 7-6.
From this output, you can see additional information about each MDisk (such as the
current status). For the purpose of our current task, we are only interested in the
unmanaged disks, because they are candidates for MDGs (all MDisks, in our case).
Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.
2. If not all of the MDisks that you expected are visible, rescan the available FC network by
entering the svctask detectmdisk command, as shown in Example 7-7.
3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are
still not visible, check that the LUNs from your subsystem have been properly assigned to
the SVC and that appropriate zoning is in place (for example, the SVC can see the disk
subsystem). See Chapter 3, “Planning and configuration” on page 65 for details about
setting up your storage area network (SAN) fabric.
To display an overview of all MDisks, use the svcinfo lsmdisk -delim command, as shown in
Example 7-8 on page 344. To display the details for an individual MDisk, use svcinfo lsmdisk
with the name or ID of the MDisk from which you want the information, as shown in
Example 7-9 on page 344.
Example 7-8 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS47,16.0GB,0000000000000003,controller0,600a0b80004858a000000e154a895db000000000000000000000000000000000
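Because the -delim output is plain CSV, you can post-process a captured listing with standard tools. A minimal sketch (this helper is our own, not an SVC command; the sample file reuses rows from Example 7-8 with abbreviated UIDs):

```shell
# Save a captured 'svcinfo lsmdisk -delim ,' listing to a file
# (sample rows based on Example 7-8; UIDs shortened for readability).
cat > /tmp/lsmdisk.csv <<'EOF'
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b81
2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b82
EOF

# Print the names of the unmanaged MDisks
# (field 4 is 'mode', field 2 is 'name'; skip the header row)
awk -F, 'NR > 1 && $4 == "unmanaged" { print $2 }' /tmp/lsmdisk.csv
```

With this sample data, only mdisk1 is printed, because it is the only row whose mode column is unmanaged.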
This command renamed the MDisk named mdisk6 to mdisk_6.
The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “mdisk” (because this prefix is reserved for SVC
assignment only).
By running the svcinfo lsmdisk command, you can see that mdisk9 is excluded in
Example 7-11.
After taking the necessary corrective action to repair the MDisk (for example, replace the
failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing
the svctask includemdisk command (Example 7-12), because the SVC cluster does not
include the MDisk automatically.
Running the svcinfo lsmdisk command again shows mdisk9 online again, as shown in
Example 7-13.
7.2.8 Adding MDisks to a managed disk group
If you have created an empty MDG, or if you simply want to assign additional MDisks to an
already configured MDG, you can use the svctask addmdisk command to populate the MDG
(Example 7-14).
You can only add unmanaged MDisks to an MDG. This command adds the MDisk named
mdisk6 to the MDG named MDG_DS45.
Important: Do not add this MDisk to an MDG if you want to create an image mode VDisk
from the MDisk that you are adding. As soon as you add an MDisk to an MDG, it becomes
managed, and extent mapping is not necessarily one-to-one anymore.
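The addmdisk invocation that this section describes can be sketched as follows (the names are from this section; syntax per the Command-Line Interface User's Guide):

```
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_DS45
```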
This section describes the operations using MDisks and MDGs. It explains the tasks that we
can perform at an MDG level.
Using the svctask mkmdiskgrp command, create an MDG, as shown in Example 7-17.
This command creates an MDG called MDG_DS47. The extent size that is used within this
group is 512 MB, which is the most commonly used extent size.
We have not added any MDisks to the MDG yet, so it is an empty MDG.
There is a way to add unmanaged MDisks and create the MDG in the same command. Using
the command svctask mkmdiskgrp with the -mdisk parameter and entering the IDs or names
of the MDisks adds the MDisks immediately after the MDG is created.
So, prior to the creation of the MDG, enter the svcinfo lsmdisk command, as shown in
Example 7-18, where we list all of the available MDisks that are seen by the SVC cluster.
Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk IDs that
we are using, we can add multiple MDisks to the MDG at the same time. We now add the
unmanaged MDisks, as shown in Example 7-18, to the MDG that we created, as shown in
Example 7-19.
This command creates an MDG called MDG_DS47. The extent size that is used within this
group is 512 MB, and two MDisks (0 and 1) are added to the group.
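The combined create-and-populate invocation can be sketched as follows (a sketch; the MDisk IDs 0 and 1 are those listed in Example 7-18):

```
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512 -mdisk 0:1
```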
MDG name: The -name and -mdisk parameters are optional. If you do not enter a -name,
the default is mdiskgrpx, where x is the ID sequence number that is assigned by the SVC
internally. If you do not enter the -mdisk parameter, an empty MDG is created.
If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the
underscore. The name can be between one and 15 characters in length, but it cannot start
with a number or the word “mdiskgrp” (because this prefix is reserved for SVC assignment
only).
By running the svcinfo lsmdisk command, you now see the MDisks as “managed” and as
part of the MDG_DS47, as shown in Example 7-20.
You have completed the tasks that are required to create an MDG.
2,MDG_DS81,online,0,0,0,512,0,0.00MB,0.00MB,0.00MB,0,85
Changing the MDG name: The chmdiskgrp command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “mdiskgrp” (because this prefix is reserved for SVC
assignment only).
Removing an MDG from the SVC cluster configuration: If there are MDisks within the
MDG, you must use the -force flag to remove the MDG from the SVC cluster configuration,
for example:
svctask rmmdiskgrp MDG_DS81 -force
Ensure that you definitely want to use this flag, because it destroys all mapping information
and data held on the VDisks, which cannot be recovered.
This command removes the MDisk called mdisk6 from the MDG named MDG_DS45. The
-force flag is set, because there are VDisks using this MDG.
Sufficient space: The removal only takes place if there is sufficient space to migrate the
VDisk data to other extents on other MDisks that remain in the MDG. After you remove the
MDisk from the group, it takes time for its mode to change from managed to unmanaged.
7.3 Working with hosts
This section explains the tasks that can be performed at a host level.
When we create a host in our SVC cluster, we need to define the connection method. Starting
with SVC 5.1, we can now define our host as iSCSI-attached or FC-attached, and we
describe these connection methods in detail in Chapter 2, “IBM System Storage SAN Volume
Controller” on page 7.
After you know that the WWPNs that are displayed match your host (use host or SAN switch
utilities to verify), use the svctask mkhost command to create your host.
Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).
You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the
underscore (_). The name can be between one and 15 characters in length. However, the
name cannot start with a number, dash, or the word “host” (because this prefix is reserved
for SVC assignment only).
This command creates a host called Palau using WWPN 21:00:00:E0:8B:89:C1:CD and
21:00:00:E0:8B:05:4C:AA.
Ports: You can define from one up to eight ports per host, or you can use the addport
command, which we show in 7.3.5, “Adding ports to a defined host” on page 354.
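The mkhost invocation described above can be sketched as follows (the WWPNs are those given in this section, written without colons inside each WWPN and separated from each other by a colon):

```
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD:210000E08B054CAA
```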
Host is not powered on or not connected to the SAN
If you want to create a host on the SVC without seeing your target WWPN by using the
svcinfo lshbaportcandidate command, add the -force flag to your mkhost command, as
shown in Example 7-27. This option is more prone to human error than choosing the
WWPN from a list, but it is typically used when many host definitions are created at the same
time, such as through a script.
In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 7-27.
This command forces the creation of a host called Guinea using WWPN
210000E08B89C1DC.
If you run the svcinfo lshost command again, you now see your host named Guinea under
host ID 4.
The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.
When we create a host that is going to use iSCSI as a communication method, iSCSI initiator
software must be installed on the host to initiate the communication between the SVC and the
host. This installation creates an iSCSI qualified name (IQN) identifier that is needed before
we create our host.
Before we start, we check our server’s IQN address. We are running Windows Server 2008.
We select Start → Programs → Administrative Tools, and then select iSCSI Initiator. In our
example, our IQN, as shown in Figure 7-1 on page 352, is:
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Figure 7-1 IQN from the iSCSI initiator tool
We create the host by issuing the mkhost command, as shown in Example 7-28. When the
command completes successfully, we display our newly created host.
It is important to know that when the host is initially configured, the default authentication
method is set to no authentication and no Challenge Handshake Authentication Protocol
(CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC
cluster, use the svctask chhost command with the -chapsecret parameter.
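These iSCSI steps can be sketched as follows (the IQN and the host name Baldur are from this section; the CHAP secret value is a hypothetical example):

```
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret ITSOsecret Baldur
```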
We have now created our host definition. We map a VDisk to our new iSCSI server, as shown
in Example 7-29. We have already created the VDisk, as shown in 7.4.1, “Creating a VDisk”
on page 356. In our scenario, our VDisk has ID 21 and the host name is Baldur. We map it to
our iSCSI host.
After the VDisk has been mapped to the host, we display the host information again, as
shown in Example 7-30.
Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have
been created.
If you need to display a CHAP secret for an already defined server, use the svcinfo
lsiscsiauth command.
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4
Note: The chhost command specifies the new name first. You can use letters A to Z and a
to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between
one and 15 characters in length. However, it cannot start with a number, dash, or the word
“host” (because this prefix is reserved for SVC assignment only).
Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See the IBM
System Storage Open Software Family SAN Volume Controller: Host Attachment Guide,
SC26-7563, for more information about the hosts that require the -type parameter.
7.3.4 Deleting a host
Use the svctask rmhost command to delete a host from the SVC configuration. If your host is
still mapped to VDisks and you use the -force flag, the host and all of the mappings with it are
deleted. The VDisks are not deleted, only the mappings to them.
The command that is shown in Example 7-32 deletes the host called Angola from the SVC
configuration.
Deleting a host: If there are any VDisks assigned to the host, you must use the -force flag,
for example: svctask rmhost -force Angola.
If your host is currently connected through SAN with FC and if the WWPN is already zoned to
the SVC cluster, issue the svcinfo lshbaportcandidate command, as shown in
Example 7-33, to compare with the information that you have from the server administrator.
If the WWPN matches your information (use host or SAN switch utilities to verify), use the
svctask addhostport command to add the port to the host.
Adding multiple ports: You can add multiple ports at one time by using the colon (:) as a
separator between WWPNs, for example:
svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau
If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does
not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs
and use the -force flag to add the port, as shown in Example 7-35.
This command forces the addition of the WWPN named 210000E08B054CAA to the host
called Palau.
If you run the svcinfo lshost command again, you see your host with an updated port count
of 2 in Example 7-36.
If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN
ID before you add the port. Unlike FC-attached hosts, you cannot check for available
candidates with iSCSI.
After you have acquired the additional iSCSI IQN, use the svctask addhostport command,
as shown in Example 7-37.
Before you remove the WWPN, be sure that it is the correct WWPN by issuing the svcinfo
lshost command, as shown in Example 7-38.
When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a
host port, as shown in Example 7-39.
This command removes the WWPN of 210000E08B89C1CD from the Palau host and the
iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.
Removing multiple ports: You can remove multiple ports at one time by using the colon (:)
as a separator between the port names, for example:
svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
When creating a VDisk, you must enter several parameters at the CLI. There are both
mandatory and optional parameters.
See the full command string and detailed information in the Command-Line Interface User’s
Guide, SC26-7903-05.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
When you are ready to create a VDisk, you must know the following information before you
start creating the VDisk:
- The MDG in which the VDisk is going to have its extents
- The I/O Group from which the VDisk will be accessed
- The size of the VDisk
- The name of the VDisk
When you are ready to create your striped VDisk, use the svctask mkvdisk command
(we discuss sequential and image mode VDisks later). In Example 7-40 on page 357, this
command creates a 10 GB striped VDisk with VDisk ID 0 within the MDG_DS47 MDG and
assigns it to the io_grp0 I/O Group.
Example 7-40 svctask mkvdisk command
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp io_grp0 -size 10 -unit gb -name Tiger
Virtual Disk, id [0], successfully created
To verify the results, you can use the svcinfo lsvdisk command, as shown in Example 7-41.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00MB
real_capacity 10.00MB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
7.4.2 VDisk information
Use the svcinfo lsvdisk command to display summary information about all VDisks defined
within the SVC environment. To display more detailed information about a specific VDisk, run
the command again and append the VDisk name parameter (for example, VDisk_D).
Example 7-42 shows both of these commands.
This command creates a space-efficient 10 GB VDisk. The VDisk belongs to the MDG named
MDG_DS45 and is owned by the io_grp1 I/O Group. The real_capacity automatically
expands until the VDisk size of 10 GB is reached. The grain size is set to 32 K,
which is the default.
Disk size: When using the -rsize parameter, you have the following options: disk_size,
disk_size_percentage, and auto.
- Specify the disk_size_percentage value using an integer, or an integer immediately
followed by the percent character (%).
- Specify the units for a disk_size integer using the -unit parameter; the default is MB.
- The -rsize value can be greater than, equal to, or less than the size of the VDisk.
- The auto option creates a VDisk copy that uses the entire size of the MDisk; if you
specify the -rsize auto option, you must also specify the -vtype image option.
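The space-efficient mkvdisk invocation described in this section can be sketched as follows (a sketch; the 2% real size and the VDisk name are hypothetical values):

```
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp io_grp1 -size 10 -unit gb -rsize 2% -autoexpand -grainsize 32 -name vdisk_B
```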
You can use this command to bring a non-virtualized disk under the control of the cluster.
After it is under the control of the cluster, you can migrate the VDisk from the single managed
disk. When it is migrated, the VDisk is no longer an image mode VDisk. You can add image
mode VDisks to an already populated MDG with other types of VDisks, such as a striped or
sequential VDisk.
Size: An image mode VDisk must be at least 512 bytes (the capacity cannot be 0). That is,
the minimum size that can be specified for an image mode VDisk must be the same as the
MDisk group extent size to which it is added, with a minimum of 16 MB.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode VDisk.
Capacity: If you create a mirrored VDisk from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting VDisk is the smaller of the two MDisks, and
the remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Use the svctask mkvdisk command to create an image mode VDisk, as shown in
Example 7-44.
This command creates an image mode VDisk called Image_Vdisk_A using the mdisk20
MDisk. The VDisk belongs to the MDG_Image MDG and is owned by the io_grp0 I/O Group.
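The image mode invocation that Example 7-44 refers to can be sketched as follows (the names are from this section):

```
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp io_grp0 -vtype image -mdisk mdisk20 -name Image_Vdisk_A
```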
If we run the svcinfo lsmdisk command again, notice that mdisk20 now has a status of
image, as shown in Example 7-45.
In addition, you can use VDisk Mirroring as an alternative method of migrating VDisks
between MDGs.
For example, if you have a non-mirrored VDisk in one MDG and want to migrate that VDisk to
another MDG, you can add a new copy of the VDisk and specify the second MDG. After the
copies are synchronized, you can delete the copy on the first MDG. The VDisk is migrated to
the second MDG while remaining online during the migration.
To create a mirrored copy of a VDisk, use the addvdiskcopy command. This command adds
a copy of the chosen VDisk to the selected MDG, which changes a non-mirrored VDisk into a
mirrored VDisk.
In the following scenario, we show how to create a VDisk copy mirror from one MDG to
another MDG.
As you can see in Example 7-46, the VDisk has a copy with copy_id 0.
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32
In Example 7-47, we add the VDisk copy mirror by using the svctask addvdiskcopy
command.
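The addvdiskcopy invocation that Example 7-47 refers to can be sketched as follows (the target MDG and the VDisk name are from this scenario):

```
IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 vdisk_B
```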
During the synchronization process, you can see the status by using the svcinfo
lsvdisksyncprogress command. As shown in Example 7-48, the first time that the status is
checked, the synchronization progress is at 86%, and the estimated completion time is
19:16:54. The second time that the command is run, the progress status is at 100%, and the
synchronization is complete.
As you can see in Example 7-49, the new VDisk copy mirror (copy_id 1) has been added and
can be seen by using the svcinfo lsvdisk command.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.44MB
real_capacity 20.02GB
free_capacity 20.02GB
overallocation 224
autoexpand on
warning 80
grainsize 64
Notice that the VDisk copy mirror (copy_id 1) does not have the same values as the original
VDisk copy. When you add a VDisk copy mirror, you can define it with different parameters
from the original copy. Therefore, you can define a Space-Efficient VDisk copy mirror for a
non-Space-Efficient VDisk copy and vice versa, which is one way to migrate a
non-Space-Efficient VDisk to a Space-Efficient VDisk.
Note: To change the parameters of a VDisk copy mirror, you must delete the VDisk copy
mirror and redefine it with the new values.
Example 7-50 shows the svctask splitvdiskcopy command, which is used to split a VDisk
copy. It creates a new vdisk_N from the copy of vdisk_B.
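The splitvdiskcopy invocation that Example 7-50 refers to can be sketched as follows (the copy ID of 1 is an assumption based on the mirror added earlier in this section):

```
IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -name vdisk_N vdisk_B
```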
As you can see in Example 7-51, the new VDisk, vdisk_N, has been created as an
independent VDisk.
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000002F
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 84.75MB
real_capacity 20.10GB
free_capacity 20.01GB
overallocation 497
autoexpand on
warning 80
grainsize 64
The vdisk_B VDisk has now lost its mirror, because the split copy has become a new,
independent VDisk.
You can specify a new name or label. The new name can be used subsequently to reference
the VDisk. The I/O Group with which this VDisk is associated can be changed. Note that this
requires a flush of the cache within the nodes in the current I/O Group to ensure that all data
is written to disk. I/O must be suspended at the host level before performing this operation.
Tips: If the VDisk has a mapping to any hosts, it is not possible to move the VDisk to an I/O
Group that does not include any of those hosts.
This operation will fail if there is not enough space to allocate bitmaps for a mirrored VDisk
in the target I/O Group.
If the -force parameter is used and the cluster is unable to destage all write data from the
cache, the contents of the VDisk are corrupted by the loss of the cached data.
If the -force parameter is used to move a VDisk that has out-of-sync copies, a full
resynchronization is required.
Base the choice between I/O and MB as the I/O governing throttle on the disk access profile
of the application. Database applications generally issue large amounts of I/O, but they
transfer only a relatively small amount of data. In this case, setting an I/O governing throttle
that is based on MBs per second does not achieve much. It is better to use an I/Os-per-second
throttle.
At the other extreme, a streaming video application generally issues a small amount of I/O,
but transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much, so it is better to use an
MB per second throttle.
I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the svcinfo lsvdisk command) does not mean that zero I/Os per second (or MBs per
second) can be achieved. It means that no throttle is set.
New name: The chvdisk command specifies the new name first. The name can consist of
letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be
between one and 15 characters in length. However, it cannot start with a number, the dash,
or the word “vdisk” (because this prefix is reserved for SVC assignment only).
The first command changes the VDisk throttling of vdisk7 to 20 MBps, while the second
command changes the Space-Efficient VDisk warning to 85%.
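The two chvdisk invocations described above can be sketched as follows (a sketch; the target of the second command is not named in this section, so vdisk_B is a hypothetical name):

```
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -rate 20 -unitmb vdisk7
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -warning 85% vdisk_B
```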
If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in
Example 7-53.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 5.02GB
free_capacity 5.02GB
overallocation 199
autoexpand on
warning 85
grainsize 32
7.4.9 Deleting a VDisk
When executing this command on an existing managed mode VDisk, any data that remained
on it will be lost. The extents that made up this VDisk will be returned to the pool of free
extents available in the MDG.
If any Remote Copy, FlashCopy, or host mappings still exist for this VDisk, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the VDisk and any VDisk
to host mappings and copy mappings.
If the VDisk is currently the subject of a migrate to image mode, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the VDisk.
If the command succeeds (without the -force flag) for an image mode disk, the underlying
back-end controller logical unit will be consistent with the data that a host might previously
have read from the image mode VDisk. That is, all fast write data has been flushed to the
underlying LUN. If the -force flag is used, there is no guarantee.
If there is any nondestaged data in the fast write cache for this VDisk, the deletion of the
VDisk fails unless the -force flag is specified, in which case any nondestaged data in the
fast write cache is discarded.
Use the svctask rmvdisk command to delete a VDisk from your SVC configuration, as shown
in Example 7-54.
This command deletes the vdisk_A VDisk from the SVC configuration. If the VDisk is
assigned to a host, you need to use the -force flag to delete the VDisk (Example 7-55).
Assuming that your operating system supports it, you can use the svctask expandvdisksize
command to increase the capacity of a given VDisk.
This command expands the vdisk_C VDisk, which was 35 GB before, by another 5 GB to give
it a total size of 40 GB.
To expand a Space-Efficient VDisk, you can use the -rsize option, as shown in Example 7-57
on page 368. This command changes the real size of the vdisk_B VDisk to a real capacity of
55 GB. The capacity of the VDisk remains unchanged.
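The two expansion invocations described above can be sketched as follows (the names and sizes are from this section):

```
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb vdisk_C
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb vdisk_B
```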
Example 7-57 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 50.00GB
free_capacity 50.00GB
overallocation 200
autoexpand off
warning 40
grainsize 32
Important: If a VDisk is expanded, its type will become striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your VDisk to the
specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
When the HBA on the host scans for devices that are attached to it, it discovers all of the
VDisks that are mapped to its FC ports. When the devices are found, each one is allocated an
identifier (SCSI LUN ID).
For example, the first disk found is generally SCSI LUN 1, and so on. You can control the
order in which the HBA discovers VDisks by assigning the SCSI LUN ID as required. If you do
not specify a SCSI LUN ID, the cluster automatically assigns the next available SCSI LUN ID,
given any mappings that already exist with that host.
Using the VDisk and host definition that we created in the previous sections, we assign
VDisks to hosts that are ready for their use. We use the svctask mkvdiskhostmap command
(see Example 7-58).
This command assigns vdisk_B and vdisk_C to host Tiger as shown in Example 7-59.
Assigning a specific LUN ID to a VDisk: The optional -scsi scsi_num parameter can help
assign a specific LUN ID to a VDisk that is to be associated with a given host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.
Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs. For
example:
VDisk 1 is mapped to Host 1 with SCSI LUN ID 1.
VDisk 2 is mapped to Host 1 with SCSI LUN ID 2.
VDisk 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering VDisks 1 and 2,
because there is no SCSI LUN mapped with ID 3. Be careful to ensure that the SCSI LUN ID
allocation is contiguous.
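You can check a captured mapping list for such gaps with standard tools. A minimal sketch (this helper is our own, not an SVC command; the sample rows are hypothetical and reproduce the gap described above):

```shell
# Captured 'svcinfo lshostvdiskmap -delim ,' output (hypothetical rows:
# SCSI LUN IDs 1, 2, 4, with ID 3 missing).
cat > /tmp/map.csv <<'EOF'
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
1,Host1,1,1,VDisk1,210000E08B18FF8A,600507680183
1,Host1,2,2,VDisk2,210000E08B18FF8A,600507680184
1,Host1,4,3,VDisk3,210000E08B18FF8A,600507680185
EOF

# Report any break in the SCSI LUN ID sequence (field 3 is SCSI_id)
awk -F, 'NR > 1 { if (seen && $3 != prev + 1) print "gap after SCSI LUN ID " prev; prev = $3; seen = 1 }' /tmp/map.csv
```

With this sample data, the script prints a single line reporting the gap after SCSI LUN ID 2.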
It is not possible to map a VDisk to a host more than once on separate LUNs
(Example 7-60).
This command maps the VDisk called vdisk_A to the host called Siam.
You have completed all of the tasks that are required to assign a VDisk to an attached host.
Example 7-61 svcinfo lshostvdiskmap
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C
From this command, you can see that the host Siam has only one assigned VDisk called
vdisk_A. The SCSI LUN ID is also shown, which is the ID by which the VDisk is presented to
the host. If no host is specified, all defined host-to-VDisk mappings are returned.
Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.
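The comma-delimited output shown in Example 7-61 is easy to consume from a script. A minimal sketch (the parser function is illustrative, not part of the SVC CLI; it assumes the first line of the output is the header row, as in the example):

```python
def parse_delim_output(text, delim=","):
    """Parse svcinfo ... -delim output: the first line is the header row."""
    lines = [line for line in text.strip().splitlines() if line]
    header = lines[0].split(delim)
    return [dict(zip(header, row.split(delim))) for row in lines[1:]]

# Sample output, taken from Example 7-61
sample = (
    "id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID\n"
    "3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C"
)
mappings = parse_delim_output(sample)
```

Each row becomes a dictionary keyed by the column names, so a script can check, for example, `mappings[0]["SCSI_id"]` without counting fields.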
This command unmaps the VDisk called vdisk_D from the host called Tiger.
You can obtain further information about migration in Chapter 9, “Data migration” on
page 675.
As you can see from the parameters in Example 7-63, before you can migrate your VDisk,
you must know the name of the VDisk that you want to migrate and the name of the MDG to
which you want to migrate it. To discover these names, run the svcinfo lsvdisk and svcinfo
lsmdiskgrp commands.
When you know these details, you can issue the migratevdisk command, as shown in
Example 7-63.
This command moves vdisk_C to MDG_DS47.
Tips: If insufficient extents are available within your target MDG, you receive an error
message. Make sure that the source and target MDisk group have the same extent size.
The optional -threads parameter allows you to assign a priority to the migration process.
The default is 4, which is the highest priority setting. If you want the process to take a
lower priority than other types of I/O, you can specify 3, 2, or 1.
You can run the svcinfo lsmigrate command at any time to see the status of the migration
process (Example 7-64).
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
Progress: The progress is given as percent complete. When the command no longer returns
any output, the migration process has finished.
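The key/value lines that svcinfo lsmigrate prints can be turned into a dictionary for scripted progress reporting. A sketch (the function name is illustrative; the sample text is taken from Example 7-64):

```python
def parse_lsmigrate(text):
    """Turn the key/value lines from svcinfo lsmigrate into a dictionary.
    An empty reply means no migration is running, that is, the process finished."""
    result = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        if key:
            result[key] = value
    return result

sample = (
    "migrate_type MDisk_Group_Migration\n"
    "progress 16\n"
    "migrate_source_vdisk_index 2\n"
    "migrate_target_mdisk_grp 1\n"
    "max_thread_count 4\n"
    "migrate_source_vdisk_copy_id 0"
)
status = parse_lsmigrate(sample)
```

A polling script can simply loop until the parsed result is empty, which corresponds to the command returning no more replies.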
Example 7-65 shows an example of the command.
In this example, you migrate the data from vdisk_A onto mdisk8, and the MDisk must be put
into the MDG_Image MDG.
The shrinkvdisksize command can be used to shrink the physical capacity that is allocated
to a particular VDisk by the specified amount. It can also be used to shrink the virtual
capacity of a Space-Efficient VDisk without altering the physical capacity assigned to the
VDisk:
For a non-Space-Efficient VDisk, use the -size parameter.
For a Space-Efficient VDisk real capacity, use the -rsize parameter.
For the Space-Efficient VDisk virtual capacity, use the -size parameter.
When the virtual capacity of a Space-Efficient VDisk is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
The cluster arbitrarily reduces the capacity of the VDisk by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the VDisk. You cannot
control which extents are removed, and so, you cannot assume that it is unused space that is
removed.
Reducing disk size: Image mode VDisks cannot be reduced in size. They must first be
migrated to Managed Mode. To run the shrinkvdisksize command on a mirrored VDisk,
all of the copies of the VDisk must be synchronized.
Important:
If the VDisk contains data, do not shrink the disk.
Certain operating systems or file systems use what they consider to be the outer edge
of the disk for performance reasons. This command can shrink FlashCopy target
VDisks to the same capacity as the source.
Before you shrink a VDisk, validate that the VDisk is not mapped to any host objects. If
the VDisk is mapped, data is displayed. You can determine the exact capacity of the
source or master VDisk by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the VDisk by the required amount by issuing the svctask shrinkvdisksize
-size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id
command.
Assuming your operating system supports it, you can use the svctask shrinkvdisksize
command to decrease the capacity of a given VDisk.
Example 7-66 svctask shrinkvdisksize
IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 44 -unit gb vdisk_A
This command shrinks the VDisk called vdisk_A from a total size of 80 GB, by 44 GB, to a
new total size of 36 GB.
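The arithmetic behind Example 7-66, together with the extent-removal behavior described earlier, can be sketched as follows. The 256 MiB extent size is an assumption for illustration only, and the cluster may also remove a partial extent, so the extent count is an upper bound, not a guarantee:

```python
GIB = 1024 ** 3
MIB = 1024 ** 2

def shrink_result(current_bytes, shrink_bytes, extent_bytes):
    """Resulting capacity and an upper bound on the whole extents given back.
    The cluster may remove a partial extent, so this is only a sketch."""
    new_bytes = current_bytes - shrink_bytes
    extents_removed = -(-shrink_bytes // extent_bytes)  # ceiling division
    return new_bytes, extents_removed

# The 80 GB VDisk from Example 7-66, shrunk by 44 GB, with an assumed
# 256 MiB extent size (the extent size is not stated in the example)
new_size, extents = shrink_result(80 * GIB, 44 * GIB, 256 * MIB)
```

This reproduces the 36 GB result from the example; remember that you cannot control which extents are removed.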
This command displays a list of all of the VDisk IDs that correspond to the VDisk copies that
use mdisk1.
To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk
command, which we discuss in more detail in 7.4, “Working with VDisks” on page 356.
20,mdisk20,online,managed,1,MDG_DS47,36.0GB,0000000000000007,DS4700,600a0b80002904
de00004282485158aa00000000000000000000000000000000
If you want to know more about these MDisks, you can run the svcinfo lsmdisk command,
as explained in 7.2, “Working with managed disks and disk controller systems” on page 340
(using the ID displayed in Example 7-69 rather than the name).
7.4.20 Showing from which Managed Disk Group a VDisk has its extents
Use the svcinfo lsvdisk command, as shown in Example 7-70, to show to which MDG a
specific VDisk belongs.
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command,
as explained in 7.2.11, “Working with Managed Disk Groups” on page 346.
This command shows the host or hosts to which the vdisk_B VDisk is mapped. It is normal
to see duplicate entries, because there are multiple paths between the cluster and the
host. To ensure that the operating system on the host sees the disk only one time, you must
install and configure multipathing software, such as the IBM Subsystem Device Driver
(SDD).
Specifying the -delim flag: Although the optional -delim flag normally comes at the end of
the command string, in this case, you must specify this flag before the VDisk name.
Otherwise, the command does not return any data.
7.4.22 Showing the VDisk to which the host is mapped
To show the VDisk to which a specific host has been assigned, run the svcinfo
lshostvdiskmap command, as shown in Example 7-72.
This command shows which VDisks are mapped to the host called Siam.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end of
the command string, in this case, you must specify this flag before the host name.
Otherwise, the command does not return any data.
State: In Example 7-73, the state of each path is OPEN. Sometimes, you will see the state
CLOSED, which does not necessarily indicate a problem, because it might be a result of the
path’s processing stage.
2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks
(Example 7-74).
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as Siam.
3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks
that make up the specified VDisk (Example 7-75).
4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and
LUN number information, as shown in Example 7-76. The output displays the controller
name and the controller LUN ID to help you (provided that you gave your controller a
unique name, such as a serial number) track back to a LUN within the disk subsystem.
active_WWPN 200400A0B8174433
Scripting enhances the productivity of SVC administrators and the integration of their storage
virtualization environment.
You can create your own customized scripts to automate a large number of tasks for
completion at a variety of times and run them through the CLI.
In large SAN environments where scripting with svctask commands is used, we recommend
that you keep the scripts as simple as possible, because fallback, documentation, and
verification of a script prior to execution are all harder to manage in a large SAN
environment.
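One way to keep scripting simple and verifiable is to generate the svctask commands first and review the batch before running it. A minimal sketch, using the host and VDisk names from the earlier examples (how you then submit the commands, for example through an SSH session to the cluster, depends on your environment):

```python
def mkvdiskhostmap_commands(host, vdisk_names, first_scsi_id=0):
    """Generate one svctask mkvdiskhostmap command per VDisk with explicit,
    contiguous SCSI IDs, so that the batch can be reviewed and documented
    before it is run against the cluster."""
    return [
        f"svctask mkvdiskhostmap -host {host} -scsi {first_scsi_id + offset} {name}"
        for offset, name in enumerate(vdisk_names)
    ]

commands = mkvdiskhostmap_commands("Tiger", ["vdisk_B", "vdisk_C"])
```

Printing and inspecting the generated commands before execution gives you the documentation and verification step that is otherwise hard to manage in a large environment, and the explicit -scsi values keep the LUN ID allocation contiguous.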
When the command syntax is shown, you see several parameters in square brackets, for
example, [parameter], which indicates that the parameter is optional in most if not all
instances. Any parameter that is not in square brackets is required information. You can view
the syntax of a command by entering one of the following commands:
svcinfo -?: Shows a complete list of information commands.
svctask -?: Shows a complete list of task commands.
svcinfo commandname -?: Shows the syntax of information commands.
svctask commandname -?: Shows the syntax of task commands.
svcinfo commandname -filtervalue?: Shows which filters you can use to reduce the
output of the information commands.
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname
-h.
If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, backspace, and delete keys to
edit commands before you resubmit them.
Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of
filters, depending on which svcinfo command you are running. To see which filters are
available, type the command followed by the -filtervalue? flag, as shown in Example 7-77.
When you know the filters, you can be more selective in generating output:
Multiple filters can be combined to create specific searches.
You can use an asterisk (*) as a wildcard when using names.
When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.
For example, if we issue the svcinfo lsvdisk command with no filters, we see the output that
is shown in Example 7-78 on page 380.
Example 7-78 svcinfo lsvdisk command: No filters
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,typ
e,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000
000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF280000000
0000001,0,1
2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000
000002,0,1
3,vdisk3,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000
000003,0,1
Tip: The -delim parameter truncates the content in the window and separates data fields
with the specified delimiter character as opposed to wrapping text over multiple lines. This
parameter is normally used in cases where you need to generate reports during script
execution.
If we now add a filter to our svcinfo command (such as FC_name), we can reduce the
output, as shown in Example 7-79.
The first command shows all VDisks with the IO_group_id=0. The second command shows
us all VDisks where the mdisk_grp_name ends with a 7. You can use the wildcard asterisk
character (*) when names are used.
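The wildcard behavior of name filters can be illustrated with standard shell-style pattern matching. A sketch using the MDG names from the earlier examples (the filtering here is local Python, standing in for what `-filtervalue mdisk_grp_name=*7` does on the cluster):

```python
import fnmatch

# MDG names from the earlier examples; the pattern "*7" keeps only
# the names that end with a 7, as in the second filtered command
mdisk_groups = ["MDG_DS45", "MDG_DS47", "MDG_Image"]
matches = [name for name in mdisk_groups if fnmatch.fnmatch(name, "*7")]
```

Only MDG_DS47 survives the filter, which matches the behavior described for the second command in Example 7-79.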
Example 7-80 svcinfo lscluster command
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
000002006AE04FC4 ITSO-CLS1 local
000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 20
0000020063E03A38
0000020061006FCA ITSO-CLS2 remote fully_configured 50
0000020061006FCA
If the cluster IP address is changed, the open command-line shell closes during the
processing of the command. You must reconnect to the new IP address. The service IP
address is not used until a node is removed from the cluster. If this node cannot rejoin the
cluster, you can bring the node up in service mode. In this mode, the node can be accessed
as a stand-alone node using the service IP address.
All command parameters are optional; however, you must specify at least one parameter.
Note: Only a user with administrator authority can change the password.
After the cluster IP address is changed, you lose the open shell connection to the cluster.
You must reconnect with the newly specified IP address.
Important: Changing the speed on a running cluster breaks I/O service to the attached
hosts. Before changing the fabric speed, stop I/O from the active hosts and force these
hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). Specific hosts might need to be rebooted
to detect the new fabric speed. The fabric speed setting applies only to the 4F2 and 8F2
model nodes in a cluster. The 8F4 nodes automatically negotiate the fabric speed on a
per-port basis.
We describe the authentication method in detail in Chapter 2, “IBM System Storage SAN
Volume Controller” on page 7.
Tip: If you do not want the password to display as you enter it on the command line, omit
the new password. The command line then prompts you to enter and confirm the password
without the password being displayed.
The only authentication that can be changed from the chcluster command is the Service
account user password, and to be able to change that, you need to have administrative rights.
The Service account user password is changed in Example 7-81.
See 7.10.1, “Managing users using the CLI” on page 394 for more information about
managing users.
In Chapter 2, “IBM System Storage SAN Volume Controller” on page 7, we described in detail
how iSCSI works. In this section, we show how to configure our cluster for use with iSCSI.
We configure our nodes to use the primary and secondary Ethernet ports for iSCSI; these
ports also carry the cluster IP. Configuring the nodes for use with iSCSI does not affect
the cluster IP, which is changed as shown in 7.7.2, “Changing cluster settings”
on page 381.
It is important to know that we can have more than a one-to-one relationship between IP
addresses and physical connections. Each physical port on each node supports up to four
addresses (a 4:1 relationship): two IPv4 addresses plus two IPv6 addresses.
We describe this function in Chapter 2, “IBM System Storage SAN Volume Controller” on
page 7.
Tip: When reconfiguring IP ports, be aware that already configured iSCSI connections will
need to reconnect if changes are made to the IP addresses of the nodes.
There are two ways to configure iSCSI authentication (CHAP): for the whole cluster or per
host connection. Example 7-82 shows configuring CHAP for the whole cluster.
Example 7-82 Setting a CHAP secret for the entire cluster to “passw0rd”
IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret
passw0rd
IBM_2145:ITSO-CLS1:admin>
In our scenario, we have our cluster IP of 9.64.210.64, which is not affected during our
configuration of the node’s IP addresses.
We start by listing our ports using the svcinfo lsportip command. We can see that we have
two ports per node with which to work. Both ports can have two IP addresses that can be
used for iSCSI.
In our example, we configure the secondary port in both nodes in our I/O Group, which is
shown in Example 7-83.
While both nodes are online, each node will be available to iSCSI hosts on the IP address that
we have configured. Because iSCSI failover between nodes is enabled automatically, if a
node goes offline for any reason, its partner node in the I/O group will become available on
the failed node’s port IP address, ensuring that hosts will continue to be able to perform I/O.
The svcinfo lsportip command will display which port IP addresses are currently active on
each node.
There are now two active cluster ports on the configuration node. If the cluster IP address is
changed, the open command-line shell closes during the processing of the command. You
must reconnect to the new IP address if connected through that port.
List the IP address of the cluster by issuing the svcinfo lsclusterip command. Modify the IP
address by issuing the svctask chclusterip command. You can either specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 7-84.
Important: If you specify a new cluster IP address, the existing communication with the
cluster through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
will still work.
Table 7-1 ip_address_list formats
IP type ip_address_list format
We have completed the tasks that are required to change the IP addresses (cluster and
service) of the SVC environment.
Tip: If you have changed the time zone, you must clear the error log dump directory before
you can view the error log through the Web application.
2. To find the time zone code that is associated with your time zone, enter the svcinfo
lstimezones command, as shown in Example 7-86. A truncated list is provided for this
example. If this setting is correct (for example, 522 UTC), you can go to Step 4. If not,
continue with Step 3.
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.
3. Now that you know which time zone code is correct for you, set the time zone by issuing
the svctask settimezone command (Example 7-87).
4. Set the cluster time by issuing the svctask setclustertime command (Example 7-88).
You have completed the necessary tasks to set the cluster time zone and time.
Use the svctask startstats command to start the collection of statistics, as shown in
Example 7-89.
The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts
statistics collection and gathers data at 15-minute intervals.
Statistics collection: To verify that statistics collection is set, display the cluster properties
again, as shown in Example 7-90.
We have completed the required tasks to start statistics collection on the cluster.
This command stops the statistics collection. The command does not return any
confirmation message.
To verify that the statistics collection is stopped, display the cluster properties again, as
shown in Example 7-92.
statistics_status off
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --
Notice that the interval parameter is not changed, but the status is off. We have completed the
required tasks to stop statistics collection on our cluster.
When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SVC does not permit any I/O activity to be performed to the VDisks until the
uninterruptible power supply units are charged enough to enable all of the data on the SVC
nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the
uninterruptible power supply can take as long as two hours.
Shutting down the cluster prior to removing input power to the uninterruptible power supply
units prevents the battery power from being drained. It also makes it possible for I/O activity to
be resumed as soon as input power is restored.
You can use the following procedure to shut down the cluster:
1. Use the svctask stopcluster command to shut down your SVC cluster (Example 7-94).
This command shuts down the SVC cluster. All data is flushed to disk before the power is
removed. At this point, you lose administrative contact with your cluster, and the PuTTY
application automatically closes.
2. You will be presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy)
relationships, data migration operations, and forced deletions before continuing. Entering
y to this message executes the command; entering anything other than y(es) or Y(ES)
cancels it. In either case, no feedback is displayed.
Important: Before shutting down a cluster, ensure that all I/O operations are stopped
that are destined for this cluster, because you will lose all access to all VDisks being
provided by this cluster. Failure to do so can result in failed I/O operations being
reported to the host operating systems.
Begin the process of quiescing all I/O to the cluster by stopping the applications on the
hosts that are using the VDisks that are provided by the cluster.
3. We have completed the tasks that are required to shut down the cluster. To shut down the
uninterruptible power supply units, press the power button on the front panel of each
uninterruptible power supply unit.
Restarting the cluster: To restart the cluster, you must first restart the uninterruptible
power supply units by pressing the power button on their front panels. Then, press the
power on button on the service panel of one of the nodes within the cluster. After the
node is fully booted up (for example, displaying Cluster: on line 1 and the cluster name
on line 2 of the panel), you can start the other nodes in the same way.
As soon as all of the nodes are fully booted, you can reestablish administrative contact
using PuTTY, and your cluster is fully operational again.
7.8 Nodes
This section details the tasks that can be performed at an individual node level.
7.8.1 Viewing node details
Use the svcinfo lsnode command to view the summary information about the nodes that are
defined within the SVC environment. To view more details about a specific node, append the
node name (for example, SVCNode_1) to the command.
Tip: The -delim parameter truncates the content in the window and separates data fields
with colons (:) as opposed to wrapping text over multiple lines.
To have a fully functional SVC cluster, you must add a second node to the configuration.
To add a node to a cluster, gather the necessary information, as explained in these steps:
Before you can add a node, you must know which unconfigured nodes you have as
“candidates”. Issue the svcinfo lsnodecandidate command (Example 7-96).
You must specify to which I/O Group you are adding the node. If you enter the svcinfo
lsnode command, you can easily identify the I/O Group ID of the group to which you are
adding your node, as shown in Example 7-97.
Tip: The node that you want to add must have a separate uninterruptible power supply unit
serial number from the uninterruptible power supply unit on the first node.
Now that we know the available nodes, we can use the svctask addnode command to add the
node to the SVC cluster configuration.
Example 7-98 shows the command to add a node to the SVC cluster.
This command adds the candidate node with the wwnodename of 50050768010027E2 to the
I/O Group called io_grp0.
We used the -wwnodename parameter (50050768010027E2). However, we can also use the
-panelname parameter (108283) instead (Example 7-99). If you are standing in front of the
node, it is easier to read the panel name than it is to get the WWNN.
We also used the optional -name parameter (Node2). If you do not provide the -name
parameter, the SVC automatically generates the name nodex (where x is the ID sequence
number that is assigned internally by the SVC).
Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to
9, the dash (-), and the underscore (_). The name can be between one and 15 characters
in length. However, the name cannot start with a number, dash, or the word “node”
(because this prefix is reserved for SVC assignment only).
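The naming rules in the note above can be sketched as a small validation helper. The function name is illustrative, and one detail is an assumption: the rules do not say whether a name may start with an underscore, so this sketch allows it:

```python
import re

RESERVED_PREFIX = "node"  # reserved for SVC-assigned names

def valid_node_name(name):
    """Check the naming rules: 1 to 15 characters from A-Z, a-z, 0-9,
    dash, and underscore; the name may not start with a digit or a dash,
    and may not start with the reserved word 'node'."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_-]{0,14}", name):
        return False
    return not name.startswith(RESERVED_PREFIX)
```

Under these rules, Node2 (used in our example) is valid, while node2 is rejected because of the reserved prefix.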
If the svctask addnode command returns no information, verify that your second node is
powered on and that the zones are correctly defined. Another possibility is that preexisting
cluster configuration data is stored in the node. If you are sure that this node is not part of
another active SVC cluster, you can use the service panel to delete the existing cluster
information. After this action is complete, reissue the svcinfo lsnodecandidate command
and you will see the node listed.
Name: The chnode command specifies the new name first. You can use letters A to Z and
a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one
and 15 characters in length. However, the name cannot start with a number, dash, or the
word “node” (because this prefix is reserved for SVC assignment only).
Because node4 was also the configuration node, the SVC transfers the configuration node
responsibilities to a surviving node, within the I/O Group. Unfortunately, the PuTTY session
cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses
communication and closes automatically.
We must restart the PuTTY application to establish a secure session with the new
configuration node.
Important: If this node is the last node in an I/O Group, and there are VDisks still assigned
to the I/O Group, the node is not deleted from the cluster.
If this node is the last node in the cluster, and the I/O Group has no VDisks remaining, the
cluster is destroyed and all virtualization information is lost. Any data that is still required
must be backed up or migrated prior to destroying the cluster.
Use the svctask stopcluster -node command, as shown in Example 7-102 on page 391, to
shut down a single node.
Example 7-102 svctask stopcluster -node command
IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?
This command shuts down node n4 in a graceful manner. When this node has been shut
down, the other node in the I/O Group will destage the contents of its cache and will go into
write-through mode until the node is powered up and rejoins the cluster.
If this is the last node in an I/O Group, all access to the VDisks in the I/O Group will be lost.
Verify that you want to shut down this node before executing this command. You must specify
the -force flag.
By reissuing the svcinfo lsnode command (Example 7-103), we can see that the node is
now offline.
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.
Restart: To restart the node manually, press the power on button from the service panel of
the node.
We have completed the tasks that are required to view, add, delete, rename, and shut down a
node within an SVC environment.
Example 7-104 I/O Group details
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 3 3
1 io_grp1 2 4 3
2 io_grp2 0 0 2
3 io_grp3 0 0 2
4 recovery_io_grp 0 0 0
As we can see, the SVC predefines five I/O Groups. In a four-node cluster (such as our
example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and
io_grp3) are for a six-node or eight-node cluster.
The recovery I/O Group is a temporary home for VDisks when all nodes in the I/O Group that
normally owns them have suffered multiple failures. This design allows us to move the VDisks
to the recovery I/O Group and, then, into a working I/O Group. Of course, while temporarily
assigned to the recovery I/O Group, I/O access is not possible.
To see whether the renaming was successful, issue the svcinfo lsiogrp command again to
see the change.
We have completed the tasks that are required to rename an I/O Group.
Example 7-106 svctask addhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga
Parameters:
-iogrp iogrp_list -iogrpall
Specifies a list of one or more I/O Groups that must be mapped to the host. This
parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all the I/O
Groups must be mapped to the specified host. This parameter is mutually exclusive with
-iogrp.
-host host_id_or_name
Identifies the host, either by ID or name, to which the I/O Groups must be mapped.
Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O Group,
as shown in Example 7-107.
Parameters:
-iogrp iogrp_list -iogrpall
Specifies a list of one or more I/O Groups that must be unmapped from the host. This
parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all of the
I/O Groups must be unmapped from the specified host. This parameter is mutually exclusive
with -iogrp.
-force
If the removal of a host to I/O Group mapping will result in the loss of VDisk to host
mappings, the command fails if the -force flag is not used. The -force flag, however,
overrides this behavior and forces the deletion of the host to I/O Group mapping.
host_id_or_name
Identifies the host, either by ID or name, from which the I/O Groups must be unmapped.
To list all of the host objects that are mapped to the specified I/O Group, use the svcinfo
lsiogrphost command, as shown in Example 7-109 on page 394.
Example 7-109 svcinfo lsiogrphost command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam
All users must now be a member of a predefined user group. You can list those groups by
using the svcinfo lsusergrp command, as shown in Example 7-110.
Example 7-111 is a simple example of creating a user. User John is added to the user group
Monitor with the password m0nitor.
Local users are those users that are not authenticated by a remote authentication server.
Remote users are those users that are authenticated by a remote central registry server.
The user groups already have a defined authority role, as shown in Table 7-2 on page 395.
Table 7-2 Authority roles

User group: Copy operator
Role: All svcinfo commands, and the following svctask commands: prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap,
stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp,
chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship,
chrcrelationship, and chpartnership
User: For those users that control all of the copy functionality of the cluster

User group: Monitor
Role: All svcinfo commands; the following svctask commands: finderr, dumperrlog,
dumpinternallog, and chcurrentuser; and the svcconfig backup command
User: For those users only needing view access
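The role model in Table 7-2 can be sketched as a simple authorization check. This is a hypothetical illustration, not product code: the role names and svctask command lists follow the table, and the command lists for the Copy operator role are reproduced in full, but the lookup logic is our own.

```python
# Hypothetical sketch of the Table 7-2 role model: every role may run svcinfo
# commands, and each role additionally grants a specific set of svctask
# commands. The dictionary keys follow the user group names in the table.
ROLE_SVCTASK_COMMANDS = {
    "CopyOperator": {
        "prestartfcconsistgrp", "startfcconsistgrp", "stopfcconsistgrp",
        "chfcconsistgrp", "prestartfcmap", "startfcmap", "stopfcmap",
        "chfcmap", "startrcconsistgrp", "stoprcconsistgrp",
        "switchrcconsistgrp", "chrcconsistgrp", "startrcrelationship",
        "stoprcrelationship", "switchrcrelationship", "chrcrelationship",
        "chpartnership",
    },
    "Monitor": {"finderr", "dumperrlog", "dumpinternallog", "chcurrentuser"},
}

def is_authorized(role: str, command: str) -> bool:
    """Allow all svcinfo commands; gate svctask commands by role."""
    if command.startswith("svcinfo "):
        return True
    if command.startswith("svctask "):
        return command.split()[1] in ROLE_SVCTASK_COMMANDS.get(role, set())
    return False
```

For example, `is_authorized("Monitor", "svctask startfcmap")` is false, while any svcinfo query is allowed for every role.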
To view the user roles on your cluster, use the svcinfo lsusergrp command to list all of the
user groups, as shown in Example 7-112 on page 396.
Example 7-112 svcinfo lsusergrp command
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
To view our currently defined users and the user groups to which they belong, we use the
svcinfo lsuser command, as shown in Example 7-113.
The chuser command modifies an existing user. You can rename a user, assign a new
password (if you are logged on with administrative privileges), or move a user from one user
group to another. Be aware that a user can only be a member of one group at a time.
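The single-group rule above can be sketched in a few lines. This is an illustrative model only; the user and group names are examples, and the functions merely mimic the effect of mkuser and chuser on group membership.

```python
# Minimal sketch of the one-group-at-a-time rule: moving a user to a new
# user group replaces the old membership instead of adding a second one.
memberships = {}  # user name -> user group name

def mkuser(name: str, usergrp: str) -> None:
    memberships[name] = usergrp

def chuser_usergrp(name: str, usergrp: str) -> None:
    # chuser moves the user; it never creates a second membership
    memberships[name] = usergrp
```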
Most action commands that are issued by the old or new CLI are recorded in the audit log:
The native GUI performs actions by using the CLI programs.
The SVC Console performs actions by issuing Common Information Model (CIM)
commands to the CIM object manager (CIMOM), which then runs the CLI programs.
Actions performed by using both the native GUI and the SVC Console are recorded in the
audit log.
svctask dumperrlog
svctask dumpinternallog
The audit log contains approximately 1 MB of data, which can hold about 6,000
average-length commands. When this log is full, the cluster copies it to a new file in the
/dumps/audit directory on the config node and resets the in-memory audit log.
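The sizing and rotation behavior just described can be modeled as follows. The 1 MB capacity and the /dumps/audit directory come from the text; the 155-byte entry size (roughly 1 MB divided by 6,000 entries) and the dump file-name pattern are purely illustrative assumptions.

```python
# Rough model of the in-memory audit log: record entries until the ~1 MB
# capacity would be exceeded, then dump to a file under /dumps/audit and
# reset. Entry sizes and the file name are illustrative, not product values.
LOG_CAPACITY_BYTES = 1_000_000

class AuditLog:
    def __init__(self):
        self.entries = []
        self.size = 0
        self.dumped_files = []

    def record(self, command: str) -> None:
        if self.size + len(command) > LOG_CAPACITY_BYTES:
            # copy the full log to a new file on the config node, then reset
            self.dumped_files.append(
                f"/dumps/audit/auditlog_{len(self.dumped_files)}")
            self.entries, self.size = [], 0
        self.entries.append(command)
        self.size += len(command)
```

With 155-byte entries, the log holds about 6,450 commands before it rotates, consistent with the "about 6,000 average-length commands" figure.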
To display entries from the audit log, use the svcinfo catauditlog -first 5 command to
return a list of five in-memory audit log entries, as shown in Example 7-114.
If you need to dump the contents of the in-memory audit log to a file on the current
configuration node, use the svctask dumpauditlog command. This command does not
provide any feedback, only the prompt. To obtain a list of the audit log dumps, use the svcinfo
lsauditlogdumps command, as described in Example 7-115.
Scenario description
We use the following scenario in both the command-line section and the GUI section. In the
following scenario, we want to FlashCopy the following VDisks:
DB_Source Database files
Log_Source Database log files
App_Source Application files
In our scenario, the application files are independent of the database, so we create a single
FlashCopy mapping for App_Source. We will make two FlashCopy targets for DB_Source and
Log_Source and, therefore, two consistency groups. Example 7-123 on page 403 shows the
scenario.
group, so that, for example, all of the files for a particular database are copied at the same
time.
In Example 7-116, the FCCG1 and FCCG2 consistency groups are created to hold the
FlashCopy maps of DB and Log. This step is extremely important for FlashCopy with
database applications, because it helps to maintain data integrity during the FlashCopy.
In Example 7-117, we checked the status of consistency groups. Each consistency group has
a status of empty.
If you want to change the name of a consistency group, you can use the svctask
chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.
When executed, this command creates a new FlashCopy mapping logical object. This
mapping persists until it is deleted. The mapping specifies the source and destination VDisks.
The destination must be identical in size to the source, or the mapping will fail. Issue the
svcinfo lsvdisk -bytes command to find the exact size of the source VDisk for which you
want to create a target disk of the same size.
In a single mapping, the source and destination cannot be the same VDisk. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a consistency group. These groups of mappings can be triggered at
the same time, enabling multiple VDisks to be copied at the same time, which creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files reside on separate disks.
If no consistency group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can only be started
on an individual basis.
The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy will not proceed in the background. The default is 50.
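As an illustrative aside (not taken from this book), the background copy rate value is commonly documented to map to a stepped background copy bandwidth, roughly doubling every 10 points; verify the exact figures against the documentation for your release before relying on them.

```python
# Illustrative sketch of how the background copy rate (0-100) is commonly
# documented to map to a bandwidth on the SVC. Treat the exact figures as an
# assumption. Rate 0 means no background copy at all; the default rate is 50.
RATE_TO_KBPS = [
    (10, 128), (20, 256), (30, 512), (40, 1024), (50, 2048),
    (60, 4096), (70, 8192), (80, 16384), (90, 32768), (100, 65536),
]

def background_copy_kbps(rate: int) -> int:
    if rate == 0:
        return 0  # NOCOPY: only subsequently updated data is copied
    for upper, kbps in RATE_TO_KBPS:
        if rate <= upper:
            return kbps
    raise ValueError("rate must be between 0 and 100")
```

Under these assumptions, the default rate of 50 corresponds to about 2 MB/s of background copy bandwidth per mapping.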
Tip: A parameter is available to delete FlashCopy mappings automatically after the
background copy completes (when the mapping reaches the idle_or_copied state). Use the
command:
svctask mkfcmap -autodelete
This command does not delete mappings in a cascade with dependent mappings, because
such a mapping cannot reach the idle_or_copied state in this situation.
In Example 7-118, the first FlashCopy mapping for DB_Source and Log_Source is created.
Example 7-118 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1
-name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
Example 7-119 shows the command to create a second FlashCopy mapping for VDisk
DB_Source and Log_Source.
Example 7-120 shows the result of these FlashCopy mappings. The status of the mapping is
idle_or_copied.
3 DB_Map2 0 DB_Source 7
DB_Target_2 2 FCCG2 idle_or_copied 0
50 100 off no
4 Log_Map2 1 Log_Source 5
Log_Target_2 2 FCCG2 idle_or_copied 0
50 100 off no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
If you want to change the FlashCopy mapping, you can use the svctask chfcmap command.
Type svctask chfcmap -h to get help with this command.
When the svctask prestartfcmap command is executed, the mapping enters the Preparing
state. After the preparation is complete, it changes to the Prepared state. At this point, the
mapping is ready for triggering. Preparing and the subsequent triggering are usually
performed on a consistency group basis. Only mappings belonging to consistency group 0
can be prepared on their own, because consistency group 0 is a special group, which
contains the FlashCopy mappings that do not belong to any consistency group. A FlashCopy
must be prepared before it can be triggered.
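The mapping life cycle described above can be sketched as a small state machine. This is a minimal model reconstructed from the surrounding text, not product code; the "preparing" intermediate state is collapsed into the prestartfcmap transition for brevity.

```python
# Minimal sketch of the FlashCopy mapping life cycle this section describes:
# a mapping must be prepared before it can be triggered, and a triggered
# mapping is copying until the background copy completes.
TRANSITIONS = {
    ("idle_or_copied", "prestartfcmap"): "prepared",  # passes through preparing
    ("prepared", "startfcmap"): "copying",
    ("copying", "stopfcmap"): "stopped",
    ("copying", "background_copy_complete"): "idle_or_copied",
}

def next_state(state: str, action: str) -> str:
    """Return the new mapping state, or raise if the action is not valid."""
    if (state, action) not in TRANSITIONS:
        raise ValueError(f"{action} is not valid in state {state}")
    return TRANSITIONS[(state, action)]
```

In particular, triggering a mapping that has not been prepared is rejected, which mirrors the rule that a FlashCopy must be prepared before it can be triggered.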
In our scenario, App_Map1 is not in a consistency group. In Example 7-121, we show how we
initialize the preparation for App_Map1.
Another option is that you add the -prep parameter to the svctask startfcmap command,
which first prepares the mapping and then starts the FlashCopy.
In the example, we also show how to check the status of the current FlashCopy mapping.
App_Map1’s status is prepared.
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
When you have assigned several mappings to a FlashCopy consistency group, you only have
to issue a single prepare command for the whole group to prepare all of the mappings at one
time.
Example 7-122 shows how we prepare the consistency groups for DB and Log and check the
result. After the command has executed, all of our FlashCopy maps are in the prepared
status, and all of the consistency groups are in the prepared status, too. Now, we are ready to
start the FlashCopy.
When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the
destination. We suggest that you use this scenario as a backup copy while the mapping exists
in the Copying state. If the copy is stopped, the destination is unusable. If you want to end up
with a duplicate copy of the source at the destination, set the background copy rate greater
than 0. This way, the system copies all of the data (even unchanged data) to the destination
and eventually reaches the idle_or_copied state. After this data is copied, you can delete the
mapping and have a usable point-in-time copy of the source at the destination.
In Example 7-123, after the FlashCopy is started, App_Map1 changes to copying status.
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
Alternatively, you can also query the copy progress by using the svcinfo lsfcmap command.
As shown in Example 7-125, both DB_Map1 and Log_Map1 return information that the
background copy is 21% completed, and both DB_Map2 and Log_Map2 return information
that the background copy is 18% completed.
id progress
2 53
When the background copy has completed, the FlashCopy mapping enters the
idle_or_copied state, and when all FlashCopy mappings in a consistency group enter this
status, the consistency group will be at idle_or_copied status.
When in this state, the FlashCopy mapping can be deleted, and the target disk can be used
independently, if, for example, another target disk is to be used for the next FlashCopy of the
particular source VDisk.
When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by
the SVC. The FlashCopy mapping needs to be prepared again or retriggered to bring the
target VDisk online again.
Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group,
consider whether you want to keep any of the dependent mappings. If not, issue the stop
command with the force parameter, which will stop all of the dependent maps and negate
the need for the stopping copy process to run.
Important: Only stop a FlashCopy mapping when the data on the target VDisk is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped while it is in the Copying state and its progress is less than 100, the target VDisk
becomes invalid and is set offline by the SVC.
Example 7-126 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has
changed to idle_or_copied.
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
Important: Only stop a FlashCopy consistency group when the data on the target VDisks is
not in use, or when you want to modify the FlashCopy consistency group. When a
consistency group is stopped, the target VDisks might become invalid and be set offline by
the SVC, depending on the state of each mapping.
As shown in Example 7-127, we stop the FCCG1 and FCCG2 consistency groups. The status
of the two consistency groups has changed to stopped. Most of the FlashCopy mapping
relations now have the status stopped. As you can see, several of them have already
completed the copy operation and are now in a status of idle_or_copied.
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 stopped
2 FCCG2 stopped
Deleting a mapping only deletes the logical relationship between the two VDisks. However,
when the delete is issued on an active FlashCopy mapping using the -force flag, it renders
the data on the FlashCopy mapping target VDisk inconsistent.
Tip: If you want to use the target VDisk as a normal VDisk, monitor the background copy
progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.
Another option is to set the -autodelete option when creating the FlashCopy mapping.
If you want to delete the consistency group as well, you must first delete all of its mappings
and then delete the consistency group.
As shown in Example 7-129, we delete all of the maps and consistency groups, and then, we
check the result.
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>
Example 7-130 svcinfo lsvdisk 8 command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32
2. Define a FlashCopy mapping in which the non-Space-Efficient VDisk is the source and the
Space-Efficient VDisk is the target. Specify a copy rate as high as possible, and activate
the -autodelete option for the mapping. See Example 7-131.
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoSEV
command, as shown in Example 7-132.
partner_FC_name
restoring no
5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in
Example 7-134.
6. The FlashCopy mapping has been deleted automatically, as shown in Example 7-135.
An independent copy of the source VDisk (App_Source) has been created. The migration
has completed, as shown in Example 7-136 on page 411.
Example 7-136 svcinfo lsvdisk
Real size: Independent of the real size that you defined for the target Space-Efficient VDisk,
the real size will be at least the capacity of the source VDisk.
To migrate a Space-Efficient VDisk to a fully allocated VDisk, you can follow the same
scenario.
7.11.15 Reverse FlashCopy
Starting with SVC 5.1, you can have a reverse FlashCopy mapping without having to remove
the original FlashCopy mapping, and without restarting a FlashCopy mapping from the
beginning.
FCMAP0_rev will show a restoring value of yes while the FlashCopy mapping is copying.
After it has finished copying, the restoring value field will change to no.
For example, if we have four VDisks in a cascade (A → B → C → D), and the map A → B is
100% complete, using the stopfcmap -split mapAB command results in mapAB becoming
idle_or_copied, and the remaining cascade becomes B → C → D.
Without the -split option, VDisk A remains at the head of the cascade (A → C → D). Consider
this sequence of steps:
1. The user takes a backup using the mapping A → B. A is the production VDisk; B is a
backup.
2. At a later point, the user experiences corruption on A and, so, reverses the mapping
(B → A).
3. The user then takes another backup from the production disk A, resulting in the cascade
B → A → C.
Stopping A → B without the -split option results in the cascade B → C. Note that the backup
disk B is now at the head of this cascade.
When the user next wants to take a backup to B, the user can still start mapping A → B (using
the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).
Stopping A → B with the -split option results in the cascade A → C. This action does not result
in the same problem, because production disk A is at the head of the cascade instead of the
backup disk B.
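The -split behavior described above can be modeled by treating a cascade as an ordered list of VDisks with a map between each adjacent pair. This is a sketch of the rule as stated in the text, not the product algorithm: stopping the completed head map with -split drops the source from the cascade, while stopping it without -split keeps the source at the head and drops the target.

```python
# Sketch of stopfcmap with and without -split on the head map of a cascade,
# following the rule described in the text above.
def stop_head_map(cascade: list[str], split: bool) -> list[str]:
    source = cascade[0]
    if split:
        return cascade[1:]         # A > B > C > D becomes B > C > D
    return [source] + cascade[2:]  # A > B > C > D becomes A > C > D
```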
Note: This example is for intercluster operations only. If you want to set up intracluster
operations, we highlight those parts of the following procedure that you do not need to
perform.
In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC
cluster ITSO-CLS1 primary site and the SVC cluster ITSO-CLS4 at the secondary site.
Table 7-3 shows the details of the VDisks.
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri VDisks, a
CG_WIN2K3_MM consistency group is created to handle Metro Mirror relationships for them.
Because, in this scenario, application files are independent of the database, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri VDisk. Figure 7-3 on page 414
illustrates the Metro Mirror setup.
Figure 7-3 Metro Mirror scenario
5. Create the Metro Mirror relationship for MM_App_Pri:
– Master MM_App_Pri
– Auxiliary MM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL3
Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 7.12.3, “Creating a Metro Mirror consistency group” on
page 416.
Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo
lsclustercandidate command.
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
000002006AE04FC4 no ITSO-CLS1
0000020061006FCA no ITSO-CLS2
Example 7-139 shows the output of the svcinfo lscluster command before setting up the
Metro Mirror partnership, so that you can compare it with the output after the partnership
has been set up.
Example 7-139 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
000002006AE04FC4 ITSO-CLS1 local
000002006AE04FC4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
0000020063E03A38 ITSO-CLS4 local
0000020063E03A38
To check the status of the newly created partnership, issue the svcinfo lscluster
command. Notice that the new partnership is only partially configured. It remains
partially configured until the mkpartnership command is also run on the other cluster.
Example 7-140 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
000002006AE04FC4 ITSO-CLS1 local
000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 50
0000020063E03A38
After creating the partnership, verify that the partnership is fully configured on both clusters
by reissuing the svcinfo lscluster command.
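The two-sided partnership creation just described can be sketched as follows. The cluster names come from the scenario and the state strings mirror the lscluster output; the bookkeeping itself is an illustrative model, not product code.

```python
# Sketch of partnership states: mkpartnership must be run on both clusters;
# until the second side runs it, the partnership is only partially configured.
partnerships = set()  # (local cluster, remote cluster) pairs

def mkpartnership(local: str, remote: str) -> None:
    partnerships.add((local, remote))

def partnership_state(local: str, remote: str) -> str:
    if (local, remote) not in partnerships:
        return "not_configured"
    if (remote, local) in partnerships:
        return "fully_configured"
    return "partially_configured"
```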
Example 7-141 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
0000020063E03A38 ITSO-CLS4 local
0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 50
000002006AE04FC4
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name
aux_cluster_id aux_cluster_name primary state
relationship_count copy_type
0 CG_W2K3_MM 000002006AE04FC4 ITSO-CLS1
0000020063E03A38 ITSO-CLS4 empty 0
empty_group
By using this command, we check the possible candidates for MM_DB_Pri. After checking all
of these conditions, use the svctask mkrcrelationship command to create the Metro Mirror
relationship.
To verify the newly created Metro Mirror relationships, list them with the svcinfo
lsrcrelationship command.
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster
ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster
ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id
master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary
consistency_group_id consistency_group_name state bg_copy_priority progress
copy_type
13 MMREL1 000002006AE04FC4 ITSO-CLS1 13
MM_DB_Pri 0000020063E03A38 ITSO-CLS4 0 MM_DB_Sec master
0 CG_W2K3_MM inconsistent_stopped 50 0
metro
14 MMREL2 000002006AE04FC4 ITSO-CLS1 14
MM_Log_Pri 0000020063E03A38 ITSO-CLS4 1 MM_Log_Sec master
0 CG_W2K3_MM inconsistent_stopped 50 0
metro
Notice that the state of MMREL3 is consistent_stopped. MMREL3 is in this state, because it
was created with the -sync option. The -sync option indicates that the secondary (auxiliary)
VDisk is already synchronized with the primary (master) VDisk. Initial background
synchronization is skipped when this option is used, even though the VDisks are not actually
synchronized in this scenario. We want to illustrate the option of pre-synchronized master and
auxiliary VDisks, before setting up the relationship. We have created the new relationship for
MM_App_Sec using the -sync option.
Tip: The -sync option is only used when the target VDisk has already mirrored all of the
data from the source VDisk. By using this option, there is no initial background copy
between the primary VDisk and the secondary VDisk.
MMREL2 and MMREL1 are in the inconsistent_stopped state, because they were not created
with the -sync option, so their auxiliary VDisks need to be synchronized with their primary
VDisks.
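The initial-state rules described here can be condensed into a small sketch. This is a model of the behavior stated in the text, under the assumption that a started relationship eventually completes its background copy; the state strings mirror the lsrcrelationship output.

```python
# Sketch of Metro Mirror initial states: -sync skips the initial background
# synchronization, so the relationship starts out consistent_stopped rather
# than inconsistent_stopped.
def initial_state(created_with_sync: bool) -> str:
    return "consistent_stopped" if created_with_sync else "inconsistent_stopped"

def state_after_start_and_copy(state: str) -> str:
    # Starting either initial state and letting the background copy finish
    # (instantaneous for -sync relationships) reaches consistent_synchronized.
    if state in ("inconsistent_stopped", "consistent_stopped"):
        return "consistent_synchronized"
    return state
```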
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
When implementing Metro Mirror, the goal is to reach a consistent and synchronized state
that can provide redundancy for a dataset if a failure occurs that affects the production site.
In the following section, we show how to stop and start stand-alone Metro Mirror relationships
and consistency groups.
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>
Upon completion of the background copy, the relationship enters the Consistent
synchronized state.
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Metro Mirror consistency groups or relationships change state.
Example 7-147 Monitoring background copy progress example
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1
id 13
name MMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 13
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 35
freeze_time
status online
sync
copy_type metro
When all Metro Mirror relationships have completed the background copy, the consistency
group enters the Consistent synchronized state, as shown in Example 7-148.
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
In this section, we show how to stop and restart the stand-alone Metro Mirror relationships
and the consistency group.
Example 7-149 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
7.12.11 Stopping a Metro Mirror consistency group
Example 7-150 shows how to stop the Metro Mirror consistency group without specifying the
-access flag. The consistency group enters the Consistent stopped state.
If, afterwards, we want to enable access (write I/O) to the secondary VDisk, we reissue the
svctask stoprcconsistgrp command, specifying the -access flag; the consistency group
then transitions to the Idling state, as shown in Example 7-151.
Example 7-151 Stopping a Metro Mirror consistency group and enabling access to the secondary
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
7.12.12 Restarting a Metro Mirror relationship in the Idling state
When restarting a Metro Mirror relationship in the Idling state, we must specify the copy
direction.
If any updates have been performed on either the master or the auxiliary VDisk, consistency
will be compromised. Therefore, we must issue the command with the -force flag to restart a
relationship, as shown in Example 7-152.
Example 7-152 Restarting a Metro Mirror relationship after updates in the Idling state
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
If any updates have been performed on either the master or the auxiliary VDisk in any of the
Metro Mirror relationships in the consistency group, the consistency is compromised.
Therefore, we must use the -force flag to start the consistency group. If the -force flag is not
used, the command fails.
In Example 7-153, we change the copy direction by specifying the auxiliary VDisks to become
the primaries.
Example 7-153 Restarting a Metro Mirror relationship while changing the copy direction
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
If the specified VDisk is already a primary when you issue this command, the command has
no effect.
In Example 7-154, we change the copy direction for the stand-alone Metro Mirror relationship
by specifying the auxiliary VDisk to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from the primary to the secondary, because all of the I/O will
be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is
required prior to using the svctask switchrcrelationship command.
Example 7-154 Switching the copy direction for a Metro Mirror relationship
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
If the specified VDisk is already a primary when you issue this command, the command has
no effect.
In Example 7-155, we change the copy direction for the Metro Mirror consistency group by
specifying the auxiliary VDisk to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from primary to secondary, because all of the I/O will be
inhibited when that VDisk becomes the secondary. Therefore, careful planning is required
prior to using the svctask switchrcconsistgrp command.
Example 7-155 Switching the copy direction for a Metro Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
In this section, we describe how to configure the SVC cluster partnership for each
configuration.
Important: In order to have a supported and working configuration, all of the SVC clusters
must be at level 5.1 or higher.
In our scenarios, we configure the SVC partnership by referring to the clusters as A, B, C, and
D:
ITSO-CLS1 = A
ITSO-CLS2 = B
ITSO-CLS3 = C
ITSO-CLS4 = D
Example 7-156 shows the available clusters for a partnership using the lsclustercandidate
command on each cluster.
IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate
id configured cluster_name
000002006AE04FC4 no ITSO-CLS1
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate
id configured name
000002006AE04FC4 no ITSO-CLS1
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
000002006AE04FC4 no ITSO-CLS1
0000020061006FCA no ITSO-CLS2
Example 7-157 shows the sequence of mkpartnership commands to execute to create a star
configuration.
From ITSO-CLS1
From ITSO-CLS3
From ITSO-CLS4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
Triangle configuration
Figure 7-5 shows the triangle configuration.
From ITSO-CLS1
From ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
From ITSO-CLS3
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
Example 7-159 shows the sequence of mkpartnership commands to execute to create a fully
connected configuration.
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS1
From ITSO-CLS2
From ITSO-CLS3
From ITSO-CLS4
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
Daisy-chain configuration
Figure 7-7 shows the daisy-chain configuration.
From ITSO-CLS1
From ITSO-CLS2
From ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
From ITSO-CLS4
After the SVC partnership has been configured, you can configure any rcrelationship or
rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
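The colon-delimited svcinfo lscluster listings shown for these configurations are easy to consume from a script. The following sketch (the sample lines are taken from the output above; the helper name is ours, not an SVC interface) parses that output into dictionaries keyed by the header fields:

```python
def parse_lscluster(output_lines):
    """Parse 'svcinfo lscluster -delim :' output: the first line is the
    header, and every following line is one cluster record."""
    header = output_lines[0].split(":")
    return [dict(zip(header, line.split(":"))) for line in output_lines[1:]]
```

With records in this form, a script can check that every remote partnership reports fully_configured before creating relationships or consistency groups.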
Note: This example is for an intercluster Global Mirror operation only. If you want to set up an intracluster operation, we highlight the parts of the following procedure that you do not need to perform.
Figure: Global Mirror scenario with SVC cluster ITSO-CLS1 at the primary site and SVC cluster ITSO-CLS4 at the secondary site. Consistency group CG_W2K3_GM contains GM Relationship 1 (GM_DB_Pri to GM_DB_Sec) and GM Relationship 2 (GM_DBlog_Pri to GM_DBlog_Sec); GM Relationship 3 (GM_App_Pri to GM_App_Sec) is stand-alone.
5. Create the Global Mirror relationship for GM_App_Pri:
– Master GM_App_Pri
– Auxiliary GM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL3
Note: If you are creating an intracluster Global Mirror, do not perform the next step;
instead, go to 7.13.3, “Changing link tolerance and cluster delay simulation” on page 437.
Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo
lsclustercandidate command. Example 7-161 confirms that our clusters are
communicating: ITSO-CLS4 is an eligible SVC cluster partnership candidate at
ITSO-CLS1, and vice versa.
id configured cluster_name
0000020068603A42 no ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020060C06FCA no ITSO-CLS1
In Example 7-162, we show the output of the svcinfo lscluster command, before setting up
the SVC clusters’ partnership for Global Mirror. We show this output for comparison after we
have set up the SVC partnership.
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_addre
ss:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_addre
ss:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020063E03A38:ITSO-CLS4:local:::10.64.210.246:10.64.210.247:::0000020063E03A38
Partnership between clusters
In Example 7-163, we create the partnership from ITSO-CLS1 to ITSO-CLS4, specifying a
10 MBps bandwidth to use for the background copy.
To verify the status of the newly created partnership, we issue the svcinfo lscluster
command. Notice that the new partnership is only partially configured. It will remain partially
configured until we run the mkpartnership command on the other cluster.
Example 7-163 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
000002006AE04FC4 ITSO-CLS1 local
000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10
0000020063E03A38
In Example 7-164, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying
a 10 MBps bandwidth to be used for the background copy.
After creating the partnership, verify that the partnership is fully configured by reissuing the
svcinfo lscluster command.
Example 7-164 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
0000020063E03A38 ITSO-CLS4 local
0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 10
000002006AE04FC4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth
id_alias
000002006AE04FC4 ITSO-CLS1 local
000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 10
0000020063E03A38
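The -bandwidth parameter caps the background copy rate in MBps, which also gives a quick lower bound on initial synchronization time. A rough planning sketch (the helper is hypothetical; real elapsed time also depends on link latency, host write activity, and the relationship bandwidth limit):

```python
def min_sync_seconds(capacity_gib, bandwidth_mbps):
    """Best-case initial background copy time: capacity (GiB)
    divided by the partnership background copy cap (MBps)."""
    if bandwidth_mbps <= 0:
        raise ValueError("bandwidth must be positive")
    return capacity_gib * 1024 / bandwidth_mbps

# 19 GiB at the 10 MBps configured above: about 1,946 seconds
# (roughly 32 minutes) at best
```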
Recommendation: We strongly recommend that you use the default value. If the link is
overloaded for a period, which affects host I/O at the primary site, the relationships will be
stopped to protect those hosts.
To check the current settings for the delay simulation, use the following command:
svcinfo lscluster <clustername>
In Example 7-165, we modify the Global Mirror link tolerance and delay simulation
parameters and then display their changed values.
id_alias 000002006AE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_state invalid
inventory_mail_interval 0
total_vdiskcopy_capacity 19.00GB
total_used_capacity 19.00GB
total_overallocation 11
total_vdisk_capacity 19.00GB
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
relationship_bandwidth_limit 25
We use the svcinfo lsvdisk command to list all of the VDisks in the ITSO-CLS1 cluster and,
then, use the svcinfo lsrcrelationshipcandidate command to show the possible VDisk
candidates for GM_DB_Pri in ITSO-CLS4.
After checking all of these conditions, use the svctask mkrcrelationship command to create
the Global Mirror relationship.
To verify the newly created Global Mirror relationships, list them with the svcinfo
lsrcrelationship command.
7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
In Example 7-168, we create the stand-alone Global Mirror relationship GMREL3 for
GM_App_Pri. After it is created, we will check the status of each of our Global Mirror
relationships.
Notice that the status of GMREL3 is consistent_stopped, because it was created with the
-sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already
synchronized with the primary (master) VDisk. The initial background synchronization is
skipped when this option is used.
GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created
with the -sync option, so their auxiliary VDisks need to be synchronized with their primary
VDisks.
When implementing Global Mirror, the goal is to reach a consistent and synchronized state
that can provide redundancy in case a hardware failure occurs that affects the SAN at the
production site.
In this section, we show how to start the stand-alone Global Mirror relationships and the
consistency group.
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
Upon completion of the background copy, the CG_W2K3_GM Global Mirror consistency
group enters the Consistent synchronized state (see Example 7-170).
7.13.10 Monitoring background copy progress
To monitor the background copy progress, use the svcinfo lsrcrelationship command.
This command shows us all of the defined Global Mirror relationships if it is used without any
parameters. In the command output, progress indicates the current background copy
progress. Example 7-147 on page 421 shows our Global Mirror relationships.
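The monitoring described above can also be scripted. A minimal polling sketch, assuming the caller supplies a get_state function (for example, one that runs svcinfo lsrcrelationship over SSH and extracts the state field; the helper names are ours):

```python
import time

def wait_until_synchronized(get_state, poll_seconds=60, timeout_seconds=7200):
    """Poll a remote copy relationship until it reports
    consistent_synchronized; return False if the timeout expires first."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if get_state() == "consistent_synchronized":
            return True
        time.sleep(poll_seconds)
    return False
```

SNMP traps (see the note below) remain the better mechanism for event-driven notification; polling like this is useful mainly inside one-off migration scripts.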
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror consistency groups or relationships change state.
When all of the Global Mirror relationships complete the background copy, the consistency
group enters the Consistent synchronized state, as shown in Example 7-148 on page 421.
First, we show how to stop and restart the stand-alone Global Mirror relationships and the
consistency group.
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
Example 7-174 Stopping a Global Mirror consistency group without specifying -access
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
If, afterwards, we want to enable access (write I/O) for the secondary VDisk, we can reissue
the svctask stoprcconsistgrp command, specifying the -access parameter, and the
consistency group transitions to the Idling state, as shown in Example 7-151 on page 423.
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
If any updates have been performed on either the master or the auxiliary VDisk, consistency
will be compromised. Therefore, we must issue the -force parameter to restart the
relationship. If the -force parameter is not used, the command will fail, which is shown in
Example 7-152 on page 424.
Example 7-176 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
If any updates have been performed on either the master or the auxiliary VDisk in any of the
Global Mirror relationships in the consistency group, consistency will be compromised.
Therefore, we must issue the -force parameter to start the relationship. If the -force parameter
is not used, the command will fail.
In Example 7-153 on page 424, we restart the consistency group and change the copy
direction by specifying the auxiliary VDisks to become the primaries.
Example 7-177 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
If the VDisk that is specified as the primary when issuing this command is already a primary,
the command has no effect.
In Example 7-154 on page 425, we change the copy direction for the stand-alone Global
Mirror relationship, specifying the auxiliary VDisk to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from primary to secondary, because all I/O to that VDisk will be
inhibited when it becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcrelationship command.
Example 7-178 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
If the VDisk that is specified as the primary when issuing this command is already a primary,
the command has no effect.
In Example 7-155 on page 426, we change the copy direction for the Global Mirror
consistency group, specifying the auxiliary to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisk that transitions from primary to secondary, because all I/O will be inhibited
when that VDisk becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcconsistgrp command.
Example 7-179 Switching the copy direction for a Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
7.14.1 Upgrading software
This section explains how to upgrade the SVC software.
Requirement: The cluster must be running SVC 4.3.1.7 code before you upgrade to
SVC 5.1.0.0 code.
Check the recommended software levels at this Web site:
http://www.ibm.com/storage/support/2145
After the software file has been uploaded to the cluster (to the /home/admin/upgrade
directory), you can select the software and apply it to the cluster. Use the Web script and the
svctask applysoftware command. When a new code level is applied, it is automatically
installed on all of the nodes within the cluster.
The underlying command-line tool runs the sw_preinstall script, which checks the validity of
the upgrade file, and whether it can be applied over the current level. If the upgrade file is
unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid files on
the cluster.
Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working; otherwise, applications might experience I/O failures during the software
upgrade. You can verify the paths by using the Subsystem Device Driver (SDD) datapath
query device command. Example 7-180 shows the output.
#datapath query device
Total Devices : 2
Write-through mode: During a software upgrade, there are periods when not all of the
nodes in the cluster are operational; as a result, the cache operates in write-through
mode. Write-through mode affects the throughput, latency, and bandwidth aspects
of performance.
Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your cluster is running without problems). Specifically, make sure that the following conditions
are true:
Your uninterruptible power supply units are all getting their power from an external source,
and they are not daisy chained. Make sure that each uninterruptible power supply unit is
not supplying power to another node’s uninterruptible power supply unit.
The power cable and the serial cable, which comes from each node, go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, during the upgrade, while one node is shut down,
another node might also be mistakenly shut down.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.
Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Before starting the upgrade, you must back up the configuration (see 7.14.9, “Backing up
the SVC cluster configuration” on page 466) and save the backup config file in a safe
place.
2. Also, save the data collection for support diagnosis in case of problems, as shown in
Example 7-181 on page 452.
Example 7-181 svc_snap command
IBM_2145:ITSO-CLS1:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.104643.080617.002427.tgz
3. List the dump that was generated by the previous command, as shown in Example 7-182.
4. Save the generated dump in a safe place using the pscp command, as shown in
Example 7-183.
5. Upload the new software package using PuTTY Secure Copy. Enter the command, as
shown in Example 7-184 on page 453.
Example 7-184 pscp -load command
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_4.3.0.0
admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_4.3.0.0-0 | 103079 kB | 9370.8 kB/s | ETA: 00:00:00 | 100%
6. Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure
Copy. Enter the command, as shown in Example 7-185.
7. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the svcinfo lssoftwaredumps command, as shown in
Example 7-186.
8. Now that the packages are uploaded, first install the SAN Volume Controller Software
Upgrade Test Utility, as shown in Example 7-187.
9. Using the following command, test the upgrade for known issues that might prevent a
software upgrade from completing successfully, as shown in Example 7-188.
Important: If the svcupgradetest command produces any errors, troubleshoot the errors
using the maintenance procedures before continuing further.
10. Now, use the svctask command set to apply the software upgrade, as shown in
Example 7-189.
While the upgrade runs, you can check the status, as shown in Example 7-190.
11. The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted one at a time. If a node does not restart automatically
during the upgrade, you must repair it manually.
Solid-state drives: If you use solid-state drives, the data on the solid-state drive within
the restarted node will not be available while that node reboots.
12. Eventually, both nodes display Cluster: on line one of the SVC front panel and the name
of your cluster on line two. Be prepared for a wait (in our case, we waited approximately
40 minutes).
Performance: During this process, both the CLI and the GUI can vary from sluggish to
unresponsive. The important point is that I/O to the hosts continues throughout this
process.
13. To verify that the upgrade was successful, you can perform either of the following options:
– Run the svcinfo lscluster and svcinfo lsnodevpd commands, as shown in
Example 7-191. We have truncated the lscluster and lsnodevpd information for this
example.
code_level 4.3.0.0 (build 8.15.0806110000)
FC_port_speed 2Gb
console_IP 9.43.86.115:9080
id_alias 0000020060806FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
email_server 127.0.0.1
email_server_port 25
email_reply itsotest@ibm.com
email_contact ITSO User
email_contact_primary 555-1234
email_contact_alternate
email_contact_location ITSO
email_state running
email_user_count 1
inventory_mail_interval 0
cluster_IP_address_6
cluster_service_IP_address_6
prefix_6
default_gateway_6
total_vdiskcopy_capacity 156.00GB
total_used_capacity 156.00GB
total_overallocation 20
total_vdisk_capacity 156.00GB
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsnodevpd 1
id 1
– Copy the error log to your management workstation, as explained in 7.14.2, “Running
maintenance procedures” on page 456. Open the error log in WordPad and search for
Software Install completed.
You have now completed the required tasks to upgrade the SVC software.
If you want to generate a new log before analyzing unfixed errors, run the svctask
dumperrlog command (Example 7-192).
You can add the -prefix parameter to your command to change the default prefix of errlog to
something else (Example 7-193).
To see the file name, you must enter the following command (Example 7-194).
Maximum number of error log dump files: A maximum of ten error log dump files per
node will be kept on the cluster. When the eleventh dump is made, the oldest existing
dump file for that node will be overwritten. Note that the directory might also hold log files
retrieved from other nodes. These files are not counted. The SVC will delete the oldest file
(when necessary) for this node in order to maintain the maximum number of files. The SVC
will not delete files from other nodes unless you issue the cleandumps command.
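A script that mirrors these dump files to a management workstation can apply the same retention policy locally. A sketch (assumes file names of the form errlog_NNNNNN_YYMMDD_HHMMSS with a common prefix per node, so a plain text sort is chronological; the helper name is ours):

```python
def dumps_to_delete(filenames, keep=10):
    """Given error log dump file names for one node, return the oldest
    files to delete so that at most `keep` remain. The YYMMDD_HHMMSS
    suffix makes a plain lexical sort chronological."""
    ordered = sorted(filenames)
    excess = len(ordered) - keep
    return ordered[:excess] if excess > 0 else []
```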
After you generate your error log, you can issue the svctask finderr command to scan the
error log for any unfixed errors, as shown in Example 7-195.
As you can see, we have one unfixed error on our system. To analyze this error, download it
onto your own PC.
To know more about this unfixed error, look at the error log in more detail. Use the PuTTY
Secure Copy process to copy the file from the cluster to your local management workstation,
as shown in Example 7-196.
Example 7-196 pscp command: Copy error logs off of the SVC
In W2K3, select Start -> Run and enter cmd. To use the Run option, you must know where
your pscp.exe is located. In this case, it is in the C:\Program Files\PuTTY\ folder.
Open the file in WordPad (Notepad does not format the window as well). You will see
information similar to what is shown in Example 7-197. We truncated this list for the purposes
of this example.
Error Code : 1230 : Login excluded
Status Flag : UNFIXED
Type Flag : TRANSIENT ERROR
03 00 00 00 03 00 00 00 31 44 17 B8 A0 00 04 20
33 44 17 B8 A0 00 05 20 00 11 01 00 00 00 01 00
33 00 33 00 05 00 0B 00 00 00 01 00 00 00 01 00
04 00 04 00 00 00 01 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
By scrolling through the log, or searching for the term unfixed, you can find more detail about
the problem. You might see more entries in the error log that have a status of unfixed.
After you take the necessary steps to rectify the problem, you can mark the error as fixed in
the log by issuing the svctask cherrstate command against its sequence numbers
(Example 7-198).
If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering
the same command and appending the -unfix flag to the end, as shown in Example 7-199.
This command sends all error, warning, and informational events to the SVC community
on the SNMP manager with the IP address 9.43.86.160.
The syslog protocol is a client-server standard for forwarding log messages from a sender to
a receiver on an IP network. You can use syslog to integrate log messages from various types
of systems into a central repository. You can configure SVC 5.1 to send information to up to
six syslog servers.
You use the svctask mksyslogserver command to configure the SVC using the CLI, as
shown in Example 7-201.
Using this command with the -h parameter gives you information about all of the available
options. In our example, we only configure the SVC to use the default values for our syslog
server.
When we have configured our syslog server, we can display the current syslog server
configurations in our cluster, as shown in Example 7-202.
Important: Before the SVC can start sending e-mails, we must run the svctask
startemail command, which enables this service.
The attempt is successful when the SVC receives a positive acknowledgement from the
e-mail server that the e-mail has been received.
We can configure an e-mail user that will receive e-mail notifications from the SVC cluster.
We can define up to 12 users to receive e-mails from our SVC.
Using the svcinfo lsemailuser command, we can verify who is already registered and what
type of information is sent to that user, as shown in Example 7-204.
We can also create a new user for a SAN administrator, as shown in Example 7-205.
To display the error log, use the svcinfo lserrlog command or the svcinfo caterrlog
command, as shown in Example 7-206 (the output is the same).
0,internal,no,no,5,n1,0,0,080624094301,080624094301,1,00990220,
0,internal,no,no,5,n1,0,0,080624093355,080624093355,1,00990220,
11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,
4,vdisk,no,no,5,n1,0,0,080623145958,080623145958,1,00990183,
5,vdisk,no,no,5,n1,0,0,080623145934,080623145934,1,00990183,
11,vdisk,no,no,5,n1,0,0,080623145017,080623145017,1,00990182,
6,vdisk,no,no,5,n1,0,0,080623144153,080623144153,1,00990183,
.
This command displays the most recently generated error log. Use the method that is
described in 7.14.2, “Running maintenance procedures” on page 456 to upload and analyze
the error log in more detail.
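The condensed listing above is comma separated, so it splits directly into fields. A sketch (only the columns whose meaning is evident from the sample output are named: the object type, a timestamp, and the trailing code; the rest are kept raw, and this field layout is our reading of the listing, not a documented format):

```python
from datetime import datetime

def parse_errlog_line(line):
    """Split one condensed error log line such as
    '11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,'."""
    cols = line.rstrip(",").split(",")
    return {
        "object_id": cols[0],
        "object_type": cols[1],
        "timestamp": datetime.strptime(cols[8], "%y%m%d%H%M%S"),
        "code": cols[-1],
        "raw": cols,
    }
```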
To clear the error log, you can issue the svctask clearerrlog command, as shown in
Example 7-207.
Using the -force flag will stop any confirmation requests from appearing.
When executed, this command will clear all of the entries from the error log. This process will
proceed even if there are unfixed errors in the log. It also clears any status events that are in
the log.
This command is a destructive command for the error log. Only use this command when you
have either rebuilt the cluster, or when you have fixed a major problem that has caused many
entries in the error log that you do not want to fix manually.
Before you change the licensing, you can display the licenses that you already have by
issuing the svcinfo lslicense command, as shown in Example 7-208.
The current license settings for the cluster are displayed in the viewing license settings log
window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror,
Global Mirror, or Virtualization features. They also show the storage capacity that is licensed
for virtualization. Typically, the license settings log contains entries, because feature options
must be set as part of the Web-based cluster creation process.
Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro
Mirror and Global Mirror feature. Example 7-209 on page 462 shows the command that you
enter.
To turn a feature off, add 0 TB as the capacity for the feature that you want to disable.
To verify that the changes you have made are reflected in your SVC configuration, you can
issue the svcinfo lslicense command as before (see Example 7-210).
If no node is specified, the command lists the dumps that are available on the configuration
node.
The svcinfo lserrlogdumps command lists all of the dumps in the /dumps/elogs directory
(Example 7-211).
id filename
0 errlog_104643_080617_172859
1 errlog_104643_080618_163527
2 errlog_104643_080619_164929
3 errlog_104643_080619_165117
4 errlog_104643_080624_093355
5 svcerrlog_104643_080624_094301
6 errlog_104643_080624_120807
7 errlog_104643_080624_121102
8 errlog_104643_080624_122204
9 errlog_104643_080624_160522
The svcinfo lsfeaturedumps command lists all of the dumps in the /dumps/feature
directory (Example 7-212).
The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel
name, and prefix is the value that is entered by the user for the -filename parameter in the
svctask settrace command.
The command to list all of the dumps in the /dumps/iotrace directory is the svcinfo
lsiotracedumps command (Example 7-213).
Each time that the statistics sampling interval is encountered, the I/O statistics that are collected up to this point are written to a file in the /dumps/iostats directory.
The file names that are used for storing I/O statistics dumps are
m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether
the statistics are for MDisks or VDisks. In these file names, NNNNNN is the node front panel
name.
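This naming convention can be illustrated with a short Python sketch of our own (not part of the SVC toolset); the node name and timestamp in the example simply reuse values from the listings above:

```python
# Parse an SVC I/O statistics dump file name of the form
# m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS.
# Illustrative helper only; this is not an SVC command.
import re
from datetime import datetime

def parse_iostats_name(name):
    m = re.match(r"^([mv])_stats_([^_]+)_(\d{6})_(\d{6})$", name)
    if not m:
        raise ValueError("not an I/O statistics dump name: " + name)
    kind = "MDisk" if m.group(1) == "m" else "VDisk"
    node = m.group(2)  # node front panel name
    when = datetime.strptime(m.group(3) + m.group(4), "%y%m%d%H%M%S")
    return kind, node, when

kind, node, when = parse_iostats_name("m_stats_104643_080624_093355")
print(kind, node, when)  # MDisk 104643 2008-06-24 09:33:55
```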
The command to list all of the dumps that are in the /dumps/iostats directory is the svcinfo
lsiostatsdumps command (Example 7-214).
Software dump
The svcinfo lssoftwaredump command lists the contents of the /home/admin/upgrade
directory. Any files in this directory are copied there at the time that you perform a software
upgrade. Example 7-215 shows the command.
However, files can only be copied from the current configuration node (using PuTTY Secure
Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a
non-configuration node to the current configuration node. Subsequently, you can copy them to
the management workstation using PuTTY Secure Copy.
For example, you discover a dump file and want to copy it to your management workstation
for further analysis. In this case, you must first copy the file to your current configuration node.
To copy dumps from other nodes to the configuration node, use the svctask cpdumps
command.
In addition to the directory, you can specify a file filter. For example, if you specified
/dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.
Wildcards: The following rules apply to the use of wildcards with the SAN Volume
Controller CLI:
The wildcard character is an asterisk (*).
The command can contain a maximum of one wildcard.
When you use a wildcard, you must surround the filter entry with double quotation
marks (""), for example:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"
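These rules are simple enough to check programmatically. The following Python sketch (ours, purely illustrative; the SVC CLI enforces the rules itself) tests a filter string against the one-wildcard limit:

```python
# Validate a dump-filter string against the SVC CLI wildcard rules:
# the only wildcard character is '*', and at most one may appear.
# Illustrative helper only; not part of the SVC command set.
def valid_svc_filter(filter_string):
    return filter_string.count("*") <= 1

print(valid_svc_filter("/dumps/elogs/*.txt"))  # True
print(valid_svc_filter("/dumps/*/*.txt"))      # False: two wildcards
```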
Now that you have copied the configuration dump file from Node n4 to your configuration
node, you can use PuTTY Secure Copy to copy the file to your management workstation for
further analysis.
To clear the dumps, you can run the svctask cleardumps command. Again, you can append
the node name if you want to clear dumps off of a node other than the current configuration
node (the default for the svctask cleardumps command).
The commands in Example 7-217 clear all logs or dumps from the SVC Node n1.
The command to list all of the dumps in the /dumps directory is the svcinfo ls2145dumps
command (Example 7-218).
7.14.9 Backing up the SVC cluster configuration
You can back up your cluster configuration by using the Backing Up a Cluster Configuration
window or the CLI svcconfig command. In this section, we describe the overall procedure for
backing up your cluster configuration and the conditions that must be satisfied to perform a
successful backup.
The backup command extracts configuration data from the cluster and saves it to the
svc.config.backup.xml file in the /tmp directory. This process also produces an
svc.config.backup.sh file. You can study this file to see what other commands were issued
to extract information.
A svc.config.backup.log log is also produced. You can study this log for the details of what
was done and when it was done. This log also includes information about the other
commands that were issued.
Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool, but this tool supplements a traditional
data backup and restore tool with a way to back up and restore the client’s configuration.
To provide a complete backup and disaster recovery solution, you must back up both user
(non-configuration) data and configuration (non-user) data. After the restoration of the SVC
configuration, you must fully restore user (non-configuration) data to the cluster’s disks.
Prerequisites
You must have the following prerequisites in place:
- All nodes must be online.
- No object name can begin with an underscore.
- All objects must have non-default names, that is, names that are not assigned by the SVC.
Although we recommend that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory. Objects with default names are renamed when they are restored.
................
CMMVC6136W No SSH key file svc.config.admin.admin.key
CMMVC6136W No SSH key file svc.config.admincl1.admin.key
CMMVC6136W No SSH key file svc.config.ITSOSVCUser1.admin.key
.......................
CMMVC6112W vdisk vdisk7 has a default name
...................
CMMVC6155I SVCCONFIG processing completed successfully
After the svcconfig restore -execute command is started, consider any prior user data on
the VDisks destroyed. The user data must be recovered through your usual application data
backup and restore process.
See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line
Interface User’s Guide, SC26-7544, for more information about this topic.
For a detailed description of the SVC configuration backup and restore functions, see IBM
TotalStorage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543.
When you use the clear command, you erase the files in the /tmp directory. This command does not clear the running configuration or prevent the cluster from working; it only removes the configuration backup files that are stored in the /tmp directory (Example 7-221).
This procedure can, in certain circumstances, recover most user data. However, it must not be used by the client or an IBM service representative without the direct involvement of IBM Level 3 technical support. The procedure is not published; we mention it here only to indicate that the loss of a cluster can be recoverable without total data loss, although it requires restoring application data from backup. It is an extremely sensitive procedure, to be used only as a last resort, and it cannot recover any data that was unstaged from cache at the time of the total cluster failure.
8
We describe the basic configuration procedures that are required to get your SVC
environment up and running as quickly as possible using the Master Console and its
associated GUI.
Chapter 2, “IBM System Storage SAN Volume Controller” on page 7 describes the features in
greater depth. In this chapter, we focus on the operational aspects.
It is possible for many users to be logged into the GUI at any given time. However, no locking
mechanism exists, so if two users change the same object at the same time, the last action
entered from the GUI is the one that will take effect.
Important: Data entries made through the GUI are case sensitive.
The SVC Welcome window (Figure 8-1) is an important window and will be referred to as the
Welcome window throughout this chapter. We expect users to be able to locate this window
without us having to show it each time.
From the Welcome window, select Work with Virtual Disks, and select Virtual Disks.
Table filtering
When you are in the Viewing Virtual Disks list, you can use the table filter option to filter the
visible list, which is useful if the list of entries is too large to work with. You can change the
filtering here as many times as you like, to further reduce the lists or for separate views.
Perform these steps to use table filtering:
1. Use the Show Filter Row icon, as shown in Figure 8-2 on page 471, or select Show
Filter Row in the list, and click Go.
Figure 8-2 Show Filter Row icon
2. This function enables you to filter based on the column names, as shown in Figure 8-3.
The Filter under each column name shows that no filter is in effect for that column.
3. If you want to filter on a column, click the word Filter, which opens up a filter window, as
shown in Figure 8-4 on page 472.
A list of virtual disks (VDisks) is displayed whose names include 01 somewhere in the name, as shown in Figure 8-5. (Notice the filter line under each column heading, showing that our filter is in place.) If you want, you can perform additional filtering on the other columns to further narrow your view.
4. The option to reset the filters is shown in Figure 8-6 on page 473. Use the Clear All
Filters icon or use the Clear All Filters option in the list, and click Go.
Figure 8-6 Clear All Filter options
Sorting
Regardless of whether you use the pre-filter or additional filter options, when you are in the
Viewing Virtual Disks window, you can sort the displayed data by selecting Edit Sort from the
list and clicking Go, or you can click the small Edit Sort icon highlighted by the mouse pointer
in Figure 8-7.
As shown in Figure 8-8 on page 474, you can sort based on up to three criteria, including
Name, State, I/O Group, Managed Disk Group (MDisk Group), Capacity (MB),
Space-Efficient, Type, Hosts, FlashCopy Pair, FlashCopy Map Count, Relationship Name,
UID, and Copies.
Sort criteria: The actual sort criteria differ based on the information that you are sorting.
When you finish making your choices, click OK to regenerate the display based on your
sorting criteria. Look at the icons next to each column name to see the sort criteria currently
in use, as shown in Figure 8-9.
If you want to clear the sort, simply select Clear All Sorts from the list and click Go, or click
the Clear All Sorts icon that is highlighted by the mouse pointer in Figure 8-9.
8.1.2 Documentation
If you need to access the online documentation, in the upper right corner of the window, click
the information icon. This action opens the Help Assistant pane on the right side of the
window, as shown in Figure 8-10.
8.1.3 Help
If you need to access the online help, in the upper right corner of the window, click the
question mark icon. This action opens a new window called the information center. Here,
you can search on any item for which you want help (see Figure 8-11 on page 476).
In addition, each time that you open a configuration or administration window using the GUI in the following sections, a link for that window is created along the top of your Web browser beneath the banner graphic. As a general housekeeping task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.
You can see detailed information about the item by clicking the underlined (progress) number
in the Progress column.
Figure 8-12 Showing possible processes to view where the MDisk is being removed from the MDG
This section details the tasks that you can perform at a disk controller level.
3. When you click the controller Name (Figure 8-13), the Viewing General Details for Name
window (Figure 8-14 on page 478) opens for the controller (where Name is the controller
that you selected). Review the details, and click Close to return to the previous window.
3. You return to the Disk Controller Systems window. You now see the new name of your
controller displayed.
Controller name: The name can consist of the letters A to Z and a to z, the numbers 0
to 9, the dash (-), and the underscore (_). The name can be between one and 15
characters in length. However, the name cannot start with a number, the dash, or the
word “controller” (because this prefix is reserved for SVC assignment only).
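The naming rules quoted in these boxes follow a common pattern. As an illustration only (the SVC validates names itself), the checks might be sketched in Python like this; the sample names are made up:

```python
# Check an object name against the SVC naming rules quoted above:
# letters, digits, dash, and underscore; 1 to 15 characters; the name
# must not start with a digit, a dash, or the reserved object prefix.
# Illustrative helper only; not part of the SVC command set.
import re

def valid_svc_name(name, reserved_prefix):
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,15}", name):
        return False
    if name[0].isdigit() or name[0] == "-":
        return False
    return not name.lower().startswith(reserved_prefix.lower())

print(valid_svc_name("DS4500_1", "controller"))     # True
print(valid_svc_name("controller0", "controller"))  # False: reserved prefix
print(valid_svc_name("4500_DS", "controller"))      # False: starts with a digit
```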
8.2.3 Discovery status
You can view the status of a managed disk (MDisk) discovery from the Viewing Discovery
Status window. This status tells you if there is an ongoing MDisk discovery. A running MDisk
discovery will be displayed with a status of Active.
Tip: If, at any time, the content in the right side of the frame is abbreviated, you can minimize the My Work column by clicking the arrow to the right of the My Work heading at the top right of the column (highlighted with the mouse pointer in Figure 8-17 on page 479).
After you minimize the column, you see an arrow in the far left position in the same location where the My Work column formerly appeared.
2. Review the details, and then, click Close to return to the previous window.
MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9,
the dash (-), and the underscore (_). The name can be between one and 15 characters
in length. However, the name cannot start with a number, the dash, or the word “MDisk”
(because this prefix is reserved for SVC assignment only).
2. You now see a subset (specific to the MDisk that you chose in the previous step) of the
Viewing VDisks using MDisk window in Figure 8-22. We cover the Viewing VDisks window
in more detail in 8.4, “Working with hosts” on page 493.
8.3 Working with Managed Disk Groups
In this section, we describe the tasks that can be performed with the Managed Disk Group
(MDG). From the Welcome window that is shown in Figure 8-1 on page 470, select Working
with MDisks.
3. In the Create a Managed Disk Group window, the wizard provides an overview of the
steps that will be performed. Click Next.
4. While in the Name the group and select the managed disks window (Figure 8-26 on
page 485), follow these steps:
a. Type a name for the MDG.
MDG name: If you do not provide a name, the SVC automatically generates the
name MDiskgrpx, where x is the ID sequence number that is assigned by the SVC
internally.
If you want to provide a name (as we have done), you can use the letters A to Z and
a to z, the numbers 0 to 9, and the underscore (_). The name can be between one
and 15 characters in length and is case sensitive, but it cannot start with a number
or the word “MDiskgrp” (because this prefix is reserved for SVC assignment only).
b. From the MDisk Candidates box, as shown in Figure 8-26 on page 485, one at a time,
select the MDisks that you want to put into the MDG. Click Add to move them to the
Selected MDisks box. More than one page of disks might exist; you can navigate
between the windows (the MDisks that you have selected will be preserved).
c. You can specify a threshold to send a warning to the error log when the capacity is first
exceeded. The threshold can either be a percentage or a specific amount.
d. Click Next.
Figure 8-26 Name the group and select the managed disks window
5. From the list that is shown in Figure 8-27, select the extent size to use; the typical value is 512 MB. The corresponding total cluster capacity is shown in TB. Click Next.
6. In the Verify Managed Disk Group window (Figure 8-28 on page 486), verify that the
information that you have specified is correct. Click Finish.
7. Return to the Viewing Managed Disk Groups window (Figure 8-29) where the new MDG is
displayed.
You have now completed the tasks that are required to create an MDG.
From the Modifying Managed Disk Group MDisk Group Name window (where the MDisk
Group Name is the MDG that you selected in the previous step), type the new name that you
want to assign and click OK (see Figure 8-31).
You can also set or change the usage threshold from this window.
MDG name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, a
dash (-), and the underscore (_). The new name can be between one and 15 characters in
length, but it cannot start with a number, a dash, or the word “mdiskgrp” (because this
prefix is reserved for SVC assignment only).
It is considered a best practice to enable the capacity warning for your MDGs. Decide the threshold range to use during the planning phase of the SVC installation, although you can always change this setting later without interruption.
3. If there are MDisks and VDisks within the MDG that you are deleting, you are required to
click Forced delete for the MDG (Figure 8-33 on page 488).
1. In Figure 8-34, select the MDG to which you want to add MDisks. Select Add MDisks
from the list, and click Go.
2. From the Adding Managed Disks to Managed Disk Group MDGname window (where
MDGname is the MDG that you selected in the previous step), select the desired MDisk or
MDisks from the MDisk Candidates list (Figure 8-35 on page 489). After you select all of
the desired MDisks, click OK.
Figure 8-35 Adding MDisks to an MDG
2. From the Deleting Managed Disks from Managed Disk Group MDGname window (where
MDGname is the MDG that you selected in the previous step), select the desired MDisk or
MDisks from the list (Figure 8-37 on page 490). After you select all of the desired MDisks,
click OK.
3. If VDisks are using the MDisks that you are removing from the MDG, you are required to
click Forced Delete to confirm the removal of the MDisk, as shown in Figure 8-38.
4. An error message is displayed if there is insufficient space to migrate the VDisk data to
other extents on other MDisks in that MDG.
From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed
Disks, and then, select Managed Disks. In the Viewing Managed Disks window (Figure 8-39
on page 491), if your MDisks are not displayed, rescan the Fibre Channel (FC) network.
Select Discover MDisks from the list, and click Go.
Figure 8-39 Discover MDisks
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are properly assigned to the SVC (for example, using storage
partitioning with a DS4000) and that appropriate zoning is in place (for example, the SVC
can see the disk subsystem).
2. You now see a subset (specific to the MDG that you chose in the previous step) of the
Viewing Managed Disks window (Figure 8-41 on page 492) that was shown in 8.2.4,
“Managed disks” on page 479.
Note: Remember, you can collapse the column entitled My Work at any time by clicking the
arrow to the right of the My Work column heading.
8.3.9 Showing the VDisks that are associated with an MDisk group
To show a list of the VDisks that are associated with MDisks within an MDG, perform the
following steps:
1. In Figure 8-42, select the MDG from which you want to retrieve VDisk information. Select
Show VDisks using this group from the list, and click Go.
2. You see a subset (specific to the MDG that you chose in the previous step) of the Viewing
Virtual Disks window in Figure 8-43 on page 493. We describe the Viewing Virtual Disks
window in more detail in “VDisk information” on page 505.
Figure 8-43 VDisks belonging to selected MDG
You have now completed the required tasks to manage the disk controller systems, MDisks,
and MDGs within the SVC environment.
For more details about connecting hosts to an SVC in a SAN environment, see IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905-05.
Starting with SVC 5.1, iSCSI is introduced as an additional method for connecting your host
to the SVC. With this option, the host can now choose between FC or iSCSI as the
connection method. After the connection type has been selected, all further work with the
host is identical for the FC-attached host and the iSCSI-attached host.
To access the Viewing Hosts window from the SVC Welcome window on Figure 8-1 on
page 470, click Work with Hosts, and then, click Hosts. The Viewing Hosts window opens,
as shown in Figure 8-44. You perform each task that is shown in the following sections from
the Viewing Hosts window.
b. You can click Port Details (Figure 8-46) to see the attachment information, such as the
worldwide port names (WWPNs) that are defined for this host or the iSCSI qualified
name (IQN) that is defined for this host.
c. You can click Mapped I/O Groups (Figure 8-47 on page 495) to see which I/O Groups
this host can access.
Figure 8-47 Host mapped I/O Groups
d. A new feature in SVC 5.1 is the capability to create hosts that use either FC
connections or iSCSI connections. If we select iSCSI for our host in this example, we
do not see any iSCSI parameters (as shown in Figure 8-48), because this host is
already configured with an FC port, as shown in Figure 8-46 on page 494.
When you are finished viewing the details, click Close to return to the previous window.
2. In the Creating Hosts window (Figure 8-50 on page 497), type a name for your host (Host
Name).
Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally).
If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0
to 9, and the underscore. The host name can be between one and 15 characters in
length. However, the name cannot start with a number or the word “host” (because this
prefix is reserved for SVC assignment only). Although using an underscore might work
in certain circumstances, it violates the request for change (RFC) 2396 definition of
Uniform Resource Identifiers (URIs) and can cause problems. So, we recommend that
you do not use the underscore in host names.
3. Select the mode (Type) for the host. The default type is Generic. Use Generic for all hosts except HP-UX and Sun hosts: select HP_UX to support more than eight LUNs on HP-UX machines, or select TPGS for Sun hosts that use MPxIO.
4. The connection type is either Fibre Channel or iSCSI. If you select Fibre Channel, you are
asked for the port mask and the WWPN of the server that you are creating. If you select
iSCSI, you are asked for the iSCSI initiator, which is commonly called the IQN, and the
Challenge Handshake Authentication Protocol (CHAP) authentication secret to ensure
authentication of the target host and volume access.
5. You can use a port mask to control the node target ports that a host can access. The port
mask applies to the logins from the host initiator port that are associated with the host
object.
Note: For each login between a host bus adapter (HBA) port and a node port, the node
examines the port mask that is associated with the host object for which the HBA is a
member and determines if access is allowed or denied. If access is denied, the node
responds to SCSI commands as though the HBA port is unknown.
The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled)
to 1111 (all ports enabled). The rightmost bit in the mask corresponds to the lowest
numbered SVC port (1, not 4) on a node.
As shown in Figure 8-50 on page 497, our port mask is 1111; the HBA port can access all
node ports. If, for example, a port mask is set to 0011, only port 1 and port 2 are enabled
for this host access.
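Based on this description, the port mask lookup can be sketched in a few lines of Python (illustrative only; the node performs this check internally):

```python
# Decide whether a host login to a given SVC node port is allowed by a
# four-bit port mask. The rightmost bit corresponds to port 1, the
# lowest-numbered port, as described above.
# Illustrative helper only; not part of the SVC command set.
def port_enabled(mask, port_number):
    # mask is a string such as "0011"; port_number ranges from 1 to 4
    return mask[-port_number] == "1"

print(port_enabled("0011", 1))  # True: port 1 enabled
print(port_enabled("0011", 2))  # True: port 2 enabled
print(port_enabled("0011", 3))  # False: port 3 disabled
```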
6. Select and add the WWPNs that correspond to your HBA or HBAs. Click OK.
In certain cases, your WWPNs might not be displayed, although you are sure that your
adapter is functioning (for example, you see the WWPN in the switch name server) and
your zones are correctly set up. In this case, you can manually type the WWPN of your
HBA or HBAs into the Additional Ports field (type the WWPNs one per line) at the bottom
of the window and select Do not validate WWPN before you click OK.
This action brings you back to the Viewing Hosts window (Figure 8-51) where you can see the
newly added host.
Prior to starting to use iSCSI, we must configure our cluster to use the iSCSI option, which is
shown in 8.4.4, “iSCSI-attached hosts” on page 497.
In the Creating Hosts window (Figure 8-53 on page 499), type a name for your host (Host
Name). Follow these steps:
1. Select the mode (Type) for the host. The default type is Generic. Use Generic for all hosts except HP-UX and Sun hosts: select HP_UX to support more than eight LUNs on HP-UX machines, or select TPGS for Sun hosts that use MPxIO.
2. The connection type is iSCSI.
3. The iSCSI initiator or IQN is iqn.1991-05.com.microsoft:freyja. This IQN is obtained from
the server and generally has the same purpose as the WWPN.
4. The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts to use the same connection. You can set the CHAP for the whole cluster
under cluster properties or for each host definition. The CHAP must be identical on the
server and the cluster/host definition. You can create an iSCSI host definition without
using a CHAP.
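As a rough illustration (not an SVC function), the IQN convention can be checked with a simple pattern; the regular expression below is a loose approximation of the iqn.yyyy-mm.reversed.domain:identifier format defined by RFC 3720:

```python
# Loosely check that an iSCSI initiator name follows the IQN convention.
# Illustrative check only; real initiators and targets validate this
# themselves, and the pattern is deliberately permissive.
import re

def looks_like_iqn(name):
    return re.fullmatch(r"iqn\.\d{4}-\d{2}\.[A-Za-z0-9.-]+(:.+)?", name) is not None

print(looks_like_iqn("iqn.1991-05.com.microsoft:freyja"))  # True
print(looks_like_iqn("freyja"))                            # False
```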
In Figure 8-53 on page 499, we set the parameters for our host called Freyja.
Figure 8-53 iSCSI parameters
2. From the Modifying Host window (Figure 8-55 on page 500), type the new name that you
want to assign or change the Type parameter, and click OK.
Name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, and
the underscore. The name can be between one and 15 characters in length. However, it
cannot start with a number or the word “host” (because this prefix is reserved for SVC
assignment only). While using an underscore might work in certain circumstances, it
violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and thus can
cause problems. So, we recommend that you do not use the underscore in host names.
2. In the Deleting Host host name window (where host name is the host that you selected in
the previous step), click OK if you are sure that you want to delete the host. See
Figure 8-57.
3. If you still have VDisks associated with the host, you will see a window (Figure 8-58)
requesting confirmation for the forced deletion of the host. Click OK and all of the
mappings between this host and its VDisks are deleted before the host is deleted.
Note: A host definition can only have FC ports or an iSCSI port defined, but not both.
1. Select the host to which you want to add ports, as shown in Figure 8-59. Select Add Ports
from the list, and click Go.
2. From the Adding ports window, you can select whether to add an FC port (WWPN) or an
iSCSI port (IQN initiator) for the connection type. Select either the desired WWPN from the
Available Ports list and click Add, or enter the new IQN in the iSCSI window. After adding
the WWPN or IQN, click OK. See Figure 8-60 on page 502.
If your WWPNs are not in the list of the Available Ports and you are sure that your adapter
is functioning (for example, you see WWPN in the switch name server) and your zones are
correctly set up, you can manually type the WWPN of your HBAs into the Add Additional
Ports field at the bottom of the window before you click OK.
Figure 8-61 shows where IQN is added to our host called Thor.
2. On the Deleting Ports From host name window (where host name is the host that you
selected in the previous step), start by selecting the connection type of the port that you
want to delete. If you select Fibre Channel, you select the port that you want to delete
from the Available Ports list, and click Add. When you have selected all of the ports that
you want to delete from your host and when you have added them to the column to the
right, click OK. If you selected the connection type iSCSI, you select the ports from the
available iSCSI initiator and click Add. Then, click OK. Figure 8-63 shows selecting a
WWPN port to delete. Figure 8-64 shows that we have selected an iSCSI initiator to
delete.
3. If you have VDisks that are associated with the host, you receive a warning about deleting
a host port. You need to confirm your action when prompted, as shown in Figure 8-65 on
page 504. A similar warning message appears if you delete an iSCSI port.
8.5.2 VDisk information
To retrieve information about a specific VDisk, perform the following steps:
1. In the Viewing Virtual Disks window, click the underlined name of the desired VDisk in the
list.
2. The next window (Figure 8-67) that opens shows detailed information. Review the
information. When you are finished, click Close to return to the Viewing VDisks window.
5. Select the MDG from which you want the VDisk to be a member:
a. If you selected Striped, you will see the window that is shown in Figure 8-70 on
page 507. You must select the MDisk group, and then, the Managed Disk Candidates
window will appear. You can optionally add MDisks to be striped.
Figure 8-70 Selecting an MDG
b. If you selected Sequential mode, you see the window that is shown in Figure 8-71.
You must select the MDisk group, and then, a list of managed disks appears. You must
choose at least one MDisk as a managed disk.
Figure 8-71 Creating a VDisk wizard: Select attributes for sequential mode VDisks
c. Enter the size of the VDisk that you want to create and select the capacity
measurement (MB or GB) from the list.
d. Click Next.
6. You can enter the VDisk name if you want to create a single VDisk, or you can enter the
naming prefix if you want to create multiple VDisks. Click Next.
VDisk naming: When you create more than one VDisk, the wizard does not ask you for a
name for each VDisk to be created. Instead, the name that you use here will be a prefix
and have a number, starting at zero, appended to it as each VDisk is created.
Note: If you do not provide a name, the SVC automatically generates the name VDiskn
(where n is the ID sequence number that is assigned by the SVC internally).
If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0
to 9, and the underscore. The name can be between one and 15 characters in length,
but it cannot start with a number or the word “VDisk” (because this prefix is reserved for
SVC assignment only).
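The prefix-plus-number behavior can be sketched as follows (illustrative only; the prefix appsrv_ is made up):

```python
# Generate the VDisk names that result from creating several VDisks with
# a naming prefix: a number, starting at zero, is appended to the prefix
# as each VDisk is created, per the description above.
# Illustrative sketch only; the SVC GUI performs this naming itself.
def vdisk_names(prefix, count):
    return [prefix + str(i) for i in range(count)]

print(vdisk_names("appsrv_", 3))  # ['appsrv_0', 'appsrv_1', 'appsrv_2']
```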
7. In the Verify Attributes window (see Figure 8-73 for striped mode and Figure 8-74 on
page 509 for sequential mode), check whether you are satisfied with the information that is
shown, and then, click Finish to complete the task. Otherwise, click Back to return to
make any corrections.
Figure 8-73 Creating a VDisk wizard: Verify the VDisk striped type
Figure 8-74 Creating a VDisk wizard: Verify the VDisk sequential type
8. Figure 8-75 shows the progress of the creation of your VDisks on the storage and the final
results.
As the host that uses this VDisk consumes capacity up to the level of the real allocation, the SVC can dynamically grow the real allocation (when you enable the autoexpand feature) until it reaches the virtual capacity limit or until the MDG physically runs out of free space. In the latter scenario, running out of space causes the growing VDisk to go offline, affecting the host that is using that VDisk. Therefore, enabling threshold warnings is important and recommended.
4. The Set Attributes window opens (Figure 8-69 on page 506). Perform these steps:
a. Choose the type of VDisk that you want to create: striped or sequential.
b. Select the cache mode: Read/Write or None.
c. Enter a unit device identifier (optional).
d. Enter the number of VDisks that you want to create.
e. Select Space-efficient, which expands this section with the following options:
i. Type the VDisk capacity (remember, this size is the virtual size).
ii. Type a percentage or select a specific size for the usage threshold warning.
iii. Select Auto expand, which allows the real disk size to grow as required.
iv. Select the Grain size (choose 32 KB normally, but match the FlashCopy grain size,
which is 256 KB, if the VDisk will be used for FlashCopy).
f. Optionally, format the new VDisk by selecting Format VDisk before use (write zeros to
its managed disk extents).
g. Click Next.
Figure 8-77 Creating a VDisk wizard: Set Attributes
5. On the Select MDisk(s) and Size for a <modetype>-Mode VDisk window, as shown in
Figure 8-78, follow these steps:
a. Select the Managed Disk Group from the list.
b. Optionally, choose the MDisk Candidates upon which to create the VDisk. Click Add to
move them to the Managed Disks Striped in this Order box.
c. Type the Real size that you want to allocate. This size is the amount of disk space that
will actually be allocated. It can either be a percentage of the virtual size or a specific
number.
6. In the Name the VDisk(s) window (Figure 8-79 on page 512), type a name for the VDisk
that you are creating. In our case, we used vdisk_sev2. Click Next.
7. In the Verify Attributes window (Figure 8-80), verify the selections. We can select Back at
any time to make changes.
8. After selecting Finish, we are presented with a window (Figure 8-81 on page 513) that
tells us the result of the action.
Figure 8-81 Space-Efficient VDisk creation success
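The space-efficient wizard settings above map roughly onto a single mkvdisk call on the SVC CLI. The following sketch assembles such a call; the flag names follow the SVC 5.1 CLI as we understand it, and the group, sizes, and VDisk name are examples only, not values from this setup:

```shell
# Build an svctask mkvdisk command for a space-efficient VDisk:
# virtual size, real size (here a percentage), usage warning threshold,
# autoexpand, and grain size, matching the GUI options described above.
build_sev_mkvdisk() {
  mdg="$1" virt_gb="$2" real="$3" warn="$4" grain_kb="$5" name="$6"
  echo "svctask mkvdisk -mdiskgrp $mdg -iogrp io_grp0 -size $virt_gb -unit gb -rsize $real -autoexpand -warning $warn -grainsize $grain_kb -name $name"
}

build_sev_mkvdisk MDG1 100 20% 80% 32 vdisk_sev2
```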
If the VDisk is currently assigned to a host, you receive a secondary message where you
must click Forced Delete to confirm your decision. See Figure 8-83 on page 514. This
action deletes the VDisk-to-host mapping before deleting the VDisk.
Important: Deleting a VDisk is a destructive action for user data residing in that VDisk.
Tip: Make sure that the host is no longer using that disk. Unmapping a disk from a host
does not destroy the disk’s contents.
Unmapping a disk has the same effect as powering off the computer without first
performing a clean shutdown and, thus, might leave the data in an inconsistent state. Also,
any running application that was using the disk will start to receive I/O errors.
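On the CLI, the forced delete described above corresponds roughly to the -force flag of rmvdisk. The sketch below only assembles the command string; the VDisk name is an example:

```shell
# Build the rmvdisk command; a mapped VDisk needs -force, which also
# removes the VDisk-to-host mapping before deleting the VDisk.
build_rmvdisk() {
  vdisk="$1" state="$2"
  if [ "$state" = "mapped" ]; then
    echo "svctask rmvdisk -force $vdisk"
  else
    echo "svctask rmvdisk $vdisk"
  fi
}
```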
Dynamic expansion of a VDisk is only supported when the VDisk is in use by one of the
following operating systems:
AIX 5L V5.2 and higher
Microsoft Windows 2000 Server and Windows Server 2003 for basic disks
Microsoft Windows 2000 Server and Windows Server 2003 with a hot fix from Microsoft
(Q327020) for dynamic disks
Assuming that your operating system supports it, to expand a VDisk, perform the following
steps:
1. Select the VDisk that you want to expand, as shown in Figure 8-68 on page 506. Select
Expand a VDisk from the list, and click Go.
2. The Expanding Virtual Disks VDiskname window (where VDiskname is the VDisk that you
selected in the previous step) opens. See Figure 8-85. Follow these steps:
a. Specify the amount by which to expand the VDisk. This value is an increment, not the
new total size. For example, if you have a 5 GB disk and you want it to become 10 GB,
you specify 5 GB in this field.
b. Optionally, select the MDisk candidates from which to obtain the additional capacity.
The default for a striped VDisk is to use equal capacity from each MDisk in the MDG.
c. Optionally, you can format the extra space with zeros by selecting the Format
Additional Managed Disk Extents check box. This option does not format the entire
VDisk, only the newly expanded space.
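Because the GUI asks for an increment rather than a new total, it is easy to get the arithmetic wrong. The sketch below computes the increment and builds the roughly equivalent CLI call (expandvdisksize, per the SVC 5.1 CLI as we recall it); the VDisk name is an example:

```shell
# Compute the expansion increment from current and target sizes (in GB).
expand_increment_gb() {
  current_gb="$1" target_gb="$2"
  echo $(( target_gb - current_gb ))
}

# Build the CLI command to grow a VDisk to the target size.
build_expand_cmd() {
  echo "svctask expandvdisksize -size $(expand_increment_gb "$1" "$2") -unit gb $3"
}

build_expand_cmd 5 10 vdisk7
```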
3. In the Creating Virtual Disk-to-Host Mappings window (Figure 8-87), select the target host.
We have the option to specify the SCSI LUN ID. (This field is optional. Use this field to
specify an ID for the SCSI LUN. If you do not specify an ID, the next available SCSI LUN
ID on the host adapter is automatically used.) Click OK.
4. You are presented with an information window that displays the status, as shown in
Figure 8-88 on page 517.
Figure 8-88 VDisk to host mapping successful
5. You now return to the Viewing Virtual Disks window (Figure 8-86 on page 516).
You have now completed all of the tasks that are required to assign a VDisk to an attached
host, and the VDisk is ready for use by the host.
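The mapping step above, including the optional SCSI LUN ID, corresponds roughly to mkvdiskhostmap on the CLI. The sketch below assembles the command; flag names follow the SVC 5.1 CLI as we understand it, and the host and VDisk names are examples:

```shell
# Build the host-mapping command. If no SCSI ID is given, omit -scsi and
# let the SVC pick the next available ID, as the GUI note describes.
build_map_cmd() {
  host="$1" vdisk="$2" scsi="$3"
  if [ -n "$scsi" ]; then
    echo "svctask mkvdiskhostmap -host $host -scsi $scsi $vdisk"
  else
    echo "svctask mkvdiskhostmap -host $host $vdisk"
  fi
}
```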
New name: The name can consist of the letters A to Z and a to z, the numbers 0 to
9, and the underscore. The name can be between one and 15 characters in length.
However, it cannot start with a number or the word “VDisk” (because this prefix is
reserved for SVC assignment only).
b. Select an alternate I/O Group from the list to alter the I/O Group to which it is assigned.
c. Set performance throttling for a specific VDisk. In the I/O Governing field, type a
number and select either I/O or MB from the list. Note the following items:
• I/O governing effectively throttles the amount of I/Os per second (or MBs per
second) to and from a specific VDisk. You might want to use I/O governing if you
have a VDisk that has an access pattern that adversely affects the performance of
other VDisks on the same set of MDisks, for example, if it uses most of the available
bandwidth.
• If this application is highly important, migrating the VDisk to another set of MDisks
might be advisable. However, in certain cases, it is an issue with the I/O profile of
the application rather than a measure of its use or importance.
• Base your choice between I/O and MB as the I/O governing throttle on the disk
access profile of the application. Database applications generally issue large
amounts of I/O, but they only transfer a relatively small amount of data. In this case,
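The I/O governing throttle described above corresponds roughly to chvdisk on the CLI, where the rate is an I/O count per second by default or an MB rate with -unitmb (flag names per the SVC 5.1 CLI as we understand it; the VDisk name is an example):

```shell
# Build a chvdisk throttling command: an I/O-per-second rate by default,
# or an MB-per-second rate when the unit argument is "mb".
build_throttle_cmd() {
  rate="$1" unit="$2" vdisk="$3"
  if [ "$unit" = "mb" ]; then
    echo "svctask chvdisk -rate $rate -unitmb $vdisk"
  else
    echo "svctask chvdisk -rate $rate $vdisk"
  fi
}
```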
2. The Migrating Virtual Disk VDiskname window (where VDiskname is the VDisk that you
selected in the previous step) opens, as shown in Figure 8-90. From the MDisk Group
Name list, perform these steps:
a. Select the MDG to which you want to reassign the VDisk. You will only be presented
with a list of MDisk groups with the same extent size.
b. Specify the number of threads to devote to this process (a value from 1 to 4). The
optional threads parameter assigns a priority to the migration process. A setting of 4 is
the highest priority. If you want the process to take a lower priority than other types of
I/O, specify 3, 2, or 1.
When you have finished making your selections, click OK to begin the migration
process.
Important: After a migration starts, you cannot stop it manually. Migration continues until it
completes, unless it is suspended by an error condition or the VDisk that is being migrated
is deleted.
3. You must manually refresh your browser or close it. Return to the Viewing Virtual Disks
window periodically to see the MDisk Group Name column in the Viewing Virtual Disks
window update to reflect the new MDG name.
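The migration dialog above corresponds roughly to migratevdisk on the CLI, with the same 1-to-4 thread (priority) range. The sketch validates the thread count and builds the command; flag names per the SVC 5.1 CLI as we understand it, and the names are examples:

```shell
# Build a migratevdisk command, rejecting thread counts outside 1-4
# (the range the GUI offers for migration priority).
build_migrate_cmd() {
  vdisk="$1" mdg="$2" threads="$3"
  case "$threads" in
    1|2|3|4) ;;
    *) echo "error: threads must be 1-4" >&2; return 1 ;;
  esac
  echo "svctask migratevdisk -mdiskgrp $mdg -threads $threads -vdisk $vdisk"
}
```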
Figure 8-91 Migrate to image mode VDisk wizard: Select the Target MDisk
4. Select the MDG that the MDisk will join (Figure 8-92). Click Next.
5. Select the priority of the migration by selecting the number of threads (Figure 8-93). Click
Next.
Figure 8-93 Migrate to image mode VDisk wizard: Select the Threads
6. Verify that the information that you specified is correct (Figure 8-94). If you are satisfied,
click Finish. If you want to change something, use the Back option.
Figure 8-94 Migrate to image mode VDisk wizard: Verify Migration Attributes
7. Figure 8-95 displays the details of the VDisk that you are migrating.
Tip: You can also create a new mirrored VDisk by selecting an option during the VDisk
creation, as shown in Figure 8-69 on page 506.
You can use a VDisk mirror for any operation for which you can use a VDisk. It is transparent
to higher level operations, such as Metro Mirror, Global Mirror, or FlashCopy.
Creating a VDisk mirror from an existing VDisk is not restricted to the same MDG, which
makes it an ideal method to protect your data from a disk system or array failure. If one copy
of the mirror fails, the SVC provides continuous data access through the other copy. When
the failed copy is repaired, the copies resynchronize automatically.
You can also use a VDisk mirror as an alternative migration tool: synchronize the mirror, and
then split off the original side. The VDisk stays online and can be used normally while the
data is being synchronized. The two copies can also have different structures (that is, striped,
image, sequential, or space-efficient) and different extent sizes.
You can monitor the MDisk copy synchronization progress by selecting the Manage
Progress menu option and, then, by selecting the View Progress link.
8.5.13 Creating a mirrored VDisk
In this section, we create a mirrored VDisk step-by-step. This process creates a highly
available VDisk.
Refer to 8.5.3, “Creating a VDisk” on page 505, perform steps 1 to 4, and, then, perform the
following steps:
1. In the Set Attributes window (Figure 8-97), follow these steps:
a. Select the type of VDisk to create (striped or sequential) from the list.
b. Select the cache mode (read/write or none) from the list.
c. Select a unit device identifier (a numeric value) for this VDisk.
d. Select the number of VDisks to create.
e. Select the Mirrored Disk check box. Certain mirror disk options will appear.
f. Type the Mirror Synchronization rate as a percentage. The default is 50%.
g. Optionally, you can check the Synchronized check box. Select this option when MDisks
are already formatted or when read stability to unwritten areas of the VDisk is not
required.
h. Click Next.
2. In the Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window, as shown in
Figure 8-98 on page 524, follow these steps:
a. Select the MDG from the list.
b. Type the capacity of the VDisk. Select the unit of capacity from the list.
c. Click Next.
3. In the Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window, as shown in
Figure 8-99, select an MDG for Copy 1 of the mirror. You can define Copy 1 within the
same MDG or on another MDG. Click Next.
Figure 8-99 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window
4. In the Name the VDisk(s) window (Figure 8-100), type a name for the VDisk that you are
creating. In this case, we used MirrorVDisk1. Click Next.
5. In the Verify Mirrored VDisk Attributes window (Figure 8-101), verify the selections. We
can select the Back button at any time to make changes.
6. After selecting Finish, we are presented with the window, which is shown in Figure 8-102,
that informs us of the result of the action.
We click Close again, and by clicking our newly created VDisk, we can see more detailed
information about that VDisk, as shown in Figure 8-103 on page 526.
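A second copy can also be added to an existing VDisk from the CLI; the wizard above corresponds roughly to addvdiskcopy, with the synchronization rate as a percentage (the GUI default shown above is 50). Flag names per the SVC 5.1 CLI as we understand it; names are examples:

```shell
# Build an addvdiskcopy command; the sync rate defaults to 50 when the
# third argument is omitted, matching the GUI default described above.
build_addcopy_cmd() {
  echo "svctask addvdiskcopy -mdiskgrp $2 -syncrate ${3:-50} $1"
}
```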
Image mode is intended for the purpose of migrating data from an environment without the
SVC to an environment with the SVC. A LUN that was previously directly assigned to a
SAN-attached host can now be reassigned to the SVC (during a short outage) and returned
to the same host as an image mode VDisk, with the user’s data intact. During the same
outage, the host, cables, and zones can be reconfigured to access the disk, now through the
SVC.
After access is re-established, the host workload can resume while the SVC manages the
transparent migration of the data to other SVC managed MDisks on the same or another disk
subsystem.
We recommend that, during the migration phase of the SVC implementation, you add one
image mode VDisk at a time to the SVC environment. This approach reduces the risk of error.
It also means that the short outages that are required to reassign the LUNs from the
subsystem or subsystems and to reconfigure the SAN and host can be staggered over a
period of time to minimize the effect on the business.
As of SVC Version 4.3, you have the ability to create a VDisk mirror or a Space-Efficient
VDisk while you are creating an image mode VDisk.
You can use the mirroring option, while creating the image mode VDisk, as a storage array
migration tool, because the Copy 1 MDisk will also be in image mode.
To create a space-efficient image mode VDisk, you must allocate the same amount of real
disk space as the original MDisk, because the SVC cannot detect how much physical space a
host actually uses on a LUN.
Important: You can create an image mode VDisk only by using an unmanaged disk, that
is, you must create an image mode VDisk before you add the MDisk that corresponds to
your original logical volume to an MDG.
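On the CLI, creating an image mode VDisk from an unmanaged MDisk corresponds roughly to mkvdisk with -vtype image; flag names per the SVC 5.1 CLI as we understand it, and the MDisk, group, I/O Group, and name below are examples:

```shell
# Build a mkvdisk command for an image mode VDisk sourced from a single
# unmanaged MDisk; the size is taken from the MDisk itself.
build_image_cmd() {
  echo "svctask mkvdisk -vtype image -mdisk $1 -mdiskgrp $2 -iogrp $3 -name $4"
}
```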
6. You can also select whether you want to have read and write operations stored in cache by
specifying a cache mode. Additionally, you can specify a unit device identifier. You can
optionally choose to have a mirrored or Space-Efficient VDisk. Click Next to continue.
Cache mode: You must specify the cache mode when you create the VDisk. After the
VDisk is created, you cannot change the cache mode.
None: Read and write I/O operations performed to the VDisk are not stored in cache.
If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0
to 9, a dash, and the underscore. The name can be between one and 15 characters in
length, but it cannot start with a number, a dash, or the word “VDisk” (because this
prefix is reserved for SVC assignment only).
7. Next, choose the MDisk to use for your image mode VDisk, as shown in Figure 8-105.
Figure 8-105 Select your MDisk to use for your image mode VDisk
8. Select your I/O Group and preferred node to handle the I/O traffic for the VDisk that you
are creating or have the system choose for you, as shown in Figure 8-106.
9. Figure 8-107 on page 529 shows you the characteristics of the new image VDisk. Click
Finish to complete this task.
Figure 8-107 Verify image VDisk attributes
You can now map the newly created VDisk to your host.
5. Figure 8-109 enables you to choose on which of the available MDisks your Copy 0 and
Copy 1 will be stored. Notice that we have selected a second MDisk that is larger than the
original MDisk. Click Next to proceed.
6. Now, you can optionally select an I/O Group and a preferred node, and you can select an
MDG for each of the MDisk copies, as shown in Figure 8-110 on page 531. Click Next to
proceed.
Figure 8-110 Choose an I/O Group and an MDG for each of the MDisk copies
7. Figure 8-111 shows you the characteristics of the new image mode VDisk. Click Finish to
complete this task.
You can monitor the MDisk copy synchronization progress by selecting Manage Progress
and then View Progress, as shown in Figure 8-112 on page 532.
Optionally, you can assign the VDisk to the host or wait until it is synchronized and, after
deleting the MDisk mirror Copy 1, map the MDisk copy to the host.
Figure 8-113 Add a space-efficient copy to VDisk
You can monitor the VDisk copy synchronization progress by selecting the Manage Progress
menu option and, then, the View Progress link, as shown in Figure 8-114 on page 534.
2. Figure 8-116 on page 535 displays both copies of the VDisk mirror. Select the original
copy (Copy ID 0), and click OK.
Figure 8-116 Deleting VDisk Copy 0
To migrate a Space-Efficient VDisk to a fully allocated VDisk, follow the same scenario, but
add a normal (fully allocated) VDisk as the second copy.
Important: After you split a VDisk mirror, you cannot resynchronize or recombine them.
You must create a VDisk copy from scratch.
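Splitting a copy off into a new, independent VDisk corresponds roughly to splitvdiskcopy on the CLI; as the note states, the copies cannot be recombined afterward. Flag names per the SVC 5.1 CLI as we understand it; names are examples:

```shell
# Build a splitvdiskcopy command: detach the given copy ID from the
# source VDisk and give the resulting standalone VDisk a new name.
build_split_cmd() {
  echo "svctask splitvdiskcopy -copy $2 -name $3 $1"
}
```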
Although easily done using the SVC, you must ensure that your operating system supports
shrinking, either natively or by using third-party tools, before using this function.
In addition, we recommend that you always have a good current backup before you execute
this task.
Assuming your operating system supports it, perform the following steps to shrink a VDisk:
1. Perform any necessary steps on your host to ensure that you are not using the space that
you are about to remove.
2. Select the VDisk that you want to shrink (Figure 8-66 on page 504). Select Shrink a
VDisk from the list, and click Go.
3. The Shrinking Virtual Disks VDiskname window (where VDiskname is the VDisk that you
selected in the previous step) opens, as shown in Figure 8-118. In the Reduce Capacity
By field, enter the capacity by which you want to reduce. Select B, KB, MB, GB, TB, or PB.
The final capacity of the VDisk is the Current Capacity minus the capacity that you specify.
Capacity: Be careful with the capacity information. The Current Capacity field shows
the capacity in MBs, while you can specify a capacity to reduce in GBs. SVC calculates
1 GB as 1,024 MB.
When you are finished, click OK. The changes become visible on your host.
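The capacity-unit pitfall in the note above is worth working through: the Current Capacity field reports MBs, while the reduction may be given in GBs, and the SVC counts 1 GB as 1,024 MB. A small sketch of the arithmetic (example sizes only):

```shell
# Final capacity after a shrink: current capacity in MB minus a
# reduction expressed in GB, using the SVC's 1 GB = 1,024 MB.
final_capacity_mb() {
  current_mb="$1" reduce_gb="$2"
  echo $(( current_mb - reduce_gb * 1024 ))
}

final_capacity_mb 10240 2   # 10240 MB - 2 GB = 8192 MB
```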
From the VDisk overview drop-down list, select Show Capacity information, as shown in
Figure 8-122 on page 539.
Figure 8-122 Selecting capacity information for a VDisk
Figure 8-123 shows you the total MDisk capacity, the space in the MDGs, the space allocated
to the VDisks, and the total free space.
3. Now you are back at the window that is shown in Figure 8-124 on page 539. Now, you can
assign this VDisk to another host, as described in 8.5.8, “Assigning a VDisk to a host” on
page 516.
You have now completed the required tasks to manage VDisks within an SVC environment.
More detailed information about solid-state drives and internal controllers is in 2.5,
“Solid-state drives” on page 49.
Figure 8-126 SVC internal controller
The unmanaged MDisks (solid-state drives) are owned by the internal controllers. We
recommend that you create a dedicated MDG for the solid-state drives. When these MDisks
are added to an MDG, they become “managed” and are treated like any other MDisk in an
MDG.
If we look closer at one of the selected controllers, as shown in Figure 8-127, we can verify
the SVC node that owns this controller, and we can verify that this controller is an internal
SVC controller.
We can now check what MDisks (sourced from our solid-state drives) are provisioned from
that controller, as shown in Figure 8-128 on page 542.
From this view, we can see all of the relevant information, such as the status, the MDG, and
the size. To see more detailed information about a single MDisk (single solid-state drive), we
click a single MDisk and we will see its information, as shown in Figure 8-129.
Notice the controller type (6), which is an identifier for the internal controller type.
When you have your solid-state drives in full operation and you want to see the VDisks that
use your solid-state drives, the easiest way is to locate the MDG that contains your solid-state
drives as MDisks, and select Show VDisks Using This Group, as shown in Figure 8-130 on
page 543.
Figure 8-130 Showing VDisks using our solid-state drives
This action displays the VDisks that use your solid-state drives.
If you need to access the online help, in the upper right corner of the window, click the
icon. This icon opens an information center window where you can search on any item for
which you want help (see Figure 8-131 on page 544).
General maintenance
If, at any time, the content in the right side of the frame is abbreviated, you can collapse the
My Work column by clicking the icon at the top of the My Work column. When collapsed,
the small arrow changes from pointing to the left to pointing to the right. Clicking the
small arrow that points right expands the My Work column back to its original size.
In addition, each time that you open a configuration or administrative window using the GUI in
the following sections, it creates a link for that window along the top of your Web browser
beneath the banner graphic. As a general maintenance task, we recommend that you close
each window when you finish using it by clicking the icon to the right of the window name,
but under the icon. Be careful not to close the entire browser.
Figure 8-132 View Cluster Properties: General properties
If the cluster IP address is changed, the open command-line shell closes during the
processing of the command. You must reconnect to the new IP address if the cluster is
connected through that port.
Important: If you specify a new cluster IP address, the existing communication with the
cluster through the GUI is lost. You need to relaunch the SAN Volume Controller
Application from the GUI Welcome window.
Modifying the IP address of the cluster, although quite simple, requires reconfiguration for
other items within the SVC environments, including reconfiguring the central administration
GUI by adding the cluster again with its new IP address.
Perform the following steps to modify the cluster and service IP addresses of our SVC
configuration:
1. From the SVC Welcome window, select Manage Cluster and, then, Modify IP
Addresses.
2. The Modify IP Addresses window (Figure 8-133 on page 546) opens.
Select the port that you want to modify, select Modify Port Setting, and click Go. Notice
that you can configure both ports on the SVC node, as shown in Figure 8-134.
Figure 8-135 Entering the new cluster IP address
3. You advance to the next window, which shows a message indicating that the IP addresses
were updated.
You have now completed the required tasks to change the IP addresses (cluster, service,
gateway, and Master Console) for your SVC environment.
3. Although it does not state the current status, clicking OK turns on the statistics collection.
To verify, click Cluster Properties, as you did in 8.8.1, “Viewing cluster properties” on
page 544. Then, click Statistics. You see the interval as specified in Step 2 and the status
of On, as shown in Figure 8-137 on page 548.
You have now completed the required tasks to start statistics collection on your cluster.
3. The window closes. To verify that the collection has stopped, click Cluster Properties, as
you did in 8.8.1, “Viewing cluster properties” on page 544. Then, click Statistics. Now, you
see the status has changed to Off, as shown in Figure 8-139.
You have now completed the required tasks to stop statistics collection on your cluster.
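Starting and stopping statistics collection corresponds roughly to startstats and stopstats on the CLI, with the sampling interval in minutes; flag names per the SVC 5.1 CLI as we understand it, and the 15-minute default below is an assumption for illustration:

```shell
# Build the statistics-collection command: startstats with an interval
# (in minutes) to turn collection on, stopstats to turn it off.
build_stats_cmd() {
  if [ "$1" = "on" ]; then
    echo "svctask startstats -interval ${2:-15}"
  else
    echo "svctask stopstats"
  fi
}
```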
In Figure 8-140, we can see the overview of partnership properties and which clusters are
currently in partnership with our cluster.
8.8.6 iSCSI
From the View Cluster Properties window, we can select iSCSI to see the iSCSI overview.
The iSCSI properties show whether the iSNS server and CHAP are configured and what
type, if any, of authentication is supported (Figure 8-141).
8.8.7 Setting the cluster time and configuring the Network Time Protocol
server
Perform the following steps to configure time settings:
1. From the SVC Welcome window, select Manage Cluster and Set Cluster Time.
2. The Cluster Date and Time Settings window (Figure 8-142 on page 550) opens. At the top
of the window, you can see the current settings.
You have now completed the necessary tasks to configure an NTP server and to set the
cluster time zone and time.
If you remove the main power while the cluster is still running, the uninterruptible power
supply unit will detect the loss of power and instruct the nodes to shut down. This shutdown
can take several minutes to complete, and although the uninterruptible power supply unit has
sufficient power to perform the shutdown, you will be unnecessarily draining the
uninterruptible power supply unit batteries.
When power is restored, the SVC nodes will start; however, one of the first checks that the
SVC nodes make is to ensure that the uninterruptible power supply unit’s batteries have
sufficient power to survive another power failure, enabling the node to perform a clean
shutdown. (We do not want the uninterruptible power supply unit to run out of power while the
node’s shutdown activities have not yet completed.) If the uninterruptible power supply unit’s
batteries are not sufficiently charged, the node will not start. It can take up to three hours to
charge the batteries sufficiently for a node to start.
Note: When a node shuts down due to loss of power, the node will dump the cache to an
internal hard drive so that the cached data can be retrieved when the cluster starts. With
the 8F2/8G4 nodes, the cache is 8 GB and can take several minutes to dump to the
internal drive.
SVC uninterruptible power supply units are designed to survive at least two power failures in
a short time; after that, the nodes refuse to start until the batteries have sufficient power to
survive another immediate power failure. If, during your maintenance activities, the
uninterruptible power supply unit detects a loss of power multiple times (and thus the nodes
start and shut down more than once in a short time frame), you might find that you have
unknowingly drained the uninterruptible power supply unit batteries. You will have to wait until
they are sufficiently charged before the nodes will start.
Important: Before shutting down a cluster, quiesce all I/O operations that are destined for
this cluster, because you will lose access to all of the VDisks that are provided by this
cluster. Failure to do so might result in failed I/O operations being reported to your host
operating systems.
There is no need to quiesce all I/O operations if you are only shutting down one SVC node.
Begin the process of quiescing all I/O to the cluster by stopping the applications on your
hosts that are using the VDisks that are provided by the cluster.
If you are unsure which hosts are using the VDisks that are provided by the cluster, follow
the procedure in 8.5.22, “Showing the host to which the VDisk is mapped” on page 538,
and repeat this procedure for all VDisks.
Note: At this point, you will lose administrative contact with your cluster.
You have now completed the required tasks to shut down the cluster. Now, you can shut down
the uninterruptible power supply units by pressing the power buttons on their front panels.
If the cluster shuts down because the uninterruptible power supply unit has detected a loss
of power, it will automatically restart when the uninterruptible power supply unit detects that
the power has been restored (and the batteries have sufficient power to survive another
immediate power failure).
Note: To restart the SVC cluster, you must first restart the uninterruptible power supply
units by pressing the power buttons on their front panels. After they are on, go to the
service panel of one of the nodes within your SVC cluster and press the power on button,
releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and
the cluster name on line 2 of the SVC front panel), you can start the other nodes in the
same way.
As soon as all of the nodes are fully booted and you have re-established administrative
contact using the GUI, your cluster is fully operational again.
Each user account has a name, a role, and a password assigned to it, which differs from the
Secure Shell (SSH) key-based role approach that is used by the CLI.
The role-based security feature organizes the SVC administrative functions into groups,
which are known as roles, so that permissions to execute the various functions can be
granted differently to the separate administrative users. There are four major roles and one
special role.
Table 8-2 Authority roles

User group: Copy Operator
Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap,
chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp,
startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership
User: For users who control all copy functionality of the cluster

User group: Monitor
Role: All svcinfo commands; the following svctask commands: finderr, dumperrlog,
dumpinternallog, and chcurrentuser; and the svcconfig command: backup
User: For users who only need view access
The superuser user is a built-in account that has the Security Admin user role permissions.
You cannot change permissions or delete this superuser account; you can only change the
password. You can also change this password manually on the front panels of the cluster
nodes.
Toward the upper-left side of the window, you can see the name of the user that you are
modifying. We enter our new password, as shown in Figure 8-145.
2. Select Create a User from the list, as shown in Figure 8-147.
3. Enter a name for your user and the desired password. Because we are not connected to a
Lightweight Directory Access Protocol (LDAP) server, we select Local for the
authentication type. Therefore, we can choose to which user group our user belongs. In
our scenario, we are creating a user for SAN administrative purposes, and it is therefore
appropriate to add this user to the Administrator group. We attach the SSH key, as well, so
a CLI session can be opened. We view the attributes, as shown in Figure 8-148.
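The local-user creation above corresponds roughly to mkuser on the CLI. The sketch below only assembles the command string; the flag names reflect the SVC 5.1 CLI as we understand it, and the user name, group, password, and key path are all hypothetical examples:

```shell
# Build a mkuser command for a locally authenticated user with a
# user group, password, and SSH public key file, as in the GUI steps.
build_mkuser_cmd() {
  echo "svctask mkuser -name $1 -usergrp $2 -password $3 -keyfile $4"
}
```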
2. You have the option of changing the password, assigning a new role, or changing the SSH
key for the given user name. Click OK (Figure 8-151).
Figure 8-152 Delete a user
2. Click OK to confirm that you want to delete the user, as shown in Figure 8-153.
Here, we have several options for our user group and we find detailed information about the
available groups. In Figure 8-155 on page 558, we can see the options, which are the same
options with which we are presented when we select Modify User group.
We have now completed the tasks that are required to create, modify, and delete a user and
user groups within the SVC cluster.
8.10 Working with nodes using the GUI
This section discusses the various configuration and administrative tasks that you can
perform on the nodes within an SVC cluster.
3. On the Renaming I/O Group I/O Group name window (where I/O Group name is the I/O
Group that you selected in the previous step), type the New Name that you want to assign
to the I/O Group. Click OK, as shown in Figure 8-159. Our new name is PROD_IO_GRP.
SVC also uses “io_grp” as a reserved word prefix. A node name therefore cannot be
changed to io_grpn, where n is numeric; however, io_grpny or io_grpyn, where y is any
non-numeric character used in conjunction with n, is acceptable.
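The rename itself maps onto one CLI command; a sketch using the name from this scenario (verify the syntax against the svctask chiogrp reference for your code level):

```
svctask chiogrp -name PROD_IO_GRP io_grp0
```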
3. The Viewing Clusters window opens, as shown in Figure 8-161. On the Viewing Clusters
window, select the cluster on which you want to perform actions (in our case,
ITSO_CLS3). Click Go.
4. The SAN Volume Controller Console Application launches in a separate browser window
(Figure 8-162). In this window, as with the Welcome window, you can see several links
under My Work (top left), a Recent Tasks list (bottom left), the SVC Console version and
build level information (on the right, under the graphic), and a hypertext link that takes you
to the SVC download page:
http://www.ibm.com/storage/support/2145
Under My Work, click Work with Nodes and, then, Nodes.
5. The Viewing Nodes window (Figure 8-163 on page 562) opens. Note the input/output (I/O)
group name (for example, io_grp0). Select the node that you want to add. Ensure that Add
a node is selected from the drop-down list, and click Go.
Node name: You can rename the existing node to your own naming convention
standards (we show you how to rename the existing node later). In your window, it
appears as node1, by default.
6. The next window (Figure 8-164) displays the available nodes. Select the node from the
Available Candidate Nodes drop-down list. Associate it with an I/O Group and provide a
name (for example, SVCNode2). Click OK.
Note: If you do not provide a name, the SVC automatically generates the name noden,
where n is the ID sequence number that is assigned by the SVC internally. If you want
to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore. The name must be between one and 15 characters in length, but it cannot
start with a number or the word “node” (because this prefix is reserved for SVC
assignment only).
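The naming rule in this note, together with the io_grp restriction mentioned earlier, can be captured in a small helper; a sketch (the function name is our own):

```python
import re

# Object-name rule from the note above, plus the io_grp restriction:
# letters, digits, and underscore, 1 to 15 characters, not starting
# with a digit, the word "node", or io_grp<n> where n is numeric.
def is_valid_node_name(name):
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,14}", name):
        return False
    if name.startswith("node"):
        return False  # reserved for SVC-assigned names
    if re.match(r"io_grp\d", name):
        return False  # io_grp<n> is a reserved word prefix
    return True
```

For example, SVCNode2 passes the check, while node1, 2node, and io_grp0 are all rejected.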
In our case, we only have enough nodes to complete the formation of one I/O Group.
Therefore, we added our new node to the I/O Group that node1 was already using,
io_grp0 (you can rename the I/O Group from the default of iogrp0 using your own naming
convention standards).
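The node addition can also be sketched in the CLI; the panel name below is a hypothetical front-panel ID (verify the syntax against the svctask addnode reference for your code level):

```
svcinfo lsnodecandidate
svctask addnode -panelname 104603 -iogrp io_grp0 -name SVCNode2
```

svcinfo lsnodecandidate lists the candidate nodes and their panel names; an empty list corresponds to the CMMVC1100I condition.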
If this window does not display any available nodes (which is indicated by the message
“CMMVC1100I There are no candidate nodes available”), check whether your second
node is powered on and whether zones are appropriately configured in your switches. It is
also possible that a pre-existing cluster’s configuration data is stored on the second node.
If you are sure that this node is not part of another active SVC cluster, use the service
window to delete the existing cluster information. When this action is complete, return to
this window and you will see the node listed.
7. Return to the Viewing Nodes window (Figure 8-165). It shows the status change of the
node from Adding to Online.
Refresh: This window does not automatically refresh. Therefore, you continue to see
the Adding status until you click Refresh.
We will configure our nodes to use the primary and secondary Ethernet ports for iSCSI, as
well as to contain the cluster IP. While we are configuring our nodes to be used with iSCSI, we
are not affecting our cluster IP. The cluster IP is changed, as shown in 8.8, “Managing the
cluster using the GUI” on page 544.
It is important to know that you can have more than a one-to-one relationship between IP
addresses and physical connections. Up to four IP addresses (two IPv4 plus two IPv6) can
be assigned to each physical Ethernet port on each node, which is a four-to-one (4:1)
relationship.
Important: When reconfiguring IP ports, be aware that already configured iSCSI
connections must be reconnected if the IP addresses of the nodes are changed.
You can configure iSCSI authentication (CHAP) in either of two ways: for the whole
cluster or per host connection. We show configuring CHAP for the entire cluster in 8.8.6,
“iSCSI” on page 549.
In our scenario, we have a cluster IP of 9.64.210.64, as shown in Figure 8-166 on page 564.
That cluster will not be impacted during our configuration of the nodes’ IP addresses.
1. We start by selecting Work with nodes from our Welcome window and by selecting Node
Ethernet Ports, as shown in Figure 8-167.
We can see that we have four (two per node) connections to use. They are all physically
connected with a 100 Mb link, but they are not configured yet.
From the list, we select Configure a Node Ethernet Port and insert the IP address that
we intend to use for iSCSI, as shown in Figure 8-168.
2. We can now see that one of our Ethernet ports is now configured and online, as shown in
Figure 8-169 on page 565. We perform the same task to configure the three remaining IP
addresses.
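Configuring a node Ethernet port for iSCSI can also be sketched in the CLI; the addresses below are hypothetical placeholders (verify the syntax against the svctask cfgportip reference for your code level):

```
svctask cfgportip -node node1 -ip 9.64.210.70 -mask 255.255.255.0 -gw 9.64.210.1 1
svctask cfgportip -node node1 -ip 9.64.210.71 -mask 255.255.255.0 -gw 9.64.210.1 2
```

The trailing number is the port ID; repeat the command for each port on each node.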
Figure 8-169 Ethernet port successfully configured and online
We configure the remaining ports and use a unique IP address for each port. When finished,
all of our Ethernet ports are configured, as shown in Figure 8-170.
Now, both physical ports on each node are configured for iSCSI.
We can see the iSCSI identifier (iSCSI name) for our SVC node by selecting Working with
nodes from our Welcome window. Then, by selecting Nodes, under the column iSCSI Name,
we see our iSCSI identifier, as shown in Figure 8-171.
Each node has a unique iSCSI name associated with two IP addresses. After the host has
initiated the iSCSI connection to a target node, this IQN from the target node will be visible in
the iSCSI configuration tool on the host.
You can also enter an iSCSI alias name for the iSCSI name on the node, as shown in
Figure 8-172 on page 566.
We change the name to a name that is easier to recognize, as shown in Figure 8-173.
Figure 8-174 Select FlashCopy Consistency Groups
2. Then, from the list, select Create a Consistency Group, and click Go (Figure 8-175).
3. Enter the desired FlashCopy consistency group name, and click OK, as shown in
Figure 8-176.
Autodelete: If you choose to use the Automatically Delete Consistency Group When
Empty feature, you can only use this consistency group for mappings that are marked for
autodeletion. A non-autodelete consistency group can contain both autodelete
FlashCopy mappings and non-autodelete FlashCopy mappings.
Repeat the previous steps to create another FlashCopy consistency group (Figure 8-178).
The FlashCopy consistency groups are now ready to use.
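Each consistency group corresponds to one CLI command; a sketch (FC_AUTO is a hypothetical group name, and the -autodelete parameter corresponds to the Automatically Delete Consistency Group When Empty option — verify the syntax against the svctask mkfcconsistgrp reference for your code level):

```
svctask mkfcconsistgrp -name FC_SIGNA
svctask mkfcconsistgrp -name FC_AUTO -autodelete
```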
4. We are then presented with the FlashCopy creation wizard overview of the creation
process for a FlashCopy mapping, and we click Next to proceed.
5. We name the first FlashCopy mapping PROD_1, select the previously created consistency
group FC_SIGNA, set the background copy priority to 50 and the Grain Size to 64, and
click Next to proceed, as shown in Figure 8-180 on page 569.
6. The next step is to select the source VDisk. If there are many source VDisks that are not
already defined in a FlashCopy mapping, we can filter the list here. In Figure 8-181,
we define the filter * (the asterisk shows us all of our VDisks) for the source VDisk, and
click Next to proceed.
7. We select Galtarey_01 from the available VDisks as our source disk, and click Next to
proceed.
8. The next step is to select our target VDisk. The FlashCopy mapping wizard only presents
a list of the VDisks that are the same size as the source VDisk. These VDisks are not
already in a FlashCopy mapping, and they are not already defined in a Metro Mirror
relationship. In Figure 8-182 on page 570, we select the target Hrappsey_01 and click
Next to proceed.
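The mapping that the wizard builds corresponds to a single CLI command; a sketch using the names and parameters from this example (verify the syntax against the svctask mkfcmap reference for your code level):

```
svctask mkfcmap -source Galtarey_01 -target Hrappsey_01 -name PROD_1 -consistgrp FC_SIGNA -copyrate 50 -grainsize 64
```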
We repeat the procedure to create other FlashCopy mappings on the second FlashCopy
target VDisk named Galtarey_01:
1. We give this VDisk another FlashCopy mapping name and choose a separate FlashCopy
consistency group, as shown in Figure 8-185 on page 571.
2. As you can see in this example, we changed the background copy rate to 30, which slows
down the background copy process. The clearing rate of 60 extends the stopping process
if we had to stop the mapping during a copy process. An incremental mapping copies only
the parts of the source or target VDisk that have changed since the last FlashCopy
process.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process
copies all of the data from the source to the target VDisk.
In Figure 8-186 on page 572, you can see that Galtarey_01 is still available.
4. On the final page of the wizard, as shown in Figure 8-188 on page 573, we select Finish
after verifying all the parameters.
Figure 8-188 Verification of FlashCopy mapping
The background copy rate specifies the priority that is given to completing the copy. If 0 is
specified, the copy does not proceed in the background. The default value is 50.
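The copy rate value maps onto a stepped bandwidth scale in which each band of ten doubles the background copy bandwidth; a sketch of that scale as we understand it from the product documentation (verify the exact figures for your code level):

```python
# Stepped scale relating the SVC background copy rate (0-100) to the
# background copy bandwidth in KiB/s: rates 1-10 give 128 KiB/s and
# each further band of 10 doubles the bandwidth, up to 64 MiB/s at
# rates 91-100; rate 0 disables background copy.
def copy_rate_to_kibps(rate):
    if not 0 <= rate <= 100:
        raise ValueError("copy rate must be between 0 and 100")
    if rate == 0:
        return 0
    return 128 * 2 ** ((rate - 1) // 10)
```

For example, the default rate of 50 corresponds to 2048 KiB/s (2 MiB/s) of background copy bandwidth.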
Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might not make
much sense if you plan to handle a large number of FlashCopy mappings or consistency
groups periodically, or at varying times. In this case, creating a script by using the CLI
might be more convenient.
If you only select one mapping to be prepared, the cluster will ask if you want all of the
volumes in that consistency group to be prepared, as shown in Figure 8-189.
When you have assigned several mappings to a FlashCopy consistency group, you only have
to issue a single prepare command for the whole group, to prepare all of the mappings at one
time.
We select the FlashCopy consistency group, select Prepare a consistency group from the
list, and click Go. The status changes to Preparing and, then, finally to Prepared. Click
Refresh several times until the FlashCopy consistency group is in the Prepared state.
Figure 8-190 on page 574 shows how we check the result. The status of the consistency
group has changed to Prepared.
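The prepare-and-check sequence can be sketched in the CLI (verify the syntax against the svctask reference for your code level):

```
svctask prestartfcconsistgrp FC_SIGNA
svcinfo lsfcconsistgrp FC_SIGNA
```

The svcinfo lsfcconsistgrp output shows the group status; repeat it until the status reads prepared.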
Because we have already prepared the FlashCopy mapping, we are ready to start the
mapping right away. Notice that this mapping is not a member of any consistency group. An
overview message with information about the mapping that we are about to start is shown in
Figure 8-192, and we select Start to start the FlashCopy mapping.
After we have selected Start, we are automatically shown the copy process view that shows
the progress of our copy mappings.
To start a FlashCopy consistency group, we select the consistency group, select Start a
Consistency Group from the list, and click Go.
In Figure 8-194, we are prompted to confirm starting the FlashCopy consistency group. We
now flush the database and OS buffers and quiesce the database. Then, we click OK to start
the FlashCopy consistency group.
Note: Because we have already prepared the FlashCopy consistency group, this option is
grayed out when you are prompted to confirm starting the FlashCopy consistency group.
As shown in Figure 8-195, we verified that the consistency group is in the Copying state, and
subsequently, we resume the database I/O.
When the background copy is completed for all FlashCopy mappings in the consistency
group, the status is changed to “Idle or Copied”.
Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment,
consider whether you want to keep any of the dependent mappings. If not, issue the stop
command with the force parameter, which stops all of the dependent maps too and
negates the need for stopping the copy process.
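The forced stop described in this tip can be sketched in the CLI (verify the syntax against the svctask reference for your code level):

```
svctask stopfcmap -force PROD_1
svctask stopfcconsistgrp -force FC_DONA
```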
Important: Only stop a FlashCopy mapping when the data on the target VDisk is useless,
or if you want to modify the FlashCopy mapping.
When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline
by the SVC.
As shown in Figure 8-197 on page 577, we stop the FC_DONA consistency group. All of the
mappings belonging to that consistency group are now in the Copying state.
Figure 8-197 Stop FlashCopy consistency group
2. When selecting the method to use to stop the mapping, we have the three options, as
shown in Figure 8-199.
3. Because we want to stop the mapping immediately, we select Forced Stop. The status of
the FlashCopy consistency group changes from Copying → Stopping → Stopped, as
shown in Figure 8-200 on page 578.
When we initially create a mapping, we can select the “Automatically delete mapping when
the background copy completes” function, as shown in Figure 8-201.
Or, if the option has not been selected initially, you can delete the mapping manually, as
shown in Figure 8-202 on page 579.
Figure 8-202 Manually deleting a FlashCopy mapping
Tip: If you want to use the target VDisks in a consistency group as normal VDisks, you can
monitor the background copy progress until it is complete (100% copied) and, then, delete
the FlashCopy mapping.
When deleting a consistency group, we start by selecting a group. From the list, select Delete
a Consistency Group and click Go, as shown in Figure 8-203.
We can still delete a FlashCopy consistency group even if the consistency group has a status
of Copying, as shown in Figure 8-204, by forcing the deletion.
Figure 8-204 Deleting a consistency group with a mapping in the Copying state
And, because there is an active mapping with the state of Copying, we see a warning
message, as shown in Figure 8-205 on page 580.
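Forcing the deletion of a group that still contains a copying mapping can be sketched in the CLI (verify the syntax against the svctask rmfcconsistgrp reference for your code level):

```
svctask rmfcconsistgrp -force FC_DONA
```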
Create a FlashCopy mapping with the fully allocated VDisk as the source and the
Space-Efficient VDisk as the target. We describe creating a Space-Efficient VDisk in 8.5.4,
“Creating a Space-Efficient VDisk with autoexpand” on page 509 in detail.
Important: The copy process overwrites all of the data on the target VDisk. You must back
up all of the data before you start the copy process.
You can start a FlashCopy mapping whose target is the source of another FlashCopy
mapping. This capability enables you to reverse the direction of a FlashCopy map, without
having to remove existing maps, and without losing the data from the target.
When you prepare either a stand-alone mapping or consistency group, you are prompted with
a message, as shown in Figure 8-206.
Splitting a cascaded FlashCopy mapping allows the source VDisk of a map that is 100%
complete to be removed from the head of the cascade when the map is stopped.
For example, if you have four VDisks in a cascade (A → B → C → D), and the map A → B is
100% complete, as shown in Figure 8-207, clicking Split Stop, as shown in Figure 8-208,
results in FCMAP_AB becoming idle_copied and the remaining cascade becomes
B → C → D.
Without the split option, VDisk A remains at the head of the cascade (A → C → D). Consider
this sequence of steps:
The user takes a backup using the mapping A → B. A is the production VDisk; B is a backup.
At a later point, the user experiences corruption on A and, therefore, reverses the mapping
(B → A).
The user then takes another backup from the production disk A and, therefore, has the
cascade B → A → C.
Stopping A → B without using the Split Stop option results in the cascade B → C. Note that
the backup disk B is now at the head of this cascade.
When the user next wants to take a backup to B, the user can still start mapping A → B (using
the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).
Stopping A → B with the Split Stop option results in the cascade A → C. This option does not
result in the same problem, because the production disk A is at the head of the cascade
instead of the backup disk B.
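The two stopping behaviors can be illustrated with a toy model (our own simplification, not SVC code): a cascade is a list of VDisks, and we stop the 100%-complete map at the head of the cascade.

```python
# Toy model of stopping the 100%-complete map at the head of a
# FlashCopy cascade. A cascade is a list of VDisk names, e.g.
# ["A", "B", "C", "D"] represents A -> B -> C -> D.
def stop_head_map(cascade, split_stop):
    if split_stop:
        # Split Stop: the head VDisk (the source of the head map)
        # leaves the cascade, e.g. A -> B -> C -> D becomes B -> C -> D.
        return cascade[1:]
    # Without split: the head map's target drops out and the head
    # VDisk remains, e.g. A -> B -> C -> D becomes A -> C -> D.
    return [cascade[0]] + cascade[2:]
```

Applied to the backup scenario above, stopping without split turns the cascade B → A → C into B → C (backup disk at the head), whereas stopping with Split Stop yields A → C.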
Note: This example is for intercluster Metro Mirror operations only. If you want to set up
Metro Mirror intracluster operations, we highlight those parts of the following procedure
that you do not need to perform.
Now, you can have a cluster partnership among multiple SVC clusters, which allows you to
create four types of configurations, using a maximum of four connected clusters:
Star configuration, as shown in Figure 8-209
Fully connected configuration, as shown in Figure 8-211
In the following scenario, we set up an intercluster Metro Mirror relationship between the
ITSO-CLS1 SVC cluster at the primary site and the ITSO-CLS2 SVC cluster at the secondary
site. Table 8-3 shows the details of the VDisks.
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri VDisks, a
consistency group named CG_W2K3_MM is created to handle the Metro Mirror
relationships for them. Because, in this scenario, the application files are independent of the
database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri VDisk.
Figure 8-213 on page 584 illustrates the Metro Mirror setup.
Figure 8-213 shows the consistency group CG_W2K3_MM containing MM Relationship 1
(MM_DB_Pri → MM_DB_Sec) and MM Relationship 2 (MM_DBlog_Pri → MM_DBlog_Sec),
plus the stand-alone MM Relationship 3 (MM_App_Pri → MM_App_Sec).
5. Create the Metro Mirror relationship for MM_App_Pri:
– Master MM_App_Pri
– Auxiliary MM_App_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name MMREL3
Note: If you are creating an intracluster Metro Mirror, do not perform this next step to
create the SVC cluster Metro Mirror partnership. Instead, skip to 8.14.4, “Creating a Metro
Mirror consistency group” on page 587.
To create a Metro Mirror partnership between the SVC clusters using the GUI, perform these
steps:
1. We launch the SVC GUI for ITSO-CLS1. Then, we select Manage Copy Services from
the Welcome window and click Metro & Global Mirror Cluster Partnerships. The
window opens, as shown in Figure 8-214.
As shown in Figure 8-216, our partnership is in the Partially Configured state, because we
have only performed the work on one side of the partnership so far.
3. To fully configure the Metro Mirror cluster partnership, we must perform the same steps on
ITSO-CLS2 as we did on ITSO-CLS1. For simplicity and brevity, only the two most
significant windows are shown for the fully configured partnership.
4. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the Metro Mirror cluster
partnership and specify the available bandwidth for the background copy, again 50 MBps,
and then click OK, as shown in Figure 8-217 on page 587.
Figure 8-217 We select the cluster partner for the secondary partner
Now that both sides of the SVC cluster partnership are defined, the resulting window shown
in Figure 8-218 confirms that our Metro Mirror cluster partnership is in the Fully Configured
state.
The GUI for ITSO-CLS2 is no longer necessary. Close this GUI, and use the GUI for the
ITSO-CLS1 cluster for all further steps.
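The partnership steps on both clusters can be sketched in the CLI (verify the syntax against the svctask mkpartnership reference for your code level):

```
# On ITSO-CLS1:
svctask mkpartnership -bandwidth 50 ITSO-CLS2
# On ITSO-CLS2:
svctask mkpartnership -bandwidth 50 ITSO-CLS1
```

After the first command, the partnership is only partially configured; after the second, it becomes fully configured.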
2. The wizard appears that helps to create the Metro Mirror consistency group. First, the
wizard introduces the steps that are involved in the creation of a Metro Mirror consistency
group, as shown in Figure 8-220. Click Next to proceed.
Figure 8-220 Introduction to the Metro Mirror consistency group creation wizard
3. As shown in Figure 8-221, specify the name for the consistency group, and select the
remote cluster, which we have already defined in 8.14.3, “Creating the SVC partnership
between ITSO-CLS1 and ITSO-CLS2” on page 585. If you are planning to use this
consistency group for internal mirroring, that is, mirroring within the same cluster, select
intracluster consistency group. In our scenario, we selected Create an inter-cluster
consistency group with the remote cluster ITSO_CLS2. Click Next.
4. In Figure 8-222, we can see the Metro Mirror relationships that have already been created
that can be included in our Metro Mirror consistency group. Because we do not have any
existing relationships at this point to include in the Metro Mirror consistency group, we
create a blank group by clicking Next to proceed.
5. Verify the setting for the consistency group, and click Finish to create the Metro Mirror
consistency group, as shown in Figure 8-223.
After creating the consistency group, the GUI returns to the Viewing Metro & Global Mirror
Consistency Groups window, as shown in Figure 8-224. This page lists the newly created
consistency group. Notice that the newly created consistency group is “empty”, because
no relationships have been added to the group.
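The empty consistency group can also be created with one CLI command; a sketch using this scenario's names (verify the syntax against the svctask mkrcconsistgrp reference for your code level):

```
svctask mkrcconsistgrp -name CG_W2K3_MM -cluster ITSO-CLS2
```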
3. We are presented with the wizard that will help us create the Metro Mirror relationship.
First, the wizard introduces the steps that are involved in the creation of the Metro Mirror
relationship, as shown in Figure 8-226. Click Next to proceed.
4. As shown in Figure 8-227 on page 591, we name the first Metro Mirror relationship MMREL1
and specify the type of cluster relationship (in this case, intercluster as per the scenario
that is shown in Figure 8-213 on page 584). The wizard also gives us the option to select
the type of copy service, which, in our case, is Metro Mirror Relationship.
Figure 8-227 Naming the Metro Mirror relationship and selecting the type of cluster relationship
5. Next, we select a master VDisk. Because the list of VDisks can be large, the Filtering
Master VDisk Candidates window opens, which allows us to reduce the list of eligible
VDisks based on a defined filter.
In Figure 8-228, you can use the asterisk character (*) filter to list all of the VDisks, and
click Next.
Tip: In our scenario, we use MM* as a filter to avoid listing all the VDisks.
6. As shown in Figure 8-229 on page 592, we select MM_DB_Pri to be a master VDisk for
this relationship, and click Next to proceed.
7. The next step requires us to select an auxiliary VDisk. The Metro Mirror relationship
wizard will automatically filter this list, so that only eligible VDisks are shown. Eligible
VDisks are VDisks that have the same size as the master VDisk and that are not already
part of a Metro Mirror relationship.
As shown in Figure 8-230, we select MM_DB_Sec as the auxiliary VDisk for this
relationship and click Next to proceed.
8. As shown in Figure 8-231, we select the consistency group that we created, and now our
relationship is immediately added to that group. Click Next to proceed.
9. Finally, in Figure 8-232, we verify the attributes for our Metro Mirror relationship and click
Finish to create it.
After the relationship is successfully created, we are returned to the Metro Mirror relationship
list.
After the successful creation of the relationship, the GUI returns to the Viewing Metro &
Global Mirror Relationships window, as shown in Figure 8-233. This window lists the newly
created relationship. Notice that we have not started the copy process; we have only
established the connections between those two VDisks.
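The relationship built by the wizard corresponds to a single CLI command; a sketch using this scenario's names (verify the syntax against the svctask mkrcrelationship reference for your code level):

```
svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS2 -consistgrp CG_W2K3_MM -name MMREL1
```

For a Global Mirror relationship, the same command takes an additional -global parameter.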
By following a similar process, we create the second Metro Mirror relationship, MMREL2,
which is shown in Figure 8-234.
Figure 8-235 Specifying the Metro Mirror relationship name and auxiliary cluster
4. As shown in Figure 8-236, we are prompted for a filter to use to present the master
VDisk candidates. We enter the MM* filter and click Next.
5. As shown in Figure 8-237 on page 595, we select MM_App_Pri to be the master VDisk of
the relationship, and we click Next to proceed.
Figure 8-237 Selecting the master VDisk
7. As shown in Figure 8-239, we do not select a consistency group, because we are creating
a stand-alone Metro Mirror relationship.
Note: To add a Metro Mirror relationship to a consistency group, it must be in the same
state as the consistency group.
Figure 8-240 The consistency group must have the same state as the relationship
8. Finally, Figure 8-241 shows the actions that will be performed. We click Finish to create
this new relationship.
After the successful creation, we are returned to the Metro Mirror relationship window.
Figure 8-242 now shows all of our defined Metro Mirror relationships.
8.14.7 Starting Metro Mirror
Now that we have created the Metro Mirror consistency group and relationships, we are ready
to use Metro Mirror relationships in our environment.
When performing Metro Mirror, the goal is to reach a consistent and synchronized state that
can provide redundancy if a failure occurs that affects the SAN at the production site.
In the following section, we show how to stop and start a stand-alone Metro Mirror
relationship and a consistency group.
In Figure 8-244, we do not need to change the Forced start, Mark as clean, or Copy direction
parameter, because we are invoking this Metro Mirror relationship for the first time (and we
have defined the relationship as already synchronized). We click OK to start the MMREL3
stand-alone Metro Mirror relationship.
Because the Metro Mirror relationship was in the Consistent stopped state and no updates
have been made to the primary VDisk, the relationship quickly enters the Consistent
synchronized state, as shown in Figure 8-246 on page 598.
In Figure 8-246, we select the CG_W2K3_MM Metro Mirror consistency group, and from the
list, we select Start Copy Process and click Go.
As shown in Figure 8-247, we click OK to start the copy process. We cannot select the
Forced start, Mark as clean, or Copy Direction option, because our consistency group is
currently in the Inconsistent stopped state.
As shown in Figure 8-248 on page 599, we are returned to the Metro Mirror consistency
group list and the CG_W2K3_MM consistency group has changed to the Inconsistent copying
state.
Figure 8-248 Viewing Metro Mirror consistency groups
Because the consistency group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy has completed for all of the relationships
in the consistency group. Upon the completion of the background copy for all of the
relationships in the consistency group, the consistency group enters the Consistent
synchronized state.
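Both start operations can be sketched in the CLI (verify the syntax against the svctask reference for your code level):

```
svctask startrcrelationship MMREL3
svctask startrcconsistgrp CG_W2K3_MM
```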
Figure 8-249 Viewing background copy progress for Metro Mirror relationships
Note: Setting up SNMP traps for the SVC enables automatic notification when the Metro
Mirror consistency group or relationships change state.
In this section, we show how to stop and restart the stand-alone Metro Mirror relationship and
the consistency group.
As shown in Figure 8-251, we select Enable write access to the secondary VDisk, if it is
consistent with the primary VDisk and click OK to stop the Metro Mirror relationship.
Figure 8-251 Enable write access to the secondary VDisk while stopping the relationship
As shown in Figure 8-252, the Metro Mirror relationship transits to the Idling state, when
stopped while enabling access to the secondary VDisk.
Figure 8-253 Selecting the Metro Mirror consistency group to be stopped
As shown in Figure 8-254, we click OK without specifying “Enable write access to the
secondary VDisks, if they are consistent with the primary VDisks”.
Figure 8-254 Stopping consistency group without enabling access to secondary VDisks
As shown in Figure 8-255, the consistency group enters the Consistent stopped state, when
stopped without enabling access to the secondary.
Afterwards, if we want to enable write access (write I/O) to the secondary VDisks, we can
reissue the Stop Copy Process and, this time, specify that we want to enable write access to
the secondary VDisks.
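Stopping while enabling write access to the secondary corresponds to the -access parameter; a sketch (verify the syntax against the svctask reference for your code level):

```
svctask stoprcrelationship -access MMREL3
svctask stoprcconsistgrp -access CG_W2K3_MM
```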
In Figure 8-256 on page 602, we select the Metro Mirror relationship, select Stop Copy
Process from the list and click Go.
As shown in Figure 8-257, we check Enable write access to the secondary VDisks, if they
are consistent with the primary VDisks and click OK.
When applying the “Enable write access to the secondary VDisk, if it is consistent with the
primary VDisk” option, the consistency group transits to the Idling state, as shown in
Figure 8-258.
Figure 8-258 Viewing Metro Mirror consistency group in the Idling state
If any updates have been performed on either the master or auxiliary VDisks in the Metro
Mirror relationship, consistency is compromised. In this situation, we must check the Force
option to start the copy process; otherwise, the command fails.
As shown in Figure 8-259 on page 603, we select the Metro Mirror relationship and Start
Copy Process from the list and click Go.
Figure 8-259 Starting a stand-alone Metro Mirror relationship in the Idling state
As shown in Figure 8-260, we check the Force option, because write I/O has been performed
while in the Idling state, and we select the copy direction by defining the master VDisk as the
primary and click OK.
The Metro Mirror relationship enters the Consistent copying state and, when the background
copy is complete, the relationship transits to the Consistent synchronized state, as shown in
Figure 8-261.
If any updates have been performed on either the master or auxiliary VDisk in any of the
Metro Mirror relationships in the consistency group, consistency is compromised. In this
situation, we must check the Force option to start the copy process; otherwise, the command
fails.
As shown in Figure 8-262, we select the Metro Mirror consistency group and Start Copy
Process from the list and click Go.
Figure 8-262 Starting the copy process for the consistency group
As shown in Figure 8-263, we check the Force option and set the copy direction by selecting
the primary as the master.
Figure 8-263 Specifying the options while starting the copy process in the consistency group
When the background copy completes, the Metro Mirror consistency group enters the
Consistent synchronized state, as shown in Figure 8-264.
8.14.17 Switching copy direction for a Metro Mirror consistency group
When a Metro Mirror consistency group is in the Consistent synchronized state, we can
change the copy direction for the Metro Mirror consistency group.
In Figure 8-265, we select the CG_W2K3_MM consistency group, click Switch Copy
Direction from the list, and click Go.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the VDisks that will change from primary to secondary, because all of the I/O will be
inhibited when the VDisks become secondary. Therefore, careful planning is required prior
to switching the copy direction.
Figure 8-265 Selecting the consistency group for which the copy direction is to change
In Figure 8-266, we see that the current primary VDisks are the master. So, to change the
copy direction for the Metro Mirror consistency group, we specify the auxiliary VDisks to
become the primary, and click OK.
Figure 8-266 Selecting primary VDisk, as auxiliary, to switch the copy direction
The copy direction is now switched, and we are returned to the Metro Mirror consistency
group list, where we see that the copy direction has switched, as shown in Figure 8-267 on
page 606.
In Figure 8-268, we show the new copy direction for individual relationships within that
consistency group.
Figure 8-268 Viewing Metro Mirror relationship after changing the copy direction
In Figure 8-269, we select the MMREL3 relationship, click Switch Copy Direction from the
list, and click Go.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the VDisk that transits from primary to secondary, because all of the I/O will be inhibited
to that VDisk when it becomes the secondary. Therefore, careful planning is required prior
to switching the copy direction for a Metro Mirror relationship.
Figure 8-269 Selecting the relationship whose copy direction needs to be changed
In Figure 8-270, we see that the current primary VDisk is the master, so to change the copy
direction for the stand-alone Metro Mirror relationship, we specify the auxiliary VDisk to
become the primary, and click OK.
Figure 8-270 Selecting the primary VDisk, as auxiliary, to switch copy direction
The copy direction is now switched. We are returned to the Metro Mirror relationship list,
where we see that the copy direction has been switched and that the auxiliary VDisk has
become the primary, as shown in Figure 8-271.
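For a stand-alone relationship, the CLI equivalent is the svctask switchrcrelationship command. A sketch, using the relationship name from this example (verify the syntax against the CLI guide for your code level):

```
# Make the auxiliary VDisk the primary of the stand-alone relationship
svctask switchrcrelationship -primary aux MMREL3

# Verify the new copy direction
svcinfo lsrcrelationship MMREL3
```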
Note: This example covers intercluster Global Mirror operations only. If you want to set up
intracluster Global Mirror operations, we highlight the parts of the following procedure that
you do not need to perform.
Starting with SVC 5.1, a cluster can participate in partnerships with multiple clusters. We show
this capability in 8.14.1, “Cluster partnership” on page 582, but in the following scenario, we set up an
intercluster Global Mirror relationship between the ITSO-CLS1 SVC cluster at primary site
and the ITSO-CLS2 SVC cluster at the secondary site. Table 8-4 on page 608 shows the
details of the VDisks.
To set up the Global Mirror, you must perform the following steps:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters:
Bandwidth 10 MBps
2. Create a Global Mirror consistency group:
Name CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
– Master GM_DB_Pri
– Auxiliary GM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL1
– Consistency group CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
– Master GM_DBLog_Pri
– Auxiliary GM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL2
– Consistency group CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
– Master GM_App_Pri
– Auxiliary GM_App_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL3
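The steps above can also be outlined as a CLI sequence, using the names from the list (this is a sketch, not a definitive procedure; intracluster users skip the partnership step):

```
# 1. Partnership (run the equivalent command on both SVC clusters)
svctask mkpartnership -bandwidth 10 ITSO-CLS2

# 2. Global Mirror consistency group
svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K3_GM

# 3. - 5. Global Mirror relationships (note the -global flag; GMREL3 is
# created as a stand-alone relationship, without a consistency group)
svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec \
  -cluster ITSO-CLS2 -consistgrp CG_W2K3_GM -global -name GMREL1
svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec \
  -cluster ITSO-CLS2 -consistgrp CG_W2K3_GM -global -name GMREL2
svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec \
  -cluster ITSO-CLS2 -global -name GMREL3
```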
Note: If you are creating an intracluster Global Mirror, do not perform the next step;
instead, go to 8.15.4, “Creating a Global Mirror consistency group” on page 614.
To create a Global Mirror partnership between the SVC clusters using the GUI, perform these
steps:
1. We launch the SVC GUI for ITSO-CLS1. Then, we select Manage Copy Services and
click Metro & Global Mirror Cluster Partnerships, as shown in Figure 8-273.
2. Figure 8-274 on page 610 shows the cluster partnerships that are defined for this cluster.
Notice that we already have another partnership running, but not yet one with the cluster
that we want as our Global Mirror partner. Figure 8-274 on page 610 also displays a
warning stating that, for any type of copy relationship between VDisks across two
separate clusters, a partnership must exist between those clusters. Click Go to continue
creating your partnership.
3. Figure 8-275 lists the available SVC cluster candidates. In our case, we select ITSO-CLS4
and specify the available bandwidth for the background copy; we enter 10 MBps and, then,
click OK.
Figure 8-275 Selecting SVC cluster partner and specifying bandwidth for background copy
In the resulting window, which is shown in Figure 8-276 on page 611, the newly created
Global Mirror cluster partnership is shown as Partially Configured.
Figure 8-276 Viewing the newly created Global Mirror partnership
To fully configure the Global Mirror cluster partnership, we must perform the same steps
on ITSO-CLS4 that we performed on ITSO-CLS1. For simplicity, in the following figures,
only the last two windows are shown.
4. Launching the SVC GUI for ITSO-CLS4, we select ITSO-CLS1 for the Global Mirror
cluster partnership, specify the available bandwidth for the background copy, which again
is 10 MBps, and then, click OK, as shown in Figure 8-277.
Figure 8-277 Selecting SVC cluster partner and specifying bandwidth for background copy
5. Now that we have defined both sides of the SVC cluster partnership, the window that is
shown in Figure 8-278 on page 612 confirms that our Global Mirror cluster partnership is
in the Fully Configured state.
Note: Link tolerance, intercluster delay simulation, and intracluster delay simulation are
settings that are introduced with the Global Mirror feature.
The link tolerance value can be set between 60 and 86,400 seconds in increments of 10
seconds. The default value for the link tolerance is 300 seconds.
The delay simulation settings specify the amount of time by which write I/O, when mirrored
from a primary VDisk to a secondary VDisk, is delayed. A value from 0 to 100 milliseconds in
1 millisecond increments can be set. A value of zero disables this feature.
To check the current settings for the delay simulation, refer to “Changing link tolerance and
delay simulation values for Global Mirror” on page 613.
Changing link tolerance and delay simulation values for Global Mirror
Here, we show the modification of the delay simulations and the Global Mirror link tolerance
values. We also show the changed values for the Global Mirror link tolerance and delay
simulation parameters.
Launching the SVC GUI for ITSO-CLS1, we select Global Mirror Cluster Partnership to
view and to modify the parameters, as shown in Figure 8-279 and Figure 8-280.
Figure 8-279 View and modify Global Mirror link tolerance and delay simulation parameters
Figure 8-280 Set Global Mirror link tolerance and delay simulation parameters
After performing the steps, the GUI returns to the Global Mirror Partnership window and lists
the new parameter settings, as shown in Figure 8-281 on page 614.
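These values can also be changed from the CLI with the svctask chcluster command. A sketch (the 20 millisecond delay values are arbitrary examples, not values from this scenario):

```
# Set the Global Mirror link tolerance (seconds; the default is 300)
svctask chcluster -gmlinktolerance 300

# Set the delay simulation values (milliseconds; 0 disables the feature)
svctask chcluster -gminterdelaysimulation 20 -gmintradelaysimulation 20
```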
2. To start the creation process, we select Create Consistency Group from the list and click
Go, as shown in Figure 8-283 on page 615. We see that, in our list, we already have one
Metro Mirror consistency group that was created between ITSO-CLS1 and ITSO-CLS2,
but now, we are creating a new Global Mirror consistency group.
Figure 8-283 Creating a consistency group
3. We are presented with a wizard that helps us to create the Global Mirror consistency
group. First, the wizard introduces the steps that are involved in the creation of the Global
Mirror consistency group, as shown in Figure 8-284. Click Next to proceed.
4. As shown in Figure 8-285, we specify the consistency group name and whether it will be
used for intercluster or intracluster relationships. In our scenario, we select Create an
inter-cluster consistency group and, then, we need to select our remote cluster partner.
In Figure 8-285, we select ITSO-CLS4, because it is our Global Mirror partner, and click
Next.
6. Verify the settings for the consistency group, and click Finish to create the Global Mirror
consistency group, as shown in Figure 8-287.
Figure 8-287 Verifying the settings for the Global Mirror consistency group
When the Global Mirror consistency group is created, we are returned to the Viewing Metro &
Global Mirror Consistency Groups window. It shows our newly created Global Mirror
consistency group, as shown in Figure 8-288.
8.15.5 Creating Global Mirror relationships for GM_DB_Pri and
GM_DBLog_Pri
To create the Global Mirror Relationships for GM_DB_Pri and GM_DBLog_Pri, perform these
steps:
1. We select Manage Copy Services and click Global Mirror Cluster Relationships from
the Welcome window.
2. To start the creation process, we select Create a Relationship from the list and click Go,
as shown in Figure 8-289.
3. We are presented with a wizard that helps us to create Global Mirror relationships. First,
the wizard introduces the steps that are involved in the creation of the Global Mirror
relationship, as shown in Figure 8-290. Click Next to proceed.
4. As shown in Figure 8-291 on page 618, we name our first Global Mirror relationship
GMREL1, click Global Mirror Relationship, and select the relationship for the cluster. In this
case, it is an intercluster relationship toward ITSO-CLS4, as shown in Figure 8-272 on
page 608.
5. The next step enables us to select a master VDisk. Because this list can be large, the
Filtering Master VDisk Candidates window opens, which enables us to define a filter to
reduce the list of eligible VDisks.
In Figure 8-292, we use the filter GM* (you can use the asterisk character (*) to list all
VDisks) and click Next.
6. The next step requires us to select an auxiliary VDisk. The Global Mirror relationship
wizard automatically filters this list so that only eligible VDisks are shown. Eligible VDisks
are those VDisks that have the same size as the master VDisk and that are not already
part of a Global Mirror relationship.
7. As shown in Figure 8-294, we select GM_DB_Sec as the auxiliary VDisk for this
relationship, and we click Next to proceed.
8. As shown in Figure 8-295, select the relationship to be part of the consistency group that
we have created, and click Next to proceed.
9. Finally, in Figure 8-296 on page 620, we verify the Global Mirror Relationship attributes
and click Finish to create it.
After the successful creation of the relationship, the GUI returns to the Viewing Metro &
Global Mirror Relationships window, as shown in Figure 8-297. This window lists the newly
created relationship.
Using the same process, create the second Global Mirror relationship, GMREL2.
Figure 8-297 shows both relationships.
2. Next, we are presented with the wizard that shows the steps that are involved in the
process of creating a Global Mirror relationship, as shown in Figure 8-299 on page 621.
Click Next to proceed.
Figure 8-299 Introduction to the Global Mirror relationship creation wizard
3. In Figure 8-300, we name the Global Mirror relationship GMREL3, specify that it is an
intercluster relationship, and click Next.
Figure 8-300 Naming the Global Mirror relationship and selecting the type of cluster relationship
4. As shown in Figure 8-301, we are prompted for a filter prior to presenting the master VDisk
candidates. We use the asterisk character (*) to list all of the candidates and click Next.
6. As shown in Figure 8-303, we select GM_App_Sec as the auxiliary VDisk for the
relationship and click Next to proceed.
As shown in Figure 8-304 on page 623, we did not select a consistency group, because
we are creating a stand-alone Global Mirror relationship.
Figure 8-304 Selecting options for the Global Mirror relationship
7. We also specify that the master and auxiliary VDisks are already synchronized; for the
purpose of this example, we can assume that they are pristine (Figure 8-305).
Figure 8-305 Selecting the synchronized option for the Global Mirror relationship
Note: To add a Global Mirror relationship to a consistency group, the Global Mirror
relationship must be in the same state as the consistency group.
Even if we intend to make the GMREL3 Global Mirror relationship part of the
CG_W2K3_GM consistency group, we are not offered the option, as shown in
Figure 8-305, because the states differ. The state of the GMREL3 relationship is
Consistent Stopped, because we selected the synchronized option. The state of the
CG_W2K3_GM consistency group is currently Inconsistent Stopped.
8. Finally, Figure 8-306 on page 624 prompts you to verify the relationship information. We
click Finish to create this new relationship.
After the successful creation, we are returned to the Viewing Metro & Global Mirror
Relationships window. Figure 8-307 now shows all of our defined Global Mirror relationships.
When performing Global Mirror, the goal is to reach a consistent and synchronized state that
can provide redundancy in case a hardware failure occurs that affects the SAN at the
production site.
In this section, we show how to start the stand-alone Global Mirror relationship and the
consistency group.
Figure 8-308 Starting the stand-alone Global Mirror relationship
2. In Figure 8-309, we do not need to change the parameters Forced start, Mark as clean, or
Copy Direction, because we are invoking this Global Mirror relationship for the first time
(and we have already defined the relationship as being synchronized in Figure 8-305 on
page 623). We click OK to start the stand-alone Global Mirror relationship GMREL3.
3. Because the Global Mirror relationship was in the Consistent Stopped state and no
updates have been made on the primary VDisk, the relationship quickly enters the
Consistent Synchronized state, as shown in Figure 8-310.
3. As shown in Figure 8-312, we click OK to start the copy process. We cannot select the
options Forced start, Mark as clean, or Copy Direction, because we are invoking this
Global Mirror relationship for the first time.
4. We are returned to the Viewing Metro & Global Mirror Consistency Groups window and
the CG_W2K3_GM consistency group has changed to the Inconsistent copying state.
Because the consistency group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy has completed for all of the
relationships in the consistency group. Upon completion of the background copy for all of
the relationships in the consistency group, it enters the Consistent Synchronized state, as
shown in Figure 8-313.
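The start operations shown above have CLI equivalents. A sketch, using the names from this scenario:

```
# Start the stand-alone Global Mirror relationship
svctask startrcrelationship GMREL3

# Start the Global Mirror consistency group
svctask startrcconsistgrp CG_W2K3_GM
```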
Figure 8-314 Monitoring background copy process for Global Mirror relationships
Figure 8-315 Monitoring background copy process for Global Mirror relationships
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror consistency groups or relationships change state.
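You can also monitor the background copy from the CLI; the progress field reported by svcinfo lsrcrelationship shows the background copy percentage. A sketch:

```
# Show the state and background copy progress for one relationship
svcinfo lsrcrelationship GMREL1

# Show the state of the consistency group
svcinfo lsrcconsistgrp CG_W2K3_GM
```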
In this section, we show how to stop and restart the stand-alone Global Mirror relationships
and the consistency group.
2. As shown in Figure 8-317, we select Enable write access to the secondary VDisk, if it
is consistent with the primary VDisk and click OK to stop the Global Mirror relationship.
Figure 8-317 Enable access to the secondary VDisk while stopping the relationship
3. As shown in Figure 8-318, the Global Mirror relationship transits to the Idling state when
stopped, while enabling write access to the secondary VDisk.
Figure 8-319 Selecting the Global Mirror consistency group to be stopped
2. As shown in Figure 8-320, we click OK without specifying “Enable write access to the
secondary VDisks, if they are consistent with the primary VDisks”.
Figure 8-320 Stopping the consistency group without enabling access to the secondary VDisks
The consistency group enters the Consistent stopped state when stopped.
Afterward, if we want to enable access (write I/O) to the secondary VDisks, we can reissue
the Stop Copy Process action and specify that access to the secondary VDisks is enabled.
3. In Figure 8-321, we select the Global Mirror relationship, select Stop Copy Process from
the list, and click Go.
4. As shown in Figure 8-322 on page 630, we select Enable write access to the secondary
VDisks, if they are consistent with the primary VDisks and click OK.
When applying the Enable write access to the secondary VDisks, if they are consistent with
the primary VDisks option, the consistency group transits to the Idling state, as shown in
Figure 8-323.
Figure 8-323 Viewing the Global Mirror consistency group after write access to the secondary VDisk
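The stop operations shown above map to the -access flag on the CLI stop commands. A sketch, using the names from this scenario:

```
# Stop a stand-alone relationship and enable write access to the secondary
svctask stoprcrelationship -access GMREL3

# Stop the consistency group without enabling access to the secondary VDisks
svctask stoprcconsistgrp CG_W2K3_GM

# Reissue the stop with -access to move the stopped group to the Idling state
svctask stoprcconsistgrp -access CG_W2K3_GM
```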
If any updates have been performed on either the master or the auxiliary VDisk in any of the
Global Mirror relationships in the consistency group, consistency is compromised. In this
situation, we must check Force to start the copy process, or the command will fail.
Perform these steps to restart a Global Mirror relationship in the Idling state:
1. As shown in Figure 8-324, we select the Global Mirror relationship, click Start Copy
Process from the list, and click Go.
Figure 8-324 Starting stand-alone Global Mirror relationship in the Idling state
2. As shown in Figure 8-325 on page 631, we check Force, because write I/O has been
performed while in the Idling state. We select the copy direction by defining the master
VDisk as the primary and click OK.
Figure 8-325 Restarting the copy process
The Global Mirror relationship enters the Consistent copying state. When the background
copy is complete, the relationship transits to the Consistent synchronized state, as shown
in Figure 8-326.
If any updates have been performed on either the master or the auxiliary VDisk in any of the
Global Mirror relationships in the consistency group, consistency is compromised. In this
situation, we must check Force to start the copy process, or the command will fail.
Figure 8-327 Starting the copy process for Global Mirror consistency group
Figure 8-328 Restarting the copy process for the consistency group
3. When the background copy completes, the Global Mirror consistency group enters the
Consistent synchronized state, as shown in Figure 8-329.
The individual relationships within that consistency group also are shown in Figure 8-330.
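Restarting from the Idling state maps to the -force and -primary options of the CLI start commands. A sketch, assuming (as in this example) that write I/O occurred while idling and that the master VDisks are to become the primaries again:

```
# Restart the stand-alone relationship, forcing a restart because
# consistency was compromised while idling
svctask startrcrelationship -force -primary master GMREL3

# Restart the consistency group in the same way
svctask startrcconsistgrp -force -primary master CG_W2K3_GM
```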
Important: When the copy direction is switched, it is crucial that there is no outstanding
I/O to the VDisk that transits from primary to secondary, because all I/O will be inhibited
to that VDisk when it becomes the secondary. Therefore, careful planning is required
prior to switching the copy direction for a Global Mirror relationship.
Figure 8-331 Selecting the relationship for which the copy direction is to be changed
2. In Figure 8-332, we see that the current primary VDisk is the master, so to change the
copy direction for the stand-alone Global Mirror relationship, we specify the auxiliary VDisk
to become the primary, and click OK.
Figure 8-332 Selecting the primary VDisk as auxiliary to switch the copy direction
3. The copy direction is now switched, and we are returned to the Viewing Global Mirror
Relationships window, where we see that the copy direction has been switched, as shown
in Figure 8-333.
Figure 8-333 Viewing Global Mirror relationship after changing the copy direction
Note: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the VDisks that transit from primary to secondary, because all I/O will be inhibited
when they become the secondary. Therefore, careful planning is required prior to
switching the copy direction.
Figure 8-334 Selecting the consistency group for which the copy direction is to be changed
2. In Figure 8-335, we see that currently the primary VDisks are also the master. So, to
change the copy direction for the Global Mirror consistency group, we specify the auxiliary
VDisks to become the primary, and click OK.
Figure 8-335 Selecting the primary VDisk as auxiliary to switch the copy direction
The copy direction is now switched and we are returned to the Viewing Global Mirror
Consistency Group window, where we see that the copy direction has been switched.
Figure 8-336 on page 635 shows that the auxiliary is now the primary.
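As with Metro Mirror, the CLI equivalents are the svctask switchrcrelationship and svctask switchrcconsistgrp commands. A sketch with the names from this Global Mirror scenario:

```
# Switch the stand-alone Global Mirror relationship
svctask switchrcrelationship -primary aux GMREL3

# Switch the Global Mirror consistency group
svctask switchrcconsistgrp -primary aux CG_W2K3_GM
```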
Figure 8-336 Viewing Global Mirror consistency groups after changing the copy direction
Figure 8-337 shows the new copy direction for individual relationships within that
consistency group.
Figure 8-337 Viewing Global Mirror Relationships, after changing copy direction for consistency group
Because everything has completed as we expected, we are now finished with Global
Mirror.
Note: You are prompted for a cluster user ID and password for several of the following
tasks.
Important: To use this feature, the System Storage Productivity Center/Master Console
must be able to access the Internet.
If the System Storage Productivity Center cannot access the Internet because of
restrictions, such as a local firewall, you will see the message “The update server cannot
be reached at this time.” Use the Web link that is provided in the message for the latest
software information.
8.17.3 Precautions before upgrade
In this section, we describe precautions that you must take before attempting an upgrade.
Important: Before attempting any SVC code update, read and understand the SVC
concurrent compatibility and code cross-reference matrix. Go to the following site and click
the link for Latest SAN Volume Controller code:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707
During the upgrade, each node in your cluster will be automatically shut down and restarted
by the upgrade process. Because each node in an I/O Group provides an alternate path to
VDisks, use Subsystem Device Driver (SDD) to make sure that all I/O paths between all
hosts and SANs are working.
If you have not performed this check, certain hosts might lose connectivity to their VDisks and
experience I/O errors when the SVC node that provides that access is shut down during the
upgrade process. You can check the I/O paths by using datapath query commands, as
shown in Example 8-1. You do not need to perform this check for hosts that have no active
I/O operations to the SAN during the software upgrade.
Example 8-1 Using datapath query commands to check that all paths are online
C:\Program Files\IBM\SDDDSM>datapath query adapter
Active Adapters :2
Total Devices : 2
It is well worth double-checking that your uninterruptible power supply unit power
configuration is also set up correctly (even if your cluster is running without problems).
Specifically, double-check these areas:
Ensure that your uninterruptible power supply units are all getting their power from an
external source, and that they are not daisy-chained. Make sure that each uninterruptible
power supply unit is not supplying power to another node’s uninterruptible power supply
unit.
Ensure that the power cable, and the serial cable coming from the back of each node,
goes back to the same uninterruptible power supply unit. If the cables are crossed and are
going back to separate uninterruptible power supply units, during the upgrade, as one
node is shut down, another node might also be mistakenly shut down.
You can use the svcupgradetest utility to check for known issues that might cause problems
during a SAN Volume Controller software upgrade. You can use it to check for potential
problems when upgrading from V4.1.0.0, or any later release, to the latest available level.
You can run the utility multiple times on the same cluster to perform a readiness check in
preparation for a software upgrade. We strongly recommend running this utility for a final time
immediately prior to applying the SVC upgrade, making sure that there have not been any
new releases of the utility since it was originally downloaded.
After you install the utility, you can obtain the version information for this utility by running the
svcupgradetest -h command.
The installation and usage of this utility are nondisruptive and do not require restarting any
SVC nodes, so there is no interruption to host I/O. The utility is only installed on the current
configuration node.
System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain information about the latest software level at this
Web site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code
This utility is intended to supplement rather than duplicate the existing tests that are carried
out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log).
Prerequisites
You can install this utility only on clusters running SVC V4.1.0.0 or later.
Installation Instructions
To use the upgrade test utility, follow these steps:
1. Download the latest version of the upgrade test utility
(IBM2145_INSTALL_svcupgradetest_V.R) using the download link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
2. You can install the utility package by using the standard SVC Console (GUI) or
command-line interface (CLI) software upgrade procedures that are used to install any
new software onto the cluster.
3. An example CLI command to install the package, after it has been uploaded to the cluster,
is svcservicetask applysoftware -file IBM2145_INSTALL_svcupgradetest_n.nn.
4. Run the upgrade test utility by logging on to the SVC CLI and running svcupgradetest -v
<V.R.M.F>, where V.R.M.F is the version number of the SVC release being installed. For
example, if you are upgrading to SVC V5.1.0.0, the command is svcupgradetest -v 5.1.0.0.
5. The output from the command either states that no problems have been found, or directs
you to details about any known issues that have been discovered on this cluster. If no
problems are found, the output resembles this message:
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.
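The CLI portion of these steps can be summarized as follows (the n.nn placeholder is from the instructions above; substitute the utility version that you downloaded):

```
# Install the utility after uploading the package to the cluster
svcservicetask applysoftware -file IBM2145_INSTALL_svcupgradetest_n.nn

# Check the utility version, then run the readiness check for the target release
svcupgradetest -h
svcupgradetest -v 5.1.0.0
```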
Note: You can ignore the error message “No such file or directory”.
5. From the SVC Welcome window, click Service and Maintenance, and then, click the
Upgrade Software link.
6. In the Software Upgrade window that is shown in Figure 8-341 on page 641, you can
either upload a new software upgrade file or list the upgrade files. Click Upload to upload
the latest SVC cluster code.
Figure 8-341 Software Upgrade window
7. In the Software Upgrade (file upload) window (Figure 8-342), type or browse to the
directory on your management workstation (for example, Master Console) where you
stored the latest code level, and click Upload.
8. The File Upload window (Figure 8-343) is displayed if the file is uploaded. Click Continue.
9. The Select Upgrade File window (Figure 8-344 on page 642) lists the available software
packages. Make sure that the package that you want to apply is selected. Click Apply.
10.In the Confirm Upgrade File window (Figure 8-345), click Confirm.
11.After this confirmation, the SVC will check whether there are any outstanding errors. If
there are no errors, click Continue, as shown in Figure 8-346, to proceed to the next
upgrade step. Otherwise, the Run Maintenance button is displayed, which is used to
check the errors. For more information about how to use the maintenance procedures, see
8.17.6, “Running maintenance procedures” on page 645.
12.The Check Node Status window shows the in-use nodes with their current status
displayed, as shown in Figure 8-347 on page 643. Click Continue to proceed.
Figure 8-347 Check Node Status window
13.The Start Upgrade window opens. Click Start Software Upgrade to start the software
upgrade, as shown in Figure 8-348.
15.During the upgrade process, you can only issue informational commands. All task
commands, such as the creation, modification, mapping, or deletion of a VDisk (as shown
in Figure 8-350), are denied, in both the GUI and the CLI.
16.The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted in turn.
Although unlikely, if the concurrent code load (CCL) fails, for example, if one node fails to
accept the new code level, the update on that one node will be backed out, and the node
will revert to the original code level.
From 4.1.0 onward, the update will simply wait for user intervention. For example, if there
are two nodes (A and B) in an I/O Group, and node A has been upgraded successfully,
and then, node B experiences a hardware failure, the upgrade will end with an I/O Group
that has a single node at the higher code level. If the hardware failure is repaired on node
B, the CCL will then complete the code upgrade process.
Tip: Be patient. After the software update is applied, the first SVC node in a cluster will
update and install the new SVC code version shortly afterward. If multiple I/O Groups (up
to four I/O Groups are possible) exist in an SVC cluster, the second node of the second I/O
Group will load the new SVC code and restart 10 minutes after the first node. A 30
minute delay between the update of the first node and the second node in an I/O Group
ensures that all paths, from a multipathing point of view, are available again.
An SVC cluster update with one I/O Group takes approximately one hour.
17.If you run into an error, go to the Analyze Error Log window. Search for Software Install
completed. Select Sort by date with the newest first, and then, click Perform so that the
most recent entries, including the software installation entry, are listed near the top. For
more information about working with the Analyze Error Log window, see 8.17.10,
“Analyzing the error log” on page 655.
It might also be worthwhile to capture information for IBM Support to help you diagnose
what went wrong.
You have now completed the tasks that are required to upgrade the SVC software. Click the X
icon in the upper-right corner of the display area to close the Software Upgrade window. Do
not close the browser by mistake.
3. This action generates a new error log file in the /dumps/elogs/ directory (Figure 8-352 on
page 646). Also, we can see the list of the errors, as shown in Figure 8-352 on page 646.
4. Click the error number in the Error Code column in Figure 8-352 to see the explanation for
this error, as shown in Figure 8-353.
5. To perform problem determination, click Continue. The details for the error appear and
might provide options to diagnose and repair the problem. In this case, it asks you to
check an external configuration and, then, to click Continue (Figure 8-354 on page 647).
Figure 8-354 Maintenance procedures: Fixing error
6. The SVC maintenance procedure has completed, and the error is fixed, as shown in
Figure 8-355.
7. The discovery reported no new errors, so the entry in the error log is now marked as fixed
(as shown in Figure 8-356). Click OK.
4. The next window now displays confirmation that it has updated the settings, as shown in
Figure 8-359 on page 649.
Figure 8-359 Error notification settings confirmation
5. The next window now displays the current status, as shown in Figure 8-360.
6. You can now click the X icon in the upper-right corner of the Set SNMP Event Notification
window to close the window.
Figure 8-361 on page 650, Figure 8-362 on page 650, and Figure 8-363 on page 651 show
the sequence of windows to use to define a syslog server.
Figure 8-363 Syslog server confirmation
The syslog messages can be sent in either compact message format or full message format.
To run the e-mail service for the first time, the Web pages guide us through the required
steps:
– Set the e-mail server and contact details
– Test the e-mail service
Figure 8-364 on page 652 shows the set e-mail notification window.
Figure 8-366 shows the confirmation window for the e-mail contact details.
Figure 8-367 shows how to configure the Simple Mail Transfer Protocol (SMTP) server in the
SVC cluster.
Figure 8-369 on page 654 shows how to define the support e-mail to which SVC notifications
will be sent.
Figure 8-372 on page 655 shows how to send a test e-mail to all users.
Figure 8-372 Sending a test e-mail to all users
To display the error log for analysis, perform the following steps:
1. From the SVC Welcome window, click Service and Maintenance and, then, click Analyze
Error Log.
2. From the Error Log Analysis window (Figure 8-374 on page 656), you can choose either
Process or Clear Log:
a. Select the appropriate radio buttons and click Process to display the log for analysis.
The Analysis Options and Display Options allow you to filter the results of your log
inquiry to reduce the output.
b. You can display the whole log, or you can filter the log so that only errors, events, or
unfixed errors are displayed. You can also sort the results by selecting the appropriate
display options. For example, you can sort the errors by error priority (lowest number =
most serious error) or by date. If you sort by date, you can specify whether the newest
or oldest error displays at the top of the table. You can also specify the number of
entries that you want to display on each page of the table.
Figure 8-375 on page 657 shows an example of the error logs listed.
Figure 8-375 Analyzing Error Log: Process
c. Click an underlined sequence number to see the detailed log of this error (Figure 8-376
on page 658).
d. You can optionally display detailed sense code data by clicking Sense Expert, as
shown in Figure 8-377 on page 659. Click Return to go back to the Detailed Error
Analysis window.
Figure 8-377 Decoding Sense Data window
e. If the log entry is an error, you can optionally mark the error as fixed; doing so does
not trigger any further checks or processes. We recommend that you perform this action
as a maintenance task (see 8.17.6, “Running maintenance procedures” on page 645).
f. Click Clear Log at the bottom of the Error Log Analysis window (see Figure 8-374 on
page 656) to clear the log. If the error log contains unfixed errors, a warning message
is displayed when you click Clear Log.
3. You can now click the X icon in the upper-right corner of the Analyze Error Log window.
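The log filtering and sorting options described above (display the whole log, or only errors, events, or unfixed errors; sort by priority, where the lowest number is the most serious, or by date) can be sketched in Python. This is an illustrative model only; the record fields are assumptions, not the SVC's actual log format.

```python
# Illustrative sketch of the error-log filtering and sorting options.
# The record fields are assumptions, not the SVC's actual format.
from datetime import datetime

def analyze_log(entries, show="all", sort_by="priority", newest_first=True):
    """Filter entries ('all', 'errors', 'events', 'unfixed') and sort them.

    Lower priority numbers are more serious, so an ascending sort puts
    the most serious errors first.
    """
    if show == "errors":
        entries = [e for e in entries if e["type"] == "error"]
    elif show == "events":
        entries = [e for e in entries if e["type"] == "event"]
    elif show == "unfixed":
        entries = [e for e in entries if e["type"] == "error" and not e["fixed"]]

    if sort_by == "priority":
        return sorted(entries, key=lambda e: e["priority"])
    return sorted(entries, key=lambda e: e["date"], reverse=newest_first)

log = [
    {"seq": 1, "type": "error", "priority": 50, "fixed": False,
     "date": datetime(2010, 3, 1)},
    {"seq": 2, "type": "event", "priority": 99, "fixed": True,
     "date": datetime(2010, 3, 2)},
    {"seq": 3, "type": "error", "priority": 10, "fixed": True,
     "date": datetime(2010, 3, 3)},
]

# Only the unfixed errors, most serious first
print([e["seq"] for e in analyze_log(log, show="unfixed")])  # [1]
```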
2. Now, you can choose between Capacity Licensing or Physical Disk Licensing.
Figure 8-379 shows the Physical Disk Licensing Settings window.
Figure 8-380 on page 661 shows the Capacity Licensing Settings window.
Figure 8-380 Capacity License Setting window
3. Consult your license before you make changes in the License Settings window
(Figure 8-381). If you have purchased additional features (for example, FlashCopy or
Global Mirror) or if you have increased the capacity of your license, make the appropriate
changes. Then, click Update License Settings.
4. You now see a license confirmation window, as shown in Figure 8-382 on page 662.
Review this window and ensure that you are in compliance. If you are in compliance, click
I Agree to make the requested changes take effect.
5. You return to the License Settings window to review your changes (Figure 8-383). Make
sure that your changes are reflected.
6. You can now click the X icon in the upper-right corner of the License Settings window.
Figure 8-384 Feature log
3. You can now click the X icon in the upper-right corner of the View License Settings Log
window.
Note: By default, the dump and log information that is displayed is available from the
configuration node. In addition to these files, each node in the SVC cluster keeps a
local software dump file. Occasionally, other dumps are stored on them. Click Check
Other Nodes at the bottom of the List Dumps window (Figure 8-386) to see which
dumps or logs exist on other nodes in your cluster.
3. Figure 8-387 shows the list of dumps from the partner node. You can see a list of the
dumps by clicking one of the Dump Types.
4. To copy a file from this partner node to the config node, click the dump type and then click
the file that you want to copy, as shown in Figure 8-388 on page 665.
Figure 8-388 Copy dump files
5. You will see a confirmation window that the dumps are being retrieved. You can either click
Continue to continue working with the other node or click Cancel to go back to the original
node (Figure 8-389).
6. After all of the necessary files are copied to the SVC config node, click Cancel to finish the
copy operation, and click Cancel again to return to the SVC config node. Now, for
example, if you click the Error Logs link, you see information similar to that shown in
Figure 8-390 on page 666.
7. From this window, you can perform either of the following tasks:
– Click any of the available log file links (indicated by the underlined text) to display the
log in complete detail.
– Delete one or all of the dump or log files. To delete all, click Delete All. To delete
several error log files, select the check boxes to the right of the file, and click Delete.
8. You can now click the X icon in the upper-right corner of the List Dumps window.
In the event that half of the nodes in a cluster are missing for any reason, the surviving half
cannot simply assume that the missing nodes are “dead”. It might be that the cluster state
information is not being successfully passed between the nodes (because of a network
failure, for example). For this reason, if half of the cluster disappears from the view of the
other half, each surviving half attempts to lock the first quorum disk (index 0). If quorum
disk index 0 is not available to any node, the next disk (index 1) becomes the quorum, and
so on.
The half of the cluster that is successful in locking the quorum disk becomes the exclusive
processor of I/O activity. It attempts to reform the cluster with any nodes that it can still see.
The other half will stop processing I/O, which provides a tie-breaker solution and ensures that
both halves of the cluster do not continue to operate.
If both halves of the cluster can see the quorum disk, they use the quorum disk to
communicate with each other, and they decide which half becomes the exclusive processor
of I/O activity.
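The tie-break sequence can be sketched as follows. On a real cluster, which half wins the race is timing-dependent; this illustrative model simply lets one half try each quorum disk index first.

```python
# Sketch of the quorum tie-break described above: when a cluster splits,
# each surviving half tries to lock quorum disk index 0, then 1, then 2.
# The half that wins the lock keeps processing I/O; the other half stops.

def tie_break(half_a_visible, half_b_visible, quorum_disks=(0, 1, 2)):
    """Return ('A' or 'B', winning_disk) for the half that locks first.

    half_*_visible: set of quorum disk indexes each half can reach.
    Half A is assumed to race first on each index, purely for illustration.
    """
    for disk in quorum_disks:
        if disk in half_a_visible:
            return "A", disk
        if disk in half_b_visible:
            return "B", disk
    raise RuntimeError("no quorum disk reachable; cluster cannot tie-break")

# Disk 0 is unreachable by both halves, so disk 1 becomes the quorum.
print(tie_break({1, 2}, {1, 2}))  # ('A', 1)
```

Note how the index-0 disk is only skipped when no half can reach it, matching the fallback order described above.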
If, for any reason, you want to set your own quorum disks (for example, if you have installed
additional back-end storage and you want to move one or two quorum disks onto this newly
installed back-end storage subsystem), complete the following tasks:
1. From the Welcome window, select Work with Managed Disks, then select Quorum
Disks, which takes you to the window that is shown in Figure 8-391.
2. We can now select our quorum disks and identify which disk will be the active quorum
disk.
3. To change the active quorum disk, as shown in Figure 8-392, we start by selecting another
MDisk to be the active quorum disk. We click Set Active Quorum Disk and click Go.
4. We confirm that we want to change the active quorum disk by clicking Set Active Quorum
Disk, as shown in Figure 8-393.
5. After we have changed the active quorum disk, we can see that our previous active
quorum disk is in the state of initializing, as shown in Figure 8-394 on page 668.
Quorum disks are only created if at least one MDisk is in managed mode (that is, it was
formatted by the SVC with extents in it). Otherwise, a 1330 cluster error message is displayed
in the SVC front window. You can correct it only when you place MDisks in managed mode.
This section details the tasks that you can perform to save the configuration data from an
SVC configuration node and restore it. The following configuration information is backed up:
Storage subsystem
Hosts
Managed disks (MDisks)
MDGs
SVC nodes
VDisks
VDisk-to-host mappings
FlashCopy mappings
FlashCopy consistency groups
Mirror relationships
Mirror consistency groups
Backing up the cluster configuration enables you to restore your cluster configuration in the
event that it is lost. But only the data that describes the cluster configuration is backed up. To
back up your application data, you need to use the appropriate backup methods.
To begin the restore process, consult IBM Support to determine the cause or reason why you
cannot access your original configuration data.
Important: We recommend that you make a backup of the SVC configuration data after
each major change in the environment, such as defining or changing a VDisk,
VDisk-to-host mappings, and so on.
The svc.config.backup.xml file is stored in the /tmp folder on the configuration node and
must be copied to an external and secure place for backup purposes.
Important: We strongly recommend that you change the default names of all objects to
non-default names. For objects with a default name, a warning is issued, and the object is
restored with its original name and “_r” appended to it.
3. After the configuration backup is successful, you see messages similar to the messages
that are shown in Figure 8-397 on page 670. Make sure that you read, understand, act
upon, and document the warning messages, because they can affect the restore
procedure.
4. You can now click the X icon in the upper-right corner of the Backing up a Cluster
Configuration window.
Change the default names: To avoid getting the CMMVC messages that are shown in
Figure 8-397, you need to replace all the default names, for example, mdisk1, vdisk1, and
so on.
Figure 8-398 on page 671 shows saving a software dump on the Software Dumps window.
Figure 8-398 Software Dumps list with options
After you have saved your configuration file, it will be presented to you as a .xml file.
Figure 8-399 shows an SVC backup configuration file example.
Carry out the restore procedure only under the direction of IBM Level 3 support.
To delete the SVC configuration backup files, perform the following steps:
1. From the SVC Welcome window, click Service and Maintenance and, then, Delete
Backup.
2. In the Deleting a Cluster Configuration window (Figure 8-400), click OK to confirm the
deletion of the C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder
(where SVCclustername is the SVC cluster name on which you are working) on the SVC
Master Console and all its contents.
3. Click Delete to confirm the deletion of the configuration backup data. See Figure 8-401.
8.18.5 Fabrics
From the Fabrics Link in the Service and Maintenance window, you can view the fabrics from
the SVC’s point of view. This function might be useful to debug a SAN problem.
Figure 8-402 Viewing Fabrics example
We have completed our discussion of the service and maintenance operational tasks.
We also show how to migrate from a fully allocated VDisk to a space-efficient virtual disk
(VDisk) by using the VDisk Mirroring feature together with the Space-Efficient VDisk function.
When executed, this command migrates a given number of extents from the source MDisk,
where the extents of the specified VDisk reside, to a defined target MDisk that must be part of
the same MDG.
You can specify a number of migration threads to be used in parallel (from 1 to 4).
If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated, while the MDisk access mode transitions from image to managed.
The syntax of the command-line interface (CLI) command is:
svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents
-target target_mdisk_id | target_mdisk_name [-threads number_of_threads]
-vdisk vdisk_id | vdisk_name
The parameters for the CLI command are defined this way:
-vdisk: Specifies the VDisk ID or name to which the extents belong.
-source: Specifies the source MDisk ID or name on which the extents currently reside.
-exts: Specifies the number of extents to migrate.
-target: Specifies the target MDisk ID or name onto which the extents are to be migrated.
-threads: Optional parameter that specifies the number of threads to use while migrating
these extents, from 1 to 4.
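As an illustration of these parameter rules (this is not an IBM tool), a small helper can assemble and validate the command string before it is issued:

```python
# Illustrative helper that assembles the svctask migrateexts command
# string described above and enforces the 1-4 thread limit.

def build_migrateexts(vdisk, source, target, exts, threads=None):
    if exts < 1:
        raise ValueError("must migrate at least one extent")
    if threads is not None and not 1 <= threads <= 4:
        raise ValueError("-threads must be between 1 and 4")
    cmd = ["svctask", "migrateexts",
           "-source", str(source), "-exts", str(exts),
           "-target", str(target)]
    if threads is not None:         # -threads is optional
        cmd += ["-threads", str(threads)]
    cmd += ["-vdisk", str(vdisk)]
    return " ".join(cmd)

print(build_migrateexts("vdisk7", "mdisk3", "mdisk5", exts=16, threads=2))
```

The helper produces a command of the same form as the syntax shown above; the object names here are invented for illustration.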
In this case, the extents that need to be migrated are moved onto the set of MDisks that are
not being deleted, and the extents are redistributed among them. This statement holds true
if multiple MDisks are being removed from the MDG at the same time; MDisks that are being
removed are never candidates for supplying free extents to the extent allocation algorithm.
If a VDisk uses one or more extents that need to be moved as a result of an rmmdisk
command, the virtualization type for that VDisk is set to striped (if it was previously sequential
or image).
If the MDisk is operating in image mode, the MDisk transitions to managed mode while the
extents are being migrated, and upon deletion, it transitions to unmanaged mode.
The parameters for the CLI command are defined this way:
-mdisk: Specifies one or more MDisk IDs or names to delete from the group.
-force: Migrates any data that belongs to other VDisks before removing the MDisk.
Using the -force flag: If the -force flag is not supplied and if VDisks occupy extents on one
or more of the MDisks that are specified, the command fails.
When the -force flag is supplied and when VDisks exist that are made from extents on one
or more of the MDisks that are specified, all extents on the MDisks will be migrated to the
other MDisks in the MDG, if there are enough free extents in the MDG. The deletion of the
MDisks is postponed until all extents are migrated, which can take time. In the case where
there are insufficient free extents in the MDG, the command fails.
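The -force behavior above can be modeled with a small sketch: the used extents of the MDisks being removed must fit into the free extents of the MDisks that remain in the MDG, or the command fails. The figures below are invented for illustration.

```python
# Sketch of the rmmdisk -force precondition described above: extents on
# the MDisks being removed must fit into the free extents of the
# remaining MDisks in the MDG, or the command fails.

def can_remove(mdg, remove):
    """mdg: {mdisk_name: {'used': n, 'free': n}}; remove: names to delete."""
    needed = sum(mdg[m]["used"] for m in remove)
    available = sum(v["free"] for m, v in mdg.items() if m not in remove)
    return needed <= available

mdg = {"mdisk1": {"used": 40, "free": 10},
       "mdisk2": {"used": 20, "free": 50},
       "mdisk3": {"used": 5, "free": 30}}

print(can_remove(mdg, {"mdisk1"}))            # 40 needed, 80 free -> True
print(can_remove(mdg, {"mdisk2", "mdisk3"}))  # 25 needed, 10 free -> False
```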
The parameters for the CLI command are defined this way:
-vdisk: Specifies the VDisk ID or name to migrate into another MDG.
-mdiskgrp: Specifies the target MDG ID or name.
-threads: An optional parameter that specifies the number of threads to use while
migrating these extents, from 1 to 4.
-copy_id: Required if the specified VDisk has more than one copy.
In Figure 9-1, we illustrate the V3 VDisk migrating from MDG1 to MDG2.
Rule: In order for the migration to be acceptable, the source and destination MDGs must
have the same extent size.
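A minimal sketch of this rule; the extent sizes are illustrative values in MB:

```python
# Sketch of the rule above: migrating a VDisk between MDGs is only
# allowed when both groups use the same extent size.

def can_migrate_vdisk(source_extent_mb, target_extent_mb):
    return source_extent_mb == target_extent_mb

print(can_migrate_vdisk(256, 256))  # True
print(can_migrate_vdisk(256, 512))  # False
```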
Figure 9-1 Migrating VDisk V3 from MDG1 to MDG2 (I/O Group 0 with SVC nodes 1 and 2; VDisks V1
through V6 reside on MDisks M1 through M7, presented by RAID Controller A and RAID Controller B;
V3 appears in both MDGs while its extents migrate)
Extents are allocated to the migrating VDisk, from the set of MDisks in the target MDG, using
the extent allocation algorithm.
The process can be prioritized by specifying the number of threads to use while migrating;
using only one thread will put the least background load on the system. If a large number of
extents are being migrated, you can specify the number of threads that will be used in parallel
(from 1 to 4).
The offline rules apply to both MDGs; therefore, referring back to Figure 9-1, if any of the M4,
M5, M6, or M7 MDisks go offline, the V3 VDisk goes offline. If the M4 MDisk goes offline, V3
and V5 go offline, but V1, V2, V4, and V6 remain online.
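The offline rule can be sketched as follows: a VDisk goes offline if any MDisk that holds one of its extents is offline. The extent layout below is an assumption chosen so that the M4 example from the text holds; it is not taken from the figure.

```python
# Sketch of the offline rule described above: a VDisk goes offline if
# any MDisk that holds one of its extents is offline. The layout is an
# illustrative assumption; V3 spans both MDGs while it migrates.

def offline_vdisks(vdisk_to_mdisks, offline_mdisks):
    return {v for v, mdisks in vdisk_to_mdisks.items()
            if mdisks & offline_mdisks}

layout = {
    "V1": {"M1", "M2"}, "V2": {"M2", "M3"}, "V4": {"M1"},
    "V3": {"M3", "M4", "M5", "M6", "M7"},   # migrating: spans both MDGs
    "V5": {"M4"}, "V6": {"M6"},
}

# M4 goes offline: V3 and V5 go offline; the other VDisks stay online.
print(sorted(offline_vdisks(layout, {"M4"})))  # ['V3', 'V5']
```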
If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated while the MDisk access mode transitions from image to managed.
For the duration of the move, the VDisk is listed as being a member of the original MDG. For
the purposes of configuration, the VDisk moves to the new MDG instantaneously at the end
of the migration.
The parameters for the CLI command are defined this way:
-copy_id: Required if the specified VDisk has more than one copy.
-vdisk: Specifies the name or ID of the source VDisk to be migrated.
-mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk
must be unmanaged and large enough to contain the data of the disk being migrated.)
-mdiskgrp: Specifies the MDG into which the MDisk must be placed after the migration has
completed.
-threads: An optional parameter that specifies the number of threads to use while
migrating these extents, from 1 to 4.
Regardless of the mode in which the VDisk starts, it is reported as being in managed mode during
the migration. Also, both of the MDisks involved are reported as being in image mode during
the migration. Upon completion of the command, the VDisk is classified as an image mode
VDisk.
To move a VDisk between I/O Groups, the cache must be flushed. The SVC will attempt to
destage all write data for the VDisk from the cache during the I/O Group move. This flush will
fail if data has been pinned in the cache for any reason (such as an MDG being offline). By
default, this failed flush will cause the migration between I/O Groups to fail, but this behavior
can be overridden using the -force flag. If the -force flag is used and if the SVC is unable to
destage all write data from the cache, the result is that the contents of the VDisk are
corrupted by the loss of the cached data. During the flush, the VDisk operates in cache
write-through mode.
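The cache-flush decision described above can be sketched as follows; this is an illustrative model of the documented behavior, not SVC code.

```python
# Sketch of the I/O Group move rules described above: the cache must be
# destaged first; pinned data fails the flush, and the move then fails
# unless -force is supplied (at the cost of losing the cached writes).

def move_io_group(cache_pinned, force=False):
    """Return (moved, data_lost) for an attempted I/O Group move."""
    if not cache_pinned:           # clean destage: safe move
        return True, False
    if force:                      # forced move discards pinned writes
        return True, True
    return False, False            # flush failed and no -force: move fails

print(move_io_group(cache_pinned=False))             # (True, False)
print(move_io_group(cache_pinned=True))              # (False, False)
print(move_io_group(cache_pinned=True, force=True))  # (True, True)
```

The (True, True) case is exactly the data-loss scenario the text warns about, which is why -force must be used with care.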
Important: Do not move a VDisk to an offline I/O Group under any circumstance. You must
ensure that the I/O Group is online before you move the VDisks to avoid any data loss.
You must quiesce host I/O before the migration for two reasons:
If there is significant data in cache that takes a long time to destage, the command line will
time out.
Subsystem Device Driver (SDD) vpaths that are associated with the VDisk are deleted
before the VDisk move takes place in order to avoid data corruption. So, data corruption
can occur if I/O is still ongoing at a particular logical unit number (LUN) ID when it is
reused for another VDisk.
When migrating a VDisk between I/O Groups, you do not have the ability to specify the
preferred node. The preferred node is assigned by the SVC.
For detailed information about the chvdisk command parameters, refer to the SVC
command-line interface help by typing this command:
svctask chvdisk -h
The chvdisk command modifies a single property of a VDisk. To change the VDisk name and
to modify the I/O Group, for example, you must issue the command twice. A VDisk that is a
member of a FlashCopy or Remote Copy relationship cannot be moved to another I/O Group,
and you cannot override this restriction by using the -force flag.
To determine the extent allocation of MDisks and VDisks, use the following commands:
To list the VDisk IDs and the corresponding number of extents that the VDisks occupy on
the queried MDisk, use the following CLI command:
svcinfo lsmdiskextent <mdiskname | mdisk_id>
To list the MDisk IDs and the corresponding number of extents that the queried VDisks
occupy on the listed MDisks, use the following CLI command:
svcinfo lsvdiskextent <vdiskname | vdisk_id>
To list the number of available free extents on an MDisk, use the following CLI command:
svcinfo lsfreeextents <mdiskname | mdisk_id>
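A sketch of what these three queries report, driven from a single allocation table. The object names and extent counts are invented for illustration; these are not the SVC commands themselves.

```python
# Illustrative models of the three extent queries above, driven from one
# mapping of (vdisk, mdisk) -> number of extents.

def lsmdiskextent(alloc, mdisk):
    """VDisk IDs and extent counts on the queried MDisk."""
    return {v: n for (v, m), n in alloc.items() if m == mdisk}

def lsvdiskextent(alloc, vdisk):
    """MDisk IDs and extent counts used by the queried VDisk."""
    return {m: n for (v, m), n in alloc.items() if v == vdisk}

def lsfreeextents(alloc, mdisk, total_extents):
    """Free extents = total minus extents allocated to any VDisk."""
    return total_extents - sum(n for (v, m), n in alloc.items() if m == mdisk)

alloc = {("vdisk0", "mdisk0"): 12, ("vdisk0", "mdisk1"): 4,
         ("vdisk1", "mdisk0"): 6}

print(lsmdiskextent(alloc, "mdisk0"))      # {'vdisk0': 12, 'vdisk1': 6}
print(lsvdiskextent(alloc, "vdisk0"))      # {'mdisk0': 12, 'mdisk1': 4}
print(lsfreeextents(alloc, "mdisk0", 32))  # 14
```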
9.3.1 Parallelism
You can perform several of the following activities in parallel.
Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of
migration activities:
Migrate multiple extents
Migrate between MDGs
Migrate off of a deleted MDisk
Migrate to image mode
Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not
take into account whether the MDisk is the source or the destination. If more than four single
extent migrates are scheduled for a particular MDisk, further migrations are queued pending
the completion of one of the currently running migrations.
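These limits can be sketched as a simple admission scheduler; this is illustrative only and ignores details such as migration priorities.

```python
# Sketch of the scheduling limits above: at most 32 active migrations
# per cluster, and at most 4 concurrent single-extent migrates touching
# any one MDisk (as source or destination); the rest are queued.
from collections import Counter

def schedule(requests, per_mdisk=4, per_cluster=32):
    """requests: list of (source_mdisk, target_mdisk) pairs.
    Returns (active, queued) under both limits."""
    active, queued, load = [], [], Counter()
    for src, dst in requests:
        if (len(active) < per_cluster
                and load[src] < per_mdisk and load[dst] < per_mdisk):
            active.append((src, dst))
            load[src] += 1
            load[dst] += 1
        else:
            queued.append((src, dst))
    return active, queued

# Five migrates off mdisk0: only four run; the fifth waits in the queue.
reqs = [("mdisk0", f"mdisk{i}") for i in range(1, 6)]
active, queued = schedule(reqs)
print(len(active), len(queued))  # 4 1
```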
9.3.2 Error handling
The migration is suspended or stopped if a medium error occurs on a read from the source
while the destination’s medium error table is full, if an I/O error repeatedly occurs on reads
from the source, or if the MDisks repeatedly go offline.
The migration will be suspended if any of the following conditions exist; otherwise, it will be
stopped:
The migration is between MDGs and has progressed beyond the first extent. These
migrations are always suspended rather than stopped, because stopping a migration in
progress leaves a VDisk spanning MDGs, which is not a valid configuration other than
during a migration.
The migration is a Migrate to Image Mode (even if it is processing the first extent). These
migrations are always suspended rather than stopped, because stopping a migration in
progress leaves the VDisk in an inconsistent state.
A migration is waiting for a metadata checkpoint that has failed.
If a migration is stopped, and if any migrations are queued awaiting the use of the MDisk for
migration, these migrations are now considered. If, however, a migration is suspended, the
migration continues to use resources, and so, another migration is not started.
The SVC attempts to resume the migration if the error log entry is marked as fixed using the
CLI or the GUI. If the error condition no longer exists, the migration will proceed. The
migration might resume on another node than the node that started the migration.
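The suspend-versus-stop decision can be summarized in a small sketch; this is an illustrative model of the rules above, not SVC code.

```python
# Sketch of the error-handling rule above: a failing migration is
# suspended (it keeps its resources and can resume once the error log
# entry is marked as fixed) rather than stopped when stopping would
# leave an invalid or inconsistent configuration.

def on_migration_error(kind, first_extent_done):
    """Return 'suspend' or 'stop' for a failed migration."""
    if kind == "between_mdgs" and first_extent_done:
        return "suspend"    # stopping would leave a VDisk spanning MDGs
    if kind == "to_image_mode":
        return "suspend"    # stopping would leave the VDisk inconsistent
    if kind == "checkpoint_failed":
        return "suspend"    # waiting on a failed metadata checkpoint
    return "stop"

print(on_migration_error("between_mdgs", first_extent_done=False))   # stop
print(on_migration_error("to_image_mode", first_extent_done=False))  # suspend
```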
Chunks
Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this
description, this unit is referred to as a chunk.
The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack (such as cache
destages) are held back. If the back-end storage is operating with significant latency, it is
possible that this operation might take time (minutes) to complete, which can have an adverse
effect on the overall performance of the SVC. To avoid this situation, if the migration of a
particular chunk is still active after one minute, the migration is paused for 30 seconds. During
this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the
chunk is resumed. This algorithm is repeated as many times as necessary to complete the
migration of the chunk.
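The pacing algorithm can be sketched as follows; the copy-step durations are invented for illustration.

```python
# Sketch of the chunk pacing described above: each 16 MB chunk is copied
# with writes to it held back; if a chunk is still active after 60
# seconds, the copy pauses for 30 seconds so held-back writes can
# proceed, then resumes, repeating until the chunk completes.

def migrate_chunk(copy_step_seconds, step_limit=60, pause=30):
    """copy_step_seconds: durations of the copy attempts for one chunk.
    Returns (total_elapsed, pauses_taken) under the 60 s/30 s rule."""
    elapsed, active, pauses = 0, 0, 0
    for step in copy_step_seconds:
        if active >= step_limit:      # still busy after a minute:
            elapsed += pause          # let writes through for 30 seconds
            pauses += 1
            active = 0
        elapsed += step
        active += step
    return elapsed, pauses

# A slow back end: three 40-second copy attempts for one chunk.
print(migrate_chunk([40, 40, 40]))  # (150, 1)
```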
The SVC guarantees read stability during data migrations, even if the data migration is stopped by
a node reset or a cluster shutdown. This read stability is possible, because SVC disallows
writes on all nodes to the area being copied, and upon a failure, the extent migration is
restarted from the beginning.
Figure 9-3 on page 685 shows the data migration and write operation relationship.
Figure 9-3 Migration and write operation relationship
MDisk modes
There are three MDisk modes:
Unmanaged MDisk:
An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged
MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will
not write to an MDisk that is in unmanaged mode except when it attempts to change the
mode of the MDisk to one of the other modes.
Image mode MDisk:
Image mode provides a direct block-for-block translation from the MDisk to the VDisk with
no virtualization. Image mode VDisks have a minimum size of one block (512 bytes) and
always occupy at least one extent. An image mode MDisk is associated with exactly one
VDisk.
Managed mode MDisk:
Managed mode MDisks contribute extents to the pool of available extents in the MDG.
Zero or more managed mode VDisks might use these extents.
The MDisk mode transitions are: an MDisk that is not in a group (unmanaged) enters
managed mode when it is added to a group and returns to unmanaged mode when it is
removed from the group; creating an image mode VDisk places an unmanaged MDisk in
image mode, and deleting the image mode VDisk returns it to unmanaged mode; starting a
migration to image mode moves the MDisk through a transitional migrating-to-image-mode
state before it reaches image mode.
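The mode transitions described in this section can be modeled as a small state table; the state and action names are illustrative, not an SVC API.

```python
# Sketch of the MDisk mode transitions described in this section.
# State and action names are illustrative only.

TRANSITIONS = {
    ("unmanaged", "add to group"): "managed",
    ("managed", "remove from group"): "unmanaged",
    ("unmanaged", "create image mode vdisk"): "image",
    ("image", "delete image mode vdisk"): "unmanaged",
    ("managed", "start migrate to image mode"): "migrating to image",
    ("migrating to image", "migration complete"): "image",
}

def next_mode(mode, action):
    try:
        return TRANSITIONS[(mode, action)]
    except KeyError:
        raise ValueError(f"invalid transition: {action} from {mode}")

print(next_mode("unmanaged", "add to group"))             # managed
print(next_mode("managed", "start migrate to image mode"))  # migrating to image
```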
Image mode VDisks have the special property that the last extent in the VDisk can be a
partial extent. Managed mode disks do not have this property.
To perform any type of migration activity on an image mode VDisk, the image mode disk must
first be converted into a managed mode disk. If the image mode disk has a partial last extent,
this last extent in the image mode VDisk must be the first extent to be migrated. This
migration is handled as a special case.
After this special migration operation has occurred, the VDisk becomes a managed mode
VDisk and is treated in the same way as any other managed mode VDisk. If the image mode
disk does not have a partial last extent, no special processing is performed; the image mode
VDisk is simply changed into a managed mode VDisk and is treated in the same way as any
other managed mode VDisk.
After data is migrated off of a partial extent, there is no way to migrate data back onto the
partial extent.
The recommended method is to have one MDG for all the image mode VDisks, and other
MDGs for the managed mode VDisks, and to use the migrate VDisk facility.
Be sure to verify that enough extents are available in the target MDG.
We then manage those LUNs with the SVC, migrate them from an image mode VDisk to a
VDisk, migrate one of them back to an image mode VDisk, and finally, move it to another
image mode VDisk on another storage subsystem, so that those LUNs can then be
masked/mapped back to the host directly. This approach, of course, also works if we move
the LUN back to the same storage subsystem.
Using this example will help you perform any one of the following activities in your
environment:
Move a Microsoft server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC. Perform this activity first when introducing the SVC into your
environment. This section shows that your host downtime is only a few minutes while you
remap and remask disks using your storage subsystem LUN management tool. We
describe this step in detail in 9.5.2, “Adding the SVC between the host system and the
DS4700” on page 690.
Migrate your image mode VDisk to a VDisk while your host is still running and servicing
your business application. You might perform this activity if you were removing a storage
subsystem from your SAN environment, or wanting to move the data onto LUNs that are
more appropriate for the type of data stored on those LUNs, taking into account
availability, performance, and redundancy. We describe this step in 9.5.4, “Migrating the
VDisk from image mode to managed mode” on page 700.
You can use these activities individually, or together, to migrate your server’s LUNs from one
storage subsystem to another storage subsystem using the SVC as your migration tool.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
9.5.1 Windows Server 2008 host system connected directly to the DS4700
In our example configuration, we use a Windows Server 2008 host, a DS4700, and a
DS4500. The host has two LUNs (drive X and Y). The two LUNs are part of one DS4700
array. Before the migration, LUN masking is defined in the DS4700 to give access to the
Windows Server 2008 host system for the volume from DS4700 labeled X and Y (see
Figure 9-6 on page 689).
Figure 9-6 on page 689 shows the two LUNs (drive X and Y).
Figure 9-6 Drives X and Y
Figure 9-7 shows the properties of one of the DS4700 disks using the Subsystem Device
Driver DSM (SDDDSM). The disk appears as an IBM 1814 Fast Multipath Device.
To add the SVC between the host system and the DS4700 storage subsystem, perform the
following steps:
1. Check that you have installed supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC, and remove the
masking for the host.
Figure 9-9 on page 691 shows the two LUNs with LUN IDs 12 and 13 remapped to SVC
ITSO-CLS3.
Figure 9-9 LUNs remapped
5. Log on to your SVC Console, open Work with Managed Disks → Managed Disks,
select Discover Managed Disks in the drop-down list, and click Go (Figure 9-10).
Figure 9-11 on page 692 shows the two LUNs discovered as Mdisk12 and Mdisk13.
6. Now, we create one new empty MDG for each MDisk that we want to use to create an
image VDisk later. Open Work with Managed Disks → Managed Disk Groups, select
Create an MDisk Group in the drop-down list, and click Go.
Figure 9-12 shows the MDisk Group creation.
7. Click Next.
8. Type the MDG name, MDG_img_1. Do not select any MDisk, as shown in Figure 9-13 on
page 693, and then, click Next.
Figure 9-13 MDG for image VDisk creation
9. Choose the extent size that you want to use, as shown in Figure 9-14, and then, click
Next. Remember that the extent size that you choose must be the same extent size in the
MDG to which you will migrate your data later.
13.The Create Image Mode Virtual Disk window (Figure 9-17) opens. Click Next.
14.Type the name that you want to use for the VDisk, and select the attributes, in our case,
the name is W2k8_Log. Click Next (Figure 9-18 on page 695).
Figure 9-18 Set the attributes for the image mode VDisk
15.Select the MDisk to use to create the image mode VDisk, and click Next (Figure 9-19).
Figure 9-19 Select the MDisk to use to create your image mode VDisk
16.Select an I/O Group, the Preferred Node, and the MDisk group that you previously
created. Optionally, you can let this system choose these settings (Figure 9-20 on
page 696). Click Next.
Multiple nodes: If you have more than two nodes in the cluster, select the I/O Group of
the nodes to evenly share the load.
17.Review the summary, and click Finish to create the image mode VDisk.
The summary window shows the image mode VDisk attributes.
18.Repeat steps 6 through 17 for each LUN that you want to migrate to the SVC.
19.In the Viewing Virtual Disk view, we see the two newly created VDisks, as shown in
Figure 9-22 on page 697. In our example, they are named W2k8_log and W2k8_data.
Figure 9-22 Viewing Virtual Disks
20.In the Viewing Managed Disks window (Figure 9-23), we see the two new MDisks are now
shown as Image Mode disks. In our example, they are named mdisk12 and mdisk13.
21.Map the VDisks again to the Windows Server 2008 host system.
22.Expand Work with Virtual Disks, and click Virtual Disks. Select the VDisks, and select
Map VDisks to a Host, and click Go (Figure 9-24).
23.Choose the host, and enter the Small Computer System Interface (SCSI) LUN IDs. Click
OK (Figure 9-25 on page 698).
9.5.3 Putting the migrated disks onto an online Windows Server 2008 host
Perform these steps:
1. Start the Windows Server 2008 host system again, and expand Computer Management to
see the new disk properties changed to a 2145 Multi-Path Disk Device (Figure 9-26).
2. Figure 9-27 shows the Disk Management window.
3. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device
Driver DSM to open the SDDDSM command-line utility (Figure 9-28).
The datapath query device output ends with Total Devices : 2, confirming that both SVC
devices are visible, and returns to the C:\Program Files\IBM\SDDDSM> prompt.
Figure 9-29 Migrate a VDisk
2. Select the MDG to which to migrate the disk, and select the number of threads to use for
this process, as shown in Figure 9-30. Click OK.
Extent sizes: If you migrate the VDisks to another MDG, the extent size of the source
MDG and the extent size of the target MDG must be equal.
4. Click the percentage to show more detailed information about this VDisk. During the
migration process, the VDisks are still in the old MDG. During the migration, your server is
still accessing the data. After the migration is complete, the VDisk is in the new
MDG_DS45 MDG and is a striped VDisk.
Figure 9-32 shows the migrated VDisk in the new MDG.
Figure 9-33 Migrate to an Image Mode VDisk
4. Select the source VDisk copy, and click Next (Figure 9-35).
Extent sizes: If you migrate the VDisks to another MDG, the extent size of the source
MDG must equal the extent size of the target MDG.
7. Select the number of threads (1 to 4) to use for this migration process. The higher the
number, the higher the priority (Figure 9-38). Click Next.
In this section, we describe how to migrate an image mode VDisk to another image mode
VDisk. In our example, we migrate the W2k8_Log VDisk to another disk subsystem as an
image mode VDisk. The second storage subsystem is a DS4500; a new LUN is configured on
the storage and mapped to the SVC cluster. The LUN is available in SVC as unmanaged
disk11.
To migrate the image mode VDisk to another image mode VDisk, perform the following steps:
1. Check the VDisk to migrate, and select Migrate to an image mode VDisk from the list.
Click Go.
2. The Introduction window opens, as shown in Figure 9-42 on page 707. Click Next.
Figure 9-42 Migrating data to an image mode VDisk
3. Select the VDisk source copy, and click Next (Figure 9-43).
4. Select a target MDisk, as shown in Figure 9-44 on page 708. Click Next.
5. Select a target MDG for the MDisk to join, as shown in Figure 9-45. Click Next.
6. Select the number of threads (1 to 4) to devote to this process, as shown in Figure 9-46.
The higher the number, the higher the priority. Click Next.
7. Verify the migration attributes, as shown in Figure 9-47 on page 709, and click Finish.
Figure 9-47 Verify Migration Attributes window
9. Repeat these steps for all of the image mode VDisks that you want to migrate.
10.If you want to free the data from the SVC, use the procedure that is described in 9.5.7,
“Free the data from the SVC” on page 709.
To free the data from the SVC, we use the svctask rmvdisk command.
Note: This situation only applies to image mode VDisks. If you delete a normal VDisk, all of
the data will also be deleted.
As shown in Example 9-1 on page 700, the SAN disks currently reside on the SVC 2145
device.
Check that you have installed the supported device drivers on your host system.
9.5.8 Put the free disks online on Windows Server 2008
Put the disks, which have been freed from the SVC, online on Windows Server 2008:
1. Using your DS4500 Storage Manager interface, now remap the two LUNs that were
MDisks back to your Windows Server 2008 server.
2. Open your Computer Management window. Figure 9-51 shows that the LUNs are now
back to an IBM 1814 type.
3. Open your Disk Management window; you will see that the disks have appeared. You
might need to reactivate each disk by using its right-click menu.
Using this example can help you to perform any of the following activities in your environment:
Move a Linux server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC. Perform this activity first when introducing the SVC into your
environment. This section shows that your host downtime is only a few minutes while you
remap and remask disks using your storage subsystem LUN management tool. We
describe this step in detail in 9.6.2, “Preparing your SVC to virtualize disks” on page 715.
Move data between storage subsystems while your Linux server is still running and
servicing your business application. You might perform this activity if you are removing a
storage subsystem from your SAN environment. Or, perform this activity if you want to
move the data onto LUNs that are more appropriate for the type of data that is stored on
those LUNs, taking availability, performance, and redundancy into account. We describe
this step in 9.6.4, “Migrate the image mode VDisks to managed MDisks” on page 722.
Move your Linux server’s LUNs back to image mode VDisks so that they can be remapped
and remasked directly back to the Linux server. We describe this step in 9.6.5, “Preparing
to migrate from the SVC” on page 725.
You can use these three activities individually, or together, to migrate your Linux server’s
LUNs from one storage subsystem to another storage subsystem using the SVC as your
migration tool. You can also use the activities individually simply to introduce the SVC into, or
remove it from, your environment.
The only downtime required for these activities is the time that it takes to remask and remap
the LUNs between the storage subsystems and your SVC.
Figure 9-53 Linux host attached through the SAN (Green zone) directly to an IBM or OEM storage subsystem
Figure 9-53 shows our Linux server connected to our SAN infrastructure. It has two LUNs that
are masked directly to it from our storage subsystem:
The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise
Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem.
The operating system identifies it as /dev/mapper/VolGroup00-LogVol00.
SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the
LUN as SCSI LUN ID 0.
Example 9-2 shows our disks that are directly attached to the Linux hosts.
Our Linux server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 9-53 on page 713:
The Linux server’s host bus adapter (HBA) cards are zoned so that they are in the Green
zone with our storage subsystem.
The two LUNs that have been defined on the storage subsystem, using LUN masking, are
directly available to our Linux server.
If you have an SVC that is already connected, skip to 9.6.2, “Preparing your SVC to virtualize
disks” on page 715.
Connecting the SVC to your SAN fabric requires that you perform these tasks:
Assemble your SVC components (nodes, uninterruptible power supply unit, and Master
Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on
your SAN. We describe these tasks in much greater detail in Chapter 3, “Planning and
configuration” on page 65.
Create and configure your SVC cluster.
Create these additional zones:
– An SVC node zone (our Black zone in Figure 9-54 on page 715). This zone only
contains all of the ports (or worldwide names (WWN)) for each of the SVC nodes in
your cluster. Our SVC is made up of a two node cluster, where each node has four
ports. So, our Black zone has eight defined WWNs.
– A storage zone (our Red zone). This zone also has all of the ports/WWNs from the
SVC node zone, as well as the ports/WWNs for all the storage subsystems that SVC
will virtualize.
– A host zone (our Blue zone). This zone contains the ports/WWNs for each host that will
access the VDisk, together with the ports that are defined in the SVC node zone.
Important: Do not put your storage subsystems in the host (Blue) zone. The host
zone is an unsupported configuration. Putting your storage subsystems in the host
zone can lead to data loss.
We set our environment in this manner. Figure 9-54 on page 715 shows our environment.
Figure 9-54 Zoning per migration scenarios: Linux host, SVC I/O grp0, and IBM or OEM storage subsystems, with the Green, Red, Blue, and Black zones
These activities are all nondisruptive. They do not affect your SAN fabric or your existing SVC
configuration (if you already have a production SVC in place).
First, we need to create an empty MDG for each of the disks, using the commands in
Example 9-3. We name our MDGs Palau-MDG0 to hold our boot LUN. And, we name the
second MDG Palau-MDG1 to hold the data LUN.
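The MDG creation in Example 9-3 is along these lines. This is a sketch with svctask stubbed locally (the real command runs in a cluster session); the 512 MB extent size is an assumption that matches the extent_size our MDGs report later in the chapter.

```shell
# Stub for the cluster CLI - prints instead of executing.
svctask() { echo "svctask $*"; }

svctask mkmdiskgrp -name Palau-MDG0 -ext 512   # will hold the boot LUN
svctask mkmdiskgrp -name Palau-MDG1 -ext 512   # will hold the data LUN
```

Remember that both MDGs must use the same extent size if you later migrate VDisks between them.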
The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have
not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 9-4 shows
the output of the nodes that it found on our SAN fabric. (If the port did not show up, it indicates
that we have a zone configuration problem.)
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>
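A quick way to confirm that your host’s port appears in the candidate list is to search the captured svcinfo lshbaportcandidate output for it. The WWN list below is the Example 9-4 output; the choice of 210000E08B054CAA as the Linux server’s WWN is purely illustrative.

```shell
# WWNs reported by 'svcinfo lshbaportcandidate' (from Example 9-4).
candidates='210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2'

host_wwn=210000E08B054CAA   # illustrative - substitute your server's WWN

if printf '%s\n' "$candidates" | grep -q "^${host_wwn}$"; then
    echo "host port is visible to the SVC"
else
    echo "zone configuration problem: ${host_wwn} not seen"
fi
```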
If you do not know the WWN of your Linux server, you can look at which WWNs are currently
configured on your storage subsystem for this host. Figure 9-55 shows our configured ports
on an IBM DS4700 storage subsystem.
After verifying that the SVC can see our host (linux2), we create the host entry and assign the
WWN to this entry. Example 9-5 shows these commands.
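The host definition in Example 9-5 looks roughly like this (svctask is stubbed locally; the host name linux2 is from our environment, and the WWNs are taken from the candidate list as an illustrative assumption).

```shell
# Stub for the cluster CLI - prints instead of executing.
svctask() { echo "svctask $*"; }

# Create the host object with its first port.
svctask mkhost -name linux2 -hbawwpn 210000E08B054CAA

# If the host has a second HBA port, add it to the same host object.
svctask addhostport -hbawwpn 210000E08B0548BC linux2
```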
You can rename the storage subsystem to a more meaningful name (if we had multiple
storage subsystems that were connected to our SAN fabric, renaming them makes it
considerably easier to identify them) with the svctask chcontroller -name command.
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode VDisks.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are
shown in Figure 9-56 on page 718 and in Figure 9-57 on page 718.
Before we move the LUNs to the SVC, we must configure the host multipath configuration for
the SVC. Add the following entry to your multipath.conf file, as shown in Example 9-7, and
add the content of Example 9-8 to the file.
# SVC
device {
    vendor "IBM"
    product "2145CF8"
    path_grouping_policy group_by_serial
}
We are now ready to move the ownership of the disks to the SVC, to discover them as
MDisks, and to give them back to the host as VDisks.
Our Linux server has two LUNs: One LUN is for our boot disk and operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.
If we only wanted to move the LUN that holds our application and data files, we do not have to
reboot the host. The only requirement is that we unmount the file system and vary off the
Volume Group to ensure the data integrity between the reassignment.
The following steps are required, because we intend to move both LUNs at the same time:
1. Confirm that the multipath.conf file is configured for SVC.
2. Shut down the host.
If you are only moving the LUNs that contain the application and data, follow this
procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are a logical volume manager (LVM) volume, deactivate that Volume
Group with the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver using the rmmod DRIVER_MODULE command.
This command removes the SCSI definitions from the kernel (we will reload this
module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem
to rescan for new disks without requiring you to unload the HBA driver; however, we do
not provide those details here.
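Substeps a through d can be sketched as a short script. Everything named here is illustrative: the mount point, Volume Group, and HBA driver module (a QLogic example) are assumptions to replace with your own. The run helper prints each command first (DRY_RUN=1) so that you can review the sequence before executing it for real.

```shell
# Quiesce sketch for moving only the application/data LUN.
# /data, datavg, and qla2xxx are illustrative names.
DRY_RUN=${DRY_RUN:-1}   # default: only print what would be run
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run umount /data          # b. unmount the file system
run vgchange -a n datavg  # c. deactivate the LVM Volume Group
run rmmod qla2xxx         # d. unload the HBA driver
```

Run it with DRY_RUN=0 only after you have verified each name against your environment.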
LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the
SVC with any LUN number. It does not have to be LUN 0 until later, when we configure
the VDisk-to-host mapping on the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
will be discovered and named mdiskN, where N is the next available MDisk number
(starting from 0). Example 9-9 shows the commands that we used to discover our MDisks
and to verify that we have the correct MDisks.
Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
task display) with the serial number that you recorded earlier (in Figure 9-56 and
Figure 9-57 on page 718).
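One way to automate this serial-number check is to compare the recorded serials against the UID column values. The id:UID pairing below is a simplified, illustrative stand-in for svcinfo lsmdisk output, using the UIDs from our example.

```shell
# Serial numbers recorded earlier in Storage Manager (Figure 9-56 / 9-57).
recorded='600a0b800026b2820000428f48739bca00000000000000000000000000000000
600a0b800026b282000042f84873c7e100000000000000000000000000000000'

# id:UID pairs extracted from the lsmdisk output (simplified form).
discovered='26:600a0b800026b2820000428f48739bca00000000000000000000000000000000
27:600a0b800026b282000042f84873c7e100000000000000000000000000000000'

printf '%s\n' "$discovered" | while IFS=: read -r id uid; do
    if printf '%s\n' "$recorded" | grep -q "^${uid}$"; then
        echo "mdisk ${id}: UID matches a recorded serial"
    else
        echo "mdisk ${id}: UID NOT recorded - stop and re-check"
    fi
done
```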
5. After we have verified that we have the correct MDisks, we rename them to avoid
confusion in the future when we perform other MDisk-related tasks (Example 9-10).
6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype
image option (Example 9-11). This command virtualizes the disks in the exact same layout
as though they were not virtualized.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
26 md_palauS online image 6
Palau_SANB 12.0GB 0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7
Palau_Data 5.0GB 0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count
29 palau_SANB 0 io_grp0 online 4
Palau_SANB 12.0GB image
60050768018301BF280000000000002B 0 1
30 palau_Data 0 io_grp0 online 4
Palau_Data 5.0GB image
60050768018301BF280000000000002C 0 1
7. Map the new image mode VDisks to the host (Example 9-12).
Important: Make sure that you map the boot VDisk with SCSI ID 0 to your host. The
host must be able to identify the boot volume during the boot process.
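Taken together, the mkvdisk and mkvdiskhostmap invocations behind Examples 9-11 and 9-12 look roughly like this sketch (svctask is stubbed locally; the MDG, MDisk, VDisk, and host names follow the lsvdisk output above, and the SCSI ID for the data VDisk is an assumption).

```shell
# Stub for the cluster CLI - prints instead of executing.
svctask() { echo "svctask $*"; }

# -vtype image preserves the LUN's existing on-disk layout exactly.
svctask mkvdisk -mdiskgrp Palau_SANB -iogrp io_grp0 -vtype image \
        -mdisk md_palauS -name palau_SANB
svctask mkvdisk -mdiskgrp Palau_Data -iogrp io_grp0 -vtype image \
        -mdisk md_palauD -name palau_Data

# Map the boot VDisk as SCSI ID 0 so that the HBA BIOS can find it.
svctask mkvdiskhostmap -host linux2 -scsi 0 palau_SANB
svctask mkvdiskhostmap -host linux2 -scsi 1 palau_Data
```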
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image VDisks onto other VDisks. You do not need to wait
until the FlashCopy process has completed before starting your application.
8. Power on your host server and enter your Fibre Channel (FC) HBA adapter BIOS before
booting the operating system, and make sure that you change the boot configuration so
that it points to the SVC. In our example, we performed the following steps on a QLogic
HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system.
If you only moved the application LUN to the SVC and left your Linux server running, you
only need to follow these steps to see the new VDisk:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and
cannot) unload your HBA driver, you can issue commands to the kernel to rescan the
SCSI bus to see the new VDisks (these details are beyond the scope of this book).
b. Check your syslog, and verify that the kernel found the new VDisks. On Red Hat
Enterprise Linux, the syslog is stored in the /var/log/messages directory.
c. If your application and data are on an LVM volume, rediscover the Volume Group, and
then, run the vgchange -a y VOLUME_GROUP command to activate the Volume Group.
10.Mount your file systems with the mount /MOUNT_POINT command (Example 9-13). The df
output shows us that all of the disks are available again.
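Substeps a and c, together with the final mount, can be sketched in the same dry-run style. The driver module, Volume Group, and mount point are illustrative assumptions; DRY_RUN=1 prints each command for review instead of executing it.

```shell
# Host-side rediscovery after the LUNs come back as SVC VDisks.
# qla2xxx, datavg, and /data are illustrative names.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run modprobe qla2xxx      # a. reload the HBA driver
run vgchange -a y datavg  # c. reactivate the LVM Volume Group
run mount /data           # mount the file system again
```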
Preparing MDisks for striped mode VDisks
From our second storage subsystem, we have performed these tasks:
Created and allocated three new LUNs to the SVC
Discovered them as MDisks
Renamed these LUNs to more meaningful names
Created a new MDG
Placed all of these MDisks into this MDG
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 9-15. Listing the MDG with the svcinfo lsmdiskgrp command shows that
the free capacity on the old MDGs is slowly increasing as those extents are moved to the new
MDG.
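You can pull the percent-complete figures out of the captured lsmigrate output with a one-line filter. The attribute-per-line sample below is illustrative of the form that svcinfo lsmigrate produces (when the output is empty, no migrations are running); the exact values are hypothetical.

```shell
# Illustrative sample of 'svcinfo lsmigrate' output for one migration.
sample='migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4'

# Print only the progress attribute.
printf '%s\n' "$sample" | awk '$1 == "progress" { print $2 "% complete" }'
```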
After this task has completed, Example 9-16 shows that the VDisks are now spread over
three MDisks.
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>
Our migration to striped VDisks on another storage subsystem (DS4500) is now complete.
The original MDisks (Palau-MDG0 and Palau-MDG1) can now be removed from the SVC, and
these LUNs can be removed from the storage subsystem.
If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can
remove it from our SAN fabric.
You might want to perform this activity for any one of these reasons:
You purchased a new storage subsystem, and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no
longer need that host connected to the SVC.
You want to ship a host, and its data, that is currently connected to the SVC to a site where
there is no SVC.
Changes to your environment no longer require this host to use the SVC.
There are also other preparation activities that we can perform before we have to shut down
the host and reconfigure the LUN masking and mapping. This section covers those activities.
If you are moving the data to a new storage subsystem, it is assumed that the storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 9-58 on
page 726.
Figure 9-58 Environment with the SVC and the second IBM or OEM storage subsystem attached, showing the Green, Red, Blue, and Black zones
We also need a Green zone for our host to use when we are ready for it to directly access the
disk after it has been removed from the SVC.
After your zone configuration is set up correctly, the SVC sees the new storage subsystem’s
controller using the svcinfo lscontroller command, as shown in Figure 9-10 on page 691.
It is also a good idea to rename the new storage subsystem’s controller to a more useful
name, which can be done with the svctask chcontroller -name command.
0 mdisk0 online managed
600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8
MD_palauVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8
MD_palauVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8
MD_palauVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Even though the MDisks will not stay in the SVC for long, we still recommend that you rename
them to more meaningful names, so that they do not get confused with other MDisks that are
used by other activities. Also, we create the MDGs to hold our new MDisks, which is shown in
Example 9-18.
Our SVC environment is now ready for the VDisk migration to image mode VDisks.
During the migration, our Linux server is unaware that its data is being physically moved
between storage subsystems.
After the migration has completed, the image mode VDisks are ready to be removed from the
Linux server, and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.
9.6.7 Removing the LUNs from the SVC
The next step requires downtime on the Linux server, because we will remap and remask the
disks so that the host sees them directly through the Green zone, as shown in Figure 9-58 on
page 726.
Our Linux server has two LUNs: one LUN is our boot disk and operating system file systems,
and the other LUN holds our application and data files. Moving both LUNs at one time
requires shutting down the host.
If we only want to move the LUN that holds our application and data files, we can move that
LUN without rebooting the host. The only requirement is that we unmount the file system and
vary off the Volume Group to ensure the data integrity during the reassignment.
Before you start: Moving LUNs to another storage subsystem might need an additional
entry in the multipath.conf file. Check with the storage subsystem vendor to see which
content you must add to the file. You might be able to install and modify the file ahead of
time.
Because we intend to move both LUNs at the same time, these steps are required:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
If you are only moving the LUNs that contain the application and data, you can follow this
procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that Volume Group with the vgchange
-a n VOLUMEGROUP_NAME command.
d. If you can, unload your HBA driver using the rmmod DRIVER_MODULE command. This
command removes the SCSI definitions from the kernel (we will reload this module and
rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for
new disks without requiring you to unload the HBA driver; however, we do not provide
these details here.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
(Example 9-20). To double-check that you have removed the VDisks, use the svcinfo
lshostvdiskmap command, which shows that these disks are no longer mapped to the
Linux server.
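Step 3’s unmapping and verification reduce to a few CLI calls, sketched here with the cluster commands stubbed locally; the VDisk and host names are from our Linux example.

```shell
# Stubs for the cluster CLI - print instead of executing.
svctask() { echo "svctask $*"; }
svcinfo() { echo "svcinfo $*"; }

# Remove the host mappings for both VDisks.
svctask rmvdiskhostmap -host linux2 palau_SANB
svctask rmvdiskhostmap -host linux2 palau_Data

# Double-check: this list should no longer show the two VDisks.
svcinfo lshostvdiskmap linux2
```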
4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This step
makes them unmanaged, as seen in Example 9-21 on page 730. If the VDisk still has
uncommitted write data in the cache, the command fails with this error message:
CMMVC6212E The command failed because data in the cache has not been committed
to disk
You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.
The SVC will automatically destage uncommitted cached data two minutes after the
last write activity for the VDisk. How much data there is to destage, and how busy the
I/O subsystem is, determine how long this command takes to complete.
You can check if the VDisk has uncommitted data in the cache by using the command
svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but any data has been
lost.
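The fast_write_state check can be scripted before you attempt the removal. The abridged detail output below is an illustrative stand-in for what svcinfo lsvdisk prints for one VDisk; only the attribute extraction and the comparison are the point here.

```shell
# Abridged, illustrative sample of 'svcinfo lsvdisk palau_SANB' output.
detail='id 29
name palau_SANB
status online
fast_write_state empty'

state=$(printf '%s\n' "$detail" | awk '$1 == "fast_write_state" { print $2 }')
if [ "$state" = "empty" ]; then
    echo "cache destaged - safe to run: svctask rmvdisk palau_SANB"
else
    echo "cache not empty (state: ${state}) - wait and retry"
fi
```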
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
the disks from the SVC back to the Linux server.
Important: If one of the disks is used to boot your Linux server, you must make sure
that it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds
that disk during its initialization.
6. Power on your host server and enter your FC HBA BIOS before booting the OS. Make
sure that you change the boot configuration, so that it points to the SVC. In our example,
we have performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
e. Exit the menu and save your changes.
Important: This is the last step that you can perform and still safely back out everything
that you have done so far.
Up to this point, you can reverse all of the actions that you have performed so far to get
the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the VDisks with the svctask mkvdisk command.
Remap the VDisks back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
We then manage those LUNs with the SVC, move them between other managed disks, and
then, finally move them back to image mode disks, so that those LUNs can then be masked
and mapped back to the VMware ESX server directly.
This example can help you perform any one of the following activities in your environment:
Move your ESX server’s data LUNs (that are your VMware vmfs file systems where you
might have your virtual machines stored), which are directly accessed from a storage
subsystem, to virtualized disks under the control of the SVC.
Move LUNs between storage subsystems while your VMware virtual machines are still
running. You might perform this activity to move the data onto LUNs that are more
appropriate for the type of data that is stored on those LUNs, taking into account
availability, performance, and redundancy. We describe this step in 9.7.4, “Migrating the
image mode VDisks” on page 742.
Move your VMware ESX server’s LUNs back to image mode VDisks so that they can be
remapped and remasked directly back to the server. This step starts in 9.7.5, “Preparing to
migrate from the SVC” on page 745.
You can use these activities individually, or together, to migrate your VMware ESX server’s
LUNs from one storage subsystem to another storage subsystem, using the SVC as your
migration tool. If you do not use all three activities, you can introduce the SVC in your
environment, or move the data between your storage subsystems.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
Figure 9-59 ESX environment before migration
Figure 9-59 shows our ESX server connected to the SAN infrastructure. It has two LUNs that
are masked directly to it from our storage subsystem.
Our ESX server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 9-59:
The ESX Server’s HBA cards are zoned so that they are in the Green zone with our
storage subsystem.
The two LUNs that have been defined on the storage subsystem and that use LUN
masking are directly available to our ESX server.
If you have an SVC already connected, skip to the instructions that are given in 9.7.2,
“Preparing your SVC to virtualize disks” on page 735.
Connecting the SVC to your SAN fabric will require you to perform these tasks:
Assemble your SVC components (nodes, uninterruptible power supply unit, and Master
Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on
your SAN.
Create and configure your SVC cluster.
Create these additional zones:
– An SVC node zone (the Black zone in Figure 9-60 on page 735). This
zone only contains all of the ports (or WWNs) for each of the SVC nodes in your
cluster. Our SVC is made up of a two node cluster where each node has four ports. So,
our Black zone has eight WWNs defined.
– A storage zone (our Red zone). This zone also has all of the ports or WWNs from the
SVC node zone, as well as the ports/WWNs for all of the storage subsystems that SVC
will virtualize.
– A host zone (our Blue zone). This zone contains the ports or WWNs for each host that
will access VDisks, together with the ports that are defined in the SVC node zone.
Important: Do not put your storage subsystems in the host (Blue) zone. This zone is
an unsupported configuration and can lead to data loss.
Figure 9-60 on page 735 shows the environment that we set up.
Figure 9-60 SAN environment with SVC attached
We create an empty MDG for each of the disks, by using the commands in Example 9-23.
Our ESX-BOOT-MDG MDG holds the boot LUN and our ESX-DATA-MDG MDG holds our
data LUN.
Log in to your VMware management console as root, navigate to Configuration, and then,
select Storage Adapter. The Storage Adapters are shown on the right side of this window
and display all of the necessary information. Figure 9-61 shows our WWNs, which are
210000E08B89B8C0 and 210000E08B892BCD.
Figure 9-61 Obtain your WWN using the VMware Management Console
Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs, which
have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 9-24
shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it
indicates that we have a zone configuration problem.)
After verifying that the SVC can see our host, we create the host entry and assign the WWN
to this entry. Example 9-25 shows these commands.
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode VDisks.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive, and choose Properties. Figure 9-63 on
page 738 and Figure 9-62 on page 738 show our serial numbers.
Now, we are ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as VDisks.
9.7.3 Move the LUNs to the SVC
In this step, we move the LUNs that are assigned to the ESX server and reassign them to the
SVC.
The virtual machines are located on these LUNs. So, in order to move these LUNs under the
control of the SVC, we do not need to reboot the entire ESX server, but we have to stop and
suspend all VMware guests that are using these LUNs.
2. Next, identify all of the VMware guests that are using this LUN and shut them down. One
way to identify them is to highlight the virtual machine and open the Summary Tab. The
datapool that is used is displayed under Datastore. Figure 9-66 on page 740 shows a
Linux virtual machine using the datastore named SLES_Costa_Rica.
3. If you have several ESX hosts, also check the other ESX hosts to make sure that there is
no guest operating system that is running and using this datastore.
4. Repeat steps 1 to 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem
management tool) to unmap and unmask the disks from the ESX server and to remap and
to remask the disks to the SVC.
6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
will be discovered and named as mdiskN, where N is the next available MDisk number
(starting from 0). Example 9-27 shows the commands that we used to discover our
MDisks and to verify that we have the correct MDisks.
Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
command task display) with the serial number that you obtained earlier (in Figure 9-62 and
Figure 9-63 on page 738).
7. After we have verified that we have the correct MDisks, we rename them to avoid
confusion in the future when we perform other MDisk-related tasks (Example 9-28).
8. We create our image mode VDisks with the svctask mkvdisk command (Example 9-29).
The parameter -vtype image makes sure that it will create image mode VDisks, which
means that the virtualized disks will have the exact same layout as though they were not
virtualized.
9. Finally, we can map the new image mode VDisks to the host. Use the same SCSI LUN IDs
as on the storage subsystem for the mapping (Example 9-30).
We also need a Green zone for our host to use when we are ready for it to directly access the
disk, after it has been removed from the SVC.
While the migration is running, our VMware ESX server, as well as our VMware guests, will
remain running.
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 9-32. Listing the MDG with the svcinfo lsmdiskgrp command shows that
the free capacity on the old MDG is slowly increasing as those extents are moved to the new
MDG.
id name        status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3  MDG_Nile_VM online 2           2           130.0GB  512         1.0GB         130.00GB         130.00GB      130.00GB      100            0
4  MDG_ESX_VD  online 3           0           165.0GB  512         35.0GB        0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>
If you compare the svcinfo lsmdiskgrp output after the migration, as shown in
Example 9-33, you can see that all of the virtual capacity has now been moved from the old
MDG (MDG_Nile_VM) to the new MDG (MDG_ESX_VD). The mdisk_count column shows
that the capacity is now spread over three MDisks.
Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and
you can remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove
the subsystem from our SAN fabric.
You might want to perform this activity for any one of these reasons:
You purchased a new storage subsystem, and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no
longer need that host connected to the SVC.
You want to ship a host, and its data, that currently is connected to the SVC, to a site
where there is no SVC.
Changes to your environment no longer require this host to use the SVC.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in “Adding a new
storage subsystem to SVC” on page 742 and “Make fabric zone changes” on page 742.
Even though the MDisks will not stay in the SVC for long, we still recommend that you rename
them to more meaningful names, so that they do not get confused with other MDisks being
used by other activities. Also, we create the MDGs to hold our new MDisks. Example 9-35
shows these tasks.
4 MDG_ESX_VD  online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0       512 0      0.00MB   0.00MB   0.00MB   0  0
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is ready for the VDisk migration to image mode VDisks.
During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. We can continue to run and continue to use the virtual
machines that are running on the server.
You can check the migration status with the svcinfo lsmigrate command, as shown in
Example 9-37 on page 748.
After the migration has completed, the image mode VDisks are ready to be removed from the
ESX server, and the real LUNs can be mapped and masked directly to the host using the
storage subsystem’s tool.
In our example, we have moved the virtual machine disks, so in order to remove these LUNs
from the control of the SVC, we have to stop and suspend all of the VMware guests that are
using these LUNs. Perform the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo
lshostvdiskmap command, as shown in Example 9-38. Compare the VDisk UIDs to match
each VDisk with its SCSI LUN ID.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
0  vdisk_A      0 io_grp0 online 2 MDG_Image  36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>
2. Shut down and suspend all of our guests using the LUNs. You can use the same method
that is used in “Move VMware guest LUNs” on page 739 to identify the guests that are
using this LUN.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
(Example 9-39). To double-check that you have removed the VDisks, use the svcinfo
lshostvdiskmap command, which shows that these VDisks are no longer mapped to the
ESX server.
4. Remove the VDisks from the SVC by using the svctask rmvdisk command, which makes
the MDisks unmanaged, as shown in Example 9-40.
Cached data: When you run the svctask rmvdisk command, the SVC first
double-checks that there is no outstanding dirty cached data for the VDisk that is being
removed. If there is still uncommitted cached data, the command fails with this error
message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.
The SVC automatically destages uncommitted cached data two minutes after the
last write activity for the VDisk. The amount of data to destage and how busy the
I/O subsystem is determine how long this command takes to complete.
You can check if the VDisk has uncommitted data in the cache by using the svcinfo
lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but the data has been
lost.
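This check can be scripted by filtering the lsvdisk output for the attribute. The here-document below stands in for captured output; on a live cluster, you would pipe the output of svcinfo lsvdisk <VDISKNAME> into the same awk command:

```shell
# Extract fast_write_state from an lsvdisk listing; "empty" means that it
# is safe to run svctask rmvdisk against the VDisk.
state=$(awk '$1 == "fast_write_state" { print $2 }' <<'EOF'
id 0
name vdisk_A
status online
fast_write_state empty
EOF
)
echo "$state"
```

For this sample, the script prints empty; a result of not_empty means that you must wait for the destage to complete before removing the VDisk.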
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
the disks from the SVC back to the ESX server. Remember in Example 9-38 on page 748
that we have recorded the SCSI LUNs’ IDs. To map your LUNs on the storage subsystem,
use the same SCSI LUN IDs that you used in the SVC.
Important: This is the last step that you can perform and still safely back out of
everything you have done so far.
Up to this point, you can reverse all of the actions that you have performed so far to get
the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the VDisks with the svctask mkvdisk command.
Remap the VDisks back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
6. Using the VMware management console, rescan to discover the new VDisk. Figure 9-68
shows the view before the rescan. Figure 9-69 on page 751 shows the view after the
rescan. Note that the size of the LUN has changed, because we have moved to another
LUN on another storage subsystem.
Figure 9-69 After adapter rescan
During the rescan, you might receive geometry errors when ESX discovers that the old
disk has disappeared. Your VDisk will appear with a new vmhba address, and VMware will
recognize it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask
detectmdisk command. The MDisks are discovered as offline and, then, automatically
removed when the SVC determines that there are no VDisks associated with these
MDisks.
We then manage those LUNs with the SVC, move them between other managed disks, and
then finally move them back to image mode disks, so that those LUNs can then be masked
and mapped back to the AIX server directly.
If you use this example, it can help you perform any of the following activities in your
environment:
Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC, which is the first activity that you perform when introducing the
SVC into your environment. This section shows that your host downtime is only a few
minutes while you remap and remask disks using your storage subsystem LUN
management tool. This step starts in 9.8.2, “Preparing your SVC to virtualize disks” on
page 754.
Move data between storage subsystems while your AIX server is still running and
servicing your business application. You might perform this activity if you were removing a
storage subsystem from your SAN environment and if you want to move the data onto
LUNs that are more appropriate for the type of data that is stored on those LUNs.
Use these activities individually or together to migrate your AIX server’s LUNs from one
storage subsystem to another storage subsystem, using the SVC as your migration tool. If
you use only some of these activities, you can use them to introduce the SVC into, or
remove it from, your environment.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
[Figure 9-70: the AIX host and an IBM or OEM storage subsystem on the SAN, connected through the Green zone]
Figure 9-70 shows our AIX server connected to our SAN infrastructure. It has two LUNs
(hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.
The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the
itsoaixvg1 LVM group, as shown in Example 9-41 on page 753.
Example 9-41 AIX SAN configuration
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02 1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
hdisk3 0009cdda0a4c0dd5 itsoaixvg active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#
Our AIX server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 9-70 on page 752:
The AIX server’s HBA cards are zoned so that they are in the Green (dotted line) zone,
with our storage subsystem.
The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem, and
using LUN masking, are directly available to our AIX server.
If you have an SVC already connected, skip to 9.8.2, “Preparing your SVC to virtualize disks”
on page 754.
Be extremely careful, because connecting the SVC into your storage area network requires
you to connect cables to your SAN switches and alter your switch zone configuration.
Performing these activities incorrectly can render your SAN inoperable, so make sure that you
fully understand the effect of your actions.
Connecting the SVC to your SAN fabric will require you to perform these tasks:
Assemble your SVC components (nodes, uninterruptible power supply unit, and Master
Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on
your SAN.
Create and configure your SVC cluster.
Create these additional zones:
– An SVC node zone (our Black zone in Example 9-54 on page 763). This zone contains
only the ports (or WWNs) of the SVC nodes in your cluster. Our SVC is a two-node
cluster, where each node has four ports, so our Black zone has eight WWNs defined.
– A storage zone (our Red zone). This zone also has all of the ports and WWNs from the
SVC node zone, as well as the ports and WWNs for all of the storage subsystems that
SVC will virtualize.
Important: Do not put your storage subsystems in the host (Blue) zone, which is an
unsupported configuration and can lead to data loss.
[Figure: the AIX host, the SVC I/O group (I/O grp0), and two IBM or OEM storage subsystems on the SAN, with the Green, Red, Blue, and Black zones]
Example 9-42 Create empty mdiskgroup
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name       status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7  aix_imgmdg online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS2:admin>
First, we get the WWN for our AIX server's HBA, because many hosts are connected to our
SAN fabric and are in the Blue zone. We want to make sure that we have the correct WWN
to reduce our AIX server's downtime. Example 9-43 shows the commands to get the WWN;
our host has a WWN of 10000000C932A7FB.
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
##
The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs that the SVC
can see on the SAN fabric but that have not yet been allocated to a host. Example 9-44
shows the output of the nodes that it found in our SAN fabric. (If a port does not show up, it
indicates a zone configuration problem.)
IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:admin>
After verifying that the SVC can see our host (Kanaga), we create the host entry and assign
the WWN to this entry, as shown with the commands in Example 9-45.
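A sketch of those commands (the host name and WWPNs follow our example; svctask mkhost must be run against a live cluster):

```shell
# Create the host object with both of the HBA WWPNs found in Example 9-43
svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800

# Verify that the host ports show as logged in
svcinfo lshost Kanaga
```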
Names: The svctask chcontroller command enables you to change the discovered
storage subsystem name in SVC. In complex SANs, we recommend that you rename your
storage subsystem to a more meaningful name.
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode VDisks.
If you also use a DS4000 family storage subsystem, Storage Manager will provide the LUN
serial numbers. Right-click your logical drive, and choose Properties. Figure 9-72 on
page 758 and Figure 9-73 on page 758 show our serial numbers.
We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as VDisks.
9.8.3 Moving the LUNs to the SVC
In this step, we move the LUNs that are assigned to the AIX server and reassign them to the
SVC.
Because we only want to move the LUN that holds our application and data files, we move
that LUN without rebooting the host. The only requirement is that we unmount the file system
and vary off the Volume Group to ensure data integrity after the reassignment.
Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver
(SDD) is installed on the AIX server. You can install the SDD ahead of time; however,
doing so might require an outage of your host.
The following steps are required, because we intend to move both LUNs at the same time:
1. Confirm that the SDD is installed.
2. Unmount and vary off the Volume Groups:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that Volume Group with the
varyoffvg VOLUMEGROUP_NAME command.
Example 9-47 shows the commands that we ran on Kanaga.
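In outline, the quiesce sequence on the AIX host looks like this (the Volume Group names are from our example; the mount point is a placeholder):

```shell
# Quiesce the LUNs before reassigning them to the SVC (run on the AIX host)
umount /itsoaixfs        # unmount each file system that lives on the LUNs
varyoffvg itsoaixvg      # deactivate the Volume Group on hdisk3
varyoffvg itsoaixvg1     # deactivate the Volume Group on hdisk4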
3. Using Storage Manager (our storage subsystem management tool), we can unmap and
unmask the disks from the AIX server and remap and remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
will be discovered and named mdiskN, where N is the next available mdisk number
(starting from 0). Example 9-48 shows the commands that we used to discover our
MDisks and to verify that we have the correct MDisks.
Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
command task display) with the serial number that you discovered earlier (in
Figure 9-72 and Figure 9-73 on page 758).
5. After we have verified that we have the correct MDisks, we rename them to avoid
confusion in the future when we perform other MDisk-related tasks (Example 9-49).
6. We create our image mode VDisks with the svctask mkvdisk command and the option
-vtype image (Example 9-50). This command virtualizes the disks in the exact same layout
as though they were not virtualized.
7. Finally, we can map the new image mode VDisks to the host (Example 9-51).
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image VDisks onto other VDisks. You do not need to wait until
the FlashCopy process has completed before starting your application.
Now, we are ready to perform the following steps to put the image mode VDisks online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the Volume Group, and
then, run the varyonvg VOLUME_GROUP command to activate the Volume Group.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You are ready to start your application.
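In outline, the sequence on the AIX host looks like this (the Volume Group names are from our example; the mount point is a placeholder):

```shell
# Bring the image mode VDisks online on the AIX host
cfgmgr -vs               # rediscover the available LUNs
varyonvg itsoaixvg       # reactivate each Volume Group
varyonvg itsoaixvg1
mount /itsoaixfs         # remount the file systems, then start the application
```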
While the migration is running, our AIX server is still running, and we can continue accessing
the files.
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 9-53. Listing the MDG with the svcinfo lsmdiskgrp command shows that
the free capacity on the old MDG is slowly increasing while those extents are moved to the
new MDG.
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
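When several migrations are running, a small filter makes this output easier to read. The here-document below reproduces the captured output from Example 9-53; on a live cluster, pipe svcinfo lsmigrate into the same awk command:

```shell
# Print one line per active migration: the source VDisk index and its progress
awk '/^progress/                   { p = $2 }
     /^migrate_source_vdisk_index/ { print "vdisk " $2 ": " p "%" }' <<'EOF'
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
EOF
```

For the sample above, this prints vdisk 8: 10% and vdisk 9: 0%.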
After this task has completed, Example 9-54 on page 763 shows that the VDisks are spread
over three MDisks in the aix_vd MDG. The old MDG is empty.
Example 9-54 Migration complete
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>
Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and
you can remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove
the subsystem from our SAN fabric.
You might want to perform this activity for one of these reasons:
You purchased a new storage subsystem, and you were using the SVC as a tool to
migrate from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk and you no
longer need that host connected to the SVC.
You want to ship a host, and its data, that is currently connected to the SVC to a site where
there is no SVC.
Changes to your environment no longer require this host to use the SVC.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as shown in Figure 9-74.
[Figure 9-74: the AIX host, the SVC I/O group (I/O grp0), and two IBM or OEM storage subsystems on the SAN, with the Green, Red, Blue, and Black zones]
Create a Green zone for our host to use when we are ready for it to directly access the disk,
after it has been removed from the SVC.
After your zone configuration is set up correctly, the SVC can see the new storage
subsystem's controller; verify this connection by using the svcinfo lscontroller command,
as shown in Example 9-55 on page 765. It is also a good idea to rename the controller to a
more meaningful name by using the svctask chcontroller -name command.
Example 9-55 Discovering the new storage subsystem
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:admin>
Even though the MDisks will not stay in the SVC for long, we still recommend that you rename
them to more meaningful names so that they do not get confused with other MDisks that are
used by other activities. Also, we create the MDGs to hold our new MDisks, as shown in
Example 9-57.
Our SVC environment is ready for the VDisk migration to image mode VDisks.
30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
During the migration, our AIX server is unaware that its data is being physically moved
between storage subsystems.
After the migration is complete, the image mode VDisks are ready to be removed from the
AIX server, and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem's tool.
Because our LUNs only hold data files, and because we use a unique Volume Group, we can
remap and remask the disks without rebooting the host. The only requirement is that we
unmount the file system and vary off the Volume Group to ensure data integrity after the
reassignment.
Before you start: Moving LUNs to another storage subsystem might require a driver other
than SDD. Check with the storage subsystem's vendor to determine which driver you will
need. You might be able to install this driver ahead of time.
4. Remove the VDisks from the SVC by using the svctask rmvdisk command, which will
make the MDisks unmanaged, as shown in Example 9-60.
Cached data: When you run the svctask rmvdisk command, the SVC first
double-checks that there is no outstanding dirty cached data for the VDisk being
removed. If uncommitted cached data still exists, the command fails with the following
error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.
The SVC will automatically destage uncommitted cached data two minutes after the
last write activity for the VDisk. How much data there is to destage and how busy the
I/O subsystem is will determine how long this command takes to complete.
You can check whether the VDisk has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but any modified data
has been lost.
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
the disks from the SVC back to the AIX server.
Important: This step is the last step that you can perform and still safely back out of
everything you have done so far.
Up to this point, you can reverse all of the actions that you have performed so far to get
the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the VDisks with the svctask mkvdisk command.
Remap the VDisks back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
We are ready to access the LUNs from the AIX server. If all of the zoning and LUN masking
and mapping were done successfully, our AIX server will boot as though nothing has
happened:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Ccdisk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks. Example 9-61 shows the removal using SDD
and Example 9-62 on page 770 shows the removal using SDDPCM.
4. If your application and data are on an LVM volume, rediscover the Volume Group, and
then, run the varyonvg VOLUME_GROUP command to activate the Volume Group.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You are ready to start your application.
Finally, to make sure that the MDisks are removed from the SVC, run the svctask
detectmdisk command. The MDisks will first be discovered as offline, and then, they will
automatically be removed after the SVC determines that there are no VDisks associated with
these MDisks.
To use the SVC for migration purposes only, perform the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the
host.
8. Remove the SVC from your SAN.
9. Mount the LUNs, or start the host again.
10.The migration is complete.
As you can see, extremely little downtime is required. If you prepare everything correctly, you
can reduce your downtime to a few minutes. The copy process is handled by the SVC, so it
does not hinder host performance while the migration progresses.
To use the SVC for storage migrations, perform the steps that are described in the following
sections:
9.5.2, “Adding the SVC between the host system and the DS4700” on page 690
9.5.6, “Migrating the VDisk from image mode to image mode” on page 705
9.5.7, “Free the data from the SVC” on page 709
To migrate from a fully allocated VDisk to a Space-Efficient VDisk, perform these steps:
1. Add the target space-efficient copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
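The three steps map to these commands (a sketch using the VDisk and MDG names from Example 9-64; the -rsize, -autoexpand, and -grainsize values are the ones used there, and a live cluster is required):

```shell
# 1. Add a space-efficient copy of the fully allocated VDisk
svctask addvdiskcopy -mdiskgrp MDG_DS83 -rsize 2% -autoexpand \
        -grainsize 32 VD_Full

# 2. Wait for synchronization; repeat until the new copy shows sync yes
svcinfo lsvdiskcopy VD_Full

# 3. Remove the fully allocated source copy (copy 0)
svctask rmvdiskcopy -copy 0 VD_Full
```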
By using this feature, clients can easily free up managed disk space and make better use of
their storage, without needing to purchase any additional function for the SVC.
VDisk Mirroring and Space-Efficient VDisk functions are included in the base virtualization
license. Clients with thin-provisioned storage on an existing storage system can migrate their
data under SVC management using Space-Efficient VDisks without having to allocate
additional storage space.
Zero detect only works if the disk actually contains zeros; an uninitialized disk can contain
anything, unless the disk has been formatted (for example, by using the -fmtdisk flag on the
mkvdisk command).
Figure 9-76 Space-Efficient VDisk organization
copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
2. We then add a Space-Efficient VDisk copy with the VDisk Mirroring option by using the
addvdiskcopy command and the autoexpand parameter, as shown in Example 9-64.
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
As you can see in Example 9-64 on page 774, VD_Full has a copy_id 1 whose
used_capacity is 0.41 MB, which corresponds to the metadata alone, because only zeros
exist on the disk. The real_capacity is 323.57 MB, which is equal to the -rsize 2% value that
is specified in the addvdiskcopy command. The free_capacity is 323.17 MB, which is equal
to the real capacity minus the used capacity.
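The relationship between these fields can be checked with simple arithmetic on the displayed values. Note that each field in the listing is rounded to 0.01 MB, so recomputing the free capacity from the rounded figures gives 323.16 MB rather than the reported 323.17 MB:

```shell
# free_capacity = real_capacity - used_capacity (values in MB, as displayed)
awk 'BEGIN { printf "free = %.2f MB\n", 323.57 - 0.41 }'
```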
When zeros are written to the disk, the Space-Efficient VDisk does not consume space.
Example 9-65 shows that the Space-Efficient VDisk does not consume space even when the
copies are in sync.
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
3. We can split the VDisk Mirror or remove one of the copies, keeping the space-efficient
copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy
command:
– If you need your copy as a space-efficient clone, we suggest that you use the
splitvdiskcopy command, because that command generates a new VDisk, which you
can map to any server that you want.
– If you need your copy because you are migrating from a previously fully allocated
VDisk to a Space-Efficient VDisk without any effect on the server operations, we
suggest that you use the rmvdiskcopy command. In this case, the original VDisk name
is kept, and it remains mapped to the same server.
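The two alternatives can be sketched as small helper functions. The cluster name SVC1, the copy IDs, and the new VDisk name are illustrative assumptions (verify the copy IDs with lsvdisk first), and echo keeps the sketch in dry-run mode; remove it to send the commands through plink.

```shell
# Alternative 1: split copy $2 off as a new, independently mappable VDisk.
svc_split_copy() {
  echo plink SVC1 -l admin svctask splitvdiskcopy -copy "$2" -name "$3" "$1"
}

# Alternative 2: remove copy $2 (the fully allocated copy); the VDisk keeps
# its name and its host mappings, and only the space-efficient copy remains.
svc_drop_copy() {
  echo plink SVC1 -l admin svctask rmvdiskcopy -copy "$2" "$1"
}

svc_split_copy VD_Full 1 VD_SE_Clone   # space-efficient clone use case
svc_drop_copy  VD_Full 0               # in-place migration use case
```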
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
Remember that VDisk Mirroring and VDisk migration are concurrent operations, whereas
Metro Mirror must be considered disruptive for data access, because at the end of the
migration, we must map the Metro Mirror target VDisk to the server.
With this example, we show how you can migrate data with intracluster Metro Mirror using a
Space-Efficient VDisk as the target VDisk, and how the real capacity and the free capacity of
the target VDisk change during the process.
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000F
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 307.20MB
free_capacity 306.79MB
overallocation 5000
autoexpand off
warning 1
grainsize 32
IBM_2145:ITSO-CLS2:admin>svctask mkrcrelationship -cluster 0000020061006FCA
-master VD_Full -aux VD_SEV -name MM_SEV_rel
RC Relationship, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsrcrelationship MM_SEV_rel
2. We start the rcrelationship and observe how the space allocation in the target VDisk
changes until it reaches the total used capacity of the source.
Example 9-69 shows how to start the rcrelationship, and it shows the space allocation
changing.
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 3.64GB
real_capacity 3.95GB
free_capacity 312.89MB
overallocation 380
autoexpand on
warning 80
grainsize 32
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.02GB
real_capacity 15.03GB
free_capacity 11.97MB
overallocation 99
autoexpand on
warning 80
grainsize 32
3. In conclusion, it is possible to use Metro Mirror to migrate data, and we can use a
Space-Efficient VDisk as the target VDisk. However, this action does not make sense,
because at the end of the initial data synchronization, the Space-Efficient VDisk will
allocate as much space as the source (in our case, VD_Full). If you want to use Metro
Mirror to migrate your data, we suggest that you use a fully allocated VDisk for the source
and the target VDisks.
Appendix A. Scripting
In this appendix, we present a high-level overview of how to automate various tasks by
creating scripts using the IBM System Storage SAN Volume Controller (SVC) command-line
interface (CLI).
The basic flow of such a script is: create an SSH connection to the SVC, run the command or
commands (with either scheduled or manual activation), and perform logging.
On UNIX systems, you can use the ssh command to create an SSH connection with the SVC.
On Windows systems, you can use a utility called plink.exe, which is provided with the PuTTY
tool, to create an SSH connection with the SVC. In the following examples, we use plink to
create the SSH connection to the SVC.
Performing logging
When using the CLI, not all commands provide a usable response to determine the status of
the invoked command. Therefore, we recommend that you always create checks that can be
logged for monitoring and troubleshooting purposes.
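One way to implement such checks is a small wrapper that timestamps every invocation and records the exit status. The run_logged helper name is illustrative; echo stands in for the real plink or ssh call so the logic can be exercised offline.

```shell
LOG=./svc_script.log

# Run a command, appending the invocation, its output, and its exit
# status to the log for later monitoring and troubleshooting.
run_logged() {
  printf '%s RUN %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" >> "$LOG"
  "$@" >> "$LOG" 2>&1
  rc=$?
  printf '%s RC=%d\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$rc" >> "$LOG"
  return $rc
}

# Real use would look like:
#   run_logged plink SVC1 -l admin svcinfo lsvdisk -filtervalue name=VD001
run_logged echo "sample command output"
```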
Automated virtual disk creation
In the following example, we create a simple bat script that automates virtual disk (VDisk)
creation, to illustrate how scripts are created. Creating scripts to automate SVC
administrative tasks is not limited to bat scripting; in principle, you can encapsulate the
CLI commands in scripts using any programming language that you prefer, or you can use
program applets to perform routine tasks.
Your version of PuTTY might have these parameters set in other categories.
Listing created VDisks
To log the fact that our script created the VDisk that we defined when executing the script, we
use the -filtervalue parameter:
svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log
-------------------------------------VDiskScript.bat---------------------------
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb
-name %2 -mdiskgrp %3
From the output of the log, as shown in Example A-2, we verify that the VDisk is created as
intended.
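For UNIX hosts, the same logic as VDiskScript.bat can be sketched in POSIX shell with ssh in place of plink. The function wrapper is an illustrative addition, and echo keeps it in dry-run mode; remove the echo statements to execute the commands.

```shell
# make_vdisk <size_gb> <name> <mdiskgrp>: create a VDisk, then append the
# filtered lsvdisk output to a log, mirroring the bat script's %1 %2 %3.
make_vdisk() {
  size=$1; name=$2; mdiskgrp=$3
  echo ssh admin@SVC1 svctask mkvdisk -iogrp 0 -vtype striped \
    -size "$size" -unit gb -name "$name" -mdiskgrp "$mdiskgrp"
  echo ssh admin@SVC1 svcinfo lsvdisk -filtervalue "name=$name" \
    '>> VDiskScript.log'
}

make_vdisk 10 VD_TEST MDG_DS47
```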
We have written this script in Perl to work without modification using Perl on UNIX systems
(such as AIX or Linux), Perl for Windows, or Perl in a Windows Cygwin environment.
+ msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped)
+ mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: DS4500)
+ mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: DS4500)
+ msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped)
+ mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: DS4500)
+ mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: DS4500)
+ iogrp1 (1)
+ NODES
+ HOSTS
+ VDISKS
+ iogrp2 (2)
+ NODES
+ HOSTS
+ VDISKS
+ iogrp3 (3)
+ NODES
+ HOSTS
+ VDISKS
+ recovery_io_grp (4)
+ NODES
+ HOSTS
+ VDISKS
+ recovery_io_grp (4)
+ NODES
+ HOSTS
+ itsosvc1 (2200642269468)
+ VDISKS
$HOST = $ARGV[0];
$USER = ($ARGV[1] ? $ARGV[1] : "admin");
$PRIVATEKEY = ($ARGV[2] ? $ARGV[2] : "/path/toprivatekey");
$DEBUG = 0;
die(sprintf("Please call script with cluster IP address. The syntax is: \n%s
ipaddress loginname privatekey\n",$0))
if (! $HOST);
sub TalkToSVC() {
my $COMMAND = shift;
my $NODELIM = shift;
my $ARGUMENT = shift;
my @info;
} else {
die ("ERROR: Unknown SSHCLIENT [$SSHCLIENT]\n");
}
if ($NODELIM) {
$CMD = "$SSH svcinfo $COMMAND $ARGUMENT\n";
} else {
$CMD = "$SSH svcinfo $COMMAND -delim : $ARGUMENT\n";
}
print "Running $CMD" if ($DEBUG);
open SVC,"$CMD|";
while (<SVC>) {
print "Got [$_]\n" if ($DEBUG);
chomp;
push(@info,$_);
}
close SVC;
return @info;
}
sub DelimToHash() {
my $COMMAND = shift;
my $MULTILINE = shift;
my $NODELIM = shift;
my $ARGUMENT = shift;
my %hash;
@details = &TalkToSVC($COMMAND,$NODELIM,$ARGUMENT);
my $linenum = 0;
foreach (@details) {
print "$linenum, $_" if ($DEBUG);
if ($linenum == 0) {
@heading = split(':',$_);
} else {
@line = split(':',$_);
$counter = 0;
foreach $id (@heading) {
printf("$COMMAND: ID [%s], value [%s]\n",$id,$line[$counter]) if
($DEBUG);
if ($MULTILINE) {
$hash{$linenum,$id} = $line[$counter++];
} else {
$hash{$id} = $line[$counter++];
}
}
}
$linenum++;
}
return %hash;
}
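The DelimToHash routine pairs the heading line of "svcinfo ... -delim :" output with each subsequent value line. The same transformation can be sketched in awk, fed here with a small hand-made sample in lsvdisk style:

```shell
sample='id:name:capacity
2:VD_Full:15.00GB
7:VD_SEV:15.00GB'

# First record: remember the headings. Later records: print id=value pairs,
# which is essentially what DelimToHash stores in its hash.
printf '%s\n' "$sample" | awk -F: '
  NR == 1 { for (i = 1; i <= NF; i++) head[i] = $i; next }
  { for (i = 1; i <= NF; i++) printf "%s=%s ", head[i], $i; print "" }'
```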
sub TreeLine() {
my $indent = shift;
my $line = shift;
my $last = shift;
for ($tab=1;$tab<=$indent;$tab++) {
print " ";
}
if (! $last) {
print "+ $line\n";
} else {
print "| $line\n";
}
}
sub TreeData() {
my $indent = shift;
my $printline = shift;
*data = shift;
*list = shift;
*condition = shift;
my $item;
foreach (@list) {
push(@show,$data{$numitem,$_})
}
&TreeLine($indent,sprintf($printline,@show),0);
}
}
# CONTROLLERS
&TreeLine($indentiogrp+1,'CONTROLLERS',0);
$lastnumcontroller = "";
foreach $controller (sort keys %controllers) {
$indentcontroller = $indent+2;
($numcontroller,$detail) = split($;,$controller);
next if ($numcontroller == $lastnumcontroller);
$lastnumcontroller = $numcontroller;
&TreeLine($indentcontroller,
sprintf('%s (%s)',
$controllers{$numcontroller,'controller_name'},
$controllers{$numcontroller,'id'})
,0);
# MDISKS
&TreeData($indentcontroller+1,
'%s (ID: %s CAP: %s MODE: %s)',
*mdisks,
['name','id','capacity','mode'],
{"SRC"=>$controllers{$numcontroller,'controller_name'},"DST"=>"controller_name"});
# MDISKGRPS
&TreeLine($indentiogrp+1,'MDISK GROUPS',0);
$lastnummdiskgrp = "";
foreach $mdiskgrp (sort keys %mdiskgrps) {
$indentmdiskgrp = $indent+2;
($nummdiskgrp,$detail) = split($;,$mdiskgrp);
next if ($nummdiskgrp == $lastnummdiskgrp);
$lastnummdiskgrp = $nummdiskgrp;
&TreeLine($indentmdiskgrp,
sprintf('%s (ID: %s CAP: %s FREE: %s)',
$mdiskgrps{$nummdiskgrp,'name'},
$mdiskgrps{$nummdiskgrp,'id'},
$mdiskgrps{$nummdiskgrp,'capacity'},
$mdiskgrps{$nummdiskgrp,'free_capacity'})
,0);
# MDISKS
&TreeData($indentcontroller+1,
'%s (ID: %s CAP: %s MODE: %s)',
*mdisks,
['name','id','capacity','mode'],
{"SRC"=>$mdiskgrps{$nummdiskgrp,'id'},"DST"=>"mdisk_grp_id"});
}
# IOGROUP
$lastnumiogrp = "";
foreach $iogrp (sort keys %iogrps) {
$indentiogrp = $indent+1;
($numiogrp,$detail) = split($;,$iogrp);
next if ($numiogrp == $lastnumiogrp);
$lastnumiogrp = $numiogrp;
&TreeLine($indentiogrp,sprintf('%s (%s)',$iogrps{$numiogrp,'name'},$iogrps{$numiogrp,'id'}),0);
$indentiogrp++;
# NODES
&TreeLine($indentiogrp,'NODES',0);
&TreeData($indentiogrp+1,
'%s (%s)',
*nodes,
['name','id'],
{"SRC"=>$iogrps{$numiogrp,'id'},"DST"=>"IO_group_id"});
# HOSTS
&TreeLine($indentiogrp,'HOSTS',0);
$lastnumhost = "";
%iogrphosts = &DelimToHash('lsiogrphost',1,0,$iogrps{$numiogrp,'id'});
foreach $host (sort keys %iogrphosts) {
my $indenthost = $indentiogrp+1;
($numhost,$detail) = split($;,$host);
next if ($numhost == $lastnumhost);
$lastnumhost = $numhost;
&TreeLine($indenthost,
sprintf('%s (%s)',$iogrphosts{$numhost,'name'},$iogrphosts{$numhost,'id'}),
0);
# HOSTVDISKMAP
%vdiskhostmap = &DelimToHash('lshostvdiskmap',1,0,$hosts{$numhost,'id'});
$lastnumvdisk = "";
foreach $vdiskhost (sort keys %vdiskhostmap) {
($numvdisk,$detail) = split($;,$vdiskhost);
&TreeData($indenthost+1,
'%s (ID: %s CAP: %s TYPE: %s STAT: %s)',
*vdisks,
['name','id','capacity','type','status'],
{"SRC"=>$vdiskhostmap{$numvdisk,'vdisk_id'},"DST"=>"id"});
}
}
# VDISKS
&TreeLine($indentiogrp,'VDISKS',0);
$lastnumvdisk = "";
foreach $vdisk (sort keys %vdisks) {
my $indentvdisk = $indentiogrp+1;
($numvdisk,$detail) = split($;,$vdisk);
next if ($numvdisk == $lastnumvdisk);
$lastnumvdisk = $numvdisk;
&TreeLine($indentvdisk,
sprintf('%s (ID: %s CAP: %s TYPE: %s)',
$vdisks{$numvdisk,'name'},
$vdisks{$numvdisk,'id'},
$vdisks{$numvdisk,'capacity'},
$vdisks{$numvdisk,'type'}),
0)
if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'});
# VDISKMEMBERS
if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'}) {
%vdiskmembers =
&DelimToHash('lsvdiskmember',1,1,$vdisks{$numvdisk,'id'});
}
}
}
}
}
Scripting alternatives
For an alternative to scripting, visit the Tivoli Storage Manager for Advanced Copy Services
product page:
http://www.ibm.com/software/tivoli/products/storage-mgr-advanced-copy-services/
Additionally, IBM provides a suite of scripting tools that is based on Perl. You can download
these scripting tools from this Web site:
http://www.alphaworks.ibm.com/tech/svctools
Recommendation: If you are planning to redeploy the old nodes in your environment to
create a test cluster or to add to another cluster, you must ensure that each WWNN of
these old nodes is set to a unique number on your SAN. We recommend that you
document the factory WWNN of the new nodes that you use to replace the old nodes and,
in effect, swap the WWNN so that each node still has a unique number. Failure to do so
can lead to a duplicate WWNN and worldwide port name (WWPN), causing unpredictable
SAN problems.
d. Under the IO_group_id and IO_group_name columns, record the iogroup_id or
iogroup_name for all of the nodes in the cluster.
e. Issue the following command from the CLI for each node_name or node_id to
determine the front_panel_id for each node and record the ID. This front_panel_id is
physically located on the front of every node (it is not the serial number), and you can
use this front_panel_id to determine which physical node equates to the node_name or
node_id that you plan to replace:
svcinfo lsnodevpd node_name or node_id
2. Perform the following steps to record the WWNN of the node that you want to replace:
a. Issue the following command from the CLI, where node_name or node_id is the name
or ID of the node for which you want to determine the WWNN:
svcinfo lsnode -delim : node_name or node_id
b. Record the WWNN of the node that you want to replace.
3. Verify that all VDisks, MDisks, and disk controllers are online and that none are in a state
of “Degraded”. If there are any VDisks, MDisks, or controllers in this state, resolve this
issue before going forward, or the loss of access to data might occur when you perform
step 4. This action is an especially important step if this node is the second node in the I/O
Group to be replaced.
Issue the following commands from the CLI, where object_id or object_name is the
controller ID or controller name that you want to view. Verify that each disk controller
shows its status as “degraded no”:
svcinfo lsvdisk -filtervalue "status=degraded"
svcinfo lsmdisk -filtervalue "status=degraded"
svcinfo lscontroller object_id or object_name
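The pre-check in this step lends itself to automation. In the following sketch, svc_query is a stand-in for the real "plink ... svcinfo" call; it is faked here with a variable so the refusal logic can be exercised offline.

```shell
svc_query() { printf '%s' "$FAKE_OUTPUT"; }   # replace with the real CLI call

# Return non-zero if any VDisk or MDisk shows up in the degraded filters,
# so a replacement script can refuse to continue past step 3.
check_no_degraded() {
  vd=$(svc_query lsvdisk -filtervalue status=degraded)
  md=$(svc_query lsmdisk -filtervalue status=degraded)
  if [ -n "$vd" ] || [ -n "$md" ]; then
    echo "degraded objects found; resolve before replacing the node" >&2
    return 1
  fi
}

FAKE_OUTPUT=''
check_no_degraded && echo "safe to continue"
```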
4. Issue the following CLI command to shut down the node that will be replaced, where
node_name or node_id is the name or ID of the node that you want to delete:
svctask stopcluster -node node_name or node_id
Important:
Do not power off the node through the front panel instead of using this command.
Be careful that you do not issue the stopcluster command without the -node
node_name or node_id parameter, because you will shut down the entire cluster if
you do.
Issue the following CLI command to ensure that the node is shut down and that the status
is “offline”, where node_name or node_id is the name or ID of the original node. The node
status must be “offline”:
svcinfo lsnode node_name or node_id
5. Issue the following CLI command to delete this node from the cluster and the I/O Group,
where node_name or node_id is the name or ID of the node that you want to delete:
svctask rmnode node_name or node_id
6. Issue the following CLI command to ensure that the node is no longer a member of the
cluster, where node_name or node_id is the name or ID of the original node. Do not list the
node in the command output:
svcinfo lsnode node_name or node_id
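Steps 4 through 6 can be collected into a single dry-run sketch. Here, echo prints each command instead of sending it through plink, and the node name is a placeholder:

```shell
NODE=node2   # placeholder for node_name or node_id

# Shut the node down, confirm it is offline, remove it, then confirm it
# is gone -- the order used in steps 4 to 6.
for cmd in \
  "svctask stopcluster -node $NODE" \
  "svcinfo lsnode $NODE" \
  "svctask rmnode $NODE" \
  "svcinfo lsnode $NODE"
do
  echo "plink SVC1 -l admin $cmd"
done
```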
Important recommendation:
Record and mark the Fibre Channel (FC) cables with the SVC node port number
(1-4) before removing them from the back of the node that is being replaced. You
must reconnect the cables on the new node exactly as they were connected on the
old node. Looking at the back of the node, the FC ports on the SVC nodes are
numbered 1-4 from left to right and must be reconnected in the same order, or the
port IDs will change, which can affect the hosts’ access to VDisks or cause
problems with adding the new node back into the cluster. The SVC Hardware
Installation Guide for your model shows the port numbering of the various node
models.
Failure to disconnect the FC cables now will likely cause SAN devices and SAN
management software to discover these new WWPNs that are generated when the
WWNN is changed to FFFFF in the following steps. This discovery might cause
ghost records to be seen after the node is powered down. These ghost records do
not necessarily cause a problem, but you might have to reboot a SAN device to clear
out the record.
In addition, the ghost records might cause problems with AIX dynamic tracking
functioning correctly, assuming that it is enabled, so we highly recommend
disconnecting the node’s FC cables as instructed in the following step before
continuing to any other steps.
a. Disconnect the four FC cables from this node before powering the node on in the next
step.
b. Power on this node using the power button on the front panel and wait for it to boot up
before going to the next step.
c. From the front panel of the node, press the down button until the Node: panel is
displayed, and then use the right and left navigation buttons to display the Status:
panel.
d. Press and hold the down button, press and release the select button, and then release
the down button. The WWNN of the node is displayed.
e. Press and hold the down button, press and release the select button, and then release
the down button to enter the WWNN edit mode. The first character of the WWNN is
highlighted.
f. Press the up or down button to increment or decrement the character that is displayed.
g. Press the left navigation button to move to the next field or the right navigation button to
return to the previous field and repeat step f for each field. At the end of this step, the
characters that are displayed must be FFFFF.
h. Press the select button to retain the characters that you have updated and return to the
WWNN window.
i. Press the select button again to apply the characters as the new WWNN for the node.
Note: You must press the select button twice as steps h and i instruct you to do. After
step h, it might appear that the WWNN has been changed, but step i actually applies
the change.
8. Power off this node using the power button on the front panel and remove the node from
the rack, if desired.
9. Install the replacement node and its uninterruptible power supply unit in the rack and
connect the node to the uninterruptible power supply unit cables according to the SVC
Hardware Installation Guide, which is available at this Web site:
http://www.ibm.com/storage/support/2145
Note: Do not connect the FC cables to the new node during this step.
10.Power on the replacement node from the front panel with the FC cables disconnected.
After the node has booted, ensure that the node displays Cluster: on the front panel and
nothing else. If a word other than Cluster: is displayed, contact IBM Support for
assistance before continuing.
11.Record the WWNN of this new node, because you will need the WWNN if you plan to
redeploy the old nodes that are being replaced. Perform the following steps to change the
WWNN of the replacement node to match the WWNN that you recorded in step 2 on
page 801:
a. From the front panel of the node, press the down button until the Node: panel is
displayed, and then use the right and left navigation buttons to display the Status:
panel.
b. Press and hold the down button, press and release the select button, and then, release
the down button. The WWNN of the node is displayed. Record this number for use in
the redeployment of the old nodes.
c. Press and hold the down button, press and release the select button, and then, release
the down button to enter the WWNN edit mode. The first character of the WWNN is
highlighted.
d. Press the up or down button to increment or decrement the character that is displayed.
e. Press the left navigation button to move to the next field or the right navigation button to
return to the previous field and repeat step d for each field. At the end of this step, the
characters that are displayed must be the same as the WWNN that you recorded in
step 2 on page 801.
f. Press the select button to retain the characters that you have updated, and return to
the WWNN panel.
g. Press the select button to apply the characters as the new WWNN for the node.
Press select twice: You must press the select button twice as steps f and g instruct
you to do. After step f, it might appear that the WWNN has been changed, but step g
actually applies the change.
h. The node displays Cluster: on the front panel and is now ready to begin the process
of adding the node to the cluster. If another word is displayed, contact IBM Support for
assistance before continuing.
12.Connect the FC cables to the same port numbers on the new node that they were
connected to originally on the old node. See step 7 on page 802.
13.Issue the following CLI command to verify that the last five characters of the WWNN are
correct:
svcinfo lsnodecandidate
Note: If the WWNN does not match the original node’s WWNN exactly as recorded in
step 2 on page 801, you must repeat step 11 on page 803.
14.Add the node to the cluster and ensure that it is added back to the same I/O Group as the
original node. Use the following command, where wwnn_arg and iogroup_name or
iogroup_id are the items that you recorded in steps 1 on page 800 and 2 on page 801:
svctask addnode -wwnodename wwnn_arg -iogrp iogroup_name or iogroup_id
15.Verify that all of the VDisks for this I/O Group are back online and are no longer degraded.
If you perform the node replacement process disruptively, so that no I/O occurs to the
I/O Group, you must still wait a certain period of time (we recommend 30 minutes in this
case, too) to make sure that the new node is back online and available to take over before
you replace the next node in the I/O Group. See step 3 on page 801.
Both nodes in the I/O Group cache data; however, the cache sizes are asymmetric if the
remaining partner node in the I/O Group is a SAN Volume Controller 2145-4F2 node. The
replacement node is limited by the cache size of the partner node in the I/O Group in this
case. Therefore, the replacement node does not utilize the full 8 GB cache size until the other
2145-4F2 node in the I/O Group is replaced.
You do not have to reconfigure the host multipathing device drivers because the replacement
node uses the same WWNN and WWPNs as the previous node. The multipathing device
drivers detect the recovery of paths that are available to the replacement node.
The host multipathing device drivers take approximately 30 minutes to recover the paths.
Therefore, do not upgrade the other node in the I/O Group for at least 30 minutes after
successfully upgrading the first node in the I/O Group. If you have other nodes in other I/O
Groups to upgrade, you can perform other upgrades while you wait the 30 minutes for the
host multipathing device drivers to recover the paths.
16.Repeat steps 2 on page 801 to 15 for each node that you want to replace.
This task assumes the following situation:
Your cluster contains six or fewer nodes.
All nodes that are configured in the cluster are present.
All errors in the cluster error log are fixed.
All managed disks (MDisks) are online.
You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN
Volume Controller 2145-8G4 node.
There are no VDisks, MDisks, or controllers with a status of degraded or offline.
The SVC configuration has been backed up through the CLI or GUI, and the file has been
saved to the Master Console.
Download, install, and run the latest “SVC Software Upgrade Test Utility” from this Web
site to verify that there are no known issues with the current cluster environment before
beginning the node upgrade procedure:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
8. Zone additional node ports in the existing SVC-only zones. You must have an SVC zone in
each fabric with nothing but the ports from the SVC nodes in it. These zones are
necessary for the initial formation of the cluster, because nodes need to see each other to
form a cluster. This zone might not exist, and the only way that the SVC nodes see each
other is through a storage zone that includes all of the node ports. However, we highly
recommend that you have a separate zone in each fabric with only the SVC node ports
included to avoid the risk of the nodes losing communication with each other if the storage
zones are changed or deleted.
9. Zone new node ports in the existing SVC/Storage zones. You must have an SVC/Storage
zone in each fabric for each disk subsystem that is used with the SVC. Each zone must
have all of the SVC ports in that fabric, along with all of the disk subsystem ports in that
fabric that will be used by the SVC to access the physical disks.
10.On each disk subsystem that is seen by the SVC, use its management interface to map
the LUNs that are currently used by the SVC to all of the new WWPNs of the new nodes
that will be added to the SVC cluster. This step is a critical step, because the new nodes
must see the same LUNs that the existing SVC cluster nodes see before adding the new
nodes to the cluster; otherwise, problems might arise. Also, note that all of the SVC ports
that are zoned with the back-end storage must see all of the LUNs that are presented to
SVC through all of those same storage ports, or the SVC will mark the devices as
degraded.
11.After all of these activities have been completed, you can add the additional nodes to the
cluster by using the SVC GUI or CLI. The cluster does not mark any devices as degraded,
because the new nodes will see the same cluster configuration, the same storage zoning,
and the same LUNs as the existing nodes.
12.Check the status of the controllers and MDisks to ensure that there is nothing marked
degraded. If a controller or MDisk is marked degraded, it is not configured properly, and
you must fix the configuration immediately before performing any other action on the
cluster. If you cannot determine fairly quickly what is wrong, remove the newly added
nodes from the cluster until the problem is resolved. You can contact IBM Support for
assistance.
This task is not a difficult process, but it can take time to complete, so you must plan
accordingly.
Important: Both nodes in the I/O Group cache data; however, the cache sizes are
asymmetric. The replacement node is limited by the cache size of the partner node in
the I/O Group. Therefore, the replacement node does not utilize the full size of its
cache.
10.From each host, issue a rescan of the multipathing software to discover the new paths to
VDisks. If your system is inactive, you can perform this step after you have replaced all of
the nodes in the cluster. The host multipathing device drivers take approximately 30
minutes to recover the paths.
11.Refer to the documentation that is provided with your multipathing device driver for
information about how to query paths to ensure that all of the paths have been recovered
before proceeding to the next step.
12.Repeat steps 1 to 10 for the partner node in the I/O Group.
Symmetric cache sizes: After you have upgraded both nodes in the I/O Group, the
cache sizes are symmetric, and the full 8 GB of cache is utilized.
13.Repeat steps 1 to 11 for each node in the cluster that you want to replace.
14.Resume host I/O.
Appendix C
Although this book was written at an SVC V4.3.x level, many of the underlying principles
remain applicable to SVC 5.1.
To ensure the desired performance and capacity of your storage infrastructure, from time to
time, we recommend that you conduct a performance and capacity analysis to reveal the
business requirements of your storage environment.
Performance considerations
Discussions of the performance of a system always come down to identifying the
bottleneck, and thereby the limiting factor, of that system. At the same time, you must
consider for which workload you identify a limiting factor, because the component that
limits one workload might not be the limiting factor for other workloads.
When designing a storage infrastructure using SVC, or using an SVC storage infrastructure,
you must therefore take into consideration the performance and capacity of your
infrastructure. Ensuring that your SVC is monitored is a key point to ensure that you obtain
the desired performance.
SVC
The SVC cluster is scalable up to eight nodes, and the performance is almost linear when
adding more nodes into an SVC cluster, until it becomes limited by other components in the
storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it
does not diminish the necessity to have a storage area network (SAN) and disk subsystems
that can deliver the desired performance. Essentially, SVC performance improvements are
gained by having as many managed disks (MDisks) as possible, thereby creating a greater
level of concurrent I/O to the back end without overloading a single disk or array.
In the following sections, we discuss the performance of the SVC and assume that there are
no bottlenecks in the SAN or on the disk subsystem.
Performance monitoring
In this section, we discuss several performance monitoring techniques.
Statistics gathering is enabled or disabled on a cluster basis. When gathering is enabled, all
of the nodes in the cluster gather statistics.
SVC supports sampling periods of the gathering of statistics from 1 to 60 minutes in steps of
one minute.
Previous versions of the SVC provided per-cluster statistics. These statistics were later
superseded by per-node statistics, which provide a greater range of information. From
SVC 5.1.0 onward, only per-node statistics are generated; per-cluster statistics are no
longer available, and clients need to use per-node statistics instead.
The date is in the form <yymmdd>, and the time is in the form <hhmmss>; for example,
Nm_stats_1_020808_105224. Example 9-70 shows typical MDisk and VDisk statistics file
names.
Tip: You can use pscp.exe, which is installed with PuTTY, from an MS-DOS command-line
prompt to copy these files to local drives. You can use WordPad to open them, for example:
C:\Program Files\PuTTY>pscp -load ITSO-CLS1
admin@10.64.210.242:/dumps/iostats/* c:\temp\iostats
Use the -load parameter to specify the session that is defined in PuTTY.
After you have saved your performance statistics data files, which are in XML format, you
can format and merge your data to get more detail about the performance in your SVC
environment.
You can also process your statistics data with a spreadsheet application to get the report that
is shown in Figure C-1.
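Because the files are XML, they can also be reduced to CSV for a spreadsheet with standard tools. The element and attribute names in this sketch are invented purely for illustration; check an actual statistics file for the real schema before adapting it.

```shell
# Hypothetical sample in the spirit of a per-node statistics file; the
# vdsk/id/rb/wb names are NOT the real schema, only placeholders.
sample='<vdsk id="2" rb="1024" wb="2048"/>
<vdsk id="7" rb="512" wb="256"/>'

# Extract id, read-blocks, write-blocks into CSV rows for a spreadsheet.
printf '%s\n' "$sample" | sed -n \
  's/.*id="\([^"]*\)".*rb="\([^"]*\)".*wb="\([^"]*\)".*/\1,\2,\3/p'
```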
TotalStorage Productivity Center for Disk comes preinstalled on your System Storage
Productivity Center Console and can be made available by activating the specific licensing for
TotalStorage Productivity Center for Disk.
By activating this license, you upgrade your running TotalStorage Productivity Center-Basic
Edition to a TotalStorage Productivity Center for Disk edition.
You can obtain more information about using TotalStorage Productivity Center to monitor your
storage subsystem in SAN Storage Performance Management Using TotalStorage
Productivity Center, SG24-7364, at this Web site:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
IBM TotalStorage Productivity Center Reporter for Disk, a utility for anyone running IBM
TotalStorage Productivity Center, provides more information about creating performance
reports. This utility is available at this Web site:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2618
IBM is withdrawing TotalStorage Productivity Center Reporter for Disk for Tivoli Storage
Productivity Center Version 4.1. The replacement function for this utility is packaged with
Tivoli Storage Productivity Center Version 4.1 in Business Intelligence and Reporting Tools (BIRT).
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics that are covered in this book.
Other publications
These publications are also relevant as further information sources:
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
Subsystem Device Driver User’s Guide for the IBM TotalStorage Enterprise Storage
Server and the IBM System Storage SAN Volume Controller, SC26-7540
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line
Interface User’s Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096
Online resources
These Web sites are also relevant as further information sources:
IBM TotalStorage home page:
http://www.storage.ibm.com
SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows Secure Shell (SSH) freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
Cygwin Linux-like environment for Windows:
http://www.cygwin.com
IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Sysinternals home page:
http://www.sysinternals.com
Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SVC support page:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
IBM Redbooks publications about SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Back cover

Install, use, and troubleshoot the SAN Volume Controller

Learn about and how to attach iSCSI hosts

Understand what solid-state drives have to offer

This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN
Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes
that are visible to hosts and applications to physical volumes on storage devices. Each server
within the SAN has its own set of virtual storage addresses, which are mapped to physical
addresses. If the physical addresses change, the server continues running using the same
virtual addresses that it had before. This capability means that volumes or storage can be
added or moved while the server is still running. The IBM virtualization technology improves
the management of information at the "block" level in a network, enabling applications and
servers to share storage devices on a network.

This book is intended to allow you to implement the SVC at a 5.1.0 release level with a
minimum of effort.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.