
Front cover

IBM i and IBM System Storage:


A Guide to Implementing External Disks on IBM i

Take advantage of DS8000 and DS6000 with IBM i

Learn about the storage performance and HA enhancements in IBM i 6.1

Understand how to migrate from internal to external disks

Nick Harris
Hernando Bedoya
Amit Dave
Ingo Dimmer
Jana Jamsek
David Painter
Veerendra Para
Sanjay Patel
Stu Preacher
Ario Wicaksono

ibm.com/redbooks
International Technical Support Organization

IBM i and IBM System Storage:


A Guide to Implementing External Disks on IBM i

September 2008

SG24-7120-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.

Second Edition (September 2008)

This edition applies to i5/OS Version 6, Release 1.

© Copyright International Business Machines Corporation 2008. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv

Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 New features with Version 6 Release 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 System i storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 IBM System Storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 System i integrated storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Managing System i availability with IBM System Storage. . . . . . . . . . . . . . . . . . . . 6
1.2.4 Copy Services and i5/OS Fibre Channel Load Source . . . . . . . . . . . . . . . . . . . . . . 7
1.2.5 Using Backup Recovery and Media Services with FlashCopy . . . . . . . . . . . . . . . . 7
1.2.6 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Chapter 2. IBM System Storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


2.1 Infrastructure simplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 Business continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Information life cycle management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Positioning the IBM System Storage DS family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 DS8000 compared to ESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 DS8000 compared to DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.3 DS6000 series compared to ESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.4 DS6000 series compared to DS8000 series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.5 DS6000 series compared to DS4000 series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Overview of the DS8000 series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1 Hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.2 Storage capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.3 Storage system logical partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.4 Supported environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.5 Resiliency family for Business Continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.6 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3.7 Service and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Common set of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.1 Common management functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.2 Scalability and configuration flexibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.3 Future directions of storage system LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5 IBM System Storage DS6000 series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5.1 Hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.2 Storage capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.3 DS management console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.4 Supported environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.5 Business continuance functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5.6 Resiliency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31



2.5.7 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.8 Service and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.9 Configuration flexibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.6 DS8000 and DS6000 for i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6.1 Understanding the architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6.2 DS8000 and DS6000 virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7 Copy Services overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7.1 Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7.2 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.7.3 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.7.4 Incremental FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.7.5 Inband FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.7.6 Multiple Relationship FlashCopy (V2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.7.7 FlashCopy consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.7.8 Space efficient FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

Chapter 3. System i external storage solution examples . . . . . . . . . . . . . . . . . . . . . . . 55


3.1 One-site System i external storage solution examples . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.1.1 System i5 model and all disk storage in external storage . . . . . . . . . . . . . . . . . . . 56
3.1.2 System i model with internal load source and external storage . . . . . . . . . . . . . . 58
3.1.3 System i model with mixed internal and external storage . . . . . . . . . . . . . . . . . . . 59
3.1.4 Migration of internal drives to external storage including load source . . . . . . . . . 60
3.2 Migration of an external mirrored load source to a boot load source . . . . . . . . . . . . . . 61
3.2.1 Cloning i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.2.2 Full system and IASP FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3 Two-site System i external storage solution examples . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.1 The System i platform and external storage HA environments. . . . . . . . . . . . . . . 64
3.3.2 Metro Mirror examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3.3 Global Mirror examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.3.4 Geographic mirroring with external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Part 2. Planning and sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Chapter 4. i5/OS planning for external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


4.1 Planning for external storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.2 Solution implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.1 Planning considerations for boot from SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.2 Planning considerations for i5/OS multipath Fibre Channel attachment. . . . . . . . 81
4.2.3 Planning considerations for Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.2.4 Planning storage consolidation from different servers . . . . . . . . . . . . . . . . . . . . . 92
4.2.5 Planning for SAN connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.2.6 Planning for capacity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.7 Planning considerations for performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Chapter 5. Sizing external storage for i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115


5.1 General sizing discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.1.1 Flow of I/O operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.1.2 Description of response times. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.2 Rules of thumb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.2.1 Number of RAID ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.2.2 Number of Fibre Channel adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.2.3 Size and allocation of logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.2.4 Sharing ranks among multiple workloads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.2.5 Connecting using switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

5.2.6 Sizing for multipath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.2.7 Sizing for applications in an IASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.2.8 Sizing for space efficient FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.3 Sizing tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.1 Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.2 IBM Systems Workload Estimator for i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.3.3 IBM System Storage Productivity Center for Disk. . . . . . . . . . . . . . . . . . . . . . . . 133
5.4 Gathering information for sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.4.1 Typical workloads in i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.4.2 Identifying peak periods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.4.3 i5/OS Performance Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.5 Sizing examples with Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.5.1 Sizing the System i5 with DS8000 for a customer with iSeries model 8xx and internal
disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.5.2 Sharing DS8100 ranks between two i5/OS systems (partitions). . . . . . . . . . . . . 163
5.5.3 Modeling System i5 and DS8100 for a batch job currently running
Model 8xx and ESS 800 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.5.4 Using IBM Systems Workload Estimator connection to Disk Magic:
Modeling DS6000 and System i for an existing workload. . . . . . . . . . . . . . . . . . 189

Part 3. Implementation and additional topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

Chapter 6. Implementing external storage with i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . 207


6.1 Supported environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6.1.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6.1.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6.2 Logical volume sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
6.3 Protected versus unprotected volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
6.4 Setting up an external load source unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
6.4.1 Tagging the load source IOA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
6.4.2 Creating the external load source unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.5 Adding volumes to the System i5 configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.5.1 Adding logical volumes using the 5250 interface . . . . . . . . . . . . . . . . . . . . . . . . 221
6.5.2 Adding volumes to an independent auxiliary storage pool . . . . . . . . . . . . . . . . . 224
6.6 Adding multipath volumes to System i using a 5250 interface . . . . . . . . . . . . . . . . . . 231
6.7 Adding volumes to System i using iSeries Navigator . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.8 Managing multipath volumes using iSeries Navigator. . . . . . . . . . . . . . . . . . . . . . . . . 236
6.9 Changing from single path to multipath. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6.10 Protecting the external load source unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.10.1 Setting up load source mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
6.11 Migration from mirrored to multipath load source . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
6.12 Migration considerations from IOP-based to IOP-less Fibre Channel. . . . . . . . . . . . 266
6.12.1 IOP-less migration in a multipath configuration. . . . . . . . . . . . . . . . . . . . . . . . . 266
6.12.2 IOP-less migration in a mirroring configuration . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.12.3 IOP-less migration in a configuration without path redundancy . . . . . . . . . . . . 267
6.13 Resetting a lost multipath configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
6.13.1 Resetting the lost multipath configuration for V6R1 . . . . . . . . . . . . . . . . . . . . . 267
6.13.2 Resetting a lost multipath configuration for versions prior to V6R1 . . . . . . . . . 270

Chapter 7. Migrating to i5/OS boot from SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277


7.1 Overview of this chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.2 Migration prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.2.1 Pre-migration checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.3 Migration scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280

7.3.1 RAID protected internal LSU migrating to external mirrored or
multipath LSU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.3.2 Internal LSU mirrored to internal LSU migrating to external LSU . . . . . . . . . . . . 321
7.3.3 Internal LSU mirrored to internal remote LSU migrating to external LSU . . . . . . 339
7.3.4 Internal LSU mirrored to external remote LSU migrating to external LSU . . . . . 358
7.3.5 Unprotected internal LSU migrating to external LSU . . . . . . . . . . . . . . . . . . . . . 367
7.3.6 Migrating to external LSU from iSeries 8xx or 5xx with 8 Gb LSU . . . . . . . . . . . 386
7.3.7 SAN to SAN storage migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388

Chapter 8. Using DS CLI with System i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391


8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
8.1.1 Functions of DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
8.1.2 Command flow of DS CLI in i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
8.1.3 Using DS CLI commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8.2 DS CLI logical storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
8.2.1 Managing user IDs, profiles, and license keys . . . . . . . . . . . . . . . . . . . . . . . . . . 395
8.2.2 Configuring arrays, ranks, extent pools, and volumes . . . . . . . . . . . . . . . . . . . . 398
8.2.3 Configuring volume groups, I/O ports, and host connections . . . . . . . . . . . . . . . 403
8.3 DS CLI support for i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
8.3.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
8.3.2 Installing DS CLI on i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
8.3.3 Invoking DS CLI from i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8.3.4 Setting up DS CLI on i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8.4 Using DS CLI on i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8.4.1 Invoking FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8.4.2 Starting Remote Mirror (PPRC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431

Chapter 9. Using DS GUI with System i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439


9.1 Installing DS Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
9.1.1 Installing DS6000 Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
9.1.2 Installing DS8000 Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
9.1.3 Applying storage unit activation keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
9.2 Configuring DS Storage Manager logical storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
9.2.1 Configuring arrays and ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
9.2.2 Creating extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
9.2.3 Creating logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
9.2.4 Configuring I/O ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
9.2.5 Creating volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
9.2.6 Creating host systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508

Chapter 10. Installing the IBM System Storage DS6000 storage system . . . . . . . . . 519
10.1 Preparing the site and verifying the ship group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.1.1 Pre-installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.1.2 Ship group verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
10.2 Installing the DS6000 in a rack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
10.2.1 Installing storage and server enclosures in a rack . . . . . . . . . . . . . . . . . . . . . . 521
10.2.2 Attaching IBM System i host systems to the DS6000 . . . . . . . . . . . . . . . . . . . . 521
10.3 Cabling the DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
10.3.1 Connecting IBM System i hosts to the DS6000 processor cards . . . . . . . . . . . 523
10.3.2 Connecting the DS6000 to the customer network. . . . . . . . . . . . . . . . . . . . . . . 523
10.3.3 Connecting optional storage enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
10.3.4 Turning on the DS6000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
10.4 Setting the DS6000 IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
10.4.1 Setting the DS6000 server enclosure processor card IP addresses. . . . . . . . . 526

Chapter 11. Usage considerations for Copy Services with i5/OS . . . . . . . . . . . . . . . 537
11.1 Usage considerations for Copy Services with boot from SAN . . . . . . . . . . . . . . . . . 538
11.2 Copying the entire DASD space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
11.2.1 Creating a clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
11.2.2 System backups using FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
11.2.3 Using a copy for Disaster Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
11.2.4 Considerations when copying the entire DASD space . . . . . . . . . . . . . . . . . . . 544
11.3 Using IASPs and Copy Services for System i high availability . . . . . . . . . . . . . . . . . 545
11.3.1 System architecture for System i availability. . . . . . . . . . . . . . . . . . . . . . . . . . . 545
11.3.2 Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
11.3.3 Providing both backup and DR capabilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . 549

Chapter 12. Cloning i5/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553


12.1 Understanding the cloning concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
12.2 Considerations when cloning i5/OS systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
12.3 Creating an i5/OS clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
12.3.1 Creating a partition with DLPAR scripting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
12.3.2 Producing a rack configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
12.3.3 Creating a storage copy with scripting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
12.3.4 Handling clone startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563

Chapter 13. Troubleshooting i5/OS with external storage. . . . . . . . . . . . . . . . . . . . . . 569


13.1 i5/OS actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
13.1.1 Debugging external load source problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
13.2 DS8000 actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
13.2.1 Checking the DS8000 problem log for open problems . . . . . . . . . . . . . . . . . . . 574
13.2.2 Unlocking a DS8000 Storage Manager administrative password . . . . . . . . . . . 576
13.3 DS6000 actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
13.3.1 Checking the DS6000 system status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
13.3.2 Checking the DS6000 problem log for open problems . . . . . . . . . . . . . . . . . . . 578
13.3.3 Contacting IBM support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
13.3.4 Collecting DS6000 problem determination data . . . . . . . . . . . . . . . . . . . . . . . . 584
13.3.5 Activating an IBM remote support VPN connection . . . . . . . . . . . . . . . . . . . . . 587
13.3.6 Unlocking a DS6000 Storage Manager administrative password . . . . . . . . . . . 589
13.4 DS CLI and GUI actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
13.4.1 Debugging DS6000 DS CLI and GUI connection problems . . . . . . . . . . . . . . . 589
13.4.2 Debugging DS CLI for i5/OS installation issues . . . . . . . . . . . . . . . . . . . . . . . . 590
13.4.3 Debugging DS CLI and GUI rank creation failures . . . . . . . . . . . . . . . . . . . . . . 591
13.5 DS CLI data collection for IBM technical support . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
13.6 SAN actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
13.6.1 Isolating Fibre Channel link problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
13.6.2 SAN data collection for IBM technical support . . . . . . . . . . . . . . . . . . . . . . . . . 593

Appendix A. FAQs for boot from SAN with IOP-less
and #2847 IOP-based Fibre Channel. . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
Question 1. What is boot from SAN and what is it used for? . . . . . . . . . . . . . . . . . . . . . . . 596
Question 2. What is new for boot from SAN with IOP-less Fibre Channel?. . . . . . . . . . . . 596
Question 3. What is the purpose of the #2847 IOP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
Question 4. How is the #2844 IOP different from the #2847 IOP? . . . . . . . . . . . . . . . . . . 596
Question 5. What System i hardware models and IBM System Storage subsystems does the
#2847 IOP support? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
Question 6. How many LUNs do the adapters for boot from SAN support? . . . . . . . . . . . 597
Question 7. Is multipath supported with boot from SAN? . . . . . . . . . . . . . . . . . . . . . . . . . 597
Question 8. Why do I need an HMC with boot from SAN? . . . . . . . . . . . . . . . . . . . . . . . . 597

Question 9. Do I need to upgrade any software on my HMC?. . . . . . . . . . . . . . . . . . . . . . 597
Question 10. What are the minimum software requirements to support #2847? . . . . . . . . 598
Question 11. Will the #2847 IOP work with iSeries models? . . . . . . . . . . . . . . . . . . . . . . . 598
Question 12. Do I need to upgrade my system firmware on a System i5 server? . . . . . . . 598
Question 13. What changes do I need to make to an IBM System Storage DS8000, DS6000,
or ESS model 800 series to support boot from SAN? . . . . . . . . . . . . . . . . . . . . . . . . 598
Question 14. Will I have to define the load source LUN as a “protected” or as an “unprotected”
LUN? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Question 15. Will the Fibre Channel load source require direct connectivity to my SAN storage
device, or can I go through a SAN fabric? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Question 16. Do I have to replace all of my #2844 IOPs with #2847? . . . . . . . . . . . . . . . . 599
Question 17. Can I share #2847 across multiple LPARs on the same system? . . . . . . . . 599
Question 18. Is the #2847 IOP supported in Linux or AIX partitions on System i? . . . . . . 600
Question 19. Where can I get additional information about #2847 IOP? . . . . . . . . . . . . . . 600
Question 20. Is the #2847 customer set up? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Question 21. Will my system come preloaded with i5/OS when I order #2847? . . . . . . . . 600
Question 22. What is the difference between V5R3M5 and V5R3M0 for LIC? . . . . . . . . . 600
Question 23. Can I continue to use both internal and external storage even though I have
ordered the #2847 IOP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Question 24. Could I install #2847 on my iSeries model 8xx system, or in one of the LPARs on
this system? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Question 25. Will the #2847 IOP work with V5R3M0 Licensed Internal Code? . . . . . . . . . 601
Question 26. What happens to my system name and network attributes when I perform a point
in time FlashCopy operation?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Question 27. Prior to i5/OS V6R1, multipath I/O is not supported on #2847 for SAN Load
Source. Does this mean that the LUNs attached to the Fibre Channel I/O adapter are
unprotected by multipath I/O? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
Question 28. Will the base IOP that is installed in every system unit be replaced with the new
#2847 IOP? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
Question 29. Why does it take a long time to ship the #2847 IOP? . . . . . . . . . . . . . . . . . . 602
Question 30. Do I need to complete the questionnaire that I got
after I ordered the #2847 IOP?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
Question 31. Where do I obtain information about the #2847 IOP in Information Center? 602
Question 32. How many Fibre Channel adapters are supported by the #2847 IOP? . . . . 602
Question 33. Can I use the #2847 IOP to attach my tape Fibre Channel I/O adapter and also
to boot from it? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Question 34. How many card slots does the #2847 IOP require? Can I install the IOP in 32-bit
slot, or does it need to be in a 64-bit slot? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603

Appendix B. HMC CLI command definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605


The mksyscfg command definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
The rmsyscfg command definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
The chsysstate command definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
The lshwres command definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629


IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.



Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™ i5/OS® System i®
AIX® IBM® System p®
AS/400® iSeries® System Storage™
BladeCenter® Lotus® System Storage DS®
Calibrated Vectored Cooling™ OS/390® System x™
DB2® OS/400® System z®
DFSMSdss™ PartnerWorld® Tivoli®
Domino® POWER5™ TotalStorage®
DS4000™ POWER5+™ Virtualization Engine™
DS6000™ POWER6™ WebSphere®
DS8000™ PowerHA™ Workplace™
Enterprise Storage Server® PowerPC® xSeries®
ESCON® Predictive Failure Analysis® z/OS®
eServer™ Redbooks® z/VM®
Express™ Redbooks (logo) ® zSeries®
FICON® RETAIN®
FlashCopy® System i5®

The following terms are trademarks of other companies:

InfiniBand, and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade
Association.

Disk Magic, IntelliMagic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other
countries, or both.

Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States
and other countries.

SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.

Java, JDK, JRE, JVM, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc.
in the United States, other countries, or both.

Internet Explorer, Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.

Intel, Pentium, Pentium 4, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

This IBM® Redbooks® publication provides a broad discussion of a new architecture of the
IBM System Storage™ DS6000™ and DS8000™ and how these products relate to System i
servers. The book includes information for both planning and implementing IBM System i®
with the IBM System Storage DS6000 or DS8000 series where you intend to externalize the
i5/OS® load source disk unit using boot from SAN. It also covers migration from System i
internal disks to IBM System Storage DS6000 and DS8000.

This book is intended for IBMers, IBM Business Partners, and customers who are involved in
the planning and implementation of external disk attachments to System i servers.

This new edition of the book accounts for the following new functions of IBM System i
POWER6™, i5/OS V6R1, and IBM System Storage DS8000 Release 3:
 System i POWER6 IOP-less Fibre Channel
 i5/OS V6R1 multipath load source support
 i5/OS V6R1 quiesce for Copy Services
 i5/OS V6R1 High Availability Solution Manager (HASM)
 i5/OS V6R1 SMI-S support
 i5/OS V6R1 multipath resetter HSM function
 System i HMC V7
 DS8000 R3 space efficient FlashCopy®
 DS8000 R3 storage pool striping
DS8000 R3 System Storage Productivity Center (SSPC)
 DS8000 R3 Storage Manager GUI

The team that wrote this book


This book was produced by a team of specialists from around the world working at the
International Technical Support Organization (ITSO), Rochester Center.

Nick Harris is a Consulting IT Specialist for the System i and has spent the last eight years in
the ITSO, Rochester Center. He specializes in LPAR, System i hardware and software,
external disk, Integrated xSeries® Server for iSeries®, and Linux®. He writes IBM Redbooks
publications and teaches IBM classes worldwide on all these subjects and how they are
related to system design and server consolidation. He spent 13 years in the U.K. AS/400®
Business and has experience in S/36, S/38, AS/400, and System i servers. You can contact
him by sending e-mail to niharris@us.ibm.com.

Hernando Bedoya is an IT Specialist at the ITSO Rochester Center. He writes extensively
and teaches IBM classes worldwide in all areas of DB2® for i5/OS. Before joining the ITSO
more than seven years ago, he worked for IBM Colombia as an IBM AS/400 IT Specialist
doing pre-sales support for the Andean countries. He has 24 years of experience in the
computing field and has taught database classes in Colombian universities. He holds a
master's degree in computer science from EAFIT, Colombia. His areas of expertise are
database technology, application development, and data warehousing. You can contact him
by sending e-mail to nbedoya@us.ibm.com.



Amit Dave is a System i architect and a Senior Technical Staff Member responsible for
defining and architecting System i virtualization technologies, logical partitioning, storage,
and infrastructure integration initiatives. Prior to this role, he was a consultant at the
Rochester Executive Briefing Center and Teraplex Database Integration Center, providing
technical leadership to customers in the areas of server consolidation, high availability
clustering, and business intelligence. He moved to IBM Rochester from the U.K. in 1994 and
has authored several IBM Redbooks publications on various technologies. His broad skills in
managing enterprise System i systems includes practical experience as an IBM business
partner and customer. You can contact him by sending e-mail to adave@us.ibm.com.

Ingo Dimmer is an IBM Advisory IT Specialist for System i and a PMI Project Management
Professional working in the IBM STG Europe storage support organization in Mainz,
Germany. He has eight years of experience in enterprise storage support from working in IBM
post-sales and pre-sales support. He holds a degree in Electrical Engineering from the
Gerhard-Mercator University, Duisburg. His areas of expertise include System i external disk
storage solutions, I/O performance, and tape encryption for which he has been an author of
several whitepapers and IBM Redbooks publications. You can contact him by sending e-mail
to idimmer@de.ibm.com.

Jana Jamsek is an IT specialist in IBM Slovenia. She works in Storage Advanced Technical
Support for Europe as a specialist for IBM System Storage and i5/OS systems. Jana has
eight years of experience in the iSeries and AS/400 area and five years of experience in
storage. She holds a master's degree in computer science and a degree in mathematics from
the University of Ljubljana, Slovenia. She has authored several IBM Redbooks publications.
You can contact Jana by sending e-mail to jana.jamsek@si.ibm.com.

David Painter is a System i Technical Manager with Morse Group in the U.K. He studied
Electronic Physics at University of London and is also an IBM Certified Solutions Expert. He
provides both pre- and post-sales technical support to customers across Europe. David has
20 years of experience with the iSeries and System/38 product line and currently holds
numerous IBM Certifications. You can contact him by sending e-mail to
david.painter@morse.com.

Veerendra Para is an advisory IT Specialist for System i in IBM Bangalore, India. His job
responsibility includes planning, implementation, and support for all the System i platforms.
He has nine years of experience in the IT field. He has over six years of experience in AS/400
installations, networking, transition management, problem determination and resolution, and
implementations at customer sites. He has worked for IBM Global Services and IBM SWG.
He holds a diploma in Electronics and Communications. You can contact him by sending
e-mail to veerendra.para@in.ibm.com or veepara2@in.ibm.com.

Sanjay Patel is a staff software engineer within the IBM Systems and Technology group at
Rochester, Minnesota. He has 10 years of experience with the System i platform, having
worked in Backup Recovery and Media Services (BRMS) development at IBM since 2001.
Currently, Sanjay is a Technical Leader for the BRMS product and is involved with product design
and development. You can contact him by sending e-mail to sanjaypa@us.ibm.com.

Stu Preacher is a Consulting IT Specialist from IBM U.K. Stu has extensive experience in
System i availability and external storage. Stu now works for IBM System Storage specializing
in attachment to System i servers. You can contact him by sending e-mail to
stu_preacher@uk.ibm.com.

Ario Wicaksono is an IT Specialist for System i at IBM Indonesia. He has two years of
experience in Global Technology Services as System i support. His areas of expertise are
System i hardware and software, external storage for System i, Hardware Management
Console, and LPAR. He holds a degree in Electrical Engineering from the University of
Indonesia. You can contact him by sending e-mail to wario@id.ibm.com.

Thanks to the following people for their contributions to this project:

Ginny McCright
Mike Petrich
Curt Schemmel
Clark Anderson
Joe Writz
Scott Helt
Jeff Palm
Henry May
Tom Crowley
Andy Kulich
Jim Lembke
Lee La Frese
Kevin Gibble
Diane E. Olson
Jenny Dervin
Adam Aslakson
Steven Finnes
Selwyn Dickey
John Stroh
Tim Klubertanz
Dave Owen
Ron Devroy
Scott Maxson
Dawn May
Sergrey Zhiganov
Gerhard Pieper
IBM Rochester development lab

Thanks also to the following people who shared written material from IBM System Storage
DS8000: Copy Services in Open Environments, SG24-6788:

Jana Jamsek
Bertrand Dufrasne
International Technical Support Organization, San Jose, California

Become a published author


Join us for a two- to six-week residency program! Help write a book dealing with specific
products or solutions, while getting hands-on experience with leading-edge technologies. You
will have the opportunity to team with IBM technical professionals, Business Partners, and
Clients.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks in one of the following ways:
 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
 Send your comments in an e-mail to:
redbooks@us.ibm.com
 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Part 1. Introduction

This book is divided into multiple sections. This part introduces the new architecture of the
IBM System Storage DS6000 and DS8000 and how these products relate to System i
servers.

This part includes the following chapters:


 Chapter 1, “Introduction” on page 3
 Chapter 2, “IBM System Storage solutions” on page 9
 Chapter 3, “System i external storage solution examples” on page 55



Chapter 1. Introduction

This chapter provides an introduction to System i and external storage.

The IBM System i platform is one of the most complete and secure integrated business
systems, designed to run thousands of the world’s most popular business applications. It
provides faster, more reliable, and highly secure ways to help you simplify your IT
environment by reducing the number of servers and associated staff required to help you
save money and reinvest in growing your business.



1.1 New features with Version 6 Release 1
With the new i5/OS operating system Version 6 Release 1 (V6R1), IBM delivers several
significant enhancements in the areas of improving availability, performance, integration, and
ease-of-use with external IBM System Storage disk systems.

The System i POWER6 IOP-less Fibre Channel technology takes full advantage of the
performance potential of IBM System Storage and eliminates the previously required I/O
processor (IOP), which can become a bottleneck for I/O performance. The dual-port IOP-less
Fibre Channel cards, which now support up to 64 LUNs per port, significantly reduce the
required number of PCI slots. IOP-less Fibre Channel also introduces new D-mode IPL boot from tape
support for Fibre Channel attached tapes.

i5/OS V6R1 extends the multipath function to support a multipath external load source, which
builds on the boot from SAN support that was introduced with i5/OS V5R3M5. For FlashCopy
solutions, the i5/OS V6R1 quiesce for Copy Services function can help to improve system
availability by eliminating the need to turn off the server before taking a FlashCopy system
image. The new i5/OS V6R1 licensed program 5761-HAS High Availability Solutions Manager
(HASM) or PowerHA™ for i integrates the management of disaster recovery or high availability
solutions with IBM System Storage Copy Services into i5/OS. From the system management
GUI perspective, the new IBM Systems Director Navigator for i5/OS, a Web browser-based GUI,
replaces iSeries Navigator. The SMI-S protocol support of i5/OS V6R1 helps to integrate i5/OS
systems with IBM Systems Director management software solutions.
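
For illustration only, a minimal quiesce and resume sequence with the new V6R1 Change ASP
Activity (CHGASPACT) command might look like the following sketch. The *SYSBAS ASP device
and the 300-second suspend time-out are assumptions for this example, not values taken from
this book:

   CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(300)
   (take the FlashCopy system image while database writes are suspended)
   CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)

The intent of quiescing in this way is to reach a consistent point on disk for the FlashCopy
image without powering down the partition.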

On the storage side, the IBM System Storage DS8000 Release 3 in particular enhances storage
virtualization and integration.

The DS8000 R3 space efficient FlashCopy function eliminates the need to fully provision the
physical capacity for the FlashCopy target volume. In addition, the System Storage
Productivity Center that ships with new DS8000 R3 systems improves functionality and
usability by consolidating the management consoles for DS8000 and IBM SAN Volume
Controller (SVC) and by providing a pre-installed System Storage Productivity Center storage
management solution. The enabled System Storage Productivity Center Basic Edition
provides basic storage, SAN, and data management capabilities that you can upgrade easily
to enhanced functionality by applying licenses for System Storage Productivity Center for
Disk, System Storage Productivity Center for Data, or System Storage Productivity Center for
Fabric. A new cache algorithm, Adaptive Multi-Stream Prefetching (AMP), is implemented in
the DS8000 to optimize cache efficiency, especially for sequential workloads that are
constrained to a single rank.

1.2 System i storage solutions


The System i platform provides advanced, complex server technology that is capable of
running many different types of workloads simultaneously, creating a very diverse and varying
demand from storage. Any storage solution on System i must be capable of supporting both
traditional DB2 UDB online transaction and batch workloads and other native System i
workloads, such as image and file serving, storage hosting for Windows® Integration, Linux
and AIX® partitions, Domino®, Java™, and IBM WebSphere®, all at the same time and usually
from the same or multiple storage pools.

1.2.1 IBM System Storage solutions
IBM System Storage solutions bring new advanced storage capabilities to System i by
allowing more storage consolidation and flexibility in an enterprise environment, such as
multiserver connectivity, fully redundant hardware (including NVS cache, RAID-5, or RAID-10
protection), Copy Services solutions, and advanced storage allocation capabilities. Many
customer environments employ a strategic direction where all servers utilize SAN-based
storage, and the DS8000 is designed to meet those needs.

When an IBM System Storage subsystem is attached to a System i server, it becomes an
automatic extension of the overall storage pool and the i5/OS automated storage
management capabilities, so the IBM System Storage devices also benefit from the i5/OS
automated data distribution, graphical storage management, availability capabilities,
integrated backups, and ability to support all types of i5/OS workloads. In fact, i5/OS solutions
such as Independent Auxiliary Storage Pools (IASP) are exploited by Copy Services functions
to enable business continuity solutions and increase overall system availability.

1.2.2 System i integrated storage


The iSeries and eServer™ i5 systems offer a sophisticated, high performance, automated,
integrated storage solution. System i integrated storage offers high performance and large
capacities (up to 387 TB) through multiple high-speed controllers, scalable jumbo write
cache (1.5 GB of NVS write cache for every 18 disk units), RAID and mirroring performance
configuration options, and 70/141/282 GB 15 K RPM disk options.

Highlights of integrated storage management with i5/OS include:


 Offers automatic storage balancing and data spread.
 Implements optimized RAID-5 or RAID-6 with flexibility to exploit built in system level i5/OS
mirroring (similar to RAID-1) for parallel reads/writes.
 Deploys extensive write cache capabilities.
 Self-optimizes workload I/O through self-caching architecture, further enhanced by Expert
Cache.
 Provides innovative SAN-like storage management through Virtual I/O hosting to Linux,
AIX and Windows server environments, as shown in Figure 1-1.
 Incorporates advanced technologies such as Skip Read/Write, Robust Error Recovery,
and Longitudinal Redundancy Check (LRC).

Figure 1-1 Virtual I/O hosting by i5/OS

i5/OS integrated storage management can eliminate the need for specialized storage skills,
such as storage subsystem, SAN, and fix management.

Internal storage capacity cannot easily be migrated to other systems and must stay within the
limits of SCSI, High Speed Loop (HSL), or 12X InfiniBand® boundaries. Alternatively, a SAN
offers greater flexibility when it comes to sharing storage among multiple servers or attaching
storage across distances using Fibre Channel.

Data replication solutions with internal storage operating on a logical object level typically
increase the System i CPU usage because of the server processing the data replication and
might require additional administrative effort to keep the redundant System i servers
synchronized.

1.2.3 Managing System i availability with IBM System Storage


The combination of i5/OS support for Fibre Channel loadsource, multipath I/O, Independent
Auxiliary Storage Pools (IASP), Cross Site Mirroring (XSM), IBM System Storage Copy
Services Solutions, and switchable IASPs with the System i High Availability Solution
Manager (HASM) or the Copy Services Toolkit offer new solutions in the areas of improving
overall availability and business continuity. Chapter 3, “System i external storage solution
examples” on page 55 includes examples of System i external storage solutions.

When combined with i5/OS availability solutions, including journaling and commitment
control, High Availability Business Partner (HABP) logical replication solutions, Domino online
backup, and System i clusters, customers have many options for meeting their business
continuity objectives. These solutions are not mutually exclusive, but they are not always
interchangeable either. Each solution has its own benefits and considerations. For a good
overview of these data resiliency solutions, see Data Resilience Solutions for IBM i5/OS High
Availability Clusters, REDP-0888.

1.2.4 Copy Services and i5/OS Fibre Channel Load Source
The introduction of i5/OS boot from SAN further expands i5/OS availability options by
exploiting solutions such as FlashCopy, which is provided through IBM System Storage Copy
Services functions. With boot from SAN, you no longer have to use remote load source
mirroring to mirror your internal load source to a SAN attached load source. Instead, you can
now place the load source directly inside a SAN attached storage subsystem and with i5/OS
V6R1 even use multipath attachment to the external load source for redundancy.

Boot from SAN enables easier bring up of a system environment that has been copied using
Copy Services functions such as FlashCopy or PPRC. During the restart of a cloned
environment, you no longer have to perform the Recover Remote Load Source Disk Unit
through Dedicated Service Tools (DST), thus reducing the time and overall steps required to
bring up a point-in-time system image after FlashCopy and PPRC functions have been
completed.

Since June of 2005, you can manage Copy Services functions on eServer i5 systems through
a command-line interface (CLI), called IBM System Storage DS® CLI, and a Web-based
interface called IBM System Storage DS Storage Manager. The DS Storage Manager allows
you to set up and manage point-in-time copy. This feature includes FlashCopy and enables
you to create full volume copies of data using source and target volumes that span logical
subsystems within a single storage unit. After the FlashCopy function completes, you can
access the target point-in-time system image immediately by associating it with another
System i server or a logical partition.

Note: Using the new i5/OS V6R1 quiesce for Copy Services function, which is effective for
both SYSBAS and IASPs, you no longer need to power down the source system prior to
initiating the FlashCopy function. Similar to a system shutdown, the quiesce function ensures
that the contents of main storage (memory) are written to disk.

Utilities such as the Clear Pool (CLRPOOL) command can help clear the contents of a memory
pool but do not necessarily clear all of the contents of main storage. Before i5/OS V6R1, the
requirement to shut down the source system completely, so that all contents of main storage
are written to the disks prior to initiating FlashCopy, could be avoided by combining Copy
Services functions with IASPs. Using an IASP for your data and applications enables you to
perform a vary off on the source system, which ensures that all in-flight transactions in system
memory are committed to disk before the disk units are varied off. FlashCopy can then be
initiated, and the disks can be varied on again immediately after the FlashCopy task has
completed. You can also automate this entire process using the IASP Copy Services Toolkit.
Refer to IBM System Storage Copy Services and IBM i: A Guide to Planning and
Implementation, SG24-7103 for more information.
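
As an illustration of this flow, the following DS CLI sketch creates and checks the FlashCopy
relationships for a small set of volumes. The storage image ID and volume IDs are hypothetical;
the quiesce (the i5/OS V6R1 CHGASPACT command with OPTION(*SUSPEND)) or the IASP vary off is
performed on the source partition immediately before these commands, and activity is resumed
(OPTION(*RESUME)) or the IASP is varied on again afterwards:

   dscli> mkflash -dev IBM.2107-75ABCD1 -nocp 1000:1100 1001:1101 1002:1102
   dscli> lsflash -dev IBM.2107-75ABCD1 1000:1100 1001:1101 1002:1102

The -nocp option avoids a full background copy and copies tracks only as they are changed. The
target volumes (1100 to 1102 in this sketch) can then be assigned to another System i partition
and used as the point-in-time image.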

1.2.5 Using Backup Recovery and Media Services with FlashCopy


Backup Recovery and Media Services (BRMS) for i5/OS is an IBM strategic solution for
performing backups and recovering System i and eServer i5 systems. BRMS has a wealth of
features, including the ability to work in a network environment, sharing a common inventory
of media and tape volumes with other i5/OS hosts.

The cloning of a System i source system using BRMS means that an identical disk image is
created, including system unique attributes such as system name, local location name,
TCP/IP configuration, and relational database directory entries. Because BRMS relies on
managing its common tape and media inventory based on unique system names, a change

had to be made to accommodate the use of BRMS with FlashCopy. The enhancement enables
you to continue to use BRMS as your backup choice and to keep using the recovery options
and recovery reports to restore the source system, should it be required, even when the
backups were completed using the point-in-time copy of your entire disk storage attached to a
target system or a logical partition.

For additional planning considerations and enabling BRMS with FlashCopy, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.

1.2.6 Summary
The first release of this book in 2005 focused on the new boot from SAN function with its
capability to place the loadsource disk unit directly inside an IBM ESS 800, DS6000, or
DS8000 series. Boot from SAN enables additional availability options, such as creating a
point-in-time instance with FlashCopy for backup and recovery as well as disaster recovery
needs. This capability, combined with the enhancements to BRMS, enables you to better
manage point-in-time disk images and reduces the overall steps that are required to start a
System i server or a logical partition utilizing the cloned disk image.

The new System i POWER6 IOP-less support takes advantage of the full performance
potential of IBM System Storage and reduces the cost of deploying a SAN storage solution on
System i by requiring significantly fewer PCI slots and adapters.

Tighter integration of i5/OS with IBM System Storage with the new i5/OS V6R1 availability
feature quiesce for Copy Services and its integrated High Availability Solution Manager ease
the deployment of external storage with System i.

If you use System i internal storage, now is the time for a paradigm shift toward the storage
strategy available with IBM System Storage on System i.

Chapter 4, “i5/OS planning for external storage” on page 75 discusses important planning
considerations that are required to exploit the external storage capabilities with System i.


Chapter 2. IBM System Storage solutions


IBM has a wide range of product offerings that are based on open standards and that share a
common set of tools, interfaces, and innovative features. The IBM System Storage DS family
and its new members, the DS8000 and the DS6000, provide the freedom to choose the right
combination of solutions for your current needs and the flexibility to help your infrastructure
evolve as needs change. The System Storage DS family offers high availability, multi-platform
support, and simplified management tools, which are intended to help adjust to an on
demand world in a cost effective manner.



2.1 Infrastructure simplification
Consolidation begins with compatibility. The IBM System Storage DS family supports a broad
array of IBM and non-IBM server platforms, including IBM z/OS®, z/VM®, OS/400®, i5/OS,
and AIX 5L™ operating systems, as well as Linux, HP-UX, Sun™ Solaris™, Novell®
NetWare, UNIX®, and Microsoft® Windows environments. Consequently, you have the
freedom to choose preferred vendors and to run the applications that are required to meet the
enterprise’s needs while extending previous IT investments.

Storage asset consolidation can be greatly assisted by virtualization. Virtualization software


solutions are designed to logically combine separate physical storage systems into a single,
virtual storage pool, thereby offering dramatic opportunities to help reduce the total cost of
ownership (TCO), particularly when used in combination with the DS6000 series.

2.1.1 Business continuity


The IBM System Storage DS family supports enterprise-class data backup and disaster
recovery capabilities. As part of the IBM System Storage Resiliency family of software, IBM
System Storage FlashCopy point-in-time copy capabilities back up data in the background,
while allowing users nearly instant access to information about both the source and target
volumes. Metro and Global Mirror and Global Copy capabilities allow the creation of duplicate
copies of application data at remote sites.

2.1.2 Information life cycle management


By retaining frequently accessed or high-value data in one storage server and archiving less
valuable information in a less costly one, systems like the IBM System Storage DS family can
help improve the management of information according to its business value—from the
moment of its creation to the moment of its disposal. The policy-based management
capabilities built into the IBM System Storage Open Software family, IBM DB2 Content
Manager and IBM Tivoli® Storage Manager for Data Retention, are designed to help you
automatically preserve critical data, while preventing deletion of that data before its scheduled
expiration.

2.2 Positioning the IBM System Storage DS family


This section provides a broad comparison of the models in the IBM System Storage DS family
and the Enterprise Storage Server® (ESS).

2.2.1 DS8000 compared to ESS


The DS8000 is the next generation of the Enterprise Storage Server, so all functions that are
available in the ESS are also available in the DS8000 (with the exception of Metro/Global
Copy). From a consolidation point of view, it is now possible to replace four ESS Model 800s
with one DS8300. In addition, the LPAR implementation offers a further consolidation
opportunity because you get two storage system logical partitions in one physical machine.

Because the mirror solutions are compatible between the ESS and the DS8000 series, it is
possible to think about a setup for a disaster recovery solution with the high performance
DS8000 at the primary site and the ESS at the secondary site, where the same performance
is not required.

2.2.2 DS8000 compared to DS6000
DS6000 and DS8000 now offer an enterprise continuum of storage solutions. All copy
functions (with the exception of z/OS Global Mirror and Metro/Global Mirror, which are
available only on the DS8000) are available on both systems. You can do Metro Mirror, Global
Mirror, and Global Copy between the two series. The CLI commands and the GUI look the
same for both systems.

Obviously, the DS8000 can deliver a higher throughput and scales higher than the DS6000,
but not all customers need this high throughput and capacity. You can choose the system that
fits your needs. Both systems support the same SAN infrastructure and the same host
systems.

So it is very easy to have a mixed environment with DS8000 and DS6000 systems to optimize
the cost effectiveness of your storage solution, while providing the cost efficiencies of
common skills and management functions.

Logical partitioning with some DS8000 models is not available on the DS6000. For more
information about the DS6000, refer to IBM System Storage DS6000 Series: Architecture and
Implementation, SG24-6781.

2.2.3 DS6000 series compared to ESS


Replacing ESS systems with a DS6800 is not complicated. All ESS functions (with the
exception of cascading Metro Mirror/Global Copy and z/OS Global Mirror) are the same on the
DS6000 series and are also available on a DS6800.

If you want to keep your ESS and if it is a model 800 or 750 with Fibre Channel adapters, you
can use your existing ESS, for example, as a secondary for remote copy. With the ESS at the
appropriate LIC level, scripts or CLI commands written for Copy Services work for both the
ESS and the DS6800.

For most environments, the DS6800 performs better than an ESS. You might even replace
two ESS 800s with one DS6800. The sequential performance of the DS6800 is excellent.
However, when you plan to replace an ESS with a large cache (for example, more than
16 GB) with a DS6800 (which comes with 4 GB cache) and you currently get the benefit of a
high cache hit rate, your cache hit rate on the DS6800 will be lower. This lower cache hit rate is
caused by the smaller cache. z/OS benefits from large cache, so for transaction-oriented
workloads with high read cache hits, careful planning is required.

2.2.4 DS6000 series compared to DS8000 series


Refer to 2.2.2, “DS8000 compared to DS6000” on page 11 for this comparison.

Logical partitioning with some DS8000 models is not available on the DS6000. For more
information about the DS8000, refer to IBM System Storage DS8000 Series: Architecture and
Implementation, SG24-6786.

2.2.5 DS6000 series compared to DS4000 series


Previous DS4000™ series (formerly called FAStT) clients will find more of a difference
between the DS4000 series and DS6000 series of products.

Both product families have about the same size and capacity but their functions differ. With
respect to performance, the DS4000 series range is below the DS6000 series. You have the



option to choose from different DS4000 models, among them very low cost entry models.
DS4000 series storage systems can also be equipped with cost efficient, high capacity Serial
ATA drives.

The DS4000 series products allow you to grow with a granularity of a single disk drive, while
with the DS6000 series you have to order at least four drives. Currently the DS4000 series
also is more flexible with respect to changing RAID arrays on the fly and changing LUN sizes.

The implementation of FlashCopy on the DS4000 series is different when compared to the
DS6000 series. On a DS4000 series, space is needed only for the changed data; however,
you need the full LUN size for the copy LUN on a DS6000 series. Although the target LUN on
a DS4000 series cannot be used for production, it can be used for production on the DS6000
series. If you need a real copy of a LUN on a DS4000 series, you can do a volume copy.
However, this process can take a long time before the copy is available for use. On a DS6000
series, the copy is available for production after a few seconds.

While the DS4000 series also offers remote copy solutions, these functions are not
compatible with the DS6000 series.

2.3 Overview of the DS8000 series


The IBM System Storage DS8000 is a high-performance, high-capacity series of disk storage
systems (see Figure 2-1). It offers balanced performance that is up to six times higher than
the previous IBM System Storage Enterprise Storage Server (ESS) Model 800. The capacity
scales linearly from 1.1 TB up to 192 TB.

With the implementation of the POWER5™ Server Technology in the DS8000 it is possible to
create storage system logical partitions (LPARs) that can be used for completely separate
production, test, or other unique storage environments.

The DS8000 is a flexible and extendable disk storage subsystem because it is designed to
add and adapt to new technologies as they become available. The new packaging also
includes new management tools, such as the DS Storage Manager and the DS command-line
interface (CLI), which allow for the management and configuration of the DS8000 series as
well as the DS6000 series.

The DS8000 series is designed for 24x7 environments in terms of availability while still
providing the industry leading remote mirror and copy functions to ensure business continuity.

Figure 2-1 DS8000 base frame

The IBM System Storage DS8000 highlights include that it:


 Delivers robust, flexible, and cost-effective disk storage for mission-critical workloads.
 Helps to ensure exceptionally high system availability for continuous operations.
 Scales to 192 TB, and facilitates unprecedented asset protection with model-to-model field
upgrades.
 Supports storage sharing and consolidation for a wide variety of operating systems and
mixed server environments
 Helps increase storage administration productivity with centralized and simplified
management.
 Provides the creation of multiple storage system LPARs that can be used for completely
separate production, test, or other unique storage environments.
 Occupies 20% less floor space than the ESS Model 800’s base frame, and holds even
more capacity.
 Provides the industry’s first four year warranty.

2.3.1 Hardware overview


The DS8000 hardware is optimized to provide enhancements in terms of performance,
connectivity, and reliability. From an architectural point of view, the DS8000 series retains
much of the fundamental architecture of the previous ESS models, and 75% of the operating
environment remains the same as for the ESS Model 800. This architecture
ensures that the DS8000 can take advantage of a stable and well-proven operating
environment, offering the optimum in availability.



The DS8000 series features several models in a new, higher-density footprint than the ESS
Model 800, providing configuration flexibility. For more information about the different models,
see 2.3, “Overview of the DS8000 series” on page 12.

In this section, we give a short description of the main hardware components.

POWER5+ technology
The DS8000 series exploits the IBM System p® POWER5+™ technology, which is the
foundation of the storage system LPARs. The DS8100 Model 931 utilizes dual 2-way processor
complexes of 64-bit microprocessors, and the DS8300 Models 932 and 9B2 use dual 4-way
processor complexes of the same 64-bit microprocessors. Within the POWER5+ servers, the DS8000 series
offers up to 256 GB of cache, which is up to four times as much cache as the previous ESS
models.

Internal fabric
DS8000 comes with a high bandwidth, fault tolerant internal interconnection, which is also
used in the IBM System p server, called RIO-2 (Remote I/O). RIO-2 can operate at speeds up
to 1 GHz and offers a 2 GBps sustained bandwidth per link. On System i and System p
servers, it is also known as High Speed Link (HSL).

DS8000 internal FC loop technology


The disk interconnection has changed in comparison to the previous ESS. Instead of the SSA
loops, there is now a switched FC-AL implementation, which offers a point-to-point
connection to each drive and adapter. So, there are four paths available from the controllers
to each disk drive.

Fibre Channel disk drives


The DS8000 offers a selection of industry standard Fibre Channel disk drives. There are
73 GB with 15 000 revolutions per minute (RPM), 146 GB (both 10 000 and 15 000 RPM) and
300 GB (10 000 RPM) Fibre Channel disk drive modules (DDMs) available. The 300 GB
DDMs allow a single system to scale up to 192 TB of capacity.

Host adapters
The DS8000 offers enhanced connectivity with the availability of four-port Fibre
Channel/FICON® host adapters. The 4 Gbps Fibre Channel/FICON host adapters, which are
offered in longwave and shortwave, can also auto-negotiate to 1 Gbps and 2 Gbps link
speeds. This flexibility enables immediate exploitation of the benefits offered by the higher
performance, 4 Gbps SAN-based solutions, while also maintaining compatibility with existing
1 Gbps and 2 Gbps infrastructures. In addition, the four ports on the adapter can be
configured with an intermix of Fibre Channel Protocol (FCP) and FICON, which can help
protect your investment in fibre adapters and increase the ability to migrate to new servers.
The DS8000 also offers 2-port ESCON® adapters. A DS8000 can support up to a maximum
of 32 host adapters, which provide up to 128 Fibre Channel/FICON ports.

DS8000 management consoles


The DS8000 offers an integrated Storage Hardware Management Console (S-HMC). This
console, running a Linux operating system on a System x™ server, is the service and
configuration portal for up to eight DS8000s. You can configure a secondary S-HMC for
redundancy. The S-HMC is also running the DS8000 Storage Manager GUI backend, which
is the focal point for configuration and Copy Services management. It can be accessed locally
with the integrated keyboard and display or remotely using a Web browser.

With DS8000 Release 3, new machines are shipped with a System Storage Productivity
Center (SSPC) console. The SSPC helps to centralize data center storage management by
supporting heterogeneous, SMI-S conforming systems and storage devices. The first SSPC
release supports management of IBM System Storage DS8000 and SAN Volume Controller
(SVC). The SSPC is an external (that is, not rack-mounted) System x server running Windows
Server 2003 and System Storage Productivity Center Basic Edition. With pre-installed
System Storage Productivity Center for Disk, Data, and Fabric and an optional System Storage
Productivity Center for Replication installation, an easy upgrade path to advanced
end-to-end storage management functionality is available, as shown in Figure 2-2.

Figure 2-2 shows the System Storage Productivity Center components on the SSPC: System
Storage Productivity Center Basic Edition (basic storage, SAN, and data management
capabilities) is pre-installed and enabled; System Storage Productivity Center for Disk, for
Data, and for Fabric are pre-installed and unlocked by a priced license file; and System
Storage Productivity Center for Replication is an optional customer install on the SSPC.

Figure 2-2 SSPC with System Storage Productivity Center pre-installed

With SSPC the DS8000 Storage Manager GUI front-end has been moved from the S-HMC to
the SSPC where it can be accessed directly from the TPC Element Manager or using a Web
browser pointing to the SSPC (see 9.1.2, “Installing DS8000 Storage Manager” on page 465).

2.3.2 Storage capacity


The physical capacity for the DS8000 is purchased through disk drive sets. A disk drive set
contains 16 identical disk drives, which have the same capacity and the same revolution per
minute (RPM). Disk drive sets are available in:
 73 GB (15 000 RPM)
 146 GB (10 000 RPM and 15 000 RPM)
 300 GB (10 000 RPM and 15 000 RPM)

For additional flexibility, feature conversions are available to exchange existing disk drive sets
when purchasing new disk drive sets with higher capacity, or higher speed disk drives.

In the first frame, there is space for a maximum of 128 disk drive modules (DDMs) and every
expansion frame can contain 256 DDMs. Thus there is, at the moment, a maximum limit of
640 DDMs, which in combination with the 300 GB drives gives a maximum capacity of
192 TB.



The DS8000 can be configured as RAID-5, RAID-10, or a combination of both. As a price and
performance leader, RAID-5 offers excellent performance for many customer applications,
whereas RAID-10 can offer better performance for selected applications.

Price, performance, and capacity can further be optimized to help meet specific application
and business requirements through the intermix of different DDM sizes and speeds.

Note: The intermix of DDMs is not supported within the same disk enclosure or disk
enclosure pair (front/back enclosure of the same rack position).

IBM Standby Capacity on Demand offering for the DS8000


Standby Capacity on Demand (CoD) provides standby on-demand storage for the DS8000
and allows you to access the extra storage capacity whenever the need arises. With Standby
CoD, IBM installs up to 64 drives (in increments of 16) in your DS8000. At any time, you can
configure your Standby CoD capacity logically for use. It is a nondisruptive activity that does
not require intervention from IBM. Upon logical configuration, you are charged for the
capacity.

For more information about capacity planning, see 4.2.6, “Planning for capacity” on page 93.

2.3.3 Storage system logical partitions


The DS8000 series provides storage system logical partitions (LPARs)—a first in the industry.
Storage system LPARs means that you can run two completely segregated, independent
virtual storage images with differing workloads and with different operating environments,
within a single physical DS8000 storage subsystem. The LPAR functionality is available in the
DS8300 model 9B2.

The first application of the IBM Virtualization Engine™ technology in the DS8000 partitions
the subsystem into two virtual storage system images. The processors, memory, adapters,
and disk drives are split between the images. There is a robust isolation between the two
images through hardware and the POWER5 Hypervisor firmware.

Initially each storage system LPAR has access to:


 50% of the processors
 50% of the processor memory
 Up to 16 host adapters
 Up to 320 disk drives (up to 96 TB of capacity)

With these separate resources, each storage system LPAR can run the same or different
versions of microcode and can be used for completely separate production, test, or other
unique storage environments within this single physical system. This capability can enable
storage consolidations, where separate storage subsystems were required previously and
can help to increase management efficiency and cost effectiveness.

2.3.4 Supported environments


The DS8000 series offers connectivity support across a broad range of server environments,
including IBM System i, System p, System z®, and System x servers, servers from Sun and
Hewlett-Packard, and non-IBM Intel®-based servers. The operating system support for the
DS8000 series is almost the same as for the previous ESS Model 800. There are over 90
supported platforms. This rich support of heterogeneous environments and attachments,
along with the flexibility to easily partition the DS8000 series storage capacity among the

attached environments, can help support storage consolidation requirements and dynamic,
changing environments.

2.3.5 Resiliency family for Business Continuity


Business Continuity means that business processes and business-critical applications need
to be available at all times. Thus, it is very important to have a storage environment that offers
resiliency for both planned and unplanned outages.

The DS8000 supports a rich set of Copy Service functions and management tools that you
can use to build solutions to help meet business continuance requirements. These include
IBM System Storage Resiliency family point-in-time copy and Remote Mirror and Copy
solutions that are supported currently by the ESS.

Note: Remote Mirror and Copy was referred to as Peer-to-Peer Remote Copy (PPRC) in
earlier documentation for the IBM System Storage Enterprise Storage Server.

You can manage Copy Services functions through the DS command-line interface (CLI),
called the IBM System Storage DS CLI, and the Web-based interface, called the IBM System
Storage DS Storage Manager. The DS Storage Manager allows you to set up and manage
data copy features from anywhere that network access is available.

IBM System Storage FlashCopy


FlashCopy can help reduce or eliminate planned outages for critical applications. FlashCopy
provides the same point-in-time copy capability for logical volumes on the DS6000 series and
the DS8000 series as FlashCopy V2 does for ESS, and allows access to the source data and
the copy almost immediately.

FlashCopy supports many advanced capabilities (a brief DS CLI illustration follows this list), including:


 Full volume FlashCopy
Allows a FlashCopy of a logical volume by either copying all source tracks to the target
(background copy) or by copying tracks to be modified only.
 Space efficient FlashCopy
A new FlashCopy function introduced with DS8000 Release 3 that can help to lower the
amount of physical storage for the FlashCopy target volumes significantly by thinly
provisioning the target space proportional to the amount of write activity from the host.
 Data Set FlashCopy
Allows a FlashCopy of a data set in a System z environment.
 Multiple Relationship FlashCopy
Allows a source volume to have up to 12 targets simultaneously.
 Incremental FlashCopy
Provides the capability to update a FlashCopy target without having to recopy the entire
volume. Incremental FlashCopy is used, for example in a Global Mirror setup for saving
consistency groups.
 FlashCopy to a Remote Mirror primary
Provides the possibility to use a FlashCopy target volume also as a Remote Mirror primary
volume. This process allows you to create a point-in-time copy and then make a copy of
that data at a remote site.



 Consistency Group commands
Allow DS8000 series systems to hold off I/O activity to a LUN or volume until the
FlashCopy Consistency Group command is issued. You can use Consistency Groups to
create a consistent point-in-time copy across multiple LUNs or volumes and even across
multiple DS8000s.
 Inband Commands over Remote Mirror link
In a remote mirror environment, you can issue commands to manage FlashCopy at the
remote site from the local or intermediate site and then transmit the commands over the
remote mirror Fibre Channel links. This process eliminates the need for a network
connection to the remote site solely for the management of FlashCopy.
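
For illustration, the following DS CLI commands sketch how several of these capabilities are
invoked. The storage image ID, volume IDs, and LSS ID are hypothetical:

   dscli> mkflash -dev IBM.2107-75ABCD1 1200:1300                    (full volume, background copy)
   dscli> mkflash -dev IBM.2107-75ABCD1 -nocp 1201:1301              (copy modified tracks only)
   dscli> mkflash -dev IBM.2107-75ABCD1 -record -persist 1202:1302   (incremental FlashCopy)
   dscli> resyncflash -dev IBM.2107-75ABCD1 -record -persist 1202:1302
   dscli> mkflash -dev IBM.2107-75ABCD1 -freeze 1203:1303            (consistency group FlashCopy)
   dscli> unfreezeflash -dev IBM.2107-75ABCD1 12

The resyncflash command refreshes an existing incremental relationship by copying only the
tracks that changed since the last copy, and unfreezeflash releases the I/O that was held for the
source LSS (12 in this sketch) after a consistency group has been formed.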

IBM System Storage Metro Mirror (Synchronous PPRC)


Metro Mirror is a remote data mirroring technique for all supported servers, including z/OS
and open systems. It is designed to constantly maintain an up-to-date copy of the local
application data at a remote site which is within the metropolitan area (typically up to 300 km
away using DWDM). With synchronous mirroring techniques, data currency is maintained
between sites, though the distance can have some impact on performance. Metro Mirror is
used primarily as part of a business continuance solution for protecting data against disk
storage system loss or complete site failure.
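
As a hedged illustration, a minimal Metro Mirror setup with the DS CLI might look like the
following sketch. The storage image IDs, WWNN, I/O port IDs, and volume IDs are hypothetical;
remote mirror paths are established between the local and remote logical subsystems first, and
the synchronous pairs are then created with -type mmir:

   dscli> mkpprcpath -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -remotewwnn 5005076303FFC123 -srclss 10 -tgtlss 10 I0030:I0100
   dscli> mkpprc -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -type mmir 1000:1000 1001:1001
   dscli> lspprc -dev IBM.2107-75ABCD1 1000-1001

The lspprc output shows the pairs moving from Copy Pending to Full Duplex state as the initial
copy completes.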

IBM System Storage Global Copy (PPRC Extended Distance, PPRC-XD)


Global Copy is an asynchronous remote copy function for z/OS and open systems for longer
distances than are possible with Metro Mirror. With Global Copy, write operations complete on
the primary storage system before they are received by the secondary storage system. This
capability is designed to prevent the primary system’s performance from being affected by
wait time from writes on the secondary system. Therefore, the primary and secondary copies
can be separated by any distance. This function is appropriate for remote data migration,
off-site backups and transmission of inactive database logs at virtually unlimited distances.

IBM System Storage Global Mirror (Asynchronous PPRC)


Global Mirror copying provides a 2-site extended distance remote mirroring function for z/OS
and open systems servers. With Global Mirror, the data that the host writes to the storage unit
at the local site is asynchronously shadowed to the storage unit at the remote site. A
consistent copy of the data is then automatically maintained on the storage unit at the remote
site without the performance penalty of a synchronous data replication solution. This 2-site
data mirroring function is designed to provide a high performance, cost effective, global
distance data replication and disaster recovery solution.
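
The following DS CLI outline sketches the main steps of a Global Mirror configuration, using the
same hypothetical storage image, session, and volume IDs as in the Metro Mirror sketch above
(remote mirror paths are assumed to be already established); exact parameters and additional
options are covered in the Copy Services documentation:

   dscli> mkpprc -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -type gcp 1000:1000 1001:1001
   dscli> mkflash -dev IBM.2107-75WXYZ1 -record -persist -nocp 1000:1100 1001:1101
   dscli> mksession -dev IBM.2107-75ABCD1 -lss 10 01
   dscli> chsession -dev IBM.2107-75ABCD1 -lss 10 -action add -volume 1000-1001 01
   dscli> mkgmir -dev IBM.2107-75ABCD1 -lss 10 -session 01

The Global Copy pairs carry the data asynchronously to the remote site, the FlashCopy
relationships at the remote site hold the consistency groups, and mkgmir starts the Global Mirror
session that coordinates them.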

IBM System Storage Metro/Global Mirror


Metro/Global Mirror (previously called Asynchronous Cascading PPRC) is a three-site
extended distance remote mirroring function for z/OS and open systems servers. It is a
combination of Metro Mirror and Global Mirror. Data is replicated synchronously using Metro
Mirror to an intermediate site and from there a remote second consistent data copy is created
using Global Mirror on a more distant site.

Note: Metro/Global Mirror is not supported on the DS6000.

IBM System Storage z/OS Global Mirror (Extended Remote Copy XRC)
z/OS Global Mirror is a remote data mirroring function available for the z/OS and OS/390®
operating systems. It maintains a copy of the data asynchronously at a remote location over

unlimited distances. z/OS Global Mirror is well suited for large zSeries® server workloads and
can be used for business continuance solutions, workload movement, and data migration.

IBM System Storage z/OS Metro/Global Mirror


This mirroring capability uses z/OS Global Mirror to mirror primary site data to a location that
is a long distance away and also uses Metro Mirror to mirror primary site data to a location
within the metropolitan area. This enables a z/OS three-site high availability and disaster
recovery solution for even greater protection from unplanned outages.

For further information about the function of the IBM System Storage Copy Services features,
refer to 2.7, “Copy Services overview” on page 46.

2.3.6 Interoperability
As we mentioned before, the DS8000 supports a broad range of server environments. But
there is another big advantage regarding interoperability. The DS8000 Remote Mirror and
Copy functions can interoperate between the DS8000, the DS6000, and ESS Models
800/750. This offers dramatically increased flexibility in developing mirroring and remote
copy solutions and also the opportunity to deploy business continuity solutions at lower cost
than was previously available.

2.3.7 Service and setup


The installation of the DS8000 is performed by IBM in accordance to the installation
procedure for this system. The customer’s responsibility is the installation planning, the
retrieval and installation of feature activation codes, and the logical configuration planning and
application. This installation process has not changed in regard to the previous ESS model.

For maintenance and service operations, the Storage Hardware Management Console
(S-HMC) is the focal point. The management console is a dedicated workstation that is
physically located (installed) inside the DS8000 subsystem and can monitor the state of the
system automatically, notifying you and IBM when service is required.

The S-HMC is also the interface for remote services (call home and call back). You can
configure remote connections to meet customer requirements. It is possible to allow one or
more of the following:
 Call on error (machine detected)
 Connection for a few days (customer initiated)
 Remote error investigation (service initiated)

The remote connection between the management console and the IBM service organization
is done using a virtual private network (VPN) point-to-point connection over the internet or
modem.

The DS8000 comes with a four year warranty on both hardware and software. This type of
warranty is outstanding in the industry and shows the confidence that IBM has in this product.
In addition, this warranty provides the DS8000 a product with a low total cost of ownership
(TCO).



2.4 Common set of functions
The DS8000 series supports many useful features and functions which are not limited to the
DS8000 series. There is a set of common functions that can be used on the DS6000 series
as well as the DS8000 series. Thus there is only one set of skills necessary to manage both
families. This helps to reduce the management costs and the total cost of ownership.

The common functions for storage management include the IBM System Storage DS Storage
Manager, which is the Web-based graphical user interface, the IBM System Storage DS
command-line interface (CLI), and the IBM System Storage DS open application
programming interface (API).

FlashCopy, Metro Mirror, Global Copy, and Global Mirror are the common functions regarding
the Advanced Copy Services. In addition, the DS6000/DS8000 series mirroring
solutions are also compatible with the IBM System Storage ESS 800 and ESS 750, which
offers a new era in flexibility and cost effectiveness in designing business continuity solutions.

2.4.1 Common management functions


The following DS8000 series configuration and Copy Services management tools and
interfaces are also applicable to the DS6000 series.

IBM System Storage DS Storage Manager


The DS Storage Manager is a Web-based graphical user interface (GUI) that is used to
perform logical configurations and Copy Services management functions. It can be accessed
from any location that has network access using a Web browser or locally from the S-HMC or
System Storage Productivity Center (SSPC).

You have the following options to use the DS Storage Manager:


 Simulated (Offline) configuration
This application, which can be installed on a Windows PC, allows the user to create or
modify logical configurations when disconnected from the network. After creating the
configuration, you can save it and then apply it to a network-attached storage unit at a later
time.
 Real-time (Online) configuration
This provides real-time management support for logical configuration and Copy Services
features for a network-attached storage unit.

IBM System Storage DS command-line interface (DS CLI)


The DS CLI is a single CLI that has the ability to perform a full set of commands for logical
configuration and Copy Services activities. It is now possible to combine the DS CLI
commands into a script. This can enhance your productivity because it eliminates the
previous requirement for you to create and save a task using the GUI. The DS CLI can also
issue Copy Services commands to an ESS Model 750, ESS Model 800, or DS6000 series
system.

The following list highlights a few of the specific types of functions that you can perform with
the DS command-line interface:
 Check and verify your storage unit configuration
 Check the current Copy Services configuration that is used by the storage unit
 Create new logical storage and Copy Services configuration settings
 Modify or delete logical storage and Copy Services configuration settings

We describe the DS CLI in detail in Chapter 9, “Using DS GUI with System i” on page 439.
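
As an illustration of combining commands into a script, the following sketch uses hypothetical
host, user ID, password, and file names. A plain text file (here c:\scripts\checkconfig.cli)
contains one DS CLI command per line:

   lsarraysite -dev IBM.2107-75ABCD1
   lsarray -dev IBM.2107-75ABCD1
   lsrank -dev IBM.2107-75ABCD1
   lsfbvol -dev IBM.2107-75ABCD1

The file is then run in a single DS CLI invocation against the management console:

   C:\> dscli -hmc1 mchmc1.example.com -user admin -passwd itso4all -script c:\scripts\checkconfig.cli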

DS Open application programming interface


The DS Open application programming interface (API) is a non-proprietary storage
management client application that supports routine LUN management activities, such as
LUN creation, mapping and masking, and the creation or deletion of RAID-5 and RAID-10
volume spaces. The DS Open API also enables Copy Services functions such as FlashCopy
and Remote Mirror and Copy.

2.4.2 Scalability and configuration flexibility


With the IBM System Storage DS8000 you are getting the opportunity to have a linearly
scalable capacity growth up to 192 TB. The architecture is designed to scale with today’s 300
GB disk technology to over 1 PB. However, the theoretical architectural limit, based on
addressing capabilities, is an incredible 96 PB.

With the DS8000 series there are various choices of base and expansion models, so it is
possible to configure the storage units to meet your particular performance and configuration
needs. The DS8100 (latest model 931) features a dual 2-way processor complex and support
for one expansion frame. The DS8300 (latest models 932 and 9B2) features a dual 4-way
processor complex and support for one or two expansion frames which can be extended
through a RPQ to even four expansion frames supporting up to 1024 DDMs. The Model 9B2
supports two IBM System Storage System LPARs (Logical Partitions) in one physical
DS8000.

The DS8100 offers up to 128 GB of processor memory and the DS8300 offers up to 256 GB
of processor memory. In addition, the Non-Volatile Storage (NVS) scales to the processor
memory size selected, which can also help optimize performance.

Another important feature regarding flexibility is the LUN/Volume Virtualization. In contrast to


the previous ESS architecture, it is now possible with DS8000 and DS6000 to create and
delete a LUN or volume without affecting other LUNs on the RAID rank. When you delete a
LUN or a volume, the capacity can be reused, for example, to form a LUN of a different size.
The possibility to allocate LUNs or volumes by spanning RAID ranks allows you to create
LUNs or volumes to a maximum size of 2 TB.

The access to LUNs by the host systems is controlled through volume groups. Hosts or disks
in the same volume group share access to data. This is the new form of LUN masking.
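
For illustration, a hedged DS CLI example of this form of LUN masking for a System i host
follows; the storage image ID, volume range, WWPN, and names are hypothetical, and the volume
group ID (V11) is assumed to be the ID that the mkvolgrp command returned:

   dscli> mkvolgrp -dev IBM.2107-75ABCD1 -type os400mask -volume 1000-100F ITSO_i_volgrp
   dscli> mkhostconnect -dev IBM.2107-75ABCD1 -wwname 10000000C9123456 -hosttype iSeries -volgrp V11 ITSO_i_fc0

Only the volumes in volume group V11 are then visible to the System i Fibre Channel adapter
port identified by that WWPN.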

The DS8000 series allows:


 Up to 255 logical subsystems (LSS); with two storage system LPARs, up to 510 LSSs
 Up to 65 280 logical devices; with two storage system LPARs, up to 130 560 logical devices

2.4.3 Future directions of storage system LPARs


Plans for future directions of storage system LPARs from IBM include offering even more
flexibility in the use of storage system LPARs. Current plans call for offering a more granular
I/O allocation. Also, the processor resource allocation between LPARs is expected to move
from 50/50 to possibilities like 25/75, 0/100, 10/90, or 20/80. The processor resources are
more flexible, and in the future, plans call for a more dynamic movement of memory between
the storage system LPARs.

These are all features that can react to changing workload and performance requirements,
showing the enormous flexibility of the DS8000 series.



Application LPARs also maximize the value of using the storage system LPARs. IBM is
currently evaluating which potential storage applications offer the most value to the
customers. On the list of possible applications are, for example, backup/recovery applications
(Tivoli Storage Manager, Legato, Veritas, and so on).

2.5 IBM System Storage DS6000 series


The IBM System Storage DS6000 series is a Fibre Channel based storage system that
supports a wide range of IBM server platforms, including System z and System i, as well as
non-IBM server platforms and operating environments.

Figure 2-3 DS6000 series

In a small 3U footprint, the new storage subsystem provides performance and functions for
business continuity, disaster recovery and resiliency, previously only available in expensive
high-end storage subsystems. The DS6000 series is compatible regarding Copy Services
with the previous Enterprise Storage Server (ESS) Models 800 and 750, as well as with the
new DS8000 series.

The DS6000 series offers an entirely new era in price, performance, and scalability. Now for
the first time zSeries and iSeries customers have the option for a midrange priced storage
subsystem with all the features and functions of an enterprise storage subsystem.

Some clients do not like to put large amounts of storage behind one storage controller. In
particular, the controller part of a high-end storage system makes it expensive. Now you have
a choice. You can build very cost efficient storage systems by adding expansion
enclosures to the DS6800 controller, but because the DS6800 controller is not really
expensive, you can also grow horizontally by adding other DS6800 controllers. You also have
the option to grow into the DS8000 series easily by adding DS8000 systems to your
environment or by replacing DS6000 systems (see Figure 2-4).

Figure 2-4 Scaling options of the DS6000 and DS8000 series



2.5.1 Hardware overview
The DS6000 series consists of the DS6800, models 1750-511 and 1750-522, which have
dual Fibre Channel RAID controllers with up to 16 disk drives in the enclosure (see Figure 2-3
on page 22). Model 522 has the same storage capabilities and functionality as model 511 but
different warranty and service terms. Capacity can be increased by adding up to 13 DS6000
expansion enclosures, models 1750-EX1 and 1750-EX2, each with up to 16 disk drives, as
shown Figure 2-5.

Figure 2-5 DS6800 with five DS6000 expansion enclosures in a rack

DS6800 controller enclosure (models1750-511/522)


IBM System Storage systems are based on a server architecture. At the core of the DS6800
controller unit are two active/active RAID controllers based on the industry-leading
PowerPC® architecture from IBM. By employing a server architecture with standard hardware
components, the IBM storage division can always take advantage of the best of breed
components developed by other IBM divisions. The customer gets the benefit of a very cost
efficient and high performing storage system.

The processors
The DS6800 utilizes two 64-bit PowerPC 750GX 1 GHz processors for the storage server and
the host adapters, respectively, and another PowerPC 750FX 500 MHz processor for the
device adapter on each controller card. The DS6800 is equipped with 2 GB memory in each
controller card, adding up to 4 GB. Some part of the memory is used for the operating system
and another part in each controller card acts as nonvolatile storage (NVS), but most of the
memory is used as cache. This design to use processor memory makes cache accesses very
fast.

When data is written to the DS6800, it is placed in cache and a copy of the write data is also
copied to the NVS of the other controller card, so there are always two copies of write data
until the updates have been destaged to the disks. On System z, this mirroring of write data

can be disabled by application programs, for example, when writing temporary data (Cache
Fast Write). The NVS is battery backed up and the battery can keep the data for at least 72
hours if power is lost.

The DS6000 series controller’s Licensed Internal Code (LIC) is based on the DS8000 series
software, a greatly enhanced extension of the ESS software. Because 97% of the functional
code of the DS6000 is identical to the DS8000 series, the DS6000 has a very good base to
be a stable system.

The disk drives


The DS6800 controller unit can be equipped with up to 16 internal FC-AL disk drive modules,
offering up to 4.8 TB of physical storage capacity in only 3U (5.25”) of standard 19” rack
space.

Dense packaging
Calibrated Vectored Cooling™ technology used in System x and BladeCenter® to achieve
dense space saving packaging is also used in the DS6800. The DS6800 weighs only 49.6 kg
(109 lbs.) with 16 drives. It connects to normal power outlets with its two power supplies in
each DS6800 or DS6000 expansion enclosure. All this provides savings in space, cooling,
and power consumption.

Host adapters
The DS6800 has eight 2 Gbps Fibre Channel ports that can be equipped with two or up to
eight shortwave or longwave Small Form-factor Pluggables (SFPs). You order SFPs in pairs. The
2 Gbps Fibre Channel host ports (when equipped with SFPs) can also auto-negotiate to
1 Gbps for existing SAN components that support only 1 Gbps. Each port can be configured
individually to operate in Fibre Channel or FICON mode, but you should always have pairs.
Host servers should have paths to each of the two RAID controllers of the DS6800.

Switched FC-AL subsystem


The disk drives in the DS6800 or DS6000 expansion enclosure have a dual ported FC-AL
interface. Instead of forming an FC-AL loop, each disk drive is connected to two Fibre
Channel switches within each enclosure. With this switching technology there is a
point-to-point connection to each disk drive. This allows maximum bandwidth for data
movement, eliminates the bottlenecks of loop designs, and allows for specific disk drive fault
indication.

There are four paths from the DS6800 controllers to each disk drive to provide greater data
availability in the event of multiple failures along the data path. The DS6000 series systems
provide preferred path I/O steering and can automatically switch the data path used to
improve overall performance.

The DS6800 major features include:


 Two RAID controller cards
 Two PowerPC 750GX 1 GHz processors and one PowerPC 750FX processor on each
RAID controller card
 4 GB of cache
 Two battery backup units (one for each controller card)
 Two AC/DC power supplies with imbedded enclosure cooling units
 Eight 2 Gbps device ports for additional DS6000 expansion enclosures connectivity
 Two Fibre Channel switches for disk drive connectivity in each DS6000 series enclosure



 Eight Fibre Channel host ports that can be configured as pairs of FCP or FICON host
ports. The host ports auto-negotiate to either 2 Gbps or 1 Gbps link speeds
 Attachment to up to 13 DS6000 expansion enclosures
 Very small size, weight, and power consumption. All DS6000 series enclosures are 3U in
height and mountable in a standard 19-inch rack

DS6000 expansion enclosure (models 1750-EX1 and 1750-EX2)


The size and the front look of the DS6000 expansion enclosure (1750-EX1 and 1750-EX2) are
the same as those of the DS6800 controller enclosure. In the front you can have up to 16 disk drives.

Aside from the drives, the DS6000 expansion enclosure contains two Fibre Channel switches
to connect to the drives and two power supplies with integrated fans.

Up to 13 DS6000 expansion enclosures can be added to a DS6800 controller enclosure. The


DS6800 supports two dual redundant switched loops. The first loop is for the DS6800 and up
to six DS6000 expansion enclosures. The second switched loop is for up to seven expansion
enclosures. For connections to the previous and next enclosure, four inbound and four
outbound 2 Gbps Fibre Channel ports are available.

2.5.2 Storage capacity


The DS6000 series offers outstanding scalability with physical capacities ranging from
584 GB up to 67.2 TB, while maintaining excellent performance. Physical capacity for the
DS6800 and DS6000 expansion enclosure is purchased through disk drive sets. A disk drive
set contains four identical disk drives of the same capacity and revolutions per minute (RPM).
Currently, a minimum of eight drives (two disk drive sets) are required for the DS6800. You
can increase the capacity of your DS6000 by adding one or more disk drive sets to the
DS6800 or DS6000 expansion enclosure. Within the controller model DS6800, you can install
up to four disk drive sets (16 DDMs). Up to seven DS6000 expansion enclosures can be
added non-disruptively, on demand, as your storage needs grow.

According to your performance needs you can select from three different disk drive types: fast
73 GB drives rotating at 15 000 RPM, good performing and cost efficient 146 GB drives
operating at 10 000 or 15 000 RPM, and high capacity 300 GB drives running at 10 000 or
15 000 RPM.

The minimum storage capability with eight 73 GB DDMs is 584 GB. The maximum storage
capability with 16 300 GB DDMs for the DS6800 controller enclosure is 4.8 TB. If you want to
connect more than 16 disks, you can use the optional DS6000 expansion enclosures that
allow a maximum of 224 DDMs per storage system and provide a maximum storage
capability of 67.2 TB.

Every four or eight drives form a RAID array, and you can choose between RAID-5 and
RAID-10. The configuration process enforces that at least two spare drives are defined on
each loop. In case of a disk drive failure or even when the DS6000’s predictive failure analysis
comes to the conclusion that a disk drive might fail soon, the data of the failing disk is
reconstructed on the spare disk. More spare drives might be assigned if you have drives of
mixed capacity and speed. The mix of different capacities and speeds will not be available at
general availability, but at a later time.

2.5.3 DS management console
The DS management console consists of the DS Storage Manager software, shipped with
every DS6000 series system, and a computer system on which the software can run. The
DS6000 management console running the DS Storage Manager software is used to
configure and manage DS6000 series systems. The software runs on a Windows or Linux
system that the client can provide.

IBM System Storage DS Storage Manager


The DS Storage Manager is a Web-based graphical user interface (GUI) that is used to
perform logical configurations and Copy Services management functions. It can be accessed
from any location that has network access to the DS management console using a Web
browser. You have the following options to use the DS Storage Manager:
 Simulated (offline) configuration
This application allows the user to create or modify logical configurations when
disconnected from the network. After creating the configuration, you can save it and then
apply it to a storage unit at a later time.
 Real-time (online) configuration
This provides real-time management support for logical configuration and Copy Services
functions to a network attached storage unit.

The DS6000 series’ Express™ Configuration Wizards guide you through the configuration
process and help get the system operational in minimal time. The DS Storage Manager’s GUI
is intuitive and very easy to understand.

IBM System Storage DS command-line interface


The DS command-line interface (CLI) is a single CLI that has the ability to perform a full set of
commands for logical configuration, Copy Services activities, or both. The DS CLI can also
issue Copy Services commands to an ESS Model 750, ESS Model 800 (at LIC level 2.4.2 and
above), or DS8000 series system. It is possible to combine the DS CLI commands into a
script. This can help enhance your productivity because it eliminates the previous (on ESS)
requirement for you to create and save a task using the GUI.

A few of the specific types of functions that you can perform with the DS CLI include:
 Checking and verifying storage unit configuration
 Checking the current Copy Services configuration that is used by the storage unit
 Creating new logical storage and Copy Services configuration settings
 Modifying or deleting logical storage and Copy Services configuration settings

DS Open application programming interface


The DS Open application programming interface (API) is a non-proprietary storage
management client application that supports routine LUN management activities, such as
LUN creation, mapping, and masking; and the creation or deletion of RAID-5 and RAID-10
volume spaces. The DS Open API also enables Copy Services functions such as FlashCopy
and Remote Mirror and Copy.

2.5.4 Supported environment


The DS6000 system can be connected across a broad range of server environments,
including IBM System i, System p, System z, System x, BladeCenter, as well as servers from
Sun Microsystems, Hewlett-Packard, and other providers. You can split the DS6000 system
storage capacity easily among the attached environments. This capability makes it an ideal
system for storage consolidation in a dynamic and changing on demand environment.

Particularly for System z and System i customers, the DS6000 series is an exciting product,
because for the first time they have the choice of a midrange priced storage system for their
environment with a performance that is similar to or exceeds that of an IBM ESS.

2.5.5 Business continuance functions


As data and storage capacity grow faster year by year, most customers can no longer afford
to stop their systems to back up terabytes of data; it simply takes too long. Therefore, IBM
has developed fast replication techniques that can provide a point-in-time copy of the
customer’s data in a few seconds or even less. This function is called FlashCopy on the
DS6000 series, as well as on the ESS models and DS8000 series.

As data becomes more and more important for an enterprise, losing data or access to data,
even only for a few days, might be fatal for the enterprise. Therefore, many customers,
particularly those with high end systems like the ESS and the DS8000 series, have
implemented Remote Mirroring and Copy techniques previously called Peer-to-Peer Remote
Copy (PPRC) and now called Metro Mirror, Global Mirror, or Global Copy. These functions are
also available on the DS6800 and are fully interoperable with ESS 800 and 750 models and
the DS8000 series.

Point-in-time Copy feature


The Point-in-time Copy feature consists of the FlashCopy function. The primary objective of
FlashCopy is to create very quickly a point-in-time copy of a source volume on the target
volume. When you initiate a FlashCopy operation, a FlashCopy relationship is created
between the source volume and target volume. A FlashCopy relationship is a mapping of a
FlashCopy source volume and a FlashCopy target volume. The FlashCopy relationship exists
between the volume pair from the time that you initiate a FlashCopy operation until the
storage unit copies all data from the source volume to the target volume, or until you delete
the FlashCopy relationship if it is a persistent FlashCopy.

The benefits of FlashCopy are that the point-in-time copy is immediately available for use for
backups and the source volume is immediately released so that applications can be
restarted, with minimal application downtime. The target volume is available for read and write
processing so it can be used for testing or backup purposes. You can choose to leave the
copy as a logical copy or to physically copy the data. If you choose to physically copy the data,
a background process copies tracks from the source volume to the target volume.

FlashCopy is an additional charged feature. You have to order the Point-in-time Copy feature,
which includes FlashCopy. Then you have to follow a procedure to get the key from the
internet and install it on your DS6800.
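
As an illustration of this procedure, the downloaded feature activation key can be applied
and verified with the DS CLI. The key value and storage image ID below are placeholders:

   dscli> applykey -key xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx IBM.1750-1300247
   dscli> lskey IBM.1750-1300247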

To make a FlashCopy of a LUN or a z/OS CKD volume you need a target LUN or z/OS CKD
volume of the same size as the source within the same DS6000 system (some operating
systems also support a copy to a larger volume). z/OS customers can even do FlashCopy on
a data set level basis when using DFSMSdss™. The DS6000 also supports Concurrent Copy.

The DS Storage Manager’s GUI provides an easy way to set up FlashCopy or Remote Mirror
and Copy functions. Not all functions are available through the GUI. Instead, we recommend
that you use the new DS command-line interface (DS CLI), which is much more flexible.

Full volume FlashCopy
Full volume FlashCopy allows a FlashCopy of a logical volume by either copying all source
tracks to the target (background copy) or by copying only those tracks that are modified (no
background copy).

Data Set FlashCopy


In a z/OS environment when DFSMSdss is used to copy a data set, by default FlashCopy is
used to do the copy. In this environment FlashCopy can operate at a data set level.

Multiple Relationship FlashCopy


Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with
multiple targets simultaneously. This flexibility allows you to initiate up to 12 FlashCopy
relationships on a given logical unit number (LUN), volume, or data set, without needing to
first wait for or cause previous relationships to end.

Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a
FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to
bring the target current to the source's newly established point-in-time is copied. This
reduces the load on the back end: the disk drives are less busy and can handle more production I/O.
The direction of the refresh can also be reversed, in which case the LUN or volume previously
defined as the target becomes the source for the LUN or volume previously defined as the
source (and now the target).
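
A minimal DS CLI sequence for incremental FlashCopy might look as follows (the volume
IDs are examples). The -record and -persist options enable change recording so that a later
refresh copies only the changed tracks; the reverseflash command reverses the direction of
the relationship:

   dscli> mkflash -record -persist 0100:0200      (initial FlashCopy with change recording)
   dscli> resyncflash -record -persist 0100:0200  (refresh: copy only the tracks changed since the last FlashCopy)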

FlashCopy to a remote mirror primary


When we mention remote mirror primary, we mean any primary volume of a Metro Mirror or
Global Copy pair. FlashCopy to a remote mirror primary lets you establish a FlashCopy
relationship where the target is a remote mirror primary volume. This enables you to create
full or incremental point-in-time copies at a local site and let remote mirroring operations copy
the data to the remote site. Many clients that utilize a remote copy function use it for all their
volumes. The DS6800 allows a FlashCopy onto a remote mirror primary volume.

Previous ESS clients faced the issue that they could not use FlashCopy because a
FlashCopy onto a volume that was mirrored was not possible. This restriction particularly
affected z/OS clients using data set level FlashCopy for copy operations within a mirrored
pool of production volumes.
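
In DS CLI terms, this capability corresponds to the -tgtpprc option of mkflash, which allows
the FlashCopy target to be a remote mirror (PPRC) primary volume. The volume IDs are
examples only:

   dscli> mkflash -tgtpprc 0100:0200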

Consistency Group commands


The Consistency Group function of FlashCopy allows the DS6800 to hold off I/O activity to a
LUN or volume until all LUNs/volumes within the group have established a FlashCopy
relationship to their targets and the FlashCopy Consistency Group Created command is
issued or a timer expires. Consistency Groups can be used to help create a consistent
point-in-time copy across multiple LUNs or volumes, and even across multiple DS6800
systems, as well as across DS8000 series, ESS 800, and ESS 750 systems.
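
As a sketch of how this is done with the DS CLI (the volume and LSS IDs are examples), the
-freeze option holds off write I/O to the source LSS until the group is complete or the
unfreezeflash command is issued:

   dscli> mkflash -freeze 0100:0200 0101:0201   (establish FlashCopy for all volumes in the group)
   dscli> unfreezeflash 01                      (resume write I/O to source LSS 01)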

Inband commands over remote mirror link


In a remote mirror environment where you want to do a FlashCopy of the remote volumes,
instead of sending FlashCopy commands across an Ethernet connection to the remote
DS6800, Inband FlashCopy allows commands to be issued from the local or intermediate
site, and transmitted over the remote mirror Fibre Channel links for execution on the remote
DS6800. This eliminates the need for a network connection to the remote site solely for the
management of FlashCopy.
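
With the DS CLI, inband FlashCopy is issued with the remote FlashCopy commands, for
example mkremoteflash, where the -conduit parameter names the local LSS whose remote
mirror paths carry the command. The device IDs, LSS, and volume IDs below are
illustrative only:

   dscli> mkremoteflash -conduit IBM.1750-1300247/01 -dev IBM.1750-1300811 0100:0200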



Remote Mirror and Copy feature
Remote Mirror and Copy is another separately orderable priced feature; it includes Metro
Mirror, Global Copy, and Global Mirror. The local and remote storage systems must have a
Fibre Channel connection between them. Remote Mirror and Copy functions can also be
established between DS6800 and ESS 800/750 systems but not between a DS6800 and
older ESS models like the F20, because these models do not support remote mirror or copy
across Fibre Channel (they only support ESCON) and the DS6800 does not support ESCON.

Metro Mirror
Metro Mirror was previously called Synchronous Peer-to-Peer Remote Copy (PPRC) on the
ESS. It provides a synchronous copy of LUNs or zSeries CKD volumes. A write I/O to the
source volume is not complete until it is acknowledged by the remote system. Metro Mirror
supports distances of up to 300 km.
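
As a hedged illustration, a Metro Mirror pair is typically established with the DS CLI in two
steps: first a remote mirror path between the source and target LSS, and then the volume
pair itself. All IDs, ports, and WWNNs below are example values:

   dscli> mkpprcpath -dev IBM.1750-1300247 -remotedev IBM.1750-1300811 -remotewwnn 500507630EFFFC6F -srclss 01 -tgtlss 01 I0001:I0101
   dscli> mkpprc -dev IBM.1750-1300247 -remotedev IBM.1750-1300811 -type mmir 0100:0100

Specifying -type gcp instead of -type mmir creates a non-synchronous Global Copy pair, as
described in the next section.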

Global Copy
This is a non-synchronous long distance copy option for data migration and backup. Global
Copy was previously called PPRC-XD on the ESS. It is an asynchronous copy of LUNs or
System z CKD volumes. An I/O is signaled complete to the server as soon as the data is in
cache and mirrored to the other controller cache. The data is then sent to the remote storage
system. Global Copy allows for copying data to far away remote sites. However, if you have
more than one volume, there is no mechanism that guarantees that the data of different
volumes at the remote site is consistent in time.

Global Mirror
Global Mirror is similar to Global Copy but it provides data consistency.

Global Mirror is a long distance remote copy solution across two sites using asynchronous
technology. It is designed to provide the following:
 Support for virtually unlimited distances between the local and remote sites, with the
distance typically limited only by the capabilities of the network and channel extension
technology being used. This can better enable you to choose your remote site location
based on business needs and enables site separation to add protection from localized
disasters.
 A consistent and restartable copy of the data at the remote site, created with little impact
to applications at the local site.
 Data currency, where for many environments the remote site lags behind the local site an
average of three to five seconds, helps to minimize the amount of data exposure in the
event of an unplanned outage. The actual lag in data currency experienced will depend
upon a number of factors, including specific workload characteristics and bandwidth
between the local and remote sites.
 Efficient synchronization of the local and remote sites, with support for failover and failback
modes, which helps to reduce the time required to switch back to the local site after a
planned or unplanned outage.
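
A Global Mirror configuration combines Global Copy, FlashCopy, and a session. The
following DS CLI sequence is only a sketch of the main steps, with example IDs; a real setup
also requires remote mirror paths and careful planning of the B and C volumes:

   dscli> mkpprc -remotedev IBM.1750-1300811 -type gcp 0100:0100   (Global Copy pair A to B)
   dscli> mkflash -record -persist -nocp 0100:0200                 (FlashCopy B to C, issued on the remote system)
   dscli> mksession -lss 01 01                                     (create Global Mirror session 01 on LSS 01)
   dscli> chsession -lss 01 -action add -volume 0100 01            (add the A volume to the session)
   dscli> mkgmir -lss 01 -session 01                               (start consistency group formation)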

Remote Mirror connections


All of the remote mirroring solutions described here use Fibre Channel as the
communications link between the primary and secondary systems. The Fibre Channel ports
used for Remote Mirror and Copy can be configured either as dedicated remote mirror links or
as shared ports between remote mirroring and Fibre Channel Protocol (FCP) data traffic.

z/OS Global Mirror


z/OS Global Mirror (previously called XRC) offers a specific set of very high scalability and
high performance asynchronous mirroring capabilities designed to match very demanding,
large zSeries resiliency requirements. The DS6000 series systems can only be used as a
target system in z/OS Global Mirror operations.

2.5.6 Resiliency
The DS6000 series has built in resiliency features that are not generally found in small
storage devices. The DS6000 series is designed and implemented with component
redundancy to help reduce and avoid many potential single points of failure.

Within a DS6000 series controller unit, there are redundant RAID controller cards, power
supplies, fans, Fibre Channel switches, and Battery Backup Units (BBUs).

There are four paths to each disk drive. Using Predictive Failure Analysis®, the DS6000 can
identify a failing drive and replace it with a spare drive without customer interaction.

Four path switched drive subsystem


Most vendors use Fibre Channel Arbitrated Loops, which can make it difficult to identify
failing disks and are more susceptible to losing access to storage. The IBM DS6000 series
provides a dual active/active architecture, including dual Fibre Channel switched disk
drive subsystems, which provide four paths to each disk drive. This ensures high data
availability even in the event of multiple failures along the data path. Fibre Channel switches
are used for each 16 disk drive enclosure unit, so that if a connection to one drive is lost, the
remaining drives can continue to function, unlike disk drives configured in a loop design.

Spare drives
The configuration process when forming RAID-5 or RAID-10 arrays will require that two global
spares are defined in the DS6800 controller enclosure. If you have expansion enclosures, the
first enclosure will have another two global spares. More spares could be assigned when
drive groups with larger capacity drives are added.

Predictive Failure Analysis


The DS6800 uses the well-known (from System x and ESS) Predictive Failure Analysis (PFA)
to monitor the operation of its drives. PFA takes preemptive and automatic actions before
critical drive failures occur: based on a policy-based disk responsiveness threshold, the
suspect disk drive is taken offline. The content of the failing drive is reconstructed on the
global spare disk drive from the data and parity information of the other drives in the RAID
array. At the same time, service alerts are invoked, the failed disk is identified with Light Path
indicators, and an alert message is displayed on the management server.

2.5.7 Interoperability
The DS6800 features unsurpassed enterprise interoperability for a modular storage
subsystem because it uses the same software as the DS8000 series, which is an extension of
the proven IBM ESS code. This allows for cross-DS6000/8000 management and common
software function interoperability, for example, Metro Mirror between a DS6000 and an ESS
Model 800, while maintaining a Global Mirror between the same DS6000 and a DS8000 for
some other volumes.

2.5.8 Service and setup


DS6000 series systems are designed to be easy to install and maintain; they are customer
setup products. The DS Storage Manager’s intuitive Web-based GUI makes the configuration
process easy. For most common configuration tasks, Express Configuration Wizards are
available to guide you through the installation process.

Light Path Diagnostics and controls are available for easy failure determination, component
identification, and repair if a failure does occur. The DS6000 series can also be remotely
configured and maintained when it is installed in a remote location.

The DS6800 consists of only five types of customer replaceable units (CRU). Light Path
indicators will tell you when you can replace a failing unit without having to shut down your
whole environment. If concurrent maintenance is not possible, which might be the case for
some double failures, the DS Storage Manager’s GUI will guide you on what to do. Of course,
a customer can also sign a service contract with IBM or an IBM Business Partner for
extended service.

The DS6800 can be configured for a call home in the event of a failure and it can do event
notification messaging. In this case an Ethernet connection to the external network is
necessary. The DS6800 can use this link to place a call to IBM or to another service provider
when it requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.

At regular intervals the DS6000 sends out a heartbeat. The service provider uses this report
to monitor the health of the call home process.

Configuration changes like adding disk drives or expansion enclosures are a nondisruptive
process. Most maintenance actions are nondisruptive, including downloading and activating
new Licensed Internal Code.

The DS6000 model 522 comes with a four-year warranty, while model 511 has a 1-year IBM
onsite repair warranty, which can be extended to three years through IBM Global Services
offerings. A four-year warranty is outstanding in the industry and shows IBM's confidence in
this product. In addition, this warranty helps give the DS6800 a low total cost of ownership
(TCO).

2.5.9 Configuration flexibility


The DS6000 series uses virtualization techniques to separate the logical view of hosts onto
LUNs from the underlying physical layer. This provides high configuration flexibility (see 2.6.2,
“DS8000 and DS6000 virtualization” on page 38).

Dynamic LUN/volume creation and deletion


The DS6800 gives you a high degree of flexibility in managing your storage. LUNs can be
created and also deleted without having to reformat a whole array.

LUN and volume creation and deletion is nondisruptive. When you delete a LUN or volume,
the capacity can be reused, for example, to form a LUN of a different size.

Large LUN and large CKD volume support


You can configure LUNs and volumes to span arrays, which allows for large LUN sizes. LUNs
can be as large as 2 TB.

The maximum volume size has also been increased for CKD volumes. Volumes with up to
65520 cylinders can now be defined, which corresponds to about 55.6 GB. This can greatly
reduce the number of volumes that have to be managed.

Flexible LUN to LSS association
A Logical Subsystem (LSS) is constructed to address up to 256 devices.

On an ESS there was a predefined association of arrays to Logical Subsystems, which
caused some inconvenience, particularly for zSeries customers. Because zSeries
environments typically work with relatively small volume sizes, the available address range
for an LSS at times was not sufficient to address the entire capacity that was available in the
arrays that were associated with the LSS. Alternatively, in large capacity configurations where
large CKD volumes were used, clients had to define several address ranges, even when only
a few addresses were used in an LSS. Some customers were confronted with the zSeries
addressing limit of 64 K addresses.

There is no predefined association of arrays to LSSs on the DS6000 series. Clients are free
to put LUNs or CKD volumes into LSSs and make the best use of the 256 address range of
an LSS.

Simplified LUN masking


The access to LUNs by host systems is controlled using volume groups. Host attachments that
are associated with the same volume group share access to the volumes in that group. This is
a new form of LUN masking. Instead of
doing LUN masking for individual World Wide Port Names (WWPN), as implemented on the
ESS, you can now do LUN masking at the host level by grouping all or some WWPNs of a
host into a so-called Host Attachment and associating the Host Attachment to a Volume
Group.

This new LUN masking process simplifies storage management because you no longer have
to deal with individual Host Bus Adapters (HBAs) and volumes, but instead with groups.

Summary
In summary, the DS6000 series allows for:
 Up to 32 logical subsystems
 Up to 8192 logical volumes
 Up to 1040 volume groups
 Up to 2 TB LUNs
 Large z/OS volumes with up to 65520 cylinders

2.6 DS8000 and DS6000 for i5/OS


Before we discuss planning for DS8000 or DS6000 for i5/OS, we discuss briefly the basics of
DS8000 and DS6000 architecture and virtualization concepts.

2.6.1 Understanding the architecture


This section provides information about the DS8000 and DS6000 architectures.

DS8000 architecture
DS8000 consists of a base frame and up to four expansion frames. The base frame contains:
 Two processor complexes (System p model 570 servers).
 I/O enclosures, which contain up to 16 host adapters to connect to host servers, and up to
eight device adapters to connect to disk drives. Device adapters are arranged in up to four
device adapter pairs.



 Up to eight storage enclosures, each of which can contain up to 16 disk drives.
 Storage Hardware Management Console (S-HMC) and two Ethernet switches.
 Cooling fans, power supply cards, rack power control cards, and battery backups.

An expansion frame can contain:


 Up to 16 storage enclosures, each of which can contain up to 16 disk drives.
 I/O enclosures (only in the first expansion frame) with up to 16 host adapters and up to 4
DA pairs.
 Cooling fans, power supply cards, and battery backups

The DS8000 is available in three different types and models:


 DS8100 type 2107 model 931, which has two 2-way processors. It is available as a base
frame to which one 92E expansion frame with only disk enclosures can be attached.
 DS8300 type 2107 model 932, which has two 4-way processors. It is available as a base
frame to which two 92E expansion frames with disk enclosures and host adapters can be
attached.
 DS8300 type 2107 model 9B2, which has two 4-way processors, partitioned into two
storage image LPARs. It is available as a base frame to which two 9AE expansion frames
containing disk enclosures and host adapters can be attached.

Figure 2-6 shows the DS8000 base frame and expansion frames.

Figure 2-6 DS8000 model 932 with two expansion frames

The DS8000 storage system consists of two processor complexes. Each processor complex
has access to multiple host adapters to connect to host systems or use for Copy Services.
Each processor complex uses several Fibre Channel Arbitrated Loop (FC-AL) device
adapters to connect to disk enclosures, and a DS8000 can have up to 16 device adapters,
arranged in up to eight device adapter pairs. Each device adapter connects the processor
complex to two switched Fibre Channel networks; each switched network attaches storage
enclosures that contain disk drives.

The DS8000 contains Fibre Channel disk drives that reside in disk enclosures. Each
enclosure can contain up to two array sets, and each array set can contain up to 16 disk
drives. In each enclosure there are two Fibre Channel switches, both of which are connected
to all disk drives in the enclosure.

Each device adapter connects a processor complex to disk drives in up to two disk
enclosures and connects to the disk drives in each enclosure through both switches. Disk drives from
each enclosure are connected to two device adapters in a pair. Each device adapter has
access to any disk drive through two Fibre Channel networks.

Device adapters and host adapters operate in a high bandwidth interconnect called Remote
Input Output-G (RIO-G).

Table 2-1 shows the possible amounts of processor memory (PM) and write cache, in GB, for
each model. Note that in this table SF means storage facility image, of which there are two for
the 9B2 LPAR machine model.

Table 2-1 Processor memory and write cache (GB) for the various models

Model         PM / write cache   PM / write cache   PM / write cache   PM / write cache
DS8100 931    16 / 1             32 / 1             64 / 2             128 / 4
DS8300 932    32 / 1             64 / 2             128 / 4            256 / 8
DS8300 9B2    32 / 1 each SF     64 / 1 each SF     128 / 2 each SF    256 / 4 each SF



Figure 2-7 shows the DS8000 architecture.

Figure 2-7 DS8000 architecture

For detailed information about the DS8000 architecture, refer to IBM System Storage DS8000
Series: Architecture and Implementation, SG24-6786, which is available at:

http://www.redbooks.ibm.com/abstracts/sg246786.html?Open

DS6000 architecture
The DS6000 contains disk drives, controller cards, and power supplies in a chassis, which is
called a server enclosure. Up to seven expansion enclosures with disk drives can be
connected to the server enclosure.

Two controller cards are in the server enclosure, each of them contains a 4-port host adapter
to connect to host servers or use for Copy Services. Each controller card has also an
integrated 4-port FC-AL device adapter to connect it to two separate Fibre Channel loops,
each of them attaches disk enclosures.

DS6000 contains Fibre Channel disk drives. Each server enclosure or expansion enclosure
can contain up to 16 disk drives and two Fibre Channel switches through which the drives are
connected. Four ports in each switch are used to connect to other enclosures.

A device adapter on each controller card connects disk drives in the server enclosure and the
first expansion enclosure using both switches in the enclosure. Figure 2-8 shows the next
expansion enclosures connected to controller cards in FC loops.

Figure 2-8 DS6000 connection to expansion enclosures



Each controller card in the DS6000 contains 2 GB of processor memory.
Figure 2-9 shows the DS6000 architecture.

Figure 2-9 DS6000 architecture

2.6.2 DS8000 and DS6000 virtualization


With DS8000 and DS6000, virtualization is a process to prepare disk drives that are used by
host operating systems. The virtualization hierarchy consists of several layers that “transform”
disk drives to devices that the host operating systems can see. These layers are:
 Array sites
 Arrays
 Ranks
 Extent pools
 Logical volumes
 Logical subsystems
 Host attachments and volume groups

In this section we describe each of these layers briefly. Figure 2-10 illustrates the
virtualization layers.

Figure 2-10 Virtualization layers

Array sites
An array site in DS8000 is a group of eight disk drives, sometimes also referred to as Disk
Drive Modules (DDMs). An array site in DS8000 is pre-determined by IBM manufacturing and
is made of DDMs from the same disk enclosure. All DDMs in an array site are of the same
type and capacity.

An array site in DS6000 is a group of four DDMs. An array site in DS6000 is pre-determined
and is made of DDMs from the same disk enclosure. All DDMs in an array site are of the
same type and capacity.



Figure 2-11 shows an array site in DS8000.

Figure 2-11 Array site in DS8000

Arrays
In DS8000 an array is created from one array site. In DS6000 an array is created from one or
two array sites. Forming an array means defining it for a specific RAID type; the supported
RAID types are RAID-5 and RAID-10.

In DS8000, depending on the sparing rules, some RAID-5 arrays contain a spare disk drive,
and have parity data distributed across seven disk drives. Such an array is referred to as a
6+P+S array. In a RAID-5 array that does not contain a spare disk drive, parity data is
distributed across eight disk drives; such an array is referred to as a 7+P array.

Similarly, in DS8000, depending on sparing rules some RAID-10 arrays contain two spare
drives and are referred to as 3+3+2 arrays. A RAID-10 array without spare disks is referred to
as a 4+4 array.

In DS6000 a RAID-5 array can contain eight DDMs (such an array is referred to as an 8-array),
or it can contain four DDMs (such an array is referred to as a 4-array). Depending on the
sparing rules, in DS6000 some arrays contain a spare disk drive. In an 8-array with a spare
disk drive, the parity data is spread across seven disk drives; such an array is referred to as a
6+P+S array. In an 8-array that does not contain a spare disk drive, parity data is distributed
across eight disk drives; such an array is referred to as a 7+P array. In a 4-array with a spare
disk drive, parity data is spread across three disk drives. In a 4-array with no spare disk drive,
parity data is spread across four DDMs.

In DS6000, some RAID-10 8-arrays or RAID-10 4-arrays contain two spare disks, depending
on sparing rules of DS6000.

For information about sparing rules in DS8000 and DS6000, refer to 4.2.6, “Planning for
capacity” on page 93.

Figure 2-12 shows 8-arrays in RAID-5 protection. Note that in reality, parity data is spread
across seven or eight disk drives, but because they take capacity of one disk drive, we usually
show them as a disk drive.

Figure 2-12 Eight arrays in RAID-5

Ranks
A rank is logically contiguous storage space made up from one array. When formatting a rank,
you decide if the rank is formatted as fixed block or CKD. If you specify that a rank is fixed
block, the corresponding array is defined for fixed block data and can be used by open
systems. i5/OS uses fixed block arrays. If you specify that a rank is CKD, the corresponding
array is defined for CKD data and can be used by System z hosts.

When forming a rank, the capacity of the corresponding array is divided into equal extents.
The extent size of a fixed block rank is 1 GB, where GB means 2^30 bytes (also called a binary GB).
On DS storage systems, the strip size, which is the piece of data of a RAID stripe on each
physical disk, is 256 KB. So, each extent is made of 4096 strips.



Figure 2-13 shows fixed block RAID-5 ranks of 73 GB DDMs.

Note: Metadata is presented in this figure as a block of data for clarity.

Figure 2-13 Fixed block ranks of 73 GB DDMs

Extent pools
An extent pool is a group of extents of the same type (either FB or CKD) that belong to one or
more ranks of the same rankgroup. Logical volumes (LUNs) are created from extent pools.
Although it is possible that an extent pool contains extents from ranks with different
characteristics such as RAID types and DDM speeds or capacities, it is recommended that all
ranks that belong to an extent pool have the same characteristics and, consequently, that all
extents in the extent pool have homogenous characteristics. An extent pool can be created
only from ranks with the same extent type, either fixed block or CKD.

Note: We recommend that you define one extent pool for each single rank to better keep
track of the location and performance of LUNs and to ensure that LUNs are evenly
spread between the two processors.

When you define an extent pool, you decide to which processor the ranks in the extent pool
have an affinity. This affinity determines which of the two processors is handling its I/O
processing using the rankgroup parameter. If an extent pool is defined for rankgroup 0, the
ranks belonging to it have an affinity to processor 0. Likewise, if the extent pool is defined for
rankgroup 1, the ranks in it have an affinity to processor 1.

Figure 2-14 shows an example of extent pools.

Figure 2-14 Extent pools

Logical volumes
A standard logical volume is a SCSI logical unit (LUN) or CKD volume that is made of a set of
real extents from one extent pool. The capacity allocated to a LUN is always a multiple of the
1 GB extent size, so any LUN size that is not an exact multiple of 1 GB leaves some unused
space in the last extent that is allocated to the LUN. For more information about i5/OS LUNs in an
extent pool, refer to 4.2.6, “Planning for capacity” on page 93.

DS8000 Release 3 includes the virtualization concept of a space efficient logical volume,
which is a thinly provisioned volume that is made of virtual extents from a repository volume
within the same extent pool. The repository volume is the backstore for providing real storage
space to all space efficient volumes within the same extent pool. Space from the repository
volume is allocated to a space efficient volume proportional to the host I/O write activity on
track level (64 KB) granularity. A space efficient volume reports to the host with its full virtual
capacity, although in reality only a user-defined smaller piece of its storage space, such as
20% of its virtual capacity, is available physically from the repository volume.

The intended usage case for space efficient volumes is space efficient FlashCopy where a
space efficient volume is used as the FlashCopy target volume in a “short-lived” FlashCopy
relationship, for example for a nightly system backup where the FlashCopy source volume is
changed merely for the duration of the backup. For more information, see 2.7.8, “Space
efficient FlashCopy” on page 52.

For an i5/OS workload, it is important that enough physical disk arms are available for the
i5/OS system. A LUN can be spread over DDMs from only one rank, so that it can be served
by only six or seven disk arms, depending on the type of rank. Thus, with the use of extent
pools on DS, the question arises: can a LUN contain extents from multiple ranks and use
disk arms from more than one rank?

Normally, extents for a LUN are taken from only one rank, even if there are many ranks in the
extent pool. LUNs are defined so that they use extents of the first free rank until it is full, then
use extents from the next rank in the extent pool, and so forth. Therefore, a LUN usually uses
disk arms from only one rank. However, a LUN uses disk arms from two ranks when it is
created in a multi-rank extent pool and when the first rank does not have enough free extents
so that some first extents from the next rank are used also.

As we mentioned in “Extent pools” on page 42, we recommend that you use only one rank
per extent pool, so that any single LUN is always using extents from one single rank only. To
use more physical disk arms for an i5/OS system, simply create other LUNs on other ranks
and assign these LUNs to the same i5/OS auxiliary storage pool.
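
To illustrate the whole virtualization flow with the DS CLI, the following sketch creates a
RAID-5 array from an array site, forms a rank, assigns it to a single-rank extent pool, and
then creates i5/OS LUNs on it. All names, IDs, and the -os400 volume model are example
values; the volume IDs 1000-1007 place the LUNs in LSS 10:

   dscli> mkarray -raidtype 5 -arsite S1
   dscli> mkrank -array A1 -stgtype fb
   dscli> mkextpool -rankgrp 0 -stgtype fb iSeries_pool_0
   dscli> chrank -extpool P0 R1
   dscli> mkfbvol -extpool P0 -os400 A05 -name ITSO_i5_vol 1000-1007

Here, -os400 A05 is one example of a protected i5/OS volume model; check the current
documentation for the model code that matches the LUN size you need.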

Figure 2-15 shows an example of a LUN.

Figure 2-15 Logical volume (LUN)

The DS8000 Release 3 microcode supports storage pool striping, also known as extent
rotation, which allows you to create LUNs in a multi-rank extent pool so that, with the same
LUN, extents are used from multiple ranks.

Note: We generally do not recommend that you use storage pool striping to create LUNs
for i5/OS. We recommend that you create single-rank extent pools only. System i storage
management balances the I/O as best as possible across all available LUNs, and DS8000
storage pool striping simply introduces another virtualization layer that is not required for
i5/OS.

DS8000 Release 3 also includes the Dynamic Volume Expansion function, which provides the
ability to increase the size of a logical volume when it is online to a host system, with the
restriction that it can be used only for logical volumes that are not in a Copy Services relation.

Important: Dynamic Volume Expansion is not supported on i5/OS.

Logical subsystems
A logical subsystem (LSS) is a logical construct for grouping up to 256 logical volumes, which
are assigned a unique logical volume identifier (ID) that consists of the LSS number and the
volume number. The volume ID is represented in hexadecimal format by a 2-digit LSS
number, followed by a 2-digit volume number. For example, volume 0x1023 belongs to LSS
0x10 and has volume number 0x23. When you create a LUN, you determine to which LSS it
belongs with the first two digits of the specified volume ID.

On ESS, there was a fixed association between the LSS and device adapters (and associated
ranks). On DS, there is no fixed binding between any rank and any logical subsystem. The
only restriction is that LSSs that are associated with volumes on ranks belonging to rankgroup
0 have even numbers, and LSSs that are associated with volumes on ranks belonging to
rankgroup 1 have odd numbers.

For open systems and i5/OS, LSSs are important in two aspects:
 To determine to which processor a LUN has affinity
 To allocate LUNs correctly for Copy Services

For more information about planning LSSs, refer to 4.2.6, “Planning for capacity” on page 93.

An address group is a group of fixed block or CKD LSSs that can contain up to 16 LSSs. An
address group is created automatically when the first LSS that is associated with that address
group is created. When you create a logical volume, you determine the address group of that
volume by the first digit of the volume’s identifier. For example, when you create a LUN and
specify its ID as 1200, you determine that the LUN is in address group 1.

Note: For reasons of consistency, do not use address group 0 (that is, LSS 0x00 to 0x0F)
for creating open systems host volumes, because this address group was limited to CKD
volumes in the past.

Host attachments and volume groups


A host Fibre Channel adapter port is identified by the DS through its World Wide Port Name
(WWPN). A set of host ports can be associated to a port group and managed together. This
port group is referred to as host attachment within the GUI and is referred to as a host
connection in the DS CLI. A host attachment or connection can be associated with a volume
group to define which LUNs are assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that is attached to a host adapter. Two
types of volume groups are used with open system hosts. These volume groups determine
how the logical volume number is converted to the host-addressable LUN_ID on the Fibre
Channel SCSI interface as follows:
 A map volume group is used in conjunction with FC SCSI host types that poll for LUNs by
walking the address range on the SCSI interface.
 A mask volume group type is used in conjunction with FC SCSI host types that use the
SCSI Report LUN command to determine the LUN_IDs that are accessible.

i5/OS uses the Report LUN command to determine the LUN_ID. Therefore, mask is the
correct volume group type for i5/OS.

When associating a host attachment to the volume group, the host attachment contains
attributes that define the logical blocksize and the address discovery method that the host
adapter uses. These attributes must be consistent with the type of volume group that is
assigned to that host attachment.

i5/OS LUNs use 520 bytes per sector. From these 520 bytes, 8 bytes are the header
metadata that is used by System i storage management. The remaining 512 bytes are for
user data, such as in LUNs for other open system platforms. So, for an i5/OS host
attachment, 520 is the correct blocksize to define when creating LUNs. The correct address
discovery method is Report LUN.
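
As an illustration (the WWPN, volume group ID, volume IDs, and names are examples), the
corresponding DS CLI steps create an os400-type volume group and a host connection of
type iSeries, which applies the 520-byte logical block size and the Report LUN address
discovery method:

   dscli> mkvolgrp -type os400mask -volume 1000-1007 ITSO_i5_vg
   dscli> mkhostconnect -wwname 10000000C9123456 -hosttype iSeries -volgrp V0 ITSO_i5_fc0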

Figure 2-16 shows an example of a volume group.

Figure 2-16 Volume group



2.7 Copy Services overview
Copy Services is an optional feature of the IBM System Storage DS8000, DS6000, and
Enterprise Storage Server (ESS). It brings powerful data copying and mirroring technologies
to open systems environments that were available previously only for mainframe storage.

This section introduces the main features of Copy Services for the open systems
environment. We limit the discussion to those functions that are supported on LIC level 2.4.0
and above for ESS and for all DS6000 and DS8000 code levels as follows:
 Metro Mirror (previously known as Synchronous Peer-to-Peer Remote Copy)
 Global Mirror (previously known as Asynchronous Peer-to-Peer Remote Copy)
 FlashCopy
 Incremental FlashCopy
 Inband FlashCopy
 Multiple Relationship FlashCopy (V2)
 FlashCopy consistency groups
 Space efficient FlashCopy

For information about implementing IBM System Storage Copy Services solutions with i5/OS,
refer to IBM System Storage Copy Services and IBM i: A Guide to Planning and
Implementation, SG24-7103.

2.7.1 Metro Mirror


Metro Mirror is a synchronous data replication function of an IBM System Storage disk
subsystem that updates a secondary copy of a volume constantly to match changes made to
a primary volume. The primary and the secondary volumes can be on the same storage
subsystems, although for disaster recovery purposes they are usually on separate storage
subsystems, which can be up to 300 km apart. Performance implications can occur, however,
when using synchronous replication over this distance. Thus, it might be more practical to
consider either shorter distances (for example 40 km) or use Global Mirror as an alternative
for extended distance data replication without a performance impact to the host. (For more
information, see 2.7.2, “Global Mirror” on page 47.)

Metro Mirror is operating system and application independent. Because the copying function
occurs at the disk subsystem level, it is transparent to the host and its applications.

Figure 2-17 shows an example of a Metro Mirror configuration. From a host perspective, the
I/O flows between the host and primary disk subsystem as though there is no data replication.

The numbered steps in the figure are:
1 - Write to primary volume (cache/NVS)
2 - Write to secondary volume (cache/NVS)
3 - Acknowledgment
4 - Post I/O complete (DE)
Figure 2-17 Metro Mirror architecture

The synchronous protocol guarantees that the secondary copy is up-to-date and consistent
by ensuring that the primary copy is committed only if the primary receives acknowledgment
that the secondary copy is written.

Important: Metro Mirror provides a remote copy that is synchronized with the source copy,
providing a Recovery Point Objective (RPO) of zero. In other words, the disaster recovery
system is at the point of failure when it is recovered. However, this recovery does not take
into account any further recovery actions that are necessary to bring the applications to a
clean recovery point, such as applying or removing journal entries. The additional actions
happen at the database recovery stage. Thus, take this into account when considering
your Recovery Time Objective (RTO). This recovery process is much less time consuming
and complicated than recovering the system from tape.

2.7.2 Global Mirror


Global Mirror is designed to provide a long-distance remote copy solution across two sites
using asynchronous technology. It operates over high-speed, Fibre Channel protocol
communication links and is designed to maintain a complete and consistent remote mirror of
data asynchronously at virtually unlimited distances with almost no application response time
degradation.

Chapter 2. IBM System Storage solutions 47


Separating data centers by longer distances helps to provide protection from regional
outages. This asynchronous technique provides better performance at unlimited distances by
allowing the secondary site to trail in currency a few seconds behind the primary site. With
Global Mirror, you can configure currency to be as little as three to five seconds with respect
to host I/O. Global Mirror consistency groups can contain a mix of System z and open data
and can be created across up to eight DS8000 series systems, DS6000 series systems, or
ESS systems, allowing scalability for application growth. This 2-site data mirroring function is
designed to provide a high-performance, cost effective global distance data replication and
disaster recovery solution.

Global Mirror provides the following capabilities:


 Capability to achieve a Recovery Point Objective (RPO) of three to five seconds with
sufficient bandwidth and resources
 Does not impact production applications when insufficient bandwidth or resources are
available
 Scalable so that it provides consistency across multiple primary and secondary disk
subsystems
 Allows for removal of duplicate writes within a consistency group before sending data to
remote site
 Allows for less than peak bandwidth to be configured by allowing RPO to increase without
restriction at these times
 Provides consistency between System z and open systems data and between different
platforms on open systems

Important: Global Mirror provides a remote copy that is some time behind the source copy,
which gives an RPO of between a few seconds and a few minutes, depending on write I/O
activity and the available communication bandwidth. In other words, the disaster recovery
system is a few seconds or minutes behind the production system point of failure when it is
recovered. This recovery does not take into account any further recovery actions that are
necessary to bring the applications to a clean recovery point, such as applying or removing
journal entries. This happens at the database recovery stage. Thus, take this into account
when considering your Recovery Time Objective (RTO). This recovery process is much
less time consuming than recovering the system from tape.

Global Mirror architecture


For any asynchronous replication function, three major functions are required to provide
consistent mirroring of data:
 Creating a consistent data point across the replicated environment
 Transmitting the required updates to the secondary location
 Saving consistent data to ensure a consistent image of the data is always available

Figure 2-18 shows the functions that are provided with Global Mirror.

Figure 2-18 Global Mirror architecture

The primary disk subsystems provide functionality to coordinate the formation of data
consistency groups. Fibre Channel protocol links provide low latency connections between
disk subsystems, ensuring that this process involves negligible impact to the production
applications. The consistency group information is held in bitmaps rather than requiring the
data updates themselves to be maintained in cache.

These consistency groups are sent to the secondary location using asynchronous Global
Copy (previously known as PPRC-XD). Using Global Copy means that duplicate updates
within the consistency group are not sent and, if the data sent is still in the cache on the
primary disk subsystems, that only the changed blocks are sent.

When the complete consistency group is sent to the secondary location, this consistent
image of the primary data is saved using incremental FlashCopy and the Global Mirror
consistency group process starts over again. This ensures that there is always a consistent
image of the primary data at the secondary location.

Dependent write consistency


Global Mirror provides consistency using a data freeze, which preserves the order of
dependent writes. Other storage vendor solutions might provide consistency using sequence
numbers.

Using the data freeze concept, consistency is obtained by temporarily inhibiting write I/O to
the devices and then performing the actions required to create consistency. When all devices
have performed the required actions, the write I/O is allowed to resume. This might mean
suspending devices in a Metro Mirror environment or performing a FlashCopy when using
consistent FlashCopy. With Global Mirror, this action is the creation of the bitmaps for the
consistency group, and the consistent point can be created in approximately 1 to 3
milliseconds. If you consider that this consistency group creation is done, for example, every
3 seconds, the host I/O performance impact of Global Mirror freezing the write I/O for 1 to 3
milliseconds is negligible.



2.7.3 FlashCopy
FlashCopy makes a single point-in-time copy of a logical volume, also known as a time-zero
copy, that provides an instantaneous copy, or view, of the original data at a specific
point-in-time. The target volume is totally independent of the source volume and is available
for both read and write access after the FlashCopy command is processed.

Figure 2-19 shows the relationship between the source volume on the left and the target
volume to the right. During establishment of a FlashCopy relationship, a bitmap (indicated by
the grid in the diagram) is created in the storage system’s cache for tracking which tracks are
copied to the target volume and which are not.

When the FlashCopy command is issued, a bitmap is created and the copy is immediately
available for use. Read and write to both source and target are possible. When the copy is
complete, the relationship between source and target ends.
Figure 2-19 FlashCopy

You can use FlashCopy with either the COPY or NOCOPY option. In both cases, the target is
available as soon as the FlashCopy command is processed, usually a few milliseconds after
the command runs. When you use the COPY option, a background task runs to copy the data
sequentially track-by-track from source to target. With NOCOPY, the data is copied only on
write, meaning that only those tracks that are written to on either the source or target are
actually written to the target volume. The shaded segments in the second grid show this
NOCOPY option, and the shaded segments indicate the tracks that have been updated on
the target volume.

When the source volume is accessed for read, data is simply read from the source volume. If
a write operation occurs to the source, it is applied immediately if the track is copied to the
target already, either because a COPY option or because a copy on write has occurred. If the
source track to be written is not already copied, the unchanged track is copied to the target
volume first to secure the point-in-time copy state, and then the source is updated.

When the target volume is accessed for read, the bitmap shows which tracks are copied
already. If the track has been copied, either by a background COPY or by a copy on write, the
read is done from the target volume. If the track has not been copied, either because the
background copy has not reached that track yet or because it has not been updated on either
the source or target volume, the data is read from the source volume.

Thus, both source and target volumes are totally independent. If you use the COPY option,
the relationship between source and target ends automatically when all target tracks are
written. With NOCOPY, the relationship is maintained until it is ended explicitly or until all
tracks are copied. Although Figure 2-19 on page 50 shows only one source and one target
volume, the same concept applies for all volumes in a relationship. Because System i storage
management stripes its data across all available volumes in an ASP (System, User, or
Independent), FlashCopy, as any other storage-based copying solution, must treat all
volumes in the ASP as one entity.

Typically, you use the point-in-time copy that FlashCopy creates when you need to produce a
copy of production data with minimal application downtime. You can use the point-in-time
copy for online backup, testing of new applications, or copying a database for data mining
purposes. The copy looks exactly like the original source volume and is an instantly available,
binary copy.

For copies that are usually accessed only once, such as for creating a point-in-time backup, you
normally use the NOCOPY option. When the target is required for regular access (for
example, for populating a Data Warehouse or creating a cloned environment), we
recommend that you use the COPY option.
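
In DS CLI terms, the difference is simply whether the -nocp (no background copy) option is
specified on the mkflash command; the volume IDs are examples:

   dscli> mkflash 0100:0200        (COPY: a background task copies all tracks to the target)
   dscli> mkflash -nocp 0100:0200  (NOCOPY: tracks are copied to the target only when changed)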

Note: The i5/OS V6R1 quiesce for Copy Services function (CHGASPACT CL command)
allows you to suspend or resume ASP I/O activity to take an online FlashCopy without
needing to vary off the IASP or power down the system.
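
As a sketch of this quiesce sequence on i5/OS (the ASP device name and timeout value are
examples), you suspend database I/O activity, take the FlashCopy on the storage system,
and then resume:

   CHGASPACT ASPDEV(IASP1) OPTION(*SUSPEND) SSPTIMO(300)
   (take the FlashCopy of the ASP volumes, for example with the DS CLI mkflash command)
   CHGASPACT ASPDEV(IASP1) OPTION(*RESUME)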

2.7.4 Incremental FlashCopy


Incremental FlashCopy is a feature of FlashCopy that is available since Copy Services
version 2. This function uses a second bitmap to track the changes on the source volume
since the last FlashCopy relationship was invoked, and it is primarily used with Global Mirror
for saving the consistency groups.

When this option is selected, only the tracks that have been changed on the source are
copied again to the target. The direction of the refresh can also be reversed, copying the
changes made to the new source (originally the target volume) to the new target volume
(originally the source volume).

2.7.5 Inband FlashCopy


The inband management capability feature that comes with Copy Services version 2 allows
you to invoke FlashCopy on a remote site IBM DS8000, DS6000, or ESS. If you have two
sites, one local and one remote, that are in a PPRC relationship, you can invoke a FlashCopy
task on the remote site from the primary site through a PPRC inband connection. You can use
Inband FlashCopy to issue FlashCopy commands from the primary to the secondary
machine, for example in a Global Mirror configuration.

2.7.6 Multiple Relationship FlashCopy (V2)


With Copy Services version 2, one FlashCopy source volume can have up to 12 FlashCopy
target volumes, which provides more flexibility because you can initiate the multiple
relationships using the same source volume without needing to wait for other relationships to
end.



2.7.7 FlashCopy consistency groups
With Copy Services version 2 new options are available to facilitate the creation of FlashCopy
consistency groups. With FlashCopy consistency groups, to create a consistent state in terms
of dependent writes across multiple FlashCopy source volumes, I/O write activity to the
source volumes is held off until the extended long busy timeout has expired or an unfreeze
operation is issued.

Note: Using FlashCopy consistency groups with i5/OS has no real benefit, because i5/OS
requires that the system is either shut down or that its database I/O is quiesced before
taking a FlashCopy; however, using them does no harm either.

2.7.8 Space efficient FlashCopy


Traditionally FlashCopy required that all space of the source volume was allocated physically
and available for the target volume even if only a few changed tracks are copied from the
source to the target volume.

Space efficient FlashCopy, available as an IBM FlashCopy SE licensed function and


introduced with DS8000 Release 3, significantly lowers the amount of physical storage space
for the FlashCopy target volumes by thinly provisioning the target space proportional to the
amount of write activity from the host to the source and target volumes.

Space efficient FlashCopy is implemented using a space efficient logical volume, that is a
non-provisioned volume that has no physical storage allocated, as a target volume for a
FlashCopy no-background copy relationship. The actual physical storage for all space
efficient volumes in an extent pool is derived from a single shared repository volume that
needs to be created within the same extent pool.
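As a sketch only, and assuming a DS CLI level with FlashCopy SE support, the shared repository and the space efficient target volumes might be created with commands along these lines; the extent pool, capacities, volume IDs, and the A05 volume model are placeholder assumptions that must be adapted to your configuration and verified against your DS CLI documentation:

   mksestg -dev IBM.2107-75ABCD1 -extpool P4 -repcap 100 -vircap 500
   mkfbvol -dev IBM.2107-75ABCD1 -extpool P4 -os400 A05 -sam tse -name setgt 0200-0203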

For host write I/Os to either the fully provisioned FlashCopy source or the non-provisioned
target that cause a destage to the target volume, space is allocated to the space efficient target
from the repository volume at track-level granularity (64 KB). The repository manager within
the DS8000 microcode uses a mapping table to keep track of the allocation of virtual track IDs of
the space efficient volumes to physical track IDs from the repository volume. Space efficient
FlashCopy causes a slight performance penalty compared to regular, that is fully provisioned,
FlashCopy; this penalty is caused less by the mapping table look-up than by the initial space
allocation on destage writes and by the non-sequential stage and destage activity on the
shared repository volume.

Figure 2-20 shows the track mapping for space efficient FlashCopy.

Figure 2-20 Space efficient FlashCopy track mapping

The repository volume is over-provisioned in the sense that it is configured with a virtual
capacity, available for all the space efficient volumes in the extent pool, which is larger than
its allocated physical capacity (for example, 500 GB of virtual capacity backed by 100 GB of
real capacity). Usage of the repository's physical capacity is monitored by the DS8000
microcode. If the used physical capacity exceeds a configurable percentage threshold, an
SNMP trap 221 notification is sent out. When the repository volume runs out of physical
storage space, the FlashCopy relationship fails, that is, it is automatically withdrawn, or
optionally write I/O access to the space efficient logical volumes is inhibited (for future
support of space efficient FlashCopy with Global Mirror). Physical storage space in the
repository volume is released when the FlashCopy relationship is withdrawn or the space
efficient volume is either formatted or deleted.

Correct sizing of the repository volume space is very important to prevent out-of-space
situations, which cause the FlashCopy relationship to fail and make the target volume, which
contains only the updated data, unusable. For more information, see 5.2.8, “Sizing
for space efficient FlashCopy” on page 130.

Restriction: The intended usage of space efficient FlashCopy is for short-lived FlashCopy
no-background copy relationships with only limited host write I/O activity such as for low
workload period system backups. If much more than 20% of the data is changed, using
regular fully-provisioned FlashCopy is recommended.

For further information about implementing space efficient FlashCopy with System i, refer to
IBM System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.

Chapter 3. System i external storage solution examples

In this chapter, we discuss possible scenarios where System i environments and IBM System
Storage DS solutions are connected. We start by providing basic examples for a 1-site local
solution using System i external storage, optionally together with IASPs for high
availability (HA) and FlashCopy for backup. Then, we discuss 2-site solutions that take
advantage of remote data replication to a secondary site for disaster recovery (DR) and high
availability. These example environments can guide you through the planning and
implementation of external storage for i5/OS and the System i platform.



3.1 One-site System i external storage solution examples
Attaching an external disk to a System i platform is a relatively simple task if those who are
performing the task understand the i5/OS operating system environment, the external storage
environment, and the storage area network (SAN). The following sections show 1-site
solution examples of implementing IBM System Storage DS8000, DS6000, or ESS model
800 external storage with System i.

3.1.1 System i5 model and all disk storage in external storage


In this scenario, the System i model has all its disk storage, including the load source, in the
external storage server. Such a boot from SAN configuration is available only to HMC-managed
System i POWER5 or later servers and IBM System Storage ESS model 800, DS6000,
and DS8000 series. For further information about boot from SAN requirements, refer to 4.2.1,
“Planning considerations for boot from SAN” on page 78.

Figure 3-1 shows the simplest example. The i5/OS load source is a logical unit number (LUN)
in the DS model. To avoid single points of failure for the storage attachment, i5/OS
multipathing should be implemented to the LUNs in the external storage server.

Note: With i5/OS V6R1 and later, multipathing is also supported for the external load
source unit.

Prior to i5/OS V6R1, the external load source unit should be mirrored to another LUN on the
external storage system to provide path protection for the load source. The System i model is
connected to the DS model with Fibre Channel (FC) cables through a storage area network
(SAN).

Figure 3-1 System i5® model and all disk storage in external storage

The FC connections through the SAN switched network are either direct FC local
connections or connections through dark fiber providing up to 10 km distance. Figure 3-2 is the same
simple example, but the System i platform is divided into logical partitions (LPAR). Each LPAR
has its own mirrored pair of LUNs in the DS model.

Figure 3-2 LPAR System i5 environment and all disk storage in external storage


3.1.2 System i model with internal load source and external storage
This example shows the previous scenario for implementing external disk with System i
models but without boot from SAN. In this case the load source drive remains in either the
System i central electronic complex (CEC) or the expansion tower where the system is
logically partitioned. In this example, there are three logical partitions with the load source in
an expansion tower. There is a remote load source in the external storage system for
protection. Multipath can be implemented to all the LUNs in the external storage server.

Unless you are using switchable independent ASPs, boot from SAN helps to significantly reduce the
recovery time in case of a system failure by eliminating the requirement for a manual D-type
IPL with remote load source recovery.

Figure 3-3 External disk with the System i5 internal load source drive

3.1.3 System i model with mixed internal and external storage
Examples of selected environments where the internal disk is retained in the System i model
and additional disk is located in the external storage server include:
 A solution where the internal drives support *SYSBAS storage and the external storage
supports the IASP, which is similar to the example in “Metro Mirror with switchable IASP
replication” on page 66.
 A solution where the internal drives are one half of the mirrored environment and the
external storage LUNs are the other half, giving mirrored protection and distance
capability.
 A solution that requires a considerable amount of space for archiving.
In Figure 3-4, the external disk is used typically for a user auxiliary storage pool (ASP) or
an independent ASP (IASP). This ASP disk space can house the archive data, and this
storage is fairly independent of the production environment.

Figure 3-4 Mixed internal and external drives

It is possible to mix internal and external drives in the same ASP, but we do not recommend
this mixing because performance management becomes difficult.


3.1.4 Migration of internal drives to external storage including load source
In this case, the customer has decided to adopt a consolidated storage strategy. The
customer must have a path to migrate from their internal drives to the new external disk. For
our example (shown in Figure 3-5), we assume the internal disk drives are all RAID protected.

Figure 3-5 Migration from internal RAID protected disk drives to external storage

There are multiple techniques for implementing this migration.

One such technique is to add additional I/O hardware to the existing System i model to
support the new external disk environment. This hardware can be an expansion tower, I/O
loops (HSL or 12X), #2847 IOP-based or POWER6 IOP-less Fibre Channel IOAs for external
load source support, and other #2844 IOP-based or IOP-less FC adapters for the non-load
source volumes.

The movement of data from internal to external storage is achieved by the Disk Migration
While Active function (see Figure 3-5). Not all data is removed from the disk. Certain object
types, such as temporary storage, journals and receivers, and integrated file system objects,
are not moved. These objects are not removed until the disk is removed from the i5/OS
configuration. The removal of disk drives is disruptive, because it has to be done from DST.
The time to remove them depends on the amount of residual data left on the drive.

The removal method roughly follows this process:


1. Plan your data migration. When removing disks from the configuration, you must
understand the RAID set arrangements to maintain protection.
2. Test the draining process outside the production environment to ensure that you are
confident with the process.
3. Increase the load source drive to 17 GB or more, and load new operating system support.
4. Attach the new I/O and external storage.
5. Create LUNs in external storage.
6. Add LUNs to i5/OS.
7. Use the Disk Migrate While Active function (*MOVDTA) of the Start ASP Balance
(STRASPBAL) command on the drives that are to be drained (see the example commands
after this list). For further details about this function, refer to IBM eServer iSeries Migration:
System Migration and Upgrades at V5R1 and V5R2, SG24-6055.
8. Perform a manual IPL to DST, and remove the disks that have had the data drained from
the i5/OS configuration.
9. Stop device parity protection for the load source RAID set.
10. Migrate the load source drive by copying the load source unit data.
11. Physically remove the old internal load source unit.
12. Change the I/O tagging to the new external load source.
13. Restart device parity protection.
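The following CL commands sketch how the drain of the internal drives might be started; the unit numbers are placeholders, and the exact sequence should be verified against the documentation for your i5/OS release (the ENDASPBAL command can be used to end the balance activity if necessary):

   STRASPBAL TYPE(*ENDALC) UNIT(5 6 7 8)      (stop new allocations on the units to be removed)
   STRASPBAL TYPE(*MOVDTA) TIMLMT(*NOMAX)     (move the data off the marked units)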

For detailed information about migrating an internal load source to boot from SAN, refer to
IBM i and IBM System Storage: A Guide to Implementing External Disk on IBM i, SG24-7120.

Attention: The Disk Migrate While Active function starts a job for every disk migration.
These jobs can impact performance if many are started. If data is migrated from a disk and
the disk is not removed from the configuration, a job is started. Do not start data moves on
more drives than you can support without impacting your existing workload. Schedule the
data movement outside normal business hours.

3.2 Migration of an external mirrored load source to a boot load source
In this example, the System i environment is already attached to an external storage server
before the new boot from SAN support became available. Typically, in this environment, the
solution includes the internal load source that is mirrored to an external pair, which is a
similar-sized LUN in the external storage server.

This technique provides protection for the internal load source. The System i load source
drive should always be protected either by RAID or mirroring.

To migrate from a remote mirrored load source to external mirrored load source (Figure 3-6):
1. Increase the size of your existing load source to 17 GB or greater.
2. Load the new i5/OS V5R3M5 or later operating system support for boot from SAN.
3. Create the new mirrored load source pair in the external storage server.
4. Turn off System i and change the load source I/O tagging to the remote external load
source.
5. Remove the internal load source.
6. Perform a manual IPL to DST.
7. Use the replace configured unit function to replace the internal suspended load source
with the new external load source.
8. Perform an IPL on the new external mirrored load source.

For detailed information about migrating an internal load source to boot from SAN refer to IBM
i and IBM System Storage: A Guide to Implementing External Disk on IBM i, SG24-7120.


Figure 3-6 Migration of the load source to external storage

3.2.1 Cloning i5/OS


Cloning is a new concept for the System i platform since the introduction of boot from SAN
with V5R3M5. Previously to create a new system image, you had to perform a full installation
of the SLIC and i5/OS. When cloning i5/OS, you create an exact copy of the existing i5/OS
system or partition. The copy can be attached to another System i model, a separate LPAR,
or if the production system is powered off, the existing partition or system. After the copy is
created, you can use it for offline backup, system testing, or migration.

Boot from SAN enables you to take advantage of some of the advanced features that are
available with the DS8000 and DS6000 family, such as FlashCopy. It allows you to perform a
point-in-time instantaneous copy of the data held on a LUN or group of LUNs. Therefore,
when you have a system that has only SAN LUNs with no internal drives, you can create a
clone of your system.

Important: When we refer to a clone, we are referring to a copy of a system that only uses
SAN LUNs. Therefore, boot from SAN is a prerequisite.

3.2.2 Full system and IASP FlashCopy


FlashCopy allows you to take a system image for cloning and is also an ideal solution for
increasing the availability of a System i production system by reducing the time for system
backups.

To obtain a full system backup of i5/OS with FlashCopy, either a system shutdown or, since
i5/OS V6R1, a quiesce is required to flush modified data from memory to disk. FlashCopy
copies only the data on the disk. Therefore, a significant amount of data is left in memory, and
extended database recovery is required if the FlashCopy is taken with the system running or
not suspended.

Note: The new i5/OS V6R1 quiesce for Copy Services function (CHGASPACT) allows you
to suspend all database I/O activity for *SYSBAS and IASP devices before taking a
FlashCopy system image, eliminating the requirement to power down your system. (For
more information, refer to IBM System Storage Copy Services and IBM i: A Guide to
Planning and Implementation, SG24-7103.)

An alternative method to perform offline backups without a shutdown and IPL of your
production system is using FlashCopy with IASPs, as shown in Figure 3-7. You might
consider using an IASP FlashCopy backup solution for an environment that has no boot from
SAN implementation or that is already using IASPs for high availability. Because the
production data is located in the IASP, the IASP can be varied off or, since i5/OS V6R1,
quiesced before taking a FlashCopy, without shutting down the whole i5/OS system. It also
has the advantage that no load source recovery is required.
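As a minimal sketch, assuming an IASP device named IASP01, the IASP could be prepared for the FlashCopy either by varying it off or, with i5/OS V6R1, by quiescing it:

   VRYCFG CFGOBJ(IASP01) CFGTYPE(*DEV) STATUS(*OFF)
or
   CHGASPACT ASPDEV(IASP01) OPTION(*SUSPEND) SSPTIMO(300)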

Note: Temporary space includes QTEMP libraries, index build space, and so on. There is a
statement of direction to allow spooled files in an IASP in the future.

Figure 3-7 FlashCopy of IASP for offline backup

Planning considerations
Keep in mind the following considerations:
 You must vary off or quiesce the IASP before the FlashCopy can be taken. Customer
application data must be in an IASP environment in order to use FlashCopy. Using
storage-based replication of IASPs requires using the System i Copy Services Toolkit or
the new i5/OS V6R1 High Availability Solutions Manager (HASM).
 Disk sizing for the system ASP is important because it requires the fastest disk on the
system; this is where memory paging, index builds, and so on happen.


 An IPL of the backup system is required after a save to clean up the cluster management
objects on the target system.
 A separate FC I/O processor (IOP) or I/O adapter (IOA) is required for each IASP on the
target system. For more information, contact the High Continuous Availability and Cluster
group within the IBM System i Technology Center (iTC) by sending e-mail to
rchclst@us.ibm.com.

3.3 Two-site System i external storage solution examples


We now take a closer look at some examples of 2-site solutions that maintain a copy of your
production data at a second remote site for disaster recovery (DR) and high availability (HA).

3.3.1 The System i platform and external storage HA environments


With the huge demand for highly available business systems, there are many instances where the
System i platform is combined with external storage servers to take advantage of the
availability features and applications of both. The System i platform offers both RAID and
mirrored protection for its disk subsystem. This function is provided by the operating system
and I/O adapters (IOAs). Customers who have external storage can take advantage of the
i5/OS mirroring function, which gives the possibility of separation by up to 10 km, giving this
solution disaster recovery characteristics.

Figure 3-8 shows a System i model with internal drives that form one half of the mirror, and an
external storage server at a distance that holds the remote load source mirror and a set of
LUNs mirrored to the internal drives.

Figure 3-8 Internal to external mirroring for disaster recovery

If the production site has a disk hardware failure, the system can continue off the remote
mirrored pairs. If a disaster occurs that causes the production site to be unavailable, it is

possible to IPL your recovery System i server from the attached remote LUNs. If your
production system is running i5/OS V5R3M5 or later and your recovery system is configured
for boot from SAN, it can directly IPL from the remote load source even without requiring a
remote load source recovery.

Restriction: If using i5/OS mirroring for disaster recovery as we describe, your production
system must not use boot from SAN because, at failback from your recovery to your
production site, you cannot control which mirror side you want to be the active one.

3.3.2 Metro Mirror examples


In the following sections, we describe how to use Metro Mirror in conjunction with the
System i platform.

Metro Mirror and full system replication


Metro Mirror offers synchronous replication between two DS models or between a DS and
ESS model 800. In the example shown in Figure 3-9 on page 66, two System i servers are
separated by some distance to achieve a disaster recovery solution at the second site. This is
a fairly simple arrangement to implement and manage. Synchronous replication is desirable
because it ensures the integrity of the I/O traffic between the two storage complexes and
provides a recovery point objective (RPO) of zero (that is, no transaction gets lost). The
data on the second DS system is not available to the second System i model while Metro
Mirror replication is active, that is, the backup System i model must remain powered off.
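As a sketch only, the Metro Mirror paths and volume pairs between two DS8000 systems might be established with DS CLI commands along these lines; the storage image IDs, WWNN, I/O port pairs, and volume ranges are placeholders:

   mkpprcpath -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -remotewwnn 500507630AFFC29F -srclss 01 -tgtlss 01 I0101:I0131
   mkpprc -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75WXYZ1 -type mmir 0100-010F:0100-010F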

The main consideration with this solution is distance. The solution is limited by the distance
between the two sites. Synchronous replication needs sufficient bandwidth to prevent latency
in the I/O between the two sites. I/O latency can cause application performance problems.
Testing is necessary to ensure that this solution is viable depending on a particular
application’s design and business throughput.

When you recover in the event of a failure, the IPL of your recovery system will always be an
abnormal IPL of i5/OS on the remote site.

Note: Using i5/OS journaling for Metro Mirror or Global Mirror replication solutions is highly
recommended to ensure transaction consistency and faster recovery.


Figure 3-9 Metro Mirror full-system replication

Metro Mirror with switchable IASP replication


In this example, we have the same DS configuration, but the data that we want to replicate is
in an IASP located in the DS model (see Figure 3-10). The *SYSBAS disks can be on an
internal or external storage. Both the production system and the recovery system have to be in
the same System i cluster device domain. The IASP is connected to the System i model through a
switchable expansion tower in a device cluster resource group (CRG). While the backup
system is powered on and can be running other applications, none of the data in the IASP
environments that are shown is available to the backup system until a switchover occurs.

Note: Replicating switchable independent ASPs to a remote site provides both disaster
recovery and high availability and is supported only with either using the System i Copy
Services Toolkit or i5/OS V6R1 High Availability Solutions Manager (HASM).

Figure 3-10 Metro Mirror IASP replication

Using switchable IASPs with Copy Services requires either the System i Copy Services
Toolkit or the new i5/OS V6R1 High Availability Solutions Manager (HASM) for managing the
failover or switchover. If there is a failure at the production site, i5/OS cluster management
detects the failure and switches the IASP to the backup system. In this environment, we
normally have only one copy of the IASP, but we are using Copy Services technology to
create a second copy of the IASP at the remote site and provide distance.

The switchover and the recovery to the backup system are a relatively simple operation,
which is a combination of i5/OS cluster services commands and DS command-line interface
(CLI) commands. The IASP switch consists of cluster services passing management over to the
backup system. The backup IASP is then varied on to the active backup system. In a
disaster, journal recovery during the vary on attempts to recover or roll out any damaged
objects. After the vary on action completes, the application is available. These functions are
automated with the System i Copy Services Toolkit (see IBM System Storage Copy Services
and IBM i: A Guide to Planning and Implementation, SG24-7103).

3.3.3 Global Mirror examples


In this section, we present examples of using Global Mirror with the System i platform.
Compared with Metro Mirror synchronous replication, Global Mirror uses asynchronous
replication with data consistency groups to allow for long-distance replication solutions while
guaranteeing data consistency. The design of Global Mirror prevents performance impacts to
the production host, provided that enough replication link bandwidth is available.

Global Mirror and full system replication


In this example (Figure 3-11), no disk is located inside the production or backup system; all
System i disk units are provided from the DS models. This is a disaster recovery environment.
For this full-system replication scenario, i5/OS clustering is not involved, so there is no
switchover.

All the data on the production system is asynchronously transmitted to the remote DS model.
Asynchronous replication through Global Copy alone does not guarantee the order of the
writes, and the remote production copy would lose consistency quickly. In order to guarantee
data consistency, Global Mirror creates consistency groups at regular intervals, by default as
fast as the environment and the available bandwidth allow. FlashCopy is used at the remote
site to save these consistency groups so that a consistent set of data, only a few seconds
behind the production site, is always available at the remote site. That is, with Global Mirror a
recovery point objective (RPO) of only a few seconds can normally be achieved without any
performance impact to the production site.

Figure 3-11 Global Mirror and the System i5 platform

This is an attractive solution because of the extreme distances that can be achieved with
Global Mirror. However, it requires a proper sizing of the replication link bandwidth to ensure
the RPO targets can be achieved, and testing should be performed to ensure the resulting
image is usable.

Global Mirror and switchable IASP replication
Global Mirror and switchable IASPs offer a new and exciting opportunity for a highly available
environment. They enable customers to replicate their environment over an extremely long
distance without the use of traditional i5/OS replication software. This environment comes in
two types, asymmetrical and symmetrical.

While Global Mirror can entail a fairly complex setup, the operation of this environment is
simplified for i5/OS with the use of the System i Copy Services Toolkit, which automates the
switchover and failover of the IASP from production to backup.

Asymmetrical replication
The configuration shown in Figure 3-12 provides availability switching between the
production system and the backup system. It also provides disaster recovery between the
disaster recovery system and either the production system or the backup system, depending
on which system has control when the disaster occurs. With the asymmetrical configuration, only
one consistency group is set up, and it resides at the remote site. This means that you cannot
do regular role swaps and reverse the I/O direction (disaster recovery to production).

In a normal operation, the IASP holds the application data and runs varied on to the
production system. I/O is asynchronously replicated through Global Copy to the backup DS
model maintaining a copy of the IASP. At regular intervals, FlashCopy is used to save the
consistency groups created at repeated intervals by the Global Mirror algorithm. The
consistency groups can be only a few seconds behind the production system, offering the
opportunity for a fast recovery.

Figure 3-12 Global Mirror with asymmetrical IASP

Two primary operations can occur in this environment: switchover from production to backup
and failover to backup. Switchover from production to backup does not involve the DS models
in the previous example. It is simply a matter of running the System i Copy Services Toolkit

switch PPRC (swpprc) command on the production system. The switch PPRC command
varies off the IASP from the production system and varies it on to the backup system.
Stopping the application on the production system and restarting it on the backup system
must also be considered in a switchover. This can be either a planned event where users
simply log off, or it can be required for a programmatic activity to force users from the system.
Both events can be automated with the use of i5/OS cluster resource group (CRG) exit
programs.

The failover to backup configuration change occurs after a failure. In this case, you run the failover
PPRC command (failoverpprc) on the backup system. Running this command allows the
disaster recovery system to take over the production role, vary on the copy IASP as though it
were the original, and restart the application. During vary on processing, journal recovery
occurs. If the application does not use journaling, the vary on process takes considerably longer,
and the recovery process can fail due to damaged and unrecoverable objects. You can
restore these objects from backup tapes, but some data integrity analysis needs to occur,
which can delay the point at which users are allowed to access the application. This is similar to a disaster
crash on a single system, where the same recovery process needs to occur.
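As a hedged sketch, the failover to the remote DS model and a later failback might be driven with DS CLI commands similar to the following; the storage image IDs and volume ranges are placeholders, and the -type value depends on whether the pairs are Global Copy (gcp), as used by Global Mirror, or Metro Mirror (mmir):

   failoverpprc -dev IBM.2107-75WXYZ1 -remotedev IBM.2107-75ABCD1 -type gcp 0100-010F:0100-010F
   failbackpprc -dev IBM.2107-75WXYZ1 -remotedev IBM.2107-75ABCD1 -type gcp 0100-010F:0100-010F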

Symmetrical replication
In this configuration, an additional FlashCopy consistency group is created on the source
production DS model. It provides all the capabilities of asymmetrical replication, but adds the
ability to do regular role swaps between the production and disaster recovery sites. When the
role swaps occur with a configuration as shown in Figure 3-13, the backup system does not
provide any planned switch capability for the disaster recovery site.

Figure 3-13 Global Mirror symmetrical replication

In this configuration, there are multiple capabilities: local planned availability switching between the
production and backup systems, and role swap or disaster recovery between the production and
disaster recovery sites. The planned availability switch between production and backup is the
same as described in “Asymmetrical replication” on page 69, which does not involve the DS
models.

If you are going to do a role swap between the production system and the disaster recovery
site, you must also work with the DS models. Role swap involves the reversal of the flow of
data between production DS and disaster recovery DS. While this is more complex, the tasks
can be simply run from DS CLI and scripts. Either the System i Copy Services Toolkit or the
i5/OS V6R1 High Availability Solutions Manager (HASM) is required for this solution. For
more information about these System i Copy Services management tools, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.

3.3.4 Geographic mirroring with external storage


The geographic mirroring function of i5/OS cross-site mirroring (XSM) provides the ability to
move a mirrored copy of data considerable distances from the production site. This removes
the previous internal drive limitation of the copies only being separated by the maximum
distance of the HSL copper or optical loops.

Figure 3-14 shows the internal drive solution for XSM. The replication between the source
and target system is TCP/IP based, so considerable distance is achievable. Figure 3-14 also
shows a local backup server, which enables an administrative (planned) switchover to occur if
the primary system should need to be made unavailable for maintenance.

Figure 3-14 Geographic mirror with internal drives


Figure 3-15 shows a combination of the two disk technologies, with internal drives for the
system ASP on each server with the IASPs located in the external storage server. In this
instance, the expansion tower attached to the external disk storage becomes the switchable
resource. Therefore, any I/O hardware that is needed by the source server should not be
located in this tower. Functionally, this solution is the same as the internal solution.

If the load source and system base are located in the external storage system, it is possible to
have all disks within the external storage system. Separation of the *SYSBAS LUNs from the
IASP LUNs and the switchable tower is done at the expansion tower level.

Figure 3-15 Geographic mirroring with a mix of internal and external drives

Part 2. Planning and sizing


In this part, we explain planning and sizing considerations for external storage on System i.

This part includes the following chapters:


 Chapter 4, “i5/OS planning for external storage” on page 75
 Chapter 5, “Sizing external storage for i5/OS” on page 115



Chapter 4. i5/OS planning for external storage

In this chapter, we discuss important planning considerations for setting up your i5/OS
environment with Fibre Channel attached external IBM System Storage disk subsystems.

Good planning is essential for the successful setup and use of your server and storage
subsystems. It ensures that you have met all of the prerequisites for your server and storage
subsystems and that you can take advantage of best practices for functionality,
redundancy, performance, and availability.

Continue to use and customize the planning and implementation considerations based on
your hardware setup and as recommended through the IBM Information Center
documentation that is provided. Do not use the contents in this chapter as a substitute for
completing your initial server setup (IBM System i or IBM System p with i5/OS logical
partitions), IBM System Storage subsystems, and configuration of the Hardware
Management Console (HMC).



4.1 Planning for external storage solutions
To plan the implementation of the ESS 800, DS6000, or DS8000 series correctly with the System
i platform, you must first define the solution that you want to implement. The solutions can vary,
based on application and overall business continuity requirements. Some of these solutions
are:
 Implement a SAN solution instead of integrated internal storage.
 Implement a disaster recovery solution by enabling IBM System Storage Copy Services
functions for Disk Storage (DS).
 Minimize batch or backup job window using DS FlashCopy to provide a point-in-time
replica of your source data.
 Implement a combination of the previous solutions to achieve disaster recovery and
business continuity goals.

For example, you might want to use DS6000 or DS8000 storage for i5/OS, AIX, and Linux,
which reside on your System i servers. You might want to also implement a disaster recovery
solution with Remote Mirror and Copy features, such as IBM System Storage Metro Mirror or
Global Mirror, as well as plan to implement FlashCopy to minimize the backup window.

The flowchart in Figure 4-1 can assist you with the important planning steps that you need to
consider based on your solution requirement. We strongly recommend that you evaluate the
flow in this diagram and create the appropriate planning checklists for each of the solutions.


Figure 4-1 Pre-order planning

When planning for external storage solutions, review the following planning considerations:
 Evaluate the supported hardware configurations.
 Understand the minimum software and firmware requirements for i5/OS, HMC, system
firmware, and microcode for the ESS Model 800, DS6000, and DS8000 series.
 Understand additional implementation considerations, such as multipath I/O, redundancy,
and port setup on the storage subsystem.


4.2 Solution implementation considerations
There are multiple implementation considerations that you need to take into account. These vary based
on the solution that you are trying to implement. In the following sections, we highlight some of
the important planning and implementation considerations in the following areas:
 Boot from SAN
 i5/OS multipath Fibre Channel attachment
 IBM System Storage Copy Services
 Storage consolidation from different servers
 SAN connectivity
 Capacity
 Performance

4.2.1 Planning considerations for boot from SAN


The deployment of boot support for the Fibre Channel (FC) i5/OS load source requires you to
have minimum hardware and software configurations. In this section, we guide you through
the required installation planning considerations when enabling boot from SAN support with
the i5/OS load source being external within IBM System Storage disk subsystems.

Note that boot from SAN is required only if you are planning to externalize your i5/OS load
source completely and to place all disk volumes that belong to that system or LPAR in the
IBM System Storage subsystem. You might not need boot from SAN if you plan to use
independent auxiliary storage pools (IASPs) with external storage, where the system objects
(*SYSBAS) could remain on System i integrated internal storage.

The new System i POWER6 IOP-less Fibre Channel cards 5749 or 5774 support boot from
SAN for Fibre Channel attached IBM System Storage DS8000 models and tape drives. Refer
to Table 4-1 and Table 4-2 for the minimum hardware and software requirements for IOP-less
Fibre Channel and to 4.2.2, “Planning considerations for i5/OS multipath Fibre Channel
attachment” on page 81 for further configuration planning information.

The 2847 I/O processor (IOP), introduced with the i5/OS V5R3M5 IOP-based Fibre Channel
boot from SAN support, is intended only to support boot capability for the disk unit of the FC
i5/OS load source and up to 31 additional LUNs, all attached using
a 2766, 2787, or 5760 FC disk adapter. This IOP cannot be used as an alternate IPL device
for booting from any other devices, such as a DVD-ROM, CD-ROM, or integrated internal load
source. Also, the 2847 IOP cannot be used as a substitute for 2843 or 2844 IOP to drive
non-FC storage, LAN, or any other System i adapters.

Important: The IBM Manufacturing Plant does not preload i5/OS and licensed programs
on new orders or upgrades to existing System i models when the 2847 IOP is selected.
You must install the system or partitions using the media that is supplied with the order,
after you complete the setup of the ESS 800, DS6000, or DS8000 series.

For information about more resources to assist with planning and implementation tasks, see
“Related publications” on page 629.

Minimum hardware requirements for IOP-less Fibre Channel


Table 4-1 highlights the minimum hardware configuration required for IOP-less Fibre Channel
attachment of external disk storage subsystems. These attachment requirements implicitly
include all the hardware requirements for boot from SAN with IOP-less Fibre Channel.

Table 4-1 Minimum IOP-less boot from SAN hardware requirements


Requirement Complete
System i POWER6 server with PCIe or PCI-X I/O slots to support the I/O adapter (IOA)
requirements
5749 or 5774 Dual-port IOP-less Fibre Channel Disk Adapter (IOA) for attaching i5/OS
storage to DS8000 series
HMC attached to a System i POWER6 model
IBM System Storage DS8000 seriesa
Storage capacity in DS8000 series to define a LUN for the load source unit (with boot
from SAN only)
a. No DS6000 and no ESS models are supported for System i IOP-less Fibre Channel.

Minimum software requirements for IOP-less Fibre Channel


When planning to use IOP-less Fibre Channel for SAN external disk storage, refer to
Table 4-2 for the minimum required levels of software. These attachment requirements
implicitly include all the software requirements for boot from SAN with IOP-less Fibre
Channel.

Table 4-2 Minimum IOP-less boot from SAN software requirements


Requirement Complete

i5/OS Version 6 Release 1 Modification 0 (V6R1M0)

HMC firmware V7.3.1

DS8000 microcode V2.4.3. However, we strongly recommend that you install the
latest level of FBM code available at the time of installation. Contact your IBM System
Storage specialist for additional information.

Minimum hardware requirements for IOP-based boot from SAN


Table 4-3 highlights the minimum hardware configuration required to enable 2847 IOP-based
boot support for an FC i5/OS load source.

Table 4-3 Minimum IOP-based boot from SAN hardware requirements


Requirement Complete

2847 IOP for each server instance that requires a load source or for each LPAR that is
enabled to boot i5/OS from Fibre Channel load sourcea

When using i5/OS prior to V6R1 we recommend that the FC i5/OS load source is
mirrored using i5/OS mirroring at an IOP level, with the remaining LUNs protected with
i5/OS multipath I/O capabilities. For IOP-level redundancy, you need at least two 2847
IOPs and two FC adapters for each system image or LPAR.


Requirement Complete

System i POWER5 or POWER6 model. For POWER5, I/O slots in the system unit,
expansion drawers, or towers to support the IOP and I/O adapter (IOA) requirements;
for POWER6, IOPs are supported only in supported expansion drawers or towers
attached to an HSL loop

System p models for i5/OS in an LPAR (9411-100) with I/O slots in expansion drawers
or towers to support the IOP and IOA requirements

2766, 2787, or 5760 Fibre Channel Disk Adapter (IOA) for attaching i5/OS storage to
ESS 800, DS6000, or DS8000 seriesb

HMC attached to a System i or p model

IBM System Storage DS8000, DS6000 or Enterprise Storage Server (ESS) 800 series

A PC workstation to install DS6000 Storage Manager

The PC must be in the same subnet as the DS6000. The PC configuration must
have a minimum of 700 MB of disk, 512 MB of memory, and an Intel Pentium® 4 1.4 GHz or
faster processor.

Additional storage capacity in ESS 800, DS6000, or DS8000 series to define an
additional LUN for mirroring the load source unit
a. The 2847 IOP is not supported on iSeries Models 8xx, any previous iSeries or AS/400 models,
or any OEM hardware. Prior to i5/OS V6R1 the 2847 IOP does not support multipath for the
i5/OS load source unit, but supports multipath for all other LUNs attached to this IOP.
b. Each adapter requires a dedicated I/O processor. Use the 2847 IOP where one of the LUNs is
an i5/OS load source. Use the 2844 PCI-X I/O processor for attaching additional LUNs through
the 2766, 2787, or 5760 IOA. You cannot use the 2847 IOP as a substitute IOP to connect any
other components.
c. If multiple IP addresses are on the same DS6000 Storage Manager management console, the
first network adapter must be on the same subnetwork as the DS6000.

Minimum software requirements for IOP-based boot from SAN


When planning to install the 2847 IOP, you must consider several updates that need to be
completed on the server, HMC, and system firmware. Table 4-4 lists the minimum levels of
software that are required to enable support for the FC i5/OS load source using 2847 IOP.

Table 4-4 Minimum IOP-based boot from SAN software requirements


Requirement Complete
i5/OS Licensed Internal Code (LIC): V5R3M5 (level RS 535-A or later)
i5/OS Version 5 Release 3 Modification 0 (V5R3M0) Resave (level RS 530-10 or later)
Program temporary fixes (PTFs) i5/OS and LIC: MF33328, MF33845, MF33437,
MF33303, SI14550, SI14690, SI14755, or their supersedes
System Firmware for System i or System p servers V2.3.5 or later
HMC Firmware V5.1 or later
DS6000 microcode: We strongly recommend that you install the latest level of Field Bill
of Material (FBM) code available at the time of installation. Go to the following Web page,
and click Downloadable files to obtain more information about DS6000 microcode:
http://www-03.ibm.com/servers/storage/support/disk/ds6800/downloading.html
DS6000 Storage Manager

Requirement Complete
DS8000 microcode. We strongly recommend that you install the latest level of FBM
code available at the time of installation. Contact your IBM System Storage specialist
for additional information.
ESS 800: 2.4.3.35 or later

4.2.2 Planning considerations for i5/OS multipath Fibre Channel attachment

Important: With i5/OS V6R1, multipath is now supported also for an external load source
disk unit for both the older 2847 IOP-based and the new IOP-less Fibre Channel adapters.

The new multipath function with i5/OS V6R1 eliminates the need, which existed with the previous
i5/OS V5R3M5 and V5R4 versions, to mirror the external load source merely for the purpose of
achieving path redundancy (see 6.10, “Protecting the external load source unit” on page 240).

Multipath support for System i external disks was originally added in V5R3 of i5/OS. Whereas
other platforms require a specific software component, such as the Subsystem Device Driver (SDD),
on System i multipath is part of the base operating system. With V5R3 and later, you can define up to
eight connections from multiple I/O adapters on an iSeries or System i server to a single
logical volume in the DS8000, DS6000, or ESS. Each connection for a multipath disk unit
functions independently. Several connections provide redundancy by allowing disk storage to
be used even if a single path fails.

Multipath is important for the System i platform because it provides greater resilience to
storage area network (SAN) failures, which can be critical to i5/OS due to the single-level
storage architecture. Multipath is not available for System i internal disk units, but the
likelihood of path failure is much less with internal drives because there are fewer interference
points. There is an increased likelihood of issues in a SAN-based I/O path because there are
more potential points of failure, such as long fiber cables and SAN switches. There is also an
increased possibility of human error occurring when performing such tasks as configuring
switches, configuring external storage, or applying concurrent maintenance on DS6000 or
ESS, which might make some I/O paths temporarily unavailable.

Many System i customers still have their entire environment on the system or user auxiliary
storage pools (ASPs). Loss of access to any disk causes the system to enter a freeze state
until the disk access problem gets resolved. Even a loss of a user ASP disk will eventually
cause the system to stop. Independent ASPs (IASPs) provide isolation so that loss of disks in
the IASP affects only users who access that IASP, while the remainder of the system is
unaffected. However, with multipath, even loss of a path to disk in an IASP will not cause an
outage.

Prior to multipath, some customers used i5/OS mirroring to two sets of disks, either in the
same or different external disk subsystems. This mirroring provided implicit dual path as long
as the mirrored copy was connected to a different I/O processor (IOP) or I/O adapter (IOA),
bus, or I/O tower. However, this mirroring also required twice as much capacity for two copies
of data. Because disk failure protection is already provided by RAID-5 or RAID-10 in the
external disk subsystem, this was sometimes considered unnecessary.

With the combination of multipath and RAID-5 or RAID-10 protection in DS8000, DS6000, or
ESS, you can provide full protection of the data paths and the data itself without the
requirement for additional disks.


Avoiding single points of failure
Figure 4-2 shows 15 single points of failure, excluding the System i model itself and the disk
subsystem storage facility. Failure points 9-12 are not present if you do not use an
inter-switch link (ISL) to extend your SAN. An outage to any one of these components (either
planned or unplanned) causes the system to fail if IASPs are not used or causes the
applications within an IASP to fail if IASPs are used.

1. I/O frame
2. Bus
3. IOP
4. IOA
5. Cable
6. Port
7. Switch
8. Port
9. ISL
10. Port
11. Switch
12. Port
13. Cable
14. Host adapter
15. I/O drawer

Figure 4-2 Single points of failure

When implementing multipath, provide as much redundancy as possible. At a minimum,
multipath requires two IOAs that connect the same logical volumes. Ideally, these should be
on different buses, in different I/O racks in the System i environment, and if possible, on
different high-speed link (HSL) or 12X loops. If a SAN is included, use separate switches in
two different fabrics for each path. You should also use host adapters in different I/O drawer
pairs in the DS6000 or DS8000 as shown in Figure 4-3.

Figure 4-3 Multipath removes single points of failure

Unlike other systems that might support only two paths (dual-path), i5/OS V5R3 supports up
to eight paths to the same logical volumes. At a minimum, you should use two paths, although
some small performance benefits might be experienced with more paths. However, because
i5/OS multipath spreads I/O across all available paths in a round-robin manner, there is no
load balancing, only load sharing.

Configuration planning
The System i platform has three IOP-based Fibre Channel I/O adapters that support DS8000,
DS6000, and ESS model 800:
 FC 5760 / CCIN 280E 4 Gigabit Fibre Channel Disk Controller PCI-X
 FC 2787 / CCIN 2787 2 Gigabit Fibre Channel Disk Controller PCI-X (withdrawn from
marketing)
 FC 2766 / CCIN 2766 2 Gigabit Fibre Channel Disk Controller PCI (withdrawn from
marketing)

The following new System i POWER6 IOP-less Fibre Channel I/O adapters support DS8000
as external disk storage only:
 FC 5749 / CCIN 576B 4 Gigabit Dual-Port IOP-less Fibre Channel Controller PCI-X (see
Figure 4-4)
 FC 5774 / CCIN 5774 4 Gigabit Dual-Port IOP-less Fibre Channel Controller PCIe (see
Figure 4-5)

Note: The 5749/5774 IOP-less FC adapters are supported with System i POWER6 and
i5/OS V6R1 or later only. They support both Fibre Channel attached disk and tape devices
on the same adapter, but not on the same port. As a new feature, these adapters support
D-mode IPL (boot) from a tape drive, which should be either direct attached or, through
proper SAN zoning, the only tape drive seen by the adapter. Otherwise, with multiple tape
drives seen by the adapter, it picks only the first drive that reports in and is loaded; if that
drive contains no valid IPL source, the IPL fails.


Figure 4-4 New 5749 IOP-less PCI-X Fibre Channel Disk Controller

Figure 4-5 New 5774 PCIe IOP-less Fibre Channel Disk Controller

Important: For direct attachment, that is point-to-point topology connections using no SAN
switch, the IOP-less Fibre Channel adapters support only the Fibre Channel arbitrated loop
(FC-AL) protocol. This support is different to the previous 2847 IOP-based FC adapters,
which supported only the Fibre Channel switched-fabric (FC-SW) protocol, whether direct-
or switch-connected, although other 2843 or 2844 IOP-based FC adapters support either
FC-SW or FC-AL.

All these System i Fibre Channel I/O adapters can be used for multipath.

Important: Though there is no requirement for all paths of a multipath disk unit group to
use the same type of adapter, we strongly recommend avoiding a mix of IOP-based and
IOP-less FC I/O adapters within the same multipath group. In a multipath group with mixed
IOP-based and IOP-less adapters, the IOP-less adapter performance would be throttled by
the lower-performance IOP-based adapter, because the I/O is distributed by a round-robin
algorithm across all paths of a multipath group.

The IOP-based single-port adapters can address up to 32 logical units (LUNs) while the
dual-port IOP-less adapters support up to 64 LUNs per port.

Table 4-5 summarizes the key differences between IOP-based and IOP-less Fibre Channel.

Table 4-5   Key differences between IOP-based and IOP-less Fibre Channel

Function                          IOP-based                                      IOP-less
System i support                  All models (#2847 requires POWER5 or later)    POWER6 models only
A / B mode IPL (boot from SAN)    Yes (with #2847 IOP only)                      Yes
D mode IPL (from tape)            No                                             Yes
Direct-attach protocol            FC-AL or FC-SW (FC-SW only for boot from SAN)  FC-AL
Disk LUNs per port                32                                             64
Max. concurrent I/Os              1                                              6
ESS (2105) support                Yes (#2847 supports ESS model 800 only)        No
DS6000 support                    Yes                                            No
DS8000 support                    Yes                                            Yes
Multipath load source             Yes (with V6R1)                                Yes (with V6R1)

The System i i5/OS multipath implementation requires each path of a multipath group to be
connected to a separate System i I/O adapter in order to be utilized as an active path. Attaching a
single System i I/O adapter to a switch and going from the switch to two different storage subsystem
ports results in only one of the two paths between the switch and the storage subsystem
being used; the second path is used only if the first one fails. This arrangement is
sometimes referred to as a backup link and used to be a solution for higher redundancy with
ESS external storage before i5/OS multipathing became available.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use
different hardware elements of connection, such as storage subsystem host adapters, SAN
switches, System i I/O towers, and high-speed link (HSL) or 12X loops.

Good planning for multipath includes:


 Connections to the same set of LUNs through different DS host cards on DS8000
 Connections to the same set of LUNs through host adapters on different processors of
DS6000
 Connections to the same set of LUNs through different SAN switches or fabrics


 Connections to the same set of LUNs on physically different IOA adapters, that is, not
using multipath to the same set of LUNs through the same dual-port IOP-less IOA adapter
 Placement of the IOA adapter pairs that connect to the same set of LUNs in different
System i expansion towers and HSL/12X loops wherever possible

When deciding how many I/O adapters to use, your first priority should be to consider
performance throughput of the IOA because this limit can be reached before the maximum
number of logical units. See Chapter 5, “Sizing external storage for i5/OS” on page 115, for
more information about sizing and performance guidelines.

For more information about implementing multipath, see Chapter 6, “Implementing external
storage with i5/OS” on page 207.

Multipath rules for multiple System i models or partitions


When you use multipath disk units, you must consider the implications of moving IOPs or
IOAs with multipath connections between nodes. You must not split multipath connections
between nodes, either by moving IOPs/IOAs between logical partitions (LPARs) or by
switching expansion units between systems. If two different nodes both have connections to
the same logical unit number (LUN) in the IBM System Storage disk subsystem, both nodes
might potentially overwrite data from the other node.

System i single-level storage requires you to adhere to the following rules when you use
multipath disk units in a multiple-system environment:
 If you move an IOP with a multipath connection to a different LPAR, you must also move all
other IOPs with connections to the same disk unit to the same LPAR.
 When you make an expansion unit switchable, make sure that all multipath connections to
a disk unit switch with the expansion unit.
 When you configure a switchable independent disk pool, make sure that all of the required
IOPs for multipath disk units switch with the independent disk pool.

If a multipath configuration rule is violated, the system issues warnings or errors to alert you
of the condition. It is important to pay attention when disk unit connections are reported
missing. You want to prevent a situation where a node might overwrite data on a LUN that
belongs to another node.

Disk unit connections might be missing for a variety of reasons, but especially if one of the
preceding rules has been violated. If a connection for a multipath disk unit in any disk pool is
found to be missing during an IPL or vary on, a message is sent to the QSYSOPR message
queue.

If a connection is missing, and you confirm that the connection has been removed, you can
update Hardware Service Manager to remove that resource. Hardware Service Manager is a
tool to display and work with system hardware from both a logical and a packaging viewpoint,
an aid for debugging IOPs and devices, and for fixing failing and missing hardware. You can
access Hardware Service Manager in System Service Tools (SST) and Dedicated Service
Tools (DST) by selecting the option to start a service tool.

4.2.3 Planning considerations for Copy Services
In this section, we discuss important planning considerations for implementing IBM System
Storage Copy Services solutions for i5/OS.

FlashCopy storage configuration considerations


When planning for a FlashCopy implementation, the following configuration guidelines from a
DS storage system perspective help you to achieve good performance and smooth
operations:
 Configure the FlashCopy source and target volumes within the same rank group, that is,
avoid cross-cluster FlashCopy relationships with the source on an even LSS and the
target on an odd LSS or vice versa.
 Use the same disk speed (preferably 15K RPM drives) for both source and target
volumes.
 Use FlashCopy with the no-background copy (no-copy) option, instead of the default
full-copy option, for short-lived FlashCopy relationships, such as for system backup to tape
to limit the performance impact to your production system.
 If your host write I/O workload changes more than about 20% of the data for the duration
of the FlashCopy relationship, refrain from using space efficient FlashCopy
and use regular FlashCopy instead.
 When planning to use the new DS8000 R3 space efficient FlashCopy function, carefully
size the storage space for your repository volumes to prevent running out of space,
which causes the relationship to fail (see 5.2.8, “Sizing for space efficient FlashCopy” on
page 130).

Note: The first release of space efficient FlashCopy with DS8000 R3 does not allow
you to increase the repository capacity dynamically. That is, to increase the capacity,
you will need to delete the repository storage space and re-create it with more physical
capacity.

 For better space efficient FlashCopy write performance, you might consider using RAID10
for the target volumes as the writes to the shared repository volumes always have random
I/O character (see 5.2.8, “Sizing for space efficient FlashCopy” on page 130).

Planning for FlashCopy with i5/OS


The FlashCopy Copy Services function of IBM System Storage DS8000, DS6000 or ESS
essentially enables you to create an i5/OS system image as an identical point-in-time replica
of your entire storage space. This capability has become more realistic since i5/OS V5R3M5
with the advent of being able to place an i5/OS load source directly in a SAN storage
subsystem.

By using FlashCopy to create a duplicate i5/OS system image of your production system,
IPLing another i5/OS LPAR from it, and running the backup to tape there, you can increase the
availability of your production system by reducing or eliminating downtime for system saves.
FlashCopy can also assist you with having a backup image of your entire system
configuration to which you can rollback easily in the event of a failure during a release
migration or a major application upgrade.



Important: The i5/OS V6R1 quiesce for Copy Services function helps ensure that
modified data residing in main memory is written to disk prior to creating an i5/OS
image with FlashCopy. For i5/OS versions prior to V6R1, we recommend that you shut
down the system completely (PWRDWNSYS) before you initiate a full-system FlashCopy.
Ending subsystems or bringing the system to a restricted state does not guarantee that all
contents of main storage will be written to disk.

Keep in mind that an i5/OS image created through FlashCopy is a point-in-time instance and
thus should be used for recovery of the production system only as a full backup of the
production system image. Many of the objects, such as history logs, journal receivers, and
journals, have a different data history reflected in them and must not be restored to the
production system.

You must not attach any copied LUNs to the original parent system unless they have been
used on another partition first or initialized within the IBM System Storage subsystems.
Failure to observe this restriction will have unpredictable results and can lead to loss of data.
This is due to the fact that the copied LUNs are perfect copies of LUNs that are on the parent
system. As such, the system would not be able to tell the difference between the original and
the cloned LUN if they were attached to the same system.

As soon as you copy an i5/OS image, attach it to a separate partition that will own the LUNs
that are associated with the copied image. By doing this, you make them safe to be reused
again on the parent partition.

When planning to implement FlashCopy or Remote Mirror and Copy functions such as Metro
Mirror and Global Mirror for copying of an i5/OS system consider the following points:
 Storage system licenses for use of Copy Services functions are required.
 Have a sizing exercise completed to ensure that your system and storage configuration is
capable of handling the additional I/O requests. You also need to account for additional
memory, I/O, and disk storage requirements in the storage subsystem in addition to
hardware resources at the system side.
 Ensure that the recovery system or backup partition is configured for boot from SAN to IPL
from the copied i5/OS load source.
 Sufficient capacity (processor, memory, I/O towers, IOPs, IOAs, and storage) is reserved
to bring up the target environment, either in an LPAR or on a separate system that is
locally available in the same data center complex.
 When restarting the environment after attaching the copied LUNs, it is important to
understand that, because these are identical copies of the LUNs in your production
environment, all of the attributes that are unique to your production environment are also
copied, such as network attributes, resource configuration, and system names. It is
important that you perform a manual IPL when you first start the system or partition so that
you can change the configuration attributes before the system fully starts. Examples of the
changes that you need to perform are:
– System Name, Local Location Name, and Default Location Name
You need to change these attributes before you restart SNA or APPC communications,
or prior to using BRMS.

Tip: You might want to create a “backup” startup program that you invoke during the
restart of a cloned i5/OS image so that you can automate many of the configuration
attribute changes that otherwise need manual intervention.

– TCP/IP network attributes
You need to reassign a new IP address for the new system and reconfigure any related
attributes before the cloned image is added to the network, either for performing a full
system save or for performing any read-only operations such as database queries or
report printing.
– System name in the relational database directory entry
You might need to update this entry using the WRKRDBDIRE command before you
start any database activities that rely on these attributes.
 The hardware resource configuration will not match what is on the production system and
needs to be updated prior to starting any network or tape connectivity.
 Remember that any jobs in the job queue will still be there, and any scheduled entries in
the job scheduler will also be there. You might want to clear job queues or hold the job
scheduler on the backup server to avoid any updates to the files, enabling you to have a
true point-in-time instance of your production server.
 You must understand the usage of BRMS when saving from a FlashCopy image of your
production system (see “Using Backup Recovery and Media Services with FlashCopy” on
page 89.)

Using Backup Recovery and Media Services with FlashCopy


In addition to the planning considerations discussed in the previous section, here we provide
additional considerations for using BRMS with FlashCopy for which you need to plan:
 Enable BRMS on the production system to allow use of FlashCopy on the backup system.
 During the save operation on the backup machine (after FlashCopy has completed, and
you have completed all of the IPL steps, including changes to the network attributes and
system attributes), ensure that no backups are conducted on the production system.
BRMS treats your backup as a point-in-time instance of your production system and
maintains the BRMS network and media information across other systems that share the
media inventory.
 After BRMS has completed the save operation, complete the post backup options such as
taking a full save of your QUSRBRM library and restoring it on the production system. You
can do this by using either a tape drive or FTP to transfer the save file. This step is
required to ensure that the BRMS management and media information is transferred back
to the production system before you reuse the disk space associated with the FlashCopy
instance. The restore of QUSRBRM back on the production system provides an accurate
picture of the BRMS environment on the production system, which reflects the backups
that were just performed on the clone system.
 After QUSRBRM is restored, indicate on the production system that the BRMS FlashCopy
function is complete.

Important: If you have to restore your application data or libraries back on the
production system, do not restore any journal receivers that are associated with that
library. Use the OMTOBJ parameter during the restore library operation.

 BRMS for V5R3 has been enhanced to support FlashCopy by adding more options that
can be initiated prior to starting the FlashCopy operation.

For more information about using BRMS with FlashCopy, see 1.2.5, “Using Backup Recovery
and Media Services with FlashCopy” on page 7.



Planning for Remote Mirror and Copy with i5/OS
The Remote Mirror and Copy feature (formerly PPRC) copies data between volumes on two
or more storage units. When your host system performs I/O update operations to the source
volume, they are copied or mirrored to the target volume automatically. After you create a
remote mirror and copy relationship between a source volume and target volume, the target
volume continues to be updated with changes from the source volume until you remove the
relationship between the volumes.

Note the following considerations when planning for Remote Mirror and Copy:
 Determine the recovery point objective (RPO) for your business and clearly understand
the differences between synchronous storage-based data replication with Metro Mirror
and asynchronous replication with Global Mirror, and Global Copy.
 When planning for a synchronous Metro Mirror solution, be aware of the maximum
supported distance of 300 km and expect a delay of your write I/O of around 1 ms per 100
km distance.
 Have a sizing exercise completed to ensure that your system and storage configuration is
capable of handling additional I/O requests, that your I/O performance expectations are
met and that your network bandwidth supports your data replication traffic to meet your
recovery point objective (RPO) targets.
 Acquire storage system licenses for the Copy Services functions to be implemented.
 Unless you are replicating IASPs only, configure your System i production system and
target system with boot from SAN for faster recovery times.
 Sufficient capacity (processor, memory, I/O towers, IOPs, IOAs, and storage) is reserved
to bring up the target environment, either in an LPAR or on a separate system that is
locally available in the same data center complex.

Planning Remote Mirror and Copy with i5/OS IASPs


This solution involves the replication of data at the storage controller level to a second storage
server using IBM System Storage DS8000, DS6000 or ESS. An independent auxiliary
storage pool (IASP) is the basic unit of System i storage you can replicate using Copy
Services. Using IASPs with Copy Services is supported by using a management tool such as
the new i5/OS V6R1 High Availability Solutions Manager (HASM) licensed program product
(5761-HAS) or the System i Copy Services Toolkit services offering (see Figure 4-6 on
page 91).

These tools provide a set of functions to combine PPRC, IASP, and i5/OS cluster services for
coordinated switchover and failover processing through a cluster resource group (CRG) which
is not provided by stand-alone Copy Services management tools such as TPC-R or DS CLI.
This solution provides the benefit of the Remote Copy function and coordinated switching of
operations, which gives you good data resiliency capability if the replication is done
synchronously.

 System i Copy Services Toolkit: fully automated solution for i5/OS clustering and Copy
Services management, with easy to use Copy Services setup scripts
 High Availability Solutions Manager (HASM): i5/OS V6R1 fully integrated solution with CL
commands and a GUI interface for i5/OS clustering and Copy Services management
 System Storage Productivity Center for Replication: stand-alone storage Copy Services
management using a GUI and the CSMCLI
 DS command-line interface (DS CLI): stand-alone storage and Copy Services setup

Figure 4-6 Enhanced functionality of integrated System i Copy Services Management Tools

One of the biggest advantages of using IASP is that you do not need to shut down the
production server for switching over to your recovery system. A vary off of the IASP ensures
that data is written to the disk prior to initiating a switchover. HASM or the toolkit enables you
to attach the second copy to a backup server without an IPL. Replication of IASPs only
instead of your whole System i storage space can also help you to reduce your network
bandwidth requirements for data replication by excluding write I/O to temporary objects in
*SYSBAS. You also have the ability to combine this solution with other functions, such as
FlashCopy, for additional benefits such as save window reduction.

Note the following considerations when planning for Remote Mirror and Copy of IASP:
 Complete the feasibility study for enabling your applications to take advantage of IASP. For
the latest information about high availability and resources on IASP, refer to the System i
high availability Web site:
http://www.ibm.com/eserver/iseries/ha
 Ensure that you have i5/OS 5722-SS1 option 41 - Switchable resources installed on your
system and that you have set up an IASP environment.
 Keep in mind that journaling your database files is still required, even when your data is
residing in an IASP.
 Objects that reside in *SYSBAS, that is, the disk space that is not in an IASP, must be
maintained at equal levels on both the production and backup systems. You can do this by
using the software solutions offered by one of the High Availability Business Partners
(HABPs).
 Set up IASPs and install your applications in IASP. After the application is prepared to run
in an IASP and is tested, implement HASM or the System i Copy Services Toolkit, which is
provided as a service offering from IBM STG lab services.

The toolkit contains the code and services needed to implement a disaster recovery solution.
For more information about the toolkit, contact the High/Continuous Availability and Cluster



group within the IBM System i Technology Center (iTC) by contacting IBM System Storage
Advanced Technical Support.

4.2.4 Planning storage consolidation from different servers


In this section, we refer only to the DS8000 and DS6000 because they are the newer storage
offerings. The assumption is that new consolidations will primarily happen by using the
latest IBM offerings in the area of IBM System Storage disk subsystems.
 Make sure that the DS8000 or DS6000 series is properly sized for open systems. For
sizing of the i5/OS, refer to Chapter 5, “Sizing external storage for i5/OS” on page 115.
 For the DS8000 or DS6000 series with which you plan to share the i5/OS production
workload and other servers, we recommend for performance reasons that you dedicate
RAID ranks to i5/OS. Therefore, plan to allocate sufficient ranks for i5/OS workloads and
separate them from being shared by other open systems.
 Ensure that the host ports that are to be shared between i5/OS and other open systems
are sized adequately to account for combined I/O rates driven by all of the hosts.
 Understand which systems need a disaster recovery solution and plan the use of Remote Mirror
and Copy functions accordingly.
 Understand which systems need FlashCopy and plan capacity for FlashCopy accordingly.
 Plan the multipath attachment optimized for high redundancy; refer to “Avoiding single
points of failure” on page 82.

4.2.5 Planning for SAN connectivity


When planning for System i SAN-attached storage, keep the following considerations in mind:
 Ensure that the FC switches are supported by your combination of IBM System Storage
disk subsystem and System i model prior to ordering the SAN fabric. The best way to
determine if a given SAN switch or director is supported is to check the System Storage
Interoperation Center (SSIC) at:
http://www-01.ibm.com/servers/storage/support/config/ess/index.jsp
 We usually zone the switches so that multiple i5/OS FC adapters are in a zone with one
storage subsystem host port.

Note: Avoid putting more than one storage subsystem host port into a switch zone with
System i FC adapters. At any given time a System i FC adapter uses only one of the
available storage ports in the switch zone, whichever reports in first. A loose SAN switch
configuration, with multiple System i FC adapters having access to
multiple storage ports, can result in performance degradation through an excessive number
of System i FC adapters accidentally sharing the same link to a storage port.

Refer to Chapter 5, “Sizing external storage for i5/OS” on page 115 for recommendations
on the numbers of FC adapters per host ports.
 If the IBM System Storage disk subsystem is connected remotely to a System i host, or if
local and remote storage subsystems are connected using SAN, plan for enough FC links
to meet the I/O requirement of your workload.
 If extenders or dense wavelength division multiplexing (DWDMs) are used for remote
connection, take into account their expected latency when planning for performance.
 If FC over IP is planned for remote connection, carefully plan for the IP bandwidth.

4.2.6 Planning for capacity
When planning the capacity of external disk storage for System i environments, ensure that
you understand the difference between the following three capacity terms of the DS8000 and
DS6000 series:
 Raw capacity
 Effective capacity
 Capacity usable for i5/OS

In this section, we explain these capacity terms and highlight the differences between them.

Raw capacity
Raw capacity of a DS, also referred to as physical capacity, is the capacity of all physical disk
drives in a DS system including the spare drives. When calculating raw capacity, we do not
take into account any capacity that is needed for parity information of RAID protection. We
simply multiply the number of disk drive modules (DDMs) by their capacity. Consider the
example where a DS8000 has five disk drive enclosures (each enclosure has 16 disk drives)
of 73 GB. Thus, the DDMs have 5.84 TB of raw capacity based on the following equation:
5 x 16 x 73 GB = 5840 GB, which is 5.84 TB of raw capacity
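The raw capacity arithmetic is simple enough to script. The following Python sketch is a minimal illustration only (the function name and default values are ours, chosen to match the example above), not an IBM tool:

def raw_capacity_gb(enclosures, ddms_per_enclosure=16, ddm_gb=73):
    # Raw (physical) capacity counts every DDM, including spares,
    # with no deduction for RAID parity.
    return enclosures * ddms_per_enclosure * ddm_gb

# Example from the text: five enclosures of sixteen 73 GB DDMs
print(raw_capacity_gb(5))   # 5840 GB, that is 5.84 TB of raw capacity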

Spare disk drives


In order to know effective capacity of a DS, you need to understand the rule for how spare
disks are assigned. This rule is called the sparing rule.

Device adapters in DS8000


A DS8100 (2-way processor) can contain up to four device adapter (DA) pairs. They are
connected in the order 2, 0, 3, and 1, as we explain next. DA 2 is first used to connect the
arrays. It is filled with arrays until it connects eight arrays (four array sites or two enclosures) in
the base frame. Then DA pair 0 is used until it connects eight arrays in the base frame. If the
expansion frame is present, DA 3 is used until it is filled with eight arrays in the expansion
frame. Then DA 1 is used until it is filled with eight arrays in the expansion frame. If there are
more arrays in the expansion frame, DA 2 is used again to connect them until it is filled with
eight arrays in the expansion frame. Then DA 0 is used again.



Figure 4-7 illustrates this method.

Note: Figure 4-7 shows only front disk enclosures. However, there are actually as many
back enclosures, that is up to eight enclosures per base frame and up to 16 enclosures per
expansion frame.

Figure 4-7 DS8100 device adapters

In DS8300 (4-way processors), there can be up to eight DA pairs. The pairs are connected in
the following order: 2, 0, 4, 6, 7, 5, 3, and 1. They are connected to arrays in the same way as
described for DS8100. All the DA pairs are filled with arrays until eight arrays per DA pair are
reached. DA pair 0 and 2 are used for more than eight arrays if needed.

Figure 4-8 shows this method.

Note: Figure 4-8 shows only the front disk enclosures. However, there are actually as
many back enclosures, that is up to eight enclosures per base frame and up to 16
enclosures per expansion frame.

Figure 4-8 Device adapters in DS8300

Spares in DS8000
In DS8000, a minimum of one spare is required for each array site (or array) until the following
conditions are met:
 Minimum of four spares per DA pair
 Minimum of four spares of the largest capacity array site on the DA pair
 Minimum of two spares of capacity and an RPM greater than or equal to the fastest array
site of any given capacity on the DA pair

Knowing the rule of how DA pairs are used, we can determine the number of spares that are
needed in a DS configuration and which RAID arrays will have a spare. If there are DDMs of a
different size, more work is needed to calculate which arrays will have spares.

Consider the same example for which we calculated raw capacity in “Raw capacity” on
page 93, and now calculate the spares. The DS8100 has 10 array sites (10 arrays) of 73 GB
DDMs, and all of the arrays are RAID-5. Eight arrays are connected to DA pair 2. The first four
arrays have a spare (6+P arrays) to fulfill the rule of a minimum of four spares per DA pair. The
next four arrays on this DA pair are without a spare (7+P arrays). Two arrays are connected to


DA pair 0. Both of them have a spare (6+P arrays) to fulfill the rule of a minimum of one spare per
array site (array). Therefore, in this DS configuration, there are six 6+P+S ranks and four 7+P
ranks.

Figure 4-9 illustrates this example, which is a result of the DS CLI command lsarray.

Array State Data RAIDtype arsite Rank DA Pair DDMcap (Decimal GB)
======================================================================
A0 Assigned Normal 5 (6+P) S1 R0 0 73.0
A1 Assigned Normal 5 (6+P) S2 R1 0 73.0
A2 Assigned Normal 5 (6+P) S3 R2 2 73.0
A3 Assigned Normal 5 (6+P) S4 R3 2 73.0
A4 Assigned Normal 5 (6+P) S5 R4 2 73.0
A5 Assigned Normal 5 (6+P) S6 R5 2 73.0
A6 Assigned Normal 5 (7+P) S7 R6 2 73.0
A7 Assigned Normal 5 (7+P) S8 R7 2 73.0
A8 Assigned Normal 5 (7+P) S9 R8 2 73.0
A9 Assigned Normal 5 (7+P) S10 R9 2 73.0
Figure 4-9 Sparing rule for DS8000
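For configurations with DDMs of equal size and speed, the sparing rule effectively gives the first four arrays on each DA pair a spare. The following Python sketch is a simplification under exactly that assumption (it ignores mixed DDM sizes and RPMs, and the function name and fill order are ours); it reproduces the six 6+P+S and four 7+P ranks of the example:

def ds8100_raid5_ranks(num_arrays, da_pair_order=(2, 0, 3, 1), arrays_per_da=8):
    # Returns a list of (DA pair, rank type): with equal-size DDMs the first
    # four arrays on each DA pair carry a spare (6+P+S), the rest are 7+P.
    ranks = []
    remaining = num_arrays
    for da in da_pair_order:
        for i in range(min(arrays_per_da, remaining)):
            ranks.append((da, "6+P+S" if i < 4 else "7+P"))
        remaining -= min(arrays_per_da, remaining)
        if remaining == 0:
            break
    return ranks

ranks = ds8100_raid5_ranks(10)
print(sum(1 for _, t in ranks if t == "6+P+S"), "x 6+P+S,",
      sum(1 for _, t in ranks if t == "7+P"), "x 7+P")   # 6 x 6+P+S, 4 x 7+P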

Spares in DS6000
DS6000 has two device adapters or one device adapter pair that is used to connect disk
drives in two FC loops, as shown in Figure 4-10 and Figure 4-11. In DS6000, a minimum of
one spare is required for each array site until the following conditions are met:
 Minimum of two spares on each FC loop
 Minimum of two spares of the largest capacity array site on the FC loop
 Minimum of two spares of capacity and rpm greater than or equal to the fastest array site
of any given capacity on the DA pair

Therefore, if only a single RAID-5 array is configured, then one spare is in the server
enclosure. If two RAID-5 arrays are configured, two spares are present in the enclosure as
shown in Figure 4-10. This figure shows the first expansion enclosure and its location on the
second FC loop, which is separate from the server enclosure FC loop. Therefore the same
sparing rules apply. That is, if the expansion enclosure has only one RAID-5 array, there is
one spare. If two RAID arrays are configured in the expansion enclosure, then two spares are
present.

Figure 4-10 DS6000 spares for RAID-5

Figure 4-11 shows an example of spares in DS6000 with RAID-10 arrays.

Figure 4-11 DS6000 spares for RAID-10

Effective capacity
Effective capacity of a DS system is the amount of storage capacity that is available for the
host system after the logical configuration of the DS has been completed. However, the actual
capacity that is visible to i5/OS is smaller than the effective capacity. Therefore, we discuss
the actual usable capacity for i5/OS in “i5/OS LUNs and usable capacity for i5/OS” on
page 99.

Effective capacity of a rank depends on the number of spare disks in the corresponding array
and on the type of RAID protection of the array. When calculating effective capacity of a rank,
we take into account the capacity of the spare disk, the capacity needed for RAID parity, and
the capacity needed for metadata, which internally describes the logical to physical volume
mapping. Also, effective capacity of a rank depends on the type of rank, either CKD or fixed
block. Because i5/OS uses fixed block ranks, we limit our discussion to these ranks.



Table 4-6 shows the effective capacities of fixed block RAID ranks in DS8000 in decimal GB
and binary GB. It also shows the number of extents that are created from a rank.

Table 4-6 DS8000 RAID rank effective capacities


RAID type   DDM cap.   Array form.   Extents   Binary GB   Decimal GB

RAID-5      73 GB      6+P+S         386       386         414.46
RAID-5      73 GB      7+P           450       450         483.18
RAID-5      146 GB     6+P+S         779       779         836.44
RAID-5      146 GB     7+P           909       909         976.03
RAID-5      300 GB     6+P+S         1582      1582        1698.66
RAID-5      300 GB     7+P           1844      1844        1979.98
RAID-10     73 GB      3+3+2S        192       192         206.16
RAID-10     73 GB      4+4           256       256         274.88
RAID-10     146 GB     3+3+2S        386       386         414.46
RAID-10     146 GB     4+4           519       519         557.27
RAID-10     300 GB     3+3+2S        785       785         842.89
RAID-10     300 GB     4+4           1048      1048        1125.28

Table 4-7 shows the effective capacity of fixed block 8-width RAID ranks in DS6000 in decimal
GB and binary GB. It also shows the number of extents.

Table 4-7 DS6000 8-width RAID rank effective capacity


RAID type   DDM cap.   Array form.   Extents   Binary GB   Decimal GB

RAID-5      73 GB      6+P+S         382       382         410.17
RAID-5      73 GB      7+P           445       445         477.81
RAID-5      146 GB     6+P+S         773       773         830.00
RAID-5      146 GB     7+P           902       902         968.51
RAID-5      300 GB     6+P+S         1576      1576        1692.21
RAID-5      300 GB     7+P           1837      1837        1972.46
RAID-10     73 GB      3+3+2S        190       190         204.01
RAID-10     73 GB      4+4           254       254         272.73
RAID-10     146 GB     3+3+2S        386       386         414.46
RAID-10     146 GB     4+4           515       515         552.97
RAID-10     300 GB     3+3+2S        787       787         845.03
RAID-10     300 GB     4+4           1050      1050        1127.42

Table 4-8 shows the effective capacities of 4-width RAID ranks in DS6000.

Table 4-8 DS6000 4-width RAID rank effective capacity


RAID type   DDM cap.   Array form.   Extents   Binary GB   Decimal GB

RAID-5      73 GB      2+P+S         127       127         136.36
RAID-5      73 GB      3+P           190       190         204.01
RAID-5      146 GB     2+P+S         256       256         274.87
RAID-5      146 GB     3+P           386       386         414.46
RAID-5      300 GB     2+P+S         524       524         562.64
RAID-5      300 GB     3+P           787       787         845.03
RAID-10     73 GB      1+1+2S        62        62          66.57
RAID-10     73 GB      2+2           127       127         136.36
RAID-10     146 GB     1+1+2S        127       127         136.36
RAID-10     146 GB     2+2           256       256         274.87
RAID-10     300 GB     1+1+2S        261       261         280.24
RAID-10     300 GB     2+2           524       524         562.64

As an example, we calculate the effective capacity for the same DS configuration as we use in
“Raw capacity” on page 93, and “Spare disk drives” on page 93. For a DS8100 with 10
RAID-5 ranks of 73 GB DDMs, six ranks are 6+P+S and four ranks are 7+P. The effective
capacity is:
(6 x 414.46 GB) + (4 x 483.18 GB) = 4419.48 GB
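Given the per-rank values from Table 4-6, the effective capacity of a configuration is a simple weighted sum. The following Python sketch is illustrative only (the dictionary and function name are ours; the values are the 73 GB DDM RAID-5 figures from Table 4-6):

RANK_GB = {"6+P+S": 414.46, "7+P": 483.18}   # decimal GB per rank, from Table 4-6

def effective_capacity_gb(ranks):
    # ranks: mapping of rank type to the number of ranks of that type
    return sum(RANK_GB[rank_type] * count for rank_type, count in ranks.items())

print(f"{effective_capacity_gb({'6+P+S': 6, '7+P': 4}):.2f} GB")   # 4419.48 GB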

i5/OS LUNs and usable capacity for i5/OS


i5/OS LUNs have fixed sizes and are composed of 520 byte blocks consisting of 512 usable
bytes for data and 8 header bytes used by System i for storing metadata like the virtual
address. The sizes of i5/OS LUNs that are expressed in decimal GB are 8.59 GB, 17.54 GB,
35.16 GB, and so on. These sizes expressed in binary GB are 8 GB, 16.34 GB, 32.75 GB,
and so on.

A LUN on DS8000 and DS6000 is formed of so-called extents with a size of 1 binary GB.
Because i5/OS LUN sizes expressed in binary GB are not whole multiples of 1 GB, part of
the space of an assigned extent will not be used, but it also cannot be used for other LUNs.

Table 4-9 shows the models of i5/OS LUNs, their sizes in decimal GB, the number of extents
they use, and the percentage of usable (not wasted) space for each LUN.

Table 4-9 i5/OS LUN sizes

Model,        Model,      i5/OS device size   Number of   % of usable
unprotected   protected   (decimal GB)        extents     space
A81           A01         8.59 *              8           100
A82           A02         17.54               17          96.14
A85           A05         35.16               33          99.24
A84           A04         70.56               66          99.57
A86           A06         141.1               132         99.57
A87           A07         282.2 *             263         99.95
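The usable-space percentages in Table 4-9 follow from the fact that each LUN is carved from whole 1 binary GB extents. The following Python sketch is an approximation only (the helper name is ours, and the small differences from the table values come from rounding of the published LUN sizes):

BINARY_GB = 2**30 / 10**9          # one binary GB expressed in decimal GB (~1.0737)

def usable_percent(lun_size_decimal_gb, extents):
    # Share of the allocated extents actually occupied by the fixed-size i5/OS LUN
    return 100 * lun_size_decimal_gb / (extents * BINARY_GB)

for size, extents in [(8.59, 8), (17.54, 17), (35.16, 33),
                      (70.56, 66), (141.1, 132), (282.2, 263)]:
    print(f"{size:>7} GB LUN, {extents:>3} extents -> {usable_percent(size, extents):6.2f}% usable")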



Important: The supported logical volume sizes for a load source unit that is located on
ESS Model 800 and DS6000 and DS8000 products are 17.54 GB, 35.16 GB, 70.56 GB,
and 141.1 GB. Logical volumes of size 8.59 and 282.2 (noted by the asterisk (*) in
Table 4-9) are not supported as System i5 load source units, where the load source unit is
to be located in the external storage server.

When defining a LUN for i5/OS, it is possible to specify whether the LUN is seen by i5/OS as
RAID protected or as unprotected. You achieve this by specifying the correct model of i5/OS
LUN. Models A0x are seen by i5/OS as protected, while models A8x are seen as unprotected.
Here, x stands for 1, 2, 4, 5, 6, or 7.

The general recommendation is to define LUNs as protected models. However, take
into account that whenever a LUN is to be mirrored by i5/OS mirroring, you must define it as
unprotected. Whenever there will be mirrored and non-mirrored LUNs in the same ASP,
define the LUNs that are not to be mirrored as protected. When mirroring is started on an ASP,
only the unprotected LUNs of this ASP are mirrored, and all the protected ones are left out
of mirroring. Consider this, for example, when using i5/OS releases prior to V6R1, where the load
source used to be mirrored between an internal disk and a LUN, or between two LUNs, to
provide path redundancy because multipath was not yet supported for the load source unit.

LUNs are created in DS8000 or DS6000 storage from an extent pool which can contain one
or more RAID ranks. For information about the number of available extents from a certain
type of DS rank, see Table 4-6 on page 98, Table 4-7 on page 98, and Table 4-8 on page 99.

Note: We generally recommend to configure DS8000 or DS6000 storage with only one
single rank per extent pool for System i host attachment. This ensures that storage space
for a LUN is allocated from a single rank only which helps to better isolate potential
performance problems. It also supports the recommendation to use dedicated ranks for
System i server or LPARs not shared with other platform servers.

This implies that we also generally do not recommend using the DS8000 Release 3
function of storage pool striping (also known as extent rotation) for System i host
attachment. System i storage management already distributes its I/O as well as possible
across the available LUNs in an auxiliary storage pool, so using extent rotation to
distribute the storage space of a single LUN across multiple ranks would amount to
over-virtualization.

An i5/OS LUN uses a fixed number of extents. After a certain number of LUNs are created
from an extent pool, usually some space is left. Usually, we define as many LUNs as possible
of one size from an extent pool and optionally define LUNs of the next smaller size from the
space remaining in the extent pool. We try to define LUNs of as equal a size as possible in
order to have a balanced I/O rate and consequently better performance.

Table 4-10 and Table 4-11 show possibilities for defining i5/OS LUNs in an extent pool.

Table 4-10 LUNs from a 6+P+S rank of 73 GB DDMs (386 extents, 414.46 GB)
70 GB LUNs   35 GB LUNs   17 GB LUNs   8 GB LUNs   Used extents   Used decimal GB
5 1 0 0 381 387.96
0 11 1 0 380 404.3
0 10 3 0 381 404.22
0 9 5 0 382 404.14
0 8 7 0 383 404.06
0 0 22 1 382 394.47
0 0 21 3 381 394.11
0 0 20 5 380 393.75
0 0 19 7 379 393.39
0 0 18 10 386 401.62
0 0 17 12 385 401.26
0 0 16 14 384 400.9

Table 4-11 LUNs from a 7+P rank of 73 GB DDMs (450 extents, 483.18 GB)
70 GB LUNs   35 GB LUNs   17 GB LUNs   8 GB LUNs   Used extents   Used decimal GB
6 1 0 0 429 458.52
0 13 1 0 446 474.62
0 12 3 0 447 474.54
0 11 5 0 448 474.46
0 10 7 0 449 474.38
0 0 26 1 450 464.63
0 0 25 3 449 464.27
0 0 24 5 448 463.91
0 0 23 7 447 463.55
0 0 22 9 446 463.19
0 0 21 11 446 462.83
0 0 20 13 444 462.47

Use the following equation to determine the number of LUNs of a given size that one extent
pool can contain:
number of extents in extent pool - (number of LUNs x number of extents in a LUN) =
residual

Optionally, repeat the same operation to define smaller LUNs from the residual.
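The residual equation is easy to apply programmatically. The following Python sketch is a minimal illustration (the extent counts per LUN size are taken from Table 4-9, the rank extent counts from Table 4-6, and the greedy packing strategy and function name are ours); it reproduces the per-rank layouts used in the example that follows:

LUN_EXTENTS = {35.16: 33, 17.54: 17}   # extents used per i5/OS LUN size (decimal GB)

def pack_rank(extents, sizes=(35.16, 17.54)):
    # Define as many LUNs of the largest size as possible, then fill the residual
    layout, remaining = {}, extents
    for size in sizes:
        count = remaining // LUN_EXTENTS[size]
        layout[size] = count
        remaining -= count * LUN_EXTENTS[size]
    return layout, remaining

for rank_extents in (386, 450):        # 6+P+S and 7+P ranks of 73 GB DDMs
    layout, residual = pack_rank(rank_extents)
    usable = sum(size * count for size, count in layout.items())
    print(rank_extents, layout, f"-> residual {residual} extents, {usable:.2f} GB usable")
# 386 extents -> 11 x 35.16 GB + 1 x 17.54 GB = 404.30 GB usable
# 450 extents -> 13 x 35.16 GB + 1 x 17.54 GB = 474.62 GB usable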



The capacity that is available to i5/OS is the number of defined LUNs multiplied by the
capacity of each LUN. If, for example, the DS8000 is configured with six 6+P+S ranks and four
7+P ranks of 73 GB DDMs, then from each 6+P+S rank we define eleven 35 GB LUNs and one
17 GB LUN, and from each 7+P rank we define thirteen 35 GB LUNs and one 17 GB LUN. The
capacity available to i5/OS is:
(6 x 404.30 GB) + (4 x 474.62 GB) = 4324.28 GB

Capacity Magic
Capacity Magic, from IntelliMagic™ (Netherlands), is a Windows-based tool that calculates
the raw and effective capacity of DS8000, DS6000 or ESS model 800 based on the input of
the number of ranks, type of DDMs, and RAID type. The input parameters can be entered
through a graphical user interface. The output of Capacity Magic is a detailed report and a
graphical representation of capacity.

For more information, refer to:


http://www.intellimagic.net/en/product.phtml?p=Capacity+Magic

Example of using Capacity Magic
In this example, we plan a DS8100 with 9 TB of effective capacity in RAID-5. We use
Capacity Magic to calculate the needed raw capacity and to present the structure of spares
and parity disks. The process that we use is as follows:
1. Launch Capacity Magic.
2. In the Welcome to Capacity Magic for Windows window (Figure 4-12), specify the type of
planned storage system and the desired way to create a Capacity Magic project. In our
example, we select DS6000 and DS8000 Configuration Wizard and select OK to guide
us through the Capacity Magic configuration.

Figure 4-12 Selecting the type of storage system



3. After clicking Next in the Wizard’s informational window, select which model of DS to use
(Figure 4-13). In our example, we select DS 8100 - 2-way and click Next.

Figure 4-13 Specifying the DS model for Capacity Magic

4. Select the way in which you plan to define the extent pools. For System i attachment, we
define 1 Extent Pool for each RAID rank (see Figure 4-14). Click Next.

Figure 4-14 Method for defining the extent pool



5. Select the type of host system. In this example, we plan DS8000 for i5/OS, so we select
iSeries (see Figure 4-15). Then, click Next.

Figure 4-15 Specifying the host system

6. Specify the type of DDMs and the type of RAID protection. As shown in Figure 4-16,
observe that 73 GB DDMs and RAID-5 are already inserted as the default. In our example,
we leave the default values. Click Next.

Figure 4-16 Specifying the DDM type and RAID type



7. Specify the desired effective capacity. In our example, we need effective capacity 9 TB, so
we enter this value (as shown in Figure 4-17). Note that usable capacity for i5/OS is
smaller than the effective capacity, as explained in “i5/OS LUNs and usable capacity for
i5/OS” on page 99. Click Next.

Figure 4-17 Inserting the desired effective capacity

8. Next, review the selected configuration and click Finish to continue, as shown in
Figure 4-18.

Figure 4-18 Reviewing the selected configuration



9. The graphical output displays, showing the needed number of arrays, spares, and
parity disks (see Figure 4-19). Click Report table on the right.

Figure 4-19 Graphical output of Capacity Magic

A detailed report of the needed drive sets (megapacks) displays, including disk enclosure
fillers, number of extents, raw capacity, effective capacity, and so on. Figure 4-20 shows
part of this report.

Figure 4-20 Capacity Magic report

4.2.7 Planning considerations for performance


It is extremely important that you plan and size both the System i platform and the DS8000,
DS6000, or ESS series, properly for an i5/OS workload. You should start by understanding
the critical I/O performance periods and the performance expectations. For some customers,
response time during transaction workload is critical. For other customers, reduction in the
overall batch runs might be important or reduction in the overall save time can be important.

It is equally important to ensure that the sizing requirements for your SAN configuration also
take into account the additional resources required when enabling advanced Copy Services
functions such as FlashCopy or PPRC. This is particularly important if you are planning to
enable synchronous Metro Mirror storage-based replication or space efficient FlashCopy.

Attention: You must correctly size the Copy Services functions that are enabled at the
system level to account for additional I/O resources, bandwidth, memory, and storage
capacity. The use of these functions, either synchronously or asynchronously, can impact
the overall performance of your system. To reduce the overhead by not replicating the
temporary objects that are created in the system libraries, such as QTEMP, consider using
IASPs with Copy Services functions.

We recommend that you obtain i5/OS performance reports from data that is collected during
critical workload periods and size the DS8000 or DS6000 accordingly, for every System i
environment or i5/OS LPAR that you want to attach to a SAN configuration. For information
about how to size IBM System Storage external disk subsystems for i5/OS workloads see
Chapter 5, “Sizing external storage for i5/OS” on page 115.



Where performance data is not available, we recommend that you use one of the IBM
Benchmark Centers, either in Rochester or France. For more information, see:
http://www-03.ibm.com/systems/services/benchmarkcenter/servers/benchmark_i.html

PCI I/O card placement rules


Implementation of the PCI architecture provides flexibility in the placement of IOPs and IOAs
in IBM System i Models 515, 520, 525, 550, 570, and 595.

With PCI-X, the maximum bus speed is increased to 133 MHz from a PCI maximum of
66 MHz. PCI-X is backward compatible and can run at slower speeds, which means that you
can plug a PCI-X adapter into a PCI slot and it runs at the PCI speed, not the PCI-X speed.
This can result in a more efficient use of card slots but potentially for the tradeoff of less
performance.

Increased configuration flexibility reinforces a requirement to understand the detailed


configuration rules. See PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM System
i Models, REDP-4011, at:
http://www.redbooks.ibm.com/redpieces/abstracts/redp4011.html?Open

Attention: If the configuration rules and restrictions are not fully understood and followed,
it is possible to create a hardware configuration that does not work, marginally works, or
quits working when a system is upgraded to future software releases.

Follow these plugging rules for the #5760, #2787, and #2766 Fibre Channel Disk Controllers:
 Each of these adapters requires a dedicated IOP. No other IOAs are allowed on that IOP.
 For best performance, place these 64-bit adapters in 64-bit PCI-X slots. They can be
plugged into 32-bit or 64-bit PCI slots but the performance might not be optimized.
 If these adapters are heavily used, we recommend that you have only one per
Multi-Adapter Bridge (MAB) boundary.

In general, spread any Fibre Channel disk controller IOAs as evenly as possible among the
attached I/O towers and spread the I/O towers as evenly as possible among the I/O loops.

Refer to the recommendations in Table 4-12 for limiting the number of FC adapters per
System i I/O half-loop to prevent performance degradation due to congestion on the loop.

Table 4-12 System i I/O loop Fibre Channel adapter recommendations


I/O Half-Loop                     Maximum number of     Maximum number of     Maximum number of
                                  #2766/#2787 for       #5760 for             IOP-less (a) for
                                  transaction/          transaction/          transaction/
                                  sequential workload   sequential workload   sequential workload

HSL-1 (1 GBps)                    8 / 4                 8 / 3                 not supported
(~400 MBps effective
bandwidth unidirectional)

HSL-2 (2 GBps)                    14 / 8                14 / 6                2 / 2
(~750 MBps effective
bandwidth unidirectional)

12X (3 GBps)                      not supported         not supported         3 / 3
(~1.2 GBps effective
bandwidth unidirectional)

a. IOP-less FC dual-port adapters are #5774 and #5749
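If you automate configuration checks, the limits in Table 4-12 can be expressed as a simple lookup. The following Python sketch is illustrative only (the values are transcribed from Table 4-12; the dictionary and function names are ours):

# (transaction, sequential) workload limits per half-loop; None = not supported
MAX_FC_ADAPTERS = {
    ("HSL-1", "#2766/#2787"): (8, 4),
    ("HSL-1", "#5760"):       (8, 3),
    ("HSL-1", "IOP-less"):    None,
    ("HSL-2", "#2766/#2787"): (14, 8),
    ("HSL-2", "#5760"):       (14, 6),
    ("HSL-2", "IOP-less"):    (2, 2),
    ("12X",   "#2766/#2787"): None,
    ("12X",   "#5760"):       None,
    ("12X",   "IOP-less"):    (3, 3),
}

def check_half_loop(loop, adapter, planned, workload="transaction"):
    limits = MAX_FC_ADAPTERS[(loop, adapter)]
    if limits is None:
        return f"{adapter} is not supported on a {loop} loop"
    limit = limits[0] if workload == "transaction" else limits[1]
    verdict = "within" if planned <= limit else "exceeds"
    return f"{planned} x {adapter} on {loop} {verdict} the recommended maximum of {limit}"

print(check_half_loop("HSL-2", "#5760", 7, "sequential"))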

Our sizing is based on an I/O half-loop concept because, as shown in Figure 4-21, a
physically closed I/O loop with one or more I/O towers is actually used by the system as two
I/O half-loops. There is an exception to this, though, for older iSeries hardware prior to
POWER5, where a single I/O tower per loop configuration resulted in only one half-loop being
actively used. As can be seen, with three I/O towers in a loop, one half-loop gets two I/O towers and the
other half-loop gets one I/O tower. The PHYP bringup code determines which half-loop gets the
extra I/O tower.

Figure 4-21 I/O half-loop concept (one, two, and three I/O towers per loop; I/O half-loops are indicated by dotted lines)

With the System i POWER6 12X loop technology, the parallel bus data-width is increased
from previous 8 bits used by HSL-1 and HSL-2 to 12 bits, which is where the name 12X
comes from referring to the number of wires used for data transfer. In addition, with 12X the
clock rate is increased to 2.5 GHz compared to 2.0 GHz of the previous HSL-2 technology.

When using System i POWER6 with 12X loops for external storage attachment, plan to use
GX slot P1-C9 (the right one when viewed from behind) in the CEC, which, in contrast to its neighbor GX
slot P1-C8, does not need to share bandwidth with the CEC’s internal slots.




Chapter 5. Sizing external storage for i5/OS


IBM System Storage Disk Storage series provides maximum flexibility for multiserver storage
consolidation and can be designed to meet a variety of customer requirements, including both
large storage capacity and good performance. Most customers’ System i models typically
have mixed application workloads with varying I/O characteristics, such as high I/O rates from
transaction workload, as well as large capacity requirements for slower workloads, like data
archival.

Fully understanding your customer’s i5/OS workload I/O characteristics and then using
specific recommended analysis and sizing techniques to configure a DS and System i
solution is key to meeting the customer’s storage performance and capacity expectations. A
properly sized and configured DS system on a System i model provides the customer with an
optimized solution for their storage requirements. However, configurations that are drawn up
without proper planning or an understanding of the workload requirements can result in poor
performance and even customer-impacting events.

In this chapter, we describe how to size a DS system for the System i platform. We present
the rules of thumb and describe several tools to help with the sizing tasks.

For good performance of a DS system with i5/OS workload, it is important to provide enough
resources, such as disk arms and FC adapters. Therefore, we recommend that you follow the
general sizing guidelines or rules of thumb even before you use the Disk Magic™ tool for
modeling performance of a DS system with the System i5 platform.



Figure 5-1 illustrates the recommended steps to follow when sizing a DS system for an i5/OS
workload.

Figure 5-1 Sizing a DS system for i5/OS (flowchart: workload description or PT1 reports, workload
characteristics and statistics, other requirements such as HA and BC, rules of thumb, SAN fabric,
proposed configuration, workload from other servers, and Disk Magic modeling with configuration
adjustments until requirements and expectations are met)

5.1 General sizing discussion


To better understand the sizing guidelines, in this section, first we briefly describe the I/O flow
operation in the System i5 and DS systems. Then we explain how disk response time relates
to application response time.

5.1.1 Flow of I/O operations


The System i platform with i5/OS uses the same architectural component that is used by the
iSeries and AS/400 platform, single-level storage. It sees all disk space and the main memory
as one storage area. It uses the same set of 64-bit virtual addresses to cover both main
memory and disk space. Paging in this virtual address space is performed in 4 KB memory
pages.

Figure 5-2 illustrates the concept of single-level storage.

Figure 5-2 Single-level storage

When the application performs an I/O operation, the portion of the program that contains read
or write instructions is first brought into main memory where the instructions are then
executed.

With the read request, the virtual addresses of the needed record are resolved, and for each
needed page, storage management first looks to see if it is in the main memory. If the page is
there, it is used to resolve the read request. However, if the corresponding page is not in main
memory, a page fault is encountered and the page must be retrieved from disk. When a page is
retrieved, it replaces another page in memory that was not recently used; the replaced page
is paged out (destaged) to disk.

Similarly writing a new record or updating an existing record is done in main memory, and the
affected pages are marked as changed. A changed page normally remains in main memory
until it is written to disk as a result of a page fault. Pages are also written to disk when a file is
closed or when write-to-disk is forced by a user through commands and parameters. Also,
database journals are written to the disk.

When a page must be retrieved from disk or a page is written to disk, System Licensed
Internal Code (SLIC) storage management translates the virtual address to a real address of
a disk location and builds an I/O request to disk. The amount of data that is transferred to disk
at one I/O request is called a blocksize or transfer size. From the way reads and writes are
performed in single-level storage, you would expect that the amount of transferred data is
always one page or 4 KB. In fact, data is usually blocked by the i5/OS database to minimize
disk I/O requests and transferred in blocks that are larger than 4 KB. The blocking of
transferred data is done based on the attributes of database files, the amount that a file
extends, user commands, the usage of expert cache, and so on.



Figure 5-3 shows how i5/OS storage management handles read and write operations.

Figure 5-3 Handling I/O operations (storage management between main memory and disk space:
page faults, page swaps, file closes, and blocking)

An I/O request to disk is created by the IOA device driver (DD) which for System i POWER6
now resides in SLIC instead of inside the I/O processor (IOP). It proceeds through the RIO
bus to the Fibre Channel I/O adapter (IOA) which is used to connect to the external storage
subsystem. Each IOA accesses a set of logical volumes, logical unit numbers (LUNs), in a DS
system; each LUN is seen by i5/OS as a disk unit. Therefore, the I/O request for a certain
System i disk (LUN) goes to an IOA to which a particular LUN is assigned; I/O requests for a
LUN are queued in IOA. From IOA, the request proceeds through an FC connection to a host
adapter in the DS system. The FC connection topology between IOAs and storage system
host adapters can be point-to-point or can be done using switches.

In a DS system, an I/O request is received by the host adapter. From the host adapter, a
message is sent to the DS processor that is requesting access to a disk track that is specified
for that I/O operation. The following actions are then performed for a read or write operation:
 Read operation: A directory lookup is performed to determine whether the requested track is in cache. If the
requested track is not found in the cache, the corresponding disk track is staged to cache.
The setup of the address translation is performed to map the cache image to the host
adapter PCI space. The data is then transferred from cache to host adapter and further to
the host connection, and a message is sent indicating that the transfer is completed.
 Write operation: A directory lookup is performed to determine whether the requested track is in cache. If the
requested track is not in cache, segments in the write cache are allocated for the track
image. Setup of the address translation is performed to map the write cache image pages
to the host adapter PCI space. The data is then transferred through DMA from the host
adapter memory to the two redundant write cache instances, and a message is sent
indicating that the transfer is completed.

Figure 5-4 shows the described I/O flow between System i POWER6 and a DS8000 storage
system, without the previously required IOP.

Figure 5-4 System i POWER6 external storage I/O flow (i5/OS LPAR with SLIC IOA device driver and
main memory, IOAs connected through SAN switches to DS8000 host adapters, processors with
cache/NVS, and device adapters)

5.1.2 Description of response times


When sizing, it is important to understand how performance of the disk subsystem influences
application performance. To explain this, we first describe the critical performance times:
 Application response time: The response time of an application transaction. This time is
usually critical for the customer.
 Duration of batch job: Batch jobs usually run during the night; the duration of a batch job
is critical for the customer, because it must be finished before regular daily transactions
start.
 Disk response time: The time that is needed for a disk I/O operation to complete, which
includes the service time for the actual I/O processing and the wait time for potential I/O
queuing on the System i host. For IOP-based IOAs, disk response time is derived from
sampling at the IOP level, so this data is representative only for rates of at least around five
I/Os per second. With System i POWER6 IOP-less IOAs, the disk response time is
measured directly in SLIC.



Single-level storage makes main memory work as a big cache. Reads are done from pages in
main memory, and requests to disk are done only when the needed page is not there. Writes
are done to main memory, and write operations to disk are performed only as a result of swap
or file close, and so on. Therefore, application response time depends not only on disk
response time but on many other factors, such as how large the i5/OS storage pool is for the
application, how frequently the application closes files, whether it uses journaling, and so on.
These factors differ from application to application. Thus, it is difficult to give a general rule
about how disk response time influences application response time or duration of a batch job.

Performance measurements were done in IBM Rochester that show how disk response time
relates to throughput. These measurements show the number of transactions per second for
a database workload. This workload is used as an approximation for an i5/OS transaction
workload. The measurements were performed for different configurations of DS6000
connected to the System i platform and different workloads. The graphs in Figure 5-5 show
disk response time at workloads for 25, 50, 75, 100, and 125 database users.

Figure 5-5 Disk response time at different database workloads (three graphs of throughput in
operations per second versus disk response time in milliseconds for configurations of 2 x (1 FC
adapter, 7 LUNs), 2 x (2 FC adapters, 7 LUNs), and 4 x (2 FC adapters, 7 LUNs))

From the three graphs, notice that as we increase the number of FC adapters and LUNs, we
gain more throughput. If we merely increase the throughput for a given configuration, we can
see the disk response time grow sharply.

5.2 Rules of thumb


When sizing an i5/OS workload for external storage, we recommend that you use some sizing
rules of thumb even before you start your external storage performance modeling with the
Disk Magic sizing tool (see 5.3.1, “Disk Magic” on page 132). This way, we ensure that the

basic performance requirements are met and eliminate future performance bottlenecks as
much as possible.

Through these rules of thumb, we determine the following characteristics of a DS6000 or


DS8000 system, a System i model, and a storage area network (SAN) configuration:
 The number of RAID ranks in DS6000 or DS8000
 The number of System i5 Fibre Channel adapters
 The size of System i LUNs to create and how to spread them over extent pools
 The manner in which to share a DS6000 or DS8000 among multiple System i models or
between a System i environment and another workload
 When connecting through SAN switches, the number of System i FC adapters to connect
to one FC port in DS6000 or DS8000

5.2.1 Number of RAID ranks


For a typical System i transaction workload, which due to its largely random I/O character is
rather cache unfriendly, it is extremely important to provide i5/OS with enough disk arms to
achieve good application performance.

When a page or a block of data is written to disk space, storage management spreads it over
multiple disks. Spreading data over multiple disks ensures that multiple disk arms
work in parallel for any request to this piece of data, so writes and reads are done faster.

When using external storage with i5/OS, what SLIC storage management sees as a “physical”
disk unit is actually a logical unit (LUN) composed of multiple stripes of a RAID rank in the
IBM DS storage subsystem (see Figure 5-6). A LUN uses multiple disk arms in parallel,
depending on the width of the RAID rank used. For example, the LUNs configured on a single
DS8000 RAID-5 rank use six or seven disk arms in parallel, while evenly distributing these
LUNs over two ranks uses twice as many disk arms.

[Figure: SLIC storage management sees each disk unit as one disk arm; a block of data written to disk units 1 through 3 maps to LUNs 0 through 2, which are striped across two RAID5 6+P+S arrays in the DS storage subsystem]
Figure 5-6 Usage of disk arms

For a performance-critical i5/OS transaction workload, the number of physical disk arms that
should be made available typically prevails over the capacity requirements.

Important: Determining the number of RAID ranks for a System i external storage solution
by looking only at how many ranks of a given physical DDM size and RAID protection level
are required for the desired storage capacity typically does not satisfy the performance
requirements of a System i workload.



The rule of thumb for how many ranks are needed for an i5/OS workload is based on
performance measurements of a single DS6000 and DS8000 RAID rank. Our calculations are
based on 15K RPM disk drive modules (DDMs).

Note: We generally do not recommend using the lower speed 10K RPM drives for an i5/OS
workload.

The calculation for the recommended number of RAID ranks is as follows, provided that the
reads per second and writes per second of the i5/OS workload are known:
 A RAID-5 rank of 8 * 15K RPM DDMs without a spare disk (7+P rank) is capable of a
maximum of 1700 disk operations per second at 100% utilization without cache hits. This is
valid for both DS8000 and DS6000.
 We take into account a recommended 40% utilization of a rank, so the rank can handle
40% of 1700 = 680 disk operations per second. From the same measurement, we can
calculate the maximum number of disk operations per second for other RAID ranks by
calculating the disk operations per second for one disk drive and then multiplying by the
number of active drives in the rank. For example, a RAID-5 rank with a spare disk (6+P+S
rank) can handle a maximum of 1700 / 7 * 6 = 1458 disk operations per second. At the
recommended 40% utilization, it can handle 583 disk operations per second.
 We calculate the disk operations of the i5/OS workload by taking into account the percentage
of read cache hits, the percentage of write cache hits, and the fact that each write operation in
RAID-5 results in four disk operations (RAID-5 write penalty). If the cache hits are not known,
we make a safe assumption of 20% read cache hits and 30% write cache hits. We use the
following formula:
disk operations = (reads/sec - read cache hits) + 4 * (writes/sec - write cache hits)
As an example, a workload of 1000 reads per second and 700 writes per second results
in:
(1000 - 20% of 1000) + 4 * (700 - 30% of 700) = 2760 disk operations/sec
 To obtain the needed number of ranks, we divide the disk operations per second of the
i5/OS workload by the maximum I/O rate that one rank can handle at 40% utilization.
As an example, for the workload with the previously calculated 2760 disk operations per
second, we need the following number of 7+P RAID-5 ranks:
2760 / 680 = 4
So, we recommend using four ranks in the DS system for this workload. (The sketch that
follows this list illustrates the calculation.)
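
The following minimal sketch illustrates the rank rule of thumb described above. It is our own
example, not part of any IBM tool; the function name, parameter names, and default values are
assumptions, and the per-rank limits should be adjusted to your DDM speed and RAID level
(see Table 5-1).

def required_ranks(reads_per_sec, writes_per_sec,
                   read_cache_hit=0.20, write_cache_hit=0.30,
                   rank_max_ops=1700, rank_utilization=0.40):
    # Each RAID-5 write costs four disk operations (write penalty);
    # cache hits are subtracted from the host I/O rates first.
    disk_ops = (reads_per_sec * (1 - read_cache_hit)
                + 4 * writes_per_sec * (1 - write_cache_hit))
    ops_per_rank = rank_max_ops * rank_utilization    # 40% of 1700 = 680
    return round(disk_ops / ops_per_rank), disk_ops

ranks, ops = required_ranks(1000, 700)
print(f"{ops:.0f} disk ops/sec -> about {ranks} ranks")   # 2760 disk ops/sec -> about 4 ranks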

A handy reference for determining the recommended number of RAID ranks for a known
System i workload is provided in Table 5-1 on page 123, which shows the I/O capabilities of
different RAID-5 and RAID-10 rank configurations. The I/O capability numbers in the two
columns for the host I/O workload examples of 70/30 and 50/50 read/write ratios assume no
cache hits and 40% rank utilization. If the System i workload is similar to one of the two listed
read/write ratios, a rough estimate of the number of recommended RAID ranks can simply be
determined by dividing the total System i I/O workload by the listed I/O capability for the
corresponding RAID rank configuration.

Table 5-1 DS8000/DS6000 RAID rank capabilities

RAID rank type             Disk I/O per second   Host I/O per second (70% read)   Host I/O per second (50% read)
RAID-5 15K RPM (7+P)       1700                  358                              272
RAID-5 10K RPM (7+P)       1100                  252                              176
RAID-5 15K RPM (6+P+S)     1458                  313                              238
RAID-5 10K RPM (6+P+S)     943                   199                              151
RAID-10 15K RPM (3+3+2S)   1275                  392                              340
RAID-10 10K RPM (3+3+2S)   825                   254                              220
RAID-10 15K RPM (4+4)      1700                  523                              453
RAID-10 10K RPM (4+4)      1100                  338                              293

5.2.2 Number of Fibre Channel adapters


For connecting the System i platform to IBM System Storage disk subsystems, the 4 Gb
IOP-based single-port Fibre Channel (FC) adapter with System i feature code 5760 or, for
System i POWER6 models only, the new IOP-less dual-port FC adapters 5749 or 5774 are
used. The older 2 Gb IOP-based single-port FC adapters with System i feature number 2766
or 2787 can also be used for this connection, together with an IOP 2843, 2844, or 2847. Refer to
4.2, “Solution implementation considerations” on page 78 for further information about
planning the attachment of System i to external disk storage.

As with the number of ranks, it is important to properly size the number of Fibre Channel
adapters used for System i external storage attachment to avoid potential I/O performance
bottlenecks due to an undersized configuration. To better understand this sizing, we
present a short description of the data flow through the IOPs and the FC adapter (IOA).

A block of data in main memory consists of an 8-byte header and actual data that is 512 bytes
long. When the block of data is written from main memory to external storage or read into main
memory from external storage, requests are first sent to the IOA device driver, which converts
them into the corresponding SCSI commands understood by the disk unit or storage system.
The IOA device driver resides either within the IOP for IOP-based IOAs or within SLIC for
IOP-less IOAs. In addition, data descriptor lists (DDLs) tell the IOA where in system memory
the data and headers reside. See Figure 5-7.



[Figure: data flow in a System i5 from the memory buffers in system memory through the HSL hub chip, the RIO-G to PCI-X bridge chip, and a MAB to the IOP and IOA; the I/O request (header and data) is staged in IOP and IOA memory, a DDL describes where the data resides in system memory, and the data is sent over the SAN to the DS]
Figure 5-7 Data flow through IOP and IOA

With IOP-less Fibre Channel, architectural changes pack the eight headers for a 4 KB page
into just one DMA request when moving them to or from main memory, which reduces the
latency of disk I/O operations and puts less burden on the PCI-X bus.

You need to size the number of FC adapters carefully for the throughput capability of an
adapter. Here, you must also take into account the capability of the IOP and the PCI
connection between the adapter and IOP.

We performed several measurements in the testing for this book, from which we can size the
capability of an adapter in terms of the maximum I/O per second at different block sizes or the
maximum MBps. Table 5-2 shows the results of measuring the maximum I/O per second for
different System i Fibre Channel adapters and the I/O capability at 70% utilization, which is
relevant for sizing the number of required System i FC adapters for a known transactional
I/O workload.

Table 5-2 Maximum I/O per second per Fibre Channel IOA

IOP/IOA                 Maximum I/O per second per port   I/O per second per port at 70% utilization
IOP-less 5749 or 5774   15000                             10500
2844 IOP / 5760 IOA     3900                              3200
2844 IOP / 2787 IOA     3650                              2555

Table 5-3 shows the maximum throughput for System i Fibre Channel adapters, based on
measurements of large 256 KB block sequential transfers and of a typical transaction workload
with rather small 14 KB block transfers.

Table 5-3 Maximum adapter throughput

FC adapter              Maximum sequential throughput per port   Maximum transaction workload throughput per port
IOP-less 5749 or 5774   310 MBps                                 250 MBps
2844 IOP / 5760 IOA     140 MBps                                 45 - 54 MBps
2844 IOP / 2787 IOA     92 MBps                                  34 - 54 MBps

When using IOP-based FC adapters, there is another reason why the number of FC adapters
is important for performance. With IOP-based FC adapters, only one I/O operation per path to
a LUN can be active at a time, so I/O requests can queue up in each LUN queue in the IOP,
resulting in undesired I/O wait time. SLIC storage management allows a maximum of six I/O
requests in an IOP queue per LUN and path. By using more FC adapters to add paths to a LUN,
the number of active I/O operations and the number of available IOP LUN I/O queues can be
increased.

Note: For IOP-based Fibre Channel, using more FC adapters for multipath to add more
paths to a LUN can help significantly reduce the disk I/O wait time.

With IOP-less Fibre Channel support, the limit of one active I/O per LUN per path has been
removed, and up to six active I/O operations per path and LUN are now supported. This
inherently provides six times better I/O concurrency than the previous IOP-based Fibre
Channel technology and makes multipath for IOP-less primarily a function for redundancy,
with less potential performance benefit than with IOP-based Fibre Channel technology.

When a System i customer plans for external storage, the customer usually decides first how
much disk capacity is needed and then asks how many FC adapters will be necessary to
handle the planned capacity. It is useful to have a rule of thumb to determine how much disk
capacity to plan per FC adapter. We calculate this by using the access density of an i5/OS
workload. The access density of a workload is the number of I/O per second per GB and
denotes how “dense” I/O operations are on available disk space.

To calculate the capacity per FC adapter, we take the maximum I/O per second that an
adapter can handle at 70% utilization (see Table 5-2). We divide this maximum number of I/O
per second by the access density to get the capacity per FC adapter. We also recommend that
LUN utilization does not exceed 40%, so we apply 40% to the calculated capacity.

Consider this example. An i5/OS workload has an access density of 1.4 I/O per second per
GB. Adapter 5760 with IOP 2844 is capable of a maximum of 3200 I/O per second at 70%
utilization. Therefore, it can handle a capacity of 2285 GB, that is:
3200 / 1.4 = 2285 GB

After applying 40% for LUN utilization, the sized capacity per adapter is 40% of 2285 GB,
which is:
2285 * 40% = 914 GB
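
The following minimal sketch turns this capacity-per-adapter rule of thumb into a small
calculation. It is our own illustration; the function and parameter names are assumptions, the
adapter I/O rate comes from Table 5-2, and the access density comes from the customer's
performance reports.

def capacity_per_adapter_gb(adapter_iops_at_70pct, access_density,
                            lun_utilization=0.40):
    # GB of capacity the adapter could drive at the workload's access density,
    # reduced to the recommended 40% LUN utilization
    raw_capacity = adapter_iops_at_70pct / access_density
    return raw_capacity * lun_utilization

# Example from the text: feature 5760 IOA on a 2844 IOP, access density 1.4 I/O per second per GB
print(round(capacity_per_adapter_gb(3200, 1.4)))   # prints 914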

In addition to properly sizing the number of FC adapters used for external storage attachment,
also follow the guidelines for placing IOPs and FC adapters (IOAs) in the System i platform
(see 4.2.7, “Planning considerations for performance” on page 111).



5.2.3 Size and allocation of logical volumes
A logical volume is seen by i5/OS as a disk drive, but in fact, it is composed of multiple data
stripes taken from one RAID rank in the IBM System Storage disk subsystem, as shown in
Figure 5-6 on page 121. From the DS perspective, the size of the LUN does not affect its
performance.

With IOP-based Fibre Channel, LUN size considerations are very important from a System i
perspective because of the limit of one active I/O per path per LUN. (We discuss this limitation
in 5.2.2, “Number of Fibre Channel adapters” on page 123 and mention multipath for
IOP-based Fibre Channel as a solution that can reduce the wait time, because each additional
path to a LUN enables one more active I/O to this LUN.) For the same reason of increasing the
number of active System i I/O operations with IOP-based Fibre Channel, we recommend using
more, smaller LUNs rather than fewer, larger LUNs.

Note: As a rule of thumb for IOP-based Fibre Channel, we recommend choosing the LUN
size so that at least two LUNs are defined per DDM capacity.

With 73 GB DDM capacity in the DS system, a customer can therefore define 35.1 GB LUNs.
For even better performance, 17.54 GB LUNs can be considered.

IOP-less Fibre Channel supports up to six active I/O operations for each path to a LUN, so
compared to IOP-based Fibre Channel there is no longer a stringent requirement to use small
LUN sizes for better performance.

Note: With IOP-less Fibre Channel, we generally recommend using a LUN size of
70.56 GB, that is, protected and unprotected volume model A04/A84, when configuring
LUNs on external storage.

Currently, the only exception for which we recommend using LUN sizes larger than 70.56 GB
is when the customer anticipates low capacity usage within the System i auxiliary storage
pool (ASP). For low ASP capacity usage, using larger LUNs can provide better performance
by reducing the data fragmentation on the disk subsystem’s RAID array, resulting in fewer disk
arm movements, as illustrated in Figure 5-8.

[Figure: a RAID array with low capacity usage and one "large" LUN keeps the data contiguous, so there is little movement of the disk arms; the same array with low capacity usage and several "regular" size LUNs (LUN 0 through LUN 3) spreads the data across the array, causing much movement of the disk arms]

Figure 5-8 RAID array data distribution

When allocating the LUNs for i5/OS, consider the following guidelines for better performance:
 Balance the activity between the two DS processors, referred to as cluster0 and cluster1,
as much as possible. Because each cluster has separate memory buses and cache, this
maximizes the use of those resources.
In the DS system, an extent pool has an affinity to either cluster0 or cluster1. We define it
by specifying a rank group for a particular extent pool, with rank group 0 served by cluster0
and rank group 1 served by cluster1. Therefore, define the same number of extent pools in
rank group 0 as in rank group 1 for the i5/OS workload, and allocate the LUNs evenly
among them.

Recommendation: We recommend that you define one extent pool from one rank to
keep better track of the LUNs and to ensure that the LUNs are spread evenly between the
two processors.

 Balance the activity of a critical application among the device adapters in the DS system.
When choosing extent pools (ranks) for a critical application, make sure that they are
served as evenly as possible by the device adapters.

In the DS system, we define a volume group, which is a group of LUNs assigned to one
System i FC adapter or to multiple FC adapters in a multipath configuration. Create a volume
group so that it contains LUNs from the same rank group, that is, do not mix even logical
subsystem (LSS) LUNs served by cluster0 and odd LSS LUNs served by cluster1 on the same
System i host adapter. This configuration helps optimize sequential read performance by
making the most efficient use of the available DS8000 RIO loop bandwidth.



5.2.4 Sharing ranks among multiple workloads
On the one hand, sharing a DS rank among multiple workloads can improve performance
when the workloads that share a rank do not use the disk arms at the same time. When one
workload is idle, the other can use the disk arms of the rank, and it appears to each workload
that it has all the disk arms of the rank available. By sharing multiple ranks among workloads,
we provide each workload with more disk arms than if we dedicated fewer ranks to it.

On the other hand, a heavy workload might hold the disk arms of a rank and the cache almost
all of the time, so the other workload rarely gets a chance to use them. Alternatively, if two
heavy critical workloads share a rank, they can prevent each other from using the disk arms
and cache at the times when both are busy.

Therefore, we recommend that you dedicate ranks to a heavy, critical i5/OS workload such as
SAP® or banking applications. When the other workload does not exceed 10% of the
workload of your critical i5/OS application, consider sharing the ranks.

Consider sharing ranks among multiple i5/OS systems, or among i5/OS and open systems,
when the workloads are less important and not I/O intensive. For example, test and
development systems, mail, and so on can share ranks with other systems.

5.2.5 Connecting using switches


When connecting System i FC adapters through a SAN switch to a storage subsystem, refer to
4.2.5, “Planning for SAN connectivity” on page 92 for information about how to zone the SAN
switches. Implementing proper SAN switch zoning is crucial to help prevent performance
degradation caused by potential link congestion problems. Still, the question usually arises
of how many System i FC adapters to attach to one DS port to ensure good
performance.

With DS8000, we recommend that you size up to four 2 Gb System i FC adapters per 2 Gb DS
port. With DS6000, consider sizing two System i FC adapters per DS port. Figure 5-9 shows
an example of SAN switch zoning for four System i FC adapters accessing one DS8000 host
port.

[Figure: an i5/OS partition in System i with eight 2 Gb IOAs connected through a SAN switch to a DS8000; zone 1 and zone 2 each contain four IOAs and one DS8000 host port]
Figure 5-9 Connecting a System i environment to a DS system using SAN switches

Consider the following guidelines for connecting System i 4 Gb FC IOAs to 4 Gb adapters in
the DS8000:
 Connect one 4 Gb IOA port to one port on DS8000, provided that all four ports of the
DS8000 adapter card are used.
 Connect two 4 Gb IOA ports to one port in DS8000, provided that only two ports of the
DS8000 adapter card are used.

5.2.6 Sizing for multipath


Multipath enables up to eight different paths to a set of LUNs. To ensure redundancy, separate
System i FC adapters and usually separate physical connections to the DS are used for each
path. For IOP-based Fibre Channel IOAs, multipath provides not only high availability in case
one path fails but also better performance than a single-path connection, due to more active
I/O operations and higher I/O throughput. So how much better does a set of LUNs in an
IOP-based multipath configuration perform compared to a single path?

Figure 5-10 shows the disk response time measurements of the same database workload
running with a single path and with dual paths at different I/O rates. The blue line represents
the single path, and the yellow line represents the dual path.

[Figure: chart of response time (ms) versus throughput (I/O per second) for a read/write 50/50, 32 K records, 100% read hit workload, comparing i5 single-path and i5 dual-path]

Figure 5-10 Single path versus dual path performance

The response time with a single path starts to increase drastically at about 1200 I/O per
second. With two paths, it starts to increase at about 1800 I/O per second. From this, we can
derive a rough rule of thumb that, for IOP-based Fibre Channel, multipath with two paths is
capable of 50% more I/O than a single path and provides a significantly shorter wait time.
Disk response time consists of service time and wait time; multipath improves only the wait
time and does not influence the service time. With IOP-less Fibre Channel, which allows six
times as many active I/O operations as IOP-based Fibre Channel, the performance
improvement from multipath is of minor importance, and multipath is used primarily for
redundancy.



The sizing tool Disk Magic takes the performance improvement due to multipath into account
and is planned to be updated for modelling System i IOP-less Fibre Channel performance.

For more information about how to plan for multipath, refer to 4.2.2, “Planning considerations
for i5/OS multipath Fibre Channel attachment” on page 81.

5.2.7 Sizing for applications in an IASP


To implement a high availability or disaster recovery solution for an application using
independent auxiliary storage pools (IASPs) or using IASPs for other purposes such as
server consolidation, we recommend that you size external storage for IASP and *SYSBAS
separately.

The i5/OS performance reports—Resource report - Disk utilization and System report - Disk
utilization—show the average number of I/O per second for both the IASP and *SYSBAS. To see
how many I/O per second actually go to an IASP, we recommend that you look at the System
report - Resource utilization. This report shows the database reads per second and writes
per second for each application job, as shown in Figure 5-11.

Figure 5-11 Database reads and writes

Add the database reads per second (synchronous DBR and asynchronous DBR) and the
database writes per second (synchronous DBW and asynchronous DBW) of all application jobs
in the IASP. This gives you the reads per second and writes per second of the IASP. Calculate
the number of reads per second and writes per second for *SYSBAS by subtracting the reads
per second of the IASP from the overall reads per second and the writes per second of the
IASP from the overall writes per second, as illustrated in the sketch that follows.
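
The following minimal sketch makes this arithmetic concrete. It is our own illustration; the job
names and rates are hypothetical and would come from the System report - Resource
utilization and Disk utilization sections.

# Hypothetical per-job rates (synchronous + asynchronous DBR, synchronous + asynchronous DBW)
iasp_jobs = {
    "APPJOB01": (120.0, 85.0),    # (reads/sec, writes/sec)
    "APPJOB02": (95.5, 60.2),
}

overall_reads, overall_writes = 450.0, 300.0    # from the Disk utilization reports

iasp_reads = sum(reads for reads, _ in iasp_jobs.values())
iasp_writes = sum(writes for _, writes in iasp_jobs.values())

sysbas_reads = overall_reads - iasp_reads
sysbas_writes = overall_writes - iasp_writes

print(f"IASP:    {iasp_reads:.1f} reads/sec, {iasp_writes:.1f} writes/sec")
print(f"*SYSBAS: {sysbas_reads:.1f} reads/sec, {sysbas_writes:.1f} writes/sec")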

To allocate LUNs for the IASP and *SYSBAS, we recommend that you first create the LUNs for
the IASP and spread them across the available ranks in the DS system. From the free space
left on each rank, define (smaller) LUNs to use as *SYSBAS disk units. The reasoning for this
approach is that the first LUNs created on a RAID rank are created on the outer cylinders of
the disk drives, which provide a higher data rate than the inner cylinders.

5.2.8 Sizing for space efficient FlashCopy


Properly sizing the storage space for the repository volume, which provides the physical storage
capacity for the space efficient volumes within the same extent pool, is very important. Proper
sizing prevents you from running out of physical storage space, which would cause the space
efficient FlashCopy relationship to fail, and from performance issues due to an undersized
number of disk arms for the repository volume.

The following sizing approach can help you prevent this undesired situation:
1. Use i5/OS Performance Tools to collect a resource report for disk utilization from the
production system, which accesses the FlashCopy source volumes, and the backup
system, which accesses the FlashCopy target volumes (see 5.4.3, “i5/OS Performance
Tools” on page 137).
2. Determine the amount of write I/O activity from the production and backup system for the
expected duration of the FlashCopy relationship, that is the duration of the system save to
tape.
3. Assuming that one track (64 KB) is moved to the repository for each write I/O and that 33% of
all writes are re-writes to the same track, calculate the recommended repository capacity
with a 50% contingency as follows:
Recommended repository capacity [GB] = write IO/s x 67% x FlashCopy active time
[s] x 64 KB/IO / (1048576 KB/GB) x 150%
For example, let us assume an i5/OS partition with a total disk space of 1.125 TB, a
system save duration of 3 hours, and a given System i workload of 300 write I/O per
second.
The recommended repository size is then as follows:
300 IO/s x 67% x 10800 s x 64 KB/IO / (1048576 KB/GB) x 150% = 199 GB
So, the repository capacity needs to be about 18% of the virtual capacity of 1.125 TB for the
copy of the production system space.

Sizing the repository number of disk arms


Because the workload to the shared repository volume has random I/O character, it is also
important to provide enough physical disk arms for the repository volume space to ensure
adequate performance.

To calculate the recommended number of physical disk arms for the repository volume space
depending on your write I/O workload in tracks per second (at 50% disk utilization), refer to
Table 5-4.

Table 5-4 Recommended number of physical arms

Disk configuration   Tracks per second per disk arm
RAID5 15K RPM        25
RAID5 10K RPM        18
RAID10 15K RPM       50
RAID10 10K RPM       36

For example, if you are using RAID5 with 15K RPM drives and your production host peak write
I/O throughput during the active time of the space efficient FlashCopy relationship is 600 write
I/O per second, then 600 I/O per second x 67% (accounting for 33% re-writes) corresponds to
402 tracks per second, resulting in the following recommended number of disk arms:
402 tracks per second / 25 tracks per second per disk arm = 16 disk arms of 15K RPM
disks with RAID5
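
The following minimal sketch combines the repository capacity formula and the disk arm
guideline above. It is our own illustration; the function names, dictionary layout, and defaults
are assumptions, and the tracks-per-arm values are taken from Table 5-4.

TRACK_KB = 64
KB_PER_GB = 1048576
TRACKS_PER_ARM = {                      # from Table 5-4, at 50% disk utilization
    ("RAID5", "15K"): 25, ("RAID5", "10K"): 18,
    ("RAID10", "15K"): 50, ("RAID10", "10K"): 36,
}

def repository_capacity_gb(write_iops, active_time_s,
                           rewrite_share=0.33, contingency=1.5):
    # 67% of the writes move a new 64 KB track to the repository, plus a 50% contingency
    return (write_iops * (1 - rewrite_share) * active_time_s
            * TRACK_KB / KB_PER_GB * contingency)

def repository_disk_arms(write_iops, raid="RAID5", rpm="15K", rewrite_share=0.33):
    tracks_per_sec = write_iops * (1 - rewrite_share)
    return tracks_per_sec / TRACKS_PER_ARM[(raid, rpm)]

print(round(repository_capacity_gb(300, 3 * 3600)))   # about 199 GB
print(round(repository_disk_arms(600)))               # about 16 disk arms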



5.3 Sizing tools
Several tools are available for sizing and performance measurements of the System i5
platform with external storage. In this section, we present the most important tools. Some of
these tools are System i sizing tools such as the Workload Estimator. Other tools, such as
Disk Magic, are IBM System Storage DS or Enterprise Storage Server (ESS) performance
tools.

5.3.1 Disk Magic


Disk Magic is a tool for sizing and modeling disk systems for various servers. You can use it to
model IBM and other disk systems attached to IBM System i, System p, System z, System x,
and other servers. Disk Magic is developed by IntelliMagic and is available for download as
follows:
 For IBM employees:
http://w3-1.ibm.com/sales/systems/portal/_s.155/254?navID=f220s380&geoID=All&pr
odID=Disk&docID=SSD5D00689DF4
 For business partners: Sign on to the IBM PartnerWorld® Web site and search for Disk
Magic:
http://www-1.ibm.com/partnerworld/pwhome.nsf/weblook/index_emea_en.html

To use Disk Magic for sizing the System i platform with the DS system, you need the following
i5/OS Performance Tools reports:
 Resource report: Disk utilization section
 System report: Disk utilization section
 Optional: System report: Storage utilization section
 Component report: Disk Activity section

For instructions on how to use Disk Magic to size System i with a DS system, refer to 5.5,
“Sizing examples with Disk Magic” on page 139, which presents several examples of using
Disk Magic for the System i platform. The Disk Magic Web site also provides a Disk Magic
Learning Guide that you can download, which contains a few step-by-step examples for using
Disk Magic for modelling external storage performance.

5.3.2 IBM Systems Workload Estimator for i5/OS


IBM Systems Workload Estimator (WLE) is a tool that provides sizing recommendations for
System i or iSeries models that are running one or more workloads. WLE recommends
model, memory, and disk requirements that are necessary to meet reasonable performance
expectations, based on inserted existing workloads or planned workloads.

To use WLE, you select one or more workloads from an existing selection list and answer a
series of questions about each workload. Based on the answers, WLE generates a
recommendation and shows the predicted processor utilization.

WLE also provides the capability to model external storage for recommended System i
hardware. When the recommended System i models are shown in WLE, you can choose to
directly invoke Disk Magic and model external storage for this workload. Therefore, you can
obtain both recommendations for System i hardware and recommendations for external
storage in the same run of WLE combined with Disk Magic.

For an example of how to use WLE with Disk Magic, see 5.5.4, “Using IBM Systems
Workload Estimator connection to Disk Magic: Modeling DS6000 and System i for an existing
workload” on page 189.

5.3.3 IBM System Storage Productivity Center for Disk


IBM System Storage Productivity Center is an integrated set of software components that
provides end-to-end storage management, from the host and application to the target storage
device in a heterogeneous platform environment. This software offering provides disk and
tape library configuration and management, performance management, SAN fabric
management and configuration, and host-centered usage reporting and monitoring from the
perspective of the database application or file system.

IBM System Storage Productivity Center is composed of the following elements:


 A data component: IBM System Storage Productivity Center for Data
 A fabric component: IBM System Storage Productivity Center for Fabric
 A disk component: IBM System Storage Productivity Center for Disk
 A replication component: IBM System Storage Productivity Center for Replication

IBM System Storage Productivity Center for Disk enables the device configuration and
management of SAN-attached devices from a single console. In addition, it includes
performance capabilities to monitor and manage the performance of the disks.

The functions of System Storage Productivity Center for Disk performance include:
 Collect and store performance data and provide alerts
 Provide graphical performance reports
 Help optimize storage allocation
 Provide volume contention analysis

When using System Storage Productivity Center for Disk to monitor a System i workload on
DS8000 or DS6000, we recommend that you inspect the following information:
 Read I/O Rate (sequential)
 Read I/O Rate (overall)
 Write I/O Rate (normal)
 Read Cache Hit Percentage (overall)
 Write Response Time
 Overall Response Time
 Read Transfer Size
 Write Transfer Size
 Cache to Disk Transfer Rate
 Write-cache Delay Percentage
 Write-cache Delay I/O (I/O delayed due to NVS overflow)
 Backend Read Response Time
 Port Send Data Rate
 Port Receive Data Rate
 Total Port Data Rate (should be balanced among ports)
 Port Receive Response Time
 I/O per rank
 Response time per rank
 Response time per volumes



Figure 5-12 shows the read and write rate graph from System Storage Productivity Center.

Figure 5-12 Read and write rate

Figure 5-13 shows the read cache hit percentage graph from System Storage Productivity
Center.

Figure 5-13 Read cache hit percentage



Figure 5-14 shows the write cache delay percentage graph from System Storage Productivity
Center.

Figure 5-14 Write cache delay percentage

5.4 Gathering information for sizing


In this section, we discuss the methods and techniques for acquiring data for sizing the
storage solution.

5.4.1 Typical workloads in i5/OS


To correctly size the DS system for the System i platform, it is important to know the
characteristics of the workload that will use the DS disk space. Many System i customer
applications tend to follow the same patterns as the System i benchmark commercial
processing workload (CPW). These applications typically have many jobs that run brief
transactions with database operations.

Other applications tend to follow the same patterns as the System i benchmark compute
intensive workload (CIW). These applications typically have fewer jobs running transactions
that spend a substantial amount of time in the application itself. An example of such a
workload is Lotus® Domino Mail and Calendar.

In general, System i batch workloads can be I/O or compute intensive. For I/O intensive batch
applications, the overall batch performance is dependent on the speed of the disk subsystem.

For compute-intensive batch jobs, the run time likely depends on the processor power of the
System i platform. For many customers, batch workloads run with large block sizes.

Batch jobs typically run during the night. For some environments, it is important that these
jobs finish on time to enable a timely start of the daily transaction application. The period of
time available for the batch jobs to run is called the batch window.

5.4.2 Identifying peak periods


To size a DS system for an i5/OS system, we recommend that you identify one or two peak
periods, each of them lasting one hour, and collect performance data during these periods.
For instructions how to collect performance data and produce reports, refer to 5.4.3, “i5/OS
Performance Tools” on page 137.

In many cases, you know when the peak periods or the most critical periods occur. If you
know when these times are, collect performance data during these periods. In some cases,
you might not know when the peak periods occur. In such a case, we recommend that you
collect performance data during a 24-hour period and in different time periods, for example,
during end-of-week and end-of-month jobs.

After the data is collected, produce a Resource report with a disk utilization section and use
the following guidelines to identify the peak periods:
 Look for the one hour with the most I/O per second. You can insert the report into a
spreadsheet, calculate the hourly average of I/O per second, and look for the maximum
hourly average. Figure 5-15 shows part of such a spreadsheet, and the sketch after
Figure 5-15 illustrates the calculation.
 For many customers, performance data shows patterns in block sizes, with significantly
different block sizes in different periods of time. If this is so, calculate the hourly average of
the block sizes and use the hour with the maximal block sizes as the second peak.
 If you identified two peak periods, size the DS system so that both are accommodated.

Figure 5-15 Identifying the peak period for the System Storage Productivity Center
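
As a minimal illustration of the first guideline (our own sketch; the interval values are
hypothetical), the following code averages the I/O per second for each hour and picks the hour
with the highest average:

# Hypothetical (hour, I/O per second) samples from a Resource report - Disk utilization
# section collected at 15-minute intervals
samples = [
    (9, 3200), (9, 3550), (9, 3400), (9, 3700),
    (10, 4800), (10, 5100), (10, 4950), (10, 5200),
    (11, 4100), (11, 3900), (11, 4000), (11, 4200),
]

hourly = {}
for hour, iops in samples:
    hourly.setdefault(hour, []).append(iops)

averages = {hour: sum(values) / len(values) for hour, values in hourly.items()}
peak_hour = max(averages, key=averages.get)
print(f"Peak hour: {peak_hour}:00 with an average of {averages[peak_hour]:.0f} I/O per second")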

5.4.3 i5/OS Performance Tools


To use the sizing rules of thumb and Disk Magic, you need the following performance reports
from i5/OS:
 System Report: Disk Utilization and Storage Pool utilization sections
 Component Report: Disk Activity section
 Resource Report: Disk Utilization section



To produce the System i5 performance reports that are needed for sizing the DS system:
1. Install the licensed program Performance Tools 5722-PT1 on i5/OS.
2. On the i5/OS command line, enter the GO PERFORM command.
3. In the IBM Performance Tools for i5/OS panel that opens, select 2. Collect Performance
Data as shown in Figure 5-16.

PERFORM IBM Performance Tools for i5/OS


System: RCHLTTN1
Select one of the following:

1. Select type of status


2. Collect performance data
3. Print performance report

5. Performance utilities
6. Configure and manage tools
7. Display performance data
8. System activity
9. Performance graphics
10. Advisor

70. Related commands

Selection or command
===> 2

F3=Exit F4=Prompt F9=Retrieve F12=Cancel F13=Information Assistant


F16=System main menu
Figure 5-16 PERFORM menu panel

4. On the Collect Performance Data panel, select 1. Start Collecting Data.


5. On the Start Collecting Data panel, specify the collection interval as 15 minutes or 5
minutes, and press Enter. i5/OS starts collecting the performance data.
6. After a period of time, on the Collect Performance Data panel, select 2. Stop collecting
data.
7. On the IBM Performance Tools for iSeries panel, select 3. Print performance Report.
8. On the Print Performance Report - Sample Data panel, make sure that the listed library at
the field Library is the one to which you collected data. You might need to change the
name of the library. For the member, select 1. System report, and press Enter.
9. On the Select Section for Report panel, select Disk Utilization and Storage Pool
Utilization, and press Enter.
10.On the Select Categories for Report panel, select Time Interval.
11.On the next panel, you can select the intervals for the report. If you collected data for 24
hours and then identified a peak hour, select only the intervals of this particular hour.
Select the intervals and press Enter. This job starts and produces report in a spooled file.
12.On Print Performance Report - Sample Data panel, for member, select 2. Component
report.

13.On the Select Section for Report panel, select Disk Activity, and then select Time
Interval. Then select all intervals or just the intervals of the peak period. Press Enter to
start the job for report.
14.On the Print Performance report - Sample Data panel, for member, select 5. Resource
report.
15.On the Select Section for Report panel, select Disk Utilization and then select Time
Interval. Then select all intervals or just the intervals of the peak period. Press Enter to
start the job for the report.
16.To insert the reports into Disk Magic, transfer the reports from the spooled file to a PC
using iSeries Navigator.
17.In iSeries Navigator, expand the i5/OS system on which the reports are located. Expand
Basic Operations and double-click Printer output.
18.Performance reports in the spooled file are shown on the right side of the panel. Copy and
paste the necessary reports to your PC.

5.5 Sizing examples with Disk Magic


In this section, we describe three examples of using Disk Magic to size the DS system for the
System i platform.

5.5.1 Sizing the System i5 with DS8000 for a customer with iSeries model 8xx
and internal disks
In this example, DS8000 is sized for a customer’s production workload. The customer is
currently running a host workload on internal disks; performance reports from a peak period
are available. For instruction on how to produce performance reports, refer to 5.4.3, “i5/OS
Performance Tools” on page 137.



To size DS8000 using Disk Magic:
1. On the Welcome to Disk Magic panel (Figure 5-17), select Open and iSeries Automated
Input (*.IOSTAT, *.TXT, *.CSV) and click OK.

Figure 5-17 Disk Magic Welcome dialog box

2. In the Open window (Figure 5-18), choose the directory that contains the performance
reports, select the corresponding system, resource, and component report files together,
and click Open.
You can also concatenate all the necessary iSeries performance reports into one file and
insert it into Disk Magic. In this example, both the System report - Storage pool utilization and
the System report - Disk utilization are concatenated into one System report file.

Figure 5-18 Inserting PT reports to Disk Magic



3. Disk Magic shows an overview of the imported performance files in the Multiple File
Open - File Overview window, as shown in Figure 5-19. By default, Disk Magic accounts for
different ASPs by treating them as separate I/O workloads, which allows you to model the
external storage performance at the ASP level.

Figure 5-19 Disk Magic - File Overview dialog box

If you want to model your external storage solution with a system I/O workload aggregated
from all ASPs, or if you want to continue using potentially configured i5/OS mirroring with
external storage:
a. Click Edit Properties.
b. Click Discern ASP level.
c. Select Keep mirroring, if applicable.
d. Click OK as shown in Figure 5-20.
Otherwise, click Process All Files (in Figure 5-19) to continue.

Figure 5-20 Disk Magic - Server processing options

While inserting reports, Disk Magic might show a warning message about inconsistent
interval start and stop times (see Figure 5-21).

Figure 5-21 Inconsistent start/stop times message

One cause of inconsistent start and stop times might be that the customer gives you
performance reports for 24 hours and you select a one-hour peak period from them. Then
the customer produces the reports again and selects only the intervals of the peak period from
the collected data. In such reports, the start and stop time of the collection does not match
the start and stop time of the produced reports. The reports are correct, and you can ignore
this warning. However, there can be other instances where inconsistent reports are
inserted by mistake, so we recommend that you resolve this issue by getting a set of
consistent reports.
4. After the performance report files are processed successfully, Disk Magic shows the I/O Load
Summary, as shown in Figure 5-22. Click Create Model to proceed with the external
storage performance modeling.

Figure 5-22 Disk Magic - Successfully imported performance reports

5. In the TreeView panel in Disk Magic, observe the following two icons (Figure 5-23):
– Example1 denotes a workload.
– iSeries1 denotes a disk subsystem for this workload.
Double-click iSeries1.

Figure 5-23 Selecting the disk subsystem

6. The Disk Subsystem - iSeries1 panel displays, which contains data about the current
workload on the internal disks. The General tab shows the current type of disks
(Figure 5-24).

Figure 5-24 Disk Subsystem window: General tab



The iSeries Disk tab on the Disk Subsystem - iSeries1 window shows the current capacity
and number of disk devices (Figure 5-25).

Figure 5-25 Disk subsystem window: iSeries Disk tab

The iSeries Workload tab on the same panel (Figure 5-26) shows the characteristics of
the iSeries workload. These include reads per sec, writes per sec, block size, and reported
current disk service time and wait time.
a. Click the Cache Statistics button.

Figure 5-26 Disk Subsystem: iSeries Workload tab

b. You can observe the current percentage of cache read hits and write efficiency as
shown in Figure 5-27. Click OK to return to the iSeries Workload tab.

Figure 5-27 Cache statistics of workload on internal disk

c. Click Base to save the current disk subsystem as a base for Disk Magic modeling.



d. Disk Magic informs you that the base is created successfully, as shown in Figure 5-28.
Click OK to save the base.

Figure 5-28 Saving the base

7. Insert the planned DS configuration in the disk subsystem model by inserting the relevant
values on each tab, as shown in the next steps. In this example, we insert the following
planned configuration:
– DS8100 with 32 GB cache
– 12 FC adapters in System i5 in multipath, two paths for each set of LUNs
– Six FC ports in DS8100
– Eight ranks of 73 GB DDMs used for the System i5 workload
– 182 LUNs of size 17.54 GB
To insert the planned DS configuration information:
a. On the General tab in the Disk Subsystem - iSeries 1 window, choose the type of
planned DS for Hardware Type (Figure 5-29).

Figure 5-29 Inserting a planned type of DS

Notice that the General tab interface changes as shown in Figure 5-30. If you use
multipath, select Multipath with iSeries. In our example, we use multipath, so we
select this box. Notice that the Interfaces tab is added as soon as you select DS8000
as a disk subsystem.

Figure 5-30 Disk Magic: Selecting the hardware and specifying multipath

b. Click the Hardware Details button. In the Hardware Details window (Figure 5-31), for
System Memory, choose the planned amount of cache, and for Fibre Host Adapters,
enter the planned number of host adapters, and click OK.

Figure 5-31 Disk Magic: Specifying hardware details of DS



c. Next, in the Disk Subsystem - iSeries1 window, select the Interfaces tab, as shown in
Figure 5-32. On the Interfaces tab, under the From Disk Subsystem tab, click the Edit
button.

Figure 5-32 Specifying the DS host ports: Interfaces tab

d. In the Edit Interfaces for Disk Subsystem window (Figure 5-33), for Count, enter the
planned number of DS ports, and click OK.

Figure 5-33 Inserting the DS host ports: Edit Interfaces for Disk Subsystem

e. Back on the Interfaces tab (Figure 5-32), select the From Servers tab, and click Edit. In
the Edit Interfaces window (Figure 5-34), enter the number of planned System i5 FC
adapters. Click OK.

Figure 5-34 Inserting the System i5 FC adapters

f. Next, in the Disk Subsystem - iSeries1 window, select the iSeries Disk tab, as shown in
Figure 5-35. Notice that Disk Magic uses the reported capacity on internal disks as the
default capacity on DS. Click Edit.

Figure 5-35 iSeries disk



g. In the Edit a Disk Type window (Figure 5-36), enter the desired capacity to achieve
modeling of the planned number of ranks.
In our example, we enter the capacity of planned eight ranks. Each RAID-5 rank with a
spare disk (6+P+S rank) has 415 GB of effective capacity, and a RAID-5 rank without a
spare disk (7+P ranks) has 483 GB of effective capacity. For Disk Magic modeling, we
assume that only 6+P ranks are used, so we plan for 8 x 415 GB = 3320 GB capacity.
Refer to 4.2.6, “Planning for capacity” on page 93 for more information about available
capacity.
The actual capacity used by i5/OS is specified in the Workload window. That capacity
might be lower than the capacity specified in this panel, because the fixed LUN sizes for
i5/OS mean that you cannot allocate all of the available capacity to i5/OS. Refer to
Chapter 4, “i5/OS planning for external storage” on page 75 for more information about
LUN sizes.
Observe that 73 GB DDMs and RAID-5 protection are the default values in this panel.
Notice also that a default extent pool for iSeries workload is created in Disk Magic.

Figure 5-36 Inserting the capacity for the planned number of ranks

After you insert the capacity for the planned number of ranks, the iSeries Disk tab
shows the correct number of planned ranks (see Figure 5-37).

Figure 5-37 Planned number of ranks



h. Finally, select the iSeries Workload tab. Specify the planned number of LUNs and the
usable capacity for i5/OS.
In our example, we use 182 x 17.54 GB LUNs, so the usable capacity for i5/OS is
3192 GB (see Figure 5-38). We recommend that you create one extent pool from one
DS rank. Nevertheless, in Disk Magic you can model one extent pool that contains all
planned ranks, because the modeled values do not depend on the way in which extent
pools are specified in Disk Magic. In our example, we use only the extent pool created
by Disk Magic as the default.
Click Cache Statistics.

Figure 5-38 Planned number of LUNs

i. In the Cache Statistics for Host window (see Figure 5-39), notice that Disk Magic
models cache usage on DS8000 automatically based on the reported current cache
usage on internal disks. Click OK.

Figure 5-39 Automatic cache modeling

8. After you enter the planned values of the DS configuration, in the Disk Subsystem -
iSeries1 panel (Figure 5-38), click Solve.
9. A Disk Magic message is displayed, indicating that the model of the planned scenario was
solved successfully (Figure 5-40). Click OK to accept the solved model of the iSeries or
i5/OS workload on the DS.

Figure 5-40 Solving the model of planned scenario



10.After you solve the model of the planned scenario, on the iSeries Workload tab
(Figure 5-41), notice the modeled disk service time and wait time. Click Utilizations.

Figure 5-41 Modeled disk service time and wait time

11.In the Utilizations IBM DS8100 window (Figure 5-42), observe the modeled utilization of
physical disk drives or hard disk drives (HDDs), DS device adapters, LUNs, FC ports in
DS, and so on.
In our example, none of the utilization values exceeds the recommended maximum.
However, the HDD utilization of 32% approaches the recommended threshold of 40%.
Thus, consider additional ranks if you intend to grow the workload. Click OK.

Figure 5-42 Modeled utilizations

12.On the iSeries Workload tab (Figure 5-41), click Cache Statistics. In the Cache Statistics for
Host window (Figure 5-43), notice the modeled cache values on the DS. In our example, the
modeled read cache percentage is higher than the current read cache percentage with
internal disks, while the modeled write cache efficiency on the DS is about the same as the
current, already high, write cache percentage. Notice also that the modeled disk seek
percentage dropped to almost half of the reported seek percentage on internal disks.

Figure 5-43 Modeled cache hits



You can also see modeled utilizations, disk service, wait times, and cache percentages in
the Disk Magic log, as shown in Figure 5-44.

Cache Size / Backstore Sensitivity 6.0

Advanced DS6000/DS8000 Outputs:

Processor Utilization:             13.0%
Highest HDD Utilization:           32.3%
Back End Interface Utilization:    20.0%
Internal Bus Utilization:           3.4%
Avg. Host Adapter Utilization:      2.5%
Avg. Host Interface Utilization:   17.5%

Extent Pool      Type        HDD        RAID     Devices   GBytes   Log.Type
Pool_Example1    FBiSeries   73GB/15k   RAID 5   182       3320.0   LUN

Extent Pool      I/O Rate   IOSQ Time   Pend   Conn   Disc   Resp   Highest HDD Util
Pool_Example1    4926.0     0.0         ---    ---    ---    3.8    32.3%

iSeries Server   I/O Rate   Transfer Size (KB)   Serv Time   Wait Time   Read Perc   Read Hit%   Write Hit%   Write Eff%   LUN Cnt   LUN Util%
Average          4926       9.0                  3.8         0.0         60          41          100          74           182       10
Example1         4926       9.0                  3.8         0.0         60          41          100          74           182       10
Figure 5-44 Modeled values in the Disk Magic log

13.You can use Disk Magic to model the critical values for the planned growth of a customer’s
workload, that is, to predict the point at which the current DS configuration no longer meets
the performance requirements and the customer must consider additional ranks,
FC adapters, and so on. To model the DS for growth of the workload:
a. In the Disk Subsystem - iSeries1 window, click Graph. In the Graph Options window
(Figure 5-45), select the following options:
• For Graph Data, choose Response Time in ms.
• For Graph Type, select Line.
• For Range Type, select I/O Rate.
Observe that the values for the I/O rate range are already filled in with default values,
starting from the current I/O rate. In our example, we predict growth to three times the
current I/O rate, increasing by 1000 I/O per second at a time. Therefore, we enter 14800
in the To field and 1000 in the By field.
b. Click Plot.

Figure 5-45 Graph options for disk response time



A spreadsheet is created that contains a graph with the predicted disk response time
(service time + wait time) as the I/O rate grows. Figure 5-46 shows the graph for our
example. Notice that at about 9000 I/O per second, the predicted response time exceeds
5 ms, which we consider the upper limit for good response time. At about 12000 I/O per
second, the disk response time goes over 7 ms and starts to increase drastically. The
customer can increase the I/O rate to about 9000 I/O per second with a disk response time
that is still acceptable. If the customer increases the I/O rate even more, the disk response
time increases accordingly, and at about 12000 I/O per second the current DS configuration
is saturated.

[Figure: Disk Magic graph of response time in ms versus total I/O rate (I/O per second), from 4926 to 13926 I/O per second, for the modeled DS8100]
Figure 5-46 Disk response time at I/O growth

14.Next, produce the graph of HDD utilizations at workload growth.
a. In the Disk Subsystem - iSeries1 window, on the iSeries Workload tab, click Graph. In
the Graph Options window (Figure 5-47):
• For Graph Data, select Highest HDD Utilization (%).
• For Graph Type, select Line.
• For Range Type, select I/O Rate and select the appropriate range values. In our
example, we use the same I/O rate values as for disk response time.
b. Click Plot.

Figure 5-47 Graph options for HDD utilization



A spreadsheet is generated with the desired graph. Figure 5-48 shows the graph for our
example. Notice that the recommended 40% HDD utilization is exceeded at about 6000
I/O per second, and 70% is exceeded at about 11000 I/O per second, which confirms that
the current configuration is saturated at 11000 to 12000 I/O per second.

[Figure: Disk Magic graph of highest HDD utilization (%) versus total I/O rate (I/O per second), from 4926 to 13926 I/O per second, for the modeled DS8100]

Figure 5-48 HDD utilization at I/O rate growth

After installing the System i5 platform and DS8100, the customer initially used six ranks
and 10 FC adapters in multipath for the production workload. Because the System i5 model
replaced the iSeries model 825, the I/O characteristics of the production workload changed
due to the higher processor power and larger memory pool of the System i5 model. The
production workload produces 230 reads per second and 1523 writes per second, and the
actual service times and wait times do not exceed one millisecond.

5.5.2 Sharing DS8100 ranks between two i5/OS systems (partitions)
In this example, we use Disk Magic to model two i5/OS workloads that share the same extent
pool in DS8000. To model this scenario with Disk Magic:
1. Insert into Disk Magic reports of the first workload as described in 5.5.1, “Sizing the
System i5 with DS8000 for a customer with iSeries model 8xx and internal disks” on
page 139.
2. After reports of the first i5/OS system are inserted, add the reports for the other system. In
the Disk Magic TreeView panel, right-click the disk subsystem icon, and select Add
Reports as shown in Figure 5-49.

Figure 5-49 Adding reports from the other system

3. In the Open window (Figure 5-50), select the reports of another workload to insert, and
click Open.



Figure 5-50 Inserting reports from another i5/OS system

4. After the reports of the second system are inserted, observe that the models for both
workloads are present in TreeView panel as shown in Figure 5-51. Double-click the
iSeries disk subsystem.

Figure 5-51 Models for both systems

5. In the Disk Subsystem - iSeries1 window (Figure 5-52), select the iSeries Disk tab. Notice
the two subtabs on the iSeries Disk tab; each shows the current capacity of the internal
disks for one workload.
a. Click the Example2-1 tab, and observe the current capacity for the first i5/OS workload.

Figure 5-52 Current capacity of the first i5/OS system



b. Select the Example2-2 tab to see the workload of the second i5/OS system
(Figure 5-53).

Figure 5-53 Workload characteristics of each system

6. Select the iSeries Workload tab, and click Cache Statistics. The Cache Statistics for Host
window opens and shows the current cache usage. Figure 5-54 shows the cache usage of
the second i5/OS system. Click OK.

Figure 5-54 Current cache usage of workloads

7. In the Disk Subsystem - iSeries1 window, click Base to save the current configuration of
both i5/OS systems as a base for further modeling.

8. After the base is saved, model the external disk subsystem for both workloads:
a. In the Disk subsystem - iSeries1 window, select the General tab. For Hardware type,
select the desired disk system. In our example, we select DS8100 and Multipath with
iSeries, as shown in Figure 5-55.

Figure 5-55 Selecting the external disk subsystem

In our example, we plan the following configurations for each i5/OS workload:
• Workload Example2-1: 12 LUNs of size 17 GB and 2 System i5 FC adapters in
multipath
• Workload Example2-2: 22 LUNs of size 17 GB and 2 System i5 FC adapters in
multipath
The four System i5 FC adapters are connected to two DS host ports using switches.



b. To model the number of System i5 adapters, select the Interfaces tab, and then select
the From Servers tab. You see the current workloads with the four default interfaces
(see Figure 5-56). For each workload, highlight the workload, and click Edit.

Figure 5-56 Current interfaces

c. In the Edit Interfaces window (Figure 5-57), change the number of interfaces as
planned, and click OK.

Figure 5-57 Insert planned no of System i5 adapters

d. To model the number of DS host ports, select the Interfaces tab, and then select the
From Disk Subsystem tab. You see the interfaces from DS8100. Click Edit, and insert
the planned number of DS host ports. Click OK.

e. In the Disk Subsystem - iSeries1 window, select the iSeries Disk tab. Notice that Disk
Magic creates an extent pool for each i5/OS system automatically. Each extent pool
contains the same capacity that is reported for internal disks. See Figure 5-58.

Figure 5-58 Current capacity in the extent pools

In our example, we plan to share two ranks between the two i5/OS systems, so we do
not want a separate extent pool for each i5/OS system. Instead, we want one extent
pool for both systems.
f. On the iSeries Disk tab, click the Add button. In the Add a Disk Type window
(Figure 5-59), in the Capacity (GB) field, enter the needed capacity of the new extent
pool. For Extent Pool, select Add New.

Figure 5-59 Creating an extent pool to share between the two workloads



g. In the Specify Extent Pool name window (Figure 5-60), enter the name of the new
extent pool, and click OK.

Figure 5-60 Name of the new extent pool

h. The iSeries Disk tab shows the new extent pool along with the two previous extent pools (Figure 5-61). Select each of the two previous extent pools, and click Delete.

Figure 5-61 Deleting the previous extent pools

After you delete both of the previous extent pools, only the new extent pool named
Shared is shown on the iSeries Disk tab, as shown in Figure 5-62.

Figure 5-62 Only the Shared extent pool is available



i. In the Disk Subsystem - iSeries1 window, select the iSeries Workload tab
(Figure 5-63). Then, select the tab with the name of the first workload, which in this
case is Example2-1. Complete the following information:
• For Extent Pool, select the pool name Shared.
• In the LUN count field, enter the planned number of LUNs for the first i5/OS system.
• In the Used Capacity (GB) field, enter the usable capacity for i5/OS.

Figure 5-63 Inserting the values for first workload

j. Select the tab with the name of the second i5/OS workload, which in this case is
Example2-2 (Figure 5-64). Then, complete the following information:
• For Extent Pool, select the extent pool named Shared.
• For LUN count, enter the planned number of LUNs.
• For Used Capacity, enter the amount of usable capacity.

Figure 5-64 Inserting values for the second workload

k. In the Disk Subsystem - iSeries1 window, click Solve to solve the modeled DS
configuration.



l. Then, select the iSeries Workload tab. Click the tab with the name of the first workload,
which in this case is Example2-1. Notice the modeled disk service time and wait time,
as shown in Figure 5-65.

Figure 5-65 Modeled service time and wait time for the first workload

m. Click the tab with the name of second workload, which in this case is Example2-2.
Notice the modeled disk service time and wait time, as shown in Figure 5-66.
n. Select the Average tab, and then click Utilizations.

Figure 5-66 Modeled service time and wait time for the second workload



o. In the Utilizations IBM 8100 window, observe the modeled utilizations of DDMs
(HDDs), FC adapters, and average utilization of LUNs for both workloads. See
Figure 5-67.

Figure 5-67 Modeled utilizations
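The capacity values that are entered for the two workloads in steps i and j can be verified with a quick calculation. The following Python sketch is only an illustration of that arithmetic; it assumes that each planned "17 GB" LUN is the 17.54 GB i5/OS volume model and sums the usable capacity per workload and for the shared extent pool.

# Minimal sizing sketch for the shared extent pool example.
# Assumption: each planned "17 GB" LUN is the 17.54 GB i5/OS volume model.
LUN_SIZE_GB = 17.54

workloads = {
    "Example2-1": 12,   # planned LUN count for the first i5/OS system
    "Example2-2": 22,   # planned LUN count for the second i5/OS system
}

total_gb = 0.0
for name, lun_count in workloads.items():
    used_gb = lun_count * LUN_SIZE_GB
    total_gb += used_gb
    print(f"{name}: {lun_count} LUNs -> {used_gb:.1f} GB used capacity")

print(f"Shared extent pool must provide at least {total_gb:.1f} GB of usable capacity")

The per-workload values are what is entered in the Used Capacity (GB) fields, and the total is the minimum usable capacity that the shared extent pool (two ranks in our example) must provide.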

5.5.3 Modeling System i5 and DS8100 for a batch job currently running on Model 8xx and ESS 800
In this example, we describe the sizing of DS8100 for a batch job that currently runs on
iSeries Model 825 with ESS 800. The needed performance reports are available, except for
System report - Storage pool utilization, which is optional for modeling with Disk Magic.

To size a DS system for a workload that currently runs on ESS 800:


1. Insert the iSeries performance reports from the current workload into Disk Magic. For
instructions about how to insert performance reports into Disk Magic, see 5.5.1, “Sizing
the System i5 with DS8000 for a customer with iSeries model 8xx and internal disks” on
page 139.

2. After you insert the performance reports, Disk Magic creates one disk subsystem for the
I/O rate and capacity part of the iSeries workload that runs on ESS 800, and one disk
subsystem for the part of the workload that runs on internal disks, as shown in
Figure 5-68.

Figure 5-68 Disk Magic model for iSeries with external disk

3. Double-click iSeries1.
4. In the Disk Subsystem - iSeries1 window (Figure 5-69), select the iSeries Disk tab.

Figure 5-69 Subsystem for internal disks



As shown in Figure 5-70, notice the capacity that is used by the part of the workload on
internal disks. In our example, the customer has only four 6 GB internal drives. Disk Magic
does not take one of the drives into account because it considers it to be a mirrored load
source. Therefore, three of them are in this model.

Figure 5-70 Internal capacity

5. Select the iSeries Workload tab. Notice the I/O rate on the internal disks as shown in
Figure 5-71. In our example, a low I/O rate is used for the internal disks.

Figure 5-71 Workload on internal disks



6. In the Disk Subsystems - iSeries1 window, click Base to save the base for internal disks.
7. In the TreeView panel, double-click the ESS1 icon.
The Disk Subsystem - ESS1 window (Figure 5-72) opens. It shows the model for the part
of capacity and workload on ESS 800.

Figure 5-72 Workload on the ESS

8. Adjust the model for the currently used ESS 800 so that it reflects the correct number of ranks, size of DDMs, and FC adapters, as described in the steps that follow. In our example, the existing ESS 800 contains 8 GB cache, 12 ranks of 73 GB 15K rpm DDMs, and four FC adapters with feature number 2766, so we enter these values for disk subsystem ESS1. To adjust the model:
a. Select the General tab, and click Hardware Details.
b. The ESS Configuration Details window (Figure 5-73 on page 181) opens. Replace the
default values with the correct values for the existing ESS. In our example, we use four
FC adapters and 8 GB of cache, so we do not change the default values. Click OK.

Figure 5-73 Hardware details for the existing ESS 800

c. Select the Interfaces tab, and click the From Disk Subsystem subtab. Click Edit.
d. The Edit Interfaces for Disk Subsystem window (Figure 5-74) opens. Enter the correct
values for the current ESS 800. In our example, the customer uses four host ports from
ESS, so we do not change the default value of 4. However, we change the type of
adapters for Server side to Fibre 1 Gb to reflect the existing iSeries adapter 2766.
Click OK.

Figure 5-74 Insert Interfaces for Disk Subsystem

e. On the Interfaces tab, click the From Servers subtab and click Edit.
f. In the Edit Interfaces window (Figure 5-75), enter the current number and type of
iSeries FC adapters. In our example, we use four iSeries 2766 adapters, so we leave
the default value of 4. However, for Server side, we change the type of adapters to
Fibre 1 Gb to reflect the current adapters 2766.

Figure 5-75 Current iSeries FC adapters



g. In the Disk Subsystem - ESS1 window, select the iSeries Disk tab.
On the iSeries Disk tab, observe that the current capacity and the number of LUNs are inserted by Disk Magic and that 36 GB 15K rpm ranks are used as the default for Physical Device Type. If necessary, select another value for Physical Device Type to reflect the current ESS configuration.
In our example, we select ESS 73 GB/15000 because the customer currently uses 73 GB 15K rpm DDMs on the ESS. Observe that the number of used ranks changes when we change the type of DDMs.
In some cases, it might be necessary to configure more ranks for performance than are
required for capacity. Disk Magic can validate the proposed configuration. We
recommend that you use Capacity Magic for capacity planning because Disk Magic
does not take sparing into account.
In our example, Disk Magic models only two ranks for the customer's workload, as shown in Figure 5-76. With the DS systems, we can model less capacity used by the System i5 model than the total capacity of the used ranks.

Figure 5-76 Current capacity on the ESS

h. Select the iSeries Workload tab. Notice that the current I/O rate and block sizes are
inserted by Disk Magic as shown in Figure 5-77.

Figure 5-77 Current workload

i. On the iSeries Workload tab, click Cache Statistics. In the Cache Statistics for Host
window (Figure 5-78), notice the currently used cache percentages. Click OK.

Figure 5-78 Current cache usage

j. In the Disk Subsystem - ESS1 window, click Base to save the current model of ESS.



9. Next, insert the planned values for the DS system in the Disk Subsystem - ESS1 window.
a. Select the General tab. For Hardware Type, select the planned model of the DS
system. In our example, we select DS8100, which is planned for this customer. See
Figure 5-79.

Figure 5-79 Planned hardware type

b. Click Hardware Details. In the Hardware Details IBM DS8100 window (Figure 5-80),
enter the values for the planned DS system. In our example, the customer uses four
DS FC host ports, so we enter 4 for Fibre Host Adapters. Click OK.

Figure 5-80 Hardware details of planned DS

c. Select the Interfaces tab. Select the From Disk Subsystem tab and click Edit.
d. The Edit Interfaces for Disk Subsystem window (Figure 5-81) opens. Enter the planned
number and type of DS host ports. In our example, the customer plans on four DS
ports and four adapters with feature number 2787 in the System i5 model. Therefore,
we leave the default value for Count. However, for Server side, we change the type to
Fibre 2 Gb. Click OK.

Figure 5-81 Planned DS ports

e. On the Interfaces tab, select the From Servers tab and click Edit. The Edit Interfaces
window (Figure 5-82) opens. Enter the planned number and type of System i5 FC
adapters. In our example, the customer plans for four FC adapters 2787, so we leave
the default value of 4 for Count. However, for Server side, we select Fibre 2 Gb. Click
OK.

Figure 5-82 Planned System i5 FC adapters



f. Select the iSeries Disk tab. Notice that an extent pool is already created with the same
capacity as is used on ESS. See Figure 5-83. Click Edit.

Figure 5-83 Planned capacity -1

g. In the Edit a Disk Type panel (Figure 5-84), for Capacity, enter the capacity that corresponds to the desired number of ranks. Observe that 73 GB 15K rpm ranks are already inserted as the default for HDD Type.
In our example, the customer plans nine ranks. The available capacity of one RAID-5 73 GB rank with spare (6+P+S rank) is 414.46 GB. We enter a capacity of 3730 (9 x 414.46 GB = 3730 GB), and click OK.

Figure 5-84 Planned number of ranks

h. Select the iSeries Workload tab. Enter the planned number of LUNs and the amount of capacity that is used by the System i5 model. Notice that the extent pool for the i5/OS workload is already specified for Extent Pool.
In our example, the customer plans for 113 LUNs of 17 GB, so we enter 113 for LUN count. We also enter 1982 (using the equation 113 x 17.54 = 1982 GB) for Used Capacity. See Figure 5-85. A short cross-check of these capacity values is shown at the end of this procedure.

Figure 5-85 Planned capacity-2

i. On the iSeries Workload tab, click Cache Statistics. In the Cache Statistics for Host window (Figure 5-86), notice that the Automatic Cache Modeling box is selected. This indicates that Disk Magic will model the cache percentages for DS8100 automatically, based on the values reported in the performance reports for the currently used ESS 800. Note that the write cache efficiency reported in the performance reports is not correct for ESS 800, so Disk Magic uses a default value of 30%.

Figure 5-86 Automatic cache modeling



j. In the Disk Subsystem - ESS1 window, click Solve to ensure that the planned DS
configuration is modeled for the current workload. On the iSeries Workload tab
(Figure 5-87), notice the modeled disk service time and wait time. In our example, the
modeled service time is 3.8 ms, and the modeled wait time is 0.4 ms.

Figure 5-87 Modeled service time and wait time

k. On the iSeries Workload tab, click Utilizations. Notice the modeled utilization of HDDs,
DS FC ports, LUNs, and so on, as shown in Figure 5-88. In our example, the modeled
utilizations are rather low, so the customer can grow the workload to a certain extent
without needing additional hardware in the DS system.

Figure 5-88 Modeled utilizations
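As a quick cross-check of the numbers used in this example, the following Python sketch (a back-of-envelope calculation, not a Disk Magic function) compares the capacity provided by the nine planned RAID-5 ranks with the capacity used by the 113 planned LUNs:

# Back-of-envelope capacity check for the DS8100 example.
RANK_CAPACITY_GB = 414.46   # usable capacity of one RAID-5 73 GB 6+P+S rank
LUN_SIZE_GB = 17.54         # i5/OS "17 GB" volume model

ranks = 9
luns = 113

provided_gb = ranks * RANK_CAPACITY_GB   # ~3730 GB, entered as the extent pool capacity
used_gb = luns * LUN_SIZE_GB             # ~1982 GB, entered as Used Capacity

print(f"Capacity of {ranks} ranks: {provided_gb:.0f} GB")
print(f"Capacity of {luns} LUNs : {used_gb:.0f} GB")
print(f"Headroom               : {provided_gb - used_gb:.0f} GB "
      f"({used_gb / provided_gb:.0%} of the rank capacity is used)")

Remember that Disk Magic does not take sparing into account, so use Capacity Magic for exact capacity planning, as noted earlier.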

In our example, the customer's migration from iSeries model 825 to a System i5 model was performed at the same time as the installation of the DS8100. Therefore, the number of I/Os per second and the cache values differ from the ones that were used by Disk Magic. The actual disk response times were lower than the modeled ones. The actual reported disk service time is 2.2 ms, and the disk wait time is 1.4 ms.
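Because i5/OS Performance Tools report the disk response time as the sum of the service time and the wait time (as in the Service, Wait, and Response columns of the System report - Disk Utilization), the modeled and actual figures can be compared directly. The following Python lines are simply that arithmetic:

# Disk response time = service time + wait time (i5/OS Performance Tools convention).
modeled_service_ms, modeled_wait_ms = 3.8, 0.4   # Disk Magic model for the DS8100
actual_service_ms, actual_wait_ms = 2.2, 1.4     # reported after the migration

print(f"Modeled response time: {modeled_service_ms + modeled_wait_ms:.1f} ms")
print(f"Actual response time : {actual_service_ms + actual_wait_ms:.1f} ms")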

5.5.4 Using IBM Systems Workload Estimator connection to Disk Magic: Modeling DS6000 and System i for an existing workload
In this example, we present the usage of IBM Systems Workload Estimator (WLE) together with Disk Magic to size a System i server and DS6000 for an existing workload that runs on an iSeries model 870 with internal disks. To perform this sizing, you must have the following i5/OS Performance Tools reports:
• System report
– Workload
– Resource utilization
– Storage Pool utilization
– Disk utilization
• Resource report
– Disk utilization
– IOP utilizations
• Information about currently used disk adapters



To size System i5 and DS6000 with WLE and Disk Magic:
1. Start IBM Systems WLE by accessing the following Web page:
http://www-912.ibm.com/wle/EstimatorServlet
2. On the License Agreement page, read the license agreement and then click I Accept if
you accept the terms of the agreement.
3. On the User Demographic Information page, provide your demographic information and
click Continue.
4. A panel displays as shown in Figure 5-89. To size an existing workload, click Workload:
Add in the blue tab at the top of the panel.

Figure 5-89 WLE initial panel

5. In the Workload Selection panel (Figure 5-90), for Add Workload, select Existing and click
Go.

Figure 5-90 Workload selection



6. In the next panel (Figure 5-91), you can add another workload. Notice that Existing
workload #1 that you selected in the previous panel is shown. Do not select another
workload. Click Return.

Figure 5-91 Selecting another workload

7. You return to the initial panel, which contains the Existing #1 workload (see Figure 5-92).
Click Continue.

Figure 5-92 Initial panel with the Existing #1 workload

8. In the Existing #1 - Existing System Workload Definition panel (Figure 5-93), enter the
hardware and characteristics of the existing workload as described in the next steps.

Figure 5-93 Inserting the characteristics of the existing workload



a. For Processor Model, select the model and processor features of the iSeries system on
which the existing workload runs. First, obtain this information from the System report
(see Figure 5-94).

System Report 19-07-05 12:01:07


Workload Page 0001
Panter 14 7 2005 14:00 t/m 14:45
Member . . . : Q195000004 Model/Serial . : 870/xxxxxx Main storage . . : 4096,0 MB Started . . . . : 14-07-05 00:00:06
Library . . : QMPGPANT System name . . : Example 4 Version/Release : 5/ 2,0 Stopped . . . . : 15-07-05 00:00:00
Partition ID : 002 Feature Code . : 7433-2489-7433
QPFRADJ . . . : 2 QDYNPTYSCD . . : 1 QDYNPTYADJ . . . : 1
Interactive Workload

Figure 5-94 Model characteristics from the System report

b. Next to Processor model, select the corresponding model and features (see
Figure 5-95).

Figure 5-95 Selecting the model and features

c. Obtain the total CPU utilization and Interactive CPU utilization data from the System
report - Workload (see Figure 5-96).

NETSERVER 1 0 0 0 0 0,0000 0,0


Total 489 3.091.609 647.250 13.859 7.794
Average 0,0003 858,7
Total CPU Utilization . . . . . . . . . . . . .: 53,9
Total CPU Utilization (Interactive Feature) . .: 11,2
Total CPU Utilization (Database Capability) . .: 15,7

Figure 5-96 CPU utilization

d. Obtain memory data from the System report in the Main Storage field (see Figure 5-94
on page 194).
e. Insert these values into the Total CPU Utilization, Interactive Utilization, and Memory
(MB) fields. If the workload runs in a partition, specify the number of processors for this
partition and select Yes for Represent a Logical partition. See Figure 5-97.

Figure 5-97 Existing hardware



f. In the Disk Configuration fields (see Figure 5-98), specify as many groups as there are
different internal disk types on the system. In our example, we have only one disk type,
so we use only one group. If necessary, you can add other groups by clicking Add New
Group.

Figure 5-98 Disk configuration

g. Obtain the current IOA feature and RAID protection used from the iSeries
configuration. Obtain the Drive Type and number of disk units from the System report -
Disk Utilization (Figure 5-99).

Unit Size IOP IOP Dsk CPU ASP Rsc ASP --Percent-- Op Per K Per - Average Time Per I/O --
Unit Name Type (M) Util Name Util Name ID Full Util Second I/O Service Wait Response
---- ---------- ---- ------- ---- ---------- ------- ---------- --- ---- ---- -------- --------- ------- ------ --------
0001 DD004 4326 30.769 0,7 CMB01 0,6 1 59,0 1,8 14,98 9,7 .0012 .0002 .0014
0002 DD003 4326 26.373 0,7 CMB01 0,6 1 59,0 1,6 13,72 10,0 .0011 .0002 .0013
0003 DD011 4326 30.769 0,7 CMB01 0,6 1 59,0 1,6 11,83 11,7 .0013 .0003 .0016
0004 DD005 4326 30.769 0,7 CMB01 0,6 1 59,0 1,7 16,49 8,2 .0010 .0000 .0010
0005 DD009 4326 30.769 0,7 CMB01 0,6 1 59,0 1,5 15,17 9,5 .0009 .0002 .0011
0006 DD010 4326 26.373 0,7 CMB01 0,6 1 59,0 1,3 15,90 9,3 .0008 .0001 .0009
0007 DD007 4326 26.373 0,7 CMB01 0,6 1 59,0 1,2 11,42 10,2 .0010 .0001 .0011
0008 DD012 4326 30.769 0,7 CMB01 0,6 1 59,0 1,5 10,22 10,8 .0014 .0003 .0017
0009 DD008 4326 30.769 0,7 CMB01 0,6 1 59,0 1,5 15,67 9,0 .0009 .0001 .0010
0010 DD001 4326 26.373 0,7 CMB01 0,6 1 59,0 1,5 15,20 8,7 .0009 .0002 .0011
0011 DD006 4326 30.769 0,7 CMB01 0,6 1 59,0 1,7 21,17 8,3 .0008 .0000 .0008

Figure 5-99 Disk units

h. In the Storage (GB) field, insert the number of disk units multiplied by the size of a unit. In our example, we have 24 disk units of feature 4326, which is a 35.16 GB 15K RPM internal disk drive. They are connected through IOA 2780.

i. In the Storage field, insert the total current disk capacity by multiplying the capacity of one disk unit by the number of disks. In our example, there are 24 disk units of 35.16 GB, so we insert 24 x 35.16 GB = 844 GB in the Storage field (see Figure 5-98). A short recalculation of these inputs is shown at the end of this section. You can also click the WLE Help/Tutorials tab for instructions on how to obtain the necessary values to enter in the WLE.
j. Obtain the Read Ops Per Second value from the Resource report - Disk utilization (see
Figure 5-100).

Average Average Average High High High Disk


Itv Average Reads Writes K Per Avg High Util Srv Srv ce
End I/O /Sec /Sec /Sec I/O Util Util Unit Time Unit d
----- --------- -------- -------- ------- ---- ---- ---- ----- -----
14:00 357,4 111,5 245,8 11,7 1,6 2,3 0019 ,0016 0019 5
14:15 327,4 104,5 222,8 10,4 1,5 2,2 0002 ,0016 0002 5
14:30 501,6 188,0 313,6 8,7 2,1 2,9 0005 ,0014 0016 1
14:45 132,0 44,2 87,8 6,5 0,6 1,0 0020 ,0000 415.487
--------- -------- -------- ------- ----
Average: 329,6 112,0 217,5 9,7 1,5

Figure 5-100 I/O per second and block size

k. If the workload is small or if WebFacing or HATS is used, specify the values in the Additional Characteristics and WebFacing or HATS Support fields. Refer to the WLE Help for more information about these fields.
l. The System report shows one block size (transfer size per operation) for both reads and writes, so insert this size for both read and write operations. Click Continue (see Figure 5-98).
9. The Selected System - Choose Base System panel displays as shown in Figure 5-101.
Here you can limit your selection to an existing system, or you can use WLE to size any
system for the inserted workload. In our example, we use WLE to size any system. We
click the two Select buttons.

Figure 5-101 Selecting the system to size



10.The Selected System panel displays, as shown in Figure 5-102, on which two
recommended models are shown. One model is intended as an immediate solution, and
the other is meant to accommodate the workload growth. You can choose other models
and features from Model/Feature and observe the predicted utilization with the existing
workload. To size external storage with Disk Magic, click the External Storage link in the
blue tab area at the top of the Selected System panel.

Figure 5-102 Selected system

11.The Selected System - External Storage Sizing Information panel displays as shown in
Figure 5-103. For Which system, select either Immediate or Growth for the system for
which you want to size external storage. In our example, we select Immediate to size our
external storage. Then click Download Now.

Figure 5-103 Selecting a system to size external storage

12.The File Download window opens. You can choose to start Disk Magic immediately for the
sized workload (by clicking Open), or you can choose to save the Disk Magic command
file and use it later (by clicking Save). In our example, we want to start Disk Magic
immediately, so we click Open.

Important: At this point, to start Disk Magic, you must have Disk Magic installed.

13.Disk Magic starts with the workload modeled with WLE (see Figure 5-104). Observe that
the workload Existing #1 is already shown under TreeView. Double-click dss1.

Figure 5-104 Disk Magic model

14.The Disk Subsystem - dss1 window (Figure 5-105) opens, displaying the General tab.
Follow these steps:
a. To size DS6800 for the Existing #1 workload, from Hardware Type, select DS6800. We
highly recommend that you use multipath with DS6800. To model multipath, select
Multipath with iSeries.

Figure 5-105 Selecting DS6800 and Multipath with iSeries



b. Select the Interfaces tab and then select the From Servers tab (see Figure 5-106).
Observe that four interfaces from servers with workload Existing #1 are already
configured as the default. In our example, we plan four System i5 FC adapters in
multipath so we leave this default value. If necessary, you can change it by clicking the
Edit button and specifying the number of interfaces.

Figure 5-106 Interfaces from the server

c. Click the From Disk Subsystem tab. Notice that four interfaces from DS6000 are
configured as the default. In our example, we use two DS6000 host ports for
connecting to the System i5 platform, so we change the number of interfaces. Click
Edit to open the Edit Interfaces for Disk Subsystem window. In the Count field, enter
the number of planned DS6000 ports. Click OK. In our example, we insert two ports as
shown in Figure 5-107.

Figure 5-107 Interfaces from the DS

d. In the Disk Subsystem - dss1 window, click the iSeries Disk tab (Figure 5-108).
Observe that an extent pool is already configured for the Existing #1 workload. Its
capacity is equal to the capacity that you specified in the Storage field of the WLE.

Figure 5-108 Capacity in Disk Magic

e. In the Disk Subsystem - dss1 window, select the iSeries Workload tab. Notice that the
number of reads per second and writes per second, the number of LUNs, and the
capacity are specified based on values that you inserted in WLE. You might want to
check the modeled expert cache size, by comparing it to the sum of all expert cache
storage pools in the System report (Figure 5-109).

Pool Expert Size Act CPU Number Average ------ DB ------ ---- Non-DB ---- Act-
ID Cache (KB) Lvl Util Tns Response Fault Pages Fault Pages Wait
---- ------- ----------- ----- ----- ----------- -------- ------- ------- ------- ------- --------
01 0 808.300 0 28,5 0 0,00 0,0 0,0 0,3 1,0 257 0
*02 3 1.812.504 147 15,7 825 0,31 3,8 17,9 32,7 138,8 624 0
*03 3 1.299.488 48 9,6 4.674 0,56 2,4 13,0 28,1 107,0 198 0
04 3 121.244 5 0,0 0 0,00 0,0 0,0 0,0 0,0 0 0
Total 4.041.536 53,9 5.499 6,3 31,0 61,2 246,9 1.080 0

Figure 5-109 Expert cache



f. Enter the block size that was used for WLE, if needed (Figure 5-110). Click Cache
Statistics.

Figure 5-110 Workload in Disk Magic

g. The Cache Statistics for Host Existing #1 window (Figure 5-111) opens. Notice that the
cache statistics are already specified in the Disk Magic model. For more conservative
sizing, you might want to change them to lower values, such as 20% read cache and
30% write cache. Then, click OK.

Figure 5-111 Cache values in Disk Magic

h. On the Disk Subsystem - dss1 window, click Base to save the current model as base.
After the base is saved successfully, notice the modeled disk service time and wait
time, as shown in Figure 5-112.

Figure 5-112 Modeled service and wait times

i. On the iSeries Workload tab, click Utilizations. The Utilizations IBM DS6800 window
(Figure 5-113) opens. Observe the modeled utilizations for the existing workload. In
our example, the modeled hard disk drive (HDD) utilization and LUN utilization are far
below the limits that are recommended for good performance. There is room for growth
in the modeled DS configuration.

Figure 5-113 Modeled utilizations
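As a closing check of the values that were entered into WLE for this example, the following Python sketch recalculates the Storage field and the total I/O rate from the reports; the small difference from the report average of 329.6 I/O per second is rounding:

# Recalculating the WLE inputs used in this example.
disk_units = 24
unit_size_gb = 35.16            # feature 4326 internal disk drive

print(f"Storage (GB) field: {disk_units * unit_size_gb:.0f} GB")   # ~844 GB

# Average rates from the Resource report - Disk utilization
reads_per_sec = 112.0
writes_per_sec = 217.5
total_io = reads_per_sec + writes_per_sec
print(f"Total I/O per second: {total_io:.1f} "
      f"({reads_per_sec / total_io:.0%} reads)")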



Part 3. Implementation and additional topics
This part covers different implementation methods and additional topics concerning external storage on System i. It has the following chapters:
• Chapter 6, “Implementing external storage with i5/OS” on page 207
• Chapter 7, “Migrating to i5/OS boot from SAN” on page 277
• Chapter 8, “Using DS CLI with System i” on page 391
• Chapter 9, “Using DS GUI with System i” on page 439
• Chapter 10, “Installing the IBM System Storage DS6000 storage system” on page 519
• Chapter 11, “Usage considerations for Copy Services with i5/OS” on page 537
• Chapter 12, “Cloning i5/OS” on page 553
• Chapter 13, “Troubleshooting i5/OS with external storage” on page 569



Chapter 6. Implementing external storage with i5/OS
In this chapter, we discuss the supported environment for external storage including the
logical volumes that are supported and the protection methods that are available. We also
show how to add the logical volumes to the System i environment.



6.1 Supported environment
Not all hardware and software combinations for i5/OS support DS8000 and DS6000. This
section describes the hardware and software prerequisites for attaching DS8000 and
DS6000.

6.1.1 Hardware
DS8000, DS6000, and ESS model 800 are supported on all System i models that support Fibre Channel (FC) attachment for external storage. Fibre Channel was supported on all iSeries 8xx models and later. AS/400 models 7xx and earlier supported only SCSI attachment for external storage, so they cannot support DS8000 or DS6000.

The following IOP-based FC adapters for System i support DS8000 and DS6000:
• 2766 2 Gb Fibre Channel Disk Controller PCI
• 2787 2 Gb Fibre Channel Disk Controller PCI-X
• 5760 4 Gb Fibre Channel Disk Controller PCI-X

Each of these adapters requires its own dedicated I/O processor.

With System i POWER6, new IOP-less FC adapters are available, which support only IBM System Storage DS8000 at LIC level 2.4.3 or later for external disk storage attachment:
• 5749 IOP-less 4 Gb dual-port Fibre Channel Disk Controller PCI-X
• 5774 IOP-less 4 Gb dual-port Fibre Channel Disk Controller PCIe

For further planning information with these System i FC adapters, refer to 4.2, “Solution
implementation considerations” on page 78.

For information about current hardware requirements, including support for switches, refer to:
http://www-1.ibm.com/servers/eserver/iseries/storage/storage_hw.html

To support boot from SAN with the load source unit on external storage, either the #2847 I/O
processor (IOP) or an IOP-less FC adapter is required.

Restriction: Prior to i5/OS V6R1 the #2847 IOP for SAN load source does not support
multipath for the load source unit but does support multipath for all other logical unit
numbers (LUNs) attached to this I/O processor (IOP). See 6.10, “Protecting the external
load source unit” on page 240 for more information.

6.1.2 Software
The iSeries or System i environment must be running V5R3, V5R4, or V6R1 of i5/OS. In addition, the following PTFs are required:
• V5R3
– MF33328
– MF33845
– MF33437
– MF33303
– SI14690
– SI14755
– SI14550
• V5R3M5 and later
– Load source must be at least 17.54 GB

Important:
• The #2847 PCI-X IOP for SAN load source requires i5/OS V5R3M5 or later.
• The #5760 FC I/O adapter (IOA) requires V5R3M0 resave RSI or V5R3M5 RSB with C6045530 or later (ref. #5761 APAR II14169) and System i5 firmware level SF235_160 or later.
• The #5749/#5774 IOP-less FC IOAs are supported on System i POWER6 models only.

Prior to attaching a DS8000, DS6000, or ESS model 800 system to a System i model, check
for the latest PTFs, which probably have superseded the minimum requirements listed
previously.

Note: We generally recommend installing one of the latest i5/OS cumulative PTFs
(cumPTFs) before attaching IBM System Storage external disk storage subsystems to
System i.

6.2 Logical volume sizes


i5/OS is supported on the DS8000 and DS6000 systems using fixed block storage. Unlike other Open Systems hosts that use the fixed block architecture, i5/OS supports only specific volume sizes, which might not be an exact number of extents. In general, the LUN sizes relate to the volume sizes available with System i internal disk devices. i5/OS volumes are defined in decimal GB (10^9 bytes).

Table 6-1 indicates the number of extents that are required for different System i volume
sizes. The value xxxx represents 1750 for DS6000 and 2107 for DS8000.

Table 6-1 i5/OS logical volume sizes


Protected     Unprotected   i5/OS device   Number of logical   Extents   Unusable       Usable
model         model         size (GB)      block addresses               space (GiB)a   space %
xxxx-A01      xxxx-A81        8.59          16,777,216             8       0.00         100.00
xxxx-A02b     xxxx-A82       17.54          34,275,328            17       0.66          96.14
xxxx-A05b     xxxx-A85       35.16          68,681,728            33       0.25          99.24
xxxx-A04b     xxxx-A84       70.56         137,822,208            66       0.28          99.57
xxxx-A06b     xxxx-A86      141.12         275,644,416           132       0.56          99.57
xxxx-A07      xxxx-A87      282.25         551,288,832           263       0.13          99.95

a. GiB represents “binary GB” (2^30 bytes) and GB represents “decimal GB” (10^9 bytes).
b. Only Ax2, Ax4, Ax5, and Ax6 models are supported as external load source unit LUNs.

When creating the logical volumes for use with i5/OS, in almost every case, the i5/OS device size does not match a whole number of extents, so some space remains unused. Use the values in Table 6-1 in conjunction with extent pools to see how much space will be wasted for your specific configuration. Also, note that the #2766, #2787, and #5760 Fibre Channel Disk Adapters used by the System i platform can address only up to 32 LUNs, while the IOP-less FC adapters #5749 and #5774 support up to 64 LUNs per port.
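The Extents, Unusable space, and Usable space values in Table 6-1 follow directly from the fact that a DS fixed block extent is 1 GiB (2^30 bytes), while the i5/OS volume sizes are defined in decimal GB and whole extents must be allocated. The following Python sketch, given only to illustrate the arithmetic, reproduces those columns from the logical block counts in the table:

import math

BLOCK = 512          # bytes per logical block address
EXTENT = 2 ** 30     # one DS fixed block extent = 1 GiB

# i5/OS volume sizes (decimal GB) and their logical block counts from Table 6-1
volumes = {
    8.59: 16_777_216,
    17.54: 34_275_328,
    35.16: 68_681_728,
    70.56: 137_822_208,
    141.12: 275_644_416,
    282.25: 551_288_832,
}

for size_gb, blocks in volumes.items():
    size_gib = blocks * BLOCK / EXTENT
    extents = math.ceil(size_gib)         # whole extents must be allocated
    unusable_gib = extents - size_gib     # space left over in the last extent
    usable_pct = 100 * size_gib / extents
    print(f"{size_gb:7.2f} GB -> {extents:3d} extents, "
          f"{unusable_gib:4.2f} GiB unusable, {usable_pct:6.2f}% usable")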

For more information about sizing guidelines for i5/OS, refer to Chapter 5, “Sizing external
storage for i5/OS” on page 115.

6.3 Protected versus unprotected volumes


When defining i5/OS logical volumes, you must decide whether these should be protected or unprotected volume models. This protection mode is simply a SCSI Inquiry data notification to i5/OS and does not mean that the data is protected or unprotected. In reality, all DS8000 or DS6000 LUNs are protected, either by RAID-5 or RAID-10. An unprotected volume is available for i5/OS to mirror to another volume of equal capacity, either internal or external. Unless you intend to use i5/OS (host-based) mirroring, you should define your logical volumes as protected.

Under some circumstances, you might want to mirror the i5/OS internal load source unit to a
LUN in the DS8000 or DS6000 storage system. In this case, define only one LUN as
unprotected. Otherwise, when mirroring is started to mirror the load source unit to the
DS6000 or DS8000 LUN, i5/OS attempts to mirror all unprotected volumes.
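To make the relationship between the protection mode and the volume models in Table 6-1 explicit, the following Python sketch (an illustration only; the helper name is ours) selects the protected or unprotected model for a requested i5/OS LUN size:

# i5/OS volume size (decimal GB) -> (protected model, unprotected model), from Table 6-1
MODELS = {
    8.59: ("A01", "A81"),
    17.54: ("A02", "A82"),
    35.16: ("A05", "A85"),
    70.56: ("A04", "A84"),
    141.12: ("A06", "A86"),
    282.25: ("A07", "A87"),
}

def volume_model(size_gb, protected=True):
    """Return the DS8000/DS6000 volume model to create for an i5/OS LUN."""
    protected_model, unprotected_model = MODELS[size_gb]
    return protected_model if protected else unprotected_model

# Example: one 35.16 GB LUN defined as unprotected so that i5/OS can use it as a
# mirror target (for example, for the load source unit); all other LUNs protected.
print(volume_model(35.16, protected=False))   # -> A85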

Important: Prior to i5/OS V6R1, we strongly recommend that if you use an external load
source unit that you use i5/OS mirroring to another LUN in external storage system to
provide path protection for the external load source unit (see 6.10, “Protecting the external
load source unit” on page 240).

Although it is possible to change a volume from protected to unprotected (or vice versa) using
the DS command-line interface (CLI) chfbvol command, you need to be extremely careful
when changing LUN protection.

Attention: Changing the LUN protection of a System i volume is only supported for
non-configured volumes, that is volumes not a part of the System i auxiliary storage pool
configuration.

If the volume is configured, that is, within an auxiliary storage pool (ASP) configuration, do not change the protection. In this case, if you want to change the protection, you must first remove the volume from the ASP configuration and add it back after having changed its protection mode. This is unlike ESS models E20, F20, and 800, where no dynamic change of the LUN protection mode is supported from the storage side: the logical volume has to be deleted, the entire array that contains the logical volume reformatted, and the volume created again with the desired protection mode.

Important: Removing a logical volume from the System i configuration is a disruptive task for i5/OS if the LUN is in the system auxiliary storage pool (ASP) or in user ASPs 2 through 32, because it requires an initial program load (IPL) of i5/OS to completely remove the volume from the i5/OS configuration. However, volumes can be removed from an independent ASP (IASP) with the IASP varied off, without performing an IPL on the system. This is no different from removing an internal disk from an i5/OS configuration.

210 IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i
6.4 Setting up an external load source unit
The new #5749 and #5774 IOP-less Fibre Channel IOAs for System i POWER6 allow the system to perform an IPL from a LUN in the IBM System Storage DS8000 series.

The #2847 PCI-X IOP for SAN load source allows a System i to perform an IPL from a LUN in
a DS6000, DS8000, or ESS model 800. This IOP supports only a single FC IOA. No other
IOAs are supported.

Restrictions:
• The new IOP-less Fibre Channel IOAs #5749 and #5774 support only the FC-AL protocol for direct attachment.
• For #2847 IOP-driven IOAs, point-to-point (also known as FC-SW and SCSI-FCP) is the only supported protocol. You must not define the host connection (DS CLI) or the host attachment (Storage Manager GUI) as FC-AL because this prevents you from using the system.

Creating a new load source unit on external storage is similar to creating one on an internal
drive. However, instead of tagging a RAID disk controller for the internal load source unit, you
must tag your load source IOA for the SAN load source.

Note: With System i SLIC V5R4M5 and later, all buses and IOPs are booted in the D-mode IPL environment, and if no existing load source disk unit is found, a list of eligible disk units (of the correct capacity) displays so that the user can select which disk to use as the load source disk.

For previous SLIC versions, we recommend that you assign only your designated load source
LUN to your load source IOA first to make sure that this is the LUN chosen by the system for
your load source at SLIC install. Then, assign the other LUNs to your load source IOA
afterwards.

6.4.1 Tagging the load source IOA


Even if you are only creating a system with one partition, you must use a Hardware
Management Console (HMC) to tag the load source IOA. This tells the system which IOA to
use when building the load source unit during the D-mode IPL SLIC installation. The external
load source unit does not work on a system without an HMC.



On the HMC, set the tagged load source unit to the FC Disk Controller that is controlling your
new external load source unit. On the HMC, follow these steps:
1. Select the partition name with which you are working. Then, select Tasks →
Configuration → Manage Profiles as shown in Figure 6-1.

Note: For HMC versions below V7, right-click the partition profile name and select Properties.

Figure 6-1 Selecting the HMC partition profile properties

2. Select Actions → Edit as shown in Figure 6-2.

Figure 6-2 Managed Profiles

3. In the Logical Partition Profile Properties window (Figure 6-3), select the Tagged I/O tab.

Figure 6-3 Logical Partition Profile Properties window

4. On the Tagged I/O tab, click the Select button that corresponds to the load source as
shown in Figure 6-4.

Figure 6-4 Tagged I/O properties



5. In the Load Source Device window (Figure 6-5), select the IOA to which your new load
source unit is assigned. Click OK.

Figure 6-5 Tagging the load source unit

6. Change the partition to do a manual IPL as follows:


a. Select Tasks → Properties from the drop-down menu as shown in Figure 6-6.

Figure 6-6 Selecting the Properties option

Note: For HMC versions below V7, right-click the partition name and select Properties.

b. In the Partition Properties window (Figure 6-7), select the Settings tab.

Figure 6-7 Partition Properties window

c. On the Settings tab, for Keylock position, select Manual as shown in Figure 6-8.

Figure 6-8 Setting the IPL type



6.4.2 Creating the external load source unit
After you tag the load source IOA, the installation process is the same as installing on an
internal load source unit. Follow these steps:
1. Insert the I_BASE SLIC CD into the alternate IPL DVD-ROM device and perform a
D-mode IPL by selecting the partition and choosing Tasks → Operations → Activate as
shown in Figure 6-9.

Note: For HMC versions below V7, right-click the partition, select Properties, and click Activate.

Figure 6-9 Activating a partition

2. In the Activate Logical Partition window (Figure 6-10), select the partition profile to be
used and click OK.

Figure 6-10 Selecting the profile for activation

In the HMC, a status window displays, which closes when the task is complete and the
partition is activated. Wait for the Dedicated Service Tools (DST) panel to open.
3. After the system has done an IPL to DST, select 3. Use Dedicated Service Tools (DST).

4. On the OS/400 logo panel (Figure 6-11), enter the language feature code.

OOOOOO SSSSS // 44 00000 00000


OO OO SS SS // 444 00 00 00 00
OO OO SS // 4444 00 00 00 00
OO OO SS // 44 44 00 00 00 00
OO OO SSS // 44 44 00 00 00 00
OO OO SSS // 44 44 00 00 00 00
OO OO SS // 44 44 00 00 00 00
OO OO SS // 44444444444 00 00 00 00
OO OO SS SS // 44 00 00 00 00
OOOOOO SSSSSS // 44 00000 00000

LANGUAGE FEATURE ===> 2924


Figure 6-11 OS/400 logo panel

5. On the Confirm Language Group panel (Figure 6-12), press Enter to confirm the language
code.

Confirm Language Group

Language feature . . . . . . . . . . . . . . : 2924

Press Enter to confirm your choice for language feature.


Press F12 to change your choice for language feature.

F12=Cancel
Figure 6-12 Confirming the language feature



6. On the Install Licensed Internal Code panel (Figure 6-13), select 1. Install Licensed
Internal Code.

Install Licensed Internal Code


System: G1016730
Select one of the following:

1. Install Licensed Internal Code


2. Work with Dedicated Service Tools (DST)
3. Define alternate installation device

Selection
1
Figure 6-13 Install Licensed Internal Code panel

7. The next panel shows the volume that is selected as the external load source unit and a
list of options for installing the Licensed Internal Code (see Figure 6-14). Select 2. Install
Licensed Internal Code and Initialize System.

Install Licensed Internal Code (LIC)

Disk selected to write the Licensed Internal Code to:


Serial Number Type Model I/O Bus Controller Device
30-02000 1750 A85 0 1 1

Select one of the following:

1. Restore Licensed Internal Code


2. Install Licensed Internal Code and Initialize system
3. Install Licensed Internal Code and Recover Configuration
4. Install Licensed Internal Code and Restore Disk Unit Data
5. Install Licensed Internal Code and Upgrade Load Source

Selection
2

F3=Exit F12=Cancel
Figure 6-14 Install Licensed Internal Code options

8. On the Confirmation panel, read the warning message that displays (as shown in
Figure 6-15) and press F10=Continue when you are sure that you want to proceed.

Install LIC and Initialize System - Confirmation

Warning:
All data on this system will be destroyed and the Licensed
Internal Code will be written to the selected disk if you
choose to continue the initialize and install.

Return to the install selection screen and choose one of the


other options if you want to perform some type of recovery
after the install of the Licensed Internal Code is complete.

Press F10 to continue the install.


Press F12 (Cancel) to return to the previous screen.
Press F3 (Exit) to return to the install selection screen.

F3=Exit F10=Continue F12=Cancel


Figure 6-15 Confirmation warning

9. The Initialize the Disk - Status panel displays for a short time (see Figure 6-16). Unlike internal drives, formatting external LUNs on DS8000 and DS6000 is a task that is run by the storage system in the background; that is, the task might complete faster than you expect.

Initialize the Disk - Status

The load source disk is being initialized.

Estimated time to initialize in minutes : 55

Elapsed time in minutes . . . . . . . . : 0.0

Please wait.

Wait for next display or press F16 for DST main menu
Figure 6-16 Initialize the Disk Status panel



When the logical formatting has finished, you see the Install Licensed Internal Code - Status
panel as shown in Figure 6-17.

Install Licensed Internal Code - Status

Install of the Licensed Internal Code in progress.

+--------------------------------------------------+
Percent | 100% |
complete +--------------------------------------------------+

Elapsed time in minutes . . . . . . . . : 2.5

Please wait.
Figure 6-17 Install Licensed Internal Code status

When the Install Licensed Internal Code process is complete, the system does another IPL to
DST automatically. You have now built an external load source unit.

6.5 Adding volumes to the System i5 configuration


After the logical volumes are created and assigned to the host, they appear as
non-configured units to i5/OS. It can take some time for i5/OS to recognize the logical
volumes after they are created. At this stage, they are used in exactly the same way as
non-configured internal units. There is nothing specific about external logical volumes as far as i5/OS is concerned. You should use the same functions for adding logical units to an ASP as you would for internal disks.

Adding disk units to the configuration can be done either by using the 5250 interface with
Dedicated Service Tools (DST) or System Service Tools (SST) or with iSeries Navigator.

6.5.1 Adding logical volumes using the 5250 interface
To add a logical volume in the DS8000 or DS6000 to the system ASP, using System Service
Tools (SST), follow these steps:
1. Enter the command STRSST and sign on System Service Tools.
2. In the System Service Tools (SST) panel (Figure 6-18), select 3. Work with disk units.

System Service Tools (SST)

Select one of the following:

1. Start a service tool


2. Work with active service tools
3. Work with disk units
4. Work with diskette data recovery
5. Work with system partitions
6. Work with system capacity
7. Work with system security
8. Work with service tools user IDs
Selection
3

F3=Exit F10=Command entry F12=Cancel


Figure 6-18 System Service Tools menu

3. In the Work with Disk Units panel (Figure 6-19), select 2. Work with disk configuration.

Work with Disk Units

Select one of the following:

1. Display disk configuration


2. Work with disk configuration
3. Work with disk unit recovery

Selection
2

F3=Exit F12=Cancel
Figure 6-19 Work with Disk Units panel



4. When adding disk units to a configuration, you can add them as empty units by selecting
Option 2, or you can allow i5/OS to balance the data across all the disk units. Normally, we
recommend that you balance the data. In the Work with Disk Configuration panel
(Figure 6-20), select 8. Add units to ASPs and balance data.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Add units to ASPs
3. Work with ASP threshold
4. Include unit in device parity protection
5. Enable remote load source mirroring
6. Disable remote load source mirroring
7. Start compression on non-configured units
8. Add units to ASPs and balance data
9. Start device parity protection

Selection
8

F3=Exit F12=Cancel
Figure 6-20 Work with Disk Configuration panel

5. In the Specify ASPs to Add Units to panel (Figure 6-21), specify the ASP number next to
the desired units. Here, we specify 1 for ASP, which is the System ASP. Press Enter.

Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify Serial Resource


ASP Number Type Model Capacity Name
21-662C5 4326 050 35165 DD124
21-54782 4326 050 35165 DD136
1 75-1118707 2107 A85 35165 DD006

F3=Exit F5=Refresh F11=Display disk configuration capacity


F12=Cancel
Figure 6-21 Specify ASPs to Add Units to panel

6. In the Confirm Add Units panel (Figure 6-22), review the information and verify that
everything is correct. If the information is correct, press Enter to continue. Depending on
the number of units that you are adding, this step can take some time to complete.

Confirm Add Units

Add will take several minutes for each unit. The system will
have the displayed protection after the unit(s) are added.

Press Enter to confirm your choice for Add units.


Press F9=Capacity Information to display the resulting capacity.
Press F12=Cancel to return and change your choice.

Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 02-89058 6717 074 DD004 Device Parity
2 68-0CA4E32 6717 074 DD003 Device Parity
3 68-0C9F8CA 6717 074 DD002 Device Parity
4 68-0CA5D96 6717 074 DD001 Device Parity
5 75-1118707 2107 A85 DD006 Unprotected

F9=Resulting Capacity F12=Cancel


Figure 6-22 Confirm Add Units panel

7. After the units are added, view your disk configuration to verify the capacity and data
protection.



6.5.2 Adding volumes to an independent auxiliary storage pool
IASPs can be defined as switchable or private. Disks must be added to an IASP using the
iSeries Navigator. That is, you cannot manage your IASP disk configuration from the 5250
interface. In this example, we add a logical volume to a private (non-switchable) IASP. Follow
these steps:
1. Start iSeries Navigator. Figure 6-23 shows the initial window.

Figure 6-23 iSeries Navigator initial window

2. Expand the iSeries to which you want to add the logical volume and sign on to that server.
Then expand Configuration and Service → Hardware → Disk Units (see Figure 6-24).

Figure 6-24 iSeries Navigator Disk Units

3. Sign on to SST. Enter your Service tools ID and password and then click OK.



4. Under Disk Units, right-click Disk Pools, and select New Disk Pool as shown in
Figure 6-25.

Figure 6-25 Creating a new disk pool

5. The New Disk Pool wizard opens. Figure 6-26 shows the Welcome window. Click Next.

Figure 6-26 New Disk Pool Welcome window

6. In the New Disk Pool window (Figure 6-27):
a. For Type of disk pool, select Primary.
b. For Disk pool, type the new disk pool name.
c. Leave Database set to the default of Generated by the system.
d. Ensure that the disk protection method matches the type of logical volume that you are
adding. If you leave it deselected, you will see all available disks.
e. Select OK to continue.

Figure 6-27 Defining a new disk pool

7. The New Disk Pool - Select Disk Pool window (Figure 6-28) summarizes the disk pool
configuration. Review the configuration and click Next.

Figure 6-28 Confirming the disk pool configuration



8. In the New Disk Pool - Add to Disk Pool window (Figure 6-29), click Add Disks to add
disks to the new disk pool.

Figure 6-29 Adding disks to the disk pool

9. The Disk Pool - Add Disks window lists the non-configured units. Highlight the disk or
disks that you want to add to the disk pool, and click Add, as shown in Figure 6-30.

Figure 6-30 Choosing the disks to add to the disk pool

10.The next window confirms the selection (see Figure 6-31). Click Next to continue.

Figure 6-31 Confirming the disks to be added to the disk pool

11.In the New Disk Pool - Summary window, review the summary of the configuration. Click
Finish to add the disks to the disk pool, as shown in Figure 6-32.

Figure 6-32 New Disk Pool - Summary window



12.Take note of and respond to any messages that appear. After you take any necessary
action regarding any messages, you see the New Disk Pool Status window (Figure 6-33),
which shows the progress. This step might take some time, depending on the number and
size of the logical units that are being added.

Figure 6-33 New Disk Pool Status

13.When the process is complete, a message window displays. Click OK as shown in


Figure 6-34.

Figure 6-34 Disks added successfully to the disk pool

14.In iSeries Navigator, you can see the new disk pool under Disk Pools (see Figure 6-35).

Figure 6-35 New disk pool shown in iSeries Navigator

15.To see the logical volume, expand Configuration and Service → Hardware → Disk
Pools and select the disk pool that you just created. See Figure 6-36.

Figure 6-36 New logical volume in iSeries Navigator

6.6 Adding multipath volumes to System i using a 5250 interface
If you are using the 5250 interface, sign on to SST and perform the following steps:
1. On the first panel, select 3. Work with disk units.
2. On the next panel, select 2. Work with disk configuration.
3. On the next panel, select 8. Add units to ASPs and balance data.
4. In the Specify ASPs to Add Units to panel (Figure 6-37), the values in the Resource Name
column show DDxxx for single path volumes and DMPxxx for those which have more than
one path. In this example, the 2107-A85 logical volume with serial number 75-1118707 is
available through more than one path and reports in as DMP135.
Specify the ASP to which you want to add the multipath volumes.

Note: For multipath volumes, only one path is shown. For the additional paths, see 6.8,
“Managing multipath volumes using iSeries Navigator” on page 236.



Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify Serial Resource


ASP Number Type Model Capacity Name
21-662C5 4326 050 35165 DD124
21-54782 4326 050 35165 DD136
1 75-1118707 2107 A85 35165 DMP135

F3=Exit F5=Refresh F11=Display disk configuration capacity


F12=Cancel
Figure 6-37 Adding multipath volumes to an ASP

5. On the Confirm Add Units panel (Figure 6-38), check the configuration details. If the
details are correct, press Enter.

Confirm Add Units

Add will take several minutes for each unit. The system will
have the displayed protection after the unit(s) are added.

Press Enter to confirm your choice for Add units.


Press F9=Capacity Information to display the resulting capacity.
Press F12=Cancel to return and change your choice.

Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 02-89058 6717 074 DD004 Device Parity
2 68-0CA4E32 6717 074 DD003 Device Parity
3 68-0C9F8CA 6717 074 DD002 Device Parity
4 68-0CA5D96 6717 074 DD001 Device Parity
5 75-1118707 2107 A85 DMP135 Unprotected

F9=Resulting Capacity F12=Cancel


Figure 6-38 Confirm Add Units panel

6.7 Adding volumes to System i using iSeries Navigator
You can use iSeries Navigator to add volumes to the system ASP, user ASPs, or IASPs. In
this example, we add a multipath logical volume to a private (non-switchable) IASP. The same
principles apply when adding multipath volumes to the system ASP or user ASPs.
1. Follow the steps in 6.5.2, “Adding volumes to an independent auxiliary storage pool” on
page 224. When you reach the point where you select the volumes to add, a panel similar
to the panel that is shown in Figure 6-39 displays. Multipath volumes appear as DMPxxx.
Highlight the disk or disks that you want to add to the disk pool and click Add.

Figure 6-39 Adding a multipath volume

Note: For multipath volumes, only one path is shown. To see the additional paths, see
6.8, “Managing multipath volumes using iSeries Navigator” on page 236.



2. The remaining steps are identical to those in 6.5.2, “Adding volumes to an independent
auxiliary storage pool” on page 224.
When you have completed the steps, you can see the new disk pool in iSeries Navigator
under Disk Pools (see Figure 6-40).

Figure 6-40 New disk pool in iSeries Navigator

3. To see the logical volume, expand Configuration and Service → Hardware → Disk
Units → Disk Pools, and click the disk pool that you just created as shown in Figure 6-41.

Figure 6-41 New logical volume shown in iSeries Navigator



6.8 Managing multipath volumes using iSeries Navigator
All units are initially created with a prefix of DD. As soon as the system detects that there is
more than one path to a specific logical unit, it automatically assigns a unique resource name
with a prefix of DMP for both the initial path and any additional paths.

When using the standard disk panels in iSeries Navigator, only a single path, the initial path,
is shown. To see the additional paths follow these steps:
1. To see the number of paths available for a logical unit, open iSeries Navigator and expand
Configuration and Service → Hardware → Disk Units. As shown in Figure 6-42, the
number of paths for each unit is in the Number of Connections column (far right side of the
panel). In this example, there are eight connections for each of the multipath units.

Figure 6-42 Example of multipath logical units

2. To see the other connections to a logical unit, right-click a unit, and select Properties, as
shown in Figure 6-43.

Figure 6-43 Selecting properties for a multipath logical unit



3. In the Properties window (Figure 6-44), you see the General tab for the selected unit. The
first path is shown as Device 1 in the Storage section of the dialog box.

Figure 6-44 Multipath logical unit properties

To see the other paths to this unit, click the Connections tab, where the other seven
connections for this logical unit are displayed, as shown in Figure 6-45.

Figure 6-45 Multipath connections

6.9 Changing from single path to multipath


If you have an existing configuration where the logical units were assigned only to one Fibre
Channel I/O adapter, you can change to multipath easily. Simply assign the logical units in the
DS8000 or DS6000 system to another System i I/O adapter. Then the existing DDxxx
resource names change automatically to DMPxxx, and new DMPyyy resources are created
for the new path.
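
On the DS8000 or DS6000, this assignment is typically made by giving the host connection of the second Fibre Channel IOA the same volume group that already contains the logical units. As a sketch using the DS CLI (the volume group ID V13 and host connection ID 001B are example values taken from this book's later load source example and are hypothetical here; use lshostconnect to identify the correct IDs in your configuration):

dscli> lshostconnect
dscli> chhostconnect -volgrp V13 001B

This is the same technique that is shown later in Figure 6-75 and Figure 6-76.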

Figure 6-46 shows an example where 48 logical volumes are configured in the DS8000. The
first 24 of these being in one DS volume group are assigned using a host adapter in the top
left I/O drawer in the DS8000 to a Fibre Channel (FC) I/O adapter in the first iSeries I/O tower
or rack. The next 24 logical volumes within another DS volume group are assigned using a
host adapter in the lower left I/O drawer in the DS8000 to an FC I/O adapter on a different bus
in the first iSeries I/O tower or rack. This is a valid single path configuration.

To implement multipath, the first group of 24 logical volumes is also assigned to an iSeries FC
I/O adapter in the second iSeries I/O tower or rack through a host adapter in the lower right
I/O drawer in the DS8000. The second group of 24 logical volumes is also assigned to an FC
I/O adapter on a different bus in the second iSeries I/O tower or rack through a host adapter in
the upper right I/O drawer.



(Diagram: volumes 1-24 and volumes 25-48 in the DS8000, reached through host adapters 1 to 4 in four I/O drawers, with logical connections to Fibre Channel IOAs on buses a and b in the first iSeries I/O tower or rack and on buses x and y in the second tower or rack.)

Figure 6-46 Example of multipath with the iSeries server

6.10 Protecting the external load source unit


With i5/OS V6R1, multipath is now also supported for the SAN load source unit, for both
#2847 IOP-based and #5749 or #5774 IOP-less Fibre Channel adapters. As a result, the load
source unit is not only data protected within the external storage unit, by either RAID-5 or
RAID-10, but is also protected against I/O path failures. i5/OS LUN multipathing is
implemented simply by assigning the logical volumes on the storage side to at least two
System i Fibre Channel I/O adapters, as discussed in 6.9, “Changing from single path to
multipath” on page 239.

Note: For the remainder of this section, we focus on implementing load source mirroring
for an #2847 IOP-based SAN load source prior to i5/OS V6R1.

Prior to i5/OS V6R1, the #2847 PCI-X IOP for SAN load source did not support multipath for
the external load source unit. To provide path protection for the external load source unit
prior to V6R1, it must be mirrored using i5/OS mirroring. Therefore, the two LUNs that are
used for mirroring the external load source across two #2847 IOP-based Fibre Channel
adapters (ideally in different I/O towers to provide highly redundant path protection) are
created as unprotected LUN models.

To mirror the load source unit, unless you are using SLIC V5R4M5 or later (see 6.4, “Setting
up an external load source unit” on page 211), initially assign only one LUN to the IOA that is
tagged as the load source unit IOA. Other LUNs, including the “mirror mate” for the load
source unit, should be assigned to another #2847 IOP-based IOA, as shown in Figure 6-47.
The simplest way to do this is to create two volume groups on the DS8000 or DS6000. The
first volume group (shown on the left) contains only the load source unit and is assigned to
the #2847 IOP-based IOA that is tagged as the load source IOA. The second volume group
(shown on the right) contains the load source unit mirror mate plus the remaining LUNs,
which eventually will have multiple paths. This volume group is assigned to the second
#2847 IOP-based IOA.

(Diagram: two iSeries I/O towers, each with a #2847 IOP and Fibre Channel IOA; the unprotected load source unit (LSU) is assigned to the first IOA and the unprotected load source mirror mate (LSU') to the second IOA.)

Figure 6-47 Initial LUN allocation
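
As a sketch of this two-volume-group approach using the DS CLI, where volume 1000 is the load source unit and volumes 1100-110F are the load source mirror mate plus the remaining LUNs (the volume IDs and names are hypothetical, and we assume the os400mask volume group type; verify the syntax against your DS CLI level):

dscli> mkvolgrp -type os400mask -volume 1000 LSU_VG
dscli> mkvolgrp -type os400mask -volume 1100-110F LSUMM_VG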



After you have loaded SLIC onto the load source unit, you can provide multipath for the
remaining LUNs by also assigning them to the IOA that is tagged as the load source IOA,
that is, by adding those LUNs to the volume group on the left, as shown in Figure 6-48.

(Diagram: the same two #2847 IOP-based Fibre Channel IOAs, with the remaining LUNs now assigned to both IOAs for multipath; the unprotected LSU and LSU' remain on their respective IOAs.)

Figure 6-48 Final LUN allocation

If you have more LUNs that require additional IOPs and IOAs, you can assign these to
volume groups that already use a multipath configuration, as shown in Figure 6-49. It is
important to ensure that your load source unit is initially the only volume assigned to the
#2847 IOP-based IOA that is tagged in the Hardware Management Console (HMC) as the
load source IOA. Our example shows a configuration with two redundant SAN switches to
avoid a single point of failure.

(Diagram: two iSeries I/O towers, each with a #2847 IOP and a #2844 IOP with Fibre Channel IOAs on separate buses, attached through two redundant SAN switches; initially, only the unprotected LSU is assigned to the tagged load source IOA, and the unprotected LSU' is assigned to the second #2847 IOP-based IOA.)

Figure 6-49 Initial LUN allocation with additional multipath LUNs



After SLIC is loaded on the load source unit, you can assign the multipath LUNs to the #2847
IOP-based IOA that is tagged as the load source IOA by adding them to the volume group (on
the left in Figure 6-50), which initially contained only the load source unit.

(Diagram: the same configuration after SLIC is loaded; the multipath LUNs are assigned to the IOAs in both towers through the redundant SAN switches, while the unprotected LSU and LSU' remain in place.)

Figure 6-50 Final LUN allocation with additional multipath LUNs

6.10.1 Setting up load source mirroring
After you create the LUN to be set up as the remote load source unit pair, this LUN and any
other LUNs are identified by SLIC and displayed under non-configured units in DST and SST.
To set up load source mirroring on the System i5 platform, you must use DST:
1. From the DST menu (Figure 6-51), select 4. Work with disk units.

Use Dedicated Service Tools (DST)


System: S101880D
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

Selection
4

F3=Exit F12=Cancel
Figure 6-51 Using Dedicated Service Tools panel

2. From the Work with Disk Units menu (Figure 6-52), select 1. Work with disk
configuration.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 6-52 Working with Disk Units panel



3. From the Work with Disk Configuration menu (Figure 6-53), select 4. Work with mirrored
protection.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Work with ASP threshold
3. Work with ASP configuration
4. Work with mirrored protection
5. Work with device parity protection
6. Work with disk compression

Selection
4

F3=Exit F12=Cancel
Figure 6-53 Work with Disk Configuration panel

4. From the Work with mirrored protection menu (Figure 6-54), select 4. Enable remote load
source mirroring. This option does not perform the remote load source mirroring but tells
the system that you want to mirror the load source when mirroring is started.

Work with mirrored protection

Select one of the following:

1. Display disk configuration


2. Start mirrored protection
3. Stop mirrored protection
4. Enable remote load source mirroring
5. Disable remote load source mirroring

Selection
4

F3=Exit F12=Cancel
Figure 6-54 Setting up remote load source mirroring

5. In the Enable Remote Load Source Mirroring confirmation panel (Figure 6-55), press
Enter to confirm that you want to enable remote load source mirroring.

Enable Remote Load Source Mirroring

Remote load source mirroring will allow you to place the two
units that make up a mirrored load source disk unit (unit 1) on
two different IOPs. This may allow for higher availability
if there is a failure on the multifunction IOP.

Note: When there is only one load source disk unit attached to
the multifunction IOP, the system will not be able to IPL if
that unit should fail.

This function will not start mirrored protection.

Press Enter to enable remote load source mirroring.


Figure 6-55 Enable Remote Load Source Mirroring panel

6. In the Work with mirrored protection panel, a message at the bottom of the panel
indicates that remote load source mirroring is enabled (Figure 6-56). Select 2. Start
mirrored protection to start mirroring for the load source unit.

Work with mirrored protection

Select one of the following:

1. Display disk configuration


2. Start mirrored protection
3. Stop mirrored protection
4. Enable remote load source mirroring
5. Disable remote load source mirroring

Selection
2

F3=Exit F12=Cancel
Remote load source mirroring enabled successfully.
Figure 6-56 Confirmation that remote load source mirroring is enabled



7. In the Work with mirrored protection menu, select 1. Display disk configuration, and
then select 1. Display disk configuration status.
Figure 6-57 shows the two unprotected LUNs (model A85) for the load source unit and its
mirror mate, with disk serial numbers 30-1000000 and 30-1100000. You can also see four
more protected LUNs (model A05) that also use multipath, as indicated by their resource
names beginning with DMP.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 30-1000000 1750 A85 DD001 Active
1 30-1100000 1750 A85 DD004 Active
2 30-1001000 1750 A05 DMP002 DPY/Active
3 30-1002000 1750 A05 DMP004 DPY/Active
5 30-1101000 1750 A05 DMP006 DPY/Active
6 30-1102000 1750 A05 DMP008 DPY/Active

Press Enter to continue.

F3=Exit F5-Refresh F9-Display disk unit details


F11=Disk configuration capacity F12=Cancel
Figure 6-57 Unprotected load source unit ready to start remote load source mirroring

8. When the remote load source mirroring task is finished, perform an IPL on the system to
start mirroring the data from the source unit to the target. This process is done during the
database recovery phase of the IPL.

6.11 Migration from mirrored to multipath load source


With the new i5/OS V6R1 release, System i supports multipath to the load source LUN for
both #2847 IOP-based and IOP-less IOAs.

Note: This migration is a disruptive procedure because it involves stopping mirrored
protection and, optionally, changing the LUN protection mode for the load source unit.

To migrate from a mirrored external load source unit to a multipath load source unit, follow
these steps:
1. Enter STRSST to start System Service Tools from the i5/OS command line.
2. Select 3. Work with disk units.
3. Select 2. Work with disk configuration.
4. Select 1. Display disk configuration.

5. Select 1. Display disk configuration status to look at your currently mirrored external
load source LUNs. Take note of the two serial numbers for your mirrored load source unit 1
(105E951 and 1060951 in our example) because you will need these numbers later for
changing the DS storage system configuration to a multipath setup.
6. Press F12 to exit from the Display Disk Configuration Status panel, as shown in
Figure 6-58.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-105E951 2107 A85 DD001 Active
1 50-1060951 2107 A85 DD002 Active
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active
Figure 6-58 Displaying mirrored disks

7. Select 6. Disable remote load source mirroring to turn off the remote load source
mirroring function as shown in Figure 6-59.

Note: Turning off the remote load source mirroring function does not stop mirrored
protection. However, disabling this function is required to allow mirrored protection to be
stopped in a later step.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Add units to ASPs
3. Work with ASP threshold
4. Add units to ASPs and balance data
5. Enable remote load source mirroring
6. Disable remote load source mirroring
7. Start compression on non-configured units
8. Work with device parity protection
9. Start hot spare
10. Stop hot spare

Selection
6

F3=Exit F12=Cancel
Figure 6-59 Disable remote load source mirroring



8. Press Enter to confirm your action in the Disable Remote Load Source Mirroring panel, as
shown in Figure 6-60.

Disable Remote Load Source Mirroring

Remote load source mirroring is currently enabled. You


selected to turn this function off. This may require that both
units that make up your mirrored load source disk unit (unit 1)
be attached to the same IOP.

This function will not stop mirrored protection.

Press Enter to disable remote load source mirroring.

F3=Exit F12=Cancel
Figure 6-60 Disable Remote Load Source Mirroring confirmation panel

9. A completion message displays, as shown in Figure 6-61.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Add units to ASPs
3. Work with ASP threshold
4. Add units to ASPs and balance data
5. Enable remote load source mirroring
6. Disable remote load source mirroring
7. Start compression on non-configured units
8. Work with device parity protection
9. Start hot spare
10. Stop hot spare

Selection

F3=Exit F12=Cancel
Remote load source mirroring disabled successfully.
Figure 6-61 Message after disabling the remote load source mirroring

10.To stop mirror protection, set your system to B-type manual mode IPL, and re-IPL the
system. When you get to the Dedicated Service Tools (DST) panel, continue with these
steps.

11. Select 4. Work with disk units as shown in Figure 6-62.

Use Dedicated Service Tools (DST)

System: RCHLTTN1
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 6-62 Work with disk units

12.Select 1. Work with disk configuration as shown Figure 6-63.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 6-63 Work with disk units



13.Select 4. Work with mirrored protection as shown in Figure 6-64.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Work with ASP threshold
3. Work with ASP configuration
4. Work with mirrored protection
5. Work with device parity protection
6. Work with disk compression
7. Work with hot spare protection

Selection
4

F3=Exit F12=Cancel
Figure 6-64 Work with mirrored protection

14.Select 3. Stop mirrored protection as shown in Figure 6-65.

Work with Mirrored Protection

Select one of the following:

1. Display disk configuration


2. Start mirrored protection
3. Stop mirrored protection
4. Enable remote load source mirroring
5. Disable remote load source mirroring
6. Select delay for unit synchronization

Selection
3

F3=Exit F12=Cancel
Figure 6-65 Stop mirrored protection

15.Enter 1 to select ASP 1, as shown in Figure 6-66.

Select ASP to Stop Mirrored Protection

Select the ASPs to stop mirrored protection on.

Type options, press Enter.


1=Select

Option ASP Protection


1 1 Mirrored

F3=Exit F12=Cancel
Figure 6-66 Selecting ASP to stop mirror

16.On the Confirm Stop Mirrored Protection panel, confirm that ASP 1 is selected, as shown
in Figure 6-67, and then press Enter to proceed.

Confirm Stop Mirrored Protection

Press Enter to confirm your choice to stop mirrored


protection. During this process the system will be IPLed.
You will return to the DST main menu after the IPL is
complete. The system will have the displayed protection.

Press F12 to return to change your choice.

Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 50-105E951 2107 A85 DD001 Unprotected
2 50-1061951 2107 A05 DMP003 RAID 5
3 50-105F951 2107 A05 DMP001 RAID 5

Figure 6-67 Confirm to stop mirrored protection

17.When the stop for mirrored protection completes, a confirmation panel displays as shown
in Figure 6-68.

Disk Configuration Information Report

The following are informational messages about disk


configuration changes started in the previous IPL.

Information

Stop mirroring completed successfully

Press Enter to continue


Figure 6-68 Successful message to stop mirroring



18.The previously mirrored load source is now a non-configured disk unit, as highlighted in
Figure 6-69.

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-1060951 2107 A85 DD002 35165 Non-configured

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 6-69 Non-configured disk

19.Now, you can exit from the DST panels to continue the manual mode IPL. At the Add All
Disk Units to the System panel, select 1. Perform any disk configuration at SST as
shown in Figure 6-70.

Add All Disk Units to the System


System: RCHLTTN1
Non-configured device parity capable disk units are attached
to the system. Disk units can not be added automatically.
It is more efficient to device parity protect these
units before adding them to the system.
These disk units may be parity enabled and added at SST.
Configured disk units must have parity enabled at DST.

Select one of the following:

1. Perform any disk configuration at SST


2. Perform disk configuration using DST

Selection
1
Figure 6-70 Message to add disks

20.You have stopped mirrored protection for the load source unit and re-IPLed the system
successfully. Now, use the DS CLI to identify the volume groups that contain the two LUNs
of your previously mirrored load source unit by entering the showfbvol volumeID command
for the previously mirrored load source unit (for volumeID use the four digits from the disk
unit serial number noted down in step 5) as shown in Figure 6-71.

dscli> showfbvol 1060


Date/Time: 9. November 2007 02:33:02 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-
7589951
Name TN1mm
ID 1060
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-A05
datatype FB 520P
addrgrp 1
extpool P4
exts 33
captype iSeries
cap (2^30B) 32.8
cap (10^9B) 35.2
cap (blocks) 68681728
volgrp V22
ranks 1
dbexts -
sam Standard
repcapalloc -
eam legacy
reqcap (blocks) 68681728
Figure 6-71 DS CLI: The showfbvol command

21.Enter showvolgrp volumegroup_ID for the two volume groups that contain the previously
mirrored load source unit LUNs, as shown in Figure 6-72 and Figure 6-73.

dscli> showvolgrp v13


Date/Time: November 7, 2007 3:31:51 AM IST IBM DSCLI Version: 5.3.0.991 DS:
IBM.2107-7589951
Name RedBookTN1LS_VG
ID V13
Type OS400 Mask
Vols 105E 105F 1061
Figure 6-72 DS CLI: The showvolgroup command

dscli> showvolgrp v22


Date/Time: November 7, 2007 3:31:54 AM IST IBM DSCLI Version: 5.3.0.991 DS:
IBM.2107-7589951
Name RedBookTN1MM_VG
ID V22
Type OS400 Mask
Vols 105F 1060 1061
Figure 6-73 DS CLI: The showvolgroup command



22.To start using multipath for all volumes, including the load source, that are attached to the
IOAs, add the previous load source mirror volume, which has become a non-configured
unit, to the volume group of the load source IOA, as shown in Figure 6-74. At this point in
the process, you have established two paths to the non-configured previous load source
mirror LUN.

dscli> chvolgrp -action add -volume 1060 V13


Date/Time: November 7, 2007 3:36:34 AM IST IBM DSCLI Version: 5.3.0.991 DS:
IBM.2107-7589951
CMUC00031I chvolgrp: Volume group V13 successfully modified.
Figure 6-74 Adding a volume into a volume group

23.To finish the multipath setup, make sure that the current load source unit LUN (LUN 105E
in our example) is also assigned to both System i IOAs. You assign the load source unit
LUN to the second IOA by assigning the volume group (V13 in our example) that now
contains both previously mirrored load source unit LUNs to both IOAs. To obtain the IOAs'
host connection IDs on the DS storage system for changing the volume group assignment,
enter the lshostconnect command, as shown in Figure 6-75. Note the IDs for the lines that
show the two load source IOA volume groups determined previously.

dscli> lshostconnect
Date/Time: November 7, 2007 3:30:36 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
===============================================================================================
RedBookTN1LS 0010 10000000C94C45CE iSeries IBM iSeries - OS/400 0 V13 all

RedBookTN1MM 001B 10000000C9509E12 iSeries IBM iSeries - OS/400 0 V22 all


Figure 6-75 DS SLI: The lshostconnect command

24.Change the volume group assignment of the IOA host connection that does not yet have
access to the current load source. (In our example, volume group V22 does not contain
the current load source unit LUN, so we have to assign volume group V13 that contains
both previous load source units to host connection 001B.) Use the chhostconnect -volgrp
volumegroupID hostconnectID command as shown in Figure 6-76.

dscli> chhostconnect -volgrp V13 001B


Date/Time: November 7, 2007 3:40:25 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
CMUC00013I chhostconnect: Host connection 001B successfully modified.
dscli> lshostconnect
Date/Time: November 7, 2007 3:40:33 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
===============================================================================================
RedBookTN1LS 0010 10000000C94C45CE iSeries IBM iSeries - OS/400 0 V13 all
RedBookTN1MM 001B 10000000C9509E12 iSeries IBM iSeries - OS/400 0 V13 all
Figure 6-76 DS CLI: Change the host connection volume group assignment

Now, we describe how to change two previously mirrored unprotected disk units to protected
ones.

Important: Changing the LUN protection status of a configured LUN, that is, a LUN that is
part of an ASP configuration, is not supported. To convert the unprotected load source disk
unit to a protected model, follow steps 12 to 18 in the process that follows.

Follow these steps:


1. Display the unprotected disk units by selecting Display disk configuration status and
Display non-configured disks on System i SST or DST as shown in Figure 6-77.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Unprotected
1 50-105E951 2107 A85 DMP007 Configured
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Disk configuration capacity F12=Cancel

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-1060951 2107 A85 DMP005 35165 Non-configured

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 6-77 Displaying unprotected disks

2. On the storage system, use the DS CLI lsfbvol command to display the unprotected,
previously mirrored load source LUNs, which show a datatype of FB 520U, indicating
unprotected volumes, as shown in Figure 6-78.

dscli> lsfbvol
Date/Time: November 7, 2007 3:25:51 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
==================================================================================================================
TN1ls 105E Online Normal Normal 2107-A85 FB 520U P0 32.8 35.2 68681728
TN1Vol1 105F Online Normal Normal 2107-A05 FB 520P P0 32.8 35.2 68681728
TN1mm 1060 Online Normal Normal 2107-A85 FB 520U P4 32.8 35.2 68681728
TN1Vol2 1061 Online Normal Normal 2107-A05 FB 520P P4 32.8 35.2 68681728

Figure 6-78 Listing unprotected disks



3. Change only the unconfigured previous load source volume from unprotected to protected
using the chfbvol -os400 protected volumeID command as shown in Figure 6-79.

dscli> chfbvol -os400 protected 1060


Date/Time: November 7, 2007 4:04:41 AM IST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7589951
CMUC00026I chfbvol: FB volume 1060 successfully modified.

TN1ls 105E Online Normal Normal 2107-A85 FB 520U P0 32.8 35.2 68681728
TN1Vol1 105F Online Normal Normal 2107-A05 FB 520P P0 32.8 35.2 68681728
TN1mm 1060 Online Normal Normal 2107-A05 FB 520P P4 32.8 35.2 68681728
TN1Vol2 1061 Online Normal Normal 2107-A05 FB 520P P4 32.8 35.2 68681728

Figure 6-79 Changing volume protection

4. Perform an IOP reset for the IOA that is attached to the unconfigured previous load source
volume on which you changed the protection mode on the storage system in the previous
step.

Note: This IOP reset is required so that System i rediscovers its devices and recognizes
the changed LUN protection mode.

To reset the IOP from SST/DST select the following options:


– 1. Start a service tool
– 7. Hardware service manager
– 2. Logical hardware resources
– 1. System bus resources
Then, select the correct 2847 IOP (the one that is not the load source IOP), and choose
6. I/O debug, as shown in Figure 6-80.

Logical Hardware Resources on System Bus

System bus(es) to work with . . . . . . *ALL *ALL, *SPD, *PCI, 1-9999


Subset by . . . . . . . . . . . . . . . *ALL *ALL, *STG, *WS, *CMN, *CRP

Type options, press Enter.


2=Change detail 4=Remove 5=Display detail 6=I/O debug
7=Display system information
8=Associated packaging resource(s) 9=Resources associated with IOP

Resource
Opt Description Type-Model Status Name
Bus Expansion Adapter 28E7- Operational BCC10
System Bus 28B7- Operational LB09
Multi-adapter Bridge 28B7- Operational PCI11D
6 Combined Function IOP 2847-001 Operational CMB03
HSL I/O Bridge 28E7- Operational BC05
Bus Expansion Adapter 28E7- Operational BCC05
System Bus 28B7- Operational LB04
More...
F3=Exit F5=Refresh F6=Print F8=Include non-reporting resources
F9=Failed resources F10=Non-reporting resources
F11=Display serial/part numbers F12=Cancel
Figure 6-80 Selecting IOP for reset

5. Select 3. Reset I/O processor to reset the IOP as shown in Figure 6-81.

Select IOP Debug Function

Resource name . . . . . . . . : CMB03


Dump type . . . . . . . . . . : Normal

Select one of the following:

1. Read/Write I/O processor data


2. Dump I/O processor data
3. Reset I/O processor
4. IPL I/O processor
5. Enable I/O processor trace
6. Disable I/O processor trace

Selection
3

F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Figure 6-81 Reset IOP option

6. Press Enter to confirm the IOP reset, as shown in Figure 6-82.

Confirm Reset Of IOP

You have requested that an I/O processor be reset.

Note: This will disturb active jobs running on


this IOP or on the devices attached to this IOP.

Press Enter to confirm your actions.


Press F12 to cancel this request.

F3=Exit F12=Cancel
Figure 6-82 Confirming IOP reset



After the IOP is reset successfully, a confirmation message displays, as shown in
Figure 6-83.

Select IOP Debug Function

Resource name . . . . . . . . : CMB03


Dump type . . . . . . . . . . : Normal

Select one of the following:

1. Read/Write I/O processor data


2. Dump I/O processor data
3. Reset I/O processor
4. IPL I/O processor
5. Enable I/O processor trace
6. Disable I/O processor trace

Selection

F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Reset of IOP was successful.
Figure 6-83 IOP reset confirmation message

7. Now, select 4. IPL I/O processor in the Select IOP Debug Function menu to IPL the I/O
processor, as shown in Figure 6-84. Press Enter to confirm your selection.

Select IOP Debug Function

Resource name . . . . . . . . : CMB03


Dump type . . . . . . . . . . : Normal

Select one of the following:

1. Read/Write I/O processor data


2. Dump I/O processor data
3. Reset I/O processor
4. IPL I/O processor
5. Enable I/O processor trace
6. Disable I/O processor trace

Selection
4

F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Figure 6-84 IPL I/O

After a successful IPL, a confirmation message displays, as shown in Figure 6-85.

Select IOP Debug Function

Resource name . . . . . . . . : CMB03


Dump type . . . . . . . . . . : Normal

Select one of the following:

1. Read/Write I/O processor data


2. Dump I/O processor data
3. Reset I/O processor
4. IPL I/O processor
5. Enable I/O processor trace
6. Disable I/O processor trace

Selection

F3=Exit F12=Cancel
F8=Disable I/O processor reset F9=Disable I/O processor IPL
Re-IPL of IOP was successful.
Figure 6-85 I/O IPL confirmation message

8. Next, check the changed protection status for the unconfigured previous load source LUN
in the SST Display non-configured units menu as shown in Figure 6-86.

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-1060951 2107 A05 DMP006 35165 Non-configured

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 6-86 SST - Display Non-Configured Units

Now, we explain the remaining steps to change the unprotected load source unit to a
protected load source. To look at the current unprotected load source unit, we choose the
DST menu function Display disk configuration status as shown in Figure 6-87.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Unprotected
1 50-105E951 2107 A85 DMP007 Configured
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Disk configuration capacity F12=Cancel
Figure 6-87 Display Disk Configuration Status



9. Select 4. Work with disk units in the DST main menu, as shown in Figure 6-88.

Use Dedicated Service Tools (DST)


System: RCHLTTN1
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 6-88 DST: Main menu

10.Select 2. Work with disk unit recovery as shown in Figure 6-89.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
2

F3=Exit F12=Cancel
Figure 6-89 Work with Disk Units

11.Select 9. Copy disk unit data as shown in Figure 6-90.

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOA cache storage
More...

Selection
9

F3=Exit F11=Display disk configuration status F12=Cancel


Figure 6-90 DST: Copy disk unit data

12.Select the current unprotected load source unit 1 as the disk unit from which to copy, as
shown in Figure 6-91.

Select Copy from Disk Unit

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 50-105E951 2107 A85 DMP007 Configured
2 1 50-1061951 2107 A05 DMP003 RAID 5/Active
3 1 50-105F951 2107 A05 DMP001 RAID 5/Active

F3=Exit F5=Refresh F11=Display non-configured units F12=Cancel


Figure 6-91 DST: Copy from Disk Unit



13.Select the unconfigured previous load source mirror as the copy-to-disk-unit, as shown in
Figure 6-92.

Select Copy to Disk Unit Data

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-105E951 2107 A85 DMP007 Configured

1=Select

Serial Resource
Option Number Type Model Name Status
1 50-1060951 2107 A05 DMP006 Non-configured

F3=Exit F11=Display disk configuration status F12=Cancel


Figure 6-92 DST: Select Copy to Disk Unit Data

14.Press Enter to confirm the choice, as shown in Figure 6-93.

Confirm Copy Disk Unit Data

Press Enter to confirm your choice for copy.


Press F12 to return to change your choice.

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-105E951 2107 A85 DMP007 Configured

Disk that is copied to:

Serial Resource
Number Type Model Name Status
50-1060951 2107 A05 DMP006 Non-configured

F12=Cancel
Figure 6-93 Confirm Copy Disk Unit Data

During the copy process, the system displays the Copy Disk Unit Data Status panel, as
shown in Figure 6-94.

Copy Disk Unit Data Status

The operation to copy a disk unit will be done in several phases.


The phases are listed here and the status will be indicated when
known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : Completed
Start compression (if needed) . . . . . : Completed
Copy status. . . . . . . . . . . . . . . : 99 % Complete

Number of unreadable pages:


Figure 6-94 Copy status

15.After the copy process completes successfully, the system IPLs automatically. During the
IPL, a message displays because the system found a non-configured unit (the previous
load source). You can continue by selecting 1. Keep the current disk configuration, as
shown in Figure 6-95.

Add All Disk Units to the System


System: RCHLTTN1
Select one of the following:

1. Keep the current disk configuration


2. Perform disk configuration using DST
3. Add all disk units to the system auxiliary storage pool
4. Add all disk units to the system ASP and balance data

Selection
1

Figure 6-95 Add All Disk Units to the System

16.Next, look at the protected load source unit using the Display Disk Configuration Status
menu, as shown in Figure 6-96.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Unprotected
1 50-1060951 2107 A05 DMP006 RAID 5/Active
2 50-1061951 2107 A05 DMP003 RAID 5/Active
3 50-105F951 2107 A05 DMP001 RAID 5/Active

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Disk configuration capacity F12=Cancel
Figure 6-96 Display Disk Configuration Status

17.Then, look at the previous load source unit with its unprotected status using the Display
Non-Configured Units menu as shown in Figure 6-97.

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-105E951 2107 A85 DMP007 35165 Non-configured

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 6-97 Display Non-Configured Units



18.If you want to change this non-configured unit, which was the previous load source from
which you migrated the data, to a protected unit, use the DS CLI chfbvol command
followed by an IOP reset or re-IPL, as described in steps 1 to 4.
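
For the serial numbers in our example, the command would be similar to the following sketch (the volume ID 105E is the previous load source unit from the example above; as before, the protection mode of a LUN can be changed only while it is not part of an ASP configuration):

dscli> chfbvol -os400 protected 105E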

6.12 Migration considerations from IOP-based to IOP-less Fibre Channel

The migration from IOP-based to IOP-less Fibre Channel applies only to customers who
continued using older #2787 or #5760 IOP-based Fibre Channel IOAs on a new System i
POWER6 server (note that #2766 is not supported on System i POWER6) and now want to
remove the old IOP-based technology to take advantage of the new IOP-less Fibre Channel
performance and its higher level of integration.

Note: Carefully plan and size your IOP-less Fibre Channel adapter card placement in your
System i server and its attachment to your storage system to avoid potential I/O loop or FC
port performance bottlenecks with the increased IOP-less I/O performance. Refer to
Chapter 4, “i5/OS planning for external storage” on page 75 and Chapter 5, “Sizing
external storage for i5/OS” on page 115 for further information.

Important: Do not try to work around the migration procedures that we discuss in this
section by concurrently replacing the IOP/IOA pair for one mirror side or one path after the
other. Concurrent hardware replacement is supported only for like-to-like replacement
using the same feature codes.

Because the migration procedures are straightforward, we only outline the required steps for
the different configurations.

6.12.1 IOP-less migration in a multipath configuration


When using IOP-based i5/OS multipathing, you can perform the migration to IOP-less Fibre
Channel concurrently without shutting down the system as follows:
1. Add an IOP-less IOA into another I/O slot.
2. Move the FC cable from the old IOP-based FC IOA to the new IOP-less IOA.
3. Change the host connection on the DS storage system to reflect the new WWPN.

Internally for each multipath group, this process creates a new multipath connection. Some
time later, you need to remove the obsolete connection using the multipath reset function (see
6.13, “Resetting a lost multipath configuration” on page 267).
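
On the DS storage system, one way to reflect the new WWPN is to remove the host connection of the old IOP-based IOA and to create a new host connection for the IOP-less IOA that uses the same volume group. The following DS CLI commands are only a sketch; the host connection ID 0010, the WWPN 10000000C9AAAAAA, the volume group V13, and the connection name are hypothetical values that you must replace with the IDs from your own configuration (use lshostconnect to find them):

dscli> rmhostconnect 0010
dscli> mkhostconnect -wwname 10000000C9AAAAAA -hosttype iSeries -volgrp V13 IOPLESS_IOA1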

6.12.2 IOP-less migration in a mirroring configuration


When using IOP-based i5/OS mirroring for external storage, you can migrate to IOP-less
Fibre Channel as follows:
1. Turn off the System i server.
2. Replace the Fibre Channel IOP/IOA cards with IOP-less cards.
3. Change the host connections on the DS storage system to reflect the new WWPNs.

6.12.3 IOP-less migration in a configuration without path redundancy
If you do not use multipath or mirroring, you need to follow these steps to migrate to IOP-less
Fibre Channel:
1. Turn off the System i server.
2. Replace the Fibre Channel IOP/IOA cards with IOP-less cards.
3. Change the host connections on the DS storage system to reflect the new WWPNs.

6.13 Resetting a lost multipath configuration


If any unknown paths are reported after you change your System i storage attachment
configuration, which typically happens when you have reduced the number of Fibre Channel
paths to the System i host, follow the procedure that we describe in this section. A reset of the
multipath configuration on the System i server frees up orphaned path resource information
after a multipath configuration change.

Note: An IPL might be required so that the System i recognizes the missing paths.

6.13.1 Resetting the lost multipath configuration for V6R1


To reset the lost multipath configuration for V6R1:
1. Log in to i5/OS System Service Tools using the STRSST command.
2. Select 1. Start a service tool.
3. Select 7. Hardware service manager.
4. Select 1. Packaging hardware resources.
5. Select 9 for Disk Unit System as shown in Figure 6-98.

Packaging Hardware Resources

Local system type . . . . : 9406


Local system serial number: XX-XXXXX
Type options, press Enter.
2=Change detail 3=Concurrent maintenance 4=Remove 5=Display detail
8=Associated logical resource(s) 9=Hardware contained within package
Type- Resource
Opt Description Model Unit ID Name
Optical Storage Unit = 6333-002 U787B.001.DNW5A3B SD001
Tape Unit 6380-001 SD003
9 Disk Unit System + DE01

Figure 6-98 Packaging Hardware Resources



6. Select 7=Paths to multiple path disk on the disks that you want to reset as shown in
Figure 6-99.

Disk Units Contained Within Package


Resource name: DE01

Type options, press Enter.


2=Change detail 4=Remove 5=Display detail 7=Paths to multiple path
disk
8=Associated logical resource(s)

Type- Serial Resource Multiple


Opt Model Number Name Status Path Disk
_ 2107-A82 50-10000B2 DMP025 Unknown Yes
7 2107-A82 50-10000B2 DMP024 Operational Yes
_ 2107-A82 50-10010B2 DMP027 Unknown Yes
_ 2107-A82 50-10010B2 DMP026 Unknown Yes
_ 2107-A82 50-10010B2 DMP023 Unknown Yes
_ 2107-A82 50-10010B2 DMP022 Operational Yes
_ 2107-A82 50-10020B2 DMP011 Unknown Yes
_ 2107-A82 50-10020B2 DMP021 Operational Yes

F3=Exit F5=Refresh F6=Print F12=Cancel F14=Reset paths

Figure 6-99 Disk Units Contained Within Package

7. When prompted for confirmation, press F10, as shown in Figure 6-100.

Reset Paths to Mulitple Path Disk Unit

WARNING: This service function should be run only under the direction of
the IBM Hardware Service.
You have selected to reset the number of paths on a multipath unit to
equal
the number of paths currently enlisted.
Press F10 to reset the paths to the following multipath disk units.

See help for more details

Type- Serial Resource


Model Number Name Logical Address
2107-A82 50-10000B2 DMP024 2/ 34/0/ 32-2/ 6/ 0/ 3/ 1/ /
2107-A82 50-10000B2 DMP025 2/ 34/0/ 32-2/ 4/ 0/ 3/ 1/ /

F3=Exit F10=Confirm F12=Cancel


Figure 6-100 Reset Paths to Multiple Path Disk Unit

8. When the operation is complete, a confirmation panel displays, as shown in Figure 6-101.

Reset Paths to Mulitple Path Disk Unit

WARNING: This service function should be run only under the direction of
the IBM Hardware Service.
You have selected to reset the number of paths on a multipath unit to
equal
the number of paths currently enlisted.

Press F10 to reset the paths to the following multipath disk units.
See help for more details

Type- Serial Resource


Model Number Name Logical Address
2107-A82 50-10000B2 DMP024 2/ 34/0/ 32-2/ 6/ 0/ 3/ 1/
2107-A82 50-10000B2 DMP025 2/ 34/0/ 32-2/ 4/ 0/ 3/ 1/

F3=Exit F10=Confirm F12=Cancel


Removal of the selected resources was successful.

Figure 6-101 SST: Multipath reset confirmation panel

Note: The DMPxxx resource name is not reset to DDxxx when multipathing is stopped.



6.13.2 Resetting a lost multipath configuration for versions prior to V6R1
To reset a lost multipath configuration for versions prior to V6R1:
1. Start i5/OS to DST or, if i5/OS is running, access SST and sign in. Select 1. Start a
Service Tool.
2. In the Start a Service Tool panel, select 1. Display/Alter/Dump, as shown in
Figure 6-102.

Start a Service Tool


System: RCHLTTN3
Attention: Incorrect use of this service tool can cause damage
to data in this system. Contact your service representative
for assistance.

Select one of the following:

1. Display/Alter/Dump
2. Licensed Internal Code log
3. Trace Licensed Internal code
4. Hardware service manager
5. Main storage dump manager
6. Product activity log
7. Operator panel functions
8. Performance data collector

Selection
1
F3=Exit F12=Cancel
Figure 6-102 Starting a Service Tool panel

3. In the Display/Alter/Dump Output Device panel, select 1. Display/Alter storage, as shown
in Figure 6-103.

Attention: Use extreme caution when using the Display/Alter/Dump Output panel
because you can damage your system configuration. Ideally, when performing these
tasks for the first time, do so only after consulting IBM Support.

Display/Alter/Dump Output Device

Select one of the following:

1. Display/Alter storage
2. Dump to printer

4. Dump to media

6. Print dump from media


7. Display dump status
Selection
1
F3=Exit F12=Cancel
Figure 6-103 Display/Alter/Dump Device Output panel

4. In the Select Data panel, select 2. Licensed Internal Code (LIC) data, as shown in
Figure 6-104.

Select Data

Output device . . . . . . : Display

Select one of the following:

1. Machine Interface (MI) object


2. Licensed Internal Code (LIC) data
3. LIC module
4. Tasks/Processes
5. Starting address
Selection
2
F3=Exit F12=Cancel
Figure 6-104 Selecting data for Display/Alter/Dump



5. In the Select LIC Data panel, scroll down the page and select 14. Advanced analysis (as
shown in Figure 6-105), and press Enter.

Select LIC Data

Output device . . . . . . : Display

Select one of the following:

11. Main storage usage trace


12. Transport manager traces
13. Storage management functional trace
14. Advanced analysis
15. Journal list
16. Journal work segment
17. Database work segment
18. Vnode data
19. Allow fix apply on altered LIC

Bottom
Selection
14

F3=Exit F12=Cancel
Figure 6-105 Selecting Advanced analysis

6. In the Select Advanced Analysis Command panel, scroll down the page, and select 1 to
run the MULTIPATHRESETTER macro, as shown in Figure 6-106.

Select Advanced Analysis Command

Output device . . . . . . : Display

Type options, press Enter.


1=Select

Option Command

JAVALOCKINFO
LICLOG
LLHISTORYLOG
LOCKINFO
MASOCONTROLINFO
MASOWAITERINFO
MESSAGEQUEUE
MODINFO
MPLINFO
1 MULTIPATHRESETTER
MUTEXDEADLOCKINFO
MUTEXINFO
More...
F3=Exit F12=Cancel
Figure 6-106 Select Advanced Analysis Command panel

7. The multipath resetter macro has various options, which are displayed in the Specify
Advanced Analysis Options panel (Figure 6-107). For Options, enter -RESETMP -ALL.

Specify Advanced Analysis Options

Output device . . . . . . : Display

Type options, press Enter.

Command . . . . : MULTIPATHRESETTER

Options . . . . . -RESETMP -ALL

F3=Exit F4=Prompt F12=Cancel


Figure 6-107 Multipath reset options



The Display Formatted Data panel displays as confirmation (Figure 6-108).

Display Formatted Data


Page/Line. . . 1 / 1
Columns. . . : 1 - 78
Find . . . . . . . . . . .
....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
DISPLAY/ALTER/DUMP
Running macro: MULTIPATHRESETTER -RESETMP -ALL
Reset the paths for Multiple Connections

***RESET MULTIPATH UNIT PATHS TO NUMBER CURRENTLY ENLISTED***

This service function should be run only under the direction of the
IBM Hardware Service Support. You have selected to reset the
number of paths on a multipath unit to equal the number of paths
that have currently enlisted.

To force the error, do the following:


1. Press Enter now to return to the previous display.
2. Change the '-RESETMP keyword on the Options line to
'-CONFIRM' and press Enter

More...
F2=Find F3=Exit F4=Top F5=Bottom F10=Right F12=Cancel
Figure 6-108 Multipath reset confirmation

8. Press Enter to return to the Specify Advanced Analysis Options panel (Figure 6-109). For
Options, enter -CONFIRM -ALL.

Specify Advanced Analysis Options

Output device . . . . . . : Display

Type options, press Enter.

Command . . . . : MULTIPATHRESETTER

Options . . . . . -CONFIRM -ALL


F3=Exit F4=Prompt F12=Cancel
Figure 6-109 Confirming the multipath reset

9. In the Display Formatted Data panel (Figure 6-110), press F3 to return to the Specify
Advanced Analysis Options panel (Figure 6-107 on page 273).

Display Formatted Data


Page/Line. . . 1 / 1
Columns. . . : 1 - 78
Find . . . . . . . . . . .
....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
DISPLAY/ALTER/DUMP
Running macro: MULTIPATHRESETTER -CONFIRM -ALL
Reset the paths for Multiple Connections

*********************************************************************
***CONFIRM RESET MULTIPATH UNIT PATHS TO NUMBER CURRENTLY ENLISTED***
*********************************************************************

This service function should be run only under the direction of the
IBM Hardware Service Support.

You have selected to reset the number of paths on a multipath unit


to equal the number of paths that have currently enlisted.

Attempting to reset path for resource name: DMP003

More...
F2=Find F3=Exit F4=Top F5=Bottom F10=Right F12=Cancel
Figure 6-110 Multipath reset results

10.In the Specify Advanced Analysis Options panel (Figure 6-109 on page 274), repeat the
confirmation process to ensure that the path reset is performed. Retain the setting for the
Option parameter as -CONFIRM -ALL, and press Enter again.



11.The Display Formatted Data panel shows the results (Figure 6-111). In our example, it
indicates that no disk unit paths have to be reset.

Display Formatted Data


Page/Line. . . 1 / 1
Columns. . . : 1 - 78
Find . . . . . . . . . . .
....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
DISPLAY/ALTER/DUMP
Running macro: MULTIPATHRESETTER -CONFIRM -ALL
Reset the paths for Multiple Connections

*********************************************************************
***CONFIRM RESET MULTIPATH UNIT PATHS TO NUMBER CURRENTLY ENLISTED***
*********************************************************************

This service function should be run only under the direction of the
IBM Hardware Service Support.

You have selected to reset the number of paths on a multipath unit


to equal the number of paths that have currently enlisted.

Could not find any disk units with paths which need to be reset.
Bottom
F2=Find F3=Exit F4=Top F5=Bottom F10=Right F12=Cancel
Figure 6-111 No disks have to be reset

Note: The DMPxxx resource name is not reset to DDxxx when multipathing is stopped.


Chapter 7. Migrating to i5/OS boot from SAN


This chapter describes how to migrate an existing system to use the i5/OS boot from SAN
capability. In addition, it provides the following migration scenarios:
 RAID protected internal LSU migrating to external mirrored or multipath LSU
 Internal LSU mirrored to internal LSU migrating to external LSU
 Internal LSU mirrored to internal remote LSU migrating to external LSU
 Internal LSU mirrored to external remote LSU migrating to external LSU
 Unprotected internal LSU migrating to external LSU
 Migrating to external LSU from iSeries 8xx or 5xx with 8 GB LSU
 SAN to SAN storage migration



7.1 Overview of this chapter
Traditionally, System i has required an internal disk as its boot disk or load source unit (LSU).
Boot from SAN, available since i5/OS V5R3M5 with the #2847 IOP-based Fibre Channel
attachment and since i5/OS V6R1 with IOP-less Fibre Channel, removes this requirement. In
this chapter, we provide the steps that are necessary to migrate systems safely to use this
support.

Important: Before you start with any migration, as with any form of upgrade or system
re-configuration, it is important that you be able to recover the system in the event of a
failure. Therefore, we strongly recommend that you have two copies of a current full system
backup before attempting any migration of your systems.

Important: The migration procedures that we describe require that you use tools from DST
and that you perform removals of disk units. If you are uncomfortable performing these
tasks, consult an IBM Sales Representative or your IBM Business Partner.

7.2 Migration prerequisites


For you to implement the boot from SAN function successfully, we highly recommend that you
read the planning information included in 4.2.1, “Planning considerations for boot from SAN”
on page 78.

You must ensure that your system has the correct level of i5/OS, service processor, and HMC
code, as well as a minimum of one, but preferably two, #2847 IOPs or IOP-less Fibre Channel
cards for SAN Load before attempting any of the migration procedures that we describe in
this chapter.

For the scenarios that we describe, we assume that you have already configured the storage
system and have performed any other disk migration work. Depending on your migration
scenario, you need one protected or two unprotected LUNs of at least 17 GB for use as the
new LSU. If you have two LUNs for mirroring the external load source, they need to be on
separate #2847 IOP-based or IOP-less Fibre Channel adapters.

Important: Make sure that the DS host port that is used for IOP-less direct storage
attachment is configured for the FC-AL protocol. In all other cases, such as SAN
switch-attached IOP-less adapters or #2847 IOP-based Fibre Channel adapters, make
sure that the DS host port is configured for FC-SW (SCSI-FCP).
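
You can check and, if required, change the topology of a DS host port with the DS CLI, for example (a sketch; the port ID I0010 is hypothetical, and we assume fc-al and scsi-fcp are the topology values that correspond to FC-AL and FC-SW attachment on your DS CLI level):

dscli> lsioport
dscli> setioport -topology fc-al I0010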

These scenarios cover only the migration of the LSU to the external storage subsystem. If you
want to perform any further configuration, review the discussion about implementing external
storage in Chapter 6, “Implementing external storage with i5/OS” on page 207.

You need to know which of your internal disks is currently your LSU.

7.2.1 Pre-migration checklist
Table 7-1 is a checklist of the information about your system that you need to gather before
you begin the migration process. Complete the information in this table before you start any
procedures to migrate your LSU. You can determine the locations of the #2847 IOPs from the
Hardware Configuration Listing. We describe the procedure for determining the current load
source protection in 7.3, “Migration scenarios” on page 280.

Table 7-1 Pre-migration checklist


Item Value

HMC Code Level

SP Code Level

i5/OS Level

First #2847 or IOP-less FC Location

Second #2847 or IOP-less FC Location

Current Load Source Protection

Current First Load Source Location

Current Second Load Source Location (if applicable)



7.3 Migration scenarios
A number of different scenarios are possible, depending on how your system is currently configured. This section describes some of them.

To determine which scenario is appropriate for your environment, you first need to identify
your current environment as follows:
1. Issue the STRSST command. A panel similar to that shown in Figure 7-1 opens.

Start Service Tools (STRSST) Sign On

SYSTEM: MICKEY

Type choice, press Enter.

Service tools user ID. . . .


Service tools password . . .

Note: The password is case-sensitive.

F3=Exit F9=Change Password F12=Cancel


Figure 7-1 STRSST Sign On panel

2. Enter your user ID and password. In the System Service Tools (SST) main menu, select
3. Work with disk units, as shown in Figure 7-2.

System Service Tools (SST)

Select one of the following:

1. Start a service tool


2. Work with active service tools
3. Work with disk units
4. Work with diskette data recovery
5. Work with system partitions
6. Work with system capacity
7. Work with system security
8. Work with service tools user IDs and Devices

Selection
3

F3=Exit F10=Command entry F12=Cancel


Figure 7-2 SST main menu



3. In the Work with Disk Units panel, select 1. Display disk configuration, as shown in
Figure 7-3.

Work with Disk Units

Select one of the following:

1. Display disk configuration


2. Work with disk configuration
3. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 7-3 Work with Disk Units

4. In the Display Disk Configuration panel, select 3. Display disk configuration protection,
as shown in Figure 7-4.

Display Disk Configuration

Select one of the following:

1. Display disk configuration status


2. Display disk configuration capacity
3. Display disk configuration protection
4. Display non-configured units
5. Display device parity status
6. Display disk hardware status
7. Display disk compression status

Selection
3

F3=Exit F12=Cancel
Figure 7-4 Display Disk Configuration



5. In the Display Disk Configuration Protection panel, shown in Figure 7-5, you can identify the setup of the existing load source. The unit numbered 1 is the load source unit. If the load source is mirrored, two disk units are listed as unit 1.

Display Disk Configuration Protection

Serial Resource
ASP Unit Number Type Model Name Protection
1 Mirrored
1 68-0D0BC12 6718 050 DD007 I/O Bus
1 68-0D0A0DA 6718 050 DD002 I/O Bus
2 68-0D0A6AE 6718 050 DD010 Bus
2 68-0D09EA2 6718 050 DD006 Bus
3 68-0D0A722 6718 050 DD011 Bus
3 68-0D0A773 6718 050 DD001 Bus
4 68-0D0A733 6718 050 DD012 Bus
4 68-0D09F9C 6718 050 DD008 Bus
5 68-0D0AB08 6718 050 DD009 Bus
5 68-0D0BBB0 6718 050 DD003 Bus
6 68-0D0A51B 6718 050 DD005 I/O Bus
6 68-0D0BB13 6718 050 DD004 I/O Bus

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display non-configured units F12=Cancel
Figure 7-5 Display Disk Configuration Protection

Identify your scenario as follows:
 If you have a single load source unit and the protection shows Unprotected, you have an
unprotected load source unit. Follow the procedure that we describe in 7.3.5, “Unprotected
internal LSU migrating to external LSU” on page 367.
 If you have a single load source unit and the protection shows Device Parity, you have a
RAID protected load source unit. Follow the procedure that we describe in 7.3.1, “RAID
protected internal LSU migrating to external mirrored or multipath LSU” on page 285.
 If you have dual load source units and the protection shows Controller, you are mirrored
to an internal load source unit. Follow the procedure that we describe in 7.3.2, “Internal
LSU mirrored to internal LSU migrating to external LSU” on page 321.
 If you have dual load source units and the protection shows anything other than
Controller, then you have enabled remote load source mirroring. If one of the unit 1 disk
units shows a type of 2105, 1750, or 2107, then you have an external remote load source
mirror. Follow the procedure that we describe in 7.3.4, “Internal LSU mirrored to external
remote LSU migrating to external LSU” on page 358.
 If your devices are any other type, then you have a remotely mirrored internal load source
unit. Follow the procedure that we describe in 7.3.3, “Internal LSU mirrored to internal
remote LSU migrating to external LSU” on page 339.

7.3.1 RAID protected internal LSU migrating to external mirrored or
multipath LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN (see 7.2, “Migration prerequisites” on page 278). In this section, we describe the steps to migrate the system from using an internal LSU that is device parity protected to using the boot from SAN function.

Attention: Do not turn off the system while any disk unit data function is running.
Unpredictable errors can occur if the system is turned off in the middle of the load source
migration function.

Important: This procedure requires that you stop RAID protection on the LSU parity set, which leaves your system unprotected, and that you remove a disk unit. If you are not comfortable with this task, engage services from your IBM Sales Representative or IBM Business Partner.

When migrating from an internal load source to external, you need to decide on your
protection strategy:
 For i5/OS V6R1 and later
We suggest that you retain a RAID protected load source and that you provide path
redundancy using load source multipathing. To enable this protection, create the LUN in
the DS system as protected and include it in one DS volume group, which is assigned to
two Fibre Channel IOAs to be used for boot from SAN (see Figure 7-6).

Figure 7-6 Boot from SAN Migration for parity protected internal LSU for i5/OS V6R1 and later



 Prior to i5/OS V6R1
Because multipathing for the load source is not supported prior to i5/OS V6R1, mirror your external load source across two #2847 I/O processor (IOP) and I/O adapter (IOA) pairs to provide path redundancy. For this purpose, create the two LUNs in the DS system as unprotected and assign them to two different DS volume groups, which allows you to attach other LUNs to the two #2847 IOP-based FC adapters in a multipath configuration (see Figure 7-7).

Figure 7-7 Boot from SAN Migration for parity protected internal LSU prior to i5/OS V6R1

In our discussion, we assume that you follow these recommendations for a protection
strategy.

Before you begin, use either DS CLI or the DS Storage Manager GUI to configure the load source unit LUN or LUNs on the external storage system. For further information, refer to Chapter 8, “Using DS CLI with System i” on page 391 if you are using DS CLI or refer to 9.2, “Configuring DS Storage Manager logical storage” on page 474 if you are using the GUI.
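
As an illustration only, the following DS CLI script-style sketch creates the load source LUN either as one protected volume (model A05, approximately 35 GB) for an i5/OS V6R1 multipath load source or as two unprotected volumes (model A85) for a mirrored load source. The storage image ID, extent pools, volume IDs, and volume group IDs are assumed example values; use the sizes, pools, and volume groups that fit your configuration:

# i5/OS V6R1 multipath load source: one protected volume in one volume group
mkfbvol -dev IBM.2107-75ABCDE -extpool P1 -os400 A05 -name SYS1_LSU 1000
chvolgrp -dev IBM.2107-75ABCDE -action add -volume 1000 V10

# Mirrored load source: two unprotected volumes, one in each of two volume groups
mkfbvol -dev IBM.2107-75ABCDE -extpool P1 -os400 A85 -name SYS1_LSUA 1100
mkfbvol -dev IBM.2107-75ABCDE -extpool P2 -os400 A85 -name SYS1_LSUB 1101
chvolgrp -dev IBM.2107-75ABCDE -action add -volume 1100 V10
chvolgrp -dev IBM.2107-75ABCDE -action add -volume 1101 V11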

If you plan to use a mirrored external load source, add only one of the two unprotected LUNs
to the system ASP at this time.

Attention: At this time, you should have one LUN that is non-configured, which is a protected model A0x if you are using an i5/OS V6R1 multipath load source or an unprotected model A8x if you are going to mirror your external load source LUN. You should have another unprotected LUN added to your system ASP only if you are going to use load source mirroring. Make sure to note the serial number of your external load source LUN or LUNs for further reference.

Begin the migration process by accessing the Hardware Management Console (HMC) and
changing the partition settings to do a manual IPL as follows:
1. From the Systems Management → Servers navigation tree, select your managed server.
Select the partition with which you are working. Then, click Tasks → Properties as shown
in Figure 7-8.

Note: For HMC versions earlier than V7, right-click the partition name and select Properties.

Figure 7-8 Select Partition Properties

2. In the Partition Properties window, shown in Figure 7-9, select the Settings tab.

Figure 7-9 Partition Properties



3. On the Settings tab, change the Keylock Position to Manual (see Figure 7-10).

Figure 7-10 Setting the IPL type

4. IPL to DST using the following command:


PWRDWNSYS OPTION(*IMMED) RESTART(*YES)

Stopping device parity protection


To stop the device parity protection, follow these steps:
1. After your system re-IPLs, in the IPL or Install the System panel, select 3. Use Dedicated
Service Tools (DST), as shown in Figure 7-11.

IPL or Install the System


System: BUCKLEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-11 IPL or Install the System

2. Log on with your DST user ID and password, as shown in Figure 7-12.

Dedicated Service Tools (DST) Sign On


System: BUCKLEY
Type choices, press Enter.

Service tools user . . . . . . . . . . .


Service tools password . . . . . . . . .

F3=Exit F5=Change password F12=Cancel


Figure 7-12 Logging in to DST



3. At the Use Dedicated Service Tools (DST) panel, select 4. Work with disk units, as
shown in Figure 7-13.

Use Dedicated Service Tools (DST)


System: BUCKLEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-13 Use Dedicated Service Tools

4. Turn off the device parity protection on the load source to prepare for the migration by
selecting 1. Work with disk configuration (Figure 7-14).

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 7-14 Work with Disk Units



5. In the Work with Disk Configuration panel, select 5. Work with device parity protection
(Figure 7-15).

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Work with ASP threshold
3. Work with ASP configuration
4. Work with mirrored protection
5. Work with device parity protection
6. Work with disk compression

Selection
5

F3=Exit F12=Cancel
Figure 7-15 Work with Disk Configuration

6. In the Work with Device Parity Protection panel, select 1. Display device parity status
(Figure 7-16).

Work with Device Parity Protection

Select one of the following:

1. Display device parity status


2. Start device parity protection
3. Stop device parity protection
4. Include unit in device parity protection
5. Exclude unit from device parity protection

Selection
1

F3=Exit F12=Cancel
Figure 7-16 Work with Device Parity Status



The Display Device Parity Status panel lists all the parity sets on your system and the disk
units that are contained in them. You need to locate your LSU, which is unit 1, and note in
which parity set it is contained (Figure 7-17).

Display Device Parity Status

Parity Serial Resource


Set ASP Unit Number Type Model Name Status
1 0C-4275332 2757 001 DC01
* * 68-0D0A773 6718 072 DD001 Active
* * 68-0D09EA2 6718 072 DD006 Active
1 1 68-0D0BC12 6718 072 DD007 Active
* * 68-0D0A51B 6718 072 DD005 Active
* * 68-0D0BBB0 6718 072 DD003 Active
* * 68-0D0A0DA 6718 072 DD002 Active
* * 68-0D0BB13 6718 072 DD004 Active
* * 68-0D09F9C 6718 072 DD008 Active
2 0C-3339230 5703 001 DC05
* * 68-0D0AB08 6718 074 DD009 Active
* * 68-0D0A733 6718 074 DD010 Active
More...
* - See help for more information

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display disk hardware status F12=Cancel
Figure 7-17 Display Device Parity Status

7. Press F12 to return to the Work with Device Parity Protection panel.

8. In the Work with Device Parity Protection panel, select 3. Stop device parity protection
(Figure 7-18).

Work with Device Parity Protection

Select one of the following:

1. Display device parity status


2. Start device parity protection
3. Stop device parity protection
4. Include unit in device parity protection
5. Exclude unit from device parity protection

Selection
3

F3=Exit F12=Cancel
Figure 7-18 Stop device parity protection



9. The Stop Device Parity Protection panel lists the parity sets that are on the system. Place
a 1 next to the set that you identified as containing the LSU (Figure 7-19).

Attention: After this step, the system is running unprotected. Thus, make sure that you
know the location of backups before you continue.

10.Press Enter to stop the parity set.

Stop Device Parity Protection

Select the subsystems to stop device parity protection.

Type choice, press Enter.


1=Stop device parity protection

Parity Serial Resource


Option Set Number Type Model Name
1 1 0C-4275332 2757 001 DC01
2 0C-3339230 5703 001 DC05

F3=Exit F12=Cancel
Figure 7-19 Stop Device Parity Protection

11.In the Confirm Stop Device Parity Protection panel, shown in Figure 7-20, take a moment
to confirm that the load source unit is listed. When you are sure you have selected the
correct parity set, press Enter.

Confirm Stop Device Parity Protection

Attention: Disk units connected to these subsystems will not be


protected after you confirm your choice.

Press Enter to continue.


Press F12=Cancel to return and change your choice.

Parity Serial Resource


Option Set ASP Unit Number Type Model Name
1 1 0C-4275332 2757 001 DC01
1 1 * * 68-0D0A773 6718 050 DD001
1 1 * * 68-0D09EA2 6718 050 DD006
1 1 1 1 68-0D0BC12 6718 050 DD007
1 1 * * 68-0D0A51B 6718 050 DD005
1 1 * * 68-0D0BBB0 6718 050 DD003
1 1 * * 68-0D0A0DA 6718 050 DD002
1 1 * * 68-0D0BB13 6718 050 DD004
1 1 * * 68-0D09F9C 6718 050 DD008

F12=Cancel
Figure 7-20 Confirm Stop Device Parity Protection



A status panel similar to that in Figure 7-21 displays.

Stop Device Parity Protection Status

The operation to stop device parity protection will be done


in several phases. The phases are listed here and the status
will be indicated when known.

Operation Status
Prepare to stop . . . . . . . . . . . . : Completed
Stop device parity protection . . . . . : 83 %

Wait for next display or press F16 for DST main menu
Figure 7-21 Stop Device Parity Protection Status

12.When the stop function has completed, press Enter to continue.

Copying the LSU data
Now that the LSU is no longer RAID protected, you can copy it as follows:
1. Press F12 to return to the Work with Disk Configuration panel.
2. Press F12 again to return to the Work with Disk Units panel.
3. In the Work with Disk Units panel, select 2. Work with disk unit recovery as shown in
Figure 7-22.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
2

F3=Exit F12=Cancel
Figure 7-22 Work with Disk Units

Important: Take care when using the following options, because using the incorrect
option can result in loss of data.



4. In the Work with Disk Unit Recovery panel, select 9. Copy disk unit data (Figure 7-23).

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOP cache storage
More...

Selection
9

F3=Exit F11=Display disk configuration status F12=Cancel


Figure 7-23 Work with Disk Unit Recovery

5. Select your existing internal load source (disk unit 1) as the copy from unit (Figure 7-24).

Select Copy from Disk Unit

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Configured
2 1 50-1106741 2107 A05 DD014 DPY/Active
3 1 50-1003741 2107 A05 DD015 DPY/Active
4 1 50-1105741 2107 A05 DD016 DPY/Active
5 1 50-1103741 2107 A05 DD018 DPY/Active
6 1 50-1009741 2107 A05 DD019 DPY/Active
7 1 50-1108741 2107 A05 DD021 DPY/Active
8 1 50-110A741 2107 A05 DD022 DPY/Active
9 1 50-1102741 2107 A05 DD023 DPY/Active
10 1 50-1104741 2107 A05 DD024 DPY/Active
11 1 50-1002741 2107 A05 DD025 DPY/Active
12 1 50-1001741 2107 A05 DD026 DPY/Active
13 1 50-1005741 2107 A05 DD027 DPY/Active
14 1 50-1006741 2107 A05 DD028 DPY/Active
More...
F3=Exit F5=Refresh F11=Display non-configured units F12=Cancel
Figure 7-24 Select Copy from Disk Unit



6. Select the designated external load source LUN for which you noted the serial number previously as the copy to unit (Figure 7-25).

Select Copy to Disk Unit Data

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 072 DD007 DPY/Active

1=Select

Serial Resource
Option Number Type Model Name Status
1 50-1000741 2107 A85 DD013 Non-configured
50-1103741 2107 A05 DD014 Non-configured
50-1007741 2107 A05 DD023 Non-configured
50-1006741 2107 A05 DD020 Non-configured
50-110A741 2107 A05 DD019 Non-configured
50-1109741 2107 A05 DD024 Non-configured
50-1004741 2107 A05 DD025 Non-configured
50-1002741 2107 A05 DD029 Non-configured
50-1005741 2107 A05 DD030 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-25 Select Copy to Disk Unit

7. When you are certain that you have selected the correct from and to units, press Enter. You might see the panel that is shown in Figure 7-26 if the LUN was attached previously to a system. If you see this panel and you are sure that it is the correct LUN, press F10 to ignore the problem report and continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-26 Disk Problem Report



8. Confirm your choice again to prevent any chance of copying the wrong disk unit
accidentally. The example in Figure 7-27 assumes that the external load source will be
mirrored and shows an unprotected external load source LUN (model A8x). Otherwise, if
using load source multipathing, your copy to unit should be a protected LUN model A0x.

Confirm Copy Disk Unit Data

Press Enter to confirm your choice for copy.


Press F12 to return to change your choice.

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 072 DD007 DPY/Active

Disk that is copied to:

Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured

F12=Cancel
Figure 7-27 Confirm Copy of Disk Unit

9. Press Enter to copy the existing LSU to the new LUN. A panel displays that indicates the
progress of the copy, as shown in Figure 7-28.

Copy Disk Unit Data Status

The operation to copy a disk unit will be done in several phases.


The phases are listed here and the status will be indicated when
known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 31 % Complete
Start compression (if needed) . . . . . :
Copy status. . . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-28 Copy Disk Unit Data Status

The copy process can take from 10 to 60 minutes. After the copy, you are returned to the
Work with Disk Unit Recovery panel.



10.To shut down the system, press F12 to return to the Work with Disk Units panel.
11.Then, press F12 again to return to the Dedicated Service Tools main menu. Select
7. Start a service tool to get to the Start a Service Tool panel. Then, select 7. Operator
panel functions, as shown in Figure 7-29.

Start a Service Tool


System: BUCKLEY
Attention: Incorrect use of this service tool can cause damage
to data in this system. Contact your service representative
for assistance.

Select one of the following:

1. Display/Alter/Dump
2. Licensed Internal Code log
3. Trace Licensed Internal code
4. Hardware service manager
5. Main storage dump manager
6. Product activity log
7. Operator panel functions
8. Performance data collector

Selection

F3=Exit F12=Cancel
Figure 7-29 Start a Service Tool

12.In the Operator Panel Functions panel, ensure that the IPL source is set to 1 or 2 and that
IPL mode is set to 1, as shown in Figure 7-30. Then, press F10 to turn off the system.

Operator Panel Functions


System: BUCKLEY
IPL source: 2 (1=A, 2=B or 3=D)
IPL mode: 1 (1=Manual, 2=Normal, 3=Secure or 4=Auto)

Press Enter to change the IPL attributes and return


to the main DST menu.
Press F8 to set the IPL attributes and restart the system.
Machine processing will be ended and the system will be
restarted.
Press F10 to set the IPL attributes and power off the system.
Machine processing will be ended and the system will be
powered off.
Press F12 to return to the main DST menu without changing
IPL attributes.

F3=Exit F8=Restart F10=Power off F12=Cancel


Figure 7-30 Operator Panel Functions



13.Confirm that you are ready to turn off the system (Figure 7-31) by pressing Enter.

Confirm System Power Down


System: BUCKLEY
Press Enter to confirm the power down of the system.
Press F3 or F12 to cancel and return to the main DST menu.

F3=Exit F12=Cancel
Figure 7-31 Confirm System Power Down

Important: Now, you are required to remove a disk unit from the system. If you are unsure
about how to remove a disk unit, contact your local customer engineer. This service is a
chargeable service. Tell the customer engineer that you are migrating to a Boot from SAN
configuration.

After the system is shut down, refer to the checklist that you created previously (using
Table 7-1 on page 279). Then, physically remove the first LSU from the machine.

Important: Do not proceed beyond this point until you are sure that you have removed the
correct disk unit.

Changing the tagged LSU
Now, go to your HMC, and change the tagged LSU from the RAID Controller Card for the
internal LSU to the Fibre Channel Disk Controller that is controlling the new SAN LSU by
following these steps:
1. Select the partition name with which you are working, and choose Tasks →
Configuration → Manage profiles, as shown in Figure 7-32.

Note: For HMC versions earlier than V7, right-click the partition name and select Properties.

Figure 7-32 Select HMC partition profile properties

2. In the Managed Profiles window, select Actions → Edit, as shown in Figure 7-33.

Figure 7-33 Managed Profiles



3. In the Logical Partition Profile Properties window, shown in Figure 7-34, select the Tagged
I/O tab.

Figure 7-34 Logical Partition Properties

4. Click Select for the load source, as shown in Figure 7-35.

Figure 7-35 Tagged I/O properties

5. Select the IOA that is assigned to your new LSU, as shown in Figure 7-36. Then, click OK.

Figure 7-36 Tag the Load Source Unit

6. Click OK.
7. Activate the partition by selecting Tasks → Operations → Activate, as shown in
Figure 7-37.

Note: For HMC versions earlier than V7, right-click the partition name and select Activate.

Figure 7-37 Activating a partition



8. Select the profile to use for activating the partition, and click OK, as shown in Figure 7-38.

Figure 7-38 Select the profile for activation

9. The HMC displays a status dialog box that closes when the task is complete and when the
partition is activated. Then, wait for the DST panel to display.
10.When the system has IPLed to DST, select 3. Use Dedicated Service Tools (DST), as
shown in Figure 7-39.

IPL or Install the System


System: BUCKLEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-39 Use Dedicated Service Tools (DST)

Note: If you are migrating to a RAID protected load source LUN in the DS system and are using i5/OS V6R1 multipathing for the load source, skip to “Re-enabling parity protection” on page 316.

Mirroring of the LSU
If you are migrating to a mirrored external load source, you next enable remote load source
mirroring because the mirror is on a separate IOA. Follow these steps:
1. In the Use Dedicated Service Tools (DST) panel, select 4. Work with disk units, as
shown in Figure 7-40.

Use Dedicated Service Tools (DST)


System: BUCKLEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-40 Work with disk units



2. In the Work with Disk Units panel, select 1. Work with disk configuration, as shown in Figure 7-41.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 7-41 Work with Disk Units menu

3. In the Work with Disk Configuration panel, select 4. Work with mirrored protection
(Figure 7-42).

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Work with ASP threshold
3. Work with ASP configuration
4. Work with mirrored protection
5. Work with device parity protection
6. Work with disk compression

Selection
4

F3=Exit F12=Cancel
Figure 7-42 Work with Disk Configuration

4. In the Work with Mirrored Protection panel, select 4. Enable remote load source mirroring (Figure 7-43).

Work with Mirrored Protection

Select one of the following:

1. Display disk configuration


2. Start mirrored protection
3. Stop mirrored protection
4. Enable remote load source mirroring
5. Disable remote load source mirroring

Selection
4

F3=Exit F12=Cancel
Figure 7-43 Work with Mirrored Protection

5. The confirmation panel shown in Figure 7-44 displays. This panel only enables remote
mirroring. It does not actually start it. Press Enter to enable remote load source mirroring.

Enable Remote Load Source Mirroring

Remote load source mirroring will allow you to place the two
units that make up a mirrored load source disk unit (unit 1) on
two different IOPs. This may allow for higher availability
if there is a failure on the MFIOP.

Note: When there is only one load source disk unit attached to
the multifunction IOP, the system will not be able to IPL if
that unit should fail.

This function will not start mirrored protection.

Press Enter to enable remote load source mirroring.

F3=Exit F12=Cancel
Figure 7-44 Confirm Enable Remote Load Source Mirroring



A confirmation message on the Work with Mirrored Protection panel indicates that remote load source mirroring was enabled successfully.
6. Select 2. Start mirrored protection to actually start the mirrored protection. You are
prompted for the ASP that you want to mirror, as shown in Figure 7-45. Select ASP 1 (the
system ASP).

Select ASP to Start Mirrored Protection

Select the ASPs to start mirrored protection on.

Type options, press Enter.


1=Select

Option ASP Protection


1 1 Unprotected

F3=Exit F12=Cancel
Figure 7-45 Select ASP to Start Mirrored Protection

After the system IPLs, you are returned to the DST, and mirroring is activated. The next IPL
fully starts the mirror by synchronizing the LUNs.

Re-enabling parity protection


Remember that you still have unprotected internal drives. If you want these drives to be protected, reinstall the old LSU drive in the system if it is needed to complete the RAID set. Then, start parity protection on the unprotected drives.

Alternatively, you can remove the unprotected internal drives if you do not intend to use them further. To remove the drives, follow normal service actions. Then, restart the system from the Dedicated Service Tools menu by choosing 1. Perform an IPL.

If you want to continue using the internal drives from your previous load source parity set,
restart device parity protection as follows:
1. From the IPL or Install the System panel, select 3. Use Dedicated Service Tools (DST),
as shown in Figure 7-46.

IPL or Install the System


System: BUCKLEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-46 Use Dedicated Service Tools (DST)

2. Sign on and select 4. Work with disk units (Figure 7-47).

Use Dedicated Service Tools (DST)


System: BUCKLEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-47 Dedicated Service Tools menu

3. In the Work with Disk Units panel, select 1. Work with disk configuration, as shown in
Figure 7-48.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 7-48 Work with Disk Units panel



4. In the Work with Disk Configuration panel, select 5. Work with device parity protection,
as shown in Figure 7-49.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Work with ASP threshold
3. Work with ASP configuration
4. Work with mirrored protection
5. Work with device parity protection
6. Work with disk compression

Selection
5

F3=Exit F12=Cancel
Figure 7-49 Work with Disk Configuration panel

5. In the Work with Device Parity Protection panel, select 2. Start device parity protection,
as shown in Figure 7-50.

Work with Device Parity Protection

Select one of the following:

1. Display device parity status


2. Start device parity protection
3. Stop device parity protection
4. Include unit in device parity protection
5. Exclude unit from device parity protection

Selection
2

F3=Exit F12=Cancel
Figure 7-50 Work with Device Parity Protection panel

6. You are presented with a panel similar to that shown in Figure 7-51, which indicates how
many parity sets can be started. Select each set that you want to start by selecting
1=Start device parity protection.

Start Device Parity Protection

Select the subsystems to start device parity protection.

Type choice, press Enter.


1=Start device parity protection

Parity Serial Resource


Option Set Number Type Model Name
1 2 0C-4275332 2757 001 DC01

F3=Exit F12=Cancel
Figure 7-51 Start Device Parity Protection

You might receive the warning panel that is shown in Figure 7-52. This warning tells you
that one or more of the non-configured disks was used on an i5/OS system previously.
Press F10 to ignore this warning and continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-52 Problem Report



7. The confirmation panel, shown in Figure 7-53, displays. It confirms which DASD units are
protected by this process. Press Enter to confirm.

Confirm Starting Device Parity Protection

During the preparation for starting device parity protection, data


will be moved from parts of some disk units. This may take several
minutes for each subsystem selected.

Press Enter to continue.


Press F12=Cancel to return and change your choice.

Parity Serial Resource


Option Set ASP Unit Number Type Model Name
1 2 0C-4275332 2757 001 DC01
1 2 * * 68-0D0A773 6718 074 DD001
1 2 * * 68-0D09EA2 6718 070 DD006
1 2 * * 68-0D0BC12 6718 070 DD007
1 2 * * 68-0D0A51B 6718 074 DD005
1 2 * * 68-0D0BBB0 6718 074 DD003
1 2 * * 68-0D0A0DA 6718 074 DD002

F12=Cancel
Figure 7-53 Confirm Starting Device Parity Protection

A progress panel displays, as shown in Figure 7-54.

Start Device Parity Protection Status

The operation to start device parity protection will be done


in several phases. The phases are listed here and the status
will be indicated when known.

Phase Status
Prepare to start . . . . . . . . . . . . : 4 %
Start device parity protection . . . . . :

Wait for next display or press F16 for DST main menu
Figure 7-54 Start Device Parity Protection Status

When this function returns, you have completed the procedure.

7.3.2 Internal LSU mirrored to internal LSU migrating to external LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN (see 7.2, “Migration prerequisites” on page 278). In this section, we describe how to migrate a system that uses an internal LSU currently protected by i5/OS mirroring to boot from SAN with an external mirrored load source (see Figure 7-55).

Figure 7-55 Boot from SAN Migration from internal mirrored LSU to external mirrored LSU

Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.

Important: This procedure requires that you suspend mirroring on the load source unit.
While mirroring is suspended, the remaining load source is a single point of failure. Thus,
ensure that you have the necessary backups so that you can recover your system in the
event of failure. You are also required to remove the internal load source unit. If you are not comfortable with this task, engage services from your IBM Sales Representative or IBM Business Partner.

When migrating from an internal load source to an external one, you first need to configure two unprotected load source LUNs in separate DS volume groups. Assign each volume group to a separate #2847 IOP-based or IOP-less Fibre Channel adapter in your System i server. For further information, refer to Chapter 8, “Using DS CLI with System i” on page 391 if you use the DS CLI or 9.2, “Configuring DS Storage Manager logical storage” on page 474 if you use the GUI. Add one of the two unprotected LUNs to your system ASP.
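
Assuming the two unprotected volumes (for example, volume IDs 1100 and 1101) already exist, the following DS CLI script-style sketch places each one in its own volume group and attaches each volume group to the WWPN of a different Fibre Channel adapter. All IDs, names, and WWPNs are assumed example values, and the volume group IDs passed to mkhostconnect must match the IDs that mkvolgrp reports:

# One volume group per load source volume
mkvolgrp -dev IBM.2107-75ABCDE -type os400mask -volume 1100 SYS1_LSU_VGA
mkvolgrp -dev IBM.2107-75ABCDE -type os400mask -volume 1101 SYS1_LSU_VGB
# Attach each volume group to a different System i Fibre Channel adapter
mkhostconnect -dev IBM.2107-75ABCDE -wwname 10000000C9AAAAAA -hosttype iSeries -volgrp V20 SYS1_FC0
mkhostconnect -dev IBM.2107-75ABCDE -wwname 10000000C9BBBBBB -hosttype iSeries -volgrp V21 SYS1_FC1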

Attention: At this time, you should have one unprotected LUN that is non-configured and
one that is in the system ASP.

Now that your two new load source LUNs are attached to your system, you can use SST to
check that the disks are reporting correctly. You should see at least one non-configured
device (model A8x) to be used for your new external load source LUN.



To check that your disks are reporting correctly:
1. Enter the STRSST command.
2. Select 3. Work with disk units.
3. Select 1. Display disk configuration.
4. Select 4. Display non-configured units.
A panel similar to the one that is shown in Figure 7-56 displays.
5. Note the serial number for the non-configured unprotected LUN that is used for the
external load source.

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-1000741 2107 A85 DD013 35165 Non-configured
50-1105741 2107 A05 DD027 35165 Non-configured
50-110A741 2107 A05 DD019 35165 Non-configured
50-1107741 2107 A05 DD017 35165 Non-configured
50-1101741 2107 A05 DD033 35165 Non-configured
50-1108741 2107 A05 DD032 35165 Non-configured
50-1005741 2107 A05 DD030 35165 Non-configured
50-1003741 2107 A05 DD031 35165 Non-configured
50-1104741 2107 A05 DD026 35165 Non-configured
50-1004741 2107 A05 DD025 35165 Non-configured
50-1102741 2107 A05 DD028 35165 Non-configured
50-1109741 2107 A05 DD024 35165 Non-configured
50-1002741 2107 A05 DD029 35165 Non-configured
50-100A741 2107 A05 DD015 35165 Non-configured
More...
Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 7-56 Non-configured disk units

Access the Hardware Management Console (HMC) and change the partition settings to do a
manual IPL as follows:
1. From the Systems Management → Servers navigation tree, select your managed server.
Select the partition with which you are working. Then, click Tasks → Properties, as
shown in Figure 7-57.

Note: For HMC versions earlier than V7, right-click the partition name and select Properties.

Figure 7-57 Select Partition Properties

2. In the Partition Properties window, select the Settings tab (Figure 7-58).

Figure 7-58 Partition Properties



3. On the Settings tab, change the Keylock Position to Manual, as shown in Figure 7-59.

Figure 7-59 Setting the IPL type

4. Issue the following command to IPL to DST:


PWRDWNSYS OPTION(*IMMED) RESTART(*YES)

Suspending LSU mirrored protection


Before you can copy the load source, you must suspend its mirrored protection as follows:
1. After you IPL your system to DST, select 3. Use Dedicated Service Tools (DST) from the
IPL or Install the System panel, as shown in Figure 7-60.

IPL or Install the System


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-60 IPL or Install the System panel

2. Sign on, and select 4. Work with disk units (Figure 7-61).

Use Dedicated Service Tools (DST)


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-61 Use Dedicated Service Tools (DST)

3. In the Work with Disk Units panel, select 2. Work with disk unit recovery (Figure 7-62).

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 7-62 Work with Disk Units

Important: Exercise care when using the following options, because using the incorrect
option can result in loss of data.



4. In the Work with Disk Unit Recovery panel, select 7. Suspend Mirrored Protection, as
shown in Figure 7-63.

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOP cache storage
More...

Selection
7

F3=Exit F11=Display disk configuration status F12=Cancel


Figure 7-63 Work with Disk Unit Recovery: Suspending a mirror

5. A list of the disk units that you can suspend displays (Figure 7-64). Select unit 1. Only one unit 1 entry is listed, so you can select only the LSU mirror, not the active primary LSU.

Suspend Mirrored Protection

Type option, press Enter.


1=Suspend Mirrored Protection

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Active
3 1 68-0D0A0DA 6718 050 DD002 Active
3 1 68-0D0A51B 6718 050 DD005 Active
6 1 68-0D09EA2 6718 050 DD006 Active
6 1 68-0D0A6AE 6718 050 DD012 Active
9 1 68-0D0AB08 6718 050 DD009 Active
9 1 68-0D0BBB0 6718 050 DD003 Active
10 1 68-0D0A733 6718 050 DD010 Active
10 1 68-0D0BB13 6718 050 DD004 Active
11 1 68-0D0A722 6718 050 DD011 Active
11 1 68-0D09F9C 6718 050 DD008 Active

F3=Exit F5=Refresh F12=Cancel


Figure 7-64 Selecting the unit to suspend

Copying the LSU data
To copy the LSU data, follow these steps:
1. From the Work with Disk Unit Recovery panel, select 9. Copy disk unit data (Figure 7-65).

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOP cache storage
More...

Selection
9

F3=Exit F11=Display disk configuration status F12=Cancel


Selected units have been suspended successfully
Figure 7-65 Work with Disk Unit Recovery

2. Select the existing internal load source (disk unit 1) as the copy from unit (Figure 7-66).

Select Copy from Disk Unit

Type option, press Enter.


1=Select
Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0BC12 6718 050 DD007 Active

F3=Exit F5=Refresh F11=Display non-configured units F12=Cancel


Figure 7-66 Select Copy from Disk Unit



3. Select the designated unprotected external load source LUN (model A8x) for which you noted the serial number previously as the copy to unit (Figure 7-67).
In a mirrored system environment, you see only the single LSU in the Select Copy from
Disk Unit list, because you are unable to copy a unit that is part of a live mirrored pair.
When you are certain that you have selected the correct from and to units, press Enter.

Select Copy to Disk Unit Data

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active

1=Select

Serial Resource
Option Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
50-1104741 2107 A05 DD032 Non-configured
50-1009741 2107 A05 DD026 Non-configured
50-1007741 2107 A05 DD019 Non-configured
50-1006741 2107 A05 DD023 Non-configured
50-1005741 2107 A05 DD034 Non-configured
50-1002741 2107 A05 DD022 Non-configured
50-1100741 2107 A85 DD033 Non-configured
50-110A741 2107 A05 DD014 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-67 Select Copy to Disk Unit

4. You might see the panel shown in Figure 7-68 if the LUN was attached to a system
previously. If so, and you are sure it is the correct LUN, press F10 to ignore the warning
and continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS
Other sub-unit will become missing

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-68 Disk problem report

5. Confirm your choice to prevent any chance of accidentally copying the wrong disk unit (Figure 7-69).

Confirm Copy Disk Unit Data

Press Enter to confirm your choice for copy.


Press F12 to return to change your choice.

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active

Disk that is copied to:

Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured

F12=Cancel
Figure 7-69 Confirm Copy of Disk Unit



6. Press Enter, and the system proceeds to copy the existing LSU to the new LUN. A
progress panel displays, as shown in Figure 7-70.

Copy Disk Unit Data Status

The operation to copy a disk unit will be done in several phases.


The phases are listed here and the status will be indicated when
known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 31 % Complete
Start compression (if needed) . . . . . :
Copy status. . . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-70 Copy Disk Unit Data Status

The copy process can take from 10 to 60 minutes. After the copy, you return to the Work
with Disk Unit Recovery panel.
7. To shut down the system, press F12 to return to the Work with Disk Units panel. Then, press F12 again to return to the Dedicated Service Tools main menu. Select 7. Start a service tool to get to the Start a Service Tool panel, as shown in Figure 7-29 on page 306.

8. Select 7. Operator Panel Functions. Make sure that the IPL source is set to 1 or 2 and
that the IPL mode is set to 1, as shown in Figure 7-71. Press F10 to shut down the system.

Operator Panel Functions


System: MICKEY
IPL source: 2 (1=A, 2=B or 3=D)
IPL mode: 1 (1=Manual, 2=Normal, 3=Secure or 4=Auto)

Press Enter to change the IPL attributes and return


to the main DST menu.
Press F8 to set the IPL attributes and restart the system.
Machine processing will be ended and the system will be
restarted.
Press F10 to set the IPL attributes and power off the system.
Machine processing will be ended and the system will be
powered off.
Press F12 to return to the main DST menu without changing
IPL attributes.

F3=Exit F8=Restart F10=Power off F12=Cancel


Figure 7-71 Restarting your system

9. Next, you need to confirm the restart request, as shown in Figure 7-72.

Confirm System Reset


System: MICKEY
Press Enter to confirm the reset of the system.
Press F3 or F12 to cancel and return to the main DST menu.

F3=Exit F12=Cancel
Figure 7-72 Confirm System Reset

Important: You are required to remove a disk unit from the system. If you are unsure about
this process, contact your local customer engineer. This service is a chargeable service,
and you need to tell them you are migrating to a Boot from SAN configuration.

After the system is shut down, refer to the checklist that you created previously (Table 7-1 on
page 279). Then, physically remove the first Load Source Unit from the machine. Do not
proceed beyond this point until you are sure that you have removed the correct disk unit.



Changing the tagged LSU
Now, go to the Hardware Management Console (HMC), and change the tagged load source
unit from the RAID Controller Card for the internal LSU to the Fibre Channel Disk Controller
that is controlling your new SAN LSU. Follow these steps on the HMC:
1. Select the partition with which you are working. Go to Tasks → Configuration → Manage
Profiles as shown in Figure 7-73.

Note: For HMC versions earlier than V7, right-click the partition profile name and select Properties.

Figure 7-73 Select HMC partition profile properties

2. In the Managed Profiles window, select Actions → Edit as shown in Figure 7-74.

Figure 7-74 Managed Profiles

3. In the Logical Partition Profile Properties window, choose the Tagged I/O tab, as shown in
Figure 7-75.

Figure 7-75 Logical Partition Properties

4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-76.

Figure 7-76 Tagged I/O properties



5. Next, select the IOA that is assigned to the new LSU, as shown in Figure 7-77. Then, click
OK to confirm the selection.

Figure 7-77 Tag the Load Source Unit

Attention: You must shut down the partition fully and then reactivate it from the HMC
so that the new load source tagging takes effect.

6. After shutting down your partition, activate it from the HMC by selecting Tasks →
Operations and then selecting Activate (Figure 7-78).

Note: For HMC versions earlier than V7, right-click the partition name and select Activate.

Figure 7-78 Activating a partition

7. In the Activate Logical Partition window, select the profile to be used for activating the partition (Figure 7-79).

Figure 7-79 Select the profile for activation

The HMC then displays a status dialog box that closes when the task is complete and
when the partition is activated. Then, wait for the DST panel to display.

Replacing the internal LSU


Because the suspended mirror partner of the load source is still the old internal LSU, you must now replace it with the new external LUN as follows:
1. Start from the IPL or Install the System panel, and select the following options:
a. 3. Use Dedicated Service Tools (DST), and sign on to DST
b. 4. Work with disk units
c. 2. Work with disk unit recovery
d. 3. Replace configured unit
2. In the Select Configured Unit to Replace panel, select the suspended internal load source
unit, as shown in Figure 7-80, and press Enter.

Select Configured Unit to Replace

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Suspended

F3=Exit F5=Refresh F12=Cancel


Figure 7-80 Select Suspended Unit



3. Select the new external load source mirror from the list that is provided, and press Enter
(Figure 7-81).

Select Replacement Unit

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended

Type option, press Enter.


1=Select

Serial Resource
Option Number Type Model Name Status
50-1100741 2107 A85 DD033 Non-configured

F3=Exit F12=Cancel
Figure 7-81 Select Replacement Unit

4. You might receive a problem report similar to that shown in Figure 7-82. If so, review the errors, and check that you have chosen the correct LUN and that its configuration is correct. If everything is correct, press F10 to continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS
Lower level of mirrored protection

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-82 Problem Report

5. You then receive a confirmation panel similar to that shown in Figure 7-83. Press Enter to
continue.

Confirm Replace of Configured Unit

This screen allows the confirmation of the configured unit to


be replaced with the selected replacement unit.

Press Enter to confirm your choice for replace.


Press F12 to return to change your choice.

The configured unit being replaced is:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended

The replacement unit will be:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-1100741 2107 A85 DD033 Resuming

F12=Cancel
Figure 7-83 Confirm Replace of Configured Unit

A progress panel displays, as shown in Figure 7-84.

Replace Disk Unit Data Status

The operation to replace a disk unit from the selected disk


units will be done in several phases. The phases are listed
here and the status will be indicated when known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 0 % Complete
Start compression (if needed) . . . . . :
Replace status . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-84 Replace Disk Unit Data Status



6. When the replacement process completes, you return to the Work with Disk Unit Recovery panel. Press F12, and then select the following options:
a. 1. Work with disk configuration
b. 1. Display disk configuration
c. 1. Display disk configuration status
A panel displays as shown in Figure 7-85.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-1000741 2107 A85 DD013 Active
1 50-1100741 2107 A85 DD033 Resuming
3 68-0D0A0DA 6718 050 DD002 Active
3 68-0D0A51B 6718 050 DD005 Active
6 68-0D09EA2 6718 050 DD006 Active
6 68-0D0A6AE 6718 050 DD012 Active
9 68-0D0AB08 6718 050 DD009 Active
9 68-0D0BBB0 6718 050 DD003 Active
10 68-0D0A733 6718 050 DD010 Active
10 68-0D0BB13 6718 050 DD004 Active
11 68-0D0A722 6718 050 DD011 Active
11 68-0D09F9C 6718 050 DD008 Active

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Disk configuration capacity F12=Cancel
Figure 7-85 Display Disk Configuration Status

The external LSU disks are now mirrored, and the mirror completely resumes during IPL.

7.3.3 Internal LSU mirrored to internal remote LSU migrating to external LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this section, we describe the steps to
migrate a system from an internal load source unit, protected by remote load source mirroring
to a second internal disk unit, to the boot from SAN function (see Figure 7-86).

Remote System i LPAR System i LPAR


mirrored
internal LSU LSU'
load source
I/O Tower I/O Tower Boot from SAN Migration I/O Tower I/O Tower
Fibre Channel Fibre Channel Fibre Channel Fibre Channel
IOA IOA IOA IOA

... LSU ... LSU' ...

Boot from SAN Boot from SAN


mirrored load source mirrored load source
unit A unit B

Figure 7-86 Boot from SAN migration from internal mirrored to internal remote LSU to external LSU

Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.

Important: This procedure requires that you suspend mirroring on the load source unit.
While mirroring is suspended, the remaining load source is a single point of failure. Thus,
ensure that you have the necessary backups so that you can recover your system in the
event of failure. You are also required to remove the internal load source unit. If you are not
comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.

When migrating from an internal remotely mirrored load source to an external mirrored load
source, you first need to configure two unprotected load source unit LUNs in separate DS
volume groups. Assign each of these to separate System i host #2847 IOP-based or IOP-less
Fibre Channel adapters. For further information, refer to Chapter 8, “Using DS CLI with
System i” on page 391 if you are using DS CLI or 9.2, “Configuring DS Storage Manager
logical storage” on page 474 if you are using the GUI. Add one of the two unprotected LUNs
to your system ASP now.
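
If you are using DS CLI, the following sketch shows one possible way to create the two
unprotected load source LUNs and to assign each one to its own volume group and host
connection. The volume IDs, extent pools, names, volume group IDs, and WWPNs are
examples only, and we assume that the storage image ID is set in your DS CLI profile;
substitute the values for your environment and see Chapter 8 for the complete syntax:

dscli> mkfbvol -extpool P0 -os400 A85 -name LSU_MIR_A 2100
dscli> mkfbvol -extpool P1 -os400 A85 -name LSU_MIR_B 2200
dscli> mkvolgrp -type os400mask -volume 2100 LSU_VG_A
dscli> mkvolgrp -type os400mask -volume 2200 LSU_VG_B
dscli> mkhostconnect -wwname 10000000C9123456 -hosttype iSeries -volgrp V10 LSU_FC_A
dscli> mkhostconnect -wwname 10000000C9654321 -hosttype iSeries -volgrp V11 LSU_FC_B

In this sketch, V10 and V11 stand for the volume group IDs that the mkvolgrp commands
return, and each host connection represents one of the two Fibre Channel IOAs that are
used for boot from SAN.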

Attention: At this time, you should have one unprotected LUN that is non-configured and
one that is in the system ASP.



Now that the new LUNs are attached to your system, you can use SST to check that
they are reporting correctly. You should see at least one non-configured device (model A8x) to
be used for your new external load source LUN.

To check that disks are reporting correctly, follow these steps:


1. Enter the STRSST command.
2. Select 3. Work with disk units.
3. Select 1. Display disk configuration.
4. Select 4. Display non-configured units.
A panel similar to the one shown in Figure 7-87 displays.
5. Note the serial number of the non-configured unprotected LUN that you will use for the
external load source.

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-1000741 2107 A85 DD013 35165 Non-configured
50-1105741 2107 A05 DD027 35165 Non-configured
50-110A741 2107 A05 DD019 35165 Non-configured
50-1107741 2107 A05 DD017 35165 Non-configured
50-1101741 2107 A05 DD033 35165 Non-configured
50-1108741 2107 A05 DD032 35165 Non-configured
50-1005741 2107 A05 DD030 35165 Non-configured
50-1003741 2107 A05 DD031 35165 Non-configured
50-1104741 2107 A05 DD026 35165 Non-configured
50-1004741 2107 A05 DD025 35165 Non-configured
50-1102741 2107 A05 DD028 35165 Non-configured
50-1109741 2107 A05 DD024 35165 Non-configured
50-1002741 2107 A05 DD029 35165 Non-configured
50-100A741 2107 A05 DD015 35165 Non-configured
More...
Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 7-87 Displaying non-configured disk units

Proceed with the migration process by performing a manual IPL to DST as follows:
1. Select the partition name with which you are working. Then, click Tasks → Properties
(see Figure 7-88).

Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.

Figure 7-88 Select partition properties

2. In the Partition Properties window, select the Settings tab (Figure 7-89).

Figure 7-89 Partition Properties



3. On the Settings tab, change the Keylock Position to Manual (Figure 7-90).

Figure 7-90 Setting the IPL type

4. Then, IPL to DST by issuing the command:


PWRDWNSYS OPTION(*IMMED) RESTART(*YES)

Suspending LSU mirrored protection


Before we can copy the load source, we have to suspend its mirrored protection as follows:
1. After the system has IPLed to DST, select 3. Use Dedicated Service Tools (DST) as
shown in Figure 7-91.

IPL or Install the System


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-91 IPL or Install the System

2. Sign on, and then in the Use Dedicated Service Tools (DST) panel, select 4. Work with
disk units, as shown in Figure 7-92.

Use Dedicated Service Tools (DST)


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-92 Use Dedicated Service Tools (DST)

3. On the Work with Disk Units panel, select 2. Work with disk unit recovery, as shown in
Figure 7-93.

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
2

F3=Exit F12=Cancel
Figure 7-93 Work with Disk Units

Important: Exercise care when using these options, because using the incorrect option
can result in loss of data.



4. In the Work with Disk Unit Recovery panel, select 7. Suspend Mirrored Protection
(Figure 7-94).

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOP cache storage
More...

Selection
7

F3=Exit F11=Display disk configuration status F12=Cancel


Figure 7-94 Work with Disk Unit Recovery - Suspending a mirror

5. A list of the units that you can suspend displays, as shown in Figure 7-95. Select unit 1.
Note that only one disk unit 1 is listed, because you can select only the LSU mirror mate
and not the active load source itself.

Suspend Mirrored Protection

Type option, press Enter.


1=Suspend Mirrored Protection

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Active
3 1 68-0D0A0DA 6718 050 DD002 Active
3 1 68-0D0A51B 6718 050 DD005 Active
6 1 68-0D09EA2 6718 050 DD006 Active
6 1 68-0D0A6AE 6718 050 DD012 Active
9 1 68-0D0AB08 6718 050 DD009 Active
9 1 68-0D0BBB0 6718 050 DD003 Active
10 1 68-0D0A733 6718 050 DD010 Active
10 1 68-0D0BB13 6718 050 DD004 Active
11 1 68-0D0A722 6718 050 DD011 Active
11 1 68-0D09F9C 6718 050 DD008 Active

F3=Exit F5=Refresh F12=Cancel


Figure 7-95 Selecting which unit to suspend



Copying the LSU data
To copy the LSU data, follow these steps:
1. After mirrored protection is suspended, you return to the Work with Disk Unit Recovery
panel. Select 9. Copy disk unit data, as shown in Figure 7-96.

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOP cache storage
More...

Selection
9

F3=Exit F11=Display disk configuration status F12=Cancel


Selected units have been suspended successfully
Figure 7-96 Work with Disk Unit Recovery

2. Select the existing internal load source (disk unit 1) as the copy from unit (Figure 7-97).

Select Copy from Disk Unit

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0BC12 6718 050 DD007 Active

F3=Exit F5=Refresh F11=Display non-configured units F12=Cancel


Figure 7-97 Select Copy from Disk Unit

3. Select the designated unprotected external load source LUN (model A8x) for which you
noted the serial number previously as the copy to unit (Figure 7-98).
In a mirrored environment, you probably only see the single load source unit in the list,
because you are unable to copy a unit that is part of a live mirrored pair.
When you are certain you have selected the correct from and to units, press Enter.

Select Copy to Disk Unit Data

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active

1=Select

Serial Resource
Option Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured
50-1104741 2107 A05 DD032 Non-configured
50-1009741 2107 A05 DD026 Non-configured
50-1007741 2107 A05 DD019 Non-configured
50-1006741 2107 A05 DD023 Non-configured
50-1005741 2107 A05 DD034 Non-configured
50-1002741 2107 A05 DD022 Non-configured
50-1100741 2107 A85 DD033 Non-configured
50-110A741 2107 A05 DD014 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-98 Select Copy to Disk Unit



4. You might see the panel in Figure 7-99 if the LUN was attached previously to a system. If
you see the problem report and if you are sure that the LUN you have selected is the
correct LUN, press F10 to ignore problems and continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS
Other sub-unit will become missing

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-99 Disk problem report

5. In the Confirm Copy Disk Unit Data panel, review your choice carefully to prevent copying
a wrong disk unit accidentally. Then, press Enter (Figure 7-100).

Confirm Copy Disk Unit Data

Press Enter to confirm your choice for copy.


Press F12 to return to change your choice.

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Active

Disk that is copied to:

Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured

F12=Cancel
Figure 7-100 Confirm Copy of Disk Unit

6. The system proceeds to copy the existing suspended internal LSU to the new LUN. A
panel displays that indicates the progress of the copy, as shown in Figure 7-101.

Copy Disk Unit Data Status

The operation to copy a disk unit will be done in several phases.


The phases are listed here and the status will be indicated when
known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 31 % Complete
Start compression (if needed) . . . . . :
Copy status. . . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-101 Copy Disk Unit Data Status

The process can take 10 to 60 minutes. Then, you return to the Work with Disk Unit Recovery
panel. You now have migrated the load source unit, but you need to IPL the system to use it.
Follow these steps:
1. Press F12 twice to get to the main DST panel.
2. Select 7. Start a service tool.
3. Select 7. Operator panel functions.



4. Make sure that the IPL source is set to 1 or 2, and that the IPL mode is set to 1, as shown
in Figure 7-102. Then, press F10 to shut down the system.

Operator Panel Functions


System: MICKEY
IPL source: 2 (1=A, 2=B or 3=D)
IPL mode: 1 (1=Manual, 2=Normal, 3=Secure or 4=Auto)

Press Enter to change the IPL attributes and return


to the main DST menu.
Press F8 to set the IPL attributes and restart the system.
Machine processing will be ended and the system will be
restarted.
Press F10 to set the IPL attributes and power off the system.
Machine processing will be ended and the system will be
powered off.
Press F12 to return to the main DST menu without changing
IPL attributes.

F3=Exit F8=Restart F10=Power off F12=Cancel


Figure 7-102 Restarting your system

5. Confirm the restart request as shown in Figure 7-103.

Confirm System Reset


System: MICKEY
Press Enter to confirm the reset of the system.
Press F3 or F12 to cancel and return to the main DST menu.

F3=Exit F12=Cancel
Figure 7-103 Confirm System Reset

Important: Now, you need to remove a disk unit from the system. If you are unsure how to
perform this process, contact your local customer engineer. This service is a chargeable
service, and you should tell them that you are migrating to a Boot from SAN configuration.

After the system is shut down, refer to the checklist that you created previously (Table 7-1 on
page 279). Next, you need to physically remove the first internal Load Source Unit from the
machine.

Changing the tagged LSU
Go to the Hardware Management Console (HMC), and change the tagged load source unit
from the RAID Controller Card for the internal LSU to the Fibre Channel IOA that is controlling
your new LSU on external storage. Follow these steps on the HMC:
1. Select the partition name with which you are working. Go to Tasks → Configuration →
Manage Profiles, as shown in Figure 7-104.

Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.

Figure 7-104 Select HMC partition profile properties

2. Select Actions → Edit as shown in Figure 7-105.

Figure 7-105 Selecting Edit functions



3. In the Logical Partition Profile Properties window, select the Tagged I/O tab, as shown in
Figure 7-106.

Figure 7-106 Logical Partition Properties

4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-107.

Figure 7-107 Tagged I/O properties

5. Now, select the Fibre Channel IOA that is assigned to the new external LSU, as shown in
Figure 7-108. Click OK to proceed.

Figure 7-108 Tag the Load Source Unit

The system is already in the process of shutting down.

Attention: You must fully shut down the system and then reactivate it from the HMC so
that the new load source tag takes effect.

After the system has shut down, you must activate it again from the HMC:
1. From the drop-down menu, select Tasks → Operations → Activate (Figure 7-109).

Note: For HMC versions prior to V7, right-click the partition, then select Properties and click
Activate.

Figure 7-109 Activating a partition



2. Select the partition profile that you want to use to activate the partition (Figure 7-110).

Figure 7-110 Select the profile for activation

The HMC then displays a status dialog box that closes when the task is complete and when
the partition is activated. Wait for the DST panel to display.
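
If you prefer the HMC command line to the GUI, you can also activate the partition with the
chsysstate command. This is a sketch only; the managed system, partition, and profile names
are placeholders:

chsysstate -m MANAGED_SYSTEM -r lpar -o on -n PARTITION_NAME -f PROFILE_NAME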

Replacing the internal LSU


You might receive a message during the IPL that the load source is not at a valid location. If so,
press F3 to exit the panel. Because one half of the mirrored pair is still the old internal LSU that
you suspended earlier, you now have to replace it. Follow these steps:
1. Start from the IPL or Install the System panel, and then select the following options:
a. 3. Use Dedicated Service Tools (DST) and sign-on to DST
b. 4. Work with disk units
c. 2. Work with disk unit recovery
d. 3. Replace configured unit
2. In the Select Configured Unit to Replace panel, select the old, suspended internal LSU, as
shown in Figure 7-111, and press Enter.

Select Configured Unit to Replace

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Suspended

F3=Exit F5=Refresh F12=Cancel


Figure 7-111 Select suspended unit

3. Select the new external load source mirror from the list provided (Figure 7-112) and press
Enter.

Select Replacement Unit

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended

Type option, press Enter.


1=Select

Serial Resource
Option Number Type Model Name Status
50-1100741 2107 A85 DD033 Non-configured

F3=Exit F12=Cancel
Figure 7-112 Select replacement unit

4. You might receive a problem report (Figure 7-113). If so, review the errors, and check that
you have chosen the correct LUN and that the LUN is configured correctly. If you
determine that the LUN and its configuration are correct, press F10 to continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS
Lower level of mirrored protection

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-113 Problem Report



5. A confirmation panel displays as shown in Figure 7-114. Press Enter to continue.

Confirm Replace of Configured Unit

This screen allows the confirmation of the configured unit to


be replaced with the selected replacement unit.

Press Enter to confirm your choice for replace.


Press F12 to return to change your choice.

The configured unit being replaced is:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended

The replacement unit will be:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-1100741 2107 A85 DD033 Resuming

F12=Cancel
Figure 7-114 Confirm Replace of Configured Unit

The progress panel is very similar to one that you saw previously (Figure 7-115).

Replace Disk Unit Data Status

The operation to replace a disk unit from the selected disk


units will be done in several phases. The phases are listed
here and the status will be indicated when known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 0 % Complete
Start compression (if needed) . . . . . :
Replace status . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-115 Replace Disk Unit Data Status

6. After the replacement process completes, the Work with Disk Units recovery panel
displays again. Press F12, and then select:
a. 1. Work with disk configuration
b. 1. Display disk configuration
c. 1. Display disk configuration status
A panel similar to that shown in Figure 7-116 displays.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-1000741 2107 A85 DD013 Active
1 50-1100741 2107 A85 DD033 Resuming
3 68-0D0A0DA 6718 050 DD002 Active
3 68-0D0A51B 6718 050 DD005 Active
6 68-0D09EA2 6718 050 DD006 Active
6 68-0D0A6AE 6718 050 DD012 Active
9 68-0D0AB08 6718 050 DD009 Active
9 68-0D0BBB0 6718 050 DD003 Active
10 68-0D0A733 6718 050 DD010 Active
10 68-0D0BB13 6718 050 DD004 Active
11 68-0D0A722 6718 050 DD011 Active
11 68-0D09F9C 6718 050 DD008 Active

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Disk configuration capacity F12=Cancel
Figure 7-116 Display Disk Configuration Status

Your external LSU disks are now mirrored, and the mirror resumes completely during IPL.



7.3.4 Internal LSU mirrored to external remote LSU migrating to external LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this section, we describe the steps to
migrate a system from using an internal LSU that is currently protected by a remote LSU
residing on external storage to using the boot from SAN function with an external mirrored
LSU (see Figure 7-117).

Remote
mirrored
internal System i LPAR System i LPAR
load source
LSU

I/O Tower I/O Tower Boot from SAN Migration I/O Tower I/O Tower
Fibre Channel Fibre Channel Fibre Channel Fibre Channel
IOA IOA IOA IOA

LSU' ... ... LSU ... LSU' ...

Boot from SAN Boot from SAN


mirrored load source mirrored load source
unit A unit B

Figure 7-117 Boot from SAN Migration from remote mirrored LS to external mirrored LS

Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.

Important: This procedure requires you to remove your internal load source unit. If you are
not comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.

The migration from an internal load source with a remote mirrored load source on external
storage is probably the simplest of all the migration scenarios, because most of the hard work
is already done: a copy of the load source unit data already resides on external storage.

First, you need to configure another unprotected load source unit LUN as the load source
mirror mate in a separate DS volume group than your existing remote external load source.
Each DS volume group with a load source unit LUN should be assigned to separate System i
host #2847 IOP-based or IOP-less Fibre Channel adapters. For further information, refer to
Chapter 8, “Using DS CLI with System i” on page 391 if you are using DS CLI or 9.2,
“Configuring DS Storage Manager logical storage” on page 474 if you are using the GUI.

Attention: At this time, you should have one unprotected LUN that is non-configured.

To start the migration to boot from SAN, shut down the system by entering the following
command:
PWRDWNSYS OPTION(*IMMED) RESTART(*NO)

Note: The partition must be fully deactivated and not just restarted, because you are going
to change the load source tagging afterwards.
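
If you want to confirm from the HMC command line that the partition has reached the Not
Activated state before you change the tagging, you can list the partition state with lssyscfg.
The managed system name is a placeholder:

lssyscfg -r lpar -m MANAGED_SYSTEM -F name,state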

Changing the tagged LSU


Now, go to the Hardware Management Console (HMC) and change the tagged LSU from the
RAID Controller Card for the internal LSU to the Fibre Channel IOA that is controlling the new
external LSU. Follow these steps on the HMC:
1. Select the partition name with which you are working. Go to Tasks → Configuration →
Manage Profiles as shown in Figure 7-118.

Note: For HMC versions prior to V7, right-click the partition profile name and select Properties.

Figure 7-118 Select HMC partition profile properties

2. In the Managed Profiles window, select Actions → Edit as shown in Figure 7-119.

Figure 7-119 Selecting Edit functions



3. Then, select the Tagged I/O tab, as shown in Figure 7-120.

Figure 7-120 Logical Partition Properties

4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-121.

Figure 7-121 Tagged I/O properties

5. Now, select the Fibre Channel IOA that is assigned to your new LSU, as shown in
Figure 7-122. Click OK to proceed.

Figure 7-122 Tag the Load Source Unit

Next, you need to change the partition settings to do a manual IPL as follows:
1. Click Tasks → Properties in the drop-down menu as shown in (Figure 7-123).

Note: For HMC versions prior to V7, right-click the partition name and select Properties.

Figure 7-123 Select Partition Properties



2. Select the Settings tab (Figure 7-124).

Figure 7-124 Partition Properties

3. On the Settings tab, change the Keylock Position to Manual, and click OK (Figure 7-125).

Figure 7-125 Setting the IPL type

Important: Now, you need to remove a disk unit from the system. If you are unsure how to
remove a disk unit, contact your local customer engineer. This service is a chargeable
service, and you need to explain that you are migrating to a Boot from SAN configuration.

Now, you need to physically remove the old internal LSU. You noted the location of the LSU
earlier (see Table 7-1 on page 279). After you have removed the old internal LSU, activate the
system again from the HMC as follows:
1. Select Tasks → Operations → Activate, as shown in Figure 7-126.

Note: For HMC versions prior to V7, right-click the partition, select Properties, and click
Activate.

Figure 7-126 Activating a partition

2. Select the profile that you want to use for activating the partition, and click OK
(Figure 7-127)

Figure 7-127 Select the profile for activation

The HMC then displays a status dialog box that closes when the task is complete and
when the partition is activated. Then, wait for the DST panel to display.



3. The Disk Configuration Attention Report panel displays information about missing mirror
protected units (Figure 7-128). Select F10=Accept the problems and continue.

Disk Configuration Attention Report

Type option, press Enter.


5=Display Detailed Report

Press F10 to accept all the following problems and continue.


The system will attempt to correct them.

Opt Problem
Missing mirror protected units in the configuration

F3=Exit F10=Accept the problems and continue F12=Cancel


Figure 7-128 Missing disk units

Replacing the internal LSU


Because one half of the mirrored pair is still the old internal LSU that you removed earlier, you
now need to replace it. Follow these steps:
1. Start from the IPL or Install the System panel, and select the following options:
a. 3. Use Dedicated Service Tools (DST) and sign-on to DST
b. 4. Work with disk units
c. 2. Work with disk unit recovery
d. 3. Replace configured unit
2. In the Select Configured Unit to Replace panel, select the old, suspended internal LSU
(Figure 7-129), and press Enter.

Select Configured Unit to Replace

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0A773 6718 050 DD001 Suspended

F3=Exit F5=Refresh F12=Cancel


Figure 7-129 Select Suspended Unit

3. Select the new external load source mirror from the list that is provided (Figure 7-130).

Select Replacement Unit

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended

Type option, press Enter.


1=Select

Serial Resource
Option Number Type Model Name Status
50-1100741 2107 A82 DD033 Non-configured

F3=Exit F12=Cancel
Figure 7-130 Select Replacement Unit

4. You might receive a problem report, as shown in Figure 7-131. If so, look at the errors, and
check that you have not chosen the wrong LUN or that the configuration of the LUN is
correct. If the LUN and LUN configuration are correct, then press F10 to continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS
Lower level of mirrored protection

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-131 Problem Report



5. You then receive a confirmation panel, similar to that shown in Figure 7-132. Press Enter
to continue.

Confirm Replace of Configured Unit

This screen allows the confirmation of the configured unit to


be replaced with the selected replacement unit.

Press Enter to confirm your choice for replace.


Press F12 to return to change your choice.

The configured unit being replaced is:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0A773 6718 050 DD001 Suspended

The replacement unit will be:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 50-1100741 2107 A82 DD033 Resuming

F12=Cancel
Figure 7-132 Confirm Replace of Configured Unit

The progress panel is very similar to one that displayed previously (Figure 7-133).

Replace Disk Unit Data Status

The operation to replace a disk unit from the selected disk


units will be done in several phases. The phases are listed
here and the status will be indicated when known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 0 % Complete
Start compression (if needed) . . . . . :
Replace status . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-133 Replace Disk Unit Data Status

6. After the replacement process completes, you return to the Work with Disk Units recovery
panel. Press F12, and select the following options:
a. 1. Work with disk configuration
b. 1. Display disk configuration
c. 1. Display disk configuration status
A panel similar to that shown in Figure 7-134 displays.

Display Disk Configuration Status

Serial Resource
ASP Unit Number Type Model Name Status
1 Mirrored
1 50-1000741 2107 A82 DD013 Active
1 50-1100741 2107 A82 DD033 Resuming
3 68-0D0A0DA 6718 050 DD002 Active
3 68-0D0A51B 6718 050 DD005 Active
6 68-0D09EA2 6718 050 DD006 Active
6 68-0D0A6AE 6718 050 DD012 Active
9 68-0D0AB08 6718 050 DD009 Active
9 68-0D0BBB0 6718 050 DD003 Active
10 68-0D0A733 6718 050 DD010 Active
10 68-0D0BB13 6718 050 DD004 Active
11 68-0D0A722 6718 050 DD011 Active
11 68-0D09F9C 6718 050 DD008 Active

Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Disk configuration capacity F12=Cancel
Figure 7-134 Display Disk Configuration Status

Your external LSU disks are now mirrored, and the mirror resumes completely during IPL.

7.3.5 Unprotected internal LSU migrating to external LSU


For this scenario, we assume that your system meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this section, we describe the steps to
migrate a system from using an unprotected internal load source unit to using the boot from
SAN function.

Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.



When migrating from internal load source to external, you need to decide on your protection
strategy:
- For i5/OS V6R1 and later
We suggest that you use a RAID protected external load source and provide path
redundancy by using load source multipathing. To enable this multipathing, create the LUN
in the DS system as protected and include it in one DS volume group, which is assigned to
two Fibre Channel IOAs to be used for boot from SAN (see Figure 7-135).

Unprotected internal System i LPAR System i LPAR


load source
LSU

I/O Tower I/O Tower Boot from SAN Migration I/O Tower I/O Tower
Fibre Channel Fibre Channel Fibre Channel Fibre Channel
IOA IOA IOA
i5/OS V6R1 and later IOA

... LSU ...

Boot from SAN


multipath load source

Figure 7-135 Boot from SAN Migration for unprotected internal LSU for i5/OS V6R1 and later

- Prior to i5/OS V6R1
Because multipathing for the load source is not supported prior to i5/OS V6R1, you need
to mirror the external load source across two #2847 IOP/IOA pairs to provide path redundancy.
For this purpose, you must create two LUNs in the DS system as unprotected and assign
them to two different DS volume groups, which allows you to attach other LUNs to two
#2847 IOP-based FC adapters in a multipath configuration (see Figure 7-136).

Unprotected System i LPAR System i LPAR


internal
load source LSU

I/O Tower I/O Tower Boot from SAN Migration I/O Tower I/O Tower
#2844 IOP #2844 IOP #2847 IOP #2847 IOP
Fibre Channel Fibre Channel Fibre Channel Fibre Channel
IOA IOA IOA
prior to i5/OS V6R1 IOA

... LSU ... LSU' ...

Boot from SAN Boot from SAN


mirrored load source mirrored load source
unit A unit B

Figure 7-136 Boot from SAN Migration for unprotected internal LSU prior to i5/OS V6R1

In this section, we assume that you follow these recommendations for protection.

First, you need to configure your load source unit LUN or LUNs on the external storage
system using either DS CLI or the DS Storage Manager GUI. For further information, refer to
Chapter 8, “Using DS CLI with System i” on page 391 if you are using DS CLI or 9.2,
“Configuring DS Storage Manager logical storage” on page 474 if you are using the GUI.
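
With DS CLI, the difference between the two protection strategies is simply the volume model
that you create. The following lines are a sketch with assumed volume IDs, names, and extent
pool; see Chapter 8 for the complete procedure:

dscli> mkfbvol -extpool P0 -os400 A05 -name LSU_MPATH 2100
dscli> mkfbvol -extpool P0 -os400 A85 -name LSU_MIR_A 2101

The first command creates a protected (A05) volume for an i5/OS V6R1 multipath load source.
The second creates an unprotected (A85) volume for use as one half of a mirrored load source.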

If you are planning to use a mirrored external load source, add only one of the two
unprotected LUNs to your system ASP at this time.

Attention: At this time, you should have one LUN that is non-configured. It should be a
protected model A0x if you are using i5/OS V6R1 multipath load source and an
unprotected model A8x if you are going to mirror the external load source LUN. Only if you
are going to use load source mirroring do you now add another unprotected LUN to your
system ASP. Make sure to note the serial number of the external load source LUN or LUNs
for further reference.

Important: You are required to remove the internal load source unit. If you are not
comfortable with this task, engage services from your IBM Sales Representative or IBM
Business Partner.

Now that the new load source LUN or LUNs are attached to your system, you can use SST to
check that the disks are reporting correctly. When you display non-configured disk devices,
you should see the non-configured LUN (protected model A0x or unprotected model A8x) for
your new load source.



To check that your disks are reporting correctly:
1. Enter the STRSST command.
2. Select 3. Work with disk units.
3. Select 1. Display disk configuration.
4. Select 4. Display non-configured units. A panel similar to the one shown in Figure 7-137
displays.
Note the serial number for the non-configured unprotected LUN to use for the external
load source.

Display Non-Configured Units

Serial Resource
Number Type Model Name Capacity Status
50-1000741 2107 A85 DD013 35165 Non-configured
50-1105741 2107 A05 DD027 35165 Non-configured
50-110A741 2107 A05 DD019 35165 Non-configured
50-1107741 2107 A05 DD017 35165 Non-configured
50-1101741 2107 A05 DD033 35165 Non-configured
50-1108741 2107 A05 DD032 35165 Non-configured
50-1005741 2107 A05 DD030 35165 Non-configured
50-1003741 2107 A05 DD031 35165 Non-configured
50-1104741 2107 A05 DD026 35165 Non-configured
50-1004741 2107 A05 DD025 35165 Non-configured
50-1102741 2107 A05 DD028 35165 Non-configured
50-1109741 2107 A05 DD024 35165 Non-configured
50-1002741 2107 A05 DD029 35165 Non-configured
50-100A741 2107 A05 DD015 35165 Non-configured
More...
Press Enter to continue.

F3=Exit F5=Refresh F9=Display disk unit details


F11=Display device parity status F12=Cancel
Figure 7-137 Non configured disk units

Changing the tagged LSU
Now, go to the Hardware Management Console (HMC) and change the tagged LSU from the
RAID Controller Card for the internal LSU to the Fibre Channel IOA that is controlling your
new SAN LSU. Follow these steps on the HMC:
1. Select the partition name with which you are working. Choose Tasks → Configuration →
Manage Profiles, as shown in Figure 7-138.

Figure 7-138 Select HMC partition profile properties

2. In the Managed Profiles window, click Actions → Edit, as shown in Figure 7-139.

Figure 7-139 Selecting Edit functions.



3. From the Logical Partition Properties panel, select the Tagged I/O tab, as shown in
Figure 7-140.

Figure 7-140 Logical Partition Properties

4. On the Tagged I/O tab, click Select for the load source, as shown in Figure 7-141.

Figure 7-141 Tagged I/O properties

5. Now, select the IOA that is assigned to the new LSU, as shown in Figure 7-142. Click OK
to proceed.

Figure 7-142 Tag the Load Source Unit

Now, change the partition settings to do a manual IPL as follows:


1. Click Tasks → Properties in the drop-down menu as shown in Figure 7-143.

Note: For HMC versions prior to V7, right-click the partition name and select Properties.

Figure 7-143 Select Partition Properties



2. In the Partition Properties window, click the Settings tab (Figure 7-144).

Figure 7-144 Partition Properties

3. On the Settings panel, change the Keylock Position to Manual, as shown in Figure 7-145.

Figure 7-145 Setting the IPL type

4. IPL to DST by issuing the following command:


PWRDWNSYS OPTION(*IMMED) RESTART(*NO)

Attention: You must fully shut down the system and then reactivate it from the HMC so
that the new load source tag takes effect.

After the system has shut down, activate it again from the HMC. Follow these steps:
1. Select Tasks → Operations → Activate, as shown in Figure 7-146.

Figure 7-146 Activating a partition

2. Select the profile that you want to use to activate the partition (Figure 7-147).

Figure 7-147 Select the profile for activation

The HMC then displays a status dialog box that closes when the task is complete and when
the partition is activated. Then, wait for the DST panel to display.



Copying the LSU data
To copy the LSU data, follow these steps:
1. After the partition has IPLed to DST, select 3. Use Dedicated Service Tools (DST) as
shown in Figure 7-148.

IPL or Install the System


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-148 Use Dedicated Service Tools (DST)

2. After signing on to DST, select 4. Work with disk units from the Use Dedicated Service
Tools (DST) panel (Figure 7-149).

Use Dedicated Service Tools (DST)


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-149 Selecting - Work with disk units

3. At the Work with disk units panel, select 2. Work with disk unit recovery (Figure 7-150).

Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
2

F3=Exit F12=Cancel
Figure 7-150 Work with Disk Units

Important: Exercise care when using the options that we describe here, because using
the incorrect option can result in loss of data.

4. On the Work with Disk Unit Recovery panel, select 9. Copy disk unit data (Figure 7-151).

Work with Disk Unit Recovery

Select one of the following:

1. Save disk unit data


2. Restore disk unit data
3. Replace configured unit
4. Assign missing unit
5. Recover configuration
6. Disk unit problem recovery procedures
7. Suspend mirrored protection
8. Resume mirrored protection
9. Copy disk unit data
10. Delete disk unit data
11. Upgrade load source utility
12. Rebuild disk unit data
13. Reclaim IOP cache storage
More...

Selection
9

F3=Exit F11=Display disk configuration status F12=Cancel


Figure 7-151 Work with Disk Unit Recovery



5. Select the existing internal load source (disk unit 1) as the copy from unit (Figure 7-152).

Select Copy from Disk Unit

Type option, press Enter.


1=Select

Serial Resource
OPT Unit ASP Number Type Model Name Status
1 1 1 68-0D0BC12 6718 050 DD007 Configured
2 1 68-0D09F9C 6718 050 DD008 Configured
3 1 68-0D0A773 6718 050 DD001 Configured
4 1 68-0D0A0DA 6718 050 DD002 Configured
5 1 68-0D0A51B 6718 050 DD005 Configured
6 1 68-0D09EA2 6718 050 DD006 Configured
7 1 68-0D0BBB0 6718 050 DD003 Configured
8 1 68-0D0BB13 6718 050 DD004 Configured
9 1 68-0D0AB08 6718 050 DD009 Configured
10 1 68-0D0A733 6718 050 DD010 Configured
11 1 68-0D0A722 6718 050 DD011 Configured
12 1 68-0D0A6AE 6718 050 DD012 Configured

F3=Exit F5=Refresh F11=Display non-configured units F12=Cancel


Figure 7-152 Select Copy from Disk Unit

6. Select the designated unprotected external load source LUN (model A8x), for which you
noted the serial number previously, as the copy to unit (Figure 7-153).

Select Copy to Disk Unit Data

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Configured

1=Select

Serial Resource
Option Number Type Model Name Status
1 50-1000741 2107 A85 DD013 Non-configured
50-1103741 2107 A05 DD014 Non-configured
50-1007741 2107 A05 DD023 Non-configured
50-1006741 2107 A05 DD020 Non-configured
50-110A741 2107 A05 DD019 Non-configured
50-1109741 2107 A05 DD024 Non-configured
50-1004741 2107 A05 DD025 Non-configured
50-1002741 2107 A05 DD029 Non-configured
50-1005741 2107 A05 DD030 Non-configured
More...
F3=Exit F11=Display disk configuration status F12=Cancel
Figure 7-153 Select Copy to Disk Unit

7. When you are certain that you have selected the correct from and to units, press Enter.
If the LUN was previously attached to a system, you might see the panel shown in
Figure 7-154. If so and if you are sure it is the correct LUN, press F10 to ignore the
problem report and continue.

Problem Report

Note: Some action for the problems listed below may need to
be taken. Please select a problem to display more detailed
information about the problem and to see what possible
action may be taken to correct the problem.

Type option, press Enter.


5=Display Detailed Report

OPT Problem
Unit possibly configured for Power PC AS

F3=Exit F10=Ignore problems and continue F12=Cancel


Figure 7-154 Disk problem report

8. On the Confirm Copy Disk Unit Data panel, review your choice again to prevent
accidentally copying the wrong disk unit, and press Enter to confirm (Figure 7-155).

Confirm Copy Disk Unit Data

Press Enter to confirm your choice for copy.


Press F12 to return to change your choice.

Disk being copied:

Serial Resource
Unit ASP Number Type Model Name Status
1 1 68-0D0BC12 6718 050 DD007 Configured

Disk that is copied to:

Serial Resource
Number Type Model Name Status
50-1000741 2107 A85 DD013 Non-configured

F12=Cancel
Figure 7-155 Confirm Copy of Disk Unit



The system copies the existing LSU to the new LUN, and a panel displays that indicates
progress, as shown in Figure 7-156.

Copy Disk Unit Data Status

The operation to copy a disk unit will be done in several phases.


The phases are listed here and the status will be indicated when
known.

Phase Status

Stop compression (if needed) . . . . . . : Completed


Prepare disk unit . . . . . . . . . . . : 31 % Complete
Start compression (if needed) . . . . . :
Copy status. . . . . . . . . . . . . . . :

Number of unreadable pages:

Wait for next display or press F16 for DST main menu
Figure 7-156 Copy Disk Unit Data Status

It can take 10 to 60 minutes for the process to complete. When it is complete, you return to
the Work with Disk Unit Recovery panel. You now have migrated the LSU, but to use it, you
need to IPL your partition as follows:
1. Press F12 twice to return to the main DST panel.
2. Select 7. Start a service tool.
3. Select 7. Operator panel functions.

4. Make sure that the IPL source is set to 1 or 2 and that the IPL mode is set to 1 as shown in
Figure 7-157. Then, press F10 to shut down the system.

Operator Panel Functions


System: MICKEY
IPL source: 2 (1=A, 2=B or 3=D)
IPL mode: 1 (1=Manual, 2=Normal, 3=Secure or 4=Auto)

Press Enter to change the IPL attributes and return


to the main DST menu.
Press F8 to set the IPL attributes and restart the system.
Machine processing will be ended and the system will be
restarted.
Press F10 to set the IPL attributes and power off the system.
Machine processing will be ended and the system will be
powered off.
Press F12 to return to the main DST menu without changing
IPL attributes.

F3=Exit F8=Restart F10=Power off F12=Cancel


Figure 7-157 Restarting the system

5. Press Enter to confirm the restart request, as shown in Figure 7-158.

Confirm System Reset


System: MICKEY
Press Enter to confirm the reset of the system.
Press F3 or F12 to cancel and return to the main DST menu.

F3=Exit F12=Cancel
Figure 7-158 Confirm System Reset

Important: You now need to remove a disk unit from the system. If you are unsure how to
perform this task, contact a local customer engineer. This service is a chargeable service,
and you need to tell them that you are migrating to a Boot from SAN configuration.



Next, you need to activate the partition from the HMC as follows:
1. Select Tasks → Operations → Activate from the drop-down menu as shown in
Figure 7-159.

Figure 7-159 Activating a partition

2. Select the profile that you want to use to activate the partition (Figure 7-160).

Figure 7-160 Select the profile for activation

The HMC then displays a status dialog box that closes when the task is complete and the
partition is activated. Then, wait for the DST window to open.

Note: For releases prior to i5/OS V6R1, we recommend that you mirror the new external load
source for path protection. However, if you have other unprotected volumes in your system
ASP and do not want to start mirrored protection for ASP 1 yet, then your boot from SAN
migration procedure ends here, and you can perform an IPL of your system from the IPL or
Install the System panel.

Mirroring of the LSU
To mirror the LSU, follow these steps:
1. When the system has IPLed to DST, select 3. Use Dedicated Service Tools (DST) from
the IPL or Install the System panel (Figure 7-161).

IPL or Install the System


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Use Dedicated Service Tools (DST)
4. Perform automatic installation of the operating system
5. Save Licensed Internal Code

Selection
3
Figure 7-161 Use Dedicated Service Tools (DST)

2. After signing on to DST, select 4. Work with disk units from the Use Dedicated Service
Tools (DST) panel (Figure 7-162).

Use Dedicated Service Tools (DST)


System: MICKEY
Select one of the following:

1. Perform an IPL
2. Install the operating system
3. Work with Licensed Internal Code
4. Work with disk units
5. Work with DST environment
6. Select DST console mode
7. Start a service tool
8. Perform automatic installation of the operating system
9. Work with save storage and restore storage
10. Work with remote service support

12. Work with system capacity


13. Work with system security
14. End batch restricted state

Selection
4

F3=Exit F12=Cancel
Figure 7-162 Selecting - Work with disk units

3. Because the mirror is on a separate IOA, you must first enable remote load source
mirroring from the Work with Disk Configuration panel. Select 1. Work with disk
configuration (Figure 7-163).



Work with Disk Units

Select one of the following:

1. Work with disk configuration


2. Work with disk unit recovery

Selection
1

F3=Exit F12=Cancel
Figure 7-163 Work with disk units

4. Select 4. Work with mirrored protection (Figure 7-164).

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Work with ASP threshold
3. Work with ASP configuration
4. Work with mirrored protection
5. Work with device parity protection
6. Work with disk compression

Selection
4

F3=Exit F12=Cancel
Figure 7-164 Work with mirrored protection

5. Select 4. Enable remote load source mirroring (Figure 7-165).

Work with Mirrored Protection

Select one of the following:

1. Display disk configuration


2. Start mirrored protection
3. Stop mirrored protection
4. Enable remote load source mirroring
5. Disable remote load source mirroring

Selection
4

F3=Exit F12=Cancel
Figure 7-165 Work with Mirrored Protection

6. The confirmation panel shown in Figure 7-166 displays, explaining that this action only
enables remote load source mirroring and does not actually start it. Press Enter.

Enable Remote Load Source Mirroring

Remote load source mirroring will allow you to place the two
units that make up a mirrored load source disk unit (unit 1) on
two different IOPs. This may allow for higher availability
if there is a failure on the MFIOP.

Note: When there is only one load source disk unit attached to
the multifunction IOP, the system will not be able to IPL if
that unit should fail.

This function will not start mirrored protection.

Press Enter to enable remote load source mirroring.

F3=Exit F12=Cancel
Figure 7-166 Enable Remote Load Source Mirroring Confirmation panel



7. The Work with Mirrored Protection panel displays with a confirmation that Remote load
source mirroring enabled successfully. Select 2. Start mirrored protection to actually
start the mirrored protection. You are then prompted to select the ASP that you want to
mirror. Select ASP 1 (the system ASP), as shown in Figure 7-167.

Select ASP to Start Mirrored Protection

Select the ASPs to start mirrored protection on.

Type options, press Enter.


1=Select

Option ASP Protection


1 1 Unprotected

F3=Exit F12=Cancel
Figure 7-167 Select ASP to Start Mirrored Protection

When the system has IPLed, you are returned to the DST and mirroring is activated. The next
IPL fully starts the mirror by synchronizing the LUNs.

You can now either add the old LSU back into the ASP configuration or proceed to migrate the
remaining internal drives to the SAN.

7.3.6 Migrating to external LSU from iSeries 8xx or 5xx with 8 GB LSU
For this scenario, we assume that your system meets the prerequisites for boot from SAN
(see 7.2, “Migration prerequisites” on page 278). In this scenario, we describe the steps to
migrate an older system that already has all of its storage, except for the LSU, on external
storage and that you cannot upgrade to V5R3M5. The older system might be running a
release that is too old for boot from SAN support, or it might have an 8 GB internal load
source unit that cannot accommodate V5R3M5 and later, which requires at least a 17 GB
LSU. In either case, you have an existing system that is not capable of running V5R3M5 and
later. Thus, you use this process to migrate to a larger external load source housed in a SAN
and to the boot from SAN functionality.

Important: If you have already loaded V5R3M5 or later onto your system, you cannot use
the process that we describe here.

Attention: Do not turn off the system while the disk unit data function is running.
Unpredictable errors can occur if the system is shut down in the middle of the load source
migration function.

This process actually allows you to deal with two issues at the same time—increasing the
LSU size and migrating to V5R3M5 or later.

You can perform this procedure with any release that can support the new SAN connection
and an upgrade to the target release, in this case V5R3M5 or later.

You basically follow the same procedure that we describe in 7.3, “Migration scenarios” on
page 280 with the following changes:
1. If you are not yet running i5/OS V5R3M0 and, for this boot from SAN migration, plan to
upgrade to a release that is higher than i5/OS V5R3M5 (such as V5R4 or V6R1), make
sure that you meet all the i5/OS upgrade requirements (such as preparing for the
installation of PTFs). Refer to the i5/OS Information Center for the corresponding i5/OS
target release before proceeding with the migration.

Note: You cannot IPL an i5 system from a previous iSeries 8xx or 5xx load source to
migrate to SAN unless it is running V5R3M5 or higher.

2. The opening steps for this boot from SAN migration remain the same, in that you have to
suspend any mirrored pair for your load source. However, at this time, the Fibre Disk
Controller IOA that is intended to connect to the unprotected LUN for the load source is
still driven by an older IOP #2843 or 2844.

Note: If you currently have your internal load source mirrored to SAN external storage,
you must suspend the SAN unit.

3. When the internal load source is a single unit, you can then use the copy disk unit data
procedure to migrate the load source to a new LUN in the SAN that is 17 GB or larger.

Attention: At this point the migration procedure deviates from the other methods that
we have described previously.

4. You now have your load source out in the SAN on a 17 GB or larger LUN. The next step is
to shut down the system by using the DST option 7. Operator panel functions from the
Start a Service Tool panel.
5. If you are going to connect a new System i POWER5 or POWER6 system, detach the
SAN storage from the old iSeries model 8xx and connect it to your System i server.
6. Before you IPL your System i server, ensure that the load source LUN is now driven by a
#2847 IOP-based or IOP-less Fibre Channel IOA for boot from SAN. Also, ensure that the
HMC load source tagging is set correctly to the Fibre Channel IOA that is attached to your
new SAN LSU. If you are not going to i5/OS V6R1 load source multipathing and plan to
mirror your external load source instead, also ensure that a second unprotected LUN is
attached to a second boot from SAN Fibre Channel IOA and is added to the system ASP.
7. Make sure that your i5/OS target release (V5R3M5 or higher) SLIC CD I_Base_01 is
loaded into the CD/DVD drive, change the partition settings to a D-type manual mode IPL,
and then activate the partition.
8. When the system has IPLed to DST, perform an i5/OS software upgrade. Refer to the
information about installing, upgrading, or deleting i5/OS and related software in the
i5/OS Information Center for your corresponding target release.



7.3.7 SAN to SAN storage migration
Migrating from one SAN to another can be simpler than it sounds. There are three possible
approaches, and your choice is limited by the hardware and configuration options that are
available to you.

The three possible migration methods are:


 Unload and reload migration between SANs
 Data migration within the system
 PPRC between SANs

Comparison of SAN storage migration methods


Table 7-2 shows the differences in these three methods and their requirements.

Table 7-2 Comparison of methods

                                 Unload and Reload   Data Migration       PPRC between SANs
 Downtime required?              High                Mainly concurrent    Shut down, repoint SANs, IPL
 Additional hardware required?   None                Fibre Channel IOAs   PPRC connections
 Additional software required?   None                None                 PPRC licenses

Unload and reload migration between SANs


This is the standard method of backing up your system and then reloading it after you have
changed the hardware. It requires no additional hardware or software, just common tape
media and the downtime to perform the backup and restore, which can be considerable.
Unless you have the luxury of two systems, you need to make two copies of your backup to
provide protection from media errors.

For documentation about this method, see Backup and Recovery, SC41-5304, which is
available online at:
http://publib.boulder.ibm.com/iseries/

Data migration within the system


By attaching your new SAN to your system alongside your existing SAN, you can perform
most of the migration work while your system is in use. This causes considerably less
disruption to your users and might prove more manageable as a process.

You achieve the migration by adding all your new LUNs into the system from system service
tools (SST). This might require that you add Fibre Disk Controllers to your system to
accommodate the additional LUNs, or you might have sufficient capacity on your existing
adapters to accommodate the migration.

Next, instruct the system not to write any new data to the LUNs that are in the old SAN by
using the command STRASPBAL TYPE(*ENDALC) UNIT(a b c d), where a, b, c, and d are
the disk unit numbers of the LUNs in the old SAN.

Now, tell the system to move all of the permanent data from the old SAN to the new SAN by
using the command STRASPBAL TYPE(*MOVDTA) TIMLMT(t), where t is the time in
minutes for which you allow the function to run, or *NOMAX.
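
As an illustration only, assuming the old SAN LUNs are disk units 12 through 15 (hypothetical
unit numbers; verify the actual unit numbers on your system, for example with WRKDSKSTS),
the sequence might look like the following. The CHKASPBAL command can be used to check
the progress of the balance activity:

STRASPBAL TYPE(*ENDALC) UNIT(12 13 14 15)
STRASPBAL TYPE(*MOVDTA) TIMLMT(*NOMAX)
CHKASPBAL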

At this stage, you have only temporary data remaining on the old SAN. So, at a convenient
time, you can perform a manual IPL to DST, where you can then remove the old SAN LUNs
from the ASPs.

If your load source is also in the SAN, migrate the load source as well by using the DST copy
disk unit data function. If it is an internal drive, leave it unchanged.

When the old SAN LUNs are removed, turn off the system, and disconnect the old SAN.

PPRC between SANs


Using PPRC as a tool to migrate between SANs causes the least disruption of all to system
availability. However, using PPRC comes at the expense of possibly needing long wave
adapters for the old and the new SAN, and also a PPRC license for both. If you already have
these, this method is by far the simplest. If you do not, it can be an expensive option.

You first start by setting up PPRC between the old and new SANs to build and maintain an
identical disk set within the new SAN. When the image is fully established, you then turn off
the system and deconfigure the host link in the old SAN, while at the same time configuring
the link in the new SAN. Finally, IPL the system and, provided that the links are correct, the
system is back and operational.
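
As a rough sketch only, and assuming Metro Mirror (synchronous PPRC) between two
DS8000 systems, the DS CLI steps might look similar to the following. The storage image
IDs, WWNN, I/O port IDs, LSS numbers, and volume ranges shown here are hypothetical and
must be replaced with your own values:

mkpprcpath -dev IBM.2107-75OLD01 -remotedev IBM.2107-75NEW01 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 10 I0002:I0102
mkpprc -dev IBM.2107-75OLD01 -remotedev IBM.2107-75NEW01 -type mmir 1000-1005:1000-1005
lspprc -dev IBM.2107-75OLD01 1000-1005

When lspprc reports the volume pairs as Full Duplex, you can shut down the system, switch
the host connections to the new SAN, and IPL.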


Chapter 8. Using DS CLI with System i


IBM System Storage DS command-line interface (DS CLI) is an interface to configure and
manage DS storage complex and Copy Services from an open server platform. DS CLI
comes as a software package, and you can install it on several operating systems, including
i5/OS, AIX, Windows, Red Hat and SUSE® Linux, Sun Solaris, and HP-UX.

You can use DS CLI to perform storage management and Copy Services functions on the
DS6000 and DS8000. You can also use it to perform Copy Services functions on ESS 800
and ESS 750 with microcode levels 2.4.2.x and above. For example, with DS CLI, you can
format arrays and ranks on DS6000 and DS8000, create and delete LUNs, connect LUNs to
host systems, create Copy Services relationships, and so forth.

Before DS CLI became available, you managed ESS storage and Copy Services by the ESS
Command Line Interface (CLI) from open servers. The primary difference between the two
interfaces is that with DS CLI, you can invoke a Copy Service relationship directly. With ESS
CLI, you must first create a Copy Services task with the ESS Copy Services GUI and then
invoke it from ESS CLI using the rsExecuteTask command.

Also, ESS CLI was not available for i5/OS. So, customers who wanted to invoke Copy
Services tasks for i5/OS automatically had to use ESS CLI on a Windows server and then
trigger Copy Services using remote commands from i5/OS.



8.1 Overview
In this section, we provide an overview of the DS CLI functions and explain how to use them.

8.1.1 Functions of DS CLI


You can use the DS CLI to invoke the following storage configuration tasks:
 Create user IDs that can be used with both the DS CLI and the DS Storage Manager GUI
 Manage user ID passwords
 Install activation keys for licensed features
 Manage storage complexes and units
 Configure and manage storage facility images
 Create and delete RAID arrays, ranks and extent pools
 Create and delete logical volumes
 Manage host access to volumes
 Configure host adapter ports

You can use the DS CLI to invoke the following Copy Services functions:
 FlashCopy
 Metro Mirror, also known as synchronous Peer-to-Peer Remote Copy (PPRC)
 Global Copy, also known as PPRC-XD
 Global Mirror, also known as asynchronous PPRC

A storage unit is a physical unit that consists of a storage server and integrated disk devices.

A storage image is a partition of a storage unit that provides emulation of a storage server and
devices.

A storage complex is a configuration of one or more storage units that is managed by a
common Hardware Management Console (HMC). For example, a storage complex might
consist of two storage units between which PPRC is running.

8.1.2 Command flow of DS CLI in i5/OS


When a CLI command is invoked, it is sent through a client layer on the host to the ESS
Network Interface (ESSNI) server on the DS6000 System Management Console (SMC) or
DS8000 HMC. The ESSNI server on the SMC or HMC transmits the command to the DS. The
DS attempts to perform the requested command and returns success or failure, as well as any
query data that is requested, to the ESSNI server on the SMC or HMC. The ESSNI server
then transmits the data and the success or failure indication to the ESSNI client layer on the
host, where it is displayed to the DS CLI user.

When used with the DS8000, DS CLI connects to the HMC and can communicate with any
DS8000 systems that are connected to that HMC. DS CLI sends commands to the HMC,
where the ESSNI server directs them to the appropriate DS8000, based on the Machine Type
and Machine Serial (MTMS) supplied. The commands are executed on the DS8000 against
the microcode. Then, the response data is gathered and sent back to the ESSNI server on
the HMC and back to the DS CLI client.

Figure 8-1 shows the DS CLI command flow between the DS CLI client and the HMC or SMC.

Figure 8-1 DS CLI command flow

8.1.3 Using DS CLI commands


This section covers the different DS CLI command modes and the general command syntax.

Command modes
You can use the following command modes to invoke DS CLI commands:
 Single-shot
 Script
 Interactive

Typically, you use the DS CLI single-shot command mode when you want to issue only an
occasional command. In this mode, you enter dscli followed by the command.

An example of the lssi command, which lists the storage image configuration, in single-shot
command mode in Windows is as follows:
dscli -hmc1 9.5.17.156 -user admin -password itso4all lssi

The same single-shot DS CLI command in i5/OS is as follows:


DSCLI SCRIPT(*NONE) HMC1('9.5.17.156') USER(admin) PASSWORD(itso4all) DSCL(lssi)

The DS CLI script mode is useful when you want to issue a sequence of DS CLI commands
repeatedly. To use DS CLI script mode, create a file and insert the DS CLI commands that



you want to execute into that file. Then, specify the name of the script file when invoking DS
CLI.

The following example shows invoking a DS CLI script in Windows:


dscli -hmc1 9.5.17.156 -user admin -passwd itso4all -script /myscript.txt

To invoke a DS CLI script in i5/OS, you use a command similar to:


DSCLI SCRIPT('/myscript') USER(admin) OUTPUT('/outfile')

You use the DS CLI interactive command mode when you need to perform multiple
commands that you cannot incorporate into a script, for example, when you perform initial
storage setup configuration tasks. You invoke interactive DS CLI mode by typing dscli,
where you can specify or are prompted for the IP address of the HMC or SMC, the user ID,
and the password.

Command syntax
A DS CLI command consists of the following components:
 The command name specifies the task that DS CLI is to perform.
 One or more flags, each followed by flag parameters if required. Flags provide additional
information that directs DS CLI to perform a command task in a specific way.
 The command parameter, which provides the basic operations that are necessary to
perform the command task.

The following example shows a DS CLI command that lists DS ranks R1, R2, and R3:
lsrank -dev IBM.2107-7580741 R1 R2 R3

In this example, lsrank is the command, -dev is a flag, IBM.2107-7580741 is a flag parameter,
and R1, R2, and R3 are command parameters.

8.2 DS CLI logical storage configuration


If you want to implement boot from SAN with an external load source, you have to set up the
DS storage before you can install i5/OS. Also, if you want to use DS CLI for the initial storage
setup instead of the DS Storage Manager GUI, you need to install DS CLI on another system
for initial set up of the DS.

In this section, we describe how to configure the DS storage using DS CLI from Windows. We
describe how to use DS CLI commands to set up a DS6000 or DS8000 for System i external
storage. The commands that we show were performed on a DS6000. To set up a DS8000 for
System i external storage, you can use the same commands. The exception is array creation:
a DS6000 array site contains 4 DDMs, so you can make a 4 DDM array from one array site or
an 8 DDM array from two array sites, whereas a DS8000 array site contains 8 DDMs, so you
always create an 8 DDM array from one array site.

For a description of DS CLI commands, refer to IBM System Storage DS6000 Command-Line
Interface User’s Guide, GC26-7922, or IBM System Storage DS8000 Command-Line
Interface User’s Guide, SC26-7916.

In our test environment, we used DS CLI on a Windows server and a LAN connection to SMC
of DS6000. You can issue the same commands from DS CLI on i5/OS, but we performed
them from Windows because customers who use an external load source have to set up the
DS before IPL of i5/OS. These customers must use DS CLI on another server.

Perform the following steps to enter the DS CLI interactive command mode:
1. In a Windows command prompt, type dscli to start the DS CLI console.
2. Enter the IP address of your primary management console.
3. Enter the IP address of the secondary management console if you established a second
HMC or SMC configuration that serves as a backup for the first. Otherwise, press Enter.
4. Enter a user ID and a password. The initial user ID for DS CLI or GUI is admin, and the
initial password is admin.

The dscli> prompt in your Windows command window indicates that you are now in the DS
CLI console. You can verify your connection to the HMC or SMC ESSNI server by entering
the lssi command. If the connection is successful, this command lists the available storage
images.

8.2.1 Managing user IDs, profiles, and license keys


In this section, we describe how to create user IDs and how to manage profiles and license
keys with DS CLI.

Creating the DS CLI user ID


You need to create a user ID that you use for DS CLI. To create a user ID, use the mkuser
command, and specify parameters for password and the user access authority group.
Example 8-1 shows an example of the mkuser command.

Example 8-1 The DS CLI mkuser command


dscli> mkuser -pw abc123df -group admin itso01
Date/Time: July 8, 2005 11:29:30 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32
CMUC00133I mkuser: User itso01 successfully created.
dscli>

When you enter the DS CLI for the first time after you create a user, it prompts you to change
the password. Use the chuser command to change the password, as shown in Example 8-2.

Example 8-2 The DS CLI chuser command


dscli> chuser -pw itso4all itso01
CMUC00134I chuser: User itso01 successfully modified.
dscli>

We recommend that you create a password file for the user ID to contain an encrypted user
ID password. After you specify the name of the password file in the DS CLI profile, you do not
need to insert a password every time you use the DS CLI command framework. We describe
the DS CLI profile in “Setting up a DS CLI profile” on page 396.

To create a password file, use the managepwfile command. As shown in Example 8-3, the
password file is created in the current working directory, and a DS CLI message indicates
the directory in which the password file was created.

Example 8-3 The DS CLI managepwfile command


dscli> managepwfile -action add -pwfile itso01pw -name itso01 -pw itso4all
CMUC00205I managepwfile: Password file C:\Program Files\ibm\dscli\itso01pw succe
ssfully created.



CMUC00206I managepwfile: Record 9.5.17.171/itso01 successfully added to password
file C:\Program Files\ibm\dscli\itso01pw.
dscli>

Setting up a DS CLI profile


We recommend that you create a DS CLI profile or adjust the default DS CLI profile with
values that are specific for your DS CLI environment (for example, the IP address of HMC or
SMC, password file, and storage image ID). By using a DS CLI profile, you do not need to
insert these values every time you use DS CLI or use a DS CLI command.

After you install DS CLI, you can find a default profile with the name dscli.profile in the
C:\Program Files\IBM\dscli\profile directory. Insert the corresponding values for the following
parameters:
hmc1 The IP address of the primary management console.
hmc2 The IP address of the secondary management console, if available
pwfile If you created a password file, insert the path and name of the file. You have
to specify \\ to qualify the directory path separator.
username Insert the user ID for DS CLI. If you set up the password file, use the same
user ID that you specified in the password file.
password Insert the password for the specified user ID. This value is not required if you
use the password file.
devid Insert the storage image ID of the DS storage system, which consists of the
manufacturer, type, and serial number. For DS8000, use IBM.2107-xxxxx,
for DS6000, use IBM.1750-xxxxx, where xxxxx denotes the serial number of
DS. You can find the serial number of a DS8000 on the operator panel on
the DS base frame. When you insert the storage image ID in the profile,
replace the last digit 0 with 1 or 2, depending on the storage image that
you want to work with when using this profile.
The DS6000 serial number is on the label in the lower right corner of the
front side of the base enclosure. You can also find it using the DS6000 GUI
or DS8000 GUI. Go to My Work, expand Real Time Manager, expand
Manage hardware, and click Storage Images. The serial number of the
storage image is displayed in the frame, Storage images.
remotedevid If you use Copy Services, specify the target storage image ID.

Example 8-4 shows part of a DS CLI profile with customized parameter values.

Example 8-4 DS CLI profile


#
# DS CLI Profile
#

#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1: 9.5.17.171
#hmc2:
# Password filename
# The password file can be generated using mkuser command.
#
pwfile: c:\\Program files\\ibm\\dscli\\itso01pw

username: itso01

#
# Default target Storage Image ID
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storeage_image_ID" command options,
respectively.
devid: IBM.1750-13abvda
#remotedevid:IBM.1750-13abvda
#
# locale
# Default locale is based on user environment.
#locale: en

# Timeout value of client/server syncronous communication in second.


# DSCLI command timeout value may be longer than client/server communication
# timeout value since multiple requests may be made by one DSCLI command
# The number of the requests made to server depends on DSCLI commands.
# The default timeout value is 420 seconds.
timeout 840
#

Activating license keys


Before you configure the storage, you need to insert the licensed machine code (LIC)
activation keys for the DS. You need the signature, which you can obtain with the showsi
command, as shown in Example 8-5.

Example 8-5 The DS CLI showsi command


dscli> showsi -fullid ibm.1750-13abvda
Date/Time: July 9, 2005 12:07:33 AM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: ib
m.1750-13abvda
Name -
desc -
ID IBM.1750-13ABVDA
Storage Unit IBM.1750-13ABVDA
Model 511
WWNN 500507630EFE0154
Signature 462128FD37CDB166
State Online
ESSNet Enabled
Volume Group IBM.1750-13ABVDA/V0

After you obtain the signature, you can download the LIC activation keys from:
http://www.ibm.com/storage/dsfa

Then, insert the activation keys with the applykey command:


dscli> applykey -key xxxxxxxxx ibm.1750-13abvda



8.2.2 Configuring arrays, ranks, extent pools, and volumes
In this section, we illustrate how to create and configure arrays, ranks, extent pools, and
volumes using DS CLI.

Creating arrays
List the available array sites using the lsarraysite command. Example 8-6 shows the DS
CLI response, which shows four unassigned array sites. In the displayed array sites, you can
also observe through which device adapter (DA) pair they are attached.

Example 8-6 The DS CLI lsarraysite command


dscli> lsarraysite
Date/Time: July 9, 2005 12:15:46 AM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
arsite DA Pair dkcap (10^9B) State Array
===========================================
S1 0 146.0 Unassigned -
S2 0 146.0 Unassigned -
S3 0 146.0 Unassigned -
S4 0 146.0 Unassigned -

You create RAID-5 arrays by using the mkarray command. You might decide to have RAID-5
or RAID-10 arrays. Also, you can have 8 DDMs in an array or 4 DDMs in an array. For
performance reasons, we created arrays with 8 DDMs for our examples.

Example 8-7 shows the mkarray command, which creates a RAID-5 array with 8 DDMs from 2
array sites. It also shows the DS CLI response.

Example 8-7 The DS CLI mkarray command


dscli> mkarray -raidtype 5 -arsite s1,s2
Date/Time: July 8, 2005 2:39:54 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IBM
.1750-13ABVDA
CMUC00004I mkarray: Array A0 successfully created.

After the arrays are created, you can use the command lsarray to list them, as shown in
Example 8-8.

Example 8-8 The DS CLI lsarray command


dscli> lsarray
Date/Time: July 8, 2005 2:53:00 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IBM
.1750-13ABVDA
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
======================================================================
A0 Unassigned Normal 5 (6+P) S1,S2 - 0 146.0
A1 Unassigned Normal 5 (6+P) S3,S4 - 0 146.0

Creating ranks
From each array, create a fixed block rank using the mkrank command, as shown in
Example 8-9.

Example 8-9 The DS CLI mkrank command


dscli> mkrank -array a0 -stgtype fb
Date/Time: July 8, 2005 3:27:06 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IBM
.1750-13ABVDA
CMUC00007I mkrank: Rank R0 successfully created.

After you create the ranks, use the lsrank command to list them, as shown in Example 8-10.

Example 8-10 The DS CLI lsrank command


dscli> lsrank
Date/Time: June 23, 2005 5:57:57 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
ID Group State datastate Array RAIDtype extpoolID stgtype
==============================================================
R0 - Unassigned Normal A0 5 - fb
R1 - Unassigned Normal A1 5 - fb

Creating extent pools

Note: We recommend that you create an extent pool for each rank (see “Extent pools” on
page 42).

Use the mkextpool command to create an extent pool. After you create the extent pool, assign
a rank to it with the chrank command, as shown in Example 8-13 on page 400. Alternatively,
you can also create extent pools before you create ranks and assign each rank to an extent
pool with the extpool parameter in the mkrank command.

Create as many extent pools as there are ranks so that you can assign each rank to one
extent pool, as recommended. In the mkextpool command, determine which processor
(cluster) this particular extent pool will use by specifying cluster number 0 or 1 for the
rankgrp parameter.

Example 8-11 shows how to create an extent pool that is assigned to processor 0. We
recommend that you assign extent pools evenly between the two processors. Observe that a
created extent pool has an ID that is assigned to it automatically, which is different from the
name of the extent pool that you specify in the mkextpool command. In our example, the
name of the extent pool is extpool-0, but the ID is P0. When you use commands later that
refer to this extent pool, such as change extent pool or show extent pool, you refer to it by its
ID, not by its name.

If you create a large number of extent pools, you can consider using a DS CLI script.

Example 8-11 The mkextpool command


dscli> mkextpool -rankgrp 0 -stgtype fb extpool-0
Date/Time: June 23, 2005 6:00:23 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: I
M.1750-13ABVDA
CMUC00000I mkextpool: Extent pool P0 successfully created.
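
As noted previously, if you create a large number of extent pools, a DS CLI script can save
repetitive typing. The following is only a sketch (the extent pool names and the alternation of
rankgrp 0 and 1 are illustrative); the file is then run in DS CLI script mode as described in
8.1.3, “Using DS CLI commands”:

mkextpool -rankgrp 0 -stgtype fb extpool-0
mkextpool -rankgrp 1 -stgtype fb extpool-1
mkextpool -rankgrp 0 -stgtype fb extpool-2
mkextpool -rankgrp 1 -stgtype fb extpool-3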



After you create the extent pools, list them with the lsextpool command, as shown in
Example 8-12. In this example, the status for the extent pools is exceeded, which means that
the specified threshold percentage of available extents in an extent pool is exceeded. You
specify the threshold with the mkextpool command. Because we did not specify a threshold
value, the default value of 0 is used, so the threshold is reported as exceeded.

Example 8-12 The DS CLI lsextpool command


dscli> lsextpool
Date/Time: July 12, 2005 4:30:10 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=================================================================================================
extpool-0 P0 fb 0 exceeded 0 0 0 0 0
extpool-1 P1 fb 1 exceeded 0 0 0 0 0

After you create the extent pools, assign each rank to an extent pool using the chrank
command, as shown in Example 8-13.

Example 8-13 The DS CLI chrank command


dscli> chrank -extpool p0 r0
Date/Time: July 5, 2005 10:50:23 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
CMUC00008I chrank: Rank R0 successfully modified.

After you assign ranks to extent pools, use the lsrank command to observe to which extent
pool and cluster a particular rank is assigned, as shown in Example 8-14.

Example 8-14 The DS CLI lsrank command


dscli> lsrank
Date/Time: July 12, 2005 5:25:04 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
ID Group State datastate Array RAIDtype extpoolID stgtype
==========================================================
R0 0 Normal Normal A0 5 P0 fb
R1 1 Normal Normal A1 5 P1 fb

Creating logical volumes


To create logical volumes, that is, LUNs for the System i server, use the mkfbvol command.
When creating volumes, you decide to which logical subsystem (LSS) a particular volume
belongs. The first two digits of a volume ID are the number of the LSS to which the volume
belongs. When creating a volume, you specify its volume ID, and so you determine to which
LSS it will belong. For example, you might specify ID 2000, which means that the volume will
belong to LSS 20.

When creating logical volumes, take into account the following rules:
 Volumes that belong to an even rankgroup (cluster) must be in an even LSS. Volumes
that belong to an odd rankgroup (cluster) must be in an odd LSS. You determine to which
cluster a volume will belong by specifying the extent pool in which a volume will be
created. Volumes that are created in an extent pool with rankgroup 0 belong to cluster 0,
and volumes in an extent pool with rankgroup 1 belong to cluster 1. So, the volumes in an
extent pool with rankgroup 0 must have an even LSS number, and volumes in an extent
pool with rankgroup 1 must have an odd LSS number.
 LSS number FF is reserved for internal use. Do not use it for volume IDs.
 Avoid LSS number 00 for the following reasons:
– If the DS is attached by ESCON connection, then you must use LSS 00 for ESCON
connected volumes.
– If i5/OS external Load Source is on volume with ID 0000, and the serial number of DS
happens to be 0000000, i5/OS will not recognize Load Source. The disk serial number
in i5/OS is composed from the volume ID and the serial number of DS. Thus, if the
serial number of a disk is 0000000, the disk cannot be a Load Source.
 If you use DS Copy Services, plan which volumes are the source and target volumes and
make decisions for LSSs of volumes accordingly.

Note: We generally recommend that you use one LSS for volumes from the same rank to
keep track of your DS volume layout more easily.

i5/OS volumes have fixed sizes, such as 8.59 GB, 17.54 GB, 35.16 GB, and so on. You can
define each volume as protected or unprotected. (For more information about sizes and
protection of volumes, refer to Chapter 6, “Implementing external storage with i5/OS” on
page 207.) You determine the size and protection of a volume by specifying the volume model
in the mkfbvol command parameter -os400. For example, model A85 designates an
unprotected volume of size 35.16 GB.

You can use the mkfbvol command to create multiple volumes at the same time by specifying
a range of volume IDs. The volume ID format is hexadecimal XYZZ, where XY is the logical
subsystem number and ZZ is the volume number within that logical subsystem.
When creating a volume or a range of volumes and specifying #h as part of the volume name,
it is replaced by the volume ID of each volume. Observe the difference between volume name
and volume ID. When you use commands later that refer to this volume, such as change
volume or show volume, you refer to it by its ID, not by its name.

Example 8-15 shows the creation of one protected 35.16 GB volume and a range of protected
35.16 GB volumes.

Example 8-15 The DS CLI mkfbvol command


dscli> mkfbvol -extpool p0 -os400 A05 -name i5_prot_#h 1000
Date/Time: July 5, 2005 11:21:00 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
CMUC00025I mkfbvol: FB volume 1000 successfully created.
dscli> mkfbvol -extpool p0 -os400 A05 -name i5_prot_#h 1001-1002
Date/Time: July 5, 2005 11:26:40 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
CMUC00025I mkfbvol: FB volume 1001 successfully created.
CMUC00025I mkfbvol: FB volume 1002 successfully created.



Use the lsfbvol command to list all volumes on DS. Example 8-16 shows an example of the
lsfbvol command. In this example, an i5/OS partition uses volumes 1000, 1001, and 1002 in
LSS 10 and volumes 1100, 1101, and 1102 in LSS 11. Volume with ID 1000 has been named
to reflect that it will be used as an external Load Source. You can use FlashCopy on the other
set of volumes (that is 1003, 1004, 1005, 1103, 1104, and 1105). For a description of how
volume IDs reflect in disk serial number in i5/OS, refer to Chapter 6, “Implementing external
storage with i5/OS” on page 207.

Example 8-16 The DS CLI lsfbvol command


dscli> lsfbvol
Date/Time: July 12, 2005 9:02:17 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
=================================================================================================================
i5_LS_1000 1000 Online Normal Normal 1750-A05 FB 520P P0 32.8 35.2 68681728
i5_prot_1001 1001 Online Normal Normal 1750-A05 FB 520P P0 32.8 35.2 68681728
i5_prot_1002 1002 Online Normal Normal 1750-A05 FB 520P P0 32.8 35.2 68681728
i5_LS_1003 1003 Online Normal Normal 1750-A05 FB 520P P0 32.8 35.2 68681728
i5_prot_1004 1004 Online Normal Normal 1750-A05 FB 520P P0 32.8 35.2 68681728
i5_prot_1005 1005 Online Normal Normal 1750-A05 FB 520P P0 32.8 35.2 68681728
i5_prot_1100 1100 Online Normal Normal 1750-A05 FB 520P P1 32.8 35.2 68681728
i5_prot_1101 1101 Online Normal Normal 1750-A05 FB 520P P1 32.8 35.2 68681728
i5_prot_1102 1102 Online Normal Normal 1750-A05 FB 520P P1 32.8 35.2 68681728
i5_prot_1103 1103 Online Normal Normal 1750-A05 FB 520P P1 32.8 35.2 68681728
i5_prot_1104 1104 Online Normal Normal 1750-A05 FB 520P P1 32.8 35.2 68681728
i5_prot_1105 1105 Online Normal Normal 1750-A05 FB 520P P1 32.8 35.2 68681728
dscli>

Use the showfbvol command to show specifications and status of a particular volume, as
shown in Example 8-17.

Example 8-17 The DS CLI showfbvol command


dscli> showfbvol 0101
Date/Time: July 12, 2005 9:20:56 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
Name i5_prot_0101
ID 0101
accstate Online
datastate Normal
configstate Normal
deviceMTM 1750-A05
datatype FB 520P
addrgrp 0
extpool P1
exts 33
captype iSeries
cap (2^30B) 32.8
cap (10^9B) 35.2
cap (blocks) 68681728
volgrp V30

8.2.3 Configuring volume groups, I/O ports, and host connections
To assign the volumes to System i Fibre Channel adapters, you have to create volume groups
as a container entity for your logical volumes and assign a volume group to a host connection
that you create for each of your System i Fibre Channel IOAs.

Creating volume groups


Use the mkvolgrp command to create a volume group that contains the volumes to be
assigned to a System i FC adapter. In multipath, a volume group is assigned to two or more
System i FC adapters, each of them providing one path to the volumes in this volume group.
In our example, we first create a volume group with only one volume, the designated load
source volume, to ensure (when using System i SLIC prior to V5R4M5) that this volume is
the one chosen by the system as the external load source at SLIC installation.

When the partition is IPLed from the external Load Source, we add more volumes to this
volume group so that i5/OS recognizes them and can use them. For more information about
implementing external Load Source, refer to 6.4, “Setting up an external load source unit” on
page 211.

When creating a volume group for i5/OS, specify the -type os400mask parameter. Note that
i5/OS uses a blocksize of 520 bytes per sector. By specifying -type os400mask, you denote
that the volumes are formatted with 520 bytes per sector. Observe that a created volume
group has an ID that is assigned to it automatically, which is different than the name of volume
group that you specify in the mkvolgrp command. When you use commands later that refer to
this volume group, such as change volume group or show volume group, you refer to it by its
ID, not by its name.

In Example 8-18, the name of the volume group is blue, but the ID is V14. The volume
group is created with the logical volume ID 1000 assigned to it.

Example 8-18 The DS CLI mkvolgrp command


dscli> mkvolgrp -type os400mask -volume 1000 blue
Date/Time: July 5, 2005 11:57:50 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
CMUC00030I mkvolgrp: Volume group V14 successfully created.

If you will not use an external load source, you might want to create a volume group that
contains all volumes that are assigned to an i5 FC adapter by using the following command:
dscli> mkvolgrp -type os400mask -volume 1000-1002 volgrp10

Configuring I/O ports


To connect volumes to System i FC adapters through DS ports, set up the ports in DS for the
correct Fibre Channel topology. Use the setioport command to set up DS ports for FC
topology.

Important: Remember that the #2847 I/O processor supports the FC-SW (SCSI-FCP)
protocol only, while direct-attached IOP-less Fibre Channel IOAs require the FC-AL
protocol (see 4.2.2, “Planning considerations for i5/OS multipath Fibre Channel
attachment” on page 81).



Example 8-19 shows setting up a DS I/O port for use with #2847 IOPs, that is, with the topology set to SCSI-FCP.

Example 8-19 The DS CLI setioport command


dscli> setioport -topology scsi-fcp i0001
Date/Time: June 23, 2005 10:47:26 PM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS:
BM.1750-13ABVDA
CMUC00011I setioport: I/O Port I0001 successfully configured.

For each System i FC adapter, create a host connection and specify which volume group is
assigned to it. Then, you can assign volumes from one volume group to connect to a System i
FC adapter. You create a host connection with the mkhostconnect command. When creating a
host connection, specify the following parameters:
 Specify -wwname as the world-wide port name (WWPN) of your System i FC adapter.
 Specify -hosttype iSeries. With this parameter, you implicitly determine the correct
blocksize of 520 bytes per sector and address discovery method (Report LUNs) which is
used by i5/OS.
 Specify -volgrp as the volume group that is assigned to this host connection.

Example 8-20 shows an example of creating a host connection for a System i Fibre Channel
I/O adapter (IOA) being assigned volume group V14. Observe that the name chosen for the
host connection is adapter0 but its ID is 0002.

Example 8-20 The DS CLI mkhostconnect command


dscli> mkhostconnect -wwname 10000000C942BA4D -hosttype iSeries -volgrp V14
adapter0
Date/Time: July 6, 2005 12:15:43 AM GMT+01:00 IBM DSCLI Version: 5.0.4.32 DS: IB
M.1750-13ABVDA
CMUC00012I mkhostconnect: Host connection 0002 successfully created.
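
After the partition has been IPLed from the external load source, you can add the remaining
volumes to the same volume group and, for multipath, create a second host connection for
the other Fibre Channel IOA that references the same volume group. The following is only a
sketch; the volume IDs and the WWPN of the second adapter are hypothetical:

dscli> chvolgrp -action add -volume 1001-1002 V14
dscli> mkhostconnect -wwname 10000000C942BB5E -hosttype iSeries -volgrp V14 adapter1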

8.3 DS CLI support for i5/OS


The availability of DS CLI on i5/OS brings new possibilities, for example, using scripts for Copy
Services from i5/OS to implement automation solutions such as unattended FlashCopy
system backups. Also, the setup of a DS and changes to the logical configuration can be
performed from DS CLI in i5/OS.

The following objects are created when you install DS CLI in i5/OS:
 Library QDSCLI
The library QDSCLI contains the code for DS CLI commands.
 Files in IFS directory /ibm/dscli
The IFS directory /ibm/dscli contains sample files, instructions, and necessary Java .jar
files to run the DS CLI commands.

When you invoke a DS CLI command, it initiates a Java process and a Java Virtual Machine
(JVM™). It uses JDK™ APIs to communicate with the HMC or SMC through Java Secure
Sockets Layer (SSL).

If you use DS CLI on i5/OS in command-line mode, the DS CLI commands are issued in the
JVM, and the responses are displayed in the JVM as well.

With single-shot commands, the JVM is started, takes input from the command, executes it,
and displays the response. After the response displays, the JVM ends.

When you use DS CLI scripts, the JVM takes input from the script file, executes it, creates a
response file with the name that is specified on the DS CLI command, and writes the
responses to this file.

The initialization of the JVM, which attaches the Java .jar files from the /ibm/dscli directory, is
shown in Figure 8-16 on page 416.

8.3.1 Prerequisites
Before you install DS CLI on i5/OS, check that the following prerequisites are installed on
i5/OS:
 The latest Java group PTF
 i5/OS 5722-SS1 option 34 - Digital Certificate Manager
 Licensed product 5722-AC3 option *base - Crypto Access Provider 128-bit (before V5R4
only)
 Licensed product 5722-DG1 option *base - IBM HTTP Server for iSeries
 Licensed product 5722-JV1 option 6 - Java Developer Kit 1.4
 The latest cumulative PTF (CUM) package

8.3.2 Installing DS CLI on i5/OS


You install DS CLI on i5/OS from a Windows server using the DS CLI installation CD for
Windows. To install DS CLI on i5/OS, use a Windows server connected to i5/OS through an
IP connection. Follow these steps:
1. Insert the CD with DS CLI for Windows. In a Windows command prompt, change the
directory to your CD device (usually D:). Enter:
setupwin32 -os400
If you are using an image of the installation CD, go to a Windows command prompt and
change to the directory that contains the CD image files. Enter:
setupwin32 -os400
See Example 8-21.

Example 8-21 Installing DS CLI for i5/OS


*Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Administrator>cd..


C:\Documents and Settings>cd..
C:\>cd residency

C:\Residency>cd cd_image
The system cannot find the path specified.
C:\Residency>cd dscli-windows*

C:\Residency\DSCLI-Windows-Install-Imager>cd cd_*

C:\Residency\DSCLI-Windows-Install-Imager\cd_image>setupwin32 -os400



The InstallShield wizard runs, as shown in Figure 8-2.

Figure 8-2 InstallShield Wizard

2. Enter the IP address or the DNS name of i5/OS server, the i5/OS user ID, and the
password for i5/OS user ID, as shown in Figure 8-3.

Figure 8-3 Sign on to i5/OS server

3. The wizard initializes, as shown in Figure 8-4. Click Next to continue.

Figure 8-4 Initializing wizard

4. The Welcome panel displays, as shown in Figure 8-5. Click Next to continue.

Figure 8-5 Welcome panel



5. Next, the license agreement displays, as shown in Figure 8-6. Read the license
agreement and if you agree with it, select I accept the terms of the license agreement,
and click Next to continue. Otherwise, click Cancel.

Figure 8-6 Accepting the License agreement

6. Specify the IFS directory where Java is installed. The default directory, QOpenSys, is
inserted automatically (Figure 8-7). If Java is installed in the specified directory,
click Next. Otherwise, click Browse, select the IFS directory where Java is installed, and
click Next.

Figure 8-7 Specify Java directory



7. DS CLI is installed in the i5/OS integrated file system (IFS). A summary panel displays
the IFS directory in which DS CLI will be installed (Figure 8-8). If you agree to install
DS CLI in the specified directory, click Next. Otherwise, click Browse, select the IFS
directory in which you want DS CLI to be installed, and click Next.

Figure 8-8 Specify directory for DS CLI

The wizard continues to install DS CLI to the specified IFS directory in i5/OS, as shown in
Figure 8-9.

Figure 8-9 Installing DS CLI to i5/OS

8. A message that indicates a successful installation displays, as shown in Figure 8-10. Click
Next to continue.

Figure 8-10 Message about successful installation



9. When installation completes, the DS CLI readme file displays. Read this information, and
click Finish, as shown in Figure 8-11.

Figure 8-11 The readme file

DS CLI is installed in the following places in i5/OS:


 The IFS directory /ibm/dscli, which contains profiles, executable files, Java .jar
readme files, and so forth
 The library QDSCLI, which contains executable code

8.3.3 Invoking DS CLI from i5/OS
Before you invoke DS CLI from i5/OS, add the library QDSCLI to the i5/OS library list using
the addlible qdscli command, as shown in Figure 8-12.

MAIN OS/400 Main Menu


System: MINNIE
Select one of the following:

1. User tasks
2. Office tasks
3. General system tasks
4. Files, libraries, and folders
5. Programming
6. Communications
7. Define or change the system
8. Problem handling
9. Display a menu
10. Information Assistant options
11. iSeries Access tasks

90. Sign off

Selection or command
===> addlible qdscli

F3=Exit F4=Prompt F9=Retrieve F12=Cancel F13=Information Assistant


F23=Set initial menu
Figure 8-12 Add QDSCLI to the library list



To invoke DS CLI from the i5/OS green panel:
1. Enter dscli in the command line, as shown in Figure 8-13, and press F4 to enter the
command prompt.

MAIN OS/400 Main Menu


System: MINNIE
Select one of the following:

1. User tasks
2. Office tasks
3. General system tasks
4. Files, libraries, and folders
5. Programming
6. Communications
7. Define or change the system
8. Problem handling
9. Display a menu
10. Information Assistant options
11. iSeries Access tasks

90. Sign off

Selection or command
===> dscli

F3=Exit F4=Prompt F9=Retrieve F12=Cancel F13=Information Assistant


F23=Set initial menu
Figure 8-13 Invoking DS CLI for i5/OS

DS CLI displays the panel where you specify whether you are using a DS CLI script for DS
CLI commands and which DS CLI profile you are using. In our example, we did not use a
script, so we specify *none.
DS CLI on i5/OS comes with a default profile in the file dscli.profile in the IFS directory
/ibm/dscli/profile. If you use the default profile, leave the value *DEFAULT in the Profile
field, but if you use another file as a profile, specify the name and path of this file in the
Profile field.

In our example, we use the default profile. However, at this point, we have not set up the
profile with values for DS. So, we leave the *DEFAULT value in the Profile field
(Figure 8-14). The profile does not have any impact on our use of DS CLI yet.
After you specify the values on this panel, press Enter.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . *none

Profile . . . . . . . . . . . . *DEFAULT

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-14 Script and profile with DS CLI on i5/OS

2. In the next panel, specify the following values in the fields, as shown in Figure 8-15 on
page 416.
HMC1 Specify the IP address of the primary management console of DS.
HMC2 Insert the IP address of the secondary management console of DS. If the
secondary management console is not used, you can leave the field
specified as *PROFILE.
For a description of the primary and secondary management consoles,
refer to 8.4, “Using DS CLI on i5/OS” on page 422.
User Insert the user ID for accessing DS. In our example, we use the initial user
ID admin.
Password Insert the initial password of user admin for accessing DS.
Install Path Insert the IFS directory in which the DS CLI stream files are installed.
DS CLI CMD Insert the DS CLI command. Alternatively, if you use the DS CLI command
frame, insert *int to start the command frame. In our example, we use the
DS CLI command frame, so we insert *int.
After you insert the values, press Enter.



Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > *NONE

Profile . . . . . . . . . . . . *DEFAULT

HMC1 . . . . . . . . . . . . . . 9.5.17.171
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . . admin
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . . *int

Figure 8-15 Inserted values for use DS CLI on i5/OS with DS

The first time that you invoke DS CLI on i5/OS, you see informational messages, as shown in
Figure 8-16. This informational panel is not shown on subsequent DS CLI invocations.

Java Shell Display

Attaching Java program to /ibm/dscli/lib/dscli.jar.


Attaching Java program to /ibm/dscli/lib/ssgfrmwk.jar.
Attaching Java program to /ibm/dscli/lib/ESSNIClient.jar.
Attaching Java program to /ibm/dscli/lib/logger.jar.
Attaching Java program to /ibm/dscli/lib/xerces.jar.

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-16 Information about initializing Java programs

You are presented with the DS CLI command framework panel, where you can enter DS CLI
commands on the command line of the interface (Java shell), as shown in Figure 8-17.

Java Shell Display

Date/Time: July 8, 2005 2:55:20 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA

dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-17 DS CLI command frame on i5/OS

Use the DS CLI lssi command to list available storage images, as shown in Figure 8-18.

Java Shell Display

Date/Time: July 8, 2005 2:55:20 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA

dscli>
> lssi
Date/Time: July 8, 2005 2:56:14 PM CDT IBM DSCLI Version: 5.0.4.32
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.1750-13ABVDA IBM.1750-13ABVDA 511 500507630EFE0154 Online Enabled
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-18 Available storage images



8.3.4 Setting up DS CLI on i5/OS
After you install DS CLI on i5/OS, create a user ID using the DS CLI mkuser command.

We recommend that you create a password file that contains an encrypted user ID and
password. After you create a password file and insert its name in the DS CLI profile, you do
not need to insert the password every time that you invoke DS CLI from i5/OS. Use the DS
CLI managepwfile command to create a password file, as shown in Figure 8-19. If you specify
an unqualified name of the password file, it is created in the home IFS directory.

Java Shell Display

Date/Time: July 18, 2005 3:27:14 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA

dscli>
> managepwfile -action add -pwfile itso02pw -name itso02 -pw itso4all
Date/Time: July 18, 2005 3:28:12 PM CDT IBM DSCLI Version: 5.0.4.32
CMUC00205I managepwfile: Password file /itso02pw successfully created.
CMUC00206I managepwfile: Record 9.5.17.171/itso02 successfully added to
passw
ord file /itso02pw.
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry

Figure 8-19 Create password file

We recommend that you create a DS CLI profile that contains values such as the IP address
of the DS management console, the storage image ID, the name of the password file, and so
forth. After you have stored these values in the DS CLI profile, you do not need to specify them
every time that you invoke DS CLI or enter a command within the DS CLI framework.

After installing DS CLI on i5/OS, you see a sample stream file dscli.profile in the IFS
directory /ibm/dscli/profile. You can change this file so that it contains values for your
installation, or you can copy it to another file and change the copy.

The default name of the DS CLI profile is dscli.profile, and its default location is the IFS
directory /ibm/dscli/profile. If your DS CLI profile is the default file, leave the Profile
parameter as *DEFAULT when invoking DS CLI from i5/OS, as shown in Figure 8-20.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > *NONE

Profile . . . . . . . . . . . . *DEFAULT

HMC1 . . . . . . . . . . . . . . *PROFILE
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . . itso02
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . . *int

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-20 Using default DS CLI profile



If you use a file other than the default file for the DS CLI profile, specify its qualified file name
for the Profile parameter when invoking DS CLI from i5/OS, as shown in Figure 8-21.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > *NONE

Profile . . . . . . . . . . . . > '/myprof.profile'

HMC1 . . . . . . . . . . . . . . *PROFILE
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . . itso02
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . . *int

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys

Figure 8-21 Using another than default DS CLI profile

Figure 8-22 shows a part of the profile for DS CLI on i5/OS.

Edit File: /myprof.profile


Record : 1 of 71 by 8 Column : 1 95 by 74
Control :

CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
************Beginning of data**************
#
# DS CLI Profile
#

# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1: 9.5.17.171
#
pwfile: /itso02pw
#sername:
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storeage_image_ID" command opt
devid: IBM.1750-13ABVDA
#remotedevid: IBM.2107-AZ12341

F2=Save F3=Save/Exit F12=Exit F15=Services F16=Repeat find


F17=Repeat change F19=Left F20=Right
Figure 8-22 Profile of DS CLI on i5/OS

End the DS CLI command framework on i5/OS by typing exit, as shown in Figure 8-23.

Java Shell Display

Date/Time: July 18, 2005 4:18:32 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA

dscli>

===> exit

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-23 Exit DS CLI command framework

A message displays to inform you that the Java program has completed, as shown in
Figure 8-24.

Java Shell Display

Date/Time: July 18, 2005 4:18:32 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA

dscli>
> exit
Java program completed

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-24 DS CLI Framework completed

Leave the Java shell by pressing Enter or by F3=Exit.



To use DS CLI scripts from i5/OS, use the edtf command to create an IFS stream file that
contains the DS CLI script.

8.4 Using DS CLI on i5/OS


After you set up the DS, you can use the DS CLI from i5/OS to invoke Copy Services, to make
changes to the DS configuration, to manage DS, and so forth. It might be a good idea to
maintain DS CLI in an i5/OS partition so that it is always on the correct level for installed DSs.

To use DS CLI on i5/OS, follow these steps:


1. To invoke the DS CLI interactive command framework, enter dscli and press F4. The
command prompt shown in Figure 8-14 on page 415 displays. Type the value *none for the
Script parameter, and press Enter.
2. The command prompt shown in Figure 8-15 on page 416 displays. Insert the values, and
press Enter, which brings you to the DS CLI interactive command framework
(Figure 8-17 on page 417).
3. After you finish working with the interactive command framework, type exit to end the JVM,
as shown in Figure 8-24 on page 421.
4. To invoke a DS CLI single-shot command, enter dscli and press F4. The panel shown in
Figure 8-25 displays.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . .

Profile . . . . . . . . . . . . *DEFAULT

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-25 Prompt when invoking DS CLI

5. Insert *none for the Script parameter, as shown in Figure 8-26, and press Enter.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . *none

Profile . . . . . . . . . . . . *DEFAULT

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys

Figure 8-26 Script *none

The panel shown in Figure 8-27 displays.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > *NONE

Profile . . . . . . . . . . . . *DEFAULT

HMC1 . . . . . . . . . . . . . . *PROFILE
HMC2 . . . . . . . . . . . . . . *PROFILE
User . . . . . . . . . . . . . .
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
DSCLI CMD . . . . . . . . . . .

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-27 Invoking DS CLI single-shot command



6. Insert the values shown in Figure 8-28, and press Enter.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > *NONE

Profile . . . . . . . . . . . . *DEFAULT

HMC1 . . . . .. . . . . . . . . > '9.5.17.171'


HMC2 . . . . .. . . . . . . . . *PROFILE
User . . . . .. . . . . . . . . > admin
Password . . .. . . . . . . . . >
Install Path .. . . . . . . . . '/ibm/dscli'
DSCLI CMD . .. . . . . . . . . > lsarray
Cmd Options .. . . . . . . . .
+ for more values
dev . . . . . . . . . . . . . . IBM.1750-13abvda
remotedev . . . . . . . . . . .
Volumes . . . . . . . . . . . .
+ for more values
More...
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-28 DS CLI single-shot command

JVM is initiated and the response to the DS CLI command displays immediately, as shown
in Figure 8-29.

Java Shell Display

Date/Time: July 20, 2005 3:54:07 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
=================================================================
A0 Assigned Normal 5 (6+P) S1,S2 R0 0 146.0
A1 Assigned Normal 5 (6+P) S3,S4 R1 0 146.0
Java program completed

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry

Figure 8-29 Response to DS CLI command

7. Press Enter to end the command.

To use DS CLI scripts from i5/OS, perform the following steps:
1. Create a stream file in IFS to contain the DS CLI script using the edtf command, as
shown in Figure 8-30.

Edit File (EDTF)

Type choices, press Enter.

Stream file, or . . . . . . . . /lsarray

Data base file . . . . . . . . . Name


Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-30 Create script file

The empty file displays, as shown in Figure 8-31.

Edit File: /lsarray


Record : 1 of 1 by 8 Column : 1 59 by 74
Control :

CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
************Beginning of data**************

************End of Data********************

F2=Save F3=Save/Exit F12=Exit F15=Services F16=Repeat find


F17=Repeat change F19=Left F20=Right
Figure 8-31 Empty file for script



2. Save the empty file by pressing F2. Press F15 to open the EDTF Options panel, as shown
in Figure 8-32.

EDTF Options Screen

Selection . . . . . . . . . . . .

1. Copy from stream file . . . . /lsarray

2. Copy from database file . . . Name


Library . . . . . . . . . . . Name, *LIBL, *CURL
Member . . . . . . . . . . . Name, *FIRST

3. Change CCSID of file . . . . . 00037 Job CCSID: 00037

4. Change CCSID of line . . . . . *NONE

5. Stream file EOL option . . . . *CRLF *CR, *LF, *CRLF, *LFCR, *USRDFN
User defined. . . . . . . . . Hexadecimal value

F3=Exit F12=Cancel
Figure 8-32 EDTF options

3. On the EDTF Options panel, enter 3 and enter 00819 as the Job CCSID, as shown in
Figure 8-33. Press Enter to perform the change. Then, press F3 to exit.

EDTF Options Screen

Selection . . . . . . . . . . . . 3

1. Copy from stream file . . . . /lsarray

2. Copy from database file . . . Name


Library . . . . . . . . . . . Name, *LIBL, *CURL
Member . . . . . . . . . . . Name, *FIRST

3. Change CCSID of file . . . . . 00819 Job CCSID: 00037

4. Change CCSID of line . . . . . *NONE

5. Stream file EOL option . . . . *CRLF *CR, *LF, *CRLF, *LFCR, *USRDFN
User defined. . . . . . . . . Hexadecimal value

F3=Exit F12=Cancel

Figure 8-33 Changing CCSID

With this, the CCSID of the script file is 819, which is needed for DS CLI scripts on i5/OS.

4. Insert the DS CLI commands in the script file, as shown in Figure 8-34. Then, press F3 to
save the file and exit.

Edit File: /lsarray


Record : 1 of 1 by 8 Column : 1 59 by 74
Control :

CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+
************Beginning of data**************
lsarray -dev ibm.1750-13abvda
************End of Data********************

F2=Save F3=Save/Exit F12=Exit F15=Services F16=Repeat find


F17=Repeat change F19=Left F20=Right
Figure 8-34 Inserting scripts

5. To invoke the DS CLI, enter the DSCLI command, and press F4 to open the command prompt
panel. Insert the qualified name of the script file for the Script parameter, as shown in
Figure 8-35, and press Enter.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > '/lsarray'

Profile . . . . . . . . . . . . *DEFAULT

User . . . . . . . . . . . . . . admin
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
Output . . . . . . . . . . . . . /out1

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys
Figure 8-35 Using DSCLI on i5/OS
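
A script file is not limited to a single command. Because all commands in a script run in one DS CLI session, grouping them avoids repeated JVM start-up time. The following is a sketch only, reusing the example storage unit ID from this chapter:

   lsarray -dev ibm.1750-13abvda
   lsrank -dev ibm.1750-13abvda
   lsfbvol -dev ibm.1750-13abvda

Each command writes its results to the output file that you specify for the Output parameter.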

In the following sections, we describe examples of using DS CLI from i5/OS.



8.4.1 Invoking FlashCopy
In this example, DS CLI on i5/OS is used to perform cloning of the production partition. For
more information about this scenario, refer to Chapter 12, “Cloning i5/OS” on page 553.
Unless you are using the new i5/OS V6R1 quiesce for Copy Services function CHGASPACT
(see IBM System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103), you must perform the FlashCopy from DS CLI in a partition that is not affected by
the actions of cloning, because the production partition is shut down before the FlashCopy is
taken and the backup partition might not be running at that time.

In our example, the production partition has all of its disk units, including the load source, on
LUNs from a DS8000. The LUNs of the production partition are in LSS 0x14 and belong to
volume group V4. You can display them by using the showvolgrp command from DS CLI on
i5/OS, as shown in Figure 8-36.

Java Shell Display

I
dscli>
> showvolgrp v4
Date/Time: July 19, 2005 5:34:01 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.2
107-7580741
Name Volume Group 5
ID V4
Type OS400 Mask
Vols 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 140A

dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-36 LUNs belonging to production partition

The LUNs that belong to the backup partition are in LSS 0x16 of the same DS8000 and are
contained in volume group V6. You can look at them using the showvolgrp command from DS
CLI on i5/OS, as shown in Figure 8-37.

* Java Shell Display

dscli>
> showvolgrp v6
Date/Time: July 19, 2005 5:35:04 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.2
107-7580741
Name Volgrp7
ID V6
Type OS400 Mask
Vols 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 160A

dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-37 LUNs belonging to backup partition

After the production partition is shut down, perform the FlashCopy by using a DS CLI script.
For instructions about how to use DS CLI scripts, refer to 8.4, “Using DS CLI on i5/OS” on
page 422.

Figure 8-38 shows the script that you use to invoke FlashCopy. Observe that in this example,
we use the FlashCopy nocopy option. Therefore, we specify the -nocp parameter.

Browse : /flash.script
Record : 1 of 1 by 14 Column : 1 59 by 79
Control :

....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
************Beginning of data**************
mkflash -dev IBM.2107-7580741 -nocp 1400-140a:1600-160a
************End of Data********************

F3=Exit F10=Display Hex F12=Exit F15=Services F16=Repeat find


F19=Left F20=Right
Figure 8-38 Script for FlashCopy
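
Because we use the nocopy option, the FlashCopy relationships persist until they are withdrawn. Before repeating the cloning cycle, you can remove the previous relationships with a script similar to the following sketch (the -quiet parameter suppresses the confirmation prompt; adapt the device ID and volume pairs to your environment):

   rmflash -dev IBM.2107-7580741 -quiet 1400-140a:1600-160a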



To invoke the FlashCopy script, enter the DSCLI command, and press F4 to open the command prompt.
Insert the qualified name of the script file, the user ID for DS CLI, and the password, and specify the
name of the output file to which DS CLI writes the results of the script commands, as shown
in Figure 8-39. Then, press Enter. Observe that in this example, we use the default DS CLI
profile, so we leave the Profile parameter as *DEFAULT.

Run Copy Services (DSCLI)

Type choices, press Enter.

Script . . . . . . . . . . . . . > '/flash.script'

Profile . . . . . . . . . . . . *DEFAULT

User . . . . . . . . . . . . . . admin
Password . . . . . . . . . . . .
Install Path . . . . . . . . . . '/ibm/dscli'
Output . . . . . . . . . . . . . /out3

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys

Figure 8-39 Invoking the script for FlashCopy

After the script completes successfully, observe the output in the IFS file that you specified
when invoking the script. In our example, the file is /out3. Figure 8-40 shows the contents of
the file.

Browse : /out3
Record : 1 of 12 by 14 Column : 1 88 by 79
Control :

....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....
************Beginning of data**************
Date/Time: July 20, 2005 3:27:19 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.2107
CMUC00137I mkflash: FlashCopy pair 1400:1600 successfully created.
CMUC00137I mkflash: FlashCopy pair 1401:1601 successfully created.
CMUC00137I mkflash: FlashCopy pair 1402:1602 successfully created.
CMUC00137I mkflash: FlashCopy pair 1403:1603 successfully created.
CMUC00137I mkflash: FlashCopy pair 1404:1604 successfully created.
CMUC00137I mkflash: FlashCopy pair 1405:1605 successfully created.
CMUC00137I mkflash: FlashCopy pair 1406:1606 successfully created.
CMUC00137I mkflash: FlashCopy pair 1407:1607 successfully created.
CMUC00137I mkflash: FlashCopy pair 1408:1608 successfully created.
CMUC00137I mkflash: FlashCopy pair 1409:1609 successfully created.
CMUC00137I mkflash: FlashCopy pair 140A:160A successfully created.
************End of Data********************

Figure 8-40 Output of FlashCopy script

When the mkflash command completes and the FlashCopy pairs are created successfully,
you can re-IPL or resume the Production partition and begin working. At the same time, the
Backup partition can IPL to bring up the clone of the production partition.
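
You can also verify the FlashCopy relationships from DS CLI before you IPL the backup partition. A minimal sketch, using the source volume range from our example:

   lsflash -dev IBM.2107-7580741 1400-140a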

8.4.2 Starting Remote Mirror (PPRC)


In this example, we use DS CLI on i5/OS to start Remote Mirror between a DS6000 and a
DS8000. For a detailed description of using Remote Mirror with i5/OS partitions, refer to
Chapter 12, “Cloning i5/OS” on page 553.

The LUNs belonging to the production partition are on the DS6000. Two unprotected LUNs
serve as the external load source and its mirror on the DS6000. The other LUNs are protected
and connected with multipath.

Use the DS CLI showfbvol command to observe a particular LUN. Figure 8-41 shows the
showfbvol command. The LUN shown is the external Load Source. Observe the model A85
and datatype FB 520U, which denote an unprotected LUN.

Java Shell Display

dscli>
> showfbvol 1000
Date/Time: July 18, 2005 7:26:50 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
Name i5_unprot_1000
ID 1000
accstate Online
datastate Normal
configstate Normal
deviceMTM 1750-A85
datatype FB 520U
addrgrp 1
extpool P0
exts 33
captype iSeries
cap (2^30B) 32.8
cap (10^9B) 35.2
cap (blocks) 68681728
volgrp V30,V14

dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-41 The showfbvol command



One instance of the mirrored load source and the protected LUNs are in volume group V14,
which is connected to one System i FC adapter. The other instance of the mirrored load source
and all of the protected LUNs are in volume group V15, which is connected to another System i
FC adapter. You can display the volumes in both volume groups with the showvolgrp command,
as shown in Figure 8-42 and Figure 8-43.

Java Shell Display

> showvolgrp v14


Date/Time: July 18, 2005 7:14:37 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
Name volgrp10
ID V14
Type OS400 Mask
Vols 1000 1001 1002 1101 1102

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry

Figure 8-42 The showvolgrp command V14

Java Shell Display

dscli>
> showvolgrp v15
Date/Time: July 18, 2005 7:09:50 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
Name orange
ID V15
Type OS400 Mask
Vols 1001 1002 1100 1101 1102

dscli>
> showvolgrp v14
Date/Time: July 18, 2005 7:13:49 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry

Figure 8-43 The showvolgrp command V15

To see the adapters to which volume groups are assigned, use the lshostconnect command,
as shown in Figure 8-44.

Java Shell Display

> lshostconnect
Date/Time: July 18, 2005 7:21:45 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
Name ID WWPN HostType Profile portgrp
volgrpID
ESSIOport
========================================================================
adapter1 0000 10000000C942ED3E iSeries IBM iSeries - OS/400 0 V15
I0100
adapter2 0001 10000000C928D12A iSeries IBM iSeries - OS/400 0 V13
I0001
adapter0 0002 10000000C942BA4D iSeries IBM iSeries - OS/400 0 V14
I0001
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-44 The lshostconnect command



To set up Remote Mirror between the two DS systems, you need to establish PPRC paths
and then start Remote Mirror. Run the lsavailpprcport command to see the available ports
for PPRC paths, as shown in Figure 8-45.

Java Shell Display

on|off] [-fullid] [-bnr on|off] [-dev Storage_Image_ID] [-remotedev Storage_Image_ID] -remotewwnn WWNN Source_LSS_ID:Target_LSS_ID | -
Tip: Enter "help lsavailpprcport" for more information.
dscli>
> lsavailpprcport -remotedev ibm.2107-7580741 -remotewwnn 5005076303FFC459
10:1
2
Date/Time: July 18, 2005 7:57:42 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
Local Port Attached Port Type
=============================
I0000 I0310 FCP
I0001 I0310 FCP
I0101 I0030 FCP
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-45 The lsavailpprcport command

To establish the PPRC paths, run the mkpprcpath command, as shown in Figure 8-46. In this
example, two PPRC paths are established: one from source LSS 0x10 to target LSS 0x12 and
one from source LSS 0x11 to target LSS 0x13.

Java Shell Display

> mkpprcpath -dev IBM.1750-13ABVDA -remotedev IBM.2107-7580741 -srclss 10


-tgtlss 12 -remotewwnn 5005076303FFC459 I0000:I0310 I0101:I0030
Date/Time: July 20, 2005 2:40:05 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:12 successfully
establi
shed.
dscli>
> mkpprcpath -dev IBM.1750-13ABVDA -remotedev IBM.2107-7580741 -srclss 11
-tgtlss 13 -remotewwnn 5005076303FFC459 I0101:I0030 I0000:I0310
Date/Time: July 20, 2005 2:40:34 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:13 successfully
establi
shed.
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry

Figure 8-46 The mkpprcpath command
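
You can verify the paths that were just established with the lspprcpath command. The following is a sketch only, using the source LSS IDs from our example:

   lspprcpath -dev IBM.1750-13ABVDA 10 11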



After the PPRC paths are established, start Remote Mirror with the mkpprc command. In our
example, we start PPRC for all volumes in the production partition, including the mirrored load
source, as shown in Figure 8-47.

Java Shell Display

> mkpprc -remotedev ibm.2107-7580741 -type mmir 1000-1002:1200-1202


1100-1102:1
300-1302
Date/Time: July 19, 2005 1:47:57 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1

750-13ABVDA
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:1200
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:1201
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1002:1202
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:1300
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:1301
successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1102:1302
successfully created.
dscli>
===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-47 Starting Remote Mirror with mkpprc
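
The new volume pairs start in Copy Pending state while the initial background copy runs and change to Full Duplex when the copy is complete. You can monitor them with the lspprc command, as in the following sketch that uses the source volume ranges from our example:

   lspprc -dev IBM.1750-13ABVDA 1000-1002 1100-1102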

If a failure occurs on the production partition, terminate Remote Mirror by using the rmpprc
command, as shown in Figure 8-48.

Java Shell Display

Date/Time: July 20, 2005 1:47:40 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1750-13ABVDA

dscli>
> rmpprc -remotedev ibm.2107-7580741 1000-1002:1200-1202 1100-1102:1300-1302
Date/Time: July 20, 2005 1:48:44 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.1
750-13ABVDA
CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and
Copy
volume pair relationship 1000-1002:1200-1202:? [y/n]:
> y
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1000:1200 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1001:1201 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1002:1202 relationship
successfully withdrawn.
CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and
Copy
volume pair relationship 1100-1102:1300-1302:? [y/n]:
> y
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1100:1300 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1101:1301 relationship
successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1102:1302 relationship
successfully withdrawn.
dscli>
Figure 8-48 Terminating Remote Mirror using rmpprc



On the target DS, use the mkvolgrp command to create a volume group that contains the
target Remote Mirror volumes, as shown in Figure 8-49.

Java Shell Display


> mkvolgrp -dev ibm.2107-7580741 -type os400mask -volume 1200-1202,1300-1302 ta
rget
Date/Time: July 19, 2005 3:47:25 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.2
107-7580741
CMUC00030I mkvolgrp: Volume group V7 successfully created.
CMUN00022E mkvolgrp: The specified operation is unsupported.
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure 8-49 The mkvolgrp command

On the target DS, create a host connection using the mkhostconnect command and associate
it with the volume group containing the target volumes, as shown in Figure 8-50.

Java Shell Display

> mkhostconnect -dev ibm.2107-7580741 -wwname 10000000C928d12A -hosttype


iserie
s -volgrp v7 targetadapter
Date/Time: July 19, 2005 3:07:45 PM CDT IBM DSCLI Version: 5.0.4.32 DS:
IBM.2
107-7580741
CMUC00012I mkhostconnect: Host connection 0006 successfully created.
dscli>

===>

F3=Exit F6=Print F9=Retrieve F12=Exit


F13=Clear F17=Top F18=Bottom F21=CL command entry

Figure 8-50 The mkhostconnect command

Now, perform an IPL of the recovery partition that is connected to the Remote Mirror targets.
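
Before the IPL, you can verify the new host connection and its volume group assignment on the target DS with the lshostconnect command, for example:

   lshostconnect -dev ibm.2107-7580741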


Chapter 9. Using DS GUI with System i


You use DS Storage Manager to configure and manage the IBM System Storage DS8000
and DS6000 storage servers throughout the customer network. DS Storage Manager
comprises two server components (the IBM System Storage DS Storage Manager server and
the IBM System Storage DS Network server) as well as a graphical user interface (GUI). You
can use the GUI either for simulated (offline) configuration or real-time (online) configurations.
To use the optional DS Copy Services functions, the online configuration mode is required.

In this chapter, we describe the following topics:


 Installation of the DS6000 Storage Manager
 The initial IBM i external storage configuration

The DS Storage Manager GUI is functionally equivalent to the DS Storage Manager


command-line interface (DS CLI). See Chapter 8, “Using DS CLI with System i” on page 391
for more information about DS CLI. The exception is that for IBM System Storage DS6000,
you can only perform the domain configuration tasks, such as assigning a storage unit to a
storage complex, using the DS Storage Manager GUI. Both the GUI and CLI use the same
DS Storage Manager server interfaces, as shown in Figure 9-1.



Figure 9-1 DS8000 (DS6000) Storage Manager communication paths

9.1 Installing DS Storage Manager


This section describes the following installation processes:
 DS6000 installation
An initial full management console installation of the DS6000 Storage Manager by using
the installation wizard, followed by the DS6000 initial configuration tasks. (For unattended
installation in silent mode or additional offline management console installations, refer to
the IBM System Storage DS6000 Installation, Troubleshooting and Recovery Guide,
GC26-7925.)
 DS8000 installation
Using the DS8000 Storage Manager pre-installed on the HMC for DS8000 initial
configuration tasks.

9.1.1 Installing DS6000 Storage Manager
This section describes the DS6000 GUI installation, which includes the management and
network servers.

Prerequisites
This section describes the prerequisites for the hardware, browsers, and operating system.
Your system must meet these prerequisites for installation of the DS6000 Storage Manager
on a Windows PC serving as the Storage Management Console (SMC). The DS6000 Storage
Manager requires that the system that is used as the management console be continuously
available for customer operations, configuration, and problem management.

Table 9-1 lists the minimum hardware resources that are required on the PC that serves as
the management console.

Table 9-1 Minimum hardware requirements


Requirement Minimum Value

Disk 1 GB

Memory 1 GB RAM

Processor Pentium 4™ Processor 1.4 GHz

The management console runs through a browser. Table 9-2 lists the supported browsers.

Table 9-2 Supported browsers


Browser Version

Internet Explorer® 6.x

Netscape 6.2, 7.x

A number of Windows operating system versions support the management console, as listed
in Table 9-3. For our examples, we used Windows XP Pro SP2.

Table 9-3 Supported operating systems


Operating System Full management Offline management
console installation console installation

Windows Server® 2003 Enterprise Edition X X

Windows Server 2003 Standard Edition X X

Windows 2000 Advanced Server SP4 X (English only) X

Windows 2000 Server SP4 X (English only) X

Windows 2000 Professional SP4 X (English only) X

Windows XP Professional SP1 X

Windows XP Professional SP1a X

Windows XP Professional SP2 X X



Note: Only one DS Storage Manager full management console can be installed per
DS6000 storage unit. Additional consoles must be offline management type consoles.

In the supported browsers, you need to make certain configuration changes to display the
progress information correctly:
 Internet Explorer 6.x
 Netscape 6.2
 Netscape 7.x

Note: To properly display the installation progress bars, animations need to be turned on in
the browser as follows:
 Internet Explorer
a. Select Tools → Internet Options.
b. Select the Advanced tab and scroll down to the Multimedia section.
c. Ensure that Play animation in web pages is enabled.
 Netscape
a. Select Edit → Preferences.
b. Double-click Privacy and Security.
c. Select Images, and under the Animated images should loop setting, select As many
times as the image specifies.

Appropriate browser security settings are also needed to open the DS Storage Manager in a
browser.

Note: In Internet Explorer, you can set security settings as follows:


1. Select Tools → Internet Options.
2. Select the Security tab and click Custom Level.
3. Scroll down to the Miscellaneous settings, and enable Allow META REFRESH.
4. Scroll down to the Scripting settings, and enable Active scripting.

Installation procedure
The following steps describe the installation procedure on a Windows XP Pro SP2 PC:
1. Locate the Management GUI installation CD that came with the DS6000 product. Visit the
following IBM System Storage support Web site and check whether there are any updates
that are available or required for the installation of the Management GUI:
http://www.ibm.com/servers/storage/support/disk/ds6800/downloading.html
2. Review the installation guide that comes with the product.
3. Log on to the Windows environment that will be used for the installation of the DS Storage
Manager with Windows Administrator authority.
4. Insert the IBM System Storage DS Storage Manager CD into the CD-ROM drive. The
LaunchPad used for installation starts automatically within 15 to 30 seconds if autorun
mode is enabled for the CD-ROM drive under Windows. You can also start the LaunchPad
manually using Windows Explorer and browsing to the root of the CD. Then double-click
the LaunchPad.bat file.
5. When the DS Storage Manager panel opens, select Installation wizard to start the DS
Storage Manager installation.

6. The Welcome panel instructs you to view the readme file and installation guide and warns
that you should not be running other applications during the install. Select Next to
continue the installation (see Figure 9-2).

Figure 9-2 DS 6000 Storage Manager Installer Welcome window

7. Read the license agreement, and select I accept the terms in the license agreement.
Otherwise, click Cancel. Select Next to continue the installation (see Figure 9-3).

Figure 9-3 DS6000 Storage Manager Installer License Agreement window



8. Select Next to accept the default destination directory (or you can optionally select a
different destination directory), as shown in Figure 9-4.

Figure 9-4 DS6000 Storage Manager Installer Destination Directory window

9. Select Next to accept the default DS Storage Manager Server host name and TCP/IP
ports (optionally choose different ports) and proceed to the SSL configuration window (see
Figure 9-5).

Figure 9-5 DS6000 Storage Manager Installer Server Parameters window

10.Select Generate the self-signed certificates during installation, and enter and confirm a
password for both the key file and the trust file in the corresponding fields. Select Next to
proceed to the certificate window. Record these passwords as part of your recovery and
management documentation (see Figure 9-6).

Figure 9-6 DS6000 Storage Manager Installer SSL Configuration window



11.Complete the input fields for the certificate, specifying at least the common name as the
identity for the certificate, the organizational unit, and the country (adapt other default or
optional values if required). Select Next to continue to the Installation Confirmation window
(see Figure 9-7).

Figure 9-7 DS6000 Storage Manager Installer Generate Self-Signed Certificate window

12.Select Install to confirm your settings and to start the installation (optionally, select Back to
review or change any settings), as shown in Figure 9-8.

Figure 9-8 DS6000 Storage Manager Installer Installation Confirmation window

13.The installation wizard installs and updates the following components without intervention:
– DS Storage Manager Server (see Figure 9-9)
– DS Network Interface Server (see Figure 9-10)
– DS Storage Manager product components (see Figure 9-11)

Figure 9-9 DS Storage Manager Server Installation Progress window

Figure 9-10 DS Network Interface Server Installation Progress window



Figure 9-11 DS6000 Storage Manager Components Installation Progress window

14.The installation wizard shows the DS Storage Manager Installer Finish window after the
completed product installation (see Figure 9-12).

Figure 9-12 DS6000 Storage Manager Installer Finish window

15.You need to reboot Windows after the installation completes successfully. Select Finish to
reboot the Windows system now (optionally select to reboot the system at a later time), as
shown in Figure 9-13.

Figure 9-13 DS6000 Storage Manager Installer Reboot window

DS6000 Storage Manager post-installation configuration tasks


This section describes starting the DS Storage Manager and the following storage unit initial
configuration tasks required after installing the IBM System Storage DS6000 Storage
Manager product. It includes the following topics:
 Starting DS6000 Storage Manager
 Assigning a storage unit to a storage complex
 Specifying storage unit date and time
 Configuring problem notifications
 Registering for IBM MySupport



Starting DS6000 Storage Manager
After you install DS6000 Storage Manager and reboot Windows (see 9.1, “Installing DS
Storage Manager” on page 440), the underlying DS Storage Manager Server and DS Network
Interface Server services start automatically. You can verify the start by clicking Start →
Settings → Control Panel → Administrative Tools → Services. A window opens that
shows these services with a status of Started, as shown in Figure 9-14.

Figure 9-14 DS Storage Manager services

To start the DS Storage Manager GUI locally on the SMC, select Start → Programs → IBM
System Storage DS6000 Storage Manager → Open DS Storage Manager.

Alternatively, you can start the DS6000 Storage Manager GUI from a network client by
opening a Web browser and entering one of the following Web addresses, substituting
SMC_IP_address with the external IP address of the DS6000 Storage Management Console.

For example, for a non-secure HTTP connection to the SMC, use:


http://SMC_IP_address:8451/DS6000/Login

Alternatively, for a secure HTTPS connection to the SMC, use:


https://SMC_IP_address:8452/DS6000/Login

Note: The DS Storage Manager Web addresses shown here are case sensitive.

Enter the default user name admin and password admin in the DS6000 Storage Manager
Sign On panel shown in Figure 9-15. You need to change the password immediately and
make a record of the new admin password. If you lose the admin password, you will have to
delete the current Storage Management GUI and reload it to recover.

Figure 9-15 DS6000 Storage Manager Sign On panel

Assigning a storage unit to a storage complex
To start the process of assigning a DS6000 storage unit (which corresponds to a physical
DS6000 system) to a storage complex (Storage Manager administrative entity), perform the
following steps (see Figure 9-16):
1. Log in to the DS6000 Storage Manager (see “Starting DS6000 Storage Manager” on
page 450).
2. Select Real-time manager → Manage hardware → Storage complexes and choose
Assign Storage Unit from the Select Action drop-down menu.

Note: At any time during the installation, you can select Help through the ? button in the
information bar.

The style of the GUI is typically that you must select the Select Action pull-down menu
from the action bar and then highlight the task that you want to perform. The pull-down
menu closes, leaving the option that you selected displayed. Nothing happens unless
you complete the action by clicking Go.

The options available in the pull-down menu change as you progress through the
installation process.

Figure 9-16 DS6000 Storage Manager Storage complexes panel



3. Enter the following Storage unit properties information (Figure 9-17):
– A nickname for the storage unit
– An optional description
– The IP addresses from both of its processor cards (as previously defined in 10.4,
“Setting the DS6000 IP addresses” on page 526)
– The 7-digit serial number of the DS6000 (the serial number label is located at the front
right panel of the DS6000 server enclosure)
Select Next to continue.

Figure 9-17 DS6000 Storage Manager Assign Storage Unit panel

4. Enter the following Network settings information (see Figure 9-18):
– A Gateway address
– A Subnet mask
– An optional Primary/Alternate domain name server
– An optional Maximum transmission unit information
Select Next to continue.

Figure 9-18 DS6000 Storage Manager Assign Storage Unit Network settings panel



5. At this point, the configuration is stored on the PC side of the management GUI. Select
Finish to assign the storage unit to the storage complex (Figure 9-19). Optionally, you can
step back and change the storage unit settings if desired.

Figure 9-19 DS6000 Storage Manager Assign Storage Unit Verification panel

Specifying storage unit date and time


Use the following steps to set the DS6000 storage unit date and time information:
1. Log in to the DS6000 Storage Manager (see “Starting DS6000 Storage Manager” on
page 450).
2. Select Real-time manager → Manage hardware → Storage units and choose
Configure from the Select Action menu (Figure 9-20). Then, select the storage unit.

Figure 9-20 DS6000 Storage Manager Storage Units panel

3. Enter the date, time, and time zone information, and click OK to submit the changes (see
Figure 9-21).

Figure 9-21 DS6000 Storage Manager Configure Storage Unit Date and time zone panel

Configuring problem notifications


The DS6000 is a customer setup unit (CSU) and customer replaceable unit (CRU), meaning
that installation and service are designed to be performed by the customer, whereas the
DS8000 is serviced completely by IBM service representatives. Therefore, it is important to
follow the steps that we describe in this section to ensure that notifications about serviceable
events are configured correctly on the DS6000 storage unit. These steps include:
 Defining customer contact information
 Setting up DS6000 call home support
 Configuring SNMP notification

To configure problem notifications:


1. Define the customer contact information that is required for configuring call home and
problem notifications as follows:
a. Log in to the DS6000 Storage Manager (see “Starting DS6000 Storage Manager” on
page 450).
b. Select Real-time manager → Manage hardware → Storage units, select the storage
unit, and choose Customer contact in the Select Action menu.



c. Complete the following shipping information that is used for CRU shipments
(Figure 9-22):
• Country
• Phone number, which is split into the fields (country code, area or city code,
telephone number and extension)
• Mailing address, including city, state, ZIP code, building, floor, and room location (if
appropriate)
Then, select OK to continue.

Figure 9-22 DS6000 Storage Manager Customer Contact Shipping information panel

d. Complete the following contact information for a system administrator, which is used by
IBM service representatives to contact you with remote assistance (Figure 9-23):
• Name
• Telephone information
• E-mail address
Select OK to continue.

Figure 9-23 DS6000 Customer Contact Information panel



2. Set up DS6000 call home support as shown in Figure 9-24 to notify IBM support in case of
a DS6000 hardware or microcode failure condition requiring a service action.

Figure 9-24 DS6000 problem call home process

Important: If an SMTP server is not specified as we describe here, the DS6000 cannot
call home with e-mail alert messages. Call home enables rapid IBM support and remote
assistance to resolve issues.

To set up the call home process:


a. Select Real-time manager → Manage hardware → Storage units, select the storage
unit, and choose Configure notifications in the Select Action menu.

b. Ensure that Enable Call Home is selected, and enter the SMTP server name, its IP
address, and Server port (Figure 9-25). Then, click OK. Optionally, you can select
Apply and Test Call Home to send a connection test and to generate a test problem
log entry (error code BE810081). To complete this test, the SMC PC must be
connected to the Internet.

Figure 9-25 DS6000 Storage Manager Configure Notifications Define Call Home panel



3. Now, configure the DS6000 Simple Network Management Protocol (SNMP) notification to
ensure that notifications are sent for potential DS6000 error events, as follows:
a. Select Real-time Manager → Manage hardware → Storage units, select the storage
unit, and choose Configure notifications in the Select Action menu.
b. Select SNMP from the Configure Notifications panel (see Figure 9-25 on page 459).
c. Select Enable SNMP notification, and complete the SNMP trap destination
information by entering either the IP address, the Host name, or both. Then, specify the
SNMP community name as the string used for SNMP request authentication, which
defaults to Public (Figure 9-26). Optionally, specify an SNMP system contact name,
enter the SNMP Destination port (port 161 is the well-known port for SNMP), and
select OK to complete the SNMP notification setup.

Figure 9-26 DS6000 Storage Manager Configure Notifications Define SNMP connection panel

Registering for IBM MySupport
IBM MySupport provides pro-active e-mail notification of DS6000 microcode updates and
how to obtain them (example shown in Figure 9-27). We highly recommend that you register
for MySupport to stay current on new DS6000 microcode fixes and enhancements.

At this point, we recommend that you have researched the currency of the Storage
Management GUI that you are installing. We also recommend that you test the MySupport
function to ensure that you are familiar with it before going live with the DS6000.

Figure 9-27 Example of IBM MySupport technical information notification



To register for IBM MySupport (Figure 9-28):
1. Access the IBM MySupport Web Site through your Web browser at:
http://www.ibm.com/support/mySupport

Figure 9-28 IBM MySupport Web Site sign in panel

2. Enter your existing IBM ID and Password, and select Submit to sign on.

Note: If you have not registered with IBM MySupport, select register now and
complete the required information in the My IBM Registration forms. Then, select
Submit to sign on with your new IBM ID and password. You need the IBM Customer
number that is associated with the DS6000 that you are installing.

3. Select the Edit profile tab and make the following selections under the Products section:
– Storage
– Computer Storage
– Disk Storage Systems
– System Storage DS6000 series
– System Storage DS6800
Select Add products to continue (see Figure 9-29).

Figure 9-29 IBM MySupport Web Site Edit profile Add products panel



4. Select the Subscribe to email tab on the Add products confirmation panel (see
Figure 9-30).

Figure 9-30 IBM MySupport Web Site Add products confirmation panel

5. Select Storage, and then select the following options:


– Please send these documents by weekly email
– Downloads and drivers
– Flashes
Select Update to continue (Figure 9-31).

Figure 9-31 IBM MySupport Web Site Subscribe to email panel

6. Select Sign out in the Welcome panel to end your MySupport DS6000 registration
(Figure 9-32).

Figure 9-32 IBM MySupport Web site Subscribe to e-mail update confirmation panel

This completes DS6000 Storage Manager GUI and Server setup.

9.1.2 Installing DS8000 Storage Manager


The DS8000 Storage Manager is already pre-installed on the DS8000 Hardware
Management Console (HMC) that is connected to the customer network.

For optional separate installation of the DS8000 Storage Manager simulated (offline)
component on a customer client machine, refer to IBM System Storage DS8000 User’s
Guide, SC26-7915, which is available at:
http://www-1.ibm.com/support/docview.wss?rs=1113&context=HW2B2&dc=DA400&q1=ssg1*&u
id=ssg1S7001163&loc=en_US&cs=utf-8&lang=en

Compared to the DS6000, no further post-installation configuration tasks are required after
the IBM service representative installs the DS8000, except applying the storage unit
activation keys, which we describe in 9.1.3, “Applying storage unit activation keys” on
page 469.

Starting the DS8000 Storage Manager


To start the DS8000 Storage Manager GUI for DS8000 machines without a System Storage
Productivity Console (SSPC) installed, open a Web browser and enter the Web address.
Substitute the HMC_IP_address with the external IP address of the DS8000 Hardware
Management Console (Figure 9-33 on page 466).

For example, for a non-secure HTTP connection to the HMC, use:


http://HMC_IP_address:8451/DS8000/Login

Alternatively, for a secure HTTPS connection to the HMC, use:


https://HMC_IP_address:8452/DS8000/Login

Note: The DS Storage Manager Web addresses that we show here are case sensitive.



At the DS8000 Storage Manager Sign On panel (Figure 9-33), enter admin for the user name.
Enter the admin user password, which defaults to admin. You need to change the password
after you sign on the first time and ensure that you record the new password. To recover a lost
password, you need to reset the password from the HMC, using the security recovery tool
(see 13.2.2, “Unlocking a DS8000 Storage Manager administrative password” on page 576).
Click OK to continue the Sign On procedure. The DS8000 Storage Manager Welcome panel
displays.

Figure 9-33 DS8000 Storage Manager Sign On panel

For DS8000 systems with SSPC installed (see 2.3.1, “Hardware overview” on page 13), you
access the DS Storage Manager GUI remotely through a Web browser that points to the
SSPC (Figure 9-34):
1. Access the SSPC through a Web browser at the following address:
http://SSPC_IP_address:9550/ITSRM/app/en_US/index.html
2. Click TPC GUI (Java Web Start) to launch the TPC GUI.

Note: The TPC GUI requires an IBM 1.4.2 JRE™. Select one of the IBM 1.4.2 JRE
links on the Web page to download and install it based on your OS platform.

Figure 9-34 Index page of System Storage Productivity Center

3. The TPC GUI window displays (Figure 9-35). Enter the user name, password, and the
SSPC server. Click OK to continue.

Figure 9-35 TPC GUI Sign On panel

4. Click Element Management to access the DS8000, as shown in Figure 9-36.

Figure 9-36 TPC GUI Enterprise Management window



5. The Element Management of TPC GUI displays. Click one of the DS8000 systems to
access its DS Storage Manager GUI (see Figure 9-37).

Figure 9-37 TPC GUI Element Management window

6. The DS8000 Storage Manager Welcome panel displays as shown in Figure 9-38.

Figure 9-38 DS8000 Storage Manager Welcome panel

9.1.3 Applying storage unit activation keys
This section describes the application of the licensed internal code feature activation keys for
both the IBM System Storage DS6000 and DS8000 products.

After you have completed the DS6000 post-installation tasks (see “DS6000 Storage Manager
post-installation configuration tasks” on page 449), or after the physical installation of the IBM
System Storage DS8000 storage unit has been completed by the IBM service representative,
you begin by applying the licensed internal code feature activation keys, before configuring
logical storage, as follows:
1. Use a Web browser to connect to the IBM DSFA Web Site:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds6000ic/index.jsp?topic=/com
.ibm.storage.smric.help.doc/f2d_cfgstgun_26kh49.html
2. Depending on your machine type, select either IBM System Storage DS8000 series or
IBM System Storage DS6000 series (see Figure 9-39).

Figure 9-39 IBM DSFA Web site



3. Complete the required information as follows:
a. Access the DS Storage Manager using another browser and select Real-time
manager → Manage hardware → Storage units and note the Model and Serial
Number information from the storage unit whose licensed functions are to be
activated.
b. Select this storage unit by marking the Select check box and select Properties from
the Select Action menu (see Figure 9-40).

Figure 9-40 DS Storage Manager Storage units panel

c. Note the Machine signature information from DS Storage Manager Storage Unit
Properties panel (see Figure 9-41).

Figure 9-41 DS Storage Manager Storage unit properties panel

4. Next, go back to the browser that is connected to the DSFA Web site (showing the selected
DS6000 or DS8000 series machine). Enter the Model, Serial number, and Machine
signature information, and select Submit to retrieve the DS licensed internal code feature
activation keys (Figure 9-42). You can note the keys manually, or you can export them to a
diskette to be applied later by using the DS Storage Manager.

Note: In case the DSFA Web Site application cannot locate the 2244 license
authorization record because it is not attached to the DS serial number record, assign it
to the DS record in the DSFA application using the 2244 serial number that is provided
with the License Function Authorization document.

Figure 9-42 DSFA Web Site Select DS8000 series machine



For DS8000 only:
a. Go to the DS8000 Storage Manager Real-time manager → Manage hardware →
Storage images, and select the storage image whose LIC features you want to
activate. Choose Apply Activation Codes from the Select Action menu (see
Figure 9-43).

Figure 9-43 DS8000 Storage Manager storage images panel

b. Enter the DS8000 licensed internal code feature activation keys that you retrieved from
the DSFA Web site, and select OK to continue (Figure 9-44).

Figure 9-44 DS8000 Storage Manager Apply activation codes panel

For DS6000 only:


a. Go to the DS6000 Storage Manager Real-time manager → Manage hardware →
Storage units and select the storage unit whose LIC features you want to activate.
Choose Configure from the Select Action menu (see Figure 9-45).

Figure 9-45 DS6000 Storage Manager Storage units panel



b. On the DS6000 Configure Storage Unit panel, select Activation Codes. Enter the
DS6000 licensed internal code feature activation keys that you retrieved from the DSFA
Web site, and select OK to continue (see Figure 9-46).

Figure 9-46 DS6000 Storage Manager Configure Storage Unit Activation codes panel

The following message displays:


CMUG00092W “This operation applies the activation codes to the storage image.
Select OK to apply the activation codes. Select Cancel to cancel the
operation.”
5. Select OK to complete the application of the DS licensed internal code activation codes.
For a 2107 LPAR model such as 9A2, repeat step 4 for each storage facility image.

Note: To see the capacity and storage type that is associated with the successful
application of the activation codes, repeat step 4.
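
As an alternative to the GUI, you can also apply the activation keys with the DS CLI applykey command. The following line is a sketch only; the XML file name is hypothetical and represents the key file that you exported from the DSFA Web site, and the storage image ID is the example DS8000 used in this book:

   applykey -file /keys/ds8000_keys.xml IBM.2107-7580741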

9.2 Configuring DS Storage Manager logical storage


This section describes the configuration of DS8000 and DS6000 logical storage for the
attachment of IBM System i servers.

The steps that are involved in the logical storage configuration process include:
 Configuring arrays and ranks
 Creating extent pools
 Creating logical volumes
 Configuring I/O ports
 Creating volume groups
 Creating host systems

Note: This section includes only example configurations. The configuration steps apply to
both DS8000 and DS6000. For the figures in this section, we include screen captures from
the DS8000 Storage Manager Release 3 GUI. However, where applicable, we include
comments on any differences from the DS6000 and releases prior to R3 of DS8000
Storage Manager. We chose the order of configuration steps to start with the storage
configuration itself before defining the host systems and host ports. Optionally, you can
define the host systems and host ports before starting the logical storage configuration.

The Offline (Simulated) DS Storage Manager GUI function is designed such that you can
do the logical configuration on a separate PC and then implement it later through
Customer Technical Support or IBM or IBM Business Partner services.

9.2.1 Configuring arrays and ranks


To configure arrays and ranks, follow these steps:
1. Log in to the DS Storage Manager (see either “Starting DS6000 Storage Manager” on
page 450 or “Starting the DS8000 Storage Manager” on page 465)
2. Select Real-time manager → Configure storage → Arrays, and for an LPAR DS8000
model select the Storage image (DS6000: Storage unit) from the Select storage image
(DS6000: Select storage unit) menu. Choose Create in the Select Action menu
(Figure 9-47).

Figure 9-47 DS8000 Storage Manager Arrays panel



3. Select the default option Create arrays automatically, which assigns the arrays to array
sites automatically and creates one corresponding rank per array. Then, select Next to
continue (Figure 9-48).

Note: Normally, you do not need to control the array assignment to the available array
sites. We recommend that you use the automatic option instead of the manual array
creation option.

If you choose to use the manual array creation option on a DS6000, the two array sites
for an 8-DDM RAID5 array would need to be chosen deliberately so that only one of the
two array sites contains a spare DDM. If the choice is not made, the rank creation will
fail.

Figure 9-48 DS8000 Storage Manager Create Array panel

4. For the available DDM types, select the quantity of arrays to be created and the RAID
type (either RAID 5 or RAID 10) from the corresponding menus. (For DS6000, select
Create an 8 disk array.) Select Next to continue (Figure 9-49).

Important: For critical performance environments, such as Line of Business application
support, we strongly recommend that you create IBM System i storage on arrays built
with 15K RPM drives.

On the DS6000, we strongly recommend that you create IBM System i storage only on
8-disk arrays.

Figure 9-49 DS8000 Storage Manager Array configuration panel



5. Ensure that Add these arrays to ranks is selected as the default. The selected rank
storage type should be FB in the Select storage type menu. Then, select Next to continue
(Figure 9-50).

Note: If the “Add these arrays to ranks” option is not selected as the default, you will
need to create the ranks separately and associate them with an array. The current
1750/2107 product design has a one-to-one relationship between ranks and arrays. To
save time, we recommend using this option.

Figure 9-50 DS8000 Storage Manager Add arrays to ranks panel

6. The Verification panel shows the resulting array configuration. Verify that this information
is correct, and select Finish to start the array creation process. Optionally, you can step
back and change the array creation settings if needed (Figure 9-51).

Figure 9-51 DS8000 Storage Manager Create Array Verification panel



7. The Long Running Task Properties window displays and shows the array and rank
creation status for each array to be created. If all the tasks are complete, the State field
shows Finished. You can leave the Long Running Task running and select Close
(Figure 9-52). (For DS6000 and releases prior to R3 of DS8000, close and view the
summary.)

Figure 9-52 DS8000 Storage Manager Create Array Long Running Task Properties panel

8. Check the State and Status information in the Long Running Task Summary window to see
whether the task completed successfully (Figure 9-53).

Note: You can also access the summary for long running configuration tasks by
selecting Real-time manager → Monitor system → Long running task summary.

Figure 9-53 DS8000 Storage Manager Long running task summary

9. View the created arrays by selecting Real-time manager → Configure storage →


Arrays, which shows the arrays and their rank assignment (Figure 9-54). In our example,
the newly created A14 and R14 do not display because they are not on the displayed page
3 of the Arrays and Ranks tables.

Figure 9-54 DS8000 Storage Manager Arrays panel



10.View the created ranks by selecting Real-time manager → Configure storage → Ranks,
which shows the created ranks (Figure 9-55).

Note: The amount of total rank capacity is shown in binary GB, that is
1 GB = 1024 x 1024 x 1024 bytes.

Figure 9-55 DS8000 Storage Manager Ranks panel
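
Because the GUI and DS CLI are functionally equivalent, you can cross-check the new arrays and ranks from a DS CLI session (see Chapter 8, “Using DS CLI with System i” on page 391). A minimal sketch, assuming the example DS8000 storage image ID used in this book:

   lsarray -dev IBM.2107-7580741
   lsrank -dev IBM.2107-7580741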

9.2.2 Creating extent pools
Before you create logical volumes from rank extents, you need to assign the ranks to extent
pools. This section describes how to create extent pools on DS8000 Release 3 or later and
on DS6000 and releases prior to R3.

DS8000 Release 3 or later


Create the extent pools as follows:
1. Select Real-time manager → Configure storage → Extent pools.
2. For an LPAR DS8000 model, select the storage image from the Select storage image
menu.
3. Then, choose Create New Extent Pools from the Select action menu, as shown in
Figure 9-56.

Figure 9-56 DS8000 Storage Manager Extent pools panel

4. The Create New Extent Pools panel displays:


a. Select FB for Storage Type, and select the RAID Type according to the type of RAID
protection that is chosen for the arrays and ranks that you created previously (see
Figure 9-57).
b. For Type of Configuration, choose Manual. The ranks that are not allocated to any
extent pools display in the table (Figure 9-57). Choose only one of the ranks that are
available in the table.



Important: We do not recommend that you use the Automatic option for Type of
Configuration. For workload separation and performance monitoring purposes, you
need to create one extent pool for each rank to ensure that a logical volume created
from an extent pool later in the process is not spread across different ranks. You can
still use multiple ranks for a System i server by assigning it logical volumes from
different ranks.

Figure 9-57 DS8000 Storage Manager Create New Extent Pools panel (upper)

5. Scroll down to see the lower portion of the panel (Figure 9-58):
a. For number of extent pools, select Single extent pool.
b. For the first extent pool, enter the pool name prefix of Extent Pool 0, for the second
extent pool, use Extent Pool 1, and so forth.
c. Enter 100 for the Storage Threshold percentage and 0 for the Storage Reserved
percentage.

Note: You can use the option to reserve storage from an extent pool to reserve storage
for a later project so that it is currently not made available for configuration. This
reserved storage does not become available until you explicitly change the amount of
reserved storage by modifying the extent pool properties.

d. Select Server 0 for the server assignment of Extent Pool 0, 2, 4, and so forth. Select
Server 1 for the server assignment of Extent Pool 1, 3, 5, and so forth. Then, select
Add Another Pool to continue creating other extent pools.

Important: For DS8000 server resource affinity conventions and a performance
balanced configuration, we strongly recommend that you ensure that the nickname for
the extent pool is chosen such that even numbered extent pools are associated with
DS8000 Server 0 and odd numbered extent pools are associated with DS8000
Server 1. This association implies that there are equal numbers of even and odd extent
pools, so that each of the two DS8000 servers is assigned exactly half of the available
extent pools.

Figure 9-58 DS8000 Storage Manager Create New Extent Pools panel (lower)

Note: Repeat the previous steps to create Extent Pool 1, 2, and 3. When creating
Extent Pool 3, select OK to create the Extent Pool 3 and then close the Create New
Extent Pools panel.

6. Click OK when the task shows Finished and Success status (Figure 9-59).

Figure 9-59 DS8000 Storage Manager Creating extent pools panel



7. Finally, check the extent pool configuration by selecting Real-time manager → Configure
storage → Extent pools (see Figure 9-60).

Figure 9-60 DS8000 Storage Manager Extent pools window

DS6000 and releases prior to R3 of DS8000
Create the extent pools as follows:
1. Select Real-time manager → Configure storage → Extent pools.
2. For an LPAR DS8000 model, select the storage unit from the Select storage unit menu.
3. Then, choose Create New Extent Pools from the Select action menu, as shown in
Figure 9-61.

Figure 9-61 Storage Manager Extent pools panel

For DS6000 and releases prior to R3 of DS8000, select Create (Figure 9-62).

Figure 9-62 Previous release of DS8000 Storage Manager Extent pools panel



4. In the Create Extent Pool panel, choose Create custom extent pool, and select Next to
continue (Figure 9-63).

Figure 9-63 Storage Manager Create Extent Pool Definition method panel

5. In the Define properties panel (Figure 9-64):
a. Enter Extent Pool 0 as the nickname for the first extent pool, Extent Pool 1 for the
second extent pool, and so forth for each extent pool that you are creating.
b. Select FB for the Storage Type.
c. Select the RAID type according to the RAID protection chosen for the arrays that you
created previously.
d. Select 0 for the server for Extent Pool 0, 2, 4, and so forth. Select 1 for the server for
Extent Pool 1, 3, 5, and so forth.
e. Select Next to continue.

Figure 9-64 Storage Manager Create Extent Pool Define properties panel



6. In the Select ranks panel, select the rank that is associated with the extent pool
(Figure 9-65). For example, Extent Pool 0 is associated with R0, Extent Pool 1 is
associated with R1, and so forth. Select Next to continue.

Note: For DS6000 and DS8000 server resource affinity conventions, we recommend
that you ensure that even numbered extent pools are associated with even numbered
ranks and that odd numbered extent pools are associated with odd numbered ranks.
For consistency reasons, the extent pool number needs to match the rank number.

Figure 9-65 Storage Manager Create Extent Pool Select ranks panel

7. In the Reserve storage panel, enter 0 for the percentage of storage to reserve in the extent
pool, and click Next to continue (Figure 9-66).

Figure 9-66 Storage Manager Create Extent Pool Reserve storage panel

8. Review the attributes for the extent pool, and select Finish to create the extent pool
(Figure 9-67). Optionally, you can step back and change the extent pool creation settings if
desired.

Figure 9-67 Storage Manager Create Extent Pool Verification panel

9. A long-running task window displays the extent pool creation process, which shows
Finished and Success status after the extent pool is created successfully. Select
Close and View Summary. Then, repeat steps 4 through 8 for each extent pool that you
are creating for each of the remaining configured ranks.



10.Finally check the extent pool configuration by selecting Real-time manager → Configure
storage → Extent pools (Figure 9-68).

Figure 9-68 Storage Manager Extent Pools panel
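If you prefer to script the configuration rather than use the GUI, the same one-extent-pool-per-rank layout can also be created with the DS CLI. The following sketch assumes an example DS8000 storage image ID of IBM.2107-7512345 (a DS6000 uses an ID of the form IBM.1750-xxxxxxx) and ranks R0 through R3 that were created from the array sites earlier; adjust the IDs and the number of pools to your own configuration:

   dscli> mkextpool -dev IBM.2107-7512345 -rankgrp 0 -stgtype fb "Extent Pool 0"
   dscli> mkextpool -dev IBM.2107-7512345 -rankgrp 1 -stgtype fb "Extent Pool 1"
   dscli> mkextpool -dev IBM.2107-7512345 -rankgrp 0 -stgtype fb "Extent Pool 2"
   dscli> mkextpool -dev IBM.2107-7512345 -rankgrp 1 -stgtype fb "Extent Pool 3"
   dscli> chrank -dev IBM.2107-7512345 -extpool P0 R0
   dscli> chrank -dev IBM.2107-7512345 -extpool P1 R1
   dscli> chrank -dev IBM.2107-7512345 -extpool P2 R2
   dscli> chrank -dev IBM.2107-7512345 -extpool P3 R3
   dscli> lsextpool -dev IBM.2107-7512345

The even-numbered pools are created in rank group 0 (DS8000 Server 0) and the odd-numbered pools in rank group 1 (DS8000 Server 1), which matches the naming and server affinity convention described previously.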

9.2.3 Creating logical volumes


Now that you have created the extent pools, you can create the logical volumes. This section
describes how to create DS8000 and DS6000 logical volumes for IBM System i host systems.
We show examples of creating a total of 46 LUNs for two System i partitions, each assigned
one protected multipath LUN to serve as a multipath external load source unit and 22 other
protected multipath LUNs (Figure 9-69).

Note: Beginning with V6R1 i5/OS, multipathing is supported on an external LSU. If you are
using i5/OS prior to V6R1, refer to 6.10, “Protecting the external load source unit” on
page 240 for information about how to set up external storage configuration to use
mirroring for your external load source to provide path protection.

Figure 9-69 Attachment of two IBM System i partitions with external load source to DS8000

For workload separation and availability reasons, in this example we assign the volumes for
each partition from different extent pools, which, following our recommendation to create one
extent pool for each rank, reside on different array sites.

To create a logical volume, follow these steps:


1. Select Real-time manager → Configure storage → Open systems → Volumes →
Open systems.
2. In the Open systems panel, select the storage image (for DS6000, the storage unit) from
the Select storage image menu (for DS6000, the Select storage unit menu). Then, choose
Create from the Select Action menu (Figure 9-70).

Figure 9-70 DS8000 Storage Manager Volumes: Open systems panel



3. In the Select extent pool panel, choose one extent pool from the available extent pools in
the Select column to use for the creation of the logical volumes (Figure 9-71). Select Next
to continue.

Figure 9-71 DS8000 Storage Manager Create Volume Select extent pool panel

4. Create the protected LUNs for the LSU and all other LUNs for the two System i partitions.
In the Define volume characteristics panel, select iSeries - Protected as the Volume type
and Rotate volumes as the Extent allocation method (only available in DS8000 Release 3
GUI), as shown in Figure 9-72. Then, select Next to continue.

Note: The Extent allocation method menu is used to specify the extent allocation
method, which can be either the recommended default option of Rotate volumes or the
Rotate extents option (storage pool striping). We do not recommend the Rotate extents
method for use with System i (see “Logical volumes” on page 43).

Figure 9-72 DS8000 Storage Manager Create Volume Define volume characteristics panel

Note: The System i volume protection types of either unprotected or protected refer
only to the volume model type in the SCSI vital product data that is reported by the
DS6000 or DS8000 to the System i server. Both System i volume protection types have
the same DS6000 and DS8000 internal RAID protection. The two different types allow
System i customers to choose the type of protection even on external storage. The
unprotected type is required for i5/OS mirroring, for example if you want to mirror the
load source or want to have LUNs mirrored between two external storage servers using
i5/OS mirroring. These external storage servers can be at different sites for disaster
recovery.

The DS6000 logical volume creation attribute, “Enable write cache with mirroring,” is
always enabled for System i volume types to ensure DASD fast write data protection.



5. In the Define volume properties panel, complete the following information (Figure 9-73):
a. Select 35.16 GB as the volume size from the Size menu.
b. Choose Select LSSs for volumes, and select an LSS that is unique to this extent pool.

Note: We explicitly select an LSS for the volume to be created, because in our
example we prefer to use a different LSS for each extent pool and array site. We
recommend that array sites, extent pools, and ranks have a unique one-to-one
relationship. This type of relationship helps with the association of a DS8000 volume
serial number on the System i server partition with the physical storage location for
the volume on the DS8000.

c. Click Calculate max quantity to use the complete remaining space in the extent pool
to create volumes of this same type. Verify that the Quantity field updates with the
resulting maximum number of volumes to create. Then, select Next to continue.

Figure 9-73 DS8000 Storage Manager Create Volume Define volume properties panel

6. In the Create volume nicknames panel, clear Generate a sequence of nicknames based
on the following, and select Next to continue (Figure 9-74). Optionally, you can select the
option to generate nicknames and specify a nickname for the volume by completing the
Prefix and Suffix entry fields.

Note: If no volume prefix and suffix is specified, the nickname is created from the
volume ID. In this case, we still recommend that you change the nickname in the
volume properties for the volumes that become the external load source LUNs so that
you can identify them easily.

Figure 9-74 DS8000 Storage Manager Create volume nicknames panel

7. Select Finish to start the volume creation. Optionally, you can step back and change the
volume creation settings if needed (Figure 9-75).

Figure 9-75 DS8000 Storage Manager Create Volume Verification panel



8. The Long Running Task Properties panel displays. Close this panel by clicking OK
(Figure 9-76). You can also find the task details by selecting Real-time manager →
Monitor system → Long running task summary. You can save the Long Running Task
Properties to a file.

Figure 9-76 DS8000 Storage Manager Create Volume long running task panel

9. Repeat these steps until you have created all 46 protected LUNs for the System i server
partitions. Select Real-time manager → Configure storage → Open systems →
Volumes - Open systems, and choose Select secondary filter → All. Then, select
Refresh (Figure 9-77).

Figure 9-77 DS8000 Storage Manager Volumes: Open Systems panel
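The same volumes can also be created with the DS CLI instead of the GUI. The following sketch uses the example storage image ID and extent pools from the previous section and creates 23 protected 35.16 GB System i volumes (OS/400 model A05) per partition, one range in LSS 10 and one in LSS 11; for a volume that is to be mirrored by i5/OS, you would specify the unprotected model A85 instead:

   dscli> mkfbvol -dev IBM.2107-7512345 -extpool P0 -os400 A05 1000-1016
   dscli> mkfbvol -dev IBM.2107-7512345 -extpool P1 -os400 A05 1100-1116
   dscli> lsfbvol -dev IBM.2107-7512345

The first two characters of each volume ID are the LSS, so selecting the volume ID ranges this way gives each extent pool its own LSS, as recommended in the previous note.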

9.2.4 Configuring I/O ports


Next, you configure the I/O ports. This section describes how to configure the DS8000 and
DS6000 Fibre Channel I/O port topology for attachment of IBM System i host systems.

To configure the I/O ports, follow these steps:


1. Select Real-time manager → Manage hardware → Storage images (for DS6000, select
Storage units).
2. Select the storage image (for DS6000, select the storage unit), and choose Configure I/O
Ports from the Select Action menu (Figure 9-78).

Figure 9-78 DS8000 Storage Manager Storage Images panel



3. Select the I/O ports that you want to change. To identify the physical location of an I/O port
on the DS8000, refer to Figure 9-79. (On the DS6000, I/O port interface identifiers 000x
are located on the upper processor complex card, and interface identifiers 001x are on the
lower processor complex card).

Figure 9-79 DS8000 I/O Enclosure Port Naming Convention

Attention: When configuring DS8000 and DS6000 storage I/O ports, adhere to the
following restrictions:
 The #2847 IOP requires the Fibre Channel switched-fabric (SCSI-FCP) protocol,
regardless of whether the IOAs are direct or switch attached to the DS8000
and DS6000. The Fibre Channel arbitrated loop (FC-AL) protocol is not supported by
the IBM System i #2847 load source IOP.
 IOP-less Fibre Channel cards #5749 or #5774 direct-attached to DS8000 support
FC-AL only.

In our example, we set up the System i server partitions to boot from the DS8000 and
DS6000 using a #2847 I/O processor. Thus, we configure the I/O ports to Fibre Channel
Switched-Fabric protocol by choosing Change to FcSf from the Select Action menu
(Figure 9-80).

Figure 9-80 DS8000 Storage Manager Configure I/O Ports panel

4. Accept the informational message, CMUG00118W, by selecting Continue to start the
actual DS6000 and DS8000 I/O port configuration process (Figure 9-81).

Figure 9-81 DS8000 Storage Manager Configure I/O Ports confirmation message panel



After the I/O port configuration process ends, the resulting I/O port configuration displays,
and you can review it (Figure 9-82).

Figure 9-82 DS8000 Storage Manager Configure I/O Ports panel
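The I/O port topology can also be set from the DS CLI. The port IDs below are examples only; the first command sets two ports to the switched-fabric protocol required by the #2847 IOP, and the second sets a port to FC-AL for an IOP-less direct attachment:

   dscli> setioport -dev IBM.2107-7512345 -topology scsi-fcp I0010 I0110
   dscli> setioport -dev IBM.2107-7512345 -topology fc-al I0011
   dscli> lsioport -dev IBM.2107-7512345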

9.2.5 Creating volume groups


This section describes how to create volume groups as a container entity for the logical
volumes that you created. Then, the volume groups are assigned to the corresponding host
systems that can access the volumes in the volume group.

Important for DS6000 or releases prior to R3 of DS8000: You must create the host
connections first so that you can select the host connections to which to attach the newly
created volume group during the volume group creation process that we describe in this section.

DS8000 Release 3 or higher
To create volume groups, follow these steps:
1. Select Real-time manager → Configure storage → Open systems → Volume groups.
2. Select the storage image from the Select storage image menu.
3. Choose Create from the Select Action menu (Figure 9-83).

Figure 9-83 DS8000 Storage Manager Volume groups panel



4. In the Create New Volume Group panel, accept the default volume group nickname or
enter a different nickname if desired. Select IBM iSeries and AS/400 Servers
(OS/400)(iSeries) for the Host Type, and select the volumes to be included in the volume
group (Figure 9-84).

Hint: We assigned different LSSs for the extent pools. To assign volumes into a volume
group, choose the LSS and scroll up or down the Volumes window to select the
volumes.

Figure 9-84 DS8000 Storage Manager Create New Volume Group panel
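As a scripted alternative, a volume group of the correct type for System i hosts can also be created with the DS CLI. The volume range and group name below are examples taken from the earlier steps; the os400mask type corresponds to the IBM iSeries and AS/400 Servers host type selected in the GUI:

   dscli> mkvolgrp -dev IBM.2107-7512345 -type os400mask -volume 1000-1016 Mickey_VG
   dscli> lsvolgrp -dev IBM.2107-7512345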

DS6000 and releases prior to R3 of DS8000
To create volume groups, follow these steps:
1. Select Real-time manager → Configure storage → Open systems → Volume groups.
2. Select the storage unit from the Select storage unit menu.
3. Choose Create from the Select Action menu (Figure 9-83).
4. In the Define volume group properties panel, accept the default volume group nickname or
enter a different nickname if desired. Then, select iSeries in the “Accessed by host types”
list (Figure 9-85). Select Next to continue.

Figure 9-85 Storage Manager Define volume group properties



5. In the Select host attachments panel, select the host attachment to which the newly
created volume group should attach from the “Select host attachment ID” list. Then, select
Next to continue. Because in our example we want to configure the System i server
partitions to boot from an external load source using i5/OS V6R1 multipathing, we select
two host attachments, each corresponding to a System i server I/O adapter (IOA) to attach
the volume group to (Figure 9-86).

Figure 9-86 Create Volume Group Select host attachments panel

6. In the Select volumes for group panel, choose the volumes to include in the volume group
from the “Select volumes” list. Then, select Next to continue (Figure 9-87).

Hint: Use the Next/Previous Page button with the arrow icon to page through the
volume list pages.

Figure 9-87 Storage Manager Create Volume Group Select volumes for group panel

7. Select Finish to start the volume group creation process. Optionally, you can step back
and change the volume group creation settings if needed.
8. Repeat these steps to create volumes groups until you have created all the volume
groups. (In our example, we created only one volume group for each partition.)

Note for DS6000 and releases prior to R3 of DS8000: This ends the example of logical
storage configuration using the DS Storage Manager GUI. Now, you are ready to attach
the IBM System i partitions to the DS6000 and DS8000 external storage system.



9.2.6 Creating host systems
This section describes how to create the host system entities for the System i partitions to be
connected to the DS8000 and DS6000.

Important: Because only one volume group can be associated with a host system, we
define a host system entity for each System i server Fibre Channel I/O adapter (IOA)
instead of one host system entity with several connection ports for each System i server
partition (see Figure 9-69 on page 493).

DS8000 Release 3 or later


To create a host system, follow these steps:
1. Select Real-time manager → Manage hardware → Host connections. Select storage
image.
2. Choose New host connection from the Tasks section. Refer to Figure 9-88.

Figure 9-88 DS8000 Storage Manager Host connections panel

3. In the Define Host Ports panel:
a. Enter a nickname that is associated with the System i server partition and its Fibre
Channel IOA that is connected to the DS8000. In our example, we choose the
nickname Mickey_0, where Mickey is the System i server host name and _0 denotes
the first IOA (Figure 9-89).
b. Select Fibre Channel Point to Point/Switched (FcSf) as the Port Type.
c. Choose IBM iSeries and AS/400 Servers - OS/400(iSeries) as the Host Type.
d. Enter the 16-digit world-wide-port-name (WWPN) in the Host WWPN field, and click
Add. The WWPN that you enter is displayed in the table.
e. Click Next to continue.

Figure 9-89 DS8000 Storage Manager Define Host Ports panel

4. In the Map Host Ports to a Volume Group panel, select Map to an existing volume
group, and select the volume group that is associated with the host connections. In our
example, the associated volume group is VolumeGroup 1 for host connection Mickey_0
(Figure 9-90).

Figure 9-90 DS8000 Storage Manager Map Host Ports to a Volume Group



5. In the Define I/O Ports panel, select Automatic (any valid I/O port), and click Next to
continue (see Figure 9-91).

Note: We recommend that you do not restrict the I/O port usage for the host system by
selecting specific storage I/O ports to which the host system can log in. For separation
of System i host systems from other host systems in the SAN, for DS8000 and DS6000
maintenance, and for reconfiguration, we recommend that you use the more flexible
zoning solutions offered by the Fibre Channel switch vendors.

Figure 9-91 DS8000 Storage Manager Define I/O Ports panel

6. In the Verification panel, select Finish to start the host system creation process
(Figure 9-92). Optionally, you can step back and change the host connections creation
settings if desired.

Figure 9-92 DS8000 Storage Manager Create Host System Verification panel



7. Repeat these steps until you have created a host system for each of the two Fibre
Channel IOAs in each of the two System i server partitions. Afterwards, the host system
configuration looks as shown in Figure 9-93.

Figure 9-93 DS8000 Storage Manager Host connections panel

Note for DS8000 Release 3 or higher: This completes our example of logical storage
configuration using the DS Storage Manager GUI. Now, you are ready to attach the IBM
System i partitions to the DS8000 external storage system.
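If you prefer the DS CLI, the equivalent host connection definitions can be created as shown in the following sketch. The WWPNs and the volume group ID V11 are examples only (use the WWPNs of your System i Fibre Channel IOAs and the volume group ID reported by lsvolgrp); one host connection is defined per IOA, as recommended above:

   dscli> mkhostconnect -dev IBM.2107-7512345 -wwname 10000000C9123456 -hosttype iSeries -volgrp V11 Mickey_0
   dscli> mkhostconnect -dev IBM.2107-7512345 -wwname 10000000C9123457 -hosttype iSeries -volgrp V11 Mickey_1
   dscli> lshostconnect -dev IBM.2107-7512345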

DS6000 and releases prior to R3 of DS8000
To create a host system, follow these steps:
1. Select Real-time manager → Manage hardware → Host systems. Then, select the
storage complex.
2. Select Create from the Select Action menu.
3. In the General host information panel (Figure 9-94):
a. Choose IBM iSeries and AS/400 Servers - OS/400(iSeries) as the Type.
b. Enter a nickname to be associated with the System i server partition and its Fibre
Channel IOA that is connected to the DS8000.
c. Select Next to continue.
In our example, we choose the nickname Mickey_0, where Mickey is the System i server
host name and _0 denotes the first IOA.

Figure 9-94 DS8000 Storage Manager Create Host System General host information panel



4. In the Define host ports panel, enter 1 as the quantity of host ports, and select Add. The
host port displays in a Defined host ports table. Ensure that Attachment Port Type is set to
FC Switch fabric (P-P), and select Next to continue (Figure 9-95).

Figure 9-95 DS8000 Storage Manager Define host ports panel

5. Enter the world-wide-port-name (WWPN) in the text field under the Port 1 entry. Then,
select OK to continue (Figure 9-96).

Figure 9-96 DS8000 Storage Manager Define host WWPN panel

6. In the Select storage images panel, choose the storage facility image (for DS6000, choose
the storage unit) to which the host system connects from the Available storage images
menu (for DS6000, select Available storage units from the menu). Select Add, which
moves the selection to the Selected storage images list (for DS6000, to the Selected
storage units list), as shown in Figure 9-97. Select Next to continue.

Figure 9-97 DS8000 Storage Manager Select storage images panel

7. In the Specify storage image parameters panel, ensure that any valid storage image I/O
port is selected for the “This host attachment can login” option (for DS6000, select any
valid storage unit I/O port).

Note: To support DS6000 concurrent code load and to achieve the highest DS6000 I/O
path failure protection, the IBM System i attachment to the DS6000 requires a redundant
Fibre Channel path to each of the two DS6000 server processor cards.

Select Apply assignment to update the storage image allocation in the list, which
changes from 0 to 1. Then, click OK to continue (Figure 9-98).

Note: You might receive the following message:


CMUG00095E: Unable to attach host. No compatible I/O ports are available
on the storage image.

If so, make sure that the attachment port type that you selected corresponds to the
DS8000 and DS6000 I/O port topology configuration. That is, select Cancel and Back
to review the Attachment port type. If you need to correct it, select the host port to be
removed from the Defined host ports list, and select Remove before beginning again in
the Define Host Ports panel. If the problem persists, go back to 9.2.4, “Configuring I/O
ports” on page 499.



Figure 9-98 DS8000 Storage Manager Specify storage image parameters panel

8. Select Finish on the Verification panel to start the host system creation process.
Optionally, you can step back and change the host connections creation settings if
desired.

9. Repeat these steps until you have created a host system for each of the two Fibre
Channel IOAs in each of the two System i server partitions. Afterwards the host system
configuration looks as shown in Figure 9-99.

Figure 9-99 DS8000 Storage Manager Host Systems panel

10.Return to 9.2.5, “Creating volume groups” on page 502 to create volume groups and to
complete the logical storage configuration for DS6000 and releases prior to R3 of
DS8000.



Chapter 10. Installing the IBM System Storage DS6000 storage system

This chapter describes how to install the IBM System Storage DS6000 storage system. It
includes the following sections:
 Preparing the site and verifying the ship group
 Installing the DS6000 in a rack
 Cabling the DS6000
 Setting up the DS6000 IP addresses



10.1 Preparing the site and verifying the ship group
This section provides information to help you prepare to install the DS6000 into a rack.

10.1.1 Pre-installation planning


Prior to installing the DS6000 into a rack, ensure that the following items for the installation
site match the requirements listed in IBM TotalStorage DS6000 Introduction and Planning
Guide, GC26-7679:
 Safety requirements
 Space and floor load requirements
 Environmental requirements
 Power requirements
 Network communication requirements
 Storage complex setup with the DS6000 customization worksheets

10.1.2 Ship group verification


After moving the DS6000 to the installation site, ensure that the following standard ship group
items are included:
 Two 511 processor cards (server enclosure only) or two EX1 processor cards (storage
enclosure only)
 Two power supplies and fan assemblies
 Two battery backup units (server enclosures only) or two battery blanks (storage
enclosure only)
 16 blank trays (the server enclosure can come with up to 16 disk drive modules in place of
blank trays)
 One service information card tray (installed in the rear of the server enclosure, which is
located below the lower processor card)
 Rack-mounting hardware kit
 Cables
  – Two 2.8 meter in-line power cords
  – One Ethernet crossover cable (server enclosure only)
  – One serial conversion cable (server enclosure only)
  – Two 25 meter Ethernet cables (server enclosure only)
 Software
  – Microcode CD
  – CLI CD
  – SMC software CD (SDD included)
 License Machine Code Agreement
 Statement of Limited Warranty
 Code Reference Flyer
 Electrostatic discharge (ESD) wrist strap
 Any optional items ordered according to the packing slip

If any items are missing or damaged, contact your IBM customer support before proceeding.

10.2 Installing the DS6000 in a rack


This section provides information about how to install the DS6000 in a rack and how to attach
the host systems.

10.2.1 Installing storage and server enclosures in a rack


For rack installation of the DS6000, see IBM System Storage DS6000 Introduction,
Installation, and Recovery Guide, GC26-7678, which is available at:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001177&aid=1

10.2.2 Attaching IBM System i host systems to the DS6000


This section describes the requirements that need to be met to attach an IBM System i server
to an IBM System Storage DS6000 system.

Supported IBM Fibre Channel I/O adapters


The supported adapters include:
 #2766 PCI 2 Gb Fibre Channel Disk Controller
 #2787 PCI-X 2 Gb Fibre Channel Disk Controller
 #5760 PCI-X 4 Gb Fibre Channel Disk Controller

Each I/O adapter requires its own dedicated #2844 PCI I/O processor (IOP) or, for external
boot, the #2847 PCI IOP. For more information, see IBM TotalStorage DS6000 Host Systems
Attachment Guide, GC26-7680.

Required i5/OS V5R3 PTFs


Required i5/OS V5R3 PTFs include:
 MF33303
 MF33328
 MF33437
 MF33845
 SI14550
 SI14690
 SI14755.

These PTFs are all included in cumPTF level C4272530 or later.
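To verify on the System i server that these PTFs are installed, you can use the Display PTF (DSPPTF) command; MF PTFs belong to the Licensed Internal Code product 5722999 and SI PTFs to the i5/OS product 5722SS1, for example:

   DSPPTF LICPGM(5722999) SELECT(MF33303)
   DSPPTF LICPGM(5722SS1) SELECT(SI14550)

Running DSPPTF LICPGM(5722SS1) without the SELECT parameter also shows the cumulative PTF packages that are installed.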

External boot support through #2847 IOP requires i5/OS V5R3M5 or later.

For further details, refer to Chapter 6, “Implementing external storage with i5/OS” on
page 207.

10.3 Cabling the DS6000
In this section, we describe how to route the cables for a basic DS6000 installation with up to
one storage enclosure (see Figure 10-1).

Figure 10-1 DS6000 Server Enclosure Connections

The numbered labels in Figure 10-1 refer to:


1. Power supply connector
2. Power control switch
3. 511 processor controller card DISK EXP Port 0
4. 511 processor controller card DISK EXP Port 1
5. 511 processor controller card DISK CONTRL Port 0
6. 511 processor controller card DISK CONTRL Port 1
7. 511 processor controller card Ethernet Port
8. 511 processor controller card SCSI Enclosure Services (SES) serial port
9. 511 processor controller card symmetric multiprocessing (SMP) serial port
10.511 processor controller card host port 0
11.511 processor controller card host port 1
12.511 processor controller card host port 2
13.511 processor controller card host port 3

10.3.1 Connecting IBM System i hosts to the DS6000 processor cards
To connect the System i server I/O adapter (IOA) to the DS6000 511 processor cards, follow
these steps:
1. Install a small form-factor pluggable (SFP) in a host port on the 511 processor card.

Note: For direct System i server attachment, a Fibre Channel shortwave SFP (feature
code #1310) is required. For attachment of the DS6000 to a Storage Area Network
(SAN), you can use either a long-wave SFP (feature code #1315) or a shortwave SFP,
depending on the SFP type of the SAN node to be connected to the DS6000.

2. Connect all the Fibre Channel cables from the System i server Fibre Channel IOAs or
switch to the 511 processor host ports (numbers 10, 11, 12, and 13 in Figure 10-1)

Important: For availability reasons and for DS6000 concurrent code load support, we
strongly recommend that you use i5/OS native multipathing when connecting each path
to a host port of a different 511 processor card to allow continuous I/O even if one 511
processor card is unavailable due to DS6000 microcode updates or maintenance.

10.3.2 Connecting the DS6000 to the customer network


Use the Ethernet interface port (number 7 in Figure 10-1) on the back of the DS6000 server
enclosure to connect each 511 processor card to your Ethernet network for management of
the server enclosure from the SMC through the DS6000 Storage Manager.

10.3.3 Connecting optional storage enclosures


To connect the first storage enclosure to a new DS6000 server enclosure installation with the
DS6000 not powered-on yet (see Figure 10-2 on page 524), follow these steps:
1. Connect a Fibre Channel cable from the left DISK CONTRL OUT port on the upper
processor card of the server enclosure to the left IN port on the upper processor card of
the storage enclosure.
2. Connect a Fibre Channel cable from the right DISK CONTRL OUT port on the upper
processor card of the server enclosure to the right IN port on the lower processor card of
the storage enclosure.
3. Connect a Fibre Channel cable from the left DISK CONTRL OUT port on the lower
processor card of the server enclosure to the left IN port on the lower processor card of
the storage enclosure.
4. Connect a Fibre Channel cable from the right DISK CONTRL OUT port on the lower
processor card of the server enclosure to the right IN port on the upper processor card of
the storage enclosure.

To connect a second storage enclosure to a new DS6000 server enclosure, use two Fibre
Channel cables to make two connections from the OUT ports on the lower processor card of
the first storage enclosure to the IN ports on the lower processor card of the second storage
enclosure.

For attaching a third or further storage enclosure, refer to IBM System Storage DS6000
Introduction, Installation, and Recovery Guide, GC26-7678.

Figure 10-2 DS6000 Server enclosure cabling with up to two storage enclosures

10.3.4 Turning on the DS6000


Each DS6000 server or storage enclosure uses two standard power cords, which should be
connected to a properly grounded AC power source as follows:
1. Attach the power cords to the server enclosure and storage enclosures AC power input
ports (number 1 in Figure 10-1 on page 522).
2. Connect the other ends of the server enclosure power cords to an electrical AC power
outlet. To achieve complete power redundancy, plug each server power cord into a
separate, independent external power circuit.
3. Repeat step 2 for each optional storage enclosure.

To turn on the power for the initial startup of the DS6000:


1. Verify that all communication and power cables are plugged into the back of the
enclosures.
2. Verify that all disk drives modules are seated properly.
3. Ensure that power is turned on to the supporting devices (for example, Ethernet switches,
SAN switches, and the DS Storage Manager management console).
4. Turn on the storage enclosures by pressing the white Power on/off button on each storage
enclosure.
5. Turn on the server enclosure by pressing the white Power on/off button.

6. Turn on any attached IBM System i host system that is not already running.

Note: After the initial process to turn on the DS6000 is complete and all storage
enclosure hardware has been detected, the power to the attached storage enclosures
can be turned off or on automatically in conjunction with turning the DS6000 server
enclosure off or on.

7. Check for the correct DS6000 status after the initial power on by verifying that the LED
status indicators appear as shown in Figure 10-3.

Figure 10-3 DS6000 LED status indicators after successful installation

If any LED does not show the correct state, refer to Chapter 13, “Troubleshooting i5/OS with
external storage” on page 569 to help you diagnose the problem.

10.4 Setting the DS6000 IP addresses
In this section, we discuss the setup of IP addressing for the DS6000 enclosure.

10.4.1 Setting the DS6000 server enclosure processor card IP addresses


To set the DS6000 server enclosure processor card IP addresses, follow these steps:
1. Connect the serial conversion cable (IBM P/N 22R1337) between a PC and the DS6000 server
enclosure processor card (number 9 in Figure 10-1 on page 522).
2. In our example, we use Windows HyperTerminal as the terminal program to connect the
PC to the DS6000 server enclosure processor card. Select Start → Programs →
Accessories → Communications → HyperTerminal (see Figure 10-4).

Figure 10-4 Windows Start Menu HyperTerminal selection

3. Create a new connection by entering DS6000 in the Name field, and selecting OK to
continue (see Figure 10-5).

Figure 10-5 HyperTerminal New Connection window

4. Choose the COM serial port to which you connected the cable for the DS6000 from the
“Connect using” menu, and select OK to continue (see Figure 10-6).

Tip: If you are unsure which COMx resource to select, access the Windows Device
Manager by right-clicking the My Computer icon on the Windows desktop. Then, select
Properties, and on the Hardware tab, select Device Manager. The COM resource that
is associated with the PC serial port to which you connected the cable to the DS6000 is
listed in the Device Manager Ports (COM & LPT) section.

Figure 10-6 Windows HyperTerminal Connect To window

5. Enter the port settings as listed in Table 10-1, and select OK to establish the connection
(Figure 10-7).

Table 10-1 Serial port connection settings


Port setting       Value
Bits per second    38400
Data bits          8
Parity             None
Stop bits          1
Flow control       Hardware

Figure 10-7 Windows HyperTerminal COMx Properties window

6. Enter the default user ID (guest) and password (guest) to access the DS6000 processor
card (Figure 10-8).

Figure 10-8 Windows HyperTerminal DS6000 login window

7. At the initial setting of the DS6000 processor card IP addresses, change the default guest
password to one of your choice as follows:
a. Choose 2. Change “guest” password from the ncnetconf Main Menu options
b. Enter the current password (guest).
c. Enter the new password. A confirmation message states that the password was
changed successfully.

8. Select 1. Configure network parameters from the ncnetconf Main Menu options
(Figure 10-9).

Figure 10-9 Windows HyperTerminal DS6000 ncnetconf Main Menu window

9. Set the IP addresses for both DS6000 processor cards as follows:
a. Choose 1. Use static IP address from the Network configuration menu options
(Figure 10-10).

Figure 10-10 Windows HyperTerminal Network Configuration window

b. Change the IP address for the current DS6000 processor by choosing 1. IP address
for this node from the Static IP addresses configuration menu options.
c. When the IP Address? prompt displays, enter the desired IP address, and press Enter.
d. Change the IP address for the other DS6000 processor by choosing 2. IP address for
other node from the Static IP addresses configuration menu options.
e. When the IP Address? prompt displays, enter the desired IP address, and press Enter
f. Select 7. Back to Network Configuration to return to the Network configuration menu
(Figure 10-11).

Figure 10-11 Windows HyperTerminal Static IP addresses Configuration window

g. Select 3. Advanced configuration options to set the domain name server and the
gateway settings for both DS6000 processor cards (Figure 10-12).

Figure 10-12 Windows HyperTerminal DS6000 Advanced Network Configuration window

h. Select 7. Back to Network Configuration.
i. Select 7. Back to Main Menu to return to the ncnetconf Main Menu.
j. Select 8. Apply network changes and exit from the options in the main menu to save
your changes and to exit the application.

Multiple IP addresses on the DS6000 Storage Manager console


If there are multiple IP addresses on the DS6000 Storage Manager management console
(see 9.1, “Installing DS Storage Manager” on page 440), the first network adapter must be on
the same subnet network as the DS6000. If this is not the case, you need to change the
binding order so that the management console IP address that is on the same subnet as the
DS6000 is listed first in the binding order.

Follow these steps:


1. Select Windows Start → Settings → Network Connections (Figure 10-13).

Figure 10-13 Windows Start menu for Network Connections

2. From the Network Connections window, select Advanced → Advanced Settings
(Figure 10-14).

Figure 10-14 Windows Network Connections Advanced menu selection window

3. In the Adapters and Binding tab Connections view, ensure that the first network adapter
listed is the one that is on the same subnet as the DS6000 server enclosure processor
cards. If this is not the case, select the network adapter from the list that is on the same
network as the DS6000 server, and select the up arrow button to move this adapter to the
top of the list (Figure 10-15). Then, click OK.

Figure 10-15 Windows Network Connections Advanced Settings window

Chapter 11. Usage considerations for Copy Services with i5/OS

This chapter provides an overview of the disaster recovery (DR) and high availability (HA)
solutions that are available with Copy Services on System i.

We consider two major environments and their usage implications:


 Using boot from SAN with copying the entire disk space as the foundation for system
cloning, for FlashCopy backups and for disaster recovery
 Using switchable IASPs and System i clustering as the foundation for high-availability



11.1 Usage considerations for Copy Services with boot from
SAN
Before the #2847 I/O processor (IOP) or IOP-less Fibre Channel with boot from SAN support
became available, you could not boot from an external storage server. Without boot from
SAN, to use Copy Services with i5/OS, you had to use i5/OS mirroring to copy the load
source unit (LSU) to a LUN in the external storage server while having all other disks in the
ESS800, DS6000 or DS8000. See iSeries in Storage Area Networks: A Guide to
Implementing FC Disk and Tape with iSeries, SG24-6220 for more details about this process.

Figure 11-1 shows the use of FlashCopy, where the LSU is on an internal drive and is
mirrored by i5/OS to a remote load source pair in the external disk subsystem, and then
FlashCopy is used to create an instant point-in-time copy for offline backup.

Figure 11-1 FlashCopy for creating an offline backup

When the FlashCopy is complete, a second system or LPAR is attached as a backup system
to the external storage subsystem and the FlashCopy image. This second system has a
single internal LSU. Because it was not possible to IPL from this remote copy of the LSU, it
was necessary to perform a D-mode IPL and use the Dedicated Service Tools (DST) function
to perform a remote load source recovery from the LSU copy in the external storage
subsystem to the backup system’s internal drive. This process initializes the internal LSU
before doing the copy and takes a considerable amount of time.

Figure 11-2 shows a similar arrangement when using Metro or Global Mirror to replicate data
to another, remote external disk subsystem.

Figure 11-2 Basic Metro or Global Mirror

Although using a mirrored LSU can be acceptable under DR circumstances when invoking
the DR copy is infrequent, it is not practical to mirror the LSU on a daily basis if you use
FlashCopy to assist with offline backups. Thus, we do not recommend using this approach,
especially for FlashCopy.

Now, with boot from SAN through either #2847 IOP-based or IOP-less Fibre Channel, it is
possible to have the entire disk space, including the LSU, contained in the external storage
subsystem. This means that it is much easier to replicate the entire storage space.

System i external storage-based replication solutions using Copy Services with switchable
independent ASPs (IASPs) managed either by the existing System i Copy Services Toolkit or
the newly introduced i5/OS V6R1 High Availability Solutions Manager (HASM) licensed
product separate the application and its data into IASPs and replicate the IASP data with either
FlashCopy for backups (perhaps for populating a data warehouse or development
environment) or Metro Mirror and/or Global Mirror for DR purposes. When using such
switchable IASP replication solutions, the production system and the backup system have
their own LSU and *SYSBAS, and implementing boot from SAN typically does not help to
reduce the recovery time.

For simple environments, or for those applications that are not supported in an IASP, having
everything contained in *SYSBAS (system ASP plus user ASPs 2-32) is the easiest
environment to implement Copy Services. However, you must bear in mind that *SYSBAS
includes all work areas and swap space. Replicating these requires greater bandwidth and
can introduce other operational complexities that must be managed (for example, the target
will be an exact replica of the source system and will have exactly the same network attributes
as the source system). If used for disaster recovery purposes, these operational complexities
will not cause a problem. However, if you want to test the DR environment or if you want to

use FlashCopy for daily backups, you need to change the network attributes so that both
systems can be in the network at the same time.

For more advanced disaster recovery and high-availability environments, we recommend


using switchable IASPs for the applications and data and the System i Copy Services Toolkit
or HASM for managing the System i Copy Services environment. This approach inherits the
architected solution for System i availability which is also used with Switched Disks and
Cross-site Mirroring (XSM). IASPs allow for more automated failover techniques available
with i5/OS Clustering.

11.2 Copying the entire DASD space


With boot from SAN support through the #2847 IOP or IOP-less Fibre Channel it is now
possible to have the entire disk space in IBM System Storage DS8000, DS6000 or ESS
model 800 (see 4.2.1, “Planning considerations for boot from SAN” on page 78 for the boot
from SAN hardware and software requirements). This provides new availability and system
management opportunities for System i customers that were not possible previously.

By using the capability that we describe in this section, you can create a complete copy of
your entire system in moments. You can then use this copy in any way you want. For example,
you can use the copy to minimize windows during backup, to protect yourself from a failure
during an upgrade, or to provide a backup or test system. You can accomplish each of these
options by copying the entire DASD space with minimal impact to production operations.

These facilities fall into two main areas:


 Creating a clone of the system or partition
 Performing daily operational tasks such as backups and Disaster Recovery

Although both have similar characteristics, we differentiate these two environments in the
sections that follow.

11.2.1 Creating a clone


Cloning a system should be considered as an infrequent task and not part of the daily
operations. Typically, the clone is used in its entirety to avoid having to do a lengthy restore
from tape. Examples of when you might want to use cloning techniques are:
 Hardware maintenance
It is always advisable to have a complete backup of the system when doing significant
hardware maintenance in case corruption occurs. Using FlashCopy to create a clone
allows you to quickly revert to the original configuration and data in the event of any
problems without having to restore the system data and applications from tape.
 Software maintenance
Whenever you make significant software changes, you are introducing the possibility of
errors and failures. In the past, if such a software upgrade failed or was significantly
delayed, it was very difficult to revert to the prior version without major disruption or
operational effort. Indeed, it was often better to continue and fix problems rather than
revert to the earlier version.
Now, with the ability to create a complete copy of the whole environment, you have a copy
on disk which can be attached to the system or partition and IPLed normally. For example,
if you have planned a release upgrade over a weekend, you can now create a clone of the
entire environment on the same disk subsystem using FlashCopy immediately after doing

the system shutdown and perform the upgrade on the original copy. If problems or delays
occur, you can continue with the upgrade until just prior to the time the service is needed
to be available for the users. If the maintenance is not completed, you can abort the
maintenance and re-attach the target copy (or perhaps do a “fast reverse/restore” to the
original source LUNs) and do a normal IPL, rather than having to do a full system restore.
Historically, a major part of the time to upgrade your operating system or application
software has been related to the need to obtain a full and reliable backup. This would be
achieved by your usual backups, and in many case a full system save as well. This takes
an appreciable amount of time. You would then perform your upgrade and take another
backup, as you would not want to have to do the upgrade again. At this point you have
spent many hours just taking the backups.
By cloning, you are able to eliminate the vast majority of this lost time. Before starting your
upgrade, you would shut down your system. Then, you would make a clone of the whole
system with a full copy of the data. You would then restart your system and start on the
upgrade. While you are doing this, you could concurrently be attaching the clone to a
second partition where you could then be taking a full backup for archive purposes. In the
event that your upgrade fails or you get into a situation that necessitates that you revert to
the original system, you would simply shut down the system, change the production
system to use the clone image and resume production in a matter of minutes. You could
also attach the original image to another partition and examine the cause of the failure in
preparation for the next attempt.
 Creating a test environment:
Imagine that you have a new release of the operating system or a new version of a major
application that you have heavily customized and for which you need to test the upgrades
prior to putting them into production.
Traditionally, you have three options:
– Load a backup from your production system onto a test partition or system, and create
an environment that is identical to the production system. Then, go through the
upgrade procedure. If you need to start again, you go back to the beginning of the
process and load the backup. This method usually takes several hours just to prepare
the environment to start testing.
– Load the upgrade to your test partition or system, which can be difficult to achieve
because not all upgrades coexist with the existing level of software. If you need to back
out, then you need to reload from a backup. Again, this method can be time
consuming.
– Load the upgrade straight to the production system, which is a good method if it is
successful. You always have backups before you need them and, in the event of a
failure, you then take several more hours to recover the system.
All of these methods have risks associated with them, not to mention the time that is
required to obtain the backups as well as load them.
Using cloning, you can take a near instant copy of a working system. This system can then
be attached to another separate system or partition, allowing you to be up and running in a
totally independent environment in a matter of minutes.
For a major upgrade, a clone means that you can revert to the starting position very
quickly. The clone is available for use immediately. It is just a matter of reassigning the
disks that are used by the partition and you are up and running. This process takes
minutes rather than the hours that a reload can take.
You also have an exact copy of the production system and all its data, which means that
you perform the upgrade exactly as you need to on the production system and discover
any problems.

 Creating a replica
You might have the need to create a new partition quite frequently. By creating a single
disk system on the external storage subsystem, you can simply clone the master, add the
additional resources required (additional LUNs and other I/O resources) and you have an
operational system in minutes without having to restore SLIC, i5/OS and Licensed
Program Products.

These are just some of the more common examples of when you can use an ad hoc clone of
the disks. Such functions are only possible when having your entire System i disk space on
external storage.

11.2.2 System backups using FlashCopy


Just as performing ad hoc cloning can be beneficial for maintenance, development, and
testing, so creating regular copies of the entire DASD space as part of the day-to-day tasks
can be very beneficial to operations, particularly to minimize the downtime often associated
with taking backups. When you want to make a full backup of your system, you have three
possible solutions:
 Save while Active (SWA)
 Standard i5/OS save commands
 Copy the entire System i storage space

Each of these solutions has its own benefits and uses.

Save while Active is built into i5/OS and does not require an IPL or any substantial restrictions
on your users. It achieves this by making a checkpoint of the objects so that it can track any
changes that are made while the save is running. On very busy systems, SWA can take some
time to achieve the checkpoint, and because it locks objects for the save, there might be application
conflicts. Using the new “SWA Partial Transaction” function introduced in V5R3 allows the
save activity to continue without holding extended locks on objects, therefore speeding up the
checkpoint acquisition.

Customers considering using SWA might find it easier to restrict the system to obtain the
checkpoint more quickly and then restart the applications. The actual backup itself is done
concurrently with normal operations. This save operation consumes additional resources, so
you would need to ensure there is capacity to use SWA.

Standard i5/OS system save commands, while not requiring an IPL or a power down, do
require that the system be in a restricted state for the whole duration of the save. This causes
a substantial downtime requirement for the application. Recovery from your backups will be
simpler than for save while active, and journaling is not a requirement. When the save is
finished, you have to restart your system. In Table 11-1 on page 543, we compare the standard
Save Library command (SAVLIB) with the Non-system objects save parameter (*NONSYS)
against both SWA and FlashCopy.

To understand more about i5/OS’s built in backup and recovery techniques, visit the System i
Information Center at:
http://publib.boulder.ibm.com/iseries/

Prior to i5/OS V6R1, taking a copy of the entire System i disk space required that you shut
down the system to ensure that all of the modified data in main memory is flushed to disk.
The new i5/OS V6R1 quiesce for Copy Services function allows you to quiesce I/O activity for
*SYSBAS and IASPs by suspending all database I/O operations and thus eliminating the
requirement to shut down your production system before taking a FlashCopy.
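In i5/OS V6R1, this quiesce function is invoked with the Change ASP Activity (CHGASPACT) command. A minimal sequence around taking a FlashCopy of *SYSBAS might look like the following sketch (the 30-second suspend timeout is only an example value):

   CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(30)
   /* ... take the FlashCopy of all production volumes ... */
   CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)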

Note: The DS8000 Release 3 space-efficient FlashCopy virtualization function (see 2.7.8,
“Space efficient FlashCopy” on page 52) allows you to lower the amount of physical
storage for the FlashCopy target volumes significantly by thinly provisioning the target
space proportional to the amount of write activity from the host. This fits very well for system
backup scenarios with saving to tape.

For further information about the new i5/OS quiesce for Copy Services function, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.

When you have shut down or quiesced your system, the actual copy for the clone is done in
seconds, after which you are able to IPL or resume I/O on your production system and return
it to service, while you perform your backup on a second system, or more likely, partition.
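As an illustration only, with the DS CLI such a full-system FlashCopy can be taken with a single command that maps every production volume to a corresponding target volume. The volume IDs below are examples; the -nocp option avoids a full background copy when the target is only needed for the duration of the backup:

   dscli> mkflash -dev IBM.2107-7512345 -nocp 1000-1016:1200-1216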

Table 11-1 provides a comparison between the three save methods.

Table 11-1 Comparison of save methods


                         Save While Active   Savlib *Nonsys     FlashCopy Backup

Powerdown required       No                  No                 No (with V6R1 and later)

Restricted state         No                  Hours              No

Duration of outage       Minutes             Hours              Seconds

Journaling for recovery  No                  No                 Yes

Performance impact       Some                N/A (dedicated)    Can be separate partition
while saving                                                    or system (required)

Time to recover          Hours               Hours              Hours if restoring from backup
                                                                created with FlashCopy; minutes
                                                                if using live FlashCopy volumes

Incremental              Yes                 Yes                No
save/restore

Ease of save             Easy                Easy               Moderate

11.2.3 Using a copy for Disaster Recovery


FlashCopy is generally not suitable for disaster recovery because, due to its point-in-time copy
nature, it cannot provide continuous disaster recovery protection, nor can it be used to
copy data to a second external disk subsystem. To provide an off-site copy for disaster
recovery purposes, use either Metro Mirror synchronous replication or Global Mirror
asynchronous replication depending on the distance between the two external disk
subsystems.
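For illustration, with the DS CLI a Metro Mirror relationship for a range of volumes is established with the mkpprc command after PPRC paths between the two storage units have been defined with mkpprcpath; the device and volume IDs below are examples only:

   dscli> mkpprc -dev IBM.2107-7512345 -remotedev IBM.2107-7554321 -type mmir 1000-1016:1000-1016

Global Mirror instead uses Global Copy relationships (-type gcp) together with a Global Mirror session and FlashCopy consistency volumes at the remote site.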

Unlike taking controlled point-in-time copies through FlashCopy after a controlled quiesce or
power-down, with Metro Mirror and Global Mirror, which are constantly updating the target
copy, you cannot be assured of having a clean starting point in a disaster scenario where the
copy process is suddenly interrupted. There is no chance to preempt a disaster event
with a power down of the source system to flush objects from main storage. This applies to all

environments, regardless of whether IASPs are used or not. With both Metro Mirror and
Global Mirror, you have a restartable copy, but the restart point is at the same point that the
original system would be if an IPL was performed after the failure. The result is that all
recovery on the target system will include abnormal IPL recovery. It is critical that application
availability techniques like journaling are employed to accelerate and assist the recovery.

Important: As with all System i availability techniques, it is important to use i5/OS


journaling. This ensures that, even if objects remain in main memory, the journal receiver is
written to disk. Consequently, it is copied to the disaster recovery site using Metro Mirror or
Global Mirror and will be available on the disaster recovery server to apply changes to the
database when the system is started.

With Metro Mirror, the recovery point is the same as the point at which the production system
failed, that is, a recovery point objective of zero (last transaction) is achieved.
the recovery point is where the last consistency group was formed. By default Global Mirror
consistency groups are continuously formed as often as the environment allows depending
on the bandwidth and write I/O rate.

Note: When using synchronous mirroring solutions, distance and bandwidth can have an
impact on the production system’s performance, because write I/Os must wait until the
write update for the remote copy has been acknowledged back to the host. This can cause
a significant impact if there is a lot of write I/O to system components such as swap space,
temporary work areas, and temporary index builds.

Although the IBM Metro Mirror algorithm is very efficient compared with synchronous
mirroring techniques from other vendors, we recommend that, for performance-critical
System i workloads, especially those with a high amount of write I/O activity, you at least
have PPRC performance modeling done with tools such as Disk Magic or perform a
benchmark at an IBM benchmark center. If Metro Mirror proves to have too much of a
performance impact, consider Global Mirror, which is designed not to impact production
server performance, or split the application into IASPs (see “System architecture for
System i availability” on page 545 for more details).

11.2.4 Considerations when copying the entire DASD space


Because copying the entire DASD space creates a copy of the whole source system, you
must take the following points into consideration:
 The copy is an exact copy of the original source system in every respect.
 The system name and network attributes are identical.
 The TCP/IP settings are identical.
 The BRMS network information is identical.
 User profiles and passwords are identical.
 The Job Schedule entries are identical.
 Relational database entries are identical.

You should be extremely careful when you activate a partition that has been built from a
complete copy of the DASD space. In particular, you have to ensure that it does not
automatically connect to the network because this can cause substantial problems within both
the copy and its parent system.

You must ensure that your copy environment is correctly customized before attaching it to a
network. Remember that booting from SAN and copying the entire DASD space is not a
high-availability solution, because it involves a large amount of subsequent work to make sure
that the copy works in the environment where it is used.

11.3 Using IASPs and Copy Services for System i high availability
The new i5/OS V6R1 High Availability Solutions Manager (HASM) and the System i Copy
Services Toolkit are System i Copy Services management tools that provide a set of
functions to combine PPRC, IASPs, and i5/OS cluster services for coordinated switchover and
failover processing through a cluster resource group (CRG). HASM also supports i5/OS
native HA solutions using switched disks and/or XSM geographic mirroring. However, in
contrast to the toolkit, which supports full-system (entire DASD space) as well as IASP-based
Copy Services solutions, HASM’s support for external storage Copy Services replication
solutions is restricted to IASP-based solutions only.

Note: Using independent ASPs with Copy Services is only supported when using either the
System i Copy Services Toolkit or HASM, and a pre-sale and pre-installation Solution
Assurance Review is highly recommended or even required.

For further information about HASM and the System i Copy Services Toolkit, refer to IBM
System Storage Copy Services and IBM i: A Guide to Planning and Implementation,
SG24-7103.

In the following sections, we describe some disaster recovery, backup, and high availability
scenarios using the System i switchable IASP architecture.

11.3.1 System architecture for System i availability


IASPs are the basis for System i high availability solutions, such as Switched Disks and Cross
Site Mirroring (XSM), and they can also be used with High Availability Business Partner
(HABP) solutions for software replication.

Figure 11-3 shows the basic architecture for System i availability.

This architecture uses IASPs to isolate the applications and data from the system
components. The middle system (with yellow *SYSBAS disks) can be regarded as the
production server. Local availability is provided by the upper system with orange *SYSBAS.
The dark green IASP can be switched between these two local servers in the event of
planned or unplanned outages on the servers.

The system continues to run if errors occur in the IASP disk subsystem, no matter which disk
technology is used. With multiple IASPs, perhaps each holding different applications or
environments, failure in one IASP will not affect the others. Although disaster recovery (DR)
can be provided without IASPs, the level of availability can be increased using IASPs.

Disaster Recovery functions can be provided at a remote site by the server with gray
*SYSBAS. The light green IASP in the lower site could be replicated using either Cross Site
Mirroring (XSM) or an HABP solution.

Figure 11-3 System i Availability

Further variations of this model could include another server in the remote site to provide a
more symmetric solution. Multiple systems can also assist with workload balancing although
this is not an automatic function of IASPs.

By simply replacing the internal disks making up the IASPs with external storage, we can
introduce another replication method: storage-based data replication by Metro Mirror or
Global Mirror for off-site disaster recovery, or FlashCopy for on-site backups, as shown in
Figure 11-4.

Figure 11-4 System i Availability with External Storage

Remember that the underlying architecture is identical, no matter which replication or
mirroring technique is employed. Moving applications to IASPs is fundamental to high
availability, because IASPs not only allow various replication methods to be used but also
provide more isolation from failures.

Important: As with all System i availability techniques, it is important to use i5/OS


journaling, with the journal receivers being in the IASP. This ensures that even if objects
remain in main memory, the journal receiver will be written to disk in the IASP.
Consequently, they will be copied to the DR site using Metro or Global Mirror and will be
available on the DR server to be applied to the database when the IASP is varied on.

11.3.2 Backups
As well as providing DR capabilities as shown above, the System i Copy Services Toolkit or
HASM also support FlashCopy. This can be a great benefit for customers who want to
minimize the downtime associated with doing daily backups. By separating the application
data from the system using IASPs, it is possible to create a point-in-time copy of the
application data and to attach this copy to another server (or more likely partition) to perform
the backups, as shown in Figure 11-5.

The upper server with the yellow *SYSBAS is the production server. It is attached to the dark
green IASP in the external storage subsystem. When backups are to be done, the application
should either be quiesced for a short period of time allowing the IASP to be varied off or the
IASP should be quiesced using the new i5/OS V6R1 quiesce for Copy Services function.

Either way, modified objects for the IASP are flushed from memory to disk, where they can be
copied using FlashCopy.

Figure 11-5 Using FlashCopy with IASPs

As soon as the FlashCopy command has completed (a matter of minutes), the dark green
IASP can be varied on again or resumed on the production server and the light green IASP’
can be attached and varied on to the backup server or partition where the backups can be
performed. Under most circumstances, we would anticipate that a partition would be used
allowing resources (memory and processor) to be re-allocated to the backup partition for the
duration of the backups. The System i Copy Services Toolkit manages the whole process.
IBM does not support this capability without using the Toolkit.

Important: Although it is possible to perform a FlashCopy without varying off the IASP, we
would normally advise customers to vary off the IASP or use the new i5/OS V6R1 quiesce
function (CHGASPACT). If this is not possible, it is imperative that journaling is used, with
the journal receivers being in the IASP, so that journal changes can be applied to the
database when the FlashCopy target IASP is varied on to the backup server.
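As an illustration only, the following minimal sketch shows this sequence when the i5/OS
V6R1 quiesce function is used instead of a vary off. The IASP name IASP1, the volume pairs,
and the storage image ID are assumed values, and in practice the System i Copy Services
Toolkit or HASM drives these steps for you:

On the production partition:
   CHGASPACT ASPDEV(IASP1) OPTION(*SUSPEND) SSPTIMO(300)
From a DS CLI session (for example, on another server or partition):
   dscli> mkflash -dev IBM.2107-7580741 1500:1700 1501:1701
On the production partition, as soon as the FlashCopy relationships are established:
   CHGASPACT ASPDEV(IASP1) OPTION(*RESUME)
On the backup partition, after the target LUNs have been assigned to it:
   VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*ON)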

If you have more than one production partition or server attached to the external storage
subsystem, you can use a single partition or server to back up all of your different production
IASPs, as shown in Figure 11-6.

Figure 11-6 Using a single backup partition or server for multiple production environments

In this case, all three environments (two production and one backup) are in the same cluster
and device domain, so all three nodes know about the two production IASPs (dark green and
dark blue). This allows each of the IASP copies to be attached and varied on to the single
backup server or partition. As the System i Copy Services Toolkit manages each IASP
separately, it requires separate Fibre Channel attachments on the backup server for each
IASP.

11.3.3 Providing both backup and DR capabilities


It is also possible to combine Metro Mirror or Global Mirror with FlashCopy, as shown in
Figure 11-7. In this case, the remote site provides both DR and backup capabilities. Although
it is possible to have only one i5/OS instance (server or partition) at the remote site, we
recommend using two. With only one, should a disaster occur while backups are being
performed, it will be necessary either to wait for the backups to complete before invoking the
DR service or to terminate the backups to allow the DR service to be used. In addition, while
running the DR service, there will be no facility to take backups, and alternative methods
would be required (which perhaps do not meet availability expectations).

Figure 11-7 Remote DR and backup

Similarly, it is possible to do backups locally and have a remote DR service, as shown in
Figure 11-8.

Figure 11-8 Combining local backup and remote DR

These examples are just some of the possibilities that are enabled by using IASPs and Copy
Services. There can be many more, but the same principles apply. Use IASPs to separate the
application data and code from the system. This allows much more flexibility and resilience in
designing your availability solutions.

For more information about System i high availability and disaster recovery solutions, the
System i Copy Services Toolkit, HASM, and implementing Copy Services, refer to IBM System
Storage Copy Services and IBM i: A Guide to Planning and Implementation, SG24-7103.


Chapter 12. Cloning i5/OS


In this chapter, we discuss the use and implementation of cloning in the i5/OS environment.
We cover the following topics:
 Understanding the cloning concept
 Considerations when cloning i5/OS systems
 Creating an i5/OS clone



12.1 Understanding the cloning concept
Cloning is a concept for System i that became newly available with boot from SAN.
Previously, to create a new system image, you had to perform a full installation of the SLIC
and i5/OS.

The new boot from SAN support enables you to take advantage of some of the advanced
features available with the DS6000 and DS8000 series and their Copy Services functions. One of
these functions is known as FlashCopy; this function allows you to perform a near
instantaneous copy of the data held on a LUN or group of LUNs. Therefore, when you have a
system that only has external LUNs with no internal drives, you are able to create a clone of
your system.

Important: When we refer to a clone, we are referring to a copy of a system that only uses
external LUNs. Boot (or IPL) from SAN is, therefore, a prerequisite for this.

You need to have enough free storage space on your external storage server to
accommodate the clone. Additionally, you should remember that using FlashCopy with the
full-copy option, that is, copying all tracks from source to target, to create a clone is very
resource intensive, primarily for the external storage disk units involved. Running such
FlashCopy background copy tasks during normal business operating hours could cause
performance impacts.

You should not attach a clone to your network until you have resolved any potential
conflicts that the clone has with the parent system.

By using the cloning capability that we describe in this chapter, you can create a complete
copy of your entire system in moments. You can then use this copy in any way you want. For
example, you can potentially use it to minimize windows during backup or to protect yourself
from a failure during an upgrade. You can even use it as a fast way to provide a backup or test
system. You can accomplish all of these tasks with minimal impact to your production
operations.

12.2 Considerations when cloning i5/OS systems


Cloning is a very useful tool, but you must remember that it is just one of a number of tools
that you can use. Traditionally, cloning always required that your system be powered down so
that you can take a consistent image of the full system. While the time lost for powering down
and re-IPLing the production system was limited, not every environment could tolerate the
system outage.

If the restriction on system downtime is related to having multiple applications on a single
system, you could consider migrating some applications to another partition. Alternatively, if
you are able to shut down the application but not the system, then you should consider other
tools for quickly switching your applications using independent ASPs to another system, such
as the System i Copy Services Toolkit service offering from IBM STG Lab Services, developed
by IBM Rochester. For further information about the toolkit offering, refer to:
http://www-03.ibm.com/systems/services/labservices/labservices_i.html

A significant improvement for System i availability in a FlashCopy environment is the new
i5/OS V6R1 quiesce for Copy Services function:

Tip: The new i5/OS V6R1 quiesce for Copy Services function, the CHGASPACT CL
command, allows you to suspend all database I/O activity and thereby eliminate the
requirement for a system shutdown to ensure a consistent database state before taking a
FlashCopy to create a clone of your production system.

For further information about this new quiesce function refer to IBM System Storage Copy
Services and IBM i: A Guide to Planning and Implementation, SG24-7103.

Because cloning creates a copy of the whole of the source system, you need to remember
the following considerations when you create a clone:
 A clone is an exact copy of the original source system in every respect.
 The system name and network attributes are identical.
 The TCP/IP settings are identical.
 The BRMS network information is identical.
 The Netserver settings are identical.
 User profiles and passwords are identical.
 The Job schedule entries are identical.
 Relational database entries are identical.

You need to take extreme care when you activate a clone. In particular, you have to ensure that it
does not connect to the network automatically, because doing so can cause substantial
problems within both the clone and its parent system.

Imagine that you are in the process of creating a clone and that your network has a problem
in a router. Your network is effectively split in two. You finish your clone and connect it to a new
partition ready for use. When you IPL your clone, it might see itself plugged into the network
and working correctly. The job scheduler kicks in and starts updating some external systems
that it can see. While this is happening, your live production system is updating those other
systems that it can see. The result can be catastrophic.

Important: You should not attach a clone to your network until you have resolved any
potential conflicts that the clone has with the parent system.

You need to ensure that your clone system is customized properly before you attach it to a
network.
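The exact changes depend on your environment, but as an illustration only, typical first actions
on a newly activated clone, while it is still disconnected from the network, could look like the
following CL commands. The system name CLONE1, the interface addresses, and the line
description ETHLINE are example values only:

CHGNETA SYSNAME(CLONE1) LCLCPNAME(CLONE1)  /* new identity; SYSNAME takes effect at the next IPL */
RMVTCPIFC INTNETADR('9.5.17.10')           /* remove the interface copied from the parent system */
ADDTCPIFC INTNETADR('9.5.17.99') LIND(ETHLINE) SUBNETMASK('255.255.255.0')
/* Also review job schedule entries (WRKJOBSCDE), relational database       */
/* directory entries (WRKRDBDIRE), and BRMS network settings before         */
/* starting TCP/IP on the clone.                                            */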

Important: While cloning is a highly effective means of backing up a system for disaster
recovery, always remember that it does not make sense to back up all objects on the clone
unless the backup is part of a full backup for disaster recovery. In particular, if you bring
journals and associated receivers or system logs back from the clone to the production
system, the data content will not be relevant, because the systems would in fact have a
different data history reflected in the journals. This inconsistency will lead to unpredictable
results if attempted.



Restriction: You must not attach any clone LUNs to the original parent system unless they
have been used on another partition first or deleted and re-created, that is re-formatted,
within the DS Storage System. Failure to observe this restriction will have unpredictable
results and could lead to loss of data.

This restriction is because the clone LUNs are perfect copies of LUNs that are on the
parent system, and as such, the system would not be able to tell the difference between
the original and the clone if they were attached to the same system.

As soon as you use the clone LUNs on a separate partition, they become owned by that
partition, which then makes them safe to be reused on the original partition.

12.3 Creating an i5/OS clone


In this section, we describe the steps to create a clone after turning off the production system
and to bring the clone into operational use. We then provide some examples of how you can
script this process.

The actual creation of the clone is very straightforward. Follow these steps:
1. Turn off your system using the PWRDWNSYS command.
2. Use DS CLI or the DS Storage Manager to create a FlashCopy of the LUNs that are
currently assigned to the partition.
For a clone, you typically perform FlashCopy using the full-copy option to physically copy
all source volumes to the targets because you normally intend to use the clone copy for a
longer time without performance implications to the production system.
3. After you have created the FlashCopy bitmap and established the copy, you can turn on
the production system again.
4. Next, attach the clone LUNs to a second partition, which is probably already configured for
you within the SAN.
5. Ensure that the clone partition is not connected to a network.
6. Activate the clone system.
7. Modify any settings that will cause clashes with the parent system.
8. Perform the backup or any other functions that you want on the clone.
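All of these steps can be scripted with the interfaces that we describe in the remainder of this
section. As a rough sketch only, and assuming that the production partition has already been
powered down (step 1), steps 2 and 6 could, for example, be driven from another server or
partition that has DS CLI and ssh access to the HMC. The HMC address, machine name,
partition name, profile name, and volume IDs shown here are placeholders:

dscli -hmc1 9.5.17.156 -user admin -passwd xxxxx mkflash -dev IBM.2107-7580741 1400:1600 1401:1601
ssh -T <hmcip> chsysstate -r lpar -m <machine_name> -o on -n <clone_partition_name> -f <clone_partition_profile>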

12.3.1 Creating a partition with DLPAR scripting


You can create a partition using the SSH interface into the HMC. In this section, we describe
the steps to create a simple partition. Before you start with scripting, you need to set up the
necessary programs and security keys as described in Logical Partitions on System i5: A
Guide to Planning and Configuring LPAR with HMC on System i, SG24-8000:
http://www.redbooks.ibm.com/abstracts/sg248000.html?Open

The examples in this section assume that you have already installed ssh and have it working
with your HMC.

The SSH interface to the HMC is a very powerful interface. Thus, some of the command
strings that are required can seem, at first glance, to be incomprehensible; however, when
you break down the command strings, they are in reality very simple to understand.

Important: When you are working with scripting, it is extremely important to make sure
that your spelling and selection of partition names is accurate. For example, if you attempt
to delete a partition and enter the wrong name, you might well delete the wrong partition
very quickly, resulting in the loss of data.

You will also find that some of the parameters require double quotation marks around
them. Because the parameter string is usually enclosed in single quotation marks, you
need to use a \ (backslash) before the double quotation marks.

Creating a partition
Example 12-1 shows an example of a script to create a partition.

Example 12-1 Create a partition: The crtlpar command


PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T <hmcip> mksyscfg -r lpar -m <machine_name> -i
'name=<partition_name>,profile_name=<partition_profile_name>,lpar_type=os400,all_resources=0,min
_mem=4096,desired_mem=4096,max_mem=8192,proc_mode=ded,min_procs=1,desired_procs=1,max_procs=32,s
haring_mode=share_idle_procs,\"io_slots=21030011/none/0,21040011/none/0,21030010/none/1,21040010
/none/1\",lpar_io_pool_ids=none,load_source_slot=21040010,alt_restart_device_slot=21040011,conso
le_slot=hmc,alt_console_slot=none,op_console_slot=none,max_virtual_slots=10,virtual_serial_adapt
ers=none,virtual_scsi_adapters=none,\"virtual_eth_adapters=2/0/1//0/1\",virtual_opti_pool_id=0,h
sl_pool_id=0,conn_monitoring=0,auto_start=0'

The script consists of two statements:


 The first is a path statement to ensure that all the necessary code and runtimes that are
needed are available within the session. This statement is required in any script that you
write.
 The second statement is the ssh command, which is the command that does the work.
You use ssh with three parameters:
– The first parameter is -T, which prevents the allocation of a pseudo TTY terminal by the
HMC.
– The second parameter is the name or IP address of the HMC.
– The third parameter is the command to run.

You might notice in the example that the third parameter is quite a long string, but again we
can break down this string into smaller pieces. The command consists of the command name
and its associated parameters. Appendix B, “HMC CLI command definitions” on page 605
includes a full definition of all of the possible parameters.

The command in Example 12-1 uses the following parameters:


 name=<partition_name>
The name of the LPAR partition
 profile_name=<partition_profile_name>
The name of the partition profile
 lpar_type=os400
The type of partition to create
 all_resources=0
Defines whether all the machine resources should be owned by this partition



 min_mem=4096
The minimum amount of memory required to start the partition
 desired_mem=4096
The amount of memory that should be used at startup if available
 max_mem=8192
The maximum amount of memory the partition can use
 proc_mode=ded
The type of processor allocation, in this case dedicated
 min_procs=1
The minimum number of processors the partition requires before it can be activated
 desired_procs=1
The number of processors that should be allocated to the partition if available
 max_procs=32
The maximum number of processors the partition can use
 sharing_mode=share_idle_procs
Processor sharing is allowed
 \"io_slots=21030011/none/0,21040011/none/0,21030010/none/1,21040010/none/1\"
A list of the I/O slots that are allocated to the partition. Notice the \” around the list.
Because of the forward slashes (/) in the list, you need to enclose the list in double
quotation marks. Because the entire command is already in single quotation marks, you
escape the double quotation marks with a \ (backslash).
 lpar_io_pool_ids=none
Any I/O pools used
 load_source_slot=21040010
Where to find the load source
 alt_restart_device_slot=21040011
The location of the alternate restart device
 console_slot=hmc
The location of the console, or in this case the HMC
 alt_console_slot=none
The location of the alternate console, if any
 op_console_slot=none
The location of the operations console, if any
 max_virtual_slots=10
The total number of virtual I/O slots available
 virtual_serial_adapters=none
The slot information for any virtual serial adaptors. Note that the two standard virtual serial
adapters are not included in the list because they are always assigned.
 virtual_scsi_adapters=none
The slot information for any virtual SCSI adaptors

 \"virtual_eth_adapters=2/0/1//0/1\"
The slot information for any virtual Ethernet adaptors
 virtual_opti_pool_id=0
The pool ID for the virtual Opti Connect
 hsl_pool_id=0
The pool ID for HSL opticonnect
 conn_monitoring=0
Is connection monitoring enabled
 auto_start=0
Should the partition start automatically when the system starts

If you are familiar with the HMC GUI or WebSM interface into the HMC, you should recognize
most of these parameters from creating a partition with the GUI, because both the GUI and
the CLI are built over the underlying HMC functions. In fact, the CLI can be faster in many
aspects, while the GUI trades performance for ease of use.

Deleting a partition
Example 12-2 is a sample script for deleting a partition. The basic layout is the same as for
creating a script, but the command is much simpler.

Example 12-2 Delete a partition: The dltlpar command


PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T <hmcip> rmsyscfg -r lpar -m <machine_name> -n <partition_name>

In this example, the command we use is rmsyscfg, the full definition of which is included in
Example B-2 on page 614. The parameters that we use are:
 -r lpar
The level at which we want the command to operate, in this case the partition level
 -m <machine_name>
The name of the machine on which we want to operate
 -n <partition_name>
The name of the partition on which we want to operate

Activating a partition
In the case of cloning, it is most likely that you have created your partition but have left it in a
Not Activated status until it is needed. You, therefore, are more likely to need to activate or
deactivate the partition. To activate a partition, use a script as shown in Example 12-3.

Example 12-3 Activating a partition


PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T <hmcip> chsysstate -r lpar -m <machine_name> -o on -n <partition_name> -f
<partition_profile>



Again, this example uses a standard path statement, followed by the ssh command. We use
the chsysstate command to change the status of the system, and the parameters are:
 -r lpar
The level at which we want the command to operate
 -m <machine_name>
The machine on which we want the command to operate
 -o on
The operation to perform, on means to activate it
 -n <partition_name>
The name of the partition to activate
 -f <partition_profile_name>
The name of the partition profile to use

Deactivating a partition
You might also need to deactivate a partition automatically as well. You can use the
chsysstate command with the -o shutdown option, as shown in Example 12-4.

Example 12-4 Deactivating a partition


PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T <hmcip> chsysstate -r lpar -m <machine_name> -o shutdown -n <partition_name> -f
<partition_profile>

12.3.2 Producing a rack configuration


You can also produce a rack configuration from the HMC by using a script as shown in
Example 12-5. Using this script produces output similar to that shown in Example 12-6. By
changing the list of fields, you can select the attributes that you want to see.

Example 12-5 Produce a rack configuration


PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T <hmcip> lshwres -r io --rsubtype slot -m <machine_name> -F
lpar_name,unit_phys_loc,bus_id,phys_loc,vpd_type,description

Use the lshwres command to show the resources associated with the machine:
 -r io
The type of hardware resources to list, io means physical I/O
 --rsubtype slot
The level of detail to be provided
 -m <machine_name>
The machine on which you want to operate
 -F lpar_name,unit_phys_loc,bus_id,phys_loc,vpd_type,description
The data fields that you want included

Example B-4 on page 622 provides full detail about the lshwres command.

Example 12-6 Output from lshwres
$
> lstrcl
ITSOMIGRTEST1,U5074.007.01D87DE,40,C01,2843,PCI I/O Processor
ITSOMIGRTEST1,U5074.007.01D87DE,40,C02,2757,PCI-X Ultra RAID Disk Controller
ITSOMIGRTEST1,U5074.007.01D87DE,40,C03,5708,SCSI bus controller
ITSOMIGRTEST1,U5074.007.01D87DE,40,C04,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C05,5703,PCI RAID Disk Unit Controller
ITSOMIGRTEST1,U5074.007.01D87DE,41,C06,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C07,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C09,5703,PCI RAID Disk Unit Controller
ITSOMIGRTEST1,U5074.007.01D87DE,41,C10,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C11,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C12,null,Empty slot
ITSOMIGRTEST1,U5074.007.01D87DE,41,C13,5706,PCI 10/100/1000Mbps Ethernet UTP
2-port
ITSOMIGRTEST1,U5074.007.01D87DE,41,C14,2847,I/O Processor
ITSOMIGRTEST1,U5074.007.01D87DE,41,C15,2766,PCI Fibre Channel Disk Controller
null,U5294.001.105867B,13,C11,null,I/O Processor
null,U5294.001.105867B,13,C12,null,PCI Ultra4 SCSI Disk Controller
null,U5294.001.105867B,13,C13,null,Empty slot
null,U5294.001.105867B,13,C14,null,Empty slot
null,U5294.001.105867B,13,C15,null,Empty slot
ITSO5804A,U5294.001.105867B,14,C01,2847,I/O Processor
ITSO5804A,U5294.001.105867B,14,C02,2787,PCI Fibre Channel Disk Controller
ITSO5804A,U5294.001.105867B,14,C03,2844,PCI I/O Processor
ITSO5804A,U5294.001.105867B,14,C04,2849,PCI 100/10Mbps Ethernet
null,U5294.001.105867B,15,C05,null,I/O Processor
null,U5294.001.105867B,15,C06,null,PCI Ultra4 SCSI Disk Controller
null,U5294.001.105867B,15,C07,null,Empty slot
null,U5294.001.105867B,15,C08,null,Empty slot
null,U5294.001.105867B,15,C09,null,Empty slot
Martin-Brower,U5294.001.105869B,19,C11,2844,PCI I/O Processor
Martin-Brower,U5294.001.105869B,19,C12,2780,PCI Ultra4 SCSI Disk Controller
Martin-Brower,U5294.001.105869B,19,C13,null,Empty slot
Martin-Brower,U5294.001.105869B,19,C14,null,Empty slot
Martin-Brower,U5294.001.105869B,19,C15,null,Empty slot
Martin-Brower,U5294.001.105869B,20,C01,2844,PCI I/O Processor
Martin-Brower,U5294.001.105869B,20,C02,2780,PCI Ultra4 SCSI Disk Controller
Martin-Brower,U5294.001.105869B,20,C03,2749,PCI Ultra Magnetic Media Controller
Martin-Brower,U5294.001.105869B,20,C04,2838,PCI 100/10Mbps Ethernet
Martin-Brower,U5294.001.105869B,21,C05,2844,PCI I/O Processor
Martin-Brower,U5294.001.105869B,21,C06,2780,PCI Ultra4 SCSI Disk Controller
Martin-Brower,U5294.001.105869B,21,C07,null,Empty slot
Martin-Brower,U5294.001.105869B,21,C08,null,Empty slot
Martin-Brower,U5294.001.105869B,21,C09,null,Empty slot
null,U5294.002.105868B,16,C11,null,Empty slot
null,U5294.002.105868B,16,C12,null,PCI Ultra4 SCSI Disk Controller
ITSOFCALTEST,U5294.002.105868B,16,C13,2847,I/O Processor
ITSOFCALTEST,U5294.002.105868B,16,C14,2766,PCI Fibre Channel Disk Controller
null,U5294.002.105868B,16,C15,null,Empty slot
ITSO5804A,U5294.002.105868B,17,C01,2847,I/O Processor
ITSO5804A,U5294.002.105868B,17,C02,2787,PCI Fibre Channel Disk Controller
ITSO5804A,U5294.002.105868B,17,C03,2844,PCI I/O Processor
ITSO5804A,U5294.002.105868B,17,C04,5703,PCI RAID Disk Unit Controller



null,U5294.002.105868B,18,C05,null,Empty slot
null,U5294.002.105868B,18,C06,null,PCI Ultra4 SCSI Disk Controller
null,U5294.002.105868B,18,C07,null,Empty slot
null,U5294.002.105868B,18,C08,null,Empty slot
null,U5294.002.105868B,18,C09,null,Empty slot
test,U5294.002.105870B,22,C11,2844,PCI I/O Processor
test,U5294.002.105870B,22,C12,2780,PCI Ultra4 SCSI Disk Controller
test,U5294.002.105870B,22,C13,null,Empty slot
test,U5294.002.105870B,22,C14,null,Empty slot
test,U5294.002.105870B,22,C15,null,Empty slot
test,U5294.002.105870B,23,C01,null,Empty slot
test,U5294.002.105870B,23,C02,null,Empty slot
test,U5294.002.105870B,23,C03,null,Empty slot
test,U5294.002.105870B,23,C04,null,Empty slot
test,U5294.002.105870B,24,C05,2844,PCI I/O Processor
test,U5294.002.105870B,24,C06,2780,PCI Ultra4 SCSI Disk Controller
test,U5294.002.105870B,24,C07,2780,PCI Ultra4 SCSI Disk Controller
test,U5294.002.105870B,24,C08,2838,PCI 100/10Mbps Ethernet
test,U5294.002.105870B,24,C09,null,Empty slot
SixteenProcs,U9194.001.1079111,10,C11,null,I/O Processor
SixteenProcs,U9194.001.1079111,10,C12,null,PCI Ultra4 SCSI Disk Controller
SixteenProcs,U9194.001.1079111,10,C13,null,Empty slot
SixteenProcs,U9194.001.1079111,10,C14,null,Empty slot
SixteenProcs,U9194.001.1079111,10,C15,null,Empty slot
SixteenProcs,U9194.001.1079111,11,C01,null,I/O Processor
SixteenProcs,U9194.001.1079111,11,C02,null,Empty slot
SixteenProcs,U9194.001.1079111,11,C03,null,PCI Ultra4 SCSI Disk Controller
SixteenProcs,U9194.001.1079111,11,C04,null,PCI 10/100/1000Mbps Ethernet UTP
2-port
SixteenProcs,U9194.001.1079111,12,C05,null,I/O Processor
SixteenProcs,U9194.001.1079111,12,C06,null,PCI Ultra4 SCSI Disk Controller
SixteenProcs,U9194.001.1079111,12,C07,null,SCSI bus controller
SixteenProcs,U9194.001.1079111,12,C08,null,Empty slot
SixteenProcs,U9194.001.1079111,12,C09,null,Empty slot $

For further details about the HMC commands, see Appendix B, “HMC CLI command
definitions” on page 605.

12.3.3 Creating a storage copy with scripting


You can initiate the FlashCopy from DS CLI, which is available on a number of platforms
including Windows and i5/OS.

Note: If you are not using the new i5/OS V6R1 quiesce for Copy Services function, you
need to run DS CLI to invoke FlashCopy from another server, because your System i
server, in fact, is turned off.

For a full description of the DS CLI and its use, refer to Chapter 8, “Using DS CLI
with System i” on page 391.

Example 12-7 creates the FlashCopy volume pairs from the Windows DS CLI using the
mkflash command. You can also use PPRC to create a copy in a different SAN environment if
you want. In addition to the command, the example also identifies the storage unit on which to
perform the action and the LUN pairs for which to create a FlashCopy relationship.

Example 12-7 Starting a flashcopy from DS CLI to make a clone


C:\Program Files\ibm\dscli>dscli
Enter the primary management console IP address: 9.5.17.156
Enter the secondary management console IP address:
Enter your username: admin
Enter your password:
dscli> mkflash -dev ibm.2107-7580741 1400:1600 1401:1601 1402:1602 1403:1603
1404:1604 1405:1605 1406:1606 1407:1607 1408:1608 1409:1609 140a:160a

Date/Time: July 15, 2005 10:07:05 AM CDT IBM DSCLI Version: 5.0.4.32 DS: IBM.
2107-7580741
CMUC00137I mkflash: FlashCopy pair 1400:1600 successfully created.
CMUC00137I mkflash: FlashCopy pair 1401:1601 successfully created.
CMUC00137I mkflash: FlashCopy pair 1402:1602 successfully created.
CMUC00137I mkflash: FlashCopy pair 1403:1603 successfully created.
CMUC00137I mkflash: FlashCopy pair 1404:1604 successfully created.
CMUC00137I mkflash: FlashCopy pair 1405:1605 successfully created.
CMUC00137I mkflash: FlashCopy pair 1406:1606 successfully created.
CMUC00137I mkflash: FlashCopy pair 1407:1607 successfully created.
CMUC00137I mkflash: FlashCopy pair 1408:1608 successfully created.
CMUC00137I mkflash: FlashCopy pair 1409:1609 successfully created.
CMUC00137I mkflash: FlashCopy pair 140A:160A successfully created.
dscli>
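Rather than entering the mkflash command interactively, you can also run the DS CLI in script
mode, which is convenient when you automate the clone creation. The following line is a
minimal sketch; the HMC address, password, and script file name are example values, and the
script file simply contains the mkflash command shown in Example 12-7:

C:\Program Files\ibm\dscli>dscli -hmc1 9.5.17.156 -user admin -passwd xxxxx -script c:\scripts\mkflash_clone.txt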

12.3.4 Handling clone startup


As mentioned previously, you need to make several changes before you can IPL the clone
beyond a restricted state. To avoid having to make these changes manually, you can use a
program to perform the shutdown or quiesce immediately prior to cloning, together with a
modified startup program that takes steps to prevent any issues.

In this section, we provide some sample code to use in this situation, which consists of two
ILE Control Language programs and a data area.

The first program DJPCLONE (Example 12-8 on page 564) sets the production system so
that it is ready for cloning by capturing the information that it requires to identify the clone. It
then takes over the i5/OS QSTRUPPGM system value and points it to the second program.

The second program DJPSTRUP (Example 12-9 on page 565) is a replacement startup
program to be called from the QSTRUPPGM system value. It performs the necessary checks
to restrict the startup on the clone system and also ensures that the production system IPLs
correctly with its original startup program and system values.

The data area DJPPRTN is used to store the following information about the production
system:
 System name
 Serial number
 Startup program
 Partition ID
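The sample programs do not create this data area themselves. Based on the substring
positions that the programs use (1-8, 9-8, 17-4, and 21-20), a character data area of at least
40 bytes is required, for example (the library name DPAINTER matches the samples):

CRTDTAARA DTAARA(DPAINTER/DJPPRTN) TYPE(*CHAR) LEN(40) +
          TEXT('Production system identity for cloning')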



We also include a program (Example 12-10 on page 568) to create the DJPCLONE and
DJPSTRUP programs for you, because they require binding to a service program.

Using the sample code


You use the sample code by running DJPCLONE on your production partition when you are
ready to shut down or quiesce to build the clone. If you choose to shut down at this point, you
should have already stopped all applications and subsystems on the system. The program
DJPCLONE does the processing that is necessary to identify the partition on startup and then
issues the command to either shut down or quiesce the system.

When the partition is then either in a Not Activated or suspended state, you initiate the
FlashCopy process to make the clone.

Important: Remember the considerations for activating a clone and preventing conflict
with the parent copy. See 12.2, “Considerations when cloning i5/OS systems” on page 554
for more information.

When the FlashCopy pairs are active, you can then re-IPL or resume your production system
and start your clone partition.

When the partitions IPL, the DJPSTRUP program runs as the startup program and takes
control of the systems at startup.

On the production system, it simply resets to normal production values and runs the regular
startup program. On the Clone system, it simply stops the startup and leaves the system in a
safe minimal state with only the console operational.

Important: This code is provided on an as-is basis. You need to include any additional
checks that you might deem necessary to prevent inappropriate use of these programs.
They are provided for education purposes only.

Example 12-8 DJPCLONE: Prepare for cloning


PGM

/* Declare variables to hold current values */

DCL VAR(&CURSYSNAM) TYPE(*CHAR) LEN(8)


DCL VAR(&CURSRLNBR) TYPE(*CHAR) LEN(8)
DCL VAR(&CURPRTN) TYPE(*CHAR) LEN(4)
DCL VAR(&CURSTRUP) TYPE(*CHAR) LEN(20)

/* Declare variables to hold work information */

DCL VAR(&WKDATA) TYPE(*CHAR) LEN(45)


DCL VAR(&WKFORMAT) TYPE(*CHAR) LEN(4) +
VALUE(X'00000001')
DCL VAR(&WKLEN) TYPE(*CHAR) LEN(4) +
VALUE(X'0000002D')
DCL VAR(&WKCH4) TYPE(*CHAR) LEN(4)

/* retrieve current values */

RTVNETA SYSNAME(&CURSYSNAM)

RTVSYSVAL SYSVAL(QSRLNBR) RTNVAR(&CURSRLNBR)
RTVSYSVAL SYSVAL(QSTRUPPGM) RTNVAR(&CURSTRUP)

/* obtain and extract partition id */

CALLPRC PRC('dlpar_get_info') PARM((&WKDATA) +


(&WKFORMAT *BYVAL) (&WKLEN *BYVAL))

CHGVAR VAR(&WKCH4) VALUE(%SST(&WKDATA 41 4))


CHGVAR VAR(&CURPRTN) VALUE(%BIN(&WKCH4))

/* Store current values */

CHGDTAARA DTAARA(DPAINTER/DJPPRTN (1 8)) +


VALUE(&CURSYSNAM)
CHGDTAARA DTAARA(DPAINTER/DJPPRTN (9 8)) +
VALUE(&CURSRLNBR)
CHGDTAARA DTAARA(DPAINTER/DJPPRTN (17 4)) VALUE(&CURPRTN)
CHGDTAARA DTAARA(DPAINTER/DJPPRTN (21 20)) +
VALUE(&CURSTRUP)

/* Set QSTRUPPGM so we have control after IPL */

CHGSYSVAL SYSVAL(QSTRUPPGM) VALUE('DJPSTRUP DPAINTER ')

/* POWER THE SYSTEM DOWN FOR THE FLASH TO OCCUR */

PWRDWNSYS OPTION(*IMMED) RESTART(*NO)

/* Alternatively use the i5/OS V6R1 quiesce for CopyServices function */


/* with commenting out above PWRDWNSYS statement and removing comments */
/* from below statements */

/* CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(30) + */


/* SSPTIMOACN(*END) */
/* MONMSG MSGID(CPCB717) EXEC(DO) */
/* DSCLI SCRIPT('/mkflashscript') USER(admin) OUTPUT('/outfile') */
/* CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME) */
/* ENDDO */
ENDPGM

Example 12-9 DJPSTRUP: Startup program


PGM

/* Define variables for current values */

DCL VAR(&CURSYSNAM) TYPE(*CHAR) LEN(8)


DCL VAR(&CURSRLNBR) TYPE(*CHAR) LEN(8)
DCL VAR(&CURPRTN) TYPE(*CHAR) LEN(4)
DCL VAR(&CURSTRUP) TYPE(*CHAR) LEN(20) /* needed by the RTVSYSVAL of QSTRUPPGM below */

/* Define variables for previous values */

DCL VAR(&PRVSYSNAM) TYPE(*CHAR) LEN(8)


DCL VAR(&PRVSRLNBR) TYPE(*CHAR) LEN(8)



DCL VAR(&PRVPRTN) TYPE(*CHAR) LEN(4)
DCL VAR(&PRVSTRPGM) TYPE(*CHAR) LEN(20)

/* Define work variables */

DCL VAR(&WKDATA) TYPE(*CHAR) LEN(45)


DCL VAR(&WKFORMAT) TYPE(*CHAR) LEN(4) +
VALUE(X'00000001')
DCL VAR(&WKLEN) TYPE(*CHAR) LEN(4) +
VALUE(X'0000002D')
DCL VAR(&WKCH4) TYPE(*CHAR) LEN(4)
DCL VAR(&WKSTRUPPGM) TYPE(*CHAR) LEN(10)
DCL VAR(&WKSTRUPLIB) TYPE(*CHAR) LEN(10)

/* retrieve current values */

RTVNETA SYSNAME(&CURSYSNAM)
RTVSYSVAL SYSVAL(QSRLNBR) RTNVAR(&CURSRLNBR)
RTVSYSVAL SYSVAL(QSTRUPPGM) RTNVAR(&CURSTRUP)

/* obtain and extract partition id */

CALLPRC PRC('dlpar_get_info') PARM((&WKDATA) +


(&WKFORMAT *BYVAL) (&WKLEN *BYVAL))

CHGVAR VAR(&WKCH4) VALUE(%SST(&WKDATA 41 4))


CHGVAR VAR(&CURPRTN) VALUE(%BIN(&WKCH4))

/* retrieve existing data */

RTVDTAARA DTAARA(DPAINTER/DJPPRTN (1 8)) +


RTNVAR(&PRVSYSNAM)
RTVDTAARA DTAARA(DPAINTER/DJPPRTN (9 8)) +
RTNVAR(&PRVSRLNBR)
RTVDTAARA DTAARA(DPAINTER/DJPPRTN (17 4)) +
RTNVAR(&PRVPRTN) /* retrieve the stored partition ID for the comparison below */

/* Check if any values have changed */

IF COND((&CURSYSNAM *NE &PRVSYSNAM) *OR +


(&CURSRLNBR *NE &PRVSRLNBR) *OR (&CURPRTN +
*NE &PRVPRTN)) THEN(DO)

/* Partition Changed */

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +


MSGDTA('ATTENTION: Clone image running on +
different partition/hardware') TOPGMQ(*EXT)
SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +
MSGDTA('ATTENTION: Clone image running on +
different partition/hardware') +
MSGTYPE(*ESCAPE)

/* At this point you should include any code that you wish to run */
/* remembering that TCP/IP, Network Attributes, Relational Database */

/* entries, NetServer settings and the System Name are all copies */

RETURN

ENDDO

/* To get here the partition must be the same so this is the */


/* original system */

/* We should return the QSTRUPPGM system value to its original value */

RTVDTAARA DTAARA(DPAINTER/DJPPRTN (21 20)) +


RTNVAR(&PRVSTRPGM)

/* Check for a valid program before reusing, just in case */

CHGVAR VAR(&WKSTRUPPGM) VALUE(%SST(&PRVSTRPGM 1 10))


CHGVAR VAR(&WKSTRUPLIB) VALUE(%SST(&PRVSTRPGM 11 10))

CHKOBJ OBJ(&WKSTRUPLIB/&WKSTRUPPGM) OBJTYPE(*PGM)


MONMSG MSGID(CPF0000) EXEC(DO)
/* error occurred, so do not replace */
SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA('Unable +
to return QSTRUPPGM system value to +
original state') TOUSR(*SYSOPR)
SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA('Unable +
to return QSTRUPPGM system value to +
original state') MSGTYPE(*ESCAPE)
/* error occurred, so do not replace */
RETURN
ENDDO

CHGSYSVAL SYSVAL(QSTRUPPGM) VALUE(&PRVSTRPGM)

/* Run original QSTRUPPGM now */

CALL PGM(&WKSTRUPLIB/&WKSTRUPPGM)
MONMSG MSGID(CPF0000) EXEC(DO)

/* Error in the original prog, so send message */

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +


MSGDTA('Original image running on +
unchanged partition/hardware - QSTRUPPGM +
failed. See joblog') TOUSR(*SYSOPR) +
MSGTYPE(*COMP)
ENDDO

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +


MSGDTA('Original image running on +
unchanged partition/hardware') MSGTYPE(*COMP)

ENDPGM



Example 12-10 Program to create DJPCLONE
PGM

CRTPGM PGM(DPAINTER/DJPCLONE) +
MODULE(DPAINTER/DJPCLONE) BNDSRVPGM(QPMLPMGT)

CRTPGM PGM(DPAINTER/DJPSTRUP) +
MODULE(DPAINTER/DJPSTRUP) BNDSRVPGM(QPMLPMGT)

ENDPGM
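Example 12-10 assumes that the ILE CL modules DJPCLONE and DJPSTRUP already exist.
If you keep the source in a source physical file, you could create the modules first with
commands similar to the following; the source file and member names are assumptions:

CRTCLMOD MODULE(DPAINTER/DJPCLONE) SRCFILE(DPAINTER/QCLSRC) SRCMBR(DJPCLONE)
CRTCLMOD MODULE(DPAINTER/DJPSTRUP) SRCFILE(DPAINTER/QCLSRC) SRCMBR(DJPSTRUP)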


Chapter 13. Troubleshooting i5/OS with external storage
In this chapter, we provide troubleshooting information for IBM System i environments with
DS8000/DS6000 external storage. We discuss the following topics:
 i5/OS actions
– Debugging external load source problems
 DS8000 actions
– Checking the DS8000 problem log for open problems
– Unlocking a DS8000 Storage Manager administrative password
 DS6000 actions
– Checking the DS6000 system status
– Checking the DS6000 problem log for open problems
– Contacting IBM support
– Collecting DS6000 problem determination data
– Activating an IBM remote support VPN connection
– Unlocking a DS6000 Storage Manager administrative password
 DS CLI and GUI actions
– Debugging DS6000 DS CLI and GUI connection problems
– Debugging DS CLI for i5/OS installation issues
– Debugging DS CLI and GUI rank creation failures
– DS CLI data collection for IBM technical support
 SAN actions
– Isolating Fibre Channel link problems
– SAN data collection for IBM technical support

For further information about System i and DS8000 recovery, refer to IBM System i & System
Storage DS8000 Recovery Handbook at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101099



13.1 i5/OS actions
In this section, we discuss some basic troubleshooting tasks from the System i perspective.

13.1.1 Debugging external load source problems


To help isolate i5 server partition external load source issues, follow the procedure that we
describe in this section to check for any corresponding System Reference Codes (SRCs)
potentially logged on your IBM System i Hardware Management Console (HMC).

Follow these steps:


1. Log in to your IBM System i HMC (default user ID: hscroot, default password: abc123).
2. Select Systems Management → Servers from the HMC navigation tree, and select the
partition with which you want to work, as shown in Figure 13-1.

Figure 13-1 i5 HMC Workplace™ window

3. Click Tasks → Serviceability → Reference Code History, as shown in Figure 13-2.

Figure 13-2 Selecting Reference Code History

4. Select the Reference Code for which you want to see the details (Figure 13-3).

Figure 13-3 Viewing Reference Code history



5. Refer to Table 13-1, using the reference code details to further isolate the external boot
issue.

Note: You cannot get to your partition console in this state. Thus, the only way to further
debug this issue from the i5 server side is a D-mode IPL to DST. We recommend this
method only when there is clear evidence of failed i5 hardware or as a last resort after
verifying that both the DS6000 or DS8000 and the SAN environment are OK.

Table 13-1 IOP serviceability information


Error symptom:
 SRC B2003200 on front panel
 No additional information available in ASM
Scenario: Fatal hardware error (during IPL)
Recommended action:
 Perform a D-mode IPL
 Check the I/O processor (IOP) and I/O adapter (IOA) status in DST → HSM

Error symptom:
 SRC B2003200 on front panel
 ASM informational error log entry B7005122 with an LS Not Found code value of C6000001
Scenario: No valid IOA found (during IPL)
Recommended action: Verify the following in your HMC i5 partition configuration:
 A #2847 IOP with a #2766 or #2787 IOA is assigned
 Tagged I/O load source is set to a #2766 or #2787 IOA controlled by a #2847 IOP

Error symptom:
 SRC B2003200 on front panel
 ASM informational error log entry B7005122 with an LS Not Found code value of C6000002
Scenario: The Fibre Channel IOA is operational, but the Fibre Channel link out of the adapter is
not (during IPL)
Recommended action:
 Verify that the DS6000/DS8000 host adapter port for the load source IOA is configured as
switched-fabric (FcSf)
If direct attached to a DS8000 (DS6000):
 Verify that the FC cable is fine
 Verify that the DS host adapter port is operational by using a wrap-plug on the DS host
adapter, resulting in a solid green and flashing yellow LED (bottom green)
If SAN attached to a DS6000/DS8000:
 Verify that the FC cable between the i5 and the SAN switch is fine
 Verify that the SAN switch port is operational (refer to your switch vendor’s documentation)

Error symptom:
 SRC B2003200 on front panel
 ASM informational error log entry B7005122 with an LS Not Found code value of C6xxyy00,
where xx is the number of devices (each DS6000/DS8000 counts as one device, even if
connected to multiple ports on the device) and yy is the total number of LUNs under all devices.
Example C6000000: no DS6000/DS8000 system found on the fabric.
Example C6010300: found one DS6000/DS8000 system with three LUNs, but the requested LS
was not found.
Scenario: The Fibre Channel IOA and the link out of the IOA are operational, but the load source
unit was not found (during IPL)
Recommended action:
For code C6000000:
 Verify that your SAN switch zoning is correct and that there is no DS6000/DS8000 port login
restriction configured
For other codes C6xx0000, xx > 00:
 Verify that the DS6000/DS8000 host system configuration for the boot IOA is configured with
the correct WWPN
 Verify that there is a volume group with at least one volume attached to the DS6000/DS8000
host system configuration for the boot IOA
 Verify that the DS6000/DS8000 volumes assigned to the IOA are in normal status
For other codes C6xxyy00, xx > 00:
 Verify that the DS6000/DS8000 volume group assigned to the i5 boot IOA contains the volume
ID that is the load source
 Verify that the DS6000/DS8000 volume that is the load source is in normal status
 For new installs, verify that the DS6000/DS8000 volume ID that is supposed to be the load
source was installed correctly with SLIC V5R3M5 or higher

Error symptom:
 SRC A600255/A600266 on front panel
Scenario: Contact was lost with the device indicated (during normal operation)
Recommended action: Perform LIC problem isolation procedure LICIP13 (see the IBM Systems
Hardware Information Center:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp)


13.2 DS8000 actions
The following tasks can help you to diagnose System i external storage problems from the
DS8000 side.

13.2.1 Checking the DS8000 problem log for open problems


To check the DS8000 problem log, follow these steps:
1. Log in to the IBM System Storage DS8000 Hardware Management Console (HMC) either
directly at the console installed inside the DS8000 rack or through the WebSM remote
client with the predefined user ID customer and password cust0mer to access the
Web-based System Manager interface.

Tip: You can download the Web-based System Manager (WebSM) remote client for
remote HMC management as a self-extracting Windows install file directly from the
HMC at:
http://hmc_ip_address/setup.exe

2. Select Management Environment → your HMC host name → Service Applications →
Service Focal Point → Manage Serviceable Events (Figure 13-4).

Figure 13-4 DS8000 WebSM Service Focal Point window

3. Select OK on the Manage Serviceable Events - Select Serviceable Events window that
opens (Figure 13-5).

Figure 13-5 DS8000 WebSM Manage Serviceable Events: Select Serviceable Events window

4. Review any open problems listed in the Manage Serviceable Events - Serviceable Event
Overview window (Figure 13-6).
If you have indications of a potential DS8000 storage-related problem from your System i
server side, we especially recommend that you check the “Serviceable event text” and
“First/Last reported time” information to see whether there are time-matched problem log
error indications on the DS8000. For a DS8000 machine that is registered properly in the
IBM Remote Technical Assistance and Information Network (RETAIN®), there is a
Problem Management Hardware (PMH) ticket number associated with each problem that
the DS8000 called home.

Figure 13-6 DS8000 WebSM Serviceable Event Overview window

For further information about DS8000 serviceable events, refer to the IBM System Storage
DS8000 Service Information Center at:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000sv/index.jsp



13.2.2 Unlocking a DS8000 Storage Manager administrative password
Use the security recovery utility on the DS8000 Hardware Management Console (HMC) to
unlock the DS CLI or DS Storage Manager admin user account if it got locked after exceeding
the number of allowable login attempts with the wrong password:
1. Log in to your DS8000 HMC with user ID CE and password serv1cece.
2. Select Service Applications → Service Focal Point → Service Utilities.
3. Highlight the storage facility by selecting it from the list, and then select Selected →
Get HMC.
4. Highlight the HMC by selecting it from the list, and then select Selected → Start/Stop
ESSNI.
5. Reset the ESSNI security settings by clicking Reset in the Start/Stop ESSNI dialog
window (see Figure 13-7).

Figure 13-7 ESSNI Reset

A successful reset of the DS Storage Manager administrative password back to its default
password admin is indicated by the message shown in Figure 13-8.

Figure 13-8 ESSNI successful reset

13.3 DS6000 actions
The following tasks might help you to diagnose System i external storage problems from the
DS6000 side.

13.3.1 Checking the DS6000 system status


To check the DS6000 system status, follow these steps:
1. Log in to the DS6000 Storage Manager (see Chapter 9, “Using DS GUI with System i” on
page 439).
2. Select Real-time Manager → Monitor system → Physical summary, choose the
Storage complex and Storage unit, and select Refresh to display the storage unit
resource status information (Figure 13-9).

Figure 13-9 DS6000 Storage Manager Physical summary window

3. Check that the physical resource state for each enclosure is Normal (green). If any
resource is in an Attention or Alert status, click its status field to get more information
about the non-normal resource status.



13.3.2 Checking the DS6000 problem log for open problems
To check the DS6000 problem log for open problems, follow these steps:
1. Log in to the DS6000 Storage Manager (see 9.2, “Configuring DS Storage Manager
logical storage” on page 474).
2. Select Real-time Manager → Monitor system → Logs. Select either All log entries
or Range of log entries and set the time frame in the “From this date,” “To this date,”
and time entry fields. Select Refresh to update the Log Entries display (Figure 13-10).

Figure 13-10 DS6000 Storage Manager Logs window

3. If there are no log entries in Status Open, the procedure to check for open problems ends
here. Otherwise proceed to the next step.
4. View details of any log entry in Status Open, starting from the oldest to the most recent
entry by clicking the Message ID or by marking the Select radio button that corresponds
with the log entry. Select View Details from the Select Action menu, and select Go to
continue (Figure 13-11).

Figure 13-11 DS6000 Storage Manager Logs View Details selection window



5. Review the Message text. Perform any included suggestions to correct the problem, and
click OK to continue (Figure 13-12).

Figure 13-12 DS6000 Storage Manager Log Details window

6. If you are successful in correcting the problem, close the problem entry by marking the
Select radio button and by selecting Close from the Select Action menu. A confirmation
message, CMUR00003W, displays and asks whether you really want to close the problem.
Select Continue (Figure 13-13).

Figure 13-13 DS6000 Storage Manager Logs CMUR0003W message window

If IBM support does not contact you, for example because the DS6000 call home was not
successful, and if you need further assistance on resolving the problem, you can contact
IBM support as described in the following sections.

13.3.3 Contacting IBM support
To contact IBM DS6000 technical support and open a problem ticket on the IBM Electronic
Services Web site, follow these steps:
1. Contact IBM support by selecting Real-time Manager → Monitor system → Contact
IBM and by clicking Contact IBM (Figure 13-14).

Figure 13-14 DS6000 Storage Manager Contact IBM window

2. On the IBM Electronic Services Web site, select your country in the “Send request to”
menu, select Hardware in the Select type and submit menu, select All Hardware
products (the choices are based on your country choice) in the “Select product and
submit” menu. Then, select Submit to submit a service request to IBM support
(Figure 13-15).

Figure 13-15 IBM Electronic Services Web site



3. Sign in to the IBM Electronic Service Call Web site with your IBM ID and password.
Optionally, select register now if you do not yet have an IBM ID (Figure 13-16).

Figure 13-16 IBM Service Call Sign In Web site

4. On the Electronic Service Call Web page, select Place a request, select Hardware
repair activities from the Request type menu, and select Repair/Fix hardware product from
the Sub-request type menu. Complete the remaining information that is requested in the
form, preferably also entering the error code in the Problem section, before selecting
Submit to have IBM support contact you for the opened service request (Figure 13-17).

Figure 13-17 IBM Electronic Service Call Place a request Web site



5. You get a confirmation that the service request has been opened. Optionally, you can click
the review or update link to see the service request details, including its status and the
assigned IBM problem number (Figure 13-18).

Figure 13-18 IBM Electronic Service Call Place a request confirmation Web site

6. Wait for IBM support to contact you for the newly opened problem ticket.

13.3.4 Collecting DS6000 problem determination data


If you have an open DS6000 issue that requires IBM support assistance for resolution, you
can send DS6000 problem determination data to IBM support to help solve the issue quickly.
Figure 13-19 shows the overall process for offloading problem determination data from the
SMC to IBM.

Figure 13-19 DS6000 problem determination data offload process

To send DS6000 problem determination data to IBM support, follow these steps:
1. Log in to the DS6000 Storage Manager (see Chapter 9, “Using DS GUI with System i” on
page 439).
2. Select Real-time Manager → Manage hardware → Storage units. Select the storage
unit, choose Copy and Send Problem Determination Data in the Select Action menu,
and then select Go to continue (Figure 13-20).

Figure 13-20 DS6000 Storage Manager Storage Units window

3. Choose Copy new data for Select a data type, mark both Traces and Dumps for Select a
file type, and select Next to continue (Figure 13-21).

Note: After selecting Next on the DS6000 Storage Manager Copy problem
determination data window, a process to collect trace (PE-package) and dump
(statesave) data runs automatically in the background. The panel does not refresh until
this process actually ends, which can take a few minutes.

The usability of this panel will be improved in future DS6000 Storage Manager releases.

Figure 13-21 DS6000 Storage Manager Copy problem determination data window



4. If your SMC has access to the public Internet, select Next on the Download problem
determination data window and proceed to the next step.
If your SMC has no Internet access, select Download data to local directory, which
creates a hyperlink for each file in the Select a file to download list. To save the files on
your local machine, simply click each file name, and select Save in the dialog window that
opens. Transfer these saved files, without changing the file names, from your local machine
to one that has Internet access and use anonymous binary FTP to send them to the
testcase.software.ibm.com IBM server, directory /ssd/toibm/sharkdumps (see the example
FTP session after this procedure). Select Cancel to finish collecting DS6000 problem
determination data (Figure 13-22).

Figure 13-22 DS6000 Storage Manager Download problem determination data window

5. In the Send problem determination data to IBM panel, ensure that the Send all data
option is selected, and select Next to continue (Figure 13-23).

Figure 13-23 DS6000 Storage Manager Send problem determination data to IBM window

6. Select Finish on the Verification panel to start the automatic process of transferring
problem determination data through FTP port 21 to the testcase.software.ibm.com IBM
server (Figure 13-24).

Figure 13-24 DS6000 Storage Manager Problem determination data Verification window
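
The manual transfer described in step 4 can be done with any standard FTP client. The
following is a minimal sketch of such a session; pd_data_file is only a placeholder for the
actual trace and dump file names that were saved from the SMC, which must not be renamed:

ftp testcase.software.ibm.com
(log in with user anonymous and your e-mail address as the password)
ftp> binary
ftp> cd /ssd/toibm/sharkdumps
ftp> put pd_data_file
ftp> quit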

13.3.5 Activating an IBM remote support VPN connection


You can activate a secure virtual private network (VPN) connection to the IBM service
organization that enables authorized IBM remote support to access your storage servers and
to assist you quickly with DS6000 problem determination. Figure 13-25 shows the overall
process for establishing a secure VPN connection to IBM support.

Figure 13-25 DS6000 remote support secure VPN connection



To activate a secure virtual private network (VPN) connection to the IBM service organization,
follow these steps:
1. Log in to the DS6000 Storage Manager (see Chapter 9, “Using DS GUI with System i” on
page 439).
2. Select Real-time Manager → Manage hardware → Storage units, select the storage
unit, and choose Activate Remote Support in the Select Action menu. Then, select Go to
continue (Figure 13-26).

Figure 13-26 DS6000 Storage Manager Storage Units Activate Remote Support selection window

3. Select OK in the Activate Remote Support panel to start the remote support VPN
connection to IBM (Figure 13-27).

Figure 13-27 DS6000 Storage Manager Activate Remote Support window

4. Close the long-running task window that opens by selecting Close and View summary.
The established VPN connection is indicated by an IBMVPN network connection item in
the Windows system tray on the SMC (Figure 13-28).

Figure 13-28 DS6000 Storage Manager Activate Remote Support long running task window

5. To stop the remote support VPN connection after IBM remote support has completed its
problem analysis, right-click the IBMVPN network connection item and select Disconnect
(Figure 13-29).

Figure 13-29 Stopping the VPN connection from the IBMVPN connection item on the SMC

13.3.6 Unlocking a DS6000 Storage Manager administrative password


Use the security recovery utility on the DS6000 Storage Management Console (SMC) to
unlock the DS CLI or DS Storage Manager admin user account if it locks after exceeding the
number of allowable login attempts with the wrong password. Follow these steps:
1. On the SMC open a Command Prompt window by selecting Start → Run. Then, enter
cmd, and select OK.
2. Access the directory where the recovery utility was installed. If you used the default
DS6000 Storage Manager installation options, enter the following:
c:
cd \Program Files\IBM\dsniserver\bin\
3. Start the recovery utility by entering:
securityRecoveryUtility.bat -r
After successful completion, you receive the following message:
SecurityRecoveryUtility: Reset command was successful

13.4 DS CLI and GUI actions


You also have the option to use the DS CLI and both of the DS model GUIs to perform some
troubleshooting tasks. This section explains these options.

13.4.1 Debugging DS6000 DS CLI and GUI connection problems


If you are facing a DS6000 DS CLI or DS Storage Manager GUI connection problem (such as
message CUMMI8000E: Unable to connect to management console server), follow these
troubleshooting steps:
1. If you were never able to connect before, check that the following items are valid and take
corrective action if these criteria are not met:
a. The correct DS CLI/GUI and Java versions are installed for the DS6000 microcode on
which you are running.
b. The Windows Control Panel → Regional and Language Options setting on your
Storage Management Console (SMC) is English (United States).
c. The DS6000 storage server IP setup has been done correctly (see 10.4, “Setting the
DS6000 IP addresses” on page 526), it matches the network settings specified in the
DS6000 Storage Manager (see “Assigning a storage unit to a storage complex” on
page 451), and the DNS and subnet mask entries match those configured on your
Storage Management Console server, which has been assigned a static IP address.



d. If there is a firewall (for example, from Windows XP SP2) on the Storage Management
Console, verify that you are authenticated with it or that it is deactivated.
2. If you were able to connect previously, check that the IBM DS Network Interface Server
service is running. Otherwise, start it manually (see “Starting DS6000 Storage Manager”
on page 450).

Note: Future DS Storage Manager versions will include an enhancement to prevent the
IBM DS Network Interface Server service from being stopped when the administrative
user logs off.
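
In addition to these checks, a quick sanity test from a DS CLI client workstation can help to
isolate whether the problem is in the network or in the SMC software stack. The following is
a minimal sketch; the SMC IP address, user ID, and password are placeholders for your own
environment:

ping 10.0.0.1
dscli -hmc1 10.0.0.1 -user admin -passwd mypassword lssi

If the ping fails, investigate the network setup first. If the ping succeeds but the lssi query
still fails, focus on the SMC-side items listed in the previous steps.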

Contact IBM support (see 13.3.3, “Contacting IBM support” on page 581) if you need further
assistance.

13.4.2 Debugging DS CLI for i5/OS installation issues


The most common issue seen on the System i server after DS CLI installation is that the DS
CLI returns the following message:
CMUN00018E: Unable to connect to the storage management console server

This message can occur for many reasons, and they all relate to the Java SSL setup on
i5/OS.

If you receive this message, you need to review the DS CLI client log file at
/home/userprf/dscli/log/niclient.log. This file lists the connectivity attempts to the systems. The
DS CLI will always try to connect to an IBM System Storage DS6000 or DS8000 storage
subsystem first. If it cannot connect, it tries to communicate to an IBM System Storage
Enterprise Storage Server (ESS).

Tip: It is often helpful to delete the niclient.log file before you try a DS CLI command that
you know to be failing. The DS CLI re-creates the file, and it is easier to see the relevant
connection information.

If the following message appears in the DS CLI niclient.log file, then your Java SSL is not set
up correctly:
javax.net.ssl.SSLHandshakeException: No compatible cipher suite available between
SSL end points

Perform the following steps to identify the SSL issue.


1. Check the prerequisites from the DS CLI readme.txt document (see the verification sketch
at the end of this section), which include:
– The latest Java group PTF
– 5722SS1 option 34 Digital Certificate Manager
– 5722AC3 *BASE Crypto Access Provider 128-bit
– 5722DG1 *BASE IBM HTTP Server for iSeries
– 5722JV1 option 6 Java Developer Kit 1.4
2. Check whether the 5722JV1 PTF SI14809 is on the System i server. If it is, you need to
remove it and reapply it. This PTF only fixes the problem for which it is released if
JDK1.4.2 is installed on the system before it is applied.
3. Check that the digital certificate manager (DCM) recognizes the 128 bit encryption (*AC3)
installation. Systems that are restored from tape typically restore invalid keys that prevent
the DCM from working correctly.

The steps to check are:
a. Ensure that the admin HTTP server is started using the following command:
STRTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
b. Use a Web browser to connect to the OS/400 HTTP server admin port:
http://IP_address:2001
c. Sign on as QSECOFR.
d. Click the Digital Certificate Manager option.
If the following message appears in the Web browser, your DCM is not operating correctly:
You must install one of the cryptographic access provider products on your
system before using the Digital Certificate Manager (DCM) functions. Contact
your system administrator.
Use the following command to initialize your DCM only if this message is returned:
CALL QCAP3/QYAC3INAT
Then, end the HTTP admin server and restart it:
ENDTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
STRTCPSVR SERVER(*HTTP) HTTPSVR(*ADMIN)
Repeat the previous steps to check that the DCM is now operational.
4. If the following message appears in the DS CLI niclient.log file, typically your i5 server
system date is set incorrectly:
javax.net.ssl.SSLException: No certificate authorities are registered for this
secure application
The certificate exchange requires that the system date is later than Tue Apr 29 20:55:29
UTC 2003, or the certificate that the SMC or HMC sends is rejected.
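
The following i5/OS commands provide a minimal sketch for the prerequisite checks in steps 1
and 2: DSPSFWRSC lists the installed licensed programs and options, WRKPTFGRP shows the
installed PTF groups (including the Java group PTF), and DSPPTF with the SELECT parameter
shows whether a specific PTF, such as SI14809, is applied. This is not an exhaustive
verification of all prerequisites:

DSPSFWRSC
WRKPTFGRP
DSPPTF LICPGM(5722JV1) SELECT(SI14809)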

13.4.3 Debugging DS CLI and GUI rank creation failures


If DS6000/DS8000 rank creation fails through the DS CLI or the DS Storage Manager GUI with
the following message, check whether the DS6000/DS8000 LIC activation keys are applied (see
9.1.3, “Applying storage unit activation keys” on page 469):
CMUN0214xx Unable to create rank:
 If the keys have not been applied, apply the storage unit activation keys, and retry the rank
creation.
 If the keys have been applied, perform the following actions:
– If you used DS CLI for trying to create the rank, see 13.5, “DS CLI data collection for
IBM technical support” on page 592.
– If you used the DS Storage Manager GUI on a DS6000, see 13.3.4, “Collecting
DS6000 problem determination data” on page 584.
– Contact IBM support.



13.5 DS CLI data collection for IBM technical support
To gather DS CLI client data for problem determination by IBM support, follow these steps:
1. Set the DS CLI verbose-mode command output by entering:
setoutput -v on
Alternatively, edit the dscli.profile file under home dir/dscli/profile and change the
setting to verbose: on (see the sketch after this list for an example of steps 1 and 2).
2. Run the failing CLI command and save the output into a text file.
3. Get the time stamp of the SMC/HMC when the failing CLI command was executed.
4. Collect DS CLI client logs from:
– install dir/dscli/log (niCA.log, niClient.log)
– home dir/dscli/log (dscli_?_?.log)

Note:
For UNIX systems:
install dir default: /opt/ibm/dscli
home dir default: /home/youruserid
For Windows systems:
install dir default: C:\Program Files\IBM\DSCLI
home dir default: C:\Documents and Settings\youruserid
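
As a minimal sketch of steps 1 and 2 on a Windows DS CLI client, assuming that verbose: on
has already been set in the dscli.profile file and using lsarraysite only as a placeholder
for the command that is actually failing:

cd "C:\Program Files\IBM\DSCLI"
dscli lsarraysite > C:\temp\failing_command_output.txt 2>&1

Provide the redirected output file to IBM support together with the log files listed in step 4.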

13.6 SAN actions


Troubleshooting the SAN can be more difficult, because the network can be huge and can
cover many different sites, hardware components, and connection providers.

13.6.1 Isolating Fibre Channel link problems


This section provides guidance on isolating Fibre Channel link connection problems between
a SAN switch and IBM System Storage DS8000 (DS6000). Each DS8000 (DS6000) Fibre
Channel port has green and yellow (bottom green) Emulex-controlled status LEDs, which
decode as follows:
1. Link Up - Solid green with flashing yellow (bottom green):
This state indicates that the card has passed the Power On Self Test (POST), the Fibre
Channel link is up, and the driver is likely loaded correctly.
2. Link Down - Flashing green with no flashing yellow (bottom green):
This state indicates that the adapter has passed the Power On Self Test (POST). There is
no valid Fibre Channel link, and there might be no driver loaded in the operating system.
3. Hardware failure - Any other combination of lights:
Any other combination normally indicates that the Fibre Channel card/SFP is defective.
Contact IBM technical storage support for verification and repair.

Important: These standard Emulex LED codes are not suitable to debug IBM eServer i5
IOA connection problems, as the IOP/IOA might have been reset as part of error recovery.
However, they still apply for any connection of a DS6000/DS8000 to a SAN switch.

13.6.2 SAN data collection for IBM technical support


The following information is typically required by IBM technical support for analysis of SAN
related problems:
 Problem description, including date and time of occurrence
 Recovery actions attempted and their results
 SAN switch logs (for example, Brocade supportShow or Cisco show tech-support detail)
 SAN layout diagram
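
The collection method for the switch logs depends on the switch vendor. As a minimal sketch
for a Brocade switch, the supportShow output can be captured from a workstation over SSH and
redirected to a file (the switch name and user ID are placeholders):

ssh admin@sanswitch01 supportShow > sanswitch01_supportshow.txt

For Cisco MDS switches, the equivalent approach is to log in to the switch CLI and capture
the output of show tech-support detail.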



Appendix A. FAQs for boot from SAN with IOP-less and #2847 IOP-based Fibre Channel

This appendix includes frequently asked questions (FAQs) regarding boot from SAN external
storage with the IOP-less and #2847 IOP-based Fibre Channel adapter cards.



Question 1. What is boot from SAN and what is it used for?
Boot from SAN refers to placing the i5/OS load source unit on an IBM external storage
subsystem without requiring an integrated internal disk unit on a System i POWER5 or
POWER6 server.

Compared to previous remote load source mirroring solutions, where the load source
remained on an internal disk unit mirrored to an external LUN, boot from SAN eliminates the
requirement for a remote load source recovery when the load source mirror mate is to be
used either for recovery purposes or for system cloning. Storage-based data replication
solutions that replicate the whole system space (that is, not only IASPs) take advantage of
boot from SAN by significantly reducing the recovery time to IPL another (recovery) system
from the copied boot from SAN external load source. Also, because no manual intervention
for remote load source recovery is required, boot from SAN lays a foundation for FlashCopy
automation solutions, for example allowing a fully automated system backup process with
FlashCopy.

Question 2. What is new for boot from SAN with IOP-less Fibre
Channel?
The new System i IOP-less Fibre Channel (FC) technology, supported on System i POWER6
models only, provides inherent boot from SAN support. Two new IOP-less dual-port FC
adapters are available, the #5749 as a PCI-X version and the #5774 as a PCI Express (PCIe)
version. These IOP-less adapters support both disk and tape systems but not on the same
adapter port. For disk storage systems, they support only the IBM System Storage DS8000
series models.

For further information, refer to Chapter 4, “i5/OS planning for external storage” on page 75.

Question 3. What is the purpose of the #2847 IOP?


The #2847 I/O processor (IOP) is designed to support an i5/OS load source disk unit directly
inside a Fibre Channel connected external storage device, such as the IBM System Storage
ESS model 800, DS6000, or DS8000 series. It is supported with IBM System i POWER5 or
POWER6 servers that are managed by an HMC.

Question 4. How is the #2844 IOP different from the #2847 IOP?
The #2847 IOP is designed specifically to support boot capabilities by placing the i5/OS load
source disk unit directly inside one of the supported external storage servers. Compared to
the #2844 IOP, which supports a wide range of IOAs, the #2847 IOP supports only the #2766,
#2787, or #5760 Fibre Channel disk controllers.

Question 5. What System i hardware models and IBM System
Storage subsystems does the #2847 IOP support?
The #2847 IOP requires IBM System i POWER5 or POWER6 systems (Model 515, 520, 525,
550, 570, 595, or 9411-100) along with an IBM System Storage disk storage subsystem. IBM
has tested, and therefore supports, IBM System Storage ESS model 800, DS6000, and
DS8000 series. Other IBM System Storage ESS models or any OEM hardware configurations
are not tested.

Question 6. How many LUNs do the adapters for boot from SAN
support?
The new IOP-less dual-port Fibre Channel adapters #5749 or #5774 support up to 64 LUNs
per port, that is up to 128 LUNs per adapter.

With #2847 IOP-based boot from SAN, the LUNs are connected through a single-port Fibre
Channel adapter #2766, #2787, or #5760 supporting 32 LUNs in total. In addition to the load
source LUN, the #2847 can support up to 31 additional LUNs.

Question 7. Is multipath supported with boot from SAN?


With i5/OS V6R1, multipath is supported also for the boot from SAN LSU, regardless of
whether it is attached to an IOP-less Fibre Channel adapter or a #2847 IOP-based Fibre
Channel adapter.

Prior to i5/OS V6R1, multipath was not supported for the boot from SAN LSU though the
#2847 IOP supported multipath to other non-load source LUNs.

To provide protection for the SAN load source prior to i5/OS V6R1, you had to purchase a
second #2847 IOP instead of #2844, define an additional unprotected LUN, and use i5/OS
mirroring to protect the load source LUN. Remaining LUNs can take advantage of multipath
I/O.

Question 8. Why do I need an HMC with boot from SAN?


The HMC enables a system administrator to tag an I/O adapter to enable boot functions,
regardless of the card slot placements. Boot from SAN exploits this capability of the HMC to
tag the I/O adapter (IOA) and direct IPL requests from the i5/OS load source that is placed in
IBM System Storage disk storage subsystem. Therefore, System i customers that currently
have no HMC need to purchase an HMC to achieve this new support.

Question 9. Do I need to upgrade any software on my HMC?


To use the #2847 IOP for boot from SAN requires a minimum HMC firmware level of 5.1 or
above. For IOP-less Fibre Channel boot from SAN, no special consideration for the HMC
firmware level applies, given that IOP-less is supported only on System i POWER6 models
that require HMC firmware version 7 or above.

Question 10. What are the minimum software requirements to
support #2847?
Minimum software requirements for i5/OS, HMC, system firmware, and IBM System Storage
disk storage subsystem include:
 Licensed Internal Code: V5R3M5 LIC (level RS 535-A or later)
 i5/OS Operating System: V5R3M0 Resave Level (level RS 530-10 or later)
 System Firmware: 2.3.5
 HMC Firmware: 5.1
 Latest microcode levels for IBM ESS model 800, DS6000, or DS8000 series
 Latest cumulative PTF package for i5/OS and Licensed Program Products.

Question 11. Will the #2847 IOP work with iSeries models?
No. Only POWER5 processor-based systems with an HMC support the #2847 IOP.

Question 12. Do I need to upgrade my system firmware on a System i5 server?
Using the #2847 IOP for boot from SAN on a System i POWER5 system requires a minimum
system firmware level of 2.3.5 or above. You can also update the system firmware using the
HMC.

Question 13. What changes do I need to make to an IBM System Storage DS8000, DS6000,
or ESS model 800 series to support boot from SAN?
IOP-less Fibre Channel disk storage subsystem attachment, whether used with boot from
SAN or not, is supported only with the IBM System Storage DS8000 series at microcode level
2.4.3 or later. For direct attachment, ensure that the DS8000 host ports are configured as
FC-AL, which is the only protocol supported for IOP-less Fibre Channel direct attachment.
For SAN switch attachment, the DS8000 host ports must be configured as FC-SW.

For #2847 IOP-based Fibre Channel boot from SAN, no microcode or hardware changes are
required on the disk storage subsystem. However, at configuration time, it is important to
ensure that the ports are configured as FC-SW and not as FC-AL. Failure to configure the
ports as FC-SW results in your system not being able to find the load source disk unit in the
storage subsystem. For more information, see 6.4, “Setting up an external load source unit”
on page 211.
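
If you prefer to verify or change the host I/O port protocol with the DS CLI instead of the
GUI, a sketch such as the following can be used. The storage image ID IBM.2107-7512345 and
the port ID I0001 are placeholders only, and the exact -topology values for FC-AL and FC-SW
attachment are an assumption here that should be confirmed in the DS CLI reference for your
microcode level:

dscli> lsioport -dev IBM.2107-7512345
dscli> setioport -dev IBM.2107-7512345 -topology fcal I0001

The lsioport command displays the current topology setting of each host I/O port, and
setioport changes it.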

Question 14. Will I have to define the load source LUN as a
“protected” or as an “unprotected” LUN?
With i5/OS V6R1, multipath is supported for the load source LUN. So, i5/OS mirroring is not
required to provide path redundancy for the load source. Unless you are planning to use
i5/OS mirroring for data redundancy, you can define the load source LUN as a protected
volume model on the external storage subsystem.

Prior to i5/OS V6R1, multipath is not supported for the load source LUN. So, to provide
redundancy for the SAN Load Source, you need to specify it on the external storage
subsystem as unprotected to mirror the load source LUN using i5/OS mirroring. If you are not
planning to enable redundancy at the IOP level to provide an alternate path to the load
source, you can define the LUN as a protected LUN in the external storage subsystem.

Question 15. Will the Fibre Channel load source require direct
connectivity to my SAN storage device, or can I go through a
SAN fabric?
You can either have a direct point-to-point connection from the Fibre Channel adapter to the
host bus adapter (HBA) on the external storage subsystem, or you can use one of the
supported SAN switches or directors. In the latter case, zoning for System i IOAs is highly
recommended to avoid potential performance degradation and to allow for easier problem
isolation.
isolation. For further information about SAN switch zoning for System i connectivity, refer to
4.2.5, “Planning for SAN connectivity” on page 92.

Question 16. Do I have to replace all of my #2844 IOPs with #2847?
No. There is no need to replace all of your #2844 IOPs with the new #2847 IOP. You need
only one #2847 IOP per system or per logical partition to support the external i5/OS load
source, or a maximum of two IOPs per system or logical partition where you are enabling
redundancy for the load source LUN. All additional LUNs can be supported using the #2844.

Question 17. Can I share #2847 across multiple LPARs on the same system?
No. You need to have a dedicated #2847 IOP for every load source LUN or up to two where
you have redundancy enabled at the IOP level. Each LPAR needs its own IOP to support the
i5/OS load source disk unit of that partition.

Question 18. Is the #2847 IOP supported in Linux or AIX
partitions on System i?
No. Linux and AIX partitions do not need IOPs, and none of the IOPs, including the #2847,
are supported in these partitions. The supported Fibre Channel adapters installed in these
partitions support boot capabilities for Linux or AIX kernels loaded in these partitions.

Question 19. Where can I get additional information about #2847 IOP?
For information about planning and implementation considerations that are required for
installing the 2847 IOP, you can use this book.

Question 20. Is the #2847 customer set up?


Yes. Similar to the #2844, the #2847 is a customer setup IOP. You must ensure that all of the
hardware and software prerequisites are met so that the configuration is recognized.

Question 21. Will my system come preloaded with i5/OS when I order #2847?
No. Whenever you order the #2847 IOP, the preload of i5/OS software is disabled because
there might be no integrated internal disk configured on that order. Instead, the system is
shipped with software media.

Question 22. What is the difference between V5R3M5 and V5R3M0 for LIC?
V5R3M5 and later includes support for #2847, and V5R3M0 does not.

Question 23. Can I continue to use both internal and external storage even though I have
ordered the #2847 IOP?
Yes. You can continue to combine the usage of both types of storage based on your application
and business needs. IBM recommends that within a given auxiliary storage pool (ASP) you
keep one type of storage technology rather than intermixing the technologies in the same
ASP. For example, this mixed storage environment could apply when running multiple LPARs
on the same system. You might have a requirement to allocate internal storage to certain
LPARs and SAN-based disk storage to other LPARs.

Question 24. Could I install #2847 on my iSeries model 8xx
system, or in one of the LPARs on this system?
#2847 IOP is not supported on iSeries model 8xx. Therefore, you cannot install it on the
system or in any of the LPARs that are defined on these systems.

Question 25. Will the #2847 IOP work with V5R3M0 Licensed
Internal Code?
No, minimum LIC requirement is V5R3M5 or later.

Question 26. What happens to my system name and network attributes when I perform a
point in time FlashCopy operation?
FlashCopy creates an identical image of the source system, and every system and user
object, including history logs, message queues, system configuration attributes, locales, and
network configuration attributes, is duplicated.

When enabling the cloned system image, you need to perform a manual IPL to change the
system name and network attributes and to re-assign hardware resource names for your
network configuration objects prior to introducing the cloned partition into your network.
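
For example, after the manual IPL of the cloned partition, a command sketch such as the
following can be used to give the clone its own identity before it joins the network. The
system name and local control point name CLONE01 are placeholders, and the hardware
resource names must be taken from the cloned partition's own configuration:

CHGNETA SYSNAME(CLONE01) LCLCPNAME(CLONE01)
WRKHDWRSC TYPE(*CMN)
WRKLIND

WRKHDWRSC TYPE(*CMN) shows the communications resource names that the cloned partition
detected, and WRKLIND allows the line descriptions to be re-pointed to those resources.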

Question 27. Prior to i5/OS V6R1, multipath I/O is not supported on #2847 for SAN Load
Source. Does this mean that the LUNs attached to the Fibre Channel I/O adapter are
unprotected by multipath I/O?
Prior to i5/OS V6R1, only the load source LUN cannot be enabled for multipath I/O. The
remaining LUNs (up to 31 of them) can benefit from multipath I/O support.

For example, if you had an existing i5 system or logical partition using multipathed LUNs in
external storage, you would have either a RAID or mirror protected load source. To replace
this load source and maintain multipath to all other LUNs, you can use one of the following
methods:
 The simplest method of load source migration is to purchase two #2847 IOPs and two FC
IOAs and use these adapters to create a mirrored SAN load source pair plus 31 additional
multipath LUNs. You need to ensure that there is enough slot capacity in your i5 system
for these additional features.
 Alternatively, you can use existing multipath LUNs by changing the IOPs that drive them.
This method is a complex migration scenario, so we overview it only briefly here. You need
to plan whether the existing FC IOA has the capacity to support the migrated load source,
that is, fewer than 32 LUNs and sufficient throughput capacity. We assume this is a two-IOP
multipath set. You upgrade i5/OS to V5R3M5. Then, you swap the existing pair of #2844
IOPs supporting the multipath for a pair of new #2847 IOPs, with the system turned off or
using concurrent maintenance. Assuming load source redundancy is required, you need a
pair of unprotected LUNs. Then, you perform load source migration as described in
Chapter 7, “Migrating to i5/OS boot from SAN” on page 277.

Question 28. Will the base IOP that is installed in every system
unit be replaced with the new #2847 IOP?
No. Among other devices, the base IOP is used to drive the internal DVD or CD-ROM drives
and the ECS communications link. These are still required even when you plan to attach all
of your disk storage using an external SAN disk storage subsystem.

Question 29. Why does it take a long time to ship the #2847
IOP?
To ensure that important planning and implementation considerations are completed prior to
enabling the new #2847 IOP, IBM has deployed a mandatory technical review for all system
orders that have this new IOP. A questionnaire is generated automatically, and you need to
complete and return it to rchiroc@us.ibm.com. IBM then schedules a Technical Review call,
after which the order is approved for shipment.

You can direct additional questions regarding the Technical Review or the Questionnaire to:
mailto:rchiroc@us.ibm.com

Question 30. Do I need to complete the questionnaire that I got after I ordered the #2847 IOP?
Yes. It is important to complete the questionnaire and mail the responses to:
mailto:rchiroc@us.ibm.com

This questionnaire allows IBM to schedule the required Technical Review prior to processing
your order.

Question 31. Where do I obtain information about the #2847 IOP in Information Center?
The eServer or System i Information Center does not have documentation for this new IOP.
For information about planning and implementation considerations that are required for
installing the 2847 IOP, you can use this book.

Question 32. How many Fibre Channel adapters are supported by the #2847 IOP?
One FC adapter is supported—either #5760, #2787, or #2766. The #2787 and #2766 are
withdrawn from marketing.

Question 33. Can I use the #2847 IOP to attach my tape Fibre
Channel I/O adapter and also to boot from it?
No. The #2847 IOP is only designed to support SAN i5/OS load source LUN, and up to 31
additional LUNs. Tape adapters are not supported by this IOP.

Boot from Fibre Channel attached tape drives is only supported with the System i POWER6
IOP-less Fibre Channel cards #5749 or #5774.

Question 34. How many card slots does the #2847 IOP require?
Can I install the IOP in 32-bit slot, or does it need to be in a
64-bit slot?
The IOP occupies one card slot and it can be placed in either a 32-bit or a 64-bit slot. It is
highly recommended that the Fibre Channel disk adapter be placed in a 64-bit card slot for
optimum performance.


Appendix B. HMC CLI command definitions


This appendix includes extracts from the Hardware Management Console (HMC)
command-line interface (CLI) definitions that apply to Chapter 12, “Cloning i5/OS” on
page 553. You can find the full list of commands at the Fix Central, HMC page:
http://www14.software.ibm.com/webapp/set2/sas/f/hmc/home.html

The CLI references are located in the PDF files that are ordered by the HMC release. Be sure
to select the 4.5 Command Line Specification or later.
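
These commands can be run directly in the restricted shell on the HMC or remotely over SSH
from another system. As a minimal sketch, using placeholder HMC, managed system, partition,
and profile names with the hscroot user ID:

ssh hscroot@hmc1 "lshwres -r proc -m system1 --level sys"
ssh hscroot@hmc1 "chsysstate -m system1 -r lpar -o on -n mylpar -f myprof"

The syntax of each command follows the definitions and examples in this appendix.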



The mksyscfg command definition
Example B-1 includes the mksyscfg command definition.

Example: B-1 The mksyscfg command definition

NAME
mksyscfg - create system resources

SYNOPSIS
mksyscfg -r {lpar | prof | sysprof} -m managed-system
{-f configuration-file | -i "configuration-data"}
[--help]

DESCRIPTION
mksyscfg creates partitions, partition profiles, or system
profiles for the managed-system.

OPTIONS

-r The type of system resources to create. Valid val-


ues are lpar for partitions, prof for partition
profiles, and sysprof for system profiles.
When a partition is created, the default profile
for the partition is also created.

-m The name of the managed system for which the system


resources are to be created. The name may either
be the user-defined name for the managed system, or
be in the form tttt-mmm*ssssssss, where tttt is the
machine type, mmm is the model, and ssssssss is the
serial number of the managed system. The tttt-
mmm*ssssssss form must be used if there are multi-

ple managed systems with the same user-defined


name.

-f The name of the file containing the configuration


data needed to create the system resources. The
configuration data consists of attribute name/value
pairs, which are in comma separated value (CSV)
format. These attribute name/value pairs form a
configuration record. A line feed marks the end of
a configuration record. The file must contain one
configuration record for each resource to be cre-
ated, and each configuration record must be for the
same resource type.

The format of a configuration record is as follows:

attribute-name=value,attribute-name=value,...<LF>

Note that certain attributes accept a comma sepa-


rated list of values, as follows:

"attribute-name=value,value,...",...<LF>

When a list of values is specified, the attribute


name/value pair must be enclosed in double quotes.
Depending on the shell being used, nested double
quote characters may need to be preceded by an
escape character, which is usually a '\' character.

Attribute names for partitions (see below for


attribute names that are common to both partitions
and partition profiles):
name
name of the partition to create
[lpar_id]
profile_name
name of the default profile to create
lpar_env
Valid values are aixlinux, os400, or
vioserver
[shared_proc_pool_util_auth]
Valid values are:
0 - do not allow authority
1 - allow authority
[work_group_id]

Attribute names for partition profiles (see below


for attribute names that are common to both parti-
tion profiles and partitions):
name
name of the partition profile to create
lpar_name | lpar_id
name or ID of the partition for which
to create the profile

Attribute names for both partitions and partition

profiles:
[all_resources]
Valid values are:
0 - do not use all the managed system
resources
1 - use all the managed system resources
(this option is not valid for i5/OS
partitions on IBM eServer p5 servers)
min_mem
megabytes
desired_mem
megabytes
max_mem
megabytes

[proc_mode]



Valid values are:
ded - dedicated processors
shared - shared processors
[min_procs]
[desired_procs]
[max_procs]
[min_proc_units]
[desired_proc_units]
[max_proc_units]
[min_5250_cpw_percent]
Only valid for i5/OS partitions in
managed systems that support the
assignment of 5250 CPW percentages

[desired_5250_cpw_percent]
Only valid for i5/OS partitions in
managed systems that support the
assignment of 5250 CPW percentages
[max_5250_cpw_percent]
Only valid for i5/OS partitions in
managed systems that support the
assignment of 5250 CPW percentages
[sharing_mode]
Valid values are:
keep_idle_procs - valid with dedicated
processors
share_idle_procs - valid with dedicated
processors
cap - valid with shared processors
uncap - valid with shared processors
[uncap_weight]
[io_slots]
Comma separated list of I/O slots, with
each I/O slot having the following
format:

slot-DRC-index/slot-IO-pool-ID/
is-required

Both '/' characters must be present, but


optional values may be omitted. Optional
values are slot-IO-pool-ID.

Valid values for is-required:


0 - no
1 - yes

For example:
21030002/3/1 specifies an I/O slot with a
DRC index of 21030002, it is assigned to
I/O pool 3, and it is a required slot.
[lpar_io_pool_ids]
comma separated
load_source_slot

i5/OS only
DRC index of I/O slot, or virtual slot
number
[alt_restart_device_slot]
i5/OS only
DRC index of I/O slot, or virtual slot
number
console_slot
i5/OS only
DRC index of I/O slot, virtual slot
number, or the value hmc
[alt_console_slot]
i5/OS only
DRC index of I/O slot, or virtual slot
number
[op_console_slot]
i5/OS only
DRC index of I/O slot, or virtual slot
number
[auto_start]
Valid values are:
0 - off
1 - on
[boot_mode]
AIX, Linux, and virtual I/O server only
Valid values are:
norm - normal
dd - diagnostic with default boot list
ds - diagnostic with stored boot list
of - Open Firmware OK prompt
sms - System Management Services
[power_ctrl_lpar_ids | power_ctrl_lpar_names]
comma separated
[conn_monitoring]
Valid values are:
0 - off
1 - on
[hsl_pool_id]
i5/OS only
Valid values are:
0 - HSL OptiConnect is disabled
1 - HSL OptiConnect is enabled
[virtual_opti_pool_id]
i5/OS only

Valid values are:


0 - virtual OptiConnect is disabled
1 - virtual OptiConnect is enabled
[max_virtual_slots]
[virtual_eth_adapters]
Comma separated list of virtual ethernet
adapters, with each adapter having the
following format:
virtual-slot-number/is-IEEE/
port-vlan-ID/additional-vlan-IDs/



is-trunk/is-required

All 5 '/' characters must be present, but


optional values may be omitted. Optional
values are additional-vlan-IDs, and
is-trunk.

Valid values for is-IEEE, is-trunk, and


is-required:
0 - no
1 - yes

For example:
3/1/5/"6,7"/0/1
specifies a virtual ethernet adapter with
a virtual slot number of 3, is IEEE
802.1Q compatible, has a port virtual LAN
ID of 5, additional virtual LAN IDs of
6 and 7, it is not a trunk
adapter, and it is required.
[virtual_scsi_adapters]
Comma separated list of virtual SCSI
adapters, with each adapter having the
following format:

virtual-slot-number/client-or-server/
remote-lpar-ID/remote-lpar-name/
remote-slot-number/is-required

All 5 '/' characters must be present, but


optional values may be omitted. Optional
values for server adapters are
remote-lpar-ID, remote-lpar-name,
and remote-slot-number. Optional values
for client adapters are remote-lpar-ID or
remote-lpar-name (one of those values
is required, but not both).

Valid values for client-or-server:


client
server
i5/OS partitions on IBM eServer
i5 servers, and virtual I/O server
partitions only

Valid values for is-required:


0 - no
1 - yes

For example:
4/client/2//3/0
specifies a virtual SCSI client adapter
with a virtual slot number of 4, a
remote (server) partition ID of 2, a
remote (server) slot number of 3, and

it is not required.
[virtual_serial_adapters]
Comma separated list of virtual serial
adapters, with each adapter having the
following format:

virtual-slot-number/client-or-server/
supports-HMC/remote-lpar-ID/
remote-lpar-name/remote-slot-number/
is-required

All 6 '/' characters must be present, but


optional values may be omitted. Optional
values for server adapters are
supports-HMC, remote-lpar-ID,
remote-lpar-name, and remote-slot-number.
Optional values for client adapters are
remote-lpar-ID or remote-lpar-name (one

of those values is required, but not


both), and the supports-HMC value is
not allowed.

Valid values for client-or-server:


client
not valid for i5/OS partitions on
IBM eServer p5 servers
server
i5/OS and virtual I/O server
partitions only

Valid values for supports-HMC:


0 - no
1 - yes

Valid values for is-required:


0 - no
1 - yes

For example:
4/server/0////0
specifies a virtual serial server adapter
with a virtual slot number of 4, it does
not support an HMC connection, any client
adapter is allowed to connect to it, and
it is not required.
[sni_device_ids]
AIX, Linux, and virtual I/O server only

Comma separated list of Switch Network


Interface (SNI) adapter device IDs

Attribute names for system profiles:


name



name of the system profile to create
lpar_names | lpar_ids
comma separated
profile_names
comma separated

Brackets around an attribute name indicate that the


attribute is optional.

The -f and the -i options are mutually exclusive.

-i This option allows you to enter configuration data


on the command line, instead of using a file. Data
entered on the command line must follow the same
format as data in a file, and must be enclosed in
double quotes.

When this option is used, only a single system


resource can be created.

The -i and the -f options are mutually exclusive.

--help Display the help text for this command and exit.

EXAMPLES
Create an AIX or Linux partition:

mksyscfg -r lpar -m system1 -i "name=aix_lpar2,


profile_name=prof1,lpar_env=aixlinux,min_mem=256,
desired_mem=1024,max_mem=1024,proc_mode=ded,
min_procs=1,desired_procs=1,max_procs=2,
sharing_mode=share_idle_procs,auto_start=1,
boot_mode=norm,lpar_io_pool_ids=3,
"io_slots=21010003/3/1,21030003//0""

Create an i5/OS partition profile:

mksyscfg -r prof -m 9406-570*34134441 -i "name=prof2,


lpar_id=3,min_mem=512,desired_mem=512,max_mem=1024,
proc_mode=shared,min_procs=1,desired_procs=1,max_procs=2,
min_proc_units=0.1,desired_proc_units=0.5,max_proc_units=1.5,
sharing_mode=uncap,uncap_weight=128,auto_start=1,
"lpar_io_pool_ids=1,2",
"io_slots=2101001B/1/1,2103001B/2/1,2105001B//0",
load_source_slot=2101001B,console_slot=hmc,
max_virtual_slots=4,
"virtual_scsi_adapters=2/client/2//3/1,3/server////1""

Create partition profiles using the configuration data in


the file /tmp/profcfg:

mksyscfg -r prof -m system1 -f /tmp/profcfg

Create a system profile:

mksyscfg -r sysprof -m system1 -i "name=sysprof1,


"lpar_names=lpar1,lpar2","profile_names=prof1,prof1""



The rmsyscfg command definition
Example B-2 includes the rmsyscfg command definition.

Example: B-2 The rmsyscfg command definition


NAME
rmsyscfg - remove a system resource

SYNOPSIS
rmsyscfg -r {lpar | prof | sysprof} -m managed-system
[-n resource-name] [-p partition-name]
[--id partition-ID] [--help]

DESCRIPTION
rmsyscfg removes a partition, a partition profile, or a
system profile from the managed-system.

OPTIONS
-r The type of system resource to remove. Valid val-
ues are lpar for a partition, prof for a partition
profile, and sysprof for a system profile.

When a partition is removed, all of the partition


profiles that are defined for that partition are
also removed.

When a partition profile is removed, any system


profiles that contain just that one partition pro-
file are also removed.

-m The name of the managed system from which the sys-


tem resource is to be removed. The name may either
be the user-defined name for the managed system, or
be in the form tttt-mmm*ssssssss, where tttt is the
machine type, mmm is the model, and ssssssss is the
serial number of the managed system. The tttt-
mmm*ssssssss form must be used if there are multi-
ple managed systems with the same user-defined
name.

-n The name of the system resource to remove.

To remove a partition, you must either use this


option to specify the name of the partition to
remove, or use the --id option to specify the par-
tition's ID. The -n and the --id options are mutu-
ally exclusive when removing a partition.

To remove a partition profile or a system profile,


you must use this option to specify the name of the
profile to remove.

-p The name of the partition which has the partition


profile to remove. This option is only valid when

removing a partition profile.

To remove a partition profile, you must either use


this option to specify the name of the partition
which has the partition profile to remove, or use
the --id option to specify the partition's ID. The
-p and the --id options are mutually exclusive.

--id The partition's ID.

To remove a partition, you must either use this


option to specify the ID of the partition to
remove, or use the -n option to specify the
partition's name. The --id and the -n options are
mutually exclusive when removing a partition.

To remove a partition profile, you must either use


this option to specify the ID of the partition that
has the profile to remove, or use the -p option to
specify the partition's name. The --id and the -p
options are mutually exclusive when removing a par-
tition profile.

This option is not valid when removing a system


profile.

--help Display the help text for this command and exit.

EXAMPLES
Remove the partition partition5:

rmsyscfg -r lpar -m system1 -n partition5

Remove the partition with ID 5:

rmsyscfg -r lpar -m system1 --id 5

Remove the partition profile prof1 for partition lpar3:

rmsyscfg -r prof -m system1 -n prof1 -p lpar3

Remove the system profile sysprof1:

rmsyscfg -r sysprof -m 9406-520*34134441 -n sysprof1



The chsysstate command definition
Example B-3 includes the chsysstate command definition.

Example: B-3 The chsysstate command definition


NAME
chsysstate - change partition state or system state

SYNOPSIS
To power on a managed system:
chsysstate -m managed-system -r sys
-o {on | onstandby | onsysprof}
[-f system-profile-name]
[-k keylock-position]

To power off a managed system:


chsysstate -m managed-system -r sys
-o off [--immed]

To restart a managed system:


chsysstate -m managed-system -r sys
-o off --immed --restart

To rebuild a managed system or a managed frame:


chsysstate {-m managed-system | -e managed-frame}
-r {sys | frame} -o rebuild

To recover partition data for a managed system:


chsysstate -m managed-system -r sys -o recover

To set the keylock position for a managed system or a par-


tition:
chsysstate -m managed-system -r {sys | lpar}
-o chkey -k keylock-position
[{-n partition-name | --id partition-ID}]

To activate a partition:
chsysstate -m managed-system -r lpar -o on
{-n partition-name | --id partition-ID}
-f partition-profile-name
[-k keylock-position]
[-b boot-mode] [-i IPL-source]

To shut down or restart a partition:


chsysstate -m managed-system -r lpar
-o {shutdown | osshutdown | dumprestart |
retrydump}
{-n partition-name | --id partition-ID}
[--immed] [--restart]

To perform an operator panel service function on a parti-


tion:
chsysstate -m managed-system -r lpar
-o {dston | remotedstoff | remotedston |

iopreset | iopdump}
{-n partition-name | --id partition-ID}

To validate or activate a system profile:


chsysstate -m managed-system -r sysprof
-n system-profile-name
[-o on] [--continue] [--test]

To power off all of the unowned I/O units in a managed


frame:
chsysstate -e managed-frame -r frame -o unownediooff

DESCRIPTION
chsysstate changes the state of a partition, the managed-
system, or the managed-frame.

OPTIONS
-m The name of the managed system on which to perform
the operation. The name may either be the user-
defined name for the managed system, or be in the
form tttt-mmm*ssssssss, where tttt is the machine
type, mmm is the model, and ssssssss is the serial
number of the managed system. The tttt-
mmm*ssssssss form must be used if there are multi-
ple managed systems with the same user-defined
name.

This option is required when performing a parti-


tion, system profile, or managed system operation.
This option is not valid otherwise.

-e The name of the managed frame on which to perform


the operation. The name may either be the user-
defined name for the managed frame, or be in the
form tttt-mmm*ssssssss, where tttt is the type, mmm
is the model, and ssssssss is the serial number of
the managed frame. The tttt-mmm*ssssssss form must
be used if there are multiple managed frames with
the same user-defined name.

This option is required when performing a managed


frame operation. This option is not valid other-
wise.

-r The type of resource on which to perform the opera-


tion. Valid values are lpar for partition, sys for
managed system, sysprof for system profile, and
frame for managed frame.

-o The operation to perform. Valid values are:


on - activates a partition or a system profile,
or powers on the managed-system. When
powering on the managed-system,
partitions that are marked as auto start
and partitions that were running when the



system was powered off are activated.
onstandby - powers on the managed-system to
standby state.
onsysprof - powers on the managed-system
then activates a system profile. Only
those partitions in the system profile
are activated.
off - powers off the managed-system. If
the --immed option is specified, a fast
power off (operator panel function 8) is
performed, otherwise a normal power off is
performed. If both the --immed and the
--restart options are specified, a
restart (operator panel function 3) of the
managed-system is performed.
rebuild - rebuilds the managed-system or the
managed-frame.
recover - recovers partition data for the
managed-system by restoring the data
from the backup file on the HMC.
chkey - sets the keylock position for a
partition or the managed-system.
shutdown - shuts down a partition. If the
--immed option is specified, an
immediate shut down (operator panel
function 8) is performed, otherwise a
delayed shut down is performed. If both
the --immed and the --restart
options are specified, an immediate
restart (operator panel function 3) of
the partition is performed.
osshutdown - issues the AIX "shutdown" command
to shut down an AIX or virtual I/O server
partition. If the --immed option is
specified, the AIX "shutdown -F" command
is issued to immediately shut down the
partition. If the --restart option
is specified, the "r" option is included
on the AIX shutdown command to restart the
partition.
dumprestart - initiates a dump on the partition
and restarts the partition when the dump
is complete (operator panel function 22).
retrydump - retries the dump on the partition
and restarts the partition when the dump
is complete (operator panel function 34).
This operation is valid for i5/OS
partitions only.
dston - activates dedicated service tools for
the partition (operator panel function
21). This operation is valid for i5/OS
partitions only.
remotedstoff - disables a remote service session
for the partition (operator panel function
65). This operation is valid for i5/OS

partitions only.
remotedston - enables a remote service session
for the partition (operator panel function
66). This operation is valid for i5/OS
partitions only.
iopreset - resets or reloads the failed IOP
(operator panel function 67). This
operation is valid for i5/OS partitions
only.
iopdump - allows use of the IOP control storage
dump (operator panel function 70). This
operation is valid for i5/OS partitions
only.
unownediooff - powers off all of the unowned
I/O units in a managed frame.

-f When activating a partition, use this option to


specify the name of the partition profile to use.

When powering on a managed system with a system


profile, use this option to specify the name of the
system profile to use.

-k The keylock position to set. Valid values are man-


ual and norm for normal.

This option is required when setting the keylock


position for a partition or a managed system. This
option is optional when powering on a managed sys-
tem or activating a partition.

--immed
If this option is specified when powering off a
managed system, a fast power off is performed.

This option must be specified when restarting a


managed system.

If this option is specified when shutting down or


restarting a partition, an immediate shut down or
restart is performed.

--restart
If this option is specified, the partition or man-
aged system will be restarted.

-n When performing a system profile operation, use


this option to specify the name of the system pro-
file on which to perform the operation.

When performing a partition operation, use either


this option to specify the name of the partition on
which to perform the operation, or use the --id
option to specify the partition's ID. The -n and
the --id options are mutually exclusive for parti-



tion operations.

--id When performing a partition operation, use either


this option to specify the ID of the partition on
which to perform the operation, or use the -n
option to specify the partition's name. The --id
and the -n options are mutually exclusive for par-
tition operations.

-b The boot mode to use when activating an AIX, Linux,
or virtual I/O server partition. Valid values are
norm for normal, dd for diagnostic with default
boot list, ds for diagnostic with stored boot list,
of for Open Firmware OK prompt, or sms for System
Management Services.

-i The IPL source to use when activating an i5/OS par-
tition. Valid values are a, b, c, or d.

--test If this option is specified when performing a sys-
tem profile operation, the system profile is vali-
dated.

--continue
If this option is specified when activating a sys-
tem profile, remaining partitions will continue to
be activated after a partition activation failure
occurs.

--help Display the help text for this command and exit.

EXAMPLES
Power on a managed system and auto start partitions:

chsysstate -m 9406-520*10110CA -r sys -o on

Power on a managed system with a system profile:

chsysstate -m sys1 -r sys -o onsysprof -f mySysProf

Power off a managed system normally:

chsysstate -m sys1 -r sys -o off

Power off a managed system fast:

chsysstate -m sys1 -r sys -o off --immed

Restart a managed system:

chsysstate -m 9406-570*12345678 -r sys -o off --immed


--restart

Rebuild a managed system:

620 IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i
chsysstate -m 9406-570*12345678 -r sys -o rebuild

Recover partition data for a managed system:

chsysstate -m sys1 -r sys -o recover

Set the keylock position for a managed system:

chsysstate -m sys1 -r sys -o chkey -k manual

Activate i5/OS partition p1 using partition profile
p1_prof1 and IPL source b:

chsysstate -m sys1 -r lpar -o on -n p1 -f p1_prof1 -i b

Shut down the partition with ID 1:

chsysstate -m 9406-570*12345678 -r lpar -o shutdown --id 1

Issue the AIX shutdown command to immediately shut down
partition aix_p1:

chsysstate -m 9406-570*12345678 -r lpar -o osshutdown -n aix_p1 --immed

Immediately restart the partition with ID 1:

chsysstate -m 9406-570*12345678 -r lpar -o shutdown --id 1 --immed --restart

Enable a remote service session for the i5/OS partition
mylpar:

chsysstate -m sys1 -r lpar -o remotedston -n mylpar

Validate system profile sp1:

chsysstate -m sys1 -r sysprof -n sp1 --test

Validate then activate system profile sp1:

chsysstate -m sys1 -r sysprof -n sp1 -o on --test

Activate system profile mySysProf and continue activating
remaining partitions if a partition activation failure
occurs:

chsysstate -m 9406-570*12345678 -r sysprof -n mySysProf -o on --continue

Rebuild a managed frame:

chsysstate -e myFrame -r frame -o rebuild
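
A few additional illustrative invocations follow; the managed system and
partition names (sys1, p2, p3) and the partition ID are hypothetical. They
simply combine the options described above and should be verified against
the help text on your HMC level.

Activate AIX partition p2 using partition profile p2_prof1 and boot to
System Management Services:

chsysstate -m sys1 -r lpar -o on -n p2 -f p2_prof1 -b sms

Initiate a dump on the partition with ID 3 and restart it when the dump
completes:

chsysstate -m sys1 -r lpar -o dumprestart --id 3

Activate dedicated service tools for the i5/OS partition p3:

chsysstate -m sys1 -r lpar -o dston -n p3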

The lshwres command definition
Example B-4 includes the lshwres command definition.

Example: B-4 The lshwres command definition


NAME
lshwres - list hardware resources

SYNOPSIS
To list physical I/O resources:
lshwres -r io --rsubtype {unit | bus | slot |
iopool | taggedio} -m managed-system
[--level {pool | sys}] [-R]
[--filter "filter-data"]
[-F [attribute-names] [--header]] [--help]

To list virtual I/O resources:
lshwres -r virtualio --rsubtype {eth | hsl |
virtualopti | scsi | serial | slot}
-m managed-system
[--level {lpar | slot | sys}]
[--filter "filter-data"]
[-F [attribute-names] [--header]] [--help]

To list memory resources:
lshwres -r mem -m managed-system
--level {lpar | sys} [-R]
[--maxmem quantity] [--filter "filter-data"]
[-F [attribute-names] [--header]] [--help]

To list processing resources:
lshwres -r proc -m managed-system
--level {lpar | pool | sys} [-R]
[--procunits quantity]
[--filter "filter-data"]
[-F [attribute-names] [--header]] [--help]

To list Switch Network Interface (SNI) adapter resources:
lshwres -r sni -m managed-system
[--filter "filter-data"]
[-F [attribute-names] [--header]] [--help]

DESCRIPTION
lshwres lists the hardware resources of the managed-sys-
tem, including physical I/O, virtual I/O, memory, process-
ing, and Switch Network Interface (SNI) adapter resources.

OPTIONS
-r The type of hardware resources to list. Valid val-
ues are io for physical I/O, virtualio for virtual
I/O, mem for memory, proc for processing, and sni
for SNI adapter resources.

--rsubtype
The subtype of hardware resources to list. Valid
physical I/O resource subtypes are unit for I/O
units, bus for I/O buses, slot for I/O slots,
iopool for I/O pools, and taggedio for tagged I/O
resources. Valid virtual I/O resource subtypes are
eth for virtual ethernet, hsl for High Speed Link
(HSL) OptiConnect, virtualopti for virtual OptiCon-
nect, scsi for virtual SCSI, serial for virtual
serial, and slot for virtual slot resources.

This option is required when listing physical I/O
or virtual I/O resources. This option is not valid
when listing memory, processing, or SNI adapter
resources.

-m The name of the managed system which has the hard-
ware resources to list. The name may either be the
user-defined name for the managed system, or be in
the form tttt-mmm*ssssssss, where tttt is the
machine type, mmm is the model, and ssssssss is the
serial number of the managed system. The tttt-
mmm*ssssssss form must be used if there are multi-
ple managed systems with the same user-defined
name.

--level
The level of information to list. Valid values are
lpar for partition, pool for pool, slot for slot,
and sys for system.

This option is required when listing I/O pool
resources, virtual ethernet, serial, or slot
resources, or memory or processing resources.

Valid levels for I/O pool resources are pool or
sys. Valid levels for virtual ethernet resources
are lpar or sys. Valid levels for virtual serial
resources are lpar. Valid levels for virtual slot
resources are lpar or slot. Valid levels for mem-
ory resources are lpar or sys. Valid levels for
processing resources are lpar, pool, or sys.

-R Only list information for partitions with hardware
resources that can be restored due to a dynamic
logical partitioning (DLPAR) operation failure.

The rsthwres command can be used to restore those
hardware resources.

This option is only valid for listing physical I/O
slots, or partition level memory or processing
resources.

--maxmem
When this option is specified, the required minimum
memory amount needed for partitions to support the
maximum memory quantity specified is listed. All
memory quantities are in megabytes, and are a mul-
tiple of the memory region size for the managed-
system.

This information is useful for specifying memory
amounts in partition profiles.

This option is only valid when listing system level
memory resources.

--procunits
When this option is specified, the range of optimal
5250 CPW percentages for partitions assigned the
quantity of processing units specified is listed.
The quantity of processing units specified can have
up to 2 decimal places.

This information is useful when specifying the 5250
CPW percentages for partitions or partition pro-
files.

This option is only valid when listing system level
processing resources. Also, this option is only
valid when the managed-system supports the assign-
ment of 5250 CPW percentages to partitions.

--filter
The filter(s) to apply to the hardware resources to
be listed. Filters are used to select which hard-
ware resources of the specified type are to be
listed. If no filters are used, then all of the
hardware resources of the specified type will be
listed. For example, all of the physical I/O slots
on a specific I/O unit and bus can be listed by
using a filter to specify the I/O unit and the bus
which has the slots to list. Otherwise, if no fil-
ter is used, then all of the physical I/O slots in
the managed system will be listed.

The filter data consists of filter name/value
pairs, which are in comma separated value (CSV)
format. The filter data must be enclosed in double
quotes.

The format of the filter data is as follows:

"filter-name=value,filter-name=value,..."

Note that certain filters accept a comma separated
list of values, as follows:

""filter-name=value,value,...",..."

When a list of values is specified, the filter
name/value pair must be enclosed in double quotes.
Depending on the shell being used, nested double
quote characters may need to be preceded by an
escape character, which is usually a '\' character.

Unless otherwise indicated, multiple values can be
specified for each filter.

Valid filter names for this command:
buses
Specify I/O bus ID(s)
lpar_ids
Specify partition ID(s)
lpar_names
Specify partition user-defined name(s)
pools
Specify pool ID(s)
slots
Specify physical I/O slot DRC index(ices)
or virtual I/O slot number(s)
sni_device_ids
Specify SNI adapter device ID(s)
units
Specify I/O unit physical location
code(s)
vlans
Specify virtual LAN ID(s)

Valid filters with -r io --rsubtype unit:
units

Valid filters with -r io --rsubtype bus:
buses, units

Valid filters with -r io --rsubtype slot:
buses, lpar_ids | lpar_names, pools, slots,
units

Valid filters with -r io --rsubtype iopool --level
pool:
lpar_ids | lpar_names, pools

Valid filters with -r io --rsubtype taggedio:
lpar_ids | lpar_names

Valid filters with -r virtualio --rsubtype eth
--level lpar:
lpar_ids | lpar_names, slots, vlans

Valid filters with -r virtualio --rsubtype hsl:
lpar_ids | lpar_names, pools

Valid filters with -r virtualio --rsubtype virtu-
alopti:
lpar_ids | lpar_names, pools

Valid filters with -r virtualio --rsubtype scsi:
lpar_ids | lpar_names, slots

Valid filters with -r virtualio --rsubtype serial
--level lpar:
lpar_ids | lpar_names, slots

Valid filters with -r virtualio --rsubtype slot
--level lpar:
lpar_ids | lpar_names

Valid filters with -r virtualio --rsubtype slot
--level slot:
lpar_ids | lpar_names, slots

Valid filters with -r mem --level lpar:
lpar_ids | lpar_names

Valid filters with -r proc --level lpar:
lpar_ids | lpar_names

Valid filters with -r sni:
lpar_ids | lpar_names | sni_device_ids

-F A delimiter separated list of attribute names for
the desired attribute values to be displayed for
each hardware resource. If no attribute names are
specified, then values for all of the attributes
for each hardware resource will be displayed.

When this option is specified, only attribute val-
ues will be displayed. No attribute names will be
displayed. The attribute values displayed will be
separated by the delimiter which was specified with
this option.

This option is useful when only attribute values
are desired to be displayed, or when the values of
only selected attributes are desired to be dis-
played.

--header
Display a header record, which is a delimiter sepa-
rated list of attribute names for the attribute
values that will be displayed. This header record
will be the first record displayed. This option is
only valid when used with the -F option.

--help Display the help text for this command and exit.

EXAMPLES
List all I/O units on the managed system:

lshwres -r io --rsubtype unit -m system1

List all buses on I/O unit U787A.001.0395036:

lshwres -r io --rsubtype bus -m 9406-570*12345678 --filter "units=U787A.001.0395036"

List only the DRC index, description, and the owning par-
tition for each physical I/O slot on buses 2 and 3 of I/O
unit U787A.001.0395036:

lshwres -r io --rsubtype slot -m system1 --filter "units=U787A.001.0395036,"buses=2,3"" -F drc_index,description,lpar_name

List all I/O pools and the partitions and slots assigned
to each I/O pool:

lshwres -r io --rsubtype iopool -m system1 --level pool

List the tagged I/O devices for the i5/OS partition that
has an ID of 1:

lshwres -r io --rsubtype taggedio -m 9406-520*100103A --filter "lpar_ids=1"

List all virtual ethernet adapters on the managed system:

lshwres -r virtualio --rsubtype eth --level lpar -m system1

List all virtual SCSI adapters on the managed system, and
only display attribute values for each adapter, following
a header of attribute names:

lshwres -r virtualio --rsubtype scsi -m system1 -F --header

List all virtual slots for partition lpar1:

lshwres -r virtualio --rsubtype slot -m system1 --level slot --filter "lpar_names=lpar1"

List system level memory information:

lshwres -r mem -m 9406-570*98765432 --level sys

List recoverable memory information:

lshwres -r mem -m 9406-570*98765432 --level lpar -R

List memory information for partitions lpar1 and lpar2:

lshwres -r mem -m system1 --level lpar --filter ""lpar_names=lpar1,lpar2""

List only the installed and configurable processors on the
system, and separate the output values with a colon:

lshwres -r proc -m 9406-570*98765432 --level sys -F installed_sys_proc_units:configurable_sys_proc_units

List processing resources for all partitions:

lshwres -r proc -m system1 --level lpar

List all SNI adapters on the managed system:

lshwres -r sni -m system1
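
A few additional illustrative invocations follow; the managed system names,
memory quantity, and processing unit quantity are hypothetical. They use the
--maxmem and --procunits options as described above and should be verified
against the help text on your HMC level.

List the minimum memory required for partitions to support a maximum memory
of 4096 MB:

lshwres -r mem -m system1 --level sys --maxmem 4096

List the range of optimal 5250 CPW percentages for a partition assigned 1.50
processing units:

lshwres -r proc -m 9406-570*98765432 --level sys --procunits 1.50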

Related publications

We consider the publications that we list in this section particularly suitable for a more
detailed discussion of the topics that we cover in this book.

IBM Redbooks publications


For information about ordering these publications, see “How to get IBM Redbooks
publications” on page 630. Note that some of the documents referenced here might be
available in softcopy only.
 IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781
 IBM System Storage DS6000 Series: Copy Services in Open Environments, SG24-6783
 IBM System Storage DS8000 Architecture and Implementation, SG24-6786
 IBM System Storage DS8000: Copy Services in Open Environments, SG24-6788
 IBM i5 and iSeries System Handbook i5/OS Version 5, GA19-5486

Note: This material is updated regularly with IBM Redpaper publications.

 IBM i5, iSeries, and AS/400e System Builder IBM i5/OS Version 5 Release 3, GA24-2155

Note: This material is updated regularly with IBM Redpaper publications.

 iSeries in Storage Area Networks: A Guide to Implementing FC Disk and Tape with iSeries,
SG24-6220
 IBM eServer iSeries Migration: System Migration and Upgrades at V5R1 and V5R2,
SG24-6055
 IBM eServer iSeries Migration: A Guide to Upgrades and Migrations to System i5,
SG24-7200

Online resources
The following Web sites and URLs are also relevant as further information sources:
 IBM Systems Information Centers
http://publib.boulder.ibm.com/eserver/
 IBM TotalStorage Enterprise Server Introduction and Planning Guide
http://www-1.ibm.com/support/docview.wss?rs=503&context=HW26L&dc=DA400&q1=plann
ing&uid=ssg1S7000003&loc=en_US&cs=utf-8&lang=en
 IBM System Storage DS6000 Introduction and Planning Guide
http://www-1.ibm.com/support/docview.wss?rs=1112&context=HW2A2&dc=DA400&q1=ssg1
*&uid=ssg1S7001072&loc=en_US&cs=utf-8&lang=en

 IBM System Storage DS6000 Information Center
http://publib.boulder.ibm.com/infocenter/ds6000ic/index.jsp
 IBM System Storage DS6000 Technical Notes
http://www-1.ibm.com/support/search.wss?q=ssg1*&tc=HW2A2&rs=1112&dc=DB500+D800+
D900+DA900+DA800+DA600+DB400+D100&dtm
 IBM System Storage DS8000: Introduction and Planning Guide
http://www-1.ibm.com/support/docview.wss?rs=1113&context=HW2B2&dc=DA400&q1=ssg1
*&uid=ssg1S7001073&loc=en_US&cs=utf-8&lang=en
 IBM System Storage DS8000 Information Center
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp
 IBM System Storage DS8000 User’s Guide
http://www-1.ibm.com/support/docview.wss?rs=1113&context=HW2B2&dc=DA400&q1=ssg1
*&uid=ssg1S7001163&loc=en_US&cs=utf-8&lang=en
 IBM System Storage DS8000 Technical Notes
http://www-1.ibm.com/support/search.wss?dc=DB500+D800+D900+DA900+DA800+DA600+DB
400+D100&tc=HW2B2&rs=1113&dtm
 VPN Implementation (IBM System Storage DS8000)
http://www-1.ibm.com/support/docview.wss?rs=1113&context=HW2B2&dc=DB500&uid=ssg
1S1002693&loc=en_US&cs=utf-8&lang=en

How to get IBM Redbooks publications


You can search for, view, or download IBM Redbooks, Redpapers, Hints and Tips, draft
publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at
this Web site:
ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services

Index
disaster recovery 64
Numerics disk arm 115, 121, 128
12X loop 113 i5/OS workload 121
disk configuration 221–222, 231, 246–248, 283, 299, 383
A disk drive 122, 126, 196, 203
access density 125 disk operations per second 122
application disk drive modules 122
response time 119 disk drive set 15
array 40 Disk Magic 115, 120, 130, 132, 137, 139–143, 147–148,
array site 39 151–152, 154–155, 158–159, 163, 169, 176–178,
ASP 127, 130, 143, 224 182–183, 187, 189, 198–199, 201–202
Asynchronous PPRC. See Global Mirror cache values 189
attribute name 607, 626–627 size DS8000 140
auxiliary storage pool. See ASP disk pool 86, 226–227, 230, 233–235
disk response time 119
disk service time 147, 156, 174–175, 188–189, 203
B disk space 116, 122, 128, 130, 132, 138–139, 148–151,
Backup Recovery 7 154–155, 157, 159–160, 167–168, 173, 184, 188–189,
Backup Recovery and Media Services (BRMS) 7, 544 200, 203
batch job, duration 119 disk subsystem 119, 136, 144, 146–150, 155, 161, 164,
batch window 137 177, 180–183, 199–200
blocksize 117, 147, 197, 202 ESS1 184, 188
business continuity 10, 28 icon 145
Resiliency Family 17 iSeries icon 164
model 167
disk unit 5, 7, 81, 86, 196, 220–221, 225, 231, 236, 245,
C 278, 285, 304, 308, 321, 329–331, 346, 349–350, 362,
Capacity Magic 102 377, 380–381, 386–387
example 103 connection 86
clone 62 data function 339, 358, 367
i5/OS 62 recovery 221, 245, 305
clone system 555–556, 563–564 display
codes C6xxyy00 573 disk configuration
Commercial Processing Workload (CPW) 136 capacity 222, 232
Compute Intensive Workload (CIW) 136 status 248
Consistency Group displayed array sites 398
commands 18 D-mode IPL 216, 572
consistency group 544 DRC index 608, 625, 627
consistency groups 52 DS 443
copy disk unit data 300 DS CLI 17, 71, 211, 391–395, 404, 410–411, 414–422,
Copy Service 5–7, 391, 396, 401, 404, 422 425, 427–429, 431, 562, 589–591
CPW percentage 608, 624 command 393–394, 396, 404, 414–415, 418
customer replaceable unit (CRU) 456 command frame insert 415
customer setup unit (CSU) 455 command framework 395
Description 394
D form i5/OS. This 419
DASD 544 framework 395, 419
space 544 Insert CD 405
total copy 544 installation CD 405
Data Set FlashCopy 17, 29 interactive command framework 422
DDM 122 message 395
Dedicated Service Tool. See DST mkuser 395
device adapter 34 profile 395–396, 414, 419
direct access storage device. See DASD response 398
direct attach of external storage, example 56 script 394, 405, 414

script mode 393 service 19
single-shot command 422 setup 19
user 394 storage capacity 15
DS CLI command supported environment 16
frame 415 z/OS Global Mirror
frame insert 415 z/OS Metro/Global Mirror 19
framework 395 DS8000 server
mkuser 418 0 485
DS command-line interface. See DS CLI 1 485
DS management console. See DS MC resource affinity convention 485
DS Open API 27 DS8000 Storage Manager 440, 465–466, 468, 472–473,
DS Open application programming interface. See DS 475–482, 487–499, 501–503, 505, 507–508, 511–517
Open API GUI 465
DS Storage Manager 17, 20 Installation 465
real-time (online) configuration 27 optional separate installation 465
simulated (offline) configuration 27 sign 466
DS6000 443, 446, 448 DS8100 21
business continuity 10, 28 DS8300 21
compared to DS4000 series 11 storage system LPAR 16
compared to DS8000 11 DST 7, 86, 216, 218, 220, 245, 288, 290, 306, 312, 316,
configuration flexibility 32 330
dynamic LUN 32 environment 245
expansion enclosure 26 main menu 219
hardware overview 24 DST. See Dedicated Service Tool
information life cycle management 10 duration of batch job 119
infrastructure simplification 10 dynamic LUN 32
large LUN and CKD volume support 32 Dynamic Volume Expansion 44
resiliency 31
service and setup 31
simplified LUN masking 33 E
storage capacity 26 Enterprise Storage Server (ESS) 590
supported environment 27 ESS
switched FC-AL 25 compared to DS8000 10
volume creation and deletion 32 ESS 800 176, 180–181, 187, 391
DS6000 problem determination copy services 391
data 584, 586 correct values 181
DS6000 Storage Manager 439–441, 449–456, 460, expansion enclosure 26
473–474, 577–581, 585–589 extended long busy timeout 52
DS6800 Extended Remote Copy see z/OS Global Mirror
controller enclosure 24 extent 99
interoperability 31 extent pool 42, 121, 127, 154, 169–170, 209, 392,
major features 25 398–399, 483–485, 488–491, 493–494, 496
DS8000 484–485 available amount 485
common set of functions 20 available extents 400
compared to DS6000 11 complete remaining space 496
compared to ESS 10 large number 399
disk drive set 15 needed capacity 169
DS8100 21 shown status 400
DS8300 21 extent rotation 44
Fibre Channel disk drives 14 extents 41
FlashCopy 17 external LSU 211–212, 240, 321, 339, 358
Global Mirror path protection 240
hardware overview 13 external storage 120–121, 207–208, 211, 240, 278, 569
host adapter 14 7xx and prior only supported SCSI attachment 208
interoperability 19 main memory 123
Metro Mirror sizing iSeries workload 120
POWER5 14 supported environment 207
Resiliency Family 17 System i5 customer plans 125
scalability 21
series overview 12

F host adapter 82, 118
failover PCI space 118
to backup 70 planned number 149
failoverpprc command 70 host connection 45
FB 483 host port 150, 167–168, 181, 184–185, 200, 475,
FC-AL 514–515
switched 14 host system 475, 492, 499, 502, 508, 510–513, 515–517
Fibre Channel 121, 123, 208–209, 211, 572–573, 592 I/O port usage 510
adapter 115, 121, 125, 127, 129, 148, 151, 159, 162, i5 host systems 510
167, 176, 180–181, 185, 200 HSL loop 82
attachment 208
Disk Controller 208, 212, 309, 332, 351, 359, 371, I
387 I/O
I/O adapter 239 latency 65
port 121 operation 117–119, 125
request proceeds 118 operations 116
Switched-Fabric (FCSF) 501 property 213
valid link 592 rate 129
Fixed Block (FB) 478, 489 request 117–118
fixed block storage 209 tower 81, 112, 239
FlashCopy 17, 392, 402, 428–430, 554, 562, 564 I/O adapter (IOA) 81, 86, 196, 211, 214, 311, 313, 334,
data sets 29 353, 373, 383, 387, 506, 508–509, 513, 572–573, 593
i5/OS cloning 62 I/O per second 124–125, 129, 159, 162, 189
inband commands 18, 29 I/O pool 623, 627
incremental 29 3 608
multiple relationship 29 I/O port 499, 501, 510
FlashCopy to a Remote Mirror primary 17 physical location 500
I/O processor (IOP) 4, 81, 86, 208, 241–242, 279, 572,
G 593
geographic mirroring maximal I/O per second 124
with external disk 71 I/O unit 619, 624–625, 627
Global Copy 18, 30 i5/OS 137, 163–166, 169, 172
Global Mirror 30 cloning 62
asymmetrical replication 69 current cache usage 166
examples 67 current configuration 166
full system replication 68 extent pool 169
switchable IASP replication 69 mirroring 81
graphical user interface (GUI) 559 performance reports 137
separate extent pool 169
workload 116, 163
H following configurations 167
HA 14 sizing DS 116
hard disk drive (HDD) 157 i5/OS Performance Tools 132
hardware management 211, 287, 323, 332, 351, 359, i5/OS system 319
371, 570, 574 i5/OS workload
Hardware Management Console . See HMC disk operations 122
hardware overview 24 disk operations per second 122
hardware resource 560, 622–623 IASP (independent auxiliary storage pool) 224
Hardware Service Manager 86 IBM System Storage
HDD (hard disk drive) 157 DS CLI 7
utilization 157–158, 161 DS command-line interface 391
high availability 64 DS Storage Manager 7
High Availability Business Partner (HABP) 6 DS Storage Manager server 439
High Availability Solution Manager 4 DS6000 439, 469
High Availability Solutions Manager 63, 90 DS6000 series 469
High Link (HSL) 559, 609, 622–623, 625 DS6000/DS8000 storage subsystem 590
HMC 211–212, 216, 278–279, 287, 309, 312, 334, DS8000 574, 592
353–354, 363, 374–375, 382, 392, 394–396, 404, 440, DS8000 series 469
465, 556–557, 560, 570, 572, 574, 591, 609, 611–612, DS8000 storage unit 469
618 Enterprise Storage Server 5, 590

subsystem 5 Licensed Internal Code. See LIC
IBM System Storage DS Storage Manager load source 6, 211, 213–214, 216, 218–220, 222,
server 439 245–248, 279, 284, 291, 297, 300, 310, 321, 324, 331,
IBM System Storage DS Storage Manager. See DS Stor- 333, 335, 339, 342, 349–350, 352, 354, 358, 360,
age Manager 363–365, 367, 369, 372, 374, 380, 383, 386, 389,
IBM System Storage Solution 5 401–403, 494, 506, 558, 570, 572–573
IBM Systems Director Navigator for i5/OS 4 IOP 211, 216, 240–241, 243, 500
IFS directory 404, 409, 419 Unit 328
inband commands 29 load source unit. See LSU
Incremental FlashCopy 17, 29 logical unit 43, 86, 220, 230, 239
independent auxiliary storage pool (IASP) 5–7, 130, 224 logical unit number. See LUN
information life cycle management 10 logical volume 43, 82, 118, 126, 207, 209–210, 220, 231,
infrastructure simplification 10 392, 483–484, 492, 494–495, 502
initial program load (IPL) 86, 210, 214, 220, 245, Logical volumes 42
247–248 LPAR 557, 559–560, 612, 615–616, 621
initial program load. See IPL LSU 208–211, 214, 216, 218, 240, 242, 244–245,
input/output. See I/O 247–248, 278, 285, 294, 296, 299, 305, 308–309, 311,
integrated file system (IFS) 404, 409, 411–412, 414–415, 313, 316, 321, 326, 330, 332, 334, 339, 345, 349, 351,
418–419, 422, 425, 430 353, 358–359, 361, 367, 371, 373, 383, 386–387, 492,
internal disc 278 538, 573
internal disk 139, 210 LUN 42, 62, 86, 118, 121, 125–126, 152, 158, 187, 203,
current cache usage 155 210, 240–241, 243, 286, 303, 305, 321, 329–330, 336,
current read cache percentage 157 339, 348–349, 355, 358, 365, 369, 379–380, 387, 431,
current workload 145 554, 563
host workload 139 masking 33
I/O rate 179
internal LSU 211, 285, 309, 321
RAID disk controller 211 M
inter-switch link (ISL) 82 Machine Type and Serial (MTS) 392
IOA 5760 129 main memory 116–117, 542
IOA. See I/O adapter (IOA) block od data 123
IOP. See I/O processor (IOP) manual IPL 214, 287, 323, 361, 373, 389
IOP-based Fibre Channel 129 memory page 116
IOP-less Fibre Channel 4, 78, 124–126 Metro Mirror 18, 30
IP address 394–396, 406, 415, 419, 450, 452, 459–460, examples 65
465, 557 full system replication 65
IPL 86, 210, 214, 220, 245, 247–248, 287–288, 307, 316, switchable IASP replication 66
323, 331, 338, 349, 357, 361, 367, 373, 380, 386–387, migration
389, 542, 555, 563–565, 616, 620–621 external mirrored load source to boot load source 61
IPL mode 307, 331, 350, 381 internal drives to external storage including load
IPL source 350, 381 source 60
iSeries 5, 81–83, 86, 207–210, 220, 224–225, 230–231, mirrored load source unit
233–234 unprotected LUNs 494
Model 825 176 Model 825 176
iSeries Copy Services Toolkit 90 multifunction IOP 247
iSeries Navigator 220, 224, 230, 233, 236 multipath
connection 86
I/O 81
J volume 231, 233, 236
Java Virtual Machine (JVM) 404, 422 Multiple Relationship FlashCopy 17, 29
journal receiver 544

N
K needed number 122
Keylock position 215, 288, 324, 342, 362, 374, 618, 621 non-configured unit 220, 222, 228, 245
null Empty slot 561–562

L
large LUN and CKD volume support 32 O
level LPAR 625–628 operating system 245, 391, 541, 592
LIC 218 automatic installation 245

operator panel raw capacity 93
function 618 Real-time manager 451, 454–455, 458, 460, 470,
service function 616 472–473, 475, 481–482, 486, 503, 505, 508, 513
OS/400 recovery point of objective 65
mirroring 210 Redbooks Web site 630
V5R3 83 Contact us xiv
Remote Mirror 392, 431, 434–438
Remote Mirror and Copy function. See RMC
P repository volume 43
page fault 117 response time
partition application 119
name 212, 309, 332, 341, 351, 359, 371 disk 119
profile name 212, 309 RMC
property 213–215 role swap 71
partition name 557 RPO 65
partition profile rsubtype slot 560, 625–627
name 332, 341
prof1 615
password file 395, 418 S
PCI I/O card placement rules 112 San
PCI-X IOP 208, 211 permanent data 388
peak period 137 SAN LUNs 62, 340, 554
performance expectations 111 SAN Volume Controller 15
performance report 132, 138 Save while Active 542
reported values 187 scalability 21
Performance Tools 189 SDD (Subsystem Device Driver) 81
PFA 31 Secure Socket Layer (SSL) 404
physical capacity 93 server layer 392
physical I/O servicable event 455, 574–575
resource 622 service processor
resource subtypes 623 correct level 278
slot 624 service processor (SP) 278
slot DRC index 625 service tool 86, 221, 245
planned number 150, 168 Simple Network Mail Protocol (SNMP) 455, 460
Point-in-time Copy see PTC simplified LUN masking 33
port group 45 single point 82, 321, 339
Power On Self Test (POST) 592 single-level storage 116
POWER5 14 sizing 111
PPRC Extended Distance. See Global Copy SMI-S 4, 15
PPRC license 389 SNI adapter
PPRC-XD. See Global Copy resource 622
press F4 414, 422, 427 source system 544
Problem Management Hardware (PMH) 575 space efficient FlashCopy 43, 87
Problem Report 336, 355, 365 space efficient logical volume 43
production partition 428–429, 431 spare disk drive 93
production system 541, 543, 555, 563–564 sparing rule 93
protected LUN 210 SRC B2003200 572–573
protected mirror 100 SSPC 15
PTC 28 Standby Capacity on Demand. See Standby CoD
PWRDWNSYS Option 288 Standby CoD
storage capacity 15, 26
storage facility 35
Q Storage Hardware Management Console (S-HMC) 14,
quiesce for CopyServices 4 19, 34
storage image 392, 396–397, 419, 472, 474–475, 483,
R 487, 493, 499, 503, 515–516
RAID-5 rank 152 serial number 396
rank 41 Storage Management Console (SMC) 441, 450, 584,
rank group 127 586, 588–589, 591
rankgroup 42 storage pool striping 44, 100
storage system logical partitions see storage system

LPAR virtual address 116
storage system LPAR 16 virtual private network (VPN) 587–589
DS8300 16 virtual SCSI
future directions 21 adapter 558, 627
storage unit 392, 397, 439, 442, 449, 451–452, 454–455, client adapter 610
458, 460, 465, 469–470, 473–475, 487, 493, 499, 505, virtualio 622, 625–627
515, 577, 585, 588, 591 volume creation and deletion 32
Serial Number information 470 volume group 45, 127, 241, 397, 404, 428–429, 432,
StorWatch Expert 133 502, 504–508, 573
strip size 41 volumegroup 403–404, 438
STRSST command 280
Subsystem Device Driver (SDD) 81
SVC 15 W
Switch Network Interface (SNI) 611, 622, 625–626, 628 Web-based System Manager (WEBSM) 574
switched FC-AL 14, 25 world wide port name (WWPN) 45, 509, 514, 573
switchover write penalty 122
from production to backup 69
swpprc command 70 X
Synchronous PPRC see Metro Mirror XRC see z/OS Global Mirror
system ASP 81, 210, 221–222, 286, 316, 321, 339, 369,
386–387
System i Copy Services Toolkit 63, 71 Z
System i5 z/OS Global Mirror 18, 30
all disk in external storage 56 z/OS Metro/Global Mirror 19
external storage HA environments 64
System Licensed Internal Code 117
System Management Console (SMC) 392, 394, 396, 404
system profile 606, 611
Attribute names 611
mySysProf 621
operation 617, 620
sp1 621
sysprof1 615
system report 130, 132, 137, 141, 176, 189, 194–196,
201
expert cache storage pools 201
interactive cpu utilization 195
System Service Tools 221
System Storage Productivity Center 15
System Storage Solution 5–6
Managing eServer i5 Availability 6

T
tem resource 614
TPC 468
transfer size 117
typing DS CLI
interactive DS CLI mode 394

U
unprotected LUN 210, 387, 431
Use DS CLI
interactive command mode 394
user profile 544

V
Valid filter 625–626
Valid value 607–610, 617, 619–620, 623

Back cover

IBM i and IBM System Storage:
A Guide to Implementing External Disks on IBM i

Take advantage of DS8000 and DS6000 with IBM i

Learn about the storage performance and HA enhancements in IBM i 6.1

Understand how to migrate from internal to external disks

This IBM Redbooks publication provides a broad discussion of a new
architecture of the IBM System Storage DS6000 and DS8000 and how
these products relate to System i servers. The book includes
information for both planning and implementing IBM System i with the
IBM System Storage DS6000 or DS8000 series where you intend to
externalize the i5/OS loadsource disk unit using boot from SAN. It also
covers migration from System i internal disks to IBM System Storage
DS6000 and DS8000.

This book is intended for IBMers, IBM Business Partners, and
customers in the planning and implementation of external disk
attachments to System i servers.

The newest release of this book accounts for the following new
functions of IBM System i POWER6, i5/OS V6R1, and IBM System
Storage DS8000 Release 3:
 System i POWER6 IOP-less Fiber Channel
 i5/OS V6R1 multipath load source support
 i5/OS V6R1 quiesce for Copy Services
 i5/OS V6R1 High Availability Solution Manager (HASM)
 i5/OS V6R1 SMI-S support
 i5/OS V6R1 multipath resetter HSM function
 System i HMC V7
 DS8000 R3 space efficient FlashCopy
 DS8000 R3 storage pool striping
 DS8000 R3 System Storage Productivity Center (SSPC)
 DS8000 R3 Storage Manager GUI

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support
Organization. Experts from IBM, Customers and Partners from around the
world create timely technical information based on realistic scenarios.
Specific recommendations are provided to help you implement IT solutions
more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-7120-01 ISBN 073843132X