
VIOS for AIX Storage Administrators

Speaker Name: Janel Barfield

PowerHA Senior Software Engineer


email: jgbarfie@us.ibm.com
© 2009 IBM Corporation
Agenda
• Virtual storage configuration concepts
• Describe and configure virtual SCSI
  – Configure new file-backed virtual devices
  – Configure NPIV resources
• Answer questions as time permits
  – email me with any questions: jgbarfie@us.ibm.com



Virtual Storage Configuration Concepts
• Virtual SCSI devices can be backed by many types of physical storage device:
  – physical volume
  – logical volume
  – file
  – optical device
  – tape
• Virtual optical devices can also be created
  – Used by the client like physical optical drives, but implemented as files on the VIO server
• Virtual I/O Server version 2.1 introduces N_Port ID Virtualization (NPIV)
  – Allows virtual client visibility to the physical SAN storage



Virtual SCSI Overview
[Diagram: a Virtual I/O Server and three client partitions connected through the Hypervisor to physical storage.]
Legend:
  PHY – physical adapter
  S   – VSCSI server virtual adapter
  C   – VSCSI client virtual adapter
  VTD – virtual target device
The red connections show two clients accessing the same physical storage (A) via two different server adapters (B) and virtual target devices (D).
The blue connection shows multiple target devices (D) attached to a single server adapter (B).



Virtual SCSI Configuration (1 of 3)
1) Define the virtual SCSI server adapter in the VIO Server partition and the client adapter in the AIX or Linux partition

2) Check the availability of virtual SCSI server adapters on the VIO Server:

$ lsdev -virtual
name    status     description
vasi0   Available  Virtual Asynchronous Services Interface (VASI)
vhost0  Available  Virtual SCSI Server Adapter
vsa0    Available  LPAR Virtual Serial Adapter



Virtual SCSI Configuration (2 of 3)
3) On the VIO Server, define storage resources

To create a volume group:
$ mkvg [ -f ] [ -vg VolumeGroup ] PhysicalVolume ...

To create a logical volume:
$ mklv [ -lv NewLogicalVolume | -prefix Prefix ] VolumeGroup Size [ PhysicalVolume ... ]

To create a storage pool:
$ mksp [ -f ] StoragePool PhysicalVolume ...

To create a backing device from available space in a storage pool:
$ mkbdsp [ -sp StoragePool ] Size [ -bd BackingDevice ] -vadapter ServerVirtualSCSIAdapter



Virtual SCSI Configuration (3 of 3)
4) On the VIO Server, define virtual target devices
$ mkvdev -vdev TargetDevice -vadapter VirtualServerAdapter [ -dev DeviceName ]
For example:
$ mkvdev -vdev hdisk3 -vadapter vhost0
vtscsi0 Available
$ mkvdev -vdev lv10 -vadapter vhost0
vtscsi1 Available
$ mkvdev -vdev cd0 -vadapter vhost0
vtopt0 Available
Check the target devices with lsdev:
$ lsdev -virtual
name     status     description
vtscsi0  Available  Virtual Target Device - Disk
vtscsi1  Available  Virtual Target Device - Logical Volume
vtopt0   Available  Virtual Target Device - Optical Media

5) Boot the client or run cfgmgr to use new virtual devices
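The mkvdev invocations above follow a fixed pattern, so a batch of mappings can be scripted rather than typed one by one. A minimal sketch; the device names and the mkvdev_cmd helper are illustrative, mirroring the slide's examples:

```python
# Sketch: build the mkvdev command lines for a batch of backing-device
# mappings. The (backing device, vhost adapter) pairs below are the
# examples from the slide, not taken from a live system.
mappings = [
    ("hdisk3", "vhost0"),  # physical volume backing device
    ("lv10",   "vhost0"),  # logical volume backing device
    ("cd0",    "vhost0"),  # optical backing device
]

def mkvdev_cmd(backing_dev, vadapter, dev_name=None):
    """Return the mkvdev command line for one virtual target device."""
    cmd = f"mkvdev -vdev {backing_dev} -vadapter {vadapter}"
    if dev_name:
        cmd += f" -dev {dev_name}"  # optional explicit VTD name
    return cmd

commands = [mkvdev_cmd(b, v) for b, v in mappings]
for c in commands:
    print(c)
```

The generated lines can be pasted into the VIOS restricted shell or fed to it over ssh.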


View Configuration with lsmap
Use lsmap from the VIO Server to verify the mapping of virtual targets:
$ lsmap -vadapter vhost0
SVSA            Physloc                             Client Partition ID
--------------- ----------------------------------- -------------------
vhost0          U9111.520.10F191F-V3-C6             0x00000003

VTD                   vtscsi0
LUN                   0x8100000000000000
Backing device        hdisk3
Physloc               U787A.001.DNZ00G0-P1-T10-L8-L0

VTD                   vtscsi1
LUN                   0x8200000000000000
Backing device        lv10
Physloc

VTD                   vtopt0
LUN                   0x8300000000000000
Backing device        cd0
Physloc

(The SVSA line shows the server slot ID in its physical location code and the client LPAR ID; each VTD stanza shows the LUN ID, backing device, and physical location code.)

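The lsmap output is stanza-oriented, which makes it easy to turn into records for auditing scripts. A minimal sketch; the sample text mirrors the slide, and the field names simply follow the lsmap labels:

```python
# Sketch: parse lsmap -vadapter output into a list of VTD records so a
# script can check which backing devices are mapped where.
sample = """\
SVSA            Physloc                             Client Partition ID
--------------- ----------------------------------- -------------------
vhost0          U9111.520.10F191F-V3-C6             0x00000003

VTD                   vtscsi0
LUN                   0x8100000000000000
Backing device        hdisk3
Physloc               U787A.001.DNZ00G0-P1-T10-L8-L0

VTD                   vtscsi1
LUN                   0x8200000000000000
Backing device        lv10
Physloc
"""

def parse_lsmap(text):
    """Return a list of dicts, one per VTD stanza."""
    vtds, current = [], None
    for line in text.splitlines():
        if line.startswith("VTD"):
            current = {"vtd": line.split()[1]}
            vtds.append(current)
        elif current is not None and line.startswith("LUN"):
            current["lun"] = line.split()[1]
        elif current is not None and line.startswith("Backing device"):
            fields = line.split()
            current["backing"] = fields[2] if len(fields) > 2 else None
    return vtds

vtds = parse_lsmap(sample)
```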


View Configuration with lshwres
Use lshwres from the HMC for a system-wide view of the virtual I/O configuration (or use the HMC GUI):
hscroot@skylab-hmc:~> lshwres -r virtualio --rsubtype scsi -m skylab
lpar_name=VIOS,lpar_id=1,slot_num=7,state=1,is_required=1,adapter_type=server,remote_lpar_id=4,remote_lpar_name=node3,remote_slot_num=6,"backing_devices=drc_name=U787F.001.DPM0ZFL-P1-T10-L4-L0/log_unit_num=0x8100000000000000/device_name=hdisk1,drc_name=U787F.001.DPM0ZFL-P1-T10-L5-L0/log_unit_num=0x8200000000000000/"
lpar_name=node3,lpar_id=4,slot_num=6,state=1,is_required=1,adapter_type=client,remote_lpar_id=1,remote_lpar_name=VIOS,remote_slot_num=7,backing_devices=none

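Each lshwres record is a comma-separated list of key=value pairs, with double quotes protecting values that themselves contain commas (such as backing_devices), so Python's csv module can split it. A minimal sketch using the client-partition record from the slide:

```python
import csv

# Sketch: split one lshwres record line into a dict of attributes.
record = ('lpar_name=node3,lpar_id=4,slot_num=6,state=1,is_required=1,'
          'adapter_type=client,remote_lpar_id=1,remote_lpar_name=VIOS,'
          'remote_slot_num=7,backing_devices=none')

def parse_lshwres(line):
    """Return {key: value} for a single lshwres record."""
    fields = next(csv.reader([line]))  # honors the double-quote convention
    return dict(f.split("=", 1) for f in fields)

attrs = parse_lshwres(record)
```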


Virtual Target Device Example

[Diagram: a POWER6 system with a VIOS partition and client partition LPAR1. Physical resources on the VIOS — SAN disks (hdisk5, hdisk6, hdisk7) behind Fibre Channel adapters fcs0/fcs1, internal storage behind sas0, and optical device cd0 — back the virtual target devices vtscsi0, vtscsi1, and vtopt0 on server adapter vhost0. Through the POWER Hypervisor, LPAR1 accesses them over client adapter vscsi0 as hdisk0, hdisk1 (holding clientVG with cl_lv), and cd0.]



File-Backed Virtual Devices
• File-backed (FB) virtual device types:
  – File-backed disk devices
    • Files created in storage pools can be used as an hdisk on the client
  – File-backed optical media devices
    • Create a Virtual Media Repository, which can be stocked with DVD-ROM/RAM media
    • Clients can use images stored in the repository as cd0 devices with media
• FB virtual device characteristics:
  – Read-only FB devices can be shared by multiple clients
  – Bootable FB devices appear in SMS
  – Reside in FB storage pools
    • Mount directory = /var/vio/storagepools/<FBSP_Name>
    • LV_NAME = <FBSP_Name>
  – Granularity as small as 1 MB or as large as the parent logical volume

FB virtual devices are new as of Virtual I/O Server V1.5



Creating File-Backed Virtual Disks
• Files on the Virtual I/O Server can be used as backing storage:
  1. Create a volume group (mkvg) or storage pool (mksp -f)
  2. Create a FB disk storage pool (mksp -fb) inside the volume group/storage pool
  3. Create a device in the pool (mkbdsp) and map it to a vadapter
  4. The client associated with that vadapter sees the new FB device as an hdisk

[Diagram: a volume group/storage pool containing hdisk(s) holds a FB disk storage pool, which contains the FB virtual disks exposed as target devices.]
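The four steps above reduce to a fixed three-command sequence on the VIOS, which can be generated for repeatable provisioning. A minimal sketch; the pool, disk, and adapter names are the illustrative values used on the next slide, not real devices:

```python
# Sketch: generate the command sequence for provisioning one
# file-backed disk, following the numbered steps from the slide.
def fb_disk_commands(pv, vg, fb_pool, fb_pool_size, disk, disk_size, vadapter):
    """Return the mksp/mksp -fb/mkbdsp commands, in order."""
    return [
        f"mksp -f {vg} {pv}",                                   # step 1: storage pool
        f"mksp -fb {fb_pool} -sp {vg} -size {fb_pool_size}",    # step 2: FB pool inside it
        f"mkbdsp -sp {fb_pool} {disk_size} -bd {disk} -vadapter {vadapter}",  # step 3: device + mapping
    ]

cmds = fb_disk_commands("hdisk1", "newvg", "fbpool", "10g",
                        "fb_disk1", "30m", "vhost3")
for c in cmds:
    print(c)
```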


Create FB Virtual Disks Example (1 of 2)
• Create a new volume group/logical volume storage pool (newvg):
$ mkvg -vg newvg hdisk1        (or: mksp -f newvg hdisk1)

• Create a new 10 GB FB storage pool (fbpool) inside newvg:
$ mksp -fb fbpool -sp newvg -size 10g
fbpool
File system created successfully.
10444276 kilobytes total disk space.
New File System size is 20971520

• Create a new 30 MB file device (fb_disk1), create the VTD, and map it to a vhost adapter:
$ mkbdsp -sp fbpool 30m -bd fb_disk1 -vadapter vhost3
Creating file "fb_disk1" in storage pool "fbpool".
Assigning file "fb_disk1" as a backing device.
vtscsi3 Available
fb_disk1

The resulting VTD is named vtscsi3 and is mapped to vhost3.



Create FB Virtual Disks Example (2 of 2)
• View the mapping with the new backing device:

$ lsmap -vadapter vhost3
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- -------------------
vhost3          U8203.E4A.10CD1F1-V1-C15      0x00000000

VTD                   vtscsi3
Status                Available
LUN                   0x8100000000000000
Backing device        /var/vio/storagepools/fbpool/fb_disk1
Physloc



Create FB Virtual Optical Device (1 of 2)
• Create a volume group/logical volume storage pool (medrep):
$ mkvg -vg medrep hdisk4        (or: mksp -f medrep hdisk4)

• Create a 10 GB Virtual Media Repository in the LV pool:
$ mkrep -sp medrep -size 10G
Virtual Media Repository Created
Repository created within "VMLibrary_LV" logical volume

• Create media (aixopt1) in the repository from a file:
  – Media can be blank, loaded from a cd# device, or created from a file
$ mkvopt -name aixopt1 -file dvd.product.iso -ro



Create FB Virtual Optical Device (2 of 2)
• View the repository and its contents:
$ lsrep
Size(mb)  Free(mb)  Parent Pool  Parent Size  Parent Free
10198     6532      medrep       69888        59648

Name     File Size  Optical  Access
aixopt1  3666       None     ro

• Create a FB virtual optical device (the new VTD, vtopt0) and map it to a vhost adapter:
$ mkvdev -fbo -vadapter vhost4
vtopt0 Available

• Load the image into the media device (use the unloadopt command to unload):
$ loadopt -vtd vtopt0 -disk aixopt1 -ro



Viewing FB Configuration from the HMC
HMC command line example:

hmc:~> lshwres -m hurston -r virtualio --rsubtype scsi
lpar_name=VIOS,lpar_id=1,slot_num=16,state=1,is_required=0,adapter_type=server,remote_lpar_id=any,remote_lpar_name=,remote_slot_num=any,"backing_devices=""0x8100000000000000//""""/var/vio/VMLibrary/aixopt1"""""""
. . .
FB Device Command Examples (1 of 2)
• List the repository and any contents:
$ lsrep
Size(mb)  Free(mb)  Parent Pool  Parent Size  Parent Free
10198     6532      medrep       69888        59648

Name     File Size  Optical  Access
aixopt1  3666       vtopt0   ro

• List the storage pools (notice both LVPOOL and FBPOOL types):
$ lssp
Pool    Size(mb)  Free(mb)  Alloc Size(mb)  BDs  Type
rootvg  69888     44544     128             1    LVPOOL
NewVG   69888     59648     64              0    LVPOOL
medrep  69888     59648     64              0    LVPOOL
fbpool  10199     6072      64              2    FBPOOL

• List volume groups/storage pools (LVPOOL type only):
$ lsvg
rootvg
NewVG
medrep

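The lssp output is a simple whitespace-separated table, so a script can pick out, say, free space per pool type when monitoring FB pools. A minimal sketch; the sample rows mirror the slide, and the column positions follow the lssp header:

```python
# Sketch: parse lssp output and report free space for pools of a
# given type (LVPOOL or FBPOOL).
sample = """\
Pool    Size(mb)  Free(mb)  Alloc Size(mb)  BDs  Type
rootvg  69888     44544     128             1    LVPOOL
NewVG   69888     59648     64              0    LVPOOL
medrep  69888     59648     64              0    LVPOOL
fbpool  10199     6072      64              2    FBPOOL
"""

def pools_by_type(text, pool_type):
    """Return {pool_name: free_mb} for pools of the given type."""
    result = {}
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields and fields[-1] == pool_type:
            result[fields[0]] = int(fields[2])  # Free(mb) column
    return result

fb_free = pools_by_type(sample, "FBPOOL")
```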


FB Device Command Examples (2 of 2)
• List LVPOOL details:
$ lssp -detail -sp NewVG
Name    PVID              Size(mb)
hdisk3  000cd1f195f987df  69888

• List FBPOOL details:
$ lssp -bd -sp fbpool
Name      Size(mb)  VTD      SVSA
fb_disk1  30        vtscsi3  vhost3
fb_disk2  4096      vtscsi4  vhost3

• Show all mounts, including FB devices:
$ mount
node mounted mounted over vfs date options
-------- --------------- --------------- ------ ------------ ---------------
/dev/hd4 / jfs2 Apr 18 13:01 rw,log=/dev/hd8
/dev/hd2 /usr jfs2 Apr 18 13:01 rw,log=/dev/hd8
/dev/hd9var /var jfs2 Apr 18 13:01 rw,log=/dev/hd8
/dev/hd3 /tmp jfs2 Apr 18 13:01 rw,log=/dev/hd8
/dev/hd1 /home jfs2 Apr 18 13:01 rw,log=/dev/hd8
/proc /proc procfs Apr 18 13:01 rw
/dev/hd10opt /opt jfs2 Apr 18 13:01 rw,log=/dev/hd8
/dev/fbpool /var/vio/storagepools/fbpool jfs2 Apr 28 12:04 rw,log=INLINE
/dev/VMLibrary_LV /var/vio/VMLibrary jfs2 Apr 28 14:36 rw,log=INLINE



File-Backed Virtual Devices Example
• Configure a file-backed virtual disk and a file-backed virtual optical device

[Diagram: on the VIOS, an LV storage pool (stpool1, on hdisk1) contains a FB storage pool (fbpool1, holding fb_disk1 and fb_disk2) and a Virtual Media Repository (medrep, holding AIX53_iso and AIX61_iso images). fb_disk1 backs vtscsi2 and a repository image backs vtopt1, both on vhost1. Through the POWER Hypervisor, client LPAR1 sees them over vscsi1 as hdisk2 (holding cl_mksysb) and cd1.]



N_Port ID Virtualization (NPIV)
• NPIV is an industry-standard technology that allows a single physical Fibre Channel adapter to present multiple unique worldwide port names (WWPNs)
• Assign at least one 8 Gigabit PCI Express Dual Port Fibre Channel Adapter to the Virtual I/O Server
• Create a virtual client and server Fibre Channel adapter pair in each partition profile through the HMC (or IVM)
  – Always a one-to-one relationship
• Each virtual Fibre Channel server adapter on the Virtual I/O Server partition connects to one virtual Fibre Channel client adapter on a virtual I/O client partition
  – Each virtual Fibre Channel client adapter receives a pair of unique WWPNs
  – The pair is critical, and both must be zoned (the 2nd WWPN is used for Live Partition Mobility)
• Virtual Fibre Channel server adapters are mapped to physical ports of the physical Fibre Channel adapter on the VIO server
• Using your SAN switch vendor's tools, zone the NPIV-enabled switch so that the WWPNs the HMC creates for the virtual Fibre Channel client adapters are in a zone with the WWPNs of your storage device
  – Just like for a physical storage environment



NPIV Requirements

• POWER6 hardware
• A minimum system firmware level of EL340_039 for the IBM Power 520 and Power 550, and EM340_036 for the IBM Power 560 and IBM Power 570
• A minimum of one 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (Feature Code 5735)
• An NPIV-enabled SAN switch
  – Only the first SAN switch attached to the Fibre Channel adapter in the Virtual I/O Server needs to be NPIV capable; other switches in your SAN environment do not
• Software:
  – HMC V7.3.4, or later
  – Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later
  – AIX 5.3 TL9, or later
  – AIX 6.1 TL2, or later
  – SDD 1.7.2.0 + PTF 1.7.2.2
  – SDDPCM 2.2.0.0 + PTF v2.2.0.6
  – SDDPCM 2.4.0.0 + PTF v2.4.0.1



NPIV Configuration Basics

1. Create virtual Fibre Channel adapters from the HMC for the VIO server and client partitions
   • Creating the client adapter generates a pair of unique WWPNs for the virtual client adapter
   • WWPNs are based on a unique 6-digit prefix that comes with the managed system, which includes 32,000 pairs of WWPNs that are not reused (you have to purchase more if you run out)
2. Map the virtual Fibre Channel server adapters on the VIO server to the physical ports of the physical Fibre Channel adapter with the vfcmap command on the VIO server
3. Zone and map the WWPNs of the client virtual Fibre Channel adapters to the correct LUNs from the SAN switch and storage manager

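To make the WWPN-pair idea concrete: IBM virtual WWPNs begin with the c0:50:76 prefix, and the slide's example c0:50:76:00:0a:fe:00:14 fits a layout of a 6-hex-digit system prefix followed by a 16-bit counter. The HMC's actual allocation scheme is not documented here, so this packing is an illustrative assumption only:

```python
# Sketch: one plausible layout for the pool of client WWPN pairs.
# The c0:50:76 OUI prefix matches the slide's example WWPNs; the
# 6-hex-digit system prefix and 16-bit counter are assumptions.
def wwpn(prefix_hex, n):
    """Format the n-th WWPN under a 6-hex-digit managed-system prefix."""
    value = (0xC05076 << 40) | (int(prefix_hex, 16) << 16) | n
    raw = f"{value:016x}"
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

def wwpn_pair(prefix_hex, k):
    """The k-th pair: two consecutive WWPNs (the 2nd supports Live Partition Mobility)."""
    return wwpn(prefix_hex, 2 * k), wwpn(prefix_hex, 2 * k + 1)

print(wwpn_pair("000afe", 10))
```

A 16-bit counter yields 65,536 WWPNs, i.e. roughly the 32,000 non-reused pairs the slide mentions.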


SAN Switch Configuration for NPIV Support

• On the SAN switch, two things need to be done before it can be used for NPIV:
  1. Update the firmware to a minimum level of Fabric OS (FOS) 5.3.0. To check the level of Fabric OS on the switch, log on to the switch and run the version command
  2. Enable the NPIV capability on each port of the SAN switch with the portCfgNPIVPort command (e.g., to enable NPIV on port 16: portCfgNPIVPort 16,1)
     – The portcfgshow command lists information for all ports



Creating Virtual Fibre Channel Adapters
• Create a virtual Fibre Channel server adapter
• Create a virtual Fibre Channel client adapter
• These dialogs look very much like the dialogs used to create virtual SCSI adapters



Create Mapping from Virtual to Physical Fibre Channel Adapters on the VIOS (1 of 2)
• The lsdev -dev vfchost* command lists all available virtual Fibre Channel server adapters in the VIO server:
$ lsdev -dev vfchost*
name      status     description
vfchost0  Available  Virtual FC Server Adapter

• The lsdev -dev fcs* command lists all available physical Fibre Channel adapters in the VIO server:
$ lsdev -dev fcs*
name  status     description
fcs2  Available  8Gb PCI Express Dual Port FC Adapter
fcs3  Available  8Gb PCI Express Dual Port FC Adapter

• Run the lsnports command to check the NPIV readiness of the Fibre Channel adapter and the SAN switch (fabric should be set to 1):
$ lsnports
name  physloc                     fabric tports aports swwpns awwpns
fcs3  U789D.001.DQDYKYW-P1-C6-T2  1      64     63     2048   2046



Create Mapping from Virtual to Physical Fibre Channel Adapters on the VIOS (2 of 2)
• Map the virtual Fibre Channel server adapter to the physical Fibre Channel adapter with the vfcmap command:
$ vfcmap -vadapter vfchost0 -fcp fcs3

• List the mappings with the lsmap -npiv command:
$ lsmap -npiv -vadapter vfchost0
Name      Physloc                   ClntID  ClntName  ClntOS
========= ========================= ======= ========= =======
vfchost0  U9117.MMA.101F170-V1-C31  3       AIX61     AIX
Status:LOGGED_IN
FC name:fcs3     FC loc code:U789D.001.DQDYKYW-P1-C6-T2



Create Zoning in the SAN Switch for the Client (1 of 2)
• Get the WWPN of the virtual Fibre Channel client adapter created in the virtual I/O client partition. From the HMC, look at the virtual adapter properties.

• Log on to your SAN switch and create a new zoning configuration or customize an existing one
  – The zoneshow command, available on the IBM 2109-F32 switch, lists the existing zones
  – To add the WWPN c0:50:76:00:0a:fe:00:14 to the zone named vios1, run the command:
    zoneadd "vios1", "c0:50:76:00:0a:fe:00:14"
  – To save and enable the new zoning, run the cfgsave and cfgenable npiv commands

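Since both WWPNs of each client pair must be zoned, it can help to script the zoneadd/cfgsave/cfgenable sequence. A minimal sketch of a generator; the zone and configuration names are the slide's examples, and the commands follow the Brocade-style syntax shown above:

```python
# Sketch: emit the zoning commands for a list of client WWPN pairs.
# Both WWPNs of each pair are added so Live Partition Mobility works.
def zoning_commands(zone, config, wwpn_pairs):
    """Return zoneadd lines for every WWPN, then cfgsave/cfgenable."""
    cmds = []
    for primary, secondary in wwpn_pairs:
        cmds.append(f'zoneadd "{zone}", "{primary}"')
        cmds.append(f'zoneadd "{zone}", "{secondary}"')
    cmds += ["cfgsave", f"cfgenable {config}"]
    return cmds

cmds = zoning_commands("vios1", "npiv",
                       [("c0:50:76:00:0a:fe:00:14", "c0:50:76:00:0a:fe:00:15")])
for c in cmds:
    print(c)
```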


Create Zoning in the SAN Switch for the Client (2 of 2)

• With the zoneshow command, you can check whether the added WWPN is active:
zoneshow
Defined configuration:
 cfg:  npiv   vios1; vios2
 zone: vios1  20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:18;
              c0:50:76:00:0a:fe:00:14
 zone: vios2  c0:50:76:00:0a:fe:00:12; 20:43:00:a0:b8:11:a6:62

Effective configuration:
 cfg:  npiv
 zone: vios1  20:32:00:a0:b8:11:a6:62
              c0:50:76:00:0a:fe:00:18
              c0:50:76:00:0a:fe:00:14
 zone: vios2  c0:50:76:00:0a:fe:00:12
              20:43:00:a0:b8:11:a6:62

• After you have finished with the zoning, map the LUN device(s) to the WWPN from the SAN storage manager application



Viewing NPIV Storage Access from the Client
• List all virtual Fibre Channel client adapters in the virtual I/O client partition:
# lsdev -l fcs*
fcs0 Available 31-T1 Virtual Fibre Channel Client Adapter

• Disks attached through the virtual adapter are visible with lspv:
# lspv
hdisk0  00c1f170e327afa7  rootvg  active
hdisk1  00c1f170e170fbb2  None
hdisk2  none              None

• View paths to the virtual disks with lspath:
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk1 vscsi0
Enabled hdisk0 vscsi1
Enabled hdisk2 fscsi0

• Use the mpio_get_config command to get more detailed information. For example:
# mpio_get_config -A
Storage Subsystem worldwide name: 60ab800114632000048ed17e
Storage Subsystem Name = 'ITSO_DS4800'
hdisk   LUN #  Ownership      User Label
hdisk2  0      A (preferred)  NPIV_AIX61

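The lspath output above is easy to tally from a script, for example to flag disks that have only a single path. A minimal sketch; the sample lines mirror the slide:

```python
# Sketch: count Enabled paths per disk from lspath output and flag
# any disk with a single path (a redundancy gap).
sample = """\
Enabled hdisk0 vscsi0
Enabled hdisk1 vscsi0
Enabled hdisk0 vscsi1
Enabled hdisk2 fscsi0
"""

def paths_per_disk(text):
    """Return {hdisk: number of Enabled paths}."""
    counts = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 3 and fields[0] == "Enabled":
            counts[fields[1]] = counts.get(fields[1], 0) + 1
    return counts

counts = paths_per_disk(sample)
single = sorted(d for d, n in counts.items() if n == 1)
```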


Implementing Redundancy with NPIV

• You can create multiple paths from a LUN in the SAN to a virtual client via multiple virtual Fibre Channel adapters
• You can create multiple paths from a LUN in the SAN to an AIX client using a combination of virtual and physical Fibre Channel adapters
• Set the path priority, hcheck_interval, and hcheck_mode for disks and paths using MPIO

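The health-check attributes mentioned above are set per disk with chdev, which is again a fixed pattern worth scripting across many disks. A minimal sketch; hcheck_interval and hcheck_mode are standard AIX MPIO disk attributes, but the disk names and values here are examples, not recommendations:

```python
# Sketch: build the chdev commands for tuning MPIO health checking
# on a set of disks. Values are illustrative placeholders.
def mpio_tuning_cmds(disks, interval=60, mode="nonactive"):
    """Return one chdev command per disk."""
    return [
        f"chdev -l {d} -a hcheck_interval={interval} -a hcheck_mode={mode}"
        for d in disks
    ]

cmds = mpio_tuning_cmds(["hdisk2", "hdisk3"])
for c in cmds:
    print(c)
```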


Conclusion
• There are many options available to provide virtual storage to AIX clients on POWER5 and POWER6 systems
  – Virtual SCSI devices (supported on POWER5 and POWER6)
  – Virtual Fibre Channel adapters (supported on POWER6)
• Consult the PowerVM Virtualization Managing and Monitoring Redbook (SG24-7590) for detailed information about common use cases and configuration details:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247590.html?OpenDocument


