IBM Hybrid Systems Technical Lead, NA IMT West; CISSP. Email: jrfyffe@us.ibm.com. Last update: June 15, 2012
Session objectives
- Level-set the audience on the current zEnterprise server offerings.
- Review recently announced capabilities (GA2).
- Provide a forum for questions.
Achieved a positive two-year compound growth rate for the 6th consecutive quarter as of 1Q12.
Progress to date: over 5,000 servers consolidated; the current migration pace is 2,000 servers per year.
IBM will consolidate and virtualize thousands of server images onto IBM System z mainframes, yielding substantial savings in energy, software, and systems support costs: 80% less energy and 85% less floor space, enabled by System z virtualization capability.
- Energy savings: more than 20,000 megawatt-hours per year
- Reduction in floor space: 47,000 square feet
- Sysadmin efficiency: 100 images per administrator vs. 30 before
- Consolidation ratios: Intel x3850 - 15; Power: p5 - 65, POWER7 - 130; System z: z10 - 130, z196 - 200
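To make those ratios concrete, here is a back-of-envelope sketch (Python; the inputs are the figures from this slide, and treating every workload as consolidatable at the stated ratio is a simplifying assumption):

```python
# Back-of-envelope math from the slide's consolidation figures.
# Simplifying assumption: every workload consolidates at the stated ratio.
servers = 5_000                               # servers consolidated so far
per_admin_before, per_admin_after = 30, 100   # images per administrator

admins_before = servers / per_admin_before
admins_after = servers / per_admin_after
print(f"admins: {admins_before:.0f} -> {admins_after:.0f} "
      f"({1 - admins_after / admins_before:.0%} fewer)")   # ~70% fewer

# Consolidation ratios (distributed servers per footprint) from the slide
ratios = {"x3850": 15, "POWER5": 65, "POWER7": 130, "z10": 130, "z196": 200}
for platform, ratio in ratios.items():
    print(f"{platform}: {servers / ratio:.0f} footprints for {servers} servers")
```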
Architecture evolution:
- 1960s: S/360 - 24-bit addressing
- 1970s: S/370 - virtual addressing
- 1980s: 370/XA and 370/ESA - 31-bit addressing
- 1990s: ESA/390 - Sysplex, binary floating point
- 2000s: z/Architecture - 64-bit addressing
z196 overview
- Machine Type 2817; 5 models: M15, M32, M49, M66, and M80
- Processor Units (PUs): 5.2 GHz, ~1200 MIPS uniprocessor, ~50 BIPS
- Increased cache: L1 per PU core (64 KB I-cache + 128 KB D-cache); L2 per PU core (1.5 MB); L3 shared by 4 PUs per chip (24 MB)
- 110+ new instructions; out-of-order execution
- 20 (24 for M80) PU cores per book (96 max); up to 14 SAPs per system; 2 spares designated per system
- Depending on the hardware model, up to 15, 32, 49, 66, or 80 PU cores available for characterization: CPs, IFLs, ICFs, zAAPs, zIIPs, SAPs
- 3 sub-capacity points; sub-capacity available for up to 15 CPs
- Memory (RAIM: Redundant Array of Independent Memory): system minimum of 32 GB; up to 768 GB per book; up to 3 TB per system and up to 1 TB per LPAR; fixed HSA; standard 32/64/96/112/128/256 GB increments
- I/O: up to 48 I/O interconnects per system @ 6 GBps each; up to 4 Logical Channel Subsystems (LCSSs); STP optional (no ETR)
- Water cooling and overhead cabling options
Processor frequency by generation:
- 1997 G4: 300 MHz
- 1998 G5: 420 MHz
- 1999 G6: 550 MHz
- 2000 z900: 770 MHz
- 2003 z990: 1.2 GHz
- 2005 z9 EC: 1.7 GHz
- 2008 z10 EC: 4.4 GHz
- 2010 z196: 5.2 GHz
- G4: first full-custom CMOS S/390
- G5: IEEE-standard BFP; branch target prediction
- G6: copper technology (Cu BEOL)
- z900: full 64-bit z/Architecture
- z990: superscalar CISC pipeline
- z9 EC: system-level scaling
- z10 EC: architectural extensions
- z196: additional architectural extensions and new cache structure
IBM System z Balanced System Comparison for High End Servers
Server   Processors  System I/O bandwidth*  Memory   PCI (uniprocessor)
z900     16-way      24 GB/sec              64 GB    300
z990     32-way      -                      256 GB   -
z9 EC    54-way      -                      512 GB   -
z10 EC   64-way      288 GB/sec             1.5 TB** -
z196     80-way      384 GB/sec             3 TB**   1202

* Servers exploit a subset of their designed I/O capability.
** Up to 1 TB per LPAR.
PCI = Processor Capacity Index.
z114 overview
- Machine Type 2818; 2 models: M05 and M10
- 3.8 GHz; 26-3139 MIPS (uniprocessor speed 782 MIPS)
- Single frame, air cooled; non-raised-floor option available; overhead cabling and DC power options
- Processor Units (PUs): 7 PU cores per processor drawer (one drawer for M05, two for M10); up to 2 SAPs per system, standard; 2 spares designated for Model M10
- Depending on the hardware model, up to 5 or 10 PU cores available for characterization: Central Processors (CPs), Integrated Facility for Linux (IFLs), Internal Coupling Facilities (ICFs), System z Application Assist Processors (zAAPs), System z Integrated Information Processors (zIIPs), and optional additional System Assist Processors (SAPs)
- 130 capacity settings
- Memory: up to 256 GB per system including HSA; system minimum 8 GB (M05) or 16 GB (M10); 8 GB HSA, separately managed; RAIM standard; maximum for customer use 248 GB (M10); increments of 8 or 32 GB
- I/O: support for non-PCIe channel cards; introduction of the PCIe channel subsystem; up to 64 PCIe channel cards; up to 2 Logical Channel Subsystems (LCSSs); STP optional (no ETR)
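The 130 capacity settings follow from 26 subcapacity levels (A through Z) across one to five CPs; a quick sketch of the arithmetic (the letter-plus-CP-count naming is the conventional notation):

```python
# z114 capacity settings: 26 subcapacity levels (A..Z) x 1-5 CPs = 130.
# MIPS per setting come from IBM's LSPR data, not from this sketch;
# the slide's range runs from A01 (~26 MIPS) to Z05 (~3139 MIPS).
levels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
settings = [f"{level}{cps:02d}" for level in levels for cps in range(1, 6)]
assert len(settings) == 130
print(settings[0], "...", settings[-1])   # A01 ... Z05
```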
Business-class generations:
- 1999: Multiprise 3000
- 2002: z800 - full 64-bit z/Architecture
- 2004: z890 - superscalar CISC pipeline
- 2006: z9 BC - system-level scaling
- 2008: z10 BC - architectural extensions; higher-frequency CPU (3.5 GHz)
- 2011: z114 - additional architectural extensions and new cache structure (3.8 GHz)
System comparisons
Server   CP engines  I/O bandwidth  Memory   PCI (uniprocessor)
z800     -           6 GB/sec       32 GB    170
z890     -           -              -        344
z9 BC    4-way       -              64 GB    474
z10 BC   5-way       -              256 GB   673
z114     5-way       72 GB/sec      256 GB   -

Notes: 1. Capacity shown is for CPs only. 2. z9 BC, z10 BC, and z114 can have additional PUs which can be used as specialty engines.
I/O infrastructure
New PCIe-based I/O infrastructure New PCIe I/O drawer
Increased port granularity Designed for improved power and bandwidth compared to I/O cage and I/O drawer
Storage
New PCIe-based FICON Express8S features; ESCON Statement of Direction
Networking
New PCIe-based OSA-Express4S features
Coupling
New 12x InfiniBand and 1x InfiniBand features (HCA3-O fanouts): 12x InfiniBand - improved service times when using the 12x IFB3 protocol; 1x InfiniBand - increased port count
OSA feature support on GA2:
- OSA-Express4S (new build): 10 GbE and GbE; SX and LX
- OSA-Express2 (carry forward only): GbE, 1000BASE-T
- PSC (carry forward or new build; no MES add)
OSA-Express4S
10 GbE LR and SR; GbE SX and LX
[Card diagram: HBA ASIC and flash memory; ports are LX or SX.] Two channels of LX or SX (no mix); small form factor pluggable (SFP) optics allow a concurrent repair/replace action for each SFP.
I/O driver benchmark: I/Os per second, 4k block size, channel 100% utilized.

Feature (servers)                  Native FICON  With zHPF
ESCON (z10)                        1,200         -
FICON Express2/Express4 (z10)      14,000        31,000
FICON Express8 (z196, z10)         20,000        52,000
FICON Express8S (z196, z114)       20,000        92,000
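The relative zHPF gains implied by the chart are easy to compute; a small sketch using the values transcribed above:

```python
# IOPS values transcribed from the benchmark chart above
# (4k block size, channel 100% utilized).
iops = {
    "FICON Express8":  {"native": 20_000, "zHPF": 52_000},
    "FICON Express8S": {"native": 20_000, "zHPF": 92_000},
}
for feature, v in iops.items():
    gain = v["zHPF"] / v["native"] - 1
    print(f"{feature}: zHPF delivers {gain:.0%} more IOPS than native FICON")
# FICON Express8: 160% more; FICON Express8S: 360% more
```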
I/O driver benchmark: MB per second, full-duplex, large sequential read/write mix. FICON Express8S with zHPF (z196, z114) reaches 1600 MBps, a 108% increase over FICON Express8 with zHPF (770 MBps on z196, z10).
OSA-Express4S GbE (card diagram: PCIe interface, IBM ASIC, FPGA):
- CHPID type: OSD (OSN not supported)
- Single-mode (LX) or multimode (SX) fiber
- Two ports of LX or two ports of SX; 1 PCHID/CHPID
- Designed to reduce the minimum round-trip networking time between z196/z114 systems (reduced latency)
- Designed to improve round-trip time at the TCP/IP application layer:
  - OSA-Express3 and OSA-Express4S 10 GbE: 45% improvement compared to OSA-Express2 10 GbE
  - OSA-Express3 and OSA-Express4S GbE: 45% improvement compared to OSA-Express2 GbE
- Designed to improve throughput (mixed inbound/outbound), OSA-Express3 and OSA-Express4S 10 GbE:
  - 1.0 GBps @ 1492-byte MTU; 1.1 GBps @ 8992-byte MTU
  - 3-4 times the throughput of OSA-Express2 10 GbE
  - 0.90 of Ethernet line speed sending outbound 1506-byte frames
  - 1.25 of Ethernet line speed sending outbound 4048-byte frames
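Part of the MTU effect is simple framing arithmetic: fixed per-frame overhead amortizes over more payload at larger MTUs. A sketch, assuming plain IPv4/TCP headers with no options (the overhead constants are assumptions for illustration, not OSA measurements):

```python
# Payload efficiency vs. MTU. Assumes IPv4 + TCP with no options and
# standard Ethernet overhead; illustrative only, not OSA measurements.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + interframe gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP headers, bytes

def payload_fraction(mtu: int) -> float:
    """Fraction of the raw line rate carrying application payload."""
    return (mtu - IP_TCP_HEADERS) / (mtu + ETH_OVERHEAD)

for mtu in (1492, 8992):
    print(f"MTU {mtu}: ~{payload_fraction(mtu):.1%} of line rate is payload")
# MTU 1492: ~94.9%; MTU 8992: ~99.1%
```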
OSA-Express4S vs. OSA-Express3 throughput (MBps; four chart panels):
- 615 (OSA-E3) to 1120 (OSA-E4S): 80% increase
- 1180 (OSA-E3) to 1680 (OSA-E4S): 40% increase
- 680 (OSA-E3) to 1180 (OSA-E4S): 70% increase
- 1240 (OSA-E3) to 2080 (OSA-E4S): 70% increase

Notes: AWM on z/OS; z/OS is doing checksum; 1 megabyte per second (MBps) is 1,048,576 bytes per second; MBps represents payload throughput (does not count packet and frame headers).
HCA3-O fanout for 12x InfiniBand coupling links:
- CHPID type CIB; improved service times with the 12x IFB3 protocol
- Two ports per feature; fiber optic cabling up to 150 meters
- Supports connectivity to HCA2-O (no connectivity to System z9 HCA1-O)
- Link data rate of 6 GBps

HCA3-O LR fanout for 1x InfiniBand coupling links:
- CHPID type CIB; four ports per feature
- Fiber optic cabling: 10 km unrepeated, 100 km repeated
- Supports connectivity to HCA2-O LR
- Link data rate server-to-server: 5 Gbps; with WDM: 2.5 or 5 Gbps
Up to 16 CHPIDs across 2 ports*. Two protocols: 1. 12x IFB: HCA3-O to HCA3-O or HCA2-O. 2. 12x IFB3: improved service times for HCA3-O to HCA3-O.
12x IFB3 service times are designed to be 40% faster than 12x IFB
12x IFB3 protocol activation requirements: four or fewer CHPIDs per HCA3-O port. If more than four CHPIDs are defined per port, the CHPIDs use the 12x IFB protocol and run at 12x IFB service times (see the sketch after the note below).
* Performance considerations may reduce the number of CHPIDs per port.
Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload.
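The activation rule is mechanical enough to express in a few lines; a minimal sketch (the function and its inputs are illustrative, not an IBM interface):

```python
# Hypothetical helper expressing the 12x IFB3 activation rule above.
# Not an IBM API; names and structure are illustrative only.
def port_protocol(chpids_on_port: int, peer_is_hca3_o: bool) -> str:
    if not 1 <= chpids_on_port <= 16:
        raise ValueError("an HCA3-O feature supports up to 16 CHPIDs across 2 ports")
    if peer_is_hca3_o and chpids_on_port <= 4:
        return "12x IFB3"   # improved service times (designed ~40% faster)
    return "12x IFB"        # >4 CHPIDs on the port, or an HCA2-O peer

print(port_protocol(4, peer_is_hca3_o=True))   # 12x IFB3
print(port_protocol(5, peer_is_hca3_o=True))   # 12x IFB
```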
Leverage the latest operating systems to exploit the full value of the z114 and z196
z/OS Version 1 Release 13; z/VM and Linux on System z; z/VSE Version 5.1
z/OS Version 1 Release 13:
- Predictive Failure Analysis and Runtime Diagnostics help provide early warning of certain system issues before they become obvious
- Updates to shorten the batch window, simplify batch programming, and give more flexibility in deploying batch applications
- Enhancements to improve I/O performance for z/OS UNIX workloads in a Parallel Sysplex
- Improved backup capability and system responsiveness with less-disruptive backups
- XML performance improvements for complex documents
- Continued investment in simplification with z/OS Management Facility
- Support for new encryption and compliance standards and keys
z/VM and Linux on System z:
- Server and application consolidation on System z using Linux on System z and z/VM is the industry leader in large-scale, cost-efficient virtual server hosting
- zEnterprise extends the choice of integrated workloads through blades on the zBX
- The z114 lowers the entry cost to get started with the Enterprise Linux Server
- Faster cores and a bigger system cache on the z196 let you do even more with less when running Linux on System z and z/VM
- Integrated blades on the zBX will offer an added dimension for workload optimization, including applications on Windows
z/VSE Version 5.1:
- Introduces 64-bit virtual addressing to z/VSE, reducing memory constraints and allowing applications to exploit more data in memory
- Continues the z/VSE strategy of protect, integrate, and extend (in short, PIE): protect existing customer investments in applications and data on z/VSE; integrate z/VSE with the rest of IT; extend with Linux on System z to build modern integrated solutions
- Exploitation of selected zEnterprise functions and features as well as IBM System Storage options
- Includes a Statement of Direction on CICS Explorer capabilities for CICS TS for VSE/ESA
z/VM 6.2:
- Increased flexibility with Live Guest Relocation (LGR) to move virtual servers without disruption
- Increased management of resources with multi-system virtualization, allowing up to four z/VM instances to be clustered as a single system image
- Increased scalability with up to four systems horizontally, even on mixed hardware generations
- Increased availability through non-disruptively moving work to available system resources and non-disruptively moving system resources to work
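As an illustration of the SSI and LGR concepts (a toy model only; the class and member names are invented for the sketch, not a z/VM interface):

```python
# Toy model of a z/VM 6.2 SSI cluster: up to four members, with Live
# Guest Relocation (LGR) moving a running guest between members.
# Purely illustrative; not a z/VM API.
class SSICluster:
    MAX_MEMBERS = 4   # z/VM 6.2 clusters up to four instances as one image

    def __init__(self, *members: str):
        if not 1 <= len(members) <= self.MAX_MEMBERS:
            raise ValueError("an SSI cluster has one to four members")
        self.guests = {m: set() for m in members}

    def relocate(self, guest: str, src: str, dst: str) -> None:
        # LGR: the guest keeps running; only its hosting member changes.
        self.guests[src].remove(guest)
        self.guests[dst].add(guest)

cluster = SSICluster("VMSYS01", "VMSYS02")
cluster.guests["VMSYS01"].add("LINUX01")
cluster.relocate("LINUX01", "VMSYS01", "VMSYS02")  # e.g. before planned maintenance
```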
"LGR is the very best z/VM software enhancement since 64-bit support became available."
- Mark Shackelford, Vice President, Information Services, Baldor
"CSL-WAVE 3.0 will provide day-one support for z/VM 6.2, allowing users to harness its powerful new features in an easy, graphical, and very intuitive manner."
- Sharon Chen, Founder and CEO, CSL International Ltd
Back-up Slides
- Doubled HiperSockets to 32
- Additional STP enhancements
- Doubled coupling CHPIDs to 128
- Improved PSIFB coupling links
- Physical coupling links increased to 72 (Model M10)
- New 32-slot PCIe-based I/O drawer
- Increased granularity of I/O adapters
- New form-factor I/O adapters, i.e. FICON Express8S and OSA-Express4S
- Humidity and altimeter smart sensors
- Improved processor cache design
- New and additional instructions
- On Demand enhancements
- CFCC Level 17 enhancements
- Cryptographic enhancements
- 6 and 8 GBps interconnects
- 2 new OSA CHPIDs: OSX and OSM
- Optional high-voltage DC power
- Optional overhead I/O cable exit
- zBX-002 with POWER7, DataPower XI50z, and IBM System x blades*
- Non-raised-floor (NRF) support with either top-exit or bottom-exit I/O and power
- Reclassification from general business environment to data center

*All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
z196 GA2 I/O Infrastructure (PCIe Based) with PCIe I/O drawer
[Diagram: Books 0-3, each with memory and PUs, drive two kinds of fanouts. PCIe fanouts (8x) provide x16 PCIe Gen2 interconnects at 8 GBps to paired PCIe switches (RII = redundant I/O interconnect) in the PCIe I/O drawer, serving OSA-Express4S ports. HCA2 fanouts (8x) provide 6 GBps InfiniBand interconnects to paired IFB-MP cards (RII), serving OSA-Express3 and FICON Express8 ports over 2 GBps mSTI connections to the card FPGAs.]
Fanout slots
Up to 8 fanout cards per z196 book:
- M15 (1 book): up to 8
- M32 (2 books): up to 16
- M49 (3 books): up to 20
- M66 and M80 (4 books): up to 24
I/O fanouts compete for fanout slots with the InfiniBand HCA fanouts that support coupling:
- HCA2-O: two 12x InfiniBand DDR links
- HCA2-O LR: two 1x InfiniBand DDR links
- HCA3-O: two 12x InfiniBand DDR links
- HCA3-O LR: four 1x InfiniBand DDR links
PCIe fanout (PCIe I/O interconnect links): supports two copper-cable PCIe 8 GBps interconnects to two 8-card PCIe I/O domain multiplexers; always plugged in pairs for redundancy.
HCA2-C fanout (InfiniBand I/O interconnect): supports two copper-cable 12x InfiniBand DDR 6 GBps interconnects to two 4-card I/O domain multiplexers; always plugged in pairs for redundancy.
Up to 4 fanouts per z114 CEC drawer: M05 (one CEC drawer) up to 4 fanouts; M10 (two CEC drawers) up to 8 fanouts.
[Diagram: Drawer 1 (present on M05 and M10) with four fanout slots.]
I/O fanouts compete for fanout slots with the InfiniBand HCA fanouts that support coupling:
- HCA2-O: two 12x InfiniBand DDR links
- HCA2-O LR: two 1x InfiniBand DDR links
- HCA3-O: two 12x InfiniBand DDR links
- HCA3-O LR: four 1x InfiniBand DDR links
PCIe fanout (PCIe I/O interconnect links): supports two PCIe 8 GBps interconnects on copper cables to two 8-card PCIe I/O domain switches; always plugged in pairs for redundancy.
HCA2-C fanout (InfiniBand I/O interconnect): supports two 12x InfiniBand DDR 6 GBps interconnects on copper cables to two 4-card I/O domain multiplexers; always plugged in pairs for redundancy.