
DB2 10.5 (including BLU Acceleration)
A Technical Overview
Matt Huras, IBM
Tridex Regional DB2 Users Group
Sept 2013 Meeting, NYC


DB2 10.5

pureScale (DB2 9.8): Virtually Unlimited Capacity, Transparent Scalability, Leading Availability
- TCO & Performance: any OLTP/ERP workload; start small and grow with your business
- Ease of Development: application-transparent scaling; avoid the risk & cost of tuning your applications to the database topology
- Reliability / Availability: maintain service across planned & unplanned events

DB2 10: 3x Query Performance Boost, 50% Compression Boost, Temporal Query, noSQL Graph Store, HADR Multiple Standby
- TCO & Performance: 3x query performance, new index exploitation, adaptive compression, multi-temperature storage, real-time warehousing
- Ease of Development: temporal query, 98% SQL compatibility, graph store, RCAC
- Reliability / Availability: pureScale integration & enhancements, WLM enhancements, reorg avoidance, HADR multiple standby

DB2 10.5: Analytics at the Speed of Thought (BLU Acceleration), Always Available Transactions, Future-Proof Versatility
- TCO & Performance: memory-optimized BLU Acceleration, workload consolidation with pureScale, even more performance!
- Ease of Development: enhanced SQL compatibility, index on expression, more noSQL integration
- Reliability / Availability: rolling updates, HADR with pureScale, pureScale active/active DR enhancements, online add/drop member, online maintenance, other availability enhancements


What is DB2 with BLU Acceleration?

New, innovative technology for analytic queries
- Columnar storage
- New run-time engine with vector (aka SIMD) processing, deep multi-core optimizations and cache-aware memory management
- Active compression - unique encoding for further storage reduction beyond DB2 10 levels, and run-time processing without decompression

Revolution by Evolution
- Built directly into the DB2 kernel
- BLU tables can coexist with traditional row tables, in the same schema, tablespaces and bufferpools
- Query any combination of BLU or row data
- Memory-optimized (not in-memory)

Value: order-of-magnitude benefits in
- Performance
- Storage savings
- Time to value
How fast is it? Results from the DB2 10.5 Beta

Customer                          | Speedup over DB2 10.1
----------------------------------|----------------------
Large Financial Services Company  | 46.8x
Global ISV Mart Workload          | 37.4x
Analytics Reporting Vendor        | 13.0x
Global Retailer                   | 6.1x
Large European Bank               | 5.6x

8x-25x improvement is common.

"It was amazing to see the faster query times compared to the performance results with our row-organized tables. The performance of four of our queries improved by over 100-fold! The best outcome was a query that finished 137x faster by using BLU Acceleration."
- Kent Collins, Database Solutions Architect, BNSF Railway

Recent Internal Test

POPS (Proof of Performance and Scalability)
- ~4TB of raw data; 2 fact tables, 5 dimension tables
- Derived from the Redbrick performance test; classic sales analytics
- 5.5 years of data (2000 days) for 63 stores
- Broad range of queries with varying selectivity / aggregation

Substantial storage savings with BLU Acceleration
- 2.5x less space than DB2 10.1

Massive performance gains
- 133x speedup over DB2 10.1
- Maximum query speedup over 900x

(Chart: total elapsed time over all queries - DB2 10.1: 621 min; DB2 BLU: 4.7 min, a 133x speedup)

Hardware: Intel Xeon Processor E5-4650, 32 cores total (4 CPUs), 384 GB RAM, DS5300 (2x16 disks)
(Lab tests - YMMV)

Significant Storage Savings

- ~2x-3x storage reduction vs DB2 10.1 adaptive compression (comparing all objects - tables, indexes, etc.)
- New advanced compression techniques
- Fewer storage objects required

(Chart: storage used by DB2 with BLU Acceleration vs DB2 10.1; lab tests - YMMV)

DB2 with BLU Acceleration: The 7 Big Ideas
7 Big Ideas: #2 Compute-Friendly Encoding and Compression

Massive compression with approximate Huffman encoding
- The more frequent the value, the fewer bits it is encoded with
- E.g., there will typically be more sales records from states with higher populations:
  - New York and California may be encoded with only 1 or 2 bits
  - Alaska and Rhode Island may be encoded in 12 bits

(Figure: conceptual compression dictionary mapping STATE values - New York, California, Illinois, Michigan, Florida, Alaska, Rhode Island - to progressively longer encodings)

Register-friendly encoding optimizes CPU & memory efficiency
- Encoded values are packed together to match the register width of the CPU
- Fewer I/Os, better memory utilization, fewer CPU cycles to process

7 Big Ideas: #2 (cont.) Data Remains Compressed During Evaluation

Encoded values do not need to be decompressed during evaluation
- Predicates (=, <, >, >=, <=, BETWEEN, etc.), joins, aggregations and more work directly on encoded values

Example: SELECT COUNT(*) FROM T1 WHERE STATE = 'California'
- The literal 'California' is encoded once, and the compressed STATE column is then scanned for matching encoded values - no row is ever decompressed

(Figure: the encoded predicate value is compared against the encoded STATE column - Michigan, California, New York, Illinois, Alaska, Rhode Island, ... - and the count is accumulated on the compressed data)

7 Big Ideas: #3 Multiply the Power of the CPU

Using Single Instruction Multiple Data (SIMD) hardware instructions, DB2 with BLU Acceleration can apply a single instruction to many data elements simultaneously
- Without SIMD processing, the CPU applies each instruction to one data element at a time; SIMD multiplies the performance accordingly
- E.g., compare records to 2005
- Applies to predicate evaluation, joins, grouping, arithmetic

(Figure: year values such as 2001-2012 packed into registers; a single "Compare = 2005" instruction on a processor core evaluates several values at once, feeding the result stream, versus one comparison per instruction without SIMD)

7 Big Ideas: #4 Core-Friendly Parallelism

BLU queries are automatically parallelized across cores, and achieve excellent multi-core scalability via
- careful data placement and alignment
- careful attention to the physical attributes of the server
- and other factors, designed to maximize CPU cache hit rate & cacheline efficiency

(Figure: without careful placement, two cores running SELECT c1 FROM ... and SELECT c2 FROM ... "ping-pong" the same cacheline between their caches; with BLU's placement of the main-memory layout, core 0 works on one column's data and core 1 on the other's, with minimal cacheline traffic)

7 Big Ideas: #4 (cont.) Core-Friendly Parallelism

The same design goal - maximizing CPU cache hit rate and cacheline efficiency - also governs working-set sizing.

(Figure: when the working set of memory accesses is larger than the per-core cache, the result is frequent, slow memory access; BLU tries to match each core's working set to the actual cache size, minimizing memory access, and the remaining portion of the data is processed in sequence)

7 Big Ideas: #5 Column-Oriented Storage

Massive improvements in I/O efficiency
- Only perform I/O on the columns involved in the query; no need to consume bandwidth for other columns
- Deeper compression is possible due to the commonality within column values

Massive improvements in memory and cache efficiency
- Columnar data kept compressed in memory
- Data packed into cache-friendly structures
- Predicates, joins, scans, etc. all operate on columns packed in memory
- Late materialization: rows are not materialized until absolutely necessary to build the result set
- No need to consume memory/cache space & bandwidth for unneeded columns

(Figure: columns C1-C8 stored separately and packed in different buffers in memory; SELECT C4 ... WHERE C4=X consumes I/O bandwidth, memory buffers and memory bandwidth only for C4)

7 Big Ideas: #6 Scan-Friendly Memory Caching

Memory-optimized (not in-memory)
- No need to ensure all data fits in memory
- A key BLU design point is to run well both when all data fits in memory and when it doesn't!

BLU includes new scan-friendly victim selection to keep a near-optimal % of pages buffered in memory
- Traditional RDBMSes use most-recently-used victim selection for large scans: there's no hope of caching everything, so they just victimize the last page read
- Even with large scans, BLU prefers selected pages in the bufferpool, using an algorithm that adaptively computes a target hit ratio for the current scan, based on the size of the bufferpool, the frequency of pages being re-accessed in the same scan, and other factors

(Figure: near-optimal caching of pages in RAM above the disks - benefit: less I/O!)

7 Big Ideas: #7 Data Skipping

Automatic detection of large sections of data that do not qualify for a query and can be ignored
- Order-of-magnitude savings in all of I/O, RAM, and CPU
- No DBA action to define or use - a synopsis is automatically created and maintained as data is LOADed or INSERTed
- Persistent storage of min and max values for sections of data values (a catalog sketch follows)
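The synopsis itself is an internal, system-maintained table. As a hedged sketch (naming convention assumed, not from the deck - synopsis tables typically surface in the catalog under SYSIBM with names beginning SYN), you can list them with:

SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABLES
WHERE TABNAME LIKE 'SYN%'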

How BLU Helps: A Hypothetical Example

The setup: a 4TB table with 100 columns, 10 years of data, 2004-2013.
The query: SELECT COUNT(*) FROM MYTABLE WHERE YEAR = 2010
The challenge: sub-second response to a 4TB query on a 32-core server, without defining an index.
The action:
1. Compression reduces the data size to 1/10th -> divide by 10 -> 400GB
2. Columnar access touches only 1 of 100 columns -> divide by 100 -> 4GB
3. The automatic synopsis eliminates pages without 2010 data -> divide by 10 -> 400MB
4. Core-friendly parallelism on a 32-core system -> each core scans 1/32 -> 12.5MB
5. Compute-friendly encoding and SIMD make scan efficiency ~4x faster than traditional -> divide by ~4 -> an effective 3.1MB per core

7 Big Ideas: #1 Simple to Implement and Use

LOAD and then run queries
- Significantly reduced or no need for:
  - Indexes
  - REORG (it's automated)
  - RUNSTATS (it's automated)
  - MDC or MQTs or materialized views
  - Statistical views
  - Optimizer hints

It is just DB2!
- Same SQL, language interfaces, administration
- Same DB2 process model, storage, bufferpools

"The BLU Acceleration technology has some obvious benefits: it makes our analytical queries run 4-15x faster and decreases the size of our tables by a factor of 10x. But it's when I think about all the things I don't have to do with BLU that I appreciate the technology even more: no tuning, no partitioning, no indexes, no aggregates."
- Andrew Juarez, Lead SAP Basis and DBA

7 Big Ideas: #1 (cont.) Simple to Implement and Use

One setting optimizes the system for BLU Acceleration
- Set DB2_WORKLOAD=ANALYTICS
- Informs DB2 that the database will be used for analytic workloads
- Automatically configures DB2 for optimal analytics performance:
  - Makes column-organized tables the default table type
  - Sets up the default page size (32KB) and extent size (4) appropriate for analytics
  - Enables automatic workload concurrency management
  - Enables automatic space reclaim
  - Memory for caching, sorting and hashing (bufferpool, sortheap) and for utilities (utility heap) is automatically initialized based on the server size and available RAM
(a minimal CLP sketch follows)
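A minimal CLP sketch (database name MYDB assumed). The registry variable should be set before the database is created so the analytics defaults apply at creation time:

$ db2set DB2_WORKLOAD=ANALYTICS
$ db2 create db MYDB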

Simple table creation
- If DB2_WORKLOAD=ANALYTICS, tables are created column-organized automatically
- Data is always automatically compressed - no options needed
- For mixed table types, define tables as ORGANIZE BY COLUMN or ORGANIZE BY ROW

Utility to convert tables from row-organized to column-organized
- the db2convert utility (a hedged usage sketch follows)
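A hedged usage sketch (database, schema and table names assumed; consult db2convert's full option list before use):

$ db2convert -d MYDB -z MYSCHEMA -t MYTABLE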

How to Create a BLU Table

CREATE STOGROUP SG1 ON 'path1', 'path2', 'path3'
CREATE TABLESPACE TS1 USING STOGROUP SG1
CREATE TABLE SALES
  (SALESKEY BIGINT NOT NULL,
   SALESPERSONKEY INT NOT NULL,
   PRODUCTKEY INT NOT NULL,
   PERIODKEY BIGINT NOT NULL,
   ...)
  ORGANIZE BY COLUMN
  IN TS1

- SALES is created in tablespace TS1, which is created using storage group SG1 and an associated bufferpool
- Use the new DFT_TABLE_ORG database configuration parameter to set the default table organization (ROW or COLUMN)

DB2 10.5 Automatic Concurrency Management

- Every additional query naturally consumes more memory, locks, CPU & memory bandwidth; in some databases, more queries can lead to contention and performance degradation
- DB2 10.5 avoids this by automatically optimizing the level of concurrency: an arbitrarily high number of concurrent queries can be submitted, but DB2 limits the number that consume resources at any point in time
- Lightweight queries that need instant response bypass this control
- Enabled automatically when DB2_WORKLOAD=ANALYTICS

(Figure: applications & users submit up to tens of thousands of SQL queries at once; within the DB2 DBMS kernel, a moderate number of queries consume resources, automatically determined based on available machine CPU resources)

DB2 10.5 Automatic Concurrency Management: Details

New objects created in all 10.5 databases:
- SYSDEFAULTMANAGEDSUBCLASS - default subclass for managed queries
- SYSDEFAULTUSERWAS - default work class set to map expensive queries (cost > X) to the above subclass
- SYSDEFAULTCONCURRENT - default concurrency threshold to limit concurrently executing managed queries to N

The default concurrency threshold is enabled on database creation when DB2_WORKLOAD=ANALYTICS
- X and N are determined automatically by DB2, based on available CPU resources

(Figure: queries enter via SYSDEFAULTUSERWORKLOAD; the SYSDEFAULTUSERWAS work class set routes queries with cost > X to SYSDEFAULTMANAGEDSUBCLASS, where the SYSDEFAULTCONCURRENT threshold limits concurrency to N and queues the excess; everything else runs in SYSDEFAULTSUBCLASS. DB2 10.5 additions were shown in blue on the original slide; a catalog query sketch follows)
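As a hedged catalog sketch (column names assumed from the SYSCAT.THRESHOLDS view), the automatically created threshold and its computed limit can be inspected with:

SELECT THRESHOLDNAME, ENABLED, MAXVALUE
FROM SYSCAT.THRESHOLDS
WHERE THRESHOLDNAME = 'SYSDEFAULTCONCURRENT'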

Automatic Space Reclaim

Traditional reorg is not needed with BLU tables
- No concept of clustering

Deleted space can be easily reclaimed via REORG TABLE t1 RECLAIM EXTENTS
- Freed extents can exist anywhere in the column object (uses the efficient sparse-table technique used with MDC and ITC tables)
- The storage can be subsequently reused by any table in the tablespace
- Done online while work continues
- Done automatically via DB2's automatic table maintenance when DB2_WORKLOAD=ANALYTICS

(Figure: after DELETE FROM MyTable WHERE Year = 2012, the storage extents that hold only deleted 2012 data are freed online while work continues)

A Brief Look at Internals: BLU Storage

Row-organized table:
  CREATE TABLE t1 IN TS1 INDEX IN TS2
  ALTER TABLE ... ADD CONSTRAINT uc1 UNIQUE (c2)
  CREATE INDEX i1 / i2 / i3
- Table (dat) object in TS1: each extent contains pages of rows
- Index (inx) object in TS2: uc1, i1, i2, i3

Column-organized table:
  CREATE TABLE t1 ORGANIZE BY COLUMN IN TS1 INDEX IN TS2
  ALTER TABLE ... ADD CONSTRAINT uc1 UNIQUE (c2)
- Table (dat) object in TS1: metadata & compression dictionary
- Column (col) object in TS1: each extent contains pages of data for one column (c1, c2, c3, c4)
- Synopsis: records the range of column values existing in different regions of the table
- Index (inx) object in TS2: uc1

A Quick Look at BLU Internals: A Scenario

0) DB2_WORKLOAD=ANALYTICS
1) CREATE db, tablespaces
2) CREATE TABLE t1 ... PRIMARY KEY (c1) ORGANIZE BY COLUMN IN TS1 INDEX IN TS2
3) LOAD FROM myfile INTO t1 - automatically maintains the synopsis & collects table & index statistics
4) SELECT c2, c3 FROM t1 WHERE ...
5) INSERT INTO t1 ... - automatically maintains the synopsis
6) DELETE FROM t1 WHERE ...
7) Automatic table maintenance returns space to the tablespace

(Figure: in TS1, the table (dat) object holds metadata & the compression dictionary, the column (col) object holds pages of data for each of the columns c1-c4 - initially empty - and the synopsis records the range of column values in different regions of the table; the index (inx) object in TS2 backs the primary key)

1TB SAP BW Queries

IBM Power 750
- AIX 6.1 TL6
- 3.3GHz, 32-core
- 130GB RAM available
- DS5300 with 48 spindles

DB2 10.1 vs BLU on a pre-GA 10.5 build, across queries Q01-Q20
(Chart: total elapsed time of 771s on DB2 10.1 vs 18s with BLU - a 42.8x speedup; lab tests - YMMV)

Example BLU Use Case: EDW Offload

(Figure: an EDW application and an OLAP application, each running Cognos BI with BLU Acceleration; data flows "load and go" from the Enterprise Data Warehouse into an analytic data mart built on BLU tables; multi-platform software)

Cognos with BLU Acceleration

Server: POWER7+ 760
- CPU: 48 cores @ 3.4GHz, 1TB RAM
- Cognos/DB2 client LPAR: 23 cores, 384GB RAM
- DB2 server LPAR: 24 cores, 460GB RAM
- 1 core, 4GB RAM dedicated to VIOS
- Storage: V7000 with 1TB SSD and 4TB HDD

Cognos BI 10.2 Dynamic Cubes (ROLAP)
- Extends Dynamic Query with in-memory caching of members, data, expressions, results, and aggregates

963GB of raw data
- 7 fact tables, 17 dimension tables
- Workload consists of both loading the cache and running ad-hoc reports not satisfied in the cache

Results (DB2 10.5 vs DB2 10.1):
- In-memory aggregate cache load elapsed time: 18x faster
- Ad-hoc Cognos report workload elapsed time: 14x faster

DB2 with BLU Acceleration: Summary

Breakthrough technology - super analytics, super easy
- Combines and extends leading technologies
- Over 25 patents filed and pending
- Leveraging years of IBM R&D spanning 10 laboratories in 7 countries worldwide

Typical experience
- 8x-25x performance gains
- 10x storage savings vs. uncompressed data with indexes
- Simple to implement and use

Order-of-magnitude improvements in
- Consumability
- Speed
- Storage savings


DB2 10.5 & Directions
(Agenda recap - the pureScale (DB2 9.8) / DB2 10 / DB2 10.5 roadmap slide shown at the start of the deck, repeated as a section divider)

Extending the pureScale Value Proposition

- Virtually Unlimited Capacity: buy only what you need, add capacity as your needs grow
- Application Transparency: avoid the risk and cost of application changes
- Continuous Availability: deliver uninterrupted access to your data with consistent performance

Learning from the undisputed gold standard... System z

pureScale as a Consolidation Platform

- Consolidate multiple workloads onto the same resource infrastructure
- Save management and resource costs

(Figure: workloads A, B and C each connect to their own database - DATABASE A, DATABASE B, ... DATABASE N - served by one pool of pureScale members, a CF and shared storage; for each database, some members are active while others remain passive until failover)

A single pool of members can serve all database workloads.

Member Subsets: Motivation

Current pureScale workload balancing design
- All work is automatically balanced across all members
- Rebalancing occurs on transaction or connection boundaries (configuration option)
- Works very well with single workloads, even with non-homogeneous system configurations
- Not ideal for some scenarios, e.g. mixing OLTP and batch

(Figure: OLTP and batch workloads spread by automatic workload balancing across all five members, the CF and shared storage)

Member Subsets

Provides workload balancing and management within defined member subsets (e.g. a batch subset and an OLTP subset)

Member subsets:
- Are defined as a database alias with a new stored procedure: SYSPROC.WLM_CREATE_MEMBER_SUBSET
- Can be modified dynamically with SYSPROC.WLM_ALTER_MEMBER_SUBSET
  - applications react by distributing work to new members and draining work from removed members
- On member failure, applications are automatically re-routed to another member in the subset
  - if no such member is active, a member not part of any subset can be chosen (if the subset is defined as inclusive)
- Shared hot-spare members can be used by any workload using an inclusive member subset, if all members in the subset fail

(Figure: workloads 1-3 mapped through database aliases to distinct member subsets within one cluster; a hypothetical invocation sketch follows)
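A purely hypothetical invocation sketch - the procedure name comes from the slide, but the parameter order and value formats below are assumptions, so verify against the SYSPROC documentation before use:

CALL SYSPROC.WLM_CREATE_MEMBER_SUBSET
  ('BATCH_SUBSET',                           -- subset / database alias name (assumed)
   '<databaseAlias>BATCHDB</databaseAlias>', -- attribute input (format assumed)
   '( 1, 2 )')                               -- member list (format assumed)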

Per-Member Self Memory Management

Current pureScale STMM design
- A single tuning member makes local tuning decisions and broadcasts memory tuning changes to the other members
- The tuning member is highly available - it automatically moves to another member in the event of a failure on the original member
- Works well in single homogeneous workload scenarios
- Not ideal in workload consolidation scenarios

(Figure: workloads 1-5 spread across the members; one member's STMM daemon broadcasts tuning decisions to the rest over shared storage)

Per-Member Self-Memory Management Tuning

Per-member STMM approach
- Each member makes autonomous, workload-specific tuning decisions

Key use cases:
- Workload consolidation
- Non-homogeneous member configurations

Default for new databases; existing databases retain the current behavior

Control via:
  CALL SYSPROC.ADMIN_CMD('get stmm tuning member')
  CALL SYSPROC.ADMIN_CMD('update stmm tuning member -2')
- The new -2 setting invokes per-member tuning

(Figure: each member runs its own STMM daemon, making workload-specific tuning decisions)

Explicit Hierarchical Locking (EHL)

- Designed to remove data-sharing costs for tables/partitions that are only accessed by a single member
- Avoids CF communication if object sharing is not occurring

Example target scenarios
- Workload consolidation
- Multi-tenancy
- Directed batch

Enabled via the new OPT_DIRECT_WRKLD database configuration parameter
- Once the parameter is set, detection of data access patterns happens automatically and EHL kicks in when data is not being shared (a minimal sketch follows)

(Figure: four members each driving a different table or partition A-D, with two CFs; no member/CF communication is necessary)
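A minimal CLP sketch (database name MYDB assumed; YES assumed as the enabling value - verify against the parameter's documented settings):

$ db2 update db cfg for MYDB using OPT_DIRECT_WRKLD YES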

Multi-Tenancy Demo: 10 Independent Workloads

- 10 members x 10 cores each, 2 CFs x 8 cores each
- Each member runs a separate 70% read / 30% write transactional workload, representing a different tenant
  - E.g. different regions, different subsidiaries, different customers in a SaaS environment
- Over 90% scaling at 10 members!
- More than 850,000 SQL statements per second across 10 members
- RDMA interconnect over 10 Gb RoCE Ethernet (10 Gb RoCE switch)
- Fibre Channel storage interconnect; 4x IBM TMS820 Flash Storage Units, 20 TB each

(Chart: relative performance scales nearly linearly from 1 to 10 members running workloads, members M0-M9 plus primary and secondary CFs; lab tests - YMMV)

Random Key Indexes

Example motivating scenario
- An online store defines an ORDER_NUM column
- ORDER_NUM is indexed for fast lookups
- Each concurrent transaction gets a newly incremented ORDER_NUM value
- This results in frequent attempts by each member to update the last index leaf page to add the latest ORDER_NUM

Random key indexes
  CREATE INDEX i1 ON t1 (ORDER_NUM RANDOM)
- Each ORDER_NUM is randomized before insertion into the index
- Spreads access requests evenly across index leaf pages
- Lookups apply the reverse algorithm
- Not usable for scans (a hedged usage sketch follows)
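A hedged usage sketch (table and literal values assumed). Because each key is randomized on the way in and a lookup applies the reverse algorithm, equality probes can use the index while range predicates cannot:

CREATE INDEX i1 ON t1 (ORDER_NUM RANDOM)
SELECT * FROM t1 WHERE ORDER_NUM = 12345            -- equality lookup: index usable
SELECT * FROM t1 WHERE ORDER_NUM BETWEEN 1 AND 100  -- range predicate: index not usable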

Random Indexes: Extreme Example

The scenario
- 4 members x 32 cores per member, SLES Linux
- Heavy insert activity into an ascending timestamp index
- Creates a hot spot at the high end of the index

(Chart: 2,216 TPS with a regular index vs 17,449 TPS with a random index; lab tests - YMMV)

IBM Achieves New WORLD RECORD

- New record on the three-tier SAP Sales and Distribution (SD) standard application benchmark: 266,000 SAP SD users, reaching 1,471,680 SAPS(1)
- Featuring the 64-core IBM Power 780, AIX 7.1 & DB2 10.5
- 1.47x more users than the best Oracle result(3)
- DB2 on Power has held the leadership result for the highest number of SAP SD users on the three-tier SAP SD standard application benchmark for over 7 years(2)

1) Results of DB2 10.5 on IBM Power 780 on the three-tier SAP SD standard application benchmark on SAP enhancement package 5 for SAP ERP 6.0, achieved 266,000 SAP SD benchmark users, certification #2013010. Configuration: 8 processors / 64 cores / 256 threads, POWER7+ 3.72 GHz, 512 GB memory, running AIX 7.1.
2) Results of DB2 UDB 8.2.2 on IBM eServer p5 Model 595 on the three-tier SAP SD standard application benchmark running SAP R/3 Enterprise 4.70 (ERP) software, achieved 168,300 SAP SD benchmark users, certification #2005021. Configuration: 32-core SMP, POWER5, 1.9 GHz, 256 GB memory, running AIX 5.3.
3) Results of Oracle 11g Real Application Clusters (RAC) on the SAP sales and distribution-parallel standard application benchmark running SAP enhancement package 4 for SAP ERP 6.0, achieved 180,000 SAP SD benchmark users, certification #2011037. Configuration: 8 x Sun Fire X4800 M2, each with 8 processors / 80 cores / 160 threads, Intel Xeon Processor E7-8870, 2.40 GHz, 8 x 512 GB memory, running Solaris 10.
Source: http://www.sap.com/benchmark
SAP, R/3 and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and several other countries. All other trademarks are the property of their respective owners.


DB2 10.5 & Directions
(Agenda recap - the pureScale (DB2 9.8) / DB2 10 / DB2 10.5 roadmap slide shown at the start of the deck, repeated as a section divider)

Index on Expression

What?
- Allow indexes to be defined with an expression, e.g.
  CREATE INDEX i1 ON emp (UPPER(lastname), salary+bonus)

Value proposition
- Efficient execution of SQL statements with such expressions, e.g.
  SELECT * FROM emp WHERE UPPER(lastname) = ?
  SELECT * FROM emp WHERE salary+bonus = ?
- Avoids the drawbacks of the work-around (an index on a generated column):
  - Space consumption of the extra column
  - Potential need to modify applications to reference the new column
Excluding NULL Keys from Indexes

What?
- Allow indexes to be defined so that NULL keys are excluded, e.g.:
  CREATE UNIQUE INDEX i1 ON t1 (c1, c2) EXCLUDE NULL KEYS

Value proposition
- Support applications whose semantics require unique enforcement, but only where keys are not NULL
- Storage savings! (Avoid indexing NULLs if they are infrequently queried)

Notes
- A NULL key is one where all key components are NULL: a row (NULL, 1) is still indexed, while (NULL, NULL) is excluded from an EXCLUDE NULL KEYS (ENK) index
- Inserting a second all-NULL key row fails the unique constraint with a regular unique index, but succeeds with an ENK unique index (a minimal sketch follows)
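A minimal sketch, assuming a table t1 (c1 INT, c2 INT):

CREATE UNIQUE INDEX i1 ON t1 (c1, c2) EXCLUDE NULL KEYS
INSERT INTO t1 VALUES (NULL, 1)     -- indexed: not all key parts are NULL
INSERT INTO t1 VALUES (NULL, NULL)  -- all-NULL key: excluded from the index
INSERT INTO t1 VALUES (NULL, NULL)  -- succeeds; a regular unique index would reject this duplicate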

Extended Row Size

What?
- Allow tables to be defined with a row size which exceeds the page size, e.g.:
  CREATE TABLESPACE TS1 PAGESIZE 4K
  CREATE TABLE T (c1 VARCHAR(4000), c2 VARCHAR(4000)) IN TS1

Value proposition
- Support applications that require long row definitions
- Avoid a lengthy table redefinition to change the page size

Notes
- New maximum row length: 1,048,319 bytes
- Excess row data is stored in a LOB
- There is a performance penalty when DB2 needs to go off-page for a portion of a row; this is usually OK in most scenarios, where instances of long rows are rare
- If you expect long rows to be common, try to use a larger page size
Extended Row Size Scenario

EXTENDED_ROW_SZ=ENABLE is the default only for new databases.

0) DB2 UPDATE DB CFG FOR DB1 USING EXTENDED_ROW_SZ ENABLE
1) CREATE TABLESPACE TS1 PAGESIZE 32K
2) CREATE TABLE t1 (C1 INT, C2 VARCHAR(30000), C3 VARCHAR(10000)) IN TS1 LONG IN TS2
3) INSERT INTO t1 VALUES (1, <10KB str>, <10KB str>) - the row fits in a 32K page
4) INSERT INTO t1 VALUES (2, <30KB str>, <10KB str>) - creates an extended row; the excess data is stored in the LOB (lob) object in TS2
5) UPDATE t1 SET C3=NULL WHERE C1=2 - removes the extended row

The SYSTABLES PCTEXTENDEDROWS column shows the % of rows in a table that are extended.
Steady Increase in SQL Compatibility Over Time

- Steady increase in compatibility over time, covering more and more complex applications
- DB2 10.5 is estimated to provide >98% statement compatibility
- Data is based on DCW (Database Conversion Workbench) DB2 reports

New Era Applications

Social / Mobile / Big Data Analytics / Cloud

Application characteristics: engaging, mobile, dynamic, competitive, fashionable, scalable, rapidly changing!

App development trends
- Need for agility: rapid application development and evolution
- No "schema first" - developers resist solutions that require delays to sync up with data modelers or change windows
- NoSQL JSON data stores: a JSON schema is simple and can be evolved rapidly without data modelers
- Native to the application space (e.g. JavaScript); a simple model for persisting Java and JavaScript objects

Background: What is JSON?

Simple format for data exchange
- Self-describing, schema-less
- Very simple (tag:value format)
- Human readable
- Based on JavaScript; initially targeted at web applications, but use is rapidly spreading
- The data interchange format for the Web; JavaScript is very popular in mobile and systems-of-engagement applications
- Analytics: e.g. an organization stores a large quantity of web statistics as JSON documents and wants to perform analytics

Example document:
{
  "firstName": "John",
  "lastName" : "Smith",
  "age"      : 25,
  "address"  : {
    "streetAddress": "21 2nd Street",
    "city"         : "New York",
    "state"        : "NY",
    "postalCode"   : "10021"
  },
  "phoneNumber": [
    { "type": "home", "number": "212 555-1234" },
    { "type": "fax",  "number": "646 555-4567" }
  ]
}


Typical JSON Open Source Datastore Attributes

- Optimized for high-speed data ingest
- Relaxed/absent ACID properties
  - Logging is often turned off or done asynchronously to improve performance
  - "Fire and forget" inserts: applications include checking logic to verify that the update occurred
- No concept of commit or rollback; each JSON update is independent
  - Applications implement compensation logic to update multiple documents with ACID properties
- No document-level locking
  - Applications manage a revision tag to detect document update conflicts
- Data is sharded for scalability
  - Shards are replicated asynchronously for availability
  - Queries to replica nodes can sometimes return back-level data
- JSON documents are stored in collections
  - No join across collections; requires in-application joins
- Limited options for security, temporal, geo-spatial, ...

DB2 JSON Datastore: Concept & Motivation

What: built-in JSON support in DB2, including support for popular noSQL JSON APIs
Why: preserve mature DBMS features; leverage existing skills and tools
- Multi-statement transactions
- ACID
- Extreme scale, performance and high availability
- Comprehensive security
- Management/operations

The best of both worlds: agility with a trusted foundation.


DB2 JSON Datastore: Features

- Binary-formatted JSON stored in the database (in inlined LOBs)
- B-tree indexing on JSON elements for fast query processing
- Java API and command line
- Optional fire-and-forget inserts
- Supports transactions
- Smart query re-write
- DB2 ecosystem of tools
- Extends support to more applications and developers

(Figure: Java, PHP and NodeJS applications speak the BSON wire protocol to a NoSQL JSON wire listener - an AIM-developed MongoDB wire protocol implementation; Java apps and the JSON command shell use the JSON API over the JDBC driver; both paths reach DB2 via SQL over DRDA, with documents such as a shopping cart stored as binary JSON in inlined LOBs, keyed and indexable on elements like item:camera)
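Because the wire listener implements the MongoDB wire protocol, the developer experience is MongoDB-style. A purely hypothetical sketch (collection name and document assumed, echoing the shopping-cart figure above):

db.carts.insert({ "item": "camera", "qty": 1 })   // fire-and-forget or transactional insert
db.carts.find({ "item": "camera" })               // query by JSON element; can use a B-tree index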


DB2 10.5 & Directions
(Agenda recap - the pureScale (DB2 9.8) / DB2 10 / DB2 10.5 roadmap slide shown at the start of the deck, repeated as a section divider)

Active/Active Disaster Recovery with GDPC on Linux

Active/Active DR with a Geographically Dispersed pureScale Cluster (GDPC)
Unique values:
- Coherent active/active access regardless of site (no need for conflict detection/avoidance)
- Synchronous (no transaction loss on failures)
- The DR site is constantly being tested
- Previously validated only on Power

(Figure: applications spread roughly half of their connections to each site; Site A hosts members M1, M3 and the primary CF (CFP), Site B hosts members M2, M4 and the secondary CF (CFS), with the sites 10s of km apart)

GDPC Sweet Spot

(Chart: the portion of UPDATE, INSERT or DELETE operations in the workload (R/W ratio, from 60% to 100% read) plotted against site-to-site distance (10km to 70km); workloads with higher read ratios and shorter distances are good candidates for GDPC, while more write-intensive workloads over longer distances are better candidates for other replication technologies)

pureScale Cluster Extension without Downtime

New members can be added to an instance while it is online
- No impact to workloads running on existing members
- The new member's configuration is copied from an existing reference member; it can be reconfigured later if needed
- Workload can be directed to the newly added member immediately once it is started

Other notes
- New optional "mid" option to indicate the member number to be added
- A new member can be added to an existing member host
- A backup is no longer needed after adding new members

(Figure: a member is added online to a cluster of members, each with its own log, plus two CFs)

Four Steps to Online Cluster Extension

1. Add the member to the instance
- Invoked from an existing host in the instance
- The initial new member configuration is derived from the invoking host (the reference member)
  $ db2iupdt -add -m hostC -mnet hostC-ib0 db2sdin1

2. (Optional) Reconfigure the new member as needed
- The reference member is used both for member-specific database manager configuration (e.g. instance memory) and for member-specific database configuration parameters
- If needed, update selected parameters on the new member
  $ db2 update database configuration ... member 5 ...

3. Start the new member
- If using default workload balancing, clients will be automatically directed to the new member; otherwise see step 4
  $ db2start member 12

4. (Optional) Update member subset or affinity definitions as needed
- Subsets: the new member is not part of any subset until explicitly added (can be done online)
- Affinity: you can add the new member to db2dsdriver.cfg and dynamically reload it in the application via an API

Topology-Changing Restore

- Allow restore of an M-member backup to an N-member instance
- Allow restore from pureScale to non-pureScale and vice versa
- The backup image can be online if N is a superset of M

(Figure: BACKUP of pureScale database MYDB with four members, CF, logs and shared storage, then RESTORE into a different topology - e.g. a different number of members or a non-pureScale DB partition)

Example: Recovery after a Media Failure

Timeline (members 0, 1, 2): t0: database backup (online); t1: backup of tablespace TBSP0 (online); t2: add member 2; t3: backup of tablespace TBSP1 (online); t4: media failure.

1) Restore the full database backup image taken at t0
  $ db2 restore database sample from /mybackup
2) (Optional) Restore the backup of tablespace TBSP0 taken at t1
  $ db2 restore database sample tablespace(tbsp0) from /mybackup taken at t1
3) (Optional) Restore the backup of tablespace TBSP1 taken at t3
  $ db2 restore database sample tablespace(tbsp1) from /mybackup taken at t3
4) Rollforward the database to the end of logs - this replays the "add member 2" event
  $ db2 rollforward database sample to end of logs and stop

Review: Manual Flash Copy Backup

Flashcopy backup and restore is a largely manual process:

Backup
1. Identify the LUN(s) associated with the database
2. Identify free target LUN(s) for the copy
3. Establish the flashcopy pair(s)
4. Issue the DB2 SUSPEND I/O command to tell DB2 to suspend write I/Os
5. Issue the storage commands necessary to do the actual flash copy
6. Issue the DB2 RESUME I/O command to return DB2 to normal

Restore
1. Restore/copy the target LUN(s) containing the backup of interest
2. Issue the db2inidb command to initialize the database for rollforward recovery
3. Issue the DB2 ROLLFORWARD command

Drawbacks: no history file entry; error prone.
(Figure: flash copy of the DB2 database from source LUNs to target LUNs)

Review: Integrated Flash Copy Backup

Flashcopy backup/restore works just like any other DB2 backup - the manual steps are replaced by single commands:

Backup
  DB2 BACKUP DB sample USE SNAPSHOT
  (replaces identifying source and target LUNs, establishing flashcopy pairs, SUSPEND I/O, the storage commands and RESUME I/O)

Restore
  DB2 RESTORE DB sample USE SNAPSHOT
  DB2 ROLLFORWARD ...
  (replaces restoring the target LUNs and db2inidb)

Benefits: history file record; simple! Wide (but not exhaustive) storage support.
(Figure: flash copy of the DB2 database from source LUNs to target LUNs)

Scripted Interface for Flash Copy Backup

Flashcopy backup/restore just like any other DB2 backup, with a user-supplied script driving the storage steps:

Backup
  DB2 BACKUP DB sample USE SNAPSHOT SCRIPT /myscript.sh

Restore
  DB2 RESTORE DB sample USE SNAPSHOT SCRIPT /myscript.sh TAKEN AT <timestamp>
  DB2 ROLLFORWARD ...

Benefits: history file record; simple to use! Wider storage support enabled.
(Figure: flash copy of the DB2 database from source LUNs to target LUNs)

Want to Write Your Own Script?

(Figure: snapshot (backup) example flow between the DBA, DB2 and the script)

The script must support these actions:
- SNAPSHOT (BACKUP): prepare, snapshot, verify, storemetadata, rollback
- RESTORE: prepare, restore
- DELETE: prepare, delete
- QUERY: prepare, query

An example script ships in samples/BARVendor/libacssc.sh (a hypothetical skeleton follows).
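A minimal shell sketch of the action dispatch such a script needs. The option handling here is assumed/hypothetical - the real invocation protocol is defined by the DB2 snapshot scripted interface, so follow the shipped sample libacssc.sh:

#!/bin/sh
# Hypothetical dispatch skeleton; real option names and protocol come
# from the DB2 scripted snapshot interface (see samples/BARVendor/libacssc.sh).
ACTION=$1   # e.g. prepare | snapshot | restore | delete | verify | rollback (assumed)

case "$ACTION" in
  prepare)  ;;  # validate the environment, locate source/target LUNs
  snapshot) ;;  # issue the storage commands that take the flash copy
  restore)  ;;  # copy the target LUNs back over the source
  delete)   ;;  # remove an obsolete snapshot image
  *) echo "unknown action: $ACTION" >&2; exit 1 ;;
esac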

REORG Enhancements

- Online inplace reorg support on tables using adaptive compression
- Online inplace reorg support in pureScale
- Fastpath option for online inplace reorg to clean up overflow records only:
  $ db2 reorg table T1 inplace cleanup overflows
- REORG with RECLAIM EXTENTS can clean up partially empty extents for insert-time-clustered (ITC) tables

DB2 10 Review: Insert-Time-Clustered Tables

Rows clustered by insert time
- Very predominant pattern: rows inserted together are often deleted together
- Results in many extents naturally becoming free after deletions
- Invoke extent reclamation explicitly (or rely on the automatic table maintenance daemon), e.g.:
  1) INSERTs
  2) DELETE WHERE ...
  3) REORG ... RECLAIM EXTENTS

Extents are quickly returned to the tablespace
- Available for other tables and indexes

(Figure: extents filled 8am-12pm along extent boundaries; after a time-correlated delete, whole extents become free)

REORG RECLAIM EXTENTS Reclaims More!

  DB2 REORG TABLE T1 ... RECLAIM EXTENTS ALLOW WRITE ACCESS

- Embedded empty extents can be easily reclaimed from the table/index
- New extended ITC RECLAIM: a lightweight task moves rows out of almost-empty extents
- More complete space reclamation, and still very fast

Online Rolling Updates

DBAs can apply DB2 maintenance without an outage window, while applications keep a single database view.

Procedure:
1. Drain (aka quiesce)
2. Remove & maintain
3. Re-integrate
4. Repeat until done

Rolling Updates Concepts

For each member in turn, the code level moves from GA to FP1:
  db2stop member1 quiesce
  db2iupdt member1
  db2start member1
then the same sequence for member2, and so on.

COMMIT: once FP1 is committed, the new function becomes available - and the cluster can no longer roll back down to GA.

Transparent - ZERO database downtime.

Rolling Updates: More Detail

The installFixPack command is enhanced to simplify an online update
- One invocation per host drives all of the steps needed for all members and CFs on that host
- Execute once per host in your instance, and one additional time to commit:
  $ installFixPack -p <install_path> -l <install_log> -I <InstName> -commit_level

New informational configuration parameter:
- Current Effective Code Level (CECL) - denotes the committed code level in the cluster, i.e. the level of function available in the cluster

(Figure: installFixPack + db2start on member1, then on member2; CECL stays at 10.5 GA until installFixPack -commit_level raises it to FP1)

pureScale HADR

Simple DR solution for pureScale

Built-in resiliency
- Tolerant of member failures on the primary and the standby
- Another member takes over sending/receiving log data (it can access the failed member's logs)

Simple configuration
- No need to specify all addresses of the other side (an automatic discovery protocol does that)

Eliminates back pressure on the primary via log spooling on the standby

Initial support includes async and super-async modes

(Figure: a primary cluster and a standby DR cluster, each with members and CFs)

pureScale HADR: Attributes

Single system view
- START / STOP / ACTIVATE / DEACTIVATE / TAKEOVER commands only need to be issued once, not once per member

One member on the standby is designated the replay member
- All primary members send log data to parallel threads on the replay member on the standby
- The replay member is highly available: if the current replay member fails, DB2 automatically runs replay on another member

Assisted Remote Catchup (ARCU)
- If one primary member is not available, the standby can obtain its logs via another primary member that is available

Standby requirement
- Must also be running pureScale with the same number of members (they can be logical members)
(a hedged configuration sketch follows)
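A hedged configuration sketch using standard HADR commands and parameters (database name MYDB, host and port values, and SUPERASYNC mode are assumed; pureScale-specific details may differ). Thanks to the single system view, each command is issued once per cluster:

$ db2 update db cfg for MYDB using HADR_TARGET_LIST standbyhost:4000 HADR_SYNCMODE SUPERASYNC
$ db2 start hadr on db MYDB as standby     (on the standby cluster)
$ db2 start hadr on db MYDB as primary     (on the primary cluster)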

pureScale HADR Built-in Resiliency

(Figure: transactions run against the primary site; each primary member ships its log stream - Logs 1, Logs 2, Logs 3 - over the link to the replay member on the standby site; if member 1 fails, member 3 sends member 1's logs via assisted remote catchup)

DB2 10.5 Editions - New Simple Packages!

For departmental use (limited capacity):
- Advanced Workgroup Server Edition (fully functional)
  - Fully functional offering for small OLTP and analytic deployments
  - Primarily used in department environments within large enterprises; also available for SMB/MM deployments
  - Limited by TB, memory, sockets and/or cores
  - Includes Tools, Compression, BLU, pureScale and DPF
- Workgroup Server Edition (base function only)
  - Capabilities for an entry-level offering
  - Targeting single-server requirements with less intense workloads in both the OLTP and analytic space
  - Limited by TB, memory, sockets and/or cores
  - Does not include Tools, Compression, BLU, pureScale and DPF

For enterprise use (unlimited capacity):
- Advanced Enterprise Server Edition (fully functional)
  - Fully functional offering for enterprise-class OLTP and/or analytic deployments
  - Targeting full enterprise / full data center requirements
  - No TB, memory, socket or core limit
  - Includes Tools, Compression, BLU, pureScale and DPF
- Enterprise Server Edition (base function only)
  - Capabilities for an entry-level offering
  - Targeting single-server, enterprise requirements with more intense workloads in both the OLTP and analytic space
  - No TB, memory, socket or core limit
  - Does not include Tools, Compression, BLU, pureScale and DPF

Also: Express Edition, CEO Enterprise and Advanced, Express-C and Developer Edition

Comprehensive 10.5 Tooling Support

Some of the highlights:
- Optim Performance Manager (OPM): new metrics for columnar query processing and other 10.5 capabilities
- Optim Query Workload Tuner (OQWT): new table organization (i.e. BLU) advisor
- Data Studio: BLU support, HADR multiple standbys, pureScale support enhancements

Summary & Questions

DB2 10.5 Themes
- Speed-of-Thought Analytics - with new BLU Acceleration
  - 8-25x faster reporting and analytics(1); more than 1000x seen in some lab test queries(2)
  - 10x storage space savings seen during beta test(3)
- Always Available Transactions - with enhanced pureScale
  - Online rolling maintenance updates with no planned downtime(4)
  - Designed for disaster recovery over distances of 1000s of km(5)
- Unprecedented Affordability
  - In-memory speed and simplicity on existing infrastructure
  - Optimized for SAP workloads for faster performance and to help dramatically reduce costs
  - Upgrade to DB2 with average 98% Oracle Database application compatibility(7)
- Future-Proof Versatility
  - Optimized capabilities for both OLTP and data warehousing
  - Business-grade NoSQL and mobile database for greater application flexibility

1 Based on internal IBM testing of sample analytic workloads comparing queries accessing row-based tables on DB2 10.1 vs. columnar tables on DB2 10.5. Performance improvement figures are cumulative of all queries in the workload. Individual results will vary depending on individual workloads, configurations and conditions.
2 Based on internal IBM tests of pure analytic workloads comparing queries accessing row-based tables on DB2 10.1 vs. columnar tables on DB2 10.5. Results not typical. Individual results will vary depending on individual workloads, configurations and conditions, including size and content of the table, and number of elements being queried from a given table.
3 Client-reported testing results in the DB2 10.5 early release program. Individual results will vary depending on individual workloads, configurations and conditions, including table size and content.
4 Based on IBM design for normal operation with rolling maintenance updates of DB2 server software on a pureScale cluster. Individual results will vary depending on individual workloads, configurations and conditions, network availability and bandwidth.
5 Based on IBM design for normal operation under typical workload. Individual results will vary depending on individual workloads, configurations and conditions, network availability and bandwidth.
6 Available with DB2 Advanced Enterprise Server Edition.
7 Based on internal tests and reported client experience from 28 Sep 2011 to 07 Mar 2012.
