
Oracle Active Data Guard

Performance
Joseph Meeks
Director, Product Management
Oracle High Availability Systems


Note to viewer

These slides provide performance data for Data Guard and Active Data Guard that we are in the process of updating for Oracle Database 12c.

They can be shared with customers, but are not intended to be a canned presentation ready to deliver in their entirety.

They provide SCs with data that can be used to substantiate Data Guard performance or to provide focused answers to particular concerns that may be expressed by customers.

Note to viewer
See this FAQ for more customer and sales collateral:
http://database.us.oracle.com/pls/htmldb/f?p=301:75:101451461043366::::P75_ID,P75_AREAID:21704,2

Agenda: Data Guard Performance

Failover and Switchover Timings
SYNC Transport Performance
ASYNC Transport Performance
Primary Performance with Multiple Standby Databases
Redo Transport Compression
Standby Apply Performance

Data Guard 12.1 Example - Faster Failover

[Chart (preliminary): failover completed in 43 seconds with 2,000 database sessions on both primary and standby]

Data Guard 12.1 Example - Faster Switchover

[Chart (preliminary): switchover timings of 48, 72, and 83 seconds, measured with 2,000, 1,000, and 500 database sessions on both primary and standby]
Agenda: Data Guard Performance

Failover and Switchover Timings
SYNC Transport Performance
ASYNC Transport Performance
Primary Performance with Multiple Standby Databases
Redo Transport Compression
Standby Apply Performance

Synchronous Redo Transport
Zero Data Loss

Primary database performance is impacted by the total round-trip time required for an acknowledgement to be received from the standby database:
The Data Guard NSS process transmits redo to the standby directly from the log buffer, in parallel with the local log file write
The standby receives the redo, writes it to a standby redo log file (SRL), then returns an ACK
The primary receives the standby ACK, then acknowledges commit success to the application

The following performance tests show the impact of SYNC transport on the primary database using various workloads and network latencies. In all cases, transport was able to keep pace with redo generation (no transport lag).

We are working on test data for Fast Sync (SYNC NOAFFIRM) in Oracle Database 12c. Fast Sync uses the same process as above, but the standby acknowledges the primary as soon as redo is received in memory; it does not wait for the SRL write.
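For context, synchronous transport is enabled through the standard LOG_ARCHIVE_DEST_n parameter. A minimal sketch is shown below; the service and database names are placeholders, and the SYNC NOAFFIRM (Fast Sync) form applies to Oracle Database 12c as noted above.

    -- Zero data loss SYNC transport: commit waits for the standby SRL write
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=boston SYNC AFFIRM NET_TIMEOUT=10 DB_UNIQUE_NAME=boston
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

    -- Fast Sync (12c): standby acknowledges once redo is received in memory
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=boston SYNC NOAFFIRM NET_TIMEOUT=10 DB_UNIQUE_NAME=boston
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';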

Test 1) Synchronous Redo Transport
OLTP with Random Small Inserts, < 1ms RTT Network Latency

Workload:
Random small inserts (OLTP) to 9 tables, 787 commits per second
132 KB redo size, 1,368 logical reads, 692 block changes per transaction

System: Sun Fire X4800 M2 (Exadata X2-8)
1 TB RAM, 64 cores, Oracle Database 11.2.0.3, Oracle Linux
InfiniBand, seven Exadata cells, Exadata Software 11.2.3.2
Exadata Smart Flash, Smart Flash Logging, and Write-Back Flash Cache enabled, which provided significant gains

Test 1) Synchronous Redo Transport
OLTP with Random Small Inserts and < 1ms RTT Network Latency

[Chart: redo rate with Data Guard synchronous transport enabled to a local standby (104,051,368.80) vs. Data Guard transport disabled (104,143,368.00)]

<1ms RTT, 99 MB/s redo rate
<1% impact on database throughput
1% impact on transaction rate

RTT = network round trip time

Test 2) Synchronous Redo Transport
Swingbench OLTP Workload with Metro-Area Network Latency

Exadata X2-8, 2-node RAC database
Smart Flash Logging and Write-Back Flash Cache enabled

Swingbench OLTP workload
Random DMLs, 1 ms think time, 400 users, 6,000+ transactions per second, 30 MB/s peak redo rate (a different workload from the previous test)

Transaction profile:
5 KB redo size, 120 logical reads, 30 block changes per transaction

1 and 5 ms RTT network latency

Test 2) Synchronous Redo Transport
Swingbench OLTP Workload with Metro-Area Network Latency

[Chart: Swingbench OLTP transactions per second. Baseline with no Data Guard: 6,363 tps; Data Guard SYNC at 1ms RTT network latency: 6,151 tps; Data Guard SYNC at 5ms RTT network latency: 6,077 tps]

30 MB/s redo
3% impact at 1ms RTT
5% impact at 5ms RTT

Test 3) Synchronous Redo Transport
Large Insert OLTP Workload with Metro-Area Network Latency

Exadata X2-8, 2-node RAC database
Smart Flash Logging and Write-Back Flash Cache enabled

Large insert OLTP workload
180+ transactions per second, 83 MB/s peak redo rate, random tables

Transaction profile:
440 KB redo size, 6,000 logical reads, 2,100 block changes per transaction

1, 2, and 5 ms RTT network latency

Test 3) Synchronous Redo Transport
Large Insert OLTP Workload with Metro-Area Network Latency

[Chart: large insert OLTP transactions per second. Baseline with no Data Guard: 189 tps; 1ms RTT network latency: 188 tps; 2ms RTT: 177 tps; 5ms RTT: 167 tps]

83 MB/s redo
<1% impact at 1ms RTT
7% impact at 2ms RTT
12% impact at 5ms RTT

Test 4) Synchronous Redo Transport
Mixed OLTP Workload with Metro-Area Network Latency

Exadata X2-8, 2-node RAC database
Smart Flash Logging and Write-Back Flash Cache enabled

Mixed workload with high TPS
Swingbench plus large insert workloads
26,000+ transactions per second and 112 MB/s peak redo rate

Transaction profile:
4 KB redo size, 51 logical reads, 22 block changes per transaction

1, 2, and 5 ms RTT network latency

Test 4) Synchronous Redo Transport
Mixed OLTP Workload with Metro-Area Network Latency

[Chart: transaction rate and redo rate for the Swingbench plus large insert workload, comparing No Sync against SYNC at 0ms, 2ms, 5ms, 10ms, and 20ms RTT network latency]

112 MB/s redo
3% impact at <1ms RTT
5% impact at 2ms RTT
6% impact at 5ms RTT

Note: 0ms latency on the graph represents values falling in the range <1ms

Additional SYNC Configuration Details
For the Previous Series of Synchronous Transport Tests

No system bottlenecks (CPU, I/O, or memory) were encountered during any of the test runs
Primary and standby databases had 4GB online redo logs
Log buffer was set to the maximum of 256MB
OS maximum TCP socket buffer size was set to 128MB on both primary and standby
Oracle Net was configured on both sides to send and receive 128MB, with an SDU of 32K
Redo was shipped over a 10GigE network between the two systems
Approximately 8-12 checkpoints/log switches occurred per run
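As a rough illustration of the Oracle Net settings described above (DEFAULT_SDU_SIZE, SDU, SEND_BUF_SIZE, and RECV_BUF_SIZE are standard Oracle Net parameters; the values simply mirror this test configuration and the host and service names are placeholders):

    # sqlnet.ora on primary and standby
    DEFAULT_SDU_SIZE = 32767

    # tnsnames.ora entry used for redo transport (primary side)
    boston =
      (DESCRIPTION =
        (SDU = 32767)
        (SEND_BUF_SIZE = 134217728)
        (RECV_BUF_SIZE = 134217728)
        (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = boston))
      )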

Customer References for SYNC Transport

Fannie Mae: case study that includes performance data
Other SYNC references:
Amazon
Intel
MorphoTrak (formerly the biometrics division of Motorola): case study, podcast, presentation
Enterprise Holdings
Discover Financial Services: podcast, presentation
Paychex
VocaLink

Synchronous Redo Transport
Caveat that Applies to ALL SYNC Performance Comparisons

The redo rates achieved are influenced by network latency, redo-write size, and commit concurrency, in a dynamic relationship that will vary for every environment and application.
Test results illustrate how an example workload can scale with minimal impact to primary database performance.
Actual mileage will vary with each application and environment. Oracle recommends that customers conduct their own tests, using their own workload and environment; Oracle tests are not a substitute.

Agenda
Failover and Switchover Timings
SYNC Transport Performance
ASYNC Transport Performance
Primary Performance with Multiple Standby Databases
Redo Transport Compression
Standby Apply Performance


Asynchronous Redo Transport
Near Zero Data Loss

With ASYNC, the primary does not wait for an acknowledgement from the standby:
A Data Guard NSA process transmits redo directly from the log buffer, in parallel with the local log file write
NSA reads from disk (the online redo log file) if the log buffer is recycled before redo transmission is complete

ASYNC has minimal impact on primary database performance
Network latency has little, if any, impact on transport throughput
Uses the Data Guard 11g streaming protocol and correctly sized TCP send/receive buffers

Performance tests are useful to characterize the maximum redo volume that ASYNC can support without a transport lag
The goal is to ship redo as fast as it is generated, without impacting primary performance
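A minimal sketch of how ASYNC transport is typically configured through the standard LOG_ARCHIVE_DEST_n parameter (the service and database names are placeholders):

    -- Asynchronous transport: commits do not wait for the standby
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=chicago ASYNC NOAFFIRM DB_UNIQUE_NAME=chicago
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';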

Asynchronous Test Configuration
Details

100GB online redo logs
Log buffer set to the maximum of 256MB
OS maximum TCP socket buffer size set to 128MB on primary and standby
Oracle Net configured on both sides to send and receive 128MB
Read buffer size set to 256 (_log_read_buffer_size=256) and archive buffers set to 256 (_log_archive_buffers=256) on primary and standby
Redo shipped over the InfiniBand network between primary and standby nodes (ensures that transport is not bandwidth constrained)
Near-zero network latency, approximate throughput of 1200MB/sec
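For reference, the two underscore parameters listed above would be set as follows. They are hidden parameters taken directly from this test configuration and should normally be changed only under guidance from Oracle Support.

    -- Hidden parameters used in this test configuration (restart required)
    ALTER SYSTEM SET "_log_read_buffer_size" = 256 SCOPE=SPFILE;
    ALTER SYSTEM SET "_log_archive_buffers"  = 256 SCOPE=SPFILE;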

ASYNC Redo Transport Performance Test
Oracle Database 11.2

[Chart: redo transport rate of 484 MB/sec for a single instance]

Data Guard ASYNC transport can sustain very high rates
484 MB/sec on a single node, with zero transport lag
Add RAC nodes to scale transport performance
Each node generates its own redo thread and has a dedicated Data Guard transport process
Performance will scale as nodes are added, assuming adequate CPU, I/O, and network resources
A 10GigE NIC on the standby receives data at a maximum of 1.2 GB/second
The standby can be configured to receive redo across two or more instances
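One way to verify the zero-transport-lag claim during a test like this is to query the standard V$DATAGUARD_STATS view on the standby; a minimal sketch:

    -- Run on the standby: a transport lag of +00 00:00:00 indicates no lag
    SELECT name, value, time_computed
      FROM v$dataguard_stats
     WHERE name IN ('transport lag', 'apply lag');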

Data Guard 11g Streaming Network Protocol
High Network Latency has Negligible Impact on Network Throughput

[Chart: ASYNC redo transport rate (MB/sec) at 0ms and 25ms RTT network latency]

The streaming protocol is new with Data Guard 11g
The test measured throughput with 0 - 100ms RTT

ASYNC tuning best practices:
Set the correct TCP send/receive buffer size = 3 x BDP (bandwidth delay product)
BDP = bandwidth x round-trip network latency
Increase the log buffer size if needed to keep the NSA process reading from memory
See support note 951152.1
Query X$LOGBUF_READHIST to determine the log buffer hit rate
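A quick worked example of the sizing rule above, using assumed figures (a 1 Gbit/sec link with 25 ms RTT) purely for illustration:

    BDP         = bandwidth x RTT = 125 MB/sec x 0.025 sec = 3.125 MB
    Buffer size = 3 x BDP         = 3 x 3.125 MB           = approximately 9.4 MB per TCP send/receive buffer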

Agenda
Failover and Switchover Timings
SYNC Transport Performance
ASYNC Transport Performance
Primary Performance with Multiple Standby Databases
Redo Transport Compression
Standby Apply Performance


Multi-Standby Configuration

[Diagram: Primary (A) ships redo SYNC to a Local Standby (B) and ASYNC to a Remote Standby (C)]

A growing number of customers use multi-standby Data Guard configurations.
Additional standbys are used for:
Local zero data loss HA failover combined with remote DR
Rolling maintenance to reduce planned downtime
Offloading backups, reporting, and recovery from the primary
Reader farms that scale read-only performance

This leads to the question: how is primary database performance affected as the number of remote transport destinations increases?
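A minimal sketch of the configuration pictured above, using placeholder database names (a local standby b_local receiving SYNC and a remote standby c_remote receiving ASYNC):

    -- Destination 2: local standby, zero data loss
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=b_local SYNC AFFIRM NET_TIMEOUT=10 DB_UNIQUE_NAME=b_local
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

    -- Destination 3: remote standby for disaster recovery
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 =
      'SERVICE=c_remote ASYNC NOAFFIRM DB_UNIQUE_NAME=c_remote
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';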

Redo Transport in Multi-Standby Configuration
Primary Performance Impact: 14 Asynchronous Transport Destinations

[Charts: increase in CPU and change in redo volume, each compared to baseline, as the number of ASYNC destinations grows from 0 to 14]

Redo Transport in Multi-Standby Configuration
Primary Performance Impact: 1 SYNC and Multiple ASYNC Destinations

[Charts: increase in CPU and change in redo volume, each compared to baseline, for zero destinations, 1 SYNC/0 ASYNC, 1 SYNC/1 ASYNC, and 1 SYNC/14 ASYNC destinations]

Redo Transport for Gap Resolution

Standby databases can be configured to request log files needed to resolve gaps from other standbys in a multi-standby configuration:
A standby database that is local to the primary database is normally the preferred location to service gap requests
Local standby databases are least likely to be impacted by network outages
Other standbys are listed next
The primary database services gap requests only as a last resort
Using a standby for gap resolution avoids any overhead on the primary database
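The preference order described above is typically expressed through each standby's FAL_SERVER parameter, a standard parameter that accepts a list of Oracle Net service names tried in order (the names below are placeholders):

    -- On the remote standby: fetch archive gaps from the local standby first,
    -- then another standby, and from the primary only as a last resort
    ALTER SYSTEM SET FAL_SERVER = 'b_local,other_standby,primary_a';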

Agenda
Failover and Switchover Timings
SYNC Transport Performance
ASYNC Transport Performance
Primary Performance with Multiple Standby Databases
Redo Transport Compression
Standby Apply Performance


Redo Transport Compression
Conserve Bandwidth and Improve RPO when Bandwidth Constrained

Test configuration:
12.5 MB/second of available bandwidth
22 MB/second redo volume

[Chart: transport lag (MB) over elapsed time (minutes) for 22 MB/sec uncompressed redo vs. 12 MB/sec compressed redo]

Uncompressed volume exceeds available bandwidth:
Recovery Point Objective (RPO) is impossible to achieve
Perpetual increase in transport lag

A 50% compression ratio results in:
Volume < bandwidth = RPO achieved
The ratio will vary across workloads

Requires the Advanced Compression option
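Transport compression is enabled per destination with the COMPRESSION attribute of LOG_ARCHIVE_DEST_n (a standard attribute that requires the Advanced Compression option; the service and database names are placeholders):

    -- Compress redo for this destination to fit within constrained bandwidth
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=chicago ASYNC NOAFFIRM COMPRESSION=ENABLE DB_UNIQUE_NAME=chicago
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';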

Agenda
Failover and Switchover Timings
SYNC Transport Performance
ASYNC Transport Performance
Primary Performance with Multiple Standby Databases
Redo Transport Compression
Standby Apply Performance


Standby Apply Performance Test

Redo apply was first disabled to accumulate a large number of log files at the standby database. Redo apply was then restarted to evaluate the maximum apply rate for this workload.
All standby log files were written to disk in the Fast Recovery Area.
Exadata Write-Back Flash Cache increased the redo apply rate from 72 MB/second to 174 MB/second for the test workload (Oracle 11.2.0.3).
Apply rates will vary based upon platform and workload.

The achieved volumes do not represent physical limits:
They only represent this particular test configuration and workload; higher apply rates have been achieved in practice by production customers.
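A sketch of the statements typically used on a physical standby for this kind of test, along with a query against the standard V$RECOVERY_PROGRESS view to observe the apply rate:

    -- Stop redo apply so log files accumulate at the standby
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    -- Restart redo apply and let it catch up
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    -- Observe the apply rate while recovery catches up
    SELECT item, sofar, units
      FROM v$recovery_progress
     WHERE item IN ('Active Apply Rate', 'Average Apply Rate');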

Apply Performance at Standby Database

Test 1: no Write-Back Flash Cache
On an Exadata X2-2 quarter rack
Swingbench OLTP workload
72 MB/second apply rate
I/O bound during checkpoints
1,762 ms for checkpoint complete
110 ms DB File Parallel Write

Apply Performance at Standby Database

Test 2: a repeat of the previous test, but with Write-Back Flash Cache enabled
On an Exadata X2-2 quarter rack
Swingbench OLTP workload
174 MB/second apply rate
Checkpoint completes in 633 ms vs. 1,762 ms
DB File Parallel Write is 21 ms vs. 110 ms

Two Production Customer Examples
Data Guard Redo Apply Performance

Thomson Reuters
Data warehouse on Exadata, prior to Write-Back Flash Cache
While resolving a gap, an average apply rate of 580 MB/second was observed

Allstate Insurance
Data warehouse ETL processing resulted in an average apply rate of 668 MB/second over a 3-hour period, with peaks hitting 900 MB/second

Redo Apply Performance for Different Releases
Range of Observed Apply Rates for Batch and OLTP

[Chart: standby apply rate (MB/sec, 0 - 700) showing the range of observed apply rates, with high-end values for batch and for OLTP workloads]

