November 2015
Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to
SolutionLab@hds.com. To assist the routing of this message, use the paper number in the subject and the title of this white
paper in the text.
Contents
Solution Overview
Hardware Elements
Software Elements
Solution Design
Hitachi Compute Blade 2500 Chassis Configuration
520X B2 Server Blade Architecture
Fibre Channel SAN Architecture
Storage Architecture
Hitachi NAS Platform 4060 Architecture
Network File System Design for Shared Binaries
Management Server
Network Architecture
SAP Storage Connector API Fibre Channel Client
Journal Sizing
Inter-site Configuration
Disaster Recovery and Replication Components
Hitachi Compute Blade 2500 (CB 2500) chassis with 520X B2 server blades
SAP HANA
Testing of this solution in the lab was only on a 2+1 configuration. However, this reference architecture supports the scale-out configurations listed in Hitachi Unified Compute Platform for the SAP HANA Platform using 3 TB SAP HANA Nodes in a Scale-out 48 TB Configuration of 16 Active Nodes and 3 Standby Nodes with Hitachi Compute Blade 2500 Chassis, 520X B2 Server Blades, and Hitachi Virtual Storage Platform G1000 Reference Architecture Guide (AS-399-02).
For information concerning your implementation, contact your GSS representative.
This document describes an example architecture for disaster recovery using Hitachi Universal Replicator on Unified Compute Platform for SAP HANA in a scale-out configuration.
The scale-out environment for Unified Compute Platform for SAP HANA is a preconfigured analytical appliance that provides real-time access to operational data for use in analytic models. Changes to this architecture require approval from the following at Hitachi Data Systems:
Sales
Failure of SAP HANA results in revenue loss. For protection from this loss, use two sites in the disaster recovery strategy. In addition to hosting failover production, the second site can handle the quality assurance environment of the SAP HANA landscape.
The primary business problem this solution answers is disaster recovery for SAP HANA. This solution performs
asynchronous replication of SAP HANA data volumes and log volumes on Hitachi Virtual Storage Platform G1000 to the
secondary site. It also performs asynchronous replication of the SAP HANA binaries and other configuration files stored in
the /hana/shared file system on Hitachi NAS Platform 4060 to the secondary site.
Data centers at each site must have almost identical hardware for this disaster recovery solution. Implementing Hitachi Universal Replicator in this environment permits the additional use of the secondary site as a quality assurance environment. This additional use requires extra disk drives on the secondary site storage to run the quality assurance system.
Hitachi Virtual Storage Platform G1000 technology permits maintaining sufficient performance for the SAP HANA
production instance and site-to-site replication.
This technical paper assumes you have familiarity with the following:
Solution Overview
The primary site (Site A) contains a production SAP HANA database instance. Implement this site as a scale-out configuration of Hitachi Unified Compute Platform for the SAP HANA platform.
The secondary site (Site B) is an exact replica of the primary site, with the exception of optional additional storage disks. Site B houses the following:
(Optional) Non-production instances, such as the following, that require additional disks:
Quality assurance
Development
The design of the SAP HANA in-memory database enables this solution to use the same set of Hitachi Compute Blade
2500 nodes for production and non-production instances.
In the test environment for this solution, the Hitachi Virtual Storage Platform G1000 at the secondary site housed two sets
of disks for data, log, and /hana/shared LUNs for the following:
In this solution, install the command control interface on the management servers at the primary site and the secondary site. The command control interface performs the data replication operations within Hitachi Virtual Storage Platform G1000 at each site using Hitachi Open Remote Copy Manager instances. Each instance at the primary site and secondary site management server has its own Hitachi Open Remote Copy Manager configuration file that lists the following for replication between the two sites:
Hitachi Virtual Storage Platform G1000 works with Hitachi Universal Replicator software to enable SAP HANA disaster
recovery. This solution uses a Fibre Channel over IP (FCIP) switch for the wide area network connections between each
Virtual Storage Platform G1000 at the primary site and the secondary site. When planning the SAP HANA disaster recovery setup, consider the options for long-distance data replication in the landscape, including the pre-existing enterprise network infrastructure and the service provider.
With Hitachi Universal Replicator, updates on the SAP HANA nodes to the primary production volume on the primary site
Virtual Storage Platform G1000 are copied to a local journal volume at primary site storage. The secondary site Virtual
Storage Platform G1000 pulls data from the primary site journal volume across the inter-site wide area network
connection to the secondary volume. The local system is free to perform its role as a transaction processing resource
rather than as a replication engine.
For the asynchronous replication of Hitachi NAS Platform LUNs hosting the /hana/shared file system on Virtual Storage
Platform G1000, perform a one-time initial configuration after completing the initial pair copy operations to configure the
RAID mirror relationships between the NAS Platform LUNs at both sites. Make the NAS Platform clusters at both the sites
aware of the Universal Replicator LUN relationship between them during this initial configuration by using the RAID
mirroring command (sd-mirror-remotely).
In case of an outage or any component failure in the primary site, an administrator initiates a manual failover to the
secondary site using customized scripts. There are two different possibilities for enabling client connection recovery,
using either virtual IP failover or DNS failover configuration. The actual implementation differs, based on the network and
cluster management capabilities.
This solution supports the following four different disaster recovery options.
1. Add a Disaster Recovery Site
This option is for a primary site for production and a secondary site for disaster recovery.
2. Add a Quality Assurance or Development Instance
This option is for a primary site for production and a secondary site for disaster recovery, with a single quality assurance or development SAP HANA instance.
3. Disaster Recovery Connectivity Bundle
This option is for a Fibre Channel over IP (FCIP) switch between two sites.
4. Only Disaster Recovery Site
This option is for adding a secondary site for a disaster recovery solution to an existing SAP HANA landscape. (The
primary site for production already exists.)
To perform the replication of Hitachi Unified Compute Platform for the SAP HANA platform, the reference solution uses
the following:
Figure 1
Hardware Elements
Table 1 describes the hardware used to deploy a configuration of three active nodes and one standby node at each site. All configurations of Hitachi Unified Compute Platform for the SAP HANA Platform support Hitachi Universal Replicator.
Table 1. Hardware Elements
Hardware
Quantity
Configuration
Role
2 management modules
24
768 GB RAM
2 cluster ports
2 10 Gb ports
2 Ethernet ports
Table 1. Hardware Elements (Continued)
Hardware
Quantity
Configuration
Role
MPB - 2 pairs
NTP
1 GbE Management
Network
48 ports
1 GbE
24 ports
16 x 8Gb/sec Fibre Channel
Hitachi Compute Blade 2500
Hitachi Compute Blade 2500 delivers enterprise computing power and performance with unprecedented scalability and
configuration flexibility. Lower your costs and protect your investment.
Flexible I/O architecture and logical partitioning allow configurations to match application needs exactly with Hitachi
Compute Blade 2500. Multiple applications easily and securely co-exist in the same chassis.
Add server management and system monitoring at no cost with Hitachi Compute Systems Manager. Seamlessly integrate
with Hitachi Command Suite in Hitachi storage environments.
This configuration uses 24 520X B2 server blades in the Hitachi Compute Blade 2500 chassis. Table 2 has the
specifications for the 520X B2 server blades used in this solution.
Table 2. 520X B2 Server Blade Configuration
Feature
Configuration
Processors
Memory
Network ports
Other interfaces
Processor SKU
Processor frequency
Processor cores
Memory DIMM slots
Hitachi NAS Platform 4060
Hitachi NAS Platform is an advanced and integrated network attached storage (NAS) solution. It provides a powerful tool
for file sharing, file server consolidation, data protection, and business-critical NAS workloads.
Powerful hardware-accelerated file system with multi-protocol file services, dynamic provisioning, intelligent tiering,
virtualization, and cloud infrastructure
Seamless integration with Hitachi SAN storage, Hitachi Command Suite, and Hitachi Data Discovery Suite for
advanced search and index
Integration with Hitachi Content Platform for active archiving, regulatory compliance, and large object storage for
cloud infrastructure
This solution uses NAS Platform 4060 servers for file system sharing of the global binary and configuration SAP HANA
files. There are four NAS Platform 4060 server nodes.
The system management unit provides front-end server administration and monitoring tools for NAS Platform. It supports
clustering and acts as a quorum device in a cluster.
Data LUNs
Log LUNs
This solution uses two Hitachi Virtual Storage Platform G1000 storage platforms.
To properly size the storage, refer to Hitachi Unified Compute Platform for the SAP HANA Platform using 3 TB SAP HANA
Nodes in a Scale-out 48 TB Configuration of 16 Active Nodes and 3 Standby Nodes with Hitachi Compute Blade 2500
Chassis, 520X B2 Server Blades, and Hitachi Virtual Storage Platform G1000 Reference Architecture Guide (AS-399-02).
At any time, the secondary site has only one live SAP HANA instance, used for either production or quality assurance. Normally, the quality assurance instance runs at the secondary site; in case of a service outage at the primary site, the secondary site takes over as the production instance.
Use the server priority manager at the secondary site to do the following:
Designate the prioritized ports (replication) and non-prioritized ports (quality assurance).
Set the upper limits and thresholds for the I/O activity of these ports to prevent low-priority activities from negatively affecting high-priority activities, such as replication.
Additional information is available in the Hitachi Virtual Storage Platform G1000 Performance Guide.
Software Elements
Table 3 describes the software products used to deploy this three active node and one standby node configuration.
Table 3. Software Elements
Software
Version
Microcode dependent
SMU software
12.4.3924.05
12.4.3924.11
Microcode dependent
01-34-03/04
8.0.1
Operating System
SAP HANA
Standard Edition
4.1.3b or later
7.4.00 or later
7.4.00 or later
Solution Design
The detailed design for using Hitachi Universal Replicator with this Hitachi Unified Compute Platform for the SAP HANA
platform solution is based on specifications from SAP and is a 3+1 node configuration that includes the following:
One 2-port 10GBASE-SR PCI-Ex card for each of the following IOBD slots:
01B
03A
09B
11A
02A
04B
10A
12B
One Hitachi FIVE-FX 16 Gb/sec 2-port Fibre Channel PCI-Ex card for each of the following IOBD slots:
01A
09A
04A
12A
Figure 2 shows the front and back view of Chassis 1 for Hitachi Compute Blade 2500. Use the same configuration for the remaining chassis.
Figure 2
Table 4 on page 14 shows the Hitachi Compute Blade 2500 chassis configuration of all the SAP HANA nodes.
Table 4. Hitachi Compute Blade 2500 Chassis Configuration of SAP HANA Nodes for Each Site
Chassis
Server Blades
Chassis 1
1, 3, 5, and 7
Master
Chassis 1
Worker
Chassis 2
1, 3, 5, and 7
Standby
Server Blades
432
Total Memory
9 TB
Table 6. Storage Port Mapping

SAP HANA Node   Chassis     Virtual Storage Platform G1000 Ports
Node1           Chassis 1   1C
Node1           Chassis 1   2C
Node2           Chassis 1   3C
Node2           Chassis 1   4C
Node3           Chassis 2   5C
Node3           Chassis 2   6C
HNAS            HNAS1       1A
HNAS            HNAS1       2A
HNAS            HNAS2       1B
HNAS            HNAS2       2B
Set the port properties for the direct connection between Hitachi Compute Blade 2500 and Hitachi Virtual Storage
Platform G1000, as shown in Table 7.
Table 7. Port Properties

Property          Value
Port Attribute    Target
Port Security     Disabled
Port Speed        Auto
Fabric            ON
Connection Type   P-to-P
The Hitachi FIVE-FX 16 Gb/sec HBA can emulate FC-SW virtually. Set the BIOS of the Hitachi FIVE-FX HBA to enable Multiple Port ID. Use the Fabric storage port setting of ON to set FC-SW virtual mode.
Storage Architecture
The central storage system for the SAP HANA scale-out cluster is a Hitachi Virtual Storage Platform G1000 storage
platform. Several usage aspects divide the space provided by Virtual Storage Platform G1000, as follows:
Block storage provisioning for the Hitachi NAS Platform shared file system to store the SAP HANA binaries and cluster-wide configuration files.
This design uses a building-block approach in multiples of four active nodes, so storage for this setup is provisioned with a four-active-node building block.
Figure 3 on page 17 shows the RAID group configuration for the Virtual Storage Platform G1000 architecture used in the
SAP HANA appliance with four active nodes and one standby node.
Each SAP HANA node has its own data volume and log volume. Only active SAP HANA nodes need data volumes and log
volumes. Standby nodes do not require these volumes.
This design follows a four-node building block approach for the SAP HANA data volumes, log volumes, and shared binaries. Provision the parity groups in Figure 3, as follows.
A single parity group configured as RAID-6 (6D+2P) on 900 GB drives provisions the operating system LUNs for SAP HANA nodes 1 to 5 (up to a maximum of 19) on Virtual Storage Platform G1000.
From this parity group, create five LDEVs, each with a capacity of 100 GB.
Map each LDEV exclusively to the corresponding SAP HANA node as LUN ID 00.
Each node has its own 100 GB LUN on Virtual Storage Platform G1000 for the operating system volume.
The installation of SUSE Linux Enterprise Server for SAP Applications or Red Hat Enterprise Linux resides on the operating system LUN.
The block storage for Hitachi NAS Platform consists of three parity groups on Virtual Storage Platform G1000,
each configured as RAID-6 (6D+2P) on 24 900 GB drives to store the shared binaries and configuration files of
the SAP HANA database.
In each of the three parity groups, create two LDEVs of 2400 GB each.
Create a dynamic provisioning pool named HNAS_HDP_pool. Assign all the created Hitachi NAS Platform LDEVs
to this pool. This allows the use of all the disks concurrently on NAS Platform for better performance.
For a 4-node building block, create six virtual volumes, each with 2400 GB in HNAS_HDP_pool. Do the LUN path
assignment for these virtual volumes to the ports in Table 6 on page 14 connected to Hitachi NAS Platform.
For the SAP HANA log volumes, create two parity groups configured as RAID-6 (6D+2P) on 16 900 GB drives.
Map each SAP HANA log volume to all SAP HANA nodes at each port, using LUN ID 1 for the specified host.
For the SAP HANA data volumes, create four parity groups, each configured as RAID-6 (14D+2P) on 64 900 GB
drives.
Create four LDEVs, each with a capacity of 2816 GB, per parity group. Table 8 on page 18 shows the parity groups and LDEVs created for data volumes.
Assign four LDEVs (LUN IDs 2-5) for use as data volumes to each SAP HANA node, as shown in Table 8.
Figure 3
Table 8 on page 18 shows the parity groups and LDEV assignment of boot volumes, the Hitachi NAS Platform volumes, the SAP HANA log volumes, and the SAP HANA data volumes for the production system at both sites.
Table 8. Parity Groups and LDEV Assignment of Operating System Boot, Hitachi NAS Platform, SAP HANA Log Volumes, and SAP HANA Data Volumes for the Production System at the Primary and Secondary Sites

Operating system boot volumes, one parity group, RAID-6 (6D+2P) on 900 GB drives:
  00:01:00   HANA_OS_LUN_N1   100 GB
  00:02:00   HANA_OS_LUN_N2   100 GB
  00:03:00   HANA_OS_LUN_N3   100 GB
  00:04:00   HANA_OS_LUN_N4   100 GB
Hitachi NAS Platform volumes, three parity groups, RAID-6 (6D+2P) on 900 GB drives:
  HNAS_VOL_1   2400 GB
  HNAS_VOL_2   2400 GB
  HNAS_VOL_3   2400 GB
  HNAS_VOL_4   2400 GB
  HNAS_VOL_5   2400 GB
  HNAS_VOL_6   2400 GB
SAP HANA log volumes, two parity groups, RAID-6 (6D+2P) on 900 GB drives:
  00:01:01   HANA_LOG_N1   600 GB
  00:02:01   HANA_LOG_N2   600 GB
  00:03:01   HANA_LOG_N3   600 GB
  00:04:01   HANA_LOG_N4   600 GB
SAP HANA data volumes, four parity groups, RAID-6 (14D+2P) on 900 GB drives:
  00:01:02   HANA_DATA_N1_01   2816 GB
  00:02:02   HANA_DATA_N2_01   2816 GB
  00:03:02   HANA_DATA_N3_01   2816 GB
  00:04:02   HANA_DATA_N4_01   2816 GB
  00:01:03   HANA_DATA_N1_02   2816 GB
  00:02:03   HANA_DATA_N2_02   2816 GB
  00:03:03   HANA_DATA_N3_02   2816 GB
  00:04:03   HANA_DATA_N4_02   2816 GB
  00:01:04   HANA_DATA_N1_03   2816 GB
  00:02:04   HANA_DATA_N2_03   2816 GB
  00:03:04   HANA_DATA_N3_03   2816 GB
  00:04:04   HANA_DATA_N4_03   2816 GB
  00:01:05   HANA_DATA_N1_04   2816 GB
  00:02:05   HANA_DATA_N2_04   2816 GB
  00:03:05   HANA_DATA_N3_04   2816 GB
  00:04:05   HANA_DATA_N4_04   2816 GB
Table 9 on page 19 shows the dynamic provisioning pool IDs and virtual volume LDEV IDs for Hitachi NAS Platform.
Table 9. Dynamic Provisioning Pool IDs and Virtual Volume LDEV IDs of Hitachi NAS Platform for the Production System at Both Sites

Dynamic provisioning pool: HNAS_HDP_Pool

Virtual Volume Name   Virtual Volume LDEV ID   Virtual Volume Size   MPB Assignment
HNAS_HANA_VVOL_1      00:0A:01                 2400.00 GB            MPB0
HNAS_HANA_VVOL_2      00:0A:02                 2400.00 GB            MPB4
HNAS_HANA_VVOL_3      00:0A:03                 2400.00 GB            MPB0
HNAS_HANA_VVOL_4      00:0A:04                 2400.00 GB            MPB4
HNAS_HANA_VVOL_5      00:0A:05                 2400.00 GB            MPB0
HNAS_HANA_VVOL_6      00:0A:06                 2400.00 GB            MPB4
While mapping the LUN path assignment for each node, add the LUNs in the following order:
1. Map the OS LUN for the specific SAP HANA node.
2. Map the log volume and data volume of each SAP HANA node.
Table 10 shows an example configuration of the LUN path assignment for node01 on the primary site. The LUN assignment is the same for all nodes except for the first LUN, which is the operating system LUN of that specific node.
Table 10. LUN Path Assignment at the Primary Site

LUN ID   LDEV ID    LDEV Name
0000     00:01:00   hananode01
0001     00:01:01   LOG_1
0002     00:01:02   DATA_1_01
0003     00:01:03   DATA_1_02
0004     00:01:04   DATA_1_03
0005     00:01:05   DATA_1_04
0006     00:02:01   LOG_2
0007     00:02:02   DATA_2_01
0008     00:02:03   DATA_2_02
0009     00:02:04   DATA_2_03
0010     00:02:05   DATA_2_04
0011     00:03:01   LOG_3
0012     00:03:02   DATA_3_01
0013     00:03:03   DATA_3_02
0014     00:03:04   DATA_3_03
0015     00:03:05   DATA_3_04
0016     00:04:01   LOG_4
0017     00:04:02   DATA_4_01
0018     00:04:03   DATA_4_02
0019     00:04:04   DATA_4_03
0020     00:04:05   DATA_4_04
Figure 4 shows the LUN and port assignment for the maximum of 19 SAP HANA server nodes on each Hitachi Virtual Storage Platform G1000.
Figure 4
Table 11 shows the parity groups and LDEV assignment of the Hitachi NAS Platform volumes, the SAP HANA log volumes, and the SAP HANA data volumes for the quality assurance system at the secondary site.
Table 11. Parity Groups and LDEV Assignment of Hitachi NAS Platform, SAP HANA Log Volumes, and SAP HANA Data Volumes for the Quality Assurance System at the Secondary Site

Hitachi NAS Platform volumes:
  HNAS_VOL_QA_1   2400 GB
  HNAS_VOL_QA_2   2400 GB
  HNAS_VOL_QA_3   2400 GB
  HNAS_VOL_QA_4   2400 GB
  HNAS_VOL_QA_5   2400 GB
  HNAS_VOL_QA_6   2400 GB
SAP HANA log volumes:
  HANA_LOG_QA_N1   600 GB
  HANA_LOG_QA_N2   600 GB
  HANA_LOG_QA_N3   600 GB
  HANA_LOG_QA_N4   600 GB
SAP HANA data volumes, RAID-6 (14D+2P) on 900 GB 10k RPM SAS drives:
  00:01:07   HANA_DATA_QA_N1_01   2816 GB
  00:02:07   HANA_DATA_QA_N2_01   2816 GB
  00:03:07   HANA_DATA_QA_N3_01   2816 GB
  00:04:07   HANA_DATA_QA_N4_01   2816 GB
  00:01:08   HANA_DATA_QA_N1_02   2816 GB
  00:02:08   HANA_DATA_QA_N2_02   2816 GB
  00:03:08   HANA_DATA_QA_N3_02   2816 GB
  00:04:08   HANA_DATA_QA_N4_02   2816 GB
  00:01:09   HANA_DATA_QA_N1_03   2816 GB
  00:02:09   HANA_DATA_QA_N2_03   2816 GB
  00:03:09   HANA_DATA_QA_N3_03   2816 GB
  00:04:09   HANA_DATA_QA_N4_03   2816 GB
  00:01:10   HANA_DATA_QA_N1_04   2816 GB
  00:02:10   HANA_DATA_QA_N2_04   2816 GB
  00:03:10   HANA_DATA_QA_N3_04   2816 GB
  00:04:10   HANA_DATA_QA_N4_04   2816 GB
Table 12 shows the dynamic provisioning pool IDs and virtual volume LDEV IDs for Hitachi NAS Platform.
Table 12. Dynamic Provisioning Pool IDs and Virtual Volume LDEV IDs of Hitachi NAS Platform for the Quality Assurance System at the Secondary Site

Dynamic provisioning pool: HNAS_QA_HDP_Pool

Virtual Volume Name   Virtual Volume LDEV ID   Virtual Volume Size   MPB Assignment
HNAS_QA_VVOL_1        00:0A:25                 2400.00 GB            MPB0
HNAS_QA_VVOL_2        00:0A:26                 2400.00 GB            MPB4
HNAS_QA_VVOL_3        00:0A:27                 2400.00 GB            MPB0
HNAS_QA_VVOL_4        00:0A:28                 2400.00 GB            MPB4
HNAS_QA_VVOL_5        00:0A:29                 2400.00 GB            MPB0
HNAS_QA_VVOL_6        00:0A:30                 2400.00 GB            MPB4
Table 13 shows an example configuration of the LUN path assignment for node01 at the secondary site. The LUN assignment is the same for all nodes except for the first LUN, which is the operating system boot LUN of that specific node.
Table 13. LUN Path Assignment at the Secondary Site

LUN ID   LDEV ID    LDEV Name
0000     00:01:00   hananode01
0001     00:01:01   LOG_1
0002     00:01:02   DATA_1_01
0003     00:01:03   DATA_1_02
0004     00:01:04   DATA_1_03
0005     00:01:05   DATA_1_04
0006     00:02:01   LOG_2
0007     00:02:02   DATA_2_01
0008     00:02:03   DATA_2_02
0009     00:02:04   DATA_2_03
0010     00:02:05   DATA_2_04
0011     00:03:01   LOG_3
0012     00:03:02   DATA_3_01
0013     00:03:03   DATA_3_02
0014     00:03:04   DATA_3_03
0015     00:03:05   DATA_3_04
0016     00:04:01   LOG_4
0017     00:04:02   DATA_4_01
0018     00:04:03   DATA_4_02
0019     00:04:04   DATA_4_03
0020     00:04:05   DATA_4_04
0021     00:01:01   LOG_QA_1
0022     00:01:02   DATA_QA_1_01
0023     00:01:03   DATA_QA_1_02
0024     00:01:04   DATA_QA_1_03
0025     00:01:05   DATA_QA_1_04
0026     00:02:01   LOG_QA_2
0027     00:02:02   DATA_QA_2_01
0028     00:02:03   DATA_QA_2_02
0029     00:02:04   DATA_QA_2_03
0030     00:02:05   DATA_QA_2_04
0031     00:03:01   LOG_QA_3
0032     00:03:02   DATA_QA_3_01
0033     00:03:03   DATA_QA_3_02
0034     00:03:04   DATA_QA_3_03
0035     00:03:05   DATA_QA_3_04
0036     00:04:01   LOG_QA_4
0037     00:04:02   DATA_QA_4_01
0038     00:04:03   DATA_QA_4_02
0039     00:04:04   DATA_QA_4_03
0040     00:04:05   DATA_QA_4_04
Private Management Network
Connect the private management interfaces of the Hitachi NAS Platform 4060 servers and the system management unit
to the Brocade ICX 6430-24 port switch. Devices connected to this private ICX 6430-24 port switch are only accessible
through the system management unit.
Storage Subsystem
This solution uses Hitachi Virtual Storage Platform G1000 as the storage subsystem. Hitachi NAS Platform has direct
attached Fibre Channel connections with Virtual Storage Platform G1000.
Server Connections
Figure 5 shows the back of Hitachi NAS Platform 4060.
Figure 5
Port C1 and Port C2 are the NAS Platform 4060 cluster ports. To enable clustering, do the following:
Connect Port C1 of the first NAS Platform server to Port C1 of the second NAS Platform server.
Connect Port C2 of the first NAS Platform server to Port C2 of the second NAS Platform server.
Port tg1 and Port tg2 are 10 GbE ports. Link aggregate and connect these ports to the Brocade VDX 6740-48 switches.
Connect Fibre Channel Port FC1 and Port FC3 directly to the Hitachi Virtual Storage Platform G1000 ports, as follows:
Connect port eth0 of the NAS Platform server to the management network on the Brocade ICX 6430-48 port switch.
Connect port eth1 of the NAS platform server to the private network on the Brocade ICX 6430-24 port switch.
For the direct connection between Hitachi NAS Platform and Hitachi Virtual Storage Platform G1000, set the port properties as shown in Table 14.
Table 14. Hitachi Virtual Storage Platform G1000 Port Properties

Property          Value
Port Attribute    Target
Port Security     Disabled
Port Speed        Auto (8 Gb/sec)
Fabric            OFF
Connection Type   FC-AL
With these system drives, create a single storage pool called HANABIN_<SID>.
Create the shared file system hana_shared_<SID> using the storage pool HANABIN_<SID> with the following:
Block size of 32 KB
Mount and then export the file system. Mount the NFS export /hana_shared_<SID> on the file system path /hana/shared/<SID> on all nineteen SAP HANA nodes (an example mount command follows below).
Set the MTU size to 9000 on both NAS Platform nodes.
Follow a similar procedure for the QA system on the secondary site.
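The exact export and mount options come from the appliance build documentation; as a minimal sketch, with the EVS address, NFS options, and SID as placeholders, the mount on a SAP HANA node looks like this:

# run on each SAP HANA node; <evs_ip> is the NAS Platform EVS address serving the export
mount -t nfs -o rw,hard,intr,tcp,nfsvers=3,rsize=1048576,wsize=1048576 <evs_ip>:/hana_shared_<SID> /hana/shared/<SID>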
Management Server
This solution uses one node of a four-node Quanta Cloud Technology QuantaPlex T41S-2U server for the management
server. The management server acts as a central device for managing the SAP HANA platform.
Manage the following from the management server:
NTP configuration
Figure 6 on page 27 shows the management server network ports using one dual port 1 GbE Base-T Intel i350 mezzanine
card.
Slot 01 Port 2: Connect this port to the local area network. It provides the 1 GbE network to the management server.
Slot 01 Port 1: Connect this port to the Brocade ICX 6430-48 port switch that provides the 1 GbE network to all other switches, chassis, and Hitachi NAS Platform nodes.
One dual port 10 GbE Intel 82599ES SFP+ OCP mezzanine card
One Emulex 2-port 8 Gb/sec Fibre Channel HBA on the PCIe slot
Connect the 10 GbE network ports to two different Brocade switches, VDX 6740-A and VDX 6740-B, to provide
management access to the SAP HANA nodes from the Quanta 2U4N server using the NFS network.
Figure 6
Install the following software on the management server:
PuTTY
Teraterm
WinSCP
Network Architecture
For the client network, two Brocade VDX 6740 switches provide external connectivity. In the solution rack, the switch
placement is as follows:
The scale-out solution for SAP HANA internally uses the NFS network and the inter-cluster network only. They can share a switch pair, given that the traffic can be strictly separated. Accomplish this by using VLANs on two ISL-paired Brocade VDX 6740 switches. In the solution rack, refer to the switches as follows:
Connect the SAP HANA inter-cluster network to Switch 51 and Switch 52 (Brocade VDX 6740). Set an MTU size
of 9100, in accordance with Brocade best practices.
By default, this network is isolated using VLAN 200. However, a different VLAN can be used to isolate this network.
Connect the SAP HANA client network to Switch 53 and Switch 54 (Brocade VDX 6740). Set an MTU size of 9100, in accordance with Brocade best practices.
Connect the up-link ports of the Brocade VDX 6740-48 switches to the local area network.
Connect the SAP HANA NFS network to Switch 51 and Switch 52 (Brocade VDX 6740). Set an MTU size of 9100, in accordance with Brocade best practices.
Hitachi NAS Platform Private Network
This network is used for the NAS Platform heartbeat.
Management Network
This network is used with the management server. See Management Server, starting on page 26.
The management network does not need to have a VLAN assigned to it.
The Brocade ICX 6430-48 port switch uses the default switch configuration.
The SAP HANA inter-cluster network, client network, and NFS network are required to have the following:
To meet these requirements, use four dual-port 10 GbE PCIe cards per SAP HANA node. Bond two ports from different PCIe network adapters at the operating system level using link aggregation, following the IEEE 802.3ad Link Aggregation standard, for each of the three networks (a bond configuration sketch follows below):
Inter-cluster network
NFS network
Client network
The connections of each bond must go to physically different VDX 6740 switches so that, if one switch fails, there is still a route to the corresponding host. This solution connects the two switches using an inter-switch link (ISL), which lets them act together as a single logical switch: if one switch fails, there is still a path to the hosts.
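A minimal sketch of one such bond on SUSE Linux Enterprise Server follows; the slave ports are taken from Table 15, but the port-to-bond mapping, IP address, and VLAN assignment are illustrative placeholders, not values mandated by this solution:

# /etc/sysconfig/network/ifcfg-bond0 (example: SAP HANA inter-cluster network)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.200.11/24'
MTU='9100'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100'
BONDING_SLAVE0='eth9901'
BONDING_SLAVE1='eth9902'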
The complete network setup uses the ports on the 10 GbE PCI-Ex cards listed in Table 15 as an example configuration for chassis 1. Follow a similar configuration for the SAP HANA nodes on the other chassis.
Table 15. Network Setup Using 10 GbE PCI-Ex NIC Cards

PCI-Ex Slot Number   Operating System Level eth Port
IOBD 01B             eth9921
IOBD 01B             eth9911
IOBD 02A             eth9902
IOBD 03A             eth9901
IOBD 04B             eth9922
IOBD 04B             eth9912
IOBD 09B             eth9921
IOBD 09B             eth9911
IOBD 10A             eth9902
IOBD 11A             eth9901
IOBD 12B             eth9922
IOBD 12B             eth9912
Figure 7
Configure the SAP HANA inter-cluster network using operating system-level bonding on every node. Make the following 10 GbE connections, as listed in Table 16.
Table 16. SAP HANA Inter-Cluster Port Mapping
Chassis, PCI-Ex Slot Number, Port
Bond
Bond 0 of Node 1
Bond 0 of Node 1
Bond 0 of Node 2
Bond 0 of Node 2
Figure 8
Configure the SAP HANA NFS network using operating system-level bonding on every node. Make the 10 GbE connections using the mappings in Table 17.
Table 17. SAP HANA NFS Network Port Mappings
Chassis, PCI-Ex Slot Number, Port
Bond
Bond 1 of Node 1
Bond 1 of Node 1
Bond 1 of Node 2
Bond 1 of Node 2
Figure 9
Configure the SAP HANA client network using operating system-level bonding on every node. Make the following 10 GbE connections using the port mappings shown in Table 18.
Table 18. SAP HANA Client Network Port Mappings
Chassis, PCI-Ex Slot Number, Port
Bond
Bond 2 of Node 1
Bond 2 of Node 1
Bond 2 of Node 2
Bond 2 of Node 2
SAP Storage Connector API Fibre Channel Client
Failover handling
Storage connector clients implement the functions defined in the storage connector API.
The scale-out of Hitachi Unified Compute Platform for the SAP HANA platform uses the fcClientLVM implementation,
which supports the use of logical volume manager. SAP supports this solution to enable the use of high-performance
Fibre Channel devices for a scale-out installation.
The fcClientLVM implementation uses standard Linux commands, such as multipath and sg_persist. Install and configure
these commands.
The fcClientLVM implementation is responsible for mounting the SAP HANA volumes. It also implements a proper fencing
mechanism during a failover by means of SCSI-3 persistent reservations.
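For reference, the multipath devices and the reservation state that the connector manages can be inspected on any node with these standard Linux tools; the device alias shown is a placeholder:

# list the multipath devices presented to the node
multipath -ll
# read the registered keys and the current SCSI-3 persistent reservation on a data volume
sg_persist --in -k /dev/mapper/<hana_data_device>
sg_persist --in -r /dev/mapper/<hana_data_device>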
Configuration of the SAP Storage Connector API is contained within the SAP global.ini file in /hana/shared/<SID>/global/hdb/custom/config.
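As a rough sketch of that configuration, the [storage] section of global.ini for the fcClientLVM connector has the following shape; the logical volume manager names and the persistent reservation type value are illustrative placeholders, and the authoritative parameter reference is the SAP documentation for the Fibre Channel storage connector:

[storage]
ha_provider = hdb_ha.fcClientLVM
partition_*_*__prtype = 5
partition_1_data__lvmname = <vg_hana_data_1>-<lv_hana_data_1>
partition_1_log__lvmname = <vg_hana_log_1>-<lv_hana_log_1>
partition_2_data__lvmname = <vg_hana_data_2>-<lv_hana_data_2>
partition_2_log__lvmname = <vg_hana_log_2>-<lv_hana_log_2>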
Journal Sizing
The following are maximum throughput numbers, per journal:
If the distance between the sites is less than 2000 km (about 1200 miles), then the maximum throughput is about 750
MB/sec.
If the distance is greater than 2000 km, then the maximum throughput is about 500 MB/sec.
If the distance is greater than 5000 km (about 3100 miles), then the maximum throughput is about 350 MB/sec.
With one RAID group configured as RAID-6 (6D+2P) using 900 GB SAS drives, testing in the Hitachi Data Systems lab achieved a throughput of 69 MB/sec.
For the base configuration of the solution, it is recommended to use at least two RAID groups configured as RAID-6
(6D+2P) with 900 GB SAS drives in a dynamic provisioning pool for use as a journal group.
Use this formula to calculate the journal capacity:
Journal Capacity = Peak hour change (MB) × Number of hours
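As an illustration with assumed numbers, if the peak hour change is 700 GB and the journal must buffer 20 hours of writes to ride out an extended replication-link outage, the required journal capacity is 700 GB/hour × 20 hours = 14,000 GB, which corresponds to three RAID-6 (6D+2P) groups (14,400 GB of journal space) in Table 19.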
Table 19 provides the journal sizing information.
Table 19. Journal Sizing

RAID Groups, RAID-6 (6D+2P)   Peak Write Throughput (MB/sec)   Peak Write Throughput (GB/hour)   Journal Space (GB)   Number of Hours
2                             138                              485                               9600                 19.7
3                             207                              727                               14400                19.8
4                             276                              970                               19200                19.7
5                             345                              1212                              24000                19.8
6                             414                              1455                              28800                19.7
7                             483                              1698                              33600                19.7
8                             552                              1940                              38400                19.7
9                             621                              2183                              43200                19.7
10                            690                              2425                              48000                19.7
11                            759                              2668                              52800                19.7
Using the two RAID groups configured as RAID-6 (6D+2P), create LDEVs as in Table 20.
Table 20. LDEVs for Use with HUR_Journal_Group Dynamic Provisioning Pool

First parity group, RAID-6 (6D+2P) on 900 GB 10k RPM SAS drives:
  00:0B:01   Journal_1   2400 GB   MPB0
  00:0B:02   Journal_2   2400 GB   MPB4
Second parity group, RAID-6 (6D+2P) on 900 GB 10k RPM SAS drives:
  00:0B:03   Journal_3   2400 GB   MPB0
  00:0B:04   Journal_4   2400 GB   MPB4
Create a dynamic provisioning pool named HUR_Journal_Group and assign all the LDEVs created in Table 20 to this pool.
Create the four virtual volumes in Table 21, each with 2400 GB for use as a journal volume.
Table 21. Dynamic Provisioning Pool IDs and Virtual Volume LDEV IDs

Dynamic provisioning pool: HUR_Journal_Group_1
  Journal_VVOL_1   00:0C:01   2400.00 GB   MPB0
  Journal_VVOL_2   00:0C:02   2400.00 GB   MPB4
Dynamic provisioning pool: HUR_Journal_Group_2
  Journal_VVOL_3   00:0C:03   2400.00 GB   MPB0
  Journal_VVOL_4   00:0C:04   2400.00 GB   MPB4
To achieve a zero RPO for a distance lower than 2000 km (about 1240 miles), the maximum journal throughput of 750 MB/sec is needed, which is only possible by using 11 RAID-6 (6D+2P) parity groups.
During implementation of the Hitachi Universal Replicator service, Hitachi Data Systems Global Services needs to right-size the journal volumes based on the actual workload, throughput, and RPO requirement. Then, the following needs to happen:
Add disks to create additional RAID groups configured as RAID-6 (6D+2P) on 900 GB drives for the journal volumes.
Create additional LDEVs and add them to the existing dynamic provisioning pool HUR_Journal_Group.
For every RAID group added to the pool, create two virtual volumes of 2400 GB each to use as journal volumes.
Inter-site Configuration
SAN switches and long-distance amplifiers must be placed between the primary site and the secondary site, as applicable.
If SAN switches and long-distance amplifiers already exist in the infrastructure, you can utilize that same infrastructure.
Table 22 lists the target and initiator ports of the two storage systems at each site, along with the zoning aliases. Both sites need one zoning configuration (an example of the switch-side zone definitions follows Table 22).
Table 22. Zoning Configuration

Initiator Port   RCU Target Port   Zone Name
PrimA_Port3B     SecB_Port7B       PrimA_Port3B_SecB_Port7B
PrimA_Port4B     SecB_Port8B       PrimA_Port4B_SecB_Port8B
SecB_Port3B      PrimA_Port7B      SecB_Port3B_PrimA_Port7B
SecB_Port4B      PrimA_Port8B      SecB_Port4B_PrimA_Port8B
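On a Brocade Fibre Channel switch, zones of this form can be created with the standard Fabric OS zoning commands. The sketch below assumes the port aliases in Table 22 have already been defined with alicreate, and the configuration name is illustrative:

zonecreate "PrimA_Port3B_SecB_Port7B", "PrimA_Port3B; SecB_Port7B"
zonecreate "PrimA_Port4B_SecB_Port8B", "PrimA_Port4B; SecB_Port8B"
cfgcreate "HUR_Replication_Cfg", "PrimA_Port3B_SecB_Port7B; PrimA_Port4B_SecB_Port8B"
cfgenable "HUR_Replication_Cfg"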
Table 23 has the details on the zoning between the management server and Hitachi Virtual Storage Platform G1000.
With this configuration, command devices on the primary system and secondary system are accessible on the
management servers at the primary site and secondary site.
Table 23. Hitachi Compute Rack Management Server Zoning

Virtual Storage Platform G1000 Alias   Hitachi Compute Rack Management Server Alias   Zone Name
PrimA_Port5B                           MGMT_PRI_Port0                                 MGMT_Pri_Sec
SecB_Port5B                            MGMT_SEC_Port0                                 MGMT_Pri_Sec
PrimA_Port6B                           MGMT_PRI_Port1                                 MGMT_Pri_Sec
SecB_Port6B                            MGMT_SEC_Port1                                 MGMT_Pri_Sec
Disaster Recovery and Replication Components
In this reference architecture for SAP HANA disaster recovery, the Hitachi Universal Replicator setup requires configuring the following:
Storage system: command devices and Hitachi Universal Replicator volumes (P-VOLs and S-VOLs)
Quanta Cloud Technology QuantaPlex T41S-2U management server: Hitachi Open Remote Copy Manager
Two instances of Open Remote Copy Manager are required for Hitachi Universal Replicator replication to be operational.
A Hitachi Open Remote Copy Manager instance on the primary management server manages the P-VOLs.
A Hitachi Open Remote Copy Manager instance on the secondary management server manages the S-VOLs.
The Hitachi Open Remote Copy Manager configuration file defines the communication path and the logical units to be
controlled. Each instance has its own configuration file. The configuration file lists the following for replication:
Log volumes
Figure 10 shows the content of the configuration file (horcm04.conf) used by the Hitachi Open Remote Copy Manager
instance on the primary management server.
Figure 10
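As a sketch of the structure of such a file, a Hitachi Open Remote Copy Manager configuration contains sections like the following; the serial number, LDEV entries, IP addresses, MU number, and UDP service names are placeholders, while the device group name HANADR matches the pair creation command used later in this paper:

HORCM_MON
# ip_address          service   poll(10ms)   timeout(10ms)
<primary_mgmt_ip>     horcm04   1000         3000

HORCM_CMD
# command device on the primary Virtual Storage Platform G1000
\\.\CMD-<primary_serial>

HORCM_LDEV
# dev_group   dev_name          Serial#            CU:LDEV(LDEV#)   MU#
HANADR        HANA_DATA_N1_01   <primary_serial>   00:01:02         0
HANADR        HANA_LOG_N1       <primary_serial>   00:01:01         0

HORCM_INST
# dev_group   ip_address            service
HANADR        <secondary_mgmt_ip>   horcm06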
Figure 11 shows the configuration file (horcm06.conf) used by the Hitachi Open Remote Copy Manager instance on the secondary management server.
Figure 11
Figure 12 lists the entries that need to be added to the services file of the management server on both sides for Open Remote Copy Manager to function.
Figure 12
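Typically, each instance name is paired with an unused UDP port in the services file, for example (the port numbers shown are placeholders):

horcm04   11004/udp   # Open Remote Copy Manager instance on the primary management server
horcm06   11006/udp   # Open Remote Copy Manager instance on the secondary management server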
Hitachi Universal Replicator Setup
The reference architecture setup is as follows:
For the failover to the secondary site, this solution uses two initiator ports on Hitachi Virtual Storage Platform G1000
at the primary site connected to two RCU target ports on Hitachi Virtual Storage Platform G1000 at the secondary
site.
For the failback to the primary site, it uses two initiator ports from the storage system at the secondary site connected
to the two RCU target ports on the storage system at the primary site.
The initiator and RCU target ports on Hitachi Virtual Storage Platform G1000 on the primary site connect to the
Brocade 7800 Fibre Channel switch at the primary site.
The initiator and RCU target ports on Hitachi Virtual Storage Platform G1000 on the secondary site connect to the Brocade 7800 Fibre Channel switch at the secondary site.
Define the port attributes for initiator and target ports on Hitachi Virtual Storage Platform G1000. Configure the
storage system for Hitachi Universal Replicator replication by defining the logical paths for replication. Refer to the
Hitachi Virtual Storage Platform G1000 Hitachi Universal Replicator User Guide for the setup of logical paths for
replication and defining port attributes.
Initial Pair Copy Operation
Perform the initial data transfer, called the pair copy operation, between the primary site and secondary site SAP HANA node volumes (data, log, and the NAS Platform LUNs) with the following command, issued on the primary side:
# C:\horcm\etc\paircreate -IH04 -g HANADR -vl -f async 0
When the paircreate command executes, the initial copy starts. The primary storage system copies all the data in sequence from the P-VOL directly to the S-VOL.
During the initial copy process, the status of the P-VOL and S-VOL is COPY. On completion of the initial copy process, the
status of the P-VOL and S-VOL changes to PAIR.
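One way to watch this progress from the primary management server is the pairdisplay command, using the same device group and instance as the paircreate example; the exact output columns vary by Command Control Interface version:

C:\horcm\etc\pairdisplay -g HANADR -IH04 -fcx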
Initial Configuration for Replication of Hitachi NAS Platform 4060 LUNs
To use Hitachi Universal Replicator to replicate the SAP HANA LUNs on Hitachi NAS Platform, perform the following initial configuration steps. This is a one-time configuration to set up the RAID mirror relationship between LUNs on Hitachi NAS Platform. The steps give only a high-level overview of the important RAID mirroring commands on NAS Platform.
To perform the initial configuration, do the following.
1. Complete the initial pair copy (pair create) operation between the primary site and the secondary site for these SAP HANA volumes:
Data
Log
NAS Platform
ii. Copy the cluster information (cluster UUID and the cluster name) in a text editor window and keep it for later.
iii. Log on as root to the SMU of the secondary site.
iv. Verify that you are in the root folder.
v. Run the commands to get the LUIDs and Cod information of the NAS Platform disks to be replicated.
vi. Copy these files to the SMU on the primary site.
(3) On the console of NAS Platform for the primary site, do the following:
i.
ii. Go to the folder where you copied the files from the secondary site.
iii. Run the command to mirror the disks.
iv. List the span information to confirm the mirror relationship.
v. Add the UUID of the NAS Platform cluster on the secondary site to the span, using the cluster UUID and cluster name from Step 2.ii.
ii. List the span. Expect a clean output, not showing the production instance span.
3. Test the configuration by performing failover to the secondary site.
(1) Ensure the pair is synchronized completely.
(2) Shut down production on the primary site.
(3) Unmount /hana/shared/, the data file system, and the log file system at the primary site for production instance on
the SAP HANA nodes as well as Hitachi NAS Platform servers.
(4) Go to the secondary site and do the following:
i.
ii. Unmount /hana/shared/, the data file system, and the log file system at the secondary site for the quality assurance instance on the SAP HANA nodes as well as the NAS Platform servers.
iii. Using command control interface, run a pair split with read-write enabled (a command sketch follows these steps).
iv. List the file system. This should show HANA_SHARED_<SID> as unmounted.
v. After the first failover, use 'evsfs' to bind each file system to an EVS at the secondary site.
vi. Mount the production shared file system on NAS Platform.
vii. Mount the production shared file system on SAP HANA nodes.
viii. Check at the operating system level of the SAP HANA nodes if the /hana/shared file system is mounted.
ix. Start the SAP HANA system and validate if it successfully comes up.
4. Test the configuration by performing failback to primary site.
(1) Shut down the production SAP HANA database at the secondary site. Unmount the hana_shared_<SID> file system on the secondary NAS Platform cluster.
(2) Do a pair resync with swaps. Once it is 100% completed, start the failback.
(3) At the primary site, do the following:
i.
Using command control interface, run the pair split with read-write enabled.
ii. On the SMU, list the spans. This confirms the mirror relationship.
iii. Mount the hana_shared_<SID> file system on the primary NAS Platform cluster and HANA nodes. Check at
the operating system level of SAP HANA nodes if the /hana/shared file system is mounted.
iv. Start the SAP HANA system and validate if it successfully comes up.
v. Once validation is complete, run the pair resync with swaps.
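In sketch form, and assuming the same HANADR device group with the horcm06 instance on the secondary management server, the command control interface operations referred to in the failover and failback steps above look like the following; verify the exact options against the Command Control Interface reference before use:

# on the secondary management server: split the pair and enable read/write access to the S-VOLs
C:\horcm\etc\pairsplit -g HANADR -IH06 -rw

# reverse the copy direction so that the volumes at the surviving site become the P-VOLs
C:\horcm\etc\pairresync -g HANADR -IH06 -swaps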
If one site is permanently lost and the surviving LUs are promoted into the SSWS state, it is necessary to run 'sd-peg -up' on the surviving NAS Platform cluster to make it treat the S-VOLs as P-VOLs. Otherwise, never use 'sd-peg'.
In addition, registry changes made on one cluster while it is in production must be recorded when made, and then copied to the other cluster after the next failover:
If any new file systems are created, bind them to EVSs using the 'evsfs' command, and then export them.
If any file systems are deleted on one cluster, delete them from the registry of the other cluster using the 'filesystem-forget-and-delete-nv-data' command.
If any exports have been created, deleted, or modified on one cluster, make the same changes on the other cluster.
Engineering Validation
Validation of this reference architecture was conducted in the Hitachi Data Systems laboratory. The steps to fail over to the secondary site and then fail back to the primary site, using soft and hard techniques with Hitachi Universal Replicator, were tested.
Test Methodology
To test the setup, the following scenarios were executed in the lab:
Planned failover to the secondary site during mid-flush and then failback to the primary
Automated failover and failback using Hitachi Disaster Recovery Manager for SAP HANA without quality assurance on
the secondary site
Test Results
All the tests passed without issues. The RTO was less than an hour.
Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HDS.com
community.HDS.com
HITACHI is a trademark or registered trademark of Hitachi, Ltd. Hitachi Data Systems Corporation 2015. All rights reserved. Hi-Track is a trademark or registered trademark of
Hitachi Data Systems Corporation. Microsoft, Windows Server, and Internet Explorer are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service
marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by
Hitachi Data Systems Corporation.
AS-440-00, November 2015,