Technical white paper

Deploying an Oracle 11gR2 Real Application Cluster (RAC) database with HP Virtual Connect FlexFabric
Installation and configuration

Table of contents
Executive summary
Converged Infrastructure overview
Introduction to HP Virtual Connect FlexFabric
Deploying Oracle 11gR2 RAC with HP Virtual Connect FlexFabric
  Configuring FCoE and network
  Configure stacking link between two enclosures
  Configuring the Virtual Connect FlexFabric modules
Configuring storage resources
  Setting up access to the blades as hosts
  Setting up the boot device for installation
Installing Linux
Setting up Oracle Automatic Storage Management (ASM) with multipath after the Linux installation
  Installing HP Network and Multipath Enhancements for Linux
  Configuring Multipath drivers for Oracle ASM devices
Setting up Linux bonding for public and private networks
Setting up Jumbo Frames for public and private networks
  Configuration
  Test
Setup and installation of Oracle 11gR2
  Installing Oracle RAC
  Installing Oracle database
Appendix A: Hardware and software configuration tested
Appendix B: Oracle High Availability Internet Protocol (HAIP) redundant node interconnect usage
Appendix C: Global Naming Service configuration on 11gR2 RAC
Summary
For more information





Executive summary
This white paper illustrates how to install and configure an Oracle 11gR2 RAC database cluster with HP ProLiant server blades and HP Virtual Connect (VC) FlexFabric, a key component of the HP Converged Infrastructure strategy, and shows how FlexFabric can be introduced into an Oracle database environment. The focus of the document is on the steps necessary to configure the VC FlexFabric modules for Oracle 11gR2 RAC, based on our test and validation environment.
HP carried out tests to validate that FlexFabric Fibre Channel over Ethernet (FCoE) functions correctly when used at the
server-edge of an HP BladeSystem configuration for Oracle 11gR2 RAC database to replace traditional Fibre Channel (FC)
connectivity. In this scenario, standard Ethernet network connectivity was maintained via the same Converged Network
Adapter (CNA). The support note can be reviewed at https://support.oracle.com under the title "Oracle Certification of 11.2 running FCoE and Virtual Connect technologies" [Note ID 1405274.1].
How does this work? Virtual Connect FlexFabric modules use industry-standard FCoE inside the BladeSystem c7000
enclosure and also bring the data traffic out of the FlexFabric modules in the familiar form of today's industry-standard
Ethernet, Fibre Channel, and iSCSI. You can save money and avoid operational disruption. The Virtual Connect FlexFabric
adapter that is built into the server, and also available as a mezzanine card, takes the place of both Ethernet NICs and Fibre
Channel HBAs. The industry term for this basic capability is Converged Network Adapter (CNA), but HP greatly enhances its
FlexFabric Adapter with the power of Flex-10 capabilities. Flex-10 gives you the ability to partition your Ethernet network
into multiple channels with guaranteed bandwidth, appropriate for each connection in the Oracle RAC environment.
A single CNA can support multiple Internet Protocol (IP) networks as well as FC storage, leading to the hardware cost
savings. Moreover, CNA capability has been built into all G7 HP ProLiant server blades and Gen8 server blades with
FlexibleLOM choice of FlexFabric.
The HP Virtual Connect FlexFabric and Flex-10 solutions provide savings in four areas:
• Less equipment to buy and pay for
• Control of bandwidth to match your application requirements
• LAN and SAN administrators' time isn't wasted, because the system administrator is more self-sufficient
• Less equipment means reduced power consumption (lower power bills)
Oracle announced the certification of Oracle Database 11g Release 2 (single instance and RAC) on Linux with Hewlett-Packard ProLiant servers incorporating FCoE. The details of the HP ProLiant certification are documented at oracle.com/technetwork/database/enterprise-edition/tech-generic-linux-new-086754.html. This certification is only for database versions 11gR2 and later; earlier versions of Oracle database products are not certified by Oracle at this time. Only one or two c7000 enclosures are certified by Oracle with FCoE. With Flex-10 or Flex-10D, Oracle does not limit the number of enclosures supported.
Configuration tips:
• If the cluster will be under very heavy load, use identical or nearly identical systems to avoid bottlenecks, especially in processing network interrupts.
• In our tests on HP ProLiant BL460c G7 servers, using the embedded NC553i FCoE adapter provided about 50% more throughput than using NC553m mezzanine cards. Using NC553m mezzanine cards provided about 100% more throughput than using NC551m mezzanine cards.
• Use stacking links between two enclosures for the private Oracle RAC network.
• Use Red Hat Enterprise Linux (RHEL) / Oracle Linux 5.8 or later, although version 5.5 is supported for one enclosure.
Note
For support of FlexFabric for an Oracle single instance database, please see the support note "Certification Information for Oracle Database on Linux x86" [ID 1307056.1] at https://support.oracle.com.
For support of FlexFabric for an Oracle RAC cluster database, please see "Certification of Oracle Database 11.2 with Hewlett-Packard ProLiant running FCoE and Virtual Connect technologies" [ID 1405274.1] at https://support.oracle.com.
Target audience: This paper is intended for system/database administrators and system/solution architects wishing to learn more about configuring FCoE with FlexFabric for Oracle RAC database instances. Knowledge of Linux and Oracle 11gR2 Real Application Clusters software is necessary to perform a similar installation.
Converged Infrastructure overview
HP Converged Infrastructure delivers the framework for a dynamic data center, eliminating costly, rigid IT silos while
unlocking resources for innovation rather than management. This infrastructure matches the supply of IT resources with the
demand for business applications; its overarching requirements include the following:
• Modularity
• Openness
• Virtualization
• Resilience
• Orchestration
By transitioning away from a product-centric approach to a shared-service management model, HP Converged
Infrastructure can accelerate standardization, reduce operating costs, and accelerate business results.
A dynamic Oracle business requires a matching IT infrastructure. You need a data center with the flexibility to automatically
add processing power to accommodate spikes in Oracle database traffic and the agility to shift resources from one
application to another as demand changes. To become truly dynamic you must start thinking beyond server virtualization
and consider the benefits of virtualizing your entire infrastructure. Thus, virtualization is a key component of HP Converged
Infrastructure.
The four core components of an HP Converged Infrastructure architecture are shown in Figure 1.¹

Figure 1. HP Converged Infrastructure architecture


¹ For more information, refer to the HP brochure "HP Converged Infrastructure: Unleash your potential, an HP Converged Infrastructure innovation primer."
HP Virtual Connect FlexFabric simplifies cabling and physical server consolidation efforts. HP FlexFabric utilizes two technologies, Converged Enhanced Ethernet (CEE) and Fibre Channel over Ethernet (FCoE), which come built in to every ProLiant G7 blade and to Gen8 blades with the FlexibleLOM choice of FlexFabric. HP FlexFabric is a high-performance, virtualized, low-latency network that consolidates both Ethernet and Fibre Channel into a single Virtual Connect module, lowering networking complexity and total cost.
Introduction to HP Virtual Connect FlexFabric
Virtual Connect FlexFabric represents the third generation of HP's award-winning Virtual Connect technology, with over 2.7 million ports deployed in data centers today. This technology simplifies the server edge by replacing traditional switches and modules with a converged and virtualized way to connect servers to different networks with one device, over one wire, utilizing industry standards. HP Virtual Connect FlexFabric allows you to eliminate separate interconnect modules and mezzanine cards for disparate protocols like Ethernet and FC, consolidating all of the traffic onto a single interconnect architecture and simplifying the design and cabling complexity of your environment.
Beyond just these benefits, VC FlexFabric modules also provide significant flexibility in designing the network and storage
connectivity requirements for your server blades in an Oracle database environment. All ProLiant G7 blades and Gen8 blades with the FlexibleLOM choice of FlexFabric include VC FlexFabric capability built in with integrated VC FlexFabric NICs, and any ProLiant G6 blade can be easily upgraded to support VC FlexFabric with Virtual Connect FlexFabric Adapter mezzanine cards. These adapters and mezzanine cards have two 10 Gb FlexFabric ports, each of which can be carved up into as many as four physical functions. A physical function can be a FlexNIC, FlexHBA-FCoE, or FlexHBA-iSCSI, supporting Ethernet, FCoE, or iSCSI traffic respectively. Each adapter port can have at most one storage function, so users can configure up to four FlexNIC physical functions if there are no storage requirements, or one FlexHBA physical function for FCoE or iSCSI along with up to three FlexNICs. The bandwidth of these ports can be configured to satisfy the requirements of your environment. Each port is capable of up to 10 Gb of bandwidth, which is explicitly distributed among the physical functions of that port.
This gives you a tremendous amount of flexibility when designing and configuring your network and storage requirements
for your database environment. This flexibility is especially valuable for virtualization environments, which require many different networks with varying bandwidth and segmentation requirements. In a traditional networking design, this would require a number of additional network cards, cables, and uplink ports, which quickly drives up the total cost of the solution. With VC FlexFabric, you have the flexibility to allocate and fine-tune network and storage bandwidth for each connection and to define each of those physical functions with the specific bandwidth requirements for that network, without having to overprovision bandwidth based on static network speeds.
Deploying Oracle 11gR2 RAC with HP Virtual Connect FlexFabric
As an integral part of supporting HP Converged Infrastructure, FlexFabric is HP's implementation of the industry-standard Fibre Channel over Ethernet (FCoE). Using FCoE combined with HP Virtual Connect technology, while following the guidelines in this document, provides our customers with simple, easy-to-manage database or other computing environments. The Virtual Connect features of rip-and-replace, profile migration, and high availability are all considered and supported herein.
The half-height ProLiant G7 and Gen8 (with FlexibleLOM choice of FlexFabric) server blades have two LAN on Motherboard (LOM) interfaces, while the full-height ProLiant G7 and Gen8 server blades can have up to four LOMs. The LOM interfaces connect to I/O Bays 1 and 2 of the c7000 enclosure, with Bays 1 and 2 containing FlexFabric modules. The half-height G7 and Gen8 blades with two LOMs connect one LOM to Bay 1 and one LOM to Bay 2. A full-height G7 or Gen8 blade with four LOMs can have two connections to each FlexFabric module. FlexFabric modules should be installed in pairs; Figure 2 shows the module port configuration. Each module has 16 10Gb downlinks available to servers. For example, in a fully loaded enclosure the module can have one connection to one FlexFabric Adapter on each of 16 half-height servers in a c7000 enclosure, or, if you have full-height server blades, the module can connect to two FlexFabric Adapters on each of the 8 servers. The modules connect automatically across the signal midplane in the c-Class blade enclosure. Each module provides eight uplink ports to the data center network: four 10Gb SR, LR fibre and copper SFP+ (Ethernet and Fibre Channel) and four 10Gb SR, LRM and LR fibre and copper SFP/SFP+ (Ethernet). It supports a wide variety of aggregation methods including NPIV, 802.1Q and NIC bonding/teaming.
HP Virtual Connect eliminates the task of modifying BIOS on the X86 architecture by performing those changes via Virtual
Connect Manager (VCM) as needed.
Figure 2. HP VC FlexFabric 10Gb/24-Port Module

Configuring FCoE and network
In a typical environment, a local area network (LAN) carries Ethernet traffic for core business applications, while a storage area network (SAN) carries Fibre Channel (FC) frames. Each network has its own technology, protocol, and management requirements, which drives up the complexity and cost of the infrastructure.
To simplify the infrastructure, you can consolidate Ethernet and FC traffic to the server-edge via FCoE, which uses a lossless
transmission model to run Ethernet traffic alongside encapsulated FC frames over a 10 Gb Ethernet network. FCoE retains
the FC operational model, providing seamless connectivity between the two storage networks.
The benefits of FCoE include:
• Less complexity than iSCSI
• Less overhead than Transmission Control Protocol (TCP)/IP
• No need for higher-level protocols for packet reassembly
As shown in Figure 3, you only need a single dual-port interface card, known as a converged network adapter (CNA), to consolidate a particular server's Ethernet and FC traffic, replacing multiple network interface cards (NICs) and host bus adapters (HBAs). Virtual Connect FlexFabric modules at the server edge seamlessly separate upstream LAN and SAN traffic and convert downstream traffic to FCoE.
Note
Make sure the latest versions of firmware and drivers are installed for iLO, OA, CNA, VCM, EVA and any other hardware components. We experienced a few issues until everything was updated as outlined in the documentation.
Figure 3. Dual-port CNA implementation on HP ProLiant G6 and G7 server blades. Gen8 blades with FlexibleLOM choice of FlexFabric are integrated like G7 blades.

FCoE can be implemented in any Oracle database architecture (single instance or Real Application Clusters) using the
following components:
• Install redundant HP VC FlexFabric 10Gb/24-port modules in the HP BladeSystem c7000 enclosure
• Utilize redundant CNA ports in each ProLiant blade with an Oracle database instance:
  - HP ProLiant BL460c G6 server blades: Mezzanine card (HP NC551m Dual Port FlexFabric 10 Gb CNA)
  - HP ProLiant BL460c G7 server blades: Integrated (HP NC553i Dual Port FlexFabric 10 Gb CNA)
  - HP ProLiant BL460c Gen8 blades with FlexibleLOM choice of FlexFabric: Integrated (HP FlexFabric 10Gb 2-port 554FLB FIO Adapter)
After deploying the above components, communications between the ProLiant blades and the enclosure's Virtual Connect FlexFabric modules utilize FCoE, while the FlexFabric modules connect through upstream FC switches to the Oracle database on an HP Enterprise Virtual Array (EVA) storage array. For the tested Oracle RAC database installation, HP used a single HP BladeSystem c7000 enclosure equipped with two HP ProLiant BL460c G7 server blades; an HP EVA8400 FC array provided backend storage. FCoE connectivity was achieved via the following components:
• HP Virtual Connect FlexFabric modules installed in bays 1 and 2 of the enclosure
• FlexFabric CNAs integrated on each blade
Figure 4 illustrates the basic FCoE configuration for our Oracle database clustered environment.
More detail about the hardware and software used in the test can be found in Appendix A.
Note
Virtual Connect FlexFabric modules have dedicated internal stacking links that provide uplink port redundancy.
With G6 blades, CNA functionality is delivered via a mezzanine card and needs to be connected to bays 3 and 4. HP ProLiant
G7 blades and Gen8 blades with FlexibleLOM choice of FlexFabric provide integrated CNA functionality, allowing Virtual
Connect FlexFabric modules to be installed in bays 1 and 2. See HP ProLiant BL460c G7 Server Blade User Guide and HP
ProLiant BL460c Gen8 Server Blade User Guide for more detail.
After inserting the Virtual Connect FlexFabric modules into the appropriate bays, you can then connect these modules to the
external LAN and SAN, as shown in Figure 4.
The following connections are made:
• Ethernet: One port on each Virtual Connect FlexFabric module, for a total of two uplinks, provides redundant Ethernet network connections from the enclosure to external HP Networking switches.
• FC SAN: One port on each Virtual Connect FlexFabric module, for a total of two uplinks, provides FC connections to the SAN via external FC SAN switches.
While external switches were used for FC connections in the tested configuration, you could, if desired, utilize FC switches integrated into the EVA8400 array. Whichever switches are used, you can configure redundant connections from the blades to the SAN fabrics and, ultimately, to any storage array.
Note
Connections between Virtual Connect FlexFabric modules and upstream FC switches use FC protocol; FCoE is only used for
downstream connections between Virtual Connect FlexFabric modules and the CNAs in ProLiant blades.
Currently, utilizing one or two c7000 enclosure chassis is supported with dual Virtual Connect FlexFabric modules.
Connecting more than two c7000 chassis to form a RAC cluster is not supported. c3000 enclosures are not supported.
Figure 4. Basic Dual Enclosure FCoE Configuration Example

Configure stacking link between two enclosures
Single VC domains can occupy up to four physically linked enclosures in a configuration called multi-enclosure stacking. You
can implement multi-enclosure stacking with module-to-module links. Multi-enclosure stacking gives you additional
configuration flexibility:
• It provides connectivity for any blade server to any uplink port in the VC domain
• It reduces expensive upstream switch port utilization, requiring fewer cables for uplink connectivity
• It supplies a 10GbE+ backbone with multi-enclosure failover
• It gives the ability to move a profile between enclosures
• It reduces data center core switch traffic, because internal communication between enclosures remains inside the VC domain (for example, cluster server heartbeats or VMware vMotion traffic)
• It sustains a failure or outage of a chassis, module, uplink or upstream switch while maintaining network connectivity
• It needs fewer management touch points, because multi-enclosure stacking consolidates VCM interfaces
With multi-enclosure stacking, any VC uplink on any VC Ethernet module within the VC domain can provide external
connectivity for any server downlink. You can also configure VC for connectivity between any set of server downlinks on any
VC Ethernet module. VC provides this flexible connectivity by stacking links between VC Ethernet modules. Stacking links let
you configure and operate all VC Ethernet modules in the VC domain as a single device.
Figure 5. This multi-enclosure stacking configuration includes redundant FlexFabric modules and stacking links between the modules.


Configuring the Virtual Connect FlexFabric modules
After installing and connecting the Virtual Connect FlexFabric modules, you should define the following:
• SAN fabrics
• Ethernet network
• Server profiles for the ProLiant blades
HP VC FlexFabric configuration can be performed via the Virtual Connect command-line interface (CLI) or the web-based Virtual Connect Manager (VCM). In order to support rip-and-replace or profile migration, VC should be configured using virtual addresses for both MAC and WWID, as opposed to physical addresses or factory defaults.
Defining SAN fabrics
To define the SAN fabrics, select Define SAN Fabric within VCM.
Figure 6 provides an overview of the two SAN fabrics.
Figure 6. SAN fabric summary

SAN fabrics were defined as follows (refer to Figure 6):
• Bottom fabric (SAN2): Utilizes port 3 of the Virtual Connect FlexFabric module installed in bay 2
• Top fabric (SAN1): Utilizes port 3 of the Virtual Connect FlexFabric module installed in bay 1
Port speed was set to 4Gb to match the speed of the upstream SAN fabric.
Ethernet network
To support the networking requirements of the tested configuration, VCM was used to create a redundant Ethernet
network. Select Define Ethernet Network; set up the network as shown in Figure 7.
Figure 7. Server Ethernet Network Connection Summary

Defining Virtual Connect FlexFabric server profiles
After installing the blades into the enclosure, the user must create a profile for each blade. The bandwidth for Fibre Channel (FC) and Ethernet is also configured at this time. Quality of Service (QoS) is determined by how much bandwidth is allocated to each virtual LAN or FC port.
For the tested configuration, an Ethernet network connection and two FCoE connections were added to each server profile. These connections utilize both ports of the CNA installed in each ProLiant blade.
To define a server profile, select Define Server Profile.
The HP ProLiant G7 and Gen8 (with FlexibleLOM choice of FlexFabric) series server blades ship with two 10 Gb embedded LAN on Motherboard (LOM) devices. As shipped, the default configuration allocates 4 Gb to the FC connection, and the remainder is shared by the Ethernet devices. In this example we have two public and two private networks, as required for redundancy. Figure 8 illustrates the server profile using the HP Virtual Connect Manager software.
Network interface ports within the ProLiant blades are automatically mapped to appropriate HP BladeSystem enclosure
bays. In this example, the following ports were detected:
• LAN on Motherboard (LOM): LOM:1-a and b (Embedded NC553i 2-Port FlexFabric 10GbE Converged Network Adapter)
These ports were mapped as follows:
• Bay 1, LOM:1-b: FC Port 1
• Bay 2, LOM:2-b: FC Port 2
• Bay 1, LOM:1-a: Public 1
• Bay 2, LOM:2-a: Public 2
• Bay 1, LOM:1-c: Private 1
• Bay 2, LOM:2-c: Private 2
By defining connections in the blade's profile, you establish whether or not the blade can access a particular Ethernet, iSCSI, FC, or FCoE network.
Note
Networks can be defined in any order.
By default, port speed is set to Preferred, which causes bandwidth to be divided equally between the networks assigned to the particular server profile. To allocate specific bandwidths, right-click the appropriate cell in the Port speed column and then select Custom to invoke the Custom Port Speed dialog. Configure the desired bandwidths (3 Gb for Ethernet, 4 Gb for FCoE). If the user chooses not to allocate bandwidth, VCM will divide it equally among the network resources. The bandwidth allocated in these examples complies with the recommended resource split between FC and Ethernet for Oracle RAC. If more FC bandwidth is required, make sure the private network still has sufficient bandwidth to meet Oracle Clusterware vote and heartbeat availability requirements. The flexibility of the Virtual Connect FlexFabric design allows you to allocate bandwidth to meet the exact needs of a particular implementation.
Note
For step-by-step instructions, refer to product-specific installation and configuration documentation.
Figure 8. Editing the server profile with HP Virtual Connect Manager

Configuring storage resources
Before installing Oracle 11gR2, the storage needs to be set up, and you must make sure it has been presented and configured correctly. This section outlines how to set up the backend EVA8400 array; for details, reference the HP 6400/8400 Enterprise Virtual Array User Guide. If you have a different array, you will need to follow similar procedures.
Setting up access to the blades as hosts
To allow the storage to be presented to blades, you must add the RAC hosts to the EVA8400 array via HP Command View
EVA software. The key attributes needed to define a host in this example are the World Wide Port Names (WWPNs) of the
FCoE ports. Along with Media Access Control (MAC) addresses, WWPNs can be ascertained by reviewing the appropriate
server profile, as shown in Figure 8.
Using these WWPN values, you can create the hosts on the array. These values are also used to modify SAN switch zoning
information to ensure that LUNs are visible from ports on all blades. After the hosts have been added to the EVA8400 array,
the next step is to configure and present the boot and Oracle database LUNs.
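As an illustration of the zoning step, the sketch below shows how a host alias and zone might be created on a B-series (Brocade Fabric OS) SAN switch such as the 8/24 switches used in our test configuration. The alias, zone, and configuration names as well as the WWPN values are hypothetical placeholders; substitute the WWPNs from your server profiles and EVA host ports, and use cfgadd instead of cfgcreate if a zoning configuration already exists.

switch:admin> alicreate "aps81_170_p1", "50:06:0b:00:00:c2:62:00"
switch:admin> alicreate "eva8400_fp1", "50:00:1f:e1:00:0a:00:08"
switch:admin> zonecreate "aps81_170_eva", "aps81_170_p1; eva8400_fp1"
switch:admin> cfgcreate "rac_cfg", "aps81_170_eva"
switch:admin> cfgsave
switch:admin> cfgenable "rac_cfg"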
Add host entries for each host in the cluster. Figure 9 shows adding host aps81-170 in Command View EVA. We would do
the same thing for every other host in our cluster.
Figure 9. Adding host entries aps81-170 to EVA Storage

Add all available paths to each host. Figure 10 shows adding a host port for aps81-170. We would repeat this for each
available path on the host and do the same for each host in our cluster.
Figure 10. Adding a World Wide Name of a host adapter port to each Host

Create a virtual boot disk for each host and present it to that specific host for installing Linux. We would do this for each
node in the cluster. Note the disk is presented to the host during creation.
Figure 11. Example of creating a virtual boot disk for host aps81-170

Setting up the boot device for installation
First the user needs to identify the boot WWID and LUN from the FC storage device. In this example, the HP EVA8400 array
has four FC connections per controller referenced as FP1, FP2, FP3, and FP4 in Command View EVA; we chose the first port
from each controller for connectivity to the HP VC FlexFabric modules as shown in Figure 12.
Figure 12. Identifying the WWID from the HP EVA storage using HP Command View



Then, as shown in Figure 13, we find the LUN of the boot device we have chosen to boot from on each host. Also note that we control access to the boot device with FC host presentation. We have identified our device as aps81-170-boot; therefore, we are using LUN 1.
Figure 13. Define each server blade boot LUN in the host properties FC presentation

We now go to the Virtual Connect Manager server profile for each cluster node and enter the boot parameters into the host's VC profile (Figure 14). Our example boot-from-SAN profile is aps81-170. Select the Fibre Channel Boot Parameters box to define the WWID of the storage controller host port and the boot LUN. A popup window will appear to allow you to enter the boot path WWID and LUN. We would repeat this for each host in the cluster.
Figure 14. Virtual Connect Profile showing boot path WWID and LUN using HP Virtual Connect Manager


Installing Linux
Before installing Linux, make sure you have the latest HP Service Pack for ProLiant (SPP) installed. SPPs are operating system (OS) specific bundles of ProLiant-optimized drivers, utilities, and management agents. These bundles of software are tested together to ensure proper installation and functionality. SPPs are released concurrently with the HP SmartStart CD, and can also be released outside of the SmartStart cycle and made available to customers from the HP Business Support Center ProLiant Support Pack page or the HP Insight Foundation suite for ProLiant servers website. (Note: HP Service Pack for ProLiant is the new name for the ProLiant Support Pack (PSP).) Before installing the SPP, check the prerequisites in the documentation.
When installing and configuring Red Hat Enterprise Linux on the server blades, refer to the Red Hat Enterprise Linux 5 Installation Guide and the 5.8 Release Notes.
You will follow the same procedures for the Linux installation of each host in your database cluster. We use host aps81-170 as our example. Install Linux using the multipath kernel option so that the /dev/mapper devices and related configuration files are set up during the install. Verify the system is using the Linux multipath option as shown in Figure 15 (boot: linux mpath). We will need to modify those files at a later time.
Note
Only Oracle Linux or Red Hat Enterprise Linux 5.8 or newer is currently certified by Oracle with FlexFabric FCoE. Certification with SUSE Linux Enterprise Server (SLES) or Microsoft Windows is not available from Oracle. Also, this solution is supported for bare-metal installations only. Please check the Oracle support note for updates.
Figure 15. Setting up Red Hat Linux Installation Boot parameters

Choose the mapper device during the installation as shown in Figure 16. This is the 150 GB LUN we created on the EVA8400,
listed here as the controller type HSV450. Please note that we will not need any specific drivers for HP FlexFabric since the
device emulates a native FC and Ethernet configuration to the operating system. We will not add the LUNs for the database
ASM drives until after installing Linux.
Figure 16. Linux Installation Drive Device Selection

For efficiency, we also define the network parameters during installation, as shown in Figure 17; note, however, that the network devices will change later when we implement network bonding. If you have a local drive installed in the blade, make sure it is disabled by removing or deleting the logical drive using the HP Smart Array Utility.
Figure 17. Linux Installation Defining Network Devices

Adding the Software Development option during the Linux installation will save time later when installing Oracle RAC; a
large number of mandatory subsets are included within this option. See Figure 18. The device mapper multipath software
package is included in the default Red Hat Enterprise Linux 5.8 and higher install. We will replace it later with the HP specific
multipath Linux patch RPMs.
Figure 18. Linux Installation Optional Subsets

Setting up Oracle Automatic Storage Management (ASM) with multipath after the Linux installation
Reference the following Oracle documentation for setting up ASM: Oracle Database Installation Guide 11g Release 2 (11.2) for Linux and Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.
Using the Command View EVA Storage Management Utility, we find the World Wide LUN Name (Figure 20) for each device (Vdisk) we are using in the installation. In this example we have eight FC paths to the two EVA8400 storage controllers; therefore, for performance reasons, we chose to create eight LUN devices, each RAID 0+1 (Vraid1). You will need the unique World Wide LUN Name for each device. Figure 19 shows an example of configuring and presenting the Vdisk named aps-fcoe-asm0 to our example host aps81-171. We then view the Vdisk properties to verify host connectivity to the device we named aps-fcoe-asm0.
For ASM we configured external redundancy, with mirroring provided by the hardware array instead of by ASM software mirroring.
Figure 19. Creating Vdisks and presenting to all BL460c hosts



We have now created 8 ASM disks (aps-fcoe-asm0 through aps-fcoe-asm7) using the same method and presented them to all host cluster nodes (aps81-170 and aps81-171).
Figure 20. EVA Virtual Disk showing World Wide LUN Name

Installing HP Network and Multipath Enhancements for Linux
HP provides software for both the network and the storage multipath connectivity on Linux. These packages improve reliability and quality for high-availability applications. Before configuring the multipath or network software, HP recommends that the customer obtain the following products and install them on each node in the Oracle database cluster. HP supports the Linux inbox driver and provides setup templates for HP disk arrays.
• HP NC-Series Emulex 10GbE Driver for Linux
• Device Mapper Multipath Enablement Kit for HP Disk Arrays
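A minimal installation sketch is shown below. The package file names are hypothetical and vary by kit version, so follow the installation instructions shipped with each kit; only the rpm and chkconfig commands themselves are standard.

# Package file names below are illustrative only
# rpm -ivh hp-device-mapper-multipath-enablement-<version>.noarch.rpm
# rpm -ivh hp-be2net-<version>.x86_64.rpm
# chkconfig multipathd on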
These tools help to configure critical configuration files:
• /etc/multipath.conf
• The HBA parameter file
• /boot/initrd
Configuring Multipath drivers for Oracle ASM devices
Red Hat Linux ships with device-mapper multipath software used for multipathing and device ownership. The configuration file (/etc/multipath.conf) is created for the boot device during installation of the software. The eight ASM devices (aps-fcoe-asm[0-7]) created earlier (see Figure 19) are to be added to the configuration file with the appropriate Oracle user and group identifier ownership (uid and gid).
The /etc/multipath.conf file consists of the following sections to configure the attributes of a multipath device:
• System defaults (defaults)
• Black-listed devices (devnode_blacklist/blacklist)
• Storage array model settings (devices)
• Multipath device settings (multipaths)
• Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever a required setting is unavailable. The
blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions
section defines which devices should be included in the multipath topology discovery, despite being listed in the blacklist
section. The multipaths section defines the multipath topologies. They are indexed by a World Wide Identifier (WWID). The
devices section defines the device-specific settings based on vendor and product values.
The multipath devices are created in the /dev/mapper directory on the hosts. These devices are similar to any other block devices present on the host, and are used for any block or file level I/O operations, such as creating a file system.
You must use the devices under /dev/mapper/. You can create a new device alias by using the alias and WWID attributes of the multipath device in the multipath subsection of the /etc/multipath.conf file.
We already created the LUNs in the EVA8400 using Command View EVA and presented them to both host cluster nodes
aps81-170 and aps81-171.
To check the available paths to the root device, execute the following command:
# multipath -l
To check whether the LUNs are visible to Linux, we can also execute fdisk -l.
Next we need to make sure we have persistent device names within the cluster in /etc/multipath.conf after executing:
# multipath -v0    # Configures multipath map information
To get a list with WWIDs of these multipath devices check the file /var/lib/multipath/bindings.
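As a quick illustration (the default mpathN names will differ on your system, and the WWIDs shown are simply the values used later in this paper), the bindings file and a per-device query might look like this:

# cat /var/lib/multipath/bindings
mpath0 36001438005de9bf00000d00000300000
mpath1 36001438005de9bf00000d00000340000
...
# Query the WWID of a single SCSI device directly (RHEL 5 scsi_id syntax)
# scsi_id -g -u -s /block/sdb
36001438005de9bf00000d00000300000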
These WWIDs can now be used to check the multipath device names added to the entries in the file
/etc/multipath.conf. The multipath.conf file also allows explicit uid/gid ownership to be established for the mapped
device files. By selecting the uid/gid of the Oracle Grid administrator, this feature simplifies the deployment of ASM on the
multipath disks.
# Check the /etc/multipath.conf file to make sure our multipath devices are enabled.
multipaths {
multipath {
wwid 36001438005de9bf00000d00000300000
alias asm_dsk0
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d00000340000
alias asm_dsk1
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d00000380000
alias asm_dsk2
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d000003c0000
alias asm_dsk3
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d00000400000
alias asm_dsk4
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d00000440000
alias asm_dsk5
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d00000480000
alias asm_dsk6
uid 1100
gid 1202
}
multipath {
wwid 36001438005de9bf00000d000004c0000
alias asm_dsk7
uid 1100
gid 1202
}
}

To create the multipath devices with the defined alias names, execute "multipath -v0" (you may need to execute "multipath -F" first to get rid of the old device names). In order to make sure we have the same persistent device names cluster-wide, copy these entries into /etc/multipath.conf on all nodes in the cluster.
Finally, either reboot or reload the multipath maps with "multipath -r".
Check the multipath device names and paths with "multipath -v0".
# multipath -v0

If the multipath driver is not enabled by default at boot, run:
# chkconfig [--level levels] multipathd on
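For example, enabling the service for the usual multi-user runlevels and verifying (the runlevels chosen and the output shown are illustrative):

# chkconfig --level 345 multipathd on
# chkconfig --list multipathd
multipathd      0:off  1:off  2:off  3:on  4:on  5:on  6:off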
The result should be similar to the following after reloading the multipath daemon. These steps are repeated on every cluster node. Note the ownership and file protection are set according to Oracle's recommendations.
[root@aps81-170 ~]# ls -l /dev/mapper
total 0
brw-rw---- 1 grid asmadmin 253, 9 May 19 14:24 asm_dsk0
brw-rw---- 1 grid asmadmin 253, 6 May 19 14:23 asm_dsk1
brw-rw---- 1 grid asmadmin 253, 10 May 19 14:23 asm_dsk2
brw-rw---- 1 grid asmadmin 253, 11 May 19 14:24 asm_dsk3
brw-rw---- 1 grid asmadmin 253, 12 May 19 14:24 asm_dsk4
brw-rw---- 1 grid asmadmin 253, 5 May 19 14:24 asm_dsk5
brw-rw---- 1 grid asmadmin 253, 7 May 19 14:24 asm_dsk6
brw-rw---- 1 grid asmadmin 253, 8 May 19 14:24 asm_dsk7

We need to make sure the right level of permission is set on the shared volumes. This can be achieved in two ways:
• Updating the rc.local file
• Creating a udev rule
If you use ASMlib, then you do not need to ensure permissions and device path persistency in udev. If you do not use ASMlib, then you would create a custom rules file. When udev is started, it sequentially carries out the rules (configuration directives) defined in its rules files.
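A minimal sketch of such a custom rules file is shown below, assuming the asm_dsk* aliases, the grid owner, and the asmadmin group used in this paper; the file name 99-oracle-asmdevices.rules is arbitrary, and the rule should be adapted and tested on your own system before relying on it.

# /etc/udev/rules.d/99-oracle-asmdevices.rules
# Match device-mapper nodes whose map name is one of our ASM aliases and set ownership
KERNEL=="dm-*", PROGRAM="/sbin/dmsetup info -c --noheadings -o name -j %M -m %m", RESULT=="asm_dsk*", OWNER="grid", GROUP="asmadmin", MODE="0660"

The rc.local alternative is simply a chown/chmod of the same /dev/mapper/asm_dsk* devices executed at the end of the boot sequence.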
Setting up Linux bonding for public and private networks
The HP FlexFabric module provides failover for uplink failures; however, for downlinks we must use the operating system to provide that redundancy. In our case we use Linux NIC bonding.
Setting up Linux NIC bonding is accomplished by enabling the bonding driver and setting up the configuration files.
First we edit the module configuration file (/etc/modprobe.conf) and load the bonding driver with the parameters we desire for this installation.
# head -4 /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=0 miimon=100
alias bond1 bonding
options bond1 mode=0 miimon=100

# modprobe bonding

Second we create the network bond configuration files in /etc/sysconfig/network-scripts, bond0 for public and bond1 for
private.

# File ifcfg-bond0
#
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.50.80.0
NETMASK=255.255.240.0
IPADDR=10.50.81.170
USERCTL=no

# File ifcfg-bond1
#
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.81.0
NETMASK=255.255.255.0
IPADDR=192.168.81.170
USERCTL=no

Lastly we assign the NICs, by referencing the Virtual Connect server profile for the correct MAC address.
# File ifcfg-eth0
# ServerEngines Corp. Emulex OneConnect 10Gb NIC (be3)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
HWADDR=00:17:A4:77:C4:08

# File ifcfg-eth1
# ServerEngines Corp. Emulex OneConnect 10Gb NIC (be3)
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
HWADDR=00:17:A4:77:C4:0A

# File ifcfg-eth2
# ServerEngines Corp. Emulex OneConnect 10Gb NIC (be3)
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no
HWADDR=00:17:A4:77:C4:1E

# File ifcfg-eth3
# ServerEngines Corp. Emulex OneConnect 10Gb NIC (be3)
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no
HWADDR=00:17:A4:77:C4:20

Now restart the network (or reboot) to reconfigure the public and private networks for bonding.
# service network restart
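To confirm that both slave NICs joined each bond after the restart, the bonding driver status can be read from /proc; the output below is abbreviated and illustrative for the mode=0 (round-robin) configuration used here:

# cat /proc/net/bonding/bond0
Bonding Mode: load balancing (round-robin)
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up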

Setting up Jumbo Frames for public and private networks
For RAC interconnect traffic, devices correctly configured for Jumbo Frames improve performance by reducing the TCP, UDP, and Ethernet overhead that occurs when large messages have to be broken up into the smaller frames of standard Ethernet. Because one larger packet can be sent, inter-packet latency between the various smaller packets is eliminated. The increase in performance is most noticeable in scenarios requiring high throughput and bandwidth, and when systems are CPU bound.
Configuration
In order to make Jumbo Frames work properly for a Cluster Interconnect network, carefully configure the host, its Network Interface Card, and the switch, as shown in the following example:
1. The host's network adapter must be configured with a persistent MTU size of 9000 (which will survive reboots).
For example, ifconfig eth3 mtu 9000, followed by ifconfig -a to confirm the setting, with MTU=9000 made persistent in the ifcfg file as shown below.

[root@shep1 etc]# cat /etc/sysconfig/network-scripts/ifcfg-eth3

# Emulex Corporation OneConnect 10Gb NIC (be3)
DEVICE=eth3
BOOTPROTO=static
ONBOOT=yes
HWADDR=00:17:a4:77:ec:08
HOTPLUG=yes
IPADDR=10.168.3.30
NETMASK=255.255.255.0
NETWORK=10.168.3.0
MTU=9000
USERCTL=no



2. Certain NICs require additional hardware configuration.
3. The LAN switches must also be properly configured to increase the MTU for Jumbo Frames support (see the switch example after this list).
4. Ensure the changes made are permanent (they survive a power cycle) and that every device uses the same Jumbo Frame size; 9000 is recommended (some switches do not support this size). Because of the lack of standards with Jumbo Frames, interoperability between switches can be problematic and requires advanced networking skills to troubleshoot.
5. Remember that the smallest MTU used by any device in a given network path determines the maximum MTU (the MTU ceiling) for all traffic travelling along that path.
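For step 3, the following sketch shows how Jumbo Frames might be enabled on an interface of an HP 5820 (Comware) switch like the ones listed in Appendix A. The interface name and frame size are illustrative; consult the switch documentation for the exact command set supported by your firmware.

<HP5820> system-view
[HP5820] interface Ten-GigabitEthernet1/0/1
[HP5820-Ten-GigabitEthernet1/0/1] jumboframe enable 9216
[HP5820-Ten-GigabitEthernet1/0/1] quit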

Failing to properly set these parameters on all nodes of the cluster and on the switches can result in unpredictable errors as well as degraded performance.

Test
Trace route: In the example below, notice that the 9000-byte packet goes through with no error, while the 9001-byte packet fails. This is a correct configuration that supports a message of up to 9000 bytes with no fragmentation:

[root@shep1 ~]# traceroute -F shep2-priv1 9000
traceroute to shep2-priv1 (10.168.3.31), 30 hops max, 9000 byte packets
1 shep2-priv1.aisscorp.com (10.168.3.31) 0.023 ms 0.013 ms 0.008 ms
[root@shep1 ~]# traceroute -F shep2-priv1 9001
traceroute to shep2-priv1 (10.168.3.31), 30 hops max, 9001 byte packets
send: Message too long

Ping: With ping we have to take into account an overhead of about 28 bytes per packet, so 8972 bytes go through with no errors, while 8973 bytes fail. This is a correct configuration that supports a message of up to 9000 bytes with no fragmentation:

[root@shep1 ~]# ping -c 2 -M do -s 8972 shep2-priv1
PING shep2-priv1.aisscorp.com (10.168.3.31) 8972(9000) bytes of data.
8980 bytes from shep2-priv1.aisscorp.com (10.168.3.31): icmp_seq=1 ttl=64 time=0.238 ms
8980 bytes from shep2-priv1.aisscorp.com (10.168.3.31): icmp_seq=2 ttl=64 time=0.276 ms

[root@shep1 ~]# ping -c 2 -M do -s 8973 shep2-priv1
PING shep2-priv1.aisscorp.com (10.168.3.31) 8973(9001) bytes of data.
From shep1-priv1.aisscorp.com (10.168.3.30) icmp_seq=1 Frag needed and DF set (mtu = 9000)
From shep1-priv1.aisscorp.com (10.168.3.30) icmp_seq=1 Frag needed and DF set (mtu = 9000)

--- shep2-priv1.aisscorp.com ping statistics ---
0 packets transmitted, 0 received, +2 errors



For more information, check Oracle support Document ID 341788.1, which requires logging in to https://support.oracle.com.
Setup and installation of Oracle 11gR2
Installing Oracle RAC
Oracle RAC needs at least two physical interfaces. The first is dedicated to the interconnect traffic. The second is used for public access to the server and for the Oracle Virtual IP address as well. Implementing NIC bonding will require additional network interfaces. Please note the interface naming should be the same on all nodes of the cluster.
If you have a DNS environment, you need to configure the following addresses manually in your corporate DNS:
• A public IP address for each node
• A virtual IP address for each node
• A private IP address for each node
• Three single client access name (SCAN) addresses for the cluster. Note that the SCAN cluster names need to be resolved by DNS and should not be stored in the /etc/hosts file; three addresses are the recommendation (see the resolution check after this list).
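A quick way to confirm the SCAN entries resolve correctly from a cluster node is shown below; the SCAN name, DNS server, and addresses are hypothetical placeholders for your own DNS entries:

# nslookup cluster01-scan.example.com
Server:         10.50.80.10
Address:        10.50.80.10#53

Name:   cluster01-scan.example.com
Address: 10.50.81.180
Name:   cluster01-scan.example.com
Address: 10.50.81.181
Name:   cluster01-scan.example.com
Address: 10.50.81.182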
While we won't be covering the setup of the public or SCAN network interfaces, or the step-by-step Oracle RAC installation (see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux, Chapter 3, for how to install a RAC cluster environment), it is important to point out where we use the FlexFabric network interface resources that we created earlier using bonding for the Oracle cluster interconnect. As shown in Figure 21, be sure the interface names (bond0 and bond1) are the bonded devices we created earlier. The FlexFabric module supports uplink failover but not failover for downlink connections; we need these bonded NICs to protect against a complete module failure.
Figure 21. Check that the interface names match our NIC bonding names

In Figure 22 we must manually enter the path for the multipath device created earlier.
Figure 22. Entering the Linux mapper device

The Oracle ASM disk group configuration uses all eight multipath devices created earlier (Figure 23). We select External
redundancy because the EVA8400 was configured for RAID 0+1 for the 8 devices. Oracle ASM will still stripe across the 8
devices automatically. This configuration will provide an optimal setup.
Figure 23. Selecting the ASM devices and using External redundancy

Installing Oracle database
When installing the RAC software as well as the Oracle database, there is no functional difference between using individual network or FC HBA cards and using the FlexFabric solution when accessing the attached storage and networks.
Please refer to the following Oracle documentation for setting up an Oracle 11gR2 database. Reviewing the support notes requires creating an account at https://support.oracle.com.
• Oracle Database Installation Guide 11g Release 2 (11.2) for Linux
• Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX
• Oracle Grid Infrastructure Installation Guide 11g Release 2 for Linux
• Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2
• Support note 560992.1: Red Hat and Oracle Enterprise Linux Kernel Versions and Release Strings
• Support note 169706.1: Oracle Database on UNIX AIX, HP-UX, Linux, Mac OS X, Solaris, Tru64 UNIX Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2)
• Support note 880989.1: Requirements for Installing Oracle 11gR2 RDBMS on RHEL (and OEL)
• Support note 810394.1: RAC and Oracle Clusterware Starter Kit and Best Practices
• Support note 811306.1: RAC Starter Kit and Best Practices (Linux)

Appendix A: Hardware and software configuration tested
Table 1. Hardware and software components

Components                      | Size                                                                                        | Software
(2) c7000 Enclosure             |                                                                                             |
(8) ProLiant BL460c G7 blades   | 72GB memory, Six-Core Intel Xeon @ 2.93GHz, (2) 146GB 15K SAS drives                        | Red Hat Enterprise Linux 5.8
(2) HP EVA8400 Storage          | 14GB cache, (30) 146GB 15K FC drives, (2) 8/24 SAN Switches (16 Full Fabric Ports Enabled)  | Command View EVA
Blade Interconnect              | (4) HP Virtual Connect FlexFabric 10Gb/24-Port Modules                                      | VC Manager
(2) HP 5820 Switches            | 24 1/10 GigE ports                                                                          |
Oracle Enterprise Edition 11gR2 | 500GB database                                                                              | Oracle 11gR2 RAC

Appendix B: Oracle High Availability Internet Protocol (HAIP) redundant node interconnect usage
An alternative to the Linux private interconnect bonding solution used in this paper is to use Oracle HAIP. Oracle Clusterware 11g Release 2, combined with Oracle Automatic Storage Management (ASM), has become known as Oracle's Grid Infrastructure software and includes redundant node interconnect functionality. While in previous Oracle database releases NIC bonding, trunking, teaming, or similar technology was required between the database instance nodes to provide redundant, dedicated, private communication interconnects, Oracle Grid Infrastructure Clusterware can now be used as another option to ensure interconnect redundancy. This functionality is available starting with Oracle Database 11g Release 2, Patch Set One (11.2.0.2) and higher.
The Redundant Interconnect Usage feature does not operate on the network interfaces directly. Instead, it is based on a
multiple-listening-endpoint architecture, in which a highly available virtual IP (the HAIP) is assigned to each private network
(up to 4 interfaces supported).
By default, Oracle Real Application Clusters (RAC) software can use all of the HAIP addresses for the private network
communication for redundancy, also providing load balancing across the set of interfaces identified as the private network.
If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the
corresponding HAIP address of the failed network to one of the remaining functional interfaces. When the failed interface
once again becomes available, the HAIP address migrates back; both failover and failback migrations occur without any user intervention.
HAIP is configured automatically by an Oracle Grid Infrastructure resource. HAIP is configured as an interconnect alias on
each private network provided to the Oracle installer. The HAIP address is assigned using the 169.254.*.* self-assigned
addressing space. All RAC interconnect traffic will use the HAIP addresses. HAIP can use as many as four networks, and they may be configured on the same or different subnets. If the installer chooses to provide failover using HAIP, as opposed to the bonding described earlier in this document, be aware that Oracle HAIP only provides failover for Oracle RAC interconnect traffic; any other traffic running on the private interconnect will not fail over to the remaining network interfaces. If you require other network traffic on this private network, then Linux NIC bonding may be a better option. See the Oracle "Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks" documentation for more details.
Note
For HAIP functionality to work properly in an FCoE environment, you need to obtain the fix for Oracle Bug 13102312. See
https://support.oracle.com.
To prepare for installing Oracle RAC using HAIP, do not bond the private interconnect, but rather configure at least two interfaces with two private subnets as follows.
Examples from the files /etc/sysconfig/network-scripts/ifcfg-eth[2-3]:
# ServerEngines Corp. Emulex OneConnect 10Gb NIC (be3)
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:17:A4:77:C4:26
IPADDR=192.168.1.172
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

# ServerEngines Corp. Emulex OneConnect 10Gb NIC (be3)
DEVICE=eth3
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:17:A4:77:C4:28
IPADDR=192.168.2.172
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
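
Before running the Oracle Grid installer, it is worth confirming that both private subnets have end-to-end connectivity between all nodes. The following is a minimal sketch; the peer addresses 192.168.1.173 and 192.168.2.173 are hypothetical second-node addresses chosen to match the example subnets above.

# Ping a second node across each private interface
[root@aps81-172 ~]# ping -c 3 -I eth2 192.168.1.173
[root@aps81-172 ~]# ping -c 3 -I eth3 192.168.2.173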

In the HP Virtual Connect configuration, make sure the public and private networks are assigned to different datacenter switches; for example, map one datacenter switch to Bay 1 and the other to Bay 2. This protects against any single datacenter switch failure. Note the Ethernet NIC adapter MAC address mapping to the LOM and bay number, as shown in Figure 24.
Figure 24. Setting up the public and private networks for Oracle clustering and HAIP

The only other difference between the Linux NIC bonding installation described in this document and HAIP is that, when running the Oracle Grid installation, you must assign both private networks for Oracle RAC internode communication, as shown in Figure 25.
Figure 25. Assigning the private network interfaces using Oracle Grid Infrastructure for RAC internode traffic.

After installation, you can check the status of HAIP by querying the Oracle database cluster with the Oracle interface configuration tool (oifcfg) to list the Oracle Grid cluster private networks.
[root@aps81-172 ~]# oifcfg getif
eth2 192.168.1.0 global cluster_interconnect
eth3 192.168.2.0 global cluster_interconnect
bond0 10.50.80.0 global public

You can verify the networks with the Linux ifconfig command. Querying the interfaces and their interconnect aliases shows that Oracle Grid has successfully created HAIP networks on both private interconnects, eth2 and eth3 in our example. Notice that the Oracle Grid installer created the interconnect alias addresses eth2:1 and eth3:1 using 169.254.*.* self-assigned addresses.
[root@aps81-172 ~]# ifconfig eth2 | head -2
eth2 Link encap:Ethernet HWaddr 00:17:A4:77:C4:26
inet addr:192.168.1.172 Bcast:192.168.1.255 Mask:255.255.255.0

[root@aps81-172~]# ifconfig eth2:1| head -2
eth2:1 Link encap:Ethernet HWaddr 00:17:A4:77:C4:26
inet addr:169.254.27.82 Bcast:169.254.127.255 Mask:255.255.128.0

[root@aps81-172 ~]# ifconfig eth3 | head -2
eth3 Link encap:Ethernet HWaddr 00:17:A4:77:C4:28
inet addr:192.168.2.172 Bcast:192.168.2.255 Mask:255.255.255.0

[root@aps81-172~]# ifconfig eth3:1| head -2
eth3:1 Link encap:Ethernet HWaddr 00:17:A4:77:C4:28
inet addr:169.254.143.150 Bcast:169.254.255.255 Mask:255.255.128.0

Finally, to verify that the Oracle database instance is using the correct HAIP networks, look for the following entries in the Oracle RDBMS alert log file.

Private Interface 'eth2:1' configured from GPnP for use as a private interconnect.
[name='eth2:1', type=1, ip=169.254.27.82, mac=00-17-a4-77-c4-26,
net=169.254.0.0/17, mask=255.255.128.0, use=haip:cluster_interconnect/62]

Private Interface 'eth3:1' configured from GPnP for use as a private interconnect.
[name='eth3:1', type=1, ip=169.254.143.150, mac=00-17-a4-77-c4-28,
net=169.254.128.0/17, mask=255.255.128.0, use=haip:cluster_interconnect/62]
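
The interconnect assignment can also be confirmed from inside the database using the standard gv$cluster_interconnects view. This is a sketch only; with HAIP in use, each instance should report the 169.254.*.* addresses on the eth2:1 and eth3:1 aliases rather than the physical interface addresses.

[oracle@aps81-172 ~]$ sqlplus / as sysdba
SQL> SELECT inst_id, name, ip_address FROM gv$cluster_interconnects;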

Appendix C Grid Naming Service (GNS) configuration on 11gR2 RAC
If you plan to use Grid Naming Service (GNS), then before the Oracle Grid Infrastructure installation you must configure your domain name server (DNS) to forward name resolution requests for the subdomain GNS serves (the cluster member nodes) to GNS.
When GNS is enabled, name resolution requests to the cluster are delegated to GNS, which listens on the GNS virtual IP address. This address is defined in the DNS domain before installation. DNS must be configured to delegate resolution requests for cluster names (any names in the subdomain delegated to the cluster) to GNS. When a request comes in for that subdomain, GNS processes it and responds with the appropriate addresses for the name requested. In the example below, we use the domain aisscorp.com.
Step 1: Install the binaries to set up the DNS server. The following Linux RPMs are installed on the DNS server, shep-client.
bind-libs-9.3.6-4.P1.el5_4.2
bind-utils-9.3.6-4.P1.el5_4.2
bind-9.3.6-4.P1.el5_4.2

Step 2: Edit the file /etc/named.conf. The named.conf file should contain a section for global settings and a section for zone
file settings.
[root@shep-client ~]# vi /etc/named.conf

options {
directory "/var/named"; // Base directory for named
allow-transfer {"none";}; // Slave servers that can pull zone transfers. Ban everyone by default
};
zone "3.168.192.IN-ADDR.ARPA." IN { // Reverse zone.
type master;
notify no;
file "aisscorp.reverse";
};
zone "aisscorp.com." IN {
type master;
notify no;
file "aisscorp1.zone";
};
include "/etc/rndc.key";
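
The named.conf above includes /etc/rndc.key. If that key file does not already exist on the DNS server, it can be generated with the rndc-confgen utility that ships with BIND; the ownership and permissions below are a sketch and should match the account your named service runs under (named on Red Hat Enterprise Linux 5).

[root@shep-client ~]# rndc-confgen -a
[root@shep-client ~]# chown root:named /etc/rndc.key
[root@shep-client ~]# chmod 640 /etc/rndc.key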


Step 3: Edit the file /var/named/aisscorp1.zone to create the zone information for the aisscorp.com zone.
[root@shep-client ~]# vi /var/named/aisscorp1.zone

$TTL 1H ; Time to live
$ORIGIN aisscorp.com.
@ IN SOA shep-client root.aisscorp.com. (
2009011201 ; serial (today's date + today's serial #)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour;
A 192.168.3.49
NS shep-client ; shep-client is the name server for aisscorp.com
shep1 A 192.168.3.30
shep2 A 192.168.3.31
shep3 A 192.168.3.37
shep4 A 192.168.3.38
shep5 A 192.168.3.41
shep6 A 192.168.3.42
shep7 A 192.168.3.43
shep8 A 192.168.3.44
shep-client A 192.168.3.49
gns-grid A 192.168.3.29 ; A record for the GNS;
;sub-domain(gns.aisscorp.com) definitions
$ORIGIN gns.aisscorp.com.
@ NS gns-grid.aisscorp.com. ; name server for the gns.aisscorp.com sub-domain


Step 4: Edit the file /var/named/aisscorp.reverse to create reverse zone information.
[root@shep-client ~]# vi /var/named/aisscorp.reverse

$TTL 1H
@ IN SOA shep-client root.aisscorp.com. (
2009011201 ; serial (today's date + today's serial #)
3H ; refresh 3 hours
1H ; retry 1 hour
1W ; expire 1 week
1D ) ; minimum 24 hour;
NS shep-client
30 PTR shep1.aisscorp.com.
31 PTR shep2.aisscorp.com.
37 PTR shep3.aisscorp.com.
38 PTR shep4.aisscorp.com.
41 PTR shep5.aisscorp.com.
42 PTR shep6.aisscorp.com.
43 PTR shep7.aisscorp.com.
44 PTR shep8.aisscorp.com.
49 PTR shep-client.aisscorp.com.
29 PTR gns-grid.aisscorp.com. ; reverse mapping for GNS
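
Before starting named, the configuration and zone files can be validated with the syntax checkers that ship with BIND. A minimal sketch, using the file names from the steps above:

[root@shep-client ~]# named-checkconf /etc/named.conf
[root@shep-client ~]# named-checkzone aisscorp.com /var/named/aisscorp1.zone
[root@shep-client ~]# named-checkzone 3.168.192.in-addr.arpa /var/named/aisscorp.reverse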


Step 5: Start the DNS server and make sure the named service starts automatically on reboot.
[root@shep-client ~]# service named restart
[root@shep-client ~]# chkconfig named on
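
To confirm that named started and is listening on port 53, the following quick checks can be used; output is omitted here and will vary by system.

[root@shep-client ~]# service named status
[root@shep-client ~]# netstat -tulnp | grep :53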

Step 6: Edit /etc/resolv.conf and /etc/nsswitch.conf on all the RAC node servers with the DNS information.
[root@shep-client scripts]# vi /etc/resolv.conf
search aisscorp.com gns.aisscorp.com
nameserver 192.168.3.49
nameserver 192.168.3.11

[root@shep-client scripts]# vi /etc/nsswitch.conf
hosts:      dns files
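
Because dig queries DNS directly, it does not exercise the nsswitch.conf ordering. A quick way to confirm the full resolver path on each RAC node is getent, which resolves names through the same library calls applications use. A minimal sketch, using a node name and address from the zone file above:

[root@shep1 ~]# getent hosts shep1.aisscorp.com
192.168.3.30    shep1.aisscorp.com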

Step 7: To verify the DNS server is working, issue the dig command from all RAC nodes.
[root@shep1 scripts]# dig gns-grid.aisscorp.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> gns-grid.aisscorp.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15859
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;gns-grid.aisscorp.com. IN A

;; ANSWER SECTION:
gns-grid.aisscorp.com. 3600 IN A 192.168.3.29

;; AUTHORITY SECTION:
aisscorp.com. 3600 IN NS shep-client.aisscorp.com.

;; ADDITIONAL SECTION:
Shep-client.aisscorp.com. 3600 IN A 192.168.3.49

;; Query time: 41 msec
;; SERVER: 192.168.3.49#53(192.168.3.49)
;; WHEN: Thu Aug 25 04:18:03 2011
;; MSG SIZE rcvd: 91

Run the same checks from each server node. The dig output above shows that gns-grid.aisscorp.com resolves to 192.168.3.29, which is the GNS VIP address; the reverse lookup below confirms the corresponding PTR record.
[root@shep1 scripts]# dig -x 192.168.3.29

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> -x 192.168.3.29
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50228
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;29.3.168.192.in-addr.arpa. IN PTR

;; ANSWER SECTION:
29.3.168.192.in-addr.arpa. 3600 IN PTR gns-grid.aisscorp.com.

;; AUTHORITY SECTION:
3.168.192.in-addr.arpa. 3600 IN NS shep-client.3.168.192.in-addr.arpa.

;; Query time: 0 msec
;; SERVER: 192.168.3.49#53(192.168.3.49)
;; WHEN: Thu Aug 25 04:20:27 2011
;; MSG SIZE rcvd: 98

Step 8: Enter the SCAN details for the cluster, which we configured in the pre-installation steps, as shown in Figure 26.
Cluster Name: gns
SCAN Name: gns-scan.gns.aisscorp.com
SCAN Port: 1521
GNS Sub Domain: gns.aisscorp.com
GNS VIP Address: 192.168.3.29

Figure 26. Setting up the SCAN information for the cluster.

Step 9: Add all the RAC nodes in the cluster as defined in the Oracle cluster documentation. We do not need to add VIP information for each RAC node because the VIPs are now assigned through DHCP and GNS.

Check the Status of DNS and GNS
Step 10: Use the dig command to verify that DNS is forwarding requests for the GNS subdomain to GNS.

[root@shep1 NEW]# dig gns-scan.gns.aisscorp.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> gns-scan.gns.aisscorp.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51486
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;gns-scan.gns.aisscorp.com. IN A

;; ANSWER SECTION:
gns-scan.gns.aisscorp.com. 114 IN A 192.168.3.68
gns-scan.gns.aisscorp.com. 114 IN A 192.168.3.52
gns-scan.gns.aisscorp.com. 114 IN A 192.168.3.53

;; AUTHORITY SECTION:
gns.aisscorp.com. 86400 IN NS gns-grid.aisscorp.com.

;; ADDITIONAL SECTION:
gns-grid.aisscorp.com. 86400 IN A 192.168.3.29

;; Query time: 0 msec
;; SERVER: 192.168.3.49#53(192.168.3.49)
;; WHEN: Sat Aug 25 14:28:45 2011
;; MSG SIZE rcvd: 130

Finally, verify that GNS is resolving the SCAN address by executing nslookup, for example:
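The following is a sketch of the expected nslookup result (output abbreviated); the three SCAN addresses are the DHCP-assigned addresses returned by the dig query above and will differ in your environment.

[root@shep1 NEW]# nslookup gns-scan.gns.aisscorp.com
Server:         192.168.3.49
Address:        192.168.3.49#53

Name:   gns-scan.gns.aisscorp.com
Address: 192.168.3.68
Name:   gns-scan.gns.aisscorp.com
Address: 192.168.3.52
Name:   gns-scan.gns.aisscorp.com
Address: 192.168.3.53

Once the Grid installation is complete, the GNS resource itself can also be checked from the Grid Infrastructure home with srvctl status gns and srvctl config gns.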

Summary
The key benefits of Virtual Connect FlexFabric for Oracle database environments, along with our recommendations, are:
• Oracle Single Instance and Real Application Clusters are certified and supported by Oracle for 11gR2 and higher with Oracle Linux or Red Hat Enterprise Linux 5.8 and higher. Check Oracle support for the latest certification.
• Simplified, wire-once connections to LANs and SANs: add servers, replace servers, and move workloads in minutes.
• Best network economics with Flex-10 technology, better than FCoE alone.
• Fewer LAN/SAN interconnect modules required.
• Reduced enclosure interconnect capital expense.
• Increased configuration flexibility with a single module to run all fabrics: FCoE, native FC, 1G/10G Ethernet, and iSCSI.
• Best LAN and SAN converged infrastructure for virtualization environments.
• Fewer networking cables and ports required.
• Use Jumbo Frames for network packets.
• Use Red Hat Enterprise Linux / Oracle Linux 5.8 or later, even though version 5.5 is supported for a single enclosure.
• Using the embedded NC553i on HP ProLiant BL460c G7 servers, FCoE gave about 50% better throughput than an NC553m mezzanine card in our tests; the NC553m in turn achieved about 100% better throughput than an NC551m mezzanine card.
• Use stacking links for the Oracle RAC private network.
• For best performance, build the RAC cluster from identical or nearly identical systems to prevent network and Fibre Channel bottlenecks.
• Support for today's and tomorrow's I/O needs.



For more information
HP FlexFabric Virtualized network connections and capacity From the edge to the core
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA0-7725ENW
Virtual Connect FlexFabric Cookbook
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf
HP Storage hp.com/go/storage
Reference Architecture for HP Data Protector and Oracle 11gR2 RAC on Linux
http://h20195.www2.hp.com/V2/getdocument.aspx?docname=4AA3-9092ENW
HP Converged Infrastructure hp.com/go/ConvergedInfrastructure
HP Virtual Connect hp.com/go/virtualconnect
HP CloudSystem Matrix hp.com/go/matrix
HP Solution Demo Portal hp.com/go/solutiondemoportal
Converged networks with Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01681871/c01681871.pdf
HP Multifunction Networking Products
http://h18004.www1.hp.com/products/servers/proliant-advantage/networking.html
HP/Oracle Collateral, Demos, Reference Configurations and more hp.com/go/oracle
Oracle Certification of 11.2 running FCoE and Virtual Connect technologies Note [ID 1405274.1]
https://support.oracle.com
Oracle RAC Technologies Matrix for Linux Platforms
oracle.com/technetwork/database/enterprise-edition/tech-generic-linux-new-086754.html

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.














Sign up for updates
hp.com/go/getupdated





© Copyright 2012, 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only
warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should
be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle is a registered trademark of Oracle and/or its affiliates. UNIX is a
registered trademark of The Open Group.

4AA4-0227ENW, August 2013, Rev. 2
