
Oracle Applications 11.5.9 with 9i RAC
Installation & Configuration
Linux and Unix (generic)

Author: Jim Stone, Oracle Advanced Product Services


Creation Date: August 20, 2003
Last Updated: August 10, 2004
Version: 1.8



Contents

1.0 Introduction
2.0 Objective
3.0 Audience
4.0 Environment
5.0 RAC General Information
6.0 Applications Installation/Configuration
7.0 Test Applications Failover Scenarios
8.0 Finishing the RAC Installation
Appendix B – Control File
Appendix C – DD Datafiles to Raw Devices
Appendix D – SQL Scripts
Appendix E – Net Configuration Files
References



1.0 Introduction
This paper focuses on the installation, setup, configuration, and failover
considerations for Oracle Applications Release 11.5 in a RAC environment,
specifically release 11.5.9 with Rapid Clone. The procedure outlined in this
paper differs significantly from earlier Applications releases because of the
change in the Applications tech stack to include Oracle RDBMS 9.2.0.3 in the
release 11.5.9 distribution. The cloning procedure also differs significantly
from earlier releases that do not utilize Rapid Clone.

The purpose of this document is to identify the steps and requirements
necessary to install Oracle Applications using the standard Rapidwiz
installation, and to convert the existing single instance installation to Real
Application Clusters (RAC) using standard installation and upgrade
methods. Failover testing is conducted within existing constraints, as the
current Oracle Applications Release 11i architecture supports dynamic
failover with respect to Concurrent Processing only. Dynamic OLTP
workload failover, specifically Transparent Application Failover (TAF) is not
supported.

The intent behind the methodology outlined here is to provide the
instructions required in order to complete a 9i RAC implementation with
Applications 11i, and provide checkpoints where verification and progress
can be measured in order to identify and resolve any issues that are
encountered within their relevant steps. The alternative of executing
multiple tasks, and performing verification testing upon completion,
provides no mechanism to isolate process failures to specific components,
and perform corrective actions necessary to complete the procedure.

Following the steps outlined within this document, and completing the tasks
required in order to install Oracle Applications Release 11i, convert the single
instance installation to RAC, test and verify the configuration, requires a
significant amount of time and effort and should not be undertaken without
committing to executing the process from start to finish. The estimated
minimum number of hours required in order to complete all of these tasks is
approximately 40 hours under optimal conditions.

The procedure outlined in this document is intended to be followed when
working in either a Linux, or Unix OS environment, and provides specific
instructions related to the tasks to configure a Linux environment (tested on
Linux Red Hat Advanced Server 2.1 and 3.0). These are indicated where
relevant throughout the document, and can be ignored if working in a Unix
environment. No platform specific Unix issues are addressed, as these vary
significantly among different OSs, and are beyond the scope of this
document.



2.0 Objective
The primary objective of the installation and configuration steps outlined,
and subsequent testing that is presented in this document is to identify the
tasks necessary to perform a standard Oracle Applications Release 11i single
instance installation, and then convert the single instance RDBMS installation
to Oracle Real Application Clusters (RAC). These steps are presented with
sufficient detail to identify each task and point out any behavior, or specifics
relevant to ensure the execution of each task is successful. The reader should
have sufficient knowledge, and proficiency in each of the necessary
disciplines required in order to execute each task with the level of detail
provided.

The Oracle Applications Release 11i technology stack (Applications Forms
and Reports server, Apache server, and Database server) can be configured
in a variety of different ways dependent on hardware availability and
specific business and processing requirements. No workload partitioning,
load balancing, or scalability aspects of the RAC configuration are considered
during the execution of this process. With Applications Release 11.5.9 the
Applications software distribution includes an out of the box integration of
RDBMS features for Applications specific to Oracle9i R2 functionality for
Automatic PGA Memory Management, and Automatic Undo Management.
It is beyond the scope of this document to provide specifics related to the
internals, tuning aspects, or functionality of these features. This information
can be found in the Oracle9i R2 Database Administrators Guide.

RAC instance failover testing with Oracle Applications Release 11i is
conducted within the current limitations of the components that make up the
Applications Technology Stack – Transparent Applications Failover (TAF) is
not supported in Oracle Applications. TAF is currently supported only in
OCI8 based applications (SQL*Plus, PRO*). In an Applications environment
dynamic session failover is not possible; sessions must be terminated and
reinitiated during the RAC failover process.

The flexibility of the Applications architecture enables the Oracle
Applications technology stack to be implemented in a number of different
configuration scenarios – in a single or multi node configuration. In one of
the most basic configuration implementations, Applications can be
configured in a single node, or collapsed architectural configuration. In this
scenario, a single installation is implemented on a single node/tier server.
The Applications Tier, Middle Tier, and Database Tier all reside on the same
single server platform.

The next most common scenario is the two-node/tier configuration. In this
scenario for example, the Applications, and Middle Tier are deployed on a
single server, and the Database Tier is deployed on a separate server. This
configuration can be altered to include a Database Tier, which could utilize
RAC. Another common configuration scenario is a three-node/tier model, in
which the Applications Tier components reside on one node, the Middle Tier
components on another, and the Database Tier on yet another node/server.

Variations to these configurations can include Distributed Concurrent
Processing (DCP) and the implementation of an IP load balancer such as Big
IP, or Cisco LocalDirector, along with JDBC and Forms Load Balancing
among multiple Middle Tier servers. Application Failover can be
implemented for the Applications and Middle Tier, the Database Tier, on any
single, or all of the server platforms in the configuration. The complexity of
each scenario is dependent on the availability, recovery, performance, and
manageability requirements for each specific installation.

Although Oracle Applications does not fully exploit the functionality of
dynamic session failover, Applications’ workload failover is possible with the
implementation of Parallel Concurrent Processing (PCP), with the
requirement that sessions be terminated and reconnected during a
failover scenario. This functionality when implemented in a RAC
environment provides for a very short, limited down time window, and high
availability for Oracle Applications implementations.

In addition to workload distribution among RAC nodes, a RAC configuration
provides considerable scalability, and performance advantages where these
requirements are needed over a traditional single instance installation.
Applications OLTP users can be directed to one node or another in a RAC
cluster, and Concurrent Processing can be distributed to yet another node in
order to exploit the scalability advantages of RAC clusters running Oracle
Applications. Cache Fusion technology that is available in RAC, significantly
enhanced from the Oracle8i implementation, eliminates much of the
performance overhead associated with the old disk ping model used in prior
OPS releases, and further enables the distribution of Applications’ workload
among nodes in an RAC cluster with reduced overhead and performance
impacts.



3.0 Audience
This paper is intended for Applications development, product management,
system architects, and system administrators involved in deploying and
configuring Oracle Applications. This document will also be useful to field
engineers and consulting organizations to facilitate installations and
configuration requirements.

Familiarity with the installation, configuration, and operational details of
Oracle Applications 11i components, and Oracle9i RDBMS software is
required in order to execute the procedures outlined in this document.
This includes knowledge regarding the use of the Oracle Universal Installer
(OUI), and Oracle Applications administration utilities (ADPATCH,
ADADMIN, ADCTRL). Knowledge of Database Administration, RAC
concepts, and general Applications and RDBMS Server architecture is also
required.

With the changes that have been implemented in the 11.5.9 Applications
technology stack, the process outlined in this document differs significantly
from the previous 11.5.8 Applications documentation. This is primarily due
to the change in the bundled Applications’ RDBMS from 8.1.7.4, to 9.2.0.3.
Although this eliminates the need for an RDBMS upgrade during the RAC
conversion, lack of RAC support in the current Rapidwiz installation requires
a separate 9.2.0.x RDBMS ORACLE_HOME to be installed in order to
support RAC (and Cluster Manager on Linux) requirements. The latest
certified patchset release of 9iR2 should be installed according to the
Interoperability Notes for Oracle Applications.



4.0 Environment
This section describes the software configuration of the platform and
environment where testing, and execution of this procedure was performed.

This section also includes minimum Tech Stack requirements that must be
met for installation.

4.1 Software Configuration

Oracle Software
The following Oracle software was used during this testing:

Oracle Applications Release 11.5.9 (production release)
Oracle9i Release 9.2.0.4 (production release)
Oracle Cluster Manager Release 9.2.0.4 (production release)
Oracle9i Real Application Clusters Release 9.2.0.4 (production release)
Oracle Cluster File System Release 9.2.0.2.0.41

Applications Tech Stack


The following Tech Stack guidelines are required:

Techstack Component     Minimum Version     Recommended Version

Oracle RDBMS            9.2.0.2             Latest Certified Patchset
iAS                     1.3.12              1.0.2.2.2
Developer 6i            Patchset 10         Latest Certified Patchset
JDBC Thin               9iR2                9iR2
OA Framework            5.6E                5.6E
OAM                     2.0                 2.2.0
Jinitiator              1.1.8.16            1.1.8.16



5.0 RAC General Information

5.1 Cluster
A cluster is a group of nodes that are interconnected to work as a single,
highly available and scalable system. The primary cluster components are
processor nodes, a cluster interconnect and shared disk subsystem. Clusters
share disk, not memory. Each node also has its own operating system,
database and application software.

Clusters provide improved fault tolerance and modular incremental system
growth over symmetric multi-processor systems. Clustering delivers high
availability to users in the event of system failures. High availability is
achieved by providing redundant hardware components such as redundant
nodes, interconnects and disks. A cluster provides a single virtual
application space in which all application transactions can be performed -
transactions can span CPU and memory resources of multiple cluster nodes.
In this manner, clusters provide the hardware foundation for application
load balancing across cluster nodes. Since clusters provide a unified view of
all disks on the system, systems administration and application management
operations can be centralized.

5.2 Raw Filesystem (Shared Disk)
Unix filesystems can be configured as either ‘cooked’, or ‘raw’ devices. The
primary difference is that a ‘raw’ disk is one that has not had a Unix file
system built on it. Raw disks also bypass the Unix buffer cache that is used by
a ‘cooked’ filesystem (UFS).

All nodes in a Real Application Cluster require simultaneous access to the
shared disks where the database datafiles reside. The implementation of a
shared disk subsystem is operating system dependent, and a RAC
implementation can utilize either a cluster file system, or raw files in order to
support placement of the database datafiles. The use of a cluster file system
greatly simplifies the installation and administration of a Real Application
Clusters environment.

5.3 Volume Manager


A volume manager is a storage management subsystem that allows you to
manage physical disks as logical devices called volumes. A volume is a
logical device that appears to data management systems as a physical disk
partition device. A volume manager overcomes physical restrictions
imposed by hardware disk devices by providing a logical volume
management layer. This allows volumes to span multiple disks. Volume
managers are supplied by OS hardware vendors, or can be obtained from 3rd
party vendors.

5.4 Real Application Clusters
Real Application Clusters is a computing environment that harnesses the
processing power of multiple, interconnected computers. Real Application
Clusters software and a collection of hardware known as a cluster unites the
processing power of each component to become a single, robust computing
environment. A cluster generally comprises two or more computers (or
nodes). A cluster is also sometimes referred to as a loosely coupled computer
system.

In Real Application Clusters environments, all nodes concurrently execute
transactions against the same database. Real Application Clusters
coordinates each node’s access to the shared data to provide consistency and
integrity.



6.0 Applications Installation/Configuration
The tasks that need to be performed in order to complete the RAC
installation for Oracle Applications 11i (11.5.9) consist of the following:

1. Install OCFS, and Oracle Cluster Manager (Linux specific)
2. Install Oracle Applications 11i – standard installation.
3. Install Oracle9i with the RAC option and apply the latest certified
patchset.
4. Switch the Applications 11.5.9 ORACLE_HOME to the Oracle9i RAC
ORACLE_HOME
5. Create the shared files for RAC.
6. Convert the single instance 11i Oracle9i database to Oracle9i RAC.
7. Establish the Applications Failover Environment.
8. Clone the Applications installation to all nodes in the cluster.
9. Set up Parallel Concurrent Processing (PCP).
Each of these tasks will be reviewed in detail in the sections provided below.
It is best to use a phased approach to complete each of the tasks that are
required in order to move from a single-instance to RAC environment, with
verification testing between each step/phase.

Note that the installation and configuration of all cluster hardware
components, and software, in a Unix environment should be performed and
verified before starting this procedure (Linux cluster installation instructions
are part of this procedure). This will enable the execution of tasks dependent
on cluster functionality to be executed without interruption. This includes
the Oracle9i RDBMS software installation for RAC to each node in the
cluster, and the configuration of raw or cluster filesystems required for RAC.

In an Applications environment the configuration normally utilizes an
‘instance’ name such as PROD, TEST, VIS depending upon the type of
installation that is performed. Normally the ‘Applications instance’, matches
the ‘database instance’, or SID. When configuring Applications for a RAC
environment it is necessary to utilize two different SID designations.
Throughout this document when specific references are intended for the
database SID this will be referred to as <DB_SID>. This is the SID for the
Oracle Server instance. The <SID> references are specific to the Applications
SID, or TWO_TASK that is defined during the Rapidwiz installation. These
are common until the RAC conversion, at which time the <DB_SID> becomes
a separate designation, specific to the RDBMS instances on each cluster node.
With Applications release 11.5.9 the Applications context is also
implemented. Implementation of the context appends the hostname to the
Applications SID for many of the directory structures used in Applications
that were formerly identified by only the Applications SID. The format of
the Applications context is <SID>_<HOST>, and is referred to where
appropriate throughout this documentation.

When choosing a SID name during the Applications Rapidwiz installation
choose a name that is generic, but meaningful, as this will initially be used as
both the Oracle instance name, database name, and Net8 service name
(TWO_TASK) identifier. This is advisable because when converting the
installation to RAC the database and Net8 service names are not changed,
only the Oracle instance name (DB_SID). During the testing that was
performed for this documentation a SID name of TRN was used, and is
referred to throughout this procedure in various examples.

Note that patching requirements will be unique to each platform, and
Applications release, so it is not practical to identify a specific series of
patches for each particular installation. The identification of major release
level requirements is performed in this document, such as the application of
Concurrent Processing Rollup Patch E (CP.E), and any significant one-off
patches that are required.

Applications release 11.5.9 incorporated Oracle9i RDBMS 9.2.0.3 into the
Applications tech stack with the standard software distribution. This release
bundle provides the latest version of Oracle9i R2 for Applications, and all of
the included functionality in this release. With the requirement to enable
RAC and convert the existing RDBMS from a single instance to a RAC
configuration it is less complex to install a separate Oracle9i home, outside of
the 11.5.9 filesystem, apply the latest certified RDBMS patchset, patch the
installation according to the Interoperability Notes for Oracle Applications,
and then switch the release 11.5.9 Oracle9i RDBMS to use the newly installed
Oracle9i 9.2 RAC distribution. This requirement is covered in more detail in
the relevant section, later in this document.

6.1 Install Oracle Cluster File System and Oracle Cluster Manager
In order to provide support for the shared database files, and the necessary
cluster support required by RAC, in a Linux environment, Oracle Cluster File
System, and Oracle Cluster Manager must be installed. If working in a Unix
environment ensure that either the vendor specific or 3rd party products are
installed to provide this functionality.

It is important to note that at this time OCFS 1.0 only supports Oracle
database datafiles (data, redo, control, and archive log files). Non-Oracle
files, other than the quorum, and srvm configuration files should not be
placed on OCFS disk.

Instructions for installing OCFS can be found in MetaLink Note:220178.1,
and the Oracle Cluster File System Installation Notes. In order to install
Oracle Cluster File System on Linux perform the following tasks:

1. Format the shared devices on each node using FDISK as appropriate for
the particular installation. FDISK is an OS utility; its particular usage and
instructions can be found in your OS documentation.

2. Create the mount points for the OCFS filesystems on each cluster node.
This is done as the root user (e.g. mkdir -p /d01 /d02 /d03 /d04 /d05 /d06
/d07).

3. Install the OCFS RPM files per the OCFS Installation Instructions on each
cluster node. The latest RPMs can be found by following the instructions in
Note:238278.1 on MetaLink. If using a custom kernel, issues with the
installation are unsupported. The RPMs are installed as the root user. After
downloading each required RPM into a working directory, issue the
commands:
rpm -i ocfs-support-1.0-2.i686.rpm
rpm -i ocfs-2.4.9-e.3-1.0-2.i686.rpm
rpm -i ocfs-tools-1.0-2.1.i686.rpm

Note that the RPM versions may differ for your installation, but the
modules should be installed in the order above.
4. Using the utility ocfstool, generate the file /etc/ocfs.conf. When
generating the ocfs.conf file, use the private adaptor, not public. From an
X-Windows, or VNC session as the root user run the ocfstool by issuing
the command: /usr/sbin/ocfstool. Select the Tasks menu, and supply
the information for the private adaptor. It is not necessary to change the
default port number.

5. Load the ocfs modules as root: /sbin/load_ocfs. Check to ensure the
module was loaded by running /sbin/lsmod | grep ocfs.

6. Make the file system either from the ocfstool, or from the command
line. The mkfs command is executed from one cluster node only, and is
not to be re-run from each node. If executing mkfs.ocfs from the
command line the format is below, where <dir> is the directory that was
created in #2 above, and <device> is the physical device name formatted
in #1 above.

mkfs.ocfs -F -b 128 -L /<dir> -m /<dir> -u 500 -g 1000 -p 0775 /<device>

7. After all devices have been formatted, mount the devices to each cluster
node. An example of mounting an ocfs device is below. In this example
we are mounting <device> /dev/sda1 to <dir> /d01.

mount -t ocfs /dev/sda1 /d01

8. Reboot all cluster nodes, ensure all modules are loaded as required after
startup, and the filesystems are mounted to each cluster node.
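
As a quick sanity check after the reboot, the modules and mounts can be
verified from each node with commands similar to the following (the mount
point shown is an example):

/sbin/lsmod | grep ocfs      # the ocfs module should be listed
mount | grep ocfs            # each OCFS mount point should appear
df -k /d01                   # confirm the shared filesystem is accessible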

After installing the cluster filesystem support with OCFS, it is necessary to
install Oracle9i in order to provide cluster support. This is accomplished by
creating an Oracle9i home, separate from the Applications 11i distribution,
and installing the Oracle9i Cluster Manager (OCM). Install the Oracle 9.2.0.1
Cluster Manager, then upgrade OCM to release 9.2.0.x (latest certified
RDBMS patchset with Oracle Applications) using the following steps.

1. If not a component of the base Linux software distribution, install the
hangcheck-timer RPM. This is required by Oracle Cluster Manager. The
latest version of the hangcheck-timer can be found following the
instructions in MetaLink from Note:232355.1. After downloading the
required RPM into a working directory issue the command as the root
user:
rpm -i hangcheck-timer-2.4.9-e.3-0.4.0-2.i686.rpm

Note that the RPM version may differ for your installation.
2. Create an Oracle home, separate from the 11.5.9 filesystem, and install
Oracle Cluster Manager into the 9.2 ORACLE_HOME. When running
the Oracle Universal Installer from the 9.2.0.1 Oracle distribution, ensure
there is not an /etc/oraInst.loc file pointing to an existing (old)
oraInventory location. During the installation place the oraInventory
into the ORACLE_BASE directory for the 9.2 ORACLE_HOME. The
Applications installation performed in the next section with Rapidwiz

Oracle Applications Release 11i and RAC Installation, and Configuration 12


will place the Applications oraInventory into /etc/oraInventory, or
/var/opt/oracle/oraInventory on Unix.

3. Cluster Manager will require a quorum disk on the OCFS
filesystem that was created earlier in the steps above. Touch a file in one
of the OCFS directories, and use this when configuring Cluster Manager (e.g. touch
/d01/quorum). Note that prior to Oracle release 9.2.0.3 the quorum disk
was not supported on OCFS.

4. After the 9.2.0.1 installation has completed, apply the latest 9.2 patchset
for Oracle Cluster Manager, and configure the Cluster Manager using the
instructions in MetaLink Note:222746.1. Also see the Oracle Cluster File
System, Installation Notes, Release 1.0 for RedHat Linux Advanced
Server 2.1. If installing directly to RDBMS 9.2.0.4, skip steps 4b through 6.
Also, refer to Note:256617.1 for a working example of the cmcfg.ora file.
a. Run cmcfg.sh as oracle from $ORACLE_HOME/oracm/admin
b. Make the following configuration changes to cmcfg.ora in
$ORACLE_HOME/oracm/admin:
1. Remove the watchdog parameters:
WatchdogSafetyMargin=5000
WatchdogTimerMargin=60000
2. Add the parameters
KernelModuleName=hangcheck-timer
HeartBeat=15000
5. Make the following changes to the
$ORACLE_HOME/oracm/admin/ocmargs.ora on both nodes:
a. Remove the watchdog parameter:
watchdogd
6. Edit the ocmstart.sh and remove all references to watchdog:
# watchdogd's default log file
#WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log
# watchdogd's default backup file
#WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak
# Get arguments
#watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE |\
# sed -e 's+^watchdogd *++'`
# Check watchdogd's existance
#if watchdogd status | grep 'Watchdog daemon active' >/dev/null
#then
# echo 'ocmstart.sh: Error: watchdogd is already running'
# exit 1
#fi
# Backup the old watchdogd log
#if test -r $WATCHDOGD_LOG_FILE
#then
# mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
#fi
# Startup watchdogd
#echo watchdogd $watchdogd_args
#watchdogd $watchdogd_args
7. Load the module for hangcheck-timer as root:
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
After loading the hangcheck-timer, check to ensure the module was
loaded (i.e. /sbin/lsmod | grep hangcheck-timer)

8. Start the Oracle Cluster Manager. Before starting OCM, verify that the log
directory under ORACLE_HOME/oracm was created during the
installation (i.e. ORACLE_HOME/oracm/log/cm.out). As the root user,
source the Oracle environment and run
ORACLE_HOME/oracm/bin/ocmstart.sh.
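
For reference, a minimal cmcfg.ora might look similar to the sketch below.
The node names, quorum file, and port are examples only; Note:222746.1 and
Note:256617.1 remain the authoritative references for these settings.

HeartBeat=15000
ClusterName=Oracle Cluster Manager, version 9i
KernelModuleName=hangcheck-timer
PollInterval=1000
MissCount=210
PrivateNodeNames=minerva-priv zeus-priv
PublicNodeNames=minerva zeus
ServicePort=9998
CmDiskFile=/d01/quorum
HostName=minerva-priv
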
At this point in time you have installed cluster support, either Linux specific
using Oracle Cluster Manager, or vendor specific/3rd party for Unix. The
installation and configuration of a shared disk subsystem should also have
been completed, either using Oracle Cluster File System for Linux, or vendor
specific/3rd party for Unix.

6.2 Install Oracle Applications 11i


The initial Oracle Applications installation consists of a standard Rapidwiz
installation, following the instructions provided in the Installing Oracle
Applications Release 11i documentation.

Depending upon configuration specifics, such as the type of architectural
deployment being performed (single, or multi-node installation) running
Rapidwiz may have to be performed on multiple servers (tiers) in the
environment. The best approach for managing the APPL_TOP and
associated Middle-Tier components (iAS, Forms, Reports) in a cluster
environment is to perform the installation on a single node (or the primary
nodes in a multi-node configuration), and then clone these components using
the procedure outlined in the Cloning Oracle Applications Release 11i with
AutoConfig White Paper to the other nodes in the cluster.

Prior to performing any of the installation processes, when defining the
Applications and Oracle users (single-user, or multi-user), ensure that the uid
and group id of each user are the same on each node in the cluster. Verify
that the software owners defined for the RAC and Applications environment
can execute rcp, and rsh operations between all nodes in the cluster. Also it
is recommended, but not required, that the directory structure between the
nodes where RAC and Applications will be installed is the same. Using a
different directory structure between server nodes places restrictions on how
some of the installation tasks function, and requires additional tasks during
the cloning procedure.
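
A simple way to confirm these account and remote-shell prerequisites is to
run commands such as the following as the software owner on each node
(the remote node name zeus is an example):

id                            # compare uid and gid values across all nodes
rsh zeus date                 # should return without prompting for a password
touch /tmp/rcp_test
rcp /tmp/rcp_test zeus:/tmp   # verifies rcp between the nodes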

Before starting Rapidwiz ensure that the /etc/oraInst.loc file used during the
Oracle9i Cluster Manager installation is renamed so that the inventory for
Applications is not mixed with the 9.2 Cluster Manager installation. On
Linux Rapidwiz for Applications uses the /etc/oraInst.loc pointing the
inventory to /etc/oraInventory. On Unix Rapidwiz for Applications uses the
/var/opt/oracle/oraInst.loc pointing the inventory to
/var/opt/oracle/oraInventory.
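
For example, on Linux the Cluster Manager inventory pointer could be set
aside before running Rapidwiz with a command such as the following (the
backup file name is arbitrary); it is restored again in section 6.3:

mv /etc/oraInst.loc /etc/oraInst.loc.920cm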

Follow the standard installation instructions while installing the Release 11i
software on the primary/source node in the cluster, as would be performed for a
single node installation. If prompted during the Applications 11i installation
process with the Cluster Node Selection Screen for the RAC option during
the Database Server (RDBMS) installation do not select any of the remote
nodes that are available. If while running Rapidwiz, the Cluster Node
Selection Screen is completed for the additional nodes that make up the
cluster, the Oracle9i RDBMS is installed on all nodes in the cluster
(installation files are copied to the remote node(s) in the cluster). As part of
the installation process Rapidwiz uses a seeded response file during the
Applications installation, and although the Cluster Node Selection Screen
may appear, and can be utilized during the installation, the RAC option is
not actually installed on any of the nodes that make up the cluster, even
though the 9i RDBMS software is copied to the remote nodes during the
installation using the Unix rcp command.

With the requirement to install RAC and convert the existing RDBMS from a
single instance to a RAC configuration it is less complex to install a separate
Oracle9i home, outside of the 11.5.9 filesystem, apply the latest certified
RDBMS patchset, patch the installation according to the Interoperability
Notes for Oracle Applications, and then switch the Applications release
11.5.9 Oracle9i RDBMS to use the newly installed Oracle9i RAC distribution.
This task is completed during the steps necessary to move the single instance
RDBMS installation to RAC in the next section, so no action is required
during the initial Applications installation tasks.

If using Oracle Cluster File System (OCFS), or a 3rd party CFS, the installation
can be performed directly to OCFS/CFS by making the appropriate
DATA_TOP assignments while running Rapidwiz. The results will vary
depending on the I/O configuration of the shared disk subsystem. Some
configurations will not support installing directly to shared disk, whether
OCFS or 3rd party cluster filesystem is utilized. If the installation will not
support a direct install to OCFS/CFS disk, the Rapidwiz installation will fail,
due to the database datafiles dropping blocks during the unzip phase to the
shared filesystem.

After the Applications Rapidwiz installation completes, all of the
Applications, Middle Tier, and Database server processes will be active. At
this time the installation should be tested completely before performing any
steps required in order to switch the Applications 11.5.9 Oracle9i
ORACLE_HOME to the separate Oracle9i RDBMS, and Cluster Manager
home (covered in the next section), or move from a single-instance to RAC
configuration. This should include all startup/shutdown scripts,
configuration of any Applications modules as required for specific site
requirements. After completing these tasks all of the Applications and
Database processes should be shut down in an orderly and normal manner
before proceeding with the next steps.

6.3 Install Oracle9i with the RAC option


It is necessary to install a separate Oracle9i home, in order to provide support
for the RAC environment. The Applications 11.5.9 distribution delivers
Oracle9i release 9.2.0.3 as part of the Applications tech stack. This provides
the latest features, and functionality of Oracle9i R2 for Applications. During
the Applications 11.5.9 Rapidwiz installation Oracle9i is installed using a
response file, which does not include the RAC option.

If cluster support is installed into the existing 11.5.9, 9.2.0.3
ORACLE_HOME, by running the Oracle Universal Installer (OUI) from a
non-Applications distribution, and a cluster installation is performed for the
RAC option, only the RAC components are installed across the cluster nodes.
This installation will not provide support for the Applications environment,
as a number of components will be missing from the remote cluster nodes. It
is less complicated to perform a separate Oracle9i 9.2 installation, apply the
latest certified RDBMS patchset, patch the installation according to the
Interoperability Notes for Oracle Applications, and switch the 11.5.9 RDBMS
ORACLE HOME to the newly installed location. Reinstalling RAC and all of
the necessary components over the existing 11.5.9, 9.2.0.3 ORACLE_HOME
will require the same effort, as the base 9.2.0.1 release will need to be installed
to include the RAC option along with all of the additional options, patched
with the latest RDBMS 9.2.0.x patchset, and then all the required
Interoperability patches applied.

In order to complete the installation for Oracle9i with the RAC option, to
support the conversion of Applications 11.5.9 to a RAC environment, on
Linux install Oracle9i with the RAC option into the existing Oracle9i home
used for the Cluster Manager installation in section 6.1. If working in a Unix
environment create a separate location for the new 9.2.0.x RAC
ORACLE_HOME, outside of the Applications 11.5.9 filesystem. Follow the
Oracle9i installation instructions in Note:216550.1 Oracle Applications
Release 11i with Oracle9i Release 2 (9.2.0) Interoperability Note. When
following the Interoperability note, only execute the steps related to installing
Oracle9i. These are Section 1 steps 5 – 9, in the current Interoperability Note
dated March 2004.

Before starting the Oracle 9i installation ensure that the /etc/oraInst.loc file
(on Linux), or the /var/opt/oracle/oraInst.loc (on Unix) used during the
Rapidwiz installation for Applications is renamed so that the inventory for
Applications does not get mixed with the 9.2.0.x RDBMS installation. On
Linux restore the /etc/oraInst.loc file used during the Oracle Cluster
Manager installation, so that the RDBMS installation points to the same
oraInventory. On Unix rename the /var/opt/oracle/oraInst.loc file that was
used during the Applications installation, and place the oraInventory into the
ORACLE_BASE location of the new ORACLE_HOME for the 9.2.0.x RAC
installation. Also, ensure the environment variables are set to the new/OCFS
9.2 ORACLE_HOME. The cluster node selection screen should be displayed
during the 9i RDBMS installation. Select all available nodes where the RAC
instances will run during the installation when prompted.

The Oracle9i RAC installation will require a Server Management (SRVM)
configuration device during installation. This is created on the OCFS, or
other cluster filesystem disk by touching a file named appropriately and then
specifying this when prompted during the installation (e.g. touch srvmcfg). If
using raw volumes create the raw device prior to running the Oracle9i RAC
installation and specify this file during the installation when prompted for
the device.

It is also necessary to install OPatch in order to provide support for the
RDBMS patch application required as part of the Interoperability Notes. The
latest release of OPatch can be found, along with download instructions in
MetaLink Note:224346.1. When installing OPatch in a Linux environment it
is necessary to implement the work-around outlined in bug 2742686 in order
to correct an issue where the inventory does not reflect a RAC system during
opatch execution. Also, if an -invPtrLoc pointer was used during the 9i RAC
installation into the 9.2 ORACLE_HOME, execute OPatch using the same
pointer.
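
For example, assuming OPatch has been staged under the 9.2 RAC
ORACLE_HOME and a local inventory pointer was used during the RAC
installation (the paths are illustrative), an interoperability patch would be
applied with something like:

cd <patch_directory>
$ORACLE_HOME/OPatch/opatch apply -invPtrLoc $ORACLE_HOME/oraInst.loc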



6.4 Switch the Applications 11.5.9 ORACLE_HOME to the Oracle9i RAC
ORACLE_HOME and Upgrade the RDBMS
Installing Oracle9i with the RAC option will establish the environment in
preparation for using Oracle Applications 11i with RAC. At this point in
time there is a completed Applications installation that has been configured,
tested and validated for all functional, and utilization requirements. Also
Oracle Cluster File System, or other cluster filesystem, and Oracle Cluster
Manager, or OS cluster software have been installed along with Oracle9i
release 9.2.0.x with the RAC option according to the Interoperability Notes
for Oracle Applications.
In order to switch the Applications 9.2.0.3, Oracle9i ORACLE_HOME to the
Oracle9i 9.2.0.x ORACLE_HOME used for the Oracle Cluster Manager, and
RAC option, follow the steps outlined below. The Applications release 11.5.9
Oracle9i 9.2.0.3 ORACLE_HOME is AutoConfig enabled, and using
AutoConfig to generate the configuration files significantly reduces the
manual steps required during the process of swapping ORACLE_HOMES.
Using AutoConfig addresses the manual tasks of updating the tnsnames.ora,
listener.ora, and startup scripts for the Oracle9i RDBMS. The procedure
outlined below utilizes AutoConfig.

Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications. These are the adstpall.sh script found in
COMMON_TOP/admin/scripts.<SID>_<HOST>, and the addbctl.sh and
addlnctl.sh scripts found in the 9.2
ORACLE_HOME/appsutil/scripts/<SID>_<HOST> directory.

Switch the Applications 9.2.0.3, Oracle9i ORACLE_HOME to the newly
installed Oracle9i 9.2.0.x ORACLE_HOME (existing Linux OCFS
ORACLE_HOME), and upgrade to the latest certified RDBMS patchset using
the following steps:

1. Apply the latest AutoConfig patches according to MetaLink
Note:165195.1.

2. Copy the Oracle9i 11.5.9 ORACLE_HOME/appsutil directory to the
Oracle9i RAC ORACLE_HOME. This will include the
~/scripts/<SID>_<HOST> directory along with the context file.

3. Using the context editor (or vi) edit the context file <SID>_<HOST>.xml
at the new location for ORACLE_HOME/appsutil and change the 11.5.9
Oracle9i ORACLE_HOME location to the Oracle9i RAC
ORACLE_HOME’s new location. This will include all references to the
old ORACLE_HOME to be replaced by the new ORACLE_HOME. If
performing a 32 to 64-bit migration also update the ‘s_bits’ value to
indicate this change.

4. Edit the adautocfg.sh script to replace the old location for CTX_FILE with
the updated version in the new Oracle9i RAC
ORACLE_HOME/appsutil/scripts/<SID>_<HOST> directory (e.g.
CTX_FILE="/oralocal/apradb/9.2.0/appsutil/TRN_minerva.xml" to
CTX_FILE="/oralocal/oracle/9.2.0/appsutil/TRN_minerva.xml"). Also
replace any references of the old ORACLE_HOME with the new location
of the Oracle9i RAC ORACLE_HOME.



5. Set the environment to point to the new Oracle9i RAC ORACLE_HOME
and regenerate the RDBMS configuration with Autoconfig following the
instructions in Note:165195.1, Section 5: Maintaining System
Configurations, Step 2 - Commands to maintain System
Configurations on the Database Tier, by running AutoConfig
(adautocfg.sh). The AutoConfig scripts are located in <RDBMS
ORACLE_HOME>/appsutil/scripts/<CONTEXT_NAME>.

6. Verify the <SID>_<HOST>.env scripts located in the 9.2.0.x RAC
ORACLE_HOME (the new home) directory point to the new location
and modify any custom environment scripts as needed.

7. Verify the AutoConfig changes and update any custom environment files
with the new Oracle9i RAC ORACLE_HOME. The AutoConfig changes
will include:
a. The ‘admin’ directory is created under the new Oracle9i RAC
ORACLE_HOME. This will include the <SID>_<HOST>/bdump,
cdump and udump directories.
b. The ORACLE_HOME/network/admin/<SID>_<HOST> directory
is created under the new Oracle9i RAC ORACLE_HOME. This will
include the tnsnames.ora and listener.ora files, and is the
$TNS_ADMIN location for the 11.5.9 RDBMS installation.

8. Complete the applicable patchset application instructions as outlined in
Note:216550.1 Oracle Applications Release 11i with Oracle9i Release 2
(9.2.0) Interoperability Note, in order to upgrade
the Applications 9.2.0.3 RDBMS release to the latest certified 9.2.0.x
RDBMS with Applications 11i. Follow Section 2: Applying the latest
certified Oracle9i Release 2 (9.2.0) patchset. When executing the
instructions it is not necessary to perform the steps that were completed
in section 6.3 Install Oracle9i with the RAC option, of this document.

9. If performing a 32 to 64-bit migration for the Oracle RDBMS, see Notes
1836491 and 197031.1 for instructions regarding converting the 32-bit
RDBMS to 64-bits. If executing a 9.2.0.x RDBMS patchset application the
32 to 64-bit conversion will be performed automatically while the
database upgrade scripts are executed.
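
To illustrate the ORACLE_HOME substitution described in step 3, the
replacement can be scripted against the context file; the old and new paths
below are taken from the examples in step 4 and should be adjusted for your
installation:

cd /oralocal/oracle/9.2.0/appsutil
cp TRN_minerva.xml TRN_minerva.xml.orig
perl -pi -e 's#/oralocal/apradb/9.2.0#/oralocal/oracle/9.2.0#g' TRN_minerva.xml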

After completing all of the above steps you should be able to start the
Oracle9i RDBMS from the new ORACLE_HOME, along with the
Applications, and Middle Tier, processes on the source (primary) node.
Again a complete test of the Applications environment should be conducted
to ensure that all functions work properly.

6.5 Create the Shared Files for RAC


The Database Server datafiles utilized by the Applications instance are
created and installed during the Rapidwiz installation, and will be located in
the DATA_TOP directories that were defined during the Rapidwiz
installation. It is necessary to copy these files from their installation
directories to the shared disk, which is either an OCFS filesystem, other
vendor cluster filesystem or raw disk during the single instance to RAC
conversion.

Each hardware platform provides specific utilities, and instructions in order
to configure and manage shared cluster disks. In addition, there are several
3rd party vendors that provide these tools, along with specific instructions
required in order to install and configure cluster software for each platform.

Using whatever method of configuration is available, each datafile that is
part of the Oracle Applications database must have a shared disk (raw, or
CFS) file created for the RAC database that will be utilized. These files must
include all existing datafiles (data and index), control files, rollback, and
online redo logs that comprise the Applications database. A listing can be
generated with the files that need to be moved from Unix filesystem to raw
disk, or cluster file system by utilizing the commands in Appendix D - SQL
Scripts, Generate Data File Listing, from SQL*Plus, spooling the output, and
editing the files to remove extraneous data, and provide site specific
information.
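
A minimal illustration of generating such a listing from SQL*Plus is shown
below (the full scripts are in Appendix D – SQL Scripts); spool the output and
edit it as described above:

sqlplus -s "/ as sysdba" <<'EOF'
spool /tmp/db_file_list.lst
select name from v$datafile;
select name from v$tempfile;
select member from v$logfile;
select name from v$controlfile;
spool off
EOF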

In addition to the datafiles, create a raw file for a second rollback tablespace, a
Management (SRVM) configuration devices (see the Oracle9i Real
Application Clusters Setup and Configuration guide chapter 2 Configuring
the Shared Disks for information regarding SRVM).

At this time all that is necessary is to complete the creation of each of the
shared files that will be needed, and ensure that there is adequate space
available to accommodate the original data file, and associated overhead; for
raw files this includes an extra data block to store OS header information.
This is not the same as the database block size used for Applications, which
is typically 8k, and may vary for different platforms. The simplest, although
not exact, approach is to add 1Mb to each file size if using raw devices. This
is not necessary with cluster filesystem devices.

If utilizing RAW devices, and moving to the new Oracle Applications
Tablespace Model during the 11i/RAC conversion process follow MetaLink
Note:261925.1 (Implementing the New Oracle Applications Tablespace
Model (OATM) – Converting UFS to RAW Devices) either at this time, or
after completing the remainder of the steps outlined in this document.

6.6 Convert the Single Instance 11i Database to RAC Environment


The process of moving from a single instance to RAC environment requires
that the database environment be redefined. The control file is recreated in
order to increase the maxinstances parameter, and specify the new locations
for the database datafiles on the raw disk partitions (if required), along with
changes to several environment files (init.ora, Net configuration, and
Applications environment files).

In making the required modifications the environment is preserved to utilize
the existing Database name, and TWO_TASK entry from the original single
instance Applications installation. In the examples that follow the original
instance, and database were created using the name TRN. As the single
instance installation is converted to a RAC environment the database, and
TWO_TASK entries need not be changed. This minimizes the number of
configuration files that need to be modified.

Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications.



Perform the steps outlined below to move the single instance to RAC. As
noted earlier, the Applications Oracle9i 9.2.0.3 ORACLE_HOME is
AutoConfig enabled, and using AutoConfig to generate the configuration
files can significantly reduce the manual steps required in order to update
the tnsnames.ora, listener.ora, and startup scripts for the Oracle9i RDBMS.
The procedure outlined below utilizes AutoConfig.

1. Create a control file script from the primary instance by connecting to the
database through SQL*Plus, and issuing the command:
alter database backup controlfile to trace;
This script will be used to generate a new controlfile for the RAC
environment. The output will be located in the user_dump_dest. The
easiest way to find the file is to grep for the string CONTROL (i.e. grep
CONTROL *.trc), the time/date stamp should be current.
After completing this task shut down all of the Applications and
database processes.

2. Copy the trace file generated from the user_dump_dest location to a
working directory and edit the controlfile script to remove the recovery
section, and set maxinstances=2 (or the number of nodes that are going
to be configured in the cluster). Also update all of the datafile definitions
for the log, datafiles, and tempfiles. It is not necessary to change the
database name, although in the steps that follow the instance name used
during the original Applications installation on the primary node will be
changed. An example of an edited create controlfile script is shown in
Appendix B – Control File. Oracle9i produces two versions of the create
controlfile script, when the ‘alter database backup controlfile to trace;’
command is issued. Use the version with the NORESETLOGS
parameter.

3. If using raw devices, or Cluster Filesystem (CFS), copy the existing
database files to the raw/CFS devices using dd. An example of SQL to
generate the dd commands necessary are shown in Appendix C. The
existing files will be located in the DATA_TOP directories defined
during the Rapidwiz installation for sys, logs, data, and index datafiles.
Do not delete the original datafiles until after the database datafiles have
been successfully moved to shared disk, and the environment tested in
the RAC configuration. There are platform variations for dd, and
requirements to skip the OS header block when copying files to a raw
device, for example skipping the OS header block is not required on
Solaris and Linux but is on HP and AIX. Also, the blocksize specification
used during dd can have a significant performance impact regarding the
time to execute the copy operation. Test with different blocksize
parameters before undertaking the task of using dd to move all datafiles.
During the internal testing performed, bs=65536 performed best in our
environment.
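
For example, a single datafile copy to a raw device might look like the
following (the file and device names are illustrative, and Appendix C
contains SQL to generate the full set of commands); on platforms that
require it, the OS header block must be accounted for with the appropriate
skip/seek options:

dd if=/d04/trndata/system01.dbf of=/dev/raw/raw1 bs=65536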

4. Modify the listener.ora in the 9.2.0.x
ORACLE_HOME/network/admin/<SID>_<HOST> directory on the
source (primary) node. Change the listener name from the name used
during the initial Applications installation to a unique name (e.g. TRN to
LISTENER_TRNM). The same changes must be made on all nodes in the
cluster. The listener name should be unique on each node in the cluster
(e.g. LISTENER_TRNM on the local (source) node, and
LISTENER_TRNZ on the remote (target) node). Also modify the
listener.ora file to reflect the change in the instance name for each node
(e.g. SID_NAME changed from TRN to TRNM, and TRNZ).
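
For reference, the corresponding SID_LIST entry in the listener.ora on the
local node would then contain a fragment similar to the following (the
ORACLE_HOME path is illustrative; complete files are shown in Appendix E
– Net Configuration Files):

SID_LIST_LISTENER_TRNM =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /oralocal/oracle/9.2.0)
      (SID_NAME = TRNM)
    )
  )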

5. Modify the tnsnames.ora files in the 9.2.0, 8.0.6 and iAS
ORACLE_HOME/network/admin/<SID>_<HOST> directories on each
node to reflect the changes in the instance name (e.g. SID changed from
TRN to TRNM and TRNZ). Note that the Net alias in the
TNSNAMES.ORA file is not modified from the original setting defined
for TWO_TASK.

6. In the 9.2.0.x RDBMS $ORACLE_HOME/dbs directory, copy the existing
init.ora parameter file to a backup copy. Then modify the instance
parameter (init.ora) file to add the parameters below. Rename the file to
match the new RAC instance name (e.g. initTRNM.ora). These
parameters will also need to be modified on each of the other instances
that reside on the other nodes in the cluster, after the files are copied to
the other nodes, in the next steps.
a. cluster_database = true
b. thread =1
c. instance_number=1
d. instance_name = trnm # Append a suffix for node

In addition to the parameter changes, if using raw disk, or CFS, change
the location of the control_files parameter to point to the raw device/CFS
location created earlier.
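
Putting these changes together, the instance-specific portion of
initTRNM.ora on the first node might look like the following sketch (the
control file locations are examples and assume the raw/CFS files created
earlier):

cluster_database = true
thread = 1
instance_number = 1
instance_name = trnm
undo_tablespace = APPS_UNDOTS1
control_files = (/d01/trndata/cntrl01.dbf, /d02/trndata/cntrl02.dbf)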

7. Modify the 9.2.0
ORACLE_HOME/appsutil/scripts/<SID>_<HOST>/adstrtdb.sql script
on each node, and change the pfile entry to reflect the new instance name
(e.g. trn to trnm in pfile=/bootcamp1/trndb/9.2.0/dbs/initTRNM.ora)
(the new <DB_SID>). Also change any references to the <SID>_<HOST>
to reflect the new hostname.

Do not modify any DB_NAME= parameters in any of the ad*.sh
startup/shutdown scripts in $COMMON_TOP/admin/scripts. These are
used to point to environment files, and have nothing to do with the
database name.

Also modify any custom scripts and change the DB_SID as required.

8. Ensure the Global Services Daemon (GSD) is started on all cluster nodes.
Prior to starting the GSD initialize the Server Management (SRVM)
configuration device, then start the GSD. This is done as the Oracle user
from the OS prompt:
srvconfig -init
gsdctl start

9. Recreate the controlfile using the script from step 1. In order to perform
this task the database must be started in ‘nomount’ state, and the SQL
script executed from the SQL*Plus prompt. If any errors occur during
this step it will be necessary to abort the database, correct the errors, and
rerun the script. In a worst-case scenario you should have the datafiles in
place from the original single instance configuration. If configuring with
raw disk partitions or OCFS, they can always be recopied to raw disk, or
cluster filesystem again if necessary.



10. After the controlfile creation has successfully completed (the database is
left in a mounted state), add a second redo thread (2) for instance 2. See
Appendix D - SQL Scripts, Create a 2nd Redo Thread for the command
syntax. When creating the redo members for the second redo log group
match the file sizes, and group characteristics to the existing thread on
the primary (source) node. If the existing redo log file thread (thread 1)
has two members, and each member is 100Mb, create the new group
with two members, each 100Mb. After completing this step you can now
open the database on the local (primary) node.
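
As an illustration only (Appendix D – SQL Scripts, Create a 2nd Redo
Thread, contains the definitive syntax), adding a second thread with two
groups of two 100Mb members and enabling it might look like the
following; group numbers and file names are examples:

sqlplus -s "/ as sysdba" <<'EOF'
alter database add logfile thread 2
  group 5 ('/d03/trnlog/log5a.dbf','/d03/trnlog/log5b.dbf') size 100M,
  group 6 ('/d03/trnlog/log6a.dbf','/d03/trnlog/log6b.dbf') size 100M;
alter database enable public thread 2;
EOF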

11. The Release 11.5.9 RDBMS utilizes System Managed Undo (SMU). You
will need to add the init.ora parameters to the init<DB_SID>.ora for the
secondary instances’ undo tablespace to each init.ora file for each
additional RAC instance in the cluster. Create the required Undo
tablespace from the active instance before proceeding. See Appendix D –
SQL Scripts, Create Undo Tablespace for the command syntax.

If using traditional rollback tablespaces, define private rollback segments
for each instance. Either split the existing rollback segments between the
instances, or add additional rollback segments, so that each node has
sufficient rollback segments available. Use the same characteristics for
Rollback Segments created on the second instance as exists on the local
(primary) node. Create the same number, and size of rollback segments.
Update the init.ora file on each node to ensure that each instance has
unique rollback segments defined at instance startup.
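
For example, the undo tablespace for the second instance might be created
with a statement similar to the following, using the APPS_UNDOTS2 name
referenced later in step 19 (the datafile location and size are illustrative;
Appendix D – SQL Scripts, Create Undo Tablespace, has the definitive
syntax):

sqlplus -s "/ as sysdba" <<'EOF'
create undo tablespace APPS_UNDOTS2
  datafile '/d05/trndata/undots201.dbf' size 2000M;
EOF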

12. Modify the second thread of redo added to make the thread private. This
is accomplished by disabling and enabling the redo thread from another
instance in the cluster. The instance being modified must be shut down at
the time the modification is made. See Appendix D SQL Scripts, Disable
and Enable Redo Thread for the command syntax for this operation.

13. Copy the 9.2.0.4 ORACLE_HOME/appsutil directory from the source
(primary) node to each target (remote) node in the cluster.

14. Rename the <SID>_<HOST>.xml file in ORACLE_HOME/appsutil
directory on each target (remote) node. This will need to be renamed to
match the remote hosts. It is not necessary to change the <SID>
designation, only the <HOST> (e.g. TRN_minerva.xml to
TRN_zeus.xml).

15. Using the context editor (or vi), edit the renamed context file
<SID>_<HOST>.xml, and change any reference from the source host to
the target host. Also update any changes in location for file system
location differences, as required.

16. Edit the adautocfg.sh script on the remote (target) host, in the remote
hosts Oracle9i RAC ORACLE_HOME/appsutil/scripts/<SID>_<HOST>
directory to replace the old location of the CTX_FILE to point to the
renamed CTX_FILE in ORACLE_HOME/appsutil directory. The
renamed <SID>_<HOST>.xml file has the updates for the new target
host (e.g.
CTX_FILE="/oralocal/apradb/9.2.0/appsutil/TRN_minerva.xml" to
CTX_FILE="/oralocal/oracle/9.2.0/appsutil/TRN_zeus.xml"). Also
ensure any references to the ORACLE_HOME point to the location of the
Oracle9i RAC ORACLE_HOME on the target host. It is not necessary to
rename the <SID>_<HOST> directory under
ORACLE_HOME/appsutil/scripts as this will be regenerated while
running adautocfg.sh.

17. Regenerate the RDBMS configuration with AutoConfig following the
instructions in Note:165195.1, Section 5: Maintaining System
Configurations. Under Step 2 - Commands to maintain System
Configurations on the Database Tier, run AutoConfig (adautocfg.sh).
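
For example, on the target node (the directory name still carries the
source host name until AutoConfig regenerates it, as noted in step 16;
adautocfg.sh normally prompts for the APPS schema password):

cd $ORACLE_HOME/appsutil/scripts/<SID>_<HOST>
./adautocfg.sh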

18. Verify the AutoConfig changes and update any custom environment
files with the new Oracle9i RAC ORACLE_HOME. The AutoConfig
changes will include:
a. The ‘admin’ directory is created under the new Oracle9i RAC
ORACLE_HOME. This will include the <SID>_<HOST>/bdump,
cdump and udump directories.
b. The ORACLE_HOME/network/admin/<SID>_<HOST> directory
is created under the new Oracle9i RAC ORACLE_HOME. This will
include the tnsnames.ora and listener.ora files, and is the
$TNS_ADMIN location for the 11.5.9 RDBMS installation.

c. Modify the tnsnames.ora and listener.ora files as in steps 4 & 5 to
include the instance specific configuration changes.

19. In the 9.2.0.x RDBMS $ORACLE_HOME/dbs directory of the target
node, copy the init.ora parameter file generated by AutoConfig to a
backup copy. Then modify the instance parameter (init.ora) file to add
the parameters below. Rename the file to match the new RAC instance
name (e.g. initTRNZ.ora). These parameters will also need to be added,
and modified on any other instances that reside on the other nodes in the
cluster as the additional nodes are configured. Modify the parameters:
a. cluster_database = true
b. thread = 2             # Used 1 for the local node
c. instance_number = 2    # Used 1 for the local node
d. instance_name = trnz   # Append a suffix for each node

Also modify the UNDO_TABLESPACE definition for the second instance
(added in the steps below) to point to its own tablespace
(e.g. APPS_UNDOTS1 to APPS_UNDOTS2), and the IFILE parameter if
required.
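
A minimal sketch of this step, using the example instance name TRNZ (the
backup-file name is arbitrary; if any of these parameters already exist in
the file, edit them in place instead of appending duplicates):

cd $ORACLE_HOME/dbs
cp initTRN.ora initTRN.ora.autoconfig   # preserve the AutoConfig-generated file
mv initTRN.ora initTRNZ.ora             # rename to the RAC instance name
cat >> initTRNZ.ora <<'EOF'
# RAC-specific settings for the second instance
cluster_database = true
thread = 2
instance_number = 2
instance_name = trnz
undo_tablespace = APPS_UNDOTS2
EOF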

20. If it exists, also copy the existing ifilecbo.ora from the source (primary)
nodes ORACLE_HOME/dbs directory to each target (secondary) node(s)
ORACLE_HOME/dbs directory. It is not necessary to modify this file for
the secondary nodes. The 11.5.9 ifilecbo.ora is referenced from the
init.ora but the ifile is empty.

21. Start the second instance. After the second instance is started, shut
down the primary instance and modify its redo thread (thread 1) to private
using the same procedure as in step 12.

22. Recreate any database links that reference the renamed SID.

Using AutoConfig to make the edits is much less time consuming, and
reduces most of the manual effort, although currently AutoConfig is not RAC
aware, so the configuration file edits above (to tnsnames.ora, listener.ora, and
the RDBMS startup and shutdown scripts) will need to be repeated any time
AutoConfig is run in the Database tier.



After completing all of the required steps you should be able to start the
Applications, Middle Tier, and Database processes on the local (primary)
node, and start the Database and Net Listener processes on the remote
node(s). Again a test of the Applications environment should be conducted
to ensure that all functions work properly.

Note that the Self-Service applications will not function at this time due to
required configuration changes to the *.dbc files located in
$FND_TOP/secure. These modifications will be completed in steps 6 and 7
in the next section.

6.7 Establish the Applications Failover Environment


As stated earlier, Oracle Applications does not support dynamic session
failover. Applications workload failover is possible with the implementation
of Parallel Concurrent Processing, with the requirement that sessions be
terminated and reconnected during a failover scenario. This functionality,
when implemented in a RAC environment, provides a shortened downtime
window and higher availability for Oracle Applications implementations.

It is beyond the scope of the procedures outlined in this document to address
these details and the limitations of Applications failover, other than to
note that the benefits of a RAC configuration with respect to minimal
downtime and workload failover are far greater than those of a single
instance configuration. Follow the instructions provided in this section in
order to configure a connect time failover environment only.

Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications.

Applications failover configuration for Self-Service applications in a RAC
environment requires that a *.DBC file exist for each RAC node in the
database cluster. In order to configure the Applications environment to
support the existing failover capabilities it is necessary to make the following
changes:

1. Copy the $FND_TOP/secure dbc file to a new name which reflects the
new instance names for the RAC instances (e.g. minerva_trn.dbc to
minerva_trnm.dbc, and minerva.us.oracle.com_trn.dbc to
minerva.us.oracle.com_trnm.dbc). It is not necessary to regenerate the
dbc files. The default naming format for the *.dbc files is
<hostname>_<instance>.dbc.

2. Edit the dbc files copied in the step above and change the DB_NAME
entry to reflect the new instance names in each corresponding file (i.e.
DB_NAME=TRN to DB_NAME=TRNM).

3. Copy the modified dbc files to new names that reflect the remote node(s)
in the cluster (e.g. in $FND_TOP/secure, minerva_trn.dbc to zeus_trnz.dbc,
and minerva.us.oracle.com_trn.dbc to zeus.us.oracle.com_trnz.dbc). Then
modify each file to reflect the host and instance name for that node
(e.g. edit the zeus*.dbc files and change DB_NAME=TRN to DB_NAME=TRNZ,
and DB_HOST=minerva.us.oracle.com to DB_HOST=zeus.us.oracle.com).
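
A hedged sketch of steps 1 through 3 for the local node (file names follow
this paper's examples; Perl is used for the in-place edits because it is
already a cloning prerequisite; repeat for the fully qualified host copies):

cd $FND_TOP/secure
cp minerva_trn.dbc minerva_trnm.dbc
perl -pi -e 's/^DB_NAME=TRN$/DB_NAME=TRNM/' minerva_trnm.dbc
cp minerva_trnm.dbc zeus_trnz.dbc
perl -pi -e 's/^DB_NAME=TRNM$/DB_NAME=TRNZ/; s/DB_HOST=minerva/DB_HOST=zeus/' zeus_trnz.dbc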



4. As of release 11.5.9, the E-Business page and IP homepage are now on
OA framework (included in FND.G). This change in the default login
processing requires that jserv.properties be updated in order to reflect
the modified DBC file format. It is necessary to locate the entries
specified below, and change the DBC file from the installation default to
the instance specific entry (e.g. minerva_trn.dbc to minerva_trnm.dbc):
#
# JTF wrapper.bin parameters
#
wrapper.bin.parameters=-DJTFDBCFILE=

#
#Web ADI Properties
#
wrapper.bin.parameters=-DBNEDBCFILE=
It will also be necessary to make this change to reflect the DBC file name
change in jserv.properties, on any new node, after completing the cloning
instructions outlined in section 6.8 Clone the Applications installation, in
order to clone the installation to another node, or middle tier server.

After performing the above modifications to jserv.properties, and
specifying an instance specific DBC file, a RAC instance failure will
require manual intervention, to update the configuration with the
surviving instance’s DBC file (i.e. change minerva_trnm.dbc to
minerva_trnz.dbc after instance TRNM fails).

This requirement can be eliminated by following the procedure outlined
in Note:244366.1, which provides instructions for configuring the DBC
file with the APPS_JDBC_URL parameter.

The APPS_JDBC_URL parameter can be utilized in the DBC file to
specify LOAD_BALANCE=OFF, if JDBC load balancing is not required,
which will provide an ADDRESS_LIST parameter for JDBC connections,
and the associated connect time failover support.

Once this configuration is implemented, using a generic DBC file name
(e.g. <servicename>.dbc), and the associated DBC entries are specified in
the DJTFDBCFILE, and DBNEDBCFILE parameters in jserv.properties,
all requirements for manual intervention due to specifying the DBC file
name in jserv.properties during a RAC instance or node failure are
eliminated.
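
As a rough illustration only (the authoritative syntax is in Note:244366.1;
the host names, port, and service name below are the examples used in
Appendix E, and the entry is a single line in the DBC file, wrapped here
for readability):

APPS_JDBC_URL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)
 (FAILOVER=ON)(ADDRESS=(PROTOCOL=tcp)(HOST=minerva.us.oracle.com)(PORT=1593))
 (ADDRESS=(PROTOCOL=tcp)(HOST=zeus.us.oracle.com)(PORT=1593)))
 (CONNECT_DATA=(SERVICE_NAME=trndb)))
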
In order to establish a Net connect time failover environment to provide
connection routing to an alternate node during an instance or node failure,
configure the Net listener.ora, and tnsnames.ora parameter files following
the instructions provided below.

1. In the 9.2.0 RDBMS
$ORACLE_HOME/network/admin/<SID>_<HOST> directory (not the
8.0.6 Applications home), copy the existing tnsnames.ora file to a backup
copy. Then modify the tnsnames.ora in the 9.2.0 ORACLE_HOME with
the required parameters in order to specify multiple nodes in the address
list. See the listener.ora, and tnsnames.ora configuration files in
Appendix E – Net Configuration Files for a sample. These modifications
will also need to be made on each of the other instances that reside on
other nodes in the cluster. When determining the order for host name
resolution, list the local node that the tnsnames.ora resides on first in
the search order on each node (on Minerva, for the TRN alias, Minerva is
the local node, so the tnsnames.ora in the example has Minerva listed
first; this would be reversed on the other node in the cluster, where Zeus
would be specified first in the search order, and so on for other nodes).

2. In the 9.2.0 RDBMS $ORACLE_HOME/dbs directory, modify the
instance parameter (init<DB_SID>.ora) file to add the parameters below.
These will also need to be modified on each of the other instances that
reside on other nodes in the cluster.
a. service_names = trndb            # Add to each node in the cluster
b. local_listener = listener_trnz   # Specify the listener name on each node.
Note that adding local_listener to the init.ora requires that the tns alias
for the listener be added to the tnsnames.ora file on each node, as in the
configuration files referenced in step 1 above.
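
After restarting each instance with these parameters, service registration
can be checked from the operating system with the listener control utility
(the listener name below is the Appendix E example):

lsnrctl status LISTENER_TRNZ
lsnrctl services LISTENER_TRNZ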

3. Update any custom startup scripts that contain references to the Net8
listener. None of the standard Applications startup/shutdown scripts
have the 9.2.0 ORACLE_HOME listener name referenced.

4. Modify the tnsnames.ora in the Applications 8.0.6 ORACLE_HOME with
the required parameters in order to specify multiple nodes in the address
list. See the listener.ora, and tnsnames.ora configuration files in
Appendix E – Net Configuration Files for a sample.

5. Modify the tnsnames.ora in the Applications iAS ORACLE_HOME with
the required parameters in order to specify multiple nodes in the address
list. See the listener.ora, and tnsnames.ora configuration files in
Appendix E – Net Configuration Files for a sample.

Do NOT modify any DB_NAME= parameters in any of the ad*.sh
startup/shutdown scripts in $COMMON_TOP/admin/scripts. These
are used to point to environment files, and have nothing to do with the
database name.

After completing all of the required steps you should be able to start the
Applications, Middle Tier, and Database processes on the local (primary)
node, and start the Database and Net Listener processes on the remote
node(s). Again a complete test of the Applications environment should be
conducted to ensure that all functions work properly.

The Applications environment is now configured to support the loss of a RAC
instance, although failover is not 'transparent', and must be initiated by
terminating all user sessions and reconnecting to the same URL. Batch
workload (concurrent program submissions) will need manual intervention,
as the environment has not been established through Parallel Concurrent
Processing to enable workload migration from the primary to an alternate
node in the cluster.

6.8 Clone the Applications installation


At this point the configuration in place reflects a single Oracle Applications
installation, with Oracle9i RAC as the Database Tier. This configuration
allows for fault tolerance/redundancy in the Database Tier component only.



In order to support site-specific failover requirements, and maximize the
benefits, and flexibility of running Oracle Applications in a RAC cluster, the
Applications Tier components will need to be installed on multiple nodes in
the RAC cluster, or on multiple Applications/Middle-Tier servers. This
positions the environment to take full advantage of the flexibility and fault
tolerance characteristics described earlier in this document.

The process of cloning Oracle Applications Release 11i has undergone
significant changes since its original implementation. Several versions of the
Cloning Oracle Applications Release 11i White Papers have been published,
and recently a new process utilizing Rapid Clone has been introduced. The
Cloning with Rapid Clone procedure has also undergone significant changes
since its introduction.

In executing the cloning process, regardless of the variations that have been
introduced in different versions of the cloning procedures, Oracle
Applications 11i is broken into its component structure during the cloning
process, and each component layer is copied from the source to target server.
Cloning normally is performed in order to copy the RDBMS (9.2.0
ORACLE_HOME, and database datafiles), Applications tier components
(APPL_TOP, COMMON_TOP, 8.0.6 ORACLE_HOME, and iAS
ORACLE_HOME). When cloning in a RAC environment we do not clone the
RDBMS components, as this is not required. Any steps that reference
RDBMS components during the cloning procedure can be skipped.

To prepare for cloning with Applications Release 11.5.9 review MetaLink
Note 165195.1, Using AutoConfig to Manage System Configurations with
Oracle Applications 11i. Also, it is critical that you apply the latest
AutoConfig rollup patch, and any other patches, as indicated in the note
before attempting the cloning procedure.

In order to replicate the existing Applications/Middle-Tier installation on the
source (primary) node in the cluster to the target (secondary) node(s) follow
the Cloning Oracle Applications Release 11i with Rapid Clone procedure that
is available from MetaLink (Note: 135792.1) to clone (copy) the Applications
installation from the primary (source) to secondary (target) node. The
procedure used in this document is currently outlined in Note:230672.1, and
is dated July 23, 2004 (Updated-Date), utilizing Rapid Clone patch 3271975,
and AutoConfig pre-req patch 3785771. The methodology to be followed is
that when cloning a RAC environment the RDBMS components are not
cloned, only the Applications and Middle-Tier components require cloning
between Applications/Middle-Tier, or RAC nodes.

Ensure that all of the Applications processes have been shut down in an
orderly manner, using the shutdown scripts provided with Oracle
Applications. Also verify that the oraInst.loc inventory pointer is set to the
Applications oraInventory location that was created during the Rapidwiz
installation, and that the file has correct ownership and permissions.

Follow the steps of the cloning procedure outlined in Cloning Oracle
Applications Release 11i with Rapid Clone, with the following
instructions:

1. Under the section Prerequisites


a. It is not necessary to migrate release 11.5.9 to AutoConfig, as this is
part of the pre-installed configuration.



b. Apply all prerequisite patches as indicated in the documentation,
and ensure that all minimum version requirements are met for the
OUI, Perl, JDK, and Zip. This includes applying the latest version of
the Rapid Clone patch as indicated in the cloning documentation.
Also apply the latest AutoConfig rollup patch as indicated in
Note:165195.1 before updating the source system with AutoConfig.
c. Prior to applying any patches to the source system save the existing
copies of the tnsnames.ora, and listener.ora files located in the 8.0.6
and iAS ORACLE_HOME locations on the source node to preserve
the files (i.e.
~/<SID>ora/8.0.6/network/admin/<SID>/tnsnames.ora). Copy
the files to a file named tnsnames.ora.preclone.
d. Prior to running AutoConfig on the source system copy the existing
tnsnames.ora from the Applications 8.0.6
ORACLE_HOME/network/admin/<SID> directory to
$AD_TOP/admin/template. This will be used to replace the
existing tnsnames.ora template file. Copy the original template file
to preserve the file before replacing them with the modified
tnsnames.ora file.
e. It is not necessary to implement AutoConfig in the RDBMS
ORACLE_HOME as this is part of the base 11.5.9 installation.
f. In order to address a connectivity issue with AutoConfig, during the
configuration regeneration process prior to cloning, specific to
supporting a RAC configuration, it is necessary to implement the
following work-around to temporarily swap the RAC instance name
from the instance specific naming convention, back to the common
name (the name used during the initial Applications installation), in
order to configure the target system successfully (e.g. TRNM, to
TRN):
1. Rename the init<SID>.ora in the 9.2.0 ORACLE_HOME from the
instance specific name to the generic name used during the
initial Rapidwiz installation (e.g. TRNM, to TRN).
2. Edit the renamed file and modify the instance_name parameter
to the same name used above.
3. Edit the listener.ora in the 9.2.0 ORACLE_HOME, and add the
SID to the SID_LIST (or temporarily replace the existing entry).
4. Edit the tnsnames.ora in the 9.2.0 ORACLE_HOME, and ensure
the alias is defined for the above instance.
5. Verify the Applications Context file contains only references to
the generic SID, and not the DB_SID (e.g. TRN not TRNM).
g. After verifying the successful execution of AutoConfig, shut down
all of the Applications and database processes on the source node,
and back out all of the changes made in step f, 1-4 above, to return
the configuration to utilize the DB_SID for the RAC environment.
h. Merge any changes back into the tnsnames.ora for the address_list
specified in order to support connect time failover after running
AutoConfig. Also verify that the listener.ora contains all of the
appropriate entries, and syntax.
i. Thoroughly test the source system to ensure all functionality is
working as expected.

2. Under the section Clone Oracle Applications 11i


a. When executing steps to prepare the source system skip the step(s)
specific to the database tier, as this is not required in a RAC
environment.



b. When executing steps to copy the source system to the target system
skip the step(s) specific to the database tier, as this is not required in
a RAC environment.
c. When executing steps to configure the target system skip the step(s)
specific to the database tier, as this is not required in a RAC
environment.
d. Update the tnsnames.ora file in the 8.0.6 and iAS ORACLE_HOMEs
to replace the hostname and port specifications as required for the
target (local) node. Also make the target (local) node first in the
address_list in the tnsnames.ora files.
e. To generate the correctly formatted files during the cloning process,
during the execution of adcfgclone.pl, copy the existing tnsnames.ora
from the Applications 8.0.6
ORACLE_HOME/network/admin/<SID>_<HOST> directory to
$AD_TOP/admin/template. This will be used to replace the
existing tnsnames.ora template file. Copy the original template file
to preserve the file before replacing it with the respective tnsnames
file.
f. In order to address a connectivity issue with Rapid Clone, during the
execution of ‘perl adcfgclone.pl appsTier ‘ specific to supporting a
RAC configuration, it is necessary to implement the following work-
around to temporarily swap the RAC instance name from the
instance specific naming convention, back to the common name (the
name used during the initial Applications installation), in order to
configure the target system successfully (e.g. TRNZ, to TRN):
1. Rename the init<SID>.ora in the 9.2.0 ORACLE_HOME from the
instance specific name to a generic name (e.g. TRNZ, to TRN).
2. Edit the renamed file and modify the instance_name parameter
to the same name used above.
3. Edit the listener.ora in the 9.2.0 ORACLE_HOME, and add the
SID to the SID_LIST (or temporarily replace the existing entry).
4. Edit the tnsnames.ora in the 9.2.0 ORACLE_HOME, and ensure
the alias is defined for the above instance.
5. Verify the Applications Context file contains only references to
the generic SID, and not the DB_SID (e.g. TRN not TRNZ).
g. After verifying the successful execution of ‘perl adcfgclone.pl
appsTier’, shut down all of the Applications and database processes
on the target node, and back out all of the changes made in step f, 1-
4.
3. Under the section Finishing Tasks
a. Merge the changes to tnsnames.ora in the 8.0.6 and iAS
ORACLE_HOMEs. This will add the connect time failover, or TAF,
definitions, depending on which were configured in the previous
section, from the tnsnames.ora.preclone to the regenerated
tnsnames.ora. This should be performed on all cluster nodes in order
to test. The tnsnames.ora files should be merged so that each contains
aliases for the FNDFS_* and FNDSM_* entries for each node in the RAC
cluster. The tnsnames file should only contain one entry for the
REP60_* alias, and the IFILE entries.
b. Verify that the FNDSM_<SID> entry has been added to the
listener.ora file under the 8.0.6
ORACLE_HOME/network/admin/<SID> directory. See WebIV
Note:165041.1 for instructions regarding configuring this entry.
c. As of release 11.5.9, the E-Business page and IP homepage are now
on OA framework (included in FND.G). This change in the default
login processing requires that jserv.properties be updated in order to
reflect the modified DBC file format. It is necessary to locate the
entries specified below, and change the DBC file from the
installation default to the instance specific entry (e.g.
minerva_trn.dbc to minerva_trnm.dbc):
#
# JTF wrapper.bin parameters
#
wrapper.bin.parameters=-DJTFDBCFILE=

#
#Web ADI Properties
#
wrapper.bin.parameters=-DBNEDBCFILE=
It will also be necessary to make this change to reflect the DBC file
name change in jserv.properties, on any new node, after completing
the cloning instructions outlined in section 6.8 Clone the
Applications installation, in order to clone the installation to another
node, or middle tier server.

After performing the above modifications to jserv.properties, and
specifying an instance specific DBC file, a RAC instance failure will
require manual intervention, to update the configuration with the
surviving instance’s DBC file (i.e. change minerva_trnm.dbc to
minerva_trnz.dbc after instance TRNM fails).
d. Start the database processes, and the Net listener on the target node.
e. The database profiles should have been reset to the target node
during the execution of adcfgclone.pl. If the profiles were not reset
correctly for any reason, manually reset the profiles using the scripts
located in the directory $COMMON_TOP/admin/install/<SID>.
When executing these scripts it is necessary to pass the applications
username and password (a loop sketch follows the list):
abmwebprf.sh apps <password>
adadmprf.sh apps <password>
afadmprf.sh apps <password>
afcmprf.sh apps <password>
affrmprf.sh apps <password>
afwebprf.sh apps <password>
ahladmprf.sh apps <password>
amscmprf.sh apps <password>
amswebprf.sh apps <password>
aradmprf.sh apps <password>
bisadmprf.sh apps <password>
cnadmprf.sh apps <password>
cncmprf.sh apps <password>
csdadmprf.sh apps <password>
cseadmprf.sh apps <password>
csfagprf.sh apps <password>
csfwebprf.sh apps <password>
csiadmprf.sh apps <password>
eamadmprf.sh apps <password>
ecxadmprf.sh apps <password>
fteadmprf.sh apps <password>
gladmprf.sh apps <password>
ibywebprf.sh apps <password>
icxwebprf.sh apps <password>
igccmprf.sh apps <password>
jtfadmprf.sh apps <password>
okeadmprf.sh apps <password>
okladmprf.sh apps <password>
oksfrmprf.sh apps <password>
ontadmprf.sh apps <password>
paadmprf.sh apps <password>
wipadmprf.sh apps <password>
wshadmprf.sh apps <password>
xnccmprf.sh apps <password>
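
One possible way to run the full set of profile scripts in that directory,
assuming the standard naming convention (all of the script names end in
prf.sh) and substituting the real <SID> and APPS password:

cd $COMMON_TOP/admin/install/<SID>
for s in *prf.sh; do
    sh $s apps <password>
done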

f. After verifying the configuration, replace the existing tnsnames
template file with the existing file from the Applications (8.0.6)
ORACLE_HOME (this will be file tnsnames.ora in
$AD_TOP/admin/template as on the source node). This will ensure
it is overwritten during the subsequent steps in configuring PCP.

After completing all of the required steps to Clone the Applications
Installation and test the configuration on the target node(s) you should be
able to return to a normal configuration by stopping all of the applications
processes and running the scripts in step e above on the source node. After
resetting the profiles back to the source node start the Applications, and
Middle Tier, processes on the source (primary) node, and the Database and
Net Listener processes on the target node(s). Again a test of the Applications
environment should be conducted to ensure that all functions work properly.
Note that if an IP failover solution is used for the Middle-Tier servers, the
database profiles will not need to be reset as in step e above; the profile
reset is applicable in a 'vanilla' environment only, with no IP failover or
load balancing implementation.

6.9 Set up Parallel Concurrent Processing (PCP)


Parallel concurrent processing allows you to distribute concurrent managers,
and workload across multiple nodes in a cluster, or networked environment.
PCP can also be implemented in a RAC environment in order to provide
automated failover of workload should the primary (source), or secondary
(target) concurrent processing nodes fail.

There are several different failure scenarios that can occur depending on the
type of Applications Technology Stack implementation that is performed.
The most basic scenarios are:
1. The database instance that supports the Applications and Middle-Tier(s) can fail.
2. The Database Tier server that supports PCP processing can fail.
3. The Applications/Middle-Tier server that supports the CP (and Applications) base can fail.

In a single tier configuration a node failure will impact Concurrent
Processing operations due to any of these failure conditions. In a multi-node
configuration the impact of any of these types of failures will depend
upon how processing is distributed among the nodes in the configuration.
Parallel Concurrent Processing provides seamless failover of Concurrent
Processing in the event that any of these types of failures takes place.

Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications.



In order to set up Parallel Concurrent Processing using AutoConfig
with GSM, follow the instructions in the 11.5.9 Oracle Applications System
Administrator's Guide under Implementing Parallel Concurrent Processing.
Also follow the instructions in Note:240818.1 for information regarding
Transaction Manager Setup and Configuration Requirements in an 11i RAC
Environment. Also ensure that the latest CP rollup patch is applied in the
environment according to MetaLink Note:106492.1.

Set up Parallel Concurrent Processing using the following steps:


1. Applications 11.5.9 and higher is configured to use GSM. Verify the
configuration on each node (see WebIV Note:165041.1).

2. On each cluster node edit the Applications Context file (<SID>.xml)
that resides in APPL_TOP/admin, to set the variable <APPLDCP
oa_var="s_appldcp">ON</APPLDCP>. It is normally set to OFF.

3. Prior to regenerating the configuration, copy the existing
tnsnames.ora, listener.ora and sqlnet.ora files, where they exist,
under the 8.0.6 and iAS ORACLE_HOME locations on the each node
to preserve the files (i.e.
/<some_directory>/<SID>ora/$ORACLE_HOME/network/admin
/<SID>/tnsnames.ora). If any of the Applications startup scripts
that reside in COMMON_TOP/admin/scripts/<SID> have been
modified also copy these to preserve the files.

4. Regenerate the configuration by running adautocfg.sh on each
cluster node as outlined in Note:165195.1. This will require that the
same work-around be implemented as in section 6.8 Clone the
Applications Installation, step 2-f, on each cluster node prior to
executing AutoConfig in order to implement the PCP modifications.

5. After regenerating the configuration merge any changes back into
the tnsnames.ora, listener.ora and sqlnet.ora files in the network
directories, and the startup scripts in the
COMMON_TOP/admin/scripts/<SID> directory. Each node's
tnsnames.ora file must contain the aliases that exist on all other
nodes in the cluster. When merging tnsnames.ora files ensure that
each node contains all other nodes' tnsnames.ora entries. This
includes tns entries for any Applications tier nodes where a
concurrent request could be initiated, or request output to be viewed.

6. In the tnsnames.ora file on each node ensure that there is an alias
that matches the instance name from GV$INSTANCE for each Oracle
instance on each RAC node in the cluster. This is required in order
for the Service Manager (FNDSM) to establish connectivity to the
local node during startup. The entry for the local node will be the
entry that is used for the TWO_TASK entry in APPSORA.env on
each node where PCP is configured (this is modified in step 12).
Also ensure that connectivity requirements are established as
required on the target (secondary) node. If needed edit the
tnsnames.ora file that was configured for connect time failover, and
place the local node first in the ADDRESS_LIST.
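
The instance names and hosts to use for these aliases can be confirmed
with a quick query, run from any node in the cluster:

sqlplus -s "/ as sysdba" <<'EOF'
-- Each node's tnsnames.ora needs an alias matching INSTANCE_NAME
SELECT inst_id, instance_name, host_name FROM gv$instance;
EOF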

7. Verify that the FNDSM_<SID> entry has been added to the
listener.ora file under the 8.0.6
ORACLE_HOME/network/admin/<SID> directory. See WebIV
Note:165041.1 for instructions regarding configuring this entry.
NOTE: With the implementation of GSM the 8.0.6 Applications, and
9.2.0 Database listeners must be active on all PCP nodes in the cluster
during normal operations.

8. AutoConfig will update the database profiles and reset them for the
node from which it was last run. If necessary reset the database
profiles back to their original settings by running the scripts outlined
in step 3-e in section 6.8 Clone the Applications installation.

9. Ensure that the Applications Listener (e.g. APPS_<SID>) is active on
each node in the cluster where PCP is configured. On each node
start the database and Forms Server processes as required by the
configuration that has been implemented.

10. Navigate to Install > Nodes and ensure that each node is registered.
Use the node name as it appears when executing 'uname -n' from
the Unix prompt on the server. GSM will add the appropriate
services for each node at startup.

11. Navigate to Concurrent > Manager > Define, and define the primary
and secondary node names for all the concurrent managers
according to the desired configuration for each node’s workload.
The Internal Concurrent Manager is defined on the primary PCP
node only. When defining the Internal Monitor for the secondary
(target) node, add the primary node as the secondary node
designation to the Internal Monitor, and assign a standard work shift
with one process. Also, make the primary/secondary node,
workshift and process assignments for the Internal Monitor on the
primary cluster node.
12. Prior to starting the Manager processes it is necessary to edit the
APPSORA.env file on each node in order to specify a TWO_TASK
entry that contains the INSTANCE_NAME parameter for the local
node's Oracle instance, in order to bind each Manager to the
local instance. This should be done regardless of whether Listener
load balancing is configured, as it will ensure the configuration
conforms to the required standard of having the TWO_TASK set to
the instance name of each node as specified in GV$INSTANCE. See
MetaLink Note:241370.1 for information regarding configuring PCP
in a RAC environment.
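
For example, in APPSORA.env on the node that runs instance TRNZ (the
alias must exist in that node's tnsnames.ora, as described in step 6):

TWO_TASK=TRNZ
export TWO_TASK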

13. Start the Concurrent Processes on their primary node(s). The
environment defined in APPSORA.env is the environment that the
Service Manager passes on to each process that it initializes on behalf
of the Internal Concurrent Manager.

14. Navigate to Concurrent > Manager > Administer and verify that the
Service Manager and Internal Monitor are activated on the
secondary node. The Internal Monitor should not be active on the
primary cluster node.

15. Stop and restart the Concurrent Manager processes on their primary
node(s), and verify that the managers are starting on their
appropriate nodes. On the target (secondary) node, in addition to
any defined managers, you will see an FNDSM process (the Service
Manager), along with the FNDIMON process (Internal Monitor).

Again a complete test of the Applications environment should be conducted
to ensure that all functions work properly. This should include a defined test
plan in order to simulate and test redundancy for instance, and node failures
across all Applications and database tiers.



7.0 Test Applications Failover Scenarios
The failover testing that is performed during the installation and verification
of these procedures is designed to validate the functionality of Parallel
Concurrent Processing’s failover capabilities, and to observe, and understand
the requirements necessary in order to move Applications, and Middle Tier
workloads from one server to another.

With the number of architectural configuration options available with Oracle
Applications implementations, it is not within the scope of this document,
or practical, to try to test and validate all of the possible scenarios. The
testing that is performed demonstrates the behavior of a Database Tier and
Applications/Middle Tier configuration, where the architecture is divided
between two separate and distinct server platforms. In this type of scenario,
which might be considered an active/passive configuration, the primary
server supports the Database, Applications, and Middle Tier components,
while the other node(s) in the cluster support a RAC instance only.

Database Tier failures are transparent, in that RAC provides the fault
tolerance necessary for seamless recovery of batch workload from one node
to another through the Parallel Concurrent Processing workload migration
mechanism. Applications and Middle Tier failures are minimized in that
workload migration for OLTP processing can be performed with a minimum
of downtime, or automated using cluster failover management functionality,
or a BIG-IP implementation, in order to re-point Applications users and
Middle Tier components when required.

When failover, or workload migration of Concurrent Processing takes place
due to a node failure the Internal Concurrent Manager performs an
availability check of the remote node using the ‘ping’ utility. If the node can
be pinged then the managers are started on that node. Once a node becomes
unavailable the Concurrent Processing workload is moved to the secondary
node as defined in the Concurrent Program definition in Applications. If the
node becomes available (pingable) again, workload will be migrated back to
that node if it is defined as the primary processing node. These activities are
designed to occur automatically, without intervention, through the Service
Management and Parallel Concurrent Processing functionality, and will
seamlessly execute, unless the Concurrent Managers are manually stopped
on the source (primary) node and manually restarted on a target (secondary)
node during a failure scenario. It is the intended behavior that the FNDSM
process terminates on the target (secondary) node during a CP migration due
to a failure.

If Concurrent Processing is manually terminated on the source (primary)
node, and moved to a target (secondary) node, then the database profile that
specifies the ops_instance number where the request is supposed to run
needs to be updated so that the program will run on the secondary node.
Otherwise the request will remain in a ‘pending’ status until the primary
node is active again, and the processes migrate back to the primary node.
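
The Display/Update Pending Concurrent Requests scripts in Appendix D – SQL
Scripts show one way to make this change; a hedged sketch, using the example
instance numbers from that appendix and the APPS password placeholder:

sqlplus apps/<apps_password> <<'EOF'
update fnd_concurrent_requests
set ops_instance = 1
where ops_instance = 2
and phase_code = 'P';
commit;
EOF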

Applications Tier and Middle Tier Failures require that the Database profiles
for all of the Applications components be updated. This is accomplished
most easily using the scripts that are generated during the installation and
cloning processes. If these updates are not performed various connectivity
issues will be encountered through the Forms, and Self-Service Applications.

Two basic failure scenarios are demonstrated:

1. Loss of a RAC database instance.
2. Complete loss of a RAC node.

Through the execution of each scenario the behavior of the Applications,
Middle Tier, and Database components is demonstrated, and the various
configurations, and behaviors can be extrapolated from these basic tests.
When testing each scenario batch and OLTP workload are simulated using
standard Applications processes. Batch workload is tested using a
Concurrent Program submission for Gather Schema Statistics. This allows
auditing and tracking of the progress, and completion of the batch
submission via monitoring the database tables last analyzed timestamp.
OLTP workload is tested via access to common Forms functions, such as
Security.

These tests are provided as sample procedures only.

7.1 Simulate the Loss of a RAC Database Instance and Net Listener
A database instance failure is simulated by performing the following tasks:
1. Start the Database processes on each node in the RAC cluster, and the
Applications and Middle Tier processes on their respective primary
node(s).

2. Delete schema statistics for AR using:
exec dbms_stats.delete_schema_stats('AR');

3. Check tables using the query:


select table_name, TO_CHAR(LAST_ANALYZED,'YYYY-MM-DD HH24:MI:SS') LAST_ANALYZED
from dba_tables where owner = 'AR';

4. Submit the Gather Schema Statistics concurrent program for AR. From
Minerva, navigate to: Sysadmin -> Concurrent -> Requests -> Submit a
New Request -> Gather Schema Statistics (AR::99:NOBACKUP:).

5. Once the request has started abort the primary RAC instance, and kill the
Net listener. The Concurrent Processing executables that are running on
the primary node in the cluster will terminate and then restart after
migrating their database connections to the backup node. Although this
process may take several minutes to complete, depending on workload,
there is no need for any manual intervention.
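
For example, on the primary node (the listener name below is the Appendix E
example):

sqlplus "/ as sysdba" <<'EOF'
shutdown abort
EOF
lsnrctl stop LISTENER_TRNM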

6. There is no need to stop and restart the Applications and Middle Tier
processes on their primary node (only the RAC instance failed,
nothing else). Do not stop and restart the Concurrent Processing
executables using adcmctl.sh. OLTP users should be able to connect
using the primary URL, as normal, after completely disconnecting their
failed session, and the CP processing should move to the target
(secondary) RAC node.



7.2 Simulate Complete Loss of a RAC Node Including Apps Middle Tier
1. Start Database processes on each node in the RAC cluster, and the
Applications and Middle Tier processes on their respective primary
node(s).

2. Delete schema statistics for AR using:
exec dbms_stats.delete_schema_stats('AR');

3. Check tables using the query:


select table_name, TO_CHAR(LAST_ANALYZED,'YYYY-MM-DD HH24:MI:SS') LAST_ANALYZED
from dba_tables where owner = 'AR';

4. Submit the Gather Schema Statistics concurrent program for AR. From
Minerva, navigate to: Sysadmin -> Concurrent -> Requests -> Submit a
New Request -> Gather Schema Statistics (AR::99:NOBACKUP:).

5. Make the primary node 'unpingable' to simulate a failure. This is
accomplished by bringing down the network interface that is assigned
to the IP address for the hostname used when configuring this node. Note that
address for the hostname used when configuring this node. Note that
this testing is intrusive, and will cause the server to be unreachable over
the existing network for all connections:
ifconfig <interface> down

6. Kill the Applications processes on the source (primary) node, as it is
simulated as down:
kill -9 `ps -ef | grep <SID> | awk '{print $2};'`

Once the interface is down, and processes are killed CP workload will
start migrating to the secondary node automatically. It may take several
minutes for the entire process of restarting the managers on the
secondary node to complete.

7. Run the scripts below on the secondary Applications Tier/Middle Tier
server. They are located in the $COMMON_TOP/admin/install/<SID>
directory. When executing the scripts it is necessary to pass the
applications username and password:
abmwebprf.sh apps <password>
adadmprf.sh apps <password>
afadmprf.sh apps <password>
afcmprf.sh apps <password>
affrmprf.sh apps <password>
afwebprf.sh apps <password>
ahladmprf.sh apps <password>
amscmprf.sh apps <password>
amswebprf.sh apps <password>
aradmprf.sh apps <password>
bisadmprf.sh apps <password>
cnadmprf.sh apps <password>
cncmprf.sh apps <password>
csdadmprf.sh apps <password>
cseadmprf.sh apps <password>
csfagprf.sh apps <password>
csfwebprf.sh apps <password>
csiadmprf.sh apps <password>
eamadmprf.sh apps <password>
ecxadmprf.sh apps <password>
fteadmprf.sh apps <password>
gladmprf.sh apps <password>
ibywebprf.sh apps <password>
icxwebprf.sh apps <password>
igccmprf.sh apps <password>
jtfadmprf.sh apps <password>
okeadmprf.sh apps <password>
okladmprf.sh apps <password>
oksfrmprf.sh apps <password>
ontadmprf.sh apps <password>
paadmprf.sh apps <password>
wipadmprf.sh apps <password>
wshadmprf.sh apps <password>
xnccmprf.sh apps <password>

8. Start the Applications processes on the target (secondary) node. Do NOT
restart the Concurrent Processing processes, as they have migrated from
the source (primary) to the target (secondary) node automatically.

9. Applications users can connect to the system using the alternate URL.
On our test system the primary URL for Applications is:
http://minerva.us.oracle.com:8092/dev60cgi/f60cgi
The alternate URL is:
http://zeus.us.oracle.com:8092/dev60cgi/f60cgi
10. The CP request output from migrated processes will show activity from
the time of migration to the backup system, until the completion time.
The primary system will contain log information also. In order to view
output of a request submitted for a node that is simulated ‘down’ the
FNDFS Listener must be active. This is the Applications Net8 Listener
process. New requests submitted are running normally. No action is
required.



8.0 Finishing the RAC Installation
After completing all of the steps outlined in this document, and testing and
verifying the installation, there are several finishing tasks that need to be
performed.

1. Remove the 9.2.0.3 ORACLE_HOME that was installed by the 11.5.9 Rapidwiz installation.

2. Delete the datafiles that remain in the DATA_TOP directories that were
used during the initial Applications installation. All of these files should
have been moved to shared disk at this time.

3. Remove any unnecessary context files or directories on each target host
copied during the cloning process that point to the original source host.

Once these tasks have been completed, the next tasks that should be
addressed are implementing Oracle9i New Features, such as the Server
Parameter File (SPFILE), Automatic Undo Management, and Automatic PGA
Memory Management.



Appendix B – Control File
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "TRN" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 500
MAXINSTANCES 2
MAXLOGHISTORY 453
LOGFILE
GROUP 1 '/dev/vx/rdsk/Stripe11/log2VISION.dbf' SIZE 5M,
GROUP 2 '/dev/vx/rdsk/Stripe11/log1VISION.dbf' SIZE 5M
DATAFILE
'/dev/vx/rdsk/Stripe11/sys1VISION.dbf',
'/dev/vx/rdsk/Stripe11/sys2VISION.dbf',
'/dev/vx/rdsk/Stripe11/sys3VISION.dbf',
'/dev/vx/rdsk/Stripe11/sys4VISION.dbf',
'/dev/vx/rdsk/Stripe11/sys5VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat1VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat2VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat3VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat4VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat5VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat6VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat7VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat8VISION.dbf',
'/dev/vx/rdsk/Stripe11/idx1VISION.dbf',
'/dev/vx/rdsk/Stripe11/idx2VISION.dbf',
'/dev/vx/rdsk/Stripe11/idx3VISION.dbf',
'/dev/vx/rdsk/Stripe11/idx4VISION.dbf',
'/dev/vx/rdsk/Stripe11/ctx1VISION.dbf',
'/dev/vx/rdsk/Stripe11/owa1VISION.dbf',
'/dev/vx/rdsk/Stripe11/ofctx1VISION.dbf',
'/dev/vx/rdsk/Stripe11/ofcid1VISION.dbf',
'/dev/vx/rdsk/Stripe11/ofcmn1VISION.dbf',
'/dev/vx/rdsk/Stripe11/ofcmg1VISION.dbf',
'/dev/vx/rdsk/Stripe11/ofcdb1VISION.dbf',
'/dev/vx/rdsk/Stripe11/rbs1VISION.dbf',
'/dev/vx/rdsk/Stripe11/idx5VISION.dbf',
'/dev/vx/rdsk/Stripe11/dat9VISION.dbf',
'/dev/vx/rdsk/Stripe11/sys6VISION.dbf'
CHARACTER SET WE8ISO8859P1
;
REM Database can now be opened normally.
ALTER DATABASE OPEN;
REM Commands to add tempfiles to temporary tablespaces.
REM Online tempfiles have complete space information.
REM Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/dev/vx/rdsk/Stripe11/tmp01.dbf'
SIZE 786432000 reuse AUTOEXTEND OFF;
REM End of tempfile additions.



Appendix C – DD Datafiles to Raw Devices
set heading off
set pagesize 99
set linesize 100
set feedback off
set echo off
select 'dd if='||file_name||' of=/dev/vx/rdsk/Stripe11/'||substr(file_name,-17)||' bs=65536' from dba_data_files;
select 'dd if='||file_name||' of=/dev/vx/rdsk/Stripe11/'||substr(file_name,-17)||' bs=65536' from dba_temp_files;
select 'dd if='||member||' of=/dev/vx/rdsk/Stripe11/'||substr(member,-17)||' bs=65536' from v$logfile;
select 'dd if='||name||' of=/dev/vx/rdsk/Stripe11/'||substr(name,-17)||' bs=65536' from v$controlfile;

DD Script Generation SQL



Appendix D – SQL Scripts

ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 3 '/dev/vx/rdsk/Stripe12/log0102.dbf' SIZE 5M reuse,
GROUP 4 '/dev/vx/rdsk/Stripe12/log0202.dbf' SIZE 5M reuse;
Create a 2nd Redo Thread

set heading off
set feedback off
set pagesize 99
set linesize 100
set echo off

select 'vxassist -g Stripe11 -U gen make '||substr(file_name,-17)||' '||(bytes/1048576+1)||'m'||' layout=nolog' from dba_data_files;

select 'vxassist -g Stripe11 -U gen make '||substr(file_name,-17)||' '||(bytes/1048576+1)||'m'||' layout=nolog' from dba_temp_files;

select 'vxassist -g Stripe11 -U gen make '||substr(f.member,-17)||' '||(l.bytes/1048576+1)||'m'||' layout=nolog' from v$logfile f, v$log l where f.group# = l.group#;

select 'vxassist -g Stripe11 -U gen make '||substr(name,-17)||' '||'50m'||' layout=nolog' from v$controlfile;

Generate Data File Listing

alter database disable thread 2;


alter database enable thread 2;
Disable and enable redo thread

column instance_name format a6 heading 'Inst|Name'
column machine format a10 heading 'Login|Machine'
column username format a10
column failover_type format a8 heading 'Failover|Type'
column failover_method format a8 heading 'Failover|Method'
column failed_over format a6 heading 'Failed|Over'
column count format 9999999 heading 'Process|Count'

SELECT i.instance_name, machine, username, failover_type,
       failover_method, failed_over, count(*) count
FROM gv$session s, gv$instance i
where s.inst_id = i.instance_number
GROUP BY instance_name, machine, username, failover_type,
         failover_method, failed_over;
Monitor Failover



select request_id, status_code, requested_start_date, ops_instance
from fnd_concurrent_requests
where ops_instance = 2
and phase_code = 'P';
Display Pending Concurrent Requests

update fnd_concurrent_requests
set ops_instance = 1
where ops_instance = 2
and phase_code = 'P';

Update Pending Concurrent Requests

create undo tablespace APPS_UNDOTS2
datafile '/d07/trndata/undo02.dbf' size 2000m reuse
extent management local;
Create Undo Tablespace



Appendix E – Net Configuration Files

Listener.ora configuration:
#
# LISTENER.ORA FOR APPLICATIONS - Database Server
#

#
# Net8 definition for Database listener
#

LISTENER_TRNM =
(ADDRESS_LIST =
(ADDRESS= (PROTOCOL= IPC)(KEY= EXTPROCTRN))
(ADDRESS= (PROTOCOL= TCP)(Host= minerva)(Port= 1593))
)

SID_LIST_LISTENER_TRNM =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME= TRN)
(ORACLE_HOME= /bootcamp1/trndb/9.2.0)
(SID_NAME = TRNM)
)
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /bootcamp1/trndb/9.2.0)
(PROGRAM = extproc)
)
)

STARTUP_WAIT_TIME_LISTENER_TRNM = 0
CONNECT_TIMEOUT_LISTENER_TRNM = 10
TRACE_LEVEL_LISTENER_TRNM = OFF

LOG_DIRECTORY_LISTENER_TRNM = /bootcamp1/trndb/9.2.0/network/admin
LOG_FILE_LISTENER_TRNM = LISTENER_TRNM.log
TRACE_DIRECTORY_LISTENER_TRNM = /bootcamp1/trndb/9.2.0/network/admin
TRACE_FILE_LISTENER_TRNM = LISTENER_TRNM.trc



tnsnames.ora configuration:
#
# TNSNAMES.ORA FOR APPLICATIONS
#
# Net8 definition for the database

TRN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(PORT = 1593)(HOST = minerva))
(ADDRESS = (PROTOCOL = TCP)(PORT = 1593)(HOST = zeus))
)
(CONNECT_DATA = (SERVICE_NAME = trndb)(SERVER=DEDICATED)
))

TRNM = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=minerva)(PORT=1593))
(CONNECT_DATA=(INSTANCE_NAME=TRNM)(SERVICE_NAME=trndb))
)
TRNZ = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=zeus)(PORT=1593))
(CONNECT_DATA=(INSTANCE_NAME =TRNZ)(SERVICE_NAME=trndb))
)

#
# Intermedia
#
extproc_connection_data =
(DESCRIPTION=
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROCTRN))
)
(CONNECT_DATA=
(SID=PLSExtProc)
(PRESENTATION = RO)
) )
LISTENER_TRNM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = minerva)(PORT = 1593))
)

LISTENER_TRNZ =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = zeus)(PORT = 1593))
)



References
Oracle Applications System Administrator’s Guide, Release 11i - A96154-03

Installing Oracle Applications Release 11i (11.5.9) - B10638-01

Oracle9i Real Application Clusters - A89867-02

Veritas Volume Manager on Solaris & Real Application Clusters - Note:201207.1

Oracle Parallel Server For SunCluster Solutions for Mission Critical Computing

Cloning Oracle Applications Release 11i - CR: 282930

Setting Up Parallel Concurrent Processing On Unix Server - Note: 185489.1

Interoperability Notes Oracle Applications Release 11i with Oracle9i Release 9.2.0 -
Note 162091.1
Oracle9i Real Application Clusters Setup and Configuration, Release 2 (9.2) -
A96600-01

Oracle9i Real Application Clusters Concepts, Release 2 (9.2) - A96597-01

Installing and setting up ocfs on Linux - Basic Guide - Note:220178.1

How to find the current OCFS version for Linux - Note:238278.1

Hangcheck Timer FAQ - Note:232355.1

RAC Linux 9.2: Configuration of cmcfg.ora and ocmargs.ora - Note:222746.1

Oracle Cluster File System, Installation Notes, Release 1.0 for RedHat Linux
Advanced Server 2.1 - Part No. B10499-01

Oracle Applications Release 11i with Oracle9i Release 2 (9.2.0) - Note:216550.1

Opatch - Where Can I Find the Latest Version of Opatch? - Note:224346.1

Cloning Oracle Applications Release 11i - Note: 135792.1

Cloning Oracle Applications Release 11i with Rapid Clone - Note:230672.1

Generic Service Management - Note:165041.1

Transaction Manager Setup and Configuration Requirement in an 11i RAC
Environment - Note:240818.1

Concurrent Manager Setup and Configuration Requirements in an 11i RAC
Environment - Note:241370.1

