Oracle Applications Release 11.5.9 with 9i RAC
Installation & Configuration
Linux and Unix (generic)
The purpose of this document is to identify the steps and requirements
necessary to install Oracle Applications using the standard Rapidwiz
installation, and to convert the existing single-instance installation to Real
Application Clusters (RAC) using standard installation and upgrade
methods. Failover testing is conducted within existing constraints, as the
current Oracle Applications Release 11i architecture supports dynamic
failover with respect to Concurrent Processing only. Dynamic OLTP
workload failover, specifically Transparent Application Failover (TAF), is not
supported.
Following the steps outlined in this document to install Oracle Applications
Release 11i, convert the single-instance installation to RAC, and test and
verify the configuration requires a significant amount of time and effort, and
should not be undertaken without committing to executing the process from
start to finish. Completing all of these tasks requires an estimated minimum
of approximately 40 hours under optimal conditions.
With the changes that have been implemented in the 11.5.9 Applications
technology stack, the process outlined in this document differs significantly
from the previous 11.5.8 Applications documentation. This is primarily due
to the change in the bundled Applications RDBMS from 8.1.7.4 to 9.2.0.3.
Although this eliminates the need for an RDBMS upgrade during the RAC
conversion, the lack of RAC support in the current Rapidwiz installation
requires a separate 9.2.0.x RDBMS ORACLE_HOME to be installed in order
to support RAC (and Cluster Manager on Linux) requirements. The latest
certified patchset release of 9iR2 should be installed according to the
Interoperability Notes for Oracle Applications.
This section also includes minimum Tech Stack requirements that must be
met for installation.
Oracle Software
The following Oracle software was used during this testing:
5.1 Cluster
A cluster is a group of nodes that are interconnected to work as a single,
highly available and scalable system. The primary cluster components are
processor nodes, a cluster interconnect and shared disk subsystem. Clusters
share disk, not memory. Each node also has its own operating system,
database and application software.
6.1 Install Oracle Cluster File System and Oracle Cluster Manager
In order to provide support for the shared database files, and the necessary
cluster support required by RAC, in a Linux environment, Oracle Cluster File
System, and Oracle Cluster Manager must be installed. If working in a Unix
environment ensure that either the vendor specific or 3rd party products are
installed to provide this functionality.
It is important to note that at this time OCFS 1.0 supports only Oracle
database datafiles (data, redo, control, and archive log files). Non-Oracle
files, other than the quorum and SRVM configuration files, should not be
placed on OCFS disk.
1. Format the shared devices on each node using FDISK as appropriate for
the particular installation. FDISK is an OS utility; its particular usage
and instructions can be found in your OS documentation.
2. Create the mount points for the OCFS filesystems on each cluster node.
This is done as the root user (e.g. mkdir -p /d01 /d02 /d03 /d04 /d05 /d06
/d07).
3. Install the OCFS RPM files per the OCFS Installation Instructions on each
cluster node. The latest RPMs can be found by following the instructions in
Note:238278.1 on MetaLink. If using a custom kernel, issues with the
installation are unsupported. The RPMs are installed as the root
user. After downloading each required RPM into a working directory,
issue the commands:
rpm -i ocfs-support-1.0-2.i686.rpm
rpm -i ocfs-2.4.9-e.3-1.0-2.i686.rpm
Note that the RPM versions may differ for your installation, but the
modules should be installed in the order above.
4. Using the utility ocfstool, generate the file /etc/ocfs.conf. When
generating the ocfs.conf file, use the private adaptor, not the public one. From
an X-Windows or VNC session, as the root user, run ocfstool by issuing
the command: /usr/sbin/ocfstool. Select the Tasks menu, and supply
the information for the private adaptor. It is not necessary to change the
default port number.
6. Run the format for the file system, either from ocfstool or from the
command line. The format command is executed from one cluster
node only, and is not to be re-run from each node. If executing the format
from the command line, the syntax is shown below, where <dir> is the
directory that was created in #2 above, and <device> is the physical device
name formatted in #1 above.
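As a hedged sketch, assuming the OCFS 1.0 mkfs.ocfs utility (the block size,
ownership, and permission values shown are illustrative and may differ for
your OCFS release):
mkfs.ocfs -b 128 -L <dir> -m <dir> -u `id -u oracle` -g `id -g oracle` -p 0775 <device>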
7. After all devices have been formatted, mount the devices to each cluster
node. An example of mounting an ocfs device is below. In this example
we are mounting <device> /dev/sda1 to <dir> /d01.
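For example, as root (the device and mount point follow the example above):
mount -t ocfs /dev/sda1 /d01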
8. Reboot all cluster nodes, ensure all modules are loaded as required after
startup, and the filesystems are mounted to each cluster node.
Note that the RPM version may differ for your installation.
2. Create an Oracle home, separate from the 11.5.9 filesystem, and install
Oracle Cluster Manager into the 9.2 ORACLE_HOME. When running
the Oracle Universal Installer from the 9.2.0.1 Oracle distribution, ensure
there is not an /etc/oraInst.loc file pointing to an existing (old)
oraInventory location. During the installation, place the oraInventory
into the ORACLE_BASE directory for the 9.2 ORACLE_HOME. The
Applications installation performed in the next section with Rapidwiz
uses its own oraInventory location, so the two inventories must be kept
separate.
4. After the 9.2.0.1 installation has completed, apply the latest 9.2 patchset
for Oracle Cluster Manager, and configure the Cluster Manager using the
instructions in MetaLink Note:222746.1. Also see the Oracle Cluster File
System, Installation Notes, Release 1.0 for RedHat Linux Advanced
Server 2.1. If installing directly to RDBMS 9.2.0.4, skip 4b through 6. Also,
refer to Note:256617.1 for a working example of the cmcfg.ora file.
a. Run cmcfg.sh as oracle from $ORACLE_HOME/oracm/admin
b. Make the following configuration changes to cmcfg.ora in
$ORACLE_HOME/oracm/admin:
1. Remove the watchdog parameters:
WatchdogSafetyMargin=5000
WatchdogTimerMargin=60000
2. Add the parameters
KernelModuleName=hangcheck-timer
HeartBeat=15000
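For illustration only, a cmcfg.ora for the two example hosts used in this
document might resemble the sketch below; the node names, quorum file
location, and timing values are assumptions, and Note:256617.1 remains the
authoritative example:
HeartBeat=15000
KernelModuleName=hangcheck-timer
ClusterName=Oracle Cluster Manager, version 9i
PollInterval=1000
MissCount=210
PrivateNodeNames=minerva-priv zeus-priv
PublicNodeNames=minerva zeus
ServicePort=9998
CmDiskFile=/d02/oradata/quorum.dbf
HostName=minerva-priv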
5. Make the following changes to the
$ORACLE_HOME/oracm/admin/ocmargs.ora on both nodes:
a. Remove the watchdog parameter:
watchdogd
6. Edit the ocmstart.sh and remove all references to watchdog:
# watchdogd's default log file
#WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log
# watchdogd's default backup file
#WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak
# Get arguments
#watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE |\
# sed -e 's+^watchdogd *++'`
# Check watchdogd's existance
#if watchdogd status | grep 'Watchdog daemon active' >/dev/null
#then
# echo 'ocmstart.sh: Error: watchdogd is already running'
# exit 1
#fi
# Backup the old watchdogd log
#if test -r $WATCHDOGD_LOG_FILE
#then
# mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
#fi
# Startup watchdogd
#echo watchdogd $watchdogd_args
#watchdogd $watchdogd_args
7. Load the module for hangcheck-timer as root:
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
8. Start the Oracle Cluster Manager. Before starting OCM, verify that the log
directory under ORACLE_HOME/oracm was created during the
installation (i.e. ORACLE_HOME/oracm/log/cm.out). As the root user,
source the Oracle environment and run
ORACLE_HOME/oracm/bin/ocmstart.sh.
At this point you have installed cluster support: either Linux-specific, using
Oracle Cluster Manager, or vendor-specific/3rd party for Unix. The
installation and configuration of a shared disk subsystem should also have
been completed, either using Oracle Cluster File System for Linux, or vendor-
specific/3rd party for Unix.
Before starting Rapidwiz, ensure that the /etc/oraInst.loc file used during the
Oracle9i Cluster Manager installation is renamed so that the inventory for
Applications is not mixed with the 9.2 Cluster Manager installation. On
Linux, Rapidwiz for Applications uses /etc/oraInst.loc, pointing the
inventory to /etc/oraInventory. On Unix, Rapidwiz for Applications uses
/var/opt/oracle/oraInst.loc, pointing the inventory to
/var/opt/oracle/oraInventory.
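As a hedged illustration on Linux (the backup file names are arbitrary), the
inventory pointer can be set aside and restored with simple renames as root:
# before starting Rapidwiz, set aside the Cluster Manager inventory pointer
mv /etc/oraInst.loc /etc/oraInst.loc.920cm
# later, before the 9.2.0.x RAC RDBMS installation, set aside the
# Applications pointer and restore the Cluster Manager one
mv /etc/oraInst.loc /etc/oraInst.loc.apps
mv /etc/oraInst.loc.920cm /etc/oraInst.loc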
Follow the standard installation instructions while installing the Release 11i
software on the primary (source) node in the cluster, as would be performed
for a single node installation. If prompted during the Applications 11i
installation process with the Cluster Node Selection Screen for the RAC
option during the Database Server (RDBMS) installation, do not select any of
the remote
nodes that are available. If, while running Rapidwiz, the Cluster Node
Selection Screen is completed for the additional nodes that make up the
cluster, the installation will not be correct, since the current Rapidwiz
installation does not support RAC.
Given the requirement to install RAC and convert the existing RDBMS from a
single instance to a RAC configuration, it is less complex to install a separate
Oracle9i home outside of the 11.5.9 filesystem, apply the latest certified
RDBMS patchset, patch the installation according to the Interoperability
Notes for Oracle Applications, and then switch the Applications Release
11.5.9 Oracle9i RDBMS to use the newly installed Oracle9i RAC distribution.
This task is completed during the steps necessary to move the single instance
RDBMS installation to RAC in the next section, so no action is required
during the initial Applications installation tasks.
If using Oracle Cluster File System (OCFS), or a 3rd party CFS, the installation
can be performed directly to OCFS/CFS by making the appropriate
DATA_TOP assignments while running Rapidwiz. The results will vary
depending on the I/O configuration of the shared disk subsystem. Some
configurations will not support installing directly to shared disk, whether
OCFS or a 3rd party cluster filesystem is utilized. If the configuration will not
support a direct install to OCFS/CFS disk, the Rapidwiz installation will fail,
due to the database datafiles dropping blocks during the unzip phase to the
shared filesystem.
In order to complete the installation for Oracle9i with the RAC option, to
support the conversion of Applications 11.5.9 to a RAC environment, on
Linux install Oracle9i with the RAC option into the existing Oracle9i home
used for the Cluster Manager installation in section 6.1. If working in a Unix
environment create a separate location for the new 9.2.0.x RAC
ORACLE_HOME, outside of the Applications 11.5.9 filesystem. Follow the
Oracle9i installation instructions in Note:216550.1 Oracle Applications
Release 11i with Oracle9i Release 2 (9.2.0) Interoperability Note. When
following the Interoperability note, only execute the steps related to installing
Oracle9i. These are Section 1 steps 5 – 9, in the current Interoperability Note
dated March 2004.
Before starting the Oracle 9i installation ensure that the /etc/oraInst.loc file
(on Linux), or the /var/opt/oracle/oraInst.loc (on Unix) used during the
Rapidwiz installation for Applications is renamed so that the inventory for
Applications does not get mixed with the 9.2.0.x RDBMS installation. On
Linux restore the /etc/oraInst.loc file used during the Oracle Cluster
Manager installation, so that the RDBMS installation points to the same
oraInventory. On Unix rename the /var/opt/oracle/oraInst.loc file that was
used during the Applications installation, and place the oraInventory into the
ORACLE_BASE location of the new ORACLE_HOME for the 9.2.0.x RAC
installation. Also, ensure the environment variables are set to the new/OCFS
9.2 ORACLE_HOME. The cluster node selection screen should be displayed
during the 9i RDBMS installation; when prompted, select all available nodes
where the RAC instances will run.
Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications. These are the adstpall.sh script found in
COMMON_TOP/admin/scripts/<SID>_<HOST>, and the addbctl.sh and
addlnctl.sh scripts found in the 9.2
ORACLE_HOME/appsutil/scripts/<SID>_<HOST> directory.
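A hedged example of an orderly shutdown, assuming the TRN_minerva
context name used elsewhere in this document and a placeholder APPS
password:
# Applications tier
$COMMON_TOP/admin/scripts/TRN_minerva/adstpall.sh apps/<apps_password>
# Database tier
$ORACLE_HOME/appsutil/scripts/TRN_minerva/addlnctl.sh stop TRN
$ORACLE_HOME/appsutil/scripts/TRN_minerva/addbctl.sh stop immediate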
3. Using the context editor (or vi), edit the context file <SID>_<HOST>.xml
at the new location for ORACLE_HOME/appsutil and change the 11.5.9
Oracle9i ORACLE_HOME location to the Oracle9i RAC
ORACLE_HOME's new location. All references to the old
ORACLE_HOME must be replaced by the new ORACLE_HOME. If
performing a 32 to 64-bit migration, also update the 's_bits' value to
indicate this change.
4. Edit the adautocfg.sh script to replace the old location for CTX_FILE with
the updated version in the new Oracle9i RAC
ORACLE_HOME/appsutil/scripts/<SID>_<HOST> directory (e.g.
CTX_FILE="/oralocal/apradb/9.2.0/appsutil/TRN_minerva.xml" to
CTX_FILE="/oralocal/oracle/9.2.0/appsutil/TRN_minerva.xml"). Also
replace any references to the old ORACLE_HOME with the new location
of the Oracle9i RAC ORACLE_HOME.
7. Verify the AutoConfig changes and update any custom environment files
with the new Oracle9i RAC ORACLE_HOME. The AutoConfig changes
will include:
a. The ‘admin’ directory is created under the new Oracle9i RAC
ORACLE_HOME. This will include the <SID>_<HOST>/bdump,
cdump and udump directories.
b. The ORACLE_HOME/network/admin/<SID>_<HOST> directory
is created under the new Oracle9i RAC ORACLE_HOME. This will
include the tnsnames.ora and listener.ora files, and is the
$TNS_ADMIN location for the 11.5.9 RDBMS installation.
After completing all of the above steps, you should be able to start the
Oracle9i RDBMS from the new ORACLE_HOME, along with the
Applications and Middle Tier processes on the source (primary) node.
Again, a complete test of the Applications environment should be conducted
to ensure that all functions work properly.
In addition to the datafiles, create a raw file for a second rollback tablespace, a
second group of online redo logs, and the existing control files. Also, by this
point in time you will have created a shared file for the quorum and Server
Management (SRVM) configuration devices (see the Oracle9i Real
Application Clusters Setup and Configuration guide, chapter 2, Configuring
the Shared Disks, for information regarding SRVM).
At this time all that is necessary is to complete the creation of each of the
shared files that will be needed, and ensure that there is adequate space
available to accommodate the original data file and associated overhead. For
raw files this includes an extra data block to store OS header information.
This is not the same as the database block size used for Applications, which
is typically 8k, and may vary for different platforms. The simplest, although
not exact, approach is to add 1Mb to each file size if using raw devices. This
is not necessary with cluster filesystem devices.
Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications.
Also modify any custom scripts and change the DB_SID as required.
8. Ensure the Global Services Daemon (GSD) is started on all cluster nodes.
Prior to starting the GSD initialize the Server Management (SRVM)
configuration device, then start the GSD. This is done as the Oracle user
from the OS prompt:
srvconfig -init
gsdctl start
9. Recreate the controlfile using the script from step 1. In order to perform
this task, the database must be started in 'nomount' state, and the SQL
script executed from the SQL*Plus prompt. If any errors occur during
this step, it will be necessary to abort the database, correct the errors, and
rerun the script. In a worst-case scenario, you should still have the datafiles
in place from the original single instance configuration; whether configured
on raw disk partitions or OCFS, they can always be recopied to raw disk or
cluster filesystem again if necessary.
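As a hedged sketch, assuming the controlfile creation script from step 1 was
saved as cr_control.sql (the file name is arbitrary):
sqlplus "/ as sysdba"
SQL> startup nomount
SQL> @cr_control.sql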
11. The Release 11.5.9 RDBMS utilizes System Managed Undo (SMU). Add
the undo tablespace init.ora parameter for each additional RAC instance's
undo tablespace to that instance's init<DB_SID>.ora file. Create the
required Undo tablespace from the active instance before proceeding. See
Appendix D - SQL Scripts, Create Undo Tablespace for the command syntax.
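As a hedged sketch (the tablespace name, datafile location, and size are
illustrative; Appendix D contains the reference syntax), from the active
instance:
create undo tablespace APPS_UNDOTS2
datafile '/d03/oradata/trndb/undots201.dbf' size 1000M;
Then, in the secondary instance's init<DB_SID>.ora (e.g. initTRNZ.ora):
undo_tablespace=APPS_UNDOTS2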
12. Modify the second redo thread that was added, to make the thread
private. This is accomplished by disabling and then enabling the redo thread
from another instance in the cluster. The instance being modified must be
shut down at the time the modification is made. See Appendix D SQL
Scripts, Disable and Enable Redo Thread for the command syntax for this
operation.
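A hedged sketch of the syntax (Appendix D is the reference); the statements
are run from an open instance while the instance that owns thread 2 is shut
down, and enabling the thread without the PUBLIC keyword makes it
private:
alter database disable thread 2;
alter database enable thread 2;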
15. Using the context editor (or vi), edit the renamed context file
<SID>_<HOST>.xml, and change any reference from the source host to
the target host. Also update any changes in location for file system
location differences, as required.
16. Edit the adautocfg.sh script on the remote (target) host, in the remote
host's Oracle9i RAC ORACLE_HOME/appsutil/scripts/<SID>_<HOST>
directory, to replace the old location of the CTX_FILE to point to the
renamed CTX_FILE in the ORACLE_HOME/appsutil directory. The
renamed <SID>_<HOST>.xml file has the updates for the new target
host (e.g.
CTX_FILE="/oralocal/apradb/9.2.0/appsutil/TRN_minerva.xml" to
CTX_FILE="/oralocal/oracle/9.2.0/appsutil/TRN_zeus.xml"). Also
ensure any references to the ORACLE_HOME point to the location of the
Oracle9i RAC ORACLE_HOME on the target host. It is not necessary to
rename the <SID>_<HOST> directory under
ORACLE_HOME/appsutil/scripts.
18. Verify the AutoConfig changes and update any custom environment
files with the new Oracle9i RAC ORACLE_HOME. The AutoConfig
changes will include:
a. The ‘admin’ directory is created under the new Oracle9i RAC
ORACLE_HOME. This will include the <SID>_<HOST>/bdump,
cdump and udump directories.
b. The ORACLE_HOME/network/admin/<SID>_<HOST> directory
is created under the new Oracle9i RAC ORACLE_HOME. This will
include the tnsnames.ora and listener.ora files, and is the
$TNS_ADMIN location for the 11.5.9 RDBMS installation.
20. If it exists, also copy the existing ifilecbo.ora from the source (primary)
node's ORACLE_HOME/dbs directory to each target (secondary) node's
ORACLE_HOME/dbs directory. It is not necessary to modify this file for
the secondary nodes. The 11.5.9 ifilecbo.ora is referenced from the
init.ora but the ifile is empty.
21. Start the second instance. After the second instance is started shut down
the primary instance, and modify the redo thread to private using the
same procedure as in the previous step.
22. Recreate any database links that reference the renamed SID.
Using AutoConfig to make the edits is much less time-consuming and
eliminates most of the manual effort; however, AutoConfig is currently not
RAC aware, so the configuration file edits above (to tnsnames.ora,
listener.ora, and the RDBMS startup and shutdown scripts) will need to be
repeated any time AutoConfig is run on the Database tier.
Note that the Self-Service applications will not function at this time due to
required configuration changes to the *.dbc files located in
$FND_TOP/secure. These modifications will be completed in steps 6 and 7
in the next section.
Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications.
1. Copy the $FND_TOP/secure dbc file to a new name which reflects the
new instance names for the RAC instances (e.g. minerva_trn.dbc to
minerva_trnm.dbc, and minerva.us.oracle.com_trn.dbc to
minerva.us.oracle.com_trnm.dbc). It is not necessary to regenerate the
dbc files. The default naming format for the *.dbc files is
<hostname>_<instance>.dbc.
2. Edit the dbc files copied in the step above and change the DB_NAME
entry to reflect the new instance names in each corresponding file (i.e.
DB_NAME=TRN to DB_NAME=TRNM).
3. Copy the modified dbc files to new names that reflect the remote node(s)
in the cluster (e.g. in $FND_TOP/secure, minerva_trnm.dbc to
zeus_trnz.dbc, and minerva.us.oracle.com_trnm.dbc to
zeus.us.oracle.com_trnz.dbc). Then modify each file to reflect the
host and instance name for that node (e.g. edit the zeus*.dbc files and
change DB_NAME=TRNM to DB_NAME=TRNZ, and
DB_HOST=minerva.us.oracle.com to
DB_HOST=zeus.us.oracle.com).
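As a hedged shell sketch of steps 1 through 3, using this document's example
host and instance names (the edits themselves can be made with any text
editor):
cd $FND_TOP/secure
cp minerva_trn.dbc minerva_trnm.dbc
cp minerva.us.oracle.com_trn.dbc minerva.us.oracle.com_trnm.dbc
# in both copies change DB_NAME=TRN to DB_NAME=TRNM
cp minerva_trnm.dbc zeus_trnz.dbc
cp minerva.us.oracle.com_trnm.dbc zeus.us.oracle.com_trnz.dbc
# in both zeus copies change DB_NAME=TRNM to DB_NAME=TRNZ and
# DB_HOST=minerva.us.oracle.com to DB_HOST=zeus.us.oracle.com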
#
#Web ADI Properties
#
wrapper.bin.parameters=-DBNEDBCFILE=
It will also be necessary to make this change to reflect the DBC file name
change in jserv.properties, on any new node, after completing the cloning
instructions outlined in section 6.8 Clone the Applications installation, in
order to clone the installation to another node, or middle tier server.
3. Update any custom startup scripts that contain references to the Net8
listener. None of the standard Applications startup/shutdown scripts
have the 9.2.0 ORACLE_HOME listener name referenced.
After completing all of the required steps you should be able to start the
Applications, Middle Tier, and Database processes on the local (primary)
node, and start the Database and Net Listener processes on the remote
node(s). Again a complete test of the Applications environment should be
conducted to ensure that all functions work properly.
In executing the cloning process, regardless of the variations that have been
introduced in different versions of the cloning procedures, Oracle
Applications 11i is broken into its component structure during the cloning
process, and each component layer is copied from the source to the target
server. Cloning is normally performed in order to copy the RDBMS (9.2.0
ORACLE_HOME and database datafiles) and the Applications tier
components (APPL_TOP, COMMON_TOP, 8.0.6 ORACLE_HOME, and iAS
ORACLE_HOME). When cloning in a RAC environment we do not clone the
RDBMS components, as this is not required. Any steps that reference
RDBMS components during the cloning procedure can be skipped.
Ensure that all of the Applications processes have been shut down in an
orderly manner, using the shutdown scripts provided with Oracle
Applications. Also verify that the oraInst.loc inventory pointer is set to the
Applications oraInventory location that was created during the Rapidwiz
installation, and that the file has the correct ownership and permissions.
Follow the steps outlined in the Cloning procedure in Cloning Oracle
Applications Release 11i with Rapid Clone, with the following instructions:
#
#Web ADI Properties
#
wrapper.bin.parameters=-DBNEDBCFILE=
It will also be necessary to make this change to reflect the DBC file
name change in jserv.properties, on any new node, after completing
the cloning instructions outlined in section 6.8 Clone the
Applications installation, in order to clone the installation to another
node, or middle tier server.
There are several different failure scenarios that can occur, depending on the
type of Applications Technology Stack implementation that is performed.
The most basic scenarios are:
1. The database instance that supports the Applications and Middle-Tier(s) can
fail.
2. The Database Tier server that supports PCP processing can fail.
3. The Applications/Middle-Tier server that supports the CP (and
Applications) base can fail.
Ensure that all of the Applications and Database server processes have been
shut down in an orderly manner, using the shutdown scripts provided with
Oracle Applications.
8. AutoConfig will update the database profiles and reset them for the
node from which it was last run. If necessary reset the database
profiles back to their original settings by running the scripts outlined
in step 3-d in 6.8 Clone the Applications installation.
10. Navigate to Install > Nodes and ensure that each node is registered.
Use the node name as it appears when executing a ‘nodename’ from
the Unix prompt on the server. GSM will add the appropriate
services for each node at startup.
11. Navigate to Concurrent > Manager > Define, and define the primary
and secondary node names for all the concurrent managers
according to the desired configuration for each node’s workload.
The Internal Concurrent Manager is defined on the primary PCP
node only. When defining the Internal Monitor for the secondary
(target) node, add the primary node as the secondary node
designation to the Internal Monitor, and assign a standard work shift
with one process. Also, make the primary/secondary node,
workshift and process assignments for the Internal Monitor on the
primary cluster node.
12. Prior to starting the Manager processes, it is necessary to edit the
APPSORA.env file on each node in order to specify a TWO_TASK
entry that contains the INSTANCE_NAME parameter for the local
node's Oracle instance, in order to bind each Manager to the local
instance. This should be done regardless of whether Listener load
balancing is configured, as it will ensure the configuration conforms
to the required standard of having the TWO_TASK set to the
instance name of each node, as specified in GV$INSTANCE. See
MetaLink Note:241370.1 for information regarding configuring PCP
in a RAC environment.
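As a hedged sketch, using this document's example instance names (the
TRNM and TRNZ aliases with INSTANCE_NAME appear in the
tnsnames.ora example in the Appendix):
# APPSORA.env on the minerva node
TWO_TASK=TRNM
export TWO_TASK
# APPSORA.env on the zeus node
TWO_TASK=TRNZ
export TWO_TASK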
14. Navigate to Concurrent > Manager > Administer and verify that the
Service Manager and Internal Monitor are activated on the
secondary node. The Internal Monitor should not be active on the
primary cluster node.
15. Stop and restart the Concurrent Manager processes on their primary
node(s), and verify that the managers are starting on their
appropriate nodes. On the target (secondary) node, in addition to
Database Tier failures are transparent, in that RAC provides the fault
tolerance necessary for seamless recovery of batch workload from one node
to another through the Parallel Concurrent Processing workload migration
mechanism. Applications and Middle Tier failures are minimized in that
workload migration for OLTP processing can be performed with a minimum
of downtime, or automated using cluster failover management functionality
or a BIG-IP implementation, in order to re-point Applications users and
Middle Tier components when required.
Applications Tier and Middle Tier failures require that the Database profiles
for all of the Applications components be updated. This is accomplished
most easily using the scripts that are generated during the installation and
cloning process.
7.1 Simulate the Loss of a RAC Database Instance and Net Listener
A database instance failure is simulated by performing the following tasks:
1. Start the Database processes on each node in the RAC cluster, and the
Applications and Middle Tier processes on their respective primary
node(s).
5. Once the request has started, abort the primary RAC instance and kill the
Net listener. The Concurrent Processing executables that are running on
the primary node in the cluster will terminate and then restart after
migrating their database connections to the backup node. Although this
process may take several minutes to complete, depending on workload,
there is no need for any manual intervention.
6. There is no need to stop and restart the Applications and Middle Tier
processes on their primary node (only the RAC instance failed,
nothing else). Do not stop and restart the Concurrent Processing
executables using adcmctl.sh. OLTP users should be able to connect
using the primary URL, as normal, after completely disconnecting their
failed session, and the CP processing should move to the target
(secondary) RAC node.
Once the interface is down and the processes are killed, CP workload will
start migrating to the secondary node automatically. It may take several
minutes for the entire process of restarting the managers on the
secondary node to complete.
9. Applications users can connect to the system using the alternate URL.
On our test system the primary URL for Applications is:
http://minerva.us.oracle.com:8092/dev60cgi/f60cgi
The alternate URL is:
http://zeus.us.oracle.com:8092/dev60cgi/f60cgi
10. The CP request output from migrated processes will show activity from
the time of migration to the backup system until the completion time.
The primary system will also contain log information. In order to view
the output of a request submitted for a node that is simulated 'down',
the FNDFS Listener must be active. This is the Applications Net8
Listener process. New requests that are submitted run normally. No
action is required.
2. Delete the datafiles that remain in the DATA_TOP directories that were
used during the initial Applications installation. All of these files should
have been moved to shared disk at this time.
Once these tasks have been completed, the next tasks that should be
addressed are implementing Oracle9i new features, such as the Server
Parameter File (SPFILE), Automatic Undo Management, and Automatic PGA
Memory Management.
select 'vxassist -g Stripe11 -U gen make '||substr(name, -17)||' '||' 50m'||' layout=nolog'
from v$controlfile;
update fnd_concurrent_requests
set ops_instance = 1
where ops_instance = 2
and phase_code = 'P';
Listener.ora configuration:
#
# LISTENER.ORA FOR APPLICATIONS - Database Server
#
#
# Net8 definition for Database listener
#
LISTENER_TRNM =
(ADDRESS_LIST =
(ADDRESS= (PROTOCOL= IPC)(KEY= EXTPROCTRN))
(ADDRESS= (PROTOCOL= TCP)(Host= minerva)(Port= 1593))
)
SID_LIST_LISTENER_TRNM =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME= TRN)
(ORACLE_HOME= /bootcamp1/trndb/9.2.0)
(SID_NAME = TRNM)
)
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /bootcamp1/trndb/9.2.0)
(PROGRAM = extproc)
)
)
STARTUP_WAIT_TIME_LISTENER_TRNM = 0
CONNECT_TIMEOUT_LISTENER_TRNM = 10
TRACE_LEVEL_LISTENER_TRNM = OFF
LOG_DIRECTORY_LISTENER_TRNM = /bootcamp1/trndb/9.2.0/network/admin
LOG_FILE_LISTENER_TRNM = LISTENER_TRNM.log
TRACE_DIRECTORY_LISTENER_TRNM = /bootcamp1/trndb/9.2.0/network/admin
TRACE_FILE_LISTENER_TRNM = LISTENER_TRNM.trc
Tnsnames.ora configuration:
TRN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(PORT = 1593)(HOST = minerva))
(ADDRESS = (PROTOCOL = TCP)(PORT = 1593)(HOST = zeus))
)
(CONNECT_DATA = (SERVICE_NAME = trndb)(SERVER=DEDICATED)
))
TRNM = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=minerva)(PORT=1593))
(CONNECT_DATA=(INSTANCE_NAME=TRNM)(SERVICE_NAME=trndb))
)
TRNZ = (DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=zeus)(PORT=1593))
(CONNECT_DATA=(INSTANCE_NAME =TRNZ)(SERVICE_NAME=trndb))
)
#
# Intermedia
#
extproc_connection_data =
(DESCRIPTION=
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROCTRN))
)
(CONNECT_DATA=
(SID=PLSExtProc)
(PRESENTATION = RO)
) )
LISTENER_TRNM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = minerva)(PORT = 1593))
)
LISTENER_TRNZ =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = zeus)(PORT = 1593))
)
Oracle Parallel Server For SunCluster Solutions for Mission Critical Computing
Interoperability Notes Oracle Applications Release 11i with Oracle9i Release 9.2.0 -
Note 162091.1
Oracle9i Real Application Clusters Setup and Configuration, Release 2 (9.2) -
A96600-01
Oracle Cluster File System, Installation Notes, Release 1.0 for RedHat Linux
Advanced Server 2.1 - Part No. B10499-01