
How to Configure Grid Control Agents to Monitor Virtual Hostname in HA environments [ID 406014.1]

Modified: 04-SEP-2010    Type: WHITE PAPER    Status: PUBLISHED

In this Document
  Abstract
  Document History
  How to Configure Grid Control Agents to Monitor Virtual Hostname in HA environments
  Overview and Requirements
  Installation and Configuration
  Summary
  References

Applies to:
Enterprise Manager Grid Control - Version: 10.2.0.1 to 11.1.0.1 - Release: 10.2 to 11.1
Information in this document applies to any platform.

Abstract
Scope and Application: Grid Control 10.2.0.1 and later on all Unix platforms; for Windows, see Note 464191.1. This document provides a general reference for Grid Control administrators on configuring 10g Grid Control agents in Cold Failover Cluster (CFC) environments.

Document History
Last Updated: 04-SEP-2010    Expire Date: 03-SEP-2010

How to Configure Grid Control Agents to Monitor Virtual Hostname in HA environments

Overview and Requirements
In order for a Grid Control agent to fail over to a different host, the following conditions must be met:

1. Installation must be done using a virtual hostname associated with a unique IP address.
2. The virtual hostname used for the group must be used by every service that runs inside this virtual group. For example, listeners, HTTP servers, iAS, etc. must use the virtual hostname.
3. Install on a shared disk/volume that holds the binaries, configuration, and runtime data.**
4. Configuration data and metadata must also fail over to the surviving node.
5. The inventory location must fail over to the surviving node.
6. The software owner and timezone parameters must be the same on all cluster member nodes that will host this agent.

** Note: Any reference to shared storage can also apply to non-shared failover volumes, which can be mounted on the active host after failover.

An agent must be installed on each physical node in the cluster to monitor the local services.

As an alternate solution for CFC deployments, Grid Control 10.2.0.4 offers a "relocate_target" feature via EMCLI, where a single physical agent monitors all virtual services hosted by the physical cluster member. See Note 577443.1 for details.

Installation and Configuration

A. Setup the Virtual Hostname/Virtual IP Address

The virtual hostname must be static and resolvable consistently on the network. All nodes participating in the setup must resolve the virtual IP address to the same hostname. Standard TCP tools such as "nslookup" and "traceroute" can be used to verify the hostname. Validate using the commands listed below:

nslookup <virtual hostname>   -> returns the virtual IP address and fully qualified hostname
nslookup <virtual IP>         -> returns the virtual IP address and fully qualified hostname

Make sure to try these commands on every node of the cluster, and verify that the correct information is returned.
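The two lookups above can be wrapped in a small check script. This is a sketch only: the hostname and IP are placeholders, and the awk patterns assume the common Unix nslookup output format ("Name:" for forward lookups, "name =" for reverse lookups), which varies slightly by platform.

```shell
#!/bin/sh
# Placeholders -- substitute your own virtual hostname and virtual IP.
VHOST=lxdb.acme.com
VIP=138.1.1.100

# Forward lookup: extract the resolved name.
name_fwd=$(nslookup "$VHOST" | awk '/^Name:/ {print $2; exit}')
# Reverse lookup: extract the name (often printed with a trailing dot).
name_rev=$(nslookup "$VIP" | awk '/name =/ {print $NF; exit}')

# Strip any trailing dot before comparing.
if [ "$name_fwd" = "${name_rev%.}" ]; then
    echo "OK: $VHOST resolves consistently on this node"
else
    echo "MISMATCH: forward=$name_fwd reverse=$name_rev" >&2
    exit 1
fi
```

Run the script on every cluster node; each node must report the same result.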

B. Setup Shared Storage

This can be storage managed by the clusterware that is in use, or any shared file system (FS) volume, as long as it is not an unsupported type such as OCFS V1. The most common shared FS is NFS. You can also use non-shareable volumes that are mounted upon failover on the succeeding host (as is the case in Windows environments).
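For the common NFS case, a mount might look like the sketch below. The server name, export path, mount point, and options are all illustrative; use the mount options your platform and Oracle documentation recommend for software homes.

```shell
# Hypothetical NFS server and export -- adjust for your environment.
mkdir -p /app/oracle/share1
mount -t nfs -o rw,hard,rsize=32768,wsize=32768 \
      nfshost:/export/oracle/share1 /app/oracle/share1

# Example /etc/fstab entry to persist the mount:
# nfshost:/export/oracle/share1  /app/oracle/share1  nfs  rw,hard  0 0
```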

C. Setup the Environment

Before you launch the installer, certain environment variables need to be verified. Each of these variables must be set identically for the account installing the software on ALL machines participating in the cluster:

- OS variable TZ (all cluster member nodes must be time synchronized): the timezone setting. It is recommended to unset this variable prior to installation.
- PERL variables: variables like PERL5LIB should also be unset, to prevent the installer from picking up the wrong set of PERL libraries.
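In practice, the pre-installation cleanup amounts to a few unsets in the installing user's session, for example:

```shell
# Run as the software owner on each node before launching the installer.
# Unset timezone and PERL variables so the installer does not inherit
# node-specific settings.
unset TZ
unset PERL5LIB

# Verify nothing relevant lingers in the environment.
env | grep -iE '^(TZ|PERL5LIB)=' || echo "environment is clean"
```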

D. Synchronize OS User IDs

The user and group of the software owner should be defined identically on all nodes of the cluster. This can be verified using the id command:

$ id -a
uid=550(oracle) gid=50(oinstall) groups=501(dba)

If you use a different user for each software home, the agent software owner must be a member of the target's primary group.
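The cross-node check can be scripted; this sketch assumes passwordless ssh between nodes, and the node names and user name are placeholders.

```shell
#!/bin/sh
# Placeholders -- substitute your cluster node names and software owner.
NODES="node1 node2"
OS_USER=oracle

# Print the uid/gid/groups for the software owner on every node;
# all lines must be identical.
for n in $NODES; do
    printf '%s: ' "$n"
    ssh "$n" id "$OS_USER"
done
```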

E. Setup Inventory

Each failover group or virtual hostname package should have its own inventory. Use the same inventory whenever you install an agent inside the same virtual group. This is accomplished by pointing the installer to the group's oraInst.loc file using the -invPtrLoc parameter.

F. Install the Software

1. Run the installer using the inventory location file oraInst.loc, and specify the hostname of the virtual group. For example:
runInstaller -invPtrLoc /app/oracle/share1/oraInst.loc ORACLE_HOSTNAME=lxdb.acme.com -debug

(The -debug parameter is optional. For 11g agents, you must use a response file for silent installations.)
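The file passed to -invPtrLoc is a plain two-line pointer. A hypothetical example for an inventory kept on the shared volume (paths and group name are illustrative):

```text
# /app/oracle/share1/oraInst.loc -- example contents
inventory_loc=/app/oracle/share1/oraInventory
inst_group=oinstall
```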

Response file modifications:

SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
INSTALL_UPDATES_SELECTION=skip
ORACLE_AGENT_HOME_LOCATION=/u01/oracle/
b_silentInstall=true
OMS_HOST=xxxxxx.xxxxxx.com      (the Grid Control OMS server)
OMS_PORT=4889
AGENT_REGISTRATION_PASSWORD=xxxxxxx
FROM_LOCATION=/u01/oracle/stage/aix/agent/stage/products.xml

2. Continue the rest of the installation normally.

Note: Agents will be installed using the default port 3872. By default, each agent is configured to listen on all NICs. This may cause each subsequent agent startup to fail. To avoid this problem, edit the local agent's emd.properties file and change:

AgentListenOnAllNICs=TRUE
to
AgentListenOnAllNICs=FALSE

Then bounce the local agent. You can set this value for each additional agent to avoid startup failures.
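The edit-and-bounce step might look like the sketch below. The agent home path is a placeholder; emd.properties lives under sysman/config in a 10g agent home, and the in-place sed flag assumes GNU sed (on other Unixes, edit the file manually or redirect to a temp file).

```shell
#!/bin/sh
# Placeholder -- substitute the local (physical-host) agent's home.
AGENT_HOME=/u01/oracle/agent10g

# Restrict the local agent to its own NIC.
sed -i 's/^AgentListenOnAllNICs=TRUE/AgentListenOnAllNICs=FALSE/' \
    "$AGENT_HOME/sysman/config/emd.properties"

# Bounce the local agent so the change takes effect.
"$AGENT_HOME/bin/emctl" stop agent
"$AGENT_HOME/bin/emctl" start agent
```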

G. Startup of Services

Ensure that you start your services in the proper order.

Note: ORACLE_HOSTNAME must be set when running any emctl command to control the virtual agent.

On initial startup:
- Establish the virtual IP address on the active node.
- Start all services normally, except the agent.
- Unset all environment variables such as ORACLE_HOME, *LIB*, ORACLE_SID, PERL5LIB, etc.
- cd to the agent home /bin directory.
- Start the agent.

In case of failover:
- Establish the virtual IP address on the failover node.
- Start all services normally, except the agent.
- Unset all environment variables such as ORACLE_HOME, *LIB*, ORACLE_SID, PERL5LIB, etc.
- cd to the agent home /bin directory.
- Start the agent.
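The agent portion of that startup sequence can be sketched as a script run on whichever node currently owns the virtual group. The hostname and agent home are placeholders, and the script assumes the cluster software has already plumbed the virtual IP and the other services are up.

```shell
#!/bin/sh
# Placeholders -- substitute your virtual hostname and shared agent home.
VHOST=lxdb.acme.com
AGENT_HOME=/app/oracle/share1/agent10g

# Clear inherited environment so the agent picks up its own settings.
unset ORACLE_HOME ORACLE_SID PERL5LIB LD_LIBRARY_PATH

# ORACLE_HOSTNAME must be set for any emctl command against the
# virtual agent.
ORACLE_HOSTNAME=$VHOST
export ORACLE_HOSTNAME

cd "$AGENT_HOME/bin" && ./emctl start agent
```

The same script serves after a failover, since the shared agent home and virtual hostname follow the group to the surviving node.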

Summary
If you have a large number of virtual groups to monitor on each cluster node, consider using only one agent on each cluster node (physical host) and using EMCLI to relocate targets as they fail over to other cluster nodes. See Note 577443.1, How to Setup and Configure Target Relocate using EMCLI.

References
NOTE:330072.1 - How To Configure Grid Control Components for High Availability
NOTE:405642.1 - How to Configure Grid Control OMS in Active/Passive CFC Environments failover / HA
NOTE:405979.1 - How to Configure Grid Control Repository in Active/Passive HA environments
NOTE:464191.1 - How to Configure Grid Control Agents in Windows HA - Failover Cluster Environments
NOTE:549270.1 - How to configure Grid Control 10.2.0.4 Management Servers behind a Server Load Balancer (SLB)
NOTE:577443.1 - How to Setup and Configure Target Relocate using EMCLI
Enterprise Manager Maximum Availability Architecture (MAA) Best Practices at http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
For Fail Safe deployments, please see http://download.oracle.com/docs/html/B25003_01/toc.htm#BABCHGJD
Advanced Config Guide: http://download.oracle.com/docs/cd/B16240_01/doc/em.102/e10954/actpass_env.htm#CHDHAHCB

Related

Products

Enterprise Management > Enterprise Manager Consoles, Packs, and Plugins > Enterprise Manager Grid Control > Enterprise Manager Grid Control

Keywords FAILOVER; CLUSTERWARE; GRID CONTROL; VIRTUAL HOST


