White Paper
Case 2: Adding back the node

In the example below, the host "ne1-int1addb-002" was re-imaged due to a corrupt hard disk and is being added back to the cluster alongside the existing node "ne1-int1addb-001".

a. Verify that OS-level configurations are accurate and IPs are configured correctly.

b. Prepare the config file:

   vi config.env
   export CLUSTER_CURRENT_NODES=1                           # Current number of nodes in the cluster
   export CLUSTER_EXISTING_NODES=ne1-int1addb-001           # Current node names in the cluster; if more than one node, use comma (,) separated node names
   O_HOME=/home/oracle/product/11.2                         # Oracle Home location
   G_HOME=/home/oragrid/product/11.2                        # Grid Home location
   export CLUSTER_NEW_NODES=ne1-int1addb-002                # New node to be added/re-added to the cluster
   export CLUSTER_NEW_VIRTUAL_HOSTNAMES=ne1-int1addb-002-v  # VIP hostname for the new node
   export TIME_ZONE=PDT                                     # Time zone

c. Set up ssh oracle user equivalence between all nodes.

d. Log in to all the cluster nodes and run pre_step.sh.

   ssh to all nodes, including the re-imaged node, and run the pre_step.sh script as the root user, e.g.:

   ssh ne1-int1addb-00[1-2].adx.ne1.yahoo.com
   -bash-3.2$ sudo su - root
   Password:
   [root@ne1-int1addb-001 ~]# cd <script_location>
   [root@ne1-int1addb-001 ~]# sh pre_step.sh
   -bash-3.2$ sudo su - root
   Password:
   [root@ne1-int1addb-002 ~]# cd <script_location>
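Steps b and d above can be sketched as a single helper script that sources config.env and derives the full node list on which pre_step.sh must run. This is a minimal sketch, not part of the delivered toolset: the config values are copied from the example above, the ssh loop is only echoed (it assumes the user equivalence from step c), and the pre_step.sh path is a placeholder.

```shell
#!/bin/sh
# Sketch: build config.env (values from the example above), then derive
# the combined node list (existing + new) and show the per-node command.

cat > config.env <<'EOF'
export CLUSTER_CURRENT_NODES=1
export CLUSTER_EXISTING_NODES=ne1-int1addb-001
export CLUSTER_NEW_NODES=ne1-int1addb-002
EOF

# Source the config so the CLUSTER_* variables are available.
. ./config.env

# Combined, comma-separated list of all nodes to prepare.
ALL_NODES="${CLUSTER_EXISTING_NODES},${CLUSTER_NEW_NODES}"
echo "Nodes to prepare: ${ALL_NODES}"

# Loop over each node; in a real run this would ssh in and execute
# pre_step.sh as root (path is a placeholder). Echoed here as a dry run.
for node in $(echo "${ALL_NODES}" | tr ',' ' '); do
    echo "would run: ssh ${node} 'sudo sh /path/to/pre_step.sh'"
done
```

Keeping the node list in config.env means the same loop works unchanged whether one node or several are being added; CLUSTER_EXISTING_NODES and CLUSTER_NEW_NODES simply grow as comma-separated lists.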
Conclusion: The above procedures have been time-tested and were set up to reduce the human effort required for cluster node operations, whether adding a new node or re-adding an existing one, whenever the requirement comes up. By simplifying these complicated operations, we've seen significant improvement in how consistently the team follows the established standards.

Reference:
Scripts location: http://svn.corp.yahoo.com/view/yahoo/ysm/AdSys/DBs/trunk/dba/Oracle_add_node