January 2012
Authors:
Efraín Sánchez, Platform Technology Manager, Oracle Server Technologies, PTS
Contributors / Reviewers:
André Sousa, Senior Technologist, Oracle Server Technologies, PTS
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
www.oracle.com

Oracle Corporation provides the software that powers the Internet. Oracle is a registered trademark of Oracle Corporation. Various product and service names referenced herein may be trademarks of Oracle Corporation. All other product and service names mentioned may be trademarks of their respective owners.

Copyright 2011 Oracle Corporation
All rights reserved.
Some parts of this workshop are based on the article "Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI" by Jeffrey Hunter.
2. The oracle-validated package verifies and sets system parameters based on the configuration recommendations for Oracle Linux. The files it updates are:

/etc/sysctl.conf
/etc/security/limits.conf
/etc/modprobe.conf
/boot/grub/menu.lst

The package modifies module parameters and re-inserts them, and it also installs any packages required by Oracle Database:

yum install oracle-validated

It is recommended that you also install these packages for compatibility with previous versions:

yum install libXp-devel openmotif22 openmotif

3. Install the Automatic Storage Management (ASM) packages:

yum install oracleasm-support oracleasm-2.6.18-274.el5

4. Clean all cached files from any enabled repository. It is useful to run this from time to time to make sure nothing is using unnecessary space in /var/cache/yum:

yum clean all

Eject the CD-ROM and disable the ISO image so that the OS installation does not boot on the next reboot. Optionally, you can configure the public yum repository to install new updates in the future; skip this step for the workshop:

5. Disable the current local-cdrom repository by changing enabled=1 to enabled=0.

6. Download and install the Oracle Linux 5 repo file on your system:

# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo

7. Enable the [ol5_u7_base] repository in the yum configuration file by changing enabled=0 to enabled=1 in that section.

8. To update your system, use the following yum command:

# yum update
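For reference, after the edit the enabled section of the downloaded repo file should look roughly like the following. This is a sketch: the exact name, baseurl, and gpgkey values come from the file you downloaded, so treat the ones shown here as placeholders.

[ol5_u7_base]
name=Oracle Linux $releasever Update 7 base ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL5/7/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el5
gpgcheck=1
enabled=1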
After the /etc/hosts file is configured on db1, copy the file to the other node(s) (db2) using scp. You will be prompted for the root password of the remote node(s). For example:

scp /etc/hosts db2:/etc/hosts

As root, verify the network configuration by pinging db1 from db2 and vice versa. Run the following commands on each node:

ping -c 1 db1
ping -c 1 db2
ping -c 1 db1-priv
ping -c 1 db2-priv
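For reference, a minimal /etc/hosts covering these names might look like the following. The host addresses shown are placeholders chosen to match the workshop's 10.0.5.x public and 10.0.4.x private networks; substitute your own. The SCAN name is deliberately not listed here, since it is resolved through DNS later in this section.

127.0.0.1    localhost.localdomain localhost

# Public network (eth0)
10.0.5.11    db1.local.com        db1
10.0.5.12    db2.local.com        db2

# Private interconnect (eth1)
10.0.4.11    db1-priv.local.com   db1-priv
10.0.4.12    db2-priv.local.com   db2-priv

# Virtual IPs (used by the clusterware once it is running)
10.0.5.13    db1-vip.local.com    db1-vip
10.0.5.14    db2-vip.local.com    db2-vip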
Note that you will not be able to ping the virtual IPs (db1-vip, etc.) until after the clusterware is installed, up, and running. Check that no gateway is defined for the private interconnect. If you find any problems, run the network configuration program as the root user:

/usr/bin/system-config-network

Verify the MTU size for the private network interface. To set the current MTU size:

ifconfig eth1 mtu 1500

To make this change permanent, add MTU=1500 at the end of the eth1 configuration file:

cat >> /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
MTU=1500
EOF

Execute the same command on the second node.

Configure DNS name resolution:

cat > /etc/resolv.conf <<EOF
search local.com
options timeout:1
nameserver 10.0.5.254
EOF

Execute the following command to test DNS availability:

nslookup db-cluster-scan
Server:         10.0.5.254
Address:        10.0.5.254#53

Name:    db-cluster-scan.local.com
Address: 10.0.5.20

Remember to perform these actions on both Oracle RAC nodes.
1.4.- Configure Cluster Time Synchronization Service (CTSS) and Hangcheck Timer
If the Network Time Protocol (NTP) service is not available or not properly configured, you can use the Cluster Time Synchronization Service (CTSS) to provide synchronization in the cluster, but first you need to deconfigure and deinstall the current NTP configuration. To deactivate the NTP service, you must stop the existing ntpd service, disable it in the initialization sequences, and remove the ntp.conf file. To complete these steps on Oracle Enterprise Linux, run the following commands as the root user on both Oracle RAC nodes:

/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file (it maintains the pid of the NTP daemon):

rm /var/run/ntpd.pid
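After the Grid Infrastructure installation later in this guide, you can confirm that CTSS took over time synchronization in active mode. Run the following as the grid user and expect output similar to this:

[grid@db1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0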
Description                                    OS Group Name

Oracle Automatic Storage Management Group      asmadmin
ASM Database Administrator Group               asmdba
ASM Operator Group                             asmoper
Database Administrator                         dba
Database Operator                              oper
As root on both db1 and db2, create the required groups and the grid and oracle users:

/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/groupadd oper
/usr/sbin/groupadd asmadmin
/usr/sbin/groupadd asmdba
/usr/sbin/groupadd asmoper

The following commands create the grid and oracle users and their home directories, with oinstall as the default group and the appropriate secondary groups. The default shell for each user is bash; the useradd man page provides additional details on the command:

useradd -g oinstall -G asmadmin -m -s /bin/bash -d /home/grid -r grid
usermod -g oinstall -G asmadmin,asmdba,asmoper,dba grid
useradd -g oinstall -G dba -m -s /bin/bash -d /home/oracle -r oracle
usermod -g oinstall -G dba,asmadmin,asmdba -s /bin/bash oracle

Set the password for the oracle and grid accounts; use welcome1:

passwd oracle
Changing password for user oracle.
New UNIX password: <enter password>
Retype new UNIX password: <enter password>
passwd: all authentication tokens updated successfully.
passwd grid

Verify that the attributes of the oracle and grid users are identical on both db1 and db2:

id oracle
id grid
The command output should be as follows:
[root@db1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54324(asmadmin),54325(asmdba)

[root@db1 ~]# id grid
uid=102(grid) gid=54321(oinstall) groups=54321(oinstall),54324(asmadmin),54325(asmdba),54326(asmoper)
Enable xhost permissions in case you want to log in as root and switch to the oracle or grid user:

xhost +

Log in (or switch) as the oracle OS user and edit the .bash_profile file with the following:
umask 022
if [ -t 0 ]; then
   stty intr ^C
fi
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/rac
#export ORACLE_SID=<your sid>
export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm
PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
Log in (or switch) as the grid user and edit .bash_profile with the following:
umask 022
if [ -t 0 ]; then
   stty intr ^C
fi
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/grid/11.2.0/infra
#export ORACLE_SID=<your sid>
export CV_NODE_ALL=db1,db2
export CVUQDISK_GRP=oinstall
export ORACLE_PATH=/u01/app/oracle/common/oracle/sql
export ORACLE_TERM=xterm
PATH=${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
Check the connections with the following commands on both nodes. Execute each line one at a time, and choose to permanently add the host to the list of known hosts:

ssh db1 date
ssh db2 date
ssh db1-priv date
ssh db2-priv date
Try the next line to see if everything works:

ssh db1 date; ssh db2 date; ssh db1-priv date; ssh db2-priv date

Execute the same procedure as the grid user.
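If any of the checks above prompt for a password, user equivalence has not been configured yet. A minimal sketch, assuming OpenSSH's ssh-keygen and ssh-copy-id are available (as they are on Enterprise Linux 5); run as the oracle user, then repeat as grid:

# Generate an RSA key pair; accept the defaults and an empty passphrase
ssh-keygen -t rsa
# Copy the public key to every node, including the local one
ssh-copy-id oracle@db1
ssh-copy-id oracle@db2

Note that the 11gR2 OUI can also configure SSH connectivity between the nodes for you during installation.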
Start the iSCSI services:

chmod a+x /usr/local/bin/iscsidev
chkconfig iscsid on
service iscsid start
setsebool -P iscsid_disable_trans=1
iscsiadm -m discovery -t sendtargets -p nas01
service iscsi restart
Display the running sessions:

iscsiadm -m session

tcp: [1] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk06
tcp: [2] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk04
tcp: [3] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk02
tcp: [4] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk07
tcp: [5] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk09
tcp: [6] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk11
tcp: [7] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk12
tcp: [8] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk10
tcp: [9] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk05
tcp: [10] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk08
tcp: [11] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk03
tcp: [12] 10.0.4.21:3260,1 iqn.oracle.com:nas01.disk01
Check that the /dev/iscsi links are created; notice the order assigned:

ls -l /dev/iscsi
First Disk:

[root@db1 ~]# fdisk /dev/iscsi/nas01.disk01.p
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1): <press enter>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009): +100M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (49-1009, default 49): <press enter>
Using default value 49
Last cylinder or +size or +sizeM or +sizeK (49-1009, default 1009): <press enter>
Using default value 1009

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

A method for cloning a partition table in Linux is to use sfdisk; we are going to apply the same configuration to all the remaining disks:

sfdisk -d /dev/iscsi/nas01.disk01.p > disk01part.txt

sfdisk /dev/iscsi/nas01.disk02.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk03.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk04.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk05.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk06.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk07.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk08.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk09.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk10.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk11.p < disk01part.txt
sfdisk /dev/iscsi/nas01.disk12.p < disk01part.txt
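To verify that a partition table was cloned correctly, you can dump any of the target disks and compare the result against disk01part.txt; for example:

sfdisk -d /dev/iscsi/nas01.disk02.p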
Initialize all block devices with the following commands from db1:

dd if=/dev/zero of=/dev/iscsi/nas01.disk01.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk01.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk02.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk02.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk03.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk03.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk04.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk04.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk05.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk05.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk06.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk06.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk07.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk07.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk08.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk08.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk09.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk09.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk10.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk10.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk11.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk11.p2 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk12.p1 bs=1000k count=99
dd if=/dev/zero of=/dev/iscsi/nas01.disk12.p2 bs=1000k count=99

You'll need to propagate the changes on node 2 by executing:

For a SAN configuration:

partprobe

For an iSCSI configuration (VirtualBox):

service iscsi restart
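The same initialization can also be written as a loop; a minimal sketch, assuming bash and the /dev/iscsi device names used above:

# Zero the first 99 MB of both partitions on each of the 12 disks
for d in $(seq -w 1 12); do
  for p in 1 2; do
    dd if=/dev/zero of=/dev/iscsi/nas01.disk${d}.p${p} bs=1000k count=99
  done
done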
Every disk that ASMLib is going to access needs to be made available. This is accomplished by creating the ASM disks, as root on db1:
/etc/init.d/oracleasm createdisk NAS01_GRID01 /dev/iscsi/nas01.disk01.p1
/etc/init.d/oracleasm createdisk NAS01_GRID02 /dev/iscsi/nas01.disk02.p1
/etc/init.d/oracleasm createdisk NAS01_GRID03 /dev/iscsi/nas01.disk03.p1
/etc/init.d/oracleasm createdisk NAS01_GRID04 /dev/iscsi/nas01.disk04.p1
/etc/init.d/oracleasm createdisk NAS01_GRID05 /dev/iscsi/nas01.disk05.p1
/etc/init.d/oracleasm createdisk NAS01_GRID06 /dev/iscsi/nas01.disk06.p1
/etc/init.d/oracleasm createdisk NAS01_GRID07 /dev/iscsi/nas01.disk07.p1
/etc/init.d/oracleasm createdisk NAS01_GRID08 /dev/iscsi/nas01.disk08.p1
/etc/init.d/oracleasm createdisk NAS01_GRID09 /dev/iscsi/nas01.disk09.p1
/etc/init.d/oracleasm createdisk NAS01_GRID10 /dev/iscsi/nas01.disk10.p1
/etc/init.d/oracleasm createdisk NAS01_GRID11 /dev/iscsi/nas01.disk11.p1
/etc/init.d/oracleasm createdisk NAS01_GRID12 /dev/iscsi/nas01.disk12.p1
/etc/init.d/oracleasm createdisk NAS01_DATA01 /dev/iscsi/nas01.disk01.p2
/etc/init.d/oracleasm createdisk NAS01_DATA02 /dev/iscsi/nas01.disk02.p2
/etc/init.d/oracleasm createdisk NAS01_DATA03 /dev/iscsi/nas01.disk03.p2
/etc/init.d/oracleasm createdisk NAS01_DATA04 /dev/iscsi/nas01.disk04.p2
/etc/init.d/oracleasm createdisk NAS01_DATA05 /dev/iscsi/nas01.disk05.p2
/etc/init.d/oracleasm createdisk NAS01_DATA06 /dev/iscsi/nas01.disk06.p2
/etc/init.d/oracleasm createdisk NAS01_DATA07 /dev/iscsi/nas01.disk07.p2
/etc/init.d/oracleasm createdisk NAS01_DATA08 /dev/iscsi/nas01.disk08.p2
/etc/init.d/oracleasm createdisk NAS01_DATA09 /dev/iscsi/nas01.disk09.p2
/etc/init.d/oracleasm createdisk NAS01_DATA10 /dev/iscsi/nas01.disk10.p2
/etc/init.d/oracleasm createdisk NAS01_DATA11 /dev/iscsi/nas01.disk11.p2
/etc/init.d/oracleasm createdisk NAS01_DATA12 /dev/iscsi/nas01.disk12.p2
When a disk is marked with ASMLib, the other nodes have to be refreshed; run the 'scandisks' option on db2:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks                              [  OK  ]
Because we are going to use ASMLib support, we no longer need to assign permissions to the block devices at reboot in the /etc/rc.local file; ASMLib takes care of that.
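To confirm that both nodes see the same set of ASM disks, list the volumes ASMLib has stamped on each node (output abbreviated):

# /etc/init.d/oracleasm listdisks
NAS01_DATA01
NAS01_DATA02
...
NAS01_GRID11
NAS01_GRID12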
2.1 Install 11g Grid Infrastructure, formerly 10g Cluster Ready Services (CRS)
The installer needs to be run from one node in the cluster under an X environment. Run the following steps in VNC (or another X client) on only the first node in the cluster, as the grid user.

Review the .bash_profile configuration:

more ~/.bash_profile

Run the Oracle Universal Installer:

/install/11gR2/grid/runInstaller
Screen Name: Select Installation Option
Response:    Select "Install and Configure Grid Infrastructure for a Cluster". Click Next.

Screen Name: Select Installation Type
Response:    Select "Advanced Installation". Click Next.

Screen Name: Select Product Languages
Response:    Click Next.

Screen Name: Grid Plug and Play Information
Response:    Cluster Name: db-cluster
             SCAN Name: db-cluster-scan
             SCAN Port: 1521
             Only un-check the option to "Configure GNS". Click Next.
Click the "Add" button to add "db2" and its virtual IP address "db2-vip", Click Next Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:
Select "Automatic Storage Management (ASM)", Click Next Change Discovery Path to /dev/oracleasm/disks/* Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the values in the following values:
Disk Group: GRID Name Redundancy: External Redundancy Disks: NAS01_GRID* Click Next In a production environment is always recommended to use at least Normal Redundancy
Screen Name: Specify ASM Password
Response:    For the purpose of this workshop, choose "Use same passwords for these accounts". Click Next.

Screen Name: Failure Isolation Support
Response:    Select "Do not use Intelligent Platform Management Interface (IPMI)". Click Next.

Screen Name: Privileged Operating System Groups
Response:    Make any changes necessary to match these values:
                 OSDBA for ASM: asmdba
                 OSOPER for ASM: asmoper
                 OSASM: asmadmin
             Click Next.

Screen Name: Specify Installation Location
Response:    Review the default values; they are preloaded from the environment variables we already set for the OS user. Click Next.
Screen Name: Create Inventory
Response:    Inventory Directory: /u01/app/oraInventory
             oraInventory Group Name: oinstall
             Click Next.

Screen Name: Prerequisite Checks
Response:    If OUI detects an incomplete task that is marked "fixable", you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button. The fixup script is generated during installation; you will be prompted to run the script as root in a separate terminal session. Ignore the Device Checks for ASM error by selecting the "Ignore All" checkbox. Click Next.

Screen Name: Summary
Response:    Click Finish to start the installation.

Screen Name: Setup
Response:    The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Run the orainstRoot.sh script on both nodes in the RAC cluster:

[root@db1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@db2 ~]# /u01/app/oraInventory/orainstRoot.sh

Within the same new console window on both Oracle RAC nodes (starting with the node you are performing the install from), stay logged in as the root user account. Run the root.sh script on both nodes, one at a time, starting with the node you are performing the install from:

[root@db1 ~]# /u01/app/grid/11.2.0/infra/root.sh
[root@db2 ~]# /u01/app/grid/11.2.0/infra/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following, which signifies a successful install:

...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

Screen Name: Finish
Response:    At the end of the installation, click the [Close] button to exit the OUI.
Install verification
The Cluster Verification Utility (cluvfy) installed with the clusterware can be used to verify the CRS installation. Run it as the grid user; to check a single node, replace "all" with the node name, for example db1:

cluvfy stage -post crsinst -n all

Reboot the server; in the following section we'll execute some commands to make sure all services started successfully.
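After the reboot, a quick way to confirm that the cluster stack came back on every node is the following; run as the grid user and expect output similar to this (abbreviated):

[grid@db1 ~]$ crsctl check cluster -all
**************************************************************
db1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
db2:
...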
Troubleshooting: If something goes wrong when executing root.sh, you can review the log and repair the error; but before executing the script again, de-configure the node and then re-execute root.sh. Don't execute this if you finished the configuration correctly:

<oracle_home>/crs/install/rootcrs.pl -deconfig -force
Check Cluster Nodes:

[grid@db1 ~]$ olsnodes -n
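The output should list each cluster node with its assigned node number, similar to:

db1     1
db2     2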
Check the Oracle TNS listener process on both nodes:

[grid@db1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

[grid@db2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER
Another method is to use the command:

[grid@db1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): db1
Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

[grid@db1 ~]$ srvctl status asm -a
ASM is running on db1,db2
ASM is enabled.
Note: To manage Oracle ASM or Oracle Net 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home). Once we install Oracle Real Application Clusters (the Oracle Database software), we cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home.
Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.

Backing up the voting disks is no longer required in Oracle Clusterware 11g release 2. The voting disk data is automatically backed up in the OCR as part of any configuration change, and is automatically restored to any voting disk that is added. To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR), refer to the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).
Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, the installer updates the contents of the existing root.sh script during the installation. If you ever need information contained in the original root.sh script, you can recover it from the copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@db1 ~]# cd /u01/app/grid/11.2.0/infra
[root@db1 infra]# cp root.sh root.sh.db1

[root@db2 ~]# cd /u01/app/grid/11.2.0/infra
[root@db2 infra]# cp root.sh root.sh.db2

In order for JDBC Fast Connection Failover to work, you must start the Global Service Daemon (GSD) for the first time on each node as the root user; this step is optional:

[root@db1 ~]# /u01/app/grid/11.2.0/infra/bin/gsdctl start

The next time you reboot the server or restart the cluster services, the GSD service will start automatically.
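You can verify that the daemon is up with its status option; expect output similar to the following:

[root@db1 ~]# /u01/app/grid/11.2.0/infra/bin/gsdctl stat
GSD is running on the local node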
Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@db1 ~]$ asmca &
Screen Name: Disk Groups
Response:    From the "Disk Groups" tab, click the "Create" button. The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide. If necessary, change the Disk Discovery Path to /dev/oracleasm/disks/*

             When creating the database ASM disk group, use "DATA" for the "Disk Group Name". In the "Redundancy" section, choose "External Redundancy" (for production, at least Normal Redundancy is recommended). Finally, check all the remaining ASMLib volumes in the "Select Member Disks" section.

             After verifying that all values in this dialog are correct, click the [OK] button.

             After creating the first ASM disk group, you will be returned to the initial dialog; if necessary, you can create additional disk groups. Exit the ASM Configuration Assistant by clicking the [Exit] button.
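You can confirm the new disk group from the command line with asmcmd, run as the grid user; the output should list both the GRID and DATA disk groups in the MOUNTED state:

[grid@db1 ~]$ asmcmd lsdg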
Congratulations, you have finished the first installation stage. See you tomorrow for the next lab.