
Build Oracle RAC 11.2.0.3 on Oracle Solaris 11 11/11 using Oracle VirtualBox


In this article you will have a look at how to use some Oracle VirtualBox features to build a two node Oracle 11gR2 11.2.0.3 RAC system on Oracle Solaris 11 11/11 x86-64. For information about a similar two node Oracle RAC 11.2 setup on Solaris 10 x86-64 using VirtualBox click here. The article emphasizes the Solaris 11 management options and configurations required to meet the Oracle 11.2.0.3 RAC installation prerequisites.

The following software will be used:

1. Oracle 11.2.0.3 for Solaris (x86-64) - patch 10404530. Download from MOS here.
2. Oracle Solaris 11 (x86-64). I used Oracle Solaris 11 11/11 Live Media for x86. Download from here.
3. Oracle VM VirtualBox 4.1.8. Download from here.

There will be two virtual machines, Sol1 and Sol2, each of them configured with:

4GB RAM.
A 160GB bootable disk.
NIC NAT: for access to the Internet.
NIC bridged: for the public interface in RAC (IP 192.168.2.21 on node sol1 and 192.168.2.22 on node sol2).
NIC bridged: for the private interface in RAC (IP 10.10.2.21 on node sol1 and 10.10.2.22 on node sol2).
NIC Host-Only: for FTP/SSH/telnet/SCP access from the host OS to the guest OS (IP 192.168.56.51 on node sol1 and 192.168.56.52 on node sol2).
Ten 10GB attached shared disks for the ASM storage.

Sol1 VM will run a Solaris 11 guest with hostname sol1. Sol2 VM will run a Solaris 11 guest with hostname sol2.

Note: To access the guest OS (Solaris) from the host (MS Windows Vista 64) via ssh/scp/telnet/ftp, a Host-Only adapter is required, and the corresponding IP on the guest OS should be within the IP subnet of the VirtualBox Host-Only adapter. On the host (Windows in this case) we have:
Ethernet adapter VirtualBox Host-Only Network:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : VirtualBox Host-Only Ethernet Adapter
   Physical Address. . . . . . . . . : 08-00-27-00-3C-B7
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 192.168.56.1(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :
   NetBIOS over Tcpip. . . . . . . . : Enabled
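Should you need to inspect or change the Host-Only adapter IP from the host side, VBoxManage can do it; a minimal sketch (the adapter name below is the VirtualBox default on Windows and may differ on your host):

REM List the Host-Only interfaces and their current IP configuration
VBoxManage list hostonlyifs

REM Keep the adapter at 192.168.56.1/24 so the guests' 192.168.56.x addresses stay reachable
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0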

Thus, the IP corresponding to the Host-Only NIC inside the guest (Solaris in this case) should be within the 192.168.56.* subnet. Should the VirtualBox Host-Only adapter IP change, make sure that the IP within the guest OS remains in the same subnet as the IP of the VirtualBox Host-Only adapter. Examples of accessing Solaris 11 from Cygwin on Windows using ssh/scp:
bash-3.2$ uname -a
CYGWIN_NT-6.0-WOW64 userpc 1.7.5(0.225/5/3) 2010-04-12 19:07 i686 Cygwin
bash-3.2$ ssh -X gjilevski@192.168.56.52
The authenticity of host '192.168.56.52 (192.168.56.52)' can't be established.
RSA key fingerprint is 1b:f8:c2:74:cf:29:4b:e8:0d:a6:d8:f6:d9:51:92:72.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.52' (RSA) to the list of known hosts.
Password:
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Warning: No xauth data; using fake authentication data for X11 forwarding.
Last login: Mon Feb 6 22:22:49 2012
Oracle Corporation      SunOS 5.11      11.0    November 2011
gjilevski@sol2:~$ uname -a
SunOS sol2 5.11 11.0 i86pc i386 i86pc
gjilevski@sol2:~$ ls /u01/sh.sh
/u01/sh.sh
gjilevski@sol2:~$ exit
logout
Connection to 192.168.56.52 closed.
bash-3.2$ scp gjilevski@192.168.56.52:/u01/sh.sh .
Password:
sh.sh                                   100%  682     0.7KB/s   00:00
bash-3.2$ uname -a
CYGWIN_NT-6.0-WOW64 userpc 1.7.5(0.225/5/3) 2010-04-12 19:07 i686 Cygwin
bash-3.2$

The same approach can be used for telnet or ftp access from the host OS if the ftp or telnet services are enabled on the guest Solaris OS. First you will create the first virtual machine, configure the network interfaces, install the Solaris software and meet the Oracle RAC installation prerequisites. You will then attach the shared disks to the first virtual machine and prepare the shared storage for ASM. Last, you will clone the boot disk of the first VM to create the second VM, attach the shared storage, and change the IPs and hostname. In this article you will have role separation with two different accounts: one (grid) for ASM storage management and the second (oracle) for the RDBMS installation. In order to support role separation you will create additional groups for grid (asmadmin, asmdba, asmoper, oinstall) and for oracle (dba, oinstall, oper). At the end you will install GI and RDBMS and create a two node RAC database. You will see how to fix the problems that were encountered during the installation.

1. Create the first VM Sol1


Oracle Solaris 11 was released in late 2011. For detailed information about installation options click here, or to access the Solaris 11 documentation click here. I am using Oracle Solaris 11 11/11 Live Media for x86. It performs a default installation using automatic network management. After the Solaris 11 installation, I will switch to manual network management and modify the IP addresses and hostnames. Select New and click Next. Enter the name of the VM (Sol1) and press Next.

Select 4096 MB for the RAM of the VM and press Next to continue.

Select create a new disk for the boot disk and press Next to continue.

Select Dynamically expanding storage and press Next to continue.

Select 160GB (not 16GB) and press Next to continue. (16GB is not sufficient to install Solaris and Oracle GI and RDBMS).

Press Next to continue.

Press Finish. This concludes the VM creation. After that, select the VM, click Settings and add the four NICs as specified: the first NAT, then bridged, bridged and Host-Only; a scripted alternative is sketched below. Create the disks to be used as shared.
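Before creating the disks, here is the NIC sketch mentioned above. This is a minimal VBoxManage alternative to the Settings dialog; the bridged adapter name is host-specific and shown only as a placeholder:

REM NIC 1: NAT for Internet access
VBoxManage modifyvm Sol1 --nic1 nat
REM NICs 2 and 3: bridged, for the RAC public and private interfaces
VBoxManage modifyvm Sol1 --nic2 bridged --bridgeadapter2 "Your Physical NIC Name"
VBoxManage modifyvm Sol1 --nic3 bridged --bridgeadapter3 "Your Physical NIC Name"
REM NIC 4: Host-Only, for host-to-guest ssh/scp/ftp/telnet
VBoxManage modifyvm Sol1 --nic4 hostonly --hostonlyadapter4 "VirtualBox Host-Only Ethernet Adapter"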
VBoxManage createhd --filename d:\vb\asm1.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm2.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm3.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm4.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm5.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm6.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm7.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm8.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm9.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename d:\vb\asm10.vdi --size 10240 --format VDI --variant Fixed

Attach the shared disks to the VM and mark them as shared. Note that for a disk to be shared it must be a fixed-size image.
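The attach commands below assume the VM already has a SATA controller named "SATA Controller". If the VM was created without one, a minimal sketch to add it first:

REM Add a SATA controller named "SATA Controller" to the Sol1 VM
VBoxManage storagectl Sol1 --name "SATA Controller" --add sata --controller IntelAhci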
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium d:\vb\asm1.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium d:\vb\asm2.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium d:\vb\asm3.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium d:\vb\asm4.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium d:\vb\asm5.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 6 --device 0 --type hdd --medium d:\vb\asm6.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 7 --device 0 --type hdd --medium d:\vb\asm7.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 8 --device 0 --type hdd --medium d:\vb\asm8.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 9 --device 0 --type hdd --medium d:\vb\asm9.vdi --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 10 --device 0 --type hdd --medium d:\vb\asm10.vdi --mtype shareable

VBoxManage modifyhd d:\vb\asm1.vdi --type shareable
VBoxManage modifyhd d:\vb\asm2.vdi --type shareable
VBoxManage modifyhd d:\vb\asm3.vdi --type shareable
VBoxManage modifyhd d:\vb\asm4.vdi --type shareable
VBoxManage modifyhd d:\vb\asm5.vdi --type shareable
VBoxManage modifyhd d:\vb\asm6.vdi --type shareable
VBoxManage modifyhd d:\vb\asm7.vdi --type shareable
VBoxManage modifyhd d:\vb\asm8.vdi --type shareable
VBoxManage modifyhd d:\vb\asm9.vdi --type shareable
VBoxManage modifyhd d:\vb\asm10.vdi --type shareable

The configuration will look like the image below.

Now that the shared disks are attached and the virtual NICs are specified, let's start the VM for the Solaris installation. Start the VM and specify the installation media containing the Solaris iso; the iso image file can be staged either in a directory on the host OS or on a DVD. In this case the iso media is in a Windows directory: press the yellow folder icon with the green arrow to select it and press Next to continue.

Note that although the image shown is for Solaris 10, you must use the Solaris 11 iso file; it works the same way.

Press Next to continue.

Press Finish to start. Once the system boots up, select Solaris 11 and press Return.

Select keyboard.

Select language.

Wait until the virtual OS is launched in memory. Note that Automatic Network is selected. In order to start the real installation click Install Oracle Solaris.

Press Next.

Select the 160GB disk. I did not partition the disk, but later added swap space for the OUI checks to succeed. The default swap is 1GB, and OUI asks for a minimum of 4GB in order to install 11.2.0.3 RAC.

Select your time zone.

Create an account with administrative privileges. Starting with Solaris 11, root is defined as a role and you need a user to log in; a role cannot be used to log in. Specify the hostname.

Review and press Install.

Wait for the installation to complete.

Reboot once installation completes.

Select Solaris 11.

Log in with the previously created user.

Select login session.

Finally, the Solaris 11 installation is complete.

2. Installing Guest Additions


After logging in as yourself, on the VirtualBox console select Devices->Install Guest Additions. The Guest Additions media gets mounted as a virtual CD. Select OK.

Here you have a chance to change the root password. Initially, use the default (solaris), which is expired and needs to be changed.

Reboot the Solaris guest VM.

3. Shared Folders configuration

Shared folders are configured as follows. VirtualBox allows you to mount folders from the host OS on the guest OS. If you are using VirtualBox 4.1 or later, you can add shared folders with automatic mounting and VirtualBox mounts them for you.

In case you are on a previous VirtualBox release, as the root user create a directory for each folder and use the following command to mount it:

pfexec mount -F vboxfs Solaris10 /whatever-you-want

In my case I created two directories, /software and /OracleVMServer, and mounted the shared folders as follows.

pfexec mount -F vboxfs software /software
pfexec mount -F vboxfs OracleVMServer /OracleVMServer

To create permanent mount points that persist across a reboot, add the following lines to the /etc/vfstab file.

software        -  /software        vboxfs  -  yes  -
OracleVMServer  -  /OracleVMServer  vboxfs  -  yes  -

4. Setting up the shared storage for ASM and swap

The default configured swap space was 1GB. For the Oracle 11.2 installation a minimum of 4GB is required. Using the following commands, drop and recreate the swap with the desired size of 5GB.

root@sol1:~# swap -d /dev/zvol/dsk/rpool/swap
root@sol1:~# zfs set volsize=5G rpool/swap
root@sol1:~# swap -a /dev/zvol/dsk/rpool/swap
root@sol1:~# swap -s
total: 259856k bytes allocated + 35808k reserved = 295664k used, 6669044k available
root@sol1:~# swap -l
swapfile                 dev    swaplo   blocks      free
/dev/zvol/dsk/rpool/swap 195,2       8 10485752  10485752
root@sol1:~#

There are ten shared disks attached to the Solaris VM, visible as c1t1d0, c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, c1t7d0, c1t8d0, c1t9d0 and c1t10d0. You need to format them and prepare them for ASM with permissions (0660) and ownership (grid:asmadmin). Each disk is formatted and a partition is created starting from cylinder 1. The available disks can be seen using the format command as shown below. Then for each disk you invoke fdisk and accept Solaris as the default partition type. Next you use the partition command, select partition 0, start it from cylinder 1, enter a size of 9.95GB (remember, 10GB was the original size) and print the partition table. You repeat the procedure for all ten shared disks. The flow of commands is as follows.
root@sol1:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 20883 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c1t1d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@1,0
       2. c1t2d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@2,0
       3. c1t3d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@3,0
       4. c1t4d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@4,0
       5. c1t5d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@5,0
       6. c1t6d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@6,0
       7. c1t7d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@7,0
       8. c1t8d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@8,0
       9. c1t9d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@9,0
      10. c1t10d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@a,0
Specify disk (enter its number): 4
selecting c1t4d0
[disk formatted]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> partition

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 9.95g
partition> print
Current partition table (unnamed):
Total disk cylinders available: 1302 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       1 - 1299        9.95GB    (1299/0/0) 20868435
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 1301        9.97GB    (1302/0/0) 20916630
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition> label
Ready to label disk, continue? y
partition> quit

In this case the whole disk c1t4d0 was partitioned as one partition and made available for ASM. Repeat the same steps for all remaining nine shared disks. Since slice 0 was used, you need to make the slices available to ASM by setting the permissions and ownership as shown below.

chown grid:asmadmin /dev/rdsk/c1t1d0s0
chmod 660 /dev/rdsk/c1t1d0s0
chown grid:asmadmin /dev/rdsk/c1t10d0s0
chmod 660 /dev/rdsk/c1t10d0s0
chown grid:asmadmin /dev/rdsk/c1t9d0s0
chmod 660 /dev/rdsk/c1t9d0s0
chown grid:asmadmin /dev/rdsk/c1t8d0s0
chmod 660 /dev/rdsk/c1t8d0s0
chown grid:asmadmin /dev/rdsk/c1t7d0s0
chmod 660 /dev/rdsk/c1t7d0s0
chown grid:asmadmin /dev/rdsk/c1t6d0s0
chmod 660 /dev/rdsk/c1t6d0s0
chown grid:asmadmin /dev/rdsk/c1t5d0s0
chmod 660 /dev/rdsk/c1t5d0s0
chown grid:asmadmin /dev/rdsk/c1t4d0s0
chmod 660 /dev/rdsk/c1t4d0s0
chown grid:asmadmin /dev/rdsk/c1t3d0s0
chmod 660 /dev/rdsk/c1t3d0s0
chown grid:asmadmin /dev/rdsk/c1t2d0s0
chmod 660 /dev/rdsk/c1t2d0s0
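Since the same two commands repeat for every slice, the following loop (run as root, assuming the ten device names above) is equivalent:

# Set ASM ownership and permissions on slice 0 of all ten shared disks
for i in 1 2 3 4 5 6 7 8 9 10; do
  chown grid:asmadmin /dev/rdsk/c1t${i}d0s0
  chmod 660 /dev/rdsk/c1t${i}d0s0
done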

For more information refer to Oracle documentation here.

5. Creating Solaris users and groups


Starting with 11gR2, Oracle enables role separation when installing Oracle GI and Oracle RDBMS. The idea is that Oracle GI will have a user grid as the ASM administrator and Oracle RDBMS will have a user oracle as the RDBMS administrator. Both users share the oinstall group so that Oracle GI and Oracle RDBMS share the same inventory; however, each administrator has different groups. The grid user is a member of asmadmin, asmdba and asmoper, and its primary group is oinstall. The oracle user is a member of dba, asmdba and oper, and its primary group is oinstall. Users and groups are created with the following commands.
groupadd -g 1000 oinstall
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1022 asmoper
groupadd -g 1031 dba
groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmoper,asmadmin,asmdba -d /export/home/grid -m grid
useradd -u 1101 -g oinstall -G oper,dba,asmdba -d /export/home/oracle -m oracle

User grid and user oracle separate the responsibilities between storage administrator and Oracle DBA.
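Before moving on, the memberships can be double-checked; a quick verification:

id -a grid      # expect primary group oinstall plus asmadmin, asmdba, asmoper
id -a oracle    # expect primary group oinstall plus dba, oper, asmdba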

6. Create the directories for ORACLE_BASE and ORACLE_HOME for Oracle GI and Oracle RDBMS
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
mkdir -p /u01/app/11.2.0/grid
chown grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

7. Set up Solaris kernel parameters


Create two projects for the Oracle GI and Oracle RDBMS users respectively, and set the shared memory parameters.
projadd -U grid -K "project.max-shm-memory=(priv,6g,deny)" user.grid
projmod -sK "project.max-sem-nsems=(priv,512,deny)" user.grid
projmod -sK "project.max-sem-ids=(priv,128,deny)" user.grid
projmod -sK "project.max-shm-ids=(priv,128,deny)" user.grid
projmod -sK "project.max-shm-memory=(priv,6g,deny)" user.grid
projadd -U oracle -K "project.max-shm-memory=(priv,6g,deny)" user.oracle
projmod -sK "project.max-sem-nsems=(priv,512,deny)" user.oracle
projmod -sK "project.max-sem-ids=(priv,128,deny)" user.oracle
projmod -sK "project.max-shm-ids=(priv,128,deny)" user.oracle
projmod -sK "project.max-shm-memory=(priv,6g,deny)" user.oracle

/usr/sbin/projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.oracle
/usr/sbin/projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.grid
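The resulting resource controls can be verified with the standard Solaris tools; for example:

projects -l user.grid user.oracle                       # list the two project definitions
prctl -n project.max-shm-memory -i project user.grid    # show the active shared memory limit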

If the max file descriptors are not set properly there might be errors starting OUI.

Set the TCP and UDP kernel parameters


/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
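Reading a parameter back confirms the setting took effect; for example:

/usr/sbin/ndd /dev/tcp tcp_smallest_anon_port    # should print 9000
/usr/sbin/ndd /dev/tcp tcp_largest_anon_port     # should print 65500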

Put the following lines in /etc/inittab for the TCP and UDP parameters to persist across reboots.

tm::sysinit:/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000 > /dev/console
tm::sysinit:/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500 > /dev/console
tm::sysinit:/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000 > /dev/console
tm::sysinit:/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500 > /dev/console

8. Automatic SSH configuration


Oracle recommends using OUI to set up ssh user equivalence so that users can connect across the nodes of the cluster without a password. To avoid errors while attaching the Oracle home when the remote node closes a connection prematurely, set LoginGraceTime 0 in the /etc/ssh/sshd_config file (see the sketch below). After changing the file, restart the ssh service:

svcadm restart ssh
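A minimal sketch of the edit, assuming the stock Solaris 11 sshd_config location:

# /etc/ssh/sshd_config
# Disable the login grace timeout so long-running OUI copy sessions are not dropped
LoginGraceTime 0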

9. Enable Core file creation


Make sure that core dumps are enabled. To check whether core dumps are enabled, use the coreadm command as follows.

# coreadm
     global core file pattern:
     global core file content: default
       init core file pattern: core
       init core file content: default
            global core dumps: disabled
       per-process core dumps: enabled
      global setid core dumps: disabled
 per-process setid core dumps: disabled
     global core dump logging: disabled

As the root user, create the following directory and enable core dumps as shown below.

mkdir -p /var/cores
coreadm -g /var/cores/%f.%n.%p.%t.core -e global -e global-setid -e log -d process -d proc-setid

Verify the result.

# coreadm
     global core file pattern: /var/cores/%f.%n.%p.%t.core
     global core file content: default
       init core file pattern: core
       init core file content: default
            global core dumps: enabled
       per-process core dumps: disabled
      global setid core dumps: enabled
 per-process setid core dumps: disabled
     global core dump logging: enabled

10. Network Time Protocol settings

You have two options for time synchronization: an operating-system-configured Network Time Protocol (NTP) daemon, or the Oracle Cluster Time Synchronization Service. Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. So there are two options.

A. Disable NTP and rely entirely on CTSS. As root, execute the command below:
/usr/sbin/svcadm disable ntp

B. Configure NTP and use it together with CTSS. As root, edit the /etc/inet/ntp.conf file to add "slewalways yes" and "disable pll" to the file. After you make these changes, restart the ntp service using the command:

/usr/sbin/svcadm restart ntp

NTP is disabled in the installation described in this article.

11. Create the following profiles

For user grid


umask 022
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
ORACLE_SID=+ASM1
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:/usr/local/bin:/usr/sbin:/usr/bin:/usr/openwin/bin:/usr/ucb:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
   stty intr ^C
fi

For user oracle


umask 022
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_SID=D11G
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:/usr/local/bin:/usr/sbin:/usr/bin:/usr/openwin/bin:/usr/ucb:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
   stty intr ^C
fi

12. Set up file descriptors

Edit the /etc/system to add the following parameters.


# Hard limit on file descriptors for a single process
set rlim_fd_max = 65536
# Soft limit on file descriptors for a single process
set rlim_fd_cur = 65536
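After a reboot, the new limit can be checked from a shell; for example:

prctl -n process.max-file-descriptor $$    # per-process file descriptor limit
ulimit -n                                  # soft limit seen by the current shell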

13. Required packages and patches

For Solaris 10 x86-64 the following packages and patches are required.
SUNWarc
SUNWbtool
SUNWcsl
SUNWhea
SUNWlibC
SUNWlibm
SUNWlibms
SUNWsprot
SUNWtoo
SUNWi1of (ISO8859-1)
SUNWi1cs (ISO8859-15)
SUNWi15cs
SUNWxwfnt
119961-05 or later
119964-14 or later
120754-06 or later
139556-08 or later
139575-03 or later
141415-04 or later
141445-09 or later [11.2.0.2]

Since I am on Solaris 11, I did:


root@sol2:~# pkg install SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNW SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWcsl
Creating Plan
pkg install: The following pattern(s) did not match any allowable packages. Try
using a different matching pattern, or refreshing publisher information:

        SUNWi1of
        SUNW
        SUNWxwfnt
root@sol2:~#

Solaris 11 is supported and certified for Oracle 11.2.0.3. Although I did not find information about the required packages and patches for Oracle 11.2.0.3 on Solaris 11, OUI later confirmed that the installed packages and patches suffice for an Oracle 11.2.0.3 RAC installation on Solaris 11.

14. Configure IP addresses and confirm hostname

The Oracle Solaris 11 11/11 Live Media installation performed earlier provides automatic network management. In order to set up IP addresses manually, I switched to manual network management using the following command.
root@sol1:~# netadm enable -p ncp DefaultFixed Enabling ncp 'DefaultFixed' root@sol1:~#

After switching to manual network management I have:

gjilevski@sol1:~$ dladm show-phys
LINK    MEDIA      STATE     SPEED   DUPLEX    DEVICE
net1    Ethernet   unknown   0       unknown   e1000g1
net2    Ethernet   unknown   0       unknown   e1000g2
net0    Ethernet   unknown   0       unknown   e1000g0
net3    Ethernet   unknown   0       unknown   e1000g3
gjilevski@sol1:~$

In Solaris 11 IP addresses are set as follows:


ipadm create-ip net0
ipadm create-addr -T dhcp net0/addr
ipadm create-ip net1
ipadm create-addr -T static -a local=192.168.2.21/24 net1/addr
ipadm create-ip net2
ipadm create-addr -T static -a local=10.10.2.21/24 net2/addr
ipadm create-ip net3
ipadm create-addr -T static -a local=192.168.56.51/24 net3/addr

At the end I have:

root@sol1:~# dladm show-phys
LINK    MEDIA      STATE   SPEED   DUPLEX   DEVICE
net1    Ethernet   up      1000    full     e1000g1
net2    Ethernet   up      1000    full     e1000g2
net0    Ethernet   up      1000    full     e1000g0
net3    Ethernet   up      1000    full     e1000g3
root@sol1:~#
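The assigned addresses themselves can be confirmed with ipadm; for example:

ipadm show-if      # list the IP interfaces and their state
ipadm show-addr    # list the configured addresses (net1/addr, net2/addr, net3/addr and the DHCP address on net0)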

The hostname was configured during Solaris 11 installation.

15. Modify the /etc/hosts file to have entries for the public, private and SCAN addresses, similar to the output below.

gjilevski@sol2:~$ cat /etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1             localhost
127.0.0.1       localhost
192.168.2.21    sol1 sol1.gj.com
192.168.2.22    sol2 sol2.gj.com
192.168.2.31    sol1-vip sol1-vip.gj.com
192.168.2.32    sol2-vip sol2-vip.gj.com
192.168.2.51    scan-sol scan-sol.gj.com
10.10.2.21      sol1-priv sol1-priv.gj.com
10.10.2.22      sol2-priv sol2-priv.gj.com
192.168.56.51   sol1-exp sol1-exp.gj.com
192.168.56.52   sol2-exp sol2-exp.gj.com

gjilevski@sol2:~$
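A quick sanity check that the names resolve as intended on each node (getent uses the name services configured on Solaris 11):

getent hosts scan-sol    # should return 192.168.2.51
ping sol2-priv           # Solaris ping prints "sol2-priv is alive" when the private link works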

16. Clone the boot disk of Sol1 VM to create a second VM corresponding to the second node.
d:\VB>VBoxManage createhd --filename d:\vb\asm_5.vdi --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 0df463ae-587a-4f62-bcd1-ac7dbab86f8f
d:\VB>
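The createhd output above shows the general command pattern; the boot-disk copy itself can be done with clonehd while Sol1 is powered off. A minimal sketch, assuming the Sol1 boot disk image is d:\vb\sol1.vdi (hypothetical path; check the VM storage settings for the real one):

REM Clone the Sol1 boot disk into a new image for Sol2 (run with the VM powered off)
VBoxManage clonehd d:\vb\sol1.vdi d:\vb\sol2.vdi --format VDI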

Create a second VM called Sol2 with the same resources as Sol1, but specify the cloned disk as the bootable disk. Attach the shared disks to the second VM using the commands described below.

VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium d:\vb\asm1.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium d:\vb\asm2.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium d:\vb\asm3.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium d:\vb\asm4.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium d:\vb\asm5.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 6 --device 0 --type hdd --medium d:\vb\asm6.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 7 --device 0 --type hdd --medium d:\vb\asm7.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 8 --device 0 --type hdd --medium d:\vb\asm8.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 9 --device 0 --type hdd --medium d:\vb\asm9.vdi --mtype shareable
VBoxManage storageattach Sol2 --storagectl "SATA Controller" --port 10 --device 0 --type hdd --medium d:\vb\asm10.vdi --mtype shareable

VBoxManage modifyhd d:\vb\asm1.vdi --type shareable
VBoxManage modifyhd d:\vb\asm2.vdi --type shareable
VBoxManage modifyhd d:\vb\asm3.vdi --type shareable
VBoxManage modifyhd d:\vb\asm4.vdi --type shareable
VBoxManage modifyhd d:\vb\asm5.vdi --type shareable
VBoxManage modifyhd d:\vb\asm6.vdi --type shareable
VBoxManage modifyhd d:\vb\asm7.vdi --type shareable
VBoxManage modifyhd d:\vb\asm8.vdi --type shareable
VBoxManage modifyhd d:\vb\asm9.vdi --type shareable
VBoxManage modifyhd d:\vb\asm10.vdi --type shareable

17. Modify the hostname and IPs on the cloned VM Sol2

In order to avoid an IP collision, start only the second (cloned) VM and change the IP addresses and hostname; initially both machines have the same IPs and hostname. As root on the Sol2 VM, execute the following commands to change the hostname to sol2, delete the interfaces and reboot the node.

svccfg -s svc:/system/identity:node setprop config/nodename=sol2
svcadm refresh svc:/system/identity:node
svcadm restart svc:/system/identity:node
ipadm delete-ip net0
ipadm delete-ip net1
ipadm delete-ip net2
ipadm delete-ip net3

After the restart, as root execute the following commands to set up the IP addresses.

ipadm create-ip net0
ipadm create-addr -T dhcp net0/addr

ipadm create-ip net1
ipadm create-addr -T static -a local=192.168.2.22/24 net1/addr
ipadm create-ip net2
ipadm create-addr -T static -a local=10.10.2.22/24 net2/addr
ipadm create-ip net3
ipadm create-addr -T static -a local=192.168.56.52/24 net3/addr

root@sol2:~# dladm show-phys
LINK    MEDIA      STATE   SPEED   DUPLEX   DEVICE
net1    Ethernet   up      1000    full     e1000g1
net2    Ethernet   up      1000    full     e1000g2
net0    Ethernet   up      1000    full     e1000g0
net3    Ethernet   up      1000    full     e1000g3
root@sol2:~#

18. Reboot the cloned Sol2 and fire up Sol1.

19. Use OUI to set up ssh user equivalence for the grid and oracle users.

20. Run cluvfy to verify the prerequisites.
Use cluvfy from the stage directory to make sure that the prerequisites are met.

./runcluvfy.sh stage -post hwos -n sol1,sol2 -verbose
./runcluvfy.sh stage -pre crsinst -n sol1,sol2 -verbose

21. Start OUI for Oracle GI installation


Oracle 11.2.0.3 on Solaris 11 requires that the AWT_TOOLKIT variable be exported:

export AWT_TOOLKIT=XToolkit

Failure to do so will produce the following error.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-02-04_05-50-46AM. Please wait ...grid@sol1:/u01/stage/grid$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2012-02-04_05-50-46AM/jdk/jre/lib/amd64/motif21/libmawt.so: ld.so.1: java: fatal: libXm.so.4: open failed: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1753)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1649)
        at java.lang.Runtime.load0(Runtime.java:769)
        at java.lang.System.load(System.java:968)
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1753)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1670)
        at java.lang.Runtime.loadLibrary0(Runtime.java:822)
        at java.lang.System.loadLibrary(System.java:993)
        at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.loadLibraries(Toolkit.java:1509)
        at java.awt.Toolkit.<clinit>(Toolkit.java:1530)
        at com.jgoodies.looks.LookUtils.isLowResolution(LookUtils.java:484)
        at com.jgoodies.looks.LookUtils.<clinit>(LookUtils.java:249)
        at com.jgoodies.looks.plastic.PlasticLookAndFeel.<clinit>(PlasticLookAndFeel.java:135)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:242)
        at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1779)
        at javax.swing.UIManager.setLookAndFeel(UIManager.java:453)
        at oracle.install.commons.util.Application.startup(Application.java:780)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:165)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:182)
        at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:348)
        at oracle.install.ivw.crs.driver.CRSInstaller.startup(CRSInstaller.java:98)
        at oracle.install.ivw.crs.driver.CRSInstaller.main(CRSInstaller.java:105)

After issuing xhost + from a terminal session connected as root, and setting DISPLAY=:0 and AWT_TOOLKIT=XToolkit in a session logged in as grid, run OUI. Select Skip software update and press Next to continue.

Select Install and configure GI

Select typical installation.

Specify the SCAN name and enter data for the second node.

Make sure that interfaces are properly selected.

Select ASM as storage option.

Specify the disk group name, redundancy and disks. I selected HIGH redundancy.

Specify inventory location.

Examine the prerequisites check.

Review the summary and press Install.

Wait for the OUI to prompt for the scripts to be run as root.

root@sol1:/u01/app/11.2.0/grid# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'sol1'
CRS-2676: Start of 'ora.mdnsd' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'sol1'
CRS-2676: Start of 'ora.gpnpd' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'sol1'
CRS-2672: Attempting to start 'ora.gipcd' on 'sol1'
CRS-2676: Start of 'ora.gipcd' on 'sol1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'sol1'
CRS-2672: Attempting to start 'ora.diskmon' on 'sol1'
CRS-2676: Start of 'ora.diskmon' on 'sol1' succeeded
CRS-2676: Start of 'ora.cssd' on 'sol1' succeeded

ASM created and started successfully.

Disk Group DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 054f135ad3644fbabff3f058ef28ff75.
Successful addition of voting disk 834ba35a0f6c4fe0bf04700b9edc2bb2.
Successful addition of voting disk 3a99cb045bd94f5ebf038d4b4584228c.
Successful addition of voting disk 133e3427001c4f0dbfa957587866693a.
Successful addition of voting disk 90b3a7e49b004ff3bf58d1aa6eb91bbb.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ----------
 1. ONLINE   054f135ad3644fbabff3f058ef28ff75 (/dev/rdsk/c4t1d0s0) [DATA]
 2. ONLINE   834ba35a0f6c4fe0bf04700b9edc2bb2 (/dev/rdsk/c4t2d0s0) [DATA]
 3. ONLINE   3a99cb045bd94f5ebf038d4b4584228c (/dev/rdsk/c4t3d0s0) [DATA]
 4. ONLINE   133e3427001c4f0dbfa957587866693a (/dev/rdsk/c4t4d0s0) [DATA]
 5. ONLINE   90b3a7e49b004ff3bf58d1aa6eb91bbb (/dev/rdsk/c4t5d0s0) [DATA]
Located 5 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'sol1'
CRS-2676: Start of 'ora.asm' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'sol1'
CRS-2676: Start of 'ora.DATA.dg' on 'sol1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
root@sol1:/u01/app/11.2.0/grid#

root@sol2:/u01/app/11.2.0/grid# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory...
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node sol1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
root@sol2:/u01/app/11.2.0/grid#

At the final configuration step, CLUVFY fails.

INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-sol" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-sol" (IP address: 192.168.2.51) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-sol"

As this is the only error in the log, I ignored it. The error is due to the SCAN name not being properly defined in DNS. Exit the OUI.

Verify that GI is properly installed and configured.


grid@sol2:/u01/app/11.2.0/grid/log/sol2$ crsctl check cluster -all
**************************************************************
sol1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
sol2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
grid@sol2:/u01/app/11.2.0/grid/log/sol2$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.LISTENER.lsnr
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.asm
               ONLINE  ONLINE       sol1                     Started
               ONLINE  ONLINE       sol2                     Started
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sol1
ora.cvu
      1        ONLINE  ONLINE       sol1
ora.oc4j
      1        ONLINE  ONLINE       sol1
ora.scan1.vip
      1        ONLINE  ONLINE       sol1
ora.sol1.vip
      1        ONLINE  ONLINE       sol1
ora.sol2.vip
      1        ONLINE  ONLINE       sol2
grid@sol2:/u01/app/11.2.0/grid/log/sol2$ crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       sol2                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       sol2
ora.crf
      1        ONLINE  ONLINE       sol2
ora.crsd
      1        ONLINE  ONLINE       sol2
ora.cssd
      1        ONLINE  ONLINE       sol2
ora.cssdmonitor
      1        ONLINE  ONLINE       sol2
ora.ctssd
      1        ONLINE  ONLINE       sol2                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       sol2
ora.gipcd
      1        ONLINE  ONLINE       sol2
ora.gpnpd
      1        ONLINE  ONLINE       sol2
ora.mdnsd
      1        ONLINE  ONLINE       sol2
grid@sol2:/u01/app/11.2.0/grid/log/sol2$

22. Oracle RDBMS installation (software only)

Log in as the oracle user. Set and export the following variables.


export AWT_TOOLKIT=XToolkit
export DISPLAY=:0.0

Run the installer. Ignore the E-mail notification and press Next to continue.

Select skip software updates.

Select install software only.

Select both nodes and Oracle RAC database.

Select languages.

Select EE.

Select location.

Select groups.

Examine the checks.

Separate checks confirmed that I can continue.

root@sol1:/u01/app/11.2.0/grid/bin# ./crsctl stat res ora.net1.network
NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on sol1, ONLINE on sol2

root@sol1:/u01/app/11.2.0/grid/bin# ./crsctl stat res ora.sol1.vip
NAME=ora.sol1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on sol1

root@sol1:/u01/app/11.2.0/grid/bin# ./crsctl stat res ora.sol2.vip
NAME=ora.sol2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on sol2

root@sol1:/u01/app/11.2.0/grid/bin#

Examine the Summary and run the installation.

Run the scripts. Exit the installer.

23. Create a cluster database using dbca

If you intend to have a separate disk group for the FRA, create it. In my case I created the following disk groups.

While logged in as the oracle user, with the environment variables properly set, run dbca. Select RAC database.

Select Create a database.

Select a template.

Select Admin managed and both nodes.

Select OEM option.

Specify password(s).

Error!

Look at:
Dbca Does Not Show ASM Diskgroup Information [ID 1286434.1]
ASM Diskgroup Can Not Be Shown When Creating Database With DBCA [ID 1269734.1]
It turned out to be a permission issue. Add the setuid and setgid bits to the oracle binary under $GRID_HOME/bin:
$ chmod 6755 $GRID_HOME/bin/oracle
The oracle binary in both $GRID_HOME/bin and the RDBMS $ORACLE_HOME/bin should have 6755 permissions, e.g. -rwsr-sr-x.
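After the chmod, the setuid/setgid bits can be verified on both homes; for example:

ls -l /u01/app/11.2.0/grid/bin/oracle                    # the owner and group x bits should show as s
ls -l /u01/app/oracle/product/11.2.0/db_1/bin/oracle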

Select a Disk Group.

Select FRA and Archiving options.

Select sample schemas.

Select parameters.

Press Next.

Select Create.

Review the summary.

Modify the .profile to include ORACLE_UNQNAME and start dbconsole.
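A minimal sketch of the profile addition and the start command, assuming the database unique name matches the D11G SID used earlier:

export ORACLE_UNQNAME=D11G    # required by emctl for 11.2 Database Control
emctl start dbconsole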

Modify passwords and take note of the OEM Database Control URL.

At the end the database is up and running.


gjilevski@sol1:/u01/app/11.2.0/grid/bin$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.DATADG.dg
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.LISTENER.lsnr
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.asm
               ONLINE  ONLINE       sol1                     Started
               ONLINE  ONLINE       sol2                     Started
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sol2
ora.cvu
      1        ONLINE  ONLINE       sol2
ora.d11g.db
      1        ONLINE  ONLINE       sol1                     Open
      2        ONLINE  ONLINE       sol2                     Open
ora.oc4j
      1        ONLINE  ONLINE       sol2
ora.scan1.vip
      1        ONLINE  ONLINE       sol2
ora.sol1.vip
      1        ONLINE  ONLINE       sol1
ora.sol2.vip
      1        ONLINE  ONLINE       sol2
gjilevski@sol1:/u01/app/11.2.0/grid/bin$

oracle@sol2:~$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Feb 6 18:24:05 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
          1 sol1.gj.com:D11G1
          2 sol2.gj.com:D11G2

SQL>

OEM status is as follows for the cluster database.

OEM status is as follows for the cluster.
