This demo describes how to install Oracle 11gR2 (11.2.0.1) 2-node (Linux) RAC on VMware Workstation 8. The hardware/software resources used in this demo are given below.
50 GB of free space is recommended for each RAC node; just to install a 2-node RAC, 20 GB of free space per node is enough.
Note: OEL 5.5 or a later Linux version can be used for the 11gR2 RAC installation. One advantage of using OEL 5.8 is that all RPMs required for ASMLib are already installed in the OS. Before starting the RAC installation, it is assumed that VMware Workstation 8 and OEL 5.8 are installed on the laptop. Installation of Linux and VMware is not covered in this document.
Click on Next.
Set the memory for Node-1 and click Next. I have 16 GB of RAM in my laptop, so I allotted 3 GB for Node-1. If you have 8 GB of RAM, you can use 2 GB (2.5 GB is preferable) for each node.
Set the disk size and select the option below. I used 50 GB for each RAC node (for future use, to stage the Grid/RDBMS software), but 20 GB (10 GB for Linux and 10 GB for Grid + Database) is enough for each RAC node just for installation purposes.
Set the disk name (the file name that will be used as the disk). By default it takes Node_Name.vmdk (RAC1.vmdk).
After successful installation of the Linux OS on VM Node-1, it appears as shown below. Start RAC1 by clicking the Power on button:
Check required RPMs and additional Setup for Oracle 11gR2 on OEL-5
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
To install an RPM, use the command below as the root user. All the RPMs are available in the Server directory on the Linux media (CD/DVD). Many of them can also be downloaded from http://rpm.pbone.net/
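The install command itself is not shown here; a typical invocation as root might look like the following sketch (the mount point and exact package file names are examples and will vary with your media):

```shell
# Run as root; the RPMs live in the Server directory of the mounted OEL DVD
cd /media/cdrom/Server

# -U upgrades-or-installs, -v is verbose, -h prints hash-mark progress
rpm -Uvh libaio-devel-0.3.106-*.x86_64.rpm libaio-devel-0.3.106-*.i386.rpm
rpm -Uvh compat-libstdc++-33-3.2.3-*.rpm

# Verify that a package is installed
rpm -q libaio-devel
```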
Check the kernel parameters for Oracle as given below. Note that if a parameter's current value is already higher than the value listed, leave it unchanged; if it is lower, raise it to the listed value. Edit /etc/sysctl.conf as the root user. Use the /sbin/sysctl -p command to apply the new settings.
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
Disable the Linux firewall. First stop the service; to permanently disable the firewall across reboots, also run the second command.
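The commands themselves were shown in a screenshot; on OEL 5 they would typically be:

```shell
# Stop the firewall for the current session (run as root)
service iptables stop

# Prevent the firewall from starting again on reboot
chkconfig iptables off
```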
Create the oracle user, OS groups and directories where the Oracle software will be installed.
[root@rac1 ~]# groupadd dba
[root@rac1 ~]# groupadd oinstall
[root@rac1 ~]# groupadd asmdba
[root@rac1 ~]# groupadd asmadmin
[root@rac1 ~]# useradd -g oinstall -G dba,asmdba,asmadmin oracle
Create the directories where the Oracle software will be installed and set their ownership.
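The directory layout in the original screenshot is not shown here; a typical layout, assuming the conventional /u01 Oracle base (your paths may differ), would be:

```shell
# Run as root on both nodes
mkdir -p /u01/app/11.2.0/grid    # Grid Infrastructure home (assumed path)
mkdir -p /u01/app/oracle         # Oracle base for the RDBMS   (assumed path)
chown -R oracle:oinstall /u01
chmod -R 775 /u01
```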
Login as the "oracle" user and add the following lines at the end of the "/home/oracle/.bash_profile" file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
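The remaining environment settings appeared in a screenshot; a typical set, assuming the /u01 paths above, is sketched below. The ORACLE_SID and hostname values are illustrative only (adjust per node, e.g. a different SID on RAC2):

```shell
# Hypothetical values -- substitute your own paths, SID and hostname
ORACLE_HOSTNAME=rac1.miracle.com; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=racdb1; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH
```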
ASMLib Setup
The RPMs below are required for ASMLib. As we are using OEL 5.8 with kernel version 2.6.32-300.10.1.el5uek, there is no separate oracleasmlib* RPM available: the ASMLib kernel module is built into the UEK kernel, so that RPM is not needed for ASM to work. More information is available at http://sethmiller.org/it/oracleasmlib-not-necessary/
oracleasm-support-2.1.7-1.el5.x86_64.rpm
oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm --> Not required in OEL-5.8 and higher OS version
oracleasm-support-2.1.8-1.el5
oracleasm-2.6.18-348.12.1.el5-2.0.5-1.el5
Configure the ASMLib driver (to be owned by the oracle user and the dba group). It needs to be loaded on every reboot.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
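The prompt quoted above comes from the ASMLib configuration command, typically run as root; the answers shown as comments match the ownership described above:

```shell
# Interactive on-boot configuration of the ASMLib driver
/usr/sbin/oracleasm configure -i
#   Default user to own the driver interface []: oracle
#   Default group to own the driver interface []: dba
#   Start Oracle ASM library driver on boot (y/n) [n]: y
#   Scan for Oracle ASM disks on boot (y/n) [y]: y

# Load the kernel module and mount the ASM special filesystem now
/usr/sbin/oracleasm init
```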
Shut down RAC1. Right-click on RAC1 --> Settings --> click "Add".
Set the size of the disk. I used 10 GB so I could do some testing on the disk group; otherwise 5 GB is enough for demo purposes.
Set a disk file name and place the file in a separate location (C:\TUSAR\VMWARE\RACSHARE), because this disk will be shared with the RAC2 node. Click Finish to create the disk. It takes a few minutes to create the disk.
Select device node as SCSI 1:0 from drop down menu (which means SCSI controller 1, device 0) and press OK.
Now we have to modify the VMware hardware profile file (.vmx) for RAC1 to make this disk shared (clustered). These changes force the VM not to buffer reads and writes, but to go to the disk directly. Take a backup of the file below before modifying it.
We need to put the lines below in the rac1.vmx file and save it. If any of the parameters is already present, don't add it twice (a duplicate will throw an error during VM startup).
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.sharedBus = "virtual"
The new disk (ASM_DISK1) has the following entries. No need to change anything with below entries.
scsi1:0.present = "TRUE"
scsi1:0.fileName = "C:\TUSAR\VMWARE\RACSHARE\ASM_DISK1"
scsi1:0.writeThrough = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""
Start RAC1 VM to format new disk. Use "fdisk" command to partition new disk (/dev/sdb).
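The interactive fdisk session was shown as a screenshot; the keystrokes are roughly as follows (creating a single primary partition spanning the whole disk):

```shell
# As root: partition the new shared disk
fdisk /dev/sdb
#   n       -> new partition
#   p       -> primary
#   1       -> partition number 1
#   <ENTER> -> accept default first cylinder
#   <ENTER> -> accept default last cylinder (use the whole disk)
#   w       -> write the partition table and exit
```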
After partitioning the new disk (/dev/sdb1), you will find an entry like the one below with the command "fdisk -l". Now we can use this disk for ASM. Use the command below to create the ASM disk.
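The ASMLib createdisk command stamps the partition as an ASM disk; the label ASMDISK1 below is an example, not necessarily the one used in the original screenshot:

```shell
# As root: label the new partition for ASM
/usr/sbin/oracleasm createdisk ASMDISK1 /dev/sdb1

# Verify the label is visible to ASMLib
/usr/sbin/oracleasm listdisks
```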
Shut down the RAC1 node to configure the network interfaces. Taking a backup of RAC1 is a good idea here, as we are done with all setup except the network interfaces.
The Oracle RAC requires at least two network connections between cluster nodes. One network will be the public IP and
the second will be a private IP reserved for inter-cluster traffic.
This allows us to create new virtual networks so that our guest VMs can talk among themselves, to the host computer and/or to the outside world. We are going to create two virtual networks (VMnet2 and VMnet3) by clicking Add Network. DHCP is enabled for both VMnet2 and VMnet3.
VMnet2 will be a host-only network allowing communication with other VMs and the host, but not with the outside world. We will assign the subnet IP 10.10.1.0 to VMnet2, with a subnet mask of 255.255.255.0.
VMnet3 will also be a host-only network allowing communication with other VMs and the host, but not with the outside world. We will assign the subnet IP 10.10.2.0 to VMnet3, with a subnet mask of 255.255.255.0.
Now that we have our two networks created in VMware, we will add two new NICs to our VM.
Right click on the RAC1 in the VMware Workstation menu, and select Settings. This will bring up the Virtual Machine
Settings panel. Now click Add. The Add Hardware Wizard allows us to select Network Adapter
On the Network Adapter Type menu, we will select the Custom: Specific virtual network radio button, and use the drop
down to select VMNet2(Host-only). Click Finish to create the NIC.
Now repeat the above steps to add another Network Adapter, this time using VMNet3.
The final VM hardware configuration should look like this:
Now we can clone our VM (RAC1) to create RAC2. From this point forward we will have two machines, RAC1 and RAC2. As RAC1 is now shut down, go to VM --> Manage --> Clone.
Select Create a full clone, enter the VM name RAC2 and the location to store the VM files, then click Finish to start the clone.
After a successful clone, we have two VMs, RAC1 and RAC2. Now start both VMs and log in as root. We need to configure the network IP addresses and hostnames.
Log in to Node-1 (RAC1) as root and select System->Administration->Network. There will be three network interfaces
(devices) as given below.
eth0 : The bridged network adapter that connects us to the outside world (no configuration required)
eth1 : It is using VMnet2, the network we intend to be our public RAC network
eth2 : It is using VMnet3 which is what we plan to use for private cluster traffic
Select the eth1 adapter and click the edit button to bring up the Ethernet Device control panel. We will select Statically
set IP addresses to assign a static IP address to this NIC. We will assign IP address 10.10.1.10. Make sure that
Activate device when computer starts remains enabled.
Now repeat the above steps for eth2, this time statically assigning the IP address 10.10.2.10
Make sure you save your changes before closing the Network Configuration editor window.
Now log in to RAC2 and follow the above steps for IP address configuration. Once this is done, we have the network interfaces ready with the following IP addresses.
Check the hostname in the file /etc/sysconfig/network as root; edit this file and change the hostname on each machine. After changing the hostname in this file, the VM must be rebooted. My domain name is "miracle.com" (Oracle's Miracle).
RAC1:
RAC2:
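The screenshots above showed the edited files; the relevant line in /etc/sysconfig/network on each node would read roughly as follows:

```shell
# RAC1: /etc/sysconfig/network
HOSTNAME=rac1.miracle.com

# RAC2: /etc/sysconfig/network
HOSTNAME=rac2.miracle.com
```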
It's a good idea to bounce both VMs (RAC1 and RAC2) and check that everything comes back up as expected. Please note that from now on, whatever changes we make are meant for both nodes (RAC1 and RAC2).
The Grid installer demands a VIP to be used as the SCAN VIP (for the SCAN listener); it will not proceed with the installation unless the SCAN VIP is provided. As a workaround, we can place a VIP (which must be free and not used elsewhere on the network) in the /etc/hosts files on both nodes to let the installer continue with the Grid installation. Later you can use a local listener instead of SCAN (disabled) and remove these entries from /etc/hosts.
In our demo, we are going to use the SCAN listener (an added benefit that handles user connections irrespective of the number of nodes present in the cluster). To use SCAN, we need to configure DNS to resolve the VIP addresses used by SCAN. Oracle recommends three VIPs for SCAN to handle large numbers of database connections; here we are using two VIPs for SCAN (just for demo purposes, later we can add one more VIP to SCAN).
Even though we are using SCAN VIPs, I still suggest placing the SCAN VIP entries in /etc/hosts during installation; later we can remove these SCAN entries.
We will be using these VIPs for SCAN and these need to be configured in DNS.
The RPM below needs to be installed to run DNS on a Linux server. It is already available in OEL 5.8.
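On OEL 5 the DNS server is provided by the bind packages (assuming the standard caching-nameserver setup); you can confirm they are present with:

```shell
# List the installed bind-related packages
rpm -qa | grep -i bind
```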
For the DNS setup, we need to create the file /etc/named.conf with the entries below. If the file is already present, take a backup before editing.
zone "miracle.com" IN {
type master;
file "miracle.zone";
allow-update { none; };
};
In a real scenario, the DNS server holds this file. In our case, you can place this file (with the same entries as above) on both RAC1 and RAC2, which lets both nodes operate as DNS servers.
Change the forwarders IP to match your tertiary DNS (in my case it is 192.168.0.1). Go to the location below to check the tertiary DNS; it is generally 192.168.1.1 in most home networks.
System->Administration->Network
RAC1:
miracle.com. IN NS 10.10.1.20
localhost IN A 127.0.0.1
rac1.miracle.com. IN A 10.10.1.10
rac2.miracle.com. IN A 10.10.1.20
rac1-vip.miracle.com. IN A 10.10.1.11
rac2-vip.miracle.com. IN A 10.10.1.21
rac-scan.miracle.com. IN A 10.10.1.12
rac-scan.miracle.com. IN A 10.10.1.22
RAC2:
miracle.com. IN NS 10.10.1.20
localhost IN A 127.0.0.1
rac1.miracle.com. IN A 10.10.1.10
rac2.miracle.com. IN A 10.10.1.20
rac1-vip.miracle.com. IN A 10.10.1.11
rac2-vip.miracle.com. IN A 10.10.1.21
rac-scan.miracle.com. IN A 10.10.1.12
rac-scan.miracle.com. IN A 10.10.1.22
Now we have to modify the Ethernet adapter files to prevent /etc/resolv.conf from being overwritten.
Place "PEERDNS=no" in both the ifcfg-eth1 and ifcfg-eth2 files (on both nodes) to avoid any overwrite of /etc/resolv.conf.
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:0c:29:74:c4:57
NETMASK=255.255.255.0
IPADDR=10.10.1.10
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=no
We also need to set the DNS service to auto-start on reboot (both nodes)
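On OEL 5 this is done with chkconfig, for example:

```shell
# Start the DNS server now and enable it on every reboot (both nodes)
service named start
chkconfig named on
```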
Now we can look up the SCAN IP addresses. Check whether rac-scan resolves its IP addresses from DNS in round-robin fashion (on both nodes).
Name: rac-scan.miracle.com
Address: 10.10.1.22 --> 1st SCAN VIP
Name: rac-scan.miracle.com
Address: 10.10.1.12 --> 2nd SCAN VIP
Name: rac-scan.miracle.com
Address: 10.10.1.12 --> 1st SCAN VIP
Name: rac-scan.miracle.com
Address: 10.10.1.22 --> 2nd SCAN VIP
Name: rac-scan.miracle.com
Address: 10.10.1.22 --> 1st SCAN VIP
Name: rac-scan.miracle.com
Address: 10.10.1.12 --> 2nd SCAN VIP
Also check nslookup for the other RAC IP addresses (on both nodes).
Name: rac1.miracle.com
Address: 10.10.1.10
Name: rac1-vip.miracle.com
Address: 10.10.1.11
Name: rac2.miracle.com
Address: 10.10.1.20
Name: rac2-vip.miracle.com
Address: 10.10.1.21
RAC1:
# Drift file.
driftfile /etc/ntp/drift
RAC2:
# Drift file.
driftfile /etc/ntp/drift
Add the slewing option, which prevents the NTP daemon from resetting the clock when a time gap occurs. Modify the /etc/sysconfig/ntpd file and add the -x option.
NTPDATE_OPTIONS=""
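With the -x flag added, a typical OEL 5 /etc/sysconfig/ntpd looks like this (restart ntpd afterwards):

```shell
# -x enables slewing; the remaining flags are the OEL 5 defaults
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=no
NTPDATE_OPTIONS=""

# Apply the change (run as root on both nodes)
# service ntpd restart
```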
We have already created the shared disk and configured it for ASM on RAC1. After cloning RAC2 from RAC1, this shared disk should be visible on RAC2. If the partition table is not updated on RAC2, we need to partition the same disk on RAC2 to update its partition table. Check that the partition of the new disk (/dev/sdb1) is visible on RAC2 with the command "fdisk -l".
Please note: only if the partition is not visible on RAC2 should you run the partitioning as shown above. In general it appears on RAC2 automatically, since RAC2 was cloned from RAC1.
Log in as the oracle user, copy the downloaded software (Grid and Database) to Node-1 (RAC1), and unzip it as "oracle".
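Assuming the standard 11.2.0.1 media file names (yours may differ), the unzip step looks like this:

```shell
# As oracle, in the staging directory on RAC1
unzip linux.x64_11gR2_grid.zip             # creates ./grid
unzip linux.x64_11gR2_database_1of2.zip    # creates ./database
unzip linux.x64_11gR2_database_2of2.zip
```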
Before starting the Grid installation, we need to set up SSH between the two nodes. In a 10g CRS installation we used to configure SSH manually on all nodes, but in 11gR2 Grid the following script sets up SSH on both nodes when run from just one of them. Log in to RAC1 as the oracle user, open a command terminal, and run the script below.
RAC1:
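The script referred to here ships with the Grid media under the sshsetup directory; a typical invocation (host names as configured above) is:

```shell
# As oracle on RAC1, from the unzipped grid software
cd grid/sshsetup
./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -noPromptPassphrase -advanced
```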
It will prompt twice for the oracle password for each node (RAC1, RAC2) and set up SSH on both nodes. After the script completes successfully, check ssh on both nodes.
RAC1:
RAC2:
Now run runcluvfy.sh on RAC1 and confirm all the settings are fine before installing Grid.
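A typical pre-installation check with the cluster verification utility looks like this:

```shell
# As oracle on RAC1, from the unzipped grid directory
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
```

The -fixup flag generates a fixup script for any correctable OS settings; review the verbose output and resolve any failed checks before launching the installer.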
Before the Grid installation, check the DISPLAY setting in the terminal. VNC, Hummingbird or any X-Windows system can be used to access the server terminal to start the installer. Run "xclock" in the terminal to check whether the DISPLAY setting is correct; if you see a clock as shown below, the DISPLAY setting is fine.
It looks like all the setup is fine, so we can proceed with the Grid installation.
Run orainstRoot.sh on RAC1; after it finishes, run orainstRoot.sh on RAC2. Then run root.sh on RAC1 and, after it finishes, run root.sh on RAC2. Please note: run the scripts one at a time, and press OK only after all scripts have completed successfully.
The output of these scripts is attached here. Verify the output and confirm that there are no errors.
ISSUE: After installing Oracle RAC 11gR2 on two nodes, I found only one SCAN listener running, on one node, even though I had configured two SCAN VIPs in DNS during the Grid installation. The configuration below enables two SCAN listeners, one for each SCAN VIP (DNS). The solution for this issue is described in the following document, along with SCAN listener and database connection details.
Thanks,
Tusar
Reference:
http://gruffdba.wordpress.com/2012/10/26/oracle-11gr2-2-node-rac-on-vmware-workstation-8-introduction/
http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-ol5-using-vmware-server-2.php