Solaris 11.2 with Oracle VirtualBox
May 8, 2014 by Andr
In this article I'll try to describe, as exactly as possible, how to install Solaris 11.2 and then install RAC 12c on top of VirtualBox. This build was made on a Mac, so if you are running Windows or Linux there might be some minor differences in the VirtualBox settings.
This article is Part 2 of 3 in my guide to installing Oracle Database 12c on RAC running Solaris 11.2 on VirtualBox.
Part 1 can be found here, and Part 3 can be found here.
The system will consist of two Solaris 11.2 (x86-64) machines with RAC 12c installed.
You need at least 4 GB of RAM assigned to each node, otherwise this will not work: you will run out of memory (OOM).
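If you would rather set the memory from the command line than the GUI, a small VBoxManage sketch (the VM name node1 matches what we use below; run this on the host, not the guest):

```shell
# Sketch: assign 4 GB of RAM to the imported VM
VBoxManage modifyvm node1 --memory 4096
VBoxManage showvminfo node1 | grep -i "memory size"
```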
Import the OVA file into VirtualBox (File > Import Appliance) and locate the file:
Then change the name to node1 and reinitialize the MAC address:
When the import is finished we have to create the disks for ASM. Shared disks MUST be fixed-size!
for i in {1..10}; do VBoxManage createhd --filename asm_disk$i.vdi --size 5120 --format VDI --variant Fixed; done
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 71875fdf-93c8-4673-a2d1-2bb2e43ecf62
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: f51d7393-0aeb-4df0-a0e6-18203175ace1
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: cecbb425-71ff-4402-b548-7f59aa77dc85
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 9fb09abf-e797-48fa-8834-d001280c8796
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: ac49485e-916a-4bdb-bd19-de9eaf1f9c2f
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 4654415f-3f92-4bbb-a1ed-533e8455e827
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 77715952-7b9e-4ef3-9558-cc917d86f688
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 03a5ff3b-e232-4a6d-844d-d06af37ec417
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 765e037d-04c7-443e-bf32-dc315c02e444
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: f51f9c86-49eb-4ae0-b67c-5bbcd2fb06b7
In order to add the disks we need a SATA controller, and since Oracle's OVA only has an IDE controller, we need to add a SATA controller to our virtual machine:
VirtualBox should now show 10 new SATA disks attached to the new SATA controller:
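If you prefer the command line over the GUI, the attach step can be sketched with VBoxManage; the VM name node1 and the controller name "SATA" are assumptions here, so adjust them to whatever you used:

```shell
# Mark each fixed-size image shareable, then attach it to the SATA controller
for i in {1..10}; do
  VBoxManage modifyhd asm_disk$i.vdi --type shareable
  VBoxManage storageattach node1 --storagectl "SATA" --port $i --device 0 \
      --type hdd --medium asm_disk$i.vdi
done
```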
You can set these up from the global preferences for VirtualBox (on a Mac that is Cmd+,). Choose:
Now configure the virtual machine using those networks; all configured networks should look like this:
Now select your region, language, time, and keyboard (no screenshot for that), then set the root password and create your own user. Worth noting here is that they seem to have disabled root as a role (the default since Solaris 11.1) for this OVA/beta.
Now verify and apply
When all is done, log in with your own username, start the Terminal, and check the IP for net3/v4 (it should be 192.168.56.101).
Now we can log in using ssh from our host, which makes things much easier.
Network configuration
We now have to set the correct network settings for our private and public interfaces: net1 will be our public interface and net2 will be our private interface.
RAC needs hostnames for the private and public addresses, the VIPs, and the SCAN to pass the install. Add these to /etc/hosts:
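As an illustration, a two-node layout could look like the sketch below; every address and hostname here is a made-up example, so substitute your own:

```text
# public (net1)
192.168.0.101   node1
192.168.0.102   node2
# private interconnect (net2)
10.10.10.1      node1-priv
10.10.10.2      node2-priv
# virtual IPs
192.168.0.111   node1-vip
192.168.0.112   node2-vip
# SCAN (a single address when resolved from /etc/hosts rather than DNS)
192.168.0.121   rac-scan
```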
root@node1:~# groupadd -g 8000 oinstall
root@node1:~# groupadd -g 8050 asmadmin
root@node1:~# groupadd -g 8051 asmdba
root@node1:~# groupadd -g 8052 asmoper
root@node1:~# groupadd -g 8061 dba
root@node1:~# groupadd -g 8062 oper
root@node1:~# useradd -u 8100 -g oinstall -G asmoper,asmadmin,asmdba -d /export/home/grid -m grid
root@node1:~# useradd -u 8101 -g oinstall -G oper,dba,asmdba -d /export/home/oracle -m oracle
root@node1:~# passwd oracle
New Password:
Re-enter new Password:
passwd: password successfully changed for oracle
root@node1:~# passwd grid
New Password:
Re-enter new Password:
passwd: password successfully changed for grid
root@node1:~#
Environment settings for the grid user:
umask 022
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/12.1.0.1/grid/
export ORACLE_SID=+ASM1
export TEMP=/tmp
export TMPDIR=/tmp
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$PATH:/usr/bin:/usr/sbin:$ORACLE_HOME/bin

ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
And the corresponding settings for the oracle user:
umask 022
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/app/oracle/product/12.1.0.1/dbhome_1
export ORACLE_SID=PRODB
export TEMP=/tmp
export TMPDIR=/tmp
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$PATH:/usr/bin:/usr/sbin:$ORACLE_HOME/bin

ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
root@node1:~# df -h /tmp/
Filesystem      Size  Used  Available  Capacity  Mounted on
swap            4.5G  8K    4.5G       1%        /tmp
OK, now it's time to take care of our 10 disks attached from VirtualBox. The disks are shown using format:
root@node1:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <VBOX HAR-fbaa03ff-6040cb3-0001-31.25GB>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t1d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@1,0
       2. c2t2d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@2,0
       3. c2t3d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@3,0
       4. c2t4d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@4,0
       5. c2t5d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@5,0
       6. c2t6d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@6,0
       7. c2t7d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@7,0
       8. c2t8d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@8,0
       9. c2t9d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@9,0
      10. c2t10d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@a,0
As you can see we have all 11 disks here: the root disk (disk 0) and the 10 disks we created using VBoxManage. We need to format the 10 shared disks and set the correct permissions and ownership. All disks need to be partitioned with partition 0 starting at cylinder 1. This must be done for all 10 disks.
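As a sanity check on those numbers: with the geometry format reports (128 heads, 32 sectors/track, 2558 cylinders), starting slice 0 at cylinder 1 works out as below. This is only a sketch; the fmthard line in the final comment is an untested assumption, and an interactive format session works just as well:

```shell
# Geometry from the format listing: 128 heads x 32 sectors = 4096 sectors
# per cylinder, 2558 cylinders per 5 GB disk.
SECTORS_PER_CYL=$((128 * 32))
START=$SECTORS_PER_CYL                     # slice 0 begins at cylinder 1
SIZE=$(( (2558 - 1) * SECTORS_PER_CYL ))   # every remaining cylinder
echo "slice 0: start sector $START, size $SIZE sectors"
# Hypothetical non-interactive labelling of one disk (untested):
#   echo "0 0 00 $START $SIZE" | fmthard -s - /dev/rdsk/c2t1d0s2
```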
When this is done we need to set the correct ownership according to the RAC installation notes. A chown will take care of that, and if we verify, this is the way it should look:
SSH key pairing.
Oracle recommends letting the OUI take care of the key-pair generation steps, but to avoid the remote node connections closing prematurely we change LoginGraceTime to 0 (no time limit) in /etc/ssh/sshd_config and then restart ssh.
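A sketch of that change (assuming the stock Solaris 11 sshd_config; Solaris sed has no -i option, hence the temporary file):

```shell
# Set LoginGraceTime to 0, then restart the ssh service
sed 's/^#\{0,1\}LoginGraceTime.*/LoginGraceTime 0/' /etc/ssh/sshd_config > /tmp/sshd_config.new
cp /tmp/sshd_config.new /etc/ssh/sshd_config
svcadm restart svc:/network/ssh:default
```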
Time settings.
Time is always important: if you're not on time you might get fired, and the same goes for RAC, which needs to be in sync on all nodes. To achieve that we can use NTP or the Cluster Time Synchronization Services daemon, also known as CTSSD. I don't have the means of using NTP here, so I'm turning it off to let ctssd take care of timing:
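Disabling NTP so that CTSSD runs in active mode can be sketched like this (service names as on a standard Solaris 11 install; verify with svcs on your own system):

```shell
svcadm disable svc:/network/ntp:default
svcs -a | grep ntp   # the ntp service should now show as disabled
```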
Create a new host in VirtualBox with the same configuration as node1, but DON'T clone the machine, as this destroyed my ASM disk configuration (but please prove me wrong).
Network settings:
Storage settings:
Then add the ASM disks again but this time to node2:
Now we can start node2, but for now keep node1 turned off so we can change the IP.
Log in with the username created on node1:
Then change the hostname and reconfigure net1 and net2 (the public and private interfaces):
Reboot, start node1 again, and we should be ready to install RAC.
Install RAC
Now we can start installing GRID itself. But in order to make the installation go smoothly we need to check all prerequisites.
First, set up user equivalence for the grid user by running the ssh setup script shipped with the OUI.
From Node2:
Answer all questions and the ssh setup should be done! WRONG: for some reason we need to add node2's public key to node1.
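One way to do that, assuming RSA keys in the default location for the grid user (the paths here are assumptions):

```shell
# Run as grid on node2: append node2's public key to node1's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh grid@node1 'cat >> ~/.ssh/authorized_keys'
# Verify that passwordless login now works from node2 to node1
ssh grid@node1 hostname
```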
Now we can verify the prerequisites. First we check the hardware and operating system configuration:
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
As you can see we have a network error for net0. This is expected, since this network has the same IP on both nodes. It is not important, however, since net0 is only used for Internet communication, so it can safely be ignored.
I also have an error about the soft limit for maximum open file descriptors on node1.
runcluvfy can try to fix the shared memory parameters, the open file descriptor limit, and the UDP send/receive parameters using the -fixup parameter.
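The invocation could look like the sketch below, run as grid from wherever the grid media was unpacked (the exact path is an assumption):

```shell
cd /mnt/sf_software/grid
./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
```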
grid@node1:/mnt/sf_software/grid$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 180 MB.   Actual 4610 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4840 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-05-05_11-57-29AM. Please wait ...grid@node1:/mnt/sf_software/grid$
The memory check will fail! But the rest is fixable (worth noting is that cluvfy didn't see this, or something changed). Fix by running the generated fixup script as root on both nodes, then recheck.
root@node1:/oracle/app/12.1.0.1/grid# ./root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/12.1.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/12.1.0.1/grid/crs/install/crsconfig_params
2014/05/08 10:38:41 CLSRSC-363: User ignored prerequisites during installation

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded

ASM created and started successfully.

Disk Group DATA created successfully.

CRS-2672: Attempting to start 'ora.storage' on 'node1'
CRS-2676: Start of 'ora.storage' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 02dac02ad6df4f67bffa90ee10b6847e.
Successful addition of voting disk 9c07ac5062d94f17bf5bf340ba08ff0a.
Successful addition of voting disk 3767b33b50bd4f3dbf5ddee4de361f30.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name              Disk group
--  -----    -----------------                 ---------              ----------
 1. ONLINE   02dac02ad6df4f67bffa90ee10b6847e (/dev/rdsk/c2t10d0s0) [DATA]
 2. ONLINE   9c07ac5062d94f17bf5bf340ba08ff0a (/dev/rdsk/c2t1d0s0) [DATA]
 3. ONLINE   3767b33b50bd4f3dbf5ddee4de361f30 (/dev/rdsk/c2t2d0s0) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.storage' on 'node1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'node1'
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node1'
CRS-2676: Start of 'ora.storage' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: node1
CRS-6016: Resource auto-start has completed for server node1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/05/08 10:51:52 CLSRSC-343: Successfully started Oracle clusterware stack

CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'node1'
CRS-2676: Start of 'ora.DATA.dg' on 'node1' succeeded
2014/05/08 10:53:49 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

root@node1:/oracle/app/12.1.0.1/grid# ../../grid/
admin/        cfgtoollogs/  checkpoints/  crsdata/      diag/         node1/
root@node1:/oracle/app/12.1.0.1/grid# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
Then also execute the scripts on node2, and we are all done.
Next up is installing the database. That's the easy part, so if you are fine up to here, I can promise the rest will go smoothly.