
Install Grid 12c on Solaris 11.2 with Oracle VirtualBox

May 8, 2014 by Andr

In this article I'll try to describe, in as exact a manner as possible, how to install Solaris 11.2 and
then install RAC 12c on top of VirtualBox. This build is made on a Mac, so if you are running
Windows or Linux there might be some minor differences when it comes to the VirtualBox settings.

This article is Part 2 out of 3 of my guide to installing Oracle Database 12c on RAC running
Solaris 11.2 on VirtualBox.
Part 1 can be found here, and Part 3 can be found here.

The system will consist of 2 Solaris 11.2 (x86-64) machines with RAC 12c on top.

You need to have at least 4GB of RAM assigned to each node, otherwise this will not work since
you will run out of memory (OOM).
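If you imported the templates with less memory, you can bump it from the command line while the VMs are powered off (a hedged example; the names node1 and node2 are the VM names we set further down):

VBoxManage modifyvm node1 --memory 4096
VBoxManage modifyvm node2 --memory 4096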

First we need to download all the binaries:

1. Download the Solaris 11.2 VirtualBox templates from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/beta-vm-templates-2190885.html.
2. Download Grid Infrastructure for x86-64 from here: ZIP1 and ZIP2.
3. And if you don't already have VirtualBox you can download it here.

Import the OVA file into VirtualBox (File > Import Appliance) and then locate the file:

Then change the Name to node1 and reinitialize the MAC address.
When the import is finished we have to create the disks for ASM. Shared disks MUST be fixed!

for i in {1..10}; do VBoxManage createhd --filename asm_disk$i.vdi --size 5120 --format VDI --variant Fixed; done
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 71875fdf-93c8-4673-a2d1-2bb2e43ecf62
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: f51d7393-0aeb-4df0-a0e6-18203175ace1
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: cecbb425-71ff-4402-b548-7f59aa77dc85
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 9fb09abf-e797-48fa-8834-d001280c8796
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: ac49485e-916a-4bdb-bd19-de9eaf1f9c2f
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 4654415f-3f92-4bbb-a1ed-533e8455e827
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 77715952-7b9e-4ef3-9558-cc917d86f688
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 03a5ff3b-e232-4a6d-844d-d06af37ec417
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 765e037d-04c7-443e-bf32-dc315c02e444
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: f51f9c86-49eb-4ae0-b67c-5bbcd2fb06b7

In order to add the disks we need to have a SATA controller, and since Oracle's OVA only has an
IDE controller, we need to add a SATA controller for our virtual machine:
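If you prefer the command line over the GUI, something like this should add the controller (a sketch, assuming the VM is named node1; the controller name "SATA" must match what the attach commands below expect):

VBoxManage storagectl node1 --name "SATA" --add sata --controller IntelAhci --portcount 11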

Then attach the disks as shared like this:

for i in {1..10}; do VBoxManage storageattach node1 --storagectl "SATA" --port $i --device 0 --type hdd --medium asm_disk$i.vdi --mtype shareable; done
for i in {1..10}; do VBoxManage modifyhd asm_disk$i.vdi --type shareable; done

VirtualBox should now have 10 new SATA disks attached to the new SATA controller:

OK, let's configure the network interfaces in VirtualBox.

This setup will use 4 network adapters.
3 of them should be of the type Host-Only Network and one (to connect to the internet) should be of
type NAT.

You can set these up from the global preferences for VirtualBox (for Mac that is Cmd-,). Choose

Network and add three Host-Only Networks like:
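The same thing can be done from the command line; running this three times should create vboxnet0 through vboxnet2 (hedged: you still assign the subnets in the preferences dialog):

VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif create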

This will create three local interfaces:

ifconfig -a | grep vbox
vboxnet0: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
vboxnet1: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
vboxnet2: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500

Now configure the virtual machine using those nets; all configured networks should look like
this:

Now we are ready to start up the virtual machine.
Boot the Solaris 11.2 BETA and press F2 to continue. You will probably have some mouse issues.
To release the mouse from the screen you can by default use LEFT CMD (OS X only).

Set the hostname to node1.

We'll configure the network at a later point in time:

Now select your Region, Language, Time and Keyboard (no screenshot for that), then set the root
password and create your own user. Worth noting here is that they seem to have disabled root as
a role (the default since Solaris 11.1) for this OVA/beta.
Now verify and apply.

When all is done, log in with your own username, start the Terminal and check your IP for
net3/v4 (it should be 192.168.56.101):

root@node1:~# ipadm show-addr | grep net3/v4
net3/v4           dhcp     ok           192.168.56.101/24
root@node1:~#

Now we can log in using ssh from our host, which makes things much easier.

andrek$ ssh andrek@192.168.56.101
The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established.
RSA key fingerprint is 75:59:11:63:b0:11:df:bb:52:8f:2d:13:6e:47:bc:db.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.101' (RSA) to the list of known hosts.
Password:
Last login: Sun May  4 20:11:21 2014
Oracle Corporation      SunOS 5.11      11.2    April 2014
andrek@node1:~$ su -
Password:
Oracle Corporation      SunOS 5.11      11.2    April 2014
root@node1:~#

Network configuration
We now have to set the correct network settings for our private and public interfaces.
net1 will be our public interface and net2 will be our private interface.

root@node1:~# ipadm delete-ip net2
root@node1:~# ipadm create-addr -T static -a local=192.168.57.50/24 net2/v4
ipadm: cannot create address: No such interface
root@node1:~# ipadm create-ip net2
root@node1:~# ipadm create-addr -T static -a local=192.168.57.50/24 net2/v4
root@node1:~# ipadm delete-ip net1
root@node1:~# ipadm create-ip net1
root@node1:~# ipadm create-addr -T static -a local=192.168.58.50/24 net1/v4
root@node1:~# ipadm show-addr | grep v4
lo0/v4            static   ok           127.0.0.1/8
net0/v4           dhcp     ok           10.0.2.15/24
net1/v4           static   ok           192.168.58.50/24
net2/v4           static   ok           192.168.57.50/24
net3/v4           dhcp     ok           192.168.56.101/24

Configure the host file

RAC needs hostnames for the private, public, VIP and SCAN addresses to pass the install. Add this to /etc/hosts:

192.168.58.50 node1 node1.protractus.net
192.168.58.51 node2 node2.protractus.net
192.168.58.60 node1-vip node1-vip.protractus.net
192.168.58.61 node2-vip node2-vip.protractus.net
192.168.58.70 nodes-scan nodes-scan.protractus.net
192.168.57.50 node1-priv node1-priv.protractus.net
192.168.57.51 node2-priv node2-priv.protractus.net
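A quick sanity check that all the names resolve (a hedged check; getent is in the default Solaris 11 install and "hosts: files" is in nsswitch.conf by default):

for h in node1 node2 node1-vip node2-vip nodes-scan node1-priv node2-priv; do getent hosts $h; done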

Create ORACLE and GRID users and groups

root@node1:~# groupadd -g 8000 oinstall
root@node1:~# groupadd -g 8050 asmadmin
root@node1:~# groupadd -g 8051 asmdba
root@node1:~# groupadd -g 8052 asmoper
root@node1:~# groupadd -g 8061 dba
root@node1:~# groupadd -g 8062 oper
root@node1:~# useradd -u 8100 -g oinstall -G asmoper,asmadmin,asmdba -d /export/home/grid -m grid
root@node1:~# useradd -u 8101 -g oinstall -G oper,dba,asmdba -d /export/home/oracle -m oracle
root@node1:~# passwd oracle
New Password:
Re-enter new Password:
passwd: password successfully changed for oracle
root@node1:~# passwd grid
New Password:
Re-enter new Password:
passwd: password successfully changed for grid
root@node1:~#
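To verify that the accounts got the right group memberships, compare the output of id against the groupadd/useradd calls above:

root@node1:~# id -a grid
root@node1:~# id -a oracle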

Add the following to the grid user's profile (~/.profile):

umask 022
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/12.1.0.1/grid/
export ORACLE_SID=+ASM1
export TEMP=/tmp
export TMPDIR=/tmp
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$PATH:/usr/bin:/usr/sbin:$ORACLE_HOME/bin

ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited

And the following to the oracle user's profile (~/.profile):

umask 022
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/app/oracle/product/12.1.0.1/dbhome_1
export ORACLE_SID=PRODB
export TEMP=/tmp
export TMPDIR=/tmp
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$PATH:/usr/bin:/usr/sbin:$ORACLE_HOME/bin

ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited

Setup the directories for ORACLE and GRID

root@node1:~# mkdir -p /oracle/app/grid/
root@node1:~# mkdir -p /oracle/app/12.1.0.1/grid/
root@node1:~# chown -R grid:oinstall /oracle/
root@node1:~# mkdir -p /oracle/app/oracle/product/12.1.0.1/dbhome_1
root@node1:~# chown oracle:oinstall /oracle/app/oracle/
root@node1:~# chmod -R 770 /oracle/

Configure the storage and swap

According to the 12c prerequisite documentation we need swap of approximately 1 times the RAM when
the machine has 4GB of RAM. This machine has ~4.5GB by default, so we should be OK here:

root@node1:~# df -h /tmp/
Filesystem      Size  Used  Available  Capacity  Mounted on
swap            4.5G    8K       4.5G        1%  /tmp
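If you want to look at the swap configuration itself rather than /tmp, swap(1M) shows both a summary and the backing devices (the -h flag for human-readable sizes should be available on Solaris 11):

root@node1:~# swap -sh
root@node1:~# swap -lh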
OK, now it's time to take care of our 10 disks attached from VirtualBox. The disks are shown
using format:

root@node1:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <VBOX HAR-fbaa03ff-6040cb3-0001-31.25GB>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t1d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@1,0
       2. c2t2d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@2,0
       3. c2t3d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@3,0
       4. c2t4d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@4,0
       5. c2t5d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@5,0
       6. c2t6d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@6,0
       7. c2t7d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@7,0
       8. c2t8d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@8,0
       9. c2t9d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@9,0
      10. c2t10d0 <ATA-VBOX HARDDISK-1.0 cyl 2558 alt 2 hd 128 sec 32>
          /pci@0,0/pci8086,2829@d/disk@a,0

As you can see we have all 11 disks here: the root disk (disk 0) and the 10 disks we created using
VBoxManage. We need to format and set up the 10 shared disks with the correct permissions and
ownership. All disks need to be partitioned on partition 0, starting from cylinder 1. This must be
done for all 10 disks.

Specify disk (enter its number): 1
selecting c2t1d0
[disk formatted]
No Solaris fdisk partition found.

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> partition

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 4.95g
partition> print
Current partition table (unnamed):
Total disk cylinders available: 2557 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       1 - 2535        4.95GB    (2535/0/0)  10383360
  1 unassigned    wm       0               0         (0/0/0)            0
  2     backup    wu       0 - 2556        4.99GB    (2557/0/0)  10473472
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6 unassigned    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 - 0           2.00MB    (1/0/0)         4096
  9 unassigned    wm       0               0         (0/0/0)            0

partition> label
Ready to label disk, continue? y

partition> quit
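Repeating the dialog above nine more times gets old fast. Untested sketch: format(1M) reads its commands from standard input, so a here-document can replay the same answers for the remaining disks (the two blank lines accept the default partition tag and flag; verify the label on each disk afterwards):

# replay the fdisk/partition/label dialog for disks 2 through 10
for i in {2..10}; do
format -d c2t${i}d0 <<EOF
fdisk
y
partition
0


1
4.95g
label
y
quit
quit
EOF
done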

When this is done we need to set the correct ownership according to the RAC installation notes.

for i in {1..10}; do chown grid:asmadmin /dev/rdsk/c2t$i'd0s0'; chmod 660 /dev/rdsk/c2t$i'd0s0'; done

This will take care of that, and if we verify, this is how it should look:

root@node1:/dev/rdsk# for i in {1..10}; do ls -al /dev/rdsk/c2t$i'd0s0' | awk '{print $11}' | xargs ls -al; done
crw-rw----   1 grid asmadmin 229,  64 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@1,0:a,raw
crw-rw----   1 grid asmadmin 229, 128 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@2,0:a,raw
crw-rw----   1 grid asmadmin 229, 192 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@3,0:a,raw
crw-rw----   1 grid asmadmin 229, 256 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@4,0:a,raw
crw-rw----   1 grid asmadmin 229, 320 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@5,0:a,raw
crw-rw----   1 grid asmadmin 229, 384 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@6,0:a,raw
crw-rw----   1 grid asmadmin 229, 448 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@7,0:a,raw
crw-rw----   1 grid asmadmin 229, 512 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@8,0:a,raw
crw-rw----   1 grid asmadmin 229, 576 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@9,0:a,raw
crw-rw----   1 grid asmadmin 229, 640 May  4 22:19 ../../devices/pci@0,0/pci8086,2829@d/disk@a,0:a,raw
Solaris kernel tweaking
Projects, you just love them; they make resource management such a nice thing. :) Here we make two projects,
one for grid and one for oracle.

root@node1:/dev/rdsk# projadd -G oinstall -K "project.max-shm-memory=(priv,6g,deny)" user.grid
root@node1:/dev/rdsk# projmod -sK "project.max-sem-nsems=(priv,512,deny)" user.grid
root@node1:/dev/rdsk# projmod -sK "project.max-sem-ids=(priv,128,deny)" user.grid
root@node1:/dev/rdsk# projmod -sK "project.max-shm-ids=(priv,128,deny)" user.grid
root@node1:/dev/rdsk# projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.grid
root@node1:/dev/rdsk# projadd -G oinstall -K "project.max-shm-memory=(priv,6g,deny)" user.oracle
root@node1:/dev/rdsk# projmod -sK "project.max-sem-nsems=(priv,512,deny)" user.oracle
root@node1:/dev/rdsk# projmod -sK "project.max-sem-ids=(priv,128,deny)" user.oracle
root@node1:/dev/rdsk# projmod -sK "project.max-shm-ids=(priv,128,deny)" user.oracle
root@node1:/dev/rdsk# projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.oracle
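You can verify the resulting project attributes with projects(1):

root@node1:/dev/rdsk# projects -l user.grid
root@node1:/dev/rdsk# projects -l user.oracle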

SSH keypairing.
Oracle recommends letting the OUI take care of the keypair generation steps, but to avoid the
remote nodes closing the connection prematurely we change LoginGraceTime to 0 (no time limit) in
/etc/ssh/sshd_config and then restart ssh:

svcadm restart ssh
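For reference, the edit itself can be scripted with a perl one-liner before the restart (hedged: the stock config may have the directive commented out, so verify the resulting line in /etc/ssh/sshd_config afterwards):

# set LoginGraceTime to 0 whether it was commented out or not
perl -pi -e 's/^#?LoginGraceTime.*/LoginGraceTime 0/' /etc/ssh/sshd_config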

Time settings.
Time is always important; if you're not on time you might get fired, and the same thing goes for RAC: it needs to be in
sync on all nodes. To achieve that we can use NTP or the Cluster Time Synchronization daemon, also
known as CTSSD. I don't have the means of using NTP here, so I'm turning it off to let ctssd take care of
timing:

svcadm disable ntp

Create the second node

Turn off node1 and then clone the boot disk from node1:

VBoxManage clonehd sol-11_2-beta-vbox-disk1.vmdk ../node2/sol-11_2-beta-vbox-disk1.vmdk --format vmdk

Create a new host in VirtualBox with the same configuration as node1, but DON'T clone the
machine, as this destroyed my ASM disk configuration (but please prove me wrong).

Network settings:

Storage Settings :

Then add the ASM disks again but this time to node2:

for i in {1..10}; do VBoxManage storageattach node_2 --storagectl "SATA" --port $i --device 0 --type hdd --medium asm_disk$i.vdi --mtype shareable; done

Now we can start node2, but for now keep node1 turned off so we can change the IPs.
Log in with your username created on node1.
Then change the hostname and reconfigure net1 and net2 (public and private interfaces):

root@node1:~# svccfg -s svc:/system/identity:node setprop config/nodename=node2
root@node1:~# svcadm refresh svc:/system/identity:node
root@node1:~# svcadm restart svc:/system/identity:node
root@node1:~# ipadm delete-ip net1
root@node1:~# ipadm delete-ip net2
root@node1:~# ipadm create-ip net1
root@node1:~# ipadm create-addr -T static -a local=192.168.58.51/24 net1/v4
root@node1:~# ipadm create-ip net2
root@node1:~# ipadm create-addr -T static -a local=192.168.57.51/24 net2/v4
root@node1:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           dhcp     ok           10.0.2.15/24
net1/v4           static   ok           192.168.58.51/24
net2/v4           static   ok           192.168.57.51/24
net3/v4           dhcp     ok           192.168.56.102/24
lo0/v6            static   ok           ::1/128
net0/v6           addrconf ok           fe80::a00:27ff:fed1:d0f/10
net3/v6           addrconf ok           fe80::a00:27ff:fea1:9e2c/10
root@node1:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
net1              Ethernet             up         1000   full      e1000g1
net2              Ethernet             up         1000   full      e1000g2
net3              Ethernet             up         1000   full      e1000g3

Reboot and start node1 again, and we should be ready to install RAC.
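Before starting the installer, it is worth a quick connectivity check between the nodes using the names from /etc/hosts (a hedged sanity check; Solaris ping just answers "... is alive"):

root@node2:~# ping node1
root@node2:~# ping node1-priv
root@node1:~# ping node2
root@node1:~# ping node2-priv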

Install RAC
Now we can start installing GRID itself. But in order to make the installation go smoothly we need
to check all the prerequisites.
First set up user equivalence for the grid user by running the sshUserSetup script shipped with the OUI.

From Node2:

./sshsetup/sshUserSetup.sh -user grid -hosts "node1 node2" -noPromptPassphrase

Answer all the questions and the ssh setup should be done! WRONG! For some reason we need to add the
node2 public key to node1:

cat /export/home/grid/.ssh/id_rsa.pub | ssh node1 'cat - >> ~/.ssh/authorized_keys'

Enter the password and THEN we are done.
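To confirm user equivalence really works both ways, each of these should print the remote date without asking for a password:

grid@node1:~$ ssh node2 date
grid@node2:~$ ssh node1 date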

Now we can verify the prerequisites. First we check the hardware and operating system configuration:

grid@node2:/mnt/sf_software/grid$ ./runcluvfy.sh stage -post hwos -n node1,node2 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "node2"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node2                                 yes
  node1                                 yes
Result: Node reachability check passed from node "node2"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  node2                                 passed
  node1                                 passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  node2                                 passed
  node1                                 passed

Verification of the hosts config file successful

Interface information for node "node2"
  Name   IP Address       Subnet          Gateway          Def. Gateway  HW Address         MTU
  ------ ---------------  --------------- ---------------  ------------  -----------------  ------
  net0   10.0.2.15        10.0.2.0        10.0.2.15        10.0.2.2      08:00:27:D1:0D:0F  1500
  net1   192.168.58.51    192.168.58.0    192.168.58.51    10.0.2.2      08:00:27:3C:1C:82  1500
  net2   192.168.57.51    192.168.57.0    192.168.57.51    10.0.2.2      08:00:27:EF:BB:59  1500
  net3   192.168.56.102   192.168.56.0    192.168.56.102   10.0.2.2      08:00:27:A1:9E:2C  1500

Interface information for node "node1"
  Name   IP Address       Subnet          Gateway          Def. Gateway  HW Address         MTU
  ------ ---------------  --------------- ---------------  ------------  -----------------  ------
  net0   10.0.2.15        10.0.2.0        10.0.2.15        10.0.2.2      08:00:27:9F:3B:5B  1500
  net1   192.168.58.50    192.168.58.0    192.168.58.50    10.0.2.2      08:00:27:C0:FF:2B  1500
  net2   192.168.57.50    192.168.57.0    192.168.57.50    10.0.2.2      08:00:27:64:5A:AD  1500
  net3   192.168.56.101   192.168.56.0    192.168.56.101   10.0.2.2      08:00:27:64:F7:C3  1500

Check: Node connectivity of subnet "10.0.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2[10.0.2.15]                node1[10.0.2.15]                yes
Result: Node connectivity passed for subnet "10.0.2.0" with node(s) node2,node1

Check: TCP connectivity of subnet "10.0.2.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:10.0.2.15                 node1:10.0.2.15                 failed

ERROR:
PRVF-7617 : Node connectivity between "node2 : 10.0.2.15" and "node1 : 10.0.2.15" failed
Result: TCP connectivity check failed for subnet "10.0.2.0"

Check: Node connectivity of subnet "192.168.58.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2[192.168.58.51]            node1[192.168.58.50]            yes
Result: Node connectivity passed for subnet "192.168.58.0" with node(s) node2,node1

Check: TCP connectivity of subnet "192.168.58.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:192.168.58.51             node1:192.168.58.50             passed
Result: TCP connectivity check passed for subnet "192.168.58.0"

Check: Node connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2[192.168.57.51]            node1[192.168.57.50]            yes
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) node2,node1

Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:192.168.57.51             node1:192.168.57.50             passed
Result: TCP connectivity check passed for subnet "192.168.57.0"

Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2[192.168.56.102]           node1[192.168.56.101]           yes
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) node2,node1

Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node2:192.168.56.102            node1:192.168.56.101            passed
Result: TCP connectivity check passed for subnet "192.168.56.0"

Interfaces found on subnet "10.0.2.0" that are likely candidates for VIP are:
node2 net0:10.0.2.15
node1 net0:10.0.2.15

Interfaces found on subnet "192.168.58.0" that are likely candidates for a private interconnect are:
node2 net1:192.168.58.51
node1 net1:192.168.58.50

Interfaces found on subnet "192.168.57.0" that are likely candidates for a private interconnect are:
node2 net2:192.168.57.51
node1 net2:192.168.57.50

Interfaces found on subnet "192.168.56.0" that are likely candidates for a private interconnect are:
node2 net3:192.168.56.102
node1 net3:192.168.56.101

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.0.2.0".
Subnet mask consistency check passed for subnet "192.168.58.0".
Subnet mask consistency check passed for subnet "192.168.57.0".
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed.

ERROR:
PRVG-1172 : The IP address "10.0.2.15" is on multiple interfaces "net0,net0" on nodes "node2,node1"

Result: Node connectivity check failed

Checking multicast communication...

Checking subnet "10.0.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.0.2.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Time zone consistency
Result: Time zone consistency check passed

Checking shared storage accessibility...

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1d0s8 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1d0s9 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s0 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s4 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s5 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s6 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s7 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s8 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

WARNING:
node1:Cannot verify the shared state for device /dev/dsk/c1t1d0s9 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes:
        node2,node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t10d0s0                    node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t1d0s0                     node2 node1
  /dev/dsk/c2t1d0s1                     node2 node1
  /dev/dsk/c2t1d0s2                     node2 node1
  /dev/dsk/c2t1d0s3                     node2 node1
  /dev/dsk/c2t1d0s4                     node2 node1
  /dev/dsk/c2t1d0s5                     node2 node1
  /dev/dsk/c2t1d0s6                     node2 node1
  /dev/dsk/c2t1d0s7                     node2 node1
  /dev/dsk/c2t1d0s8                     node2 node1
  /dev/dsk/c2t1d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t2d0s0                     node2 node1
  /dev/dsk/c2t2d0s1                     node2 node1
  /dev/dsk/c2t2d0s2                     node2 node1
  /dev/dsk/c2t2d0s3                     node2 node1
  /dev/dsk/c2t2d0s4                     node2 node1
  /dev/dsk/c2t2d0s5                     node2 node1
  /dev/dsk/c2t2d0s6                     node2 node1
  /dev/dsk/c2t2d0s7                     node2 node1
  /dev/dsk/c2t2d0s8                     node2 node1
  /dev/dsk/c2t2d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t3d0s0                     node2 node1
  /dev/dsk/c2t3d0s1                     node2 node1
  /dev/dsk/c2t3d0s2                     node2 node1
  /dev/dsk/c2t3d0s3                     node2 node1
  /dev/dsk/c2t3d0s4                     node2 node1
  /dev/dsk/c2t3d0s5                     node2 node1
  /dev/dsk/c2t3d0s6                     node2 node1
  /dev/dsk/c2t3d0s7                     node2 node1
  /dev/dsk/c2t3d0s8                     node2 node1
  /dev/dsk/c2t3d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t4d0s0                     node2 node1
  /dev/dsk/c2t4d0s1                     node2 node1
  /dev/dsk/c2t4d0s2                     node2 node1
  /dev/dsk/c2t4d0s3                     node2 node1
  /dev/dsk/c2t4d0s4                     node2 node1
  /dev/dsk/c2t4d0s5                     node2 node1
  /dev/dsk/c2t4d0s6                     node2 node1
  /dev/dsk/c2t4d0s7                     node2 node1
  /dev/dsk/c2t4d0s8                     node2 node1
  /dev/dsk/c2t4d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t5d0s0                     node2 node1
  /dev/dsk/c2t5d0s1                     node2 node1
  /dev/dsk/c2t5d0s2                     node2 node1
  /dev/dsk/c2t5d0s3                     node2 node1
  /dev/dsk/c2t5d0s4                     node2 node1
  /dev/dsk/c2t5d0s5                     node2 node1
  /dev/dsk/c2t5d0s6                     node2 node1
  /dev/dsk/c2t5d0s7                     node2 node1
  /dev/dsk/c2t5d0s8                     node2 node1
  /dev/dsk/c2t5d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t6d0s0                     node2 node1
  /dev/dsk/c2t6d0s1                     node2 node1
  /dev/dsk/c2t6d0s2                     node2 node1
  /dev/dsk/c2t6d0s3                     node2 node1
  /dev/dsk/c2t6d0s4                     node2 node1
  /dev/dsk/c2t6d0s5                     node2 node1
  /dev/dsk/c2t6d0s6                     node2 node1
  /dev/dsk/c2t6d0s7                     node2 node1
  /dev/dsk/c2t6d0s8                     node2 node1
  /dev/dsk/c2t6d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t7d0s0                     node2 node1
  /dev/dsk/c2t7d0s1                     node2 node1
  /dev/dsk/c2t7d0s2                     node2 node1
  /dev/dsk/c2t7d0s3                     node2 node1
  /dev/dsk/c2t7d0s4                     node2 node1
  /dev/dsk/c2t7d0s5                     node2 node1
  /dev/dsk/c2t7d0s6                     node2 node1
  /dev/dsk/c2t7d0s7                     node2 node1
  /dev/dsk/c2t7d0s8                     node2 node1
  /dev/dsk/c2t7d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t8d0s0                     node2 node1
  /dev/dsk/c2t8d0s1                     node2 node1
  /dev/dsk/c2t8d0s2                     node2 node1
  /dev/dsk/c2t8d0s3                     node2 node1
  /dev/dsk/c2t8d0s4                     node2 node1
  /dev/dsk/c2t8d0s5                     node2 node1
  /dev/dsk/c2t8d0s6                     node2 node1
  /dev/dsk/c2t8d0s7                     node2 node1
  /dev/dsk/c2t8d0s8                     node2 node1
  /dev/dsk/c2t8d0s9                     node2 node1

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/dsk/c2t9d0s0                     node2 node1
  /dev/dsk/c2t9d0s1                     node2 node1
  /dev/dsk/c2t9d0s2                     node2 node1
  /dev/dsk/c2t9d0s3                     node2 node1
  /dev/dsk/c2t9d0s4                     node2 node1
  /dev/dsk/c2t9d0s5                     node2 node1
  /dev/dsk/c2t9d0s6                     node2 node1
  /dev/dsk/c2t9d0s7                     node2 node1
  /dev/dsk/c2t9d0s8                     node2 node1
  /dev/dsk/c2t9d0s9                     node2 node1

Shared storage check was successful on nodes "node2,node1"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Post-check for hardware and operating system setup was unsuccessful on all the nodes.

As you can see we have a network error for net0. This is expected since both nodes have the
same IP on that network. It is not important, however, since net0 is only used for internet
communication, so it can safely be ignored.

Now we can start verifying the cluster service setup by running:

runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose


This first gives me an expected error regarding memory, since I only have 2GB and I need 4GB.
Again, net0 gives an error.

I also have an error about the soft limit for maximum open file descriptors on node1.

runcluvfy can try to fix the shared memory parameters, the open file descriptor limit and the UDP
send/receive parameters according to this, using the -fixup parameter:

runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
...
..
.
******************************************************************************************
Following is the list of fixable prerequisites selected to fix in this session
******************************************************************************************

--------------                --------------- ----------------
Check failed.                 Failed on nodes Reboot required?
--------------                --------------- ----------------
Soft Limit: maximum open      node1           yes
file descriptors

Execute "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on nodes "node1" to perform the fix up operations manually

Press ENTER key to continue after execution of "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" has completed on nodes "node1"

Fix: Soft Limit: maximum open file descriptors
  Node Name                             Status
  ------------------------------------  ------------------------
  node1                                 successful
Result: "Soft Limit: maximum open file descriptors" was successfully fixed on all the applicable nodes

Fix up operations were successfully completed on all the applicable nodes

Pre-check for cluster services setup was unsuccessful on all the nodes.

Then restart node1.

Install the Grid software


Start the installer:

grid@node1:/mnt/sf_software/grid$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 180 MB.   Actual 4610 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4840 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-05-05_11-57-29AM. Please wait ...grid@node1:/mnt/sf_software/grid$

Skip the software updates.

Choose Install and Configure Grid for a Cluster.

This time we will set up a standard cluster.

Use basic configuration.


Add the SCAN name and the second node.

Set up the correct network interfaces.

Select Oracle Automatic Storage Management.

Create the ASM diskgroup by choosing the disks.

Select the oraInventory location (default).

If you didn't set up sudoers, do this manually later:

Memory will fail! But the rest is fixable (worth noting is that cluvfy didn't see this, or something
changed). Fix by running the script as root on both nodes, then recheck.

Ignore the remaining warnings and proceed.

Review, install and grab some Java:

Execute the root scripts when prompted.

Running the script should complete without any errors; if it doesn't, check the installation logs before continuing.

root@node1:/oracle/app/12.1.0.1/grid# ./root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/12.1.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/12.1.0.1/grid/crs/install/crsconfig_params
2014/05/08 10:38:41 CLSRSC-363: User ignored prerequisites during installation

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded

ASM created and started successfully.

Disk Group DATA created successfully.

CRS-2672: Attempting to start 'ora.storage' on 'node1'
CRS-2676: Start of 'ora.storage' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 02dac02ad6df4f67bffa90ee10b6847e.
Successful addition of voting disk 9c07ac5062d94f17bf5bf340ba08ff0a.
Successful addition of voting disk 3767b33b50bd4f3dbf5ddee4de361f30.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name              Disk group
--  -----    -----------------                 ---------              ---------
 1. ONLINE   02dac02ad6df4f67bffa90ee10b6847e  (/dev/rdsk/c2t10d0s0)  [DATA]
 2. ONLINE   9c07ac5062d94f17bf5bf340ba08ff0a  (/dev/rdsk/c2t1d0s0)   [DATA]
 3. ONLINE   3767b33b50bd4f3dbf5ddee4de361f30  (/dev/rdsk/c2t2d0s0)   [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.storage' on 'node1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'node1'
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'node1'
CRS-2676: Start of 'ora.storage' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: node1
CRS-6016: Resource auto-start has completed for server node1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/05/08 10:51:52 CLSRSC-343: Successfully started Oracle clusterware stack

CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'node1'
CRS-2676: Start of 'ora.DATA.dg' on 'node1' succeeded
2014/05/08 10:53:49 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

root@node1:/oracle/app/12.1.0.1/grid# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.

Then also execute the scripts on node2, and we are all done.
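As a final sanity check, crsctl from the grid home should report the stack healthy on both nodes (paths as set up earlier in this guide):

root@node1:~# /oracle/app/12.1.0.1/grid/bin/crsctl check cluster -all
root@node1:~# /oracle/app/12.1.0.1/grid/bin/crsctl stat res -t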

Next up is installing the database. That's the easy part, so if you are fine up to here, then I can
promise you are all done.
