Posted by Kamran Agayev A. on April 5, 2011 In this guide I'm going to show you the process of creating an Oracle 10g R2 RAC on OEL4. First of all, I have to mention that I've prepared this guide based on the well known RAC-on-VMware guide by Vincent Chan, which can be found at OTN. After using that guide I decided to create a more screenshot-based guide and prepared this blog post, which contains 150 screenshots! These days wordpress.com is working very slowly, so it took me two days to upload all the images and create this blog post. That was really boring. But now the blog post is online and I would be glad to hear the visitors' valuable feedback. In this tutorial we use OEL4 because Oracle 10g R2 is not compatible with OEL5 (the database was released before that OS). As in all my tutorials, I use VMware virtual machines. In this guide I will create two virtual machines. Let's start creating the first machine. But before that, create three directories inside one folder (for example c:\vmware):

C:\vmware\rac1
C:\vmware\rac2
C:\vmware\sharedstorage

Create the first virtual machine in the first folder, and create all shared storage in the third folder
Click Next
Provide the name of the virtual machine (rac1), select the location for the VMware disk (you can make it c:\vmware\rac1) and click Next
Define the size of the virtual machine's hard drive and click Next (set it to 20 GB and don't check the Allocate all disk space now checkbox)
Mount the ISO image of the OEL4 installation and start adding four more hard drives and one more Ethernet device. Click the Add button
For the first device, specify the disk size as 3 GB, check Allocate all disk space now and click Next
Create a separate folder named C:\vmware\sharedstorage on your hard drive and set the name of the new hard drive to ocfs2disk.vmdk.
After creating the first device, create three more devices of 4 GB in size (asmdisk1.vmdk, asmdisk2.vmdk, asmdisk3.vmdk), make all of them Independent -> Persistent and don't allocate the disk space for them
Next, start changing the device node for each of them. Start with the first added hard drive, select it, click the Advanced button and make it SCSI 1:0. Make the next hard drive 1:1 and so on
Make sure that the final state of your virtual machine matches what's shown above
Then locate the configuration file of the virtual machine and start editing it
Add the lines that are marked in bold to the configuration file to make the devices shared between the two nodes:
- Setting disk.locking to FALSE allows a virtual machine to load a SCSI disk device even if it's in use by another virtual machine
- diskLib.dataCacheMaxSize = 0 turns off disk caching for clustered virtual machines
- scsi1.sharedBus = virtual gives the whole bus the ability to be shared, which prevents the locking of the individual disks
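Put together, the shared-disk section of rac1's .vmx file ends up looking roughly like the following. This is only a sketch: the exact entries and device numbering depend on your VMware Workstation version (a reader's working configuration appears in the comments below), and the .vmdk paths are the ones created earlier:

```text
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "C:\vmware\sharedstorage\ocfs2disk.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "C:\vmware\sharedstorage\asmdisk1.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "C:\vmware\sharedstorage\asmdisk2.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "C:\vmware\sharedstorage\asmdisk3.vmdk"
scsi1:3.mode = "independent-persistent"
```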
As you have already mounted the ISO image of the OEL4, the above screen appears
Click Skip
Click Next
Select Disk Druid for disk partitioning method and click Next
Specify / as the mount point, make its file system ext3 and set the End Cylinder to 900 (to make the size of the root partition about 7 GB). Check Force to be a primary partition and click OK
Select swap as the File System Type and change the End Cylinder to 1170
Create a mount point called /u01, make its file system ext3, set the End Cylinder to 2610 and click OK
Make sure that the final state of your disk partitioning matches what's shown above
Now let's configure the network devices. Select the first device and click Edit
Uncheck Configure using DHCP and provide the following IP address and netmask: IP Address: 192.168.2.131 Netmask: 255.255.255.0
Select the second device, edit it, uncheck Configure using DHCP and provide the following IP address and netmask: IP Address: 10.10.10.31 Netmask: 255.255.255.0
Set the hostname to rac1.test.az (you can use any domain name) and set the gateway to 192.168.2.1
Select the default language for the system and click Next
Provide the password for the root user and click next
Select the packages necessary for the Oracle installation. Here's the list of the necessary package groups:
X Window System
Gnome Desktop Environment
Editors
Graphical Internet
Server Configuration Tools
Legacy Network Server (click Details and select rsh-server and telnet-server)
Development Tools
Legacy Software Development
Administration Tools
System Tools (select all packages that start with ocfs2 and oracleasm, and select sysstat as well)
Click Next
Now let's install VMware Tools. For this, disconnect the mounted ISO image and choose Install VMware Tools from the VM menu
Double click on the VMware Tools icon and run the .rpm file by double clicking on it
After the window closes, open a new terminal, run vmware-config-tools.pl and finish the installation of VMware Tools
To synchronize the time on the virtual machine with the host machine, execute vmware-toolbox in the terminal window and check the time synchronization check box
Edit the /boot/grub/grub.conf file and add clock=pit nosmp noapic nolapic to the line that reads kernel /boot. The clock=pit option prevents the clock from running too quickly, and nosmp noapic nolapic prevent it from running too slowly. After you make the change, reboot the machine for it to take effect
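After the edit, the kernel line in grub.conf looks something like the following (the kernel version shown here is only an example; keep whatever version your installation has):

```text
kernel /boot/vmlinuz-2.6.9-55.0.2.EL ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
```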
Now let's start the prerequisite steps for the Oracle installation. For this we'll create the groups, the oracle user and some directories as root:

groupadd oinstall
groupadd dba
mkdir -p /export/home/oracle
useradd -g oinstall -G dba -d /export/home/oracle oracle
chown oracle:oinstall /export/home/oracle
mkdir /ocfs
passwd oracle
Change the .bash_profile (and .bashrc) file and add the following lines:
export EDITOR=vi
export ORACLE_SID=devdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
Now switch to the oracle user with the su - oracle command. Make sure all environment variables are set (echo $ORACLE_HOME). After that, create the following directories:

su - oracle
mkdir -p $ORACLE_BASE/admin
mkdir -p $ORACLE_HOME
mkdir -p $ORA_CRS_HOME
mkdir -p /u01/oradata/devdb

Note that if the environment variables are not set correctly, the above mentioned directories will not be created.
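Before running the mkdir commands it's worth confirming that the variables really are set, because mkdir -p $ORACLE_BASE/admin with an unset ORACLE_BASE would silently create ./admin in the current directory. A minimal check (just an illustration, not part of the original guide):

```shell
# Print each required variable, flagging any that are missing
for v in ORACLE_BASE ORACLE_HOME ORA_CRS_HOME; do
  val=$(printenv "$v")
  if [ -z "$val" ]; then
    echo "$v is NOT set"
  else
    echo "$v=$val"
  fi
done
```

If any line reports NOT set, re-check .bash_profile before creating the directories.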
As the root user, change the /etc/security/limits.conf file and add the following lines:

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Now mount the third installation CD of OEL4, connect it and open a new terminal. Switch to the RPMS folder on the CD and install the libaio-0.3.105-2.i386.rpm and openmotif21-2.1.30-11.RHEL4.6.i386.rpm packages with rpm -Uvh
Then, as the root user, add the following kernel parameters to the /etc/sysctl.conf file:

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
fs.file-max = 65536
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144

and run the following command to make them effective:

/sbin/sysctl -p
Now lets configure the network configuration files. For this we need to add IP addresses and hostnames to the /etc/hosts file in each node and test the connection by pinging the hostnames
Besides the default entry

127.0.0.1 localhost

add the public, private and VIP hostnames of both nodes. Then try the connection by pinging all hostnames (don't ping the VIP addresses, as they will only be brought up during the clusterware installation):
ping rac1.test.az
ping rac1-priv.test.az
and so on
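For reference, a hosts file consistent with the addresses used in this guide would look like the following. The public and private addresses are the ones assigned during installation; the VIP addresses (192.168.2.31/32) are assumptions chosen from the same public subnet (they match the file a reader posted in the comments below), so adjust them to whatever unused addresses you pick:

```text
127.0.0.1      localhost
192.168.2.131  rac1.test.az       rac1
192.168.2.31   rac1-vip.test.az   rac1-vip
10.10.10.31    rac1-priv.test.az  rac1-priv
192.168.2.132  rac2.test.az       rac2
192.168.2.32   rac2-vip.test.az   rac2-vip
10.10.10.32    rac2-priv.test.az  rac2-priv
```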
Now start creating disk partitions for OCFS2 and ASM. /dev/sdb will be used for OCFS2 and the remaining devices will be used for ASM:

fdisk /dev/sdb

Type n (to create a new partition)
Type p (to create a primary partition)
Type 1 (the partition number)
Press Enter twice (to accept the default first and last cylinders)
Type w (to save the changes)

Perform the same steps for the remaining hard disks:
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
To map the raw devices to the shared partitions, change /etc/sysconfig/rawdevices file:
/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1

And run the following command to make it effective:

/sbin/service rawdevices restart

Then change the permissions for all newly created raw devices:
chown oracle:dba /dev/raw/raw[1-3]
chmod 660 /dev/raw/raw[1-3]
ls -lat /dev/raw/raw*

Next, switch to the oracle user and create links for the raw devices:
su - oracle
ln -sf /dev/raw/raw1 /u01/oradata/devdb/asmdisk1
ln -sf /dev/raw/raw2 /u01/oradata/devdb/asmdisk2
ln -sf /dev/raw/raw3 /u01/oradata/devdb/asmdisk3
As the raw device permissions are reset on boot, change the /etc/udev/permissions.d/50-udev.permissions file as the root user and add the following lines:
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660

After performing all the above steps, shut down the virtual machine. Then copy all of its files to another directory (c:\vmware\rac2)
Open it, switch to the Options tab, change its name to rac2 and start it
Open Network Configuration and change addresses of each Ethernet device. eth0 192.168.2.132 eth1 10.10.10.32
Then, on the Hardware Device tab, click the Probe button to get a new MAC address, enable both network devices, change the hostname to rac2.test.az and press Ctrl+S to save the changes. Then add the following line to the /etc/hosts file: 127.0.0.1 localhost
Now it's time to establish user equivalence with SSH. Oracle Universal Installer installs the binaries on one node and then propagates the files to the other nodes; for this it uses the ssh and scp commands in the background during installation to run remote commands on and copy files to the other cluster nodes. So SSH must be configured so that these commands do not prompt for a password. For this, power on the first machine, log in as root, switch to the oracle user and generate the RSA and DSA key pairs:

su - oracle
ssh-keygen -t rsa (press Enter twice)
ssh-keygen -t dsa (press Enter twice)

Perform the same steps on the second node (rac2)
Now (from rac1) add the generated keys to the ~/.ssh/authorized_keys file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Then from rac1, SSH to rac2 and append its RSA and DSA public keys to the authorized_keys file located on the first node:
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Now copy the authorized_keys file from rac1 to rac2:
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

After performing all the above steps, you should be able to open an SSH connection from rac1 to rac2 and vice versa without being prompted for a password. So run the following commands on both nodes and make sure you're not prompted for a password the second time:
view source
print?
0 1 0 2 0 3 0 4 0 5 0 6
0 7 0 8 0 9 1 0 1 1 1 2 1 3 1 4 1 5
I want to note again Please make sure that after running above commands on each node, youre not prompted for the password for the second time
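If you want to see the mechanics of the key setup in isolation, here is a dry run in a scratch directory. This is only an illustration: on the real nodes the files live under ~/.ssh, the DSA key is generated the same way, and the appends run over ssh between the nodes:

```shell
# Work in a throwaway directory instead of ~/.ssh
dir=$(mktemp -d)

# -N '' gives an empty passphrase, matching "press Enter twice" in the guide
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"

# Append the public key to authorized_keys, as done for both nodes' keys above
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"

# authorized_keys now holds the public key; on the cluster this file is
# copied to every node so ssh/scp stop prompting for a password
grep -c ssh-rsa "$dir/authorized_keys"
```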
Now let's configure Oracle ASM (Automatic Storage Management). Run the following command on both nodes as the root user:

/etc/init.d/oracleasm configure

Enter oracle as the default user, dba as the default group, and y for the third and fourth prompts. Then create the ASM disks on any node (say, the first node, rac1) as the root user:

/etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk /dev/sdc1 as an ASM disk: [ OK ]
/etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
Marking disk /dev/sdd1 as an ASM disk: [ OK ]
/etc/init.d/oracleasm createdisk VOL3 /dev/sde1
Marking disk /dev/sde1 as an ASM disk: [ OK ]

Verify that the ASM disks are visible from every node:

/etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
/etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
Now let's configure Oracle Cluster File System (OCFS2). For this, run ocfs2console as the root user on the first node. Then from the Cluster menu select Configure Nodes, click the Add button and add both nodes: rac1 and rac2
Then propagate the configuration to the second node. For this select Propagate Configuration from the Cluster menu.
To configure O2CB to start at boot, unload and reconfigure it on both nodes as the root user:

/etc/init.d/o2cb unload
/etc/init.d/o2cb configure
Now format the file system on the first node (rac1). For this run ocfs2console program, select Format from the Tasks menu and click OK to format the drive. Press Ctrl+Q to quit
Now execute the following command on both nodes to mount the file system:

mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs

and add the following line to /etc/fstab to mount the file system on boot:

/dev/sdb1 /ocfs ocfs2 _netdev,datavolume,nointr 0 0
Create a clusterware directory under /ocfs folder and change the owner:
mkdir /ocfs/clusterware
chown -R oracle:dba /ocfs

Now, to test the shared device, create a file in the /ocfs directory from the first node (rac1) and check that it appears in the same folder on the second node:

cd /ocfs
touch test_file
ls

Now download the clusterware installation, copy it under the /tmp directory, unzip it and start the installation:

./runInstaller
Provide the folder for the Inventory and click Next
After checking all prerequisites the installer should not give any warnings, so click Next
Click on Add button and provide the information on the second node: Public Node Name: rac2.test.az Private Node Name: rac2-priv.test.az
Click on Edit button, change the Interface type of the first Ethernet device (eth0) to Public and the second to Private
Select External Redundancy and provide the location for OCR : /ocfs/clusterware/ocr
Select External Redundancy and provide the location for Voting Disk /ocfs/clusterware/votingdisk
After the installation completes, run both scripts on both nodes:
Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac1 and rac2 (wait for the script to complete before running it on the second node)
Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac1 and rac2 (wait for the script to complete before running it on the second node)
After running the second script on the second node (rac2) you'll get an error (on running VIPCA), so you need to run it manually. Switch to the following directory:

cd /u01/app/oracle/product/10.2.0/crs_1/bin

and run ./vipca to create and configure the VIPs
Select the first Ethernet device and click Next
Type rac1-vip in the IP Alias Name field for the first node (rac1). The rest of the boxes will be filled automatically. Click Next
After vipca finishes successfully, switch to the first node and click the OK button in the script-running window.
Now copy the database installation (Oracle 10g R2) to the /tmp directory, unzip it and start the installation. You need to run the installer as the oracle user, so first run xhost + as root to allow connections to the X server, then switch to the oracle user:

xhost +
su - oracle
./runInstaller

Let's install just the software, so check Install database Software only and click Next
Execute the mentioned script on both nodes (wait for the script to finish before running it on the second node)
After installation finishes, run dbca (Database Configuration Assistant), select Oracle Real Application Clusters database and click Next
Provide the password for the ASM instance, select Create initialization parameter file (IFILE) and click Next
Now let's create the ASM disk groups. For this, click the Create New button
Provide the name of the disk group (dg1), select External redundancy, select two disks (raw1, raw2) and click OK
The disk group will not be mounted on the second node, so ignore this warning for now. The second node will have to be restarted (but not yet)
Create the second disk group for the flash recovery area (fg), select External redundancy, select the last device (raw3) and click OK
As you see, the state of the disk group shows that it's not mounted on the second node. To fix this, we need to restart the second node. Click Finish and restart the second node. After it starts, log in as root and call dbca from the first node again.
Move to the above window again and you'll see that the disk group is mounted on both nodes. Click the Finish button
Uncheck Configure the Database with Enterprise Manager, as it takes too long (some hours) to finish (however, if you have enough RAM, you can check it), and click Next
Provide the password for the SYS user and click Next
Check Specify Flash Recovery Area and chose FG disk group and click Next
Click Next
Click Next
Click Next
After some hours (as I was running each virtual machine with 1GB RAM) this screen appears. Click Exit
Finally, check the status of the Clusterware. As you see, some applications are in OFFLINE state. To bring them online, stop and start them with the SRVCTL utility as shown above
Then check the CRS status again. As you see, the State column of all applications now shows ONLINE
Now connect to the database from the first node (rac1) and run the following query:

SQL> col host_name format a20
SQL> SELECT instance_name, host_name, thread#, status FROM gv$instance;
Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> exit

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL>
SQL> col file_name format a45
SQL> select file_name, bytes/1024/1024 "size" from dba_data_files;

FILE_NAME                                        size
--------------------------------------------- -------
+DG1/devdb/datafile/users.259.747087235             5
+DG1/devdb/datafile/sysaux.257.747087221          240
+DG1/devdb/datafile/undotbs1.258.747087233         25
+DG1/devdb/datafile/system.256.747087209          480
+DG1/devdb/datafile/undotbs2.264.747088231         25

SQL>
SQL> col member format a45
SQL> select group#, type, member from v$logfile;

GROUP# TYPE    MEMBER
------ ------- ---------------------------------------------
     2 ONLINE  +DG1/devdb/onlinelog/group_2.262.747087539
     2 ONLINE  +FG/devdb/onlinelog/group_2.258.747087547
     1 ONLINE  +DG1/devdb/onlinelog/group_1.261.747087519
     1 ONLINE  +FG/devdb/onlinelog/group_1.257.747087533
     3 ONLINE  +DG1/devdb/onlinelog/group_3.265.747132209
     3 ONLINE  +FG/devdb/onlinelog/group_3.259.747132221
     4 ONLINE  +DG1/devdb/onlinelog/group_4.266.747132235
     4 ONLINE  +FG/devdb/onlinelog/group_4.260.747132249

8 rows selected.

SQL>
SQL> select group_number, name, state, type, total_mb, usable_file_mb from v$asm_diskgroup;

GROUP_NUMBER NAME  STATE   TYPE   TOTAL_MB USABLE_FILE_MB
------------ ----- ------- ------ -------- --------------
           1 DG1   MOUNTED EXTERN     8188           7048
           2 FG    MOUNTED EXTERN     4094           3760

SQL>
Now let's create a Service. Services are used to manage the workload in a RAC environment and provide high availability. To create the service, run dbca
When you specify PREFERRED instances, you are specifying the number of instances on which a service will normally run. The Oracle Clusterware attempts to ensure that the service always runs on the number of nodes for which you have configured the service. Afterwards, due to either instance failure or planned service relocations, a service may be running on an AVAILABLE instance
Select Preferred for the first instance and Available for the second instance, change the TAF policy to Basic and click Finish
After the service is created, check the tnsnames.ora file and you'll see that the new entry has been added
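The generated entry looks roughly like the following. The service name MYSERVICE is a placeholder for whatever name you gave in dbca, and the exact FAILOVER_MODE settings are assumptions based on a Basic TAF policy, so compare against your own tnsnames.ora:

```text
MYSERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.test.az)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.test.az)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYSERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
```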
Try to connect to the database using this service. As you see, we're automatically connected to the first instance. Now let's check RAC high availability
For this, while connected to the first instance (devdb1) using the service, open a new terminal, connect to the first instance directly and shut it down
Now go back to the first session and query the v$instance view again. As you see, you'll automatically be failed over to the second instance. In this step by step tutorial I've used 150 screenshots to make the RAC installation easier for you. I hope you'll successfully install RAC and run your own tests. Good luck!
This entry was posted on April 5, 2011 at 1:16 pm and is filed under Administration, RAC issues. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
Hi kamran, im not able able to ping the rac2 machine. im getting destination host unreachable this what i have on both /etc/hosts file 127.0.0.1 localhost 192.168.2.131 rac1.test.ca rac1 192.168.2.31 rac1-vip.test.ca rac1-vip 10.10.10.31 rac1-priv.test.ca rac1-priv 192.168.2.132 rac2.test.ca rac2 192.168.2.32 rac2-vip.test.ca rac2-vip 10.10.10.32 rac2-priv.test.ca rac2-priv this is my network settings from the host OS C:\Users\Administrator>ipconfig
Windows IP Configuration Ethernet adapter Local Area Connection 2: Connection-specific DNS Suffix . : IPv4 Address. . . . . . . . . . . : 169.254.2.2 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : gateway.2wire.net Link-local IPv6 Address . . . . . : fe80::f42f:5673:3e63:3c42%14 IPv4 Address. . . . . . . . . . . : 192.168.2.22 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.2.1 Ethernet adapter Bluetooth Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::f5f4:c6cf:1a9a:83b9%19 IPv4 Address. . . . . . . . . . . : 192.168.65.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::4d71:e31c:3448:77a5%20 IPv4 Address. . . . . . . . . . . : 192.168.183.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . :
Tunnel adapter isatap.{81D569CB-A721-4718-881BF1B45A0F4E08}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{A0316E71-53CD-4AB8-B35FDDDD66EDDA8A}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter Teredo Tunneling Pseudo-Interface: Connection-specific DNS Suffix . : IPv6 Address. . . . . . . . . . . : 2001:0:4137:9e76:2067:deb:3f57:fde9 Link-local IPv6 Address . . . . . : fe80::2067:deb:3f57:fde9%15 Default Gateway . . . . . . . . . : :: Tunnel adapter isatap.{24B7D0DB-9097-4AC3-91EAC92E6350F04E}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : C:\Users\Administrator>
M. Imran said
May 27, 2011 at 8:07 pm
Dear Kamran, How I can check that my OLE 4.8 vmware is sharing disks. I have fallowed the steps carefully but my OCFS2 disk not mount at second node, I cant see it on second node after mounting it successfully on node1 (server of ocfs) I check /etc/ocfs2/cluster.conf that is properly formatted and when I check status of o2cb it shows me Active heartbeat at node1 only.
I am using vmware 7 workstation, my vmx file is as fallows. I am great-full for you help. .encoding = windows-1252 config.version = 8 virtualHW.version = 7 scsi0.present = TRUE scsi0.virtualDev = lsilogic memsize = 1024 scsi0:0.present = TRUE scsi0:0.fileName = boot_linux.vmdk ide1:0.present = TRUE ide1:0.fileName = C:\Program Files\VMware\VMware Workstation\linux.iso ide1:0.deviceType = cdrom-image floppy0.startConnected = FALSE floppy0.fileName = floppy0.autodetect = TRUE ethernet0.present = TRUE ethernet0.wakeOnPcktRcv = FALSE ethernet0.addressType = generated usb.present = TRUE ehci.present = TRUE sound.present = TRUE sound.fileName = -1 sound.autodetect = TRUE serial0.present = TRUE serial0.fileType = thinprint pciBridge0.present = TRUE pciBridge4.present = TRUE pciBridge4.virtualDev = pcieRootPort pciBridge4.functions = 8 pciBridge5.present = TRUE pciBridge5.virtualDev = pcieRootPort pciBridge5.functions = 8 pciBridge6.present = TRUE pciBridge6.virtualDev = pcieRootPort pciBridge6.functions = 8 pciBridge7.present = TRUE pciBridge7.virtualDev = pcieRootPort pciBridge7.functions = 8 vmci0.present = TRUE roamingVM.exitBehavior = go displayName = node1
guestOS = oraclelinux nvram = Oracle Enterprise Linux.nvram virtualHW.productCompatibility = hosted printers.enabled = TRUE extendedConfigFile = Oracle Enterprise Linux.vmxf ethernet0.generatedAddress = 00:0c:29:d6:b5:40 tools.syncTime = TRUE uuid.location = 56 4d 3d 26 e2 2e e0 18-34 10 06 cb 18 d6 b5 40 uuid.bios = 56 4d 3d 26 e2 2e e0 18-34 10 06 cb 18 d6 b5 40 cleanShutdown = FALSE replay.supported = TRUE replay.filename = scsi0:0.redo = pciBridge0.pciSlotNumber = 17 pciBridge4.pciSlotNumber = 21 pciBridge5.pciSlotNumber = 22 pciBridge6.pciSlotNumber = 23 pciBridge7.pciSlotNumber = 24 scsi0.pciSlotNumber = 16 usb.pciSlotNumber = 32 ethernet0.pciSlotNumber = 33 sound.pciSlotNumber = 34 ehci.pciSlotNumber = 35 vmci0.pciSlotNumber = 36 vmotion.checkpointFBSize = 16777216 ethernet0.generatedAddressOffset = 0 vmci0.id = 416724289 tools.remindInstall = FALSE ethernet0.connectionType = bridged disk.locking=FALSE diskLib.dataCacheMaxSize= 0 scsi0.sharedBus =virtual scsi0:1.deviceType= disk scsi0:1.present = TRUE scsi0:1.fileName = asm1.vmdk scsi0:1.mode = independent-persistent scsi0:2.deviceType= disk scsi0:2.present = TRUE
scsi0:2.fileName = asm2.vmdk scsi0:2.mode = independent-persistent scsi0:3.deviceType= disk scsi0:3.present = TRUE scsi0:3.fileName = asm3.vmdk scsi0:3.mode = independent-persistent scsi0:4.deviceType= disk scsi0:4.present = TRUE scsi0:4.fileName = ocr.vmdk scsi0:4.mode = independent-persistent ethernet1.present = TRUE ethernet1.connectionType = hostonly ethernet1.wakeOnPcktRcv = FALSE ethernet1.addressType = generated ethernet1.generatedAddress = 00:0c:29:d6:b5:4a scsi0:1.redo = scsi0:2.redo = scsi0:3.redo = scsi0:4.redo = ethernet1.pciSlotNumber = 37 ethernet1.generatedAddressOffset = 10 ide1:0.startConnected = FALSE unity.wasCapable = FALSE