
10/12/2014 Setting up Clustering in Linux - Initq
http://initq.com/index.php/Setting_up_Clustering_in_Linux
Download the needed ISOs
1. Download the latest RHEL 5 or RHEL 6 ISO.
2. Download the Openfiler ISO.
Install rhel and openfiler in Virtualbox
The following articles show how to set up RHEL and Openfiler in VirtualBox.
1. Use this article to set up 3 machines: 2 for the cluster nodes and 1 for luci. Setup RHEL in VirtualBox
2. You will need Openfiler for storage. It can simulate iSCSI or NAS. Setup openfiler in VirtualBox
3. Setup networking in VirtualBox
Setup Multipathing on the nodes
You will need to set up multipathing on both nodes to simulate path failures. Please refer to Setting up Multipathing on Linux to set this up.
Turn off iptables
Please turn off iptables on all 4 of your VMs: both nodes, luci, and openfiler.
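On RHEL 5 the firewall can be stopped for the current session and disabled on boot with the standard service tools. A quick sketch (in a production cluster you would instead open the required cluster ports):

```shell
# Stop iptables now and keep it off across reboots
# (run on node1, node2, luci, and openfiler)
service iptables stop
chkconfig iptables off
# Verify it will not start at boot
chkconfig --list iptables
```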
Install ricci on two rhel nodes
[root@node1 ~]# yum install ricci
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cluster | 1.5 kB 00:00
ClusterStorage | 1.5 kB 00:00
Server | 1.5 kB 00:00
VT | 1.3 kB 00:00
Setting up Install Process
Package ricci-0.12.2-64.el5.x86_64 already installed and latest version
Nothing to do
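The output above shows ricci is already installed. luci can only deploy the cluster if the ricci daemon is actually running on both nodes, so it is worth starting and enabling it now (a short sketch):

```shell
# Start ricci and make it persistent across reboots
# (run on node1 and node2)
service ricci start
chkconfig ricci on
```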
Install luci on luci VM
[root@luci ~]# yum install luci
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Cluster | 1.5 kB 00:00
ClusterStorage | 1.5 kB 00:00
Server | 1.5 kB 00:00
VT | 1.3 kB 00:00
Setting up Install Process
Package luci-0.12.2-64.el5.x86_64 already installed and latest version
Nothing to do
Initialize luci
[root@luci etc]# luci_admin password
The luci site has not been initialized.
To initialize it, execute
/usr/sbin/luci_admin init
[root@luci etc]# /usr/sbin/luci_admin init
Initializing the luci server


Creating the 'admin' user

Enter password:
Confirm password:

Please wait...
The admin password has been successfully set.
Generating SSL certificates...
The luci server has been successfully initialized
Access luci
Point your web browser to https://luci.localdomain:8084 to access luci
Install apache on both nodes
[root@node2 ~]# yum install httpd
Fix hosts files on both nodes, luci and openfiler
Make sure the /etc/hosts file on all 4 of your machines contains exactly the same entries.
192.168.1.203 luci luci.localdomain
192.168.1.202 node1
192.168.1.201 node2
192.168.1.200 openfiler
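To confirm that all four machines resolve these names identically, a quick check (a sketch; getent reads /etc/hosts through nsswitch, so each lookup should print the address from the hosts file):

```shell
# Run on each VM; every name should resolve to its /etc/hosts entry
getent hosts node1
getent hosts node2
getent hosts luci
getent hosts openfiler
```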
Set the ricci password on both nodes
[root@node1 ~]# passwd ricci
Create and setup cluster from luci gui
1. Cluster/ Create a New Cluster. Call it mycluster.
2. Add your two nodes with the ricci password you set on both nodes.
3. Check the boxes: Download Packages, Reboot the nodes, and Enable shared storage support.
Go to node 1 and node 2 and do:
[root@node1 ~]# tail -f /var/log/secure /var/log/messages
Go back to luci and click "Create Cluster". Now watch your screen on node1 and node2. Both nodes will be rebooted after the packages
are installed.
Check cluster.conf on both nodes
[root@node2 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="mycluster" config_version="4" name="mycluster">
<clusternodes>
<clusternode name="node1" nodeid="1" votes="1" />
<clusternode name="node2" nodeid="2" votes="1" />
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices/>
<rm/>
</cluster>
Turn on the cman service if there are issues
If you see any node issues in luci, go to that node, check the cluster.conf file, correct it, and start/restart the cman service.
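A minimal sketch of restarting cman and verifying cluster membership from a node (cman_tool ships with the cman package):

```shell
# Restart the cluster manager after correcting cluster.conf
service cman restart
# Both nodes should be listed with status M (member)
cman_tool nodes
# Quorum and vote details
cman_tool status
```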
Add resources to your cluster
1. Click on Cluster/Resources.
2. Choose the IP address resource and add an unused IP address in your subnet. We will choose 192.168.1.204. Check the Monitor link box and leave the number of seconds at the default.
3. Add a file system resource. Choose the mpath0 partition we created in the Setting up Multipathing on Linux section. Set the name to mpath0p1, the filesystem to ext3, the mount point to /var/www/html, and the device to /dev/mapper/mpath0p1. Check the boxes for force unmount, reboot host node if unmount fails, and check file system before mounting.
4. Add a Script resource to restart apache. In the name field put httpd, and in the full path to script file put /etc/init.d/httpd.
Failover domain
1. Click Cluster/Failover Domains.
2. Add a Failover Domain.
3. Name it node2.
4. Check Prioritized, Restrict failover to this domain's members, and Do not fail back services in this domain.
5. Check both boxes for members and change priority for node2 to 2.
Fence Devices
1. Click Clustering/Shared Fence Devices.
2. Add a Fence Device.
3. Choose the type of device you have. In our case we will choose Virtual Machine Fencing.
4. Give it a name and submit.
Setup apache directory for cluster
[root@node1 www]# mount /dev/mapper/mpath0<TAB>
mpath0 mpath0p1
[root@node1 www]# mount /dev/mapper/mpath0p1 /var/www/html
[root@node1 www]# vi /var/www/html/index.html

Put something in this file.

[root@node1 www]# /etc/init.d/httpd restart
Stopping httpd: [FAILED]
Starting httpd: httpd: apr_sockaddr_info_get() failed for node1.localdomain
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[ OK ]
[root@node1 www]#
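The apr_sockaddr_info_get() warning above means node1 cannot resolve its own fully qualified hostname; apache still starts, but the warning can be silenced. One sketch, assuming the node's FQDN is node1.localdomain (either option alone is enough):

```shell
# Option 1: make the FQDN resolvable in /etc/hosts
echo "192.168.1.202 node1.localdomain node1" >> /etc/hosts
# Option 2: set ServerName explicitly in the apache config
echo "ServerName node1.localdomain" >> /etc/httpd/conf/httpd.conf
```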
Stop the manually started httpd service
[root@node1 www]# /etc/init.d/httpd stop
Stopping httpd: [ OK ]
[root@node1 www]# umount /var/www/html/
Make sure rgmanager is running on both nodes
[root@node2 cluster]# /etc/init.d/rgmanager start
Starting Cluster Service Manager: [ OK ]
[root@node2 init.d]# chkconfig rgmanager on
Unlock resource group
[root@node1 init.d]# clusvcadm -l
Resource groups locked
[root@node1 init.d]# clusvcadm -u
Resource groups unlocked
Create Cluster Service
1. Click on Cluster/mycluster/Service.
2. Name is mywebservice.
3. Check boxes to automatically start the service and run exclusive.
4. Failover domain node2.
5. Recover policy relocate.
6. Submit.
Add resources to Service
1. Click on mywebservice and add resource to service.
2. Choose the three resources we created, ip, mpath0p1 and script.
3. Save
Check your cluster
[root@node1 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:35:19 2013
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
node1 1 Online, Local, rgmanager
node2 2 Online, rgmanager

Service Name Owner (Last) State
------- ---- ----- ------ -----
service:mywebservice node1 started
Check your Webpage
1. Check http://192.168.1.204/
Test the cluster
[root@node1 init.d]# /etc/init.d/httpd status
httpd (pid 17150) is running...

[root@node1 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:48:23 2013
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
node1 1 Online, Local, rgmanager
node2 2 Online, rgmanager

Service Name Owner (Last) State
------- ---- ----- ------ -----
service:mywebservice node1 started

[root@node2 init.d]# /etc/init.d/httpd status
httpd is stopped
We can see that apache is running on node1 only. We will now reboot node1 and check whether the service starts on the second node.
[root@node2 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:49:49 2013
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
node1 1 Online
node2 2 Online, Local, rgmanager

Service Name Owner (Last) State
------- ---- ----- ------ -----
service:mywebservice (node1) stopped
[root@node2 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:49:52 2013
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
node1 1 Online
node2 2 Online, Local, rgmanager

Service Name Owner (Last) State
------- ---- ----- ------ -----
service:mywebservice node2 starting
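Rebooting the node is one way to test failover; clusvcadm can also relocate the service without a reboot. A sketch using the service and node names from this setup:

```shell
# Move mywebservice to node2, then confirm the new owner
clusvcadm -r mywebservice -m node2
clustat
```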
Check Virtual ip, mount and httpd on node2
[root@node2 init.d]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 08:00:27:4e:27:55 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.201/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.204/24 scope global secondary eth0
inet6 fe80::a00:27ff:fe4e:2755/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 08:00:27:af:27:89 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.2/24 brd 10.0.2.255 scope global eth1
inet6 fe80::a00:27ff:feaf:2789/64 scope link
valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0


[root@node2 init.d]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
19G 2.4G 16G 14% /
/dev/sda1 99M 13M 82M 14% /boot
tmpfs 249M 0 249M 0% /dev/shm
/dev/hdc 4.1G 4.1G 0 100% /mnt/cdrom
/dev/mapper/mpath0p1 4.7G 138M 4.4G 4% /var/www/html


[root@node2 init.d]# /etc/init.d/httpd status
httpd (pid 16372) is running...
DLM and CLVMd
Cluster-wide locking for shared storage is provided by DLM and used by CLVMd (the clustered LVM daemon).
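To actually use CLVMd, LVM must be switched to cluster-wide locking. A sketch on a RHEL 5 node (lvmconf simply sets locking_type = 3 in /etc/lvm/lvm.conf):

```shell
# Enable cluster locking for LVM
lvmconf --enable-cluster
# Start clvmd and make it persistent across reboots
service clvmd start
chkconfig clvmd on
```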
GFS with clustering
Please read GFS2 with cluster to understand GFS.
