Mount the Same Partition on Different Servers (Using a Cluster)

In this tutorial, I will show you how to mount the same partition on different servers. We will use the RedHat Cluster Suite, which is available in the CentOS repository, with the GFS2 file system. The server architecture is as follows: one Openfiler file server exporting an iSCSI target, and two cluster nodes (node1 and node2) that both mount it.

The file server runs on Openfiler, and we will use the iSCSI initiator to mount the disk on both nodes. I assume you already have one partition called ‘data’ created in Openfiler as an iSCSI target. If you do not have one, you can refer to this post: Create iSCSI Target in OpenFiler.

1. Make sure /etc/hosts on all servers contains the following entries:

192.168.100.1   openfiler.cluster.local openfiler
192.168.100.11  node1.cluster.local    node1
192.168.100.12  node2.cluster.local    node2
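
To verify that name resolution works, you can ping each host by name from every node (a simple sanity check, not strictly required):

$ ping -c 1 openfiler.cluster.local
$ ping -c 1 node2.cluster.local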

2. Since node1 will be the head node of the cluster, we need to install the following package groups via yum:

$ yum groupinstall -y "High Availability" "High Availability Management" "Resilient Storage"
$ yum install -y iscsi-initiator-utils openssl

3. On node2, we need to install the same packages except for the “High Availability Management” group:

$ yum groupinstall -y "High Availability" "Resilient Storage"
$ yum install -y iscsi-initiator-utils openssl

4. We need to allow certain ports so all nodes can communicate correctly. These cover ricci (TCP 11111), dlm (TCP 21064), modclusterd (TCP 16851), luci (TCP 8084), and cman/corosync (UDP 5404-5405). Add the following lines to /etc/sysconfig/iptables before any REJECT line (usually before the last 3 lines) on both nodes:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 11111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 21064 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 16851 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8084 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 5404 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 5405 -j ACCEPT

Restart and save the iptables rules:

$ service iptables restart
$ service iptables save

5. Make sure SELinux is disabled. Switch it to permissive mode for the current session, then edit /etc/sysconfig/selinux and change the following value so it stays disabled after a reboot:

$ setenforce 0

SELINUX=disabled
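
You can confirm the runtime mode with getenforce; after setenforce 0 it should report Permissive (and Disabled after a reboot):

$ getenforce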

6. Let's start all the required services to run the cluster on node1 and node2. Luci should only be started on node1 because it is the head node:

$ chkconfig luci on    # only on node1
$ service luci start   # only on node1
$ chkconfig ricci on
$ service ricci start
$ chkconfig iscsi on
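
You can double-check that the services registered correctly (a quick sanity check on each node):

$ chkconfig --list ricci
$ service ricci status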

7. We need to set a password for the ricci user so luci can communicate with the cluster nodes. Set it on both nodes; in this case, I will use the same password as the root password for ricci:

$ passwd ricci

8. Let's create the cluster using the RedHat Cluster Suite. Access the luci web management portal at https://192.168.100.11:8084 and log in with the root username and password. After logging in, go to Manage Clusters > Create and enter the cluster name (FileStorage in this tutorial) together with both nodes' information.
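
If you prefer the command line over the web UI, the same cluster can be created with the ccs tool, which talks to ricci on each node (a sketch assuming the ccs package is installed and the ricci password from step 7 is set):

$ ccs -h node1.cluster.local --createcluster FileStorage
$ ccs -h node1.cluster.local --addnode node1.cluster.local
$ ccs -h node1.cluster.local --addnode node2.cluster.local
$ ccs -h node1.cluster.local --sync --activate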

9. Wait a while for the cluster suite to initialize on both nodes. After that, you should see the FileStorage indicator turn green, and you can list all the cluster members.
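
You can also check cluster membership from the shell on either node:

$ clustat
$ cman_tool nodes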

10. The cluster setup is complete. Now we need to initialize iSCSI on node1. To discover iSCSI targets, run the following command:

$ iscsiadm -m discovery -t sendtargets -p openfiler.cluster.local
Starting iscsid:                                           [  OK  ]
192.168.100.1:3260,1 iqn.2006-01.local.cluster.openfiler:data

11. If you see a result like the one above, it means we can see and connect to the iSCSI target. We just need one more restart of the iSCSI service so we can access the target:

$ service iscsi restart
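
To confirm that the session is established, list the active iSCSI sessions:

$ iscsiadm -m session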

12. On this server, I found a new disk, /dev/sdb. This is the iSCSI disk discovered previously. We need to create one partition, /dev/sdb1, on this disk:

$ fdisk /dev/sdb

Sequence during fdisk: n > p > 1 > enter > enter > w
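
To confirm that the new partition exists, list the partition table:

$ fdisk -l /dev/sdb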

13. Now we need to format the partition with the GFS2 file system. The -t flag takes clustername:fsname, and the cluster name must match the one created in step 8 (FileStorage); -j 4 creates four journals, and since GFS2 needs one journal per node that mounts the file system, this leaves room for two more nodes. Command as below:

$ mkfs.gfs2 -p lock_dlm -t FileStorage:data -j 4 /dev/sdb1
Are you sure you want to proceed? [y/n] y
 
Device:                    /dev/sdb1
Blocksize:                 4096
Device Size                47.66 GB (12492796 blocks)
Filesystem Size:           47.66 GB (12492794 blocks)
Journals:                  4
Resource Groups:           191
Locking Protocol:          "lock_dlm"
Lock Table:                "FileStorage:data"
UUID:                      1A018632-7752-DAC0-DCEC-8C27E60C47E7
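
If you need the UUID again later (for example for the fstab entry in step 15), blkid will print it:

$ blkid /dev/sdb1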

14. Let's create the mount point:

$ mkdir -p /storage/data

15. Edit /etc/fstab by adding the following line with the UUID, then mount the GFS2 partition:

UUID=1A018632-7752-DAC0-DCEC-8C27E60C47E7 /storage/data gfs2 noatime,nodiratime  0 0

Mount the file system via the GFS2 init service:

$ chkconfig gfs2 on
$ service gfs2 start
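
Verify that the file system is mounted on the correct mount point:

$ mount | grep /storage/data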

16. Now on node2, we just need to discover the iSCSI target as in steps 10 and 11, and afterwards mount the device as in steps 14 and 15.
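
As a recap, the node2 side boils down to the following (assuming the same iSCSI target and the UUID reported by mkfs.gfs2 above):

$ iscsiadm -m discovery -t sendtargets -p openfiler.cluster.local
$ service iscsi restart
$ mkdir -p /storage/data
# add the same UUID line to /etc/fstab as in step 15, then:
$ chkconfig gfs2 on
$ service gfs2 start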

Done! You can now see that both directories (/storage/data) are kept in sync. Any new file created in this directory will appear on both servers. You can use this simple cluster setup as the root directory of an FTP server, or for NFS or Samba file sharing. If node1 goes down, node2 is available to take over the task. You can also easily add node3 or additional servers as front-ends.
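
A quick way to test: create a file on node1 and read it back on node2:

$ echo "hello from node1" > /storage/data/test.txt   # on node1
$ cat /storage/data/test.txt                         # on node2
hello from node1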

If I have time, I will show you how to do auto-failover using the RedHat Cluster Suite. Cheers!