
I am trying to implement a two-node Red Hat HA cluster. The following is my environment.

VMware Workstation 10.01

  1. Node-1 >> CentOS-6.3 x86_64
  2. Node-2 >> CentOS-6.3 x86_64
  3. Node-3 >> CentOS-6.3 x86_64 [ Luci ]
  4. Openfileresa-2.99.1-x86_64

I have set up the cluster successfully and all the services are running fine on the Luci server and the nodes. The iSCSI target and initiator are also working fine. The problem is that the drive names don't persist after a reboot of any particular node, which breaks fail-over in the cluster. After two days of intensive online research I have done all that I can from the following links, but I am still stuck with this disk naming issue.

pubs.vmware.com/workstation-10/index.jsp?topic=%2Fcom.vmware.ws.using.doc%2FGUID-E601BE81-59B5-4B6C-BD96-2E1F41CBBDB2.html

http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/sect-Virtualization-Tips_and_tricks-Configuring_LUN_Persistence.html

http://jablonskis.org/2011/persistent-iscsi-lun-device-name/index.html

P.S.: I am using a single-path setup with no fencing mechanism, as VMware doesn't support that.

I have used udev rules and assigned the UUID. I have added the following rule:

KERNEL=="sd[a-z]", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id -g -u -d /dev/$name", RESULT=="14f504e46494c45526f416b7a4b4e2d4176584a2d45763153", NAME="webcl"
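For reference, the RESULT value in that rule is simply what scsi_id reports for the LUN. Assuming the LUN is currently visible as /dev/sdc (major 8, minor 32), it can be verified with:

/sbin/scsi_id -g -u -d /dev/sdc

which should print 14f504e46494c45526f416b7a4b4e2d4176584a2d45763153.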

Now when I run "fdisk -l", the disk for the LUN is not even displayed in the list, even though the device name "webcl" appears under /dev:

[root@node1 dev]# ls -l webcl

brw-rw---- 1 root disk 8, 32 Sep 30 22:25 webcl
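(As far as I understand, "fdisk -l" with no arguments only probes the standard kernel names it finds in /proc/partitions, so a device node renamed by udev would have to be examined explicitly, e.g. "fdisk -l /dev/webcl".)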

Thomas M
  • So did you implement LUN persistence through udev rules using the UUID of the device as specified in your URLs? If so which part did not work? Please show examples. – geedoubleya Sep 30 '14 at 15:18
  • Exactly. I have used the udev rules and assigned the UUID. I have added the following rule; KERNEL=="sd[a-z]", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id -g -u -d /dev/$name", RESULT=="14f504e46494c45526f416b7a4b4e2d4176584a2d45763153", NAME="webcl" – Thomas M Sep 30 '14 at 15:39
  • Did you test this with the `udevadm test /dev/sdxx` to see if it is picking up the UUID? Please edit the original question with useful answers rather than adding comments. (if allowed) – geedoubleya Sep 30 '14 at 15:56
  • The test gives me an error. "unable to open device '/sys/dev/webcl'" but it gives the same result for my root as well "unable to open device '/sys/dev/sda2'" :-( – Thomas M Sep 30 '14 at 17:22
  • I don't think you're helping yourself here by using VMWare Workstation - you *can* do this sort of thing but really you should be using ESXi for this instead, I think it'd go much smoother. – Chopper3 Oct 01 '14 at 10:17

1 Answer


Instead of using udev rules to maintain the name, you have a couple of options:

With the iscsi target, you should be able to use the WWID instead, via the /dev/disk/by-id/scsi-.... path.
If you list the contents of that directory, a symbolic link should exist pointing to the relevant iscsi disk (e.g. /dev/sda2). This by-id path will not change even if the underlying device name does.
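For example, with the WWID from your udev rule the entry would look something like this (the device it points to is illustrative):

ls -l /dev/disk/by-id/ | grep scsi
scsi-14f504e46494c45526f416b7a4b4e2d4176584a2d45763153 -> ../../sdc

You would then reference /dev/disk/by-id/scsi-14f504e46494c45526f416b7a4b4e2d4176584a2d45763153 (or a -part1 style suffix for a partition) wherever a stable device path is required.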

Alternatively you could use the clustered logical volume manager clvmd to manage the disk, as the UUID is used in the clvm config.
To enable this, install and enable clvmd on both nodes, then do the following to bring the disk under clvmd control.

Initialise the disk:
pvcreate /dev/sda2

Run pvscan on the other node(s).

Create the volume group encapsulating the disk (change the name):
vgcreate iscsi_cvg /dev/sda2

Create the logical volume using the entire volume group:
lvcreate -l 100%FREE -n iscsishareddisk iscsi_cvg

Run lvscan on the other node(s)

Create the file system:
mkfs.ext4 /dev/iscsi_cvg/iscsishareddisk

On both nodes create the mount directory and test that the volume can be mounted and unmounted separately.
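For example (the mount point is just an example):

mkdir -p /mnt/iscsishared
mount /dev/iscsi_cvg/iscsishareddisk /mnt/iscsishared
umount /mnt/iscsishared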

Ensure the cluster flag is set in volume group with the vgs command (last attribute will be c)

To enable this if missing:
vgchange -cy iscsi_cvg --config 'global {locking_type = 3}'
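To confirm, run vgs again; the output should look something like this (sizes are illustrative), with the trailing c in the Attr column indicating the clustered flag:

vgs iscsi_cvg
  VG        #PV #LV #SN Attr   VSize VFree
  iscsi_cvg   1   1   0 wz--nc 4.00g    0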

Ensure the locking_type is set to 3 in /etc/lvm/lvm.conf.
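That is, the relevant line in /etc/lvm/lvm.conf on both nodes should read:

locking_type = 3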

This clustered volume can then be referenced in your cluster.conf.
Before adding it into the cluster configuration ensure the logical volume is no longer active:
lvchange -an iscsi_cvg
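As a rough sketch only (the resource name, service name and mount point below are placeholders, not taken from your setup), the fs resource in cluster.conf would look something like:

<rm>
    <resources>
        <fs name="iscsifs" device="/dev/iscsi_cvg/iscsishareddisk" mountpoint="/mnt/iscsishared" fstype="ext4" force_unmount="1"/>
    </resources>
    <service autostart="1" name="websvc" recovery="relocate">
        <fs ref="iscsifs"/>
    </service>
</rm>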

A very useful article on iscsi targets in a clustered environment is HERE.
Just ignore the multipathing if you want to stick to a single path solution.

geedoubleya
  • I have created a logical volume, created filesystem, tested it by mounting and unmounting etc. as per the steps you have provided. Yet my cluster simply refuses to detect it. rgmanager [fs] stop: Could not match /dev/mapper/iscsi_web-iscsi_lvp1 with a real device – Thomas M Oct 01 '14 at 15:43
  • It sounds like you may be having problems with the iscsi target. What does `ls -l /var/lib/iscsi/nodes/` show? I have added a very useful URL to the end of my `Answer`, which goes into detail about iscsi targets in the redhat cluster. – geedoubleya Oct 01 '14 at 15:54
  • Sorry, it was a typo from my end. I just corrected it and it is fine now. Thanks a lot mate. :-) – Thomas M Oct 01 '14 at 16:04
  • No problem, if this was a useful answer then please up vote. – geedoubleya Oct 01 '14 at 16:08
  • Sorry, I tried, but they say I need 15 reputations to do so. Well I will surely do it once I acquire 15. Thanks again. :) – Thomas M Oct 01 '14 at 16:36