I need to attach a Fibre Channel SAN disk to a virtual machine that resides on an active/passive cluster (Pacemaker, Corosync). As it is the backup storage, corruption must be strictly avoided. Even with fencing in place, is it enough to trust that the VM only ever runs on one node at a time, or should I, for example, partition the space and create a DRBD device on top?
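For context, this is roughly the kind of setup I mean, sketched in `pcs` syntax. Node names, the iLO address, credentials, and the domain XML path are placeholders, not my real values:

```shell
# Sketch only: a STONITH device per node, plus the VM as a
# cluster resource. Hostnames and credentials are placeholders.
pcs stonith create fence-node1 fence_ilo4 \
    pcmk_host_list="node1" ipaddr="10.0.0.11" \
    login="fenceuser" passwd="secret" \
    op monitor interval=60s

# The VM runs as a VirtualDomain resource, so Pacemaker is
# supposed to ensure it is active on only one node at a time.
pcs resource create backup-vm VirtualDomain \
    hypervisor="qemu:///system" \
    config="/etc/libvirt/qemu/backup-vm.xml" \
    migration_transport=ssh \
    meta allow-migrate=true
```

The question is whether this single-active guarantee is enough to hand the VM the raw SAN LUN directly.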
- "[...] or should I for example partition the space and create a DRBD device on top?": Could you elaborate on that? – gxx Apr 21 '16 at 09:27
- Thanks for your comment! I would create two partitions, one for each cluster node, and replicate the data through DRBD, so that the data is synced onto the passive node. – alexander belov Apr 21 '16 at 15:03
- So, for DRBD, would you use local storage, or still the SAN (and partition it)? Actually, what I wanted to say: just using DRBD doesn't prevent data corruption. To be somewhat on the "safe side" you would also have to ensure that the network links etc. are redundant. Additional question: which SAN are you using? – gxx Apr 21 '16 at 15:43
- Is this HP equipment? – ewwhite Apr 21 '16 at 23:33
- It's an LSI Engenio storage. @gf_ The cluster works fine and the VM already runs on local DRBD disks. The problem is that the cluster nodes need to access the new storage too, for backup reasons. So I would have exported it from the VM to the nodes via NFS. – alexander belov Apr 22 '16 at 07:18
1 Answer
If you are fencing hosts with, for example, IPMI, and one node gets shot in the head while your backup DB is running, then it's very possible that you will corrupt the database, unless it lives on a fault-tolerant partition that the other host can pick up. Can't you use something like Galera across the Corosync cluster?
https://mariadb.com/blog/how-make-maxscale-high-available-corosyncpacemaker
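As a rough sketch of the Galera route: each MariaDB node only needs a few `wsrep` settings, and the first node is bootstrapped separately. The cluster name, node names, and addresses below are placeholders, not taken from the question:

```shell
# Hypothetical Galera config fragment, written per node.
cat > /etc/my.cnf.d/galera.cnf <<'EOF'
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name=backup-cluster
wsrep_cluster_address=gcomm://node1,node2,node3
wsrep_node_name=node1          # adjust per node
wsrep_node_address=10.0.0.11   # adjust per node
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF

# Bootstrap the first node, then start the others normally:
galera_new_cluster          # on node1 only
systemctl start mariadb     # on node2 and node3
```

With replication at the database layer, a fenced node loses nothing that the surviving nodes don't already have.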

Sum1sAdmin
- It would just be a raw device that is passed to the VM as a disk. The virtual machine runs as a cluster resource on the Pacemaker cluster and is migrated along with other services, so it should never run on both nodes (iLO 4 fencing). The purpose of the cluster is to run an oVirt Engine as a VM, which controls other hosts. But I need to pass the SAN to it, so that the oVirt Manager can use it as the export domain. – alexander belov Apr 21 '16 at 15:10
- The oVirt Engine is a regular KVM guest, so I could just pass the SAN disk to it. As the VM only ever runs once, I thought it could be done without data corruption? – alexander belov Apr 21 '16 at 15:28
- The last time I did this was active/active over a global filesystem, but without virtualization. I think there is maybe a little risk: the VM will have unflushed cache when the host is fenced. – Sum1sAdmin Apr 21 '16 at 15:42
- Ah, so I could create a GFS2 filesystem on the disk and then let the two cluster nodes and the VM access it? Because otherwise I would have exported the disk from the VM to the cluster nodes via NFS. – alexander belov Apr 22 '16 at 06:28
- Yeah, for shared concurrent access. It might be overkill for a single DB engine, but you could add the DB service to Corosync. I think you still need a small raw disk for quorum, and it's easiest with three hosts. – Sum1sAdmin Apr 22 '16 at 09:41
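For the GFS2 route discussed above, a rough sketch of the usual steps, assuming a RHEL/CentOS-style stack where `dlm` and clustered LVM run as Pacemaker clones. The cluster name, device path, and mount point are placeholders:

```shell
# dlm and clustered LVM must run on every node that mounts GFS2.
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone

# One journal per mounter (-j 3 here: two cluster nodes plus the VM).
# -t takes <clustername>:<fsname>; "mycluster" is a placeholder.
mkfs.gfs2 -p lock_dlm -t mycluster:backup -j 3 /dev/sdX

mount -t gfs2 /dev/sdX /mnt/backup
```

Note that letting the VM itself mount the GFS2 volume means the guest must also be a member of the lock manager's cluster, which adds complexity compared with the NFS export mentioned earlier.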