I'm looking for some advice on setting up multi-host I/O access on our SAN:
I have a blade enclosure (PowerEdge M1000e) containing an EqualLogic PS-M4110 storage blade with a single RAID6 volume, currently formatted as ext4.
This is connected via iSCSI to one of the other blades (all running Ubuntu Server 14.04) and mounted there as a standard drive.
Now I am trying to connect another of the blades in the enclosure to the SAN in a way that allows multi-host I/O.
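For reference, this is roughly what I'd run on the second blade to attach it with open-iscsi (the group IP and target IQN below are placeholders, not our real values):

```shell
# Discover targets advertised on the array's group IP (placeholder address)
iscsiadm -m discovery -t sendtargets -p 10.10.10.10:3260

# Log in to the volume's target (placeholder EqualLogic-style IQN)
iscsiadm -m node \
    -T iqn.2001-05.com.equallogic:0-8a0906-xxxxxxxxx-vol1 \
    -p 10.10.10.10:3260 --login
```

As I understand it, the volume's access policy on the array also needs "allow simultaneous connections from initiators with different IQNs" (or equivalent) enabled before a second host can log in at all.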
I'd prefer to avoid the obvious solution, NFS, because some of the slightly questionably coded tools we use have a habit of crashing and burning when doing heavy I/O to NFS. This is particularly problematic as these tools take weeks to run and don't have many opportunities to checkpoint (have you guessed this is an academic environment yet?).
However, everything plays nicely with the current iSCSI set-up. So I was leaning towards a cluster-aware or distributed file system over iSCSI as the best option, but I'm worried about split-brain issues etc., as we only have one node.
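To make the idea concrete: if something like OCFS2 were the answer, I imagine the setup on 14.04 would look roughly like this (device path, label, and slot count are placeholders, and the existing ext4 data would obviously need migrating off first):

```shell
# Install the OCFS2 userspace tools (stock Ubuntu 14.04 package)
apt-get install ocfs2-tools

# Format the shared iSCSI volume ONCE, from one node only,
# reserving slots for up to 4 concurrent nodes (placeholder device)
mkfs.ocfs2 -N 4 -L shared-vol /dev/sdb

# Bring the o2cb cluster stack online -- this needs
# /etc/ocfs2/cluster.conf listing every participating blade --
# then mount the volume on each node
service o2cb online
mount -t ocfs2 /dev/sdb /mnt/shared
```

Is that broadly the right shape of solution, or is the fencing/quorum side of this where my "only one node" worry actually bites?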
1) Is any of the above remotely sane?
2) Do you have any recommendations for which file system to use (FOSS and Linux-compatible preferred)?